Christina's LIS Rant
Sunday, March 29, 2009
  Comps readings this week
Hertzum, M. (2002). The importance of trust in software engineers’ assessment and choice of information sources. Information and Organization, 12, 1-18.
Not a hundred percent sure this lives up to its title. This was a study of about 11 people on a software project - and based on conversations I've heard on software engineering where I work (with people who do know about this sort of thing), this was software development and not truly engineering. Anyway, this article digs into trustworthiness and adds that it's also important for the information seeker to have enough first-hand knowledge of a source to judge whether it's trustworthy. The study reports the results of observations of 16 meetings, 11 interviews, and a review of project documentation. Cost also turns out to matter very little compared to quality as a selection criterion.

Bates, M. J. (1990). Where should the person stop and the information search interface start? Information Processing & Management, 26, 575-591. (Looks like it's also available here: http://www.gseis.ucla.edu/faculty/bates/searchinterface.pdf, but I have a photocopy of the journal pages.)
I am such a big Bates fan girl. She starts with
Much of the advanced research and development of automated information retrieval systems to date has been done with the implicit or explicit goal of eventually automating every part of the process... An unspoken assumption seems to be that if a part of the information search process is not automated, it is only because we have not yet figured out how to automate it... The implicit assumption in much information retrieval (IR) system design is that the system (and behind that, the system designer) knows best... the system controls the pace and direction of the search... but not all searchers want that kind of response from an information system.
Eureka - exploratory search, anomalous states of knowledge, HCIR, human information interaction... yes, of course! I think this is why engineers *love* Engineering Village's facets - they give them a view into how the system interprets their query, teach them what it knows, and give them power to control it.
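As an aside on why facets do that: the mechanics are dead simple. Here's a minimal sketch (my own illustration - the records and field names are invented, not Engineering Village's actual API) of counting facet values over a result set so the searcher can see, and steer, the system's interpretation of their query:

```python
from collections import Counter

# Toy result set - in a real system these records would come back from
# the query engine; the fields here are invented for illustration.
results = [
    {"controlled_term": "fatigue", "doctype": "journal", "year": 2007},
    {"controlled_term": "welding", "doctype": "conference", "year": 2008},
    {"controlled_term": "fatigue", "doctype": "journal", "year": 2006},
]

def facet_counts(records, field):
    """Count how many results carry each value of one metadata field."""
    return Counter(r[field] for r in records)

# Show the searcher how the system "sees" the result set...
for field in ("controlled_term", "doctype", "year"):
    print(field, dict(facet_counts(results, field)))

# ...and let them narrow the set by clicking a facet value.
narrowed = [r for r in results if r["controlled_term"] == "fatigue"]
```

The interesting part isn't the counting - it's that displaying the counts hands the searcher exactly the kind of control Bates is asking for.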

Do systems provide any more support for strategy than they did? Any more strategic advice? I'm thinking no.


Bates, M. (1989) The design of browsing and berrypicking techniques for the online search interface. Online Review, 13, 407-423.
At some point, I looked, and I had been assigned this article in like 5 different classes. Not only do I have a copy in every binder, but we actually had this journal in a bound volume in our collection. I was so close to cutting the article out with an X-Acto knife before the volume went in recycling, but I chickened out. This article pulls together evidence from a bunch of places to show that information behavior is frequently an evolving process, with learning, refocusing, and changing of strategies along the way. Different strategies might be following citations backward or forward, running through a journal that appears to have good stuff, browsing a place on the shelf, looking for more from a particular author, etc. Apparently, way back, Garfield had a hard time convincing librarians and others that people wanted to follow citations - librarians thought subject access was all people needed. She argues that systems should enable searchers to do all of these different types of searches.
Digital libraries should allow readers to jump back and forth between the article text and the references, see the section headings in advance and be able to jump directly to them (like to methods or conclusions), see what cites the article, browse journal tables of contents, and browse classification schemes and jump up, down, and across the hierarchy.
It's interesting to think about which journal platforms do this stuff well (and which don't do it at all). She also talks about trying to make an equivalent for flipping through a book and reading random passages to see if you like the author's style - Google Books allows this, but proprietary ebook systems typically do not. She also says that systems should allow the user to take notes, highlight, and clip interesting pieces to save for later and use offline (or outside the source)... some do this, but not as well. The articles on digital libraries by Soergel still suggest these things years later, and yet we don't always have them.

Started re-reading Rogers, E. M. (2003). Diffusion of innovations. 5th ed. New York: Simon & Schuster.
Excellent book, and Rogers' stuff is so readable. Quite long, though. I think I've said this on this blog before, but the current edition is really on crappy paper with a very thin cover - like newsprint inside. I bought mine in probably 2006 and kept it out of sunlight, but it's still discoloring with age. I hope it doesn't become brittle... when I actually *buy* a new book, I expect it to last! Anyway, like the other books, this will get its own post.

Friday, March 27, 2009
  What do librarians do and how do libraries work?
Ok, I do realize that there is no way this post can live up to its title, but this is in response to some friendfeed threads (example). I suppose I can't keep giving people crap for not knowing what librarians do and how libraries work if I'm not willing to explain. I know quite a bit about how public libraries work and next to nothing about school libraries, so I'm really going to talk mostly about research libraries, because that's where I live and the people asking the question are researchers. Most research libraries are in universities, but there are other research organizations like federal and corporate labs, hospitals, etc. I lean toward how universities do things, because I was only in a government library for 4 months, and both government and company libraries have some unique restrictions.

So where to start? Libraries connect people to information. Librarians touch every bit of this by:

That stuff describes university libraries - my job differs a bit because each of us does quite a few things that would be handled by 5 different people at a big library. Also, I'm "embedded", I do in-depth literature searching, and I'm involved in enterprise-wide initiatives regarding collaboration, enterprise search, and knowledge sharing.

Embedded means I'm actually part of the team. There might be a chemist, a mechanical engineer, a mathematician, and me. Whenever something comes up that requires finding or organizing or presenting information, I take the lead. In-depth literature searching might mean someone presents a problem and asks me to compile, organize, and sort of summarize the literature in that area. They get the annotated bibliography I provide, tell me what they want in full text, I fork that over, and then they make the world a better place. I provide value because I'm an expert searcher and I understand a lot about the context of the organization and our sponsors. The scientists are so busy that anything they can offload to me helps. Once I grok what they need, I'm more efficient at finding things, too. And I charge my time back to the sponsor.

So, if you're a librarian, please fix what I screwed up (or, oh dear, tell me what I missed)... if you are a library user (or SHOULD be but aren't), tell me what more you need to know.

Update: I forgot ILL! Holy cow... added above
 
Sunday, March 22, 2009
  Comps readings this week
(no readings last week due to family emergency, readings will probably be light again this week)

Leckie, G. J., Pettigrew, K. E., & Sylvain, C. (1996). Modeling the Information Seeking of Professionals: A General Model Derived from Research on Engineers, Health Care Professionals, and Lawyers. Library Quarterly, 66(2), 161-193.
Well-written, concise reviews of studies of professionals' (engineers', health care providers', and lawyers') information behaviors. Professionals are defined as those providing a service, with a heavy-duty theoretical knowledge base, extensive post-secondary education, etc. This does not include scholars or scientists (they produce knowledge vs. provide services). Not sure how frequently people use their model, but it looks good.

Constant, D., Sproull, L., & Kiesler, S. (1996). The Kindness of Strangers: The Usefulness of Electronic Weak Ties for Technical Advice. Organization Science, 7(2), 119-135.
Compare to Wasko & Faraj (2005) and Hew & Hara (2007), read in a previous week. I need to go through these more carefully and pick out the similarities and differences. (btw, this article doesn't seem dated - sure, e-mail is used differently now - but it's still very useful.) This research was done in a large multinational computer company. The company had 3 priority settings for e-mail, and one of these was frequently used to ask for help. Responses to requests were often compiled and posted publicly as a sort of knowledge base. The authors wanted to know why responders took the time when they didn't know the person asking the question and there couldn't be any direct reciprocity. Other questions were about the usefulness of the responses, the diversity of the responders, and the motivation of the responders. They sent surveys to question askers and to responders and had the askers rate the responses on usefulness. Weak ties with greater resources (more senior, etc.) gave more useful responses; the sheer number of replies didn't help. This way of asking and answering questions had been in place in the company for a long time, and there was a culture of sharing information this way. Responders' motivation was less personal benefit and more the good of the whole organization.

Ellis, D. (1993). Modeling the information-seeking patterns of academic researchers: A grounded theory approach. Library Quarterly, 63, 469-486.
This is the Ellis on my list, but I'm thinking I probably actually wanted another one (maybe his JoIS piece from around the same time?). He gives an overview of how grounded theory works and why it was important for looking at information seeking - something that had previously been studied using structured questionnaires and quantitative methods. He then compares the findings from his dissertation (a massive effort with interviews of 48 social scientists) with similar studies of physicists (18), chemists (18), and English lit researchers (10). All of these used grounded theory, so the terms differ somewhat, but all basically found the same characteristics: starting, chaining, browsing, differentiating, monitoring, and extracting.

Ellis, D., & Haugan, M. (1997). Modeling the information seeking patterns of engineers and research scientists in an industrial environment. Journal of Documentation, 53, 384-403.
This is pretty cool - I'd forgotten some of the details. They did a ton of interviews with scientists and engineers at a large (14k employees) oil company in Norway. They broke out the results by type of project - incremental, radical, fundamental - as well as by project stage (pulled from some project management handbook or other). The categories are fairly similar to the above, but with surveying instead of starting, distinguishing instead of differentiating, filtering added, and ending added. For incremental projects, people talked to other people first, then used their own files, then the library. For radical projects, they used their own files first, then other people, then the library. For fundamental projects, they used library resources first (like lit searching in online databases), then their own experience/files - they didn't really know who to talk to.
If I ever finish (sigh) the literature review for the massive JHU libraries project we did, I'm definitely including both of these pieces by Ellis. (I call my piece of the world industrial for the most part, incidentally.)

Kling, R., & McKim, G. (2000). Not just a matter of time: Field differences and the shaping of electronic media in supporting scientific communication. Journal of the American Society for Information Science, 51, 1306-1320.
This quote is classic:
However, in the absence of a valid theory of how scholarly fields adopt and shape technology, scientists and policy makers are left only with context-free models, and hence, resources may be committed to projects that are not self sustainable, that wither, and that do not effectively improve the scientific communications system of the field. The consequences may not only be suboptimal use of financial resources, but also wasted effort on the part of individual researchers, and even data that languishes in marginal, decaying, and dead systems and formats. (p. 1307)
The more things change, the more they stay the same. I don't remember any mention of Electronic Transactions on Artificial Intelligence (ETAI)'s experimentation with open peer review in the more recent discussions around Nature's and Atmospheric Chemistry and Physics' open review trials.

Anyway, their point is that a lot of the discussions of how scientists use electronic media assume they'll converge on using the same tools in the same way - that it's an "inescapable imperative." The authors argue that differences in how fields communicate shape how and what they will use - social shaping of technology views are needed instead of relying only on information processing views. Examples at the time of writing include things like arXiv, which physicists use, PDB for molecular biology, etc. There's a big difference between online representations of print processes and products and creating a new thing that takes advantage of unique features of the online environment. (Librarians have shied away from using "electronic journals" without modifiers because in some research areas "online" has been misunderstood to mean less thorough peer review instead of just a copy in another place.)

The authors propose bases for field differences: trust and allocation of credit, research project costs, mutual visibility of on-going work, industrial integration, concentration of communication channels... In this part of the discussion, I think some other authors have stated these things more clearly, but it's still a useful article.

Fidel, R., & Green, M. (2004). The many faces of accessibility: Engineers' perception of information sources. Information Processing & Management, 40, 563-581.
It's good I'm re-reading these things. I read and cited this article in my study of the personal information management of engineers... but re-reading surfaces points that weren't salient to me at the time. At work right now we're really trying to make some headway in knowledge sharing, and one of our efforts is to improve finding experts. This study was originally about the sources engineers use to find information, but what came out of it was how complicated the notion of "accessibility" is. Lots of studies of engineers have found that they'll choose accessibility over quality, but then the studies don't really unpack what accessible means. In library terms we talk about physical access vs. intellectual access. The authors here look at a sort-of psychological version - ease of use - along with availability, physical proximity, familiarity, the right format, gathering a lot of info in one place (or being efficient, or saving time)... The authors compare what the engineers said about documentary sources with what they said about people as sources... Anyway, this is pretty interesting. There's a call at the end for more research on finding people and on supporting engineers in finding people, but the 29 citations (28 + my citation) in Scopus don't seem to address that much.

Sunday, March 08, 2009
  Comps readings this week
Hara, N., Solomon, P., Kim, S., & Sonnenwald, D.H. (2003) An emerging view of scientific collaboration: Scientists’ perspectives on factors that impact collaboration. Journal of the American Society for Information Science and Technology, 54, 952-965.
They start by saying that "scientific collaboration may be different from other varieties of collaboration as it is shaped by social norms of practice, the structure of knowledge, and the technological infrastructure of the scientific discipline" (p. 952). Seems like all professional (or even hobby) collaborations are shaped by social norms and the structure of knowledge... hm. This paper isn't as good as I remembered, but I think the problems are the lack of a clear conceptual framework guiding them going in and a sort of rambling presentation of the results... (sounds familiar)

(I'm now going to try to plow through all of the readings that happen to be stored in my binder from my 601 class on Information Use - taught by Dr. Barlow in the Spring of 2001.)
Allen, B. (1996). An introduction to user-centered information-system design. Information tasks: toward a user-centered approach to information systems (pp. 24-51). San Diego: Academic Press.
This is really an excellent reading. His ARIST article from 1991 is great, too. He has five components in his model:
  1. needs analysis
  2. task analysis
  3. resource analysis
  4. user modeling
  5. designing for usability
Do note that this emphasizes the problem-solving approach, which is just one reason people use information systems. Oh - resource analysis isn't what you think (after the recent readings I was expecting the information objects in the system) - it's a person's individual and social knowledge and abilities. Certainly the model would be incomplete without that, but the name is a bit misleading - and how's an information system to work if there's no consideration of matching the user and the user's input with representations of the system's holdings? (Oh, grr... the course pack copy didn't photocopy all of the references.)

Davenport, T. H., & Prusak, L. (1997). Information Behavior and Culture. Information Ecology: mastering the Information and Knowledge Environment (pp. 83-107). New York: Oxford University Press
Even though many management books quickly become irrelevant, this one still speaks to me. They talk about the value of information and information sharing in organizations and about information behavior (sharing, hoarding, organizing) within organizations. They also cite (but of course the course pack didn't include the citations - argh!) lots of different studies showing how organizations that do better with information are more productive and successful. The other chapter in my course pack - not read because it's not on my list - talks about the role of corporate libraries. In that section, too, they mention briefly what a big mistake it is to undervalue the library by cutting its budget and minimizing the contributions of librarians. Sigh.

Dervin, B. (1992). From the mind’s eye of the user: the sense-making qualitative-quantitative methodology. In J. Glazier & R. R. Powell (Eds.), Qualitative research in information management (pp. 61-84). Englewood, CO: Libraries Unlimited
She is really talking about a method, a methodology, a theory, and a paradigm here. If you approach certain problems by looking at the discontinuities and the helps that enabled people to bridge the gaps, you can really get some good information about information behavior and systems.

Kuhlthau, C. C. (1991). Inside the search process: Information seeking from the user's perspective. Journal of the American Society for Information Science, 42, 361-371.
I would be surprised if there's anyone who would bother reading my blog who isn't familiar with this one. Steps in the search process with affective, cognitive, physical parts...

Rogers, E. M., & Kincaid, D. L. (1981). The Convergence Model of Communication and Network Analysis. In E. M. Rogers, & D. L. Kincaid (Eds.), Communication networks: toward a new paradigm for research (pp. 31-78). New York: Free Press.
I like this one, too, because it disses the whole Shannon and Weaver thing (which I successfully kept OFF my list). Which reminds me of last year at the global STS grad student conference, listening to someone spout the Shannon and Weaver version of information as the one true path (well, maybe if you're an EE using information-theoretic models for communication systems design). Anyway, the point of communication is to come to mutual understanding. Person A has their psychological reality, Person B has theirs, and there's some physical reality. Information flows through all of these and through individual action to get to collective action, mutual agreement, mutual understanding, and then a social reality shared by A and B.

Huh, wonder why it took so long for studies of scientific popularization or public understanding of science to take up the charge. If Schramm, Rogers, Kincaid... all of that happened so long ago, and there seems to be consensus in communications about active audiences and the like... why did it basically take Wynne, Hilgartner, Myers so long to get their point across? And actually, sometimes the scientists who are trying to do the communicating still don't know about all this (and how to apply it). Hmmm.

Taylor, R. S. (1991). Information Use Environments. In B. Dervin (Ed.), Progress in Communication Sciences (pp. 217-255). Norwood, NJ: Ablex.
I keep getting things that Taylor said and Allen said confused, and this article might be one reason why: Taylor cites T. J. Allen (1977) extensively. Taylor's point is that you can construct useful groupings of users based on their common problem dimensions, settings, and what constitutes resolution of their problems, among other things. This doesn't group users - engineers, say - by demographic or other individual variables, but it could if that's how you're defining your grouping.

Williamson, K. (1998). Discovered by chance. The role of incidental information acquisition in an ecological model of information use. Library & Information Science Research, 20, 23-40.
I pulled this out again a couple of years ago when another student basically said no work had ever been done on older adults (big sigh). This article is a spin-off of her dissertation. She studied how older adults (in Australia) encounter information through telephone calls and through monitoring the media. This is information that meets a need - whether specifically identified in advance or not - and that wasn't purposefully sought. Made me think of possible research on social bookmarking - not in the school of "how do people assign categories" but more along the personal information management line... but no time.

Wilson, T. D. (1997). Information behaviour: An interdisciplinary perspective. Information Processing & Management, 33(4), 551-572.
Seems like everybody in the social sciences somehow studies information seeking behavior. This article reports some of Wilson's work looking outside of information science. He emphasizes psychology and sociology articles. Good stuff here.

From my 650 (Reference, aka information access) binder - I thought more of this binder would be on my list

Barry, C. L., & Schamber, L. (1998). Users' criteria for relevance evaluation: A cross-situational comparison. Information Processing & Management, 34(2/3), 219-236.
In this article they compare their previous studies eliciting users' relevance criteria to find the overlaps and the unique items. There was a lot in common, and the criteria that didn't overlap mostly reflected differences between the user groups and the information they were seeking.

and then this one, because it went with the others in this group and was short

Belkin, N. J. (1980). Anomalous states of knowledge as a basis for information retrieval. Canadian Journal of Information Science, 5, 133-143.
Builds on my favorite Taylor and the like and makes suggestions for design and evaluation of information retrieval systems.

EDIT: changed posting date - sorry!

Monday, March 02, 2009
  Comps reading - Little Science, Big Science... and Beyond
This one deserves its own post.

This is one of the books from a father of scientometrics and a great in the early days of STS. This re-issue of the book has a foreword by Eugene Garfield and Robert K. Merton, which tells you something!

Price, D.J.d. (1986). Little science, big science…and beyond. New York: Columbia University Press.
(I never knew how to refer to this author - some refer to him as de Solla Price and others as Price - Garfield makes a joke about this in his piece at the end, it's Price)
This book is in large part about modeling the shape, size, and distribution of science. How many scientists should a country have, and how many should be in the topmost productive group? How many scientists are cited every year, and how many publish a single paper and then never again? More authors/co-authors mean more papers and are correlated with bigger grants. What does the distribution of citations look like (why can you cover 75% of the cited articles with only 7% of the articles written)? Will science continue to grow exponentially, or will it follow more of a logistic curve and flatten off?
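To make those two shapes concrete, here's a tiny sketch (my own illustration, not from the book - the starting size, doubling time, and ceiling are all invented) comparing exponential growth with a logistic curve that matches it early on but saturates:

```python
import math

def exponential(t, n0=1000.0, doubling=15.0):
    """Exponential growth: doubles every `doubling` years, forever."""
    return n0 * 2 ** (t / doubling)

def logistic(t, cap=1_000_000.0, n0=1000.0, r=math.log(2) / 15.0):
    """Logistic growth: looks exponential early, then flattens at `cap`."""
    a = (cap - n0) / n0
    return cap / (1 + a * math.exp(-r * t))

# Early on the two are nearly indistinguishable; after a century or so
# the exponential sails past any ceiling while the logistic levels off.
for t in (0, 50, 100, 150, 200):
    print(f"t={t:3d}  exp={exponential(t):>14.0f}  logistic={logistic(t):>12.0f}")
```

Price's argument is essentially that the exponential column can't be sustained (eventually everyone would have to be a scientist), so something like the logistic column has to win.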

Some of this stuff is pretty cool and timeless, but some of it makes me uncomfortable. It's cool to use some of these guesstimate approximations based on years of ISI's data, but it says nothing about individuals or disciplines or any individual attributes (which he freely admits).

One essay that's less frequently discussed is the nice one on Sealing Wax and String - about the importance of technology to science and how technology sometimes leads science instead of lagging behind it. It's also about the importance of experimentalists and technicians, who are sometimes completely omitted from romantic discussions of scientific invention and the emphasis on the scientific method (sometimes it's "holy cow, how did that happen? better come up with an explanation" instead of theory, hypothesis, experiment, rinse, repeat).

Useful quotes:
In chapter 7, Measuring the size of science, he makes the case for scientometrics - an econometric-type view of science. Likewise he makes the case for not leaving the study of science up to the scientists:
It is the business of sociologists to be knowledgeable about things that are important to society, and it is not necessarily the business, nor does it even lie within the competence, of natural scientists to turn the tools of their trade upon themselves or to act as their own guinea pigs (p. 136)

This is interesting, though, because Price was a reformed physicist (as was Kuhn) who got a second PhD in history.

From chapter 8:
Technical librarianship involves much more than librarianship applied to books with an esoteric vocabulary and much mathematics. It is somewhat like the dilemma of the man who tried to write a book on Chinese medicine by first reading one on China and then another on medicine and then "combining his knowledge"
(But this statement doesn't really have much to do with the remainder of the chapter, which provides an overview of citation patterns - picking up from chapter 5: ephemeral vs. classic references, immediacy, research fronts, aging of the literature, etc.) There is some typical number of references per paper: too far above it and you're looking at a review paper; too far below it and you're looking at an ex cathedra pronouncement. Another main point is that you can tell the humanities, social sciences, and natural sciences apart by the percentage of citations to articles younger than 5 years. The natural sciences might be as high as 75%, where something like history of the Civil War might be more like 8%.
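That last measure got named Price's index later on, and it's trivial to compute if you have reference lists in hand - a quick sketch (the example years are invented):

```python
def prices_index(pub_year, cited_years):
    """Fraction of a paper's references that are at most 5 years old."""
    recent = sum(1 for year in cited_years if pub_year - year <= 5)
    return recent / len(cited_years)

# A hypothetical 1965 physics paper vs. a hypothetical 1965 history paper.
print(prices_index(1965, [1964, 1963, 1962, 1960, 1951]))  # 0.8 - natural-science-like
print(prices_index(1965, [1961, 1945, 1932, 1899, 1861]))  # 0.2 - humanities-like
```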

  Comps readings this week
I'm now back into re-reading, so notes will be shorter (and I'm hoping to pick up speed). In case anyone is keeping track, we're now looking at early May for the exam instead of March.

Fontana, A. & Frey, J.H. (2003). The Interview: From Structured Questions to Negotiated Text. In Denzin, N. K., & Lincoln, Y. S. (Eds.). Collecting and interpreting qualitative materials (2nd ed., pp. 61-106). Thousand Oaks, CA: Sage.
Traces, to a certain extent, the changes from the very structured, sort-of oral version of a survey to more recent, more or less guided conversations in which meaning is constructed between the interviewer and the interviewee.

Angrosino, M.V. & Mays de Pérez, K.A. (2003). Rethinking Observation: From Method to Context. In Denzin, N. K., & Lincoln, Y. S. (Eds.). Collecting and interpreting qualitative materials (2nd ed., pp. 107-154). Thousand Oaks, CA: Sage.
This is one of the few readings I have that really goes back to the early cultural anthropology version of the observer and its ideas of objectivity. It also talks a little about danger in the field (emotional as well as physical - for more on this I highly recommend the book with that title) and about being a participant-observer or active participant. One thing that strikes me now is the idea that the researcher sometimes creates community - I won't say that I did this, but I did hear from a participant that our conversation had renewed their interest in blogging and engagement in the blogging community. Something they talk frankly about, but which doesn't come up in my work, is physical relationships between researcher and participant (like Wolcott - but that just seemed wrong because of the power issues) and how gender matters (the behavior expected from either party depending on the society). They also talk about the ethical and practical issues of revealing yourself or not, or assuming a sort-of crafted social identity while in the field. Campaigning on your personal agenda isn't always wise when you're trying to learn about other people (apparently this isn't so obvious to some researchers).

When reading these things I come back to how much to reveal (and practically HOW to reveal) my participant-observer status in the science blogosphere. I'm not a scientist, but I interact with scientists and I follow and comment on science blogs - this is a good thing, and I think it helps me get to insights, but I'd like to convey that meaningfully in my writing, and I don't know how. I want to write a methods paper to disagree with some of what's in [Kazmer, M. M., & Xie, B. (2008). Qualitative interviewing in Internet studies: Playing with the media, playing with the method. Information, Communication and Society, 11(2), 115-136] regarding "bias" and communication online during the research process. If some of my results come via my participation and not my interviews or content analysis, then that needs to be traceable. The authors caution the reader not to substitute technologically recorded evidence for lived experience - not to miss the whole by concentrating on the particular as recorded on tape.

This chapter is probably most useful in its frank discussion of ethics and IRBs - particularly when the IRB is trying to make all research into psychology experiments with hypothesis testing in controlled environments.

Ryan, G.W. & Bernard, H.R. (2003). Data Management and Analysis Methods. In Denzin, N. K., & Lincoln, Y. S. (Eds.). Collecting and interpreting qualitative materials (2nd ed., pp. 259-309). Thousand Oaks, CA: Sage.
Everything is text. The linguistic tradition (narrative analysis, conversation or discourse analysis, performance analysis, formal linguistic analysis) vs. the sociological tradition - text as a window into human experience. Text can be systematically elicited or free-flowing. For elicitation there are techniques like free listing and card sorting and... hm, I need to remember to go back to Wasserman and Faust and look at their discussion of this for SNA... Anyway, this is a quick review that covers some things not found in the other readings, which only cover coding.
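Card sorting in particular reduces to very simple data handling. A minimal sketch (the items and piles are invented for illustration) that turns a few participants' piles into pairwise co-occurrence counts - the kind of similarity data you could hand off to clustering or SNA tools:

```python
from collections import Counter
from itertools import combinations

# Each participant sorts the same items into whatever piles make sense to them.
sorts = [
    [{"email", "chat"}, {"journal", "preprint"}],        # participant 1
    [{"email", "chat", "preprint"}, {"journal"}],        # participant 2
    [{"email"}, {"chat"}, {"journal", "preprint"}],      # participant 3
]

# Count how often each pair of items landed in the same pile.
cooccur = Counter()
for piles in sorts:
    for pile in piles:
        for a, b in combinations(sorted(pile), 2):
            cooccur[(a, b)] += 1

for pair, count in cooccur.most_common():
    print(pair, count)  # e.g., ('chat', 'email') 2
```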

...
Read most of chapter 2 of Baeza-Yates, R., & Ribeiro-Neto, B. (1999). Modern information retrieval. New York: ACM Press.
Wanted to see if it made any more sense to me than the chapters I put on my list - and yes, it's much clearer than Manning, C. D., Raghavan, P., & Schütze, H. (2008), but I still need to go back and hit the link analysis reading, sigh.

Read the first 3 chapters of Little Science, Big Science... and Beyond. More on that in next week's roundup.
