Christina's LIS Rant
Friday, May 22, 2009
  I've been assimilated!
I'm now blogging over at http://scienceblogs.com/christinaslisrant/. This is very exciting! There's a chance I could post things that are too rant-like here, but I expect to do basically all of my blogging there. I will maintain these archives, though. Wish me luck!

edit 8/2010: no longer at scienceblogs, now at scientopia.org/blogs/christinaslisrant
 
Wednesday, May 20, 2009
  Hey science librarians...
The folks at ScienceBlogs.com would like to know why you read ScienceBlogs (both ScienceBlogs and science blogs, probably). Feel free to e-mail the editorial office directly (editorial at scienceblogs.com), or comment here, or respond via twitter @cpikas, or on friendfeed.

Thanks!
 
Sunday, May 17, 2009
  Can we design *a* community for *scientists*?
My test essay on designing an online community has gotten some comments on FriendFeed and has been linked to from twitter (the search works at the time of writing). It's great to have this kind of quick feedback.

First, I think Deepak is absolutely right. He questions whether there is a generic type of scientist such that you can design a community for all scientists. The literature shows that you need to design a community with a clear purpose and clear policies. The participants have to have common interests to bring them together. Maybe what I meant (and didn't say!) is that these are minimal requirements for communities to be useful to people involved in science, and that additional features and functionality are required for the specific group of users or purpose for the site.

Could there be very generic tools (facebook, friendfeed), generic science-y tools, and then much more specific research-area or even specific-collaboration tools? Sure, but then we also get into overload. Where do you post this update? Check each of these or get e-mails and feeds from each of these? What happens when your contacts post the same info in facebook, twitter, and friendfeed... and it's not even their own content, but someone else's that you also follow? Kind of annoying - is there a way to deduplicate? Should there be? Maybe the content only appears once with comments and links showing from all of the various places?
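
(A hedged sketch, in Python, of what one answer to the deduplication question could look like; the services and fields here are made up for illustration. The idea: normalize each shared link to a canonical key, then group items from all the services under that key so the content appears once with every comment thread attached.)

```python
from collections import defaultdict
from urllib.parse import urlsplit, urlunsplit

def canonical(url):
    """Normalize a shared link so the same story posted to twitter,
    facebook, and friendfeed collapses to one key (lowercase the host,
    strip the query string, fragment, and trailing slash)."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc.lower(),
                       parts.path.rstrip("/"), "", ""))

# Invented example items from three services, all pointing at one story.
items = [
    {"service": "twitter",    "link": "http://Example.org/story?utm_source=tw"},
    {"service": "friendfeed", "link": "http://example.org/story/"},
    {"service": "facebook",   "link": "http://example.org/story"},
]

merged = defaultdict(list)
for item in items:
    merged[canonical(item["link"])].append(item["service"])

for link, services in merged.items():
    print(link, "<-", services)   # one entry, three sources attached
```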

What Mr. Gunn says is right, too: there has to be some up-front value to get people to even try any site. There has to be a perceived relative advantage, there have to be various types of knowledge, and there has to be a low-risk way to trial the site. I don't really agree that just being first is enough. There were other photo sharing sites before Flickr that weren't as popular; Flickr did some things that made it more useful.

Recommender systems are great - ScienceDirect's related articles feature usually works pretty well for me (ducking because I liked something from the Evil Empire - sorry Bill!). PubMed's doesn't seem to. From what I can tell, SD uses the full text whereas PubMed uses only the MeSH indexing. I'm not sure how many people have to be on Mendeley for it to really work. Individual papers might be saved for lots of different reasons, too - it's almost like you need to say why you're keeping it: same organism, good method, author from x lab, person a recommended, etc. I think there's a lot of value in collaborating and communicating around scholarly publications. I probably need to post something more thoughtful about recommender systems for articles... I have some thoughts on that, too.
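
(A toy sketch, in Python, of how a related-articles score might work: represent each article as a bag of terms and compare the term vectors with cosine similarity. The articles and terms are invented; the point is just that full text gives the matcher many more terms to overlap on than a handful of controlled-vocabulary index terms does.)

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors (Counters)."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values())) *
           math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

# Invented examples: two articles that share a lot of full-text vocabulary
# but were indexed with different (sparse) controlled-vocabulary terms.
full_a = Counter("indium tin oxide thin film deposition sputtering".split())
full_b = Counter("tin oxide thin film deposition annealing substrate".split())
index_a = Counter(["Indium Compounds", "Thin Films"])
index_b = Counter(["Tin Compounds", "Surface Properties"])

print("full-text similarity:", round(cosine(full_a, full_b), 2))    # ~0.71
print("index-term similarity:", round(cosine(index_a, index_b), 2)) # 0.0
```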
 
  Comps readings this week
I don't have time to do another test essay, what with working at the public library this afternoon, so I'm re-reading some additional pieces. Some of these came up as things I needed to check on when I was doing the last two mini essays.


Tracy, K., & Naughton, J. (1994). The identity work of questioning in intellectual discussion. Communication Monographs, 61(4), 281-302.
I like this article - as one who went through several Navy boards and, even before that, was conscious of what questions meant in physics class (being afraid of being seen as "not even wrong") - I can really identify with it. The authors looked at a series of colloquia, taped over the course of 18 months, as well as interviews with participants. The participants wanted to appear knowledgeable, but reasonably so. They knew that speakers couldn't know everything. Question askers formulated their questions to show whether they thought it was reasonable for the recipient of the question to know the information being asked. It's interesting that the questioners in these cases do a lot of repairs and modifications to their questions to make it ok for the recipient not to know the answer - this is completely the opposite of Navy situations, in which questions are phrased to build up the questioner and tear down the recipient.

Besides knowledgeability, there's novelty - the presenters have to link to previous research but show that their work is new. Reminds me of a group in Neal Stephenson's Anathem, the Laurites (spelling might be off because I heard this as an audiobook). Their whole purpose was to point out connections to older work - they were really well-read and steeped in older research, so for any "new" idea they would say, "sounds like...". (Sometimes I wish we had a group like that I could join!) The participants in this study were much less gentle on the novelty front than they were on the knowledge front. The third part was about intellectual sophistication; somehow this doesn't seem as obvious to me.


Clark, H. H., & Brennan, S. E. (1993). Grounding in communication. In R. M. Baecker (Ed.), Readings in groupware and computer-supported cooperative work: Assisting human-human collaboration (pp. 222-233). San Mateo, CA: Morgan Kaufmann Publishers.
(I forgot the name of this in my last test essay, so time for a review!) There are copies of this online if you search Google for the authors' names.
Common ground is sharing mutual beliefs, knowledge, and assumptions - it's a necessity if communication is going to take place. This article starts by reviewing how people get to common ground (grounding) by looking at "adjacency pairs" of presentations and responses. One person says something; another provides evidence that he or she understands, or doesn't. People try for the least collaborative effort - but due to time pressure, errors, and ignorance (not knowing your conversation partner), they have to do a little fixing of their own and their partner's "utterances." Then there's the whole deal about making sure you're referring to the same thing.

So that's all fine, but the reason this was assigned in my doctoral seminar was the second half - how this changes with the medium. Some of the techniques to get to common ground may not exist in some media, and even if they do, they may require more effort. They mention 8 constraints: copresence, visibility, audibility, cotemporality, simultaneity, sequentiality, reviewability, and revisability. (huh, I got these from reviewing Jenny's book, forgot about them being here). There are all kinds of grounding costs: formulation costs, production costs, reception costs, understanding costs, start-up costs, delay costs, asynchrony costs, speaker change costs, display costs, fault costs, and repair costs. People trade off costs based on their purpose.

 
Saturday, May 16, 2009
  How would you design a collaboration community for scientists?
How would you design a collaboration community for scientists, given what we know about formal and informal scholarly communication in science; computer mediated communication; computer supported collaborative work; online communities; social software; and social studies of science?
(This is a test mini essay for comps prep. I wrote this offline, so it does not have links to provide appropriate attribution or credit, nor does it have complete citations.)

1. Introduction
There is an announcement for the next “facebook” for scientists almost every week. Frequently, these tools are just repurposed social software without any special design features specifically meant to support how scientists collaborate and communicate within collaborations. As Preece (2000) says, it is not a matter of “if you build it, they will come.” Using ideas from the diffusion of innovations literature (Rogers, 2003; Ilie et al.), the tool must be compatible with how scientists work, it must be visible, it must be trialable, there must be a perceived relative advantage over other similar tools, and, since it is an interactive information technology, it will have to reach critical mass for wide-scale adoption. Based on what we know about how scientists communicate and how information technologies have changed how scientists communicate, we can suggest some guidelines for what a successful tool should do. The next few sections of this essay will describe these guidelines, developed from the various streams of research.

2. Online Communities
2.1 Design Processes
First, the tool should be designed as an online community – a place that brings people together, with a common purpose and policies, using information and communication technologies (Preece, 2000). In design, it is important to address both sociability and usability. To address sociability, there should be a clearly stated purpose, clear policies for membership and behavior, and moderators to encourage appropriate contributions and discourage inappropriate ones. These policies help users trust the system and other users, thereby encouraging contribution.

To address usability, navigation should be clear, the site should be intuitive to use, there should be adequate help when needed, and the design should conform to best practices in web design. The site should also be machine-usable and interoperable; that is, it should import standard data formats (from RSS/XML, to scientific data formats for chemicals or genes, to bibliographic formats such as RIS and BibTeX), it should provide data streams in machine-consumable formats, and it should have a well-documented API to enable users to develop their own re-uses of the data.
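
(To make the import idea concrete, here is a minimal sketch in Python of ingesting one of those bibliographic formats, RIS, into a common internal record. The tag mapping and record shape are my own illustration, not any real system's schema.)

```python
# Minimal, illustrative RIS import: map the tagged lines of one RIS record
# onto a plain dict. The tag set and record shape are invented for this sketch.
RIS_TAGS = {"TY": "type", "TI": "title", "JO": "journal",
            "PY": "year", "DO": "doi"}

def parse_ris(text):
    record = {"authors": []}
    for line in text.splitlines():
        if len(line) < 6 or line[2:6] != "  - ":
            continue                        # not a tagged RIS line
        tag, value = line[:2], line[6:].strip()
        if tag == "AU":                     # authors repeat, so collect them
            record["authors"].append(value)
        elif tag in RIS_TAGS:
            record[RIS_TAGS[tag]] = value
    return record

sample = """TY  - JOUR
AU  - Tracy, K.
AU  - Naughton, J.
TI  - The identity work of questioning in intellectual discussion
PY  - 1994
ER  - 
"""
print(parse_ris(sample))
```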

Most importantly in the design process, this community cannot be designed in a vacuum and handed off to users, complete, like a Christmas present. It must be collaboratively constructed with lots of feedback from potential users, and development should continue once the service is online to address feedback from actual users. At minimum: other sites should be evaluated using content analysis to determine what they do successfully and how scientists are using them; potential users should be interviewed to determine what needs the system could address; focus groups should be held to get feedback on prototypes; and usability testing should be done to check the web design choices made in the design process. Early adopters should be asked to trial the site in an alpha or beta test to run the software through regular use prior to wider release.

2.2 Interaction and Membership
Blanchard makes the distinction between virtual settlements and communities. She extends sociological studies of communities in the offline world to online communities (Huberman et al. also address this). Communities provide support and a sense of belonging, whereas virtual settlements may just be places people congregate online. Scientists have multiple memberships and social identities already: as part of invisible colleges; as part of colleges, research groups, and labs; as editors, reviewers, authors, and readers of journals; as members of general-purpose online communities (e.g., facebook, LinkedIn, friendfeed, science blogs); and in their personal lives as friends, family, and so forth. Research and discussion with potential users is needed to determine if this tool should aim to be an online community in the Blanchard sense or merely a virtual settlement. It should not necessarily aim to replace anything, but to enable users to bring together some of this fragmentation.

In either case, it is clear from research by Lave and Wenger, as well as research done with open source software communities and research done on “lurkers” by Nonnecke and Preece, that the system should support various levels of participation. Lave and Wenger discuss legitimate peripheral participation: a way that new members can follow the activities of the community and learn how to participate while becoming active members and contributors. In other words, new users should be able to “lurk” to learn more about the community and then move into more central roles, first by commenting on the work of others and then finally by creating their own work and forming their own subgroups.

2.3 Modes of Communication
In studies of computer supported collaborative work (or cooperative work, CSCW) and computer mediated communication, there is much discussion regarding:
• synchronous vs. asynchronous communication
• revisability
• reviewability
• distributed participation

Science is an international enterprise, and a community should support widely distributed collaborations. This means different time zones, different cultures, different languages (although many participants in science will speak, read, and write English), and different expectations for social tools. This indicates that the system should focus on asynchronous tools whose content can be reflected on, reviewed, and revised. However, we know from studies by Olson et al. that distance does matter. Getting to common ground may be more difficult and may take longer with fewer cues (important article; I forgot the author – hope it will come back), particularly if the participants have not met in person at least once. Accordingly, this collaborative tool should offer support for linking to or embedding synchronous events, such as meetings in Second Life or conference streaming, as well as multimedia information such as YouTube videos or podcasts. In the case of blogs, trust is earned over time by establishing a personality through an archive of posts. This system can also provide histories for each person, listing their contributions and memberships, to enable other users to understand their point of view (see below, in studies of scholarly communication and STS, for discussion of attributes of the authors that should be shared).

3. Designing the System for Scientists
The previous sections have applied general research on online communities and computer mediated communication to the problem at hand, designing a collaboration tool for scientists; however, we know a great deal about how scientists communicate, and this information is very important to the design of a successful system.

3.1 Data Types and Representing Scientific Knowledge
Common research methods or materials can form boundary objects through which different groups of scientists can communicate (Fujimura). The issue at hand for this system is to represent these common objects such that users from different research areas can find them. For example, when searching an engineering digital library for indium tin oxide, one would find many useful results by typing “ITO”. When searching a chemistry digital library, this would have to be a linear formula (InSnO), and perhaps in Hill order (InOSn) (note: I’m not sure if any of these are correct – this is for illustrative purposes). Likewise, mathematical or signal processing approaches may be shared by very diverse research groups who do not read the same literature. A successful system would enable diverse users to collaborate around these boundary objects, either on the same problem or just using the same method on different problems.
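
(A small sketch, in Python, of the boundary-object point: several community-specific names resolve to one shared record, so an engineer typing “ITO” and a chemist typing a formula land in the same place. The names and the mapping are illustrative only, not a real registry.)

```python
# Illustrative synonym table: community-specific names for one material
# all resolve to a single canonical record.
CANONICAL = "indium tin oxide"
SYNONYMS = {
    "ito": CANONICAL,     # engineering shorthand
    "insno": CANONICAL,   # linear formula, as written above
    "inosn": CANONICAL,   # Hill-order formula, as written above
    "indium tin oxide": CANONICAL,
}

def resolve(query):
    """Map a community-specific name onto the shared boundary object."""
    return SYNONYMS.get(query.strip().lower())

for q in ("ITO", "InOSn", "tin-doped indium oxide"):
    print(repr(q), "->", resolve(q))  # the last one misses: tables need curation
```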

Some research areas in science seek to describe and model the physical world in terms of mathematical formulas. Typically, these formulas are created in LaTeX (a markup tool) or in a computational tool (like Matlab, Mathematica, etc.), an image is generated, and this image is uploaded to the web. The picture of the equation is not searchable or machine-usable. Early adopters of blogs and wikis had to program their own plug-ins to be able to display equations in a usable format. Likewise, scientists represent materials using graphical chemical structures. More recently, a machine-readable but chemically meaningful representation, InChI, has come into use, though not entirely without controversy. This system must enable its users to represent scientific knowledge in the form of equations and chemical structures that are machine-readable, but still fairly quick to input.
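
(One way to picture the requirement, sketched in Python with invented field names: keep the LaTeX source as the canonical, machine-usable form of an equation and treat any rendered image as derived display output, so search runs against the markup rather than the picture.)

```python
# Store the equation's source markup, not just a picture of it.
# The field names here are hypothetical.
equation = {
    "latex": r"\sigma = n e \mu",   # machine-readable, searchable source
    "rendered": "eq-0042.png",      # derived image, for display only
}

def equation_matches(eq, term):
    """Search the markup; a PNG of the same equation could never match."""
    return term in eq["latex"]

print(equation_matches(equation, r"e \mu"))   # True
```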

Borgman, Van House, and others describe the use of large collections of scientific data like GenBank and virtual observatories in astronomy. In eScience, some of these repositories are so large that the calculations and manipulations must take place at the data, instead of downloading the data to the scientist’s machine. The collaborative tool also must support collaborative work around scientific data and information hosted in these large repositories. Linking out to these data might not be enough; the link should be semantic, such that it indicates how the data are to be used.

Finally, the product of scientific work is often the peer-reviewed scientific article. This is another form of data that is hosted externally, but around which collaborations can form. Community members should be able to refer to bibliographic data in a standard way and comment on scholarly articles. These comments should be made available to the journal publishers (as long as the commenter has agreed that her comments may be shared) so that they can display or use this information to provide context for the article. Likewise, a scientist who comments at the original article should be able to import his or her comment into this collaboration tool.

3.2 Attribution and Credit
There is continuing controversy about the Mertonian norms of science and whether these norms are mythological or the lived experience of scientists. Likewise, there are many competing theories and explanations for why authors cite other work (see Nicolaisen’s review). In any case, attribution is still the currency of science (Polanyi), and this cycle of credit is very important to science (Latour). Grant proposals, hiring, promotion, tenure, and lab space are all determined in part by what the scientist publishes, in which venues, and how well those publications are cited. The publication venues are judged in part by their impact factor, which is a measure of how frequently they are cited. Unfortunately, contributions to collaborations and to collaborative online communities are frequently not valued in evaluating scientists’ work. In the current system, therefore, this collaborative tool might be most useful in helping scientists find collaborators and establish collaborations, complete their offline work, and publish it.

In preparation for promotion, tenure, and grant systems that do value online work, and to help in expertise location, work and contributions in this tool must be traceable to their contributor. In wikis, for example, edits are captured along with the time the change was made and the name of the user who made it. Contributions should be retrievable both at the place the information is stored and at the contributor’s profile. Additionally, contributions could be rated by other users as to how useful they were. Authors who make a lot of valuable contributions might have a special icon in their profile or signature.
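
(A sketch, in Python, of what that traceability might look like; the structures are hypothetical. Every contribution is logged with its author and timestamp, and the same log answers both “what happened to this page?” and “what has this person contributed?”)

```python
from datetime import datetime, timezone

log = []   # append-only contribution log

def record(user, obj, summary):
    """Capture who changed what, and when - like a wiki edit history."""
    log.append({"user": user, "object": obj, "summary": summary,
                "time": datetime.now(timezone.utc), "ratings": []})

def by_user(user):      # retrievable from the contributor's profile
    return [e for e in log if e["user"] == user]

def by_object(obj):     # retrievable where the information is stored
    return [e for e in log if e["object"] == obj]

record("msmith", "wiki:method-x", "added protocol step 3")
record("msmith", "wiki:method-x", "fixed reagent concentration")
print(len(by_user("msmith")), "edits by msmith;",
      len(by_object("wiki:method-x")), "edits on the page")
```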

3.3 Member Profiles
We know from studies of document selection and relevance (see, for example, Wang and Soergel) that scientists judge the relevance of articles using information about the author, his or her advisor, and his or her affiliation. At the same time, as discussed above, members who are new to the system may want to be peripheral participants, and members whose contributions will not be valued by their home institutions (or may be used against them as “a waste of time”) may want to use a pseudonym or be anonymous (see discussions of women scientist bloggers). Member profiles should allow links to professional home pages, blogs, and profiles on other networks; a listing of articles written; a picture or avatar; and semantic links to affiliations – but all of these must be optional. If the member has an existing persona used on his or her blog, then this can be used in the member profile.

4. Fitting into the Existing Information Ecosystem
This essay has touched on various ways that this tool should fit into the existing ecosystem, but it is valuable to compile these thoughts and to close the essay with a discussion of compatibility with existing systems. First, scientists have various workflows for identifying, retrieving, keeping, using, and re-finding information used as inputs to their scholarly work. Bloggers have mentioned that the re-finding process is simplified when their notes are kept on a blog instead of in individual files on their desktop or in lab notebooks. Likewise, personal information management tools such as bibliographic managers are helpful when reusing references to published information. It is not suggested that this one tool replace all of these existing tools; rather, it can take data streams produced by a wide variety of narrow-use tools and compile them in one place so that they can be searched, shared, annotated, and reused more easily. Friendfeed does this to a certain extent, but this system could be built to understand data streams used in science. The system could replace some tools that do not work well for science, such as blogs that do not adequately support equations and scientific symbols.

Borgman and others note that finding and reusing data is very complicated. Whereas there is a very well-developed system to support archiving, organizing, and providing access to scholarly publications, we really do not have a similar system for data. This tool certainly cannot address digital preservation issues or management of large data sets, but it can enhance access by enabling users to link to and collaboratively work with information pulled from these data sets. By providing semantic links out to data stored in disciplinary repositories, the system can support and enhance the data’s reuse, findability, and value.

Some collaborations and collaborative work need to be done in private or in closed spaces and only shared when complete. This system should allow groups of members to create new private spaces where they can work together, with the support of the larger system, but without sharing their work until they are ready. Work done on the system should have permanent URLs which, at the option of the collaborators, can be assigned digital object identifiers for use in citations from the scholarly literature.

As discussed by Nielsen, and by Gowers in his post on massively collaborative math, there needs to be a way to advertise for help/collaborators/expertise wanted to likely audiences, whether or not they are part of the system. The example Nielsen gives: in the middle of a proof, a mathematician hits a sticking point that will take a couple of weeks to get around because it requires additional background reading, but that another mathematician might know right off. The system should allow users to describe these points to the larger group and solicit help. Moderators could help the users describe their problem so that members in different research areas can find it and respond.

Finally, this entire discussion has been about a community for practicing scientists, but I found in my overview of engineering communities that fledgling communities could be scuttled by being inundated with college students trying to get homework help (or to get their homework done for them). Likewise, there seems to be a lot of interest in the science blogosphere in supporting science classrooms and public understanding of science. Separate areas could be created in the community to “ask a scientist”, “get homework help”, or “find a scientist speaker”. Any posts or contributions with these aims could be moved to these areas by the moderators so that interested scientists can respond, but scientists who are collaborating on scientific work are not interrupted.

----
2 hours have elapsed… running out of steam anyway, so I’ll post.
 
Saturday, May 09, 2009
  Why ghostwriting, ghost management, and fake journals could be pernicious
We often discuss the value of scholarly publications in terms of attribution of credit for promotion, tenure, and maybe even social capital (discussed in Polanyi); but their primary purpose is to convey knowledge. The introduction and background sections review the literature and place the new work in context. What is the research problem? Why is this interesting? What do we already know? The methods section is for reproducibility - so that ostensibly someone could come along and repeat the work and come up with similar results, even though we know that tacit knowledge, including craftsmanship, is needed to actually reproduce many experiments, and that this knowledge is not conveyed through journals (discussed in Shapin). The methods section helps readers to trust the results. Were the methods appropriate to the research problem? Were they applied appropriately? Were any issues addressed? The results section tells you what they found out, and the discussion section tells you why this is important or useful and what needs to be done next.

As discussed variously in Sociology of Scientific Knowledge work, scientists cannot repeat every experiment to trust it - if they had to go back and re-derive and re-test every piece of knowledge they use, they would never be able to do new work. So they must use and trust the scholarly literature as well as other scientists and sources of "public knowledge". But they are skeptical - a Mertonian norm - and require detailed information about how the information was obtained, as well as information about the author, their lab, their funding, their training, etc. This last bit is counter to the Mertonian norm of universalism, which states that author attributes aren't important. We know from empirical studies of relevance and document selection (see for example Wang & Soergel) that researchers do look at author, affiliation, and publication attributes - things besides the actual content of the article (& topical relevance) - to assess value. Active researchers also have a pecking order of journals - which journals are better because they have a low acceptance rate, a high impact factor, strong editors, or just a better reputation.

When you are well-integrated into a research area, when you are part of an invisible college (Price, Crane), then you will know what research is being done in which labs, and you'll know who does good work (Garvey, 1979). You will have access to much of the research prior to the actual publication in a journal. This is particularly true in the case of "normal" science (Kuhn), in which the problems are pretty well defined and new work is somewhat incremental instead of revolutionary. So when you become aware of new work, you know about the journal, the lab, and may even have a personal relationship with the author - having chatted at meetings - so you can incorporate new ideas and findings into your own work. You have a foundation into which you can fit this new information.

Ok, let's go back now to the case of the fake journal. The Scientist reported that a division of Elsevier compiled reprints of published articles and new questionable articles supporting the efficacy of a certain medicine into a journal, which they then handed out to physicians in Australia. This was not a real journal - the editor didn't have editorial control, there were no peer reviewers - it was only created to look like one, with a good-sounding name. It was packed with advertisements for the medicine alongside these favorable articles. When we look a little more closely, we understand that this (corporate-funded fake journals) is a service that this division of Elsevier offers, specifically trading on the reputation of Elsevier as a publisher of scholarly scientific and technical information (see quotes compiled by Bill Hooker).

Researchers who are integrated into the invisible college will not be fooled by these! They will not know the authors, they will not know the journal, and the fact that the journals are not carried by libraries or indexed in Medline indicates that they aren't well-respected. Medical libraries, which use extensive collection development heuristics, will also not be fooled. But these "journals" are not intended for the researcher in pharmacology or pain management or what have you! They are intended for the clinicians - the practicing physicians who are not personally involved in research, may only have limited access to "the literature", are pretty busy, and might be just a little bit rusty on evaluating information sources. This is one reason the fake journals could be pernicious - these physicians might be fooled. They might even know that it's marketing material, but still think that it's reprinting good articles.

Good articles. If everyone follows the Mertonian norms of universalism, communism, disinterestedness, and organized skepticism, then the exact process of how the article came about is irrelevant. That is to say: features of the author aren't important; scientific information is given freely to increase society's knowledge, with only attribution in repayment; it's all about the science (not societal good, not personal gain, just what makes good science); and question everything. This assumes scientists are all behaving ethically, and that the only contributors to the scientific scholarly communication system are in fact scientists who are committed to these norms.

However, we understand from reading two recent works by Sismondo (2007, 2009) that there are other players in the system who are not in any way committed to the norms and who are gaming the system for financial gain. According to these articles, pharmaceutical companies hire firms to design and run experiments, write up the results, select the publication venue, recruit a doctor to sign his name to the article, and then shepherd the article through publication. The lead author may have had no control over the research or the writing and is certainly not disinterested (the only connection to the work is via paycheck). These articles appear alongside other scholarly articles in reputable journals (indexed, carried by libraries, well-cited, well thought of by researchers). Further, the lead author may have been selected and hired because he or she is integrated into the invisible college and could have done this work.

I am not saying that the actual design of the trials was flawed, or that the results are not supported by the data, or that it isn't actually good science. The employees of these companies are trained researchers, but ones who are committed not to science and knowledge but to providing a service: making their customer's product look good. Scientists in academic settings certainly do take money from big companies, but there is arguably more separation. Important questions include: how much of the persuasiveness of the article is due to rhetorical manipulations by players who are paid to make a product look good? Are data omitted to ensure that the product looks good? Are the discussion and implications sections supported by the results? I'm curious, too, whether these articles (if they can be identified) are cited by articles produced the old-fashioned way.

So how big of a deal is this? Clinicians and practitioners are not naive - they may know a lot about these shenanigans - but how are they to assess the evidence with limited time, limited access, and when each article addresses just one small area of the knowledge they need for their everyday job? Out-and-out falsified data is perhaps easy to detect, but fudging the numbers just a bit to make things look just a little more convincing is not. Peer reviewers do not have enough information to detect this, so that is not the answer.

Other interested parties include courts, policy makers, and patients. A researcher who is integrated into the invisible college may recognize advertising immediately, but how about the courts, the policy makers, and patients and caregivers who are looking for more information on the course of care their physician has chosen? Are these sponsored articles more findable or accessible than other articles or just the same?
(I have to stop this essay now because of time considerations, but I will try to come back to topics like this as I go)

----
Crane, D. (1972). Invisible colleges: Diffusion of knowledge in scientific communities. Chicago: University of Chicago Press.
Garvey, W. D. (1979). Communication, the essence of science: Facilitating information exchange among librarians, scientists, engineers, and students. New York: Pergamon Press.
Kuhn, T. S. (1996). The structure of scientific revolutions (3rd ed.). Chicago, IL: University of Chicago Press.
Polanyi, M. (2000). The republic of science: Its political and economic theory. Minerva: A Review of Science, Learning & Policy, 38(1), 1-21. Originally published 1962.
Price, Derek J. de Solla. (1986). Little science, big science--and beyond. New York: Columbia University Press.
Shapin, S. (1995). Here and everywhere: Sociology of scientific knowledge. Annual Review of Sociology, 21(1), 289-321.
Sismondo, S. (2007). Ghost management: How much of the medical literature is shaped behind the scenes by the pharmaceutical industry? PLoS Med, 4(9), e286. doi:10.1371/journal.pmed.0040286
Sismondo, S. (2009). Ghosts in the machine: Publication planning in the medical sciences. Social Studies of Science, 39(2), 171-198. doi:10.1177/0306312708101047
 
Wednesday, May 06, 2009
  Should authors attest that they did a minimal lit search?
I keep coming back to this piece:
Gallagher, R. (2009). Citation Violations: Scientists are guilty of bibliographic negligence. The Scientist 23, p13. http://www.the-scientist.com/2009/05/1/13/1/ (free registration may be required)

The title goes back to stuff from Eugene Garfield - basically about authors omitting references to work because they either weren't aware of earlier work or had "citation amnesia." The piece discusses when articles don't cite work that supports theirs, whether or not they read or used the work - "disregard for antecedent research," as a complaint about a Cell article put it.

As discussed here earlier, there are lots of theories of citation - but reference lists aren't supposed to provide comprehensive coverage of the field. As Merton found, there are multiple independent discoveries, too.

I'm all about people doing good literature reviews, and peer reviewers and editors will catch some missing references... but I'm not sure there can or should be a standard across science, particularly where there are such differences in the various research areas.
 
Sunday, May 03, 2009
  Comps preparations
I've re-read the majority of the things I have on my reading list, and I'm coming down to the test-taking time. The only problem is that by stringing this out over more than 6 months, I have loose pieces of information in my head when I need to come to some connected whole.

From now until the time when I take my exam, I'm going to be:
- brushing up on my essay exam-taking skills
- reviewing notes taken while doing the readings (either within the last 6 months or at first reading) and preparing mini-essays to try to integrate pieces from different sections
- doing practice exam questions, using questions my advisor prepares and ones given to previous comps-takers at Maryland. I have these for communications, research methods, and information retrieval, but not for CMC or STS. STS is the one that might cause the most problems because I don't have any examples.

The format of the exam is two 2-hour questions for each of the major areas and one 2-hour question for each of the minor areas, so 8 + 6 = 14 hours of exam time, taken over the course of 5 contiguous working days (but you can have a weekend in the middle). You get a computer that's wiped clean except for a word processing program, you get stuck in a little room, and you come out 2 hours later. You can bring water and ear plugs, scrap paper and a pen. Non-native speakers get slightly more time and can bring a dictionary.

 
  How should advertising work in online journals?
(All of this is, of course, IMHO and doesn't represent anyone else - but I'd like to start a conversation.)

Compare many statements of the type:
"I don't want to pay for access, just support your service with advertising"
"It should be all open access, with the author paying, unless the author can't, then some foundation or other ought to pick up the slack"

to my horrified reaction:
"evil big publisher x dares to have Google ads on e access to journal y which we pay $x,000 a year to get"

to scientist/engineer reality:
"I sort of miss the ads for equipment and jobs, it made it easier to keep up with that sort of thing. I still flip through my society pub in print so I can get them. I'd like to be able to see older ones, too."

to publisher's reality:
Some publishers are picking only very relevant ads - like ones for scientific instruments - while others, like one that starts with an Sp, use Google Ads, and those are frequently crap.

So how do we solve this problem? Some magazines have created online analogs of the print that you can flip through like the print - and see ads - but this isn't the way most people use e-journals now. Magazines and trade pubs are used differently than journals. Some publishers surround the html pages for journal articles with carefully selected ads, but what do you do about pdf articles and "seagull" users (swoop in, grab pdf, leave)?

What would you say if your journal article got published and, when it came out, it had an ad at the top of the pdf page? Maybe from a company you didn't like for some reason (like a poor experience with equipment, or ethics, or just bad blood with a salesperson)? Or what if you thought it made it look like your paper (or data gathering) was sponsored by that company, which could be a conflict of interest?
 
