Christina's LIS Rant

This is my blog on library and information science. I'm into Sci/Tech libraries, special libraries, personal information management, sci/tech scholarly comms.... My name is Christina Pikas and I'm a librarian in a physics, astronomy, math, computer science, and engineering library. I'm also a doctoral student at Maryland. Any opinions expressed here are strictly my own and do not necessarily reflect those of my employer or CLIS. You may reach me via e-mail at cpikas {at} gmail {dot} com.

Moved to Scientopia (2010-08-03)
Just in case google or an old link brings you here, I'm now at Scientopia.org at http://scientopia.org/blogs/christinaslisrant . I intend to import these posts there, once things settle down.

I've been assimilated! (2009-05-22)
I'm now blogging over at <a href="http://scienceblogs.com/christinaslisrant/">http://scienceblogs.com/christinaslisrant/</a>. This is very exciting! There's a chance I could post things that are too rant-like here, but I expect to basically do all of my blogging there. I will maintain these archives, though. Wish me luck!<br /><br />edit 8/2010: no longer at scienceblogs, now at scientopia.org/blogs/christinaslisrant

Hey science librarians... (2009-05-20)
The folks at ScienceBlogs.com would like to know why you read ScienceBlogs (both ScienceBlogs and science blogs, probably).
Feel free to e-mail the editorial office directly (editorial at scienceblogs.com), or comment here, or respond via twitter @cpikas, or on friendfeed.<br /><br />Thanks!

Can we design *a* community for *scientists*? (2009-05-17)
My <a href="http://christinaslibraryrant.blogspot.com/2009/05/how-would-you-design-collaboration.html">test essay on designing an online community</a> has gotten some comments on <a href="http://ff.im/2XEB2">FriendFeed</a> and has been linked to from <a href="https://twitter.com/#search?q=Christina%27s%20%22design%20a%20collaboration%22">twitter</a> (this search works at the time of writing). It's great to have this kind of quick feedback.<br /><br />First, I think <a href="http://mndoci.com/blog/">Deepak</a> is absolutely right. He questions whether there is a generic type of scientist such that you can design a community for all scientists. The literature shows that you need to design a community with a clear purpose and clear policies. The participants have to have common interests to bring them together. Maybe what I meant (and didn't say!) is that these are minimal requirements for communities to be useful to people involved in science, and that additional features and functionality are required for the specific group of users or purpose of the site.<br /><br />Could there be very generic tools (facebook, friendfeed), generic science-y tools, and then much more specific research-area or even collaboration-specific tools? Sure, but then we also get into overload. Where do you post this update? Check each of these, or get e-mails and feeds from each of these? What happens when your contacts post the same info in facebook, twitter, and friendfeed... and it's not even their own content, but someone else's that you also follow.
Kind of annoying - is there a way to deduplicate? Should there be? Maybe the content only appears once, with comments and links showing from all of the various places?<br /><br />What Mr. Gunn says is right, too: there has to be some up-front value to get people to even try any site. There has to be a perceived relative advantage, there have to be various types of knowledge, and there has to be a low-risk way to trial the site. I don't really agree that just being first is enough. There were other photo sharing sites before Flickr that weren't as popular. Flickr did some things that made it more useful.<br /><br />Recommender systems are great - ScienceDirect's related articles usually works pretty well for me (ducking bcs I liked something from the Evil Empire - sorry Bill!). PubMed's doesn't seem to. From what I can tell, SD uses the full text whereas PubMed uses only the MeSH indexing. I'm not sure how many people have to be on Mendeley for it to really work. Individual papers might be saved for lots of different reasons, too - it's almost like you need to say why you're keeping it: same organism, good method, author from x lab, person a recommended, etc. I think there's a lot of value in collaborating and communicating around scholarly publications. I probably need to post something more thoughtful about recommender systems for articles... I have some thoughts on that, too.

Comps readings this week (2009-05-17)
Don't have time, what with working at the public library this afternoon, to do another test essay, so I'm re-reading some additional pieces. Some of these came up as things I needed to check on when I was doing the last two mini essays.<br /><br />Tracy, K., & Naughton, J. (1994). The identity work of questioning in intellectual discussion.
Communication Monographs, 61(4), 281-302.<br />I like this article - as one who went through several Navy boards, and even before that was conscious of what questions meant in physics class (being afraid of being seen as "not even wrong"), I can really identify with it. The authors looked at a series of colloquia - they taped them over the course of 18 months - as well as interviews with participants. The participants wanted to appear knowledgeable, but reasonably so. They knew that speakers couldn't know everything. Question askers formulated their questions to show whether they thought it was reasonable for the recipient of the question to know the information being asked. It's interesting that the questioners in these cases do a lot of repairs and modifications to their questions to make it ok for the recipient to not know the answer - this is completely the opposite of Navy situations, in which questions are phrased to build up the questioner and tear down the recipient.<br /><br />Besides knowledgeability, there's novelty - the presenters have to link to previous research but show that their work is new. Reminds me of a group in Neal Stephenson's <a href="http://www.worldcat.org/oclc/191930336">Anathem</a>, the Laurites (spelling might be off because I heard this as an audiobook). Their whole purpose was to point out connections to older work - they were really well-read and steeped in older research, so for any "new" idea they would say, "sounds like...". (Sometimes I wish we had a group like that I could join!) The participants in this study were much less gentle on the novelty front than they were on the knowledge front. The third part was about intellectual sophistication; somehow this doesn't seem as obvious to me.<br /><br />Clark, H. H., & Brennan, S. E. (1993). Grounding in communication. In R. M. Baecker (Ed.), Readings in groupware and computer-supported cooperative work: Assisting human-human collaboration (pp. 222-233).
San Mateo, CA: Morgan Kaufmann Publishers.<br />(I forgot the name of this in my last test essay so time for a review!) There are copies of this online if you search in google by the authors' names.<br />Common ground is sharing mutual beliefs, knowledge, and assumptions - it's a necessity if communication is going to take place. This article starts by reviewing how people get to common ground (grounding) by looking at "adjacency pairs" of presentations and responses. One person says something, another provides evidence that he or she understands or doesn't. People try for least collaborative effort - but due to time pressure, errors, ignorance (not knowing your conversation partner), they have to do a little fixing of their own and their partner's "utterances." Then there's the whole deal about making sure you're referring to the same thing.<br /><br />So that's all fine, but the reason this was assigned in my doctoral seminar was the second half - how this changes with the medium. Some of the techniques to get to common ground may not exist in some media, and even if they do, they may require more effort. They mention 8 constraints: copresence, visibility, audibility, cotemporality, simultaneity, sequentiality, reviewability, and revisability. (huh, I got these from reviewing Jenny's book, forgot about them being here). There are all kinds of grounding costs: formulation costs, production costs, reception costs, understanding costs, start-up costs, delay costs, asynchrony costs, speaker change costs, display costs, fault costs, and repair costs. 
People trade off costs based on their purpose.

How would you design a collaboration community for scientists? (2009-05-16)
How would you design a collaboration community for scientists, given what we know about formal and informal scholarly communication in science; computer mediated communication; computer supported collaborative work; online communities; social software; and social studies of science?<br />(This is a test mini essay for comps prep. I wrote this offline, so it does not have links to provide appropriate attribution or credit, nor does it have complete citations.)<br /><br />1. Introduction<br />There is an announcement for the next “facebook” for scientists almost every week. Frequently, these tools are just repurposed social software without any special design features specifically meant to support how scientists collaborate and communicate within collaborations. As Preece (2000) says, it is not a matter of “if you build it they will come”. Using ideas from the diffusion of innovations literature (Rogers, 2003; Ilie et al.), it must be compatible with how scientists work, it must be visible, it must be trialable, there must be a perceived relative advantage over other similar tools, and since it is an interactive information technology, it will have to get to critical mass for wide-scale adoption. Based on what we know about how scientists communicate and how information technologies have changed how scientists communicate, we can suggest some guidelines for what a successful tool should do. The next few sections of this essay will describe these guidelines, developed from the various streams of research.<br /><br />2. Online communities.<br />2.1.
Design Processes<br />First, the tool should be designed as an online community - a collaboration place that brings together people, with a common purpose, with policies, using information and communication technologies (Preece, 2000). In design, it is important to address both sociability and usability. To address sociability, there should be a clearly stated purpose, with clear policies for membership and behavior, and moderators to encourage appropriate contributions and discourage inappropriate ones. These policies help users trust the system and other users, thereby encouraging contribution.<br /><br />To address usability, navigation should be clear and easy to use, the site should be intuitive, there should be adequate help when needed, and the design should conform to best practices in web design. The site should also be machine-usable and interoperable; that is, it should import standard data formats (from RSS/XML to scientific data formats for chemicals or genes to bibliographic formats such as RIS and BibTeX), it should provide data streams in machine-consumable formats, and it should have a well-documented API to enable users to develop their own re-uses of the data.<br /><br />Most importantly in the design process, this community cannot be designed in a vacuum and handed off to users complete as a Christmas present. It must be collaboratively constructed with lots of feedback from potential users, and development should continue once the service is online to address feedback from actual users. At minimum, other sites should be evaluated using content analysis to determine what they do successfully and how scientists are using them; potential users should be interviewed to determine what needs the system could address; focus groups should be held to get feedback on prototypes; and usability testing should be done to check the web design choices that were made in the design process.
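The bibliographic-import piece of the interoperability requirement above could be sketched minimally. This is just an illustration of the idea, not any particular library's API - the parser assumptions (one well-formed entry, brace-delimited values, no nested braces) and the function names are mine:

```python
import re

def parse_bibtex_entry(entry: str) -> dict:
    """Parse a single BibTeX entry into a flat dict.

    A deliberately minimal sketch: assumes one well-formed entry
    with brace-delimited field values and no nested braces.
    """
    m = re.match(r"@(\w+)\s*\{\s*([^,]+),", entry)
    if not m:
        raise ValueError("not a BibTeX entry")
    record = {"type": m.group(1).lower(), "key": m.group(2).strip()}
    for name, value in re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", entry):
        record[name.lower()] = value.strip()
    return record

def to_ris(record: dict) -> str:
    """Emit a minimal RIS rendering (journal articles only)."""
    lines = ["TY  - JOUR"]
    if "author" in record:
        for author in record["author"].split(" and "):
            lines.append("AU  - " + author.strip())
    if "title" in record:
        lines.append("TI  - " + record["title"])
    if "year" in record:
        lines.append("PY  - " + record["year"])
    lines.append("ER  - ")
    return "\n".join(lines)
```

The point is only that a community platform should speak both formats so members never retype references by hand.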
Early adopters should be asked to trial the site in an alpha or beta test to run the software through regular use prior to wider release.<br /><br />2.2 Interaction and Membership<br />Blanchard makes the distinction between virtual settlements and communities. She extends sociological studies of communities in the offline world to online communities (Huberman et al also address this). Communities provide support and a sense of belonging whereas virtual settlements may just be places people congregate online. Scientists have multiple memberships and social identities already as part of invisible colleges; as part of colleges, research groups, and labs; as editors, reviewers, authors, and readers of journals; as members of general purpose online communities (e.g., facebook, linked in, friendfeed, science blogs), and in their personal lives as friends, family, and so forth. Research and discussion with potential users is needed to determine if this tool should aim to be an online community in the Blanchard sense or merely a virtual settlement. It should not necessarily aim to replace anything, but to enable users to bring together some of this fragmentation.<br /><br />In either case, it is clear from research by Lave and Wenger, as well as research done with open source software communities, and research done on “lurkers” by Nonnecke (sp?) and Preece, that the system should support various levels of participation. Lave and Wenger discuss legitimate peripheral participation. This is a way that new members can follow the activities of the community and learn how to participate while learning to become an active member and contributor. 
In other words, new users should be able to “lurk” to learn more about the community and then move into more central roles, first commenting on the work of others and then finally creating their own work and forming their own subgroups.<br /><br />2.3 Modes of Communication<br />In studies of computer supported collaborative work (or cooperative work, CSCW) and computer mediated communication, there is much discussion regarding:<br />• synchronous vs. asynchronous communication<br />• revisability<br />• reviewability<br />• distributed participation<br /><br />Science is an international enterprise, and a community should support widely distributed collaborations. This means different time zones, different cultures, different languages (although many participants in science will speak, read, and write English), and different expectations for social tools. This indicates that the system should focus on asynchronous tools that allow reflection and review and that can be revised. However, we know from studies by Olson et al. that distance does matter. Getting to common ground may be more difficult and may take longer with fewer cues (important article - forgot the author; hope it will come back), particularly if the participants have not met in person at least once. Accordingly, this collaborative tool should offer support for linking to or embedding synchronous events such as meetings in Second Life or conference streaming, as well as multimedia information such as YouTube videos or podcasts. In the case of blogs, trust is earned over time by establishing a personality through an archive of posts. This system can also provide histories for each person listing their contributions and memberships to enable other users to understand their point of view (see below in studies of scholarly communication and STS for discussion of attributes of the authors that should be shared).<br /><br />3.
Designing the System for Scientists<br />The previous sections have applied general research on online communities and computer mediated communication to the problem at hand, designing a collaboration tool for scientists; however, we know a great deal about how scientists communicate, and this information is very important to the design of a successful system.<br /><br />3.1 Data types and representing scientific knowledge<br />Common research methods or materials can form boundary objects through which different groups of scientists can communicate (Fujimura). The issue at hand for this system is to represent these common objects such that users from different research areas can find them. For example, when searching an engineering digital library for Indium Tin Oxide, one would find many useful results by typing “ITO”. When searching a chemistry digital library this would have to be a linear formula (InSnO), and perhaps in Hill order (InOSn) (note: I’m not sure if any of these are right - this is for illustrative purposes). Likewise, mathematical or signal processing approaches may be shared by very diverse research groups who do not read the same literature. A successful system would enable diverse users to collaborate around these boundary objects, either on the same problem or just using the same method on different problems.<br /><br />Some research areas in science seek to describe and model the physical world in terms of mathematical formulas. Typically, these formulas are created in LaTeX (a markup tool) or in a computational tool (like MATLAB, Mathematica, etc.), and then an image is generated and uploaded to the web. The picture of the equation is not searchable or machine usable. Early adopters of blogs and wikis had to program their own plug-ins to be able to display equations in a usable format. Likewise, scientists represent materials using graphical chemical structures.
More recently, a machine-readable but chemically meaningful representation, InChI, is being used, but its use is not entirely without controversy. This system must enable its users to represent scientific knowledge in the form of equations and chemical structures that are machine readable but still fairly quick to input.<br /><br />Borgman, Van House, and others describe the use of large collections of scientific data like GenBank and virtual observatories in astronomy. In eScience, some of these repositories are so large that the calculations and manipulations must take place at the data, instead of downloading the data to the scientist's machine. The collaborative tool also must support collaborative work around scientific data and information that are hosted in these large repositories. Linking out to these data might not be enough; the link should be semantic, such that it indicates how the data are to be used.<br /><br />Finally, the product of scientific work is often the peer-reviewed scientific article. This is another form of data that is hosted externally, but around which collaborations can form. Community members should be able to refer to bibliographic data in a standard way and comment on scholarly articles. These comments should be made available to the journal publishers (as long as the commenter has agreed that her comments may be shared), so that they can display or use this information to provide context for the article. Likewise, a scientist who comments on the original article should be able to import his or her comment into this collaboration tool.<br /><br />3.2 Attribution and Credit<br />There is continuing controversy about Mertonian Norms of Science and whether these norms are mythological or the lived experience of scientists. Likewise, there are many competing theories and explanations for why authors cite other work (see Nicolaisen’s review).
In any case, attribution is still the currency of science (Polanyi), and this cycle of credit is very important to science (Latour). Grant proposals, hiring, promotion, tenure, and lab space are all determined in part by what the scientist publishes, in which venues, and how well those publications are cited. The publication venues are judged in part by their impact factor, which is a measure of how frequently they are cited. Unfortunately, contributions to collaborations and to collaborative online communities are frequently not valued in evaluating scientists’ work. In the current system, therefore, this collaborative tool might be most useful in helping scientists complete their offline work and publish it, as well as in helping them find collaborators and establish collaborations to complete offline work and publish it.<br /><br />In preparation for promotion, tenure, and grant systems that do value online work, and to help in expertise location, work and contributions in this tool must be traceable to their contributor. In wikis, for example, edits are captured along with the time of the change and the user name of the editor. Contributions should be retrievable both at the place the information is stored and at the contributor’s profile. Additionally, contributions could be rated by other users as to how useful they were. Authors who make a lot of valuable contributions might have a special icon in their profile or signature.<br /><br />3.3 Member Profiles<br />We know from studies of document selection and relevance (see for example Wang and Soergel) that scientists judge the relevance of articles using information about the author, his or her advisor, and his or her affiliation.
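A member profile carrying the kinds of author attributes just mentioned could be modeled with every identifying field optional, so that a member can stay pseudonymous. This is a rough sketch; the field names are my own illustration, not from any existing platform:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MemberProfile:
    """Profile where only the handle is required; everything
    identifying (homepage, affiliation, etc.) is opt-in."""
    handle: str                        # pseudonym or real name, member's choice
    homepage: Optional[str] = None
    blog: Optional[str] = None
    affiliation: Optional[str] = None
    avatar_url: Optional[str] = None
    articles: list = field(default_factory=list)  # e.g., DOIs of papers written

    def public_view(self) -> dict:
        """Expose only the fields the member has chosen to fill in."""
        return {k: v for k, v in self.__dict__.items()
                if v not in (None, [])}
```

The design choice is simply that optionality is enforced at the data model, not left to a display setting.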
At the same time, as discussed above, members who are new to the system may want to be peripheral participants, and members whose contributions will not be valued by their home institutions (or may be used against them as “a waste of time”) may want to use a pseudonym or be anonymous (see discussions of women scientist bloggers). Member profiles should allow links to professional home pages, blogs, profiles on other networks, a listing of articles written, a picture or avatar, and semantic links to affiliations - but all of these must be optional. If the member has an existing persona used on his or her blog, then this can be used in the member profile.<br /><br />4. Fitting into the existing information ecosystem<br />This essay has touched on various ways that this tool should fit into the existing ecosystem, but it is valuable to compile these thoughts and to close the essay with a discussion of compatibility with existing systems. First, scientists have various workflows for identifying, retrieving, keeping, using, and refinding information used as inputs to their scholarly work. Bloggers have mentioned that the refinding process is simplified when their notes are kept on a blog instead of in individual files on their desktop or in lab notebooks. Likewise, personal information management tools such as bibliographic managers are helpful when reusing references to published information. It is not suggested that this one tool replace all of these existing tools; rather, it can take data streams produced by a wide variety of narrow-use tools and compile them in one place so that they can be searched, shared, annotated, and reused more easily. Friendfeed does this, to a certain extent, but this system could be built to understand data streams used in science.
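One rough sketch of such science-aware stream handling: collapse the same item cross-posted to several services by keying on a DOI when one is present, falling back to a normalized URL. The item shapes and function names here are hypothetical, just to illustrate the idea:

```python
from urllib.parse import urlsplit

def item_key(item: dict) -> str:
    """Prefer a DOI as the identity of a stream item; otherwise
    fall back to the URL stripped of scheme, query string, and
    trailing slash."""
    if item.get("doi"):
        return "doi:" + item["doi"].lower()
    parts = urlsplit(item.get("url", ""))
    return parts.netloc.lower() + parts.path.rstrip("/")

def merge_streams(*streams):
    """Collapse cross-posted items from (source_name, items) pairs,
    remembering every source each item appeared in."""
    merged = {}
    for source, items in streams:
        for item in items:
            entry = merged.setdefault(item_key(item),
                                      {"item": item, "sources": []})
            entry["sources"].append(source)
    return merged
```

A reader would then see one entry per paper, with the comment trails from each service attached, rather than three copies.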
The system could replace some tools that do not work well for science, such as blogs that do not adequately support equations and scientific symbols.<br /><br />Borgman and others note that finding and reusing data is very complicated. Whereas there is a very well-developed system to support archiving, organizing, and providing access to scholarly publications, we really do not have a similar system for data. This tool certainly cannot address digital preservation issues or management of large data sets, but it can enhance access by enabling users to link to and collaboratively work with information pulled from these data sets. By providing semantic links out to data stored in disciplinary repositories, the system can support and enhance the data’s reuse, findability, and value.<br /><br />Some collaborations and collaborative work need to be done in private or in closed spaces and only shared when complete. This system should allow groups of members to create new private spaces where they can work together, with the support of the larger system, but without sharing their work until they are ready. Work done on the system should have permanent URLs which, at the option of the collaborators, can be assigned digital object identifiers for use in citations from the scholarly literature.<br /><br />As discussed by Nielsen, and by Gowers in his post on massively collaborative math, there needs to be a way to advertise for help/collaborators/expertise wanted to likely audiences, whether or not they are part of the system. The example Nielsen gives: in the middle of a proof, a mathematician hits a sticking point that will take a couple of weeks to get around because it requires additional background reading, but that another mathematician might know right off. The system should allow users to describe these points to the larger group and solicit help.
Moderators could help the users describe their problem so that members in different research areas can find and respond to it.<br /><br />Finally, this entire discussion has been about a community for practicing scientists, but I found in my overview of engineering communities that fledgling communities could be scuttled by being inundated with college students trying to get homework help (or to get their homework done for them). Likewise, there seems to be a lot of interest in the science blogosphere in supporting science classrooms and public understanding of science. Separate areas could be created at the community to “ask a scientist”, “get homework help”, or “find a scientist speaker.” Any posts or contributions with these aims could be moved to these areas by the moderators so that interested scientists can respond, but scientists who are collaborating on scientific work are not interrupted.<br /><br />----<br />2 hours have elapsed… running out of steam anyway, so I’ll post.

Why ghostwriting, ghost management, and fake journals could be pernicious (2009-05-09)
We often discuss the value of scholarly publications in terms of attribution of credit for promotion, tenure, and maybe even social capital (discussed in Polanyi); but their primary purpose is to convey knowledge. The introduction and background sections review the literature and place the new work in context. What is the research problem? Why is this interesting? What do we already know? The methods section is for reproducibility - so that ostensibly someone could come along and repeat the work and come up with similar results, even though we know that tacit knowledge, including craftsmanship, is needed to actually reproduce many experiments, and that this knowledge is not conveyed through journals (discussed in Shapin).
The methods section helps readers to trust the results. Were the methods appropriate to the research problem? Were they applied appropriately? Were any issues addressed? The results section tells you what they found out, and the discussion section tells you why this is important or useful and what needs to be done next.<br /><br />As discussed variously in Sociology of Scientific Knowledge work, scientists cannot repeat every experiment to trust it - if they needed to go back and re-derive and re-test every piece of knowledge they use, then they would never be able to do new work. So they must use and trust the scholarly literature as well as other scientists and sources of "public knowledge". But they are skeptical - a Mertonian norm - and require detailed information about how the information was obtained, as well as information about the author, their lab, their funding, their training, etc. This last bit is counter to the Mertonian norm of universalism, which states that author attributes aren't important. We know from empirical studies of relevance and document selection (see for example Wang & Soergel) that researchers do look at author, affiliation, and publication attributes - things besides the actual content of the article (& topical relevance) - to assess value. Active researchers also have a pecking order of journals - which journals are better because they have a low acceptance rate or a high impact factor, strong editors, or just a better reputation.<br /><br />When you are well-integrated into a research area, when you are part of an invisible college (Price, Crane), then you will know what research is being done in which labs, and you'll know who does good work (Garvey, 1979). You will have access to much of the research prior to the actual publication in a journal. This is particularly true in the case of "normal" science (Kuhn), in which the problems are pretty well defined and new work is somewhat incremental instead of revolutionary.
So when you become aware of new work, you know about the journal and the lab, and may even have a personal relationship with the author - having chatted at meetings - so you can incorporate new ideas and findings into your own work. You have a foundation into which you can fit this new information.<br /><br />Ok, let's go back now to the case of the fake journal. <a href="http://www.the-scientist.com/blog/display/55671/">The Scientist</a> reported that a division of Elsevier compiled reprints of published articles and new questionable articles supporting the efficacy of a certain medicine into a journal, which they then handed out to physicians in Australia. This was not a real journal - the editor didn't have editorial control, there were no peer reviewers - it was only created to look like one, with a good-sounding name. It was packed with advertisements for the medicine alongside these favorable articles. When we look a little more closely, we understand that this (corporate-funded fake journals) is a service that this division of Elsevier offers, specifically trading on the reputation of Elsevier as a publisher of scholarly scientific and technical information (see quotes <a href="http://www.sennoma.net/main/archives/2009/05/no_bottom_to_worse_at_elsevier.php">compiled by Bill Hooker</a>).<br /><br />Researchers who are integrated into the invisible college will not be fooled by these! They will not know the authors, they will not know the journal, and the fact that the journals are not carried by libraries or indexed in Medline indicates that they aren't well-respected. Medical libraries, which use extensive collection development heuristics, will also not be fooled. But these "journals" are not intended for the researcher in pharmacology or pain management or what have you!
They are intended for the clinicians - the practicing physicians who are not personally involved in research, may only have limited access to "the literature", are pretty busy, and might be just a little bit rusty on evaluating information sources. This is one reason the fake journals could be pernicious - these physicians might be fooled - they might even know that it's marketing stuff, but still think that it's reprinting good articles.<br /><br />Good articles. If everyone follows the Mertonian norms of universalism, communism, disinterestedness, and organized skepticism, then the exact process of how the article came about is irrelevant. That is to say, features of the author aren't important, scientific information is given freely to increase society's knowledge with only attribution in repayment, it's all about the science (not societal good, not personal gain, just what makes good science), and question everything. This assumes scientists are all behaving ethically, and that the only contributors to the scientific scholarly communication system are in fact scientists, who are committed to these norms.<br /><br />However, we understand from reading two recent works by Sismondo (2007, 2009) that there are other players in the system, who are not in any way committed to the norms, and who are gaming the system for financial gain. According to these articles, pharmaceutical companies hire companies to design and run experiments, write up the results, select the publication venue, recruit a doctor to sign his name to the article, and then shepherd the article through publication. The lead author may have had no control over the research or the writing and is certainly not disinterested (the only connection to the work is via paycheck). These articles appear alongside other scholarly articles in reputable journals (indexed, carried by libraries, well-cited, well thought of by researchers).
Further, the lead author may have been selected and hired because he or she is integrated into the invisible college and <span style="font-style: italic;">could have </span>done this work.<br /><br />I am not saying that the actual design of the trials was flawed, or that the results are not supported by the data, or that it isn't actually good science. The employees of these companies are trained researchers, but ones who are committed not to science and knowledge, but to providing a service: making their customer's product look good. Scientists in academic settings sure do take money from big companies, but there is arguably more separation. Important questions include: how much of the persuasiveness of the article is due to rhetorical manipulations by players who are paid to make a product look good, are data omitted to ensure that the product looks good, and are the discussion and implications sections supported by the results? I'm curious, too, whether these articles (if they can be identified) are cited by articles produced the old-fashioned way.<br /><br />So how big of a deal is this? Clinicians and practitioners are not naive - they may know a lot about these shenanigans - but how are they to assess the evidence with limited time, limited access, and when each article addresses just one small area of the knowledge they need for their everyday job? Out-and-out falsifying of data is perhaps easy to detect - but fudging the numbers just a bit to make things look just a little more convincing is not. Peer reviewers do not have enough information to detect this, so that is not the answer.<br /><br />Other interested parties include courts, policy makers, and patients. A researcher who is integrated into the invisible college may recognize advertising immediately, but how about the courts, the policy makers, and the patients and caregivers who are looking for more information on the course of care their physician has chosen?
Are these sponsored articles more findable or accessible than other articles or just the same?<br />(I have to stop this essay now because of time considerations, but I will try to come back to topics like this as I go)<br /><br />----<br />Crane, D. (1972). Invisible colleges: Diffusion of knowledge in scientific communities. Chicago: University of Chicago Press.<br />Garvey, W. D. (1979). Communication, the essence of science: Facilitating information exchange among librarians, scientists, engineers, and students. New York: Pergamon Press.<br /><span class="TF">Kuhn, T. S. (1996). <i>The structure of scientific revolutions</i> (3rd ed.). Chicago, IL: University of Chicago Press.</span><br /><span class="TF">Polanyi, M. (2000). The republic of science: Its political and economic theory.<i> Minerva: A Review of Science, Learning & Policy, </i><i>38</i>(1), 1-21. Originally published 1962.</span><br />Price, Derek J. de Solla. (1986). Little science, big science--and beyond. New York: Columbia University Press.<br />Shapin, S. (1995). Here and everywhere: Sociology of scientific knowledge. Annual Review of Sociology, 21(1), 289-321.<br />Sismondo, S. (2007). Ghost management: How much of the medical literature is shaped behind the scenes by the pharmaceutical industry? PLoS Med, 4(9), e286. doi:10.1371/journal.pmed.0040286<br />Sismondo, S. (2009). Ghosts in the machine: Publication planning in the medical sciences. Social Studies of Science, 39(2), 171-198. doi:10.1177/0306312708101047Christinahttp://www.blogger.com/profile/12104847732663970352noreply@blogger.com0tag:blogger.com,1999:blog-6474147.post-90941073892319677882009-05-06T21:55:00.002-04:002009-05-06T22:29:07.783-04:00Should authors attest that they did a minimal lit search?I keep coming back to this piece:<br />Gallagher, R. (2009). Citation Violations: Scientists are guilty of bibliographic negligence. The Scientist 23, p13.
http://www.the-scientist.com/2009/05/1/13/1/ (free registration may be required)<br /><br />The title goes back to stuff from Eugene Garfield - basically about authors omitting references to work because they either weren't aware of earlier work or they had "citation amnesia." The piece discusses when articles don't cite work that support theirs, whether or not they read or used the work - "disregard for antecedent research" as a complaint about a Cell article went.<br /><a href="http://christinaslibraryrant.blogspot.com/2008/08/meaning-of-citations.html"><br />As discussed here earlier</a>, there are lots of theories of citation - but reference lists aren't supposed to provide comprehensive coverage of the field. As Merton found, there are<a href="http://en.wikipedia.org/wiki/List_of_multiple_independent_discoveries"> multiple independent discoveries</a>, too.<br /><br />I'm all about people doing good literature reviews, and peer reviewers and editors will catch some missing references... but I'm not sure there can or should be a standard across science, particularly where there are such differences in the various research areas.Christinahttp://www.blogger.com/profile/12104847732663970352noreply@blogger.com0tag:blogger.com,1999:blog-6474147.post-49329093820756673892009-05-03T09:18:00.002-04:002009-05-03T09:43:52.523-04:00Comps preparationsI've re-read the majority of the things I have on my reading list, and I'm coming down to the test-taking time. 
The only problem is that by stringing this out over more than 6 months, I have loose pieces of information in my head when I need to come to some connected whole.<br /><br />From now until the time when I take my exam, I'm going to be<br />- brushing up on my essay exam-taking skills<br />- reviewing notes taken while doing the readings (either within the last 6 months or at first reading) and preparing mini-essays to try to integrate pieces from different sections<br />- doing practice exam questions using questions my advisor prepares and ones given to previous comps-takers at Maryland. I have these for communications, research methods, and information retrieval, but not for CMC or STS. STS is the one that might cause the most problems because I don't have any examples.<br /><br />The format of the exam is 2 x 2-hour questions for each of the major areas and 1 x 2-hour question for each of the minor areas, so 8 + 6 = 14 hours of exam time to be taken over the course of 5 contiguous working days (but you can have a weekend in the middle). You get a computer that's wiped clean except for a word processing program, you get stuck in a little room, and you come out 2 hours later. You can bring water and ear plugs, scrap paper and a pen.
Non-native speakers get slightly more time and can bring a dictionary.Christinahttp://www.blogger.com/profile/12104847732663970352noreply@blogger.com3tag:blogger.com,1999:blog-6474147.post-12429751566416439872009-05-03T07:59:00.002-04:002009-05-03T08:15:03.698-04:00How should advertising work in online journals?(all this of course, IMHO, and not representing anyone else - but I'd like to start a conversation)<br /><br />Compare many statements of the type:<br />"I don't want to pay for access, just support your service with advertising"<br />"It should be all open access, with the author paying, unless the author can't, then some foundation or other ought to pick up the slack"<br /><br />to my horrified reaction:<br />"evil big publisher x dares to have Google ads on e-access to journal y which we pay $x,000 a year to get"<br /><br />to scientist/engineer reality:<br />"I sort of miss the ads for equipment and jobs, it made it easier to keep up with that sort of thing. I still flip through my society pub in print so I can get them. I'd like to be able to see older ones, too."<br /><br />to publisher's reality:<br /><ul><li>significant income used to come from advertising, particularly in chemistry and biomed journals, and as high as subscription prices are, they would be even higher without it<br /></li><li>we haven't been able to convince advertisers to pay the same amount for online advertising</li><li>we're getting a lot of crap for advertising in and around the scholarly journals we publish</li></ul>Some publishers pick only very relevant ads - like ones for scientific instruments - while others, like one whose name starts with "Sp", use Google Ads, and those are frequently crap.<br /><br />So how do we solve this problem? Some magazines have created online analogs of the print that you can flip through like the print - and see ads - but this isn't the way most people use e-journals now. Magazines and trade pubs are used differently than journals.
Some of the publishers surround the html pages for journal articles with carefully selected ads, but what do you do about pdf articles and "seagull" users (swoop in, grab pdf, leave)?<br /><br />What would you say if your journal article got published and when it came out, it had an ad at the top of the pdf page? Maybe from a company you didn't like for some reason? (like poor experience with equipment or ethics or just bad blood with a sales person) Or what if you thought it made it look like your paper (or data gathering) were sponsored by that company, which could be a conflict of interest?Christinahttp://www.blogger.com/profile/12104847732663970352noreply@blogger.com2tag:blogger.com,1999:blog-6474147.post-87025854906155182192009-04-26T22:40:00.002-04:002009-05-03T12:11:56.201-04:00Comps readings this weekDiffusion of innovations questions figured prominently in the folder of comps questions - seemed like nearly everyone had a question relating this area to another area, so this finishes up the readings I had on diffusion of innovations. (this post was added to throughout the week and finished after the post on comps preparations. there will probably still be some posts on comps readings, but I'm supposed to be doing more integrating now - and not in the fun math way!)<br /><br />Ilie, V., Van Slyke, C., Green, G., & Lou, H. (2005). Gender Differences in Perceptions and Use of Communication Technologies: A Diffusion of Innovation Approach. <span style="font-style: italic;">Information Resources Management Journal</span>, 18(3), 13-31<br /><br /><div style="text-align: center;">user's perceptions of technology influence intention to use the technology ><br />user perceptions differ by gender<br /></div>This article looks at how gender impacts "intention to use" IM. The standard Rogers things: perceived ease of use, relative advantage, compatibility, observability (broken down into visibility and results demonstrability), plus perceived critical mass. 
Note that all of these have been studied by people other than Rogers, and they are all "perceived" - it's not Gartner's assessment of relative advantage, it's what the user thinks. They also review the extensive literature on gender differences in communication - both in person and online. The participants were business students who were of course heavy users of ICTs. They did a survey, and used scales from other studies for most items and made their own for perceived critical mass. I'll leave the stats to anyone who's interested to read. The men were into relative advantage, results demonstrability, and perceived critical mass. Women were into ease of use and visibility. This matched the hypotheses.<br /><br />Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. <span style="font-style: italic;">MIS Quarterly</span>, 27(3), 425-478<br />MIS Quarterly articles are pretty meaty. This one seems even more so - jam-packed with information. It's been cited about 520 times, 387 of those from journals (from WoS).<br /><br />This article compares 8 models of IT acceptance research and comes up with a unified model.
The original 8 models were tested to see how much variance they explained, then the new model tested against the first set of data (improvement) and then the new model against new data (decent adjusted R^2).<br />The 8 models:<br /><ol><li>Theory of Reasoned Action (TRA)<ul><li>attitude toward behavior</li><li>subjective norm</li></ul></li><br /><li>Technology Acceptance Model (TAM)<ul><li>perceived usefulness</li><li>perceived ease of use</li><li>subjective norm (added as TAM2)<br /></li></ul></li><br /><li>Motivational Model(MM)<ul><li>extrinsic motivation</li><li>intrinsic motivation</li></ul></li><br /><li>Theory of Planned Behavior (TPB)<ul><li>attitude toward behavior</li><li>subjective norm</li><li>perceived behavioral control</li></ul></li><br /><li>Combined TAM and TPB<ul><li>attitude toward behavior</li><li>subjective norm</li><li>perceived behavioral control</li><li>perceived usefulness</li></ul></li><br /><li>Model of PC Utilization<ul><li>job-fit</li><li>complexity</li><li>long-term consequences</li><li>affect towards use</li><li>social factors</li><li>facilitating conditions</li></ul></li><br /><li>Innovation Diffusion Theory (not really strictly Rogers, more like various IS takes on Rogers)<ul><li>relative advantage</li><li>ease of use</li><li>image (if use enhances user's image)</li><li>visibility</li><li>compatibility</li><li>results demonstrability</li><li>voluntariness of use</li></ul></li><br /><li>Social Cognitive Theory<ul><li>outcome expectations-performance</li><li>outcome expectations-personal</li><li>self-efficacy</li><li>affect (liking the behavior)</li><li>anxiety</li></ul></li><br /></ol><br />For each of these things, there are "moderators" including experience, voluntariness, gender, age. 
There might also be things about the industry, the job function of the user, and things about the technology itself - but these aren't considered here.<br /><br />Some of the limitations of previous studies were that they were mostly done with students, they were done retrospectively, they were done with completely voluntary innovations (but managers need to know how to get employees going), and the technologies were fairly simple.<br /><br />The authors find 4 organizations in 4 different industries with samples drawn from different functional roles, and give a survey immediately after training on a new technology, a month later, and 3 months later. They also gather "duration of use" for 6 months after the training to look at actual usage. The questions are taken from scales from the studies supporting the 8 models and were pre-tested and tweaked... then follows a lot of statistics and testing for validity, reliability... etc. The 4 orgs were grouped into 1a + 1b (voluntary) and 2a + 2b (mandatory).
Essentially the older models each accounted for at most 40-50% of the variance in intention and usage.<br /><br />The new model (for fear of getting in trouble, I'm re-drawing)<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjejE58xzwTvZcCuf3gTf1jAruTVOYv7HtkH7tlCddyReRAjfgXcfKACFrSzef8FDJJrUEf4ohbfxINUgQVwMeOhkVcTXmK9USwR-N8OlBlAVN2slAfEidjZlG7e_bM9upFpNfCZg/s1600-h/utaut.PNG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 320px; height: 213px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjejE58xzwTvZcCuf3gTf1jAruTVOYv7HtkH7tlCddyReRAjfgXcfKACFrSzef8FDJJrUEf4ohbfxINUgQVwMeOhkVcTXmK9USwR-N8OlBlAVN2slAfEidjZlG7e_bM9upFpNfCZg/s320/utaut.PNG" alt="" id="BLOGGER_PHOTO_ID_5331625423278259266" border="0" /></a>Each of these pieces is pulled together from pieces of the 8 models - so they use the questions from the other models, and group them together this way. They did the standard tests to see if they hung together and tested some hypotheses about the moderators. They then got data from 2 more orgs and ran the new model against that. It essentially worked pretty well. The new model accounted for like 70 percent of the variance of usage intention. There are definitely some limitations - partially due to sample size and the sheer number of variables.
(available from: http://eprints.gla.ac.uk/3521/1/slice_and_dice..pdf)<br />In this article they test a couple of different things about information interaction in search. They look at having a workspace in the interface and pseudo-facets by co-occurrence (not the typical clustering). There were several tasks of low and high complexity - defined as how much information is given and needed about an imposed task. Participants were much happier with the workspace than the baseline layout and they also did better at identifying relevant pages using the workspace for complex tasks.<br /><br />Wacholder, N., & Liu, L. (2008). Assessing term effectiveness in the interactive information access process. Information Processing & Management, 44(3), 1022-1031.<br />Started reading this and then I took a detour to quickly read through: Wacholder, N., & Liu, L. (2006). User preference: A measure of query-term quality. Journal of the American Society for Information Science and Technology, 57(12), 1566-1580. doi:10.1002/asi.20315 - that article describes the experimental setup.<br />I'm just having a really hard time telling the difference between these two articles. I guess the JASIST article is about what the user prefers and the IP&M article is about how effective the terms are at retrieving the correct result. The setup is that there's an electronic edition of the book. The investigators create a bunch of questions that can be answered with it. They have 3 indexes - the back-of-the-book index and two ways of doing noun phrases. One way keeps 2 phrases if they have the second term in common and the other keeps a phrase if the same word appears as the head of 2 or more phrases. They had questions that were easier or harder and created a test interface to show the query terms to the user. The user selects one and can see a bit of the text, which they can cut and paste or type into the answer block. Users preferred human terms - not surprising.
The head sorting terms had a slight edge on the human terms for effectiveness, with the TEC terms not doing nearly as well.<br /><br /><span class="TF">White, M. D. (1998). Questions in reference interviews.<i> Journal of Documentation, </i><i>54</i>, 443-465.<br />Looked at 12 pre-search interviews (recall that in this time period, when you wanted to do a literature search using an electronic database, you filled out a form, then made an appointment with a librarian, and then she mailed you a set of citations - or you picked them up a few days later). These interviews took place after the librarian had reviewed the form but before she'd done any real searching. Out of these 12 interviews, there were 600 questions (from both sides), apparently using a common set of rules as to what counts as a question... None of this seems earth-shattering now. Oh well.<br /><br /></span><span class="TF">Lee, J. H., Renear, A., & Smith, L. C. (2006). Known-Item Search: Variations on a Concept. <i>Proceedings 69th Annual Meeting of the American Society for Information Science and Technology (ASIST), </i>Austin, TX, 43.</span><span class="TF"> Also available from: http://eprints.rclis.org/8353/<br />We always talk about known-item search, but everyone defines it differently...<br /></span><br />Green, R. (1995). Topical Relevance Relationships I. Why Topic Matching Fails. Journal of the American Society for Information Science, 46(9), 646-653.<br />There are ideal information retrieval system goals, and there is operational system design. Ideally, relevance, in a strong sense, means that the document/information retrieved helps the user with his or her information need. To get this done in systems, we make some assumptions. Namely, that need can be represented by terms, that documents can be represented by terms, and that the system can retrieve documents based on input terms. So the weaker version of relevance that we use is matching term to term.
But there are lots of things that are helpful or relevant that don't match term for term - like things that are up or down the hierarchy (you search for EM radiation; a microwave document is not returned even though microwave radiation is a specific type of EM radiation). She then goes wayyy into linguistics stuff (as is her specialty) about types of relationships...<br /><br />Huang, X., & Soergel, D. (2006). An evidence perspective on topical relevance types and its implications for exploratory and task-based retrieval. Information Research, 12(1), paper 281. Retrieved from http://informationr.net/ir/12-1/paper281.html<br />This article follows closely on the previous (if not temporally then topically - ha!). The authors used relevance assessments from the MALACH project to further define various topical relevance types. The MALACH project has oral histories from Holocaust survivors. Graduate students in history assessed segments for matching with given topics and then provided their reasoning for doing so.<br />Direct - says precisely what the person asked<br />Indirect - provides <span style="font-style: italic;">evidence</span> so that you can infer the answer<br />types within these:<br />- generic - at the point but missing a piece<br />- backward inference or abduction - you have the result or a later event and you can infer what happened before<br />- forward inference or deduction - you have the preceding event or cause<br />- from cases<br />Context - provides context for the topic, like the environmental, social, or cultural setting<br />Comparison - provides similar information about another person, or another time, or another place<br /><br />So you can see how these are all very important and how a good exploratory search would help with this. As it is now, you have to manually figure out all of the various things to look for - even if the system perfectly matches your query terms, it's not enough!
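The term-matching failure described above (a query for a broad term missing documents indexed under its narrower terms) can be sketched in a few lines. This is a toy illustration with a hypothetical thesaurus and document index, not anything from either article:

```python
# A tiny (hypothetical) hierarchy of narrower-term relationships.
narrower = {
    "em radiation": ["microwave", "visible light", "x-ray"],
    "microwave": [],
    "visible light": [],
    "x-ray": [],
}

# Toy document index: document id -> index terms.
docs = {
    "d1": {"microwave"},
    "d2": {"em radiation"},
    "d3": {"acoustics"},
}

def expand(term):
    """Return the term plus every term below it in the hierarchy."""
    terms, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in terms:
            terms.add(t)
            stack.extend(narrower.get(t, []))
    return terms

def search(term, use_hierarchy=False):
    """Strict term match, or match against the hierarchically expanded query."""
    terms = expand(term) if use_hierarchy else {term}
    return sorted(d for d, index_terms in docs.items() if index_terms & terms)

print(search("em radiation"))                      # ['d2'] - the microwave doc is missed
print(search("em radiation", use_hierarchy=True))  # ['d1', 'd2']
```

Strict matching misses d1 even though microwave radiation is a kind of EM radiation; expanding the query down the hierarchy recovers it, which is exactly the gap between weak term-matching relevance and the stronger notions discussed here.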
Also, they discuss how, if you're trying to build an argument, you need different types of evidence at different stages. Good stuff (and not just 'cause a colleague and my advisor are the authors).<br />(so there's a situation at work, where I've been trying to bring some folks around to this point of view - they can only see direct matching - but I contend that a new/good info retrieval system should do more)<br /><br />Wang, P., & Soergel, D. (1998). A Cognitive Model of Document Use during a Research Project. Study I. Document Selection. Journal of the American Society for Information Science, 49(2), 115-133<br />This was based on Wang's dissertation work - while she worked at a campus library for agricultural economics, she did searching using DIALOG. For this bunch, she had participants read aloud and think aloud while they went through the results she had retrieved to pick out the ones they wanted in full text. She recorded this and then coded it. From that she pulled out what document elements they looked at and how they selected documents. I mostly talk about this study in terms of pointing out the document elements that are important (like Engineering Village being spot on with the author and affiliation first), but the decision theory stuff is interesting too. In addition to topicality, their criteria include recency, authority, relationship to the author (went to school with him), citedness, novelty, level, requisites (need to read Japanese), availability, discipline, expected quality, reading time...<br /><br />I figured while I'm in the relevance section - onward! (with all the Cooper, Wilson, and Kemp stuff... I'm not sure I get it so much... I'm really not about tricky arguments or nuance... as in the Patrick O'Brian novels, I go straight at 'em - even when I read one of these and get completely un-confused - 5 minutes later I'm confused again)<br /><br />Cooper, W. S. (1971). A Definition of Relevance for Information Retrieval.
Information Storage and Retrieval, 7(1), 19-37. DOI: 10.1016/0020-0271(71)90024-6<br />this pdf might be corrupted on ScienceDirect... I'll have to check from another machine - (no, it's fine from work). In the meantime I had to - dramatic sigh - get this out of my binder from the information retrieval doctoral seminar. Logical relevance has to do with topic appropriateness. It is the relationship between stored information and information need. Information need is a "psychological state" that is not directly observable - we hope to express it in words, but that's not the same thing. The query is a first approximation of a representation of an information need. The request is what the system actually gets (is this sounding a bit like Taylor '68?). So when he's doing his own definition, he looks at a very limited situation - a system that answers yes or no questions. (here's where I get into trouble). He defines a premiss set for a component statement of an information need as a group of system statements that have the component as a logical consequence (minimal means as small as possible). A statement is "logically relevant to (a representation of) an information need iff it is a member of some minimal premiss set." He later goes on to say that for topical information needs, you can create a component statement tree and get to something similar to Xiaoli & Dagobert's indirect topical relevance.
Interestingly, his definition specifically doesn't include things like credibility and utility where other versions of relevance do, even while maybe only developing topical relevance.<br /><br />Wilson, P. (1973). Situational relevance. Information Storage and Retrieval, 9, 457-471. doi:10.1016/0020-0271(73)90096-X<br />Wilson also notes the difference between psychological relevance - what someone does do, or does perceive to be relevant - and a broader view of logical relevance - something can be relevant whether or not the person noticed it. Wilson is interested in logical relevance. Within logical relevance, there's a narrower logical relevance (elsewhere direct) and evidential relevance. Something is evidentially relevant if it strengthens or adds to the argument/case. Situational relevance deals with things that are of concern or things that matter, not just things you're mildly interested in. Something is situationally relevant if, when put together with your entire stock of knowledge, it is logically or evidentially relevant to some question of concern. Something is directly relevant if it's relevant to something in the concern set and indirectly situationally relevant if it's relevant to something that isn't part of the concern set. Wilson's situational relevance is time sensitive and person sensitive - what is of concern depends on who you ask. Within all this there are preferences, degree, practicality, etc.<br /><br />Kemp, D. A. (1974). Relevance, Pertinence, and Information System Development.
Information Storage and Retrieval 10, 37-47.<br />In which we lead back to Kuhn again (all roads lead back to Kuhn and Ziman if you travel them far enough :) Kemp defines pertinence as a subjective measure of utility for the actual person with the information need, while relevance is something that can be judged more objectively, by others who can compare the expressed information request with the documents retrieved. He compares this to public vs. private knowledge (Ziman, and Foskett), denotation vs. connotation, semantics vs. pragmatics. Along the way, he provides a definition of informal vs. formal communication - but this is really much more complex now. His definition of informal is that it "does not result in the creation of a permanent record, or if it does, then that record is not available for general consultation" (p.40). Of course our informal communication may last well after you'd like it to and is certainly retrievable! His view is that pertinence is ephemeral - but I guess now we would say that it's situated.<br /><br />Kwasnik, B. H. (1999). The Role of Classification in Knowledge Representation and Discovery. Library Trends, 48(1), 22.<br />(btw the scans of this in both EbscoHost and Proquest aren't so hot - they're legible, but a little rough) This is a classic article for a reason... like this paragraph<br /><blockquote><span style="font-size:85%;">The process of knowledge discovery and creation in science has traditionally followed the path of systematic exploration, observation, description, analysis, and synthesis and testing of phenomena and facts, all conducted within the communication framework of a particular research community with its accepted methodology and set of techniques. We know the process is not entirely rational but often is sparked and then fueled by insight, hunches, and leaps of faith (Bronowski, 1978). Moreover, research is always conducted within a particular political and cultural reality (Olson, 1998). 
Each researcher and, on a larger scale, each research community at various points must gather up the disparate pieces and in some way communicate what is known, expressing it in such a way as to be useful for further discovery and understanding. A variety of formats exist for the expression of knowledge--e.g., theories, models, formulas, descriptive reportage of many sorts, and polemical essays. </span><br /></blockquote>Just sums up all of scholarly communication in a few sentences. "Classification is the meaningful clustering of experience" - and it can be used in a formative way while making new knowledge and to build theories. Then she describes different classification schemes:<br />Hierarchies have these properties: inclusiveness, species/differentia (luckily she translates that for us - is-a relationships), inheritance, transitivity, systematic and predictable rules for association and distinction, mutual exclusivity, and necessary and sufficient criteria. People like hierarchical systems because they're pretty comprehensive, they're economical because of inheritance and all, they allow for inferences, etc. But these don't always work because of multiple hierarchies, multiple relationships, transitivity breaking down, our lack of comprehensive knowledge, and other reasons.<br /><br />Trees go through that splitting but without the inheritance of properties. Her examples include part-whole relationships as well as a tree like general - colonel - lt colonel... - private. Trees are good because you can figure out relationships, but they're kind of rigid and handle only one dimension.<br /><br />Paradigms are matrices showing the intersection of two attributes (really?). Hm.<br /><br />Facet analysis - choose and develop facets, analyze stuff using the different facets, develop a citation order.
These are friendly and flexible once you get going, but deciding on facets is difficult and then there might not be any relationships between the facets.<br /><br />With all of these things, things get disrupted when perspective changes, or the science changes, or there are too many things that don't fit neatly into the scheme. The article stops kind of suddenly - but this really ties back to Bowker and Star, who are much more comprehensive (well, it's a book after all!) in how all of this ties into culture, but less detailed about how classifications work.<br /><br />Thus completes the relevance section... back to diffusion of innovations (see separate post on <a href="http://christinaslibraryrant.blogspot.com/2009/04/comps-reading-diffusion-of-innovations.html">Rogers</a>). These articles were originally assigned by M.D. White, who was a guest speaker at our doctoral seminar. One of her advisees did her <a href="http://www.ischool.umd.edu/research/students/hahn.shtml">dissertation on the diffusion of electronic journals</a> - good stuff. Dr. White was on my qualifying "event" committee, but she has since retired, so no luck in having her on my next couple.<br /><br />Fichman, R. G., & Kemerer, C. F. (1999). The illusory diffusion of innovation: An examination of assimilation gaps. Information Systems Research, 10(3), 255-275<br /><br />The point of this article is that for corporate IT innovations, there's a real difference between acquisition and deployment; that is, many companies purchase technologies that they never deploy. If you measure adoption by the number of companies that have purchased, then you'll miss rejection and discontinuance, which are actually very prevalent. This difference between cumulative acquisition and deployment is the assimilation gap. If you think of the typical S-curve, the higher curve (higher cumulative # acquired) is acquisition and the lower one is deployment; the area between the two curves is the gap.
You can draw a line at any time t and see the difference. The problem is that you have censoring - some firms still have not deployed at the end of the observation window. The authors use survival analysis for this, which enables them to use the data even with censoring, to look at median times to deployment, and to make statistical inferences about the relative sizes of two gaps.<br /><br />They suggest that reasons for this gap for software innovations in firms might be increasing returns to adoption and knowledge barriers to adoption. Returns to adoption means that the more other organizations have already adopted, the more useful the innovation will be. Reasons for this include network effects, learning from the experiences of others, general knowledge in the industry about the innovation, economies of scale, and industry infrastructure to support the innovation (p. 260). Managers might hedge their bets for innovations that haven't caught on yet - purchase them, but wait to see what others do before deploying. Sometimes technology that is immature is oversold - and this only becomes clear after purchase. Knowledge barriers can be managerial as well as technological. It might not be clear how to go about the deployment process.<br /><br />The authors did a survey of 1,500 medium to large firms (>500 employees) located using an advertiser directory from ComputerWorld. At these companies the respondents were mid-level IT managers with some software development tools installed at their site. They had 608 usable responses - but they ended up using only 384 because they wanted larger firms (>5 IT staff), who were assumed to be more capable of supporting these software innovations. Acquisition was defined as the first purchase of the first instance; deployment as use in 25% of new projects. For one tool there was a very small gap, but for another it was pretty large.
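The gap-between-curves idea is easy to sketch numerically. This is a minimal illustration with invented parameters (logistic S-curves standing in for the cumulative acquisition and deployment curves; none of these numbers come from the paper): deployment lags acquisition and levels off lower, and the assimilation gap at time t is the vertical distance between the two curves.

```python
import math

def s_curve(t, ceiling, midpoint, rate):
    """Cumulative adoption fraction modeled as a logistic S-curve."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def assimilation_gap(t):
    """Gap at time t = cumulative acquisition minus cumulative deployment.
    Made-up parameters: 90% of firms eventually acquire, only 50% ever
    deploy, and deployment lags acquisition by about two years."""
    acquisition = s_curve(t, ceiling=0.9, midpoint=3.0, rate=1.5)
    deployment = s_curve(t, ceiling=0.5, midpoint=5.0, rate=1.5)
    return acquisition - deployment

for year in (2, 4, 6, 8):
    print(year, round(assimilation_gap(year), 3))
```

In this toy version the gap widens and then never closes (it settles at the 0.4 difference between the ceilings) - which is the "illusory diffusion" point: an acquisition-only adoption curve looks like a success story while deployment never catches up.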
They came up with median times to deploy and also what percentage of acquirers will probably never deploy (for one innovation 43%!). They compared these to a baseline from a Weibull distribution (in which 75% deploy in 4 years).<br /><br />Answers to the survey questions supported the idea that complexity and the technologies being oversold really contributed to this gap. An alternate explanation is that different people in the organization make the acquisition and deployment decisions.<br /><br />(I'm going to stop now and start on next week's... more diffusion to come)Christinahttp://www.blogger.com/profile/12104847732663970352noreply@blogger.com0tag:blogger.com,1999:blog-6474147.post-88613325963622280492009-04-21T09:36:00.004-04:002009-04-21T17:08:07.372-04:00Ejournals and journal services: What is worth paying for?*rant alert*<br />This post has been bubbling up for a while, but I'm finally taking time out to say it. (see a discussion about <a href="http://bibwild.wordpress.com/2009/04/20/the-dangers-of-the-free-cloud-the-case-of-crossref/">crossref and free cloud on J.R.'s site</a>)<br /><br />This is in response to<br />a) Statements by some that anyone can publish a journal (and do it well), that journal hosting services provide little or no value, and that stashing copies of articles anywhere in a random pdf format is just as good as publishing in a journal<br />b) The <a href="http://www.library.yale.edu/consortia/icolc-econcrisis-0109.htm">ICOLC Statement</a>, which says in part:<br /><blockquote>1. Purchasers will trade features for price; that is, we can do without costly new interfaces and features. This is not a time for new products. Marketing efforts for new products will have only limited effects, if any at all. 
</blockquote>Part of what we (libraries) pay for when we license electronic journals is:<ul><li>an interface that allows browsing, searching, and known-item retrieval (e.g., you can just put in a journal name, volume, and page and get the answer)<br /></li><li>an interface that does alerts</li><li>an interface that allows you to export metadata<br /></li><li>an interface with extra features like similar articles, times cited, post to delicious<br /></li><li>an interface that shows you what you have access to and what you don't</li><li>probably most importantly, one that our machines will talk to so that we can use tools like open url resolvers (SFX) and metasearch (like metalib) to integrate into discovery platforms</li></ul>AIAA and some other publishers have chosen to ignore most if not all of these requirements and to strike out on their own - but we still subscribe because they're the only game in town. Some libraries are so cash-strapped that they use aggregators for journal full text instead of using the journal platform. This limits the features available and the context provided for the article, as well as frequently imposing an embargo on access (new articles are not available until 12 months or so have passed).
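The "machines will talk to" point is quite concrete: an OpenURL is just a citation serialized as a query string that a link resolver (like SFX) turns into a full-text link. A rough sketch - the resolver hostname is made up, but the key/value pairs follow the OpenURL 1.0 KEV format that resolvers consume:

```python
from urllib.parse import urlencode

def build_openurl(resolver_base, jtitle, volume, spage, issn=None):
    """Serialize a journal-article citation as an OpenURL 1.0
    key/value (KEV) query string for a link resolver.
    The resolver base URL is a hypothetical example."""
    params = {
        "url_ver": "Z39.88-2004",                         # OpenURL version
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",    # journal metadata format
        "rft.jtitle": jtitle,
        "rft.volume": volume,
        "rft.spage": spage,                               # start page
    }
    if issn:
        params["rft.issn"] = issn
    return resolver_base + "?" + urlencode(params)

url = build_openurl("https://sfx.example.edu/resolver",
                    "Information Systems Research", "10", "255")
print(url)
```

This is exactly the "put in a journal name, volume, and page, get the article" transaction - and it only works when the publisher's platform emits and accepts standard metadata instead of a home-grown format.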
(I choose to believe that they use aggregators because they're cash-strapped, not because they're too lazy to make individual subscription/platform decisions.)<br /><br />Publishers (like small societies) do not have to figure this out on their own - they join crossref, and they hire an organization like atypon or highwire or ingenta or even (eek) Elsevier or Wiley Blackwell to make their journals available.<br /><br />It IS worth money to:<br /><ul><li>be standards compliant</li><li>have a useful/usable web site that facilitates information discovery - we KNOW that scholars browse journal runs for information and chain from one article to the next; our platforms MUST support this or they are not useful!</li><li>be reliable!</li><li>tell us (librarians) what you're up to and offer us training on how to use your services</li><li>ask us (librarians & users) what we want/like/use<br /></li><li>have a long-term digital preservation plan<br /></li></ul><br />We DO NOT want to give publishers (like aiaa) and others more money to:<br /><ul><li>reinvent the wheel - to build their own site, from scratch, which is pretty but not usable or useful or standards compliant</li><li>lobby congress against things that we hold dear</li><li>hire lawyers to prevent us from doing what they have already licensed us to do</li><li>generally be evil</li></ul>*done ranting, I feel better now, thank you*<br /><br />Update, later that day: AIAA now has DOIs, thank goodness, but they still have issues. You could host your journal on BMC (if you are in Biomed!) or on some open journal service - not all of these are created equal! Your data export should be available in every format major bibliographic/citation managers take (ris, txt, endnote, refworks, BibTeX...).
You should also offer nice text and both online and offline readability (how about readable HTML and readable PDF!)Christinahttp://www.blogger.com/profile/12104847732663970352noreply@blogger.com1tag:blogger.com,1999:blog-6474147.post-62354049868219864562009-04-17T07:23:00.000-04:002009-04-17T07:26:21.382-04:00Comps reading - Diffusion of InnovationsVery disappointing progress for assorted reasons, not related to the book itself.<br /><br />Rogers, E. M. (2003). Diffusion of innovations. 5th ed. New York: Simon & Schuster.<br /><br />This book is very readable and enjoyable. Not to mention that the author reinforces what he says by referring backward and forward in the book a lot, italicizing new terms, and providing complete definitions. The book is also peppered with case studies pulled from the literature.<br /><br />Diffusion of innovations research got its start in the study of the diffusion of agricultural technologies in the Midwest. Rogers' dissertation in 1954 was on the diffusion of an insecticide in Iowa. He cites his own work from that time forward to work in press at the time of writing. This type of research is very popular in communication (and journalism), but also in marketing, epidemiology, medicine, sociology, anthropology, information science, international development, and elsewhere.<blockquote><span style="font-style: italic;">Diffusion</span> is the process in which an innovation is communicated through certain channels over time among the members of a social system. (p.5)<br /><br /><span style="font-style: italic;">Communication</span> is a process in which participants create and share information with one another in order to reach a mutual understanding (p.5)</blockquote>Rogers also defines innovation, information, and the other terms he uses. He starts with the elements of diffusion (innovation, channels, time, social system) and continues by discussing the history and criticisms of the research.
It's not until chapter 4 that he gets to the generation of innovations. Chapter 5 is one of the more important chapters, covering the innovation-decision process.<br /><br />For individuals, the innovation-decision process is<br /><div style="text-align: center;">knowledge > persuasion > decision > implementation > confirmation<br /></div>Within knowledge, there are a few different kinds: awareness, how-to, and principles (how it works). Different types of knowledge are needed by different adopters. For early adopters, awareness is most important. Persuasion is harder for preventive types of innovations (take vitamins), and there's often a knowledge-adoption gap. The decision can end in rejection of an innovation. Implementation can include re-invention (adopters adapting the innovation to their local circumstances). A higher degree of re-invention leads to faster adoption and greater sustainability. Even after all this is done there can be discontinuance.<br /><br />Chapter 6 is about the attributes of the innovation and how they impact its diffusion. Basically, these perceived attributes are important:<br /><ul><li>relative advantage (benefits it has over other innovations or existing stuff, within the social system, as perceived by potential adopters)</li><li>compatibility (how does it fit with existing culture, power outlets, ways of doing business, etc.)</li><li>complexity</li><li>trialability (can you give it a whirl before making some big commitment?)</li><li>observability (can you see people using it?)<br /></li></ul>This section also gets into incentives and mandates for adoption.<br /><br />Chapter 7 is about properties of the adopter and adopter categories: innovators (2.5%), early adopters (13.5%), early majority (34%), late majority (34%), laggards (16%).
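Those category percentages aren't arbitrary: Rogers cuts a normal distribution of adoption times at the mean and at one and two standard deviations before it. A quick check with the standard normal CDF (stdlib only, via the error function) roughly reproduces them:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Rogers' cutoffs: z is the standardized time of adoption.
categories = {
    "innovators": phi(-2),                # more than 2 sd earlier than average
    "early adopters": phi(-1) - phi(-2),  # between 2 and 1 sd early
    "early majority": phi(0) - phi(-1),   # 1 sd early up to the mean
    "late majority": phi(1) - phi(0),     # mean up to 1 sd late
    "laggards": 1 - phi(1),               # more than 1 sd late
}
for name, share in categories.items():
    print(f"{name}: {share:.1%}")
```

This gives roughly 2.3%, 13.6%, 34.1%, 34.1%, and 15.9%, which Rogers rounds to the familiar 2.5/13.5/34/34/16 split.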
There are a lot of different characteristics of early adopters, including ability to deal with abstraction, rationality, intelligence, and exposure to mass media.<br /><br />Those few chapters in the middle are the most important. Chapter 8 discusses diffusion networks and how homophily and heterophily come into play. Also opinion leaders - how to find them and what they do. Critical mass. Individual thresholds for adoption. Chapter 9 is about the change agent (someone who works to influence adoption decisions in the direction desired by the change agency). <br /><br />Chapter 10 discusses innovation in organizations - and I always think this chapter will be most important, but it's really not much different from the others. Internal characteristics of the organization:<br /><ul><li>centralization</li><li>complexity (not what you think - it's how smart the employees are, their knowledge and expertise, so that they can individually understand new innovations)<br /></li><li>formalization</li><li>interconnectedness</li><li>organizational slack (more time/money left over to spend on innovations)</li><li>size<br /></li></ul>The organizational innovation-decision process is slightly different:<br /><div style="text-align: center;">agenda setting > matching > redefining structure > clarifying > routinizing<br /></div><br />Chapter 11 discusses consequences of innovations. A lot of bad unintended consequences came with things missionaries tried to do (giving steel axes to younger members of a society when the elders had had control of the tools - upset the apple cart). It's like we think this is all done, but I was hearing on the radio the other day about a woman campaigning against a lot of aid for Africa because it's making local businesses less competitive, it's enriching dictators, and it's not encouraging local development.
I don't know for sure, but it seems like we're still not looking at the consequences (intended and unintended, direct and indirect, desired and undesired).<br /><br />I like this book a lot - but if you really want to get into this area, there are tons and tons of journal articles with more details. (I'll be re-reading a couple of these soon.)Christinahttp://www.blogger.com/profile/12104847732663970352noreply@blogger.com0tag:blogger.com,1999:blog-6474147.post-26381375175981213542009-03-29T12:07:00.000-04:002009-03-29T12:08:07.882-04:00Comps readings this weekHertzum, M. (2002). The importance of trust in software engineers’ assessment and choice of information sources. Information and Organization, 12, 1-18.<br />Not a hundred percent sure this lives up to its title. This was a study of about 11 people on a software project - and based on conversations I've heard on software engineering where I work (with people who do know about this sort of thing), this was software development and not truly engineering. Anyway, this article dug into the notion of trustworthiness, adding that it's also important that the information seeker has enough information to determine whether a source is trustworthy. This study reports the results of observation of 16 meetings, 11 interviews, and reviews of project documentation. Cost also turned out to be not very important compared to quality as a measure.<br /><br />Bates, M. J. (1990). Where should the person stop and the information search interface start? Information Processing & Management, 26, 575-591. (Looks like it's also available here: <a href="http://www.gseis.ucla.edu/faculty/bates/searchinterface.pdf">http://www.gseis.ucla.edu/faculty/bates/searchinterface.pdf</a>, but I have a photocopy of the journal pages.)<br />I am such a big Bates fan girl.
She starts with<blockquote><span style="font-size:85%;">Much of the advanced research and development of automated information retrieval systems to date has been done with the implicit or explicit goal of eventually automating every part of the process... An unspoken assumption seems to be that if a part of the information search process is not automated, it is only because we have not yet figured out how to automate it... The implicit assumption in much information retrieval (IR) system design is that the system (and behind that, the system designer) knows best... the system controls the pace and direction of the search... but not all searchers want that kind of response from an information system.</span></blockquote>Eureka - exploratory search, anomalous states of knowledge, HCIR, human information interaction... yes, of course! I think this is why engineers *love* Engineering Village's facets - they give them a view into how the system interprets their query, teach them what it knows, and give them power to control it.<br /><br />Do systems provide any more support for strategy than they did? Any more strategic advice? I'm thinking no.<br /><br /><br />Bates, M. (1989). The design of browsing and berrypicking techniques for the online search interface. Online Review, 13, 407-423.<br />At some point I looked, and I had been assigned this article in like 5 different classes. Not only do I have a copy in every binder, but we actually had this journal in a bound volume in our collection. I was so close to cutting the article out with an X-Acto knife before the volume went in recycling, but I chickened out. This article pulls together evidence from a bunch of places to show that information behavior is frequently an evolving process, with learning, refocusing, and changing of strategies along the way.
Different strategies might be following citations backward or forward, running through a journal that appears to have good stuff, browsing a place on the shelf, looking for more from a particular author, etc. Apparently, way back, Garfield had a hard time convincing librarians and others that people wanted to follow citations - librarians thought subject access was all people needed. She argues that systems should enable searchers to do all of these different types of searches.<br />Digital libraries should allow readers to jump back and forth between the article text and the references, see the section headings in advance and be able to jump directly to them (like to methods or conclusion), see what cites the article, browse journal tables of contents, and browse classification schemes and jump up, down, and across the hierarchy.<br />It's interesting to think about which journal platforms do this stuff well (and which don't do this stuff at all). She also talks about trying to make an equivalent for flipping through a book and reading random passages to see if you like the author's style - google books allows this, but proprietary ebook systems typically do not. She also says that systems should allow the user to take notes, highlight, and clip interesting pieces to save for later and use offline (or outside of the source)... some do this, but not as well. The articles on digital libraries by Soergel still suggest these things years later, and yet we don't always have them.<br /><br />Started re-reading Rogers, E. M. (2003). Diffusion of innovations. 5th ed. New York: Simon & Schuster.<br />Excellent book, and Rogers' stuff is so readable. Quite long, though. I think I said this on this blog before, but the current edition is really on crappy paper with a very thin cover. Like newsprint inside. I bought mine in probably 2006 and kept it out of sunlight, but it's still discoloring with age. I hope it doesn't become brittle....
when I actually *buy* a new book, I expect it to last! Anyway, like other books, this will get its own post.Christinahttp://www.blogger.com/profile/12104847732663970352noreply@blogger.com0tag:blogger.com,1999:blog-6474147.post-33343289872092790542009-03-27T21:12:00.003-04:002009-03-29T00:21:05.786-04:00What do librarians do and how do libraries work?Ok, I do realize that there is no way this post can live up to its title, but this is in response to some friendfeed threads (<a href="http://friendfeed.com/e/b1fc0839-63ef-505e-ccba-68664fcadcae/Join-the-Libraries-of-the-Future-debate-at-the/">example</a>). I suppose I can't keep giving people crap for not knowing what librarians do and how libraries work if I'm not willing to explain. I know quite a bit about how public libraries work, next to nothing about school libraries, so I'm really going to talk mostly about research libraries because that's where I live and the people asking the question are researchers. Most research libraries are in universities, but there are other research organizations like federal and corporate labs, hospitals, etc. I guess I lean toward how universities do things, because I was only in a government library for 4 months and both they and company libraries have some unique restrictions.<br /><br />So where to start? Libraries connect people to information. 
Librarians touch every bit of this by:<ul><li>selecting information sources (books, journals, protocols, spectra/data collections) based on balancing<br /></li><ul><li>subject (and relationship of subj to organization's research mission, vision, etc)</li><li>customer requests, discussions with customers, interlibrary loan requests<br /></li><li>cost considerations</li><li>measures or indicators of quality</li><li>reviews<br /></li><li>usage (global, local)</li><li>packages, special deals, consortial agreements, existing contracts that can't be reduced</li><li>global statements from management on what you're doing with electronic vs print or trying to build capacity or whatever</li><li>our professional expertise<br /></li><li>government documents are just a class of their own<br /></li> </ul><li>getting things customers need, but that we don't have, from other libraries or document delivery services, and finding, copying, scanning, and sending things that other libraries need for their customers<br /></li><li><a href="http://christinaslibraryrant.blogspot.com/2007/01/on-weeding.html">deselecting things</a> (if only to send to off-site storage)<br /></li><li>selecting finding tools like research databases - which ones, and then also which platform (for example, you can get Inspec on maybe 10 different platforms like Web of Knowledge, EbscoHost, FirstSearch, Ovid, EngineeringVillage2), once again balancing</li><ul><li>functionality</li><li>cost</li><li>consortial agreements</li><li>how far back it goes</li><li>if it's standards compliant</li><li>if it can be searched using z39.50, if it's open URL compliant, if it can be proxied -- if it will talk to machines</li></ul><li>negotiating access, negotiating licenses - here librarians are between corporate lawyers from the vendors and university lawyers, and also incorporating what they know about how the end users/customers actually need to use the stuff (like in course web sites or whatever), and ideological
statements, and pressure from the selection folks to just get it done</li><li>picking the companies that distribute and help us manage journal subscriptions (did you know we don't go directly to most journal publishers, but use a third party? we also use big distributors for books most of the time)<br /></li><li>paying the bills and accounting for things, managing the acquisitions process</li><li>organizing information so that it can be found</li><ul><li>cataloging books and journals - this is very complicated, also standards-based, and takes a lot to make sure that things can be found by the people who need them</li><li>entering things into several content management systems - one that runs an open url resolver (links you from a citation through to the full text), one that runs the web site, one that helps you track the licenses (some people manage to combine these things)</li><li>changing all of the urls all of the time when the #$%^ vendor updates their system or the @#$% publisher moves to a different vendor</li><li>see <a href="http://catalogablog.blogspot.com/">Catalogablog</a> for some insight into being a cataloger at a research organization (small and not a university)<br /></li></ul><li>building tools to connect people to information</li><ul><li>the online catalog - you know how it comes out of the box, right? needs lots of work</li><li>the open url resolver SFX thing? oh, yeah, that needs to be customized</li><li>the web site? yep</li><li>the federated search? yep</li><li>who maintains the servers?
do we pay the IT department, or do we have librarians with master's degrees swapping out broken drives - you'd be surprised!</li><li>usability testing</li><li>reviewing usage statistics, etc.<br /></li><li>refer to <a href="http://bibwild.wordpress.com/">Bibliographic Wilderness</a> for more on some of this category</li></ul><li>teaching people how to help themselves</li><ul><li>quick 30-minute classes on databases</li><li>teaching 1-3 credit "intro to" or "cheminformatics" or other classes</li><li>teaching a session of every section of every engineering 101 class in the university</li><li>consulting with individual students, faculty, staff, and researchers on how to get what they need, keep what they find, and use it<br /></li><li>creating screencast tutorials, handouts, self-paced online instruction</li><li>creating finding guides/pathfinders</li></ul><li>managing the circulation of materials - including putting stuff on reserve for classes<br /></li><li>collecting and preserving rare, special, or historical materials - everything from rebinding to specifying climate controls and security, to actually picking and using DRM, to licensing out materials<br /></li><li>collecting, organizing, and providing access to the organization's knowledge - doing knowledge management and archiving</li><li>institutional repositories - well, see <a href="http://cavlec.yarinareth.ne/">Caveat Lector</a><br /></li><li>sitting at the reference desk and answering questions and generally dealing with the public - unjamming the copier, refilling the printer, fixing the public access computers, keeping track of the stapler, getting the roof leak fixed....<br /></li><li>working as a consultant to departments and labs and groups and individual faculty on new projects, classes they might offer, assignments they might give</li><li>working with vendors to improve their offerings, and to learn about their new stuff<br /></li><li>getting grants and working on their own research projects to study how
people use information, presenting to other librarians</li><li>management, HR, strategic planning, development</li><li>committees, lots of committees!<br /></li></ul><br />That stuff is university libraries - my job differs a bit because we all do quite a few things that would be handled by 5 different people at a big library. Also, I'm "embedded" and I do in-depth literature searching, and I'm involved in enterprise-wide initiatives regarding collaboration, enterprise search, and knowledge sharing.<br /><br />Embedded means I'm actually part of the team. There might be a chemist, a mechanical engineer, a mathematician, and me. Whenever something comes up that requires finding or organizing or presenting information, I take the lead. In-depth literature searching might mean someone presenting a problem and asking me to compile, organize, and sort of summarize the literature in that area. They get the annotated bibliography I provide, and then see what they want in full text, I fork that over, and then they make the world a better place. I provide value because I'm an expert searcher and I understand a lot about the context of the organization and our sponsors. The scientists are so busy that anything they can offload to me helps. Once I grok what they need, I'm more efficient at finding things, too. And I charge my time back to the sponsor.<br /><br />So, if you're a librarian, please fix what I screwed up (or, oh dear, tell me what I missed)... if you are a library user (or SHOULD be but aren't), tell me what more you need to know.<br /><br />Update: I forgot ILL! Holy cow... added aboveChristinahttp://www.blogger.com/profile/12104847732663970352noreply@blogger.com6tag:blogger.com,1999:blog-6474147.post-81740728507500081392009-03-22T11:15:00.000-04:002009-03-25T22:37:27.470-04:00Comps readings this week(no readings last week due to family emergency, readings will probably be light again this week)<br /><br />Leckie, G. J., Pettigrew, K. E., & Sylvain, C. (1996).
Modeling the Information Seeking of Professionals: A General Model Derived from Research on Engineers, Health Care Professionals, and Lawyers. Library Quarterly, 66(2), 161-193.<br />Well-written, concise reviews of studies of professionals' (engineers, health care providers, and lawyers) information behaviors. Professionals are defined as those providing a service, with a heavy-duty theoretical knowledge base, extensive post-secondary education, etc. Does not include scholars or scientists (produce knowledge vs. provide services). Not sure how frequently people use their model, but it looks good.<br /><br />Constant, D., Sproull, L., & Kiesler, S. (1996). The Kindness of Strangers: The Usefulness of Electronic Weak Ties for Technical Advice. Organization Science, 7(2), 119-135.<br />Compare to Wasko & Faraj (2005) and Hew & Hara (2007) <a href="http://christinaslibraryrant.blogspot.com/2009/02/comps-readings-this-week_15.html">read in a previous week</a>. I need to go through these more carefully and pick out the similarities and differences. (btw, this article doesn't seem dated - sure e-mail is used differently - but still very useful). This research was done in a large multinational computer company. The company had 3 priority settings for e-mail, and one of these was frequently used to ask for help. Responses to requests were often compiled and posted publicly as sort of a knowledge base. The authors wanted to know why responders took the time when they don't know the person asking the question and there can't be any direct reciprocity. Other questions were about the usefulness of the responses, the diversity of the responders, and the motivation of the responders. They sent surveys to question askers and to responders and had the askers rate the responses on usefulness. Weak ties with greater resources (more senior, etc) gave more useful responses. Number of replies didn't help. 
This way of asking and answering questions had been around in this company for a long time, and there was a culture of sharing information this way. Personal motivation was more about the good of the whole organization.<br /><br />Ellis, D. (1993). Modeling the information-seeking patterns of academic researchers: A grounded theory approach. Library Quarterly, 63, 469-486.<br />This is the Ellis on my list, but I'm thinking I probably actually wanted another one (maybe his JoIS from the same time?). He gives an overview of how grounded theory works and why that was important for looking at information seeking - something that had previously been studied using structured questionnaires and quantitative methods. He then compares his findings from his dissertation (a massive effort with interviews with 48 social scientists) with similar studies of physicists (18), chemists (18), and English lit researchers (10). All of these used grounded theory, so the terms differ somewhat, but all basically found these characteristics:<ul><li>starting (finding an initial key paper to start with or familiarizing yourself using a reference book - today this means looking at wikipedia ;) )</li><li>chaining - citation chasing</li><li>browsing</li><li>extracting</li><li>differentiating (between sources based on their editor, specialty, other characteristics)</li><li>monitoring</li><li>some had an ending or dissemination (like in Kuhlthau) as well as a verification stage<br /></li></ul>Ellis, D., & Haugan, M. (1997). Modeling the information seeking patterns of engineers and research scientists in an industrial environment. Journal of Documentation, 53, 384-403.<br />This is pretty cool - I'd forgotten some of the details of it. They did a ton of interviews with scientists and engineers at a large (14k employees) oil company in Norway. They broke out the results by type of project - incremental, radical, fundamental - as well as by project stage (pulled from some project management handbook or other).
These are fairly similar to the above, but with a category of surveying instead of starting, distinguishing instead of differentiating, added filtering, and added ending. For incremental projects, they talked to people first, then used their own files, then the library. For radical projects, they used their own files first, then other people, then the library. For fundamental projects, they used the library resources first (like lit searching in an online database), then their own experience/files - they didn't really know who to talk to.<br />When I ever finish (sigh) the literature review for the massive JHU libraries project we did, I'm definitely including both of these pieces by Ellis. (I call my piece of the world industrial for the most part, incidentally.)<br /><br />Kling, R., & McKim, G. (2000). Not just a matter of time: Field differences and the shaping of electronic media in supporting scientific communication. Journal of the American Society for Information Science, 51, 1306-1320.<br />This quote is classic:<blockquote><span style="font-size:85%;">However, in the absence of a valid theory of how scholarly fields adopt and shape technology, scientists and policy makers are left only with context-free models, and hence, resources may be committed to projects that are not self sustainable, that wither, and that do not effectively improve the scientific communications system of the field. The consequences may not only be suboptimal use of financial resources, but also wasted effort on the part of individual researchers, and even data that languishes in marginal, decaying, and dead systems and formats. (p.1307)<br /></span></blockquote>The more things change, the more they stay the same.
I don't remember any discussion of Electronic Transactions on Artificial Intelligence (ETAI)'s experimentation with open peer review in the more recent discussions of the Nature and Atmospheric Physics experiments.<br /><br />Anyway, their point is that a lot of the discussions of how scientists use electronic media assume that scientists will converge on using the same tools in the same way - that it's an "inescapable imperative." The authors argue that differences in how fields communicate shape how and what they will use - social shaping of technology views are needed instead of relying only on information processing views. Examples at the time of writing include things like arXiv, which physicists use, PDB for molecular biology, etc. There's a big difference between online representations of print processes and products vs. creating a new thing that takes advantage of unique features of the online environment. (Librarians have shied away from using the term "electronic journals" without modifiers because in some areas of research there has been a misunderstanding that online means less thorough peer review, rather than just a copy in another place.)<br /><br />The authors propose bases for field differences: trust and allocation of credit, research project costs, mutual visibility of on-going work, industrial integration, concentration of communication channels... In this part of the discussion, I think some other authors have stated these things more clearly, but it's still a useful article.<br /><br />Fidel, R., & Green, M. (2004). The many faces of accessibility: Engineers' perception of information sources. Information Processing & Management, 40, 563-581.<br />It's good I'm re-reading these things. I read and cited this article in my study of the personal information management of engineers... but re-reading makes salient points that weren't important to me at the time. 
At work right now we're really trying to make some headway in knowledge sharing, and one of our efforts is to improve finding experts. This study was originally about the sources engineers use to find information, but what came out of it was how complicated the notion of "accessibility" is. Lots of studies of engineers have found that they'll choose accessibility over quality, but then the studies don't really say what accessible means. In library terms we talk about physical access vs. intellectual access. The authors here look at a sort-of psychological version - ease of use - along with availability, physical proximity, familiarity, right format, gathering a lot of info in one place (or being efficient, or saving time)... The authors compare what the engineers said about documentary sources with what they said about people as sources.... Anyway, this is pretty interesting. There's a call at the end for more research on finding people and on supporting engineers finding people, but the 29 citations (28 + my citation) in Scopus don't seem to address that much.Christinahttp://www.blogger.com/profile/12104847732663970352noreply@blogger.com0tag:blogger.com,1999:blog-6474147.post-75387348505745873152009-03-08T14:23:00.000-04:002009-03-08T14:23:45.465-04:00Comps readings this weekHara, N., Solomon, P., Kim, S., & Sonnenwald, D.H. (2003). An emerging view of scientific collaboration: Scientists’ perspectives on factors that impact collaboration. <span style="font-style: italic;">Journal of the American Society for Information Science and Technology</span>, 54, 952-965.<br />They start by saying that "scientific collaboration may be different from other varieties of collaboration as it is shaped by social norms of practice, the structure of knowledge, and the technological infrastructure of the scientific discipline" (p. 952). Seems like all professional collaborations (or even hobby ones) are shaped by social norms and the structure of knowledge... hm. 
This paper isn't as good as I remembered, but I think the problems stem from the lack of a clear conceptual framework to guide them going in, and a sort of rambling presentation of the results... (sounds familiar)<br /><br />(I'm now going to try to plow through all of the readings that happen to be stored in my binder from my 601 class on Information Use - taught by Dr. Barlow in the spring of 2001.)<br />Allen, B. (1996). An introduction to user-centered information-system design. Information tasks: Toward a user-centered approach to information systems (pp. 24-51). San Diego: Academic Press.<br />This is really an excellent reading. His ARIST article from 1991 is great, too. He has five components in his model:<br /><ol><li>needs analysis</li><li>task analysis</li><li>resource analysis</li><li>user modeling</li><li>designing for usability</li></ol>Do note that this emphasizes the problem-solving approach, which is just one reason people use information systems. Oh - resource analysis isn't what you might think (from recent readings I was expecting information objects in the system) - it's a person's individual and social knowledge and abilities. Certainly the model would be incomplete without that, but the name is a bit misleading - and how's an information system to work if there's no consideration of matching users and user input with representations of the system's holdings? (Oh, grr... the course pack copy didn't photocopy all of the references.)<br /><br />Davenport, T. H., & Prusak, L. (1997). Information behavior and culture. Information ecology: Mastering the information and knowledge environment (pp. 83-107). New York: Oxford University Press.<br />Even though many management books quickly become irrelevant, this one still speaks to me. They talk about the value of information and information sharing in organizations, and about information behavior (sharing, hoarding, organizing) within organizations. 
They also cite (but of course the course pack didn't include the citations - argh!) lots of different studies showing how organizations that do better with information are more productive and successful. The other chapter in my course pack - not read because it's not on my list - talks about the role of corporate libraries. In this section, too, they mention briefly what a big mistake it is to undervalue the library by cutting its budget and minimizing the contributions of librarians. Sigh.<br /><br />Dervin, B. (1992). From the mind’s eye of the user: The sense-making qualitative-quantitative methodology. In J. Glazier & R. R. Powell (Eds.), Qualitative research in information management (pp. 61-84). Englewood, CO: Libraries Unlimited.<br />She is really talking about a method, a methodology, a theory, and a paradigm here. If you approach certain problems by looking at the discontinuities and the "helps" that enabled people to bridge the gaps, you can really get some good information about information behavior and systems.<br /><br />Kuhlthau, C. C. (1991). Inside the search process: Information seeking from the user's perspective. Journal of the American Society for Information Science, 42, 361-<br />I would be surprised if anyone who bothers to read my blog isn't familiar with this one. Steps in the search process with affective, cognitive, and physical parts...<br /><br />Rogers, E. M., & Kincaid, D. L. (1981). The convergence model of communication and network analysis. In E. M. Rogers & D. L. Kincaid (Eds.), Communication networks: Toward a new paradigm for research (pp. 31-78). New York: Free Press.<br />I like this one, too, because it disses the whole Shannon and Weaver thing (which I successfully kept OFF my list). 
Which reminds me of last year at the global STS grad student conference, listening to someone spout the Shannon and Weaver version of information as the one true path (well, maybe if you're an EE using information-theoretic models for communication systems design). Anyway, the point of communication is to come to mutual understanding. Person A has their psychological reality, person B has theirs, and there's some physical reality. Information flows through all of these and through individual action to get to collective action, mutual agreement, mutual understanding, and then a social reality shared by A and B.<br /><br />Huh, I wonder why it took so long for studies of scientific popularization or public understanding of science to take up the charge. If Schramm, Rogers, Kincaid... all of that happened so long ago, and there seems to be consensus in communications about active audiences and the like... why did it basically take Wynne, Hilgartner, and Myers so long to get their point across? And actually, sometimes the scientists who are trying to do the communicating still don't know about all this (and how to apply it). Hmmm.<br /><br />Taylor, R. S. (1991). Information use environments. In B. Dervin (Ed.), Progress in communication sciences (pp. 217-255). Norwood, NJ: Ablex.<br />I keep getting things that Taylor said and Allen said confused, and this article might be one reason why: Taylor cites T. J. Allen (1977) extensively. Taylor's point is that you can construct useful groupings of users based on their common problem dimensions, settings, and what constitutes resolution of their problems, among other things. This does not look at demographic or other individual variables for, say, engineers, but it could if that's how you're defining your grouping.<br /><br />Williamson, K. (1998). Discovered by chance: The role of incidental information acquisition in an ecological model of information use. 
Library & Information Science Research, 20, 23-40.<br />I pulled this out again a couple of years ago when another student basically said no work had ever been done on older adults (big sigh). This article is a spin-off of her dissertation. She studied how older adults (in Australia) encounter information through telephone calls and through monitoring the media. This is information that meets a need - whether specifically identified in advance or not - and that wasn't purposefully sought. Made me think of possible research on social bookmarking - not in the school of "how do people assign categories" but more along the personal information management line... but no time.<br /><br />Wilson, T. D. (1997). Information behaviour: An interdisciplinary perspective. Information Processing & Management, 33(4), 551-572.<br />It seems like everybody in the social sciences somehow studies information seeking behavior. This article reports some of Wilson's work looking outside of information science. He emphasizes psychology and sociology articles. Good stuff here.<br /><br />From my 650 (Reference, aka information access) binder - I thought more of this binder was on my list<br /><br />Barry, C. L., & Schamber, L. (1998). Users' criteria for relevance evaluation: A cross-situational comparison. Information Processing & Management, 34(2/3), 219-236.<br />In this article they compare their previous work eliciting users' relevance criteria to find overlaps and unique items. There was a lot in common, and the differences were mostly due to the differences in the user groups and the information they were seeking.<br /><br />And then this one, because it went with the others in this group and was short:<br /><br />Belkin, N. J. (1980). Anomalous states of knowledge as a basis for information retrieval. 
Canadian Journal of Information Science, 5, 133-143.<br />Builds on my favorite Taylor and the like, and makes suggestions for the design and evaluation of information retrieval systems.<br /><br />EDIT: changed posting date - sorry!Christinahttp://www.blogger.com/profile/12104847732663970352noreply@blogger.com0tag:blogger.com,1999:blog-6474147.post-55802997640349762662009-03-02T13:51:00.002-05:002009-03-03T23:16:18.089-05:00Comps reading - Little Science, Big Science... and BeyondThis one deserves its own post.<br /><br />This is one of <span style="font-style: italic;">the</span> books from a father of scientometrics and a great in the early days of STS. This re-issue has a foreword by Eugene Garfield and Robert K. Merton, which tells you something!<br /><br />Price, D.J.d. (1986). Little science, big science…and beyond. New York: Columbia University Press.<br />(I never knew how to refer to this author - some refer to him as de Solla Price and others as Price. Garfield makes a joke about this in his piece at the end; it's Price.)<br />This book is largely about modeling the shape, size, and distribution of science. How many scientists should a country have, and how many should be in the top most productive group? How many scientists are cited every year, and how many have a single paper and then never publish again? More authors/co-authors mean more papers and are correlated with bigger grants. What does the distribution of citations look like (why can you cover 75% of the cited articles with only 7% of the articles written)? Will science continue to grow exponentially, or follow more of a logistic curve and flatten off?<br /><br />Some of this stuff is pretty cool and timeless, but some of it makes me uncomfortable. 
It's cool to use some of these guesstimate approximations based on years of ISI's data, but it says nothing about individuals or disciplines or any individual attributes (which he freely admits).<br /><br />One essay that's less frequently discussed is the nice one on Sealing Wax and String - about the importance of technology to science and how sometimes technology leads science instead of lagging behind it. Also about the importance of experimentalists and technicians, who are sometimes completely omitted in romantic discussions of scientific inventions and the emphasis on the scientific method (sometimes it's "holy cow, how did that happen? better come up with an explanation" instead of theory, hypothesis, experiment, rinse, repeat).<br /><br />Useful quotes:<br />In chapter 7, Measuring the Size of Science, he makes the case for scientometrics - an econometric-type view of science. Likewise he makes the case for not leaving the study of science up to the scientists:<br /><blockquote><span style="font-size:85%;">It is the business of sociologists to be knowledgeable about things that are important to society, and it is not necessarily the business, nor does it even lie within the competence, of natural scientists to turn the tools of their trade upon themselves or to act as their own guinea pigs (p. 136)</span></blockquote><br />This is interesting, though, because Price was a reformed physicist (as was Kuhn) who got a second PhD in history.<br /><br />from chapter 8<br /><blockquote><span style="font-size:85%;">Technical librarianship involves much more than librarianship applied to books with an esoteric vocabulary and much mathematics. 
It is somewhat like the dilemma of the man who tried to write a book on Chinese medicine by first reading one on China and then another on medicine and then "combining his knowledge"</span></blockquote>(But this statement doesn't really have much to do with the remainder of the chapter, which provides an overview of citation patterns - from chapter 5: ephemeral vs. classic references, immediacy, research fronts, aging of the literature, etc.) There is a typical number of references per paper: too far above it and you're looking at a review paper; too far below it and you're looking at an <span style="font-style: italic;">ex cathedra</span> pronouncement. Another main point is that you can tell the humanities, social sciences, and natural sciences apart by the percentage of citations to articles younger than 5 years. The natural sciences might be as high as 75%, whereas something like history of the Civil War might be more like 8%.Christinahttp://www.blogger.com/profile/12104847732663970352noreply@blogger.com0tag:blogger.com,1999:blog-6474147.post-1538920605073753022009-03-02T08:24:00.000-05:002009-03-02T08:26:40.821-05:00Comps readings this weekI'm now back into re-reading, so notes will be shorter (and I'm hoping to pick up speed). In case anyone is keeping track, we're now looking at early May for the exam instead of March.<br /><br />Fontana, A., & Frey, J.H. (2003). The interview: From structured questions to negotiated text. In Denzin, N. K., & Lincoln, Y. S. (Eds.). Collecting and interpreting qualitative materials (2nd ed., pp. 61-106). Thousand Oaks, CA: Sage.<br />Traces, to a certain extent, the changes from a very structured, essentially oral version of a survey to more recent versions with more or less guided conversations, in which meaning is constructed between the interviewer and interviewee.<br /><br />Angrosino, M.V., & Mays de Pérez, K.A. (2003). Rethinking observation: From method to context. In Denzin, N. K., & Lincoln, Y. S. (Eds.). 
Collecting and interpreting qualitative materials (2nd ed., pp. 107-154). Thousand Oaks, CA: Sage.<br />This is one of the few readings I have that really goes back to the early cultural anthropology version of the observer and ideas of objectivity. It also talks a little about danger in the field (emotional as well as physical - for more on this I highly recommend the <a href="http://www.worldcat.org/oclc/49569884">book with that title</a>) and about being a participant-observer or active participant. One thing that strikes me now is the idea that the researcher sometimes creates community. I won't say that I did this, but I did hear from a participant that our conversation had renewed their interest in blogging and engagement in the blogging community. Something they talk frankly about, but which doesn't come up in my work, is physical relationships between researcher and participant (like <a href="http://www.worldcat.org/oclc/49305916">Wolcott</a>, but that just seemed wrong because of power issues) and how gender matters (expected behavior from either party depending on the society). They also talk about the ethical and practical issues of revealing oneself or not, or of assuming a sort-of crafted social identity while in the field. Campaigning on your personal agenda isn't always wise when you're trying to learn about other people (apparently this isn't so obvious to some researchers).<br /><br />When reading these things I come back to how much to reveal (and practically HOW to reveal) my participant-observer status in the science blogosphere. I'm not a scientist, but I interact with scientists and I follow and comment on science blogs - this is <span style="font-style: italic;">a good thing</span> and I think it helps me to insights, but I'd like to convey that meaningfully in my writing, and I don't know how. I want to write a methods paper to disagree with some of what's in [Kazmer, M. M., & Xie, B. (2008). 
Qualitative interviewing in Internet studies: Playing with the media, playing with the method. Information, Communication and Society, 11(2), 115-136] regarding "bias" and communication online during the research process. If some of my results come via my participation and not my interviews or content analysis, then that needs to be traceable. The authors caution the reader not to substitute technologically recorded evidence for lived experience - not to miss the whole by concentrating on the particular as recorded on tape.<br /><br />This chapter is probably most useful in its frank discussion of ethics and IRBs - particularly when the IRB is trying to make all research into psychology experiments with hypothesis testing in controlled environments.<br /><br />Ryan, G.W., & Bernard, H.R. (2003). Data management and analysis methods. In Denzin, N. K., & Lincoln, Y. S. (Eds.). Collecting and interpreting qualitative materials (2nd ed., pp. 259-309). Thousand Oaks, CA: Sage.<br />Everything is text. The linguistic tradition (narrative analysis, conversation or discourse analysis, performance analysis, formal linguistic analysis) vs. the sociological tradition - text as a window into human experience. Texts can be systematically elicited or free-flowing. For elicitation there are techniques like free listing and card sorting and... hm, I need to remember to go back to Wasserman and Faust and look at their discussion for SNA... Anyway, this is a quick review that covers some things not found in some of the other readings, which only cover coding.<br /><br />...<br />Read most of chapter 2 of Baeza-Yates, R., & Ribeiro-Neto, B. (1999). <i>Modern information retrieval</i>. New York: ACM Press.<br />Wanted to see if it made any more sense to me than the chapters I put on my list - and yes, they're much clearer than Manning, C. D., Raghavan, P., & Schutze, H. (2008), but I still need to go back and hit the link analysis reading, sigh.<br /><br />Read the first 3 chapters of Little Science, Big Science... and Beyond. More on that in next week's roundup.<br />Christinahttp://www.blogger.com/profile/12104847732663970352noreply@blogger.com0tag:blogger.com,1999:blog-6474147.post-55278206588515618212009-02-22T14:26:00.001-05:002009-02-22T17:43:57.139-05:00Comps readings this weekHess, D. J. (1997). Critical and cultural studies of science and technology (pp. 112-147). Science studies: An advanced introduction. New York: New York University Press.<br />Not useful in the sense that I'll run out and apply things, but useful as an overview and discussion. It's really more conversational. I'm not at all sure that this title fits the content. His version of culture is definitely different from standpoint theory or critical race theory or the like... he's really talking about a continuation of STS after SSK and into more recent times. It hits the gender/sex thing and public understanding of science... as well as mentioning ethnographic studies like Traweek's and dropping some famous names.<br /><br />Bishop, A. P. (1999). Document structure and digital libraries: How researchers mobilize information in journal articles. Information Processing & Management, 35(3), 255-279. doi:10.1016/S0306-4573(98)00061-2<br /><blockquote>...current manner in which the content of articles is constrained to a traditional, linear structure is an artifact of both the technology of printing and accepted beliefs about the scientific method that prevailed in the seventeenth century. Kircz [1998] argues that the electronic environment, where storage and presentation are no longer integrated, is hospitable to shattering the linear structure of the article so that research reporting more clearly serves the needs of research readers (p. 217):<br /><blockquote><span style="font-size:85%;">... 
we have reached the stage where comprehensive communication no longer needs a linear build-up of a single document. A complete set of modules, each being in themselves (small) texts emphasizing aspects of the message that together establish a complete message from the author to reader, is the next natural step in scientific communication.</span></blockquote></blockquote><br />Modules - that sounds like blog posts :) This article gathers together findings from a whole bunch of different studies of how people use various parts and pieces of journal articles. It's hard to separate issues with the content and interface from the differences they're looking for, but she does describe various uses of the metadata, headings, tables, and images to get an overview/orientation, to judge relevance, and maybe even as a substitute for reading the article. Unfortunately, the users really didn't take to their interface, so there weren't demonstrated implications for interface design.<br /><br />Thelwall, M. (2007). Blog searching: The first general-purpose source of retrospective public opinion in the social sciences? Online Information Review, 31(3), 277-289.<br />Sort of a general overview of, and some approaches to, searching blogs for social science research using commercial blog search tools, both subscription services and free ones like Google Blog Search. As he points out, blogs are one of the few sources of retrospective or historical opinion - information about things as they were happening. Nothing new to see here, but I had to include some blog search articles because my committee doesn't recognize my <a href="http://dlist.sir.arizona.edu/1731/">street</a> <a href="http://www.infotoday.com/cil2006/presentations/A104_Pikas.pdf">cred</a> in that regard.<br /><br /><br />Mishne, G., & de Rijke, M. (2006). A study of blog search. Advances in information retrieval (LNCS 3936) (pp. 289-301). New York: Springer. 
DOI: 10.1007/11735106_26<br />More evidence that people treat Google-style search boxes differently based on where they are, what they need, and what they expect to find in the collection (see the discussion of Wolfram's article last week - yes, it's obvious). The authors took the transaction logs from Blogdigger for May 2005 (hey, Blogdigger's still around, that's a surprise). They categorize the queries into filtering and ad hoc; filtering is setting up an alert. 81% of all queries were filtering, but once you look at unique queries only, filtering queries are just 30%. Terms per query is pretty much the same as regular web search, but filtering queries are much shorter (unique ad hoc queries average 2.71 terms/query, filtering 1.98/query). They tried to make the standard categorization work - informational, navigational, transactional - but it doesn't really fit these queries, so they looked at 1000 random queries, half of each type. They come up with context queries, which answer the question "in what context does this thing appear?", and concept queries, which locate blogs on a topic. They go on to look at percentages and then popular queries. People really were looking for different things on the blog search engine than they did on regular search engines. For one thing, they search for news and named entities a lot more. So this was pretty interesting.<br /><br /><br />Leydesdorff, L., & Vaughan, L. (2006). Co-occurrence matrices and their applications in information science: Extending ACA to the web environment. Journal of the American Society for Information Science and Technology, 57(12), 1616-1628. DOI: 10.1002/asi.20335<br />We join this debate currently in progress... So there's a pile of papers about which similarity measures to use (Jaccard, cosine, Pearson...) and how to go about it for co-citation or author co-citation (ACA) work. This paper is in that family, but different. 
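To see what these co-citation similarity computations operate on: starting from a citing-by-cited occurrence matrix, you can count co-citations by multiplying the matrix by its transpose, or compute a similarity measure such as cosine between the cited items' citation profiles. A minimal sketch in Python; the matrix and values are invented for illustration, not data from the paper:

```python
import math

# Hypothetical occurrence matrix (made up for illustration): rows are
# citing papers, columns are cited authors, 1 = "this paper cites this author".
M = [
    [1, 1, 0],
    [1, 1, 1],
    [0, 1, 1],
    [1, 0, 1],
]

def cooccurrence(m):
    # Multiply the asymmetrical matrix by its transpose: C = M^T M.
    # C[i][j] counts the citing papers that cite both i and j;
    # the diagonal holds each item's total citation count.
    cols = list(zip(*m))
    return [[sum(a * b for a, b in zip(ci, cj)) for cj in cols] for ci in cols]

def cosine(m):
    # Cosine similarity between cited items' citation profiles (columns).
    cols = list(zip(*m))
    norms = [math.sqrt(sum(x * x for x in c)) for c in cols]
    return [[sum(a * b for a, b in zip(ci, cj)) / (ni * nj)
             for cj, nj in zip(cols, norms)]
            for ci, ni in zip(cols, norms)]
```

With this toy matrix, authors 0 and 1 are cited together by two papers, so `cooccurrence(M)[0][1]` is 2, and since each author is cited three times overall, their cosine similarity works out to 2/3.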
The authors discuss the difference between using the asymmetrical matrices (the columns are cited papers or authors, the rows are citing papers or authors) and working from those, vs. using a symmetrized version - cited x cited, with each cell being the number of times the two co-occur. A proximity measure shows the co-occurrences and can be entered directly into multidimensional scaling algorithms. You can get from the asymmetrical matrix to the proximity one by computing a correlation (Pearson for metric similarities, Spearman for ranks...) or by multiplying the matrix by its transpose. Farther on in the paper they basically say that you can ditch MDS and just use Pajek to lay out the network... So I guess I'm a little confused. I get how you aren't supposed to do the correlation after you've symmetrized, but I don't know why you'd do the correlation vs. multiplying by the transpose. Maybe a computing power issue depending on the size of the network? I guess if you want to know the correlation coefficient for some other reason...<br /><br />At this point, I think I've now read all of the things I hadn't read before, and I'm back to reading things I've read at some point. I'm concentrating on things that I don't remember too well, or read a long time ago (I started library school in 2000!), and that are more important. Of course, reading things now is different from reading them when I was coming from a science undergrad and work experience outside of libraries - you never dip your foot into the same river twice. I'm going to try to pay more attention to tying things together, too, to get ready for the actual exam.<br /><br /><br />Pettigrew, K. E., Fidel, R., & Bruce, H. (2001). Conceptual frameworks in information behavior. In M. E. Williams (Ed.), Annual Review of Information Science and Technology (ARIST) 35 (pp. 43-78). 
Medford, NJ: Information Today.<br />Ignoring the little defensive bit about how we are much more theoretically inclined than we used to be and how information behavior research really does use theory (not so much)... I think the most valuable aspect of this article is how it compares and contrasts the cognitive view of information behavior with the social view. The cognitive view was really hot in the late '80s and '90s, somewhat in response to Dervin and Nilan's call for it. This work - like Kuhlthau's model - addresses how individuals perceive information needs, seek information, and use information based on their knowledge structures and other aspects, in context but separate from other people. The social view came slightly later, and deals with the impact that a person's place in a social network, or the relationships they have, has on their perception of information need, their decision to seek information, the sources from which they can seek information, and how they use information. Interesting, and frequently overlooked, is the research on decisions not to seek or attend to information that might be useful, based on social or other factors (blunting in medical contexts, or Chatman's findings about self-protective behaviors and world views). They spend less time talking about organizational approaches - and actually, I didn't realize I'd read about these before I read some of the structuration stuff a few weeks ago.<br /><br />Of course now, with fMRI and such, we are trying to get into the black box of cognitive processes, so to psychological, social, organizational, and other levels of analysis we also add chemical/electrical/biological/physiological - but information scientists aren't using these much yet.<br /><br />Schramm, W. (1971). The nature of communication between humans. In W. Schramm & D. F. Roberts (Eds.), The process and effects of mass communication (Revised ed., pp. 3-53). 
Urbana, IL: University of Illinois Press.<br />Good stuff here. He discusses the view of communications in the 1950s - the "Bullet Theory of Communication" in which audiences were passive and you just had to target them to change their minds and motivate action. Funny how this still comes up sometimes, people still don't get it.Christinahttp://www.blogger.com/profile/12104847732663970352noreply@blogger.com0tag:blogger.com,1999:blog-6474147.post-9124944061067182832009-02-15T11:21:00.000-05:002009-02-15T19:58:41.975-05:00Comps readings this weekFinished Shapiro. Well, to be honest, skimmed the last few chapters. Also to be fair - he's entirely against ruling by poll. I indicated the opposite last week after reading the first few chapters. It's still dated so not really recommended.<br /><br />Wolek, F. W., & Griffith, B. C. (1974). Policy and informal communications in applied science and technology. <span style="font-style: italic;">Science Studies</span>, 4(4), 411-420. DOI: 10.1177/030631277400400406<br />'60s research on communication in science showed that progress in science and technology was reliant on informal communication, but it's harder for institutions to encourage/support informal communication with policy. Formal and informal communication are interrelated and both must work together.<br /><br />Kelly, D., & Fu, X. (2007). Eliciting better information need descriptions from users of information search systems. Information Processing & Management, 43(1), 30-46. 
DOI: 10.1016/j.ipm.2006.03.006<br /><a href="http://christinaslibraryrant.blogspot.com/2009/02/comps-reading-interactive-ir-pandora.html">See separate post.</a><br /><br />Wasko, M. M., & Faraj, S. (2005). Why should I share? Examining social capital and knowledge contribution in electronic networks of practice. <span style="font-style: italic;">MIS Quarterly</span>, 29, 35-57.<br />Well-described statistical methods make me happy. You can count on the MIS literature to do the stats carefully, unlike some of the other stuff I read. Now to the content. They describe electronic networks of practice as being like communities of practice, but all computer mediated, geographically distributed, self-organizing, voluntary, and with little f2f interaction. For me, an example is PAMnet or, even better, CHM-INF. Organizations benefit from these even when members are from competitors, because the knowledge doesn't necessarily exist within the org, particularly in areas with a high rate of technological change.<br /><br />The authors wanted to know why individuals contribute when some of the standard theories of social capital and knowledge sharing don't translate to electronic networks. 
They came up with a series of hypotheses related to social exchange theory looking at individual motivations (reputation, wanting to help people), structural capital (centrality - relationships in the network), cognitive capital (expertise, tenure in the field), and relational capital (commitment to the network and perceptions of reciprocity).<br /><br />They looked at a bulletin board system for an association of lawyers. They looked at centrality and did content analysis to find the questions and responses, grading each response from not helpful to very helpful on a 4-point scale (an author and a subj matter expert, kappa .84). Each person got an average helpfulness and a number of responses. They then sent out a survey to each of the responders, using questions pulled from previous studies. They matched these responses with the membership directory to get some demographics. They ran all kinds of tests to show their measures were usable. They did a partial least squares regression - well, two: one for volume of responses and another for helpfulness.<br /><br />Their answers: perception that participation enhances professional reputation is the biggest predictor of knowledge contribution. There's weak evidence that people who like helping provide more helpful results. Reciprocity and commitment didn't do anything, interestingly.<br /><br />Hew, K. F., & Hara, N. (2007). Knowledge sharing in online environments: A qualitative case study. <span style="font-style: italic;">Journal of the American Society for Information Science and Technology</span>, 58(14), 2310-2324.
DOI:<a href="http://findit.library.jhu.edu/resolve?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&__char_set=utf8&rft_id=info:doi/10.1002/asi.20698&rfr_id=info:sid/libx&rft.genre=article" title="Search Find It @ JH Libraries for DOI 10.1002/asi.20698: "Knowledge sharing in online environments: A qualitative case study", Hew, Khe Foon; Journal of the American Society for Information Science and Technology 58(14):2310- (2007)" class="libx-autolink" style="border-bottom: 1px dotted;">10.1002/asi.20698</a><br />This follows on from Wasko & Faraj (both the above and their earlier one). It is a qualitative study of 3 electronic networks of practice: mailing lists for nurses, university web developers, and literacy educators. They were trying to figure out what types of knowledge were shared as well as barriers and motivators toward sharing. It really is enlightening to compare an MIS article with an info sci article: MIS clearly derives measures from theory while info sci cites articles but doesn't clearly derive measures/categories from theory. This article also gave a lot of information on the method, so that makes me happy (in case they care!). The authors combined qualitative content analysis of a pile of postings with semi-structured interviews (57 participants over 14 months, wow). Biggest motivators: collectivism and reciprocity (compare to the lawyers above). Biggest barriers: no additional knowledge, technology (this is more as it was inconvenient or people forgot, not that they couldn't get the e-mails to fly), lack of time, unfamiliarity with the subject being discussed. Categories of knowledge: nurses - mostly institutional practice (this is how we do it at mpow), next biggest personal opinion; web dev - split pretty equally three ways among institutional practice, personal suggestion, personal opinion; literacy educators - split pretty evenly between personal suggestion and personal opinion.
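A side note on inter-coder agreement for content analyses like this one: Cohen's kappa discounts the agreement two coders would reach by chance alone, which raw percent agreement ignores. A minimal sketch, with invented labels rather than data from the study:

```python
# Hedged sketch: percent agreement vs. Cohen's kappa for two coders.
# The labels below are invented for illustration only.
from collections import Counter

def percent_agreement(a, b):
    """Fraction of items the two coders labeled identically."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for what two independent coders would
    reach by chance, given their observed label frequencies."""
    n = len(a)
    p_obs = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    # Chance agreement: both coders pick a label at their marginal rates.
    p_chance = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (p_obs - p_chance) / (1 - p_chance)

# Ten hypothetical postings, mostly coded with the majority label:
coder1 = ["practice"] * 8 + ["opinion"] * 2
coder2 = ["practice"] * 7 + ["opinion", "opinion", "practice"]
# 80% raw agreement, but kappa is only 0.375 once the 0.68
# chance agreement is removed.
```

When coders negotiate every disagreement to consensus, as here, the distinction matters less; it matters most when they code independently.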
This is one place in which I would have liked to see them repeat the survey used in Wasko & Faraj - since it held together pretty well - for direct comparability in terms of social exchange theory-derived motivations. Also, they talked about percent agreement, but not Cohen's kappa, which is probably more useful because it takes into account the agreement that would happen by chance alone. Of course, they negotiated all of their disagreements out, so it's not so important.<br /><br />Van House, N. A. (2002). Trust and epistemic communities in biodiversity data sharing. JCDL '02: Proceedings of the 2nd ACM/IEEE-CS joint conference on Digital libraries, Portland, Oregon, USA. 231-239.<br />Not sure about this - I think this article has a lot of value for how well it explains various aspects of <a href="http://plato.stanford.edu/entries/epistemology-social/">social epistemology</a>. I think I actually get Knorr-Cetina's epistemic cultures more after reading this short piece than reading her whole book. With that said, I'm not 100% sold on the usefulness of this paper (or approach) in designing digital libraries - I want to believe - but I'm not there yet. It's definitely promising. Maybe I'll have to give this another read.Christinahttp://www.blogger.com/profile/12104847732663970352noreply@blogger.com0tag:blogger.com,1999:blog-6474147.post-88177787650956100382009-02-14T12:35:00.002-05:002009-02-14T13:16:51.438-05:00Comps reading, Interactive IR, Pandora for PubMed.... and more!This comps reading brought to you separately, because it is directly relevant to an interesting conversation happening on friendfeed. (what luck!)<br /><br />First, friendfeed.<br /><br />In <a href="http://friendfeed.com/e/dbf3e181-7acf-e685-de82-c6986956e4f8/Could-this-be-the-Science-Social-Networking/">this string</a>, led by <a href="http://synthesis.williamgunn.org/2009/02/13/could-this-be-the-science-social-networking-killer-app/">Mr.
Gunn</a>, we have comments on how new article alerts should take what you already know by looking at a collection you give it (possibly from your bibliographic manager - like EndNote, BibTeX, Refworks), and then suggest others, not based on full content, but based on human-assigned metadata like Pandora. (an important part of pandora, IMHO, is being able to tune it by skipping some - because there are different facets in the metadata, you might want to be related in one facet, and not another... anyhoo...)<br /><br />In <a href="http://friendfeed.com/e/729503f3-54cf-1fc3-267c-d13b5a6a3970/How-do-you-read-papers-Gobbledygook-Martin-Fenner/">this string</a>, based on an <a href="http://network.nature.com/people/mfenner/blog/2008/11/02/how-do-you-read-papers">older blog post by Martin Fenner</a>, but just picked up again by <a href="http://friendfeed.com/pansapiens">Andrew Perry</a> (liked by <a href="http://friendfeed.com/scilib">Richard Akerman</a>), we talk a little more about how people find articles, suggesting filtering by papers you or others read.<br /><br />Now, happy coincidence, a piece of this morning's comps readings.<br /><br />Kelly, D., & Fu, X. (2007). Eliciting better information need descriptions from users of information search systems. Information Processing & Management, 43(1), 30-46. 
DOI: 10.1016/j.ipm.2006.03.006 (can't immediately find a free e-print, but you can at least read the abstract on Science Direct)<br /><br />Given that<br />1) users have a difficult time articulating information needs (think anomalous states of knowledge, Belkin)<br />2) users tend to use really short queries because<br />a) they don't necessarily know what to put in (see 1))<br />and<br />b) the interfaces encourage them to do so<br />3) longer queries usually result in better retrieval performance<br /><br />there is a serious mismatch.<br /><br />This mismatch has been addressed in various studies in a couple of different ways.<br />1) query expansion (for non-IR folks out there, the system adds additional terms to the search)<br />a) automatic - the system expands your search either using a thesaurus or maybe a spell checker or by terms found in top matching results<br />b) interactive - the system asks the users which terms to use and sometimes where to get additional terms.<br />2) polyrepresentation (Ingwersen 1996) - this tries to imitate what a good reference librarian does. This uses multiple representations of the information need including representations of the user's<br />a) prior knowledge<br />b) goals or why the user wants the information<br /><br />As Kelly and Fu say, the idea is that the user has a lot more information about their query than they give to the system. Part of this goes back to Taylor (1968 - of course, <a href="https://ejournals.library.ualberta.ca/index.php/EBLIP/article/view/650/0">I</a> <a href="http://christinaslibraryrant.blogspot.com/2007/08/re-reading-taylor-68-still-favorite.html">always</a> <a href="http://christinaslibraryrant.blogspot.com/2007/05/my-type-of-communication-in-lis.html">go</a> back to Taylor, 1968!) and his 4 levels of information need: visceral, conscious, formalized, compromised.
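To make the automatic flavor of query expansion concrete, here is a rough sketch of pseudo-relevance feedback: assume the top-ranked results are relevant and mine them for extra terms. The corpus, function name, and scoring are all invented for illustration:

```python
# Hedged sketch of pseudo-relevance feedback query expansion.
# Everything here (corpus, names, term weighting) is made up.
from collections import Counter

def expand_query(query_terms, ranked_docs, k=3, n_new=2):
    """Treat the top-k retrieved documents as relevant and add their
    most frequent terms not already in the query."""
    counts = Counter()
    for doc in ranked_docs[:k]:
        counts.update(doc.lower().split())
    new_terms = [t for t, _ in counts.most_common() if t not in query_terms]
    return query_terms + new_terms[:n_new]

# A toy ranked result list (invented):
ranked = [
    "query expansion improves retrieval",
    "query expansion expansion thesaurus",
    "interactive retrieval systems",
    "unrelated cooking recipes",
]
print(expand_query(["query"], ranked))  # ['query', 'expansion', 'retrieval']
```

The interactive variant would instead show `new_terms` to the user and let them pick which to add - closer to the Pandora-style tuning this post keeps coming back to.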
The point is that users have a model of what the IR system can do, and they pose their query accordingly - they use different terms, shorter queries, etc.<br /><br />This article presents part of their work on the TREC 2004 track on High Accuracy Retrieval from Documents. So there's definitely some experimental design weirdness. They get the standard TREC query information and then they come back to the user to ask for q2) what the user already knows q3) why the user wants to know q4) some additional keywords. Consult the paper to read about various issues based on the way TREC works, but the upshot is that adding all three of q2, q3, q4 together was best by far. Q2 was the best single one, pseudo-relevance feedback wasn't so hot at all for these queries and this corpus, and longer queries did much better than shorter ones. (oh, "better" is mean average precision, with binary relevance judgements from the 13 people who wrote the topics/questions).<br /><br />Now we bring it all together.<br />Why not enable users to identify a collection of documents stored in their reference manager as what they already know? Ask the users for what they want to know... then as the alert comes in from week to week, allow the users to tune it like Pandora. The system should also tune itself, based on items saved out of the alert, which become things that the user knows....<br /><br />Full text is one way, but actually, you can just use MeSH, abstracts, and titles...<br /><br />Is anyone already doing this? Why not?Christinahttp://www.blogger.com/profile/12104847732663970352noreply@blogger.com1tag:blogger.com,1999:blog-6474147.post-73412276111325643642009-02-08T12:11:00.001-05:002009-02-08T18:07:58.821-05:00Comps readings this weekFinished<br />Lessig's Code 2.0<br />Good stuff here if he does go a little far with ideas of cyberspace sovereignty and citizenship. With that said, his views are actually pretty balanced, even if they're not frequently represented that way.
In case anyone was keeping track - it's actually 18 chapters, not 15.<br /><br />Lee, S., & Bozeman, B. (2005). The Impact of Research Collaboration on Scientific Productivity. <span style="font-style: italic;">Social Studies of Science</span>, 35(5), 673-702.<br />I'm kind of on the fence about this one. They used a few large surveys of scientists and engineers from all different backgrounds, compared with analysis of their CVs, their journal articles as found in Web of Science, some interviews... talk about triangulation! There's this standard thing that increasing collaboration increases productivity (as measured by peer-reviewed publications in science). As an aside, they cite Lotka (1926), yay! But there are a million potentially confounding factors and interactions from various things like:<br />- researcher status, rank, age<br />- researcher gender<br />- if researcher is a foreign national/non-native speaker<br />- researcher job satisfaction<br />- researcher perception of discrimination<br />- collaboration motives like mentoring/service, quality/social capital, or finding someone with complementary skills<br /><br />So the point of this article is to take this massive pile of data, build a couple of big models, and see which terms are significant. Another thing you need to know is that they look at the straight number of articles, and then they look at a fractional number - each article divided by the number of authors.<br /><br />A problem with this article is that the regression formulas aren't explicitly stated, and it's not OLS, so I'm a bit confused about how they do things. Also, they talk about lots of things that weren't even close to significant, and they accept Cronbach alphas as low as .32! (wow, my prof gave me a hard time at .64; it should be > .70). Probably still sound, though. The end is actually sort of not what you'd expect - for the fractional model, no real correlation once everything is taken into account.
There is a decent correlation for the normal model... Hardly anything was significant - really only the number of grants (and not even the batting average for grants).<br /><br />Watts, D. J. (2004). The new science of networks. <span style="font-style: italic;">Annual Review of Sociology</span>, 30(1), 243-270. DOI:<a href="http://findit.library.jhu.edu/resolve?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&__char_set=utf8&rft_id=info:doi/10.1146/annurev.soc.30.020404.104342&rfr_id=info:sid/libx&rft.genre=article" title="Search Find It @ JH Libraries for DOI 10.1146/annurev.soc.30.020404.104342: "The “New” Science of Networks", Watts, Duncan J.; Annual Review of Sociology 30(1):243- (2004)" class="libx-autolink" style="border-bottom: 1px dotted;">10.1146/annurev.soc.30.020404.104342</a><br />Decent review of the more recent network stuff that's been heavily influenced by advances by physicists and mathematicians as well as widely available computing power. Discusses small world networks, scale-free networks, and a couple different approaches in epidemiology and social contagion. Not precisely what I needed - duh, this guy assumes his readers know the social theory, and is bringing them up to speed on math and applications. Note venue. Of course. I need the social theory, which I'm weak on*, and can pick up additional math as necessary later on. So not the thing for me, but still a decent article.<br /><br /><br />Bohlin, I. (2004). Communication Regimes in Competition: The Current Transition in Scholarly Communication Seen through the Lens of the Sociology of Technology. <span style="font-style: italic;">Social Studies of Science</span>, 34(3), 365-391. 
DOI: <a href="http://findit.library.jhu.edu/resolve?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&__char_set=utf8&rft_id=info:doi/10.1177/0306312704041522&rfr_id=info:sid/libx&rft.genre=article" title="Search Find It @ JH Libraries for DOI 10.1177/0306312704041522: "Communication Regimes in Competition: The Current Transition in Scholarly Communication Seen through the Lens of the Sociology of Technology", Bohlin, I.; Social Studies of Science 34(3):365- (2004)" class="libx-autolink" style="border-bottom: 1px dotted;">10.1177/0306312704041522</a><br />I have to say that the <abbr title="social construction of technology">SCOT</abbr> bit was just thrown in for the editor's sake - there's not the strong hand of theory involved in this. In any case, this article ties together many of the lines of reading I've been doing and makes some interesting and useful points. It's not empirical at all - really sort of a research paper like you'd do in a doctoral seminar. He compares scholarly publishing and its traditional functions with self-archiving in both e-/pre-print servers/repositories and the author's web page. For functions, he goes with quality control, distribution, and archiving (compare to Borgman). He considers priority and allocating credit to be part of those three. He then goes through the development of ArXiv and its predecessors including <span style="font-style: italic;">Preprints in Particles and Fields</span>. In the case of <abbr title="High Energy Physics">HEP</abbr>, in particular, and also other areas represented on ArXiv, the distribution function is much more important than the quality control function, and traditional journals fall down on the speed of distribution. Also, the costs of journals can be prohibitive, so the distribution is limited. These scientists do, however, continue to publish in traditional venues to be competitive for grants and promotions.
Indeed, >70% of articles on ArXiv end up as journal pubs and another 20% end up as conference papers (he cites like 4 studies showing this).<br /><br />Here's an interesting part: why does self-archiving work in some areas and not others?<br />1) existing culture of exchanging pre-prints (like in HEP)<br />2) expectations and policies of journals in the field wrt prior publication. Compare UK biomed journals (BMJ, Lancet) with American (NEJM, JAMA), compare ACS to any sane publisher...<br />3) acceptance rates for journals in the field (ah-ha!)<br />Right, so, I remember in Merton and Zuckerman how they discussed the (at the time) 70% acceptance in physics and 20% acceptance in some areas of the humanities.... there's newer research that in some fields it's closer to 90% acceptance and in others it might be below 20%. The reasons for this vary - agreement between authors and publishers about what constitutes good work, page length, institutional internal review required before submission to the journal (HEP), etc. This makes perfect sense (and I did read the Walsh and Bayma paper that this comes from but didn't connect the two): you don't really want to see something if a) you can't cite it and you might never be able to cite it, b) it will undergo serious revision before it ever sees the light of day, and c) the delay before it's citable might be like 2 years.<br /><br />Another interesting thing that might end up as the focus for a submission to the SSSS conference (if I can get into gear!) is this blurring of the distinctions between informal and formal scholarly communication. If the functions of formal scholarly communication are as mentioned above and they were used to make information seeking more efficient... Let's look at distribution - wider and quicker via self-archiving, but more efficient and more precise using a research database with human indexing.
Putting stuff up on a blog or on twitter or friendfeed or a wiki is much faster - but information retrieval is at best imprecise, what with semantic markup seldom used. I suppose if you are well embedded in the appropriate network - IOW you are "friends" with people with the right interests - this becomes less important... Archiving. We did recently have this discussion on friendfeed. These conversations are definitely more archived and less ephemeral than hallway conversations (which is interesting, too), but are not as stable as the journals - particularly those in CLOCKSS or PORTICO, or whatever. As for quality control: trust is built on the web differently than in formal publications.... hm.<br /><br />Callon, M., Courtial, J. P., & Laville, F. (1991). Co-word analysis as a tool for describing the network of interactions between basic and technological research: The case of polymer chemistry. <span style="font-style: italic;">Scientometrics</span>, 22(1), 155-205. DOI:<a href="http://findit.library.jhu.edu/resolve?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&__char_set=utf8&rft_id=info:doi/10.1007/BF02019280&rfr_id=info:sid/libx&rft.genre=article" title="Search Find It @ JH Libraries for DOI 10.1007/BF02019280: "Co-word analysis as a tool for describing the network of interactions between basic and technological research: The case of polymer chemistry", Callon, M.; Scientometrics 22(1):155- (1991)" class="libx-autolink" style="border-bottom: 1px dotted;">10.1007/BF02019280</a><br />*not* recommended. I think it's sketchy, to be frank. What is there isn't well described, and some of the choices were made due to computing issues (I presume) that no longer exist. The whole basis they use for building part of their collection is *uncited* and not easily findable using WoS or Google Scholar. I might pick up something from H. White, whom I trust in this area, to get this uneasiness out of my system. Goes to show what happens when I pick something vs.
my committee :)<br /><br />Wellman, B., Salaff, J., Dimitrova, D., Garton, L., Gulia, M., & Haythornthwaite, C. (1996). Computer Networks as Social Networks: Collaborative Work, Telework, and Virtual Community. <span style="font-style: italic;">Annual Review of Sociology</span>, 22, 213-238.<br />Note the date, then read this quote:<br /><blockquote><span style="font-size:85%;">The popular media is filled with accounts of life in cyberspace... much like earlier travellers' tales of journeys into exotic unexplored lands. Public discourse is (a) Manichean, seeing <abbr title="their abbreviation: computer supported social networks">CSSNs</abbr> as either thoroughly good or evil, (b) breathlessly present-oriented, writing as if CSSNs had been invented yesterday and not in the 1970s, (c) parochial, assuming that life on-line has no connection to life off-line, and (d) unscholarly, ignoring research into CSSNs as well as a century's research into the nature of community, work, and social organization. (p. 214)<br /></span></blockquote>Does this sound like blog discussions 2004-2007 (and at the colloquium I attended Friday)? Nothin' new under the sun. Anyway - this is a great road map of the research from about 1985 or so to its writing in 1996. I wouldn't recommend this for anyone but the most dedicated (or maybe those with historic interest) as it is completely dated. (The Walsh & Bayma stuff cited is still very relevant, though, as are some of the other works cited.)<br /><br />talking about dated...<br />Shapiro, A. L. (1999). <span style="font-style: italic;">The control revolution: How the internet is putting individuals in charge and changing the world we know</span>. New York: Public Affairs.<br />This is unfortunately <abbr title="overcome by events">OBE</abbr>. I recommend Code v.2, as it covers mostly the same topics and has been updated.
I read the first 8 chapters so far (about a third) and<br />- control is more than countries: it comes from ISPs; other providers like libraries, schools, work<br />- outdated, as evidenced by statements such as "as schools and libraries become wired"<br />- overly glossy, everything-is-beautiful talk like you'd have during the dot com bubble<br />- glorious disintermediation - now we know that there are some things worth paying for: a real broker, a Realtor(tm), a librarian<br />- we now know that running the country, the state, the x by poll doesn't work<br />- it's not a choice of command line green on black vs. glorious windows...<br />He does have an interesting point about over-personalization - but the obvious stuff like customizing e-mails isn't quite as troubling to me as the search results...<br /><br /><span style="font-size:85%;">*(As an aside, it seems like one of the most important functions of a doctoral program in the social sciences is to teach researchers various theories and how to use them as a tool to understand how society works... my program did not do this, but it straddles science, including CS, so that's definitely not done there... my colleagues in the COMM school and in SOCY had like 3 heavy duty "theory" courses, at least... students coming after me in my program have precisely zero, and this is not something that you can really do entirely on your own or with informal mentoring - or at least, it's difficult for me because that's how I'm doing it).</span>Christinahttp://www.blogger.com/profile/12104847732663970352noreply@blogger.com0tag:blogger.com,1999:blog-6474147.post-70743573046452828252009-02-01T21:12:00.002-05:002009-02-01T21:36:25.208-05:00Comps readings this weekThis is 2 weeks, and really quite paltry.<br />You would think that being off sick one day last week would help, but actually I just slept that day... and for my snow day, it took forever to get other stuff done, sigh.<br /><br />Ingwersen, P., & Jarvelin, K. (2005).
Cognitive and user-oriented information retrieval In <span style="font-style: italic;">The turn: Integration of information seeking and retrieval in context</span>. (pp 191-256). Dordrecht: Springer<br />This was really a laundry list of readings. It wasn't too critical or anything, but just traced the development of cognitive and interactive information retrieval. I guess if I were reading this at a different stage, it would have been more helpful, but it seems somewhat redundant now. Part of the problem might have been which chapter I chose to read - others probably develop ideas more fully.<br /><br />Lessig, L. (2006). <span style="font-style: italic;">Code: And other laws of cyberspace. Version 2.0.</span> New York, N.Y.: Basic Books. Retrieved November 9, 2008 from http://pdf.codev2.cc/Lessig-Codev2.pdf<br />This edition has significant updates from the original - in content as well as just statistics and examples. Also, re-reading it at this different time in my life does allow me to make different connections. I think I get now, more than previously, how community architecture impacts behavior and social interaction within a community. Other writers talk about affordances of technology, but this is really about how the code and the policies really enable certain behaviors and discourage others. Some of the examples of what AOL allows could just as easily be aimed at Comcast and other ISPs (here and in other countries) and what they do with throttling torrent connections (according to their response to the FCC they stopped, but my husband thinks differently), for example. I've read elsewhere how code and the design of physical as well as virtual things incorporates values, and that's also here, but somehow Lessig's examples seem more pertinent (and show the connection more clearly) than bridges on Long Island.<br /><br />I like Lessig's discussion of threats to liberty and how they can be embedded into code. 
Recently, I wanted to place a picture taken of me into a slide for a trip report. Unfortunately, the photographer chose the copyright license on Flickr, so the software prevented me from copying or downloading the picture - even though I'm the subject of it and regardless of what use I intended. I also rail against all of the abridgments of our liberty (and fair use) that libraries must agree to in order to access content for our customers.<br /><br />Constraints or regulators can come from the market, architecture, law, or norms.<br /><br />In another part of the book, he talks about perfect filtering. If there were perfect filtering regimes, you would only see those things that support your point of view, and that you wanted to see. Sunstein argues (a la Madison?) that you can't be a well-informed citizen by only being exposed to your own POV. This made me think about Google's customized search results - is there a point at which these things get so good that you will lose serendipity, and even more, not be exposed to other points of view, inequality, unpleasantness, other cultures? Is that Google's job, or is its job to get you the most relevant things in the top 5 hits (and relevance does include authenticity, freshness, grade-level, language, point of view...)? Do libraries/librarians aim for perfect filtering?<br /><br />... still need to read chapters 13-15, but got a little numbed so moved to ...<br /><br />Sharp, H., Rogers, Y., & Preece, J. (2007). Usability Testing and Field Studies. In <span style="font-style: italic;">Interaction design: Beyond human-computer interaction</span> (pp. 645-683). Chichester; Hoboken, NJ: Wiley.<br />Well that was different! It's a textbook aimed at the upper-level undergrad or entry-level grad student. Easy to read and very clear - but not very detailed. I should probably read the rest of the book, too, but time is precious right now.Christinahttp://www.blogger.com/profile/12104847732663970352noreply@blogger.com1