Christina's LIS Rant
Comps readings this week
Hess, D. J. (1997). Critical and cultural studies of science and technology. In Science studies: An advanced introduction (pp. 112-147). New York: New York University Press.
Not useful in the sense that I'll run out and apply things, but useful as an overview and discussion. More conversational, really. I'm not at all sure that the title fits the content. His version of culture is definitely different from standpoint theory or critical race theory or the like... he's really talking about a continuation of STS after SSK and into more recent times. He hits the gender/sex thing and public understanding of science, mentions ethnographic studies like Traweek's, and drops some famous names.
Bishop, A. P. (1999). Document structure and digital libraries: how researchers mobilize information in journal articles. Information Processing & Management, 35(3), 255-279. doi:10.1016/S0306-4573(98)00061-2
...current manner in which the content of articles is constrained to a traditional, linear structure is an artifact of both the technology of printing and accepted beliefs about the scientific method that prevailed in the seventeenth century. Kircz argues that the electronic environment, where storage and presentation are no longer integrated, is hospitable to shattering the linear structure of the article so that research reporting more clearly serves the needs of research readers (p. 217):
... we have reached the stage where comprehensive communication no longer needs a linear build-up of a single document. A complete set of modules, each being in themselves (small) texts emphasizing aspects of the message that together establish a complete message from the author to reader, is the next natural step in scientific communication.
modules - that sounds like blog posts :) This article gathers together findings from a whole bunch of different studies of how people use various parts and pieces of journal articles. It's hard to separate issues with the content and interface from differences that they're looking for but she does describe various uses of the metadata, headings, tables and images, etc. to get an overview/orientation, judge relevance, and maybe in place of reading the article. Unfortunately, the users really didn't take to their interface, so there really weren't demonstrated implications for interface design.
Thelwall, M. (2007). Blog searching: The first general-purpose source of retrospective public opinion in the social sciences? Online Information Review, 31(3), 277-289
Sort of a general overview of, and some approaches to, searching blogs for social science research using commercial blog search tools, both subscription services and things like Google blog search. As he points out, blogs are one of the few sources of retrospective or historical opinion information about things as they were happening. Nothing new to see here, but I had to include some blog search articles because my committee doesn't recognize my street cred in that regard.
Mishne, G., & de Rijke, M. (2006). A study of blog search. Advances in information retrieval (LNCS 3936) (pp. 289-301). New York: Springer. DOI: 10.1007/11735106_26
More evidence that people treat google boxes differently based on where they are, what they need, and what they expect to find in the collection (see discussion of Wolfram's article last week; yes, it's obvious). The authors took the transaction logs from Blogdigger from May 2005 (hey, Blogdigger's still around, that's a surprise). They categorize the queries into filtering and ad-hoc. Filtering is setting up an alert. 81% of all queries were filtering, but once you look at unique queries only, filtering queries are just 30%. Terms per query is pretty much the same as regular web search, but filtering queries are much shorter (for unique ad-hoc, 2.71 terms/query; filtering, 1.98). They tried to make the standard categorization work - informational, navigational, transactional - but it doesn't really fit these queries, so they looked at 1000 random queries, half of each type. They come up with context queries, ones that answer the question "in what context does this thing appear?", and concept queries, which locate blogs on a topic. They go on to look at percentages and then popular queries. People really were looking for different things on the blog search engine than they did on regular search engines. For one thing, they search for news and named entities a lot more. So this was pretty interesting.
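The filtering-vs-ad-hoc comparison is easy to picture as a toy log analysis. This is my own sketch with made-up queries and a made-up log format, not the authors' data or code - it just shows the unique-query step that shifts the 81%/30% numbers, and the terms-per-query computation:

```python
# Toy blog-search query log: (query type, query string).
# The log format and queries are invented for illustration.
from collections import defaultdict

log = [
    ("filtering", "python"),
    ("filtering", "climate change"),
    ("ad-hoc", "london bombing eyewitness blogs"),
    ("ad-hoc", "harry potter"),
    ("filtering", "python"),  # the same alert query repeats week to week
]

# Deduplicate: filtering queries repeat a lot, so unique counts differ sharply.
unique = defaultdict(set)
for qtype, query in log:
    unique[qtype].add(query)

# Average terms per unique query, by type.
for qtype, queries in sorted(unique.items()):
    avg_terms = sum(len(q.split()) for q in queries) / len(queries)
    print(qtype, round(avg_terms, 2))
```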
Leydesdorff, L., & Vaughan, L. (2006). Co-occurrence matrices and their applications in information science: Extending ACA to the web environment. Journal of the American Society for Information Science and Technology, 57(12), 1616-1628. DOI: 10.1002/asi.20335
We join this debate currently in progress... So there's a pile of papers about which similarity measures to use (Jaccard, cosine, Pearson...) and how to go about it for co-citation or author co-citation (ACA) work. This paper is in that family, but different. The authors discuss the difference between using the asymmetrical matrices (where the columns are cited papers or authors and the rows are citing papers or authors) and working from them, vs. using a symmetrized version, cited x cited, where each cell holds the number of times the two co-occur. A proximity measure shows the co-occurrences and can be entered directly into multidimensional scaling algorithms. You can get from the asymmetrical matrix to the proximity one by doing a correlation-type thing (Pearson for metric similarities, Spearman for rank...) or by multiplying the matrix by its transpose. Farther on in the paper they basically say that you can ditch MDS and just use Pajek to lay out the network... So I guess I'm a little confused. I get how you aren't supposed to do the correlation after you've symmetrized, but I don't know why to do correlation vs. multiplying by the transpose. Maybe a computing power issue depending on the size of the network? I guess if you want to know the correlation coefficient for some other reason...
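The two routes from the asymmetrical matrix are compact in NumPy. A toy sketch (my own tiny example, not the paper's data): rows are citing papers, columns are cited authors, and you either multiply by the transpose to get co-occurrence counts or correlate the columns:

```python
import numpy as np

# Asymmetrical matrix A: rows = citing papers, columns = cited authors a, b, c.
A = np.array([
    [1, 1, 0],   # paper 1 cites authors a and b
    [1, 0, 1],   # paper 2 cites authors a and c
    [0, 1, 1],   # paper 3 cites authors b and c
])

# Route 1: symmetrize by multiplying by the transpose.
# C[i, j] = number of papers citing both author i and author j.
C = A.T @ A

# Route 2: correlate the columns (citation profiles) instead,
# e.g. Pearson for metric similarities.
R = np.corrcoef(A, rowvar=False)

print(C)
```

Either C (as a proximity matrix) or R can then be fed to MDS or a network layout tool; which one you feed it is exactly the choice the paper is arguing about.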
At this point, I think I've now read all of the things I hadn't read before, and I'm back to reading things I've read at some point. I'm concentrating on things that I don't remember too well or read a long time ago (I started library school in 2000!), and that are more important. Of course, reading things now is different from reading them when I was coming from a science undergrad and work experience outside of libraries; you never dip your foot into the same river twice. I'm going to try to pay more attention to tying things together, too, to get ready for the actual exam.
Pettigrew, K. E., Fidel, R., & Bruce, H. (2001). Conceptual frameworks in information behavior. In M. E. Williams (Ed.), Annual Review of Information Science and Technology (ARIST) 35 (pp. 43-78). Medford, NJ: Information Today.
Ignoring the little defensive bit about how we are much more theoretically inclined than we used to be and how information behavior research really does use theory (not so much)... I think the most valuable aspect of this article is how it compares and contrasts the cognitive view of information behavior with the social view. The cognitive view was really hot in the late 80s and 90s, somewhat in response to Dervin and Nilan's call for it. This work - like Kuhlthau's model - addresses how individuals perceive information need, seek information, and use information based on their knowledge structures and other aspects, in context but separate from other people. The social view came slightly later, and deals with the impact that a person's place in a social network, or the relationships they have, has on their perception of information need, decision to seek information, sources from which they can seek information, and how they use information. Interesting, and frequently overlooked, is the research in this tradition on decisions not to seek or attend to information that might be useful, based on social or other factors (blunting in medical stuff, or Chatman's findings about self-protective behaviors and world views). They spend less time talking about organizational approaches - and actually, I didn't realize I'd read about these before I read some of the structuration stuff a few weeks ago.
Of course now, with fMRI and such, we are trying to get into the black box of cognitive processes, so to the psychological, social, organizational, and other levels of analysis, we also add the chemical/electrical/biological/physiological - but information scientists aren't using these as much yet.
Schramm, W. (1971). The nature of communication between humans. In W. Schramm & D. F. Roberts (Eds.), The process and effects of mass communication (Revised ed., pp. 3-53). Urbana, IL: University of Illinois Press.
Good stuff here. He discusses the view of communications in the 1950s - the "Bullet Theory of Communication" in which audiences were passive and you just had to target them to change their minds and motivate action. Funny how this still comes up sometimes, people still don't get it.
Comps readings this week
Finished Shapiro. Well, to be honest, skimmed the last few chapters. Also to be fair - he's entirely against ruling by poll. I indicated the opposite last week after reading the first few chapters. It's still dated so not really recommended.
Wolek, F. W., & Griffith, B. C. (1974). Policy and informal communications in applied science and technology. Science Studies, 4(4), 411-420. DOI: 10.1177/030631277400400406
'60s research on communication in science showed that progress in science and technology was reliant on informal communication, but it's harder for institutions to encourage/support informal communication with policy. Formal and informal communication are interrelated and both must work together.
Kelly, D., & Fu, X. (2007). Eliciting better information need descriptions from users of information search systems. Information Processing & Management, 43(1), 30-46. DOI: 10.1016/j.ipm.2006.03.006. See separate post.
Wasko, M. M., & Faraj, S. (2005). Why should I share? Examining social capital and knowledge contribution in electronic networks of practice. MIS Quarterly, 29, 35-57.
Well-described statistical methods make me happy. You can count on the MIS literature to do the stats carefully, unlike some of the other stuff I read. Now to content. They describe electronic networks of practice as like communities of practice, but entirely computer-mediated, geographically distributed, self-organizing, voluntary, and with little f2f interaction. For me, an example is PAMnet or, even better, CHM-INF. Organizations benefit from these, even when members are from competitors, because the knowledge doesn't necessarily exist within the org, particularly in areas with a high rate of technological change.
The authors wanted to know why individuals contribute when some of the standard theories of social capital and knowledge sharing don't translate to electronic networks. They came up with a series of hypotheses related to social exchange theory looking at individual motivations (reputation, wanting to help people), structural capital (centrality - relationships in the network), cognitive capital (expertise, tenure in the field), and relational capital (commitment to the network and perceptions of reciprocity).
They looked at a bulletin board system for an association of lawyers. They measured centrality and did content analysis to find the questions and responses, grading each response from not helpful to very helpful on a 4-point scale (one author and a subject matter expert coded; kappa .84). Each person got an average helpfulness score and a number of responses. They then sent out a survey to each of the responders, using questions pulled from previous studies, and matched the survey responses with the membership directory to get some demographics. All kinds of tests that their measures were usable. They did partial least squares regressions - well, two: one for volume of responses and another for helpfulness.
Their answers: perception that participation enhances professional reputation is the biggest predictor of knowledge contribution. There's weak evidence that people who like helping provide more helpful results. Reciprocity and commitment didn't do anything, interestingly.
Hew, K. F., & Hara, N. (2007). Knowledge sharing in online environments: A qualitative case study. Journal of the American Society for Information Science and Technology, 58(14), 2310-2324. DOI: 10.1002/asi.20698
This follows on from Wasko & Faraj (both the above and their earlier one). It is a qualitative study of 3 electronic networks of practice: mailing lists for nurses, university web developers, and literacy educators. They were trying to figure out what types of knowledge were shared as well as barriers and motivators toward sharing. It really is enlightening to compare an MIS article with an info sci article: MIS clearly derives measures from theory, while info sci cites articles but doesn't clearly derive measures/categories from theory. This article also gave a lot of information on the method, so that makes me happy (in case they care!). The authors combined qualitative content analysis of a pile of postings with semi-structured interviews (57 participants over 14 months, wow). Biggest motivators: collectivism and reciprocity (compare to the lawyers above). Biggest barriers: no additional knowledge to offer, technology (more that it was inconvenient or people forgot, not that they couldn't get the e-mails to fly), lack of time, unfamiliarity with the subject being discussed. Categories of knowledge: nurses - mostly institutional practice (this is how we do it at mpow), next biggest personal opinion; web dev - split pretty equally among institutional practice, personal suggestion, and personal opinion; literacy educators - split pretty evenly between personal suggestion and personal opinion. This is one place where I would have liked to see them repeat the survey used in Wasko & Faraj - since it held together pretty well - for direct comparability in terms of social exchange theory-derived motivations. Also, they reported percent agreement, but not Cohen's kappa, which is probably more useful because it takes into account the agreement that would happen by chance alone. Of course they negotiated all of their disagreements out, so it's not so important.
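The percent agreement vs. kappa point is worth a toy sketch (my own made-up ratings, nothing from either paper): two coders rate the same posts, and kappa discounts the agreement you'd expect by chance alone.

```python
from collections import Counter

# Two coders' ratings of the same six posts (invented data).
coder1 = ["helpful", "helpful", "helpful", "not", "helpful", "not"]
coder2 = ["helpful", "helpful", "not",     "not", "helpful", "helpful"]

n = len(coder1)

# Percent agreement: fraction of posts where the coders match.
observed = sum(a == b for a, b in zip(coder1, coder2)) / n

# Chance agreement: product of each coder's marginal proportions,
# summed over categories.
c1, c2 = Counter(coder1), Counter(coder2)
expected = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))

# Cohen's kappa: observed agreement corrected for chance.
kappa = (observed - expected) / (1 - expected)
print(round(observed, 2), round(kappa, 2))  # 0.67 0.25
```

Same data, 67% agreement but kappa of only .25 - which is exactly why percent agreement alone can flatter the coding.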
Van House, N. A. (2002). Trust and epistemic communities in biodiversity data sharing. JCDL '02: Proceedings of the 2nd ACM/IEEE-CS joint conference on Digital libraries, Portland, Oregon, USA. 231-239.
Not sure about this - I think this article has a lot of value for how well it explains various aspects of social epistemology. I think I actually get Knorr-Cetina's epistemic cultures more after reading this short piece than after reading her whole book. With that said, I'm not 100% sold on the usefulness of this paper (or approach) in designing digital libraries - I want to believe - but I'm not there yet. It's definitely promising. Maybe I'll have to give this another read.
Comps reading, Interactive IR, Pandora for PubMed.... and more!
This comps reading brought to you separately, because it is directly relevant to an interesting conversation happening on friendfeed. (what luck!)
In this string, led by Mr. Gunn, we have comments on how new article alerts should take what you already know by looking at a collection you give it (possibly from your bibliographic manager - like EndNote, BibTeX, RefWorks), and then suggest others, not based on full content, but based on human-assigned metadata, like Pandora. (An important part of Pandora, IMHO, is being able to tune it by skipping some - because there are different facets in the metadata, you might want to be related in one facet and not another... anyhoo...)
In this string, based on an older blog post by Martin Fenner, but just picked up again by Andrew Perry (liked by Richard Akerman), we talk a little more about how people find articles, suggesting filtering by papers you or others read.
Now, happy coincidence, a piece of this morning's comps readings.
Kelly, D., & Fu, X. (2007). Eliciting better information need descriptions from users of information search systems. Information Processing & Management, 43(1), 30-46. DOI: 10.1016/j.ipm.2006.03.006 (can't immediately find a free e-print, but you can at least read the abstract on Science Direct)
1) users have a difficult time articulating information needs (think anomalous states of knowledge, Belkin)
2) users tend to use really short queries because
a) they don't necessarily know what to put in (see 1))
b) the interfaces encourage them to do so
3) longer queries usually result in better retrieval performance
So there is a serious mismatch.
This mismatch has been addressed in various studies using a couple different things.
1) query expansion (for non-IR folks out there, system adds additional terms to the search)
a) automatic - the system expands your search using a thesaurus, a spell checker, or terms found in the top matching results
b) interactive - the system asks the users which terms to use and sometimes where to get additional terms.
2) polyrepresentation (Ingwersen 1996) - this tries to imitate what a good reference librarian does, using multiple representations of the information need, including representations of the user's
a) prior knowledge
b) goals or why the user wants the information
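Automatic expansion via terms from top matching results (1a above) can be sketched as a toy pseudo-relevance feedback loop. This is my own illustration, not Kelly and Fu's system; real systems weight candidate terms (e.g., Rocchio) rather than just counting them:

```python
from collections import Counter

def expand(query, top_docs, n_new_terms=2):
    """Add the most frequent new terms from the top-ranked documents."""
    query_terms = set(query.split())
    counts = Counter(
        term
        for doc in top_docs
        for term in doc.lower().split()
        if term not in query_terms          # only terms not already in the query
    )
    new_terms = [t for t, _ in counts.most_common(n_new_terms)]
    return query + " " + " ".join(new_terms)

# Pretend these are the top two results for the original one-word query.
top = [
    "information retrieval systems rank documents",
    "retrieval systems use relevance feedback",
]
print(expand("retrieval", top))
```

The interactive variant (1b) would show the candidate terms to the user instead of appending them automatically.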
As Kelly and Fu say, the idea is that the user has a lot more information about their query than they give to the system. Part of this goes back to Taylor (1968 - of course, I always go back to Taylor, 1968!) and his 4 levels of information need: visceral, conscious, formalized, compromised. The point is that users have a model of what the IR system can do, and they pose their query accordingly - they use different terms, shorter queries, etc.
This article presents part of their work on the TREC 2004 track on High Accuracy Retrieval from Documents, so there's definitely some experimental design weirdness. They get the standard TREC query information and then come back to the user to ask for (q2) what the user already knows, (q3) why the user wants to know, and (q4) some additional keywords. Consult the paper for various issues stemming from the way TREC works, but the upshot is that adding q2, q3, and q4 together was best by far. Q2 was the best single one, pseudo-relevance feedback wasn't so hot at all for these queries and this corpus, and longer queries did much better than shorter ones. (Oh, "better" is mean average precision, with binary relevance judgments from the 13 people who wrote the topics/questions.)
Now we bring it all together.
Why not enable users to identify a collection of documents stored in their reference manager as what they already know? Ask the users what they want to know... then as the alert comes in from week to week, allow the users to tune it like Pandora. The system should also tune itself, based on items saved out of the alert, which become things that the user knows...
Full text is one way, but actually, you could just use MeSH, abstracts, and titles...
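A hypothetical sketch of the idea, to make it concrete - everything here (the function names, the term sets, the scoring) is invented for illustration, not an existing system: score each incoming alert item by overlap between its MeSH terms and the terms of papers already in your reference manager, and fold saved items back into the profile.

```python
from collections import Counter

def build_profile(known_papers):
    """Weight each MeSH term by how often it appears in the known collection."""
    return Counter(term for paper in known_papers for term in paper)

def score(item_terms, profile):
    """Overlap between an alert item's terms and the user's profile."""
    return sum(profile[t] for t in item_terms)

# Papers already in the reference manager, as sets of MeSH terms (made up).
known = [
    {"Information Storage and Retrieval", "Blogging"},
    {"Information Storage and Retrieval", "Abstracting and Indexing"},
]
profile = build_profile(known)

# A new item from this week's alert: rank it by overlap with what's known.
new_item = {"Information Storage and Retrieval", "Periodicals as Topic"}
print(score(new_item, profile))

# Tuning, Pandora-style: saving an item folds its terms into the profile;
# skipping an item could subtract them instead.
profile.update(new_item)
```

Per-facet tuning (the "related in one facet, not another" point above) would mean keeping separate profiles per metadata facet rather than one flat Counter.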
Is anyone already doing this? Why not?
Comps readings this week
Lessig's Code 2.0
Good stuff here, even if he does go a little far with ideas of cyberspace sovereignty and citizenship. With that said, his views are actually pretty balanced, even if they're not frequently represented that way. In case anyone was keeping track - it's actually 18 chapters, not 15.
Lee, S., & Bozeman, B. (2005). The impact of research collaboration on scientific productivity. Social Studies of Science, 35(5), 673-702.
I'm kind of on the fence about this one. They used a few large surveys of scientists and engineers from all different backgrounds, compared with analysis of their CVs, their journal articles as found in Web of Science, some interviews... talk about triangulation! There's this standard thing that increasing collaboration increases productivity (as measured by peer-reviewed publications in science). As an aside, they cite Lotka (1926), yay! But there are a million potentially confounding factors and interactions from various things like:
- researcher status, rank, age
- researcher gender
- if researcher is a foreign national/non-native speaker
- researcher job satisfaction
- researcher perception of discrimination
- collaboration motives like mentoring/service, quality/social capital, or finding someone with complementary skills
So the point of this article is to take this massive pile of data, build a couple of big models, and see which terms are significant. Another thing you need to know is that they look at the straight number of articles, and then at a fractional number - each article divided by its number of authors.
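Straight vs. fractional counting is simple enough to show in a couple of lines - toy numbers of my own, not theirs:

```python
# For each researcher, the number of authors on each of their papers (invented).
papers_by_author = {
    "solo_author":  [1, 1],       # two single-authored papers
    "collaborator": [3, 4, 2],    # three papers with 3, 4, and 2 authors
}

for name, author_counts in papers_by_author.items():
    straight = len(author_counts)                  # raw publication count
    fractional = sum(1 / n for n in author_counts) # each paper split among authors
    print(name, straight, round(fractional, 2))
```

The collaborator "wins" on the straight count (3 vs. 2) but loses on the fractional one (about 1.08 vs. 2.0), which is why the two models in the paper can come out so differently.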
A problem with this article is that the regression formulas aren't explicitly stated, and it's not OLS, so I'm a bit confused about how they do things. Also, they talk about lots of things that weren't even close to significant, and they accept Cronbach's alphas as low as .32! (Wow, my prof gave me a hard time at .64; it should be > .70.) Probably still sound, though. The end is actually sort of not what you'd expect - for the fractional model, no real correlation once everything is taken into account. There is a decent correlation for the normal model... Hardly anything was significant - really only the number of grants (and not even the batting average for grants).
Watts, D. J. (2004). The new science of networks. Annual Review of Sociology, 30(1), 243-270. DOI: 10.1146/annurev.soc.30.020404.104342
Decent review of the more recent network stuff that's been heavily influenced by advances by physicists and mathematicians as well as widely available computing power. Discusses small world networks, scale-free networks, and a couple different approaches in epidemiology and social contagion. Not precisely what I needed - duh, this guy assumes his readers know the social theory, and is bringing them up to speed on math and applications. Note venue. Of course. I need the social theory, which I'm weak on*, and can pick up additional math as necessary later on. So not the thing for me, but still a decent article.
Bohlin, I. (2004). Communication regimes in competition: The current transition in scholarly communication seen through the lens of the sociology of technology. Social Studies of Science, 34(3), 365-391. DOI: 10.1177/0306312704041522
I have to say that the SCOT bit was just thrown in for the editor's sake - there's not the strong hand of theory involved in this. In any case, this article ties together many of the lines of reading I've been doing and makes some interesting and useful points. It's not empirical at all - really sort of a research paper like you'd do in a doctoral seminar. He compares scholarly publishing and its traditional functions with self-archiving, both in e-/pre-print servers/repositories and on the author's web page. For functions, he goes with quality control, distribution, and archiving (compare to Borgman). He considers priority and allocating credit to be part of those three. He then goes through the development of arXiv and its predecessors, including Preprints in Particles and Fields. In the case of HEP in particular, and also other areas represented on arXiv, the distribution function is much more important than the quality control function, and traditional journals fall down on speed of distribution. Also, the costs of journals can be prohibitive, so distribution is limited. These scientists do, however, continue to publish in traditional venues to be competitive for grants and promotions. Indeed, >70% of articles on arXiv end up as journal pubs and another 20% end up as conference papers (he cites like 4 studies showing this).
Here's an interesting part: why does self-archiving work in some areas and not others?
1) existing culture of exchanging pre-prints (like in HEP)
2) expectations and policies of journals in the field wrt prior publication. Compare UK biomed journals (BMJ, Lancet) with American (NEJM, JAMA), compare ACS to any sane publisher...
3) acceptance rates for journals in the field (ah-ha!)
Right, so, I remember how Merton and Zuckerman discussed the (at the time) 70% acceptance rate in physics and 20% acceptance in some areas of the humanities... there's newer research that in some fields it's closer to 90% acceptance and in others it might be below 20%. The reasons for this vary - agreement between authors and publishers about what constitutes good work, page length, institutional internal review required before submission to the journal (HEP), etc. This makes perfect sense (and I did read the Walsh and Bayma paper that this comes from but didn't connect the two): you don't really want to see something if a) you can't cite it and might never be able to cite it, b) it will undergo serious revision before it ever sees the light of day, and c) the delay before it's citable might be like 2 years.
Another interesting thing that might end up as the focus for a submission to the SSSS conference (if I can get into gear!) is this blurring of the distinctions between informal and formal scholarly communication. If the functions of formal scholarly communication are as mentioned above, and they were used to make information seeking more efficient... Let's look at distribution - wider and quicker via self-archiving, but more efficient and more precise using a research database with human indexing. Putting stuff up on a blog or on twitter or friendfeed or a wiki is much faster - but information retrieval is at best imprecise, what with semantic markup seldom used. I suppose if you are well embedded in the appropriate network - IOW you are "friends" with people with the right interests - this becomes less important... Archiving. We did recently have this discussion on friendfeed. These conversations are definitely more archived and less ephemeral than hallway conversations (which is interesting, too), but are not as stable as the journals - particularly those in CLOCKSS or Portico, or whatever. As for quality control: trust is built on the web differently than in formal publications... hm.
Callon, M., Courtial, J. P., & Laville, F. (1991). Co-word analysis as a tool for describing the network of interactions between basic and technological research: The case of polymer chemistry. Scientometrics, 22(1), 155-205. DOI: 10.1007/BF02019280
*not* recommended. I think it's sketchy, to be frank. What is there isn't well described, and some of the choices were made due to computing issues (I presume) that no longer exist. The whole basis they use for building part of their collection is *uncited* and not easily findable using WoS or Google Scholar. I might pick up something from H. White whom I trust in this area to get this uneasiness out of my system. Goes to show when I pick something vs. my committee :)
Wellman, B., Salaff, J., Dimitrova, D., Garton, L., Gulia, M., & Haythornthwaite, C. (1996). Computer networks as social networks: Collaborative work, telework, and virtual community. Annual Review of Sociology, 22, 213-238.
Note date, then read this quote:
The popular media is filled with accounts of life in cyberspace... much like earlier travellers' tales of journeys into exotic unexplored lands. Public discourse is (a) Manichean, seeing CSSNs as either thoroughly good or evil, (b) breathlessly present-oriented, writing as if CSSNs had been invented yesterday and not in the 1970s, (c) parochial, assuming that life on-line has no connection to life off-line, and (d) unscholarly, ignoring research into CSSNs as well as a century's research into the nature of community, work, and social organization. (p. 214)
Does this sound like blog discussions 2004-2007 (and at the colloquium I attended Friday)? Nothin' new under the sun. Anyway - this is a great road map of the research from about 1985 or so to its writing in 1996. I wouldn't recommend this for anyone but the most dedicated (or maybe those with historic interest) as it is completely dated. (The Walsh & Bayma stuff cited is still very relevant, though, as are some of the other works cited.)
talking about dated...
Shapiro, A. L. (1999). The control revolution: How the internet is putting individuals in charge and changing the world we know. New York: Public Affairs.
This is unfortunately OBE. I recommend Code v.2, as it covers mostly the same topics and has been updated. I read the first 8 chapters so far (about a third) and
- control is more than countries: it comes from ISPs; other providers like libraries, schools, work
- outdated evidenced by statements such as "as schools and libraries become wired"
- overly glossy everything is beautiful talk like you'd have during the dot com bubble
- glorious disintermediation - now we know that there are some things worth paying for: a real broker, a Realtor(tm), a librarian
- we now know that running the country, the state, the x by poll doesn't work
- it's not a choice of command line green on black vs. glorious windows...
He does have an interesting point about over-personalization - but obvious stuff like customizing e-mails isn't quite as troubling to me as the search results...
*(As an aside, it seems like one of the most important functions of a doctoral program in the social sciences is to teach researchers various theories and how to use them as tools to understand how society works... my program did not do this, but it straddles science, including CS, so that's definitely not done there... my colleagues in the COMM school and in SOCY had like 3 heavy-duty "theory" courses, at least... students coming after me in my program have precisely zero, and this is not something that you can really do entirely on your own or with informal mentoring - or at least, it's difficult for me because that's how I'm doing it.)
Comps readings this week
This is 2 weeks, and really quite paltry.
You would think that being off sick one day last week would help, but actually I just slept that day... and for my snow day, it took forever to get other stuff done, sigh.
Ingwersen, P., & Jarvelin, K. (2005). Cognitive and user-oriented information retrieval. In The turn: Integration of information seeking and retrieval in context (pp. 191-256). Dordrecht: Springer.
This was really a laundry list of readings. It wasn't too critical or anything, but just traced the development of cognitive and interactive information retrieval. I guess if I were reading this at a different stage, it would have been more helpful, but it seems somewhat redundant now. Part of the problem might have been which chapter I chose to read - others probably develop ideas more fully.
Lessig, L. (2006). Code: And other laws of cyberspace, Version 2.0. New York, NY: Basic Books. Retrieved November 9, 2008 from http://pdf.codev2.cc/Lessig-Codev2.pdf
This edition has significant updates from the original - in content as well as just statistics and examples. Also, re-reading it at this different time in my life does allow me to make different connections. I think I get now, more than previously, how community architecture impacts behavior and social interaction within a community. Other writers talk about affordances of technology, but this is really about how the code and the policies really enable certain behaviors and discourage others. Some of the examples of what AOL allows could just as easily be aimed at Comcast and other ISPs (here and in other countries) and what they do with throttling torrent connections (according to their response to the FCC they stopped, but my husband thinks differently), for example. I've read elsewhere how code and the design of physical as well as virtual things incorporates values, and that's also here, but somehow Lessig's examples seem more pertinent (and show the connection more clearly) than bridges on Long Island.
I like Lessig's discussion of threats to liberty and how they can be embedded into code. Recently, I wanted to place a picture taken of me into a slide for a trip report. Unfortunately, the photographer chose the copyright license on Flickr, so the software prevented me from copying or downloading the picture - even though I'm the subject of it and regardless of what use I intended. I also rail against all of the abridgments of our liberty (and fair use) that libraries must agree to in order to access content for our customers.
Constraints or regulators can come from the market, architecture, law, or norms.
In another part of the book, he talks about perfect filtering. If there were perfect filtering regimes, you would only see those things that support your point of view, and that you wanted to see. Sunstein argues (a la Madison?) that you can't be a well-informed citizen by only being exposed to your own POV. This made me think about Google's customized search results - is there a point at which these things get so good that you will lose serendipity, and even more, not be exposed to other points of view, inequality, unpleasantness, other cultures? Is that Google's job, or is its job to get you the most relevant things in the top 5 hits (and relevance does include authenticity, freshness, grade-level, language, point of view...)? Do libraries/librarians aim for perfect filtering?
... still need to read chapters 13-15, but got a little numbed so moved to ...
Sharp, H., Rogers, Y., & Preece, J. (2007). Usability Testing and Field Studies. In Interaction design: Beyond human-computer interaction
(pp. 645-683). Chichester; Hoboken, NJ: Wiley.
Well that was different! It's a textbook aimed at the upper level undergrad or entry level grad student. Easy to read and very clear - but not very detailed. I should probably read the rest of the book, too, but time is precious right now.