Comps readings this week
Yin, R. K. (2003). Case study research: Design and methods
(3rd ed.). Thousand Oaks, CA: Sage Publications.
This is one of those books that's a mandatory read if you take certain methods classes in education or business. So if you have a member of your committee who was trained in either of those areas, you'll probably end up reading it :)
I have to say that I'm not impressed. There were some useful nuggets here and things I can use, but I found it pretty superficial after reading Patton, Wolcott, Rubin & Rubin, the Sage Handbook, Miles & Huberman, Maxwell, Creswell, etc. It just seems like a lot of handwaving (Wolcott is, too, but at least you come out of that feeling like you've gotten a pep talk!). At least it was short - oh, and if/when/as I get back on the horse for fixing up and resubmitting my journal article, I'm going to cite this for how I actually used NVivo (as a case study database). Maybe I should move Wolcott up in my readings to get that pep talk going so I can get motivated to rewrite...
Zimmerman, A. S. (2008). New knowledge from old data - the role of standards in the sharing and reuse of ecological data. Science Technology & Human Values, 33(5), 631-652.
Excellent. I wish I'd read this prior to attending the HCIL Workshop. She was interested in how ecologists reuse data collected by other scientists for other purposes. She located her participants by finding articles published in a major ecology journal that used other data sets. She conducted 90-minute semi-structured interviews with 13 ecologists who were first authors on these papers (she also notes a reference that indicates that in ecology the first author is the primary researcher :) ). It turns out that methodological standardization really isn't the key, because there are so many reasons for using different techniques in ecology. The scientists use informal knowledge from their own field work to judge the results, and often examine datasets point by point. They also use their knowledge of the data taker (and their commitment to the organism) to judge the value. The participants were able to articulate these criteria, so there might be things that can be added to repositories of ecological data to make reuse and scaling up of these projects more doable.
This is a great addition to the Borgman book which talks in more general terms. And it's short :)
Rowlands, I. (2007). Electronic journals and user behavior: A review of recent research. Library & Information Science Research, 29(3), 369-396. DOI:10.1016/j.lisr.2007.03.005
ACK! He refers to KTL Vaughan as a "he" several times! Good grief. A little research would have solved that problem (btw, I had forgotten KT had a JASIST article, cool.)
This article is sort of about my life - nothing new here, and I've read almost all of the articles he discusses. With that said, it is a decent overview that brings Tenopir's 2003 report up to date.
Hine, C. (2006). Databases as scientific instruments and their role in the ordering of scientific work. Social Studies of Science, 36(2), 269-298. DOI:10.1177/0306312706054047
This was cited in the Zimmerman piece above. A good read, but not as accessible - probably due to the "framework" she uses and her writing style more than anything else. This article comes from a large ethnographic study surrounding the building of a genome database. Previous studies of databases in science looked at them as dissemination tools, but more and more they function as scientific instruments. She uses social and natural orderings from Knorr-Cetina as the framework - but I read that book (Epistemic Cultures), and I don't see this as a framework exactly. Anyway - this was apparently a huge study, and the paper has a lot of good stuff once you get past the "ordering" business. Like how the computer scientists (in this case providing a service) and the scientists work together - in trading zones, not becoming homogeneous. How the computer scientists learn about the scientific culture and enough of what needs to happen from what are sometimes very vague requirements (too bad she doesn't cite Collins' more recent work on the various expertises - seems very relevant). The genomic researchers in her study were like the microbiology researchers in Knorr-Cetina's book in that the lead of the lab did most of the talking to the outside world, but there were some differences due to this cross-cutting database. Oh, and to get submissions to the db - the thing would calculate mappings for you, but only once you'd uploaded some data. She seemed to see this as more carrot, where the required-before-publication model is more stick. I would have liked to see more quotes and more concrete findings from her study - but I suppose that must be in other papers.
Traweek, S. (1988). Beamtimes and lifetimes: The world of high energy physicists. Cambridge, MA: Harvard University Press. (but only the Prologue: An anthropologist studies physicists)
Compare to the intro chapters in Latour & Woolgar as well as the Knorr-Cetina book mentioned above. Seems de rigueur for writing a book that will be read by your participants, particularly when your participants are practitioners in a quantitative/experimental domain. Still, a nice chapter to perhaps assign to a group in a basic qualitative research class - it sets up exactly what an anthropologist in a physics lab (as opposed to overseas) does.
Pinch, T. J., & Bijker, W. E. (1987). The social construction of facts and artifacts: Or how the sociology of science and the sociology of technology might benefit each other. In T. P. Hughes, & T. J. Pinch (Eds.), The social construction of technological systems: New directions in the sociology and history of technology (pp. 17-50). Cambridge, MA: MIT Press.
Hm. I see the argument, but I still find it difficult to see how science and technology can be studied together. Bike stuff is cool, too.
Morris, S. A., & Van Der Veer Martens, B. (2008). Mapping research specialties. Annual Review of Information Science and Technology, 42, 213-295.
Whoa, this is one looong article :) It's actually really helpful and I recommend it. It provides a practical history and framework for locating groups in science, and links various co-occurrence measures to what people are trying to map. I've done some of this at work in bits and pieces, pulling assorted measures out of, erm, the literature, and then trying with varying degrees of success to put together a clear picture of what I did and why, as well as of the results. So this is pretty cool. I would recommend this for independent information professionals, information scientists outside of libraries in corporations or research organizations, and for special librarians who could get work performing this function.
Garvey, W. D. (1979). Communication, the essence of science: Facilitating information exchange among librarians, scientists, engineers, and students. New York: Pergamon Press.
He (like others writing at the same time) was a real fan of librarians (thank you!). What happened? Anyway, he starts in his preface by discussing a theme that ran through a NATO information science conference in 1973: that more user studies were needed to make progress (sound familiar?). This book was written for a science librarian audience, to bring them up to speed on Garvey's work (with Griffith, Lin, et al.) on how scientists communicate. I should have read this for my big lit review on informal scholarly communication - in particular since it has reprints of like 10 key articles as the appendix (sure would have been quicker than finding them all separately). Good stuff here, but so far mostly stuff covered in those earlier articles. Also, he follows the standard linear paradigm of basic > applied > engineering... which is pretty much discredited now (see the stuff from last week).