Christina's LIS Rant
Saturday, January 24, 2009
  Science Online '09: Searching the Scientific Literature
Our session didn't go quite as planned - I had hoped to demonstrate some things. The slides are online.

First, John and I described what a librarian does in a university and in a research lab. Most people have some idea of what their elementary school librarian's main jobs were. Some people who have children know that public librarians do story times (that's maybe 1/100th of what public librarians do)... but unfortunately, most people really have no idea what librarians in corporations, research labs, and even universities do. So we talked about that.

Then we wanted to talk about some big themes like:
- making connections between things - getting from a need to an answer, from a citation to the full text, or from a collection of citations to a publishable paper
- how librarians can be your best resource in educating your peers about open access and in getting support for open science
- in universities, we can help you get your classes going or consult with you individually on your own research
- even if you have no university affiliation, you can walk in and use public university computers (public might not be the right word in your neck of the woods, but state funded, like State U or U of State) and most of their subscription resources. AND you should always use your local public library and get assistance from the librarians there. They also have research databases.

We also talked about some spiffy research databases, ebooks, and some free tools online.
I mentioned Inspec and Compendex, of course, because that's the way I roll (engineering and applied physics and all) - my point with Compendex was more about the interface, though.

John talked about Safari and Books24x7 for ebooks.
I talked about the CRC Handbook of Chemistry & Physics - the online version, with tables you can sort and now substructure searching. This is another of my major themes: helping you mobilize and *use* information, even information that's found in books. To this end we mentioned searching in Google Books or on Amazon to find the content, and then using "find in a library" on Google Books, or a special bookmarklet or browser plugin, to find a book that's listed on Amazon. Your library might even provide you with an electronic copy right there at your desk, even if your desk is at home (so long as you have proper credentials), and if not, might deliver the print copy to a location that's easy to get to (our staff get books at their mail stop).

If I remember other things, I'll add them.

  Science Online '09: Sunday AM
Reputation, authority and incentives. Or: How to get rid of the Impact Factor — moderated by Peter Binfield and Bjoern Brembs
I was frustrated by the premises of this session: that the IF is inherently evil, that nobody likes or uses it, and that everyone agrees it should go away. I was happy to hear someone piping up that they do indeed use it (if only as a first cut) and that it does have some value. Like any other mathematical formula, it isn't inherently evil. It's only as good as the data that go into it, and it can only provide so much information. The problem is that it's abused and misused. It can be a useful tool if used as part of a much larger set of metrics - for example, in collection development decisions, when combined with local citation measures, subject matter expert opinion, cost measures (per page, part of a package), topical relevance, local usage, appearance on syllabi/course reserves, whether a squeaky wheel is on the editorial board....
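Just to show that the formula itself is trivial arithmetic (the evil is all in how it gets used), here's the standard two-year calculation with made-up numbers:

```python
# Two-year journal impact factor, per the standard JCR definition:
# citations received in year Y to items published in years Y-1 and Y-2,
# divided by the count of "citable items" published in Y-1 and Y-2.
# The journal and all the numbers below are invented - this is just the arithmetic.

citations_2008_to_2006_2007 = 1200   # cites received in 2008 to 2006-2007 content
citable_items_2006_2007 = 400        # articles + reviews published in 2006-2007

impact_factor_2008 = citations_2008_to_2006_2007 / citable_items_2006_2007
print(f"2008 JIF = {impact_factor_2008:.1f}")   # 3.0
```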

What was even more frustrating is the idea that one measure could be easily replaced with another simple measure, that we could brainstorm in one hour. Sigh.

On the positive side, I think it was Gee of Nature who suggested compiling multiple inputs into single pages -- the ideas of adding in commentary from the blogosphere, pre-prints, and other things seem useful. From this we somehow got around to uniquely identifying authors. We discussed on FriendFeed the merits of developing a new tool versus adapting OpenID or the like to scientific purposes. Along these same lines, there's the question of having a consortium of publishers manage this (like the DOI, which is very successful) or using some other open model.

----
Providing public health and medical information to all — moderated by Martin Fenner
Martin prepared a very useful framework and posted it to the wiki page.
Doctors post for doctors - as a filtering mechanism. These filtering mechanisms are quite common in clinical medicine, like Cochrane Reviews. Doctors also post for the public, to comment on new research.

--
then there was our session, sigh. It needs its own post
and I went home.

  Science Online '09: Saturday PM
Ok, here's the weird part. I distinctly remember being at two different sessions that are on the schedule for the same time slot. Huh? So there might be some time travel involved here or maybe I'm imagining something...

Web and the History of Science – moderated by GG, Brian Switek and Scicurious
Very cool session - but there were some deeper issues here that the audience pressed on and that were not totally addressed. There's the myth of science - really probably mostly post-WW I - that paints a picture of science as clean and linear. Many popularizations reinforce this with 20-20 hindsight. Most popularizations also only hit the "cool" and the weird (there's a neat Fahnestock article on this - she's at UMCP, but I've never met her). Some of the bloggers go deeper and talk about controversies and the complexities of good scientists working hard whose science was later found to be faulty (or to not adequately describe and predict once better instruments became available). Seems like some bloggers just go: ha-ha, look at those crazy old guys! One of the moderators was unable or unwilling to go deeper, and that sort of set the tone, unfortunately. If/when this session is done again, bring in someone from rhetoric who looks at the language of popularizations, or someone doing science history, to ask tougher questions of the bloggers who post on this.

Social networking for scientists – moderated by Cameron Neylon and Deepak Singh
(How I could have attended this when it was at the same time as the one above beats me.) Interestingly, these scientists came up with Rogers' innovation adoption decision process for communication technologies :) This was a very worthwhile and useful session, though. Is there value in having separate science networks, or should scientists just use the general purpose ones? Seems like if the network is built around *things*, and those things need special treatment, then science or scholarly networks might be in order. One example is myExperiment - it's built around sharing of workflow pieces - you can't do that well on Facebook because you need special metadata, searching, and attribution. Likewise with CiteULike or Connotea - they are better for scholarly articles than Delicious because they understand what metadata is required to describe scholarly work. Otherwise, sites that are just like LinkedIn but have a smaller user base really don't offer anything over the general space, and might offer less if they can't get to critical mass.

Anonymity, Pseudonymity – building reputation online — moderated by PalMD and Abel Pharmboy
This is a perennial favorite. I think people who follow blogs at all get this: that you get authority and trust over the course of the blog through your posts. That your pseudonym becomes your brand and is meaningful and trustworthy based on your history of posts. This might be better than relying on your institution or journal IF for authority (IF, ha!). People who don't know blogs, and don't read them, really don't seem to get this. And there are legitimate reasons to not use your real name, even if people know it anyway. Also assume that you can be found out, no matter how hard you try to stay anon.

  Science Online '09: Saturday AM
My very much delayed notes from the sessions I attended Saturday.

Open Access
Moderated by Bill Hooker and Bjoern Brembs

http://www.scienceonline09.com/index.php/wiki/Open_Access_publishing/

What’s open access – green vs. gold models…
University and disciplinary repositories can be listed in directories and have machine-harvestable / federated-searchable content.
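(The "machine harvestable" bit is, I assume, OAI-PMH - the protocol repositories expose so that directories and federated search services can pull their metadata. A bare-bones harvest looks something like the sketch below; arXiv's public endpoint is used only as an example.)

```python
# Minimal OAI-PMH harvest sketch: ask a repository for Dublin Core records.
# arXiv's endpoint is used as an example; any repository listed in a
# directory such as OpenDOAR answers the same verbs.
import urllib.request
import xml.etree.ElementTree as ET

BASE = "http://export.arxiv.org/oai2"
url = BASE + "?verb=ListRecords&metadataPrefix=oai_dc"

with urllib.request.urlopen(url) as resp:
    tree = ET.parse(resp)

ns = {"dc": "http://purl.org/dc/elements/1.1/"}
for title in tree.iterfind(".//dc:title", ns):
    print(title.text)
```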

citation advantage and acceleration of the research cycle.
also allows for text mining
example: iHOP, information hyperlinked over proteins – works amazingly well, but can only work on the abstracts, and needs to be able to work on the full text

serials crisis – costs rising >> consumer price index

Can’t access own article

member of audience – databases in the NAR database issue, exponential growth (published in Scitable, open access) – duplicating efforts – how do you prevent duplication?

6 major publishers control most of scholarly publishing

my point – if journals are dead, why are they proliferating? Societies as well as (or perhaps more so than) commercial publishers are starting new ones

journals go on your cv – where you’re published
- journal quality as a proxy for researcher quality or research quality
- is using journals for assessment such a good idea
- do you want to delegate that filter to someone?

- idea: separate the dissemination of the research we're doing from the making of judgments about people for promotion – separate those things out
- this was already done in particle physics – pre-publishing, even before the net, by hand dissemination through the mail, then through e-mail, then arXiv… everything in particle physics is open access, but this didn't change the market in scientific publishing – the journals no longer had anything to do with making the science available, but persisted for the point of reputation, credit, etc. The market wasn't created by evil companies, it was created by the people working in universities – everyone has to find a place to publish – even if the work is very poor. Overlap between arXiv and the journals is 100%, but the certification, etc., that people need comes from the published edition.
- criticism of making money – business model – vs. spirit of open access.

business model
- is open access really more expensive in total, because management is decentralized, vs. centralized within the library, which has more or less figured out how to manage licenses based on subscriptions?

- shifting the cost from the library to the individuals is sometimes resisted, but sometimes ok – depends on the field and if they’re accustomed to paying page charges

So, this session covered a bit that was really appropriate to the impact factor discussion. Unsurprisingly, there were still those in the audience who wouldn't publish OA because their advisor bought the line about OA != quality. This is just like the advisors saying ejournals != quality when in fact nearly all journals are in electronic format and the format has nothing to do with the quality.

Another time this week someone who works at a big evil society tried to get me to make some absolute statement about OA and I just wouldn't take the bait. I think for certain communities, it works quite well (for example, HEP and biomed) while for others there are a lot of very real barriers. I don't think libraries are necessarily the place to manage the article payments, but some libraries have taken on this task and have been able to support their researchers this way. There's a question about scalability.

There was a very interesting contribution to LIB-LICENSE this week from someone who actually went to Kenya and talked to some users of the literature. He found that they were getting just about all they needed from HINARI. So the LDC or developing countries argument might not be the best one - the argument might have more weight for smaller institutions in developed countries -- they seem to be the ones who can't get what they need.

-----
Not just text – image, sound and video in peer-reviewed literature
http://www.scienceonline09.com/index.php/wiki/Not_just_text/

This session was very helpful in distinguishing for me the difference between these two things and how they might be valuable.

Two models: YouTube-like and journal-like.

Moshe P, from JoVE (the Journal of Visualized Experiments)
journal articles are very inefficient at transferring tacit or craftsman-like knowledge, such as how to do certain experimental techniques
"golden hands"
he had to fly to the original lab to get the expertise and then bring it back.
need video publication – show me
like cooking – small things omitted from the recipe that you can pick up by watching someone prepare it or by watching a cooking show

questions:
- what incentives are there to publish science on videos
- what format would be most useful?
- what equipment is required

incentive – make it a publication, a scientific journal, peer review, indexed in PubMed
tools – they do it for the scientists. Distributed network. Specialized outsourced video production companies with expertise and equipment to capture.

each article begins with a schematic representation, then an introduction of the scientist
(questions – Java – can I embed? – you have to do it by e-mailing them, because they had people ripping off the entire content of their site)

are the authors given help in speaking to the camera?
(the real question was about widening the audience – but this isn't about that at all; this is for an expert audience, which should indeed be full of expert language, somewhat incomprehensible outside of the exact area)

SciVee.tv
- science video sharing web site – synchronize video, literature, slides – within the browser
- more about discussing a paper vs. showing an experiment
- profiles and community to connect scientists to each other
- changed direction – people wanted to upload stand-alone science videos (lions on the savannah or conference presentations)
- poster casts, slide casts

questions to scivee:
- requirement that poster already be accepted/published elsewhere first?
- they contact conference organizers to have a private community where these are available to conference attendees by password for some period of time and then open later

questions to JoVE
- do they get the text with what the equipment is and where the consumables came from and such? Answer: yes
- is this for new methods, or for methods that already exist in the literature? Could be both
- how does peer review work – goes to 2 or three reviewers who give time stamped comments
- is this multiple or duplicate publication? not really because you’d have a methods paper vs. the results paper

funding models:
- SciVee: trying to go to conference organizers
- JoVE: advertising, plus author fees ($1k) when they produce for you

question about animal research
- they review carefully (a special board)
- nevertheless, a face becomes associated with animal research – firebombings, attacks, etc.
- they're concerned about self-censorship and its effect on the science being put out.

(question that occurs to me now - people judge other people by their names and institutions and stuff, of course, but also by personal attributes - when you know CK Pikas is a white woman in her 30s, does that make you think any differently about her work? so here's the question: is there value, or what difference/impact, whatever, does it make to see and hear the scientist with the protocol? Ideally, using Mertonian norms of universalism, it makes no difference whatsoever... but people are really funny.... if you had the same or equivalent protocols done by an older white man and a younger minority woman, would the reaction and the use be the same? I really hope so! Maybe it's horrible that I even speculate? The choice to have a little intro of the scientist before the protocol is an interesting one - presumably to help the viewer trust the protocol.)

----

Semantic web in science: how to build it, how to use it

John Wilbanks

semantic web / RDF
triple – subject, relationship (a directed arc, with a label), object
literals vs. reification ("has category")
bootstrap using GRDDL ("griddle") – extract RDF, using a parser, from an SQL database or whatever

OWL – types of relationships (symmetric, ...)
SPARQL (query language)
unlike XML, you can pull a statement out and it still makes sense – can remix
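(For my own notes, here's the triple + SPARQL idea in a few lines of rdflib; the little "cites"/"hasCategory" vocabulary is invented for the example, not a real ontology.)

```python
# Toy RDF graph and SPARQL query with rdflib. The predicates are made up.
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/")
g = Graph()

# a triple: subject -- relationship (labelled, directed arc) --> object
g.add((EX.paper1, EX.cites, EX.paper2))
g.add((EX.paper1, EX.hasCategory, Literal("neuroscience")))

# any statement pulled out of the graph still makes sense on its own,
# which is what makes remixing possible (unlike a fragment of XML)
results = g.query("""
    SELECT ?p WHERE { ?p <http://example.org/cites> ?other . }
""")
for row in results:
    print(row.p)   # -> http://example.org/paper1
```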
licenses are really nasty – even if intended to be open, they can prevent some remixing because the eventual user might be corporate?

need a public domain – in the US, databases can be public domain (not so in Europe)
CC0 – a zone of certainty for the semantic web – certifies which types of reuse are OK

viral licensing actually causing problems… need this new way to make things usable in a semantic web

neurocommons – their proof of concept
- e pluribus unum
- this domain because PubMed/NLM data are non-copyrighted
uses:
- DNS for the life sciences
- API to the public domain of data
- enhanced document markup
- activity center analysis

working with Microsoft to, like, spell-check articles as they are written, so that all of the connections to the data stay intact
(all life sciences, all the time - actually, I think astro and earth/planetary sciences have some goodies like this, but the people who work on these sorts of things apparently don't show up at this meeting --- it would be great for someone from ADS to show up and talk next year)

Sunday, January 18, 2009
  Comps readings this week
Had train rides to NYC and back and a plane ride this week. Didn't read on the train up (so bumpy it was making me queasy) and on the way back read really slowly because I was so sleepy... but still got some stuff done.

Finished:
Monge, P. R., & Contractor, N. S. (2003). Theories of communication networks. New York: Oxford University Press.
This book was really uneven. There were places that really provided very useful overviews of vast bodies of literature - but in a way that made them useful. There were also places where it was nearly impossible to see the connection to communication networks, and where it seemed almost like they were regurgitating an encyclopedia article in a field outside their own. It could be that they were given specific guidance on how advanced they could go and they sort of had trouble not undershooting. I think probably that I would say chapter 2 is very helpful - and worth reading. It discusses the various units of analysis: individual, dyadic, triadic, and global; the units of measure at each of these; and then lists some social science theories that have been explored at that level with that measurement. The modeling stuff could be handy, too, as could the stuff on theories of self interest and collective action, exchange and dependency theories, and the homophily chapter.
With that said, the book won all sorts of awards and is widely respected, so, whatever.

Also read:
Forte, A., & Bruckman, A. (2005). Why do people write for Wikipedia? Incentives to contribute to open-content systems. Proceedings of GROUP 05 Workshop: Sustaining Community: The Role and Design of Incentive Mechanisms in Online Systems. Retrieved October 24, 2008 from http://www-static.cc.gatech.edu/~aforte/ForteBruckmanWhyPeopleWrite.pdf
They did a big research project - I really hope there were lots of other articles out of it. This was just a workshop paper, so very short and not theory driven. Not a whole lot of evidence. Their main deal is to compare authorship & identity practices in Wikipedians with Latour & Woolgar's (1986) discussion of the role of journal articles in knowledge production in science. The first thing they say that annoys me is that basically Latour & Woolgar were the first to study scientists and scholarly communication in science from a social sciences point of view! Holy freakin' cow. Anyway, seems to me what they're talking about is more like the open source software project model and actually, hardly like scientific communication at all except in the fact that some people like to get credit for hard work... which is pretty much a universal sentiment.

I tried to read 3 chapters but I really only completed:
Manning, C. D., Raghavan, P., & Schütze, H. (2008). Scoring, term weighting & the vector space model. In Introduction to information retrieval (pp. 100-123). New York: Cambridge University Press.
I feel like I need to go back and pick up the chapters between the ones I'm supposed to read.
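(To fix the chapter's core in my head, here's the vector space model in miniature - tf-idf weights plus cosine scoring over a three-document toy collection. This is my simplification, not the book's exact formulation.)

```python
# Tiny vector space model: tf-idf weighting and cosine similarity scoring.
import math
from collections import Counter

docs = {
    "d1": "impact factor citation analysis",
    "d2": "citation networks in science blogs",
    "d3": "open access publishing costs",
}
query = "citation analysis"

N = len(docs)
tokenized = {d: text.split() for d, text in docs.items()}
# document frequency: how many docs contain each term
df = Counter(t for toks in tokenized.values() for t in set(toks))

def tfidf(tokens):
    tf = Counter(tokens)
    return {t: tf[t] * math.log10(N / df[t]) for t in tf if df.get(t)}

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

q = tfidf(query.split())
for d, toks in sorted(tokenized.items(), key=lambda kv: -cosine(q, tfidf(kv[1]))):
    print(d, round(cosine(q, tfidf(toks)), 3))   # d1 ranks first
```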

Huang, X. (2008). Conceptual Framework and Literature Review. Unpublished Manuscript. (Chapter 2 of her dissertation)
Xiaoli has a deep and nuanced understanding of relevance, and she explains it very clearly. At first I thought it a bit odd to read a chapter from a dissertation in progress, but I need to relax and trust my committee! Daniel Tunkelang has recently been on a relevance kick, and that's a good thing - it is the central concept for information retrieval and human information behavior. This chapter reviews the history of studying relevance from the beginning of the 20th century through recent studies. She talks about the system view as well as the user view and lays out a conceptual framework. Her work deals with topical relevance... but you'll have to wait 'til she's done to read about it.

Barley, S. R. (1990). The alignment of technology and structure through roles and networks. Administrative Science Quarterly, 35(1), 61-103.
He's trying to go from technology introduction to changes in social structure. He points out a couple of ways people try to do this, but apparently they focus on one side or the other and don't get from point a to point b. He uses social roles: relational and non-relational. Relational roles are ones that require an alter - like son/mother, wife/husband and non-relational are like butcher, baker, etc. Except, as he points out, life isn't so clean. He looks for the chain technology > non-relational roles > relational roles > change in social structure. His research was a massive study of radiology departments in 2 hospitals - he went to one or the other every day for like 6 months, interviewed people, did sociometric surveys, reviewed documents, and observed like 400 procedures. Interestingly, there was a really stratified social structure - when new equipment is purchased, they hire new doctors who are very junior, but who are trained in the newest modalities. So you have senior doctors who only know how to read 1 type, middle ones that know a couple, then the youngest who know like 4 or 5. Apparently the whole way the technologists are managed also goes with the modality. This guy had all kinds of evidence to prove his point. Interesting.

(btw- this will appear on Tuesday - got behind, but the readings are for the week ending 1/18)

UPDATE 4/30/2009: X. Huang's dissertation is now available online at http://www.dsoergel.com/XiaoliHuangDissertation.pdf - if you're into relevance (and who isn't?), this will be heaven :)

Friday, January 16, 2009
  Science Online '09: Friday Night
First - stalked various science bloggers in the lobby. I totally missed my lemur trip because my plane was an hour late... so I hung out in the lobby and ran into Tom from Inverse Square, then Aaron from Wired, James from Island of Doubt, Acmegirl, Bill, Jean-Claude... and probably some others all wandered in.

It was time for dinner - but there were no takers - everyone had eaten or was eating later or in their room. Acmegirl generously agreed to accompany me, so she sat with me while I started my burger. She left to go to the wine tasting, but then Miriam waved me over to their table. So I finished my burger with her, Danielle, Daniel, Erin, and Danica (whose research area is actually not that far from my own).

We headed over to the WISE Event: "Women, science, and storytelling: The immortal life of Henrietta Lacks (a.k.a. HeLa), and one woman’s journey from scientist to writer".
There are a lot of parts of this story that are more than a little disturbing. But it was fascinating and I was glad to hear it. Too bad we have to wait a year for the book :(

Wednesday, January 14, 2009
  Suggestions or Input for Searching the Scientific Literature
John Dupuis and I are moderating a session at Science Online '09.

We're both pretty excited about this - but I have a little problem. I don't know who will show up or what they'd like to see. I have all kinds of neat tips and tricks but it would be great if we could hear from some potential attendees (either in person or online) about what they would like to hear about.

Comment here, on the wiki, on friendfeed, or at John's blog and we'll do our best to incorporate your suggestions.

Sunday, January 11, 2009
  Comps readings this week
Finished:
Garvey, W. D. (1979). Communication, the essence of science: Facilitating information exchange among librarians, scientists, engineers, and students. New York: Pergamon Press.

Overall, it seems like a digested version of his papers, which I've already read. There were some new things about how and when librarians can/should get in on the action...

Read chapters 1-6 of
Monge, P. R., & Contractor, N. S. (2003). Theories of communication networks. New York: Oxford University Press.
I initially had Wasserman & Faust on my reading list, but my committee members thought this would be better - wow, they're right! This sort of handwaves at the actual SNA measures, but it is all about the various social theories and how they are investigated using SNA metrics (ah-ha, a missing piece). I've read a few of the papers mentioned, but this really just puts everything in one spot. When I started the first chapter, I was like blah, blah, blah - world is flat, emergence, complexity, globalization, yadda, yadda... and then, all of a sudden, they drop the names of like 15 major theories and tell you what hypotheses they yield and how to test them. So reading speed has been uneven :) I might, either in next week's post or as a separate post, list some of the theories. The authors' main contribution is a multitheoretical, multilevel framework - something to use when, for example, your theory about dyads conflicts with your theory for the global network.

Off to read chapter 7...

Wednesday, January 07, 2009
  A Structural Exploration of the Science Blogosphere: Director's Cut
Due to popular demand (well 3 requests :) ), this is a commentary and additional information for my conference paper and presentation:
Pikas, C. K. (2008). Detecting Communities in Science Blogs. Paper presented at eScience '08. IEEE Fourth International Conference on eScience, 2008. Indianapolis. 95-102. doi:10.1109/eScience.2008.30 (available in IEEE Xplore to institutional subscribers or e-mail me if you don't have access that way).

The presentation is embedded in another blog post, and is available online at SlideShare. The video of me talking about it is (will be?) available on the conference site, but I haven't gotten it to load.

Context:
I'm interested in scholarly communication in science, engineering, and math. Specifically, informal scholarly communication and how information and communication technologies, in particular social computing technologies, can/do/might impact informal scholarly communication in science/math/engineering. I'm also interested in knowledge production and public communication of science, two sub-areas of STS (this acronym has several expansions - the most common is probably science and technology studies).

As a blogger, and a 2-time (soon to be 3) attendee of what was the NC Science Blogging Conference and a reader of science blogs, I became curious about how and why scientists use blogs and if their use is: a) similar to how non-scientists use blogs b) for informal scholarly communication (to other scientists about their work) c) for public communication of science d) for personal information management e) maybe for team collaboration(?)... The first way I looked at this was by doing a study with content analysis and interviews of chemists and physicists (this has not been published yet, but maybe someday, these things aren't as perishable as writings in other fields, I hope). The second study swings all the way to a structural analysis of the science blogosphere - and that's what was reported here.

In social network analysis (SNA), you look at the link structure, not the attributes of the actors or nodes. The idea is that links show evidence of potential information flows or influence. You can pick out prestigious or central actors, and groups which are more tightly connected to each other than to the rest of the network.

The first major problem was locating science blogs - and even drawing any sort of boundary as to what a science blog was or wasn't. Given that I'm interested in how these things contribute to science, I drew the line thusly:
Blogs maintained by scientists that deal with any aspect of being a scientist
Blogs about scientific topics by non-scientists

Omitted were:
Primarily political speech
Ones maintained by corporations
Non-English language
(you could definitely draw the line somewhere else, but this is what I did!)

Also, given that I'm a great searcher but almost not a coder at all, I did this by search, snowball, and any hook or crook to get as big a set as possible. I went to each of these and copied off the URLs from the blogrolls (to answer a question from a Scibling: if you had a rotating list that showed up in javascript in the page source, I probably got it; if you have a second page with a list of 300 blogs (cough - Bora - cough), I probably got it; likewise if it was generated by GoogleReader or something)... so this was incredibly tedious and probably missed a few, but it's probably pretty accurate. So that was the first network.

The second network - and I originally had a much grander scheme - took the "most interesting" (most central by common measures) blogs from the first network, and then used Perl scripts (the core script was developed by Jen Golbeck, and then I customized it to work for non-Wordpress blogs and blogs where people changed their templates a lot - you all really could have made this easier, lol) to pull all of the commenter links off of the last 10 posts (this was done in like April).
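(The same pull in Python would look something like the sketch below - I actually used Perl, and the "comment-author" class here is hypothetical; every template named things differently, which is exactly why the script needed so much customizing.)

```python
# Rough Python analogue of the comment-link harvesting step (the real thing
# was a customized Perl script). The CSS class is hypothetical - templates varied.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse

def commenter_links(post_url):
    """Return the set of URLs that signed commenters linked to their names."""
    html = requests.get(post_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    links = set()
    for a in soup.select(".comment-author a[href]"):   # hypothetical selector
        href = a["href"]
        if urlparse(href).scheme in ("http", "https"):
            links.add(href)
    return links
```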

Blogs have links between them a) in the content, b) in the blogroll, and c) in signed comments... other studies have used basically any link on the page, but the fact is that a link within a post isn't really saying much (a little link love, but not a real endorsement). Blogrolls are some sort of endorsement, typically, and signing a comment means *something*.

So then I ran all the typical SNA things across it to look at central actors and to find cohesive subgroups. As far as centrality - no real surprises. As far as cohesive subgroups - a bit more tricky. Basically one large component - and not terribly clumpy, with the exception of the astro bloggers - they're pretty tight. Most of the community detection techniques use a binary split - or start with binary splits - and none of these were at all effective in dividing up the hairball. Spin glass, OTOH, worked beautifully to return 7 clusters. So then I went back, looked at the blogs, and figured out the commonality for each of the clusters (yes, I could have used some NLP to extract terms and automatically label the clusters, but there were 7 so...).
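(If you want to play along at home, the run looked roughly like this - here with python-igraph, which is not necessarily what I used, and a made-up edge list standing in for the real blog network; spin glass is the Reichardt & Bornholdt method that igraph implements.)

```python
# Sketch of the analysis step with python-igraph (not necessarily the tool
# used for the paper). The edge list is a stand-in for the real blog network.
import igraph as ig

edges = [("blogA", "blogB"), ("blogB", "blogC"), ("blogA", "blogC"),
         ("blogC", "blogD"), ("blogD", "blogE"), ("blogE", "blogC")]
g = ig.Graph.TupleList(edges, directed=False)

# the "typical SNA things": centrality scores to pick out prominent blogs
for name, score in zip(g.vs["name"], g.betweenness()):
    print(name, score)

# spin-glass community detection - the method that finally broke the
# hairball into 7 clusters on the real data
clusters = g.community_spinglass()
for i, members in enumerate(clusters):
    print("cluster", i, [g.vs[m]["name"] for m in members])
```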

The single component isn't too surprising because we know from diffusion of innovations for ICTs that we would expect people to pick this up from other people and then probably link back. The power law degree distribution is also very typical when you're talking the activities of people (whether Lotka, Zipf, Pareto, Bradford.... whatever law). The clusters were related to subject areas - very broad subject areas. One question in my mind was how much people would be outside of their home discipline in their reading/commenting... based on this network, certainly outside of their particular speciality, but still in the neighborhood with the exception of a few "a-list" science bloggers who everyone reads.
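(The degree distribution claim is easy to eyeball, if not rigorously: count how many blogs have each degree and plot it on log-log axes - a roughly straight line is the heavy-tail signature. A proper fit would need something like the Clauset, Shalizi & Newman approach; this is just the quick look, with a toy degree sequence.)

```python
# Quick-and-dirty check of a power-law-ish degree distribution:
# tabulate degree frequencies and plot them on log-log axes.
from collections import Counter
import matplotlib.pyplot as plt

degrees = [1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 5, 8, 13, 21]   # toy degree sequence
freq = Counter(degrees)
ks = sorted(freq)

plt.loglog(ks, [freq[k] for k in ks], "o")
plt.xlabel("degree k")
plt.ylabel("number of blogs with degree k")
plt.show()
```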

What was interesting - and most definitely worthy of further investigation - is this cluster of blogs written mostly by women, discussing the scientific life, etc. The degree distribution was much closer to uniform within the cluster, and there were many comment links between all of the nodes. This, to me, indicates other uses for the blogs and perhaps a real community (or Blanchard's virtual settlement).

Also, I picked out the troll very easily using the commenter network - so this method could be used to automate troll identification. (In the first study I talked about this guy with a physicist, and the physicist basically only reins the troll in when he's so out of bounds as to be gross... so ID-ing a troll doesn't necessarily mean banning.)

I'm quickly running out of steam in this blog post - but this might end up being a pilot for my dissertation, so I'm definitely more than happy to talk about it either in the comments here, or on slideshare, or on friendfeed... or twitter or... just look for cpikas :)

Sunday, January 04, 2009
  Comps readings this week
Finished Yin
Yin, R. K. (2003). Case study research: Design and methods (3rd ed.). Thousand Oaks, CA: Sage Publications.
This is one of those books that's a mandatory read if you take certain methods classes in education or business. So if you have a member of your committee who was trained in either of those areas :)
I have to say that I'm not impressed. There were some useful nuggets here and things I can use, but I found it pretty superficial after reading Patton, Wolcott, Rubin & Rubin, the Sage Handbook, Miles & Huberman, Maxwell, Cresswell, etc. It just seems like a lot of handwaving (Wolcott is, too, but at least you come out of that feeling like you've gotten a pep talk!). At least it was short - oh and if/when/as I get back on the horse for fixing up and resubmitting my journal article, I'm going to cite this for how I actually used NVivo (as a case study database). Maybe I should move Wolcott up in my readings to get that pep talk going so I can get motivated to rewrite...

also read:
Zimmerman, A. S. (2008). New knowledge from old data - the role of standards in the sharing and reuse of ecological data. Science Technology & Human Values, 33(5), 631-652.
Excellent. I wish I had read this prior to attending the HCIL Workshop. She was interested in how ecologists reuse data collected by other scientists for other purposes. She located her participants by finding articles published in a major ecology journal that used other data sets. She conducted 90-minute semi-structured interviews with 13 ecologists who were first authors on these papers (she also notes a reference that indicates that in ecology the first author is the primary researcher :) ). It turns out that methodological standardization really isn't the thing, because there are so many reasons for using different techniques in ecology. The scientists use the informal knowledge gained through their own field work to judge the results, and often examine datasets point by point. They also use their knowledge of the data taker (and their commitment to the organism) to judge the value. The participants were able to articulate these criteria, so there might be things that can be added to repositories of ecological data to make reuse and the scaling up of these projects more doable.
This is a great addition to the Borgman book which talks in more general terms. And it's short :)

Rowlands, I. (2007). Electronic journals and user behavior: A review of recent research. Library & Information Science Research, 29(3), 369-396. DOI:10.1016/j.lisr.2007.03.005
ACK! He refers to KTL Vaughan as a "he" several times! Good grief. A little research would have solved that problem (btw, I had forgotten KT had a JASIST article, cool.)
This article is sort of about my life - nothing new here, and I've read almost all of the articles he discusses. With that said, it is a decent overview that brings Tenopir's 2003 report up to date.

Hine, C. (2006). Databases as scientific instruments and their role in the ordering of scientific work. Social Studies of Science, 36(2), 269-298. DOI:10.1177/0306312706054047
This was cited in the Zimmerman piece above. A good read, but not as accessible - probably due to the "framework" she uses and her writing style more than anything else. This article comes from a large ethnographic study surrounding the building of a genome database. Previous studies of databases in science looked at databases as dissemination tools, but more and more they function as scientific instruments. She uses social and natural orderings from Knorr-Cetina as the framework - but I read that book (Epistemic Cultures), and I don't see this as a framework, exactly. Anyway - this was apparently a huge study, and the paper has a lot of good stuff once you get away from the "ordering" business. Like how the computer scientists (in this case providing a service) and the scientists work together - in trading zones, not becoming homogeneous. How the computer scientists learn about the scientific culture, and enough of what needs to happen, from what are sometimes very vague requirements (too bad she doesn't cite Collins' more recent work on the various expertises - it seems very relevant). The genomic researchers in her study were like the microbiology researchers in Knorr-Cetina's book in that the lead of the lab did most of the talking to the outside world, but there were some differences due to this cross-cutting database. Oh, and to get submissions to the db - the thing would calculate mappings for you, but only once you'd uploaded some data. She seemed to think this was more carroty, and the required-before-publication model was more sticky. I would have liked to see more quotes and more concrete findings from her study - but I suppose those must be in other papers.

Traweek, S. (1988). Beamtimes and lifetimes: The world of high energy physicists. Cambridge, MA: Harvard University Press. (but only the Prologue: An anthropologist studies physicists)
Compare to the intro chapters in Latour & Woolgar as well as the Knorr-Cetina book mentioned above. Seems de rigueur for writing a book that will be read by your participants, particularly when your participants are practitioners in a quantitative/experimental domain. Still, a nice chapter to perhaps assign to a group in a basic qualitative research class - it sets up exactly what an anthropologist in a physics lab (vice overseas) does.

Pinch, T. J., & Bijker, W. E. (1987). The social construction of facts and artifacts: Or how the sociology of science and the sociology of technology might benefit each other. In T. P. Hughes, & T. J. Pinch (Eds.), The social construction of technological systems: New directions in the sociology and history of technology (pp. 17-50). Cambridge, MA: MIT Press.
Hm. I see, but still difficult to see how science and technology can be studied together. Bike stuff is cool, too.

Morris, S. A., & Van Der Veer Martens,B. (2008). Mapping research specialties. Annual Review of Information Science and Technology, 42, 213-295.
Whoa, this is one looong article :) It's actually really helpful and I recommend it. It provides a practical history and framework for locating groups in science, and it provides links between the various co-occurrence measures and what people are trying to map. I've done some of this at work in bits and pieces, pulling assorted measures out of, erm, the literature, and then trying with various amounts of success to put together a clear picture of what I did and why, as well as of the results. So this is pretty cool. I would recommend this for independent information professionals, for information scientists outside of libraries in corporations or research organizations, and for special librarians, who should get work performing this function.
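(The simplest of those co-occurrence measures is co-citation - two references count as co-cited whenever they show up in the same reference list - and it's about five lines of counting. The toy example below is mine, not the article's.)

```python
# Toy co-citation counting, the simplest co-occurrence measure used in
# mapping research specialties. The reference lists are invented.
from collections import Counter
from itertools import combinations

reference_lists = [
    ["Garvey1979", "Latour1986", "Borgman2007"],
    ["Latour1986", "Borgman2007", "Hine2006"],
    ["Garvey1979", "Borgman2007"],
]

cocitation = Counter()
for refs in reference_lists:
    for a, b in combinations(sorted(set(refs)), 2):
        cocitation[(a, b)] += 1

for pair, count in cocitation.most_common(3):
    print(pair, count)   # pairs with high counts suggest a shared specialty
```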

Started reading:
Garvey, W. D. (1979). Communication, the essence of science: Facilitating information exchange among librarians, scientists, engineers, and students. New York: Pergamon Press.
He (like others writing at the same time) was a real fan of librarians (thank you!). What happened? Anyway, he starts in his preface by discussing a theme that ran through a NATO information science conference in 1973: that more user studies were needed to make progress (sound familiar?). This book was written for a science librarian audience, to bring them up to speed on Garvey's work (with Griffith, Lin, et al.) on how scientists communicate. I should have read this for my big lit review on informal scholarly communication - in particular since it has reprints of like 10 key articles as the appendix (sure would have been quicker than finding them all separately). Good stuff here, but so far it's mostly stuff covered in those earlier articles. Also, he follows the standard paradigm of basic > applied > engineering... which is pretty much discredited now (see the stuff from last week).
