Comps readings this week
Finished Shapiro. Well, to be honest, I skimmed the last few chapters. To be fair, he's entirely against ruling by poll; I indicated the opposite last week after reading only the first few chapters. It's still dated, so not really recommended.
Wolek, F. W., & Griffith, B. C. (1974). Policy and informal communications in applied science and technology.
Science Studies, 4(4), 411-420. DOI: 10.1177/030631277400400406
'60s research on communication in science showed that progress in science and technology relied on informal communication, but it's harder for institutions to encourage and support informal communication through policy. Formal and informal communication are interrelated and both must work together.
Kelly, D., & Fu, X. (2007). Eliciting better information need descriptions from users of information search systems. Information Processing & Management, 43(1), 30-46. DOI: 10.1016/j.ipm.2006.03.006

See separate post.

Wasko, M. M., & Faraj, S. (2005). Why should I share? Examining social capital and knowledge contribution in electronic networks of practice. MIS Quarterly, 29, 35-57.
Well-described statistical methods make me happy. You can count on the MIS literature to do the stats carefully, unlike some of the other stuff I read. Now to content. They describe electronic networks of practice as being like communities of practice but entirely computer mediated, geographically distributed, self-organizing, voluntary, and with little f2f interaction. For me, an example is PAMnet or, even better, CHM-INF. Organizations benefit from these, even when members come from competitors, because the knowledge doesn't necessarily exist within the organization, particularly in areas with a high rate of technological change.
The authors wanted to know why individuals contribute when some of the standard theories of social capital and knowledge sharing don't translate to electronic networks. They came up with a series of hypotheses based on social exchange theory, looking at individual motivations (reputation, wanting to help people), structural capital (centrality - relationships in the network), cognitive capital (expertise, tenure in the field), and relational capital (commitment to the network and perceptions of reciprocity).
They studied a bulletin board system for an association of lawyers. They measured centrality and did a content analysis to identify the questions and responses, grading each response from not helpful to very helpful on a 4-point scale (one author plus a subject matter expert, kappa .84). Each person got an average helpfulness score and a count of responses. They then sent a survey to each of the responders, using questions pulled from previous studies, and matched the responses with the membership directory to get some demographics. They ran all kinds of tests to show their measures were usable. They did partial least squares regressions - well, two: one for volume of responses and another for helpfulness.
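Just for my own notes, roughly what those two regressions look like in code - a sketch with invented data, using scikit-learn's PLSRegression as a stand-in for the authors' actual PLS analysis, and placeholder predictors standing in for their constructs:

```python
# Rough sketch only: invented data, not the paper's measures or results.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(42)
n = 200  # hypothetical number of responders

# Placeholder predictors: reputation motivation, centrality, tenure in field
X = rng.normal(size=(n, 3))

# Two outcomes, as in the paper's two models: volume of responses and helpfulness
volume = 0.6 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.5, size=n)
helpfulness = 0.3 * X[:, 2] + rng.normal(scale=0.5, size=n)

for name, y in [("volume", volume), ("helpfulness", helpfulness)]:
    pls = PLSRegression(n_components=2).fit(X, y)
    print(name, pls.coef_.ravel())  # weights on each placeholder predictor
```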
Their findings: the perception that participation enhances professional reputation is the biggest predictor of knowledge contribution. There's weak evidence that people who enjoy helping provide more helpful responses. Reciprocity and commitment didn't do anything, interestingly.
Hew, K. F., & Hara, N. (2007). Knowledge sharing in online environments: A qualitative case study.
Journal of the American Society for Information Science and Technology, 58(14), 2310-2324. DOI: 10.1002/asi.20698

This follows on from Wasko & Faraj (both the article above and their earlier one). It is a qualitative study of three electronic networks of practice: mailing lists for nurses, university web developers, and literacy educators. They were trying to figure out what types of knowledge were shared as well as the barriers and motivators to sharing. It really is enlightening to compare an MIS article with an info sci article: MIS clearly derives its measures from theory, while info sci cites articles but doesn't clearly derive its measures/categories from theory. This article also gave a lot of information on the method, so that makes me happy (in case they care!). The authors combined qualitative content analysis of a pile of postings with semi-structured interviews (57 participants over 14 months, wow).

Biggest motivators: collectivism and reciprocity (compare to the lawyers above). Biggest barriers: having no additional knowledge to contribute, technology (more that it was inconvenient or people forgot, not that they couldn't get the e-mails to fly), lack of time, and unfamiliarity with the subject being discussed. Categories of knowledge: nurses - mostly institutional practice (this is how we do it at mpow), with personal opinion next biggest; web developers - split pretty equally among institutional practice, personal suggestion, and personal opinion; literacy educators - split pretty evenly between personal suggestion and personal opinion.

This is one place where I would have liked to see them repeat the survey used in Wasko & Faraj - since it held together pretty well - for direct comparability in terms of social exchange theory-derived motivations. Also, they reported percent agreement but not Cohen's kappa, which is probably more useful because it accounts for the agreement that would happen by chance alone. Of course, they negotiated all of their disagreements out, so it's not so important.
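To make that kappa point concrete, here's a minimal sketch (made-up ratings for two coders, not data from either paper) showing how percent agreement can look respectable while kappa, which discounts chance agreement, stays modest:

```python
from collections import Counter

# Made-up ratings from two coders; not data from either paper.
coder_a = ["helpful", "helpful", "helpful", "not", "helpful", "helpful", "not", "helpful"]
coder_b = ["helpful", "helpful", "not",     "not", "helpful", "helpful", "helpful", "helpful"]

n = len(coder_a)
# Percent agreement: how often the coders gave the same label.
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Chance agreement: how often they'd match anyway, given how frequently
# each coder uses each label.
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected = sum((freq_a[label] / n) * (freq_b[label] / n) for label in freq_a | freq_b)

# Cohen's kappa: agreement beyond chance, scaled by the maximum possible.
kappa = (observed - expected) / (1 - expected)
print(f"percent agreement = {observed:.2f}, kappa = {kappa:.2f}")
```

With these made-up ratings, agreement is 75% but kappa comes out around 0.33, because both coders say "helpful" most of the time and would often agree by luck alone.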
Van House, N. A. (2002). Trust and epistemic communities in biodiversity data sharing. JCDL '02: Proceedings of the 2nd ACM/IEEE-CS Joint Conference on Digital Libraries, Portland, Oregon, USA, 231-239.
Not sure about this one - I think the article has a lot of value for how well it explains various aspects of social epistemology. I think I actually get Knorr-Cetina's epistemic cultures better after reading this short piece than after reading her whole book. With that said, I'm not 100% sold on the usefulness of this paper (or approach) for designing digital libraries - I want to believe - but I'm not there yet. It's definitely promising. Maybe I'll have to give it another read.
Labels: comps