Should more emphasis be placed on successive fractions and less on the building block approach?
While revising my &*#$ paper (again, ack, just make it go away!), my mind started wandering to how I would teach the advanced searching in electronic environments class (750 at Maryland, but I think every library school has one). I would absolutely require the use of DIALOG -- that's a no-brainer. I was thinking, though, that what I do in real life when I use SciFinder (not Scholar, we're a special library), and also when I use the faceted presentation of search results in Ei Engineering Village (where you can search the thesaurus, but you can't directly access the classification codes), is more the successive fractions approach.
Although Buntrock and others coined these terms way back ('79? or earlier?), I usually refer to an article by Don Hawkins [*]. For DIALOG, I almost always use the building block method: I decompose the search into concepts, map those concepts to controlled vocabulary or natural language terms, OR the terms within each concept, and then AND the concepts together. I keep a set for each of these so I can make my results set smaller or bigger by ANDing in or removing terms. I then type a few random hits in free format to see what's coming up, and iterate... I do this, too, in databases to which we have a site license. I like to do a series of quick searches and then go to the search history page and recombine them in various ways. I keep notes as I go to make sure I've covered all of the bases. (Somehow Illumina and PubMed seem to encourage me to work this way -- don't know why, but they do keep the search history longer and make it friendly to recombine sets.)
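For the non-searchers reading along, here's a minimal sketch of the building block idea in Python, with sets standing in for the numbered result sets a system like DIALOG keeps. The document IDs and the term-to-results mapping are all invented for illustration -- the point is just the OR-within, AND-across shape.

```python
# Pretend each term search returns a set of document IDs.
# (All IDs and terms below are made up.)
term_hits = {
    "solar cells":   {1, 2, 3, 5, 8},
    "photovoltaics": {2, 3, 9},
    "thin film":     {3, 5, 7, 9},
    "thin-film":     {3, 7},
}

# Concept 1: OR the synonyms and variants within the concept.
concept1 = term_hits["solar cells"] | term_hits["photovoltaics"]

# Concept 2: same for the second concept.
concept2 = term_hits["thin film"] | term_hits["thin-film"]

# AND the concepts together for the final set.
final = concept1 & concept2
print(sorted(final))  # → [3, 5, 9]
```

Keeping `concept1` and `concept2` around as separate sets is what makes the "bigger or smaller" tuning easy: drop a term from one concept's OR, or AND in a third concept, and re-intersect.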
I was trying to explain to an expert searcher earlier this week how SciFinder works. Since we're paying per task (and we are flat broke for the year, which doesn't end til 9/30 -- and we're going through tasks much faster this year -- and we're over budget in DIALOG, too -- and we can't buy more SciFinder tasks), and we have an access method that doesn't allow precision searching (a librarian's nightmare), we do these huge searches -- ones that yield like 10,000 results. Then we analyze, refine, and combine with saved sets. Really, it's very much successive fractions. Funny, I barely remember that from 750.
SciFinder is an extreme case, but I've taken to encouraging a sort of overview-and-zoom approach to Compendex using the facets. I'd rather have the end users not start out too specific, but instead start out sort of vague and then narrow by terms appearing in the controlled vocabulary or the classification codes. It seems more likely that they'd get some serendipitous finds that way. Domain expert end users can also sift through a larger results set better than information seeking or system experts like most librarians can [**].
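The overview-and-zoom version of successive fractions can be sketched the same way -- one broad search, then whittle the big set down facet by facet, instead of building the perfect query up front. The records, classification codes, and terms below are invented, not real Compendex data.

```python
# Toy document records with a classification code facet and
# controlled-vocabulary terms. (All values are made up.)
records = [
    {"id": 1, "code": "706.1", "terms": {"fuel cells", "electrolytes"}},
    {"id": 2, "code": "706.1", "terms": {"fuel cells", "membranes"}},
    {"id": 3, "code": "804.2", "terms": {"fuel cells", "catalysis"}},
    {"id": 4, "code": "804.2", "terms": {"polymers", "membranes"}},
]

# Fraction 1: the broad, vague search.
hits = [r for r in records if "fuel cells" in r["terms"]]

# Fraction 2: narrow by a classification code facet.
hits = [r for r in hits if r["code"] == "706.1"]

# Fraction 3: narrow again by a controlled-vocabulary term.
hits = [r for r in hits if "membranes" in r["terms"]]

print([r["id"] for r in hits])  # → [2]
```

Each fraction operates only on the previous result set, which is why a domain expert who can eyeball the intermediate sets does so well with it.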
I'd like to see an experiment to find out whether that's the case -- how the end results set fares for building block searches versus narrowing by successive fractions. Theoretically, for the same searcher, they should be about the same... hm.
[*] Hawkins, D. T., & Wagers, R. (1982). Online bibliographic search strategy development. Online, 6(3), 12-19.
[**] Marchionini, G. (1995). Information seeking in electronic environments. New York: Cambridge University Press.