ASIST2005: Establishing a Research Agenda for Studies of Online Search Behavior
Monday, October 31, 2005, 8 a.m.
They did a Delphi study. They started with roughly 80 variables suggested at a SIGUSE meeting in the fall of 2002, and surveyed ASIST members and selected other researchers who have published multiple studies on online search behavior. The survey provided scope notes on the variables, and room was provided for open comments. There were 77 respondents to the first round. Almost all variables were rated very highly, though there was some survey fatigue: only 56 people rated the last few variables. One comment received was that all of the variables might be useful in some study. Several new variables were suggested. In round two, they dropped the 12 original variables with mean scores below 3.5 (on a 1-5 scale) and added several more. Tables with the results of both rounds are available in the proceedings.
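For readers curious what the round-two cut amounts to in practice, here is a minimal sketch of that kind of filtering: compute each variable's mean rating on the 1-5 scale and drop those below 3.5. The variable names and ratings below are invented for illustration only; the actual data are in the proceedings tables.

```python
# Hypothetical sketch of the round-two cut described above:
# keep variables whose mean rating (1-5 scale) is at least 3.5.
# Variable names and scores are made up for illustration.
from statistics import mean

CUTOFF = 3.5

# ratings keyed by variable name; each list holds one 1-5 score per respondent
ratings = {
    "search terms used": [5, 4, 5, 4, 5],
    "screen resolution": [2, 3, 3, 2, 4],
    "experience with searching": [5, 5, 4, 4, 5],
}

kept = {name: mean(scores) for name, scores in ratings.items() if mean(scores) >= CUTOFF}
dropped = {name: mean(scores) for name, scores in ratings.items() if mean(scores) < CUTOFF}

print("carried into the next round:", sorted(kept))
print("dropped (mean below 3.5):", sorted(dropped))
```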
Round 3 dealt with measurements. Only 47 people responded, due in part to survey fatigue and in part to technical issues with Survey Monkey. The results seem a bit ambiguous: on the same question, roughly equal numbers called a measurement a strength and a weakness (for example, "hard to measure": 8 votes, "easy to measure": 9 votes).
Of the respondents to the survey, more than half had more than four relevant publications. Of those, many felt that search terms were really important. Sorting of results and ranking by relevance were also important. Information literacy skill was important, although it might be a difficult variable to measure. Among experience variables, experience with searching was rated very important; experience with the internet, less so. System variables were generally not rated as highly as the cognitive and affective user variables.
Methods for data collection (part 3) were all over the place. There was a dichotomy: the most important variables were user-centric, but there was the most disagreement on how to measure them, and even statements that they cannot be measured. Everyone agrees that transaction logs are important, but also that they don't answer the most important question: why?
What we don't know from this: the group of respondents was very closed (the publishing faculty), so what do other disciplines have to say? There were no surprises; is there a left field, and do we know what we're missing? Do we know what we don't know? The study could have been done the same way in 1985; what about Google searching behavior?
Strength of preference was not captured, since respondents could mark everything a 5. Perhaps they should have been asked to pick the 10 most important variables, but in developing work you do need to be selective.
Measurements were considered important if they were authentic and valid and did not rely on memory. The respondents fell into competing camps, holistic and positivist, and so disagreed. There should be a way to apply both schools, or to triangulate.
Sandra from MSR on next steps:
Clarify whether web searching is included. The search topic and task variables were all rated very highly; how can we take that and craft a research agenda? Select a small group with very diverse points of view (international, levels of experience, quantitative vs. qualitative), and also invite practitioners from different domains.

Cross-posted to the annual meeting blog.