ASIST2006 Closing Plenary: Susan Dumais
(Microsoft)
http://research.microsoft.com/~sdumais
(came in at about 10:37 and she was already in full swing)
Views of IR: from empty box to results set, with the system in between treated as a black box
We need to work on user context. Right now we treat results as independent things and don't look at the interrelationships among them
User contexts:
- individual people (in social identities or in groups; implicit/explicit; short/long term)
- domain variables (structure of the domain, usage patterns, inter/intra-document contexts)
- user/task context (information goal: informational, navigational, transactional, browsing, monitoring, doing, research, learning; physical setting)
What are the key influence points (where do you get a chance to do things differently for different contexts)?
- query specification
- system (match/ranking algorithms)
- results presentation
Examples of context in IR: sometimes they work and sometimes they don't. Some parts of Amazon, like "add to my wish list" and "other books you might like"; context-sensitive ads
Framework for Analyzing Contexts
Research in individual differences and learning
- assay (which characteristics predict differences in performance, and how big are the effects? Example: reading ability)
- isolate source of variation
Aptitude by Treatment Interaction (a treatment that works really well for people with characteristic 1 may work really poorly for people with characteristic 2; the benefit/cost depends on the ability to diagnose who has characteristic 1 vs. characteristic 2)
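One way to read that benefit/cost point is as an expected-value calculation: adapting the treatment pays off only if the gain for correctly diagnosed characteristic-1 users outweighs the harm to characteristic-2 users who get misdiagnosed. A toy sketch (all numbers and the function are my own illustration, not from the talk):

```python
# Toy expected-value sketch for an aptitude-by-treatment decision.
# All numbers are illustrative, not from the talk.

def expected_gain(p_char1, diagnosis_accuracy, gain_if_right, loss_if_wrong):
    """Expected per-user gain from applying the characteristic-1
    treatment to everyone the diagnostic labels as characteristic 1."""
    # Char-1 users correctly labeled char-1 benefit;
    # char-2 users mislabeled as char-1 are hurt.
    true_pos = p_char1 * diagnosis_accuracy
    false_pos = (1 - p_char1) * (1 - diagnosis_accuracy)
    return true_pos * gain_if_right - false_pos * loss_if_wrong

# Half the population is char-1, diagnosis is 80% accurate, the treatment
# helps char-1 users by 10 units and hurts char-2 users by 5:
print(round(expected_gain(0.5, 0.8, 10, 5), 6))  # 3.5 — worth doing
```

With a less accurate diagnostic or a higher cost of misdiagnosis, the same arithmetic can go negative, which is exactly the "is it worth dealing with context?" question below.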
Is it worth dealing with context?
Depends on
- ease of identifying context (for a smaller group like a school you might be able to get this more easily than for a general web population)
- accommodation
- what the benefits are
Domain contexts:
- what are you searching? Web, images, video? Is domain selected or automatically inferred?
- Influence on ranking? – is time important? Link topology? Usage patterns?
- See, for example, different information exposed in Google Scholar, Google Maps, Google Local
Search Macros
(like Rollyo, Search Macros (MSN, http://search.live.com/macros), custom search engines (Google))
Desktop Search (example of domain characteristic)
- known: something you've seen before, regardless of source
- time is really important; people don't, in the long term, remember absolute dates, but they remember them relative to certain events or occurrences (example: after/during the earthquake)
- richer metadata is important for desktop search; we don't want the "best" email, we know a lot about what we're looking for, so we need much richer metadata (yes, ah-ha! ammunition for fighting the IT culture of "no", just-take-what-G-gives-us point of view)
- fast scrolling instead of next-next-next paging
- quick search like one word, then check boxes or pick off relevant metadata points
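Those desktop-search points (quick one-word query, then narrow by rich metadata, with time anchored to a remembered event rather than an absolute date) could be sketched roughly like this; the item fields and the earthquake landmark are hypothetical stand-ins:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Item:
    text: str
    author: str
    kind: str   # "email", "doc", ...
    seen: date  # when the user last saw it

# Hypothetical landmark event: people remember "after the earthquake",
# not the absolute date.
EARTHQUAKE = date(2006, 4, 20)

def search(items, word, kind=None, after_event=None):
    """Quick one-word search, then narrow via metadata checkboxes."""
    hits = [it for it in items if word.lower() in it.text.lower()]
    if kind is not None:
        hits = [it for it in hits if it.kind == kind]
    if after_event is not None:
        hits = [it for it in hits if it.seen > after_event]
    # Rank by recency rather than a "best match" score, since the
    # user already knows what the item is.
    return sorted(hits, key=lambda it: it.seen, reverse=True)

items = [
    Item("budget spreadsheet", "ann", "doc", date(2006, 3, 1)),
    Item("re: budget meeting", "bob", "email", date(2006, 5, 2)),
]
print([it.author for it in search(items, "budget", after_event=EARTHQUAKE)])
# ['bob']
```

The design choice mirrors the notes above: metadata filters do the disambiguation, and recency ordering replaces relevance ranking for known items.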
Query Context
- informational, navigational, transactional?
- Length, frequency, category
- Where do they go next
- Time of the year, external events
If you’re running an operational system, look at the logs, see what queries get 0 results.
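That operational advice is easy to mechanize; a minimal sketch, assuming a query log represented as (query, result_count) pairs (the log format here is made up):

```python
from collections import Counter

def zero_result_queries(log):
    """Return queries that got no results, most frequent first."""
    counts = Counter(query for query, n_results in log if n_results == 0)
    return counts.most_common()

# Hypothetical log entries: (query string, number of results returned).
log = [
    ("susan dumais", 12),
    ("asist 2006 plenary", 0),
    ("asist 2006 plenary", 0),
    ("aptitude treatment interaction", 3),
]
print(zero_result_queries(log))  # [('asist 2006 plenary', 2)]
```

Sorting by frequency surfaces the failing queries that hurt the most users first.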
Personalized Web Results
- lots of information from history and information stored
- tabs: she really doesn't like them; people don't look at different tabs, and you can't compare them
- various UI issues
IR in context
- tremendous opportunities and challenges
Audience questions:
1) privacy: in web search, contextual ads are more acceptable, but in e-mail they can sometimes be intrusive, like ads to buy "free" research papers.
2) my favorite question: different personalities online. He talked about buying a jazz CD and a biography of a sociologist (Amazon has a "why is this recommended" button; hm, who knew?). Algorithmically, you can maybe look at time of day, e.g. evening searches on sports. Transparency: why did it return that? Turn personalization on and off (see, I don't want it off for knitting, I just want it not to co-mingle knitting with lidar or signal processing)
Labels: asist2006