
Page 1: Evaluating Search Interfaces

Evaluating Search Interfaces

Marti Hearst, UC Berkeley

Enterprise Search Summit West, Search UI Design Panel

Page 2: Evaluating Search Interfaces

Marti Hearst, UI Design Panel 2007

Evaluating Search Interfaces

This is very hard to do well.

First, a recap on iterative design and evaluation. Then I’ll present some do’s and don’ts.

Page 3: Evaluating Search Interfaces


Interface design is iterative

Design

Prototype

Evaluate

Page 4: Evaluating Search Interfaces


Discount Testing vs. Formal Testing

Discount testing:
- Fast
- A small number of participants (~5)
- Test mock-ups and prototypes in addition to finished designs
- Learn about what doesn’t work, a bit about what does, and maybe new good ideas for future iterations

Formal testing:
- More time-consuming
- Needs many participants (often still too few)
- Tests particular components or principles to be used by others
- Learn whether something is better than something else, and by how much
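The “small number of participants” claim for discount testing is often justified with the problem-discovery model popularized by Nielsen and Landauer, 1 − (1 − p)^n. A minimal sketch, assuming an average per-participant problem-detection probability of p = 0.31 (a commonly cited figure, not one from these slides):

```python
# Sketch: expected fraction of usability problems found by n test
# participants, under the problem-discovery model 1 - (1 - p)^n.
# p = 0.31 is an assumed average detection probability per participant.

def problems_found(n: int, p: float = 0.31) -> float:
    """Expected proportion of usability problems uncovered by n testers."""
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 10):
    print(f"{n:2d} participants -> {problems_found(n):.0%} of problems")
```

Under these assumptions, five participants already surface roughly 85% of the problems, which is why small discount studies are so cost-effective for finding what doesn’t work.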

Page 5: Evaluating Search Interfaces


Qualitative Semi-Formal Studies

After the design has been mocked up, evaluated, and redesigned several times:
- Evaluate the system holistically or in parts with a large user base
- Watch the participants use the system on their own queries
- Use Likert scales to get subjective responses to different features
- Find bugs
- Find features/tasks that need to be streamlined
- Determine the next round of useful features

Refine and test again.
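The Likert-scale step above can be summarized per feature with simple descriptive statistics. A minimal sketch, where the feature names and 5-point scores are made-up illustrations, not data from the talk:

```python
# Sketch: summarizing 5-point Likert responses per feature after a
# semi-formal study. Feature names and scores are hypothetical.
from statistics import mean, median

responses = {
    "faceted navigation": [5, 4, 4, 5, 3, 4],
    "query suggestions":  [3, 2, 4, 3, 3, 2],
}

for feature, scores in responses.items():
    top2 = sum(s >= 4 for s in scores) / len(scores)  # "top-2-box" share
    print(f"{feature}: mean={mean(scores):.1f}, "
          f"median={median(scores)}, top-2-box={top2:.0%}")
```

Medians and top-2-box shares are often preferred over means for ordinal Likert data; reporting all three makes it easy to spot features that polarize users.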

Page 6: Evaluating Search Interfaces

Do Use Motivated Participants

Participants need to know and care about the search goal

(Jared Spool, UIE.com)

Page 7: Evaluating Search Interfaces

Do Longitudinal Studies

Have people use the system for their own needs for several weeks or months.

Observe changes in behavior and in subjective preferences.

Page 8: Evaluating Search Interfaces

Do Add New Features Gradually

If you’re doing something new with search, start simple and see what works, then add more features, evaluating as you go.

Page 9: Evaluating Search Interfaces

Beware of Query Sensitivity

In search engine comparisons, variability between queries/tasks can be greater than variability between systems.
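This point is easy to see numerically. A minimal sketch with hypothetical per-query scores (e.g. relevance scores on the same six queries) for two systems; the numbers are illustrations only:

```python
# Sketch: query variability vs. system difference.
# Hypothetical per-query scores for two systems on the same six queries.
from statistics import mean, stdev

system_a = [0.90, 0.35, 0.70, 0.20, 0.85, 0.40]
system_b = [0.88, 0.40, 0.72, 0.25, 0.80, 0.45]

between_systems = abs(mean(system_a) - mean(system_b))
across_queries = stdev(system_a)  # spread over queries for one system

print(f"mean difference between systems: {between_systems:.3f}")
print(f"std dev across queries (system A): {across_queries:.3f}")
```

Here the spread across queries (~0.29) dwarfs the gap between systems (~0.02), so averaging over too few or unrepresentative queries can easily rank the systems the wrong way. Running both systems on the same query set and comparing per-query (a paired design) controls for this.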

Page 10: Evaluating Search Interfaces

Beware of Cool vs. Usable

Some things are eye-catching, but serve best to draw the user in.

Will they really like it over time? Or, if they don’t like it at first, will they learn to like it? (This is rarer.)

Page 11: Evaluating Search Interfaces

Do Compare Against a Strong Baseline

Compare your new idea against the best, most popular current solution.

A good test: “How often would you use this system?”

Page 12: Evaluating Search Interfaces

Subjective vs. Quantitative Measures

Time to complete the task can be a misleading metric.

Subjective impressions are key for determining search interface success.

Page 14: Evaluating Search Interfaces


Summary

Search evaluation is hard because of huge variations in:
- Information needs
- Searchers’ knowledge and skills
- Collection contents

A good strategy is to:
- Add a few features at a time, testing as you add
- Obtain subjective preference information
- Measure over time using longitudinal studies