Proposed Working Memory Measures for Evaluating Information Visualization Tools

BELIV 2010 Workshop
http://www.beliv.org/beliv2010/

Laura Matzen, Laura McNamara, Kerstan Cole, Alisa Bandlow, Courtney Dornburg & Travis Bauer

Sandia National Laboratories

Albuquerque, NM 87185

This work was funded by Sandia’s Laboratory Research and Development Program as part of the Networks Grand Challenge (10-119351).

Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

Evaluation of information visualization tools

Evaluations are typically developed for a single, specific task and tool
They are time consuming and expensive, and the results cannot be generalized

We propose using measures of cognitive resources to create standardized evaluation metrics

Why assess cognitive resources?

All analysis tasks are cognitively demanding

Human cognitive resources are finite

Well-designed interfaces should free cognitive resources for making sense of data
They should reduce the cognitive burden of searching for and manipulating data

Working Memory

Mental workspace underlying all complex cognition

Has a limited, measurable capacity

Often used as a performance metric in other domains

Proposed methodology

Evaluate visual analytics interfaces using a dual-task methodology:
Primary task: interaction with the interface
Secondary task: test of working memory capacity

Performance on the secondary task should correspond to cognitive resources that would be available for sensemaking in a real-world analysis task
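
As a rough illustration of how such dual-task data might be scored, here is a minimal sketch in Python; the accuracy-based measure, the baseline comparison, and the example response data are assumptions for illustration, not part of the proposed methodology.

```python
# Minimal sketch (illustrative, not from the slides): score the secondary
# working memory task as a proxy for spare cognitive capacity.
from statistics import mean

def accuracy(responses):
    """Proportion of correct responses on the secondary (working memory) task."""
    return mean(1.0 if correct else 0.0 for correct in responses)

def dual_task_cost(baseline, dual):
    """Drop in secondary-task accuracy when it is performed alongside the
    interface; a smaller drop suggests the interface leaves more working
    memory available for sensemaking."""
    return accuracy(baseline) - accuracy(dual)

# Hypothetical correct/incorrect responses for a single-task baseline block
# and for blocks run while using two interface designs, A and B.
baseline_block   = [True, True, False, True, True, True, True, False]
with_interface_a = [True, False, False, True, False, True, True, False]
with_interface_b = [True, True, False, True, True, False, True, True]

print("Cost with interface A:", dual_task_cost(baseline_block, with_interface_a))
print("Cost with interface B:", dual_task_cost(baseline_block, with_interface_b))
```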

Example working memory task: the Sternberg task (Sternberg, 1969)

Trial structure: memory set, then a delay, then probe items

Low-load memory set: M G J
High-load memory set: D K H Y R Q

[Figure: example Sternberg trial showing a memory set, a delay, and probe items]
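
A minimal sketch of how Sternberg-style trials could be generated for the secondary task; the consonant letter pool and the 50% target probability are illustrative assumptions, while the three-item and six-item set sizes follow the low-load and high-load examples above.

```python
# Minimal sketch of a Sternberg-style trial generator (assumed details:
# consonant letter pool, 50% target probability; only the low/high set
# sizes come from the example above).
import random
import string

def make_trial(set_size, rng=random):
    """Return (memory_set, probe, is_target) for one Sternberg trial."""
    letters = [c for c in string.ascii_uppercase if c not in "AEIOU"]
    memory_set = rng.sample(letters, set_size)
    if rng.random() < 0.5:
        # Target trial: the probe is drawn from the memory set.
        probe, is_target = rng.choice(memory_set), True
    else:
        # Lure trial: the probe is not in the memory set.
        probe = rng.choice([c for c in letters if c not in memory_set])
        is_target = False
    return memory_set, probe, is_target

# Low-load condition uses a 3-item set (e.g., M G J); high-load uses 6 items.
for set_size in (3, 6):
    memory_set, probe, is_target = make_trial(set_size)
    print(set_size, memory_set, "probe:", probe, "target:", is_target)
```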

Comparisons of different interface designs

Later, Compare Different Visualizations of the Same Dataset

By this time next year…
Pilot Study 1: Compare two interface designs for a simple video player with a tagging feature

Pilot Study 2: Compare two different interface designs for a visual text analytics application developed at Sandia

Use the NASA-TLX to develop convergent evidence (a workload-scoring sketch follows below)

Both studies should provide insight into the use of working-memory-based metrics for interface assessment

…we should have data!
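
For the NASA-TLX, the standard weighted workload score combines six subscale ratings (0 to 100) with weights obtained from 15 pairwise comparisons. The sketch below illustrates that computation; the example ratings and weights are made up and are not results from the pilot studies.

```python
# Minimal sketch of the standard NASA-TLX weighted workload computation.
# The subscale ratings (0-100) and pairwise-comparison weights below are
# made-up example values.
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def tlx_workload(ratings, weights):
    """Weighted workload: sum(rating * weight) / 15, since the 15 pairwise
    comparisons distribute 15 'wins' across the six subscales."""
    assert sum(weights.values()) == 15
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0

ratings = {"mental": 70, "physical": 10, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 30}
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}

print("Weighted workload:", tlx_workload(ratings, weights))
```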

Comments, please!
