Computational Interaction Design
Antti Oulasvirta, with many others
Aalto University, School of Electrical Engineering


Page 1

Computational Interaction Design
Antti Oulasvirta, with many others
Aalto University, School of Electrical Engineering

Page 2

Today

1. Computational design overview

“A new material for interaction design”

THIS

2a. Modeling basics: Task analysis

3a. Keystroke-level models: Predicting user performance

OR

2a. Jupyter Notebook on combinatorial UI optimization

Page 3

Page 4

Objectives

Computational thinking + Design thinking = Computational Design

“A new material for interaction design”

Understand the role of models

Mathematics and the science of design

Understand basics of computational solutions

Overview of a rapidly developing field

Page 5

AI and automation: Expectations

Saves human effort and increases quality.

Scalability: Can be replicated en masse

Should be: controllable, responsive to user needs, and able to adapt sensibly

Page 6

Fears

Page 7

“In future, data-driven design will possibly reach out to decision making and generative design. But we’re not there yet.”

Lassi Liikkanen, blog post, June 2017

Page 8

Machine learning does not design. Data does not design.

So what does?

Page 9

In this talk…

• UCD and Data-driven design: A view

• Computational approaches

• Model-driven design
• Generative design
• Self-designing interfaces

• A vision


15+ examples of recent research

Computational design thinking

Page 10

Did it not fail already?

Microsoft ‘Clippy’

Grid.io (AI for web design)

Page 11

UCD and Data-driven design: A computationalist’s view

Page 12

Interaction design: Process (Kees Dorst 2015 MIT Press)


Problem

Paradox

Thematic analysis

Frame generation

Solution creation

Pattern retention

Page 13

Example: Rowdy Pub Patrons

1. Problem: Pub-goers in the city disturb the peace at 2 am
2. Paradox: The city increases law enforcement, but the problems get worse
3. Themes: Young people go to pubs to have fun, be social, relax
4. Frame generation: Instead of treating this as a law-enforcement problem, reframe it as if planning for a music festival
5. Solution creation: Better transit, signage, restrooms, places to sleep it off, etc.
6. Pattern retention: Rebranding and sustained marketing of the new concept

Page 14

Most (but not all) computational methods are limited to “Solution Creation”

Page 15

Design as problem-solving

John Carroll; Nigel Cross; Stuart Card, Tom Moran, Allen Newell

Generate

Evaluate

Define

Page 16

Usability engineering

Sets measurable objectives for design and evaluation. Acquires valid and realistic feedback from controlled studies

Low cost-efficiency

Designers tend to get fixated on one solution (Greenberg & Buxton 2008)

Page 17

Rapid sketching

Accelerates the rate of idea-generation

Depends on experience and a-ha moments

Less contact with data

Page 18

Rapid prototyping

Decreases the idea-to-evaluation cost

Axure

Page 19

Online testing (A/B and MVT)

Decreases the per-subject cost of evaluation

Limited to partial features and may ‘contaminate’ users

Amazon
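To make the evaluation step concrete, here is a minimal sketch of the statistics behind comparing two variants, assuming only conversion counts per variant; the numbers are invented and the test (a two-proportion z-test) is just one common choice, not something prescribed by the slide.

```python
# Minimal two-proportion z-test for an A/B comparison; the counts are invented.
from math import sqrt, erfc

def ab_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return z, erfc(abs(z) / sqrt(2))                    # two-sided p-value

z, p = ab_z_test(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
print(f"z = {z:.2f}, p = {p:.3f}")                      # small p: variants likely differ
```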

Page 20

User analytics

Reduces the time and improves the quality of evaluation and definition

Predefined views cause myopia


Google Analytics

Page 21

Data-driven design

A combination of analytics, rapid ideation, and online testing improves the cost-efficiency of a design project and increases reliance on data rather than opinions.

http://www.uxforthemasses.com/data-informed-design/

Page 22

Data

Brain-driven design

Interpretations, definitions, and evaluations do not follow from data.

Reliance on human effort makes design costly

Susceptible to biases and relies on creative insight and pure luck

Generate

Evaluate

Define

Page 23

Model-driven design

Page 24

Model-driven design

Use models from behavioral sciences to automate Evaluation

Generate

Evaluate

Define
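As a minimal sketch of what "automating Evaluation with a model" can mean, the snippet below scores two hypothetical button layouts with Fitts' law; the coefficients and layouts are illustrative assumptions, not values from any cited study.

```python
# Sketch: automate the Evaluate step with a simple behavioural model (Fitts' law).
# The layouts and the a/b coefficients are illustrative placeholders.
from math import log2, hypot

A_MS, B_MS = 100.0, 150.0                       # assumed intercept (ms) and slope (ms/bit)

def fitts_time(dist, width):
    """Predicted pointing time in ms: a + b * log2(d/w + 1)."""
    return A_MS + B_MS * log2(dist / width + 1)

def predicted_task_time(layout, start=(0.0, 0.0)):
    """Sum predicted times for tapping each (x, y, width) button in order."""
    x, y = start
    total = 0.0
    for bx, by, w in layout:
        total += fitts_time(hypot(bx - x, by - y), w)
        x, y = bx, by
    return total

layout_a = [(100, 0, 40), (200, 0, 40), (300, 0, 40)]   # spread out, small targets
layout_b = [(60, 0, 60), (120, 0, 60), (180, 0, 60)]    # compact, larger targets
for name, layout in (("A", layout_a), ("B", layout_b)):
    print(name, round(predicted_task_time(layout)), "ms predicted")
```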

Page 25

An idea invented multiple times…

August Dvorak, Herbert Simon, Stuart Card

Page 26

Research efforts did not converge until recently

Oulasvirta, IEEE Computer 2017

Page 27

Project Ernestine

Page 28

Model-based evaluation

Predicts task performance with no empirical data. Tedious to adapt. Humans still generate designs.


Glean/EPIC

Page 29

Page 30

FSMs for error prevention

Users’ likely errors can be predicted. Tedious finite state modeling.


PVSioWeb
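A minimal sketch of the finite-state-machine idea: describe the dialogue as states and transitions, then check mechanically for trap states from which a safe state can never be reached. The toy state machine and the check are hypothetical, not the PVSioWeb workflow.

```python
# Sketch: model a UI dialogue as a finite state machine and check for traps,
# i.e. states from which the user can never get back to a safe/idle state.
# The toy state machine is hypothetical.
from collections import deque

TRANSITIONS = {                      # state -> {button: next_state}
    "idle":     {"start": "running"},
    "running":  {"pause": "paused", "stop": "idle"},
    "paused":   {"resume": "running"},
    "error":    {},                  # no way out: a design flaw
}

def states_that_cannot_reach(target, transitions):
    """Return states from which `target` is unreachable (potential traps)."""
    # Build the reverse graph and flood-fill backwards from the target state.
    reverse = {s: set() for s in transitions}
    for s, edges in transitions.items():
        for nxt in edges.values():
            reverse[nxt].add(s)
    reachable, queue = {target}, deque([target])
    while queue:
        for prev in reverse[queue.popleft()]:
            if prev not in reachable:
                reachable.add(prev)
                queue.append(prev)
    return set(transitions) - reachable

print(states_that_cannot_reach("idle", TRANSITIONS))   # {'error'}
```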

Page 31

Visual importance prediction

Deep learning predicts visual importance of elements on a page


Learning Visual Importance for Graphic Designs and Data Visualizations
Zoya Bylinskii, Nam Wook Kim, Peter O'Donovan, Sami Alsheikh, Spandan Madan, Hanspeter Pfister, Fredo Durand, Bryan Russell, Aaron Hertzmann
MIT CSAIL; Harvard SEAS; Adobe Systems; Adobe Research

ABSTRACT
Knowing where people look and click on visual designs can provide clues about how the designs are perceived, and where the most important or relevant content lies. The most important content of a visual design can be used for effective summarization or to facilitate retrieval from a database. We present automated models that predict the relative importance of different elements in data visualizations and graphic designs. Our models are neural networks trained on human clicks and importance annotations on hundreds of designs. We collected a new dataset of crowdsourced importance, and analyzed the predictions of our models with respect to ground truth importance and human eye movements. We demonstrate how such predictions of importance can be used for automatic design retargeting and thumbnailing. User studies with hundreds of MTurk participants validate that, with limited post-processing, our importance-driven applications are on par with, or outperform, current state-of-the-art methods, including natural image saliency. We also provide a demonstration of how our importance predictions can be built into interactive design tools to offer immediate feedback during the design process.


INTRODUCTION
A crucial goal of any graphic design or data visualization is to communicate the relative importance of different design elements, so that the viewer knows where to focus attention and how to interpret the design. In other words, the design should provide an effective management of attention [39]. Understanding how viewers perceive a design could be useful for many stages of the design process; for instance, to provide feedback [40]. Automatic understanding can help build tools to search, retarget, and summarize information in designs and visualizations. Though saliency prediction in natural images has recently become quite effective, there is little work in importance prediction for either graphic designs or data visualizations.

Figure 1. We present two neural network models trained on crowdsourced importance. We trained the graphic design model using a dataset of 1K graphic designs with GDI annotations [33]. For training the data visualization model, we collected mouse clicks using the BubbleView methodology [22] on 1.4K MASSVIS data visualizations [3]. Both networks successfully predict ground truth importance and can be used for applications such as retargeting, thumbnailing, and interactive design tools. Warmer colors in our heatmaps indicate higher importance.

Our online demo, video, code, data, trained models, and supplemental material are available at visimportance.csail.mit.edu.

arXiv:1708.02660v1 [cs.HC] 8 Aug 2017

Bylinskii et al. UIST 2017
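The released models and demo are at visimportance.csail.mit.edu; the snippet below is only a schematic of the general approach (a small fully convolutional network regressing a per-pixel importance map), not the authors' architecture.

```python
# Schematic of the general idea: a fully convolutional network that maps an
# RGB design image to a per-pixel importance heatmap (not the authors' model).
import torch
import torch.nn as nn

class TinyImportanceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, 1, kernel_size=1)         # one importance channel

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x)))   # values in [0, 1]

net = TinyImportanceNet()
design = torch.rand(1, 3, 224, 224)          # stand-in for a rendered design
heatmap = net(design)                        # shape: (1, 1, 224, 224)
# Training would minimise e.g. nn.BCELoss() against crowdsourced importance maps.
print(heatmap.shape)
```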

Page 32

Bounded agent simulators

Predict emergence of interactive behavior as a response to design (e.g., reinforcement learning)


Jokinen et al. CHI 2017
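A toy sketch of the bounded-agent idea: a one-step reinforcement-learning (bandit-style) agent learns which slot of a small menu to inspect first, with time cost as negative reward. The slot costs and learning setup are assumptions, not the model of Jokinen et al.

```python
# Toy bounded-agent simulation: over many episodes the agent learns which slot
# of a 5-slot menu to inspect first; slower slots and misses give lower reward.
import random

SLOTS, TARGET = 5, 3
ALPHA, EPSILON = 0.2, 0.1
q = [0.0] * SLOTS                                  # value of "inspect slot i first"

def reward(slot):
    time_cost = 0.3 + 0.1 * slot                   # assumed per-slot inspection time (s)
    return -time_cost if slot == TARGET else -time_cost - 1.0   # penalty for a miss

random.seed(0)
for _ in range(2000):
    if random.random() < EPSILON:                  # explore
        slot = random.randrange(SLOTS)
    else:                                          # exploit the current estimate
        slot = max(range(SLOTS), key=q.__getitem__)
    q[slot] += ALPHA * (reward(slot) - q[slot])    # one-step value update

print("learned first inspection:", max(range(SLOTS), key=q.__getitem__))  # converges to 3
```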

Page 33

Models integrated in prototyping tools

Integration of models into design tools lowers the cost of modeling.


CogTool

Page 34

Multi-model evaluation

Aalto Computational Metrics Service (in progress)

Page 35

Assessment: Model-informed design

Reduces evaluation costs and improves designs

Increases understanding of design

Models don’t design!

Modeling is tedious and requires special expertise

Page 36

Computational design

Page 37

Page 38

Computational design

= applies computational thinking (abstraction, automation, and analysis) to explain and enhance interaction. It is underpinned by modelling, which admits formal reasoning and involves:

1. a way of updating a model with data;

2. synthesis or adaptation of a design (sketched below)


Oulasvirta, Kristensson, Howes, Bi. forthcoming
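Both ingredients of the definition fit in a few lines: (1) update a model with data, here fitting Fitts' law parameters by least squares to invented pointing observations, and (2) synthesise a design from the model, here picking the smallest button width that meets a time budget. Everything numeric is a placeholder.

```python
# Sketch of the two ingredients of computational design:
# (1) update a model with data, (2) synthesise a design from the model.
# The observations and the candidate set are hypothetical.
import numpy as np

# (1) Fit Fitts' law  T = a + b * log2(D/W + 1)  to observed pointing times.
dist = np.array([200, 200, 400, 400])
width = np.array([20, 60, 20, 60])
times_ms = np.array([620, 430, 720, 540])             # made-up observations
ids = np.log2(dist / width + 1)
A = np.column_stack([np.ones_like(ids), ids])
a, b = np.linalg.lstsq(A, times_ms, rcond=None)[0]    # least-squares fit of (a, b)

# (2) Synthesis: smallest button width whose predicted time meets a 500 ms budget.
def predict(w, d=200):
    return a + b * np.log2(d / w + 1)

candidates = range(20, 81, 5)                         # candidate widths (px)
best = min((w for w in candidates if predict(w) <= 500), default=max(candidates))
print(f"fitted a={a:.0f} ms, b={b:.0f} ms/bit, chosen width={best} px")
```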

Page 39

Many algorithmic approaches

Rule-based systems

State machines

Logic/theorem proving

Combinatorial optimization

Continuous optimization

Agent-based modeling

Reinforcement learning

Bayesian inference

Neural networks

Page 40

The computational design problem

Computational design rests on

1. search (of d),

2. inference (of θ),
3. prediction (of g)


Find the design (d) out of candidate set (D) that maximizes goodness (g) for given conditions (θ):

max_{d ∈ D} g(d, θ)
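A minimal sketch of this formulation for a tiny combinatorial design space: all orderings of four menu commands, with θ given by assumed selection frequencies and g(d, θ) the negative expected selection time under a simple linear position-time model.

```python
# Sketch of  max over d in D  of  g(d, theta)  for a tiny combinatorial space:
# all orderings of four menu commands, scored against assumed usage frequencies.
from itertools import permutations

theta = {"open": 0.5, "save": 0.3, "export": 0.15, "quit": 0.05}   # assumed frequencies

def goodness(design, theta):
    """Negative expected selection time; slot i is assumed to cost 0.3 + 0.2*i s."""
    return -sum(theta[cmd] * (0.3 + 0.2 * slot) for slot, cmd in enumerate(design))

D = list(permutations(theta))            # the design space: 4! = 24 candidates
best = max(D, key=lambda d: goodness(d, theta))
print(best, round(-goodness(best, theta), 3), "s expected")
```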

Page 41

Implications

Design is hard because the math is hard

• Regular design problems have exponential design spaces and many contradictory objectives

Only computational approaches offer 1) high probability of finding the best design and 2) design rationale

Page 42

Page 43

Example results

Page 44

UI description languages

Port a design from one terminal to another. Heavy. Overly case-specific.

[The slide shows p. 74 of the cited paper. Condensed excerpt:]

A task model lets the UI be optimized per device: the designer maps platforms (or classes of platforms) to tasks, and tasks to presentation structures optimized for them; because the mappings are transitive, each platform ends up associated with an appropriate presentation model (Figure 5). Particularly important tasks are given easily accessible interactors: on a PDA, clicking a map location in the MANNA application immediately opens note entry (Figure 6); on the desktop workstation, the same click brings up rich geographical and meteorological information along with earlier notes; on the cellular phone, driving directions are shown directly. As with the screen-space constraint, global structural modifications to the presentation model are often necessary, so the authors rely on alternative user-generated presentation structures and hope to automate their generation in future work, along with support for (still scarce) mobile usability guidelines and a constraint-satisfaction treatment of platform and task constraints.

Figure 5. Platform elements are mapped onto high-level task elements that are especially likely to be performed. In turn, the task elements are mapped onto presentation models that are optimized for the performance of a specific task.

Figure 6. Final presentation for the PDA.


[The slide also shows p. 70 of the same paper. Condensed excerpt:]

A scenario follows a geologist's MANNA session across devices: a desktop workstation for preparation; a laptop on the plane (network-dependent commands disabled, video shown in black and white at a reduced frame rate to save battery, keyboard-friendly interactors preferred); a cellular phone that shows the location, driving directions, and GPS position instead of the map; a pen-based palmtop that avoids double- and right-clicks and uses a more conservative layout; and finally a heads-up display with speech interactors plus a large-screen presentation with input interactors removed. Such a highly adaptive, multi-platform UI is argued to require a user-interface model: a formal, declarative, implementation-neutral description of the UI, here expressed in the MIMIC modeling language. Three of its components matter for mobile computing: the platform model (device features and constraints, ideally exploited dynamically at run time), the presentation model (visual appearance), and the task model.

Figure 1. The desktop UI design for MANNA. Hyperlinks are indicated by color, and are underlined only when the mouse rolls over them.

Eisenstein et al. IUI 2001

Page 45

Probabilistic design generation

A statistical model of features is formed and used to generate designs conforming to the domain. Poor transferability

[The slide shows Fig. 5 of the cited paper, "Results generated by different ways": poster layouts compared as (a) designed by novice, (b) our result, (c) original poster [35], and (d) designed by novice, (e) our result, (f) original poster [36]. The text visible inside the posters comes from "Image Parsing via Stochastic Scene Grammar" by Yibiao Zhao and Song-Chun Zhu.]


Qiang et al. arXiv 2017
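A minimal sketch of the idea on this slide: fit a simple statistical model (here a multivariate Gaussian over a few layout features) to example designs and sample new feature vectors that conform to the domain. The features and example data are made up, and a real system would use a far richer model.

```python
# Sketch of probabilistic design generation: fit a Gaussian over layout
# features of example designs, then sample new candidates from it.
# Features (title height, margin, whitespace ratio) and data are made up.
import numpy as np

rng = np.random.default_rng(0)
examples = np.array([      # rows: existing designs; cols: three layout features
    [64, 24, 0.35],
    [72, 32, 0.40],
    [60, 20, 0.30],
    [80, 28, 0.45],
])
mean = examples.mean(axis=0)
cov = np.cov(examples, rowvar=False)     # feature covariance from the examples

new_designs = rng.multivariate_normal(mean, cov, size=3)
print(np.round(new_designs, 2))          # candidate feature vectors to render
```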

Page 46

Heuristic design exploration

Alternative layouts can be proposed to the designer. Lots of low-quality suggestions.


DesignScape
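A minimal sketch of the exploration loop behind tools of this kind: perturb the current layout, score the alternatives with crude heuristics (here only off-canvas and overlap penalties), and surface the best few to the designer; the crudeness of such heuristics is one reason for the many low-quality suggestions.

```python
# Sketch of heuristic design exploration: randomly perturb a layout, score the
# alternatives with crude heuristics, and surface the best few to the designer.
import random

CANVAS = (800, 600)
layout = [{"x": 40, "y": 40, "w": 300, "h": 80},    # title block
          {"x": 40, "y": 140, "w": 300, "h": 300}]  # body block

def perturb(layout):
    return [{**el, "x": el["x"] + random.randint(-60, 60),
                   "y": el["y"] + random.randint(-60, 60)} for el in layout]

def score(layout):
    penalty = 0
    for el in layout:                                # keep elements on the canvas
        penalty += max(0, -el["x"]) + max(0, el["x"] + el["w"] - CANVAS[0])
        penalty += max(0, -el["y"]) + max(0, el["y"] + el["h"] - CANVAS[1])
    a, b = layout                                    # penalise overlap (two elements)
    overlap_x = max(0, min(a["x"] + a["w"], b["x"] + b["w"]) - max(a["x"], b["x"]))
    overlap_y = max(0, min(a["y"] + a["h"], b["y"] + b["h"]) - max(a["y"], b["y"]))
    return -(penalty + overlap_x * overlap_y)

alternatives = sorted((perturb(layout) for _ in range(50)), key=score, reverse=True)
for alt in alternatives[:3]:                         # top suggestions to show
    print(score(alt), alt)
```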

Page 47

Model-based UI optimization

Models allow the computer to predict ‘good design’. Costly to set up optimization systems.

MenuOptimizer, Bailly et al. UIST 2013

Page 48

Modeling perception, attention, memory, choice, experience,….

Page 49

Example: Model of menu selection


Bailly, Oulasvirta, Brumby, Howes CHI 2014

Page 50

Example: Perceptual optimization of scatterplots

Micallef et al. IEEE TVCG 2017

Page 51

Under the hood

Design space

Objective function


[The slide shows p. 5 of the cited paper (IEEE TVCG, accepted 2017). Cleaned excerpt:]

With the chosen discretization of the design parameters, the size of the design space is 4,851, but this can easily be changed in the code by adding or removing levels. Since the interest is not in real-time performance but in assessing the attainable result quality, exhaustive search is used to find the global optimum. The approach can be extended to cover additional design parameters and tasks: a new design parameter does not necessarily require new terms in the cost function, whereas a new task requires a weight calibration (Section 6).

5 PERCEPTUAL SCATTERPLOT OPTIMIZATION

5.1 Cost Function
Take two variables x, y of a dataset to be visualized by means of a scatterplot. The set of all possible designs is

D = S × O × A,   (1)

where S is the set of marker sizes, O the set of marker opacities, and A the set of aspect ratios considered for the plot. Certain designs d ∈ D will be better suited than others to quickly and accurately perform a particular data-analysis task t ∈ {T1, T2, T3}. This is quantified by a cost function C(x, y, d, t), which is low when the design is well suited to the data and task, and high otherwise. The cost function is a weighted sum of cost measures Σ w(t) M(x, y, d), where the weights w depend on the task and the cost measures M are determined by the data and the design. Each of its three terms addresses a distinct perceptual aspect:

C(x, y, d, t) = E(x, y, d, t) + I(x, y, d, t) + S(x, y, d, t).   (2)

These terms assess the perception of linear correlation E, image-quality measures I, and the perception of individual classes and outliers in terms of structural similarity S. Each term is itself a weighted sum of cost measures:

E(x, y, d, t) = w_a E_a + w_r E_r   (3)
I(x, y, d, t) = w_µ I_µ + w_s I_s + w_µ̄ I_µ̄ + w_s̄ I_s̄ + w_ℓ I_ℓ + w_p I_p   (4)
S(x, y, d, t) = w_c S_c + w_o S_o   (5)

All cost measures are functions of the form (x, y, d) → [0, 1]; they are normalized and do not depend on the task. Their meanings:
E_a: angle difference between covariance and perceived ellipse
E_r: axis-length-ratio difference between covariance and perceived ellipse
I_µ: average amount of opacity
I_s: image contrast
I_µ̄: distance from a desired average amount of opacity
I_s̄: distance from a desired image contrast
I_ℓ: amount of overlapping of the markers
I_p: amount of overplotting
S_c: perceivability of classes
S_o: perceivability of outliers

The weights of these measures in Equations (3)-(5) are balanced with the set of factors

W(t) = {w_a, w_r, w_µ, w_s, w_µ̄, w_s̄, w_ℓ, w_p, w_c, w_o}.   (6)

Different tasks are supported through different weight factors. For a particular task t, the weights are constant values in [−1, 1]. Negative weights reverse the meaning of a cost measure: for example, w_p < 0 asks for a low amount of overplotting, whereas w_p > 0 allows a large amount. Setting a weight to 0 omits the measure; the non-zero weights are set empirically (Section 6). In summary, given a dataset and a task, the goal is to find the design for which the cost function C becomes minimal, i.e. to solve

arg min_d C(x, y, d, t).   (7)

5.2 Perceptual Metrics

5.2.1 Perception of Linear Correlation
Linear correlation between two variables x, y can be described via Pearson's coefficient

r(x, y) = cov(x, y) / (s_x s_y) = cov(x, y) / √(cov(x, x) cov(y, y)),   (8)

which is closely related to the covariance matrix

C(x, y) = [ cov(x, x)  cov(x, y) ; cov(y, x)  cov(y, y) ].   (9)

C is a real, symmetric matrix, so it has two real eigenvalues and two orthogonal eigenvectors, which uniquely describe the covariance ellipse e_c. The length of the major axis of the ellipse is two times the larger eigenvalue, with its direction given by the corresponding eigenvector; the equivalent is true for the minor axis (Figure 4a). The covariance ellipse is written as a tuple e_c = (α_c, a_c, b_c), where α_c is the angle of the major axis to the x-axis and a_c, b_c are the lengths of the major and minor axes.

The covariance matrix and ellipse depend on the scaling of the variables. Scaling variable x by a constant factor s changes the covariance matrix to

C(sx, y) = [ cov(sx, sx)  cov(sx, y) ; cov(y, sx)  cov(y, y) ] = [ s² cov(x, x)  s cov(x, y) ; s cov(y, x)  cov(y, y) ].   (10)

Changing the aspect ratio of a scatterplot essentially constitutes such a scaling operation and can dramatically alter the shape of the covariance ellipse (Figures 4b-c).
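A deliberately simplified stand-in for the setup above: enumerate D = S × O × A exhaustively and pick the design minimising a toy cost made of an overplotting term and a deviation-from-target-ink term; the real cost measures are the perceptual ones defined in the excerpt, not these.

```python
# Simplified stand-in for the paper's setup: enumerate D = S x O x A exhaustively
# and pick the design minimising a toy cost (overplotting + deviation from a
# target mean "ink" level). The real cost measures are those defined above.
import itertools
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.standard_normal(2000), rng.standard_normal(2000)   # data to plot

S = [2, 4, 8, 16]                       # marker sizes (px)
O = [0.1, 0.3, 0.5, 1.0]                # marker opacities
A = [0.5, 1.0, 2.0]                     # aspect ratios

def cost(size, opacity, aspect, bins=40):
    # Approximate per-cell "ink" by histogramming points, scaled by marker size/opacity.
    h, _, _ = np.histogram2d(x * aspect, y, bins=bins)
    ink = np.clip(h * opacity * size / 8.0, 0, 1)
    overplot = np.mean(ink >= 1.0)                    # fraction of saturated cells
    contrast_gap = abs(ink.mean() - 0.3)              # distance from a target mean ink
    return overplot + contrast_gap

best = min(itertools.product(S, O, A), key=lambda d: cost(*d))
print("chosen (size, opacity, aspect):", best)
```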

IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS (TVCG), ACCEPTED, 2017 5

discretization of the design parameters, the size of our design spaceis 4,851, but this can be easily changed in the code by adding orremoving levels. Since our interest is not in real-time performancebut in assessing the result quality that can be obtained, we usedexhaustive search to find the global optimum.

The approach we describe here can be extended to coveradditional design parameters and tasks. To add a design parameter,we do not necessarily need to add new terms to the cost function,since it already captures a wide range of perceptual aspects. Forexample, the impact of new design parameters such as image widthor the drawing order of classes is already captured by the costfunction. On the other hand, adding a new task requires a weightcalibration following the procedure described in Section 6.

5 PERCEPTUAL SCATTERPLOT OPTIMIZATION

This section first describes the cost function and then introducesthe constituent cost measures. In the latter part of the section, wedescribe the implementation of the optimizer.

5.1 Cost FunctionLet us take two variables x,y for a dataset to be visualized bymeans of a scatterplot. The set of all possible designs is

D= S⇥O⇥A (1)

where S denotes the set of marker sizes considered, O is theset of marker opacities considered, and A is the set of aspectratios considered for the plot. Certain designs d 2D will be bettersuited than others to quickly and accurately perform a particulardata analysis task t 2 {T1,T2,T3}. We quantify this by using acost function, C(x,y,d, t), which has a low value when the designis deemed well suited to the data and task, and a high valueotherwise. The cost function is a weighted sum of cost measuresÂw(t)M(x,y,d), where weights w depend on the task and the costmeasures M are determined by the data and the design. Each ofthe three terms in the cost function addresses a distinct perceptualaspects:

C(x,y,d, t) = E(x,y,d, t)+ I(x,y,d, t)+S(x,y,d, t). (2)

With these terms, we assess the perception of linear correlationE(x,y,d, t), image quality measures I(x,y,d, t), and the perceptionof individual classes and outliers in terms of structural similarityS(x,y,d, t). Each of these terms is a weighted sum of cost measures:

E(x,y,d, t) = wa Ea +wrEr (3)I(x,y,d, t) = wµ Iµ +ws Is +wµ̄ Iµ̄ +ws̄ Is̄ +w`I`+wpIp (4)S(x,y,d, t) = wcSc +woSo (5)

An overview is presented here, with details further on and adiscussion of limitations saved for the end of the paper. All costmeasures are functions of the form (x,y,d)! [0,1]; i.e., they arenormalized and do not depend on the task. They have the followingmeanings:Ea : angle difference between covariance and perceived ellipseEr: axes length ratio diff. between cov. and perceived ellipseIµ : average amount of opacityIs : image contrastIµ̄ : distance from a desired average amount of opacityIs̄ : distance from a desired image contrast

I`: amount of overlapping of the markersIp: amount of overplottingSc: perceivability of classesSo: perceivability of outliers

We balance the weight of these measures in Equations (3)-(5) withthe set of factors

W(t) = {wa , wr, wµ , ws , wµ̄ , ws̄ , w`, wp, wc, wo}. (6)

We support different tasks by means of different weight factors.For a particular task t, the weight factors are constant values in therange [�1,1]. Negative weights allow us to reverse the meaning ofa cost measure. For example, wp < 0 denotes that we want to havea low amount of overplotting, whereas wp > 0 indicates that weallow for a large amount of overplotting. By setting a weight factorto 0, we omit it from consideration. The non-zero weights are setempirically in the manner described in Section 6.

In summary, with a given dataset and task, our goal is to findthe design for which the value of the cost function C becomesminimal; i.e., we want to solve the following optimization problem:

arg mind

C(x,y,d, t). (7)

5.2 Perceptual Metrics

In the following three subsections, we discuss the three terms inour cost function from Equations (3), (4), and (5).

5.2.1 Perception of Linear Correlation

Pearson’s coefficient and covariance: Linear correlationbetween two variables x,y can be described via Pearson’s coefficientas

r(x,y) =cov(x,y)

sx sy=

cov(x,y)pcov(x,x)cov(y,y)

, (8)

which is closely related to the covariance matrix

C(x,y) =

cov(x,x) cov(x,y)cov(y,x) cov(y,y)

�. (9)

C is a real, symmetric matrix. Hence, it has two real eigenvaluesand two orthogonal eigenvectors, which uniquely describe theshape of the covariance ellipse, ec. The length of the major axisof the ellipse is two times the larger eigenvalue, and its directionis given by the corresponding eigenvector; the equivalent is truefor the minor axis. Figure 4a shows an example. We describe thecovariance ellipse as a tuple ec = (ac,ac,bc), where ac denotes theangle of the major axis to the x-axis and ac,bc the lengths of themajor and minor axis, respectively.

The covariance matrix and ellipse depend on the scaling of thevariables. Consider scaling variable x with a constant factor s; inthis case, the covariance matrix changes as follows:

C(sx,y) =

cov(sx,sx) cov(sx,y)cov(y,sx) cov(y,y)

=

s2 cov(x,x) scov(x,y)scov(y,x) cov(y,y)

�. (10)

For example, changing the aspect ratio of a scatterplot essentiallyconstitutes such a scaling operation and can dramatically alter theshape of the covariance ellipse. Figures 4b–c demonstrate this.

IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS (TVCG), ACCEPTED, 2017 5

discretization of the design parameters, the size of our design spaceis 4,851, but this can be easily changed in the code by adding orremoving levels. Since our interest is not in real-time performancebut in assessing the result quality that can be obtained, we usedexhaustive search to find the global optimum.

The approach we describe here can be extended to coveradditional design parameters and tasks. To add a design parameter,we do not necessarily need to add new terms to the cost function,since it already captures a wide range of perceptual aspects. Forexample, the impact of new design parameters such as image widthor the drawing order of classes is already captured by the costfunction. On the other hand, adding a new task requires a weightcalibration following the procedure described in Section 6.

5 PERCEPTUAL SCATTERPLOT OPTIMIZATION

This section first describes the cost function and then introducesthe constituent cost measures. In the latter part of the section, wedescribe the implementation of the optimizer.

5.1 Cost FunctionLet us take two variables x,y for a dataset to be visualized bymeans of a scatterplot. The set of all possible designs is

D= S⇥O⇥A (1)

where S denotes the set of marker sizes considered, O is theset of marker opacities considered, and A is the set of aspectratios considered for the plot. Certain designs d 2D will be bettersuited than others to quickly and accurately perform a particulardata analysis task t 2 {T1,T2,T3}. We quantify this by using acost function, C(x,y,d, t), which has a low value when the designis deemed well suited to the data and task, and a high valueotherwise. The cost function is a weighted sum of cost measuresÂw(t)M(x,y,d), where weights w depend on the task and the costmeasures M are determined by the data and the design. Each ofthe three terms in the cost function addresses a distinct perceptualaspects:

$C(x,y,d,t) = E(x,y,d,t) + I(x,y,d,t) + S(x,y,d,t).$   (2)

With these terms, we assess the perception of linear correlation $E(x,y,d,t)$, image quality measures $I(x,y,d,t)$, and the perception of individual classes and outliers in terms of structural similarity $S(x,y,d,t)$. Each of these terms is a weighted sum of cost measures:

$E(x,y,d,t) = w_\alpha E_\alpha + w_r E_r$   (3)
$I(x,y,d,t) = w_\mu I_\mu + w_\sigma I_\sigma + w_{\bar\mu} I_{\bar\mu} + w_{\bar\sigma} I_{\bar\sigma} + w_\ell I_\ell + w_p I_p$   (4)
$S(x,y,d,t) = w_c S_c + w_o S_o$   (5)

An overview is presented here, with details further on and a discussion of limitations saved for the end of the paper. All cost measures are functions of the form $(x,y,d) \to [0,1]$; i.e., they are normalized and do not depend on the task. They have the following meanings:

$E_\alpha$: angle difference between covariance and perceived ellipse
$E_r$: axes length ratio difference between covariance and perceived ellipse
$I_\mu$: average amount of opacity
$I_\sigma$: image contrast
$I_{\bar\mu}$: distance from a desired average amount of opacity
$I_{\bar\sigma}$: distance from a desired image contrast
$I_\ell$: amount of overlapping of the markers
$I_p$: amount of overplotting
$S_c$: perceivability of classes
$S_o$: perceivability of outliers

We balance the weight of these measures in Equations (3)-(5) with the set of factors

$W(t) = \{w_\alpha,\, w_r,\, w_\mu,\, w_\sigma,\, w_{\bar\mu},\, w_{\bar\sigma},\, w_\ell,\, w_p,\, w_c,\, w_o\}.$   (6)

We support different tasks by means of different weight factors. For a particular task t, the weight factors are constant values in the range $[-1,1]$. Negative weights allow us to reverse the meaning of a cost measure. For example, $w_p < 0$ denotes that we want to have a low amount of overplotting, whereas $w_p > 0$ indicates that we allow for a large amount of overplotting. By setting a weight factor to 0, we omit it from consideration. The non-zero weights are set empirically in the manner described in Section 6.
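For illustration only, the weighted sum of Equations (2)-(6) can be expressed as task-dependent weights over normalized cost measures. The numeric measure values and the weight sets below are placeholders, not the calibrated values from Section 6.

```python
# Sketch of the weighted-sum cost of Eqs. (2)-(6).
# Measure values are placeholders in [0, 1]; in the paper each measure is
# computed from the data (x, y) and the candidate design d.
measures = {
    "E_alpha": 0.20, "E_r": 0.10,            # correlation-perception terms, Eq. (3)
    "I_mu": 0.40, "I_sigma": 0.30,           # image-quality terms, Eq. (4)
    "I_mu_bar": 0.10, "I_sigma_bar": 0.20,
    "I_ell": 0.50, "I_p": 0.60,
    "S_c": 0.30, "S_o": 0.20,                # class/outlier terms, Eq. (5)
}

# Hypothetical task-specific weights in [-1, 1]; a weight of 0 drops a
# measure, and a negative weight reverses its meaning (see text above).
weights_by_task = {
    "estimate_correlation": {"E_alpha": 1.0, "E_r": 1.0, "I_p": 0.3},
    "identify_classes":     {"S_c": 1.0, "S_o": 0.5, "I_p": -0.5},
}

def total_cost(measures, task_weights):
    """C = sum over measures of w_k(t) * M_k(x, y, d); missing weights count as 0."""
    return sum(task_weights.get(name, 0.0) * value for name, value in measures.items())

for task, w in weights_by_task.items():
    print(task, round(total_cost(measures, w), 3))
```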

In summary, with a given dataset and task, our goal is to find the design for which the value of the cost function C becomes minimal; i.e., we want to solve the following optimization problem:

$\arg\min_{d} \; C(x,y,d,t).$   (7)

5.2 Perceptual Metrics

In the following three subsections, we discuss the three terms in our cost function from Equations (3), (4), and (5).

5.2.1 Perception of Linear Correlation

Pearson’s coefficient and covariance: Linear correlation between two variables x, y can be described via Pearson’s coefficient as

$r(x,y) = \dfrac{\operatorname{cov}(x,y)}{\sigma_x \sigma_y} = \dfrac{\operatorname{cov}(x,y)}{\sqrt{\operatorname{cov}(x,x)\,\operatorname{cov}(y,y)}},$   (8)

which is closely related to the covariance matrix

$\mathbf{C}(x,y) = \begin{pmatrix} \operatorname{cov}(x,x) & \operatorname{cov}(x,y) \\ \operatorname{cov}(y,x) & \operatorname{cov}(y,y) \end{pmatrix}.$   (9)

$\mathbf{C}$ is a real, symmetric matrix. Hence, it has two real eigenvalues and two orthogonal eigenvectors, which uniquely describe the shape of the covariance ellipse, $e_c$. The length of the major axis of the ellipse is two times the larger eigenvalue, and its direction is given by the corresponding eigenvector; the equivalent is true for the minor axis. Figure 4a shows an example. We describe the covariance ellipse as a tuple $e_c = (\alpha_c, a_c, b_c)$, where $\alpha_c$ denotes the angle of the major axis to the x-axis and $a_c, b_c$ the lengths of the major and minor axis, respectively.
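A minimal NumPy sketch of Equations (8) and (9): it computes Pearson's r, the covariance matrix, and the covariance-ellipse tuple (alpha_c, a_c, b_c) from the eigendecomposition, following the description above. The synthetic data is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 0.7 * x + 0.3 * rng.normal(size=500)   # synthetic, positively correlated data

# Covariance matrix C(x, y), Eq. (9).
C = np.cov(x, y)

# Pearson's coefficient, Eq. (8).
r = C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])

# Covariance ellipse e_c = (alpha_c, a_c, b_c) from the eigendecomposition.
eigvals, eigvecs = np.linalg.eigh(C)                 # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
alpha_c = np.degrees(np.arctan2(eigvecs[1, 0], eigvecs[0, 0]))  # major-axis angle
a_c, b_c = 2 * eigvals[0], 2 * eigvals[1]            # axis lengths, as defined in the text
print(f"r = {r:.3f}, alpha_c = {alpha_c:.1f} deg, a_c = {a_c:.3f}, b_c = {b_c:.3f}")
```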

The covariance matrix and ellipse depend on the scaling of the variables. Consider scaling variable x with a constant factor s; in this case, the covariance matrix changes as follows:

$\mathbf{C}(sx,y) = \begin{pmatrix} \operatorname{cov}(sx,sx) & \operatorname{cov}(sx,y) \\ \operatorname{cov}(y,sx) & \operatorname{cov}(y,y) \end{pmatrix} = \begin{pmatrix} s^2\operatorname{cov}(x,x) & s\operatorname{cov}(x,y) \\ s\operatorname{cov}(y,x) & \operatorname{cov}(y,y) \end{pmatrix}.$   (10)

For example, changing the aspect ratio of a scatterplot essentially constitutes such a scaling operation and can dramatically alter the shape of the covariance ellipse. Figures 4b–c demonstrate this.
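The following sketch checks Equation (10) numerically on synthetic data: scaling x by a factor s rescales the covariance entries as stated (reshaping the covariance ellipse) while leaving Pearson's r unchanged. The data and the scaling factor are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=1000)
y = 0.6 * x + 0.4 * rng.normal(size=1000)
s = 3.0   # arbitrary scaling factor, standing in for an aspect-ratio change

C = np.cov(x, y)
C_scaled = np.cov(s * x, y)

# Eq. (10): cov(sx, sx) = s^2 cov(x, x) and cov(sx, y) = s cov(x, y).
expected = np.array([[s**2 * C[0, 0], s * C[0, 1]],
                     [s * C[1, 0],    C[1, 1]]])
assert np.allclose(C_scaled, expected)

# Pearson's r is invariant to the scaling, but the ellipse (eigenvalues) is not.
r_before = C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])
r_after = C_scaled[0, 1] / np.sqrt(C_scaled[0, 0] * C_scaled[1, 1])
print(f"r before = {r_before:.3f}, r after = {r_after:.3f}")
print("eigenvalues before:", np.round(np.linalg.eigvalsh(C), 3))
print("eigenvalues after: ", np.round(np.linalg.eigvalsh(C_scaled), 3))
```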

Page 52:
Page 53:

Service design: Functionality selection


Elicitation of objective functions from professionals

Oulasvirta, Feit, Lähteenlahti, Karrenbauer, TOCHI 2018

Page 54:

Interview-based elicitation of an objective function


Fast algorithmic solution

Oulasvirta, Feit, Lähteenlahti, Karrenbauer, TOCHI 2018

Page 55:

‘Explorative optimization’


Page 56:

Designers do not express task instances mathematically

Designers continuously engage in refining objectives and constraints of a problem (Dorst & Cross 2001)

For example, which weight to choose for an objective depends on how good the results are.

Page 57:

Interactive optimization

Algorithms can nudge designers toward better designs and more diverse thinking.


Page 58:

Assessment: Algorithms that design

Can generate good designs and show viable alternatives

Interactive steering of the optimizer

Only for initializing a design, not for updates/redesigns

Limited scope (yet)

Expensive to set up optimization systems

No contact with user data


Page 59:

Self-designing interfaces


Page 60:

Self-designing interfaces
Interfaces that initialize and update themselves as more data comes in…

Rests on three pillars (see the code sketch below):

1. Inference (from data to models)

2. Search (from models to designs)

3. Prediction (from designs to people)

[Diagram: design loop of Define, Generate, and Evaluate]
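To make the three pillars concrete, here is a deliberately simplified sketch (not from the talk or any cited system) of one pass through the loop: infer a user-model parameter from logged data, generate candidate designs, and predict performance to choose one. All function names, the menu example, and the linear time model are illustrative assumptions.

```python
import random

def infer_parameters(interaction_log):
    """Pillar 1, inference: fit a (toy) user-model parameter from data.
    Assumed input: a list of observed selection times in seconds."""
    return sum(interaction_log) / len(interaction_log)

def generate_designs(n=20):
    """Pillar 2, search: sample candidate designs (here: menu orderings)."""
    items = ["Open", "Save", "Copy", "Paste", "Find"]
    return [random.sample(items, len(items)) for _ in range(n)]

def predict_time(design, mean_time, frequencies):
    """Pillar 3, prediction: toy linear model of expected selection time."""
    return sum(mean_time * (pos + 1) * frequencies[item]
               for pos, item in enumerate(design))

random.seed(0)
log = [1.2, 0.9, 1.1, 1.0]
freq = {"Open": 0.4, "Save": 0.3, "Copy": 0.1, "Paste": 0.1, "Find": 0.1}

theta = infer_parameters(log)
candidates = generate_designs()
best = min(candidates, key=lambda d: predict_time(d, theta, freq))
print("Chosen ordering:", best)
```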

Page 61:

Recommendation engines

Recommend content based on inferred interest or preference. Only changes an isolated slot in the UI.


Page 62:

‘Coadaptive user interfaces’
Interfaces that can fully reconfigure themselves, taking into account how users will react to changes

Requirements (illustrated in the sketch after this list):

1. Models able to predict the effects of the interface’s actions: what happens when the design changes

2. Algorithms that operate at run-time

3. Fast inference of model parameters from empirical data
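As a hypothetical illustration of these requirements (not an existing system), a coadaptive interface might accept a reconfiguration only when a run-time model predicts that the benefit of the change outweighs the disruption it causes; predicted_gain and relearning_penalty below are assumed placeholder models.

```python
def predicted_gain(current, candidate, usage_model):
    """Placeholder model: expected speed-up from moving frequent items forward."""
    def expected_cost(layout):
        return sum((pos + 1) * usage_model[item] for pos, item in enumerate(layout))
    return expected_cost(current) - expected_cost(candidate)

def relearning_penalty(current, candidate, penalty_per_move=0.2):
    """Placeholder model of how users react to change: cost per moved item."""
    return penalty_per_move * sum(a != b for a, b in zip(current, candidate))

def maybe_adapt(current, candidate, usage_model):
    """Apply a redesign only if the predicted benefit outweighs the disruption."""
    if predicted_gain(current, candidate, usage_model) > relearning_penalty(current, candidate):
        return candidate
    return current

usage = {"Open": 0.5, "Save": 0.2, "Copy": 0.2, "Paste": 0.1}
current = ["Paste", "Copy", "Save", "Open"]
candidate = ["Open", "Save", "Copy", "Paste"]
print(maybe_adapt(current, candidate, usage))
```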


Page 63:

Ability-based optimization
When model parameters can be obtained for a user group, an optimizer can differentiate designs. How to obtain parameters?

Jokinen et al., in submission

Page 64:

Browser-side ‘familiarization’

[Figure: 1. Most-Encountered, 2. Serial Position Curve, 3. Visual Statistical Learning, 4. Generative Model of Positional Learning; panels labeled History and Original]

Page 65:

Conclusion

Page 66:

A new discipline of interaction design

Analysis: Understanding user behavior and experience
Generation of designs: Best solution for a given problem
Creativity support: Facilitating human creativity
Adaptive user interfaces: Real-time adaptation

Largest obstacle is lack of appropriate models

Page 67:

Why computational design?

1. Increase efficiency, enjoyability, and robustness of interaction

2. Proofs and guarantees for designs

3. Boost user-centered design

4. Reduce design time of interfaces

5. Free up designers to be creative

6. Harness new technologies quicker

Page 68:

Computational Design Stool


Search

Inference

Prediction

$\max_{d \in \mathcal{D}} g(d, \theta)$

Page 69:

Beyond deep learning


Control over outcomes is critical in all design

Page 70:

How interaction design may change (next 10 years)

1. Resources emerge that lower the entry barrier

2. Organizations that use computational design win in competition

3. Collaborative AI helps novice designers to design better

4. Users start demanding algorithmic UIs

5. Some commonly used UIs become coadaptive

6. Interaction design as a field embraces computation


Page 71:

What next?

THIS

2a. Modeling basics: Task analysis

3a. Keystroke-level models: Predicting user performance

OR

2a. Jupyter Notebook on combinatorial UI optimization

Page 72:

Thank you!


IEEE Computer 2017 paper: bit.do/anttipaper