User Interface Agents

Roope Raisamo ([email protected])
Department of Computer Sciences
University of Tampere
http://www.cs.uta.fi/sat/



User Interface Agents

Schiaffino and Amandi [2004]: Interface agents are computer programs that have the ability to learn a user’s preferences and working habits, and to provide him/her proactive and reactive assistance in order to increase the user’s productivity.
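This definition can be sketched as a toy program (all names and the simple frequency-based learning rule are illustrative assumptions, not anything from Schiaffino and Amandi): the agent reactively records what the user does in each context, and proactively suggests the most frequent command once it has seen enough evidence.

```python
from collections import Counter

class PreferenceAgent:
    """Toy sketch of an interface agent that learns which command a
    user prefers in a given context and proactively suggests it."""

    def __init__(self, min_observations=3):
        self.min_observations = min_observations
        self.history = {}  # context -> Counter of observed commands

    def observe(self, context, command):
        # reactive part: silently record what the user does
        self.history.setdefault(context, Counter())[command] += 1

    def suggest(self, context):
        # proactive part: suggest only once there is enough evidence
        counts = self.history.get(context)
        if counts and sum(counts.values()) >= self.min_observations:
            command, _ = counts.most_common(1)[0]
            return command
        return None  # not enough observations yet: stay passive
```

For example, after the agent has observed the user save three e-mail attachments to the same folder, `suggest("attachment")` would return that action, while a context it has never seen yields no suggestion.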

User Interface Agents

A user interface agent guides and helps the user
– Many user interface agents observe the activities of the user and suggest better ways of carrying out the same operations
– They can also automate a series of operations based on observing the user (e.g., Eager)

Many user interface agents are based on the principles of Programming by Example (PBE)

VIDEOS: two examples of user interface agents

Allen Cypher: Eager (2:08)
Henry Lieberman: Letizia (1:36)

Eager – automated macro generator

Allen Cypher, 1991
http://www.acypher.com/Eager/

– Observes the activities of the user and tries to detect repeating sequences of actions. When such a sequence is detected, Eager offers the possibility to automate that task.
– Works like an automated macro generator
– This kind of functionality is still not a part of common applications, even though it could be.

Eager

Eager observes repeating sequences of actions. When Eager finds one, it jumps on the screen and suggests the next phase.

Eager

When all the phases suggested by Eager have been shown and accepted, the user can give Eager permission to carry out the automated task.
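The core idea of detecting a repeating sequence can be sketched as follows. This is a toy illustration, not Cypher's actual algorithm: it simply checks whether the user has just performed the same sequence of actions twice in a row, and if so, predicts the next action of the pattern.

```python
def detect_repetition(actions, max_len=5):
    """Eager-style sketch: if the tail of the action log repeats the
    block just before it, return that repeating sequence."""
    for k in range(max_len, 0, -1):          # prefer the longest pattern
        if len(actions) >= 2 * k and actions[-k:] == actions[-2 * k:-k]:
            return actions[-k:]
    return None

def suggest_next(pattern):
    # after a completed repetition, the pattern would start over
    return pattern[0]
```

With an action log like `["open", "copy", "paste", "copy", "paste"]` the detector finds the pattern `["copy", "paste"]` and would suggest `"copy"` as the next step, which the user can accept or refuse, mirroring Eager's suggest-and-confirm interaction.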

Letizia – a browser companion agent [Lieberman, 1997]

Letizia observes the user and tries to preload interesting web pages at the same time as the user browses through the web

http://lieber.www.media.mit.edu/people/lieber/Lieberary/Letizia/Letizia-Intro.html

Letizia

Traditional browsing leads the user into doing a depth-first search of the Web

Letizia conducts a concurrent breadth-first search rooted at the user's current position

Another example: [Marais and Bharat, 1997]
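The breadth-first exploration can be sketched like this. It is a toy illustration of the search order, not Lieberman's implementation: `links_of` is a hypothetical stand-in for fetching a page and extracting its links, and `budget` caps how many pages are preloaded while the user reads.

```python
from collections import deque

def letizia_prefetch(links_of, current_page, budget=6):
    """Letizia-style sketch: walk the link graph breadth-first,
    rooted at the page the user is currently reading, and collect
    candidate pages to preload in the background."""
    queue = deque([current_page])
    seen = {current_page}
    to_prefetch = []
    while queue and len(to_prefetch) < budget:
        page = queue.popleft()
        for link in links_of.get(page, []):
            if link not in seen and len(to_prefetch) < budget:
                seen.add(link)
                to_prefetch.append(link)  # fetch while the user reads
                queue.append(link)
    return to_prefetch
```

On a small graph such as `{"home": ["a", "b"], "a": ["c"], "b": ["d"]}`, starting from `"home"` yields the breadth-first order `["a", "b", "c", "d"]`: all pages one link away are considered before any page two links away, in contrast to the depth-first order of manual browsing.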

Questions?

The appearance of agents

The appearance of an agent is a very important feature when a user tries to find out what a certain agent can do.
It is a bad mistake to use an appearance that makes the user believe an agent to be more intelligent than it really is.
The appearance must not be disturbing.

Computer-generated talking heads and moving bodies

– one of the most demanding forms of agent presentation
– a human head suggests the agent to be rather intelligent
– a talking head is probably the most natural way to present an agent in a conversational user interface

FaceWorks

http://www.interface.digital.com

Drawn or animated characters

the appearance of the agent has a great effect on the expectations of the user
– a paper clip vs. a dog vs. Merlin the Sorcerer

Continuously animated, slowly changing, or static presentation

VIDEO: an example of a conversational interface agent

[Cassell et al., 1999] Embodiment in Conversational Interfaces: Rea. CHI ’99 Video Proceedings, 1999. (2:08)

Textual presentation

Textual feedback of the actions of an agent
Textual input should normally be avoided if it is not a part of the main task of the agent.
Chatterbots
– e.g., Julia, a user in a MUD (multi-user dungeon) world that can also answer questions concerning this world.
http://lcs.www.media.mit.edu/people/foner/Yenta/julia.html
– so-called NPCs (non-player characters) in multiplayer role-playing computer games.

Auditory presentation

An agent can also be presented only by voice or sound, the auditory channel:
– ambient sound
– beeps, signals
– melodies, music
– recorded speech
– synthetic speech

Haptic presentation

In addition to the auditory channel, or to replace it, an agent can present information by haptic feedback
Haptic simulation modalities:
– force and position
– tactile
– vibration
– thermal
– electrical

Haptic feedback devices

Inexpensive devices:
– The most common haptic devices are still the different force-feedback controllers used in computer games, for example force-feedback joysticks and wheels.
– In 1999 Immersion Corporation’s force feedback mouse was introduced as the Logitech Wingman Force Feedback Gaming Mouse.
– In 2000 Immersion Corporation’s tactile feedback mouse was introduced as the Logitech iFeel Tactile Feedback Mouse.

No direct presentation at all

– An agent helps the user by carrying out different supporting actions
e.g., prefetching needed information from the web, automatic hard disk management, …
– An indirectly controlled background agent
The question: how to implement this indirect control?
Multisensory input: the agent is observing a system, an environment, or the user.
Related to ubiquitous (intelligent) environments

Related user interface metaphors:

Conversational User Interfaces

Multimodal User Interfaces

Conversational User Interfaces

Why conversation?
– a natural way of communication
– learnt at quite a young age
– to fix the problems of direct manipulation interfaces

Conversation augments, not necessarily replaces, a traditional user interface
– the failure of Microsoft Bob
– Microsoft Office Assistant

Microsoft Office Assistant

The Microsoft Office Assistant tries to help in the use of Microsoft Office software with a varying rate of success.
The user can choose the appearance of the agent
– unfortunately, this has no effect on the capabilities of the agent
A paper clip is most likely a better presentation for the present assistant than Merlin the sorcerer.

Multimodal User Interfaces

Multimodal interfaces combine many simultaneous input modalities and may present the information using a synergistic representation of many different output modalities.

Multimodal User Interfaces

An agent makes use of multimodality when observing the user, e.g.:
– speech recognition
reacts to speech commands, or observes the user without requiring actual commands
– machine vision, pattern recognition:
recognizing facial gestures
recognizing gaze direction
recognizing gestures

Multimodal User Interfaces

A specific problem in multimodal interaction is combining the simultaneous inputs.
– this requires a certain amount of domain knowledge and ”intelligence”
– in this way, every multimodal user interface is at least in some respect a user interface agent that tries to find out what the user wants based on the available information

A high-level architecture for multimodal user interfaces

[Figure: input flows through Input processing (motor, speech, vision, …) to Media analysis (language, recognition, gesture, …), then to Interaction management (media fusion, discourse modeling, plan recognition and generation, user modeling, presentation design), which communicates with the Application interface; output flows back through Media design (language, modality, gesture, …) to Output generation (graphics, animation, speech, sound, …).]

Adapted from [Maybury and Wahlster, 1998]

Put-That-There [Bolt, 1980]
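Bolt's system combined a spoken command with pointing gestures: "put that there". A toy sketch of such media fusion follows; the event formats and the nearest-in-time-within-a-window rule are assumptions for illustration, not Bolt's actual method. Each deictic word in the recognized speech stream is resolved to the pointing event closest to it in time.

```python
def fuse_put_that_there(speech, pointing, window=1.0):
    """Sketch of Put-That-There-style fusion: `speech` is a list of
    (time, word) pairs, `pointing` a list of (time, target) pairs.
    Deictic words are replaced by the temporally closest target."""
    resolved = []
    for t, word in speech:
        if word in ("that", "there") and pointing:
            # nearest pointing sample in time to the spoken deictic
            pt, target = min(pointing, key=lambda p: abs(p[0] - t))
            if abs(pt - t) <= window:
                resolved.append(target)
                continue
        resolved.append(word)  # non-deictic words pass through
    return resolved
```

For instance, the utterance "put that there" with a pointing event near each deictic word resolves to a complete command such as `["put", "red block", "(3, 7)"]`, which no single modality could have produced alone.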

Combining inputs

[Nigay and Coutaz, 1993][Nigay and Coutaz, 1995]

Combining inputs

Agents fit well in handling multimodal interaction
– there can be specific agents for each input and feedback channel
– the raw input from the lower-level input agents is then processed and combined with others by the higher-level agents working at a higher abstraction level
– there can be any necessary number of agent levels in a given system
– finally, the root agent has all the available information from the different input devices and sensors, and acts based on this information

Example: DEC Smart Kiosk [Christian and Avery, 1998]

Smart Kiosk was a research project at the Compaq-Digital (now HP) Cambridge Research Laboratory in which easy-to-use information kiosks were built to be used by all people
Combines new technology:
– machine vision, pattern recognition
– speech synthesis (DECtalk)
– speech recognition
– animated talking head (DECface)

Example: DEC Smart Kiosk

[Figure: kiosk hardware with a vision camera, the DECface talking head, and Netscape Navigator shown on a touchscreen; an active vision zone extends in front of the kiosk.]

The Roles of Agents

[Figure: three roles. (1) Fully automatic, active observation: the agent observes the user and delivers results. (2) A collaborative and passive agent: the agent observes the user, gives feedback, and delivers results. (3) Both active and passive gathering of information: the agent observes user groups, gives feedback, and delivers results.]

The Roles of Agents

[Figure: further roles. A background maintaining assistant observes/adjusts the system and network and delivers results to the user. Conversational, anthropomorphic agents converse with the user through a talking head. An agent gathering information both actively and passively, collaboratively, observes user groups and gives feedback.]

Questions?

References

[Bolt, 1980] Richard A. Bolt, Put-that-there. SIGGRAPH ‘80 Conference Proceedings, ACM Press, 1980, 262-270.

[Christian and Avery, 1998] Andrew D. Christian and Brian L. Avery, Digital Smart Kiosk project. Human Factors in Computing Systems, CHI ’98 Conference Proceedings, ACM Press, 1998, 155-162.

[Lieberman, 1997] Henry Lieberman, Autonomous interface agents. Human Factors in Computing Systems, CHI ’97 Conference Proceedings, ACM Press, 1997, 67-74.

[Maybury and Wahlster, 1998] Mark T. Maybury and Wolfgang Wahlster (Eds.), Readings in Intelligent User Interfaces. Morgan Kaufmann Publishers, 1998.

[Marais and Bharat, 1997] Hannes Marais and Krishna Bharat, Supporting cooperative and personal surfing with a desktop assistant. Proceedings of UIST ’97, ACM Symposium on User Interface Software and Technology, ACM Press, 1997, 129-138.

[Nigay and Coutaz, 1993] Laurence Nigay and Joëlle Coutaz, A design space for multimodal systems: concurrent processing and data fusion. Human Factors in Computing Systems, INTERCHI ’93 Conference Proceedings, ACM Press, 1993, 172-178.

[Nigay and Coutaz, 1995] Laurence Nigay and Joëlle Coutaz, A generic platform for addressing the multimodal challenge. Human Factors in Computing Systems, CHI ’95 Conference Proceedings, ACM Press, 1995, 98-105.

[Schiaffino and Amandi, 2004] Silvia Schiaffino and Analía Amandi, User – interface agent interaction: personalization issues. International Journal of Human-Computer Studies 60, Elsevier Science, 2004, 129-148.