Q-Ball
Tiffany Jen, Liz Mikolaj, Stephen Miller, Nathan Potts
Indiana University HCI/d




Executive Summary

Q-ball is a concept that seeks to bridge the gap between what people do and how the digital world behaves. Instead of defining a set of gestures or motions that a person must learn in order to use the system, it leverages things people already do, bringing some of the advantages of the digital world into physical space. As a proof of concept, we focused on search indexing and memory: things a computer does well that are currently done poorly in physical spaces.

Search tagging and recollection is achieved by Q-ball in a few steps:
1. The user puts a face on a post-it note (any face, provided it is proportional), creating a facetag.
2. They place this facetag on an object or document they wish to be able to search for later.
3. They go about their business as normal.
4. Later, if they wish to recall that document or object's location, they give Q-ball the voice command "show".
5. Q-ball shows them the last known location of the object and the timestamp of when it was last seen by the system.
6. The user can then go look there.
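The steps above can be simulated in a few lines. This is a minimal sketch with hypothetical names (`record_sighting`, `show`), standing in for the camera and voice-command parts of the real prototype: an in-memory registry of facetag sightings that answers a "show" query with the last known location and timestamp.

```python
from datetime import datetime

# Hypothetical in-memory registry: facetag id -> list of (location, timestamp).
sightings = {}

def record_sighting(tag_id, location, timestamp):
    """Steps 2-3: the camera spots a facetag somewhere on the desk."""
    sightings.setdefault(tag_id, []).append((location, timestamp))

def show(tag_id):
    """Steps 4-5: the 'show' voice command returns the last known
    location of the tagged object and when it was last seen."""
    if tag_id not in sightings:
        return None
    return sightings[tag_id][-1]

# Simulated run: the tagged document moves twice, then the user asks for it.
record_sighting("pink-face", (120, 80), datetime(2013, 11, 1, 9, 0))
record_sighting("pink-face", (340, 210), datetime(2013, 11, 20, 16, 30))
location, last_seen = show("pink-face")
print(location, last_seen)  # (340, 210) 2013-11-20 16:30:00
```

In the actual system the registry is fed by the camera rather than by hand, but the query logic is the same: only the most recent sighting matters for step 5.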

The prototype currently handles a couple of facetags, combining the face recognition software with some color sampling to determine that it is looking at a facetag rather than a person's face. Q-ball saves locations and timestamps as XML, allowing it to track the movement of an object over time. We chose voice commands to keep a person's hands free, letting them continue working and maintain their workflow.
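As a sketch of the XML bookkeeping described above, Python's standard `xml.etree.ElementTree` is enough to append sightings and read back the most recent one. The element and attribute names here (`qball`, `facetag`, `sighting`) are illustrative assumptions, not the prototype's actual schema.

```python
import xml.etree.ElementTree as ET

def append_sighting(root, tag_id, x, y, timestamp):
    """Add one <sighting> under the facetag's element, creating it on first use."""
    tag = root.find(f"facetag[@id='{tag_id}']")
    if tag is None:
        tag = ET.SubElement(root, "facetag", id=tag_id)
    ET.SubElement(tag, "sighting", x=str(x), y=str(y), time=timestamp)

def last_known(root, tag_id):
    """Return (x, y, time) of the most recent sighting, or None."""
    tag = root.find(f"facetag[@id='{tag_id}']")
    if tag is None or len(tag) == 0:
        return None
    s = tag[-1]  # sightings are appended in time order
    return int(s.get("x")), int(s.get("y")), s.get("time")

root = ET.Element("qball")
append_sighting(root, "green-face", 55, 10, "2013-11-01T09:00")
append_sighting(root, "green-face", 300, 145, "2013-11-20T16:30")
print(last_known(root, "green-face"))  # (300, 145, '2013-11-20T16:30')
```

Because every sighting is kept rather than overwritten, the same file also yields the full movement history of an object, which is what enables tracking "over time" rather than only last-seen lookup.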

While this is a simple initial example, we believe it can feed into larger applications that follow a "teach and converse" model with a system like Q-ball. As recognition code becomes more sophisticated, the system could track by object recognition and learn from patterns of behavior observed in the space it watches over time. Q-ball could then make suggestions, such as when to take a break, or where a tool the user will soon need is located, before they ask for it. Users could intervene by teaching the system things that cannot be gleaned from passive watching and pattern recognition, such as items borrowed from another person or time-sensitive documents that must be handled within a certain period.

This also has implications for multiple users, letting people keep better track of their things in a shared space or determine whether someone else has borrowed something. Unlike other ubiquitous computing solutions, this one thinks at the individual level, allowing multiple people's inputs to be weighed, or the changing dynamics of a single person's workflow to be taken into consideration. The system could also learn some small, natural gestures to make the experience more holistic.

Storyboards

Due to time and technology limitations, we are only focusing on a small part of our concept idea.

With the perceptual computing camera and computer, we are giving the desk its own eyes, the ability to learn and help its human.

We don’t want to teach people new gestures for interacting with a system. We want to teach the system how to react to people’s workflow! The system does the adapting.

10%: Intro

Bob is a very messy and disorganized worker, and his desk knows it.

10%

The desk is able to see its entire surface and because of Bob’s habits…

The desk knows its task: to help Bob find the important documents he has worked on but that get buried over time.

10%

Prototype

Bob has three important documents.

How Our Prototype Works

[Slide graphic: three documents labeled Important #1, Important #2, and Important #3]

He puts “facetags” on the documents in the form of different color post-it notes with faces. They are different colors to distinguish between the different documents.
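One way to combine face detection with the color sampling described earlier is to average the pixels inside a detected face box and match them against the known post-it colors: a match identifies which facetag it is, and no match means the detector is probably looking at a real face. This is a standalone heuristic sketch, not the prototype's code; the reference colors and tolerance are assumptions.

```python
# Illustrative post-it reference colors (RGB); real values would be calibrated.
POSTIT_COLORS = {
    "pink": (255, 105, 180),
    "green": (80, 200, 120),
    "yellow": (255, 235, 59),
}

def mean_color(pixels):
    """Average a list of (r, g, b) samples taken inside a detected face box."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) // n for i in range(3))

def classify_facetag(pixels, tolerance=60):
    """Return the matching post-it color name, or None for a real face."""
    r, g, b = mean_color(pixels)
    for name, (cr, cg, cb) in POSTIT_COLORS.items():
        # Manhattan distance in RGB as a cheap similarity measure.
        if abs(r - cr) + abs(g - cg) + abs(b - cb) <= tolerance:
            return name
    return None

# A greenish patch is recognized as the green facetag...
print(classify_facetag([(82, 198, 118), (78, 205, 125)]))  # green
# ...while skin-tone samples match no post-it, so it's treated as a person.
print(classify_facetag([(224, 172, 105), (210, 160, 120)]))  # None
```

The color match does double duty here: it filters out real faces and tells the different documents' tags apart, which is why the tags use distinct post-it colors.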

We use face tracking to track the faces on the tags…

How Our Prototype Works

...so the desk knows where the important documents have ended up. It periodically saves the position of each tagged object over time.

A few weeks pass...

How Our Prototype Works

Computer! Find...

...and when Bob wishes to recall where his important documents are, he talks to the system to find them.

The system then highlights the last “seen” location of the objects for him.

How Our Prototype Works

To see our prototype in action, please view our video at: https://vimeo.com/81335703

How Our Prototype Works

A Special Thanks to: Robert Cooksey & Rajiv Mongia
