Continuous and Seamless Applications


An overview of SGS ATS work on new application experiences

Michael Heydt

Principal Technologist, SGS ATS

CONTINUOUS (SEAMLESS), IMMERSIVE, AND CONTEXT AWARE APPLICATIONS

AGENDA

• Overview

• Components

• Examples

• Technologies

• Demonstration

• Applicability

• Next Steps

• Q&A

HOW WE GOT HERE

• Seamless is a natural evolution and aggregation of the following previous work:

• Composite Applications

• Rich Interfaces

• Mobile

• Cloud

• Cloud / on-premise integration

• Natural User Interfaces

• ATS is already in discussion with a major energy company to

• Assist in building a workspace of the future

• With fully seamless / mobile / interactive trading environments

• Including room-size interactive visuals, and

• Community workspaces that adapt to the current person in the environment

WHAT I’M NOT GOING TO COVER

• There’s simply a lot of material, so I’ll only briefly cover:

• Kinect and NUI

• Cloud technologies

• Specifics on programming

• But I’ll be more than happy to do follow-ups with anyone at a later time

• The focus today is on what this is, a few examples, and a demo

MEET YOUR NEW OR SOON-TO-BE USER

• http://www.fastcompany.com/magazine/162/generation-flux-future-of-business

• Expects always on access

• Ability to work anywhere, any time, on anything

• Naturally works with multiple devices

• Device convergence is a thing of the past

CONTINUOUS / SEAMLESS

• The Continuous Client

• http://www.engadget.com/2010/05/26/a-modest-proposal-the-continuous-client/

• When you leave one device, you pick up your session exactly in the same place on the next device you use

• “Placeshifting” your computing experience from one device to the next with no break in your work, timelines or conversations.

• But this is much more than just the “client”

COMPONENTS OF A CONTINUOUS APPLICATION

• Operating System: Provides the capability to run code on a particular platform

• Application: Dynamically composited: “the streaming application”

• Services: Both in the cloud and on other mobile and ephemeral systems

• Contextual: The app knows who, what, when, where, and what’s around

• Rendezvous: Ability to dynamically locate other devices and utilize their capabilities

• Immersion: The application experience is everywhere and all around the user

• Multi-modal: Not just vision and typing, but gestures, touch, voice, and haptics

• Augmentation: Applications utilize other devices to extend the experience

THE CONTINUOUS CLIENT

• Not necessarily a common code base

• More a set of similar services on different devices

• That can find each other and augment the user’s experience

• Ideally they can be generated and/or composited “on-the-fly” (mashups anyone?)

• “Streaming Client”: where an application is composited and downloaded on the fly to a user based upon their current “context”

• In essence the client becomes immersive, moving with the user across location and devices, constantly providing the user with the services that are needed exactly at that moment
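The rendezvous and augmentation idea above can be sketched as a small registry where devices advertise their capabilities and a session looks up what is available nearby. This is a minimal Python illustration, not the demo’s actual .NET/Service Bus implementation; all names (`DeviceAdvertisement`, `Rendezvous`, the capability strings) are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class DeviceAdvertisement:
    """A device announces itself and what it can contribute."""
    device_id: str
    capabilities: set  # e.g. {"display", "contacts", "gps", "flick"}


class Rendezvous:
    """Toy registry where nearby devices find each other by capability."""

    def __init__(self):
        self._devices = {}

    def announce(self, ad: DeviceAdvertisement):
        self._devices[ad.device_id] = ad

    def find(self, capability: str):
        """Return the devices that can augment the session with `capability`."""
        return [d for d in self._devices.values() if capability in d.capabilities]


# A desktop session discovers a phone to pull contacts from:
hub = Rendezvous()
hub.announce(DeviceAdvertisement("phone-1", {"contacts", "gps", "flick"}))
hub.announce(DeviceAdvertisement("wall-1", {"display"}))
print([d.device_id for d in hub.find("contacts")])  # → ['phone-1']
```

In the real system the registry would live in the cloud (or on a service bus), but the lookup pattern is the same.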

CONTEXT AWARE

• Applications know who is using them, where they are, what they are running on, and what is nearby to augment services (and what time it is)

• Example of context: Attentive phone and Smart Actions
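One hypothetical shape for the “context” a continuous application consults is a simple record of who, what device, where, what’s nearby, and when. A minimal Python sketch (field names and values are illustrative, not from the demo):

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Context:
    """The who/what/where/when an app adapts to."""
    user: str          # who is using the app (e.g. from facial recognition)
    device: str        # what it is running on
    location: tuple    # (latitude, longitude), e.g. from the phone's GPS
    nearby: list       # other devices available to augment the experience
    when: datetime     # and what time it is


ctx = Context(user="mike", device="desktop",
              location=(40.7128, -74.0060),
              nearby=["phone-1", "wall-1"],
              when=datetime.now())

# The app can then adapt: only offer "flick to wall" if a wall is nearby.
can_flick_to_wall = "wall-1" in ctx.nearby
```

Everything downstream — rendezvous, augmentation, compositing — keys off a structure like this.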

IMMERSIVE

• Immersion is the state of consciousness where an immersant’s awareness of physical self is diminished or lost by being surrounded in an engrossing total environment, often artificial

• http://en.wikipedia.org/wiki/Immersion_(virtual_reality)

• Devices such as Kinect allow interaction away from the keyboard and mouse

• Devices such as phones can augment capabilities (as we will see)

AUGMENTED

• Augmented reality (AR) is a live, direct or indirect, view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data.

• http://en.wikipedia.org/wiki/Augmented_reality

• In addition to augmented reality, applications will be able to determine what other devices are nearby and use them to provide a combined and greater experience to the user

MULTI-MODAL

• Interaction with the application becomes more than mouse and keyboard

• These are antiquated and artificial means of interacting with the computer

• The new modes:

• GUI (graphical representation of information; does not imply mouse and keyboard)

• Natural (gestural interfaces, either on tablets or with a Kinect)

• Haptic (feedback given to the user in the form of resistance and vibration)

• Speech (ability to understand complex grammars for interacting with the system)

THE BRAVE NEW WORLD

• These are no longer “new” or “advanced” technologies:

• Rich Interfaces

• Social communication

EXAMPLES

• Fictional, but not so much anymore…

• Minority Report Shopping Mall

• Real

• Nsquared Seamless Architecture / Design Application

MINORITY REPORT REAL-TIME PERSONALIZED ADVERTISING

• Scenario:

• You walk into a public place, and video walls present you with personalized information

ENTERING THE PUBLIC PLACE

INITIAL IDENTIFICATION – MOTION TRACKING

DEVICE TRACKS MOTION

DOES A RETINAL SCAN TO IDENTIFY YOU

AND STARTS TO MAKE PERSONALIZED ADS

JOHN HAS SIMILAR TASTES TO MIKE

HOW REALISTIC IS THIS?

• Not very unrealistic, actually

• Detailed motion can be tracked by Kinect

• You can be identified easily by:

• Facial recognition (my demo later)

• RFID / NFC

• QR Code (a new market for t-shirts and hats?)

NSQUARED SEAMLESS DEMO

• Demonstration of a seamless application using multiple form factors

• Surface

• Slate

• Video wall

• Kinect

• Cellphone

• Similar things will be demonstrated later

• http://nsquaredsolutions.com/

• http://www.youtube.com/watch?v=oALIuVb0NJ4

PLACE A PHONE ON THE SURFACE

THEY START SHARING DATA – LIKE CONTACTS

SURFACE AUGMENTS THE PHONE

GESTURES ON PHONE EXTEND ALSO

SELECTING A DOCUMENT

AND THE SURFACE STARTS TO OPEN DATA

SLATE IS USED TO AUGMENT THE SURFACE

GIVING A DIFFERENT “LENS” ON THE DATA

SELECT A ROOM AND IT SHOWS ON THE SLATE

NOW MOVE TO A VIDEO WALL AND KINECT

GESTURE TO GO INTO THE HOUSE

AND YOU ANIMATE IN

LET’S INTERACT WITH THE MODEL

USE SLATE TO SELECT A NEW KNOB WITH A FLICK

AND THE MODEL CHANGES

PUT THE BILL OF MATERIALS ON THE SURFACE

AND IT FIGURES OUT WHAT IT IS

GIVES A MAP TO THE LINE ITEMS

LET’S SEND IT TO THE CUSTOMER

GET THE CONTACTS FROM THE PHONE AND SEND

NSQUARED – WHAT WAS DEMONSTRATED?

• Continuous Client: Multiple applications working together to complete a task

• Rendezvous: Finding other systems and collaborating

• Gestures: Flicking data from one device to another

• Location: Knowing what devices are nearby and where the user is

• Immersion: Movement through the data

• Augmentation: Multiple examples of devices augmenting each other
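The “flick” gesture that moves data between devices can be approximated with a simple threshold on accelerometer magnitude. This is a toy sketch for illustration only — the demo’s phone-side implementation isn’t shown, and the sample values below are made up:

```python
def detect_flick(samples, threshold=15.0):
    """Toy flick detector: a 'flick' is a short spike in acceleration
    magnitude above `threshold` (m/s^2). Phone SDKs deliver a stream
    of (x, y, z) accelerometer samples; we scan it for the spike."""
    for x, y, z in samples:
        magnitude = (x * x + y * y + z * z) ** 0.5
        if magnitude > threshold:
            return True
    return False


resting = [(0.1, 0.2, 9.8)] * 10          # phone held still (~1 g)
flicked = resting + [(12.0, 3.0, 14.0)]   # sudden jerk toward the screen
print(detect_flick(resting), detect_flick(flicked))  # → False True
```

A production detector would also look at direction and duration so it can tell a flick toward the wall from one toward the desk, but the spike test captures the core idea.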

THE DEMO - SCENARIO

• An arbitrary person sits in front of a computer, which recognizes the user and starts communication with the user’s phone

• User can interact with the phone application and request augmentation on the desktop system

• Desktop application can retrieve contacts from the phone to send mail

THE TECHNOLOGIES IN THE CONTINUOUS DEMO

• Kinect: Used for vision and voice capture

• Computer Vision: OpenCV / Emgu CV

• Cloud Services: Microsoft Azure, SQL Azure, WCF and REST APIs

• Phone: Windows Phone 7

• .NET: Common code for phone, desktop and cloud

• Voice Recognition: .NET Speech SDK

• Location Services: GPS on the phone; spatial data services in the cloud

• Rendezvous: Microsoft AppFabric Service Bus to locate and communicate between mobile systems

• Near-range wireless: UDP communications when on local WiFi (with fallback to cloud messaging)

• Gestures: Flick data from phone to desktop
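The “local UDP with cloud fallback” transport above can be sketched as: try the cheap local datagram first, and relay through the cloud only if the send fails. A minimal Python illustration — `cloud_send` here is just a callable standing in for the Service Bus / REST relay, not a real API:

```python
import socket


def send_update(payload: bytes, peer_addr, cloud_send):
    """Prefer local UDP when both devices share a WiFi network;
    fall back to a cloud relay when the datagram can't be sent."""
    try:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            sock.sendto(payload, peer_addr)
        finally:
            sock.close()
        return "udp"
    except OSError:
        # No local route (or name didn't resolve): relay via the cloud.
        cloud_send(payload)
        return "cloud"


relayed = []
# Loopback send succeeds even with no listener (UDP is fire-and-forget):
print(send_update(b"tick", ("127.0.0.1", 9999), relayed.append))  # → udp
```

Real code would also need peer discovery (to learn `peer_addr` in the first place) and acknowledgements, since UDP gives no delivery guarantee.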

KINECT

• Video, depth, and audio capture

• $199!

• The demo uses a Kinect, but only for video and voice

KINECT POINT CLOUDS AND SKELETONS

THE DEMO – GENERAL OUTLINE

• Train the system on your face on your desktop

• Training data is stored in the cloud

• Phone app sends location updates to the cloud

• Cloud does spatial queries to find nearby services and lets them know you are near

• When you are near, local systems get the facial data from the cloud

• When it sees you, it starts communications with your phone, first by cloud messaging and then direct WiFi if available

• Use phone to look at stocks, and “flick” them onto the other system
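The cloud-side spatial query in the outline — “find nearby services and let them know you are near” — reduces to a distance test between the phone’s last reported position and each registered service. A minimal Python sketch using the haversine formula; the service names and coordinates are hypothetical (the demo uses cloud spatial data services, not this code):

```python
from math import radians, sin, cos, asin, sqrt


def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(h))


def nearby_services(phone_pos, services, radius_km=0.1):
    """Which registered desktops/walls are within radius of the phone?"""
    return [name for name, pos in services.items()
            if haversine_km(phone_pos, pos) <= radius_km]


services = {"desk-kinect": (40.7128, -74.0060),   # same room as the phone
            "lobby-wall": (40.7130, -74.0062),    # ~30 m away
            "other-office": (40.7484, -73.9857)}  # ~4 km away
print(nearby_services((40.7128, -74.0060), services))
# → ['desk-kinect', 'lobby-wall']
```

At scale a spatial index (a grid or R-tree) would replace the linear scan, but the query contract is the same: position in, nearby service list out.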

FACE TRAINING AND RECOGNITION

REMOTE APP – DOESN’T KNOW WHO I AM

THE PHONE APP

THE PHONE APP

REMOTE APP NOW KNOWS AND SEES ME

PHONE APP – STOCK AND FLICK

REMOTE APP GETS MESSAGE FROM PHONE

REMOTE APP DISPLAYS AUGMENTED DATA

NEXT STEPS

• Add the ability to have the mobile phone give real-time orientation, position, and movement data

• Can be used to manipulate items

• Extensions to NUIDOTNET framework formalizing:

• Location updates

• Service location

• Device location via spatial queries

• Capabilities exchange

• Application compositing and delivery (streaming)

• Robust gesture processing (on phone, and with Kinect)

• Many more

Q&A

• Any questions?

THANKS!
