
HCI Projects



  • Scenario

    Project Goal

    Task

    Contact

    Bachelor / Master

    Announcement October 22nd 2014

    APE: Accessing Personal Data Egocentrically

    One way to handle large datasets is by provisioning movable viewports – so-called dynamic peepholes – which allow the user to explore information spaces in an egocentric way. Instead of panning the screen content (e.g. the photos of a collection), the user physically moves to off-screen content (e.g. a particular photo) as if it were situated in physical space. Studies on this technique have shown cognitive benefits when compared to non-egocentric interaction styles. Even though today's mobile devices already come with some degree of spatial awareness that allows for egocentric navigation (e.g. gyroscope), most apps still provide static navigation (e.g. by panning).
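    The dynamic-peephole idea above boils down to mapping the device's physical position onto a viewport over the data space. A minimal sketch, not part of the project itself; all names and the metres-to-pixels scale are illustrative assumptions:

```python
# Illustrative sketch: a "dynamic peephole" maps physical device motion
# to a moving viewport over a large 2D information space.
# The scale factor (pixels per metre) is an assumed, tunable constant.

def peephole_viewport(device_x_m, device_y_m, view_w_px, view_h_px,
                      px_per_metre=1000.0):
    """Return (left, top, right, bottom) of the data-space region, in
    pixels, that the physically moving display currently reveals."""
    cx = device_x_m * px_per_metre  # physical motion pans the viewport
    cy = device_y_m * px_per_metre
    return (cx - view_w_px / 2, cy - view_h_px / 2,
            cx + view_w_px / 2, cy + view_h_px / 2)
```

    Under these assumptions, moving the device 0.1 m to the right shifts the visible region by 100 px, so off-screen photos are reached by walking rather than panning.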

    Build a proof-of-concept prototype that supports egocentric exploration of one particular kind of personal data (for example photos, calendar data, the file system, ...). Evaluate your concept, analyse and discuss your findings.

    State-of-the-art analysis: opportunities and requirements of egocentric exploration in digital data (seminar presentation & paper)

    Concept development and implementation based on a chosen scenario in which the technique is presumed to be beneficial (project presentation & paper)

    Evaluation and reflection of your concept through a usability test (bachelor) or a lab experiment (master) (thesis & thesis defence)

    Jens Müller
    Room: PZ 906
    [email protected]

    Human-Computer Interaction Group – Prof. Dr. Harald Reiterer

  • Scenario

    Project Goal

    Task

    Contact

    Bachelor / Master


    Supporting Orientation for Egocentric Navigation in 3D Spaces

    One challenge of navigating large datasets, in particular with mobile devices, lies in the limited display space, which makes orientation hard. Several techniques, such as Overview+Detail or Wedges (see figure), exist to support orientation. These techniques assume static viewports (where the user moves the information space); little, however, is known about how to support orientation in scenarios with a dynamic viewport, where the user moves in physical space.
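    As a rough illustration of how a Wedge-style cue is constructed (the apex sits at the off-screen target and two legs reach onto the screen), here is a geometric sketch; the aperture and intrusion values are illustrative assumptions, not the heuristics from Gustafson et al.:

```python
import math

# Sketch of a Wedge-style off-screen cue: the off-screen target is the
# apex of an isosceles triangle whose legs reach onto the screen, so the
# visible base hints at the target's direction and distance.
# aperture_deg and intrusion are illustrative, hand-picked values.

def wedge(target, screen_w, screen_h, aperture_deg=20.0, intrusion=40.0):
    """Return (apex, leg1_end, leg2_end) in pixels; origin is the
    screen's top-left corner, target may lie outside the screen."""
    tx, ty = target
    # Nearest on-screen point, found by clamping the target to the rect.
    nx = min(max(tx, 0.0), float(screen_w))
    ny = min(max(ty, 0.0), float(screen_h))
    ang = math.atan2(ny - ty, nx - tx)              # apex -> screen direction
    leg = math.hypot(nx - tx, ny - ty) + intrusion  # legs overshoot the edge
    half = math.radians(aperture_deg / 2.0)
    leg1 = (tx + leg * math.cos(ang - half), ty + leg * math.sin(ang - half))
    leg2 = (tx + leg * math.cos(ang + half), ty + leg * math.sin(ang + half))
    return (tx, ty), leg1, leg2
```

    In this sketch, the farther the target, the longer the legs, so the on-screen base alone lets users estimate both direction and distance.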

    Design, implement, and evaluate a visualization technique that supports orientation for egocentric navigation in physical 3D space. Depending on the outcome of your seminar (and your personal preferences), the project can be either design- or evaluation-oriented.

    State-of-the-art analysis of existing visualization techniques (what can we learn from them, and what aspects can be applied to egocentric scenarios?) (seminar presentation & paper)

    Concept development and implementation (project presentation & paper)

    Evaluation and reflection of your concept through a usability test (bachelor) or a lab experiment (master) (thesis & thesis defence)

    Jens Müller
    Room: PZ 906
    [email protected]


    Figure 1. Left: Wedge visualization (Gustafson et al. 2008) supporting orientation through virtual landmarks; right: Egocentric navigation in a 3D space (without navigation aid).

  • Scenario

    Project Goal

    Task

    Contact

    Master


    Supporting Health and Wellness through Mobile Concepts: Collaborative Visual Analysis of Behavioral Data

    Mobile computing holds great promise for providing effective support for helping people manage their health in everyday life. At this point, the comprehensive collection of behavioral data, such as eating habits and physical activities, is crucial to evaluate user behavior and provide appropriate feedback. To analyze the collected data, scientists often make use of visual data analysis approaches. To do so, profiles of individuals have to be analyzed, compared, and clustered in order to recognize trends and patterns. New trends in HCI, like multi-display environments, large wall-sized displays, or proxemic interaction, provide a promising design space to support the collaborative visual data analysis of behavioral data.

    The goal of this project is to design visualization and interaction techniques which allow multiple users to collaboratively analyze data on a large wall-sized display. Large wall-sized displays have been shown to provide advantages in terms of a better common understanding of the information space, and they provide space to think, which facilitates tasks like clustering objects or getting an overview of the data. However, there are several drawbacks when working with large displays. One of them is the problem of visual distortion depending on the position in front of the display. Another problem is conflicting multi-user navigation on shared displays. The use of novel interaction and visualization techniques can help to overcome these problems (e.g. multi-focus visualization techniques, proxemic interaction, touchless interaction, a combination of explicit and implicit navigation).

    Literature research and state-of-the-art analysis (seminar thesis)

    Design and discussion of several interaction concepts (project work)

    Implementation of a prototype (project work) (C#)

    Evaluation of different concepts (thesis)

    Simon Butscher
    Room: PZ [email protected]


  • Scenario

    Project Goal

    Task

    Contact

    Master


    Facilitating Orientation in Situated Information Spaces

    In situated information spaces, a data set is mapped into real-world space. A spatially aware handheld display can be used as a viewport to visualize the data. Zooming and panning navigation can then be achieved in a natural way, as if looking through a handheld window into the virtual world. Some research has investigated the interaction aspect of such egocentric peephole navigation and found benefits with respect to navigation performance and long-term spatial memory when compared to non-egocentric navigation styles. However, facilitating orientation in situated information spaces is an open issue.

    The project goal is to design, implement, and evaluate different visualization techniques, based on the overview+detail and focus+context design patterns, which facilitate users' orientation in situated information spaces. Known techniques like fisheye views, folding the space, or providing a simple overview can be adapted for use on mobile devices. Furthermore, additional features like perspective corrections could be integrated.
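    One classic formulation behind the fisheye views mentioned above is the Sarkar–Brown graphical fisheye transform; the sketch below is a generic illustration (the distortion value is an arbitrary example), not a prescribed part of the project:

```python
# Sarkar-Brown graphical fisheye transform: magnifies positions near the
# focus and compresses the periphery, while keeping the ends fixed.

def fisheye(x, d=3.0):
    """Map a normalized distance x in [0, 1] from the focus to its
    distorted position; d > 0 controls the distortion strength."""
    return ((d + 1.0) * x) / (d * x + 1.0)
```

    Since g(0) = 0 and g(1) = 1, the focus and the border stay fixed while everything near the focus is spread out, which is what makes the technique attractive for small mobile viewports.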

    Literature research and state-of-the-art analysis (seminar thesis)

    Design and discussion of several interaction concepts (project work)

    Implementation of a prototype (project work) (C#)

    Evaluation of different concepts (thesis)

    Simon Butscher
    Room: PZ [email protected]


  • Scenario

    Project Goal

    Task

    Contact

    Bachelor


    Simplify Sketching in Post-WIMP Interactive Spaces

    Sketching is a crucial step in the design phase of the Usability Lifecycle. Photo traces and hybrid sketches are two sketching methods which enable the creation of expressive sketches that illustrate interactions or new user interfaces. For both methods, a real photo forms the base of the sketch. To simplify the creation of such sketches, this work aims at developing a sketching application which uses pen and touch to annotate photos or to trace their contours. In addition, cameras will be mounted at the Media Room's truss to be able to easily take photos which overview scenarios of Post-WIMP interactive spaces. These cameras will be accessed directly via the sketching application.

    This project aims at developing a sketching tool with a focus on the fast and easy creation of sketches based on real photos. The tool therefore accesses cameras mounted at the truss of the Media Room, allowing users to easily create good overview photos for scenario-based sketches. The sketching tool will be developed either as a collaborative sketching application on a huge interactive wall or as a single-user system on Microsoft's Perceptive Pixel.

    Literature research, state-of-the-art analysis (seminar presentation & paper)

    Development of a ready-to-use prototype (project presentation & paper)

    Conduction of user studies & analysis (thesis & thesis defence)

    Christoph Gebhardt
    Room: PZ [email protected]



  • Scenario

    Project Goal

    Task

    Contact

    Bachelor / Master


    Investigating the Potential of Augmented Paper for Active Reading Tasks

    Active Reading can be described as reading for learning or to understand. In this process, the reader reflects on what he reads and constructs meaning for his own purposes. Active Reading involves activities like marking and annotating text, comparing sources, or excerpting information. Regarding the support of these tasks, it is common sense that both paper and electronic devices have certain advantages. By taking the best of both worlds, this approach aims at creating an optimal Active Reading environment. Therefore, this project will simulate interactive paper electronically and integrate it into a virtual desktop, allowing the use of interactive paper for real-world Active Reading tasks.

    The first step of this project is to identify the basic tasks of Active Reading and to search for interface designs which support these tasks. In a second step, the hardware setup of the Integrative Workplace is refined (e.g., realizing document detection without markers). The final step is to implement and evaluate a mixed-reality Active Reading system which is integrated into the virtual desktop, enabling users to use the tools and programs they normally work with.

    Literature research, state-of-the-art analysis (seminar presentation & paper)

    Implementation of a prototype supporting common Active Reading tasks on augmented paper (interactive prototype, project presentation, and paper)

    Qualitative evaluation of the prototype (thesis & thesis defence)

    Christoph Gebhardt
    Room: PZ [email protected]


  • Scenario

    Project Goal

    Task

    Contact

    Bachelor / Master


    Mixed Reality Mirror Box

    Mirror boxes are used in psychotherapy to manipulate the activation of affected nerves or to lessen phantom pain in amputated limbs with the help of a mirrored image of the opposite, healthy limb. To improve the experience, head-mounted displays have already been used to show virtual-reality environments mimicking the affected or missing limb, displaying it as a virtual image in a virtual world. However, a see-through mixed-reality version of this solution is not yet available, and it is unclear if mirror-box therapy might profit from its realistic image in front of the real environment.

    The project aims at developing a mixed-reality mirror box using a head-mounted see-through display. Thus, mirroring the healthy limb can be compared to replacing the real limb(s) by virtual substitutes. The project should be based on scientific findings from related literature and be evaluated in user studies accordingly.

    Literature research, state-of-the-art analysis (seminar presentation & paper)

    Development of a ready-to-use prototype (project presentation & paper)

    Conduction of user studies & analysis (thesis & thesis defence)

    Christoph Gebhardt & Svenja Leifert
    Room: PZ [email protected]@uni-konstanz.de


  • Scenario

    Project Goal

    Task

    Contact

    Bachelor / Master


    Automated Evaluation with Squidy

    The data processing tool Squidy is used to integrate input devices into natural user interfaces. By building pipelines through different filters, data is manipulated in a visually appealing way, detached from the technical background processes. Although Squidy offers a wide variety of settings, a live evaluation framework has not been integrated yet.

    The goal of this project is to design a framework that uses Squidy to automatically analyse/plot data gathered in usability tests, e.g. a tapping test. The project should be based on scientific findings from related literature and be evaluated in user studies accordingly.
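    A tapping test is classically analysed with Fitts' law. Independent of Squidy's actual API (which is not detailed here), the core of such an automated analysis might look like this standalone sketch, which fits MT = a + b·ID over logged trials:

```python
import math

# Sketch of a Fitts' law analysis for a tapping test: each trial logs
# target distance D, target width W and movement time MT; we compute the
# Shannon index of difficulty ID = log2(D/W + 1) and fit MT = a + b*ID
# with ordinary least squares.

def fitts_fit(trials):
    """trials: iterable of (distance, width, movement_time).
    Returns the intercept a and slope b of the fitted regression."""
    ids = [math.log2(d / w + 1.0) for d, w, _ in trials]
    mts = [mt for _, _, mt in trials]
    n = float(len(ids))
    mean_id, mean_mt = sum(ids) / n, sum(mts) / n
    b = (sum((i - mean_id) * (m - mean_mt) for i, m in zip(ids, mts))
         / sum((i - mean_id) ** 2 for i in ids))
    a = mean_mt - b * mean_id
    return a, b
```

    The slope b (seconds per bit) is one common summary measure for comparing input devices, which is exactly the kind of quantity a live evaluation framework could plot automatically.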

    Literature research, state-of-the-art analysis (seminar presentation & paper)

    Development of test setting & tasks, implementation of test framework (project presentation & paper)

    Conduction of user studies & analysis (thesis & thesis defence)

    Svenja Leifert
    Room: PZ [email protected]


  • Scenario

    Project Goal

    Task

    Contact

    Bachelor / Master


    Using Eye Tracking in Spatial Memory User Studies

    Our eyes are continually moving, adapting to our point of focus or adjusting the line of sight. Gaze motion is not only extremely fast, but also works subconsciously most of the time, allowing interesting insights to be gained during human-computer interaction. In cognitive psychology, recent study results hint at a correlation between (spatial) memory and eye movement towards relevant screen positions. Due to the subconscious nature of gaze motion, this connection could offer fast and faithful feedback on users' spontaneous reactions when asked about past events and locations.
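    Before such correlations can be computed, raw gaze samples are usually aggregated into fixations. A minimal sketch of the standard dispersion-threshold (I-DT) method; the threshold values and data layout are illustrative assumptions:

```python
# Sketch of dispersion-threshold (I-DT) fixation detection: grow a
# window of gaze samples while its spatial dispersion stays small, and
# report windows that last long enough as fixations.
# max_dispersion (px) and min_duration (s) are illustrative defaults.

def _dispersion(window):
    xs = [x for _, x, _ in window]
    ys = [y for _, _, y in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=30.0, min_duration=0.1):
    """samples: time-ordered list of (t, x, y) gaze points.
    Returns fixations as (t_start, t_end, centroid_x, centroid_y)."""
    fixations, i, n = [], 0, len(samples)
    while i < n:
        j = i + 1
        while j <= n and _dispersion(samples[i:j]) <= max_dispersion:
            j += 1
        j -= 1  # last window size that still satisfied the threshold
        window = samples[i:j]
        if len(window) >= 2 and window[-1][0] - window[0][0] >= min_duration:
            xs = [x for _, x, _ in window]
            ys = [y for _, _, y in window]
            fixations.append((window[0][0], window[-1][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j              # continue after the detected fixation
        else:
            i += 1             # no fixation starts here; slide onwards
    return fixations
```

    Fixation centroids like these are what would be compared against the remembered screen positions in a spatial memory study.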

    As a first step, related literature is to be evaluated to establish a theoretical foundation of how HCI could make use of eye tracking in memory experiments. Depending on the research question(s), this also requires the development of one or several test tasks. An interactive prototype is to be implemented or adjusted in order to conduct a usability study for answering the research question(s). The results need to be statistically analysed and discussed accordingly.

    Literature research, state-of-the-art analysis (seminar presentation & paper)

    Development of test setting & tasks, implementation of test framework (project presentation & paper)

    Conduction of user studies & analysis (thesis & thesis defence)

    Svenja Leifert
    Room: PZ [email protected]


  • Scenario

    Project Goal

    Task

    Contact

    Master


    Gaze Control as Future Automotive Application

    It has long been a dream in the automotive industry to lessen the workload on car drivers by replacing some of the manual input (e.g. when manipulating the centre console) with touchless interaction such as gaze control. Nowadays, a multitude of eye-tracking systems offer the possibility of realising what had seemed impossible only a few years ago. Mobile or head-mounted trackers might be used either over a wide range to enable several components, or in a narrower field to control only a single component such as the head-up display. Menu selection, adjusting the air conditioning, or the like will then be possible while the hands are free to control the steering wheel.

    This project has two main goals, to be worked on both theoretically and practically: 1) to research available eye-tracking solutions and evaluate their (dis-)advantages, which includes integrating the most suitable solution into a simple demonstrator cockpit by finding the optimal tracker position; and 2) to develop the two mentioned usage scenarios (wide and narrow field of interaction), showing how eye tracking and gaze control may be used in automotive applications.

    Internet, industry, and literature research on eye-tracking systems (seminar presentation & paper)

    Integration of an eye tracker into a prototypical automotive environment, evaluation of optimal position & usage scenarios, development of a test application (project presentation & paper)

    Evaluation of test set-up and test application, analysis and implications for future work (thesis & thesis defence)

    Thomas Klberer, Svenja Leifert
    Rooms: PZ 908, PZ [email protected]@uni-konstanz.de


  • Scenario

    Project Goal

    Task

    Contact

    Master


    Skeleton Body Model for Future Automotive Applications

    With systems such as adaptive cruise control and the collision prevention assistant, a car's view of the outside is getting more and more complete. This makes the car intelligent. To project such intelligence into a car's interior, one ideally has to monitor the complete interaction space available. One key element is to completely oversee the upper body of the occupants in order to know exactly how they are seated and what the driver or passenger is doing. Such a system will enable a lot of new applications and, moreover, first realizations of prediction. It will help to increase comfort and safety.

    The first goal is to collect information about available body models, the algorithms behind them, and anatomic constraints, and to think about methods to handle overlaps. The next goal is to integrate a three-dimensional camera into a demonstrator and visualize its data. The main goal is to create a software development kit which realizes the implementation of an intelligent algorithm, based on the gathered information, to map a skeleton body model onto the camera's three-dimensional data.
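    One of the anatomic constraints mentioned above can be illustrated with a fixed bone length: a noisy child-joint estimate is projected back onto the sphere of constant distance around its parent joint. The function below is a minimal, hypothetical sketch, not a prescribed part of the SDK:

```python
import math

# Sketch of a simple anatomic constraint for skeleton fitting: snap a
# noisy child-joint position back to a fixed bone length from its parent.

def constrain_bone(parent, child, bone_length):
    """parent, child: (x, y, z) joint estimates in metres.
    Returns the child moved along the parent->child direction so that
    the bone has exactly bone_length."""
    v = [c - p for p, c in zip(parent, child)]
    norm = math.sqrt(sum(e * e for e in v)) or 1.0  # avoid /0 degeneracy
    return tuple(p + e / norm * bone_length for p, e in zip(parent, v))
```

    Applying such a constraint per bone after each camera frame is one simple way to keep a fitted skeleton anatomically plausible despite depth-sensor noise.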

    Internet, industry, and literature research on available body models (seminar presentation & paper)

    Visualization of three-dimensional camera data and development of intelligent algorithms based on the gathered information (project presentation & paper)

    Creation of a software development kit and implementation of a first skeleton body model (thesis & thesis defence)

    Thomas Klberer
    Room: [email protected]
