
Senior Project Ideas

1. ADAPT: Agent Development and Prototyping Testbed
   a. Networked Multi-user Interaction
   b. Dialogue and Interaction System
   c. Implement Grasping for Objects
   d. Expanded Navigation Mesh Capabilities
   e. Cocktail Party Virtual Environment
   f. Exploring and Integrating SmartBody
   g. TOPIARY

2. Steering / Navigation / Planning
   a. Kinetic Navigation Meshes for Multi-Agent Navigation
   b. Modeling autonomous agents as deformable bodies
   c. Grasp-point navigation meshes for Full-body Character Navigation
   d. Adaptive Steering
   e. Exploiting Synchronization Slack in Multi-Agent Simulations
   f. Accelerating Search
   g. Mine Simulation

3. Character Animation
   a. Robust Locomotion of Virtual Humans / Footstep Annotated Motion Graphs / Footstep Driven Motion Synthesis
   b. Multi-modal foot and gait analysis
   c. Walking under realistic load conditions
   d. Agent Conversation Simulator
   e. Body Self-Intersection and Deformation Response
   f. Development of an Animation Blocking Tool using Kinect Body Motion Input
   g. Physically-based dance controllers
   h. Micro-expressions and micro-gesture controllers
   i. Secondary motions for game characters
   j. Generating Synthetic Animation Data from a Virtual Motion Capture Studio
   k. Personality-Driven Character Animation

4. Behavior Authoring
   a. Analysis and Evaluation of Team-Based Sports / Power in the hands of the Spectator / A Real-Time Augmented View of Team-Based Sports
   b. Learning Behavior Models for Sports
   c. Authoring “Excitement” in NPC Behavior
   d. Behavior Capture using the Kinect
   e. Event Recognition using Behavior Trees as an Action Lexicon
   f. Building a Character “Bible”
   g. Parameterized Action Representation Memory Model in Behavior Tree System

5. Sound Propagation
   a. Texture-based Sound Propagation using the GPU

6. Games
   a. Development of Sword Fighting AI Using Game Tree Search and Analysis
   b. Development of a Virtual Reality Racquet Game Using Kinect Body Tracking and a Virtual Locomotion Control (VLC) User Interface
   c. Development of adaptive games based on bio-sensor feedback (EEG, EKG, Respiration, etc.)
   d. Development of games and novel UI interfaces based on the eye tracker for mobile devices

7. Rendering / 3D Modeling

   a. Specular Reflections as Light Sources
   b. Semi-Automatic Semantic Tagging of 3D Models
   c. Fabrication of toys and puzzles from 3D Models

8. HCI
   a. Multi-Modal User Interface for “Robot” Control
   b. Wearable Computer Based on Mobile Phone Platform
   c. Gesture Recognition for Mobile Phone Applications Utilizing Inertial Sensor Inputs
9. Simulation/Optimization
   a. Evolutionary Creatures
10. Nursing School
   a. Projects
11. ESE
   a. Sucking and swallowing animation model for at-risk infants


ADAPT: AGENT DEVELOPMENT AND PROTOTYPING TESTBED

The Agent Development and Prototyping Testbed (ADAPT) is a Unity platform designed for rapidly iterable experimentation on virtual crowds in arbitrary environments. It provides navigation mesh generation, steering, locomotion, animation, and behavior-tree-based decision controllers for medium-sized, high-fidelity crowds. You can see a demo of the current state of the platform at http://www.youtube.com/watch?v=Y6Tp9n_hJSE. The ADAPT framework is designed to work within Unity and is written in C#, with core libraries written in C++ (Unity plugins). All ADAPT-related projects will require familiarity with Unity and C#. Since ADAPT is an open-source framework intended for public use, we will conform to the highest standards of software engineering and coding practice for all ADAPT-related projects.

Networked Multi-user Interaction. Using a C# library or Unity's built-in networking tools, design a peer-to-peer networking system so that multiple users can interact with the environment, the actors, and each other.

Dialogue and Interaction System. Implement an aesthetically pleasing GUI in Unity for communicating with non-player characters. This should follow traditional 3D RPG paradigms, such as dialogue trees; BioWare's dialogue wheel would be a good starting point for this project. Characters with whom the player is interacting should play sounds and appropriate gestures, and should adopt an appropriate gaze and posture.

Implement Grasping for Objects. Combine motion capture data and an existing strength-biased, orientation-sensitive IK system to produce a comprehensive reaching and grasping system for objects in a virtual environment. Our goal is to produce a lightweight, modular character controller capable of performing sophisticated actions such as bending down to pick an item off the floor.

Expanded Navigation Mesh Capabilities. Extend the current navmesh generation code to take greater advantage of Recast's features. This should include tiled navmeshes that can be loaded and unloaded, as well as dynamically changing navmeshes that are regenerated in response to changes in the environment. This will require familiarity with C++ as well as C#.

Cocktail Party Virtual Environment. Design and model an aesthetically pleasing environment suitable for a cocktail party simulation.

1. Environment Asset Creation. This environment should have multiple distinct areas (such as a mansion with a large ballroom, but also a parlor, patio, lawn, etc.). These assets should be suitable for real-time simulation in Unity.


2. Character Asset Creation. Design and model human characters with suitable attire to participate in this scenario. These assets should be suitable for real-time simulation in Unity.

3. Behavior Design. The cocktail party environment will require the design and definition of multi-agent behaviors that are appropriate for the setting. Agent behaviors and events will be defined using Parametrized Behavior Trees (PBTs).

Exploring and Integrating SmartBody. SmartBody (and the Virtual Human Toolkit) is a comprehensive library and toolset for simulating individual humans in a virtual environment. Explore integrating this library and its animations into Unity for use in locomotion, animation, grasping, gaze tracking, and other features. Especially interesting are the BML (Behavior Mark-Up Language) interface, which can be used for high-level control over SmartBody animation features, and the facial animation tools.

TOPIARY. TOPIARY is a graphical user interface for authoring parametrized behavior trees representing complex multi-agent interactions.
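Since several of the projects above and below build on parametrized behavior trees, a minimal sketch of the idea may help. The node types and the Converse subtree below are purely illustrative and are not the ADAPT or TOPIARY API; they only show how a reusable subtree can be instantiated with different parameters.

```csharp
using System;
using System.Collections.Generic;

// Minimal, illustrative sketch of a parametrized behavior tree (PBT).
// None of these types come from ADAPT or TOPIARY; they only show the idea
// of composing agent behaviors from reusable, parameter-taking subtrees.
public enum Status { Success, Failure, Running }

public abstract class Node
{
    public abstract Status Tick();
}

// A leaf node wrapping an arbitrary action bound at authoring time.
public class ActionNode : Node
{
    private readonly Func<Status> act;
    public ActionNode(Func<Status> act) { this.act = act; }
    public override Status Tick() { return act(); }
}

// Runs children in order; stops as soon as one child does not succeed.
public class SequenceNode : Node
{
    private readonly List<Node> children;
    public SequenceNode(params Node[] children) { this.children = new List<Node>(children); }
    public override Status Tick()
    {
        foreach (var child in children)
        {
            var s = child.Tick();
            if (s != Status.Success) return s;
        }
        return Status.Success;
    }
}

public static class PbtDemo
{
    // A "parametrized" subtree: the same structure can be instantiated for any
    // pair of agents and any conversation topic.
    public static Node Converse(string speaker, string listener, string topic)
    {
        return new SequenceNode(
            new ActionNode(() => { Console.WriteLine($"{speaker} approaches {listener}"); return Status.Success; }),
            new ActionNode(() => { Console.WriteLine($"{speaker} talks about {topic}"); return Status.Success; }));
    }

    public static void Main()
    {
        Node party = Converse("Alice", "Bob", "the weather");
        party.Tick();   // prints both steps and returns Success
    }
}
```

In a real system the leaves would invoke the character's locomotion, gaze, and gesture controllers rather than printing to the console.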


STEERING / NAVIGATION / PLANNING

Kinetic Navigation Meshes for Multi-Agent Navigation
A major computational bottleneck in multi-agent simulations is performing the navigation and nearest-neighbor queries used for collision avoidance. The end effect is either reducing the number of agents or reducing the fidelity of the simulation (by removing collision-free guarantees). We propose a novel navigation mesh data structure that facilitates constant-time / log-time queries for crowd simulation. The proposed method has strict guarantees on collision-free simulation and is capable of supporting a much larger number of agents in real time. We can empirically compare the method to the prior state of the art, both in terms of computational performance and quality of simulation, using a benchmarking suite.

Modeling autonomous agents as deformable bodies
Current crowd approaches model agents as simple discs. These approaches have several disadvantages and cannot simulate the full capabilities of virtual humans (e.g., walking sideways, moving through narrow corridors, tight squeezes, etc.). The collision model of a real human is much more complex than a simple disc and constantly 'deforms' due to internal forces exerted by the agent as it steers in dynamic environments, as well as external forces (exerted by other agents) that affect the character's state. In this work, we model agents as deformable bodies and propose a kinetic navigation mesh that facilitates efficient nearest-neighbor and navigation queries for such agents. (Tied to the previous project.)

Grasp-point navigation meshes for Full-body Character Navigation
Human agents interact with their environment in complex ways, using a combination of end effectors to navigate through rich 3D environments. Locomotion capabilities in which a combination of limbs is used (e.g., climbing, or jumping over obstacles with the support of the hands) are not modeled in current autonomous characters. In this work, we annotate the environment with 'grasp points' that a virtual character can use for hand or foot placement, and propose a method for efficient navigation using a rich set of locomotion capabilities. Animate a zero- or micro-gravity simulation to demonstrate how your method works (think: astronauts navigating inside a large open space vehicle or moon base).

Adaptive Steering
Problem statement. Despite a large amount of research in crowd simulation and agent navigation, it is still difficult for human-like agents to autonomously navigate a virtual world in practical applications. Traditional crowd simulation papers model macroscopic phenomena, while most navigation papers model only one or two of the aspects that humans use when steering. There are several different aspects of steering and navigation which are used independently (or in conjunction) depending on the current scenario an agent finds itself in. These include: (1) heuristic behaviors in a densely crowded environment, (2) space-time predictions in lightly crowded environments, and (3) space-time plans in complex configurations of static and dynamic obstacles. Inevitably, one aspect of human-like steering is not enough for an agent to robustly steer through all the scenarios it encounters.

Proposed Solution. In this project, we propose a method that automatically learns when to use each steering technique. The method can be seen as a classifier, trained using reinforcement learning, which takes the current situation of an agent as input and outputs the steering technique that is most appropriate for that situation.

Related Work / Algorithms to use. Traditional steering research models different aspects of human steering (different "ways of thinking"). These approaches can be classified as:
(1) Reactive / spatial prediction (any rule-based technique; Reynolds)
(2) Space-time predictive (the PPR paper; Donikian)
(3) Local field-based (modeling human perception: egocentric)
(4) Geometric (RVOs)
(5) Global fields (continuum crowds, potential fields)
(6) Social forces (Helbing, Pelechano)
Data-driven approaches try to model human thinking but are limited by the data used. Machine learning has barely been explored for crowd navigation.

Method of scoring an algorithm. The score for a particular algorithm on a scenario will be a multi-dimensional score in which different components can be prioritized. Some possible scoring measures:
(1) choose the algorithm that had the fewest collisions
(2) choose the algorithm that had the fewest non-human-like bugs, such as:
(3) oscillations, in both position and orientation
(4) side-strafing for too long
(5) walking backwards for too long
(6) unreasonable accelerations or velocities
(7) progress towards the goal (total distance traveled, time taken to reach the goal, total effort)
(8) computational performance

Method of learning and classification. We can run every algorithm over the space of scenarios and compute the above scores for each algorithm on each scenario (an offline process). This allows us to train a machine that provides a scenario-to-score mapping for every algorithm. We can use these machines to create a classifier that chooses the best possible algorithm, given a scenario. At any given instant of the simulation, an agent is located at a point in scenario space, which describes the relevant configuration of obstacles and other agents and its own history. Our classifier will receive a point in scenario space as input, and output the desired navigation algorithm to use.

Analysis and Results.
(1) Show "scores" for each algorithm in the ordered scenario space. We should observe peaks and valleys for each algorithm: no single algorithm can effectively handle the space of all possible scenarios.
(2) Show "scores" for all algorithms using a "max" operator. We should observe good coverage.
(3) Demonstrate results where agents are steering in complex situations and choosing different algorithms at different points (can be shown using color-coding).
(4) Different agents in the same situation will be using different algorithms.
(5) Seamless transitions between different steering algorithms.

Notes:
(1) We have implementations of many of these algorithms.
(2) We have a method of sampling the space of all possible scenarios, and methods of scoring an algorithm on a scenario.

Exploiting Synchronization Slack in Multi-Agent Simulations
Parallelizing multi-agent simulations requires synchronization between spatially co-located agents, since agents read the internal data structures of other agents. In this project, we can explore the different methods of data decomposition in multi-agent applications and how we can exploit synchronization slack to achieve speed-up with no noticeable change in behavior.

Decomposition Methods.
(1) Naive decomposition: dividing agents among threads equally.
(2) Spatial decomposition: taking into account the spatial positions of agents.
(3) K-d tree decomposition: partition the world based on the number of agents using a k-d tree so that agents are allocated equally to threads.

Synchronization. Agents that read the data of agents on other threads require synchronization. Classical synchronization using barriers is an overhead that prevents near-linear speedup as the number of threads increases. We propose a slackened synchronization scheme in which agents can read "stale" data of other agents, with bounds placed on the slack (maximum age, defined relative to the last time the state of the agent was updated). Our hypothesis is that slackened synchronization does not reduce the quality of the AI (use prior work in [1] as the method of evaluation) but significantly improves scalability. (A minimal sketch of this stale-read scheme appears at the end of this section.)

Dynamically choosing slack. The amount of slack we can use may be highly situation dependent. For example, we may have high tolerance in open areas but may require a greater amount of synchronization in narrow passageways where coordination between agents is a must. It will be interesting to explore how we can determine the maximum slack based on environment and agent configurations.

No synchronization. An alternate approach is to have a predictive model of neighboring agents and no synchronization between agents. It will be interesting to compare and evaluate the performance and quality of AI for these approaches.

Accelerating Search
Planning/search presents an ideal approach for many automated applications (e.g., expert systems, autonomous simulations, etc.) due to the potential for arriving at an optimal solution, given sufficient resources. The main reason why the use of search is prohibitive in many real-time applications is the burden of computation. In this project, we propose to explore several avenues of accelerating search in order to make it a practical choice under real-time constraints. In particular, we can look at the following aspects:

1. Algorithmic optimizations
2. Exploiting error tolerance
   a. Reduction in floating-point precision
   b. Reducing the number of iterations (controlling the search horizon)
   c. Sub-optimality of plans
      i. Weighted heuristics (Weighted A*; anytime search: ARA*) (see the sketch after this list)
   d. Granularity of transitions (modeling variable resolution of transitions)
   e. Reduction in the precision of the cost computation
   f. Reduction in the precision of the heuristic computation (this will indirectly affect the search)
3. Parallel solutions
   a. Single-agent search
   b. Multiple-agent search with synchronization
   c. Centralized search (mini-max, mini-min search)
4. Custom FPGA hardware solutions to speed up core compute-intensive operations
5. Custom architectures for search
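As a concrete reference for item 2(c)(i), here is a toy weighted A* on a grid. The grid world, unit step costs, and Manhattan heuristic are assumptions for illustration, not part of the project; setting w = 1 recovers ordinary A*, while larger weights expand fewer nodes at the cost of possibly longer paths.

```csharp
using System;
using System.Collections.Generic;

// Toy Weighted A*: inflating the heuristic by w >= 1 trades optimality for speed.
public static class WeightedAStar
{
    static readonly (int dx, int dy)[] Moves = { (1, 0), (-1, 0), (0, 1), (0, -1) };

    public static int PathCost(bool[,] blocked, (int x, int y) start, (int x, int y) goal, double w)
    {
        int width = blocked.GetLength(0), height = blocked.GetLength(1);
        var g = new Dictionary<(int, int), int> { [start] = 0 };
        var open = new List<(int x, int y)> { start };

        while (open.Count > 0)
        {
            // Linear scan for the node with the smallest f = g + w * h (fine for a toy grid).
            open.Sort((a, b) =>
                (g[a] + w * Heuristic(a, goal)).CompareTo(g[b] + w * Heuristic(b, goal)));
            var cur = open[0];
            open.RemoveAt(0);
            if (cur == goal) return g[cur];

            foreach (var (dx, dy) in Moves)
            {
                var nxt = (x: cur.x + dx, y: cur.y + dy);
                if (nxt.x < 0 || nxt.y < 0 || nxt.x >= width || nxt.y >= height || blocked[nxt.x, nxt.y]) continue;
                int cost = g[cur] + 1;                       // unit cost per step
                if (!g.ContainsKey(nxt) || cost < g[nxt])
                {
                    g[nxt] = cost;
                    if (!open.Contains(nxt)) open.Add(nxt);
                }
            }
        }
        return -1; // no path
    }

    static double Heuristic((int x, int y) a, (int x, int y) b)
        => Math.Abs(a.x - b.x) + Math.Abs(a.y - b.y);

    public static void Main()
    {
        var blocked = new bool[8, 8];
        blocked[3, 1] = blocked[3, 2] = blocked[3, 3] = true;
        // w = 1.0 gives plain A*; larger w expands fewer nodes but may return longer paths.
        Console.WriteLine(PathCost(blocked, (0, 0), (7, 7), 1.5));
    }
}
```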

Note: We have done some preliminary exploration of (4). We concluded that the collision-detection function was the most compute-intensive function and proposed an accelerator for it, with some decent results. We did not have much success with the heuristic function due to its simplicity; if the heuristic function were more complicated, it would be worth proposing an accelerator for it as well.

Mine Simulation
A couple of years ago we began a project on underground mine simulation. This should be completely rebuilt in Unity using ADAPT. There are several interesting parts to this, including crew and work scheduling, virtual humans interacting with machinery, physiological modeling depending on airflow and disaster (fire) by-products, and evacuation planning. See P. Huang, J. Kang, J. Kider Jr., B. Sunshine-Hill, J. McCaffrey, D. Velázquez Rios and N. Badler. "Real-time evacuation simulation in mine interior model of smoke and action." Computer Animation and Social Agents (CASA) 2010.
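Returning to the Exploiting Synchronization Slack project above, the following is a minimal sketch of the stale-read idea: each agent publishes an immutable snapshot per tick, and neighbors read it without locking as long as it is not older than the allowed slack. The types and the slack bound are hypothetical, not taken from an existing framework.

```csharp
using System;

// Illustrative sketch of "slackened synchronization": an agent publishes an immutable
// snapshot of its state each tick, and neighbors read the most recent snapshot without
// locking, as long as it is no older than MaxSlackTicks.
public sealed class AgentSnapshot
{
    public readonly float X, Y;
    public readonly long Tick;            // simulation tick at which this state was produced
    public AgentSnapshot(float x, float y, long tick) { X = x; Y = y; Tick = tick; }
}

public sealed class Agent
{
    public const long MaxSlackTicks = 4;  // hypothetical upper bound on neighbor-data staleness
    private volatile AgentSnapshot latest = new AgentSnapshot(0, 0, 0);

    // Called by the agent's own thread after it updates its position.
    public void Publish(float x, float y, long tick) => latest = new AgentSnapshot(x, y, tick);

    // Called by other threads. Returns true if the (possibly stale) snapshot is fresh
    // enough to use; otherwise the caller must wait or fall back to a predictive model.
    public bool TryRead(long currentTick, out AgentSnapshot snapshot)
    {
        snapshot = latest;                 // single volatile read, no lock or barrier
        return currentTick - snapshot.Tick <= MaxSlackTicks;
    }
}

public static class SlackDemo
{
    public static void Main()
    {
        var neighbor = new Agent();
        neighbor.Publish(1.0f, 2.0f, tick: 10);

        // Reader at tick 12: the data is 2 ticks old, which is within the allowed slack.
        if (neighbor.TryRead(currentTick: 12, out var snap))
            Console.WriteLine($"using neighbor state from tick {snap.Tick}");
    }
}
```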


CHARACTER ANIMATION

Robust Locomotion of Virtual Humans / Footstep Annotated Motion Graphs / Footstep Driven Motion Synthesis
The first 'step' in simulating autonomous virtual humans is to provide a robust, versatile interface for controlling the lower body of a fully articulated character. In this project, we propose a data-driven approach for controlling a virtual character using footsteps as the command interface. The resulting character should produce natural walking, running, jumping, and side-stepping motions.

Issues / Open questions:
(1) Analysis of input motions to automatically annotate them and define "actions" which can be selected by a controller. For every action, we must define its pre-conditions (start state), effect (end state), and cost of execution (energy cost).
(2) Defining a "parameter space" for input motion. Parameters can be quantitative (e.g., speed, turning, choice of step) or qualitative (e.g., style of motion). The "step space" paper [1] defined 10 parameters for a footstep choice.
(3) Choosing the minimum motion set required. [1] had a database of 200 step animations.
(4) Spatial and temporal control: the locomotion system must ensure that spatial and temporal constraints are met. The work in [2] describes spatial warping, speed warping, and time warping to address this issue.
(5) Greedy vs. planning: reactively choosing the next foot placement, or choosing foot placements based on a predicted model of the next N steps.
(6) Motion re-targeting for different character types.

References:
[1] "The step space: example-based footstep-driven motion synthesis." Basten, Peeters and Egges.
[2] "Online locomotion generation based on motion blending," SCA '02.

Multi-modal foot and gait analysis
The goal of this project is to build models of how people spread weight across their feet. Such a model would be useful:

1. for a better understanding of the foot;
2. for building a biomechanically-based model of human balance;
3. for creating biomechanically-based procedural animation models. Procedural character animation allows character motions to be created on the fly in real time, making them more flexible than approaches such as motion graphs and move trees, where animations must be pre-configured a priori.
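One concrete, low-level building block for such models is the center of pressure of each foot, computed from the insole's pressure sensors. The sketch below is illustrative only; the sensor layout, units, and three-sensor example are made up.

```csharp
using System;

// Illustrative only: computing the center of pressure (CoP) from a set of sole-pressure
// sensor readings. A real insole would provide its own calibrated sensor positions.
public static class CenterOfPressure
{
    // positionsX/positionsY give each sensor's location on the sole (e.g., in cm);
    // pressures gives the simultaneous reading at each sensor.
    public static (double x, double y) Compute(double[] positionsX, double[] positionsY, double[] pressures)
    {
        double total = 0, sumX = 0, sumY = 0;
        for (int i = 0; i < pressures.Length; i++)
        {
            total += pressures[i];
            sumX += pressures[i] * positionsX[i];
            sumY += pressures[i] * positionsY[i];
        }
        if (total <= 0) throw new ArgumentException("No load on the foot.");
        return (sumX / total, sumY / total);   // pressure-weighted centroid
    }

    public static void Main()
    {
        // Three hypothetical sensors: heel, lateral midfoot, big toe.
        var (x, y) = Compute(new[] { 0.0, 2.0, 1.0 }, new[] { 0.0, 10.0, 20.0 },
                             new[] { 40.0, 10.0, 25.0 });
        Console.WriteLine($"CoP = ({x:F1}, {y:F1})");
    }
}
```

Tracking how the CoP trace differs across people, loads, and shoes is one simple quantitative handle on the questions below.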

So far, our lab has collected data for

1. people carrying bags of different weights (standing)
2. people idling and ambling (slow movements)

Initial work would take this data and try to answer:

1. Can we build a model that allows us to identify a person based on their idling patterns?


2. Can we build a model that allows us to identify what they are carrying based on the sole pressure data?

3. Can we use the above models to build a simple procedural character capable of standing and idling?

The first (non-computational) step is to collect a lot of video of foot behaviors while people are standing, talking, or waiting. Since only feet will be in the video, you do not need model releases, but you do need to be subtle about it so as not to bias the data! The next step would be to collect sole pressure, force, and motion data for people standing, walking, and balancing while carrying different weights, having different body types, and wearing different shoes.

Walking under realistic load conditions
It is now rather rare to see someone walking without holding or manipulating some additional "baggage", whether it be as small as a cellphone or as large as a handbag, briefcase, backpack, or child. These activities and loads change the way people walk. How? For example, cellphone users seem to walk with the arm carrying the phone bent more at the elbow (in anticipation of, or during actual use of, the phone) and swinging less than "normal" unencumbered gaits predict. Carried loads or pulled luggage change the forces, torques, and inertia on the body. Building a physics-based controller seems reasonable, but perhaps motion capture can help elucidate more procedurally-based kinematic approaches.

Agent Conversation Simulator
We wrote a paper earlier this year on dyadic (two-person) conversation simulation: L. Sun, A. Shoulson, P. Huang, N. Nelson, W. Qin, A. Nenkova and N. Badler. "Animating synthetic dyadic conversations with variations based on context and agent attributes." Computer Animation and Virtual Worlds J., 2012. Re-implement this paper's contents in Unity and show it working in the Marketplace and Cocktail Party scenarios. Extend the work to simulations of more than two people. That leads to more eye movement and gesture synchronization topics (e.g., E. Gu, S. P. Lee, J. Badler and N. Badler. "Eye movements, saccades, and multiparty conversations." In Data-Driven Facial Animation, Z. Deng and U. Neumann (eds). Springer-Verlag, London, 2008, pp. 79-97).

Body Self-Intersection and Deformation Response
Design and implement an efficient character self-intersection and response system. This might be done by monitoring numerous inter-point distances on the body surfaces (perhaps on the GPU) and noting ones that are nearly zero. Interestingly, a zero distance does not mean a rigid hard stop: body segments are deformable and some compressive "interpenetration" is allowable. The amount of compression depends on tissue properties and the underlying skeletal structure. We have done some preliminary work on how to empirically estimate local (real) tissue deformation properties, but much more is needed.

Development of an Animation Blocking Tool using Kinect Body Motion Input


Development of a Maya plugin allowing reference video and motion data obtained from the Kinect system to be used to block out the animation for a character in a Maya scene. Facial animation will be created from the input reference video using image processing techniques. The animation blocking tool should also allow the animator to set desired key frames and add additional layers to create stylized movements.

Physically-based dance controllers
The goal of this project would be to synthesize physically-based ballet motions such as those described in The Physics of Dance. This work aims to test the hypothesis that existing techniques, which are based on a simple mechanical joint model, can work for this class of motions.

Relevant related work:
A. Safonova, J. K. Hodgins, and N. S. Pollard. "Synthesizing Physically Realistic Human Motion in Low-Dimensional, Behavior-Specific Spaces," ACM Transactions on Graphics 23(3), SIGGRAPH 2004 Proceedings.
Jessica K. Hodgins, Wayne L. Wooten, David C. Brogan, and James F. O'Brien. "Animating human athletics." In Proceedings of SIGGRAPH 95, pages 71-78, August 1995.
KangKang Yin, Stelian Coros, Philippe Beaudoin, Michiel van de Panne. "Continuation Methods for Adapting Simulated Skills," ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2008).

Micro-expressions and micro-gesture controllers
Investigate how to design and implement a two- (or more) layer character micro-expression and/or micro-gesture controller. The technical challenge is the development of visually animated agents whose actions, and the way those actions are performed, mirror complex internal cognitive states. For example, gestures can be superficially related to speech prosody, but the rate and extent of the gestures may be heavily influenced by cultural norms, by internal emotional conflicts in the performer between her attitudes and those that she feels allowed to express, or by situational influences triggered by the subject's or user's presence that engender fear, awe, mistrust, nervousness, or harm. Creating the mappings between such cognitive states and an action performance is only poorly studied, but two requirements are essential to achieving this goal: (1) the underlying gesture performance must be parameterized so that cognitive states can map to parametric action/gesture/facial expression modifications, and (2) the model must include a capability of simultaneously portraying both a surface action and a subconscious one, so that conflicted states may be subtly but effectively portrayed. The mechanism proposed for (2) is a combination of gesture and facial micro-expressions. One layer of gesture or facial expression comes from the agent's unconscious body motion patterns, but the second layer arises from her conscious attempt to mask her subconscious state: that state nonetheless flashes through or overlays the unconscious motions periodically and uncontrollably. Micro-expression and micro-gesture animations are novel, as is creating these parameters from internal agent cognitive states based on the situational context.

Secondary motions for game characters
The goal of this project would be to develop secondary effects for game characters, such as hair, capes, skirts, tails, and ears. This project would involve the development of several characters (modeling + skinning) and then programming deformable accessories for these characters that work in Unity.

Generating Synthetic Animation Data from a Virtual Motion Capture Studio
Using the open-source physically-driven walking system Cartwheel-3D (or another equivalent system): (1) add the functionality for recording motion clips and exporting them as FBX files for use in Unity, and (2) build an external tool that would let a user tweak the generated motion curves to express emotions such as "angry", "calm", "excited", or "tired". This would create a virtual motion capture studio capable of animating a wide range of different character body types through highly customizable motions such as walking, reaching for objects, carrying heavy crates, and responding to external forces. Such a tool could prove invaluable to smaller research groups or development teams without access to, or experience using, motion capture, Maya, or services such as Mixamo.

Personality-Driven Character Animation
A major focus of crowds research is the ability to simulate large-scale heterogeneous crowds with groups of interacting characters exhibiting different behavioral patterns. One of the factors contributing to the variation in behavior is the personality of the individual. Research has shown that personality manifests itself in steering behaviors as well as in body motion. The purpose of this project is to create crowds of virtual characters that express their personalities through their body motions. We first plan to derive a mapping between OCEAN personality factors (the Five Factor Model, FFM) and Laban Movement Analysis (LMA) parameters, based on motion capture experiments and user studies. LMA is a movement analysis technique for systematically evaluating human motion; its formal description of movement parameters facilitates the effective classification and formulation of human motion characteristics. Thus, based on our findings from the experiments, we intend to use LMA as a medium between personality types and the animation of body movements. The experiments are two-fold. First, we capture the motion of trained specialists performing simple everyday actions, such as waving a hand, and then analyze the data. Second, we perform user studies to find a mapping between LMA parameters and OCEAN traits, and thus evaluate our findings from the motion capture experiments. In order to apply our findings to the simulation of heterogeneity in crowds, we need to create appropriate scenarios and expressive human characters that effectively demonstrate variation in behavior.
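To make the intended personality-to-movement mapping concrete, the sketch below shows one possible shape for it: OCEAN trait scores in, Laban Effort qualities out. The linear form and every coefficient are placeholders to be replaced by values fitted from the proposed motion capture experiments and user studies.

```csharp
using System;

// Sketch of a personality-to-movement mapping: OCEAN trait scores in, Laban Effort
// qualities out. The linear form and all coefficients are illustrative placeholders;
// the proposed experiments and user studies would supply the real mapping.
public static class PersonalityToLaban
{
    // OCEAN traits and LMA Effort factors, each normalized to [-1, 1].
    public struct Ocean { public float O, C, E, A, N; }
    public struct Effort { public float Space, Weight, Time, Flow; }

    public static Effort Map(Ocean p)
    {
        return new Effort
        {
            // e.g., an extraverted character might move more directly and quickly;
            // these weights are stand-ins for coefficients learned from data.
            Space  = 0.6f * p.E - 0.2f * p.N,
            Weight = 0.5f * p.C + 0.3f * p.E,
            Time   = 0.7f * p.E - 0.3f * p.C,
            Flow   = 0.4f * p.O - 0.5f * p.N
        };
    }

    public static void Main()
    {
        var effort = Map(new Ocean { O = 0.2f, C = 0.1f, E = 0.8f, A = 0.4f, N = -0.3f });
        Console.WriteLine($"Space={effort.Space:F2} Weight={effort.Weight:F2} Time={effort.Time:F2} Flow={effort.Flow:F2}");
    }
}
```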


BEHAVIOR AUTHORING / MACHINE LEARNING

Analysis and Evaluation of Team-Based Sports / Power in the Hands of the Spectator / A Real-Time Augmented View of Team-Based Sports
Team-based sports are fascinating due to the intricate behaviors of each individual as well as the coordinated strategies that go into each play. Due to the intensity and speed of the action, it is often difficult, if not impossible, to catch everything that happens on screen. In addition, much of what goes on in the game simply cannot be captured by the camera lens (e.g., off-the-ball plays, coordinated moves, etc.). In this project, we propose a method to statistically analyse and evaluate team-based sport simulations. A rough outline of the proposed tasks is as follows:

1) Tracking data from actual games. We can consider doing tracking from video of soccer games, or possibly take recorded data from a game of FIFA / FIFA Manager, where we would have much more control over the camera. If we can directly get the player action traces from the game itself, that would be even better. (Possible software to use for tracking: SynthEyes [3].)

2) Computing primitive and derived metrics from the raw data. We can apply a large variety of operators, ranging from derivatives and integrals to statistical moments, to capture more insightful information about the simulation. Metrics can be computed instantaneously, over a window, or over the entire simulation. Examples of metrics for a single agent include total distance traveled by a player and min/max/average/instantaneous speed. It will be more interesting to also compute metrics for multiple agents, which will allow us to capture interactions between players on the field. We can leverage and build on top of prior work that we have done [1] for this task.

3) Specifying and automatically detecting behaviors between players. We can use rules or sketches to specify behaviors as conditions on these computed metrics. Refer to [1] for more details.

4) Offline analysis. The overall metrics computed can serve as measures of the performance of individual players as well as of the whole team.

Applications:
(1) This framework can be used to augment and personalize the viewing experience of spectators. Spectators can use the specification interface to specify behaviors of interest and have them automatically detected and highlighted on screen.
(2) Behavior detection can also be used as a powerful tool to control the camera to track useful plays.
(3) It can also serve as an invaluable tool for managers to analyse their strategies in real time as they happen on the field, and provide insight into the strategies used by the opposing team.

References:
[1] SteerBench
[2] SteerBug
[3] SynthEyes: software for tracking (http://www.ssontech.com/)
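As a starting point for task (2), the sketch below computes a few primitive per-player metrics (total distance, maximum and average speed) from a position trace. The trace format, sample rate, and example data are assumptions, not a prescribed interface.

```csharp
using System;
using System.Collections.Generic;

// Sketch of task (2): turning raw tracking data into primitive metrics.
// The trace format (one 2D field position per frame) is an assumption.
public static class SportsMetrics
{
    public static (double totalDistance, double maxSpeed, double avgSpeed)
        Summarize(IReadOnlyList<(double x, double y)> trace, double framesPerSecond)
    {
        double total = 0, maxSpeed = 0;
        for (int i = 1; i < trace.Count; i++)
        {
            double dx = trace[i].x - trace[i - 1].x;
            double dy = trace[i].y - trace[i - 1].y;
            double step = Math.Sqrt(dx * dx + dy * dy);   // distance covered this frame
            total += step;
            maxSpeed = Math.Max(maxSpeed, step * framesPerSecond);
        }
        double duration = (trace.Count - 1) / framesPerSecond;
        return (total, maxSpeed, duration > 0 ? total / duration : 0);
    }

    public static void Main()
    {
        var run = new List<(double, double)> { (0, 0), (0, 1), (0, 3), (1, 3) };
        var (dist, vMax, vAvg) = Summarize(run, framesPerSecond: 25);
        Console.WriteLine($"distance={dist:F1} m, max={vMax:F1} m/s, avg={vAvg:F1} m/s");
    }
}
```

Derived metrics (derivatives, windowed statistics, multi-player relations) would be built on top of primitives like these.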


Learning Behavior Models for Sports
The primitive metrics we compute in the above project provide ground truth for what is going on on the field. The derived metrics are interesting features that capture more nuanced behaviors for a single player as well as for multiple players. In this project we propose the use of machine learning to generate models that capture the behaviors of individual players as well as of the whole team. A learned machine (e.g., decision networks, R-trees, neural networks) would essentially provide a mapping from the current state of a player to the action that it performs. Models can be learned for each player individually, as well as for groups of players to capture coordinated behaviors. Artificial Contender [1] is commercial software that uses some form of decision trees to train AI from the controller input of humans playing a soccer game.

Applications:
(1) A tool for creating heterogeneous AI for team-based sports
(2) Emulation of the behavior of specific players and teams (find out Barcelona's secrets)

Proposed machine learning tool: R+ trees

References:
[1] Artificial Contender (http://www.trusoft.com/)

Authoring "Excitement" in NPC Behavior
A major aspect of authoring NPC behavior is ensuring that the resulting interaction between the player and the NPC is challenging and exciting. The AI should not be "optimal", which would mean that it cannot be defeated. At the same time, it should be challenging enough to ensure that games and interactions are engaging to the player or spectator. In this project, we will explore ways of authoring complex interactions between multiple NPCs and human players in order to generate simulations that have a high degree of excitement. Our target application can be a car chase through a busy city simulation.

Issues / Challenges:
(1) Quantifying excitement
(2) Predicting human player behavior
(3) Modeling competition and cooperation between NPCs and the human player

Behavior Capture using the Kinect
The Kinect is a powerful new tool that facilitates full-body tracking in a real-time system. Open-source libraries further facilitate independent research and development with the Kinect. We already have a working system that uses Kinect input to control a fully articulated virtual character in Unity, and prior work has used full-body tracking to kino-dynamically control virtual avatars in dynamic virtual worlds. In this project, we propose using the Kinect data as a space in which to learn "behaviors" in human motion. We will attempt to learn controllers for frequently occurring actions (joint angle motions), combinations of actions in different body parts (synchronizing simultaneous actions), and a decision network that determines what action to choose in the given state of the character.

Issues / Challenges:
(1) Learning in the multi-dimensional, continuous space of joint angles of a fully articulated character
(2) Defining states and actions in this space
(3) Identifying similarity of actions
(4) Choice of machine to learn the state -> action mapping (decision trees, R-trees, etc.)
(5) Extension: learning goal-driven controllers: {state, goal} -> action

Target Applications:
(1) Single character, not goal driven: (a) simple navigation in difficult terrain (uneven terrain with pits and different types of obstacles requiring dexterous full-body movement); (b) dynamic obstacle avoidance: dodge ball; (c) dancing: what constitutes a single step? how to choose a sequence of steps?
(2) Multiple characters, not goal driven: (a) synchronized motion between multiple characters
(3) Multiple characters, goal driven: (a) martial arts

Notes:
(1) Calibration phase: learning the state space / limits of the joints (min, max) as well as the actions (transitions in joint space)
(2) Consider abstracting an action as the target pose of an end effector.

Event Recognition using Behavior Trees as an Action Lexicon
A useful function in an interactive system, such as a VR experience or an immersive game, would be a recognizer that "understands" when a particular event is happening which is not itself being created by the program. This can occur in a couple of ways. In remote surveillance, cameras observe things in some area and should feed information to a situation analyzer or event recognizer that can say that such-and-such an event is happening (or is happening with some probability). The second situation is when a live game player or players act in an environment to do something: it would be useful for the game to perceive that some event (attack, destruction, illegal gathering, etc.) is being perpetrated. We speculate that the Behavior Trees we use for event and action execution can be "inverted" and used (with a suitable indexing structure) as action or event recognizers. This would unify some aspects of game design, but in any case would make for more powerful linkages between simulation and recognition. In fact, there are additional aspects of this that relate recognition to simulation via action envisionment. Explore these ideas and create an event recognition system inside a game.

Building a Character "Bible"
Locate some samples of character bibles as written for plays, screenplays, movies, etc. They are apparently hard to find, though they do exist. E.g., see http://www.screenwritinggoldmine.com/forum/showthread.php?t=2029&highlight=bible


We want to see if we can use these documents to seed the computational agent models we build for narrative simulations. The idea is to use textual materials to establish a "backstory" for a character, just as a real actor would study in preparation for a role, but our actors are virtual. We'd like to see what language is used in these documents: Ani Nenkova would examine the language used and Norm Badler would transform the content into the agent model.

Parameterized Action Representation Memory Model in Behavior Tree System
Build a question-answering system based on Behavior Trees.

1. Learn what Behavior Trees are about.
2. Design and implement a "memory model" [MM] for agents that accumulates a suitable trace of the actions and events they participate in.
3. Create a (simple) question-answering interface where the input will be simple w-questions (mostly) [see below]; these are turned into queries that run over the MM and return values packed into preformed sentences. E.g.:
   ● Who? (return the agent participant set)
   ● What? (return the object or action set)
   ● Why? (purpose)
   ● Where? (object location)
   ● How? (BT action chain to/toward this state)
   ● When? (got/will get to a state on the timeline)
   ● Which? (of a set of objects/actions/states/processes)
   ● Can you? (are the BT applicability conditions true?)
   ● Did you? (were the BT terminations satisfied?)
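A minimal sketch of what the memory model and one or two of these queries might look like follows; the record fields, class names, and query signatures are illustrative, not a committed design.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Minimal sketch of the "memory model" [MM] idea: each executed behavior-tree action
// appends a record, and w-questions become queries over those records.
public sealed class MemoryRecord
{
    public string Agent;    // who performed the action
    public string Action;   // what was done
    public string Target;   // object or other agent acted upon
    public string Location; // where it happened
    public double Time;     // simulation time (when)
}

public sealed class MemoryModel
{
    private readonly List<MemoryRecord> trace = new List<MemoryRecord>();

    public void Record(MemoryRecord r) => trace.Add(r);

    // "Who <action>?" -> the set of agents that performed that action.
    public IEnumerable<string> Who(string action) =>
        trace.Where(r => r.Action == action).Select(r => r.Agent).Distinct();

    // "When did <agent> <action>?" -> the times at which it happened.
    public IEnumerable<double> When(string agent, string action) =>
        trace.Where(r => r.Agent == agent && r.Action == action).Select(r => r.Time);
}

public static class MemoryDemo
{
    public static void Main()
    {
        var mm = new MemoryModel();
        mm.Record(new MemoryRecord { Agent = "Alice", Action = "open", Target = "door", Location = "parlor", Time = 3.2 });
        mm.Record(new MemoryRecord { Agent = "Bob", Action = "open", Target = "window", Location = "ballroom", Time = 7.9 });
        Console.WriteLine("Who opened something? " + string.Join(", ", mm.Who("open")));
    }
}
```

Answers would then be packed into preformed sentences as described in step 3.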


SOUND PROPAGATION

Texture-based Sound Propagation on the GPU
Sound propagation is a crucial component in many fields, including virtual reality, virtual humans, crowd simulation, digital games, and movies. Not only gamers and computer users, but also virtual agents, should experience and react to multi-modal signals and communication with others and with the ambient environment. We aim to build an efficient, effective, and flexible sound propagation model for real-time crowd simulation, which simulates how sounds propagate, attenuate, degrade, and change in the environment. This model will not only improve auralization of the system for users, but also provide the agents with the altered sound signals they would receive at their locations. Current popular models include numerical acoustics and geometric acoustics. We propose a cellular-automata acoustics method, which simulates how sound packets evolve in the virtual scene. We propose to pre-compute various basic and simple cases of sound propagation in virtual scenes, including scenarios such as no obstacles, a single obstacle at different locations, etc. For a virtual world with numerous obstacles and various sound sources, we propose a texture-based propagation model which synthesizes the final propagation result by combining a number of texture overlays, one per obstacle or sound source. Since the Graphics Processing Unit (GPU) has dedicated parallel resources for texture-based computational problems, we propose to deploy the algorithms on this device to implement the sound propagation model.
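As a toy illustration of the cellular-automata idea (not the proposed model itself), the sketch below diffuses a scalar intensity over a 2D grid with a fixed attenuation factor and lets obstacle cells absorb it. Grid size, the attenuation value, and the "max of neighbors" update rule are all simplifications; on the GPU each cell update would run in parallel as a texture operation.

```csharp
using System;

// Toy cell-automata propagation: each step, every open cell takes the attenuated
// maximum of its neighbors' intensities. Obstacle cells absorb the sound packet.
public static class SoundGrid
{
    public static float[,] Step(float[,] intensity, bool[,] blocked, float attenuation)
    {
        int w = intensity.GetLength(0), h = intensity.GetLength(1);
        var next = new float[w, h];
        for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
        {
            if (blocked[x, y]) continue;               // obstacles absorb
            float loudestNeighbor = 0;
            if (x > 0)     loudestNeighbor = Math.Max(loudestNeighbor, intensity[x - 1, y]);
            if (x < w - 1) loudestNeighbor = Math.Max(loudestNeighbor, intensity[x + 1, y]);
            if (y > 0)     loudestNeighbor = Math.Max(loudestNeighbor, intensity[x, y - 1]);
            if (y < h - 1) loudestNeighbor = Math.Max(loudestNeighbor, intensity[x, y + 1]);
            next[x, y] = Math.Max(intensity[x, y], attenuation * loudestNeighbor);
        }
        return next;
    }

    public static void Main()
    {
        var field = new float[16, 16];
        var walls = new bool[16, 16];
        field[8, 8] = 1f;                              // a sound source in the middle of the room
        for (int i = 0; i < 10; i++) field = Step(field, walls, attenuation: 0.8f);
        Console.WriteLine($"intensity heard at (12, 8): {field[12, 8]:F3}");
    }
}
```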


GAMES

Development of Sword Fighting AI Using Game Tree Search and Analysis
Development of AI techniques based on game-tree search allowing two intelligent agent characters to interactively sword fight using the principles of fencing. The work will involve development of real-time game-tree search algorithms, creation of the offensive and defensive sword swings and body movements using motion capture techniques, and development of a sword fighting game application using the Unity3D engine.

Development of a Virtual Reality Racquet Game Using Kinect Body Tracking and a Virtual Locomotion Control (VLC) User Interface
Utilizing the newly released Kinect PC SDK, a virtual racquet game such as tennis, squash, or racquetball will be developed using the Unity3D game engine and a Virtual Locomotion Control (VLC) user interface. The VLC user interface consists of smart shoes containing embedded force-sensitive resistors, ultrasonic sensors, and inertial sensors that allow a user's body movements in the real world, such as stepping, walking, and running in place, to be mapped to the corresponding translational movements of the player's avatar in the virtual world.

Development of adaptive games based on bio-sensor feedback (EEG, EKG, respiration, etc.)
This project consists of two parts. In the first part, we would use bio-sensors to study how people react to games (for example, using an existing moddable game such as Unreal Tournament or Neverwinter Nights; what else?). In the second part, we would try to use the bio-feedback to manipulate the gameplay.

Development of games and novel UI interfaces based on the eye tracker for mobile devices
The goal of this project is to create a compelling demo for an Android tablet based on an eye tracker worn on the user's head. This project would involve estimating the head position and angle relative to the tablet in order to create either new game experiences (such as the illusion of 3D) and/or novel UI interfaces (such as for reading text and maps). This project could easily span two semesters (in which the student would create more sophisticated demos, help perform user studies and analysis, and contribute to publications).
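For the sword-fighting project, the core of the game-tree search is an ordinary depth-limited minimax. The sketch below uses a made-up two-move fencing model and a trivial evaluation function purely to show the search skeleton; a real implementation would substitute fencing actions, timing constraints, and a tuned evaluator, and would likely add alpha-beta pruning.

```csharp
using System;
using System.Collections.Generic;

// Tiny depth-limited minimax over a made-up fencing model. State, moves, and the
// evaluation function are stand-ins used only to show the search structure.
public static class SwordMinimax
{
    public sealed class State
    {
        public int MyHealth = 10, FoeHealth = 10;
        public State Clone() => new State { MyHealth = MyHealth, FoeHealth = FoeHealth };
    }

    // Illustrative move set: a lunge trades safety for damage, a parry recovers a point.
    static IEnumerable<Func<State, bool, State>> Moves()
    {
        yield return (s, myTurn) =>          // lunge
        {
            var n = s.Clone();
            if (myTurn) { n.FoeHealth -= 2; n.MyHealth -= 1; }
            else        { n.MyHealth -= 2; n.FoeHealth -= 1; }
            return n;
        };
        yield return (s, myTurn) =>          // parry / recover
        {
            var n = s.Clone();
            if (myTurn) n.MyHealth += 1; else n.FoeHealth += 1;
            return n;
        };
    }

    static int Evaluate(State s) => s.MyHealth - s.FoeHealth;

    public static int Search(State s, int depth, bool myTurn)
    {
        if (depth == 0 || s.MyHealth <= 0 || s.FoeHealth <= 0) return Evaluate(s);
        int best = myTurn ? int.MinValue : int.MaxValue;
        foreach (var move in Moves())
        {
            int v = Search(move(s, myTurn), depth - 1, !myTurn);
            best = myTurn ? Math.Max(best, v) : Math.Min(best, v);
        }
        return best;
    }

    public static void Main()
    {
        Console.WriteLine("value of the opening position: " + Search(new State(), depth: 4, myTurn: true));
    }
}
```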


RENDERING

Specular Reflections as Light Sources
Ever sit in a room with a window that faced a street and watch the moving light patterns on the inside walls? No? Well, try it. The light is reflected from specular (moving) surfaces such as windows (primarily) and shiny metal on the cars. These light sources are interesting because they are generally white (sunlight) and defined by the primary reflecting surface as well as the window frame they pass through. So an implementation of this effect seems to bear some relationship to shadow volumes (though these are light volumes defined by the window), where the sources are both geometrically defined and moving. Can you produce an efficient rendering algorithm for these? Can you embed it in a Z-buffer (and make it real-time!), or do you need a stronger (but much slower) algorithm such as a photon mapper?

Semi-Automatic Semantic Tagging of 3D Models
Building 3D models for visualization provides only part of the information needed to utilize them in an interactive environment or a game. They must also have semantic tags: labels for a named part, an explicit function, or a potential use some part of an object can have. For example, a shape "door" can have states (open, closed, or an amount open) and can open or block locomotion paths (be a portal between spaces if sufficiently open, or not, if closed). Windows might be assigned glass shaders, but we need to know that they can be opened or closed (or shuttered), too. Space has uses, things have roles (e.g., tools), and objects can be used to create other objects. The trick here is: how much of this labeling can be automatically generated from the 3D model? It may be useful to have labels annotated by the designer directly in Maya, for example, but there needs to be some "standard" and a dictionary for these labels. It may be best to write "pathway" recognizers that automatically find the walkable surfaces and create a path graph for animated agents. Labels of particular importance are therefore doors, windows, paths, conveyances (vehicles, elevators), stairs, chairs, benches, tables, curtains, etc.

Fabrication of toys and puzzles from 3D Models
There is a new trend towards using graphics techniques for automatic fabrication. With this project, the student would make viable 3D-printable toys and puzzles from mesh models not specifically designed to be printed. The project could have two separate focuses: (1) automatically modifying the geometry (e.g., adding joints, or segmenting into puzzle pieces) for printing, and/or (2) verifying that the model is viable through physical simulation.

References:
http://www.baecher.info/fab_char_sig12.html
http://hpcg.purdue.edu/?page=publication&id=164

And here are a few others that don't use 3D printing per se, but that are based on graphics and simulation for DIY:
http://www.geocities.jp/igarashi_lab/beady/index-e.html
http://www3.ntu.edu.sg/home/cwfu/papers/burrpuzzle/
http://cg.cs.tsinghua.edu.cn/people/~xianying/Papers/V-Popup/index.html


HUMAN COMPUTER INTERACTION

Multi-Modal User Interface for "Robot" Control
While we rarely run robots in the SIG Center, we have research projects that require us to develop multi-modal communication interfaces between humans and robots: think teammates or helpers. There are a variety of aspects to this project, so there's some freedom to develop aspects of interest:

1. Use an Android smartphone to control a simulated robot. The main activities are directional control (using the compass) and hand signal controls (using the inertial sensor). Speech and/or a screen tap should also be used as the equivalent of a "mouse click". A limited speech vocabulary could also be implemented to give specific commands ("take picture", faster, slower, stop!, etc.). Initial work performed by Lauren Frazier in her 2012 Master's thesis showed that mapping gestures to low-level directional commands alone did not provide intuitive robot control. To offset these limitations, we will define a new set of high-level commands, implemented using behavior trees, which will map to gestures, speech, and other UI input.

2. Find or create real-time affect (emotion) recognizers for speech or gesture. These could distinguish speech rate, pitch, and amplitude to determine urgency, stress, boredom, etc. There is some work in this area but we need to have working online implementations. [Joint work with Dr. Ani Nenkova]

Wearable Computer Based on Mobile Phone Platform
Development of a wearable computer consisting of a head-mounted display, a single-hand keyboard, and a mobile computing/phone platform utilizing the Android operating system. The head-mounted display will be in the form of lightweight LCD glasses with a built-in camera and will be capable of providing see-through, augmented reality, and immersive virtual reality experiences. The single-hand keyboard will be based on Keyboard Alternative Technology (KAT), which allows the 26 letters of the alphabet plus a backspace and carriage return to be entered using sequential movements of four multi-position keys. A novel mobile application will be developed for the wearable computer using the Unity3D game engine.

Gesture Recognition for Mobile Phone Applications Utilizing Inertial Sensor Inputs
Development of a gesture recognition application for mobile phones based on Hidden Markov Models (HMMs) and input data obtained from the phone's inertial sensors (i.e., accelerometers, gyros, and magnetometers). A Kalman filter will also be developed to represent the inertial orientation of the phone in the form of quaternions and Euler angles (i.e., roll, pitch, and yaw). The Unity3D game engine will be used to develop a hand signal gesture recognition application.
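Since the gesture-recognition project represents the filtered orientation as both quaternions and Euler angles, the standard conversion is sketched below. Roll/pitch/yaw conventions differ between libraries; this sketch assumes a unit quaternion and the common Z-Y-X (yaw-pitch-roll) convention.

```csharp
using System;

// Standard quaternion-to-Euler conversion (roll, pitch, yaw) for a unit quaternion,
// using the aerospace Z-Y-X convention. Other libraries may use different conventions.
public static class Orientation
{
    public static (double roll, double pitch, double yaw) ToEuler(double w, double x, double y, double z)
    {
        double roll = Math.Atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y));

        double sinPitch = 2 * (w * y - z * x);
        double pitch = Math.Abs(sinPitch) >= 1
            ? Math.PI / 2 * Math.Sign(sinPitch)   // clamp at the gimbal-lock singularity
            : Math.Asin(sinPitch);

        double yaw = Math.Atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z));
        return (roll, pitch, yaw);
    }

    public static void Main()
    {
        // 90-degree rotation about Z (pure yaw): q = (cos 45°, 0, 0, sin 45°).
        var (roll, pitch, yaw) = ToEuler(Math.Sqrt(0.5), 0, 0, Math.Sqrt(0.5));
        Console.WriteLine($"roll={roll:F2} pitch={pitch:F2} yaw={yaw:F2} rad");
    }
}
```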

SIMULATION / OPTIMIZATION

Evolutionary Creatures


The goal of this project would be to implement Karl Sims' seminal paper on evolutionary creatures. This work aims to gain insights into objective-based character animation, as well as to study how the morphology and movement of creatures inter-relate.
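The computational core of the Sims approach is a simple evolutionary loop: evaluate, select, mutate, repeat. The sketch below shows that loop with a placeholder genome and fitness function; in the actual project, the fitness would come from physically simulating each creature's locomotion, and the genome would encode morphology and controllers rather than a bare parameter vector.

```csharp
using System;
using System.Linq;

// Skeleton of the evolutionary loop behind Sims-style creatures. The "genome" is just
// an array of parameters and the fitness is a stand-in; a real fitness would run a
// physics simulation of the creature and measure, e.g., distance travelled.
public static class EvolveCreatures
{
    static readonly Random Rng = new Random(42);

    static double Fitness(double[] genome)
        // Placeholder objective: pretend performance peaks when every parameter is near 0.5.
        => -genome.Sum(g => (g - 0.5) * (g - 0.5));

    static double[] Mutate(double[] parent) =>
        parent.Select(g => Math.Clamp(g + (Rng.NextDouble() - 0.5) * 0.1, 0, 1)).ToArray();

    public static void Main()
    {
        const int populationSize = 20, survivors = 5, genomeLength = 8, generations = 50;
        var population = Enumerable.Range(0, populationSize)
            .Select(_ => Enumerable.Range(0, genomeLength).Select(__ => Rng.NextDouble()).ToArray())
            .ToArray();

        for (int gen = 0; gen < generations; gen++)
        {
            var ranked = population.OrderByDescending(Fitness).ToArray();
            population = Enumerable.Range(0, populationSize)
                .Select(i => Mutate(ranked[i % survivors]))   // elitist refill with mutation
                .ToArray();
        }
        Console.WriteLine($"best fitness after {generations} generations: {population.Max(Fitness):F4}");
    }
}
```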


Nursing School

This document lists potential healthcare technology projects for Engineering, Computer Science, and Wharton students. The concepts come from nursing students who want to develop their ideas. Multidisciplinary teams will be formed, and project development will be supported through the NU599 Innovation & Technology for Healthcare Part I and Part II courses. All students will need to register for these courses to work on the following projects. Contact: Nancy Hanrahan, PhD, 215-514-3574, [email protected]

1. Discharge planning solution to reduce readmissions for individuals with chronic diseases. Students will work with Dr. Barry Silverman, Nancy Hanrahan, and teams of clinicians at the Hospital of the University of Pennsylvania to develop a virtual society that simulates how patients flow through the hospital. Agent-based modeling technology will be used. (An illustrative sketch appears at the end of this section.)
2. Develop technology to quantify epidermal and mucosal tissue injury, to improve forensic evidence across different skin colors.
3. RECOVERY. Recovering from an addiction requires patients to cope with slips and relapses. A game of Chutes & Ladders using the 12 steps as the conceptual framework can teach patients about recovery, coping mechanisms, challenging tasks, triggers, and stages of change. The game idea was formulated by a group of nursing students interning at a substance use facility. The staff at the facility are keen to develop the game further and test its effectiveness with patients.
4. iHEAL. Inpatient psychiatric settings are an opportunity to support patient decisions to improve the quality of their lives. Inpatient stays are short (5.6 days on average) and patients are highly stressed during hospitalization. iHEAL is a web-based, interactive education tool covering four basic functional areas: medication, nutrition, hygiene, and social interactions/decision making. The goal is to help clients maintain or improve quality of life through a variety of levels in a social modeling game. The game is built to transcend the boundaries of the hospital by allowing patients access after discharge. The player builds an avatar and moves through levels of social models and challenges. The game was conceived by nursing students Liz, Zuzana, Sarah, Jen, Dan, Alex, Mojo, and Bri. The staff at the inpatient setting want to develop the game further and use it to help patients.
5. BREATHE: an innovative solution for reducing restraints on inpatient psychiatric units. Seclusion and restraint are used to help patients gain control of anxiety, aggression, or other behavioral disturbances while they are inpatients. Alternatives were proposed by a group of nursing students interning on a psychiatric unit: transform what is now called a “quiet” room into a “sanctuary” that uses advanced biofeedback technology to help patients calm their internal distress. Proposed by Carly Rubin, Caroline Pavloff, Kelly Murphy, Wesley Stratton, Farrell Weiss, Melissa Rosenberg, Allison DeAngelis, and Sarah Daigle.
6. StatMeUp: a web-based application that lets a patient keep vital information. The application helps cancer patients receiving chemotherapy follow their lab values, medications, and other vital information, and gives them access to important educational material.
7. Mission Reintegration: this project is directed toward using social media to establish and maintain military members’ relationships as an asset for enhancing the positive effect of social support during and following deployment. The social media support will focus on education, reintegration, interpersonal relationships, and symptom recognition and management.
8. FALLS. Preventing falls in hospitalized patients who are at risk. Falls are a leading cause of functional decline in the elderly population. While in the hospital, the risk of falling increases dramatically due to medications, frailty, and the fact that the environment is unfamiliar to the individual. Nursing students believe that preventing falls would decrease the burden of illness and save countless health care dollars and much suffering.
9. Electronic Health Record Challenge: Health information technology (HIT) and electronic health records (EHRs) hold great promise for improving health outcomes and coordination of care for people with disabilities. However, the accessibility and usability of HIT are a matter of serious concern to people with diverse disabilities, including vision, hearing, intellectual, manual dexterity, mental health, developmental, and other types of disabilities. This population cannot afford to miss out on the many benefits of access to the health information stored in EHRs simply because existing tools do not meet its needs. Building an accessible system from the ground up can be more cost-effective than retrofitting current systems for this large group and can prevent future interoperability issues. In addition, innovation in this area can help older individuals whose abilities change with age and can inspire usability improvements for all consumers.
10. Patient-Centered Health Record Access: As patients, we have astonishingly little access to our own health data in a format that is easy to understand. That is why the Office of the National Coordinator for Health Information Technology launched the SMART-Indivo App Challenge, which tasks developers with building an Indivo application that provides value to patients using data distributed through the SMART API and its Indivo-specific extensions. Indivo is the original personal health platform, enabling an individual to own and manage a complete, secure, digital copy of her health and wellness information. The app will be a web or mobile app that runs against the Indivo Developer Sandbox, where it can access patient demographics, medications, laboratory tests, and diagnoses using web standards. Developers could, for example, build a medication manager, a health risk detector, a patient-friendly laboratory visualization tool, or an app that integrates external data sources (see http://www.healthdata.gov/) with patient records in real time. Such an app would allow easier and more efficient access to and management of patient information. (An illustrative sketch appears at the end of this section.)
11. Patient Safety Report: This challenge addresses a stark reality: hospitals struggle to increase internal incident reporting, in large part because care providers are so busy. Hospital workers strive daily to create effective systems for the quality and risk management staff to complete root cause analyses and follow-ups, which are required by both the Centers for Medicare & Medicaid Services and the Joint Commission. However, their efforts are not always effective.
12. MyAir: How do we connect personal devices for testing and reporting of both air quality and linked physiological data? Such a system would enable not only high-resolution mapping of pollutant concentrations, but also research on and reporting of individual physiological responses to those pollutants. The U.S. Environmental Protection Agency (EPA) and the U.S. Department of Health and Human Services (HHS) [National Institute of Environmental Health Sciences (NIEHS) and Office of the National Coordinator for Health Information Technology (ONC)] envision a future in which powerful, affordable, and portable sensors provide a rich awareness of environmental quality, moment-to-moment physiological changes, and long-term health outcomes. Health care will be connected to the whole environment, improving diagnosis, treatment, and prevention at all levels. (An illustrative sketch appears at the end of this section.)
13. Sleep Apnea Devices: Currently, machines that blow air into the nostrils during the night are used to prevent apnea spells. Apnea occurs when an individual stops breathing during the night; it is associated with heart disease, diabetes, hypertension, and mortality. The challenge is to build a device that streamlines and improves the efficiency of current CPAP machines.
14. Ophthalmology Examination Data Storage: Documenting the typical ophthalmology examination in an electronic health record (EHR) continues to be challenging, which creates barriers to full acceptance and use of EHRs within the medical community. Ophthalmologists frequently perform office-based studies using ancillary measurement and imaging devices. Outputs from these devices include graphical displays of measurement data (e.g., visual field testing), numerical data (e.g., autorefraction, biometry), and image data (e.g., retinal photography, optical coherence tomography). However, these data and images are often stored on the acquisition devices in proprietary databases and file formats, and therefore have limited connectivity with EHR systems and ophthalmology-specific picture archiving and communication systems (PACS). DICOM (Digital Imaging and Communications in Medicine) standards exist for all major ophthalmology devices, but the adoption rate remains relatively low and many legacy devices do not comply with them. As a result, there are often problems with redundant entry of demographic and clinical data into devices, data transfer from devices to EHRs and PACS without proprietary interfaces, workflow challenges, and difficulty connecting systems from different vendors. (An illustrative sketch appears at the end of this section.)
15. Medication Management. Management of medication continues to be a major problem for patients and providers. Storage of medication, changes in medication, timely management of medication, refills, and similar tasks are problematic for most patients and particularly for patients who are cognitively impaired. A medication management strategy would improve health and lower the cost of care.
16. Heart Risk Challenge. In communities across America, there are thousands of convenient and inexpensive ways to learn your risk for heart-related conditions; often, all it takes is scheduling a screening with your doctor or at a pharmacy. However, nearly 15% of people at risk for cardiovascular disease (CVD) are undiagnosed and are therefore less likely to take preventive action. The Centers for Medicare & Medicaid Services and the CDC want to reach individuals across the country, taking special aim at those who may be at risk for CVD and don’t know it. They want to deploy an engaging user interface that provides consumers with a quick health risk assessment, motivates them to obtain a more accurate assessment by entering their blood pressure and cholesterol values, and directs them to nearby community pharmacies offering affordable and convenient blood pressure and cholesterol screenings. Can you build an app to these specifications? (An illustrative sketch appears at the end of this section.)

Sucking and swallowing animation model for neonates
A mobile system (NeoNur) has been designed by faculty in ESE and the School of Nursing to monitor the feeding characteristics of premature and at-risk neonates. An animation model of the sucking process is needed to begin correlating the observed pressure characteristics (associated with sucking and swallowing) with the movements of the neonate’s mouth (jaw, tongue, and cheek motions). The goal of this effort is to use the animation model as the basis for numerical fluid dynamic calculations that can both account for the pressure observations and provide diagnostic information for neonatal caregivers. (An illustrative sketch appears at the end of this section.)
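A few illustrative sketches for the more technical projects above follow; each is a rough prototype under stated assumptions, not a specification. For project 1, a minimal agent-based sketch in Python: each hypothetical patient agent is admitted, discharged, and then either remains stable or is readmitted, with a readmission probability that a discharge plan is assumed to reduce. The class names and probabilities are illustrative placeholders, not clinical estimates.

import random

# Rough agent-based sketch of hospital discharge and readmission (project 1).
# All probabilities below are illustrative placeholders, not clinical data.

class Patient:
    def __init__(self, has_discharge_plan):
        self.has_discharge_plan = has_discharge_plan
        self.state = "admitted"  # admitted -> discharged -> stable or readmitted

    def step(self):
        if self.state == "admitted":
            self.state = "discharged"
        elif self.state == "discharged":
            # Assumption: a discharge plan lowers the chance of readmission.
            p_readmit = 0.10 if self.has_discharge_plan else 0.25
            self.state = "readmitted" if random.random() < p_readmit else "stable"

def simulate(n_patients=1000, planned_fraction=0.5, steps=2):
    patients = [Patient(random.random() < planned_fraction) for _ in range(n_patients)]
    for _ in range(steps):
        for p in patients:
            p.step()
    return sum(p.state == "readmitted" for p in patients) / n_patients

if __name__ == "__main__":
    print("Simulated readmission rate:", simulate())

A full project would replace these coin flips with clinician-informed agent behaviors and real patient-flow data.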
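For project 10, a sketch of the general shape such an app could take, assuming a hypothetical REST endpoint that serves a patient's medication list as JSON. The base URL, path, and token below are invented for illustration; the actual SMART API and Indivo sandbox interfaces should be taken from their documentation.

import requests  # third-party HTTP client (pip install requests)

# Hypothetical endpoint and token; the real SMART/Indivo sandbox API will differ.
SANDBOX_BASE = "https://sandbox.example.org/records"
ACCESS_TOKEN = "demo-token"

def fetch_medications(record_id):
    """Fetch a patient's medication list as JSON from the (hypothetical) sandbox."""
    resp = requests.get(
        f"{SANDBOX_BASE}/{record_id}/medications",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def summarize(medications):
    """Turn raw JSON entries into short, patient-friendly strings."""
    return [f"{m.get('name', 'unknown')} - {m.get('dose', '?')}" for m in medications]

if __name__ == "__main__":
    for line in summarize(fetch_medications("sample-record-id")):
        print(line)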
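For project 12, a sketch of one way to link sensor streams, assuming both the air-quality sensor and the physiological device export timestamped CSV files (the file and column names here are placeholders). pandas.merge_asof pairs each heart-rate sample with the most recent pollutant reading so the two can be studied together.

import pandas as pd

# Hypothetical CSV exports; file and column names are placeholders.
air = pd.read_csv("air_quality.csv", parse_dates=["timestamp"])    # columns: timestamp, pm25
physio = pd.read_csv("heart_rate.csv", parse_dates=["timestamp"])  # columns: timestamp, bpm

# merge_asof pairs each heart-rate sample with the most recent pollutant reading.
air = air.sort_values("timestamp")
physio = physio.sort_values("timestamp")
linked = pd.merge_asof(physio, air, on="timestamp", direction="backward")

# A simple first look at how exposure and heart rate move together.
print(linked[["pm25", "bpm"]].corr())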
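For project 14, a sketch of reading metadata and image data from a standards-compliant DICOM export using the pydicom library; the file name is a placeholder, and ophthalmology-specific tags vary by device and DICOM object type.

import pydicom  # pip install pydicom

# Placeholder path to a DICOM export (e.g., a retinal photograph or an OCT slice).
ds = pydicom.dcmread("exam_0001.dcm")

# Standard identifying and study-level attributes defined by DICOM.
print("Patient:", ds.get("PatientName"))
print("Modality:", ds.get("Modality"))
print("Study date:", ds.get("StudyDate"))

# Image data, if the file carries pixels (pixel_array also requires numpy).
if "PixelData" in ds:
    print("Image shape:", ds.pixel_array.shape)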
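For project 16, a sketch of the quick-assessment step using commonly cited screening cutoffs (140/90 mmHg for blood pressure; 200 and 240 mg/dL for total cholesterol). These thresholds are illustrative only; a real app should use a validated cardiovascular risk calculator.

def quick_screen(systolic, diastolic, total_cholesterol):
    """Very rough screening flags; not a validated clinical risk score."""
    flags = []
    if systolic >= 140 or diastolic >= 90:
        flags.append("elevated blood pressure")
    if total_cholesterol >= 240:
        flags.append("high total cholesterol")
    elif total_cholesterol >= 200:
        flags.append("borderline-high total cholesterol")
    if flags:
        return "Consider a screening at a nearby pharmacy: " + ", ".join(flags)
    return "No flags from this quick check; a full screening is still worthwhile."

if __name__ == "__main__":
    print(quick_screen(systolic=148, diastolic=92, total_cholesterol=215))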
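For the NeoNur animation project, a sketch of the kind of first-pass mapping a prototype might use, assuming the device exports a sampled intraoral pressure trace and that jaw opening is animated roughly in proportion to smoothed suction strength. The sampling rate, scaling, and smoothing window are placeholders to be replaced by a physically based model.

import numpy as np

# Placeholder parameters; a real model would be fit to NeoNur recordings.
SAMPLE_RATE_HZ = 100
MAX_JAW_OPENING_MM = 8.0

def pressure_to_jaw_opening(pressure_kpa, smooth_window=11):
    """Map a sampled (negative) suction pressure trace to a jaw-opening curve."""
    p = np.asarray(pressure_kpa, dtype=float)
    suction = np.clip(-p, 0.0, None)          # keep only suction (negative pressure)
    kernel = np.ones(smooth_window) / smooth_window
    smoothed = np.convolve(suction, kernel, mode="same")
    if smoothed.max() > 0:
        smoothed = smoothed / smoothed.max()  # normalize to 0..1
    return smoothed * MAX_JAW_OPENING_MM      # per-sample jaw opening, in millimetres

if __name__ == "__main__":
    t = np.arange(0.0, 2.0, 1.0 / SAMPLE_RATE_HZ)
    demo_pressure = -3.0 * np.abs(np.sin(2 * np.pi * 1.5 * t))  # synthetic suck bursts
    jaw = pressure_to_jaw_opening(demo_pressure)
    print("Peak jaw opening (mm):", round(float(jaw.max()), 2))

The jaw-opening curve produced this way could drive a simple mouth rig as a starting point, with tongue and cheek motions added once the fluid dynamic model is in place.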