

IMAGE - Complex Situation Understanding: An Immersive Concept Development

Marielle Mokhtari¹, Eric Boivin¹, Denis Laurendeau², Sylvain Comtois², Denis Ouellet², Julien-Charles Levesque² and Etienne Ouellet²

¹Defence Research & Development Canada–Valcartier, Quebec, Canada
²Dept of Electrical & Computer Eng., Laval University, Quebec, Canada

ABSTRACT
This paper presents an immersive Human-centric/built virtual work cell for analyzing complex situations dynamically. This environment is supported by a custom open architecture and is composed of objects of a complementary nature reflecting the level of Human understanding. Furthermore, it is controlled by an intuitive 3D bimanual gestural interface using data gloves.
KEYWORDS: Immersive environment, 3D objects, interaction, visualization, animation, synchronization.

1 INTRODUCTION
Clever decisions usually result from a good understanding of a situation. Nowadays, in the military domain, achieving good understanding is a challenge when rapid operational and technological changes occur. The challenge of complex situation (CS) understanding and the objective of increasing the agility of the Canadian Forces in dealing with such situations are investigated in the IMAGE project [1]. IMAGE aims to support collaboration between experts/specialists towards a common CS understanding that can be shared among users. The IMAGE concept rests on 3 closely coupled principles:
▫ Understanding Spiral Principle: IMAGE supports an iterative and incremental understanding process. CS understanding emerges through iteratively enhanced representations of the situation: from a vague and unorganized assessment to a structured one revealing hidden or emergent knowledge;
▫ Human Control Principle: IMAGE is fully controlled by Humans (experts, decision makers … as individuals and/or teams); and
▫ Technology Synergy Principle: IMAGE tackles the CS understanding problem using the synergy between different technologies in order to favour complementarity, flexibility and agility.

The IMAGE concept is composed of 4 modules exploiting cutting-edge technologies: (1) KNOWLEDGE REPRESENTATION (REP): building and sharing a vocabulary and conceptual graphs making the understanding explicit; (2) SCENARIO SCRIPTING: transforming conceptual graphs into an executable simulation model; (3) SIMULATION (SIM): moving in the space of scenario variables and browsing through simulation runs; and (4) EXPLORATION (XPL): using visualization and interaction metaphors for investigating datasets dynamically so they become meaningful to the user.

The work presented in this paper focuses on the XPL Module. Two versions of XPL tools have been developed to date: the desktop version and the semi-immersive version [2]. This paper presents a third version, a Human-centric/built fully immersive XPL environment, proposing a range of visualization and interaction concepts/metaphors for investigating datasets dynamically. Figure 1 shows the IMAGE concept associated with (a snapshot of) the immersive working environment.

2 XPL ENVIRONMENT ARCHITECTURE
The IMAGE XPL environment can be used on a wall screen and in a CAVE, both providing active stereoscopic visualization. The architecture also extends to desktop/mobile platforms and supports collaborative work between users. It has been designed to be independent of graphics/physics rendering engines and is built as a tree in which each branch is an entity, some entities having no correspondence with the graphics/physics rendering trees. Three categories of entities have been defined (a minimal sketch follows the list):
▫ Manipulator: represents a hardware device (data glove, wand, ...) used by the Human to interact with the Objects in the environment. Several Manipulators of the same type can coexist in IMAGE;
▫ User: corresponds to a real user in the environment. It establishes the correspondence between the User and his Manipulator(s) and also manages Object manipulation to avoid conflicts between Users;
▫ Object: represents an element that is not a Manipulator and with which the User can interact. 3D objects (see Section 3) are special instances of Object: their pose (position, rotation, rotation point and scale factor) can be modified and manipulations (move, rotate, zoom) can be applied to them.
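The sketch below illustrates this entity tree in Python. All class, attribute and method names are assumptions made for illustration; they are not the actual IMAGE API.

```python
# Minimal sketch of the XPL entity tree; names are hypothetical.
from dataclasses import dataclass


@dataclass
class Pose:
    """Pose of a 3D Object: position, rotation, rotation point, scale."""
    position: tuple = (0.0, 0.0, 0.0)
    rotation: tuple = (0.0, 0.0, 0.0)
    rotation_point: tuple = (0.0, 0.0, 0.0)
    scale: float = 1.0


class Entity:
    """A node of the entity tree; children are the branches."""
    def __init__(self, name):
        self.name = name
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child


class Manipulator(Entity):
    """A hardware device (data glove, wand, ...) driven by a User."""


class User(Entity):
    """A real user; owns Manipulators and arbitrates Object access."""


class Object3D(Entity):
    """An interactive element whose pose can be modified."""
    def __init__(self, name):
        super().__init__(name)
        self.pose = Pose()


# A tiny environment: one user wearing two gloves, and one 3D object.
root = Entity("environment")
user = root.add(User("analyst"))
user.add(Manipulator("left_glove"))
user.add(Manipulator("right_glove"))
sim_tree = root.add(Object3D("simulation_tree"))
sim_tree.pose = Pose(scale=2.0)  # a zoom manipulation changes the scale factor
```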

Interaction is treated as a communication mechanism (a flow of messages) that allows Manipulators to send interactions to Objects. Interactions are not directly coupled with Manipulators, so the Manipulator design is independent of the Object design.
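A minimal sketch of this decoupling, assuming a hypothetical message bus and hypothetical interaction kinds:

```python
# Sketch of the interaction-as-message idea: a Manipulator emits
# Interaction messages; Objects consume them without either side knowing
# the other's concrete type. All names here are assumptions.
from dataclasses import dataclass


@dataclass
class Interaction:
    kind: str       # e.g. "move", "rotate", "zoom"
    payload: dict   # parameters of the interaction


class MessageBus:
    """Routes Interaction messages from Manipulators to target Objects."""
    def __init__(self):
        self._handlers = {}

    def register(self, obj_name, handler):
        self._handlers.setdefault(obj_name, []).append(handler)

    def send(self, obj_name, interaction):
        for handler in self._handlers.get(obj_name, []):
            handler(interaction)


bus = MessageBus()
bus.register("simulation_tree", lambda i: print(f"tree received {i.kind}"))
# A data glove and a wand can both emit the same message type:
bus.send("simulation_tree", Interaction("zoom", {"factor": 1.5}))
```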

Drag & Drop is a mechanism that is implemented at entity level (1) to transfer information from one Object to another (via a ghost concept) and (2) to instantiate associations between two Objects.

Another interesting feature of the architecture is the MetaData concept, which encapsulates all the data types provided to XPL by the SIM Module.
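A minimal sketch of such an envelope type (the field names are assumptions):

```python
# Sketch of a MetaData wrapper: one envelope for all data handed from
# the SIM Module to XPL, so XPL transport code stays agnostic of the
# concrete payload type. Field names are invented for illustration.
from dataclasses import dataclass
from typing import Any


@dataclass
class MetaData:
    source: str    # producing module, e.g. "SIM"
    dtype: str     # e.g. "time_series", "terrain_layer"
    payload: Any   # the actual data, opaque to the transport layer


sample = MetaData(source="SIM", dtype="time_series",
                  payload=[(0, 0.1), (1, 0.4), (2, 0.9)])
```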

Figure 1. (Right) IMAGE Concept – (Left) IMAGE Working Environment

3 3D OBJECTS – WORK TOOLS
The XPL environment is entirely built and controlled/managed by the User throughout his work session (including Object creation, Object manipulation, Object destruction, association of Objects...). The various Objects available, which can be considered as a set of work tools, are presented in the following subsections.
3.1 SIMULATION TREE – ONE SCENARIO, ONE VARIABLE SPACE
The Simulation Tree is the cornerstone of the XPL environment. It is the direct link between the SIM and XPL Modules and encapsulates Simulation data. It corresponds to a basic 3D implementation [4] of the visual representation of the simulation conceptual framework used by IMAGE [2]. Combining concepts of data/information visualization, the Simulation Tree provides Users with a visually explicit history of their experiments. It also provides interaction metaphors with running Simulations, whose parameters and resulting data, rendered in an adequate way, can assist Users to better understand the simulated CS. Figure 2 presents information on the Simulation Tree.
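The following sketch suggests how a tree of simulation runs can record an experiment's history explicitly; the structure and parameter names are assumptions, not the IMAGE implementation:

```python
# Sketch of a Simulation Tree: each node is a simulation run; branching
# a run with new parameter values records the experiment's history.


class SimulationRun:
    def __init__(self, params, parent=None):
        self.params = params
        self.parent = parent
        self.children = []

    def branch(self, **changes):
        """Start a new run from this one with some parameters changed."""
        child = SimulationRun({**self.params, **changes}, parent=self)
        self.children.append(child)
        return child


root = SimulationRun({"force_size": 100, "weather": "clear"})
rainy = root.branch(weather="rain")
bigger = rainy.branch(force_size=150)

# Walking up the tree reconstructs how the current run was reached.
run, lineage = bigger, []
while run:
    lineage.append(run.params)
    run = run.parent
print(lineage)
```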

Figure 2. Simulation Tree – (Up) Hierarchy – (Down) Information draping

3.2 GEOSPATIAL VIEW – EXPLANATION OF SIMULATION
The Geospatial View allows the Simulation status to be visualized (at each Simulation Step) on a terrain representation on which the scenario elements are displayed symbolically (military icons rendered on cubes). Information layers and gauges/indicators (evolving with Simulation time) can be added to inform the User of the status of simulation variables. This tool also allows a Simulation to be played back (like a video). This Object is active if and only if it is associated with a given Simulation (in the Simulation Tree). Figure 3 shows different aspects of the Geospatial View.
3.3 SCIENTIFIC VIEWS – 2D AND 3D VIEWS
At any time, to analyze Simulation data or compare Simulations, the User can configure Scientific Views (whether or not they are already associated with Simulation(s)) for:
▫ 2D plotting: line graphs (plot), bar graphs (histogram, stacked), and area graphs (pie). The User can create up to 4 different graphic displays within a figure window, each representing a different combination of data;
▫ 3D plotting: similar to 2D plotting but in 3D.

Users can Drag & Drop a Simulation on a Scientific View and select data they wish to analyze (compare) as well as the type of plotting mode to be used. Figure 4 shows examples of Scientific Views. 3.4 ASSOCIATION – COMM. CHANNEL BETWEEN OBJECTS In order to create an environment in which Objects can interact with each other, the concept of association has been implemented at the entity level. This mechanism allows two Objects to synchronize and share data using a common protocol. In the environment, an association is represented graphically as a physical link between Objects (an arrow). Association Objects can be deleted when they are no longer needed by the User. Associations are shown in Figures 1 and 3. 3.5 CONTROL MENUS – MAIN MENU AND CIRCULAR MENUS Two types of control menus (see Figure 5) have been designed: ▫ Main menu: always visible, allows to connect the XPL module to the simulator, create Objects (and modify variables), request information on components of the environment; ▫ Circular menu: associated to specific Objects and visible on-demand, this type of menu allows choosing how data should be visualized and displayed.

4 HUMAN INTERACTIONS – BIMANUAL GESTURAL INTERFACE
To enable interaction between the User and the environment, a 3D bimanual gestural interface using CyberGloves™ has been developed [5]. It builds upon past contributions on gestural interfaces and bimanual interaction to create an efficient and intuitive gestural interface tailored to IMAGE needs. Modelled on real-world bimanual interaction, the interface uses the hands in an asymmetric style, with the left hand providing the mode of interaction and the right hand acting at a finer level of detail.

The User’s actions in the environment have been separated into 4 categories of gestures: (1) Selection and designation of Objects; (2) Generic manipulations, which group all interactions relevant to moving and positioning Objects; (3) Specific manipulations, which are interactions tuned for particular Objects such as the Geospatial View or the Simulation Tree; and (4) System control, which covers all actions related to menus and to modifying the way the environment behaves.
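The following sketch illustrates the asymmetric bimanual dispatch described above; the postures, gesture names and mode mapping are invented for illustration:

```python
# Sketch of asymmetric bimanual dispatch: the left hand's posture
# selects the interaction mode, the right hand's gesture is interpreted
# within that mode. All postures and mode names are hypothetical.

MODES = {"flat": "generic", "point": "specific", "fist": "system"}


def interpret(left_posture, right_gesture):
    mode = MODES.get(left_posture, "selection")
    if mode == "generic":
        return f"generic manipulation: {right_gesture} the selected Object"
    if mode == "specific":
        return f"specific manipulation: {right_gesture} on the active Object"
    if mode == "system":
        return f"system control: {right_gesture} in the active menu"
    return f"selection: designate an Object by {right_gesture}"


print(interpret("flat", "grab-and-move"))
print(interpret("fist", "circular-menu pick"))
```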

There is no need for travelling interactions in IMAGE since the User is located at the center of the virtual environment and has access to all Objects without moving his body.

Figure 3. (Left) Snapshot of Geospatial View - (Right) Scenario elements

Figure 4. Examples of (Left) 2D Scientific View – (Right) 3D Scientific View

Figure 5. (Left) Main Control Menu Hierarchy – Icon & associated functionality – (Right) Examples of Circular Menus associated with a Scientific View

5 CONCLUSION
This paper has presented the prototype of a Human-centric/built immersive virtual environment which aims to accelerate CS understanding when combined with proper REP tools.

REFERENCES
[1] M. Lizotte, D. Poussart, F. Bernier, M. Mokhtari, É. Boivin, and M. DuCharme (2008), “IMAGE: Simulation for Understanding Complex Situations and Increasing Future Force Agility,” Proc. of ASC.
[2] M. Mokhtari, É. Boivin, D. Laurendeau, and M. Girardin (2010), “Visual Tools for Dynamic Analysis of Complex Situations,” Proc. of VizWeek.
[3] F. Rioux, F. Bernier, and D. Laurendeau (2008), “Multichronia – A Generic Visual Interactive Simulation Exploration Framework,” Proc. of I/ITSEC.
[4] E. Ouellet (pending), “Simulation Data Manipulation and Representation in a Virtual Environment,” MSc thesis, Laval University, Quebec (QC), Canada.
[5] J.-C. Levesque, D. Laurendeau, and M. Mokhtari (2010), “Bimanual Gestural Interface for Complex Situations Understanding in VEs,” submitted to 3DUI.
