The Allosphere


DESCRIPTION

Presentation of the Allosphere project at UCSB. Imagine a three-story-high sphere suspended in a cube, where 3D video and audio are used for scientific discovery and exploration.


Allosphere@CNSI: Towards a fully Immersive and Interactive Scientific Experience

in partnership with the California Nanosystems Institute

MAT/CNSI

allosphere@CNSI

What is a Digital Media Center doing in a Nanosystems institute?

• A team of digital media researchers at UCSB has been fostering a cross-disciplinary field that unites science and engineering through the use of new media

• Allosphere = Integration + availability to a larger community

Description and Goals

Allosphere Steering Committee

• JoAnn Kuchera-Morin (Media Arts and Technology Initiatives)

• Xavier Amatriain (Media Arts and Technology Initiatives)

• Jim Blascovich (Psychology)

• Forrest Brewer (Electrical and Computer Engineering)

• Keith Clarke (Geography)

• Steve Fisher (Life Sciences)

• B.S. Manjunath (Electrical and Computer Engineering)

• Marcos Novak (Media Arts and Technology/Arts)

• Matthew Turk (Media Arts and Technology/Computer Science)

• T.B.D. (California Nanosystems Institute)

• The Allosphere – synthesis, manipulation, exploration and analysis of large-scale data sets...

– an environment that can simulate realistic sensory perception, providing multi-user immersive interactive interfaces

• Research into scientific visualization, numerical simulations, data mining, visual/aural abstract data representations, knowledge discovery, systems integration and human perception

Description and Goals

• Allosphere and other labs hosted in UCSB’s California Nanosystems Institute (CNSI)

The Building

• The space itself is already part of the final instrument: a three-story anechoic sphere, ten meters in diameter, containing a built-in spherical screen.

The Space

• Once equipped, the CNSI Allosphere will be one of the largest immersive instruments in the world.

– Unique features: true 3D spherical projection of visual and aural data, plus sensing and camera tracking for interactivity.

The Features

• The AlloSphere is situated at one corner of the CNSI building, surrounded by different media labs.

– Visual Computing

– Interactive Installation

– Immersion/Eversion

– Robotics

– Plurilabs

Other MAT Labs at CNSI

Research in the Allosphere

• Inherent research comprises all of the activities that use the instrument as a research framework for immersive, multimodal environments:

Inherent Research

• Sensor and Camera Tracking Systems – research into computer vision as well as innovative interfaces and sensor networks that can capture user interaction

Inherent Research. Interactivity

• System Design and Integrated Software/Hardware Research – integration of the different hardware and software components at play

Inherent Research. Systems

• Immersive Visual Systems Research – re-creation of an immersive visual space in a spherical environment

Inherent Research. Visual

• Immersive Audio Systems Research – re-creation of a virtual 3D sound environment in which sources can be placed at arbitrary points in space with convincing synthesis, and which allows simulating the acoustics of real spaces

Inherent Research. Audio

• Functional research includes those activities that will use the Allosphere as a tool for scientific exploration:

Functional Research

• Multidimensional knowledge discovery – deals with issues such as high-dimensional feature descriptors, similarity metrics, and indexing

– Machine learning, image data mining and understanding...

Functional Research. Knowledge

• Analysis of complex structures and systems – constructing the next generation of engineering paradigms requires a mechanism for rapid simulation, visualization and exploration supporting phenomena at multiple physical and temporal scales

Functional Research. Complex Systems

• Human perception, behavior and cognition – a valuable instrument for behavioral scientists interested in the impact of virtual environments, large-scale visualization, or spatial hearing

Functional Research. Psychology

• Cartographic display and Information Visualization – gives remote sensing and geographic information science the opportunity to explore the potential of “inside-out” global data displays as tools for collective decision-making

Functional Research. Cartography

• Artistic scientific visualization/auralization – artistic principles are driving research into real-time interactivity and human manipulation of complex scientific data structures

Functional Research. Artistic Visualization

• Most of the research in the Allosphere (Functional and Inherent) has a direct mapping into future forms of Entertainment and Edutainment.

– We envision collaboration from the Entertainment Industry

The Future of Entertainment

Prototype Projects in the Allosphere

Prototype-driven System

• State-of-the-art system: still many open research questions need to be addressed.

• We want content to drive the system design.

• For that reason we are prototyping the instrument with different projects/requirements.

The Allobrain

• In collaboration with UCLA Brain Imaging Institute, Marcos Novak and many MAT/CREATE students (view video)

Quantum Spin Precession

• In collaboration with Prof. David Awschalom and Spintronics lab. Audiovisual model for coherent electron spin precession in a quantum dot

Multicenter Hydrogen Bond

• With Anderson Genotti – Materials researcher and discoverer of the multicenter hydrogen bond – and Prof. Van de Walle. Visualization and multi-modal representation of unique atomic bonds for alternative fuel sources (view video).

NanoCAD in the Allosphere

• In collaboration with BinanGroup's NanoCAD

Alloproteins

• In collaboration with the Chemistry/CS departments, using Chromium and VMD

An Engineering Challenge

Innovation

• The Allosphere presents innovative aspects with respect to existing environments such as the CAVE

– Spherical environment with 360 degrees of visual and stereophonic information: spherical immersive systems enhance subjective feelings of immersion, naturalness, depth and “reality”.

– It is fully multimedia, combining the latest techniques in both virtual audio and visual data spatialization. Combined audio-visual information can aid understanding, yet most existing immersive environments focus on visual data alone.

Innovation

– Completely interactive and multimodal environment, including camera tracking systems, audio recognition and sensor networks.

– Pristine scientific instrument – e.g. the containing cube is a fully anechoic chamber, and details such as room modes or screen reflectivity have been studied.

• Multiuser: its size allows for up to 15 people to interact and collaborate on a common research task.

An Engineering Challenge


The Visual subsystem

10/04/23 AlloSphere Video Design - Alex Kouznetsov UCSB Proprietary and Confidential


• The Allosphere display can only be compared to high-end state-of-the-art planetariums (the Gates Planetarium at the Denver Museum of Nature & Science, or the Griffith Observatory in LA)

• Some AlloSphere requirements are considerably more demanding:

– Variety of types of graphics including smaller size text

– Bright backgrounds and accurate color

– Stereo projection

– Excellent system flexibility and expandability

Overview


• Key Design Parameters

– Display quality/performance

– Mechanical/facilities constraints

– Overall system architecture, configuration management, automation, calibration

– Cost

• Secondary Concerns

– Aging

– Maintenance

– Upgrades

– Acoustic performance (of video equipment)

The Visual subsystem


Display Brightness

• What is required?

– Eyestrain-free operation over a decent range of color values

– Brightness levels at or above the photopic threshold, for good contrast and color acuity

– High resolution

– Stereo/mono operation

• Given:

– Screen area: ~320 m²

– Projector overlap factor: 1.7

– Screen gain, direction averaged: 0.12

– 14 projectors with a max. 3K lumens/projector


• Simulation results

– ~10 cd/m² screen luminance per 42,000 lumens of total light input

• Recommendations

– 0.7–5 cd/m² recommended for multimedia domes

– 50 cd/m² for cinema projection (SMPTE)

• Conclusion

– 42K lumens is good enough for most applications
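The simulated luminance can be sanity-checked with a back-of-the-envelope model: illuminance (lumens over screen area), boosted by the projector overlap factor and converted to luminance for an ideal diffusing screen. This is only an approximation of the slide's simulation, which would account for geometry and reflections; the figures are the ones given above.

```python
import math

def screen_luminance(total_lumens, screen_area_m2, overlap, gain):
    """Average screen luminance (cd/m^2) for a diffusing screen:
    illuminance = lumens / area, boosted by the projector overlap
    factor, then luminance = illuminance * gain / pi (Lambertian)."""
    illuminance = total_lumens / screen_area_m2  # lux
    return illuminance * overlap * gain / math.pi

# Slide figures: 42,000 lm total, ~320 m^2 screen, overlap 1.7, gain 0.12
print(round(screen_luminance(42_000, 320, 1.7, 0.12), 1))  # -> 8.5
```

This crude model already lands in the same range as the simulated ~10 cd/m².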

Display Brightness


• Active stereo introduces more than a 50% loss in brightness, but...

– 5 cd/m² is still at the high end of the recommendations for domes

– Active stereo introduces a dramatic gain in subjective quality perception.

• On the other hand, we cannot project much more than that because of:

– Back reflections

– Cross-reflections

Display Brightness


• “Eye-limiting resolution” is not feasible (right now)

– Approx. 150M pixels required to achieve 30 lp/deg (1 arc minute) in all directions

• 11 lp/deg (3 arc minutes) is the recommended value for domes

– 20M pixels, 14 projectors

[Figure: Allosphere line pairs per degree as a function of the total number of active pixels on all projectors combined]
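The pixel counts above follow from spreading the pixels uniformly over the sphere's full solid angle (~41,253 square degrees) at two pixels per line pair. A minimal sketch, under that uniform-coverage assumption:

```python
import math

# Solid angle of the full sphere expressed in square degrees (~41,253).
SPHERE_DEG2 = 4 * math.pi * (180 / math.pi) ** 2

def lp_per_deg(total_pixels):
    """Line pairs per degree if total_pixels cover the sphere uniformly
    (2 pixels are needed per line pair)."""
    px_per_deg = math.sqrt(total_pixels / SPHERE_DEG2)
    return px_per_deg / 2

print(round(lp_per_deg(150e6)))  # -> 30  (1 arc minute, "eye-limiting")
print(round(lp_per_deg(20e6)))   # -> 11  (3 arc minute dome recommendation)
```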

Resolution


• Design requirements relate to all aspects of system design

– Projector side

• Best image quality, usually combined with color correction

• Limited configuration

• Lower cost and higher flexibility

– Dedicated hardware

• Lower latencies

• DLP projectors are problematic due to the extra frame-buffer latency

– Custom software infrastructure?

Image Warping and Blending
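Whichever side performs the blending, the core principle is the same: in the overlap region each projector's contribution is attenuated by a ramp chosen so that, after the projector's gamma is applied, the two contributions sum to constant light. A sketch of that principle, assuming a smoothstep ramp and a 2.2 display gamma (both illustrative choices, not the Allosphere's actual calibration):

```python
def blend_weight(t, gamma=2.2):
    """Edge-blend weight for normalized position t in [0, 1] across the
    overlap region. The smoothstep ramp satisfies w(t) + w(1-t) == 1 in
    linear light; raising it to 1/gamma pre-compensates the projector's
    gamma so the sum still holds on screen."""
    t = min(max(t, 0.0), 1.0)
    linear = t * t * (3 - 2 * t)  # smoothstep: C1-continuous ramp
    return linear ** (1 / gamma)
```

At any point in the overlap, the two gamma-decoded contributions `blend_weight(t)**gamma + blend_weight(1-t)**gamma` sum to 1, so no seam is visible.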

An Engineering Challenge

The Audio subsystem

Audio Requirements

• “Ear-limited” audio rendering

– Flat frequency response: 20 Hz – 22 kHz

– Dynamic range: 120 dB

– SNR > 90 dB

– T60 < 0.75 s

– Spatial accuracy: 3° in the horizontal plane and 10° in elevation

Spatial Audio

• Examples of Spatial audio: stereo, surround...

• Geometrical model-based spatialization

– Mono source + dynamic positioning

– Three “standard” techniques:

• Vector-based amplitude panning

• Ambisonic spatialization

• Wavefield Synthesis
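Of the three, vector-base amplitude panning is the simplest to sketch. A minimal 2-D implementation follows (the sphere would use the 3-D, speaker-triplet variant; the speaker angles here are illustrative):

```python
from math import cos, sin, radians, sqrt

def vbap_2d(speaker_az_deg, source_az_deg):
    """2-D vector-base amplitude panning: solve L g = p, where the
    columns of L are the unit vectors of the two speakers flanking the
    source and p is the source direction, then normalize the gains for
    constant energy."""
    a1, a2 = (radians(a) for a in speaker_az_deg)
    s = radians(source_az_deg)
    l11, l21 = cos(a1), sin(a1)   # speaker 1 unit vector
    l12, l22 = cos(a2), sin(a2)   # speaker 2 unit vector
    p1, p2 = cos(s), sin(s)       # source unit vector
    det = l11 * l22 - l12 * l21
    g1 = (p1 * l22 - p2 * l12) / det
    g2 = (l11 * p2 - l21 * p1) / det
    norm = sqrt(g1 * g1 + g2 * g2)
    return g1 / norm, g2 / norm

# A source midway between speakers at +/-45 degrees gets equal gains.
g1, g2 = vbap_2d((45, -45), 0)
print(round(g1, 4), round(g2, 4))  # -> 0.7071 0.7071
```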

Wavefield Synthesis

• Huygens' principle of superposition: many closely spaced speakers create a coherent wavefront with an arbitrary source position

• 3D WFS has still not been attempted because of its computational complexity (3D Kirchhoff–Helmholtz integral): Ambisonics can be used on the z axis
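The superposition idea can be sketched as a delay-and-gain computation per speaker: each speaker re-emits the source signal with its true propagation delay and a distance-dependent attenuation. This omits the filtering and array tapering of a real WFS driving function, and the geometry is purely illustrative:

```python
from math import hypot

C = 343.0  # speed of sound in air, m/s

def wfs_driving(speakers, source):
    """Per-speaker (delay_s, gain) for a virtual point source behind a
    speaker array: propagation delay r/c and a 1/sqrt(r) amplitude
    decay per speaker."""
    sx, sy = source
    return [(hypot(x - sx, y - sy) / C,           # delay in seconds
             1.0 / hypot(x - sx, y - sy) ** 0.5)  # gain
            for x, y in speakers]

# Illustrative setup: 8 speakers 10 cm apart, source 2 m behind the array.
array = [(0.1 * i, 0.0) for i in range(8)]
params = wfs_driving(array, (0.3, -2.0))
```

Playing the delayed, attenuated copies simultaneously makes the individual wavelets sum into one coherent wavefront that appears to originate at the virtual source.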

Spatial Techniques

• All these techniques present pros/cons and interesting research problems

– We already have a framework that can effectively combine them

• Spatial audio is set for huge success in the near future

• The number of speakers depends on the specific technique

– but in order to have a reasonable spatial resolution we need ~500 speakers

• The technology to use (electrostats, ribbons, tweeter arrays...) is also still under discussion.

Input Sensing & Multimodal HCI

Interactivity

• Dynamic, user-driven environment – how to best give users the ability to interact with data in effective, compelling, and natural ways?

• Powerful techniques for navigation, selection, manipulation, and signaling

• Sense and perceive human movement, gesture, and speech via a network of sensors

– Cameras, microphones, haptic devices, etc.

– Multimodal interaction!

Computing Infrastructure

Integration

• A typical multi-modal AlloSphere application will integrate services running on multiple hosts on the LAN that implement a distributed system composed of:

– input sensing (camera, sensor, microphone),

– gesture recognition/control mapping,

– interface to a remote (scientific, numerical, simulation, data

mining) application,

– back-end processing (data/content accessing),

– A/V rendering and projection management.

Integration

• Still need software infrastructure to distribute the different graphic pipes from the generation engine to the render farm

• Develop an ad-hoc visual generation software engine and its interconnection with data streams.

• Efforts need to be put into building this intermediate integration/coordination layer by combining several specialized packages

– Cyberinfrastructure grant presented (Hollerer, Wolski and Shea)

Video Generation Subsystem

• In order to generate high resolution (1920×1200 @ 120 Hz) in active stereo we need high-end video cards

• Sample rendering farm for 14 stereo channels: 7 servers, each with one Quadro FX5600.

• Blending and warping managed mostly at the projector side.

Sample video generation unit: a Linux render box with an NVIDIA Quadro 5600 (still to appear) feeding two Christie Mirage S2K+ projectors
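The need for high-end cards follows directly from the raw per-channel bandwidth (uncompressed active pixels only, ignoring blanking intervals):

```python
def stereo_channel_gbps(width=1920, height=1200, hz=120, bits=24):
    """Raw (uncompressed) bandwidth of one active-stereo video channel;
    the 120 Hz stream time-multiplexes the left/right eye images."""
    return width * height * hz * bits / 1e9

print(round(stereo_channel_gbps(), 2))  # -> 6.64 (Gbps per channel)
```

At ~6.6 Gbps per channel, 14 channels add up to roughly 93 Gbps of aggregate pixel traffic, which is why each card drives only its local projectors rather than streaming pixels over the network.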

Video Distribution


Audio Generation Subsystem

• Problem: distribute 500+ channels of hi-fi audio to the speakers

1. Distributed rendering – ~1.3 Gbps (at 24-bit/96 kHz)

• Multichannel audio streaming over the network: Yamaha's mLAN, Gibson's Global Information Carrier, Sony's SuperMAC...

• Sample-synchronous output: Steve Butner's EtherSync

• Network interface box to be custom built

2. Single render point

• Develop custom DSP hardware

• Harder signal distribution
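The ~1.3 Gbps figure for the distributed option is the raw multichannel payload plus a small framing allowance (the 10% overhead below is an assumption, not a slide figure):

```python
def audio_gbps(channels=500, bits=24, rate=96_000, overhead=1.10):
    """Aggregate network bandwidth for uncompressed multichannel audio;
    'overhead' is an assumed ~10% packetization/framing allowance."""
    return channels * bits * rate * overhead / 1e9

print(round(audio_gbps(), 2))  # -> 1.27 (Gbps)
```

This comfortably exceeds gigabit Ethernet, which motivates the custom network interface box mentioned above.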


Audio Generation Subsystem

Synthesis/Processing Software: the audio team has extensive experience developing such software and has ready-to-use frameworks such as CLAM (Amatriain; ACM MM Best Open Source Software 2006) and CSL (Pope)

Open Research Areas and People

• Graphics (Hollerer)

• Audio (Amatriain)

– Auralization (Roads)

– 3D Audio (Pope)

• Systems (Hollerer, Brewer, Butner, Pope, Amatriain)

• Interactivity (Turk, Kuchera-Morin, Amatriain)

• Experiential Signal Processing (Gibson)

• HPC, Optimization (Wolski, Krintz)

• Content Creation

– Visual (Legrady)

– Music (Kuchera-Morin)

– VW (Novak)

Open Research Areas and People

• Nanoscale systems representation (Oster, Garcia-Cervera)

• Brain Imaging (Grafton)

• Molecular Dynamics (Shea)

• GIS (Clarke et al.)

• Bio-imaging (Fisher, Manjunath)

• Perception (Loomis, Beall...)

http://www.mat.ucsb.edu/allosphere

THANKS!
