Carol O’Sullivan, Sarah Howlett, Rachel McDonnell, Yann Morvan, Keith O’Connor Perceptually Adaptive Graphics


Page 1: Carol O’Sullivan, Sarah Howlett, Rachel McDonnell, Yann Morvan, Keith O’Connor Perceptually Adaptive Graphics

Carol O’Sullivan, Sarah Howlett, Rachel McDonnell, Yann Morvan, Keith O’Connor

Perceptually Adaptive Graphics

Page 2

Washington University in St. Louis, Media & Machines Lab
Reynold Bailey, http://www.cs.wustl.edu/~rjb1/

Introduction

EUROGRAPHICS/SIGGRAPH Campfire, May 2001

Researchers from various fields:

Computer Graphics and visualization

Psychology

Neuroscience

Psychophysics

Medicine

This paper outlines the state of the art as discussed at that event and progress made since.

Page 3

Introduction

Perceptually Adaptive Graphics:

For photorealistic images, how do we know that we are not simply producing pretty pictures, but are actually representing reality in a faithful manner?

For real time rendering and simulation, how do we make speed-accuracy trade-offs while minimizing the perceptibility of any resulting anomalies?

What types of anomalies are most noticeable?

How can we quantify these factors and use them in a methodical way to adapt our graphics to the perception of the viewer?

Page 4

Outline

Interactive Graphics

Image Fidelity

Virtual Environments

Visualization

Animation

Non-Photorealistic Rendering

Page 5

Interactive Graphics

Ideal: render fully detailed and photorealistic scenes in real time.

Not yet a feasible option.

Aim: produce the best perceived image in the time available.

Gaze-contingent approaches.

Perceptually guided polygonal simplification.

Interruptible rendering.

Page 6

Interactive Graphics: Gaze-contingent approaches

Basic idea:

Degrade the resolution in the peripheral image regions.

The high-resolution area moves with the user’s focus, so the area under scrutiny is always rendered in high resolution.
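The degradation scheme above can be sketched as a simple mapping from eccentricity (distance from the gaze point) to a detail level. This is an illustrative sketch, not any particular published system; the foveal radius and falloff rate are made-up parameters.

```python
import math

def lod_for_pixel(px, py, gaze_x, gaze_y, fovea_radius=60.0, max_lod=4):
    """Map a pixel's distance from the gaze point to a detail level.

    Pixels within the foveal radius get full detail (LOD 0); beyond it,
    detail drops one level for each doubling of eccentricity, clamped
    to the coarsest available level.
    """
    eccentricity = math.hypot(px - gaze_x, py - gaze_y)
    if eccentricity <= fovea_radius:
        return 0
    level = 1 + int(math.log2(eccentricity / fovea_radius))
    return min(level, max_lod)
```

A renderer would evaluate this per region (not per pixel) and re-evaluate it whenever the eye tracker reports a new gaze position.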

Page 7

Interactive Graphics: Gaze-contingent approaches

Gaze-contingent screens: Focus Plus Context (F+C) Screens are one result of this research.

Conventional display hardware has the same resolution across the entire screen, even though peripheral content is rendered at low resolution.

In F+C screens, there is a difference in resolution between the focus and the context area:

A wall-sized low-resolution display with an embedded high-resolution screen.

When the user moves the mouse, the display content pans and can be brought into high-resolution focus as required.

Page 8

Interactive Graphics: Gaze-contingent approaches

Focus Plus Context Screens: [images]


Page 13

Interactive Graphics: Gaze-contingent approaches

Attentive 3D-rendering engines have been developed.

These use the viewer’s gaze position to vary the level of detail (LOD) at which an object is drawn.

Similar to gaze-contingent displays but with one main difference:

Objects are simplified at the object geometry level instead of the image level.

Page 14

Interactive Graphics: Gaze-contingent approaches

Display systems that guide the viewer’s attention: Easily Perceived Displays.

The user’s attention is directed rather than followed.

Use the gaze information from one user to decide which parts of an image are important and which parts should be removed.

Subsequent viewers are guided to what the original viewer found important.

Page 15

Interactive Graphics: Gaze-contingent approaches

New research suggests that visual attention is largely controlled by the task being performed.

Identify task related objects in advance.

Render other objects at lower resolution.

Page 16

Interactive Graphics: Perceptually guided polygon simplification

Reducing model complexity based on perceptual criteria.

The circumstances under which a simplification will be perceptible are determined.

Simplifications deemed perceptible are not carried out.

How can we tell if one simplification is actually better than another?

Several techniques that experimentally and automatically measure and predict the visual fidelity of the simplified models have emerged.

Some techniques allow the user to identify important features of the models, either by eye tracking or with the mouse.
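One common form of such a perceptibility test projects the geometric error of a candidate simplification into visual angle and compares it against an acuity threshold. The sketch below assumes a roughly one-arcminute threshold; the function name and parameters are illustrative, not taken from any specific system.

```python
import math

def simplification_perceptible(geom_error_m, view_distance_m,
                               threshold_arcmin=1.0):
    """Visibility test for a candidate mesh simplification.

    Computes the visual angle subtended by the worst-case geometric
    error at the given viewing distance, and flags the simplification
    as perceptible when that angle exceeds the acuity threshold
    (about one arcminute for normal vision).
    """
    angle_rad = 2.0 * math.atan2(geom_error_m / 2.0, view_distance_m)
    angle_arcmin = math.degrees(angle_rad) * 60.0
    return angle_arcmin > threshold_arcmin
```

A simplification pipeline would run this check per candidate edge collapse and skip the ones the test flags as perceptible.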

Page 17

Interactive Graphics: Interruptible Rendering

Trade-off between fidelity and performance.

Basic idea: a progressive rendering framework.

A coarse image is drawn on the back buffer and is continuously refined.

An error function is defined that combines the spatial error due to the coarseness of the image and the temporal error due to the time delay.

At each step the combined error is evaluated, and the image is displayed once the error threshold is met.
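The refinement loop can be sketched as follows. The error model here (temporal error growing linearly with elapsed time, refinement stopping once it overtakes the remaining spatial error) is a simplified stand-in for the metric the actual work defines; all names are hypothetical.

```python
def interruptible_render(refine_step, spatial_error, frame_budget_s, clock):
    """Progressively refine the back buffer until refining stops paying off.

    refine_step() performs one refinement increment and returns False when
    no further refinement is possible; spatial_error() reports the image's
    remaining coarseness; temporal error grows with the age of the frame.
    The frame is presented once spatial error no longer dominates.
    """
    start = clock()
    while True:
        temporal_error = (clock() - start) / frame_budget_s
        if spatial_error() <= temporal_error:
            break   # the frame is now "old" enough: present it as-is
        if not refine_step():
            break   # fully refined: nothing left to improve
    return spatial_error()
```

In a real renderer the loop body would draw into the back buffer, and the break would trigger the buffer swap.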

Page 18

Image Fidelity: Metrics

Develop perceptual metrics and heuristics to measure or predict the fidelity of images.

At the Campfire, one researcher described three standards of realism:

Physical realism: the image provides the same visual stimulation as the scene depicted.

Photorealism: the image produces the same visual response as the scene depicted.

Functional realism: the image provides the same visual information as the scene depicted.

Metrics that evaluate visual appeal, structural distortion of scene objects, and animation quality have been proposed.

Page 19

Image Fidelity: Rendering

In 2004, a technique for high-quality global illumination rendering using perceptual illumination components was presented:

The illumination of a surface can be split into components that are separately computable:

Direct illumination.

Indirect diffuse illumination.

Indirect glossy illumination.

Indirect specular illumination.

The goal was to produce a perceptual metric operating on these components and use it to drive the rendering process.

Perceptual experiments were conducted and a mathematical model was fitted to the experimental data to formulate the metric.
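The shape of such a metric can be sketched as a weighted combination of per-component errors. The weights below are placeholders standing in for the sensitivities the perceptual experiments would fit; they are not the published model.

```python
def perceptual_illumination_error(component_errors, weights=None):
    """Combine per-component rendering errors into one perceptual score.

    component_errors maps each illumination component (direct, indirect
    diffuse, indirect glossy, indirect specular) to its current numeric
    error. The default weights are illustrative placeholders only.
    """
    if weights is None:
        weights = {
            "direct": 1.0,
            "indirect_diffuse": 0.5,
            "indirect_glossy": 0.7,
            "indirect_specular": 0.3,
        }
    return sum(weights[name] * err for name, err in component_errors.items())
```

A renderer driven by such a metric could spend additional samples on whichever component currently contributes most to the score.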

Page 20

Image Fidelity: High Dynamic Range image reproduction

The dynamic range of a scene is the contrast ratio between its brightest and darkest parts.

A high dynamic range (HDR) image is one that has a greater dynamic range than can be shown on standard CRTs or LCDs.

High dynamic range display devices have been developed.
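Both notions can be made concrete: the dynamic range is a simple ratio, and standard tone-mapping operators such as Reinhard's global operator compress HDR luminances into the range a conventional display can show. A minimal sketch:

```python
def dynamic_range(luminances):
    """Contrast ratio between the brightest and darkest non-zero parts."""
    brightest = max(luminances)
    darkest = min(l for l in luminances if l > 0)
    return brightest / darkest

def reinhard_tone_map(luminance):
    """Reinhard's global operator: maps [0, inf) into [0, 1)."""
    return luminance / (1.0 + luminance)
```

Very bright scene values approach but never reach display white, while dark values are passed through nearly unchanged.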

[Video]

Page 21

Animation: Physical simulation

Evaluate and/or improve animations by applying perceptual principles.

In real-time animation, fully accurate processing of dynamic events such as collisions can lead to long delays.

The role of various factors in human perception of anomalous collisions has been investigated.

Psychophysical experiments were conducted to determine thresholds for human sensitivity to dynamic anomalies.

Page 22

Animation: Human Motion

Plausible human motion in animation is difficult to achieve.

Motion capture data is used extensively.

Techniques have been developed to generate transitions between two sets of motion capture data and also to combine two sets of motion capture data.

Several perceptual metrics for character animation have also been developed.

[Video]

Page 23

Virtual Environments

At the Campfire, the problem of effectively generating images of objects and environments that convey an accurate sense of distance and size was discussed.

There is interaction between visual perception and locomotion: displays have been built that combine computer-generated visual information with biomechanical information on locomotion.

Does image quality affect the perception of distance in virtual environments?

A 2003 study found that image quality played no role in the under-estimation of distances relative to the real-world scenario.

A 2002 study found that distance perception is affected by cues such as shadows and reflections.

Page 24

Virtual Environments

Research was also done to determine the impact of field of view and binocular viewing restrictions on people’s ability to perceive distance in the real world.

The under-estimation of distances is not due to not being able to see one’s own body (same conclusion reached in a separate study).

Having a restricted field of view does not impact distance perception as long as head movement is allowed.

Monocular viewing did not produce poorer performance than binocular viewing.

These restrictions do not explain the poor performance in distance estimation for tasks in immersive environments displayed using an HMD.

Page 25

Virtual Environments

Research on the simulation fidelity of immersive virtual environments:

The use of memory tasks as a measurement of the fidelity of virtual environments was proposed.

Measurement of presence (the feeling of “being there”)

Previously done using questionnaires and interviews.

A technique based on physiological measurements was proposed.

Memory recall and memory awareness states.

Preliminary results are promising.

Page 26

Visualization

Designing and implementing algorithms for displaying large, complicated data sets.

Goal: facilitate the rapid, intuitive appreciation of the essential features.

Perceiving shape from texture: the texture used impacts the shape perceived.

Especially true for unfamiliar structures.

Several different textures were synthesized and applied to various surfaces.

Users were asked to orient probes so that they matched the surface normal as closely as possible.
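Probe responses in such a study are typically scored by the angular error between the user's probe orientation and the true surface normal, which can be computed as:

```python
import math

def probe_error_deg(probe_normal, true_normal):
    """Angle in degrees between a user-set probe and the true surface normal."""
    dot = sum(p * t for p, t in zip(probe_normal, true_normal))
    norm_p = math.sqrt(sum(p * p for p in probe_normal))
    norm_t = math.sqrt(sum(t * t for t in true_normal))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_angle = max(-1.0, min(1.0, dot / (norm_p * norm_t)))
    return math.degrees(math.acos(cos_angle))
```

Averaging this error over many probes and surfaces gives a per-texture score of how well that texture conveys shape.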

Page 27

Visualization

Page 28

Non-Photorealistic Rendering

NPR techniques are well suited to achieving functional realism.

Studies have been done to evaluate how space is perceived in NPR immersive environments.

Participants wore a head-mounted display (HMD) and were asked to walk towards a target.

In the NPR environment, perceived distances were 66% of the intended distances.

Page 29

Non-Photorealistic Rendering

Functional realism in the context of facial illustrations.

New technique to automatically generate caricatures from photographs of people.

Page 30

Non-Photorealistic Rendering

Manipulating rendering methods in order to evoke certain responses:

Strength.

Weakness.

Danger.

Safety.

Goal-related judgement.

Rendering style can convey meaning and influence judgement in a controllable fashion.

Page 31

Conclusion

This paper summarizes recent and ongoing work in the field of perceptually adaptive graphics.

The report is selective; much work in the field was not reviewed.