
DIGITAL HUMAN FACES

TS RADHIKA

Introduction

Faces are an important communication vector. Expressions result from subtle muscular contractions and wrinkle formations. We perceive them through a complex filter of subsurface scattering and other light reflections.

Modeling 3D faces and their expressions has generated a great deal of interest. It can help in the creation and animation of virtual actors for films and video games. The quality can be precise enough to capture a real actor's performance, down to the slightest movements of emotional expression.

The Digital Emily Project: Achieving a Photoreal Digital Actor

The project provides high-resolution animated face geometry and production-quality results by employing the latest scanning and animation techniques.

Acquiring High-Resolution Scans Of Facial Expressions

The face is scanned with 156 LED lights turned on; this is called the light stage scanning process. Fifteen photographs of the face are taken under different lighting conditions, capturing the face's geometry and reflectance.

The lights used are of different brightness, and each light is filtered by a linear polariser. The orientation of the polariser in front of the camera can be changed. This allows diffuse reflectance (light scattered in all directions) and specular reflectance (light reflected in one direction) to be measured independently.

LIGHT STAGE SCANNING

OBTAINING DIFFUSE AND SPECULAR IMAGES

CROSS POLARISATION

Cross polarisation is used to improve contrast. A polarising filter is placed between the light source and the specimen, and a rotating polarising filter is placed over the camera lens. Rotating the camera's filter eliminates the specular reflectance; what remains is the diffuse reflectance.


PARALLEL POLARISATION

Parallel-polarised light is given as Id + 2Is, where Id refers to the diffuse reflection and Is refers to the specular reflection.

The camera's polariser is rotated to the vertical (parallel) orientation so that the specular reflection returns, at twice the strength of the diffuse reflection. We can then isolate the specular reflection by subtracting the cross-polarised images (the first row) from the parallel-polarised ones (the second row), giving specular-only images.
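As an illustration of this separation step, the sketch below computes diffuse-only and specular-only images from a pair of pixel-aligned cross-polarised and parallel-polarised photographs. The file names, the use of NumPy and imageio, and the final division by two to recover Is are assumptions for the example, not part of the project's pipeline.

import numpy as np
import imageio.v3 as iio

def separate_reflectance(cross_path, parallel_path):
    # The cross-polarised photo records only the diffuse reflection (Id);
    # the parallel-polarised photo records Id + 2*Is.
    cross = iio.imread(cross_path).astype(np.float64)
    parallel = iio.imread(parallel_path).astype(np.float64)

    diffuse = cross                          # the surface's main color
    specular = (parallel - cross) / 2.0      # (Id + 2*Is) - Id = 2*Is, halved to get Is
    specular = np.clip(specular, 0.0, None)  # guard against sensor noise
    return diffuse, specular

if __name__ == "__main__":
    diffuse, specular = separate_reflectance("face_cross.png", "face_parallel.png")
    iio.imwrite("diffuse_only.png", np.clip(diffuse, 0, 255).astype(np.uint8))
    iio.imwrite("specular_only.png", np.clip(specular, 0, 255).astype(np.uint8))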

DIFFUSE REFLECTION IMAGE

It defines the surface's main color.

PARALLEL POLARISED IMAGE

SPECULAR ONLY IMAGES

It defines the surface's shininess and highlight color.

EMBOSSING SPECULAR ON DIFFUSE

This gives us the finer details of the skin surface.
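One way to picture this embossing step, under the assumption that it amounts to transferring the small-scale variation of the specular image onto the diffuse image, is the high-pass sketch below; the blur radius and strength are illustrative parameters, not values from the project.

import numpy as np
from scipy.ndimage import gaussian_filter

def emboss_specular_on_diffuse(diffuse, specular, blur_sigma=3.0, strength=1.0):
    # The luminance of the specular image carries the fine skin detail
    # (pores, fine wrinkles) that the soft diffuse image lacks.
    spec_gray = specular.mean(axis=-1, keepdims=True)
    # High-pass filter: keep only the small-scale variation.
    detail = spec_gray - gaussian_filter(spec_gray, sigma=(blur_sigma, blur_sigma, 0))
    # Add the detail layer back onto the diffuse image ("embossing").
    return np.clip(diffuse + strength * detail, 0.0, 255.0)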

Scanning A Multitude Of Expressions

This is done so that expressions can be blended.

Example: For a pose with the eyes closed and the mouth closed, we scan the face from the left and the right as well as from the front.

We then combine the left, front and right scans into a master mesh. The master mesh is composed of a large number of polygons.
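A much-simplified sketch of building such a master mesh: each side scan is rigidly aligned to the front scan using a few manually marked corresponding points (the Kabsch algorithm), and the vertex and face arrays are concatenated. The real pipeline merges the scans far more carefully; all inputs and names here are illustrative.

import numpy as np

def kabsch_align(src_pts, dst_pts):
    # Best rigid transform (R, t) mapping src_pts onto dst_pts.
    src_c, dst_c = src_pts.mean(axis=0), dst_pts.mean(axis=0)
    H = (src_pts - src_c).T @ (dst_pts - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # avoid a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def merge_scans(front, left, right, left_landmarks, right_landmarks):
    # front/left/right are (vertices, faces) pairs; *_landmarks are
    # (source_points, target_points) pairs marked on the side and front scans.
    meshes = [front]
    for (verts, faces), (src, dst) in [(left, left_landmarks), (right, right_landmarks)]:
        R, t = kabsch_align(np.asarray(src, float), np.asarray(dst, float))
        meshes.append((verts @ R.T + t, faces))   # bring the side scan into the front frame

    all_v, all_f, offset = [], [], 0
    for v, f in meshes:
        all_v.append(v)
        all_f.append(f + offset)                  # shift face indices past earlier vertices
        offset += len(v)
    return np.vstack(all_v), np.vstack(all_f)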

Building A Digital Character From The Scans

1) Preprocessing

We regularise the polygon boundaries and improve the clarity of the surfaces around the eyes and teeth. This is done using mesh-editing software.

2) Creating An Animatable Mesh

We remesh the neutral expression scan to create a 4,000-polygon animatable mesh. This makes animation easier. The geometric detail lost in this process is reintroduced later.
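As a rough automatic stand-in for this remeshing step, the sketch below uses Open3D's quadric decimation to reduce a dense scan to a few thousand triangles. The file names and the triangle budget are assumptions for illustration, not the project's actual workflow.

import open3d as o3d

def make_animatable_mesh(scan_path, target_triangles=4000):
    # Load the dense neutral scan and reduce it to a low-polygon mesh
    # that is easier to rig and animate.
    scan = o3d.io.read_triangle_mesh(scan_path)
    low = scan.simplify_quadric_decimation(target_number_of_triangles=target_triangles)
    low.compute_vertex_normals()
    return low

if __name__ == "__main__":
    mesh = make_animatable_mesh("neutral_scan.ply")
    o3d.io.write_triangle_mesh("neutral_animatable.ply", mesh)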

Next, we determine where the vertices of the animatable mesh move to in each expression. This enables blending between shapes (see the sketch after the figure below).

Figure: (1) a partial mesh; (2) an expression scan; (3) a partial blendshape expression mesh created by finding correspondences between (1) and (2); (4) a complete blendshape expression mesh created by interpolation.
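With every expression expressed on the shared animatable-mesh topology, blending reduces to per-vertex interpolation. A minimal sketch, assuming each target is a vertex array in the same vertex order as the neutral mesh:

import numpy as np

def blend_expression(neutral, expression_targets, weights):
    # result = neutral + sum_i w_i * (target_i - neutral)
    blended = neutral.astype(np.float64).copy()
    for target, w in zip(expression_targets, weights):
        blended += w * (target - neutral)   # add the weighted per-vertex offset
    return blended

# Illustrative usage with stand-in geometry (4 vertices).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    neutral = rng.random((4, 3))
    smile = neutral + 0.10
    blink = neutral - 0.05
    half_smile = blend_expression(neutral, [smile, blink], [0.5, 0.0])
    print(half_smile)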

Adding Eyes And Teeth

A plaster cast of the upper and lower teeth is made and scanned under a light scanning system to produce a triangular mesh. It is then remeshed to produce an animatable mesh with fewer triangles. Eye geometry is also added into the eye sockets.

Video-Based Facial Animation

Using a single standard video camera and a motion analysis system, all of the actor's performance characteristics are captured. The next step is to track the digital face to the head position in the shot video. Further optimisation is then applied to refine the result. Finally, we light the digital actor according to the environment it will appear in.
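A minimal sketch of the final posing step, assuming the tracking system has already estimated per-frame blendshape weights and rigid head transforms from the video; the tracking and the environment lighting themselves are outside this sketch, and all inputs are illustrative.

import numpy as np

def animate_from_tracking(neutral, targets, frame_weights, frame_transforms):
    # Per-expression vertex offsets from the neutral mesh.
    deltas = np.stack([t - neutral for t in targets])       # (num_targets, V, 3)
    frames = []
    for w, (R, t) in zip(frame_weights, frame_transforms):
        w = np.asarray(w, dtype=np.float64)
        expr = neutral + np.tensordot(w, deltas, axes=1)    # blended expression for this frame
        frames.append(expr @ R.T + t)                       # apply the tracked head pose
    return frames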

FINAL DIGITAL FACE

REFERENCES

Paul Debevec, USC Institute for Creative Technologies: "The Digital Emily Project: Achieving a Photoreal Digital Actor", IEEE journal article.