
Rendering hair with graphics hardware

Tae-Yong Kim

Rhythm & Hues Studios

tae@rhythm.com

Overview

State of the Art in Hair Rendering and Modeling

Issues in Hardware Hair Rendering:
• Line drawing with graphics hardware
• Illumination and hardware shaders
• Shadows

Hair representation

Hair is not a single entity – always consider the hair volume as a whole

• Surface (polygons, NURBS, etc.)
• Line (guide hair, curve, polyline, ...)
• Volumetric representation (implicit surface, 3D texture, tubes, ...)

Representation - Polygons

ATI2004 - Polygonal Model

(from SIGGRAPH Sketch by Thorsten Scheuermann)

Pros:
• Real-time rendering
• Suited for existing artist assets

Cons:
• Limited hairstyles
• Animation is difficult

Representation - Lines

nVidia – Line representation

(from Matthias Wloka’s Eurographics 2004 tutorial)

Pros:
• Dynamic hair
• No restriction on hairstyle
• Advanced rendering possible

Cons:
• No standard modeling technique
• Can be time-consuming to render

Model Generation

Use control tools to generate a bunch of hairs:
• Control curves (guide hairs)
• Cylinders – single level / multi level
• Fluid flow, 3D vector field
• Automated methods emerging

Guide hair
• A small number of guide hairs controls the hair shape
• Interpolation generates the whole set of hair instances to render (see the sketch below)
• Properties like density and length can be controlled with maps
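The interpolation step can be pictured with a small sketch. The following C++ fragment is a minimal illustration, not taken from any particular system: it assumes every guide hair has been resampled to the same vertex count and blends three nearby guides with barycentric weights to produce one rendered strand.

// Hypothetical sketch: generating one rendered hair by blending three nearby
// guide hairs with barycentric weights. Assumes all guides share a vertex count.
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 lerp3(const Vec3& a, const Vec3& b, const Vec3& c,
                  float wa, float wb, float wc) {
    return { wa * a.x + wb * b.x + wc * c.x,
             wa * a.y + wb * b.y + wc * c.y,
             wa * a.z + wb * b.z + wc * c.z };
}

using Strand = std::vector<Vec3>;  // polyline: one vertex per segment endpoint

// Interpolate one rendered strand from guide strands g0, g1, g2
// using barycentric weights (w0, w1, w2), with w0 + w1 + w2 = 1.
Strand interpolateStrand(const Strand& g0, const Strand& g1, const Strand& g2,
                         float w0, float w1, float w2) {
    Strand out(g0.size());
    for (size_t i = 0; i < g0.size(); ++i)
        out[i] = lerp3(g0[i], g1[i], g2[i], w0, w1, w2);
    return out;
}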

Other guide objects (fluid)
• Create vector fields (e.g. with fluid dynamics)
• Hairs are traced through the 3D vector field
• Each hair is instanced through fluid control

Hadap and Magnenat-Thalmann, Eurographics 2001

Other guide objects (cylinders)
• Cylinders to shape hair
• Instance hairs inside each tube
• Multi-layered control of cylinders

Kim and Neumann, SIGGRAPH 2002

Emerging techniques
• Hair modeling from photographs
• Directly recover hair geometry from images

S. Paris, H. Briceno, F. Sillion, SIGGRAPH 2004

Hair as line drawing
• 100–150K hairs per human scalp
• >100 line segments per hair
• Millions of lines (micropolygons) are typical in a production hair render
⇒ Use graphics hardware

Hair segment as a GL line:

void DrawHairLine(const float* ps, const float* pe, const float* cs, const float* ce) {
    glBegin(GL_LINES);
    glColor3fv(cs); glVertex3fv(ps);   /* start point and its color */
    glColor3fv(ce); glVertex3fv(pe);   /* end point and its color   */
    glEnd();
}

Hair as GL lines

void DrawHair(int n, const float (*p)[3], const float (*c)[3]) {
    glBegin(GL_LINE_STRIP);            /* one connected polyline per hair */
    for (int i = 0; i < n; ++i) {
        glColor3fv(c[i]); glVertex3fv(p[i]);
    }
    glEnd();
}

Are we done?

Algorithm 1: for each hair, call DrawHair(p0, ..., pn-1, c0, ..., cn-1)!

Not quite...

The aliasing problem
[Figure: point samples along thin hair lines – the computed sample color differs from the true sample]
Compare this...

Remedies

• Increase the sampling rate (large image size, accumulation buffer)
• Image quality depends on the number of samples, with a slow convergence rate
• The required sampling rate is above 10,000 x 10,000 pixel resolution
• Thinner (smaller) hairs require an even higher sampling rate

Remedies

• Hardware antialiasing of line drawing (GL_LINE_SMOOTH)
• Thickness control with alpha blending (see the sketch below)
[Figure: lines drawn with alpha = 0.09, 0.25, 0.60, 1.0]
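A minimal OpenGL state sketch for the two points above (my illustration, not code from the course; the depth-mask choice assumes the back-to-front drawing discussed next):

// Sketch: GL state for smoothed, alpha-blended hair lines.
// The alpha value stands in for apparent thickness; depth writes are disabled
// because the lines are meant to be drawn back-to-front (see the next slides).
#include <GL/gl.h>

void setupHairLineState(float alpha) {                  // e.g. alpha in 0.09 .. 1.0
    glEnable(GL_LINE_SMOOTH);                           // hardware line antialiasing
    glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // standard "over" blending
    glDepthMask(GL_FALSE);                              // rely on sorting, not z-writes
    glColor4f(1.0f, 1.0f, 1.0f, alpha);                 // alpha controls perceived thickness
}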

Thickness control with alpha blending

Visibility order is important
[Figure: correct vs. wrong blending order]

Hair as visibility ordered lines

Algorithm 2

1. Compute color for each end point of the lines (shading, shadowing)

2. Compute the visibility order

3. Draw lines sorted by the visibility order

A simple visibility ordering scheme

[Figure: line fragments labeled a–v, grouped into depth buckets ({a}, {b}, {c}, {d, e}, {f, g, h}, {i, j}, {k, l, m}, {n, o, p, q}, {r, s, t}, {u, v}) by distance from the viewer]

Drawing order: a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v

A simple visibility ordering scheme

Efficient (700K lines per second on a 700 MHz CPU)

Approximate, but works well for small line fragments

D(p) = (p − c) · d      (distance of fragment point p along the view direction d from the viewpoint c)
i = ⌊ N · (D(p) − Dmin) / (Dmax − Dmin) ⌋,   0 ≤ i < N      (bucket index)
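A compact C++ sketch of this bucket ordering (my own illustration of the formula above; Vec3 and the segment list are minimal stand-ins):

// Approximate visibility ordering by bucket sort along the view direction.
#include <vector>

struct Vec3 { float x, y, z; };
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }

struct Segment { Vec3 mid; int id; };  // midpoint used as the representative point

// Returns segment ids grouped into N depth buckets; drawing the farthest
// bucket first approximates back-to-front order.
std::vector<std::vector<int>> orderSegments(const std::vector<Segment>& segs,
                                            const Vec3& eye, const Vec3& viewDir,
                                            int N, float Dmin, float Dmax) {
    std::vector<std::vector<int>> buckets(N);
    const float scale = N / (Dmax - Dmin);
    for (const Segment& s : segs) {
        float D = dot(sub(s.mid, eye), viewDir);       // D(p) = (p - c) . d
        int i = static_cast<int>((D - Dmin) * scale);  // bucket index
        if (i < 0) i = 0; else if (i >= N) i = N - 1;  // clamp to [0, N)
        buckets[i].push_back(s.id);
    }
    return buckets;  // iterate from the last (farthest) bucket to the first when drawing
}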

Coherence of visibility ordering
• Visibility ordering can be cached and reused (useful for interactive applications)
[Figure: correct ordering vs. cached ordering]

Hair shading model

Describes the amount of reflected/scattered light toward the viewing direction

Kajiya-Kay (1989)

Marschner et al. (2003)

[Figure: hair tangent T, light direction L, and view direction V]

Kajiya-Kay model

Diffuse = Kd · sin(T, L)

[Figure: tangent T, light direction L, view direction V, and half vector H]

Kajiya-Kay model

Specular = Ks · [ (T · L)(T · V) + sin(T, L) · sin(T, V) ]^p

Example Cg program

Diffuse = Kd · sin(T, L) = Kd · √(1 − (T · L)²)
Specular = Ks · [ (T · L)(T · V) + sin(T, L) · sin(T, V) ]^p
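The original slide's Cg listing is not reproduced here; as a stand-in, the following C++ fragment sketches the same Kajiya-Kay terms evaluated per vertex on the CPU (the vector helpers are assumptions, and all input vectors are expected to be normalized):

// CPU-side sketch of the Kajiya-Kay terms above (not the original Cg listing).
// T = hair tangent, L = light direction, V = view direction, all normalized.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// sin of the angle between the tangent and another unit vector: sqrt(1 - cos^2)
static float sinT(const Vec3& T, const Vec3& X) {
    float c = dot(T, X);
    return std::sqrt(std::max(0.0f, 1.0f - c * c));
}

float kajiyaKayDiffuse(const Vec3& T, const Vec3& L, float Kd) {
    return Kd * sinT(T, L);
}

float kajiyaKaySpecular(const Vec3& T, const Vec3& L, const Vec3& V,
                        float Ks, float p) {
    float term = dot(T, L) * dot(T, V) + sinT(T, L) * sinT(T, V);
    return Ks * std::pow(std::max(0.0f, term), p);
}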

Real-time hair shading

• Hair as a set of unstructured lines
• Sort the visibility of hair lines
• Line render with a vertex shader
[Figure: per-vertex records (Pos, Tangent, Color, Shadow)]

Data structure

• Position: comes from modeling / animation
• Tangent: computed from the positions
• (Unshadowed) Color: shaded with either the CPU or a vertex shader
• Shadow: computed with opacity shadow maps
• Sort each line with the visibility order

[Figure: vertex table indexed 1..N with per-vertex Pos, Tangent, Color, Shadow; line table indexed 1..M with vertex-index pairs (V1, V2)]
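A possible in-memory layout for these two tables, written as a minimal C++ sketch (type and field names are illustrative, not from the course material); the rendering sketch near the end of these notes reuses these types:

// Sketch of the vertex table / line table layout implied by the figure.
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

struct HairVertex {
    Vec3  pos;      // from modeling / animation
    Vec3  tangent;  // computed from neighboring positions
    Vec3  color;    // unshadowed shade (CPU or vertex shader)
    float shadow;   // Phi = 1 - tau, fraction of light blocked (opacity shadow maps)
};

struct HairLine {
    uint32_t v1, v2;  // indices into the vertex table; drawn in visibility order
};

struct HairBuffers {
    std::vector<HairVertex> vertices;  // vertex table, size N
    std::vector<HairLine>   lines;     // line table, size M (kept sorted per frame)
};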

A shading model describes the amount of reflected light when hair is fully lit

Most hair receives attenuated light due to self-shadowing

Crucial for depicting the volumetric structure of hair

Self-shadows

[Figure: no shadows vs. with shadows]
[Figure: front lighting vs. back lighting]

Self-shadows

Shadow is a fractional visibility function: how many hairs lie between me and the light? What percentage of the light is blocked?


Opacity shadow maps

[Kim and Neumann EGRW 2001]

Fast approximation of the deep shadowing function. Idea: use graphics hardware as much as possible.

τ(p) = exp(−κ · Ω(l)),   Ω(l) = ∫₀ˡ ρ(l′) dl′   (ρ: hair density along the ray from the light to p)

[Figure: transmittance τ and opacity Ω plotted against depth l from the light; Ω is monotonically increasing]


Opacity Shadow Maps

Opacity Shadow Maps – Algorithm

for (1 ≤ i ≤ N)
    Determine the opacity map's depth Di from the light
for each shadow sample point pj in P (1 ≤ j ≤ M)
    Find i such that Di−1 ≤ Depth(pj) < Di
    Add the point pj to Pi
Clear the alpha buffer and the opacity maps Bprev, Bcurrent
for (1 ≤ i ≤ N)
    Swap Bprev and Bcurrent
    Render the scene, clipping it with Di−1 and Di
    Read back the alpha buffer to Bcurrent
    for each shadow sample point pk in Pi
        Ωprev = sample(Bprev, pk)
        Ωcurrent = sample(Bcurrent, pk)
        Ω = interpolate(Depth(pk), Di−1, Di, Ωprev, Ωcurrent)
        τ(pk) = e^(−κΩ)
        Φ(pk) = 1.0 − τ(pk)
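To make the loop structure concrete, here is a heavily simplified C++ sketch of the algorithm above (a sketch only, not the original implementation): renderSliceToAlpha(), readAlphaBuffer() and sampleMap() are hypothetical placeholders for the OpenGL rendering, read-back and texture-sampling work, and the data layout is illustrative.

// Simplified CPU-side sketch of the opacity shadow map loop above.
#include <cmath>
#include <utility>
#include <vector>

struct SamplePoint { float depth; int id; };   // depth measured from the light
using OpacityMap = std::vector<float>;         // one accumulated alpha value per texel

extern void renderSliceToAlpha(float dNear, float dFar);            // hypothetical: draw hair clipped to [dNear, dFar)
extern OpacityMap readAlphaBuffer();                                // hypothetical: read the alpha buffer back
extern float sampleMap(const OpacityMap& m, const SamplePoint& p);  // hypothetical: filtered lookup at p

void computeShadows(const std::vector<SamplePoint>& P,
                    const std::vector<float>& D,   // slice depths D[0..N] from the light
                    std::vector<float>& shadow,    // output Phi, indexed by SamplePoint::id
                    float kappa = 5.56f) {
    const int N = static_cast<int>(D.size()) - 1;

    // Bucket each shadow sample point into the slice that contains it.
    std::vector<std::vector<SamplePoint>> Pi(N);
    for (const SamplePoint& p : P)
        for (int i = 0; i < N; ++i)
            if (D[i] <= p.depth && p.depth < D[i + 1]) { Pi[i].push_back(p); break; }

    OpacityMap Bprev, Bcurrent;                    // both start cleared (empty)
    for (int i = 0; i < N; ++i) {
        std::swap(Bprev, Bcurrent);
        renderSliceToAlpha(D[i], D[i + 1]);        // alpha accumulates this slice's opacity
        Bcurrent = readAlphaBuffer();
        for (const SamplePoint& p : Pi[i]) {
            float wPrev = Bprev.empty() ? 0.0f : sampleMap(Bprev, p);
            float wCurr = sampleMap(Bcurrent, p);
            float t     = (p.depth - D[i]) / (D[i + 1] - D[i]);
            float omega = (1.0f - t) * wPrev + t * wCurr;  // interpolate between the two maps
            float tau   = std::exp(-kappa * omega);        // transmittance
            shadow[p.id] = 1.0f - tau;                     // Phi = 1 - tau
        }
    }
}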

How many opacity maps?
• Uniform slicing
• 1D BSP
• Nonlinear spacing


Preparing Shadow Samples
• Storing all the opacity maps incurs high memory usage
• Sort the shadow computation points based on the maps' depths (buckets P0, ..., Pi, ..., PN−1)
• As soon as the current map is rendered, compute shadows for the corresponding sample points


Clipping and Culling
• The alpha buffer is accumulated each time the scene is drawn
• The scene is clipped with Di−1 and Di
• Speedup factor of 1.5 to 2.0
• In very complex scenes, preorder the scene geometry so that each scene object is rendered only for a small number of maps
• Further speedup and reduced memory requirements for the scene graph


Interpolation between adjacent maps:

Ω(pk) = (1 − t) · Ωprev + t · Ωcurrent,   t = (Depth(pk) − Di−1) / (Di − Di−1)


Exponential Attenuation

τ(p) = exp(−κ · Ω)

• Quantization in the alpha buffer limits Ω to 1.0 at maximum
• κ scales the exponential function s.t. an Ω value of 1.0 represents complete opaqueness (τ = 0)
• κ = 5.56 for an 8-bit alpha buffer (e^−κ = 2^−8)
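Where this constant comes from (a quick derivation, not on the original slide): requiring that Ω = 1.0 maps to less than one quantization step of an 8-bit buffer gives

e^(−κ) ≤ 2^(−8)   ⟹   κ ≥ 8 ln 2 ≈ 5.545

so any κ just above 5.545, such as the 5.56 used here, makes full opacity indistinguishable from complete opaqueness after quantization.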

Opacity Shadow Maps
[Figure: results with an increasing number of maps – no shadow; N = 7 (5 s); N = 15 (7 s); N = 30 (10 s); N = 60 (16 s); N = 100 (25 s); N = 200 (46 s); N = 500 (109 s)]

Hair rendering with lines in graphics hardware

Setup pass:
• Compute the visibility order
• Compute shadow values

Drawing pass (see the sketch below):
• For each line segment Li, in visibility order:
    Set the thickness (alpha value)
    Draw Li with a programmable shader
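A minimal sketch of how the two passes might be wired together, reusing the HairBuffers / HairVertex / HairLine types from the data-structure sketch earlier (the three helpers are placeholders for the steps described above, not real library calls):

// Sketch of the setup/drawing passes; assumes the HairBuffers layout above.
#include <GL/gl.h>

extern void sortLinesByVisibility(HairBuffers& hair);     // e.g. the depth-bucket ordering
extern void computeOpacityShadowMaps(HairBuffers& hair);  // fills HairVertex::shadow (Phi)
extern void bindHairShader();                             // enables the vertex/fragment program

void renderHair(HairBuffers& hair, float alpha /* thickness control */) {
    // Setup pass
    sortLinesByVisibility(hair);
    computeOpacityShadowMaps(hair);

    // Drawing pass: back-to-front, alpha-blended, shadow-attenuated lines
    bindHairShader();
    glEnable(GL_LINE_SMOOTH);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glBegin(GL_LINES);
    for (const HairLine& ln : hair.lines) {                // already in visibility order
        const HairVertex& a = hair.vertices[ln.v1];
        const HairVertex& b = hair.vertices[ln.v2];
        float ta = 1.0f - a.shadow, tb = 1.0f - b.shadow;  // transmittance = 1 - Phi
        glColor4f(a.color.x * ta, a.color.y * ta, a.color.z * ta, alpha);
        glVertex3f(a.pos.x, a.pos.y, a.pos.z);
        glColor4f(b.color.x * tb, b.color.y * tb, b.color.z * tb, alpha);
        glVertex3f(b.pos.x, b.pos.y, b.pos.z);
    }
    glEnd();
}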

Opacity Shadow Maps – Recent Extensions
• Whole set of opacity maps stored in a 3D texture (nVidia’s demo; Koster et al., Real-Time Rendering of Human Hair using Programmable Graphics Hardware, CGI 2004)
• Mertens et al., A Self-Shadow Algorithm for Dynamic Hair using Density Clustering, EGSR 2004

Hair rendering with lines in graphics hardware
• Antialiasing
• Shading through vertex/fragment shaders
• Shadows with opacity maps

Questions

Recommended