Lecture 3 : Direct Volume Rendering


Bong-Soo Sohn

School of Computer Science and Engineering

Chung-Ang University

Acknowledgement: Based on Han-Wei Shen's lecture notes

Direct Volume Rendering

• Direct: no conversion to surface geometry

• Four methods
– Ray-Casting
– Splatting
– 3D Texture-Based Method
– CUDA

Data Representation

• 3D volume data are represented by a finite number of cross-sectional slices (hence a 3D raster)

• Each volume element (voxel) stores a data value (if only a single bit is used, it is a binary data set; normally each voxel holds a gray value of 8 to 16 bits)

N x 2D arrays = 3D array
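For concreteness, a minimal sketch (the struct name and index convention are illustrative, not from the slides) of how a stack of 2D slices is addressed as one flat 3D array:

```
#include <cstdint>
#include <vector>

// Hypothetical layout: nz slices of ny x nx voxels stored as one flat array,
// x varying fastest, z (the slice index) slowest.
struct Volume {
    int nx, ny, nz;                // columns, rows, slices
    std::vector<uint8_t> data;     // e.g. one 8-bit gray value per voxel

    uint8_t at(int x, int y, int z) const {
        return data[(size_t)z * ny * nx + (size_t)y * nx + x];
    }
};
```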

Data Representation

What is a Voxel? – Two definitions

A voxel is a cubic cell which has a single value covering the entire cubic region

A voxel is a data point at a corner of the cubic cell; the value of a point inside the cell is determined by interpolation
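Under the second (corner-data) definition, a point inside a cell is reconstructed from the eight surrounding corner values. A minimal trilinear-interpolation sketch, reusing the hypothetical Volume struct above:

```
#include <cmath>

// Trilinear interpolation at a continuous voxel-space position (px, py, pz).
// Assumes the position lies at least one voxel inside the volume bounds.
float sampleTrilinear(const Volume& v, float px, float py, float pz)
{
    int x = (int)std::floor(px), y = (int)std::floor(py), z = (int)std::floor(pz);
    float fx = px - x, fy = py - y, fz = pz - z;   // fractional offsets in [0,1)

    // Interpolate along x on each of the four cell edges...
    float c00 = v.at(x, y,     z    ) * (1 - fx) + v.at(x + 1, y,     z    ) * fx;
    float c10 = v.at(x, y + 1, z    ) * (1 - fx) + v.at(x + 1, y + 1, z    ) * fx;
    float c01 = v.at(x, y,     z + 1) * (1 - fx) + v.at(x + 1, y,     z + 1) * fx;
    float c11 = v.at(x, y + 1, z + 1) * (1 - fx) + v.at(x + 1, y + 1, z + 1) * fx;
    // ...then along y, then along z.
    float c0 = c00 * (1 - fy) + c10 * fy;
    float c1 = c01 * (1 - fy) + c11 * fy;
    return c0 * (1 - fz) + c1 * fz;
}
```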

Basic Idea

Based on the idea of ray tracing

• Trace a ray from the eye through each pixel into object space

• Compute color value along the ray

• Assign the value to the pixel

Transfer Function

• Maps voxel data values to optical properties

• Color/opacity map
• Emphasize or classify features of interest in the data
• Piecewise linear functions, look-up tables, 1D, 2D
• GPU – simple shader functions, texture look-up tables
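A minimal sketch of a 1D transfer function as a 256-entry look-up table; the particular ramp here is a toy choice for illustration, not a table from the lecture:

```
#include <cstdint>

struct RGBA { float r, g, b, a; };

// 256-entry look-up table mapping an 8-bit density to color and opacity.
RGBA transferFunction[256];

void buildTransferFunction()
{
    for (int i = 0; i < 256; ++i) {
        float t = i / 255.0f;
        // Toy classification: low densities fully transparent, higher
        // densities increasingly opaque and warm-colored.
        transferFunction[i] = { t, t * t, 0.2f, (t < 0.3f) ? 0.0f : t };
    }
}

RGBA classify(uint8_t density) { return transferFunction[density]; }
```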

Viewing

Ray Casting

• Where to position the volume and image plane
• What is a ‘ray’
• How to march a ray

Viewing

[Figure: world axes (x, y, z) with origin (0,0,0); image plane with axes u, v, eye E, plane center S, and volume base B]

Reference (axis-aligned) configuration:
B = [0,0,0], S0 = [0,0,−D], u0 = [1,0,0], v0 = [0,1,0]

Now, with R the rotation matrix and g the viewing direction:
S = B − D·g, u = [1,0,0]·R, v = [0,1,0]·R
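A sketch of turning this setup into one ray per pixel. The Vec3 helpers, the pixel-sized spacing of u and v, and the parallel-projection assumption are mine, not the slides':

```
struct Vec3 { float x, y, z; };

Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

// Ray through pixel (i, j) of a W x H image plane centered at S and spanned
// by the rotated axes u and v; g is the viewing direction.
void pixelRay(Vec3 S, Vec3 u, Vec3 v, Vec3 g, int i, int j, int W, int H,
              Vec3* origin, Vec3* dir)
{
    float du = i - W / 2.0f;   // pixel offset from the image-plane center
    float dv = j - H / 2.0f;
    *origin = add(S, add(scale(u, du), scale(v, dv)));
    *dir = g;                  // parallel projection: every ray follows g
}
```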

Ray Casting

• Stepping through the volume: a ray is cast into the volume, sampling the volume at certain intervals

• The sampling intervals are usually equi-distant, but don’t have to be (e.g. importance sampling)

• At each sampling location, a sample is interpolated / reconstructed from the grid voxels

• Popular filters are: nearest neighbor (box), trilinear (tent), Gaussian, cubic spline

• Along the ray - what are we looking for?

Basic Idea of Ray-casting Pipeline

- Data are defined at the corners of each cell (voxel)

- The data value inside the voxel is determined using interpolation (e.g. tri-linear)

- Composite colors and opacities along the ray path

- Can use other ray-traversal schemes as well

[Figure: samples c1, c2, c3 reconstructed and composited along the ray path]

Ray Traversal Schemes

[Figure: intensity-vs-depth profiles along a ray for the four schemes: First, Average, Max, Accumulate]

Ray Traversal - First


• First: extracts iso-surfaces (again!); done by Tuy & Tuy ’84

Ray Traversal - Average


• Average: produces basically an X-ray picture

Ray Traversal - MIP


• Max: Maximum Intensity Projection, used for Magnetic Resonance Angiography
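A minimal sketch of the Max scheme: keep the largest interpolated value seen along the ray (reusing the hypothetical Vec3 and sampleTrilinear helpers from the earlier sketches):

```
// Maximum Intensity Projection along one ray: n uniform samples starting at
// 'origin', stepping 'dt' along 'dir'; assumes the samples stay in bounds.
float traverseMIP(const Volume& vol, Vec3 origin, Vec3 dir, float dt, int n)
{
    float maxVal = 0.0f;
    for (int k = 0; k < n; ++k) {
        Vec3 p = add(origin, scale(dir, k * dt));
        float s = sampleTrilinear(vol, p.x, p.y, p.z);
        if (s > maxVal) maxVal = s;    // keep the brightest sample seen
    }
    return maxVal;
}
```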

Ray Traversal - Accumulate


• Accumulate opacity while compositing colors: makes transparent layers visible! Levoy ’88

Raycasting

[Figure sequence: a ray steps through the volume; at each sample location an interpolation kernel reconstructs the object’s (color, opacity), which are composited along the ray while the accumulated opacity approaches 1.0]

Volumetric compositing (front-to-back):

color: C = C_s · α_s · (1 − α) + C
opacity: α = α_s · (1 − α) + α
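Spelled out in code, a minimal sketch of this per-sample update (reusing the hypothetical RGBA struct from the transfer-function sketch):

```
// One front-to-back compositing step: fold the classified sample (C_s, α_s)
// into the accumulated color C and opacity α, exactly as in the equations.
void compositeStep(RGBA sample, RGBA* acc)
{
    float w = sample.a * (1.0f - acc->a);  // α_s · (1 − α)
    acc->r += sample.r * w;                // C = C_s · α_s · (1 − α) + C
    acc->g += sample.g * w;
    acc->b += sample.b * w;
    acc->a += w;                           // α = α_s · (1 − α) + α
}
```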

Volume Ray Marching

1. Raycast – once per pixel

2. Sample – uniform intervals along ray

3. Interpolate – trilinear interpolate, apply transfer function

4. Accumulate – integrate optical properties (see the CUDA sketch below)
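A sketch of these four steps as one CUDA kernel, one thread per pixel. For brevity it uses a nearest-neighbor filter and a toy transfer function rather than the trilinear/look-up-table versions above; all names here are illustrative:

```
// Nearest-neighbor (box-filter) lookup, kept simple for the sketch; a real
// renderer would sample a 3D texture and get trilinear filtering for free.
__device__ float sampleVolume(const unsigned char* vol, int nx, int ny, int nz,
                              float3 p)
{
    int x = (int)p.x, y = (int)p.y, z = (int)p.z;
    if (x < 0 || y < 0 || z < 0 || x >= nx || y >= ny || z >= nz) return 0.0f;
    return vol[(size_t)z * ny * nx + (size_t)y * nx + x] / 255.0f;
}

// Toy transfer function; a real one would index a look-up table.
__device__ float4 classifyDevice(float s)
{
    return make_float4(s, s, s, s * 0.05f);
}

__global__ void rayMarchKernel(const unsigned char* vol, int nx, int ny, int nz,
                               float3 S, float3 u, float3 v, float3 g,
                               float dt, int nSteps, int W, int H, float4* image)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;      // 1. Raycast:
    int j = blockIdx.y * blockDim.y + threadIdx.y;      //    one ray per pixel
    if (i >= W || j >= H) return;

    float du = i - W / 2.0f, dv = j - H / 2.0f;
    float3 o = make_float3(S.x + du * u.x + dv * v.x,
                           S.y + du * u.y + dv * v.y,
                           S.z + du * u.z + dv * v.z);
    float4 acc = make_float4(0, 0, 0, 0);

    for (int k = 0; k < nSteps; ++k) {                  // 2. Sample: uniform steps
        float3 p = make_float3(o.x + k * dt * g.x,
                               o.y + k * dt * g.y,
                               o.z + k * dt * g.z);
        float s = sampleVolume(vol, nx, ny, nz, p);     // 3. Interpolate +
        float4 c = classifyDevice(s);                   //    transfer function
        float w = c.w * (1.0f - acc.w);                 // 4. Accumulate
        acc.x += c.x * w; acc.y += c.y * w;             //    (front-to-back)
        acc.z += c.z * w; acc.w += w;
        if (acc.w > 0.99f) break;                       // early ray termination
    }
    image[(size_t)j * W + i] = acc;
}
```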

Shading and Classification

- Shading: compute a color (lighting) for each data point in the volume
- Classification: compute color and opacity for each data point in the volume

- Done by table look-up (transfer function):

f(x_i) → C(x_i), α(x_i)

Shading (Local Illumination)

• Blinn-Phong Shading Model

• Requires surface normal vector
– What’s the normal vector of a voxel? The gradient
– Central differences between neighboring voxels

I = k_a·I_a + k_d·I_L·(n̂·l̂) + k_s·I_L·(n̂·ĥ)^Ns

Resulting = Ambient + Diffuse + Specular

∇I = ( (I_right − I_left)/2, (I_top − I_bottom)/2, (I_front − I_back)/2 )   (central differences, unit voxel spacing)
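The same central differences in code, reusing the hypothetical Volume and Vec3 types from the earlier sketches:

```
// Central-differences gradient at an interior voxel (x, y, z), assuming unit
// voxel spacing; the normalized gradient serves as the shading normal.
Vec3 gradient(const Volume& v, int x, int y, int z)
{
    Vec3 g;
    g.x = (v.at(x + 1, y, z) - v.at(x - 1, y, z)) / 2.0f;  // right − left
    g.y = (v.at(x, y + 1, z) - v.at(x, y - 1, z)) / 2.0f;  // top − bottom
    g.z = (v.at(x, y, z + 1) - v.at(x, y, z - 1)) / 2.0f;  // front − back
    return g;
}
```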

Shading (Local Illumination)

• Compute on-the-fly within fragment shader

– Requires 6 texture fetches per calculation

• Precalculate on host and store in voxel data
– Requires 4x texture memory
– Pack into 3D RGBA texture to send to GPU
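A sketch of the precompute-and-pack option; the [−1,1] → [0,255] remapping of gradient components is a common convention but an assumption here, as are the helper names reused from the earlier sketches:

```
#include <cmath>

// Precompute gradients on the host and pack gradient (r, g, b) plus density
// (a) into one RGBA voxel, ready to upload as a 3D RGBA texture.
std::vector<uint8_t> packGradientVolume(const Volume& v)
{
    std::vector<uint8_t> rgba((size_t)v.nx * v.ny * v.nz * 4);
    for (int z = 1; z < v.nz - 1; ++z)
        for (int y = 1; y < v.ny - 1; ++y)
            for (int x = 1; x < v.nx - 1; ++x) {
                Vec3 g = gradient(v, x, y, z);
                float len = std::sqrt(g.x*g.x + g.y*g.y + g.z*g.z) + 1e-6f;
                size_t i = ((size_t)z * v.ny * v.nx + (size_t)y * v.nx + x) * 4;
                rgba[i + 0] = (uint8_t)((g.x / len * 0.5f + 0.5f) * 255.0f);
                rgba[i + 1] = (uint8_t)((g.y / len * 0.5f + 0.5f) * 255.0f);
                rgba[i + 2] = (uint8_t)((g.z / len * 0.5f + 0.5f) * 255.0f);
                rgba[i + 3] = v.at(x, y, z);   // keep the density in alpha
            }
    return rgba;
}
```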

Shading (Local Illumination)

• Improve perception of depth
• Amplify surface structure

Composition (alpha blending)

Texture Based Volume Rendering

3D Texture Based Volume Rendering

• Best-known practical volume rendering method for rectilinear grid datasets

• Real-time rendering is possible

Interpolation of Samples

• Volume stored as 3D texture
• Viewport-aligned slices
• Blended back-to-front
• Trilinear interpolation by hardware

Classification

• Density values from texture map
• Classification via lookup table
• Takes place in texture mapping stage

Shading is possible

• Principle
– Precompute gradient plus density in texture
– Shade first intensity (keep density!)
– Classification via 2D pixel texture

Texture Mapping

2D image + 2D polygon → texture-mapped polygon

Texture Mapping for Volume Rendering

Consider ray casting …

[Figure: top view of the volume (x, y, z axes) with rays crossing the slices]

Texture based volume rendering


• Render every xz slice in the volume as a texture-mapped polygon
• The proxy polygon will sample the volume data
• Per-fragment RGBA (color and opacity) as classification results
• The polygons are blended from back to front

Use proxy geometry for sampling
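A minimal legacy-OpenGL sketch of drawing the proxy polygons back-to-front. The function name, slice count, and unit-cube geometry are illustrative; it assumes a GL context with a 3D texture already created and bound (GL_TEXTURE_3D needs OpenGL 1.2+) and a viewer on the +z side looking down −z:

```
#include <GL/gl.h>

// Draw numSlices axis-aligned proxy quads, farthest slice first, so the
// fixed-function blend implements back-to-front "over" compositing.
void drawSlices(int numSlices)
{
    glEnable(GL_TEXTURE_3D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    for (int i = 0; i < numSlices; ++i) {
        float t = (i + 0.5f) / numSlices;   // texture-space depth of this slice
        float z = -1.0f + 2.0f * t;         // world z: far (−1) to near (+1)
        glBegin(GL_QUADS);                  // the proxy polygon
        glTexCoord3f(0, 0, t); glVertex3f(-1, -1, z);
        glTexCoord3f(1, 0, t); glVertex3f( 1, -1, z);
        glTexCoord3f(1, 1, t); glVertex3f( 1,  1, z);
        glTexCoord3f(0, 1, t); glVertex3f(-1,  1, z);
        glEnd();
    }
}
```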


Changing Viewing Direction

What if we change the viewing position?

That is okay; we just change the eye position (or rotate the polygons and re-render),

Until …


Solution

Use image-space axis-aligned slicing planes: the slicing planes are always parallel to the view plane


Shading

• Use per-fragment shader
– Store the pre-computed gradient into an RGBA texture
– Light 1 direction as constant color 0
– Light 1 color as primary color
– Light 2 direction as constant color 1
– Light 2 color as secondary color

CUDA Volume Rendering

• Utilize massively parallel computing resources

• Assign each CUDA thread a single ray

• CUDA – suitable for lots of independent work (e.g. processing pixels or voxels)
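A sketch of launching the rayMarchKernel from the Volume Ray Marching section with exactly this one-thread-per-ray mapping; d_volume and d_image are assumed device allocations, and S, u, v, g are the view setup from the Viewing slide:

```
// One 16x16 thread block per image tile, one thread per pixel/ray.
dim3 block(16, 16);
dim3 grid((W + block.x - 1) / block.x, (H + block.y - 1) / block.y);
rayMarchKernel<<<grid, block>>>(d_volume, nx, ny, nz, S, u, v, g,
                                dt, nSteps, W, H, d_image);
cudaDeviceSynchronize();   // wait until every ray has finished
```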
