Digital Photography and Videos
Jingyi Yu, CISC 849
Department of Computer and Information Science

Light Fields, Lumigraph, and Image-Based Rendering
Pinhole Camera
• A camera captures a set of rays.
• A pinhole camera captures a set of rays passing through a common 3D point (the ray origin o), recorded on the image plane.
Camera Array and Light Fields
• Digital cameras are cheap
  – Low image quality
  – No sophisticated aperture (depth of field) effects
• Build up an array of cameras
• It captures an array of images, i.e., an array of sets of rays
Why Is It Useful?
• If we want to render a new image:
  – We can query each new ray in the light field ray database
  – Interpolate each ray (we will see how)
  – No 3D geometry is needed
Key Problem
• How do we represent a ray?
• What coordinate system should we use?
• How do we represent a line in 3D space?
  – Two points on the line (3D + 3D = 6D). Problem?
  – A point and a direction (3D + 2D = 5D). Problem?
  – Any better parameterization?
Two-Plane Parameterization
• Each ray is parameterized by its two intersection points with two fixed planes.
• For simplicity, assume the two planes are z = 0 (the st plane) and z = 1 (the uv plane).
• Alternatively, we can view (s, t) as the camera index and (u, v) as the pixel index.
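The two-plane mapping can be sketched in a few lines. This is a minimal illustration (the helper name `ray_to_stuv` is ours), assuming the plane placement z = 0 and z = 1 from the slide:

```python
import numpy as np

def ray_to_stuv(origin, direction):
    """Map a 3D ray to its two-plane coordinates (s, t, u, v).

    (s, t) is where the ray crosses the st plane (z = 0);
    (u, v) is where it crosses the uv plane (z = 1).
    Assumes the ray is not parallel to the planes (direction[2] != 0).
    """
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    # Ray parameter values at which z = 0 and z = 1 are reached.
    a0 = (0.0 - o[2]) / d[2]
    a1 = (1.0 - o[2]) / d[2]
    s, t = o[:2] + a0 * d[:2]
    u, v = o[:2] + a1 * d[:2]
    return s, t, u, v
```

For example, a ray starting at (0, 0, -1) with direction (1, 0, 1) crosses z = 0 at (1, 0) and z = 1 at (2, 0), giving (s, t, u, v) = (1, 0, 2, 0).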
Ray Representation
• For the moment, let's consider just a 2D slice of this 4D ray space.
• Rendering new pictures = interpolating rays.
• How do we represent rays? 6D? 5D? 4D?
Light Field Rendering
• For each desired ray:
  – Compute its intersections with the (u, v) and (s, t) planes
  – Blend the closest rays
  – What does "closest" mean?
[Figure: the desired ray hits the planes between stored samples (u1, v1), (u2, v2) and (s1, t1), (s2, t2).]
Light Field Rendering
• Linear interpolation: given samples y1 at x1 and y2 at x2, a query point x in between is reconstructed as
  y = (1 − α) y1 + α y2,  where α = (x − x1) / (x2 − x1)
• What happens in higher dimensions? Bilinear, tri-linear, quadra-linear interpolation.
[Figure: the desired ray falls between samples (u1, v1), (u2, v2) and (s1, t1), (s2, t2).]
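Extending the 1D formula above to all four axes gives quadra-linear interpolation: a weighted blend of the 16 samples surrounding the fractional index. A minimal sketch, assuming the light field is stored as a dense NumPy array indexed `LF[s, t, u, v]`:

```python
import numpy as np

def quadralinear(LF, s, t, u, v):
    """Quadra-linear lookup into a discrete 4D light field LF[s, t, u, v].

    Blends the 2^4 = 16 samples at the corners of the 4D cell containing
    the fractional coordinate, i.e. linear interpolation along each axis.
    """
    coords = (s, t, u, v)
    lo = [int(np.floor(c)) for c in coords]       # lower grid index per axis
    fr = [c - l for c, l in zip(coords, lo)]      # fractional part per axis
    out = 0.0
    for corner in range(16):                      # enumerate the 16 corners
        idx, w = [], 1.0
        for axis in range(4):
            bit = (corner >> axis) & 1
            idx.append(lo[axis] + bit)
            w *= fr[axis] if bit else (1.0 - fr[axis])
        out += w * LF[tuple(idx)]
    return out
```

Querying the exact center of a cell averages all 16 corner samples equally, with weight 1/16 each.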
Quadralinear Interpolation
Serious aliasing artifacts.
Ray Structure
All the rays in a light field passing through a 3D geometry point.
[Figure: a 2D light field and its 2D EPI (epipolar plane image), with samples (v1, t1), (v2, t2), (v3, t3).]
Why Aliasing?
• We study aliasing in the spatial domain.
• Next class, we will study the frequency domain (take a quick review of the Fourier transform if you can).
Better Reconstruction Methods
• Assume some geometric proxy
• Dynamically Reparameterized Light Fields
Focal Surface Parameterization
Focal Surface Parameterization – 2
• Intersect each new ray with the focal plane
• Back-trace the ray into the data cameras
• Interpolate the corresponding rays
• But this requires a ray-plane intersection
• Can we avoid that?
Using the Focal Plane
• Relative parameterization is hardware friendly
  – [s', t'] corresponds to a particular texture
  – [u', v'] corresponds to the texture coordinate
  – The focal plane can be encoded as a disparity across the light field
[Figure: camera plane with samples (s1, t1), (s2, t2); image plane with samples (u1, v1), (u2, v2); and the focal plane.]
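In flatland (a 2D light field with one camera coordinate s and one pixel coordinate u), encoding the focal plane as a disparity reduces to a one-line coordinate shift, avoiding any explicit ray-plane intersection. An illustrative sketch (the helper name is ours):

```python
def resample_coordinate(u, s, s_cam, disparity):
    """Pixel coordinate at which a data camera at s_cam sees the point
    that the desired ray (s, u) hits on the focal plane.

    The disparity encodes the focal plane's depth: disparity 0 focuses
    at infinity; larger values focus on closer surfaces.
    """
    return u + disparity * (s_cam - s)
```

For example, with disparity 0.5, a desired ray at (s, u) = (0, 3) maps to pixel 4 in the camera at s_cam = 2; with disparity 0 every camera samples the same pixel 3.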
Results
Quadralinear interpolation vs. the focal plane at the monitor.
(This is the first part of Assignment 1.)
Aperture Filtering
• Quadralinear interpolation is simple and easy to implement
• But it causes aliasing (as we will see later in the frequency analysis)
• Can we blend more than 4 neighboring rays? 8? 16? 32? Or more?
• Aperture filtering
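The idea of blending many rays can be sketched in flatland: average the samples from every data camera within a synthetic aperture radius, reprojecting each through the focal plane via the disparity. A hedged sketch (nearest-pixel lookup keeps it short; the function name is ours):

```python
import numpy as np

def aperture_filter(LF, s, u, disparity, radius):
    """Flatland aperture filtering over a 2D light field LF[s_index, u_index].

    Instead of blending only the nearest neighbors, average the samples
    from every data camera within `radius` of the desired camera position
    s, shifting each lookup by the disparity (the focal plane encoding).
    """
    total, count = 0.0, 0
    for s_cam in range(LF.shape[0]):
        if abs(s_cam - s) > radius:
            continue                                  # outside the aperture
        u_cam = int(round(u + disparity * (s_cam - s)))
        if 0 <= u_cam < LF.shape[1]:
            total += LF[s_cam, u_cam]
            count += 1
    return total / count if count else 0.0
```

A larger radius blends more cameras: points on the focal plane stay sharp while off-plane points blur, mimicking a wide physical aperture.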
Variable Aperture

Changing the shape and size of the aperture filter

Filtering – 2

Rendering with Memory-Coherent Ray Tracing

Small aperture size

Large aperture size

Using a very large aperture

Variable Focus

Close focal surface

Distant focal surface
Issues with Light Fields
• It only captures rays within a region (volume)
• It is huge (and thus needs compression)
• If implemented in software, it requires many memory fetches
  – Solution 1: multitexturing
  – Solution 2: use graphics hardware
Light Field Coverage

Multi-Slab Light Fields
Light Field Slabs
• The naïve approach misses rays.
• Solution: extend the light fields with multiple slabs!
Hardware-Based Implementation
• A light field is an array of images
  – It can be stored very efficiently as textures
  – 32x32x128x128 samples with DXT1 compression
  – Each light field is stored in a 1024x1024x16 volume
  – 6 slabs are stored in a 1024x1024x128 volume => 64 MB
• Avoid branching by calculating light field slices in the volume
  – Choose the largest direction component for each ray
  – Avoid unnecessary texture fetches
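The 64 MB figure follows directly from DXT1's rate of 4 bits per texel. A quick check of the arithmetic (the helper name is ours):

```python
def slab_volume_bytes(width, height, depth, bits_per_texel=4):
    """Size in bytes of a compressed texture volume.

    DXT1 block compression stores 4 bits per texel, so a
    1024x1024x128 volume occupies 64 MB.
    """
    return width * height * depth * bits_per_texel // 8
```

1024 * 1024 * 128 texels at 4 bits each is 67,108,864 bytes, i.e. exactly 64 MB.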
GPU-Based Implementation
• Two-pass PS2.0 algorithm
  – First pass: render the geometry, then calculate and write the (s, t, u, v) coordinates into the back buffer
  – Second pass:
    • Use (s, t, u, v) to query the light field from the volume texture
    • Four bilinear texture fetches, then interpolate the results in the shader
• Can adopt current technology
Light Field Compression
• Based on vector quantization
• Preprocessing: build a representative codebook of 4D tiles
• Each tile in the light field is represented by a codebook index
• Example: 2x2x2x2 tiles with a 16-bit index => 24:1 compression
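The 24:1 figure checks out: a 2x2x2x2 tile holds 16 RGB pixels at 24 bits each, and the whole tile is replaced by one 16-bit codebook index. A tiny sketch of the arithmetic (codebook storage itself is ignored, as is common when quoting VQ ratios):

```python
def vq_compression_ratio(tile_shape, bits_per_pixel, index_bits):
    """Ratio of raw tile bits to index bits for VQ light field coding."""
    pixels = 1
    for n in tile_shape:
        pixels *= n                      # pixels per 4D tile
    return pixels * bits_per_pixel / index_bits
```

16 pixels * 24 bits = 384 bits per tile, versus a 16-bit index: 384 / 16 = 24.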
Light Field Compression by Vector Quantization
• Advantages:
  – Fast at runtime
  – Reasonable compression
• Disadvantages:
  – Preprocessing is slow
  – The codebook size is selected manually
  – It does not take advantage of structure
3D Imaging Systems
• The light field captures all (or almost all) necessary rays
• If we project the rays onto some image plane, we can do 3D imaging
• Autostereoscopic displays
• 3D TV
Autostereoscopic Light Field
3D TV
• 16-camera horizontal array
• Autostereoscopic display
• Efficient coding and transmission
Real-Time Rendering
The Lumigraph
• Capture: move the camera by hand
• Camera intrinsics are assumed calibrated
• Camera pose is recovered from markers
Lumigraph Postprocessing
• Obtain a rough geometric model
  – Chroma keying (blue screen) to extract silhouettes
  – Octree-based space carving
• Resample the images to the two-plane parameterization
Lumigraph Rendering
• Use rough depth information to improve rendering quality
[Figure: rendering without using geometry vs. using approximate geometry.]
Unstructured Lumigraph Rendering
• A further enhancement of lumigraphs: do not use the two-plane parameterization
• Store the original pictures: no resampling
• Hand-held camera, moved around an environment
Unstructured Lumigraph Rendering
• To reconstruct views, assign a penalty to each original ray
  – Distance to the desired ray, using the approximate geometry
  – Resolution
  – Feathering near the edges of the image
• Construct a "camera blending field"
• Render using texture mapping
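Turning the per-camera penalties above into a blending field can be sketched as follows. This is a simplified sketch in the spirit of unstructured lumigraph rendering, not the paper's exact scheme (which combines angle, resolution, and field-of-view penalties per pixel):

```python
import numpy as np

def blending_weights(penalties, k=4):
    """Convert per-camera penalties (smaller = better) into weights.

    The k-th best camera receives weight zero, so weights vary smoothly
    as cameras enter and leave the best-k set while the view moves.
    """
    p = np.asarray(penalties, dtype=float)
    thresh = np.sort(p)[min(k, len(p)) - 1]   # penalty of the k-th best camera
    w = np.maximum(0.0, 1.0 - p / thresh)     # lowest penalty -> largest weight
    total = w.sum()
    return w / total if total > 0 else w
```

Evaluating these weights over the output image yields the camera blending field, which can then drive texture-mapped compositing of the original pictures.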
Unstructured Lumigraph Rendering
[Figure: the camera blending field and the resulting rendering.]
Other Light Field Acquisition Devices
• Spherical motion of the camera around an object
• Samples the space of directions uniformly
• A second arm moves the light source to measure the BRDF
Other Light Field Acquisition Devices
• Acquires an entire light field at once
• Video rates
• Integrated MPEG2 compression for each camera
(Bennett Wilburn, Michal Smulski, Mark Horowitz)
Assignment 1
• Implement a light field renderer
  – Read in an array of images (I will provide the link to the images on my computer)
  – Store the images as an array of arrays of RGB pixels
  – Provide a user interface to control the camera motion (forward, backward, left, right)
  – Implement the quadralinear interpolation
  – Add a control for the focal plane (disparity)
  – Add a control for the aperture
• Due March 12