
Light Probes, Panoramas and Image Warping
TNM078 – Image Based Rendering

Jonas Unger∗, Stefan Gustavson

2007, v3.0

1 Introduction

In image based rendering and image processing it is important to represent the images used in efficient ways. The process of resampling between the different representations, warping, is also important.

This practical is about constructing angular maps (light probe images) and latitude-longitude maps (panoramas) from mirror sphere images. We also do some simple experiments with image based lighting. To help you, there are a number of documents under suggested reading on the practical's home page.

For this practical you will need:

• Matlab scripts used during the practical

• A mirror sphere image

• Images for the image based lighting

The files needed can be downloaded from the practical's home page. There you can also find an extra, optional exercise about constructing good angular maps (light probe images) by removing the reflection of the camera.

2 Image Warping

Image warping is the process of resampling an image from one representation to another using non-linear transformations, e.g. going from a mirror sphere image to an angular map representation. The important thing to understand is that warping transforms image coordinates, i.e. it determines which coordinate in one image corresponds to a given coordinate in another image with a different mapping.

To resample an image into another representation you should use a transformation that maps from a pixel location in the output image to the corresponding coordinate in the original image, i.e. the inverse of what one might intuitively think.

[email protected]


Figure 1: a) The transform T maps from the original image to the warped output image. Going through the original image and transforming each pixel to a new position in the output image will most likely leave missing pixels in the warped image. b) Using the inverse transformation T^-1 to find, for each pixel in the warped output image, where to sample in the original image leads to a result without holes.

This is because, see Figure 1, when we do the resampling we want to go through every pixel in the output image and determine from where in the input image to pick values. If we instead, for every pixel location in the original image, calculate the coordinate(s) it ends up at, we will most likely end up with missing pixel values in the transformed image.

We define T to be a function transforming a pixel coordinate (s, t) in the input image to a new location (u, v) in the warped output image:

(u, v) = T(s, t)    (1)

Assuming that T is invertible, we can then find the correct mapping from pixel coordinates in the warped output image to the original pixel coordinates:

(s, t) = T^-1(u, v)    (2)

In the warping process you will most likely not find that the pixel locations in the input image are aligned with the pixels in the output image. Therefore some form of interpolation, bilinear, bicubic etc., will be needed to find a good estimate of the new pixel value.
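The inverse-mapping idea can be sketched in a few lines of Python/NumPy (the practical's own scripts are in Matlab; the function and variable names below are illustrative, and nearest-neighbour sampling stands in for the bilinear interpolation discussed above):

```python
import numpy as np

def inverse_warp(src, T_inv, out_shape):
    """Resample src into an image of out_shape: for every output pixel,
    ask T_inv where to sample in the source. Nearest-neighbour rounding
    is used here; bilinear interpolation would replace the rounding."""
    H, W = out_shape
    v, u = np.mgrid[0:H, 0:W]                 # output pixel coordinates
    s, t = T_inv(u, v)                        # source coordinates, eq. (2)
    s = np.clip(np.round(s).astype(int), 0, src.shape[1] - 1)
    t = np.clip(np.round(t).astype(int), 0, src.shape[0] - 1)
    return src[t, s]

# Example: the transform T is a horizontal flip, so T_inv flips back.
src = np.arange(12).reshape(3, 4)
flipped = inverse_warp(src, lambda u, v: (src.shape[1] - 1 - u, v), src.shape)
```

Because every output pixel actively fetches a source value, no holes can appear, which is the point made in Figure 1 b).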

Figure 2: Under the assumption of parallel projection, the figure describes how the angles (φ, θ) are derived geometrically. In a), φ = 0 and the radial distance r = √(x² + y²) to the point in the image is seen to be r = sin(θ/2). Since the incident and reflected angles are equal, the reflected angle is twice the angle of the surface normal n at the point of reflection. b) The φ direction is along the radial line through the midpoint of the sphere to the point of reflection.

3 Images of Reflective Spheres

An image of a reflective sphere shows a full panoramic view of the environment, except for the area directly behind the sphere. We choose to neglect this small area, whereby the essential parameter of the mapping is the angle of the light reflected from each visible point on the sphere. We also make the related assumption that the sphere subtends only a small angle from the point of view, so that the projection onto the imaging plane can be assumed to be approximately parallel, see Figure 2. We also assume that the circular image is within the unit circle and that all points in the image are addressed with rectangular coordinates (x, y) such that r = √(x² + y²) ≤ 1. This mapping of image coordinates to the unit square can be described as:


x = 2(s/s_max − 0.5)    (3)

y = 2(t/t_max − 0.5)    (4)

Under these assumptions, the mapping from a point in the image of the reflective sphere to the corresponding view angle in the environment in which the sphere was placed is described by the following equations:

φ = atan(y/x)    (5)

r = √(x² + y²)    (6)

θ = 2 asin(r)    (7)

The corresponding inverse mapping from directions in the environment to imagecoordinates on the unit circle is:

r = sin(θ/2)    (8)

x = r cos(φ)    (9)

y = r sin(φ)    (10)
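As a sketch, equations 5-10 translate directly into a pair of small functions (Python/NumPy here rather than the practical's Matlab; atan2 is used in place of atan(y/x) so that all four quadrants are handled):

```python
import numpy as np

def sphere_to_angles(x, y):
    """Mirror-sphere image coordinates (inside the unit circle) to the
    reflected direction angles (phi, theta), eqs. 5-7."""
    phi = np.arctan2(y, x)
    r = np.sqrt(x**2 + y**2)
    theta = 2.0 * np.arcsin(r)
    return phi, theta

def angles_to_sphere(phi, theta):
    """Inverse mapping: direction angles back to image coordinates, eqs. 8-10."""
    r = np.sin(theta / 2.0)
    return r * np.cos(phi), r * np.sin(phi)
```

The two functions are exact inverses of each other on the unit disc, which is what makes the warps in Section 7 well defined.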

4 Angular Maps

A problem with mirror sphere images is that the sampling is not very uniform. The sampling at the rim of the circle, see Figure 3 a), is very poor. Note that the lines get successively closer towards the rim. The derivative of equation 7 is infinite at θ = π, which is a bad property for a coordinate mapping function to have. Because of this non-uniformity, a different mapping was chosen for light probe images. We call an image with this mapping an angular map, see Figure 3 b).

In an angular map the radial distance to the center of the light probe image maps linearly to the angle, which provides far better sampling for backward-facing directions. For the mapping equations 5-7 this is just a matter of replacing θ with the radial distance r multiplied by π. The mapping equations then become

φ = atan(y/x)    (11)

r = √(x² + y²)    (12)

θ = πr    (13)

An alternative way of doing this is to approximate the remapped sine function in equation 8 with a polynomial. The radial shift can be approximated with a fourth-order polynomial, and a close fit to the sine function of equation 8 is:


r_ang = 0.1433r⁴ − 0.7251r³ + 0.0088r² + 1.5727r    (14)

The maximum absolute error of this approximation over the full range 0 ≤ r ≤ 1 is about 0.0003, which is less than one pixel off for images smaller than 3000x3000 pixels. Radial resampling is commonly used in many image processing packages to correct barrel distortion; the free Photoshop plug-in PanoTools by Helmut Dersch does this very well.
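The error claim can be checked numerically. Reading equation 14 as a fit to the radial map r → sin(πr/2) (equation 8 with θ = πr, i.e. the inverse map from angular-map radius back to mirror-sphere radius), a short Python check confirms the stated bound:

```python
import numpy as np

# Dense sampling of the radius over the full range 0 <= r <= 1.
r = np.linspace(0.0, 1.0, 100001)
poly = 0.1433 * r**4 - 0.7251 * r**3 + 0.0088 * r**2 + 1.5727 * r  # eq. 14
exact = np.sin(np.pi * r / 2.0)            # eq. 8 with theta = pi * r
max_err = np.abs(poly - exact).max()       # about 3e-4, i.e. "about 0.0003"
```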

Figure 3: a) displays lines equally distributed over the angles (φ, θ) mapped to a mirror sphere image. b) displays the mirror sphere image warped into an angular map. As can be seen, the sampling is more uniform towards the rim of the sphere compared to the mirror sphere image.

5 Rotating the Polar Coordinates

The mapping from image coordinates to reflected directions described in Section 3 is simple and natural to the spherical geometry, but it has its spherical coordinate poles θ = 0 and θ = π along the line of sight, which is not the most convenient environment mapping. To rotate these polar coordinates to an orientation with the poles up and down in the normal manner, the following transformations can be used:

φ_rot = atan(cos φ sin θ / cos θ)    (15)

θ_rot = atan(√(1 − sin²φ sin²θ) / (sin φ sin θ))    (16)


Figure 4: The figure displays the equidistant lines in (φ, θ) rotated 90° to position the north and south poles in more suitable positions than along the line of sight. Compare with Figure 3. a) Displays equidistant lines in (φ, θ) mapped through the rotating mapping in Section 5. b) Displays a radially resampled version of the same.

The inverse mapping (for the actual sampling operation) is then given by:

φ = atan(cos θ_rot / (sin φ_rot sin θ_rot))    (17)

θ = atan(√(1 + sin²φ_rot sin²θ_rot − sin²θ_rot) / (cos φ_rot sin θ_rot))    (18)
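A Python sketch of equations 15-18 (illustrative function names; atan2 stands in for atan so the quadrants work out, and the square root in equation 16 follows from cos θ_rot = sin φ sin θ):

```python
import numpy as np

def rotate_poles(phi, theta):
    """Eqs. 15-16: move the poles from the line of sight to up/down."""
    phi_rot = np.arctan2(np.cos(phi) * np.sin(theta), np.cos(theta))
    s = np.sin(phi) * np.sin(theta)          # this is cos(theta_rot)
    theta_rot = np.arctan2(np.sqrt(1.0 - s**2), s)
    return phi_rot, theta_rot

def unrotate_poles(phi_rot, theta_rot):
    """Eqs. 17-18: the inverse rotation, used for the actual sampling."""
    phi = np.arctan2(np.cos(theta_rot),
                     np.sin(phi_rot) * np.sin(theta_rot))
    num = np.sqrt(1.0 + (np.sin(phi_rot) * np.sin(theta_rot))**2
                  - np.sin(theta_rot)**2)
    theta = np.arctan2(num, np.cos(phi_rot) * np.sin(theta_rot))
    return phi, theta
```

Round-tripping a direction through both functions returns the original angles, which is a quick sanity check when implementing Section 7.3.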

Figure 4 shows the rotated versions of Figure 3 a) and b) respectively. The above equations are somewhat difficult to understand and derive. To get an idea of what is happening during the rotation you can look at the animations spherephi.avi and spheretheta.avi that can be downloaded from the web page. The rotation described in this section is shown in the spheretheta.avi animation.


Figure 5: A latitude-longitude map, spanning 0 ≤ φ ≤ 2π horizontally and 0 ≤ θ ≤ π vertically, is a convenient way of storing environment maps and is therefore often used as a lookup table for radiance values in rendering. It is a planar mapping of data, in this case the reflection of the environment, defined on a sphere.

6 Latitude Longitude Maps

A latitude-longitude map, see Figure 5, uses the angular coordinates (φ, θ) to describe spherical directions. Each pixel in a latitude-longitude map describes one direction. Thus it can be used to describe the environment reflected in a spherical mirror.

Light probes are often stored as latitude-longitude maps, since it is fast and easy to use spherical coordinates (φ, θ). The data is stored efficiently and the map can easily be used as a lookup table in a real-time implementation.

Latitude-longitude maps are often used as panoramic images. A panoramic image usually contains a full 360° panorama of the scene in which it was captured. A panoramic image can be captured in several ways, e.g. using a fisheye lens, photographing a mirror sphere, etc.

7 Assignments

Based on the mappings described above, the assignment is to implement and try different resamplings. The images and scripts you need can be found in the Warping folder.

The resampling is done using a Matlab script called resamp2D. This function takes an image and a string with the name of a map function as input. The image is the original image, and the map function, which you are to implement, tells the resamp2D script how the image coordinates are remapped. Since the idea is, for every pixel coordinate in the output image, to find which pixel coordinates in the original image to sample, the map function can be described by equation 2.


This might seem a bit difficult in the beginning, but Section 7.1 will help you familiarize yourself with the idea of image sampling and warping.

7.1 Getting Started

Unpack the files you downloaded and open the file called resamp2d.m in Matlab. Find out how it works by reading the help text found inside the script or by typing (at the Matlab prompt):

help resamp2d

The file resamp2d1.m contains a simpler script that essentially does the same thing as resamp2d(A, mapfunc) but is much slower due to the loops used.

The function resamp2d(A, mapfunc) is used to warp images from one representation to another. A is the input image and mapfunc the inverse transformation of the mapping, i.e. mapfunc is used to find which pixel in the input image a given pixel in the output image maps to. Since the two pixel grids are unlikely to be aligned, bilinear interpolation is carried out to find the new output image pixel value. Look at the function proberot180 if you want to see an example of a function to be used with resamp2d(A, mapfunc).

To try the resamp2D function, load an image, e.g.:

image = imread('mirrorsphere.jpg');
image = double(image) / double(max(image(:)));

The last line converts the image to double and normalizes it to 1. The normalization is done because the Matlab function imshow(image) maps 1 to white. Look at the image:

imshow(image);

The function mapfunc is assumed to be defined over the unit square, i.e. the circular projection of the sphere is assumed to have radius 1, as described in equations 3 and 4.

To run the resamp2D function and rotate the light probe 180°, type:

imagerot = resamp2D(image, ’proberot180’);

Look at the result. The important thing is that you familiarize yourself with the resamp2D function and learn how to write the remapping function.

As you can see, the input to the map function is two matrices. The matrices contain the image coordinates of the input image mapped to the unit square, as described in Section 3, for the X- and Y-coordinates respectively. To solve the exercises it is important that you understand how this works. If you think it is confusing, you can display the coordinates as an image. For the X-coordinates this can be done by adding the following line to the proberot180.m script:

imshow((x1+1) * 0.5);

The output of the map function, x2 and y2 in the script, is the remapped version of the coordinates. The output is then used by the resamp2D function to do the remapping of the image, i.e. it uses the new, remapped coordinates to gather pixel values from the original image in the remapping process.
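In the same spirit, a map function can be sketched in Python (this is an illustrative stand-in, not a transcription of proberot180.m: it rotates the image 180° in the plane, which on unit-square coordinates is just a sign flip):

```python
import numpy as np

def rot180_map(x1, y1):
    """Map function: for each output coordinate on the unit square,
    return the input coordinate to sample. A 180-degree in-plane
    rotation is a sign flip of both axes."""
    return -x1, -y1

# A resamp2D-style warper would evaluate the map on coordinate grids:
y1, x1 = np.mgrid[-1:1:5j, -1:1:5j]
x2, y2 = rot180_map(x1, y1)
```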

When you understand how it works you are ready to solve the exercises.


Figure 6: A latitude-longitude map warped from the image with lines mapped to a mirror sphere.

7.2 Radial Resampling

Open the file mirror2angular.m and look at it. It contains the body of a function that warps a mirror sphere image radially into an angular map. Implement the mapping function to be used with resamp2d(A, mapfunc) for warping from a mirror sphere image to an angular map, i.e. from the left to the right image in Figure 3. This is described in Section 4. Remember that the input coordinate matrices are defined over the unit square, i.e. −1 ≤ x ≤ 1, −1 ≤ y ≤ 1.

If you transform the input coordinates to polar coordinates (φ, r), defined by thedistance from the origin and an angle, the radial resampling will be easy.

Remember that the output coordinates should also be on the unit square. Make sure that coordinates outside the unit circle are mapped to the rim of the circle, i.e. if r > 1 set r = 1.
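A possible sketch of this mapping function (in Python rather than Matlab, and not necessarily identical to the intended mirror2angular.m solution): since the output is the angular map, each output radius r maps back to the mirror-sphere radius sin(πr/2), i.e. equation 8 with θ = πr.

```python
import numpy as np

def mirror2angular_map(x2, y2):
    """For each angular-map OUTPUT coordinate (x2, y2) on the unit
    square, return the mirror-sphere INPUT coordinate to sample."""
    r = np.sqrt(x2**2 + y2**2)
    r_ang = np.minimum(r, 1.0)                 # clamp radii outside the circle
    r_mir = np.sin(np.pi * r_ang / 2.0)        # eq. 8 with theta = pi * r_ang
    scale = np.where(r > 0.0, r_mir / np.maximum(r, 1e-12), 1.0)
    return x2 * scale, y2 * scale
```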

Use mirrorsphere.tif when you write your function. When you are done you should have a resulting image looking similar to the right image in Figure 3.

Now load the image officemirror.jpg, resample it into an angular map, and look at it.

7.3 Panorama

Open the file angular2latlong.m and look at it. It contains the body of a function that warps an angular map into a panoramic latitude-longitude map, see Section 6. Since the panorama is defined in spherical coordinates, you should remap the input coordinate matrices from −1 ≤ X ≤ 1, −1 ≤ Y ≤ 1 to the angles 0 ≤ φ ≤ 2π, 0 ≤ θ ≤ π. These are the angles to use in the remapping.

In angular2latlong.m there is a line saying:


% Remap by turning the system 90 degrees

Ignore the above line for now and map the latitude-longitude image coordinates to the angular map as described in Section 3. It is essentially just a matter of mapping (φ, θ) to Cartesian coordinates. But since the input is in the angular format, you have to take into account the radial resampling described in Section 4. Remember that it is the inverse mappings you should use.

When you are done, resample the officemirror.jpg image and look at the result. Note that you have to resample it into an angular map before resampling into a panorama. It looks a bit weird, but try to figure out how the result corresponds to the mapping described in Figure 2.
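Leaving the rotation step aside for a moment, the core inverse mapping can be sketched as follows (Python, illustrative names): each latitude-longitude output pixel carries angles (φ, θ), and its angular-map source coordinate has radius θ/π along direction φ.

```python
import numpy as np

def latlong2angular_map(phi, theta):
    """For each lat-long output pixel with angles (phi, theta),
    return the angular-map input coordinate to sample."""
    r = theta / np.pi            # angular map: radius linear in theta, eq. 13
    return r * np.cos(phi), r * np.sin(phi)
```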

Now go back to the line saying

% Remap by turning the system 90 degrees

and rotate φ and θ as described in Section 5. Obviously this has to be done before converting to Cartesian image coordinates. If you use the angular map you got from warping mirrorsphere.tif, you will, when you are finished, get a result looking similar to Figure 6.

Since a latitude-longitude map spans 0 ≤ φ ≤ 2π, 0 ≤ θ ≤ π and the output is a square image, you should down-sample (resize) the image to half the height for correct viewing.

Now resample probe.jpg into a panorama and look at the result. As you can see, the pole of the mirror sphere facing away from the camera is in the middle of the image. In the next exercise we will rotate the angular map to yield an image which we can warp into a panorama with the poles rotated in a better way.

7.4 Rotate the Image

Again load officemirror.jpg and warp it into an angular map using the function you wrote. The rotation about the Y-axis can be done in either of two ways.

You can use the proberot180 map function with resamp2d to rotate the angular map 180° and remap it into a panorama as before.

Alternatively, you can modify the φ angle in the panorama to span −π ≤ φ ≤ π instead of 0 ≤ φ ≤ 2π in angular2latlong.m.

Look at the result. Save your images to disk using imwrite.

8 Some Concluding Words

The panorama created from the mirror sphere image is of low quality. The quality would be improved by using a higher resolution mirror sphere image and by retouching out the camera. One could also use two fisheye images to create panoramas. The idea of this practical was to show the importance of image warping and the need for using different image representations.


9 Simple Image Based Lighting

In this section we will do a very simplistic rendering of a car. For this we will use the material found in the Rendering folder. Here you will find a normal map, normals.pfm, containing the normals of a car, an alpha mask, alpha, a diffuse convolution map, uffiziDiffuse.pfm, a reflection map, uffizi.pfm, and a background image, background.pfm. These can be loaded with the ReadPFM script. If you want, you can use the script writehdr to save your output images in the .hdr format.

Load the images and take a look at them.

9.1 Diffuse Rendering

In this exercise we will make a diffuse rendering of the car. Open the script called . Your assignment is to fill out the script and make it render the car diffusely lit, using the normal map and the diffuse convolution map.

The diffuse convolution map contains the diffuse (cosine-weighted) illumination integrated over the hemisphere for the normal direction given by each pixel position:

E(N) = ∫_Ω L′(N, ω) cos θ dω    (19)

where θ is the angle between the normal and the direction given by ω. This means that for each normal direction N in our normal map we can find the proper image coordinates in the diffuse convolution map and use that as our diffuse illumination for that point.

The diffuse convolution map is in latitude-longitude format, and the (u, v) coordinates, 0 ≤ u ≤ 1, 0 ≤ v ≤ 1, for a normal direction N = (Nx, Ny, Nz) can be found as:

u = 0.5 (1 + atan2(Nx, −Nz)/π)    (20)

v = acos(Ny)/π    (21)

By multiplying u and v with the width and height of the diffuse convolution map you will find the correct pixel coordinate to use for the lookup. Don't forget that Matlab starts indexing at 1 and not 0.
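Equations 20-21 translate directly into a lookup helper (a Python sketch with illustrative names; the division by π in equation 20 is what keeps u within [0, 1]):

```python
import numpy as np

def normal_to_latlong_uv(N):
    """Normal direction N = (Nx, Ny, Nz) -> (u, v) in [0, 1]^2 for a
    latitude-longitude map, eqs. 20-21."""
    Nx, Ny, Nz = N
    u = 0.5 * (1.0 + np.arctan2(Nx, -Nz) / np.pi)   # eq. 20
    v = np.arccos(Ny) / np.pi                        # eq. 21
    return u, v

# Pixel lookup (0-based here; in Matlab, add 1 to both indices):
# col = round(u * (width - 1)); row = round(v * (height - 1))
```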

When you have generated your diffusely lit image of the car you can use the alpha mask to make a composite with the background. If the rendered image is called D, this can be done as:

Composite = D · alpha + (1 − alpha) · background    (22)

Now look at your result. As you can see, this simplistic rendering does not contain any self-shadowing, inter-reflections, or a number of other desirable features. To fake self-shadowing, an ambient occlusion map is often used. The ambient occlusion map describes how much of the hemisphere is occluded at each point on the object.


9.2 Specular Rendering

The next step is to write a script that uses the normals and the view direction to render the car with a mirror-like material. Open the script called RenderSpecular.m. Here we will, for each normal in the normal map, calculate the reflected view direction R and use it to look up the reflected value. The reflected direction can be calculated as:

R = d_v − 2(d_v · N)N    (23)

where d_v is the view direction and N is the normal direction. For each normal in the normal map, calculate the reflected direction and set the corresponding pixel in the output image to the value found. As you can see in the script, the view direction is (0, 0, −1). The reflection map is called uffiziReflection.pfm. The reflection map image is in latitude-longitude format and the pixels are addressed in the same way as described in the diffuse rendering. Now make a composite with the background and look at the result.

Make a composite in which you combine the diffuse and the specular renderings as:

Composite = (0.9 · D + 0.1 · S) · alpha + (1 − alpha) · background    (24)

where D is the diffuse and S is the specular rendering. You can vary the amount addedfrom the diffuse and specular renderings until you find a combination that you like.

9.3 Conclusion

The renderings produced are very simplistic and leave a lot to be done to make them more interesting. However, the exercise explains the basics of how diffuse convolution maps are used and how reflection mapping works.


10 Some Useful Matlab Functions

This section lists a number of functions that can be helpful during the practical. They are fully described in the Matlab manual pages and are therefore not further explained here.

imread(fname) - reads the image fname

imwrite(image, fname, MODE) - writes an image to disk

imresize(image, [x y], MODE) - resizes an image to the given size

cart2pol(x, y) - transforms Cartesian to polar coordinates

pol2cart(th, r) - transforms polar to Cartesian coordinates

atan2(y, x) - the four-quadrant inverse tangent, -pi <= atan2(y,x) <= pi
