Applications
Presented by: Michal Kamara
Outline
• Motivation: shadow removal from multi-projector displays
• Dynamic shadow elimination for multi-projector displays
• Dynamic shadow removal from front projection displays
• Automatic generation of consistent shadows for Augmented Reality
• The use of large-scale front-projection displays has emerged in recent years:
– Immersive teleconferencing
– Virtual reality environments
– Augmented reality
• One fundamental problem:
Shadows cast by occluders quickly pull the user out of the visually immersive experience.
Motivation
What can be done?
• Back-projection
– Problems: space requirements, intensity and sharpness attenuation, and mechanical complexity.
• Constrain user movement
– Some interactive display environments adaptively render a model based on the user’s position.
– May prevent the user from viewing particular parts of the model.
• Or…
Dynamic Shadow Elimination for Multi-Projector Displays
Rahul Sukthankar, Tat-Jen Cham, Gita Sukthankar
2001
Outline
• System Overview
• Automatic Alignment
• Reference Images
• Shadow Detection
• Shadow Elimination
• Iterative Feedback
• Results
System Overview
System Overview – cont’
• The system must accurately align the projected images on the display surface.
• Each occluder can create multiple shadows on the display surface.
• The system must precisely adjust projector output to compensate for each occlusion.
• Shadow boundaries must be treated carefully.
Algorithm’s steps
[Diagram: the occluded display is compared against the reference image to produce an alpha mask in the camera frame; the mask is warped through the camera-screen homography and applied to the raw slide, yielding a shadow-compensated slide in the screen frame, which is then warped through each screen-projector homography (projector 1 … projector N) for display.]
Automatic Alignment
• We need to find T such that:
p_b = T p_a
for all matching points p_a and p_b in two different coordinate systems.
• Because T is a planar projective transform, it can be determined up to an unknown scale factor by 4 pairs of matching points.
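The four-correspondence estimate can be sketched with a standard direct linear transform (DLT). This is an illustrative implementation, not the authors' code; the function names are mine.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography T mapping src -> dst via DLT.

    src, dst: arrays of shape (N, 2) with N >= 4 matching points.
    Returns T such that dst ~ T @ [x, y, 1], up to scale.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # T is the null vector of A: the right singular vector
    # corresponding to the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def apply_homography(T, pt):
    """Map a 2-D point through T with perspective division."""
    x, y, w = T @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])
```

With exactly 4 correspondences in general position the solution is unique up to scale, so the recovered T maps any further point the same way as the true transform.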
Automatic Alignment – cont’
• The camera-projector homography T_c,p_i can be determined by:
– iteratively projecting a random point from the projector onto the display surface and observing that point in the camera, or
– projecting a rectangle from the projector, where the coordinates of the rectangle corners in the projector frame are known and can be located in the camera frame using image processing techniques.
• The display area is either automatically determined by the camera, or interactively specified by the user.
• The camera-screen homography T_c,s can be determined from the corners of the display surface.
• The screen-projector homographies then follow by composition:
T_p_i,s = T_c,p_i⁻¹ T_c,s
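The composition of homographies can be written directly with a matrix inverse. A minimal sketch, assuming each homography is stored as a 3×3 NumPy array mapping into camera coordinates:

```python
import numpy as np

def compose_screen_to_projector(T_cp, T_cs):
    """Screen -> projector homography from two camera-side homographies.

    T_cp maps projector coordinates -> camera coordinates;
    T_cs maps screen coordinates -> camera coordinates.
    The result maps screen coordinates -> projector coordinates
    (valid up to scale, as all homographies are).
    """
    return np.linalg.inv(T_cp) @ T_cs
```

For example, if T_cs translates the screen point (0, 0) to camera point (3, 4), and T_cp scales projector coordinates by 2 and shifts x by 1, the composed map sends screen (0, 0) to projector (1, 2).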
Automatic Alignment – cont’
• The projector-screen calibration is important to avoid distortions and double images on the display surface, that may be caused from the off-center projections.
Reference Images
• The reference images are created during the initialization phase, when the scene is occluder-free.
• For each slide the system projects, several camera images are captured and pixel-wise averaged to create a reference image for that slide.
Shadow Detection
• During operation, the camera acquires a current image, which is compared to the reference image.
• A pixel-wise difference between the reference and current camera images is used to detect shadows.
• A 5×5 median filter is applied to the difference image to reduce the effects of camera noise and minor calibration errors.
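The detection stage can be sketched as follows, using SciPy's median filter. The threshold value is my assumption for illustration, not a parameter from the paper.

```python
import numpy as np
from scipy.ndimage import median_filter

def detect_shadows(reference, current, threshold=30):
    """Detect shadow pixels by differencing against a reference image.

    reference, current: 2-D grayscale arrays of the same shape.
    Returns a boolean mask, True where the current image is darker
    than the reference by more than `threshold` after a 5x5 median
    filter suppresses camera noise and minor calibration errors.
    """
    diff = reference.astype(np.int16) - current.astype(np.int16)
    diff = median_filter(diff, size=5)
    return diff > threshold
```

An isolated noisy pixel is wiped out by the median filter, while a contiguous shadow region survives it.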
Shadow Elimination
• From the difference image a mask, called the alpha mask, is constructed iteratively:
α_t+1(x, y) = α_t(x, y) + γ [I_0(x, y) − I_t(x, y)]
where I_t(x, y) is the camera image at time t, I_0(x, y) is the reference image, and γ is a system parameter, set to 0.25, to avoid rapid fluctuations.
The mask is initialized to α_0(x, y) = 128.
Note that there is only one alpha mask for all projectors.
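One step of this feedback rule, which nudges the mask toward the reference by a fraction γ of the difference, might look like the following sketch. The clipping range [0, 255] is my assumption for 8-bit alpha values.

```python
import numpy as np

def update_alpha(alpha, reference, current, gamma=0.25):
    """One iteration of the alpha-mask feedback:
    alpha_{t+1} = alpha_t + gamma * (I_0 - I_t),
    clipped to the valid 8-bit range [0, 255] (assumed).
    """
    delta = gamma * (reference.astype(float) - current.astype(float))
    return np.clip(alpha + delta, 0, 255)
```

Where the current image already matches the reference, the mask is left unchanged; in shadowed regions it grows by γ times the intensity deficit each frame.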
Shadow Elimination – cont’
• The alpha mask is computed in the camera frame and hence must be transformed into the screen frame using the camera-screen homography T_c,s.
• Well, we know how to do that…
• Applying the alpha mask to the current slide is done by replacing the alpha channel of the slide image.
• What channel?
• An alpha channel is an additional channel (alongside the three R, G, B channels) that may be added to an image. It describes the weight of each pixel when the image is composited over another image.
Shadow Elimination – cont’
• After applying the alpha mask to the screen slide, it is transformed for each projector and…displayed.
Iterative Feedback
• Since there is no good photometric model of the environment, there is no precise prediction of how much light is needed to remove a shadow.
• That is why an iterative feedback loop is used.
• The system continues to add light to shadowed regions until they appear as in the reference image.
• Surprisingly, this creates robustness: if one of the projectors fails, the alpha mask uniformly increases to compensate.
• The main drawback is time:
– shadows are eliminated in approximately 3 iterations.
Results
Results – cont’
• To examine image quality over the shadow removal process, the SSD error of grayscale intensities was calculated against the reference image.
• As expected, the hard shadow from the single projector is the major source of error.
Results – cont’
[Plot: SSD error per frame]
• The “occluder” enters at t=4.
• The “occluder” leaves at t=11.
• The remaining low errors are attributed to the “halo” effect.
Dynamic Shadow Removal from Front-Projection Displays
Christopher Jaynes, Stephen Webb, R. Matt Steele, Michael Brown, W. Brent Seales
2001
Outline
• System Overview
• Requirements
• Calibration
– Geometric Calibration
– Color Calibration
• Creating an expected image
• Alpha mask generation
• Results
• Main drawbacks
System Overview
• Very similar to the previous system, with one main difference:
– The expected image is created from the projector frame buffer, using the calibration, during operation.
• This difference requires a new type of calibration: color calibration.
Requirements
• Screen points are illuminated by more than 1 projector.
• At least 1 camera is able to observe the screen surface at all times.
Calibration
• Critical both for shadow detection and removal.
• A two-phase process, performed prior to use of the system:
– Geometric Calibration
– Color Calibration
Geometric Calibration
• Very similar to the previous algorithm, except that the calibration is now directly between camera and projector.
• Given a camera and projector pair, calibration determines the transform from pixels in the camera plane to their corresponding positions in the projectors’ frame buffers.
Geometric Calibration – cont’
• Reminder: we need to find A such that:
p_p = A p_c
for all points p_c in the camera and p_p in the projector.
• Because A is a planar projective transform, it can be determined up to an unknown scale factor by 4 pairs of matching points.
• Such points can be found by iteratively projecting a random point from the projector onto the display surface and observing that point in the camera.
Geometric Calibration – cont’
• The accuracy of A can be measured by the mean squared reprojection error over N held-out correspondences:
err = (1/N) Σ_i ||p_p^i − A p_c^i||²
• In this study, 10 matching pairs were used for calculating A and 50 points for calculating the calibration error.
• To improve results, a Monte Carlo technique was used.
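The error measure can be sketched as a mean squared reprojection error over held-out camera/projector point pairs; the exact normalization in the paper may differ.

```python
import numpy as np

def calibration_error(A, cam_pts, proj_pts):
    """Mean squared reprojection error of homography A:
    (1/N) * sum_i || p_proj_i - A(p_cam_i) ||^2,
    where A(.) includes the perspective division.
    """
    err = 0.0
    for pc, pp in zip(cam_pts, proj_pts):
        x, y, w = A @ np.array([pc[0], pc[1], 1.0])
        err += (pp[0] - x / w) ** 2 + (pp[1] - y / w) ** 2
    return err / len(cam_pts)
```

A perfect calibration gives an error of zero; a uniform one-pixel offset in x gives an error of exactly 1.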
Color Calibration
• A given camera C observes the display surface while uniform color images of increasing intensity are iteratively projected from projector P.
• For each projected color image, the mean color intensity is computed over the corresponding observed image.
• This is computed for each channel separately.
• The mean value over 10 trials is computed for each color channel.
• Note: this does not account for color differences between the projectors.
[Figure: measured transfer function for each color channel]
Color Calibration – cont’
• The transfer function for each color channel is a sigmoid of the form:
f_C(x) = a / (1 + e^(−(cx − b))) + k
where f_C is the color transfer function for channel C, and the 4 parameters a, b, c, k are fit to the measured data points using Levenberg-Marquardt nonlinear optimization.
• These color transfer functions provide a straightforward way to predict how a color in projector space will appear in the camera image.
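A fit of this kind can be reproduced with SciPy's curve_fit, which uses the Levenberg-Marquardt algorithm for unbounded problems. The exact sigmoid parameterization here is a reconstruction of the garbled slide formula, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def transfer(x, a, b, c, k):
    # Assumed sigmoid form: f(x) = a / (1 + exp(-(c*x - b))) + k
    return a / (1.0 + np.exp(-(c * x - b))) + k

# Synthetic measurements: projected intensity -> mean observed intensity.
x = np.linspace(0, 255, 32)
true_params = (200.0, 3.0, 0.03, 10.0)
y = transfer(x, *true_params)

# curve_fit defaults to Levenberg-Marquardt when no bounds are given.
params, _ = curve_fit(transfer, x, y, p0=(150.0, 1.0, 0.01, 0.0))
predicted = transfer(x, *params)
```

On noiseless synthetic data the fitted curve reproduces the measurements almost exactly; real camera data would carry noise, which is why the slides average 10 trials per channel first.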
Color Correction results
Observed image
Predicted image, without color correction
Predicted image, with color correction
Creating an expected image
• In a dynamic display the imagery may change in an unpredictable way (user movement, simulations, video data).
• The expected image must account for the changing display.
• The expected image is the basis for subsequent modification of projector frame buffer pixels, so it should be as accurate as possible.
Creating an expected image – cont’
• An expected image is recovered by:
– Warping all projector pixels into the camera frame (geometric calibration).
• For higher accuracy, a supersampling technique is used.
– Applying color correction (color calibration):
Ĩ(i, j, C) = f_C(I(x, y, C)),  C ∈ {R, G, B}
where I is the expected image after geometric calibration and Ĩ is the color-corrected prediction.
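Applying the per-channel correction is then a straightforward mapping. A sketch, where `transfer_funcs` is a hypothetical container for the fitted f_C functions:

```python
import numpy as np

def predict_expected(image, transfer_funcs):
    """Apply a per-channel color transfer function to a geometrically
    warped RGB image, predicting its appearance in the camera.

    image: float array of shape (H, W, 3).
    transfer_funcs: dict mapping channel index (0=R, 1=G, 2=B)
    to a callable f_C (hypothetical names for illustration).
    """
    out = np.empty_like(image, dtype=float)
    for c, f in transfer_funcs.items():
        out[..., c] = f(image[..., c])
    return out
```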
Example
[Images: predicted image and camera view, before color correction]
Alpha mask generation
• The expected image is compared to the captured imagery by subtraction of color components.
• This leads to 2 delta images, one for positive and one for negative differences.
• Each delta image is filtered (3×3 median) to remove the effect of sensor noise.
• All of the above happens in the camera coordinate frame.
• Using the camera-projector homography, the delta images are warped to the reference frame of each projector for correction.
Alpha mask generation – cont’
• Once a delta image has been aligned to a projector, an appropriate alpha mask is computed from it: the per-pixel adjustment follows ΔI(i, j), limited to Δmax, the maximum intensity change allowed between any 2 frames, to avoid rapid fluctuations.
• The alpha blending process takes into account whether incoming alpha values should be added to or subtracted from the alpha channel currently being projected.
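The per-frame limiting described above can be sketched as a clamp; the value of Δmax is my assumption for illustration.

```python
import numpy as np

def alpha_adjustment(delta, delta_max=10.0):
    """Limit the per-frame alpha change to +/- delta_max so that
    the projected correction cannot fluctuate rapidly between frames.
    delta: filtered, projector-aligned delta image (signed values).
    """
    return np.clip(delta, -delta_max, delta_max)
```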
• Resulting Alpha mask
Example
• Difference image
Results
Main drawback
• It takes the system about 3-4 frames to converge to a blended image.
• This is not an interactive rate.
Automatic Generation of Consistent Shadows for Augmented Reality
Katrien Jacobs, Jean-Daniel Nahmias, Cameron Angus, Alex Reche, Celine Loscos, Anthony Steed
2005
Outline
• Motivation
• The problem
• Previous work
• Method overview
• Shadow detection step
– Automatic estimate of the shadow intensity
• Shadow protection step
• Shadow generation step
Motivation
• A wide range of applications use computer-generated animations in combination with pictures of real scenes:
– Medical training
– Medical surgery
– Entertainment
• Some require instantaneous integration of the virtual elements with the real ones.
• Consistent shadows of the virtual objects give a correct geometric interpretation.
• Correct lighting enhances the feeling that the virtual objects are part of the real scene.
The problem
• This doesn't seem natural:
The virtual shadow lies correctly on the ground but overlaps incorrectly with the real shadow.
Previous work
• Since the early 90’s, a few solutions for the illumination inconsistency have been proposed.
• Most of them assume that a model of the real scene is available.
• If not, it is reconstructed using photos from different viewpoints.
• This usually leads to a mismatch between the simplified geometry and the texture.
• In this paper, a new procedure is presented that offers a solution regardless of the quality of the geometric reconstruction.
Geometric Reconstruction, example
Outdoor scene
Reconstructed geometry
Shadow created based on geometry
Mismatch between geometry and texture
Method overview
• The system is applied on scenes with one main real light source.
• The real element’s geometry and the position of the light source only need to be known approximately.
• A three-step mechanism is designed:– Shadow detection step– Shadow protection step– Shadow generation step
Shadow detection step
• In order to protect the existing shadows in the scene from any post-processing, the shadow pixels in the texture need to be identified.
– First, a shadow contour estimate is calculated using the geometry and the light source position.
– Next, the exact shadow contour is extracted using an edge detector, in this case the Canny edge detector.
Edge detection example
Input for the edge detector
Using the geometric estimate, an accurate edge detection is performed.
Shadow detection step – cont’
• Correct detection occurs when:
– The position of the geometric estimate is close to that of the real shadow, regardless of differences in shadow shape or detail.
– The shadow, whether hard or soft, shows relatively high contrast with the background.
– The contrast between the shadow and the background is larger than the contrast within the texture pattern of the background.
• The computation speed of the shadow edge detector depends on the size of the real shadows.
Shadow detection step: automatic estimate of the shadow intensity
Once the true shadow contour is known, it is possible to calculate a scaling factor per material in shadow that reflects the color intensity in the shadow region.
The scaling factor for each color channel is the ratio between the average color inside the shadow region and the average color in the non-shadow region:
C′ = ( (1/|SR|) Σ_{p ∈ SR} C_p ) / ( (1/|NSR|) Σ_{p ∈ NSR} C_p ),  C ∈ {R, G, B}
where SR is the shadow region, NSR is the non-shadow region, and |SR| and |NSR| are the numbers of pixels in each region.
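The per-material scaling factor described in this step, a ratio of channel means inside and outside the shadow, can be sketched as follows, assuming the texture and a boolean shadow mask are available as arrays:

```python
import numpy as np

def shadow_scale_factors(texture, shadow_mask):
    """Per-channel scaling factor: mean color in the shadow region (SR)
    divided by mean color in the non-shadow region (NSR).

    texture: float array of shape (H, W, 3).
    shadow_mask: boolean array of shape (H, W), True inside the shadow.
    Returns an array of 3 factors, one per color channel.
    """
    sr = texture[shadow_mask]    # pixels inside the shadow region
    nsr = texture[~shadow_mask]  # pixels outside it
    return sr.mean(axis=0) / nsr.mean(axis=0)
```

Multiplying an unshadowed texture color by these factors predicts its color in shadow, which is what the generation step uses for the virtual shadows.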
Shadow Protection step
• A binary shadow mask is created in order to protect the points inside a real shadow from any scaling.
• The scaling factor is chosen to match the color of the non-overlapping areas with the points inside the real shadow.
Shadow Generation step
• A real-time shadow method such as shadow maps or shadow volumes is used to generate the virtual shadows.
• The intensity of the shadow relates to the appropriate scaling factor computed in the shadow protection step.
• Overlap between real and virtual shadows is prevented by using the mask generated in the shadow protection step.
• The intensities of the pixels in the non-overlapping regions are calculated by scaling the texture color with the scaling factor.
Results
Geometric estimation of the shadow
Real scene
Estimated shadow in yellow
Green area is sent to edge detection
Edge detection
Real-time results
Virtual man walking around real laptop
References
• Rahul Sukthankar, Tat-Jen Cham, Gita Sukthankar. “Dynamic Shadow Elimination for Multi-Projector Displays.” Proceedings of IEEE CVPR, 2001.
• Christopher Jaynes, Stephen Webb, R. Matt Steele, Michael Brown, W. Brent Seales. “Dynamic Shadow Removal from Front Projection Displays.” Proceedings of IEEE Visualization, 2001.
• Katrien Jacobs, Jean-Daniel Nahmias, Cameron Angus, Alex Reche, Celine Loscos, Anthony Steed. “Automatic Generation of Consistent Shadows for Augmented Reality.” Proceedings of Graphics Interface 2005, ACM International Conference Proceeding Series, Vol. 112.