3D Stereo Reconstruction using iPhone Devices


1

3D Stereo Reconstruction using iPhone Devices

Final Presentation: 24/12/2012

Performed By:
• Ron Slossberg
• Omer Shaked

Supervised By:
• Aaron Wetzler

2

Project’s Goal

• Building a self-contained mobile 3D reconstruction system using two iPhone devices

3

Background – Pinhole Camera Model

• The basic camera model
• Transformation from 3D to 2D coordinates: x = K [R | t] X (see the sketch below)
• Lens distortion is also taken into account
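For illustration (not part of the original slides), a minimal sketch of this projection using OpenCV's C++ API; the intrinsics and the 3D point are made-up example values:

```cpp
// Illustrative sketch: project a 3D point X to pixel coordinates x = K [R | t] X
// with OpenCV; with zero distortion this is the pure pinhole model.
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <iostream>
#include <vector>

int main() {
    // Example intrinsics (fx, fy, cx, cy) and an identity camera pose.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320,
                                           0, 800, 240,
                                           0,   0,   1);
    cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64F);   // rotation (Rodrigues vector)
    cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64F);   // translation
    cv::Mat dist = cv::Mat::zeros(5, 1, CV_64F);   // distortion coefficients

    std::vector<cv::Point3d> X = { {0.1, -0.05, 2.0} };   // a 3D point in front of the camera
    std::vector<cv::Point2d> x;
    cv::projectPoints(X, rvec, tvec, K, dist, x);          // applies K [R | t] plus distortion

    std::cout << "Projected pixel: " << x[0] << std::endl;
    return 0;
}
```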

4

Background – Stereo Vision

• Combine images from two cameras to generate a depth image
• The relative camera positions in physical space (R, T) and in image-plane space (F) are retrieved by the stereo calibration process
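For reference, these calibration outputs are related by standard multiple-view-geometry identities (not stated on the original slide): the essential matrix is E = [T]_x R, and the fundamental matrix follows from the intrinsic matrices K1, K2 as F = K2^(-T) E K1^(-1), where [T]_x denotes the skew-symmetric matrix of T.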

5

Background – Stereo Correspondence

• OpenCV offers a number of algorithms for stereo correspondence
• We chose to use two algorithms that offer a good compromise between efficiency and quality:
– Block Matching
– Semi-Global Block Matching

6

Background – Stereo Correspondence (Cont.)

• Block Matching:
– Looks at blocks of pixels along the epipolar lines and finds matches according to cross correlation (see the sketch below)
– Example of a typical result (disparity map image)
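A minimal sketch of block matching with OpenCV (modern C++ API; the project used the 2012-era API, so class names may differ, and the parameter values here are illustrative, not the project's):

```cpp
// Illustrative sketch: block-matching stereo correspondence with OpenCV.
// leftRect and rightRect are assumed to be rectified 8-bit grayscale images.
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

cv::Mat computeDisparityBM(const cv::Mat& leftRect, const cv::Mat& rightRect) {
    // numDisparities must be a multiple of 16; blockSize must be odd.
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(/*numDisparities=*/64,
                                                    /*blockSize=*/15);
    cv::Mat disparity16;                                    // fixed-point disparity, scaled by 16
    bm->compute(leftRect, rightRect, disparity16);

    cv::Mat disparity;
    disparity16.convertTo(disparity, CV_32F, 1.0 / 16.0);   // recover true disparity values
    return disparity;
}
```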

7

Background – Stereo Correspondence (Cont.)

• Semi-Global Block Matching:
– Adds to the normal block matching algorithm by introducing global consistency constraints
– The constraints are introduced by aggregating matching costs along several independent, one-dimensional paths across the image (see the sketch below)
– Example of a typical result (disparity map image)
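In OpenCV's implementation the consistency constraints are controlled mainly by the P1/P2 smoothness penalties; a minimal sketch with typical (illustrative) values:

```cpp
// Illustrative sketch: semi-global block matching (SGBM) with OpenCV.
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

cv::Mat computeDisparitySGBM(const cv::Mat& leftRect, const cv::Mat& rightRect) {
    const int blockSize = 5;
    const int channels  = leftRect.channels();
    cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(
        /*minDisparity=*/0,
        /*numDisparities=*/64,                              // must be divisible by 16
        blockSize,
        /*P1=*/ 8 * channels * blockSize * blockSize,       // penalty for small disparity changes
        /*P2=*/32 * channels * blockSize * blockSize);      // penalty for large disparity jumps

    cv::Mat disparity16;
    sgbm->compute(leftRect, rightRect, disparity16);

    cv::Mat disparity;
    disparity16.convertTo(disparity, CV_32F, 1.0 / 16.0);   // disparities are returned scaled by 16
    return disparity;
}
```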

8

Background – Reconstructed Scene

• The matching algorithm produces a disparity map: a grayscale image in which each gray level corresponds to a certain disparity, and thus to a certain depth
• Applying a reprojection matrix to the disparity map yields a point in space for each pixel (see the sketch below)
• These points can be rendered as a 3D mesh, using the original picture's color for each vertex
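A minimal sketch of the reprojection step, assuming Q is the 4x4 reprojection matrix produced by cv::stereoRectify (illustrative, not the project's code):

```cpp
// Illustrative sketch: turn a disparity map into a 3D point cloud with the
// reprojection matrix Q; each pixel maps to one (X, Y, Z) point.
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <cmath>
#include <vector>

std::vector<cv::Point3f> disparityToPoints(const cv::Mat& disparity, const cv::Mat& Q) {
    cv::Mat points3D;                                       // CV_32FC3 image of (X, Y, Z) triples
    cv::reprojectImageTo3D(disparity, points3D, Q, /*handleMissingValues=*/true);

    std::vector<cv::Point3f> cloud;
    for (int y = 0; y < points3D.rows; ++y)
        for (int x = 0; x < points3D.cols; ++x) {
            const cv::Point3f p = points3D.at<cv::Point3f>(y, x);
            // Pixels with unknown disparity are assigned a very large Z; skip them.
            if (std::fabs(p.z) < 9000.0f) cloud.push_back(p);
        }
    return cloud;
}
```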

9

Programming Environment

• iPhone app programming
– Objective-C programming
– Model-View-Controller design pattern
• Main implemented features:
– Displaying and controlling the views
– Inter-device communication and time synchronization
– Persistent storage of data

10

Programming Environment (cont.)

• OpenCV libraries
– C++ open-source code
– Implement all the algorithms required for the calibration and reconstruction processes
– Handle the interface with the iPhone's camera
• Main implemented features:
– Integrating OpenCV functions into our iPhone app
– Correct data flow into and out of every OpenCV function

11

Programming Environment (cont.)

• OpenGL ES libraries
– A lightweight version of the open-source OpenGL libraries, which includes an iOS API
– Implement the framework for rendering 2D and 3D computer graphics
• Main implemented features:
– Displaying the reconstructed images as an interactive 3D surface

12

Software High-Level Design

• Generating a Bluetooth session between the devices

Main Menu → Connect Devices

13

Software High-Level Design

• Setting the right parameters for stereo calibration and reconstruction

Main Menu → Settings

14

Software High-Level Design

• Performing stereo calibration for the devices using a chessboard pattern

Main Menu → Calibration

15

Software High-Level Design

• Performing 3D stereo reconstruction of the images captured by the devices

Main Menu → Reconstruction

16

Software High-Level Design

• Interactive 3D color display of the images
• Disparity map images shown within a table

Main Menu → Photo Album → 3D Image Display

17

TIME FOR LIVE DEMO AND MOVIE

Calibration Process Flow

18

(State diagram; the same flow runs on both devices, starting once the Bluetooth session is created)

• Initial State
• Validating message delay (until the message delay is calculated)
• Initializing camera
• Capture: one device sends a capture order and the other a capture indication; both wait the message delay and then capture an image
• Extract Corners: each device extracts the chessboard corners from its image
• Exchange image corners data
• Calibration
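The Extract Corners step corresponds to OpenCV's chessboard detection; a minimal sketch (the board size and refinement parameters are illustrative):

```cpp
// Illustrative sketch: detect chessboard corners in a captured calibration image
// and refine them to sub-pixel accuracy before exchanging the corner data.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

bool extractCorners(const cv::Mat& gray, cv::Size boardSize,
                    std::vector<cv::Point2f>& corners) {
    // boardSize is the number of inner corners, e.g. cv::Size(9, 6).
    const bool found = cv::findChessboardCorners(gray, boardSize, corners);
    if (found) {
        cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT,
                                          30, 0.01));
    }
    return found;
}
```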

19

Calibration Process Flow (Cont.)

• Calibrate:
– Compute Intrinsic Parameters (performed separately at each device)
– Compute Extrinsic Stereo Parameters
– The process was separated into these two stages to increase accuracy
• Save Parameters
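A minimal sketch of this two-stage calibration with OpenCV (modern C++ API; names, flags and the unused per-view poses are illustrative, not the project's code):

```cpp
// Illustrative sketch of the two-stage calibration:
// 1) each camera's intrinsics are computed separately with cv::calibrateCamera,
// 2) the extrinsic stereo parameters (R, T) are estimated with the intrinsics
//    held fixed (CALIB_FIX_INTRINSIC).
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

void twoStageStereoCalibration(
        const std::vector<std::vector<cv::Point3f>>& objectPoints,  // chessboard corners in 3D
        const std::vector<std::vector<cv::Point2f>>& cornersLeft,   // detected corners, device 1
        const std::vector<std::vector<cv::Point2f>>& cornersRight,  // detected corners, device 2
        cv::Size imageSize,
        cv::Mat& K1, cv::Mat& D1, cv::Mat& K2, cv::Mat& D2,
        cv::Mat& R, cv::Mat& T) {
    std::vector<cv::Mat> rvecs, tvecs;                              // per-view poses (unused here)

    // Stage 1: intrinsic parameters, computed independently for each camera.
    cv::calibrateCamera(objectPoints, cornersLeft,  imageSize, K1, D1, rvecs, tvecs);
    cv::calibrateCamera(objectPoints, cornersRight, imageSize, K2, D2, rvecs, tvecs);

    // Stage 2: extrinsic stereo parameters with the intrinsics kept fixed.
    cv::Mat E, F;
    cv::stereoCalibrate(objectPoints, cornersLeft, cornersRight,
                        K1, D1, K2, D2, imageSize,
                        R, T, E, F,
                        cv::CALIB_FIX_INTRINSIC);
}
```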

20

Reconstruction Process Flow

(State diagram; the same flow runs on both devices, starting once the Bluetooth session is created and calibration has been performed)

• Initial State
• Validating message delay (until the message delay is calculated)
• Initializing camera
• Load parameters
• Compute undistorted rectified bitmap
• Choose Reconstruction Algorithm
• Reconstruction
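The "compute undistorted rectified bitmap" step corresponds to computing the rectification maps once from the loaded calibration parameters; a minimal sketch (illustrative):

```cpp
// Illustrative sketch: compute the undistort/rectify maps once after loading the
// calibration parameters; the maps are later applied to every captured image.
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>

void computeRectifyMaps(const cv::Mat& K1, const cv::Mat& D1,
                        const cv::Mat& K2, const cv::Mat& D2,
                        const cv::Mat& R,  const cv::Mat& T,
                        cv::Size imageSize,
                        cv::Mat& map1Left,  cv::Mat& map2Left,
                        cv::Mat& map1Right, cv::Mat& map2Right,
                        cv::Mat& Q) {
    cv::Mat R1, R2, P1, P2;
    cv::stereoRectify(K1, D1, K2, D2, imageSize, R, T,
                      R1, R2, P1, P2, Q);                   // Q is the reprojection matrix

    cv::initUndistortRectifyMap(K1, D1, R1, P1, imageSize, CV_32FC1, map1Left,  map2Left);
    cv::initUndistortRectifyMap(K2, D2, R2, P2, imageSize, CV_32FC1, map1Right, map2Right);
}
```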

21

Reconstruction Process Flow (Cont.)

(State diagram continued; the same flow runs on both devices)

• Capture: one device sends a capture order and the other a capture indication; both wait the message delay and then capture an image
• Remap Images to get Rectified Images
• Send and Receive Image: the devices exchange their captured images
• Compute Disparity by stereo correspondence
• Save Disparity Image
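The remap and disparity-saving steps, sketched with OpenCV (illustrative; the disparity computation itself is shown in the earlier block-matching sketches):

```cpp
// Illustrative sketch: apply the precomputed rectification maps to a captured
// image, and convert a disparity map to an 8-bit grayscale image for saving.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <string>

cv::Mat rectifyImage(const cv::Mat& captured, const cv::Mat& map1, const cv::Mat& map2) {
    cv::Mat rectified;
    cv::remap(captured, rectified, map1, map2, cv::INTER_LINEAR);
    return rectified;
}

void saveDisparityImage(const cv::Mat& disparity, const std::string& path) {
    cv::Mat gray8;
    cv::normalize(disparity, gray8, 0, 255, cv::NORM_MINMAX, CV_8U);  // scale disparities to 0..255
    cv::imwrite(path, gray8);
}
```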

22

Implementation Issues

• Simultaneous Photo Capture
– Problem: the devices must capture images at the same time to get good results
– Solution: implemented an RTT calculation algorithm that ignores anomalous results and performs update phases during operation (see the sketch below)
– Other solutions: web service, GPS
– Advantages: messages traverse only a short distance; no dependency on GPS signals
– Main disadvantage: GPS achieves better accuracy
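The RTT-based synchronization is the authors' own protocol; purely to illustrate the idea, a rough sketch that estimates the one-way message delay from repeated round-trip measurements while discarding anomalous samples:

```cpp
// Illustrative sketch: estimate the one-way message delay from repeated
// round-trip-time (RTT) measurements, discarding anomalous samples.
#include <algorithm>
#include <numeric>
#include <vector>

double estimateMessageDelay(std::vector<double> rttSamples) {
    if (rttSamples.empty()) return 0.0;

    // Discard anomalous results: keep only samples close to the median RTT.
    std::sort(rttSamples.begin(), rttSamples.end());
    const double median = rttSamples[rttSamples.size() / 2];
    std::vector<double> kept;
    for (double rtt : rttSamples)
        if (rtt <= 1.5 * median) kept.push_back(rtt);

    // The one-way delay is roughly half the average of the remaining RTTs.
    const double meanRtt = std::accumulate(kept.begin(), kept.end(), 0.0) / kept.size();
    return meanRtt / 2.0;
}
```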

23

Implementation Issues (Cont.)

• Inter-Device Communication
– Problem: need to pass messages and data between the two devices
– Solution: Bluetooth communication
– Other solutions: Wi-Fi
– Advantages: existing easy-to-use framework, simple protocol with low overhead
– Main disadvantage: smaller bandwidth (affects the duration of the reconstruction process)

24

Summary

• A very challenging and enjoyable project
• An introduction to both computer vision and mobile app development
• The final outcome is a stable, user-friendly app providing live results
• Helpful documentation for future use:
– Detailed webpage and demo movie
• Finally, many thanks to Aaron for his guidance and support throughout this project!

25

References

• Computer Vision course, Spring 2010, University of Illinois
– http://www.cs.illinois.edu/~dhoiem/courses/vision_spring10/lectures/
• Developing Apps for iOS, Paul Hegarty, Stanford Fall 2010 course (available on iTunes)
• Multiple View Geometry in Computer Vision course, University of North Carolina
– http://www.cs.unc.edu/~marc/mvg/slides.html
• Modeling the Pinhole Camera, course lecture, University of Central Florida
• Computer Vision tutorial, GIP lab
• Learning OpenCV, Gary Bradski & Adrian Kaehler
• Stereo Vision using the OpenCV library, Sebastian Droppelmann, Moos Hueting, Sander Latour and Martijn van der Veen

26

The End

COMMENTS & QUESTIONS
