
Page 1:

Mobile Point Fusion
Real-time 3D surface reconstruction from depth images on a mobile platform

Aaron Wetzler

Presenting: Daniel Ben-Hoda, Gal Kamar

Supervisors: Prof. Ron Kimmel, Yaron Honen

Supported by grant 267414

Page 2:

Motivation

Dense 3D Reconstruction is a fundamental problem in the field of computer vision.

Currently, there are few solutions for real-time dense 3D reconstruction on mobile devices.

Dense 3D reconstruction on mobile devices can be used to create 3D scanners and augmented-reality apps for SMBs and personal use.

Depth cameras are becoming cheaper, more readily available, and are starting to appear in mobile devices.

The graphical and computational power of mobile devices is increasing.

Mobile Point Fusion = dense 3D reconstruction from a depth camera, in real time, on mobile devices.

Page 3:

System Setup

A Structure depth sensor attached to an iPad with Retina display.

Page 4:

The UI

[Screenshot of the UI: RGB image, depth image, surface normals, and the rendered model]

Page 5:

Pipeline and System Overview

The pipeline continuously receives depth images from the sensor

Using the global model and the new data, the camera pose is estimated

The global model is updated with the new data

The updated global model is rendered to the screen from the new point of view
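A minimal sketch of this per-frame loop is shown below; all types and function names are illustrative placeholders rather than the project's actual API.

```cpp
// Hypothetical per-frame loop; every type and function here is an illustrative stub.
struct DepthImage {};                       // raw depth frame from the sensor
struct InputData {};                        // filtered positions, normals, radii
struct GlobalModel {};                      // point-based model kept on the GPU
struct Pose {};                             // 6 DOF camera pose

static InputData preprocess(const DepthImage&) { return {}; }                               // filter + per-pixel attributes
static Pose estimatePose(const GlobalModel&, const InputData&, const Pose& last) { return last; } // ICP against the model
static void fuse(GlobalModel&, const InputData&, const Pose&) {}                            // merge / add / remove points
static void render(const GlobalModel&, const Pose&) {}                                      // draw from the new viewpoint

void runPipeline(GlobalModel& model, Pose pose /* last known pose */) {
    for (;;) {
        DepthImage depth;                          // next frame from the depth sensor
        InputData input = preprocess(depth);
        pose = estimatePose(model, input, pose);   // align the new data to the model
        fuse(model, input, pose);                  // update the global model
        render(model, pose);                       // render from the new point of view
    }
}
```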

Page 6:

Model Representation

Our system adopts a purely point-based representation. This representation eliminates the need for spatial and structural data structures.

Our global model is a simple point cloud.

Points are assumed to be oriented circular disks.

Each point is allocated a unique ID upon first encounter. The ID corresponds to the index of the OpenGL GL_POINT primitive representing the point.

The system recognizes previously encountered points using these IDs.

Each time a point is encountered, the system's confidence in that point is increased.

The system distinguishes between stable and unstable model points according to each point's confidence.

Page 7:

Model Attributes

The model data structure is maintained on the GPU. Each point is represented by the following attributes:

ID: i ∈ ℕ⁺

Position: vᵢ ∈ ℝ³

Normal: nᵢ ∈ ℝ³

Radius: rᵢ ∈ ℝ

Confidence: cᵢ ∈ ℝ, a confidence counter; points with cᵢ ≥ c_stable are considered stable

Timestamp: tᵢ ∈ ℕ, the last iteration in which this point was modified
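As a rough host-side illustration of such a record (the actual data lives in GPU vertex buffers and textures, and may be packed differently):

```cpp
#include <cstdint>

// Illustrative host-side mirror of one model point.
struct ModelPoint {
    uint32_t id;          // i   - index of the GL_POINT primitive
    float    position[3]; // v_i in R^3
    float    normal[3];   // n_i in R^3, unit length
    float    radius;      // r_i - oriented disk radius
    float    confidence;  // c_i - incremented on every re-observation
    uint32_t timestamp;   // t_i - last iteration the point was modified
};

// A point is treated as stable once its confidence passes the threshold.
inline bool isStable(const ModelPoint& p, float cStable) {
    return p.confidence >= cStable;
}
```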

Page 8:

Depth Image Preprocessing

Depth images are filtered to remove sensor noise

For each depth measurement the following is computed:

3D position

Normal

Radius

[Pipeline diagram: input depth images → filtering → 3D position extraction → normal estimation → radius estimation → camera pose estimation]

Page 9:

Bilateral Filtering

Depth images are filtered using a Gaussian Bilateral Filter to remove sensor noise

Gaussian weights are pre-computed for every possible pixel distance within the filter window.

Gaussian weights are also pre-computed for range differences of 0-256.
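A minimal CPU sketch of a bilateral filter with pre-computed Gaussian look-up tables; it assumes an 8-bit depth map and illustrative parameter names, and is not the project's actual implementation (the Future Work slide notes moving this step to the GPU).

```cpp
#include <cmath>
#include <cstdint>
#include <cstdlib>
#include <vector>

// Bilateral filter sketch for a width x height, row-major, 8-bit depth image.
std::vector<uint8_t> bilateralFilter(const std::vector<uint8_t>& depth,
                                     int width, int height,
                                     int radius, float sigmaSpace, float sigmaRange) {
    // Spatial weights: one entry per squared pixel distance inside the filter box.
    std::vector<float> spatialLUT(2 * radius * radius + 1);
    for (int d2 = 0; d2 < (int)spatialLUT.size(); ++d2)
        spatialLUT[d2] = std::exp(-d2 / (2.0f * sigmaSpace * sigmaSpace));

    // Range weights: one entry per possible intensity difference 0..256.
    float rangeLUT[257];
    for (int d = 0; d <= 256; ++d)
        rangeLUT[d] = std::exp(-(float)(d * d) / (2.0f * sigmaRange * sigmaRange));

    std::vector<uint8_t> out(depth.size());
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float sum = 0.0f, weightSum = 0.0f;
            uint8_t center = depth[y * width + x];
            for (int dy = -radius; dy <= radius; ++dy) {
                for (int dx = -radius; dx <= radius; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                    uint8_t sample = depth[ny * width + nx];
                    float w = spatialLUT[dx * dx + dy * dy] *
                              rangeLUT[std::abs((int)sample - (int)center)];
                    sum += w * sample;
                    weightSum += w;
                }
            }
            out[y * width + x] = (uint8_t)(sum / weightSum + 0.5f);
        }
    }
    return out;
}
```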

Page 10:

Bilateral Filtering

[Figure: raw depth image and filtered depth image, side by side]

Page 11:

Computing Camera-Space Position

Given the supplied OpenGL projection matrix P, which models the projection of the camera onto the image plane, the reverse of the OpenGL projection process is performed to recover each pixel's camera-space position.
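One common way to perform this inverse projection, sketched here with GLM and assuming the depth value has already been mapped to normalized device coordinates (see [9] and [10]):

```cpp
#include <glm/glm.hpp>

// Recover the camera-space position of pixel (u, v) from its depth by
// inverting the OpenGL projection. zNdc is the depth already mapped to
// NDC in [-1, 1]; (width, height) is the image resolution.
glm::vec3 unproject(float u, float v, float zNdc,
                    int width, int height, const glm::mat4& P) {
    // Window coordinates -> normalized device coordinates.
    glm::vec4 ndc(2.0f * (u + 0.5f) / width  - 1.0f,
                  2.0f * (v + 0.5f) / height - 1.0f,
                  zNdc,
                  1.0f);
    // Reverse the projection and the perspective divide.
    glm::vec4 eye = glm::inverse(P) * ndc;
    return glm::vec3(eye) / eye.w;
}
```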

Page 12:

Normal Estimation

Surface normals are estimated from neighboring pixels by calculating central differences:

dx = p₈ᶜ − p₂ᶜ

dy = p₆ᶜ − p₄ᶜ

n(p₅ᶜ) = dx × dy

where pᵢᶜ denotes the camera-space position of pixel i in the 3×3 neighborhood around the center pixel p₅.
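A CPU sketch of this computation (the project evaluates it in a shader; the naming and sign convention here may differ from the p₂/p₄/p₆/p₈ numbering on the slide):

```cpp
#include <glm/glm.hpp>
#include <vector>

// Central-difference normal from the camera-space position map.
// positions is a width x height, row-major array; (x, y) is assumed to be an
// interior pixel with valid depth in all four neighbors.
glm::vec3 estimateNormal(const std::vector<glm::vec3>& positions,
                         int width, int x, int y) {
    glm::vec3 dx = positions[y * width + (x + 1)] - positions[y * width + (x - 1)]; // horizontal difference
    glm::vec3 dy = positions[(y + 1) * width + x] - positions[(y - 1) * width + x]; // vertical difference
    return glm::normalize(glm::cross(dx, dy));
}
```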

Page 13:

Radii Estimation

Comparing solid angles provides an upper limit on the radius of the surface patch that was projected onto a pixel.

Radii are reduced over time as more detailed measurements are acquired; therefore the radii are set to their respective upper limits:

δA = (cos α / cos θ) · (z / f)² · δI

where δI is the pixel's area on the image plane, z the measured depth, f the focal length, α the angle between the viewing ray and the optical axis, and θ the angle between the viewing ray and the surface normal.
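As a rough illustration only, the relation above can be turned into a radius bound by treating each point as a disk of area δA; the choice δI = 1 pixel and the disk model are assumptions of this sketch, not the project's exact convention.

```cpp
#include <cmath>

// Per-pixel radius upper bound from the solid-angle relation above.
// z: camera-space depth, f: focal length in pixels, cosAlpha: cosine of the
// angle between viewing ray and optical axis, cosTheta: cosine of the angle
// between viewing ray and surface normal. Assumes deltaI = 1 pixel.
float radiusUpperBound(float z, float f, float cosAlpha, float cosTheta) {
    const float kPi    = 3.14159265358979f;
    const float deltaI = 1.0f;                                       // one pixel of image area
    float deltaA = (cosAlpha / cosTheta) * (z / f) * (z / f) * deltaI;
    return std::sqrt(deltaA / kPi);                                  // disk of area deltaA
}
```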

Page 14:

Depth Image Preprocessing

Attributes of new points are saved into textures for use at later stages

[Figure: depth image, positions texture, and normals + radius texture]

Page 15:

Camera Pose Estimation

The current 6 DOF camera pose is estimated by aligning the new depth image with the global model as viewed from the last known position of the camera.

The alignment is performed using Iterative Closest Point (ICP), which finds the 6 DOF transformation between two point clouds by finding correspondences and computing the transformation that minimizes a chosen error metric:

Collect visible stable model points

Find correspondences

Solve linear least squares

Apply the transformation to the model

Page 16:

ICP Preprocessing

Stable model points are rendered from the last known point of view, and the IDs of these points are saved into a texture.

Using this texture, the IDs of the visible stable model points are collected for further rendering.

[Diagram: visible stable point IDs (e.g. 84, 321, 72, 214, 625) rendered to a texture and gathered into an index list]

Page 17:

Finding Correspondences

Correspondences are found using projective association

The visible stable model points previously collected are projected onto the new depth image.

If a model point is projected onto a pixel containing a depth measurement, a correspondence is found.

Correspondences are rejected if

The angle between normals is greater than 𝛿𝑛𝑜𝑟𝑚𝑎𝑙

The Euclidean distance is greater than 𝛿𝑑𝑖𝑠𝑡𝑎𝑛𝑐𝑒
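A sketch of these rejection tests, assuming the projection of the model point and the per-pixel lookup happen elsewhere; the function name and threshold parameters are illustrative.

```cpp
#include <glm/glm.hpp>
#include <cmath>

// Returns true if a projected model point and the depth pixel it landed on
// should be accepted as an ICP correspondence.
bool acceptCorrespondence(const glm::vec3& modelPos, const glm::vec3& modelNormal,
                          const glm::vec3& framePos, const glm::vec3& frameNormal,
                          float deltaNormal /*radians*/, float deltaDistance /*meters*/) {
    // Reject if the points are too far apart in Euclidean distance.
    if (glm::length(framePos - modelPos) > deltaDistance) return false;
    // Reject if the normals disagree by more than the angular threshold.
    float cosAngle = glm::clamp(glm::dot(glm::normalize(frameNormal),
                                         glm::normalize(modelNormal)), -1.0f, 1.0f);
    return std::acos(cosAngle) <= deltaNormal;
}
```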

Page 18:

Finding the Transformation

The desired 6 DOF transformation is the one that minimizes a chosen error metric.

To accelerate convergence, the point-to-plane error metric is chosen.

Computing the desired transformation requires solving a non-linear least squares problem.

The non-linear least squares problem is approximated by a linear one to reduce computation time.
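A possible CPU sketch of the linearized point-to-plane solve, following the small-angle linearization described in [3]; it assumes Eigen and is not the project's GPU implementation.

```cpp
#include <Eigen/Dense>
#include <vector>

struct Match {
    Eigen::Vector3d p;  // source point (new frame), already in model coordinates
    Eigen::Vector3d q;  // corresponding model point
    Eigen::Vector3d n;  // normal at the model point
};

// Solve for x = (alpha, beta, gamma, tx, ty, tz) of the linearized point-to-plane
// problem: minimize sum_i ((R p_i + t - q_i) . n_i)^2 with R approximated by small
// rotations. Building R, t from x and iterating is left to the caller.
Eigen::Matrix<double, 6, 1> solvePointToPlane(const std::vector<Match>& matches) {
    Eigen::Matrix<double, 6, 6> A = Eigen::Matrix<double, 6, 6>::Zero();
    Eigen::Matrix<double, 6, 1> b = Eigen::Matrix<double, 6, 1>::Zero();
    for (const Match& m : matches) {
        Eigen::Matrix<double, 6, 1> J;
        J.head<3>() = m.p.cross(m.n);        // derivative w.r.t. the rotation
        J.tail<3>() = m.n;                   // derivative w.r.t. the translation
        double r = (m.q - m.p).dot(m.n);     // signed point-to-plane residual
        A += J * J.transpose();              // accumulate the normal equations
        b += J * r;
    }
    return A.ldlt().solve(b);                // 6x6 symmetric solve
}
```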

Page 19:

Applying the Transformation

The computed transformation is applied to the model, and the process is repeated until sufficient accuracy is reached.

Empirical data shows that the convergence of the point-to-plane ICP variant is not monotonic; therefore correspondences are re-computed every several iterations.

Page 20:

Updating the Global Model

Once the global model and the new depth image are aligned, the new data is fused into the global model.

The model is updated using the following steps:

The model is rendered in high resolution

Point fusion

Point Removal

Page 21:

Super-sampled Rendering

The visible model points' IDs and attributes are rendered to textures to be used in the following steps.

Each point is rendered into a single texel to reveal the actual surface sample distribution.

The model is rendered at 16 times the depth sensor's native resolution, so each of the camera's pixels corresponds to a 4×4 neighborhood in the super-sampled textures.

Page 22:

Point Fusion

For each new point, a single matching model point is sought.

The matching point is chosen from the corresponding 4×4 neighborhood as follows:

Discard points farther than ±𝛿𝑑𝑒𝑝𝑡ℎ from the viewing ray, adjusted according to sensor uncertainty

Discard points whose normals deviate from the new normal by an angle larger than 𝛿𝑛𝑜𝑟𝑚𝑎𝑙

From the remaining points, select the points with the highest confidence

From those, select the point closest to the viewing ray

Page 23:

Point Fusion

The matched points' attributes are averaged, weighted by their respective confidence values.

If no match is found, the point is treated as a new point and added to the model.

The CPU reads the averaged values and updates the global model.

[Figure: point fusion result; green points are stable]
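A minimal sketch of the confidence-weighted merge; treating the new measurement as having weight 1 and keeping the smaller radius are assumptions of this sketch, and the project's exact weighting may differ.

```cpp
#include <glm/glm.hpp>
#include <algorithm>

struct FusedPoint { glm::vec3 position, normal; float radius, confidence; };

// Merge a matched model point with a new measurement, weighted by the model
// point's confidence; field names follow the attributes listed earlier.
FusedPoint fuse(const FusedPoint& model, const FusedPoint& fresh) {
    float c = model.confidence;
    FusedPoint out;
    out.position   = (c * model.position + fresh.position) / (c + 1.0f);
    out.normal     = glm::normalize((c * model.normal + fresh.normal) / (c + 1.0f));
    out.radius     = std::min(model.radius, fresh.radius);  // keep the finer estimate
    out.confidence = c + 1.0f;                               // one more observation
    return out;
}
```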

Page 24:

Point Removal

In addition to merging and adding new points, under certain conditions model points need to be removed:

Points that remain unstable for a long period of time are likely outliers or artifacts from moving objects, and are therefore removed from the model.

Stable model points that were merged with new data have their positions adjusted; therefore, model points that lie in front of these points from the current point of view are removed, as they are free-space violations.

Neighboring points with very similar positions and normals and overlapping disks are redundant, and are removed to further simplify the model.

Page 25:

Point Removal

To identify the points for removal, the super-sampled textures are used.

For each stable model point that was merged with new data, the corresponding 4×4 neighborhood is searched for removal candidates.

The IDs of points that need to be removed are rendered to a texture.

The model is rendered using indexed rendering throughout the pipeline, so each time the model is rendered, a list of indices (the vertex IDs) is passed to the GPU.

Removing a point from the model can therefore be done simply by removing the point's ID from the list of model indices.

Page 26:

Point Removal

To prevent fragmentation of the model memory, the following method is employed:

Each time a point is removed, its index is inserted into a FIFO queue.

When a new point needs to be added to the model, it is written to the memory slot corresponding to the index at the head of the queue.

If the queue is empty, the point is inserted after the last model point in memory.
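A host-side sketch of this free-slot bookkeeping; the class and method names are illustrative, and the real project recycles slots inside GPU vertex buffers.

```cpp
#include <cstdint>
#include <queue>

// Removed point indices are recycled before the buffer is grown at the end.
class PointSlotAllocator {
public:
    // Called when a point is removed from the model.
    void release(uint32_t index) { freeSlots_.push(index); }

    // Called when a new point must be added; returns the buffer slot to write.
    uint32_t acquire() {
        if (!freeSlots_.empty()) {            // reuse a hole left by a removed point
            uint32_t slot = freeSlots_.front();
            freeSlots_.pop();
            return slot;
        }
        return nextFreeSlot_++;               // otherwise append after the last point
    }

private:
    std::queue<uint32_t> freeSlots_;          // FIFO of recycled indices
    uint32_t nextFreeSlot_ = 0;               // one past the last used slot
};
```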

Page 27:

Comparison to Kinect Fusion

As opposed to Kinect Fusion, which employs a volumetric data structure, Mobile Point Fusion uses a purely point-based representation throughout the pipeline:

Enables scanning of large scenes

Resolution is limited only by the depth sensor

Computation time is less dependent on model size and resolution

Eliminates the overhead of converting between representations

Does not create a watertight 3D model (but one can be generated offline later)

Page 28:

Results

Page 29:

Accomplishments and Unresolved Issues

Mobile Point Fusion has demonstrated how point-based fusion can be efficiently performed on mobile devices.

The main issue is the ICP correspondences. In order to adapt the system to the mobile platform, the model representation had to be modified. This simpler representation, which uses the GL_POINT primitive, has the drawback that all fragments of a point share the same depth. This inaccuracy might be the cause of the ICP correspondence failures.

OpenGL ES 3.0 limits the use of floating-point textures to 16-bit components. This limited precision has caused various numerical issues.

Page 30:

Future Work

Performance optimizations:

Utilize the GPU to perform bilateral filtering

Avoid expensive CPU texture reads in the point removal stages

Utilize mipmapping to accelerate various pipeline stages

Implement the Dynamic Estimation stage

Integrate RGB data into the pipeline

Offload old model data to HDD for later use

Page 31:

Bibliography

[1] Maik Keller, Damien Lefloch, Martin Lambers, Shahram Izadi, Tim Weyrich, Andreas Kolb. Real-time 3D Reconstruction in Dynamic Scenes using Point-based Fusion. In 2013 International Conference on 3D Vision

[2] C. Tomasi, R. Manduchi. Bilateral filtering for gray and color images. In Proceedings of the 1998 IEEE International Conference on Computer Vision, pages 839-846

[3] Kok-Lim Low. Linear Least-Squares Optimization for Point-to-Plane ICP Surface Registration. In Technical Report TR04-004, Department of Computer Science, University of North Carolina at Chapel Hill, February 2004

[4] Szymon Rusinkiewicz, Marc Levoy. Efficient Variants of the ICP Algorithm, 2001

[5] Shahram Izadi, David Kim, Otmar Hilliges, David Molyneaux, Richard A. Newcombe, Pushmeet Kohli, Jamie Shotton, Steve Hodges, Dustin Freeman, Andrew J. Davison, Andrew Fitzgibbon. KinectFusion: Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera. In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST), 2011

[6] G. Guennebaud, M. Paulin. Efficient Screen Space Approach for Hardware Accelerated Surfel Rendering, November 2003

[7] Thibaut Weise, Thomas Wismer, Bastian Leibe, Luc Van Gool. In-Hand Scanning with Online Loop Closure, 2009

[8] M. Lindenbaum, 236873 Computer Vision Technion course, 2014

[9] https://www.opengl.org/wiki/Compute_eye_space_from_window_space, Compute eye space from window space

[10] http://www.songho.ca/opengl/gl_projectionmatrix.html, OpenGL projection Matrix

Page 32:

Thank You