Improved Phase Detection Algorithms for Indirect Time of Flight Range Imaging Cameras

Benjamin M. M. Drayton, Dale A. Carnegie School of Engineering and Computer Science

Victoria University of Wellington Wellington, New Zealand [email protected]

Adrian A. Dorrington School of Engineering University of Waikato

Hamilton, New Zealand

Abstract— Two significant sources of systematic error in current generation indirect time of flight range imaging cameras are systematic linearity error due to harmonics present in the system and axial motion error. Both of these errors are due to violations of assumptions made by the phase detection algorithm. This paper provides a theoretical framework to understand these errors and experimental results demonstrating that both of these errors can be mitigated by replacing the current state of the art phase detection algorithm with the Windowed Discrete Fourier Transform. Using this technique the RMS linearity error of the Victoria University Range Imaging System was decreased from 0.018 ± 0.002 m to 0.003 ± 0.001 m and the RMS motion error was decreased from 0.044 ± 0.002 m to 0.009 ± 0.004 m.

Keywords-range imaging; phase detection; ToF; mobile robotics;

I. INTRODUCTION

Indirect time of flight cameras are increasingly being used in applications where high quality, full field of view range data are required. These cameras operate by encoding the time of flight into a phase shift between two modulated signals. In comparison with the laser scanning systems commonly used for this purpose, their compact size and lack of moving parts make these cameras well suited to mobile robotics applications.

Impeding the implementation of these sensors in this field are a systematic linearity error and an error introduced by motion. Typically these cameras acquire several raw image frames to compute one range image. This process takes an appreciable period of time, hence motion in the scene can lead to errors. In this paper we focus on mitigating the effects of axial motion, towards and away from the camera. We propose replacing the current state of the art phase detection algorithm used in indirect time of flight cameras in order to mitigate both of these errors.

II. BACKGROUND THEORY

Time of flight measurements are based on the principle that the distance to the object being measured is related to the time by the equation

d = vt / 2 (1)

where t is the time measured for the signal to travel to the object and return to the sensor and v is the velocity of the signal. In this case electromagnetic waves are used so v is the speed of light, c. For indirect time of flight range imaging, an image sensor and an illumination source are modulated at the same frequency, generally in the range 10 – 100 MHz. A phase shift between the signals is introduced by the time taken for the illumination signal to reflect back from the object being measured to the sensor. The travel time is related to the phase, φ, by the equation

t = φ / (2πfmod) (2)

where fmod is the modulation frequency. Substituting (2) into (1), and using c for the velocity, the equation for the distance with respect to phase is [1]

d = cφ / (4πfmod) . (3)
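As an illustration of (3), a minimal Python sketch (the function names are ours, not part of the camera software):

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def phase_to_distance(phi, f_mod):
    # d = c * phi / (4 * pi * f_mod), equation (3)
    return C * phi / (4.0 * np.pi * f_mod)

def unambiguous_range(f_mod):
    # The phase wraps every 2*pi, so the maximum unambiguous distance is c / (2 * f_mod)
    return C / (2.0 * f_mod)

print(unambiguous_range(30e6))          # ~5.0 m at 30 MHz, as used later in the paper
print(phase_to_distance(np.pi, 30e6))   # ~2.5 m, i.e. half the unambiguous range
```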

Modern indirect time of flight sensors are generally made using CMOS technology where a 2D array of intensity based pixels can be electronically modulated. As the illumination signal is modulated at the same frequency, the intensity observed by the sensor is related to the phase shift between the two modulation signals by the equation

I = A cos(φ) + B , (4)

where A is an amplitude coefficient which includes the sensitivity of the sensor, the amplitude of the modulated light, the reflectivity of the object and the inverse square decrease in amplitude with distance due to spreading light waves. B is an offset caused by background illumination, DC offset in the ADC, and in some sensor architectures, the asymmetry in the pixel collectors [2].

A single measurement of the intensity is not sufficient to calculate the phase. As implied by the coefficients A and B above, the intensity is dependent on several other environmental and distance effects. Instead, N measurements are taken, where N must be at least three, with a phase step δ introduced between each measurement. The measured intensity for frame n (n = 1…N) is therefore

In = A cos( (n - 1)δ – φ ) + B . (5)

The set of N intensity measurements forms the so-called correlation waveform, as it represents the correlation between the illumination and sensor modulation signals. Its phase is the phase of the first measurement and therefore the phase relating to the distance to the object. A Fourier Transform is then used to determine the phase as

φ = arctan[ Σn In sin( (n - 1)δ ) / Σn In cos( (n - 1)δ ) ] . (6)

Four frames are normally used with a phase step of π/2, as this simplifies (6) to

φ = arctan[ ( I2 - I4 ) / ( I1 - I3 ) ] , (7)

which is reported throughout the literature [2]-[4].
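For illustration, the following Python sketch (ours, not the camera's FPGA implementation) evaluates the general estimator (6) and the four frame form (7):

```python
import numpy as np

def phase_dft(intensities, delta):
    # General N frame estimator (6): single-bin discrete Fourier transform of the correlation waveform
    I = np.asarray(intensities, dtype=float)
    n = np.arange(len(I))
    return np.angle(np.sum(I * np.exp(1j * n * delta))) % (2 * np.pi)

def phase_four_frame(I1, I2, I3, I4):
    # Standard four frame algorithm (7), obtained from (6) with delta = pi/2
    return np.arctan2(I2 - I4, I1 - I3) % (2 * np.pi)

# Check against the model of (5): I_n = A*cos((n-1)*delta - phi) + B
A, B, phi, delta = 1.0, 0.5, 1.2, np.pi / 2
I = A * np.cos(np.arange(4) * delta - phi) + B
print(phase_dft(I, delta), phase_four_frame(*I))   # both recover phi = 1.2
```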

Experiments in this paper were performed using the Victoria University Range Imaging System [5], a custom built indirect time of flight range imaging system based on a PMD19K-2 sensor (PMDTechnologies GmbH, Siegen, Germany). An FPGA is used to control data flow within the system, to provide the modulation signals for the sensor and a bank of laser diodes, and to perform the phase calculation. This system allows complete control over the modulation signals and the phase calculation, which is not available in commercial cameras. It has a resolution of 160 × 120 pixels and can use modulation frequencies up to 40 MHz. Illumination is provided by sixteen 658 nm red laser diodes providing a total output power of 800 mW. Measurements are recorded to a computer using an Ethernet interface.

A 4.2 m MSA-M6S linear table (Macron Dynamics, Croydon, PA, USA) fitted with an ST5909 stepper motor (Nanotec GmbH, Munich, Germany) and an HEDS-5540 optical encoder (Avago Technologies, San Jose, CA, USA) was used for the measurements in this paper. This provides linear steps of 0.375 mm and an encoder accuracy of 0.3 mm. A 244-NPN-NC-06 proximity sensor (Macron Dynamics, Croydon, PA, USA) is used as a home position and is accurate to 0.3 mm. The maximum stable velocity is 2 m/s.

III. SYSTEMATIC LINEARITY ERROR

The equation presented in section II for determining the phase offset between the modulation signals for the illumination and the sensor makes the assumption that the correlation waveform is sinusoidal. This necessitates that one or both of the modulation signals are sinusoidal. In reality, due to the nonlinear transfer function of the illumination source and the pixel gain modulation, as well as the ease with which they can be generated in hardware, these signals are generally square waves. This introduces harmonics into the system that cause a systematic sinusoidal error with distance [6]. A sinusoidal linearity error with four cycles inside the unambiguous measurement distance of the camera is expected using the standard four frame algorithm, due to the 3rd and 5th harmonics.

Attempts have been made to calibrate this error using sinusoids [1], 6th order polynomials [7], B-spline fitting [8],[9], and look up tables [10]. Look up tables and B-splines in particular have been reasonably successful at mitigating systematic linearity error. However, this error is dependent on the modulation frequency used and therefore a separate calibration is required for each desired frequency. A variable modulation frequency is desirable as the modulation frequency is negatively correlated with the maximum unambiguous range and positively correlated with the precision.

The systematic linearity error of the Victoria University Range Imaging System, implementing the standard algorithm, was measured using the apparatus described in section II. Linearity measurements were taken by advancing the target in 50 mm steps, taking 100 measurements per step, averaging the intensities for each step and then performing the phase calculation. This was done over a distance of 4 m; however, some of the data could not be used due to the camera saturating. A linear fit was performed on the acquired data, with the residuals being the linearity error. A modulation frequency of 30 MHz was used for these measurements. These data are shown in Fig. 1.

For a modulation frequency of 30 MHz the unambiguous measurement distance is 5 m. Between 1 m and 3.5 m two cycles are observed; therefore, this error is likely to be the expected four cycle error. Another method of measuring the linearity error can be used to confirm this. For this method, the distance to a stationary object is measured while the initial phase offset between the illumination and sensor modulation signals is increased from 0 to 2π by the FPGA. The data from this method are shown in Fig. 2, with the data measured by moving an object also plotted for comparison. This demonstrates that the linearity error has four cycles as expected; the two measurement techniques agree except for a phase offset caused by the initial displacement of the object and a loss of quality in the moved object method at long ranges. The RMS linearity error, calculated from the data using the initial phase offset stepping method, is 0.018 ± 0.002 m.
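A minimal sketch of the moved object linearity calculation described above (the array names are ours; actual and measured stand for the linear table positions and the averaged camera readings):

```python
import numpy as np

def linearity_error(actual, measured):
    # Fit a straight line to measured-vs-actual distance; the residuals are the linearity error
    slope, offset = np.polyfit(actual, measured, 1)
    return measured - (slope * actual + offset)

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

# Synthetic example: a four cycle sinusoidal error over a 5 m unambiguous range, sampled in 50 mm steps
actual = np.arange(0.5, 3.5, 0.05)
measured = actual + 0.02 * np.sin(2 * np.pi * 4 * actual / 5.0)
print(rms(linearity_error(actual, measured)))
```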

Figure 1. Systematic linearity error for the four frame standard algorithm

Figure 2. Comparison of measurement techniques (initial phase stepping and moved object) for the systematic linearity error of the four frame standard algorithm


Figure 3. Relative amplitude versus frequency for the demodulation signal of the Victoria University Range Imaging System (log-scale relative amplitude versus harmonic number)

TABLE I. HARMONIC AMPLITUDES OF THE DEMODULATION SIGNAL

Harmonic      Inverse of the Relative Amplitude
Fundamental   1
2nd           105
3rd           31
4th           500
5th           1020
6th           926
7th           1449

As square wave modulation is used, it is expected that the correlation waveform will be triangular and therefore contain odd harmonics with amplitudes decreasing as 1/m², where m is the harmonic number. However, the response of the modulation drivers, the sensor and the illumination source can all affect the harmonic content of the system. To measure the actual harmonic content of the system the camera was set to record 64 frames per phase measurement and several measurements were taken of a stationary object. A Fourier Transform was used to determine the harmonic content of the signal, shown in Fig. 3 for a typical pixel. A table showing the inverse of the relative amplitude of the harmonics is shown in Table I. This demonstrates that the harmonics are significantly smaller than would be expected for triangular modulation. There is a significant second harmonic that was not expected; however, by far the most significant harmonic is the 3rd. Harmonics higher than the 4th are unlikely to have any measurable effect on the phase measurement.
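As a simulated illustration of this measurement (not the camera data), the correlation of two ideal square waves is a triangle wave; sampling it over 64 frames and taking an FFT shows the expected odd harmonics falling off as 1/m²:

```python
import numpy as np

def triangle_correlation(n_frames, phi=0.7):
    # Ideal correlation waveform of two square waves: a triangle wave sampled at n_frames phase steps
    x = (np.arange(n_frames) / n_frames - phi / (2 * np.pi)) % 1.0
    return 1.0 - 4.0 * np.abs(x - 0.5)

def relative_harmonics(signal, n_harmonics=7):
    # Amplitude of each harmonic relative to the fundamental, from the FFT of one period
    spectrum = np.abs(np.fft.rfft(signal))
    return spectrum[1:n_harmonics + 1] / spectrum[1]

for m, a in enumerate(relative_harmonics(triangle_correlation(64)), start=1):
    print(f"harmonic {m}: relative amplitude {a:.4f}")   # ~1, 0, 1/9, 0, 1/25, ...
```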

IV. MOTION ERROR

Because multiple intensity measurements are being taken to calculate a single range measurement, indirect time of flight cameras are susceptible to errors due to relative motion between the camera and objects within the scene. This is particularly important for mobile robotics where the camera will be moving relative to the scene. These can be classified as two separable errors: lateral motion errors, due to movement across the field of view of the camera, and axial motion errors, due to movement along the visual axis of the camera.

Lateral motion causes errors at the edges of objects. Pixels at these edges experience a step change in phase, meaning one or more measurements do not relate to the same phase as the others. Lateral motion errors are not the focus of this paper; however, in an environment involving motion they must be considered. Two methods for addressing this issue reported in the literature are the use of a 2D camera for edge detection [11] and optical flow algorithms [12].

For axial motion the change in phase between frames is generally smaller. The phase becomes dependent on n and can be written in terms of the velocity as [12]

φn = φn-1 + 4πtf fmod v / c , (8)

where φn-1 is the actual phase of the object being measured for the previous frame, tf is the time taken for one frame to be measured and v is the relative velocity between the object and the camera. The phase in terms of the first frame, φ1, is therefore

φn = φ1 + (n - 1)vα , (9)

where α = 4πtf fmod / c is introduced for convenience. Substituting this into (6) for the standard four frame algorithm and using standard trigonometric identities, the measured phase, φm, in terms of the real phase at the beginning of the measurement is

φm = arctan[ sin( φ1 + 2vα ) / cos( φ1 + vα ) ] . (10)

The theoretical error versus the actual phase is shown in Fig. 4 for a number of velocities. There is a two cycle sinusoidal error, the amplitude of which is proportional to the velocity. There is also an offset that is proportional to the velocity; however, this is due to the reference frame used. Fig. 5 shows the theoretical error using each of the four frames as the reference. Depending on the reference frame, the measured phase can either lag or lead the actual phase. This is expected as choosing the reference frame is essentially choosing how many points were measured before and after the selected reference. There is no reason to have a preferential reference frame and therefore this offset is not significant.

As with the linearity error, the effect of harmonics on the motion error must be investigated. Using the measured harmonic content of the system (as detailed in Table I), simulations were run to predict the actual motion error that will be observed. These data are shown in Fig. 6. This indicates that a two cycle error will still be observed, with the harmonics causing some deformation of the waveform.
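The motion error model can be simulated along the lines of Fig. 4 and Fig. 6 with a sketch such as the following (ours; the per-frame time of 31.25 ms assumes the 125 ms acquisition used later is split evenly over four frames, and the harmonic amplitudes would be taken from Table I):

```python
import numpy as np

C = 299_792_458.0

def frames_with_motion(phi1, v, t_f, f_mod, delta=np.pi / 2, n_frames=4, harmonics=None):
    # Intensities (5) for an object moving axially at v m/s: the phase advances by v*alpha each frame, (9)
    alpha = 4 * np.pi * t_f * f_mod / C
    n = np.arange(n_frames)
    phi_n = phi1 + n * v * alpha
    I = np.cos(n * delta - phi_n)
    if harmonics:                       # optional {harmonic number: relative amplitude}, e.g. from Table I
        for m, a in harmonics.items():
            I = I + a * np.cos(m * (n * delta - phi_n))
    return I

def measured_phase(I, delta=np.pi / 2):
    n = np.arange(len(I))
    return np.angle(np.sum(I * np.exp(1j * n * delta))) % (2 * np.pi)

f_mod, t_f, v = 30e6, 0.125 / 4, 1.0    # 30 MHz, 31.25 ms per frame (assumed), 1 m/s
phis = np.linspace(0, 2 * np.pi, 200, endpoint=False)
err = np.array([(measured_phase(frames_with_motion(p, v, t_f, f_mod)) - p + np.pi) % (2 * np.pi) - np.pi
                for p in phis])
print(err.min() * C / (4 * np.pi * f_mod), err.max() * C / (4 * np.pi * f_mod))   # two cycle error in metres
```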

Figure 4. Theoretical axial motion error versus phase for various velocities (0.5 m/s, 1 m/s and 2 m/s)


Figure 5. Comparison of axial motion error for different reference frames (frames 1 to 4)

Figure 6. Simulated axial motion error versus distance for the four frame standard algorithm

The axial motion error of the standard algorithm was measured using the linear table apparatus described in section II. For each of 100 data runs performed, the encoder was used to record the position of the linear table at the beginning of each phase measurement. Calibration runs were then performed for five of the data runs by returning to each of these points and recording a static measurement for comparison. The average of the calibration runs was subtracted from each data run and the resulting error was averaged over the 100 data runs.

The motion error recorded for the standard four frame algorithm is shown in Fig. 7. Error bars are used to indicate plus or minus one standard deviation. A measurement time of 125 ms was used for this experiment and the insignificant DC offset has been subtracted from the results. The distance over which data could be recorded is limited by the acceleration and deceleration time of the linear table; however, it appears that a two cycle error is occurring as expected. Observation of the deformation of the waveform due to harmonics is not possible due to the small number of points acquired. Two quantitative metrics were chosen to measure the quality of the algorithm: the RMS error, which provides a measure of the spread of the data, and the range, to identify whether there are significant outliers that could cause problems in real applications. For the standard algorithm the RMS error is 0.044 ± 0.002 m and the range is 0.112 ± 0.007 m.
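The two metrics can be computed from the recorded runs roughly as follows (a sketch; data_runs and calibration are our placeholder arrays holding the per-run measurements and the averaged static reference):

```python
import numpy as np

def motion_error_metrics(data_runs, calibration):
    # Subtract the static calibration, average over runs, remove the DC offset,
    # then report the RMS error and the range (maximum minus minimum)
    error = np.asarray(data_runs, dtype=float) - np.asarray(calibration, dtype=float)
    mean_error = error.mean(axis=0)
    mean_error = mean_error - mean_error.mean()        # discard the insignificant DC offset
    rms = np.sqrt(np.mean(np.square(mean_error)))
    return rms, mean_error.max() - mean_error.min()

# Synthetic example only: 100 runs of 40 positions
runs = 2.0 + 0.02 * np.random.randn(100, 40)
cal = np.full(40, 2.0)
print(motion_error_metrics(runs, cal))
```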

The problem of axial motion error has not been well researched. Again, optical flow algorithms have been attempted [12] but the model used did not address the issue of harmonics and a rigorous investigation of the success of this method was not performed. Changing the order of the phase measurements has been investigated and shown to improve the motion error [5]; however, again, harmonics have not been taken into account. Other approaches reported in the literature are the use of custom pixel structures to measure all four phases at once and the combination of an arbitrary number of pixels to lower the integration time and therefore the effective motion error [13]. Both of these approaches have a negative impact on the spatial resolution of the camera and are therefore not desirable as spatial resolution is already a limiting factor of these cameras.

Figure 7. Measured axial motion error versus distance for the four frame standard algorithm (error in m versus actual distance in mm)

Both of the systematic errors described in this paper are caused by violations of the assumptions made by the phase detection algorithm. Instead of attempting to solve these problems after they occur, through a calibration, optical flow or other methods, the remainder of this paper will demonstrate they can be solved by the implementation of a phase detection algorithm that is designed to accommodate these violations.

V. THE WINDOWED DISCRETE FOURIER TRANSFORM

Phase Shifting Interferometry is a field in which phase detection algorithms have been extensively researched, including the effect of harmonics, and this analysis can be applied to indirect time of flight imaging. Furthermore, another problem that is well studied in Phase Shifting Interferometry is linear miscalibration of the phase step. This is expressed in the intensity for frame n as [15]

In = A cos( (n - 1)(1 + ε)δ - φ ) + B , (11)

where ε is an error introduced when the actual phase step does not match the desired phase step. Substituting (9) into (5) shows that for an object under linear motion the intensity for frame n is

In = A cos( (n - 1)(δ - vα) - φ1 ) + B , (12)

which is equivalent to a linear miscalibration of ε = -vα / δ. Phase Shifting Interferometry techniques can therefore be used to mitigate axial motion errors in the presence of harmonics.

A number of phase algorithms have been developed and used in Phase Shifting Interferometry to improve system linearity and motion response in the presence of harmonics. An algorithm is desired that can provide insensitivity to motion error even in the presence of the third harmonic. Surrel [14] provides a method for determining the response of a phase algorithm based on the algorithm's characteristic polynomial, an order N-1 polynomial of the form

P(x) = cN x^(N-1) + cN-1 x^(N-2) + … + c1 , (13)

where cn is a set of complex coefficients such that

φ = arg (cNIN + cN-1IN-1 + … + c1I1). (14)

Three rules can be used to determine the performance of an algorithm from its characteristic polynomial:

1. Insensitivity to the mth harmonic present in the intensity signal can be achieved when the complex numbers exp(imδ) (if m ≠ 1) and exp(-imδ) are roots of the characteristic polynomial.

2. Insensitivity to the mth harmonic present in the signal (m ≠ 0) is achieved in the presence of a phase-shift miscalibration when the two complex numbers exp(imδ) (if m ≠ 1) and exp(-imδ) are double roots of the characteristic polynomial.

3. More generally, insensitivity to the mth harmonic (m ≠ 0) is achieved in the presence of a phase-shift miscalibration when the two complex numbers exp(imδ) and exp(-imδ) are roots of order k + 1 of the characteristic polynomial. The phase measured will then contain no term in ε^p, p ≤ k, as a result of the presence of this harmonic.

Simulations can be used to demonstrate the theoretical error based on the number of roots at a particular harmonic. The response of algorithms with increasing numbers of roots at m = -1, without the presence of harmonics, is shown in Fig. 8. This demonstrates that a double root at m = -1 effectively removes this error, with a triple root decreasing it further. The error for an algorithm with one root at m = -1 is consistent with the error predicted for the standard algorithm.
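Surrel's rules can also be checked numerically from the roots of an algorithm's characteristic polynomial. In the sketch below (ours), the coefficients are written as wn exp(-i(n-1)δ), which is our reading of the convention that places the roots at the positions exp(imδ) used in the text:

```python
import numpy as np

def root_order(coeffs, x, max_order=3, tol=1e-9):
    # Order of x as a root of P(x) = c1 + c2*x + ... + cN*x^(N-1): count vanishing derivatives
    p = np.polynomial.Polynomial(coeffs)        # coefficients in increasing powers
    order = 0
    while order < max_order and abs(p(x)) < tol:
        order += 1
        p = p.deriv()
    return order

# Standard four frame algorithm: unit weights, delta = pi/2
d4 = np.pi / 2
std = np.exp(-1j * np.arange(4) * d4)
# WDFT: five frames, delta = 2*pi/3, triangular window (1, 2, 3, 2, 1)
d5 = 2 * np.pi / 3
wdft = np.array([1, 2, 3, 2, 1]) * np.exp(-1j * np.arange(5) * d5)

for m in (-1, 3, -3):
    print(f"m = {m:+d}: standard root order {root_order(std, np.exp(1j * m * d4))}, "
          f"WDFT root order {root_order(wdft, np.exp(1j * m * d5))}")
```

Under this reading the standard algorithm has at most single roots at these positions, while the WDFT described below shows the double roots at m = -1, 3 and -3 required above.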

Figure 8. Simulated motion error without harmonics for algorithms with increasing number of roots at m = -1 (single, double and triple root)

Figure 9. Simulated motion error for the third harmonic with increasing number of roots at m = 3 and -3 (no roots, single root and double root)

The error due to the third harmonic was simulated with increasing number of roots at m = 3 and -3. All of these algorithms also have a double root at m = -1 and therefore essentially all the error present is due to the third harmonic. These data are shown in Fig. 9. This demonstrates that the desired algorithm should have double roots at m = -1, 3 and -3 in order to provide good response with linear motion.

There are a large number of algorithms from Phase Shifting Interferometry that have attempted to solve the issue of harmonics and linear miscalibration; these include Carré's algorithm [16], Hariharan's algorithm [17], the N+1 type B algorithm [15], Novak's algorithm [18], and the N+3 algorithm [19]. However, the algorithm which provides the desired roots with the lowest number of frames is the Windowed Discrete Fourier Transform (WDFT) [14]. This algorithm requires five frames with a phase step of 2π/3 and the phase is calculated as

φ = arctan[ √3( 2I2 - 3I3 + I5 ) / ( 2I1 - 2I2 - 3I3 + 4I4 - I5 ) ] . (15)
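A floating point sketch of (15) follows (the camera uses fixed point FPGA arithmetic, so this is illustrative only):

```python
import numpy as np

def phase_wdft(I1, I2, I3, I4, I5):
    # Five frame Windowed Discrete Fourier Transform with a 2*pi/3 phase step, equation (15)
    num = np.sqrt(3.0) * (2 * I2 - 3 * I3 + I5)
    den = 2 * I1 - 2 * I2 - 3 * I3 + 4 * I4 - I5
    return np.arctan2(num, den) % (2 * np.pi)

# Check against (5) with a deliberate phase step miscalibration, which (12) showed is equivalent to axial motion
delta, phi, eps = 2 * np.pi / 3, 1.2, 0.05
n = np.arange(5)
I = np.cos(n * (1 + eps) * delta - phi) + 0.5
print(phase_wdft(*I))   # remains close to 1.2 despite the miscalibration
```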

This algorithm was implemented on the FPGA in the Victoria University Range Imaging System and the linearity measurements were repeated. Measurements recorded using both methods are shown in Fig. 10. As expected this shows significantly improved linearity error over the standard algorithm. The RMS linearity error, calculated using the phase stepping method, is 0.003 ± 0.001 m. There is some disagreement between the phase stepping and moved object measurements for this algorithm. There are a number of potential causes for this including multi-path interference, the inverse square change in intensity with distance and non-linearity in the intensity measurement of the sensor. The actual cause of this has not yet been determined. The precision of the camera has a significant impact on these measurements so any remaining systematic error is difficult to identify.

Figure 10. Systematic linearity error for the WDFT algorithm (initial phase stepping and moved object)

The motion error for this algorithm was also characterized using the linear table. The results are shown in Fig. 11. The RMS error with distance for this algorithm is 0.009 ± 0.004 m and the range is 0.03 ± 0.01 m. This is a significant improvement over the standard algorithm.


Figure 11. Measured axial motion error for the WDFT algorithm

As demonstrated earlier in this section, axial motion is equivalent to a linear phase step miscalibration. Therefore it is possible to measure the axial motion response of the camera by adding an intentional phase miscalibration and stepping the initial phase offset between the modulation signals. This overcomes the issue of having relatively few usable data points. These data are shown in Fig. 12 for both the standard algorithm and the WDFT. The standard algorithm shows the result predicted by the simulation shown in Fig. 6. For the WDFT algorithm the precision of the indirect time of flight camera is not sufficient to identify any remaining systematic error.
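In simulation, the equivalent experiment can be sketched as follows (our code; both estimators are restated so the block runs on its own, and the miscalibration value is illustrative):

```python
import numpy as np

C, F_MOD, EPS = 299_792_458.0, 30e6, 0.02      # EPS is the intentional fractional phase step error

def intensities(phi0, n_frames, delta):
    return np.cos(np.arange(n_frames) * (1 + EPS) * delta - phi0) + 0.5

def phase_standard(I):
    return np.arctan2(I[1] - I[3], I[0] - I[2]) % (2 * np.pi)

def phase_wdft(I):
    num = np.sqrt(3.0) * (2 * I[1] - 3 * I[2] + I[4])
    den = 2 * I[0] - 2 * I[1] - 3 * I[2] + 4 * I[3] - I[4]
    return np.arctan2(num, den) % (2 * np.pi)

to_m = C / (4 * np.pi * F_MOD)                 # phase error to distance error, from (3)
for phi0 in np.linspace(0, 2 * np.pi, 8, endpoint=False):
    e_std = (phase_standard(intensities(phi0, 4, np.pi / 2)) - phi0 + np.pi) % (2 * np.pi) - np.pi
    e_wdft = (phase_wdft(intensities(phi0, 5, 2 * np.pi / 3)) - phi0 + np.pi) % (2 * np.pi) - np.pi
    print(f"phi0 = {phi0:4.2f}  standard: {e_std * to_m:+.4f} m   WDFT: {e_wdft * to_m:+.4f} m")
```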

Figure 12. Linear miscalibration error comparison between the four frame standard algorithm and the WDFT

VI. SUMMARY

This paper has presented theoretical analysis of the error in indirect time of flight measurements due to harmonics and axial motion. It has been shown, both in simulation and experimentally, that replacing the current state of the art phase detection algorithm with the WDFT mitigates these errors. This method requires minimal additional computational effort above the standard algorithm and is not dependent on the operating parameters of the camera, making it desirable over optical flow and calibration techniques.

Future work is required to investigate any other potential impacts the implementation of the WDFT has on range measurements, particularly in terms of precision, and to further investigate the potential of other phase algorithms for application specific improvement of the quality of indirect time of flight measurements.

The authors would like to thank the New Zealand Foundation for Research, Science & Technology (Contract-VICX0907) for funding this work.

REFERENCES

[1] F. Chiabrando, R. Chiabrando, D. Piatti, and F. Rinaudo, “Sensors for 3D imaging: Metric evaluation and calibration of a CCD/CMOS time-of-flight camera,” Sensors, vol. 9, no. 12, pp. 10080-10096, 2009.

[2] R. Lange and P. Seitz, “Solid-state time-of-flight range camera,” IEEE J. Quantum Electron, vol. 37, no. 3, pp. 390-397, Mar 2001.

[3] S. Hussmann, A. Hermanski, and T. Edeler, “Real-time motion artifact suppression in TOF camera systems,” IEEE Trans. Instrum. Meas., vol. 60, no. 5, pp. 1682-1690, May 2011.

[4] N. Blanc, T. Oggier, G. Gruener, J. Weingarten, A. Codourey, and P. Seitz, “Miniaturized smart cameras for 3D-imaging in real-time,” in Proc. IEEE Sensors, Vienna, Austria, Oct. 2004, pp. 471–474.

[5] B. Drayton, D. A. Carnegie, and A. A. Dorrington, “The development of a time of flight range imager for mobile robotics,” in Proc. Int. Conf. Automation, Robotics Applications (ICARA), Wellington, New Zealand, Dec. 2011, pp. 470-475.

[6] A. P. P. Jongenelen, D. A. Carnegie, A. D. Payne, and A. A. Dorrington, “Development and characterisation of an easily configurable range imaging system,” in Proc. 24th Int. Conf. Image Vision Computing, Wellington, New Zealand, Nov. 2009, pp. 79-84.

[7] Y. M. Kim, D. Chan, C. Theobalt, and S. Thrun, “Design and calibration of a multi-view ToF sensor fusion system,” in Proc. IEEE CVPR Workshops, vol. 1-3, Anchorage, June 2008, pp. 1524–1530.

[8] M. Lindner and A. Kolb, “Lateral and depth calibration of PMD-distance Sensors,” in Proc. 2nd Int. Sym. Visual Computing, Lake Tahoe, USA, Nov. 2006, pp. 524–533.

[9] S. Fuchs, G. Hirzinger, “Extrinsic and depth calibration of ToF-cameras,” in Proc. IEEE Conf. Computer Vision Pattern Recognition, Anchorage, USA, Jun. 2008, pp. 1-6.

[10] T. Kahlmann, F. Remondino, and H. Ingensand, “Calibration for increased accuracy of the range imaging camera SwissrangerTM,” in ISPRS Commission V Symposium, Dresden, Sep. 2006, pp. 136–141.

[11] O. Lottner, A. Sluiter, K. Hartmann, and W. Weihs, “Movement artefacts in range images of time-of flight cameras,” in Proc. Int. Sym. Signals, Circuits and Systems, Jul. 2007, pp. 1-4.

[12] M. Lindner and A. Kolb, “Compensation of motion artifacts for time-of-flight cameras,” in Proc. Dynamic 3D Vision Workshop, Sep. 2009, pp. 16–27.

[13] P. O'Connor, I. Torunoglu, and R. Nagabhirana, “Method and system to correct motion blur and reduce signal transients in time-of-flight sensor systems,” U.S. Patent 7 450 220, Nov. 2008.

[14] Y. Surrel, “Design of algorithms for phase measurements by the use of phase stepping,” Appl. Optics, vol. 35, no. 1, pp. 51-60, Jan. 1996

[15] Y. Surrel, “Phase stepping: a new self-calibrating algorithm,” Appl. Optics, vol. 32, no. 19, pp. 3598-3600, Jul. 1993

[16] P. Carré, “Installation et utilisation du comparateur photoélectrique et interférentiel du Bureau International des Poids et Mesures,” Metrologia, vol. 2, no. 1, pp. 13-23, 1966.

[17] P. Hariharan, “Digital phase-stepping interferometry: effects of multiply reflected beams,” Appl. Optics, vol. 26, no. 13, pp. 2506-2507, Jul. 1987.

[18] J. Novák, P. Novák, and M. Antonín, “Multi-step phase-shifting algorithms insensitive to linear phase shift errors,” Optics Commun., vol. 281, no. 21, pp. 5302-5309, Nov. 2008.

[19] K. Hibino, B. F. Oreb, D. I. Farrant, and K. G. Larkin, “Phase shifting algorithms for nonlinear and spatially nonuniform phase shifts,” J. of the Optical Soc. of America, vol. 14, no. 4, pp. 918-930, Apr. 1997.