
Phase Algorithms for Reducing Axial Motion and Linearity Error in Indirect Time of Flight Cameras

Benjamin M. M. Drayton, Dale A. Carnegie, and Adrian A. Dorrington

Abstract— Indirect time of flight cameras are increasingly being used in a variety of applications to provide real-time full field of view range measurements. Current generation cameras suffer from systematic linearity errors due to the influence of harmonics in the system and motion errors due to the requirement of taking multiple measurements. This paper demonstrates that replacing the standard phase detection algorithm with the windowed discrete Fourier transform can improve the root mean square (RMS) axial motion error with distance from 0.044 ± 0.002 m to 0.009 ± 0.004 m and the range from 0.112 ± 0.007 m to 0.03 ± 0.01 m for an object with a velocity of 2 m/s using a measurement time of 125 ms. This algorithm also improves the linearity of the camera by removing systematic errors due to harmonics, decreasing the RMS linearity error from 0.018 ± 0.002 m to 0.003 ± 0.001 m. This paper establishes the robustness of the windowed discrete Fourier transform, demonstrating that it effectively eliminates axial motion error over a variety of velocities and modulation frequencies. The potential for tailoring phase detection algorithms to specific applications is also demonstrated.

Index Terms— Robot vision systems, distance measurement, error correction, algorithms.

I. INTRODUCTION

FULL field of view range finding systems have many applications in emerging fields, particularly in human machine interfaces and mobile robotics. Sensors are required that can provide a high quality full field of view range measurement simultaneously in real time. Indirect time of flight cameras have the potential to fulfill this role; however, their accuracy is degraded by motion error, caused by the requirement to acquire multiple intensity measurements for a single distance measurement, and by systematic linearity error due to harmonics present in the system.

It should be noted that there are a number of other errors that can impact the linearity of the sensor that are related to non-idealities in the sensor and illumination technologies, such as phase and harmonic variation in the correlation waveform across the sensor pixels, changes of the modulation signal between frames, and crosstalk [1]. These errors can generally be mitigated by careful hardware design. Multi-path errors due to multiple reflections in the scene can also affect the quality of measurements [2]. The focus of this paper is reducing the systematic errors in indirect time of flight measurement caused by harmonics and relative motion between the camera and the scene.

Manuscript received January 30, 2013; revised March 21, 2013; accepted March 29, 2013. Date of publication April 12, 2013; date of current version August 6, 2013. This work was supported in part by the New Zealand Foundation for Research Science and Technology under Contract VICX0907. The associate editor coordinating the review of this paper and approving it for publication was Dr. Alexander Fish.

B. M. M. Drayton and D. A. Carnegie are with the Victoria University of Wellington, Wellington 6140, New Zealand (e-mail: [email protected]; [email protected]).

A. A. Dorrington is with the University of Waikato, Hamilton 3240, New Zealand (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/JSEN.2013.2257737

The remainder of this paper is structured as follows. Section II will provide background theory on the operation of indirect time of flight cameras and the error sources for these cameras that are relevant to this paper. Section III will outline the hardware and measurement methodology used to record the experimental data in this paper. Section IV will measure the systematic linearity error and harmonic content of our system. Section V will measure the axial motion error in our system. Section VI will demonstrate that the implementation of the five frame Windowed Discrete Fourier Transform can mitigate the errors measured in Sections IV and V over a wide range of operating parameters of the camera. Section VII explores the potential for application specific algorithms in different circumstances. An overview of the results and conclusions of this paper will be provided in Section VIII.

II. BACKGROUND THEORY

This section will provide a derivation of the standard phase detection algorithm and then describe errors that arise in indirect time of flight measurements, particularly focusing on errors due to the violation of assumptions made by the standard phase detection algorithm.

A. Derivation of the Standard Phase Detection Algorithm

Indirect time of flight cameras encode the time taken for electromagnetic waves to return from objects in a scene into a phase shift. A light source and an image sensor are both modulated at a frequency generally in the range of 10 to 100 MHz. The time taken for the light to return from the object introduces a phase shift between the two modulated signals. The signals are integrated over thousands of cycles and the phase can then be measured. The distance is therefore calculated as

d = ct/2 = (c/2) · ϕ/(2π f_mod) = cϕ/(4π f_mod)   (1)

where c is the speed of light, ϕ is the introduced phase shift and f_mod is the modulation frequency [3]. Modern indirect time of flight systems are based on custom CMOS technology where the gain of a 2D intensity sensor array can be electronically modulated. The intensity observed by the sensor is dependent on the amount of overlap between the two modulation signals and therefore the distance to the object. The intensity, assuming sinusoidal modulation, is

I = A cos (ϕ) + B, (2)

where A is an amplitude coefficient including the gain of the sensor, the amplitude of the modulated light, the reflectivity of the object and the inverse square decrease with distance due to spreading light waves. B is a DC offset caused by background illumination, DC offset in the ADC and, in some pixel architectures, asymmetry in the pixel collectors.

As both A and B in (2) are dependent on factors external to the camera, a single intensity measurement is not sufficient to calculate the phase. This problem is solved by taking N measurements with an introduced phase step δ between measurements. The intensity for frame n (n = 1 . . . N) is therefore

I_n = A cos((n − 1)δ − ϕ) + B.   (3)

This set of N measurements forms the correlation waveform, as it represents the correlation between the modulation signals. Conventionally, a Fourier transform is then used to obtain the phase of the correlation waveform, and therefore the phase between the two modulation signals. As the frequency of the signal is known, only a single bin needs to be calculated. The phase is calculated as

ϕ = tan⁻¹ [ Σ_{n=1}^{N} I_n sin(2π(n−1)/N) / Σ_{n=1}^{N} I_n cos(2π(n−1)/N) ].   (4)

Four phase steps are often used [4]–[6] with a phase step of π/2, as this simplifies the phase equation to

ϕ = tan⁻¹ [ (I_2 − I_4) / (I_1 − I_3) ].   (5)

The use of this phase encoding introduces a maximum unambiguous distance, as a phase shift introduced by the travel time of the illumination signal greater than 2π radians is indistinguishable from a signal that comes from an object ϕ ± 2π away. The actual distance to the object is therefore better represented by the equation

d = (c/(2 f_mod)) (ϕ/2π + k) = d_u (ϕ/2π + k),   (6)

where k is an integer and d_u is the maximum unambiguous measurement distance, d_u = c/(2 f_mod).
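To make the preceding derivation concrete, the following minimal Python sketch implements Eqs. (4)–(6) for the common four frame case; the function names and the A = 1, B = 2 test values are illustrative, not from the paper. arctan2 is used in place of the plain arctangent so the quadrant information is retained.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def phase_four_frame(i1, i2, i3, i4):
    """Standard four frame phase estimate, Eq. (5), wrapped to [0, 2*pi)."""
    return np.arctan2(i2 - i4, i1 - i3) % (2 * np.pi)

def distance_from_phase(phi, f_mod, k=0):
    """Distance from phase, Eq. (6); k selects the ambiguity interval."""
    d_u = C / (2 * f_mod)                  # unambiguous range, d_u = c/(2 f_mod)
    return d_u * (phi / (2 * np.pi) + k)

# Illustrative check: a static target at 2.0 m with 30 MHz modulation.
f_mod = 30e6
phi_true = 4 * np.pi * f_mod * 2.0 / C     # Eq. (1) inverted for d = 2.0 m
frames = [np.cos(n * np.pi / 2 - phi_true) + 2.0 for n in range(4)]  # Eq. (3), A=1, B=2
print(distance_from_phase(phase_four_frame(*frames), f_mod))         # ~2.0
```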

Common modulation frequencies of 20 MHz and 30 MHz have unambiguous distances of 7.5 m and 5 m respectively. It has been shown that the use of multiple modulation frequencies can increase the unambiguous distance if required for long range imaging [7]; however, in many applications this is not necessary due to the physical nature of the scene or the application requirements. As this is an active technique using diffused illumination, the optical power required to perform long range imaging can result in eye safety problems.

B. Systematic Linearity Error

In Section II-A, the phase detection algorithm was derived based on sinusoidal modulation signals. In practice, square wave modulation signals are generally used, due to the ease of generating them digitally, which introduces harmonics into the system. As only a small number of samples per cycle, generally four, are used, aliasing of these harmonics can occur, resulting in a systematic sinusoidal error with distance [8]. For the ±mth harmonic an m ∓ 1 cycle error is observed. The four frame standard algorithm is known to be sensitive to both the −3rd harmonic and the 5th harmonic, both of which produce a four cycle error.

A number of attempts have been made to calibrate the systematic linearity error in indirect time of flight cameras, with varying amounts of success and valid over various ranges, using sinusoids [3], 6th order polynomials [9], b-spline fitting [10], [11] and look up tables [12]. For calibration approaches in general, the calibration of the camera is also dependent on the frame time used, the modulation frequency and the temperature of the camera. There is also some spatial variation of the harmonic content of the correlation waveform across the sensor [13]. This is generally handled by combining a fixed pattern noise calibration with the distance calibration. Having a comprehensive calibration allowing freedom for these parameters is not practical, so commercial cameras generally limit the freedom of users to adjust these parameters. The requirement of a precise calibration environment is also undesirable.

Other methods demonstrated for mitigating this systematic error are using a triangular or semi-triangular approximation of the correlation waveform [14], harmonic cancelation using frame encoding [15] and using a heterodyne operating mode instead of the traditional homodyne [16]. Assuming any particular harmonic content in the correlation signal makes the solution specific to a single camera and modulation frequency. It also does not account for spatial variations in the harmonic content of the correlation waveform. A more general solution is desirable. Both heterodyning and harmonic cancelation using phase encoding provide a solution that does not require a calibration and is independent of the particular configuration. However, they do not address the issue of motion, discussed in the following section, and both of these methods make the relationship between the measurement and actual position more complicated for a moving object, potentially exacerbating the axial motion problem.

C. Axial Motion Error

Because indirect time of flight measurements require a number of successive frames to be acquired, object motion introduces errors. These can be classified as lateral motion errors, from movement across the field of view, and axial motion errors, from movement along the viewing axis. All current indirect range finding cameras suffer from these motion errors.

If an object moves laterally within the ranger's field of view this causes errors in the pixels at the edges of the object. These pixels experience a step change in phase, with one or more measurements not relating to the same phase as the others. Attempts to address this problem have included the use of a 2D camera for edge detection [17] and optical flow algorithms [18]. Optical flow algorithms require a significant increase in computational power to operate in real time, which scales with spatial resolution. Depending on the application this may be acceptable; however, for many applications such as mobile robotics this increase can prohibit the use of this technique. Lateral motion error is a separable problem from axial motion error and is not the focus of this paper.

By including axial motion, (3) becomes

I_n = A cos((n − 1)δ − ϕ_n) + B,
ϕ_n = ϕ_{n−1} + 4π f_mod v t_f / c = ϕ_1 + 4π(n − 1) f_mod v t_f / c = ϕ_1 + (n − 1)vα,   (7)

where t_f is the frame time, v is the axial velocity of the object and ϕ_n is the real phase at frame n. This assumes the velocity is constant over the short measurement time. It also assumes that the amplitude is constant over the measurement time. In reality there will be a change in the amplitude due to the inverse square decrease in illumination with distance. For distances greater than 1 m the effect of the change in amplitude is small compared to the axial motion error. Indicatively, for an object traveling at 2 m/s and a measurement time of 125 ms, the root mean square (RMS) error for distances greater than 1 m is increased by 7.2% by including the inverse square decrease. This restriction is acceptable for the majority of mobile robotics applications.

Substituting (7) into (5) gives:

ϕ_m = tan⁻¹ [ (cos(π/2 − ϕ_1 − vα) − cos(3π/2 − ϕ_1 − 3vα)) / (cos(−ϕ_1) − cos(π − ϕ_1 − 2vα)) ]   (8)

where ϕ_m is the measured phase. This can then be simplified to

ϕ_m = tan⁻¹ [ (sin(ϕ_1 + vα) + sin(ϕ_1 + 3vα)) / (cos(ϕ_1) + cos(ϕ_1 + 2vα)) ].   (9)

The theoretical error vs. phase has been plotted in Fig. 1 for various velocities. It should be noted that as an object moves axially it will shrink or grow from the camera's point of view. Because of this, the edges of the object will experience similar error to objects moving laterally. The axial motion error is dependent on vα, the size of the phase shift due to the object moving during the frame time.
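The error model above is easy to reproduce numerically. The sketch below evaluates the measured phase of Eq. (9) for the Fig. 1 parameters, assuming ideal sinusoidal modulation with A = 1 and B = 0; it is an illustrative simulation, not the paper's code.

```python
import numpy as np

C = 299_792_458.0

def axial_error_m(phi1, v, f_mod=30e6, t_f=31.25e-3):
    """Distance error of the standard four frame algorithm for an object
    moving axially at v m/s, following Eqs. (7)-(9); no harmonics included."""
    alpha = 4 * np.pi * f_mod * t_f / C        # phase advance per frame per (m/s)
    n = np.arange(4)
    frames = np.cos(n * np.pi / 2 - (phi1 + n * v * alpha))
    phi_m = np.arctan2(frames[1] - frames[3], frames[0] - frames[2])
    err = (phi_m - phi1 + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]
    return err * C / (4 * np.pi * f_mod)                 # radians -> metres

phases = np.linspace(0, 2 * np.pi, 200, endpoint=False)
errors = [axial_error_m(p, v=2.0) for p in phases]
print(min(errors), max(errors))   # two cycle error plus an offset, cf. Fig. 1
```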

The axial motion error shown in Fig. 1 has two components, a sinusoidal component and an offset. However, it can be shown that the offset error is simply due to the selection of reference frame [19]. Changing the frame used as the reference frame changes the observed offset. The selection of the reference frame can cause the measured phase to either lag or lead the actual phase. There is no reason to have a preferential reference frame, and the time at which no offset is observed lies within the time over which the measurement is being recorded. The offset is therefore not significant and does not require amelioration for many applications. If amelioration is required, the linearization of the error proposed in this paper will make estimation of the velocity, and therefore the offset, simpler. The offset has been subtracted from the remaining axial motion error graphs in this paper to improve their readability.

Fig. 1. Theoretical error from axial motion for various speeds (0.5, 1 and 2 m/s) using 30 MHz modulation frequency, 31.25 ms frame time, and four frames per measurement.

The problem of axial motion error has not been well researched. Optical flow algorithms have again been used [18]; however, the model used for this did not address the issue of harmonics, and a rigorous investigation of the success of this method in eliminating axial motion error was not performed. Changing the order of the phase measurements has been investigated and shown to improve the motion error [20]. However, again, harmonics have not been taken into account. Other approaches reported in the literature are the use of custom pixel structures to measure all four phases at once and the combination of an arbitrary number of pixels to lower the frame time and therefore the effective motion error [21]. Both of these approaches have a negative impact on the spatial resolution of the camera and are therefore not desirable, as the spatial resolution is already a limiting factor of these cameras.

The previous analysis has assumed that the signals are perfect sine waves. As discussed in Section II-B, harmonics are present in the signal and these can cause large errors if they are not addressed. A more advanced method of analysis is required to account for these harmonics.

Phase detection algorithms have been researched extensively for use in Phase Shifting Interferometry, a field separate from but related to indirect time of flight range imaging. The problem of harmonics has been extensively examined in this field and that work can be adapted for our application. Furthermore, the issue of axial motion error is analogous to a miscalibration of the phase step in Phase Shifting Interferometers [19], where again it is assumed the amplitude is constant over the measurement time. We will therefore utilize Surrel's method of analysis using the phase algorithm's characteristic polynomial [22]. The following is a brief overview of this analysis and the results we can take from it.

The intensity of a particular frame as a function of the phase can be written as the sum of an exponential Fourier series

I(ϕ) = Σ_{m=−∞}^{∞} α_m e^{imϕ}   (10)


where α_m is the complex Fourier coefficient of the mth harmonic. For Phase Shifting Interferometry, and indirect time of flight range imaging, we introduce a phase step δ and use a number of frames N (note that in this case the range n = 0 . . . N − 1 has been used instead of n = 1 . . . N to simplify the form of the equations). The intensity can be written as

I(ϕ + nδ) = Σ_{m=−∞}^{∞} [α_m e^{imϕ}] e^{imnδ}.   (11)

To find the measured phase ϕ_m with introduced phase step δ and N phase steps, phase algorithms have traditionally been written as the arctangent of linear combinations of phase steps with coefficients a_n and b_n

ϕ_m = tan⁻¹ [ Σ_{n=0}^{N−1} b_n I(ϕ + nδ) / Σ_{n=0}^{N−1} a_n I(ϕ + nδ) ].   (12)

This is equivalent to the measured phase ϕ_m being the argument of a complex linear combination

ϕ_m = arg[S(ϕ)]   (13)

where

S(ϕ) = Σ_{n=0}^{N−1} c_n I(ϕ + nδ),   (14)

and

c_n = a_n + i b_n.   (15)

Substituting the equation for the intensities into S gives

S(ϕ) = Σ_{m=−∞}^{∞} { α_m e^{imϕ} P(e^{imδ}) },   (16)

where P(x) is a polynomial of degree N − 1

P(x) = Σ_{n=0}^{N−1} c_n x^n,   (17)

which Surrel labels the characteristic polynomial of the algorithm, as it can be used to predict the algorithm's behavior. We then define a phase miscalibration ε such that

δ′ = δ(1 + ε_p),   (18)

where p is the order of the phase shift miscalibration and δ′ is the actual phase step for desired phase step δ.

Three rules are used to make predictions of the algorithm's behavior.

1. Insensitivity to the mth harmonic present in the intensity signal can be achieved when the complex numbers exp(imδ) (if m ≠ 1) and exp(−imδ) are roots of the characteristic polynomial.

2. Insensitivity to the mth harmonic present in the signal (m ≠ 0) is achieved in the presence of a linear phase-shift miscalibration when the two complex numbers exp(imδ) (if m ≠ 1) and exp(−imδ) are double roots of the characteristic polynomial.

3. Insensitivity to the mth harmonic (m ≠ 0) is achieved in the presence of a phase-shift miscalibration when the two complex numbers exp(imδ) and exp(−imδ) are roots of order n + 1 of the characteristic polynomial. The phase measured will contain no term in ε_p, p ≤ n, as a result of the presence of this harmonic.

Fig. 2. Comparison of motion error for algorithms with different numbers of roots at m = −1.

The use of complex exponentials to represent the frequency means we must necessarily handle negative frequencies. It can be shown that the motion error described by the traditional theory for sinusoidal modulation signals earlier in this section is equivalent to the error caused by the negative fundamental frequency. For the harmonic free case, S(ϕ) is

S(ϕ) = (1/2) Σ_{n=0}^{N−1} [ e^{iϕ} γ^{(1+ε)n} + e^{−iϕ} γ^{−(1+ε)n} ] γ^{−n}   (19)

where γ = exp(2πi/N) [23]. Knowing that ε = −vα/δ [19], the equivalent phase miscalibration error can be simulated. This provides equivalent results to those shown for traditional theory. If the same simulations are run without the negative fundamental included, the resulting phase error is only an offset value, which has been shown to not be significant.

Using Surrel's analysis, the four step algorithm commonly used in indirect time of flight range imaging has a characteristic polynomial of

P(x) = −i(x − 1)(x − e^{πi})(x − e^{3πi/2}).   (20)

As the phase step is π/2, insensitivity to the third harmonic requires both e^{3πi/2} and e^{−3πi/2} as roots; however, only the positive term is present. The root at e^{3πi/2} is also a root for the −5th harmonic, however the positive root is not present. As we know from Section II-B, the four frame standard algorithm exhibits a four cycle error due to sensitivity to the negative third harmonic and the positive fifth harmonic. There is a single root at m = −1, but not the double root required to remove the first ε term from the measured phase under linear motion due to the negative fundamental. This makes the same predictions as traditional analysis for the standard algorithm, under the assumption that error due to the negative fundamental will be the dominant source of axial motion error.
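These root conditions can be verified numerically. The sketch below evaluates the characteristic polynomial, Eq. (17), and its derivative at x = exp(imδ) for the four frame standard algorithm, taking c_n = exp(−2πin/N), which reproduces Eq. (20) up to a constant factor; this is an illustrative check rather than code from the paper.

```python
import numpy as np

def root_check(c, delta, harmonics):
    """|P| ~ 0 at exp(i*m*delta) means harmonic m is rejected (rule 1);
    |P| ~ |P'| ~ 0 means a double root, i.e. rejection is kept under a
    linear phase-step miscalibration such as axial motion (rule 2)."""
    high_first = np.asarray(c)[::-1]   # np.polyval expects highest degree first
    for m in harmonics:
        x = np.exp(1j * m * delta)
        p = np.polyval(high_first, x)
        dp = np.polyval(np.polyder(high_first), x)
        print(f"m = {m:+d}: |P| = {abs(p):.3f}, |P'| = {abs(dp):.3f}")

# Four frame standard algorithm: N = 4 coefficients, delta = pi/2.
c4 = np.exp(-2j * np.pi * np.arange(4) / 4)
root_check(c4, np.pi / 2, [-5, -3, -1, 1, 3, 5])
# Roots appear only at m = -1, +3 and -5: the algorithm is sensitive to the
# -3rd and +5th harmonics (the four cycle error) and has only a single root
# at m = -1.
```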

Simulations were run to find the effect of removing motion error terms without the presence of harmonics, in other words increasing the number of roots at m = −1. Fig. 2 shows a comparison of the motion error for algorithms with an increasing number of roots at m = −1. This shows that motion error due to the negative fundamental is a two cycle error within the unambiguous measurement distance, as expected from traditional theory. A double root at m = −1 decreases the systematic motion error dramatically and a triple root essentially removes the error completely.

Fig. 3. Effect of increasing the number of roots at m = 3 and −3 in the presence of axial motion.

If no harmonics were present in the system, having a double root at m = −1 would provide a large improvement over the standard algorithm. However, similar to linearity error, the effect of harmonics needs to also be investigated for axial motion error.

This analysis will focus on the third harmonic. Note that when discussing harmonics in this paper, unless the positive harmonic is specified explicitly, the negative harmonic is implicitly included in the analysis. Theory tells us that to be insensitive to axial motion error due to the third harmonic a double root is required at m = 3 and −3. Fig. 3 shows simulations of axial motion error for algorithms with an increasing number of roots at m = 3 and −3. The algorithms used for these simulations have double roots at m = −1, and therefore essentially all the error is due to the third harmonic. For these simulations the amplitude of the third harmonic was based on a triangular correlation waveform, to provide a worst case measure. The effect of being sensitive to the third harmonic is much larger than the error introduced by being sensitive to the second term of the negative fundamental, and therefore first order insensitivity to the third harmonic is more important. Our desired phase algorithm, within the bounds of using a relatively small number of frames, should have at least a double root at m = 3 and m = −3, as well as a double root at m = −1.

III. HARDWARE AND MEASUREMENT METHODOLOGY

A custom indirect time of flight camera, the Victoria University Range Imaging System, implementing a PMD19K-2 image sensor (PMDTechnologies GmbH, Siegen, Germany), was used to perform the experimental measurements in this paper. A detailed description of this camera is not included here; it is provided in an earlier paper [20]. This camera allows full control over the modulation waveforms, phase stepping and phase detection via programming of an on-board FPGA. A standard commercial camera could not be used as none provide this customizability. Unless otherwise specified, the experiments discussed in this paper use a modulation frequency of 30 MHz.

Fig. 4. Linearity error for the four frame standard algorithm.

A 4.2 m MSA-M6S linear table (Macron Dynamics, Croydon, PA, USA) fitted with an ST5909 stepper motor (Nanotec GmbH, Munich, Germany) and an HEDS-5540 optical encoder (Avago Technologies, San Jose, CA, USA) were used for the measurements in this paper. Linearity measurements were taken by advancing the target in 50 mm steps, taking 100 measurements per step and averaging the intensities for each measurement. This was done over a distance of 4 m; however, some of the data could not be used due to the camera saturating. A linear fit was performed on the acquired data to find the linearity error.

Axial motion error is also measured using the linear table apparatus. For each algorithm, 100 data runs are recorded along with 5 calibration runs. During data runs the encoder is used to record the position of the linear table at the beginning of each distance measurement. The calibration runs are then performed by returning to each of these points and recording a static measurement for comparison. The average of the calibration runs is subtracted from each data run and the resulting error is averaged over the 100 data runs. Error bars are used to indicate ±1 standard deviation.
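The run averaging described above reduces to a few lines of array arithmetic; a sketch, assuming the runs are stored as runs × positions arrays (the names are illustrative).

```python
import numpy as np

def motion_error_stats(data_runs, calibration_runs):
    """Subtract the mean static calibration from each data run, then average
    the error over runs; the standard deviation gives the error bars."""
    calib = np.mean(calibration_runs, axis=0)   # average of the calibration runs
    err = np.asarray(data_runs) - calib         # per-run error at each position
    return err.mean(axis=0), err.std(axis=0)    # mean error, +/-1 sigma
```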

IV. LINEARITY ERROR

The results of linearity measurements performed with our experimental set-up using the standard algorithm (5) are shown in Fig. 4. At this frequency we would expect an ambiguity distance of 5 m and, due to the presence of both 3rd and 5th harmonics, a cyclic error with a period of 1.25 m. The linear table is not long enough to cover the entire unambiguous measurement distance; however, two cycles are observed between 1 m and 3.5 m, which equates to the expected 1.25 m period.

To confirm the observed error is a four cycle error due to harmonics, the linearity can also be tested by introducing an initial phase offset between the modulation of the sensor and the illumination source while imaging a stationary object. Data were measured using this technique and a comparison between the results from physically moving an object and from artificially stepping the phase offset is shown in Fig. 5. Apart from a phase shift caused by the initial offset of the object, and a loss of quality in the signal for the moved object data at long range, the two methods show the same result. It is confirmed that this is the expected four cycle error. The RMS linearity error, calculated using the phase stepping method, is 0.018 ± 0.002 m.

Fig. 5. Comparison of linearity error for the four frame standard algorithm between measurement techniques (initial phase stepping vs. moved object).

Fig. 6. Relative amplitude versus harmonic number for the Victoria University Range Imaging System, with the inverse relative amplitude to the fundamental annotated.

Because we are using nominally square waves for modulation of both the sensor and the light source, we expect that the resulting correlation waveform will be a triangle wave and therefore contain only odd harmonics with 1/m² amplitude, where m is the harmonic number. However, the response of the laser diodes, the sensor, the modulation drivers and a low pass filter placed on the sensor modulation inputs all affect the harmonic content of the signal. The limited bandwidth of the components in the system means it is expected that the amplitude of the harmonics will be lower than the square wave modulation model predicts. To investigate the actual harmonic response of the camera, raw intensity images were recorded while the relative phase of the emitted light and the sensor modulation was stepped 64 times over the 2π phase range. This was measured over several cycles for a single pixel near the centre of the field of view. While there will be some spatial variation in the harmonic content of the correlation waveform, this should give us sufficient information to select our phase algorithm.

A Fast Fourier Transform was performed to investigate the harmonic content of the correlation waveform. Fig. 6 shows the relative amplitude versus harmonic number. There are noticeable harmonic peaks at both odd and even harmonics. The inverse relative amplitude to the fundamental is included in the figure. As expected, the third harmonic is by far the strongest. However, it has significantly lower amplitude than is expected if a simple square wave model is used. There is a relatively strong second harmonic that was not expected using square wave modulation; however, it is approximately a third of the amplitude of the third harmonic. This is likely caused primarily by asymmetric rise and fall times for the laser diodes providing the illumination signal. After the third harmonic, the amplitude of the harmonics quickly reduces to being insignificant.

Fig. 7. Simulated axial motion error for the standard algorithm using the measured harmonics for our system.

Fig. 8. Axial motion error vs. distance for the four frame standard algorithm.

Using square wave modulation, and confirmed by measurement of the harmonic content of the Victoria University Range Imaging System, it is of particular importance to be insensitive to the third harmonic as this is by far the strongest of the harmonics. The 4th and higher harmonics are unlikely to have any measurable impact on the measurement in our case.
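The harmonic measurement itself reduces to an FFT of the sampled correlation waveform. The sketch below assumes 64 uniformly stepped samples over one 2π cycle for a single pixel; the synthetic waveform and its harmonic amplitudes are illustrative stand-ins for the measured data.

```python
import numpy as np

def relative_harmonics(waveform, n_harmonics=10):
    """Harmonic amplitudes of a sampled correlation waveform, relative to
    the fundamental (FFT bin 1)."""
    spectrum = np.abs(np.fft.rfft(waveform))
    return spectrum[1:n_harmonics + 1] / spectrum[1]

# Synthetic stand-in: a triangle-like waveform (odd harmonics, 1/m^2) plus a
# weak even harmonic such as asymmetric rise/fall times would produce.
phase = np.linspace(0, 2 * np.pi, 64, endpoint=False)
wave = np.cos(phase) + (1 / 9) * np.cos(3 * phase) + 0.04 * np.cos(2 * phase)
print(np.round(relative_harmonics(wave, 5), 4))   # [1., 0.04, 0.1111, 0., 0.]
```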

V. AXIAL MOTION ERROR

Using the measured harmonic content of the camera, simulations can be run to predict the actual response of the four frame standard algorithm. This is shown in Fig. 7. The result is mostly a two cycle error; however, there is some distortion in the peaks caused by the third harmonic.

The motion error recorded for the standard four frame algorithm is shown in Fig. 8. The distance over which data could be recorded is limited; however, it appears that a two cycle error is occurring as expected. The resolution and range of the measurements is not sufficient to observe the distortion in the waveform predicted by simulations. A frame time of 31.25 ms was used for this experiment, meaning that with four frames per measurement the measurement time is 125 ms.

Two quantitative metrics were chosen to measure the quality of the algorithm. These were the RMS axial motion error, which provides a measure of the spread of the data, and the range, to identify if there are significant outliers that could cause problems in real applications. The RMS axial motion error calculation was performed with the offset subtracted from the data. For the standard algorithm the RMS axial motion error is 0.044 ± 0.002 m and the range is 0.112 ± 0.007 m.
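For reference, both metrics are straightforward to compute; in this sketch err is the per-position axial motion error, with the reference frame offset removed as described above.

```python
import numpy as np

def quality_metrics(err):
    """RMS axial motion error (offset subtracted) and range of the error."""
    e = np.asarray(err) - np.mean(err)   # remove the reference frame offset
    return np.sqrt(np.mean(e ** 2)), e.max() - e.min()
```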

VI. THE WINDOWED DISCRETE FOURIER TRANSFORM

The cause of both the linearity and axial motion error is the nature of the phase detection algorithm. Addressing the sensitivity of the phase detection algorithm to harmonics and linear motion will provide a solution that is independent of the particular camera, scene and configuration used. It should not require significant additional computational power and does not impact the spatial resolution of the camera. This section will outline the implementation of an advanced phase algorithm with this insensitivity and provide experimental results quantifying its success in reducing both systematic linearity error and error due to motion over a variety of system parameters and velocities.

A large population of algorithms exists in the literature from Phase Shifting Interferometry that could potentially improve the response of indirect time of flight cameras. Algorithms investigated as part of this research were Carré's algorithm [24], Hariharan's algorithm [25], the N + 1 type B algorithm [23], [26], Novak's algorithm [27], the N + 3 algorithm [28] and the Windowed Discrete Fourier Transform [22]. As discussed in Section II-C, we desire an algorithm that gives us first order insensitivity to both the negative fundamental and the third harmonic by having double roots at m = −1, 3 and −3. The Windowed Discrete Fourier Transform (WDFT) was identified as the algorithm that can provide this insensitivity, as shown by analysis of its characteristic polynomial below. For M phase steps, where M = 2N − 1 and the step size is 2π/N, the WDFT has the form

ϕ = tan⁻¹ [ −Σ_{k=1}^{N−1} k(I_k − I_{2N−k}) sin(2πk/N) / ( N I_N − Σ_{k=1}^{N−1} k(I_k − I_{2N−k}) cos(2πk/N) ) ]   (21)

which is equivalent to the phase being the argument of the second DFT coefficient of a set of 2N − 1 intensity values extending over two periods of the correlation waveform and windowed by the triangle function. For five frames the WDFT becomes

ϕ = tan⁻¹ [ √3(I_1 − 2I_2 + 2I_4 − I_5) / (I_1 + 2I_2 − 6I_3 + 2I_4 + I_5) ]   (22)

and has a phase step of 2π/3. Using Surrel's analysis, the five frame WDFT has the characteristic polynomial

P(x) = (1 − √3 i)(x − 1)²(x − e^{−i2π/3})².   (23)
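The five frame WDFT of Eq. (22) is as cheap to evaluate per pixel as the standard algorithm. A sketch follows, with arctan2 retaining quadrant information; as with the other algorithms, the fixed phase offset contributed by the complex scale factor in Eq. (23) (and the sign convention) is absorbed by the usual distance calibration.

```python
import numpy as np

def wdft5_phase(i1, i2, i3, i4, i5):
    """Five frame windowed DFT phase, Eq. (22); frames are taken with a
    2*pi/3 phase step between them."""
    num = np.sqrt(3.0) * (i1 - 2 * i2 + 2 * i4 - i5)
    den = i1 + 2 * i2 - 6 * i3 + 2 * i4 + i5
    return np.arctan2(num, den) % (2 * np.pi)

# Static example: frames per Eq. (3) with delta = 2*pi/3 plus a 3rd harmonic
# (amplitude 0.12 is illustrative), which the double root at x = 1 rejects.
phi = 1.0
arg = np.arange(5) * 2 * np.pi / 3 - phi
frames = np.cos(arg) + 0.12 * np.cos(3 * arg)
print(wdft5_phase(*frames))   # linear in phi up to a fixed offset and sign
```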

Fig. 9. Linearity error for the five frame WDFT algorithm, comparing initial phase stepping with a moving object.

As the phase step is 2π/3, for the third harmonic mδ = 6π/3 = 2π ≡ 0 ≡ −mδ. This means the root required to be insensitive to the third harmonic is x = 1, which is present as a double root. Therefore this algorithm should be insensitive to the third harmonic even with linear motion. There is also a double root at m = −1, meaning this algorithm will be insensitive to aliasing of the negative fundamental, even in the presence of linear motion. This algorithm therefore meets our desired criteria. The double root at m = −1 is also a double root at m = 5; however, there is no corresponding negative root, meaning a six cycle linearity error is expected.

The resolution of the Phase Locked Loop (PLL) on the FPGA in the Victoria University Range Imaging System is not able to provide the exact phase step of 2π/3. The PLL has a resolution of 320 phase steps per cycle and therefore the closest phase step setting is 107. This introduces a miscalibration error of π/480, which is equivalent to a velocity of 0.2 m/s.

The linearity results using the moved object method, shown in Fig. 9, appear to have a two cycle error. It is believed this error is caused by non-linearity of the PMD19K-2 sensor used in the Victoria University Range Imaging System [13]. It is likely that hardware improvements could remove this error. It is interesting to observe the linearity error if this error source were removed. This can be done by using the initial phase offset method discussed in Section IV to measure the linearity error. Data using this method, also shown in Fig. 9, do not have this error and do not show any obvious pattern of error. A six cycle error was expected; however, from our analysis of the harmonic content of our system, the 5th harmonic is not strong enough to have a significant effect on the linearity. The RMS linearity error, calculated using the phase stepping method, is 0.003 ± 0.001 m.

Fig. 10 shows the motion error of the five frame WDFT algorithm. The RMS axial motion error with distance for this algorithm is 0.009 ± 0.004 m and the range is 0.03 ± 0.01 m. The axial motion error has been improved significantly over the standard algorithm, shown in Fig. 8; however, the limited operating range of the measurements due to the acceleration time of the linear table means the number of valid points recorded is small. The frame time used for this experiment was 25 ms. As this algorithm uses five phase steps, this gives it the same total measurement time as the experiments using the four frame standard algorithm. The comparison is made to the four frame standard algorithm as this algorithm is used ubiquitously in the literature and by commercial cameras.

Fig. 10. Axial motion error for the five frame WDFT algorithm.

Fig. 11. Linear miscalibration error comparison between algorithms with a linear miscalibration of ∼π/10.

Due to the acceleration time of the linear table, the distance over which data can be gathered is very small. Similar to how stepping the initial phase can be used to record data for linearity measurements, introducing both a step in the initial phase and a change in the size of the phase steps can be used to get more accurate data on the motion error of this algorithm. Instead of setting the phase step to 2π/3 it is set to 22π/32. This is approximately equivalent to the phase error introduced in the motion experiments; however, the resolution of the phase steps means the exact equivalent motion error cannot be used. For comparison, the standard algorithm was also measured using this method. In this case the phase step was set to 21π/40. Over the four frames this gives a miscalibration of π/10, compared to 5π/48 for the WDFT. This method does not incorporate the change in intensity with distance; however, the influence of this, particularly for cameras that do not suffer from non-linearity, is expected to be small. The result of this linear miscalibration of the phase stepping is shown in Fig. 11. Any remaining systematic motion error is within the precision of the sensor implemented in our camera.
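This emulation is easy to reproduce in simulation: generate frames with a deliberately mis-sized phase step and sweep the initial phase. The sketch below does this for the standard algorithm with the 21π/40 step quoted above; the 3rd harmonic amplitude is an illustrative value, not the measured one.

```python
import numpy as np

def frames_misstepped(phi0, step, n_frames, harm3=0.12):
    """Intensity frames per Eq. (3) taken with an incorrect step size,
    emulating axial motion through the miscalibration of Eq. (18)."""
    arg = np.arange(n_frames) * step - phi0
    return np.cos(arg) + harm3 * np.cos(3 * arg)

errors = []
for phi0 in np.linspace(0, 2 * np.pi, 128, endpoint=False):
    f = frames_misstepped(phi0, 21 * np.pi / 40, 4)   # nominal step is pi/2
    phi_m = np.arctan2(f[1] - f[3], f[0] - f[2])
    errors.append((phi_m - phi0 + np.pi) % (2 * np.pi) - np.pi)
print(np.std(errors))   # cf. the standard algorithm curve in Fig. 11
```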

Comparing the error for the standard algorithm with the simulations in Fig. 7 shows that the simulations have accurately predicted the shape of the linear miscalibration error. It is interesting to repeat these simulations for the WDFT to predict the error if a higher precision sensor was used. The results of this simulation are shown in Fig. 12. This shows a very weak six cycle error due to the weak 5th harmonic; however, this is well below the apparently random error shown using the phase step miscalibration technique.

Fig. 12. Simulated axial motion error for the five frame WDFT using the measured harmonic content of our system.

Fig. 13. Standard deviation versus distance for the WDFT compared with the standard algorithm.

To ensure the quality of the phase measurements has not been compromised, a comparison between the precision of the standard algorithm and the WDFT is performed. So as not to bias the results due to the readout time of the sensor, the five frame standard algorithm was used.

Fig. 13 shows a comparison of the standard deviation for the two algorithms over 100 samples. At long distances the precision of the WDFT appears slightly worse overall than the five frame standard algorithm. The median for the WDFT is 0.013 ± 0.009 m and the median for the standard algorithm is 0.012 ± 0.008 m.

While there is a slight difference between the two, compared to the numerous other factors that impact the precision of indirect time of flight cameras, specifically the large influence of the frame time and the reflectivity of the imaged object, this small change in precision does not impact the usefulness of the WDFT algorithm. Taking the most easily controllable of these factors, the frame time, the median precision for the standard algorithm with a frame time of 30 ms, compared to the 25 ms used previously, is 0.008 ± 0.006 m. This small change, where legitimate frame times for the Victoria University Range Imaging System range from 10 ms to 500 ms or larger, has a much greater impact on the precision than changing between the two algorithms.

Fig. 14. Comparison of axial motion error over distance for the five frame WDFT for multiple positive velocities.

Fig. 15. Comparison of axial motion error over distance for the five frame WDFT for multiple negative velocities.

Fig. 16. Comparison of axial motion error over distance for the four frame standard algorithm for multiple positive velocities.

In Section II it was noted that for distances greater than 1 m the change in illumination with distance had a small impact compared to the axial motion error. Using simulations it can be shown that, while the performance of the WDFT is degraded by the inverse square decrease in illumination, at worst an order of magnitude improvement is observed when using a velocity of 2 m/s and a measurement time of 125 ms, with larger improvements possible when excluding distances under 1 m.

To ensure the robustness of the WDFT algorithm, measurements were taken of the error versus distance for a number of positive and negative velocities. The results are shown in Fig. 14 for positive velocities and Fig. 15 for negative velocities. For comparison, the results for the same experiments using the standard algorithm are shown in Fig. 16 for positive velocities and Fig. 17 for negative velocities. Note that due to the acceleration and deceleration time of the linear table, the distance over which data can be recorded is larger for lower velocities. The WDFT has successfully ameliorated the axial motion error for a range of positive and negative velocities. As the axial motion error is dependent on the distance the object has moved during the acquisition time, changing the measurement time will have the same effect on the accuracy as changing the velocity, albeit with differing precision.

Fig. 17. Comparison of axial motion error over distance for the four frame standard algorithm for multiple negative velocities.

TABLE I
MISCALIBRATION ERROR FOR DIFFERENT FREQUENCIES

Modulation Frequency (MHz)    Standard Deviation (radians)
10                            0.0135
15                            0.0119
20                            0.0091
25                            0.0070
30                            0.0060

To further test the robustness of the WDFT, the phase stepping method was repeated, including miscalibration error, using a number of different modulation frequencies. Miscalibration was used instead of velocity as, for low modulation frequencies, the large unambiguous measurement distance means that measuring a reasonable amount of data using our linear table is not possible. Table I shows the standard deviation of the measurements with changing modulation frequency. As expected, as the modulation frequency is decreased the standard deviation increases, as the relative amplitude of the 5th harmonic is increasing. Using 10 MHz modulation a clear six cycle error, as expected from the negative 5th harmonic, is observed. It should be noted that at this point the relative amplitude of the 3rd harmonic is very large, so a very large error is measured when using the standard algorithm.

VII. APPLICATION-SPECIFIC ALGORITHMS

For situations where linear motion is expected, and the harmonic content of the camera is dominated by the third harmonic as it is in our camera, the WDFT provides a greatly improved response while still using a small number of phase steps. However, an advantage of using phase algorithms to remove systematic errors from indirect time of flight measurements is that they provide great flexibility. It is interesting to look at algorithms that may be preferable to use in other situations.

Fig. 18. Comparison of linearity of algorithms for our system (simulated).

If linear motion is not a concern for the application, it is possible to select the location of roots in the phase algorithm to optimize the mitigation of linearity error instead. It has been suggested, and demonstrated using simulations [8], that using five frames instead of four for the standard algorithm can significantly improve the systematic linearity error; however, four frames is still common in commercial cameras. The characteristic polynomial for the five step algorithm is

P(x) = (x − 1)(x − e^{6πi/5})(x − e^{−6πi/5})(x − e^{2πi/5}).   (24)

As the phase step is 2π/5 we can see that the two terms (x − e^{6πi/5}) and (x − e^{−6πi/5}) indicate that this algorithm will be insensitive to the third harmonic in the static case. Insensitivity to the fifth harmonic requires a root at e^{10πi/5} = e^{2πi} = e^{0} = 1 = e^{−10πi/5}; therefore the root at x = 1 provides insensitivity to the fifth harmonic in the static case. The roots for the third harmonic are also roots for the seventh harmonic. The root at e^{2πi/5} is a root for m = −9; however, the positive root is not present. Note that there is a root at m = 1 and not m = −1, and therefore the negative fundamental is being used as the detection term. The rule on the period of the sinusoidal error caused by the harmonics is therefore reversed, and a ten cycle error is expected. There is only a single root at m = 1; therefore we would not expect this algorithm to be insensitive to motion error due to the positive fundamental.

The use of a different phase algorithm means the linearity can be improved without changing the number of frames required. A Phase Shifting Interferometry algorithm that can provide improved insensitivity to systematic linearity errors, without increasing the number of frames used, is the four frame Type B N + 1 algorithm [23], [26]. The equation for this algorithm is

ϕ = tan⁻¹ [ (−I_1 − 3I_2 + 3I_3 + I_4) / (√3(I_1 − I_2 − I_3 + I_4)) ].   (25)

The characteristic polynomial of this algorithm is

P(x) = (x − 1)(x − e^{−2πi/3})².   (26)
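As a sketch, Eq. (25) in code, again using arctan2; the frames are assumed to be taken with the 2π/3 phase step this algorithm requires.

```python
import numpy as np

def type_b_n1_phase(i1, i2, i3, i4):
    """Four frame Type B N+1 phase, Eq. (25). The root at x = 1 in Eq. (26)
    rejects the 3rd harmonic in the static case, and the double root at
    exp(-2*pi*i/3) gives first order insensitivity to the negative fundamental."""
    num = -i1 - 3 * i2 + 3 * i3 + i4
    den = np.sqrt(3.0) * (i1 - i2 - i3 + i4)
    return np.arctan2(num, den) % (2 * np.pi)
```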

Fig. 19. Acceleration error comparison between multiplicity of roots at m = −1, 3, and −3 for an acceleration of π/80.

This algorithm has a phase step of 2π/3 and therefore the root at x = 1 is a root for both m = 3 and −3. It also has a double root at m = −1. As the fifth and higher harmonics are not strong in our system, and the non-linearity of our sensor distorts experimental results, simulations were used to demonstrate the improvement in linearity possible using this algorithm. These simulations were performed using the harmonics measured for our system, and the results are shown in Fig. 18. While the linearity of this algorithm is better than that of the four frame algorithm, as it does not provide insensitivity to the fifth and seventh harmonics it has theoretically worse linearity than the five frame algorithm. The results demonstrate that, as the fifth and seventh harmonics are very weak, both the Type B N + 1 algorithm and the five frame standard algorithm have essentially no systematic linearity error for our system due to harmonics.

It should be noted that the N + 1 algorithm has a double root at m = −1, and so it should show some improvement in motion error, although the significance of this is diminished as the third harmonic will have a strong influence. If only linearity is required, the same linearity can be achieved by reducing the double root to a single root; this results in the algorithm reducing to the three step standard algorithm.

Another area where specialized algorithms could be used is where non-linear motion is occurring, in particular where an object is accelerating at a constant rate. From Surrel [22] we know that to be insensitive to acceleration error, higher multiplicity of roots is required. Starting from the five frame WDFT, it is reasonably easy to construct algorithms that have improved insensitivity to the error from accelerating objects. The multiplicity of the double roots at m = −1, 3 and −3 is increased, with each increase requiring two additional phase steps. The results, measured by manipulating the size of successive phase steps to imitate acceleration, are shown in Fig. 19. The WDFT (double root) is plotted for comparison. This demonstrates that increasing the multiplicity of roots can improve the error due to acceleration significantly, although even with a quadruple root there is still significant error. It should be noted that for a frame time of 25 ms and modulation frequency of 30 MHz this acceleration is equivalent to 25 m/s². Due to the resolution of the phase steps of the PLL on the FPGA in the Victoria University Range Imaging System, smaller accelerations cannot be tested.
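In Surrel's framework this construction is mechanical: choose the roots and their multiplicities, expand the polynomial, and read off the coefficients c_n of Eqs. (14)–(17). The sketch below builds a seven frame algorithm with triple roots from the WDFT root set of Eq. (23); it is an illustrative construction under these assumptions, not an algorithm specified in the paper.

```python
import numpy as np

def coeffs_from_roots(roots):
    """Coefficients c_n (lowest order first) of the characteristic polynomial
    with the given roots; repeating a root raises its multiplicity. The phase
    is then phi = arg(sum_n c_n * I_n), Eqs. (13)-(14); the arbitrary complex
    scale of the polynomial only adds a fixed, calibratable phase offset."""
    return np.poly(roots)[::-1]

delta = 2 * np.pi / 3
r = np.exp(-2j * np.pi / 3)
wdft_roots = [1, 1, r, r]                    # five frame WDFT, Eq. (23)
c = coeffs_from_roots(wdft_roots + [1, r])   # seven frames: triple roots

# Apply to seven frames of a static target at phase 0.7 (illustrative):
frames = np.cos(np.arange(len(c)) * delta - 0.7)
print(np.angle(np.sum(c * frames)))   # constant minus 0.7; the offset and
                                      # sign convention are calibrated out
```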


VIII. SUMMARY

In this paper we have presented an algorithm which can improve the quality of range measurements taken using indirect time of flight cameras. The five frame WDFT effectively removes the systematic linearity error due to harmonics present in the system, decreasing the RMS linearity error from 0.018 ± 0.002 m to 0.003 ± 0.001 m. It also improves the axial motion error, decreasing the RMS axial motion error with distance from 0.044 ± 0.002 m to 0.009 ± 0.004 m and the range from 0.112 ± 0.007 m to 0.03 ± 0.01 m for an object with a velocity of 2 m/s and a measurement time of 125 ms. The five frame WDFT was shown to provide this insensitivity over a wide range of positive and negative velocities and for a variety of modulation frequencies. If linear motion insensitivity is not required, the use of the four frame Type B N + 1 algorithm was demonstrated to improve the linearity error without changing the number of frames used. Further, if accelerating objects are likely to be present, it was shown that increasing the multiplicity of roots at m = −1, 3 and −3 can mitigate the error due to accelerating objects.

REFERENCES

[1] J. P. Godbaz, M. J. Cree, and A. A. Dorrington, “Understanding and ameliorating non-linear phase and amplitude responses in AMCW lidar,” Remote Sens., vol. 4, no. 1, pp. 21–42, Dec. 2011.

[2] A. A. Dorrington, J. P. Godbaz, M. J. Cree, A. D. Payne, and L. V. Streeter, “Separating true range measurements from multi-path and scattering interference in commercial range cameras,” Proc. SPIE, vol. 7864, pp. 1–10, 2011.

[3] F. Chiabrando, R. Chiabrando, D. Piatti, and F. Rinaudo, “Sensors for 3D imaging: Metric evaluation and calibration of a CCD/CMOS time-of-flight camera,” Sensors, vol. 9, no. 12, pp. 10080–10096, 2009.

[4] N. Blanc, T. Oggier, G. Gruener, J. Weingarten, A. Codourey, and P. Seitz, “Miniaturized smart cameras for 3D-imaging in real-time,” in Proc. IEEE Sensors, Oct. 2004, pp. 471–474.

[5] R. Lange and P. Seitz, “Solid-state time-of-flight range camera,” IEEE J. Quantum Electron., vol. 37, no. 3, pp. 390–397, Mar. 2001.

[6] S. Hussmann, A. Hermanski, and T. Edeler, “Real-time motion artifact suppression in TOF camera systems,” IEEE Trans. Instrum. Meas., vol. 60, no. 5, pp. 1682–1690, May 2011.

[7] A. P. Jongenelen, D. A. Carnegie, A. D. Payne, and A. A. Dorrington, “Maximizing precision over extended unambiguous range for TOF range imaging systems,” in Proc. Int. Instrum. Meas. Technol. Conf., 2010, pp. 1575–1580.

[8] A. P. Jongenelen, D. A. Carnegie, A. D. Payne, and A. A. Dorrington, “Development and characterisation of an easily configurable range imaging system,” in Proc. Int. Conf. Image Vis. Comput., 2009, pp. 379–384.

[9] Y. M. Kim, D. Chan, C. Theobalt, and S. Thrun, “Design and calibration of a multi-view TOF sensor fusion system,” in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Workshops, Jun. 2008, pp. 1–7.

[10] M. Lindner and A. Kolb, “Lateral and depth calibration of PMD-distance sensors,” in Proc. 2nd Int. Conf. Adv. Vis. Comput., 2006, pp. 524–533.

[11] S. Fuchs and G. Hirzinger, “Extrinsic and depth calibration of ToF-cameras,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2008, pp. 1–6.

[12] T. Kahlmann, F. Remondino, and H. Ingensand, “Calibration for increased accuracy of the range imaging camera SwissRanger,” Image Eng. Vis. Metrol., vol. 36, pp. 136–141, Sep. 2006.

[13] B. M. M. Drayton, D. A. Carnegie, and A. A. Dorrington, “Characterisation of the Victoria University range imaging system,” in Proc. Austral. Conf. Robot. Autom., Dec. 2012, pp. 1–18.

[14] M. Lindner, A. Kolb, and T. Ringbeck, “New insights into the calibration of ToF-sensors,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. Workshop, Jun. 2008, pp. 1–5.

[15] A. D. Payne, A. A. Dorrington, M. J. Cree, and D. A. Carnegie, “Improved measurement linearity and precision for AMCW time-of-flight range imaging cameras,” Appl. Opt., vol. 49, no. 23, pp. 4392–4403, 2010.

[16] R. M. Conroy, A. A. Dorrington, R. Künnemeyer, and M. J. Cree, “Range imager performance comparison in homodyne and heterodyne operating modes,” Proc. SPIE, vol. 7239, Jan. 2009.

[17] O. Lottner, A. Sluiter, K. Hartmann, and W. Weihs, “Movement artefacts in range images of time-of-flight cameras,” in Proc. Int. Symp. Signals, Circuits Syst., 2007, pp. 1–4.

[18] M. Lindner and A. Kolb, “Compensation of motion artifacts for time-of-flight cameras,” in Proc. DAGM Workshop Dynamic 3D Imag., 2009, pp. 16–27.

[19] B. M. M. Drayton, D. A. Carnegie, and A. A. Dorrington, “Improved phase detection algorithms for indirect time of flight range imaging cameras,” in Proc. IEEE Int. Symp. Robot. Sensors Environ., Nov. 2012, pp. 156–161.

[20] B. Drayton, D. A. Carnegie, and A. A. Dorrington, “The development of a time of flight range imager for mobile robotics,” in Proc. Int. Conf. Autom. Robot. Appl., 2011, pp. 470–475.

[21] P. O’Connor, I. Torunoglu, and R. Nagabhirana, “Method and system to correct motion blur and reduce signal transients in time-of-flight sensor systems,” U.S. Patent 7 450 220, Nov. 11, 2008.

[22] Y. Surrel, “Design of algorithms for phase measurements by the use of phase stepping,” Appl. Opt., vol. 35, no. 1, pp. 51–60, 1996.

[23] Y. Surrel, “Phase stepping: A new self-calibrating algorithm,” Appl. Opt., vol. 32, no. 19, pp. 3598–3600, 1993.

[24] P. Carré, “Installation et utilisation du comparateur photoélectrique et interférentiel du Bureau International des Poids et Mesures,” Metrologia, vol. 2, no. 1, pp. 13–23, 1966.

[25] P. Hariharan, “Digital phase-stepping interferometry: Effects of multiply reflected beams,” Appl. Opt., vol. 26, no. 13, pp. 2506–2507, 1987.

[26] K. G. Larkin and B. F. Oreb, “Design and assessment of symmetrical phase-shifting algorithms,” J. Opt. Soc. Amer. A, Opt. Image Sci., Vis., vol. 9, no. 10, pp. 1740–1748, 1992.

[27] J. Novák, P. Novák, and A. Mikš, “Multi-step phase-shifting algorithms insensitive to linear phase shift errors,” Opt. Commun., vol. 281, no. 21, pp. 5302–5309, 2008.

[28] K. Hibino, B. F. Oreb, D. I. Farrant, and K. G. Larkin, “Phase shifting algorithms for nonlinear and spatially nonuniform phase shifts,” J. Opt. Soc. Amer. A, vol. 14, no. 4, pp. 917–930, 1997.

Ben M. M. Drayton received the B.Sc.Tech. degree with first class honors in Electronics and Instrumentation from Victoria University of Wellington, Wellington, New Zealand. Since 2010 he has been studying toward the Ph.D. degree, focusing on operational and algorithmic improvements to reduce both systematic and random errors in indirect time-of-flight range imaging cameras.

Adrian A. Dorrington received the Ph.D. degree in 2001 from the University of Waikato, Hamilton, New Zealand. He has held post-doctoral fellowships from the National Research Council at the NASA Langley Research Center, Langley, VA, USA, and from the Foundation for Research, Science, and Technology, New Zealand, at the University of Waikato. Currently, he is a Senior Lecturer in the School of Engineering at the University of Waikato. His research interests include optoelectronics and optical measurement technologies and, in particular, developing techniques to improve the performance of time-of-flight imaging.

Dale A. Carnegie received the B.Sc. degree in Theoretical Physics and Applied Mathematics, the M.Sc. degree with first class honors in Applied Physics and Electronics, and the Ph.D. degree in Computer Science. He is currently Professor of Electronic and Computer Systems Engineering at Victoria University of Wellington, Wellington, New Zealand, where he is also Head of the School of Engineering and Computer Science. He heads Victoria University’s Mechatronics Group, which specialises in autonomous mobile robotics. Current major projects of this group include investigations into robotic music, emotion-based control parameter modulation for autonomous robots, and a hierarchical system of robots to assist urban search and rescue teams following a major terrorist or earthquake disaster.