
The Development of a Time of Flight Range Imager for Mobile Robotics

Ben Drayton, Dale A. Carnegie School of Engineering and Computer Science

Victoria University of Wellington Wellington, New Zealand

Adrian A. Dorrington School of Engineering University of Waikato

Hamilton, New Zealand

Abstract— Mobile robots are becoming increasingly prevalent in a large variety of applications. The introduction of robots into more complex environments necessitates the development of more advanced sensor systems to facilitate their successful implementation. This paper details the design and testing of a compact, configurable, indirect time of flight range imager with potential applications in mobile robotics. The paper then evaluates the range imager, particularly with regard to motion, and demonstrates some changes that can be made to improve the system’s response.

Keywords—mobile robots; range imaging; 3D camera; time of flight; motion error

I. INTRODUCTION

Mobile robots are becoming increasingly useful in real-world applications such as urban search and rescue [1], healthcare [2], the military [3] and space exploration [4]. To successfully navigate and interact with the increasingly complex environments into which robots are being introduced, it is critical to develop sensor systems that allow mobile robots to gather detailed range information. Ideally, a single sensor would provide a full-field range measurement with high accuracy in real time.

A number of different types of sensors that provide range information are widely available. Commonly used sensors include position sensitive detectors (PSDs), ultrasonic sensors and laser range finders. Each of these has shortcomings that limit its use in mobile robotics applications.

PSD sensors have a relatively short maximum range; a typical sensor used on robots at Victoria University of Wellington (the Sharp GP2Y3A003K0F) has a maximum range of 3 m. Due to their operating principle they have a dead band that increases with increased maximum range. Multiple sensors can be used; however, this requires a complex control system to ensure there is no cross-talk between sensors, and the size and cost of the system increase proportionally with the quantity of information received [5].

Ultrasonic sensors have a higher maximum range than PSD sensors. However, due to the nature of sound wave attenuation in air, high power drivers are required. Ultrasonic sensors also suffer from multipath returns, which require significant signal processing to resolve. Using multiple ultrasonic devices to provide more range data points is also a complex control issue.

Laser range finders can provide high-accuracy, long-range measurements. Multiple laser range finders can be used; however, this becomes prohibitively expensive when a full field of view is desired. A full field of view can instead be obtained by mechanically scanning the sensor over the area of interest, but this comes at the cost of mechanical wear, and the accuracy of the system is generally limited by the accuracy of the mechanical stage. For larger fields of view the time taken to scan the entire field can limit the refresh rate of the sensor [6].

A promising alternative is the indirect time of flight (ToF) method, which uses the properties of modulated light and a specialized camera to take a full field of view range measurement in real time. This paper outlines the development of an indirect ToF sensor system suitable for use in mobile robotic applications and the current challenges facing the use of these systems. It then outlines a method for improving the performance of such systems and presents experimental results demonstrating the system response to motion.

II. BACKGROUND THEORY

Homodyne indirect time of flight cameras operate by modulating both a light source and a detector at the same high frequency (generally 10 MHz – 100 MHz). The time taken for the light to travel to the object and return introduces a phase shift, φ, between the reflected wave and the modulation of the shutter, which is related to the distance, d, by the equation

d = \frac{c\phi}{4\pi f}  (1)

where c is the speed of light and f is the modulation frequency.

Typically an intensity-based camera is used to measure the returning light; however, the measured intensity depends not only on the relative phase of the returning light and the shutter, but also on the reflectivity of the object, the inverse-square attenuation caused by waves spreading from a point source, and background light levels. To obtain a phase measurement independent of these factors, multiple frames are recorded with a different phase shift introduced between the emitted light and the shutter for each frame. A Discrete Fourier Transform is then used to calculate the phase. It is common to use four frames per measurement, as this simplifies the phase calculation to


\phi = \tan^{-1}\!\left(\frac{I_3 - I_1}{I_0 - I_2}\right),  (2)

where I_n is the intensity of frame n. The intensity I_n can be modeled as

I_n = A\cos(\phi + \theta_n) + B,  (3)

where A is a gain factor that includes the gain of the sensor, the power of the laser diodes and the reflectivity of the object, θ_n is the phase shift introduced between the modulation of the light and the camera for frame n, and B is an offset due to the DC offset of the ADC and background light.
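As a concrete illustration of (1)–(3), the short Python sketch below simulates the four raw frames for a static target and recovers the distance from them. It is not the system's firmware, just a minimal numerical example under the model described above; arctan2 is used in place of the plain arctangent of (2) so that the full phase range is resolved.

import numpy as np

C = 299792458.0  # speed of light (m/s)

def simulate_frames(distance, f_mod, A=1.0, B=0.5):
    # Simulate the four raw frames of (3) for a static target at the given distance.
    phi = 4 * np.pi * f_mod * distance / C            # true phase shift, from rearranging (1)
    theta = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi    # introduced phase steps: 0, 90, 180, 270 degrees
    return A * np.cos(phi + theta) + B                # intensities I0..I3

def frames_to_distance(frames, f_mod):
    # Four-frame phase retrieval (2), followed by the phase-to-distance relation (1).
    I0, I1, I2, I3 = frames
    phi = np.arctan2(I3 - I1, I0 - I2) % (2 * np.pi)
    return C * phi / (4 * np.pi * f_mod)

frames = simulate_frames(distance=2.5, f_mod=40e6)    # 2.5 m target, 40 MHz modulation
print(frames_to_distance(frames, f_mod=40e6))         # prints approximately 2.5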

Due to the high shutter speeds required, mechanical shutters are not suitable for indirect time of flight systems. Previously, image intensifiers were used to provide high-speed shuttering [7]; however, these intensifiers are expensive, high-voltage, bulky devices that are not suitable for mobile robotics. The recent development of solid-state CMOS sensors that can be electronically switched at high frequencies [8] has made indirect time of flight measurement practical for mobile robotics applications. Because of the additional circuitry required to make them electronically modulatable, these sensors do not achieve the same resolution as traditional digital cameras. Resolutions of up to 200 × 200 pixels are currently available, although it is not unreasonable to expect sensors approaching 1 megapixel in the near future.

Several commercial indirect time of flight sensor systems are currently available. Two of the most prominent commercial options are the PMD[vision]® CamCube 3.0 released by PMDTechnologies in June 2010 [9] and the SR4000 released by Mesa Imaging in 2008 [10]. Table 1 shows a comparison of the main features of these two systems.

TABLE I. COMPARISON OF COMMERCIAL INDIRECT TOF CAMERAS

Feature              CamCube 3.0        SR4000
Resolution           200 × 200 pixels   176 (h) × 144 (v) pixels
Range                0.3 m to 7 m       0.8 m to 5 m
Speed                40 FPS             50 FPS
Field of View        40º × 40º          43.6º (h) × 34.6º (v)
Repeatability (1σ)   3 mm               4 mm
Cost                 ~$12,000 USD       ~$10,000 USD

Both of these cameras provide high-quality range images; however, they are expensive and lack the customisability often required for academic research. Because of this, it was decided to develop a flexible custom range imaging system suitable for mobile robotics research.

III. VICTORIA UNIVERSITY RANGE IMAGING SYSTEM

A range imaging system has been developed at Victoria University of Wellington and the University of Waikato using the indirect time of flight method [11]. Initially the system was developed as a bench-top prototype. This version was based around a Stratix III FPGA development board. Stratix III FPGAs are useful as they provide a number of configurable phase locked loops, which are used to generate the accurate multi-phase modulation signals required for this method. Custom made PCBs were added to provide the required functionality, specifically:

• The Image Capture Board – This board provides an interface between the FPGA development kit and the Illumination and Sensor boards. It also has the power modulation drivers used to modulate the image sensor and contains an ADC to convert the analogue video stream into digital frames.

• The Sensor Board – This board houses the CMOS sensor (a PMD19K sensor from PMDTech). The sensor has a 16 mm optical lens and a filter.

• The Illumination Board – This board has two banks of laser diodes used to illuminate the scene with 800 mW of power. One bank uses IR (808 nm) diodes and the other uses red (658 nm) diodes.

• The VGA/Ethernet Board – This board provides two external interfaces for the system. An Ethernet connection for transferring data to a computer for long term storage and a VGA connection to display data on a monitor.

This system could successfully take full field range measurements in real time. Fig. 1 shows a profile of a mannequin head measured using this system.

Figure 1. Range profile of a mannequin head recorded using the prototype range imaging system [12]

There is a trade-off between acquisition time and precision. Indicatively, exposure times in the tens of seconds can provide sub-millimetre precision, while centimetre precision is achievable with acquisition times of 30–40 ms. A photograph of the prototype system is shown in Fig. 2.

Because it is based on an FPGA development kit, the prototype system is physically too large to be used in mobile robotics applications. A compact version of the range imaging system was therefore designed [5].


Figure 2. Photograph of prototype range imaging system [13]

IV. DESIGN OF A COMPACT RANGE IMAGING SYSTEM

The compact system is designed specifically for mobile robotics research. Because of this, size, cost and customisability are the critical design parameters. Special attention also had to be paid to power distribution throughout the system. A modular design of four separate boards stacked on top of each other is used, as modularity means that upgrading or changing part of the design does not require an entirely new system to be made. It also provides greater physical flexibility, as different optical lenses can be used while keeping the laser diodes in a concentric circle aligned with the end of the lens. A stacked design also has a smaller physical footprint than a single large board would. The four boards implemented in this design are the External Interface Board, the FPGA Board, the Image Capture Board and the Illumination Board.

Each board is supplied with unregulated DC power in a daisy-chain configuration. This distributed design provides the most flexibility, as each board is then a self-contained system independent of the other boards.

Fig. 3 shows how data flows between the boards. The FPGA is programmed via a JTAG interface. This interface also allows a control computer to interface with the NIOS II soft processor that is implemented on the FPGA. This processor is used to modify the parameters of the system during run time without having to reprogram the FPGA. It also controls the capturing of data frames via the Ethernet interface to a computer for long term data storage.

To reduce the size of the system, the FPGA development board of the previous design was replaced with a custom PCB. The Stratix III FPGA from the prototype system was replaced with a Cyclone III. Table II compares the features of a selection of relevant FPGAs. The Cyclone III family of FPGAs is approximately an order of magnitude cheaper than the Stratix III family and, with the addition of external memory, can provide the same functionality. The board designed for this system is compatible with both the EP3C120 and EP3C40 FPGAs.

Figure 3. Data flow between the boards in the compact range imaging system

Each pixel is stored as a 16-bit number. For our sensor's current resolution of 160 × 120 pixels, this means each frame requires approximately 300 kb of RAM. Six frames are stored in buffers at any time, requiring 1800 kb of RAM. Some additional RAM is required for the operation of the NIOS II soft processor, so the total memory required for range imaging with the current sensor is approximately 2 Mbit. Because of this, the EP3C120 was used for this build: with this sensor no external memory is required, which greatly simplifies the initial design.
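The memory budget can be checked with a few lines of arithmetic. The pixel depth, resolution and number of frame buffers below are taken from the text; the allowance for the NIOS II soft processor is a rough assumption for illustration only.

BITS_PER_PIXEL = 16       # each pixel is stored as a 16-bit number
WIDTH, HEIGHT = 160, 120  # current sensor resolution
FRAME_BUFFERS = 6         # six frames are buffered at any time

frame_bits = WIDTH * HEIGHT * BITS_PER_PIXEL    # 307,200 bits, roughly 300 kb per frame
buffer_bits = FRAME_BUFFERS * frame_bits        # 1,843,200 bits, roughly 1800 kb for six buffers
nios_bits = 150_000                             # rough allowance for the NIOS II soft processor (assumed)
total_mbit = (buffer_bits + nios_bits) / 1e6
print(frame_bits, buffer_bits, round(total_mbit, 2))  # ~2 Mbit, within the EP3C120's 3,981,312 RAM bits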

TABLE II. COMPARISON OF FPGAS

FPGA                           Logic elements   RAM bits    PLLs   Cost (USD)
Stratix III EP3SL150F1152C2N   142,500          6,543,360   12     $3777
Cyclone III EP3C120F780C7      119,088          3,981,312   4      $502
Cyclone III EP3C40F780C6       39,600           1,161,216   4      $167

The FPGA Board is an 8-layer board comprising four signal layers and four power planes. Due to the number of components, it is slightly wider than the other boards. A photograph of the range imaging system is shown in Fig. 4.


Figure 4. Photograph of the compact range imaging system [5]

The sensor currently used in the system is a PMD19K2 sensor from PMDTechnologies which features 160 × 120 pixels of resolution. As discussed earlier, range calculation using this sensor is possible using only the internal memory of the Cyclone III FPGA. However, to allow for future improvements in sensor technology four external memory devices are included on the FPGA board. DDR2 memory was chosen for its high transfer speeds and large memory sizes. The DDR2 memory banks used are:

• Accumulator – This memory bank stores images while they are being processed. It stores both semi-processed and fully processed image data.

• Output buffer – This buffer stores processed image frames to be accessed by the NIOS II soft processor and the VGA output.

• NIOS II buffer – This buffer stores the program firmware for the NIOS II soft processor implemented on the FPGA.

• Ethernet buffer – This buffer stores processed image frames to allow for latency on the Ethernet connection.

Because of the transfer speeds of the memory chips, the PCB tracks for the DDR2 memory chips are length tuned to ensure signals arrive coincidentally.

A Flash memory chip is included to provide non-volatile storage of the FPGA configuration. On power-up the FPGA is programmed by the Flash memory meaning an external computer using the JTAG interface is only necessary if the program is being changed.

The Illumination Board provides modulated light to illuminate the scene being measured. It has sixteen 658 nm laser diodes run in a constant-current configuration providing 800 mW of optical power. The current is set using an AD5311 DAC controlled by the FPGA via a two-wire interface (TWI). To avoid overdriving and damaging the laser diodes, the current is increased slowly, allowing the diodes' temperature to stabilise at a safe operating point before full power is applied. Because the distance between the Illumination Board and the Image Capture Board must accommodate different sized lenses, both the TWI signals and the modulation signals are sent via SATA interfaces.
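The soft-start behaviour can be sketched as follows. This is an illustrative Python model rather than the actual FPGA/NIOS II firmware: the step size, delay and the write_dac_via_twi helper are hypothetical, and only the general idea of ramping the AD5311 output code towards the full-power set point is taken from the text.

import time

def write_dac_via_twi(code):
    # Placeholder for the TWI transaction that loads the AD5311 output register.
    # In the real system this is handled by the FPGA; this helper is hypothetical.
    pass

def soft_start(target_code, step=8, delay_s=0.05):
    # Ramp the drive current slowly so the laser diodes settle at a safe operating
    # temperature before the full-power current set point is applied.
    code = 0
    while code < target_code:
        code = min(code + step, target_code)
        write_dac_via_twi(code)
        time.sleep(delay_s)  # allow the diode temperature to stabilise between steps

soft_start(target_code=700)  # illustrative 10-bit DAC code for the full-power set point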

The Image Capture Board contains the modulation drivers for the PMD19K2 sensor. The sensor is split into four 40 × 120 pixel modulation blocks. Each block has two modulation inputs (ModA and ModB, each of which has an associated analogue video output). Eight Ultra High Current pin drivers are used to drive the modulation inputs.

Analogue video signals from the sensor are converted into digital frames using an AD9826 16-bit imaging signal processor from Analog Devices. A 172-pin high-speed mezzanine connector connects the Image Capture Board to the FPGA Board.

The External Interface Board provides three connections for interfacing with the range imaging system. A VGA connection allows data to be displayed in real time on an external 640 × 480 monitor. This display is split into four zones, each of which can be configured to show data from any stage of the processing performed by the FPGA, from raw data through to range images. An Ethernet connection is provided to transfer images to an external computer for long-term storage. A USB connection provides a secondary means of storing image data on an external computer and can also be used to send commands to configure and control the imaging system. The External Interface Board is connected to the FPGA Board by a 172-pin high-speed mezzanine connector.

V. EVALUATION OF THE FIRST COMPACT SYSTEM

Several design issues were discovered with the first compact range imaging system. Because of these issues, a second revision of the compact system was developed. This section discusses the main issues and how they were solved.

One of the most critical issues with the compact system was overheating of the Image Capture Board. Each modulation block represents a capacitive load of 250 pF and should be driven up to 2.5 V to provide maximum contrast. For a modulation frequency of 40 MHz this corresponds to a current of 160 mA per modulation block. Because the boards are placed close together in the compact system, there was poor airflow around the pin drivers, and the temperature of the PCB periodically caused one of the voltage regulators on the board to go into thermal shutdown, especially at high modulation frequencies. To resolve this issue, temperature sensors and a dedicated fan controller were added. Further improvements were made by spreading out the modulation drivers and placing them on the opposite side of the PCB, which has more vertical clearance and makes it easier to attach heat sinks to the drivers.

It is desirable for the system to be able to operate from a single 12 V lead-acid battery, as this is a common power source for mobile robots. The voltage regulators in the original version of the system limited the input voltage range to between 14 V and 20 V. Replacing these regulators, specifically with a SEPIC converter, expanded the allowable input range to 8 V to 20 V, well within the capability of a single lead-acid battery.

The measured distance versus actual distance was tested by moving a target to set distances and recording 200 samples at each position. This was repeated 10 times and the mean measured distance is shown in Fig. 5.


As the distance increases, the amplitude of the returning light decreases, so the precision of the system decreases. Error bars in Fig. 5 indicate one standard deviation.

Figure 5. Static measured distance versus actual distance

VI. PROBLEMS WITH MOTION

Because indirect time of flight measurement requires a number of successive frames to be acquired, object motion introduces errors. These motion errors clearly have a large impact on the viability of a system for mobile robotics applications. They can be classified as lateral motion errors, from movement across the field of view, and axial motion errors, from movement along the viewing axis. All current indirect range imaging cameras suffer from these motion errors.

If an object moves laterally within the ranger’s field of view this causes problems for the pixels at the edges of the object. Existing methods of solving this include using coded exposure photography, which is a common method for de-blurring standard 2D photography [14], the use of a 2D camera for edge detection [15] and optical flow algorithms [16].

Axial motion has a more subtle effect on range imaging than lateral motion, as the change in distance between successive frames is generally smaller. If we allow an object to move with a constant velocity v during the acquisition of the frames in (2), and use (3) to model the intensity of the light, the phase measured by the system, φ_m, is described by the equation

\phi_m = \tan^{-1}\!\left(\frac{\cos(\phi_3 + \theta_3) - \cos(\phi_1 + \theta_1)}{\cos(\phi_0 + \theta_0) - \cos(\phi_2 + \theta_2)}\right).  (4)

Equation (5) relates the phase at any frame, φ_n, where n is the frame number, to the phase at the start of the measurement, φ_0, where t_i is the integration time.

\phi_n = \phi_0 + \frac{4\pi f v n t_i}{c}  (5)

By substituting (5) into (4) and simplifying, the equation relating the measured phase, φ_m, to the actual phase at the first frame, φ_0, is shown to be

\phi_m = \tan^{-1}\!\left(\frac{\sin(\phi_0 + 3\Delta) + \sin(\phi_0 + \Delta)}{\cos(\phi_0) + \cos(\phi_0 + 2\Delta)}\right), \quad \Delta = \frac{4\pi f v t_i}{c}.  (6)

This error is shown against actual distance in Fig. 6 for three different velocities. The error has an average offset that is proportional to velocity and also has an oscillatory relationship with distance. This large non-linearity is highly undesirable.
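The behaviour plotted in Fig. 6 can be reproduced with a short simulation of (5) and (6). The 40 MHz modulation frequency and 30 ms per-frame integration time used here are illustrative assumptions, not necessarily the values used to generate the figure.

import numpy as np

C = 299792458.0   # speed of light (m/s)
F_MOD = 40e6      # modulation frequency (illustrative)
T_INT = 30e-3     # per-frame integration time (illustrative)

def motion_error(distance, velocity):
    # Range error for the traditional 0, 90, 180, 270 degree ordering under axial motion,
    # following (5) and (6).
    phi0 = 4 * np.pi * F_MOD * distance / C            # phase at the first frame
    delta = 4 * np.pi * F_MOD * velocity * T_INT / C   # phase change per frame due to motion
    num = np.sin(phi0 + 3 * delta) + np.sin(phi0 + delta)
    den = np.cos(phi0) + np.cos(phi0 + 2 * delta)
    phi_m = np.arctan2(num, den) % (2 * np.pi)         # measured phase, from (6)
    return C * phi_m / (4 * np.pi * F_MOD) - distance

distances = np.linspace(0.5, 3.5, 200)
for v in (0.25, 0.5, 1.0):                             # axial velocities in m/s (illustrative)
    err = motion_error(distances, v)
    print(v, err.mean(), err.max() - err.min())        # average offset and spread of the error with distance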

Figure 6. Theoretical motion error versus actual distance for various velocities

VII. IMPROVING THE MOTION RESPONSE OF INDIRECT TOF CAMERAS

A promising approach to improving the motion response of indirect time of flight cameras is changing the order in which the samples for the Discrete Fourier Transform are taken. If the order of the phase steps is changed from the traditional 0º, 90º, 180º, 270º then (6) can be generalised to

\phi_m = \tan^{-1}\!\left(\frac{\sin(\phi_0 + m_3\Delta) + \sin(\phi_0 + m_1\Delta)}{\cos(\phi_0 + m_0\Delta) + \cos(\phi_0 + m_2\Delta)}\right),  (7)

where m_0, m_1, m_2, m_3 is some ordering of [0 1 2 3] indicating the order in which the frames were measured. Fig. 7 shows the effect that changing the order of the frames has on the error. Only four orderings are shown; all other permutations produce the same results as those shown, with different phase shifts. While the average error remains unchanged, the variation of the error with distance can be greatly reduced by an intelligent selection of the sampling order.
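The effect of reordering can be explored numerically with the generalised form (7). As above, the modulation frequency and integration time are illustrative, and the convention that order[j] gives the acquisition position of the frame with phase step j × 90º is an assumption made for this sketch.

import numpy as np

C, F_MOD, T_INT = 299792458.0, 40e6, 30e-3  # speed of light; illustrative modulation frequency and integration time

def motion_error(distance, velocity, order):
    # Range error under axial motion for a given frame ordering, following (7).
    # order[j] is the acquisition position of the frame with phase step j * 90 degrees (assumed convention).
    phi0 = 4 * np.pi * F_MOD * distance / C
    delta = 4 * np.pi * F_MOD * velocity * T_INT / C
    m = np.asarray(order)
    num = np.sin(phi0 + m[3] * delta) + np.sin(phi0 + m[1] * delta)
    den = np.cos(phi0 + m[0] * delta) + np.cos(phi0 + m[2] * delta)
    phi_m = np.arctan2(num, den) % (2 * np.pi)
    return C * phi_m / (4 * np.pi * F_MOD) - distance

distances = np.linspace(0.5, 3.5, 200)
for order in ([0, 1, 2, 3], [1, 2, 3, 0], [2, 0, 1, 3], [3, 1, 0, 2]):
    err = motion_error(distances, 0.5, order)
    print(order, round(err.mean(), 4), round(err.max() - err.min(), 4))  # mean offset vs spread with distance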

Experimental results were taken using a linear table to move a target axially. The same motion path was repeated 100 times and the mean value was taken. The target was then moved to the position it occupied at the start of each frame, and 200 static frames were recorded and averaged. The error was then taken as the difference between the static and dynamic cases. Measurements were taken using two different sampling orders and the results are shown in Fig. 8.


As expected, the 2-0-1-3 ordering has approximately the same average error as the 1-2-3-0 ordering, but the oscillatory behaviour is significantly reduced. While this is a good first step that introduces no additional computational load on the system, there is still significant motion error. Research into methods for reducing this error is ongoing.

Figure 7. Theoretical motion error versus distance for selected orderings of frame offset

It is important to note that although the oscillatory nature and shape of the experimental error match the theory, its magnitude is approximately double. The cause of this is not currently known; however, the theory assumes that the intensity of the light remains approximately constant over the measurement period. For higher velocities this assumption may no longer be valid, and a more complex model may be required. The 1 m/s curve shows a significantly cleaner sine wave, as more data points are captured, and has a lower overall error, as expected.

Figure 8. Experimental motion error versus distance for selected orderings of frame offset

VIII. CONCLUSIONS

This paper has described the development of a compact indirect time of flight range imager designed specifically for mobile robotics applications and research. It has outlined the limitations both of this system and of indirect time of flight cameras in general, and has presented solutions and improvements that make the system more suitable for this application. Motion remains a major impediment; experimental results show that an improvement in the linearity of the motion error is possible by changing the order of the frame measurements. Further research is being conducted to reduce the motion error further.

REFERENCES

[1] R. Murphy, "Trial by fire [rescue robots]," IEEE Robotics & Automation Magazine, vol. 11, no. 3, pp. 50-61, 2004.

[2] E. Broadbent, R. Stafford and B. MacDonald, "Acceptance of healthcare robots for the older population: review and future directions," International Journal of Social Robotics, vol. 1, no. 4, pp. 319-330, 2009.

[3] D. Voth, "A new generation of military robots," IEEE Intelligent Systems, vol. 19, no. 4, pp. 2-3, Jul.-Aug. 2004.

[4] D. Katz and R. Some, "NASA advances robotic space exploration," Computer, vol. 36, no. 1, pp. 52-61, Jan. 2003.

[5] J. McClymont, D. Carnegie, A. Jongenelen and B. Drayton, "The development of a full-field image ranger system for mobile robotic platforms," Sixth IEEE International Symposium on Electronic Design, Test and Application (DELTA), pp. 128-133, Jan. 2011.

[6] O. Wulf and B. Wagner, "Fast 3D scanning methods for laser measurement systems," International Conference on Control Systems and Computer Science, 2003.

[7] M. J. Cree, A. A. Dorrington, R. Conroy, A. D. Payne and D. A. Carnegie, "The Waikato range imager," Image and Vision Computing New Zealand, pp. 233-238, 2006.

[8] T. Ringbeck, T. Möller and B. Hagebeuker, "Multidimensional measurement by using 3-D PMD sensors," Advances in Radio Science, vol. 5, pp. 135-146, 2007.

[9] http://www.pmdtec.com/

[10] http://www.mesa-imaging.ch/

[11] A. Jongenelen, A. Payne, D. A. Carnegie, A. Dorrington and M. Cree, "Development of a real-time full field range imaging system," Recent Advances in Sensing Technology, LNEE 49, pp. 113-129, 2009.

[12] A. Jongenelen, "Development of a Compact, Configurable, Real-time Range Imaging System," Ph.D. dissertation, School of Engineering, Victoria University of Wellington, 2010.

[13] J. McClymont, "Development of Extrospective Systems for Mobile Robotic Vehicles," M.S. thesis, School of Engineering, Victoria University of Wellington, 2010.

[14] R. Raskar, A. Agrawal and J. Tumblin, "Coded exposure photography: motion deblurring using fluttered shutter," SIGGRAPH, Boston, 2006.

[15] O. Lottner, A. Sluiter, K. Hartmann and W. Weihs, "Movement artefacts in range images of time-of-flight cameras," International Symposium on Signals, Circuits and Systems (ISSCS 2007), pp. 1-4, 2007.

[16] M. Lindner and A. Kolb, "Compensation of motion artifacts for time-of-flight cameras," DAGM Workshop on Dynamic 3D Imaging, Siegen, Germany, pp. 16-27, 2009.
