IEEE TRANSACTIONS ON EDUCATION, VOL. 40, NO. 4, NOVEMBER 1997 253
A Vision-Guided Autonomous Vehicle: An Alternative Micromouse Competition
Ning Chen
Abstract—A persistent problem facing today's engineering educators is how to promote students' interest in science and engineering. One of the best approaches to this challenge is to sponsor a technological competition that combines publicity, technology, and student participation. A current leading competition called "micromouse," although earning high marks on publicity and technology, has had difficulty attracting large-scale student participation. In this paper a vision-guided autonomous vehicle project developed at California State University, Fullerton (CSUF) is proposed as an alternative. The idea is to fit an ordinary radio-controlled toy car with a charge-coupled device (CCD) camera and a transmitter. Image processing and control of the car are accomplished by a personal computer (PC). The road image captured by the camera is transmitted to the PC via a transmitter–receiver pair at 900 MHz. A frame grabber board digitizes the image, and an image-processing program subsequently analyzes the road condition and generates adequate drive commands that are transmitted back to the vehicle via the built-in radio controller. Student teams write programs to compete in racing or maze solving. In this paper detailed hardware and software designs of the project are presented. The merit of the project with respect to the criteria of publicity, technology, and student participation is also addressed.
Index Terms—Autonomous vehicles, student competition, vision-guided vehicles.
I. INTRODUCTION
THE micromouse competition is one of the major student
competitions that stimulate the interest of students majoring
in electrical engineering, computer science, and computer
engineering. Nevertheless, this competition has not been pop-
ular enough to become a significant inter-school activity. For
example, IEEE Region Six holds a micromouse competition
annually, and among the six to seven engineering schools that
attend the meeting regularly, usually only one or two
have managed to produce fully functional mice.
The major reason for this low success rate is not a
lack of interest; rather, it is the technical and nontechnical
difficulties associated with the micromouse project. We at
CSUF have tried to establish the micromouse program for
several years [1] but have achieved only modest results.
By learning from the weaknesses of the current micromouse
competition, we propose a vision-guided autonomous vehicle
project as a viable alternative.
The project consists of the following hardware: a radio-
controlled (RC) toy car, a CCD camera, a transmitter–receiver
Manuscript received August 26, 1996; revised September 2, 1997.
The author is with the Department of Computer Science and the Department of Electrical Engineering, California State University, Fullerton, CA 92634 USA.
Publisher Item Identifier S 0018-9359(97)08366-0.
pair, and a video frame grabber. The vehicle is an off-the-
shelf RC toy car mounted with a miniature CCD camera and
a 900-MHz transmitter. A receiver connected to a PC receives
the image and generates an RS-170 video signal. The video
signal is digitized by a frame grabber built at CSUF and is
discussed in detail later. The digitized image is then processed
by a program written in the C language. The software performs
pattern recognition and generates drive commands. The hand-held controller that comes with the RC car is modified so that
it can be controlled by the PC directly. The drive command is
sent to the vehicle by the hand-held controller at a frequency
of 27 MHz. The vehicle runs on a white floor with black dashed lines as tracks. Tracks can be arranged as a single loop
for racing or as multiple branches for maze solving. Student
teams write programs that drive the vehicle. A large-screen
TV also receives the image in real time. The audience can
view the actual image on TV as seen by the vehicle while
also commanding a bird's-eye view of the competition arena
from the audience stand.
Section II analyzes the weaknesses of the micromouse
competition. The hardware of the proposed vision-guided
autonomous vehicle is presented in Section III. Section IV
covers the software responsible for pattern recognition and
command generation. Section V discusses the advantages of
the proposed project for promotion and recruiting purposes in
terms of publicity, technology, and student participation. The
conclusion is presented in Section VI.
II. CURRENT MICROMOUSE COMPETITION
A. General Description
The current micromouse competition is conducted on a maze
as specified in Fig. 1. A typical micromouse built to navigate
the maze is shown in Fig. 2.
The first difficulty is the maze itself. The specifications
require a large wooden floor of 3 m × 3 m that is expensive
and difficult to build. At CSUF we tried to build one, but
after spending $1700, it still did not fully comply with the
specifications. Utilizing the maze is another problem. A
special room needs to be set aside permanently for the maze.
Most of the testing of the micromouse requires the maze and
it is virtually impossible for students to test their vehicles at
home.
The construction of the micromouse also presents difficulty.
The main reason is that the maze square is very small. No off-
the-shelf toy car with a steering mechanism can fit in the square.
As a result, students need to build a precision mechanism that
0018–9359/97$10.00 © 1997 IEEE
Authorized licensed use limited to: COLLEGE OF ENGINEERING. Downloaded on May 4, 2009 at 10:52 from IEEE Xplore. Restrictions apply.
Fig. 1. Micromouse maze specification.
Fig. 2. A micromouse example.
is beyond the ability of most electrical engineering or computer
science students. The cost of such a precision mechanism is
also very high.
The on-board computer typically is a single board computer
built from scratch using the wire-wrap prototyping technique.
Microcontrollers used include the 68HC11 and 80C188EB.
To increase reliability and to reduce power consumption,
micromouse builders usually try to reduce the number of
components used. As a result, the on-board computer is usually
barely able to handle the computation. The software program
is written from scratch and stored in EPROM’s. There is no
floppy or hard drive and the on-board computer does not
provide any programming environment, making debugging an
extremely time-consuming process [1].
The sensor system is made of eight or more reflective
infrared sensors. These discrete sensors can take in only a
limited amount of information.
B. Concerns
The major weakness of the micromouse competition is that
it is very difficult to achieve large-scale student participation
without strong support from the faculty. Furthermore, high
school students and college freshmen who show significant
interest cannot build their own micromice without having to
take two to three years of engineering courses first. Our
experience with the micromouse project at CSUF yields the following
observations.
1) The maze specifications are not reasonable.
2) Professional machine-shop service is required to build
the mechanical portion of the micromouse.
3) Building and programming an embedded real-time sys-
tem is interesting. However, it can quickly become
a nightmare if expensive in-circuit-emulator, software-
developing tools and other equipment are not available.
4) It is very difficult to sponsor high school micromouse
teams. The value of the competition for the purpose of
recruitment is not high.
III. THE PROPOSED VISION-GUIDED
AUTONOMOUS VEHICLE COMPETITION
A. General Description

We propose a new competition that takes advantage of the
following.
1) The use of CCD cameras, frame grabbers, and related
products that are becoming widespread due to the ex-
plosion of the multimedia market. These products are
inexpensive and readily available.
2) The fact that many students own powerful computers
with excellent programming environments and that many
of them possess excellent programming skills.
A vision-guided autonomous vehicle consists of a CCD
camera and a transmitter mounted on an RC car while a
receiver and a frame grabber are connected to a PC. A
competition arena that consists of a white floor with black
dashed lines serving as tracks hosts the game. The participants
run their software program on a PC. The program processes
the image seen by the vehicle and issues drive commands back
to the vehicle. Participants’ merit is judged by how fast the
vehicle finishes the loop or by how intelligently the vehicle
solves a maze. Major components of the project are discussed
in detail below:
B. Vehicle
The vehicle used is a popular radio-controlled toy car called
“little R/C buggy.” This toy car has a good steering mechanism
and has a changeable gear train. During testing and debugging,
the low gear feature is very handy. With three C batteries, it
can run almost 6 h without recharging. This also makes the
testing less painful.
C. Camera
We used an inexpensive black and white CCD camera with
a built-in wide angle lens costing about $130. It outputs a
standard RS-170 video signal and accepts a dc input from 8
to 14 V.
D. Transmitter/Receiver Pair
The transmitter/receiver pair is an off-the-shelf product
costing about $100 per pair. It can deliver a quality video signal
at 900 MHz with a range of 125 ft. Fig. 3 shows a picture
of the vehicle fitted with a CCD camera, a transmitter, and a
battery pack.
E. Frequency Demodulator
The off-the-shelf receiver outputs a signal intended for TV
channel 4, which needs to be demodulated back to the RS-
170 video signal. Instead of buying or building a frequency
demodulator, we simply run the receiver output to a VCR. The
Fig. 3. A radio-controlled toy car fitted with CCD camera, transmitter, andbattery pack.
Fig. 4. A track setup.
VCR’s VHF input takes in TV channel 4 signal and produces
an RS-170 video signal on its VIDEO OUT connector.
F. Video Frame Grabber
At CSUF we built a low-cost video frame grabber from
scratch. The description of the CSUF frame grabber controller
can be found in Appendix I.
G. Interface to the Hand-Held Controller
All RC toy cars are equipped with a hand-held controller that
issues drive commands at 27 or 49 MHz. All control buttons on
the hand-held controller are on/off-type mechanical switches.
Using n-p-n transistors (e.g., 2N3904), we easily constructed
electronic switches that accepted commands from the PC.
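As a sketch of how such switches might be driven, the fragment below composes a bit mask for a drive command. The bit assignments are hypothetical, since the paper does not specify the PC-side wiring; on the original DOS-era PC the resulting mask would be written to a digital-output port (e.g., with an outport-style call), which is omitted here so the logic stays testable off-line.

```c
#include <assert.h>

/* Hypothetical bit assignments for the four transistor switches; the
   paper does not specify the PC-side wiring. */
enum drive { STOP = 0, FORWARD = 1, BACKWARD = 2, LEFT = 4, RIGHT = 8 };

/* Compose the output bit pattern for a drive command. Forward/backward
   and left/right are mutually exclusive, as on the original hand-held
   controller. */
unsigned char drive_mask(int fwd_back, int turn)
{
    unsigned char mask = 0;
    if (fwd_back == FORWARD)  mask |= FORWARD;
    if (fwd_back == BACKWARD) mask |= BACKWARD;
    if (turn == LEFT)         mask |= LEFT;
    if (turn == RIGHT)        mask |= RIGHT;
    return mask;
}
```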
H. Race Track/Maze
The race track/maze on which the vehicle operates consists
of a white base dotted with black dashed lines as the track.
Fig. 4 shows an example of a possible setup.
The white base can be easily constructed by taping white
poster sheets on the floor. The black dashed lines are made of
nonreflecting black paper cut into 1 cm by 2 cm rectangular
blocks. One can arrange the dashed black lines into a race
Fig. 5. A competition arrangement.
Fig. 6. An actual track image seen by the vehicle.
track or a maze. The cost and time for the construction is
minimal. Fig. 5 shows an arrangement of the competition. By
attaching an additional receiver to a TV, the audience can see
the actual image seen by the vehicle.
IV. SOFTWARE
We tested the vision-guided vehicle on a white floor with a
black dashed line. The vehicle is supposed to follow the dashed
line. The software used to achieve this task is written in C. The
first part of the software displays the incoming road image on
the PC monitor continuously. This was done by writing pixels
into VGA’s video memory directly [4]. Although the VGA
can display 256 colors at one time, the maximum number of
gray levels that can be shown is only 64. Fig. 6 shows an
actual road image seen by the vehicle.

Note that the upper portion of Fig. 6 shows the image of
the surrounding objects. This is possible when the vehicle
travels near the edge of the white floor. The image processing
task begins with a reduction of the image from 512 × 512
to 256 × 256 pixels. Experiments showed this reduction did not
degrade the information significantly. Conversion of the image
to binary representation is the second step. By selecting a
proper threshold level, each pixel becomes either completely
white or completely black. The storage of the image can then
be achieved with a 256 × 32-byte array. This is done using
1 b, instead of 1 byte, to represent one pixel. Handling the
image pixel by pixel is not feasible for two reasons: one is
the effect of noise and the other is the speed of processing.
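The 1-b-per-pixel packing described above can be sketched in C as follows; the array sizes follow the text, while the function names and the most-significant-bit-first convention are our assumptions.

```c
#include <assert.h>
#include <string.h>

#define H 256
#define W 256

/* Pack a 256 x 256 binary image into 256 rows of 32 bytes, 1 b per
   pixel, most significant bit first within each byte. */
void pack_image(const unsigned char pixel[H][W], unsigned char packed[H][W / 8])
{
    int r, c;
    memset(packed, 0, (size_t)H * (W / 8));
    for (r = 0; r < H; r++)
        for (c = 0; c < W; c++)
            if (pixel[r][c])
                packed[r][c >> 3] |= (unsigned char)(0x80 >> (c & 7));
}

/* Read one pixel back out of the packed array. */
int get_bit(const unsigned char packed[H][W / 8], int r, int c)
{
    return (packed[r][c >> 3] >> (7 - (c & 7))) & 1;
}
```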
We applied spatial digitization [5] with an 8 × 8-pixel
cell. All 64 pixels within the space covered by the cell are
examined. If the majority of them are black, then the cell is
labeled black, and similarly for the white cells. The threshold
level was selected experimentally. Now the road image can be
represented by 32 × 32 cells with the following data structure:
struct cell {
    unsigned char color;
    unsigned int group;
};

struct cell imacell[32][32];
The first field represents the color of the cell, either black
or white. Note that each black block is part of a dashed line
in Fig. 6 and consists of clusters of black cells. The second
field is intended for the following use. Once the road image
has been reduced to 32 × 32 records, we need to answer three
questions: How many clusters of black cells are there (i.e., how
many black blocks)? Where are the clusters' coordinates?
What are the clusters’ shapes? After trying traditional edge-
detection techniques without much success, we proposed an
algorithm that solves this problem in real time. The description
of the proposed algorithm is given in Appendix II.
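The 8 × 8 majority-vote reduction described above can be sketched as follows, assuming a 256 × 256 binary input image; the strict-majority threshold is an assumption, since the paper states only that its threshold was selected experimentally.

```c
#include <assert.h>

#define IMG 256
#define CELL 8
#define NCELLS (IMG / CELL)   /* 32 cells per side */

/* Majority-vote reduction: a cell is labeled black (1) when more than
   half of its 64 pixels are black. A strict majority is assumed here;
   the paper selected its threshold experimentally. */
void spatial_digitize(const unsigned char pixel[IMG][IMG],
                      unsigned char cellcolor[NCELLS][NCELLS])
{
    int cr, cc, r, c, black;
    for (cr = 0; cr < NCELLS; cr++)
        for (cc = 0; cc < NCELLS; cc++) {
            black = 0;
            for (r = 0; r < CELL; r++)
                for (c = 0; c < CELL; c++)
                    black += pixel[cr * CELL + r][cc * CELL + c] != 0;
            cellcolor[cr][cc] = (unsigned char)(black > CELL * CELL / 2);
        }
}
```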
A. Driving
Once the coordinates of the blocks are determined, we can
plan a trajectory. The starting point of the trajectory is the
nearest black dashed line to the front end of the vehicle. By
examining the slope of the trajectory a drive decision can be
made. There are four basic drive decisions: forward, backward,
left turn, and right turn.
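A sketch of such a slope-based decision rule, assuming cell coordinates with rows growing away from the vehicle and columns growing to the right; the ±2-column dead band is an illustrative assumption, not a value from the paper.

```c
#include <assert.h>

enum cmd { FORWARD, BACKWARD, LEFT_TURN, RIGHT_TURN };

/* Decide a drive command from two cluster centers in cell coordinates
   (nearest block first). Rows grow away from the vehicle and columns
   grow to the right; the +/-2-column dead band is illustrative. */
int drive_decision(int col0, int row0, int col1, int row1)
{
    int dcol = col1 - col0;
    if (row1 == row0)             /* track runs sideways: steer toward it */
        return (dcol > 0) ? RIGHT_TURN : LEFT_TURN;
    if (dcol > 2)  return RIGHT_TURN;
    if (dcol < -2) return LEFT_TURN;
    return FORWARD;               /* BACKWARD is left for recovery moves */
}
```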
V. ADVANTAGES OF THE PROPOSED PROJECT
We believe that the proposed vision-guided vehicle project
will achieve large-scale participation and can be used as an
effective recruiting tool for the following reasons.
1) The track/maze can be easily constructed.
2) All hardware, vehicle, CCD camera, receiver, and trans-
mitter are commonly available at a reasonable price. Al-
though we built the frame grabber at CSUF, commodity-
priced frame grabbers are showing up in stores for
multimedia applications.
3) The level of challenge on software can vary to suit
the student. On the low end, entry level students with
one semester programming experience can produce a
working program without too much difficulty. On the
high end, a championship program can require extensive
knowledge ranging from target recognition, artificial
intelligence, to fuzzy logic.
4) There are many young people whose hobbies are com-
puter games and RC cars. Channeling those
hobbies into real education may become popular among
students, parents, and educators.
Fig. 7. Frame grabber block diagram.
VI. CONCLUSION
In this paper a modified micromouse project is proposed.
The current micromouse competition has never achieved a
reasonable participation rate among engineering schools. The
major reasons are: 1) unreasonable maze specifications; 2) high
engineering cost; and 3) inadequate computation environment.
The proposed vision-guided autonomous vehicle consists of
five major parts: a low-cost radio-controlled toy car, a CCD
camera, a receiver/transmitter, a video frame grabber, and a
personal computer. The vision-guided vehicle is controlled
by a program run on the PC to follow a track or to solve
a maze. The proposed project has the following advantages:
1) low-cost track/maze construction; 2) reasonable engineering
cost with the use of off-the-shelf components; 3) an adequate
computation environment by using a PC and PC programming
tools; and 4) adequate challenges that satisfy all levels of
students.
APPENDIX I
The frame grabber's top-level block diagram is shown in
Fig. 7.

A low-cost, high-speed analog-to-digital converter (HI1175
from Harris; about $7) is used to digitize the analog video
signal. A pin-to-pin compatible part is also available from
Texas Instruments. The converter HI1175 has 8-b resolution
and samples at 20 MHz. The video memory buffer consists
of two 128-Kbyte static random-access memory (SRAM)
CKX581000 from Sony (about $8 each). The memory buffer
is arranged as 128 K by 16 b to implement word transfers
between the frame grabber and the PC. A video sync separator
(LM 1881 from National Semiconductor, about $2) is used to
generate composite sync output, vertical sync output, burst
output, and odd/even output. The LM1881 data sheet [2]
offers a very good explanation of the RS-170 signal. An
application note from Harris Semiconductor [3] is also quite
helpful. The 74LS688 comparator implements a user-selectable
I/O base address. The control of all signals is done by
a field-programmable gate array (FPGA) (XC3030A from
Xilinx, about $10). The FPGA approach greatly reduces the
complexity of wiring and is highly recommended. There are
two major circuits inside the FPGA. The first part handles
the data exchange between the frame grabber and the PC’s
ISA bus. There is a control register whose bits initialize
the read mode and switch between the read mode
and capture mode. When operating in the read mode, a one-
word register is used to buffer data transferred from the frame
grabber memory to the PC’s memory. This is necessary to
ensure that while the PC is reading one word, the next word
is clocked into the buffer. The second circuit is an address
generator. In the read mode a read signal from the PC generates
a delayed pulse that advances the address generator to the next
address. By doing so, one image frame can be transferred
to the PC memory by reading the same I/O address 131 072
times. In the capture mode, the address generator is driven by a
free-running crystal oscillator for the first 512 counts. Counts
above 512 are driven by the horizontal sync signal coming
from LM1881.
Using this approach, any mis-synchronization will be
restricted to within one video line instead of propagating
throughout the whole image. The vertical sync signal is used
to reinitialize the address counter. The even/odd frame signal
directs the data flow to the even memory bank or to the odd
memory bank. The CSUF-built frame grabber shown in Fig. 8
was used to produce all the pictures in this paper.
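The read-mode transfer loop might look like the following sketch. The port-read call is abstracted behind a function pointer (a real DOS program would use an I/O-port read at the user-selected base address, a detail the paper leaves open); the 131 072-word count matches the 128 K × 16-b buffer described above.

```c
#include <assert.h>

#define FRAME_WORDS 131072L   /* 128 K x 16-b words, as in the text */

/* Stand-in type for the real I/O-port read. Each read makes the FPGA
   advance its internal address counter, so the PC simply reads the
   same I/O address FRAME_WORDS times. */
typedef unsigned short (*port_read_fn)(void);

void read_frame(port_read_fn read_word, unsigned short *dst)
{
    long i;
    for (i = 0; i < FRAME_WORDS; i++)
        dst[i] = read_word();
}

/* A simulated port for off-line testing: returns 0, 1, 2, ... */
static long sim_count = 0;
static unsigned short sim_read(void)
{
    return (unsigned short)(sim_count++);
}
```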
APPENDIX II

The group1 function scans through each image cell. If the
cell is black and still has the original initialized group number,
we then assign the next available group number to this image
cell and perform a roundup operation.
/* gnumber and the gsummary[] group-statistics array are globals;
   gsummary[i] holds the cell count (q), coordinate sums, and
   averaged coordinates of group i */
void group1(void)
{
    int row, col, t;

    for (t = 0; t < 50; t++) {
        gsummary[t].q = 0;
        gsummary[t].rowsum = 0;
        gsummary[t].colsum = 0;
    }
    for (row = 0; row <= 31; row++)
        for (col = 0; col <= 31; col++)
            if (imacell[row][col].color == 1
                && imacell[row][col].group == 0) {
                gnumber++;
                imacell[row][col].group = gnumber;
                roundup(row, col);
            }
} /* end of group1 */
The function roundup recruits all neighboring black cells
that are directly or indirectly (through other neighbors) con-
nected to the starting image cell. During the rounding up
process, the number of cells and the averaged column and
row coordinates of the same group are computed. The
following is the listing of the function roundup:
Fig. 8. The frame grabber built at CSUF.

/* This is a recursive call */
void roundup(int row, int col)
{
    gsummary[gnumber].q++;
    gsummary[gnumber].rowsum = gsummary[gnumber].rowsum + row;
    gsummary[gnumber].colsum = gsummary[gnumber].colsum + col;
    gsummary[gnumber].rowave = gsummary[gnumber].rowsum / gsummary[gnumber].q;
    gsummary[gnumber].colave = gsummary[gnumber].colsum / gsummary[gnumber].q;

    /* check left */
    if (col >= 1)
        if (imacell[row][col-1].color == 1
            && imacell[row][col-1].group == 0) {
            imacell[row][col-1].group = gnumber;
            roundup(row, col-1);
        }

    /* check right */
    if (col <= 30)
        if (imacell[row][col+1].color == 1
            && imacell[row][col+1].group == 0) {
            imacell[row][col+1].group = gnumber;
            roundup(row, col+1);
        }

    /* check up */
    if (row >= 1)
        if (imacell[row-1][col].color == 1
            && imacell[row-1][col].group == 0) {
            imacell[row-1][col].group = gnumber;
            roundup(row-1, col);
        }

    /* check down */
    if (row <= 30)
        if (imacell[row+1][col].color == 1
            && imacell[row+1][col].group == 0) {
            imacell[row+1][col].group = gnumber;
            roundup(row+1, col);
        }
} /* end of roundup */
Running the above algorithm yields the following answers:
a) The number of groups (clusters).
b) The size of each group.
c) The center of gravity of each group.
Although the shape of each group is not determined, further
processing is possible. For example, computing the standard
deviation of the column and row coordinates may yield a good
estimate of the cluster's shape. A rigorous check of all coordinates of the cells
within each group can definitely yield the shape of the group.
The size and the shape of each group can then be used to weed
out false blocks such as the objects that appear on the upper
corners of Fig. 6.
REFERENCES
[1] N. Chen, H. Chung, and Y. K. Kwon, "Integration of micromouse project with undergraduate curriculum: A large-scale student participation approach," IEEE Trans. Educ., vol. 38, pp. 136–144, May 1995.
[2] "LM1881 video sync separator," Application Note, National Semiconductor.
[3] P. Louzon, "Circuit considerations in imaging applications," Application Note AN9313.1, Harris Semiconductor.
[4] M. Abrash, Zen of Graphics Programming. Coriolis Group Books, 1995.
[5] M. C. Fairhurst, Computer Vision for Robotic Systems: An Introduction. Englewood Cliffs, NJ: Prentice-Hall, 1988.
Ning Chen graduated from the National Cheng-Kung University, Tainan, Taiwan, in 1978 with the B.S. degree in hydraulics engineering and received the M.S. and Ph.D. degrees, both in electrical engineering, from Colorado State University, Fort Collins, in 1984 and 1986, respectively.
After a postdoctoral appointment at the University of Illinois at Urbana-Champaign, he joined the faculty at California State University, Fullerton, in 1987. He is currently an Associate Professor in the Department of Computer Science and the Department of Electrical Engineering at California State University, Fullerton. His major research interests are in the fields of robotics, real-time embedded systems, and artificial intelligence.