
VISION BASED AUTONOMOUS ROBOT

MOHD RASHIDI BIN SALIM

UNIVERSITI TEKNOLOGI MALAYSIA


“I hereby declare that I have read the content of this thesis and, in my opinion, it is sufficient in terms of scope and quality for the purpose of awarding a Bachelor Degree of Electrical Engineering (Mechatronics)”.

Signature : …………………………………………...

Name of Supervisor : Associate Professor Dr. Rosbi Bin Mamat

Date : 12 MAY, 2008


VISION BASED AUTONOMOUS ROBOT

MOHD RASHIDI BIN SALIM

A thesis submitted in fulfillment of the requirements for the award of the Degree of

Electrical Engineering (Mechatronics)

Faculty of Electrical Engineering

Universiti Teknologi Malaysia

MAY 2008


I declare that this thesis entitled “Vision Based Autonomous Robot” is the result of my own research except as cited in the references. The thesis has not been accepted for any degree and is not currently submitted in candidature for any other degree.

Signature : ………………………………..

Name : MOHD RASHIDI BIN SALIM

Date : 12 MAY, 2008


Lovingly dedicated to:

My father, Salim Bin Ahmad

My late mother, Hasmah Bte Hj. Hassim

My elder siblings Kamal, Leman and Mimi, and my younger siblings Gjal, Gjut & Zaini

“Your support is the greatest inspiration for me. Thank you and love you”


ACKNOWLEDGEMENT

Primarily, I would like to take this opportunity to express my deepest gratitude towards my project supervisor, Associate Professor Dr. Rosbi Bin Mamat, who has persistently and determinedly assisted me during the project. It would have been very arduous to complete this project without the passionate support, guidance, encouragement and advice he gave.

My utmost thanks also go to my family, who have supported me throughout my academic years; without them, I might not have become who I am today. I am grateful for the love, affection and care of all my family members as well. My fellow friends should also be recognized for their continuous support and encouragement. My sincere appreciation also extends to all my course mates and others who provided assistance on various occasions; their views and tips were definitely useful.

Last but not least, thanks to the individuals who contributed either directly or indirectly to this thesis project. Without their encouragement, support and advice, this project might not have been carried out successfully. Of course, as usual, all errors and oversights are entirely my own. Thank you once again.


ABSTRACT

The vision-based mobile robot is one of the important research topics in the machine vision area and is widely used in various applications. Recently, many robots have come into widespread use, especially in the manufacturing and industrial sectors. Mobile robots have become much more familiar nowadays since they are equipped with many kinds of intelligence that are beneficial to humans. One such intelligence or application attached to robots is the vision system, and many researchers are currently working on and focusing on how a vision-based robot can be successfully developed.

In conjunction with the development of a vision system on the robot itself, a Vision Based Autonomous Robot has been successfully designed for this project. In this project, a PIC18F452 microcontroller is used as the brain of the robot to control the robot's movements; all data and information are processed there. The robot is also equipped with a CMUcam1 vision sensor, which performs onboard real-time image processing for object and color recognition. In addition, the robot is equipped with four pairs of IR sensors that guide it while approaching a detected object by sensing barriers from multiple angles, front and back. The C language is used to program the microcontroller via MikroC so that it functions properly as desired. The mobile robot also contains one servomotor to control the position of the CMUcam1 on top of the robot.


ABSTRAK

A vision-based autonomous mobile robot is an important research topic in the field of machine vision and is widely used in various applications. Recently, many robots have been used extensively everywhere, especially in the manufacturing and industrial sectors. One of the intelligences or applications included with a robot is the vision system, on which many researchers are now working and focusing so that vision-based robots can be developed successfully.

In conjunction with the development of the vision system on the robot itself, a vision-based autonomous mobile robot has been successfully designed for this project. In this project, a PIC18F452 microcontroller is used as the brain of the robot to control its movements, and all data and information are processed there. It is equipped with a CMUcam1 vision sensor, which performs onboard real-time image processing for object and color recognition. It is also equipped with four pairs of IR sensors that guide the robot as it approaches a detected object by sensing obstacles coming from various angles, front and back. The C language was used to program the microcontroller using MicroC so that it can function as desired. The robot also includes one servomotor to control the position of the CMUcam1 on the top of the robot.


TABLE OF CONTENTS

CHAPTER TITLE

TITLE PAGE
DECLARATION
DEDICATION
ACKNOWLEDGEMENT
ABSTRACT
ABSTRAK
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
LIST OF ABBREVIATIONS
LIST OF APPENDICES

1 INTRODUCTION
1.1 Robot Definition
1.2 Mobile Robot
1.3 Objectives of Research
1.4 Scope of Works
1.5 Research Methodology
1.6 Thesis Structure

2 LITERATURE REVIEW
2.1 "Super" Boe-Bot Robot
2.2 Vision-based Quadruped Walking Robot System
2.3 Parallax Boe-Bot


2.4 CAMBOT
2.5 Vision based Autonomous Docking Robot
2.6 Summary

3 ROBOT DESIGN
3.1 Robot Structure
3.2 Mechanical Design
3.2.1 Motor Positioning and Installation
3.3 Base
3.4 Servo Motor
3.5 DC Motor
3.5.1 Pulse Width Modulation
3.6 Main Electronic Components
3.6.1 CMU Cam1 Vision Sensor
3.6.1.1 CMUCam1's Features
3.6.2 Relay
3.6.3 Microcontroller
3.6.4 Power
3.6.5 Wheel
3.7 Conclusion

4 CIRCUIT DESIGN
4.1 Overview
4.2 IR Sensor Circuit
4.3 Main Controller Circuit
4.4 Connecting the Microcontroller to the PC Circuit


5 SOFTWARE DEVELOPMENT
5.1 Overview
5.2 Mikroelektronika (MikroC)
5.2.1 USART Library
5.3 WinPic800
5.4 Programming Language
5.5 Initialization of CMUcam1

6 VISION SYSTEM USING CMUcam1
6.1 Introduction
6.2 Firmware
6.3 The Vision Sensor Target
6.3.1 The Test Area
6.4 Calibrating the CMUCam1 Vision Sensor in RGB Color Space
6.4.1 The Target
6.4.2 The Test Area
6.5 General Testing
6.5.1 Fine Tuning for RGB Color Space
6.5.2 Target Brightness
6.5.3 RED Gain Control and BLUE Gain Control
6.5.3.1 Min Value Adjustment
6.5.3.2 Max Value Adjustment
6.6 C Code Commands for CMUcam1

7 RESULT AND DISCUSSION
7.1 Overview
7.2 Configuration of CMUcam1 and Servo Angle
7.3 Lighting Factor


7.4 CMUcam Evaluation
7.4.1 Sensitivity of CMUcam
7.4.2 Baud Rate
7.4.3 Sensor Alertness

8 CONCLUSION AND RECOMMENDATIONS
8.1 Conclusion
8.2 Recommendations

REFERENCES
APPENDICES


LIST OF TABLES

TABLE NO. TITLE

6.1 CMUcam1 Programming Commands


LIST OF FIGURES

FIGURE NO. TITLE

1.1 Flow Chart for Project Methodology
2.1 "Super" Boe-Bot Robot
2.2 Quadruped-Walking Robot
2.3 Parallax Boe-Bot
2.4 CAMBOT
2.5 Vision based Autonomous Docking Robot
3.1 Mechanical Structure-Top View
3.2 Mechanical Structure-Side View
3.3 DC Motor Positioning
3.4 FPS3003 Standard Servo
3.5 DC Geared Motor
3.6 PWM Signals of Varying Duty Cycles
3.7 Main Components
3.8 Rear View of CMU Cam1
3.9 Relay and Its Symbol
3.10 PIC 18F452 (PDIP)
3.11 PIC 18F452 (TQFP)
3.12 Variety of Power Supply
3.13 Variety of Wheels
3.14 Robot-Front View
3.15 Robot-Side View
3.16 Robot-Top View
4.1 IR Sensor Circuit
4.2 Real View of IR Sensor Circuit


4.3 Main Controller Circuit
4.4 Microcontroller Unit (MCU) Circuit
4.5 RS232 HW Connection
4.6 Voltage Regulator
4.7 Real View of Relay Circuit
4.8 Schematic for Relay Circuit
5.1 Mikroelektronika (MikroC)
5.2 Example of USART Library Source Code
5.3 WinPic 800
5.4 WinPic 800 (Hardware Settings)
5.5 WinPic 800 (GTP-USB_Lite)
5.6 Detect Device Icon
5.7 WinPic 800 Setting Mode Interface
6.1 Data Packet Returned by the CMUcam1
6.2 Values of Rmax, Rmin, Gmax, Gmin, Bmax, Bmin
7.1 Image Resulting from the Configuration of Servo and Camera
7.2 Configuration of Camera and Servo and the Image Captured by the Camera
7.3 X-axis and Y-axis of the Window


LIST OF ABBREVIATIONS

BRA British Robot Association

CMU Carnegie Mellon University

MCU Microcontroller Unit

PDIP Plastic Dual In-line Package

RGB Red-Green-Blue

FPS Frames Per Second

Spos The current position of the servo

Mx The middle-of-mass x value

My The middle-of-mass y value

x1 The leftmost corner's x value

y1 The leftmost corner's y value

x2 The rightmost corner's x value

y2 The rightmost corner's y value


LIST OF APPENDICES

APPENDIX TITLE

A MICRO C PROGRAMME
B PIC18F452 KEY FEATURES
C SCHEMATIC CIRCUITS


CHAPTER 1

INTRODUCTION

Vision is the most important of the five human senses, since it provides 90% of the information our brains receive from the external world. Its main goal is to interpret and interact with the environment we live in. In everyday life, humans are capable of perceiving thousands of objects, identifying hundreds of faces, recognizing numerous traffic signs, and appreciating beauty almost effortlessly.

Computer vision is an applied science whose main objective is to provide computers with the functions present in human vision. Typical applications of vision systems are robot navigation, video surveillance, medical imaging and industrial quality control. Vision systems inspired by human vision represent a promising alternative for building more robust and more powerful computer vision solutions.


This chapter discusses the definition of a robot, the objectives of the research, the scope of the research, the literature review and the thesis outline. The literature review focuses on vision-based mobile robots and other mobile robots equipped with vision systems that have been researched earlier.

1.1 Robot Definition

There are many definitions of robots. It is difficult to suggest a single accurate meaning or definition for the word robot, as the definitions vary according to point of view: some view a robot in terms of reprogrammability, while others are more concerned with the manipulation of robot behaviors, as well as intelligence.

The word ‘robot’ actually derives from the Czech word robota, meaning forced or compulsory labor. A robot is a physical agent that can generate an intelligent connection between perception and action; the current notion of a robot includes programmability, mechanical capability and flexibility.

The British Robot Association (BRA) defines a robot as


"A programmable device with a minimum of four degrees of freedom designed to

both manipulate and transport parts, tools or specialized manufacturing implements

through variable programmed motion for the performance of the specific manufacturing

task”.

The Robotic Institute of America, on the other hand, defines a robot as:

"Reprogrammable multifunctional manipulator designed to move material, parts,

tools or specialized devices through variable programmed motion for the performance

of a variety of tasks.”

The Longman Dictionary defines a robot as a “machine that can move and do some of the work of a person, usually controlled by a computer.” Based on the definitions of a robot given by the two institutes, in conclusion, a robot must be an automatic machine and be able to deal with the changing information received from the environment.

According to the Webster dictionary:

"An automatic device that performs functions normally ascribed to humans or a

machine in the form of a human (Webster, 1993)."


Nowadays, many types of robots are used to simplify human tasks. The current state of the art in robotics can be divided into several categories: humanoid robots, wheeled robots, industrial robots, tracked robots and service robots.

1.2 Mobile Robot

Basically, robots can be classified into two categories: fixed robots and mobile robots. A fixed robot is mounted on a fixed surface, and materials are brought to the workspace near the robot. Fixed robots are normally used in factories, such as car factories, where materials like metal are brought to the robot for welding or stamping.

A mobile robot has the ability to move from one place to another to perform its tasks. Mobility is the robot's capability to move from one place to another in unstructured environments toward a desired target. Mobile robots can be categorized into wheeled, tracked or legged robots, and they are more versatile than fixed robots. A mobile robot can be used in dangerous areas where humans cannot go, such as polluted places, in space exploration, or for explosive disposal. A mobile robot can also be used at home as a pet (for example, the Sony AIBO) or can replace home equipment such as a vacuum cleaner, making human life easier and more entertaining. For people who like to play with something, mobile robots can also be used in robot competitions.


The function of a mobile robot is to move from one place to another autonomously, without human intervention. Building mobile robots able to deal autonomously with obstacles in rough terrain is a very complex task, because the nature of the terrain is not known in advance and may change over time. The role of the path planner is to determine a trajectory that reaches the destination while avoiding obstacles and without getting stuck. A true autonomous mobile off-road robot has to be able to evaluate its own ability to cross over the obstacles it may encounter.

1.3 Objectives of Research

The objective of this project is to develop a mobile robot equipped with a vision-based system, able to differentiate objects of various colors (color recognition) and to follow certain colored objects. Besides this, another objective of the project is to design and construct a motherboard consisting of a microcontroller, a motor driver and other electronic components such as infrared sensors and a relay. For the robot to function properly, that is, to be able to detect, approach and move away from the object while avoiding any obstacles, programming in the C language is required before loading the code into the microcontroller.


1.4 Scope of Works

In this project, there are three (3) major scopes of work: electronic design, mechanical design and software. The electronic design is limited to the microcontroller unit, a vision sensor such as the CMU Cam1, infrared sensors (obstacle detectors) and DC motors.

For the microcontroller, the author focuses only on the PIC microcontroller, the PIC18F452, as the main component, studying how it can be interfaced with the CMU Cam1 vision sensor as the ‘eyes’ of the robot. The program, designed in the C language, should be able to control the movement of the wheels and the servomotors, which control the motion of the camera. All these components have to be properly interfaced together so that the robot functions and implements the tasks as programmed, without error.

For the mechanical parts, the aims of this project are designing and developing the base of the robot, developing the shafts of the DC motors, and selecting the most appropriate material for the body of the robot. While constructing the mechanical part of the robot, it is also necessary to determine the best locations on the base for the electronic boards and other components; this is essential to ensure that the robot does not end up looking messy and crowded. For the actuators, this robot uses two geared DC motors to control the right and left tires. All outputs depend on the inputs the microcontroller receives.


1.5 Research Methodology

[Flow chart: start → literature review focusing on vision-based mobile robots → mechanical design → material selection (the design is re-evaluated if a material is unsuitable, unavailable or too expensive) → circuit design → build and construct the robot's body → programming → troubleshooting and improvements on the robot until it is working → end.]

Figure 1.1: Flow Chart for Project Methodology


The project began by finding concepts and ideas related to the title. Information was then gathered by analyzing the idea that the robot should have the ability to differentiate objects of various colors and subsequently follow, or move away from, a recognized colored object; the robot, depending on how it is programmed, should implement all these behaviors successfully. Next, research was done on previous robots with similar vision-based systems. Most of these theses were taken from the IEEE website provided by UTM.

The third step was research on the electrical parts and the mechanical design. A survey of prices, availability and choices was done on the components to be used. Circuit diagrams, data sheets and other information were gathered in search of the most appropriate selection of components. Research was then done on the mechanical design, that is, what the base and the other mechanical parts should look like.

Just after completing the design of the mechanical and electronic parts, the project proceeded to building and constructing the motherboard, followed by developing the software in C. Troubleshooting is always carried out as long as there is an error in final testing, until the final product is obtained.


1.6 Thesis Structure

Chapter 1 gives an overview of the project, including the definitions of a robot and a mobile robot. The objectives of the research, the scope of the research and the thesis structure are also presented in this chapter.

Chapter 2 focuses on the literature review of several projects researched earlier, as well as the problem statements explaining why this project is being carried out. The literature review helps a lot as a reference when facing any problems.

Chapter 3 presents the mobile robot's mechanical structure (robot structure). This chapter discusses how to build the robot and the selection of the materials used to build the robot base. The placement of the IR sensors and DC motors is important: if the sensors are arranged in the wrong place, the robot might not work properly, and it is essential to locate the DC motors appropriately since they influence the balance of the robot itself.

Chapter 4 presents the electronic design and the interface circuits used in this project (circuit design). The microcontroller (MCU) circuit, the relay circuit, and the circuits providing inputs to the microcontroller, such as the IR sensors and the CMUCam1, are discussed. It also covers the outputs of the microcontroller: the DC motors for the robot's movements and the FUTABA servomotor for the CMUCam1's movements on top of the robot.


Chapter 5 focuses on the software development for the mobile robot, starting with writing the program in the C language. The mikroC compiler and its development tools, used to write the code embedded in the microcontroller, are discussed.

Chapter 6 covers the implementation of the CMUcam1 as the vision system of the mobile robot, explaining the features and characteristics of the camera as well as how to get started using it in the right way.

Chapter 7 contains the results, findings and discussion of this project. It explains the outputs and all matters arising, covering the experimental results and analysis from implementing the project.

Chapter 8, the last chapter, states the recommendations for future work and concludes the whole project.


CHAPTER 2

LITERATURE REVIEW

A literature review is vital to the research because previous work provides guidelines for this project. In other words, it brings various ideas and methods to enrich the project. It also serves as a case study, helping this project come up with new ideas and a different design compared to previous projects. The literature review references also help develop the contents of this research. Below are a few projects reviewed from previous work.

This chapter focuses on the related fields and knowledge pertaining to the accomplishment of the thesis itself. Reading materials include reference books, papers, journal articles, websites, conference articles and any documentation concerning the related applications and research work.


2.1 "Super" Boe-Bot Robot

Figure 2.1: "Super" Boe-Bot Robot

As can be seen in Figure 2.1, the Super Boe-Bot robot has been in development for almost two years and is usually used as an excellent tool for learning how to utilize many of the options a BASIC Stamp microcontroller offers. It began as a plain Parallax Boe-Bot robot that was continually modified, with additions made to the original design from time to time to increase its versatility. The robot has the following features:

• Two Parallax Board of Education programming boards, one with a BS2sx Stamp and the second with a BS2 Stamp

• Parallax Audio Amplifier board

• Vacuum formed plastic Body

• Two low Bump Switches, mounted on the front of the Robot Body

• Parallax “PING” Ultrasonic Sensor, mounted on the front of the Robot Body


• Wireless Color TV Camera and Transmitter with sound

• Parallax “Gripper” attachment mounted on the rear of the Robot

• Rubber band rifle fired by a servo

• Laser pointer for aiming the rifle

• Parallax “Line Following” sensors

• Radio control

Basically, the robot has four modes of operation, each determined by pulses sent to the Main BOE program from the RC receiver. In the first mode, the robot is controlled using a model radio control unit; the RC receiver sends pulses to the Main BOE. The program running on this board receives pulses from two of the channels on the receiver: one for direction (forward or reverse) and one for turning left or right. The program takes the incoming pulses and determines the pulses to be sent to the two drive-wheel servos, reversing one wheel in relation to the other so the robot goes in the right direction.

The second mode allows complete control of the robot by radio control without the sensors; this can be used for pushing objects with the robot. The third mode allows the robot's controls to be reversed when picking up a can with the Gripper mounted on the rear of the robot, making it easier to drive the robot in reverse while viewing its travels over the TV link. This brings up the Gripper and TV camera mounting. The fourth mode is a “Line Following” mode: when the robot drives over a black line on the floor, it switches to Line Following mode, in which the robot's main program takes control from the RC unit and navigates along the line.


2.2 Vision-based Quadruped Walking Robot System

Figure 2.2: Quadruped-Walking Robot

The progress made so far in the design of legged robots has been mostly in the areas of leg coordination, gait control, stability, and the incorporation of various types of sensors. This progress has resulted in the demonstration of rudimentary robotic walking capabilities in various labs around the world. The more stable of these robots have multiple legs, four or more, and some can even climb stairs.

Nevertheless, what is missing in most of these robots is perception-based high-level control that would permit a robot to operate with a measure of autonomy. Equipping a robot with perception-based control is not merely a matter of adding yet another module to the robot; the high-level control must be tightly integrated with the low-level control needed for locomotion and stability.


So far in this project, model-based methods for recognizing a staircase from a 2D image of a 3D scene containing the staircase have been implemented. The staircase recognition is achieved by obtaining the pose of the camera coordinate frame that aligns the model edges with the image edges.

The method comprises matching, pose estimation and refinement procedures. The computational complexity of matching is ameliorated by grouping together edges with certain common geometric characteristics. The refinement process uses all matched features to tightly fit the model edges to the camera image edges. The resulting recognition is used to guide the robot to climb stairs.

2.3 Parallax Boe-Bot

Figure 2.3: Parallax Boe-Bot


Parallax, Inc. originally created the Boe-Bot robot as an educational kit for their Stamps In Class program. It is controlled by a BASIC Stamp microcontroller and uses a CMUcam2 as its vision system. The BASIC Stamp is programmed in PBASIC, and the robot may be programmed to follow a line, solve a maze, follow light, or communicate with another robot by following the instructions in the Robotics text.

This mobile robot is basically a rolling BASIC Stamp on a carrier board (the Board of Education). All I/O projects are built on the breadboard, with no need for soldering. The circuit board is mounted on an aluminum chassis that provides a sturdy platform for the electronics and the servo motors. Mounting holes and slots on the chassis may be used to add custom robotic equipment.

The tail wheel, however, is a drilled polyethylene ball held in place with a cotter pin. The drive wheels were machined to fit precisely on the servo splines and are held in place with small screws. Both the carrier board and the BASIC Stamp module may be removed and used as a platform for non-robotic experimentation.

2.4 CAMBOT

An aluminum mount was made to attach the CMUcam to the CAMBOT. The camera's viewing angle can be adjusted by loosening the side-mounted spacers. The mount is attached to the robot with 3/4 inch, 4-40 threaded spacers, and the weight of the batteries at the base prevents the robot from falling over.


Figure 2.4: CAMBOT

The power supply is controlled by a switch mounted near the back end of the robot. Both the Boe-Bot and the CMUCam require at least 5.5 volts for their low-dropout regulators to operate; the use of 5 cells at 1.2 volts supplies a total of 6 volts.

Connecting the CMUCam through opto-couplers allows the BS2 to turn the camera's power on and off while preventing current from flowing into the camera's input when the unit is off. Isolation in this manner prevents damage to the SX28 microcontroller, with the added benefit of saving power when the camera is not needed.

The CMUcam's interface circuitry is fitted on a piece of Radio Shack (#276-150) perf board. Visible on the left of the board are two 740L6000 high-speed opto-couplers used for serial communications. Bypass capacitors are mounted directly to the machine-pin socket pins, close to the opto-couplers; the bypass capacitors require connection with short leads to the 740L6000 to remain stable.


2.5 Vision based Autonomous Docking Robot

Figure 2.5: Vision based Autonomous Docking Robot

This docking robot was initially designed to perform high-precision docking maneuvers. It is suitable for applications such as industrial forklifts, mobile manufacturing assembly robots and rescue robots. It uses a CMUcam2 as its vision sensor. For the pan-and-tilt mechanism, the robot is equipped with two servomotors. For the actuators, two gearhead motors with 40 rpm and 5 kg·cm of torque were used. An Atmel AVR ATmega128 microcontroller is used to process all information from the vision sensor and drive the actuators.

2.6 Summary

The literature reviewed in this chapter depicts various robot vision behaviors, and most of the robots exist as individual designs. For this project, however, priority is given to the CMUcam1 vision sensor as the main medium of communication between the robot and its environment.


CHAPTER 3

ROBOT DESIGN

Inherently, robotics is an interdisciplinary field that ranges in scope from the design of mechanical and electrical components to sensor technology, computer systems and artificial intelligence. The mechanical components include the mechanical frame, motors and wheels, while the electrical components consist of the microcontroller and the sensing system. The sensing system allows the robot to interact directly with its environment.

In this design, the components can be classified into three (3) categories: input, process and output. Inputs come from the programming. As for the process, the microcontroller works as the brain of the robot, deciding which step or task should be done and how the robot should behave.


3.1 Robot Structure

The structure of the robot can be classified into two parts:

1) Mechanical Design

2) Electronic Design

3.2 Mechanical Design

The mechanical structure should be designed as accurately as possible to avoid imbalance when the robot starts to move. For that purpose, all possibilities should be taken into account while constructing the mechanical structure, and the best materials should be selected wisely; the advantages and disadvantages of any decision taken will influence the performance of the robot later on.

The mechanical structure of the mobile robot consists of the chassis and the driving mechanism, which comprises two DC motors and wheels. The structure is constructed from a simple material, Perspex, which is easy to fabricate and work with. Consideration is also given to the weight of the materials so that the complete mobile robot will not be too heavy; weight is an important factor here, as the robot needs to move smoothly. The base of this vision-based mobile robot must be made of a light material so the robot can move faster, and the shape of the robot is circular to make it easy to turn at any angle.


Basically, there are two main layers in this vision-based mobile robot: the base layer and the upper layer. The base layer holds, and is the platform for, the relay circuit and the sensor circuit. Meanwhile, the microcontroller (MCU) circuit is located on the upper layer, since it is near the camera position, with both the relay and sensor circuits on the lower base layer (refer to Figure 3.1 and Figure 3.2). PCB stands are used to connect the two layers.

Figure 3.1: Mechanical Structure-Top View


Figure 3.2: Mechanical Structure-Side View


3.2.1 Motor Positioning and Installation


Figure 3.3: DC Motor Positioning

Two DC motors are used in the Vision Based Autonomous Robot. With two DC motors, the robot is able to turn right and left freely; with only one DC motor, the robot would only be able to move forward and backward. The DC motors are located near the center of the robot base (refer to Figure 3.3). This allows the robot to twist its body at a corner, so it can turn smoothly at any corner.

Generally, DC motors are used in drive systems for two reasons. The first involves the relationship between the speed and the torque of the DC motor. The torque of the DC motor can change over a wide operating range: as the load applied to the motor increases, the torque at the motor also increases. Nevertheless, this increased torque tends to slow the motor down, and additional current supplied to the motor is needed to overcome the load torque and keep the speed of the motor constant. The second reason is that DC motors can easily be interfaced with electronic components.


The DC motor can be controlled by microprocessors and by other electronically controlled 5-volt DC logic. There are many motors that can be used as actuators, such as DC motors, servomotors, stepper motors and many more; however, this robot is constructed using two DC motors as its actuators. The DC motor was chosen because it is small and light and, of course, is available on the market at a reasonable price compared to the others. Furthermore, the programming needed to operate a DC motor is quite simple compared with a stepper motor. The DC motor must be attached to a gearbox to enable it to carry more load; if it is not geared, the robot might not move at all. The supply voltage for each DC motor is 5 volts.

3.3 Base

As depicted in Figure 3.3, the base is designed with a round shape, and all electrical and mechanical components are placed on it. This shape was chosen over others for balance and neatness. Based on observations of the previous project, the new version is better arranged, easier to maintain and adjustable.

This has been one of the aims of the project: to redesign the previous robot with better performance as well as a better outlook. The base is equipped with four (4) pairs of infrared sensors for obstacle avoidance purposes.


On the base itself, the CMU Cam1 vision sensor is attached at the top position. This is the robot's “eye”, since all detected images are sent to the microcontroller board located on the base. The gripper is directly connected to the bottom of the base, positioned right in front of the CMU Cam1 vision sensor.

3.4 Servo Motor

Figure 3.4: FPS3003 Standard Servo

A servo is a motor attached to a position-feedback device. Generally, a circuit allows the motor to be commanded to go to a specified position. In this project, Futaba servos such as the FP-S3003 are used; in fact, they are economical. The FP-S3003 servo is standard equipment with Futaba's 2VR and 2CR two-channel systems. The S3003 uses advanced design and manufacturing techniques to produce virtually identical performance to the popular S148.


Below is a list of its specifications:

• Dimensions: 0.77" x 1.59" x 1.41"

• Weight: 1.5 oz.

• Torque: 42oz./in.

• Transit: 0.22 sec./60 degrees

This servomotor is used to control the movement of the arm, the end effector, and the position of the CMU Cam1 vision sensor; in short, three of them are attached to the robot. Since the targeted object is a small ball, the torque produced is more than enough to grasp and hold the object. The transit rate stated above shows that the rotation is fast, since the servo covers 360° in only 1.32 s (6 × 0.22 s). This obviously means the robot is able to move faster and complete the task earlier.

3.5 DC Motor

Figure 3.5: DC Geared Motor


Motors are inductive devices, and they draw much more current at startup than when running at a steady speed. Generally, a DC motor has two terminals. If the positive and negative leads from a power source (battery, power supply, etc.) are connected to the terminals of the motor, the motor spins in one direction; if the connections are swapped, the motor spins in the opposite direction.

A few things should be known about the motor to be used:

• What voltage it is designed to work at

• How much current it draws when running (unloaded)

• How much current it draws at stall

The “stall current” is the current the motor draws when the output shaft is stopped (stalled). Stalling a motor is very hard on the motor; it can burn open the motor windings and ruin the motor. The only way to test the stall current is to grab the output shaft by hand while measuring the current drawn; as the motor approaches stall, the current climbs. Figure 3.5 above shows several DC geared motors with different gear ratios. For this project, SPG50-XX DC motors with 60 rpm, a weight of 60 g, and a maximum torque of about 2.35 N·m are used.

3.5.1 Pulse Width Modulation

Pulse width modulation (PWM) is a powerful technique for controlling analog

circuits with a microprocessor's digital outputs. PWM is employed in a wide variety of

applications, ranging from measurement and communications to power control and

conversion.


In a nutshell, PWM is a way of digitally encoding analog signal levels. Through

the use of high-resolution counters, the duty cycle of a square wave is modulated to

encode a specific analog signal level.

The PWM signal is still digital because, at any given instant of time, the full DC

supply is either fully on or fully off. The voltage or current source is supplied to the

analog load by means of a repeating series of on and off pulses. The on-time is the time

during which the DC supply is applied to the load, and the off-time is the period during

which that supply is switched off. Given a sufficient bandwidth, any analog value can be

encoded with PWM.

Figure 3.6 shows three different PWM signals. The first is a PWM output at a 10% duty cycle; that is, the signal is on for 10% of the period and off for the other 90%. The same figure also shows PWM outputs at 50% and 90% duty cycles, respectively. These three PWM outputs encode three different analog signal values, at 10%, 50%, and 90% of full strength. If, for example, the supply is 9 V and the duty cycle is 10%, a 0.9 V analog signal results.

Figure 3.6: PWM signals of varying duty cycles
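As a concrete illustration, the duty cycle on the PIC18F452 can be set through mikroC's PWM library, which drives the chip's CCP module. The following is a minimal sketch only; the 5 kHz frequency and the pin usage are illustrative assumptions, not values prescribed by this thesis:

// Illustrative mikroC sketch: PWM on the PIC18F452 CCP1 pin (RC2).
void main() {
  TRISC = 0;              // make the CCP1 pin (RC2) an output
  Pwm_Init(5000);         // initialize the CCP1 module for 5 kHz PWM
  Pwm_Start();            // start the PWM output
  Pwm_Change_Duty(26);    // duty is scaled 0..255, so 26 is roughly 10%
  while (1) ;             // the load now sees about 10% of the supply
}

Following the arithmetic above, changing the argument of Pwm_Change_Duty from 26 to 128 or 230 would move the encoded level from about 10% to 50% or 90% of full strength.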


3.6 Main Electronic Components

Figure 3.7: Main Components

3.6.1 CMU Cam1 Vision Sensor

The CMUcam, developed at Carnegie Mellon University, provides a real-time object-tracking vision system that is easy to interface with microcontrollers and personal computers. The CMUcam performs onboard real-time image processing for object and color recognition. It uses an SX28 microcontroller interfaced to an OmniVision OV6620 CMOS camera chip; the SX28 microcontroller does much of the image processing. Communication with the camera is done via a standard RS-232 or TTL serial port.


3.6.1.1 CMUCam1’s Features

Features of the CMUcam1 include:

• Tracks user-defined color objects at 17 frames per second

• Finds the center of the object

• Gathers mean color and variance data

• 80 x 143 pixel resolution

• Serial communication at 115,200 / 38,400 / 19,200 / 9600 baud

• Demo mode: automatically locks onto and drives a servomotor to track an object

Figure 3.8: Rear View of CMU Cam1

The CMU Cam1 vision sensor has several functions, such as finding the centroid of a blob, gathering mean color and variance data, controlling one servo (it has one digital I/O), and dumping a raw image or an arbitrary image window.


3.6.2 Relay

Figure 3.9: Relay and Its Symbol

A relay is an electrically operated switch. Current flowing through the coil of the relay creates a magnetic field, which attracts a lever and changes the switch contacts. The coil current can be on or off, so relays have two switch positions, and they are double-throw (changeover) switches. Relays allow one circuit to switch a second circuit, which can be completely separate from the first.

Like relays, transistors can be used as electrically operated switches. For switching small DC currents (< 1 A) at low voltage they are usually a better choice than a relay. However, transistors cannot switch AC or high voltages (such as mains electricity), and they are not usually a good choice for switching large currents (> 5 A).


In these cases a relay is needed, but note that a low-power transistor may still be needed to switch the current for the relay's coil. The main advantages and disadvantages of relays are listed below, followed by a sketch of how the robot's microcontroller might drive a relay.

Advantages of relays:

• Relays can switch AC and DC; transistors can only switch DC.

• Relays can switch high voltages; transistors cannot.

• Relays are a better choice for switching large currents (> 5 A).

• Relays can switch many contacts at once.

Disadvantages of relays:

• Relays are bulkier than transistors for switching small currents.

• Relays cannot switch rapidly (except reed relays); transistors can switch many times per second.

• Relays use more power due to the current flowing through their coil.

• Relays require more current than many ICs can provide, so a low-power transistor may be needed to switch the current for the relay's coil.
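To make the switching concrete, the sketch below shows how a PIC output pin could energize two motor-direction relay coils through low-power transistor drivers, as noted above. It is an illustration only: the use of PORTB and the pin assignments are assumptions, not taken from the thesis schematics.

// Illustrative mikroC sketch: two relay coils driven from PORTB via
// low-power transistors, reversing a DC motor. Pin choices are assumed.
void main() {
  TRISB = 0x00;          // PORTB pins as outputs
  PORTB = 0x00;          // both relays de-energized: motor stopped
  while (1) {
    PORTB.F0 = 1;        // energize relay 1: motor runs forward
    PORTB.F1 = 0;
    Delay_ms(2000);
    PORTB.F0 = 0;        // swap the contacts: motor runs in reverse
    PORTB.F1 = 1;
    Delay_ms(2000);
  }
}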


3.6.3 Microcontroller

This project uses the PIC18F452, a PIC microcontroller with 16-bit-wide instructions, as its brain to control the system, including collecting data from the sensors, receiving data from other nodes, sending data to other nodes, and combining and extracting data. The PIC18F452 is a low-power Microcontroller Unit (MCU) with 32 KB of code memory and 1536 bytes of RAM. It supports all the basic microcontroller communication protocols, such as 3-wire SPI (supporting all four SPI modes), I2C Master and Slave modes, an addressable USART module (supporting RS-485 and RS-232) and a Parallel Slave Port (PSP) module. The PIC18F452 has a wide operating voltage range (2.0 V to 5.5 V) and low power consumption (typically 25 uA at 3 V, 32 kHz).

Figure 3.10: PIC 18F452 (PDIP)


Figure 3.11: PIC 18F452 (TQFP)

Figure 3.10 and Figure 3.11 show the PIC18F452 pin diagrams. This project uses only the Plastic Dual In-line Package (PDIP). The two packages are functionally the same and differ only in pin positions; for example, PGM is assigned to pin 40 on the PDIP package but is placed at pin 1 on the TQFP package. PDIP is a through-hole package type; it needs an IC socket to be placed on the Printed Circuit Board (PCB). TQFP is a surface-mount package type; K32-15i liquid flux is needed to make sure the pins are not shorted to one another during soldering. The right way to solder this type of IC package (TQFP) can be found on the internet.


3.6.4 Power

There are many power sources offered on the market nowadays, including AC-DC adapters, transformers, rechargeable batteries, lead-acid batteries, LiPo batteries with chargers and cell batteries (refer to Figure 3.12). In deciding which power source is most appropriate, a few specifications should be examined, such as per-cell voltage, amp-hour capacity, weight and reusability (rechargeability). For this project, a lead-acid battery was chosen as the power supply since it is cheap and easy to design around.

Figure 3.12: Variety of Power Supply

The lead-acid battery used as the input power supply is +12 V. A three-terminal regulator is used to produce a stable +5 V; this circuit generates the 5 V needed by the PIC18F452 microcontroller from a 9 V battery. The positive terminal of the battery is connected to “+9V” and the negative terminal to “Ground”.

However, the relay circuit needs a stable and constant 6-volt DC supply because it is very sensitive to voltage changes, so regulators are used to convert the 9 volts into a regulated 6 volts. There are two common families of fixed voltage regulators that can be used: the 78xx series for positive voltages and the 79xx series for negative voltages.


3.6.5 Wheel

Initially, the plan was simply to recycle parts already provided, such as the pneumatic wheels. However, since the aim was to rebuild the previous robot's outlook to be smaller, neater and nicer, the existing type of wheel did not look matching or suitable, which led to other choices more applicable to the new design. Many types of wheels were available, such as nylon wheels, Tamiya tires, omni wheels and trans wheels; Figure 3.13 shows a few examples. The wheels were chosen depending on their alignment as well as their performance.

Figure 3.13: Variety of Wheels

3.7 Conclusion

As the conclusion, every single mechanical part and electronic part was

successfully combined together. Figure 3.14-3.16 represent the view of the complete

robot.


Figure 3.14: Front View

Figure 3.15: Side View

Figure 3.16: Top View


CHAPTER 4

CIRCUIT DESIGN

4.1 Overview

This chapter discusses all the circuits used in this project: the microcontroller circuit, the IR sensor circuit and the relay circuit.

4.2 IR Sensor Circuit

Figure 4.1: IR Sensor Circuit


Figure 4.1 shows the sensor circuit for one IR sensor. In this project, four pairs of sensors and an LM324 are used for the obstacle detection of this robot. Figure 4.2 shows the real circuit for the IR sensors used in this project. The operation of the circuit begins with the IR detector, which gives an analog output. The analog output then goes through a comparator, which converts it into a digital output. The LED turns on for a digital ‘1’ and off for a digital ‘0’.

Figure 4.2: Real View of IR Sensor Circuit
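Since the comparator outputs are plain digital levels, the microcontroller can simply poll them. The sketch below is an illustration under stated assumptions: it supposes the four comparator outputs are wired to RD0-RD3, a pin mapping that is not specified in the circuit description above.

// Illustrative mikroC sketch: polling four LM324 comparator outputs,
// assumed to be wired to RD0-RD3 (the pin mapping is an assumption).
void main() {
  unsigned short obstacles;
  TRISD = 0x0F;                  // RD0-RD3 as inputs from the comparators
  while (1) {
    obstacles = PORTD & 0x0F;    // a '1' bit marks a detected obstacle
    if (obstacles != 0) {
      // stop the DC motors or steer away from the obstacle here
    }
  }
}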


4.3 Main Controller Circuit

Figure 4.3: Main Controller Circuit

The main controller circuit shown in Figure 4.3 is used to control the overall operation of this Vision Based Autonomous Robot. The requirements for this circuit are:

- PIC microcontroller (18F452)
- Resistors: 100 ohm, 220 ohm & 10 kohm
- Connector, 4-way
- Connector, 2-way
- Connector, 10-way
- Voltage regulators (6 V & 5 V)
- Capacitors: 0.1 uF, 33 pF & 10 uF
- Diode (1N4001)
- LED (red)
- Crystal (10 MHz)
- Push-button switch


The microcontroller processes all data from the sensors and from voice recognition (from the PC). The processed data is then sent to the motor driver to carry out the task. The microcontroller must be programmed so that the desired task occurs. Figure 4.4 below shows the microcontroller circuit designed for this project.

[Figure labels: 6 V regulator, PIC18F452, MAX232, 5 V regulator]

Figure 4.4: Microcontroller Unit (MCU) Circuit

4.4 Connecting the Microcontroller to the PC Circuit

This project needs a circuit to interface data between the computer and the microcontroller. The circuit is shown in Figure 4.5 below:


Figure 4.5: RS232 HW Connection

The PIC18F452 requires either TTL or CMOS logic; therefore, instead of connecting directly to the RS232 port, a MAX232 is used to transform the RS232 levels into 0 and 5 volts, since RS232 has the following electrical specifications:

Logic 0: between +3 V and +25 V

Logic 1: between -3 V and -25 V

The region between +3 V and -3 V is undefined.

The MAX232 has two receivers and two transmitters in the same package, which proves necessary in this system. RS232 is the best-known serial port standard used for transmitting data in communication and interfacing. Even though the serial port is harder to program than the parallel port, it is the most effective method, since data transmission requires fewer wires, which yields lower cost. RS232 is the communication line that enables data transmission using only three wire links, providing ‘transmit’, ‘receive’ and common ground.


For this project, LM7805 chips are used to convert 9 volts into 5 volts DC, and three capacitors are used to regulate the outputs. The LM7805 can supply up to 1 A of current and needs a heat sink if the current is higher than 0.5 A. For this robot, three power supply circuits are used to support three circuits: the microcontroller circuit, the sensor circuit and the relay circuit. Figure 4.6 shows the voltage regulator circuit used in this project.

Figure 4.6: Voltage Regulator

Figures 4.7 and 4.8 below show the real view of the relay circuit and its schematic representation, respectively, as used in this project.


Figure 4.7: Real View of Relay Circuit

Figure 4.8: Schematic for Relay Circuit


CHAPTER 5

SOFTWARE DEVELOPMENT

5.1 Overview

Two pieces of software are used in this project: the MikroC compiler, which compiles the C code into hex code, and WinPic800, which loads the program into the microcontroller.

5.2 Mikroelektronika (MikroC)

MikroC is a powerful, feature-rich development tool for PIC microcontrollers developed by mikroElektronika. It is designed to provide the programmer with the easiest possible solution for developing applications for embedded systems, without compromising performance or control.


PIC and C fit together well. PIC is the most popular 8-bit chip in the world, used in a wide variety of applications, and C, prized for its efficiency, is the natural choice for developing embedded systems. MikroC provides a successful match, featuring a highly advanced IDE, an ANSI-compliant compiler, a broad set of hardware libraries, comprehensive documentation, and plenty of ready-to-run examples.

MikroC allows the user to quickly develop and deploy complex applications. The C source code can be written using the built-in Code Editor (Code and Parameter Assistants, Syntax Highlighting, Auto Correct, Code Templates, etc.). The mikroC libraries are also included to dramatically speed up development: data acquisition, memory, displays, conversions, communication and many more. Practically all PIC series, such as the P12, P16 and P18 chips, are supported.

With MikroC, it is possible to monitor the program structure, variables and functions in the Code Explorer. It also generates commented, human-readable assembly and standard HEX compatible with all programmers. Inspecting program flow and debugging executable logic is no longer a problem with the integrated Debugger. Besides, users can get detailed reports and graphs, such as RAM and ROM maps, code statistics, assembly listings, calling trees, etc. For beginners in C programming, MikroC provides plenty of examples to expand, develop and use as building bricks in their projects; users can copy them entirely if they see fit.

In this project, the C language is used to program this Vision Based Autonomous Robot, so the mikroElektronika (MikroC) software is needed to compile the C language into machine code before it is programmed into the microcontroller. This compiler provides a number of useful libraries for the user, such as ADC and USART.


By using these libraries, the user does not need to configure the microcontroller's registers manually. If they want to use, say, the ADC module of the microcontroller, the compiler does it for the user; what the user has to know is how to use the library, as in the sketch following Figure 5.1.

Figure 5.1: Mikroelektronika (MikroC)
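As a small illustration of how little register work the libraries save, the sketch below reads an analog input with mikroC's Adc_Read routine. The channel number and the use of the reading are assumptions for illustration only; they are not taken from this thesis:

// Illustrative mikroC sketch: one library call replaces manual
// configuration of the ADC registers. Channel AN0 is assumed.
void main() {
  unsigned int reading;
  TRISA = 0xFF;               // port A pins as inputs
  while (1) {
    reading = Adc_Read(0);    // 10-bit result (0..1023) from channel AN0
    // use 'reading' here, e.g. as a light or distance level
  }
}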

5.2.1 USART Library

USART is one of the communication protocols available in the PIC microcontroller that can be used to communicate between the microcontroller and other devices such as a PC. The MikroC compiler provides a USART library for the user. Below are the steps to follow when using the USART library from MikroC.

i. Initialize the USART and set the baud rate using Usart_Init(baud_rate).

ii. To receive data, use the functions Usart_Data_Ready and Usart_Read. Usart_Data_Ready is used to make sure the data has been received completely, while Usart_Read is used to read the data into the desired buffer.

iii. To send data, use the function Usart_Write(data).

Figure 5.2: Example of USART library source code.

Figure 5.2 shows an example of USART library source code. The code demonstrates how to use the USART library routines: upon receiving data via RS232, the PIC MCU immediately sends it back to the sender. The example first waits for the data to be fully received, then buffers the data, and finally sends the data back through the USART.
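The following is a reconstruction of the echo program just described, using only the three library calls from the steps above (Figure 5.2 itself is an image in the original thesis, so this sketch is an approximation, not the author's exact listing):

// Minimal mikroC sketch: echo every byte received over the USART.
void main() {
  unsigned short rx;             // buffer for the received byte
  Usart_Init(9600);              // initialize the USART at 9600 baud
  while (1) {
    if (Usart_Data_Ready()) {    // a complete byte has been received
      rx = Usart_Read();         // buffer the byte
      Usart_Write(rx);           // send the same byte back to the sender
    }
  }
}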


5.3 WinPic800

WinPic800 is the software used in this project to load the hex file into the microcontroller. It is used together with a programmer called the JDM programmer, or alternatively with a USB programmer. This section discusses how to use WinPic800.

i. First, run the WinPic800.exe file; the interface shown in Figure 5.3 will appear.

Figure 5.3: WinPic 800

ii. On first use, the user has to configure the COM port by clicking Settings → Hardware; the window shown in Figure 5.4 will then appear. Choose the JDM programmer and then choose the correct COM port.


Figure 5.4: WinPic 800 (Hardware Settings)

If the user is using the USB programmer, the interface shown in Figure 5.5 should appear instead. Choose the correct COM port and then click ‘Apply edits’.

Figure 5.5: WinPic 800 (GTP-USB_Lite)

iii. Detect the type of PIC being used by clicking the ‘detect device’ icon shown in Figure 5.6. WinPic800 will automatically change the setting mode interface (refer to Figure 5.7) according to the type of PIC being used.


Figure 5.6: Detect Device Icon

iv. Open the hex file to be loaded into the microcontroller by clicking the open icon.

v. After that, click the settings button to set the configuration before the code is burned into the microcontroller; the interface in Figure 5.7 will appear. Unmark BOREN to disable the brown-out reset in the microcontroller.

Figure 5.7: WinPic 800 Setting Mode Interface

vi. Finally, click the ‘program all’ icon to load the code into the microcontroller.

5.4 Programming Language

Programming languages are used to facilitate communication about the task of organizing and manipulating information, and to express algorithms precisely.


In this project, the C programming language was chosen to code the major tasks, including interpreting the data from the CMUcam1 and controlling the actuators to perform the desired behaviour.

The C programming language was chosen because:

• It is a general-purpose programming language that produces efficient code, supports structured programming, and has a rich set of operators.

• It provides a convenient and effective programming solution for a wide variety of software tasks.

• Code can be written faster than assembly code, which reduces cost, and it is easier to understand.

5.5 Initialization of CMUcam1

The CMUcam1 is connected to the microcontroller through a TTL-logic-level serial connection. The serial communication parameters are 9600 baud (the same baud rate as for the servomotor), 8 data bits, 1 stop bit, no parity and no flow control (no Xon/Xoff or hardware handshaking). The CMUcam1 is initialized by a sequence of commands sent to the camera, as follows:

// Reset the camera in software
Soft_Uart_Write("RS\r");
while (ser_rcv() != ':') ;      // wait for the ':' prompt
Delay_ms(500);

// Turn auto-gain and auto white balance on (RGB mode):
// register 18 = colour mode / white balance, 19 = auto-exposure / gain
Soft_Uart_Write("CR 18 44 19 33\r");
while (ser_rcv() != ':') ;
Delay_ms(500);

// Freeze camera gain and white balance before tracking
Soft_Uart_Write("CR 18 40 19 32\r");
while (ser_rcv() != ':') ;
Delay_ms(3000);

// Poll mode on: one data packet per image-processing command
Soft_Uart_Write("PM 1\r");
while (ser_rcv() != ':') ;
Delay_ms(500);

// Raw serial transfer mode
Soft_Uart_Write("RM 3\r");
while (ser_rcv() != ':') ;
Delay_ms(2000);

The camera is reset in software, and auto-gain and auto white balance are turned on. Auto-gain is an internal control that adjusts the brightness level of the image to best suit the environment. It attempts to normalize the lights and darks in the image so that they approximate the overall brightness of a hand-adjusted image. This process iterates over many frames as the camera automatically adjusts its brightness levels. If, for example, a light is turned on and the environment gets brighter, the camera will try to adjust the brightness to dim the overall image.

White balance on the other hand attempts to correct the camera’s color gains.

The ambient light in your image may not be pure white. In this case, the camera will see

colors differently. The camera begins with an initial guess of how much gain to give

each color channel. If active, white balance will adjust these gains on a frame-by-frame

basis so that the average color in the image approaches a gray color. Empirically, this

“gray world” method has been found to work relatively well. The problem with gray

world white balance is that if a solid color fills the camera’s view, the white balance will

slowly set the gains so that the color appears to be gray and not its true color. Then when

the solid color is removed, the image will have undesirable color gains until it re-

establishes its gray average.


After a few seconds, auto-gain and white balance are turned off. When tracking colors, auto-gain and white balance may be allowed to run for a short period so that the camera can set its brightness and color gains as it sees fit. They are then turned off to stop the camera from unnecessarily changing its settings due to an object being held close to the lens, shadows, etc. If auto-gain and white balance were not disabled and the camera changed its settings for the RGB values, the newly measured values might fall outside the originally selected color-tracking thresholds. Poll mode is set to 1, which makes the camera return only one data packet when an image-processing function is called. Finally, the raw serial transfer mode is set to 3.

All commands are sent using visible ASCII characters (123 is sent as the 3 bytes "123"). Upon successful transmission of a command, the ACK string is returned. If there was a problem in the syntax of the transmission, or if a detectable transfer error occurred, a NCK string is returned. After either an ACK or a NCK, a \r is returned. When a prompt (a \r followed by a ':') is returned, it indicates that the camera is in the idle state, waiting for another command. White spaces do matter and are used to separate argument parameters.

The \r is used to end each line and activate each command; sent on its own, it sets the camera board into an idle state. Like all other commands, it is answered with the acknowledgment string "ACK" on success or the not-acknowledge string "NCK" on failure. After acknowledging the idle command, the camera board waits for further commands, which is indicated by the ':' prompt. While in this idle state, a \r by itself will return an "ACK" followed by a \r and the ':' prompt.
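As a concrete illustration of this handshake, the following is a minimal send-and-check routine built on the ser_putstring()/ser_rcv() helpers of Appendix A; the NCK handling shown is illustrative only.

// Sketch of the command/ACK handshake described above.
// Returns 1 if the camera answered "ACK", 0 if it answered "NCK".
char send_cmd(const char *cmd) {
  char ok;
  ser_putstring(cmd);            // e.g. send_cmd("PM 1\r");
  ok = (ser_rcv() == 'A');       // 'A'... -> ACK, 'N'... -> NCK
  while (ser_rcv() != ':') ;     // swallow the reply up to the prompt
  return ok;
}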


CHAPTER 6

VISION SYSTEM USING CMUCam1

6.1 Introduction

This vision system is designed to provide high-level information extracted from a

camera image to an external processor that may, for example, control a mobile robot. In

a typical scenario, an external processor first configures the vision system’s streaming

data mode, for instance specifying the tracking mode for a particular bounded set of

RGB values. The vision system then processes the data in real time and outputs high-

level information to the external consumer. The CMUcam1 (from Seattle Robotics) is an integrated digital CMOS camera with an SX-28 microcontroller.


6.2 Firmware

When a command is sent to the camera, the camera processes the command and then returns a data packet. The data packet returned serially by the camera can be read using a terminal emulator. Each number in the data packet has its own meaning, depending on the type of command sent.

Figure 6.1: Data Packet Returned by the CMUcam1
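For example, the colour-tracking commands (TC and TW) return a middle-mass "M" packet of the form "M mx my x1 y1 x2 y2 pixels confidence". The sketch below parses such a packet when the camera is left in its default ASCII mode (this project itself switches to raw mode with RM 3, where each field arrives as a single byte); the function name is illustrative.

// Sketch: parse an ASCII "M" packet such as "M 45 60 20 40 70 80 35 120".
#include <stdio.h>

int parse_m_packet(const char *line,
                   int *mx, int *my, int *x1, int *y1,
                   int *x2, int *y2, int *pixels, int *conf) {
  /* sscanf returns the number of fields converted; 8 means success */
  return sscanf(line, "M %d %d %d %d %d %d %d %d",
                mx, my, x1, y1, x2, y2, pixels, conf) == 8;
}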

6.3 The Vision Sensor Target

The calibration parameters for the Vision Sensor Target are included with the distribution of the LabVIEW CMUcam1 tools. The procedure below is for use when there are distractions to the Vision Sensor in the area where it will be used.

6.3.1 The Test Area


The area where the configuration is to take place should mimic the setting where

the sensor will actually be used. If a non-self illuminated target is to be used then the

lighting should be of the same type, brightness, and position relative to the target as

where the sensor will be used. The hardware (sensor and target) placement should be as close as possible to the mean position where the sensor will be used, while still allowing movement of the sensor relative to the target to test for adverse lighting effects such as shadows and glare.

Place the powered FRC Vision Sensor Target somewhere in front of the sensor. Supply power to the sensor and turn the power switch on. Connect the sensor to a COM port of a computer with LabVIEW 8 and the CMUCam1 demo.llb installed. Start LabVIEW and load the CMUCam1 Graphical User Interface (GUI). Using the Serial Port drop-down menu below the CMUCam1 title at the upper left of the screen, select the port where the sensor is connected. Start the CMUCam1 GUI by selecting the white run arrow on the toolbar above the CMUCam1 title at the top left of the screen. To load the parameters for the 2006 Vision Sensor target, select the Load Config From File button to the right of the Track Frame display area. In the pop-up window select the file named ‘2006 Target.cfg’, select Load Config Params, then OK in the lower right corner of the pop-up window.

Once the pop-up window closes select the Upload Config to Camera button to

the lower right of the Tracking Frame display image. A pop-up window will appear

verifying the parameters have been uploaded to the sensor. Select OK to close this

window. To allow the sensor to fully realize the effects of the change in parameters

select the Start Grabbing Binary Pixel Maps button above the Tracking Frame image

display. Let it run for five seconds then select the Stop button above the Tracking Frame

image display. Acquire an image by selecting the Grab Frame button above the Frame

Grab image display.


Drag the cursor over the target portion of the Frame Grab image slowly and

select (click on) the image, highlighting the target color as it now appears to the sensor.

Select the Load Config from File button to the right of the Track Frame display area. In

the pop-up window select the file named 2006 Target.cfg then select Load Config

Params then OK in the lower right corner of the pop-up window. As OK is selected the

target color in the Frame Grab display image becomes highlighted with cyan color. This

is a tool to assist in the calibration process.

In addition, the odd coloring of the background is due to the YCrCb color space being used and the image being darkened to filter out background lighting. Some fragments of local lighting, light reflections, and so on may also be visible in this display image; these may be ignored since they are not the color of the target.

There are several criteria that we must consider:

1) If there are no areas of highlighting other than the target in the image then there

is no need to adjust the parameters/settings. If there are only the target and a few

stray dots or areas of highlighting then attempt to determine what is causing the

dots. The most common cause of stray color dots is fluorescent lighting.

2) If the source can be found then determine if it is necessary in the area you will be

using the sensor. If not then remove the cause and try again. If the cause is

necessary in the area you will be using the sensor then continue with this

procedure.


3) If there are only the target and a few stray dots or areas of highlighting then

perform the next three steps up to 3 times.

1. Reduce the Red max value by 1.

2. Select the Start Grabbing Binary Pixel Maps button above the Tracking

Frame image display.

3. Inspect the streaming image in the Track Frame display: the target color should be highlighted, with no stray dots highlighted.

4) If there are still the target and a few stray dots or areas of highlighting then

perform the next three steps up to two times.

1. Stop the Binary Pixel Map stream from the sensor using the Stop button

above the Tracking Frame image display.

2. Increase the Noise Filter setting below the Track Frame image display by

1.

3. Select the Start Grabbing Binary Pixel Maps button above the Tracking

Frame image display.

Inspect the streaming Binary Pixel Map image in the Tracking Frame display: the target color should be highlighted, with no stray dots highlighted.

5) If there are still the target and a few stray dots or areas of highlighting then return

the Noise Filter setting to 1. Perform the next five steps repeatedly until either

the stray dots are gone or the setting changes do not appear to improve the

image.


1. Stop the Binary Pixel Map stream from the sensor using the Stop button

above the Tracking Frame image display.

2. Decrease the Saturation Control setting by 2.

3. Select the Upload Config to Camera button to the lower right of the

Tracking Frame display image.

4. A pop-up window will appear verifying that the parameters have been

uploaded to the sensor. Select OK to close this window.

5. Select the Start Grabbing Binary Pixel Maps button above the Tracking

Frame image display.

6) If there are no stray areas or dots of target color then the calibration is complete.

Save the image and the configuration parameters.

6.4 Calibrating The CMUCam1 Vision Sensor In RGB Color Space

The process of calibrating the CMUCam1 vision sensor to a specific target is a multi-phase process. The first phase of the process is the target selection itself.

6.4.1 The Target

The target should be of a color and intensity that make it stand out boldly in the environment where the sensor is to be used. Because of the effect lighting has on color as seen through the sensor, a self-illuminated target is the more reliable choice. Be aware, however, that a self-illuminated target can be an issue if it is too bright.


This section assumes a target color other than white. The target should be of

sufficient size to be “tracked” by the vision sensor to a distance greater than the distance

required for the normal operation of the sensor. Since the sensor “tracks” to the center of

the color mass of the target, the maximum size is the only constraint, meaning do not

make the target so large that it fills the entire imaging area of the sensor.

6.4.2 The Test Area

The next phase of the process is the environment (test area) where the

configuration is to take place. The area where the configuration is to take place should

mimic the setting where the sensor will actually be used. If a non-self-illuminated target

is to be used then the lighting should be of the same type, brightness, and position

relative to the target as where the sensor will be used. If a self-illuminated target is used

then the lighting of the environment becomes a minor issue. The hardware (sensor and target) placement should be as close as possible to the mean position where the sensor will be used, while still allowing movement of the sensor, relative to the target, to test for adverse lighting effects such as shadows and glare.

6.5 General Testing

This phase is where general testing and data collection begin for the target Color

Tracking Parameters. The sensor may take up to three frames for the full effect of any

parameter / register changes to be realized. For this reason it is important to grab three

frames each time a parameter / register change is sent to the sensor. This may be skipped

when using the “Start Grabbing Binary Pixel Maps” button is used immediately after a

parameter / register change. To begin the testing process, acquire an image from the

sensor using the “Grab Frame” button above the Frame Grab display image.


When the image is displayed, inspect it for focus. To focus the sensor repeat the

following four steps until the image is in focus.

1. Turn the outermost part of the lens up to one-half turn in either direction (clockwise or counter-clockwise), noting the direction.

2. Acquire a new image using the ‘Grab Frame’ button (three times).

3. Inspect the image on the screen.

4. If the image is less focused, reverse the direction in which the lens is turned.

Although the image is low resolution, good focus can be identified by examining

sharp edges in the image, except for objects very close to the lens.

Next, drag the cursor over the target portion of the Frame Grab image, slowly taking

note of the range (low to high) of numbers displayed in the Red, Green, and Blue display

boxes below the image. Use the lowest of the three ranges to set the tolerance.


Figure 6.2: Value of the Rmax, Rmin, Gmax, Gmin, Bmax, Bmin

Example:

Red range   141 to 221 = range of 80, mid-point = 181
Green range 200 to 255 = range of 55, mid-point = 227
Blue range    0 to  77 = range of 77, mid-point = 39

Tolerance 30

Once the tolerance setting has been determined, select the number in the Tolerance text box below the Tracking Frame image, enter the tolerance setting, and press Enter on the keyboard. Return the cursor to the target portion of the Frame Grab image. Locate a position where the number in the display box of the color used for the tolerance (Red, Green, or Blue) is at the mid-point of the range noted earlier, and select it with a mouse click. Two things happen at the click of the mouse.


1. The Color Tracking Parameter text boxes below the Tracking Frame image are

filled in with the minimum and maximum color levels, based on the tolerance

setting, to use in tracking the target.

2. All portions of the Frame Grab image which fall within the range of color

represented by the values in the min / max Color Tracking Parameters are

highlighted in cyan color.

The amount of the image highlighted offers a clue as to how unique the target color is in the setting where the calibration is taking place. The less the cyan is scattered away from the actual target image, the more unique the targeted color. Save the image and the configuration parameters. To keep things organized it is helpful to create a folder labeled ‘Test Parameters’ where the image and configuration files will be stored. Additionally, it will be easier to identify which image file goes with which configuration file if they have the same first name.

Example:

Step_1_1 010706.bmp

Step_1_1 010706.cfg

The next phase in the process is to fine-tune the Color Tracking Parameters and register settings for the target color to determine its validity in the current color space (RGB).

6.5.1 Fine Tuning for RGB Color Space


At this point there is an image in the Frame Grab image display with some

portions highlighted in cyan color and a set of numbers for the targeted color in the

Color Tracking Parameter text boxes below the Tracking Frame image display. There

may also be a cyan bitmap image in the Tracking Frame image display, either from the

default start-up image or from those who like to play with buttons.

6.5.2 Target Brightness

The first step is to determine if the target is too bright for the current camera

settings. This is done by turning OFF the Highlight Color option using the Highlight

Color ON/OFF switch next to the Save Frame button and above the Frame Grab display.

The image of the target should either be the color of the target or white/mostly white,

possibly with hints of the target color around the edges If the target color in the image is

white, possibly with hints of the target color around the edges, then perform the

following six steps to darken the image:

1. Reduce the Auto-Exposure Control setting, (in a text box to the right of the Track

Frame image display), to 0.

2. Reduce the AGC Gain Control setting, (in a text box to the right of the Track

Frame image display), to 0.

3. Reduce the Brightness Control setting, (in a text box to the right of the Track

Frame image display), to 1.

4. Upload the setting to the sensor using the Upload Config to Camera button.

5. Acquire a new image from the sensor using the “Grab Frame” button, (three

times).

6. Inspect the image in the Grab Frame display.


If the target image is still white then the target is too bright and either the light

intensity needs to be reduced or another target needs to be chosen. If the target image is

not visible in the Grab Frame display image then perform the next four steps repeatedly

until the target image is visible in the image in the color of the target.

1. In the text boxes to the right of the Frame Tracking display, increase the Auto-Exposure Control setting by 1.

2. Upload the setting to the sensor using the Upload Config to Camera button.

3. Acquire a new image using the “Grab Frame” button (three times).

4. Inspect the image in the Grab Frame display.

At this point there should be an image of the target color with a very dark

background in the Grab Frame image display. There may also be visible in this display

image some fragments of local lighting, light reflections, and so on; these may be ignored since they are not the color of the target.

Turn ON the Highlight Color option using the Highlight Color ON/OFF switch

next to the Save Frame button above the Frame Grab display.

1. Drag the cursor slowly over the target portion of the image, taking note of the

range (low to high) of numbers displayed in the Red, Green, and Blue display boxes

below the image.

• Determine the mean of the lowest of the three ranges.

• Select a point on the target image where the mean value chosen is displayed in

the appropriate Red, Green, or Blue display box. The target color as it now

appears to the sensor will be highlighted.


2. Save the image and the configuration parameters.

Inspect the entire image in the Grab Frame image display.

If the target is highlighted and there are areas other than the target that are highlighted, then perform the following steps.

1. Decrease the Auto-Exposure Control setting by 1.

2. Upload the setting to the sensor using the Upload Config to Camera

button.

3. Acquire a new image using the “Grab Frame” button (three times).

4. Inspect the entire image in the Grab Frame image display.

5. If the target image is not visible in the Grab Frame display then return the Auto-Exposure Control setting to its previous setting.

a. Upload the setting to the sensor using the Upload Config

to Camera button.

b. Acquire a new image using the “Grab Frame” button

(three times).

If the target shape is still visible in the image as the color of the target then

perform the following two steps.

1. Drag the cursor slowly over the target portion of the image, taking note of the

range (low to high) of numbers displayed in the Red, Green, and Blue display

boxes below the image.

• Determine the mean of the lowest of the three ranges.


• Select a point on the target image where the mean value chosen is

displayed in the appropriate Red, Green, or Blue display box. The target

color as it now appears to the sensor will be highlighted.

2. Save the image and the configuration parameters.

6.5.3 RED Gain Control and BLUE Gain Control

Attempt to reduce the stray highlighted areas of the image by systematically

increasing/decreasing the RED Gain Control values then the BLUE Gain control values

(in a text box to the right of the Track Frame image display) repeatedly per the

following five steps.

1. Manually adjust the value by 30 (until the limit of either 0 or 255 has been

reached).

2. Upload the setting to the sensor using the Upload Config to Camera button.

3. Acquire a new image using the “Grab Frame” button (three times).

4. Inspect the entire image in the Grab Frame image display.

5. Repeat the process until the most effective value has been reached.

At this point there is an extremely darkened image with the target color portion

highlighted along with some other areas of the image as well. To attempt to remove or at

least reduce those other areas to usable levels requires fine-tuning the Color Tracking

Parameters min/max levels. This is done by manually adjusting one parameter at a time: repeat the adjustment until that parameter is finished, then move on to the next parameter.


The objective of this fine-tuning is to end up with a solidly highlighted image

that represents the target, as seen through the sensor, with as few stray dots (specs) of

highlighting as possible while finishing each color adjustment with the broadest range

between the min/max color values.

Working in the order of the text boxes, RED min, RED max, GREEN min,

GREEN max, BLUE min, BLUE max, adjust the parameters using the adjustment

methods below. Note: The range of numbers for these settings is from 15 to 240. The

sensor will ignore any number outside of this range.

6.5.3.1 Min Value adjustment

If the current value is greater than 15 then perform the following three steps until there is

either no change in the image, the image is less refined, or a value of 15 is reached.

1. Select the current value in the text box and manually decrease this value by 10

but to a value no less than 15.

2. Observe the image in the Grab Frame display as you press the enter key on the

keyboard.

3. If the image became less refined or there was no change in the image then return

the value to its previous setting.

If, on the first attempt of the previous steps, the image became less refined or

there was no change in the image then perform the following three steps until there is

either no change in the image, the image is less refined, or a value no greater than (max

value – 1) is reached.


1. Select the current value in the text box and manually increase the value in the

text box by 10.

2. Observe the image in the Grab Frame display as you press the enter key on the

keyboard.

3. If the image became less refined or there was no change in the image then return

the value to its previous setting.

6.5.3.2 Max Value adjustment

If the current value is less than 240 then perform the following three steps until

there is either no change in the image, the image is less refined, or a value of 240 is

reached.

1. Select the current value in the text box and manually increase the value by 10.

2. Observe the image in the Grab Frame display as you press the enter key on the

keyboard.

3. If the image became less refined or there was no change in the image then return

the value to its previous setting.

If, on the first attempt of the previous steps, the image became less refined or there was

no change in the image then perform the following three steps until there is either no

change in the image, the image is less refined, or a value no less than (min value + 1) is

reached.

1. Select the current value in the text box and manually decrease this value by 10.


2. Observe the image in the Grab Frame display as you press the enter key on the

keyboard.

3. If the image became less refined or there was no change in the image then return

the value to its previous setting.
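The min/max search described in 6.5.3.1 and 6.5.3.2 is a simple hill-climb: step the value by 10, keep the step if the image becomes more refined, and revert it otherwise. A compact sketch follows; score() and set_param() are hypothetical stand-ins for uploading a value to the sensor and judging how refined the resulting image looks.

// Illustrative sketch of the "adjust by 10, revert if worse" search.
int  score(int value);      /* hypothetical: how refined the image looks */
void set_param(int value);  /* hypothetical: write the value to the sensor */

int tune_down(int value, int lower_limit) {
  int best = score(value);       /* refinement of the current image */
  int s;
  while (value - 10 >= lower_limit) {
    set_param(value - 10);       /* try a smaller setting */
    s = score(value - 10);
    if (s <= best) {             /* no change, or less refined: revert */
      set_param(value);
      break;
    }
    value -= 10;                 /* keep the improvement */
    best = s;
  }
  return value;
}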

Once all of the Tracking Color Parameters have been fine-tuned per the above

method then save the image and the configuration parameters. Acquire a new image

using the “Grab Frame” button (three times). Inspect the entire image in the Grab Frame

image display. If the highlighted portion of the image consists of a solid, target-shaped blob of color, with possibly a few dots of highlight scattered about the image, then it is

time to view the Binary Pixel Map. Start the Binary Pixel Map stream from the sensor

using the Start Grabbing Binary Pixel Maps button above the Tracking Frame image

display.

Once started this display continuously receives a stream of bit-mapped data from

the sensor until the process is stopped. The “shifting” of the highlighted image is the

result of the image being refreshed with new data. Observe the Track Frame image

display. Notice that the image consists of the highlighted portion of the Grab Frame

image, including any stray dots of highlighted target color. In this image also note the red circle, which may or may not shift about as the image is refreshed. The red circle is placed at the perceived center of the color blob (the target). The more stable the position of the circle on the target color mass, the more reliable the tracking. The method used in the sensor to determine the center of the color blob includes any stray areas or dots of target color as part of the color blob.

To be able to effectively use the sensor for tracking, it is essential that all

erroneous areas and dots of target color be eliminated from the image.


If there are stray dots of highlighted target color in the Track Frame image then

perform the following four steps repeatedly, but no more than three times.

1. Stop the Binary Pixel Map stream from the sensor using the Stop button above

the Tracking Frame image display.

2. Increase the Noise Filter setting below the Track Frame image display by 1.

3. Start the Binary Pixel Map stream from the sensor using the Start Grabbing

Binary Pixel Maps button.

4. Observe the Track Frame image display.

It is normal for the actual target color mass of this image to be reduced in size as

the noise filter increases. The noise filter setting filters out the number of pixels it is set

for from the perimeter of all incidences of target color in the display image. If all stray

dots of highlighted target color have been eliminated from the image and the Noise

Filter setting is less than 3 then the calibration procedure for this target is complete. Save

the image and the configuration parameters.

If the stray dots of highlighted target color have been eliminated from the image

and the Noise Filter setting is greater than 2 then it must be determined whether the

tracked target image is large enough to be tracked by the sensor at the maximum

required distance. This is done by moving the target to the required distance while

monitoring the Track Frame image display. If the target reliably tracks to the maximum

required distance then the calibration procedure for this target is complete. Save the

image and the configuration parameters.


6.6 Commands For The CMUcam1

Table 6.1: CMUcam1 programming commands

Command                                  Explanation
\r                                       Set camera board into an idle state
CR [reg1 value1 ... reg16 value16]\r     Set the camera's internal registers
DF\r                                     Dump a frame out the serial port
DM value\r                               Delay mode
GM\r                                     Get the mean color value in the window
GV\r                                     Get the current version of the firmware
HM\r                                     Half-horizontal resolution mode for DM
I1\r                                     Use the servo port as a digital input
L1 value\r                               Control the tracking light
LM type mode\r                           Enable line mode
RM bit_flags\r                           Engage raw serial transfer mode
RS\r                                     Reset the vision board
SM bit_flags\r                           Switching mode for color tracking
SO servo_number level\r                  Set a servo output on the CMUcam to a constant low or high value
SV servo position\r                      Set the position of one of the five servos
TC [Rmin Rmax Gmin Gmax Bmin Bmax]\r     Track a color
TW\r                                     Track the color found in the central region
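As a usage example (the threshold values are the ones used in Appendix A, while the reply values shown are illustrative), a tracking exchange on the serial line looks like this:

TC 110 160 0 41 0 78\r          host -> camera: track this RGB range
ACK\r                           camera -> host: command accepted
M 45 60 20 40 70 80 35 120\r    camera -> host: middle-mass data packet
:                               prompt (poll mode returns one packet)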


CHAPTER 7

RESULTS AND DISCUSSION

This chapter presents the results and findings of this project, together with the analysis conducted.

7.1 Overview

This chapter discusses the results, findings and assessment from the analysis conducted in this project. After the development of the Vision-Based Autonomous Color Detection and Object Tracking Robot, the robot was analyzed to measure its effectiveness and to ensure the objectives were successfully achieved. Throughout the analysis stage, strengths and weaknesses of the robot were identified.


7.2 Configuration of CMUcam1 and Servo Angle

The first picture below shows the configuration of the CMUcam1 and the servomotor; the second is the image captured by the CMUcam1 in this configuration.

Figure 7.1: Configuration of servo and camera, and the resulting image

As can be seen, the CMUcam1 servo rotates only about the x-axis, so the servo moves whenever there is a change in mx (the middle-mass x value); in other words, the CMUcam1 servo acts as a pan axis. According to the mechanical design, however, the CMUcam1 servo must act as a tilt axis. The CMUcam1 was therefore rotated by 90 degrees so that its servo can be used for tilt. Below is the picture captured by the CMUcam1 when the camera and servo are rotated by 90 degrees.


Figure 7.2: Configuration of camera and servo, and the image captured by the camera

As can be seen, the image captured by the CMUcam1 is also rotated by 90 degrees. For this picture, the x- and y-axes are as shown in Figure 7.3.

Figure 7.3: X-axis and Y-axis of the window


7.3 Lighting Factor

The problem really originates in the fact that the light-sensitive pixels in the CMOS camera are actually more sensitive to infrared than to visible light, especially the red-detecting pixels. So, in an environment that has a great deal of infrared, such as a room lit by a light bulb, the world looks very, very red. It will look so red, in fact, that there is no room left to successfully measure the visible, colored light. Therefore, the picture begins to look like a bad black-and-white picture.

Several camera parameters are important to understand. Auto-Gain enables the

camera to adjust up and down the gain on the R, G and B channels equally so that dark

images are artificially brightened and bright images are artificially darkened. This is

useful in order to make the spectrum of brightness in a picture more visually appealing.

The CMUcam1 tends to be used with Auto-Gain on if the robot will be moving through different lighting conditions. On the other hand, if the robot stays in a particular room, auto-gain is allowed to adjust for a while and is then turned off so that the gain stays fixed. This is useful for object-tracking tasks, because the brightness will not change dramatically.

The effect of Auto White Balance is more subtle. When Auto White Balance is

enabled, it will adjust the relative gains of R, G and B so that, overall, the picture’s total

R, G and B brightness are equal. Therefore, if Auto White Balance is on and a large green sheet of paper is put in front of the camera while frames are being dumped, at first the image will look green but, after 10-15 seconds, it will have become grey.


7.4 CMUcam Evaluation

Throughout the development of the robot, some problems and difficulties were encountered involving the software, hardware and mechanical parts. They can be summarized as follows:

7.4.1 Sensitivity of CMUcam

Obviously, it is difficult to control the CMUcam1 itself since it has its own controller. The PIC18F452 microcontroller only communicates with the camera; the CMUcam1 in fact operates separately from the microcontroller. Communication between the microcontroller and the CMUcam1 is often lost because, while the CMUcam1 is busy with image processing, it cannot be interrupted by the microcontroller.

One solution is to set the CMUcam1 to slave mode. In that case the image processing is done by the microcontroller itself, which requires the user to write their own image-processing program.

7.4.2 Baud Rate

The performance of the CMUcam improves as the baud rate is increased. Therefore, the user or programmer has to choose an appropriate microcontroller whose baud rate is suitable and can be adjusted.
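As a worked example, for a PIC18 USART in high-speed asynchronous mode (BRGH = 1) the baud rate is Fosc/(16*(SPBRG+1)). Assuming a 20 MHz oscillator (an assumption consistent with the commented ser_init(129) call in Appendix A), 9600 baud gives SPBRG = 20,000,000/(16*9600) - 1 = 129.

// Hedged sketch: SPBRG value for a PIC18 USART with BRGH = 1,
// where baud = Fosc / (16 * (SPBRG + 1)).
// e.g. spbrg_for(20000000UL, 9600UL) returns 129.
unsigned char spbrg_for(unsigned long fosc, unsigned long baud) {
  return (unsigned char)(fosc / (16UL * baud) - 1);
}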


7.4.3 Sensor Alertness

The IR sensors used in this project have limited capability, since they can only detect obstacles within about 10 cm. It would be better to have sensors that can detect obstacles earlier, at a safe distance, before the robot collides with them.


CHAPTER 8

CONCLUSION AND RECOMMENDATIONS

8.1 Conclusion

This project has discussed the development of a Vision-Based Autonomous Robot, actuated by two DC motors and equipped with the CMUcam1 vision sensor. The CMUcam (initially created at Carnegie Mellon) is by far the most popular vision camera that can track an object based on its color, and can even move the camera to follow the object if it is mounted on servos (small motors). In this project the PIC18F452 microcontroller was used to process all data, programmed in the C language.

The vision-based robot has been successfully developed, and the work has trained me in many hardware and software skills as well as soft skills. Through the process of developing the robot I explored many new things. I hope that this project will contribute something to machine vision systems and could be marketable once it is equipped with more features and functionality. This project was carried out with limited resources and funding. In addition, not much advanced technology could be applied, since the time available to complete it was only a year. However, the work can be continued later with much more additional intelligence and more advanced approaches, or even be commercialized.

8.2 Recommendations

Even though the Vision-Based Autonomous Robot has successfully been

developed in the end, it still has many disadvantages and should be improved by

implementing some modifications in the future research. I hope that the research about

vision-based robot will not stop here and will be added with much more features and

intelligences.

This vision-based robot is equipped with the CMUcam1 vision sensor, which has fewer features and less functionality than later versions of the CMUcam such as the CMUcam2 and CMUcam2+. I suggest that in future PSM research the students involved try the CMUcam2+ as the vision sensor for their mobile robot. The CMUcam2+ has the following functionality:

• Track user-defined colors at up to 50 Frames Per Second (FPS)

• Track motion using frame differencing at 26 FPS

• Find the centroid of any tracked data

• Gather mean color and variance information

• Gather a 28 bin histogram of each color channel

• Manipulate horizontally pixel-differenced images

• Transfer a real-time binary bitmap of the tracked pixels in an image


• Adjust the camera's image properties

• Dump a raw image (single or multiple channels)

• Up to 160 X 255 resolution

• Supports multiple baudrates

• Control 5 servo outputs

• Automatically use servos to do two axis color tracking. Servo 0 is for pan.

Servo 1 is for tilt.

When compared with other CMUcam products, the CMUcam2+ has these important differences:

• Thinner profile

• 15% lighter weight

• Uses only the more capable OV6620 camera*

• No level-shifter on board for more efficient connection to +5V controllers

• Minimal packaging: does not include printed manual, AC adapter, or CD-ROM

As a future enhancement, more features such as face recognition and hand-eye coordination can be added to the mobile robot by using other vision sensors. Image-processing algorithms can also be built for the CMUcam1 on the host side, instead of relying on the simple image processing that the CMUcam1 can perform itself.


REFERENCES

1. www.eyebot.com
2. www.cs.mu.oz.au/~nmb
3. http://robotics.ee.uwa.edu.au/eyebot
4. http://vision.kuee.kyoto-u.ac.jp
5. http://cobweb.ecu.purdue.edu
6. http://cms.uniten.edu.my
7. http://www.garagegames.com
8. http://www.cytron.com.my
9. Azizul Kepli, "Vision Based Autonomous Color Detection and Object Tracking Robot".
10. A. Kosaka, M. Meng, and A. C. Kak, "Vision-Guided Mobile Robot Navigation Using Retroactive Updating of Position Uncertainty," Proceedings of the IEEE International Conference on Robotics and Automation, Atlanta, 1993.
11. M. Asada, E. Uchibe, and K. Hosoda, "Cooperative Behavior Acquisition for Mobile Robots in Dynamically Changing Real Worlds via Vision-Based Reinforcement Learning and Development."
12. T. Wada and T. Kato, "Event Driven Motion-Image Classification by Selective Attention Model," Proc. of IAPR Workshop on Machine Vision Applications (MVA '96), pp. 208-211, 1996.
13. T. Matsuyama, "Multi-Image Integration for High Precision Image Sensing and Versatile Image Formation," J. of Inst. of Electronics, Information and Communication, Vol. 79, No. 5, pp. 490-499, 1996.


APPENDIX A

SOURCE CODE OF VISION BASED AUTONOMOUS ROBOT


/////////////////////////////////////////////////////Variable Declaration////////////////////////////////////////////

#define MAXCOUNT 50

#define MAXCOUNTBIG 5000

const int iMAXCOUNT = 1000;

unsigned char iRxCount ;

unsigned int iRxCountBig = 0;

volatile unsigned int data_rdy;

unsigned long ser_tmp;

unsigned char characterFromCamera;

unsigned char stringFromCamera[20];

unsigned char mmx, mmy, lcx, lcy, rcx,rcy, pix, conf;

unsigned char Rmean,Gmean,Bmean,Rdev, Gdev, Bdev;

///////////////////////////////////////////Function prototype declaration/////////////////////////////////

void set_port();

//void ser_init(char spbrg_val);

void ser_tx(char c);

void ser_putstring(const char* text);

const char ser_rcv();

void get_packetM();

void get_ACK();

void autogainon();

void clamp();

void setupCam();

void parseMpacket(unsigned char * ,unsigned char*,unsigned char*,unsigned

char*,unsigned char*,unsigned char*,unsigned char*,unsigned char*,unsigned

char*);//store M packet

void posisi();

void track_color1();


void start_tc (unsigned char Rmin, unsigned char Rmax, unsigned char Gmin, unsigned

char Gmax, unsigned char Bmin, unsigned char Bmax);

void parseSpacket(char * ,char*,char*,char*,char*,char*,char*);

void main()

{

Delay_ms(2000);

set_port();

//ser_init(129); // start up serial port handling

Usart_Init(115200);

ser_putstring("rs\r");

// Delay_ms(100);

get_ACK();

Delay_ms(3000);

PORTD =0x01;

setupCam(); // autogain on, etc.

Delay_ms(5000);

PORTD =0x00;

ser_putstring("l1 2\r");

get_ACK();

ser_putstring("gm\r");

get_ACK();

parseSpacket(stringFromCamera, &Rmean, &Gmean, &Bmean, &Rdev, &Gdev, &Bdev);

clamp();

Delay_ms(5000);

PORTD=0b00000001;

ser_putstring("tw\r");

get_ACK();

// parseMpacket(stringFromCamera, &rmean, &gmean, &bmean, &rdev, &gdev, &bdev);


while(1){

PORTD=0b00000001;

track_color1();

ser_putstring("tc\r");

ser_putstring("tc 110 160 0 41 0 78\r");

get_packetM();

parseMpacket(stringFromCamera, &mmx, &mmy,&lcx,&lcy, &rcx, &rcy, &pix,

&conf);

//Delay_ms(500);

// ser_putstring("tc \r");

PORTD=0b00000000;

posisi();

}

}

void ser_putstring(const char* text) {

char i = 0;

// set_bit( pie1, TXIE );

while( text[i] != 0 )

ser_tx( text[i++] );

// clear_bit(pie1,TXIE);

}

void ser_tx(char c) {

//set_bit( pie1, TXIE );

//wait for txif to go hi

while (!(PIR1 & 16)) ; //while (!TXIF);

TXREG = c;

//enable_interrupt(GIE); //?

}

const char ser_rcv(void) {


while (1) {

if (RCSTA & 2) { // RCSTA bit 1 is overflow error

// overflow error

RCSTA.CREN =0; // CREN is RCStatus bit 4 //

ser_tmp = RCREG; // flush the rx buffer

ser_tmp = RCREG;

ser_tmp = RCREG;

RCSTA.CREN =1; // CREN = 1;

}

else if (RCSTA & 4) { // RCSTA bit 2 is framing error

// framing error

ser_tmp = RCREG;

}

else if (PIR1 & 32) { // PIR1 bit 5 is RCIF

ser_tmp = RCREG;

return ser_tmp;

}

}

}

/*const char ser_rcv(void) {

//int iFlag = PIR1 & 32;

while(1)

{

if(!(PIR1 & 32))

{

iRxCount++;

// Delay_us(500);

if(iRxCount > MAXCOUNT)

{

iRxCount = 0;

iRxCountBig++;


if(iRxCountBig > MAXCOUNTBIG)

{

// Final timeout is iRXCountBig * MAXCOUNT

iRxCountBig = 0;

ser_putstring("tc\r");

return ':';

}

}

}

else

{

ser_tmp = RCREG;

iRxCount = 0;

return ser_tmp;

} } }*/

//////////////////////////////////////////Function For Setting the Port////////////////////////////////////

void set_port(void)

{

//Configure port A

TRISA = 0x00;

LATA = 0x00;

//Configure port B

TRISB = 0x00;

LATB = 0x00;

//Configure port C

TRISC = 0x00;

LATC = 0x00;

//Configure port D

TRISD = 0x00;

LATD = 0x00;


//Configure port E

TRISE = 0x00;

LATE = 0x00;

//Configure A/D pins

//adcon1 = 0x06;

//Initialize port A

PORTA = 0x00;

//Initialize port B

PORTB = 0x00;

//Initialize port C

PORTC = 0x00;

//Initialize port D

PORTD = 0x00;

//Initialize port E

PORTE = 0x00; }

//////////////////////////////////////////Function For Get Acknowledgment//////////////////////////////

void get_ACK(){
  // Wait for the ':' prompt that ends the camera's ACK/NCK reply
  while (ser_rcv() != ':') ;
}

///////////////////////////////////////function for setting autogainon///////////////////////////////////////

void autogainon() {
  // CR 18 44 19 33: reg 18 = colour mode / white balance on,
  // reg 19 = auto-exposure / gain on
  ser_putstring("cr 18 44 19 33\r");
  get_ACK();                      // read all reply characters
}


////////////////////////////////////////////Function For Clamp The Camera/////////////////////////////

void clamp() {
  // CR 18 40 19 32: freeze white balance and gain at current values
  ser_putstring("cr 18 40 19 32\r");
  get_ACK();
}

///////////////////////////////////////////Function For Setting Up Camera/////////////////////////////////

void setupCam() {
  Delay_ms(200);
  PORTB = 0x00;
  ser_putstring("rm 3\r");        // raw serial transfer mode on
  Delay_ms(100);
  get_ACK();
  autogainon();                   // auto-gain and white balance on
  ser_putstring("mm 1\r");        // middle-mass mode on
  Delay_ms(100);
  get_ACK();
  ser_putstring("nf 1\r");        // noise filter set to 1
  Delay_ms(100);
  get_ACK();
  ser_putstring("pm 1\r");        // poll mode on: one packet per command
  Delay_ms(100);
  get_ACK();
  ser_putstring("l1 1\r");        // tracking light on
  Delay_ms(100);
  get_ACK();
}


///////////////////////////////////////Function for get the packet of data////////////////////////////////

void get_packetM()
{
  int i = 0;
  /* T0CON = 0xC4;
     TMR0L = 96;
     INTCON = 0xA0;   // Enable TMR0 interrupt */
  // Fill the buffer until the ':' prompt marks the end of the packet
  while(1)
  {
    characterFromCamera = ser_rcv();
    if(characterFromCamera == ':')
    {
      stringFromCamera[i] = '\0';          // terminate the string
      break;
    }
    if(i < sizeof(stringFromCamera) - 1)   // guard against overflow
    {
      stringFromCamera[i] = characterFromCamera;
      i++;
    }
  }
}


//////////////////////////////////////////Function for processing data//////////////////////////////////////
// Parses a raw-mode M (middle-mass) packet returned by the TC/TW
// tracking commands; the fields are read starting past the packet
// header bytes at the front of the buffer.
void parseMpacket(unsigned char* inpString, unsigned char *mmx1, unsigned char *mmy1,
                  unsigned char *lcx1, unsigned char *lcy1, unsigned char *rcx1,
                  unsigned char *rcy1, unsigned char *pix1, unsigned char *conf1)
{

int i = 1;

//*mmx1 = atoi(&inpString[++i]);

*mmx1 =inpString[++i];

*mmy1 = inpString[++i];

*lcx1 = inpString[++i];

*lcy1 =inpString[++i];

*rcx1 = inpString[++i];

*rcy1 = inpString[++i];

*pix1 = inpString[++i];

*conf1 = inpString[++i];

}

// Drive the motor channels on PORTC bits 0 and 1 for one ~20 ms PWM
// frame, chosen from the tracked object's position (mmx), size (pix)
// and tracking confidence (conf).
void posisi()
{
  int i=0;
  for (;i<1;i++)
  {
    if((mmx<27) && (conf>40))        // object far to one side: turn

{

PORTC=0b00000010;

Delay_us(1480);

PORTC=0b00000000;

Delay_us(18520);

break; }

else if((mmx>60) && (conf>40)) {


PORTC=0b00000001;

Delay_us(1520);

PORTC=0b00000000;

Delay_us(18480);

break; }

else if((pix<160) && (conf<90)&&(mmx>40))

{

PORTC=0b00000011;

Delay_us(600);

PORTC=0b00000001;

Delay_us(1800);

PORTC=0b00000000;

Delay_us(17600);

break; }

else if((pix>230) && (conf>100))

{

PORTC=0b00000011;

Delay_us(600);

PORTC=0b00000010;

Delay_us(1800);

PORTC=0b00000000;

Delay_us(17600);

break; }

else if((mmx<44 ) && (mmx>40))

{

PORTC=0b00000000;

break;

}

/* else if ((pix==0) && (conf==0))

{


PORTC=0b00000011;

Delay_us(600);

PORTC=0b00000010;

Delay_us(1800);

PORTC=0b00000000;

Delay_us(17600);

break;

} */

else if((pix<130) && (conf<50))

{

PORTC=0b00000011;

Delay_us(600);

PORTC=0b00000001;

Delay_us(1800);

PORTC=0b00000000;

Delay_us(17600);

iRxCount++;

if(iRxCount > MAXCOUNT)

{

iRxCount = 0;

for (;i<150;i++)

{

Delay_us(600);

PORTC=0b00000010;

Delay_us(1800);

PORTC=0b00000000;

Delay_us(17600);

}

break; } } }}


// Send a TC (track colour) command, passing the six threshold values
// as raw bytes (raw serial transfer mode is enabled in setupCam()).
void start_tc (unsigned char Rmin, unsigned char Rmax, unsigned char Gmin, unsigned char Gmax, unsigned char Bmin, unsigned char Bmax)
{

ser_tx('t');

ser_tx('c');

ser_tx(' ');

ser_tx(Rmin);

ser_tx(' ');

ser_tx(Rmax);

ser_tx(' ');

ser_tx(Gmin);

ser_tx(' ');

ser_tx(Gmax);

ser_tx(' ');

ser_tx(Bmin);

ser_tx(' ');

ser_tx(Bmax);

ser_tx('\r');

ser_tx('\0');

}

// Parses an S (statistics) packet returned by the GM command: the
// mean and deviation of each of the R, G and B channels.
void parseSpacket(char* inpString, char *rmean, char *gmean, char *bmean,
                  char *rdev, char *gdev, char *bdev)
{
  int i = 1;
  *rmean = inpString[++i];
  *gmean = inpString[++i];
  *bmean = inpString[++i];
  *rdev  = inpString[++i];
  *gdev  = inpString[++i];
  *bdev  = inpString[++i];
}


APPENDIX B

PIC18F452 KEY FEATURES


APPENDIX C

SCHEMATIC CIRCUITS


MICROCONTROLLER CIRCUIT


RELAY CIRCUIT (BASIC)

VOLTAGE REGULATOR CIRCUIT


IR SENSOR CIRCUIT

INTERNAL BLOCK DIAGRAM FOR THE LM324