Caddy

Scott Root Spring 2014

University of Florida

Department of Electrical and Computer Engineering EEL 5666 – IMDL – Final Report

Instructors: A. Antonio Arroyo, Eric M. Schwartz TAs: Josh Weaver, Nick Cox, Andy Gray, Daniel Frank


Table of Contents

Abstract
Executive Summary
Introduction
Integrated System
Mobile Platform
Actuation
Sensors
Behaviors
Experimental Layout and Results
Conclusion
Documentation
Appendices


Abstract

Caddy is an autonomous robot for the collection of golf balls. It uses a smartphone camera and OpenCV to detect and track balls for collection. When Caddy drives over to a golf ball, it uses servos, a brush, and a scoop to sweep up the ball and deposit it in a collection bucket. Caddy uses IR rangefinders as well as bump switches to navigate its environment.

Executive Summary

Caddy is an autonomous robot designed to navigate its environment to find and collect golf balls. It is controlled by an Arduino Mega 2560, a dual H-bridge motor driver, and OpenCV running on my laptop. Wireless communication between my laptop and the robot is accomplished through a pair of Wixel wireless communication devices.

Caddy’s main cabinet houses most of the electronics. Inside are the Arduino, the motor controller, an axle and both motors, one Wixel, a circuit board, a battery, status LEDs, and a power switch. Two reversible DC motors with 90mm wheels attached provide locomotion, while plastic ball casters provide support and smooth turning. The collection of the golf balls is accomplished by two high-torque servos attached to a brush and scoop. Three IR rangefinders and four bump sensors are used for obstacle avoidance. For object detection, a smartphone camera and an IR LED/transistor pair are used.

Caddy begins its task by calibrating all of its IR sensors. It then drives straight and starts looking for objects and obstacles. Using its camera and OpenCV, if Caddy detects the specified object (in this case a golf ball), it drives toward it and collects it by sweeping the ball up the ramp and into the scoop. It then lifts its scoop and dumps the ball into its collection bucket. If Caddy detects an obstacle instead, it uses obstacle avoidance routines to navigate away from it.

Introduction

One of the ways my friends and I kill time at our fraternity house is by driving plastic practice golf balls. They don’t go far, but they’re fun to hit. The problem is that cleaning them up afterward is annoying. One day I was driving balls, trying to figure out what I should do for my IMDL project, and the idea came to me: I decided to make an autonomous robot that would collect the golf balls for me.


Integrated System

Caddy is controlled by an Arduino Mega 2560 microcontroller. It utilizes servos and DC motors for actuation. Three IR rangefinders, four bump sensors, an IR LED/transistor pair, and a smartphone camera are used to determine how Caddy navigates its environment. A pair of Wixel wireless communication modules allows the Arduino to communicate with OpenCV running on my laptop. See figure 1 below for details on the system.

Figure 1: Circuit Schematic

Mobile Platform

The mobile platform consists of several parts (see figure 2 for detail). The main body consists of the cabinet and the front extension. The front extension houses the brush, scoop, ramp, servos, and sensors. The cabinet houses the Arduino, motors, motor controller, 11.1V 5000mAh battery, power switch, status LEDs, Wixel, and Wixel Arduino shield/circuit board (see figure 3 for detail). The cabinet also contains magnetic latching points that allow the access flap to stay closed and the collection bucket to stay in place on top of the cabinet. This allows easy access to the internals while keeping the wiring out of sight.


Two 90mm wheels are used on either side of the robot, conjoined by an axle made from PVC pipe, ensuring that the wheels are aligned. The axle is located at the front of the cabinet, approximately in the middle of the robot. The battery, which is large and heavy, is placed above the axle to minimize the torque it places on the motors. Six plastic ball casters are also placed on the robot (two at the rear of the cabinet, four on the front extension) for balance and smooth driving.

The combination of the ramp, brush, and scoop allows golf balls to be transported from the ground to the collection bucket. When a ball is present, the brush sweeps it up the ramp and into the scoop. The scoop then rotates upwards, and gravity causes the ball to roll into the collection bucket.

Figure 2: Platform


Figure 3: Cabinet Internals

Actuation

Actuation for the robot is achieved by two DC motors and two servos, all bought from Pololu.com (see figure 4 for detail). The DC motors actuate the wheels; they are powered by an 11.1 Volt source and have a peak draw of 6 Amps. At peak conditions the motors can supply 130 oz-in of torque at 130 RPM. Both motors are controlled by a dual H-bridge motor controller from Atlanta Robotics, which is capable of supplying 15 Amps continuous or 30 Amps peak per channel. Each motor is controlled by a single PWM input as well as two digital inputs. The PWM controls the speed of the motor, and the binary combination of the digital inputs selects forward, reverse, or braking. I decided to use two wheels so that my robot would be able to turn about a single point. These motors have more than enough torque to carry the robot, but I overestimated how fast they would be: I have to slow the motors to less than 1/5th of their max speed to get the robot moving slowly enough to process the camera feed.

The two servos are HD high-torque (240 oz-in) servos. One is attached to the brush and one to the scoop. Each is controlled by a single PWM input that determines its rotational position (out of 180 degrees). The brush servo actuates first to sweep golf balls up the ramp and into the scoop, then returns to its initial position. The scoop servo then tilts up, depositing the ball into the collection bucket, and returns to its starting position. This routine is referred to as a sweep and scoop.

A lesson I learned while implementing the actuation is that if the floor is uneven, the wheels won’t always touch the ground. This seems obvious, but it’s easy to overlook. I was able to overcome this issue by wrapping three rubber bands around each wheel and then slipping the rubber tread over the top. This ensures that the wheels are always touching the ground, and it improved my robot’s overall driving performance. Another lesson I learned was not to buy parts from India. I ordered a servo, motor controller, and battery from there and none of them worked. No refunds for a reason, I suppose.

Figure 4: Actuation

Sensors

Bump Sensors: There are four bump sensors located on the front extension (see figure 6 for location). They are simple lever switches configured as normally open and tied to 10kΩ pull-up resistors. When closed, the output of a switch is pulled low. Each bump switch has a wooden arm attached to it to increase its effective area. The rear bump switches protect the wheels from catching on anything (chair legs, walls, etc.), while the front switches tell the robot when it has turned too far.


IR Rangefinders: There are three IR (infrared) rangefinders (IRRFs) located on the front extension (see figure 6 for location). They are 10-80cm GP2Y0A21 Sharp distance sensors, which are powered by 5V and have a single analog output. Each IRRF is read through the Arduino’s built-in ADC (analog-to-digital converter), producing a number between 0 and 1023. The distance-to-voltage relationship is shown in figure 5.

Figure 5: Distance-Voltage Curve (pololu.com)

IR LED & IR Transistor Pair: This sensor is located directly under the front bump switches. The IR LED forms a beam, directed at the IR transistor, that spans the length of the ramp. The IR beam acts as the base signal of the phototransistor; when the beam is blocked, a different voltage is measured on the output of the transistor (see figure 1 for the schematic). This voltage is measured by the Arduino’s ADC, which allows the robot to sense whether there is an object waiting to be collected.

Smartphone Camera: Caddy’s vision system is designed to work with any smartphone running an IP camera app. The video feed from the phone is uploaded to a local network address. Using a computer on the same network, a program called IP Camera Adapter pulls the video feed from the network and presents it to my laptop as a local webcam. Another program, called ManyCam, is used to access the newly created virtual webcam. Visual Studio 2012 can then grab this feed to be analyzed with OpenCV. I am using the app BL-IP Cam on my Droid X2 for this sensor. It transmits at VGA (640x480) resolution at 30 frames per second. Under optimal Wi-Fi conditions, there is only a quarter-second lag between the feed and the processing.

Figure 6: Sensor Placement

Behaviors

Arduino: When Caddy is turned on, it is initially set in a wait state, waiting for a bump sensor to be pressed. When a bump sensor is pressed, the nearest IR rangefinder is calibrated by reading its distance ten times and averaging the results. This value becomes the new threshold value for that rangefinder. If calibration is skipped, a default value is used for each threshold. When the front right bump sensor is pressed, a similar calibration occurs with the IR LED/transistor pair, and then the robot starts driving straight.

Caddy now enters a four-state loop that continues until it is turned off or receives a special debug command. The first state checks whether an object is ready to be collected. The Arduino does this by sampling the IR transistor’s analog value. If this value is greater than the calibrated threshold value (plus a tolerance), a sweep and scoop routine is triggered. In this routine the brush sweeps the ball into the scoop. The robot then comes to a halt as the brush returns to its initial position and the scoop rotates upwards. Gravity causes the ball to roll into the collection bucket, and the scoop returns to its initial position. The robot then continues straight.


The next state checks the camera feed. If an “L”, “M”, or “R” is received serially from the laptop (via the Wixel), the robot will drift left, drive straight, or drift right, respectively. This causes the robot to center itself on the ball and drive straight towards it for collection. This state also checks for debug commands sent serially from the laptop.

The next state takes an averaged reading from each IR rangefinder. If the value returned is greater than the threshold value, a bit is set in a 3-bit number (one bit per sensor). A total of 8 combinations are possible, and each combination has a different obstacle avoidance routine associated with it. These are timed routines that navigate the robot away from any nearby obstacles. I decided to use pre-timed routines over more efficient routines so that my robot would cover more ground and potentially see more objects.

The final state checks the bump sensors. If one is pressed, the robot simply navigates away from the direction in which it was touched, using pre-timed routines. After this, the four-state loop restarts.

Visual Studio & OpenCV: My camera vision software was written in Microsoft Visual Studio 2012 on Windows 7 using OpenCV, an open source computer vision library. The code consists of code written by myself, code gathered from open source databases, and Josh Weaver’s HSV color detection code. The program begins by initializing variables and opening the wireless serial port established by the pair of Wixels. It then waits for a mouse click, which reads the HSV values of the clicked pixel and stores them in threshold variables.

A loop is entered here. The program grabs a still image from the feed and converts the pixels into the HSV color space. It compares the HSV value of every pixel with the threshold values within some tolerance. If the color is close to the threshold the pixel will turn white. Otherwise, it will turn black. The program then uses erosion and dilation techniques to filter out noise.

After this, I tried to add code to recognize only one object at a time. I did this by identifying contours, computing their areas, and filtering out all but the biggest. Unfortunately, my knowledge of coding is based on one course in Java, which I extrapolated to C for Junior Design and Microprocessors, which I extrapolated to C++ for this course. Because I’m three degrees removed from the language I know, I wasn’t able to get the syntax correct for this part in time.

After the mask is established, the center of mass is computed from the locations of the white pixels. The last analyzed image is displayed along with its associated mask. If the x value of the center of mass is on the left, middle, or right side of the screen, a green, blue, or red dot is displayed at the center of mass, and an “L”, “M”, or “R” is sent serially to the Arduino via the Wixel. The program continues to loop through this until Q is pressed, at which point the program exits.


Experimental Layout and Results

As a whole, the design and implementation of Caddy stayed largely consistent throughout the entire process. The only major disparity between design and implementation was the material the robot is made from. The original design was for an entirely 3D printed robot; however, this proved unfeasible. I adapted my design to use some 3D printed parts, but my 3D printing contact ended up falling through. My finished robot ended up being a clever construction of wood, cardboard, Styrofoam, moldable plastic, and PVC.

Smaller iterations did occur. I went through two different motor controllers before finding one that worked (the first two didn’t provide enough power). This taught me that, when in doubt, go for more capability, not less. I also went through two sets of motors; the first set was way too fast and didn’t have enough torque. The only other change was that the robot was originally called Shelly and designed to pick up shell casings. Given the ammo shortage, however, the driving range has proven a more practical hobby than the firing range, leading me to repurpose my robot.

In its current iteration, Caddy drives slowly and deliberately to allow for more effective processing. It navigates even crowded spaces quite effectively, and in optimal Wi-Fi conditions it is excellent at performing the task it was designed for.

Conclusion

Overall, I was able to accomplish most of what I set out to do. I was able to effectively control both my motors and servos in a mechanically sound way. Caddy’s obstacle detection and avoidance routines are better than they need to be. The IR plane is effective at detecting golf balls, and collection is almost always successful. My smartphone adequately transmits a live video feed, and the Wixel units do a great job of communicating with the Arduino. The robot also looks good, with few visible wires, which is always a plus.

The obstacle avoidance capabilities turned out much better than expected; my robot is able to navigate the cafeteria-esque kitchen in my fraternity house (lots of chairs and tables) exceptionally well. I am also particularly satisfied that I was able to use my smartphone as the camera (or any smartphone, for that matter; it can be swapped out very quickly). The wiring is also particularly neat and organized, considering how much more of a rat’s nest it could have been.

This project’s limitation was definitely my lack of familiarity with C++. As I explained earlier, the only formal coding education I’ve had is a semester of Java, so fumbling with syntax was definitely my downfall. The most obvious improvement I could make would be to my vision processing, to make it more accurate. The brush servo could also be more powerful, the ramp could be smoother, and the plastic ball casters should be replaced with metal ones.

Through all this, I thoroughly enjoyed working on this project. It has taught me so much about design, controls, and automation that I wouldn’t have learned otherwise. I’m very satisfied with the way my robot turned out, as well as with the experience.


Documentation

IR Rangefinder: http://www.pololu.com/product/136
DC Motors: http://www.pololu.com/product/1575
Wixel: http://www.pololu.com/product/1336
Motor Controller: http://www.atlanta-robotics.com/Dual_Hbridge_Motor_Drive.php
Arduino: http://arduino.cc/en/Main/arduinoBoardMega2560
Servos: http://www.pololu.com/product/1057


Appendices

Flowcharts: Arduino/Visual Studio


Arduino Code:

#include <Servo.h>

//Set global variables
volatile int RF = 0;
volatile int FLV = 0;
volatile int FRV = 0;
volatile int BLV = 0;
volatile int BRV = 0;
volatile int SV = 0;
volatile int RFLthresh = 225;
volatile int RFMthresh = 150;
volatile int RFRthresh = 225;
volatile int IRthresh = 0;

//Set variables
int forward = 1;
int reverse = -1;
int brake = 0;

int irled = 53; //Status LEDs
int cled = 51;
int rfled = 49;
int bled = 47;


int M1 = 13; //Motor control
int M2 = 12;
int M3 = 9;
int M4 = 8;
int PWM1 = 11;
int PWM2 = 10;

int FL = 6; //Bump sensors
int FR = 7;
int BL = 4;
int BR = 5;

int sweep = A3; //IR transistor
int RFL = A2; //IR rangefinders
int RFM = A1;
int RFR = A0;

Servo servo1; // create servo object to control sweep servo
Servo servo2; // create servo object to control scoop servo
int pos = 80; // variable to store sweep servo position
int posit = 25; // variable to store scoop servo position

void setup(){
  pinMode(M1, OUTPUT);


  pinMode(M2, OUTPUT);
  pinMode(M3, OUTPUT);
  pinMode(M4, OUTPUT);
  pinMode(irled, OUTPUT);
  pinMode(cled, OUTPUT);
  pinMode(rfled, OUTPUT);
  pinMode(bled, OUTPUT);
  Serial.begin(115200); //Set baud rate
  servo1.attach(3); // attaches the servo on pin 3 to the servo object
  servo2.attach(2);
}

void loop(){
  //Initialize servo positions
  servo1.write(80);
  servo2.write(25);
  delay(250);

  while(1){
    //Calibrate IR rangefinders
    FRV = digitalRead(FR);
    FLV = digitalRead(FL);
    BRV = digitalRead(BR);


    BLV = digitalRead(BL);

    //Take 10 samples and average if corresponding bump sensor is pressed
    if(FLV == LOW){
      RFMthresh = 0;
      int thresh = 0;
      for(int i = 0; i < 10; i++){
        thresh = analogRead(RFM);
        RFMthresh = RFMthresh + thresh;
      }
      RFMthresh = RFMthresh/10;
      //Serial.println(RFMthresh);
      delay(500);
    }
    if(BRV == LOW){
      RFRthresh = 0;
      int thresh = 0;
      for(int i = 0; i < 10; i++){
        thresh = analogRead(RFR);
        RFRthresh = RFRthresh + thresh;
      }
      RFRthresh = RFRthresh/10;
      //Serial.println(RFRthresh);


      delay(500);
    }
    if(BLV == LOW){
      RFLthresh = 0;
      int thresh = 0;
      for(int i = 0; i < 10; i++){
        thresh = analogRead(RFL);
        RFLthresh = RFLthresh + thresh;
      }
      RFLthresh = RFLthresh/10;
      //Serial.println(RFLthresh);
      delay(500);
    }

    //If front right BS is pressed, calibrate IR transistor and begin
    if(FRV == LOW){
      //Serial.println("GO");
      for(int i = 0; i < 10; i++){
        int thresh = analogRead(sweep);
        IRthresh = IRthresh + thresh;
      }
      IRthresh = IRthresh/10;
      straight(255);


      delay(50);
      straight(40);
      //Serial.println(IRthresh);

      while(1){ //Main loop
        checkSweep();
        checkCam();
        checkRF();
        doRF();
        checkBS();
        doBS();
      }
    }
  }
}

void checkCam(){ //checks the camera for an object
  while (Serial.available() > 0){ //if instructions from VS are received
    digitalWrite(cled, HIGH); //enable status LED
    int turn = Serial.read(); //read transmission


    if(turn == 'L'){
      driftLeft(40);
      delay(10);
    }
    if(turn == 'M'){
      straight(40);
      delay(10);
    }
    if(turn == 'R'){
      driftRight(40);
      delay(10);
    }
    if(turn == 'S'){ //debug command
      halt();
      delay(10);
    }
    if(turn == 'T'){ //debug command
      turnAround();
      delay(10);
    }
    break; //exit loop for stability
  }
  digitalWrite(cled, LOW); //disable status LED


}

void checkSweep(){ //checks if object is in collection zone and collects it
  SV = 0;
  SV = analogRead(sweep); //get analog value
  if(SV > IRthresh + 10){ //if object is detected
    digitalWrite(irled, HIGH); //enable status LED
    swep(); //sweep object up
    delay(200);
    scoop(); //scoop into collection bucket
    straight(40); //continue
  }
  digitalWrite(irled, LOW); //disable status LED
}

void checkBS(){ //check if a bump sensor is pressed
  FLV = 0; //clear stored values
  FRV = 0;
  BLV = 0;
  BRV = 0;


  FLV = digitalRead(FL); //read bump sensors
  FRV = digitalRead(FR);
  BLV = digitalRead(BL);
  BRV = digitalRead(BR);
}

void doBS(){ //navigate away from colliding obstacle
  if (FLV == LOW){
    digitalWrite(bled, HIGH); //enable status LED
    halt();
    //delay();
    back();
    //delay();
    right(350);
    //delay();
    straight(40);
    return;
  }
  if (FRV == LOW){
    digitalWrite(bled, HIGH); //enable status LED
    halt();
    //delay();
    back();


    //delay();
    left(350);
    //delay();
    straight(40);
    return;
  }
  if (BLV == LOW){
    digitalWrite(bled, HIGH); //enable status LED
    halt();
    //delay();
    back();
    //delay();
    right(350);
    //delay();
    straight(40);
    return;
  }
  if (BRV == LOW){
    digitalWrite(bled, HIGH); //enable status LED
    halt();
    //delay();
    back();
    //delay();


    left(350);
    //delay();
    straight(40);
    return;
  }
  digitalWrite(bled, LOW); //disable status LED
}

void checkRF(){ //reads IRRFs and determines action needed
  RF = 0; //initialize variables
  int LV = 0;
  int MV = 0;
  int RV = 0;

  //Take 3 samples per IRRF
  for(int i = 0; i < 3; i++){
    int readL = analogRead(RFL);
    LV = LV + readL;
    delay(10);
  }
  for(int i = 0; i < 3; i++){


    int readM = analogRead(RFM);
    MV = MV + readM;
    delay(10);
  }
  for(int i = 0; i < 3; i++){
    int readR = analogRead(RFR);
    RV = RV + readR;
    delay(10);
  }

  //Average the values
  LV = LV/3;
  MV = MV/3;
  RV = RV/3;

  //Determine combination of values
  int L = 0;
  int M = 0;
  int R = 0;
  if (LV > RFLthresh){ //compare against calibrated threshold
    L = 1;
  }


  if (MV > RFMthresh){ //compare against calibrated threshold
    M = 2;
  }
  if (RV > RFRthresh){
    R = 4;
  }
  RF = L + M + R;
}

void doRF(){ //navigate away from nearby obstacles
  if(RF > 0){
    digitalWrite(rfled, HIGH); //enable status LED
    if(RF == 1){ // 100
      halt();
      //delay();
      right(350);
      //delay();
      straight(40);
    }
    if(RF == 2){ // 010
      halt();
      //delay();
      back();


      //delay();
      left(350);
      //delay();
      straight(40);
    }
    if(RF == 3){ // 110
      halt();
      //delay();
      back();
      //delay();
      right(350);
      //delay();
      straight(40);
    }
    if(RF == 4){ // 001
      halt();
      //delay();
      left(350);
      //delay();
      straight(40);
    }
    if(RF == 5){ // 101
      halt();


      //delay();
      turnAround();
      //delay();
      straight(40);
    }
    if(RF == 6){ // 011
      halt();
      //delay();
      back();
      //delay();
      left(350);
      //delay();
      straight(40);
    }
    if(RF == 7){ // 111
      halt();
      //delay();
      back();
      //delay();
      turnAround();
      //delay();
      straight(40);
    }


  }
  digitalWrite(rfled, LOW); //disable status LED
}

void right(int del){ //turns right about a point
  halt();
  delay(50);
  motorL(60, 1);
  motorR(60, -1);
  delay(del);
  halt();
  delay(50);
}

void left(int del){ //turns left about a point
  halt();
  delay(50);
  motorL(60, -1);
  motorR(60, 1);
  delay(del);
  halt();
  delay(50);
}


void straight(int sped){ //drives straight at specified speed: sped
  halt();
  delay(50);
  motorL(sped, 1);
  motorR(sped, 1);
}

void turnAround(){ //rotates robot slightly more than 180 degrees
  halt();
  delay(50);
  motorL(60, -1);
  motorR(60, 1);
  delay(1150);
  halt();
  delay(50);
}

void back(){ //drives back about 1.5 feet
  delay(50);
  halt();
  delay(50);


  motorL(60, -1);
  motorR(60, -1);
  delay(1000);
  halt();
}

void halt(){ //stops robot
  motorL(255, 0);
  motorR(255, 0);
  delay(100);
}

void motorL(int fast, int FoR){ //controls left motor (speed, direction)
  analogWrite(PWM1, fast); //specify speed
  //specify direction
  if (FoR == 1){
    digitalWrite(M1, HIGH);
    digitalWrite(M2, LOW);
  }
  if(FoR == -1){
    digitalWrite(M1, LOW);
    digitalWrite(M2, HIGH);


  }
  if(FoR == 0){
    digitalWrite(M1, LOW);
    digitalWrite(M2, LOW);
  }
}

void motorR(int fast, int FoR){ //controls right motor (speed, direction)
  analogWrite(PWM2, fast); //specify speed
  //specify direction
  if (FoR == 1){
    digitalWrite(M3, HIGH);
    digitalWrite(M4, LOW);
  }
  if(FoR == -1){
    digitalWrite(M3, LOW);
    digitalWrite(M4, HIGH);
  }
  if(FoR == 0){
    digitalWrite(M3, LOW);
    digitalWrite(M4, LOW);
  }


}

void driftLeft(int sped){     //drift left while moving
  motorL(sped - 15, 1);
  motorR(sped, 1);
  delay(10);
}

void driftRight(int sped){    //drift right while moving
  motorL(sped, 1);
  motorR(sped - 15, 1);
  delay(10);
}

void swep(){                  //sweeps object up
  for(pos = 80; pos >= 6; pos -= 1)  // goes from 80 degrees down to 6 degrees
  {
    servo1.write(pos);        // tell servo to go to position in variable 'pos'
    delay(15);                // waits 15 ms for the servo to reach the position
  }
  halt();
  delay(500);
  for(pos = 5; pos < 80; pos += 1)   // goes from 5 degrees back up to 79 degrees


  {                           // in steps of 1 degree
    servo1.write(pos);        // tell servo to go to position in variable 'pos'
    delay(15);                // waits 15 ms for the servo to reach the position
  }
}

void scoop(){                 //scoops into bucket
  for(posit = 25; posit < 130; posit += 1){   // goes from 25 degrees to 129 degrees
    // in steps of 1 degree
    servo2.write(posit);      // tell servo to go to position in variable 'posit'
    delay(15);                // waits 15 ms for the servo to reach the position
  }
  delay(500);
  for(posit = 130; posit >= 26; posit -= 1){  // goes back from 130 degrees to 26 degrees
    servo2.write(posit);      // tell servo to go to position in variable 'posit'
    delay(15);                // waits 15 ms for the servo to reach the position
  }
}


Visual Studio Code:

#include "opencv/highgui.h"
#include "opencv/cv.h"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdlib.h>
#include <stdio.h>
#include <tchar.h>
#include "SerialClass.h"
#include <string>
#include <Windows.h>

HANDLE hSerial;
DWORD btsIO;

// Math helper macros
#define max(a, b) ((a) > (b) ? (a) : (b))
#define min(a, b) ((a) < (b) ? (a) : (b))
#define abs(x) ((x) > 0 ? (x) : -(x))
#define sign(x) ((x) > 0 ? 1 : -1)


// Minimum and maximum step size when moving the tracked object overlay
#define STEP_MIN 5
#define STEP_MAX 100

IplImage *image;

// Position of the object we overlay
CvPoint objectPos = cvPoint(-1, -1);
// Tracked color and our tolerance toward it
int h = 0, s = 0, v = 0, tolerance = 10;

/*
 * Transform the image into a two-colored (binary) image: one color for the color
 * we want to track, another for all other colors.
 * From this image we get two pieces of data: the number of pixels detected and
 * the center of gravity of those pixels.
 */
CvPoint binarisation(IplImage* image, int *nbPixels) {
  std::vector<std::vector<cv::Point> > contours;
  std::vector<cv::Vec4i> hierarchy;
  int x, y;


  CvScalar pixel;
  IplImage *hsv, *mask, *bin;
  IplConvKernel *kernel;
  int sommeX = 0, sommeY = 0;
  *nbPixels = 0;

  // Create the single-channel mask (white where the tracked color is detected)
  mask = cvCreateImage(cvGetSize(image), image->depth, 1);

  // Create the HSV image
  hsv = cvCloneImage(image);
  cvCvtColor(image, hsv, CV_BGR2HSV);

  // Threshold the HSV image into the mask
  cvInRangeS(hsv, cvScalar(h - tolerance - 1, s - tolerance, 0),
             cvScalar(h + tolerance - 1, s + tolerance, 255), mask);

  // Create the kernel for the morphological operations
  kernel = cvCreateStructuringElementEx(5, 5, 2, 2, CV_SHAPE_ELLIPSE);

  // Morphological closing (dilate then erode) to fill holes in the white blob
  cvDilate(mask, mask, kernel, 1);
  cvErode(mask, mask, kernel, 1);


  /*
  // Filter the mask down to its largest contour (unused)
  bin = cvCloneImage(mask);

  // Find contours
  std::vector<std::vector<cv::Point> > contours;
  std::vector<cv::Vec4i> hierarchy;
  cv::findContours(bin, contours, hierarchy, CV_RETR_EXTERNAL,
                   CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0));

  // Sort contours by area
  std::sort(contours.begin(), contours.end(), compareContourAreas);

  // Grab the largest contour
  std::vector<cv::Point> biggestContour = contours[contours.size() - 1];

  for(int i = 0; i < contours.size(); i++){
    if(contours[i] == biggestContour){
      // Turn white
    }
    else{
      // Turn black


    }
  }
  mask = cvCloneImage(bin);
  */

  // Walk through the mask looking for the tracked object and compute its center of gravity
  for(x = 0; x < mask->width; x++) {
    for(y = 0; y < mask->height; y++) {
      // If it is a tracked pixel, count it toward the center-of-gravity calculation
      if(((uchar *)(mask->imageData + y*mask->widthStep))[x] == 255) {
        sommeX += x;
        sommeY += y;
        (*nbPixels)++;
      }
    }
  }

  // Show the resulting mask image
  cvShowImage("Mask", mask);

  // Release the memory of the kernel


  cvReleaseStructuringElement(&kernel);
  // Release the memory of the mask
  cvReleaseImage(&mask);
  // Release the memory of the HSV image
  cvReleaseImage(&hsv);

  // If no pixel was detected, return a center outside the image;
  // otherwise return the center of gravity
  if(*nbPixels > 0)
    return cvPoint((int)(sommeX / (*nbPixels)), (int)(sommeY / (*nbPixels)));
  else
    return cvPoint(-1, -1);
}

/*
 * Add a circle to the video that follows the colored object
 */
void addObjectToVideo(IplImage* image, CvPoint objectNextPos, int nbPixels) {
  int objectNextStepX, objectNextStepY;
  char outputChars[] = "c";

  // Calculate the circle's next position (if there are enough pixels)


  if (nbPixels > 10) {
    // If the previous position was invalid, jump straight to the new position
    if (objectPos.x == -1 || objectPos.y == -1) {
      objectPos.x = objectNextPos.x;
      objectPos.y = objectNextPos.y;
    }

    // Move the overlay step by step toward the desired position
    if (abs(objectPos.x - objectNextPos.x) > STEP_MIN) {
      objectNextStepX = max(STEP_MIN, min(STEP_MAX, abs(objectPos.x - objectNextPos.x) / 2));
      objectPos.x += (-1) * sign(objectPos.x - objectNextPos.x) * objectNextStepX;
    }
    if (abs(objectPos.y - objectNextPos.y) > STEP_MIN) {
      objectNextStepY = max(STEP_MIN, min(STEP_MAX, abs(objectPos.y - objectNextPos.y) / 2));
      objectPos.y += (-1) * sign(objectPos.y - objectNextPos.y) * objectNextStepY;
    }
  // -1 = the object isn't within the camera's range
  } else {


    objectPos.x = -1;
    objectPos.y = -1;
  }

  // Draw a circle centered on the calculated center of gravity
  if (nbPixels > 10){
    if (objectPos.x > -1){
      if (objectPos.x < 250){
        cvDrawCircle(image, objectPos, 15, CV_RGB(255, 0, 0), -1);
        outputChars[0] = 'L';                              //command for serial output
        WriteFile(hSerial, outputChars, 1, &btsIO, NULL);  //write one byte to serial port
      }
      else if (objectPos.x > 450){
        cvDrawCircle(image, objectPos, 15, CV_RGB(0, 255, 0), -1);
        outputChars[0] = 'M';                              //command for serial output
        WriteFile(hSerial, outputChars, 1, &btsIO, NULL);  //write one byte to serial port


      }
      else{
        cvDrawCircle(image, objectPos, 15, CV_RGB(0, 0, 255), -1);
        outputChars[0] = 'R';                              //command for serial output
        WriteFile(hSerial, outputChars, 1, &btsIO, NULL);  //write one byte to serial port
      }
    }
  }

  // Show the image in the window
  cvShowImage("Color Tracking", image);
}

/*
 * Get the color of the pixel where the mouse was clicked
 * and use it as the model color (the color we want to track)
 */
void getObjectColor(int event, int x, int y, int flags, void *param = NULL) {
  // Vars


  CvScalar pixel;
  IplImage *hsv;

  if(event == CV_EVENT_LBUTTONUP) {
    // Get the HSV image
    hsv = cvCloneImage(image);
    cvCvtColor(image, hsv, CV_BGR2HSV);

    // Get the selected pixel
    pixel = cvGet2D(hsv, y, x);

    // Set the tracked color to the color of the selected pixel
    h = (int)pixel.val[0];
    s = (int)pixel.val[1];
    v = (int)pixel.val[2];

    // Release the memory of the HSV image
    cvReleaseImage(&hsv);
  }
}


// Comparison function used to sort contours by area
bool compareContourAreas(std::vector<cv::Point> contour1, std::vector<cv::Point> contour2) {
  double i = fabs(contourArea(cv::Mat(contour1)));
  double j = fabs(contourArea(cv::Mat(contour2)));
  return (i < j);
}

int main() {
  hSerial = CreateFile(L"COM7", GENERIC_READ | GENERIC_WRITE, 0, 0,
                       OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
  if (hSerial != INVALID_HANDLE_VALUE) {
    printf("Port opened!\n");

    DCB dcbSerialParams = {0};
    dcbSerialParams.DCBlength = sizeof(dcbSerialParams);  //required before GetCommState
    GetCommState(hSerial, &dcbSerialParams);
    dcbSerialParams.BaudRate = CBR_115200;
    dcbSerialParams.ByteSize = 8;


    dcbSerialParams.Parity = NOPARITY;
    dcbSerialParams.StopBits = ONESTOPBIT;
    SetCommState(hSerial, &dcbSerialParams);
  }
  else {
    if (GetLastError() == ERROR_FILE_NOT_FOUND) {
      printf("Serial port doesn't exist!\n");
    }
    printf("Error while setting up serial port!\n");
  }

  // Image & HSV image
  IplImage *hsv;
  // Video capture
  CvCapture *capture;
  // Key for keyboard events
  char key = 'd';


  // Number of tracked pixels
  int nbPixels;
  // Next position of the object we overlay
  CvPoint objectNextPos;

  // Initialize the video capture from the default camera
  capture = cvCaptureFromCAM(0);
  //capture = cvCreateFileCapture("192.168.1.102:8000/mjpeg?dummy=param.mjpg");

  // Check that the capture is OK
  if (!capture) {
    printf("Can't initialize the video capture.\n");
    return -1;
  }

  // Create the windows
  cvNamedWindow("Color Tracking", CV_WINDOW_AUTOSIZE);
  cvNamedWindow("Mask", CV_WINDOW_AUTOSIZE);
  cvMoveWindow("Color Tracking", 0, 100);
  cvMoveWindow("Mask", 650, 100);

  // Mouse event to select the tracked color on the original image


  cvSetMouseCallback("Color Tracking", getObjectColor);

  // Loop until the user quits
  while(key != 'Q' && key != 'q') {
    // Grab the current frame
    image = cvQueryFrame(capture);

    // If there is no frame, skip this iteration
    if(!image) continue;

    objectNextPos = binarisation(image, &nbPixels);
    addObjectToVideo(image, objectNextPos, nbPixels);

    // Wait 10 ms for a key press
    key = cvWaitKey(10);
  }

  // Destroy the windows we created
  cvDestroyWindow("Color Tracking");
  cvDestroyWindow("Mask");


  // Release the capture
  cvReleaseCapture(&capture);
  CloseHandle(hSerial);

  return 0;
}