
COMP 417 – Jan 12th, 2006

Guest Lecturer: David Meger
Topic: Camera Networks for Robot Localization

Introduction

Who am I?

Overview of Camera Networks for Robot Localization:
What
Where
Why
How (the technical stuff)

Introduction – Hardware

Intro – What

Previously: Localization is a key task for a robot. It’s typically achieved using the robot’s sensors and a map.

Can “the environment” help with this?

Typical Robot Localization

Sensor Networks


Intro – Where

In cases where there is sensing already in the environment, we can invert the direction of sensing.

Where is this true?
Buildings with security systems
Public transportation areas (metro)
More and more large cities (scary but true)

Intro – Why

Advantages:
In many cases, sensors already exist
Many robots operating in the same place can all share the same sensors
Computation can be done at a powerful central computer, saving on-board computation
Interesting research problem

Intro – How

As the robot appears in images, we can use 3-D vision techniques to determine its position relative to the cameras.

What do we need to know about the cameras to make this work?
Can we assume we know where the cameras are?
Can we assume we know the camera properties?

Problem

Can we use images from arbitrary cameras placed in unknown positions in the environment to help a robot navigate?

Proposed Method

1. Detect the robot
2. Measure the relative positions
3. Place the camera in the map
4. Move robot to the next camera
5. Repeat

Detection – An algorithm to detect these robots?

Detection (cont’d)

Computer vision techniques attempt detection of (moving) objects:
Background subtraction or image differencing (see the sketch after this list)
Image templates
Color matching
Feature matching

A robust algorithm for arbitrary robots is likely beyond current methods.
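As a minimal sketch of the first technique, here is background subtraction with OpenCV; the MOG2 subtractor, camera index, and thresholds are illustrative choices, not the lecture's method:

```python
import cv2

# Hypothetical video source: any camera index or video file path works.
cap = cv2.VideoCapture(0)

# MOG2 maintains a per-pixel Gaussian-mixture model of the background.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Pixels that differ from the background model are marked as foreground.
    mask = subtractor.apply(frame)
    # Clean up speckle noise before looking for moving blobs.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:       # ignore tiny blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("moving objects", frame)
    if cv2.waitKey(30) == 27:              # Esc to quit
        break
cap.release()
```

Note this finds any moving blob, not specifically a robot, which is exactly why a robust detector for arbitrary robots remains hard.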

Detection – Our Method

ARTag Markers

ARTag markers are black-and-white fiducial patterns designed to be detected reliably and identified uniquely in images, sidestepping the general detection problem.
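ARTag itself ships as a standalone library; as an illustration of the same idea with a readily available tool, here is a sketch using OpenCV's ArUco module (OpenCV >= 4.7 API; the file name is a placeholder):

```python
import cv2

# ArUco is an analogous fiducial-marker system used here for illustration.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("robot_view.png")   # hypothetical input image
if frame is None:
    raise SystemExit("no input image")

corners, ids, _rejected = detector.detectMarkers(frame)
if ids is not None:
    # Each detected marker gives four sub-pixel corner locations and a
    # unique ID, so we know both WHERE the robot appears in the image
    # and WHICH marker (hence which face of the robot) we are seeing.
    for marker_id, quad in zip(ids.ravel(), corners):
        print(f"marker {marker_id}: corners\n{quad.reshape(4, 2)}")
```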

Proposed Method

1. Detect the robot ✓
2. Measure the relative positions
3. Place the camera in the map
4. Move robot to the next camera
5. Repeat

Position Measurement

Question: Can we determine the 3-D position of an object relative to the camera from examining 2-D images?

Hint: start from the introduction to Computer Vision from last time

Pinhole Camera Model
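In the standard pinhole model, a 3-D point (X, Y, Z) in the camera frame projects to image coordinates (x, y) through the focal length f:

```latex
x = f\,\frac{X}{Z}, \qquad y = f\,\frac{Y}{Z}
```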

Camera Calibration

An image depends on BOTH scene geometry and camera properties.

For example, zooming in and out and moving the object closer and farther have essentially the same effect

Calibration means determining relevant camera properties (e.g. focal length f)

Projective Calibration Equations

Coordinate Transformation

Calibration Equations

The matrix AT is 3x4 and fully describes the geometry of image formation.
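In the usual homogeneous-coordinate notation (the particular entries of A shown here are the standard intrinsics, an assumption beyond the slides' focal length f), the calibration equation reads:

```latex
\lambda \, m = A \, T \, M,
\qquad
A = \begin{pmatrix} f & 0 & c_x \\ 0 & f & c_y \\ 0 & 0 & 1 \end{pmatrix},
\qquad
T = \begin{pmatrix} R & t \end{pmatrix}
```

Here M is a homogeneous world point, m a homogeneous image point, λ an unknown scale, A holds the camera properties, and T is the coordinate transformation (rotation R, translation t) between world and camera frames.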

Given known object points M, and image points m, it is possible to solve for both A and T

How many points are needed? The 3x4 matrix AT has 12 entries but only 11 degrees of freedom, since the overall scale is arbitrary; each point correspondence contributes 2 equations, so at least 6 points are required.
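As a minimal sketch of how such a solution can be computed, here is a standard Direct Linear Transform in Python; the synthetic points and numbers are purely illustrative, and this is not necessarily the lecture's exact algorithm:

```python
import numpy as np

def dlt(M, m):
    """M: (N, 3) world points, m: (N, 2) image points, N >= 6.
    Each correspondence contributes two linear equations in the 12
    entries of P = AT; the SVD null vector gives P up to scale."""
    rows = []
    for (X, Y, Z), (u, v) in zip(M, m):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    return Vt[-1].reshape(3, 4)

# Synthetic check: project 8 non-coplanar cube corners with a known
# projection matrix, then recover that matrix from the correspondences.
M = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
P_true = np.array([[500, 0, 320, 800],
                   [0, 500, 240, 600],
                   [0,   0,   1,   4]], float)
mh = (P_true @ np.c_[M, np.ones(8)].T).T
m = mh[:, :2] / mh[:, 2:]               # perspective divide

P = dlt(M, m)
print(P / P[2, 3] * P_true[2, 3])       # matches P_true up to scale
```

A and T can then be separated from P by an RQ decomposition of its left 3x3 block, but for localization the combined matrix is often all that is needed.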

Calibration Targets

3-Plane ARTag Target

Position Measurement Conclusion

With enough image points whose 3-D locations are known, measurement of the coordinate transformation T is possible.

The process is more complicated than traditional sensing, but luckily, we only need to do it once per camera

Proposed Method

1. Detect the robot ✓
2. Measure the relative positions ✓
3. Place the camera in the map
4. Move robot to the next camera
5. Repeat

Mapping Camera Locations

Given the robot’s position, a measurement of the relative position of the camera allows us to place it in our map

Question: What affects the accuracy of this type of relative measurement?
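As a concrete sketch of this step (the notation and numbers are illustrative, not from the lecture): writing poses as homogeneous 4x4 transforms, placing the camera in the map is a single matrix product, and any error in the robot's pose propagates directly into the camera's map position, which is what the question above is probing.

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

theta = np.pi / 4                       # robot heading in the world frame
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
T_world_robot = make_T(Rz, [2.0, 1.0, 0.0])           # from localization
T_robot_camera = make_T(np.eye(3), [0.0, 3.0, 1.5])   # from calibration

T_world_camera = T_world_robot @ T_robot_camera       # camera pose in the map
print(T_world_camera[:3, 3])                          # camera position
```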

Proposed Method

1. Detect the robot ✓
2. Measure the relative positions ✓
3. Place the camera in the map ✓
4. Move robot to the next camera
5. Repeat

Robot Motion

A robot moves by using electric motors to turn its wheels. There are numerous strategies for each of the important aspects:
Physical design
Control algorithms
Programming interface
High-level software architecture

Nomad Scout

Differential Drive Kinematics

Odometry Position Readings
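In the standard differential-drive model, the right and left wheel displacements Δs_r and Δs_l, with wheel separation b, give the dead-reckoned pose update (the symbols here are the usual textbook notation):

```latex
\Delta s = \frac{\Delta s_r + \Delta s_l}{2},
\qquad
\Delta\theta = \frac{\Delta s_r - \Delta s_l}{b}

x' = x + \Delta s \cos\!\left(\theta + \tfrac{\Delta\theta}{2}\right),
\quad
y' = y + \Delta s \sin\!\left(\theta + \tfrac{\Delta\theta}{2}\right),
\quad
\theta' = \theta + \Delta\theta
```

Integrating these increments from encoder readings is what produces the odometry position readings, and also why their error grows without bound.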

Robot Motion - Specifics

Robot control is accomplished using an in-house application, Robodaemon.

It allows “point and shoot” motion, not continuous control.

It provides graphical and programmatic interfaces to query robot odometry, send motion commands, and collect sensor data.

Proposed Method
1. Detect the robot ✓
2. Measure the relative positions ✓
3. Place the camera in the map ✓
4. Move robot to the next camera ✓
5. Repeat ✓

Are we done?

Challenges

In general, it’s impossible to know the robot or camera positions exactly. All measurements have error.

What should the robot do if the cameras can’t see the whole environment?

I didn’t say anything about how the robot should decide where to go next.

More?

Mapping with Uncertainty

Given exact knowledge of the robot’s position, mapping is possible

Given a pre-built map, localization is possible

What if neither is present? Is it realistic to assume they will be? If so, when?

Uncertainty in Robot Position

In general, kinematics equations do not exactly predict robot locations.

Sources of error:
Wheel slippage
Encoder quantization
Manufacturing artifacts
Uneven terrain
Rough, slippery, or wet terrain

Typical Odometry Error

Simultaneous Localization and Mapping (SLAM)

When both the robot and map features are uncertain, both must be estimated

Progress can be made by viewing measurements as probability densities instead of precise quantities
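As a toy illustration of that idea (the numbers are invented): two Gaussian estimates of the same quantity fuse into a precision-weighted average whose variance shrinks, the basic operation behind Kalman-filter-style SLAM updates.

```python
def fuse(mu1, var1, mu2, var2):
    """Product of two 1-D Gaussians: the fused mean is a precision-weighted
    average, and the fused variance is smaller than either input."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

# Odometry says a landmark is at 10.0 m (sigma 0.5); a camera measurement
# says 10.4 m (sigma 0.2). The fusion trusts the camera more.
print(fuse(10.0, 0.5 ** 2, 10.4, 0.2 ** 2))   # -> (~10.345, ~0.034)
```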

SLAM Progress

SLAM (cont’d)

A large portion of the robotics work of the last 5-10 years has involved localization and SLAM; results are now very good indoors with good sensing.

These methods apply to our system.

More on this later in the course, or after class today if you’re interested.

Motion Planning

The mapping framework described is dependent on the robot’s motion:
The robot must pass in front of a camera in order to collect any images
Numerous points are needed at each camera to perform calibration
SLAM accuracy is affected by the order of camera visitation

Local and Global Planning

Local: how should the robot move while in front of one camera, to collect the set of calibration images?

Global: in which order should the cameras be visited?

Local Planning

Modern calibration algorithms are quite good at estimating from noisy data, but there are some geometric considerations:
Field of view
Detection accuracy
Singularities in the calibration equations

Local Planning

We must avoid configurations where all points collected lie in a linear subspace of R^3.

For example, a set of images of a single plane moved only through translation gives all co-planar points.
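A small degeneracy check makes this concrete (an assumed approach, not from the lecture): if the collected 3-D points span fewer than three dimensions, the centred point matrix loses rank and DLT-style calibration becomes singular.

```python
import numpy as np

def is_degenerate(points, tol=1e-6):
    """points: (N, 3). True if the points span fewer than 3 dimensions."""
    centred = points - points.mean(axis=0)
    return np.linalg.matrix_rank(centred, tol=tol) < 3

plane = np.array([[x, y, 0.0] for x in range(3) for y in range(3)])
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                float)
print(is_degenerate(plane))  # True: co-planar, avoid this configuration
print(is_degenerate(cube))   # False: points span all of R^3
```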

Projective Calibration Equations

Global Planning

Camera positions are estimated by relative measurements from the robot.

This information is only as accurate as our knowledge about the robot

“Re-localizing” is our only way to reduce error

Distance / Accuracy Tradeoff

Returning to well-known cameras helps our position estimates but causes the robot to travel farther than necessary.

An intelligent strategy is needed to manage this tradeoff.

There are some partial results so far; this is work in progress.

Review

Using sensors in the environment, we can localize a robot

In order to use previously uncalibrated and unmapped cameras, a robot can carry out exploration and SLAM.

This need only be done once; afterwards, accurate localization is possible.

Future Work

Better global motion planning strategies

Integrate other sensing (especially if the cameras have blind spots)

Lose the targets?

Other types of ubiquitous sensing (wireless, motion detection, etc.)
