
Nonlinear Robotic Control


TABLE OF CONTENTS

Page

LIST OF FIGURES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii

LIST OF TABLES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v

1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

2. Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2.1. System Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2.2. End Point Position Sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

2.2.1. End Point Sensor Operation . . . . . . . . . . . . . . . . . . . . . . . . 8

2.2.2. End Point Sensor Integration . . . . . . . . . . . . . . . . . . . . . . . 9

2.2.3. End Point Sensor Use . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2.2.4. End Point Sensor Calibration and Compensation . . . . . . . . . . 14

2.2.5. End Point Sensor Performance . . . . . . . . . . . . . . . . . . . . . . 21

3. Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.1. Design Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.2. End Point Feedback Nonlinearities . . . . . . . . . . . . . . . . . . . . . . . . . . 30

3.2.1. Effect of Nonlinearities . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

3.2.2. Exact Linearization Methods . . . . . . . . . . . . . . . . . . . . . . . . 32

3.2.3. Feedback Linearization . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

3.2.4. Feedforward Linearization . . . . . . . . . . . . . . . . . . . . . . . . . 40

3.3. Controller Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

3.3.1. Control Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

3.3.2. Compensator Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

4. Hardware Test Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

4.1. Classical Compensator Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

4.2. LQG Compensator Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

4.3. Joint Based Control Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . 57


5. Further Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

6. Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

REFERENCES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

Appendix 1. Parameter Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

Appendix 2. Code Listings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

2.1. AC100-C30 Serial Line Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

2.2. Slew Commander . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

2.3. Linearization Matrix Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

2.4. Singular State Transition LQR Solution . . . . . . . . . . . . . . . . . . . . . . 102

2.5. LQR Compensator Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104

Appendix 3. Equation Derivations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

3.1 Jacobian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

3.2 Rigid Body Equations of Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

3.3 PSD Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

3.4 Second Order Slew Commander . . . . . . . . . . . . . . . . . . . . . . . . . . . 108

Appendix 4. Hardware Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112


LIST OF FIGURES

Number Page

Figure 1.1 Typical Manipulator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

Figure 1.2 Joint vs. End Point Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

Figure 2.1 Table Arm with Tip Sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Figure 2.2 Real Time System Command and Data Flow . . . . . . . . . . . . . . . . . . . 7

Figure 2.3 Sensor Triangulation (2 axis) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

Figure 2.4 IR Target . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

Figure 2.5 Sensor Data Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

Figure 2.6 Sensor Coordinate System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

Figure 2.7 Table Top Misalignments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

Figure 2.8 Sensor Misalignments and Constraints . . . . . . . . . . . . . . . . . . . . . . . 16

Figure 2.9 Sensor Noise PSD Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

Figure 2.10 Sensor Noise PSD Hardware Test . . . . . . . . . . . . . . . . . . . . . . . . . . 23

Figure 3.1 Zero-Pole Map - End Point Sensor . . . . . . . . . . . . . . . . . . . . . . . . . . 28

Figure 3.2 Zero-Pole Map - End Point and Joint Sensors . . . . . . . . . . . . . . . . . . . 29

Figure 3.3 Plant Shoulder Torque to X Position T.F. . . . . . . . . . . . . . . . . . . . . . 35

Figure 3.4 Plant Shoulder Torque to Y Position T.F. . . . . . . . . . . . . . . . . . . . . . 35

Figure 3.5 Plant Elbow Torque to X Position T.F. . . . . . . . . . . . . . . . . . . . . . . . 36

Figure 3.6 Plant Elbow Torque to Y Position T.F. . . . . . . . . . . . . . . . . . . . . . . . 36

Figure 3.7 Linearized X Axis Force to X Position T.F. . . . . . . . . . . . . . . . . . . . . 37

Figure 3.8 Linearized X Axis Force to Y Position T.F. . . . . . . . . . . . . . . . . . . . . 37

Figure 3.9 Linearized Y Axis Force to X Position T.F. . . . . . . . . . . . . . . . . . . . . 38

Figure 3.10 Linearized Y Axis Force to Y Position T.F. . . . . . . . . . . . . . . . . . . . 38

Figure 3.11 Feedback Linearization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

Figure 3.12 Compensated Feedback Linearization . . . . . . . . . . . . . . . . . . . . . . . . 39

Figure 3.13 Feedforward Linearization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

Figure 3.14 Control Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

Figure 3.15 Classical Compensator Loop Transfer Function . . . . . . . . . . . . . . . . 44

Figure 3.16 Classical Compensator Nichols Plot . . . . . . . . . . . . . . . . . . . . . . . . 45

Figure 3.17 Classical Compensator I/O Magnitude Response . . . . . . . . . . . . . . . . 46


Figure 3.18 Classical Compensator I/O Phase Response . . . . . . . . . . . . . . . . . . . 46

Figure 3.19 LQG Compensator Loop Transfer Function . . . . . . . . . . . . . . . . . . . 49

Figure 3.20 LQG Compensator Nichols Plot . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

Figure 3.21 LQG Compensator I/O Frequency Response . . . . . . . . . . . . . . . . . . 50

Figure 3.22 LQG Stability Regions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

Figure 4.1 Classical Compensator Response to Slew Command . . . . . . . . . . . . . . 52

Figure 4.2 Classical Compensator X Axis Disturbance Response . . . . . . . . . . . . . 53

Figure 4.3 Classical Compensator Y Axis Disturbance Response . . . . . . . . . . . . . 53

Figure 4.4 LQG Compensator Response to Slew Command . . . . . . . . . . . . . . . . 55

Figure 4.5 LQG Compensator Commanded Slew cont. . . . . . . . . . . . . . . . . . . . . 55

Figure 4.6 LQG Compensator X Axis Disturbance Response . . . . . . . . . . . . . . . . 56

Figure 4.7 LQG Compensator Y Axis Disturbance Response . . . . . . . . . . . . . . . . 56

Figure 4.8 Joint Based Comparison End Point Positions . . . . . . . . . . . . . . . . . . . . 58

Figure 4.9 Joint Based Comparison End Point Errors . . . . . . . . . . . . . . . . . . . . . 59


LIST OF TABLES

Number Page

Table 2.1 Sensor Output Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

Table 2.2 Hardware Connection Editor Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

Table 2.3 Calibration Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

Table 2.4 Sensor Noise Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

Table 2.5 Sensor Noise Levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25


ACKNOWLEDGMENTS

The author would like to acknowledge the help provided by a number of individuals during the design and documentation of this work. Thanks to Juris Vagners, PhD, who provided the laboratory in which this work was based and the time to critique this document. Thanks to Martin Berg, PhD, who provided the project, inspiration, and guidance under which all this work was completed, as well as many useful comments about the documentation. Thanks to David Bossert, who answered all those annoying little day to day questions in the laboratory.


1. Introduction

Typical robotic manipulators consist of a number of mostly rigid links connected together with free moving “joints”. Each joint has some method of sensing the local joint position (and possibly rate) along with some type of force/torque actuator. The objective of the electromechanical device is often to simply place the end point, or “tip”, of the manipulator in the desired position/orientation.

[Diagram: a base-mounted manipulator with a rotary joint, a sliding joint, and a second rotary joint]

Figure 1.1 Typical Manipulator

Historically, end point position control of robotic manipulators is based on the joint position coordinates (which may be either rotations or linear translations). Desired tip positions and rates are transformed to the equivalent joint positions and rates, assuming rigid link kinematics (although these “inverse kinematics” are often far from simple and occasionally nonunique). Feedback control is then applied, based on joint position errors and feedforward accelerations. Control strategies can often be as simple as PD control because of its guaranteed stability.

Although joint based control has advantages when examined from the standpoint of feedback stability, it has serious disadvantages with regard to absolute end point positioning accuracy. Even if the assumption of perfect joint control could be made, there are still plant uncertainties and disturbances that will result in end point position errors. These errors can be neither eliminated nor reduced through joint feedback compensation.

Commanded joint positions generated through the use of manipulator inverse kinematics are made using assumptions about the manipulator geometry (arm lengths and/or axis orientations). Errors in these quantities lead to commanded joint angles that result in actual end point positions different from the commands. Because this transformation is in a feedforward path, the joint based feedback control cannot compensate for these errors.

Certain types of link flexible deformation can also be unobservable in the joint displacements. Dynamic vibration in the structure can either be sensed through joint rates or estimated through dynamic models (e.g., a Kalman filter). Joint based controllers can also increase damping in the flexible modes through proper phase relationships. However, joint based control cannot compensate for static flexing that results from external forces (usually applied at the end point).

[Block diagrams: in joint based feedback, X_ref passes through the assumed inverse kinematics (subject to parameter errors) to θ_ref, the compensator drives the manipulator dynamics so that θ_ref = θ_real, and the real forward kinematics map θ_real to X_real; in end point feedback, the compensator closes the loop directly on X_real so that X_ref = X_real]

Figure 1.2 Joint vs. End Point Feedback

Joint feedback control is unable to compensate for errors in end point position due to kinematic uncertainties and end point disturbances because feedback is not made directly around the quantity of interest. As shown in Figure 1.2, the error sources are not in the feedback loop, and therefore are not affected by the compensation. To correct for this problem, this document examines a control strategy based on direct sensing and feedback of the manipulator end point position. This methodology does not suffer from any of the intrinsic problems of unobservable errors that joint feedback does.

End point position sensing also has the advantage of a greater supply of more accurate sensors. Because of the geometry of the manipulator, small errors in rotary joint angles are magnified to larger errors in end point position. Rotational position sensing instruments can be capable of measurements with a resolution in the range of approximately 2π/1024 to 2π/65,536 radians. Coupled to an arm length of 1 meter, end point errors of 0.1 to 6 mm result. Laser based interferometric position sensing instruments, on the other hand, are capable of single axis measurements of the end point position with quantization on the order of nanometers.

Although direct end point feedback does not suffer from the same intrinsic errors associated with joint based control, it does present difficulties for feedback control design. One of the difficulties is the nonlinear nature of end point measurements for noncartesian manipulator geometries. This type of nonlinearity has large effects on the system dynamics across the manipulator workspace and must be compensated for in the controller structure. End point feedback also suffers from the noncolocated nature of the sensor and the actuation device. Flexibility in the manipulator structure results in plant dynamics whose linearization contains transmission zeros in the right half of the complex plane. This is well known to cause increased difficulties in the control of the plant. As shown in reference [1], it does allow for a faster response time, if noticeable deformation of the structure is allowed. Flexible modes of the structure may also be unobservable from the end point position and therefore cannot be further stabilized.

A reduced scale hardware testbed was used to examine the key issues associated with control of industrial robotic manipulators. It provides a complete emulation of a full scale industrial manipulator in a controlled environment. The properties of the testbed are described in the following chapters. It has exaggerated link flexibility to provide an environment to study the relation between the compensator and the plant dynamic modes. The sensors are typical of real world manipulators with similar properties and behavior.

An optical infrared tracking sensor was used to provide end point measurements instead of a more accurate interferometric laser tracker. Although the accuracy of the IR sensor is significantly less than that of a laser tracker, it still provides end point measurements with a quantization size eighteen times better than the joint angle encoders. This more cost effective solution gives position measurements of the end point with data properties similar to the laser tracker.

The hardware testbed, equipped with the IR position sensor, was used to examine the issues associated with direct end point position feedback. Particular attention is paid to the difficulties with control of the plant, focusing mainly on the compensation of the plant nonlinearity.


2. Hardware

This section contains a description of the hardware testbed used in the validation of the end point feedback control structure and compensator design. The end point sensing instrumentation has been implemented on the flexible manipulator testbed in the Control Systems Laboratory of the University of Washington. The flexible manipulator testbed has been used for other research related to control of electromechanical manipulator systems, including general control, surface following, and tip force control (see reference [10]).

2.1. System Hardware

Since the end point control system was implemented on the existing flexible manipulator testbed, this is only a general description of the system. Only the new end point position measurement hardware is described in detail. See reference [2] for more details.

[Diagram: two-link arm on a granite table, showing the shoulder motor, elbow motor, optical encoders (one on the shaft under the motor), flexible links, air cushion supports, the end effector, and the target for the position sensing system]

Figure 2.1 Table Arm with Tip Sensor

The testbed is a two link planar robot with two rotational axes of control (see Figure 2.1). It floats on top of a granite table with the elbow and tip supported on air cushion disks. The granite table reduces the effect of all ground motion. The air cushion supports reduce the surface friction to inertia ratios to very small levels. The arms have a tall thin cross section to allow for ease of bending in the plane of the table top while remaining stiff in torsion and perpendicular bending. The first four flexible modes of the structure are at 1.8, 3.4, 4.5, and 8.0 Hz. See reference [3] for a description of the plant dynamic model and hardware validation.

The two joints are controlled with individual direct drive DC torque motors. The joint angles are measured with optical encoders. The end point position is measured using an optical infrared emitter/detector and a reflective target on the manipulator tip. The end point rotational orientation is neither measured nor controlled. The controller is implemented in an AC100-C30 DSP based real time computer.

The actuators are AeroFlex TQ82W-1FA and TQ40W-11FA brushless DC torque motors. The controller D/A converters are 12 bit, with a ±2.603 Nm maximum shoulder torque for a 0.00127 Nm quantization. The elbow motor has a ±0.6215 Nm maximum for a 0.000303 Nm quantization. The shoulder motor begins to leave the linear region at around ±35°. The elbow motor loses linearity at about ±25°. Both motors are at 75% sensitivity at ±60°.

The joint angle sensors are Hewlett Packard HEDS-6000 optical encoders. Each has a 1024 cycle/revolution TTL quadrature output. The resulting angle quantization from the encoder converter is 12 bit for a 0.00153 rad resolution. This is equivalent to a 1.00 mm end point resolution. The encoder linearity is unknown. The position errors are specified as 0.0052 rad maximum.

The controller hardware is a digital AC100-C30 real time computer supplied by Integrated Systems Incorporated (ISI) (see reference [11]). It is based on a Texas Instruments TMS320C30 DSP card housed in a 486 PC. All sensor/actuator I/O is structured through industry pack I/O daughter cards connected to the local DSP bus. The system is commanded through software on a Sun SPARC computer using a standard ethernet interface. The dedicated DSP allows the controller to operate with a consistent sample update, without being burdened with network and disk traffic. The control software is built on the Sun SPARC station using both graphical block diagram auto-coding and user written C code. It is cross compiled on the PC for the TMS320C30 DSP. The real time system allows for any regular fixed multi-rate digital timing structure.


[Diagram: a Sun SPARC running the control design and coding software exchanges commands and saved data over Ethernet with the AC100 (a 486 PC hosting the TMS320C30 DSP); the DSP performs real time control and sensing through I/O on its local bus, driving the torquers through a direct connection and reading the end point sensor over a serial line]

Figure 2.2 Real Time System Command and Data Flow

The end point position sensor is a DynaSight bi-optic infrared emitter/detector that tracks a small reflective target connected to the manipulator tip. It is suspended over the top of the table, looking down perpendicularly at the operating plane. It supplies absolute 3-D position measurements of the target with a nominal 0.05 mm resolution. The mounting structure contains a number of mounting holes so that the sensor can be placed in almost any location above the table plane in four inch increments. The current mounting location is the 5th hole in the table +z axis (from the bottom), the 9th hole in the table +y axis (from the back), and the 5th and 6th holes in the table +x axis (to the side).

2.2. End Point Position Sensor

The DynaSight position sensor is an absolute 3-D position sensor built by Origin Instruments Corporation in Grand Prairie, Texas (see reference [7]). It was originally developed as an observer location sensor for virtual reality applications. It was chosen based on its ease of integration and performance for a relatively low cost. Its accuracy is less than that of a high performance laser tracking system, but is still sufficiently better than the joint based encoders. It has been shown to have sensor noise characteristics of the same type as laser tracking systems.


2.2.1. End Point Sensor Operation

The DynaSight sensor is a self contained bi-optic system. It projects IR light into its field of view and looks for the reflection from a small circular target. The target is made from a material with good IR reflective properties so that it appears as the brightest IR location in the sensor field of view. The sensor optics focus the reflection from the image onto an internal CCD plane at a location proportional to the angles to the target. The cartesian centroid of the CCD image gives the two angles from the sensor to the target. The sensor has two optical inputs with a fixed separation, and uses triangulation to convert the four (redundant) angle measurements into three linear position measurements. The noise and errors are therefore noticeably greater in the distance (z) axis.

[Diagram: the end point sensor's two CCDs, separated by a fixed baseline, each measure an angle (α, β) to the target; the intersection of the two sight lines locates the target in the X-Z plane]

Figure 2.3 Sensor Triangulation (2 axis)

The DynaSight sensor uses target tracking rather than target scanning. The initial brightest reflection point is determined from a complete scan of the field of view. Afterwards, that reflection is tracked (using an initial location guess based on the previous measurements). This method decreases the measurement time, but increases the sensitivity to tracking loss of high speed targets. Typical reacquisition times after a loss of target are observed to be approximately 0.6 seconds, with a 1.0 second maximum.


Maximum target range and position measurement noise levels are partially dependent on the size of the reflective target. Larger targets allow for longer ranges. With the chosen target, 19 mm in diameter, the useful range is on the order of 1.0 m.

[Diagram: circular reflective IR target]

Figure 2.4 IR Target

There is a small three color LED on the face plate of the sensor that gives the internal tracking state of the sensor. A green state implies tracking; a yellow state implies danger; a red state implies loss of tracking. This tracking information is also made available to the controller through the data interface.

The exact implementation of the tracking algorithms in the sensor firmware (redundant measurement triangulation, target centroiding, noise rejection) is proprietary to Origin Instruments.

2.2.2. End Point Sensor Integration

The DynaSight sensor is a self contained unit. Communication with the AC-100 controller is made through a standard 9 pin RS-232C serial communication port. Completed position measurements are passed asynchronously in packets, with initiation from the DynaSight. Since the firmware does not currently support hardware handshaking, the host computer is expected to pick up all data as it arrives. Input to the AC-100 controller is made through a standard IP-SERIAL I/O card connected to the DSP local bus. The IP-SERIAL I/O card has two independent channels that support either RS-232 or RS-422 communication protocols. The DynaSight is connected to the A channel of the IP-SERIAL card in RS-232 mode.

The DynaSight sensor is capable of a number of operating modes. Each of these modes has a different data collection timing method and speed, with a number of possible output formats (see reference [7]). The sensor mode is set through either an external dip switch or, more generally, through serial line commands. The DynaSight is capable of running in a synchronized mode through serial line initiation commands from the AC-100 host computer. However, to increase data collection speed and reduce data latency (the time delay between the actual image capture on the sensor CCD and when the completed position information is used by the controller), the DynaSight sensor is put into a free running mode. This increases data update rates and high speed tracking ability at the cost of a certain amount of ambiguity over data latency.

The DynaSight sensor for the flexible manipulator testbed is configured to run in stereoSync60 mode (or “V” mode). This is a free running mode with a nominal update rate of 67 Hz. It trades off centroiding and environmental light rejection accuracy for reduced data latency. This is the mode with the fastest update rate and best high speed target tracking ability. The sensor outputs an eight byte packet at 19200 baud (3.3 msec total transmit time) every 15.2 msec.

Since the sensor read and controller sampling are asynchronous processes, the data latency is variable. A delay of up to 0.015 seconds is produced by the sensor internal processing. As the two processes shift in time relative to each other, the delay between the receipt of the measurement packet and the beginning of the next controller sample period changes. This delay is bounded between 0 seconds and the sample period. As the sample rate is increased, the delay is shortened (in an absolute sense). However, for sampling rates above the sensor rate of 67 Hz, a certain percentage of the sample periods will be initiated without any new sensor data. It is possible to identify these sample periods by looking for transitions in the packet count number, but this is currently not implemented in the controller designs.

As data packets are created, they are sent over the serial line. The IP-SERIAL I/O card contains a three byte hardware buffer to reduce the interaction of the serial line hardware and the software (see Figure 2.5). The AC-100 executables contain interrupt drivers that collect and save all incoming bytes to a ring buffer for later processing. The size of the software buffer is arbitrary (it is currently set to 1024 bytes, which corresponds to a maximum sample period of 1.95 seconds). At the sample rate at which serial data is requested, a processing routine is called that retrieves all the bytes in the ring buffer and attempts to decode the packetized position information into floating point position data. The decoding routine uses a static state data structure so that partial packet information can be processed. This allows the receipt of sensor data to span multiple processing calls. The data packets also contain unique header patterns, which allows the beginning of a data packet to be identified in a larger list of received data bytes. These two features allow the decoding process to run completely independently of the timing of the incoming data.

[Diagram: sensor data flow — the end point sensor tracks the target and transmits asynchronously (fast) over the serial line into the IP-Serial card's 3-byte hardware buffer; interrupt drivers move the bytes into a software ring buffer, and the synchronous (slower) decode subroutine empties the ring buffer and passes position data to the control software]

Figure 2.5 Sensor Data Flow

The controller can sample the position information at nearly any interval, from a maximum of about 2.0 sec to a minimum of about 0.0003 sec. For periods less than 15 msec, it is possible that new sensor data may not have arrived since the last sample period, in which case the decoding routine outputs will not have changed between sample periods.

The user written serial decoding routines exist in two different versions. The first

version is used to set the operating mode of the sensor. Its only function is to pass a

command packet to the DynaSight sensor. The second version is used to decode sensor

position packets. It assumes that the sensor is in a compatible operating mode. Its only

function is to receive and convert data from the sensor. These functions were separated to

avoid the one-second retargeting delay that occurs when the sensor operating mode is changed.

Page 18: Nonlinear Robotic Control

The decoding routine is based on a sample routine supplied by Origin Systems.

See appendix 2.2 for code listings and further information.

2.2.3. End Point Sensor Use

The controller inputs from the DynaSight sensor are given in Table 2.1.

Name            Units   Range      Resolution
X position      mm      ±500       0.05
Y position      mm      ±500       0.05
Z position      mm      ±500       0.05
Data Quality    n/a     -1,0,1,2   n/a
Sync Quality    n/a     0,1        n/a
Packet Number   n/a     1 - 1e37   1

Table 2.1 Sensor Output Data

The positions are the rectilinear distances of the target centroid from a small mark

on the sensor face plane. Measurements are given in the sensor local coordinate frame

(see Figure 2.6). A transformation to the manipulator table coordinates needs to be made in

the controller software. The exact transformation depends on the mounting location and

orientation of the sensor. In the nominal mounting location, the transformation is given by:

x_table = x_center − x_DynaSight / 1000

y_table = y_DynaSight / 1000 − y_center        (1)
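In code, the transformation of equation (1) is a scale-and-offset each way; the sketch below uses the calibrated kx, ky of Table 2.3 as the nominal sensor-center location (the exact values depend on the mounting, as noted above).

```python
# Equation (1): map DynaSight sensor-frame readings (mm) to table-frame
# coordinates (m) for the nominal mounting location.
X_CENTER = 0.7621   # m, k_x from Table 2.3
Y_CENTER = 0.6533   # m, k_y from Table 2.3

def dynasight_to_table(x_dyn_mm, y_dyn_mm):
    """Sensor-frame mm readings -> table-frame metres (eq. 1)."""
    x_table = X_CENTER - x_dyn_mm / 1000.0
    y_table = y_dyn_mm / 1000.0 - Y_CENTER
    return x_table, y_table
```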


[Figure 2.6 shows the sensor coordinate system: the X, Y, and Z axes referenced to the mark on the sensor face plane.]

Figure 2.6 Sensor Coordinate System

The Data Quality flag marks the tracking state of the end point sensor according to the following enumeration:

-1  new data has not been received since the last output request
 0  sensor is tracking, data is good
 1  sensor is in danger of losing track, data is suspect
 2  sensor or software has lost track, data is either bad or not changing

The Sync Quality flag marks the state of the decoding software according to the following enumeration:

0  decode software is synchronized to sensor packet headers
1  decode software has lost the packet location

The packet count contains the number of data packets from the sensor that have been

successfully decoded. It can be used to test for the arrival of new position data. If the

sensor has lost target track, the position returned from the sensor is the latest tracked

position, but the packet count will still increment (examine the Data Quality output to

identify this case.)
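A minimal gating check built from these outputs might look like the following sketch (the function name and structure are illustrative, not taken from the AC-100 code): the packet count detects arrival of new data, and the Data Quality flag rejects the stale "last tracked" positions that keep arriving after track is lost.

```python
# Combine Packet Number and Data Quality (Table 2.1) into a single
# "usable new measurement" check, as described in the text.
# Data Quality: -1 no new data, 0 good, 1 suspect but tracking, 2 lost.

def fresh_and_valid(packet_count, last_packet_count, data_quality):
    """True only if a new packet arrived AND the sensor is tracking."""
    new_data = packet_count != last_packet_count
    tracking = data_quality in (0, 1)   # accept good or merely suspect
    return new_data and tracking
```

Whether quality level 1 ("suspect") should be accepted is a design choice; here it is accepted, since the sensor is still tracking.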

All the appropriate code has already been included in the AC100 link list and

object libraries. No special code needs to be included in the System Build controller, apart

from a coordinate transformation / calibration block. All connections to the hardware are

made inside the AC100 Hardware Connection Editor.


To configure the sensor to operate in the proper mode, a simple AC-100 project must be run whose only function is to send a single configuration command to the sensor. An MS-DOS batch file was created that runs this AC-100 project. At the DOS prompt on the AC-100 host computer, type:

C:\> dynaset

After the sensor face plate status light has turned red, the project can be canceled by

typing CTRL-C. The face plate status light should turn green in approximately one second

if a target is in the sensor field of view. The setup routine only needs to be run once after

the sensor has been turned on. The operating mode is maintained internally within the

sensor hardware.

The connections from the sensor hardware to the software inputs are made in the

Matrix/X AC-100 Hardware Connection Editor. All inputs are currently connected to

module C. The module type is IP_SERIAL_A. The data is indexed from zero.

Name            Type          Module   Index
X position      IP_SERIAL_A   3        0
Y position      IP_SERIAL_A   3        1
Z position      IP_SERIAL_A   3        2
Data Quality    IP_SERIAL_A   3        3
Sync Quality    IP_SERIAL_A   3        4
Packet Number   IP_SERIAL_A   3        5

Table 2.2 Hardware Connection Editor Data

2.2.4. End Point Sensor Calibration and Compensation

In addition to additive noise, the physical configuration of the sensor relative to the


manipulator table gives rise to constant position and rotation errors that need to be

compensated. In particular, if the end point position measurements are to be used in

conjunction with the joint angles, the transformation must be made so that the nominal

outputs for undeformed links are consistent.

The geometry gives rise to nine parameters:

1  sensor center table x axis location     kx
2  sensor center table y axis location     ky
3  arm 1 length                            L1
4  arm 2 length                            L2
5  shoulder (joint 1) zero rotation angle  ε1
6  elbow (joint 2) zero rotation angle     ε2
7  sensor x axis rotation                  ηx
8  sensor y axis rotation                  ηy
9  sensor z axis rotation                  ηz

Parameters ε1 and η z are not independent, so there are actually only eight

compensation parameters. There are five possible measurements:

1  DynaSight x position             xD
2  DynaSight y position             yD
3  DynaSight z position             zD
4  shoulder (joint 1) encoder angle φ1
5  elbow (joint 2) encoder angle    φ2

In addition, there is one constraint: the end point is constrained to move in only two dimensions (the table plane).


[Figure 2.7 shows the table top misalignments: the sensor zero point offset (kx, ky) from the table origin, the link lengths L1 and L2, the joint zero-angle offsets ε1 and ε2 (with the elbow nominally at 90˚), and the DynaSight axes XD, YD relative to Xtable, Ytable.]

Figure 2.7 Table Top Misalignments

[Figure 2.8 shows the sensor misalignments and constraints: the real sensor axes (XR, ZR) rotated by ηy relative to the virtual sensor axes (XD, ZD), with the target constrained to the table plane Z = z0.]

Figure 2.8 Sensor Misalignments and Constraints

Under the assumptions of small sensor misalignments (ηx and ηy), the calibration

procedure for these misalignments can be separated from the other six parameters.

The real forward kinematic model for the end point location is given as:

x_e = L1 cos(θ1) + L2 cos(θ1 + θ2)
y_e = L1 sin(θ1) + L2 sin(θ1 + θ2)        (2)


where,

x e , y e real end point location

θ 1 , θ 2 real joint angles

The end point sensor compensation equations are given as:

x_e = k_x − x_D / 1000
y_e = y_D / 1000 + k_y        (3)

The joint angle compensation equations (for a 12-bit-per-revolution relative encoder, with the elbow joint at a nominal angle of 90°) are given as:

θ1 = (2π / 2^12) φ1 + ε1
θ2 = (2π / 2^12) φ2 + π/2 + ε2        (4)

where

x_D, y_D    DynaSight outputs
φ1, φ2      encoder outputs
ε1, ε2      static offsets
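The encoder compensation amounts to a scale-and-offset conversion; a small sketch, with the static offsets of Table 2.3 used as example defaults:

```python
import math

# Equation (4): 12-bit-per-revolution relative encoders, elbow
# nominally at 90 deg. eps1, eps2 are the calibrated static offsets
# (example values taken from Table 2.3; they change with every
# power-up, as noted in the calibration section).

def encoder_to_joint_angles(phi1_counts, phi2_counts,
                            eps1=0.07736, eps2=-0.04654):
    scale = 2.0 * math.pi / 2**12          # rad per encoder count
    theta1 = scale * phi1_counts + eps1
    theta2 = scale * phi2_counts + math.pi / 2 + eps2
    return theta1, theta2
```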

Under the assumption of small nominal joint angle offsets, the sinusoid terms can be written as:

sin(θ + ε) ≈ sin(θ) + cos(θ) ε
cos(θ + ε) ≈ cos(θ) − sin(θ) ε        (5)

Parameters (1) to (6) can now be written in a linear relationship to the measurements.

[x_D]          [ 1    0   −cos(ϕ1)  −cos(ϕ1+ϕ2)   sin(ϕ1)   sin(ϕ1+ϕ2) ]
[y_D] = 1000 · [ 0   −1    sin(ϕ1)   sin(ϕ1+ϕ2)   cos(ϕ1)   cos(ϕ1+ϕ2) ] [k_x  k_y  L1  L2  α1  α2]^T        (6)

where

α1 = L1 ε1
α2 = L2 (ε1 + ε2)        (7)

ϕ1 = (2π / 2^12) φ1
ϕ2 = (2π / 2^12) φ2 + π/2

The sensor mounting misalignments can be written as the following transformation. Notice that the end point sensor twist, ηz, is not independent of the other table top parameters and is not included in the equations.

[x_R     ]   [ 1      0         0      ] [ cos(ηy)   0   sin(ηy) ] [x_D     ]
[y_R     ] = [ 0   cos(ηx)  −sin(ηx)   ] [    0      1      0    ] [y_D     ]        (8)
[z_R − z0]   [ 0   sin(ηx)   cos(ηx)   ] [−sin(ηy)   0   cos(ηy) ] [z_D − z0]

The z equation, under the constraint z_R = z0, can be written in the form

z_D = (sin(ηy) / cos(ηy)) x_D − (sin(ηx) / (cos(ηx) cos(ηy))) y_D + z0        (9)

which becomes the matrix equation

z_D = [x_D  −y_D  1] [ξ1  ξ2  z0]^T        (10)

where

ξ1 = tan(ηy)
ξ2 = tan(ηx) / cos(ηy)

The exact compensation equation for the misalignments could be used as given

above. However, because of the greater noise associated with the distance (z)

measurement, there is a second form of the compensation equations that doesn't use zD.

This second form is preferable. By application of the two dimensional table top constraint,

zD can be removed from the equations.


[x_R]   [     cos(ηy)          0          sin(ηy)      ] [x_D     ]
[y_R] = [ sin(ηy) sin(ηx)   cos(ηx)   −sin(ηx) cos(ηy) ] [y_D     ]        (11)
                                                         [z_D − z0]

with

z_R − z0 = 0 = −sin(ηy) cos(ηx) x_D + sin(ηx) y_D + cos(ηx) cos(ηy) (z_D − z0)

so that

z_D − z0 = (sin(ηy) cos(ηx) x_D − sin(ηx) y_D) / (cos(ηx) cos(ηy))

which can be rearranged to give

[x_R]   [ 1/cos(ηy)   −tan(ηx) tan(ηy) ] [x_D]
[y_R] = [     0           1/cos(ηx)    ] [y_D]        (12)

At this point, we have a series of equations, linear in the unknown parameters. By

measurement of the known sensor outputs at a number of points, a series of overdetermined

linear equations can be formed.

[m_a1]   [m_b1]
[m_a2] = [m_b2] p        i.e.  M_a = M_b p        (13)
[  ⋮ ]   [  ⋮ ]
[m_an]   [m_bn]

where

m_ai    vector of the i-th measurement of x_D and y_D (or z_D)
m_bi    matrix from the i-th measurement of φ1 and φ2 (or x_D and y_D)
p       a vector of parameters

This has a solution which minimizes the square of the errors,

p = (M_b^T M_b)^(-1) M_b^T M_a        (14)


or, numerically more stable alternatives using a singular value decomposition (given here in Matlab™ language):

p = pinv(Mb) * Ma
  = Mb \ Ma        (15)
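A sketch of the calibration solve on synthetic data, here in Python/NumPy rather than the Matlab of the appendix (the noise level, angle ranges, and random seed are invented; the coefficient matrix is the eq. (6) tabletop form and the stacked system is that of eq. (13)):

```python
import numpy as np

# Least-squares solve of the stacked eq. (13) system for the six
# tabletop parameters of eq. (6). Each measurement point contributes
# two rows (x_D and y_D). Synthetic data stands in for the fifty
# averaged measurement points described in the text.
rng = np.random.default_rng(0)
p_true = np.array([0.7621, 0.6533, 0.6519, 0.6019, 0.0504, 0.01855])

rows, meas = [], []
for _ in range(50):
    f1 = rng.uniform(-1.0, 1.0)            # compensated joint angles (rad)
    f2 = rng.uniform(0.5, 2.5)
    c1, s1 = np.cos(f1), np.sin(f1)
    c12, s12 = np.cos(f1 + f2), np.sin(f1 + f2)
    B = 1000.0 * np.array([[1, 0, -c1, -c12, s1, s12],
                           [0, -1, s1, s12, c1, c12]])
    rows.append(B)
    meas.append(B @ p_true + rng.normal(0, 0.05, 2))   # 0.05 mm noise

Mb = np.vstack(rows)                # stacked coefficient matrices m_bi
Ma = np.concatenate(meas)           # stacked measurements m_ai
p_hat, *_ = np.linalg.lstsq(Mb, Ma, rcond=None)
```

Spreading the synthetic points over a wide angle range plays the same role as the even spacing across the work space in the text: it keeps the condition number of the stacked matrix low.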

Measurement data was collected at fifty different points, evenly spaced across the sensor field-of-view work space to improve the condition number of the measurement matrices. Because of the stochastic nature of the DynaSight outputs, 20 consecutive readings were taken at each location and averaged to create a single measurement.

The minimum least squares error equations were solved to give the following

results:

Parameter   Value      Units
kx          0.7621     m
ky          0.6533     m
L1          0.6519     m
L2          0.6019     m
α1          0.0504     m-rad
α2          0.01855    m-rad
ε1          0.07736    rad
ε2          -0.04654   rad
η1          0.002906   rad
η2          0.01294    rad
z0          0.7746     m

Table 2.3 Calibration Parameters

This results in the compensation equations:

[x_r]   [ 1.0000042   −0.0000376 ] [x_D]
[y_r] = [ 0.0000000    1.0000838 ] [y_D]        (16)

x = 0.7621 − 0.001 x_r
y = 0.6533 + 0.001 y_r        (17)

Due to the small size of the misalignments, the rotational compensation was not

implemented in the control software. Note that due to the initialization properties of the

encoders, the values for α1, α 2, ε1, and ε2 will change for each series of calibration runs,

and all data must be collected in a single series of runs without powering down any of the

hardware.

2.2.5. End Point Sensor Performance

Because the DynaSight sensor depends on precise imaging of the target, the

performance is strongly affected by the location of the target relative to the optics (the face

plate). The sensor has a limited field of view with a noticeable decrease in performance

near the edges. With the 6mm target, the sensor has a distance limit of about 1.2 m. The

sensor was placed at 0.8 m above the table plane to place the work space well within the

high performance area. The limited imaging area of the optics gives a square cone field of

view of approximately ±32˚. This results in a usable working area of a square with 0.8

meter sides. There is a noticeable increase in sensor noise near the work space edges. This

is compounded by the fact that, because of the constant target normal vector, the angle of

incidence between the target and the optics increases near the edges. Even at the center, the

maximum angle of incidence for acceptable tracking is about ±30˚.

The sensor tracks the target centroid, as opposed to scanning the entire field of

view. This decreases required processing time and increases the data update rate.

However, if the target moves too far during the update interval, the target can be lost. This

results in a maximum linear rate of the target for low frequency motion. The stereoSync60

mode maximizes this rate. The exact maximum rate is unknown (~0.8 m/s), but seems to

be less than the rate exhibited during control. Other modes had barely acceptable


maximum allowable linear rates. No problems have been observed with either target

tracking or rejection of spurious environmental light (sun light, overhead fluorescents,

metallic objects in the field of view).

The sensor thermal transient effects were measured. Short term variations of 0.6

mm total position accuracy were observed over the first hour. These dropped to 0.2 mm

after thermal steady state was reached.

The sensor is operated in a free running mode, where it supplies new position measurements at as fast a rate as possible. The nominal data update rate was measured to be 65.75 Hz ± 0.47 Hz (3σ, 20 trials with 0.125 Hz quantization). Short term variations in the sample period were not measured, but were observed to be small. Target reacquisition time after a tracking loss was also measured. Typical values were in the 0.60 to 0.66 second range. Maximum time was around 1.0 seconds.

Sensor noise (variations from the mean when the target undergoes no motion) contains both quantization and (assumed) stochastic effects. For the sensor mounting geometry, the quantization is fixed at 0.05 mm. The DynaSight firmware does have a dynamically adjusted gain that doubles the quantization as the absolute target distance increases. This feature should not be seen in the testbed's fixed mounting configuration.

The stochastic noise has an RMS magnitude that changes with a number of

parameters. The noise level increases for the following situations : 1) larger perpendicular

distance from the sensor center line, 2) larger distance from the face plate, 3) smaller

reflective target. The stochastic noise is mostly constant for a large range of configuration

values, with a dramatic increase at some limiting point (i.e. the noise is constant across the

workspace until 350 mm from the center line, at which point the noise increases until

targeting is lost at 400 mm).

The form of the sensor noise spectral variation is shown in Figure 2.9. The theoretical covariance of the noise is given by

cov(x) = 2 ∫₀^ω0 PSD(τ) dτ        (18)

       = 2 ∫₀^ω0 N (τ² + b²) / (τ² + a²) dτ

       = 2N [ ω0 + ((b² − a²) / a) tan⁻¹(ω0 / a) ]

An example PSD is shown in Figure 2.10 for the centerline target location and no external disturbances. High frequency data was collected at the full 67 Hz rate for 66 minutes (2^18 data points). Low frequency data was collected at a reduced 67/500 Hz rate with a 0.02 Hz anti-aliasing low pass filter for 13 hours (2^13 data points). For each data set, an adaptive window size frequency average was used with square windows.

[Figure 2.9 sketches the sensor noise PSD format: log spectral power density vs. log frequency, with level N, break frequencies a and b, and a -40 dB/dec slope between them.]

Figure 2.9 Sensor Noise PSD Format

[Figure 2.10 shows the measured sensor noise PSD: spectral power (mm²/Hz) vs. frequency (Hz), both on log axes spanning roughly 10⁻⁶ to 10².]

Figure 2.10 Sensor Noise PSD Hardware Test


Data was collected at a number of locations, both with and without external disturbances

(physical vibrations from the motor power supplies are transmitted through the sensor

mounting structure and are thought to have a possible effect on the position errors). The

results are given in Table 2.4.

Test #   Location   External Noise   N        a        b
1        0, 0       off              4.0e-5   < 1e-6   0.008
2        0, 0       on               5.2e-5   < 1e-6   0.024
3        350, 0     off              3.0e-5   < 1e-6   0.020

Table 2.4 Sensor Noise Parameters

Since the bandwidth of the compensator will almost certainly be well above the noise PSD pole frequency (b), and the zero is much less than the pole (a < b), it makes sense to think of the noise as consisting of two separate additive components.

cov(x) ≈ 2 N ω0 + π N b² / a        (19)

The first term contains high frequency noise. Its effect on the final covariance of the end

point position error is a direct function of the compensator bandwidth. The second term

contains only low frequency noise which will be passed by the compensator directly to the

end point position error covariance.

Assuming a simple PD compensator of bandwidth ƒ, the end point position error covariance due to the sensor noise is approximated by:

cov(x_err) ≈ π N ( ƒ + b² / a )        (20)
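The two-term split of eq. (19) can be checked numerically against the exact integral of eq. (18). The values below are the Test 1 entries of Table 2.4, with the unresolved zero frequency a (reported only as < 1e-6) taken at its bound:

```python
import math

# Compare eq. (18) (exact covariance integral) with the additive
# approximation of eq. (19). Parameters from Table 2.4, Test 1;
# a is ASSUMED at its reported upper bound of 1e-6.
N, a, b = 4.0e-5, 1e-6, 0.008

def cov_exact(w0):
    """Eq. (18): 2*N*[w0 + ((b^2 - a^2)/a) * atan(w0/a)]."""
    return 2.0 * N * (w0 + (b**2 - a**2) / a * math.atan(w0 / a))

def cov_approx(w0):
    """Eq. (19): high-frequency term plus low-frequency term."""
    return 2.0 * N * w0 + math.pi * N * b**2 / a
```

For any bandwidth ω0 well above b (so that atan(ω0/a) ≈ π/2), the two expressions agree closely, which is what justifies treating the noise as two separate additive components.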

Without an acceptable standard, it was difficult to measure the absolute (non-stochastic) position errors. The sensor documentation gives a specification of 2 mm RMS absolute accuracy at 0.80 meters for a 7 mm target. The errors were observed to be bounded by ±2 mm at a few random test points. There is no spatial frequency information.


x mean   N [no disturbance]   N [with disturbance]
(mm)     (mm²/Hz)             (mm²/Hz)
0        1.04e-5              1.72e-5
50       0.93e-5              0.95e-5
100      0.50e-5              0.64e-5
150      1.27e-5              1.14e-5
200      2.75e-5              1.96e-5
250      2.11e-5              3.67e-5
300      15.8e-5              4.08e-5
350      3.00e-5              7.26e-5
400      23.9e-5              53.6e-5

Table 2.5 Sensor Noise Levels


3. Design

The physical configuration of the hardware to be controlled gives rise to a situation

where simple classical techniques (e.g., PID control) are incapable of stabilizing the system

under end point feedback except for a small limited set of joint angles. In this section, we

describe the difficulties and propose some possible solutions.

3.1. Design Issues

In addition to typical feedback control system design issues (uncertain mass properties, non-zero sensor noise, limited control authority, high frequency flexibility, high frequency phase loss from the digital implementation, and unknown and possibly nonlinear joint friction), the use of the end point sensor for direct feedback gives rise to a number of special issues.

With the position sensor referenced to the end point location as opposed to the joint

angles, the output equations relating the plant state variables and sensor measurements

become nonlinear. (It is possible to linearize the output equations through a change of state

variables, but then the input equations become nonlinear, so nothing is gained.) This will

be shown to be a strict nonlinearity, which can not be removed by the assumption of either

small rates or small arm deformations. This becomes one of the driving issues for the

controller structure and is a major part of this design (see section 3.2).

Sensing at the arm end point and the control actuation at the joints, along with arm

flexibility, creates a “noncolocated” system. The finite time required for a displacement

wave produced by the control torques to reach the end point sensor location gives rise to

dynamics that can have a different zero structure than that of a colocated system.

As an example, the general form of the linearized transfer function from the torque


input to the position output (either rotational or translational) for a single link is given by:

X(s)/τ(s) = (1 / (J s²)) [ μ1 λ1 ω1² / (s² + 2ζ1 ω1 s + ω1²) + μ2 λ2 ω2² / (s² + 2ζ2 ω2 s + ω2²)
                           + … + μn λn ωn² / (s² + 2ζn ωn s + ωn²) ]

          = (1 / (J s²)) func(μ1, μ2, …, μn, λ1, λ2, …, λn)
            / [ (s² + 2ζ1 ω1 s + ω1²)(s² + 2ζ2 ω2 s + ω2²) ⋯ (s² + 2ζn ωn s + ωn²) ]        (21)

where μi is the modal amplitude for mode i at the input point and λi is the modal amplitude for mode i at the output point (i.e., the sensor location). If the output and input points are different (μi ≠ λi), then it is possible for the output modal magnitude to either have a different sign than the input modal magnitude, or to be zero. This can significantly change the transfer function numerator structure. In any real system, one or more modes will have an output out of phase with the input, and the modal product will be negative (μi λi < 0). This results in at least one transfer function zero with a positive real part. If the output location is a node of a particular mode, the modal magnitude will be zero (μi λi = 0). The result is a pair of complex zeros directly on top of two of the complex poles, removing them from the transfer function.
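The effect of a sign flip in the modal product μiλi can be demonstrated numerically. This is a two-mode sketch with illustrative numbers only (not the testbed's frequencies or damping); the rigid-body 1/(Js²) factor is omitted because it contributes only poles at the origin and does not change the zeros.

```python
import numpy as np

# Two-mode version of the modal sum in eq. (21): combine the second-order
# terms over a common denominator and inspect the numerator roots (the
# transfer function zeros).
def tf_zeros(mu_lam, wn, zeta):
    """Zeros of sum_i mu_i*lam_i*wn_i^2 / (s^2 + 2*z_i*wn_i*s + wn_i^2)."""
    dens = [np.array([1.0, 2 * z * w, w * w]) for w, z in zip(wn, zeta)]
    num = np.zeros(1)
    for i, ml in enumerate(mu_lam):
        term = np.array([ml * wn[i] ** 2])
        for j, d in enumerate(dens):
            if j != i:                     # multiply by the other denominators
                term = np.polymul(term, d)
        num = np.polyadd(num, term)
    return np.roots(num)

wn, zeta = [10.0, 40.0], [0.02, 0.02]
z_coloc = tf_zeros([1.0, 1.0], wn, zeta)      # same signs: colocated-like
z_noncoloc = tf_zeros([1.0, -0.5], wn, zeta)  # out-of-phase mode at output
```

With both modal products positive the zeros stay in the left half plane; flipping the sign of the second product moves a zero to a positive real part, as argued in the text.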

In the multi-input, multi-output manipulator case, when the outputs are limited to

the end point positions, the flexibility between the input joint torques and the output

translational position produces transmission zeros with positive real parts in the linearized

dynamics. This has two effects. It limits the maximum achievable controller bandwidth to

the lowest frequency of all the system transmission zeros with positive real parts. (This is

essentially a period that is less than the delay of the traveling displacement wave between

the actuator and the sensor.) It also reduces the ability of the compensator to stabilize the

plant (for a sufficiently high gain, any linear compensator will result in an unstable

system). Cannon et al. [reference 1] have shown that speed of response can be increased in the noncolocated case, but this requires sufficient knowledge of the lower flexible mode frequencies and mode shapes. In addition, greater arm deformations will result.

The exact placement of the end point position sensing can also greatly affect the ability of the compensator to control the flexible modes. If the end point sensing location is placed at a node, that mode becomes unobservable to the compensator. This difficulty is not often faced by a joint based control scheme, where the multiple measurements (one per


joint) are made at different locations. The probability that all joints are located at a flexible

mode node is small. This is not true for an end point sensor, where multiple measurements

(x,y,z axes) are made at a single location. The flexible manipulator testbed was observed

to have a mode (the second lowest flexible mode) where the end point measurement

location was very near a node.

[Figure 3.1: zero-pole map in the complex plane (real part vs. imaginary part); x = system poles, o = system transmission zeros.]

Figure 3.1 Zero-Pole Map - End Point Sensor

The sensor suite for the flexible manipulator testbed consists of not only the end

point position sensor, but also the joint angle encoders. It can be shown that the linearized

plant dynamics which uses both end point and joint angle measurements does not have the

positive real valued transmission zeros associated with the case of end point measurements

only. Therefore, if the joint measurement quantization mapped to the end point position is

comparable (not significantly more than an order of magnitude larger) to the end point

measurement noise, there is a real advantage to using both types of sensors.


[Figure 3.2: zero-pole map in the complex plane (real part vs. imaginary part); x = system poles, o = system transmission zeros.]

Figure 3.2 Zero-Pole Map - End Point and Joint Sensors

The objective of this analysis is, however, to examine the control of flexible

manipulator structures “to high accuracy” using end point feedback. Given the extremely

high accuracy possible with an optical linear position sensor, it is possible that absolute

control accuracy could be brought well below that of the mapping of the joint position

measurement quantization to the end point location. In that case, the joint angle

measurements do not provide any additional information. Note that this is not analogous to

a Kalman filter where any new measurement, no matter how noisy, can lower estimation

error covariances, because for the encoder quantization case, the sensor measurement

errors are not independent, additive, or Gaussian. For this reason, joint angle

measurements are not used in this feedback control system design. This does not invalidate

the use of a (nonlinear) compensation scheme based on derived joint rates to achieve a

greater level of global stability.

The DynaSight image processing and asynchronous update method result in time

delays of the sensor position measurements. Although they are small enough not to be

destabilizing to the controller in general, they do produce a phase loss that was shown to be

destabilizing to some of the higher frequency modes if not accounted for.


3.2. End Point Feedback Nonlinearities

The rigid body output equations for the end point location are given by:

x = L1 cos(θ1) + L2 cos(θ1 + θ2)
y = L1 sin(θ1) + L2 sin(θ1 + θ2)        (22)

If the full form flexible equations (using the systems mode representation modification of the assumed modes method) are written out (see reference [4]), the end point location is given by:

x = (L1 − d1) cos(θ1) − ( Σ_{i=1}^{n} ψ1i qi ) sin(θ1) + d1 cos( θ1 + Σ_{i=1}^{n} ψ′1i qi )
    + (L2 − d2) cos( θ1 + Σ_{i=1}^{n} ψ′1i qi + θ2 ) − ( Σ_{i=1}^{n} ψ2i qi ) sin( θ1 + Σ_{i=1}^{n} ψ′1i qi + θ2 )
    + d2 cos( θ1 + Σ_{i=1}^{n} ψ′1i qi + θ2 + Σ_{i=1}^{n} ψ′2i qi )        (23)

y = (L1 − d1) sin(θ1) + ( Σ_{i=1}^{n} ψ1i qi ) cos(θ1) + d1 sin( θ1 + Σ_{i=1}^{n} ψ′1i qi )
    + (L2 − d2) sin( θ1 + Σ_{i=1}^{n} ψ′1i qi + θ2 ) + ( Σ_{i=1}^{n} ψ2i qi ) cos( θ1 + Σ_{i=1}^{n} ψ′1i qi + θ2 )
    + d2 sin( θ1 + Σ_{i=1}^{n} ψ′1i qi + θ2 + Σ_{i=1}^{n} ψ′2i qi )

where

ψki     is the amplitude of mode shape i at the end of link k
ψ′ki    is the slope of mode shape i at the end of link k
qi(t)   is the time varying magnitude of mode i
Lk      is the length between joint axes
dk      is the distance from the end of the flexible link k to either the elbow joint (k = 1) or the end point (k = 2)

When the flexible mode deformations are assumed to be small, the equations reduce to:

x = L1 cos(θ1) − sin(θ1) Σ_{i=1}^{n} (ψ1i + d1 ψ′1i) qi
    + L2 cos(θ1 + θ2) − sin(θ1 + θ2) Σ_{i=1}^{n} (ψ2i + d2 ψ′2i + L2 ψ′1i) qi

  = L1 cos(θ1) + L2 cos(θ1 + θ2) − sin(θ1) Σ_{i=1}^{n} αi qi − sin(θ1 + θ2) Σ_{i=1}^{n} βi qi        (24)

y = L1 sin(θ1) + cos(θ1) Σ_{i=1}^{n} (ψ1i + d1 ψ′1i) qi
    + L2 sin(θ1 + θ2) + cos(θ1 + θ2) Σ_{i=1}^{n} (ψ2i + d2 ψ′2i + L2 ψ′1i) qi

  = L1 sin(θ1) + L2 sin(θ1 + θ2) + cos(θ1) Σ_{i=1}^{n} αi qi + cos(θ1 + θ2) Σ_{i=1}^{n} βi qi

with αi = ψ1i + d1 ψ′1i and βi = ψ2i + d2 ψ′2i + L2 ψ′1i.

However, the joint angles (θ1 and θ2) cannot be assumed to be small, and the state

to output equations will remain nonlinear. Note that the variation of the end point position

due to the flexible modes is 90˚ in phase ahead of the rigid body modes.

3.2.1. Effect of Nonlinearities

Because the output equations contain nonlinearities that have a major role in the

dynamics of the system, standard “robust” compensator design methodologies, which

assume a plant with linear dynamics, will not work well in this situation. Joint angle

variations can produce changes in the linearized version of the output equations with

infinite gain variation. As an example, the linearized output matrices for four nominal joint

angles are given below :

[X_ep  Y_ep]^T = C(θ1, θ2) [θ1  θ2  q1  q2  q3  q4]^T

C(0, π/2) =
  [ -0.6012  -0.6012  -0.3312  -0.4130   0.0156   0.0047 ]
  [  0.6325   0.0000   0.1950  -0.0952  -0.0023  -0.0007 ]

C(0, −π/2) =
  [  0.6012   0.6012   0.3312   0.4130  -0.0156  -0.0047 ]
  [  0.6325   0.0000   0.1950  -0.0952  -0.0023  -0.0007 ]

C(π/2, π/2) =
  [ -0.6325   0.0000  -0.1950   0.0952   0.0023   0.0007 ]
  [ -0.6012  -0.6012  -0.3312  -0.4130   0.0156   0.0047 ]

C(−π/2, π/2) =
  [  0.6325   0.0000   0.1950  -0.0952  -0.0023  -0.0007 ]
  [  0.6012   0.6012   0.3312   0.4130  -0.0156  -0.0047 ]

There are no design methods based on linear time invariant systems that are equipped to

handle such extreme gain variations (which actually are sign variations).

To overcome these nonlinearities, a nonlinear controller is needed. Possible

methods include gain scheduling, exact linearization, and transformation of end points to

hub angles. This analysis will consider the exact continuous linearization methods.

3.2.2. Exact Linearization Methods

The dynamic equations of motion in general form are given by:

M(θ) θ̈ + N(θ, θ̇) + G(θ) + K θ + R(θ) = T τ
x = H(θ)        (25)

where M(θ) is the system mass matrix, N(θ, θ̇) is a vector of Coriolis and centripetal terms that depend on the square of the velocities, G(θ) is a vector of gravity terms (which are zero in this case), K is a matrix of stiffness coefficients for the flexible modes, R(θ) is a vector of nonlinear joint torques (friction), T is an input force transformation matrix, and H(θ) is an output transformation function.

For the rigid body case, the equations of motion reduce to:

M(θ) θ̈ + N(θ, θ̇) + R(θ) = τ
x = H(θ)        (26)

If a new input to the system, f, is defined as

T τ = M(θ) J⁻¹(θ) f − M(θ) J⁻¹(θ) J̇(θ, θ̇) J⁻¹(θ) ẋ + N(θ, θ̇) + R(θ) + K θ        (27)

where J( ) is the system Jacobian matrix and f is the new control variable, then the dynamic

model between the new input and the output end point position is just a pair of double


integrators.

M θ̈ + N + R + Kθ = M J⁻¹ f − M J⁻¹ J̇ J⁻¹ ẋ + N + R + Kθ

M θ̈ = M J⁻¹ f − M J⁻¹ J̇ J⁻¹ ẋ

J θ̈ + J̇ J⁻¹ ẋ = f

J θ̈ + J̇ θ̇ = f   →   ẍ = f        (28)

However, construction of the real plant input, τ, from the new compensator output, f, is far from straightforward. The computations require knowledge of the current state vector for the flexible modes and the joint friction torques. Neither is directly measurable

without additional sensors. (Estimation may be possible using complex, nonlinear

estimation schemes and is currently under investigation.) With actuators only at the joints,

the input transformation matrix, T, is not square, and therefore not invertible.

If the flexible modes and joint friction are ignored, the simpler rigid body equations

of motion give a result that is computationally more tractable. In the paper “On Manipulator

Control by Exact Linearization” [reference 5] it is shown that all methods of exact

linearization (computed torque technique, resolved acceleration control, nonlinear

decoupling theory, operational space control, and geometric control theory) give identical

results. This simplified transformation inverts the nonlinearity associated with the output

transformation matrix, but only for frequencies below the first flexible mode.

τ = M J⁻¹ f − M J⁻¹ J̇ J⁻¹ ẋ + N(θ, θ̇)        (29)

If the rates are also assumed small, so that all terms with an angular rate squared can be dropped, then a further simplification results:

τ = M J⁻¹ f

[τ1]         [f_x]
[τ2] = M J⁻¹ [f_y]        (30)

For the two link flexible manipulator, this matrix is:

M J⁻¹ = [ p1 + p2 + 2p3   p2 + p3 ] · (1 / (L1 L2 sin(θ2))) · [   L2 cos(θ1 + θ2)                     L2 sin(θ1 + θ2)                 ]
        [    p2 + p3         p2   ]                           [ −(L1 cos(θ1) + L2 cos(θ1 + θ2))   −(L1 sin(θ1) + L2 sin(θ1 + θ2)) ]

with

p1 = m1 a1² + m2 L1² + I1
p2 = m2 a2² + I2
p3 = m2 L1 a2 cos(θ2)        (31)

where

mk   is the mass of link k
Lk   is the length of link k
ak   is the distance from the inboard joint to the c.g. of link k
Ik   is the inertia about the c.g. of link k

Because the mass matrix is always positive definite, the transformation matrix is

finite and nonsingular for all locations except at the singular points of the Jacobian matrix.

This singularity occurs at θ2 = 0 for the flexible manipulator testbed. This state is

one where the dynamic system is uncontrollable, so the difficulty is a property of the

dynamics, not of the linearization method. From here on, this situation will be ignored

(although “near singular” conditions can be important).
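As a concreteness check, the matrix of equation (31) can be evaluated numerically. The sketch below builds MJ⁻¹ from the mass matrix and the Jacobian; all parameter values are assumed for illustration, not the testbed's.

```python
import numpy as np

def mj_inverse(th1, th2, m1=1.0, m2=0.8, L1=0.5, L2=0.4,
               a1=0.25, a2=0.2, I1=0.02, I2=0.01):
    """Rigid body linearization matrix M(theta) J(theta)^-1 of equation (31)
    for the two link arm. Parameter values are illustrative. The matrix is
    singular at th2 = 0 (the Jacobian singularity)."""
    p1 = m1 * a1**2 + m2 * L1**2 + I1
    p2 = m2 * a2**2 + I2
    p3 = m2 * L1 * a2 * np.cos(th2)
    M = np.array([[p1 + p2 + 2 * p3, p2 + p3],
                  [p2 + p3,          p2]])
    s1, c1 = np.sin(th1), np.cos(th1)
    s12, c12 = np.sin(th1 + th2), np.cos(th1 + th2)
    # Jacobian of x = (L1 c1 + L2 c12, L1 s1 + L2 s12); det J = L1 L2 sin(th2)
    J = np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                  [ L1 * c1 + L2 * c12,  L2 * c12]])
    return M @ np.linalg.inv(J)
```

The entries grow without bound as θ2 approaches zero, consistent with the singularity discussed above.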

The same result can be obtained in a more intuitive manner, if we look at the

Jacobian equation.

\dot{x} = J(\theta)\,\dot{\theta} \qquad (32)

and make the following (incorrect) assumptions,

\ddot{x} = J(\theta)\,\ddot{\theta}, \qquad N(\theta, \dot{\theta}) \approx 0 \qquad (33)

then we get the same result.

\tau = MJ^{-1} f \qquad (34)

Figures 3.3 to 3.6 give four possible transfer functions (τ1,τ2 inputs to xep,yep

outputs) for the original flexible plant as θ2 varies across its range.


[Figure: Bode plot — magnitude (dB) and phase (deg) vs. frequency (rad/sec)]

Figure 3.3 Plant Shoulder Torque to X Position T.F.

[Figure: Bode plot — magnitude (dB) and phase (deg) vs. frequency (rad/sec)]

Figure 3.4 Plant Shoulder Torque to Y Position T.F.


[Figure: Bode plot — magnitude (dB) and phase (deg) vs. frequency (rad/sec)]

Figure 3.5 Plant Elbow Torque to X Position T.F.

[Figure: Bode plot — magnitude (dB) and phase (deg) vs. frequency (rad/sec)]

Figure 3.6 Plant Elbow Torque to Y Position T.F.


[Figure: Bode plot — magnitude (dB) and phase (deg) vs. frequency (rad/sec)]

Figure 3.7 Linearized X Axis Force to X Position T.F.

Figures 3.7 to 3.10 give the equivalent four transfer functions when the

transformation matrix is applied to the plant inputs (fx,fy inputs to xep, yep outputs). The

transformation matrix has removed the variations in the low frequency regions.

[Figure: Bode plot — magnitude (dB) and phase (deg) vs. frequency (rad/sec)]

Figure 3.8 Linearized X Axis Force to Y Position T.F.


[Figure: Bode plot — magnitude (dB) and phase (deg) vs. frequency (rad/sec)]

Figure 3.9 Linearized Y Axis Force to X Position T.F.

[Figure: Bode plot — magnitude (dB) and phase (deg) vs. frequency (rad/sec)]

Figure 3.10 Linearized Y Axis Force to Y Position T.F.


3.2.3. Feedback Linearization

The rigid body linearization matrix is a function of the joint angles, which need to

be known before the matrix can be constructed. An obvious choice is to get the joint angles

from the joint position encoders.

[Figure: block diagram — Xref and the measured end point X feed the compensator C(s); its output f passes through the MJ⁻¹ block to produce τ for the Plant; the measured joint angles θ drive the MJ⁻¹ block]

Figure 3.11 Feedback Linearization

Unfortunately, this arrangement does not result in a satisfactory response. Although the linearization is algebraically exact, the system parameters are not exactly known and the encoder measurements are not exact, so the linearization forms a feedback loop and is subject to all the same stability concerns. Still, it might be possible to include a dynamic compensator, as in a standard control loop.

[Figure: block diagram — as Figure 3.11, with a dynamic compensator B(s) added in the joint angle (θ) feedback path to the MJ⁻¹ block]

Figure 3.12 Compensated Feedback Linearization

A perturbation analysis around the joint angles will show that this loop contains a multiplicative nonlinearity involving the commanded control actuation forces, so that it is also nonlinear and position dependent.

If we assume that the joint angles consist of a fixed nominal value and a small

perturbation around the nominal, we can make the assumptions,


\sin\theta \approx \sin\theta_o + \cos(\theta_o)\,\Delta\theta, \qquad
\cos\theta \approx \cos\theta_o - \sin(\theta_o)\,\Delta\theta \qquad (35)
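A quick numerical sanity check of this first order expansion (the approximation error should be second order in ∆θ):

```python
import numpy as np

theta_o, dtheta = 0.7, 1e-3  # representative nominal angle and perturbation
sin_approx = np.sin(theta_o) + np.cos(theta_o) * dtheta
cos_approx = np.cos(theta_o) - np.sin(theta_o) * dtheta
# Taylor remainder bounds: residual magnitude is at most dtheta^2 / 2
assert abs(sin_approx - np.sin(theta_o + dtheta)) < 0.5 * dtheta**2
assert abs(cos_approx - np.cos(theta_o + dtheta)) < 0.5 * dtheta**2
```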

When these are applied to the rigid body linearization matrix, three terms appear in

the equation for the applied torque,

\tau_P = g_1(\theta_{1o}, \theta_{2o}) + g_2(\theta_{1o}, \theta_{2o}, \Delta\theta_1^2, \Delta\theta_2^2) + g_3(\theta_{1o}, \theta_{2o}, \Delta\theta_1, \Delta\theta_2) \qquad (36)

The g1 term is just the desired nominal torque from the commanded forces if angle

perturbations are ignored. The g2 term contains second order ∆θ² terms and is ignored. The g3 term is

linear in the angle perturbations and shows the effect of the linearization matrix

multiplication on the joint angle feedback loop.

g_3 = R_{\Delta\theta}(\theta_1, \theta_2)
\begin{bmatrix} \Delta\theta_1 \\ \Delta\theta_2 \end{bmatrix}
= \frac{f_x}{L_1 L_2 s\theta_2}
\begin{bmatrix}
L_1 s\theta_1 (p_2 + p_3 c\theta_2) - L_2 s\theta_{12} (p_1 + p_3 c\theta_2) &
p_3 s\theta_2 (L_1 c\theta_1 - L_2 c\theta_{12}) - L_2 s\theta_{12} (p_1 + p_3 c\theta_2) \\
L_1 p_2 s\theta_1 - L_2 p_3 c\theta_2 s\theta_{12} &
- L_2 p_3 (s\theta_2 c\theta_{12} + c\theta_2 s\theta_{12})
\end{bmatrix}
\begin{bmatrix} \Delta\theta_1 \\ \Delta\theta_2 \end{bmatrix}

+ \frac{f_y}{L_1 L_2 s\theta_2}
\begin{bmatrix}
L_2 c\theta_{12} (p_1 + p_3 c\theta_2) - L_1 c\theta_1 (p_2 + p_3 c\theta_2) &
p_3 s\theta_2 (L_1 s\theta_1 - L_2 s\theta_{12}) + L_2 c\theta_{12} (p_1 + p_3 c\theta_2) \\
L_2 p_3 c\theta_2 c\theta_{12} - L_1 p_2 c\theta_1 &
L_2 p_3 (c\theta_2 c\theta_{12} - s\theta_2 s\theta_{12})
\end{bmatrix}
\begin{bmatrix} \Delta\theta_1 \\ \Delta\theta_2 \end{bmatrix}
\qquad (37)

where

s\theta_i = \sin(\theta_i), \quad c\theta_i = \cos(\theta_i), \quad s\theta_{ij} = \sin(\theta_i + \theta_j), \quad c\theta_{ij} = \cos(\theta_i + \theta_j) \qquad (38)

and, in equation (37) and in the R_F(\theta_1, \theta_2) matrix of section 3.2.4, p_3 denotes m_2 L_1 a_2 with its \cos\theta_2 factor written out explicitly.

Because this matrix, R∆θ, depends on the commanded force, a joint angle feedback dynamic compensator cannot be designed independently of the end point feedback compensation loop. In addition, it has the same type of nonlinearities that the matrix transformation was meant to remove.
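The force dependence of this multiplicative nonlinearity can be verified numerically: the finite difference sensitivity of τ = MJ⁻¹(θ)f to the joint angles scales linearly with the commanded force. The sketch below uses assumed arm parameters and hypothetical helper names:

```python
import numpy as np

def mj_inv(th1, th2):
    # Illustrative (assumed) two link arm parameters, not the testbed's.
    m1, m2, L1, L2, a1, a2, I1, I2 = 1.0, 0.8, 0.5, 0.4, 0.25, 0.2, 0.02, 0.01
    p1 = m1 * a1**2 + m2 * L1**2 + I1
    p2 = m2 * a2**2 + I2
    p3 = m2 * L1 * a2 * np.cos(th2)
    M = np.array([[p1 + p2 + 2 * p3, p2 + p3], [p2 + p3, p2]])
    J = np.array([[-L1 * np.sin(th1) - L2 * np.sin(th1 + th2), -L2 * np.sin(th1 + th2)],
                  [ L1 * np.cos(th1) + L2 * np.cos(th1 + th2),  L2 * np.cos(th1 + th2)]])
    return M @ np.linalg.inv(J)

def torque_sensitivity(th, f, eps=1e-6):
    """Finite difference Jacobian of tau = MJ^-1(theta) f with respect to
    theta: a numerical stand in for the R_dtheta matrix of equation (37)."""
    base = mj_inv(*th) @ f
    cols = []
    for i in range(2):
        thp = th.copy()
        thp[i] += eps
        cols.append((mj_inv(*thp) @ f - base) / eps)
    return np.column_stack(cols)

th = np.array([0.3, 1.2])
f = np.array([1.0, 0.5])
R1 = torque_sensitivity(th, f)
R2 = torque_sensitivity(th, 2 * f)
# The sensitivity doubles when the commanded force doubles: the loop
# nonlinearity is multiplicative in f, as the perturbation analysis shows.
assert np.allclose(R2, 2 * R1, rtol=1e-6)
```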

3.2.4. Feedforward Linearization

Although the linearization matrix can be constructed using measured joint angles, it

can also be constructed from the input reference signal. By assuming no flexible modal


deformations, the commanded end point can be transformed to the equivalent joint angles.

This alleviates the problems associated with the feedback loop.

[Figure: block diagram — Xref passes through the inverse kinematics to give θest, which drives the MJ⁻¹ block; Xref and the measured end point X feed C(s), whose output f passes through MJ⁻¹ to produce τ for the Plant]

Figure 3.13 Feedforward Linearization

The transformation from the commanded end point position to the equivalent joint

angles is through the arm inverse kinematics.

\theta_1 = \tan^{-1}(y, x) - \cos^{-1}\!\left[\frac{L_1^2 + (x^2 + y^2) - L_2^2}{2 L_1 \sqrt{x^2 + y^2}}\right]

\theta_2 = \pi - \cos^{-1}\!\left[\frac{L_1^2 + L_2^2 - (x^2 + y^2)}{2 L_1 L_2}\right]
\qquad (39)
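A sketch of equation (39), with a rigid body forward kinematics round trip as a check (link lengths are illustrative values):

```python
import math

def inverse_kinematics(x, y, L1, L2):
    """Joint angles for end point (x, y), following equation (39)."""
    r2 = x * x + y * y
    th1 = math.atan2(y, x) - math.acos(
        (L1 * L1 + r2 - L2 * L2) / (2.0 * L1 * math.sqrt(r2)))
    th2 = math.pi - math.acos(
        (L1 * L1 + L2 * L2 - r2) / (2.0 * L1 * L2))
    return th1, th2

def forward_kinematics(th1, th2, L1, L2):
    # Rigid body forward kinematics, used here only as a round trip check.
    return (L1 * math.cos(th1) + L2 * math.cos(th1 + th2),
            L1 * math.sin(th1) + L2 * math.sin(th1 + th2))
```

The commanded point must lie inside the annular workspace |L1 − L2| ≤ √(x² + y²) ≤ L1 + L2, otherwise the arccosines are undefined.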

The main drawback of this method is the limited capture region (where control errors are small enough that the system is stable) of the controller. The torque errors produced by the difference between the actual and commanded joint angles are proportional to ∆θ, so they have a negligible effect on system stability only if the control errors are small. Because of this limitation, the controller structure requires some type of slew commander, so that transitions are smooth and errors are guaranteed to be small. Large step changes in the reference input can result in an unstable system. See appendices 2.2 and 3.4 for the slew commander equations.

The size of the capture region can be estimated from the same perturbation analysis

as was done in the joint angle feedback method. By assuming that there is a difference

between the command and the plant state and then expanding the input equations about the

plant state (the plant state is θo and the command is θo+∆θ), the linearized plant input

perturbation is given by,


B_p \approx B_N + \Delta B_N = B\,MJ^{-1} + B\,R_F(\theta_1, \theta_2) \qquad (40)

where the linearized plant dynamics are

\dot{z} = Az + B\tau = Az + B_N f, \qquad x = Cz \qquad (41)

and

R_F(\theta_1, \theta_2) =
\frac{\Delta\theta_1}{L_1 L_2 s\theta_2}
\begin{bmatrix}
L_1 s\theta_1 (p_2 + p_3 c\theta_2) - L_2 s\theta_{12} (p_1 + p_3 c\theta_2) &
L_2 c\theta_{12} (p_1 + p_3 c\theta_2) - L_1 c\theta_1 (p_2 + p_3 c\theta_2) \\
L_1 p_2 s\theta_1 - L_2 p_3 c\theta_2 s\theta_{12} &
L_2 p_3 c\theta_2 c\theta_{12} - L_1 p_2 c\theta_1
\end{bmatrix}

+ \frac{\Delta\theta_2}{L_1 L_2 s\theta_2}
\begin{bmatrix}
p_3 s\theta_2 (L_1 c\theta_1 - L_2 c\theta_{12}) - L_2 s\theta_{12} (p_1 + p_3 c\theta_2) &
p_3 s\theta_2 (L_1 s\theta_1 - L_2 s\theta_{12}) + L_2 c\theta_{12} (p_1 + p_3 c\theta_2) \\
- L_2 p_3 (s\theta_2 c\theta_{12} + c\theta_2 s\theta_{12}) &
L_2 p_3 (c\theta_2 c\theta_{12} - s\theta_2 s\theta_{12})
\end{bmatrix}

This implies that the size of the capture region can be maximized by a design

methodology that maximizes the controller robustness to an additive structured uncertainty.

3.3. Controller Design

3.3.1. Control Structure

The controller for the end point feedback system has the following structure:

[Figure: block diagram — Xtarget passes through the slew command block to give Xref; the end point measurement passes through sensor compensation to give Xest; inverse kinematics gives θest for the MJ⁻¹ block; the compensator ż = A_z z + B_r X_ref + B_e X_est, f = C z + D_r X_ref + D_e X_est produces f, which passes through MJ⁻¹ and the torque compensation to the Plant]

Figure 3.14 Control Structure

The slew commander generates algebraically consistent 2-D positions, rates, and


accelerations under maximum single axis rate and acceleration constraints. The sensor

compensation transforms the end point position data from local sensor to table coordinates.

The inverse kinematics computes the equivalent joint angles from the end point location

based upon the rigid body kinematics. The compensator is a general state space digital

dynamic filter. The linearization block transforms the control forces to the joint torques

based on the rigid body linearization matrix. The actuator compensation converts the

commanded control actuation from torques in Newton-meters to the appropriate voltages.
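The slew commander equations themselves are in appendix 2.2; a minimal single axis sketch of a rate and acceleration limited (trapezoidal rate) profile, with all names and limit values illustrative rather than the thesis implementation, might look like:

```python
import math

def slew_profile(x0, x1, v_max, a_max, dt):
    """Position/rate/acceleration samples for a rest to rest move under
    maximum rate and acceleration constraints (trapezoidal rate profile).
    Illustrative sketch only."""
    d = x1 - x0
    s = 1.0 if d >= 0 else -1.0
    d = abs(d)
    t_a = v_max / a_max                 # time to reach the rate limit
    if a_max * t_a * t_a >= d:          # short move: triangular profile
        t_a = math.sqrt(d / a_max)
        t_c = 0.0
    else:                               # long move: cruise at v_max
        t_c = (d - a_max * t_a * t_a) / v_max
    T = 2 * t_a + t_c
    out, t = [], 0.0
    while t <= T + 1e-12:
        if t < t_a:                     # accelerate
            a, v, p = a_max, a_max * t, 0.5 * a_max * t * t
        elif t < t_a + t_c:             # cruise
            a, v = 0.0, v_max
            p = 0.5 * a_max * t_a * t_a + v_max * (t - t_a)
        else:                           # decelerate
            td = T - t
            a, v = -a_max, a_max * td
            p = d - 0.5 * a_max * td * td
        out.append((x0 + s * p, s * v, s * a))
        t += dt
    return out
```

By construction the samples are algebraically consistent (the rate is the derivative of the position profile) and never exceed the commanded limits.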

3.3.2. Compensator Design

The plant seen by the compensator is, for the most part, linear and time invariant.

Any number of control methodologies are then applicable. The most relevant feature is

now the arm flexibility, both the noncolocation zeros and the variations in the modal output

over the different nominal joint angles. Possible control methodologies include:

Classical Single Loop - has the advantage of being able to deal directly with any type of uncertainty due to the flexible modes. Its disadvantage is the difficulty of dealing with the cross axis uncertainty (which must still exist, since the decoupling is not real, but artificially enforced; see reference [8]).

LQG - has the advantage of being able to compensate for cross axis coupling, and it offers guaranteed nominal stability. Its disadvantage is the difficulty of correcting for uncertainty in the higher frequency flexible modes.

SANDY - (reference [12]) is perhaps the most general format. Its main advantage

here is its ability to handle the uncertainty in the flexible modes based on the different

operating points.

For demonstration purposes, both a classical and an LQG design were constructed.

The classical design attempts to create a compensator that is stable without any knowledge

of the flexible mode shapes or frequencies, except for a lower bound on the frequencies.

The LQG design is constructed assuming a reasonable knowledge of the two lowest

frequency modes and a limited knowledge of the frequencies of the next two modes.


Classical Compensator Design:

The compensator is based on a Bode loop shaping design (a filter giving a system loop gain function that avoids the instability point with appropriate distances in the gain and phase dimensions) in the continuous frequency domain, which was later transformed to the digital domain. The compensator filter provides a zero frequency integrator pole to give zero steady state error to constant disturbances, a series of lower frequency zeros to provide phase stabilization, and a number of higher frequency poles to reduce the feedback gain at the uncertain flexible mode locations.

The continuous pole zero structure is given by

C(s) = \frac{8.4\times 10^{4}\,(s+2)(s^{2}+0.6s+0.1)(s^{2}+2s+120)}{s\,(s+60)(s+40)(s+8)(s+6)(s^{2}+15s+120)} \qquad (42)

[Figure: Bode plot — magnitude (dB) and phase (deg) vs. frequency (rad/sec)]

Figure 3.15 Classical Compensator Loop Transfer Function


The resulting digital transfer function (through the use of a Tustin transformation

approximation) is given as,

C(z) = \frac{4.31z^{7} + 11.9z^{6} + 2.78z^{5} + 20.1z^{4} - 18.5z^{3} - 4.40z^{2} + 11.4z - 3.76}{z^{7} - 4.48z^{6} + 8.27z^{5} - 8.04z^{4} + 4.35z^{3} - 1.25z^{2} + 0.160z - 0.00553} \qquad (43)
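The Tustin (bilinear) discretization can be reproduced with standard tools. The sketch below builds the continuous compensator of equation (42) from its factors and discretizes it; the sample period used in the thesis is not restated here, so the dt value is an assumption.

```python
import numpy as np
from scipy.signal import cont2discrete

# Continuous compensator of equation (42), assembled from its factors
num = 8.4e4 * np.polymul(np.polymul([1, 2], [1, 0.6, 0.1]), [1, 2, 120])
den = np.polymul(np.polymul(np.polymul([1, 0], [1, 60]),
                            np.polymul([1, 40], [1, 8])),
                 np.polymul([1, 6], [1, 15, 120]))

dt = 0.02  # assumed sample period, for illustration only
numd, dend, _ = cont2discrete((num, den), dt, method='bilinear')
```

The bilinear map sends the continuous integrator pole at s = 0 exactly to a discrete pole at z = 1, which preserves the zero steady state error property of the design.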

The resulting single loop stability margin frequency plots are shown in figures 3.15 and

3.16.

[Figure: Nichols plot — magnitude (dB) vs. phase (deg)]

Figure 3.16 Classical Compensator Nichols Plot

The resulting input-output frequency response between the commanded and actual end

point for the x axis is given in figure 3.17 (the y axis is similar).


[Figure: magnitude (dB) vs. frequency (rad/sec)]

Figure 3.17 Classical Compensator I/O Magnitude Response

[Figure: phase (deg) vs. frequency (rad/sec)]

Figure 3.18 Classical Compensator I/O Phase Response


In figure 3.17 the lower frequency flexible modes of the structure are seen to cross

the unity gain threshold in the closed loop input-output frequency response. This is a direct

result of the design strategy of the compensator. The system bandwidth was raised to

nearly as high a frequency as possible, without any direct compensation of the flexible

frequencies. This is presented as a demonstration of a design for a system where no

information on the flexible modes of the system dynamics is available apart from a lower

limit on the modal frequencies. This can be contrasted with the LQG design, to follow,

which directly compensates for the flexible modes of the system dynamics, based on an

accurate dynamics model, and produces a much smoother closed loop input-output

frequency response.

Monte Carlo analysis of the classical compensator has shown that it is stable for all

nominal joint angles.

Linear Quadratic Gaussian Compensator Design:

The LQG compensator design methodology is model based. It is used here as an

example of an attempt to actively control the manipulator flexible modes. Unfortunately, as

shown in section 3.2, the spatial variation of the flexible modes is 90° out of phase with the

rigid body modes. This makes control difficult. The methodology also suffers from

limitations that required a slight modification in the problem formulation.

The design model for the LQR and LQE sections of the LQG design was the linearized version of the plant dynamics at a nominal θ1 = 0°, θ2 = 72°. It was converted to a discrete time model using a standard matrix exponential method under the assumption of constant inputs over the sample period. To account for the computation time of the end point sensor hardware and the control hardware, a two sample delay was added to the model (see the Matlab™ c2dt function). The first set of control designs lacked the time delay and, consequently, stabilized the plant but, in general, did a poor job of stabilizing the second flexible mode. The destabilizing effect manifested itself in the second mode because it was the mode closest to the Nyquist frequency that did not have the


gain attenuation that was applied to the higher frequency modes.
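Appending a pure input delay to a discrete model can be done by augmenting the state with the past inputs. The minimal sketch below uses generic matrices, not the testbed model; note that the augmented transition matrix is singular, which is exactly why a Hamiltonian based LQR solver fails on it, as described later in this section.

```python
import numpy as np

def add_input_delay(A, B, n_delay):
    """Augment x[k+1] = A x[k] + B u[k] so the plant sees the input
    delayed by n_delay samples (a shift register of stored inputs)."""
    n, m = B.shape
    Aa = np.zeros((n + n_delay * m, n + n_delay * m))
    Ba = np.zeros((n + n_delay * m, m))
    Aa[:n, :n] = A
    Aa[:n, n:n + m] = B              # the oldest stored input enters the plant
    for k in range(n_delay - 1):     # shift register for the delayed inputs
        Aa[n + k * m: n + (k + 1) * m,
           n + (k + 1) * m: n + (k + 2) * m] = np.eye(m)
    Ba[n + (n_delay - 1) * m:, :] = np.eye(m)
    return Aa, Ba
```

Simulating the augmented model reproduces the undelayed response shifted by n_delay samples.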

The LQG methodology assumes that the complete set of all information known

about the plant dynamics is contained in the design model. In general, this is not true.

Dynamic models are often more uncertain in the higher frequency modes and completely

unreliable above the last modeled mode. Experimental tests have shown this to be true for

the flexible manipulator plant model. To account for this uncertainty in the LQR design,

two steps were taken. First, additional outputs were constructed based on the plant

eigenvectors. For the flexible modes, the two eigenvectors were added to cancel the

imaginary parts. Differing amounts of control were then applied to each individual mode,

which would not be possible if the end point positions were weighted directly. The

different weighting in the LQR design was based on assumed relative uncertainties for the

modes. Secondly, a pair of first order low pass filters were added to the design model

input (and resulting compensator output) in order to force a high frequency roll-off to

reduce feedback at higher frequencies. This forces a low feedback gain at high frequencies

where the plant model contains no information about the dynamics. The break point was

set immediately above the second flexible mode frequency.

A matrix transformation from the commanded end point positions to a system

modal vector was added to map from the command to the regulator inputs. In the full

nonlinear world, this matrix consists of the model Jacobian matrix (with additional zeros

for the flexible modes). However, since the estimator state output is based on the

linearized plant at a single design point, only the static input matrix was used.

In the LQR design, the zero frequency eigenvalues were weighted to achieve a 2 Hz

bandwidth. The flexible mode eigenvalues were marginally weighted to increase damping,

but still allow for a certain amount of uncertainty. Increased weighting on the flexible

modes, particularly the first mode, decreased the amplitude of the arm deformation under a

step input, but also decreased the stable region of the compensator.

In the LQE design, the sensor noise weights were set at the nominal DynaSight

RMS positional errors. The plant input noise weights were set to correspond to 20% of the

nominal commanded torque for a 0.1m step input command. Counter to intuition, an


increase of the plant model input noise weights, which should signal a plant model that cannot be trusted, decreased the allowable variation in the nominal joint angles before compensator instability.

With the pure time delay in the design model, the Matlab™ digital LQR solution,

which uses eigen decomposition of the system Hamiltonian matrix, could not proceed due

to the singular digital plant model transition matrix. An LQR solution function was written

that allows for a singular plant state transition matrix. It is based on the time varying, backwards iteration solution of the discrete, finite time horizon Kalman solution. The

steady state solution is approximated as the iterative solution when the 2-norm of the LQR

gain matrix changes by less than 1e-7 percent over ten iterations. See appendix 2.4.

[Figure: Bode plot — magnitude (dB) and phase (degrees) vs. frequency (rad/sec)]

Figure 3.19 LQG Compensator Loop Transfer Function

The final compensator design x axis open loop gain and closed loop I/O frequency

response plots are given in figures 3.19 through 3.21. The y axis plots are similar.


[Figure: Nichols plot — magnitude (dB) vs. phase (degrees)]

Figure 3.20 LQG Compensator Nichols Plot

[Figure: magnitude (dB) vs. frequency (rad/sec)]

Figure 3.21 LQG Compensator I/O Frequency Response

Monte Carlo analysis of the LQG compensator has shown that it is stable for all

nominal joint angles where cos(θ1+θ2) = constant. See figure 3.22. This implies that the


variation in the flexible modes over the work space is causing the instability. This

conclusion is supported by the fact that for all nominal θ2 design points, instability for off

nominal elbow angles is always in one of the flexible modes (not always the same one).
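The grid scan behind a stability map like figure 3.22 can be sketched as follows; the closed loop matrix here is a made up stand in whose stability happens to vary with θ2, not the testbed dynamics.

```python
import numpy as np

def closed_loop_A(th1, th2):
    # Hypothetical discrete closed loop matrix that varies with the
    # nominal joint angles; illustrative only.
    d = 0.7 + 0.4 * np.cos(th2)
    return np.array([[d, 0.1 * np.sin(th1 + th2)],
                     [-0.1, 0.6]])

results = []
for th1 in np.linspace(-np.pi, np.pi, 9):
    for th2 in np.linspace(-np.pi, np.pi, 9):
        # stable iff the spectral radius is inside the unit circle
        rho = np.max(np.abs(np.linalg.eigvals(closed_loop_A(th1, th2))))
        results.append((th1, th2, rho < 1.0))   # True = stable point ('o')
```

Plotting the stable/unstable flags over the (θ1, θ2) grid gives the kind of map shown in the figure.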

[Figure: stability map over shoulder angle (rad) and elbow angle (rad); o = stable point, x = unstable point]

Figure 3.22 LQG Stability Regions


4. Hardware Test Results

The controllers were tested using both a commanded slew and a set of disturbance

rejection tests. The commanded slew sweeps across the usable work space with a

maximum rate of 0.2 m/s and a maximum acceleration of 0.03 m/s/s. The response is

meant to show both performance of the compensator and the stability over the nominal joint

angles. The disturbance rejection tests were performed by applying a small impulsive force

to the tip in either the x or y axes. Absolute pointing and pointing stability errors were not

tested since a “truth” position was not available to compare against.

4.1. Classical Compensator Results

Results for the classical compensator are given below.

[Figure: four panels vs. time (seconds) — X Position (m), Y Position (m), Joint Angles (deg), Joint Torques (Nm)]

Figure 4.1 Classical Compensator Response to Slew Command


[Figure: End Point Positions (m) vs. time (seconds) — X position and Y position traces]

Figure 4.2 Classical Compensator X Axis Disturbance Response

[Figure: End Point Positions (m) vs. time (seconds) — X position and Y position traces]

Figure 4.3 Classical Compensator Y Axis Disturbance Response


From these results, it can be seen that :

* The peak in the closed loop frequency response manifests itself as a small amount of

oscillation in the slew response. This is a direct result of pushing the bandwidth of the

compensator as high as possible without direct compensation of the first flexible mode.

A smoother frequency response is only possible with a reduction in either the compensator bandwidth or the stability margins.

* The phase lag during the slew maneuver is due to the compensator bandwidth. This

could be solved with an acceleration feedforward term. Such a term was left out of the

compensator because of the large excitation of the flexible modes which resulted.

Either a higher order slew function (such as a standard 5th order trajectory generator) or

a filter on the acceleration could be used to reduce this effect.

* There are a number of rather slow poles in the closed loop system. They are a result of

the integrator poles added to reject constant magnitude disturbances. The slow poles

can be speeded up by either a reduction in the overall compensator bandwidth or by

removal of the compensator integral terms. The integrator terms in the compensator are

considered important because of the residual magnetism in the actuators which results

in steady state errors for state feedback based filters (LQG, PD).

* The Y impulse disturbance rejection test shows that there is still a reasonable amount of

coupling between the two axes. This coupling is much less apparent in the X direction

impulse disturbance rejection plots.

* The classical compensator was seen (during other tests) to be stable for θ2 angles out to 35°. Since this angle is also just outside the linear region of the actuator, it is unclear

whether it is the compensator or the actuator that is causing the instability. The system

is stable for all testable θ1 angles.

4.2. LQG Compensator Results

The results for the LQG compensator design are given in figures 4.4 through 4.7.


[Figure: position (m) vs. time (sec) — X axis position and Y axis position]

Figure 4.4 LQG Compensator Response to Slew Command

[Figure: four panels vs. time (sec) — shoulder angle (rad), elbow angle (rad), elbow joint torque (Nm), shoulder joint torque (Nm)]

Figure 4.5 LQG Compensator Commanded Slew (cont.)


[Figure: two panels, position (m) vs. time (sec) — X axis and Y axis]

Figure 4.6 LQG Compensator X Axis Disturbance Response

[Figure: two panels, position (m) vs. time (sec) — X axis and Y axis]

Figure 4.7 LQG Compensator Y Axis Disturbance Response


The following observations can be made from these results.

* There is a steady state position error for constant commands. The offsets are a result of

the residual magnetism in the actuators coupled with the lack of position error

integration terms in the LQG compensator. Position error compensation can be added

to LQG designs through augmentation of the plant model (reference 6, pg. 289 - 294).

* The controller response during a command with non-zero rates is far from optimal.

This behavior is most likely the result of the design procedure based on regulation. The

commanded end point position is converted to an equivalent stable plant state

command. The definition of stable in this context includes zero rates. An adaptation of the input transformation to include non-zero rate commands should alleviate the problem.

* The LQG compensator exhibits less separation of axes than the classical design.

Examination of Figure 4.4 shows a Y axis slew command resulting in large X axis

motion. It is unknown if the addition of non-zero rate command inputs would improve

the behavior.

* The slew response of the LQG compensator exhibits less overshoot than that of the classical compensator. These results are consistent with the smoother closed loop frequency response.

* The response to an impulse input disturbance is no faster than the classical design.

Once again, there appears to be more cross axis coupling.

4.3. Joint Based Control Comparison

The motivation for direct end point feedback control of flexible manipulators was the desire to reduce the errors in the final end point position that a joint based control structure cannot compensate for. To illustrate these errors, a joint based controller was run on the

identical hardware (except for the substitution of stiffer links). The controller was designed

as part of a research effort in hybrid force-position control of the manipulator end point (see


reference [10]). The end point position of the manipulator was measured using the end

point sensor, but the measurements were not used for position control.

A 20° slew in both axes was performed on the two link manipulator. The end point

position and joint angle measurement data was recorded. A reduced sensor parameter

calibration was performed to calculate the new L1, L2, ε1, and ε2 parameters. The end

point position was calculated based on both the end point sensor and the joint angles with

the forward rigid body kinematics. Figures 4.8 and 4.9 show the resulting end point

positions and the errors between the two position calculations.
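The rigid body prediction used in this comparison can be sketched as follows; the angles, link lengths, and sensed position here are illustrative values, not the recorded data.

```python
import numpy as np

def rigid_end_point(th1, th2, L1, L2):
    """End point predicted from joint angles assuming rigid links; the
    flexible link deflection this ignores is the source of the error
    discussed below."""
    return np.array([L1 * np.cos(th1) + L2 * np.cos(th1 + th2),
                     L1 * np.sin(th1) + L2 * np.sin(th1 + th2)])

L1, L2 = 0.5, 0.4
predicted = rigid_end_point(0.3, 1.2, L1, L2)
# A made up sensed end point; the difference is the kinematic error
sensed = np.array([0.51, 0.55])
error = sensed - predicted
```

Evaluating this along the recorded joint angle trajectory and subtracting the end point sensor measurement gives error traces like those of figure 4.9.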

[Figure: position (m) vs. time (sec)]

Figure 4.8 Joint Based Comparison End Point Positions

These tests show an end point error of approximately 5% of the slew magnitude

that results from the assumption of rigid body kinematics. These errors could be reduced

through the use of a dynamic estimator; however, they could not be completely eliminated.

The errors would have been much larger for the more flexible links.


[Figure: two panels vs. time (sec) — X axis error (m) and Y axis error (m)]

Figure 4.9 Joint Based Comparison End Point Errors


5. Further Work

The following are possible areas where further analysis could be done :

* A compensator based on classical techniques that assumes little knowledge of the

flexible modes is shown. One possible compensator based on LQR techniques that

directly controls the flexible modes through an assumed knowledge of their frequencies

and eigenvectors is shown. Many other possible techniques that attempt to control the

flexible modes, each based on different assumptions of the level of knowledge of the

system properties and uncertainties, can be examined. These include H∞, direct

parameter optimization with multiple plants, linear quadratic worst case optimization

(LQW, a quadratic cost function optimization method similar to LQR that defines the

optimal state feedback control gain and the coupled worst case disturbance, reference

[13]), and various fuzzy and neural techniques.

* The control structure as presented is stable for only a limited capture region around the

commanded end point. The linearized perturbation of the plant dynamics for end points

away from the commanded position is given. The actual size of the stable region for

various compensators has not yet been calculated. Compensator design techniques that

seek to maximize this region could be employed.

* The control structure and compensator designs presented have assumed a single

position sensor that tracks the manipulator end point. The advantages of a second,

possibly redundant, position sensor have not been examined. The effects of placing the

second target at various locations (the elbow joint being an obvious choice) have not

been examined.

* The plant dynamics model linearization assumed both the existence of no flexible

modes (therefore K = 0) and that the nonlinear terms were zero. All of the nonlinear

terms in this model formulation took the form of squared joint angular rates. Direct

compensation of these rate squared terms is possible to guarantee stability for any

magnitude joint rates (not only small rates as assumed for the given controller

formulation). A simple direct cancellation of the nonlinear terms through joint angle

rate feedback (as typically found in literature on exact linearization on manipulators)

could be applied. Alternatives include both a nonlinear feedback that is in effect only


for large joint rates and a Lyapunov based approach that defines a simple, low order

joint rate feedback that guarantees global stability for all magnitudes of joint angle rate.

* How the uncertainties in the parameter values for the computed feedforward rigid body

linearization matrix affect the stability of the compensator has not been fully examined.

* How the properties of the end point position sensor affect the final performance of the

end point position placement has only been given a simplistic treatment.

* The application of a joint angle rate feedback scheme has not been examined. Because of the low quality of the joint angle position sensors, such a scheme is not meant to improve end point placement performance, but rather to guarantee stability of modes which may be unobservable to the end point position sensor.


6. Conclusions

The difficulties with the use of an end point position sensor on mechanical

manipulators with non-cartesian geometries are examined, with focus on the sensor-actuator

noncolocation and the nonlinear output function. The various forms of the dynamics

linearization through input transformation functions, both exact and approximate, are

given. A simple (and therefore computationally inexpensive) form that linearizes only the

rigid body modes is presented and the full form of the matrix for a two link planar

manipulator is given. It has been shown that simple application of the linearization matrix

to the plant inputs with joint angle feedback is unstable. A feedforward scheme that is

stable for a limited region around the input command is presented.

An end point position sensor has been integrated into the two link planar flexible

manipulator test bed in the Control Systems Laboratory at the University of Washington.

The geometric compensation equations for the end point sensor are given and the parameter

calibration techniques are presented to give the transformation from sensor to table top

coordinates. The data quality and performance of the chosen sensor are reported. Use of

the end point sensor in the existing control hardware is described.

A simple set of linear controllers is presented. They were implemented on the two

link flexible manipulator hardware and shown to have reasonable performance for a large

range of nominal joint angles. Instability is shown to exist near the manipulator

geometric configurations at which the Jacobian is singular, as was expected from the linearization

mathematics.


REFERENCES

1. “Initial Experiments on End-Point Control of a Flexible One-Link Robot”, Robert H. Cannon Jr. and Eric Schmitz, The International Journal of Robotics Research, vol. 3, no. 3, Fall 1984, pp. 62-75.

2. “The UWCSL Two-Link Manipulator Facility: System Description”, Steve Evers, Internal Report, University of Washington, Seattle, WA, 1994.

3. “Development of a Mathematical Model for the Control of Flexible Electromechanical Systems Test Bed”, V. Lertpiriyasuwat, M.S. Thesis, University of Washington, Seattle, WA, 1994.

4. “Experiments in Modeling and End-Point Control of Two-Link Flexible Manipulators”, Celia Oakley, Ph.D. Dissertation, Stanford University, 1991.

5. “On Manipulator Control by Exact Linearization”, Kenneth Kreutz, IEEE Transactions on Automatic Control, vol. 34, no. 7, July 1989, pp. 763-767.

6. “Digital Control of Dynamic Systems”, 2nd Ed., Franklin, Powell, and Workman, 1980, 1992, pp. 422-439.

7. “The DynaSight Sensor™: Developer Manual”, ver. 1.1, Origin Instruments Corporation, 854 Greenview Drive, Grand Prairie, TX, 1993.

8. “Linear Robust Control”, Michael Green and David Limebeer, 1995, pp. 23-26.

9. “Discrete Time Noncolocated Control of Flexible Mechanical Systems Using Time Delay”, M. S. Wang and B. Yang, Journal of Dynamic Systems, Measurement, and Control, vol. 116, June 1994, pp. 216-222.

10. “Experimental Evaluation of Robotic Reduced-Order Hybrid Position/Force Control on a Two-Link Flexible Manipulator”, D. Bossert, U. Lee, and J. Vagners, 1996 IEEE International Conference on Robotics and Automation.

11. “System Build User’s Guide”, ver. 3.0, Integrated Systems Inc., 3260 Jay St., Santa Clara, CA 95054, 1992.

12. “SANDY™: A Design Tool for Linear Multivariable Optimal Control, User’s Guide”, A.J. Controls Inc., 25825 104th Ave. SE, Suite 442, Kent, WA 98031, 1993.

13. Personal conversations with A. E. Bryson, PhD.


Appendix 1. Parameter Values

Parameter   Description                                                       Value

m1, m2      Mass of the links in the rigid body formulation                   3.6745 kg, 1.0184 kg

L1, L2      Length of the links including both flexible and rigid sections    0.6519 m, 0.6019 m

a1, a2      Distance from the joint to the center of mass of the links        0.3365 m, 0.2606 m
            in the rigid body formulation

I1, I2      Inertia of the links about the center of mass in the rigid        0.370 kg m^2, 0.081 kg m^2
            body formulation

∆θ1, ∆θ2    Encoder angular quantization                                      0.0015 rad, 0.0015 rad

x, y        End point position sensor quantization                            0.05 mm, 0.05 mm

ωi          Flexible mode frequencies                                         1.8, 3.4, 4.5, 8.0 Hz

fDYNA       Maximum sensor update rate                                        65.8 Hz

kx, ky      Sensor origin in table top coordinates                            0.762 m, 0.653 m

z0          Sensor distance from table top                                    0.775 m

ηx, ηy      Sensor rotational misalignments                                   0.0029 rad, 0.0129 rad

d1, d2      Inboard distances to flexible link                                0.104 m, 0.0195 m


Appendix 2. Code Listings

2.1. AC100-C30 Serial Line Interface

The AC100 decoding software consists of two pieces: the serial driver and the serial

decoding/encoding routines. Each piece exists as an object module in the \AC100C30

directory on the AC100 PC host. References to these modules are made in the

AC100C30.LNK file. They are automatically linked at compile time, so no additional

code needs to be included.

The serial driver is a set of several software routines that define the timing and

structure of the serial I/O. It is loaded as an object library (IPSERIAL.OBJ) supplied by

ISI.

The decoding/encoding routines are written locally, customized to the

DynaSight interface. They transform the stream of received bytes into formatted floating

point outputs, based on the known DynaSight packet definitions. They also transform the

model floating point outputs into a byte stream to send to the sensor. Versions of these

routines were developed for the DynaSight sensor and are located in

\AC100C30\IP_DYNA.OBJ (default object code) and \AC100C30\DYNA.DIR (source

code directory).

The decoding/encoding software consists of three functions which are called by the

serial interface driver module. The usr_SERIAL_out and usr_sample_SERIAL_in routines

are called for input and output transfers of serial data. Each function is implemented in two

different forms, depending on the intended use of the project (see below). The

get_SERIAL_parameters routine defines the serial line communication protocol and also

initializes the user structure.

BAUD = 19200

DATA BITS = 8

PARITY = NONE

STOP BITS = 1

CLOCK MULT = 16 or 32

BUFFER SIZE = 200


2.1.1. De-packetizing Code

This is the normal use of the serial line decoding/encoding routines. It is called

during run time to decode the DynaSight position packets into floating point engineering

format. The routines are:

get_SERIAL_parameters : standard protocol definitions

user_SERIAL_out : is not called or used, since there are no outputs to the sensor

user_sample_SERIAL_in : decodes the DynaSight data packets into a vector of

floating point controller inputs.

Any number of input bytes can be decoded during processing. When the end of an

eight byte packet is reached, the data is transformed into an integer position. The latest

complete integer position is converted to a floating point measurement in millimeters and

returned to the controller.
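The per-axis arithmetic these routines perform can be summarized in a short standalone sketch. The function name decode_axis is invented for this illustration; the byte order, the 16-to-32 bit sign extension, the base-2 exponent, and the 0.05 mm per count scale all follow the driver and decode.c listings below.

```c
/* Sketch of the per-axis NATIVE0 conversion: combine the two packet
   bytes, sign-extend from 16 bits, apply the base-2 exponent, and
   scale by 0.05 mm per count. Returns millimeters. */
double decode_axis( unsigned char hi, unsigned char lo, unsigned exp )
{
    long raw = ( (long) hi << 8 ) | (long) lo;

    if ( raw & 0x8000L )
        raw -= 0x10000L;              /* 16 -> 32 bit sign extension */

    return 0.05 * (double) raw * (double) ( 1L << exp );
}
```

For example, a raw axis word of 0x0100 (256 counts) with a zero exponent corresponds to 12.8 mm, and 0xFFFF (-1 count) with exponent 1 corresponds to -0.1 mm.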

/*=====================================================================

IP_DRIVER.C

FUNCTION :

Required top level serial I/O drivers for the AC100 DSP based real time controller. Data format translation between the serial device asynchronous byte stream format and the System Build synchronous float/int vector format. (AC100 object code performs the buffered serial I/O interrupt functions.)

This version is for the DynaSight IR position sensor.

get_SERIAL_parameters Function sets the asynchronous communication parameters for the IP-SERIAL module.

user_SERIAL_out Creates the transmit byte array and calls write_serial, which transmits the byte array across the serial channel.

user_sample_SERIAL_in Function is called every sampling interval. Fills the floating- point vector which is used as input to the model for the current sampling interval.


INPUTS : This version supports no inputs. To configure the sensor, see the IP_SETUP routine.

OUTPUTS :

[1] target X position in millimeters. (sensor frame)

[2] target Y position in millimeters. (sensor frame)

[3] target Z position in millimeters. (sensor frame)

See the DynaSight Developers manual for a definition of the sensor frame coordinate system.

[4] data quality flag. -1=old, 0=good, 1=suspect, 2=bad.

[5] packet synchronization flag. 0=sync'ed, 1=error.

[6] packet count (decodable packets only).

HISTORY : 18 Nov 93 ISI Original; 06 MAR 95 D.B. Rathbun Adapted to DynaSight format

====================================================================*/

/* DynaSight/AC100 includes */
#include "dynadriver.h"

#define NULL 0

/* semaphores and serial parameters for each physical channel */
private struct {
    boolean allSent;
    boolean broken;
    unsigned baud;
} line_state[2];

/* definition of user memory structure for each channel */
struct user_type {
    int update_interval;
    int update_count;
    int last_pktCount;
    struct dyNative0 dyna_struct;
};

/*-------------------------------------------------------------------*/
public void get_SERIAL_parameters( unsigned int hardware_channel,
                                   volatile struct user_param *device_param,
                                   volatile struct ring_buffer_param *rec_buffer,
                                   IOdevice *device )
{
    struct user_type *user_ptr;
    serial_param_type *serptr;


    int inx;

    serptr = device->parameters;

    /* Initialize the user data structure memory/values */
    if (SERIAL_USER_PTR == NULL) {
        SERIAL_USER_PTR = (struct user_type *)malloc(sizeof(struct user_type));
        user_ptr = SERIAL_USER_PTR;
        user_ptr->update_interval = 50;
        user_ptr->update_count = 0;
        user_ptr->last_pktCount = 0;
        user_ptr->dyna_struct.pktCount = 0;
        user_ptr->dyna_struct.pktState = 0;
        for ( inx=0; inx<PKT_MAX_LENGTH; inx++ )
            user_ptr->dyna_struct.pktbuf[inx] = 0;
        user_ptr->dyna_struct.x = 0;
        user_ptr->dyna_struct.y = 0;
        user_ptr->dyna_struct.z = 0;
        user_ptr->dyna_struct.config_flag = 1;
        user_ptr->dyna_struct.buffer_flag = 1;
    }

    /* Set the serial I/O communication parameters */
    if ((hardware_channel == chanA) || (hardware_channel == chanB)) {

        /* set port parity = NONE, EVEN, ODD */
        device_param->parity = NONE;

        /* set port baud = 300, 9600, 19200, ... */
        device_param->baud_rate = 19200;

        /* set port stop bits = ONE, TWO, ONE_AND_HALF */
        device_param->stop_bits = ONE;

        /* set port transmit bit size = 5, 6, 7, 8 */
        device_param->transmit_data_size = 8;

        /* set port receive bit size = 5, 6, 7, 8 */
        device_param->receive_data_size = 8;

        /* set clock multiplier = 1, 16, 32, 64 */
        device_param->clock_multiplier = 16;

        /* set size for receive ring buffer; should be greater than the
           maximum expected received bytes in one sample period */
        rec_buffer->buffer_size = 2000;

    } else {
        printx("INVALID CHANNEL\n");
    }


}

/*--------------------------------------------------------------------
   Function : user_SERIAL_out

   Abstract : user-defined serial output function. Function called
              as a scheduled output event.

   Inputs   : device         user-transparent pointer to system attributes
              model_float[]  SystemBuild output vector
              ser_channel    serial channel

   Returns  : error status
--------------------------------------------------------------------*/
public RetCode user_SERIAL_out( IOdevice *device, float model_float[],
                                u_int ser_channel )
{
    Byte cbuffer[3];
    u_int i;
    struct user_type *local_ptr;

    /* NOT USED. See the file=[ip_setup.c] / project=[ac_setup] for
       configuration of the sensor. */
    return OK;
}

/*---------------------------------------------------------------------
   Function : user_sample_SERIAL_in

   Abstract : Specifies user-defined serial input function. Function
              called as a scheduled input event.

   Inputs   : device         user-transparent pointer to system attributes
              ser_channel    serial channel

   Outputs  : *model_float   pass back floating point vector
                             (see file header for indexed description)

   Returns  : error status
---------------------------------------------------------------------*/
public RetCode user_sample_SERIAL_in( IOdevice *device,
                                      float model_float[],
                                      u_int ser_channel )
{
    u_int i, num;


    unsigned char buf[80];
    struct user_type *local_ptr;
    int sync_error;

    /* set user pointer to buffer allocated by get_parameters */
    local_ptr = SERIAL_USER_PTR;

    /* If this is the first time through, reset the buffers */
    if ( local_ptr->dyna_struct.buffer_flag ) {
        if ( ser_channel == chanA ) {
            reset_buffer( device->parameters );
            local_ptr->dyna_struct.buffer_flag = 0;
        }
    }

    /* Only do a real process function for channel A */
    if ( ser_channel == chanA ) {

        /* Get the number of new data bytes in the buffer */
        num = numbytes_in_buffer( device->parameters );

        /* Only process data if there are at least [?] new bytes */
        if ( num != 0 ) {

            /* set the no_data count to zero */
            local_ptr->update_count = 0;

            /* read in all the data from the buffer */
            read_serial( device->parameters, num, buf );

            /* sync to byte field and convert latest good data to float */
            sync_error =
                decodeNative0( buf, num, &(local_ptr->dyna_struct) );

            /* put the output data into the System Build output vector */
            model_float[0] = 0.05 * (float) local_ptr->dyna_struct.x;
            model_float[1] = 0.05 * (float) local_ptr->dyna_struct.y;
            model_float[2] = 0.05 * (float) local_ptr->dyna_struct.z;

            /* The data quality depends on the sensor tracking status */
            if ( local_ptr->dyna_struct.status == STS_TRACK ) {
                model_float[3] = 0.0e0;
            } else if ( local_ptr->dyna_struct.status == STS_CAUTION ) {
                model_float[3] = 1.0e0;
            } else {
                model_float[3] = 2.0e0;
            }

            if ( sync_error ) {
                model_float[4] = 1.0e0;
                /* if sync error, assume bad data */
                model_float[3] = 2.0e0;
            } else {


                model_float[4] = 0.0e0;
            }

            if ( local_ptr->dyna_struct.pktCount ==
                 local_ptr->last_pktCount ) {
                model_float[3] = -1.0e0;
            }
            local_ptr->last_pktCount = local_ptr->dyna_struct.pktCount;
            model_float[5] = (float) (local_ptr->dyna_struct.pktCount);

        } else {

            /* put the latest data into the output vector and flag old */
            model_float[0] = 0.05 * (float) local_ptr->dyna_struct.x;
            model_float[1] = 0.05 * (float) local_ptr->dyna_struct.y;
            model_float[2] = 0.05 * (float) local_ptr->dyna_struct.z;
            model_float[3] = -1.0e0;
            model_float[4] = 0.0e0;
            model_float[5] = (float) (local_ptr->dyna_struct.pktCount);

            /* count number of scheduler ticks that go by without
               receiving all 4 bytes. If too long, signal a line break
               and reset the buffer. */
            local_ptr->update_count++;
            if ( local_ptr->update_interval < local_ptr->update_count ) {
                IOerr( SERIAL_receive_interrupt_error, ser_channel );
                reset_buffer( device->parameters );
                local_ptr->update_count = 0;
            }
        }

    } else {

        /* This is channel B so reset the buffer so it won't overflow */
        reset_buffer( device->parameters );
        printx("Resetting Buffer B \n");
    }

    return OK;
}

/*************************************************************** END OF FILE

****************************************************************/

/*====================================================================

decode.h


FUNCTION : Include file for decode.c. Definitions and function prototypes for a decoder for the DynaSight NATIVE0 data format for 3-D position measurements.

HISTORY : xx xxx 92 Original, Origin Instruments Corporation; 06 MAR 95 D.B. Rathbun modified for single data structure

********************************************************************/

/* AC100 includes */
#include <stddef.h>
#include <stdlib.h>
#include "sa_types.h"
#include "types.h"
#include "ioerrs.h"
#include "errcodes.h"
#include "iodriver.h"
#include "ipserial.h"
#include "printx.h"

#ifndef DECODE_H
#define DECODE_H

#ifndef FALSE
#define FALSE 0
#endif
#ifndef TRUE
#define TRUE (!FALSE)
#endif

typedef int BOOL;
typedef unsigned char UCHAR;
typedef unsigned short USHORT;
typedef unsigned long ULONG;

#define PKT_MAX_LENGTH 8
#define PKT_LENGTH 8
#define PKT_MARKER 0x80
#define MARKER_MASK 0xF0

/* structure for raw data from NATIVE0 packet */
struct dyNative0 {
    unsigned status : 2;  /* sensor operating mode */
    unsigned btnr   : 1;  /* reserved field */
    unsigned sync   : 1;  /* debounced sync indicator, 0=>open */
    unsigned dummy0 : 4;  /* dummy field for 16-bit alignment */
    unsigned exp    : 2;  /* base 2 exponent for x, y, z */
    unsigned fmt    : 2;  /* data format indicator, 0=>NATIVE0 */
    unsigned dummy1 : 4;  /* dummy field for 16-bit alignment */
    long x;               /* x measurement */
    long y;               /* y measurement */
    long z;               /* z measurement */
    int pktCount;         /* count of total 'ok' packets */


    int pktState;                 /* buffer pointer for state machine */
    char pktbuf[PKT_MAX_LENGTH];  /* used to sync to data format */
    int config_flag;              /* flag to mark initialization */
    int buffer_flag;              /* flag to mark initialization */
};

/* Defines for decode of ".status" field in struct dyNative0: */
#define STS_SEARCH  0
#define STS_COAST   1
#define STS_CAUTION 2
#define STS_TRACK   3

/* Function Prototypes */
int decodeNative0( unsigned char *cp, u_int count, struct dyNative0 *tgt );

#endif

/*=====================================================================

decode.c

FUNCTION : Packet decoder for the DynaSight NATIVE0 data format. Source for a simple state machine to decode the DynaSight NATIVE0 data format for 3-D position measurements.

HISTORY : xx xxx 92 Original, Origin Instruments Corporation; 06 MAR 95 D.B. Rathbun converted to a single data structure,

passed control of static structure to caller, updated bit operations for a 32 bit architecture

=====================================================================*/

#include "dynadriver.h"

/********************************************************************
 * decodeNative0()
 *
 * Decode a segmented stream of input characters and return the
 * most recent NATIVE0 packet that has been completely and
 * successfully decoded. This function decodes but ignores
 * packets for which the "fmt" field is non-zero.
 *
 * Input Parameters:
 *   cp    - pointer to a string of characters to be decoded
 *   count - the number of characters in the string
 *   tgt   - pointer to a user-allocated struct dyNative0 that
 *           will receive the most recently decoded packet.


 *
 * Return Value:
 *   TRUE if a packet synchronization error was detected.
 *   FALSE if no error was detected.
 ********************************************************************/

int decodeNative0( unsigned char *cp, u_int count, struct dyNative0 *tgt )
{
    register int temp;
    u_int tmp_count;
    unsigned char newChar;
    BOOL isMarker;
    int pktSyncError;

    /* If there are more than 16 bytes in the buffer, ignore all but
       the last set */
    if ( count > 16 ) {
        tmp_count = count - ( ( count/8 - 2 ) * 8 );
        for ( ; count>tmp_count; count-- )
            cp++;
    }

    pktSyncError = FALSE;

    /* Process all the new input characters */
    for ( ; count>0; count-- ) {

        newChar = *cp++;

        /* Is the current character a DynaSight native0 marker byte? */
        isMarker = ( (newChar & MARKER_MASK) == PKT_MARKER );

        /* State machine of native0 format with pointer tgt.pktState */
        switch( tgt->pktState ) {

        case 0:
            if( isMarker ) {
                /* expected marker byte */
                tgt->pktbuf[tgt->pktState++] = newChar;
            } else {
                /* must be a loss of synchronization */
                pktSyncError = TRUE;
                /* leave pktState at 0 */
            }
            break;

        case 1:
            if( isMarker ) {
                /* expected marker byte */
                tgt->pktbuf[tgt->pktState++] = newChar;
            } else {


                /* must be loss of synchronization, so reset */
                tgt->pktState = 0;
                pktSyncError = TRUE;
            }
            break;

        case 2:
            if( isMarker ) {
                /* unexpected marker, so shift back */
                tgt->pktbuf[0] = tgt->pktbuf[1];
                tgt->pktbuf[1] = newChar;
                pktSyncError = TRUE;
                /* leave pktState at 2 */
            } else {
                /* save the character */
                tgt->pktbuf[tgt->pktState++] = newChar;
            }
            break;

        case 3:
            /* save the character */
            tgt->pktbuf[tgt->pktState++] = newChar;
            break;

        case 4:
            if( isMarker ) {
                /* unexpected marker, so reset state machine */
                tgt->pktbuf[0] = newChar;
                tgt->pktState = 1;
                pktSyncError = TRUE;
            } else {
                /* save the character */
                tgt->pktbuf[tgt->pktState++] = newChar;
            }
            break;

        case 5:
            /* save the character */
            tgt->pktbuf[tgt->pktState++] = newChar;
            break;

        case 6:
            if( isMarker ) {
                /* unexpected marker, so reset state machine */
                tgt->pktbuf[0] = newChar;
                tgt->pktState = 1;
                pktSyncError = TRUE;
            } else {
                /* save the character */
                tgt->pktbuf[tgt->pktState++] = newChar;
            }
            break;


        case 7:
            /* save the character */
            tgt->pktbuf[tgt->pktState] = newChar;

            /* ----- 8-byte packet is complete, so decode it ----- */

            /* increment packet count (allow for rollover) */
            (tgt->pktCount)++;

            /* check the message type field to make sure it is
               native0 type */
            if( ( (temp = tgt->pktbuf[0]) & 0x0C ) == 0 ) {

                /* update data structure with new flag data */
                tgt->exp = temp;
                tgt->fmt = temp>>2;  /* =0 */
                tgt->status = temp = tgt->pktbuf[1];
                tgt->btnr = (temp>>2);
                tgt->sync = (temp>>3);

                /* update position data:
                   - put the two input bytes into the 32 bit word
                   - do a 16 to 32 bit sign extension
                   - apply the base 2 exponent */
                tgt->x = ( tgt->pktbuf[2]<<8 ) | tgt->pktbuf[3];
                if( tgt->x & 0x00008000 ) tgt->x = tgt->x | 0xffff0000;
                tgt->x = tgt->x << tgt->exp;

                tgt->y = ( tgt->pktbuf[4]<<8 ) | tgt->pktbuf[5];
                if( tgt->y & 0x00008000 ) tgt->y = tgt->y | 0xffff0000;
                tgt->y = tgt->y << tgt->exp;

                tgt->z = ( tgt->pktbuf[6]<<8 ) | tgt->pktbuf[7];
                if( tgt->z & 0x00008000 ) tgt->z = tgt->z | 0xffff0000;
                tgt->z = tgt->z << tgt->exp;

            } else {
                pktSyncError = TRUE;
            }
            tgt->pktState = 0;
            break;
        }
    }

    return pktSyncError;
}


2.1.2. Configuration Code

The DynaSight needs to be configured to its internal 'V' mode (stereoSync60) to get

acceptable dynamic behavior. An empty MatrixX/SystemBuild model was created in the

AC_SETUP folder in the \AC100C30\PROJECTS directory. It was compiled with the

configuration code only (i.e. the other version was temporarily removed from the default

link list). A BAT file (DYNA_SETUP) was set up that runs this project directly from the

PC command line. A CTRL-C must be hit to halt the executable.

get_SERIAL_parameters : defines the communication parameters

user_SERIAL_out : constructs an output command packet and sends it to the sensor. A flag is set so that initialization only happens once. The command packet is:

0x03 ASCII ctrl-c, attention signal

0x56 ASCII V, stereoSync60 mode command

0x00 end of command byte

user_sample_SERIAL_in : is not called or used, since no sensor inputs are

required.

/*====================================================================

IP_DRIVER.C

FUNCTION :

Required top level serial I/O drivers for the AC100 DSP based real time controller. Data format translation between the serial device asynchronous byte stream format and the System Build synchronous float/int vector format. (AC100 object code performs the buffered serial I/O interrupt functions.)

This version is for the DynaSight IR position sensor.

get_SERIAL_parameters Function sets the asynchronous communication parameters for the IP-SERIAL module.

user_SERIAL_out Creates the transmit byte array and calls write_serial, which transmits the byte array across the serial channel.


user_sample_SERIAL_in Function is called every sampling interval. Fills the floating-point vector which is used as input to the model for the current sampling interval.

INPUTS : This project requires a single input to activate the call to the user_SERIAL_out routine. Its value is unimportant.

OUTPUTS : This version only configures the sensor; there are no outputs. See the normal ip_driver.c code for sensor reading.

HISTORY : 18 Nov 93 ISI Original; 06 MAR 95 D.B. Rathbun Adapted to DynaSight format

====================================================================*/

/* AC100 includes */
#include <stddef.h>
#include <stdlib.h>
#include "sa_types.h"
#include "types.h"
#include "ioerrs.h"
#include "errcodes.h"
#include "iodriver.h"
#include "ipserial.h"
#include "printx.h"

/* DynaSight includes */
#include "decode.h"

#define NULL 0

/* semaphores and serial parameters for each physical channel */
private struct {
    boolean allSent;
    boolean broken;
    unsigned baud;
} line_state[2];

/* definition of user memory structure for each channel */
struct user_type {
    int update_interval;
    int update_count;
    struct dyNative0 dyna_struct;
};

/*------------------------------------------------------------------*/
public void get_SERIAL_parameters( unsigned int hardware_channel,
                                   volatile struct user_param *device_param,
                                   volatile struct ring_buffer_param *rec_buffer,
                                   IOdevice *device )
{
    struct user_type *user_ptr;


    serial_param_type *serptr;
    int inx;

    serptr = device->parameters;

    /* Initialize all the user structure memory/parameters */
    if (SERIAL_USER_PTR == NULL) {
        SERIAL_USER_PTR = (struct user_type *)malloc(sizeof(struct user_type));
        user_ptr = SERIAL_USER_PTR;
        user_ptr->update_interval = 50;
        user_ptr->update_count = 0;
        user_ptr->dyna_struct.pktCount = 0;
        user_ptr->dyna_struct.pktState = 0;
        user_ptr->dyna_struct.x = 0;
        user_ptr->dyna_struct.y = 0;
        user_ptr->dyna_struct.z = 0;
        user_ptr->dyna_struct.config_flag = 1;
        user_ptr->dyna_struct.buffer_flag = 1;
    }

    /* Set the serial I/O communication parameters */
    if ((hardware_channel == chanA) || (hardware_channel == chanB)) {

        /* set port parity = NONE, EVEN, ODD */
        device_param->parity = NONE;

        /* set port baud = 300, 9600, 19200, ... */
        device_param->baud_rate = 19200;

        /* set port stop bits = ONE, TWO, ONE_AND_HALF */
        device_param->stop_bits = ONE;

        /* set port transmit bit size = 5, 6, 7, 8 */
        device_param->transmit_data_size = 8;

        /* set port receive bit size = 5, 6, 7, 8 */
        device_param->receive_data_size = 8;

        /* set clock multiplier = 1, 16, 32, 64 */
        device_param->clock_multiplier = 16;

        /* set size for receive ring buffer; should be greater than the
           maximum expected received bytes in one sample period */
        rec_buffer->buffer_size = 200;

    } else {
        printx("INVALID CHANNEL\n");
    }

}


/*--------------------------------------------------------------------
   Function : user_SERIAL_out

   Abstract : user-defined serial output function. Function called
              as a scheduled output event.

   Inputs   : device         user-transparent pointer to system attributes
              model_float[]  SystemBuild output vector
              ser_channel    serial channel

   Returns  : error status
--------------------------------------------------------------------*/

public RetCode user_SERIAL_out( IOdevice *device, float model_float[],
                                u_int ser_channel )
{
    Byte cbuffer[3];
    u_int i;
    struct user_type *local_ptr;

    /* set user pointer to buffer allocated by get_parameters */
    local_ptr = SERIAL_USER_PTR;

    /* Define the DynaSight start-stop codes */
    cbuffer[0] = (Byte) 0x03;
    cbuffer[2] = (Byte) 0x00;

    /* Only process channel A outputs */
    if ( ser_channel == chanA ) {

        /* Only output if current output buffer is empty */
        if (numbytes_in_buffer(device->parameters) == 0) {

            /* Only do this once */
            if ( local_ptr->dyna_struct.config_flag ) {

                /* --- Set the DynaSight operating mode command ---
                       stereoSync60 = 0x56
                   ------------------------------------------------- */
                cbuffer[1] = (Byte) 0x56;

                /* Send the data */
                write_serial( device->parameters, cbuffer, 3 );

                /* set the already done flag */
                local_ptr->dyna_struct.config_flag = 0;
            }
        }
    }
    return OK;


}

/*--------------------------------------------------------------------
   Function : user_sample_SERIAL_in

   Abstract : Specifies user-defined serial input function. Function
              called as a scheduled input event.

   Inputs   : device         user-transparent pointer to system attributes
              ser_channel    serial channel

   Outputs  : *model_float   pass back floating point vector

   Returns  : error status
--------------------------------------------------------------------*/
public RetCode user_sample_SERIAL_in( IOdevice *device,
                                      float model_float[],
                                      u_int ser_channel )
{
    u_int i, num;
    unsigned char buf[80];
    struct user_type *local_ptr;
    int sync_error;

    /* set user pointer to buffer allocated by get_parameters */
    local_ptr = SERIAL_USER_PTR;

    /* If this is the first time through, reset the buffers */
    if ( local_ptr->dyna_struct.buffer_flag ) {
        if ( ser_channel == chanA ) {
            reset_buffer( device->parameters );
            local_ptr->dyna_struct.buffer_flag = 0;
        }
    }

    return OK;
}

/*************************************************************** END OF FILE

****************************************************************/


2.2. Slew Commander

/*======================================================================
   ep_slew_comm.c

   System Build format subroutine source code to implement the slew
   commander for the two link flexible manipulator.

   Inputs  : [0] Xpo      X axis initial position
             [1] Ypo      Y axis initial position
             [2] Xpf      X axis final position
             [3] Ypf      Y axis final position
             [4] Xvo      X axis initial velocity
             [5] Yvo      Y axis initial velocity
             [6] tinit    slew start time
             [7] r_init   re-initialization flag 1=init, 0=no

   Outputs : [0] XepComm  x axis tip position command (m)
             [1] YepComm  y axis tip position command (m)
             [2] XepRate  x axis tip rate command (m/s)
             [3] YepRate  y axis tip rate command (m/s)
             [4] XepAccel x axis tip accel command (m/s^2)
             [5] YepAccel y axis tip accel command (m/s^2)

   States  : none

   Parameters : [0] vmax  maximum rate constraint
                [1] amax  maximum acceleration constraint

   Notes   : none

   History : 18 Nov 95  D.Rathbun  created
======================================================================*/

/* SystemBuild Includes */
#include "sa_sys.h"    /* Defines the platform */
#include "sa_types.h"  /* Defines the structure STATUS_RECORD */
#include "sa_user.h"   /* Include extern declarations of your UCBs */

/* ----- System defines ----------------- */
#include <math.h>

/* ----- Local defines and includes ----- */
#include "slew_comm.h"
#include "tip_pos.h"


/* ----- Local Static Variables ----- */
/* NOTE: because of the use of statics, this code is not re-entrant;
** therefore, you can only use it once in any simulation. A fix is
** only possible in the MatrixX/SystemBuild calling structure. */
static struct SLEW_PARAM param_s;
static struct TIP_POSITION output_s;

void ep_slew_comm( info, t, u, nu, x, xdot, nx, y, ny, rpar, ipar )
    RT_STATUS_RECORD *info;
    RT_DURATION t;
    RT_FLOAT u[], y[], x[], xdot[], rpar[];
    RT_INTEGER nu, nx, ny, ipar[];
{
    struct SLEW_PARAM *param;
    struct TIP_POSITION *output;

    param = &param_s;
    output = &output_s;

    if ( info->INIT ) {            /* initialization */
        info->ERROR = 0;
    }

    if ( info->STATES ) {          /* compute state derivatives */
        info->ERROR = 0;
    }

    if ( info->OUTPUTS ) {         /* compute output update */

        /* Do a slew point update */
        if ( fabs(u[7] - 1.0e0) < 1e-5 ) {

            /* Reinitialize the slew parameters */
            param->inputs.xpo = u[0];
            param->inputs.ypo = u[1];
            param->inputs.xpf = u[2];
            param->inputs.ypf = u[3];
            param->inputs.xvo = u[4];
            param->inputs.yvo = u[5];
            param->inputs.tinit = u[6];
            param->limits.vmax = rpar[0];
            param->limits.amax = rpar[1];

            /* Set up the slew */
            slew_set( param );
        }


        /* Get the new slew output */
        slew_comm( param, (RT_FLOAT) t, output );

        /* Set the outputs */
        y[0] = output->x_pos;
        y[1] = output->y_pos;
        y[2] = output->x_rate;
        y[3] = output->y_rate;
        y[4] = output->x_accel;
        y[5] = output->y_accel;

        info->ERROR = 0;

    } else {
        info->ERROR = 1;
    }

    return;
}

/* ========== End of File ============================================*/

/*---------------------------------------------------------------------
   slew_comm.h

   Data types/defines for slew command generation.

   NOTES :  none
   HISTORY: 12 Oct 95  D.Rathbun  created
            13 Nov 95  D.Rathbun  updated for real 2nd order slews
            18 Nov 95  D.Rathbun  altered for MatrixX real time data types
---------------------------------------------------------------------*/

/* MatrixX-SystemBuild data types */
#include "sa_types.h"

/* Slew Data Types */
struct SLEW_PARAM {
    struct {
        RT_FLOAT xpo;    /* initial x axis position */
        RT_FLOAT ypo;    /* initial y axis position */
        RT_FLOAT xvo;    /* initial x axis velocity */
        RT_FLOAT yvo;    /* initial y axis velocity */
        RT_FLOAT xpf;    /* final x axis position */
        RT_FLOAT ypf;    /* final y axis position */
        RT_FLOAT tinit;  /* initial slew time */

Page 91: Nonlinear Robotic Control

85

} inputs; struct { RT_FLOAT vmax; /* maximum allowable single axis rate */ RT_FLOAT amax; /* maximum allowable acceleration */ } limits; struct { RT_FLOAT t1,t2,t3; /* slew phase durations */ RT_FLOAT T0,T1,T2,T3; /* slew phase end times */ RT_FLOAT xa1, xa2; /* x axis accelerations */ RT_FLOAT ya1, ya2; /* y axis accelerations */ RT_FLOAT xp1, xv1; /* x axis point 1 states */ RT_FLOAT yp1, yv1; /* y axis point 1 states */ RT_FLOAT xp2, xv2; /* x axis point 2 states */ RT_FLOAT yp2, yv2; /* y axis point 2 states */ } data;

RT_INTEGER setup;};

/* Function Prototypes */void slew_set( );void slew_comm( );

/*---------------------------------------------------------------------
   slew_comm.c

   Generates an optimum two dimensional second order slew command
   between two points under rate and acceleration constraints.

   INPUTS  : SLEW_PARAM
               INPUTS
                 xpo    initial x axis position
                 ypo    initial y axis position
                 xvo    initial x axis rate
                 yvo    initial y axis rate
                 xpf    final x axis position
                 ypf    final y axis position
                 tinit  initial slew time
               LIMITS
                 vmax   maximum allowable single axis rate
                 amax   maximum allowable single axis acceleration
             tnow       current time
   OUTPUTS : TIP_POSITION
               x_pos    current x axis position command
               y_pos    current y axis position command
               x_rate   current x axis rate command
               y_rate   current y axis rate command
               x_accel  current x axis acceleration command
               y_accel  current y axis acceleration command
   NOTES   : * the slew_set function should be called once before any
               calls to the slew generator (or else you will get 0
               output).
             * the slew generator will back correct (assuming constant
               velocity) for times (tnow) less than the slew start
               time (tinit).
             * the slew generator does not pick the most constrained
               axis correctly. The accelerations for the other axis
               are not guaranteed to be within the acceleration limit.
   HISTORY : 12 Oct 95  D.Rathbun  created
             13 Nov 95  D.Rathbun  updated for real 2nd order slews
             18 Nov 95  D.Rathbun  updated for MatrixX real time data types
             10 Dec 95  D.Rathbun  updated for an initialization check;
                                   updated for non-ANSI compliant compilers
----------------------------------------------------------------------*/

/* standard includes */
#include <math.h>

/* local includes */
#include "tip_pos.h"
#include "slew_comm.h"

void slew_comm( param, tnow, output )
    struct SLEW_PARAM *param;
    RT_FLOAT tnow;
    struct TIP_POSITION *output;
{
    RT_FLOAT del_t;

    if ( param->setup == 0 ) {
        output->x_pos   = 0.0e0;
        output->y_pos   = 0.0e0;
        output->x_rate  = 0.0e0;
        output->y_rate  = 0.0e0;
        output->x_accel = 0.0e0;
        output->y_accel = 0.0e0;

    } else if ( tnow < param->data.T0 ) {
        del_t = tnow - param->data.T0;
        output->x_pos   = param->inputs.xpo + param->inputs.xvo*del_t;
        output->y_pos   = param->inputs.ypo + param->inputs.yvo*del_t;
        output->x_rate  = param->inputs.xvo;
        output->y_rate  = param->inputs.yvo;
        output->x_accel = 0.0e0;
        output->y_accel = 0.0e0;

    } else if ( tnow < param->data.T1 ) {
        del_t = tnow - param->data.T0;
        output->x_pos   = param->inputs.xpo + param->inputs.xvo*del_t
                        + 0.5e0 * param->data.xa1 * del_t * del_t;
        output->y_pos   = param->inputs.ypo + param->inputs.yvo*del_t
                        + 0.5e0 * param->data.ya1 * del_t * del_t;
        output->x_rate  = param->inputs.xvo + param->data.xa1 * del_t;
        output->y_rate  = param->inputs.yvo + param->data.ya1 * del_t;
        output->x_accel = param->data.xa1;
        output->y_accel = param->data.ya1;

    } else if ( tnow < param->data.T2 ) {
        del_t = tnow - param->data.T1;
        output->x_pos   = param->data.xp1 + param->data.xv1*del_t;
        output->y_pos   = param->data.yp1 + param->data.yv1*del_t;
        output->x_rate  = param->data.xv1;
        output->y_rate  = param->data.yv1;
        output->x_accel = 0.0e0;
        output->y_accel = 0.0e0;

    } else if ( tnow < param->data.T3 ) {
        del_t = tnow - param->data.T2;
        output->x_pos   = param->data.xp2 + param->data.xv2*del_t
                        + 0.5e0 * param->data.xa2 * del_t * del_t;
        output->y_pos   = param->data.yp2 + param->data.yv2*del_t
                        + 0.5e0 * param->data.ya2 * del_t * del_t;
        output->x_rate  = param->data.xv2 + param->data.xa2 * del_t;
        output->y_rate  = param->data.yv2 + param->data.ya2 * del_t;
        output->x_accel = param->data.xa2;
        output->y_accel = param->data.ya2;

    } else {
        output->x_pos   = param->inputs.xpf;
        output->y_pos   = param->inputs.ypf;
        output->x_rate  = 0.0e0;
        output->y_rate  = 0.0e0;
        output->x_accel = 0.0e0;
        output->y_accel = 0.0e0;
    }

    return;
}

void slew_set( param )
    struct SLEW_PARAM *param;
{
    RT_FLOAT t1,t2,t3;
    RT_FLOAT a1,a2,a3,a4;
    RT_FLOAT delx, dely;
    RT_FLOAT tmp;
    RT_FLOAT vx, ax, vy, ay;
    RT_FLOAT x_check, y_check, check;
    RT_INTEGER test;
    RT_FLOAT del, vo, vm, am;
    RT_FLOAT del_off, voff;
    char slew_type;

    /* flag the data as being initialized */
    ++(param->setup);

    /* Compute some often used quantities */
    delx = param->inputs.xpf - param->inputs.xpo;
    dely = param->inputs.ypf - param->inputs.ypo;

    /* Check for correct slew signs */
    tmp = 2.0e0 * param->limits.amax * delx
        - param->inputs.xvo * fabs( param->inputs.xvo );
    if ( tmp >= 0.0e0 ) {
        vx = param->limits.vmax;
        ax = param->limits.amax;
    } else {
        vx = - param->limits.vmax;
        ax = - param->limits.amax;
    }

    tmp = 2.0e0 * param->limits.amax * dely
        - param->inputs.yvo * fabs( param->inputs.yvo );
    if ( tmp >= 0.0e0 ) {
        vy = param->limits.vmax;
        ay = param->limits.amax;
    } else {
        vy = - param->limits.vmax;
        ay = - param->limits.amax;
    }

    /* Check for an X constrained or Y constrained slew.
       Convert from x/y to on/off axis. */
    x_check = sqrt( param->inputs.xvo * param->inputs.xvo / 2.0e0 + ax * delx );
    y_check = sqrt( param->inputs.yvo * param->inputs.yvo / 2.0e0 + ay * dely );

    /* Note : this test is not the correct one. It works, but can
       result in off axis accelerations that exceed the given limits. */
    test = x_check > y_check;

    if ( test ) {
        slew_type = 'X';
        check = x_check;
        del = delx;
        vo = param->inputs.xvo;
        am = ax;
        vm = vx;
        del_off = dely;
        voff = param->inputs.yvo;
    } else {
        slew_type = 'Y';
        check = y_check;
        del = dely;
        vo = param->inputs.yvo;
        am = ay;
        vm = vy;
        del_off = delx;
        voff = param->inputs.xvo;
    }

    /* Get the times based on the need for a coast phase */
    if ( check < fabs(vm) ) {

        /* Don't need a coast phase */
        t1 = sqrt(vo*vo/2/am/am + del/am) - vo/am;
        t2 = 0;
        t3 = sqrt(vo*vo/2/am/am + del/am);
        a1 = am;
        a2 = -am;

    } else {

        /* Need a coast phase */
        t1 = (vm - vo) / am;
        t2 = del/vm - (2*vm*vm - vo*vo)/(2*vm*am);
        t3 = vm/am;
        a1 = am;
        a2 = -am;

        /* fix for abs(vo) large */
        if ( t1 < 0 ) {
            t1 = -t1;
            a1 = -a1;
            t2 = del/vm - (vo*vo)/(2*vm*am);
        }
    }

    /* Get the off axis slew parameters */
    a3 = (2*del_off - voff*(2*t1+2*t2+t3))/t1/(t1+2*t2+t3);
    a4 = -(2*del_off - voff*(t1))/t3/(t1+2*t2+t3);

    /* Decode on axis/off axis back to the x/y system */
    if ( slew_type == 'X' ) {
        param->data.xa1 = a1;
        param->data.xa2 = a2;
        param->data.ya1 = a3;
        param->data.ya2 = a4;
    } else {
        param->data.xa1 = a3;
        param->data.xa2 = a4;
        param->data.ya1 = a1;
        param->data.ya2 = a2;
    }

    /* Convert the times to absolute */
    param->data.T0 = param->inputs.tinit;
    param->data.T1 = param->inputs.tinit + t1;
    param->data.T2 = param->inputs.tinit + t1 + t2;
    param->data.T3 = param->inputs.tinit + t1 + t2 + t3;
    param->data.t1 = t1;
    param->data.t2 = t2;
    param->data.t3 = t3;

    /* Fill in the way points */
    param->data.xp1 = param->inputs.xpo + param->inputs.xvo*t1
                    + 0.5e0*param->data.xa1*t1*t1;
    param->data.xv1 = param->inputs.xvo + param->data.xa1*t1;
    param->data.xp2 = param->data.xp1 + param->data.xv1*t2;
    param->data.xv2 = param->data.xv1;
    param->data.yp1 = param->inputs.ypo + param->inputs.yvo*t1
                    + 0.5e0*param->data.ya1*t1*t1;
    param->data.yv1 = param->inputs.yvo + param->data.ya1*t1;
    param->data.yp2 = param->data.yp1 + param->data.yv1*t2;
    param->data.yv2 = param->data.yv1;

    return;
}

2.3. Linearization Matrix Generation

/*======================================================================
   ep_lin_mat.c

   System Build format subroutine source code to implement the rigid
   body linearization for the two link flexible manipulator.

   Inputs  : [0] Xep     cartesian X axis end point command (m)
             [1] Yep     cartesian Y axis end point command (m)
             [2] TorX    desired X axis force (N)
             [3] TorY    desired Y axis force (N)
   Outputs : [0] TorTh1  theta 1 joint torque (Nm)
             [1] TorTh2  theta 2 joint torque (Nm)

   States     : none
   Parameters : none
   Notes      : none
   History    : 18 Nov 95  D.Rathbun  created
======================================================================*/

/* SystemBuild Includes */
#include "sa_sys.h"      /* Defines the platform                     */
#include "sa_types.h"    /* Defines the structure STATUS_RECORD      */
#include "sa_user.h"     /* Include extern declarations of your UCBs */

/* ----- System defines ----------------- */
#include <math.h>

/* ----- Local defines and includes ----- */
#include "properties.h"
#include "theta_pos.h"
#include "tip_pos.h"

/* ----- Local Static Variables ----- */
/* NOTE: because of the use of statics, this code is not re-entrant.
** Therefore, you can only use it once in any simulation. A fix
** is only possible in the MatrixX/SystemBuild calling structure. */
static struct TIP_POSITION ep_s;
static struct THETA_POSITION joint_s;
static struct MASS_PROPERTIES mass_s;

void ep_lin_mat( info, t, u,nu, x,xdot,nx, y,ny, rpar,ipar )
    RT_STATUS_RECORD *info;
    RT_DURATION t;
    RT_FLOAT u[],y[],x[],xdot[],rpar[];
    RT_INTEGER nu,nx,ny,ipar[];
{
    struct TIP_POSITION *ep;
    struct THETA_POSITION *joint;
    struct MASS_PROPERTIES *mass;

    RT_FLOAT lin_matrix[4];

    /* Set the structure pointers */
    ep = &ep_s;
    joint = &joint_s;
    mass = &mass_s;

    /* initialize the error flag */
    info->ERROR = 0;

    /* initialization */
    if ( info->INIT ) {

        /* Fill the parameter structure */
        set_mass_properties( mass );

        info->ERROR = 0;    /* no errors */
    }

    /* compute state derivatives */
    if ( info->STATES ) {
        info->ERROR = 0;
    }

    /* compute output updates */
    if ( info->OUTPUTS ) {

        /* Fill the commanded end point based positions */
        ep->x_pos   = u[0];
        ep->y_pos   = u[1];
        ep->x_rate  = 0.0e0;
        ep->y_rate  = 0.0e0;
        ep->x_accel = 0.0e0;
        ep->y_accel = 0.0e0;

        /* Compute the joint angles using the law of cosines */
        x_to_th_pos( ep, mass, joint );

        /* Get the linearization matrix */
        set_lin_MASS( joint, mass );
        set_lin_JACOB( joint, mass );
        lin_mat_gen( mass, lin_matrix );

        /* Transform the input force to the output torque */
        y[0] = lin_matrix[0+0*2]*u[2] + lin_matrix[0+1*2]*u[3];
        y[1] = lin_matrix[1+0*2]*u[2] + lin_matrix[1+1*2]*u[3];

        info->ERROR = 0;
    }

    return;
}

/* ========== End of File ============================================*/

/*---------------------------------------------------------------------
   lin_mat_gen.h

   Data types/defines for rigid body linearization matrix generation.

   NOTES:   none
   HISTORY: 12 Oct 95  D.Rathbun  created
            13 Nov 95  D.Rathbun  broke into individual routines
            18 Nov 95  D.Rathbun  altered for MatrixX real time data types
            10 Dec 95  D.Rathbun  updated for non-ANSI compliant compilers
---------------------------------------------------------------------*/

/* MatrixX-SystemBuild data types */
#include "sa_types.h"

/* Function Prototypes */
void lin_mat_gen( );
void set_lin_MASS( );
void set_lin_JACOB( );
void set_lin_J_DOT( );

/*---------------------------------------------------------------------
   lin_mat_gen.c

   Computes the rigid body linearization matrix for the two link
   flexible manipulator. Can be used to transform end point forces
   to the resulting joint torques.

   INPUTS  : MASS_PROPERTIES
               L1/L2   arm lengths
               M1/M2   arm masses
               A1/A2   arm c.m. locations
               I1/I2   arm inertias (about c.m.)
             THETA_POSITION
               th1_pos    theta 1 axis position
               th2_pos    theta 2 axis position
               th1_rate   theta 1 axis rate
               th2_rate   theta 2 axis rate
   OUTPUTS : MASS_PROPERTIES
               M        rigid body dynamic mass matrix
               M_inv    rigid body dynamic mass matrix inverse
               J        arm kinematic Jacobian matrix
               J_inv    arm kinematic Jacobian matrix inverse
               J_dot    arm kinematic Jacobian matrix time derivative
             linMatrix  2x2 linearization matrix
   NOTES   : * be careful with the calling order w.r.t. the end point
               to joint angle conversion routines ( x_to_th_pos,
               x_to_th_rate, x_to_th_accel )
   HISTORY : 25 Aug 95  D.Rathbun  created
             12 Oct 95  D.Rathbun  converted to subroutine format
             13 Nov 95  D.Rathbun  broke into individual routines
             18 Nov 95  D.Rathbun  altered for MatrixX real time data types
             10 Dec 95  D.Rathbun  updated for non-ANSI compliant compilers
---------------------------------------------------------------------*/

/* Standard Includes */
#include <math.h>

/* Local includes */
#include "properties.h"
#include "theta_pos.h"
#include "lin_mat_gen.h"

/* Define the Mass Matrix and Mass Matrix Inverse */
void set_lin_MASS( joint, mass )
    struct THETA_POSITION *joint;
    struct MASS_PROPERTIES *mass;
{
    RT_FLOAT pM1, pM2, pM3;
    RT_FLOAT c2, det;

    /* Often used angle sinusoids */
    c2 = cos( joint->th2_pos );

    /* Mass matrix parameters */
    pM1 = mass->M1 * mass->A1 * mass->A1
        + mass->M2 * mass->L1 * mass->L1 + mass->I1;
    pM2 = mass->M2 * mass->A2 * mass->A2 + mass->I2;
    pM3 = mass->M2 * mass->A2 * mass->L1;

    /* Mass matrix */
    mass->M[0][0] = pM1 + pM2 + 2.0 * pM3 * c2;
    mass->M[0][1] = pM2 + pM3 * c2;
    mass->M[1][0] = pM2 + pM3 * c2;
    mass->M[1][1] = pM2;

    /* Mass Matrix Inverse (always exists) */
    det = mass->M[0][0]*mass->M[1][1] - mass->M[1][0]*mass->M[0][1];
    mass->M_inv[0][0] =   mass->M[1][1] / det;
    mass->M_inv[0][1] = - mass->M[0][1] / det;
    mass->M_inv[1][0] = - mass->M[1][0] / det;
    mass->M_inv[1][1] =   mass->M[0][0] / det;

    return;
}

/* Jacobian Matrix and Jacobian Matrix Inverse */
void set_lin_JACOB( joint, mass )
    struct THETA_POSITION *joint;
    struct MASS_PROPERTIES *mass;
{
    RT_FLOAT s1, c1, s2, c2, s12, c12;
    RT_FLOAT det;

    /* Often used angle sinusoids */
    s1 = sin( joint->th1_pos );
    c1 = cos( joint->th1_pos );
    s2 = sin( joint->th2_pos );
    c2 = cos( joint->th2_pos );
    s12 = sin( joint->th1_pos + joint->th2_pos );
    c12 = cos( joint->th1_pos + joint->th2_pos );

    /* Jacobian */
    mass->J[0][0] = -mass->L1 * s1 - mass->L2 * s12;
    mass->J[0][1] = -mass->L2 * s12;
    mass->J[1][0] =  mass->L1 * c1 + mass->L2 * c12;
    mass->J[1][1] =  mass->L2 * c12;

    /* Jacobian determinant (watch for singularities) */
    if ( fabs( joint->th2_pos ) >= 0.05 ) {
        det = mass->L1 * mass->L2 * s2;
    } else if ( joint->th2_pos == 0 ) {
        det = mass->L1 * mass->L2 * sin(0.05);
    } else {
        det = mass->L1 * mass->L2
            * sin(0.05*joint->th2_pos/fabs(joint->th2_pos));
    }

    /* Jacobian inverse */
    mass->J_inv[0][0] =  mass->J[1][1] / det;
    mass->J_inv[0][1] = -mass->J[0][1] / det;
    mass->J_inv[1][0] = -mass->J[1][0] / det;
    mass->J_inv[1][1] =  mass->J[0][0] / det;

    return;
}

/* Jacobian Derivative Matrix */
void set_lin_J_DOT( joint, mass )
    struct THETA_POSITION *joint;
    struct MASS_PROPERTIES *mass;
{
    RT_FLOAT s1, c1, s12, c12;
    RT_FLOAT dth1, dth12;

    /* Often used angle sinusoids */
    s1 = sin( joint->th1_pos );
    c1 = cos( joint->th1_pos );
    s12 = sin( joint->th1_pos + joint->th2_pos );
    c12 = cos( joint->th1_pos + joint->th2_pos );

    dth1 = joint->th1_rate;
    dth12 = joint->th1_rate + joint->th2_rate;

    /* Jacobian Derivative */
    mass->J_dot[0][0] = -mass->L1 * c1 * dth1 - mass->L2 * c12 * dth12;
    mass->J_dot[0][1] = -mass->L2 * c12 * dth12;
    mass->J_dot[1][0] = -mass->L1 * s1 * dth1 - mass->L2 * s12 * dth12;
    mass->J_dot[1][1] = -mass->L2 * s12 * dth12;

    return;
}

void lin_mat_gen( mass, lin_mat )
    struct MASS_PROPERTIES *mass;
    RT_FLOAT *lin_mat;
{
    /* linearizing matrix = M*inv(J) */
    lin_mat[0+0*2] = mass->M[0][0]*mass->J_inv[0][0]
                   + mass->M[0][1]*mass->J_inv[1][0];
    lin_mat[0+1*2] = mass->M[0][0]*mass->J_inv[0][1]
                   + mass->M[0][1]*mass->J_inv[1][1];
    lin_mat[1+0*2] = mass->M[1][0]*mass->J_inv[0][0]
                   + mass->M[1][1]*mass->J_inv[1][0];
    lin_mat[1+1*2] = mass->M[1][0]*mass->J_inv[0][1]
                   + mass->M[1][1]*mass->J_inv[1][1];

    return;
}

/*---------------------------------------------------------------------
   x_to_theta.h

   Data types/defines for coordinate transformations.

   NOTES:   none
   HISTORY: 12 Oct 95  D.Rathbun  created
            13 Nov 95  D.Rathbun  broke into individual routines
            18 Nov 95  D.Rathbun  altered for MatrixX real time data types
            10 Dec 95  D.Rathbun  updated for non-ANSI compliant compilers
---------------------------------------------------------------------*/

/* MatrixX-SystemBuild data types */
#include "sa_types.h"

/* Data Types */

/* Function Prototypes */
void x_to_th_pos( );
void x_to_th_rate( );
void x_to_th_accel( );

/*---------------------------------------------------------------------
   x_to_theta.c

   Transforms coordinate systems for a planar two link manipulator
   from cartesian end point position to joint angle.

   INPUTS  : TIP_POSITION
               x_pos    x axis position
               y_pos    y axis position
               x_rate   x axis rate
               y_rate   y axis rate
               x_accel  x axis acceleration
               y_accel  y axis acceleration
   OUTPUTS : THETA_POSITION
               th1_pos    theta 1 axis position
               th2_pos    theta 2 axis position
               th1_rate   theta 1 axis rate
               th2_rate   theta 2 axis rate
               th1_accel  theta 1 axis acceleration
               th2_accel  theta 2 axis acceleration
   NOTES   : * be careful with the calling order w.r.t. the system
               matrices generation routines ( set_lin_MASS,
               set_lin_JACOB, set_lin_J_DOT )
   HISTORY : 12 Oct 95  D.Rathbun  created
             13 Nov 95  D.Rathbun  broke into individual routines
             18 Nov 95  D.Rathbun  altered for MatrixX real time data types
             10 Dec 95  D.Rathbun  updated for non-ANSI compliant compilers
---------------------------------------------------------------------*/

/* standard includes */
#include <math.h>

/* local includes */
#include "properties.h"
#include "lin_mat_gen.h"
#include "tip_pos.h"
#include "theta_pos.h"
#include "x_to_theta.h"

#define pi (3.14159265e0)

/* Convert cartesian positions to joint positions */
void x_to_th_pos( ep, mass, joint )
    struct TIP_POSITION *ep;
    struct MASS_PROPERTIES *mass;
    struct THETA_POSITION *joint;
{
    RT_FLOAT L1_sq, L2_sq, L12;
    RT_FLOAT th_right, L_opp2, L_opp;
    RT_FLOAT cTh1_prime, cTh2_prime;

    /* Compute the joint angles using the law of cosines */

    /* get the arm length products */
    L1_sq = mass->L1 * mass->L1;
    L2_sq = mass->L2 * mass->L2;
    L12 = mass->L1 * mass->L2;

    /* get the 'arm triangle' properties */
    L_opp2 = ep->x_pos * ep->x_pos + ep->y_pos * ep->y_pos;
    L_opp = sqrt( L_opp2 );

    /* th2 is the 180 deg complement to the inside angle */
    cTh2_prime = ( L1_sq + L2_sq - L_opp2 ) / ( 2.0e0 * L12 );
    joint->th2_pos = pi - acos(cTh2_prime);

    /* th1 + inside angle = right triangle angle */
    th_right = atan2( ep->y_pos, ep->x_pos );
    cTh1_prime = ( L1_sq + L_opp2 - L2_sq ) / ( 2.0e0*mass->L1*L_opp );
    joint->th1_pos = th_right - acos(cTh1_prime);

    return;
}

/* Convert cartesian rates to joint rates */
void x_to_th_rate( ep, mass, joint )
    struct TIP_POSITION *ep;
    struct MASS_PROPERTIES *mass;
    struct THETA_POSITION *joint;
{
    /* Compute the joint rates THd = Jinv*Xd */
    joint->th1_rate = mass->J_inv[0][0]*ep->x_rate
                    + mass->J_inv[0][1]*ep->y_rate;
    joint->th2_rate = mass->J_inv[1][0]*ep->x_rate
                    + mass->J_inv[1][1]*ep->y_rate;

    return;
}

/* Convert cartesian accelerations to joint accelerations */
void x_to_th_accel( ep, mass, joint )
    struct TIP_POSITION *ep;
    struct MASS_PROPERTIES *mass;
    struct THETA_POSITION *joint;
{
    RT_FLOAT Jid[2][2];

    /* Compute J_inv*J_dot */
    Jid[0][0] = mass->J_inv[0][0]*mass->J_dot[0][0]
              + mass->J_inv[0][1]*mass->J_dot[1][0];
    Jid[0][1] = mass->J_inv[0][0]*mass->J_dot[0][1]
              + mass->J_inv[0][1]*mass->J_dot[1][1];
    Jid[1][0] = mass->J_inv[1][0]*mass->J_dot[0][0]
              + mass->J_inv[1][1]*mass->J_dot[1][0];
    Jid[1][1] = mass->J_inv[1][0]*mass->J_dot[0][1]
              + mass->J_inv[1][1]*mass->J_dot[1][1];

    /* Compute the joint accelerations
     *   THdd = Jinv*Xdd - Jinv*Jd*Jinv*Xd
     *        = Jinv*Xdd - Jinv*Jd*THd        */
    joint->th1_accel = mass->J_inv[0][0] * ep->x_accel
                     + mass->J_inv[0][1] * ep->y_accel
                     - Jid[0][0] * joint->th1_rate
                     - Jid[0][1] * joint->th2_rate;

    joint->th2_accel = mass->J_inv[1][0] * ep->x_accel
                     + mass->J_inv[1][1] * ep->y_accel
                     - Jid[1][0] * joint->th1_rate
                     - Jid[1][1] * joint->th2_rate;

    return;
}

/*----------------------------------------------------------------------
   tip_pos.h

   Data types/defines for end point coordinate systems.

   NOTES:   none
   HISTORY: 12 Oct 95  D.Rathbun  created
            18 Nov 95  D.Rathbun  altered for MatrixX real time data types
---------------------------------------------------------------------*/

/* MatrixX-SystemBuild data types */
#include "sa_types.h"

/* Data Types */
struct TIP_POSITION {
    RT_FLOAT x_pos;     /* x axis position     */
    RT_FLOAT y_pos;     /* y axis position     */
    RT_FLOAT x_rate;    /* x axis rate         */
    RT_FLOAT y_rate;    /* y axis rate         */
    RT_FLOAT x_accel;   /* x axis acceleration */
    RT_FLOAT y_accel;   /* y axis acceleration */
};

/*----------------------------------------------------------------------
   theta_pos.h

   Data types/defines for joint based coordinate systems.

   NOTES:   none
   HISTORY: 12 Oct 95  D.Rathbun  created
            18 Nov 95  D.Rathbun  altered for MatrixX real time data types
---------------------------------------------------------------------*/

/* MatrixX-SystemBuild data types */
#include "sa_types.h"

/* Data Types */
struct THETA_POSITION {
    RT_FLOAT th1_pos;     /* theta 1 axis position     */
    RT_FLOAT th2_pos;     /* theta 2 axis position     */
    RT_FLOAT th1_rate;    /* theta 1 axis rate         */
    RT_FLOAT th2_rate;    /* theta 2 axis rate         */
    RT_FLOAT th1_accel;   /* theta 1 axis acceleration */
    RT_FLOAT th2_accel;   /* theta 2 axis acceleration */
};

/*----------------------------------------------------------------------
   properties.h

   Data types/defines for rigid body mass properties.

   NOTES:   none
   HISTORY: 12 Oct 95  D.Rathbun  created
            13 Nov 95  D.Rathbun  added dynamic and kinematic matrices
            18 Nov 95  D.Rathbun  updated for MatrixX real time data types
            10 Dec 95  D.Rathbun  updated for non-ANSI compliant compilers
---------------------------------------------------------------------*/

/* MatrixX-SystemBuild data types */
#include "sa_types.h"

/* Data Types */
struct MASS_PROPERTIES {
    RT_FLOAT L1;            /* Arm 1 length = 0.6325            */
    RT_FLOAT L2;            /* Arm 2 length = 0.6012            */
    RT_FLOAT M1;            /* Arm 1 mass = 3.6745              */
    RT_FLOAT M2;            /* Arm 2 mass = 1.0184              */
    RT_FLOAT A1;            /* Arm 1 center of mass = 0.3365    */
    RT_FLOAT A2;            /* Arm 2 center of mass = 0.2606    */
    RT_FLOAT I1;            /* Arm 1 moment of inertia = 0.3703 */
    RT_FLOAT I2;            /* Arm 2 moment of inertia = 0.0811 */
    RT_FLOAT M[2][2];       /* Mass Matrix                      */
    RT_FLOAT M_inv[2][2];   /* Mass Matrix Inverse              */
    RT_FLOAT J[2][2];       /* Jacobian Matrix                  */
    RT_FLOAT J_inv[2][2];   /* Jacobian Inverse                 */
    RT_FLOAT J_dot[2][2];   /* Jacobian Derivative              */
};

/* Function Prototypes */
void set_mass_properties( );

/*---------------------------------------------------------------------
   properties.c

   Sets the two link flexible manipulator mass properties.

   INPUTS  : MASS_PROPERTIES
               L1   Arm 1 length
               L2   Arm 2 length
               M1   Arm 1 mass
               M2   Arm 2 mass
               A1   Arm 1 center of mass
               A2   Arm 2 center of mass
               I1   Arm 1 moment of inertia (about c.m.)
               I2   Arm 2 moment of inertia (about c.m.)
   NOTES   : * in kg-m-s units
   HISTORY : 12 Oct 95  D.Rathbun  created
             18 Nov 95  D.Rathbun  altered for MatrixX real time data types
             10 Dec 95  D.Rathbun  updated for non-ANSI compliant compilers
---------------------------------------------------------------------*/

/* standard includes */
#include <math.h>

/* local includes */
#include "properties.h"

void set_mass_properties( mass )
    struct MASS_PROPERTIES *mass;
{
    mass->L1 = 0.6325;
    mass->L2 = 0.6012;
    mass->M1 = 3.6745;
    mass->M2 = 1.0184;
    mass->A1 = 0.3365;
    mass->A2 = 0.2606;
    mass->I1 = 0.3703;
    mass->I2 = 0.0811;

    return;
}

2.4. Singular State Transition LQR Solution

function [K,S] = dlqrdelay(A,B,Q,R);
% DLQRDELAY  Linear quadratic regulator design for discrete-time
%            systems.
%
% >> K = dlqrdelay( A, B, Q, R );
%
% K is the optimal feedback gain matrix such that the feedback
% law u[n] = -Kx[n] minimizes the cost function
%
%    J = Sum {x'Qx + u'Ru}
%
% subject to the constraint equation:
%
%    x[n+1] = A x[n] + B u[n]
%
% DLQRDELAY differs from DLQR in that the system transition
% matrix is allowed to be singular. This allows for solutions
% when the system contains a pure delay. System eigenvalues at
% Z = -1 are not allowed.
%
% Based on a brute force iteration method. This method is
% _much_ slower than the normal eigen decomposition method.

% David Rathbun, [email protected]
% AA Department, U of Washington

% Check the inputs


[m,n] = size(Q);
if (m==n), state_size = m; else, error('Q not square'); end;

[m,n] = size(R);
if (m==n), input_size = m; else, error('R not square'); end;

% Set the initial conditions
S = Q;
Mp = B'*S*B + R;
M = S - S*B*inv(Mp)*B'*S;

K = zeros(input_size,state_size);
sol_norm = 0;
keep_looping = 1;

% Main Loop
while (keep_looping==1),

    % Do 10 iterations on the solution
    for i = 1:10,
        S = A'*M*A + Q;
        Mp = B'*S*B + R;
        M = S - S*B*inv(Mp)*B'*S;
    end;

    % Check the degree of convergence of the solution
    K = inv(B'*S*B + R)*B'*S*A;
    sol_norm_last = sol_norm;
    sol_norm = norm(K);

    per_error = abs(sol_norm - sol_norm_last) / ...
                max([sol_norm, sol_norm_last]);
    if (per_error < 1e-7 ), keep_looping = 0; end;

end;

% ----- end ------


2.5. LQR Compensator Solution

% cont_LQG
%
% generates the tip position controller for the two link
% flexible manipulator using LQR/LQE techniques

DT = 0.030;
theta1 = 0;          % nominal joint angles
theta2 = pi*0.40;

% ===== Plant Model construction ====================================

% Get the continuous plant model
[Ap,Bp,Cp,Dp,xep,yep] = make_lin_model( theta1, theta2 );

% Convert to discrete with a time delay
[Az,Bz,Cz,Dz] = c2dt(Ap,Bp,Cp,DT,2*DT);

% Add additional outputs of system modes using eigenvector weighting
[Vec,Lam] = eig(Az);
ino = find( diag(Lam) ~= 0 );
Lam = Lam(ino,ino);
t_ang = imag(diag(Lam))./real(diag(Lam));
[t_ang2,inx] = sort(abs(t_ang));
s_inx2 = max(size( find(t_ang2 == 0 ) ));

VecI = pinv( Vec );
Cvec = real( VecI(ino(inx(1:s_inx2)),:) );
Cvec(s_inx2+1,:) = ...
    real(VecI(ino(inx(s_inx2+1)),:)+VecI(ino(inx(s_inx2+2)),:));
Cvec(s_inx2+2,:) = ...
    real(VecI(ino(inx(s_inx2+3)),:)+VecI(ino(inx(s_inx2+4)),:));
Cvec(s_inx2+3,:) = ...
    real(VecI(ino(inx(s_inx2+5)),:)+VecI(ino(inx(s_inx2+6)),:));
Cvec(s_inx2+4,:) = ...
    real(VecI(ino(inx(s_inx2+7)),:)+VecI(ino(inx(s_inx2+8)),:));

[outS1,stateS1] = size(Cz);
Cz = [ Cz ; Cvec ];
[outS2,stateS2] = size(Cz);
Dz = [ Dz ; zeros(outS2-outS1,2) ];

% Create a set of parallel single pole low-pass filters
wp = 3.5;
[a1,b1,c1,d1] = tf2ss(wp*[1],[1 wp]);
[At,Bt,Ct,Dt] = c2dm(a1,b1,c1,d1,DT,'tustin');
[Alp,Blp,Clp,Dlp] = append(At,Bt,Ct,Dt,At,Bt,Ct,Dt);

% Add the filters to the plant model inputs
[a1,b1,c1,d1] = append(Alp,Blp,Clp,Dlp,Az,Bz,Cz,Dz);
fb = [ 3 1
       4 2 ];
ins = [ 1 2 ];
outs = [ 3:outS2+2 ];
[Azf,Bzf,Czf,Dzf] = connect(a1,b1,c1,d1, fb, ins, outs );

% ===== Control ===================================================

% Define the output weighting matrices
Q = diag([ 0 0 ...            %tip-pos
           0 0 ...            %joints
           0 0 0 0 ...        %assumed-modes
           800 200 0 0 ...    %rb-modes
           1 0.1 0.001 0 ...  %real-modes
         ]);
R = diag([ 10 20 ]);

% Convert to the state weighting matrices
Qp = Czf'*Q*Czf;
Np = Czf'*Q*Dzf;
Rp = R + Dzf'*Q*Dzf;

% Generate the steady state LQR control gain
if ( max(max(abs(Np))) == 0 ),
    [ K ] = dlqrdelay(Azf,Bzf, Qp,Rp );
else,
    error('Can''t handle cross terms in the regulator solution');
end;

% Compute the reference input gain matrices
stateS3 = max(size(Azf));
AREF = [ Azf-eye(stateS3)  Bzf
         Czf(1:2,:)        zeros(2,2) ];
BREF = [ zeros(stateS3,2)
         eye(2) ];
NN = AREF\BREF;

Nx = NN(1:stateS3,:);
Nu = NN(stateS3+1:stateS3+2,:);

% ===== Estimator ===================================================

% Get the measurement/process noise
Q = 1e-5*diag([ 2 8 ]);

q1 = 0.05e-3;
cov1 = q1^2/12*2*4;

R = diag([cov1 cov1]);

Cza = Czf([1 2],:);

[kt,st] = dlqrdelay( Azf', Cza', Bzf*Q*Bzf', R );
L = (st'*Cza')/(Cza*st'*Cza'+R);

% ===== Assembly ====================================================

% Assemble the controller
Ac = (Azf - Bzf*K)*(eye(size(Azf)) - L*Cza);
Bc = [ Bzf*(K*Nx+Nu)  (Azf-Bzf*K)*L ];
Cc = -K*( eye(size(Azf)) - L*Cza );
Dc = [ K*Nx+Nu  -K*L ];

% Add the low pass filters
[a1,b1,c1,d1] = append(Ac,Bc,Cc,Dc,Alp,Blp,Clp,Dlp);
[Ac,Bc,Cc,Dc] = connect(a1,b1,c1,d1,[5 1;6 2],[1:4],[3 4]);

% Clean up
clear AREF BREF At Bt Ct Dt Azf Bzf Czf Dzf Cza
clear NN Np Nu Nx Q Qp R Rp a1 b1 c1 d1
clear Vec VecI Lam ino ins inx t_ang t_ang2
clear outS1 outS2 stateS1 stateS2 stateS3
clear cov1 q1 fb outs kt st s_inx2

% Save the controller
save controller Ac Bc Cc Dc DT


Appendix 3. Equation Derivations

3.1 Jacobian

\[
v_b = v_a + \dot{r} + \omega \times r
    = \begin{bmatrix} -L_1 s\theta_1 \,\dot{\theta}_1 \\ L_1 c\theta_1 \,\dot{\theta}_1 \end{bmatrix}
    + \begin{bmatrix} 0 \\ 0 \end{bmatrix}
    + \begin{bmatrix} -L_2 s\theta_{12} (\dot{\theta}_1 + \dot{\theta}_2) \\ L_2 c\theta_{12} (\dot{\theta}_1 + \dot{\theta}_2) \end{bmatrix}
\]
\[
    = \begin{bmatrix} -L_1 s\theta_1 - L_2 s\theta_{12} & -L_2 s\theta_{12} \\
                       L_1 c\theta_1 + L_2 c\theta_{12} & L_2 c\theta_{12} \end{bmatrix}
      \begin{bmatrix} \dot{\theta}_1 \\ \dot{\theta}_2 \end{bmatrix}
\tag{44}
\]

3.2 Rigid Body Equations of Motion

Lagrangian Formulation

\[
\frac{d}{dt}\!\left( \frac{\partial L}{\partial \dot{\theta}_i} \right)
- \frac{\partial L}{\partial \theta_i} = \tau_i,
\qquad i = 1, 2
\tag{45}
\]

Lagrangian

\[
\begin{aligned}
L = {} & \tfrac{1}{2} m_1 a_1^2 \dot{\theta}_1^2
       + \tfrac{1}{2} I_1 \dot{\theta}_1^2
       + \tfrac{1}{2} I_2 \left( \dot{\theta}_1^2 + 2\dot{\theta}_1\dot{\theta}_2 + \dot{\theta}_2^2 \right) \\
       & + \tfrac{1}{2} m_2 \left[ \left( L_1^2 + 2 L_1 a_2 c\theta_2 + a_2^2 \right) \dot{\theta}_1^2
       + 2 a_2 \left( a_2 + L_1 c\theta_2 \right) \dot{\theta}_1\dot{\theta}_2
       + a_2^2 \dot{\theta}_2^2 \right]
\end{aligned}
\tag{46}
\]

Resulting Equations

\[
\begin{bmatrix} m_{11} & m_{12} \\ m_{12} & m_{22} \end{bmatrix}
\begin{bmatrix} \ddot{\theta}_1 \\ \ddot{\theta}_2 \end{bmatrix}
+ \begin{bmatrix} n_1 \\ n_2 \end{bmatrix}
= \begin{bmatrix} \tau_1 \\ \tau_2 \end{bmatrix}
\tag{47}
\]

\[
\begin{aligned}
m_{11} &= m_1 a_1^2 + m_2 \left( L_1^2 + 2 L_1 a_2 c\theta_2 + a_2^2 \right) + I_1 + I_2 \\
m_{12} &= m_2 a_2 \left( a_2 + L_1 c\theta_2 \right) + I_2 \\
m_{22} &= m_2 a_2^2 + I_2 \\
n_1 &= - m_2 L_1 a_2 s\theta_2 \left( 2\dot{\theta}_1\dot{\theta}_2 + \dot{\theta}_2^2 \right) \\
n_2 &= m_2 L_1 a_2 s\theta_2 \, \dot{\theta}_1^2
\end{aligned}
\]

3.3 PSD Integrals

Integral formulas from the CRC Handbook

\[
\int \frac{x^2\,dx}{b x^2 + a} = \frac{x}{b} - \frac{a}{b}\int \frac{dx}{b x^2 + a},
\qquad
\int \frac{dx}{b^2 x^2 + a^2} = \frac{1}{ab}\tan^{-1}\!\left(\frac{bx}{a}\right)
\tag{48}
\]

give

\[
2 \int_0^{\omega} \frac{x^2 + b^2}{x^2 + a^2}\,dx
= \left[ 2x + \frac{2\left(b^2 - a^2\right)}{a}\tan^{-1}\!\left(\frac{x}{a}\right) \right]_0^{\omega}
= 2\omega + \frac{2\left(b^2 - a^2\right)}{a}\tan^{-1}\!\left(\frac{\omega}{a}\right)
\tag{49}
\]

and

\[
2N \int_0^{\infty} \frac{a^2}{x^2 + a^2}\,dx
= \left[ 2N a \tan^{-1}\!\left(\frac{x}{a}\right) \right]_0^{\infty}
= 2N a \,\frac{\pi}{2}
= N a \pi
\tag{50}
\]

3.4 Second Order Slew Commander

Assuming a maximum limit on the single axis acceleration and rate, the optimal slew profile will look as follows:

[Figure: trapezoidal slew profile. Position, rate, and acceleration versus time, with the acceleration, coast, and deceleration phase durations t1, t2, and t3.]

If there is no coast phase, the base equations are:

\[
\begin{aligned}
x_1 &= x_o + v_{xo} t_1 + \tfrac{1}{2}\alpha_m t_1^2 \\
v_{x1} &= v_{xo} + \alpha_m t_1 \\
x_f &= x_1 + v_{x1} t_3 - \tfrac{1}{2}\alpha_m t_3^2 \\
v_{xf} &= v_{x1} - \alpha_m t_3 = 0
\end{aligned}
\tag{51}
\]

which solved for the time periods gives:

\[
t_1 = -\frac{v_{xo}}{\alpha_m}
    + \sqrt{\frac{v_{xo}^2}{2\alpha_m^2} + \frac{\Delta x}{\alpha_m}},
\qquad
t_3 = \sqrt{\frac{v_{xo}^2}{2\alpha_m^2} + \frac{\Delta x}{\alpha_m}},
\qquad \text{where } \Delta x = x_f - x_o
\tag{52}
\]

A coast phase is needed if the maximum attained velocity is greater than the allowable velocity. This gives the test:

\[
v_m < v_1 = v_{xo} + \alpha_m t_1
           = \sqrt{\tfrac{1}{2} v_{xo}^2 + \alpha_m\,\Delta x}
\tag{53}
\]

The acceleration times are now those that increase the velocity to the maximum and back to zero:

\[
t_1 = \frac{v_m - v_{xo}}{\alpha_m},
\qquad
t_3 = \frac{v_m}{\alpha_m}
\tag{54}
\]

and the coast time is that required to match the final position at the maximum rate:

\[
t_2 = \frac{x_2 - x_1}{v_m}
    = \frac{(x_f - x_o) - (x_1 - x_o) - (x_f - x_2)}{v_m}
    = \frac{\Delta x}{v_m} - \frac{2 v_m^2 - v_{xo}^2}{2 v_m \alpha_m}
\tag{55}
\]

The suboptimal axis uses the same base equations:

\[
\begin{aligned}
y_1 &= y_o + v_{yo} t_1 + \tfrac{1}{2}\alpha_1 t_1^2 \\
v_{y1} &= v_{yo} + \alpha_1 t_1 \\
y_2 &= y_1 + v_{y1} t_2 \\
v_{y2} &= v_{y1} \\
y_f &= y_2 + v_{y2} t_3 - \tfrac{1}{2}\alpha_3 t_3^2 \\
v_{yf} &= v_{y2} - \alpha_3 t_3 = 0
\end{aligned}
\tag{56}
\]

They are solved for the accelerations, using the times as given:

\[
\alpha_1 = \frac{2\Delta y - v_{yo}\left(2 t_1 + 2 t_2 + t_3\right)}{t_1\left(t_1 + 2 t_2 + t_3\right)},
\qquad
\alpha_3 = -\frac{2\Delta y - v_{yo} t_1}{t_3\left(t_1 + 2 t_2 + t_3\right)}
\tag{57}
\]

which are applicable to both coast and no-coast (t_2 = 0) type slews.

These equations are valid for any magnitude input parameters, but they do assume that all parameters are positive. If negative inputs are to be allowed (which they must be), then certain adjustments need to be made (as an example, if v_{xo} = 0 and \Delta x < 0, then the equations are valid for -\alpha_m).

The simple test for the sign convention is as follows:

\[
\begin{aligned}
&\text{given } v_{xo} = 0: &&\text{if } \Delta x < 0 \text{ then }
    \alpha_m = -\alpha_m,\; v_m = -v_m \\
&\text{given } \Delta x = 0: &&\text{if } v_{xo} > 0 \text{ then }
    \alpha_m = -\alpha_m,\; v_m = -v_m
\end{aligned}
\]

which can be generalized to the multiple initial condition case by examining the case where both t_1 and t_2 need to be zero:

\[
\Delta x = v_{xo} t - \tfrac{1}{2}\alpha t^2,
\qquad
0 = v_{xo} - \alpha t
\tag{58}
\]

therefore

\[
2\alpha\,\Delta x - v_{xo}^2 = 0
\]

This gives the final sign convention test:

\[
\text{if } \left( 2\alpha_m\,\Delta x - v_{xo}\,|v_{xo}| \ge 0 \right)
\text{ use } +\alpha_m, +v_m; \quad \text{else use } -\alpha_m, -v_m
\tag{59}
\]

and the coast phase test must become:

\[
|v_m| < \sqrt{\tfrac{1}{2} v_{xo}^2 + \alpha_m\,\Delta x}
\tag{60}
\]


Appendix 4. Hardware Diagrams

Serial line pin-out diagram:

[Figure: 25-pin D-sub serial connector, pins 1 through 25.]

  Symbol   Line                            25 Pin          AC100
                                           Serial Cable    Breakout
  ------   -----------------------------   ------------    --------
  TxD      Transmitted Data                      2             3
  RxD      Received Data                         3             5
  RTS      Request to Send                       4             7
  CTS      Clear to Send                         5             9
  DSR      Data Set Ready                        6            11
  DCD      Received Line Signal Detector         8            15
  DTR      Data Terminal Ready                  20            14
  GND      Ground                                7            13