
Advances in Industrial Control

Matko Orsag
Christopher Korpela
Paul Oh
Stjepan Bogdan

Aerial Manipulation


Advances in Industrial Control

Series editors

Michael J. Grimble, Glasgow, UK
Michael A. Johnson, Kidlington, UK


More information about this series at http://www.springer.com/series/1412


Matko Orsag • Christopher Korpela • Paul Oh • Stjepan Bogdan

Aerial Manipulation


Matko Orsag
Laboratory for Robotics and Intelligent Control Systems, Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia

Christopher Korpela
Department of Electrical Engineering and Computer Science, United States Military Academy, West Point, NY, USA

Paul Oh
Department of Mechanical Engineering, University of Nevada Las Vegas, Las Vegas, NV, USA

Stjepan Bogdan
Laboratory for Robotics and Intelligent Control Systems, Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia

MATLAB® is a registered trademark of The MathWorks, Inc., 1 Apple Hill Drive, Natick, MA 01760-2098, USA, http://www.mathworks.com.

ISSN 1430-9491    ISSN 2193-1577 (electronic)
Advances in Industrial Control
ISBN 978-3-319-61020-7    ISBN 978-3-319-61022-1 (eBook)
https://doi.org/10.1007/978-3-319-61022-1

Library of Congress Control Number: 2017947695

© Springer International Publishing AG 2018
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Printed on acid-free paper

This Springer imprint is published by Springer Nature
The registered company is Springer International Publishing AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland


To my dear and patient wife Nives
M.O.

To my wife, Adriana
C.K.

To Jakov and Domagoj, my sons
S.B.


Series Editors’ Foreword

The series Advances in Industrial Control aims to report and encourage technology transfer in control engineering. The rapid development of control technology has an impact on all areas of the control discipline. New theory, new controllers, actuators, sensors, new industrial processes, computer methods, new applications, new design philosophies…, new challenges. Much of this development work resides in industrial reports, feasibility study papers, and the reports of advanced collaborative projects. The series offers an opportunity for researchers to present an extended exposition of such new work in all aspects of industrial control for wider and rapid dissemination.

Aerial manipulation is an “emerging technology” that, although sounding futuristic, undoubtedly has much potential for applications development. The material presented in this monograph is conceptual and theoretical but is also grounded in today’s technology and hardware; there are rotorcraft applications presented in various sections of the monograph. The monograph focuses on a thorough presentation of basic concepts and physical principles, mathematical models, and demonstrations by worked examples. The chapter titles demonstrate quite clearly how the authors plan to advance the reader’s understanding of the structure and operation of generic aerial manipulators:

Chapter 1 State of the Art; Chap. 2 Coordinate Systems and Kinematics; Chap. 3 Aerodynamics and Rotor Actuation; Chap. 4 Manipulator Kinematics; Chap. 5 Aerial Manipulator Dynamics; Chap. 6 Sensors and Control; Chap. 7 Mission Planning and Control.

Of particular interest to the control specialist, the topics of Chap. 6 include schemes for a modular structure of sensing and control tasks. There are some low-level control loops that use PID control and high-level control loops that use standard PID control. This chapter also introduces the sophisticated control methods of gain scheduling, model reference adaptive control (MRAC), and robust adaptive control to tackle more complicated system-operational issues.

The experienced authorial team comprises: Matko Orsag, who is an Assistant Professor in the Laboratory for Robotics and Intelligent Control Systems at the University of Zagreb, Croatia. He has spent time at Drexel University, USA, pursuing robotics and UAV research.


Christopher Korpela is an Academy Professor at the United States Military Academy at West Point, USA, and serves as Deputy Director of the Robotics Laboratory at that institution. Stjepan Bogdan is a Full Professor, also on the academic staff of the Laboratory for Robotics and Intelligent Control Systems at the University of Zagreb. He was the lead author of the monograph Manufacturing Systems Control Design (ISBN 1-84628-982-9, 2006), published in the Advances in Industrial Control series. More recently, he has refocused his research on robotics and related autonomous systems. Finally, Paul Oh is Lincy Professor of Unmanned Aerial Systems at the University of Nevada, Las Vegas, USA. He has extensive professional experience with the aerial manipulator topic. Mention should also be made of Prof. Anibal Ollero, a distinguished Spanish scientist at the Escuela Técnica Superior de Ingeniería of the University of Seville, Spain. He contributed the State-of-the-Art review found in Chap. 1.

This monograph Aerial Manipulation will be of considerable interest to researchers in the fields of robotics, aerospace, and control. Those researchers who are at postgraduate, postdoctoral, and faculty staff level will appreciate the conceptual presentation of the authors and may find inspiration for new projects. There may be some interest from final-year undergraduate students seeking novel project material. It is a fascinating subject, and it may be that “do-it-yourself” hobbyists in aerial systems and rotorcraft technology will find inspiration for practical projects from the monograph too.

The editors of the Advances in Industrial Control monograph series would be delighted to add further monographs describing the theory, laboratory research, and industrial applications in this interesting emerging field of robotic endeavor. Meantime, the authors are to be congratulated on producing a first volume on aerial manipulation for the Advances in Industrial Control series.

M.J. Grimble
M.A. Johnson

Industrial Control Centre
University of Strathclyde
Glasgow, Scotland, UK


Preface

As scholars we have the rare privilege to take part in the excitement that students feel when their thoughts and abstract mathematical forms are implemented on electromechanical systems—motors start to spin, parts start to move, image processing reveals hidden features, the systems become ‘alive’. No matter how many times we have witnessed those thrills, again and again these small “miracles” and students’ passion make teaching and research so rewarding. And this is especially true for robotics, which is considered likely to become one of the most influential technologies in the decades to come. The rapid development of electronics, the emergence of new materials, and advances in computer science provide for the implementation of complex algorithms and structures that are capable of raising the cognitive and manipulative abilities of robots to a new level and introducing them into new fields.

One of these new fields of robotics is so-called aerial robotics—technology that provides services and facilitates the execution of tasks (such as observation, inspection, mapping, search and rescue, maintenance) by using unmanned aerial vehicles equipped with various sensors and actuators. While some of these services have already been put into practice (e.g., aerial inspection and aerial mapping), others (like aerial manipulation) are still at the level of laboratory experimentation on account of their complexity. The ability of an aerial robotic system to interact physically with objects within its surroundings completely transforms the way we view applications of unmanned aerial systems in near-earth environments. This change in paradigm, conveying such new functionalities as aerial tactile inspection; aerial repair, construction, and assembly; aerial agricultural care; and aerial urban sanitation, requires an extension of current modeling and control techniques as well as the development of novel concepts.

Working for more than ten years in the field, we have discerned the expeditious growth of scientific publications related to aerial robotics—special sessions and workshops have been organized as part of major robotics conferences, leading journals in the field have published special issues on the topic, and aerial robotics competitions and challenges have been arranged. All this has been closely followed by articles for the general public aimed at the popularization of this new scientific field and, at the same time, by the rise of hundreds of small companies eager to commercialize the latest findings.


Even though far from being mature, aerial robotics slowly but surely is becoming a very important aspect in the creation of novel industries that will mark this century. This book is a modest attempt to provide an in-depth treatment of aerial manipulation—the most complex area of aerial robotics. Covering all the steps, from the physical basics of rigid body kinematics and dynamics, through modeling of an unmanned aerial vehicle equipped with a dexterous manipulator, to the description of aerodynamic phenomena associated with propulsion systems and the design of complex control compositions, this book is a sound foundation for a newcomer in the field and at the same time represents complementary material for researchers seeking to enhance expertise in the field of aerial manipulation.

Careful selection of the fundamental elements of rigid body dynamics and kinematics, as well as essential principles of aerodynamics, provides a well-balanced background for effective and efficient design of unmanned aerial manipulation systems. A systematic presentation of control techniques and aerial robotic system control structures provides a blueprint for immediate implementation in real-world problems. Easy-to-follow exercises and examples offer students and researchers unique insight into the practice of modeling and control of aerial robotic systems.

We hope that our text will help in understanding the phenomena encountered in aerial robotics, thus eliciting exciting moments and encouraging engineering “miracles” in research laboratories and industrial facilities.

Zagreb, Croatia
December 2016

Matko Orsag
Christopher Korpela
Paul Oh
Stjepan Bogdan


Acknowledgements

Many individuals have contributed to this book. We are indebted to the students who contributed by performing some of the simulations and practical experiments while doing their student projects or working on their diploma and master theses. This list includes, in particular, Tomislav Haus, Antun Ivanovic, and Marko Car. We would like to thank Prof. Anibal Ollero, one of the leading experts in the field, for his support and willingness to participate as the author of the introductory chapter of this book.


Contents

1 Introduction
  1.1 The State of the Art and Future of Aerial Robotics
    1.1.1 Physical Interactions
    1.1.2 Aerial Manipulation
    1.1.3 The Design of Aerial Manipulation Systems
    1.1.4 Applications
    1.1.5 Conclusions and Future of Aerial Robotics
  1.2 Structure of the Book
  References

2 Coordinate Systems and Transformations
  2.1 Coordinate Systems
    2.1.1 Global Coordinate System
    2.1.2 Local Coordinate System
    2.1.3 Coordinate System Representation
  2.2 Coordinate Transformations
    2.2.1 Orientation Representation
    2.2.2 Euler Angles
    2.2.3 Change of Frame
    2.2.4 Translation and Rotation
  2.3 Motion Kinematics
    2.3.1 Linear and Angular Velocities
    2.3.2 Rotational Transformations of a Moving Body
  References

3 Multirotor Aerodynamics and Actuation
  3.1 The Aerodynamics of Rotary Flight
    3.1.1 Momentum Theory
    3.1.2 Blade Element Theory
  3.2 Different Multirotor Configurations
    3.2.1 Coplanar Configuration of Propulsors
    3.2.2 Independent Control of All 6 DOF
  3.3 Aerial Manipulation Actuation
    3.3.1 DC Motor
    3.3.2 Brushless DC Motor
    3.3.3 Servo Drives
    3.3.4 2-Stroke Internal Combustion Engine
  References

4 Aerial Manipulator Kinematics
  4.1 Manipulator Concept
  4.2 Forward Kinematics
    4.2.1 DH Kinematic Parameters
    4.2.2 The Arm Equation
    4.2.3 Moving Base Frame
  4.3 Inverse Kinematics
    4.3.1 Tool Configuration
    4.3.2 Existence and Uniqueness of Solution
    4.3.3 Closed-Form Solutions
    4.3.4 Iterative Methods
  4.4 Inverse Kinematics Through Differential Motion
    4.4.1 Jacobian Matrix
    4.4.2 Inverse Kinematics—Jacobian Method
    4.4.3 Inverting the Jacobian
  References

5 Aerial Manipulator Dynamics
  5.1 Newton–Euler Dynamic Model
    5.1.1 Forward Equations in Fixed Base Coordinate System
    5.1.2 Forward Equations in a UAV (Moving) Coordinate System
    5.1.3 Multiple Rigid Body System Mass and Moment of Inertia
    5.1.4 Backward Equations
  5.2 Lagrange–Euler Model
    5.2.1 Aerial Robot Kinetic Energy
    5.2.2 Moment of Inertia
  5.3 Dynamics of Aerial Manipulator in Contact with Environment
    5.3.1 Momentary Coupling
    5.3.2 Loose Coupling
    5.3.3 Strong Coupling
  References

6 Sensors and Control
  6.1 Sensors
    6.1.1 Inertial Measurement Unit
    6.1.2 Cameras
    6.1.3 GPS
    6.1.4 Motion Capture
  6.2 Sensor Fusion
    6.2.1 Attitude Estimation
    6.2.2 Position Estimation
  6.3 Linear Control System
    6.3.1 Attitude Control
    6.3.2 Position Control
  6.4 Robust and Adaptive Control Applications
    6.4.1 Gain Scheduling
    6.4.2 Model Reference Adaptive Control
    6.4.3 Backstepping Control
    6.4.4 Hsia - a Robust Adaptive Control Approach
  6.5 Impedance Control
  6.6 Switching Stability of Coupling Dynamics
  References

7 Mission Planning and Control
  7.1 Path Planning
    7.1.1 Trajectory Generation
  7.2 Obstacle-Free Trajectory Planning
    7.2.1 Local Planner
    7.2.2 Global Planner
  7.3 Vision-Guided Aerial Manipulation
    7.3.1 Autonomous Pick and Place
    7.3.2 Transforming Camera-Detected Pose to Global Pose
  References

Index


Chapter 1
Introduction

1.1 The State of the Art and Future of Aerial Robotics

Aerial robotics has reached significant maturity in the last few years. Thus, the number of new unmanned aerial vehicles (UAVs) and unmanned aerial systems (UAS), applications, and companies producing them has increased substantially. The number of publications and presentations at conferences has also experienced a large increase. Moreover, almost every day we can find articles, videos, and news in the general media related to drones. For many years, UAS have been developed in three different environments:

• Industrial companies, usually large companies, which wanted to develop UAS mainly related to defense and security.

• Small companies developing UAS, in many cases remotely operated, for applications such as inspection, mapping, filming, and many others.

• Academic institutions performing research and development in different fields including aerospace, robotics, control systems, and application fields.

The sizes of these unmanned aerial systems are very diverse, from centimeter or even millimeter scale to tens of meters of wingspan and tons of weight for the biggest systems. The flight capabilities are also quite different, with endurance from a few seconds to two weeks and range from a few meters, or even less, to more than ten thousand kilometers. There are different levels of decisional autonomy in UAS. Aerial robotics usually involves significant autonomy. In particular, the ability to autonomously generate reactions to events, track targets, recognize the environment, and reason about it is a usual goal for aerial robots. Furthermore, there are also goals related to the autonomous coordination with other aerial robots sharing the same airspace, the autonomous cooperation between them to achieve a common mission, or even swarming with a large number of aerial robots to perform missions. The evolution in decisional autonomy of unmanned aerial systems and that of industrial and service robots have similarities, as pointed out in Table 1.1.



Table 1.1 Evolution in decisional autonomy of robots and aerial systems

Autonomy level | Robots | UAS
Human control | Basic teleoperation | Remotely piloted vehicles
Programmed systems | Industrial robots | Waypoint following
Autonomous information interactions | Robots in dynamic environments | Autonomous sense and avoid, patrolling, detection, tracking
Autonomous physical interactions | Autonomous field and service robots physically interacting with the environment | Load transportation, aerial manipulation, other tasks involving physical interactions

The first row is devoted to human low-level control. The second corresponds to conventionally programmed robots, such as most industrial robots today and the UAS executing trajectories consisting of a sequence of previously defined waypoints. The third row involves autonomous information interactions, as for example many service robots operating in dynamic environments in which autonomous behaviors are needed. There are a number of UAS in which autonomous behaviors have been implemented, including autonomous “sense and avoid,” patrolling, target detection and tracking, and others. The last row involves explicitly autonomous physical interactions between the robot and the environment. This is relevant in a number of robotic applications, particularly field and service robotics, in unstructured and dynamic environments. This is also a new field in UAS, in which this book can be included.

1.1.1 Physical Interactions

In [20] the physical interactions of unmanned aerial vehicles with the environment are studied. Different applications are considered, with special attention to load transportation and aerial manipulation. Future cargo transportation without pilots has long been a motivating application for UAS. In recent years, the transportation of small loads and goods by means of multirotor systems has been demonstrated and disseminated extensively. Particular applications include the delivery of first-aid packages to isolated victims in disasters and in isolated regions.

The slung load transportation by manned helicopters is useful in many applications, such as the transportation of line towers in remote areas, or in the logging industry, which uses helicopters to transport logs in areas inaccessible by ground. Slung load transportation has also been performed by means of autonomous helicopters [5]. The weight of the load is critical because the cost of the unmanned aerial system increases exponentially with this load. In order to avoid this limitation, the joint transportation of a single load by several autonomous helicopters was proposed and demonstrated with three helicopters in the project AWARE funded by the European Commission [6]. Notice that, in this case, the physical coupling between UAVs is given by the direct interactions of each unmanned aerial vehicle with the joint load.


Fig. 1.1 Slung load transportation and deployment in the AWARE European FP6 project. Left - single load transportation. Right - joint transportation and deployment with three helicopters

Figure 1.1 shows the slung load transportation with one and three unmanned helicopters.

In the slung load transportation, the rope force causes a torque on the helicopter fuselage that depends on the orientation of the helicopter and its translational motion. When several helicopters are connected to the load, the translational and rotational motions of one particular helicopter have a direct influence on the rotational dynamics of all the other helicopters. Then, even translation with constant acceleration can cause oscillation of the angle between the rope and the helicopter axis. In [6] the sensing of the rope force is used for decoupling by means of feedforward. Then, the orientation controller becomes independent of the number of helicopters. On the other hand, air-to-ground physical interactions are relevant for many applications such as taking samples with special devices or picking from the air for transportation. This was implemented for the deployment of ground robots in the Planet FP7 project funded by the European Commission and in the experiments of Challenge 3 in the MBZIRC robotics competition, where static and mobile ground targets are picked and transported by means of multirotor systems (see Fig. 1.2).
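As a rough illustration of the coupling just described (the notation below is chosen here for clarity and is not taken from [6]), a rope attached at a point r_a of the fuselage and carrying a tension force f_r produces the torque

\[
\boldsymbol{\tau}_r = \mathbf{r}_a \times \mathbf{f}_r ,
\]

so that, if the rope force is measured, a feedforward estimate \(\hat{\boldsymbol{\tau}}_r\) can be subtracted from the attitude control command, \(\boldsymbol{\tau}_{\mathrm{cmd}} = \boldsymbol{\tau}_{\mathrm{fb}} - \hat{\boldsymbol{\tau}}_r\), which is the essence of the decoupling strategy reported above.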

In [32] avian-inspired autonomous grasping while flying was presented. The experiments were performed with a quadrotor with an actuated appendage enabling grasping and object retrieval at 2 and 3 m/s. The dynamic trajectories are planned taking into account the differential flatness property of the nonlinear dynamic model of the system.


Fig. 1.2 Picking while flying. Top - transportation of a ground robot in the PLANET European FP7 project. Bottom - MBZIRC Challenge 3 experiments of the University of Seville and FADA-CATEC

An application involving physical contact between the aerial system and ground objects is the cleaning of surfaces [2]. Here, a quadrotor carries a cleaning brush and applies contact forces to a wall. An extra propeller on the quadrotor provides horizontal thrust to fly in close proximity to the wall. In addition to the quadrotor flight control system, a microcontroller is used to collect and process the data from the ultrasonic ranger facing the wall and to implement the control loops for the horizontal propeller.

1.1.2 Aerial Manipulation

The next step in the aerial–ground interaction is aerial robotic manipulation, in which the aerial robots are equipped with robotic manipulators to perform tasks, such as assembly or contact inspection, in locations that cannot be accessed or that are very dangerous or costly to access from the ground.


In [7] the grasping of objects on the ground by means of a flying helicopter with a gripper mechanism mounted ventrally between the aircraft’s skids is presented. The landing gear is raised during grasping to avoid contact. An underactuated compliant gripper allows for positional errors between the helicopter and the target object. To acquire an object, the helicopter approaches the target, descends vertically to hover over the target, and then closes its gripper. Once a solid grasp is achieved, the helicopter ascends with the object. In [11, 23] a quadrotor system with a simple manipulator for indoor picking and basic manipulation is presented. The work includes cooperative assembly with several aerial robots. The construction of cubic structures with quadrotor teams has also been presented. In the same year, the European FP7 projects AIRobots, dealing with aerial robots for contact inspection, and ARCAS, on cooperative assembly by means of aerial robots, started. In [21] the progress and results toward a design and physical system to emulate mobile manipulation by a UAV with dexterous arms and end effectors are presented. The work includes a hybrid quadrotor-blimp prototype to test some manipulation concepts. In the same year, the first successful grasping experiments with a helicopter equipped with the 14 kg, 7 Degrees of Freedom (DOF) KUKA-DLR LWR arm and visual servoing, developed in the ARCAS project, were presented for the first time (Fig. 1.3). Later, this platform was evolved in ARCAS by adopting a Flettner helicopter configuration (Fig. 1.4). Also in [12, 22] an aerial robot with a small arm maintaining contact with a surface was reported. In [3] a flying robot capable of depositing polyurethane expanding foam is presented. Aerial robotic manipulation has progressed significantly in recent years. Thus, new control techniques and the first multirotors with 6 and 7 joint arms [18, 28] (Fig. 1.5) were presented. In particular, a very light manipulator with 6 DOF was developed [26], integrated in a platform with four pairs of rotors, and demonstrated indoors (Fig. 1.5 left). A commercial 7 DOF robotic arm was also integrated in a bigger platform with four pairs of rotors (Fig. 1.5 right) and demonstrated outdoors. In both cases, the control systems played an important role in the compensation of the motion of the arm. A technique to facilitate the control system activity was the motion of the battery package in the opposite direction to the arm. Later, new aerial manipulators were also presented and demonstrated for applications such as opening a drawer (see, e.g., [31]).

Flying manipulation is usually performed close to the limits of the working space of the manipulator, which generates problems and requires suitable techniques to compute inverse kinematics [1]. The motion of the manipulator involves a change of the Centre of Gravity (CoG) position and hence the generation of torques on the manipulator base, which should be compensated. The physical interaction with the environment also generates torques to be compensated. One approach to solving the control problem is to consider the aerial platform and the robotic manipulator as two independent systems, with the control objective of decreasing the influence of the motion of one entity on the other. Here, impedance control methods can be applied to control multirotor systems [27]. Another approach is to consider information on the state variables of one system (i.e., the joints of the robotic manipulator) in the control of the other (i.e., the aerial platform), as presented in [18] by using backstepping, or in [28] by means of admittance control. These techniques can also be applied to helicopters with a robotic arm, as shown in [19].


Fig. 1.3 First aerial manipulation experiments with an autonomous helicopter and the LWR arm at DLR in the ARCAS European FP7 project

Fig. 1.4 Helicopter with Flettner configuration and the LWR arm at DLR performing experiments in the ARCAS FP7 European project

Fig. 1.5 First multirotors with 6 (left) and 7 (right) DOF robotic arms, developed in the ARCAS FP7 European project. Left - indoor robot at the FADA-CATEC test bed. Right - outdoor robot at the University of Seville


Starting from an integrated (multirotor-arm) model, decoupling techniques can be applied to control the dynamics of the CoG and the rotational dynamics [34]. Model-predictive control has also been applied [13]. Perception for aerial manipulation requires accurate relative position/orientation estimation even in the case of low-quality uncalibrated images [4]. Learning methods using low-resolution and blurred images [29] have been applied in ARCAS. Then, visual servoing methods [30] were used for grasping and assembly.
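As a reminder of what the impedance-control approach cited above amounts to, a generic formulation (with symbols chosen here, not the notation of [27]) imposes a desired mass-spring-damper relation between the tracking error and the interaction force:

\[
\mathbf{M}_d\,\ddot{\tilde{\mathbf{x}}} + \mathbf{D}_d\,\dot{\tilde{\mathbf{x}}} + \mathbf{K}_d\,\tilde{\mathbf{x}} = \mathbf{F}_{\mathrm{ext}}, \qquad \tilde{\mathbf{x}} = \mathbf{x} - \mathbf{x}_d ,
\]

where x is the platform (or end-effector) position, x_d its reference, F_ext the measured interaction force, and M_d, D_d, K_d the desired apparent inertia, damping, and stiffness. The position and attitude commands are then generated so that the closed loop reproduces this target dynamics, which is how the influence of the arm motion and of contact forces on the flying base can be attenuated.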

The application of range sensors has proved to be very useful for positioning when no visual marks are visible from the on-board cameras. The application of range-only simultaneous localization and mapping (RO-SLAM) techniques, with radio emitters in the objects to be manipulated, has been used to obtain a positioning accuracy of about half a meter [8]. The combination with marker-based visual robot localization, using cameras on board the aerial robots, can be used to obtain accuracies of a few centimeters. Regarding planning for aerial manipulation, the work in ARCAS also played an important role. Three levels have been implemented: mission planning, task planning, and trajectory planning. In mission planning, ARCAS developed an assembly planner for structure construction [10], which was integrated with task planning (geometric and symbolic) for several aerial robotic manipulators. Furthermore, the task planner was also integrated with very efficient path planners based on modifications of the RRT method [9]. An alternative to aerial manipulation while flying is prior perching, as shown in [14], where a door is opened. Aerial manipulators can also be applied to the joint transportation of bars [25]. In order to perform this task, coordinated motion control [33] should be implemented. Figure 1.6 shows the cooperative transportation of a bar in ARCAS, involving collision detection and avoidance.

An aerial robotic manipulator with two grippers was demonstrated at the University of Zagreb (Fig. 1.7 left).

Fig. 1.6 Joint transportation of a bar in the structure construction final demonstration of the ARCAS FP7 European project


Fig. 1.7 Aerial robots with two hands (University of Zagreb) and two arms (University of Seville, in the AEROARMS H2020 European project)

Fig. 1.8 Aerial manipulator achieving stable interaction after an impact with a vertical surface in the AEROWORKS H2020 European project

The AEROARMS project is also developing advanced aerial manipulation capabilities, including dual-arm manipulation (Fig. 1.7 right), and applications for inspection and maintenance.

In [24], a 6 degree of freedom parallel manipulator is used to robustly maintain precise end-effector positions despite host UAV perturbations. The parallel manipulator has very little moving mass and is easily stowed below a quadrotor UAV. Work on the application of compliance in aerial robotic manipulation recently began using free-flying multirotor systems. In the AEROWORKS project, an arm with 2 DOF (one active and one passive) was applied [15] (Fig. 1.8). In the AEROARMS project, a compliant 3 DOF arm with a compliant finger [17] was developed (Fig. 1.9). The aerial robotic manipulator has also been demonstrated for the underside contact inspection of bridges [16], which is the subject of the H2020 AEROBI project (Fig. 1.10).


Fig. 1.9 Compliant arm and compliant finger in the AEROARMS H2020 European project

Fig. 1.10 Underside inspection of bridges performed by the University of Seville in the AEROBI H2020 European project. Top - approach and extension of the arm. The control system maintains stability in spite of the motion of the long arm. Bottom - contact and inspection by maintaining the contact with the ceiling effect generated by the bridge

1.1.3 The Design of Aerial Manipulation Systems

The design of an aerial robotic manipulation system involves many decisions, from the aerial platforms to the particular control, perception, and planning capabilities. Concerning the configuration of the aerial platform, both multirotors and helicopters have been applied. The helicopters have greater payload and longer flight time. Aerial robotic manipulators based on helicopters with different configurations, and tens of kilograms of payload, have been demonstrated outdoors (see Figs. 1.3 and 1.4). The Flettner configuration, with two main rotors and without a tail rotor, has demonstrated improvements in payload and flight endurance by avoiding the energy losses due to the tail rotor. On the other hand, the multirotor systems are characterized by their mechanical simplicity, decreasing the maintenance needs.


They can be operated in more constrained spaces, but the payload and the flight time are usually lower. The payload of conventional multirotor systems varies from hundreds of grams to a few kilograms. Multirotor systems with arms have been demonstrated indoors in different laboratories. In the ARCAS project, they were demonstrated both indoors and outdoors (see Fig. 1.5).

The selection of the appropriate devices for interaction with the environment, and particularly for manipulation, is also an important topic. In some particular grasping and sampling applications, a simple arm with a few degrees of freedom and a gripper could be enough, as it is for a ground manipulator with a fixed base. However, it is well known that general manipulation capabilities require more degrees of freedom to provide dexterity. Moreover, in aerial manipulation, these degrees of freedom are needed to compensate the unavoidable perturbations when the robot is flying near the objects being manipulated. In the ARCAS project, it has been shown that this accommodation is very important in many manipulation applications. Thus, for example, it was shown that, in the configuration of Fig. 1.3, when the manipulator compensates a displacement of the helicopter, low-frequency oscillations with increasing amplitude appeared. In this system, manipulation in the vertical plane containing the centre of gravity is important to avoid oscillations. However, in this case, the 6 DOF of a conventional arm may not be enough, requiring the seventh degree of freedom provided by the LWR arm in Fig. 1.3.

Conventional multirotor systems with coplanar rotors have inherent motion constraints that could be compensated by means of the degrees of freedom of the arms in aerial manipulation tasks. Thus, the 6 and 7 degrees of freedom of the arms in Fig. 1.5 attracted significant interest. Obviously, these arms require enough payload from the multirotor systems. In order to decrease this payload, very light arms, as shown in Fig. 1.5 left, were developed. The stable and accurate control of these arms is not a trivial task. In order to facilitate this control, rigidizers were added [26] (see Fig. 1.5 left). Aerial robotic manipulation with non-coplanar rotors can overcome the above-mentioned motion constraints of the aerial platform. Then, it could be possible to decrease the number of DOF of the manipulators. However, the joint analysis of platforms with non-coplanar rotors and aerial robotic manipulators with several degrees of freedom is still an open problem being researched in the AEROARMS project. Also in this project, dual-arm aerial manipulation is being researched (see Fig. 1.7 right).

Perception on board the aerial manipulation system is needed both for positioning of the platform and for manipulation. The positioning of the aerial robot is particularly relevant in indoor environments without accurate positioning and motion tracking, such as VICON or OptiTrack, or in other GPS-denied environments. The combination of range sensing by means of radio signals and computer vision is very useful. Thus, positioning based on radio signals could be the first stage to obtain an approximate estimate without the need for visual detection of target objects. When the aerial manipulator is close enough to the objects to be manipulated, computer vision techniques with the cameras on board the aerial robotic manipulator can be applied. Computer vision software for object detection and recognition is needed in addition to low-level vision functionalities to decrease or eliminate the effects of vibration, variable illumination conditions, and shadows.


Fast 3D modeling and tracking of 3D objects are also needed. The efficient implementation of visual servoing is very relevant for autonomous aerial manipulation. The on-board implementation is needed to avoid the time delays involved in the transmission of images and signals to a ground computer. The application of marker-based 3D pose estimation for manipulation demonstrated 0.75–1.25 cm accuracy in the ARCAS project. Marker-less visual servoing in aerial manipulation is still an open problem being researched in the AEROARMS project. Moreover, the application of cooperative perception techniques has been initiated. In this case, the pose of a visual mark or feature seen from the cameras of two different aerial robots should be computed taking into account the delay of the signals from the aerial robots. This is required when the robot that is performing the assembly operation loses the mark or feature, but the other robot observes it and sends its pose to continue with the assembly operation.

As pointed out above, assembly planning, task planning, and motion planning systems have been developed for the aerial assembly of structures. The assembly planning of a structure with about 40 bars required about one second of CPU time in the ARCAS project. On the other hand, the task allocation to 3 aerial robots to assemble a structure with 13 bars and various constraints was performed in less than one minute. Motion planners were evaluated taking optimality into account. Thus, in less than 3 s of CPU time, it is possible to obtain a first solution (up to 50% suboptimal) of the optimal planning problem. The complete planning module, involving assembly planning, task planning, and motion planning, required less than 2 min of CPU time to obtain a geometrically feasible plan. The application of several aerial robots flying at the same time while performing aerial robotic manipulation requires fast enough multi-UAV reactive collision avoidance techniques. The application of ORCA methods allows reaction times of one millisecond to be obtained in the case of 10 aerial robots.

1.1.4 Applications

The number of potential applications of aerial manipulation, and in general of aerial robots physically interacting with the environment, is very large. The applications to inspection and maintenance are a relevant subset, including industrial plants with aerial facilities (e.g., pipes) and large tanks or boilers. The power generation plants (thermal generation, wind generation, and solar plants) and the distribution systems (electrical lines, pipes) can also be included. The robotic inspection and maintenance of offshore plants is also important due to the high costs involved. The inspection and maintenance of infrastructures in which aerial manipulation can play an important role include bridges, dams, towers, buildings, and aerial pipes. The applications include, for example, contact inspection to detect and measure cracks.

Aerial robotics can also support decommissioning works in nuclear installations, power stations, and fuel processing facilities.


Other potential applications are related to search and rescue, and disaster management (floods, earthquakes, fires, industrial disasters, and others), where load transportation for the delivery of first aid to victims, taking samples in contaminated sites, or the installation of cameras, sensors, and communication equipment are relevant tasks. Transportation by means of aerial robots has already been publicized, particularly for small goods. This can be applied in many logistic processes, including warehouses and factories. Finally, it should be noted that many aerial manipulation technologies can also be applied to space, and particularly to the on-orbit servicing of satellites involving flying manipulation.

1.1.5 Conclusions and Future of Aerial Robotics

To conclude, it should be noted that general autonomous dexterous aerial manipulation capabilities require the development of new accurate control systems integrated with perception and planning capabilities. Increasing the control frequency, overcoming limitations in the current commercial servo systems, is very relevant. The improvement of the control systems of aerial manipulators also requires the consideration of aerodynamic effects near surfaces (ceiling, ground, and wall aerodynamic effects). Accommodation to reject perturbations is also relevant, avoiding the effect of undesired forces and torques while performing the manipulation.

The perception system is very important both to avoid collisions and other conflicts and to improve the robustness of control systems. Furthermore, practical application requires the minimization of artificial marks or beacons, to be able to perform autonomous perception by using only natural features. This is particularly true in GPS-denied environments.

Most planning techniques in aerial robotics have been applied up to now without considering the additional degrees of freedom in the robotic manipulators. Planning of aerial robotic manipulation with the dynamic models is also still at a preliminary stage and should be improved to be applied in real time, by considering the effects of the manipulator motions on the aerial platform or even the reactions needed to compensate the forces and torques generated in the interactions. In any case, telerobotic systems with automatic stabilization and low-level control are needed for practical applications. Thus, teleoperation of aerial manipulators by using haptic interfaces is also being researched.

Obviously, safety issues are very important in aerial robotic manipulation. The application of significant forces and torques requires more powerful arms and devices with higher power demands. This typically leads to larger systems in which the safety concerns are more important, constraining the practical applications. Regulations play an important role. Currently, a number of regulations for remotely piloted aircraft systems (RPAS) have been issued in many countries, and more will be published in the short term. The design of new aerial robotic manipulator systems to meet these regulations is an important topic. The recent accelerated evolution of aerial robotics, and particularly aerial robotic manipulation, requires the analysis of the field, compilation of significant results, and elaboration of the fundamental knowledge that should be known by the increasing number of researchers and practitioners.


1.2 Structure of the Book

The book is structured around the key components (shown in Fig. 1.11) of a typical aerial robot. First of all, an aerial robot requires thrust to stay airborne, which is provided through a system of rotors strategically placed around the main body. Depending on the mission requirements, a rotorcraft aerial robot can have two, three, four, six, or more rotors, most commonly driven by electrical motors supplied with power from the electrical battery (somewhat less common, but still available, are fuel cells). For heavier payloads, 2-stroke small-scale internal combustion engines are used to drive the rotors. We visit those components in Chap. 3. In order to carry out a certain mission, aerial robots require a manipulator. In most cases, a single manipulator can do the job, but for more demanding missions a dual- or even a multimanipulator system is necessary. Chapters 4 and 5 are dedicated to an extensive analysis of aerial manipulator kinematics and dynamics. Keeping a vehicle stable during flight and mission execution requires a set of sophisticated sensors and reliable control algorithms, which are described in Chap. 6. How to plan and execute a mission is the topic of Chap. 7.

In the rest of this section, we give a brief description of each chapter of the book so that the reader is made familiar with how to approach the material; depending on her/his previous background, some chapters could be omitted. For example, a reader with proficiency in the kinematics and dynamics of standard robotic manipulators can start with Chap. 3, while a reader with background knowledge in propulsion system aerodynamics can skip the initial sections of Chap. 3 and begin with the section dedicated to the actuators.


Fig. 1.11 Basic components of an aerial robot: main body fuselage, propulsion system, power supply, landing gear, sensors, and manipulators


As one of the aims of the book is to attract newcomers to the field of aerial manipulation, in Chap. 2 the reader is introduced to the notions of rotational and translational kinematics in local and global coordinate systems, followed by the orientation conventions used throughout the book. The chapter concludes with the transformation Jacobian matrix and motion kinematics in fixed and moving coordinate systems.

Chapter 3 applies the knowledge gained in Chap. 2 to the basic physical principles of a multirotor UAV (if not specified otherwise, throughout the book the abbreviation UAV refers to a vertical takeoff and landing (VTOL) multirotor unmanned vehicle). The initial part of this chapter is dedicated to the description of the propulsion system aerodynamics. First, the momentum theory is presented. As this approach relies on the requirement that the induced velocity (at the bottom of the propeller) is known, in the second part of this subsection the propulsion system aerodynamics analysis is extended with the blade element theory. In the second part of Chap. 3, a UAV is presented in its simplest form, as a rigid body with 6 degrees of freedom (DOF) whose movement in space is described by the Newton–Euler motion equations. The presentation of a UAV under the influence of external forces and torques is structured so that the derivation of rigid body motion under various configurations of propulsors helps the reader to understand the concepts of underactuation and coupling phenomena in aerial systems.
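For orientation, the rigid-body Newton–Euler equations that this part of Chap. 3 builds on can be written in the body frame as follows (a standard textbook form; the symbols are generic and not necessarily the notation used in the book):

\[
m\,\dot{\mathbf{v}} + \boldsymbol{\omega} \times m\mathbf{v} = \mathbf{F}, \qquad
\mathbf{I}\,\dot{\boldsymbol{\omega}} + \boldsymbol{\omega} \times \mathbf{I}\boldsymbol{\omega} = \boldsymbol{\tau},
\]

where m is the vehicle mass, I the inertia matrix, v and ω the linear and angular velocities expressed in the body frame, and F and τ the total external force and torque (rotor thrusts and drag torques, gravity, and any interaction wrench). How the individual rotor thrusts and torques map into F and τ is exactly what distinguishes the coplanar and fully actuated configurations discussed in the chapter.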

The chapter concludes with a description of the actuators most often used in the propulsion systems of aerial vehicles and to generate torques in manipulator joints. The physical principles of a DC motor are given through differential equations describing the electromechanical laws, followed by a block diagram depicting the DC motor transfer functions. Based on this scheme, a simple controller design is described, supported by numerical examples. The BLDC motor, the most commonly used actuator in UAV propulsion systems, is described without going into details; only the physical principles of operation are given, while complex equations are omitted. A basic description of the electronics used in servo drives is given. At the end, a gas engine and the corresponding rotational speed control loop are described. Procedures for identifying gas engine dynamics from experimental data and the design of a simple rotational speed controller are presented.
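As a rough preview of the kind of actuator model treated in that chapter, the sketch below simulates a first-order electromechanical model of a brushed DC motor driving a propeller-like load and closes a simple PI speed loop around it. All numerical values (resistance, inductance, torque constant, inertia, gains, voltage limit) are illustrative assumptions and are not taken from the book.

```python
import numpy as np

# Assumed parameters of a small brushed DC motor with a propeller-like load
R, L = 0.5, 0.002            # armature resistance [Ohm] and inductance [H]
kt = ke = 0.01               # torque constant [Nm/A] and back-EMF constant [Vs/rad]
J, b = 2e-5, 1e-6            # rotor + load inertia [kg m^2], viscous friction [Nms/rad]

kp, ki = 0.02, 0.5           # assumed PI speed-controller gains
dt, t_end = 1e-4, 1.0        # integration step and simulation length [s]
omega_ref = 400.0            # speed reference [rad/s]

i = omega = integral = 0.0
for _ in range(int(t_end / dt)):
    # PI speed controller computes the armature voltage, limited by the supply
    error = omega_ref - omega
    integral += error * dt
    u = np.clip(kp * error + ki * integral, -12.0, 12.0)

    # Electromechanical laws:  L di/dt = u - R i - ke w,   J dw/dt = kt i - b w
    i += (u - R * i - ke * omega) / L * dt
    omega += (kt * i - b * omega) / J * dt

print(f"speed after {t_end} s: {omega:.1f} rad/s (reference {omega_ref} rad/s)")
```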

Chapter 4, together with Chap. 5, represents the core of the book. In Chap. 4, the reader is introduced to the basic formal approaches of robotic analysis: the Denavit–Hartenberg representation, composite homogeneous transformations, and forward and inverse kinematics. The chapter begins by describing the basic elements of a robotic manipulator, the links and joints, along with robotics-specific concepts like reach and workspace. Then the coordinate system layout through the Denavit–Hartenberg parameterization is outlined. The transformation between the coordinate systems is derived through the abstraction of composite homogeneous transformations, more precisely screw transformations. The reader is guided through a series of examples, starting from the simplest on-ground examples all the way to the advanced concepts of multi-degree-of-freedom aerial manipulators. This layout allows the reader to easily grasp the basic concept and expand her/his understanding to aerial robots, free to move in three-dimensional space. The concept of a flying robot with multiple degrees of freedom distinguishes this book from other similar robotics material.

In order for the reader to easily grasp the material, particular care is given to the drawings that support the derivation of all important mathematical relations. Finally, the chapter presents inverse kinematics, the mathematical approach used to calculate the joint angles with respect to the aerial robot pose in three-dimensional space. Through various examples, the reader is taught to derive these relations through the meticulous analytical and numerical approaches usually used in the robotics community.

In order for the reader to fully understand how to control an aerial robot, Chap. 5 is dedicated to the analysis and mathematical modeling of its dynamic behavior. Starting with the forward Newton–Euler equations in a fixed base coordinate system, the reader is taught how to calculate the speed and acceleration of each manipulator link, all the way to the end effector. The Newton–Euler approach is chosen because it is easy to expand to the idea of a moving (i.e., flying) base. Once the reader fully grasps the forward Newton–Euler equation derivation, the direction is reversed to calculate the forces and torques acting on each joint in the system. The backward equations of the Newton–Euler algorithm expand the simple physical concepts of torques and forces acting on a single rigid body from Chap. 3 and apply them to each joint and link of the aerial robot, starting from its end effector and propagating all the way to the flying base.

Since the moment of inertia plays a crucial role in the stability of the flying aerial manipulator, the Lagrange–Euler modeling approach is adopted to derive the centre of mass and tensor of inertia of a flying multiple-degree-of-freedom manipulator. Again, the basic principles from Chap. 2 are further advanced, and through the Jacobian transformation the moments of inertia of each link are transformed and the overall tensor of inertia is calculated. Finally, the chapter analyzes the aerial manipulator in contact with the environment. Specifically, environmental coupling is analyzed and broken into three general categories: momentary coupling, where an aerial robot interacts with objects of finite mass that are not attached to the environment; loose coupling, which fits tasks that include interacting with objects attached to the environment without perching onto them; and strong coupling, which occurs when the UAV perches onto fixed objects in the environment. The coupling analysis is conducted using classic control problems like pick and place, peg in hole, and perch and manipulate.

Chapter 6 is dedicated to the description of the sensors and control techniques used in aerial manipulation. First, a very brief overview of the inertial measurement unit, camera, GPS, and motion capture system is given, followed by a presentation of attitude and position estimation based on sensor fusion. The chapter continues with a presentation of aerial manipulator control techniques and the methods needed to guarantee the stability of the robotic aerial system. Starting from linear control approaches, the control techniques are extended with robust and adaptive control methods, which are described in detail. First, low-level control loops for attitude control are devised. Control of the UAV body attitude is achieved with a standard PID control loop. This type of PID implementation eliminates the potential damage to the actuators normally experienced when driving the control difference directly through the derivative (D) channel. It is also the control approach used in most commercial autopilots on the market. The controller parameters are devised with respect to moment of inertia and centroid variations by applying the Routh–Hurwitz stability criterion to the fourth-order transfer function of the system.

Due to the inherent similarity between attitude control and robotic joint angle control, these concepts are expanded to the manipulator joint control loop, based on current–torque control loops.

The influence of the manipulator position on the system is analyzed in the second part of the chapter, and it is demonstrated that the two major factors affecting the stability of the aircraft are the variable moment of inertia and the change in the overall centre of mass. In order to compensate for those changes, gain scheduling and model reference adaptive controllers for the low-level control loops are designed. The gain scheduling adapts the system to changes simply by monitoring the position of the manipulator joints and relating the low-level controller parameters to these auxiliary variables. At the same time, the model reference adaptive controller, based on Lyapunov stability, compensates for changes in the moment of inertia by multiplying the low-level PID controller output by an adaptive gain determined by the adaptation loop. As the backstepping controller is a very popular choice for VTOL rotorcraft, we explain this control technique in detail as applied to aerial robotic applications. Finally, the chapter presents robust adaptive control techniques for wind disturbance rejection, necessary to fly the aerial robot outdoors. Again, due to their similarities, the concept of adaptive robust Hsia control, known within the robotics community, is introduced and applied to both the manipulator joint angle and UAV attitude control loops. At the end of the chapter, a brief discussion of impedance control is given, as well as the stability analysis of a system that exhibits discontinuity in the dynamics due to switching between contact with the environment and contact-free flight.

The closing chapter of the book, Chap. 7, is dedicated to the topic of path planning and its time-driven counterpart, trajectory planning. We start from simple waypoint navigation in an uncluttered environment, and later focus on obstacle-free trajectory generation. First, a brief explanation of the RRT algorithm, based on its original version, is given, followed by a description of the global planner and vision-guided aerial manipulation. We conclude the book with two examples of vision-based object tracking and manipulation.


Chapter 2
Coordinate Systems and Transformations

2.1 Coordinate Systems

This chapter describes the coordinate systems used in depicting the position and orientation (pose) of the aerial robot and its manipulator arm(s) in relation to itself and its environment. A reference frame provides a relationship in pose between one coordinate system and another. Every reference frame can relate back to a universal coordinate system. All positions and orientations are referenced with respect to the universal coordinate system or with respect to another Cartesian coordinate system [3].

In Table 2.1 we give the list of variable nomenclature used throughout this chapter and the remainder of the book.

The angles of rotation about the center of mass of the aerial robot make up the overall attitude of the body. In order to track the changes of these attitude angles while the body is in motion, two coordinate systems are required. The body frame {B} coordinate system is attached in the vicinity of the geometric center of the robot and typically aligned with the z-axis of the vehicle. The world frame {W} coordinate system is fixed to the earth and is taken as an inertial coordinate system [5].

2.1.1 Global Coordinate System

The world inertial frame {W} is fixed with its origin, L_W, at a known location on earth. The inertial frame follows an East-North-Up (ENU) convention, with the x_W axis pointing east, y_W pointing north, and z_W pointing up. The inertial frame is shown in Fig. 2.1.

The North-East-Down convention is often used in aviation systems. For the purposes of an aerial robot affixed with manipulators, an ENU convention will be used throughout the aircraft and manipulator coordinate frames. An ENU convention is used for a number of reasons. First, an aerial manipulator involves multiple links.


Table 2.1 Table of variable nomenclature

Coordinate frames
  L                  Origin of reference frame
  W                  World or inertial frame
  B                  Body or local frame
  T                  Tool or end-effector frame

Three-dimensional quantities
  x_i, y_i, z_i      Coordinate axes of frame i
  Ψ, Θ, Φ            Orientation axes
  ψ, θ, φ            Orientation (roll, pitch, yaw angle)
  p^i_j              Position of the i frame in the j frame
  θ^i_j              Orientation of the i frame in the j frame
  R^i_j              Rotation of the i frame in the j frame
  T^i_j              Transformation of the i frame in the j frame

Joint space variables (n-dimensional vectors, where n is the number of DOF)
  q                  Joint position
  q̇                  Joint velocity
  q̈                  Joint acceleration
  τ                  Joint torque
  J                  Jacobian matrix
  w                  6-DOF tool pose vector

Spatial variables
  v                  Velocity of a rigid body
  a                  Acceleration of a rigid body
  m                  Mass of a rigid body
  f                  Force acting on a rigid body
  I                  Inertia tensor of a rigid body

Control variables
  K_i                PID control gains: K_P proportional, K_I integral, K_D derivative

Misc. variables
  S_θ, C_θ, T_θ      sin(θ), cos(θ), tan(θ)
  β                  Aerodynamic drag
  g                  Scalar value of gravitational acceleration


Fig. 2.1 Global (inertial) coordinate frame {W} for the aerial robot. The origin is denoted as L_W. The frame is fixed, and all other frames refer back to the global frame. The frame is right-handed with the z-axis pointed upward

These links are often described using Denavit–Hartenberg notation [4] (further detailed in Chap. 4). A right-handed coordinate system with the z-axis upward facilitates consistency, where every frame is the same. Ground robots are commonly labeled in this manner. Further, a right-handed coordinate system also allows for easier tracking of the tool-tip or end-effector frame.

2.1.2 Local Coordinate System

A generalized 6-degree-of-freedom coordinate system is utilized to represent the pose of the aerial robot. The vehicle or body reference frame {B} is placed at the vehicle center of mass, geometric center, or at the top of the vehicle body. In almost all cases, the body frame z-axis is coincident with the z-axis of the vehicle, as shown in Fig. 2.2. The axes of the local body frame align with those of the global world frame. For example, z_W and z_B both point up. A local coordinate system and body frame are needed when using fixed sensors that measure quantities relative to the vehicle's body, such as inertial measurement and ranging sensors. Figure 2.2 also introduces a graphical representation of the local body frame which depicts the principal axes as unit vectors. Further, an arrow representing another vector is drawn from one frame to the next to indicate the change in position and how the frames relate to each other. A local frame attaches to the rigid body to relate both position and orientation back to the previous frame.


Fig. 2.2 Local (body) coordinate frame {B} for the aerial robot where the z-axis points up. The origin is L_B

2.1.3 Coordinate System Representation

With the global and local coordinate systems established, the position and orientation of any point in space can be described using a frame, which is the relationship between one coordinate system and another. The frame contains four vectors: one for position and three vectors that describe the orientation, commonly known as a 3 × 3 rotation matrix. For example, the body frame {B} is described by the position vector p with respect to the world frame {W}, with a frame origin of L. In this textbook, the trailing superscript is the frame being referenced and the trailing subscript is the frame it is expressed with respect to. The position of the body frame with respect to the world or inertial reference frame can be expressed in standard form as:

\[
\mathbf{p}^B_W = \begin{bmatrix} x \\ y \\ z \end{bmatrix} \qquad (2.1)
\]

where x, y, and z are the individual components of the vector. Figure 2.3 illustrates the description of the position of the body frame with respect to the world frame [2, 8].

In addition to position, the orientation of the point in space must also be described. The position of the body frame is described by the components of a vector, whereas the orientation of the body must be described using the attached body frame {B}. The rotations around the axes are roll, pitch, and yaw for the x-axis, y-axis, and z-axis, respectively, represented symbolically by the angles ψ, θ, and φ [10, 13].


Fig. 2.3 Position of the body frame with respect to the world frame

Fig. 2.4 Orientation of the body frame with respect to the world frame
The orientation of the body frame with respect to the world or inertial reference frame can be expressed in standard form as:

\[
\boldsymbol{\theta}^B_W = \begin{bmatrix} \psi \\ \theta \\ \phi \end{bmatrix} \qquad (2.2)
\]

where the frames are again right-handed with the z-axis pointed upward, as shown in Fig. 2.4.

These are not the Euler angles which will be described in the next section.

2.2 Coordinate Transformations

The previous section established the position, orientation, frame, and relationship between frames of a rigid body. It is now necessary to discuss transforming or mapping from one frame to the next.

Looking solely at translation, and given that both frames have the same orientation, describing a point in space from one frame to the next is relatively easy. For example, we wish to describe the position, p^sensor_B, of a fixed sensor (i.e., a range finder) on the air vehicle in terms of the {W} frame. The relationship of the body to the world frame was given by p^B_W, which is the origin of {B} relative to {W}. The position vectors are defined by frames with the same orientation, so the position of the sensor relative to the world is calculated by vector addition:

\[
\mathbf{p}^{sensor}_W = \mathbf{p}^B_W + \mathbf{p}^{sensor}_B \qquad (2.3)
\]


Fig. 2.5 Translation of a point (i.e., sensor) from one frame to the next with fixed orientation
This relationship is illustrated in Fig. 2.5. The position of the sensor is fixed, but the description of how it relates to the world is different from how it relates to the body.

Simple translation only involves vector addition between frames. In contrast, the description of orientation is far more complex, as will be seen in the following sections.

2.2.1 Orientation Representation

To represent orientation via the rotation matrix R^i_j, one must transform from the coordinate frame L_i toward the coordinate frame L_j. Unlike position, there exists a substantial number of fundamentally different ways to represent the body orientation. This book, however, does not strive to provide an exhaustive summary, but rather points to the more common representations, which are frequently used in robotics and aerial robotics alike [6]. The 3 × 3 rotation matrix contains orthogonal unit vectors with unit magnitudes in its columns, giving the relationship:

\[
\mathbf{R}^i_j = \left[\mathbf{R}^j_i\right]^T = \left[\mathbf{R}^j_i\right]^{-1} \qquad (2.4)
\]

The components of the rotation matrix are the dot products of the basis vectors of the two relative frames [11]:

\[
\mathbf{R}^i_j = \begin{bmatrix}
\mathbf{x}_i \cdot \mathbf{x}_j & \mathbf{y}_i \cdot \mathbf{x}_j & \mathbf{z}_i \cdot \mathbf{x}_j \\
\mathbf{x}_i \cdot \mathbf{y}_j & \mathbf{y}_i \cdot \mathbf{y}_j & \mathbf{z}_i \cdot \mathbf{y}_j \\
\mathbf{x}_i \cdot \mathbf{z}_j & \mathbf{y}_i \cdot \mathbf{z}_j & \mathbf{z}_i \cdot \mathbf{z}_j
\end{bmatrix} \qquad (2.5)
\]

which consists of nine elements. In most cases, there will be a vector offset in translation between the two frames and the frames will not have the same orientation. Thus, it is necessary to systematically describe both translation and rotation between frames.


2.2.2 Euler Angles

Although perhaps inferior to other conventions, Euler angles are still a predominant method of describing body orientation, as shown in Fig. 2.6. Using only three angles of rotation that form a minimal representation (i.e., three parameters only), Euler angles suffer from singularities, usually referred to as gimbal lock, as shown in Fig. 2.7. However, Euler angles are commonly used and remarkably intuitive. This is exactly why they will be used throughout this book.

Unfortunately, there are many standard formulations and notations for Euler angles. Different authors may use different sets of rotation axes to define Euler angles, or different names for the same angles. Therefore, before discussing Euler angles, we first have to define the convention. For the description of aircraft and spacecraft motion, Ψ is the "roll" angle, Θ the "pitch" angle, and Φ the "yaw" angle. It is worth mentioning that in some literature, authors denote Φ as the roll and Ψ as the yaw angle.

Fig. 2.6 Euler angle representation with three angles: roll Ψ, pitch Θ, and yaw Φ

Fig. 2.7 The term gimbal lock was coined when Euler angles were tightly connected to mechanical gyroscopes. A gimbal lock would occur when the axes of two of the three gimbals in a gyroscope were driven into a parallel configuration, locking the system into rotation in a degenerate two-dimensional space. In this example, we rotate the UAV by 90° in pitch, which locks the yaw and roll angles into a single degree of freedom of rotation


2.2.3 Change of Frame

Rotation matrices allow the mapping of coordinates of the inertial frame into the body-fixed frame. To this end, we use Euler angles, as previously mentioned, to describe the orientation of the rigid body. In this analysis, we will use the ZYX Euler angles [7, 12]. Consider the elementary rotations in the following equations:

\[
\mathbf{R}_x(\psi) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & C_\psi & -S_\psi \\ 0 & S_\psi & C_\psi \end{bmatrix} \qquad (2.6)
\]

\[
\mathbf{R}_y(\theta) = \begin{bmatrix} C_\theta & 0 & S_\theta \\ 0 & 1 & 0 \\ -S_\theta & 0 & C_\theta \end{bmatrix} \qquad (2.7)
\]

\[
\mathbf{R}_z(\phi) = \begin{bmatrix} C_\phi & -S_\phi & 0 \\ S_\phi & C_\phi & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (2.8)
\]

The rotation matrix from the world frame {W} to the body frame {B} is the product of the three elementary rotations, which denote rotation around the z-axis, followed by rotation around the y-axis, and finally followed by rotation around the x-axis. Combined, the transformation from the body frame to the world frame is:

\[
\mathbf{R}^B_W = \mathbf{R}_z(\psi) \cdot \mathbf{R}_y(\theta) \cdot \mathbf{R}_x(\phi) \qquad (2.9)
\]

or

\[
\mathbf{R}^B_W = \begin{bmatrix}
C_\theta C_\psi & S_\phi S_\theta C_\psi - C_\phi S_\psi & C_\phi S_\theta C_\psi + S_\phi S_\psi \\
C_\theta S_\psi & S_\phi S_\theta S_\psi + C_\phi C_\psi & C_\phi S_\theta S_\psi - S_\phi C_\psi \\
-S_\theta & S_\phi C_\theta & C_\phi C_\theta
\end{bmatrix} \qquad (2.10)
\]

where R_{zyx}(ψ, θ, φ) ∈ SO(3). The ZYX Euler angle rotation is a commonly used convention.
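As a minimal numerical sketch of Eqs. (2.6)–(2.10), assuming NumPy is available, the snippet below composes the three elementary rotations into the matrix of Eq. (2.9) and checks that the result is a proper rotation (orthogonal, determinant +1); the angle values are arbitrary.

```python
import numpy as np

def rot_x(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def rot_y(a):
    return np.array([[ np.cos(a), 0, np.sin(a)],
                     [ 0,         1, 0        ],
                     [-np.sin(a), 0, np.cos(a)]])

def rot_z(a):
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0,          0,         1]])

def R_body_to_world(psi, theta, phi):
    # Composite ZYX rotation of Eq. (2.9): Rz * Ry * Rx
    return rot_z(psi) @ rot_y(theta) @ rot_x(phi)

R = R_body_to_world(np.deg2rad(30), np.deg2rad(10), np.deg2rad(5))
# Properties of Eq. (2.4): the transpose is the inverse, and det(R) = +1
assert np.allclose(R.T @ R, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
```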

2.2.4 Translation and Rotation

Given the position and orientation of the body frame with respect to the world frame, the overall translation and rotation is succinctly represented by the 4 × 4 homogeneous transformation matrix. This matrix provides a unified method to represent translational and rotational displacements [1]. Taking Eq. 2.1 and rewriting it in a generalized vector form, we have:


Fig. 2.8 Translation and rotation of a point (i.e., sensor) from one frame to the next, requiring an offset vector and a rotation matrix
\[
\mathbf{p}^i_j = \begin{bmatrix} x^i_j \\ y^i_j \\ z^i_j \end{bmatrix} \qquad (2.11)
\]

where the origin of frame {i} is not coincident with frame {j} but has a translational vector offset. Further, frame {j} is rotated with respect to frame {i}, which is described by the rotation matrix R^i_j. Using the example of a fixed point with a known position (i.e., sensor) on the {B} frame, p^sensor_B, we wish to calculate the position of the point relative to the {W} frame, p^sensor_W. The origin of frame {B} is given by p^B_W. Figure 2.8 illustrates the relationship between the frames, requiring a translation and rotation.

To change the orientation of the sensor to match that of the world frame, the position vector is multiplied by the rotation matrix. Next, the translation offset is added to generate a description of the position and orientation of the sensor, as shown here:

\[
\mathbf{p}^{sensor}_W = \mathbf{R}^B_W \cdot \mathbf{p}^{sensor}_B + \mathbf{p}^B_W \qquad (2.12)
\]

Combining the rotation matrix and translation components, we have the shorthand generalized notation of the entire 4 × 4 homogeneous transformation matrix from the {i} to the {j} frame:

\[
\mathbf{T}^i_j = \begin{bmatrix} \mathbf{R}^i_j & \mathbf{p}^i_j \\ \mathbf{0}^T & 1 \end{bmatrix} \qquad (2.13)
\]

and the expanded version with the identity matrix as a placeholder for rotation:

\[
\mathbf{T} = \begin{bmatrix} 1 & 0 & 0 & x \\ 0 & 1 & 0 & y \\ 0 & 0 & 1 & z \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (2.14)
\]

For rotation, the 3 × 3 upper-left elements represent the rotation between one frame and the next frame. This rotation was previously described in Eq. 2.10.
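The short sketch below, a NumPy-based illustration with invented pose and sensor-offset values, assembles the homogeneous transform of Eq. (2.13) and shows that applying it to a homogeneous point reproduces the rotate-then-translate operation of Eq. (2.12).

```python
import numpy as np

def homogeneous(R, p):
    """Assemble the 4x4 homogeneous transform of Eq. (2.13) from R and p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

# Illustrative pose of the body frame {B} in the world frame {W}: 90 deg yaw, offset [2, 1, 0.5] m
R_WB = np.array([[0., -1., 0.],
                 [1.,  0., 0.],
                 [0.,  0., 1.]])
p_WB = np.array([2.0, 1.0, 0.5])
T_WB = homogeneous(R_WB, p_WB)

p_sensor_B = np.array([0.1, 0.0, -0.05])    # assumed sensor position in {B} [m]

# Eq. (2.12): rotate, then translate
p_sensor_W = R_WB @ p_sensor_B + p_WB
# Same result using the homogeneous transform on the homogeneous point [x, y, z, 1]
p_sensor_W_hom = (T_WB @ np.append(p_sensor_B, 1.0))[:3]

assert np.allclose(p_sensor_W, p_sensor_W_hom)
print(p_sensor_W)    # -> [2.   1.1  0.45]
```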


The homogeneous transformation matrix creates a four-dimensional space R^4. To recover the original physical three-dimensional vector, a 3 × 4 homogeneous conversion matrix can be utilized, defined as follows [9]:

\[
\mathbf{H} = \frac{1}{\sigma}\,[\mathbf{I},\ \mathbf{0}] \qquad (2.15)
\]

where σ is the scaling factor and I is the identity matrix. For convenience, the scaling factor is typically set to σ = 1. As will be done in future chapters, four-dimensional homogeneous coordinates can be obtained from three-dimensional physical coordinates by adding a fourth component, as seen in Eq. 2.15.

2.3 Motion Kinematics

Given coordinate systems, frames, position and orientation representation, and transforms, this section describes the motion of the rigid body, both through linear and angular rates of change and through rotational transformations of the moving rigid body.

2.3.1 Linear and Angular Velocities

The angular velocity components p, q, and r are the projections onto the body coordinate system of the rotational angular velocity ω, which denotes the rotation from the world coordinate system to the body coordinate system. Let us define the mathematical model of the body of the quadrotor. The vector [x y z φ θ ψ]^T contains the linear and angular position of the vehicle in the world frame, and the vector [u v w p q r]^T contains the linear and angular velocities in the body frame. Therefore, the two reference frames have the relationship:

\[
\mathbf{v} = \mathbf{R} \cdot \mathbf{v}_B \qquad (2.16)
\]

\[
\boldsymbol{\omega} = \mathbf{T} \cdot \boldsymbol{\omega}_B \qquad (2.17)
\]

where R is defined in Eq. 2.10, v is the linear velocity vector, v = [ẋ ẏ ż]^T ∈ R³, and the angular velocity vector is ω = [φ̇ θ̇ ψ̇]^T ∈ R³, while v_B = [u v w]^T ∈ R³ and ω_B = [p q r]^T ∈ R³. The angular transformation matrix, T, is [9]:

\[
\mathbf{T} = \begin{bmatrix} 1 & S_\phi T_\theta & C_\phi T_\theta \\ 0 & C_\phi & -S_\phi \\ 0 & S_\phi / C_\theta & C_\phi / C_\theta \end{bmatrix} \qquad (2.18)
\]
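As a quick numerical illustration of Eqs. (2.16)–(2.18), and under the assumption that the last row of T reconstructed above takes its standard form, the following sketch converts body angular rates [p, q, r] into Euler angle rates; the attitude and rate values are arbitrary.

```python
import numpy as np

def euler_rate_matrix(phi, theta):
    """Angular transformation T of Eq. (2.18), mapping body rates to Euler angle rates."""
    s, c, t = np.sin(phi), np.cos(phi), np.tan(theta)
    return np.array([[1, s * t,             c * t            ],
                     [0, c,                 -s               ],
                     [0, s / np.cos(theta), c / np.cos(theta)]])

phi, theta = np.deg2rad(5.0), np.deg2rad(-10.0)   # illustrative roll and pitch [rad]
omega_body = np.array([0.10, -0.05, 0.20])        # body rates p, q, r [rad/s]

euler_rates = euler_rate_matrix(phi, theta) @ omega_body   # Eq. (2.17)
print(euler_rates)
```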


2.3.2 Rotational Transformations of a Moving Body

Now consider small rotations δφ, δθ, δψ from one frame to another, R^B_W(δφ, δθ, δψ) (2.10); using the small-angle assumption to ignore the higher-order terms gives:

\[
\delta\mathbf{R}^B_W \simeq \begin{bmatrix}
C_{\delta\theta} C_{\delta\psi} & S_{\delta\phi} S_{\delta\theta} C_{\delta\psi} - C_{\delta\phi} S_{\delta\psi} & C_{\delta\phi} S_{\delta\theta} C_{\delta\psi} + S_{\delta\phi} S_{\delta\psi} \\
C_{\delta\theta} S_{\delta\psi} & S_{\delta\phi} S_{\delta\theta} S_{\delta\psi} + C_{\delta\phi} C_{\delta\psi} & C_{\delta\phi} S_{\delta\theta} S_{\delta\psi} - S_{\delta\phi} C_{\delta\psi} \\
-S_{\delta\theta} & S_{\delta\phi} C_{\delta\theta} & C_{\delta\phi} C_{\delta\theta}
\end{bmatrix}
\simeq \begin{bmatrix} 1 & -\delta\psi & \delta\theta \\ \delta\psi & 1 & -\delta\phi \\ -\delta\theta & \delta\phi & 1 \end{bmatrix}
= \mathbf{I}_{3\times3} + \begin{bmatrix} 0 & -\delta\psi & \delta\theta \\ \delta\psi & 0 & -\delta\phi \\ -\delta\theta & \delta\phi & 0 \end{bmatrix}. \qquad (2.19)
\]

If we denote the small-angle rotations δφ, δθ, δψ through the rotation speed vector Ω·∂t = [∂φ/∂t  ∂θ/∂t  ∂ψ/∂t]^T·∂t, we can write the skew-symmetric matrix using its more familiar vector representation of a rotation speed cross product:

\[
\partial t \begin{bmatrix}
0 & -\frac{\partial \psi}{\partial t} & \frac{\partial \theta}{\partial t} \\
\frac{\partial \psi}{\partial t} & 0 & -\frac{\partial \phi}{\partial t} \\
-\frac{\partial \theta}{\partial t} & \frac{\partial \phi}{\partial t} & 0
\end{bmatrix}
= \partial t \begin{bmatrix} \frac{\partial \phi}{\partial t} \\ \frac{\partial \theta}{\partial t} \\ \frac{\partial \psi}{\partial t} \end{bmatrix}_{\!\times}
= \partial t\, \boldsymbol{\Omega}_\times \qquad (2.20)
\]
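A small numerical check of Eqs. (2.19)–(2.20), assuming NumPy: for small angle increments, the exact ZYX rotation matrix is close to I + ∂t Ω×, and multiplying a vector by the skew-symmetric matrix Ω× is the same as taking the cross product Ω × v. The rates and step size are arbitrary.

```python
import numpy as np

def skew(w):
    """Skew-symmetric (cross-product) matrix w_x, so that w_x @ v == np.cross(w, v)."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

Omega = np.array([0.3, -0.2, 0.5])   # arbitrary [roll, pitch, yaw] rates [rad/s]
dt = 1e-3                            # small time step [s]
d_phi, d_theta, d_psi = Omega * dt

# Exact ZYX rotation for the small increments vs. the first-order form of Eq. (2.19)
cz, sz = np.cos(d_psi), np.sin(d_psi)
cy, sy = np.cos(d_theta), np.sin(d_theta)
cx, sx = np.cos(d_phi), np.sin(d_phi)
R_exact = (np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]) @
           np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]) @
           np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]]))
R_approx = np.eye(3) + dt * skew(Omega)

v = np.array([1.0, 2.0, 3.0])
assert np.allclose(R_exact, R_approx, atol=1e-6)
assert np.allclose(skew(Omega) @ v, np.cross(Omega, v))
```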

This rotation transformation can be applied to any vector in the moving/rotating body frame. For instance, let us imagine a person standing inside a helicopter that is rotating with angular speed Ω, as shown in Fig. 2.9. The vector connecting this person to the body frame of the helicopter is p^P_B. If the person remains still w.r.t. the body frame of the helicopter, then his or her position at a certain time instance t in the world frame is simply:

\[
\mathbf{p}^P_W(t) = \mathbf{R}^B_W(\Psi,\Theta,\Phi)\,\mathbf{p}^P_B + \mathbf{p}^B_W. \qquad (2.21)
\]

Fig. 2.9 A person moving with linear speed v inside a helicopter that rotates with angular speed Ω. The distance between the person and the body frame of the helicopter, p^P_B, changes in time, and this change is a linear combination of the angular rotation and the linear speed


After an infinitesimally small period of time, we can take a second reading, only to find that the person is now repositioned at a different location in the world frame:

\[
\begin{aligned}
\mathbf{p}^P_W(t+\partial t) &= \partial\mathbf{R}^B_W(\partial\psi,\partial\theta,\partial\phi)\,\mathbf{R}^B_W(\psi,\theta,\phi)\,\mathbf{p}^P_B + \mathbf{p}^B_W \\
&= \left(\mathbf{I}_{3\times3} + \partial t\,\boldsymbol{\Omega}_\times\right)\mathbf{R}^B_W(\psi,\theta,\phi)\,\mathbf{p}^P_B + \mathbf{p}^B_W
\end{aligned} \qquad (2.22)
\]

Comparing the two time instances, while letting the time interval reach infinitely small values, yields the time derivative, observed in the world coordinate frame ∂/∂t|_W, of the vector p^P_B:

\[
\frac{\partial}{\partial t}\bigg|_W \left(\mathbf{p}^P_W\right) = \lim_{\partial t \to 0} \frac{\mathbf{p}^P_W(t+\partial t) - \mathbf{p}^P_W(t)}{\partial t} = \boldsymbol{\Omega} \times \mathbf{R}^B_W(\phi,\theta,\psi)\,\mathbf{p}^P_B \qquad (2.23)
\]

If we now imagine this person moving around the helicopter, the equations become somewhat more complicated. As the person moves around, he or she changes the magnitude of the vector from ‖p^P_B(t)‖ to ‖p^P_B(t + ∂t)‖. It is important to note that, for this example, the change in magnitude of the vector occurs without a change of orientation in the body frame. Therefore, we can write:

\[
\begin{aligned}
\mathbf{p}^P_B(t+\partial t) &= \left\|\mathbf{p}^P_B(t+\partial t)\right\| \cdot \frac{\mathbf{p}^P_B(t)}{\left\|\mathbf{p}^P_B(t)\right\|} \\
&= \left(\left\|\mathbf{p}^P_B(t+\partial t)\right\| - \left\|\mathbf{p}^P_B(t)\right\|\right) \cdot \frac{\mathbf{p}^P_B(t)}{\left\|\mathbf{p}^P_B(t)\right\|} + \left\|\mathbf{p}^P_B(t)\right\| \frac{\mathbf{p}^P_B(t)}{\left\|\mathbf{p}^P_B(t)\right\|} \\
&= \underbrace{\left(\left\|\mathbf{p}^P_B(t+\partial t)\right\| - \left\|\mathbf{p}^P_B(t)\right\|\right)}_{\text{change-in-size contribution}} \cdot \frac{\mathbf{p}^P_B(t)}{\left\|\mathbf{p}^P_B(t)\right\|} + \mathbf{p}^P_B(t) \\
&= v \cdot \partial t \cdot \frac{\mathbf{p}^P_B(t)}{\left\|\mathbf{p}^P_B(t)\right\|} + \mathbf{p}^P_B(t). \qquad (2.24)
\end{aligned}
\]

In the previous equation, we used v to denote the linear speed of the vector, i.e., the rate of change of its magnitude. It is also important to note that this is a scalar, multiplied by the original direction of the vector, p^P_B(t)/‖p^P_B(t)‖. Since we observe these changes over infinitesimally small time frames, it is safe to assume that both contributions, rotation and change of size, can be viewed together as a linear combination of the two. In total, the new position of the person at the second time instance, p^P_W(t + ∂t), w.r.t. the world frame thus becomes

\[
\begin{aligned}
\mathbf{p}^P_W(t+\partial t) &= \partial\mathbf{R}^B_W(\partial\psi,\partial\theta,\partial\phi)\,\mathbf{R}^B_W(\psi,\theta,\phi)\,\mathbf{p}^P_B(t+\partial t) + \mathbf{p}^B_W \\
&= \left(\mathbf{I}_{3\times3} + \partial t\,\boldsymbol{\Omega}_\times\right)\mathbf{R}^B_W(\psi,\theta,\phi)\left(v \cdot \partial t \cdot \frac{\mathbf{p}^P_B(t)}{\left\|\mathbf{p}^P_B(t)\right\|} + \mathbf{p}^P_B(t)\right) + \mathbf{p}^B_W \\
&= v \cdot \partial t \cdot \frac{\mathbf{R}^B_W(\psi,\theta,\phi)\,\mathbf{p}^P_B(t)}{\left\|\mathbf{p}^P_B(t)\right\|} + \mathbf{R}^B_W(\psi,\theta,\phi)\,\mathbf{p}^P_B(t) + \partial t\,\boldsymbol{\Omega} \times \mathbf{R}^B_W(\psi,\theta,\phi)\,\mathbf{p}^P_B(t) + \mathbf{p}^B_W
\end{aligned} \qquad (2.25)
\]


Repeating the steps applied to the person standing still inside the helicopter, we derive the time derivative of a vector in a moving frame, observed in the inertial (world) frame:

\[
\begin{aligned}
\frac{\partial}{\partial t}\bigg|_W \left(\mathbf{p}^P_W\right) &= \lim_{\partial t \to 0} \frac{\mathbf{p}^P_W(t+\partial t) - \mathbf{p}^P_W(t)}{\partial t} \\
&= \boldsymbol{\Omega} \times \mathbf{R}^B_W(\phi,\theta,\psi)\,\mathbf{p}^P_B + v\,\frac{\mathbf{R}^B_W(\psi,\theta,\phi)\,\mathbf{p}^P_B(t)}{\left\|\mathbf{p}^P_B(t)\right\|}
\end{aligned} \qquad (2.26)
\]

only to find that the derivative is a linear combination of its rotational and linear components. The careful reader should notice that multiplying the vector p^P_B(t) with R^B_W implies that it is transformed and expressed in the world frame at this point.
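To put numbers on the result, the following sketch (NumPy assumed, every value invented for illustration) evaluates the world-frame velocity of a point moving radially in a rotating body frame according to Eq. (2.26) and compares it with a finite difference of the positions computed per Eqs. (2.22)–(2.25).

```python
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def R_zyx(phi, theta, psi):
    cz, sz, cy, sy = np.cos(psi), np.sin(psi), np.cos(theta), np.sin(theta)
    cx, sx = np.cos(phi), np.sin(phi)
    return (np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]) @
            np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]) @
            np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]]))

# Invented scenario: helicopter attitude, yaw rate, and a person 2 m from the body origin
phi, theta, psi = 0.1, -0.05, 0.8           # current Euler angles [rad]
Omega = np.array([0.0, 0.0, 0.4])           # rotation rate [rad/s]
p_B = np.array([2.0, 0.0, 0.0])             # person position in the body frame [m]
v = 0.5                                     # radial (linear) speed of the person [m/s]
p_W_origin = np.array([10.0, -3.0, 5.0])    # body origin in the world frame [m]

R = R_zyx(phi, theta, psi)

# Eq. (2.26): rotational contribution plus change-of-magnitude contribution
v_analytic = np.cross(Omega, R @ p_B) + v * (R @ p_B) / np.linalg.norm(p_B)

# Finite-difference check following Eqs. (2.22)-(2.25) with a small time step
dt = 1e-6
p_B_next = p_B + v * dt * p_B / np.linalg.norm(p_B)
p_W_now = R @ p_B + p_W_origin
p_W_next = (np.eye(3) + dt * skew(Omega)) @ R @ p_B_next + p_W_origin
assert np.allclose((p_W_next - p_W_now) / dt, v_analytic, atol=1e-4)
```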

References

1. Bajd T, Mihelj M, Lenarcic J, Stanovnik A, Munih M (2010) Robotics. Intelligent systems, control and automation: science and engineering. Springer, Netherlands
2. Bottema O, Roth B (2012) Theoretical kinematics. Dover Books on Physics, Dover Publications, New York
3. Craig JJ (1986) Introduction to robotics: mechanics and control. Addison-Wesley, Reading
4. Denavit J, Hartenberg RS (1955) A kinematic notation for lower-pair mechanisms based on matrices. Trans ASME E J Appl Mech 22:215–221
5. Duffy J (1991) Kinematic geometry of mechanisms (K. H. Hunt). SIAM Rev 33(4):678–679
6. Erdman AG, Sandor GN (1997) Mechanism design: analysis and synthesis, vol 1. Prentice Hall international editions series. Prentice-Hall International
7. Lee D, Burg TC, Dawson DM, Shu D, Xian B, Tatlicioglu E (2009) Robust tracking control of an underactuated quadrotor aerial-robot based on a parametric uncertain model. In: 2009 IEEE international conference on systems, man and cybernetics, pp 3187–3192
8. Noble B, Daniel JW (1998) Applied linear algebra, 3rd edn. Prentice Hall, Upper Saddle River
9. Schilling RJ (1990) Fundamentals of robotics: analysis and control. Prentice Hall, Englewood Cliffs
10. Shames IH (2006) Engineering mechanics: statics and dynamics. Pearson Education
11. Siciliano B, Khatib O (2016) Springer handbook of robotics. Springer handbooks. Springer International Publishing
12. Siciliano B, Sciavicco L, Villani L, Oriolo G (2008) Robotics: modelling, planning and control, 1st edn. Springer Publishing Company, Incorporated
13. Symon KR (1971) Mechanics. Addison-Wesley world student series. Addison-Wesley Publishing Company


Chapter 3
Multirotor Aerodynamics and Actuation

3.1 The Aerodynamics of Rotary Flight

The first step toward deriving a real-time controller is to adequately model the dynamics of the system. This approach has been utilized since the very beginning of quadrotor research [4]. As research on micro-aerial vehicles grows (e.g., mobile manipulation, aerobatic maneuvers) [11, 13], the need for an elaborate mathematical model arises. The model needs to incorporate the full spectrum of aerodynamic effects that act on the quadrotor during climb, descent, and forward flight. To derive a more complete mathematical model of a quadrotor, one needs to start with the basic concepts of momentum theory and blade element theory.

Unlike large manned helicopters, which have very complex rotor systems such as the Bell 412 rotor shown in Fig. 3.1, micro-aerial vehicles have very simple rotors with hardly any degree of freedom besides the main rotation. Keeping the rotors as simple as possible lowers the overall cost of the vehicle and makes it easier to construct and maintain. As a consequence, this allows further simplification of the mathematical model of the aerodynamic forces acting on the rotor system.

The momentum theory of a rotor, also known as classical actuator disk theory, combines rotor thrust, induced velocity (i.e., the airspeed produced by the rotor), and aircraft speed into a single equation. On the other hand, blade element theory calculates the forces and torques acting on the rotor by studying a small rotor blade element modeled as an airplane wing, and applies thin airfoil theory to calculate the forces and torques acting on the infinitesimal blade element as well as on the whole blade [5]. Combining the macroscopic approach of momentum theory with the microscopic observations of blade element theory yields a first-approximation mathematical model of the rotor aerodynamics.


Fig. 3.1 Complex mechanical design of a Bell 412 helicopter hub

3.1.1 Momentum Theory

To understand the basic principles of rotor dynamics and describe them through the macroscopic analysis of momentum theory, we first analyze the rotor in vertical flight, for which we have to adopt several key assumptions [5]:

• When the rotor spins, it forces the air through a disk enclosed by its spinning blades. The air vortex creates a funnel, known as the control volume, shown in Fig. 3.2.

• Furthermore, we assume that the thrust exerted on the air is evenly distributed across the entire disk. We apply the same assumption to the increase in air pressure Δp, felt across the entire disk area.

• For additional simplicity, we choose to neglect the rotation of the air mass in the vortex caused by the rotor spin. This effect causes only a small percentage change in the overall analysis and can therefore be omitted without affecting the quality of the end result.


Fig. 3.2 Control volume for a rotor (radius R) during vertical flight with climb speed Vc under standard atmospheric pressure p∞

• The final assumption allows us to think of the control volume as a separate part of the surrounding air. This means that all the air mass outside the volume is at rest. Nevertheless, the hydrostatic air pressure is equal everywhere, inside and outside the volume.

Since in vertical flight the aerial robot climbs with constant speed Vc, we are free to invert the logic and imagine that the air flows into the rotor blade with speed −Vc. Underneath the blades, the air is sped up by the blade thrust through an infinitesimal increase in speed, called the induced velocity vi. Exactly beneath the rotor, the total air mass speed equals Vc + vi. Since the rotors exert force on the air mass, the air accelerates on its way down from the blades and exits the control volume with a total exit speed VE = Vc + v2.

Similar variations can be observed in the air pressure, which starts off at the standard atmospheric pressure p∞. The flow through the blades causes an infinitesimal increase in air pressure Δp, and the air finally exits the control volume at air pressure p2.

It is important to note that, since the air mass enters the control volume with a speed Vc that is slower than the speed Vc + v2 with which it exits the control volume, the radius of the funnel shrinks, simply because no air is destroyed or created in this process; according to the law of conservation of mass, the amount of air that flows out must equal the amount that flows in. In order to derive the equations, we turn to the control volume cylinder of radius R1, which is the radius of the air flowing in above the rotor.


The height of the control volume h can be set arbitrarily. The air enters at the top of the volume and on its sides, exiting at the bottom of the cylinder. A part of the airflow exits with speed Vc + v2 at radius R2, while the rest of the air (between R2 and R1), which is not affected by the rotor, exits with a speed equal to the inflow speed Vc. Summing these observations into a single mass conservation law, we can write the following equation:

\[
\underbrace{\rho \pi R_1^2 V_c}_{\text{top inflow}} + \underbrace{\rho \pi R_2^2 v_2}_{\text{side inflow}} = \underbrace{\rho \pi R_2^2 (V_c + v_2)}_{\text{funnel outflow}} + \underbrace{\rho \pi (R_1^2 - R_2^2) V_c}_{\text{remaining outflow}} \qquad (3.1)
\]

The amount of air that flows in at the side of the control volume, ρπR₂²v₂, is simply a result of the conservation of mass: it is the amount of airflow necessary to keep both sides of (3.1) equal.

During a short period of time [t, t + ΔT], with ΔT eventually approaching zero (ΔT → 0), a small amount of air of mass Δm enters the control volume at its top with velocity Vc:

\[
\Delta m = \rho \pi R_1^2 V_c \Delta T \qquad (3.2)
\]

The same can be written for the side inflow, as well as for both outflows, simply by multiplying (3.1) with ΔT. Since this air mass travels at a specific velocity, it has a matching linear momentum. The amount of air entering the control volume (i.e., the total inflow) with speed Vc has the total linear momentum:

\[
\pi_{in} = \Delta T \rho \pi V_c \left(R_1^2 V_c + R_2^2 v_2\right). \qquad (3.3)
\]

In this section, we will use π to denote linear momentum, simply to distinguish it from the air pressure p. On the other hand, due to the exerted thrust force T, the amount of air that flows out of the funnel with speed v2, together with the rest of the airflow, which flows out with climb speed Vc, yields a net total linear momentum:

\[
\pi_{out} = \Delta T \rho \pi R_2^2 (V_c + v_2)^2 + \Delta T \rho \pi (R_1^2 - R_2^2) V_c^2. \qquad (3.4)
\]

We know from Newton's second law that the rate of change of momentum equals the net force applied to the object. In this case, the object is the air that flows through the rotor, and the rotor thrust T together with the air pressure yields the net total force acting on that same air:

\[
T + \pi R_1^2 p_a - \pi (R_1^2 - R_2^2) p_a - \pi R_2^2 p_2 = \frac{\Delta \pi}{\Delta T} \qquad (3.5)
\]

where Δπ represents the momentum change:

\[
\Delta \pi = \pi_{out} - \pi_{in} = \Delta T \rho \pi R_2^2 (V_c + v_2)\, v_2 \qquad (3.6)
\]


Under the assumption that p2 = pa, solving the previous two equations yields the thrust force applied by the rotor:

\[
T = \rho \pi R_2^2 (V_c + v_2)\, v_2. \qquad (3.7)
\]

The final thing left to derive is the mathematical formalism that relates the induced speed vi to the speed v2 at which the airflow exits the control volume. Once again we turn to elementary physics, namely Bernoulli's law of fluid mechanics, which states the relationship between the velocity, density, and pressure of a moving fluid and can be directly applied to the airflow. Again we observe several key points in the control volume. The first is the point of entry, with air pressure pa and speed Vc. The second point of interest is the one exactly above the rotor, where the air pressure equals p and the speed is Vc + vi. Bernoulli's law states the relationship between these two points:

\[
p_a + \frac{1}{2}\rho V_c^2 = p + \frac{1}{2}\rho (V_c + v_i)^2 \qquad (3.8)
\]

Since the rotor applies force to the air, there is an infinitesimal buildup of air pressure just below the rotor plane. Finally, we observe the end point, with maximal speed v2 and air pressure p2.

\[
p + \Delta p + \frac{1}{2}\rho (V_c + v_i)^2 = p_2 + \frac{1}{2}\rho (V_c + v_2)^2 \qquad (3.9)
\]

Combining the Bernoulli equations under the assumption p2 = pa yields:

\[
\Delta p = \rho \left(V_c + \frac{1}{2} v_2\right) v_2 \qquad (3.10)
\]

Applied over the rotor disk area R²π, the pressure difference Δp equals the applied thrust force:

\[
T = \Delta p\, R^2 \pi = \rho \pi R^2 \left(V_c + \frac{1}{2} v_2\right) v_2. \qquad (3.11)
\]

Before proceeding to compare (3.11) with (3.7), we consider one final remark, that of isochoric airflow. This implies that the airflow is incompressible throughout the control volume, which, when comparing the airflow at the rotor (R) and at the end of the funnel (R2), leads to the equation:

\[
(V_c + v_i)\, R^2 = (V_c + v_2)\, R_2^2. \qquad (3.12)
\]

Solving for vi in (3.11) and (3.7), under assumption (3.12), yields:

\[
v_i = \frac{1}{2} v_2 \qquad (3.13)
\]


Fig. 3.3 Momentum theory: horizontal motion, vertical motion, and induced speed total airflow vector sum

and finally,

\[
T = 2 \rho \pi R^2 (V_c + v_i)\, v_i \qquad (3.14)
\]

Even though (3.14) on its own offers little to directly solve for the thrust, since it is hard to directly measure the induced velocity vi, it will prove to be a useful tool when we combine it with blade element theory.
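As a quick numerical use of Eq. (3.14), the sketch below solves the resulting quadratic for the induced velocity in hover and in a steady climb; the mass, rotor radius, and climb speeds are made-up example values.

```python
import numpy as np

rho = 1.225           # air density [kg/m^3]
R = 0.127             # rotor radius [m] (roughly a 10-inch propeller, assumed)
m, g = 1.5, 9.81      # assumed vehicle mass [kg] and gravity [m/s^2]
T = m * g / 4.0       # thrust required per rotor of a quadrotor in hover [N]

def induced_velocity(T, Vc):
    """Positive root of Eq. (3.14): T = 2*rho*pi*R^2*(Vc + vi)*vi."""
    vh2 = T / (2.0 * rho * np.pi * R**2)     # equals vi^2 in hover (Vc = 0)
    return -Vc / 2.0 + np.sqrt((Vc / 2.0)**2 + vh2)

for Vc in (0.0, 1.0, 2.0):                   # hover and two climb speeds [m/s]
    print(f"Vc = {Vc:3.1f} m/s  ->  vi = {induced_velocity(T, Vc):4.2f} m/s")
```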

Basic momentum theory offers two solutions, one for each of the two operational states in which the defined rotor slipstream exists. The solutions refer to rotorcraft climb and descent, the so-called helicopter and windmill states. A quadrotor in a combined lateral and vertical maneuver is shown in Fig. 3.3. The figure shows the most important airflows considered in momentum theory: Vz and Vxy, which are induced by the quadrotor's movement, together with the induced speed vi, which is produced by the rotors.

Unfortunately, classic momentum theory implies no steady-state transition between the helicopter and the windmill states. Experimental results, however, show that this transition exists. In order for momentum theory to comply with experimental results, the augmented momentum theory equation (3.15) is proposed [9],

\[
T = 2 \rho R^2 \pi\, v_i \sqrt{(v_i + V_z)^2 + V_{xy}^2 + \frac{V_z^2}{7.67}} \qquad (3.15)
\]

where the V_z²/7.67 term is introduced to ensure that the augmented momentum theory equation complies with experimental results, R stands for the rotor radius, and ρ is the air density. It is easy to show that, in the case of autorotation with no forward speed, the thrust in Eq. (3.15) becomes equal to the drag equation D = ½ C_D ρ R²π V_z² of a free-falling plate with a drag coefficient C_D = 1.


3.1.2 Blade Element Theory

Unlike momentum theory, which observes thrust as a macroscopic effect, blade element theory observes a small rotor blade element Δr, shown in Fig. 3.4, from a microscopic perspective. As the rotor spins with angular velocity Ω, each element of the blade at its respective radial distance r attacks the air with a different speed Ωr. Figure 3.5 shows the side view of an infinitesimal part of the quadrotor's blade together with the elemental lift and drag forces it produces. The interested reader can find out more about the nature of these forces in [5], but for the scope of this book it suffices to state the equations:

Fig. 3.4 Snapshot of an infinitesimal rotor blade element Δr, displaced from the center of blade rotation by r, taken at a time instance t when the horizontal motion of the UAV Vxy attacks the blade at angle β. Two distinct airflows affect the blade element: a horizontal one produced by the rotor spinning (Ωr) and a vertical one produced by the climb and induced speeds (Vc + vi)

Fig. 3.5 A closer look at the infinitesimal rotor blade element Δr in the surrounding airflow, considering both lateral and vertical flows as observed in [15]. The angle is shown larger for clarity. The airflow produces the lift and drag forces also shown in the figure


\[
\frac{\Delta L}{\Delta R} = \frac{1}{2}\rho V_S^2 C_L S \qquad \frac{\Delta D}{\Delta R} = \frac{1}{2}\rho V_S^2 C_D S \qquad (3.16)
\]

where CL and CD represent the lift and drag coefficients; S = Δr · c(r) is the surface of the element, with chord length c(r) being a function of the blade profile and of the distance from the center of rotation; and VS is the speed of the airflow surrounding the blade element.

The airflow is mostly produced by the rotor angular velocity Ωr and therefore depends on the distance of each blade element from the center of blade rotation. Added to this airflow is the total air stream coming from the quadrotor's vertical movement Vc and horizontal movement Vxy sin(β), together with the induced speed vi:

\[
\begin{aligned}
\mathbf{V}_S &= \mathbf{V}_c + \mathbf{v}_i + \boldsymbol{\Omega} \times \mathbf{r} + \mathbf{V}_{xy} \\
|V_S| &= \sqrt{(V_c + v_i)^2 + \left(\Omega r + V_{xy}\sin(\beta)\right)^2} \\
\Omega r &\gg (V_c + v_i),\, V_{xy} \;\;\Rightarrow\;\; |V_S| \approx \Omega r + V_{xy}\sin(\beta)
\end{aligned} \qquad (3.17)
\]

Figure 3.4 depicts the blade in motion, captured at a specific time instance when the lateral motion of the UAV Vxy attacks the blade at an angle β. Since the blade rotates at high speed, the angle between the blade and the lateral motion, and thus its effect on the blade element, varies rapidly, β(t) = Ωt. This forces us to later develop expressions averaged over the blade cycle.

Although it varies depending on the blade shape and size, the ideal airfoil lift coefficient CL can be calculated using Eq. (3.18) [9]:

\[
C_L = a\,\alpha_{ef} = 2\pi\,\alpha_{ef} \qquad (3.18)
\]

where a is an aerodynamic coefficient, ideally equal to 2π. The effective angle of attack αef is the angle between the airflow and the blade with chord c. As the ratio between the vertical and horizontal airflow varies, so does the angle of attack. It is due to this fact that rotorcraft UAVs have blades with a small mechanical angle of attack. Since the airflow directed through the rotor (Vc + vi) is always negligible in comparison to the horizontal speed, (Vc + vi) ≪ Ωr, the blade can keep a small mechanical angle of attack without losing the necessary thrust. In contrast, the propellers of fixed-wing aircraft face directly into the airflow and require a significant mechanical angle of attack to compensate for the loss in the effective angle of attack. Higher mechanical angles of attack increase energy consumption, so it is wise to choose a blade with a small angle of attack when a large one is not necessary.

Another important mechanical adaptation of standard rotor blades is that they are twisted, because the dominant airflow coming from the blade rotation increases linearly toward the end of the blade (i.e., Ωr). According to Eq. (3.16), this causes an increase of the lift and drag forces. The difference in forces produced near and far from the center of rotation would cause the blade to twist and ultimately break. To avoid that, a linear twist is introduced into the blade design, starting at an initial angle Θ0 and constantly decreasing by a factor Θtw with the distance r from the center of the blade of radius R:


\[
\alpha_{mech}(r) = \Theta_0 - \frac{r}{R}\,\Theta_{tw} \qquad (3.19)
\]

Since r is both lower and upper bounded, by 0 and R respectively, from now on we will use x = r/R to denote the position of the blade element with respect to the root (x = 0) and the tip (x = 1).

Furthermore, rotor blades are not rectangular, so they have a different chord at different positions along their span. Chord refers to the imaginary straight line joining the leading and trailing edges of an aerofoil, depicted in Fig. 3.5. For most blades, the chord variation over the span, growing narrower toward the outer tip, is a complex function of distance. This means that more lift is generated on the wider, inner portions of the blade. Since the outer portions of the blade experience more air flowing across them, and thus in turn produce more thrust, in order to avoid unnecessary torque on the blade its outer portions are trimmed and the chord is narrower. In this book, we will only consider the chord to be linearly decreasing by a factor Ctw from the root to the tip of the blade (i.e., with x):

c(x) = C0 − Ctwx (3.20)

Final issue we need to tackle is to denounce the assumption that the induced speedvi is equal across the blade. This assumption was valid for momentum theory, whereit is viewed across the blade as a whole, but it falls short when it comes to bladeelement theory where we observe every part of the blade separately. A far betterapproach for blade element theory is to approximate induced speed again as a linearfunction of distance from the root of the blade:

vi (x) = x · vi , (3.21)

where v_i now denotes the induced speed at the tip of the blade. Since there is no airflow across the blade at its root (i.e., x·RΩ = 0), it is natural to expect that the induced speed disappears for x = 0 and then steadily grows when moving toward the blade tip x = 1.

Now we are able to move forward and derive the basic force equations of blade element theory. The first step is to obtain the effective angle of attack, which is easily observed in Fig. 3.5:

\alpha_{ef} = \alpha_{mech} - \Phi(x, t)    (3.22)

where the effect of the varying airflow Φ can be calculated by separating the vertical components V_c + v_i from the horizontal ones V_xy sin(β(t)) + Ωr, while at the same time bearing in mind that Ωr ≫ V_xy:

\Phi(x) = \arctan\left(\frac{V_c + v_i}{V_{xy}\sin(\beta(t)) + \Omega r}\right) \approx \frac{V_c + v_i(x)}{\Omega R x}    (3.23)
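To make the geometry of Eqs. (3.19)–(3.23) concrete, the following minimal Python sketch evaluates the mechanical pitch, inflow angle, and effective angle of attack along the normalized blade station x. All numerical values (blade radius, twist, climb and induced speeds, rotor speed) are illustrative assumptions, not data from the text.

```python
import numpy as np

# Illustrative blade and flight parameters (assumed values, not from the book)
R = 0.12                     # blade radius [m]
theta0 = np.deg2rad(14.0)    # root pitch angle Theta_0 [rad]
theta_tw = np.deg2rad(8.0)   # linear twist factor Theta_tw [rad]
omega = 900.0                # rotor angular speed Omega [rad/s]
v_c = 1.0                    # climb (vertical) speed V_c [m/s]
v_i_tip = 4.0                # induced speed at the blade tip [m/s]

x = np.linspace(0.05, 1.0, 20)          # normalized blade stations x = r/R

alpha_mech = theta0 - x * theta_tw      # Eq. (3.19) with x = r/R
v_i = x * v_i_tip                       # Eq. (3.21): linear induced-speed profile
phi = (v_c + v_i) / (omega * R * x)     # Eq. (3.23), small-angle approximation
alpha_ef = alpha_mech - phi             # Eq. (3.22): effective angle of attack

for xi, a in zip(x[::5], np.rad2deg(alpha_ef[::5])):
    print(f"x = {xi:4.2f}  ->  alpha_ef = {a:5.2f} deg")
```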


As the lift and drag forces are not aligned with the body frame of reference, horizontal and vertical force projections need to be derived. Keeping in mind that Ωr ≫ {V_c, v_i, V_xy}, the small-angle approximations cos(Φ) ≈ 1 and sin(Φ) ≈ Φ can be used. Moreover, for a well-balanced rotor blade of chord length c, the drag force should be negligible compared to the lift [9]. Applying these considerations to (3.16) and keeping in mind the relations from Fig. 3.5 enables the derivation of the horizontal and vertical force equations (3.24).

\frac{dF_V}{dr} = \frac{dL}{dr}\cos\Phi + \frac{dD}{dr}\sin\Phi \approx \frac{dL}{dr} = \frac{1}{2}\rho V_S^2\, a c\, \alpha_{ef}    (3.24)

\frac{dF_H}{dr} = \frac{dL}{dr}\sin\Phi + \frac{dD}{dr}\cos\Phi \approx \frac{1}{2}\rho V_S^2 C_L S\,\Phi + \frac{1}{2}\rho V_S^2 C_D S

The vertical force, or thrust, is what generally interests us for creating motion of the UAV. The horizontal force acts as a load on the motors and will not be covered in detail here. More on this can be found in [5].

\frac{dL}{dr} = \frac{1}{2}\rho\, a\, c(x)\,\Omega^2 R^2\left(x + \mu\cos(\beta(t))\right)^2\left(\alpha_{mech}(r) - \frac{V_c + v_i(r)}{\Omega R x}\right)    (3.25)

The next step is to continue with the observation of a small rotor blade element Δr, placing the blade in the real surroundings shown in Fig. 3.6. Since the blades rotate, the forces produced by the blade elements change both in size and direction. This is the reason why an average elemental thrust of all blade elements should be calculated.

Fig. 3.6 Blade element in a quadrotor coordinate system

Figure 3.6 shows the relative position of one rotor as seen from the quadrotor's body frame. This rotor is displaced from the body frame origin and forms an angle of 45◦ with the quadrotor's body frame x axis. Similar relations can be shown for the other rotors. Accounting for the number of rotor blades N, the following equation for the rotor vertical thrust force is proposed [15]:

T = F_V = \frac{1}{2\pi}\int_0^{2\pi}\!\!\int_0^R N\,\frac{\Delta F_V}{\Delta r}\,dr\,d\beta    (3.26)

where β is the blade angle due to rotation, taken at a certain sample time. Given the linear relationship between time and the incidence angle β, we can solve the integral equation (3.26) over β ∈ [0, 2π] and obtain the expression for the rotor thrust (i.e., vertical force) [15], averaged over a single rotor cycle:

F_V = \frac{N\rho a c R^3\Omega^2}{4}\left[\left(\frac{2}{3} + \mu^2\right)\Theta_0 - \left(1 + \mu^2\right)\frac{\Theta_{tw}}{2} - \lambda_i - \lambda_c\right]    (3.27)

The term inside the brackets of Eq. (3.27) is known as the thrust coefficient and is given separately in (3.28).

C_T = \left(\frac{2}{3} + \mu^2\right)\Theta_0 - \left(1 + \mu^2\right)\frac{\Theta_{tw}}{2} - \lambda_i - \lambda_c    (3.28)

Variables μ, λ_i, and λ_c are the speed coefficients V_xy/(ΩR), v_i/(ΩR), and V_z/(ΩR), respectively. The new constant c is the average chord length of the blade, shown in Fig. 3.5.

The same approach can be applied to the calculation of the horizontal forces and torques produced within the quadrotor [15]. The calculated lateral force has x and y components, coming from both the drag and the lift of the rotor, given in (3.29).

C_{Hx} = \cos(\alpha)\,\mu\left[\frac{C_D}{a} + (\lambda_i + \lambda_c)\left(\Theta_0 - \frac{\Theta_{tw}}{2}\right)\right]

C_{Hy} = \sin(\alpha)\,\mu\left[\frac{C_D}{a} + (\lambda_i + \lambda_c)\left(\Theta_0 - \frac{\Theta_{tw}}{2}\right)\right]    (3.29)

In the case of the torque equations, the angles between the forces and directions are easily derived from the basic geometric relations shown in Fig. 3.6, resulting in the elemental torque equations [15]:

\frac{\Delta M_z}{\Delta r} = -\frac{\Delta F_H}{\Delta r}\left(D\cos\left(\Psi - \frac{\pi}{4}\right) - r\right)

\frac{\Delta M_{xy}}{\Delta r} = -\frac{\sqrt{2}\,\Delta F_V}{2\,\Delta r}\left(D - r\cos\left(\Psi - \frac{\pi}{4}\right) \pm r\sin\left(\Psi - \frac{\pi}{4}\right)\right)    (3.30)

Using the same methods that were used for the force calculation, the following moment coefficients are obtained:

C_{Mz} = R\left[\frac{1 + \mu^2}{2a}C_D - C_T(\mu, \lambda, \lambda_i)\Big|_{\mu=0}\right] \pm D\mu\cos\left(\frac{\pi}{4} + \phi\right)\frac{C_{Hx}}{\cos(\phi)\,\mu}

C_{Mx} = \frac{\sqrt{2}}{2}D\,C_T \pm R\mu\sin(\phi)\left[\frac{2}{3}\Theta_0 - \frac{1}{2}\left(\Theta_{tw} + \lambda\right)\right]    (3.31)

C_{My} = \frac{\sqrt{2}}{2}D\,C_T \pm R\mu\cos(\phi)\left[\frac{2}{3}\Theta_0 - \frac{1}{2}\left(\Theta_{tw} + \lambda\right)\right]

It is important to notice that Eq. (3.30) has two solutions, since the rotors spin in different directions. Different rotational directions have opposite effects on the torques, which is why the ± sign is used in the torque equations. These differences, induced by the specific quadrotor construction, along with the augmented momentum equation, provide improved insight into quadrotor aerodynamics. Regardless of the flying state of the quadrotor, these equations can be used to effectively model its behavior.

Finally, we write the complete generalized equations for each actuator in the general multicopter configuration. The two quantities we consider the most are the actuator (rotor) thrust u_i (3.27) and torque τ_i (3.30).

u_i = \frac{N\rho a c R^3}{4}\left[\left(\frac{2}{3} + \mu^2\right)\Theta_0 - \left(1 + \mu^2\right)\frac{\Theta_{tw}}{2} - \lambda_i - \lambda_c\right]\Omega_i|\Omega_i| = \frac{N\rho a c R^3}{4}C_{Ti}\,\Omega_i|\Omega_i| = c_{Ti}\,\Omega_i|\Omega_i|    (3.32)

\tau_i = \pm\frac{N\rho a c R^3}{4}D\mu\cos\left(\frac{\pi}{4} + \phi\right)\frac{C_{Hx}}{\cos(\phi)\,\mu}\,\Omega_i|\Omega_i| = \pm c_{Di}\,\Omega_i|\Omega_i|
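As a quick numerical illustration of the lumped actuator model in (3.32), the sketch below evaluates rotor thrust and drag torque from the signed rotor speed, with the drag torque expressed through the ratio c_D/c_T. The coefficient values are placeholders chosen only for illustration; in practice they follow from the blade geometry in (3.27) or from static thrust-stand measurements.

```python
# Lumped actuator coefficients (illustrative assumptions, not book data)
c_T = 8.0e-6   # thrust coefficient [N/(rad/s)^2]
c_D = 1.2e-7   # drag-torque coefficient [Nm/(rad/s)^2]

def actuator_wrench(omega_i, ccw=True):
    """Thrust u_i and reaction torque tau_i of one rotor, in the spirit of Eq. (3.32).

    omega_i : signed rotor speed [rad/s] (negative for reversible propulsors)
    ccw     : spin direction; flips the sign of the reaction torque
    """
    u_i = c_T * omega_i * abs(omega_i)                # thrust along the rotor z axis
    tau_i = (1 if ccw else -1) * (c_D / c_T) * u_i    # drag torque about the rotor z axis
    return u_i, tau_i

for w in (400.0, 800.0, -400.0):
    u, tau = actuator_wrench(w, ccw=True)
    print(f"Omega = {w:7.1f} rad/s  ->  u = {u:6.2f} N,  tau = {tau:8.4f} Nm")
```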

3.2 Different Multirotor Configurations

In this section, we investigate how the arrangement of propulsors (the UAV configuration) influences UAV motion. Forces and torques acting on an aerial vehicle (considered to be a rigid body) change its position and orientation in 3D space. There are two main principles by which those forces and torques are generated: (i) by a propulsion system (propellers, jet engines), or (ii) by a servo system (swash plate, Fig. 3.1, ailerons, elevators, and other flight surfaces). If we consider any propulsion or servo system as an actuator, then the arrangement of actuators with respect to the aerial vehicle center of mass, as well as their number, defines how many degrees of freedom can be changed independently. For example, in the case of a classical helicopter, having four actuators—main and tail rotor and push rods on a swash plate—one is able to control all three degrees of position and only one degree of orientation (the yaw angle). The other two degrees of orientation, the roll and pitch angles, cannot be set arbitrarily without influencing other degrees of freedom (a helicopter cannot be inclined 30◦ in pitch and keep its position). Formally, decoupled tracking of reference 3D position and 3D orientation trajectories cannot be achieved by a helicopter. We say that such a system is underactuated and coupled.

There are three basic reasons why the aerial vehicles that we use on a daily basis belong to this category. The mechanical construction and actuator arrangement of an aerial vehicle are mainly driven by (i) its application, (ii) simplicity of design, and (iii) ease of maintenance. For example, the main purpose of a passenger airplane is to carry people from one place to another reliably and in a reasonable time frame. Flight trajectories must be such that passengers feel comfortable, i.e., there is no need for the airplane to hover in place oriented upside down. Hence, there is no need for independent control of all 6 degrees of freedom. Once the application determines the number of actuators, simplicity and maintenance come into the picture and define the arrangement of actuators.

3.2.1 Coplanar Configuration of Propulsors

The same line of reasoning that we just used in the previous section works for small multirotor UAVs. As they are mostly applied to aerial photography, i.e., to carrying a gimbal with a camera, it is enough to independently control the 3D position and one degree of orientation (the yaw angle), which can be done by a simple coplanar setup of actuators (the propeller spin axes are all parallel), shown in Fig. 3.7a. However, when it comes to aerial manipulation, where interaction with the environment is the main purpose of the aerial vehicle, such a design degrades the capability of the system, since alignment of the actuators in a single plane does not allow the vehicle thrust and torque to be set arbitrarily in 3D space.

Over the past decade, there have been many attempts to overcome this issue by developing UAVs with various numbers and arrangements of actuators. Typical approaches consider the addition of propulsors [6, 16], or servos that change the rotational plane of the propellers (thrust vectoring) [1, 3]. Simple examples of both designs are depicted in Fig. 3.7, where (b) shows 4 additional propulsors added to the standard coplanar setup, while (c) shows a setup with 4 servo motors that independently change the thrust vector (rotation around a single axis) of the 4 main propulsors.

In the remainder of this section, we briefly present a static force and torque analysis for a few actuator configurations. For the general case, let us assume that we have n propulsors attached to the UAV body so that each propulsor has its own local coordinate frame L_i. Further, let us assume that the rotational planes are perpendicular to their position vectors p_B^i (‖p_B^i‖ = l_i), in order to maximize the torque output produced by the propulsor (Fig. 3.8). In the analysis that follows we use a first-principles model, i.e., complex aerodynamic effects, such as interference between propulsors, are neglected. Also, it is assumed that the propulsors are able to provide the requested forces and torques.

Fig. 3.7 Various arrangements of actuators

Fig. 3.8 Local coordinate frame L_i of propulsor i attached to the UAV body frame L_B

Based on the relations derived in the previous section, the force (thrust) u_i produced along the z_i axis and the torque (aerodynamic drag) τ_i produced around the z_i axis are

u_i = c_{Ti}\,\Omega_i|\Omega_i|,    (3.33)

\tau_i = \pm\frac{c_{Di}}{c_{Ti}}\,u_i,    (3.34)


where c_{Ti} and c_{Di} depend on the thrust and drag coefficients, the propeller dimensions, and the air density (as derived in (3.27)), while the term Ω_i|Ω_i| comes from the assumption that reversible propulsors are used, i.e., u_i can attain a negative sign in order to produce negative thrust. We use ± to denote the spin direction of the propeller (CW or CCW). Thus, the representation of the force and torque produced by the ith propeller in the UAV body frame becomes

f_B^i = R_B^{L_i}\begin{bmatrix} 0\\ 0\\ u_i \end{bmatrix},    (3.35)

\tau_B^i = \pm\frac{c_{Di}}{c_{Ti}}\,R_B^{L_i}\begin{bmatrix} 0\\ 0\\ u_i \end{bmatrix},    (3.36)

where R_B^{L_i} is the orientation of L_i in the UAV body frame {B}. By using the Newton–Euler equations (the overall aerial manipulator dynamics is derived in Chap. 5), one is able to describe the translational motion of the UAV body

m\,\ddot{p}_W^B = m\begin{bmatrix} 0\\ 0\\ -g \end{bmatrix} + R_W^B\sum_{i=1}^{n} f_B^i,    (3.37)

where p_W^B is the position and R_W^B is the orientation of the UAV body in the world frame {W} (see (2.10)), g is the scalar value of the gravitational acceleration, and m is the total mass of the vehicle. For the rotational motion, one has

J\,\dot{\omega}_W^B = -\omega_W^B \times J\,\omega_W^B + \tau_B,    (3.38)

where J is the UAV body tensor of inertia, ω_W^B is the UAV angular velocity with respect to the world frame {W}, and

\tau_B = \sum_{i=1}^{n} p_B^i \times f_B^i + \sum_{i=1}^{n} \tau_B^i.    (3.39)
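A small sketch of Eqs. (3.35) and (3.36): given the orientation of a propulsor frame in the body frame, its body-frame force and drag-torque contributions follow directly; summing such contributions with the lever arms gives (3.39). The rotation matrix, thrust, and coefficient ratio below are illustrative assumptions.

```python
import numpy as np

def rot_x(angle):
    """Elementary rotation about the x axis (used here to tilt a propulsor frame)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def propulsor_wrench_body(u_i, R_body_from_Li, cD_over_cT, ccw=True):
    """Body-frame force (3.35) and drag torque (3.36) of a single propulsor."""
    thrust_local = np.array([0.0, 0.0, u_i])          # thrust along the local z_i axis
    f_B = R_body_from_Li @ thrust_local               # Eq. (3.35)
    tau_B = (1 if ccw else -1) * cD_over_cT * f_B     # Eq. (3.36)
    return f_B, tau_B

# Example: a propulsor tilted 15 degrees about its local x axis (assumed numbers)
f_B, tau_B = propulsor_wrench_body(u_i=5.0,
                                   R_body_from_Li=rot_x(np.deg2rad(15.0)),
                                   cD_over_cT=0.016, ccw=True)
print("f_B   =", np.round(f_B, 3))
print("tau_B =", np.round(tau_B, 4))
```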

Now, let us assume that we would like to keep the position p_W^B constant under an arbitrary value of R_W^B and at the same time prevent any rotational motion of the vehicle. According to (3.37)–(3.39), that gives

m\mathbf{g} = R_W^B\sum_{i=1}^{n} f_B^i,    (3.40)

\tau_B = \sum_{i=1}^{n} p_B^i \times f_B^i + \sum_{i=1}^{n} \tau_B^i = 0,    (3.41)


where \mathbf{g} = [0\ 0\ g]^T. This means that the propulsors should be able to generate a force (equal to mg) in any direction in the UAV body frame while at the same time keeping the overall torque at zero.

If we return to Fig. 3.7a to investigate the coplanar arrangement of 4 actuators (a standard quadrotor), then

\sum_{i=1}^{4} f_B^i = \sum_{i=1}^{4} R_B^{L_i}\begin{bmatrix} 0\\ 0\\ u_i \end{bmatrix} = \sum_{i=1}^{4}\begin{bmatrix} C_{\phi_i} & -S_{\phi_i} & 0\\ S_{\phi_i} & C_{\phi_i} & 0\\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 0\\ 0\\ u_i \end{bmatrix} = \begin{bmatrix} 0\\ 0\\ u_1 + u_2 + u_3 + u_4 \end{bmatrix}    (3.42)

where φ_i is the angle of rotation of L_i around z_B (yaw). This result clearly demonstrates that (3.40) holds only in the case that there is no rotation of the vehicle around the x_W or y_W axis (roll or pitch), as such a rotation would generate a force component in the x_W or y_W direction, thus violating (3.40). On the other hand, for (3.41) one has

\tau_B = \sum_{i=1}^{4} p_B^i \times f_B^i + \sum_{i=1}^{4} \tau_B^i = \begin{bmatrix} l(u_2 - u_4)\\ l(u_1 - u_3)\\ -\frac{c_D}{c_T}(u_1 + u_3) + \frac{c_D}{c_T}(u_2 + u_4) \end{bmatrix},    (3.43)

where l_i = l, i = 1, ..., 4. By choosing appropriate values of u_1, u_2, u_3, and u_4, and assuming that c_{Di}/c_{Ti} = c_D/c_T for all propellers, the condition τ_B = 0 is satisfied for any value of R_W^B. This confirms that the standard quadrotor configuration allows simultaneous control of only 4 DOF (as we already mentioned at the beginning of this section), since 2 DOF, x and y, are coupled with the roll and pitch angles.

3.2.2 Independent Control of All 6 DOF

Let us now rewrite (3.40) and (3.41) in the following form

\begin{bmatrix} m\mathbf{g}\\ 0 \end{bmatrix} = \begin{bmatrix} R_W^B & 0\\ 0 & I \end{bmatrix}\begin{bmatrix} \Gamma_1\\ \Gamma_2 \end{bmatrix} u = \begin{bmatrix} R_W^B & 0\\ 0 & I \end{bmatrix}\Gamma u,    (3.44)

where u = [u_1\ u_2\ \ldots\ u_n]^T and the matrix Γ represents the configuration mapping. This relation clearly shows that only in the case rank(Γ) ≥ 6 is controllability over all 6 DOF guaranteed. In order to fulfill this requirement, we need 6 or more actuators, and they have to be arranged in a particular way. For the previous case of the coplanar quadrotor, one has

\Gamma_1 = \begin{bmatrix} 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 1 \end{bmatrix}, \qquad \Gamma_2 = \begin{bmatrix} 0 & l & 0 & -l\\ l & 0 & -l & 0\\ -\frac{c_D}{c_T} & \frac{c_D}{c_T} & -\frac{c_D}{c_T} & \frac{c_D}{c_T} \end{bmatrix}


Fig. 3.9 Propulsors attached at the CoM of the UAV body (the origin of L_B)

i.e., rank(Γ) = 4 < 6, which again demonstrates the fact that only 4 DOF can be independently controlled with such a configuration (clearly, this holds for any number of coplanarly arranged propulsors).
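The rank condition on the configuration mapping is easy to check numerically. The sketch below builds Γ = [Γ_1; Γ_2] for the coplanar quadrotor of (3.44) and confirms rank(Γ) = 4; the arm length and drag-to-thrust ratio are placeholder values.

```python
import numpy as np

l = 0.25   # arm length [m] (illustrative)
k = 0.016  # drag-to-thrust ratio c_D / c_T (illustrative)

# Configuration mapping of the coplanar quadrotor, as in Eq. (3.44)
gamma1 = np.array([[0, 0, 0, 0],
                   [0, 0, 0, 0],
                   [1, 1, 1, 1]], dtype=float)
gamma2 = np.array([[0,  l,  0, -l],
                   [l,  0, -l,  0],
                   [-k, k, -k,  k]], dtype=float)
gamma = np.vstack((gamma1, gamma2))

print("rank(Gamma) =", np.linalg.matrix_rank(gamma))   # -> 4 < 6, underactuated
```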

Now we analyze one purely theoretical (not realizable) solution that gives a full rank Γ_1. We set three propulsors at the CoM of the UAV body (the origin of L_B) and align the rotational planes with the body frame axes, as shown in Fig. 3.9.

This will give

\Gamma_1 = \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}

which has a full rank, so that

m\mathbf{g} = R_W^B\,\Gamma_1 u = R_W^B\begin{bmatrix} u_1\\ u_2\\ u_3 \end{bmatrix}

i.e., the components u_1, u_2, and u_3 can be defined so that the overall thrust compensates mg for any R_W^B. However, for such a configuration (apart from the fact that it is not realizable), we cannot independently control rotations, since the condition

\Gamma_2 u = \begin{bmatrix} \frac{c_D}{c_T} & 0 & 0\\ 0 & \frac{c_D}{c_T} & 0\\ 0 & 0 & \frac{c_D}{c_T} \end{bmatrix}\begin{bmatrix} u_1\\ u_2\\ u_3 \end{bmatrix} = 0

is violated as soon as u = [u_1\ u_2\ u_3]^T ≠ 0. If we double the number of propulsors (6 instead of 3) and keep their orientations as in the previous example, then we get


Fig. 3.10 An example of propulsors arranged so that full controllability over all 6 DOF is guaranteed

Fig. 3.11 Propulsors with variable rotational plane

the setup, depicted in Fig. 3.10, that guarantees a full rank of the configuration mapping matrix Γ. For such an arrangement, one has

\Gamma = \begin{bmatrix}
1 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 1\\
0 & 0 & 1 & 1 & 0 & 0\\
\frac{c_D}{c_T} & -\frac{c_D}{c_T} & 0 & 0 & l & -l\\
0 & 0 & l & -l & \frac{c_D}{c_T} & -\frac{c_D}{c_T}\\
l & -l & \frac{c_D}{c_T} & -\frac{c_D}{c_T} & 0 & 0
\end{bmatrix},

which allows independent control of all 6 DOF.

Finally, let us examine the propulsor arrangement depicted in Fig. 3.11c (we leave it to the reader to determine Γ for the setup given in Fig. 3.7b as an exercise). In the considered arrangement, the standard quadrotor setup is extended with 4 additional actuators (servo motors) that provide each propeller with the ability to change its rotational plane. Let us assume that the local coordinate frames of the 4 propulsors are aligned so that, starting from the positive x_B axis (Fig. 3.7 depicts L_1), each subsequent L_i is rotated by π/2 (with respect to the previous one) around z_B. Additional actuators, ρ_i, are used to tilt the propellers with respect to x_i (changing the roll angle ψ_i). Since L_i, i = 1, ..., 4, are not fixed with respect to the vehicle body frame, the configuration mapping is not constant, i.e., Γ = f(ρ), where ρ = [ρ_1 ρ_2 ρ_3 ρ_4]^T,

\Gamma = \begin{bmatrix}
0 & S_{\rho_2} & 0 & -S_{\rho_4}\\
-S_{\rho_1} & 0 & S_{\rho_3} & 0\\
C_{\rho_1} & C_{\rho_2} & C_{\rho_3} & C_{\rho_4}\\
0 & lC_{\rho_2} + \frac{c_D}{c_T}S_{\rho_2} & 0 & -lC_{\rho_4} + \frac{c_D}{c_T}S_{\rho_4}\\
lC_{\rho_1} - \frac{c_D}{c_T}S_{\rho_1} & 0 & -lC_{\rho_3} - \frac{c_D}{c_T}S_{\rho_3} & 0\\
-\frac{c_D}{c_T}C_{\rho_1} + lS_{\rho_1} & \frac{c_D}{c_T}C_{\rho_2} + lS_{\rho_2} & -\frac{c_D}{c_T}C_{\rho_3} + lS_{\rho_3} & \frac{c_D}{c_T}C_{\rho_4} + lS_{\rho_4}
\end{bmatrix}.

Obviously, for ρ_1 = ρ_2 = ρ_3 = ρ_4 = 0, we get the Γ that corresponds to the standard coplanar quadrotor; a numerical sketch of this tilt-dependent mapping is given below. Having 8 actuators that control 6 DOF makes the system overactuated, which opens space for optimization of the control algorithms that can, for example, increase energy efficiency while keeping the configuration mapping Γ a full rank matrix. However, complex control algorithms might rely on measurements of variables that are not directly used in the standard setup (e.g., linear and angular accelerations) and, furthermore, such algorithms must be executed at a relatively high sample rate, which in turn requires powerful microprocessors with extended memory. It is up to the system designer to find the balance between the benefits and drawbacks that a particular setup brings.
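The following sketch evaluates the tilt-dependent configuration mapping Γ(ρ) above and checks that for ρ = 0 it reduces to the coplanar quadrotor mapping. The arm length and drag-to-thrust ratio are again placeholder values.

```python
import numpy as np

l = 0.25   # arm length [m] (illustrative)
k = 0.016  # c_D / c_T   (illustrative)

def gamma_of_rho(rho):
    """Tilt-dependent configuration mapping Gamma(rho) for the 4-propulsor setup."""
    r1, r2, r3, r4 = rho
    S, C = np.sin, np.cos
    return np.array([
        [0,                 S(r2),              0,                 -S(r4)],
        [-S(r1),            0,                  S(r3),              0],
        [C(r1),             C(r2),              C(r3),              C(r4)],
        [0,                 l*C(r2)+k*S(r2),    0,                 -l*C(r4)+k*S(r4)],
        [l*C(r1)-k*S(r1),   0,                 -l*C(r3)-k*S(r3),    0],
        [-k*C(r1)+l*S(r1),  k*C(r2)+l*S(r2),   -k*C(r3)+l*S(r3),    k*C(r4)+l*S(r4)],
    ])

# Gamma is 6x4, so its rank (in the thrusts u alone) is at most 4; the extra
# control authority of this setup comes from treating the tilt angles rho as
# additional inputs alongside u.
G0 = gamma_of_rho(np.zeros(4))                      # coplanar case
G1 = gamma_of_rho(np.deg2rad([10, -5, 15, -10]))    # arbitrary tilt angles (assumed)
print("rank at rho = 0:    ", np.linalg.matrix_rank(G0))
print("rank at tilted rho: ", np.linalg.matrix_rank(G1))
```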

In the above examples, we analyzed UAV structures under the assumption that the propulsors were able to provide unlimited forces and torques. Such a feature is rarely met in practice, as it might result in an energy-inefficient setup of propulsors. As we mentioned at the beginning of the section, the structural arrangement of propulsors is in general driven by the envisioned application and the simplicity/efficiency of the design. An application that requires agile flight and high maneuverability will result in a completely different vehicle design than an application that includes heavy payload transportation. Various optimization techniques can be used to obtain the most appropriate vehicle for a particular task [14]. A design that allows independent change of all 6 DOF, thus giving the vehicle the ability to point the thrust in any direction of the reference frame and providing full control over a reference pose trajectory, is not yet commercially exploited, mostly because most current applications do not demand such a property. Although some aerial manipulation applications could benefit from more complex setups of actuators, in the rest of the book we consider systems with a planar arrangement of propulsors.

3.3 Aerial Manipulation Actuation

Before the detailed analysis of aerial manipulator kinematics and dynamics, given in Chaps. 4 and 5, in this part of the book we discuss models and properties of actuators commonly used in UAV applications.


3.3.1 DC Motor

The most commonly used actuators on small UAVs are direct current (DC) motors [10]. There are several types of DC motors, but they all share the same physical principles of electromechanical energy conversion. Their simple mechanical design and relatively straightforward control make them ideal for a wide range of applications. In the case of small UAVs, particularly interesting is the brushless direct current (BLDC) motor, which will be described in the next section of the chapter. Herein we start with the basics of the simplest form of DC motor—the independently excited DC motor (Fig. 3.12). As we are interested in building a model suitable for the analysis and synthesis of speed/position controllers, we skip the motor construction details. Furthermore, as an in-depth analysis of the electromagnetic and heat properties of a DC motor is beyond the scope of the book, some phenomena, such as the heat influence on resistance and the voltage drop on the brushes, will be just briefly mentioned, while others, such as the nonlinear dependence of the inductance with respect to the current, will be completely neglected. Taking into account those assumptions, an equivalent circuit of a DC motor with a mechanical load can be presented in the form given in Fig. 3.13.

The basic functionality of the motor can be described as follows: the armature voltage u_a, applied across the motor terminals, will induce an armature current i_a to flow through the armature windings, represented by the resistance R_a and the inductance L_a. According to the Lorentz law, the magnetic field Φ (in the case of small DC motors induced by permanent magnets placed on the stator) will exert a force upon the armature windings (placed on the rotor). This will produce the motor torque T_M, and the rotor will start to turn. Due to the rotation of the armature windings in the magnetic field, the flux changes in time, which in turn causes the induction of a counter voltage e (also known as the back electromotive force). These physical principles can be expressed in a formal way as:

Fig. 3.12 Permanent magnet DC motor construction (courtesy of Microchip Inc.)

Fig. 3.13 A DC motor with mechanical load

u_a(t) = e(t) + R_a i_a(t) + L_a\frac{di_a(t)}{dt},    (3.45)

T_M(t) = K\Phi\, i_a(t) = K_T\, i_a(t),    (3.46)

e(t) = K\Phi\, \omega(t) = K_e\, \omega(t),    (3.47)

where K is a constant defined by the mechanical construction of the motor and ω is the rotational speed. One should note that in (3.46) and (3.47), KΦ is expressed in different units, [Nm/A] and [V/(rad/s)], respectively. Although the two are equivalent, in order to avoid confusion and to follow the usual terminology used in the motor data sheets provided by manufacturers, in (3.46) we substituted KΦ with K_T, called the torque constant, while in (3.47) we used K_e, called the voltage constant.

On the mechanical side of the energy conversion, the torque produced by the motor is opposed by the load torque T_L, the moment of inertia J (comprised of the load moment of inertia J_L and the rotor moment of inertia J_R, J = J_L + J_R), and the friction:

T_M(t) - T_L(t) = b\,\omega(t) + J\frac{d\omega(t)}{dt},    (3.48)

where b is the friction coefficient. For completeness, we included friction in (3.48), but in the analysis that follows we will neglect the torque caused by friction, as in normal operating conditions it usually represents a very small fraction of the torque produced by the motor (we will return to the friction torque later in the section). Equations (3.45)–(3.48) are the foundation for the study of the static and dynamic behavior of the motor.

Once the motor is in a working point, the transients of the armature current and the rotational speed are completed, which leads to the following relations (time t, as a variable, is removed from the equations as the motor is in the steady state):

u_a = e + R_a i_a,    (3.49)

T_M = T_L.    (3.50)

By including (3.46), (3.47), and (3.50) in (3.49), one obtains the mechanical characteristic of the DC motor:

\omega = \frac{u_a}{K_e} - \frac{R_a}{K_e K_T}\,T_L.    (3.51)
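As a quick numerical illustration of the mechanical characteristic (3.51), the sketch below evaluates the steady-state speed for a few load torques, using the data-sheet values listed later in Table 3.1.

```python
# Steady-state speed from the mechanical characteristic, Eq. (3.51)
K_e = 0.0137   # voltage constant [V/(rad/s)]  (Table 3.1)
K_T = 0.0137   # torque constant  [Nm/A]       (Table 3.1)
R_a = 0.079    # armature resistance [Ohm]     (Table 3.1)
u_a = 12.0     # armature voltage [V]

def speed(T_L):
    """omega = u_a/K_e - R_a/(K_e*K_T) * T_L."""
    return u_a / K_e - R_a / (K_e * K_T) * T_L

for T_L in (0.0, 0.05, 0.1, 0.2):
    print(f"T_L = {T_L:4.2f} Nm  ->  omega = {speed(T_L):6.1f} rad/s")
```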


Fig. 3.14 A DC motor mechanical characteristic: the load torque influence

Although relation (3.51) is valid in all 4 quadrants (a DC motor can work as a generator), in the analysis that follows we concentrate on quadrant I (the standard motor mode of operation). First we investigate the influence of the load torque on the rotational speed. From (3.51) it is clear that the rotational speed depends linearly on the load torque. In case T_L is increased while the armature voltage is kept constant, the rotational speed will decrease (according to (3.48)). Due to the drop in the rotational speed, the counter voltage e (according to (3.47)) will be reduced. As a consequence, the armature current will increase in order to keep (3.49) in balance (as the armature voltage is kept constant). Finally, the increased current will enlarge the motor torque T_M (according to (3.46)) so that T_{M2} = T_{L2}, which completes the transition, and the motor reaches a new working point (Fig. 3.14). In UAV applications, the described change of working point corresponds to an increase of blade pitch on a variable-pitch propeller driven by a DC motor under constant armature voltage—a higher pitch implies higher propeller drag, which raises the load torque on the rotor.

Now let us see what happens to the mechanical characteristic if one changes u_a, R_a, or Φ. We start with the armature voltage while keeping the armature resistance, the load torque, and the magnetic field constant. As DC motors used on small UAVs are controlled by changing the armature voltage (typically through pulse-width modulation (PWM) realized by an H switching bridge), it is important to understand the physical phenomena that are in the background of the controller design, which we will visit later in the book. According to (3.51), a decrease (increase) of u_a will cause the characteristic to shift down (up), while the slope remains the same. As an effect of the armature voltage change, the working point changes (ω_1 → ω_2) (Fig. 3.15). Lowering the value of u_a causes i_a to decrease for a moment, which in turn weakens the motor torque. Due to (3.48), the rotational speed decreases and the counter voltage drops, so that the armature current returns to the value it had prior to the armature voltage change, thus bringing the motor and the load torques back into balance. Following this line of reasoning, one can analyze a DC motor equipped with a propeller (constant-pitch blades) under a demand to increase the thrust. According to (3.33), a larger thrust requires an increase of the propeller rotational speed, which can be done, as we just described, by enlarging the armature voltage. However, with a higher rotational speed the propeller drag rises as well, so that the load torque on the rotor is increased (we come back to this later, in the discussion of the motor dynamics). This case is presented in Fig. 3.16.

Fig. 3.15 A DC motor mechanical characteristic: the armature voltage influence

Fig. 3.16 A mechanical characteristic of DC motor with a propeller: the armature voltage influence

Before we proceed with further analysis, let us go back to two distinguishing points on the mechanical characteristic (Fig. 3.15) that are of interest—the no-load rotational speed, ω_0, and the stall load torque, T_{LS}. The no-load rotational speed, as the name says, is obtained when there is no load on the rotor. In that case the motor torque has to compensate only the torque caused by the friction of the motor parts, which is, as we already mentioned, negligible. Hence, the no-load armature current i_{a0} is minute (i_{a0} ≈ 0). According to (3.51), one has

\omega_0 = \frac{u_a}{K_e}.    (3.52)


On the other hand, in the case the motor has to compensate the stall load torque, the armature current becomes significant. As the rotational speed is equal to 0, there is no counter voltage, so (3.49) takes the form

i_{aS} = \frac{u_a}{R_a},    (3.53)

which gives

T_{LS} = K_T\, i_{aS}.    (3.54)

Since the armature resistance is very low (in order to reduce losses), the armature current becomes large, and if the stall condition lasts for some time, the armature windings might be destroyed. It should be noted that the stall load condition exists every time the motor is started from the resting position—when the armature voltage is applied to the motor (ω = 0), there is no counter voltage and the stall current flows through the armature windings for a short period of time until the rotor starts to spin. As the rotational speed increases, so does the counter voltage, which reduces the armature current down to the value required for the working point.

Now we return to the analysis of the influence of the armature resistance R_a and the magnetic field Φ on the mechanical characteristic of a DC motor. According to (3.51), as an outcome of a change in R_a, the slope of the characteristic is changed (Fig. 3.17). This fact is used in the design of simple controllers, where a variable resistor, R_{add}, is added in series with R_a. Let us consider the start of the DC motor in an attempt to reach the working point ω_1. As the motor is at rest, the initial armature current will be very large (equal to the stall current i_{aS}). Adding R_{add1} (Fig. 3.17) significantly reduces the stall current, and once the rotational speed gets to a particular value (ω_{11}), R_{add} is decreased to R_{add2} so that a new working point is reached (ω_{12}). Finally, the variable resistor is set to 0, and the motor reaches the desired working point ω_1.

Fig. 3.17 A DC motor mechanical characteristic: the armature resistance influence


Fig. 3.18 A DC motor mechanical characteristic: the magnetic field influence

At the end, let us just mention how the magnetic field Φ changes the mechanical characteristic of a DC motor. As we have seen, the DC motor rotational speed depends linearly on u_a and R_a. On the other hand, the speed control characteristic based on the magnetic field is highly nonlinear, which makes magnetic field-based speed control difficult to implement. Another reason why this type of control is not relevant for UAV applications is that the magnetic field in DC motors used on UAVs is induced by permanent magnets. In order to allow changes of the magnetic field, a separate field circuit should be incorporated in the construction of the motor. This circuit can be independent (separately excited motors) or it can be a part of the armature circuit (shunt motors or series motors). Having the ability to manipulate the magnetic field, from (3.51) (remember that Φ is a part of K_e and K_T) it is obvious that Φ changes both the slope of the characteristic and the no-load rotational speed (Fig. 3.18).

Now we briefly visit two characteristics that are important for matching an appropriate DC motor with a particular application. Among the several parameters that are considered when a DC motor has to be chosen, the torque and the power developed by the motor are of primary importance. We already analyzed how the load torque determines the mechanical characteristic. As far as the motor power output, P_M, is concerned, we start with the fundamental relation between power and torque

P_M = \omega\, T_M.    (3.55)

Substituting (3.50) and (3.51) in (3.55), we arrive at an expression relating power to torque in a DC motor:

P_M = \left(\frac{u_a}{K_e} - \frac{R_a}{K_e K_T}T_L\right)T_L = \frac{u_a}{K_e}T_L - \frac{R_a}{K_e K_T}T_L^2.    (3.56)


Fig. 3.19 A typical DC motor power characteristic

As (3.56) demonstrates, the motor power, P_M, is a quadratic function of the load torque T_L. This result is interesting, as it points to the fact that there exists a load torque under which the motor develops the maximum power. It is easy to determine that the maximum-power load torque equals half of the stall torque T_{LS}, with

P_{Mmax} = \frac{K_T}{4 K_e R_a}\,u_a^2.    (3.57)

Relation (3.57) shows another important result—since P_{Mmax} is proportional to u_a^2, changes in the armature voltage have a substantial impact on the DC motor power. The typical power characteristic is shown in Fig. 3.19.

Closely related to the motor power is the motor efficiency η. Defined as

\eta = \frac{P_M - P_{losses}}{P_{el}},    (3.58)

the efficiency is the basic indicator of the degree to which electrical power is transformed into useful mechanical power. Throughout this section, we have intentionally ignored friction effects. However, when efficiency comes into the picture, it is mandatory to take into consideration the losses, P_{losses}, produced by those effects (i_{a0} ≠ 0),

\eta = \frac{\omega\,(T_M - b\omega)}{i_a u_a}.    (3.59)

The question is: at which working point does the motor run with maximum efficiency? To find the answer, one has to calculate the condition under which the motor generates the most useful torque for a given electric power. As the developed torque is directly proportional to the armature current, we will formalize the efficiency as a function of i_a. Starting with (3.59), we substitute u_a = i_{aS} R_a and bω = K_T i_{a0}, and include (3.51) in (3.59),


\eta = \frac{K_T i_a - K_T i_{a0}}{i_a\, i_{aS} R_a}\,\frac{i_{aS} R_a - i_a R_a}{K_e}.    (3.60)

Since K_T = K_e in consistent units, after some mathematical manipulations, the efficiency results in

\eta = \left(1 - \frac{i_{a0}}{i_a}\right)\left(1 - \frac{i_a}{i_{aS}}\right),    (3.61)

where, as we described at the beginning of this section, i_{a0} is the no-load current (T_L = 0) and i_{aS} is the stall current (T_L = T_{LS}). Relation (3.61) is very convenient for analysis, as DC motor data sheets usually provide information about the no-load and stall currents. At a low-torque, high-speed working point (refer to Fig. 3.14), the armature current is very small (i_a ≈ i_{a0}), so the first factor of (3.61) is approximately 0, i.e., the efficiency is very low. On the other hand, in the case of high torque and low speed, the armature current is very high (i_a ≈ i_{aS}), which makes the second factor of (3.61) close to 0. Again, the efficiency under this condition is very low. To find the armature current that results in the maximum efficiency, we set the derivative of (3.61) to zero, which gives

i_{a\eta max} = \sqrt{i_{a0}\, i_{aS}}.    (3.62)

In the case of small DC motors, the no-load current is several orders of magnitude lower than the stall current; hence the most efficient working point is in the region of relatively high rotational speeds (close to the no-load speed) and relatively low torques. Finally, by including (3.62) in (3.61), we get the maximum efficiency as

\eta_{max} = \left(1 - \sqrt{\frac{i_{a0}}{i_{aS}}}\right)^2    (3.63)

The typical efficiency curve of a DC motor is presented in Fig. 3.20. It is beneficial if the DC motor runs at the working point of maximum efficiency (usually referred to as the nominal working point [ω_n, i_{an}]). However, in UAV applications, this is rarely the case as the load torque varies due to changing flight conditions. What one can do is to keep the motor within the desirable working region, i.e., somewhere between the point of maximum efficiency and the point of maximum power.

Fig. 3.20 A typical DC motor efficiency curve

Table 3.1 An example of DC motor parameters from data sheets

No-load speed                    ω0    8130 [rpm]
No-load current                  ia0   0.32 [A]
Stall torque                     TLS   2.08 [Nm]
Stall current                    iaS   152 [A]
Armature (terminal) resistance   Ra    0.079 [Ω]
Armature (terminal) inductance   La    0.026 [mH]
Torque constant                  KT    0.0137 [Nm/A]
Speed constant                   Ks    699 [rpm/V]
Rotor inertia                    JR    99.5 [gcm²]

Problem 3.1 At this point, we will complete the study of static characteristics with an example of a typical small brushed DC motor with permanent magnets and a rated (nominal) power of 80 W at u_a = 12 V. The motor parameters, as stated in the data sheets, are given in Table 3.1.

One should note that instead of the voltage constant K_e, the speed constant K_s is given in the data sheets. To comply with the notation used throughout the text, we calculate the voltage constant K_e = (1/K_s)(60/(2π)) = 0.0137 [V/(rad/s)]. As we explained earlier, this value is exactly equal to K_T from Table 3.1. Also, the large difference (almost three orders of magnitude) between the no-load and stall currents should be noted. From the given data, one can determine two effects, the friction torque (no-load torque) and the voltage drop on the brushes [12], that were neglected during the analysis. By analogy with (3.54), the friction torque is T_{L0} = K_T i_{a0} = 0.0137 · 0.32 = 0.004384 [Nm]. To obtain the voltage drop on the brushes, we use (3.47) and (3.49), with the no-load speed converted from [rpm] to [rad/s] (ω_0 = 851.37 [rad/s]),

u_{br} = u_a - K_e\omega_0 - R_a i_{a0} = 12 - 0.0137 \cdot 851.37 - 0.079 \cdot 0.32 = 0.31 [V].

Next, we turn to the determination of the working point of maximum efficiency. Including the parameters in (3.62) and (3.63) results in

i_{a\eta max} = \sqrt{0.32 \cdot 152} = 6.97\ [\mathrm{A}], \qquad \eta_{max} = \left(1 - \sqrt{\frac{0.32}{152}}\right)^2 = 0.91.

At this working point, the motor torque is T_{Mn} = K_T · i_{aηmax} = 0.0955 [Nm]. Subtracting the friction torque from T_{Mn} gives the nominal load torque, T_{Ln} = 0.0911 [Nm], at the nominal rotational speed ω_n = 811 [rad/s], and the nominal power P_{Mn} = 77.78 [W] (according to (3.57), P_{Mmax} = 455 [W]). Note that the calculated nominal power is very close to the rated power, which confirms the correctness of the equations devised earlier in this section.
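A short script reproducing the numbers of Problem 3.1 from the Table 3.1 data; small differences in the last digits come only from rounding of the constants.

```python
import math

# Data sheet values (Table 3.1)
w0_rpm, ia0, iaS = 8130.0, 0.32, 152.0
Ra, KT = 0.079, 0.0137
Ke = 0.0137          # voltage constant [V/(rad/s)], equal to KT (computed from Ks in the text)
ua = 12.0

w0 = w0_rpm * 2 * math.pi / 60            # no-load speed [rad/s]
T_friction = KT * ia0                     # no-load (friction) torque [Nm]
u_brush = ua - Ke * w0 - Ra * ia0         # voltage drop on the brushes [V]

ia_eta = math.sqrt(ia0 * iaS)             # Eq. (3.62): current at maximum efficiency
eta_max = (1 - math.sqrt(ia0 / iaS))**2   # Eq. (3.63): maximum efficiency
T_Mn = KT * ia_eta                        # motor torque at that point [Nm]
w_n = w0 - Ra / (Ke * KT) * T_Mn          # speed from the mechanical characteristic (3.51)
P_Mmax = KT / (4 * Ke * Ra) * ua**2       # Eq. (3.57): maximum power [W]

print(f"T_friction = {T_friction:.5f} Nm, u_brush = {u_brush:.2f} V")
print(f"ia_eta = {ia_eta:.2f} A, eta_max = {eta_max:.2f}, T_Mn = {T_Mn:.4f} Nm")
print(f"w_n = {w_n:.0f} rad/s, P_Mmax = {P_Mmax:.0f} W")
```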


Fig. 3.21 An example of DC motor characteristics

Having defined all relevant points, we are able to calculate and draw the DC motor characteristics (Fig. 3.21—we depict only a small fraction of the characteristics around the nominal working point).

The slope of the electromechanical characteristic is determined by (3.51), i.e.,

\text{slope} = -\frac{R_a}{K_e K_T} = -420.9\ \left[\frac{\text{rad/s}}{\text{Nm}}\right].

From Fig. 3.21, it can be seen that the efficiency increases rapidly with the torque and then steadily decreases after reaching η_max (for T_L = 0.2 [Nm] ⇒ i_a = 15 [A] it equals 0.88). The motor power increases with the torque up to the maximum value (not presented in the figure). Eventually, all three characteristics reach 0 at the stall torque T_{LS} = 2.08 [Nm].

Let us now return to Eqs. (3.45)–(3.48) and investigate the DC motor dynamics. As already mentioned, the rotational speed of DC motors used on small UAVs is controlled by the armature voltage; hence, we represent the system in the form ω = f(u_a). We will start by keeping the assumptions stated at the beginning of the static analysis, i.e., we neglect the heat influence, the nonlinear dependence of the inductance with respect to the current, the voltage drop on the brushes, and the friction. A DC motor block scheme, depicted in Fig. 3.22, is a direct interpretation of Eqs. (3.45)–(3.48). Such a representation of a DC motor is particularly convenient for a simple linear controller design in the complex (Laplace) domain, since it directly leads to the DC motor transfer function comprising time constants and a gain.

As a first step, we consider the case with no load torque (note that J = J_R) and frictionless rotation (b = 0). By applying the Laplace operator (∫ ⇒ 1/s), after simple mathematical manipulations, the block scheme in Fig. 3.22 takes the form presented in Fig. 3.23. The transfer function is:


Fig. 3.22 A DC motor block scheme

Fig. 3.23 Block diagram of a DC motor with no-load

\frac{\omega(s)}{u_a(s)} = \frac{\frac{K_T}{R_a J s(\tau_a s + 1)}}{1 + \frac{K_T K_e}{R_a J s(\tau_a s + 1)}},    (3.64)

where τ_a = L_a/R_a [s] is the DC motor electrical time constant. Rearranging (3.64) gives

\frac{\omega(s)}{u_a(s)} = \frac{\frac{1}{K_e}}{\frac{R_a J}{K_T K_e}\tau_a s^2 + \frac{R_a J}{K_T K_e}s + 1} = \frac{K_m}{\tau_m\tau_a s^2 + \tau_m s + 1},    (3.65)

where τ_m = R_a J/(K_T K_e) is the DC motor mechanical time constant and K_m = 1/K_e is the DC motor gain. The time constants that determine the motor dynamics depend on construction parameters (except for J, which changes with the load). Since τ_m ≫ τ_a for most DC motors (for the one with the parameters given in Table 3.1, τ_a = 0.33 [ms] and τ_m = 4.19 [ms]), we can simplify (3.65) by disregarding the τ_m τ_a s² term. Hence,

\frac{\omega(s)}{u_a(s)} \approx \frac{K_m}{\tau_m s + 1}.    (3.66)

The inverse Laplace transformation of (3.66), with u_a(t) = u_{a1} + U_a S(t), where S(t) is the Heaviside step function, gives the response of the rotational speed in the time domain, shown in Fig. 3.24:

\omega(t) = K_m U_a\left(1 - e^{-t/\tau_m}\right) + \omega_1.    (3.67)
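The first-order response (3.67) is easy to evaluate numerically; the sketch below uses the no-load gain and time constant quoted for the Table 3.1 motor, while the initial working point and the 2 V voltage step are assumed values chosen only for illustration.

```python
import numpy as np

K_m = 1 / 0.0137   # motor gain 1/K_e [rad/s/V]
tau_m = 4.19e-3    # mechanical time constant [s] (no-load value from the text)
omega_1 = K_m * 8.0  # initial speed at u_a1 = 8 V (illustrative working point)
U_a = 2.0            # step of the armature voltage [V] (assumed)

t = np.linspace(0.0, 5 * tau_m, 6)                        # one sample per tau_m
omega = K_m * U_a * (1 - np.exp(-t / tau_m)) + omega_1    # Eq. (3.67)

for ti, wi in zip(t, omega):
    print(f"t = {1e3*ti:5.2f} ms  ->  omega = {wi:6.1f} rad/s")

frac = (omega[1] - omega_1) / (K_m * U_a)   # fraction of the step covered at t = tau_m
print(f"fraction of the step covered at t = tau_m: {frac:.2f}")   # ~0.63
```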


Fig. 3.24 The rotational speed response to a step change of the armature voltage

The mechanical time constant determines the rate of change of the response—for t = τ_m the rotational speed has covered 63% of its step change (ω(t = τ_m) = 0.63·K_m U_a + ω_1), and subsequently, as t → ∞, the rotational speed reaches the new working point, ω(t → ∞) = ω_2 = K_m U_a + ω_1.

In the above discussion, we neglected friction effects on the motor dynamics. Now, let us investigate what happens in the case b ≠ 0 (for the motor in the example, b = 5.15 · 10⁻⁶ [Nm/(rad/s)], which comes from b = K_T i_{a0}/ω_0). The motor transfer function takes the following form (please refer to Fig. 3.22):

\frac{\omega(s)}{u_a(s)} = \frac{K_T}{(L_a s + R_a)(J s + b) + K_T K_e} = \frac{K_T}{L_a s(J s + b) + R_a(J s + b) + K_T K_e}.    (3.68)

Since L_a ≪ R_a,

\frac{\omega(s)}{u_a(s)} \approx \frac{K_T}{R_a(J s + b) + K_T K_e} = \frac{K_m^*}{\tau_m^* s + 1},    (3.69)

where

K_m^* = \frac{K_T}{R_a b + K_T K_e}, \qquad \tau_m^* = \frac{R_a J}{R_a b + K_T K_e}.

As long as R_a b ≪ K_T K_e (in most cases the difference is several orders of magnitude), it is apparent that friction plays an insignificant role in the motor dynamics (we leave it to the reader to calculate the differences between (K_m, τ_m) and (K_m^*, τ_m^*) for the motor with the data given in Table 3.1).

Fig. 3.25 The load torque of a DC motor with a propeller, as a function of the rotational speed

For DC motor applications on UAVs, more interesting results are obtained when one considers the influence of a propeller on the mechanical time constant and the motor gain. The effect is twofold: (i) the propeller moment of inertia is significantly larger than the rotor moment of inertia, which causes a major change in the mechanical time constant; (ii) the propeller drag, which can actually be treated as a particular type of friction (we shed more light on this in a moment), is proportional to ω² (see (3.34)), which means that the mechanical time constant and the motor gain will vary as the working point changes. This second effect, due to its nonlinear nature, is particularly important from the control point of view. A typical curve showing the load torque (the propeller drag) with respect to the rotational speed is given in Fig. 3.25. For convenience, and for an example that follows later in the text, we included the electromechanical characteristics for two working points in the figure as well. In the case a DC motor drives a propeller, Eq. (3.48) takes the following (nonlinear) form,

T_M(t) = b\,\omega(t) + a\,\omega(t)^2 + J\frac{d\omega(t)}{dt},    (3.70)

where the parameter a depends on the drag coefficient, the propeller diameter, and the air density. Representation of (3.70) in the Laplace domain requires linearization around the working point, so that the nonlinear term aω(t)² is transformed into a linear gain k_ω representing the slope of the tangent at the working point, as depicted in Fig. 3.25. From the block diagram of the system (Fig. 3.26), we can write the transfer function (notice the similarity with (3.69)):

\frac{\omega(s)}{u_a(s)} \approx \frac{K_T}{R_a(J s + k_\omega) + K_T K_e} = \frac{K_m^+}{\tau_m^+ s + 1},    (3.71)

where

K_m^+ = \frac{K_T}{R_a k_\omega + K_T K_e}, \qquad \tau_m^+ = \frac{R_a J}{R_a k_\omega + K_T K_e}.

If one compares the relations for (K_m^*, τ_m^*) and (K_m^+, τ_m^+), then the resemblance is obvious—the only difference is in k_ω, which took the place of the friction parameter b. This confirms our earlier assertion that the variable load torque produced by a propeller can formally be treated as a friction.

Fig. 3.26 Block diagram of a DC motor with variable load (i.e., propeller)

Problem 3.2 We will demonstrate the impact of a propeller on the transfer function (3.71) by a numerical example. Let us consider a DC motor with the parameters given in Table 3.1 that runs a propeller with drag characteristic T_L = 1.39 · 10⁻⁷ ω² [Nm] and moment of inertia J_L = 0.125 · 10⁻³ [kgm²] (note that J_L ≫ J_R). By including T_L in (3.51), we can determine the working points in the cases u_{a1} = 8 [V] and u_{a2} = 12 [V], that is, ω_1 = 546 [rad/s] and ω_2 = 805 [rad/s]. Now, one can calculate the gains as k_{ω1} = 2aω_1 = 2 · 1.39 · 10⁻⁷ · 546 = 1.498 · 10⁻⁴ [Nm/(rad/s)] and k_{ω2} = 2 · 1.39 · 10⁻⁷ · 805 = 2.224 · 10⁻⁴ [Nm/(rad/s)]. The transfer functions for working points 1 and 2 become

G_{m1}(s) = \frac{\omega(s)}{u_a(s)} = \frac{68.66}{8.58 \cdot 10^{-3} s + 1}, \qquad G_{m2}(s) = \frac{\omega(s)}{u_a(s)} = \frac{66.74}{8.34 \cdot 10^{-3} s + 1}.

One can notice that the mechanical time constant, compared with the no-load case, has doubled due to the propeller moment of inertia (it was 4.19 [ms]), while the motor gain has slightly decreased (it was 72.83 [rad/s/V]). The difference in the transfer function parameters between the two working points in the example is not significant and amounts to just a few percent. However, depending on the propeller and the motor type, as well as the working point, the difference might grow.

Fig. 3.27 Impact of the propeller on DC motor efficiency


Table 3.2 DC motor rotational speed and propeller thrust with respect to the armature voltage

Armature voltage [V]   Rotational speed [rad/s]   Thrust [N]
4                      194                        0.16
5                      241                        0.29
6                      284                        0.44
7                      329                        0.58
8                      367                        0.72
9                      403                        0.94
10                     434                        1.16
11                     464                        1.34
12                     490                        1.42

An additional detail is worth mentioning here. As we said earlier, the goal is to keep the motor close to the point of maximum efficiency. So, the question is how the propeller impacts η. In Fig. 3.27, we present the drag characteristics of two propellers (prop 1 and prop 2), together with the electromechanical (ω) and efficiency (η) characteristics of a DC motor (for the nominal armature voltage). At working point 1, determined as the intersection of the prop 1 drag characteristic and the DC motor electromechanical characteristic, the efficiency of the motor is η_1, which lies close to η_max. On the other hand, for a different type of propeller (with increased blade pitch or larger diameter, equation (3.27)) mounted on the motor shaft, the efficiency could decrease significantly (η_2 in Fig. 3.27). That is why motor manufacturers provide information regarding the type of propeller that should be used in combination with a particular DC motor used for UAV applications.

Table 3.2 presents results obtained by changing the armature voltage of a typical DC motor equipped with a propeller.

Another aspect that should be taken into account when effects on the motor parameters are considered is the temperature [7]. Depending on the working conditions, the motor's operating temperature might increase up to a value that significantly changes R_a (by heating the copper windings) and K_T and K_e (by heating the magnets), which in turn influences K_m and τ_m. To calculate the changes in R_a, K_T, and K_e caused by the temperature, one can use the simple relations

R_a(\vartheta) = R_a(\vartheta_0)\left[1 + \alpha_R(\vartheta - \vartheta_0)\right],    (3.72)

K_{T(e)}(\vartheta) = K_{T(e)}(\vartheta_0)\left[1 - \alpha_K(\vartheta - \vartheta_0)\right],    (3.73)

where ϑ is the motor temperature, ϑ_0 is the specified ambient temperature (usually 25 ◦C), α_R is the temperature coefficient of the resistance for the metal used to construct the windings (for copper it is approximately 0.004 [1/◦C]), and α_K is the temperature coefficient of the magnet material (it can vary from 0.0001 to 0.002 [1/◦C]). It should be noted that R_a increases with an increase in the temperature, while at the same time K_T and K_e decrease. In some extreme circumstances, the motor temperature could rise by more than 100 [◦C], so that the motor parameters might change by up to 50%.


As a consequence, not only is the motor dynamics affected, but its static characteristics change as well.

Putting together the mentioned effects on the motor parameters, the repercussions could be such that they require adaptation of the rotational speed controller in order to maintain the control quality at the required level, which is of high importance in aerial manipulation, where precise UAV maneuvering is mandatory.
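A minimal sketch of the temperature corrections (3.72) and (3.73), using the copper coefficient quoted above and an assumed magnet coefficient chosen within the stated range:

```python
ALPHA_R = 0.004   # temperature coefficient of copper windings [1/degC]
ALPHA_K = 0.001   # magnet temperature coefficient [1/degC] (assumed, within 0.0001-0.002)
THETA_0 = 25.0    # reference ambient temperature [degC]

def params_at_temperature(Ra0, KT0, theta):
    """Armature resistance and torque/voltage constant at temperature theta, Eqs. (3.72)-(3.73)."""
    Ra = Ra0 * (1 + ALPHA_R * (theta - THETA_0))
    KT = KT0 * (1 - ALPHA_K * (theta - THETA_0))
    return Ra, KT

# Table 3.1 motor, heated 100 degC above ambient
Ra_hot, KT_hot = params_at_temperature(0.079, 0.0137, THETA_0 + 100.0)
print(f"Ra: 0.079 -> {Ra_hot:.3f} Ohm,  KT: 0.0137 -> {KT_hot:.4f} Nm/A")
```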

3.3.2 Brushless DC Motor

Even though, from the physical point of view, the standard DC motor and the brushless DC (BLDC) motor share common working principles, they significantly differ in the way those principles are implemented [17]. Namely, the main difference is in the method used for induction of the rotating magnetic field. While in a standard DC motor the commutator (Fig. 3.12) assures changes of the current direction through the windings, in a BLDC motor the commutation is accomplished electronically by electronic switches (transistors). Compared with the standard DC motor, a BLDC motor generally has (i) higher efficiency, (ii) a higher speed range, (iii) a longer operational lifetime, and (iv) a higher torque-to-size ratio.

Figure 3.28 depicts a simplified three-phase BLDC motor diagram (the spatial distribution of the phases with the corresponding electrical schematics and switching commutator) that will be used for a brief description of the BLDC working principles. Since most BLDC motors have a three-phase winding topology with a star connection, in Fig. 3.28 there are three circuits (A, B, and C) connected at a common point (com). Each phase is split in the center (points a, b, and c), forming a series connection of two R-L circuits represented by the phase resistance R_p and the phase inductance L_p. As a motor with this topology is driven by energizing 2 phases at a time (a single commutation step), such a split permits the permanent magnet rotor to move in the middle of the induced magnetic field, thus rotating 60◦ per commutation step.

We begin the description of the BLDC working principles with the static alignment of the permanent magnet (rotor) shown on the left side of Fig. 3.28. This position of the rotor is realized by creating an armature current flow from terminal A to B, which is achieved by the electronic switches T1 and T4, so that the positive pole of the armature voltage u_a is connected to terminal A and the negative pole to terminal B. In the next commutation step, switch T1 opens and switch T5 closes, while T4 keeps its state. This configuration makes the armature current i_a flow from terminal C to terminal B, which moves the rotor clockwise (right side of Fig. 3.28). For the given arrangement, by continuing the commutation steps through the six possible combinations (one electrical revolution), the rotor will be pulled through one mechanical revolution. One possible encoding of such a six-step sequence is sketched below. In a practical implementation of the described mechanism, each phase has several electrical circuits wired in parallel to each other. This, at the same time, requires a corresponding multipole permanent magnet rotor. So, for example, in the case of two circuits, there will be two electrical revolutions per single mechanical revolution, i.e., each commutation step will pull the rotor through 30◦.
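The six-step sequence can be captured in a small lookup table. The sketch below encodes the two steps described above (A→B, then C→B) and continues with one common ordering of the remaining four steps; the exact ordering depends on the winding layout and the desired direction of rotation, so treat it as an illustrative assumption rather than the specific motor of Fig. 3.28.

```python
# Six-step commutation lookup: (high-side terminal, low-side terminal) per step.
# Steps 0 and 1 follow the description in the text; steps 2-5 are one common
# continuation (assumed), giving one electrical revolution of 6 x 60 degrees.
COMMUTATION = [
    ("A", "B"),   # T1, T4 closed: current flows A -> B
    ("C", "B"),   # T5, T4 closed: current flows C -> B
    ("C", "A"),
    ("B", "A"),
    ("B", "C"),
    ("A", "C"),
]

def next_step(step):
    """Advance the commutation step; the rotor turns 60 electrical degrees per step."""
    return (step + 1) % len(COMMUTATION)

step = 0
for _ in range(len(COMMUTATION)):
    hi, lo = COMMUTATION[step]
    print(f"step {step}: energize {hi} -> {lo}")
    step = next_step(step)
```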


Fig. 3.28 Simplified three-phase BLDC motor diagram

Having in mind the above discussion, one fact is clear—the timing sequence of the commutation steps must be synchronized with the rotational speed; hence, a precise position of the rotor is a prerequisite for determining the correct moment to commutate the phases. In a standard DC motor, the commutator is a part of the rotor, so the mechanical design assures accurate commutation. For a BLDC motor this is not the case; hence, one needs to sense the rotor position. There are two approaches to do so: (i) by using Hall effect position sensors triggered by the magnetic field of the passing rotor, and (ii) by detecting the back-EMF induced by the movement of the permanent magnet rotor in front of the stator windings. In the case of sensored control, three Hall sensors are placed so that each of them changes state every 180 electrical degrees, i.e., each sensor output is in alignment with one of the phases. A timing diagram showing the relationship between the sensor states and the phase voltages is shown in Fig. 3.29. One can see that the position of the rotor is easily determined from the Hall sensor states.


Fig. 3.29 Timing diagram of Hall sensors

Fig. 3.30 Sensorless control - back-EMF measurement of the phase that is not energized

For the rotor position depicted on the left side of Fig. 3.28, with switches T1 and T4 closed, sensors 1 and 3 are in state 1, while sensor 2 is in state 0, i.e., the three-bit encoded information reads 101. In the next commutation step, when T5 and T4 are closed, the encoded information is 001, and so on. Hall sensors, which are usually integrated within the BLDC motor housing, require additional wiring, so the motor comes with two separate connectors, one for the windings and one for the sensors, which affects the price and design complexity. That is why sensorless control [8], based on detecting the back-EMF, is becoming more and more common. This is especially true in the case of UAV applications, where (i) there is no need for high starting torque (which requires accurate information about the rotor position at zero speed), (ii) the load torque does not change abruptly, and (iii) low-speed motor operation is rarely used. The basic idea of sensorless control is to measure the back-EMF of the phase that is not energized (Fig. 3.30).


Fig. 3.31 Timing diagramof back-EMF zero-crossings

Information that is important for commutation synchronization is zero-crossingof measured back-EMF signal, which happens in the middle of two commu-tations. Assuming constant rotational speed, the time period from one commu-tation to zero-crossing and the time period from zero-crossing to the nextcommutation are equal (Fig. 3.31). The amplitude of the back-EMF is directlyproportional to the rotational speed. This makes it extremely difficult to detectzero-crossings at low speed. To overcome this problem, there are numerous strate-gies for start-up of sensorless BLDC motors that ignore zero-crossing signals. Oneof the simplest uses a table, stored in the controller memory, of inter-commutationdelays for the first few commutations. Once the motor starts to rotate, the sequenceis terminated and back-EMF feedback is used.

Herein we omit the description of the electronic devices, usually realized as a single unit, that implement the above-mentioned modules (switching bridge and back-EMF measurement unit), as this is out of the scope of the book. For our purpose, it is enough to say that all the electronics required for BLDC motor control in UAV applications is packed in a single component, commonly known as an electronic speed controller (ESC). Such an ESC requires the rotational speed set point in the form of a so-called pulse position modulation (ppm) signal generated by a receiver (in the case of direct radio control of a UAV) or by a dedicated control unit (in the case of a more complex UAV control structure). It is important to understand that, in general, ESCs work in an open control loop, i.e., the rotational speed signal is not used as feedback. This means that in case the load torque changes, the rotational speed will change as well (condition depicted in Fig. 3.14). In order to close the control loop, one needs to measure the rotational speed (by tachometer or encoder) and use an additional controller, as depicted in Fig. 3.32.

From the control point of view, the BLDC motor has characteristics similar to a standard DC motor; consequently, the static and dynamic analysis, provided in the previous


Fig. 3.32 BLDC rotational speed control loop

Table 3.3 An example of BLDC motor parameters from data sheets

  No-load speed                         ω0    7980    [rpm]
  No-load current                       ia0   0.30    [A]
  Stall torque                          TLS   0.38    [Nm]
  Stall current                         iaS   26.8    [A]
  Terminal resistance phase-to-phase    Ra    0.447   [Ω]
  Terminal inductance phase-to-phase    La    0.049   [mH]
  Torque constant                       KT    0.0142  [Nm/A]
  Speed constant                        Ks    672     [rpm/V]
  Rotor inertia                         JR    21.9    [gcm²]

section, applies to BLDC motors as well; that is why the BLDC motor in Fig. 3.32 is represented by the transfer function Eq. (3.66). As for the DC motor, the parameters required for the static and dynamic analysis can be obtained from motor data sheets. An example of BLDC motor parameters, for a rated (nominal) power of 60 W at ua = 12 V, is presented in Table 3.3. By using the given values and the equations provided in the DC motor section, we can determine all static characteristics and calculate the parameters of the BLDC motor transfer function. It should be noted that Ra is specified as the terminal resistance phase-to-phase; hence, according to Fig. 3.28, parameter Ra = 2Rp.
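As a rough numerical illustration, the sketch below evaluates the static gain and mechanical time constant from the Table 3.3 values. It assumes the standard first-order DC-motor relations (Ke = 1/Ks, Km ≈ 1/Ke, τm ≈ Ra JR/(KT Ke)); the exact expressions from the DC motor section are not repeated here, so treat the result as an order-of-magnitude check rather than the book's worked example.

```python
# Rough check of the Table 3.3 data using standard first-order DC-motor
# relations (a sketch; Ke = 1/Ks, Km ~ 1/Ke, tau_m ~ Ra*J/(KT*Ke)).
import math

Ra = 0.447                             # terminal resistance phase-to-phase [ohm]
KT = 0.0142                            # torque constant [Nm/A]
Ks = 672 * 2 * math.pi / 60            # speed constant converted to [rad/s/V]
JR = 21.9e-7                           # rotor inertia [kg m^2] (21.9 gcm^2)

Ke = 1.0 / Ks                          # back-EMF constant [V s/rad], ~0.0142 = KT
Km = 1.0 / Ke                          # static gain [rad/s/V]
tau_m = Ra * JR / (KT * Ke)            # mechanical time constant [s]

print(f"Ke = {Ke:.4f} V s/rad, Km = {Km:.1f} rad/s/V, tau_m = {tau_m*1e3:.2f} ms")
# Ke ~ 0.0142 (consistent with KT), Km ~ 70 rad/s/V, tau_m ~ 4.9 ms
```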

We complete this section with a description of a simple technique for tuning the most commonly used controller for the rotational speed, a PI controller of the form:

G_C(s) = K_C \left( 1 + \frac{1}{\tau_I s} \right) = K_C \, \frac{1 + \tau_I s}{\tau_I s}.    (3.74)

where KC is the controller gain and τI is the integral time constant. The method we present herein is called pole-zero cancellation. To start, let us assume that the ESC and the feedback signal processing blocks in Fig. 3.32 are memory-less elements with unity gain, i.e., GESC(s) = 1, Gfb(s) = 1. This assumption is valid as the dynamics of an ESC is much faster than the dynamics of a motor. Also, the feedback signal processing is usually realized as a filter with time constant(s) much smaller than τm. Hence, the rotational speed open-loop transfer function becomes

G_{\omega OL}(s) = K_C \frac{1 + \tau_I s}{\tau_I s} \cdot \frac{K_m}{1 + \tau_m s}.    (3.75)


In case τI = τm , the controller zero cancels the motor pole so that

G_{\omega OL}(s) = \frac{K_C K_m}{\tau_m s},    (3.76)

which gives the rotational speed closed loop transfer function

G_{\omega CL}(s) = \frac{\omega(s)}{\omega_{ref}(s)} = \frac{\frac{K_C K_m}{\tau_m s}}{1 + \frac{K_C K_m}{\tau_m s}} = \frac{1}{\frac{\tau_m}{K_C K_m} s + 1} = \frac{1}{\tau_\omega s + 1}.    (3.77)

From Eq. (3.77), we see that the closed-loop system dynamics, dominated by the mechanical time constant τm, can be enhanced by an appropriate selection of the controller gain KC. If we go back to the previous section example (Fig. 3.26), with ua1 = 8 [V], ω1 = 546 [rad/s], Km = 68.66 [rad/s/V], and τm = 8.58 [ms], then in case KC = 0.03 the mechanical time constant τm reduces by a factor of 2, and for KC = 0.045 by a factor of 3, and so on. However, one should take care when increasing the gain since the output of the speed controller sets the armature voltage reference for the ESC (Fig. 3.32), so that in case of a large gain the armature voltage can hit the limit. For the given example, assuming a change in the rotational speed reference of 100 [rad/s] while in working point ω1, with KC = 0.045, the controller output at the moment of the rotational speed reference change will be uaref = ua1 + 100 · 0.045 = 12.5 [V], which is over the nominal value of 12 [V], and due to the controller integral term it increases even further at the beginning of the system reaction. Reaching the limit will, in turn, lead to a slower rotational speed response than expected.
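The numbers quoted above follow directly from (3.77); the short sketch below repeats the arithmetic, computing the closed-loop time constant and checking whether the initial controller output would saturate the ESC for the two gains.

```python
# Numerical sketch of the pole-zero cancellation design from the example above:
# with tau_I = tau_m the closed loop is first order with
# tau_omega = tau_m / (K_C * K_m), and the initial controller output for a
# reference step is u_a1 + K_C * delta_omega (integral term not yet acting).
Km = 68.66        # motor gain [rad/s/V]
tau_m = 8.58e-3   # mechanical time constant [s]
ua1 = 8.0         # armature voltage in the working point [V]
ua_max = 12.0     # nominal (limit) voltage [V]

for Kc in (0.03, 0.045):
    tau_omega = tau_m / (Kc * Km)
    u0 = ua1 + Kc * 100.0      # output right after a 100 rad/s reference step
    print(f"Kc={Kc}: tau_omega={tau_omega*1e3:.2f} ms "
          f"(speed-up x{tau_m/tau_omega:.1f}), initial u_a_ref={u0:.1f} V "
          f"{'(saturates)' if u0 > ua_max else ''}")
# Kc=0.03  -> tau_omega ~ 4.2 ms (x2),  u_a_ref = 11.0 V
# Kc=0.045 -> tau_omega ~ 2.8 ms (x3),  u_a_ref = 12.5 V (saturates)
```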

3.3.3 Servo Drives

In aerial manipulation systems, servo drives are applied mainly for the actuation of manipulator joints. Other applications include angle-of-attack control of helicopter blades, tilting of a rotor in some quadrotor configurations, movement of control surfaces in the case of fixed-wing UAVs, and throttle control for ICE engines. The working principle of a servo drive for UAV applications is very simple: by using a high-ratio gear box, the rotation of a DC motor is transferred into the angular position of the output shaft (Fig. 3.33). A potentiometer, attached to the output shaft, provides the feedback for the angular position controller implemented in electronics that is, together with all mechanical parts, packed in the servo drive plastic casing. For standard servo drives for UAV applications, the position reference is provided in the form of a ppm signal, shown in Fig. 3.33. The signal period is 20 [ms], with the pulse length defining the servo drive position: the shortest pulse of 1 [ms] corresponds to the far left position of the shaft, a 1.5 [ms] pulse sets the shaft to the center, and the longest pulse of 2 [ms] positions the shaft far right. Depending on the servo type, the shaft position can move in a range between 180 and 300 [◦], with rotational speed from 200


Fig. 3.33 A servo drive construction and ppm position reference

Fig. 3.34 A servo drive angular position control loop

to 1200 [◦/s]. Due to the high-ratio gear box, servo drives can sustain relatively large torques (up to several [Nm]) compared to their size.
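The ppm interface described above reduces to a linear map from pulse length to shaft angle; the sketch below makes that explicit, with the total travel left as a parameter since it is servo dependent.

```python
# Sketch of the ppm interface described above: a linear map from pulse length
# (1-2 ms, 20 ms period) to shaft angle. The total travel is servo dependent
# (180-300 degrees); here it is a parameter.
def pulse_to_angle(pulse_ms, travel_deg=180.0):
    """Map a 1.0-2.0 ms pulse to a shaft angle, 0 deg = far left."""
    pulse_ms = min(max(pulse_ms, 1.0), 2.0)          # clamp to the valid range
    return (pulse_ms - 1.0) * travel_deg             # 1.5 ms -> center

print(pulse_to_angle(1.0), pulse_to_angle(1.5), pulse_to_angle(2.0))
# 0.0  90.0  180.0  (for a 180-degree servo)
```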

Figure 3.34 depicts the basic angular position control loop commonly used in servo drives. The rotational speed closed loop (Fig. 3.32) is an integral part of the scheme, together with a high-ratio gear box represented by N. A brief analysis of the system performance, utilizing a simple proportional type controller (GC(s) = KC), is given in the text that follows. As for the rotational speed control loop, it is assumed that the feedback signal processing block can be represented with Gfb(s) = 1. In that case

G_{\alpha OL}(s) = \frac{K_C N}{s(\tau_\omega s + 1)},    (3.78)

which gives the 2nd-order closed-loop transfer function

G_{\alpha CL}(s) = \frac{1}{\frac{\tau_\omega}{K_C N} s^2 + \frac{1}{K_C N} s + 1}.    (3.79)

The angular position dynamics is defined by the poles of the transfer function (3.79), i.e., by the selection of KC the system response can go from aperiodic (real poles) to oscillatory (complex poles). It is easy to show that the type of the poles is defined by the following relation,

4 \tau_\omega K_C N \leq 1 \; \text{aperiodic}, \qquad 4 \tau_\omega K_C N > 1 \; \text{oscillatory}.    (3.80)


Fig. 3.35 Response of a servo drive position (throttle [%] vs. t [s]; reference and measured)

Since in most servo drive applications an aperiodic response is preferred, while at the same time the system dynamics should be kept as fast as possible, KC is usually set so that 4τωKCN = 1. However, unlike the rotational speed case, in small servo drives the position controller is most of the time at its limit, i.e., the shaft of the servo drive is moved with the maximum motor speed under the particular torque, so there is a wide range of KC values that can be used. For example, if the no-load speed is ω0 = 1570 [rad/s] at nominal voltage, with τω = 2 [ms] and N = 0.00394, then, according to (3.80), KC = 1/(4τωN) = 31725. In case the change of the shaft position is 0.75 [rad], the controller output at the beginning of the response will be ωref = 23794 [rad/s], which is more than 15 times over the no-load speed. Hence, as we said, the controller is at the limit and the motor runs with the maximum speed.

The response of a servo drive that actuates an internal combustion engine throttle (a step change of 5% in the throttle position corresponds to 0.1 [rad]) is depicted in Fig. 3.35. The transition is aperiodic and completes in 65 [ms]. Due to the throttle spring (mounted on the engine in order to return the throttle to the idle position in case of servo failure), there is a static error in the position response. For such a system, a PI servo controller should be used in order to reduce the static error to zero.

In novel servo drives, a nonlinear controller is commonly used instead of the standard P(I)-controller. The simplest form of such a nonlinear controller has a relay characteristic, i.e., as long as there is a difference between the position reference and the shaft position, the motor provides maximum torque in order to move the shaft to the desired position. The main problem with a relay controller is the chattering that appears once the shaft position approaches the reference. To overcome this, novel servo drives allow the controller characteristic to be modified as shown in Fig. 3.36. By using parameters b and c, a dead zone can be introduced so that chattering is avoided. Furthermore,


Fig. 3.36 A servo drive nonlinear controller characteristic

parameters a and d define the range of linear dependence between the position error and the motor speed. Finally, parameter e can be used to boost the reaction of the servo drive once the position error is out of the dead zone. It should be noted that in case a = b = c = d = 0 the characteristic attains the standard relay form. A sketch of one possible interpretation of this characteristic is given below.
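```python
# A minimal sketch of a dead-zone/linear/relay characteristic of the kind
# described above. Fig. 3.36 is not reproduced here, so the exact roles of
# a, b, c, d, e are an assumption: errors inside the dead zone give zero speed,
# errors in the linear band are scaled, and larger errors saturate at the
# maximum speed, optionally boosted by e at the dead-zone exit.
def servo_speed_command(error, dead_zone, linear_band, max_speed, boost=0.0):
    """Map position error to a motor speed command (illustrative only)."""
    mag = abs(error)
    if mag <= dead_zone:
        return 0.0
    if mag <= dead_zone + linear_band:
        # linear ramp from the (boosted) minimum up to the maximum speed
        frac = (mag - dead_zone) / linear_band
        speed = boost + frac * (max_speed - boost)
    else:
        speed = max_speed
    return speed if error > 0 else -speed

# With dead_zone = linear_band = 0 the characteristic degenerates to a relay,
# as noted in the text for a = b = c = d = 0.
print(servo_speed_command(0.01, 0.02, 0.05, 1570.0))   # 0.0 (inside dead zone)
print(servo_speed_command(0.10, 0.02, 0.05, 1570.0))   # 1570.0 (saturated)
```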

Novel servo drives allow programmable serial communication with up to 50 write/read commands, comprising references, working parameters, controller parameters (Fig. 3.36), and servo drive states (feedback signals). Furthermore, the serial protocol (TTL level half duplex UART) allows servo drives to be connected to each other in a chain configuration, which is particularly useful for aerial manipulator design.

3.3.4 2-Stroke Internal Combustion Engine

The working principle of the internal combustion engine (ICE) is based on the transformation of heat into mechanical work [2]. The combustion of a fuel–air mixture inside the combustion chamber (cylinder) causes an increase in pressure that acts on the piston surface, thus producing the force that moves the piston. By appropriate mechanical design, the linear motion of the piston is transformed into rotation, i.e., mechanical work. This concept is the same for all sorts of ICEs. The main factors that differentiate one type from the other are the method of mixing the fuel with air, the way the mixture is ignited, and how the intake and exhaust flows into the cylinder are controlled. Although very inefficient (only 10–20% of the heat is transferred into usable energy), they are widely used due to the high energy density of hydrocarbon fuels (34–36 [MJ/l] compared to 0.6–4.4 [MJ/l] for various types of batteries).

Originally designed in 1889 by Joseph Day, the piston-ported 2-stroke ICE has become one of the most commonly used engines due to its very simple design. Compared to other ICEs, the light weight and small size of this type of engine are realized through simplified construction. First, all phases of the working cycle (intake, compression, combustion, and exhaust) occur in the same chamber; second, the engine is air-cooled, hence there is no need for a separate cooling system; and finally, oil is premixed into the fuel, which makes a separate oiling system unnecessary. Those


Fig. 3.37 A 2-stroke ICE cycle

properties and a very good power-to-weight ratio make the 2-stroke ICE the best choice for propulsion system actuation in UAV applications that require increased payloads and extended autonomy.

Although the basic working principle of a 2-stroke engine is quite simple, there are many construction details that play a significant role in the engine properties, such as efficiency, delivered torque and power, and fuel consumption. As we are interested in the engine dynamics and control, herein we just briefly go through the engine cycle and skip a detailed elaboration of the physical phenomena (thermodynamics) and the engine construction. To describe the engine cycle, we refer to Fig. 3.37.

In a 2-stroke ICE, a fuel–air mixture, compressed above the piston, is ignited by the spark plug, which produces a rapid rise in pressure and temperature (Fig. 3.37a). The increased pressure drives the piston downward (producing mechanical work) and uncovers the exhaust port, which allows the burnt gases to leave the cylinder. At the same time, the piston is compressing the mixture in the crankcase (Fig. 3.37b). Shortly thereafter, as it moves downward, the piston uncovers the transfer port, which lets the mixture flow from the crankcase into the cylinder (Fig. 3.37c). This part of the cycle is specific to the 2-stroke engine, as the fresh mixture and the burnt gases are present in the cylinder at the same time. In order to provide a proper exchange of the mixture and the gases (known as scavenging), the transfer port pressure has to


Fig. 3.38 Typical power, torque, and fuel consumption characteristics

be above the exhaust pressure, and the piston has to be shaped in a particular way so that the incoming mixture is prevented from simply flowing right over the top of the piston and out the exhaust port. As the last part of the cycle, the mixture, once in the cylinder, is compressed by the piston moving upward (Fig. 3.37d) and the engine is ready for the next cycle. The fact that intake, compression, combustion, and exhaust are performed in two strokes of the piston (going up and down once per cycle) is the main contributor to the already mentioned high power-to-weight ratio of the 2-stroke engine. Compared with a 4-stroke ICE, which has one power stroke per two revolutions of the crankshaft, a 2-stroke engine has one power stroke per revolution, i.e., the power stroke in a 2-stroke ICE happens twice as often as in a 4-stroke. On the other hand, having the intake and exhaust in separate strokes prevents the 4-stroke engine from losing fresh mixture through the exhaust, thus making it potentially more fuel efficient and less harmful from the emissions point of view. Typical power, torque, and fuel consumption characteristics of a 2-stroke ICE are given in Fig. 3.38 (the numbers refer to a three-cylinder, 450 [cm³] engine).

As far as feeding the fuel–air mixture into the cylinder is concerned, the most commonly used device for small 2-stroke ICEs is a carburetor. Placed at the engine inlet, its role is to spray a jet of fuel into the stream of air flowing into the engine inlet, thus providing a fuel–air mixture appropriate for a particular working point. Figure 3.39 presents the basic components of a carburetor; it serves for description purposes only and does not represent the rather complex technical construction of a carburetor.

A membrane that moves in synchronization with the engine cycle delivers fuel from the fuel inflow into the carburetor chamber, from where it passes over the low- and high-speed needles (LN and HN) that form jets to be sprayed into the stream of air. The amount of fuel provided by the jets depends on the engine operating conditions, and it is controlled by the two adjustable needles (Fig. 3.39) that define the air–fuel ratio (AFR). The engine power is controlled by the throttle plate (round metallic disk), where a fully opened plate corresponds to the maximal power. The choke plate, which is


Fig. 3.39 Basic components of 2-stroke ICE carburetor

Fig. 3.40 2-stroke ICE speed response in case of LN = 1.75 turns and HN = 1.5 turns

placed at the input of the carburetor, decreases the amount of air flow, thus increasing the AFR. Such an enriched mixture is desirable when one attempts to start the engine.

There are two main problems with the two-needle system: (i) the needles are "tuned" manually (by screw), which might cause inaccurate adjustments of the carburetor, and (ii) with only two needles it is not possible to keep the AFR close to the optimal level (14.7 to 1) across all engine operating conditions. As a consequence, the engine can suffer large variability in performance (varying power and torque outputs, varying dynamics) and increased fuel consumption. Even though manufacturers provide a set of rules for adjusting the low- and high-speed needles, experience plays a significant role in carburetor tuning. Figures 3.40 and 3.41 depict rotational speed responses of a 2-stroke ICE with a 27×10 propeller for two different needle setups (all experimental results presented in this section are obtained with a 111 [cm³] engine that provides 8.35 [kW] at 7500 [rpm]).


Fig. 3.41 2-stroke ICE speed response in case of LN = 1.25 turns and HN = 1.5 turns (manufacturer recommendation)

It is apparent that a difference of just 0.5 turns on LN significantly changes the static characteristic of the engine (rpm vs. throttle position). For LN = 1.75 turns (Fig. 3.40), there is almost no change in rpm when the throttle position (opening) changes from 15 to 45%, while in case LN = 1.25 turns (Fig. 3.41) the rotational speed changes by more than 1000 [rpm]. Clearly, inappropriate adjustment of the needles leads to a highly nonlinear static characteristic of the engine, depicted in Fig. 3.42 (the dashed line represents the desirable linear characteristic with 10% corresponding to the idle speed of 1500 [rpm] and 80% corresponding to 6500 [rpm]). A somewhat better characteristic is obtained with the recommended adjustment (Fig. 3.43), but additional tuning is still required in order to get the desired characteristic. Another interesting nonlinear phenomenon can be seen in Fig. 3.42. Namely, the engine rpm obtained for a particular value of the throttle position differs depending on whether the throttle position is increasing or decreasing. For example, with the throttle at approximately 50%, the engine rotational speed is around 3100 [rpm] if the throttle was increased, while it is around 4300 [rpm] when the throttle was decreased. By careful tuning of the needles, hysteresis effects can be reduced (Fig. 3.43); however, for high-quality performance of the propulsion system, the control algorithm should be designed so that it completely compensates all nonlinearities.

Investigation of the engine dynamics reveals significant variations in the rotational speed response depending on the working conditions, the LN and HN adjustment, and the direction of change (increase/decrease of the rotational speed). Figures 3.44, 3.45,


Fig. 3.42 Nonlinear static characteristic of 2-stroke ICE caused by inappropriate adjustment of needles

Fig. 3.43 Nonlinear static characteristic of 2-stroke ICE; needles adjusted according to manufacturer recommendation

and 3.46 show the rotational speed response of the engine with a 27×10 propeller for various working points in case the throttle position, αICE, changes by 5%, with LN = 1.75 and HN = 1.5.

The rotational speed response is approximated with a first-order transfer function of the form:


Fig. 3.44 2-stroke ICE speed response at ω0 = 3350 [rpm] and ΔαICE = 5%

Fig. 3.45 2-stroke ICE speed response at ω0 = 5925 [rpm] and ΔαICE = 5%

G_{ICE}(s) = \frac{\omega(s)}{\alpha_{ICE}(s)} = \frac{K_{ICE}}{\tau_{ICE} s + 1},    (3.81)

where KICE is the engine gain and τICE is the engine time constant. This transfer function is obtained in the same way as the one describing the DC motor dynamics (Eq. (3.71)): the torque produced by the engine (TICE ≡ TM) is opposed by a variable load torque (TL, formally having only a linear term as it is treated as a friction) produced


Fig. 3.46 2-stroke ICE speed response at ω0 = 6342 [rpm] and ΔαICE = −5%

by a propeller (refer to Fig. 3.26). So, from the control point of view, both actuators, a DC motor and an ICE, can in first approximation be represented by the same form of the transfer function.

From the ICE rotational speed responses, it can be seen that the difference between the transfer function parameters for two different working points is of an order of magnitude: KICE = 320 [rpm/%] and τICE = 2.375 [s] for ω0 = 3350 [rpm], while KICE = 46.2 [rpm/%] and τICE = 0.1407 [s] for ω0 = 5925 [rpm]. The difference is also significant in the case of opposite changes of the throttle position for two working points that are close to each other (Figs. 3.45 and 3.46). Nonlinearities in the static characteristic, as well as changes in the engine dynamics, clearly demonstrate how important it is to properly adjust LN and HN. A further nonlinearity of the propulsion system is caused by the propeller (the induced drag torque is a function of ω²), as we already mentioned.
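To visualize how different the local dynamics are, the sketch below simulates the identified first-order model (3.81) for the two working points quoted above with a simple forward-Euler step; the parameter values are those from the text, everything else is illustrative.

```python
# Compare the identified first-order models (3.81) for the two working points
# quoted above, simulated with a forward-Euler step for a 5% throttle step.
def step_response(K, tau, d_alpha=5.0, t_end=None, dt=1e-3):
    """Return (time, speed-change) samples of K/(tau*s+1) for a throttle step."""
    t_end = t_end if t_end is not None else 5 * tau
    t, d_omega, out = 0.0, 0.0, []
    while t <= t_end:
        out.append((t, d_omega))
        d_omega += dt * (K * d_alpha - d_omega) / tau   # first-order lag
        t += dt
    return out

slow = step_response(K=320.0, tau=2.375)      # working point around 3350 rpm
fast = step_response(K=46.2, tau=0.1407)      # working point around 5925 rpm
print(f"final speed change: slow {slow[-1][1]:.0f} rpm, fast {fast[-1][1]:.0f} rpm")
# slow working point: ~1600 rpm change, settling over roughly 12 s
# fast working point: ~230 rpm change, settling in under a second
```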

We now turn to the closed loop control of the 2-stroke ICE rotational speed. Herein we present a linear controller structure, even though the highly nonlinear nature of the ICE propulsion system would call for more sophisticated control designs (which are beyond the scope of this book). In UAV applications, a servo drive, described in the previous section, is used for actuation of the throttle. Hence, we use the so-called cascade control principle, with the servo drive control, depicted in Fig. 3.34, as the inner loop, and the rotational speed control as the outer loop. The system block diagram is given in Fig. 3.47.

At the end of this section, we present experimental results obtained by implementation of the cascade control principle for the 2-stroke ICE. The inner loop controller was designed by using the nonlinear characteristic depicted in Fig. 3.36. Parameters b, c, and e were set to 0, while a and d were set to 2%, i.e., for any position error larger


Fig. 3.47 2-stroke ICE speed cascade control loop

Fig. 3.48 2-stroke ICE closed loop speed response for LN = 1.25 and HN = 1.5

than this value the motor provided maximum speed. Due to the high variations in the engine response, the outer loop PI controller was designed to give robust behavior of the closed loop system. Both signal processing blocks were memory-less elements with unity gain. The final result is shown in Figs. 3.48 and 3.49 for two different adjustments of LN and HN.

To analyze the closed loop dynamics of the engine, we use the same approximation, the first-order transfer function (see Eq. (3.77)), as in the case of the open loop system. The identified time constant, τICECL, for various working ranges, is given in Table 3.4.

Comparing the closed loop results with those presented in Figs. 3.44, 3.45, and 3.46, it is obvious that closing the feedback loop and implementing the robust PI controller significantly reduces the variations in the ICE speed response. The closed loop system


Fig. 3.49 2-stroke ICE closed loop speed response for LN = 1.5 and HN = 1

Table 3.4 2-stroke ICE closed loop speed control: identified time constant

  Working range [rpm]      Time constant [s]
                           Increase    Decrease
  LN = 1.25, HN = 1.5
  ≤4500                    0.169       0.118
  4500–5500                0.177       0.183
  ≥5500                    0.185       0.235
  LN = 1.5, HN = 1
  ≤4500                    0.220       0.142
  4500–5500                0.202       0.131
  ≥5500                    0.232       0.256

time constant takes values from 0.131 [s] to 0.256 [s], compared to values that range from 0.1407 [s] all the way up to 2.375 [s] in the situation without the speed controller. As we mentioned earlier, even better results can be attained if more sophisticated controllers (nonlinear or adaptive) are used.


References

1. Bicego D, Ryll M, Franchi A (2016) Modeling and control of FAST-Hex: a fully-actuated by synchronized-tilting hexarotor. In: 2016 IEEE/RSJ international conference on intelligent robots and systems

2. Blair GP (1996) Design and simulation of two-stroke engines. Society of Automotive Engineers Inc

3. Bülthoff HH, Ryll M, Giordano PR (2015) Novel overactuated quadrotor unmanned aerial vehicle: modeling, control, and experimental validation. IEEE Trans Control Syst Technol 23:540–556

4. Bouabdallah S, Murrieri P, Siegwart R (2004) Design and control of an indoor micro quadrotor. In: Proceedings of the IEEE international conference on robotics and automation ICRA '04, vol 5, pp 4393–4398

5. Bramwell ARS, Done GTS, Balmford D (2001) Bramwell's helicopter dynamics. American Institute of Aeronautics and Astronautics, Reston

6. Brescianini D, D'Andrea R (2016) Design, modeling and control of an omni-directional aerial vehicle. In: 2016 IEEE international conference on robotics and automation (ICRA), pp 3261–3266

7. Dan M (2012) Temperature effects on motor performance. Technical report, Pittman Motors

8. Freescale Semiconductor (2009) 3-phase sensorless BLDC motor control using MC9S08MP16: design reference manual. Freescale Semiconductor

9. Gessow A, Myers GC (1952) Aerodynamics of the helicopter. F. Ungar Publishing Co., New York

10. Hughes A (2006) Electric motors and drives: fundamentals, types and applications. Elsevier, London

11. Korpela CM, Danko TW, Oh PY (2011) MM-UAV: mobile manipulating unmanned aerial vehicle. In: Proceedings of the international conference on unmanned aerial systems (ICUAS)

12. Lee S-H, Shin W-G (2010) An analysis of the main factors on the wear of brushes for automotive small brush-type DC motor. J Mech Sci Technol 24:37–41

13. Mellinger D, Lindsey Q, Shomin M, Kumar V (2011) Design, modeling, estimation and control for aerial grasping and manipulation. In: Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (IROS), pp 2668–2673

14. Nikou A, Gavridis GC, Kostas KJ (2015) Mechanical design, modelling and control of a novel aerial manipulator. In: Proceedings ICRA 2015

15. Orsag M, Bogdan S (2009) Hybrid control of quadrotor. In: Proceedings of the 17th Mediterranean conference on control and automation (MED)

16. Sanchez A, Romero H, Salazar S, Lozano R (2007) A new UAV configuration having eight rotors: dynamical model and real-time control. In: Proceedings of the 46th IEEE conference on decision and control, pp 6418–6423

17. Xia CL (2012) Permanent magnet brushless DC motor drives and controls. Wiley, New York


Chapter 4
Aerial Manipulator Kinematics

4.1 Manipulator Concept

In classical robotics, robotic manipulators are composed of links connected through joints to form a so-called kinematic chain [6]. Normally, this kinematic chain consists of two separate groups, the manipulator and the end-effector. However, in aerial and mobile robotics, we go a step further and augment this manipulator with a mobile base. In mobile robotics, this is usually a driving or a walking robot base, while in aerial robotics it is a manned or unmanned aerial vehicle [4].

Link

Unless explicitly stated otherwise, the kinematic description of robotic mechanisms typically uses idealized parameters. The links that the robotic manipulator is composed of are assumed to be perfectly rigid bodies with geometrically perfect surfaces and shapes [4]. As we will see further in the book, in some cases links can be infinitesimally small or assume the entire shape of the UAV body. However, in all these cases, links represent the distance between two joints of the robot.

Joint

Two consecutive links are connected through a single joint with a single DOF. This implies that the joints, as seen from the perspective of this book, can only provide relative motion between the two links in a single coordinate system. The joints can be either revolute (rotary) or linear (prismatic). The direction of the joint is denoted through the z-axis of a coordinate system. In prismatic joints, this is the direction of linear motion, while in rotary joints the z-axis denotes the axis of rotation that follows the right-hand rule.

UAV Body

In most cases, the unmanned aerial vehicles that carry the manipulator provide up to six degrees of freedom, depending on the configuration of their propulsion system. They are usually modeled as a part of the manipulator kinematic chain with n infinitely small


links connected via n joints. However, in some cases, the motion of the UAV can pose a disturbance to the manipulator, which then has to compensate for the inaccuracy of the UAV motion.

End-effector

Each aerial manipulator ends with a part mounted on the last link that holds the mission-specific tool. The tool completes the robot, enabling it to interact with the environment in the desired manner.

4.2 Forward Kinematics

The problem of forward kinematics, depicted in Fig. 4.1, is to describe the relative pose of each link in a chain of n + 1 links connected through n joints. This enables one to calculate the exact pose of the end-effector, given a priori knowledge of the joint variables. Naturally, one would expect to need six parameters to describe the pose of each link in the chain relative to the pose of the preceding link, yet Denavit and Hartenberg showed in 1955 [3] how a systematic notation enables us to use only four parameters to describe this relation. Since then, the so-called DH parameters have become a standard, widely adopted in robotics [2, 4, 5].

When assigning coordinate frames to the links, one associates the ith frame Li with link i. However, it is important to note that at the same time this refers to the (i + 1)th joint, which is positioned at the end of link i. To further simplify the DH notation algorithm, Spong et al. [8] singled out two constraints for the placement of adjacent frames, Li and Li−1, respectively:

• the axis xi is perpendicular to the axis zi−1,
• the axis xi intersects the axis zi−1.

Keeping these two constraints in mind, we can write the steps of the Denavit–Hartenberg procedure. The first three rules of the DH algorithm refer to proper coordinate system placement:

1. Align the zi-axis of the Li coordinate frame with the (i + 1)th joint axis. In fact, the z-axis of the joint points in the direction in which both revolute and prismatic joints operate. For revolute joints, the z-axis is placed according to the right-hand rule, while for prismatic joints the z-axis faces the direction in which the joint extends or contracts.

2. The common normal between two adjacent z-axes, zi−1 and zi, respectively, determines the xi-axis. Furthermore, we choose the direction so that xi faces zi. A careful reader will notice that this choice complies with the two previously mentioned constraints and that xi is both perpendicular to and intersects zi−1. Furthermore, we will consider two special, but very common cases:

• When the two z-axes are parallel, from the infinite number of common normals we pick the one that is collinear with the common normal of the previous joints, i.e., xi−1.


Fig. 4.1 Showing two links connected at joint i, with their respective coordinate systems laid out according to the DH procedure

• A very common situation is when the two adjacent joints are perpendicular to each other. In that case, we place xi so that it faces zi−1 × zi, thus ensuring that xi is both perpendicular to and intersects zi−1.

3. After all other axes are set, we place the yi-axis so that it forms a right-handed coordinate system together with xi and zi.

4.2.1 DH Kinematic Parameters

Now that all the coordinate systems are set, we move on to derive the DH parameters of the robot, which can be divided into two groups, joint- and link-related parameters. The joint-related parameters are two transformations, a translation and a rotation, that are performed on the zi−1-axis. This is followed by the second, link-related group comprising yet another translation and rotation, this time around xi.

Joint-Related Parameters

We define two parameters, Θi and di, that denote the rotation of the joint and its length, respectively. First, we rotate by Θi around the zi−1-axis until xi−1 is aligned with xi. Next, we translate xi−1 in the zi−1 direction until xi−1 becomes collinear with xi. For a prismatic joint i, its length di is a variable, and for a revolute joint its rotation Θi is a variable. In both cases, the second DH parameter, the rotation or the length, respectively, is constant. Throughout this book, as a convention we will use qi to denote the variable of the joint. Summarizing this procedure in two steps yields the following, which is depicted in Fig. 4.2:

1. Compute Θi as the angle for which one rotates xi−1 around zi−1 to align it with xi.


Fig. 4.2 Joint i size di and twist θi

Fig. 4.3 Link i, length ai, and twist αi

2. Measure the distance di that xi−1 travels in the direction of zi−1 to become collinear with xi.

Link-Related Parameters

Once the xi−1-axis is aligned with xi, we can proceed with transformations around this joined xi-axis to align the rest of the coordinate systems. It is worth noting that since both x-axes are aligned and collinear after the joint transformations, it makes no difference whether we rotate around xi or xi−1, since at this point they are the same. The two parameters we derive through the following steps are related to the dimensions of the link.

3. Measure the distance ai that zi−1 travels along xi to come up to and coincide with zi.

4. Compute αi as the angle for which one rotates zi−1 around xi to align it with zi .

In practice, these two parameters, ai and αi, denote the size of the link, and therefore we refer to them as link-related parameters. The procedure is depicted in Fig. 4.3.

Tool Orientation

By convention, robotic links and joints are numbered outward starting from the base frame L0, ending with a tool, or a so-called end-effector. The robot with n joints


Fig. 4.4 Normal, sliding, and approach vector of a standard end-effector

has n + 1 links and n + 1 coordinate systems, starting with L0 and ending with Ln = LT. All the coordinate systems are laid out according to the DH convention, so the only thing left is to form a tool coordinate system, which has no joint to link to. Nevertheless, knowing the exact orientation of the tool is crucial to complete the manipulation task.

It is important to know the approach vector of the end-effector with respect to the targeted object, like the cylinder depicted in Fig. 4.4. Although tools on the end-effector can vary, if we imagine using a gripper, we define the sliding vector as the one orthogonal to the approach vector and aligned with the open–close gripper axis. Finally, one can define the normal vector as the one closing the right-handed coordinate system with the approach and sliding vectors. Consequently, this vector defines the normal to the plane in which the gripper operates.

Observing Fig. 4.4, we can denote the three tool vectors:

• Approach vector zT: commonly aligned with the tool roll axis and points away from the tool
• Sliding vector yT: orthogonal to the approach vector
• Normal vector xT: closes a right-handed coordinate system with zT and yT and forms the normal to the plane of end-effector operation.

Problem 4.1 Now that we have covered the Denavit–Hartenberg convention, let us try to apply it to an aerial robot with three manipulator arms, shown in Fig. 4.5. Each manipulator arm has two degrees of freedom: first a revolute joint, followed by a motor-driven translational joint. The arms are rotated and placed so that the starting horizontal position allows the landing gear to be shorter, saving weight and allowing the arms to be protected when landing. For additional manipulation capabilities, an end-effector is added to the construction. By convention, we consider the center of the assembly as the most important construction point that serves as the base for the aerial robot. After constructing the coordinate systems, we will derive the Denavit–Hartenberg parameters for the proposed aerial manipulator.

Each manipulator arm is modeled as a serial chain RP (revolute-prismatic) manipulator, singled out and shown in Fig. 4.6 for clarity. The quadrotor body frame L0

is considered to be the base frame of the system. It represents a virtual rotational


Fig. 4.5 MM-UAV coordinate frames (flying base frame L0, revolute joint 1, prismatic joint 2, gripper tool)

Fig. 4.6 CAD model design and final construction (showing only one arm for clarity)

joint, which enables us to mathematically describe the displacement and rotation of each arm. Since the revolute joint axis z1 is perpendicular to z0, the first frame L1 is placed below the center of the construction, where z0 and z1 cross paths. The prismatic joint frame L2 is positioned at the intersection point of z1 and z2, which are once again perpendicular to each other. The final tool coordinate system LT is arranged according to the aforementioned DH convention, with the approach vector facing out from the gripper tool. Since z2 and zT are collinear, the tool coordinate system LT is placed at the tool tip.

So far, using the Denavit–Hartenberg (DH) parametrization, the joint frames are set so that we can derive the DH parameters, shown in Table 4.1. Table 4.1 shows the DH parameters of each joint for all three arms, where θ, d, a, and α are the standard DH parameters and q_1^i, q_2^i, and q_3^i are the joint variables of manipulator arm i. Because the manipulator arms are identical, the DH parameters for all joints are the same. For each


Table 4.1 Denavit–Hartenberg parameters for the case study shown in Fig. 4.6

                   θ              d               a    α
  Link 1
    Arm 1          0              −d_1^1          0    π/2
    Arm 2          2π/3           −d_1^2          0    π/2
    Arm 3          4π/3           −d_1^3          0    π/2
  Link 2
    Arms 1, 2, 3   q_2^i + π/2    d_2^i           0    π/2
  Link 3
    Arms 1, 2, 3   0              q_3^i + d_3^i   0    0

base quadrotor joint, the arms are fixed and placed in an equilateral triangle pattern, with a 120◦ angle between them. This is why the virtual base joints have a constant value θi. One can observe that, due to the coordinate system orientation, there are only joint size parameters di. The DH procedure is repeated for all three MM-UAV arms.

One should note how both the revolute and the prismatic joints have an initial value, q_2^i + π/2 and q_3^i + d_3^i, respectively. The initial value ensures that in the initial position, when all the joint variables are set to zero, the robot assumes the home position depicted in Fig. 4.5. As a convention, we will input the initial value when deriving the forward kinematics of the manipulator.

4.2.2 The Arm Equation

Once the set of links, link coordinates, and parameters are set using the DH algorithm, we proceed to transform between successive coordinate systems k − 1 and k. Combining these successive transformations into a single homogeneous transformation matrix allows us to transform coordinates from the end-effector toward the base.

The four fundamental operations described in the previous section formulate either a rotation or a translation of Lk−1 along one of its axes (i.e., zk−1 and xk−1). As we have learned so far, the order of rotation is important when dealing with transformations. The first two operations form a screw transformation along the axis zk−1. Once this is completed, the axis xk−1 is parallel and aligned with xk. The next two operations form yet another screw transform, this time along the xk−1-axis, in order to align Lk−1 with Lk and zk−1 with zk. Once again, we remind the reader that once the first two operations are completed, the axes xk and xk−1 are identical, therefore rotating around xk is actually the same as rotating around xk−1. Using the four DH parameters, θk, dk, ak, αk, we can write the equation for these two screw transforms:

T^k_{k-1}(\theta_k, d_k, a_k, \alpha_k) = \sigma(d_k, \theta_k, z_{k-1}) \, \sigma(a_k, \alpha_k, x_{k-1})    (4.1)


As a consequence of the systematic notation for assigning link coordinates in the DH algorithm, transforming from Lk−1 toward Lk refers to the kth joint and link. In (4.1), T^k_{k-1} represents a successive homogeneous transformation between the k − 1 and k coordinate systems, and σ denotes the screw transform defined by the length and angle of rotation and its respective reference vector. Since the transformations refer to the mobile base, matrix multiplication is done on the right-hand side, which yields the following transform matrix:

T^k_{k-1}(\theta_k, d_k, a_k, \alpha_k) =
\begin{bmatrix}
C\theta_k & -S\theta_k C\alpha_k & S\theta_k S\alpha_k & a_k C\theta_k \\
S\theta_k & C\theta_k C\alpha_k & -C\theta_k S\alpha_k & a_k S\theta_k \\
0 & S\alpha_k & C\alpha_k & d_k \\
0 & 0 & 0 & 1
\end{bmatrix}    (4.2)

In robotics, it is customary to join the two joint variables θk and dk in a single variable using a joint-type parameter:

\xi_k = \begin{cases} 1 & \text{joint } k \text{ is revolute} \\ 0 & \text{joint } k \text{ is prismatic} \end{cases},    (4.3)

forming the joint variable

q_k = \xi_k \theta_k + (1 - \xi_k) d_k.    (4.4)

From here, it is easy to show that using qk , one can rewrite (4.2):

T^k_{k-1}(q_k, a_k, \alpha_k) =
\begin{bmatrix}
Cq_k & -Sq_k C\alpha_k & Sq_k S\alpha_k & a_k Cq_k \\
Sq_k & Cq_k C\alpha_k & -Cq_k S\alpha_k & a_k Sq_k \\
0 & S\alpha_k & C\alpha_k & q_k \\
0 & 0 & 0 & 1
\end{bmatrix}.    (4.5)
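The general transform (4.2) and the chain product that follows in (4.6) are easy to evaluate numerically; the sketch below (Python with NumPy, assumed here purely for illustration) builds one DH transform per table row and multiplies them base to tool.

```python
# A minimal numerical sketch of the homogeneous DH transform (4.2) and of the
# chain product of successive transforms used in (4.6). Angles are in radians.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform T^k_{k-1} built from one row of a DH table."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Multiply successive transforms, base to tool, for a list of DH rows."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_rows:
        T = T @ dh_transform(theta, d, a, alpha)
    return T
```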

Once all successive transformations T^k_{k-1} are obtained, it is straightforward to calculate the transformation between the base and the tool, T^T_B. Because the goal is to calculate the representation of the tool coordinates in the base, we start by multiplying the successive transformation matrices from the left side:

T^T_B = T^1_0 \, T^2_1 \cdots T^k_{k-1} \cdots T^n_{n-1},    (4.6)

and the closed-form expression for the entire arm matrix can be obtained in the form:

T^T_B = \begin{bmatrix} R^T_B & p^T_B \\ 0_{1\times3} & 1 \end{bmatrix},    (4.7)

where R^T_B and p^T_B represent the base-to-tool rotation matrix and translation vector (i.e., the distance between Ln and L0). Usually, for industrial six-axis robots, it is helpful to partition the transformation matrix T^T_B into two distinct portions:


Fig. 4.7 MM-UAV coordinate frames

T^T_B = T^W_B \, T^T_W.    (4.8)

Here, W denotes the so-called wrist coordinate system. Usually, the first three joints determine the position of the wrist, after which the last three joints, through T^T_W, determine its orientation. However, due to the limited payload, for aerial manipulators it is often impossible to build a full six-axis robot. Therefore, it is hard to partition the homogeneous transform matrix into the translation and rotation portions of the transformation.

Problem 4.2 A perfect example is depicted in Fig. 4.7, where a UAV helicopter is equipped with a single three-degree-of-freedom manipulator. For this problem, the goal is to derive the Denavit–Hartenberg parameters, again observing the helicopter as the base of the aerial manipulator shown. Furthermore, we wish to derive the transform matrix between this base frame and the tool end-effector.

We once again follow the DH convention to properly set the coordinate systems of this manipulator chain. Setting coordinate systems L0 and L1 is straightforward. Since z1 and z2 are parallel, L2 is placed at the intersection with the common normal of the two vectors, depicted with a thin dashed line. Since the approach vector of the tool zT is perpendicular to z2, the coordinate system LT would normally retreat to the position of joint 3. This is of course a problem, since the position of LT represents the tip of the end-effector. Not knowing its exact position prevents one from planning the manipulation trajectory. To fix this issue, we add a virtual joint facing the exact same direction as the end-effector approach vector. The virtual coordinate system


Table 4.2 Denavit–Hartenberg parameters for the case study shown in Fig. 4.7

  Link   θ             d      a     α
  1      q_1           −l_1   0     −π/2
  2      q_2 + π/4     0      l_2   0
  3      q_3 − 3π/4    0      0     −π/2
  V-E    q_V = 0       l_3    0     0

then takes the place of the tool coordinate system, retreating back to the intersection with joint 3 (i.e., z2).

When the coordinate systems are in place, we turn to deriving the DH parameters of the serial chain manipulator, which are given in Table 4.2. Deriving the DH parameters from the coordinate systems is a routine procedure. The one thing worth noting is the fact that the virtual joint is not a variable; it remains constant in both size and orientation and accounts only for the otherwise lost dimension l3.

From the Denavit–Hartenberg table of parameters, Table 4.2, it is straightforward to derive the adjacent transformation matrices T^k_{k-1}:

T^1_0 =
\begin{bmatrix}
C_1 & 0 & -S_1 & 0 \\
S_1 & 0 & C_1 & 0 \\
0 & -1 & 0 & -l_1 \\
0 & 0 & 0 & 1
\end{bmatrix}
\qquad
T^2_1 =
\begin{bmatrix}
C(q_2 + \frac{\pi}{4}) & -S(q_2 + \frac{\pi}{4}) & 0 & l_2 C(q_2 + \frac{\pi}{4}) \\
S(q_2 + \frac{\pi}{4}) & C(q_2 + \frac{\pi}{4}) & 0 & l_2 S(q_2 + \frac{\pi}{4}) \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}

T^3_2 =
\begin{bmatrix}
-C(q_3 + \frac{\pi}{4}) & 0 & S(q_3 + \frac{\pi}{4}) & 0 \\
-S(q_3 + \frac{\pi}{4}) & 0 & -C(q_3 + \frac{\pi}{4}) & 0 \\
0 & -1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\qquad
T^4_3 =
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & l_3 \\
0 & 0 & 0 & 1
\end{bmatrix}.    (4.9)

In the previous expression, apart from standard abbreviations, we introduced S(·), C(·) to denote the sine and cosine functions. Using each adjacent transform matrix, we proceed to obtain the base-to-tool transformation matrix utilizing (4.6), which yields:

T^4_0 =
\begin{bmatrix}
C_1 S_{23} & S_1 & C_1 C_{23} & \frac{1}{2} C_1 \left( \sqrt{2}\, l_2 (C_2 - S_2) + 2 l_3 C_{23} \right) \\
S_1 S_{23} & -C_1 & S_1 C_{23} & \frac{1}{2} S_1 \left( \sqrt{2}\, l_2 (C_2 - S_2) + 2 l_3 C_{23} \right) \\
C_{23} & 0 & -S_{23} & -l_1 - l_3 S_{23} - \frac{l_2 (C_2 + S_2)}{\sqrt{2}} \\
0 & 0 & 0 & 1
\end{bmatrix}    (4.10)

For control and planning purposes, one wishes to know the exact position and orientation of the end-effector. The position can be directly obtained from the transformation matrix T^4_0:


p^4_0 =
\begin{bmatrix}
\frac{1}{2} C_1 \left( \sqrt{2}\, l_2 (C_2 - S_2) + 2 l_3 C_{23} \right) \\
\frac{1}{2} S_1 \left( \sqrt{2}\, l_2 (C_2 - S_2) + 2 l_3 C_{23} \right) \\
-l_1 - l_3 S_{23} - \frac{l_2 (C_2 + S_2)}{\sqrt{2}} \\
1
\end{bmatrix}.    (4.11)

However, the rotation may be ambiguous when represented with the rotation part of the transform matrix R^4_0. Therefore, we often use quaternions, or better yet an Euler angle representation. Since the approach vector z^4_0 is of special interest when manipulating objects, tool orientation in robotics is often represented through this vector (i.e., the third column of the rotation matrix R^4_0):

z^4_0 =
\begin{bmatrix}
C_1 C_{23} \\
S_1 C_{23} \\
-S_{23}
\end{bmatrix}    (4.12)

Still, this vector does not hold the complete information about the tool orientation, since it cannot account for the roll angle about this very same axis. This information either has to be mathematically incorporated in the representation, or another vector such as x^4_0 or y^4_0 has to be used.
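As a numerical cross-check of Problem 4.2, the sketch below builds the chain (4.9) directly from the DH rows of Table 4.2 using the general transform (4.2) and compares the resulting position with the closed form (4.11); the link lengths and joint values are arbitrary test numbers, not values from the book.

```python
# Cross-check of Problem 4.2 (a sketch): multiply the adjacent transforms built
# from Table 4.2 and compare the tool position with the closed form (4.11).
import numpy as np

def dh(theta, d, a, alpha):
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

l1, l2, l3 = 0.10, 0.25, 0.15          # arbitrary link lengths [m]
q1, q2, q3 = 0.3, -0.4, 0.7            # arbitrary joint values [rad]

T = (dh(q1, -l1, 0.0, -np.pi / 2)
     @ dh(q2 + np.pi / 4, 0.0, l2, 0.0)
     @ dh(q3 - 3 * np.pi / 4, 0.0, 0.0, -np.pi / 2)
     @ dh(0.0, l3, 0.0, 0.0))          # virtual joint / end-effector

C1, S1, C2, S2 = np.cos(q1), np.sin(q1), np.cos(q2), np.sin(q2)
C23, S23 = np.cos(q2 + q3), np.sin(q2 + q3)
p_closed = np.array([
    0.5 * C1 * (np.sqrt(2) * l2 * (C2 - S2) + 2 * l3 * C23),
    0.5 * S1 * (np.sqrt(2) * l2 * (C2 - S2) + 2 * l3 * C23),
    -l1 - l3 * S23 - l2 * (C2 + S2) / np.sqrt(2),
])
print(np.allclose(T[:3, 3], p_closed))   # expected: True
```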

4.2.3 Moving Base Frame

Having a manipulator attached to the UAV helicopter allows for a shift in the way we think of manipulation. Since the base is capable of moving in 3D space, standard manipulators with n degrees of freedom gain n + m degrees of freedom, where m denotes the variable number of degrees of freedom of the aerial vehicle, which in turn depends on the vehicle configuration.

However, the precision of the m DOFs coming from the vehicle itself is still hard to compare with the precision of the manipulators. This is why, for most aerial manipulations, UAVs provide rough positioning of the end-effector, while the manipulator itself both compensates for the UAV positioning error and provides additional degrees of freedom. This is depicted in Fig. 4.8, where a UAV from the previous example is commanded to grab a cylinder-shaped object.

The distance between the inertial world frame LW and the target, p^T_W, remains constant. On the other hand, due to the imprecise position control of the UAV, the distance between the arm base frame L0 and the inertial world frame, p^0_W, oscillates. As a result, the arm itself needs to compensate for these oscillations, constantly controlling the distance p^T_0 in order to maintain:

p^T_W = p^0_W + p^T_0.    (4.13)

Variations in position are not the only problem when it comes to precise manipulation. Some UAVs, like quadrotors, rely on attitude variations in order to control


Fig. 4.8 Helicopter equipped with 3-DOF manipulator aims to grab a cylinder. The manipulator arms compensate for the motion of the UAV in order to maintain adequate precision

their position. This without a doubt affects their ability to manipulate objects, and the arm has to take attitude variations into account when compensating for the UAV motion. We aim to account for the six DOF of the UAV: three with respect to p^0_W and three with regard to attitude variations. A composition of these two transformations yields the complete homogeneous transformation matrix T^0_W:

T^0_W =
\begin{bmatrix} I_{3\times3} & p^0_W \\ 0_{1\times3} & 1 \end{bmatrix}
\begin{bmatrix} R^0_W & 0_{3\times1} \\ 0_{1\times3} & 1 \end{bmatrix}
=
\begin{bmatrix} R^0_W & p^0_W \\ 0_{1\times3} & 1 \end{bmatrix}.    (4.14)

In (4.14), the rotation matrix R^0_W is usually obtained through a standard Euler representation. In aerial robotics, one usually chooses the standard X–Y–Z or roll–pitch–yaw (ψ, θ, φ) representation. Putting it all together yields:

T^0_W =
\begin{bmatrix}
C\phi C\theta & C\phi S\theta S\psi - S\phi C\psi & C\phi S\theta C\psi + S\phi S\psi & x^0_W \\
S\phi C\theta & S\phi S\theta S\psi + C\phi C\psi & S\phi S\theta C\psi - C\phi S\psi & y^0_W \\
-S\theta & C\theta S\psi & C\theta C\psi & z^0_W \\
0 & 0 & 0 & 1
\end{bmatrix}.    (4.15)

To complete the homogeneous transformation matrix from the inertial world frame LW all the way to the aerial robot end-effector LT, one needs to include the T^0_W transform. The serial chain transformation T^T_0 of the manipulator is derived through the previously described DH procedure. Together, these two transformations yield the exact pose of the tool with respect to the inertial frame.

Problem 4.3 To show the effect of the moving base frame, we turn to another aerial manipulator example, shown in Fig. 4.9. To solve the problem, we need to find the homogeneous transformation matrix for the end-effector pose in the inertial world frame.


Fig. 4.9 Dual-arm aerial robot (MM-UAV) with its respective coordinate frames and notations

Table 4.3 Denavit–Hartenberg parameters obtained for Fig. 4.9, observing only arm A for clarity

  Link   θ                d      a     α
  1      q_{1A} − π/2     0      l_1   −π/2
  2      q_{2A} + π/2     0      l_2   0
  V      q_{3A} + π/2     0      0     π/2
  E      0                l_3    0     0

The first step toward the solution of this problem is to derive the Denavit–Hartenberg parameters for the manipulator, which are shown in Table 4.3. One should note that the coordinate system is laid out so that for the initial pose q = 0 the arms are fully extended downward.

From the DH parameters, it is straightforward to obtain the manipulator chain matrices:

T^1_0 =
\begin{bmatrix}
S_1 & 0 & C_1 & l_1 S_1 \\
-C_1 & 0 & S_1 & -l_1 C_1 \\
0 & -1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\qquad
T^2_1 =
\begin{bmatrix}
-S_2 & -C_2 & 0 & -l_2 S_2 \\
C_2 & -S_2 & 0 & l_2 C_2 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}

T^3_2 =
\begin{bmatrix}
-S_3 & 0 & C_3 & 0 \\
C_3 & 0 & S_3 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\qquad
T^4_3 =
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & l_3 \\
0 & 0 & 0 & 1
\end{bmatrix},    (4.16)

and through that obtain the homogeneous transformation matrix:

T^4_0 =
\begin{bmatrix}
-C_{23} S_1 & C_1 & -S_1 S_{23} & S_1 (l_1 - l_2 S_2 - l_3 S_{23}) \\
C_1 C_{23} & S_1 & C_1 S_{23} & C_1 (-l_1 + l_2 S_2 + l_3 S_{23}) \\
S_{23} & 0 & -C_{23} & -l_2 C_2 - l_3 C_{23} \\
0 & 0 & 0 & 1
\end{bmatrix}.    (4.17)


Finally, to obtain the homogeneous transformation matrix T^4_W, one needs to multiply the previous equation with the world-to-base transformation matrix:

T^4_W = T^0_W \, T^4_0.    (4.18)
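A minimal sketch of this composition is given below: T^0_W is built from the UAV position and roll-pitch-yaw angles following the ZYX convention of (4.15), and premultiplied onto a placeholder arm chain per (4.18). The numeric values and the identity-plus-offset arm transform are assumptions for illustration only.

```python
# Sketch of the moving-base composition (4.15) and (4.18): build T^0_W from the
# UAV position and roll-pitch-yaw, then premultiply the arm chain T^4_0.
import numpy as np

def base_transform(x, y, z, roll, pitch, yaw):
    """Homogeneous T^0_W from UAV position and roll (psi), pitch (theta), yaw (phi)."""
    cps, sps = np.cos(roll), np.sin(roll)
    cth, sth = np.cos(pitch), np.sin(pitch)
    cph, sph = np.cos(yaw), np.sin(yaw)
    R = np.array([
        [cph * cth, cph * sth * sps - sph * cps, cph * sth * cps + sph * sps],
        [sph * cth, sph * sth * sps + cph * cps, sph * sth * cps - cph * sps],
        [-sth,      cth * sps,                   cth * cps],
    ])
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, (x, y, z)
    return T

T0_W = base_transform(1.0, 2.0, 3.0, 0.05, -0.02, 0.5)   # hovering, small tilt
T4_0 = np.eye(4)                                          # placeholder arm chain
T4_0[:3, 3] = (0.0, 0.0, -0.4)                            # e.g. arm reaching down
T4_W = T0_W @ T4_0                                        # Eq. (4.18)
print(T4_W[:3, 3])                                        # tool position in L_W
```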

4.3 Inverse Kinematics

Direct kinematics offers a solution to the problem of knowing the exact position of the end-effector when a priori knowledge of the joint orientations and positions exists. However, in practice, one is equally likely to command the robot to position its end-effector at a desired pose in order to complete a certain task. The aforementioned problem is known as the inverse kinematics problem and will be covered in this section.

4.3.1 Tool Configuration

The transformation matrix T^tool_base holds all the necessary information about the tool, its position and orientation, respectively. We will refer to this collective information as the tool configuration. If we want to perform precise manipulation, knowing the tool configuration is vital. There are twelve variables in the transformation matrix T^tool_base:

Ttoolbase =

⎡⎢⎢⎣r1,1 r1,2 r1,3 p1,4r2,1 r2,2 r2,3 p2,4r3,1 r3,2 r3,3 p3,40 0 0 1

⎤⎥⎥⎦ , (4.19)

however, these 12 variables are by no means independent of one another. As it hasbeen previously shown, the rotation part Rtool

base of the transformation matrix fallsunder the special orthogonal group SO(3), where the three columns of Rtool

base forman orthonormal set. For an orthonormal set of vectors, we can write the followingsix constraints:

rir j = 0, i �= j, 1 ≤ i ≤ 3, 1 ≤ j ≤ 3 (4.20)∥∥rk∥∥ = 1, 1 ≤ k ≤ 3, (4.21)

where ri denotes i th column in Rtoolbase. Together with the position vector, we are

left we only six degrees of freedom in the transformation matrix and use twice asmuch variables to denote the tool configuration. To overcome this, one uses eitherquaternions or Euler angles to denote the orientation of the tool. In [5], authorspropose using the approach vector ztoolbase, to describe the orientation of the tool. This


Table 4.4 Common tool configuration representations

Representation | Advantages | Disadvantages
$\mathbf{T}^{tool}_{base}$ | • Complete representation of tool configuration | • Not a minimal representation of orientation (12 variables) • Hard to incorporate in path planning algorithms • For humans, it is counterintuitive and hard to read
$\mathbf{p}^{tool}_{base} + (\psi, \theta, \varphi)$ | • Minimal representation • Intuitive and easy to read • Direct implementation in path planning algorithms | • Gimbal lock • Multiple solutions
$\mathbf{p}^{tool}_{base} + (q_0, q_1, q_2, q_3)$ | • Close to minimal representation of tool configuration • Complete representation | • For humans, it is counterintuitive and hard to read
$\mathbf{p}^{tool}_{base} + e^{q_n/\pi}\, \mathbf{z}^{tool}_{base}$ | • Minimal representation • Complete representation of tool configuration • Intuitive and easy to read | • Hard to incorporate in path planning algorithms

This type of representation is especially useful when the manipulation task requires maintaining a constant orientation of the tool with respect to the surface being treated. However, since the representation relies only on the vector $\mathbf{z}^{tool}_{base}$, it cannot hold the information about the twist angle around $\mathbf{z}^{tool}_{base}$. Since $\left| \mathbf{z}^{tool}_{base} \right| = 1$, the authors in [5] propose scaling the approach vector with a positive, invertible, exponential scaling function:

$$f(q_n) := e^{q_n/\pi} \quad (4.22)$$

Table 4.4 summarizes the most common tool configuration representations along with their advantages and disadvantages.
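As a small illustration of the last row of Table 4.4, the sketch below (our own helper with an arbitrary transform; it is not code from [5]) packs a homogeneous transform into the position-plus-scaled-approach-vector representation of (4.22) and shows that the scaling makes the last joint's twist recoverable from the vector norm.

```python
# Minimal sketch (assumed helper names): tool configuration vector built from a
# homogeneous transform, with the approach vector scaled by exp(q_n/pi) as in (4.22).
import numpy as np

def tool_configuration(T, q_n):
    """Return w = [p, exp(q_n/pi) * z] from a 4x4 tool transform T."""
    p = T[:3, 3]                # tool position
    z = T[:3, 2]                # approach vector (third column of the rotation part)
    return np.concatenate([p, np.exp(q_n / np.pi) * z])

def recover_twist(w):
    """Invert the scaling: the twist of the last joint from the norm of w[3:]."""
    return np.pi * np.log(np.linalg.norm(w[3:]))

T = np.eye(4); T[:3, 3] = [0.2, 0.0, -0.4]   # illustrative tool pose
w = tool_configuration(T, q_n=0.7)
print(w, recover_twist(w))                   # the twist 0.7 is recovered from |w[3:]|
```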

4.3.2 Existence and Uniqueness of Solution

So far we have learned that every tool configuration holds a total of six independent constraints, which is what remains once the orthonormality constraints are removed from the arm equation $\mathbf{T}^{tool}_{base}$. These six constraints imply that one needs at least six variables to satisfy all of them. In robotics, these variables are the manipulator joints, which shape the pose of the robot so that the arm equation is satisfied.


$$\forall \mathbf{T}^{tool}_{base} \in \{\text{Workspace}\}, \; \exists \mathbf{q} : \mathbf{T}^{tool}_{base}(\mathbf{q}) \;\Rightarrow\; \dim(\mathbf{q}) \ge 6 \quad (4.23)$$

Provided that no joint is redundant and that the tool configuration lies within the workspace of the robot, an inverse kinematics solution exists. In that case, the sufficient condition (4.23) becomes a necessary condition as well.

However, the solution might not be unique, with different solutions corresponding to different robot configurations that produce the same tool configuration. It is this multiplicity of solutions that forces us to choose the one closest to the current pose of the robot. This strategy ensures that each time we command the robot to move, it does so by traversing the shortest path available.

Due to the limited payload capabilities of UAVs, manipulators in aerial robotics are often limited to fewer than six joints. Unfortunately, this implies that, without using the extra degrees of freedom provided by the motion of the UAV, aerial robots have very limited manipulation capabilities. However, even though the number of joints does not ensure the existence of a solution for a general manipulation problem, a solution still exists for a reduced task space. For such a limited set of commanded tool configurations one can find a solution, and even in this reduced case the solution does not have to be unique.

Robots with more than six joints are referred to as redundant manipulators. In theory, such a manipulator has infinitely many solutions for any given tool configuration. In practice, this is limited by the joint limits and the joint resolution. However, a manipulator can be redundant even with fewer than six joints. A very common example, referred to as elbow-up/elbow-down redundancy [5], is shown in Fig. 4.10. The figure shows two distinct solutions in joint space for a two-degree-of-freedom manipulator that produce an identical result in the tool configuration space. Provided that the joint limits allow it, both solutions are mathematically equally acceptable. However, from a manipulation point of view, one solution can allow a better or safer approach, and the user has to choose the best one. Additional degrees of freedom, besides those strictly required to execute a given task, allow for the so-called internal motions, during which the tool remains stationary while the rest of the manipulator moves.

Fig. 4.10 Multiple solutions with a non-redundant manipulator with only two degrees of freedom


Besides avoiding obstacles or singularities, the increased dexterity can be utilized to optimize energy consumption during task execution.

Since aerial robots possess the joint degrees of freedom of both the manipulator and the UAV, in most cases they are kinematically redundant. However, the possible presence of non-holonomic constraints and of coupling between the degrees of freedom of the UAV base motion must be taken into account in order to determine the actual degree of redundancy of an aerial robot. For instance, although a quadrotor has six degrees of freedom, one uses attitude control in order to move it. This coupling therefore takes away two degrees of freedom and leaves it with only four. Since the roll and pitch angles control more than its position, they also act as a disturbance for the manipulator, reducing its ability to achieve a higher degree of autonomy.

4.3.3 Closed-Form Solutions

A direct approach to solving the inverse kinematics problem is also the fastest one. Analytical solutions are reliable and fast when compared to iterative methods. On the other hand, analytical solutions offer little, if any, flexibility across different robotic manipulators. Each analytical solution has to be specially tailored to the manipulator at hand. Adjusting solutions for different configurations is not only tedious, but most of the time impossible. Nevertheless, for the simple manipulators to which aerial robots are limited, analytical solutions provide the best approach for solving the inverse kinematics problem.

Even though each solution is tailored to a specific manipulator, there are still several useful approaches one should be aware of. To that effect, we turn to several problems to demonstrate these approaches.

Problem 4.4 Solve the inverse kinematics problem for the dual-arm 2-DOF quadrotor aerial manipulator from the previous example! The exact pose of the end-effector is defined with two vectors, $\mathbf{p}^4_0$ and $\mathbf{z}^4_0$, respectively. The first vector defines the exact position we want the end-effector to reach, and the approach vector $\mathbf{z}^4_0$ determines the tool orientation. Together they form a tool pose vector $\mathbf{w}$ (Fig. 4.11):

$$\mathbf{w} = \begin{bmatrix} S_1 (l_1 - l_2 S_2 - l_3 S_{23}) \\ C_1 (-l_1 + l_2 S_2 + l_3 S_{23}) \\ -l_2 C_2 - l_3 C_{23} \\ -S_1 S_{23} \\ C_1 S_{23} \\ -C_{23} \end{bmatrix}. \quad (4.24)$$

The solution of the inverse kinematics problem starts with the approach vector component $w(6)$:


Fig. 4.11 Second glance at the quadrotor equipped with a 2-DOF arm, showing the projection of the tool vector $\mathbf{p}^T_0$ used in the inverse kinematics solution

$$w(6) = -C_{23} \;\rightarrow\; q_2 + q_3 = \pm\arccos(-w(6)), \quad (4.25)$$

where using the inverse cosine function we can calculate the sum of the joint variables $q_2$ and $q_3$. The arccosine of $x$ is defined as the inverse cosine function of $x$ for $-1 \le x \le 1$. However, since the cosine is an even function, one cannot simply know whether the resulting angle is positive or negative, hence the sign ambiguity $\pm$.

To calculate either $q_2$ or $q_3$, we once again utilize $w(6)$ in order to subtract a component from $z = w(3)$, after which we can again use the arccosine function to determine the value of the joint variable $q_2$:

$$q_2 = \pm\arccos\left( \frac{w(6)\, l_3 - w(3)}{l_2} \right) \quad (4.26)$$

where we bear in mind that the manipulator parameters $l_3$ and $l_2$ are known a priori. To calculate both $q_2$ and $q_3$, we need to combine both equations, $q_3 = \pm\arccos(-w(6)) - q_2$, which now doubles the ambiguity from the combination of the two equations. How this ambiguity evolves and multiplies is depicted in the graph in Fig. 4.12, showing how after each equation another set of solutions arises, forming ever more permutations.

The only joint variable left to determine at this point is the first joint $q_1$. This can be solved using the atan2 function. Function atan2 is the augmented arctangent function which uses two arguments instead of one. The purpose of using two arguments is to gather information on the signs of the inputs and return the appropriate quadrant of the computed angle. This is not possible with the single-argument arctangent function we are commonly used to. The atan2 function also avoids the problem of division by zero. For any pair of arguments that are not both equal to zero, atan2(y, x) is the angle in radians between the positive x-axis and the line connecting the origin of the xy plane with the point given by the coordinates (x, y). One can apply atan2 to a combination of either $w(4)$ and $w(5)$ or $w(1)$ and $w(2)$.


Fig. 4.12 Uncertainty rises when one combines equations, like arccos, that have two possible solutions. The graph shows how each new ambiguity forms a new pair of solutions, thus increasing the possible outcomes of the analytical solution. The sheer number of solutions increases from the permutation of all possible solutions to the equations. Once all the solutions are obtained, we proceed with selecting the ones that are achievable under the manipulator constraints and finally choose the solution that is closest to the current pose of the robot

Since at this point we already know the value of $S_{23}$ from (4.25), we utilize it to get:

$$q_1 = \operatorname{atan2}\left(-w(4)/S_{23},\; w(5)/S_{23}\right). \quad (4.27)$$

In the previous equation, we divide the components of $\mathbf{w}$ by $S_{23}$ even though, from a strictly mathematical point of view, this is not necessary, since the two factors cancel each other out. However, if one does not know the sign of $S_{23}$ in advance, it can cause a switch in the sign of the solution $q_1$. This would again bring out an ambiguity in the solution, leaving us with 8 permutations of equations instead of the 4 shown in Fig. 4.12. Furthermore, one has to note that for $S_{23} = 0$, (4.27) loses all feasible solutions. If possible, in that case one has to turn to $w(1)$ and $w(2)$ to solve for $q_1$.
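The whole procedure of Problem 4.4 can be wrapped into a short routine that enumerates the sign ambiguities and then picks the candidate closest to the current pose, exactly as Fig. 4.12 suggests. The sketch below is ours and follows (4.25)–(4.27) as written above; the tool vector, link lengths, and current pose are arbitrary illustrative values (w(6) of the text is w[5] here because of zero-based indexing).

```python
# Hedged sketch of the closed-form IK of Problem 4.4: enumerate the arccos sign
# ambiguities (4.25)-(4.26), compute q1 with atan2 (4.27), keep the closest candidate.
import numpy as np

def ik_candidates(w, l2, l3):
    """All closed-form joint candidates (q1, q2, q3) for the tool pose vector w."""
    sols = []
    for s_a in (+1, -1):                              # sign ambiguity of arccos in (4.25)
        q23 = s_a * np.arccos(np.clip(-w[5], -1, 1))
        for s_b in (+1, -1):                          # sign ambiguity of arccos in (4.26)
            q2 = s_b * np.arccos(np.clip((w[5] * l3 - w[2]) / l2, -1, 1))
            q3 = q23 - q2
            s23 = np.sin(q23)
            if abs(s23) > 1e-9:                       # (4.27)
                q1 = np.arctan2(-w[3] / s23, w[4] / s23)
            else:                                     # degenerate S23 = 0: use the
                q1 = np.arctan2(w[0], -w[1])          # position part (pose-dependent sign)
            sols.append(np.array([q1, q2, q3]))
    return sols

def closest_solution(sols, q_now):
    """Pick the candidate with the shortest joint-space distance to q_now."""
    return min(sols, key=lambda q: np.linalg.norm(q - q_now))

w = np.array([0.0, -1.2, -1.8, 0.0, 0.5, -0.87])      # an illustrative tool pose vector
print(closest_solution(ik_candidates(w, l2=2.0, l3=2.0), q_now=np.zeros(3)))
```

In practice, each candidate would also be verified against the forward kinematics and the joint limits before the closest one is chosen.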

4.3.4 Iterative Methods

Unlike closed-form solutions, iterative methods for solving the inverse kinematics problem are not tailored to a specific manipulator. They form a generalized solution that can be more or less easily applied to any type of robotic manipulator. However, since they rely on some form of iterative computation, these methods usually fall behind analytic solutions in terms of speed and reliability. As a general rule, the more joints the robot has, the slower the inverse kinematics problem is solved.

One straightforward approach to solving the inverse kinematics problem is the cyclic coordinate descent (CCD) method, first proposed in [9]. The basic idea behind this approach, observed from the point of view of a rotational joint, is to measure the relative distance, in both orientation and position, between the tool and the goal as seen from each joint coordinate system, and then turn the joint of that coordinate system toward the goal. To do so, one first needs to find:


Fig. 4.13 Showing how each joint $i$ is rotated around $\mathbf{r}_i$ for $\alpha_i$ to align the tool with the goal. The procedure is repeated for every joint, starting from the last joint. Once all the joints are rotated, the procedure reiterates until the exit condition $\left| \mathbf{p}^G_0 - \mathbf{p}^T_0 \right| < \varepsilon$ is met

Fig. 4.14 MM-UAV with 2 arms that have four degrees of freedom

$$\cos(\alpha_i) = \frac{\mathbf{p}^T_i}{\left\| \mathbf{p}^T_i \right\|} \cdot \frac{\mathbf{p}^G_0 - \mathbf{p}^i_0}{\left\| \mathbf{p}^G_0 - \mathbf{p}^i_0 \right\|} \quad (4.28)$$

$$\mathbf{r}_i = \frac{\mathbf{p}^T_i}{\left\| \mathbf{p}^T_i \right\|} \times \frac{\mathbf{p}^G_0 - \mathbf{p}^i_0}{\left\| \mathbf{p}^G_0 - \mathbf{p}^i_0 \right\|} \quad (4.29)$$

the ideal axis $\mathbf{r}_i$ around which to rotate the joint by the angle $\alpha_i$. The ideal axis of rotation is simply the normal to the plane enclosed by the vectors $\mathbf{p}^T_i$ and $\mathbf{p}^G_i$. One does not know the exact value of $\mathbf{p}^G_i$, but it can be calculated as the difference between the vectors $\mathbf{p}^G_0$ and $\mathbf{p}^i_0$. Ideally, joint $i$ is rotated around $\mathbf{r}_i$ by $\alpha_i$ to align the tool with the goal, as shown in Fig. 4.13.

However, since we consider only single-degree-of-freedom joints, one can rarely expect that $\mathbf{z}^i_0$ actually coincides with $\mathbf{r}_i$. Consequently, $\alpha_i$ does not necessarily have to be the optimal angle of rotation for the $i$th joint. Furthermore, for prismatic joints, where there is no rotation, we need another way to calculate the optimal $\Delta q$ that brings the manipulator end-effector closer to the desired goal $\mathbf{p}^G_0$. In Fig. 4.14, we show both cases, for revolute and prismatic joints. The goal of the CCD optimization problem is to find the optimal $\Theta_i$ and $d_i$ that minimize the error $\Delta\mathbf{P}_i = \mathbf{p}^G_i - \mathbf{p}^T_i$.

Once we move the prismatic joint by an optimal length $q_i = d_i$, the newly formed vector $\mathbf{p}^{T*}_i$ becomes:


$$\mathbf{p}^{T*}_i = \mathbf{p}^T_i + q_i \mathbf{z}^i_0. \quad (4.30)$$

For the quadratic error function $\delta(q_i)$

$$\delta(q_i) = \Delta\mathbf{P}_i \cdot \Delta\mathbf{P}_i = \left( \mathbf{p}^G_i - \mathbf{p}^T_i - q_i \mathbf{z}^i_0 \right) \cdot \left( \mathbf{p}^G_i - \mathbf{p}^T_i - q_i \mathbf{z}^i_0 \right), \quad (4.31)$$

it is straightforward to find an analytical solution to the problem. Simply taking the first derivative $\frac{d\delta(q_i)}{dq_i} = 0$, and making sure that the second derivative satisfies $\frac{d^2\delta(q_i)}{dq_i^2} > 0$, we show that the optimal value for $q_i$ is:

$$q_i = d_i = \left( \mathbf{p}^G_i - \mathbf{p}^T_i \right) \cdot \mathbf{z}^i_0. \quad (4.32)$$

A similar analytical solution can be derived for a revolute joint. Rotation around the joint axis $\mathbf{z}^i_0$ produces a new vector:

$$\mathbf{p}^{T*}_i = \mathbf{R}(q_i, \mathbf{z}^i_0)\, \mathbf{p}^T_i = \mathbf{R}(\theta_i, \mathbf{z}^i_0)\, \mathbf{p}^T_i. \quad (4.33)$$

To find the optimal value of $q_i$, once again we turn to the quadratic error function:

$$\delta(q_i) = \Delta\mathbf{P}_i \cdot \Delta\mathbf{P}_i = \left( \mathbf{p}^G_i - \mathbf{R}(\theta_i, \mathbf{z}^i_0)\mathbf{p}^T_i \right) \cdot \left( \mathbf{p}^G_i - \mathbf{R}(\theta_i, \mathbf{z}^i_0)\mathbf{p}^T_i \right) \quad (4.34)$$

Expanding (4.34), basic mathematical manipulation shows that minimizing $\delta(q_i)$ is the same as maximizing:

$$\mathbf{p}^G_i \cdot \mathbf{R}(\theta_i, \mathbf{z}^i_0) \cdot \mathbf{p}^T_i \quad (4.35)$$

In the theory of three-dimensional rotation, Rodrigues' rotation formula is a method of computing the rotation $\mathbf{R}(\theta_i, \mathbf{z}^i_0)$ of a vector $\mathbf{p}^T_i$, given the axis $\mathbf{z}^i_0$ and the angle of rotation, through three vector multiplication operations. Therefore, we can write $\mathbf{R}(\theta_i, \mathbf{z}^i_0) \cdot \mathbf{p}^T_i$ as:

$$\mathbf{R}(\theta_i, \mathbf{z}^i_0) \cdot \mathbf{p}^T_i = \mathbf{p}^T_i \cos(\theta_i) + (\mathbf{z}^i_0 \times \mathbf{p}^T_i)\sin(\theta_i) + \mathbf{z}^i_0 (\mathbf{z}^i_0 \cdot \mathbf{p}^T_i)(1 - \cos(\theta_i)). \quad (4.36)$$

Now, in order to maximize (4.35), we need to find the condition under which its first derivative is equal to zero:

$$\frac{d\left(\mathbf{p}^G_i \cdot \mathbf{R}(q_i, \mathbf{z}^i_0) \cdot \mathbf{p}^T_i\right)}{dq_i} = S_{\Theta_i}\left(-\mathbf{p}^G_i \cdot \mathbf{p}^T_i + \mathbf{p}^G_i \cdot \mathbf{z}^i_0 (\mathbf{z}^i_0 \cdot \mathbf{p}^T_i)\right) + C_{\Theta_i}\left(\mathbf{p}^G_i \cdot (\mathbf{z}^i_0 \times \mathbf{p}^T_i)\right) = 0 \quad (4.37)$$

where we used the well-known abbreviations $S_\Theta$ and $C_\Theta$ for the sine and cosine functions, respectively. If the second derivative is negative, we can directly calculate the analytical solution through several straightforward mathematical manipulations, which yield:

$$q_i = \Theta_i = \operatorname{atan2}\left(\mathbf{p}^G_i \cdot (\mathbf{z}^i_0 \times \mathbf{p}^T_i),\; \mathbf{p}^G_i \cdot \mathbf{p}^T_i - \mathbf{p}^G_i \cdot \mathbf{z}^i_0 (\mathbf{z}^i_0 \cdot \mathbf{p}^T_i)\right). \quad (4.38)$$


If we imagine for a moment that $\mathbf{z}^i_0$ coincides with $\mathbf{p}^T_i \times \mathbf{p}^G_i$, we can easily show how (4.35) reduces to:

$$\mathbf{p}^G_i \cdot \mathbf{R}(\theta_i, \mathbf{z}^i_0) \cdot \mathbf{p}^T_i = \mathbf{p}^G_i \cdot \mathbf{p}^T_i\, C_{\Theta_i} + \mathbf{p}^G_i \cdot (\mathbf{z}^i_0 \times \mathbf{p}^T_i)\, S_{\Theta_i}. \quad (4.39)$$

Making sure its first derivative is equal to zero yields the condition

$$\begin{aligned} \frac{d\left(\mathbf{p}^G_i \cdot \mathbf{R}(q_i, \mathbf{z}^i_0) \cdot \mathbf{p}^T_i\right)}{dq_i} &= -\mathbf{p}^G_i \cdot \mathbf{p}^T_i\, S_{\Theta_i} + C_{\Theta_i}\left(\mathbf{p}^G_i \cdot (\mathbf{z}^i_0 \times \mathbf{p}^T_i)\right) = 0 \quad (4.40) \\ &= \left\|\mathbf{p}^G_i\right\|\left\|\mathbf{p}^T_i\right\| \cos\!\left(\tfrac{\pi}{2} - \alpha_i\right)\cos(\theta_i) - \left\|\mathbf{p}^G_i\right\|\left\|\mathbf{p}^T_i\right\| \cos(\alpha_i)\sin(\theta_i) \\ &= \left\|\mathbf{p}^G_i\right\|\left\|\mathbf{p}^T_i\right\| \sin(\alpha_i)\cos(\theta_i) - \left\|\mathbf{p}^G_i\right\|\left\|\mathbf{p}^T_i\right\| \cos(\alpha_i)\sin(\theta_i) \\ &= \left\|\mathbf{p}^G_i\right\|\left\|\mathbf{p}^T_i\right\| \sin(\alpha_i - \theta_i) = 0, \end{aligned}$$

where $\alpha_i$ denotes the angle between the two vectors $\mathbf{p}^T_i$ and $\mathbf{p}^G_i$, as shown in Fig. 4.13. This condition clearly shows that as $\mathbf{z}^i_0$ approaches $\mathbf{r}_i$ from (4.29), the optimal joint rotation $q_i = \Theta_i = \alpha_i$ becomes equal to the angle given by (4.28).

We summarize the whole CCD algorithm in the following pseudocode:

Data:
• Robot DH parameters
• Direct kinematics function DH(q)
• Goal position in Cartesian space $\mathbf{p}^G_0$
• Start position in joint space $\mathbf{q}_0$
Result: Inverse kinematics solution $\mathbf{q}$

ε ← ∞ ;
q ← q0 ;
while ε > εthreshold do
  for i = (n − 1) : 0 do
    if joint i is prismatic then
      calculate Δqi according to (4.32): qi ← qi + (pGi − pTi(q)) · zi0(q) ;
    else
      calculate Δqi according to (4.38): qi ← qi + atan2( pGi · (zi0(q) × pTi(q)), pGi · pTi(q) − pGi · zi0(q) (zi0(q) · pTi(q)) ) ;
    end
  end
  calculate the norm of the error vector: ε ← ‖pG0 − pT0(q)‖ ;
end

Algorithm 4.1. Pseudocode example of the CCD algorithm that uses the Euclidean norm to calculate the error from the current goal.
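Algorithm 4.1 translates almost line for line into a few dozen lines of numpy. The sketch below is ours, not the book's reference implementation: it handles only revolute joints, uses the update (4.38), and is exercised on arm A of Fig. 4.9 with the DH parameters of Table 4.3 and the dimensions, start pose, and goal of Table 4.5.

```python
# Compact sketch of Algorithm 4.1 (CCD) for a chain of revolute joints.
# Helper names, tolerances, and the example chain are assumptions.
import numpy as np

def dh(theta, d, a, alpha):
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st*ca,  st*sa, a*ct],
                     [st,  ct*ca, -ct*sa, a*st],
                     [0.,     sa,     ca,     d],
                     [0.,     0.,     0.,    1.]])

def frames(q, params):
    """Cumulative transforms T_0^i for a DH chain; params[i] = (offset, d, a, alpha)."""
    Ts, T = [np.eye(4)], np.eye(4)
    for qi, (off, d, a, alpha) in zip(np.append(q, 0.0), params):
        T = T @ dh(qi + off, d, a, alpha)
        Ts.append(T)
    return Ts

def ccd(q, params, p_goal, tol=1e-1, max_iter=200):
    """Cyclic coordinate descent toward the Cartesian goal p_goal."""
    n = len(q)
    for _ in range(max_iter):
        Ts = frames(q, params)
        if np.linalg.norm(p_goal - Ts[-1][:3, 3]) < tol:   # exit condition of Fig. 4.13
            break
        for i in reversed(range(n)):                       # last joint first
            Ts = frames(q, params)
            p_i, z_i = Ts[i][:3, 3], Ts[i][:3, 2]          # origin and axis of joint i+1
            p_T = Ts[-1][:3, 3] - p_i                      # tool seen from the joint
            p_G = p_goal - p_i                             # goal seen from the joint
            # revolute update (4.38): rotate about z_i to align p_T with p_G
            q[i] += np.arctan2(p_G @ np.cross(z_i, p_T),
                               p_G @ p_T - (p_G @ z_i) * (z_i @ p_T))
    return q

# Arm A of Fig. 4.9: DH offsets from Table 4.3, dimensions/start/goal from Table 4.5.
params = [(-np.pi/2, 0., 1.58, -np.pi/2), (np.pi/2, 0., 2., 0.),
          (np.pi/2, 0., 0., np.pi/2), (0., 2., 0., 0.)]
print(ccd(np.array([0., np.pi/4, 0.]), params, p_goal=np.array([0., -4.92, 1.928])))
```

Because each joint is corrected separately, the iteration count and the exact path through joint space depend on the joint ordering and the starting pose, which is precisely the behavior discussed in Problem 4.5 below.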


A careful reader might notice that so far we have only covered the position error between the goal and the tool. Depending on the mission, this might not be sufficient. One would therefore need to augment the optimization functions (4.38) and (4.32) to include the orientation error:

$$\Delta O_i(\mathbf{q}) = \left( \mathbf{z}^G_i \cdot \mathbf{z}^T_i - 1 \right)^2, \quad (4.41)$$

where $\mathbf{z}^G_i$ and $\mathbf{z}^T_i$ represent the z-axis columns of the matrices $\mathbf{T}^G_i$ and $\mathbf{T}^T_i$, respectively. Since there is no direct general method to compare attitude and position errors, for the CCD algorithm to work one needs to make sure both errors satisfy the maximum error condition:

$$\Delta\mathbf{P}_i \cdot \Delta\mathbf{P}_i < \varepsilon_{P max} \;\cap\; \left( \mathbf{z}^G_i \cdot \mathbf{z}^T_i - 1 \right)^2 < \varepsilon_{O max} \quad (4.42)$$

where the maximal position and orientation errors, $\varepsilon_{P max}$ and $\varepsilon_{O max}$, respectively, are arbitrarily defined, depending on the type of work one aims to accomplish with the robot.

Problem 4.5 To observe the CCD algorithm in action, let us once again consider the dual-arm, two-degree-of-freedom manipulator from Fig. 4.9. The goal is to solve the inverse kinematics problem using the CCD algorithm, moving from the start position $\mathbf{p}^T_0$ toward the goal position $\mathbf{p}^G_0$ defined in Fig. 4.15 and Table 4.5. The table also lists the robot dimensions for the DH parameters derived from Fig. 4.9.

It is interesting to note that the problem is actually laid out in 2D space, since the x components of the start and goal vectors, $\mathbf{p}^T_0$ and $\mathbf{p}^G_0$, respectively, are both zero. For this specific situation, all joint rotations involved in the process of reaching the goal (i.e., $\mathbf{z}^1_0$ and $\mathbf{z}^2_0$) face the ideal direction (4.29). The CCD algorithm reaches the goal position in 13 iterations, each shown overlaid in Fig. 4.15.

reaches the goal position in 13 iterations, each shown overlaid on Fig. 4.15. α is the

2 0

12

313

T0p

T2p

E2p

E TEET 22 ppp

E0p

0z

0y

Fig. 4.15 Solving the inverse kinematics problems for a single arm in a dual-arm two-degree-of-freedom aerial manipulator, starting from the beginning 0 toward the end pose E


Table 4.5 Parameters for the inverse kinematics problem in Fig. 4.15. All the numbers are given in non-dimensional form

Start and goal positions:
$\mathbf{p}^T_0$ (start position) | x = 0.0 | y = 1.25 | z = −2.83
$\mathbf{q}_0$ (start joints) | q1A = 0 | q2A = π/4 | q3A = 0
$\mathbf{p}^G_0$ (goal position) | x = 0.0 | y = −4.92 | z = 1.928

Robot dimensions:
DH | l1 = 1.58 | l2 = 2 | l3 = 2

The angle $\alpha$ obtained in the first iteration of the algorithm can be calculated using (4.29). The important thing to notice is that the CCD algorithm gets close to the end position in only three steps. However, once it is close to the goal, it needs 10 more steps in order to finally reach the goal within the error margin $\varepsilon_{P max} = 0.1$.

4.4 Inverse Kinematics Through Differential Motion

So far we have seen that there may not always be a solution to the IK problem, and that if a solution exists it is probably not unique. Moreover, even when the manipulator is well behaved, in the sense that a solution exists, finding a closed-form equation that solves the IK problem may not be possible.

In this section, we approach the IK solution from the perspective of differential motion. Differential motion is described with a matrix $\mathbf{J}(\mathbf{q})$ that maps the speed of the joints in the joint space, $\dot{\mathbf{q}}$, to the tool speed in Cartesian space, $\dot{\mathbf{p}}^T_0$:

$$\dot{\mathbf{p}}^T_0 = \mathbf{J}(\mathbf{q})\, \dot{\mathbf{q}}, \quad (4.43)$$

where the dot denotes the first time derivative of a variable. The matrix $\mathbf{J}(\mathbf{q})$ is a linear operator, known as the Jacobian matrix, that depends on the current manipulator pose $\mathbf{q}$. To solve the inverse kinematics problem, however, one needs to apply the inverse operator $\mathbf{J}(\mathbf{q})^{-1}$. When multiplying the speeds with the elapsed time $\Delta t$, we are left with the relative motions $\Delta\mathbf{p}^T_0$ and $\Delta\mathbf{q}$, respectively. Using the inverse of the Jacobian, these are related in the following manner:

$$\dot{\mathbf{q}} \cdot \Delta t = \mathbf{J}(\mathbf{q})^{-1} \dot{\mathbf{p}}^T_0\, \Delta t \quad (4.44)$$
$$\Delta\mathbf{q} = \mathbf{J}(\mathbf{q})^{-1} \Delta\mathbf{p}^T_0$$

Since the Jacobian matrix depends on the joint pose, the linear relationship (4.44) deteriorates as $\Delta\mathbf{q}$ increases. Because of this, finding the exact $\Delta\mathbf{q}$ necessary to move the tool toward the desired goal might not always be possible in a single attempt at solving (4.44).


The rest of the section discusses different methods for calculating the inverse of the Jacobian matrix, $\mathbf{J}(\mathbf{q})^{-1}$, and choosing an appropriate iteration procedure, but before diving into these problems we take a moment to learn more about the Jacobian matrix itself.

4.4.1 Jacobian Matrix

We assume that the global position and orientation of the end-effector $\mathbf{T}^T_0$ are specified with a tool configuration vector, comprised of the position component $\mathbf{p}^T_0$ and the orientation of the approach vector $\mathbf{z}^T_0$:

$$\mathbf{w} = \begin{bmatrix} \mathbf{p}^T_0 \\ \mathbf{z}^T_0 \end{bmatrix}. \quad (4.45)$$

Taking the time derivative of $\mathbf{w}$, while bearing in mind that it is a composite function of the joint variables $q_j(t)$, $j \in 1, \ldots, n$, which are themselves time-dependent, yields:

$$\dot{\mathbf{w}} = \frac{\partial \mathbf{w}}{\partial t} = \begin{bmatrix} \frac{\partial \mathbf{p}^T_0(\mathbf{q}(t))}{\partial t} \\ \frac{\partial \mathbf{z}^T_0(\mathbf{q}(t))}{\partial t} \end{bmatrix} = \sum_{j=1}^{n} \begin{bmatrix} \frac{\partial \mathbf{p}^T_0(\mathbf{q}(t))}{\partial q_j} \frac{\partial q_j(t)}{\partial t} \\ \frac{\partial \mathbf{z}^T_0(\mathbf{q}(t))}{\partial q_j} \frac{\partial q_j(t)}{\partial t} \end{bmatrix} = \mathbf{J}(\mathbf{q}(t))\, \dot{\mathbf{q}}(t), \quad (4.46)$$

where $\mathbf{J}(\mathbf{q}(t))$ denotes the Jacobian matrix encompassing the partial derivatives:

$$\mathbf{J}(\mathbf{q}(t)) = \begin{bmatrix} \frac{\partial \mathbf{p}^T_0(\mathbf{q}(t))}{\partial q_1} & \frac{\partial \mathbf{p}^T_0(\mathbf{q}(t))}{\partial q_2} & \cdots & \frac{\partial \mathbf{p}^T_0(\mathbf{q}(t))}{\partial q_n} \\ \frac{\partial \mathbf{z}^T_0(\mathbf{q}(t))}{\partial q_1} & \frac{\partial \mathbf{z}^T_0(\mathbf{q}(t))}{\partial q_2} & \cdots & \frac{\partial \mathbf{z}^T_0(\mathbf{q}(t))}{\partial q_n} \end{bmatrix}. \quad (4.47)$$

The partial derivatives can be computed in a purely analytical way, but for the orientation part there is a direct method for calculating the Jacobian components. Since we are considering only single-degree-of-freedom joints, the rotational velocity between two consecutive frames $L_{i-1}$ and $L_i$ is equal to

$$\boldsymbol{\omega}^i_{i-1} = \begin{cases} \dot{q}_i\, \mathbf{z}^{i-1}_0 & \text{if joint } i \text{ is revolute} \\ 0 & \text{if joint } i \text{ is prismatic} \end{cases}. \quad (4.48)$$

The total angular velocity $\dot{\mathbf{z}}^T_0$ is now simply the sum of all the relative velocities $\boldsymbol{\omega}^i_{i-1}$,

$$\boldsymbol{\omega}^T_0 = \dot{\mathbf{z}}^T_0 = \sum_{i=1}^{n} \boldsymbol{\omega}^i_{i-1} = \sum_{i=1}^{n} \mathbf{z}^{i-1}_0\, \dot{q}_i. \quad (4.49)$$


Therefore, the Jacobian matrix for an n-degree-of-freedom manipulator can be written in the following form:

$$\mathbf{J}(\mathbf{q}(t)) = \begin{bmatrix} \frac{\partial \mathbf{p}^T_0(\mathbf{q}(t))}{\partial q_1} & \frac{\partial \mathbf{p}^T_0(\mathbf{q}(t))}{\partial q_2} & \cdots & \frac{\partial \mathbf{p}^T_0(\mathbf{q}(t))}{\partial q_n} \\ \mathbf{z}^0_0 & \mathbf{z}^1_0 & \cdots & \mathbf{z}^{n-1}_0 \end{bmatrix}. \quad (4.50)$$
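When the analytic partial derivatives in (4.50) are tedious to derive, the position block can also be filled in numerically. The following self-contained sketch (a two-link planar arm with assumed link lengths, not one of the book's examples) builds the 6 × n Jacobian by finite differences for the position rows and by stacking the joint axes for the orientation rows.

```python
# Minimal sketch of (4.50): numeric position rows plus stacked joint axes.
# The planar 2-R arm and its link lengths are purely illustrative.
import numpy as np

L1, L2 = 1.0, 0.8                                      # assumed link lengths

def tool_position(q):
    """Planar 2-R arm in the x-y plane; both joints rotate about z_0 = [0, 0, 1]."""
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0] + q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0] + q[1]),
                     0.0])

def jacobian(q, eps=1e-6):
    n = len(q)
    J = np.zeros((6, n))
    p0 = tool_position(q)
    for j in range(n):
        dq = np.zeros(n); dq[j] = eps
        J[:3, j] = (tool_position(q + dq) - p0) / eps   # d p_0^T / d q_j
        J[3:, j] = np.array([0., 0., 1.])               # z_0^{j-1}: both axes along z_0
    return J

q = np.array([0.3, -0.6])
print(jacobian(q).round(4))
```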

To get a better feeling for how to calculate the Jacobian, we turn to the following problem:

Problem 4.6 Derive the Jacobian matrix for the aerial manipulator shown in Fig. 4.8.

Since we have previously solved the direct kinematics problem, we can reuse the results obtained before. Once again, for clarity, we write the necessary transformation matrix, the tool configuration matrix:

$$\mathbf{T}^T_0 = \begin{bmatrix} C_1 S_{23} & S_1 & C_1 C_{23} & \frac{1}{2} C_1 \left( \sqrt{2} l_2 (C_2 - S_2) + 2 l_3 C_{23} \right) \\ S_1 S_{23} & -C_1 & S_1 C_{23} & \frac{1}{2} S_1 \left( \sqrt{2} l_2 (C_2 - S_2) + 2 l_3 C_{23} \right) \\ C_{23} & 0 & -S_{23} & -l_1 - l_3 S_{23} - \frac{l_2 (C_2 + S_2)}{\sqrt{2}} \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad (4.51)$$

together with the successive transformation matrices $\mathbf{T}^i_0$:

$$\mathbf{T}^0_0 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad \mathbf{T}^1_0 = \begin{bmatrix} C_1 & 0 & -S_1 & 0 \\ S_1 & 0 & C_1 & 0 \\ 0 & -1 & 0 & -l_1 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad (4.52)$$

$$\mathbf{T}^2_0 = \begin{bmatrix} C_1 S_{23} & S_1 & C_1 C_{23} & l_2 C_1 \cos(q_2 + \frac{\pi}{4}) \\ S_1 S_{23} & -C_1 & C_{23} S_1 & l_2 S_1 \cos(q_2 + \frac{\pi}{4}) \\ C_{23} & 0 & -S_{23} & -l_1 - l_2 \sin(q_2 + \frac{\pi}{4}) \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$

One should note that the transformation matrix $\mathbf{T}^0_0$ is the identity matrix $\mathbf{I}_{4\times4}$, since it does not actually perform a transformation between the two identical frames $L_0$ and $L_0$. However, it is still written down to complete the procedure, since $\mathbf{z}^0_0$ describes the z-axis of the base coordinate system $L_0$.

Separating the tool position vector $\mathbf{p}^T_0$ from $\mathbf{T}^T_0$, one can calculate the respective partial derivatives:


$$\frac{\partial \mathbf{p}^T_0}{\partial q_1} = \begin{bmatrix} -\frac{1}{2} S_1 \left( \sqrt{2} l_2 (C_2 - S_2) + 2 l_3 C_{23} \right) \\ \frac{1}{2} C_1 \left( \sqrt{2} l_2 (C_2 - S_2) + 2 l_3 C_{23} \right) \\ 0 \end{bmatrix} \quad (4.53)$$

$$\frac{\partial \mathbf{p}^T_0}{\partial q_2} = \begin{bmatrix} \frac{1}{2} C_1 \left( \sqrt{2} l_2 (-S_2 - C_2) - 2 l_3 S_{23} \right) \\ \frac{1}{2} S_1 \left( \sqrt{2} l_2 (-S_2 - C_2) - 2 l_3 S_{23} \right) \\ -l_3 C_{23} - \frac{l_2 (C_2 - S_2)}{\sqrt{2}} \end{bmatrix}, \quad \frac{\partial \mathbf{p}^T_0}{\partial q_3} = \begin{bmatrix} -l_3 C_1 S_{23} \\ -l_3 S_1 S_{23} \\ -l_3 C_{23} \end{bmatrix}.$$

In the same manner, we can single out the first three rows of the third column of the transformation matrices $\mathbf{T}^i_0$ to obtain the $\mathbf{z}^i_0$ vectors:

$$\mathbf{z}^0_0 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}, \quad \mathbf{z}^1_0 = \begin{bmatrix} -S_1 \\ C_1 \\ 0 \end{bmatrix}, \quad \mathbf{z}^2_0 = \begin{bmatrix} C_1 C_{23} \\ C_{23} S_1 \\ -S_{23} \end{bmatrix}. \quad (4.54)$$

Putting it all together enables us to write the complete, and rather long, Jacobian matrix:

$$\mathbf{J} = \begin{bmatrix} -\frac{1}{2} S_1 \left( \sqrt{2} l_2 (C_2 - S_2) + 2 l_3 C_{23} \right) & \frac{1}{2} C_1 \left( \sqrt{2} l_2 (-S_2 - C_2) - 2 l_3 S_{23} \right) & -l_3 C_1 S_{23} \\ \frac{1}{2} C_1 \left( \sqrt{2} l_2 (C_2 - S_2) + 2 l_3 C_{23} \right) & \frac{1}{2} S_1 \left( \sqrt{2} l_2 (-S_2 - C_2) - 2 l_3 S_{23} \right) & -l_3 S_1 S_{23} \\ 0 & -l_3 C_{23} - \frac{l_2 (C_2 - S_2)}{\sqrt{2}} & -l_3 C_{23} \\ 0 & -S_1 & C_1 C_{23} \\ 0 & C_1 & C_{23} S_1 \\ 1 & 0 & -S_{23} \end{bmatrix} \quad (4.55)$$
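Hand-derived Jacobians of this size are easy to get wrong, so it is worth checking them numerically. The sketch below (ours; the link lengths are arbitrary test values) compares the position rows of (4.55), as written above, against a finite-difference Jacobian of the tool position taken directly from (4.51).

```python
# Hedged check (not from the book): analytic position Jacobian of (4.55) versus a
# finite-difference Jacobian of the tool position (4.51). l1, l2, l3 are test values.
import numpy as np

l1, l2, l3 = 0.1, 0.3, 0.2
r2 = np.sqrt(2.0)

def p_tool(q):
    c1, s1 = np.cos(q[0]), np.sin(q[0])
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    c23, s23 = np.cos(q[1] + q[2]), np.sin(q[1] + q[2])
    A = r2*l2*(c2 - s2) + 2*l3*c23
    return np.array([0.5*c1*A, 0.5*s1*A, -l1 - l3*s23 - l2*(c2 + s2)/r2])

def J_analytic(q):
    c1, s1 = np.cos(q[0]), np.sin(q[0])
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    c23, s23 = np.cos(q[1] + q[2]), np.sin(q[1] + q[2])
    A = r2*l2*(c2 - s2) + 2*l3*c23
    B = r2*l2*(-s2 - c2) - 2*l3*s23
    return np.array([[-0.5*s1*A, 0.5*c1*B, -l3*c1*s23],
                     [ 0.5*c1*A, 0.5*s1*B, -l3*s1*s23],
                     [ 0.0, -l3*c23 - l2*(c2 - s2)/r2, -l3*c23]])

def J_numeric(q, eps=1e-7):
    return np.column_stack([(p_tool(q + eps*e) - p_tool(q)) / eps for e in np.eye(3)])

q = np.array([0.4, -0.2, 0.7])
print(np.max(np.abs(J_analytic(q) - J_numeric(q))))    # should be on the order of 1e-6
```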

4.4.2 Inverse Kinematics—Jacobian Method

We have learned so far that the Jacobian matrix serves as a first-order approximation of the direct kinematics function. From a somewhat different perspective, the Jacobian can be viewed through the direct kinematics function approximated with an infinite Taylor series representation around some initial value $\mathbf{q}_0$:

$$\mathbf{w}(\mathbf{q}) = \sum_{n=0}^{\infty} \frac{\partial^n \mathbf{w}(\mathbf{q}_0)}{\partial \mathbf{q}^n} \frac{(\mathbf{q} - \mathbf{q}_0)^n}{n!}. \quad (4.56)$$


Taking only the first member of the Taylor series representation yields the Jacobian approximation of the direct kinematics function,

$$\begin{aligned} \mathbf{w}(\mathbf{q}) &= \mathbf{w}(\mathbf{q}_0) + \underbrace{\frac{\partial \mathbf{w}(\mathbf{q}_0)}{\partial \mathbf{q}}}_{\text{Jacobian matrix}} (\mathbf{q} - \mathbf{q}_0) + \underbrace{\sum_{n=2}^{\infty} \frac{\partial^n \mathbf{w}(\mathbf{q}_0)}{\partial \mathbf{q}^n} \frac{(\mathbf{q} - \mathbf{q}_0)^n}{n!}}_{\text{Remainder term}} \quad (4.57) \\ &= \mathbf{w}(\mathbf{q}_0) + \mathbf{J}(\mathbf{q}_0)(\mathbf{q} - \mathbf{q}_0) + R_n(\mathbf{q}, \mathbf{q}_0) \\ &\approx \mathbf{w}(\mathbf{q}_0) + \mathbf{J}(\mathbf{q}_0)(\mathbf{q} - \mathbf{q}_0). \end{aligned}$$

However, since the remainder of the Taylor series, $R_n(\mathbf{q}, \mathbf{q}_0) \neq 0$, is neglected, the Jacobian approximation of the direct kinematics function $\mathbf{w}(\mathbf{q})$ becomes more and more inaccurate the farther the robot moves away from the initial pose $\mathbf{q}_0$. Furthermore, the direct kinematics function is highly nonlinear, which adds to the problem of inaccurate approximation and implies that the Jacobian can only be used around the current manipulator configuration.

To solve the inverse kinematics problem through the inverse Jacobian $\mathbf{J}(\mathbf{q}_0)^{-1}$, one needs to apply small steps toward the goal until it is finally reached. The difference between the Jacobian methods stems from the different approaches to computing the inverse of the Jacobian matrix. Nevertheless, the algorithm remains similar, and it is summarized with the following pseudocode:

Data:
• Robot DH parameters
• Direct kinematics function DH(q)
• Goal position in Cartesian space $\mathbf{p}^G_0$
• Start position in joint space $\mathbf{q}_0$
Result: Inverse kinematics solution $\mathbf{q}$

ε ← ∞ ;
q ← q0 ;
α ← value in (0, 1) ;
Δe ← pG0 − pT0(q) ;
while ε > εthreshold do
  calculate the Jacobian J(q) ;
  calculate the Jacobian inverse J(q)−1 ;
  Δq ← J(q)−1 Δe ;
  q ← q + α Δq ;
  Δe ← pG0 − pT0(q) ;
  ε ← ‖Δe‖ ;
end

Algorithm 4.2. Pseudocode example of the Jacobian approach to solving the inverse kinematics problem.
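Algorithm 4.2 in numpy form is equally short. In the sketch below (ours; a planar two-link arm stands in for the direct kinematics function DH(q), and the step size and tolerance are arbitrary), the generic inverse is provided by numpy's pinv, anticipating the specific choices discussed in the next subsections.

```python
# Sketch of Algorithm 4.2: iterate dq = J(q)^{-1} e with step size alpha in (0, 1).
# The planar 2-R arm is only an illustrative stand-in for DH(q).
import numpy as np

L1, L2 = 1.0, 0.8

def fk(q):   # tool position of the planar arm
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0] + q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0] + q[1])])

def jac(q):  # analytic 2x2 position Jacobian of the same arm
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

def ik_jacobian(p_goal, q0, alpha=0.5, tol=1e-4, max_iter=500):
    q = q0.astype(float).copy()
    for _ in range(max_iter):
        e = p_goal - fk(q)                              # Cartesian error
        if np.linalg.norm(e) < tol:
            break
        q += alpha * np.linalg.pinv(jac(q)) @ e         # dq = J^{-1} e, scaled by alpha
    return q

print(ik_jacobian(p_goal=np.array([0.9, 0.9]), q0=np.array([0.2, 1.0])))
```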


4.4.3 Inverting the Jacobian

When attempting to invert the Jacobian, one has to be aware that the matrix might not be invertible at all. In most cases, the Jacobian is not a square n × n matrix. Even when it is invertible, as the pose configuration vector changes the Jacobian might become singular at some point. As a general rule, as the pose vector changes, so do the properties of its Jacobian matrix.

4.4.3.1 Jacobian Transpose

The Jacobian transpose is a computationally simple algorithm, which does not require computationally demanding matrix inversion in order to successfully converge to a desired solution [7]. Instead, one uses the transpose of the Jacobian matrix:

$$\Delta\mathbf{q} = \mathbf{J}(\mathbf{q})^T \mathbf{e}. \quad (4.58)$$

We can depict the inverse algorithm based on the Jacobian transpose as a dynamic system, shown with its block scheme in Fig. 4.16. Writing the algorithm in the form of a dynamic system allows us to verify its stability using standard tools, such as a Lyapunov function. For the system shown in Fig. 4.16, we can choose the following Lyapunov function candidate in positive definite quadratic form:

$$V(\mathbf{e}) = \frac{1}{2} \mathbf{e}^T \mathbf{K} \mathbf{e} \quad (4.59)$$

where $\mathbf{K}$ denotes a symmetric positive definite matrix. Taking the time derivative of (4.59), while keeping in mind that the reference value $\mathbf{p}^G_0$ is constant, yields:

$$\begin{aligned} \dot{V}(\mathbf{e}) &= \mathbf{e}^T \mathbf{K} \dot{\mathbf{e}} \quad (4.60) \\ &= \mathbf{e}^T \mathbf{K} \left( \dot{\mathbf{p}}^G_0 - \dot{\mathbf{p}}^T_0(\mathbf{q}) \right) \quad \left\{ \dot{\mathbf{p}}^G_0 = 0 \right\} \\ &= -\mathbf{e}^T \mathbf{K} \dot{\mathbf{p}}^T_0(\mathbf{q}) = -\mathbf{e}^T \mathbf{K} \mathbf{J} \dot{\mathbf{q}} = -\mathbf{e}^T \cdot \mathbf{K} \mathbf{J} \mathbf{J}^T \alpha \cdot \mathbf{e} \end{aligned}$$

Since $\mathbf{J}\mathbf{J}^T$ is positive definite (away from singular configurations), with the right choice of the gains $\alpha$ and $\mathbf{K}$ it is straightforward to ensure that $\dot{V} < 0$ for all $\mathbf{p}^G_0$. This implies that using the Jacobian transpose instead of inverting the matrix yields a stable algorithm that always converges toward a solution. The biggest drawback of this approach is that it requires many iterations before finally converging to the solution.
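A minimal sketch of the transpose update (4.58) is given below. It assumes a forward-kinematics callable and a Jacobian callable, such as those in the previous sketch, are passed in; the gains are arbitrary. No matrix is inverted, which is exactly what makes the method cheap, at the cost of the slow convergence noted above.

```python
# Sketch of the Jacobian-transpose update (4.58): dq = alpha * J^T K e.
# fk and jac are caller-supplied callables; K is a symmetric positive definite gain.
import numpy as np

def ik_transpose(fk, jac, p_goal, q0, alpha=0.05, K=None, tol=1e-3, max_iter=5000):
    q = q0.astype(float).copy()
    K = np.eye(len(p_goal)) if K is None else K
    for _ in range(max_iter):
        e = p_goal - fk(q)
        if np.linalg.norm(e) < tol:
            break
        q += alpha * jac(q).T @ (K @ e)      # gradient-like step, no matrix inversion
    return q
```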


Fig. 4.16 Observing the Jacobian transpose iterative inverse kinematics algorithm as a dynamic system through its block scheme representation

4.4.3.2 Jacobian Pseudoinverse

Another computationally fast approach to solving the inverse problem of the Jacobian matrix is the Jacobian pseudoinverse method. The approach can be viewed as minimizing a quadratic cost function with an equality constraint $\Delta\mathbf{p} = \mathbf{J}\Delta\mathbf{q}$, introduced through the Lagrange multiplier $\boldsymbol{\lambda}$ strategy [1]:

$$F(\Delta\mathbf{q}, \boldsymbol{\lambda}) = \frac{1}{2} \Delta\mathbf{q}^T \Delta\mathbf{q} + \boldsymbol{\lambda}\left( \Delta\mathbf{p} - \mathbf{J}\Delta\mathbf{q} \right). \quad (4.61)$$

To find the minimum of the cost function (4.61), one needs to obtain its first derivatives with respect to its two degrees of freedom (i.e., $\Delta\mathbf{q}$ and $\boldsymbol{\lambda}$):

$$\frac{\partial F}{\partial \boldsymbol{\lambda}} = \Delta\mathbf{p} - \mathbf{J}\Delta\mathbf{q} = 0 \;\Rightarrow\; \Delta\mathbf{p} = \mathbf{J}\Delta\mathbf{q} \quad (4.62)$$

$$\frac{\partial F}{\partial \Delta\mathbf{q}} = \Delta\mathbf{q} - \mathbf{J}^T\boldsymbol{\lambda} = 0 \;\Rightarrow\; \Delta\mathbf{q} = \mathbf{J}^T\boldsymbol{\lambda} \quad (4.63)$$

We now seek a common solution as a linear combination of the previous two minimum criteria. Combining the two conditions yields:

$$\Delta\mathbf{p} = \mathbf{J}\Delta\mathbf{q} = \mathbf{J}\mathbf{J}^T\boldsymbol{\lambda} \;\rightarrow\; \boldsymbol{\lambda} = (\mathbf{J}\mathbf{J}^T)^{-1}\Delta\mathbf{p}. \quad (4.64)$$

From this point, it is straightforward to show that the optimal choice of the next step for the optimization criterion (4.61) is:

$$\Delta\mathbf{q} = \mathbf{J}^T(\mathbf{J}\mathbf{J}^T)^{-1}\Delta\mathbf{p}, \quad (4.65)$$

where the expression $\mathbf{J}^T(\mathbf{J}\mathbf{J}^T)^{-1}$ is better known as the pseudoinverse of a non-square matrix.
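The expression (4.65) can be checked numerically in a few lines: for a full-row-rank, wide Jacobian (the usual case for a redundant arm), $\mathbf{J}^T(\mathbf{J}\mathbf{J}^T)^{-1}$ coincides with the Moore–Penrose pseudoinverse. The numbers below are arbitrary illustrative values.

```python
# Numeric check (illustrative values): (4.65) versus numpy's Moore-Penrose pinv
# for a wide, full-row-rank Jacobian, i.e., more joints than task DOFs.
import numpy as np

J = np.array([[ 0.2, -0.7,  0.4, 0.1],
              [ 0.5,  0.3, -0.2, 0.6],
              [-0.1,  0.4,  0.8, 0.3]])         # 3x4 position Jacobian
dp = np.array([0.02, -0.01, 0.03])              # desired small Cartesian step
dq = J.T @ np.linalg.inv(J @ J.T) @ dp          # minimum-norm joint step, (4.65)
print(np.allclose(dq, np.linalg.pinv(J) @ dp))  # True: matches the pseudoinverse
```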

Problem 4.7 To compare the Jacobian iterative method with the CCD algorithm, we once again turn to the example of the dual-arm 2-DOF aerial manipulator from Fig. 4.9. The problem parameters are identical to the ones given in Table 4.5, only this time we aim to apply the Jacobian approach.

So far we have shown how to solve the inverse kinematics problem for this aerial robot with the CCD algorithm, using only the direct kinematics solution. To solve the same problem via the Jacobian approach, we need to find the Jacobian matrix of the manipulator:


Fig. 4.17 Solving the problem of inverse kinematics for a simple dual-arm aerial manipulator with two degrees of freedom. The solution is obtained in 12 steps, starting from the beginning 0 toward the end pose E

$$\mathbf{J} = \begin{bmatrix} \frac{\partial \mathbf{p}^T_0}{\partial q_1} & \frac{\partial \mathbf{p}^T_0}{\partial q_2} & \frac{\partial \mathbf{p}^T_0}{\partial q_3} \\ \mathbf{z}^0_0 & \mathbf{z}^1_0 & \mathbf{z}^2_0 \end{bmatrix} \quad (4.66)$$

$$= \begin{bmatrix} C_1 (a_1 - a_2 S_2 - a_4 S_{23}) & -(a_2 C_2 + a_4 C_{23}) S_1 & -a_4 C_{23} S_1 \\ S_1 (a_1 - a_2 S_2 - a_4 S_{23}) & C_1 (a_2 C_2 + a_4 C_{23}) & a_4 C_1 C_{23} \\ 0 & a_2 S_2 + a_4 S_{23} & a_4 S_{23} \\ 0 & C_1 & C_1 \\ 0 & S_1 & S_1 \\ 1 & 0 & 0 \end{bmatrix},$$

while bearing in mind that the fourth joint $q_4$ is a virtual joint and therefore constant (here $a_1$, $a_2$, $a_4$ denote the link dimensions $l_1$, $l_2$, $l_3$ of Table 4.3).

The Jacobian matrix clearly shows that only the first joint $q_1$ can change the direction of rotation of both consecutive joints, $q_2$ and $q_3$. When $q_1 = 0$, both $\mathbf{z}^1_0$ and $\mathbf{z}^2_0$ face the same direction, providing motion only in the y–z plane. Both the start and the goal lie in the y–z plane, so one does not expect $q_1$ to change through the Jacobian algorithm iterations. Since the goal is only to reach the goal position and not the orientation, we only require the first 3 × 3 portion of the Jacobian matrix, which relates tool position variations to variations in the joint space.

As with the CCD algorithm, we show the solution as a series of overlaid images showing each step of the algorithm in Fig. 4.17. The difference in the solution is twofold: first, we observe the kinematic redundancy in the final result; second, we focus on the difference between the paths leading to this final result. Unlike the CCD algorithm, the Jacobian method has great difficulty getting close to the goal position.


Its first eight iterations remain locked close to the starting solution, but once the vicinity of the goal is reached (i.e., iteration 9), the algorithm reaches the end goal very quickly, in just three more iterations.

This is an important observation and a key comparison between the two algorithms. Because it calculates the direction toward the end position in each step, the CCD algorithm gets close to the goal in very few steps. The Jacobian method, on the other hand, relies on the current configuration of the robot in order to reach the designated goal. When this initial position is far away from the goal, the Jacobian matrix becomes a very poor approximation of the ideal direction needed to reach the goal, simply because the remainder of the Taylor series $R_n(\mathbf{q}, \mathbf{q}_0)$ in (4.57) becomes too big to be neglected. However, once it gets close to the solution, the remainder becomes small enough and the Jacobian approximation pushes the end-effector ever closer to the desired goal in very few steps. The main problem with the CCD algorithm at this point (i.e., close to the goal) is that its computation treats each joint separately. Therefore, often enough, the motion of one joint can counteract the action taken by the previous joints, causing more oscillations before actually settling at the goal position. The Jacobian algorithm, in contrast, calculates the necessary motion of all joints at the same time.

Problem 4.8 Consider the dual-arm aerial manipulator shown in Fig. 4.18. This aerial robot is equipped with two arms, each with four degrees of freedom. For simplicity, we omit a specific tool from the end-effector and just observe the behavior of its tip, denoted with its approach vector $\mathbf{z}_T$. For manipulation, the aerial manipulator remains locked at specific world frame coordinates. However, the UAV can freely rotate about its $\mathbf{z}_0$-axis, which provides an additional DOF (i.e., the first joint $q_1$). For each arm end-effector, this yields a total of 5 degrees of freedom. Using both iterative methods, CCD and Jacobian, solve the inverse kinematics problem for this aerial robot.

Oddly enough, the first step toward solving the inverse kinematics problem is solving the direct kinematics problem. We therefore align the coordinate systems according to the Denavit–Hartenberg rules. The coordinate systems are clearly marked, and the home position is chosen and presented in the right half of Fig. 4.18. Having the home position in mind, we derive the Denavit–Hartenberg parameters shown in Table 4.6. Again, we note the virtual joint frame $L_v$ that allows us to transform the last joint frame $L_4$ to the desired approach vector $\mathbf{z}_T$ and tool coordinate system $L_T$.

Once the coordinate systems are set and the DH parameters laid out, we proceed with deriving the homogeneous transformation matrices $\mathbf{T}^i_{i-1}$. Observing Fig. 4.18, one can see that the joint axes of $q_1$ and $q_2$, $\mathbf{z}_0$ and $\mathbf{z}_1$, respectively, are collinear. Because of this alignment, their effect on the transformation of the arm is similar. The same can be said for the joints $q_3$ and $q_4$, i.e., $\mathbf{z}_2$ and $\mathbf{z}_3$. Therefore, for clarity, we show their joined homogeneous transformation matrices together with the total manipulator configuration matrix.


Fig. 4.18 Dual-arm aerial manipulator where each arm has four degrees of freedom, showing dimensions and coordinate systems as described with the Denavit–Hartenberg parameters

Table 4.6 Denavit–Hartenberg parameters for the dual-arm aerial manipulator shown in Fig. 4.18

Link | θ | d | a | α
1 | q1 − π/2 | −d1 | a1 | 0
2 | q2 | −d2 | a2 | −π/2
3 | q3 + π/2 | 0 | a3 | 0
4 | q4 | 0 | a4 | −π/2
5 | q5 + π/2 | 0 | 0 | π/2
V–E | qV = 0 | d5 | 0 | 0

$$\mathbf{T}^2_0 = \begin{bmatrix} S_{12} & 0 & C_{12} & a_1 S_1 + a_2 S_{12} \\ -C_{12} & 0 & S_{12} & -a_1 C_1 - a_2 C_{12} \\ 0 & -1 & 0 & -d_1 - d_2 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad \mathbf{T}^4_2 = \begin{bmatrix} -S_{34} & 0 & -C_{34} & -a_3 S_3 - a_4 S_{34} \\ C_{34} & 0 & -S_{34} & a_3 C_3 + a_4 C_{34} \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

$$\mathbf{T}^T_4 = \begin{bmatrix} -S_5 & 0 & C_5 & d_5 C_5 \\ C_5 & 0 & S_5 & d_5 S_5 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (4.67)$$

Composing the consecutive transformations enables us to formulate the manipulator configuration matrix $\mathbf{T}^T_0 = \mathbf{T}^2_0 \mathbf{T}^4_2 \mathbf{T}^T_4$. For clarity, we write the rotation matrix $\mathbf{R}^T_0$ and the tool position vector $\mathbf{p}^T_0$ separately:


$$\mathbf{R}^T_0 = \begin{bmatrix} -C_{12}C_5 + S_{12}S_{34}S_5 & -C_{34}S_{12} & -C_5 S_{12}S_{34} - C_{12}S_5 \\ -C_5 S_{12} - C_{12}S_{34}S_5 & C_{12}C_{34} & C_{12}C_5 S_{34} - S_{12}S_5 \\ C_{34}S_5 & S_{34} & -C_{34}C_5 \end{bmatrix} \quad (4.68)$$

$$\mathbf{p}^T_0 = \begin{bmatrix} a_1 S_1 + S_{12}\left( a_2 - a_3 S_3 - (a_4 + d_5 C_5) S_{34} \right) - d_5 C_{12} S_5 \\ -a_1 C_1 + C_{12}\left( -a_2 + a_3 S_3 + (a_4 + d_5 C_5) S_{34} \right) - d_5 S_{12} S_5 \\ -d_1 - d_2 - a_3 C_3 - C_{34}(a_4 + d_5 C_5) \end{bmatrix} \quad (4.69)$$

In order to use the Jacobian iterative method, we next calculate the Jacobian matrix of the manipulator. Because of the sheer complexity of the expressions, we first write the general form of the Jacobian matrix, observing only the position error used in the process:

$$\mathbf{J} = \begin{bmatrix} \frac{\partial \mathbf{p}^T_0}{\partial q_1} & \frac{\partial \mathbf{p}^T_0}{\partial q_2} & \frac{\partial \mathbf{p}^T_0}{\partial q_3} & \frac{\partial \mathbf{p}^T_0}{\partial q_4} & \frac{\partial \mathbf{p}^T_0}{\partial q_5} \end{bmatrix} = \begin{bmatrix} p_{11} & p_{12} & p_{13} & p_{14} & p_{15} \\ p_{21} & p_{22} & p_{23} & p_{24} & p_{25} \\ p_{31} & p_{32} & p_{33} & p_{34} & p_{35} \end{bmatrix} \quad (4.70)$$

$$\begin{aligned} p_{12} &= \cos(q_1 + q_2)\left( a_2 - a_3 \sin q_3 - (a_4 + d_5\cos q_5)\sin(q_3 + q_4) \right) + d_5\sin(q_1 + q_2)\sin q_5 \\ p_{22} &= \sin(q_1 + q_2)\left( a_2 - a_3 \sin q_3 - (a_4 + d_5\cos q_5)\sin(q_3 + q_4) \right) - d_5\cos(q_1 + q_2)\sin q_5 \\ p_{11} &= a_1\cos q_1 + p_{12}, \quad p_{21} = a_1\sin q_1 + p_{22}, \quad p_{31} = p_{32} = 0 \\ p_{13} &= \left( -a_3\cos q_3 - \cos(q_3 + q_4)(a_4 + d_5\cos q_5) \right)\sin(q_1 + q_2) \\ p_{23} &= \cos(q_1 + q_2)\left( a_3\cos q_3 + \cos(q_3 + q_4)(a_4 + d_5\cos q_5) \right) \\ p_{33} &= a_3\sin q_3 + (a_4 + d_5\cos q_5)\sin(q_3 + q_4) \\ p_{14} &= -\cos(q_3 + q_4)(a_4 + d_5\cos q_5)\sin(q_1 + q_2) \\ p_{24} &= \cos(q_1 + q_2)\cos(q_3 + q_4)(a_4 + d_5\cos q_5) \\ p_{34} &= (a_4 + d_5\cos q_5)\sin(q_3 + q_4) \\ p_{15} &= -d_5\cos(q_1 + q_2)\cos q_5 + d_5\sin(q_1 + q_2)\sin(q_3 + q_4)\sin q_5 \\ p_{25} &= -d_5\left( \cos q_5 \sin(q_1 + q_2) + \cos(q_1 + q_2)\sin(q_3 + q_4)\sin q_5 \right) \\ p_{35} &= d_5\cos(q_3 + q_4)\sin q_5 \end{aligned} \quad (4.71)$$

Finally, we show the results of several experiments conducted on a random set of joint values. For each combination, we derived the tool configuration using the direct kinematics solution and then inverted it using one of the iterative methods previously presented. The results are shown in the following three figures (Figs. 4.19, 4.20 and 4.21) and show that the CCD algorithm provides the most consistent solution to the problem, with almost all cases solved in under 500 iterations. Depending on how far from the initial assumption the problem lies, the CCD method takes a longer time to find a solution, which is shown in Fig. 4.21 as a correlation between the distance and the number of iterations the algorithm needs to solve the problem.


Fig. 4.19 Result histogram of the number of iterations (Jacobian transpose with α = 0.1, Jacobian pseudoinverse with α = 0.1 and α = 0.5, and CCD)

Fig. 4.20 Jacobian distribution plotted against the distance between the original assumption and the actual result

Fig. 4.21 CCD distribution plotted against the distance between the original assumption and the actual result


References

1. Bertsekas DP (1999) Nonlinear programming. Athena Scientific, Belmont
2. Corke PI (2011) Robotics, vision & control: fundamental algorithms in MATLAB®. Springer, Berlin
3. Denavit J, Hartenberg RS (1955) A kinematic notation for lower-pair mechanisms based on matrices. Trans ASME E J Appl Mech 22:215–221
4. Jazar RN (2010) Theory of applied robotics: kinematics, dynamics, and control, 2nd edn. Springer, Berlin
5. Schilling RJ (1990) Fundamentals of robotics: analysis and control. Prentice Hall, Englewood Cliffs
6. Siciliano B, Khatib O (2008) Springer handbook of robotics. Springer Science & Business Media, New York
7. Siciliano B, Sciavicco L (2000) Modelling and control of robot manipulators. Advanced textbooks in control and signal processing, 2nd edn. Springer, London
8. Spong MW, Hutchinson S, Vidyasagar M (2005) Robot modeling and control. Wiley, New York
9. Wang L-CT, Chen C-C (1991) A combined optimization method for solving the inverse kinematics problems of mechanical manipulators. IEEE Trans Robot Autom 7(4):489–499


Chapter 5
Aerial Manipulator Dynamics

In order to be able to control the end-effector of a robotic manipulator, we first need to understand and mathematically model its dynamics. Two approaches are mainly used to model manipulator dynamics: Lagrange–Euler and Newton–Euler. The Newton–Euler recursive algorithm is considered to be the more comprehensive approach, as it exposes the fundamental physical phenomena experienced within each link and joint of the manipulator. For this reason, this book focuses on the Newton–Euler approach to rigid body dynamics.

5.1 Newton–Euler Dynamic Model

There are two stages in Newton–Euler dynamic modeling. First, we propagate and calculate the angular and linear speeds of each link, starting from the base link and moving toward the end-effector. Once the angular and linear speeds of the last link are known, we proceed to calculate the forces and torques acting on each manipulator link, starting from the end-effector. The first stage of computation, where one derives speeds and accelerations, is known as forward dynamics. The second stage, used to calculate forces and torques, is known as backward dynamics. This stage separation makes the Newton–Euler approach ideal for recursive implementation; hence, we often refer to Newton–Euler dynamic modeling as the Newton–Euler recursive algorithm.

5.1.1 Forward Equations in Fixed Base Coordinate System

We begin the dynamic analysis by isolating a single link $i$, shown in Fig. 5.1, of a generic robotic manipulator attached to a fixed base $L_0$. This link rotates around joint $i$, which according to the Denavit–Hartenberg parametrization lies on the $L_{i-1}$ z-axis. Link $i$ stretches all the way to the next joint, placed within the $L_i$ coordinate frame.


Fig. 5.1 Single link Newton–Euler forward dynamics analysis

In a first approximation, the center of mass $\mathbf{c}^i_0$ of link $i$ is placed somewhere between the two frames. Using the Denavit–Hartenberg parameters and forward kinematics, one can easily calculate the position vectors $\mathbf{p}^{i-1}_0$ and $\mathbf{p}^i_0$ of each frame with respect to the fixed base coordinate frame.

If we assume the angular speed $\boldsymbol{\omega}^{i-1}_0$ of frame $L_{i-1}$ to be known, then we can calculate the angular speed at the end of the link (i.e., of frame $L_i$) simply by adding the rotation produced within joint $i$. Since joint $i$ is aligned with the z-axis of frame $L_{i-1}$, according to the right-hand rule its angular speed $\dot{q}_i \cdot \mathbf{z}^{i-1}_0$ is a vector whose magnitude is determined by the rate of angular change $\dot{q}_i$ and which points along the $\mathbf{z}^{i-1}_0$ axis. Therefore, we write:

$$\boldsymbol{\omega}^i_0 = \boldsymbol{\omega}^{i-1}_0 + \dot{q}_i \cdot \mathbf{z}^{i-1}_0 \quad (5.1)$$

Next, we derive the angular acceleration $\boldsymbol{\alpha}^i_0$ of frame $L_i$ with respect to the fixed base coordinate system. This can be accomplished through the time derivative of (5.1), again taken with respect to the fixed base coordinate system (for more details please see Sect. 2.3.2):


$$\begin{aligned} \boldsymbol{\alpha}^i_0 = \frac{\partial}{\partial t}\boldsymbol{\omega}^i_0 &= \boldsymbol{\alpha}^{i-1}_0 + \frac{\partial}{\partial t}\left( \dot{q}_i \cdot \mathbf{z}^{i-1}_0 \right) \quad (5.2) \\ &= \boldsymbol{\alpha}^{i-1}_0 + \frac{\partial}{\partial t}(\dot{q}_i) \cdot \mathbf{z}^{i-1}_0 + \dot{q}_i \cdot \frac{\partial}{\partial t}\left( \mathbf{z}^{i-1}_0 \right) \\ &= \boldsymbol{\alpha}^{i-1}_0 + \ddot{q}_i \cdot \mathbf{z}^{i-1}_0 + \boldsymbol{\omega}^{i-1}_0 \times \left( \dot{q}_i \cdot \mathbf{z}^{i-1}_0 \right) \end{aligned}$$

In order to calculate the linear speed $\mathbf{v}^i_0$ of frame $L_i$, while assuming we know the linear speed of frame $L_{i-1}$, we have to take into account the distance between the two frames:

$$\Delta\mathbf{S}_i = \mathbf{p}^i_0 - \mathbf{p}^{i-1}_0 \quad (5.3)$$

Since $L_i$ rotates with respect to $L_0$ with angular velocity $\boldsymbol{\omega}^i_0$, its linear velocity $\mathbf{v}^i_0$ is simply a linear combination of $\mathbf{v}^{i-1}_0$ and the cross-radial (tangential) velocity of rotation:

$$\mathbf{v}^i_0 = \mathbf{v}^{i-1}_0 + \boldsymbol{\omega}^i_0 \times \Delta\mathbf{S}_i \quad (5.4)$$

The same approach applies to the linear acceleration, which is derived through the time derivative of the linear speed (5.4):

$$\begin{aligned} \mathbf{a}_i = \frac{\partial}{\partial t}\mathbf{v}^i_0 &= \mathbf{a}_{i-1} + \frac{\partial}{\partial t}\left( \boldsymbol{\omega}^i_0 \times \Delta\mathbf{S}_i \right) \quad (5.5) \\ &= \mathbf{a}_{i-1} + \frac{\partial}{\partial t}\left( \boldsymbol{\omega}^i_0 \right) \times \Delta\mathbf{S}_i + \boldsymbol{\omega}^i_0 \times \frac{\partial}{\partial t}(\Delta\mathbf{S}_i) \\ &= \mathbf{a}_{i-1} + \boldsymbol{\alpha}^i_0 \times \Delta\mathbf{S}_i + \boldsymbol{\omega}^i_0 \times \left( \boldsymbol{\omega}^i_0 \times \Delta\mathbf{S}_i \right) \end{aligned}$$

There are two additional components to note in the previous derivation. One is the vector product $\left( \boldsymbol{\omega}^i_0 \times \boldsymbol{\omega}^i_0 \right) \times \Delta\mathbf{S}_i$ of the collinear vectors $\boldsymbol{\omega}^i_0$, which by definition always yields a zero vector. The second is the rate of change of the length of the distance vector $\Delta\mathbf{S}_i$, which is always zero simply because a rigid link cannot change its size or shape.

On the other hand, let us consider a completely different kind of link motion, shown in Fig. 5.2. This is a scissor-type manipulator joint, which represents an ideal translational joint movement. According to the Denavit–Hartenberg parametrization, the link is stretched in a single direction, aligned with the z-axis of frame $L_{i-1}$. Unlike the previous case of rotational motion, within the translational joint the distance between the two coordinate systems $L_{i-1}$ and $L_i$ varies when the joint is in motion. This means that its time derivative $\Delta\dot{\mathbf{S}}_i = \dot{q}_i \mathbf{z}^{i-1}_0$ is different from zero and points along the z-axis of frame $L_{i-1}$.

Since the equations change with respect to the previous, rotational joint case, we write them again, starting from the angular velocity and acceleration of $L_i$. We have to consider that, since the joint is translational, it does not introduce additional rotational motion, simplifying the aforementioned equations to:


Fig. 5.2 Single link Newton–Euler forward dynamics analysis

$$\boldsymbol{\omega}^i_0 = \boldsymbol{\omega}^{i-1}_0, \qquad \boldsymbol{\alpha}^i_0 = \boldsymbol{\alpha}^{i-1}_0 \quad (5.6)$$

As far as linear motion is concerned, the equations are a bit more complicated than in the previous case due to the additional translational motion induced by the joint movement. The linear speed of frame $L_i$ now becomes:

$$\begin{aligned} \mathbf{v}^i_0 &= \mathbf{v}^{i-1}_0 + \frac{\partial}{\partial t}(\Delta\mathbf{S}_i) \quad (5.7) \\ &= \mathbf{v}^{i-1}_0 + \boldsymbol{\omega}^i_0 \times \Delta\mathbf{S}_i + \Delta\dot{\mathbf{S}}_i \\ &= \mathbf{v}^{i-1}_0 + \boldsymbol{\omega}^i_0 \times \Delta\mathbf{S}_i + \dot{q}_i \mathbf{z}^{i-1}_0 \end{aligned}$$

where the time derivative $\frac{\partial}{\partial t}(\Delta\mathbf{S}_i)$ obviously has two components: one coming from the change of orientation due to the rotation, $\boldsymbol{\omega}^i_0 \times \Delta\mathbf{S}_i$, and the other produced by the change of length of the vector $\Delta\mathbf{S}_i$, caused by the joint translation, $\Delta\dot{\mathbf{S}}_i$.


$$\begin{aligned} \mathbf{a}_i &= \mathbf{a}_{i-1} + \frac{\partial}{\partial t}\left( \boldsymbol{\omega}^i_0 \times \Delta\mathbf{S}_i \right) + \frac{\partial}{\partial t}\left( \dot{q}_i \mathbf{z}^{i-1}_0 \right) \quad (5.8) \\ &= \mathbf{a}_{i-1} + \frac{\partial}{\partial t}\left( \boldsymbol{\omega}^i_0 \right) \times \Delta\mathbf{S}_i + \boldsymbol{\omega}^i_0 \times \frac{\partial}{\partial t}(\Delta\mathbf{S}_i) + \frac{\partial}{\partial t}\left( \dot{q}_i \right)\mathbf{z}^{i-1}_0 + \dot{q}_i \frac{\partial}{\partial t}\left( \mathbf{z}^{i-1}_0 \right) \\ &= \mathbf{a}_{i-1} + \boldsymbol{\alpha}^i_0 \times \Delta\mathbf{S}_i + \boldsymbol{\omega}^i_0 \times \left( \boldsymbol{\omega}^i_0 \times \Delta\mathbf{S}_i + \Delta\dot{\mathbf{S}}_i \right) + \ddot{q}_i \mathbf{z}^{i-1}_0 + \dot{q}_i \left( \boldsymbol{\omega}^i_0 \times \mathbf{z}^{i-1}_0 \right) \\ &= \mathbf{a}_{i-1} + \boldsymbol{\alpha}^i_0 \times \Delta\mathbf{S}_i + \boldsymbol{\omega}^i_0 \times \left( \boldsymbol{\omega}^i_0 \times \Delta\mathbf{S}_i \right) + \boldsymbol{\omega}^i_0 \times \dot{q}_i \mathbf{z}^{i-1}_0 + \ddot{q}_i \mathbf{z}^{i-1}_0 + \dot{q}_i \boldsymbol{\omega}^i_0 \times \mathbf{z}^{i-1}_0 \\ &= \mathbf{a}_{i-1} + \boldsymbol{\alpha}^i_0 \times \Delta\mathbf{S}_i + \boldsymbol{\omega}^i_0 \times \left( \boldsymbol{\omega}^i_0 \times \Delta\mathbf{S}_i \right) + \ddot{q}_i \mathbf{z}^{i-1}_0 + 2\dot{q}_i \boldsymbol{\omega}^i_0 \times \mathbf{z}^{i-1}_0 \end{aligned}$$

Introducing an additional variable, $\xi_i = 1$ for a rotational joint $i$ and $\xi_i = 0$ for a translational joint $i$, we can now generalize these equations:

$$\begin{aligned} \boldsymbol{\omega}^i_0 &= \boldsymbol{\omega}^{i-1}_0 + \xi_i \left( \dot{q}_i \cdot \mathbf{z}^{i-1}_0 \right) \quad (5.9) \\ \boldsymbol{\alpha}^i_0 &= \boldsymbol{\alpha}^{i-1}_0 + \xi_i \left( \ddot{q}_i \cdot \mathbf{z}^{i-1}_0 + \boldsymbol{\omega}^{i-1}_0 \times \dot{q}_i \cdot \mathbf{z}^{i-1}_0 \right) \\ \mathbf{v}^i_0 &= \mathbf{v}^{i-1}_0 + \boldsymbol{\omega}^i_0 \times \Delta\mathbf{S}_i + (1 - \xi_i)\left( \dot{q}_i \mathbf{z}^{i-1}_0 \right) \\ \mathbf{a}_i &= \mathbf{a}_{i-1} + \boldsymbol{\alpha}^i_0 \times \Delta\mathbf{S}_i + \boldsymbol{\omega}^i_0 \times \left( \boldsymbol{\omega}^i_0 \times \Delta\mathbf{S}_i \right) + (1 - \xi_i)\left( \ddot{q}_i \mathbf{z}^{i-1}_0 + 2\dot{q}_i \boldsymbol{\omega}^i_0 \times \mathbf{z}^{i-1}_0 \right) \end{aligned}$$

Since the base to which the manipulator is attached is fixed, the initial angular and linear speeds and accelerations of the base frame $L_0$ are zero. Additionally, we consider that the manipulator is placed within Earth's gravity field; therefore, we treat the gravitational acceleration $\mathbf{g}$ as an inertial force of the system. In practice, this implies adding $-\mathbf{g}$ to the expressions for linear acceleration. Putting it all together yields the set of initial motion conditions of the base frame:

$$\boldsymbol{\omega}_0 = 0, \quad \boldsymbol{\alpha}_0 = 0, \quad \mathbf{v}_0 = 0, \quad \mathbf{a}_0 = -\mathbf{g} \quad (5.10)$$
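The forward recursion (5.9), initialized with (5.10), is naturally written as a single loop over the links. The sketch below is ours: the joint axes, frame origins, and joint types are assumed to come from a forward-kinematics pass, and the two-link placeholder chain at the bottom is purely illustrative. Replacing the base initialization with the UAV's own angular and linear velocities and accelerations gives the moving-base variant of (5.11) discussed next.

```python
# Sketch of the forward (outward) recursion (5.9)-(5.10). z[i] is the axis z_0^{i-1}
# of joint i, p[k] the origin of frame L_k, xi[i] = 1 for revolute and 0 for prismatic.
import numpy as np

def forward_recursion(z, p, xi, qd, qdd, g=np.array([0., 0., -9.81])):
    """Propagate omega, alpha, v, a from the fixed base out to the last frame."""
    omega, alpha = np.zeros(3), np.zeros(3)
    v, a = np.zeros(3), -g                        # base at rest in gravity, (5.10)
    out = []
    for i in range(len(xi)):
        dS = p[i + 1] - p[i]                      # Delta S_i = p_0^i - p_0^{i-1}, (5.3)
        if xi[i]:                                 # revolute joint
            alpha = alpha + qdd[i]*z[i] + np.cross(omega, qd[i]*z[i])
            omega = omega + qd[i]*z[i]
            a = a + np.cross(alpha, dS) + np.cross(omega, np.cross(omega, dS))
            v = v + np.cross(omega, dS)
        else:                                     # prismatic joint
            a = (a + np.cross(alpha, dS) + np.cross(omega, np.cross(omega, dS))
                 + qdd[i]*z[i] + 2*qd[i]*np.cross(omega, z[i]))
            v = v + np.cross(omega, dS) + qd[i]*z[i]
        out.append((omega.copy(), alpha.copy(), v.copy(), a.copy()))
    return out

# Placeholder chain: two revolute joints about z, frame origins spaced along x.
z  = [np.array([0., 0., 1.]), np.array([0., 0., 1.])]
p  = [np.zeros(3), np.array([0.5, 0., 0.]), np.array([1.0, 0., 0.])]
xi = [1, 1]
for i, (w, al, v, a) in enumerate(forward_recursion(z, p, xi, qd=[0.2, -0.1], qdd=[0., 0.])):
    print(f"link {i+1}: omega={w}, a={a}")
```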

5.1.2 Forward Equations in a UAV (Moving) Coordinate System

In aerial robotics, however (Fig. 5.3), we have to consider that the base frame, which is attached to the UAV body, is expected to be moving around in 3D space. This, of course, adds to the complexity of the manipulator control problem. It also produces undesired disturbances on the UAV body and its autopilot controller, and it is therefore crucial to model the motion coupling forces as precisely as possible in order to design a stable UAV controller.

Easily enough, we can modify the initial step of the Newton–Euler dynamic modeling in order to include the initial motion of the base frame. This requires us to modify (5.10) so that we can write:

Easily enough, we canmodify the initial step of Newton–Euler dynamicmodelingin order to include the initial motion of the base frame. This requires us to modify(5.10) so that we can write:

Page 140: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

128 5 Aerial Manipulator Dynamics

Flying base frame

is

iL

0L101i

iq z

1iL

000zq

iiq 0z

cF

Center of mass

iC

10ix

00x

i0x

1iC

Fig. 5.3 Showing a chainmanipulator to depict a step inNewton–Euler recursive algorithm. Impor-tant to note is the position of the center of mass Ci+1 of link i + 1, together with notable vectors�Ci (i.e., distance between Li−1 and Ci ) and �Si for link number i all observed w.r.t. body frameL0. The contact force Fc is depicted at the end tip of the end-effector tool

$$\boldsymbol{\omega}_0 = \boldsymbol{\omega}_{UAV}, \quad \boldsymbol{\alpha}_0 = \boldsymbol{\alpha}_{UAV}, \quad \mathbf{v}_0 = \mathbf{v}_{UAV}, \quad \mathbf{a}_0 = \mathbf{a}_{UAV} - \mathbf{g} \quad (5.11)$$

This way, the initial angular and linear velocities and accelerations of the UAV body, $\boldsymbol{\omega}_{UAV}$, $\mathbf{v}_{UAV}$, $\boldsymbol{\alpha}_{UAV}$, $\mathbf{a}_{UAV}$, are propagated all the way through to the end-effector.

5.1.3 Multiple Rigid Body System Mass and Moment of Inertia

Elementary physics teaches us that inertia is a measure of the resistance of any physical object felt when one tries to change its state of motion. That is to say, any physical object tends to keep moving in a straight line at constant velocity. Two integrals, the mass and the moment of inertia, mathematically describe this resistance: one related to changes of the physical object's speed, and the other to changes of its direction, respectively.

Mass measures an object's resistance to being accelerated by a force, which is represented by the relationship $\mathbf{F} = m\mathbf{a}$. Let us imagine an arbitrary object, represented by the large cube shown in Fig. 5.4, and divide it into $n$ infinitely small identical cubes. Each small cube is represented by its position $\delta\mathbf{r}$ in an arbitrarily chosen world coordinate frame and by its infinitely small mass $\delta m$, which resists linear motion.


z

yx

z

yx

W

CMLcmr ir

ir iL

jL

jr

cmv

Fig. 5.4 Single link Newton–Euler backward dynamics analysis

summing together all the masses of each infinitely small cube, one can calculate theoverall mass of the system.

m =∑

n

δmi (5.12)

Moreover, taking into account the distance weighted with mass, one can calculatethe center of mass of the entire system [4, 5]. Since all the cube elements are thesame size and mass δm, this equation simplifies to:

rcm =∑

n δm · r i∑n δm

=∑

n r in

(5.13)

The center of mass, or the centroid, is the center point around which the dynamicequations of motion are derived, and therefore a key aspect of any dynamic analysis.Every rigid body canbe representedwith its centroid, its respectivemass, andmomentof inertia. All the forces and torques should be observed as affecting the centroid ofthe rigid body. The relative distance between each element cube and the centroid isdenoted with rcm . One can easily show that the sum of all relative distance vectorsis equal to zero vector:

rcm = δm∑

n r i∑n δm

=∑

n (δri + rcm)

n(5.14)

=∑

n δrin

+ rcm ⇔∑

n

δri = 0

Page 142: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

130 5 Aerial Manipulator Dynamics

Therefore, in order to describe point mass (i.e., cube element) motion, we need toconcentrate on the motion of the entire system through its centroid rcm . Given a rigidbody, the only way a point mass can move relative to the canter of mass is if thewhole system rotates with angular speed Ω . The total speed of any point mass i isthen equal to:

∂ tri = ∂

∂ trcm + Ω × δri = vcm + Ω × δri (5.15)

Next, we observed the kinetic energy of this single particle. Kinetic energy is a scalarvalue, calculated as a mass weighted square of object’s speed:

Eki = δm (vcm + Ω × δr i )T · (vcm + Ω × δri ) (5.16)

= δm (vcm)2 + 2δmvcmT · (Ω × δri ) + (Ω × δri )2

where (x)2 in vector form denotes xT · x. Given that the kinetic energy is a scalar,when we want to calculate the kinetic energy of the entire system, we need but toadd the kinetic energy of all its particles together.

n

Eki =∑

n

δm (vcm)2 + 2δmvTcm ·(

Ω ×∑

n

δri

)

+ δm∑

n

(Ω × δri )2

(5.17)

=∑

n

δm (vcm)2 + δm∑

n

(Ω × δri )2

Since according to 5.14 the middle part of the 5.17 vanishes. To define the inertiamatrix, we introduce the skew-symmetric matrix [A] constructed from a vector athat performs the cross-product operation, such that

[A]b = a × b. (5.18)

This matrix [A] has the components of a = [ax , ay, az] as its elements, in the form

[A] =⎡

⎣0 −az ayaz 0 −ax

−ay ax 0

⎦ (5.19)

Now, it is possible to finally rearrange the equation for the total kinetic energy of abody to yield:

n

Eki =m (vcm)2 + δm∑

n

(δri × Ω)2 (5.20)

m (vcm)2 + δm∑

n

([δri ] · Ω)2

Page 143: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

5.1 Newton–Euler Dynamic Model 131

m (vcm)2 + δm∑

n

([δri ] · Ω)T · ([δri ] · Ω)

m (vcm)2 + δmΩT

(∑

n

·[δri ]T · [δri ])

· Ω

m (vcm)2 − δmΩT

(∑

n

·[δri ] · [δri ])

· Ω

The term δm∑

n[δri ][δri ] is called themoment of inertia, and it is defined around thecenter of mass. For any other coordinate system, the moment of inertia would havedifferent values. In general, the moment of inertia is a second-order tensor, that inthree-dimensional orthogonal space takes up the following general symmetric form:

D =⎡

⎣Dxx Dxy Dxz

Dxy Dyy Dyz

Dxz Dyz Dzz

⎦ =∑

i

⎣δry

2 + δrz2 −δrxδry −δrxδrz

−δrxδry δrx2 + δrz

2 −δryδrz−δrxδrz −δryδrz δrx

2 + δry2

⎦ . (5.21)

Cutting the small body parts ever closer to infinitesimally small enables us to writethe moment of inertia as a volume integral across the entire rigid body volume V :

D =∫∫∫

V

ρ(r)

⎣δry

2 + δrz2 δrxδry δrxδrz

δrxδry δrx2 + δrz

2 δryδrzδrxδrz δryδrz δrx

2 + δry2

⎦ δV . (5.22)

The moment of inertia matrix can be written in a more compact form, with anexpression for all matrix elements stated for continuous bodies:

Dj,k =∫∫∫

ρ(r)(r2δ jk − r jrk

)δV,∀ j, k ∈ (x, y, z), (5.23)

with δ jk denoting the Kronecker delta function:

δ jk ={0,∀ j �= k1,∀ j = k

. (5.24)

Careful reader should once again note that the equations have been derived forthe center of mass of the rigid body. There are many examples of moments of inertiaderived for various shapes which can be found in different physics literature readilyavailable, like [3].

5.1.4 Backward Equations

Now that, we have learned how to calculate the angular and linear motion and inertiaof each link, starting from the base and iterating all the way through the final end-effector frame, we can move to the next stage of Newton–Euler dynamic modeling

Page 144: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

132 5 Aerial Manipulator Dynamics

and calculate the forces and torques acting on each manipulator link. FromNewton’ssecond law, we know that in order to move a rigid body and change its momentum,we need to apply forces for linear motion, and torques for angular motion. Therefore,we can write that the sum of forces acting on a body link equals its change of linearmomentum, and that the sum of torques acting on a body link equals its change ofangular momentum. Under the assumption of constant mass and inertia, which isvalid for most cases in robotics, the following equations are derived:

n∑F ji = ∂

∂t(mi ci ) = mi

∂ci∂t

(5.25)

n∑τ j = ∂

∂t(Diωi ) = Di

∂ωi

∂t

It is important to note here that the rate of change of i th center of mass ci does notequal the speed vi of its frame (one should note that ai = pi ). The center of massis displaced from the coordinate system Li , so that consequently ci �= pi , but ratherone needs to take into account additional displacement �ci vector when calculatingthe linear velocity of the center of mass as one can observe in Fig. 5.5. This can beaccomplished in two ways: substituting �Si in forward dynamics calculations with�S∗

i = �Si + �ci , or at the end of calculations adjust for the time derivative of �cito yield:

ci = vi + ∂

∂ t�ci (5.26)

ci = vi + ∂

∂ t∂

∂ t�ci

Angular motion is somewhat straightforward, since the center of mass rotates thesame way as its associated frame at the end of the link. As shown in Fig. 5.5, eventhough we translate the frame Li for �ci , for any rigid link, the newly formed frameplaced in the center of link’s mass is coupled with the motion of Li as far as angulardynamics is concerned. Again it is important to stress out that the moment of inertiaused to derive Newton–Euler equations must be calculated around the center of massof each link, keeping in mind that the coordinate system is aligned with frame Li+1.

We now move to analyze the left-hand side of Eq. (5.25) (i.e., sum of forces andtorques, respectively). According to the Newton’s third law, once the joint i assertsforce Fi onto link i , the same amount of forces but in opposite direction is assertedon link i − 1. Therefore, the forces acting on link i are the ones coming from joint iand i + 1:

n∑F ji = fi − fi+1 (5.27)

The same rule applies to torques. However, forces also contribute by applying torquesto the center of mass of the link. Basic physics teaches us about lever arm effect,when we apply force at an perpendicular distance from the axis of rotation to the

Page 145: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

5.1 Newton–Euler Dynamic Model 133

z

y

x

i-1

i-1 i-1

x i

yi

zi

z

y

x Fixed base frame

i0p

ici0cJoint i

Joint i+1

iL

0L

i

1iL

10ip

Fig. 5.5 Single link Newton–Euler backward dynamics analysis

line of action of the force. The force fi+1 is applied at the distance −�ci (minushere strictly denotes the vector direction) from the center of mass, and the force fi isapplied at the distance �Si + �ci . Because the torque is equal to the vector productof the distance and the force, the overall sum of torques acting on link’s center ofmass is:

n∑τ

ji = τ i − τ i+1 − (�Si + �ci ) × fi + �ci × fi+1 (5.28)

Combining both left- and right-hand side of equations enables us to derive the equa-tions governing the forces of each joint i with respect to their successive joint i + 1all the way up to the final contact force Fc.

fi = fi+1 + mi ·(

∂vi∂t

+ ωi × �ci + �ci

)(5.29)

The same of course could be applied to torques to yield:

τ i = τ i+1 + (�Si + �Ci ) × Fi − �C × Fi+1 + Diαi0 + ωi

0 × (Diω

i0

)(5.30)

Page 146: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

134 5 Aerial Manipulator Dynamics

The full procedure of the recursive Newton–Euler algorithm can now be summa-rized with a pseudocode Algorithm 5.1. To demonstrate, we can proceed with a fewexample problems.

Data:• Robot DH parameters• Direct kinematics function DH(q)

• Initial linear and rotation speed of the UAV base Vq Ωq• Start position in joint space q0

Result: Forces acting on each joint F, τω00 = Ωq ; α0

0 = αq ;v0 = Vq ; v0 = Vq ;foreach Link i = 1 : n do

ωi0 = ωi−1

0 + q izi−10 ;

αi0 = αi−1

0 + q izi−10 + ωi−1

0 × (q izi−10 );

vi = vi−1 + ωi0 × �Si ;

vi = vi−1 + αi0 × �Si + ωi

0 × (ωi0 × �Si );

endFn+1 = FC ;foreach Link i = n : −1 : 1 do

Fi = Fi+1 + mi[ai + αi

0 × �r i + ωi0 × (

ωi0 × �ri

)];

τ i = τ i+1 + (�Si + �Ci ) × Fi − �C × Fi+1 + Diαi0 + ωi

0 × (Diω

i0

);

end.

Algorithm 5.1. Newton–Euler algorithm utilized to calculate the forces andtorques acting on the quadrotor body produced from arm movement, under theassumption that all the joints are revolute and that the contact force of the j tharm is characterized with FC .

Problem 5.1 A twodegree of freedomaerialmanipulator is shown in Fig. 5.6.Aerialrobot is supposed to catch a sphere-like object, also depicted in the figure. Objectis hurtling toward the aerial robot with a relative speed vm . Assuming the robotcaught the object, and that at the moment of catching, its joints were positioned atq = [ pi

4pi2

34d3

], and using the Newton–Euler algorithm calculate:

• linear and angular speed of the end effector;• linear and angular acceleration of the end effector;• the force impact that the sensor on the translational joint of the robot detects;

We assume that the joints of the manipulator move at a constant speed at the momentof the capture q = [

λ λ 0], and that the force of the impact of the meteor in the base

coordinate system equals F = −λ2d3m3

[1

2√2

12√20]. Symbols m3 i d3 indicate the

weight and length of translational link.Matrix transformations of coordinate systemsare shown in Eq. (5.31).

Page 147: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

5.1 Newton–Euler Dynamic Model 135

Fig. 5.6 Two degree offreedom aerial robotcatching a sphere-like object

mv

0z

0L

1L

VL

TL

0x

1z1x

2xTx

2z

Tz

1d

T 10 =

⎢⎢⎣

C1 0 −S1 0S1 0 C1 00 −1 0 d10 0 0 1

⎥⎥⎦ , T 2

0 =

⎢⎢⎣

C1C2 S1 −C1S2 0C2S1 −C1 −S1S2 0−S2 0 −C2 d10 0 0 1

⎥⎥⎦ (5.31)

T 30 =

⎢⎢⎣

C1C2 S1 −C1S2 −C1S2q3C2S1 −C1 −S1S2 −S1S2q3−S2 0 −C2 d1 − C2q30 0 0 1

⎥⎥⎦

The first frame L1 rotates around the first joint of the aerial robot, which is theyaw motion of the satellite body. Given that the angular velocity is constant, the firstjoint acceleration is, therefore, q1 = 0:

ω1 = q1z00 = λ

⎣001

⎦ , ω = q1z00 =⎡

⎣000

⎦ (5.32)

One can easily verify that the following equation holds: ω1 × �S1 = 0. In Fig. 5.6,one can note that the vectors ω1 �S1 are mutually perpendicular, therefore theirvector product must be equal to zero. Given that the first joint is rotational, thereis no additional change of link �S1 size, so one can write the linear velocity andacceleration:

v1 =⎡

⎣000

⎦ , ai =⎡

⎣000

⎦ (5.33)

For the angular velocity of the second joints, it follows:

Page 148: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

136 5 Aerial Manipulator Dynamics

Table 5.1 DH parameters

Θ d α a

q1 d1 − π2 0

q2 0 − π2 0

0 q3 0 0

ω20 = ω1

0 + q2z10 = λ

⎣001

⎦ + λ

⎣−S1C1

0

⎦ = λ

⎣−S1C1

1

⎦ (5.34)

α20 = ω1

0 × q2z10 = λ

⎣001

⎦ × λ

⎣−S1C1

0

⎦ = λ2

⎣−C1

−S10

From the DH Table5.1 and matrices in 5.31, it follows that the second link length�S2 = p20− p10 = 0. Therefore, its linear and angular velocities and accelerations arealso zerovectors. This, however, is not the case for the third andfinal linkof the roboticmanipulator. The acceleration of the end effector is, therefore, calculated keeping inmind that the last joint is a translational joint. Since there are no additional rotationsproduced within the translational joints, the angular speeds and accelerations ofthe end effector are equal to that of coordinate system L2. First, we calculate thecoordinate system displacement �S3:

�S2 = p30 − p20 =⎡

⎣−C1S2q3−S1S2q3d1 − C2q3

⎦ −⎡

⎣00d1

⎦ =⎡

⎣−C1S2q3−S1S2q3−C2q3

⎦ (5.35)

then, we proceed to calculate the end effector speed:

v30 = v20 + ω30 × �S3 + q3z20

=ω20 × �S3 = λ

⎣−S1C2

1

⎦ ×⎡

⎣−C1S2q3−S1S2q3−C2q3

=λq3

⎣−C12

−S12S2

and finally, the end effector acceleration:

Page 149: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

5.1 Newton–Euler Dynamic Model 137

v30 = α30 × �S3 + ω3

0 × (ω30 × �S3)

= λ2

⎣−C1

−S10

⎦ × �S3

⎣−C1S2−S1S2−C2

⎦ + λ2�C3

⎣−S1C1

1

⎦ ×⎛

⎣−S1C1

1

⎦ ×⎡

⎣−C1S2−S1S2−C2

= λ2 3d34

⎣2S12

−2C12

C2

⎦ .

To measure the force applied to the translational joint, first we need to modify thederivation of the end effector acceleration v3 to fit the linear acceleration of the thirdlink’s center of mass. This is achieved by accounting for the displacement vector�R3

0 = ck0 − pk0:

∂c∂t

= v30 + ω30 × �R3 + ω3

0 × (ω3

0 × �R3)

(5.36)

where the third link’s centroid displacement with respect to the base frame ck0 iscalculated knowing the displacement with respect to the end effector coordinatesystem �C3

3 = [0 0 −q3

2 1]T:

c30 = HT30�C3

3 (5.37)

Finally, we calculate the acceleration of the third link’s centroid:

∂c∂t

= λ2 3d38

⎣2S12

−2C12

C2

⎦ (5.38)

which allows us to derive the final problem and calculate the forces acting on thethird link using Newton laws:

F3 + Fc = (m3 c3

)(5.39)

F3 = −Fc + (m3 c3

)(5.40)

= m3λ2d3

⎢⎣

3√2

8 +√24

3√2

8 +√24

0

⎥⎦ = m3λ

2d3

⎢⎣

5√2

85√2

80

⎥⎦ (5.41)

To calculate the force measured by the sensor, we need to calculate F3 projection inz2, which yields:

F3z20 = −5

4m3λ

2d3 (5.42)

Problem 5.2 In the previous example, we observed an aerial robot at hover, whereonly the joints of the robot contributed to the dynamics of the system. In this next

Page 150: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

138 5 Aerial Manipulator Dynamics

Total center of mass

R

qqqq ,,a,v

0L

Added center ofmass

1T

2T

3T

4T 4r

3r 2r

1r

Initial dynamics of the total center of mass

1L

2L

moving masses

00z

00z

Fig. 5.7 Quadrotor, powered with four internal combustion engines, controlled through variablecenter of mass. Figure is showing the most notable parts of the system: moving masses, overallcenter of mass, and Added center of mass

example, we imagine a quadrotor, powered with four internal combustion enginesthat are normally to slow to directly control rolling and pitching of the vehicle. To thatend, we observe a system based on four moving masses placed in each arm shownin Fig. 5.7. For the proposed control system, we aim to devise the controller, whichwould stabilize the vehicle through the moving masses. In order to control it, first weneed to derive a mathematical model of the system applying standard Newton–Eulerprocedure, to learn more about the dynamics of such an UAV.

For better clarity, first we simplify our observation and replace the four adjustablemasses with a single Added center of mass m= ∑4

1 mi . The location of this Addedcenter of mass p∗ is determined with a weighted position of each mass in the coor-dinate system of the UAV, L1:

p∗ =∑4

1 mipi∑4

1 mi

. (5.43)

Due to the construction of the UAV, this Added center of mass can only move in asingle plane, XY . Starting from the coordinate system of the UAV (i.e., L1), we canillustrate the system as a planar robot consisting of two prismatic joints that movethe Added center of mass p∗ = p30 around. The two prismatic joints are placed withintwo orthogonal frames, L1 and L2, respectively. Added center of mass sits in theend-effector frame L3. Having all the coordinate systems laid out, we can write theDH parameters of the 2DOF planar robot in Table5.2. All of this is depicted closerin Fig. 5.8.

When the mass of the manipulator is small enough to be neglected comparedto the UAV body mass, it is safe to observe the dynamics of the system in thecenter of construction of the UAV (i.e., L0). However, moving masses need to haveenough mass to distribute the overall system’s center of mass and as such cannotbe neglected. Therefore, there exists an additional transformation that needs to be

Page 151: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

5.1 Newton–Euler Dynamic Model 139

Table 5.2 DH parameters for the Added center of mass for a planner stabilizer quadrotor in Fig. 5.7

Link Θ d α a

1 π2 q1

π2 0

2 0 q2 0 0

00x

10z

10x

20z

30z

00y

20y

30y

Added center of mass

Overall center of mass

Fig. 5.8 Depicting the four coordinate systems, starting from L0 placed in the overall center ofmass. L1 denotes the center of UAV body, and the final link L3 denotes the Added center of massshown as the end-effector of a 2DOF planar manipulator

taken into account in order to calculate the forces and torques acting on the systemas a whole. On that account, L0 is placed in the overall center of mass of the system,while L1 denotes the center of construction (i.e., UAV body). The orientation of L0

can be arbitrary set, so that we can choose the orientation that is the most appropriatefor our calculations. Furthermore, the orientation of L1 w.r.t. the orientation if L0

remains constant. From the DH parameters in Table5.2, we can write the somewhattrivial transform matrices:

T21 =

⎢⎢⎣

0 0 1 01 0 0 00 1 0 q10 0 0 1

⎥⎥⎦ ,T3

1 =

⎢⎢⎣

0 0 1 q21 0 0 00 1 0 q10 0 0 1

⎥⎥⎦ (5.44)

Page 152: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

140 5 Aerial Manipulator Dynamics

Observing the Fig. 5.8 R10 comes down to:

R10 =

⎣0 0 11 0 00 1 0

⎦ . (5.45)

On the other hand, the distance between the two frames p10 − p00 varies. One cancalculate this distance keeping in mind that p00 is actually the center of mass of thesystem:

p10 − p0|L0 =p10 − p10mq + p30m∗

mq + m∗ (5.46)

= p10 + p30mq + m∗m

∗ (5.47)

= m∗

mq + m∗R10

(p11 + p31

)(5.48)

The vector distance p11 +p31 is extracted from the transformation matrices T21 and T

32.

Once rotated to align with the L0 frame, we can write the entire T10 transformation

matrix between the center of mass of the system L0 and the center of mass of thequadrotor L1:

T10 =

⎢⎢⎣

0 0 1 δq11 0 0 δq20 1 0 00 0 0 1

⎥⎥⎦ . (5.49)

Here, we denote μ = m∗mQ+m∗ as a scale factor (i.e., weighted distance) of the Added

center of mass and the overall mass of the system mQ + m∗.Next, we derive the distances between the centroid of each link, its respective

mass, and the corresponding frame L . The first centroid refers to the center of massof the quadrotor,mq , and is placed exactly in the coordinate frame L1, making�c1 azero vector. The second link is virtual and, therefore, has nomass so thatm2 = 0. Theactual mass of the planarmanipulator is placedwithin the third and final link, denotedwith a point mass m3 = m∗. Since m∗ is placed within the coordinate system L3,again vector�c3 is a zero vector. All together, mass displacement matrix�C = 03×3

becomes a zero matrix. Observing moments of inertia and keeping in mind that link2 is virtual and the point mass m∗ has no moment of inertia on its own, we can writethe following equations:

D1 =⎡

⎣DQxx 0 00 DQyy 00 0 DQyy

⎦ , D2 = 03×3, D3 = 03×3 (5.50)

Page 153: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

5.1 Newton–Euler Dynamic Model 141

Observing the dynamics of the center of mass of the system allows us to take intoaccount the thrust forces and torques produced within the propulsion system ofthe UAV. Since these forces act on the center of mass of the system as a whole,they become the driving forces of the systems dynamics. As far as linear motion isconcerned, it is straightforward to calculate the total thrust force:

T =4∑

i=1

Tiz0 (5.51)

Next, to calculate the total torque, one needs to take into account the distance ribetween each propeller i and the total center of mass shown in Fig. 5.7. Since all thepropeller are symmetrically arranged around the quadrotor center of mass, one candenote their distance to the center of mass as:

πi = p10 + R

⎣cos( π

4 · i)sin( π

4 · i)0

⎦ , (5.52)

yielding the total torque acting on the center of mass of the system

τ =4∑

i=1

(πi × Tiz0 + Qi ) (5.53)

where Ti and Qi denote the thrust and torque of each propeller, respectively.To calculate the dynamics of motion, we go a step further and simplify our obser-

vation, by fixing the rotation of the UAV to a single DOF, roll dynamics (Fig. 5.9).Consequently, the center of mass is constrained to move only in x-axis. Next, weobserve the two parts of the multibody system as separate objects, each with theirrespective momentum. The UAV body, represented with its point mass mq , has thelinear momentum Pq = mqvq . The Added center of mass, however, has its ownlinear momentum P∗ = m∗v∗, where its linear speed v∗ is a linear combination ofquadrotor speed vq and the rate of change of its relative distance to the center of theUAV.

v∗ = vq + dp∗

dt= vq + ωq × p∗ + d |p∗|

dtp∗ (5.54)

= vq + ωq × p30 + d∣∣p30

∣∣

dtp30

In previous examples, we observed changes of linear momentum when the massand moment of inertia of the system remain constant. As a direct consequence ofmoving mass system, the moment of inertia in this example varies as the massesmove. According to Newton’s laws, the rate of change of linear momentum is equalto the force acting on the body. For the Added center of mass, the forces acting on the

Page 154: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

142 5 Aerial Manipulator Dynamics

qqqq ,,a,v

3T

qm

1p

The total center of mass

1T

1m

CMp

z

x

Fig. 5.9 Variable mass-controlled quadrotor observed in projected 2D roll dynamics

body are (if we neglect the friction inside the mechanical components) the gravityand the control force u∗:

dP∗

dt= m∗g + u∗. (5.55)

Similar equations hold for the UAV body; only here the control force of the addedmass works in the opposite direction on the UAV body. Including the gravity and thethrust of the propellors, the rate of change of body linear momentum becomes:

dPq

dt= mqg + (T1 + T2)z0 − u∗ (5.56)

It is important to note here that if there is no control force acting on the added mass,the two bodies are observed as separate objects. This implies that when there is nocontrol force, the thrust forces act only on the quadrotor mass mq . Since the addedmasses are constrained to move in x-axis through the mechanics of the components,the control force u∗ incorporates the reactionary forces from the body construction inthe y- and z-axis. Additionally the motors of the translation axis supply the necessaryforce to move the added mass in the x-axis. When evaluating time derivative of theadded masses speeds, one has to take into account the rotation of the UAV baseframe, and thus dP∗

dt becomes:

dP∗

dt

1

m∗ = dvqdt

+ ωq × vq + dωq

dt× p∗ + ωq × (

ωq × p∗) + 2dp∗

dt× ωq + d2p∗

d2t(5.57)

In order to derive the rotation dynamics, we turn to angular momentum of thesystem as a whole Ltot . The reason for viewing the system as a single body stemsfrom the fact that, unlike with linear motion, there is no relative rotation between thebody and the added masses. Therefore, without the loss of generality we can write:

Ltot = Dtotωq , (5.58)

Page 155: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

5.1 Newton–Euler Dynamic Model 143

where Dtot represents the tensor of inertia of the two body system viewed as apoint mass. Since the center of mass is displaced from the UAV body center forpCM = μp∗, taking into account the relative distance between each body yields thefollowing equation for the total moment of inertia:

Dtot =((1 − μ)2 p∗2m∗ + (

μp∗)2 mq

)⎡

⎣1 0 00 0 00 0 1

⎦ . (5.59)

Once more, the rate of change of momentum equals the sum of torques applied to theUAV body. Of course, since the moment of inertia is a function of the displacementof added mass p∗, which is a function of time, the rate of change of tensor of inertiaDtot has to be incorporated into the equation.

dLtot

dt= dDtot

dtωq + Dtot

dωq

dt(5.60)

= ((1 − μ)2 m∗ + μ2mq

)⎡

⎣p∗ 0 00 0 00 0 p∗

⎦ p∗ + ωq × Dtotωq + Dtotdωq

dt

For the right-hand side of the equation, we apply the (5.56) to yield the torqueapplied to the rate of change of angular momentum, yielding:

dLtot

dt= (T1 + T2)μp∗ (5.61)

(T1 + T2)μp∗ = ((1 − μ)2 m∗ + μ2mq

)⎡

⎣p∗ 0 00 0 00 0 p∗

⎦ + ωq × Dtotωq + Dtotdωq

dt

5.2 Lagrange–Euler Model

Although Newton–Euler algorithm provides a direct method to calculate the forcesacting on the UAV body, as observed from previous example it lacks the necessarydirect approach to calculate the variations of the moment of inertia. This derivationis vital for the analysis of the stability of the autopilot controlling the aerial robot. Tofind this formal approach, we turn to a differentmethod that relies on Lagrange–Eulerapproach to modeling rigid body dynamics, which in turn relies on the dissipationof energy within the system.

Problem 5.3 Before continuing to derive the Lagrange–Euler equations, let us tryto calculate the variations of the center of mass and moment of inertia in an aerialmanipulator shown in Fig. 5.10 consisting of an ideal X-frame quadrotor UAV body

Page 156: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

144 5 Aerial Manipulator Dynamics

0z0x

arm A arm B

10z20z30z

40z

E0z

T0z

10x

20x

30x

40x

E0x

T0x

1F

2F3F

4F

CM

Fig. 5.10 Two 4DOF arm aerial robot center of mass and moment of inertia distribution

Table 5.3 Denavit–Hartenberg parameters, for example, Fig. 5.10

Link θ d a α

1 q1A − π2 0 la − π

2

2 q2A 0 laπ2

3 q3A 0 la − π2

4 q4A + π2 0 0 π

2

T-E 0 0 la 0

with massmQ and radius ρ and two 4DOF arms of identical links with massmL . DHparameters of the robot are shown in Table5.3.

As shown in Sect. 5.1, one can easily approach this problem using the basic phys-ical principles. First, we calculate the varying center of mass CM, by summing allthe body part distances from the center of quadrotor construction (i.e., frame L0),weighted by their respective mass:

rCM(q jA, q

jB) =

QCMmQ + mL∑4

i=1

[rAi (q j

A, qjB) + rBi (q j

A, qjB)]

mQ + 8mL, (5.62)

whereQCM denotes the distance of quadrotor center of mass, while rAi and rBi denotearm ith link’s center of mass in arm A and B, respectively. Recalling the quadrotorpropulsion system torque τ q(u, q j

A, qjB) is a nonlinear function of systems centroid

rCM(q jA, q

jB). As the centroid shifts due to joint angle changes, the distance between

each propeller and the centroid oiCM(q jA, q

jB) varies:

Page 157: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

5.2 Lagrange–Euler Model 145

oiCM(q jA, q

jB) = ρ

[Cos( π

4 + (i − 1) π2 ) Sin( π

4 + (i − 1) π2 ) 0

]T + rCM(q jA, q

jB)

(5.63)where i denotes each propeller which are, due to the chosen X configuration, placedin a manner that each propeller closes a 45◦ angle between its closest coordinatesystem axis. The total torque applied to the aerial robot thus becomes:

τ q(u, q jA, q

jB) =

4∑

i=1

Q(u)i + oiCM(q jA, q

jB) × F(u)i (5.64)

=4∑

i=1

Q(u)i + rCM(q jA, q

jB) × F(u)i

if taking into account the assumption of perfect quadrotor construction, resultingtorques zero out the first term ρ

[Cos( π

4 + (i − 1) π2 ) Sin( π

4 + (i − 1) π2 ) 0

]T ×F(u)i

when all the rotor contributions are added together.The overall moment of inertia changes are a bit more tedious to devise, but still

manageable in two separate steps:

• Transforming each body part moment of inertia from its own principal axis to thebody origin coordinate system.

R j,i0 D j,iR

j,i0

T

where R j,i0 is a 3 × 3 rotation part of transformation matrix T j,i

0• After the coordinate systems are aligned, it is possible to apply the parallel axistheorem and yield a complete expression for systems moment of inertia.

DA,i + mL[(rAi · rAi

)I − rAi ⊗ rAi

]

where r is a vector distance from the body part center of mass to the center ofconstruction, I denotes 3× 3 identity matrix, and ⊗ represent the outer product ofthe two vectors. The results are shown only for arm A, while the same equationsapply to arm B as well.

Putting it all together yields the equation formoment of inertia variationswith respectto manipulator joint angle changes:

J (q jA, q

jB) =

4∑

i=1

j=A,B

RiBJ

j,iL Ri

BT + mL

[(c j,i · c j,i

)I − c j,i ⊗ c j,i

](5.65)

Instead of writing the full equations for this problem, we depict the solution as a3D relationship between two joints and overall center of mass Fig. 5.11. One can seethat depending on the pose of the robot the moment of inertia can vary substantiallyfrom its nominal value.

Page 158: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

146 5 Aerial Manipulator Dynamics

Fig. 5.11 Two 4DOF armaerial robot center of massand moment of inertiadistribution c© 2013Springer Science+BusinessMedia Dordrecht withpermission of Springer [9]

From this solution, it follows that deriving the moment of inertia of the system is atedious derivation process, impractical for implementation on a generic aerial robot.Therefore, in the following sections we will switch to a new approach, formallyknown as Lagrange–Euler dynamic modeling, which is based on the dissipation ofpotential and kinetic energy of the system (i.e., aerial robot). It is through the conceptof kinetic energy that we devise the formal approach in calculating overall momentof inertia.

5.2.1 Aerial Robot Kinetic Energy

Generic aerial robot, shown in Fig. 5.12, consists of multiple body parts. Each con-tributing to its dynamic model. Unlike in Sect. 5.1.3, we now switch to macroscopic

z

yx

z

yx

W

CML

cmv

iL

jL

iv

cmv

i

ir

jr

cmr

0Lz

yx

Fig. 5.12 Aerial robot rigid body components

Page 159: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

5.2 Lagrange–Euler Model 147

analysis of the aerial robot, where each body part l has a finite size, mass (ml) andmoment of inertia (Dl). The overall kinetic energy of the system, just like in themicroscopic situation, is easily calculated if we know the angular and linear speedof each body part’s centroid, vl and ωl , respectively.

Ek =n∑

l=1

vl T mlvl + ωlTDlωl

2(5.66)

Someparts of aerial robot, likeUAVfuselage, remain stillwith respect to the centerof mass so that their angular and linear speeds correspond to the speed of the centerof mass. Others, however, like manipulator link i , add their own degree of freedomto aerial robot dynamics. As this movable parts move, they tend to shift the centerof mass. Therefore, we choose an arbitrary point in the fuselage, called the center ofconstruction, as the basis for the L0 coordinate system. The center of constructionhas identical angular and liner speed as the center of mass, but each manipulator linkmoves with respect to its degrees of freedom. In order to calculate the kinetic energyof manipulator link i , one needs to consider transforming its moment of inertia tothe center of construction Ti

0.

5.2.2 Moment of Inertia

Normally, as we learned in previous chapters, moment of inertia is calculated w.r.t.link’s center of mass, which we denote as D∗

l . On the other hand, angular speedωi is calculated w.r.t. origin frame L0. Therefore, one needs to transform angularspeed back to frame Li , taking a rotational portion, Ri

0 of Ti0 and reverse transform

Ri0−1 = Ri

0Tthe angular speed from L0 to Li . The rotational part of kinetic energy

thus becomes:

ωlTDlωl

2=

(Ri

0Tωl

)TD∗

l

(Ri

0Tωl

)

2(5.67)

= ωlTRi

0D∗l R

i0Tωl

2,

which yields:

Dl = Ri0D

∗l R

i0T

(5.68)

In robotics, it is important to find the kinetic energy as a result of forces appliedfrom each joint. In aerial robotics, we generalize this statement, to find the kineticenergy produced from each degree of freedom, which includes joint forces, as wellas UAV propulsion system. In both situations, we need to transform the moment ofinertia from a three-dimensional space, onto an n-dimensional space corresponding

Page 160: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

148 5 Aerial Manipulator Dynamics

to n degrees of freedom the aerial robot has. That is to say that we aim to derive theinertia as seen from each degree of freedom. In previous chapters, we have learnedthat the Jacobian matrix transforms the speeds from a 3D space, which includerotation and translation speeds, onto an n-dimensional joint space. Separating theJacobian matrix Jl(q) of lth link into linear Al(q) and angular Bl(q) parts, we write:

vl(q, q) = Al(q)q (5.69)

ωl(q, q) = Bl(q)q

Rewriting Eq. (5.66) yields:

Ek =n∑

l=1

vl T mlvl + ωlTDlωl

2(5.70)

=n∑

l=1

(Al(q)q)T ml (Al(q)q) + (Bl(q)q)T Dl (Bl(q)q)

2

=n∑

l=1

qT(AT

l (q)mlAl(q))q + qT

(Bl(q)TDlBl(q)

)q

2

=n∑

l=1

qT

(Al(q)TmlAl(q)

) + (Bl(q)TDlBl(q)

)

2q

From here, we extract a new tensor of inertia for all the degrees of freedom, denotedas DM

DM(q) =n∑

l=i

Al(q)TmlAl(q) + Bl(q)TDlBl(q). (5.71)

Problem 5.4 Once again we turn to the well-known example of a helicopterequipped with a single three degree of freedom manipulator, and for clarity weshow this aerial manipulator in Fig. 5.13, this time showing the center of mass ofeach link. Our goal is to find the total manipulator tensor of inertia of this aerialrobot, assuming the arm pose is set at q = [

0 π4

−π4

]T.

Unless explicitly stated otherwise, the kinematic description of robotic mecha-nisms uses idealized parameters. The links that the robotic manipulator composesof are assumed to be perfectly rigid bodies with geometrically perfect surfaces andshapes [5]. As a result, the center of mass of each link is placed exactly in the middleof the link, as is shown in Fig. 5.13. The frame of reference for link i is the link thatis placed at the end of the link, Li :

• link 1 - L1

• link 2 - L2

• link 3 - LE .

Page 161: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

5.2 Lagrange–Euler Model 149

00x

00z

10z

20z

10x

20x

V0x

V0z

E0z

E0x

3C2C

1C 1l

2l3l

1z1x

Fig. 5.13 Once again we show the example of a helicopter endowed with a single 3DOF arm. Thefigure shows the center of masses of each link necessary to calculate the total kinetic energy of thesystem

We can now write the displacement vector �Ci for each link in their respectivecoordinate system:

�C1 = [0 − l1

2 0 1]T

(5.72)

�C2 = [− l22 0 0 1

]T

�C3 = [0 0 − l3

2 1]T

.

The same procedure is now applied to calculating the moments of inertia of eachlink. Since they are observed as ideal representation, we think of them as infinitelythin rods, where only their length li is greater than 0. According to (5.22), we cancalculate the moment of inertia for such an ideal representation of a robotic link:

D∗1 = lim

δx,δy→0

m1

12

⎣l12 + δz2 0 00 δx2 + δz2 00 0 δx2 + l1

2

⎦ = m1l12

12

⎣1 0 00 0 00 0 1

⎦ , (5.73)

D∗2 = m2l2

2

12

⎣0 0 00 1 00 0 1

⎦ ,D∗3 = m3l3

2

12

⎣1 0 00 1 00 0 0

⎦ ,

where a careful reader should notice a well-known result for the moment of inertiaof a thin rod equal to mL2

12 , [3].Since the moments of inertia D∗

i are derived in their respective coordinate systemLi , before we can sum them together we need to transform them all in the base

Page 162: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

150 5 Aerial Manipulator Dynamics

coordinate system L0. For this, we turn to the transform matrices Ti0 which provide

the necessary rotation matrices

R10 =

⎣1 0 00 0 10 −1 0

⎦ ,R20 =

⎣0 −1 00 0 1

−1 0 0

⎦ ,RE0 =

⎣1 0 00 −1 00 0 −1

⎦ , (5.74)

that can be applied through (5.68) to transform the moments of inertia to the baseframe L0:

D1 =R10D

∗1R

10T = m1l1

2

12

⎣1 0 00 1 00 0 0

⎦ (5.75)

D2 =R20D

∗2R

20T = m2l2

2

12

⎣1 0 00 1 00 0 0

D∗3 =R2

0D∗3R

30T = m3l32

12

⎣0 0 00 1 00 0 1

The similar procedure needs to be applied in order to transform the displacement vec-tors �Ci , and calculate the distance of each center of mass w.r.t. the base coordinatesystem L0:

c10 = HT10�C1 =

⎣00

− l12

⎦ , c20 = HT20�C2 =

⎣12 l2C1C π

4 +q212 l2S1C π

4 +q2

−l1 − l22 S π

4 +q2

⎦ (5.76)

c30 = HTE0 �C3 =

⎢⎢⎢⎣

12 l2C1

(l3C23 + √

2l2(C2 − S2))

12 l2S1

(l3C23 + √

2l2(C2 − S2))

12

(−2l1 − l3S23 + √

2l2(C2 + S2))

⎥⎥⎥⎦

At this point, it is possible to obtain the Jacobian matrices Ai and Bi for eachlink’s center of mass. For brevity, we write only the general form of these matricesand leave the interested reader to calculate the exact values:

A1 =[

∂c10∂q1

0 0],A2 =

[∂c20∂q1

∂c20∂q2

0],A3 =

[∂c30∂q1

∂c30∂q2

∂c30∂q3

](5.77)

B1 = [z00 0 0

],B2 = [

z00 z10 0

],B3 = [

z00 z10 z

20

](5.78)

To calculate the total tensor of inertia of the manipulator, we simply proceed with(5.71) to solve the problem:

Page 163: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

5.2 Lagrange–Euler Model 151

DM(q) =A1(q)m1A1(q) + B1(q)D1B1(q) (5.79)

A2(q)m2A2(q) + B2(q)D2B2(q)

A3(q)m3A3(q) + B3(q)D3B3(q)⎡

⎢⎣

l32m33 0 00 1

3

(l22m2 + (3l22 + l32)m3

) l3m32

3

0 l3m32

3l3m3

2

3

⎥⎦

The 3 × 3 matrix we obtained through this solution does not refer to the Cartesiancoordinate system, but rather to the joint space. This means that for instance theprincipal moment of inertia l32m3

3 obtained from element (1, 1) does not refer to themoment of inertia of x coordinate axis. Instead, this is now the moment of inertiaexerted on the first joint of the manipulator. Being that, it refers to the axis of the firstjoint, and it is the moment of inertia about axis z00. This is important to note, sincein this case, the obtained tensor of inertia of the manipulator is a third-degree squarematrix. This can only happen, when the robot has three joints. For any other numberof joint n, the obtained tensor of inertia matrix is n × n matrix.

Problem 5.5 Calculate the manipulator tensor of inertia for the two degree of free-dom aerial manipulator shown in Fig. 5.6.

We have previously solved the direct problem for the proposed aerial manipulator,and the necessary matrices are shown in (5.31) with the DH parameters assignedaccording to Table5.1. To solve the problem, we proceed to calculate the necessarycomponents for the derivation of (5.71). We note that due to the limitations of DHmethod, the second and the third coordinate systems meet at the exact same point.Therefore, the second link becomes virtual, or nonexistent, so that its mass andmoment of inertia are equal to zero. Therefore, we only need to apply the derivation(5.71) on the first and the third link:

DM(q) =A1(q)m1A1(q) + B1(q)D1B1(q) (5.80)

A3(q)m3A3(q) + B3(q)D3B3(q)

We start by calculating the first link centroid,

�C11 = [

0 d12 0 1

]T(5.81)

c10 = HT10�C1

1 =

⎢⎢⎣

C1 0 −S1 0S1 0 C1 00 −1 0 d10 0 0 1

⎥⎥⎦

⎢⎢⎣

0d1201

⎥⎥⎦ =

⎣00d12

⎦ (5.82)

Page 164: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

152 5 Aerial Manipulator Dynamics

and the third link centroid,�C3

3 = [0 0 − q3

2 1]T

(5.83)

c30 = HT30�C3

3 =

⎢⎢⎣

C1C2 S1 −C1S2 −C1S2q3C2S1 −C1 −S1S2 −S1S2q3−S2 0 −C2 d1 − C2q30 0 0 1

⎥⎥⎦

⎢⎢⎣

00

− q321

⎥⎥⎦ =

⎣−C1S2

q32−S1S2q32

d1 − C2q32

⎦ .

(5.84)We proceed with the derivation by calculating the Jacobian matrices:

A1 =[

∂c1∂q1

∂c1∂q2

∂c1∂q3

]=

⎣0 0 00 0 00 0 0

⎦ (5.85)

B1 = [z0 0 0

] =⎡

⎣0 0 00 0 01 0 0

⎦ (5.86)

A3 =⎡

⎣S1S2

q32 −C1C2

q32 −C1S2

2−C1S2q32 −S1C2

q32 − S1S2

20 S2

q32 −C2

2

⎦ (5.87)

B3 = [z0 z1 z2

] =⎡

⎣0 −S1 00 C1 01 0 0

⎦ (5.88)

Next, we can calculate the tensors of inertia for each link:

D1 = m1d12

12

⎣C1 0 −S1S1 0 C1

0 −1 0

⎣1 0 00 0 00 0 1

⎣C1 0 −S1S1 0 C1

0 −1 0

⎦ (5.89)

= m1d12

12

⎣0 0 00 0 00 0 0

D3 = m3q32

12

⎣C1C2 S1 −C1S2S1C2 −C1 −S1S2−S2 0 −C2

⎣1 0 00 1 00 0 0

⎣C1C2 S1 −C1S2S1C2 −C1 −S1S2−S2 0 −C2

T

(5.90)

= m3q32

12

⎣C1

2C22 + S1

2 −C1S1S22 −C1C2S2

2

−C1S1S22 C1

2 + C22S1

2 −C2S1S2−C1C2S2

2 −C2S1S2 S22

Page 165: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

5.2 Lagrange–Euler Model 153

Finally, we can solve the problem and derive the manipulator tensor of inertia:

DM(q) = 0 + 0 + m3

4

⎣q32S2

2 0 00 q32C2

2 12q3S2q2

0 12q3S2q2 1

⎦+ (5.91)

m3d32

12

⎣S22 0 00 1 00 0 0

Solving the equations for the initial conditions q1 = π4 and q2 = π

2 yields:

DM =⎡

⎢⎣

m3q32

3 0 0

0 m3q32

3 00 0 m3

4

⎥⎦ (5.92)

5.3 Dynamics of Aerial Manipulator in Contactwith Environment

So far, we have observed the dynamics of aerial manipulators in flight, observingonly contact forces as inputs to the recursive Newton–Euler dynamics analysis. Inthis section, we reiterate some of the previously devised equations in order to adaptthem to the proposed benchmark tasks and thus gain further insight into the dynamicsof contacts. Specifically, environmental coupling is analyzed and broken into threegeneral categories:

• Momentary coupling - where MM-UAS interacts with objects of finite mass, thatare not attached to the environment, that is they can be picked up and manipulatedin air.

• Loose coupling - fits tasks that include interacting with objects attached to theenvironment without perching onto them, like assembling, inserting, pushing, orpulling objects.

• Strong coupling - occurs when MM-UAS perches onto fixed objects in the envi-ronment, thus becoming firmly attached to the environment.

This coupling analysis is conducted using classic control theory and stability analysis.The aforementioned categories are well known, have been widely studied by theground robotics community, and can be easily benchmarked for comparison.

Penaltymethods are the oldest and simplest approach to computing contact forces.Instead of preventing penetrations, they maintain the penetration negligible relativeto the scale of the system. The contact is thus modeled as a spring which providesrestoring force when there is contact and breaks apart when bodies move apart.Penalty methods suffer from numerical instability and are mostly used in rigid bodysimulations [8, 12].

Page 166: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

154 5 Aerial Manipulator Dynamics

On the other hand, there are the constraint-based methods, which formulate thecontacts as a linear complementary problem with a constraint function inequalityg(p) ≥ 0 governing the position vector of the robot [2, 12]. However, all of thesetechniques work well for numerical simulations, but fall short when trying to portraythe actual physical phenomena behind the mathematics. Therefore, in this book wepropose a somewhat different approach.

To include points of contact, we need to rethink the way in which we observe thecenter of mass of the rigid body system. Deriving the direct and inverse kinematicsof the UAV-manipulator system allows us to calculate each body part mi ’s centerof mass ci0, with respect to a unified frame of reference, the body frame L0. Inorder to maintain the same mathematical approach used throughout the book, wepropose representing points of contact with the static objects in the environment aspoints of infinite mass M∞ and infinitesimally small tensor of inertia D∞ ≈ 0. It isworth noting that we do not consider strict points of contact, but rather pivot pointswhere the end-effector perches onto the environment. The physical notion behindthis approach is that a contact with the environment stops the body motion, but atthe same time allows it to rotate freely around that same point (i.e., perching is neverideal). Putting it all together allows us to propose a unified body centroid CM (q) asa function of joint vectors qA,qB in a single equation

CM (q) =∑

i∈q ci0mi + ∑m

k=1 CkM∞k∑

i∈q mi + ∑mk=1 M

∞k

, (5.93)

where q denotes all the degrees of freedom (i.e., separate body parts), including theUAV body base, and each manipulator, with its respective body parts (i.e., links);k = 1, . . . ,m denotes m contact points. Written in this form, the equation has thepotential to capture the physics of flight dynamics as well as a point of contactphysical phenomena. The center of mass of the system is the critical issue for thesystem control. From a macroscopic point of view, it is a point at which both forcesand torques act upon the system, so it is vital to know its position w.r.t. the arbitrarychosenbody coordinate system L0. Equation (5.93) iswritten in the L0,which impliesthat we have to consider additional transformation matrix TCM

0 as a function of jointposes q and positions of contact points Ck :

TCM0 =

[e3 CM (q)

01×3 1

], (5.94)

taking into account that both LCM and L0 are aligned, and thus the rotation transfor-mation is 3 × 3 eye matrix e3.

Since the center of mass varies according to CM (q), calculating the moment ofinertia around L0 does not solve the problem. We need to find the moment of inertiaaround variable center ofmassDCM . In order to calculate this, one applies the parallelaxis theorem on a given displacement vector x = CM (q). Before proceeding, we

Page 167: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

5.3 Dynamics of Aerial Manipulator in Contact with Environment 155

write the equations of parallel axis theorem as a matrix function Π (x) with matrixelement i, j following standard parallel theorem equation

Π (x)i, j = δi, jxT x − xi x j , (5.95)

where δi, j denotes the Kronecker delta. Using this formulation, we can write thecomplete moment of inertia equations:

DCM =∑

i∈qDi + miΠ

(�ci0

) +m∑

k=1

MkΠ (�Ck) (5.96)

where � denotes the relative distance between the center of mass CM(qA,qB ) andeach centroid c j

i of the multibody system, including contact points Ck . The term∑mk=1 D

∞k is left out since it obviously approaches zero. Careful reader should note

that all the vectors and the moments of inertia Di are transformed in the center ofmass LCM which is displaced from L0 for (5.93) but faces the same directions.

In Chap.3, we discussed how different configurations apply forces on the center ofmass of the rotorcraft. Considering a general rotorcraft construction, a common bodyframe layout considers coplanar configuration of propellers. In such a configuration,thrust force T(u)i and torque Q(u)i point in the body frame z-axis.

In reality, both torque and thrust are complex nonlinear functionswhich depend onmultiple aerodynamic conditions that we mention before. Considering a first-orderapproximation model, the resulting thrust forces and torques are a quadratic functionof rotor speed, which in turn is linearly dependent on the applied voltage u. Summingall the forces together gives the total aircraft thrust. Total torque on the other handdepends not only on the speed and thrust of each rotor, but on the position of theMM-UAS’s centroid as well. Therefore, propeller torque τ i has two components,one coming from the actual propeller drag, and the other due to the displacement ofthe propeller from the center of mass. Of course, in a mobile manipulating unmannedaerial system, the center of mass shifts as each joint of the manipulator moves. Thus,the torque becomes a nonlinear function of the manipulator joint angles:

Fq(u) =4∑

i=1

T(u)i (5.97a)

τ q(u,q) =4∑

i=1

Q(u)i + CM(q) × T(u)i (5.97b)

Using a Recursive Newton–Euler approach, one can derive force/torque equationsproduced from all the joints.

Page 168: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

156 5 Aerial Manipulator Dynamics

Now that, we have laid the mathematical foundations for the analysis, and we canproceed to verify it. To that end, we have selected three classic controls experimentcase studies:

• Pick and place (Momentary coupling);• Peg-in-hole or insertion tasks (Loose coupling);• Knob or valve turning (Strong coupling).

5.3.1 Momentary Coupling

This class of task is the first step toward aerial manipulation. It involves interactionwith loose objects in the environment like delivering packages.

Without loss of generality, we consider a cube-shaped object of specified size (i.e.,w - width, l - length, h - height) and mass mP that forms a tightly coupled kinematicand dynamic aircraft manipulation system as shown in Fig. 5.14 performing pick-and-place tasks.

In order for the MM-UAS to grab the object, it needs to enclose it using both ofits arms. Denoting pA

E and pBE as position vectors of both arm A and B end-effectors

in the L0 frame and obeying the following constraint equation ensures the objectremains grabbed: (

pBE − pA

E

)y = w. (5.98)

0

pPB=cP

B

EB

PB

Fig. 5.14 Showing simple pick-and-place operation, where every picked object can be representedas a generic cube,with its respectivemass, size, and tensor of inertia that adds to the overall dynamicsof the system

Page 169: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

5.3 Dynamics of Aerial Manipulator in Contact with Environment 157

Note that the same could be achieved if one uses a large enough gripper with just asingle degree of freedom. We do not consider all points of contact which would inturn transfer the forces of the payload onto the UAV, but rather assume that once thepayload is grabbed, it is consider to become a part of the system. The object is thuswithout loss of generality attached as an additional link on the right, B manipulatorarm. We place its coordinate system LB

P in payload centroid and orient it so that itfaces the same direction as the arm end-effector frame LB

E , making the transformationT PE a straightforward translation matrix.Once the object is grabbed, three changes in the system dynamics occur: First,

the overall mass increases; second, the centroid (5.93) shifts; and third, the momentof inertia (5.96) changes. Simply by moving the end-effector pB

E , while followingthe grab constraint (5.98), one can easily align the overall centroid with the z-axisof the center of construction L0.

minpBE∈�3

(CM(qA,qB )

) ⇒ CM(qA,qB )x = CM(qA,qB )y = 0 (5.99)

However, little can be done to minimize the effect of inertia changes (i.e., mass andthemoment of inertia). Applying the constraint on the second arm ensuring the objectremains gabbed, we have indirectly created a manipulator configuration known as aparallel manipulator chain [11].

5.3.2 Loose Coupling

Insertion tasks also fall under the category of classic controls experiments that arewell known, have been widely studied by the robotics community, and can be easilybenchmarked for comparison. Some common applications include inserting a powerplug into a socket or placing a bolt into a structure. Ground-based mobile manipula-tors have solved these problems with submillimeter accuracy [6]. While the couplingbetween the environment (i.e., insertion point) and robot does influence the vehiclebase, the base of a ground robot can typically maintain stability during the insertion.However, the loose coupling required during insertion greatly influences the dynam-ics of an aerial manipulator. Even a simple task such as inserting a plug into an outletcan become extremely difficult.

We consider insertion tasks to be loosely coupled events. There is only a briefperiod of coupling from when the peg is inserted in the hole to when the gripperfinally releases the peg. During insertion tasks, the coupling with the environmentoccurs at a single point of contact cCP for a limited amount of time as depicted inFig. 5.15.

Theorem 5.1 During the time the contact between the environment cCP exists, theCoM (5.93) shifts toward that single point of contact (i.e., cCP). Furthermore, thesystem is unconstrained to rotate around that same point, just as it would rotatearound any other CoM. However, since this point is stationary so is the overalllinear speed w.r.t. the CoM.

Page 170: Matko Orsag Christopher Korpela Paul Oh Stjepan Bogdan ...dl.booktolearn.com/ebooks2/engineering/... · The experienced authorial team comprises: Matko Orsag who is an Assistant Professor

158 5 Aerial Manipulator Dynamics

Proof We prove the first claim of the theorem by analyzing the limit of (5.93) when $M_\infty$ approaches infinity. Applying l'Hospital's rule

$$\frac{\lim f(x)}{\lim g(x)} = \frac{\lim f'(x)}{\lim g'(x)}, \qquad (5.100)$$

on the CoM (5.93) yields:

$$\lim_{M^j_\infty \to \infty} C_M(q) = \lim_{M^j_\infty \to \infty} \frac{\sum_{i \in q} c^i_0 m_i + C_{CP} M^k_\infty}{\sum_{i \in q} m_i + M_\infty} \qquad (5.101)$$

$$= \lim_{M^j_\infty \to \infty} \frac{M_\infty c_{CP}}{M_\infty} \stackrel{\text{l'H}}{=} c_{CP}. \qquad (5.102)$$

The derivation shows that the CoM shifts toward $c_{CP}$. Since the overall mass of the system is infinite, we conclude that as far as linear motion is concerned the system becomes stationary. However, relative motion between points (i.e., rotation) can still occur, and we need to prove the second claim that concerns the rotation. At this point, we postulate that the total moment of inertia:

$$D_{CM} = \underbrace{\sum_{i \in q} D_i + m_i \Pi\!\left(\Delta c^i_0\right)}_{\text{Always finite}} + \underbrace{\sum_{k=1}^{m} M_k \Pi\left(\Delta C_k\right)}_{\text{proof?}} \qquad (5.103)$$

is a sum of two finite values. As it is trivial to show that the left-hand term of the inertia equation is always finite, the task is to show that the sum $\sum_{k=1}^{m} M_k \Pi(\Delta C_k)$ is also finite; however, proving it is a two-stage process.

First, we note that $M_k \Pi(\Delta C_k)_{i,j}$ is both upper and lower bounded because it is always positive and so is $M_k$. Therefore, we are free to write:

$$0 \le M_k \Pi(\Delta C)_{i,j} \le 3 M_k \|\Delta C\|^2_\infty, \qquad (5.104)$$

where $\|\bullet\|_\infty$ denotes the infinity norm distance. While the CoM shifts toward the contact point, the relative distance between $C_{CP}$ and the CoM, $\|\Delta C\|_\infty$, obviously approaches zero. On the other hand, by definition $M_k$ approaches infinity. It is not straightforward to conclude that their product approaches zero as $M_k$ approaches infinity. To show this, we need to rearrange the previous equation to apply l'Hospital's rule for a function of two variables, $\chi = \|\Delta C\|_\infty$ and $\gamma = \frac{1}{M_k}$. Strictly mathematically speaking, we are free to define $M_k$. One valid approach is to define it through a linear function $\gamma = k\chi$, with $k$ as an arbitrary constant factor. Applying l'Hospital's rule once again one gets:

$$\lim_{\chi,\gamma \to 0} \frac{3\chi^2}{\gamma}\,\Bigg|_{\gamma = k\chi} = \lim_{\chi \to 0} \frac{3\chi^2}{k\chi} \stackrel{\text{l'H}}{=} 0. \qquad (5.105)$$


Next, we can apply one of the basic calculus theorems, the squeeze theorem, to prove that since both the upper and lower bounds tend to zero, $M_k \Pi(\Delta C)_{i,j}$ has no other alternative but to approach zero, which proves the proposition. We have mathematically shown that the point of contact becomes a pivot point for the aerial robot. Moreover, the moment of inertia of the system becomes:

$$D_{CM} = \sum_{i \in q} D_i + m_i \Pi\!\left(\Delta c^i_0\right). \qquad (5.106)$$

A closer look at the equation shows that it is the moment of inertia of the system derived through the parallel axis theorem, observing the rotation of the body around the pivot point placed in the original point of contact $C_k$.

It is important to note that $\lim_{\chi,\gamma \to 0} \frac{3\chi^2}{\gamma}$ does not approach zero for other chosen paths $\gamma(\chi)$. However, since $M_k$ is not strictly defined by the environment, we are free to choose the appropriate approach, such as the one taken in this proof.

5.3.3 Strong Coupling

Valve turning represents another classic controls problem, along with insertion tasks and pick and place [1, 7]. We humans know from experience that three points of contact result in a stable, static pose of a given rigid body. This can be proven using the aforementioned representation. Nevertheless, a more interesting proposition is to show that when a flying robot executes two points of contact $C_1$ and $C_2$ with the environment (i.e., valve), it can rotate solely around a single axis connecting the two contact points (Fig. 5.15).

We start by aligning the valve coordinate system in such a way that its y-axis matches the line that connects the contact points, and its z-axis lies on the z-axis of the quadrotor body frame $L_0$. In order to grab the valve, similar to pick-and-place, a constraint $\left(p^B_E - p^A_E\right)_y = 2R$, where $R$ represents the valve radius, has to be satisfied. During flight, the arm end-effector trajectory $p^A_E$ compensates for UAV flight controller errors and aligns the aerial robot with the valve.

Theorem 5.2 Once the arms grip onto the valve, the center of mass shifts toward the center of the valve. Furthermore, this system can rotate solely around the line $\lambda(s) = C_1 + s(C_2 - C_1),\ s \in \langle -\infty, \infty \rangle$ connecting the two points of contact (i.e., the y-axis in Fig. 5.16).

Proof As with loose coupling, the first claim is straightforward to prove:

$$\lim_{M^j_\infty \to \infty} C_M = \lim_{M^j_\infty \to \infty} \frac{M^j_\infty \sum_{k=1}^{2} C_k}{2 M^j_\infty} \stackrel{\text{l'H}}{=} \frac{C_1 + C_2}{2} = C_M, \qquad (5.107)$$


Fig. 5.15 Showing a key representative of loose coupling, a peg-in-hole manipulation is illustrated. Upon making contact with the environment, the center of mass of the system shifts toward the point of contact with the environment

Fig. 5.16 Valve turning is a classic representative of strong coupling. Aligning the system with the valve center and perching onto it causes the center of mass to shift toward the center of the valve

showing that the center of mass shifts toward the geometric center of the points of contact (i.e., the center of the valve). Through the previous theorem, we showed that if and only if $\Delta C$ tends to zero, the system can rotate around the point of contact. To apply


the same reasoning here, we need to find an axis of rotation around which $\Delta C$ tends to zero.

To examine the axis of rotation, we consider that since the rigid body generally rotates around its center of mass, any line $\lambda(s) = C_M + s\sigma,\ s \in \langle -\infty, \infty \rangle$ passing through the center of the valve $C_M$ is a good candidate for the axis of rotation. However, only for $\sigma = C_2 - C_1$ are all three points $C_2$, $C_1$, and $C_M$ collinear. Without the loss of generality, we denote this line as the x-axis of the newly created coordinate system. Furthermore, one can choose two lines passing through $C_M$ perpendicular to $\sigma$ and to each other (i.e., $\sigma^\perp_1$ and $\sigma^\perp_2$), thus completing an orthogonal coordinate system. It is now straightforward to show that the projections $\Delta C_k \sigma^\perp_1 = 0$ and $\Delta C_k \sigma^\perp_2 = 0$ for $k = 1, 2$. On the other hand, the projections $\Delta C_1 \sigma = -\Delta C_2 \sigma$. Putting it all together yields:

$$\sum_{k=1}^{2} \Pi(\Delta C_k) = \sum_{k=1}^{2} \|\Delta C_k\|_\infty \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}. \qquad (5.108)$$

From previous observations, it is clear that:

$$\lim_{M_k \to \infty} M_k \sum_{k=1}^{2} \Pi(\Delta C_k) = \begin{bmatrix} 0 & 0 & 0 \\ 0 & \infty & 0 \\ 0 & 0 & \infty \end{bmatrix}, \qquad (5.109)$$

showing that the system can rotate (i.e., has a finite moment of inertia) only around the x-axis. Moreover, since $(0, \infty, \infty)$ are clearly the eigenvalues of (5.109), the chosen lines are the principal axes of rotation, so that the line $\lambda(s) = C_1 + s(C_2 - C_1)$ is the only axis around which the system can rotate.

Once the aerial robot aligns with and perches onto the valve, it is important to adjust the arms to minimize the load on the joints, which are required to carry the full load of the UAV while gripping firmly onto the valve. This is achieved by stretching the arms out, and the positive effect this move has on the system is twofold:

• It reduces the overall moment of inertia of the system with respect to the valve turning z-axis;
• It eliminates the torque applied to the joints from the weight of the system. To show this, we consider the mass of the quadrotor $m_Q$ to be dominant, thus neglecting the mass of the manipulator, to yield a simplified equation for the torque applied to the arm motors as:

$$\left\|\tau_{q_1}\right\| = \frac{1}{2} g \left( L_1 \cos(q_1) + L_2 \sin(q_1 + q_2) \right) m_Q$$
$$\left\|\tau_{q_2}\right\| = \frac{1}{2} g L_2 \sin(q_1 + q_2)\, m_Q,$$

which for the stretched pose of the arms ($q_1 = \frac{\pi}{2}$, $q_2 = -\frac{\pi}{2}$) become equal to 0.
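To illustrate the effect of the stretched pose, the following sketch evaluates the simplified joint-torque expressions above for a few arm configurations. It is only a numerical check of the equations, not code from the book; the quadrotor mass, link lengths, and the example poses are illustrative assumptions.

```python
import numpy as np

def joint_torques(q1, q2, m_q=2.5, l1=0.2, l2=0.2, g=9.81):
    """Gravity-induced torque magnitudes on the two arm joints (equations above).

    Assumes the quadrotor mass m_q dominates and the manipulator mass is
    neglected; m_q, l1 and l2 are illustrative values, not the book's data.
    """
    tau1 = 0.5 * g * (l1 * np.cos(q1) + l2 * np.sin(q1 + q2)) * m_q
    tau2 = 0.5 * g * l2 * np.sin(q1 + q2) * m_q
    return tau1, tau2

# Stretched pose q1 = pi/2, q2 = -pi/2 unloads both joints:
print(joint_torques(np.pi / 2, -np.pi / 2))   # ~ (0.0, 0.0)
# A bent pose keeps both joints loaded:
print(joint_torques(0.0, np.pi / 2))          # nonzero torques
```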


Simply perching onto the valve is not enough; the aerial robot needs to utilize its propulsion system to turn the valve. Since the valve itself can rotate, this gives the system an additional degree of freedom in the combined MM-UAS+valve system. Therefore, the MM-UAS needs to overcome the combined moment of inertia of the quadrotor body, manipulator arms, and the valve. Without loss of generality, we align the coordinate system of the valve to the aerial robot $L_0$ (Fig. 5.16). Taking into account the additional degree of freedom and the moment of inertia of the valve, we rewrite (5.109) to yield:

$$D_{CM} = \sum_{j \in \{q, A, B, \mathrm{valve}\}} \sum_{i=1}^{n_j} D^j_i + m^j_i \Pi\!\left(\Delta C^j_i\right) + \begin{bmatrix} 0 & 0 & 0 \\ 0 & \infty & 0 \\ 0 & 0 & 0 \end{bmatrix} \qquad (5.110)$$

Additionally, unlike in free flight, contact with the valve produces additional friction forces, which also affect the system dynamics. The aerodynamic drag model, which is present during flying maneuvers, is replaced with a Coulomb nonlinear friction force model [10] once the valve is grabbed:

$$\beta(\omega_z) = \begin{cases} b^A_k\, (\omega_z)^2, & r = 1 \\ b^v_k\, \omega_z + \mathrm{sgn}(\omega_z)\left[ b^d_k + (b^s_k - b^d_k)\, e^{-\frac{|\omega_z|}{\varepsilon}} \right], & r = 2 \end{cases} \qquad (5.111)$$

• $b^A_k$ and $b^v_k$ denote the aerodynamic drag and the viscous friction coefficient, respectively.
• $b^d_k$ represents the dynamic friction coefficient.
• $b^s_k$ stands for the static friction of the valve.
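A compact way to use the switching drag/friction model (5.111) in simulation is sketched below. The coefficient values are placeholders chosen only to make the example run; they are not identified parameters from the book.

```python
import numpy as np

def yaw_drag_torque(omega_z, grabbed, b_a=1e-3, b_v=0.5, b_d=0.8, b_s=1.2, eps=0.05):
    """Opposing yaw torque beta(omega_z) from eq. (5.111).

    grabbed == False (r = 1): free flight, quadratic aerodynamic drag.
    grabbed == True  (r = 2): valve grabbed, viscous plus Coulomb friction with
    an exponential transition between the static and dynamic terms.
    All coefficients are illustrative placeholders.
    """
    if not grabbed:
        return b_a * omega_z ** 2
    return (b_v * omega_z
            + np.sign(omega_z) * (b_d + (b_s - b_d) * np.exp(-abs(omega_z) / eps)))

print(yaw_drag_torque(0.3, grabbed=False))
print(yaw_drag_torque(0.3, grabbed=True))
```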

References

1. Allen PK, Miller AT, Oh PY (1997) Using tactile and visual sensing with a robotic hand. In: 1997 IEEE International Conference on Robotics and Automation, Proceedings, vol 1, pp 676–681
2. Baraff D (1994) Fast contact force computation for nonpenetrating rigid bodies. In: Proceedings of the 21st annual conference on Computer graphics and interactive techniques, ACM, pp 23–34
3. Young HD, Freedman RA (2000) University physics with modern physics. Addison Wesley Longman, Inc
4. Kane TR, Levinson DA (1985) Dynamics, theory and applications. McGraw Hill
5. Marion JB (2013) Classical dynamics of particles and systems. Academic Press
6. Mayton B, LeGrand L, Smith JR (2010) Robot, feed thyself: plugging in to unmodified electrical outlets by sensing emitted AC electric fields. In: 2010 IEEE International Conference on Robotics and Automation (ICRA), IEEE, pp 715–722
7. Michelman P, Allen P (1994) Forming complex dextrous manipulations from task primitives. In: 1994 IEEE International Conference on Robotics and Automation, Proceedings, vol 4, pp 3383–3388
8. Mirtich B (1998) Rigid body contact: collision detection to force computation. In: IEEE International Conference on Robotics and Automation
9. Orsag M, Korpela C, Bogdan S, Oh P (2014) Hybrid adaptive control for aerial manipulation. J Intell Robot Syst 73(1–4):693–707


10. Schilling RJ (1990) Fundamentals of robotics: analysis and control. Prentice Hall
11. Siciliano B, Khatib O (2008) Springer handbook of robotics. Springer Science & Business Media
12. Yamane K, Nakamura Y (2008) A numerically robust LCP solver for simulating articulated rigid bodies in contact. In: Proceedings of Robotics: Science and Systems IV, Zurich, Switzerland, vol 19, p 20


Chapter 6
Sensors and Control

6.1 Sensors

As with all UAS, sensors play an integral part in environmental interaction, pose estimation, and safety. Microelectronics and the software controlling them have drastically changed in recent years. The open-source software community continues to rapidly expand. The nature of the open-source software and maker communities has produced software and electronic components that can be easily combined, creating new capabilities. Control algorithms, GPS waypoint navigation techniques, path planning, feature detection, and obstacle and collision avoidance methods are easily downloaded and implemented on COTS (commercial-off-the-shelf) sensors and aerial vehicles [7, 29]. The following sections on sensors provide a brief overview of the various devices used in aerial manipulation operations. Typically, all of the devices are used in a fused manner (Sect. 6.2) to provide the best estimation of pose for the vehicle or the target object of interaction.

6.1.1 Inertial Measurement Unit

At the core of a UAS is the inertial measurement unit (IMU), the primary component of an autopilot or flight control unit (FCU). The flight controller processor reads sensor data and sends appropriate speed commands to the motors. Most controllers have 32-bit processors with integrated accelerometers and gyroscopes to measure acceleration forces and rotational rates. These measurements allow the flight controller to estimate the attitude (angle) and correct as necessary. To calculate heading, a magnetometer or compass measures magnetic forces. Since a compass is susceptible to electromagnetic interference from motors, speed controllers, and wiring, the sensor module is typically mounted on a mast away from the electronics. At a minimum, the flight controller must contain the processor, accelerometer, gyroscope, and magnetometer. There are many open and closed-source autopilots available on the market.


A typical 6 DOF IMU consists of a three-axis accelerometer and a three-axis gyroscope. These are inertial sensors and measure the specific force and angular velocity of the IMU, which can then be related to the UAS body. The axes of both the gyroscope and accelerometer are aligned with the axes of the UAS platform and orthogonal to each other. Microelectromechanical systems (MEMS) sensors are the most common type of IMU due to their low price and small size, allowing for a wide variety of use cases and applications. Sources of error for IMUs include noise and systematic errors such as bias, scaling factors, and misalignment errors [46]. Calibration can alleviate many of the misalignment errors and scaling factor problems. The output forces and angular velocities of these sensors can be estimated as:

$$\hat{f}_b = f_b + \delta f_b + n_f \qquad (6.1)$$

$$\hat{\omega}_b = \omega_b + \delta \omega_b + n_\omega \qquad (6.2)$$

where $f_b$ and $\omega_b \in \mathbb{R}^3$ are the actual force and angular velocity, respectively, $n$ is Gaussian white noise, and $\delta f_b$ and $\delta \omega_b$ are biases [2]. Update rates for an IMU are on the order of 100–1000 Hz. While it is feasible to integrate and obtain position and orientation data, the large influence of error quickly renders these calculations useless. However, as will be seen in Sect. 6.2, other sources of information along with the IMU can be combined to generate valid and more accurate pose estimations.
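The error model (6.1)-(6.2) is easy to reproduce in simulation, which also makes the drift problem visible: integrating a biased, noisy rate quickly diverges from the true angle. The bias and noise levels below are assumed, illustrative values, not identified sensor parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 1000                     # 100 Hz IMU, 10 s of data (assumed)
bias, sigma = 0.02, 0.005              # rad/s gyro bias and noise (illustrative)

true_rate = 0.1 * np.ones(n)           # constant true angular rate on one axis
measured = true_rate + bias + rng.normal(0.0, sigma, n)   # eq. (6.2), one axis

angle_true = np.cumsum(true_rate) * dt
angle_imu = np.cumsum(measured) * dt   # naive integration drifts away
print(f"drift after 10 s: {abs(angle_imu[-1] - angle_true[-1]):.3f} rad")
```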

6.1.2 Cameras

For the purposes of aerial manipulation, cameras play a critical role in localization of the aircraft as well as in target acquisition of the object for interaction. Image-based visual servoing (IBVS) has been extensively used by rotorcraft to achieve hover for surveillance and localization [5, 40]. Visual odometry and self-localization techniques have advanced rapidly to achieve highly accurate pose information.

For target acquisition, several vision-based detection algorithms (i.e., AR tags [12] and pose detection of circular markers [22]) provide pose measurements of the tracked object in the camera coordinate system, based on its respective projection in the image plane. To estimate both the attitude and the position of the UAV, a Bayesian filter can be used to track the target and feed the target's position as a reference for the autopilot. With this approach, it is easy to combine the data from different sensors (i.e., IMU, camera, GPS, motion capture), all coming in at different rates.

In both UAS localization and target acquisition, it is necessary to relate the target object in the camera frame back to the body frame. One needs to know the transformation matrix $T^C_B$ to transform the position of the target $T$ in the camera frame $C$ ($p^T_C$) into the global coordinate system. The relationship to measure the position of the target $T$ w.r.t. the body $B$ is as follows:


$$p^T_B = T^C_B\, p^T_C \qquad (6.3)$$

where $p^T_C$ denotes the measured position of the target w.r.t. the camera. Finding the transform matrix:

$$T^C_B = \begin{bmatrix} Rot(\phi, \theta, \psi) & p(x, y, z) \\ 0 & 1 \end{bmatrix} \qquad (6.4)$$

depends on the distance $p(x, y, z)$ and the rotation $Rot(\phi, \theta, \psi)$ of the camera w.r.t. the body marker and is not a straightforward measurement process. First, one needs to estimate the extrinsic parameters of the camera in order to calculate the position of the object in the camera frame of reference, $p^T_C$. Next, one needs to apply a standard Kalman filter in order to fuse the information and gain better knowledge of the position of the UAV and the targets.

6.1.3 GPS

GPS is a satellite-based navigation system with global, persistent, and all-weather coverage. Part of the Global Navigation Satellite System (GNSS), it is the most widely used method for location estimation. With a minimum of four satellites within line of sight, a GPS receiver can acquire accurate positioning and timing data anywhere on the earth. Satellites transmit known-formatted messages that contain the time when the message was sent, the specific satellite that sent the message, and its current location. The GPS receiver can calculate the distance to a single satellite by subtracting the sending time from the arrival time to estimate the time of flight, multiplied by the speed of light. Then, using trilateration, the GPS receiver can determine its position using the distance to four or more satellites.

In UAS, GPS is commonly integrated with the on-board IMU and a magnetometer (compass) to determine the vehicle orientation. While GPS does provide accurate information on position, the low sampling rate prevents frequent stand-alone updates on position required for high-speed applications, as well as position derivatives. Further, GPS is susceptible to signal degradation and loss due to interference from buildings, trees, mountains, and other forms of blockage. There are many resources on GPS which are available for further study [11, 26, 35].

6.1.4 Motion Capture

Usually, a high-level position control system relies on global navigation satellite data fused with on-board visual feedback algorithms. In indoor environments, where many scenarios and applications take place, a GNSS system probably will not be available. Nevertheless, recent results in visual odometry and self-localization provide hope that sufficient precision could be achieved through an on-board sensory apparatus. However, the problem of reliable, robust, and accurate localization can be overcome


indoors using a motion capture system. These systems emulate GPS data while relying on visual data to find the target.

Motion capture systems are popular pose estimation tools for their high degree of accuracy and high frame rates. The high-level autopilot has the option to use motion capture for state estimation that provides position and velocity information. Motion capture is based on vision markers placed just above the center of mass of the vehicle. An on-board autopilot uses available motion capture data (position and speed) to navigate to a desired set point.

6.2 Sensor Fusion

Different missions require different levels of accuracy, different UAV construction, and different sensor arrays attached to the aerial robot. However, all of the setups have one thing in common: they try to make the most out of the limited sensors they have. Aerial robots are predominantly restricted by their payload capabilities, which prevents them from carrying the ideal sensory setup. Ideal measurements, like the ones provided through a motion capture system, are usually either too expensive or not available. Nevertheless, relying on sensor fusion enables engineers to overcome the limitations of each sensor and provide information from the combined sensor array with higher accuracy and refresh rates.

The purpose of this section is to provide the reader with basic examples of sensor fusion, following the most widespread Kalman filtering concept. We cover only a small portion of the material associated with Kalman filtering. We assume the reader is familiar with the concepts of observers and sensor fusion, which can be found in several classic and standard textbooks on the subject of estimation theory [13]. Our choice of material is primarily motivated by experience in working with UAVs. That being said, we will continue this brief overview based on the following example problem.

Problem 6.1 Imagine an aerial robot equipped with an inertial measurement unit (IMU) and a generic pose sensor with output data rates of 100 and 10 Hz, respectively. Unprocessed IMU outputs of angular velocities are measured with a gyroscope and linear accelerations are measured by an accelerometer. The IMU measurements are corrupted by additional noise, a constant offset, and a slowly drifting bias, but are not delayed. On the other hand, a generic 6 DOF pose sensor, measuring the position and attitude of the UAV, outputs measurements which are accurate, but may be significantly delayed compared to the aerial robot dynamics. Furthermore, pose measurements are at a low rate and corrupted by noise. To produce accurate measurements at the high frequency needed for control, it is inevitable to use an attitude estimation algorithm.

In the next two subsections, we will provide an overview of how we devised and implemented an estimator which provides position data with a higher output rate than the given pose sensor, without delay, and accurate pose information needed for the low-level control.


6.2.1 Attitude Estimation

We will begin this section by summarizing and restating the main points of the Kalman filter, namely its discrete form. Within this recapitulation, we will show how to apply this form of the Kalman filter to the proposed problem, narrowing the focus to the specific equations and their use in this application.

Prediction State

Through the integration of the gyroscopic measurements $g_x$, $g_y$, $g_z$, we aim to derive the information about the pose of the UAV, namely the Euler angles $\phi$, $\theta$, $\psi$, which are roll, pitch, and yaw, respectively. To achieve this, we use the following model in the prediction step of the Kalman filter used for attitude estimation [30]:

$$\begin{bmatrix} \phi \\ \theta \\ \psi \\ g_{xb} \\ g_{yb} \\ g_{zb} \end{bmatrix}(k) = A_{att} \cdot \begin{bmatrix} \phi \\ \theta \\ \psi \\ g_{xb} \\ g_{yb} \\ g_{zb} \end{bmatrix}(k-1) + B_{att} \cdot \begin{bmatrix} g_x \\ g_y \\ g_z \end{bmatrix}(k-1) \qquad (6.5)$$

$$A_{att} = \begin{bmatrix} I_{3\times3} & -T_{att} \\ 0_{3\times3} & I_{3\times3} \end{bmatrix} \qquad (6.6)$$

$$B_{att} = \begin{bmatrix} T_{att} \\ 0_{3\times3} \end{bmatrix} \qquad (6.7)$$

$$T_{att} = \begin{bmatrix} 1 & \sin(\phi)\tan(\theta) & \cos(\phi)\tan(\theta) \\ 0 & \cos(\phi) & -\sin(\phi) \\ 0 & \sin(\phi)/\cos(\theta) & \cos(\phi)/\cos(\theta) \end{bmatrix} \qquad (6.8)$$

where $g_m = [g_x, g_y, g_z]^T$ is the vector of gyroscope measurements, $g_{xb}$, $g_{yb}$, $g_{zb}$ are the corresponding gyroscope sensor biases, $I_{3\times3}$ is the identity matrix, $0_{3\times3}$ is the null matrix, $T_{att}$ is the transformation matrix of gyro measurements to attitude rates (roll, pitch, yaw rate), and $(k)$ denotes the discrete k-th step of the process. The state of our model is a $6 \times 1$ vector $x$ combining the attitude and the sensor biases. The Kalman filter estimates a process by using a form of feedback control: the model part of the filter estimates the process state, after which it obtains feedback (i.e., measurements) [45]. The equations presented so far fall into the first group of Kalman filter equations, known as the time update or prediction state. The time update equations project the current state prediction $x^*(k)$ and error covariance $P^*(k)$:

$$x^*(k) = A_{att} \cdot x(k-1) + B_{att}\, g_m \qquad (6.9)$$

$$P^*(k) = A_{att}\, P(k-1)\, A^T_{att} + Q \qquad (6.10)$$


In theory, one should know the exact noise covariance of the process $Q$; however, in practice, the process noise covariance is assumed to be constant and is tuned to obtain the best result.

Correction State

In the second step, the correction step of the Kalman filter uses IMU data (roll and pitch computed from accelerometer measurements and yaw from magnetometer measurements)

$$z_{pos}(k) = \underbrace{\begin{bmatrix} I_{3\times3} & 0_{3\times3} \end{bmatrix}}_{H} \cdot \begin{bmatrix} \phi \\ \theta \\ \psi \\ g_{xb} \\ g_{yb} \\ g_{zb} \end{bmatrix}(k). \qquad (6.11)$$

First, we find the optimal Kalman gain $K(k)$ w.r.t. the measurement noise covariance matrix $R$, with which we update the predicted state $x^*(k)$:

$$K(k) = P^*(k) H^T \left( H P^*(k) H^T + R \right)^{-1} \qquad (6.12)$$

$$x(k) = x^*(k) + K(k)\left( z(k) - H x^*(k) \right) \qquad (6.13)$$

only to finally update the error covariance $P(k)$:

$$P(k) = \left( I - K(k) H \right) P^*(k). \qquad (6.14)$$

In practice, one should tune the initial value of the estimate error covariance matrix $P(k)$, and a good starting point for this is to use the $n \times n$ identity matrix, where $n = 6$ in this case.
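Putting the prediction (6.5)-(6.10) and correction (6.11)-(6.14) together gives the short filter loop sketched below. The sample time, the placement of the sample time inside A_att and B_att, and the Q and R values are assumptions made to obtain a runnable example; in practice they are tuned as discussed above.

```python
import numpy as np

dt = 0.01                                 # 100 Hz gyro rate (assumed)
Q = np.eye(6) * 1e-4                      # process noise covariance (placeholder)
R = np.eye(3) * 1e-2                      # measurement noise covariance (placeholder)
H = np.hstack([np.eye(3), np.zeros((3, 3))])   # eq. (6.11)

def t_att(phi, theta):
    """Gyro-rate to Euler-rate transformation, eq. (6.8)."""
    return np.array([
        [1.0, np.sin(phi) * np.tan(theta), np.cos(phi) * np.tan(theta)],
        [0.0, np.cos(phi),                 -np.sin(phi)],
        [0.0, np.sin(phi) / np.cos(theta), np.cos(phi) / np.cos(theta)]])

def predict(x, P, gyro):
    """Time update, eqs. (6.5)-(6.10); the bias states are modeled as constant."""
    T = t_att(x[0], x[1]) * dt
    A = np.block([[np.eye(3), -T], [np.zeros((3, 3)), np.eye(3)]])
    B = np.vstack([T, np.zeros((3, 3))])
    return A @ x + B @ gyro, A @ P @ A.T + Q

def correct(x, P, z):
    """Measurement update, eqs. (6.12)-(6.14); z is attitude from accel/magnetometer."""
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)
    return x, (np.eye(6) - K @ H) @ P

x, P = np.zeros(6), np.eye(6)             # initial state and error covariance
x, P = predict(x, P, np.array([0.01, 0.0, 0.0]))
x, P = correct(x, P, np.array([0.001, 0.0, 0.0]))
```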

6.2.2 Position Estimation

A similar approach is applied to position estimation, but due to the different sample rates of the sensors, the Kalman filter process needs to be slightly adjusted. To estimate the position and linear velocity of the UAV, the following model that runs at 100 Hz is used in the prediction step of the Kalman filter [14]:

$$\begin{bmatrix} p \\ v \end{bmatrix}(k) = A_{pos} \cdot \begin{bmatrix} p \\ v \end{bmatrix}(k-1) + B_{pos} \cdot a(k-1) \qquad (6.15)$$

$$A_{pos} = \begin{bmatrix} I_{3\times3} & dt \cdot I_{3\times3} \\ 0_{3\times3} & I_{3\times3} \end{bmatrix} \qquad (6.16)$$

$$B_{pos} = \begin{bmatrix} dt^2/2 \cdot I_{3\times3} \\ dt \cdot I_{3\times3} \end{bmatrix} \qquad (6.17)$$


where $p = [x\ y\ z]$, $v = [v_x\ v_y\ v_z]$, and $a = [a_x\ a_y\ a_z]$ denote the position, velocity, and acceleration vectors in the global coordinate frame and $dt$ is the discrete time step of the Kalman filter. Acceleration is computed as [27]:

$$a = [0\ 0\ 9.81]^T + DCM \cdot acc \qquad (6.18)$$

where $DCM$ denotes the direction cosine matrix and $acc$ is the vector of accelerometer measurements. In the correction phase of the Kalman filter, position data from the generic pose sensor is used. However, it is determined that this data contains a constant delay of 100 ms, causing an inevitable delay of the estimated values, which in turn deteriorates the UAV positioning performance. Hence, we modify the filter model

Fig. 6.1 Diagram of the implemented position estimator structure. Inputs are estimated roll, pitch, yaw angles, and accelerometer measurements. x is the vector of position and linear velocity values in the global coordinate system, ag is acceleration in the global coordinate system. The filter is executed 10 steps in the past, but the 10 most recent acceleration measurements are stored in a buffer and used in forward integration ©2015 IEEE. Reprinted, with permission, from [37]


Fig. 6.2 The comparison of the UAV estimated and ground truth x position (a) and velocity (b) © 2015 IEEE. Reprinted, with permission, from [37]

[25]. As the frequency of the accelerometer is 10 times greater than the frequency of the pose sensor, in each time step we store the latest 10 accelerometer measurements. Furthermore, the prediction (6.15) is delayed for 100 ms, in order to match the time stamp of the predicted values and the time stamp of the pose sensor. The final estimated value in each time step is then computed using the corrected value as the initial value of the model and consecutively computing (6.15) for each accelerometer value stored in the memory (forward integration). This procedure is depicted in Fig. 6.1.
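The delay-compensation scheme of Fig. 6.1 can be sketched as below: the filter state is kept 10 steps in the past, and whenever a (delayed) pose measurement arrives it is corrected there and then re-propagated through the buffered accelerations. The buffer size, the rates, and the function names mirror the description above but are assumptions, not the authors' implementation.

```python
from collections import deque
import numpy as np

dt = 0.01                                   # 100 Hz prediction rate (assumed)
DELAY_STEPS = 10                            # 100 ms pose delay / dt
acc_buffer = deque(maxlen=DELAY_STEPS)      # latest accelerations in the global frame

A = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])                  # eq. (6.16)
B = np.vstack([dt ** 2 / 2 * np.eye(3), dt * np.eye(3)])       # eq. (6.17)

def predict(x, a):
    """One prediction step of eq. (6.15) for the [position; velocity] state."""
    return A @ x + B @ a

def on_pose_measurement(x_delayed, P_delayed, z_pose, correct):
    """Correct the state 10 steps in the past, then forward-integrate the buffered
    accelerations to recover the current, undelayed estimate (Fig. 6.1).
    `correct` is a standard Kalman correction step, omitted here for brevity."""
    x, P = correct(x_delayed, P_delayed, z_pose)
    for a in acc_buffer:
        x = predict(x, a)
    return x, P
```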

Next, we imagine adding a second generic pose sensor to update when the first sensor is not able to provide information. To account for this adjustment, we modified our estimation algorithm to support additional pose measurements by adding a second, on-demand correction phase in the Kalman filter when the measurements from the second sensor are available. An interested reader can find more practical information about this solution in [37].


In Fig. 6.2a and b, we present the results of the position and velocity estimation, respectively. We depict the ground truth and estimated x position and velocity and conclude that there is no significant delay in the estimated signals.

6.3 Linear Control System

More or less a standard today, control of rotorcraft UAVs is achieved through some form of PID control. This can be observed in off-the-shelf multirotor platforms for hobby, scientific, and professional purposes. One such linear implementation, dubbed PID control, will be used throughout this book and is shown in Fig. 6.5. This form of PID control was chosen because it eliminates the potential damage to the actuators that can usually be experienced when leading the control difference directly through the derivative channel [34]. This implies that the derivative channel feeds the information from the speed values, thus eliminating the necessity to obtain the time derivative of the position feedback.

As we have learned in Sect. 3.2, not every multirotor configuration has the ability to control all 6 degrees of freedom that the UAV body has. Moreover, the most common multirotors, available with 4, 6, or 8 rotors, are aligned in what we call a planar configuration. Within the discussion in Sect. 3.2, we concluded that the configuration mapping Γ (a 6 × n matrix, where n denotes the number of actuators) for any coplanar multirotor platform has insufficient rank and thus does not enable full control of the UAV body. Before we proceed, we revisit the concept of configuration mapping through the following problem.

Problem 6.2 Derive the configuration mapping Γ for the two most common quadrotor configurations, dubbed the plus + and cross × configurations.

The practical difference between the two similar configurations is only observed in the rotation part of the configuration mapping Γ. We write the two matrices directly

Fig. 6.3 Showing the standard cross configuration on the left and the plus configuration on the right-hand side of the image. The z-axis is facing out of the paper surface. Odd actuators rotate in the ccw direction


following from Fig. 6.3:

$$\Gamma_{plus} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 \\ 0 & l & 0 & -l \\ -l & 0 & l & 0 \\ -\frac{C_D}{C_T} & \frac{C_D}{C_T} & -\frac{C_D}{C_T} & \frac{C_D}{C_T} \end{bmatrix}, \quad \Gamma_{cross} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 \\ -\frac{l}{\sqrt{2}} & \frac{l}{\sqrt{2}} & \frac{l}{\sqrt{2}} & -\frac{l}{\sqrt{2}} \\ -\frac{l}{\sqrt{2}} & -\frac{l}{\sqrt{2}} & \frac{l}{\sqrt{2}} & \frac{l}{\sqrt{2}} \\ -\frac{C_D}{C_T} & \frac{C_D}{C_T} & -\frac{C_D}{C_T} & \frac{C_D}{C_T} \end{bmatrix}. \qquad (6.19)$$

To tilt the body in the x or y direction, the plus configuration uses only two rotors. At the same time, in the cross configuration all rotors are used. It might seem at first that the plus configuration outperforms the cross configuration due to its simplicity. However, dispersing the control input over all four actuators provides an advantage, since it saves the system from entering actuator limits. Nevertheless, both configurations are equally used in the literature and in practice.

Since both Γ matrices have rank four, the configurations are not fully controllable. More precisely, the control of the x- and y-axis position is missing, since the first two rows are zeros. This implies that one cannot independently control the position of the UAV body in horizontal space. To control the position of the UAV, one needs to tilt it toward the desired direction. The only available control inputs form an input vector $\nu$:

$$\nu = \begin{bmatrix} u_\phi \\ u_\theta \\ u_\psi \\ u_z \end{bmatrix} \qquad (6.20)$$

which somehow has to map to the vector of $n$ actuator forces $u$. We thus need to find an $n \times 4$ matrix $\Gamma^\star$ that maps from the dimension space of controller inputs $\nu$ to the $n$-dimensional space of actuator forces $u$. After applying the configuration mapping Γ, one should obtain the forces and torques applied to the multicopter body. In the previous case of a planar quadrotor configuration, this would yield the force applied in the z direction and 3 torques acting on the body, since Γ has insufficient rank:

$$\begin{bmatrix} f_z & \tau_x & \tau_y & \tau_z \end{bmatrix}^T = \Gamma\, \Gamma^\star \nu \qquad (6.21)$$

Together, Γ and $\Gamma^\star$ produce a one-to-one mapping of control to the forces acting on the UAV body. Since the two matrices are correlated, in essence, the operator $()^\star$ denotes a quasi-inverse mapping, which yields the control mapping dependent on its configuration. The standard cascade control structure of a planar quadrotor UAV platform is shown in Fig. 6.4.
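A minimal control-allocation sketch for the plus configuration is given below. Using the Moore-Penrose pseudo-inverse of the non-zero rows of Γ as Γ* is one possible realisation of the quasi-inverse described above, not necessarily the book's formula, and the arm length and drag-to-thrust ratio are illustrative numbers only.

```python
import numpy as np

def gamma_plus(l=0.25, cd_ct=0.016):
    """Configuration mapping of the plus quadrotor, eq. (6.19); l and cd_ct are placeholders."""
    return np.array([
        [0.0,    0.0,    0.0,    0.0],
        [0.0,    0.0,    0.0,    0.0],
        [1.0,    1.0,    1.0,    1.0],
        [0.0,    l,      0.0,   -l],
        [-l,     0.0,    l,      0.0],
        [-cd_ct, cd_ct, -cd_ct,  cd_ct]])

G = gamma_plus()
G_star = np.linalg.pinv(G[2:, :])           # rows f_z, tau_x, tau_y, tau_z -> rotor forces

# Controller output nu = [u_phi, u_theta, u_psi, u_z]; reorder to [f_z, tau_x, tau_y, tau_z]:
nu = np.array([0.0, 0.0, 0.0, 20.0])        # pure collective-thrust command
u = G_star @ np.array([nu[3], nu[0], nu[1], nu[2]])
print(u)                                     # -> each rotor carries 5 N of thrust
```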

Therefore, we are left to conclude that the first and by far the most important control loop in multirotor control is the one that stabilizes the attitude of the UAV. If we are able to successfully regulate and manage the attitude of the rotorcraft, we indirectly enable the position control of the UAV. Every off-the-shelf multirotor platform has the attitude control loop configured and tuned to enable the pilot to fly it


Fig. 6.4 Standard cascade control loop for a planar quadrotor UAV configuration. The cascade control structure is a direct consequence of the limited controllability Γ of the configuration

using a remote RC controller. Since it is the first step toward the control of multirotor UAVs, we will start the linear control system discussion with the attitude (angle) control loop.

6.3.1 Attitude Control

In Fig. 6.5, we show a linearized angle model closed with a linear PID control loop. Linearization is performed around the hover condition, where the small angle approximations $\sin(\alpha) \sim \alpha$, $\cos(\alpha) \sim 1$ are valid for the roll and pitch attitude angles $\alpha$. As has been previously shown in Chap. 2, the transformation between the body angular rotation and the global angular velocity vector is achieved through (2.18). Only for the case of infinitesimal Euler angles is it true that the time rate of change of the Euler angles equals the body-referenced rotation rate, since

Fig. 6.5 Attitude control loop, showing only a single angle for clarity


$$T = \begin{bmatrix} 1 & S_\phi t_\theta & C_\phi t_\theta \\ 0 & C_\phi & -S_\phi \\ 0 & S_\phi / C_\theta & C_\phi / C_\theta \end{bmatrix} \approx I_{3\times3}. \qquad (6.22)$$

This enables us to measure the rotation speed directly through an on-board IMU sensor. The linearized motor function encompasses the nonlinear quadratic equation of the propeller thrust (3.67), linearized around the expected voltage applied during hover. Finally, if we neglect the flapping dynamics and angular rotation drag, taking into consideration a symmetric body, the moment of inertia:

$$D = \begin{bmatrix} D_{xx} & 0 & 0 \\ 0 & D_{yy} & 0 \\ 0 & 0 & D_{zz} \end{bmatrix} \qquad (6.23)$$

becomes the dominant quadrotor parameter that affects the overall system's dynamics. To distinguish it from the moment derivative component, throughout the rest of the chapter we will use $J_{xx}$, $J_{yy}$, $J_{zz}$ to denote the $D_{xx}$, $D_{yy}$, $D_{zz}$ components of the tensor of inertia. Moreover, as a generalization, we will use $J$ to denote a moment of inertia around a general angle of rotation, be it roll, pitch, or yaw. In those situations, $J$ denotes a scalar value that the reader is used to from common physics notation.

We have learned from Sect. 3.2 that Γ can be split into two parts, $\Gamma_1$ denoting the first three rows and $\Gamma_2$ the last three rows, the translation and rotation degrees of freedom, respectively. Following this convention, we can write the complete rotation dynamic equation:

$$D \dot{\omega} = -\omega \times D\omega + \Gamma_2 u \qquad (6.24)$$
$$= -\omega \times D\omega + \Gamma_2 \Gamma_2^\star \nu$$

where $u$ denotes a nonlinear vector actuator function for each actuator (3.33), repeated here for clarity:

$$u_i = c_{T_i} \Omega_i |\Omega_i| \qquad (6.25)$$

obtained from the input vector $\nu$. We have discussed the thrust produced within various actuators, and the nonlinear vector function $u$ incorporates all the nonlinear effects discussed in Chap. 3. However, one does not directly control the rotation speed of each propeller $\Omega_i(s)$. It is a dynamic function of propeller rotation derived from the actuators, either internal combustion engines or DC motors. We have learned in Sect. 5 that the motor transfer function can be approximated with a first-order transfer function (3.66). Now, applying the control and configuration mapping $\Gamma_2 \Gamma_2^\star$ to all the actuators of the system yields a slightly modified transfer function:

$$\frac{\tau_x}{u^\star} = \frac{K_m}{1 + T_m s} \qquad (6.26)$$


that maps the control input $u^\star$ to the torque applied to the body $\tau_x$, with $K_m$ denoting a slightly different propulsion system gain (it denoted a single motor gain in Sect. 3.2) and $T_m$ denoting the same motor time constant. Putting it all together, the transfer function of the angle control loop in Fig. 6.5 can easily be derived (6.27):

$$G^\alpha_{CL} = \frac{\frac{K_i K_D K_m}{T_m J}\left(\frac{K_p}{K_i} s + 1\right)}{s^4 + \frac{1}{T_m} s^3 + \frac{K_D K_m}{T_m J} s^2 + \frac{K_D K_m K_p}{T_m J} s + \frac{K_i K_D K_m}{T_m J}} \qquad (6.27)$$

where the coefficients $K_D$, $K_p$, and $K_i$ are the respective PID gains.

In order to analyze the stability of the system, one needs to know the varying parameters in the control loop. The disturbance caused by the Euler equation component $\omega_y \omega_z (J_{yy} - J_{zz})$ affects the behavior of the control loop, but not its stability, and therefore will not be considered in this analysis.

Problem 6.3 Recalling Problem 5.3, we calculated the variations of the moment of inertia in the aerial manipulator shown in Fig. 5.10, consisting of an ideal X-frame quadrotor UAV body with mass $m_Q$, radius $\rho$, and two 4 DOF arms of identical links with mass $m_L$. Now, we aim to analyze the stability of the linear controller applied to the proposed problem and show that, given the fourth-order dynamic system equation (6.27), there exists a limit for the moment of inertia $J$ for which the system remains stable.

The mathematical formalisms that describe the variations in the moment of inertia were given in Chap. 5. The following analysis gives a formal proof of stability for a standard PID controller, following the findings in [39]. The stability conditions are applied to the fourth-order characteristic polynomial $a_4 s^4 + a_3 s^3 + a_2 s^2 + a_1 s + a_0$ from (6.27), where the fourth-order dynamic system includes both the dynamics of the aircraft and the motor dynamics [21]. For clarity, we grouped the coefficients the following way:

$$a_4 = 1, \quad a_3 = \frac{1}{T_m}, \quad a_2 = \frac{K_D K_m}{T_m J}, \quad a_1 = \frac{K_D K_m K_p}{T_m J}, \quad a_0 = \frac{K_i K_D K_m}{T_m J}. \qquad (6.28)$$

We begin to prove this proposal by applying the Routh–Hurwitz stability criteria to the characteristic polynomial, observing the coefficients in (6.28). Next, we continue to construct the Hurwitz determinants $\Delta_1, \Delta_2, \Delta_3, \Delta_4$. According to the


Routh–Hurwitz stability algorithm [28], these determinants are required to be strictly positive:

$$\Delta_1 = a_3 > 0$$
$$\Delta_2 = \begin{vmatrix} a_3 & a_1 \\ a_4 & a_2 \end{vmatrix} = a_3 a_2 - a_4 a_1 > 0$$
$$\Delta_3 = \begin{vmatrix} a_3 & a_1 & 0 \\ a_4 & a_2 & a_0 \\ 0 & a_3 & a_1 \end{vmatrix} = a_1 \Delta_2 - a_3^2 a_0 > 0$$
$$\Delta_4 = a_0 \Delta_3 > 0. \qquad (6.29)$$

Additionally, the criterion requires that $a_i > 0,\ \forall i \in \{0, 1, 2, 3, 4\}$. This first condition states that all the coefficients of the characteristic polynomial have to be positive. Observing (6.28), this condition is trivial to satisfy. Combining all the conditions in (6.29) through some elementary mathematics, we derive the two necessary conditions:

$$\wp = \frac{K_D K_m K_p}{K_i}\left(1 - T_m K_p\right) > J(q_A, q_B) \qquad (6.30a)$$

$$K_D K_m \left(1 - T_m K_p\right) > 0. \qquad (6.30b)$$

The previous inequalities clearly show that the system remains stable up to a given limit of the moment of inertia, which we denote as the stability criterion $\wp$. This limit depends on the parameters of the PID controller, which shows that it is possible to tune and stabilize the aircraft with a straightforward linear control strategy. The stability criterion can be visualized in Fig. 6.6 as a plane cutting across the moment of inertia variations. The figure plots the moment of inertia variations against the variable joint angles for the aerial manipulator example in Fig. 5.10. Everything above the plane of

Fig. 6.6 Visualization of the stability criterion ©2013 Springer Science+Business Media Dordrecht with permission of Springer [38]


Fig. 6.7 Arm motion, from fully tucked to fully deployed

Fig. 6.8 MATLAB® simulation (Take off with arms stowed, Oscillations settled; Deploying arms move): Roll and pitch angles ©2013 Springer Science+Business Media Dordrecht with permission of Springer [38]

criterion ℘ is unstable, while for all the joint poses that produce a moment of inertia below the plane ℘, the system remains stable.
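The two conditions (6.30a)-(6.30b) are straightforward to evaluate during controller design or online, as in the sketch below. The numbers in the example are illustrative and do not correspond to the identified parameters used in the book's simulations.

```python
def stability_margin(kp, ki, kd, km, tm):
    """Stability limit wp on the moment of inertia, eq. (6.30a)."""
    return kd * km * kp / ki * (1.0 - tm * kp)

def attitude_loop_stable(j, kp, ki, kd, km, tm):
    """Check both Routh-Hurwitz conditions (6.30a)-(6.30b) for a given inertia J(qA, qB)."""
    return (kd * km * (1.0 - tm * kp) > 0.0) and (j < stability_margin(kp, ki, kd, km, tm))

# Illustrative parameters: this loop tolerates inertias up to wp = 0.48 kg m^2.
print(stability_margin(kp=4.0, ki=2.0, kd=0.5, km=1.2, tm=0.15))
print(attitude_loop_stable(j=0.02, kp=4.0, ki=2.0, kd=0.5, km=1.2, tm=0.15))
```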

In order to test the proposed stability analysis, a series of simulation tests were run. Figures 6.8 and 6.9 show the results of these tests. In this particular run, the quadrotor roll controller was tuned close to the stability boundary. The aircraft takes off with arms tucked and stowed. After the vehicle settles to a hover, the arms are deployed down and fully extended as depicted in Fig. 6.7, thus increasing the moments of inertia. One has to note that the change of the arm angle corresponds to the pitch angle dynamics, thus in Fig. 6.8 small oscillations occur in the pitch angle


Fig. 6.9 MATLAB® simulation (Take off with arms stowed, Oscillations settled; Deploying arms move): Roll and pitch angles ©2013 Springer Science+Business Media Dordrecht with permission of Springer [38]

caused by the dynamic disturbance of the arm movement. On the other hand, as the roll angle control is tuned closer to the stability bounds, this movement causes the roll angle control to become unstable, which can be seen in the red portion of Fig. 6.8. The pitch controller, on the other hand, is tuned closer to the safe, stable controller region, and thus does not become unstable. It does, however, exhibit more oscillatory behavior, due to the increase of its moment of inertia. Nevertheless, the controller is fully capable of stabilizing it, even after the dynamic disturbances of the angle controller.

The bottom two graphs from Fig. 6.9 show the rotor forces and torques from this experiment. It is interesting to notice how the controller compensates for the additional torque load at the beginning of the experiment. This load is caused by the arms' center of mass being positioned away from the CoM. This causes a static torque on the quadrotor body, which the integral component of the PID controller compensates. Once the arms are extended downward, this torque disappears as the arms' center of mass moves closer to the CoM.

The transfer function in (6.27) shows that there exist four parameters in the attitude control loop of a mobile manipulating UAV that change during flight and manipulation:

• Km - The propulsion system gain changes drastically through time, especially in load manipulation missions where a different payload is transported around. This raises the linearization condition and thus varies the overall system's gain.
• Tm - The propulsion system dynamics changes due to various effects as the mission progresses through time. One dominant reason is battery voltage, but temperature variations can also play an important role.


• β - The aerodynamic conditions constantly change during the flight, depending on the speed, maneuvers, and surrounding conditions.
• J - As previously discussed, the moment of inertia changes depending on the load and the manipulator arm pose.

Aerial manipulation missions mostly require steady flight conditions, for which the changes in the aerodynamic conditions, as well as the aerodynamic coefficient β, can be neglected. On the other hand, if the battery power supply is kept constant throughout the mission, the variations in Tm can be minimized. The two remaining parameters, J and Km, diverge the most during aerial manipulation. The variations in the moment of inertia have been previously discussed. The propulsion system gain changes are mostly caused by the variations in the load mass, which changes the piecewise linearization of the quadratic relationship between the propeller thrust and the applied voltage. Apart from that, the variations in temperature and the battery depletion also change the linearized motor gain throughout the mission.

6.3.2 Position Control

Once we have control of the attitude dynamics, through the attitude control loop, we can use it to steer the UAV body in the desired direction. This is in accordance with the limited rank of the Γ matrix. The only output one can achieve through the position control is the sum of total thrust, which is facing in the body z-axis direction $z^B_W$:

$$\Gamma_1 f(u) = \sum f(u_i)\, z^B_W = R^B_W \sum f(u_i)\, z \qquad (6.31)$$

For linear control, we heavily rely on the hovering assumption, that is to say that the height of the vehicle is steady. When the UAV body is at hover, the net total thrust produced from the array of propellers is equal to the UAV weight $m_{UAS} \cdot g$, where $m_{UAS}$ denotes the total mass of the system. Position control is designed to keep the UAV over a desired point. This includes both the $(x, y)$ horizontal positions and the height of the UAV ($z$). To achieve this, one has to rotate the UAV, so to control the position, one has to control the rolling and pitching angles of the UAV body. Therefore, the output of the position control loop is simply a desired orientation $R^B_W$, so that the position dynamics function becomes:

$$m_{UAS}\, \ddot{x} = m_{UAS}\, g + R^B_W \Gamma_1 f(u) \qquad (6.32)$$
$$= m_{UAS}\, g + R^B_W \sum f(u_i)\, z$$

For simplicity, we will avoid using the subscript $UAS$ to denote the mass of the entire multicopter vehicle. Using the small angle approximation valid for the hovering assumption, the z-axis of the multicopter body in the world frame $z^B_W$ derives to:


Fig. 6.10 Height control loop of a standard UAV. Depicting the system dynamics with linearized aerodynamic effects and a standard PID linear controller designed to keep the UAV stable and airborne

$$z^B_W = \begin{bmatrix} C_\phi S_\theta C_\psi + S_\phi S_\psi \\ C_\phi S_\theta S_\psi - S_\phi C_\psi \\ C_\phi C_\theta \end{bmatrix} \sim \begin{bmatrix} \theta \\ \phi \\ 1 \end{bmatrix} \qquad (6.33)$$

Unlike the horizontal position, the vertical position of the UAV is directly controlled, simply by outputting the desired total thrust of the UAV $\sum f(u_i)$. Since the two control loops are so distinct, we will analyze them separately.

Height Control

One cannot emphasize enough the importance of a properly tuned height control, not only because it keeps the UAV airborne, but because it plays a crucial role in the aforementioned hovering assumption that affects both the attitude and horizontal position controllers. Figure 6.10 sketches the components of height control on a quadrotor-based UAV. The total thrust $T$ is simply a sum of all the thrust forces produced from within each propeller $f(\omega_i)$. Once again, by far the most common solution for the height control is a standard PID controller, shown in Fig. 6.10. The dynamics of the system is depicted with respect to the analysis from Chap. 3, including the aerodynamic effects which act as a negative speed feedback in the dynamics. The linearized aerodynamic equation, for a given speed of the UAV, produces a constant feedback gain $\beta(\dot{z})$. This is of course a simplification of its true dynamics, but nevertheless adequately describes the UAV dynamics.


Observing from Fig. 6.10, we can write the transfer function of the system:

$$G_z = \frac{K_d K_m (K_i + K_p s)}{K_d K_i K_m + K_d K_m K_p s + \beta s^2 + K_d K_m s^2 + m s^3 + \beta T_m s^3 + m T_m s^4} \qquad (6.34)$$

which is obviously qualitatively similar to the dynamics of the angle control in (6.27) and thus allows us to reiterate the stability analysis. Once again, applying the Routh–Hurwitz criteria, and taking into account the reasonable assumption that $\beta$ is small (i.e., $\beta^2 \sim 0$), yields:

$$\frac{K_d K_m K_p}{K_i}\left(1 + \frac{\beta}{m} T_m - T_m K_p\right) > \left(1 + \frac{\beta}{m} T_m\right) m$$
$$K_d K_m \left(\frac{\beta}{K_d K_m} + \frac{\beta T_m}{m} + 1 - K_p T_m\right) > 0, \qquad (6.35)$$

where for $\lim_{\beta \to 0}$ (6.35) clearly reaches (6.30b). Although rather small, the aerodynamic drag $\beta$ affects the stability of the height control. To test how sensitive the control loop is to variations in $\beta$, we turn to standard sensitivity analysis [10, 41]. Focusing on the inner speed control loop

$$G^{cl}_z = \frac{K_d K_m}{\beta + K_d K_m + m s + \beta T_m s + m T_m s^2}, \qquad (6.36)$$

we calculate the sensitivity:

$$S(s) = \frac{\partial G^{cl}_z}{\partial \beta}\, \frac{\beta}{G^{cl}_z} = -\frac{\beta}{\beta + K_d K_m + m s}. \qquad (6.37)$$

From the sensitivity Eq. (6.37), we deduce that for a large enough derivative gain in the PID control loop, the sensitivity of the height control loop with respect to variations in the aerodynamic drag is negligible. Since the damping term in the denominator of (6.37) rises with frequency (i.e., $s \to \omega$), it follows that the dynamical behavior of the system is even more resistant to the aerodynamic variations at higher frequencies, even when the derivative gain is not properly set.
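The claim is easy to verify numerically by evaluating |S(jω)| from (6.37) for different derivative gains and frequencies; the parameter values below are illustrative only.

```python
import numpy as np

def drag_sensitivity(omega, beta, kd, km, m):
    """Magnitude of the sensitivity function S(jw) from eq. (6.37)."""
    return abs(-beta / (beta + kd * km + m * 1j * omega))

# Sensitivity falls with a larger derivative gain and with increasing frequency:
for kd in (0.5, 2.0):
    values = [round(drag_sensitivity(w, beta=0.1, kd=kd, km=1.2, m=1.5), 4)
              for w in (0.1, 1.0, 10.0)]
    print(kd, values)
```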

Horizontal Position Control

The next and final step of UAV control is to control its lateral position. The horizontal position control loop relies on both the attitude control $G^\alpha_{cl}(s)$ to control its direction and the height control to stabilize its total thrust $\sum f(u_i)$. Therefore, the resulting closed-loop dynamics is a rather complex function of different system dynamics, shown in Fig. 6.11.

In practice, one can also find cascade control loops, with an inner loop controlling the horizontal speed and an outer loop controlling the actual position. It is common to enable feed-forward signals of both speed and acceleration, like the ones shown in Fig. 6.11, to enable optimal trajectory following for smoother flights. Tuning the


Fig. 6.11 The horizontal position control loop relies on both the attitude control $G^\alpha_{cl}(s)$ to control its direction and the height control to stabilize its total thrust $\sum f(u_i)$

position control loop, even under the hovering simplification, requires the use of some optimization and tuning through a time-weighted performance index. More on this can be found in the standard control literature [28, 42].

6.4 Robust and Adaptive Control Applications

In practice, most off-the-shelf quadrotors one finds on the market today are controlled through some form of cascaded linear PID control system. As the field evolves, researchers worldwide strive to find optimal nonlinear control strategies that can surpass classical linear control in a given mission scenario. The same goes for aerial robots, which introduce a whole new spectrum of disturbances produced from within the dynamics of manipulator motion and contact with the environment.

In this section, we aim to show a set of selected nonlinear control strategies that have been shown to be effective in aerial manipulation missions. We start by presenting the straightforward gain scheduling approach, followed by a form of model reference control, which has proven its effectiveness on different systems with variable parameters. Next, we show a standard version of backstepping control, which is very common in UAV applications. Finally, we discuss a specific form of the Model Reference Adaptive Control technique that helps effectively stabilize an aerial robot in windy conditions.

6.4.1 Gain Scheduling

As shown in the previous paragraph, it is possible to predict the changes in the dynamics of the proposed aerial manipulator when it is not in contact with the environment. In these situations, changes in the overall dynamics come from arm movement, which is a known nonlinearity. Therefore, the controller parameters could be modified simply by monitoring the position of the manipulator joints and relating the controller parameters to these auxiliary variables (i.e., manipulator joints). Gain scheduling


has been proposed and verified on different aerial platforms [32, 36] and can be regarded as a common adaptive control mechanism in flight control systems [3]. Because there is no explicit method for gain scheduling controller synthesis [1], this section discusses two possible ways to implement such a controller on the aerial robot.

Problem 6.4 Adapt the linear controller from Problem 6.3 by choosing the best parameter to adjust using gain scheduling adaptive control.

The first approach is straightforward: adapting the $K_D$ gain with respect to the joint angle changes $q_A$ and $q_B$, respectively, while maintaining the speed control loop quality parameters:

$$\omega_n = \sqrt{\frac{K_D K_m}{J(q_A, q_B)\, T_m}} \qquad (6.38a)$$

$$\zeta = \frac{1}{2}\sqrt{\frac{J(q_A, q_B)}{K_D K_m T_m}} \qquad (6.38b)$$

Even though there exists only one DOF for parameter adaptation, it is possible to satisfy both the natural frequency $\omega_n$ (6.38a) and damping $\zeta$ (6.38b) conditions. Thus, rewriting the equations from (6.38) yields two versions of the gain scheduling function for $K_D$:

$$K_D(q_A, q_B) = \frac{\omega_n^2\, J(q_A, q_B)\, T_m}{K_m} \qquad (6.39a)$$

$$K_D(q_A, q_B) = \frac{J(q_A, q_B)}{4 \zeta^2 K_m T_m} \qquad (6.39b)$$

Because both equations have a linear relation to the variations in the moment of inertia, setting $K_D$ proportional to $J(q_A, q_B)$ cancels out the variations of both the natural frequency and the damping.
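In an implementation, this first scheduling law reduces to scaling the nominal derivative gain with the current inertia estimate, as sketched below. The nominal values are illustrative; J(qA, qB) would come from the inertia model of Chap. 5.

```python
def schedule_kd(j_current, j_nominal, kd_nominal):
    """Gain scheduling of K_D per eq. (6.39): both (6.39a) and (6.39b) scale K_D
    linearly with the moment of inertia, so K_D = K_D_N * J / J_N keeps the natural
    frequency and the damping of the inner loop constant."""
    return kd_nominal * j_current / j_nominal

# Example: deploying the arms doubles the relevant inertia (illustrative numbers):
print(schedule_kd(j_current=0.04, j_nominal=0.02, kd_nominal=0.5))   # -> 1.0
```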

On the other hand, in this control loop there exist other parameters that vary with time. For instance, the aforementioned $K_m$ and $T_m$ change as the battery depletes. Even more, the linear relationship between $K_D$ and the stability condition ℘ makes it ideal for model reference adaptation, which will be discussed later in the text. Therefore, in order to use both gain scheduling and MRAC together, gain scheduling should be applied to other controller parameters. To that end, a different parameter adaptation law is proposed: adapting the $K_p$ and $K_i$ gains with respect to the joint angle changes $q_A$ and $q_B$, respectively, while maintaining the stability condition $\wp > J(q_A, q_B)$.

The design approach for this gain scheduling adaptation controller is straightforward. First, we choose a nominal design pose. The pose can be chosen arbitrarily, but


Fig. 6.12 Visualization ofthe gain schedulingalgorithm based onmaintaining the stabilitycondition ℘ > J (qA,qB)

©2013 SpringerScience+Business MediaDordrecht with permissionof Springer [38]

for practical reasons, this pose should be chosen in a way that simplifies parametertuning. Once the pose is chosen, the system is tuned according to the controller designgoals, which yields a stable, nominal value ℘N > JN . Gain scheduling adaptationalgorithm is then applied in the following form:

$$K_i(q_A,q_B) \sim \frac{1}{J(q_A,q_B)} \;\rightarrow\; \wp(q_A,q_B) - \wp_N = J(q_A,q_B) - J_N. \qquad (6.40)$$

This adaptation algorithm successfully creates an envelope above the varying moment of inertia, graphically shown in Fig. 6.12. Wrapping the stability criterion around the moment of inertia keeps the system stable at all times.
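As a minimal illustration of this scheduling idea, the sketch below scales the nominal gains with a joint-dependent moment of inertia: KD proportional to J(qA,qB) per (6.39), and Ki proportional to 1/J(qA,qB) per (6.40). The inertia model, link parameters, and nominal gain values are illustrative placeholders only, not the values used in the experiments.

```python
import numpy as np

# Hypothetical arm geometry: two single-DOF arms with point masses m_A, m_B
# at distances l_A, l_B from the body axis; J0 is the body moment of inertia.
# inertia() is only an illustrative stand-in for the true coupled J(qA, qB).
J0, m_A, m_B, l_A, l_B = 0.0053, 0.20, 0.20, 0.25, 0.25

def inertia(qA, qB):
    """Illustrative coupled moment of inertia as a function of joint angles."""
    return J0 + m_A * (l_A * np.sin(qA))**2 + m_B * (l_B * np.sin(qB))**2

# Nominal design pose and nominal gains tuned at that pose (placeholder values)
qA0, qB0 = 0.0, 0.0
J_N = inertia(qA0, qB0)
K_D0, K_i0 = 1.2, 0.4

def schedule_KD(qA, qB):
    """Scheduling law (6.39): K_D proportional to J keeps omega_n and zeta constant."""
    return K_D0 * inertia(qA, qB) / J_N

def schedule_Ki(qA, qB):
    """Scheduling law (6.40): K_i ~ 1/J keeps the stability margin above J."""
    return K_i0 * J_N / inertia(qA, qB)

if __name__ == "__main__":
    for qA, qB in [(0.0, 0.0), (np.pi / 4, np.pi / 4), (np.pi / 2, np.pi / 2)]:
        print(f"qA={qA:.2f} qB={qB:.2f}  J={inertia(qA, qB):.4f}"
              f"  K_D={schedule_KD(qA, qB):.3f}  K_i={schedule_Ki(qA, qB):.3f}")
```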

6.4.2 Model Reference Adaptive Control

Just like gain scheduling analytically calculates the necessary adaptation envelope using a priori knowledge of the system (Fig. 6.12), the Model Reference Adaptive Control (MRAC) concept [1] achieves the same through online numerical calculations. The controller discussed in this book has been tested and used in numerous applications [6, 23, 24]. Together with the original PID controller of the aircraft, it is used to assure aircraft stability throughout the manipulation process.

Problem 6.5 Adapt the linear controller from Problem 6.3 by choosing the best parameter to adjust using Model Reference Adaptive Control.

Criterion (6.30) shows how stability can be maintained through adaptation of three parameters, KD, Ki and Kp, respectively. Of course, the stability criterion has a different sensitivity to each variable, which the adaptation algorithm needs to take into account. For KD, the stability criterion ℘ is a linear function

$$\frac{\partial\wp}{\partial K_D} = \frac{K_m K_p}{K_i}\left(1 - T_m K_p\right) \qquad (6.41)$$


which is greater than zero for all stable parameter values, making ℘ a monotonically rising function of KD. Thus, increasing KD should increase the stability of the system, making KD an optimal choice for Model Reference Adaptive Control. On the other hand, Kp has a more complex effect on stability:

$$\frac{\partial\wp}{\partial K_p} = \frac{K_m K_D}{K_i}\left(1 - 2T_m K_p\right) \qquad (6.42)$$

where ℘ is rising for Kp < 1/(2Tm) and falling for 1/(2Tm) < Kp < 1/Tm. Due to (6.30b), the system becomes unstable for all Kp > 1/Tm. Increasing Kp in the first range keeps the system stable; increasing it too much reverses the process and ultimately drives the system unstable. Finally, the integral part of the PID controller, Ki, has a reverse, nonlinear effect on the system stability ℘:

$$\frac{\partial\wp}{\partial K_i} = -\frac{K_D K_m}{K_i^2}\left(1 - T_m K_p\right). \qquad (6.43)$$

The previous equation shows that ℘ is a monotonically falling nonlinear function of Ki. Both Ki and Kp have nonlinear effects on the system stability ℘ and are not suitable for direct adaptive control. Therefore, we propose using them in a separate adaptive controller based on the gain scheduling technique.

Discussions from the previous chapters show that KD is an optimal choice for model reference adaptation. In this paragraph, the stability and feasibility of such an adaptation controller are analyzed.

6.4.2.1 Stability Analysis

Using Lyapunov stability theory, more precisely the Lyapunov-like lemma, it can be shown that such a system is uniformly stable. What is even more important, it is possible to show that the error of the adaptation dynamics e converges to zero [43, 44]. In order to show this, we need to find a scalar function V(e, ζ) that satisfies the following criteria:

• V(e, ζ) is bounded below
• V̇(e, ζ) is negative semidefinite
• V̇(e, ζ) is uniformly continuous in time

We start by isolating error dynamics:

$$\dot{e} = \begin{bmatrix} 0 & 1 \\ 0 & -\frac{1}{T_m} \end{bmatrix}(y - y_m) + (\zeta - \zeta_0)\frac{K_m}{J T_m}u = Ae + (\zeta - \zeta_0)\frac{K_m}{J T_m}u \qquad (6.44)$$

where ζ is the current adaptation gain according to (6.44), and ζ0 denotes the ideal steady value, output when the model and the system are the same, ym = y. Next we choose the candidate Lyapunov function of the following form:


$$V(e,\zeta) = \frac{\gamma}{2}e^T e + \frac{k}{2}(\zeta - \zeta_0)^2 \qquad (6.45)$$

again with ζ0 representing a steady-state value of ζ, and arbitrarily chosen gains γ and k.

Bounded below

Due to the quadratic nature of the chosen Lyapunov function (6.45), one can easily show that it is positive definite with a minimum value V(0, 0) = (k/2)ζ0², and is therefore bounded from below.

Lyapunov function derivative is negative semidefinite

Further evaluation of (6.45) shows that its derivative propagates as follows:

$$\begin{aligned}\frac{dV(e,\zeta)}{dt} &= \gamma e^T \dot{e} + k(\zeta - \zeta_0)\dot{\zeta}\\ &= \gamma e^T A e + \gamma e^T \frac{K_m}{J T_m}(\zeta - \zeta_0)\,u + k(\zeta - \zeta_0)\dot{\zeta}\\ &= \gamma e^T A e + (\zeta - \zeta_0)\left(k\frac{d\zeta}{dt} + \gamma e^T u\,\frac{K_m}{T_m J}\right)\end{aligned} \qquad (6.46)$$

The last evaluation of the Lyapunov candidate derivative shows that if the adaptation gain is constructed as dζ/dt = −(γ/k)·uPID·e, which complies with the chosen adaptation rule (6.44), the Lyapunov function derivative breaks down to the following simple form:

$$\frac{dV(e,\zeta)}{dt} = \gamma e^T A e \qquad (6.47)$$

Since the initial dynamic propagation matrix A is stable, it is straightforward to choose an arbitrary positive adaptation gain γ > 0 and set k = 1, which ensures that γeᵀAe is negative semi-definite, therefore proving that the derivative of the Lyapunov candidate (6.45) is negative semi-definite for the chosen adaptation rule (6.44).

Lyapunov function derivative is uniformly continuous in time

A sufficient condition for dV(e,ζ)/dt to be uniformly continuous is that its derivative (i.e., the second derivative of V) is bounded for all t ≥ 0, which by Barbalat's lemma then ensures dV/dt → 0 as t → ∞. Taking the time derivative again yields:

$$\frac{d^2V(e,\zeta)}{dt^2} = \gamma e^T\left(A + A^T\right)\dot{e} \qquad (6.48)$$

$$= \gamma e^T\left(A + A^T\right)\left(Ae + (\zeta - \zeta_0)\frac{K_m}{J T_m}u\right). \qquad (6.49)$$

Since u is bounded by definition (i.e., a bounded-input system), and e(t), ζ(t) are bounded as well, the second derivative (6.48) is also bounded. Invoking Barbalat's lemma, it follows that dV(e,ζ)/dt is uniformly continuous in time. To show that e(t), ζ(t) are bounded, one looks at the Lyapunov function (6.45), which is obviously decreasing in time (i.e., its derivative is negative semi-definite), which shows in particular that e(t), ζ(t) ≤ √V(t) ≤ √V(0), ∀t > 0, and they are therefore bounded.

Applying Barbalat's lemma implies that although the system has not been proven to be asymptotically stable, the adaptation error e(t) converges to zero, even if the adaptation gain ζ converges to a steady value [44]. Although the Lyapunov stability analysis sets no upper bound for the correction factor γ, it is still necessary to choose an appropriate value, and to that end the approach in [23] is chosen. A practical implementation requires that the upper and lower bounds for the adaptation gain ζ, ζmax and ζmin, are set. According to criterion (6.30b), the range of KD for which the system is stable can be determined once the range of changes of the moment of inertia J is known. In our case, KDmax = 2KD0 and KDmin = KD0/2, with KD0 as the nominal value of the control parameter KD. Since the adaptation mechanism influences the system through the multiplication KDζ (Fig. 6), determination of the ζ maximum and minimum is straightforward, i.e., in our case ζmax = 2 and ζmin = 1/2. Now, one is able to estimate the range of the correction factor γ. Rewriting equation (6.44) gives:

$$\zeta(t) = -\gamma\int u_{PID}(t)\,e(t)\,dt \qquad (6.50)$$

During the adaptation phase (Fig. 6.13), a set of pulses is generated by the PID controller in order to perturb the system so that a new value of the adaptation parameter can be determined; thus,

$$\zeta(t) = -\gamma\int \delta(t)\left[y(t) - y_m(t)\right]dt \qquad (6.51)$$

As we already noted, J and Km are the two parameters that are most influenced by aerial manipulation. Hence, including the inverse Laplace transform of the system and the model (neglecting the influence of Tm) in (6.51), one gets

$$\zeta(t) = -\gamma\left(\frac{K_m}{J} - \frac{K_M}{J_M}\right)t + \zeta_0 \qquad (6.52)$$

Since the dynamics of the adaptation loop must be slower (usually 5–10 times) than the system dynamics, in the case of a large change of parameters the adaptation parameter should attain its maximum/minimum value at t ≈ 5 · (5Tm), which gives

$$\zeta_{max} \approx -\gamma\left(\frac{K_m}{J_{max}} - \frac{K_M}{J_M}\right)\cdot 25T_m + \zeta_0 \qquad (6.53)$$

$$\zeta_{min} \approx -\gamma\left(\frac{K_m}{J_{min}} - \frac{K_M}{J_M}\right)\cdot 25T_m + \zeta_0 \qquad (6.54)$$

Finally, the lower of the two values of the correction factor γ, calculated from the previous two equations, should be used in the MRAC.
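The following sketch shows how the correction factor γ could be estimated from (6.53) and (6.54). It assumes the ζ bounds quoted in the text (ζmax = 2, ζmin = 1/2) and ζ0 = 1, and treats the model parameters KM, JM and the inertia range Jmin, Jmax as user-supplied placeholders; the settling instant t ≈ 25Tm follows the rule of thumb stated above.

```python
def correction_factor(Km, Tm, KM, JM, J_min, J_max,
                      zeta_min=0.5, zeta_max=2.0, zeta0=1.0):
    """Estimate the MRAC correction factor gamma from (6.53)-(6.54).

    The adaptation parameter is assumed to reach its bound at t ~ 5*(5*Tm).
    The smaller (more conservative) of the two candidate values is returned.
    """
    t_settle = 25.0 * Tm
    # gamma for which zeta just reaches zeta_max when J = J_max (6.53)
    g_upper = (zeta0 - zeta_max) / ((Km / J_max - KM / JM) * t_settle)
    # gamma for which zeta just reaches zeta_min when J = J_min (6.54)
    g_lower = (zeta0 - zeta_min) / ((Km / J_min - KM / JM) * t_settle)
    return min(abs(g_upper), abs(g_lower))


if __name__ == "__main__":
    # Placeholder numbers, only to exercise the function
    gamma = correction_factor(Km=1.0, Tm=0.05, KM=1.0, JM=0.0053,
                              J_min=0.004, J_max=0.011)
    print(f"correction factor gamma ~ {gamma:.3f}")
```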


Because model reference adaptation is highly susceptible to disturbances, one has to take into account the static and dynamic torque disturbances produced by the arm movement. That is why a disturbance torque estimator is introduced into the MRAC control schematics. Dynamic disturbances are canceled out by using a low-pass filter in the adaptation rule. Static torques, however, cannot be bypassed with a filter. The static torque is caused by the shift in the center of mass of the aircraft and the gravity that acts on its unbalanced body. Learning from the results in [33], one can find the unknown center of mass offset, in a least-squares minimization sense, simply as an average over the collected data. The estimation results are then fed to the MRAC model, thus minimizing the controller's vulnerability to disturbances. Static torque estimation works well in steady state, but fails to accurately estimate the dynamic changes in the gravity torque. Therefore, we propose adding a dead zone to the adaptation rule in order to cancel out the estimation errors.

Previously, we have shown how an aerial robot with a poorly designed PID controller becomes unstable during manipulation tasks, even though it is perfectly stable during flight [21]. In this paragraph, we put the adaptive control to the test, trying to stabilize the same system in the exact same situation. Figures 6.13 and 6.14 show the results of one of the performed tests, where the quadrotor's roll controller was tuned close to the stability boundary. The aircraft takes off with the arms tucked and stowed. After the vehicle settles into a hover, the arms are deployed downward and fully extended (Fig. 6.7), thus increasing the moments of inertia. This change in the moment of inertia tries to destabilize the system and thus produces undesired oscillations in the roll angle control loop. The oscillations trigger the MRAC, which changes the overall control loop gain and therefore stabilizes the system. According to the stability criteria (6.30), the adaptive gain ζ needs to increase the derivative gain KD to account for the rise in J. Figure 6.15 shows how the adaptive gain ζ changes throughout the simulation, and Fig. 6.14 shows the system response and the produced forces and torques.

6.4.3 Backstepping Control

Among these strategies, the backstepping controller seems to be a very popular choice for VTOL rotorcraft. This section aims to explain in detail backstepping control for aerial robotic applications. The authors in [4, 31] were among the first to implement the backstepping control technique on standard quadrotor platforms.

We proceed to derive the backstepping controller for the x-axis only, but the same procedure can be followed to derive the controllers for the y- and z-axes, respectively. As in linear control, we observe attitude and position control separately, with the position control feeding references to the attitude controller, which in turn affects the motion of the quadrotor. This well-known cascade control technique is shown in Fig. 6.4. We expand once again, for clarity, the attitude dynamic equation of a standard quadrotor:


Fig. 6.13 MATLAB® simulation (take-off with arms stowed, oscillations settled, deploying arms move): roll and pitch angles ©2013 Springer Science+Business Media Dordrecht with permission of Springer [38]

Fig. 6.14 MATLAB® simulation (take-off with arms stowed, oscillations settled, deploying arms move): propulsion system thrust and torque values ©2013 Springer Science+Business Media Dordrecht with permission of Springer [38]

$$m\cdot\mathbf{a} = -m\cdot\mathbf{g} + \mathbf{z}_f\left(\sum \Omega_i\right) \qquad (6.55)$$

$$\mathbf{I}\dot{\boldsymbol{\omega}} = -\boldsymbol{\omega}\times\mathbf{I}\boldsymbol{\omega} + \boldsymbol{\tau}. \qquad (6.56)$$

Writing the quadrotor dynamic equation in state space, observing only the x-axis and its respective pitch angle dynamics, yields:


Fig. 6.15 The adaptive gain ζ changes as the oscillations occur and brings the system back into the stability region ©2013 Springer Science+Business Media Dordrecht with permission of Springer [38]

$$\begin{bmatrix}\dot{x}\\ \ddot{x}\end{bmatrix} = \begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}\begin{bmatrix}x\\ \dot{x}\end{bmatrix} + \begin{bmatrix}0\\ 1\end{bmatrix}\frac{c_\psi s_\theta c_\phi + s_\psi s_\phi}{m}\sum f(u_i) \qquad (6.57)$$

$$\begin{bmatrix}\dot{\theta}\\ \ddot{\theta}\end{bmatrix} = \begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}\begin{bmatrix}\theta\\ \dot{\theta}\end{bmatrix} + \begin{bmatrix}0\\ 1\end{bmatrix}\frac{(J_{zz} - J_{xx})\dot{\phi}\dot{\psi} + \tau_y}{J_{yy}} \qquad (6.58)$$

The standard small-angle and hovering approximations enable us to simplify these equations to:

$$\begin{bmatrix}\dot{x}\\ \ddot{x}\end{bmatrix} = \begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}\begin{bmatrix}x\\ \dot{x}\end{bmatrix} + \begin{bmatrix}0\\ 1\end{bmatrix}\frac{\theta}{m}\sum f(u_i) \qquad (6.59)$$

$$\begin{bmatrix}\dot{\theta}\\ \ddot{\theta}\end{bmatrix} = \begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}\begin{bmatrix}\theta\\ \dot{\theta}\end{bmatrix} + \begin{bmatrix}0\\ 1\end{bmatrix}\frac{\tau_y}{J_{yy}} \qquad (6.60)$$

Attitude Control

The backstepping control procedure starts by setting a desired set point for the system's pitch, in our case θr. Next, we derive a tracking error system

z1 = θr − θ, (6.61)

which we aim to stabilize. To that end, we use Lyapunov theory to show the stability boundaries of this subsystem. The most usual approach is to use a standard quadratic form as the Lyapunov function of the subsystem,

$$V(z_1) = \frac{z_1^2}{2}. \qquad (6.62)$$

As a standard procedure, we require that the Lyapunov function is positive definite and its time derivative negative semi-definite:

$$\dot{V}(z_1) = z_1\dot{z}_1 = z_1(\dot{\theta}_r - \dot{\theta}) < 0. \qquad (6.63)$$


In order to keep the derivative negative semi-definite, and thus maintain the stability of the subsystem, we introduce θ̇ as a virtual input v1 to z1. If one chooses the following control strategy:

$$v_1 = \dot{\theta}_r + \alpha_1 z_1, \qquad (6.64)$$

for any scalar value α1 > 0 the Lyapunov derivative becomes negative semi-definite:

$$\dot{V}(z_1) = -\alpha_1 z_1^2 < 0 \qquad (6.65)$$

Since θ̇ is a dynamic state variable, it cannot be arbitrarily set to follow the virtual input v1, so we need to find another control input that will steer θ̇ toward v1. Therefore, we continue the backstepping process by deriving a new error tracking subsystem

$$z_2 = \dot{\theta} - v_1 = \dot{\theta} - \dot{\theta}_r - \alpha_1 z_1, \qquad (6.66)$$

with its respective Lyapunov function V2:

$$V_2(z_1, z_2) = \frac{z_1^2}{2} + \frac{z_2^2}{2}. \qquad (6.67)$$

Before we proceed with examining the first derivative of V2, let us consider the relation between z1 and z2:

$$z_1 = \theta_r - \theta \qquad (6.68)$$
$$z_2 = \dot{\theta} - v_1 = \dot{\theta} - \dot{\theta}_r - \alpha_1 z_1$$
$$\dot{z}_1 = -z_2 - \alpha_1 z_1.$$

Now, we proceed to derive the first time derivative of the Lyapunov function V2(z1, z2):

$$\begin{aligned}\dot{V}_2(z_1, z_2) &= z_1\dot{z}_1 + z_2\dot{z}_2 \\ &= z_1(-z_2 - \alpha_1 z_1) + z_2\left(\ddot{\theta} - \ddot{\theta}_r + \alpha_1(z_2 + \alpha_1 z_1)\right)\\ &= -\alpha_1 z_1^2 + z_2\left(-z_1 + \ddot{\theta} - \ddot{\theta}_r + \alpha_1(z_2 + \alpha_1 z_1)\right)\end{aligned} \qquad (6.69)$$

Quadrotor dynamics dictates that θ̈ = τy/Jyy. Taking that into account, and in order to obtain V̇2(z1, z2) = −α1z1² − α2z2², the expression in parentheses needs to yield:

$$-z_1 + \ddot{\theta} - \ddot{\theta}_r + \alpha_1(z_2 + \alpha_1 z_1) = -\alpha_2 z_2 \qquad (6.70)$$
$$-z_1 + \frac{\tau_y}{J_{yy}} - \ddot{\theta}_r + \alpha_1(z_2 + \alpha_1 z_1) = -\alpha_2 z_2.$$


Following that, we calculate that the control input τy needs to drive the system according to the control law:

$$\tau_y = J_{yy}\left(z_1 - \alpha_2 z_2 + \ddot{\theta}_r - \alpha_1(z_2 + \alpha_1 z_1)\right), \qquad (6.71)$$

in order to force the quadrotor to follow the set point θr. Furthermore, this control law will be stable under the assumption that α2 > 0.

At this point, we can show that it is straightforward to relax the hovering condition φ̇ψ̇ ≈ 0. Without this constraint, the dynamic equation for θ̈ becomes:

$$\ddot{\theta} = \frac{(J_{zz} - J_{xx})\dot{\phi}\dot{\psi} + \tau_y}{J_{yy}}, \qquad (6.72)$$

and it is easy to show that the control law thus becomes:

$$\tau_y = J_{yy}\left(z_1 - \alpha_2 z_2 + \ddot{\theta}_r - \alpha_1(z_2 + \alpha_1 z_1)\right) - (J_{zz} - J_{xx})\dot{\phi}\dot{\psi} \qquad (6.73)$$

and thus easily accounts for nonlinear intertwined effects of Euler rotation.
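A minimal sketch of the resulting attitude control law is shown below. It directly implements the error states (6.61), (6.64), (6.66) and the torque law (6.71) with the Euler-coupling compensation of (6.73); the function signature, variable names, and the availability of the reference derivatives are assumptions made for illustration only.

```python
def backstepping_pitch_torque(theta_ref, dtheta_ref, ddtheta_ref,
                              theta, dtheta, dphi, dpsi,
                              Jxx, Jyy, Jzz, alpha1, alpha2):
    """Backstepping pitch control law, cf. (6.71) and (6.73).

    theta_ref, dtheta_ref, ddtheta_ref : pitch reference and its derivatives
    theta, dtheta                      : measured pitch angle and pitch rate
    dphi, dpsi                         : measured roll and yaw rates (coupling term)
    """
    z1 = theta_ref - theta                 # tracking error (6.61)
    v1 = dtheta_ref + alpha1 * z1          # virtual input (6.64)
    z2 = dtheta - v1                       # second error state (6.66)
    # nominal law (6.71) plus compensation of the Euler coupling from (6.72)
    tau_y = Jyy * (z1 - alpha2 * z2 + ddtheta_ref
                   - alpha1 * (z2 + alpha1 * z1)) \
            - (Jzz - Jxx) * dphi * dpsi
    return tau_y
```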

Comparing Backstepping Control with Linear PD Controller

Before proceeding with position control, we pause briefly to take a closer look at the previously derived nonlinear control algorithm (6.73). Considering only the step response, we can drop the term θ̈r. Furthermore, for a fair comparison we neglect the nonlinear component of the algorithm, (Jzz − Jxx)φ̇ψ̇. This leaves us with the bare equation (6.71), where we substitute z1 and z2 with:

$$z_1 = e \qquad (6.74)$$
$$z_2 = -\dot{e} - \alpha_1 z_1,$$

where e and ė denote the signal error θr − θ and its derivative, respectively. Equation (6.71) then takes the following form:

$$\tau_y = J_{yy}\left(e(1 + \alpha_1\alpha_2) + \dot{e}(\alpha_1 + \alpha_2)\right), \qquad (6.75)$$

which is identical to a standard PD controller, where

Kp = Jyy(1 + α1α2) (6.76)

Kd = Jyy(α1 + α2).

When Jyy is equal to the exact value of the system's moment of inertia, the closed-loop transfer function is a standard second-order transfer function:

$$\frac{1 + \alpha_1\alpha_2 + s(\alpha_1 + \alpha_2)}{s^2 + (\alpha_1 + \alpha_2)s + 1 + \alpha_1\alpha_2}. \qquad (6.77)$$


Applying the Hurwitz criterion to the transfer function (6.77) shows that it is stable for all α1 > 0 and α2 > 0. This conclusion goes hand in hand with the conditions provided by the Lyapunov method used to derive the backstepping control algorithm. It allows us to tune this backstepping version of the controller in the same manner one would tune a standard PD controller. Therefore, the previous stability analysis applies to backstepping control as well.

Problem 6.6 Next, we compare the results of a linear PD controller to the backstepping controller. To make a fair comparison, we tune both controllers with the same parameters. To prove a point, we choose expected values of Kd and Kp and calculate their respective values of α1 and α2 according to (6.76). The expected difference between the controllers comes from the compensation of the nonlinear Euler coupling φ̇ψ̇. In practice, this component affects the system only during the most demanding aerobatic maneuvers. To emphasize the effect of coupling, we increase the Jzz moment of inertia of the quadrotor. This, in turn, amplifies the coupling effect at slower speeds. The parameters are laid out in Table 6.1, and the results are shown in Fig. 6.16.
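A small sketch of the gain conversion implied by (6.76) is given below: since Kp/Jyy = 1 + α1α2 and Kd/Jyy = α1 + α2, the backstepping gains are the roots of a quadratic. This is only the algebraic inversion of (6.76); it assumes the PD gains are expressed in the same units as that relation.

```python
import math

def alphas_from_pd(Kp, Kd, Jyy):
    """Recover backstepping gains alpha1, alpha2 from PD gains via (6.76).

    alpha1 and alpha2 are the roots of x**2 - (Kd/Jyy)*x + (Kp/Jyy - 1) = 0.
    """
    s = Kd / Jyy              # alpha1 + alpha2
    p = Kp / Jyy - 1.0        # alpha1 * alpha2
    disc = s * s - 4.0 * p
    if disc < 0.0:
        raise ValueError("No real alpha1, alpha2 exist for these PD gains")
    r = math.sqrt(disc)
    return (s + r) / 2.0, (s - r) / 2.0
```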

The experiment is designed to show the effect of coupling. First, we command the quadrotor to pitch by 0.1 radians without yaw rotation. During this maneuver, there is hardly any difference between the backstepping and linear control. Next, we command the same pitch angle step change and, at the same time, rotate the vehicle in the yaw direction. This produces nonlinear coupling and renders different dynamic responses of the linear and nonlinear controllers, denoted with the black arrow. The experiment is repeated for a larger pitch angle reference change, which, together with the larger speed, causes a slightly larger effect on the transition dynamics. Comparing the linear and nonlinear controllers side by side shows that the nonlinear controller copes better with dynamically more demanding flight trajectories.

Position Control

The control principle explained herein follows the same structure as with classic linear control, where the position of the quadrotor is controlled indirectly through the attitude control loop. The position control loop output drives the attitude control loop and directs the thrust force of the quadrotor to fly the UAV toward the desired waypoint.

Table 6.1 Control parameters and quadrotor simulation parameters for the experiment comparing backstepping to PD control

Control parameters: Kd = 5.05, Kp = 105.65, α1 = 10, α2 = 10

Quadrotor parameters: Jxx = 0.00528250 kg m², Jyy = 0.00528250 kg m², Jzz = 0.105 kg m², Jij = 0.0 for i ≠ j


Fig. 6.16 System response comparing the classical PID and backstepping controllers. Both pitch and roll angles are set to follow the same reference, first twice for 0.2 rad, followed by two 0.4 rad step references. At the same time, the yaw angle is commanded to change after the first attempt. This produces different responses, depending on the nonlinear Euler coupling effect. The responses show that the backstepping controller deals with the nonlinear effect, while the PID controller falls behind and stabilizes once the yaw angle settles down

Applying the previously mentioned small-angle approximation, one can derive the backstepping control algorithm to control the position of the quadrotor in 3D space indirectly through the attitude control loop. We proceed with deriving the backstepping control loop for the x-axis, but an identical procedure can be applied to y-axis control. As in the attitude control loop, we start off by extracting a tracking error system z3 and a virtual control input v3:

$$z_3 = x_r - x \qquad (6.78)$$
$$\dot{z}_3 = \dot{x}_r - \dot{x}, \quad \dot{x} \sim v_3.$$

Stabilizing this tracking error subsystem will force the dynamic position of the quadrotor to follow the desired reference point xr. Once more, we turn to Lyapunov theory,

$$V_3(z_3) = \frac{z_3^2}{2} \qquad (6.79)$$
$$\dot{V}_3(z_3) = z_3\dot{z}_3 = z_3(\dot{x}_r - v_3)$$

to derive a stable control law for the virtual control input v3:

$$v_3 = \alpha_3 z_3 + \dot{x}_r, \qquad (6.80)$$


under the constraint V̇3(z3) = −α3z3² < 0. To produce the virtual input v3, we turn to ẋ, and in order to shape it properly, we derive another error tracking subsystem:

$$z_4 = \dot{x} - v_3 \qquad (6.81)$$
$$= \dot{x} - \dot{x}_r - \alpha_3 z_3$$
$$\dot{z}_3 = -z_4 - \alpha_3 z_3.$$

The augmented Lyapunov function V4 then propagates in the following way:

$$V_4 = \frac{1}{2}\left(z_3^2 + z_4^2\right) \qquad (6.82)$$
$$\begin{aligned}\dot{V}_4 &= z_3\dot{z}_3 + z_4\dot{z}_4\\ &= z_3(-z_4 - \alpha_3 z_3) + z_4(\ddot{x} - \ddot{x}_r - \alpha_3\dot{z}_3)\\ &= -\alpha_3 z_3^2 + z_4\left(-z_3 + \ddot{x} - \ddot{x}_r + \alpha_3(z_4 + \alpha_3 z_3)\right).\end{aligned}$$

The system dynamics clearly shows that the linear acceleration ẍ depends on the speed of the propellers Ωi and the attitude θ:

$$\ddot{x} = \frac{\theta}{m}\sum f(\Omega_i). \qquad (6.83)$$

The approach taken chooses θ as a control input through its reference θr. In order to stabilize the system,

$$-z_3 + \ddot{x} - \ddot{x}_r + \alpha_3(z_4 + \alpha_3 z_3) = -\alpha_4 z_4 \qquad (6.84)$$
$$-z_3 + \frac{\theta}{m}\sum f(\Omega_i) - \ddot{x}_r + \alpha_3(z_4 + \alpha_3 z_3) = -\alpha_4 z_4$$

the control input to the lower-level attitude controller (i.e., θr ∼ θ) needs to obey the following backstepping control law to keep the system stable:

$$\theta_r = \frac{m}{\sum f(\Omega_i)}\left(z_3 - \alpha_4 z_4 + \ddot{x}_r - \alpha_3(z_4 + \alpha_3 z_3)\right) \qquad (6.85)$$

The same analysis can be applied to y-axis control, but the steps are omitted for clarity. The informed reader is advised to attempt to derive the equations and check them against the complete set of equations given further down in the text.

Altitude Control

We observe altitude as a separate part of the system dynamics, although it is indirectly linked with the entire system. The backstepping control derivation procedure follows the same steps previously described. Without repeating what has already been covered, we proceed by presenting the results:


$$z_5 = z_r - z, \quad v_5 = \alpha_5 z_5 + \dot{z}_r, \quad z_6 = \dot{z} - v_5. \qquad (6.86)$$

The final augmented Lyapunov function V6 yields the control constraints for the altitude control:

$$V_6 = \frac{1}{2}\left(z_5^2 + z_6^2\right) \qquad (6.87)$$
$$\dot{V}_6 = -\alpha_5 z_5^2 + z_6\left(-z_5 + \ddot{z} - \ddot{z}_r + \alpha_5(z_6 + \alpha_5 z_5)\right).$$

Recalling the altitude dynamics of the quadrotor, we can write:

$$m\ddot{z} = -mg + \cos(\theta)\cos(\phi)\left(\sum f(\Omega_i)\right). \qquad (6.88)$$

The altitude is controlled by varying the total thrust of the quadrotor, Σf(Ωi). Furthermore, we remain under the hover condition and small-angle approximation, such that cos(θ)cos(φ) ∼ 1. Combining the previous two equations under the hover assumption yields the following control rule:

$$\sum f(\Omega_i) = m\left(z_5 + g - \alpha_5(z_6 + \alpha_5 z_5) - \alpha_6 z_6\right) \qquad (6.89)$$

Relaxing the hover condition assumption yields a somewhat different, but still straightforward control input:

$$\sum f(\Omega_i) = \frac{m}{\cos(\theta)\cos(\phi)}\left(z_5 + g - \alpha_5(z_6 + \alpha_5 z_5) - \alpha_6 z_6\right) \qquad (6.90)$$
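The outer position and altitude loops can be sketched in the same way as the attitude loop. The snippet below implements the pitch reference of (6.85) and the thrust command of (6.89) under the hover assumption; it assumes the virtual inputs v3 = ẋr + α3z3 and v5 = żr + α5z5 from (6.80) and (6.86), and the function signatures are illustrative only.

```python
def backstepping_pitch_reference(x_ref, dx_ref, ddx_ref, x, dx,
                                 mass, total_thrust, alpha3, alpha4):
    """Position backstepping (6.85): pitch reference steering x toward x_ref."""
    z3 = x_ref - x
    v3 = dx_ref + alpha3 * z3
    z4 = dx - v3
    return mass / total_thrust * \
        (z3 - alpha4 * z4 + ddx_ref - alpha3 * (z4 + alpha3 * z3))


def backstepping_thrust(z_ref, dz_ref, z, dz, mass, g, alpha5, alpha6):
    """Altitude backstepping (6.89), under the hover assumption."""
    z5 = z_ref - z
    v5 = dz_ref + alpha5 * z5
    z6 = dz - v5
    return mass * (z5 + g - alpha5 * (z6 + alpha5 * z5) - alpha6 * z6)
```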

6.4.4 Hsia - a Robust Adaptive Control Approach

The last adaptive control technique we aim to cover in this book is a form of Model Reference Adaptive Control. The actual control system implementation consists of classical cascaded linear PID control solutions coupled with nonlinear robust adaptive techniques. Although not explicitly shown, the approach applied in this scenario has robust adaptive characteristics similar to backstepping control techniques such as the one proposed in [19].

Problem 6.7 Design an adaptive control for the multirotor aerial vehicle shown in Fig. 6.17. The controller should be capable of dealing with wind gusts with a total force of 2 N. At the same time, the weight of the UAV is 15 N, but can vary ±10%.

First, the power needs to be equally distributed across all six MAV propellers:


$$\begin{bmatrix} u_1 \\ \vdots \\ u_6 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{8} & -\tfrac{1}{4} & -\tfrac{1}{6} & 1 \\ \tfrac{1}{4} & 0 & \tfrac{1}{6} & 1 \\ \tfrac{1}{8} & \tfrac{1}{4} & -\tfrac{1}{6} & 1 \\ -\tfrac{1}{4} & \tfrac{1}{4} & \tfrac{1}{6} & 1 \\ -\tfrac{1}{8} & 0 & -\tfrac{1}{6} & 1 \\ -\tfrac{1}{4} & -\tfrac{1}{4} & \tfrac{1}{6} & 1 \end{bmatrix} \begin{bmatrix} u_{\tau_x} \\ u_{\tau_y} \\ u_{\tau_z} \\ u_{F_z} \end{bmatrix}. \qquad (6.91)$$

Each controller generates an output to the power distribution system (i.e., roll uτx, pitch uτy, yaw uτz, thrust uFz). Taking into account (6.91), the power distribution then calculates the voltage applied to each propeller.
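A minimal sketch of this mapping is shown below. The 6×4 mixing matrix of (6.91) is passed in by the caller rather than hard-coded, since it encodes the particular hexarotor geometry; the output is the per-propeller command that is subsequently converted to motor voltage.

```python
import numpy as np

def distribute_power(M, u_tau_x, u_tau_y, u_tau_z, u_Fz):
    """Map roll/pitch/yaw/thrust commands to six propeller inputs, cf. (6.91).

    M : 6x4 mixing matrix from (6.91), supplied by the user.
    """
    M = np.asarray(M, dtype=float)
    u = M @ np.array([u_tau_x, u_tau_y, u_tau_z, u_Fz])
    return u  # one entry per propeller, later converted to motor voltages
```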

The first stage of the control system encloses the low-level attitude and height controllers. Each one is built as a cascaded velocity–position PID control loop. An auto-tune algorithm is introduced in order to tune the PID gains according to the mean squared error (MSE) objective function [20]. Finally, Model Reference Adaptive Control, like the one proposed in [38] and Sect. 6.4.2, should be applied to the low-level controllers in order to cope with the 10% variations in the MAV dynamics introduced during the evaluation.

The second stage of the cascade control loop is again built upon classical PID control loops, augmented with a nonlinear robust adaptive technique. The chosen approach is to utilize the well-known manipulator joint control technique known as the Hsia method [18]. The idea behind this approach is to design a control output f so that it consists of two parts:

f = u + v (6.92)

where u represents the output of a generic PID controller, and v the output of the model-based auxiliary controller shown in Fig. 6.17. Since this controller is applied to wind rejection in the hover position, the underlying assumption for the model-based part is the linearization of the nonlinear MAV model around the hover condition. This implies that the acceleration is proportional to the attitude angle:

$$\ddot{x} \sim g\cdot\theta \qquad (6.93)$$

where g represents the gravitational acceleration and θ marks the pitch angle, showing the x-axis dynamics only for clarity. In the original Hsia implementation [18], the auxiliary controller output v is obtained from the comparison of the actual measured and modeled position values. This implies integrating (6.93) twice, comparing it with the measured x, and then differentiating twice to obtain the control value v. For the quadrotor implementation, we propose avoiding this process by feeding the acceleration measurements directly from the IMU sensor, as shown in Fig. 6.17. Therefore, v becomes:

$$v = g\cdot f - \ddot{x}_m \qquad (6.94)$$

There are two major drawbacks of the proposed solution: one, the IMU measurements are noisy, and two, the proposed controller (6.92) contains an algebraic loop which cannot be explicitly solved.


Fig. 6.17 Hsia auxiliary controller design overview ©2015 IEEE. Reprinted, with permission, from [37]

To solve both of these issues, we propose adding a low-pass filter in the loop, as shown in Fig. 6.17. Finally, this yields the control output f:

$$f[k] = \frac{PID(e[k]) + (1 - K_F)\,g K\, u[k-1] - \ddot{x}_m K\, G_{LP}(z)}{1 - g K_F K} \qquad (6.95)$$

where GLP(z) = KF + (1 − KF)z⁻¹ is a discrete low-pass filter implementation, e represents the position error, ẍm represents the acceleration measurement from the MAV IMU sensor, and K is the overall adaptation law gain used to fine-tune the controller.
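A minimal discrete-time sketch of this control output is shown below, under the assumptions that the reconstruction of (6.95) above is used, that u[k−1] is taken as the previous total output f[k−1], and that `pid` is any callable returning the PID output for the current position error; names and structure are illustrative only.

```python
class HsiaAxisController:
    """Discrete implementation of the control output f[k] from (6.95)."""

    def __init__(self, pid, K, K_F, g=9.81):
        self.pid, self.K, self.K_F, self.g = pid, K, K_F, g
        self.u_prev = 0.0       # previous control output u[k-1]
        self.acc_prev = 0.0     # previous IMU acceleration sample

    def step(self, pos_error, acc_meas):
        # G_LP(z) = K_F + (1 - K_F) z^-1 applied to the IMU acceleration
        acc_filt = self.K_F * acc_meas + (1.0 - self.K_F) * self.acc_prev
        num = (self.pid(pos_error)
               + (1.0 - self.K_F) * self.g * self.K * self.u_prev
               - self.K * acc_filt)
        # algebraic loop resolved analytically, cf. the denominator of (6.95)
        f = num / (1.0 - self.g * self.K_F * self.K)
        self.u_prev, self.acc_prev = f, acc_meas
        return f
```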

Figure 6.18 shows the results of the simulation experiment, where the goal was to keep the MAV at hover as close as possible to a predefined position under a constant wind disturbance. With the abrupt wind change from 0 to 2 N, the MAV is kept within 12 cm of its set point position, finally settling within 10 cm in under 1.2 s.

In Fig. 6.19, we show the simulation responses from a slightly different experiment. Within this task, the goal again was to keep the MAV steady under a wind gust. In this task, the wind force is 5 N and lasts only 1.5 s, during which time the MAV is kept within 35 cm of its commanded position. The MAV settles within the 10 cm goal in under 2.8 s.

6.5 Impedance Control

In this section, an impedance control strategy is proposed to control the dynamic interaction between the manipulator and its environment. Impedance control enables contact between the manipulator and its environment while maintaining stability during the transition from free motion to interaction [17]. In a simplified manner, the manipulator can be seen as a mass-spring-damper system behaving like an impedance toward the environment.


Fig. 6.18 Simulation responses under wind disturbance. At t = 40 s the wind force is abruptly changed (total force 2 N, the MAV weight 15 N) and stays constant after that (a); (b), (c), and (d) show the x, y, and z position responses. The stabilization algorithm keeps the MAV within 12 cm ©2015 IEEE. Reprinted, with permission, from [37]

The controller applies prescribed interaction forces at the end effector, which are calculated as:

Fint = K [X0 − X ] (6.96)

where Fint is the desired interaction force to be applied at the end effector, X0 − X is the position error, and K is a stiffness gain that maps between position error and interaction force. K can be thought of as a spring constant, while X0 − X can be thought of as the spring's compression. Equation (6.96) can be rearranged to solve for a pseudogoal position to command the end-effector to, using the position controller, that will impart the desired amount of force. To achieve this, we need to calculate the torques necessary to command each joint, where:

Tact = J #T Fint (6.97)


Fig. 6.19 Simulation responses in T3.3 under a wind gust. At t = 40 s, the wind force is abruptly changed (total force 5 N, the MAV weight 15 N) and stays constant for 1.5 s (a); (b), (c), and (d) show the x, y, and z position responses. The stabilization algorithm keeps the MAV within 35 cm ©2015 IEEE. Reprinted, with permission, from [37]

Combining (6.96) and (6.97), we have:

Tact = J #T K [X0 − X ] (6.98)

to represent the overall commanded joint torques.
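A minimal sketch of the stiffness-based torque command of (6.96)-(6.98) is given below. The text uses the pseudoinverse-transpose Jacobian notation J^{#T}; for this sketch the plain transpose mapping from task-space force to joint torque is used, and the shapes of the Jacobian and stiffness matrix are assumptions made for illustration.

```python
import numpy as np

def impedance_joint_torques(jacobian, K, x_goal, x_actual):
    """Commanded joint torques from the stiffness relation (6.96)-(6.98).

    jacobian : m x n end-effector Jacobian of the manipulator
    K        : m x m stiffness gain matrix (the 'spring constant')
    """
    f_int = K @ (np.asarray(x_goal) - np.asarray(x_actual))   # (6.96)
    # task-space force mapped to joint torques; the book's J^{#T} is replaced
    # here by the plain transpose for the sketch
    tau = np.asarray(jacobian).T @ f_int
    return tau
```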


Even with excellent vehicle position control, relative motion between the UAV and the end-effector highlights the need for compliant manipulation approaches. To address the difficulties of using a rigid, redundant manipulator, a desired end-effector impedance can be expressed as:

$$M_d(\ddot{x} - \ddot{x}_d) + B_d(\dot{x} - \dot{x}_d) + K_d(x - x_d) = -f_e \qquad (6.99)$$

where Md is the inertia matrix, Bd is the damping matrix, and Kd is the stiffness matrix. Vectors x and xd represent the actual and desired end-effector positions, and fe represents the generalized force the environment exerts upon the end-effector [8]. A proposed Cartesian PD controller to move the n-DOF manipulator through space without regard to environmental interactions has the form:

$$\tau(XYZ) = K_p\left(XYZ'_d - XYZ\right) + K_d\left(\dot{XYZ}'_d - \dot{XYZ}\right) \qquad (6.100)$$

where τ(XYZ) is the torque commanded to the joints to provide XYZ movement, Kp and Kd are the proportional and derivative control coefficients, XYZ′d and its derivative are the desired end-effector position and velocity trajectories, and XYZ and its derivative are the current end-effector position and velocity.

To specify the actual torques (τ) to send to the individual joints (q), and considering inertia (M), gravity (G), Coriolis and centripetal torques (C), and viscous and Coulomb friction (F), the equation has the form:

$$\tau(q) = J^T\tau(XYZ) + M(q)\ddot{q} + C(q,\dot{q})\dot{q} + F(\dot{q}) + G(q) \qquad (6.101)$$

Manipulator and contact forces and torques are sensed and fed back into the controller.

6.6 Switching Stability of Coupling Dynamics

As Chap. 5 clearly demonstrated, the system exhibits a discontinuity in its dynamics. This occurs when the aerial robot switches between contact with the environment and flight without contact. In many scenarios, for instance pick-and-place missions, the switching usually occurs only once, when the object is grabbed. However, in more complex tasks, where the system is forced to make multiple attempts to grab or touch a static object, the switch between system dynamics occurs multiple times or in a continuous manner. To truly verify the stability of the aerial robot in contact with the environment, this section is devoted to analyzing the stability of the system w.r.t. the switching dynamics.

Before diving into the stability analysis, and in order to be able to write a full state-space representation of the aerial robot dynamics, we step back once again and write a simplified transfer function for the joint control dynamics. Due to payload constraints, the construction of the robotic arms is limited to lightweight servo motors for actuation, which often offer little or no choice of control parametrization. However, suitable servo motors can be selected in order to minimize, from a practical point of view, the variations of the closed-loop dynamics. This fact permits us to decouple the manipulator dynamics from the body motion, so that we can propose approximating the closed-loop dynamics with a second-order transfer function

$$q^j_i = \frac{1}{\frac{s^2}{(\omega^j_{n,i})^2} + \frac{2\zeta^j_i}{\omega^j_{n,i}}s + 1}\,u^j_{qr,i}, \quad \forall\begin{cases}i \in 1,\dots,4\\ j \in \{A, B\}\end{cases} \qquad (6.102)$$

where each joint angle q^j_i has specific dynamics (i.e., natural frequency ω^j_{n,i} and damping ζ^j_i) and input u^j_{qr,i}.

The aforementioned dynamic components of the MM-UAS represent a first-order mathematical approximation model. Although various simplifications are applied in the derivation of this model, it is still capable of capturing the basic physical phenomena needed to assess system stability while performing the proposed benchmark aerial manipulation tasks.

Observing (6.27) and (6.102), we can write the state-space representation of the system dynamics A_{r(t)}:

$$A_{r(t)} = \begin{bmatrix} A^{\alpha}_{CL}(t) & 0_{4\times 8} & 0_{12\times 4n} \\ 0_{8\times 4} & \ddots & \\ 0_{4n\times 12} & & \begin{matrix} A_{q^j_i} & 0_{2\times(4n-2)} \\ 0_{(4n-2)\times 2} & \ddots \end{matrix} \end{bmatrix} \qquad (6.103)$$

$$A^{\alpha}_{CL}(t) = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -K_i\eta(t) & -K_p\eta(t) & -\eta(t) & -\frac{1}{T_m} \end{bmatrix}, \quad A_{q^j_i} = \begin{bmatrix} 0 & 1 \\ -(\omega^j_{n,i})^2 & -2\zeta^j_i\omega^j_{n,i} \end{bmatrix}, \quad \forall\begin{cases}\alpha \in \{\psi,\theta,\phi\}\\ i \in 1,\dots,4 \\ j \in \{A,B\}\end{cases}$$

The switching phenomenon of the system stems from the variable ηα(t), which in turn depends on the variable moment of inertia. We can write the switching dynamics of the aerial robot in a compact generalized form:

$$\dot{\xi} = A_{r(t)}\,\xi \qquad (6.104)$$

where for a piecewise constant switching rule function r(t): ℝ≥0 → {1, 2} there exists a corresponding transfer function matrix A_r: ℝ^{4n+12} → ℝ^{4n+12} (e.g., shown in (6.27)). Depending on whether or not contact exists, A_r changes its parameters accordingly and thus initiates the switch in the dynamics. The locally absolutely continuous state function ξ: ℝ≥0 → ℝ^{4n+12} that satisfies (6.104) for t ∈ ℝ≥0, together with a switching function r(t) with a finite number of discontinuities in each time interval, completes the solution of (6.104) [15].

Earlier in this chapter, we have shown that each system dynamics can be made asymptotically stable (i.e., a Hurwitz matrix) if the stability criterion (Fig. 6.6) is raised above the coupled moment of inertia for both cases. However, that still does not guarantee switching stability, since a certain switching combination might still render the system unstable. To prove that the system is stable under arbitrary switching, one needs to find a Common Quadratic Lyapunov (CQL) function, which is a highly difficult task for a system of dimension 2n + 2 and goes beyond the scope of this book. However, if we constrain our observations to the class of switching signals

D := {r(t) : tk+1 − tk ≥ τD} (6.105)

where the time passed between any two consecutive switching times tk+1, tk is greater than or equal to τD, also known as the dwell time. The authors in [16] show that switching among stable linear systems results in a stable overall system provided that the switching is slow enough. The idea behind the stability analysis is the concept of Multiple Lyapunov Functions (MLF), where for the corresponding state i there exists a Lyapunov function Vi(ξ) such that each time the system changes from state i to j at time tj, Vj(ξ(tj)) < Vi(ξ(ti)). When we observe the linearized functions A_{r(t)}, the following inequality has to hold:

$$V_j\left(e^{A_i T}\xi\right) < V_i(\xi), \quad \forall \xi \neq 0,\; \forall i \neq j \qquad (6.106)$$

Since it is not trivial to find a general MLF combination, we turn to the findings in [9], more precisely Theorem 2.3, which we restate here for completeness:

Theorem 6.1 Assume that, for given τD > 0,

$$\exists P_i : \begin{cases} P_i > 0, & \forall i\\ A_i^T P_i + P_i A_i < 0, & \forall i\\ e^{A_i^T\tau_D}P_j\, e^{A_i\tau_D} < P_i, & \forall i \neq j\end{cases} \qquad (6.107)$$

where Pi are positive definite matrices for the given state i, then it is sufficient to say that the system is exponentially stable in D.

It goes without saying that it is straightforward (although tedious) to find a minimum dwell time τD as the solution of a convex programming problem with linear matrix inequality constraints. However, since the solution depends on the given matrices Pi, the computation might become unproductive, given that another choice of Pi might yield a smaller minimal time τD. Therefore, we turn to the remarks in [9], stating that when τD approaches infinity, the third inequality reduces to Pi > 0, ∀i. This implies that if we keep the aerial robot in a given state long enough, the only necessary condition for system stability is that each matrix Ar is Hurwitz. Although this remark might seem trivial at first, it fits in line with our proposed control strategy, where the mission control automaton prevents the system from switching between


states (i.e., perch and contactless flight). For a given state of the mission, the mission control algorithm measures the mean square error of the system in order to detect when it reaches an equilibrium point and when, according to (6.106), it is safe to transition to a different state.
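The sufficient conditions of Theorem 6.1 can be checked numerically for a candidate dwell time. The sketch below uses one particular (and therefore conservative) choice of Pi, obtained by solving Aiᵀ Pi + Pi Ai = −I, and then verifies the third inequality of (6.107); it is only an illustrative certificate check, not a dwell-time optimizer.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm

def dwell_time_certificate(A_list, tau_D):
    """Check the sufficient conditions of Theorem 6.1, cf. (6.107).

    For each (assumed Hurwitz) mode A_i, P_i solves A_i^T P_i + P_i A_i = -I;
    the function then verifies exp(A_i^T tau_D) P_j exp(A_i tau_D) < P_i
    for all i != j.  Returns True if tau_D is certified by this choice of P_i.
    """
    P = [solve_continuous_lyapunov(A.T, -np.eye(A.shape[0])) for A in A_list]
    for i, Ai in enumerate(A_list):
        E = expm(Ai * tau_D)
        for j in range(len(A_list)):
            if i == j:
                continue
            # P_i - E^T P_j E must be positive definite
            if np.min(np.linalg.eigvalsh(P[i] - E.T @ P[j] @ E)) <= 0.0:
                return False
    return True
```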

References

1. Åström KJ, Wittenmark B (1995) Adaptive control. Addison-Wesley Series in Electrical Engineering. Addison-Wesley, Boston
2. Barczyk M, Lynch AF (2012) Integration of a triaxial magnetometer into a helicopter UAV GPS-aided INS. IEEE Trans Aerosp Electron Syst 48(4):2947–2960
3. Boskovic JD, Mehra RK (2000) Multi-mode switching in flight control. In: Proceedings of the 19th digital avionics systems conference (DASC), vol 2, pp 6F2/1–6F2/8
4. Bouabdallah S, Siegwart R (2005) Backstepping and sliding-mode techniques applied to an indoor micro quadrotor. In: Proceedings of the IEEE international conference on robotics and automation (ICRA)
5. Bourquardez O, Mahony R, Guenard N, Chaumette F, Hamel T, Eck L (2009) Image-based visual servo control of the translation kinematics of a quadrotor aerial vehicle. IEEE Trans Robot 25(3):743–749
6. Butler H (1992) Model reference adaptive control: from theory to practice. Prentice Hall International Series in Systems and Control Engineering, Prentice Hall, Upper Saddle River
7. Chao HY, Cao YC, Chen YQ (2010) Autopilots for small unmanned aerial vehicles: a survey. Int J Control Autom Syst 8(1):36–44
8. Cheah C-C, Wang D (1998) Learning impedance control for robotic manipulators. IEEE Trans Robot Autom 14(3):452–465
9. Colaneri P (2009) Dwell time analysis of deterministic and stochastic switched systems. In: 2009 European control conference (ECC). IEEE, New York, pp 15–31
10. Doyle JC, Francis BA, Tannenbaum AR (1991) Feedback control theory. Prentice Hall Professional Technical Reference, Prentice Hall, Upper Saddle River
11. Farrell J (1999) The global positioning system and inertial navigation. McGraw-Hill Education, New York
12. Fabresse FR, Caballero F, Maza I, Ollero A (2014) Localization and mapping for aerial manipulation based on range-only measurements and visual markers. In: Proceedings of 2014 IEEE international conference on robotics and automation (ICRA), pp 2100–2106
13. Grewal MS, Andrews AP (1993) Kalman filtering: theory and practice
14. Haus T, Orsag M, Bogdan S (2014) Visual target localization with the spincopter. J Intell Robot Syst 74(1–2):45–57
15. Hespanha JP (2004) Uniform stability of switched linear systems: extensions of LaSalle's invariance principle. IEEE Trans Autom Control 49(4):470–482
16. Hespanha JP, Morse AS (1999) Stability of switched systems with average dwell-time. In: Proceedings of the 38th IEEE conference on decision and control, vol 3
17. Hogan N (1984) Impedance control: an approach to manipulation. In: Proceedings of the American control conference, pp 304–313
18. Hsia TCS (1989) A new technique for robust control of servo systems. IEEE Trans Ind Electron 36(1):1–7
19. Jimenez-Cano AE, Martin J, Heredia G, Ollero A, Cano R (2013) Control of an aerial robot with multi-link arm for assembly tasks. In: 2013 IEEE international conference on robotics and automation (ICRA), May 2013, pp 4916–4921
20. Kim J-S, Kim J-H, Park J-M, Park S-M, Choe W-Y, Heo H (2008) Auto tuning PID controller based on improved genetic algorithm for reverse osmosis plant. World Acad Sci Eng Technol 47:384–389
21. Korpela C, Orsag M, Pekala M, Oh P (2013) Dynamic stability of a mobile manipulating unmanned aerial vehicle. In: 2013 IEEE international conference on robotics and automation (ICRA), May 2013, pp 4922–4927
22. Korpela C, Orsag M, Oh P (2014) Hardware-in-the-loop verification for mobile manipulating unmanned aerial vehicles. J Intell Robot Syst 73(1–4):725–736
23. Kovacic Z, Bogdan S, Puncec M (2003) Adaptive control based on sensitivity model-based adaptation of lead-lag compensator parameters. In: 2003 IEEE international conference on industrial technology, vol 1, December 2003, pp 321–326
24. Landau ID (2011) Adaptive control. Communications and Control Engineering, Springer, London
25. Larsen TD, Andersen NA, Ravn O, Poulsen NK (1998) Incorporation of time delayed measurements in a discrete-time Kalman filter. In: Proceedings of the 37th IEEE conference on decision and control, vol 4, December 1998, pp 3972–3977
26. Leick A (2004) GPS satellite surveying. Wiley, New Jersey
27. Leishman RC, Macdonald JC, Beard RW, McLain TW (2014) Quadrotors and accelerometers: state estimation with an improved dynamic model. IEEE Control Syst 34(1):28–41
28. Levine WS (1996) The control handbook. CRC Press, Boca Raton
29. Lim H, Park J, Lee D, Kim HJ (2012) Build your own quadrotor: open-source projects on unmanned aerial vehicles. IEEE Robot Autom Mag 19(3):33–45
30. Macdonald J, Leishman R, Beard R, McLain T (2014) Analysis of an improved IMU-based observer for multirotor helicopters. J Intell Robot Syst 74(3–4):1049–1061
31. Madani T, Benallegue A (2006) Backstepping control for a quadrotor helicopter. In: 2006 IEEE/RSJ international conference on intelligent robots and systems, October 2006, pp 3255–3260
32. Masubuchi I, Kato J, Saeki M, Ohara A (2004) Gain-scheduled controller design based on descriptor representation of LPV systems: application to flight vehicle control. In: 43rd IEEE conference on decision and control (CDC), vol 1, pp 815–820
33. Mellinger D, Lindsey Q, Shomin M, Kumar V (2011) Design, modeling, estimation and control for aerial grasping and manipulation. In: Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (IROS), pp 2668–2673
34. Miskovic N, Vukic Z, Bibuli M, Caccia M, Bruzzone G (2009) Marine vehicles' line following controller tuning through self-oscillation experiments. In: Proceedings of the 2009 17th Mediterranean conference on control and automation (MED '09). IEEE Computer Society, Washington, DC, USA, pp 916–921
35. Misra P, Enge P (2006) Global positioning system: signals, measurements and performance, 2nd edn
36. Nichols RA, Reichert RT, Rugh WJ (1993) Gain scheduling for H-infinity controllers: a flight control example. IEEE Trans Control Syst Technol 1(2):69–79
37. Orsag M, Haus T, Palunko I, Bogdan S (2015) State estimation, robust control and obstacle avoidance for multicopter in cluttered environments: EuRoC experience and results. In: 2015 international conference on unmanned aircraft systems (ICUAS). IEEE, pp 455–461
38. Orsag M, Korpela C, Bogdan S, Oh P (2014) Hybrid adaptive control for aerial manipulation. J Intell Robot Syst 73(1–4):693–707
39. Pounds PEI, Bersak DR, Dollar AM (2011) Grasping from the air: hovering capture and load stability. In: Proceedings of the IEEE international conference on robotics and automation (ICRA), pp 2491–2498
40. Romero H, Benosman R, Lozano R (2006) Stabilization and location of a four rotor helicopter applying vision. In: American control conference
41. Rozenwasser E, Yusupov R (1999) Sensitivity of automatic control systems. Control Series, CRC Press, Boca Raton
42. Seborg DE, Mellichamp DA, Edgar TF, Doyle FJ III (2010) Process dynamics and control. Wiley, New Jersey
43. Slotine JJE, Li W (1991) Applied nonlinear control. Prentice Hall, Englewood Cliffs, NJ
44. Vukic Z (2003) Nonlinear control systems. Taylor and Francis, CRC Press
45. Welch G, Bishop G (2006) An introduction to the Kalman filter. Department of Computer Science, University of North Carolina
46. Zachariah D (2013) Estimation for sensor fusion and sparse signal processing. PhD thesis, KTH Royal Institute of Technology


Chapter 7
Mission Planning and Control

Unmanned aerial vehicles have attracted significant attention for a variety of structural inspection operations, for their ability to move in unstructured environments [3]. Typical examples include bridge inspection [26], power plant inspection [4], wind farm inspection [29], and maritime surveillance [28]. In recent years, we have witnessed a tremendous rise of research potential in the field of unmanned aerial vehicles (UAVs). Consequently, the worldwide UAV market is growing rapidly as well. Unfortunately, mostly due to the UAV's limited payload capabilities, in both research and industry engineers have focused their efforts on deploying UAVs in surveillance, reconnaissance, or search and rescue missions, avoiding all possible interaction with the environment. However, the ability of aerial vehicles to manipulate a target or carry objects and interact with the environment could greatly expand the application potential of UAVs to: infrastructure inspection [23], construction and assembly [13, 22], power line inspection (Fig. 7.1), agriculture, urban sanitation, high-speed grasping and payload transportation [14, 38], and many more [9, 18, 36].

The development of robotics is omnipresent, and bringing robots to work together is the next step in mission planning. One such example, where an unmanned surface marine vehicle (USV) and a UAV work together as a marsupial system to recover objects floating on the sea surface, is presented in [28]. Of course, controlling a heterogeneous team of robots, like the one proposed herein, requires a precise, fast, and reliable high-level planning and task allocation algorithm. A high-level task planning framework needs to be devised. For instance, a mission planning software based on TÆMS decomposition, designed to expand the capabilities of ground and aerial robots by making them cooperate, is presented in [1]. As such, the framework can be used to commission other types of robots, as long as they have clearly defined capabilities structured into actions that can be incorporated into the framework. Finally, mission planning involves bringing robots into contact with humans and allowing them to work safely side by side. Reference [30] shows one such framework for human-in-the-loop control of multiagent aerial systems deployed on an aerial manipulation task in time-critical and stressful rescue missions (Fig. 7.2).



Fig. 7.1 Power line inspection and maintenance with an aerial robotic system working side by side with humans

Fig. 7.2 A UGV and a UAV working together in order to find a parcel in a cluttered environment and bring it back to base with minimum energy consumption. Both the UAV and the UGV possess specific, mutually compatible capabilities and are capable of carrying each other when necessary. The UAV carries the UGV across obstacles, while the UGV conserves the energy of the UAV by carrying it throughout the map

Mission planning includes all aspects of an unmanned aerial system, not only flight. It is an intricate optimization algorithm that aims to solve an array of optimization problems needed to successfully complete the mission. The array of problems one might aim to solve includes, but is not limited to:


• Complete coverage for surveillance and inspection missions;
• Energy optimization in order to make use of the energy containers (i.e., batteries and fuel tanks);
• Optimization of the duration of execution for time-critical missions;
• Minimization of communication delays, or minimization of the communication exchange to prevent the leaking of information.

The requirements are many and in most cases mutually exclusive. This book, however, only scratches the surface of mission planning, focusing on optimization problems related to rotorcraft flight. Since there is plenty of good literature on the subject of mission planning, the scope of this book does not cover this topic to its full extent.

In the following sections, we expand on the topic of path planning and its time-driven counterpart, trajectory planning. We build up starting from simple waypoint navigation in an uncluttered environment, and later focus on obstacle-free trajectory generation. We conclude the book with two examples of vision-based object tracking and manipulation.

7.1 Path Planning

The first stage of mission planning is to find a feasible path toward the desired destination point. The manner in which this is accomplished varies, starting from trivial solutions in uncluttered environments, where operators can choose a set of waypoints and the UAV successfully flies sequentially through each waypoint. However, this is rarely optimal and in most cases impossible in a cluttered environment, no matter how experienced the operator is.

In a cluttered environment, finding an obstacle-free path is often a very complex task. We assume that there exists some form of a priori knowledge of the environment in the form of an occupancy grid. Obstacle-free path planning is an ongoing topic of research, but the most common approach is to use the concept of rapidly exploring random trees (RRT). Although related to the class of probabilistic roadmap planners, where the idea is to explore the configuration space exhaustively before execution, RRTs tend to achieve fast and efficient single-query planning by exploring the environment as little as possible [8].

The following brief explanation of the RRT algorithm is based on its original version from [17], but the idea can be extended to the newer versions of the algorithm used throughout the state of the art. The algorithm starts from its root node configuration qr, which for a UAV is a point in 3D space, qr = [x, y, z, φ]T. The goal is naturally to reach the goal point qg. In order to do so, the algorithm performs a query from the current set of reachable configurations (i.e., nodes) to find a random configuration qrand that is within reach (i.e., within the maximum step size Δqmax). Once a random configuration is chosen, we search for the nearest tree node. In the beginning, this node is equal to the root node configuration qr. We proceed with an exhaustive search in single-step δq < Δqmax increments from the nearest tree node toward the random point, until reaching an obstacle. The last configuration reached becomes the new tree node. The random search query is applied several times to the existing nodes,


Fig. 7.3 A single step of the RRT algorithm in building the tree of configuration nodes toward the goal point

and the algorithm stops once we find a node/configuration in the vicinity of the goal configuration q_g. This procedure is depicted in Fig. 7.3. The tree resulting from the algorithm is a directed graph with no cycles, in which nodes can have several outgoing edges, resembling a directory structure in a computer system. Each node can in turn have several other nodes directed from it, referred to as its children. All nodes, excluding the root, have one and only one incoming edge (i.e., parent).
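To make the procedure concrete, the sketch below grows a single RRT over 3D configurations with NumPy. The occupancy test is_occupied, the workspace bounds, and the step sizes are placeholder assumptions for illustration and not part of the original formulation in [17].

```python
import numpy as np

def rrt(q_root, q_goal, is_occupied, bounds, dq_max=1.0, dq=0.1,
        goal_tol=0.5, max_iter=5000, seed=0):
    """Basic single-query RRT; configurations are 3D points, is_occupied(q) -> bool."""
    rng = np.random.default_rng(seed)
    nodes = [np.asarray(q_root, dtype=float)]
    parents = {0: None}                              # child index -> parent index
    for _ in range(max_iter):
        q_rand = rng.uniform(bounds[:, 0], bounds[:, 1])          # random sample
        near = min(range(len(nodes)),
                   key=lambda i: np.linalg.norm(nodes[i] - q_rand))
        direction = q_rand - nodes[near]
        dist = min(np.linalg.norm(direction), dq_max)             # limit step to dq_max
        if dist < 1e-9:
            continue
        direction /= np.linalg.norm(direction)
        q_new = nodes[near].copy()
        for _ in range(int(dist / dq)):                           # march in dq increments
            q_try = q_new + dq * direction
            if is_occupied(q_try):                                # stop at first obstacle
                break
            q_new = q_try
        if np.linalg.norm(q_new - nodes[near]) < 1e-9:
            continue                                              # no progress, resample
        nodes.append(q_new)
        parents[len(nodes) - 1] = near
        if np.linalg.norm(q_new - np.asarray(q_goal)) < goal_tol: # goal vicinity reached
            path, i = [], len(nodes) - 1
            while i is not None:                                  # backtrack to the root
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None

# Example use in an empty 10 m cube
bounds = np.array([[0.0, 10.0]] * 3)
path = rrt([1.0, 1.0, 1.0], [9.0, 9.0, 5.0], lambda q: False, bounds)
```

The yaw angle φ is omitted here for brevity; including it only changes the dimension of the sampled configuration.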

7.1.1 Trajectory Generation

Knowing obstacle-free steps toward the goal is just the start. In order to reach the goal, we also need to know the exact time at which we are supposed to be at each step along the way. The process of generating this information is known as trajectory planning; a trajectory holds the information of both the position of a node and the time when the robot is supposed to reach it. The trajectory is therefore tightly coupled with the dynamic capability of the robot.

To control the trajectory of the system, one must first prove that the system is differentially flat. To put it simply, for a flat system, the states and the inputs can be expressed as algebraic functions of the flat outputs and their successive time derivatives [37]. More on differentially flat systems can be found in [27]. Although this property is not strictly valid for n-DOF manipulators, the authors in [25] proved that a planar rotorcraft such as the quadrotor is a differentially flat system. Differential flatness implies that there exists a simple mapping function between the desired linear motion in 3D space and the angular speed of the quadrotor on one side, and the speeds of its four propellers on the other. Since we observe linear control based on the linearized model of the rotorcraft UAV,


differential flatness is a special case of controllability. According to [35], in the case of a linear system, controllability is an equivalent property to differential flatness.

To obtain a linearized model, once again we turn to the near-hover condition with the small-angle approximation. At this point, it is important to note that for such an approximation, the body-frame angular velocities p, q, and r are virtually the same as the respective angular velocities ω_x, ω_y, and ω_z expressed in the inertial frame. This is directly derived from (2.18), which becomes the identity matrix under the small-angle approximation. We can select the state-space vector:

\mathbf{x} = \begin{bmatrix} x & \dot{x} & y & \dot{y} & z & \dot{z} & \psi & \omega_x & \theta & \omega_y & \phi & \omega_z \end{bmatrix}^T   (7.1)

and write the linear differential equations for this simplified model:

\ddot{x} = g \cdot \theta, \quad \ddot{y} = -g \cdot \psi, \quad \ddot{z} = \frac{u_1}{m}   (7.2)

\dot{\omega}_x = \frac{u_2}{D_{xx}}, \quad \dot{\omega}_y = \frac{u_3}{D_{yy}}, \quad \dot{\omega}_z = \frac{u_4}{D_{zz}}.   (7.3)

Next, we reduce the state vector x to the portion of the system referring only to the x-axis dynamics

\mathbf{x} = \begin{bmatrix} x & \dot{x} & \theta & \omega_y \end{bmatrix}^T   (7.4)

and write its corresponding dynamics in state-space representation

\dot{\mathbf{x}} = \mathbf{A}\mathbf{x} + \mathbf{B} u_3, \qquad y = \mathbf{C}\mathbf{x},   (7.5)

where the specific matrices are written in the following equation:

\mathbf{A} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & g & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ \frac{1}{D_{xx}} \end{bmatrix}, \quad \mathbf{C} = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}.   (7.6)

To demonstrate that this system is controllable, we build the controllability matrix [21]

\mathbf{W} = \begin{bmatrix} \mathbf{B} & \mathbf{A}\mathbf{B} & \mathbf{A}^2\mathbf{B} & \mathbf{A}^3\mathbf{B} \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 & \frac{g}{D_{xx}} \\ 0 & 0 & \frac{g}{D_{xx}} & 0 \\ 0 & \frac{1}{D_{xx}} & 0 & 0 \\ \frac{1}{D_{xx}} & 0 & 0 & 0 \end{bmatrix}   (7.7)

and demonstrate that its rank, rank(W) = 4, is equal to the size of the state vector x, and thus the system is controllable. Furthermore, we can show that the linearized system is


observable as well. Finally, we can extend the procedure outlined above to prove that the same holds for the entire dynamic system.
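As a quick numerical cross-check of (7.6) and (7.7), the short sketch below builds the controllability matrix with NumPy and verifies its rank; the numerical value of D_xx is an arbitrary placeholder used only for illustration.

```python
import numpy as np

g, Dxx = 9.81, 0.03                       # Dxx value is illustrative only
A = np.array([[0, 1, 0, 0],
              [0, 0, g, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
B = np.array([[0.0], [0.0], [0.0], [1.0 / Dxx]])

# Controllability matrix W = [B, AB, A^2 B, A^3 B], cf. (7.7)
W = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(4)])
print(np.linalg.matrix_rank(W))           # prints 4 -> the subsystem is controllable
```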

Returning to the starting point, trajectory planning involves generating solutions that take the system from an initial condition to a desired goal. When a nonlinear system is differentially flat, or a linear system is controllable, one simply needs to generate functions of time that satisfy the initial and final conditions and apply the corresponding control input to drive the system. For aerial robots, this means one can plan the trajectory in 3D Cartesian space and apply it to the control inputs in order to command the robot. We demonstrate this on the following example problem.

Problem 7.1 Solve the high-speed trajectory design problem for the following 4 waypoints. Each waypoint is published every 60 s, which is repeated 4 times. Average settling time (time from the moment the new waypoint is published until the MAV is stabilized within 0.1 m of the new goal), energy consumed, and position RMS error are measured. We assume that the controller designed for the vehicle is capable of executing the designed trajectory.

In this task, we put the effort on reaching a settling time limit of 5 s. The distance between two consecutive waypoints is set at 21.5 m (20 m in the x or y direction and 8 m in the z direction), which implies a required average speed of 4.3 m/s. To achieve that, we would have to tune the PID controllers (both inner and outer loop) to be able to perform more aggressive maneuvers. One would also need to modify the controller structure by adding feed-forward signals in the position, velocity, and acceleration (roll and pitch) controllers.

To design a feasible trajectory, we choose a fifth-degree polynomial, which allows us to impose zero velocity and zero acceleration at both ends of the motion:

p_{ref}(t) = a_0 + a_1 t + a_2 t^2 + a_3 t^3 + a_4 t^4 + a_5 t^5,   (7.8)

where p_{ref} is the desired position (x, y, or z) and t is the elapsed time. We now move forward and take consecutive derivatives of the trajectory polynomial

\dot{p}_{ref}(t) = a_1 + 2a_2 t + 3a_3 t^2 + 4a_4 t^3 + 5a_5 t^4   (7.9)
\ddot{p}_{ref}(t) = 2a_2 + 6a_3 t + 12a_4 t^2 + 20a_5 t^3
\dddot{p}_{ref}(t) = 6a_3 + 24a_4 t + 60a_5 t^2
\ddddot{p}_{ref}(t) = 24a_4 + 120a_5 t.

We start designing the trajectory by determining the constraints on our polynomial regarding its position, speed, and acceleration at given times. For instance, at time t = 0 the UAV starts from the initial position, and therefore:

p_{ref}(0) = a_0 = \begin{bmatrix} x_0 & y_0 & z_0 \end{bmatrix}^T.   (7.10)

Next, we aim for a smooth start, so we constrain the trajectory to start with zero speed and acceleration:


\dot{p}_{ref}(0) = a_1 = 0, \qquad \ddot{p}_{ref}(0) = a_2 = 0.   (7.11)

The final three parameters, a_3, a_4, and a_5, are determined through the constraints at the trajectory end point. To that end, we write the same constraints on position, speed, and acceleration. The trajectory has to end at the given waypoint, p_{ref}(T) = [x_e, y_e, z_e]^T, at time T, where the duration T is a parameter subject to some form of optimization. However, for the purpose of this task, we can set T = 4 s in order to achieve trajectory completion within 5 s

p_{ref}(T) = a_0 + a_3 T^3 + a_4 T^4 + a_5 T^5   (7.12)
\dot{p}_{ref}(T) = 3a_3 T^2 + 4a_4 T^3 + 5a_5 T^4 = 0
\ddot{p}_{ref}(T) = 6a_3 T + 12a_4 T^2 + 20a_5 T^3 = 0.

Solving this set of linear algebraic equations for a_3, a_4, and a_5 is straightforward. If we assign the value Δp = [x_0 − x_e, y_0 − y_e, z_0 − z_e]^T, we can write the solution:

a_3 = -\Delta p \cdot 10/T^3,   (7.13)
a_4 = \Delta p \cdot 15/T^4,   (7.14)
a_5 = -\Delta p \cdot 6/T^5.   (7.15)

It is important to note that the shape of the trajectory (i.e., the coefficients a_i) depends heavily on the given waypoints; each waypoint pair yields a different solution p_{ref}(t). Moreover, we emphasized the importance of feed-forward signals in trajectory following. To enable this, once the coefficients a_i are designed, one needs to calculate the trajectory derivatives \dot{p}_{ref}(t) and \ddot{p}_{ref}(t) and provide them as feed-forward inputs to the controllers, as shown in Fig. 7.4. At the same time, the trajectory p_{ref}(t) is applied directly as the control reference, taking into account the importance of synchronization (i.e., the time stamp t).
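A compact sketch of this quintic design is given below: for a start point, an end point, and a duration T it returns the coefficients of (7.8) and evaluates the position, velocity, and acceleration references that feed the controller of Fig. 7.4. The waypoint values are illustrative, not the benchmark values of Problem 7.1.

```python
import numpy as np

def quintic_coeffs(p0, pe, T):
    """Coefficients a0..a5 of (7.8) with zero boundary velocity and acceleration."""
    p0, pe = np.asarray(p0, float), np.asarray(pe, float)
    d = pe - p0                                    # signed displacement per axis
    a = np.zeros((6,) + p0.shape)
    a[0] = p0
    a[3] = 10.0 * d / T**3
    a[4] = -15.0 * d / T**4
    a[5] = 6.0 * d / T**5
    return a

def references(a, t):
    """Position, velocity, and acceleration references at time t, cf. (7.8)-(7.9)."""
    pos = sum(a[k] * t**k for k in range(6))
    vel = sum(k * a[k] * t**(k - 1) for k in range(1, 6))
    acc = sum(k * (k - 1) * a[k] * t**(k - 2) for k in range(2, 6))
    return pos, vel, acc

a = quintic_coeffs([0.0, 0.0, 2.0], [20.0, 0.0, 10.0], T=4.0)
for t in (0.0, 2.0, 4.0):
    print(t, references(a, t))   # velocity and acceleration vanish at t = 0 and t = T
```

Note that the coefficients are written here in terms of the displacement p_e − p_0, which is simply −Δp from (7.13)–(7.15).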

Finally, Fig. 7.5 shows the results obtained in simulation when the proposed trajectory is fed to the underlying control system (i.e., as feed-forward).

Fig. 7.4 Position control loop, showing the selected x-axis position control, with reference and feed-forward signals coming directly from the trajectory planning algorithm


Fig. 7.5 MAV planned and executed trajectories: (a) x position, (b) x velocity, (c) y position, (d) y velocity, (e) z position, (f) z velocity, each panel comparing the planned and executed responses over time. ©2015 IEEE. Reprinted, with permission, from [29]


7.2 Obstacle-Free Trajectory Planning

In practice, path planning algorithms return a series of waypoints that ensure a line-of-sight, obstacle-free path. The points of the path then have to be connected with a continuous interpolation, and the simplest approach is to use either a single high-order polynomial function or a series of splines. Since high-order polynomial fits tend to become numerically unstable, we limit our discussion to spline interpolation.

The topics covered so far concern the so-called global planner, which relies on a priori knowledge of the area map. However, as the vehicle traverses the map, it encounters new or dynamic obstacles and has to react. The inner planner designed for this purpose is called the local planner. It performs a faster and less exhaustive search policy in order to directly avoid the new obstacle.

The local planner executes the trajectories fed to it by the global planner until it finds a new obstacle that forces it to adjust its course. At that point, the algorithm can do one of the following:

• Calculate a trajectory leading to the closest point of the globally planned trajectory;
• Calculate a new trajectory leading to the goal while avoiding the new obstacle;
• Stop the vehicle and call the global planner to plan a new feasible path toward the goal.

7.2.1 Local Planner

A straightforward implementation of the local planner is to mathematically derive a repulsive force acting on the vehicle when it is in the vicinity of an obstacle. The closer the vehicle is to the obstacle, the stronger the force pushing it away. The map representation and the choice of the potential cost function are the central points of the described algorithm.

To describe the algorithm in action, first we imagine a point in configuration space q. If some point x in the vicinity of q is occupied, we calculate its repulsive force as:

\mathbf{F} = -\frac{\mathbf{x} - \mathbf{q}}{\|\mathbf{x} - \mathbf{q}\|} \cdot e^{-f(\|\mathbf{x} - \mathbf{q}\|)},   (7.16)

where the scalar function f is chosen such that it increases with the distance between the occupied point and the current position of the UAV.

A common approach is to model f after a normal distribution, in which case f boils down to:

f(\|\mathbf{q} - \mathbf{x}\|) = \begin{cases} 1 & \text{if } \|\mathbf{q} - \mathbf{x}\| < \mu \\ \dfrac{(\|\mathbf{q} - \mathbf{x}\| - \mu)^2}{2\sigma^2} & \text{otherwise.} \end{cases}   (7.17)



Fig. 7.6 A local planner in action when facing an obstacle. At the current quadrotor configuration q, its surroundings x are probed so that one can calculate (7.16) the repulsive force F that does not allow the quadrotor to approach the obstacle. The closer the obstacle is to the quadrotor, the bigger the repulsive force that steers it away from the obstacle. The minimum search radius r_min = μ is the critical distance at which the UAV should stop, while r_max is the distance beyond which no obstacle should influence the UAV (i.e., r_max = 5σ)

This approach allows us to define a minimum distance μ within which the obstacle exerts the maximum force, and to shape the distribution with the coefficient σ. When the distance beyond the minimum distance grows to three to five σ, the reactive force practically vanishes and the UAV can move freely (Fig. 7.6).
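The sketch below evaluates the repulsive force (7.16)–(7.17) for a small set of occupied points around the current configuration and sums the contributions. The list of occupied points and the values of μ and σ are illustrative assumptions.

```python
import numpy as np

def repulsive_force(q, occupied_points, mu=0.5, sigma=1.0):
    """Sum of (7.16) over nearby occupied points, with f modeled as in (7.17)."""
    q = np.asarray(q, float)
    F = np.zeros_like(q)
    for x in np.atleast_2d(np.asarray(occupied_points, float)):
        d = np.linalg.norm(x - q)
        if d < 1e-9 or d > mu + 5.0 * sigma:      # far-away points contribute ~ nothing
            continue
        f = 1.0 if d < mu else (d - mu) ** 2 / (2.0 * sigma ** 2)
        F += -(x - q) / d * np.exp(-f)            # unit vector pointing away from x
    return F

# Obstacle 1 m ahead and another 2 m to the side of the current configuration
print(repulsive_force([0.0, 0.0, 1.0], [[1.0, 0.0, 1.0], [0.0, 2.0, 1.0]]))
```

The resulting force can be added to the velocity or acceleration command of the vehicle, scaled by a gain chosen during tuning.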

The local planner receives the commands to move the vehicle either from a global planner or from operator inputs. It then uses the local occupancy information to adjust these commands in order to steer clear of obstacles the global planner or the operator was not aware of. For a global planner, this might happen in the presence of dynamic obstacles. Sometimes the global map is not accurate enough, assumes unknown space to be free, or is deliberately made sparse to plan faster. In all these situations, the local planner is assigned to safely steer clear.

The local planner is particularly useful when exploring an area for the first time. In such a scenario, the entire space is treated as unknown, and as the UAV gathers more information about the environment, the local planner pushes it away from obstacles. An operator can then drive the vehicle safely, relying on the local planner to override their commands and navigate through the environment.

7.2.2 Global Planner

The global planner is an integral part of every mission planning and UAV control system. Here, we assume that the map of the environment is known a priori and that the vehicle


knows its exact position. In practice, this is difficult to achieve without the use of self-localization and map-building algorithms, like the ones mentioned earlier in the book. However, with plenty of literature available on this topic, we leave the reader to explore this part of the aerial robotic system. There are three stages of the proposed global plan solution: obstacle-free path generation, polynomial trajectory fitting, and ensuring that the trajectory remains obstacle-free.

Like most state-of-the-art solutions, we base our approach on the differential flatness of the quadrotor model, which was demonstrated in [25]. The proposed solution, in line with [34], is to utilize the RRT* algorithm to find a collision-free path through the environment based on vehicle kinematics. The result of this first stage is a set of waypoints q_i ∈ Q interconnected with a piecewise straight-line obstacle-free path λ_i:

\lambda_i(t) = t \cdot (q_{i+1} - q_i) + q_i, \quad i \in \langle 1, \|Q\| \rangle, \quad t \in \langle 0, 1 \rangle.   (7.18)

Even though the work of Mellinger et al. [25] has shown that minimum-snap trajectories can be successfully applied to quadrotor motion, the solution proposed herein takes a different path rather than using quadratic programming, which goes beyond the scope of this book, to find minimum-snap trajectories. We base the proposed solution on the piecewise straight-line path planned through RRT*, used as an optimal reference for the planned trajectories. This is in line with the assumption that the straight-line path is the shortest path between two points; therefore, it is vital that the executed trajectory is as similar as possible to this initial plan. We continue by replacing the straight-line paths λ_i(t) with continuous splines π_i(t), represented with kth-order polynomial functions:

\pi_i(t) = \sum_{j=0}^{k} a_{i,j}\, t^j,   (7.19)

taking into account their respective derivatives of lth order

\pi_i^{(l)}(t) = \sum_{j=l}^{k} \left[ \prod_{m=0}^{l-1} (j - m) \right] a_{i,j}\, t^{j-l}, \quad l \ge 1.   (7.20)

There are several constraints we have to impose on the proposed polynomial function π_i(t). First, its value at the beginning (t = 0) and at the end (t = T_i) must pass through waypoints i and i + 1:

\pi_i(t)\big|_{t=0} = a_{i,0} = q_i,   (7.21)

\pi_i(t)\big|_{t=T_i} = \sum_{j=0}^{k} a_{i,j}\, T_i^{\,j} = q_{i+1}.   (7.22)


Similar constraints need to be applied to the derivatives of the trajectory. The most important requirement is the continuity of the trajectory. This implies that the end speed of trajectory segment i needs to match the start speed of segment i + 1:

\pi_i^{(l)}(t)\big|_{t=T_i} = \pi_{i+1}^{(l)}(t)\big|_{t=0}.   (7.23)

The same is then applied to all higher derivatives (l > 1). The derivatives at the start and end of the entire path are constrained so that the start and end are smooth, meaning the first derivatives of the boundary splines need to be zero:

\pi_0^{(1)}(t)\big|_{t=0} = 0, \qquad \pi_{n-1}^{(1)}(t)\big|_{t=T_{n-1}} = 0.   (7.24)
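When the constraints (7.21)–(7.24) are stacked into a linear system for the spline coefficients, every row is simply the vector of basis values of t^j, or of its l-th derivative as written in (7.20), evaluated at a segment boundary. A small helper that produces such a row is sketched below; the segment order k = 5 is an assumption matching the quintic used earlier.

```python
import numpy as np

def basis_row(k, l, t):
    """Row of d^l/dt^l [1, t, ..., t^k] evaluated at time t, cf. (7.20)."""
    row = np.zeros(k + 1)
    for j in range(l, k + 1):
        coeff = 1.0
        for m in range(l):                # product (j)(j-1)...(j-l+1)
            coeff *= (j - m)
        row[j] = coeff * t ** (j - l)
    return row

# Velocity continuity (7.23) between segment i (duration Ti) and segment i+1:
# basis_row(5, 1, Ti) . a_i  must equal  basis_row(5, 1, 0.0) . a_{i+1}
Ti = 2.0
print(basis_row(5, 1, Ti))
print(basis_row(5, 1, 0.0))
```

Collecting one such row per constraint and solving the resulting linear system yields the coefficients a_{i,j} of all segments at once.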

In order to fit the trajectory within a radial bound ρ of the initial piecewise straight-line path, the second stage of trajectory planning (i.e., polynomial trajectory fitting) fits the polynomial trajectories using an iterative procedure, which ensures that the mission executes according to the plan. By modifying the planning algorithm from [32], which guarantees not only the continuity of jerk, acceleration, and velocity but also the continuity of split at all trajectory segments, we ensure the dynamic smoothness of the trajectory. The proposed method optimizes each segment's duration T_i in order to achieve the maximum allowed acceleration, speed, or jerk on at least one segment of the trajectory. To account for the desired accuracy, we apply an optimization strategy that minimizes the Hausdorff distance between the polynomial trajectory π_i(t) and the piecewise straight-line path λ_i(t) on each segment:

h(\pi, \lambda) = \max_{t \in \langle 0,1 \rangle} \; \min_{\tau \in \langle 0,1 \rangle} \|\pi(t) - \lambda(\tau)\| \le \rho.   (7.25)

Even though the Fréchet distance does a better job of measuring the similarity between curves, the Hausdorff distance is applied without loss of accuracy, since a direct analytical solution exists for the straight line:

h(\pi, \lambda) = \max_{t \in \langle 0,1 \rangle} \min \|\pi(t) - \lambda(\tau)\|, \quad \tau \in \left\{ 0, \;\; \frac{(\pi(t) - q_i) \cdot (q_{i+1} - q_i)}{(q_{i+1} - q_i) \cdot (q_{i+1} - q_i)}, \;\; 1 \right\}   (7.26)

Providing an analytical solution to the problem ensures the numerical stability and speed of the algorithm. If the threshold ρ is breached at segment i, either at the beginning τ = 0, at the end τ = 1, or at the point of extremum, an additional point is added in the middle of the segment (i.e., q_M = (q_i + q_{i+1})/2).

The final stage of path planning is to ensure that none of the segments' trajectories collide with the environment at any point. In case there is a collision, an additional point q_C is added on the original collision-free straight-line path at the Hausdorff distance from the original point of collision, and the whole procedure returns to the previous stage until an obstacle-free trajectory as close as possible to the original straight-line path is generated. An example trajectory is depicted in Fig. 7.7, together with the aforementioned trajectory planning steps.
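A sketch of the per-segment check (7.25)–(7.26) is given below: the polynomial is sampled in time, each sample is projected onto the straight-line segment with the analytic τ of (7.26) clamped to [0, 1], and the largest point-to-segment distance is compared with ρ. The sampling density and the example segment are assumptions for illustration.

```python
import numpy as np

def point_to_segment(p, qi, qi1):
    """min over tau of ||p - lambda(tau)||, with tau clamped to [0, 1], cf. (7.26)."""
    d = qi1 - qi
    tau = np.clip(np.dot(p - qi, d) / np.dot(d, d), 0.0, 1.0)
    return np.linalg.norm(p - (qi + tau * d))

def hausdorff_to_segment(coeffs, T, qi, qi1, samples=100):
    """Largest sampled distance of the spline pi(t) from the segment, cf. (7.25)."""
    worst = 0.0
    for t in np.linspace(0.0, T, samples):
        p = sum(a * t ** j for j, a in enumerate(coeffs))     # pi(t) from (7.19)
        worst = max(worst, point_to_segment(p, qi, qi1))
    return worst

qi, qi1, T = np.array([0.0, 0.0, 1.0]), np.array([4.0, 0.0, 1.0]), 4.0
coeffs = [qi, np.zeros(3), np.zeros(3),
          10 * (qi1 - qi) / T**3, -15 * (qi1 - qi) / T**4, 6 * (qi1 - qi) / T**5]
print(hausdorff_to_segment(coeffs, T, qi, qi1))   # a straight quintic: distance ~ 0
```

If the returned value exceeds ρ, the segment is split at its midpoint q_M and the fitting stage is repeated, as described above.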


Fig. 7.7 Example of a planned trajectory: waypoints connected by a piecewise straight-line obstacle-free path in black, a transparent white cylinder showing the ρ threshold, and a red spline denoting the final fitted obstacle-free continuous-split trajectory. Obstacles are represented through a 3D occupancy grid, stored in a standard OctoMap format [11]

7.3 Vision-Guided Aerial Manipulation

In this section, we continue on the topic of localization, describing in detail the vision algorithms used to locate the target of the manipulation task. More precisely, we focus our discussion on two example algorithms used to detect a valve and a designated pickup object. In both cases, the targets are tracked and their positions transformed into "world" coordinates, represented by the origin of the motion capture coordinate system. In outdoor applications, the motion capture system can be replaced with a GPS signal or some other, preferably more accurate, localization technology, for instance differential GPS (D-GPS).

To test the proposed algorithms, we propose the control system depicted in Fig. 7.8, comprised of two cascaded control loops, one for position and the other for velocity. It relies on feedback coming from a motion capture system, which represents an ideal position/velocity feedback system. The cascade control strategy allows us to tune separately, first, the velocity of the aircraft and then its position.
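A minimal single-axis sketch of this cascade structure is shown below: an outer position loop produces the velocity reference for an inner velocity loop, with room for the trajectory feed-forward of Fig. 7.4. The gains, saturation limits, and sample time are illustrative assumptions and not the tuning used on the experimental vehicle.

```python
class PI:
    """Simple PI controller with output saturation."""
    def __init__(self, kp, ki, limit):
        self.kp, self.ki, self.limit, self.integral = kp, ki, limit, 0.0

    def step(self, error, dt):
        self.integral += self.ki * error * dt
        u = self.kp * error + self.integral
        return max(-self.limit, min(self.limit, u))

class CascadeAxis:
    """Outer position loop -> velocity reference -> inner velocity loop -> attitude command."""
    def __init__(self):
        self.pos_loop = PI(kp=1.2, ki=0.0, limit=3.0)     # velocity reference limit [m/s]
        self.vel_loop = PI(kp=0.8, ki=0.2, limit=0.4)     # attitude command limit [rad]

    def step(self, x_ref, x, v, dt, v_ff=0.0):
        v_ref = self.pos_loop.step(x_ref - x, dt) + v_ff  # feed-forward from the trajectory
        return self.vel_loop.step(v_ref - v, dt)

ctrl = CascadeAxis()
print(ctrl.step(x_ref=1.0, x=0.0, v=0.0, dt=0.01))
```

Tuning proceeds as described above: the inner velocity loop is tuned first, and only then the outer position loop.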

7.3.1 Autonomous Pick and Place

In this section, we deal with objects with very few distinctive features, which therefore require specific markers in order to be recognized by vision-based software. In these situations,


Fig. 7.8 Cascade speed–position control loop, based on motion capture position feedback: the position PID loop provides the reference for the speed control loop, with the OptiTrack motion capture system and sensor fusion closing the feedback path and the autonomous target localization algorithm supplying the position reference

the designated target has to be marked so that it can be tracked. One such scenario is demonstrated in [28], where an object designated for pickup is marked with an ARToolKit tag [16], which enables a UAV to track it and navigate an unmanned marine vehicle to pick it up. The same scenario is repeated for our proposed benchmark pick-and-place aerial manipulation task, in which a box-shaped object is pinpointed for pickup.

ARToolKit started as a software library for building augmented reality (AR) applications, which is still its main purpose [16]. However, in recent years it has been widely used in the robotics community, mainly because its tracking algorithm is very robust, capable of detecting both the position and attitude of a body, and allows for multiple-target tracking by giving the markers specific IDs. Applications can be found in marine robotics [12], mobile robotics [10, 33], industrial robotics [31], and, as previously mentioned, in aerial robotics [28], where several other research groups have demonstrated successful use-case scenarios [6, 19]. Square and circular tags seem to be the most often used markers, because these geometric primitives are easily detectable in images. A marker's ID depends on the selected fiducials, which can be any of the following: (1) barcodes, (2) maxicodes, (3) cybercodes, (4) tricodes, (5) BCH markers. This paragraph aims to cover only the very basic concepts utilized for successful tracking and does not go into details.

7.3.1.1 Position and Pose Estimation of the Markers

In the initial steps of image analysis, the acquired grayscale images are run through edge detection and threshold filters. After this initial step, the input image is scanned to find and select square regions. These regions do not necessarily have to be limited to rectangles; rather, their outline contour has to be fitted within four line segments.


Fig. 7.9 AR marker detection using a pinhole-model camera displaced from the body center by p_C^B and rotated by R_C^B. The image plane is shown in the focus of the camera. The figure shows the four lines encompassing the marker fiducial; these four lines, with their respective intersection vertices, are later utilized for successful 6 DOF localization of the marker

The parameters of these four line segments and the exact coordinates of the four vertices at the intersection points of the encompassing lines are stored for later processing. The subimage within the selected region is compared, via template matching, with the a priori known fiducial, which serves as the specific ID of the marker. Once the marker's ID is correctly identified, its position and orientation need to be measured.

In order to perform a 6 DOF marker localization, the projections of the two sides of a square marker in the image frame are observed. Each of the sides forms a line in the image

ai x + bi y + ci = 0, i ∈ 〈1, 4〉 (7.27)

as can be observed in Fig. 7.9. Two unit direction vectors in the image coordinate frame, u_1 and u_2, can be obtained from the two sets of two parallel sides of the square (i.e., (7.27)). Once the projection matrix of the camera is calibrated, this is a rather straightforward process. Given that the two vectors are observed in the world coordinate frame, they should be perpendicular to each other. However, due to image processing uncertainty, this is often not the case. In order to alleviate image processing errors, the projection matrix is optimized by solving the mapping problem for the intersection vertices. This process is repeated several times, until the overall error deviation is small enough.


7.3.2 Transforming Camera-Detected Pose to Global Pose

The proposed vision-based detection algorithm provides pose–position measurements of the tracked object in the camera coordinate system. As shown at the beginning of this section, the controller is devised in the global (i.e., motion capture origin) coordinate system. In order for the algorithm to work, one needs to determine the extrinsic parameters of the camera. Since the motion capture system relies on the position of the marker, the exact transformation pipeline is as follows:

\mathbf{p}^T_O = \mathbf{T}^M_O \, \mathbf{T}^C_M \, \mathbf{p}^T_C.   (7.28)

The previous equation clearly states that if one knows the position of the target in the motion capture coordinate system, p^T_O (which one can easily measure), one can calculate its position in the camera coordinate system, p^T_C. The solution is acquired when one knows the transformation matrices between the motion capture origin and the marker, and between the marker and the camera, T^M_O and T^C_M, respectively.

The motion capture software is in charge of calculating the motion-capture-origin(O)-to-marker(M) transformation T^M_O. At the same time, the previously described vision-based tracking algorithm can detect the position of the target with respect to the camera, p^T_C. Since we can measure the position of the marker in the motion capture coordinate system, the only unknown parameter of the equation is the transformation matrix T^C_M:

\mathbf{T}^C_M = \begin{bmatrix} \mathbf{Rot}(\phi, \theta, \psi) & \mathbf{p}(x, y, z) \\ \mathbf{0} & 1 \end{bmatrix},   (7.29)

which depends on the displacement p(x, y, z) and the rotation Rot(φ, θ, ψ) of the camera with respect to the body marker. Initially, both position and orientation can be roughly measured, but for the exact values one needs to optimize this transformation matrix over a set of experimentally obtained values. The procedure is straightforward using a simplex optimization approach [20]. Experimental values are obtained from a single aircraft flight, during which the UAS takes measurement samples of the target. Afterwards, the experimental values are compared so that the following optimization problem can be solved:

\min_{\mathbf{T}^C_M} \left( \left\| \mathbf{p}^T_O - \mathbf{T}^M_O \, \mathbf{T}^C_M \, \mathbf{p}^T_C \right\| \right).   (7.30)
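A sketch of this calibration step using SciPy's Nelder–Mead simplex routine (the method analyzed in [20]) is shown below: the unknown camera-to-marker transform is parameterized by a translation and roll–pitch–yaw angles, and the residual of (7.30) is summed over the recorded samples. The sample arrays and the chosen parameterization are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def transform(params):
    """Homogeneous T^C_M built from translation (x, y, z) and roll-pitch-yaw, cf. (7.29)."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("xyz", params[3:]).as_matrix()
    T[:3, 3] = params[:3]
    return T

def residual(params, T_OM_list, p_C_list, p_O_list):
    """Sum of ||p^T_O - T^M_O T^C_M p^T_C|| over all recorded samples, cf. (7.30)."""
    T_CM = transform(params)
    err = 0.0
    for T_OM, p_C, p_O in zip(T_OM_list, p_C_list, p_O_list):
        p_hat = T_OM @ T_CM @ np.append(p_C, 1.0)   # predicted target position in frame O
        err += np.linalg.norm(p_hat[:3] - p_O)
    return err

# Illustrative data: marker pose from motion capture (T^M_O), camera detection, target position
T_OM_list = [np.eye(4)]
p_C_list = [np.array([0.2, 0.0, 1.0])]
p_O_list = [np.array([0.3, 0.1, 1.0])]

res = minimize(residual, x0=np.zeros(6), args=(T_OM_list, p_C_list, p_O_list),
               method="Nelder-Mead")
print(res.x)   # estimated translation and orientation of the camera w.r.t. the marker
```

In practice many samples spread over the flight are needed for the minimum to be well defined.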

7.3.2.1 Valve Detection Algorithm

Previously, we tackled the problem of tracking featureless targets using passive markers. Here, we go a step further and show how specific features of the target can be utilized in order to track it. More precisely, we focus our attention on the example problem of finding and perching onto a valve.

The work presented in this paragraph builds upon the ideas from [5, 24, 39], where the authors proposed using various circular landmarks to localize an aerial vehicle.


Fig. 7.10 Valve detection using a pinhole-model camera displaced from the body center by p_C^B and rotated by R_C^B. The image plane is shown in the focus of the camera. The figure shows how a circular object (i.e., the valve) is projected onto an ellipse on the image plane

This technique is ideal for handwheel valve knobs due to their circular shape. Using the results from [15], the data collected from the ellipse's shape can be used to calculate the exact pose and position of the valve.

The idea behind the algorithm is to apply a 3-stage filter:

• 1st stage: Use color filtering based on the different valve color specifications.
• 2nd stage: Search for ellipses in the binary image.
• 3rd stage: Obtain the position and orientation of the valve using the findings from [15].

Once a desired threshold for the spokes and the outer diameter of the valve is reached, a good valve candidate is chosen. Detecting circular shapes of known radius R in a 3D environment by observing their elliptic perspective projection has been tackled by many researchers in different applications [5, 24, 39]. The main approach used for this problem is based on a projective linear transformation, namely the collineation of a circle [15], observing the camera with the pinhole model approach shown in Fig. 7.10. Because the camera's field of view (FOV) limits a full view of the valve in all practical applications, it is most likely that the camera needs to be placed away from the body center by p_C^B and rotated by R_C^B. It is this rotation that transforms a


regular circular-shaped valve into the projected ellipse. If the angle between the valve and the camera were π/2, the image of the valve would be a perfect circle.

To detect the ellipse parameters, the authors in [7] proposed an algorithm for direct least-squares fitting of ellipses. The algorithm's implementation returns a canonical representation of the ellipse (7.31), with its center displaced from the picture coordinate frame origin by (x_0, y_0) and rotated by the angle φ, with w and h being its respective width and height.

\frac{x^2}{w^2} + \frac{y^2}{h^2} = 1.   (7.31)

Any given ellipse, written in general quadratic form:

Ax^2 + 2Bxy + Cy^2 + 2Dx + 2Ey + F = 0,   (7.32)

can be derived from its rotated and displaced canonical representation through the following set of conic transformation equations:

A = \frac{\cos^2\phi}{w^2} + \frac{\sin^2\phi}{h^2}

B = \left( \frac{1}{h^2} - \frac{1}{w^2} \right) \cos\phi \sin\phi

C = \frac{\cos^2\phi}{h^2} + \frac{\sin^2\phi}{w^2}

D = x_0 \frac{\cos^2\phi}{w^2} + \left( \frac{1}{h^2} - \frac{1}{w^2} \right) \cos\phi \sin\phi \, y_0 + x_0 \frac{\sin^2\phi}{h^2}

E = y_0 \frac{\cos^2\phi}{h^2} + \left( \frac{1}{h^2} - \frac{1}{w^2} \right) \cos\phi \sin\phi \, x_0 + y_0 \frac{\sin^2\phi}{w^2}

F = \frac{(h^2 + w^2)(x_0^2 + y_0^2) + (h^2 - w^2)\left[ (x_0^2 - y_0^2)\cos 2\phi - 2 x_0 y_0 \sin 2\phi \right]}{2 h^2 w^2} - 1.   (7.33)

Proof First, we write the well-known 2D ellipse quadratic equation (7.32) in its oblique elliptical cone matrix representation (i.e., quadratic form Q) [39]:

\begin{bmatrix} x & y & 1 \end{bmatrix} \mathbf{Q} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \mathbf{X}^T \begin{bmatrix} A & B & \frac{D}{f} \\ B & C & \frac{E}{f} \\ \frac{D}{f} & \frac{E}{f} & \frac{F}{f^2} \end{bmatrix} \mathbf{X},   (7.34)

where f stands for the focal length of the camera and A–F denote the ellipse parameters. Next, we apply the necessary transformations in two steps:


• Rotate the canonical ellipse (7.31) by the angle φ:

\mathbf{R} = \begin{bmatrix} \cos\phi & \sin\phi & 0 \\ -\sin\phi & \cos\phi & 0 \\ 0 & 0 & 1 \end{bmatrix}.

• Translate the ellipse's origin by (x_0, y_0):

\mathbf{T} = \begin{bmatrix} 1 & 0 & x_0 \\ 0 & 1 & y_0 \\ 0 & 0 & 1 \end{bmatrix}.

The transformations are applied to the ellipse coordinates (i.e., x = TRX), so that the original, easy-to-calculate matrix representation of the canonical ellipse,

\mathbf{x}^T \mathbf{q} \mathbf{x} = \mathbf{x}^T \begin{bmatrix} \frac{1}{w^2} & 0 & 0 \\ 0 & \frac{1}{h^2} & 0 \\ 0 & 0 & -1 \end{bmatrix} \mathbf{x},

can be transformed to its complete elliptical cone representation (7.32) in the following manner:

\mathbf{X}^T \begin{bmatrix} A & B & \frac{D}{f} \\ B & C & \frac{E}{f} \\ \frac{D}{f} & \frac{E}{f} & \frac{F}{f^2} \end{bmatrix} \mathbf{X} = \mathbf{X}^T \mathbf{R}^T \mathbf{T}^T \mathbf{q} \mathbf{T} \mathbf{R} \mathbf{X}.   (7.35)

Comparing each element of the matrices Q and R^T T^T q T R yields the transformations from the proposition.
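The construction used in the proof can be checked numerically with a few lines of NumPy, taking f = 1 for the 2D case: build q, R, and T, form R^T T^T q T R as in (7.35), and verify that a point generated on the canonical ellipse and mapped with x = TRX satisfies the resulting quadratic form. The numerical values of w, h, φ, x_0, and y_0 are arbitrary.

```python
import numpy as np

w, h, phi, x0, y0 = 2.0, 1.0, 0.3, 0.5, -0.2       # arbitrary ellipse parameters

c, s = np.cos(phi), np.sin(phi)
R = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])
T = np.array([[1.0, 0.0, x0], [0.0, 1.0, y0], [0.0, 0.0, 1.0]])
q = np.diag([1.0 / w**2, 1.0 / h**2, -1.0])        # canonical ellipse in matrix form

Q = R.T @ T.T @ q @ T @ R                          # transformed conic, cf. (7.35)
A, B, C = Q[0, 0], Q[0, 1], Q[1, 1]
D, E, F = Q[0, 2], Q[1, 2], Q[2, 2]                # conic coefficients as in (7.32), f = 1
print(A, B, C, D, E, F)

# A point on the canonical ellipse, expressed in the image frame via X = (TR)^(-1) x,
# must satisfy X^T Q X = 0 up to rounding error.
t = 0.7
x_can = np.array([w * np.cos(t), h * np.sin(t), 1.0])
X = np.linalg.inv(T @ R) @ x_can
print(X @ Q @ X)                                   # ~ 0
```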

In the proposed setup, a camera is placed beneath the quadrotor to detect the valve. Observing the camera as a pinhole model, with its image plane in front of the focus (i.e., its mirrored mathematical representation) as shown in Fig. 7.10, one can see that the image projection in the camera coordinate frame forms an oblique cone. In this projection, the image plane and the focus of the camera form an elliptical cone whose base is the ellipse (7.32). This is the result of the projective transformation to which each point in the world frame (P_x, P_y, P_z) is subjected in order to be placed on the camera image plane (P*_x, P*_y):

\begin{bmatrix} P^*_x \\ P^*_y \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{F}{P_z} & 0 & 0 \\ 0 & \frac{F}{P_z} & 0 \\ 0 & 0 & \frac{1}{P_z} \end{bmatrix} \begin{bmatrix} P_x \\ P_y \\ P_z \end{bmatrix}.   (7.36)

To simplify the underlying mathematical formalism, one places the origin of the camera coordinate frame (O, X, Y, Z) at its focus, so that the image plane lies at Z = F, where F denotes the focal length. The image coordinate system xy is then aligned with X


Fig. 7.11 Camera coordinate system and its 2D projection

and Y, and the optical axis becomes its Z axis. For every image point P*(x, y), we define a vector m that points from the origin of the camera coordinate system O to P*(x, y). In a similar manner, for a line Ax + By + C = 0 in the image, we define the normal vector n of the plane that contains the line and passes through the center O. An arbitrary point P*(x, y) and a line Ax + By + C = 0, along with the respective coordinate frames, are shown in Fig. 7.11. In the camera coordinate system, we can then write the previously mentioned vectors m and n in the following manner:

\mathbf{m} = \begin{bmatrix} x \\ y \\ F \end{bmatrix}, \qquad \mathbf{n} = \begin{bmatrix} A \\ B \\ C/F \end{bmatrix}.   (7.37)

It is straightforward to show that if the point P* lies on the line Ax + By + C = 0, then the scalar (dot) product m · n = Ax + By + F·(C/F) = Ax + By + C becomes equal to zero.

Having this in mind, we go back to circular objects and their perspective camera projection. Perspective projection is a collineation, more precisely a bijection between two respective spaces such that the images of collinear points are themselves collinear [2]. Without formal proof, we state that rotational and translational transforms also fall under the same group of transformations. What this means is that, when a circular object is rotated and translated with respect to the camera, its projection is, in general, always an ellipse. Working with this property of the camera projection, the authors in [15] proved that for a circular object of known radius R, its normal n^C_V written in the camera frame and its displacement d^C_V from the center of the camera can be calculated using the unit eigenvectors and eigenvalues, v_1–v_3 and λ_1–λ_3, respectively, of the projection's conic representation (7.32). This is accomplished through the following set of equations:


\mathbf{n}^C_V = S_1 \sqrt{\frac{\lambda_1 - \lambda_2}{\lambda_1 - \lambda_3}}\, \mathbf{v}_1 + S_2 \sqrt{\frac{\lambda_2 - \lambda_3}{\lambda_1 - \lambda_3}}\, \mathbf{v}_3   (7.38)

\mathbf{d}^C_V = z_0 \left( S_1 \lambda_3 \sqrt{\frac{\lambda_1 - \lambda_2}{\lambda_1 - \lambda_3}}\, \mathbf{v}_1 + S_2 \lambda_1 \sqrt{\frac{\lambda_2 - \lambda_3}{\lambda_1 - \lambda_3}}\, \mathbf{v}_3 \right)   (7.39)

with z_0 being a radius-dependent factor, z_0 = S_3 R / \sqrt{-\lambda_1 \lambda_3}, and S_1–S_3 undetermined signs. The signs can be determined by restricting ourselves to situations where n^C_V faces the camera and the valve itself is in front of the camera. For a conic to represent an ellipse, one of the eigenvalues must be less than zero. Therefore, the eigenvalues and corresponding eigenvectors are ordered in the following manner: λ_3 < 0 < λ_2 ≤ λ_1. This analysis determines only five degrees of freedom, neglecting only the yaw angle of the valve. Since it is not crucial to know the exact yaw angle of the valve in order to grasp it, the proposed algorithm does not calculate it. However, the yaw angle could be calculated using, for instance, the point where the valve spokes cross, and used as an additional degree of freedom in the attitude estimation. Once we know the position of the valve in the camera coordinate system, it is straightforward to calculate its position in the MM-UAV body frame, keeping in mind the rotation of the camera R^B_C and its displacement from the body center p^B_C:

\mathbf{d}^B_V = \mathbf{p}^B_C + \mathbf{R}^B_C \mathbf{d}^C_V,   (7.40)

\mathbf{n}^B_V = \mathbf{R}^B_C \mathbf{n}^C_V.   (7.41)
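A sketch of this final step is given below, assuming the detected ellipse has already been expressed as the 3 × 3 conic matrix Q of (7.34): the eigendecomposition of Q yields the valve normal and displacement in the camera frame via (7.38)–(7.39), which are then mapped to the body frame with (7.40)–(7.41). The sign choices and the head-on test case are illustrative; in practice S_1–S_3 are resolved by requiring the valve to lie in front of the camera with its normal facing it.

```python
import numpy as np

def valve_pose_from_conic(Q, radius, R_B_C, p_B_C, S1=1.0, S2=1.0, S3=1.0):
    """Normal and displacement of a circle of known radius from its conic matrix Q."""
    lam, V = np.linalg.eigh(Q)                   # eigenvalues in ascending order
    l3, l2, l1 = lam[0], lam[1], lam[2]          # ordering lambda3 < 0 < lambda2 <= lambda1
    v1, v3 = V[:, 2], V[:, 0]                    # eigenvectors of lambda1 and lambda3
    a = np.sqrt((l1 - l2) / (l1 - l3))
    b = np.sqrt((l2 - l3) / (l1 - l3))
    n_C = S1 * a * v1 + S2 * b * v3                       # (7.38)
    z0 = S3 * radius / np.sqrt(-l1 * l3)
    d_C = z0 * (S1 * l3 * a * v1 + S2 * l1 * b * v3)      # (7.39)
    return R_B_C @ n_C, p_B_C + R_B_C @ d_C               # (7.41), (7.40)

# Head-on check: a valve of radius 0.2 m seen 1.5 m straight ahead along the optical axis
R_valve, dist = 0.2, 1.5
Q = np.diag([1.0, 1.0, -(R_valve / dist) ** 2])   # cone x^2 + y^2 = (R/d)^2 z^2
n, d = valve_pose_from_conic(Q, R_valve, np.eye(3), np.zeros(3))
print(n, d)        # normal ~ [0, 0, 1], displacement ~ [0, 0, 1.5]
```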

References

1. Arbanas B, Ivanovic A, Car M, Haus T, Orsag M, Petrovic T, Bogdan S (2016) Aerial-ground robotic system for autonomous delivery tasks. In: 2016 IEEE international conference on robotics and automation (ICRA), pp 5463–5468. IEEE
2. Beutelspacher A, Rosenbaum U (1998) Projective geometry: from foundations to applications. Cambridge University Press
3. Bircher A, Kamel M, Alexis K, Burri M, Oettershagen P, Omari S, Mantel T, Siegwart R (2015) Three dimensional coverage path planning via viewpoint resampling and tour optimization for aerial robots. Auton Robots, 1–20
4. Burri M, Nikolic J, Hürzeler C, Caprari G, Siegwart R (2012) Aerial service robots for visual inspection of thermal power plant boiler systems. In: 2012 2nd international conference on applied robotics for the power industry (CARPI), pp 70–75. IEEE
5. Eberli D, Scaramuzza D, Weiss S, Siegwart R (2011) Vision based position control for MAVs using one single circular landmark. J Intell Robot Syst, 495–512
6. Fabresse Felipe R, Fernando C, Ivan M, Anibal O (2014) Localization and mapping for aerial manipulation based on range-only measurements and visual markers. In: Proceedings of 2014 IEEE international conference on robotics & automation (ICRA), pp 2100–2106
7. Fitzgibbon A, Pilu M, Fisher RB (1999) Direct least square fitting of ellipses. IEEE Trans Pattern Anal Mach Intell 21(5):476–480
8. Frazzoli E, Dahleh MA, Feron E (2002) Real-time motion planning for agile autonomous vehicles. J Guid Control Dyn 25(1):116–129


9. Fumagalli M, Naldi R, Macchelli A, Carloni R, Stramigioli S, Marconi L (2012) Modeling and control of a flying robot for contact inspection. In: 2012 IEEE/RSJ international conference on intelligent robots and systems (IROS), pp 3532–3537
10. Gray S, Clingerman C, Likhachev M, Chitta S (2011) PR2: opening spring-loaded doors. In: Proceedings of IROS
11. Hornung A, Wurm KM, Bennewitz M, Stachniss C, Burgard W (2013) OctoMap: an efficient probabilistic 3D mapping framework based on octrees. Auton Robots. http://octomap.github.com
12. Ishida M, Shimonomura K (2012) Marker based camera pose estimation for underwater robots. In: 2012 IEEE/SICE international symposium on system integration (SII), pp 629–634
13. Jimenez-Cano AE, Martin J, Heredia G, Ollero A, Cano R (2013) Control of an aerial robot with multi-link arm for assembly tasks. In: 2013 IEEE international conference on robotics and automation (ICRA), pp 4916–4921
14. Justin T, Giuseppe L, Koushil S, Vijay K (2014) Toward image based visual servoing for aerial grasping and perching. In: Proceedings of 2014 IEEE international conference on robotics & automation (ICRA), pp 2113–2118
15. Kanatani K, Liu W (1993) 3D interpretation of conics and orthogonality. CVGIP: Image Underst 58(3):286–301
16. Kato H, Billinghurst M (1999) Marker tracking and HMD calibration for a video based augmented reality conferencing system. In: Proceedings of the 2nd IEEE and ACM international workshop on augmented reality, 1999 (IWAR '99), pp 85–94
17. Kavraki LE, Svestka P, Latombe J-C, Overmars MH (1996) Probabilistic roadmaps for path planning in high-dimensional configuration spaces. IEEE Trans Robot Autom 12(4):566–580
18. Kim S, Choi S, Kim HJ (2013) Aerial manipulation using a quadrotor with a two dof robotic arm. In: IEEE/RSJ international conference on intelligent robots and systems, Tokyo, Japan
19. Kondak K, Huber F, Schwarzbach M, Laiacker L, German S, Sommer D, Bejar M, Ollero A (2014) Aerial manipulation robot composed of an autonomous helicopter and a 7 degrees of freedom industrial manipulator. In: 2014 international conference on robotics and automation (ICRA)
20. Lagarias JC, Reeds JA, Wright MH, Wright PE (1998) Convergence properties of the Nelder–Mead simplex method in low dimensions. SIAM J Opt 9(1):112–147
21. Levine WS (1996) The control handbook. CRC Press
22. Lindsey Q, Mellinger D, Kumar V (2012) Construction with quadrotor teams. Auton Robots 33(3):323–336
23. Macchelli A, Forte F, Keemink AQL, Stramigioli S, Carloni R, Fumagalli M, Naldi R, Marconi L (2014) Developing an aerial manipulator prototype. IEEE Robot Autom Mag, 41–55
24. Mak LC, Furukawa T (2007) A 6 DoF visual tracking system for a miniature helicopter. In: 2nd international conference on sensing technology, pp 32–37. IIST, Massey University
25. Mellinger D, Kumar V (2011) Minimum snap trajectory generation and control for quadrotors. In: Proceedings IEEE international robotics and automation (ICRA) conference, pp 2520–2525
26. Metni N, Hamel T (2007) A UAV for bridge inspection: visual servoing control law with orientation limits. Autom Constr 17(1):3–10
27. Murray RM, Rathinam M, Sluis W (1995) Differential flatness of mechanical control systems: a catalog of prototype systems. In: ASME international mechanical engineering congress and exposition. Citeseer
28. Nikola M, Stjepan B, Dula N, Filip M, Matko O, Tomislav H (2014) Unmanned marsupial sea–air system for object recovery. In: Proceedings 22nd mediterranean conference on control and automation
29. Orsag M, Haus T, Palunko I, Bogdan S (2015) State estimation, robust control and obstacle avoidance for multicopter in cluttered environments: EuRoC experience and results. In: 2015 international conference on unmanned aircraft systems (ICUAS), pp 455–461. IEEE
30. Orsag M, Haus T, Tolic D, Ivanovic A, Car M, Palunko I, Bogdan S (2016) Human-in-the-loop control of multi-agent aerial systems. In: European control conference


31. Pentenrieder K, Bade C, Doil F, Meier P (2007) Augmented reality-based factory planning - an application tailored to industrial needs. In: 6th IEEE and ACM international symposium on mixed and augmented reality, 2007, ISMAR 2007, pp 31–42
32. Petrinec K, Kovacic Z (2007) Trajectory planning algorithm based on the continuity of jerk. In: 2007 mediterranean conference on control & automation
33. Reza DSH, Mutijarsa K, Adiprawita W (2011) Mobile robot localization using augmented reality landmark and fuzzy inference system. In: 2011 international conference on electrical engineering and informatics (ICEEI), pp 1–6
34. Richter C, Bry A, Roy N (2013) Polynomial trajectory planning for aggressive quadrotor flight in dense indoor environments. In: Proceedings of the international symposium on robotics research (ISRR)
35. Rigatos G (2015) Nonlinear control and filtering using differential flatness approaches: applications to electromechanical systems, vol 25. Springer
36. Scholten JLJ, Fumagalli M, Stramigioli S, Carloni R (2013) Interaction control of an UAV endowed with a manipulator. In: 2013 IEEE international conference on robotics and automation (ICRA), pp 4910–4915
37. Siciliano B, Khatib O (2008) Springer handbook of robotics. Springer Science & Business Media
38. Sreenath K, Michael N, Kumar V (2013) Trajectory generation and control of a quadrotor with a cable-suspended load - a differentially-flat hybrid system, pp 4888–4895. IEEE
39. Yang S, Scherer SA, Zell A (2013) An onboard monocular vision system for autonomous takeoff, hovering and landing of a micro aerial vehicle. J Intell Robot Syst 69:499–515
