Submitted: December 7, 2009
SD-May1014
Team Members: Michael Peat, Kollin Moore, Matt Rich, Alex Reifert
Advisors: Dr. Nicola Elia, Dr. Phillip Jones
MicroCART Phase 5 Design Report

The goal of this project is to create a Vertical Take-off and Landing Unmanned Aerial Vehicle (VTOL UAV) which, at a minimum, can take off, hover, and land autonomously. We will base our design on a purchasable, electrically powered, indoor-use base platform that can handle all necessary flight mechanics for the system. The system will then incorporate a variety of sensors, certainly including a position tracking system and most likely an inertial measurement unit, to provide the necessary stabilization during flight. The system will also use wireless communication channels to provide information to a ground station, which may perform some or all of the required processing.
i | P a g e
Table of Contents
Section Page
Table of Contents ........................................................................................................................................... i
List of Tables and Figures .............................................................................................................................. ii
I. Executive Summary .......................................................................................................................... 1
II. Problem/Mission Statement ............................................................................................................ 2
III. Solution/Procedure Statement ........................................................................................................ 2
IV. Operating Environment ................................................................................................................... 2
V. Intended use and users .................................................................................................................... 3
VI. Assumptions and Limitations ........................................................................................................... 3
VII. Expected End Product ...................................................................................................................... 4
VIII. Deliverables ..................................................................................................................................... 4
IX. Approach Used ................................................................................................................................. 4
X. Detailed Design ................................................................................................................................ 9
XI. Estimated Resource Requirements ................................................................................................ 19
XII. Project Schedule ............................................................................................................................ 20
XIII. Project Team Information .............................................................................................................. 21
XIV. Closing Summary............................................................................................................................ 22
Appendix 1: WiiYourself Library Source Code, Download Location ............................................................. 1
Appendix 2: Detailed Specifications for Sensor System Bracket .................................................................. 2
Appendix 3: Modeling the Lama Coaxial Helicopter (Li Chen and Phillip McKerrow) .................................. 4
Appendix 4: Current Wiidata.m file for running test with the Wiimote ..................................................... 13
List of Tables and Figures
Name Page
Table 1. Personal Effort Requirements ...................................................................................................... 19
Table 2. Financial Requirements ................................................................................................................ 19
Figure 1: Top-level system breakdown ......................................................................................................... 9
Figure 2: Minimal Sensor System Breakdown ............................................................................................ 10
Figure 3: Optimal Sensor System Breakdown ............................................................................................ 11
Figure 4: Sensor Power System Breakdown ............................................................................................... 12
Figure 5. Unused Voltage divider circuit .................................................................................................... 12
Figure 6: Communication System Design Breakdown ................................................................................ 14
Figure 7: Breakdown of the Software Subsystem ....................................................................................... 15
Figure 8: Sensor System Mounting Bracket ................................................................................................ 17
Figure 9. Project Schedule .......................................................................................................................... 20
List of Definitions
autonomous: controlled by non-human input, i.e. by a computer program.
ground station: most likely a wirelessly-enabled computer or network of devices which will send
the robot commands, such as take-off, hover, fly to waypoint A, and land.
inertial measurement unit (IMU): sensor device which uses internal gyroscopes and
accelerometers to measure acceleration and angular rate along several axes, from which velocity and orientation can be derived.
indoor positioning system (IPS): sensor system which is capable of calculating the x-, y-, and z-
axis coordinates of the user by using wireless signals within a building.
latency: time delay in a system, a.k.a. lag.
I. Executive Summary
Our end product will be a small electrically powered autonomous flying vehicle capable of
hovering stably in a controlled environment. It will have an onboard sensor array capable
of the measurements necessary for flight adjustments. These measurements will be
transmitted wirelessly to a ground station for overall flight instructions. Previous
phases of this project attempted a similar task with a different platform without
success; this phase aims to overcome those shortcomings and create a more usable system.
The most likely set of sensors includes a six-degree-of-freedom inertial measurement
unit and a position measurement unit (most likely a real-time tracking camera system).
Due to severe budget constraints, we have been forced to use a very inexpensive sensor
system, which will limit the effectiveness of the overall project. If better sensor
systems become available through donation by the faculty advisors for use in this
project, and a reasonable amount of time remains before the project deadline, then
these sensors will be implemented. The wireless transmission from the sensors will be
handled by a Bluetooth connection.
The next major subsystem is the power distribution system, which is a two-part system
for this project. The first part powers the helicopter and the sensor array on the
helicopter. By splitting the leads from the helicopter's battery and attaching an
appropriate DC-DC buck converter, we can power the sensor array as well as the
helicopter from the original power source that came with the helicopter. This will
reduce the maximum flight time for the system, but it was necessary due to the severity
of the payload constraints. The second part of the power system powers the FM
transmitter, which will be linked with the ground station and will send the control
signals back to the helicopter. This will most likely be handled by an AC-to-DC
converter or a bench-top power supply.
The third major subsystem is the interface between the transmitter and the ground station.
There are currently two options for this system, and we have not yet decided which
method we will use. Since the ground station will be a PC running Windows, the
transmitter will be connected using a serial or parallel port. This connection can be
fed either to a software-defined radio transmitter or to the inputs of the original RC
controller that came with our base platform. Both options have merit, and we have not
yet determined which will work more efficiently.
The fourth major subsystem is the ground station itself. As stated earlier, the ground
station is a PC running Microsoft Windows (version dependence yet to be determined). We
will be writing software which will most likely contain the following modules: Input data
transformer, PID Controller, Output data transformer. The input data transformer will take
the sensor data and transform it into the form needed to be entered into the PID controller.
The PID controller will take the data transformed from the sensor data and decide what the
necessary changes to the system will be. The output data transformer will transform the
data from the PID controller and send the appropriate signals to the base platform through
the transmitter.
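The three modules above can be sketched as a simple control loop. Everything below is an illustrative assumption for a single (altitude) channel: the gains, scale factors, and function names are placeholders, not the team's actual design.

```python
from dataclasses import dataclass

@dataclass
class PID:
    """Single-axis PID controller; gains are illustrative, not tuned."""
    kp: float
    ki: float
    kd: float
    integral: float = 0.0
    prev_error: float = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def input_transform(raw_counts):
    """Hypothetical input data transformer: sensor counts -> altitude (m)."""
    return raw_counts * 0.01

def output_transform(effort):
    """Hypothetical output data transformer: effort -> throttle byte 0-255."""
    return max(0, min(255, int(128 + effort)))

# One pass through the pipeline: sensor data in, transmitter command out.
altitude_pid = PID(kp=40.0, ki=2.0, kd=10.0)
altitude = input_transform(95)                      # pretend sensor reading
effort = altitude_pid.update(setpoint=1.0, measurement=altitude, dt=0.02)
throttle = output_transform(effort)
print(throttle)
```

In the full system, one such controller would run per controlled channel, with the output transformer producing whatever signal format the transmitter interface requires.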
The last subsystem will not be designed as it will be purchased. The base platform for this
system will be a dual concentric propeller radio controlled helicopter. The platform will be
expected to handle all basic flight mechanics. It will also be packaged with an onboard
control unit that will receive the signals sent from our transmitter and transform
them into the corresponding changes to the base system (increasing or decreasing the
speed of either motor, changing the position of any of the servos, etc.).
This phase of the project will span from August 2009 until May 2010, during which time
the team will deliver an assortment of reports and designs, as well as implement a
prototype of the end product. The overall project will cost $26,695 (including labor)
or $215 (excluding labor).
II. Problem/Mission Statement
To create a small electrically powered autonomous flying vehicle capable of takeoff and
landing from horizontal surfaces as well as stable indoor hover without human assistance.
Ideally this should include onboard sensors for orientation and acceleration along all three
rotational axes as well as altitude and overall position relative to known reference points.
III. Solution/Procedure Statement
In order to achieve our goal, we will research platforms fitting the description
outlined above and attempt to maximize payload capability so that the desired onboard
electronics can be carried in stable flight. To help with this, we will also research
the most lightweight sensors available to us and, if necessary, cut out any that we do
not absolutely need for basic takeoff, hover, and landing.
Once a platform is firmly established and its capabilities tested, we will attempt to
either find or create mathematical/computer simulation models of it in order to aid us
in designing our control system.
We will also attempt to establish effective and reliable Wi-Fi communication with a
computer ground station for overall flight instructions.
IV. Operating Environment
The operating environment for our system will be climate controlled university buildings
without any obstacles (dynamic or otherwise) in the intended flight path. The vehicle shall
have enough room to take off, hover, and land without any probable danger of collision with
its surroundings.
The minimal-option position tracking system will use Bluetooth to communicate from
platform to ground station, with an approximate maximum range of 300 ft. However, the
IR camera will be limited to operating within approximately 4 to 16 ft of the IR
reference sources. The accelerometer (IMU) has no range considerations of its own,
although it, like all the sensors, is limited by the communication range of the
Bluetooth transmission.
The higher-quality option will involve a far more intricate IR image tracking system
able to produce resolution of a centimeter or better within approximately a 5-foot
cube, with increased volume of operation coming at a cost to accuracy. A higher-quality
inertial measurement system being designed by an ISU graduate student is not yet
available to us, but it would use Zigbee communication and likely have a range very
similar to Bluetooth.
Hazards in this environment would include the possibility of serious loss of control and a
collision with walls and/or other obstacles resulting in damage to the vehicle, as well as the
possibility of accidental human interference from an unknowing intruder into our flight
space.
V. Intended use and users
The intended end use of our system will be continued research and development into the
area of autonomous flight systems, and intended users will be knowledgeable engineering
students and/or professors.
VI. Assumptions and Limitations
We will be operating under the following assumptions and limitations:
1. Our system will only be operated in the environment defined in this project plan.
See Operating Environment for more information.
a. Our vehicle will not have to deal with any obstacles dynamic or otherwise
b. Our vehicle will not be subject to any weather conditions
2. Basic flight mechanics will be taken care of by the base platform and we will not
have to do any sort of design or modification in order to achieve mechanical flight
capabilities.
3. Our vehicle will be able to communicate data to and receive instructions from a
computer ground station
a. The ground station will handle overall flight instructions such as takeoff, landing,
and possible waypoint movement.
b. The ground station will not be responsible for fast flight stability adjustments.
4. Our vehicle should have sufficient payload capability to carry the required sensor
and control equipment to maintain a stable hover for at least some small period of
time.
a. Our vehicle payload will be severely limited
b. We may have to make design changes as progress is made if we are unable to
procure the desired sensors and controls within the payload limit
VII. Expected End Product
Our expected end product should be a small electrically powered autonomous flying vehicle
capable of stable indoor hover without human assistance, ideally including onboard sensors
for orientation and acceleration along all three rotational axes as well as altitude and overall
position relative to known reference points, as well as Wi-Fi communication with a
computer ground station. As part of the project, we will be compiling a set of quantitative
specifications for use in designing UAV projects.
VIII. Deliverables
We plan on being able to deliver the following things:
1. A presentation giving a general overview of the current technology involved in non-
ISU UAV projects, both at other universities and in the general marketplace.
2. A written report detailing the capabilities of our platform.
3. A written report detailing the sensors used in our systems operation.
4. A written report on the overall processes and means by which our system operates.
5. A working mathematical/computer simulation of our platform.
6. A written design document.
7. A written summary of the development process.
8. Our end product itself.
IX. Approach Used
i. Past Project Considerations
1. History
a. MicroCART has been an active project since 1998.
b. There are many useable aspects of the engineering which has been completed
to this point.
c. There are many parts of the previous projects that will not apply to this project.
2. Rationale for project restructuring
a. Platform needed to be smaller.
b. Platform needed to be more stable.
c. Platform needed to be flown indoors.
d. Control system needed to be simplified.
3. Useable aspects from previous projects
a. Coding structure (basic flow of the coding)
b. Filter structure and design (to help remove noise from sensor data)
c. Control software (hopefully based on current 492 simulation design)
d. UAV system design intricacies overcome
ii. Functional Requirements:
Our end product will be designed subject to the following set of functional
requirements:
1. Capability of autonomous takeoff, flight, and landing
a. The vehicle should be able to power up when signaled
b. The vehicle should be able to enter flight autonomously when signaled
c. The vehicle should be able to maintain stable hover for a given span of time
d. The vehicle should be able to land on a level surface
e. The vehicle should be able to power down when signaled
f. The vehicle should do no damage to itself or any surroundings during this
process
2. Capability to make high frequency mid-flight stability adjustments
a. The vehicle must be able to stabilize itself during indoor hover for a given period
of time
b. The stabilization must be controlled fast enough to avoid any amount of lag
which would permit the unit to lose overall control or stability
c. The sensors for the vehicle must be able to communicate with the control
system quickly enough to avoid such lag
3. Capability to communicate with computer ground control station
a. The vehicle should be able to communicate information to a ground station
b. The vehicle should be able to receive instructions from a ground station
c. The vehicle will ideally be able to be overridden by manual control through the
ground station
iii. Constraint Considerations:
We will be operating under the following constraint considerations:
1. Size constraint
a. Our vehicle must be small enough to operate indoors
i. Specifically it must be operable in available university buildings
containing proper flight environments.
2. Power constraint
a. Our vehicle will have a limited power supply for flight in the form of a battery.
b. This power supply must be carried within the platform during flight.
c. This power supply should ideally allow at least 5 minutes of stable flight with all
equipment attached.
d. This power supply will be limited in capacity due to weight considerations (see
the payload constraint below).
3. Payload constraint
a. Our vehicle will have a severely limited payload capability
i. Due to this, our equipment and power payload will also be severely
limited by weight
iv. Technological Considerations:
Our design will take place subject to the following technological considerations:
Platform:
1. Must be electrically powered and controlled
2. Must be of a small size for indoor flight
3. Must be a wirelessly controlled model during human flight
4. Must fit within our budget
5. Will have a necessarily very limited payload due to power source and size
Sensors/Controls:
1. A six degree of freedom inertial measurement unit (IMU)
a. Should not suffer from accumulated error
i. If so, we must be able to mitigate or eliminate the effects of such error or
somehow prevent their accumulation by modification
b. Must be light enough to fit within payload capabilities of platform
2. An image tracking unit to help eliminate error from the IMU as well as give a precise
location of the base platform inside of the intended environment.
Communications:
1. The transmitter on sensor array must be able to transmit a minimum of 35ft to
ensure adequate space for the ground station and personnel.
2. The sensor transmitter communication must be compatible with the ground station.
3. The ground station must be able to alter the signal being transmitted to the base
platform
v. Testing Requirements Considerations
Platform determination:
First, we will need to test the stability of different platforms to decide which one
will work best for us. This needs to be done because an initially stable platform is
essential for a stable final UAV. This testing will be done by flying different
platforms manually and seeing which one is the easiest to fly.
Payload capabilities testing:
Second, we will test the payload capabilities of the platform we decide to use. This is
necessary to determine what type of sensor systems can possibly be attached to the
platform in flight. This testing will be accomplished through the use of a digital scale and
timing device. The platform will be attached to the measurement surface of the digital
scale and powered up to various levels of lift, and data will be collected on both what
levels of lift are achievable as well as what time intervals they are achievable for without
causing damage to the platform or its battery power supply.
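As a sketch of how the scale data might be reduced, the following uses made-up readings; the platform mass and all scale values are illustrative assumptions, not measurements of our actual platform.

```python
# Reduce payload-test data: strapped-down platform on a digital scale.
PLATFORM_MASS_G = 210.0   # assumed mass of the bare platform, grams

# (throttle %, scale reading in grams); the reading falls, and can go
# negative, as the rotors produce lift.
readings = [(0, 210.0), (50, 150.0), (80, 60.0), (100, -25.0)]

for throttle, scale_g in readings:
    lift_gf = PLATFORM_MASS_G - scale_g            # thrust, gram-force
    payload_margin_gf = lift_gf - PLATFORM_MASS_G  # spare lift beyond hover
    print(f"{throttle:3d}% throttle: lift {lift_gf:.0f} gf, "
          f"payload margin {payload_margin_gf:.0f} gf")
```

A positive margin at a sustainable throttle level is the weight budget available for the sensor array.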
Sensor System Testing:
After this, we will need to do extensive testing of the sensors we will be using, both
to ensure that they have sufficient functionality for our purposes and to determine
their output characteristics and optimal configuration while in use. Since such
testing has already been performed for the higher-quality option (which is currently
unavailable), the minimal-option sensors are the ones that will need testing.
This will be accomplished by setting up a Bluetooth connection to a PC running various
software to acquire, record, and analyze the data. We will use free, downloadable
software such as WiiYourself, which we will modify the source code for and re-compile
in order to better meet our needs. In the event that we run into trouble re-compiling
the source code (due to dependency issues, etc.), we will temporarily use the Wiiwrap
program written by Dr. Tom Daniels here at ISU, which should allow us to run many tests
although it does not possess the full functionality we desire.
The data itself, once recorded, will be analyzed using MATLAB. We will write scripts
to determine the sensor resolution and accuracy for all six degrees of freedom for both
the inertial measurement unit and infrared camera tracking system and to determine
latency. This data will then be presented in an appropriate form for our analysis (such as
graphically) and should allow the determination of the usability as well as optimal
functional configuration of our sensor system.
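The planned analysis can be sketched as follows. The report plans MATLAB scripts; Python is used here for brevity, and the sample data is invented.

```python
# Sketch of the resolution and latency analysis on recorded sensor data.
samples = [0.00, 0.04, 0.04, 0.08, 0.12, 0.12, 0.16]            # accel, g
timestamps = [0.000, 0.011, 0.020, 0.031, 0.040, 0.051, 0.060]  # arrival, s

# Resolution estimate: the smallest nonzero step between distinct readings.
levels = sorted(set(samples))
resolution = min(b - a for a, b in zip(levels, levels[1:]))

# Mean interval between received samples (a floor on end-to-end latency).
gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
mean_interval = sum(gaps) / len(gaps)

print(f"resolution ~ {resolution:.2f} g, "
      f"mean sample interval ~ {mean_interval * 1000:.0f} ms")
```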
Control Testing (Simulation):
Once we have implemented a control system on our ground control station, we will
need to ensure that it can actually maintain the level of stability and control we wish to
achieve in the platform without human assistance of any kind. This will be accomplished
first by using our mathematical modeling software to simulate the physics of our
platform in flight with inputs being manipulated by our controller.
Integrated System Testing:
Once a control system is determined to handle a simulation successfully, we will test its
real world functionality by allowing the system gradually increasing degrees of
independence in a very tightly controlled environment to minimize the risk of serious
damage to the system should something go wrong. Eventually the system will be tested
fully independent of any constraints or assistance and determined fit or unfit for
autonomous operation.
Of course this phase of testing will go beyond the software control alone. It will also
determine the effectiveness of our communications system, power system, mechanical
system, and sensor system integration. Our testing procedure will seek to isolate as
much as possible each of these systems and determine appropriate adjustments as well
as ensuring a working integrated system overall.
vi. Safety Considerations
Since we will be working with a helicopter and no one on the team is a professional
model pilot, there will always be a risk of someone being injured by the spinning
rotors. If a larger, higher-power model is used, the risks of electrical shock and
burns from hot motors also come into play. The only other potentially harmful aspect
of our project is testing the different electrical components. The components
themselves are not particularly dangerous, but the equipment we use to test them, most
notably the power generator, could cause serious injury to the user if not used
correctly. However, we are all trained to use such equipment, so even though it is a
potential risk, it is not very likely to occur.
X. Detailed Design
Figure 1: Top-level system breakdown
i. Overall System Design
The design of our UAV can be broken down into five main subsystems, as seen in
Figure 1: the Sensor System, Power System, Communications System, Software System, and
Mechanical System. Each of these systems is described in more detail below. Note that
there are both minimal and optimal design options for the sensor system: we do not
have the funding to buy the optimal system, even though it would perform better, but
it remains an option because use of the equipment may be donated to the project for a
short time.
[Figure 1 diagram contents]
UAV
• Sensor System
   • Minimal Option: Position Tracking System, Inertial Measurement Unit
   • Optimal Option: Position Tracking System, Inertial Measurement Unit
• Power System
   • Onboard UAV Power: Base Platform, Sensor Array, Testing Power Harness
   • Ground Station Power: Control System Power, Communications Power
• Communication System
   • Sensor to Ground Station Communications
   • Ground Station to Controller Communications
   • Controller to UAV Communications
• Software System
   • Data Acquisition, Input Data Transform, Controller, Output Data Transform, Data Transmission Protocols
• Mechanical System
   • Sensor Mounting to Base Platform: Minimal Sensor Design, Optimal Sensor Design
   • Testing Platforms
ii. Sensor System Design
Figure 2: Minimal Sensor System Breakdown
1. Minimal Option
The acceleration data will be used to give us very fast feedback on the dynamic
movements of our platform, more quickly than we could achieve by position sensing
alone. Currently we are working with an ADXL330 3-axis accelerometer (the chip used
in Wii Remotes).
This accelerometer has been found to provide us with 0.04g (g being the
acceleration of gravity) resolution along each of its 3 axes. It also has a free fall
frame of reference, meaning that while stationary a net acceleration equal in
magnitude to that of gravity will be read by the sensor.
Through testing and experimentation, we have determined that this accelerometer cannot
provide useful data on the inclination of the platform. We had hoped to use
trigonometric calculations on the per-axis acceleration readings to compute accurate
pitch and roll inclinations. However, the resulting resolution was approximately 2.2
degrees, which is not usable for a very sensitive and responsive system such as ours.
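The roughly 2.2-degree figure is consistent with the 0.04 g resolution: near level, the tilt angle inferred from one axis is approximately asin(a/g), so one quantization step corresponds to about asin(0.04).

```python
import math

# Near level, tilt from one accelerometer axis is theta = asin(a / g),
# so one 0.04 g quantization step (the resolution reported above) maps
# to the smallest distinguishable tilt change.
step_g = 0.04
tilt_resolution_deg = math.degrees(math.asin(step_g))
print(f"tilt resolution near level: {tilt_resolution_deg:.2f} degrees")
```

A finer accelerometer step, or averaging many samples, would be needed to resolve sub-degree tilt.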
In order to provide accurate data on the current position and orientation of our
platform, we are using an infrared camera capable of tracking 4 "blobs" at a time,
reporting 2-D coordinates and a size for each. This device also comes from the Wii
Remote. Because of its very high resolution in tracking IR dots (essentially IR LEDs),
we will use it to compute, through more involved calculations, a full orientation
(pitch, yaw, and roll) as well as x, y, and z spatial coordinates. It will also track
movements, and this data will be supplemented by the accelerometer data.
[Figure 2 diagram components: Infrared Light Emitters; Infrared Camera; 3-Axis Accelerometer; Onboard Microcontroller; Bluetooth Transmitter]
We will either have an IR camera on-board the platform referencing a ground
constellation of LEDs, or more likely one or more cameras viewing LEDs
positioned on the platform itself.
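As a simplified illustration of how blob coordinates yield position, a pinhole-camera model gives range from the apparent separation of two LEDs a known distance apart. The camera figures below are assumptions: ~1024x768 effective resolution and a ~45 degree horizontal field of view are commonly cited for the Wii Remote IR camera, but we have not measured them.

```python
import math

# Pinhole-camera range estimate from two IR LEDs a known distance apart.
SENSOR_WIDTH_PX = 1024   # assumed effective horizontal resolution
FOV_H_DEG = 45.0         # assumed horizontal field of view
FOCAL_PX = SENSOR_WIDTH_PX / (2 * math.tan(math.radians(FOV_H_DEG / 2)))

def distance_to_leds(led_separation_m, pixel_separation):
    """Range along the optical axis for two LEDs seen pixel_separation apart."""
    return FOCAL_PX * led_separation_m / pixel_separation

# Example: LEDs 0.20 m apart that appear 100 px apart in the image.
print(f"{distance_to_leds(0.20, 100):.2f} m")
```

Full pose recovery requires more blobs and more geometry, but this shows how pixel separations map onto metric coordinates.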
Figure 3: Optimal Sensor System Breakdown
2. Optimal Option
The acceleration data will again be used to give us faster feedback on the dynamic
movements of our platform than we could achieve by position sensing alone. This will
involve a higher-quality six-degree-of-freedom accelerometer. We should be able to use integration to
achieve velocity and position indication over short periods of time with this
higher quality model, but will still use an IR system to provide updates and
overall tracking.
In order to provide accurate data over time on the current position and orientation of
our platform, we would use the OptiTrack™ optical motion capture system to obtain the
spatial position and orientation coordinates. All of the software for this system, as
well as a hardware interface with a PC, is included with the product.
iii. Power System Design
1. Onboard UAV Power
a. Base Platform
The power system which will be used to power the vehicle and any
onboard sensors will be a 7.4 volt, 1000 mAh, 2-cell lithium polymer
rechargeable battery pack. This battery pack is designed to provide
approximately 10-15 minutes of flight time with our selected vehicle
without powering any additional electronics.
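The cost of sharing this battery with the sensor array can be estimated. Only the battery specification and the 10-15 minute figure come from this report; the sensor draw and converter efficiency below are illustrative assumptions, not measurements.

```python
# Rough flight-time estimate when the sensor array shares the flight battery.
BATTERY_WH = 7.4 * 1.0                    # 7.4 V x 1.0 Ah = 7.4 Wh (report spec)
FLIGHT_ONLY_MIN = 12.0                    # midpoint of the quoted 10-15 min range
flight_power_w = BATTERY_WH / (FLIGHT_ONLY_MIN / 60)   # average draw, ~37 W

sensor_power_w = 0.12     # assumed: ~35 mA at 3.3 V for the sensor board
converter_eff = 0.85      # assumed buck-converter efficiency
total_power_w = flight_power_w + sensor_power_w / converter_eff

new_flight_min = 60 * BATTERY_WH / total_power_w
print(f"estimated flight time with sensors: {new_flight_min:.1f} min")
```

Under these assumptions, the sensor board's electrical load costs only a few seconds of flight; the added mechanical payload weight is the larger concern.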
[Figure 3 diagram components: Infrared Light Emitter; Infrared Cameras (direct-wired into ground station); 6-Axis Inertial Measurement Unit; Onboard FPGAs and Microcontroller; Zigbee Wireless Transmitter]
Figure 4: Sensor Power System Breakdown
b. Sensor Array
There are two primary issues that had to be addressed when selecting an onboard power
supply: power conversion and battery life considerations. The issue of power
conversion arises when trying to connect sensors that run at lower voltages than the
vehicle. To solve this issue, we first attempted to design a simple voltage divider
circuit using two resistors, as shown in Figure 5 below.
Figure 5. Unused Voltage divider circuit
One fundamental flaw with this design is that much of the power introduced into the
circuit is dissipated as heat, in the form of I²R losses. Since battery life is
already relatively short while powering the vehicle alone, we cannot afford to waste
energy when trying to power the onboard sensors.
Another disadvantage of using a simple voltage divider involves the
sensor system itself. The impedance load of the sensor system will
change as the sensors power up, which will change the value of Vout in
Figure 5 above. Since the load is constantly fluctuating, the voltage will
also fluctuate which may cause the device to either shut off or receive
sudden moments of high voltage which can damage the device.
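The scale of the waste can be illustrated with a simplified series-drop model, which is the best case for any resistive divider feeding this load, since bleed current through the lower resistor only adds further loss. The load current is an illustrative assumption.

```python
# Why the two-resistor divider of Figure 5 was rejected, in numbers.
V_IN, V_OUT = 7.4, 3.3
I_LOAD = 0.035                       # assumed sensor draw, amps (~35 mA)

p_delivered = V_OUT * I_LOAD         # power that reaches the sensors
p_wasted = (V_IN - V_OUT) * I_LOAD   # power burned as I^2*R heat
waste_pct = 100 * p_wasted / (p_delivered + p_wasted)
print(f"delivered {p_delivered * 1000:.0f} mW, "
      f"wasted {p_wasted * 1000:.0f} mW ({waste_pct:.0f}%)")
```

More than half the drawn power is lost as heat even in this best case, which is why a switching buck converter was chosen instead.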
Ultimately, it was decided that the issue of power conversion could
easily be solved by using an off-the-shelf buck converter, which is
designed to take an input voltage and output a lower voltage at a preset
[Figure 4 diagram: 7.4 V, 1000 mAh lithium-polymer rechargeable 2-cell battery pack; 7.4 V to 3.3 V step-down DC-DC (buck) converter; Wii Remote PCB]
level. We initially selected the model TPS62056 step-down converter
from Texas Instruments, which takes an input voltage range of 2.7-10V
and outputs at a preset 3.3V. However, the chip was much smaller than
anticipated, making it very difficult to work with. A slightly larger buck
converter from TI with similar properties will be selected for the final
version of the power system. The layout of the onboard power system
is shown in Figure 4.
c. Testing Power Harness
To save time and money (i.e. waiting for the battery pack to recharge or
buying multiple rechargeable battery packs), the platform will use a
fixed power supply during testing.
A 0-40V, 0-10A DC power supply (Hewlett-Packard 6267B) will be used.
A connector similar to the one which normally connects the Li-Po
battery to the platform will be used to ensure no short-circuiting. The
wires leading into the platform shall be oriented in such a way that the
platform’s hovering/flight characteristics will not be altered.
2. Ground Station Power
a. Control System Power
The ground station will consist of a standard PC that is Bluetooth-
enabled (this will likely be a USB plug-in device). The computer portion
of the ground station will only require a wall socket to power the PC and
computer monitor.
The computer will also have attached DACs to send commands to the
platform controller. These DACs will be powered by the PC and will
require no additional power source.
b. Communications Power
The platform controller normally runs on 8 AA batteries, but it also
accepts an optional plug-in power adapter, a simple AC/DC converter
that plugs into a wall socket. To minimize project costs, the wall
adapter shall be used whenever possible.
Figure 6: Communication System Design Breakdown
iv. Communication System Design
1. Sensor to Computer Communications
a. Using Minimal Sensor System Option
Since the minimal sensor system design is based entirely on the
Nintendo Wii Remote, the communication channel between the sensor
system and the ground station is already defined. The Wii Remote
uses Bluetooth (standard IEEE 802.15.1, through a Broadcom 2042 IC)
communication channels and the Bluetooth HID protocol to transfer
information. We will use a BlueSoleil dongle to receive the
information.
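As a sketch of what the receiving end of this channel must do, the fragment below decodes x, y, z acceleration from a Wii Remote style HID input report. The report ID (0x31), byte offsets, zero-g offset, and counts-per-g sensitivity are assumptions drawn from community reverse-engineering notes, not values we have verified; a real implementation would calibrate each remote individually.

```python
ZERO_G = 0x80        # assumed zero-g offset for each 8-bit axis
COUNTS_PER_G = 24.0  # assumed sensitivity; real remotes need per-unit calibration

def parse_accel_report(report: bytes) -> tuple:
    """Decode (x, y, z) acceleration in g from a 0x31-style data report.

    Assumed layout: [0x31, buttons_lo, buttons_hi, ax, ay, az, ...]
    """
    if len(report) < 6 or report[0] != 0x31:
        raise ValueError("not an accelerometer data report")
    return tuple((report[i] - ZERO_G) / COUNTS_PER_G for i in (3, 4, 5))

# A remote lying flat should read roughly (0, 0, +1 g):
print(parse_accel_report(bytes([0x31, 0x00, 0x00, 0x80, 0x80, 0x98])))
```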
b. Using Optimal Sensor System Option
Our optimal sensor option would be the OptiTrack optical motion
tracking system, which uses several IR cameras to track reflective
markers on the UAV. It sends information back to a computer over a
USB interface and comes with its own analysis software. We will most
likely use a 4-camera system with a tracking volume of about 125 ft³.
We will also use a wireless accelerometer to assist the camera system
in motion capture.
2. Computer to Controller Communications
To make our UAV fly autonomously, we cannot use the original control
method: a human operating a dual-joystick, 8-channel RC controller. To
remove the controller's dependence on human interaction, we will use
one of three interfacing techniques that allow a computer program,
which we will design or purchase, to replace the human operator. To do
this we will wire directly into the controller itself, using a
digital-to-analog converter to drive the voltages fed into the four
separate channels of the controller. The controller will read these
voltages as if they came from its joysticks.
[Figure 6 flow: Information from On-Board Sensors → Computer Processor →
DAC to RC Controller → UAV Control System]
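A minimal sketch of this voltage-substitution idea, assuming a hypothetical 8-bit DAC with a 0-3 V output range standing in for one joystick potentiometer (both figures are placeholders, not measured from our controller):

```python
DAC_BITS = 8   # assumed DAC resolution
V_REF = 3.0    # assumed full-scale DAC output in volts

def stick_percent_to_dac(percent: float) -> int:
    """Quantize a 0-100% stick command to the nearest DAC code."""
    if not 0.0 <= percent <= 100.0:
        raise ValueError("stick command must be within 0-100%")
    return round(percent / 100.0 * ((1 << DAC_BITS) - 1))

def dac_code_to_volts(code: int) -> float:
    """Voltage the DAC would drive into the controller channel."""
    return code / ((1 << DAC_BITS) - 1) * V_REF

code = stick_percent_to_dac(50.0)  # stick centered
print(code, dac_code_to_volts(code))
```

In practice the true voltage range of each channel would be measured at the controller's potentiometer terminals and substituted for the placeholder constants.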
3. Controller to UAV Communications
The controller uses a 72.8 MHz RF transmission to send information to
the on-board control systems of the UAV. The UAV reads these signals no
differently than if a human were using the controller.
Figure 7: Breakdown of the Software Subsystem
v. Software System Design
1. Data Acquisition
a. Using Minimal Sensor System
Since the minimal sensor system design is based entirely on the
Nintendo Wii Remote, the data acquisition code is available as a
downloadable C++ library called WiiYourself. The library handles
linking to multiple Wii Remotes as well as calculating pitch and roll
from the accelerometer data. The challenges of using this library are
becoming familiar with another programmer's code and compiling an
out-of-date project file (the library was last compiled with Microsoft
Visual Studio 2005, which has compatibility issues with Visual Studio
2008). For the source code and a link to the source site, see Appendix 1.
[Figure 7 module outputs:
• Data Acquisition — outputs: X, Y, Z positions; X, Y, Z accelerations; pitch, roll, and yaw.
• Input Data Transform and Filtering — outputs: actual angular speed for both propellers; actual blade pitch for both propellers.
• Controller — outputs: new angular speed for both propellers; new blade pitch for both propellers.
• Output Data Transform — outputs: new throttle; new yaw; new pitch; new roll.
• Data Transmission — outputs: data stream for sending to the DAC described in the communications plan.]
b. Using Optimal Sensor System
Since we do not yet have either of the devices for the optimal sensor
system design, we do not know what their data streams will look like.
This section of code will therefore have to be written if/when that
option is implemented with the system.
2. Input Data Transform and Filtering
Regardless of which sensor system option is implemented, the input data should
be relatively the same (it may require some different scaling factors, but the
overall transformation should be roughly the same). The sensor system is
designed to report an X, Y, and Z coordinate from the image tracking system as
well as X, Y, and Z acceleration from the IMU. If the minimal sensor system is
chosen, then pitch, roll, and yaw will have to be calculated from the above
information (which may be included in the data acquisition module to keep this
module more equivalent between options). If the optimal sensor system is
chosen, the pitch, roll, and yaw should be able to be obtained from both the
image tracking system and the IMU. Thus the nine inputs to this module will be
X, Y, and Z coordinates and accelerations as well as pitch, roll, and yaw.
The function of this module will be simple: take each type of input data and
transform it into the input data needed for the controller module. Since the
inputs to the controller have not yet been determined, we have not written the
necessary data transforms. If we are unable to adapt the current controller
being developed by the 492 team for our smaller helicopter, the outputs of this
section will be the desired speed for both propellers and the desired pitch of
both propeller blades.
This module will also be responsible for filtering the input data. As
testing of this data is not yet complete, the filters cannot be fully
formulated. Currently we plan to low-pass filter all of the data and to
apply a moving-average filter to the acceleration data.
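The two planned filters can be sketched as below. The smoothing factor and window length are placeholders to be tuned against real sensor logs, not chosen values:

```python
def low_pass(samples, alpha=0.2):
    """First-order IIR low-pass: y[n] = alpha*x[n] + (1-alpha)*y[n-1].

    alpha is a placeholder smoothing factor (0 < alpha <= 1).
    """
    out, y = [], None
    for x in samples:
        y = x if y is None else alpha * x + (1 - alpha) * y
        out.append(y)
    return out

def moving_average(samples, window=4):
    """Simple moving average over the last `window` samples."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```

The low-pass stage would suppress high-frequency vibration from the rotors, while the moving average further smooths the noisy accelerometer channel before it reaches the controller.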
3. Controller
This module has not yet been designed due to its similarity to the
simulation model currently being developed by the 492 MicroCART team.
Because the simulation model overlaps with the implementation phase of
this project, this module is not expected to be completed by the
deadline for this report. If we are not able to use the controller the
current 492 team is using, we will design a controller based on a paper
by Li Chen and Phillip McKerrow from the University of Wollongong
entitled “Modelling the Lama Coaxial Helicopter.” (The full text of
this paper can be found in Appendix 3.) The Lama helicopter is
manufactured by the same company that manufactures our base platform,
so the model should be generally the same. The outputs of the
controller should be the new propeller speeds and pitches.
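If we do end up designing our own controller, a common starting point is a set of PID loops, one per controlled quantity. The sketch below is purely illustrative and is not the 492 team's controller or the Chen/McKerrow model; the gains and time step are arbitrary placeholders.

```python
class PID:
    """Illustrative discrete PID loop; gains here are placeholders, not tuned values."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured):
        """Return one control increment from the current tracking error."""
        error = setpoint - measured
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# e.g. an altitude loop nudging the throttle command toward a hover setpoint
alt_pid = PID(kp=1.2, ki=0.1, kd=0.4, dt=0.02)
print(alt_pid.update(setpoint=1.0, measured=0.8))
```

A full flight controller would run several such loops (altitude, yaw, pitch, roll) and mix their outputs into the propeller speed and blade pitch commands described above.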
4. Output Data Transform
The inputs to this module will be the outputs of the controller module.
Its function is to transform each controller output into the form
required by the data transmission module, which relays the controller's
intent to the base platform over whichever ground station communication
link is implemented. The outputs of this module will be numerical
values (possibly percentages of the total range) describing the new
desired throttle, yaw, pitch, and roll positions.
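As a sketch of such a transform, assume (hypothetically) that throttle tracks the mean rotor speed, yaw tracks the speed differential, and the pitch/roll channels track the cyclic blade-pitch commands. The normalization constants and the mixing itself are illustrative assumptions, not the actual mixing of our base platform:

```python
MAX_RPM = 3000.0       # assumed top rotor speed used for normalization
MAX_CYCLIC_DEG = 10.0  # assumed cyclic blade-pitch range in degrees

def to_channel_percents(upper_rpm, lower_rpm, pitch_cyclic_deg, roll_cyclic_deg):
    """Map controller outputs to 0-100% throttle/yaw/pitch/roll channels."""
    clamp = lambda v: max(0.0, min(100.0, v))
    throttle = clamp((upper_rpm + lower_rpm) / 2.0 / MAX_RPM * 100.0)
    yaw      = clamp(50.0 + (upper_rpm - lower_rpm) / MAX_RPM * 50.0)
    pitch    = clamp(50.0 + pitch_cyclic_deg / MAX_CYCLIC_DEG * 50.0)
    roll     = clamp(50.0 + roll_cyclic_deg / MAX_CYCLIC_DEG * 50.0)
    return throttle, yaw, pitch, roll

# Matched rotor speeds and neutral cyclic should give centered channels:
print(to_channel_percents(1500.0, 1500.0, 0.0, 0.0))
```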
5. Data Transmission
The inputs of this module will be the outputs of the Output Data
Transform module: the new throttle, pitch, roll, and yaw positions as
percentages. These values will be quantized to the correct number of
bits and sent to their respective DACs. Each DAC will be patched into
the original RC controller in place of the potentiometers it used for
manual control. The RC controller will then stream these signals to the
control unit on the base platform.
vi. Mechanical System Design
1. Sensor Mounting to Base Platform
a. Using Minimal Sensor System Option
Figure 8: Sensor System Mounting Bracket
The PCB taken from the Wii Remote will be mounted on a small cradle
device which will hang directly below the battery cage of the helicopter.
The PCB will be attached to the cage via 2 screws which are similar or
identical in size to the tri-wing screws used in the Wii Remote casing.
The entire cradle assembly will be attached to the helicopter using four
screws that are similar or identical in size to the screws currently
holding the helicopter battery cage in place. The cradle screw holes
have also been positioned to allow for nominal weight distribution of
the Wii Remote PCB, to avoid flight instability. See Figure 8 for a
detailed configuration drawing. Please reference additional view and
specifications in Appendix 2.
b. Using Optimal Sensor System Option
An optimal sensor system will use a cradle system similar to the one
used in the minimal sensor system option. The optimal sensor system
will likely have different dimensions and weight distribution, so the
cradle system will have to be modified to accommodate these different
characteristics.
2. Testing Platforms
The main testing platform for the system will be a 4 ft × 4 ft piece of
plywood covered with a minimum of 1 inch of soft foam padding. The
plywood base was chosen because it was purchased for previous phases of
MicroCART and provides a stable, portable anchor for our testing
platform. The foam padding ensures that if the base platform crashes,
it does minimal damage to itself and its environment.
There will be four harnessing points anchored to the plywood base which will be
attached by flexible cables to the four corners of the base platform struts.
There will also be a system allowing the wired testing power harness to travel
up to the base platform without greatly disturbing flight dynamics.
XI. Estimated Resource Requirements
i. Personal Effort Requirements:
Expected Personnel Effort Requirements (hours)

Team Member  | Problem Statement | Tech Selection | End Product Design | Prototype Implement | End Product Testing | End Product Document | End Product Demos | Project Reporting | Totals
Michael Peat | 15                | 65             | 85                 | 65                  | 25                  | 15                   | 15                | 65                | 350
Kollin Moore | 12                | 80             | 80                 | 55                  | 25                  | 15                   | 15                | 50                | 332
Matt Rich    | 17                | 65             | 80                 | 60                  | 40                  | 10                   | 20                | 30                | 322
Alex Reifert | 10                | 65             | 85                 | 60                  | 40                  | 10                   | 20                | 30                | 320
Totals       | 54                | 275            | 330                | 240                 | 130                 | 50                   | 70                | 175               | 1324

Table 1. Personal Effort Requirements
ii. Financial Requirements:
Estimated Original Project Costs

Equipment:
  Base Platform ................. Donated
  Replacement Parts ............. $50.00
  Upgraded Batteries ............ $20.00
  Microprocessor Board .......... Donated
  IMU ........................... Donated
  IPS ........................... Donated
  Other Sensors ................. $40.00
  Tools and Hardware ............ $40.00
Reporting:
  Project Poster ................ $40.00
  Bound Project/Design Plans .... $25.00
Labor ($20/hr):
  Michael Peat (350 hours) ...... $7,000.00
  Kollin Moore (332 hours) ...... $6,640.00
  Matt Rich (322 hours) ......... $6,440.00
  Alex Reifert (320 hours) ...... $6,400.00

Subtotal (without labor): $215.00
Total: $26,695.00

Table 2. Financial Requirements
XII. Project Schedule
Figure 9. Project Schedule
[Figure 9 task breakdown — SD-May10-14: MicroCART Phase 5 Schedule]

Problem Statement
  - Problem Definition Completion
  - End-Users and End-Use Identification
  - Constraint and System Requirement Identification
Technology and implementation considerations and …
  - Identification of Possible Technologies
  - Identification of Selection Criteria
  - Technology Research
  - Technology Selection
End-product design
  - Identification of Design Requirements
  - Simulation Design Process
  - Design Process
  - Documentation of Design
End-product prototype implementation
  - Identification of Prototype Limitations and …
  - Implementation of End-Product Prototype
End-product testing
  - Test Planning
  - Test Development
  - Test Execution
  - Test Evaluation
  - Documentation of Testing
End-product documentation
  - Development of End-User Documentation
  - Development of Maintenance and Support …
End-product demonstration
  - Demonstration Planning
  - Faculty Advisor Demonstration
  - Industrial Review Panel Demonstration
Project Reporting
  - Project Plan Development
  - Project Poster Development
  - End-Product Design Report Development
  - Project Phase Report Development
  - Weekly Email Reporting
XIII. Project Team Information
Client Information: N/A (project dropped by Lockheed Martin)
Advisor Information:
Dr. Nicola Elia, EE Assistant Professor
3131 Coover Hall, Iowa State University, Ames, IA 50010
[email protected], (515) 294-3579

Dr. Phillip Jones, EE/CprE Assistant Professor
329 Durham Hall, Iowa State University, Ames, IA 50010
[email protected], (515) 294-9208
Student Team Information:
Michael Peat, EE
4362 Maricopa Dr, Ames, IA 50014-7980
[email protected], (712) 540-8170

Kollin Moore, EE
211 Campus Ave, Ames, IA 50014-7407
[email protected], (563) 940-8502

Matt Rich, EE
620 S 4th St Unit 205, Ames, IA 50010-6901
[email protected], (712) 899-7691

Alex Reifert, EE
1224 Frederiksen Court, Ames, IA 50010
[email protected], (563) 506-5769
XIV. Closing Summary
The goal of this project is to create a Vertical Take-off and Landing Unmanned Aerial Vehicle
(VTOL UAV) which, at a minimum, can take off, hover, and land autonomously. We will be
basing our design off of a purchasable, electrically powered, indoor use base platform which
can handle all necessary flight mechanics for the system. The system will then incorporate a
variety of sensors, definitely including a position tracking system and probably also including
an inertial measurement unit, to provide the necessary stabilization during flight. The
system will also use wireless communication channels to provide information to a ground
station, which may or may not do the required processing.
The project will span from August 2009 until May 2010, during which
time the team will deliver an assortment of reports and designs, as
well as implement a prototype of the end product. The overall project
will cost $26,695 including labor, or $215 with labor excluded.
Appendix 1: WiiYourself Library Source Code, Download Location
I had intended to include the source code in this appendix; after
inserting it, however, I realized it was far too large to print (30+
pages at 8 pt font). The source code can be found on our website,
seniord.ece.iastate.edu/may1014, under the design report, or can be
downloaded in its original version at: http://wiiyourself.gl.tter.org/
Appendix 4: Current Wiidata.m file for running tests with the Wiimote
function [xmins,ymins,zmins,pmins,rmins] = ...
    wiidata6(IRchoice,data1,data2,data3,data4,data5,data6)
%
% This function takes up to 6 data set inputs and REQUIRES an IRchoice.
% IRchoice = 1 for yes, 0 for no (technically anything not 1 means no)
%
%% History
%==========================================================================
%version 0:
%  -needs 6 data set inputs and produces accel, pitch, roll plots only
%  AUTHOR: Matt Rich
%
%version 1:
%  -support added for variable number of inputs
%   *(nargin) number of arguments adjusting figure outputs correctly
%  AUTHOR: Mike Peat
%
%version 2:
%  -support added for IR dot data inputs (two dots A and B)
%  -a plot of each data set's dot xy positions as x's
%   *(nargin adjusted to nargin-1 due to new 1st input [IRchoice])
%  AUTHOR: Matt Rich
%
%version 3:
%  -a composite plot of dot position as x's from each data set added
%  AUTHOR: Matt Rich
%
%version 4:
%  -dot size data incorporated and plotted as "boldness" (linewidth)
%  AUTHOR: Matt Rich
%
%version 5:
%  -improved IR plotting with distinguished dots: two triangle types
%  -added x markers to accel, pitch, roll plots to show actual data
%   points as well as trend line
%  -additional one-time figure showing x and y coordinates of dots A and
%   B from data set 1 as functions of time on isolated plots with trend
%   line and markers IF IRchoice is 1
%  AUTHOR: Matt Rich
%
%version 6: (this version)
%  -minimum step size for all accels, pitch, roll, for each data set
%   calculated and returned in vectors
%  -in this update the pitch, roll mins are reported in degrees
%  AUTHOR: Matt Rich
%
%version 7: (to come)
%  -Will add support for 4 dots
%  -Need data sets to have proper structure first to not break IR option
%  -Outputs will be altered/added etc... More IR analysis
%==========================================================================
%
%% Assuming times, x accel, y accel, z accel in the
%%   1st, 2nd, 3rd, 4th col of each data set
%% AND dot A found, A x, A y, A size, B found, B x, B y, B size in the
%%   5th, 6th, 7th, 8th, 9th, 10th, 11th, 12th col of each data set
%%

close all
w=1;

%% time, x accel, y accel, z accel col # assignments
tcol=1; xcol=2; ycol=3; zcol=4;

%% 1st and 2nd ir dot found, x coordinate, y coordinate, size col # assign.
irAfcol=5;  % 1 found, 0 not found, 'A' dot
irAxcol=6;  % ranges 0 to 1
irAycol=7;  % ranges 0 to 1
irAscol=8;  % ranges 1 to 5

irBfcol=9;  % 1 found, 0 not found, 'B' dot
irBxcol=10; % ranges 0 to 1
irBycol=11; % ranges 0 to 1
irBscol=12; % ranges 1 to 5
% so far no 'C' or 'D' dot
if nargin-1==0
    disp('ERROR: You need more than just an IR choice sucker!')
    return
end

if nargin-1>0
    % time, xyz accel data taken in, and pitch/roll calculated
    t1=data1(:,tcol); x1=data1(:,xcol); y1=data1(:,ycol); z1=data1(:,zcol);
    p1=(180/pi)*atan(y1./z1);
    r1=(180/pi)*atan(x1./z1);

    if IRchoice==1
        % IR data taken in IF option selected
        irAfound1=data1(:,irAfcol); irAx1=data1(:,irAxcol);
        irAy1=data1(:,irAycol);     irAs1=data1(:,irAscol);

        irBfound1=data1(:,irBfcol); irBx1=data1(:,irBxcol);
        irBy1=data1(:,irBycol);     irBs1=data1(:,irBscol);
    end
end
if nargin-1>1
    % time, xyz accel data taken in, and pitch/roll calculated
    t2=data2(:,tcol); x2=data2(:,xcol); y2=data2(:,ycol); z2=data2(:,zcol);
    p2=(180/pi)*atan(y2./z2); r2=(180/pi)*atan(x2./z2);
    if IRchoice==1
        irAfound2=data2(:,irAfcol); irAx2=data2(:,irAxcol);
        irAy2=data2(:,irAycol);     irAs2=data2(:,irAscol);
        irBfound2=data2(:,irBfcol); irBx2=data2(:,irBxcol);
        irBy2=data2(:,irBycol);     irBs2=data2(:,irBscol);
    end
end
if nargin-1>2
    % time, xyz accel data taken in, and pitch/roll calculated
    t3=data3(:,tcol); x3=data3(:,xcol); y3=data3(:,ycol); z3=data3(:,zcol);
    p3=(180/pi)*atan(y3./z3); r3=(180/pi)*atan(x3./z3);
    if IRchoice==1
        irAfound3=data3(:,irAfcol); irAx3=data3(:,irAxcol);
        irAy3=data3(:,irAycol);     irAs3=data3(:,irAscol);
        irBfound3=data3(:,irBfcol); irBx3=data3(:,irBxcol);
        irBy3=data3(:,irBycol);     irBs3=data3(:,irBscol);
    end
end
if nargin-1>3
    % time, xyz accel data taken in, and pitch/roll calculated
    t4=data4(:,tcol); x4=data4(:,xcol); y4=data4(:,ycol); z4=data4(:,zcol);
    p4=(180/pi)*atan(y4./z4); r4=(180/pi)*atan(x4./z4);
    if IRchoice==1
        irAfound4=data4(:,irAfcol); irAx4=data4(:,irAxcol);
        irAy4=data4(:,irAycol);     irAs4=data4(:,irAscol);
        irBfound4=data4(:,irBfcol); irBx4=data4(:,irBxcol);
        irBy4=data4(:,irBycol);     irBs4=data4(:,irBscol);
    end
end
if nargin-1>4
    % time, xyz accel data taken in, and pitch/roll calculated
    t5=data5(:,tcol); x5=data5(:,xcol); y5=data5(:,ycol); z5=data5(:,zcol);
    p5=(180/pi)*atan(y5./z5); r5=(180/pi)*atan(x5./z5);
    if IRchoice==1
        irAfound5=data5(:,irAfcol); irAx5=data5(:,irAxcol);
        irAy5=data5(:,irAycol);     irAs5=data5(:,irAscol);
        irBfound5=data5(:,irBfcol); irBx5=data5(:,irBxcol);
        irBy5=data5(:,irBycol);     irBs5=data5(:,irBscol);
    end
end
if nargin-1>5
    % time, xyz accel data taken in, and pitch/roll calculated
    t6=data6(:,tcol); x6=data6(:,xcol); y6=data6(:,ycol); z6=data6(:,zcol);
    p6=(180/pi)*atan(y6./z6); r6=(180/pi)*atan(x6./z6);
    if IRchoice==1
        irAfound6=data6(:,irAfcol); irAx6=data6(:,irAxcol);
        irAy6=data6(:,irAycol);     irAs6=data6(:,irAscol);
        irBfound6=data6(:,irBfcol); irBx6=data6(:,irBxcol);
        irBy6=data6(:,irBycol);     irBs6=data6(:,irBscol);
    end
end

%% if 1 data input plotting
figure(1); subplot(nargin-1,1,1); plot(t1,x1,'bx-')
title('x acceleration 1'); ylabel('a/g (unitless)'); xlabel('Time (ms)')

figure(2); plot(t1,x1,'bx-')
title('x accelerations'); ylabel('Acceleration/Gravity (unitless)'); xlabel('Time (ms)')

figure(3); subplot(nargin-1,1,1); plot(t1,y1,'bx-')
title('y acceleration 1'); ylabel('a/g (unitless)'); xlabel('Time (ms)')

figure(4); plot(t1,y1,'bx-')
title('y accelerations'); ylabel('Acceleration/Gravity (unitless)'); xlabel('Time (ms)')

figure(5); subplot(nargin-1,1,1); plot(t1,z1,'bx-')
title('z acceleration 1'); ylabel('a/g (unitless)'); xlabel('Time (ms)')

figure(6); plot(t1,z1,'bx-')
title('z accelerations'); ylabel('Acceleration/Gravity (unitless)'); xlabel('Time (ms)')

figure(7); subplot(nargin-1,1,1); plot(t1,r1,'bx-')
title('roll 1'); ylabel('\theta (degrees)'); xlabel('Time (ms)')

figure(8); plot(t1,r1,'bx-')
title('rolls'); ylabel('\theta (degrees)'); xlabel('Time (ms)')

figure(9); subplot(nargin-1,1,1); plot(t1,p1,'bx-')
title('pitch 1'); ylabel('\theta (degrees)'); xlabel('Time (ms)')

figure(10); plot(t1,p1,'bx-')
title('pitches'); ylabel('\theta (degrees)'); xlabel('Time (ms)')

% Plots an additional IR figure if the option is selected
if IRchoice==1
    figure(11)
    for k=1:length(irAfound1)
        subplot(nargin-1,1,1)
        if irAfound1(k)==1, plot(irAx1(k),irAy1(k),'b>','markersize',10,'linewidth',irAs1(k)+1) % found dot A
        else, plot(irAx1(k),irAy1(k),'ko') % unfound dot A
        end
        hold on
    end
    for k=1:length(irBfound1)
        subplot(nargin-1,1,1)
        if irBfound1(k)==1, plot(irBx1(k),irBy1(k),'b<','markersize',10,'linewidth',irBs1(k)+1) % found dot B
        else, plot(irBx1(k),irBy1(k),'ko') % unfound dot B
        end
        hold on
    end
    figure(12)
    for k=1:length(irAfound1)
        if irAfound1(k)==1, plot(irAx1(k),irAy1(k),'b>','markersize',10,'linewidth',irAs1(k)+1) % found dot A
        else, plot(irAx1(k),irAy1(k),'ko') % unfound dot A
        end
        hold on
    end
    for k=1:length(irBfound1)
        if irBfound1(k)==1, plot(irBx1(k),irBy1(k),'b<','markersize',10,'linewidth',irBs1(k)+1) % found dot B
        else, plot(irBx1(k),irBy1(k),'ko') % unfound dot B
        end
        hold on
    end
end

%% one time only figures ==================================================
figure(13)
subplot(2,1,1)
plot(t1,x1,'bx-'); hold on; plot(t1,y1,'rx-'); plot(t1,z1,'gx-')
legend('x','y','z')
title('x,y,z accelerations 1'); ylabel('Accelerations/Gravity (unitless)'); xlabel('Time (ms)')
subplot(2,1,2)
plot(t1,r1,'bx-'); hold on; plot(t1,p1,'rx-')
legend('r','p')
title('pitch,roll 1'); ylabel('\theta (degrees)'); xlabel('Time (ms)')

if IRchoice==1
    figure(14)
    subplot(2,2,1); plot(irAx1,t1,'bx-')
    title('dot A x coordinate VS time'); xlabel('x coordinate'); ylabel('Time (ms)')
    subplot(2,2,3); plot(t1,irAy1,'rx-')
    title('dot A y coordinate VS time'); ylabel('y coordinate'); xlabel('Time (ms)')
    subplot(2,2,2); plot(irBx1,t1,'gx-')
    title('dot B x coordinate VS time'); xlabel('x coordinate'); ylabel('Time (ms)')
    subplot(2,2,4); plot(t1,irBy1,'mx-')
    title('dot B y coordinate VS time'); ylabel('y coordinate'); xlabel('Time (ms)')
end

%% ========================================================================
%% if 2 data inputs plotting
if nargin-1>1
    figure(1); subplot(nargin-1,1,2); plot(t2,x2,'rx-')
    title('x acceleration 2'); ylabel('a/g (unitless)'); xlabel('Time (ms)')
    figure(2); hold on; plot(t2,x2,'rx-'); hold off
    figure(3); subplot(nargin-1,1,2); plot(t2,y2,'rx-')
    title('y acceleration 2'); ylabel('a/g (unitless)'); xlabel('Time (ms)')
    figure(4); hold on; plot(t2,y2,'rx-'); hold off
    figure(5); subplot(nargin-1,1,2); plot(t2,z2,'rx-')
    title('z acceleration 2'); ylabel('a/g (unitless)'); xlabel('Time (ms)')
    figure(6); hold on; plot(t2,z2,'rx-'); hold off
    figure(7); subplot(nargin-1,1,2); plot(t2,r2,'rx-')
    title('roll 2'); ylabel('\theta (degrees)'); xlabel('Time (ms)')
    figure(8); hold on; plot(t2,r2,'rx-'); hold off
    figure(9); subplot(nargin-1,1,2); plot(t2,p2,'rx-')
    title('pitch 2'); ylabel('\theta (degrees)'); xlabel('Time (ms)')
    figure(10); hold on; plot(t2,p2,'rx-'); hold off

    % Plots an additional IR figure if the option is selected
    if IRchoice==1
        figure(11)
        for k=1:length(irAfound2)
            subplot(nargin-1,1,2)
            if irAfound2(k)==1, plot(irAx2(k),irAy2(k),'r>','markersize',10,'linewidth',irAs2(k)+1) % found dot A
            else, plot(irAx2(k),irAy2(k),'ko') % unfound dot A
            end
            hold on
        end
        for k=1:length(irBfound2)
            subplot(nargin-1,1,2)
            if irBfound2(k)==1, plot(irBx2(k),irBy2(k),'r<','markersize',10,'linewidth',irBs2(k)+1) % found dot B
            else, plot(irBx2(k),irBy2(k),'ko') % unfound dot B
            end
            hold on
        end
        figure(12); hold on
        for k=1:length(irAfound2)
            if irAfound2(k)==1, plot(irAx2(k),irAy2(k),'r>','markersize',10,'linewidth',irAs2(k)+1) % found dot A
            else, plot(irAx2(k),irAy2(k),'ko') % unfound dot A
            end
        end
        for k=1:length(irBfound2)
            if irBfound2(k)==1, plot(irBx2(k),irBy2(k),'r<','markersize',10,'linewidth',irBs2(k)+1) % found dot B
            else, plot(irBx2(k),irBy2(k),'ko') % unfound dot B
            end
        end
        hold off
    end
end

%% if 3 data inputs plotting
if nargin-1>2
    figure(1); subplot(nargin-1,1,3); plot(t3,x3,'gx-')
    title('x acceleration 3'); ylabel('a/g (unitless)'); xlabel('Time (ms)')
    figure(2); hold on; plot(t3,x3,'gx-'); hold off
    figure(3); subplot(nargin-1,1,3); plot(t3,y3,'gx-')
    title('y acceleration 3'); ylabel('a/g (unitless)'); xlabel('Time (ms)')
    figure(4); hold on; plot(t3,y3,'gx-'); hold off
    figure(5); subplot(nargin-1,1,3); plot(t3,z3,'gx-')
    title('z acceleration 3'); ylabel('a/g (unitless)'); xlabel('Time (ms)')
    figure(6); hold on; plot(t3,z3,'gx-'); hold off
    figure(7); subplot(nargin-1,1,3); plot(t3,r3,'gx-')
    title('roll 3'); ylabel('\theta (degrees)'); xlabel('Time (ms)')
    figure(8); hold on; plot(t3,r3,'gx-'); hold off
    figure(9); subplot(nargin-1,1,3); plot(t3,p3,'gx-')
    title('pitch 3'); ylabel('\theta (degrees)'); xlabel('Time (ms)')
    figure(10); hold on; plot(t3,p3,'gx-'); hold off

    % Plots an additional IR figure if the option is selected
    if IRchoice==1
        figure(11)
        for k=1:length(irAfound3)
            subplot(nargin-1,1,3)
            if irAfound3(k)==1, plot(irAx3(k),irAy3(k),'g>','markersize',10,'linewidth',irAs3(k)+1) % found dot A
            else, plot(irAx3(k),irAy3(k),'ko') % unfound dot A
            end
            hold on
        end
        for k=1:length(irBfound3)
            subplot(nargin-1,1,3)
            if irBfound3(k)==1, plot(irBx3(k),irBy3(k),'g<','markersize',10,'linewidth',irBs3(k)+1) % found dot B
            else, plot(irBx3(k),irBy3(k),'ko') % unfound dot B
            end
            hold on
        end
        figure(12); hold on
        for k=1:length(irAfound3)
            if irAfound3(k)==1, plot(irAx3(k),irAy3(k),'g>','markersize',10,'linewidth',irAs3(k)+1) % found dot A
            else, plot(irAx3(k),irAy3(k),'ko') % unfound dot A
            end
        end
        for k=1:length(irBfound3)
            if irBfound3(k)==1, plot(irBx3(k),irBy3(k),'g<','markersize',10,'linewidth',irBs3(k)+1) % found dot B
            else, plot(irBx3(k),irBy3(k),'ko') % unfound dot B
            end
        end
        hold off
    end
end

%% if 4 data inputs plotting
if nargin-1>3
    figure(1); subplot(nargin-1,1,4); plot(t4,x4,'mx-')
    title('x acceleration 4'); ylabel('a/g (unitless)'); xlabel('Time (ms)')
    figure(2); hold on; plot(t4,x4,'mx-'); hold off
    figure(3); subplot(nargin-1,1,4); plot(t4,y4,'mx-')
    title('y acceleration 4'); ylabel('a/g (unitless)'); xlabel('Time (ms)')
    figure(4); hold on; plot(t4,y4,'mx-'); hold off
    figure(5); subplot(nargin-1,1,4); plot(t4,z4,'mx-')
    title('z acceleration 4'); ylabel('a/g (unitless)'); xlabel('Time (ms)')
    figure(6); hold on; plot(t4,z4,'mx-'); hold off
    figure(7); subplot(nargin-1,1,4); plot(t4,r4,'mx-')
    title('roll 4'); ylabel('\theta (degrees)'); xlabel('Time (ms)')
    figure(8); hold on; plot(t4,r4,'mx-'); hold off
    figure(9); subplot(nargin-1,1,4); plot(t4,p4,'mx-')
    title('pitch 4'); ylabel('\theta (degrees)'); xlabel('Time (ms)')
    figure(10); hold on; plot(t4,p4,'mx-'); hold off

    % Plots an additional IR figure if the option is selected
    if IRchoice==1
        figure(11)
        for k=1:length(irAfound4)
            subplot(nargin-1,1,4)
            if irAfound4(k)==1, plot(irAx4(k),irAy4(k),'m>','markersize',10,'linewidth',irAs4(k)+1) % found dot A
            else, plot(irAx4(k),irAy4(k),'ko') % unfound dot A
            end
            hold on
        end
        for k=1:length(irBfound4)
            subplot(nargin-1,1,4)
            if irBfound4(k)==1, plot(irBx4(k),irBy4(k),'m<','markersize',10,'linewidth',irBs4(k)+1) % found dot B
            else, plot(irBx4(k),irBy4(k),'ko') % unfound dot B
            end
            hold on
        end
        figure(12); hold on
        for k=1:length(irAfound4)
            if irAfound4(k)==1, plot(irAx4(k),irAy4(k),'m>','markersize',10,'linewidth',irAs4(k)+1) % found dot A
            else, plot(irAx4(k),irAy4(k),'ko') % unfound dot A
            end
        end
        for k=1:length(irBfound4)
            if irBfound4(k)==1, plot(irBx4(k),irBy4(k),'m<','markersize',10,'linewidth',irBs4(k)+1) % found dot B
            else, plot(irBx4(k),irBy4(k),'ko') % unfound dot B
            end
        end
        hold off
    end
end

%% if 5 data inputs plotting
if nargin-1>4
    figure(1); subplot(nargin-1,1,5); plot(t5,x5,'kx-')
    title('x acceleration 5'); ylabel('a/g (unitless)'); xlabel('Time (ms)')
    figure(2); hold on; plot(t5,x5,'kx-'); hold off
    figure(3); subplot(nargin-1,1,5); plot(t5,y5,'kx-')
    title('y acceleration 5'); ylabel('a/g (unitless)'); xlabel('Time (ms)')
    figure(4); hold on; plot(t5,y5,'kx-'); hold off
    figure(5); subplot(nargin-1,1,5); plot(t5,z5,'kx-')
    title('z acceleration 5'); ylabel('a/g (unitless)'); xlabel('Time (ms)')
    figure(6); hold on; plot(t5,z5,'kx-'); hold off
    figure(7); subplot(nargin-1,1,5); plot(t5,r5,'kx-')
    title('roll 5'); ylabel('\theta (degrees)'); xlabel('Time (ms)')
    figure(8); hold on; plot(t5,r5,'kx-'); hold off
    figure(9); subplot(nargin-1,1,5); plot(t5,p5,'kx-')
    title('pitch 5'); ylabel('\theta (degrees)'); xlabel('Time (ms)')
    figure(10); hold on; plot(t5,p5,'kx-'); hold off

    % Plots an additional IR figure if the option is selected
    if IRchoice==1
        figure(11)
        for k=1:length(irAfound5)
            subplot(nargin-1,1,5)
            if irAfound5(k)==1, plot(irAx5(k),irAy5(k),'y>','markersize',10,'linewidth',irAs5(k)+1) % found dot A
            else, plot(irAx5(k),irAy5(k),'ko') % unfound dot A
            end
            hold on
        end
        for k=1:length(irBfound5)
            subplot(nargin-1,1,5)
            if irBfound5(k)==1, plot(irBx5(k),irBy5(k),'y<','markersize',10,'linewidth',irBs5(k)+1) % found dot B
            else, plot(irBx5(k),irBy5(k),'ko') % unfound dot B
            end
            hold on
        end
        figure(12); hold on
        for k=1:length(irAfound5)
            if irAfound5(k)==1, plot(irAx5(k),irAy5(k),'y>','markersize',10,'linewidth',irAs5(k)+1) % found dot A
            else, plot(irAx5(k),irAy5(k),'ko') % unfound dot A
            end
        end
        for k=1:length(irBfound5)
            if irBfound5(k)==1, plot(irBx5(k),irBy5(k),'y<','markersize',10,'linewidth',irBs5(k)+1) % found dot B
            else, plot(irBx5(k),irBy5(k),'ko') % unfound dot B
            end
        end
        hold off
    end
end

%% if 6 data inputs plotting
if nargin-1>5
    figure(1); subplot(nargin-1,1,6); plot(t6,x6,'cx-')
    title('x acceleration 6'); ylabel('a/g (unitless)'); xlabel('Time (ms)')
    figure(2); hold on; plot(t6,x6,'cx-'); hold off
    figure(3); subplot(nargin-1,1,6); plot(t6,y6,'cx-')
    title('y acceleration 6'); ylabel('a/g (unitless)'); xlabel('Time (ms)')
    figure(4); hold on; plot(t6,y6,'cx-'); hold off
    figure(5); subplot(nargin-1,1,6); plot(t6,z6,'cx-')
    title('z acceleration 6'); ylabel('a/g (unitless)'); xlabel('Time (ms)')
    figure(6); hold on; plot(t6,z6,'cx-'); hold off
    figure(7); subplot(nargin-1,1,6); plot(t6,r6,'cx-')
    title('roll 6'); ylabel('\theta (degrees)'); xlabel('Time (ms)')
    figure(8); hold on; plot(t6,r6,'cx-'); hold off
    figure(9); subplot(nargin-1,1,6); plot(t6,p6,'cx-')
    title('pitch 6'); ylabel('\theta (degrees)'); xlabel('Time (ms)')
    figure(10); hold on; plot(t6,p6,'cx-'); hold off

    % Plots an additional IR figure if the option is selected
    if IRchoice==1
        figure(11)
        for k=1:length(irAfound6)
            subplot(nargin-1,1,6)
            if irAfound6(k)==1, plot(irAx6(k),irAy6(k),'c>','markersize',10,'linewidth',irAs6(k)+1) % found dot A
            else, plot(irAx6(k),irAy6(k),'ko') % unfound dot A
            end
            hold on
        end
        for k=1:length(irBfound6)
            subplot(nargin-1,1,6)
            if irBfound6(k)==1, plot(irBx6(k),irBy6(k),'c<','markersize',10,'linewidth',irBs6(k)+1) % found dot B
            else, plot(irBx6(k),irBy6(k),'ko') % unfound dot B
            end
            hold on
        end
        figure(12); hold on
        for k=1:length(irAfound6)
            % linewidth normalized to irAs6(k)+1 to match the other data sets
            if irAfound6(k)==1, plot(irAx6(k),irAy6(k),'c>','markersize',10,'linewidth',irAs6(k)+1) % found dot A
            else, plot(irAx6(k),irAy6(k),'ko') % unfound dot A
            end
        end
        for k=1:length(irBfound6)
            if irBfound6(k)==1, plot(irBx6(k),irBy6(k),'c<','markersize',10,'linewidth',irBs6(k)+1) % found dot B
            else, plot(irBx6(k),irBy6(k),'ko') % unfound dot B
            end
        end
        hold off
    end
end
%
%% minimum step size calculations
%
x1minstep=1e10; y1minstep=1e10; z1minstep=1e10;
x2minstep=1e10; y2minstep=1e10; z2minstep=1e10;
x3minstep=1e10; y3minstep=1e10; z3minstep=1e10;
x4minstep=1e10; y4minstep=1e10; z4minstep=1e10;
x5minstep=1e10; y5minstep=1e10; z5minstep=1e10;
x6minstep=1e10; y6minstep=1e10; z6minstep=1e10;
r1minstep=1e10; p1minstep=1e10;
r2minstep=1e10; p2minstep=1e10;
r3minstep=1e10; p3minstep=1e10;
r4minstep=1e10; p4minstep=1e10;
r5minstep=1e10; p5minstep=1e10;
r6minstep=1e10; p6minstep=1e10;
if nargin-1>0
    for k=1:length(t1)-1
        if x1(k+1)~=x1(k) && abs(x1(k+1)-x1(k))<x1minstep
            x1minstep=abs(x1(k+1)-x1(k));
        end
        if y1(k+1)~=y1(k) && abs(y1(k+1)-y1(k))<y1minstep
            y1minstep=abs(y1(k+1)-y1(k));
        end
        if z1(k+1)~=z1(k) && abs(z1(k+1)-z1(k))<z1minstep
            z1minstep=abs(z1(k+1)-z1(k));
        end
        if p1(k+1)~=p1(k) && abs(p1(k+1)-p1(k))<p1minstep
            p1minstep=abs(p1(k+1)-p1(k));
        end
        if r1(k+1)~=r1(k) && abs(r1(k+1)-r1(k))<r1minstep
            r1minstep=abs(r1(k+1)-r1(k));
        end
    end
    xmins(1)=x1minstep;
    ymins(1)=y1minstep;
    zmins(1)=z1minstep;
    pmins(1)=p1minstep*180/pi;
    rmins(1)=r1minstep*180/pi;
end
if nargin-1>1
    for k=1:length(t2)-1
        if x2(k+1)~=x2(k) && abs(x2(k+1)-x2(k))<x2minstep
            x2minstep=abs(x2(k+1)-x2(k));
        end
        if y2(k+1)~=y2(k) && abs(y2(k+1)-y2(k))<y2minstep
            y2minstep=abs(y2(k+1)-y2(k));
        end
        if z2(k+1)~=z2(k) && abs(z2(k+1)-z2(k))<z2minstep
            z2minstep=abs(z2(k+1)-z2(k));
        end
        if p2(k+1)~=p2(k) && abs(p2(k+1)-p2(k))<p2minstep
            p2minstep=abs(p2(k+1)-p2(k));
        end
        if r2(k+1)~=r2(k) && abs(r2(k+1)-r2(k))<r2minstep
            r2minstep=abs(r2(k+1)-r2(k));
        end
    end
    xmins(2)=x2minstep;
    ymins(2)=y2minstep;
    zmins(2)=z2minstep;
    pmins(2)=p2minstep*180/pi;
    rmins(2)=r2minstep*180/pi;
end
if nargin-1>2
    for k=1:length(t3)-1
        if x3(k+1)~=x3(k) && abs(x3(k+1)-x3(k))<x3minstep
            x3minstep=abs(x3(k+1)-x3(k));
        end
        if y3(k+1)~=y3(k) && abs(y3(k+1)-y3(k))<y3minstep
            y3minstep=abs(y3(k+1)-y3(k));
        end
        if z3(k+1)~=z3(k) && abs(z3(k+1)-z3(k))<z3minstep
            z3minstep=abs(z3(k+1)-z3(k));
        end
        if p3(k+1)~=p3(k) && abs(p3(k+1)-p3(k))<p3minstep
            p3minstep=abs(p3(k+1)-p3(k));
        end
        if r3(k+1)~=r3(k) && abs(r3(k+1)-r3(k))<r3minstep
            r3minstep=abs(r3(k+1)-r3(k));
        end
    end
    xmins(3)=x3minstep;
    ymins(3)=y3minstep;
    zmins(3)=z3minstep;
    pmins(3)=p3minstep*180/pi;
    rmins(3)=r3minstep*180/pi;
end
if nargin-1>3
    for k=1:length(t4)-1
        if x4(k+1)~=x4(k) && abs(x4(k+1)-x4(k))<x4minstep
            x4minstep=abs(x4(k+1)-x4(k));
        end
        if y4(k+1)~=y4(k) && abs(y4(k+1)-y4(k))<y4minstep
            y4minstep=abs(y4(k+1)-y4(k));
        end
        if z4(k+1)~=z4(k) && abs(z4(k+1)-z4(k))<z4minstep
            z4minstep=abs(z4(k+1)-z4(k));
        end
        if p4(k+1)~=p4(k) && abs(p4(k+1)-p4(k))<p4minstep
            p4minstep=abs(p4(k+1)-p4(k));
        end
        if r4(k+1)~=r4(k) && abs(r4(k+1)-r4(k))<r4minstep
            r4minstep=abs(r4(k+1)-r4(k));
        end
    end
    xmins(4)=x4minstep;
    ymins(4)=y4minstep;
    zmins(4)=z4minstep;
    pmins(4)=p4minstep*180/pi;
    rmins(4)=r4minstep*180/pi;
end
if nargin-1>4
    for k=1:length(t5)-1
        if x5(k+1)~=x5(k) && abs(x5(k+1)-x5(k))<x5minstep
            x5minstep=abs(x5(k+1)-x5(k));
        end
        if y5(k+1)~=y5(k) && abs(y5(k+1)-y5(k))<y5minstep
            y5minstep=abs(y5(k+1)-y5(k));
        end
        if z5(k+1)~=z5(k) && abs(z5(k+1)-z5(k))<z5minstep
            z5minstep=abs(z5(k+1)-z5(k));
        end
        if p5(k+1)~=p5(k) && abs(p5(k+1)-p5(k))<p5minstep
            p5minstep=abs(p5(k+1)-p5(k));
        end
        if r5(k+1)~=r5(k) && abs(r5(k+1)-r5(k))<r5minstep
            r5minstep=abs(r5(k+1)-r5(k));
        end
    end
    xmins(5)=x5minstep;
    ymins(5)=y5minstep;
    zmins(5)=z5minstep;
    pmins(5)=p5minstep*180/pi;
    rmins(5)=r5minstep*180/pi;
end
if nargin-1>5
    for k=1:length(t6)-1
        if x6(k+1)~=x6(k) && abs(x6(k+1)-x6(k))<x6minstep
            x6minstep=abs(x6(k+1)-x6(k));
        end
        if y6(k+1)~=y6(k) && abs(y6(k+1)-y6(k))<y6minstep
            y6minstep=abs(y6(k+1)-y6(k));
        end
        if z6(k+1)~=z6(k) && abs(z6(k+1)-z6(k))<z6minstep
            z6minstep=abs(z6(k+1)-z6(k));
        end
        if p6(k+1)~=p6(k) && abs(p6(k+1)-p6(k))<p6minstep
            p6minstep=abs(p6(k+1)-p6(k));
        end
        if r6(k+1)~=r6(k) && abs(r6(k+1)-r6(k))<r6minstep
            r6minstep=abs(r6(k+1)-r6(k));
        end
    end
    xmins(6)=x6minstep;
    ymins(6)=y6minstep;
    zmins(6)=z6minstep;
    pmins(6)=p6minstep*180/pi;
    rmins(6)=r6minstep*180/pi;
end
%
%%
end