
Advanced PACER Program Preliminary Design Review Document

for the Chosen Experiment

by Team Just Viewing Natural Devastations

Prepared by: Team Spokesperson (replace with name) Date

Team Member (Denae Bullard) Date

Team Member (Kimenyi Joram) Date

Team Member (Nathaniel J. Morris) Date

Team Member (Vincent Rono) Date

Submitted:
Reviewed:
Revised:
Approved:

Institution Signoff (Dr. Jim Burnett) Date

Institution Signoff (replace with name) Date

LAACES Signoff Date


TABLE OF CONTENTS

Preliminary Design Review Document
Experiment
TABLE OF CONTENTS
LIST OF FIGURES
LIST OF TABLES
1.0 Document Purpose
1.1 Document Scope
1.2 Change Control and Update Procedures
2.0 Reference Documents
3.0 Goals, Objectives, Requirements
3.1 Mission Goal
3.2 Objectives
3.2.1 Science Objectives
3.2.2 Technical Objectives
3.3 Science Background and Requirements
3.3.1 Science Background
3.3.2 Characteristics of Remote Sensing
3.3.3 Establishing Ground Truth of Images
3.3.4 Science Requirements
3.4 Technical Background and Requirements
3.4.1 Technical Background
3.4.2 Image Sensors & Cameras
3.4.3 Orientation Sensors
3.4.4 Position Sensor
3.4.5 System Monitoring
3.4.6 Flight Introductory
3.4.7 HASP Interface
3.4.8 Technical Requirements
4.0 Payload Design
4.1 Principle of Operation
4.2 System Design
4.2.1 Functional Groups
4.2.2 Group Interfaces
4.2.3 Traceability
4.3 Electrical Design
4.3.1 Sensors
4.3.2 Sensor Interfacing
4.3.3 Control Electronics
4.3.4 Power Supply
4.3.5 Power Budget
4.4 Software Design
4.4.1 Data Format & Storage
4.4.2 Flight Software
4.5 Thermal Design
4.6 Mechanical Design
4.6.1 External Structure
4.6.2 Internal Structure
4.6.3 Weight Budget
5.0 Payload Development Plan
5.1 Electrical Design Development
5.2 Software Design Development
5.3 Mechanical Design Development
5.4 Mission Development
8.0 Project Management
8.1 Organization and Responsibilities
8.2 Configuration Management Plan
8.3 Interface Control
9.0 Master Schedule
9.1 Work Breakdown Structure (WBS)
9.2 Staffing Plan
9.3 Timeline and Milestones
11.0 Risk Management and Contingency
12.0 Glossary


LIST OF FIGURES
Figure 1 Angular rotation caused by HASP
Figure 2 View of camera tilt angle
Figure 3 Diagram illustrating the definition of a radian
Figure 4 Angular field of view
Figure 5 Typical spectral reflectance curves for vegetation, soil, and water
Figure 6 Relationship between resolution and pixels in spatial resolution
Figure 7 Relationship between spatial and angular resolution
Figure 8 NASA Terra satellite image showing the effect of the Poomacha Fire on the terrain
Figure 9 Photograph of hill slope in Capulin Canyon
Figure 10 Rodeo-Chediski Fire
Figure 11 Fire-affected areas of the Wallow Fire
Figure 12 Digital camera skeleton diagram
Figure 13 Quantum efficiency for CMOS image sensor
Figure 14 Depiction of the field of view for an optical sensor at a given altitude
Figure 15 Magnetoresistive effect in a permalloy
Figure 16 Mercury tilt sensor showing the two conducting elements
Figure 17 Electrolytic tilt sensor rotated about its axis of sensitivity
Figure 18 Half Wheatstone bridge configuration
Figure 19 Resistance profiles for thermistor, RTD, and thermocouple
Figure 20 Altitude in kilometers versus Universal Time for the 2009 flight of HASP
Figure 21 Ground track of the 2009 flight of HASP from launch in Ft. Sumner, New Mexico, to a landing site west of Phoenix, Arizona
Figure 22 Payload mounting bracket (units are centimeters)
Figure 23 Reflectance curve vs. wavelength


LIST OF TABLES
Table 1 Factors affecting geometry
Table 2 Payload's mass and volume footprint restraints
Table 3 Description of pin number functions
Table 4 Description of power pins for the EDAC 516


1.0 Document Purpose

This document describes the preliminary design for the chosen experiment by Team Just Viewing Natural Devastations for the Advanced PACER Program. It fulfills part of the Advanced PACER Project requirements for the Preliminary Design Review (PDR) to be held in September 2011.

1.1 Document Scope

This PDR document specifies the scientific purpose and requirements for the Just Viewing Natural Devastations experiment and provides a guideline for the development, operation, and cost of this payload under the Advanced PACER Project. The document includes details of the payload design, fabrication, integration, testing, flight operation, and data analysis. In addition, project management, timelines, work breakdown, expenditures, and risk management are discussed. Finally, the designs and plans presented here are preliminary and will be finalized at the time of the Critical Design Review (CDR).

1.2 Change Control and Update Procedures

Changes to this PDR document shall only be made after approval by designated representatives from the team and the Advanced PACER Institution Representative. Document change requests should be sent to the team members, the Advanced PACER Institution Representative, and the Advanced PACER Project.

2.0 Reference Documents

1. Organization of American States. Remote Sensing in Natural Hazard Assessments. OAS - Organization of American States: Democracy for peace, security, and development. [Online] [Cited: June 8, 2011.] http://www.oas.org/dsd/publications/unit/oea66e/ch04.htm#d.
2. Wang, L. Chapter 4: Multispectral Remote Sensing I. [PowerPoint]. Baton Rouge, LA: LSU Dept. of Geography & Anthropology, June 10, 2011.
3. Aggarwal, S. Principles of Remote Sensing. World AgroMeteorological Information Service. [Online] [Cited: June 8, 2011.] http://www.wamis.org/agm/pubs/agm8/Paper-2.pdf.
4. Tutorial: Fundamentals of Remote Sensing - Introduction - Radiation - Target Interactions. Canada Centre for Remote Sensing. [Online] [Cited: June 9, 2011.] http://www.ccrs.nrcan.gc.ca/resource/tutor/fundam/chapter1/05_e.php.
5. Tutorial: Fundamentals of Remote Sensing - Satellites and Sensors - Spatial Resolution, Pixel Size, and Scale. Canada Centre for Remote Sensing. [Online] January 2008. [Cited: June 10, 2011.] http://www.ccrs.nrcan.gc.ca/resource/tutor/fundam/chapter2/03_e.php.
6. Remote Sensing - Resolution. CISR - The Center for International Stabilization and Recovery at James Madison University. [Online] March 31, 2004. [Cited: June 11, 2011.] http://maic.jmu.edu/sic/rs/resolution.htm.


7. Liew, S. C. Analog and Digital Images. Centre for Remote Imaging, Sensing & Processing. [Online] 2001. [Cited: June 10, 2011.] http://www.fas.org/man/dod-101/navy/docs/es310/EO_image/EO_Image.htm.
8. Navulur, Kumar. Multispectral Image Analysis Using the Object-Oriented Paradigm. Boca Raton: CRC Press/Taylor & Francis, 2007.
9. McCoy, Roger M. Field Methods in Remote Sensing. New York: Guilford Publications, 2004.
10. Paine, David P. and Kiser, James D. Aerial Photography and Image Interpretation. Hoboken, NJ: John Wiley, 2003.
11. Byrnes, Mark E. Field Sampling Methods for Remedial Investigations. 2nd ed. Boca Raton, FL: CRC, 2008.
12. Davidson, Robert. Aerial Photograph Types and Comparison With Topographic Maps. Map Reading - A Free e-book on how to read topographic maps and use a compass. [Online] [Cited: June 12, 2011.] http://www.map-reading.com/aptypes.php.
13. Gopi, Satheesh, Sathikumar, R., and Madhu, N. Advanced Surveying: Total Station, GIS and Remote Sensing. 2nd ed. Patparganj, Delhi: Dorling Kindersley (India) Pvt. Ltd., 2008.
14. Hewitt, Allan. Soils - Soil Features. Te Ara - Encyclopedia of New Zealand. [Online] March 1, 2009. [Cited: June 12, 2011.] http://www.TeAra.govt.nz/en/soils/2.
15. Geologic Framework of Arizona. The University of Arizona Library's Southwest Electronic Text Center. [Online] [Cited: June 12, 2011.] http://southwest.library.arizona.edu/azso/body.1_div.2.html#index-div-.
16. State Soils. NRCS Soils Website. United States Department of Agriculture: Natural Resources Conservation Service. [Online] [Cited: June 12, 2011.] soils.usda.gov/gallery/state_soils/.
17. Bruhjell, Darren and Tegart, Greg. Fire Effects on Soil - Factsheet 2 of 6 in the Fire Effects on Rangeland Factsheet Series. BC Ministry of Agriculture, Food and Fisheries. [Online] [Cited: June 12, 2011.] http://www.agf.gov.bc.ca/range/publications/documents/fire2.htm.
18. Ulery, A. L. and Graham, R. C. Forest Fire Effects on Soil Color and Texture. Soil Science Society of America Journal, 57:135-140, 1993.
19. NASA Images of California Wildfires. NASA - Home. [Online] [Cited: June 12, 2011.] http://www.nasa.gov/vision/earth/lookingatearth/socal_wildfires_oct07.html.
20. Finch, Deborah and Hawksworth, David. Effects of Wildfire on Wildlife Populations and Vegetation in the Middle Rio Grande Valley, New Mexico. United States Department of Agriculture. [Online] [Cited: June 11, 2011.] http://www.fs.fed.us/rm/grassland-shrubland-desert/research/projects/middle-rio-wildfire/.
21. Toledo Blade - Google News Archive Search. Google News. [Online] [Cited: June 15, 2011.] http://news.google.com/newspapers?nid=8_tS2Vw13FcC&dat=19600627&b_mode=2&hl=en.
22. Keller, Paul. Arizona's Rodeo-Chediski Fire: A Forest Health Problem. Wildfire Lessons, Vol. 65 No. 1. [Online] [Cited: June 15, 2011.] http://www.wildfirelessons.net/documents/Arizonas_Rodeo-Chediski_Fire_A_Forest_Health_Problem_FMT_Vol65_1.pdf.
23. Wallow Fire 2011 - Round Valley. The Other McBrides. [Online] [Cited: June 14, 2011.] http://www.wesmcb.com/2011/06/wallow-fire/.
24. InciWeb the Incident Information System: Wallow. [Online] [Cited: June 18, 2011.] http://www.inciweb.org/incident/2262/.


25. Active Fire Mapping Program. Fire Detection Maps. [Online] [Cited: June 18, 2011.] http://activefiremaps.fs.fed.us/activefiremaps.php?op=archive&rCode=swx&sensor=modis.
26. Gupta, Ravi P. Remote Sensing Geology. Berlin/Heidelberg/New York: Springer-Verlag, 2003.
27. Busch, D. Digital SLR Cameras & Photography For Dummies. 3rd ed. 2009.
28. Magnan, P. Detection of Visible Photons in CCD and CMOS. Riddle. [Online] [Cited: June 9, 2011.] www.riddle.ru/dl/ccd/Magnan/520CCD-CMOS/520comparison.pdf.
29. MICRO SWITCH Sensing and Control. Hall Effect Sensing and Application. Honeywell. [Online] [Cited: June 6, 2011.] http://content.honeywell.com/sensing/prodinfo/solidstate/technical/hallbook.pdf.
30. Forslund, A. Designing a Miniaturized Fluxgate Magnetometer. Division of Space and Plasma Physics, School of Electrical Engineering. [Online] [Cited: June 11, 2011.] http://www.ee.kth.se/php/modules/publications/reports/2006/XR-EE-ALP_2006_002.pdf.
31. Hauser, H., et al. Magnetoresistive Sensors. Institut für Industrielle Elektronik und Materialwissenschaften. [Online] June 2000. [Cited: June 12, 2011.] http://www.iemw.tuwien.ac.at/publication/workshop0600/Hauser.pdf.
32. Kjernsmo, K. How to Use a Compass. [Online] [Cited: June 8, 2011.] http://www.learn-orienteering.org/old/lesson3.html.
33. Tilt Sensor. Ladyada. [Online] [Cited: June 5, 2011.] http://www.ladyada.net/learn/sensors/tilt.html.
34. Electrolytic Level Tiltmeter. TriTech Group Limited. [Online] [Cited: June 11, 2011.] www.tritech.com.sg/instrument/_media/pdf/UserManual/tiltMeters/electrolyticTiltSensorTES-AN-31EL-B.pdf.
35. Clifford, M. and Gomez, L. Measuring Tilt with Low-g Accelerometers. Sensor Products. [Online] May 2005. [Cited: June 10, 2011.] http://www.freescale.com/files/sensors/doc/app_note/AN3107.pdf.
36. Piezoelectric Accelerometers: Theory and Application. Metra Mess- und Frequenztechnik. [Online] 2001. [Cited: June 8, 2011.] http://www.gracey.com/downloads/accelerometers.pdf.
37. Capacitive Accelerometers. PCB Piezotronics. [Online] [Cited: June 10, 2011.] http://www.pcb.com/Linked_Documents/Vibration/Cap_Accels_0704.pdf.
38. JA GPS. The Theory and Practice of GPS. [Online] [Cited: June 15, 2011.] www.ja-gps.com.au/what-is-gps.aspx.
39. Omega. Thermocouple Introduction and Theory. [Online] [Cited: June 14, 2011.] www.omega.com/temperature/z/pdf/z021-032.pdf.
40. Sensortec Inc. Thermistor Theory. [Online] [Cited: June 15, 2011.] www.sensortecinc.com/docs/technical_resources/Thermistor_Theory.pdf.
41. Guzik, G. T. HASP - Student Payload Interface Manual. HASP Manual.
42. Tega, J. Troposphere. Universe Today. [Online] [Cited: June 12, 2011.] www.universetoday.com/41559/troposphere/.
43. Schwartz, Noaki. CBS research. [Online] 2009. [Cited: June 12, 2011.]
44. Remote Sensing Introduction. University of Calgary. [Online] [Cited: June 10, 2011.] http://www.ccrs.nrcan.gc.ca/resource/tutor/fundam/chapter1/05_e.php.
45. Electro-Optical Imaging Systems. Federation of American Scientists. [Online] [Cited: June 10, 2011.] http://www.crisp.nus.edu.sg/~research/tutorial/image.htm.
46. Karimi, Hassan A. and Hammad, Amin. Telegeoinformatics: Location-Based Computing and Services. Boca Raton: CRC Press, 2004.


47. Why Is an Image Formed? The Physics Classroom. [Online] [Cited: June 12, 2011.] http://www.physicsclassroom.com/class/refln/u13l2a.cfm.
48. Angle of View Formula. Rainbow CCTV Products. [Online] [Cited: June 12, 2011.] http://www.isorainbow.com/tech/angle.html.
49. Rayleigh Criterion. COSMOS. Swinburne University of Technology. [Online] [Cited: June 14, 2011.] http://astronomy.swin.edu.au/cosmos/R/Rayleigh+Criterion.
50. Typical Quantum Efficiency. Awaiba. [Online] [Cited: June 15, 2011.] http://cheerall.com/pictures/awaiba.com.
51. Steward, D. Fluxgate Sensor Analysis. ANSOFT Corporation. [Online] [Cited: June 11, 2011.] http://www.ansoft.com/inspiringdesign_02/Flux_Gate_Sensor_Analysis.pdf.
52. Magnetoresistive Sensors for Magnetic Field Measurement. Philips. [Online] September 6, 2000. [Cited: June 12, 2011.] http://www.nxp.com/acrobat_download2/various/SC17_GENERAL_MAG_2-1.pdf.
53. Electrolytic Tilt Sensors and Inclinometers. Spectron. [Online] [Cited: June 12, 2011.] www.spectronsensors.com/primer-2.php.
54. Hoffman, K. Applying the Wheatstone Bridge Circuit. Hottinger Baldwin Messtechnik. [Online] [Cited: June 12, 2011.] http://www.hbm.com.pl/pdf/w1569.pdf.
55. Wang, Lei. Remote Sensing. [Lecture]. LSU Dept. of Geography & Anthropology, June 10, 2011.
56. Chandra, A. M. and Ghosh, Santi Kumar. Remote Sensing and Geographical Information System. 1st ed. Oxford: Alpha Science International, 2007.

3.0 Goals, Objectives, Requirements

3.1 Mission Goal

The goal of this project is to construct a payload that will study areas of environmental disasters from high altitude by using geographically located images.

3.2 Objectives

3.2.1 Science Objectives

• Identify characteristics of affected areas to establish ground truth
• Collect and record images and data at high altitude
• Extract and process data stored on the payload
• Determine the latitude, longitude, orientation, and scale of each image
• Analyze and compare obtained information to image data collected by other remote sensing sources
• Determine the effects of hazards on terrain imaged by the payload

3.2.2 Technical Objectives


• Successfully launch an optical sensor payload on HASP
• Determine the location and orientation of the payload in relation to HASP and the ground
• Extract, process, and analyze data stored on the payload
• Use geospatial software to analyze and compare flight data with data from alternative sources

3.3 Science Background and Requirements

3.3.1 Science Background

Remote sensing is the technique of acquiring information about a large geographical area from a distance. Remote sensing imagery is used in a variety of ways: it can be used to determine the amount of landscape change and to examine the progress of recovery in an area affected by a natural phenomenon or hazard event. Some of the more common applications of remote sensing images are archaeological investigations, forestry, agriculture, soil mapping, land use and cover mapping, and identifying paths for building roads. (1)

3.3.2 Characteristics of Remote Sensing

This section covers several basic parameters of remote sensing. Visual remote sensing depends on parameters such as light wavelength, resolution, angular field of view, and image location and orientation. Calibrating the wavelength is important because remote sensing success depends on how much detail can be seen by the camera; the camera will use filters to see the near-infrared and mid-infrared bands. The angular field of view determines the size of the footprint a camera covers on the ground at a given altitude. Image orientation and location are used to compare data against other verified sources to estimate the accuracy of the images taken. Each of these parameters provides the researcher with specific information to identify ground features.

3.3.2.1 Image Location

Image location is the geographical location of the recorded image. It is used to distinguish features of an image, match them with identified geographical features, and pinpoint the depicted scene or area. To determine the image location, geographical coordinates such as the longitude and latitude of the object of study must be identified. For example, to observe the state of Arizona the sensor's view must fall within 109°3’ W to 114°50’ W longitude and 31°20’ N to 37° N latitude.

3.3.2.2 Image Orientation

Image orientation refers to the tilting and/or rotation of an image caused by the attitude of HASP. Correct image orientation is needed in order to accurately convert the raw image from the camera into a surface picture that can be compared with known data.


Figure 1 Angular rotation caused by HASP

Figure 1 above demonstrates the displacement caused by HASP rotating around the z-axis. The compass sensor will give the angle of rotation for each image. The angle of rotation will then be used to recover the correct/desired orientation with the use of trigonometric functions, as sketched in the example below.
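To make the trigonometry concrete, here is a minimal Python sketch that rotates a pixel offset back through a measured compass angle. The numbers are hypothetical, and this is only an illustration of the math, not the team's flight or analysis software.

```python
import math

def derotate(dx, dy, rotation_deg):
    """Rotate an offset (dx, dy) measured in image axes back into
    north-aligned axes, undoing a clockwise rotation of the payload
    by rotation_deg about the z-axis."""
    a = math.radians(rotation_deg)
    # Standard 2-D rotation matrix applied to the offset vector.
    gx = dx * math.cos(a) - dy * math.sin(a)
    gy = dx * math.sin(a) + dy * math.cos(a)
    return gx, gy

# Example: a point 100 pixels right of the image center, with the
# payload rotated 30 degrees, maps to (~86.6, ~50.0) in ground axes.
print(derotate(100.0, 0.0, 30.0))
```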

Figure 2 View of camera tilt angle

Figure 2 above demonstrates the tilt caused by the pendulum motion of HASP. Tilt will be measured with respect to both the x and y axes to give the tilt in the normal plane. The tilt sensor will give tilt angles image by image; the ground displacement that a given tilt produces can be estimated as shown below.
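As a rough sanity check, the horizontal displacement of the image center caused by a small tilt over flat ground is approximately H·tan(tilt). The altitude and tilt angle below are illustrative assumptions, not mission measurements.

```python
import math

def tilt_offset_m(altitude_m, tilt_deg):
    """Approximate horizontal displacement of the image center
    on flat ground caused by tilting the camera by tilt_deg."""
    return altitude_m * math.tan(math.radians(tilt_deg))

# Example: at a 36 km float altitude, a 2 degree tilt shifts the
# image center by roughly 1.26 km on the ground.
print(tilt_offset_m(36_000, 2.0))
```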


3.3.2.3 Angular Field of View

The angular field of view is the field of view of a lens, also defined as the range of angles over which a sensor can detect radiation. The field of view of a lens is ordinarily measured in degrees or radians. One radian is the angle that subtends an arc of a circle whose length is equal to the radius, as shown below in Figure 3. Since 360° = 2π radians, 1 radian ≈ 57.2958°.

Figure 3 Diagram illustrating the definition of a radian

Figure 4 below illustrates the angular field of view used in remote sensing. (2) The two factors that determine the angular field of view are the physical size of the sensitive element of the sensor and the focal length of the optics.

Figure 4 Angular Field of View


The formula for calculating the angular field of view is:

𝛼 = 2 tan⁻¹(𝑑 / 2𝑓)

Where,

𝑓 = focal length

𝑑 = size of the sensitive element of the sensor
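As a numeric check of this formula, here is a short Python sketch. The sensor width and focal length are illustrative assumptions, not the flight camera's actual specification.

```python
import math

def angular_fov_deg(sensor_size_mm, focal_length_mm):
    """Angular field of view: alpha = 2 * atan(d / (2 * f))."""
    return math.degrees(2 * math.atan(sensor_size_mm / (2 * focal_length_mm)))

# Example: a 6.4 mm sensor behind a 5.5 mm lens gives roughly the
# 60 degree field of view named in the science requirements.
print(angular_fov_deg(6.4, 5.5))  # ~60.4 degrees
```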

3.3.2.4 Electromagnetic Spectrum

The electromagnetic spectrum (EMS) is the extensive range of wavelengths and photon energies carried by light. Remote sensors detect these wavelengths and energies to identify ground features. The spectrum spans a vast band of frequencies extending from radio waves to gamma rays. The regions of the EMS commonly used in remote sensing for observing vegetation are the visible, mid-infrared, and near-infrared, because maximum illumination by the sun occurs in these regions.

When shortwave radiation (energy) from the sun comes into contact with the earth's surface, the energy is reflected by, transmitted into, or absorbed and emitted by the surface. A number of properties of the radiation change on contact, such as magnitude, direction, wavelength, polarization, and phase. When the remote sensor records this, both spatial and spectral information is provided. (3) Note that the light used to observe an object must have a wavelength approximately the same size as or smaller than the object. The visible light spectrum has wavelengths ranging from 400 to 700 nm, the near-infrared region from 800 to 1300 nm, and the mid-infrared from 1550 to 1750 nm.

Multispectral imaging measures the energy at several of these wavelengths. Images taken at different wavelengths are layered to make composite images by displaying the images at various wavelengths as red, green, or blue in the final picture. The combination of these images creates a distinct color configuration used to identify surface features. For example, leaves containing chlorophyll absorb radiation in the red and blue wavelengths but reflect green wavelengths. Therefore leaves appear green when the amount of chlorophyll is high; when the amount of chlorophyll is low, leaves appear red or yellow because there is less absorption and more reflection of the red wavelengths. (3) Using this relationship between wavelength and object will help determine the health of vegetation. Figure 5 shows the reflectance curves for vegetation, clear lake water, and dry bare soil, features found on the earth's surface. For example, green vegetation has the highest reflectance between 0.6 and 1.4 μm. The formula used to detect these surface features is the spectral reflectance formula:

𝑝(𝜆) = [𝐸𝑅(𝜆)/𝐸𝐼(𝜆)] × 100


Where,

p(λ) = Spectral reflectance (reflectivity) at a particular wavelength.

𝐸𝑅(λ) = Energy of wavelength reflected from object

𝐸𝐼(𝜆) = Energy of wavelength incident upon the object (3)
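Transcribing the spectral reflectance formula directly into code (the energy values below are made up for illustration):

```python
def spectral_reflectance_pct(e_reflected, e_incident):
    """p(lambda) = E_R(lambda) / E_I(lambda) * 100, in percent."""
    return e_reflected / e_incident * 100.0

# Example: a surface reflecting 45 units of 850 nm near-infrared
# light out of 100 units incident has 45% reflectance.
print(spectral_reflectance_pct(45.0, 100.0))
```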

Figure 5 Typical spectral reflectance curves for vegetation, soil, and water. (4)

3.3.2.5 Resolution

Resolution is the number of pixels contained in an image. Resolution is important in remote sensing because it defines the features that can be distinguished on the ground. The way objects appear in remote sensing depends on four main types of resolution: spatial, angular, spectral, and radiometric. Each of these affects the accuracy and effectiveness of remote sensors.

Spatial resolution is the size of the smallest feature that can be identified on the ground. Restricted by the pixel size, spatial resolution determines the detail visible in the image. Observation of large features on an image requires only low spatial resolution, while detection of small objects demands high spatial resolution. High spatial resolution is ideal when observing disturbances in the health of vegetation, because the smaller the pixel, the more detail is visible. Figure 6 shows an image of a football field detected by a sensor with a spatial resolution of 1 m; the effective area of coverage per pixel is 1 m x 1 m. The 30 m resolution is the least fine, or coarsest, because objects and features smaller than the spatial resolution are unresolvable on the ground. Each pixel of the 30 m resolution image is spatially equivalent to 900 pixels of the 1 m resolution image, as the short check below confirms. (5)
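The 900:1 equivalence follows directly from the area ratio of the pixel footprints:

```python
def pixel_area_ratio(coarse_res_m, fine_res_m):
    """How many fine-resolution pixels cover one coarse pixel."""
    return (coarse_res_m / fine_res_m) ** 2

print(pixel_area_ratio(30, 1))  # 900.0
```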


[Figure 6 panels, left to right: 1 m, 2 m, 5 m, 10 m, 20 m, and 30 m resolution]

Figure 6 shows the relationship between resolution and pixels in spatial resolution. (6)

Angular resolution can be defined as the field of view seen by one pixel. The wavelength of the radiation and the diameter of the objective lens determine the limiting angular resolution. Figure 7 shows the relationship between spatial and angular resolution. The simplest way to get the angular resolution is to divide the angular field of view by the number of pixels, which gives the angular view per pixel. The Rayleigh criterion mathematically defines the smallest possible angular resolution; it is determined by the wavelength of the light used and the aperture (diameter of the lens). The formula used to calculate it is:

𝛿 = 1.22 (𝜆 / 𝐷)

Where,

δ = Rayleigh criterion (the smallest resolvable angle)

λ = wavelength

D = diameter of the lens, obtained from D = f / (f-number)
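Putting numbers into the Rayleigh criterion (the lens values are illustrative assumptions, not the flight camera's specification):

```python
def rayleigh_criterion_rad(wavelength_m, lens_diameter_m):
    """Smallest resolvable angle: delta = 1.22 * (lambda / D)."""
    return 1.22 * wavelength_m / lens_diameter_m

def lens_diameter_m(focal_length_m, f_number):
    """Aperture diameter from focal length and f-number: D = f / N."""
    return focal_length_m / f_number

# Example: green light (550 nm) through a 5.5 mm f/2.8 lens.
D = lens_diameter_m(5.5e-3, 2.8)
print(rayleigh_criterion_rad(550e-9, D))  # ~3.4e-4 rad
```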


Once the angular resolution is determined, the spatial resolution can be calculated by using the small angle approximation. The small angle approximation can be used to determine the ground size of a pixel because the field of view for any one pixel is much smaller than one radian. The formula for the small angle approximation is:

𝐷/2 = 𝐻 (𝜃/2)

Where,

D = diameter (ground size of the pixel footprint)

H = height (altitude of the sensor)

𝜃 = the angle in radians
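Combining the per-pixel angle with the small angle approximation gives the ground footprint of a single pixel. The altitude, field of view, and pixel count below are illustrative assumptions, not the payload's final specification.

```python
import math

def ground_pixel_size_m(altitude_m, fov_deg, pixels_across):
    """Spatial resolution via the small angle approximation:
    theta = FOV / pixels (angle per pixel), then D = H * theta."""
    theta = math.radians(fov_deg) / pixels_across
    return altitude_m * theta

# Example: a 60 degree field of view imaged onto 2048 pixels from
# 36 km altitude gives roughly an 18 m ground pixel, within the
# 30 m spatial resolution requirement of Section 3.3.4.
print(ground_pixel_size_m(36_000, 60.0, 2048))
```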

Figure 7 shows the relationship between spatial and angular resolution

Spectral resolution is the sensing and recording of features in narrow wavelength bands of the EMS; it distinguishes color and light intensity. Different features and details of an



image can be identified when compared over distinct wavelength ranges. The narrower the wavelength range for a particular band, the finer the spectral resolution. Multispectral sensors that detect very narrow spectral bands in the visible, near-infrared, and mid-infrared parts of the electromagnetic spectrum are called hyperspectral sensors. When observing landscapes this type of sensor is valuable because it detects the way different objects respond to the narrow bands. (5)

Radiometric resolution is determined by a sensor's sensitivity to the intensity of the radiation; it describes the definite data content of the image. The signal to noise ratio of the sensor determines the radiometric resolution of a camera. The sensing system senses slight differences in reflected or emitted energy, and finer radiometric resolution increases the sensitivity to these differences. The reflected energy is transferred as an analog signal, which is then converted to a digital number using an analog-to-digital converter (ADC). This resolution is measured in bits or in multispectral levels. (7) Positive digital numbers ranging from 0 up to a selected power of 2 represent the radiometric resolution of the image. The range of brightness from absence of color to full combination of colors is represented in binary format (0 to 255 for 8 bits). The brightness level of a pixel in the image depends on the number of bits used to represent the energy recorded. For example, a sensor recording data with 8 bits has 2^8 = 256 digital values available, ranging from 0 to 255. The value range is determined by the equation:

𝑁 = 2^𝑅

Where,

N = range (number of available digital values)

R = radiometric depth (8)
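The same arithmetic in code, for two representative bit depths:

```python
def dn_range(radiometric_depth_bits):
    """Number of digital values available: N = 2**R."""
    return 2 ** radiometric_depth_bits

# An 8-bit sensor yields 256 levels (0-255);
# a 12-bit sensor yields 4096 levels (0-4095).
for bits in (8, 12):
    print(bits, dn_range(bits))
```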

3.3.3 Establishing Ground Truth of Images

There are a few basic steps to consider before determining ground truth. They include certifying that records of data images and reference materials are available for the specific location, establishing the proper size for the area of study, creating a possible legend for the final map, and setting a map scale and level of accuracy. There are many sampling methods, such as sampling aerial photographs and soil; these methods are discussed in the following sections. (9)

3.3.3.1 Aerial Photography

Aerial photography refers to the taking of ground images from high altitude by aircraft, rockets, and spacecraft. (10) Historical and/or recent images of the geographical location must be examined to identify changes in topography and vegetation patterns. If both historical and recent photographs of the landscape are available, comparing them will reveal changes that may be the effects of recent or past phenomena, such as hazard events in the area. (11)

Aerial photography can be used to extend the information obtained from the soil collected in the field. When using aerial images for soil mapping, a geological map will show subsurface


geological features. With some knowledge of geological features, soil materials can be interpreted from photos. (11) There are many types of aerial photography, such as vertical, oblique, trimetrogon, multiple lens, convergent, and panoramic.

Vertical aerial photography allows researchers to observe and examine the environmental features of a terrain. In this method, the sensor is positioned directly above (facing straight down at) the terrain, with the camera axis no more than 3° from the vertical; photographs tilted between 3° and 90° are considered oblique. The accuracy of an image of a flat terrain depends on the distance between the camera and the ground and on the angular field of view. (12)

Aerial photography uses overlap to ensure that all areas of study are photographed. Overlap is when one photo covers a particular percentage of the area of another. The usual overlap for aerial photography is 60 to 80% forward and 30% lateral. Because of distortions at the edges of photographs caused by tilt, displacement, and lens distortion, a forward overlap of at least 60% must be implemented. If there happens to be an extreme amount of tilting not accounted for by the sensor system, a forward overlap of 60 to 80% will still produce three to four images of each geographical area from which to construct a new photograph. The overlap requirement also sets the pacing of exposures, as estimated below. (13)
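A small sketch of how the overlap requirement paces the exposures; the footprint size and drift speed are assumptions for illustration, not mission numbers:

```python
def capture_interval_s(footprint_m, overlap_fraction, ground_speed_mps):
    """Maximum time between exposures so that consecutive images
    overlap by overlap_fraction: the platform may advance only
    (1 - overlap) * footprint between frames."""
    advance_m = (1.0 - overlap_fraction) * footprint_m
    return advance_m / ground_speed_mps

# Example: a 40 km footprint, 60% forward overlap, and a 10 m/s
# balloon drift allow at most one exposure every 1600 s.
print(capture_interval_s(40_000, 0.60, 10.0))
```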

3.3.3.2 Soil Features

Soil is the upper layer of the earth's surface, comprised of rock and mineral particles mixed with decayed organic matter. There are three main features of soil: color, texture, and structure. For the purpose of remote sensing, the color of the soil will be the primary feature to detect. Factors affecting the color of soil are its age, wetness, the climate in which it formed, and its parent material. Soil is stratified into three layers. The true color of topsoil is covered with dark organic matter and can only be observed in the subsoil; for remote sensing purposes, the effects caused by devastations can be detected on the topsoil. Old, seasonally moist soil is indicated by bright colors, showing that the soil has repeatedly undergone saturation and drying. If the soil appears mottled grey, it indicates that the soil has experienced long periods of wetness. (14) According to the flight profile discussed in §3.4.6, the probable flight path will be from New Mexico to Arizona. The type of soil found in Arizona is mountain soil, which is shallow, rocky, and fragmented, comprised of coarse gravel and rock fragments. (15) For instance, Arizona's state soil is light brown fine sandy loam at the surface, reddish brown sandy clay loam in the upper subsoil, and reddish brown clay loam in the lower subsoil. (16) New Mexico is known for its state soil, the Penistaja. The Penistaja soil creates productive rangelands, which are used for grazing, wildlife, and recreation. The surface layer is a brown fine sandy loam, the subsoil is a strong sandy clay loam, and the substratum layer is a reddish yellow sandy loam. (16)

3.3.3.2.1 Fire Effects on Soil

Physical and chemical properties, nutrient properties, soil temperature, soil moisture, and soil biota are the categories of fire-related impacts on the soil of a terrain. For the purpose of remote sensing, the focus will be on the physical properties that can be seen at the surface.


The heat from wildfires may cause loss of structure, reduction of porosity, and a change in the color of the soil. (17) In severe fires, soil color changes can be detected: a reddened layer about 1 to 8 cm thick forms at the site, brighter in color than the unburned soil. The redder hues detected in the soil result from the transfer of iron oxide and the elimination of organic matter. (18) Figure 8 shows an image of the Poomacha Fire taken by NASA's Terra satellite on November 6, 2007. The reddened layer helps identify soil that was burned by giving it a bright pink or red color when observed using remote sensing; unburned dry land is detected by a light pink color. To detect these changes the red and near-infrared spectral regions are used, because they cover the red regions of the spectral reflectance curve for vegetation.

Figure 8 NASA's Terra satellite image of November 6, 2007 showing the effect of the Poomacha Fire on the terrain. (19)

3.3.3.3 Vegetation

Vegetation is the overall plant cover over a specific area. The main vegetation types in Arizona and New Mexico are grasslands, scrublands, desert, and forest and woodlands.

3.3.3.3.1 Wildfires and Their Effects on Vegetation

“Over the last century, riparian communities along the Middle Rio Grande in New Mexico have become increasingly xeric due to increased fuel biomass, long-term drought conditions, and flood control. Salt cedar (Tamarix ramosissima) and Russian olive (Elaeagnus angustifolia), highly invasive exotic plant species, not only make up a large percentage of the fuel biomass but also appear to outcompete native species such as Rio Grande cottonwood (Populus deltoides ssp. wislizeni) by rapidly replacing them after wildfire events. Plant invasions can alter ecosystems by changing fuel properties and subsequently affecting fire behavior characteristics such as frequency, intensity, seasonality, extent, and type of fire. Rapid regeneration of invasive

Page 20: Advanced PACER Program Preliminary Design Review Document ...laspace.lsu.edu/pacer/Experiment/2011/documentation/Advanced_PAC… · Advanced PACER Program . Preliminary Design Review

Team PDR v1.8 5/8/2012 11:12 AM 15

species (salt cedar, in particular) at Middle Rio Grande sites immediately following wildfire events may create an invasive plant-fire regime cycle which, in turn, may make reestablishment of native plant communities extremely difficult. Wildfire as a disturbance process is becoming increasingly important in desert riparian ecosystems and is a major issue of concern for land managers in New Mexico. Quantifying the immediate and long term responses of plants, animals, and water to wildfire will aid in developing environmentally sound policies to manage ecosystems exposed to fire events, and develop knowledge regarding ecosystem responses to wildfire which can be used to justify wildfire management plans.” (20) US Forest Service

“The wildfires leave a legacy of environmental devastation that is evident for years, especially in areas that have been scorched several times before. Some of the damage may never be reversed. Invasive weeds and grasses could crowd out native plants and shrubs, accelerating erosion and leading to more frequent wildfires. Pine stands that have been a signature feature of many mountainside communities could become just a memory. Small birds, rabbits and other animals dependent on the rapidly disappearing native vegetation will struggle to maintain a foothold, while some endangered species will find themselves locked into increasingly imperiled islands of refuge.” (20) US Forest Service

“Of the three primary drainage basins of Capulin Canyon burned by the April 1996 fire, the Capulin Creek basin responded with severe flooding (including significant erosion and sediment transport) during significant rainfall events of the monsoon season. Widespread erosive sheet wash and rilling occurred on hill slopes within this basin in response to the first storm of the season. Rilling is a very slow-acting form of erosion whereby a thin film of water (sheet wash) transports soil particles by rolling them along the ground. Studies show that after numerous fires, the soil may tend toward being more water repellent. This means that rain water is not absorbed as much and hence flows at the surface and causes soil erosion. The North Tributary basin produced at least one debris flow in response to the first storm of the season, and the hill slopes exhibited extensive erosion of the mineral soil to a depth of 5 cm and downslope movement of up to boulder-sized material.” Susan H. Cannon and Steven L. Reneau, US Geological Survey, Geologic Hazards Team.


Figure 9 Photograph of hill slope in Capulin Canyon showing complete tree mortality and total consumption of tree needles, ground-covering vegetation, and soil duff, typical of a high-intensity fire.

3.3.3.4 A Brief Look into Past Wildfires of Arizona

Arizona has a range of historical wildfires, such as the “1,000 Battle Grand Canyon Forest Blaze,” which occurred on June 22, 1960. The fire was noted as one of the worst fires in Arizona in the 1960s. It consumed 7,500 acres of vegetation, such as Douglas fir, timber, scrub pine, and brush, in the National Forest near the slope of the Grand Canyon. Thirty-five mile per hour winds drove the fire up the slope of the Grand Canyon. (21) Another historical wildfire in Arizona is the Rodeo fire, located in east-central Arizona on the Fort Apache Indian Reservation, approximately 100 miles east-northeast of Phoenix. The fire began June 18, 2002. Within 3 hours it covered 9,000 acres; by June 20, 2002 it had consumed more than 85,000 acres. The fire moved at 1.5 mph, destroying 2 to 20 ponderosa pines and 150 to 200 small trees per acre. Figure 10 shows the forest before and after the Rodeo-Chediski Fire. (22)

Figure 10 The right image shows the forest before the Rodeo-Chediski Fire and the left shows the aftermath of the fire. (22)


3.3.3.5 Current Wildfire

In Springerville, AZ, in the White Mountains, the wildfire known as the Wallow Fire, classified as one of the largest wildfires in Arizona's history, has grown to approximately 511,118 acres of vegetation. The geographical location of this fire is approximately 33.602° latitude and -109.499° longitude. The wildfire began Sunday, May 29, 2011. The fire is reported to be heading towards the town of Luna, NM. (23)

Smoke from the fire has reached as far northeast as Albuquerque and Santa Fe, NM. Figure 11 shows a map of all areas affected by the fire as of June 19, 2011. The fire was 44% contained as of June 19, 2011, and is expected to show extreme growth. Southwest winds are expected to decrease to 20 to 30 mph, while temperatures will continue to range from 73 to 91 degrees and humidity will remain low. (24)

Figure 11 Fire-affected areas of the Wallow Fire (25)

3.3.3.6 Geometric Distortion

Images taken at high altitudes by sensors are known for having geometric distortions, which result in incorrect spatial information. Geometric distortions can be categorized into two main groups: systematic and non-systematic. Systematic distortions are predictable; they result from known mechanisms and regular relative motions.


Systematic distortions can be removed during the pre-processing of the raw data. Non-systematic distortions are the result of unaccounted-for deviations and agitations; for the purposes of this remote sensing experiment, the distortions of concern are non-systematic. For factors affecting the geometry of images and photographs, see Table 1. (26)

Table 1 Factors Affecting Geometry

Factor                                                         Systematic (S) / Non-systematic (NS)
1. Sensor system factors
   - Instrument error                                          NS, S
   - Panoramic distortion                                      NS, S
   - Aspect ratio                                              NS, S
2. Sensor craft attitude and perturbations
   - Variation in velocity and altitude of the sensor craft    NS
   - Pitch, roll, yaw distortions due to platform instability  NS
3. Earth's shape
   - Effects of the Earth's curvature                          NS
   - Skewing due to the Earth's rotation                       S
4. Relief displacement
   - Local terrain relief                                      NS
   - Sensor look angle                                         S

For the purpose of imaging a terrain, the distortions related to the earth's shape and spin must be considered. The geometric scale of an image of the earth's surface is affected by the earth's curvature, resulting in a panoramic effect. To account for the curvature of the earth in a non-systematic distortion, the image taken must be registered to specific ground coordinates called Ground Control Points (GCPs). GCPs are points on the earth's surface whose image coordinates (row and pixel number) directly correspond with known geographical coordinates (longitude and latitude). There are two ways to correct a geometric distortion: geometric rectification and registration. When two or more images are used to distinguish changes in a terrain, geometric registration is used: the two images are referenced to each other to identify the changes. A sketch of fitting a transform to GCPs appears below. (26)
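To illustrate how GCPs tie image coordinates to geographic coordinates, the sketch below fits an affine transform to a handful of control points by least squares. The GCPs are fabricated for the example, and this is one common rectification approach rather than necessarily the method the team will adopt.

```python
import numpy as np

def fit_affine(pixel_xy, geo_lonlat):
    """Least-squares affine transform mapping (col, row) -> (lon, lat).
    Needs at least 3 non-collinear ground control points."""
    px = np.asarray(pixel_xy, dtype=float)
    A = np.column_stack([px, np.ones(len(px))])  # rows of [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(geo_lonlat, dtype=float), rcond=None)
    return coeffs  # 3x2 matrix of affine coefficients

# Three hypothetical GCPs: image corners matched to map coordinates.
gcp_pixels = [(0, 0), (2048, 0), (0, 1536)]
gcp_geo = [(-109.50, 33.70), (-109.10, 33.70), (-109.50, 33.40)]
M = fit_affine(gcp_pixels, gcp_geo)

# Geolocate an arbitrary pixel with the fitted transform.
print(np.array([1024, 768, 1.0]) @ M)  # ~(-109.30, 33.55)
```

With three or more well-spread GCPs per image, the same fitted transform geolocates every other pixel in that image.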

3.3.4 Science Requirements

• Take a sufficient number of images to form a continuous mosaic of the fire-affected areas.
• Capture a 40 km footprint of the fire-affected areas.
• Determine the attitude and location of HASP.
• Format the images to be compatible with quantitative tools.


• Images obtained by the camera shall be in visible, near-infrared, and mid-infrared wavelengths.
• Post-flight software shall perform numerical analysis of the images.
• The angular field of view shall be 60° (checked against the footprint requirement below).
• Spatial resolution shall be 30 m or less.
• Determine the geographical center of each image.
• The team shall acquire data from other sources to profile healthy vegetation in Arizona.
• The location of the fire-affected areas shall be determined using US Forest Service data.
• The team shall compare data collected by the payload with current and historic Landsat data.

3.4 Technical Background and Requirements

3.4.1 Technical Background

The experiment is conducted by a remote sensing apparatus (camera) that will capture image data for the duration of the flight. At the beginning of the flight's ascent, images will be captured and stored on the payload to help calibrate "object recognition" for the post-flight image processing. Once the High Altitude Student Payload (HASP) has reached an altitude of 36 km, the payload will begin capturing images of the ground at a set time interval. Each image collected from the remote sensor is assigned coordinates at its corners to assist in image orientation; these coordinates are relative to HASP's location. Additionally, the angles of rotation about the x, y, and z axes will be determined using one or more orientation sensors. Once the payload is recovered, the captured images and relevant information will be extracted with a computer.

3.4.2 Image Sensors & Cameras

3.4.2.1 Digital Cameras

A digital camera is a device that transforms light into the electrical charges that become the captured image. Photons are the light particles that leave the light source and are reflected by the object. The reflected photons enter the camera through a series of stages, as depicted in figure 12: in general, they pass through an aperture and a series of lenses, then through the shutter, and onto the image sensor. The image sensor is a tightly spaced grid of extremely small photo sensors. When a photon hits a photo sensor, it is absorbed by the photo sensor's semiconductor material. For each photon absorbed, the photo sensor emits an electrical particle called an electron, and the photon's energy is transferred to that electron as electrical charge. The greater the light intensity, the stronger the electrical charge. Each electrical charge is then converted to a binary number that represents the color and brightness of one out of millions of pixels. (27)

Figure 12 Digital camera skeleton diagram [30]


3.4.2.2 CCD/CMOS

An image sensor is a mechanism that converts light into electric charge and processes it into an electrical signal. Two types of image sensors are commonly used in digital capture devices: the charge coupled device (CCD) and the complementary metal oxide semiconductor (CMOS). A CCD sensor's pixel charges are transferred through output nodes to be converted to a voltage, conditioned, and sent off as an analog signal. Since each pixel can be devoted to light capture, the uniformity of the output is high. A CMOS sensor's pixels have their own charge-to-voltage conversion, and the sensor often includes amplifiers, noise correction, and an analog-to-digital converter, so the chip outputs digital bits. These additional features increase the design complexity and limit the area available for light capture, and because each pixel does its own conversion, the uniformity is lower than that of a CCD sensor. However, a CMOS sensor requires less off-chip circuitry for basic operation. Additionally, CCD and CMOS image sensors each have a specific wavelength range in which they are considered effective. Figure 13 shows the quantum efficiency, the percentage of incident photons absorbed by the sensor, for typical CCD and CMOS image sensors. Typical image sensors are most sensitive in the visible range (400 nm-700 nm), as depicted in figure 13; CCD and CMOS sensors are also slightly sensitive to the near infrared, but with lower quantum efficiency. Image sensors are commonly found in image recording devices such as digital cameras and web cams. The parameters of an image sensor and camera are resolution, zoom, aspect ratio, viewing angle, exposure, aperture, dynamic range, signal-to-noise ratio, low light sensitivity, and wavelength sensitivity. These parameters determine the conditions under which an image sensor can be used and how effective it will be. (28)

Figure 13 Quantum efficiency for CMOS image sensor (50)

3.4.2.3 Parameters of Camera and Image Sensor

The parameters of an image sensor and camera determine the quality of images that can be captured, and each parameter should be fully understood in order to evaluate performance. The resolution of an image sensor is the amount of detail that can be captured without becoming blurry. Zoom is the magnification of an image when the source is at a fixed distance. A camera can have optical zoom, digital zoom, or both. Optical zoom is considered the true zoom, since a series of lenses at set distances draws the image closer to the image sensor. Digital zoom, by contrast, takes a portion of the image and expands it; the image appears closer, but in reality it is merely enlarged and its quality is reduced. The aspect ratio is the ratio of the width to the height of an image, written as two numbers separated by a colon.


The left number is the width and the right number is the height of the image. The viewing angle of a camera is simply the maximum angle at which a display can be viewed without significant performance loss. Exposure is the amount of light collected by the sensor for a single picture; aperture affects exposure through the size of the lens opening that lets light onto the sensor. The dynamic range of a camera is the "ratio between the maximum and minimum measurable light intensities". (28) Signal-to-noise ratio characterizes the quality of the detected signal relative to the amount of noise detected by a CCD or CMOS sensor. Low light sensitivity is the minimum amount of light that an image sensor can detect without losing image data. Wavelength sensitivity refers to the wavelength bands that an image sensor can detect; a sensor can be designed for specific bands, although a large portion of CMOS sensors are sensitive not only to visible light but also to the near infrared. In conclusion, understanding the parameters of an image sensor and camera helps in choosing the most appropriate image sensor and camera combination. (28)

3.4.2.4 Image Scale

Image scale is the relationship between the viewing angle of the optical sensor and the size of the area that has been captured. Image scale can be represented in more than one way: for instance, 7 meters to a pixel, 1 pixel/7 meters, and 1 pixel:7 meters are all ways of writing the same image scale.

Image scale is affected by the distance between the optical sensor and the object. When the distance from the optical sensor to the object increases, each pixel covers more ground, so the scale of the image also increases: if the scale was 1 pixel:7 meters, the increased image scale would be something like 1 pixel:30 meters.
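This relationship is easy to illustrate with a short script. The sketch below assumes a nadir-pointing camera over flat ground; the 60° field of view matches the science requirement, while the 4000-pixel sensor width is an assumed placeholder, not a chosen part:

    import math

    def meters_per_pixel(altitude_m: float, fov_deg: float, pixels_across: int) -> float:
        """Ground distance covered by one pixel for a nadir-pointing camera.

        Assumes flat ground and simple pinhole geometry: the ground swath is
        2 * altitude * tan(FOV/2), split evenly across the pixel row.
        """
        swath_m = 2.0 * altitude_m * math.tan(math.radians(fov_deg / 2.0))
        return swath_m / pixels_across

    # Scale grows linearly with distance for a fixed field of view and pixel count.
    for altitude in (1_000, 10_000, 35_000):  # meters
        print(f"{altitude:>6} m altitude -> 1 pixel : {meters_per_pixel(altitude, 60.0, 4000):.1f} m")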

Calculation objectives:

- Determine the image size
- Determine the pixel size


Figure 14 Depiction of the field of view for an optical sensor at a given altitude

Determining the actual area captured on the ground (assuming a 35 km altitude, a 60° field of view, and a 3:2 image aspect ratio):

Step 1: Half-width of ground captured = 35 km × tan(30°) ≈ 20 km
Step 2: Width of ground captured = 2 × 20 km = 40 km
Step 3: Using the 3:2 aspect ratio, 3/2 = 40/x; cross multiply
Step 4: 3x = 80, so x = 80/3 = 26 2/3 km
Step 5: Ground captured dimensions: 40 km × 26 2/3 km

Determining the area per pixel and resolution (for a 30 m ground sample distance):

Step 1: Number of pixels in the width = 40000 m / 30 m = 1333 1/3 pixels
Step 2: Number of pixels in the height = 26666 2/3 m / 30 m ≈ 888.9 pixels
Step 3: Resolution = 1333 1/3 × 888.9 ≈ 1,185,185 pixels ≈ 1.19 Mp
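The same calculation can be scripted. This is a minimal sketch of the steps above, using the altitude, field of view, aspect ratio, and ground sample distance as given (without the intermediate rounding, so the results differ slightly):

    import math

    altitude_km = 35.0             # float altitude used in the worked example
    fov_deg = 60.0                 # full angular field of view
    aspect_w, aspect_h = 3.0, 2.0  # image aspect ratio
    gsd_m = 30.0                   # required ground sample distance per pixel

    # Ground footprint: half-width = altitude * tan(FOV/2), doubled for full width.
    width_km = 2.0 * altitude_km * math.tan(math.radians(fov_deg / 2.0))
    height_km = width_km * aspect_h / aspect_w

    # Pixel counts at the required ground sample distance.
    px_w = width_km * 1000.0 / gsd_m
    px_h = height_km * 1000.0 / gsd_m

    print(f"footprint: {width_km:.1f} km x {height_km:.1f} km")
    print(f"pixels: {px_w:.0f} x {px_h:.0f} = {px_w * px_h / 1e6:.2f} Mp")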

3.4.2.5 Multispectral Camera

3.4.3 Orientation Sensors


A variety of sensors are available to measure the orientation of an object. It is important to choose a sensor with sufficient range and sensitivity for remote sensing, and it is also desirable to choose a sensor that can be calibrated.

3.4.3.1 Magnetic Field Sensors

A magnetic field sensor is a device that measures the strength and/or direction of a magnetic field such as the Earth's. Several types of magnetic field sensors exist; three are commonly used: the Hall effect, the flux gate, and the magneto resistive sensor.

3.4.3.1.1 Hall Effect Magnetic Field Sensor

The Hall sensor is a popular magnetic sensor used to measure magnetic fields greater than 1 mT, with operating temperatures ranging from -100°C to +100°C. Typical sensitivities are on the order of 0.7 mV. A Hall effect magnetic field sensor is a semiconductor device that uses the deflection of moving electric charge by a magnetic field to generate a voltage proportional to the magnetic field strength. Hall effect sensors are more sensitive to temperature variation than flux gate sensors. They are driven by the basic principle of the Hall effect: when a current passes through a semiconducting material (the Hall element) with no magnetic field present, the current distribution is uniform, so there is no potential difference across the material's output. As soon as a perpendicular magnetic field is present, a Lorentz force is exerted on the current. The Lorentz force distorts the current distribution and results in a potential difference across the sensor. This is the Hall voltage, described by V_H ∝ I × B, where I is the current and B is the magnetic field: "The Hall voltage is proportional to the vector cross product of the current and the magnetic field". (29) However, these voltages are on the order of microvolts and require amplification. (29)
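The cross-product relationship is easy to check numerically. This is a toy sketch only; the proportionality constant k is a made-up placeholder, since the text gives only the proportionality V_H ∝ I × B:

    import numpy as np

    k = 1.0  # placeholder proportionality constant, not a datasheet value

    I_vec = np.array([1.0, 0.0, 0.0])    # drive current along x (amps)
    B_vec = np.array([0.0, 0.0, 50e-6])  # Earth-scale field along z (tesla)

    # Hall voltage follows the vector cross product of current and field:
    # largest when the field is perpendicular to the current, zero when parallel.
    V_H = k * np.cross(I_vec, B_vec)
    print(V_H)  # microvolt-scale output, which is why amplification is needed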


3.4.3.1.2 Flux Gate Magnetic Field Sensor

The flux gate sensor measures static or low frequency magnetic fields and is sensitive to both field direction and magnitude. The typical field magnitude for a flux gate sensor is up to 1 mT. A flux gate sensor is a magnetic core with a drive coil and a pick-up coil wound around the iron core. The iron core is forced into positive and negative saturation by applying an AC current to the drive coil. Changes in the magnetic flux through the pick-up coil induce a voltage that can be detected. "Without an external field, the magnetic flux in the core will depend only on the field created by the drive coil". (30) The core mainly stays in magnetic saturation, with two directions of saturation, and the changes between the two saturation states induce a voltage in the pick-up coil. In order to cancel the large signal created by the drive coil, an additional core is used with a drive coil wound in the opposite direction from the first; the drive-coil signal then cancels out, and the pick-up coil can detect magnetic fields other than the drive coil's. The saturation point of the iron core varies with the strength of an external magnetic field, and this saturation point is sensed and measured by an electrical circuit. (30)

3.4.3.1.3 Magneto Resistive Magnetic Field Sensor

The magneto resistive sensor is a highly sensitive sensor with a wide operating temperature range and an operating frequency range approaching 10 MHz. The typical measurement range for a magneto resistive sensor is up to 200 μT. A magneto resistive (MR) sensor makes use of the magneto resistive effect: it houses a material that changes its resistivity in the presence of an external magnetic field. The material used in MR sensors is a permalloy (a nickel-iron magnetic alloy) with a composition of 19% Fe and 81% Ni. MR sensors are highly sensitive in detecting magnetic fields, have a wide operating temperature range, and have low sensitivity to mechanical stress. Because of their high sensitivity, they are ideal for detecting weak magnetic fields such as the Earth's. The basic operating principle of an MR sensor is a strip of permalloy with a current (I) passing through the material from left to right, as seen in figure 15. When there is no external magnetic field, the internal magnetization vector of the permalloy is parallel to the current flow. However, if a magnetic field H is applied parallel to the strip of permalloy but perpendicular to the current flow, the internal magnetization vector rotates through an angle α. As a result, the resistance of the permalloy changes as a function of the rotation angle α. The governing equation for the resistance is R = R0 + ΔR0 cos²(α), where R0 and ΔR0 are material parameters. [36] Because of the squared cosine term, the relationship between resistance and magnetic field is non-linear. (31)

Figure 15 Magneto resistive effect in a permalloy (31)

3.4.3.1.4 Magnetic Declination

Magnetic field sensors are prone to error due to magnetic declination, the angle between magnetic north and true north. Since these sensors are often calibrated to detect the Earth's magnetic field, the difference between magnetic north and true north must be corrected for. This error depends on the magnetic sensor's location relative to the North Pole. By convention, magnetic declination is a negative angle counterclockwise from true north and a positive angle clockwise from true north. (32)
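Two of the relationships above are simple to script: the MR resistance law R = R0 + ΔR0·cos²(α), and the declination correction from a magnetic heading to a true heading. This is an illustrative sketch; the parameter values are placeholders, not measured ones:

    import math

    def mr_resistance(alpha_deg: float, r0: float = 1000.0, dr0: float = 30.0) -> float:
        """Permalloy strip resistance versus magnetization rotation angle (ohms).

        R = R0 + dR0 * cos^2(alpha); r0 and dr0 are placeholder material parameters.
        """
        return r0 + dr0 * math.cos(math.radians(alpha_deg)) ** 2

    def true_heading(magnetic_heading_deg: float, declination_deg: float) -> float:
        """Correct a magnetic-north heading to true north.

        Declination is positive clockwise (east) of true north, per the text.
        """
        return (magnetic_heading_deg + declination_deg) % 360.0

    print(mr_resistance(0.0), mr_resistance(90.0))  # 1030.0 and 1000.0 ohms
    print(true_heading(100.0, 10.0))                # 110.0 degrees true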


3.4.3.2 Tilt Sensors

A tilt sensor measures the tilt of one or more axes relative to a reference plane. Tilt sensors come in three major categories: force balanced, solid state, and fluid filled. Force balanced tilt sensors are very bulky and heavy, so they are not considered here. The tilt sensors of interest are the rolling ball, mercury type, electrolytic, gyroscope, and accelerometer tilt sensors.

3.4.3.2.1 Tilt Switches

Tilt switches allow the detection of the orientation of an object. They are small and inexpensive, and larger tilt switches are able to switch power without any extra circuitry. Most tilt switches consist of a cylindrical cavity that houses a conductive roller ball or mercury, with two conductive elements at one end of the cavity. When the sensor is oriented so that the two conductive elements point downward, as seen in figure 16, the roller ball or mercury rolls down onto the two elements and shorts them out, acting like a switch. Mercury tilt switches are becoming rare because mercury is recognized as very toxic, so roller ball tilt switches are more common. Roller ball tilt switches are easy to make and physically robust, but mercury switches are less susceptible to vibration due to mercury's high density. Both sensors have a sensitivity range of ±15 degrees: when the tilt angle reaches ±15 degrees the switch closes. The sign of the angle depends on the orientation of the tilt switch relative to the 0 degree reference. (33)

Figure 16 Mercury tilt sensor showing the two conducting elements [39]

3.4.3.2.2 Electrolytic Tilt Sensor

An electrolytic tilt sensor is a conductive fluid sealed within a glass or ceramic housing. The sensor has three leads: the common, positive, and negative electrodes. When the sensor is electrically level, both end electrodes are evenly submerged in the electrolytic fluid, and a level sensor produces equal signal outputs between the common, negative, and positive electrodes. If the sensor rotates about its sensitive axis, as seen in figure 17, one electrode becomes more submerged in the fluid than the other, and the electrical balance at the outputs changes accordingly. The imbalance at the output is directly proportional to the angle of rotation. However, if a DC voltage is applied across the tilt sensor's terminals, the electrolyte deteriorates irreversibly, so an AC voltage must be applied across the terminals. The change in the ratio of resistance between the common electrode and the other electrodes can be measured with a half Wheatstone bridge configuration. The half bridge circuit applies a voltage UE across the positive and negative electrodes and measures the output voltage UA between the common electrode and one of the end electrodes, as depicted in figure 18. The electrodes of the electrolytic tilt sensor are connected at R1 and R2, and the common electrode is the point where the output voltage UA is measured between R1/R2 and R3/R4. All fluid filled tilt sensors are limited to a sensing range of ±90 degrees, because beyond that one electrode becomes completely withdrawn from or submerged in the electrolytic fluid. This limitation can be resolved by adding a second tilt sensor offset 90 degrees from the first on the sensitive axis. Some electrolytic tilt sensors offer sub-arc-second resolution. The drawback of fluid filled tilt sensors is that they are extremely sensitive to vibration and temperature: if the electrolytic fluid vibrates, the output voltage is strongly affected and does not produce stable results, and if the temperature of the fluid changes, the resistance of the fluid changes and affects the output voltage. Therefore, an electrolytic tilt sensor can only be used in environments with very small external forces and fluctuations. (34)

Figure 17 Electrolytic tilt sensor rotated about its axis of sensitivity (53)
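The half-bridge output is straightforward to model. In the sketch below, the bridge equation U_A = U_E · (R2/(R1+R2) − R4/(R3+R4)) is the standard Wheatstone result rather than a formula quoted from the text, and the resistance values are placeholders:

    def half_bridge_output(ue: float, r1: float, r2: float, r3: float, r4: float) -> float:
        """Output voltage U_A of a Wheatstone half bridge driven by U_E.

        R1/R2 model the two halves of the electrolytic sensor; R3/R4 are the
        fixed reference divider. A level sensor (R1 == R2) gives U_A == 0.
        """
        return ue * (r2 / (r1 + r2) - r4 / (r3 + r4))

    UE = 5.0  # AC drive amplitude in volts (DC would destroy the electrolyte)
    print(half_bridge_output(UE, 1000.0, 1000.0, 1000.0, 1000.0))  # 0.0, level
    print(half_bridge_output(UE, 1100.0, 900.0, 1000.0, 1000.0))   # -0.25 V, tilted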


3.4.3.2.3 Accelerometer Tilt Sensor

An accelerometer is one of the most common inertial sensors and comes in mechanical and electromechanical forms. Accelerometers are available that measure acceleration, the rate of increase or decrease in velocity, on up to three axes, and they can measure both static and dynamic accelerations. (35) The two major categories of accelerometers are the piezoelectric and the capacitive accelerometer. The piezoelectric accelerometer is based on the piezoelectric effect, in which a "resulting force produces a strain on the crystal causing charge to accumulate on opposing surfaces". (36) Inside the piezoelectric accelerometer there is a piezoelectric material, fixed on one side to a rigid surface at the sensor's base and attached on the other end to a seismic mass. When vibrations are introduced to the sensor, a force acts on the piezoelectric element equal to the product of the acceleration and the seismic mass. The piezoelectric effect then produces a charge with an output voltage proportional to the applied force. (36)

(1) F = m·a
(2) q = d33·F
(3) u = (d33·F·d)/(e33·A)

Variables:
m: seismic mass
a: applied acceleration
d33, e33: piezoelectric material constants
A: area of the piezoelectric element
d: thickness of the piezoelectric element
u: output voltage
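A quick numeric check of equations (1)-(3); the material constants below are arbitrary placeholders, chosen only to show the chain from acceleration to output voltage:

    # Piezoelectric accelerometer response, following equations (1)-(3).
    m = 0.01        # seismic mass (kg), placeholder
    a = 9.81        # applied acceleration (m/s^2), 1 g
    d33 = 300e-12   # piezo charge constant (C/N), placeholder
    e33 = 1.2e-8    # piezo permittivity (F/m), placeholder
    area = 1e-4     # element area (m^2)
    thick = 1e-3    # element thickness (m)

    F = m * a                             # (1) force on the element
    q = d33 * F                           # (2) accumulated charge
    u = (d33 * F * thick) / (e33 * area)  # (3) open-circuit output voltage
    print(F, q, u)                        # ~0.098 N, ~29 pC, ~25 mV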

The core of a capacitive accelerometer is a surface-micromachined polysilicon structure made up of "springs, masses, and motion sensing components". (37) The layers of fixed plates and masses form a differential capacitor. Any movement of the mass changes the capacitance, resulting in a square wave output with an amplitude proportional to the acceleration. If the accelerometer has more than one axis, a demodulator on each axis conditions the signal to determine the direction of the acceleration.

Figure 18 Half Wheatstone bridge configuration (54)
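For tilt sensing, a static accelerometer reading can be converted to pitch and roll angles. The formulas below are the standard gravity-vector decomposition, included as an illustrative sketch rather than a method specified in this design:

    import math

    def pitch_roll_deg(ax: float, ay: float, az: float) -> tuple[float, float]:
        """Pitch and roll (degrees) from a static 3-axis accelerometer reading.

        Treats the measured vector as gravity only, so this holds when the
        payload is not otherwise accelerating.
        """
        pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
        roll = math.degrees(math.atan2(ay, math.hypot(ax, az)))
        return pitch, roll

    print(pitch_roll_deg(0.0, 0.0, 1.0))    # level: (0.0, 0.0)
    print(pitch_roll_deg(0.5, 0.0, 0.866))  # about 30 degrees of pitch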


The signals are then fed through a series of filters that take the analog signal and convert it to a duty cycle. (37)

3.4.3.3 Sun Sensor

A sun sensor is a device that uses the sun's illumination to provide attitude information. Sun sensors consist of photosensitive elements that measure the intensity of light, placed so that any orientation other than pointing directly towards the sun will be detected. An analog sun sensor has an accuracy of up to 1°, while the accuracy of digital sun sensors can be as high as 0.1°. For instance, one such device has a separator perpendicular to a plane, with a photodiode on each side blocked from the other by the separator. If the sun sensor platform points directly towards the sun, each photodiode receives an equal amount of sunlight. When the sensor begins to tilt, one photodiode is illuminated more than the other, and the unbalanced light intensity causes a difference in output. The difference in the photodiodes' outputs can be used to determine the attitude of the platform about one axis. For a second axis, an additional separator perpendicular to the first can be used, again with a photodiode on each side. A sun sensor can therefore be used to determine the attitude and altitude of an object.

3.4.4 Position Sensor

3.4.4.1 GPS Sensor

A GPS (Global Positioning System) sensor uses a satellite-based navigation system consisting of 24 satellites orbiting 20,000 km above the Earth's surface; each satellite orbits the Earth twice in a 24 hour period. To determine its position, the GPS sensor compares the time a signal was transmitted from a satellite with the time the signal was received; the difference in time determines the distance between the satellite and the sensor. Once the distances to at least three satellites are determined, a triangulation algorithm can compute the latitude and longitude of the GPS sensor; to calculate latitude, longitude, and altitude, a GPS sensor needs 4 or more satellites. A GPS sensor can also provide the speed and direction of travel by continuously updating its position. Due to Department of Defense regulations, the greatest accuracy for a civilian GPS is "± (10-15 meters)". (38) The civilian L1 carrier frequency is 1575.42 MHz.

3.4.5 System Monitoring

System monitoring plays a crucial role in maintaining the health of the circuitry and electronics. Variables that affect the payload's internal health are temperature, voltage spikes, and humidity; a variable that affects the payload's mechanical structure is impact shock.

Needs more work!


3.4.5.1 Temperature Sensor

A temperature sensor is a device that measures temperature and converts that temperature into a voltage. The types of temperature sensors available are the thermocouple, the thermistor, the resistive thermal device, the integrated circuit sensor, and the silicon diode. Each type behaves differently from the others.

3.4.5.1.1 Thermocouple Sensor

A thermocouple produces a voltage when two conductors are at different temperatures; the voltage produced is proportional to the temperature difference. The conductor at the higher temperature has thermally energized electrons compared to the electrons in the other conductor. The energized electrons from the hot conductor flow toward the lowest energy state, into the cold conductor, creating a negative charge there. This flow of electrons creates an electrostatic voltage across the two conductors that is proportional to the temperature difference. A voltmeter would be the natural instrument for measuring the voltage across the two conductors; however, the voltmeter's two leads create two more metallic junctions. If the conductor and the lead are made of the same material, the lead has no effect on the thermocouple system, but if they differ, the metallic junction adds an EMF in opposition to the original voltage. The voltmeter then reads a voltage proportional to the difference between the original voltage and the added junction voltage, so there is no way to determine the original voltage without knowing the added junction voltage. There are several techniques for determining the added junction voltage, such as an ice bath, a thermistor, or an integrated circuit sensor. A thermocouple can be used over a wider range of temperatures than the thermistor, resistive thermal device, or integrated circuit sensor, but its complexity makes it challenging to interface. (39)

3.4.5.1.2 Thermistor Sensor & Resistive Thermal Device

Thermistors are temperature sensing devices whose resistance changes as the temperature changes. There is one major difference between the thermistor and the resistive thermal device: a thermistor's resistance decreases as temperature increases, while a resistive thermal device's resistance increases as the temperature increases. One benefit of using a thermistor is the large resistance change in response to subtle temperature deviations, which makes thermistors sensitive to small temperature changes. Thermistor tolerances are not as tight as those of resistive thermal devices, but the extended resolution offsets the difference. The resistance profiles of thermistors and resistive thermal devices, as seen in figure 19, are not linear, so the control algorithms are not straightforward. A thermistor can be used for any temperature between "-80°C to 300°C". [48] A resistive thermal device can be used for temperature measurements from "-200°C to 850°C". (40)

Figure 19 Resistance profile for thermistor, RTD, and Thermocouple
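The nonlinearity shown in figure 19 is why thermistor readings are usually linearized in software. One common approach (not specified in this document) is the beta-parameter model; the sketch below uses placeholder part values:

    import math

    def thermistor_temp_c(r_ohms: float, r25: float = 10_000.0, beta: float = 3950.0) -> float:
        """Convert thermistor resistance to temperature via the beta model.

        1/T = 1/T0 + (1/beta) * ln(R/R0), with T in kelvin and the reference
        point R0 at T0 = 25 C. r25 and beta are placeholder datasheet values.
        """
        t0 = 298.15  # 25 C in kelvin
        inv_t = 1.0 / t0 + math.log(r_ohms / r25) / beta
        return 1.0 / inv_t - 273.15

    print(thermistor_temp_c(10_000.0))  # 25.0 C at the reference resistance
    print(thermistor_temp_c(32_650.0))  # roughly 0 C for this placeholder part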


3.4.5.1.3 IC Temperature Sensor & Diode

3.4.5.2 Current Sensing

Current sensing is the means of determining how much current is passing through a wire at a critical location. The current is a way to determine the power consumption of the payload, and it is also a means of monitoring for short circuits and voltage spikes. Current sensing can be implemented by determining the voltage drop across a very low resistance load; applying Ohm's law to the voltage and resistance gives the current passing through the load.
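A minimal sketch of shunt-based current sensing with Ohm's law; the shunt value and measured voltage below are placeholders, not design values:

    # Shunt current sensing: I = V / R (Ohm's law), P = V_bus * I.
    r_shunt = 0.1   # ohms, low-resistance sense resistor (placeholder)
    v_drop = 0.035  # volts measured across the shunt (placeholder)
    v_bus = 30.0    # HASP supply voltage

    current = v_drop / r_shunt  # 0.35 A through the load
    power = v_bus * current     # 10.5 W drawn from the supply
    print(f"{current:.2f} A, {power:.1f} W")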

3.4.5.3 Humidity Sensor

To be continued….

3.4.6 Flight Characteristics

HASP is a medium weight payload platform with a total suspended weight of 2000 pounds. It uses an 11 million cubic foot zero-pressure balloon to rise to a float altitude of about 36 km, with a flight duration of approximately 17 hours. The zero-pressure polyethylene film balloon is attached to HASP before the launch and inflated with helium gas; the payloads are then integrated with HASP, ready for launch. The payload takes about two hours to reach the float altitude, traveling through different layers of the atmosphere to reach float, which is within the stratosphere. The air flow in the stratosphere is mostly horizontal, causing the payload to move horizontally in a direction determined by the wind. Throughout the flight, the payload captures images; knowing the total flight duration and the rate at which images are captured, the total number of pictures can be calculated, as sketched below. According to the 2006 HASP group, high altitude winds at Ft. Sumner, New Mexico have prevailed from east to west and have been moderate in speed, which helps to maximize time aloft. (41)
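For example (a sketch; the capture interval is a placeholder, since the design value has not yet been set):

    # Total images = flight duration / capture interval.
    flight_hours = 17.0  # approximate HASP flight duration
    interval_s = 30.0    # image capture interval (placeholder)

    total_images = int(flight_hours * 3600.0 / interval_s)
    print(total_images)  # 2040 images over the full flight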

Figure 20 Altitude in kilometers versus Universal Time profile for the 2009 flight of HASP


As the payload travels up, it experiences the different temperatures, pressures, and humidities of the different layers of the atmosphere. Immediately after taking off from the ground, it passes through the troposphere, the lowest major layer of the Earth's atmosphere. The majority of the clouds and water vapor in the atmosphere occur in the troposphere, and it is also a region of rising and falling winds, which influence the rate at which the payload rises. Before the payload reaches the stratosphere, it passes through the tropopause. In the stratosphere the temperature increases with altitude; the top of the stratosphere has a temperature of about -3°C. According to HASP, the HASP unit and integrated payloads are installed within the CSBF thermal and vacuum chamber and subjected to a temperature range of -65°C to +65°C, so the payload can survive the temperatures in the stratosphere. (42)

The payload remains at the float altitude for about 14 hours. At this altitude, the payload will move horizontally from east to west for approximately 700 km from New Mexico, making ground tracking easier. The payload will experience an increasing magnetic declination but a decreasing magnetic field strength as it moves from east to west. After flight termination, the payload will take about 45 minutes to come down on a parachute. (41)

3.4.7 HASP Interface

HASP provides several features to a payload, such as a mounting bracket, electrical connections, and telemetry. The payload will be attached to a ¼ inch thick PVC (polyvinyl chloride) mounting plate on HASP while satisfying the size, weight, and shock requirements for a small student payload, as listed in table 2. The payload's volume footprint is limited to 15 cm x 15 cm x 30 cm and must not exceed this envelope. The payload mounting bracket includes two connectors, a "DB9 for serial up and down link and a EDAC 516-020 for power, discrete commands and analog output".

Figure 21 Ground track of the 2009 flight of HASP from launch in Ft. Sumner, New Mexico to west of Phoenix, Arizona. (41)


However, some payloads will not use all of the features provided by HASP. (HASP Interface Manual) (41)

Figure 22 Payload mounting bracket (units are centimeters) (41)

Table 2 Payload's mass and volume footprint restraints

Payload | Mass | Volume footprint | Shock
Small | 3 kg | 15 cm x 15 cm x 30 cm | 10 g vertical, 5 g horizontal

The HASP flight system uses a serial connection with "8 data bits, no parity, 1 stop bit and no flow control", running at 1200 baud. The connector itself uses only three pins: 2, 3, and 5. The pin functions are listed in table 3. (41)

Table 3 Description of pin number functions

Connector | Pin Number | Function
DB9 Male | 2 | Received Data
DB9 Male | 3 | Transmitted Data
DB9 Male | 5 | Signal Ground
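A minimal pyserial sketch of that 8N1, 1200 baud configuration; the port name is a placeholder for wherever the link appears on the test machine:

    import serial  # pyserial

    # 1200 baud, 8 data bits, no parity, 1 stop bit, no flow control,
    # matching the HASP serial link description above.
    link = serial.Serial(
        port="/dev/ttyUSB0",  # placeholder device name
        baudrate=1200,
        bytesize=serial.EIGHTBITS,
        parity=serial.PARITY_NONE,
        stopbits=serial.STOPBITS_ONE,
        xonxoff=False,        # no software flow control
        rtscts=False,         # no hardware flow control
        timeout=1.0,
    )
    link.write(b"status\n")   # example uplink record
    print(link.read(64))      # read up to 64 bytes of downlink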

The connector that distributes power, analog downlink, and discrete commands is the EDAC 516 connector; the pin functions are listed in table 4. The voltage rating for pins A, B, C, and D is 30 ± 2 VDC, and the four ground pins are W, T, U, and X.


The maximum current draw from the four power pins in parallel is 0.5 amps, with a transient rating of 1.5 amps for 1 second. Four pins are used to spread the current load, which prevents overheating and creates redundancy in case one of the power pins malfunctions. (41)

Table 4 Description of power pins for the EDAC 516

Function | EDAC Pins
30 ± 2 VDC | A, B, C, D
Power Ground | W, T, U, X
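Those ratings set the payload's power ceiling. A quick check (a sketch; the example load current is a placeholder):

    # Continuous power available from the EDAC 516 supply pins.
    v_nominal = 30.0  # volts (rated 30 +/- 2 VDC)
    i_max = 0.5       # amps continuous across the four parallel pins

    p_max = v_nominal * i_max
    print(f"continuous budget: {p_max:.1f} W")  # 15.0 W

    i_load = 0.35     # example payload draw (placeholder)
    assert i_load <= i_max, "payload exceeds the HASP current limit"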

3.4.8.2 Analysis and Correlation of Ground Images

The usefulness of an image in remote sensing depends, among other things, on its scale and resolution. The scale of an image is defined as the ratio of a distance on the image to the corresponding distance on the ground. (43)

The size and number of picture elements, or pixels, used to form an image determine the image resolution: the smaller the pixel size, the greater the resolution of the image. Both scale and resolution depend on the camera used to take pictures on the payload. The more resolution the payload camera has, the better the images available for post-flight analysis, and hence the more conclusive the results due to greater detail. (43) Image contrast is the difference in visual properties that makes an object (or its representation in an image) distinguishable from other objects and the background; it also refers to the variation in tone between light and dark spots on an image. For example, when viewing different types of vegetation in a given area, images with good contrast would show different shades in trees denoting aspects such as acid rain damage, plant age, and possible lack of soil nutrients. In cases where infrared imaging cannot make a clear-cut distinction in aerial images, contrast plays a major role in increasing definition and helping distinguish image elements. Image and GPS correlation is needed in order to locate the exact ground location of each image. The camera will keep UTC time; since GPS time is always set to UTC (Coordinated Universal Time/GMT), both the images and the GPS will share the same time base, so we can know precisely which images were taken and when. Both image and GPS data are analyzed using computer software. Each image then has a time stamp and geographic coordinates, so that a visual map of the payload flight can be generated from the resulting images.
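A minimal sketch of that correlation step: match each image's UTC time stamp to the nearest GPS fix. The sample records are invented for illustration:

    from datetime import datetime, timedelta

    # Invented sample data: (UTC time, latitude, longitude) GPS fixes.
    t0 = datetime(2012, 9, 1, 14, 0, 0)
    gps_fixes = [(t0 + timedelta(seconds=10 * i), 34.47 + 0.001 * i, -104.24 - 0.002 * i)
                 for i in range(100)]
    image_times = [t0 + timedelta(seconds=s) for s in (12, 47, 95)]

    # For each image, pick the GPS fix whose time stamp is closest.
    for t_img in image_times:
        fix = min(gps_fixes, key=lambda f: abs((f[0] - t_img).total_seconds()))
        print(t_img.time(), "->", fix[1], fix[2])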

In order to record images that are relevant to plant data, a camera with infrared capabilities is needed. Infrared sensing would allow the payload to detect surface reflectance, and hence the physical and chemical properties of the plants on the ground. The physical and chemical properties of plants can indicate how the plants have been affected by previous fires, and even plant age, given a resolution of at least 60 m. (43)

3.4.8.3 Ground Data Analysis

In order to have a basis of comparison for our data, we need to either collect image data from a verifiable source or collect real ground data.


Some verifiable sources include the United States Geological Survey (USGS). As the United States' largest water, earth, and biological science and civilian mapping agency, the USGS collects, monitors, analyzes, and provides scientific understanding about natural resource conditions, issues, and problems. A specific USGS tool that provides remote sensing images with a resolution of at least 100 m is GloVis. GloVis is a free source of remote sensing data collected by the Landsat satellites, which have been recording images from 1972 to 2011. (20) Another verified source of information is Google Earth, which provides imagery dating back to 1993; data downloaded from this source is free of charge, and some maps are as recent as 2011. The Burned Area Reflectance Classification (BARC) is a satellite-derived map of post-fire vegetation condition developed by the USDA Forest Service. The BARC has four classes: high, moderate, low, and unburned. This map product is used as an input to the burn severity map produced by the Burned Area Emergency Response (BAER) teams; BARC maps are made by comparing satellite near and mid infrared reflectance values. In the immediate aftermath of a wildfire, a Forest Service BAER team is dispatched to the site to prepare an emergency rehabilitation and restoration plan. One of the first tasks for this team is the creation of a burn severity map that highlights the areas of high, moderate, and low burn severity; this map then serves as a key component in the subsequent flood modeling and Geographic Information System (GIS) analysis. The BARC data is meant to be a main input into the development of the final burn severity map. (20) Physical data collection and analysis means going to the ground location and collecting samples for laboratory testing. This method requires extensive laboratory time, since team members have to travel to the areas in question, collect physical samples, bring those samples to the lab, and test them for specific aspects of fire damage to determine the extent and timing of the damage.

3.4.8.4 Image Analysis

ERDAS IMAGINE 2010 is computer software used to process remote sensing images. ERDAS requires a license to operate, and LSU currently holds an ERDAS license, so the software is available for our use. One of the many functions of this software is to isolate features specific to certain ranges of the electromagnetic spectrum, so data can be separated according to what is viewed at different levels of the electromagnetic spectrum. This all depends on whether the camera in question has infrared capabilities. (2)

**More information to be added after hands on experience with ERDAS this week. **


Figure 23 Reflectance Curve vs. Wavelength

From this graph we can see that the greatest variation in green vegetation reflectance occurs between wavelengths of 0.5 and 0.8 μm (micrometers). This means that, to increase our chances of seeing significant changes in green vegetation, it would be advisable to have a camera capable of taking pictures within these wavelengths, or two cameras to capture each wavelength range. The graph also shows the reflectance of green vegetation, dry vegetation, and soil, and how reflectance varies at different light wavelengths. Near infrared light is largely reflected by healthy green vegetation: near infrared band values will be very high in areas of healthy green vegetation and low in areas with little vegetation. Mid infrared light is largely reflected by rock and bare soil: mid infrared band values will be very high in bare, rocky areas with little vegetation and low in areas of healthy green vegetation. Imagery collected over a forest in a pre-fire condition will therefore have very high near infrared band values and very low mid infrared band values, while imagery collected over the same forest after a fire will have very low near infrared band values and very high mid infrared band values.
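One standard way to quantify that pre/post-fire contrast (not named in this document, but widely used for BARC-style products) is the Normalized Burn Ratio, NBR = (NIR − MIR)/(NIR + MIR); burning drives NBR down. A sketch with invented reflectance values:

    def nbr(nir: float, mir: float) -> float:
        """Normalized Burn Ratio from near- and mid-infrared reflectance."""
        return (nir - mir) / (nir + mir)

    # Invented reflectances following the pattern described above.
    pre_fire = nbr(nir=0.50, mir=0.15)    # healthy forest: high NIR, low MIR
    post_fire = nbr(nir=0.15, mir=0.40)   # burned area: low NIR, high MIR
    print(pre_fire, post_fire)            # ~0.54 versus ~-0.45
    print("dNBR:", pre_fire - post_fire)  # a large drop flags burn severity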

3.4.9 Technical Requirements

• The optical sensors shall provide a pixel resolution of 30 m in the near-infrared, mid-infrared, and visible wavelengths.
• Images shall be stored in a format that fits within the memory constraints. (+Software)
• The payload shall conform to HASP's interface requirements for a small student payload, as defined in the HASP interface manual.


o Mechanical
   The payload shall be 3 kg or less.
   The payload shall be 15 cm x 15 cm x 30 cm.

• The payload shall utilize the telemetry and command features of HASP.
• The payload shall remain fully functional during the thermal, vacuum, and shock pre-flight tests.
  o The payload shall survive temperatures from -60°C to 40°C.
  o The payload shall survive pressures from 5 mb to 1000 mb.
  o The payload shall survive a shock impact of 10 g vertical and 5 g horizontal.
• Post-flight software shall associate a time stamp with each GPS location.
• The voltage supplied by HASP shall be converted to meet the requirements of the optical sensor, attitude sensor, and the electronics.
• Images in the visible, near infrared, and mid infrared shall be captured at the same time.
• The picture overlap shall be at least 10%.
• The magnetic declination shall be determined over the span of the flight path.

4.0 Payload Design

[Provide sufficient details about your payload so that the reviewers can get a clear picture of your payload design.]

4.1 Principle of Operation

[What measurements is your payload going to make? What techniques will you use to make these measurements? How do your sensors function? How do your measurements flow from your scientific and technical goals (i.e. a traceability matrix)?]

4.2 System Design

[This is a high level description of your experiment showing and explaining the major components, processes and interfaces that make up your payload. A system level diagram should be included here. This diagram is then explained in detail in sections 4.2.1 and 4.2.2.]

4.2.1 Functional Groups

[Use this section to describe the major components in your payload. The function of these groups should be described as well as the elements within the group that together generate the function. Note: This section provides details on the groups shown in the system level diagram figure.]

Figure showing all major groups and interfaces making up the experiment


4.2.2 Group Interfaces

[Interfaces describe the relationship and interconnects between different groups. Such interfaces can include serial data, parallel data, analog signals, timing information, commands, power, mechanical, thermal, human intervention and can be unidirectional or bidirectional. All interfaces should be described and clearly indicated on the system diagram. Note: This section provides details on the groups shown in the system level diagram figure.]

4.2.3 Traceability

[Here you show how you can "trace" the individual aspects of your design back to the experiment requirements. This can be done, in part, with a table that lists each design element with the requirement that it satisfies.]

4.3 Electrical Design

[Describe your electrical design including sensors, sensor interface, controllers, data acquisition, storage, telemetry and power supply system. Electrical schematics are required here.]

For the PDR only: For sections 4.3.2, 4.3.3 and 4.3.4 detailed "systems design" drawings are needed for PDR. Detailed circuit schematics will be needed for CDR.

4.3.1 Sensors

[What sensor or sensors will be used in your payload? Give the part numbers and specifications. Show how the chosen part satisfies the measurement requirements. Also discuss the performance and linearity of the sensor in terms of environmental variations (i.e. temperature, pressure, magnetic field, electrical bias, etc.). Also discuss what sensor calibrations need to be done to take into account environmental variations. Much of this information comes from the sensor datasheet. One of the primary purposes of this section is to prove that your chosen sensors will A) function in the near-space environment and B) provide the measurement specified in the requirements.]

4.3.2 Sensor Interfacing

[Here you will need to show an electrical schematic as well as describe how the signal from your sensor is conditioned and converted to digital information. You will need to discuss how you set the readout gain and accuracy to satisfy the technical requirements.]

Table showing how each component / interface in your system satisfies the requirements listed in section 3

Figure(s) showing the electronic circuits for sensor signal conditioning, conversion to digital information and, if necessary, control.
PDR: high level system drawing
CDR: electronic circuit schematic


For the PDR only a high level system diagram is needed, but for the CDR a full electronic circuit schematic should be added.

4.3.3 Control Electronics

[Here you will need to show an electrical schematic as well as describe how your experiment is controlled and monitored.]

For the PDR only a high level system diagram is needed, but for the CDR a full electronic circuit schematic should be added.

4.3.4 Power Supply

[Here you will need to show an electrical schematic as well as describe how you will obtain power in the lab as well as during "flight", how this power is converted to the required voltages and currents needed and how it is distributed.]

For the PDR only a high level system diagram is needed, but for the CDR a full electronic circuit schematic should be added.

4.3.5 Power Budget

[You should also include a power budget identifying the power consumption of each component and showing how your system will supply this power throughout the flight. Also needed in this section is the proof that your batteries will deliver the required power at the cold temperature of a balloon flight. That is, you need to show your steps in how you "de-rate" your chosen batteries for cold temperatures.]

For the PDR use measured values of power consumption of the BalloonSat, datasheet power consumption values for your sensors and estimates for your sensor interface electronics. For CDR these values will need to be complete and exact.

4.4 Software Design

[This section describes the design of the controlling software in your flight payload as well as your analysis software on the ground.]

Figure(s) showing a schematic of the experiment control electronics including data storage, commanding, status display, etc.
PDR: high level system drawing
CDR: electronic circuit schematic

Figure(s) showing a schematic of how power is obtained, conditioned and distributed.
PDR: high level system drawing
CDR: electronic circuit schematic

Table showing your power budget


4.4.1 Data Format & Storage [Detail your data record format including time stamp, digital sensor data, environmental information, counters, etc. At what rate will you be acquiring data? You should be able to show how many bytes each data record will require, plus how many bytes of on-board storage will be required for your entire mission. Note that this section should be well defined at the PDR stage.] 4.4.2 Flight Software [Here you need to discuss the design of your flight software including the major processes involved in data acquisition, data storage, time stamping, commanding and flow control. ] For PDR you need to identify and discuss the major programs you will needed for before, during and after flight operations. You will need a high level flow chart for each of this programs. For CDR you will need, in addition, detailed flow charts and discussion for each major program element as well as all subroutines. 4.5 Thermal Design [What is the thermal environment you expect to encounter? What are the thermal operating ranges for components in your payload? How will you keep your payload in proper operating range?] 4.6 Mechanical Design [This section describes your mechanical layout, payload weight and issues such as … What are the mechanical stresses you expect to encounter during flight? What mechanical design features have you incorporated to assure your payload will survive the flight? What are the materials you will use?] 4.6.1 External Structure [What does your payload look like from the outside? Where are any external controls and / or indicators? How do you access the internals (i.e. lid, hinge). How do you attach your box to the balloon vehicle? For the PDR a simple sketch with basic dimensions will be needed to assure your box conforms to dimensional constraints imposed by the cold chamber, vacuum chamber and flight lines. For the CDR a full mechanical drawing with final dimensions will be needed.

Flow chart diagram(s) showing how your flight software will function.

Figure(s) showing a mechanical drawing (top, bottom, side) of the outside of your payload with dimensions.


4.6.2 Internal Structure

[What does your payload look like on the inside? How will all components fit in the required size constraint? What support members are required? How will these structures be secured? How will you route your electrical cabling?]

For the PDR a simple sketch showing the internal placement of your components will be needed. For the CDR a full mechanical drawing with final dimensions will be needed.

4.6.3 Weight Budget

[What is your weight budget and do you fit the weight constraint? List each component and indicate if the component weight is measured or calculated. Show your calculations. What components have you not yet taken into account? Show how you determine the uncertainty in your total weight estimate.]

For the PDR you should include at least all the groups identified in your system design; most of these weights will be estimated or calculated. For CDR all components need to be included and most components should have measured weights from prototypes.

5.0 Payload Development Plan

[Payload development occurs after PDR and is the period of time that is used to finalize the design of all parts of your payload for the CDR. During development you may need to experiment with different options for your electronics, software and mechanical systems in order to choose the best design as well as determine the final parameters (dimensions, resistor values, etc.) for your design. This is called prototyping and should not be confused with fabrication, which is what is done after CDR in order to construct your flight payload. You should identify all the prototyping you need to do and describe in detail the activities and testing you will be doing between PDR and CDR for all the major groups of your payload. This section is then used to develop your work breakdown structure (WBS), staffing plan and schedule required in section 9. Note that work effort flowcharts, tables of parameters to be determined and/or other figures will likely help organize and explain your plan.]

5.1 Electrical Design Development

Figure(s) showing a mechanical drawing (top, bottom, side) with dimensions of the inside of your payload, indicating clearly where each component is placed and the structures that hold it in place.

Table of weights of all of your components and the total


[Here you provide the details on how you will finalize your sensor choice, get from the system level diagrams of section 4.3 in this document to the circuit schematics with final values for all components, and complete your power budget as required for the CDR.]

5.2 Software Design Development

[Here you provide the details on how you will get from the general description and flowchart of section 4.4 in this document to the detailed flowcharts required for the CDR.]

5.3 Mechanical Design Development

[Here you provide the details on how you will get to the final mechanical drawings, component layout, and weight table required for the CDR.]

5.4 Mission Development

[At CDR you will need to include details about how you will calibrate your sensors, what ground software is needed and how you will process and analyze your data. All these are part of your "mission". In this section you provide the details on what you will need to do to develop these parts of the mission as well as any other tasks that don't easily fall within sections 5.1, 5.2 or 5.3.]

8.0 Project Management

[Describe the techniques that will be used to ensure meeting the experiment's objectives within the allocated schedule and budget. This should include discussion of project direction, authorization, communication, meetings, reviews, record keeping and monitoring.]

8.1 Organization and Responsibilities

[This section includes an Organization Chart and describes the team members and their responsibilities. Subsystem and subtask leaders and their authority level are identified. Contact information such as phone numbers and e-mail can be included here.]

8.2 Configuration Management Plan

[This section describes how the baseline design configuration will be documented and the techniques to be used to manage changes to this design during the life of the project. The process for requesting, reviewing and approving changes is identified here.]

8.3 Interface Control


[Describe the interfaces between subsystems as well as between the payload and "spacecraft" and the techniques to be used to control the definition of and changes to these interfaces.]

9.0 Master Schedule

[This section describes how you will organize and manage the effort associated with your payload. You may want to use Microsoft Project to organize your WBS, Staffing Plan and Timeline.]

9.1 Work Breakdown Structure (WBS)

[The WBS is a list of all the work tasks, in hierarchical form, for the project. It needs to be comprehensive and forms the framework for all the project staffing, budgeting, assignments and scheduling.]

For PDR Only: You should include a detailed WBS up to, but not beyond, CDR. That is, the WBS included here should detail the tasks described in your Payload Development Plan (section 5.0).

9.2 Staffing Plan

[Lists the individuals assigned to each WBS task and is used to determine if tasks are adequately staffed or if individuals are overloaded.]

For PDR Only: Your Pre-PDR version of this section, which identifies the leads for major task categories, should be included here. In addition, you need to add the detailed responsibility for tasks in your Payload Development Plan.

9.3 Timeline and Milestones

[This section includes a list of key milestones (and associated dates) for the project and a Gantt chart timeline. The timeline should be a weekly schedule organized by major WBS elements.]

For PDR Only: Your Pre-PDR version of this section, which provides a high level schedule for your overall project, should be included here. In addition, you should add another Gantt chart showing the detailed schedule leading up to CDR (i.e. your Payload Development Phase).

11.0 Risk Management and Contingency

[This section describes the major risks that have been identified that can cause significant impact to the project and prevent it from achieving its objectives within the allocated schedule and budget. List the programmatic and technical risks, assess their impact on the project and describe the plan to mitigate them.]


12.0 Glossary

[Define any terms that are used in your document. See below for examples.]

PACER   Physics & Aerospace Catalyst Experiences in Research
CDR     Critical Design Review
FRR     Flight Readiness Review
PDR     Preliminary Design Review
TBD     To be determined
TBS     To be supplied