Dr. Gabriele Cavallaro, Postdoctoral Researcher, High Productivity Data Processing Group, Juelich Supercomputing Centre, Germany
Remote Sensing Systems and Applications (1)
October 16th, 2018, Room V02 – 138
Cloud Computing & Big Data – PARALLEL & SCALABLE MACHINE LEARNING & DEEP LEARNING
PRACTICAL LECTURE 6.1
Practical Lecture 6.1 – Remote Sensing Systems and Applications
Outline
Outline of the Course
1. Cloud Computing & Big Data
2. Machine Learning Models in Clouds
3. Apache Spark for Cloud Applications
4. Virtualization & Data Center Design
5. Map-Reduce Computing Paradigm
6. Deep Learning driven by Big Data
7. Deep Learning Applications in Clouds
8. Infrastructure-As-A-Service (IAAS)
9. Platform-As-A-Service (PAAS)
10. Software-As-A-Service (SAAS)
11. Data Analytics & Cloud Data Mining
12. Docker & Container Management
13. OpenStack Cloud Operating System
14. Online Social Networking & Graphs
15. Data Streaming Tools & Applications
16. Epilogue
+ additional practical lectures for our hands-on exercises in context
Practical Topics
Theoretical / Conceptual Topics
Outline
Remote Sensing Background
Data Acquisition
Preprocessing
Feature Extraction and Selection
Copernicus: Sentinel 2 Mission
The term remote sensing was first used in the United States in the 1950s by Ms. Evelyn Pruitt of the U.S. Office of Naval Research
Remote (without physical contact) Sensing (measurement of information)
• Measurement of radiation of different wavelengths reflected or emitted from distant objects or materials
• They may be categorized by class/type, substance, and spatial distribution
[1] Satellite (1960)
Remote Sensing
[2] The Earth-Atmosphere Energy Balance
• Suitable for many applications
• Non-invasive method
• Satellite platforms
• Invaluable view
• Repetitive and consistent
Application Domain
A devastating tsunami hit many coastal regions in the Indian Ocean (2004)
• One of the most devastating natural disasters in recorded history
• 14 countries were hit
• How to assess the damage?
• Which are the most hit points?
• How to plan the humanitarian aid?
[3] The 2004 Indian Ocean Tsunami [4] Five years after Indian Ocean tsunami
Application Examples (1)
Global change detection and monitoring: Deforestation
Environmental assessment and monitoring: Urban growth
[6] Deforestation in Bolivia from 1986 to 2001
[5] Population growth from 1975 - 2010 of Manila
Application Examples (2)
• Perhaps the most common form of image interpretation
• Applications: environmental management, agricultural planning, health studies, climate and biodiversity monitoring, and land change detection
Classification of Remote Sensing Images
Generation of thematic maps
Different tasks of equal importance
Successful classification results depend on all the steps
Pipeline for Classification
New analysis challenges
5Vs: Volume, Variety, Velocity, Veracity and Value
Need of scalable methods and underlying infrastructures
CLASSIFICATION SYSTEM:
DATA ACQUISITION → PRE-PROCESSING → EXTRACT/SELECT FEATURES → CLASSIFICATION → POST-PROCESSING
Platforms and Sensors
Active Sensor: own source of illumination
Capture image in day and night
Any weather or cloud conditions
Passive Sensor: natural light available
Great quality satellite imagery
Multispectral and Hyperspectral technology
Platform: selected according to the application
[7] Active-and-passive-remote-sensing
Data are complex, noisy and may contain errors
Improve image quality as the basis for later analyses that will extract information
Detection and restoration of bad lines
Geometric rectification
Image registration
Radiometric calibration
Atmospheric correction
Preprocessing
Radiometric Calibration and Correction Process
[8] Radiometric Corrections
The value recorded for a given pixel includes:
reflected or emitted radiation from the surface
radiation scattered and emitted by the atmosphere
Most of the applications are interested in the actual surface values
The Feature Domain
Select suitable features for successfully implementing an image classification
How to obtain discriminating and independent features?
[9] G. Hughes
Too many features may decrease the classification accuracy (the Hughes phenomenon)
(1) : few raw features
(2): hundreds of raw features
Normalized Difference Vegetation Index (NDVI)
Create additional relevant features from the existing raw features in the data
Increase the predictive power of the classifier
NDVI = (NIR – R) / (NIR + R)
[11] NDVI
Bands: R, G, B, NIR
Bellingham, WA, US
[10] NDVI & Classification
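The NDVI formula above can be sketched per pixel in NumPy (the array names and reflectance values are illustrative assumptions, not taken from the slides):

```python
import numpy as np

def ndvi(nir, red, eps=1e-10):
    """Per-pixel NDVI = (NIR - R) / (NIR + R); eps avoids division by zero."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance bands (values are illustrative only)
nir = np.array([[0.50, 0.40], [0.30, 0.20]])
red = np.array([[0.10, 0.10], [0.10, 0.10]])
print(ndvi(nir, red))
```

NDVI values lie in [-1, 1]; dense vegetation reflects strongly in the NIR band, pushing the index toward 1.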
Spatial Information (1)
The scene complexity and the spatial resolution determine the number of mixed pixels
The spectral unmixing problem:
Identify the pure materials (endmembers)
Estimate their corresponding proportions (abundances)
mixed spectral signature mixed pixel
e.g., abundances: 65%, 20%, 5%, 6%, 4%
soft classification
Two models to analyze the mixed pixel
[12] A. Plaza et al.
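Under the linear mixing model, a mixed signature is a weighted sum of endmember signatures, x = E a. The sketch below uses synthetic signatures and plain least squares as a crude stand-in for a fully constrained unmixing algorithm; all values are illustrative:

```python
import numpy as np

# Linear mixing model: x = E @ a, with E the (bands x endmembers) signature
# matrix and a the abundance vector (non-negative, sums to one).
E = np.array([[0.9, 0.1, 0.3],
              [0.8, 0.2, 0.5],
              [0.1, 0.9, 0.4],
              [0.2, 0.7, 0.6]])          # 4 bands, 3 endmembers (synthetic)
a_true = np.array([0.65, 0.20, 0.15])    # abundances of the pure materials
x = E @ a_true                           # mixed spectral signature

# Unconstrained least-squares inversion, then clipping and renormalisation
# to approximate the non-negativity and sum-to-one constraints.
a_est, *_ = np.linalg.lstsq(E, x, rcond=None)
a_est = np.clip(a_est, 0, None)
a_est /= a_est.sum()
print(a_est)
```

In the noiseless case the least-squares solution recovers the true abundances exactly; real unmixing methods [12] enforce the constraints during the optimisation itself.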
Spatial Information (2)
When spatial resolution increases, structures are larger than the pixel size
The correlation between neighboring pixels increases
Adjacent pixels of a roof pixel belong to the same class with a high probability
Structures can be represented as regions of spatially connected pixels
The presence of mixed pixels cannot be avoided
Spatial contextual classifiers can exploit the correlation of pixels within a subset domain
Classification
Model choice
Mapping between low level features and high level information
Training
Training set must be representative
Evaluation
Measure the error rate (or performances)
Computational Complexity
Scalability
World's largest single earth observation programme
Directed by the European Commission in partnership with the European Space Agency (ESA)
Monitors the Earth, its environment and ecosystems
Free and open data policy
Features: Continuity, Global coverage, Frequent updates and Huge data volumes
Served by a set of dedicated satellites (the Sentinel families)
[13] Copernicus programme
[14] Sentinel Space
Copernicus
Copernicus
[15] Sentinels for Copernicus
Sentinel 2 Mission
• Platform: Twin polar-orbiting satellites, phased at 180° to each other
• Temporal resolution of 5 days at the equator in cloud-free conditions
Provides images of agriculture, forests, land-use change and land-cover change
Mapping biophysical variables, e.g., leaf chlorophyll/water content and leaf area index
Monitoring of coastal and inland waters and helping with risk and disaster mapping
[16] Earth Observation Mission Sentinel 2
~23 TB data stored per day
Copernicus Open Access Hub
[17] Copernicus Open Access Hub
Provides complete, free and open access to Sentinel-1, Sentinel-2 and Sentinel-3 products
Example of Data Retrieval (1)
Select the region of interest, the date and the product
Example of Data Retrieval (2)
Two tiles with different levels (i.e., L1C and L2A) are found over the region of interest
Products Visualization
Level-1C: Top-of-atmosphere reflectances
Level-2A: Bottom-of-atmosphere reflectances
Sentinel 2 Product Types
Sentinel-2 tile gridding is based on the NATO Military Grid Reference System
Each tile covers an area of 100 × 100 km² (excluding overlapping edges of 9.8 km)
[18] Military Grid Reference System [19] Sentinel-2 product types
Germany can be covered with 56 tiles
E.g., time series of tile images for 1 year (365 days / 5 days = 73 acquisitions)
Data size to be processed: 73 acq. × 56 tiles × 800 MB ≈ 3.11 TB
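The back-of-the-envelope estimate above can be reproduced directly (the binary 1024 × 1024 conversion is an assumption that matches the slide's figure):

```python
# Reproduce the slide's estimate: 73 acquisitions x 56 tiles x 800 MB,
# converted with binary prefixes (1 TB taken as 1024 x 1024 MB here).
acquisitions = 365 // 5      # one acquisition every 5 days
tiles = 56                   # tiles needed to cover Germany
mb_per_tile = 800
total_tb = acquisitions * tiles * mb_per_tile / (1024 * 1024)
print(round(total_tb, 2))    # close to the slide's ~3.11 TB
```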
Updating land-cover maps is an important task for regularly monitoring the Earth’s surface
Generation of reliable maps with spatial consistency is challenging:
• Use of data acquired from several satellite orbit tracks observed at different dates
• Presence of clouds
Map production cannot rely on field campaigns (huge amounts of data would need to be collected)
Existing databases are used to build the reference data sets needed for the supervised classification
[21] France land cover classification 2016 [22] S2 prototype LC map at 20m of Africa 2016
[20] CORINE land cover
Sentinel 2 - Update of Land-Cover Maps at Country Scale
Sentinel 2 – Land Cover Map of France 2017
[23] CESBIO: 2017 Land Cover Map
Lecture Bibliography (1)
[1] CORONA: America's First Satellite Program: first photograph. Online: https://www.oneonta.edu/faculty/baumanpr/geosat2/RS%20History%20II/RS-History-Part-2.html
[2] The Earth-Atmosphere Energy Balance. Online: http://theatmosphere.pbworks.com/w/page/27058542/The%20Earth-Atmosphere%20Energy%20Balance
[3] The 2004 Indian Ocean Tsunami. Online: https://www.thoughtco.com/the-2004-indian-ocean-tsunami-195145
[4] Five years after Indian Ocean tsunami, affected nations rebuilding better – UN. Online: http://www.un.org/apps/news/story.asp?NewsID=33365#.WltdyK6nGpp
[5] Population growth from 1975 - 2010 of Manila. Online: https://manilabydaniellaandisabel.weebly.com/location-and-characteristics.html
[6] Deforestation in Bolivia from 1986 to 2001. Online: https://www.satimagingcorp.com/gallery/more-imagery/aster/aster-deforestation-bolivia/
[7] Active-and-passive-remote-sensing. Online: http://grindgis.com/remote-sensing/active-and-passive-remote-sensing
[8] Introduction to Remote Sensing: Radiometric Corrections. Online: http://gsp.humboldt.edu/olm_2015/Courses/GSP_216_Online/lesson4-1/radiometric.html
[9] G. Hughes, "On the mean accuracy of statistical pattern recognizers," IEEE Transactions on Information Theory, vol. 14, no. 1, pp. 55-63, 1968
[10] NDVI & Classification. Online: https://lholmesmaps.wordpress.com/my-work-2/environmental-studies-421-gis-iv-advanced-gis-applications/2-2/
Lecture Bibliography (2)
[11] Normalized Difference Vegetation Index (NDVI). Online: http://www.agasyst.com/portals/NDVI.html
[12] A. Plaza, G. Martín, J. Plaza, M. Zortea and S. Sánchez, "Recent Developments in Endmember Extraction and Spectral Unmixing," in Optical Remote Sensing, vol. 3, Springer, Berlin, Heidelberg, 2011
[13] Copernicus programme: Europe's eyes on Earth. Online: http://www.copernicus.eu/
[14] Sentinel Space. Online: http://earsc.org/news/airbus-selected-by-esa-for-copernicus-data-and-information-access-service-dias
[15] Sentinels for Copernicus. Online: https://www.youtube.com/watch?v=xcflQZJ5n88
[16] Earth Observation Mission Sentinel 2. Online: https://sentinel.esa.int/web/sentinel/missions/sentinel-2
[17] Copernicus Open Access Hub. Online: https://scihub.copernicus.eu/dhus/#/home
[18] Military Grid Reference System. Online: https://en.wikipedia.org/wiki/Military_Grid_Reference_System
[19] Sentinel-2 product types. Online: https://earth.esa.int/web/sentinel/user-guides/sentinel-2-msi/product-types
[20] CORINE land cover. Online: https://land.copernicus.eu/pan-european/corine-land-cover
Lecture Bibliography (3)
[21] France land cover classification 2016. Online: http://www.cesbio.ups-tlse.fr/multitemp/?p=11778
[22] S2 prototype LC map at 20m of Africa 2016. Online: http://2016africalandcover20m.esrin.esa.int/
[23] CESBIO: 2017 Land Cover Map. Online: http://osr-cesbio.ups-tlse.fr/~oso/
Dr. Gabriele Cavallaro, Postdoctoral Researcher, High Productivity Data Processing Group, Juelich Supercomputing Centre, Germany
Remote Sensing Systems and Applications (2)
October 16th, 2018, Room V02 – 138
Cloud Computing & Big Data – PARALLEL & SCALABLE MACHINE LEARNING & DEEP LEARNING
PRACTICAL LECTURE 6.1
Outline
Machine Learning Background
Deep Learning and Shallow Learning
Indian Pines Hyperspectral Dataset
Hyperspectral Image Classification
Challenges of Remote Sensing with Deep Learning
Classical Pattern Recognition System
Pattern recognition is the science of making inferences from perceptual data, using tools from statistics, probability, computational geometry, machine learning, signal processing and algorithm design
• Machine learning: term introduced by Arthur Samuel in 1959
• Gives computers the ability to learn without being explicitly programmed
• Explores the study and construction of algorithms that can learn from and make predictions on data
• Today: Machine Learning is a huge (growing) field
• Unsupervised learning:
• No labels are given to the learning algorithm
• Goal: find structure in its inputs
• Used to discover hidden patterns in data, learn features
• Clustering
Machine Learning Methods
[1] K-means in Python 3 on Sentinel 2 data
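The clustering idea above can be sketched in plain NumPy (the synthetic "pixel" vectors and the simple implementation are illustrative; a real workflow, as in [1], would run k-means on actual Sentinel-2 band values):

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain k-means: alternate nearest-centroid assignment and mean update."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each sample to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids as cluster means (keep old centroid if empty)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Two well-separated synthetic 'spectral' clusters, 4 bands each
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.2, 0.02, (50, 4)),
               rng.normal(0.8, 0.02, (50, 4))])
labels, centroids = kmeans(X, k=2)
```

No labels are given; the algorithm discovers the two groups purely from the structure of the inputs.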
Supervised learning:
The computer is presented with example inputs and desired outputs = training data D = {(x, y)}
The goal is to learn a general rule f_w that maps inputs to outputs: f_w(x) = y
Machine Learning Methods
Classification: y is a nominal value (i.e., a class label)
Regression: y is a continuous value
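A minimal illustration of learning a rule f_w from D = {(x, y)}, here using a 1-nearest-neighbour rule on synthetic two-feature data (the rule and the data are illustrative assumptions, not the classifier used later in this lecture):

```python
import numpy as np

# Training data D = {(x, y)}: 2-feature samples with class labels
X_train = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y_train = np.array([0, 0, 1, 1])

def f(x):
    """1-nearest-neighbour rule: predict the label of the closest training sample."""
    i = np.argmin(np.linalg.norm(X_train - x, axis=1))
    return y_train[i]

print(f(np.array([0.15, 0.15])))  # -> 0
print(f(np.array([0.85, 0.85])))  # -> 1
```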
Domain adaptation:
Aims at learning, from a source data distribution, a well-performing model for a different (but related) target data distribution
E.g., Acquisitions on different dates, with different sensors, etc.
Machine Learning Methods
Example of a shift in the signature of a hyperspectral image acquired by the Hyperion sensor over two areas of the Okavango Delta in Botswana
[2] D. Tuia, C. Persello and L. Bruzzone
Deep Learning and Shallow Learning
Shallow learning: learning networks that usually have at most one or two layers
They compute linear or nonlinear functions of the data (often hand-designed features)
DL means a deeper network with many layers of non-linear transformations
No universally accepted definition of how many layers constitute a “deep” learner
In practice, deep networks are at least four or five layers deep
Deep Networks Learn Hierarchical Feature Representations
[3] H. Lee et al.
• Classification: make a prediction for a whole input
• What are the classes (e.g., as a ranked list)
• Localization or detection: towards fine-grained inference
• Classification and spatial location (e.g., bounding boxes)
• Semantic segmentation: fine-grained inference
• Make dense predictions inferring labels for every pixel
• Further improvements: provide different instances of the same class
• Decomposition of already segmented classes into their components
• Many applications nourish from inferring knowledge from imagery
• Autonomous driving
• Human-machine interaction
• Computational photography
• Image search engines
• Remote sensing
Progression from Coarse to Fine Inference
[4] A. Garcia-Garcia
[5] Image Segmentation
Classification of Hyperspectral Images
1D: SPECTRAL
2D: SPATIAL
3D: SPECTRAL + SPATIAL
3D convolution:
• Consider data as a volume
• Local features (spatial and spectral)
• Input/output: 3D tensor
Break the images into many small crops and classify the central pixel
Redundant and computationally expensive
Stores not only every pixel but also the surrounding pixels
Increases the data size by a factor determined by the number of neighbouring pixels
Advantage of using spectral and spatial information in the classification process
More visible with Hyperspectral images
Extract patch → classify center pixel with CNN (predicted labels, e.g., Building, Tree)
Classification Approach: Sliding Window
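The patch-extraction step of the sliding-window approach can be sketched as follows (the patch size, the edge-padding strategy and the array names are illustrative assumptions):

```python
import numpy as np

def extract_patch(image, row, col, size):
    """Extract a (size x size x bands) patch centred on pixel (row, col).

    The image is edge-padded so that border pixels also get full patches.
    """
    half = size // 2
    padded = np.pad(image, ((half, half), (half, half), (0, 0)), mode="edge")
    return padded[row:row + size, col:col + size, :]

# Toy 'hyperspectral' image: 6x6 pixels, 10 bands
img = np.random.default_rng(0).random((6, 6, 10))
patch = extract_patch(img, 0, 0, size=5)
print(patch.shape)  # (5, 5, 10)
```

Each pixel reappears in up to size² neighbouring patches, which is exactly the storage blow-up the slide refers to.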
Example of Hyperspectral Dataset – Indian Pines
Acquired by NASA's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) on June 12, 1992
Covers a mostly agricultural and wooded area in the west of Purdue University in Indiana (USA)
The image consists of 1417 × 617 pixels with a spatial resolution of around 20 m
For each pixel the image provides 220 spectral channels covering wavelengths in the range of 0.4 μm to 2.5 μm (i.e., a 'cube')
The pixel-wise labelling was done by M. Baumgardner and students
This resulted in a label-map of the dataset
[6] P. U. R. Repository [7] M.F. Baumgardner et al.
Hyperspectral Image Classification – Indian Pines
[8] G. Cavallaro et al.
58 different classes; the distribution of the number of samples per class is highly unbalanced
The biggest class contains >60,000 pixels, whereas several classes contain far fewer samples
Class score
Series of convolution and pooling layers
Fully connected layers
Convolutional layers: convolution operation on the input
Emulate the response of an individual neuron to visual stimuli
Each convolutional neuron processes data only for its receptive field
Pooling layers: progressively reduce the spatial size of the representation
Reduce the amount of parameters and computation and control overfitting
Fully connected layers connect every neuron in one layer to every neuron in another layer
Same principle as the traditional multi-layer perceptron (MLP) network
Image Classification CNNs
[9] J. Long et al.
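The convolution and pooling operations described above can be sketched in plain NumPy (the all-ones kernel and input values are illustrative; real CNN layers learn the kernel weights during training):

```python
import numpy as np

def conv2d_valid(x, k):
    """'Valid' cross-correlation of a 2-D input with a 2-D kernel
    (what CNN convolutional layers actually compute, with no kernel flip)."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool2x2(x):
    """Non-overlapping 2x2 max pooling: halves each spatial dimension."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
feat = conv2d_valid(x, np.ones((3, 3)))   # each output = sum of a 3x3 window
pooled = max_pool2x2(x)
print(feat)
print(pooled)
```

Each convolutional neuron only sees its local receptive field, and pooling reduces the spatial size (and parameter count downstream), matching the two bullet points above.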
CNN Classifier for Hyperspectral Images – Architecture
Classify pixels in a hyperspectral remote sensing image having ground truth/labels available
Created CNN architecture for a specific hyperspectral land cover type classification problem
Performed no manual feature engineering to obtain good results (aka accuracy)
Keras – Remote Sensing CNN ‘Standard‘ Model
Experimental Setup – Results – Full Dataset – Accuracy
SVM comparison: ~77% with manual feature engineering
Experimental Setup – Results – Full Dataset – Class Checks
Experimental Setup – Results – Full Dataset – Class Checks
Blue: correctly classified / training data; Red: incorrectly classified
RS applications have massive amounts of temporal and spatial data (e.g., Sentinel 2)
But not enough labeled training samples, which usually don't fully represent:
• Seasonal variations
• Object variation (e.g., plants, crops, etc.)
Most online hyperspectral data sets have little-to-no variety
DL systems with many parameters require large amounts of training data; otherwise they can easily overfit and not generalize well
DL systems in computer vision use very large training sets, e.g., millions or billions of faces in different illuminations, poses, intra-class variations, etc.
Limited Remote Sensing Training Data
[6] P. U. R. Repository
Possible approaches to mitigate small training samples:
1. Data augmentation: affine transformations, rotations, small patch removal, etc.
2. Transfer learning: train on other imagery to obtain low-level to mid-level features
3. Use ancillary data: other sensor modalities (e.g., LiDAR, SAR, etc.)
4. Unsupervised training: training labels not required
DL systems with limited training data
State-of-the-art DL networks have parameters on the order of millions
The learning model needs a proportional amount of examples
The number of parameters should be proportional to the complexity of the task
Need for a Large Amount of Training Data
[10] Data Augmentation
The available dataset is taken in a limited set of conditions
Different orientation, location, scale, brightness etc.
“A poorly trained neural network would think that these three tennis balls, are distinct, unique images”
Train with additional synthetically modified data
Techniques to artificially increase the size of the training set
Make minor changes such as flips, translations and rotations to the existing dataset
Employed to counteract overfitting
Data Augmentation
[10] Data Augmentation
Ability to recognize an object as an object, even when its appearance varies in some way
It allows abstracting an object's identity from the specifics of the visual input
E.g., relative positions of the viewer/camera and the object.
Well-trained CNNs can be invariant to translation, viewpoint, size or illumination
Essential Assumption: Invariance
[11] Invariance property
Flip horizontally and vertically
Rotate
Scale outward or inward
Crop: randomly sample a section
Translate: move the image along the X or Y direction
Add noise
Data augmentation is more challenging for remote sensing
Images exist in a variety of conditions (e.g., different seasons)
They cannot be accounted for by the above simple methods
[10] Data Augmentation
Popular Augmentation Techniques
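The simple techniques listed above can be sketched in NumPy (the image size and noise level are illustrative assumptions; a remote sensing pipeline would apply such transforms to real tiles, with the caveats about seasonal variation noted above):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))  # toy RGB image

h_flip = np.flip(img, axis=1)              # flip horizontally
v_flip = np.flip(img, axis=0)              # flip vertically
rot90 = np.rot90(img, k=1, axes=(0, 1))    # rotate by 90 degrees
crop = img[2:6, 2:6, :]                    # crop: sample a 4x4 section
shift = np.roll(img, shift=2, axis=1)      # crude translation along X (wraps)
noisy = img + rng.normal(0, 0.01, img.shape)  # add Gaussian noise
```

Each transform yields a new, slightly different training sample from the same underlying scene, which is what counteracts overfitting.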
Master's Theses
I provide supervision for Master's thesis topics that include methods for processing and analyzing remote sensing data (from very high spatial resolution to hyperspectral)
Classic machine learning approaches and more advanced deep learning algorithms
Priority to the scalability (use of HPC systems)
Feel free to contact me ([email protected]) to discuss further
Lecture Bibliography (1)
[1] K-means in Python 3 on Sentinel 2 data. Online: http://www.acgeospatial.co.uk/k-means-sentinel-2-python/
[2] D. Tuia, C. Persello and L. Bruzzone, "Domain Adaptation for the Classification of Remote Sensing Data: An Overview of Recent Advances," IEEE Geoscience and Remote Sensing Magazine, vol. 4, no. 2, pp. 41-57, June 2016. doi:10.1109/MGRS.2016.2548504
[3] H. Lee, R. Grosse, R. Ranganath and A. Y. Ng, "Unsupervised Learning of Hierarchical Representations with Convolutional Deep Belief Networks," Commun. ACM, vol. 54, no. 10, pp. 95-103, 2011
[4] A. Garcia-Garcia, S. Orts-Escolano, S. Oprea, V. Villena-Martinez, J. Garcia-Rodriguez, "A Review on Deep Learning Techniques Applied to Semantic Segmentation," CoRR, 2017. Online: http://arxiv.org/abs/1704.06857
[5] Image Segmentation Using DIGITS 5. Online: https://devblogs.nvidia.com/image-segmentation-using-digits-5/
[6] P. U. R. Repository, 220 band AVIRIS hyperspectral image data set: June 12, 1992 Indian Pine Test Site 3, 2015. Online: https://purr.purdue.edu/publications/1947/about?v=1
[7] M. F. Baumgardner, L. L. Biehl, and D. A. Landgrebe, 220 band AVIRIS hyperspectral image data set: June 12, 1992 Indian Pine Test Site 3. Purdue University Research Repository, 2015
[8] G. Cavallaro, M. Riedel, J.A. Benediktsson et al., "On Understanding Big Data Impacts in Remotely Sensed Image Classification Using Support Vector Machine Methods," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2015
[9] J. Long, E. Shelhamer and T. Darrell, "Fully convolutional networks for semantic segmentation," 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, 2015, pp. 3431-3440
[10] Data Augmentation - How to use Deep Learning when you have Limited Data. Online: https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced
[11] Invariance property. Online: https://i.stack.imgur.com/iY5n5.png