Addressing blending challenges with neural networks —
A case study: Mask R-CNN
Sowmya Kamath, Patricia Burchat
BTF Telecon, 2 July 2018
Blending & Neural Networks
● Object detection and instance segmentation are active areas of
research in computer vision.
● Neural networks could potentially contribute to solving several
additional blending challenges:
○ Identifying unrecognized blends.
○ Identifying blends that are too “blended” to perform meaningful measurements.
○ Identifying shredded objects.
○ Deblending.
Convolutional layer
● Category of neural network effective in image recognition and classification.
● “Convolves” the image with a kernel (filter).
● Different kernels extract different features.
● Convolving each layer’s feature maps with more kernels learns increasingly complex features.
● The network learns the kernel values during training.
http://deeplearning.stanford.edu/wiki/index.php/Feature_extraction_using_convolution
http://cs.nyu.edu/~fergus/tutorials/deep_learning_cvpr12/
http://web.eecs.umich.edu/~honglak/icml09-ConvolutionalDeepBeliefNetworks.pdf
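To make “convolve with a kernel” concrete, here is a minimal NumPy sketch. The kernel is a hand-picked Sobel-style vertical-edge filter for illustration; in a CNN the kernel values are learned during training, and convolutional layers actually compute cross-correlation, as below.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2D cross-correlation (what CNN 'convolution' layers compute)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Multiply the kernel against each image patch and sum.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image with a vertical edge: left half dark, right half bright.
img = np.zeros((5, 6))
img[:, 3:] = 1.0

# Hand-picked kernel that responds to vertical edges;
# a trained network would learn these values instead.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])

response = convolve2d(img, kernel)
# The response is large along the edge and zero in flat regions.
```

A different kernel (e.g. its transpose) would instead pick out horizontal edges, which is the sense in which “different kernels extract different features.”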
Mask R-CNN
Currently using an existing neural network framework, Mask Region-based Convolutional Neural Network (Mask R-CNN) [1], to perform detection and segmentation.
Input:
● RGB Image
Output for each object:
● Label
● Bounding box (x, y, h, w)
● Segmentation mask
[1] https://github.com/facebookresearch/detectron
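The per-object output above can be pictured as a list of records, one per detection. The sketch below uses a hypothetical record layout (the `score` field and the 0.7 cut are illustrative assumptions, not the actual Detectron API) to show confidence-threshold filtering, the kind of tunable cut later noted as needing optimization to reduce false positives:

```python
import numpy as np

# Hypothetical per-object records in the format the slides list:
# label, bounding box (x, y, h, w), segmentation mask.
detections = [
    {"label": "galaxy", "score": 0.95,
     "box": (10, 12, 30, 28), "mask": np.ones((30, 28), dtype=bool)},
    {"label": "galaxy", "score": 0.40,  # low confidence: likely false positive
     "box": (50, 5, 8, 9), "mask": np.zeros((8, 9), dtype=bool)},
]

# Illustrative threshold; in practice this value must be tuned.
SCORE_THRESHOLD = 0.7
kept = [d for d in detections if d["score"] >= SCORE_THRESHOLD]
```

Raising the threshold trades missed faint objects against fewer spurious detections.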
Mask R-CNN
[Architecture diagram: each region proposal is projected onto a 7 × 7 grid, then passed to heads for classification, bounding-box regression, and segmentation-mask prediction.]
http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture11.pdf
What it was developed for:
1. Opaque objects
2. Sharp edges
3. Large objects
4. Good image quality
5. Same resolution in RGB bands
What we need it for:
1. (Semi-)transparent objects
2. No sharp edges
3. Some objects as small as the pixel scale
4. Lower SNR
5. Resolution can vary between filters
Training data
● Simulated images of two-galaxy pairs with varying overlap.
● Pair separations of 0.6–2 arcsec.
● Bulge+disk Sersic galaxies from CatSim, drawn with the WeakLensingDeblending package.
● i < 24, 10-year LSST depth.
● gri bands → RGB.
● 18,000 pairs (72,000 images with data augmentation).
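A minimal sketch of the gri → RGB step, mapping redder bands to redder channels and applying an asinh stretch to compress the dynamic range. The stretch and its parameters are illustrative assumptions; the slides do not state which scaling was actually used.

```python
import numpy as np

def gri_to_rgb(g, r, i, stretch=0.5, Q=8.0):
    """Map g, r, i band images to an RGB array with an asinh stretch.

    The asinh scaling (Lupton-style) and the stretch/Q values are
    illustrative choices, not necessarily what was used for training.
    """
    # Redder band -> redder channel: i -> R, r -> G, g -> B.
    bands = np.stack([i, r, g], axis=-1).astype(float)
    total = bands.sum(axis=-1, keepdims=True)
    total = np.where(total == 0, 1e-12, total)  # avoid divide-by-zero
    # Shared asinh scaling preserves colour ratios between channels.
    scaled = np.arcsinh(Q * total / stretch) / Q
    rgb = bands * scaled / total
    return np.clip(rgb, 0.0, 1.0)

rng = np.random.default_rng(0)
g = rng.random((8, 8))
r = rng.random((8, 8))
i = rng.random((8, 8))
rgb = gri_to_rgb(g, r, i)
```

The 18,000 → 72,000 factor of four suggests simple augmentations such as rotations or flips, applied identically to images and truth masks.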
Truth vs. network output
Examples of successful detections (green = true segmentation, red = CNN segmentation).
Examples of unsuccessful detections (green = true segmentation, red = CNN segmentation).
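One standard way to score the agreement between a true (green) and a predicted (red) segmentation is intersection-over-union (IoU) of the two masks. The sketch below is illustrative: IoU itself is standard, but its use as the comparison metric here is an assumption, not something the slides specify.

```python
import numpy as np

def mask_iou(true_mask, pred_mask):
    """Intersection-over-union of two boolean segmentation masks."""
    t = np.asarray(true_mask, dtype=bool)
    p = np.asarray(pred_mask, dtype=bool)
    union = np.logical_or(t, p).sum()
    if union == 0:
        return 0.0
    return np.logical_and(t, p).sum() / union

# Two overlapping 4x4 square masks on a 6x6 grid.
true_mask = np.zeros((6, 6), dtype=bool)
true_mask[1:5, 1:5] = True   # 16 pixels
pred_mask = np.zeros((6, 6), dtype=bool)
pred_mask[2:6, 2:6] = True   # 16 pixels, shifted by one pixel

iou = mask_iou(true_mask, pred_mask)
# Intersection = 9 pixels, union = 23 pixels, so IoU = 9/23.
```

An IoU cut (e.g. matched if IoU exceeds some value) gives a concrete way to label each detection “successful” or “unsuccessful.”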
Conclusions
● Demonstrated that Mask R-CNN can perform detection and segmentation-mask
prediction for images of overlapping galaxy pairs.
● A network designed for opaque objects with sharp edges learned to detect
transparent overlapping galaxies with less well-defined edges.
● Parameters and threshold values need to be optimized to reduce false
positives.
Future Work on Blending with Neural Networks
● Most architectures are built for input images with three bands (RGB).
=> Will modify so that all six bands of LSST images can be utilized.
● Modify end layers of network to output pixel values for individual galaxies instead of segmentation maps.
● Include different kinds of sources (types of galaxies, stars, image artefacts?) and perform classification as well.
● Investigate using space-based images (HST) as truth & Hyper Suprime-Cam (HSC) as input images for training and measuring performance.
● Continue to develop metrics to compare performance with single-band and other multi-band techniques (e.g., current LSST science pipeline, Scarlet, …).
○ See 05/21/2018 BTF presentation on “Quantifying effects of blending using simulated galaxy pairs + Hybrid use of the LSST science pipelines & Scarlet” https://confluence.slac.stanford.edu/display/LSSTDESC/BTF+Meetings+and+Resources#BTFMeetingsandResources-SowmyaKamath