
Page 1: Autonomous Driving & Sustainable Supervision

Autonomous Driving & Sustainable Supervision

Patrick Pérez

Page 2: Autonomous Driving & Sustainable Supervision

https://www.youtube.com/watch?v=vE0h3Yy458k

Page 3: Autonomous Driving & Sustainable Supervision

Saving lives, time, energy

Safety-critical, real-time, embedded AI in the wild

Driving AI must be:

accurate, robust, able to generalize well

validated, certified, explainable

Promises & Challenges

Page 5: Autonomous Driving & Sustainable Supervision

Driverless Cars

Milestones: Prometheus (1986-95), DARPA Grand Challenge (2004), DARPA Urban Challenge (2007)

Page 6: Autonomous Driving & Sustainable Supervision

Eureka-Prometheus (1986-95)

https://www.youtube.com/watch?v=I39sxwYKlEE

Page 7: Autonomous Driving & Sustainable Supervision

Driverless Cars – CVPR 2019

Page 8: Autonomous Driving & Sustainable Supervision

Automation Levels

• Level 0: No Assistance
• Level 1: Driver Assistance
• Level 2: Partial Automation
• Level 3: Conditional Automation
• Level 4: High Automation
• Level 5: Full Automation

Page 9: Autonomous Driving & Sustainable Supervision

Radar Range-Doppler plots

[Shubert 2015]

Page 10: Autonomous Driving & Sustainable Supervision

Key sensors

Page 11: Autonomous Driving & Sustainable Supervision

Sensor suite

Page 12: Autonomous Driving & Sustainable Supervision

Advanced Driver Assistance Systems (ADAS)

Source: Ansys

Page 13: Autonomous Driving & Sustainable Supervision

Autonomous Driving (AD) Systems

Modular pipeline: Sensors → Perception (Objects, Localization) → Fusion → Scene Model → Prediction → Planning → Control

Courtesy X. Perrotton@valeo
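As a rough illustration of this modular decomposition, here is a minimal, purely hypothetical Python sketch; every stage function and data type below is an invented stand-in, not Valeo's actual interfaces:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Detection:
    box: Tuple[float, float, float, float]   # (x, y, w, h) in ego coordinates
    category: str                             # e.g. "vehicle", "pedestrian"

@dataclass
class SceneModel:
    ego_pose: Tuple[float, float, float]      # (x, y, yaw) from localization
    objects: List[Detection] = field(default_factory=list)

# Each stage below is a trivial stub standing in for a real module.
def perceive(frames) -> List[Detection]:
    return [Detection((10.0, 0.5, 4.5, 1.8), "vehicle")]

def localize(frames) -> Tuple[float, float, float]:
    return (0.0, 0.0, 0.0)

def predict(scene: SceneModel) -> List[Tuple[float, float]]:
    # Predicted next position of each object (placeholder motion model).
    return [(d.box[0] - 1.0, d.box[1]) for d in scene.objects]

def plan_and_control(scene: SceneModel, futures) -> dict:
    # Keep lane; brake if a predicted position enters the ego corridor.
    brake = any(abs(y) < 1.5 and 0.0 < x < 15.0 for x, y in futures)
    return {"steer": 0.0, "throttle": 0.0 if brake else 0.3, "brake": brake}

frames = {"camera": None, "lidar": None, "radar": None}  # raw sensor data
scene = SceneModel(localize(frames), perceive(frames))   # fusion -> scene model
print(plan_and_control(scene, predict(scene)))
```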

Page 17: Autonomous Driving & Sustainable Supervision

Autonomous Driving (AD) Systems

Modular pipeline: Sensors → Perception (Localization) → Scene Model → Prediction → Planning → Control

End-to-end driving: learn the whole sensors-to-control mapping directly, with deep learning and/or reinforcement learning

Courtesy X. Perrotton@valeo

Page 18: Autonomous Driving & Sustainable Supervision

Detect (bounding boxes with categories)

• Vehicles, vulnerable road users (VRUs), signs, road work

Segment (pixel/point labelling)

• Road, pavement, free/drivable space, lane marks

Measure (pixel/point regression)

• Distance, speed

Analyze (object-level)

• Sub-categories, attributes, 'intention', 'attention', next position

2D/3D Scene Understanding

Page 19: Autonomous Driving & Sustainable Supervision

Variants: semantic, instance, panoptic

2D Semantic Segmentation

Page 20: Autonomous Driving & Sustainable Supervision

Variants: semantic, instance, panoptic

Metric: Mean intersection over union (mIoU)

2D Semantic Segmentation
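For concreteness, a minimal NumPy sketch of the standard mIoU computation on label maps; it is illustrative only, and note that public benchmarks typically accumulate a confusion matrix over the whole dataset rather than averaging per image:

```python
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """Mean intersection-over-union over classes present in either map."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x3 label maps with 3 classes
pred = np.array([[0, 1, 1], [2, 2, 0]])
gt   = np.array([[0, 1, 2], [2, 2, 0]])
print(mean_iou(pred, gt, num_classes=3))  # ~0.72
```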

Page 21: Autonomous Driving & Sustainable Supervision

Variants: semantic, instance, panoptic

Metric: Mean intersection over union (mIoU)

Key architecture: Fully convolutional encoder-decoder

• A favorite deep ConvNet as backbone encoder

• Skip connections à la U-Net for full-resolution decoding

2D Semantic Segmentation
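A minimal PyTorch sketch of such a fully convolutional encoder-decoder with one U-Net-style skip connection; the tiny MiniUNet below is an illustrative toy, not any published architecture (19 classes is the Cityscapes convention):

```python
import torch
import torch.nn as nn

class MiniUNet(nn.Module):
    """Minimal encoder-decoder with a single skip connection."""
    def __init__(self, in_ch=3, num_classes=19):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        # Decoder sees upsampled features concatenated with the skip
        self.dec = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(32, num_classes, 1)  # per-pixel class scores

    def forward(self, x):
        s = self.enc1(x)                       # full-resolution features (skip)
        f = self.enc2(self.down(s))            # lower-resolution features
        f = torch.cat([self.up(f), s], dim=1)  # skip connection à la U-Net
        return self.head(self.dec(f))          # logits: (B, C, H, W)

logits = MiniUNet()(torch.randn(1, 3, 64, 128))
print(logits.shape)  # torch.Size([1, 19, 64, 128])
```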

Page 22: Autonomous Driving & Sustainable Supervision

U-Net (2015)

Page 23: Autonomous Driving & Sustainable Supervision

DeepLab-v3 (2018)

Page 24: Autonomous Driving & Sustainable Supervision

DeepLab-v3 (2018)

Page 25: Autonomous Driving & Sustainable Supervision

Training Semantic Segmentation

The network outputs a "soft" segmentation map after a per-pixel softmax. Training minimizes the pixel-level cross-entropy

$\mathcal{L}(\theta) = -\sum_{i \in \Omega} \sum_{c=1}^{C} y_{i,c} \log \hat{p}_{i,c}(\theta)$

where $y_{i,c}$ is the one-hot ground-truth label of pixel $i$ and $\hat{p}_{i,c}(\theta)$ the predicted class probability, by Stochastic Gradient Descent on the network parameters $\theta$.
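A minimal PyTorch sketch of one such training step on a toy batch; the stand-in model and hyper-parameters are illustrative choices, and note that nn.CrossEntropyLoss applies the softmax internally:

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 19, 1)                    # stand-in segmentation model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss(ignore_index=255)  # 255 = unlabelled pixels

images = torch.randn(2, 3, 64, 128)            # toy batch
labels = torch.randint(0, 19, (2, 64, 128))    # per-pixel class indices

logits = model(images)                         # (B, C, H, W) class scores
loss = criterion(logits, labels)               # pixel-level cross-entropy
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```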

Page 26: Autonomous Driving & Sustainable Supervision

Driving Datasets

Real:

• CamVid – ECCV 2008
• KITTI – IJRR 2013
• Cityscapes – CVPR 2016
• Oxford RobotCar – IJRR 2016
• BDD100K – CVPR 2017
• Mapillary Vistas – ICCV 2017
• ApolloScape (Baidu) – CVPR 2018
• HDD (Honda) – CVPR 2018
• India Driving Dataset – WACV 2019
• nuScenes (Aptiv) – arXiv 2019
• Waymo Open Dataset – 2019
• Lyft Level 5 AV Dataset – 2019
• D2-City (Didi) – arXiv 2019
• A2D2 (Audi) – 2019
• WoodScape (Valeo) – ICCV 2019

Synthetic:

• GTA5 – ECCV 2016
• Synthia – CVPR 2016
• CARLA simulator – arXiv 2017

Page 27: Autonomous Driving & Sustainable Supervision

Synthia

Page 28: Autonomous Driving & Sustainable Supervision

KITTI

Page 29: Autonomous Driving & Sustainable Supervision
Page 30: Autonomous Driving & Sustainable Supervision

IDD

Page 31: Autonomous Driving & Sustainable Supervision

nuScenes

Page 32: Autonomous Driving & Sustainable Supervision

nuScenes

Page 33: Autonomous Driving & Sustainable Supervision

nuScenes

Page 34: Autonomous Driving & Sustainable Supervision

A2D2

Page 35: Autonomous Driving & Sustainable Supervision

Valeo Woodscape

Page 36: Autonomous Driving & Sustainable Supervision

State-of-the-art visual deep learning is fully supervised

• Data collection is not so easy (complex, costly, possibly dangerous)

• Labelling is hell (when possible at all)

• Doomed to be insufficient for in-the-wild, life-long visual understanding

Annotation hell

Page 37: Autonomous Driving & Sustainable Supervision

Alternatives to reduce annotation needs

• Semi-supervised learning

• Unsupervised learning

• Weakly supervised learning

• Zero-shot and few-shot learning

• Transfer learning

• Domain adaptation

• Learning from synthetic data

• Self-supervised learning

• Active learning

• Incremental learning

• Online learning

Toward sustainable supervision?

Page 38: Autonomous Driving & Sustainable Supervision

• Learn one task, conduct another one

• Learn on one distribution, run on another one (= Domain Adaptation)

Street-light effect (a.k.a. drunkard's search)? Not quite…

Transfer and adaptation

Page 39: Autonomous Driving & Sustainable Supervision

Domain adaptation in vision

Page 40: Autonomous Driving & Sustainable Supervision

Different, though related, input data distributions

Source domain → Target domain

• Different weather, light, location, sensor spec/setup

• Synthetic vs. real

Domain gap

Page 46: Autonomous Driving & Sustainable Supervision

Labelled source-domain data + unlabelled target-domain data

Unsupervised Domain Adaptation (UDA)

Page 47: Autonomous Driving & Sustainable Supervision

Distribution alignment

• Appearance, deep features, outputs

Some alignment tools

• Distribution discrepancy loss

• Optimal transport

• Discriminative adversarial loss

• Generative adversarial models

Self-training

• Curriculum learning

• Pseudo-labels from confident predictions on target data (sketched below)

Deep learning for UDA
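A minimal PyTorch sketch of the pseudo-labelling step, assuming (B, C, H, W) logits on unlabelled target images; the 0.9 confidence threshold is an illustrative choice:

```python
import torch

def pseudo_labels(logits: torch.Tensor, threshold: float = 0.9):
    """Keep only confident target predictions as training labels.

    Pixels below the confidence threshold get ignore_index 255 and are
    skipped by the segmentation loss.
    """
    probs = logits.softmax(dim=1)
    conf, labels = probs.max(dim=1)            # (B, H, W)
    labels[conf < threshold] = 255             # ignore low-confidence pixels
    return labels

# Usage: mix pseudo-labelled target batches with labelled source batches,
# optionally raising coverage class by class as a curriculum.
target_logits = torch.randn(2, 19, 64, 128)
pl = pseudo_labels(target_logits)
print((pl != 255).float().mean())  # fraction of pixels kept
```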

Page 48: Autonomous Driving & Sustainable Supervision

Adversarial gradient reversal [Ganin ICML'15]
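The gradient reversal layer acts as the identity on the forward pass and flips (and scales) the gradient on the backward pass, so a domain classifier trained on top pushes the feature extractor toward domain-confusing, domain-invariant features. A minimal PyTorch sketch (the lambda schedule of the paper is omitted):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity forward; multiplies the gradient by -lambda on backward."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

features = torch.randn(4, 256, requires_grad=True)
reversed_feats = GradReverse.apply(features, 1.0)
# A domain classifier fed with reversed_feats still learns to discriminate
# domains, while its gradient trains the encoder to confuse it.
reversed_feats.sum().backward()
print(features.grad[0, 0])  # -1.0: gradient sign flipped
```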

Page 49: Autonomous Driving & Sustainable Supervision

Adversarial feature alignment [Hoffman 2016]

Page 50: Autonomous Driving & Sustainable Supervision

Adversarial feature alignment [Chen ICCV'17]

Page 51: Autonomous Driving & Sustainable Supervision

CyCADA: Cycle-Consistent Adversarial Domain Adaptation [Hoffman ICML'18]

Page 52: Autonomous Driving & Sustainable Supervision

Adversarial output alignment [Tsai CVPR'18]

Page 53: Autonomous Driving & Sustainable Supervision

AdvEnt: Entropy-based alignment [Vu CVPR'19]

Setting: train a segmentation model on labelled source data (TRAIN), then run it on the target domain (TEST).
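AdvEnt starts from the observation that predictions on source-like images are confident (low entropy) while target predictions are noisy; it then minimizes target entropy directly and/or aligns the entropy-based maps adversarially. A minimal sketch of the per-pixel entropy map (the normalization by log C is an illustrative choice):

```python
import torch

def entropy_map(logits: torch.Tensor) -> torch.Tensor:
    """Per-pixel prediction entropy, normalized to [0, 1]."""
    p = logits.softmax(dim=1)                            # (B, C, H, W)
    ent = -(p * torch.log(p + 1e-8)).sum(dim=1)          # (B, H, W)
    return ent / torch.log(torch.tensor(float(logits.shape[1])))

target_logits = torch.randn(1, 19, 64, 128)
ent_loss = entropy_map(target_logits).mean()  # direct entropy-minimization term
print(float(ent_loss))
```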

Page 55: Autonomous Driving & Sustainable Supervision

Proposed method

Page 56: Autonomous Driving & Sustainable Supervision

Qualitative results

Page 57: Autonomous Driving & Sustainable Supervision

Qualitative results

Page 58: Autonomous Driving & Sustainable Supervision

Clear-to-foggy-weather adaptation

• Detector: SSD-300

• Data: Cityscapes → Foggy Cityscapes (synthetic depth-aware fog)

Extension to object detection

Page 59: Autonomous Driving & Sustainable Supervision

Learning Using Privileged Information (LUPI)

• Vapnik and Vashist 2009

• Leverage additional information available at training time only

In sim2real UDA

• PI comes for free on the source domain, e.g. a dense depth map

• Set up an auxiliary task at train time (→ multi-task learning, MTL)

• Get better, more domain-agnostic features

• TRI's SPIGAN (ICLR'19) and Valeo's DADA (ICCV'19)

Privileged Information (PI) for UDA
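A minimal PyTorch sketch of the multi-task idea: a shared encoder with a segmentation head plus an auxiliary depth head, supervised by the privileged depth that comes for free on synthetic source images. The stand-in heads, the L1 depth loss and the 0.1 weight are illustrative choices, not the exact SPIGAN or DADA formulations:

```python
import torch
import torch.nn as nn

backbone = nn.Conv2d(3, 64, 3, padding=1)      # stand-in shared encoder
seg_head = nn.Conv2d(64, 19, 1)                # segmentation logits
depth_head = nn.Conv2d(64, 1, 1)               # auxiliary depth regression

images = torch.randn(2, 3, 64, 128)
seg_gt = torch.randint(0, 19, (2, 64, 128))
depth_gt = torch.rand(2, 1, 64, 128)           # privileged depth (source only)

feats = backbone(images)
loss = nn.functional.cross_entropy(seg_head(feats), seg_gt) \
     + 0.1 * nn.functional.l1_loss(depth_head(feats), depth_gt)  # PI term
loss.backward()  # depth supervision shapes the shared features
print(float(loss))
```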

Page 60: Autonomous Driving & Sustainable Supervision

SPIGAN [Lee ICLR’19]

Page 61: Autonomous Driving & Sustainable Supervision

Depth-Aware Domain Adaptation (DADA) [Vu ICCV'19]

Page 62: Autonomous Driving & Sustainable Supervision

Qualitative results

Page 63: Autonomous Driving & Sustainable Supervision

Qualitative results

Page 64: Autonomous Driving & Sustainable Supervision

Zero-Shot Semantic Segmentation (ZS3) [Bucher NeurIPS 2019]


Page 65: Autonomous Driving & Sustainable Supervision

Zero-Shot Semantic Segmentation (ZS3) [Bucher NeurIPS 2019]


Page 66: Autonomous Driving & Sustainable Supervision

Autonomous vehicles (and robots)

• Fundamental ML challenges on the way to major impact

• From generalization to certified performance, on a budget

Toward sustainable supervision

• Lessen the pressing need for impractical large-scale full supervision

• Improve and hybridize low-supervision approaches

• Reduce the sim2real gap

• Make RL a reality

• Leverage world knowledge

Take-home messages and outlook