
Priority Programme 1835

Cooperative Interacting Automobiles

2018 C. Stiller

Deep Learning, Behavioral Safety and Motion Planning

Overview

Focus Programme "Cooperatively Interacting Automobiles"

Constraints on Motion Plans

Drivability

Integrity (even when you do not know the plans of others)

Legality

Comfort (soft)

Optimization Based Motion Planning

Probabilistic Models and Planning

MDP

POMDP

Learning

RL

DL

Behavioural Safety

German Research Foundation (DFG)

Focus Research Programme

Cooperatively Interacting Automobiles

Christoph Stiller, Karlsruhe; Autonomous Automobiles (Spokesman)

Wolfram Burgard, Freiburg; Robotics

Barbara Deml, Karlsruhe; Ergonomics

Lutz Eckstein, Aachen; Vehicular Technology

Frank Flemisch, Aachen; Ergonomics

Frank Köster, Braunschweig; Intention Recognition

Markus Maurer, Braunschweig; Representation of Skills

Gerd Wanielik, Chemnitz; Situation Understanding

Cooperatively Interacting Automobiles

~30 Ph.D. students across Germany

Automated driving & car2x yield new opportunities for automated interaction

Research on the added value of automated cooperation between automated vehicles and other traffic participants

Research Issues

How should interaction between automated vehicles and other traffic participants be implemented in a cooperative way?

Impacts on traffic: will novel forms of traffic operation arise?

Explicit & implicit cooperation

Mixed traffic & automated-only traffic scenarios

Towards Cooperative Interaction

[Block diagram: ego sensors, the driver, and vehicle-internal data feed cooperative perception with assessment & plausibility checking; together with sensors of other traffic participants (car2x) this builds a data and information base that supports situation prediction and intention recognition, cooperative maneuver and trajectory planning, and negotiation and information exchange with other traffic participants.]

Types of Cooperation

[Fig.: the ego vehicle interacts explicitly and implicitly with pedestrians, conventional vehicles with driver, assisted vehicles with driver, and automated vehicles without driver.]

Automated vehicles may cooperate with:

other automated vehicles

assisted vehicles or vehicles with a human driver

other traffic participants (pedestrians, passengers in the ego vehicle)

Communication: explicit (car2x, …) or implicit

Research Fields

a) Cooperative Perception

Implement a 'telematic perception horizon' through exchange of information with others

Issues: latencies, uncertainties in spatio-temporal acquisition, perception loops caused by cyclic information exchange, etc.

b) Situation Prediction

Predict behavior and trajectories of others (vehicles, humans, …) to enable cooperation

Create predictable ego-behavior

Research Fields

c) Cooperative Maneuver and Trajectory Planning

Cooperative trajectory planning of automated vehicles with other traffic participants

Implicit or explicit coordination of different maneuver options

d) Data and Information Base ('crowd mapping')

Knowledge aggregation in a collective data and information base

Focus on information that influences tactical driving decisions

Research Fields

e) System Ergonomics

Make automated decisions transparent and acceptable to humans (inside and outside the vehicle)

Generate implicit cooperation with pedestrians, bicyclists, etc.

Interaction between passenger and automated vehicle

f) Architecture of Cooperatively Interacting Automobiles

Metrics for quality assessment of provided information, of cognitive skills, of safety of trajectories, …

Introduction

[Diagram: motion planning maps the map, goal pose, and objects to steering angle and acceleration, subject to vehicle dynamics and traffic rules.]

[Diagram: human behavior model after Rasmussen 1983, with three levels from sensor and subcortical information to motoric action: knowledge-based behavior (information, decision, planning, navigation), rule-based behavior (recognition, association, rules, guidance), and skill-based behavior (feature extraction, signal-reactive skills, stabilization).]

Constraints on Motion Plans

Hard Constraints

Drivability

Integrity

Static Obstacles

Dynamic Obstacles

(some of which may be difficult to predict)

Traffic Rules (sometimes vague)

Soft Constraints

Comfort

Predictability

Cooperativity

Vehicle Dynamics

Friction Circle

Assume the force acting on the vehicle (or wheel) is limited by the friction coefficient $\mu$ and the vertical force $F_z$:

$$\sqrt{F_x^2 + F_y^2} \le \mu F_z$$

[Fig.: friction circle with maximum longitudinal acceleration, maximum deceleration, and maximum lateral acceleration; acceleration vectors beyond the friction circle are not feasible.]

Because of the friction circle, longitudinal and lateral dynamics are coupled! (On dry roads, $\mu$ is close to 1.)
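A minimal numerical sketch of this feasibility test; the friction coefficient and the sampled accelerations are assumed values, not from the slides:

```python
import math

MU = 1.0          # assumed friction coefficient (dry road)
G = 9.81          # gravitational acceleration in m/s^2

def within_friction_circle(ax, ay, mu=MU):
    """Check whether a commanded acceleration (ax, ay) in m/s^2
    stays inside the friction circle sqrt(ax^2 + ay^2) <= mu * g."""
    return math.hypot(ax, ay) <= mu * G

print(within_friction_circle(-9.0, 0.0))   # True: pure braking is feasible
print(within_friction_circle(-9.0, 5.0))   # False: braking plus cornering exceeds the circle
```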

Wheel Forces

[Fig.: lateral force $F_S$ over wheel slip angle (up to ~10°) for varying longitudinal force $F_x$; the potential for lateral force is lowered by the longitudinal force $F_x$.]

Acceleration in narrow curves changes the path.

Linearized wheel forces front/rear: $F_{S,f} \approx c_f\,\alpha_f$, $F_{S,r} \approx c_r\,\alpha_r$, with cornering stiffnesses $c_f, c_r$ and slip angles $\alpha_f, \alpha_r$.

Bicycle Model

Models major dynamic properties of a vehicle

Replaces the wheels of each axle by a single centered wheel

2D model

Dry road, moderate lateral acceleration < 4 m/s²

Constant velocity

Variants:

Kinematic bicycle model -> for low speed, neglects lateral tire forces

Kinetic bicycle model, non-linear or linearized

[Fig.: bicycle model with center of gravity (CoG) and velocity v.]

Kinematic Bicycle Model: Ackermann Steering Angle

[Fig.: kinematic bicycle model with CoG and curve radius $r_M$ around the invariant point (instantaneous center of rotation).]

• neglect lateral forces

• valid for slow driving

$$\tan\delta_A = \frac{l}{\sqrt{r_M^2 - l_h^2}}\,, \qquad \delta_A \approx \frac{l}{r_M} \quad\text{for}\quad r_M^2 \gg l_h^2,\, l^2,$$

with wheelbase $l$ and distance $l_h$ from the rear axle to the CoG.
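A minimal simulation sketch of the kinematic bicycle model under these assumptions (forward Euler; the wheelbase and inputs are illustrative):

```python
import math

L = 2.7  # assumed wheelbase in m

def kinematic_bicycle_step(x, y, psi, v, delta, dt):
    """One Euler step of the kinematic bicycle model: lateral tire
    forces are neglected, so the heading rate follows directly from
    the steering angle delta and the wheelbase L."""
    x += v * math.cos(psi) * dt
    y += v * math.sin(psi) * dt
    psi += v / L * math.tan(delta) * dt
    return x, y, psi

# Drive a slow circle: at v = 5 m/s and delta = 0.1 rad the turn
# radius is roughly L / tan(delta) ~ 27 m.
state = (0.0, 0.0, 0.0)
for _ in range(100):
    state = kinematic_bicycle_step(*state, v=5.0, delta=0.1, dt=0.1)
print(tuple(round(s, 2) for s in state))
```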

Kinetic Bicycle Model

[Fig.: kinetic bicycle model with CoG and curve radius $r_M$ around the invariant point.]

Collision Avoidance

Configuration Space (C-Space)

The configuration space C is the set of all admissible states (configurations) q of a robot

It defines the set of feasible robot parameters and thus the search space for planning

Free Space: Point Robot

Cfree := {set of robot parameters q that are collision free}

For a point robot: the space in R² not occupied by an obstacle

[Figs.: H. Choset, http://www.cs.cmu.edu/~motionplanning/]

Free Space: Circular Robot

extend obstacles by the robot radius

= convolve the robot shape with the obstacle map
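This inflation can be sketched in a few lines with scipy's morphological dilation; grid size, radius, and obstacle layout below are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import binary_dilation

# Occupancy grid: True = obstacle; one square obstacle in the middle
grid = np.zeros((20, 20), dtype=bool)
grid[8:12, 8:12] = True

# Disk-shaped structuring element with the robot's radius (in cells)
r = 3
yy, xx = np.ogrid[-r:r + 1, -r:r + 1]
disk = xx**2 + yy**2 <= r**2

# Inflating the obstacles by the robot radius turns the circular
# robot into an equivalent point robot in the inflated map.
c_obstacle = binary_dilation(grid, structure=disk)
c_free = ~c_obstacle
print(c_free.sum(), "of", c_free.size, "cells remain free")
```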

[Fig.: Thao Dang]

Free Space: Arbitrarily-Shaped Robot

Orientation matters: 3D configuration space for the equivalent point robot


Fast Collision Checking

[Ziegler et al. 2011]

Approximation of vehicle shape by a set of circles
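A minimal sketch of this idea; the circle placement and the disk obstacle are illustrative assumptions, not the construction of [Ziegler et al. 2011]:

```python
import math

def vehicle_circles(x, y, psi, n=3, spacing=1.5, radius=1.1):
    """Approximate the vehicle footprint by n circles of equal radius
    placed along the longitudinal axis at the given pose."""
    return [(x + i * spacing * math.cos(psi),
             y + i * spacing * math.sin(psi), radius)
            for i in range(-(n // 2), n // 2 + 1)]

def collides(circles, obstacle):
    """Obstacle is a disk (ox, oy, orad); the collision check reduces
    to a few center-distance comparisons."""
    ox, oy, orad = obstacle
    return any(math.hypot(cx - ox, cy - oy) <= cr + orad
               for cx, cy, cr in circles)

pose_circles = vehicle_circles(0.0, 0.0, 0.0)
print(collides(pose_circles, (2.0, 0.5, 0.5)))   # True: too close
print(collides(pose_circles, (6.0, 3.0, 0.5)))   # False: clear
```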

Motion Planning

Motion Planning Methodology

Roadmaps
• Visibility Graphs
• Voronoi Diagrams

Discrete Graph Search
• Dijkstra
• A*
• D*, D* Lite

Monte Carlo Methods
• Probabilistic Roadmaps
• Rapidly-exploring Random Trees

Potential Field Methods
• Artificial Potential Fields
• Elastic Bands
• Vector Field Histograms

Continuous Optimization
• Parameter Optimization of Motion Primitives
• Model Predictive Control (MPC)

Learning Methods
• Reinforcement Learning
• Inverse Reinforcement Learning
• Deep Learning
• Deep Driving

Trajectory Representation for Planning

global, discrete, combinatorial [Ziegler et al. 2009–2011]

local, continuous, variational [Ziegler et al. 2011–2014]

Continuous Planning

Model Predictive Control

Optimal Control

Find the control minimizing the cost integral J under the vehicle dynamics constraints and the equality and inequality constraints
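In standard optimal-control notation (the slide's own formulas did not survive extraction, so the symbols below are the usual textbook ones):

```latex
\min_{u(\cdot)} \; J = \int_{0}^{T} L\big(x(t), u(t)\big)\,\mathrm{d}t
\quad \text{s.t.} \quad
\dot{x} = f(x, u), \qquad g(x, u) = 0, \qquad h(x, u) \le 0 .
```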

Optimal Control

If J is quadratic and f, g, h are linear, the optimization is efficiently solved by a quadratic program

Fast constant-time solvers exist for quadratic programs

For the kinematic bicycle model, x is a flat output, i.e. states and controls can be recovered algebraically from the planned trajectory and its derivatives

Model Predictive Control (MPC)

Plan the optimal control for a time horizon T and apply this control input for a short time interval

After each time interval, plan again for horizon T and apply the new control input

In the extreme case of infinite T, u should not change unless the optimal control problem changes!

Example: 1D MPC Lateral Control

Let the velocity of the vehicle be given in x-direction; trajectory planning is then reduced to the lateral component y

Let further the cost be quadratic in the derivatives of y, and y be constrained by road boundaries (or obstacles) with linear h

This yields a quadratic program :)

[Fig.: lateral corridor between left and right border.]
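A minimal sketch of this 1D lateral planning step as a quadratic program, using cvxpy; the horizon, weights, border values, and the lane-change target are illustrative assumptions:

```python
import cvxpy as cp
import numpy as np

N, dt = 20, 0.1                       # horizon steps and step size
y = cp.Variable(N + 1)                # lateral offsets along the horizon

# Finite-difference lateral velocity and acceleration
vy = (y[1:] - y[:-1]) / dt
ay = (vy[1:] - vy[:-1]) / dt

# Quadratic comfort cost on lateral velocity and acceleration
J = cp.sum_squares(vy) + 10 * cp.sum_squares(ay)

# Linear constraints: start in lane center, end shifted (a lane change),
# and stay between the road borders at all times
y_left, y_right = 1.5, -1.5
constraints = [y[0] == 0.0, y[N] == 1.0, y <= y_left, y >= y_right]

prob = cp.Problem(cp.Minimize(J), constraints)
prob.solve()                          # a QP, solved efficiently
print(np.round(y.value, 3))           # planned lateral offsets
```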

MPC 1D Trajectory Example

[Fig.: unconstrained vs. constrained solution between left and right border.]

Dynamic Objects

We need to plan for ourselves …

… and for others

2d Trajectory planning

optimize a cost functional subject to hard conditions:

outer conditions enforce integrity, e.g. keeping clear of (possibly moving) obstacles and road boundaries

inner conditions enforce drivability, e.g. bounds on curvature and acceleration

2d Trajectory planning

Non-quadratic programs ;( result due to:

x-y coupling through the vehicle dynamics model (x-y decoupling, sacrificing optimality, yields a piecewise quadratic program in many situations)

non-linear vehicle dynamics

obstacles whose trajectory depends on ours

Consequences: much slower iterative solutions; optimality cannot be guaranteed by the solver; in general, the solution depends on the initialization

MPC Results

[Fig.: planned trajectory snapshots after 1 sec, 2 sec, 3 sec.]

MPC Results

[Fig.: planned trajectory in x-y coordinates.]

Lanelets

[Fig. legend: lanelets (left & right boundary, reference trajectory of the driving corridor); ego position & past 2 poses; static obstacle @ [t1, te]; crossing pedestrian; oncoming vehicle.]

MPC 2D Trajectory planning

[Videos: driving in Heidelberg and Feudenheim; pedestrian protection, courtesy Daimler AG.]

Maneuver Planning


Idea:

Decouple maneuver and trajectory planning

Initialize the iterative trajectory planner with a trajectory in the same maneuver class (homotopy class)

MPC 2D Trajectories

[Diagram: a P-map (lanelets, local rules) and an L-map (localisation) feed the pipeline Perception -> Prediction -> Determine Maneuver Variants -> Trajectory Planner for all Variants -> Decide best Maneuver & Trajectory.]

[Fig.: MPC trajectories for each maneuver (homotopy) over time.]

[Bender et al., ITSC 2015]

Probabilistic Models and Planning

Effects of Uncertainty

Probabilistic Models

Deterministic models do not consider uncertainty appropriately:

Uncertainty in actuation

Uncertainty in sensing

Uncertainty in occluded regions

Uncertainty in intent and future trajectories of other traffic

participants

Effects of Uncertainty

Can my black vehicle safely pass the white one?

Assuming certainty: yes

Effects of Uncertainty

Can my black vehicle safely pass the white one?

Assuming uncertain perception in position and velocity: too tight

Effects of Uncertainty

The situation gets even worse with uncertain prediction.

Can my black vehicle safely pass the white one?

Under this consideration, overtaking is never possible once the prediction horizon is large enough.

Effects of Uncertainty

Considering uncertainty: can my black vehicle safely pass the white one?

Just take a few safe steps and see what happens.

Well, but what if … => replan upon unexpected evolution

Probabilistic Trajectory Planning

pose = (position, orientation); a trajectory is the sequence of past and future poses

Special case 'certain prediction', e.g. through v2v communication

Interaction Modes

[Fig.: factorizations of the joint trajectory distribution for the interaction modes yielding, cooperation, and conditionally independent prediction.]

Markov Decision Process

A policy decides action a in state x

Reinforcement Learning = policy learning, maximizing the expected future reward

Even under mild assumptions (a deterministic future and coarse quantization of a and x), an exhaustive search is infeasible for real-time driving

Shalev-Shwartz et al. propose abstracting the action space to 'semantic actions' ~ maneuvers ~ homotopies
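For reference, the objective in standard MDP notation (a textbook statement, not taken from the slides):

```latex
% Value of policy \pi and the optimal policy (standard MDP notation)
V^{\pi}(x) \;=\; \mathbb{E}\!\left[\,\sum_{t=0}^{\infty} \gamma^{t}\, r(x_t, a_t) \;\middle|\; x_0 = x,\ a_t = \pi(x_t)\right],
\qquad
\pi^{*} \;=\; \arg\max_{\pi} V^{\pi}(x).
```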

Partially Observable Markov Decision Process

In a POMDP, states are not directly observable but are observed via a probability distribution; the policy then operates on these observations (the belief)

State transition distribution
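The belief update such a policy operates on can be stated compactly (standard POMDP notation with transition model $T$ and observation model $O$; the symbols are assumptions, as the slide's equation did not survive extraction):

```latex
% Belief update after taking action a and receiving observation z
b'(x') \;\propto\; O(z \mid x', a) \sum_{x} T(x' \mid x, a)\, b(x).
```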

POMDP Planning [Hubmann et al., IV 2017]

• Define the state space

• Observations

• Approximate the Q function with Monte Carlo sampling

Results: the POMDP predicts 'when it can decide' rather than the decision itself!

Learning

Neural Networks as Graph Solvers

[Rehder, Wirth, Lauer, Stiller 2017]


Deep Graph Solvers

[Eike Rehder 2017]

Short Review: Dijkstra's Algorithm

Find the shortest path from start to goal:

Assign edge costs, node costs, Start = 0

Propagate and sum costs

Expand the cheapest node

Re-assign minimum cost

Trace back the shortest path

[Figs.: step-by-step expansion on a small example graph; grid version with graph edges and an obstacle.]

[Eike Rehder 2017]
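The listed steps map directly onto a priority-queue implementation; a minimal sketch in Python (the example graph is illustrative, not the one from the slides):

```python
import heapq

def dijkstra(graph, start, goal):
    """graph: dict node -> list of (neighbor, edge_cost)."""
    dist = {start: 0.0}
    parent = {}
    queue = [(0.0, start)]                  # (cost so far, node)
    while queue:
        d, node = heapq.heappop(queue)      # expand the cheapest node
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                        # skip stale queue entries
        for nbr, cost in graph[node]:       # propagate and sum costs
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd              # re-assign minimum cost
                parent[nbr] = node
                heapq.heappush(queue, (nd, nbr))
    path = [goal]                           # trace back the shortest path
    while path[-1] != start:
        path.append(parent[path[-1]])
    return list(reversed(path)), dist[goal]

graph = {"S": [("A", 4), ("B", 2)], "A": [("G", 5)],
         "B": [("A", 1), ("G", 4)], "G": []}
print(dijkstra(graph, "S", "G"))            # (['S', 'B', 'G'], 6.0)
```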

Shortest Path with a CNN

[Eike Rehder 2017]

Finding the Shortest Path with a CNN

Assign edge costs, node costs, Start = 0

Propagate: shift the cost map once per admissible transition

Sum costs: add the transition cost (+1 per step, +inf for forbidden transitions)

Re-assign minimum cost: take the elementwise minimum over all shifted maps

[Figs.: cost-map frames illustrating each step.]

[Eike Rehder 2017]

Finding the Shortest Path with a CNN

CostNon-Zero

Padding (!)

Transition

Filters

Transition

Cost

+

Cost

per

Action

min

pool

Updated

Cost

Replace

[Eike Rehder 2017]

Finding the Shortest Path with a CNN

CostNon-Zero

Padding (!)

Transition

Filters

Transition

Cost

+

Cost

per

Action

min

pool

Updated

Cost

Replace

Argmin of this layer is transition policy

[Eike Rehder 2017]
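Under the simplifying assumption that transitions are 4-neighborhood shifts with unit cost (the slides use nine filters), one update of this layer can be sketched with plain numpy:

```python
import numpy as np

INF = 1e9

def cost_update(cost, step_cost):
    """One 'propagate, add, min-pool, replace' iteration on a 2D cost map."""
    padded = np.pad(cost, 1, constant_values=INF)    # non-zero padding (!)
    shifts = [padded[:-2, 1:-1], padded[2:, 1:-1],   # from above / below
              padded[1:-1, :-2], padded[1:-1, 2:]]   # from left / right
    candidates = [s + step_cost for s in shifts]     # add cost per action
    candidates.append(cost)                          # keep the old value
    return np.min(np.stack(candidates), axis=0)      # min pool, then replace

# Toy map: start at (0, 0); entering a cell costs 1, obstacle cells INF
cost = np.full((5, 5), INF)
cost[0, 0] = 0.0
step = np.ones((5, 5))
step[2, 1:] = INF                                    # a wall with one gap
for _ in range(20):                                  # iterate to convergence
    cost = cost_update(cost, step)
print(np.where(cost < INF / 2, cost, -1).astype(int))
```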

Evaluating the Shortest Path with a CNN

[Diagram: the current state is combined with the transition policy (argmin) to select a transition; flipped transition filters yield the next state, repeated until the destination state is reached.]

[Eike Rehder 2017]

Example I:

Path Planning

[Eike Rehder 2017]

Example I: Path Planning

Find the shortest path from start to goal: [Fig.: grid with start, goal, and an obstacle.]

Nine possible transition filters; the cost is the traversed distance: +1 for the four axis-aligned moves, +√2 for the four diagonal moves, +inf for staying.

[Eike Rehder 2017]

Example I: Path Planning

Cost model: an additive layer with high cost where the obstacle is located.

[Figs.: cost map; state transition map.]

[Eike Rehder 2017]

Finding the Shortest Path with a CNN

If you use Dijkstra:

Graph traversal with known transitions is faster

States can be updated selectively

Visited nodes will never be touched again

Why would you do it then?

[Eike Rehder 2017]

Example II:

Imitation Learning

[Eike Rehder 2017]

Example II: Imitation Learning

Intersection in Karlsruhe [aerial view: Google Maps]

Recorded trajectories: teach a network to imitate human behavior

[Eike Rehder 2017]

Example II: Imitation Learning

[Diagram: the same cost-update network as above: Cost -> Non-Zero Padding (!) -> Transition Filters -> Transition Cost -> + Cost per Action -> min pool -> Updated Cost -> Replace]

[Eike Rehder 2017]

Example II: Imitation Learning

The cost per action is now learned: plug in the whole toolbox of CNN techniques. In our case, an FCN operating on the aerial view supplies the cost.

[Eike Rehder 2017]

Example II: Imitation Learning

[Figs.: path driven by the human; cost map from the aerial image; cost map after planning; path planned by the network vs. path driven by the human.]

[Eike Rehder 2017]

Markov Decision Processes as

Deep NN

[Eike Rehder 2017]

Markov Decision Processes as CNN

[Figs.: camera image; top view and semantic map with road, sidewalk, and obstacles.]

[Eike Rehder 2017]

Markov Decision Processes as CNN

[Diagram: Cost -> Non-Zero Padding -> Transition Filters -> Transition Cost -> + Cost per Action -> min pool -> Updated Cost -> Replace]

[Eike Rehder 2017]

Markov Decision Processes as CNN

[Diagram: replacing cost by value and reward, the same structure performs value iteration: Value -> Non-Zero Padding -> Uncertain Transition Filters -> Transition Reward -> + Reward per Action -> max pool -> Updated Value -> Replace]

Shankar et al.: "Reinforcement Learning via Recurrent Convolutional Neural Networks", ICPR 2016

[Eike Rehder 2017]

Example III:

Pedestrian Prediction

[Eike Rehder 2017]

Example III: Pedestrian Prediction

Teach a network to predict human motion by planning

[Figs.: camera image; semantic map and top view (road, sidewalk, obstacles); crop of the map centered around the pedestrian; predicted destination for planning; prediction with the MDP net.]

[Eike Rehder 2017]

Deep Learning of Motion Trajectories

[Fig.: value iteration network for policy learning in a Markov Decision Process; motion generation applies the learned policy.]

[Rehder, Wirth, Lauer, Stiller 2017]

The People

Marcos Sobrinho: Learning to Plan

Florian Wirth: Destination Prediction

Philipp Bender: Learning to Plan

Jannik Quehl: Trajectory Data

Sahin Tas: Trajectory Data

Haohao Hu: Trajectory Data

Eike Rehder: Trajectory Learning

Behavioural Safety

Functional Evolution or Disruptive Change?

Self-Driving Car

SAE Automation Levels

L0, Driver only: the driver is completely in control; no automated intervention. Example: manual driving.

L1, Assisted: the driver is permanently in control for some functions; single control functions such as speed selection, braking or lane keeping are automated. Example: Adaptive Cruise Control.

L2, Partial Automation: the driver permanently monitors the function and the environment; the vehicle performs longitudinal and lateral control in a defined use case. Example: Stop & Go Automation.

L3, Conditional Automation: the driver must always be available to resume control within reasonable time; the vehicle performs longitudinal and lateral control in a defined use case and, at its limits, requests the driver to resume driving with sufficient time margin. Example: Highway Pilot.

L4, High Automation: the driver is not required during the defined use case; the vehicle performs longitudinal and lateral control in the defined use case. Example: Last Mile Taxi.

L5, Full Automation: no driver required; the vehicle performs longitudinal and lateral control in all situations. Example: Self-Driving Car.

First fatal accident of an automated car: March 18, 2018, Tempe, AZ, USA

[Fig.: camera frame ~1 sec before impact.]

Safety Goal

Legal Risk and Safety

Risk is the potential of losing something of value.

Safety is the absence of risk

Hence, safety is a binary measure: a system is either safe or not.

Bayesian Risk

Sum over all events of the probability of each event times the hazard (cost) of the event

Events could be accidents classified by Abbreviated Injury Scale (AIS)

For simplicity we restrict the following considerations to fatalities.
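Written out (standard notation; $P$ and $C$ denote the event probability and its hazard/cost):

```latex
% Bayesian risk as expected cost over events x
R \;=\; \sum_{x} P(x)\, C(x).
```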

Safety Goal

Today, about 1.25 million persons per year suffer a fatal accident in road traffic.

In industrialized countries the risk of suffering a fatal accident in road traffic is in a similar range, e.g. for conventional driving in Germany.

Automated Driving is expected to reduce this number significantly. On the flip side, AD is a new technology and is exposed to the risk of every new technology that extends beyond today's technology frontiers.

Automated Driving is a risky technology that is expected to improve safety. A careful safety assessment is essential prior to market introduction.

[Fig.: Venn diagram of accidents of conventional driving vs. accidents of automated driving.]

How can newly introduced accident types of automated driving be predicted?

Safety Goal

What is an acceptable risk?

Naive thinking: the safety goal for an SDA should be "zero accidents". However, many traffic accidents are unavoidable for an SDA. Hence the safety goal should be "save as many as possible", i.e. maximize the SIF.

How much larger than 1 must the SIF be for market introduction? It should be significantly larger, as the risk exposure is more difficult for individuals to control through safe traffic behavior. This value should be a societal consensus.

Empirical Determination of SIF

From [Wachenfeld 2017]

Poisson distribution; λ: expected number of events during the experiment

Empirical Determination of SIF

From [Wachenfeld 2017]

For a system with SIF = 2, one needs to test for 6 × 2·10^8 km in order to expect verification of SIF > 1 with 95% confidence.

Hence an empirical proof of safety requires large-scale deployment.
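A sketch of the underlying Poisson argument; the baseline fatality rate and the test formulation are assumptions for illustration, so the resulting mileage differs in detail from the slide's number:

```python
# If the automated system truly halves the fatality rate (SIF = 2),
# how far must it drive so that the expected observation count still
# rejects "not safer than conventional driving" at 95% confidence?
from scipy.stats import poisson

r0 = 1.0 / 2.0e8          # assumed baseline fatality rate per km
sif = 2.0                 # assumed true safety improvement factor

d = 1.0e8
while True:
    lam0 = d * r0                     # expected events if NOT safer (H0)
    k = int(d * r0 / sif)             # events we expect to actually see
    if poisson.cdf(k, lam0) <= 0.05:  # observing k rejects H0 at 95%
        break
    d *= 1.05
print(f"required test distance ~ {d:.2e} km")
```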

References for Safety

(1) Wachenfeld, W., Winner, H.: Die Freigabe des autonomen Fahrens. In: Maurer, M., Gerdes, J.C., Lenz, B., Winner, H. (eds.): Autonomes Fahren, pp. 439-464. Springer, Berlin Heidelberg (2015)

(2) Winner, H.: ADAS, Quo Vadis? In: Winner, H., Hakuli, S., Lotz, F., Singer, C. (eds.): Handbook of Driver Assistance Systems. Springer (2016)

(3) Statistisches Bundesamt / German Federal Statistical Office (2014), https://www.destatis.de/DE/Publikationen/Thematisch/TransportVerkehr/Verkehrsunfaelle/VerkehrsunfaelleJ2080700147004.pdf?__blob=publicationFile

(4) Wachenfeld, W.: How Stochastic can Help to Introduce Automated Driving. Dissertation, Technische Universität Darmstadt, http://tuprints.ulb.tu-darmstadt.de/5949 (2017)

(5) Schuldt, F., Saust, F., Lichte, B., Maurer, M., Scholz, S.: Effiziente systematische Testgenerierung für Fahrerassistenzsysteme in virtuellen Umgebungen. In: Automatisierungssysteme, Assistenzsysteme und eingebettete Systeme für Transportmittel (AAET), Braunschweig (2013)

Further Reading

Steven M. LaValle: Planning Algorithms, http://planning.cs.uiuc.edu/

Choset et al.: Principles of Robot Motion. MIT Press.

Latombe's "Motion Planning" lecture, http://robotics.stanford.edu/~latombe/cs326/2007/index.htm

The Open Motion Planning Library (OMPL), http://ompl.kavrakilab.org/

Robot Operating System (ROS), http://www.ros.org

Further Research

Collaborate with us

Without financial support from us:

Guest scientist for a limited time

Master's student or PhD student with support of the supervisor

Postdoc, professor

With financial support:

PhD student

Postdoc

Send your application to stiller@kit.edu
