Introduction to Robots and Multi-Robot Systems: Agents in Physical and Virtual Environments

Lecture 3: Behavior Selection

Gal A. Kaminka

galk@cs.biu.ac.il

Introduction to Robots and Multi-Robot Systems

Agents in Physical and Virtual Environments

2 © Gal Kaminka

Previously, on Robots … Multiple levels of control: behaviors

[Diagram: layered control, behaviors from low level to high level: Avoid Object, Wander, Explore, Map, Monitor Change, Identify Objects, Plan Changes]

3 © Gal Kaminka

Subsuming Layers

How to make sure the overall output is coherent?
e.g., Avoid Object is in conflict with Explore.
Subsumption hierarchy: higher levels modify lower ones.

[Diagram: subsumption layers: Avoid Object, Wander, Explore, Map]

4 © Gal Kaminka

This week, on Robots ….

Behavior Selection/Arbitration

Activation-based selection: winner-take-all selection; argmax selection (by priority, utility, success likelihood, …)

Behavior networks: goal-oriented behavior-based control; takes direct aim at key weaknesses of the reactive approach

Behavior hierarchies

5 © Gal Kaminka

Behavior Selection (Arbitration)

One behavior takes over completely: all sensors and actions are controlled by that behavior. Behaviors compete for control.

Key questions: How do we select the correct behavior? When do we terminate the selected behavior?

6 © Gal Kaminka

Maes’ Action Selection Mechanism (MASM)

Some key highlights: merges some planning with behavior-based control.

Goal-oriented, allows predictions. Responsive, allows reactivity. “Speed vs. thought” trade-off.

Lots of number-hacking. A later article addressed this issue with learning; however, complex environments may still suffer from this.

7 © Gal Kaminka

Overall Structure

Behaviors have preconditions, add/delete lists, and activation. Activation links spread positive and negative activation.

[Diagram: behavior network: sensors and goals connected by activation links to a set of behaviors]

8 © Gal Kaminka

Behaviors

Similar to a fully-instantiated planning operator: no variables (e.g., pick-up-A, not pick-up(A))

Preconditions (what must be true for the behavior to be executable)
Add/delete lists (what changes once the behavior executes)
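A minimal sketch of such a behavior in Python (the class and field names are illustrative, not Maes' original data structures):

from dataclasses import dataclass

@dataclass
class Behavior:
    # A fully-instantiated behavior, in the spirit of a STRIPS operator.
    name: str                 # e.g. "pick-up-A", no variables
    preconditions: frozenset  # propositions that must hold for it to be executable
    add_list: frozenset       # propositions that become true once it executes
    delete_list: frozenset    # propositions that become false once it executes
    activation: float = 0.0   # accumulated activation energy

    def executable(self, state):
        # Executable only when every precondition is currently believed/observed.
        return self.preconditions <= state

# Example: a fully-instantiated blocks-world behavior.
pick_up_A = Behavior(
    name="pick-up-A",
    preconditions=frozenset({"clear(A)", "on-table(A)", "hand-empty"}),
    add_list=frozenset({"holding(A)"}),
    delete_list=frozenset({"on-table(A)", "hand-empty"}),
)
print(pick_up_A.executable({"clear(A)", "on-table(A)", "hand-empty"}))  # True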

9 © Gal Kaminka

Connecting Behaviors

Activation flows from sensors to behaviors with matching preconditions.

[Diagram: a sensor linked to a behavior whose precondition it matches]

10 © Gal Kaminka

Connecting Behaviors

Activation flows from sensors to behaviors with matching preconditions, and from add lists to other behaviors with matching preconditions.

[Diagram: a sensor and several behaviors chained by such links]

11 © Gal Kaminka

Connecting Behaviors (Backward)

Activation also flows backward: from goals to behaviors with matching add lists, and from behaviors to other behaviors with matching add lists.

[Diagram: a goal feeding activation backward through a chain of behaviors toward a sensor]
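These linking rules can be derived mechanically from the behaviors' lists. A sketch, using the Behavior class from the earlier example (the function name and return format are illustrative):

def build_links(behaviors, sensed, goals):
    # Derive the kinds of activation links described above.
    # sensed: set of propositions currently observed; goals: desired propositions.
    sensor_links, successor_links, goal_links = [], [], []
    for b in behaviors:
        # Forward: sensors feed behaviors whose preconditions they match.
        for p in b.preconditions & sensed:
            sensor_links.append((p, b.name))
        # Forward: a behavior's add list feeds behaviors it helps make executable.
        # The backward behavior-to-behavior links are the same pairs, traversed in reverse.
        for other in behaviors:
            if b is not other and (b.add_list & other.preconditions):
                successor_links.append((b.name, other.name))
        # Backward: goals feed behaviors whose add lists achieve them.
        for g in b.add_list & goals:
            goal_links.append((g, b.name))
    return sensor_links, successor_links, goal_links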

12 © Gal Kaminka

Connecting Behaviors (Backward)

Advantages: goal-orientedness (goals drive behaviors), reactivity (sensors drive behaviors). Parameterized!

[Diagram: the same network, with both a sensor and a goal driving the behaviors]

13 © Gal Kaminka

Handling Conflicts

Conflicting behaviors inhibit each other. This is a winner-take-all configuration.

[Diagram: two behavior chains, driven by sensors and goals, with mutual inhibition links between conflicting behaviors]

14 © Gal Kaminka

Winner Take All

A very basic structure in neural networks; relies on recurrence.

Key idea: nodes compete by inhibiting each other. After some cycles, a winner emerges. This is useful in many neural models of behavior.

15 © Gal Kaminka

Basic Structure

Each node is excited by incoming information; each node's activation inhibits its competitors.

[Diagram: three nodes (1, 2, 3), each receiving excitatory (+) input and sending inhibitory (-) links to the others]

16 © Gal Kaminka

First activation

Darker == more activation (2 is most active, 1 least).

[Diagram: the same three-node network; node 2 shaded darkest, node 1 lightest]

17 © Gal Kaminka

After a few cycles

3 and 2 are stronger than 1, so 1 quickly deactivates. 2 is slightly stronger than 3, so 3 slowly deactivates.

[Diagram: the same network; node 1 nearly inactive, node 2 darkest]

18 © Gal Kaminka

After a few more cycles

Once 1 is out of the picture, only 2 and 3 compete. 2 becomes stronger: a weaker 3 inhibits 2 less.

[Diagram: the same network; node 1 inactive, node 2 clearly dominating node 3]

19 © Gal Kaminka

Until finally….

Only output from 2 remains

[Diagram: the same network; only node 2 remains active]

20 © Gal Kaminka

Winner Take All

The output from the winning node ends up being used, typically once it is over a threshold.

Once a node becomes active, it never lets in any other. This is a basic problem; standard solutions: reset after some time, decay, …

This mechanism can be used to solve the competition between behaviors. Activation is the key feature/requirement.
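A toy simulation of these winner-take-all dynamics (the update rule and the constants are illustrative assumptions, not a specific published model):

def winner_take_all(inputs, inhibition=0.4, decay=0.1, threshold=1.0, steps=100):
    # Each node is excited by its own input and inhibited by its competitors.
    act = list(inputs)
    for _ in range(steps):
        total = sum(act)
        act = [max(0.0, a + inputs[i] - inhibition * (total - a) - decay * a)
               for i, a in enumerate(act)]
        over = [i for i, a in enumerate(act) if a >= threshold]
        if len(over) == 1:          # a single clear winner has emerged
            return over[0], act
    return max(range(len(act)), key=lambda i: act[i]), act

# The slides' node 2 (index 1 here) gets slightly more excitation, so it wins;
# node 1 (index 0) deactivates quickly, node 3 (index 2) fades more slowly.
winner, final = winner_take_all([0.30, 0.35, 0.33])
print(winner, [round(a, 2) for a in final])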

21 © Gal Kaminka

Running a behavior network

Let activation spread for a while, waiting for the threshold. Once a behavior is over the threshold, execute it. Reset its activation after it's done.

[Diagram: the full behavior network: sensors and goals feeding competing behavior chains]
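A sketch of that execution loop, assuming behaviors like the ones sketched earlier plus a spread_activation() routine (Maes' actual spreading rules are not reproduced here) and an execute() method per behavior:

def run_behavior_network(behaviors, sense, spread_activation, threshold=1.0):
    # Spread activation, wait for a behavior to cross the threshold,
    # execute it, then reset its activation.
    while True:
        state = sense()                          # current sensor propositions
        spread_activation(behaviors, state)      # one round of forward/backward spreading
        ready = [b for b in behaviors
                 if b.activation >= threshold and b.executable(state)]
        if not ready:
            continue                             # keep accumulating activation
        chosen = max(ready, key=lambda b: b.activation)
        chosen.execute()                         # run the selected behavior
        chosen.activation = 0.0                  # reset after it's done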

22 © Gal Kaminka

Advantages

We’ve discussed planned vs. reactive behavior. The threshold value changes the “speed vs. thought” trade-off:

Larger threshold: more behaviors involved before selection. Smaller threshold: less likely to find the optimal chain.

This is not a hybrid architecture; it is really something new!

23 © Gal Kaminka

Criticisms

Where will this fail? Succeed?

What needs improvement? What does not?

What tasks is it good for?

As scientists, you must always ask yourselves these questions.

24 © Gal Kaminka

Protected Goals

Sussman Anomaly:
Given: A on B, B on table, C on table. Do: A on B, B on C, C on table.

There is no way to do this without undoing a subgoal. If one is not careful, this may lead to thrashing: take off A, put A back, take off A, ….

Maes added a mechanism for protected goals. It is not clear where the protection comes from.

25 © Gal Kaminka

Other problems with MASM

No variables: blow-up in the number of behaviors.
Thrashing: a behavior resets, then is re-selected.
Bug in the activation algorithm: activation from goals is divided by the number of goals, so a behavior satisfying more goals is not preferred.
Additional minor issues like this were found and corrected later (Tyrrell 1993, 1994; Dorer 1999; Blumberg 1994; …).

26 © Gal Kaminka

Reminder

We are talking about behavior selection: multiple behaviors exist, and the question is which one to choose. Behaviors compete for control of the robot.

Behavior networks have activation: goal priority “meets” sensor data (preconditions, effects), with winner-take-all selection.

27 © Gal Kaminka

Activation-based selection

For each behavior, build an activation function:
How useful it is (utility, value); how urgent it is (priority); how likely it is to succeed (likelihood of success); how much it matches the current state (applicability); ….

These can of course be combined (e.g., utility × priority). Select the behavior with the top activation, let it run, then re-evaluate all activations.
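A small sketch of one possible activation function and the selection step (the particular combination of factors is only an example; the attributes are assumptions):

def activation(b, state):
    # Applicability gates a score built from utility, priority, and
    # the estimated likelihood of success.
    if not b.applicable(state):
        return 0.0
    return b.utility * b.priority * b.success_likelihood

def select_behavior(behaviors, state):
    # Pick the behavior with the top activation; re-evaluate after it has run.
    return max(behaviors, key=lambda b: activation(b, state))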

28 © Gal Kaminka

Formal behavior selection

Behaviors are arranged in a DAG <B, E>:
DAG: Directed Acyclic Graph. B is the set of behaviors (vertices); E is the set of edges (a, b), where a, b are in B.

The graph is structured hierarchically: a single root behavior is the most general; leaf behaviors correspond to primitive actions; there is a path from every behavior to at least one primitive behavior; children(b) = { all behaviors a such that (b, a) is in E }.

29 © Gal Kaminka

Hierarchical behaviors

The root behavior is always active. An active behavior with no active child must select one. An active behavior can decide to deactivate itself.

[Diagram: soccer behavior hierarchy with WinGame at the root; Play and Interrupt below it; then behaviors such as Attack-Center, Zone Defense, Attack, Pincer; and primitive actions Move, Kick, Pass, Clear, Turn]

30 © Gal Kaminka

argmax selection

At any given time, select the behavior whose priority, value, likelihood of success, or applicability is greatest.

No sequence of behaviors is known in advance. Many instances of behaviors can co-exist and compete.

31 © Gal Kaminka

Formally ….

Let f(b) be a function that gives the behavior's activation. Then the arbitration result is:

argmax_{c in children(b)} f(c)

For instance, to choose by value: argmax_c value(c)
Or, to choose by priority: argmax_c priority(c)
Or the decision-theoretic choice: argmax_c probability(c) * value(c)
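These formulas translate almost directly into code; a sketch, assuming each behavior stores its children and the relevant scores:

def arbitrate(b, f):
    # argmax over b's children of the activation function f.
    return max(b.children, key=f)

# Choose by value:             arbitrate(b, lambda c: c.value)
# Choose by priority:          arbitrate(b, lambda c: c.priority)
# Decision-theoretic choice:   arbitrate(b, lambda c: c.probability * c.value)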

32 © Gal Kaminka

Subsumption as argmax selection

The subsumption level of behavior b is given by level(b); the applicability of behavior b is given by app(b) (0 or 1). Subsumption arbitration is then: argmax_b app(b) * level(b)

[Diagram: subsumption layers: Avoid Object, Wander, Explore, Map]
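In the same style, subsumption arbitration is just one particular activation function; a short sketch assuming a level attribute and an applicability test:

def subsumption_arbitrate(behaviors, state):
    # Highest applicable layer wins: argmax over app(b) * level(b).
    return max(behaviors, key=lambda b: (1 if b.applicable(state) else 0) * b.level)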

33 © Gal Kaminka

Case Study: HandleBall Arbitrator (ChaMeleons’01)

The HandleBall behavior is triggered when the player has the ball. It must select between multiply-instantiated children:

shoot on goal, pass for shot, pass forward, dribble to goal, dribble forward, clear, pass to closer, ….

We defined a complex arbitrator combining priority and likelihood of success.

34 © Gal Kaminka

HandleBall Example

35 © Gal Kaminka

“Number-hacking”: Thrashing

De-selection and re-selection of behaviors all the time.

[Diagram: a sensor value oscillating around the selection threshold, causing repeated switching]

36 © Gal Kaminka

“Number-hacking”: Sensitivity

Sensitivity to specific values and ranges: manually adjusting values by 0.1 to get a wanted result…

Where do the numbers come from? Learning?
e.g., the programmer forgot a range of values? e.g., the programmer needs to extend a range?

37 © Gal Kaminka

State-Based Selection

State-based selection: look at the world and internal state to make the selection.

Behaviors as operators? Almost: pre-conditions, termination conditions, and selection control rules (non-numeric preferences, priorities).

Finite state machines and hierarchical machines.

38 © Gal Kaminka

State-Based Behavior Selection

Elements from reactive control, but with internal state: quick response to sensor readings, sensor-driven operation.

Behaviors maintain internal state, e.g., previously-executed behaviors, previous sensor readings, …

39 © Gal Kaminka

Behaviors as operators

Conditions:
Preconditions: when is the behavior applicable? Termination conditions: when is it done?

Conditions test sensors and internal state, so we must maintain a World Model. It can be simple (e.g., a vector of sensor readings) or complex (e.g., internal variables, previous readings).
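A sketch of the operator-style interface (class and method names are illustrative):

class OperatorBehavior:
    # State-based behavior: applicability and termination are predicates
    # over the world model (beliefs), not just the raw sensor values.
    def preconditions(self, world_model):
        raise NotImplementedError   # when is this behavior applicable?

    def terminated(self, world_model):
        raise NotImplementedError   # when is it done?

    def step(self, world_model):
        raise NotImplementedError   # one control step: issue motor commands

class GoHome(OperatorBehavior):
    # Example drawn from the foraging task later in the lecture; the world
    # model is assumed to be dict-like.
    def preconditions(self, wm):
        return wm.get("have_puck", False)

    def terminated(self, wm):
        return wm.get("at_home", False)

    def step(self, wm):
        print("driving toward", wm.get("home_position"))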

40 © Gal Kaminka

State-Based Selection: Architecture

[Diagram: World Model (beliefs) feeding several behaviors, whose outputs go through command scheduling]

41 © Gal Kaminka

Conflicting Behaviors

What if more than one behavior matches?

[Diagram: the same architecture, with several behaviors matching the world model at once]

44 © Gal Kaminka

Preference Rules

Prefer one behavior over another; provide “local guidance”. They do not consider all possible cases, nor a global ranking. They test the world model (which also records behaviors). A sketch follows the diagram below.

[Diagram: the same architecture, with a Preference Rules component mediating between the matching behaviors and command scheduling]
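A sketch of such local preference rules (the rule format is an assumption, not a particular architecture's syntax):

def apply_preference_rules(candidates, world_model, rules):
    # rules: list of (condition, preferred_name, dispreferred_name) triples.
    # A rule removes the dispreferred behavior only when its condition holds
    # on the world model and the preferred behavior is actually a candidate.
    names = {b.name for b in candidates}
    dropped = set()
    for condition, preferred, dispreferred in rules:
        if condition(world_model) and preferred in names:
            dropped.add(dispreferred)
    return [b for b in candidates if b.name not in dropped]

# Example rule: when holding a puck, prefer GoHome over Wander.
example_rules = [(lambda wm: wm.get("have_puck", False), "GoHome", "Wander")]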

45 © Gal Kaminka

Questions?

46 © Gal Kaminka

What’s in a world model?

[Diagram: the architecture again, highlighting the World Model (beliefs) component]

47 © Gal Kaminka

What’s in a world model?

Simple: a vector of sensor readings, e.g., distance front = 250, light left = detected, battery = medium level.

More complex: a vector of virtual sensors, e.g., distance front < 90 AND light front, average front distance = 149.4.

48 © Gal Kaminka

What’s in a world model?

More complex still: a vector of processed data, e.g., estimated X, Y from detected landmarks; a purple blob seen at pixel 2,5; communication from a teammate.

Most complex: a vector of world models, e.g., the position of an opponent 2 seconds ago, my own position 10 seconds ago.
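A sketch of a world model that spans this simple-to-complex range (field names are illustrative):

import time

class WorldModel:
    # Beliefs: raw readings, derived "virtual sensors", and a bounded history.
    def __init__(self, history_size=20):
        self.raw = {}        # e.g. {"distance_front": 250, "light_left": True}
        self.derived = {}    # e.g. {"obstacle_ahead": True, "avg_front_dist": 149.4}
        self.history = []    # older snapshots (e.g., opponent position 2 seconds ago)
        self.history_size = history_size

    def update(self, readings):
        # Keep a timestamped snapshot of the previous beliefs, bounded in size.
        self.history.append((time.time(), dict(self.raw), dict(self.derived)))
        self.history = self.history[-self.history_size:]
        self.raw.update(readings)
        # Virtual sensor: combine raw readings into a higher-level belief.
        self.derived["obstacle_ahead"] = self.raw.get("distance_front", 1e9) < 90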

49 © Gal Kaminka

Hierarchical Behaviors

Hierarchies allow the designer to build reusable behaviors.

At any given moment, a path is selected, and all behaviors in the path are active: they may issue action commands and monitor sensors.

This is different from a function call stack. What happens when a behavior terminates?

50 © Gal Kaminka

Case Study: ModSAF

Preference rules manage high-priority interrupts. Preconditions dictate ordering.

[Diagram: helicopter behavior hierarchy with Execute Mission at the root, and behaviors including Fly Flight Plan, Wait-at-Point, Fly Route, Land, NOE, Low, Contour, Unmask, Shoot, Find Position, Halt, Join, Scout, Engage]

51 © Gal Kaminka

State-based selection

Preconditions and termination conditions are effective and allow flexible re-use. Very complex behavior is generated, but thrashing is still very much a problem.

52 © Gal Kaminka

Finite State Machines: Avoid Thrashing by Sequencing

Every state represents a behavior; transitions are triggered by sensor readings.

[Diagram: a finite state machine with a Start state and states/transitions labeled A, B, C, D]

53 © Gal Kaminka

Example: Foraging

[FSM diagram: states Acquire, Pick Up, Go Home, Drop; transitions: Close to Puck (Acquire to Pick Up), Have Puck (Pick Up to Go Home), At Home (Go Home to Drop)]

54 © Gal Kaminka

Example: Foraging

[FSM diagram: the same machine, with an added Lost Puck transition from Go Home back to Acquire]
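A sketch of the foraging machine as a transition table (state and trigger names follow the figures; the "dropped" edge back to Acquire is an assumption):

# (current state, trigger) -> next state
FORAGING_FSM = {
    ("Acquire", "close_to_puck"): "PickUp",
    ("PickUp",  "have_puck"):     "GoHome",
    ("GoHome",  "at_home"):       "Drop",
    ("GoHome",  "lost_puck"):     "Acquire",   # recovery edge from the second figure
    ("Drop",    "dropped"):       "Acquire",   # assumed: start over after dropping
}

def step_fsm(state, trigger, table=FORAGING_FSM):
    # Advance the machine; stay in the same state if no transition matches.
    return table.get((state, trigger), state)

state = "Acquire"
for trigger in ["close_to_puck", "have_puck", "lost_puck"]:
    state = step_fsm(state, trigger)
print(state)  # "Acquire": the robot lost the puck on the way home and starts over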

55 © Gal Kaminka

Hierarchical Finite State Machines

A behavior can be decomposed into others; the decomposition is selected based on sensors and memory.

[Diagram: a hierarchical finite state machine, where a state expands into a lower-level machine]

56 © Gal Kaminka

BITE: Bar Ilan Teamwork Engine

Combining FSAs and state-based selection gives multiple opportunities for arbitration: temporal (what comes next) and hierarchical (which child should be selected).

Prevention of cycling and thrashing, e.g., by keeping a record of which child was recently selected.
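One simple way to discourage such cycling is sketched below; this is a generic illustration, not BITE's actual mechanism:

from collections import deque

class ChildArbiter:
    # Hierarchical arbitration that penalizes recently selected children.
    def __init__(self, memory=3, penalty=0.5):
        self.recent = deque(maxlen=memory)   # names of recently selected children
        self.penalty = penalty

    def select(self, children, score):
        def adjusted(c):
            # Discount a child's score if it was chosen in the recent past.
            return score(c) * (self.penalty if c.name in self.recent else 1.0)
        chosen = max(children, key=adjusted)
        self.recent.append(chosen.name)
        return chosen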

57 © Gal Kaminka

Questions?

58 © Gal Kaminka

Homework #2

1. Propose algorithms for detecting (a) thrashing, (b) cycling. The algorithms must be appropriate for execution on robots.

2. One of the advantages of the state-based and activation-based approaches (non-FSA) is that they allow opportunism. Using FSAs limits this opportunism, since behaviors are executed in pre-determined sequences. Propose a method to allow opportunism in FSAs.

3. Propose a technique for resolving thrashing and cycling once detected.

Recommended