
74.419 Artificial Intelligence

Intelligent Agents 1

Russell and Norvig, Ch. 2

Outline

Agents and environments
Rationality
PEAS (Performance measure, Environment, Actuators, Sensors)
Environment types
Agent types

Agents

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

A human agent has, for example, eyes, ears, and other organs as sensors, and hands, legs, mouth, and other body parts as actuators.

A robotic agent has, for example, cameras and infrared range finders as sensors, and various motors as actuators.

Agent and Environment

The Vacuum-Cleaner Mini-World

Environment: squares A and B
Percepts: location and status, e.g., [A, Dirty]
Actions: Left, Right, Suck, and No-op

The Vacuum-Cleaner Mini-World

Percept sequence              Action
[A, Clean]                    Right
[A, Dirty]                    Suck
[B, Clean]                    Left
[B, Dirty]                    Suck
[A, Dirty], [A, Clean]        Right
[A, Clean], [B, Dirty]        Suck
[A, Clean], [B, Clean]        No-op
...                           ...

Agent Function

The agent function maps from percept histories to actions:

    f : P* → A

An agent is completely specified by the agent function mapping percept sequences to actions.

The agent program runs on the physical architecture to produce f:

    agent = architecture + program

The Vacuum-Cleaner Mini-World

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status == Dirty then return Suck
  else if location == A then return Right
  else if location == B then return Left

Does not work this way. Need full state space (table) or memory.
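A minimal Python sketch of this reflex vacuum agent, together with a toy simulation loop; the environment representation and the initial dirt layout below are illustrative assumptions, not taken from the text.

# Simple reflex agent for the two-square vacuum world.
def reflex_vacuum_agent(percept):
    # percept = (location, status), e.g. ("A", "Dirty")
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

def simulate(steps=6):
    # Assumed world model: both squares start dirty, agent starts in A.
    status = {"A": "Dirty", "B": "Dirty"}
    location = "A"
    for _ in range(steps):
        percept = (location, status[location])
        action = reflex_vacuum_agent(percept)
        print(percept, "->", action)
        if action == "Suck":
            status[location] = "Clean"
        elif action == "Right":
            location = "B"
        elif action == "Left":
            location = "A"

if __name__ == "__main__":
    simulate()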

The Vacuum-Cleaner Mini-World

Percept sequence              Action
[A, Clean]                    Right
[A, Dirty]                    Suck
[B, Clean]                    Left
[B, Dirty]                    Suck
[A, Dirty], [A, Clean]        Right
[A, Clean], [B, Dirty]        Suck
[B, Dirty], [B, Clean]        Left
[B, Clean], [A, Dirty]        Suck
[A, Clean], [B, Clean]        No-op
[B, Clean], [A, Clean]        No-op

Rational Agents

Rational Agent: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Rationality

Rationality ≠ omniscience
An omniscient agent knows the actual outcome of its actions.

Rationality ≠ perfection
Rationality maximizes expected performance, while perfection maximizes actual performance.

An "ideal rational agent" always does "the right thing".

Rationality

The proposed definition requires:
Information gathering / exploration: to maximize future rewards.
Learning from percepts: extending prior knowledge.
Agent autonomy: compensating for incorrect prior knowledge.

Rationality

What is rational at a given time depends on:
the performance measure,
the agent's prior knowledge of the environment,
the actions the agent can perform,
the percept sequence to date (sensors).

Task Environment

To design a rational agent we must first specify its task environment.

PEAS description of the task environment:
Performance measure
Environment
Actuators
Sensors

Task Environment - Example

For example, a fully automated taxi driver. PEAS description of the task environment:

Performance measure: safety, reaching the destination, profits, legality, comfort, …
Environment: streets/freeways, other traffic, pedestrians, weather, …
Actuators: steering, accelerating, brake, horn, speaker/display, …
Sensors: video, sonar, speedometer, engine sensors, keyboard, GPS, …
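As a rough sketch, a PEAS description can be captured as a plain record; the class name and field values below simply restate the taxi example and are an assumed representation, not a format from the book.

from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    # Illustrative container for a PEAS task-environment description.
    performance: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

automated_taxi = PEAS(
    performance=["safety", "reach destination", "profits", "legality", "comfort"],
    environment=["streets/freeways", "other traffic", "pedestrians", "weather"],
    actuators=["steering", "accelerator", "brake", "horn", "speaker/display"],
    sensors=["video", "sonar", "speedometer", "engine sensors", "keyboard", "GPS"],
)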

Examples of Agents (Norvig)

PEAS

Agent: medical diagnosis system
Performance measure: healthy patient, minimized costs, no lawsuits
Environment: patient, hospital, staff
Actuators: screen display (questions, tests, diagnoses, treatments, referrals)
Sensors: keyboard (entry of symptoms, findings, patient's answers)

PEAS

Agent: part-picking robot
Performance measure: percentage of parts in correct bins
Environment: conveyor belt with parts, bins
Actuators: jointed arm and hand
Sensors: camera, joint angle sensors

PEAS

Agent: interactive English tutor
Performance measure: maximize student's score on test
Environment: set of students
Actuators: screen display (exercises, suggestions, corrections)
Sensors: keyboard

Classification of Environment Types

Fully observable (vs. partially observable): An agent's sensors give it access to the complete state of the environment at each point in time.

Deterministic (vs. stochastic): The next state of the environment is completely determined by the current state and the action executed by the agent. (If the environment is deterministic, except for the actions of other agents, then the environment is strategic)

Episodic (vs. sequential): The agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.

Static (vs. dynamic): The environment is unchanged while an agent is deliberating. (The environment is semi-dynamic if the environment itself does not change with the passage of time but the agent's performance score does.)

Discrete (vs. continuous): A limited number of distinct, clearly defined percepts and actions.

Single agent (vs. multiagent): An agent operating by itself in an environment.

Task Environments (Norvig)

Agent design depends on the task environment:

deterministic vs. stochastic vs. non-deterministic: assembly line vs. weather vs. "odds & gods"
episodic vs. non-episodic: assembly line vs. diagnostic repair robot, Flakey
static vs. dynamic: room without vs. with other agents
discrete vs. continuous: chess game vs. autonomous vehicle
single vs. multi-agent: solitaire game vs. soccer, taxi driver
fully observable vs. partially observable: video camera vs. infrared camera - colour?

(Figure: infrared picture of an unpleasant situation, from www.indigosystems.com)

Environment types

Fully vs. partially observable: an environment is fully observable when the sensors can detect all aspects that are relevant to the choice of action.

Deterministic vs. stochastic: if the next environment state is completely determined by the current state and the executed action, then the environment is deterministic.

Episodic vs. sequential: in an episodic environment, the agent's experience can be divided into atomic episodes in which the agent perceives and then performs a single action; the choice of action depends only on the episode itself.

Static vs. dynamic: if the environment can change while the agent is choosing an action, the environment is dynamic. It is semi-dynamic if the agent's performance score changes over time even when the environment itself stays the same.

Discrete vs. continuous: this distinction can be applied to the state of the environment, to the way time is handled, and to the percepts and actions of the agent.

Single vs. multi-agent: does the environment contain other agents that are also maximizing some performance measure that depends on the current agent's actions?

                    Solitaire   Backgammon   Internet shopping   Taxi
Observable?         Fully       Fully        Partially           Partially
Deterministic?      Yes         No           Yes                 No
Episodic?           No          No           No                  No
Static?             Yes         Yes          Semi                No
Discrete?           Yes         Yes          Yes                 No
Single-agent?       Yes         No           No                  No

Examples of Environment Types

                    Chess with clock   Chess without clock   Taxi driving
Fully observable    Yes                Yes                    No
Deterministic       Strategic          Strategic              No
Episodic            No                 No                     No
Static              Semi               Yes                    No
Discrete            Yes                Yes                    No
Single agent        No                 No                     No

The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, and multi-agent.

Environment types

The simplest environment is fully observable, deterministic, episodic, static, discrete, and single-agent.

Most real situations are partially observable, stochastic, sequential, dynamic, continuous, and multi-agent.

Agent types

How does the inside of the agent work?
Agent = architecture + program

All agents have the same skeleton:
Input = current percepts
Output = action
Program = manipulates input to produce output

Note the difference with the agent function: the program takes only the current percept as input, while the agent function maps the entire percept history to an action.
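A hedged Python sketch of this shared skeleton; the class and function names are my own, not from the text. The program sees only the current percept, and the architecture is simulated by a loop that feeds percepts in and applies the returned actions.

class Agent:
    # Skeleton shared by all agent programs: current percept in, action out.
    def program(self, percept):
        raise NotImplementedError

def run_agent(agent, environment_step, initial_percept, steps):
    # Assumed driver: environment_step(action) applies the action and
    # returns the next percept.
    percept = initial_percept
    actions = []
    for _ in range(steps):
        action = agent.program(percept)
        actions.append(action)
        percept = environment_step(action)
    return actions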

Agent types

function TABLE-DRIVEN-AGENT(percept) returns an action
  static: percepts, a sequence, initially empty
          table, a table of actions, indexed by percept sequences
  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action

This approach is doomed to failure: the table needs an entry for every possible percept sequence.
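Rendered literally in Python (the tiny vacuum-world table below is an illustrative assumption), the problem becomes visible: the table must hold one entry per percept sequence, so it grows without bound.

# Table-driven agent: looks up the entire percept history in a table.
percepts = []  # percept sequence, initially empty

table = {
    # Assumed partial table for the vacuum world; keys are percept sequences.
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Dirty"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

def table_driven_agent(percept):
    percepts.append(percept)
    # Returns None if this percept sequence is missing from the table.
    return table.get(tuple(percepts))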

Agent types

Four basic kinds of agent programs will be discussed:
Simple reflex agents
Model-based reflex agents
Goal-based agents
Utility-based agents

All of these can be turned into learning agents.

Simple Reflex Agents

Select an action on the basis of the current percept only, e.g. the vacuum agent.

Large reduction in possible percept/action situations.

Implemented through condition-action rules, e.g.: if dirty then suck.

Simple Reflex Agent – Example (Nilsson)

Robot in a maze:
• perceives the 8 squares around it
• low-level percept: can the robot move to a square or not
• higher-level percept: 2-unit segments
• 4 basic actions: left (west), right (east), up (north), down (south)
• task is to move along a border
• no 'tight' spaces: at least two free squares

Simple Reflex Agent - Example (Nilsson)

Note: the description of the bottom-left agent seems to be wrong. This agent will walk clockwise along the outside wall.

Note: the description of the bottom-left agent seems to belong to this agent. It will walk counter-clockwise around the object.

Simple Reflex Agent - Example

Behaviour routines:
If x1 = 1 and x2 = 0 then move right
If x2 = 1 and x3 = 0 then move down
If x3 = 1 and x4 = 0 then move left
If x4 = 1 and x1 = 0 then move up
else move up
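A literal Python rendering of these routines, as a rough sketch; x1–x4 are taken to be the boolean wall-detecting features of Nilsson's example, and how they are computed from the eight surrounding squares is not shown here.

def boundary_following_action(x1, x2, x3, x4):
    # Condition-action rules for following a boundary in the grid world.
    if x1 == 1 and x2 == 0:
        return "right"  # east
    if x2 == 1 and x3 == 0:
        return "down"   # south
    if x3 == 1 and x4 == 0:
        return "left"   # west
    if x4 == 1 and x1 == 0:
        return "up"     # north
    return "up"         # default: move up until a boundary is found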


Simple Reflex Agents

function SIMPLE-REFLEX-AGENT(percept) returns an action
  static: rules, a set of condition-action rules
  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action

This will only work if the environment is fully observable; otherwise infinite loops may occur.
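A generic Python sketch of this program; representing rules as (condition, action) pairs and INTERPRET-INPUT as a plain function are illustrative choices, not the book's API.

def simple_reflex_agent(percept, rules, interpret_input):
    state = interpret_input(percept)        # INTERPRET-INPUT
    for condition, action in rules:         # RULE-MATCH
        if condition(state):
            return action                   # RULE-ACTION
    return "NoOp"

# Example: the vacuum world expressed as condition-action rules.
vacuum_rules = [
    (lambda s: s["status"] == "Dirty", "Suck"),
    (lambda s: s["location"] == "A", "Right"),
    (lambda s: s["location"] == "B", "Left"),
]
interpret = lambda p: {"location": p[0], "status": p[1]}
print(simple_reflex_agent(("A", "Dirty"), vacuum_rules, interpret))  # Suck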


Model/State-based Agents

To tackle partially observable environments, maintain an internal state.

Over time, update the state using world knowledge (a model of the world):
How does the world change?
How do actions affect the world?

Model/State-based Agents

function REFLEX-AGENT-WITH-STATE(percept) returns an action
  static: rules, a set of condition-action rules
          state, a description of the current world state
          action, the most recent action (initially none)
  state ← UPDATE-STATE(state, action, percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action
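A hedged Python sketch of the same structure; update_state is assumed to be supplied by the designer and encodes the world model (how the world evolves and how actions affect it).

def make_model_based_agent(rules, update_state, initial_state=None):
    # Closure keeps the internal state and the most recent action.
    state = initial_state
    last_action = None

    def program(percept):
        nonlocal state, last_action
        state = update_state(state, last_action, percept)  # UPDATE-STATE
        for condition, action in rules:                    # RULE-MATCH
            if condition(state):
                last_action = action                       # RULE-ACTION
                return action
        last_action = "NoOp"
        return last_action

    return program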

Goal-based Agents

The agent needs a goal to know which situations are desirable.

Things become difficult when long sequences of actions are required to reach the goal. This is typically investigated in search and planning research.

Major difference: the future is taken into account.

Goal-based agents are more flexible because knowledge is represented explicitly (to a certain degree) and can be manipulated.
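As a minimal illustration of taking the future into account, the sketch below does a one-step lookahead with an assumed transition model result(state, action) and a goal test; realistic goal-based agents search or plan over longer action sequences.

def goal_based_action(state, actions, result, goal_test):
    # Pick any action whose predicted outcome satisfies the goal.
    for action in actions:
        if goal_test(result(state, action)):
            return action
    return None  # no single action reaches the goal; search/planning needed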

Utility-based Agents

Certain goals can be reached in different ways; some ways are better, i.e., have a higher utility.

A utility function maps a (sequence of) state(s) onto a real number.

Improvements over pure goal-based behaviour:
Selecting between conflicting goals.
Selecting appropriately between several goals based on their likelihood of success.
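A small sketch of utility-based selection under uncertainty; outcomes(state, action) is an assumed model that yields (probability, next_state) pairs, and utility maps a state to a real number.

def expected_utility_action(state, actions, outcomes, utility):
    # Choose the action with the highest expected utility over its outcomes.
    def expected_utility(action):
        return sum(p * utility(s) for p, s in outcomes(state, action))
    return max(actions, key=expected_utility)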

Learning Agents

All the previous agent programs describe methods for selecting actions, but they do not explain the origin or development of these programs. Learning mechanisms can be used for this.

An advantage is the robustness of the program towards unknown environments.

Learning Agents

Learning element: introduces improvements in the performance element.
Critic: provides feedback on the agent's performance based on a fixed performance standard.
Performance element: selects actions based on percepts; corresponds to the previous agent programs.
Problem generator: suggests actions that will lead to new and informative experiences (exploration vs. exploitation).
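One possible way to wire these four components together in Python; the function signatures are assumptions made for illustration, not a fixed interface.

def learning_agent_step(percept, performance_element, critic,
                        learning_element, problem_generator):
    action = performance_element(percept)             # select an action
    feedback = critic(percept, action)                 # judge it against the standard
    learning_element(feedback, performance_element)    # improve the performance element
    suggestion = problem_generator(percept)            # optionally propose an experiment
    return suggestion if suggestion is not None else action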

Robotic Sensors

(digital) camera
infrared sensor
range finders, e.g. radar, sonar
GPS
tactile sensors (whiskers, bump panels)
proprioceptive sensors, e.g. shaft decoders
force sensors
torque sensors

Robotic Effectors

'limbs' connected through joints; degrees of freedom = number of directions in which a limb can move (incl. rotation axes)
drives: wheels (land), propellers, turbines (air, water)
driven through electric motors, pneumatic (gas), or hydraulic (fluid) actuation
statically stable vs. dynamically stable

Recommended