
CS 8520: Artificial Intelligence

Intelligent Agents

Paula Matuszek

Fall, 2008

Slides based on Hwee Tou Ng, aima.eecs.berkeley.edu/slides-ppt, which are in turn based on Russell, aima.eecs.berkeley.edu/slides-pdf.

Paula Matuszek, CSC 8520, Fall 2008. Based on aima.eecs.berkeley.edu/slides-ppt/m2-agents.ppt


Outline

• Agents and environments

• Rationality

• PEAS (Performance measure, Environment, Actuators, Sensors)

• Environment types

• Agent types


Agents

• An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

• Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators.

• Robotic agent: cameras and infrared range finders for sensors; various motors for actuators.


Agents and environments

• The agent function maps from percept histories to actions:

    f : P* → A

• The agent program runs on the physical architecture to produce f.

• agent = architecture + program
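
To make the distinction concrete, here is a minimal added sketch (not from the original slides) of the agent function / agent program split in Python; the class and method names are illustrative assumptions.

    # Sketch: the agent program only ever sees the current percept, but the
    # agent function it implements maps the whole percept history P* to an action.
    class AgentProgram:
        def __init__(self):
            self.percepts = []              # percept history P*

        def __call__(self, percept):
            self.percepts.append(percept)
            return self.choose_action()

        def choose_action(self):
            # Placeholder policy; the agent types later in these slides
            # (reflex, model-based, goal-based, utility-based) refine this.
            return "NoOp"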


Vacuum-cleaner world

• Percepts: location and contents, e.g., [A,Dirty]

• Actions: Left, Right, Suck, NoOp


A vacuum-cleaner agent

Percept sequence            Action
[A, Clean]                  Right
[A, Dirty]                  Suck
[B, Clean]                  Left
[B, Dirty]                  Suck
[A, Clean], [A, Clean]      Right
[A, Clean], [A, Dirty]      Suck
…                           …


Rational agents

• An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful.

• Performance measure: An objective criterion for success of an agent's behavior

• E.g., performance measure of a vacuum-cleaner agent could be amount of dirt cleaned up, amount of time taken, amount of electricity consumed, amount of noise generated, etc.


Rational agents

• Rational Agent: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
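
Stated compactly (an added gloss, not on the original slide), the rational choice is:

    action = argmax over actions a of E[ performance measure | percept sequence, built-in knowledge, a ]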


Rational agents

• Rationality is distinct from omniscience (all-knowing with infinite knowledge).

• Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration).

• An agent is autonomous if its behavior is determined by its own experience (with ability to learn and adapt)


PEAS: Description of an Agent's World

• Performance measure: How do we assess whether we are doing the right thing?

• Environment: What is the world we are in?

• Actuators: How do we affect the world we are in?

• Sensors: How do we perceive the world we are in?

• Together these specify the setting for intelligent agent design
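
As a small added illustration (the dataclass and field names are assumptions, not from the slides), a PEAS description can be written down as a plain record, here filled in with the vacuum-cleaner world from the earlier slides:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class PEAS:
        performance_measure: List[str]
        environment: List[str]
        actuators: List[str]
        sensors: List[str]

    # The vacuum-cleaner world, written as a PEAS description.
    vacuum_peas = PEAS(
        performance_measure=["dirt cleaned up", "time taken", "electricity consumed"],
        environment=["squares A and B", "dirt"],
        actuators=["Left", "Right", "Suck", "NoOp"],
        sensors=["location", "contents of current square"],
    )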


PEAS: Taxi Driver

• Consider, e.g., the task of designing an automated taxi driver:

– Performance measure: Safe, fast, legal, comfortable trip, maximize profits

– Environment: Roads, other traffic, pedestrians, customers

– Actuators: Steering wheel, accelerator, brake, signal, horn

– Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard


PEAS

• Agent: Medical diagnosis system

• Performance measure: Healthy patient, minimize costs, lawsuits

• Environment: Patient, hospital, staff

• Actuators: Screen display (questions, tests, diagnoses, treatments, referrals)

• Sensors: Keyboard (entry of symptoms, findings, patient's answers)


PEAS

• Agent: Part-picking robot

• Performance measure: Percentage of parts in correct bins

• Environment: Conveyor belt with parts, bins

• Actuators: Jointed arm and hand

• Sensors: Camera, joint angle sensors


PEAS

• Agent: Interactive English tutor

• Performance measure: Maximize student's score on test

• Environment: Set of students

• Actuators: Screen display (exercises, suggestions, corrections)

• Sensors: Keyboard


Environment types

• Fully observable (vs. partially observable): An agent's sensors give it access to the complete state of the environment at each point in time.

• Deterministic (vs. stochastic): The next state of the environment is completely determined by the current state and the action executed by the agent. (If the environment is deterministic except for the actions of other agents, then the environment is strategic)

• Episodic (vs. sequential): The agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.


Environment types

• Static (vs. dynamic): The environment is unchanged while an agent is deliberating. (The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does.)

• Discrete (vs. continuous): A limited number of distinct, clearly defined percepts and actions.

• Single agent (vs. multiagent): An agent operating by itself in an environment.
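
These six dimensions can be captured as a simple record. The sketch below is an added illustration (the type and field names are assumptions), filled in with the chess-with-a-clock values from the table two slides ahead:

    from dataclasses import dataclass

    @dataclass
    class EnvironmentType:
        fully_observable: bool
        deterministic: str      # "deterministic", "strategic", or "stochastic"
        episodic: bool
        static: str             # "static", "semidynamic", or "dynamic"
        discrete: bool
        single_agent: bool

    # Chess with a clock, per the environment-types table below.
    chess_with_clock = EnvironmentType(
        fully_observable=True,
        deterministic="strategic",
        episodic=False,
        static="semidynamic",
        discrete=True,
        single_agent=False,
    )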


Environment types

                    Chess with a clock    Chess without a clock    Taxi driving
Fully observable
Deterministic
Episodic
Static
Discrete
Single agent


Environment types

                    Chess with a clock    Chess without a clock    Taxi driving
Fully observable    Yes                   Yes                      No
Deterministic       Strategic             Strategic                No
Episodic            No                    Yes                      No
Static              Semi                  Yes                      No
Discrete            Yes                   Yes                      No
Single agent        No                    No                       No

• The environment type largely determines the agent design.

• The simplest environment is fully observable, deterministic, episodic, static, discrete and single-agent.

• The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, multi-agent


Agent functions and programs

• An agent is completely specified by the agent function mapping percept sequences to actions.

• A rational agent function maximizes the average performance for a given environment class

• Aim: find a way to implement the rational agent function concisely


Table-lookup agent

function TABLE-DRIVEN-AGENT(percept) returns an action
    append percept to the end of percepts
    action ← LOOKUP(percepts, table)
    return action

• Drawbacks:
    – Huge table
    – Takes a long time to build the table
    – Takes a long time to find entries
    – No autonomy

• Surely we can do better!
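
To make the pseudocode concrete, here is a minimal Python rendering (an added sketch; the partial table shown is just the vacuum-agent rows from the earlier slide):

    # Table-driven agent: the table is indexed by the entire percept history,
    # which is exactly why it blows up in size.
    table = {
        (("A", "Clean"),): "Right",
        (("A", "Dirty"),): "Suck",
        (("B", "Clean"),): "Left",
        (("B", "Dirty"),): "Suck",
        (("A", "Clean"), ("A", "Clean")): "Right",
        (("A", "Clean"), ("A", "Dirty")): "Suck",
        # ... one entry for every possible percept sequence
    }

    percepts = []

    def table_driven_agent(percept):
        percepts.append(percept)                    # append percept to percepts
        return table.get(tuple(percepts), "NoOp")   # LOOKUP(percepts, table)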


Agent types

• Four basic types in order of increasing generality:

• Simple reflex agents

• Model-based reflex agents

• Goal-based agents

• Utility-based agents


Simple reflex agents


Simple reflex Vacuum Agent

function REFLEX-VACUUM-AGENT([location, status]) returns an action
    if status == Dirty then return Suck
    else if location == A then return Right
    else if location == B then return Left

• Observe the world, choose an action, perform the action, done.

• Problems if the environment is not fully observable.

• Depending on the performance metric, may be inefficient.
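
A direct Python rendering of this reflex agent (an added sketch mirroring the pseudocode above):

    def reflex_vacuum_agent(percept):
        location, status = percept      # e.g. ("A", "Dirty")
        if status == "Dirty":
            return "Suck"
        elif location == "A":
            return "Right"
        elif location == "B":
            return "Left"

    # Example: reflex_vacuum_agent(("A", "Dirty")) -> "Suck"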


Model-Based Agents

• Suppose moving has a cost?

• If a square stays clean once it is clean, then this algorithm will be extremely inefficient.

• A very simple improvement would be:
    – Record when we have cleaned a square
    – Don't go back once we have cleaned both.

• We have built a very simple model.


Reflex Agents with State


Reflex Agents with State

More complex agent with a model: a square can get dirty again.

function REFLEX-VACUUM-AGENT-WITH-STATE([location, status]) returns an action
    (last-cleaned-A and last-cleaned-B are initially declared = 100)
    increment last-cleaned-A and last-cleaned-B
    if status == Dirty then return Suck
    if location == A then
        set last-cleaned-A to 0
        if last-cleaned-B > 3 then return Right else return NoOp
    else
        set last-cleaned-B to 0
        if last-cleaned-A > 3 then return Left else return NoOp

The value we check last-cleaned against could be modified; we could track how often we find dirt to compute it.
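
An added Python sketch of the same stateful agent, mirroring the pseudocode above (the threshold of 3 and the initial value of 100 follow the slide):

    last_cleaned = {"A": 100, "B": 100}   # initially declared = 100, as on the slide

    def reflex_vacuum_agent_with_state(percept):
        location, status = percept
        last_cleaned["A"] += 1
        last_cleaned["B"] += 1
        if status == "Dirty":
            return "Suck"
        other = "B" if location == "A" else "A"
        last_cleaned[location] = 0        # we just observed this square clean
        if last_cleaned[other] > 3:       # the other square may be dirty again
            return "Right" if location == "A" else "Left"
        return "NoOp"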


Model-Based = Reflex Plus State

• Maintain an internal model of the state of the environment

• Over time update state using world knowledge:
    – How the world changes
    – How actions affect the world

• Agent can operate more efficiently

• More effective than a simple reflex agent for partially observable environments


Goal-based agents


Goal-Based Agent

• Agent has some information about desirable situations

• Needed when a single action cannot reach desired outcome

• Therefore performance measure needs to take into account "the future".

• Typical model for search and planning.
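
As an added illustration of goal-based behavior (not from the slides), here is a tiny breadth-first planner that finds a sequence of vacuum-world actions reaching the goal "both squares clean":

    from collections import deque

    ACTIONS = ["Left", "Right", "Suck"]

    def step(state, action):
        """Deterministic vacuum-world model: state = (location, set of dirty squares)."""
        location, dirty = state
        if action == "Suck":
            return (location, frozenset(dirty - {location}))
        return ("A", dirty) if action == "Left" else ("B", dirty)

    def plan_to_goal(start):
        """Breadth-first search for an action sequence reaching 'no dirty squares'."""
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            state, actions = frontier.popleft()
            if not state[1]:                      # goal test: both squares clean
                return actions
            for a in ACTIONS:
                nxt = step(state, a)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, actions + [a]))

    # Example: starting at A with both squares dirty.
    print(plan_to_goal(("A", frozenset({"A", "B"}))))   # ['Suck', 'Right', 'Suck']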


Utility-based agents


Utility-Based Agents

• Possibly more than one goal, or more than one way to reach it

• Some are better, more desirable than others

• There is a utility function which captures this notion of "better".

• Utility function maps a state or sequence of states onto a metric.
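
A minimal added sketch of this idea: score each candidate resulting state with a utility function and pick the action whose outcome scores highest. The particular utility used here (number of clean squares) and the one-step world model are illustrative assumptions, reusing the simple vacuum world from the planning sketch above:

    def step(state, action):
        """Same simple vacuum-world model as in the planning sketch above."""
        location, dirty = state
        if action == "Suck":
            return (location, frozenset(dirty - {location}))
        return ("A", dirty) if action == "Left" else ("B", dirty)

    def utility(state):
        """Illustrative utility: number of clean squares."""
        return 2 - len(state[1])

    def utility_based_choice(state, actions=("Left", "Right", "Suck")):
        """Pick the action whose predicted resulting state has the highest utility."""
        return max(actions, key=lambda a: utility(step(state, a)))

    # Example: at A with A dirty -> "Suck"
    print(utility_based_choice(("A", frozenset({"A"}))))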


Learning agents


Learning Agents

• All agents have methods for selecting actions.

• Learning agents can modify these methods.

• Performance element: any of the previously described agents

• Learning element: makes changes to actions

• Critic: evaluates actions, gives feedback to learning element

• Problem generator: suggests actions
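
An added structural sketch of how these four components might fit together (the class, method names, and the way exploration is interleaved are assumptions, not from the slides):

    class LearningAgent:
        def __init__(self, performance_element, learning_element, critic, problem_generator):
            self.performance_element = performance_element  # any of the agent types above
            self.learning_element = learning_element        # modifies how actions are chosen
            self.critic = critic                            # evaluates actions, gives feedback
            self.problem_generator = problem_generator      # suggests exploratory actions

        def act(self, percept):
            feedback = self.critic(percept)
            self.learning_element(self.performance_element, feedback)
            # Occasionally try an exploratory action suggested by the problem generator.
            suggestion = self.problem_generator(percept)
            return suggestion if suggestion is not None else self.performance_element(percept)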


Summary

• We will view our systems as agents.

• An agent operates in a world which can be described by its Performance measure, Environment, Actuators, and Sensors.

• A rational agent chooses actions which maximize its performance measure, given the information it has.

• Agents range in complexity from simple reflex agents to complex utility-based and learning agents.