Agents
CPSC 386 Artificial Intelligence
Ellen Walker, Hiram College


Agents

• An agent perceives its environment through sensors, and acts upon it through actuators.

• The agent’s percepts are its impression of the sensor input.

• (The agent doesn’t necessarily know everything in its environment)

• Agents may have knowledge and/or memory

A Simple Vacuum Cleaner Agent

• 2 locations, A and B
• Dirt sensor (current location only)
• Agent knows where it is
• Actions: left, right, suck

• “Knowledge” represented by percept–action pairs (e.g. [A, dirty] -> suck)

Agent Function vs. Agent Program

• Agent function:
– Mathematical abstraction: f(percepts) = action
– Externally observable (behavior)

• Agent program:
– Concrete implementation of an algorithm that decides what the agent will do
– Runs within a “physical system”
– Not externally observable (thought)

Rational Agents

• Rational agents “do the right thing” based on:
– Performance measure that defines the criterion of success
– The agent’s prior knowledge of the environment
– Actions that the agent can perform
– The agent’s percept sequence to date

• Rationality is not omniscience; a rational agent maximizes expected performance given (necessarily) incomplete information.

Program for an Agent

• Repeat forever:
1. Record latest percept from sensors into memory
2. Choose best action based on memory
3. Record action in memory
4. Perform action (observe results)

• Almost all of AI elaborates this!
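This loop is easy to sketch in code. A minimal Python sketch follows; `sensors.read`, `actuators.do`, and `choose_best_action` are hypothetical placeholders for a particular agent's hardware interface and decision logic, not a fixed API:

```python
def run_agent(sensors, actuators, choose_best_action):
    """Generic agent loop: perceive, decide, remember, act."""
    memory = []                              # percept/action history
    while True:                              # repeat forever
        percept = sensors.read()             # 1. record latest percept
        memory.append(percept)               #    from sensors into memory
        action = choose_best_action(memory)  # 2. choose best action from memory
        memory.append(action)                # 3. record action in memory
        actuators.do(action)                 # 4. perform action (results show
                                             #    up in the next percept)
```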

A Reasonable Vacuum Program

• [A, dirty] -> suck
• [B, dirty] -> suck
• [A, clean] -> right
• [B, clean] -> left

• What goals will this program satisfy?
• What are the pitfalls, if any?
• Does a longer history of percepts help?
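As a concrete sketch, the four rules above fit in a Python lookup table (names are illustrative):

```python
# The four condition-action rules as a lookup table
RULES = {
    ("A", "dirty"): "suck",
    ("B", "dirty"): "suck",
    ("A", "clean"): "right",
    ("B", "clean"): "left",
}

def reflex_vacuum_agent(percept):
    """Map the current (location, status) percept directly to an action."""
    return RULES[percept]

print(reflex_vacuum_agent(("A", "dirty")))  # -> suck
```

One pitfall worth noticing: once both squares are clean, this agent shuttles between A and B forever, so a performance measure that penalizes movement would score it poorly; avoiding that requires remembering past percepts, which this memoryless program cannot do.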

Aspects of Agent Behavior

• Information gathering - actions that modify future percepts

• Learning - modifying the program based on actions and perceived results

• Autonomy - agent’s behavior depends on its own percepts, rather than designer’s programming (a priori knowledge)

Specifying Task Environment

• Performance measure
• Environment (real world or “artificial”)
• Actuators
• Sensors

• Examples:
– Pilot
– Rat in a maze
– Surgeon
– Search engine
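One way to keep these four parts explicit is a small record type; a hedged Python sketch, filled in for the vacuum world from earlier (the class and field contents are illustrative, not from the source):

```python
from dataclasses import dataclass

@dataclass
class TaskEnvironment:
    """A PEAS description of a task environment."""
    performance_measure: str
    environment: str
    actuators: list
    sensors: list

vacuum_world = TaskEnvironment(
    performance_measure="dirt cleaned, minus a penalty per move",
    environment="two squares, A and B, each possibly dirty",
    actuators=["left", "right", "suck"],
    sensors=["location sensor", "dirt sensor"],
)
```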

Properties of Environments

• Fully vs. partially observable (e.g., does the agent have a map?)

• Single-agent vs. multi-agent
– Adversaries (competitive)
– Teammates (cooperative)

• Deterministic vs. stochastic
– May appear stochastic if only partially observable (e.g. card game)
– Strategic: deterministic except for other agents

• (Uncertain = not fully observable, or nondeterministic)

Properties (cont)

• Episodic vs. sequential
– Do we need to know history?

• Static vs. dynamic
– Does the environment change while the agent is thinking?

• Discrete vs. continuous
– Time, space, actions

• Known vs. unknown
– Does the agent know the “rules” or “laws of physics”?

Examples

• Solitaire
• Driving
• Conversation
• Chess
• Internet search
• Lawn mowing

Agent Types

• Reflex
• Model-based reflex
• Goal-based
• Utility-based

Reflex Agent

[Diagram: reflex agent. Sensors report “the world now”; condition–action rules choose “the action now”; effectors act on the environment.]

Model-Based Reflex Agent

[Diagram: model-based reflex agent. As above, but the agent maintains internal state, updated using models of “how the world evolves” and “what my actions do” before the rules choose an action.]
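In code, the only addition over a reflex agent is the internal state and the model used to update it. A minimal Python sketch; `update_state` and `rules` are placeholder domain functions (assumptions, not source material):

```python
def make_model_based_reflex_agent(update_state, rules):
    """Build an agent that keeps an internal picture of the world."""
    state = {}            # the agent's best guess about "the world now"
    last_action = None
    def agent(percept):
        nonlocal state, last_action
        # Revise the state using the model ("how the world evolves",
        # "what my actions do") plus the newest percept.
        state = update_state(state, last_action, percept)
        action = rules(state)        # condition-action rules on the state
        last_action = action
        return action
    return agent
```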

Goal-Based Agent

[Diagram: goal-based agent. Uses the state and the models of “how the world evolves” and “what my actions do” to predict the “future world”, then chooses an action that achieves its goals.]
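A hedged sketch of the decision step: instead of matching rules, the agent imagines each action's outcome and tests it against the goal. `predict` and `satisfies_goal` are assumed domain functions:

```python
def goal_based_action(state, actions, predict, satisfies_goal):
    """Pick an action whose predicted future world meets the goal."""
    for action in actions:
        if satisfies_goal(predict(state, action)):   # imagined "future world"
            return action
    return None   # no single action reaches the goal; search/planning needed
```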

Utility-Based Agent

[Diagram: utility-based agent. Like the goal-based agent, but scores each predicted future world with a utility function (“happiness”) and picks the best.]

Learning Agent

[Diagram: learning agent. The performance element (the agent as before) connects sensors to effectors; a critic compares feedback against learning goals; the learning element makes changes to the performance element’s knowledge; a problem generator proposes exploratory actions.]
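A rough Python sketch of one cycle, showing how the boxes interact; every name here is a placeholder for a component the designer supplies, not a fixed API:

```python
def learning_agent_step(performance_element, critic, learning_element,
                        problem_generator, percept):
    """One cycle of a learning agent; all components are placeholders."""
    feedback = critic(percept)                  # judge outcomes vs. a standard
    learning_element.improve(performance_element, feedback)  # make "changes"
    exploratory = problem_generator.suggest()   # propose informative actions
    if exploratory is not None:
        return exploratory                      # information gathering
    return performance_element(percept)         # otherwise act as usual
```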

Classes of Representations

• Atomic
– State is indivisible

• Factored
– State consists of attributes and values

• Structured
– State consists of objects (which have attributes and relate to other objects)
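For instance, the vacuum world’s state under each representation class might look like this in Python (an illustrative sketch; all names are made up):

```python
# Atomic: the state is an indivisible label
state_atomic = "S42"

# Factored: the state is a set of attribute/value pairs
state_factored = {"location": "A", "dirt_in_A": True, "dirt_in_B": False}

# Structured: objects with attributes, plus relations between objects
state_structured = {
    "objects": {"robot": {"at": "A"}, "dirt1": {"in": "A"}},
    "relations": [("adjacent", "A", "B")],
}
```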
