CE417: Introduction to Artificial Intelligence
Sharif University of Technology
Spring 2013
Course material: “Artificial Intelligence: A Modern Approach”, Chapter 3
Solving problems by searching
Soleymani
Outline
 Problem-solving agents
 Problem formulation and some examples of problems
 Search algorithms
  Uninformed: using only the problem definition
  Informed: using also problem-specific knowledge
2
Problem-Solving Agents
Problem Formulation: process of deciding what actions and states to consider
 States of the world
 Actions as transitions between states
Goal Formulation: process of deciding what the next goal to be sought will be
Agent must find out how to act now and in the future to reach a goal state
Search: process of looking for a solution (a sequence of actions that reaches the goal starting from the initial state)
3
Problem-Solving Agents
A goal-based agent adopts a goal and aims at satisfying it
(a simple version of an intelligent agent maximizing a performance measure)
“How does an intelligent system formulate its problem as a search problem?”
 Goal formulation: specifying a goal (or a set of goals) that the agent must reach
 Problem formulation: abstraction (removing detail)
Retaining validity and ensuring that the abstract actions are easy toperform
4
Example: Romania On holiday in Romania; currently in Arad. Flight leaves tomorrow from Bucharest
Initial state currently in Arad
Formulate goal be in Bucharest
Formulate problem states: various cities actions: drive between cities
Solution sequence of cities, e.g.,Arad, Sibiu, Fagaras, Bucharest
Map of Romania
5
Example: Romania (Cont.) Assumptions about environment Known Observable
The initial state can be specified exactly.
Deterministic Each applied action to a state results in a specified state.
Discrete
Given the first three assumptions above, by starting in an initial state and running a sequence of actions, it is completely determined where the agent will be
Perceptions after each action provide no new information Can search with closed eyes (open-loop)
6
Problem-solving agents
Formulate, Search, Execute
7
Problem types Deterministic and fully observable (single-state problem)
Agent knows exactly its state even after a sequence of actions Solution is a sequence
Non-observable or sensor-less (conformant problem) Agent’s percepts provide no information at all Solution is a sequence
Nondeterministic and/or partially observable (contingency problem)
 Percepts provide new information about the current state
 Solution can be a contingency plan (tree or strategy) and not a sequence
 Often interleave search and execution
Unknown state space (exploration problem)
Belief State In partially observable & nondeterministic environments,
a state is not necessarily mapped to a world configuration State shows the agent’s conception of the world state
Agent's current belief (given the sequence of actions and percepts upto that point) about the possible physical states it might be in.
9
World states
A sample belief state
Example: vacuum world Single-state, start in {5}
Solution?
[Right, Suck]
Sensorless, start in {1,2,3,4,5,6,7,8} e.g., Right goes to {2,4,6,8}
Solution?[Right,Suck,Left,Suck]
Contingency Nondeterministic: Suck may dirty a clean carpet Partially observable: location, dirt only at the current location
Percept: [L, Clean], i.e., start in {5} or {7}Solution?[Right, if dirt then Suck]
10
[Right, while dirt do Suck]
Single-state problem
11
In this lecture, we focus on single-state problems
 Search for this type of problem is simpler
 It also provides strategies that can serve as the basis for search in more complex problems
Single-state problem formulation
A problem is defined by five items:
 Initial state, e.g., In(Arad)
 Actions: ACTIONS(s) gives the set of actions that can be executed in s
  e.g., ACTIONS(In(Arad)) = {Go(Sibiu), Go(Timisoara), Go(Zerind)}
12
Single-state problem formulation
A problem is defined by five items:
 Initial state, e.g., In(Arad)
 Actions: ACTIONS(s) gives the set of actions that can be executed in s
 Transition model: RESULT(s, a) gives the state that results from doing action a in state s
  e.g., RESULT(In(Arad), Go(Zerind)) = In(Zerind)
13
Single-state problem formulation
A problem is defined by five items:
 Initial state, e.g., In(Arad)
 Actions: ACTIONS(s) gives the set of actions that can be executed in s
 Transition model: RESULT(s, a) gives the state that results from doing action a in state s
 Goal test: GOAL-TEST(s) determines whether a given state is a goal state
  explicit, e.g., x = "at Bucharest"
  abstract, e.g., Checkmate(x)
14
Single-state problem formulation
A problem is defined by five items:
 Initial state, e.g., In(Arad)
 Actions: ACTIONS(s) gives the set of actions that can be executed in s
 Transition model: RESULT(s, a) gives the state that results from doing action a in state s
 Goal test: GOAL-TEST(s) determines whether a given state is a goal state
 Path cost (additive): assigns a numeric cost to each path, reflecting the agent's performance measure
  e.g., sum of distances, number of actions executed, etc.
  c(s, a, s′) ≥ 0 is the step cost
15
Single-state problem formulation
A problem is defined by five items:
 Initial state, e.g., In(Arad)
 Actions: ACTIONS(s) gives the set of actions that can be executed in s
 Transition model: RESULT(s, a) gives the state that results from doing action a in state s
 Goal test: GOAL-TEST(s) determines whether a given state is a goal state
 Path cost (additive): assigns a numeric cost to each path, reflecting the agent's performance measure
Solution: a sequence of actions leading from the initial state to a goal state
Optimal Solution has the lowest path cost among all solutions.
16
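The five components above can be written out as a short runnable sketch. The road fragment and distances follow the AIMA Romania map; the class and method names here are illustrative, not part of the slides.

```python
# A minimal sketch of the five-component problem formulation for the
# Romania route-finding example (distances from the AIMA map).
ROADS = {
    ('Arad', 'Sibiu'): 140, ('Arad', 'Timisoara'): 118, ('Arad', 'Zerind'): 75,
    ('Sibiu', 'Fagaras'): 99, ('Sibiu', 'Rimnicu Vilcea'): 80,
    ('Fagaras', 'Bucharest'): 211, ('Rimnicu Vilcea', 'Pitesti'): 97,
    ('Pitesti', 'Bucharest'): 101,
}

class RomaniaProblem:
    def __init__(self, initial='Arad', goal='Bucharest'):
        self.initial, self.goal = initial, goal
        # Build a symmetric adjacency map: roads run both ways.
        self.adj = {}
        for (a, b), d in ROADS.items():
            self.adj.setdefault(a, {})[b] = d
            self.adj.setdefault(b, {})[a] = d

    def actions(self, state):            # ACTIONS(s): cities one drive away
        return sorted(self.adj[state])

    def result(self, state, action):     # RESULT(s, a): deterministic transition
        return action

    def goal_test(self, state):          # GOAL-TEST(s)
        return state == self.goal

    def step_cost(self, state, action, result):  # c(s, a, s') >= 0
        return self.adj[state][result]

problem = RomaniaProblem()
```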
State Space State space: set of all reachable states from initial state Initial state, actions, and transition model together define it
It forms a directed graph Nodes: states Links: actions
Constructing this graph on demand
17
Vacuum world state space graph
States? Actions? Goal test? Path cost?
dirt locations & robot location
Left, Right, Suck
no dirt at all locations
one per action
2 positions × 2² dirt configurations = 8 states
18
Example: 8-puzzle
States? Actions? Goal test? Path cost?
locations of eight tiles and blank in 9 squares
move blank left, right, up, down (within the board)
e.g., above goal state
one per move
[Note: optimal solution of n-Puzzle family is NP-complete]
9!/2 = 181,440 states
19
Example: 8-queens problem
Initial State? States? Actions? Goal test? Path cost?
Initial state? no queens on the board
States? any arrangement of 0-8 queens on the board is a state
Actions? add a queen to the state (any empty square)
Goal test? 8 queens are on the board, none attacked
Path cost? of no interest
64 × 63 × ⋯ × 57 ≈ 1.8 × 10¹⁴ states
search cost vs. solution path cost
20
Example: 8-queens problem(other formulation)
Initial state? States?
Actions?
Goal test? Path cost?
Initial state? no queens on the board
States? any arrangement of n queens, one per column in the leftmost n columns, with no queen attacking another
Actions? add a queen to any square in the leftmost empty column such that it is not attacked by any other queen
Goal test? 8 queens are on the board
Path cost? of no interest
2,057 states
21
Example: Cryptarithmetic
States? A cryptarithmetic puzzle (some letters replaced with digits)
Actions? Replacing a letter with an unused digit (satisfying constraints)
Goal test? Puzzle contains only digits
Path cost? Zero. All solutions equally valid.
  FORTY        29786      Solution:
+   TEN      +   850      F=2,
+   TEN      +   850      O=9,
-------      -------      R=7,
  SIXTY        31486      etc.
22
Example: Knuth problem
23
Knuth Conjecture: starting with 4, a sequence of factorial, square root, and floor operations can reach any desired positive integer.
Example: ⌊√√√√√(4!)!⌋ = 5
States? positive numbers
Initial state? 4
Actions? factorial (for integers only), square root, floor
Goal test? state is the desired positive integer
Path cost? zero; all solutions equally valid
Real-world problems Route finding Travelling salesman problem VLSI layout Robot navigation Automatic assembly sequencing
24
Example: Robot navigation (real-world) Infinite set of possible actions and states Techniques are required to make the search space finite.
For a robot with arms and legs or wheels, the search space becomes many-dimensional.
Dealing with errors in sensor readings and motorcontrols
25
Example: robotic assembly
States? Actions? Goal test? Path cost?
States? coordinates of robot joint angles, parts to be assembled
Actions? motions of robot joints, merge two subassemblies
Goal test? complete assembly (with a collision-free merging motion)
Path cost? time to execute
26
Tree search algorithm Basic idea
offline, simulated exploration of state space by generating successors ofalready-explored states
Frontier: all leaf nodes available for expansion at any given point
function TREE-SEARCH(problem) returns a solution, or failure
  initialize the frontier using the initial state of problem
  loop do
    if the frontier is empty then return failure
    choose a leaf node and remove it from the frontier
    if the node contains a goal state then return the corresponding solution
    expand the chosen node, adding the resulting nodes to the frontier
Different data structures (e.g., FIFO, LIFO) for the frontier cause different orders of node expansion and thus produce different search algorithms.
27
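The TREE-SEARCH pseudocode above can be turned into a short runnable sketch. The toy graph and the `pop_index` parameter are illustrative assumptions, chosen to show how the choice of frontier data structure alone changes the search order.

```python
# A runnable sketch of TREE-SEARCH over a small tree-shaped graph.
# pop_index=0 removes the oldest leaf (FIFO -> BFS);
# pop_index=-1 removes the newest leaf (LIFO -> DFS).
GRAPH = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'],
         'D': [], 'E': [], 'F': []}

def tree_search(graph, start, goal, pop_index):
    frontier = [[start]]                     # the frontier holds whole paths
    while frontier:
        path = frontier.pop(pop_index)       # choose a leaf node and remove it
        if path[-1] == goal:
            return path                      # the corresponding solution
        for child in graph[path[-1]]:        # expand the chosen node
            frontier.append(path + [child])
    return None                              # frontier empty -> failure

bfs_path = tree_search(GRAPH, 'A', 'F', 0)   # FIFO frontier
dfs_path = tree_search(GRAPH, 'A', 'F', -1)  # LIFO frontier
```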
Tree search example
28
Tree search example
29
Tree search example
30
Graph Search Redundant paths in tree search: more than one way to get from
one state to another may be due to a bad problem definition or the essence of the problem can cause a tractable problem to become intractable
explored set: remembered every explored node
function GRAPH-SEARCH(problem) returns a solution, or failure
  initialize the frontier using the initial state of problem
  initialize the explored set to be empty
  loop do
    if the frontier is empty then return failure
    choose a leaf node and remove it from the frontier
    if the node contains a goal state then return the corresponding solution
    add the node to the explored set
    expand the chosen node, adding the resulting nodes to the frontier
      only if not in the frontier or explored set
31
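A runnable sketch of the GRAPH-SEARCH skeleton: it is TREE-SEARCH plus an explored set, which prevents re-expanding states reached by redundant paths. The cyclic toy graph is an illustrative assumption.

```python
# GRAPH-SEARCH sketch on a graph with cycles; tree search would revisit
# A, B, C forever, while the explored set keeps each state expanded once.
GRAPH = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'],
         'D': ['B', 'C', 'G'], 'G': []}

def graph_search(graph, start, goal):
    frontier = [[start]]
    explored = set()
    while frontier:
        path = frontier.pop(0)               # FIFO here; any policy works
        state = path[-1]
        if state == goal:
            return path
        explored.add(state)
        for child in graph[state]:
            # add only if not already in the frontier or the explored set
            if child not in explored and all(p[-1] != child for p in frontier):
                frontier.append(path + [child])
    return None
```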
Graph Search Example: rectangular grid
explored
frontier
…
32
Search for 8-puzzle Problem
Taken from: http://iis.kaist.ac.kr/es/
Start Goal
33
34
Implementation: states vs. nodes
A state is a (representation of) a physical configuration
A node is a data structure constituting part of a search tree; it includes state, parent node, action, path cost g(x), depth
Search strategies Search strategy: order of node expansion Strategies performance evaluation:
Completeness: Does it always find a solution when there is one? Time complexity: How many nodes are generated to find solution? Space complexity: Maximum number of nodes in memory during search Optimality: Does it always find a solution with minimum path cost?
Time and space complexity are expressed by b (branching factor): maximum number of successors of any node d (depth): depth of the shallowest goal node m: maximum depth of any node in the search space (may be ∞)
Time & space are described for tree search For graph search, analysis depends on redundant paths
35
Uninformed Search Algorithms
36
Uninformed (blind) search strategies No additional information beyond the problem definition Breadth-First Search (BFS) Uniform-Cost Search (UCS) Depth-First Search (DFS) Depth-Limited Search (DLS) Iterative Deepening Search (IDS)
37
Breadth-first search Expand the shallowest unexpanded node Implementation: FIFO queue for the frontier
38
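A minimal sketch of breadth-first search (graph-search version). As the properties slide notes, BFS may apply the goal test when a node is generated rather than when it is expanded, which saves one full level of expansion; the toy graph is an illustrative assumption.

```python
from collections import deque

GRAPH = {'S': ['A', 'B'], 'A': ['C'], 'B': ['C', 'G'], 'C': ['G'], 'G': []}

def bfs(graph, start, goal):
    if start == goal:
        return [start]
    frontier = deque([[start]])              # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()            # shallowest unexpanded node
        for child in graph[path[-1]]:
            if child not in explored:
                if child == goal:            # goal test at generation time
                    return path + [child]
                explored.add(child)
                frontier.append(path + [child])
    return None
```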
Breadth-first search
39
Breadth-first search
40
Breadth-first search
41
BFS (another example)
42 Adopted from Dan Klein’s slides
Properties of breadth-first search
Complete? Yes (for finite b and d)
Time: b + b² + b³ + ⋯ + b^d = O(b^d)
 total number of generated nodes; the goal test is applied to each node when it is generated
Space: O(b^(d−1)) explored + O(b^d) frontier = O(b^d) (graph search)
 Tree search does not save much space, while it may cause a great excess of time
Optimal? Yes, if path cost is a non-decreasing function of d
 e.g., all actions having the same cost
43
Properties of breadth-first search
Space complexity is a bigger problem than time complexity
Time is also prohibitive
Exponential-complexity search problems cannot be solved by uninformed methods (except for the smallest instances)
44
d     Time        Memory
10    3 hours     10 terabytes
12    13 days     1 petabyte
14    3.5 years   99 petabytes
16    350 years   10 exabytes
(assuming 1 million nodes/second, 1,000 bytes/node)
Uniform-Cost Search (UCS)
Expand the node (in the frontier) with the lowest path cost g(n)
 Extension of BFS that is suitable for any step cost function
Implementation: priority queue (ordered by path cost) for the frontier
Equivalent to breadth-first if all step costs are equal Two differences
Goal test is applied when a node is selected for expansion A test is added when a better path is found to a node currently on the frontier
45
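A sketch of uniform-cost search on a small weighted graph (the graph is an illustrative assumption). The frontier is a priority queue ordered by path cost g(n), and, as noted above, the goal test is applied only when a node is selected for expansion, which is what makes the returned solution optimal.

```python
import heapq

GRAPH = {'S': {'A': 1, 'B': 5}, 'A': {'B': 1, 'C': 4},
         'B': {'C': 1}, 'C': {}}

def ucs(graph, start, goal):
    frontier = [(0, start, [start])]         # (g, state, path)
    best_g = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:                    # goal test on expansion
            return g, path
        for child, cost in graph[state].items():
            new_g = g + cost
            # keep only the cheapest known path to each state
            if new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g, child, path + [child]))
    return None

cost, path = ucs(GRAPH, 'S', 'C')            # cheapest route S -> A -> B -> C
```

Note that the direct edge S→B (cost 5) and A→C (cost 4) are both bypassed by cheaper multi-step paths, which is exactly the case where BFS would return a suboptimal answer.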
80 + 97 + 101 < 99 + 211
Properties of uniform-cost search
Complete? Yes, if step cost ≥ ε > 0 (to avoid infinite sequences of zero-cost actions)
Time: number of nodes with g ≤ cost of the optimal solution, O(b^(1+⌊C*/ε⌋))
 where C* is the optimal solution cost; O(b^(d+1)) when all step costs are equal
Space: number of nodes with g ≤ cost of the optimal solution, O(b^(1+⌊C*/ε⌋))
Optimal? Yes: nodes are expanded in increasing order of g(n)
Difficulty: many long paths may exist with cost ≤ C*
Uniform-cost search (proof of optimality)
Lemma: if UCS selects a node n for expansion, the optimal solution to that node has been found.
Proof by contradiction: otherwise, another frontier node n′ would exist on the optimal path from the initial node to n (by the graph separation property). Moreover, since step costs are non-negative, paths never get shorter as nodes are added, so g(n′) ≤ g(n) and thus n′ would have been selected first.
⇒ Nodes are expanded in order of their optimal path cost.
47
Depth First Search (DFS) Expand the deepest node in frontier
Implementation: LIFO queue (i.e., put successors at front)for frontier
48
DFS Expand the deepest unexpanded node in frontier
49
DFS Expand the deepest unexpanded node in frontier
50
DFS Expand the deepest unexpanded node in frontier
51
DFS Expand the deepest unexpanded node in frontier
52
DFS Expand the deepest unexpanded node in frontier
53
DFS Expand the deepest unexpanded node in frontier
54
DFS Expand the deepest unexpanded node in frontier
55
DFS Expand the deepest unexpanded node in frontier
56
DFS Expand the deepest unexpanded node in frontier
57
DFS Expand the deepest unexpanded node in frontier
58
DFS Expand the deepest unexpanded node in frontier
59
DFS (another example)
60 Adopted from Dan Klein’s slides
Properties of DFS
Complete?
 Tree-search version: not complete (repeated states & redundant paths)
 Graph-search version: fails in infinite state spaces (with an infinite non-goal path) but complete in finite ones
Time: O(b^m): terrible if m is much larger than d
 In the tree-search version, time can be much larger than the size of the state space
Space: O(bm), i.e., linear space complexity for tree search
 This is why depth-first tree search serves as the basis of many AI areas
 A recursive variant called backtracking search can be implemented in O(m) space
Optimal? No
61
DFS: tree-search version
Depth Limited Search
Depth-first search with depth limit l (nodes at depth l have no successors)
 Solves the infinite-path problem
 In some problems (e.g., route finding), knowledge of the problem can be used to choose l
Complete? Yes, if l ≥ d
Time: O(b^l)
Space: O(bl)
Optimal? No
62
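A recursive sketch of depth-limited search, in the AIMA style of distinguishing a depth cutoff from outright failure; the small graph and the `'cutoff'` sentinel are illustrative assumptions.

```python
# Depth-limited search: returns the path if found, the string 'cutoff'
# if the depth limit was hit somewhere, or None for outright failure.
GRAPH = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': ['G'], 'G': []}

def dls(graph, state, goal, limit, path=None):
    path = (path or []) + [state]
    if state == goal:
        return path
    if limit == 0:
        return 'cutoff'                      # depth limit reached here
    cutoff_occurred = False
    for child in graph[state]:
        result = dls(graph, child, goal, limit - 1, path)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return result
    return 'cutoff' if cutoff_occurred else None
```

Distinguishing `'cutoff'` from `None` matters: only a cutoff tells the caller that a deeper limit might still succeed.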
Iterative Deepening Search (IDS)
Combines the benefits of DFS & BFS
 DFS: low memory requirement
 BFS: completeness, and also optimality for special path cost functions
Not as wasteful as it may seem (most of the nodes are in the bottom level)
63
IDS: Example l =0
64
IDS: Example l =1
65
IDS: Example l =2
66
IDS: Example l =3
67
Properties of iterative deepening search
Complete? Yes (for finite b and d)
Time: d × b + (d−1) × b² + ⋯ + 2 × b^(d−1) + 1 × b^d = O(b^d)
Space: O(bd)
Optimal? Yes, if path cost is a non-decreasing function of the node depth
IDS is the preferred method when the search space is large and the depth of the solution is unknown
68
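IDS can be sketched as a loop over depth-limited searches with an increasing limit. The inner helper and the `max_depth` cap are illustrative assumptions; a real implementation would loop until a cutoff no longer occurs.

```python
# Iterative deepening search: re-run depth-limited search with
# limits l = 0, 1, 2, ... until a solution is found.
GRAPH = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': ['G'], 'G': []}

def dls(graph, state, goal, limit, path=()):
    path = path + (state,)
    if state == goal:
        return list(path)
    if limit == 0:
        return None
    for child in graph[state]:
        result = dls(graph, child, goal, limit - 1, path)
        if result is not None:
            return result
    return None

def ids(graph, start, goal, max_depth=50):
    for limit in range(max_depth + 1):       # l = 0, 1, 2, ...
        result = dls(graph, start, goal, limit)
        if result is not None:
            return result                    # found at the shallowest depth
    return None
```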
Iterative deepening search
Number of nodes generated to depth d:
 N_IDS = d × b + (d−1) × b² + … + 2 × b^(d−1) + 1 × b^d = O(b^d)
For b = 10, d = 5:
 N_BFS = 10 + 100 + 1,000 + 10,000 + 100,000 = 111,110
 N_IDS = 50 + 400 + 3,000 + 20,000 + 100,000 = 123,450
 Overhead of IDS = (123,450 − 111,110)/111,110 ≈ 11%
69
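The node-count arithmetic above checks out directly: a node at depth i is regenerated once per iteration with limit ≥ i, i.e., d − i + 1 times.

```python
# Verify the BFS vs. IDS node counts for b = 10, d = 5.
b, d = 10, 5
n_bfs = sum(b ** i for i in range(1, d + 1))                 # b + b^2 + ... + b^d
n_ids = sum((d - i + 1) * b ** i for i in range(1, d + 1))   # depth-i nodes made d-i+1 times
overhead = (n_ids - n_bfs) / n_bfs                           # fractional extra work
```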
Bidirectional search
Simultaneous forward and backward search (hoping that they meet in the middle)
 Idea: b^(d/2) + b^(d/2) is much less than b^d
 "Do the frontiers of the two searches intersect?" replaces the goal test
 The first solution found may not be optimal
Implementation Hash table for frontiers in one of these two searches
Space requirement: most significant weakness
Computing predecessors? May be difficult
List of goals? a new dummy goal Abstract goal (checkmate)?!
70
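A sketch of bidirectional BFS on an undirected graph: run BFS from both ends and stop as soon as the two frontiers intersect, then stitch the two half-paths together at the meeting node. The line-shaped toy graph is an illustrative assumption; a hash lookup into the other search's parent map plays the role of the goal test.

```python
from collections import deque

GRAPH = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'],
         'D': ['C', 'E'], 'E': ['D']}

def bidirectional_bfs(graph, start, goal):
    if start == goal:
        return [start]
    fwd, bwd = {start: None}, {goal: None}   # parent maps for both searches
    frontiers = (deque([start]), deque([goal]))

    def join(meet):
        # stitch the forward half (start..meet) to the backward half (..goal)
        path, node = [], meet
        while node is not None:
            path.append(node)
            node = fwd[node]
        path.reverse()
        node = bwd[meet]
        while node is not None:
            path.append(node)
            node = bwd[node]
        return path

    while frontiers[0] and frontiers[1]:
        # expand one node from each direction per round
        for frontier, seen, other in ((frontiers[0], fwd, bwd),
                                      (frontiers[1], bwd, fwd)):
            if not frontier:
                return None
            state = frontier.popleft()
            for child in graph[state]:
                if child in seen:
                    continue
                seen[child] = state
                if child in other:           # the two frontiers intersect
                    return join(child)
                frontier.append(child)
    return None
```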
Summary of algorithms (tree search)
a Complete if b is finite
b Complete if step cost ≥ ε>0
c Optimal if step costs are equal
d If both directions use BFS
71
Iterative deepening search uses only linear space and not much more time than the other uninformed algorithms
Informed Search
72
When exhaustive search is impractical, heuristic methods are used to speed up the process of finding a satisfactory solution.
Outline Best-first search Greedy best-first search A* search Finding heuristics
73
Best-first search
Idea: use an evaluation function f(n) for each node and expand the most desirable unexpanded node
 More general than "f(n) = g(n) = cost so far to reach n"
 The evaluation function estimates the desirability (a lower bound on the cost) obtainable by expanding a node
Implementation: priority queue ordered by desirability (the search strategy is determined by the evaluation function)
Special cases: Greedy best-first search A* search Uniform-cost search
74
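The "special cases" above differ only in the evaluation function, which a sketch makes concrete: the same best-first skeleton becomes greedy search, uniform-cost search, or A* depending on the `f` passed in. The toy graph and heuristic values are illustrative assumptions (the heuristic happens to be admissible and consistent here).

```python
import heapq

GRAPH = {'S': {'A': 2, 'B': 1}, 'A': {'G': 2}, 'B': {'G': 5}, 'G': {}}
H = {'S': 3, 'A': 2, 'B': 4, 'G': 0}         # assumed heuristic values

def best_first(graph, start, goal, f):
    # frontier entries: (f-value, g so far, state, path) ordered by f
    frontier = [(f(0, start), 0, start, [start])]
    explored = set()
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        if state in explored:
            continue
        explored.add(state)
        for child, cost in graph[state].items():
            g2 = g + cost
            heapq.heappush(frontier, (f(g2, child), g2, child, path + [child]))
    return None

greedy_result = best_first(GRAPH, 'S', 'G', lambda g, n: H[n])      # f = h
ucs_result    = best_first(GRAPH, 'S', 'G', lambda g, n: g)         # f = g
astar_result  = best_first(GRAPH, 'S', 'G', lambda g, n: g + H[n])  # f = g + h
```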
Heuristic Function Incorporating problem-specific knowledge in search
Information more than problem definition In order to come to an optimal solution as rapidly as possible
A heuristic function can be used as a component of f(n)
h(n): estimated cost of the cheapest path from n to a goal
 Depends only on n (not on the path from the root to n)
 If n is a goal state then h(n) = 0
 h(n) ≥ 0
Examples of heuristic functions include using a rule-of-thumb,an educated guess, or an intuitive judgment
75
Greedy best-first search
Evaluation function: f(n) = h(n)
 e.g., h_SLD(n) = straight-line distance from n to Bucharest
Greedy best-first search expands the node that appearsto be closest to goal
Greedy
76
Romania with step costs in km
77
Greedy best-first search example
78
Greedy best-first search example
79
Greedy best-first search example
80
Greedy best-first search example
81
Properties of greedy best-first search
Complete? No
 Similar to DFS; only the graph-search version is complete in finite spaces
 Infinite loops, e.g., (Iasi to Fagaras): Iasi → Neamt → Iasi → Neamt → …
Time: O(b^m), but a good heuristic can give dramatic improvement
Space: O(b^m): keeps all nodes in memory
Optimal? No
82
A* search
Idea: minimize the total estimated solution cost
Evaluation function: f(n) = g(n) + h(n)
 g(n) = cost so far to reach n
 h(n) = estimated cost of the cheapest path from n to the goal
 f(n) = estimated total cost of the path through n to the goal
83
start → n: actual cost g(n); n → goal: estimated cost h(n)
f(n) = g(n) + h(n)
A* search
84
Combines advantages of uniform-cost and greedy searches
A* can be complete and optimal when h(n) has certain properties
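A* can be sketched directly on the Romania fragment, ordering the frontier by f(n) = g(n) + h(n); the road distances and straight-line-distance values below are taken from the AIMA map, and the explored-set variant is valid here because straight-line distance is consistent.

```python
import heapq

ROADS = {('Arad', 'Sibiu'): 140, ('Arad', 'Timisoara'): 118,
         ('Arad', 'Zerind'): 75, ('Sibiu', 'Fagaras'): 99,
         ('Sibiu', 'Rimnicu Vilcea'): 80, ('Rimnicu Vilcea', 'Pitesti'): 97,
         ('Fagaras', 'Bucharest'): 211, ('Pitesti', 'Bucharest'): 101}
H_SLD = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176, 'Rimnicu Vilcea': 193,
         'Pitesti': 100, 'Timisoara': 329, 'Zerind': 374, 'Bucharest': 0}

ADJ = {}
for (a, b), d in ROADS.items():              # roads are two-way
    ADJ.setdefault(a, {})[b] = d
    ADJ.setdefault(b, {})[a] = d

def a_star(adj, h, start, goal):
    frontier = [(h[start], 0, start, [start])]   # ordered by f = g + h
    explored = set()
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        if state in explored:
            continue
        explored.add(state)
        for child, cost in adj[state].items():
            if child not in explored:
                heapq.heappush(frontier, (g + cost + h[child], g + cost,
                                          child, path + [child]))
    return None

cost, route = a_star(ADJ, H_SLD, 'Arad', 'Bucharest')
```

Note how the Fagaras route (f = 450 at Bucharest) is generated first but the Rimnicu Vilcea route (f = 418) is expanded first, giving the optimal 418 km solution.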
A* search: example
85
A* search: example
86
A* search: example
87
A* search: example
88
A* search: example
89
A* search: example
90
Conditions for optimality of A*
Admissibility: h(n) is a lower bound on the cost to reach the goal
 Condition for optimality of the TREE-SEARCH version of A*
Consistency (monotonicity): h(n) ≤ c(n, a, n′) + h(n′)
 Condition for optimality of the GRAPH-SEARCH version of A*
91
Admissible heuristics
An admissible heuristic never overestimates the cost to reach the goal (optimistic)
 h(n) is a lower bound on the path cost from n to the goal: ∀n, h(n) ≤ h*(n)
 where h*(n) is the real cost to reach the goal state from n
Example: h_SLD(n) ≤ the actual road distance
92
Consistent heuristics
93
Triangle inequality: for every node n and every successor n′ generated by any action a,
 h(n) ≤ c(n, a, n′) + h(n′)
where c(n, a, n′) is the cost of generating n′ by applying action a to n
Consistency vs. admissibility
94
Consistency ⇒ Admissibility
 All consistent heuristic functions are admissible
 Nonetheless, most admissible heuristics are also consistent
Proof sketch: along any path n1, a1, n2, a2, …, G to a goal,
 h(n1) ≤ c(n1, a1, n2) + h(n2) ≤ c(n1, a1, n2) + c(n2, a2, n3) + h(n3) ≤ ⋯ ≤ Σ_i c(n_i, a_i, n_{i+1}) + h(G)
Since h(G) = 0: h(n1) ≤ cost of (every) path from n1 to the goal ≤ cost of the optimal path from n1 to the goal
Admissible but not consistent: Example
95
f (for an admissible heuristic) may decrease along a path.
Example: c(n, a, n′) = 1, h(n) = 9, h(n′) = 6 ⟹ h(n) > h(n′) + c(n, a, n′), so h is admissible but not consistent.
With g(n) = 5: f(n) = 5 + 9 = 14, while g(n′) = 6 gives f(n′) = 6 + 6 = 12, so f drops from 14 to 12.
Is there any way to make f non-decreasing? Apply the pathmax correction:
 h(n′) ← max(h(n′), h(n) − c(n, a, n′))
Optimality of A* (admissible heuristics)
Theorem: if h(n) is admissible, A* using TREE-SEARCH is optimal.
Assumptions: G2 is a suboptimal goal in the frontier; n is an unexpanded node in the frontier on a shortest path to an optimal goal G.
I.   h(G2) = 0 ⇒ f(G2) = g(G2)
II.  h(G) = 0 ⇒ f(G) = g(G)
III. G2 is suboptimal ⇒ g(G2) > g(G)
IV.  I, II, III ⇒ f(G2) > f(G)
V.   h is admissible ⇒ h(n) ≤ h*(n) ⇒ g(n) + h(n) ≤ g(n) + h*(n) ⇒ f(n) ≤ f(G) ⇒ f(n) < f(G2)
Hence A* will never select G2 for expansion.
96
Optimality of A* (consistent heuristics)
Theorem: if h(n) is consistent, A* using GRAPH-SEARCH is optimal.
Lemma 1: if h(n) is consistent, then the f(n) values are non-decreasing along any path.
Proof: let n′ be a successor of n generated by action a:
I.   f(n′) = g(n′) + h(n′)
II.  g(n′) = g(n) + c(n, a, n′)
III. I, II ⇒ f(n′) = g(n) + c(n, a, n′) + h(n′)
IV.  h is consistent ⇒ h(n) ≤ c(n, a, n′) + h(n′)
V.   III, IV ⇒ f(n′) ≥ g(n) + h(n) = f(n)
Optimality of A* (consistent heuristics)
Lemma 2: if A* selects a node for expansion, the optimal solution to that node has been found.
Proof by contradiction: otherwise, another frontier node n′ would exist on the optimal path from the initial node to n (by the graph separation property). Moreover, by Lemma 1, f(n′) ≤ f(n) and thus n′ would have been selected first.
Lemma 1 & 2 ⇒ the sequence of nodes expanded by A* (using GRAPH-SEARCH) is in non-decreasing order of f(n).
Since h = 0 for goal nodes, the first goal node selected for expansion is an optimal solution (f is the true cost for goal nodes).
98
Admissible vs. consistent (tree vs. graph search)
99
Consistent heuristic: When selecting a node for expansion, thepath with the lowest cost to that node has been found
When an admissible heuristic is not consistent, a node willneed repeated expansion, every time a new best (so-far) costis achieved for it.
Contours in the state space
A* (using GRAPH-SEARCH) expands nodes in order of increasing f value
 Gradually adds "f-contours" of nodes
 Contour i contains all nodes with f = f_i, where f_i < f_{i+1}
A* expands all nodes with f(n) < C*
A* expands some nodes with f(n) = C* (nodes on the goal contour)
A* expands no nodes with f(n) > C* ⟹ pruning
100
A* search vs. uniform-cost search
Uniform-cost search (A* using h(n) = 0) produces circular bands around the initial state
A* produces irregular bands, stretched toward the goal
 More accurate heuristics give bands more narrowly focused around the optimal path
Start
goal
States are points in 2-D Euclidean space
g(n) = distance from start
h(n) = estimate of distance from goal
101
Properties of A*
Complete?
 Yes, if there are finitely many nodes with f(n) ≤ C*
 (guaranteed when step cost ≥ ε > 0 and b is finite)
Time? Exponential in general, but with a smaller effective branching factor than uninformed search
 Polynomial when |h(n) − h*(n)| = O(log h*(n))
 However, A* is optimally efficient for any given consistent heuristic:
 no optimal algorithm of this type is guaranteed to expand fewer nodes than A* (except possibly among nodes with f = C*)
Space? Keeps all frontier and/or explored nodes in memory
Optimal? Yes (expands nodes in non-decreasing order of f)
102
Robot navigation example
103
Initial state? Red cell
States? Cells on a rectangular grid (except obstacles)
Actions? Move to one of the 8 neighbors (if it is not an obstacle)
Goal test? Green cell
Path cost? Action cost is the Euclidean length of the movement
A* vs. UCS: Robot navigation example
104
Heuristic: Euclidean distance to goal
Expanded nodes: filled circles in red & green Color indicating value (red: lower, green: higher)
Frontier: empty nodes with blue boundary
Nodes falling inside the obstacle are discarded
Adopted from: http://en.wikipedia.org/wiki/Talk%3AA*_search_algorithm
Robot navigation: Admissible heuristic
105
Is Manhattan distance an admissible heuristic for the previous example?
A*: inadmissible heuristic
106
h = h_Euclidean vs. h = 5 × h_Euclidean
Adopted from: http://en.wikipedia.org/wiki/Talk%3AA*_search_algorithm
A*, Greedy, UCS: Pacman
107 Adapted from Dan Klein’s slides
UCS
Greedy
A*
Heuristic: Manhattan distance
Color: expanded in which iteration (red: lower)
A* difficulties
Space is the main problem of A*
 Overcoming the space problem while retaining completeness and optimality: IDA*, RBFS, MA*, SMA*
To address A* time complexity:
 variants of A* that try to find suboptimal solutions quickly
 more accurate but not strictly admissible heuristics
108
8-puzzle problem: state space
d ≈ 22: average solution cost for a random 8-puzzle instance
Tree search: about 3²² ≈ 3.1 × 10¹⁰ states (branching factor ≈ 3)
Graph search: 9!/2 = 181,440 distinct states for the 8-puzzle
 about 10¹³ states for the 15-puzzle
109
Admissible heuristics: 8-puzzle
 h1(n) = number of misplaced tiles
 h2(n) = sum of Manhattan distances of the tiles from their target positions
  i.e., number of squares from the desired location of each tile
For the start state shown:
 h1(S) = 8
 h2(S) = 3+1+2+2+2+3+3+2 = 18
110
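Both heuristics are a few lines of code. The sketch below uses the AIMA start state (blank encoded as 0) and assumes the standard goal with the blank in the top-left corner.

```python
# h1 (misplaced tiles) and h2 (Manhattan distance) for the 8-puzzle,
# states as 9-tuples in row-major order, 0 = blank.
START = (7, 2, 4, 5, 0, 6, 8, 3, 1)
GOAL  = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Sum of Manhattan distances of each tile from its goal square."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total
```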
Effect of heuristic on accuracy
 N: number of nodes generated by A*; d: solution depth
Effective branching factor b*: the branching factor of a uniform tree of depth d containing N + 1 nodes:
 N + 1 = 1 + b* + (b*)² + ⋯ + (b*)^d
A well-designed heuristic has b* close to 1
111
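There is no closed form for b*, but it can be recovered numerically; this bisection sketch solves the defining equation above, assuming N counts generated nodes as in that formula.

```python
# Solve N + 1 = 1 + b* + (b*)**2 + ... + (b*)**d for b* by bisection.
def effective_branching_factor(n_nodes, depth, iters=100):
    def generated(b):
        # nodes generated by a uniform tree of branching factor b, depth d
        return sum(b ** i for i in range(1, depth + 1))
    lo, hi = 1.0, float(n_nodes)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if generated(mid) < n_nodes:     # generated() is increasing in b
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

A sanity check: a uniform tree with b = 10 and d = 5 generates 111,110 nodes, so the solver should return b* ≈ 10 for those inputs.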
Comparison on 8-puzzle
Search cost (N, nodes generated):
d     IDS          A*(h1)    A*(h2)
6     680          20        18
12    3,644,035    227       73
24    --           39,135    1,641

Effective branching factor (b*):
d     IDS     A*(h1)    A*(h2)
6     2.87    1.34      1.30
12    2.78    1.42      1.24
24    --      1.48      1.26
112
Heuristic quality
If ∀n, h2(n) ≥ h1(n) (both admissible), then h2 dominates h1 and is better for search
Surely expanded nodes: f(n) < C* ⇒ h(n) < C* − g(n)
 If h2(n) ≥ h1(n), then every node surely expanded with h2 will also be surely expanded with h1 (h1 may also cause some additional node expansions)
113
More accurate heuristic
114
The max of admissible heuristics is admissible (and a more accurate estimate):
 h(n) = max(h1(n), h2(n))
How about using the actual cost as a heuristic, h(n) = h*(n) for all n?
 It would go straight to the goal, but computing h*(n) is as hard as solving the original problem
Trade-off between accuracy and computation time
Generating heuristics
Relaxed problems Inventing admissible heuristics automatically
Sub-problems (pattern databases)
Learning heuristics from experience
115
Relaxed problem
116
Relaxed problem: Problem with fewer restrictions on theactions
Optimal solution to the relaxed problem may be computedeasily (without search)
The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem
 The relaxed optimal solution is a shortest path in the super-graph of the original state space
Relaxed problem: 8-puzzle
8-Puzzle: move a tile from square A to B if A is adjacent (left, right, above, below) to B and B is blank
Relaxed problems:
1) can move from A to B if A is adjacent to B (ignore whether or not the position is blank)
2) can move from A to B if B is blank (ignore adjacency)
3) can move from A to B (ignore both conditions)
Admissible heuristics for the original problem (h1(n) and h2(n)) are optimal path costs for relaxed problems
 First case: a tile can move to any adjacent square ⇒ h2(n)
 Third case: a tile can move anywhere ⇒ h1(n)
117
Sub-problem heuristic
 The cost to solve a sub-problem
 Store exact solution costs for every possible sub-problem instance
Admissible? Yes: the cost of the optimal solution to a sub-problem is a lower bound on the cost of the complete problem
118
Pattern database heuristics
 Store the exact solution cost for every possible sub-problem instance
 Combine (take the maximum of) the heuristics resulting from different sub-problems
 15-puzzle: 10³ times reduction in the number of generated nodes vs. h2
119
Disjoint pattern databases
 Does adding pattern-database heuristics yield an admissible heuristic?!
 Divide up the problem such that each move affects only one sub-problem (disjoint sub-problems), then add the heuristics
 15-puzzle: 10⁴ times reduction in the number of generated nodes vs. h2
 24-puzzle: 10⁶ times reduction in the number of generated nodes vs. h2
 Can Rubik's cube be divided up into disjoint sub-problems?
120
Learning heuristics from experience
 Machine learning techniques: learn h(n) from samples of optimally solved problems
 (predicting the solution cost for other states)
Features of state (instead of raw state description) 8-puzzle
number of misplaced tiles number of adjacent pairs of tiles that are not adjacent in the goal state
Linear Combination of features
121