Informed search algorithms: Chapter 3.5-3.6 (3rd ed.), Chapter 4 (2nd ed.)


Page 1:

Informed search algorithms

Chapter 3.5-3.6 (3rd ed.) / Chapter 4 (2nd ed.)

Page 2:

Outline
- Review limitations of uninformed search methods
- Informed (or heuristic) search uses problem-specific heuristics to improve efficiency
  - Best-first search
  - Greedy best-first search
  - A* search
  - Memory-bounded A* search
- Heuristics
  - Can provide significant speed-ups in practice
  - But can still have worst-case exponential time complexity

Page 3:

Limitations of uninformed search

Search space size makes exhaustive search infeasible: combinatorial explosion.

For example, the 8-puzzle:
- Average solution cost is about 22 steps
- Branching factor ~ 3
- Exhaustive search to depth 22: ~3.1 x 10^10 states
- E.g., at d = 12, IDS expands 3.6 million states on average
- [The 24-puzzle has about 10^24 states (much worse)]

Page 4:

Recall tree search…

Page 5:

Recall tree search…

This “strategy” is what differentiates different search algorithms.

Page 6:

Best-first search
- Idea: use an evaluation function f(n) for each node; f(n) provides an estimate of the total cost. Expand the node n with the smallest f(n).
- g(n) = path cost so far to node n
- h(n) = estimate of (optimal) cost to goal from node n
- f(n) = g(n) + h(n)
- Implementation: order the nodes in the frontier by increasing f-cost (a code sketch follows below).
- The evaluation function is an estimate of node quality, so a more accurate name for “best-first” search would be “seemingly best-first search”.
- Search efficiency depends on heuristic quality.
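To make the frontier ordering concrete, here is a minimal best-first search sketch in Python using a priority queue keyed on f(n). It is an illustration, not code from the slides: the graph interface (a successors(state) generator of (step_cost, next_state) pairs), the f(state, g) callback, and the name best_first_search are assumptions made for this sketch.

import heapq
from itertools import count

def best_first_search(start, goal_test, successors, f):
    """Generic best-first search: always expand the frontier node with smallest f.

    successors(state) yields (step_cost, next_state) pairs.
    f(state, g) returns the value used to order the frontier
    (f = g gives uniform-cost search, f = h greedy, f = g + h A*).
    Returns (path, cost) or None if no solution is found."""
    tie = count()                                    # tie-breaker so the heap never compares states
    frontier = [(f(start, 0), next(tie), 0, start, [start])]
    best_g = {start: 0}                              # cheapest path cost seen per state
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g
        if g > best_g.get(state, float("inf")):      # stale frontier entry, skip
            continue
        for step_cost, nxt in successors(state):
            g2 = g + step_cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (f(nxt, g2), next(tie), g2, nxt, path + [nxt]))
    return None

Passing different f callbacks turns this one loop into uniform-cost, greedy, or A* search, which is exactly the distinction the following slides draw.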

Page 7:

Heuristic function
- Heuristic: a commonsense rule (or set of rules) intended to increase the probability of solving some problem; “using rules of thumb to find answers”.
- Heuristic function h(n): estimate of the (optimal) cost from n to the goal.
  - Defined using only the state of node n.
  - h(n) = 0 if n is a goal node.
- Example: straight-line distance from n to Bucharest.
  - Note that this is not the true state-space distance; it is an estimate, and the actual state-space distance can be higher.
- Provides problem-specific knowledge to the search algorithm.

Page 8:

Heuristic functions for the 8-puzzle
- 8-puzzle: average solution cost is about 22 steps, branching factor ~ 3.
- Exhaustive search to depth 22: ~3.1 x 10^10 states.
- A good heuristic function can reduce the search process.
- Two commonly used heuristics (computed in the sketch below):
  - h1 = the number of misplaced tiles; h1(s) = 8 for the example start state
  - h2 = the sum of the distances of the tiles from their goal positions (Manhattan distance); h2(s) = 3+1+2+2+2+3+3+2 = 18
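As a concrete sketch of the two heuristics (again illustrative code, not from the slides), assuming states are 9-tuples read row by row with 0 for the blank and the standard goal layout:

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)          # assumed goal layout, 0 = blank

def h1_misplaced(state, goal=GOAL):
    """Number of tiles not in their goal position (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2_manhattan(state, goal=GOAL):
    """Sum of Manhattan distances of each tile from its goal position."""
    pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}   # tile -> (row, col) in goal
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            r, c = divmod(i, 3)
            gr, gc = pos[tile]
            total += abs(r - gr) + abs(c - gc)
    return total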

Page 9:

Romania with straight-line dist.

Page 10:

Greedy best-first search

- h(n) = estimate of cost from n to goal
  - e.g., h(n) = straight-line distance from n to Bucharest
- Greedy best-first search expands the node that appears to be closest to the goal:
  f(n) = h(n)

Page 11:

Greedy best-first search example

Page 12:

Greedy best-first search example

Page 13:

Greedy best-first search example

Page 14:

Greedy best-first search example

Page 15:

Optimal Path

Page 16:

Properties of greedy best-first search
- Complete? The tree version can get stuck in loops; the graph version is complete in finite spaces.
- Time? O(b^m), but a good heuristic can give dramatic improvement.
- Space? O(b^m): keeps all nodes in memory.
- Optimal? No.
  - e.g., Arad → Sibiu → Rimnicu Vilcea → Pitesti → Bucharest is shorter!
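As a worked usage of the earlier best-first sketch, with road and straight-line distances taken from the standard AIMA Romania map (the roads/sld dictionaries and this snippet are my illustration, not slide content): greedy search with f = h returns the Arad → Sibiu → Fagaras → Bucharest route at 450 km, missing the optimal 418 km route through Rimnicu Vilcea and Pitesti.

# Road distances (km) on a fragment of the Romania map
roads = {
    "Arad":           [(140, "Sibiu"), (118, "Timisoara"), (75, "Zerind")],
    "Sibiu":          [(140, "Arad"), (99, "Fagaras"), (80, "Rimnicu Vilcea"), (151, "Oradea")],
    "Fagaras":        [(99, "Sibiu"), (211, "Bucharest")],
    "Rimnicu Vilcea": [(80, "Sibiu"), (97, "Pitesti"), (146, "Craiova")],
    "Pitesti":        [(97, "Rimnicu Vilcea"), (101, "Bucharest"), (138, "Craiova")],
    "Bucharest":      [],
    "Timisoara": [], "Zerind": [], "Oradea": [], "Craiova": [],
}
# Straight-line distances to Bucharest
sld = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
       "Pitesti": 100, "Bucharest": 0, "Timisoara": 329, "Zerind": 374,
       "Oradea": 380, "Craiova": 160}

path, cost = best_first_search("Arad", lambda s: s == "Bucharest",
                               lambda s: roads[s],
                               f=lambda s, g: sld[s])       # greedy: f = h
# path: Arad, Sibiu, Fagaras, Bucharest; cost: 450 (suboptimal)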

Page 17:

A* search

Idea: avoid expanding paths that are already expensive

Evaluation function: f(n) = g(n) + h(n)
- g(n) = cost so far to reach n
- h(n) = estimated cost from n to goal
- f(n) = estimated total cost of the path through n to the goal
- Greedy best-first search has f(n) = h(n)
- Uniform-cost search has f(n) = g(n)
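Continuing the same hypothetical Romania snippet, A* simply plugs f = g + h into the earlier best-first sketch and recovers the optimal 418 km route:

path, cost = best_first_search("Arad", lambda s: s == "Bucharest",
                               lambda s: roads[s],
                               f=lambda s, g: g + sld[s])   # A*: f = g + h
# path: Arad, Sibiu, Rimnicu Vilcea, Pitesti, Bucharest; cost: 418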

Page 18:

Admissible heuristics

A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.

An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic.

Example: hSLD(n) (never overestimates the actual road distance).

Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal.

Page 19:

Admissible heuristics
E.g., for the 8-puzzle:
- h1(n) = number of misplaced tiles
- h2(n) = total Manhattan distance (i.e., number of squares from the desired location of each tile)

h1(S) = ?  h2(S) = ?

Page 20:

Admissible heuristics
E.g., for the 8-puzzle:
- h1(n) = number of misplaced tiles
- h2(n) = total Manhattan distance (i.e., number of squares from the desired location of each tile)

h1(S) = 8
h2(S) = 3+1+2+2+2+3+3+2 = 18

Page 21:

Consistent heuristics (consistent => admissible)

A heuristic is consistent if for every node n and every successor n' of n generated by any action a,

    h(n) ≤ c(n,a,n') + h(n')

It's the triangle inequality!

If h is consistent, we have

    f(n') = g(n') + h(n')                 (by definition)
          = g(n) + c(n,a,n') + h(n')      (since g(n') = g(n) + c(n,a,n'))
          ≥ g(n) + h(n) = f(n)            (by consistency)

so f(n') ≥ f(n), i.e., f(n) is non-decreasing along any path.

Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal.
(GRAPH-SEARCH keeps all checked nodes in memory to avoid repeated states.)
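A quick way to sanity-check consistency on a finite graph is to test the triangle inequality on every edge; this small helper is an illustrative assumption, not part of the slides:

def is_consistent(graph, h):
    """graph[n] yields (cost, n2) edges; h maps node -> heuristic value.
    Returns True iff h(n) <= c(n,a,n') + h(n') holds for every edge."""
    return all(h[n] <= cost + h[n2]
               for n, edges in graph.items()
               for cost, n2 in edges)

# e.g., straight-line distance is consistent on the Romania fragment:
# is_consistent(roads, sld)  ->  True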

Page 22:

A* search example

Page 23:

A* search example

Page 24:

A* search example

Page 25:

A* search example

Page 26:

A* search example

Page 27:

A* search example

Page 28:

Contours of A* Search

A* expands nodes in order of increasing f-value, gradually adding "f-contours" of nodes.

Contour i has all nodes with f = f_i, where f_i < f_{i+1}.

Page 29:

Properties of A*
- Complete? Yes (unless there are infinitely many nodes with f ≤ f(G), i.e. as long as every step-cost > ε).
- Time/Space? Exponential, except if

      |h(n) - h*(n)| ≤ O(log h*(n))

- Optimal? Yes (with Tree-Search and an admissible heuristic; with Graph-Search and a consistent heuristic).
- Optimally efficient: Yes (no optimal algorithm with the same heuristic is guaranteed to expand fewer nodes).

Page 30:

Optimality of A* (proof)
Suppose some suboptimal goal G2 has been generated and is in the frontier. Let n be an unexpanded node in the frontier such that n is on a shortest path to an optimal goal G.

We want to prove: f(n) < f(G2) (then A* will prefer n over G2).

- f(G2) = g(G2)                     since h(G2) = 0
- f(G) = g(G)                       since h(G) = 0
- g(G2) > g(G)                      since G2 is suboptimal
- f(G2) > f(G)                      from the above
- h(n) ≤ h*(n)                      since h is admissible (an under-estimate)
- g(n) + h(n) ≤ g(n) + h*(n)        from the above
- f(n) ≤ f(G)                       since g(n) + h(n) = f(n) and g(n) + h*(n) = f(G)
- f(n) < f(G2)                      from the above

Page 31:

Memory Bounded Heuristic Search: Recursive BFS

How can we solve the memory problem for A* search?

Idea: try something like depth-first search, but let's not forget everything about the branches we have partially explored. We remember the best f-value we have found so far in the branch we are deleting. (A code sketch of RBFS follows below.)
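A compact recursive best-first search (RBFS) sketch in Python, following the AIMA outline; the state/path representation and the function names are assumptions made for this illustration:

import math

def rbfs_search(start, goal_test, successors, h):
    """Recursive best-first search: best-first behaviour in linear space,
    backing up the best alternative f-value whenever a branch is abandoned.
    successors(state) yields (step_cost, next_state) pairs."""

    def rbfs(state, path, g, f_state, f_limit):
        # f_state is this node's current (possibly backed-up) f-value
        if goal_test(state):
            return (path, g), f_state
        succs = []
        for step_cost, nxt in successors(state):
            if nxt in path:                        # avoid cycles on the current path
                continue
            g2 = g + step_cost
            succs.append([max(g2 + h(nxt), f_state), g2, nxt])
        if not succs:
            return None, math.inf
        while True:
            succs.sort(key=lambda e: e[0])
            best = succs[0]
            if best[0] > f_limit:                  # too expensive: back up its f-value
                return None, best[0]
            alternative = succs[1][0] if len(succs) > 1 else math.inf
            result, best[0] = rbfs(best[2], path + [best[2]], best[1],
                                   best[0], min(f_limit, alternative))
            if result is not None:
                return result, best[0]

    result, _ = rbfs(start, [start], 0, h(start), math.inf)
    return result                                  # (path, cost) or None

# e.g., on the earlier Romania fragment:
# rbfs_search("Arad", lambda s: s == "Bucharest", lambda s: roads[s], lambda s: sld[s])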

Page 32:

RBFS backs up the best alternative f-value over frontier nodes that are not children of the current node; that value answers the question "do I want to back up?"

RBFS changes its mind very often in practice. This is because f = g + h becomes more accurate (less optimistic) as we approach the goal, so higher-level nodes have smaller f-values and will be explored first.

Problem: we should keep in memory whatever we can.

Page 33:

Simple Memory Bounded A*

This is like A*, but when memory is full we delete the worst node (largest f-value).

Like RBFS, we remember the best descendant in the branch we delete.

If there is a tie (equal f-values), we delete the oldest nodes first.

Simple MBA* finds the optimal reachable solution given the memory constraint. (A solution is not reachable if a single path from root to goal does not fit into memory.)

Time can still be exponential.

Page 34:

SMA* pseudocode (not in the 2nd edition of the book)

function SMA*(problem) returns a solution sequence
  inputs: problem, a problem
  static: Queue, a queue of nodes ordered by f-cost

  Queue ← MAKE-QUEUE({MAKE-NODE(INITIAL-STATE[problem])})
  loop do
      if Queue is empty then return failure
      n ← deepest least-f-cost node in Queue
      if GOAL-TEST(n) then return success
      s ← NEXT-SUCCESSOR(n)
      if s is not a goal and is at maximum depth then
          f(s) ← ∞
      else
          f(s) ← MAX(f(n), g(s) + h(s))
      if all of n's successors have been generated then
          update n's f-cost and those of its ancestors if necessary
      if SUCCESSORS(n) all in memory then remove n from Queue
      if memory is full then
          delete shallowest, highest-f-cost node in Queue
          remove it from its parent's successor list
          insert its parent on Queue if necessary
      insert s in Queue
  end

Page 35:

Simple Memory-Bounded A* (SMA*): example with a 3-node memory limit

[Figure: step-by-step progress of SMA* on a small search space. Each node is labeled with its current f-cost (g + h); values in parentheses show the f-value of the best forgotten descendant of that node. The maximal search depth is 3, since the memory limit is 3 nodes; when memory fills up, the worst leaf is dropped, its backed-up f-value is remembered at its parent, and a branch that has become useless is forgotten.]

The algorithm can tell you when the best solution found within the memory constraint is optimal or not.

Page 36:

Conclusions

Memory-bounded A* search is the best of the search algorithms we have seen so far. It uses all of its memory to avoid double work and uses smart heuristics to descend first into promising branches of the search tree.

Page 37:

Heuristic functions

8-puzzle: average solution cost is about 22 steps, branching factor ~ 3.
Exhaustive search to depth 22: ~3.1 x 10^10 states.
A good heuristic function can reduce the search process.

Two commonly used heuristics:
- h1 = the number of misplaced tiles; h1(s) = 8
- h2 = the sum of the distances of the tiles from their goal positions (Manhattan distance); h2(s) = 3+1+2+2+2+3+3+2 = 18

Page 38:

Dominance
- If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1.
- h2 is better for search: it is guaranteed to expand no more nodes than h1.
  (Here h2 does indeed dominate h1, since every misplaced tile contributes at least 1 to the Manhattan distance.)

Typical search costs (average number of nodes expanded):
- d = 12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes
- d = 24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes

Page 39:

Relaxed problems
- A problem with fewer restrictions on the actions is called a relaxed problem.
- The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem.
- If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution.
- If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution.

Page 40:

Effective branching factor

The effective branching factor b* is the branching factor that a uniform tree of depth d would have to have in order to contain N + 1 nodes:

    N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d

The measure is fairly constant for sufficiently hard problems, and can thus provide a good guide to the heuristic's overall usefulness. (A small numeric solver is sketched below.)
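Since the equation has no closed form for b*, it is usually solved numerically; the following bisection helper is an illustrative assumption (the 52-node, depth-5 case giving b* ≈ 1.92 is the standard textbook example):

def effective_branching_factor(n_nodes, depth, tol=1e-6):
    """Solve N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d for b* by bisection."""
    def total(b):
        # number of nodes in a uniform tree of the given depth and branching factor b
        return (depth + 1) if b == 1.0 else (b ** (depth + 1) - 1) / (b - 1)
    lo, hi = 1.0, float(n_nodes)                  # b* lies between 1 and N
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < n_nodes + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# e.g., effective_branching_factor(52, 5) is about 1.92
# (52 nodes generated for a solution at depth 5)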

Page 41:

Effectiveness of different heuristics

Results averaged over random instances of the 8-puzzle.

Page 42:

Inventing heuristics via "relaxed problems"
- A problem with fewer restrictions on the actions is called a relaxed problem.
- The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem.
- If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution.
- If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution.
- This can be a useful way to generate heuristics.
  - E.g., ABSOLVER (Prieditis, 1993) discovered the first useful heuristic for the Rubik's cube puzzle.

Page 43:

More on heuristics

- h(n) = max{ h1(n), h2(n), ..., hk(n) }
  - Assume all component heuristics are admissible.
  - Always choose the least optimistic (most accurate) heuristic at each node.
- Could also learn a convex combination of features:
  - a weighted sum of the hi(n), where the weights sum to 1
  - weights learned via repeated puzzle-solving
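A short sketch of both combinations (illustrative code, not from the slides). The pointwise max of admissible heuristics is itself admissible and dominates each component; a convex combination is also admissible, since it never exceeds its largest component, but it is never more informed than the max:

def h_max(state, heuristics):
    """Pointwise maximum of admissible heuristics: still admissible,
    and at least as informed as any single component."""
    return max(h(state) for h in heuristics)

def h_convex(state, heuristics, weights):
    """Weighted sum of admissible heuristics with weights summing to 1
    (also admissible, since it never exceeds the largest component)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * h(state) for w, h in zip(weights, heuristics))

# e.g., for the 8-puzzle sketches above:
# h = lambda s: h_max(s, [h1_misplaced, h2_manhattan])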

Page 44:

Pattern databases

- Admissible heuristics can also be derived from the solution cost of a subproblem of a given problem.
- This cost is a lower bound on the cost of the real problem.
- Pattern databases store the exact solution cost for every possible subproblem instance.
- The complete heuristic is constructed using the patterns in the DB. (A minimal construction sketch follows below.)
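A minimal sketch of how such a database could be built for the 8-puzzle (my illustration, assuming the standard goal layout and a hypothetical 4-tile pattern): abstract away all tiles outside the pattern, run breadth-first search backwards from the abstract goal, and record the exact subproblem cost of every abstract state; looking up the abstraction of a real state then gives an admissible heuristic.

from collections import deque

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)     # assumed goal layout, 0 = blank
PATTERN = {1, 2, 3, 4}                  # tiles tracked by this (hypothetical) pattern

def abstract(state):
    # Keep the blank and the pattern tiles; all other tiles become -1 ("don't care").
    return tuple(t if t == 0 or t in PATTERN else -1 for t in state)

def slides(state):
    # All states reachable by one move (slide a tile into the blank) on a 3x3 board.
    b = state.index(0)
    r, c = divmod(b, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[b], s[j] = s[j], s[b]
            yield tuple(s)

def build_pattern_db():
    # Breadth-first search backwards from the abstract goal: db maps each
    # abstract state to the exact cost of solving the subproblem, which is a
    # lower bound on (hence admissible for) the cost of the full 8-puzzle.
    start = abstract(GOAL)
    db, frontier = {start: 0}, deque([start])
    while frontier:
        s = frontier.popleft()
        for t in slides(s):
            if t not in db:
                db[t] = db[s] + 1
                frontier.append(t)
    return db

def h_pattern(state, db):
    return db[abstract(state)]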

Page 45:

Summary
- Uninformed search methods have their limits.
- Informed (or heuristic) search uses problem-specific heuristics to improve efficiency:
  - Best-first search, A*, RBFS, SMA*
  - Techniques for generating heuristics
- Can provide significant speed-ups in practice, e.g., on the 8-puzzle.
- But can still have worst-case exponential time complexity.

Next lecture: local search techniques (hill-climbing, genetic algorithms, simulated annealing, etc.)
Read Ch. 4 (3rd ed.), Ch. 5 (2nd ed.)