Page 1: CS 332: Algorithms

David Luebke 1 04/21/23

CS 332: Algorithms

Dijkstra’s Algorithm Continued

Disjoint-Set Union

Return to MST (Kruskal)

Amortized Analysis

Page 2: CS 332: Algorithms


Review: Minimum Spanning Tree

Problem: given a connected, undirected, weighted graph, find a spanning tree whose total edge weight is minimized

[Figure: example graph with edge weights 14, 10, 3, 6, 4, 5, 2, 9, 15, 8]

Page 3: CS 332: Algorithms


Review: Minimum Spanning Tree

MSTs satisfy the optimal substructure property: an optimal tree is composed of optimal subtrees

If T is an MST of G, A ⊆ T is a subtree of T, and (u,v) is the minimum-weight edge connecting A to V−A, then (u,v) ∈ T

Page 4: CS 332: Algorithms


Review: Prim’s Algorithm

MST-Prim(G, w, r)
    Q = V[G];
    for each u ∈ Q
        key[u] = ∞;
    key[r] = 0;
    p[r] = NULL;
    while (Q not empty)
        u = ExtractMin(Q);
        for each v ∈ Adj[u]
            if (v ∈ Q and w(u,v) < key[v])
                p[v] = u;
                key[v] = w(u,v);

[Figure: Prim's algorithm running on the example graph; current key values 5, 2, 9, 0, 8, 15 are shown at the vertices, with the just-extracted vertex u highlighted]

Page 5: CS 332: Algorithms


Review: Prim’s Algorithm


What will be the running time?
A: Depends on the queue implementation:
binary heap: O(E lg V)
Fibonacci heap: O(V lg V + E)
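As a runnable companion to the pseudocode, here is a minimal sketch of Prim's algorithm in Python. The function name `mst_prim` and the adjacency-dict format are my own; the standard-library `heapq` stands in for the priority queue, with lazy deletion (push a fresh entry, skip stale ones) replacing DecreaseKey.

```python
import heapq

def mst_prim(adj, r):
    """Prim's MST. adj: {u: [(v, w), ...]} for an undirected graph.
    Returns (total MST weight, parent map)."""
    key = {u: float('inf') for u in adj}
    parent = {u: None for u in adj}
    key[r] = 0
    in_mst = set()
    pq = [(0, r)]                      # lazy-deletion heap instead of DecreaseKey
    while pq:
        k, u = heapq.heappop(pq)       # ExtractMin
        if u in in_mst:
            continue                   # stale entry; skip it
        in_mst.add(u)
        for v, w in adj[u]:
            if v not in in_mst and w < key[v]:
                key[v] = w
                parent[v] = u
                heapq.heappush(pq, (w, v))
    return sum(key.values()), parent
```

Each edge is pushed at most once per endpoint, giving the O(E lg V) binary-heap bound quoted above.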

Page 6: CS 332: Algorithms


Review: Single-Source Shortest Path

Problem: given a weighted directed graph G, find the minimum-weight path from a given source vertex s to another vertex v. "Shortest path" = minimum-weight path; the weight of a path is the sum of its edge weights. E.g., on a road map: what is the shortest path from Chapel Hill to Charlottesville?

Page 7: CS 332: Algorithms


Review: Shortest Path Properties

Optimal substructure: the shortest path consists of shortest subpaths

Let δ(u,v) be the weight of the shortest path from u to v. Shortest paths satisfy the triangle inequality: δ(u,v) ≤ δ(u,x) + δ(x,v)

In graphs with negative weight cycles, some shortest paths will not exist

Page 8: CS 332: Algorithms


Review: Relaxation

Key technique: relaxation. Maintain an upper bound d[v] on δ(s,v):

Relax(u,v,w) {
    if (d[v] > d[u]+w) then d[v] = d[u]+w;
}

[Figure: two relaxation examples with d[u] = 5 and w = 2 — relaxing lowers d[v] from 9 to 7 in one case, and leaves d[v] = 6 unchanged in the other]

Page 9: CS 332: Algorithms


Review: Bellman-Ford Algorithm

BellmanFord()
    for each v ∈ V
        d[v] = ∞;
    d[s] = 0;
    for i=1 to |V|-1
        for each edge (u,v) ∈ E
            Relax(u,v, w(u,v));
    for each edge (u,v) ∈ E
        if (d[v] > d[u] + w(u,v))
            return "no solution";

Relax(u,v,w): if (d[v] > d[u]+w) then d[v]=d[u]+w

Initialize d[], which will converge to the shortest-path values

Relaxation: make |V|−1 passes, relaxing each edge

Test for solution: have we converged yet? I.e., is there a negative cycle?

Page 10: CS 332: Algorithms


Review: Bellman-Ford Algorithm


What will be the running time?

Page 11: CS 332: Algorithms


Review: Bellman-Ford

Running time: O(VE)
Not so good for large dense graphs
But a very practical algorithm in many ways

Note that order in which edges are processed affects how quickly it converges
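The passes-plus-convergence-test structure above can be sketched directly in Python. The function name `bellman_ford` and the edge-list format are my own; returning `None` stands in for the slide's "no solution".

```python
def bellman_ford(vertices, edges, s):
    """edges: list of (u, v, w) for a directed graph.
    Returns the distance dict d, or None if a negative cycle is reachable from s."""
    d = {v: float('inf') for v in vertices}
    d[s] = 0
    for _ in range(len(vertices) - 1):     # |V|-1 relaxation passes
        for u, v, w in edges:
            if d[u] + w < d[v]:            # Relax(u, v, w)
                d[v] = d[u] + w
    for u, v, w in edges:                  # convergence test:
        if d[u] + w < d[v]:                # a still-relaxable edge means a negative cycle
            return None
    return d
```

Note how the edge order within a pass only affects how quickly the d values converge, never the final answer.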

Page 12: CS 332: Algorithms


Review: DAG Shortest Paths

Problem: finding shortest paths in a DAG. Bellman-Ford takes O(VE) time; we can do better using a topological sort

If we were lucky and processed the vertices on each shortest path from left to right, we would be done in one pass

Every path in a DAG is a subsequence of the topologically sorted vertex order, so if we process vertices in that order, we handle every path in forward order (we never relax the edges out of a vertex before relaxing all the edges into it)

Thus: just one pass. Running time: O(V+E)
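The one-pass idea can be sketched as follows. This is my own minimal version: it topologically sorts with Kahn's algorithm (indegree counting), then does a single relaxation sweep in that order; the name `dag_shortest_paths` is an assumption, not from the slides.

```python
from collections import deque

def dag_shortest_paths(adj, s):
    """adj: {u: [(v, w), ...]} for a DAG (every vertex present as a key)."""
    indeg = {u: 0 for u in adj}
    for u in adj:
        for v, _ in adj[u]:
            indeg[v] += 1
    order = []
    q = deque(u for u in adj if indeg[u] == 0)   # Kahn's topological sort
    while q:
        u = q.popleft()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    d = {u: float('inf') for u in adj}
    d[s] = 0
    for u in order:        # single pass: all edges into u are done before edges out of u
        for v, w in adj[u]:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    return d
```

Both phases touch each vertex and edge a constant number of times: O(V+E) total.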

Page 13: CS 332: Algorithms


Review: Dijkstra’s Algorithm

Dijkstra(G)
    for each v ∈ V
        d[v] = ∞;
    d[s] = 0;
    S = ∅;
    Q = V;
    while (Q ≠ ∅)
        u = ExtractMin(Q);
        S = S ∪ {u};
        for each v ∈ u->Adj[]
            if (d[v] > d[u]+w(u,v))
                d[v] = d[u]+w(u,v);    // relaxation step

Note: this is really a call to Q->DecreaseKey()

Page 14: CS 332: Algorithms


Review: Dijkstra’s Algorithm

Running time: O(E lg V) using a binary heap for Q. Can achieve O(V lg V + E) with Fibonacci heaps
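A minimal runnable sketch of the algorithm, under the same assumptions as the earlier Prim sketch: `heapq` as the binary heap, with lazy deletion playing the role of DecreaseKey. The name `dijkstra` and the adjacency-dict format are mine.

```python
import heapq

def dijkstra(adj, s):
    """adj: {u: [(v, w), ...]} with nonnegative weights. Returns distances from s."""
    d = {u: float('inf') for u in adj}
    d[s] = 0
    done = set()                       # the set S of finished vertices
    pq = [(0, s)]
    while pq:
        du, u = heapq.heappop(pq)      # ExtractMin
        if u in done:
            continue                   # stale heap entry
        done.add(u)
        for v, w in adj[u]:
            if du + w < d[v]:          # relaxation step
                d[v] = du + w
                heapq.heappush(pq, (d[v], v))  # push replaces DecreaseKey
    return d
```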

Page 15: CS 332: Algorithms


Dijkstra’s Algorithm

Correctness: we must show that when u is removed from Q, it has already converged, i.e., d[u] = δ(s,u)

Page 16: CS 332: Algorithms


Correctness Of Dijkstra's Algorithm

Note that d[v] ≥ δ(s,v) for all v. Let u be the first vertex picked such that there is a shorter path than d[u], i.e., d[u] > δ(s,u). Let y be the first vertex in V−S on the actual shortest path from s to u. Then d[y] = δ(s,y),

because d[x] is set correctly for y's predecessor x ∈ S on the shortest path, and when we put x into S, we relaxed (x,y), giving d[y] the correct value

[Figure: the shortest path from s to u leaves S through the edge (x,y), with x ∈ S and y the first vertex in V−S]

Page 17: CS 332: Algorithms


Correctness Of Dijkstra's Algorithm

Note that d[v] ≥ δ(s,v) for all v. Let u be the first vertex picked such that d[u] > δ(s,u), and let y be the first vertex in V−S on the actual shortest path from s to u, so d[y] = δ(s,y). Then

d[u] > δ(s,u)
     = δ(s,y) + δ(y,u)   (Why? y lies on the shortest path to u)
     = d[y] + δ(y,u)
     ≥ d[y]

But if d[u] > d[y], we wouldn't have chosen u. Contradiction.


Page 18: CS 332: Algorithms


Disjoint-Set Union Problem

Want a data structure to support disjoint sets: a collection of disjoint sets S = {Si}, with Si ∩ Sj = ∅ for i ≠ j

Need to support the following operations:
MakeSet(x): S = S ∪ {{x}}
Union(Si, Sj): S = S − {Si, Sj} ∪ {Si ∪ Sj}
FindSet(x): return the Si ∈ S such that x ∈ Si

Before discussing implementation details, we look at example application: MSTs

Page 19: CS 332: Algorithms


Kruskal’s Algorithm

Kruskal()
{
    T = ∅;
    for each v ∈ V
        MakeSet(v);
    sort E by increasing edge weight w
    for each (u,v) ∈ E (in sorted order)
        if FindSet(u) ≠ FindSet(v)
            T = T ∪ {{u,v}};
            Union(FindSet(u), FindSet(v));
}
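Here is a minimal runnable sketch of the same loop in Python. The disjoint sets are realized as a forest with union by size and path compression, which is one common implementation (faster than the linked-list scheme discussed later in these slides); the function name `kruskal` and the `(w, u, v)` edge format are my own.

```python
def kruskal(vertices, edges):
    """edges: list of (w, u, v). Returns (MST weight, list of chosen edges)."""
    parent = {v: v for v in vertices}   # MakeSet for each vertex
    size = {v: 1 for v in vertices}

    def find(x):                        # FindSet, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    T, total = [], 0
    for w, u, v in sorted(edges):       # sort E by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                    # FindSet(u) != FindSet(v)
            if size[ru] < size[rv]:
                ru, rv = rv, ru
            parent[rv] = ru             # Union: hang smaller tree under larger
            size[ru] += size[rv]
            T.append((u, v))
            total += w
    return total, T
```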

Pages 20–42: Kruskal's Algorithm (worked example)

[Slides 20–42 repeat the Kruskal pseudocode above and step through a run ("Run the algorithm:") on an example graph with edge weights 1, 2, 5, 8, 9, 13, 14, 17, 19, 21, 25, highlighting each candidate edge in increasing weight order.]

Page 43: CS 332: Algorithms


Correctness Of Kruskal’s Algorithm

Sketch of a proof that this algorithm produces an MST of G:
Assume the algorithm is wrong: the result is not an MST
Then the algorithm adds a wrong edge at some point
If it adds a wrong edge, there must be a lower-weight edge that could be swapped in (cut-and-paste argument)
But the algorithm chooses the lowest-weight edge at each step. Contradiction.

Again, it is important to be comfortable with cut-and-paste arguments

Page 44: CS 332: Algorithms


Kruskal’s Algorithm


What will affect the running time?

Page 45: CS 332: Algorithms


Kruskal’s Algorithm


What will affect the running time?
1 sort
O(V) MakeSet() calls
O(E) FindSet() calls
O(V) Union() calls
(Exactly how many Union()s?)

Page 46: CS 332: Algorithms


Kruskal’s Algorithm: Running Time

To summarize:
Sort edges: O(E lg E)
O(V) MakeSet()'s
O(E) FindSet()'s
O(V) Union()'s

Upshot: the best disjoint-set union algorithm makes the above 3 operations take O(E α(E,V)) total, where α is almost constant
Overall thus O(E lg E); almost linear without the sorting

Page 47: CS 332: Algorithms


Disjoint Set Union

So how do we implement disjoint-set union? Naïve implementation: use a linked list to represent each set:

MakeSet(): ??? time
FindSet(): ??? time
Union(A,B): "copy" elements of A into B: ??? time

Page 48: CS 332: Algorithms


Disjoint Set Union

So how do we implement disjoint-set union? Naïve implementation: use a linked list to represent each set:

MakeSet(): O(1) time
FindSet(): O(1) time
Union(A,B): "copy" elements of A into B: O(|A|) time

How long can a single Union() take? How long will n Union()'s take?

Page 49: CS 332: Algorithms


Disjoint Set Union: Analysis

Worst-case analysis: O(n²) time for n Union's:

Union(S1, S2): "copy" 1 element
Union(S2, S3): "copy" 2 elements
…
Union(Sn−1, Sn): "copy" n−1 elements

Total: O(n²)

Improvement: always copy the smaller set into the larger
Why will this make things better? What is the worst-case time of Union()?

But now n Union's take only O(n lg n) time!
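The weighted-union heuristic can be sketched concretely. This is my own toy version: Python lists stand in for the linked lists, each element keeps a pointer to its (shared) set, and a counter tracks how many element copies all the Unions perform; the class and attribute names are assumptions, not from the slides.

```python
class LinkedListSets:
    """Naive disjoint sets as lists, with the weighted-union heuristic:
    always copy the smaller set into the larger."""
    def __init__(self):
        self.set_of = {}      # element -> the list representing its set
        self.copies = 0       # total elements "copied" across all Unions

    def make_set(self, x):
        self.set_of[x] = [x]

    def find_set(self, x):
        return id(self.set_of[x])   # O(1): each element points at its set

    def union(self, x, y):
        a, b = self.set_of[x], self.set_of[y]
        if len(a) < len(b):
            a, b = b, a             # copy the smaller set (b) into the larger (a)
        for e in b:
            a.append(e)
            self.set_of[e] = a      # update the moved element's set pointer
            self.copies += 1
```

Merging 64 singletons pairwise, tournament-style, performs exactly (n/2)·lg n = 192 copies, matching the O(n lg n) bound for n Unions.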

Page 50: CS 332: Algorithms


Amortized Analysis of Disjoint Sets

Amortized analysis computes average running times without using probability

With our new Union(), any individual element is copied at most lg n times when forming the complete set from 1-element sets. Worst case: each time it is copied, the element is in the smaller set:

1st time: resulting set size ≥ 2
2nd time: resulting set size ≥ 4
…
(lg n)th time: resulting set size ≥ n

Page 51: CS 332: Algorithms


Amortized Analysis of Disjoint Sets

Since we have n elements, each copied at most lg n times, n Union()'s take O(n lg n) time

We say that each Union() takes O(lg n) amortized time
Financial term: imagine paying $(lg n) per Union. At first we are overpaying; the initial Unions cost only O(1). But we accumulate enough $ in the bank to pay for a later expensive O(n) operation. Important: the amount in the bank never goes negative

Page 52: CS 332: Algorithms


Amortized Analysis

The book describes 3 views of amortized analysis in Chapter 18:
Aggregate
Accounting
Potential

Page 53: CS 332: Algorithms


Amortized Analysis: Aggregate Method

Aggregate method:
This is what we just did for Union()
n operations take time T(n)
Average cost of an operation = T(n)/n
Not very precise

Page 54: CS 332: Algorithms


Amortized Analysis: Accounting Method

Accounting method:
We have done this with graph algorithms
Charge each operation an amortized cost
Usually just guess/invent this cost
The amount not used is stored in a "bank"
Later operations can use the stored work
The balance must not go negative

Page 55: CS 332: Algorithms


Amortized Analysis: Potential Method

Potential method:
The "stored work" of the accounting method is viewed as "potential energy"
Most flexible and powerful approach
See the book if interested; we won't go into it (or test it)

Page 56: CS 332: Algorithms


Amortized Analysis Example: Dynamic Tables

Implementing a table (e.g., a hash table) for dynamic data, we want to make it as small as possible

Problem: if too many items are inserted, the table may be too small

Idea: allocate more memory as needed

Page 57: CS 332: Algorithms


Dynamic Tables

1. Init table size m = 1

2. Insert elements until the number of elements n > m

3. Generate new table of size 2m

4. Reinsert old elements into new table

5. (back to step 2)

What is the worst-case cost of an insert? One insert can be costly, but the total?
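Steps 1–5 can be simulated directly to see both answers at once: a single insert can cost Θ(n) (when the table doubles), yet the total stays small. The function name `dynamic_table_costs` is my own.

```python
def dynamic_table_costs(n):
    """Simulate n inserts into a doubling table.
    Returns the per-insert costs c_i: a growing insert pays 1 for the new
    element plus count reinsertions of the old elements."""
    size, count, costs = 1, 0, []
    for i in range(1, n + 1):
        if count == size:               # table full: double, reinsert everything
            size *= 2
            costs.append(1 + count)
        else:
            costs.append(1)
        count += 1
    return costs
```

For n = 9 this reproduces the cost column on the next slides (1, 2, 3, 1, 5, 1, 1, 1, 9), and the total over 1000 inserts is below 3n.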

Page 58: CS 332: Algorithms


Analysis Of Dynamic Tables

Let ci = cost of the i-th insert:
ci = i if i−1 is an exact power of 2, 1 otherwise

Example:
Operation    Table Size    Cost
Insert(1)    1             1


Page 66: CS 332: Algorithms


Analysis Of Dynamic Tables

Let ci = cost of the i-th insert:
ci = i if i−1 is an exact power of 2, 1 otherwise

Example:
Operation    Table Size    Cost
Insert(1)    1             1
Insert(2)    2             1 + 1 = 2
Insert(3)    4             1 + 2 = 3
Insert(4)    4             1
Insert(5)    8             1 + 4 = 5
Insert(6)    8             1
Insert(7)    8             1
Insert(8)    8             1
Insert(9)    16            1 + 8 = 9

Page 67: CS 332: Algorithms


Aggregate Analysis

n Insert() operations cost:

∑(i=1 to n) ci ≤ n + ∑(j=0 to lg n) 2^j = n + (2n − 1) < 3n

Average cost of an operation = (total cost)/(# operations) < 3

Asymptotically, then, a dynamic table costs the same as a fixed-size table: both O(1) per Insert operation

Page 68: CS 332: Algorithms


Accounting Analysis

Charge each operation $3 amortized cost:
Use $1 to perform the immediate Insert()
Store $2

When the table doubles:
$1 reinserts an old item, $1 reinserts another old item
Point is, we've already paid these costs
Upshot: constant (amortized) cost per operation

Page 69: CS 332: Algorithms


Accounting Analysis

Suppose we must support insert & delete; the table should contract as well as expand:
Table overflows → double it (as before)
Table < 1/2 full → halve it: BAD IDEA (Why?)
Better: Table < 1/4 full → halve it
Charge $3 for Insert (as before)
Charge $2 for Delete:
Store the extra $1 in the emptied slot
Use it later to pay to copy the remaining items to the new table when shrinking
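The expand/contract policy can be simulated to check that the total elementary cost stays within the charged $3-per-insert, $2-per-delete budget (so within 3 per operation). The class below is my own sketch: it tracks only sizes and costs, charging 1 per insert/delete plus 1 per item moved on a resize.

```python
class DynamicTable:
    """Double on overflow; halve when less than 1/4 full."""
    def __init__(self):
        self.size, self.n, self.cost = 1, 0, 0

    def insert(self):
        if self.n == self.size:
            self.cost += self.n          # copy all items into a table twice as large
            self.size *= 2
        self.n += 1
        self.cost += 1

    def delete(self):
        self.n -= 1
        self.cost += 1
        if self.size > 1 and self.n < self.size // 4:
            self.cost += self.n          # copy remaining items into a half-size table
            self.size //= 2
```

Halving at 1/2 full would be the BAD IDEA from the slide: an insert/delete pair right at the boundary would then trigger a Θ(n) resize on every operation.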

Page 70: CS 332: Algorithms


The End

Page 71: CS 332: Algorithms


Exercise 1 Feedback

First, my apologies…
Harder than I thought
Too late to help with the midterm

Proof by substitution: T(n) = T(n/2 + √n) + n
Most people assumed it was O(n lg n)… why?
It resembled a proof from class: T(n) = 2T(n/2 + 17) + n
The correct intuition: the n/2 term dominates the √n term, so it resembles T(n) = T(n/2) + n, which is O(n) by the master theorem
Still, if it's O(n) it's O(n lg n), right?

Page 72: CS 332: Algorithms


Exercise 1: Feedback

So, prove by substitution that T(n) = T(n/2 + √n) + n = O(n lg n)

Assume T(n) ≤ cn lg n. Then
T(n) ≤ c(n/2 + √n) lg(n/2 + √n) + n
     ≤ c(n/2 + √n) lg(3n/2) + n   (since √n ≤ n/2 for n ≥ 4)
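As a numeric sanity check of that bound (not a proof), here is a hypothetical integer version of the recurrence; the √n reading of the recurrence, the base case T(n) = 1 for n ≤ 4, and the constant c = 3 are all my assumptions.

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """Integer stand-in for T(n) = T(n/2 + sqrt(n)) + n, with T(n) = 1 for n <= 4."""
    if n <= 4:
        return 1
    m = n // 2 + math.isqrt(n)   # n/2 + sqrt(n), rounded down; m < n for n >= 5
    return T(m) + n

# Check T(n) <= 3 n lg n over a range of n (consistent with the claimed O(n lg n),
# and in fact with the stronger O(n) bound the intuition suggests).
ok = all(T(n) <= 3 * n * math.log2(n) for n in range(5, 2000))
```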