
Discrete Optimization Lecture-9

October 22, 2011


Hall’s Theorem

First we have an equivalent formulation of Hall’s theorem:

Theorem. Let G be a bipartite graph with partitions X, Y, and let N(x) = {y | (x, y) ∈ E(G)}. There is a matching M that saturates X if and only if for every A ⊆ X, |⋃_{x∈A} N(x)| ≥ |A|.

Corollary. An immediate corollary is yet another proof that a k-regular bipartite graph has a perfect matching.
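For small instances, the condition in the theorem can be checked directly by brute force over all subsets A ⊆ X. A minimal sketch (exponential in |X|, so for illustration only; the function name is my own):

```python
from itertools import combinations

def hall_condition(neighbors):
    """neighbors: dict mapping each x in X to its neighborhood N(x) in Y.
    Returns True iff |union of N(x) for x in A| >= |A| for every nonempty A."""
    xs = list(neighbors)
    for k in range(1, len(xs) + 1):
        for A in combinations(xs, k):
            union = set().union(*(neighbors[x] for x in A))
            if len(union) < len(A):
                return False  # A is a violating subset
    return True
```

By the theorem, this returns True exactly when a matching saturating X exists.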

Let us re-visit the previous example.


Let us introduce a slight change:

{1,3,2,5}, {1,3,4}, {1,4,8}, {2,3,5,6}, {2,4,6}, {1,5,2}, {1,3,7}, {1,4,5,6}?

Let us try a simple greedy selection:

1. 1 → {1,3,2,5}   3 → {1,3,4}
2. 4 → {1,4,8}   2 → {2,3,5,6}
3. 6 → {2,4,6}   5 → {1,5,2}
4. 7 → {1,3,7}   ?? → {1,4,5,6}

This time we do have enough candidates, yet we failed to find an SDR.

But we can find an augmenting path!
{1,4,5,6} ← 1 ← {1,3,2,5} ← 3 ← {1,3,4} ← 4 ← {1,4,8} ← 8
is an augmenting path! Augmenting along it reassigns the representatives on the path, so here is the SDR:
1 → {1,4,5,6}, 3 → {1,3,2,5}, 4 → {1,3,4}, 8 → {1,4,8}, with 2, 6, 5 and 7 assigned as before.
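The greedy-plus-augmenting-path procedure above can be sketched in a few lines; this is the standard augmenting-path (Kuhn-style) matching search, with function and variable names of my own choosing:

```python
def find_sdr(families):
    """Find a system of distinct representatives for a list of sets,
    or return None if none exists (i.e., Hall's condition fails)."""
    rep_of = {}  # element -> index of the set it currently represents

    def try_assign(i, seen):
        # Try to give set i a representative, recursively reassigning
        # representatives along an augmenting path.
        for e in families[i]:
            if e in seen:
                continue
            seen.add(e)
            if e not in rep_of or try_assign(rep_of[e], seen):
                rep_of[e] = i
                return True
        return False

    for i in range(len(families)):
        if not try_assign(i, set()):
            return None
    chosen = {i: e for e, i in rep_of.items()}
    return [chosen[i] for i in range(len(families))]
```

Running it on the eight sets above finds an SDR, matching the augmenting-path argument on the slide.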


Proof of hall’s theorem

This example captures the essence of the proof of Hall's theorem.

Proof.

If ∀A ⊂ X , |⋃

x∈A N(x) |≥ |A| and M is a matching that does notsaturate X then we can always find an M-augmenting path:

Watch the blackboard!


More matching theory and applications

Definition. A permutation matrix is a 0–1 matrix in which every row and every column has sum 1.

Theorem. Let A = (a_{ij}) be a matrix with a_{ij} ∈ ℕ such that ∑_{i=1}^n a_{ij} = m for all 1 ≤ j ≤ n, and ∑_{j=1}^n a_{ij} = m for all 1 ≤ i ≤ n. Then A = P_1 + P_2 + ⋯ + P_m, where the P_i are permutation matrices.
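The proof of this theorem is constructive, and a minimal sketch follows: the positive entries of A define a bipartite graph on rows and columns in which, since all row and column sums equal m, a perfect matching always exists; each matching peels off one permutation matrix. Function and variable names here are my own illustration:

```python
def decompose(A, m):
    """Decompose an n x n nonnegative integer matrix A, with all row and
    column sums equal to m, into m permutation matrices. Returns a list of
    permutations; perm[i] = column of the 1 in row i of that P."""
    n = len(A)
    A = [row[:] for row in A]  # work on a copy
    perms = []
    for _ in range(m):
        match = [-1] * n  # match[j] = row matched to column j

        def augment(i, seen):
            # Augmenting-path search restricted to positive entries.
            for j in range(n):
                if A[i][j] > 0 and j not in seen:
                    seen.add(j)
                    if match[j] == -1 or augment(match[j], seen):
                        match[j] = i
                        return True
            return False

        for i in range(n):
            if not augment(i, set()):
                raise ValueError("row/column sums are not all equal to m")
        perm = [0] * n
        for j in range(n):
            perm[match[j]] = j
            A[match[j]][j] -= 1  # subtract this permutation matrix
        perms.append(perm)
    return perms
```

Summing the m permutation matrices recovers A, as the theorem asserts.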

Definition.
1. An n × n real, non-negative matrix is called doubly stochastic if the sum of every row and every column is 1.
2. The expression ∑_{i=1}^n α_i v_i, where the v_i are vectors, α_i ≥ 0 and ∑_{i=1}^n α_i = 1, is called a convex combination of the v_i's.


Matchings

Theorem (Birkhoff). A doubly stochastic matrix A is a convex combination of permutation matrices.

Comment. This is a very important theorem, used in many areas of mathematics, applied mathematics, computer science, economics, electrical engineering and more.

The proof of this important theorem is similar to the proof of the integercase and left to you.
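As a hint for that exercise, the same augmenting-path idea carries over: repeatedly pick a permutation supported on the positive entries and subtract it, scaled by its smallest entry along the permutation. A sketch under these assumptions (names and the tolerance handling are my own, not part of the lecture):

```python
def birkhoff(A, tol=1e-9):
    """Express a doubly stochastic matrix as sum of alpha_k * P_k with
    alpha_k >= 0 summing to 1. Returns a list of (alpha, perm) pairs,
    where perm[i] = column of the 1 in row i of P_k."""
    n = len(A)
    A = [row[:] for row in A]  # work on a copy
    result = []
    remaining = 1.0
    while remaining > tol:
        match = [-1] * n  # match[j] = row matched to column j

        def augment(i, seen):
            # Augmenting-path search on the positive support of A.
            for j in range(n):
                if A[i][j] > tol and j not in seen:
                    seen.add(j)
                    if match[j] == -1 or augment(match[j], seen):
                        match[j] = i
                        return True
            return False

        for i in range(n):
            if not augment(i, set()):
                raise ValueError("matrix is not doubly stochastic")
        perm = [0] * n
        for j in range(n):
            perm[match[j]] = j
        alpha = min(A[i][perm[i]] for i in range(n))
        for i in range(n):
            A[i][perm[i]] -= alpha  # subtract alpha * P_k
        result.append((alpha, perm))
        remaining -= alpha
    return result
```

The coefficients alpha sum to 1, so the result is exactly the convex combination promised by Birkhoff's theorem.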
