
Page 1:

Geometry of Online Packing Linear Programs

Marco Molinaro and R. Ravi, Carnegie Mellon University

Page 2:

Packing Integer Programs (PIPs)

• Non-negative c, A, b

• Max c · x  s.t.  Ax ≤ b,  x ∈ {0,1}^n

• A is an m×n matrix with entries in [0,1]
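To make the model concrete, here is a minimal sketch (not from the talk) that builds a small random instance and computes the offline optimum by brute force; all names are illustrative:

```python
import itertools
import numpy as np

def offline_opt(c, A, b):
    """Offline optimum of the PIP: max c.x s.t. Ax <= b, x in {0,1}^n,
    by brute-force enumeration (only sensible for tiny n)."""
    best = 0.0
    for bits in itertools.product([0, 1], repeat=len(c)):
        x = np.array(bits)
        if np.all(A @ x <= b):
            best = max(best, float(c @ x))
    return best

rng = np.random.default_rng(0)
m, n = 2, 8
A = rng.random((m, n))   # entries in [0, 1], as the model requires
c = rng.random(n)        # non-negative objective
b = np.full(m, 3.0)      # right-hand sides
print(offline_opt(c, A, b))
```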

Page 3:

Online Packing Integer Programs

• Adversary chooses values for c, A, b
• …but the columns are presented in random order
• …when a column arrives, its variable is set to 0/1 irrevocably
• b and n are known upfront

[Figure: the columns A^t of the m×n matrix A arrive one by one with their values c_t, subject to Ax ≤ b.]
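A schematic of this arrival model, assuming a hypothetical `decide` callback that stands in for whatever 0/1 rule the algorithm uses:

```python
import numpy as np

def online_pip(c, A, b, decide, rng):
    """The online game: columns of A arrive in uniformly random order and
    each variable must be set to 0/1 irrevocably on arrival.
    The algorithm (`decide`) knows b and n upfront, not c or A."""
    m, n = A.shape
    used, value = np.zeros(m), 0.0
    for t in rng.permutation(n):                 # random arrival order
        x_t = decide(A[:, t], c[t], used, b, n)  # irrevocable 0/1 choice
        if x_t == 1 and np.all(used + A[:, t] <= b):
            used += A[:, t]
            value += c[t]
    return value
```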

Page 4:

Online Packing Integer Programs

• Goal: Find feasible solution that maximizes expected value

• (1−ε)-competitive:

E[ALG] ≥ (1−ε) · OPT(offline IP)

Page 5:

Previous Results

• First online problem: the secretary problem [Dynkin 63]

• B-secretary problem (m = 1, b = B, A all 1's) [Kleinberg 05]: (1−ε)-competitive for B ≥ Ω(1/ε²) (does not depend on n)

• PIPs (B = min_i b_i) [FHKMS 10, AWY]: (1−ε)-competitive, but need B ≥ Ω((m/ε²) · log n) (depends on n)

Page 6:

Main Question and Result

• Q: Do general PIPs become more difficult for larger n?

• A: No!

Main result

Algorithm is (1−ε)-competitive when B ≥ Ω((m²/ε²) · log(m/ε))
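In display form (a reconstruction with constants suppressed), the contrast with the bound on the previous slide:

```latex
\text{before: } B \;=\; \Omega\!\Big(\frac{m}{\epsilon^{2}}\,\log n\Big)
\qquad\longrightarrow\qquad
\text{now: } B \;=\; \Omega\!\Big(\frac{m^{2}}{\epsilon^{2}}\,\log\frac{m}{\epsilon}\Big),
\quad\text{independent of } n.
```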

Page 7:

High-level Idea

1. Online PIP as learning
2. Improving the learning error using tailored covering bounds
3. Geometry of PIPs that allow good covering bounds
4. Reduce a general PIP to the above

For this talk:

• Every right-hand side b_i equals B
• Show a weaker bound than the main result

Page 8:

Online PIP as Learning
1) Reduction to learning a classifier [DH 09]

Linear classifier: given a (dual) vector p, set

x(p)_t = 1  iff  p · A^t − c_t < 0

[Figure: the dual vector p classifies each incoming column A^t as 0 or 1.]
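The rule in code form (a direct transcription of the definition above, vectorized over all columns at once):

```python
import numpy as np

def classify(p, A, c):
    """Classification induced by a dual vector p:
    column t is picked (x(p)_t = 1) iff p . A^t - c_t < 0."""
    return (p @ A - c < 0).astype(int)
```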

Page 9–10:

Online PIP as Learning
1) Reduction to learning a classifier [DH 09]

Linear classifier: given a (dual) vector p, set x(p)_t = 1 iff p · A^t − c_t < 0

Claim: If the classification x(p) given by p satisfies
1) [Feasible] A · x(p) ≤ b
2) [Packs tightly] if p_i > 0, then (A · x(p))_i ≥ (1−ε) · b_i
then x(p) is (1−ε)-optimal. Moreover, such a classification always exists.

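Why such an x(p) is near-optimal, sketched via the Lagrangian (a reconstruction of the standard duality argument behind the claim, not the talk's exact proof). Since x(p) picks exactly the columns with positive reduced cost c_t − p · A^t, it maximizes L(x, p) = c·x + p·(b − Ax) over x ∈ {0,1}^n, so for any feasible x*:

```latex
c\,x^{*} \;\le\; c\,x^{*} + p\,(b - A x^{*})
        \;\le\; c\,x(p) + p\,(b - A x(p))
        \;\le\; c\,x(p) + \epsilon\,p\,b
```

The first inequality uses p ≥ 0 and feasibility of x*, the last uses [Packs tightly]; since a suitable dual also satisfies p · b = O(OPT), this gives c · x(p) ≥ (1 − O(ε)) · OPT.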

Page 11:

Online PIP as Learning

2) Solving a PIP via learning
a) Sample an ε fraction S of the columns
b) Compute an appropriate dual vector p for the sampled IP
c) Use p to classify the remaining columns
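A minimal end-to-end sketch of steps (a)–(c), learning p as an LP dual of the sampled instance. The budget scaling ε·b in the sampled LP and the absence of any safety margin are simplifying assumptions, not the talk's exact algorithm:

```python
import numpy as np
from scipy.optimize import linprog

def learn_dual(c_s, A_s, b, frac):
    """Step (b): solve the LP relaxation of the sampled IP, with budgets
    scaled down to the sample's share, and return the duals of Ax <= b."""
    res = linprog(-c_s, A_ub=A_s, b_ub=frac * b,
                  bounds=(0, 1), method="highs")
    return -res.ineqlin.marginals        # marginals <= 0 for a min problem

def solve_online(c, A, b, eps, rng):
    """Steps (a)-(c): observe an eps fraction, learn p, classify the rest."""
    m, n = A.shape
    order = rng.permutation(n)           # random arrival order
    k = int(np.ceil(eps * n))
    S, rest = order[:k], order[k:]       # (a) sample S: observe only
    p = learn_dual(c[S], A[:, S], b, eps)
    used, value = np.zeros(m), 0.0
    for t in rest:                       # (c) classify remaining columns
        if c[t] - p @ A[:, t] > 0 and np.all(used + A[:, t] <= b):
            used += A[:, t]
            value += c[t]
    return value
```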


Page 12–13:

Online PIP as Learning

2) Solving a PIP via learning

Probability of learning a good classification:
• Consider a classification that overfills some budget: (A x)_i > b_i
• It can only be learned if the sample is skewed, which happens with probability at most exp(−Ω(ε² B))
• There are at most n^m distinct bad classifications
• Union bounding over all bad classifications: a good classification is learned with probability at least 1 − n^m · exp(−Ω(ε² B))
• When B ≥ Ω((m/ε²) · log n), we get a good classification with high probability

Improve this…
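The arithmetic behind the last two bullets (constants suppressed): the union bound is over at most n^m bad classifications, each surviving the sample with exponentially small probability:

```latex
\Pr[\text{learn a bad classification}]
  \;\le\; \underbrace{n^{m}}_{\#\,\text{bad classifications}}
    \cdot \underbrace{e^{-\Omega(\epsilon^{2}B)}}_{\Pr[\text{skewed sample}]}
  \;\le\; \delta
\qquad\text{whenever}\qquad
B \;\ge\; \Omega\!\Big(\frac{m\log n + \log(1/\delta)}{\epsilon^{2}}\Big).
```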

Page 14:

Improved Learning Error

• Idea 1: Covering bounds via witnesses (handling multiple bad classifiers at a time)

+-witness: q is a +-witness of p for constraint i if
1) the columns picked by q are a subset of the columns picked by p
2) the total occupation of constraint i by the columns picked by q is already too large

−-witness: similar…

Lemma: Suppose there is a witness set of size M. Then the probability of learning a bad classifier is at most M · exp(−Ω(ε² B)).

[Figure: total weight of the columns picked by q in constraint i.]
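Restating the lemma next to the bound on the previous slide shows the gain: the union bound now ranges over the witness set W rather than over all n^m classifications, so the requirement on B scales with log |W| instead of m log n:

```latex
\Pr[\text{learn a bad classifier}] \;\le\; |W| \cdot e^{-\Omega(\epsilon^{2}B)},
\qquad\text{so}\qquad
B \;\ge\; \Omega\!\Big(\frac{\log |W|}{\epsilon^{2}}\Big)\ \text{suffices.}
```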


Page 15–16:

Geometry of PIPs with Small Witness Set

• For some PIPs, any witness set must be large (its size grows with n)

• Idea 2: Consider PIPs whose columns lie on a small number K of 1-d subspaces

[Figure: columns lying on K = 2 one-dimensional subspaces.]

Lemma: For such PIPs, one can find a witness set whose size depends only on K, m, and ε, not on n.

Page 17:

Geometry of PIPs with Small Witness Set

• Covering bound + witness size: it suffices that B ≥ Ω((1/ε²) · log |W|), with no dependence on n
• Final step: Convert any PIP into one whose columns lie on few 1-d subspaces, losing only an O(ε) fraction of the value

Algorithm is (1−ε)-competitive when B ≥ Ω((m²/ε²) · log(m/ε))

Page 18:

Conclusion

• Guarantee for online PIPs independent of the number of columns n
• Asymptotically matches the guarantee for the single-constraint version [Kleinberg 05]
• Ideas:
1) Tailored covering bound based on witnesses
2) Analyze the geometry of the columns to obtain a small witness set
→ both make the learning problem more robust

Open problems:
1) Obtain the optimal dependence of B on m and ε? Can be done if columns are sampled with replacement [DJSW 11]
2) Generalize to AdWords-type problems
3) Better online models: infinite horizon? less randomness?

Page 19:

Thank you!