
Page 1: Generalized Linear Programming

Generalized Linear Programming

Jiří Matoušek

Charles University, Prague

Page 2: Generalized Linear Programming

The cool slides in this presentation are included by the courtesy of Tibor Szabó.

Page 3: Generalized Linear Programming

Linear Programming

• Minimize cx subject to Ax ≥ b.

• Geometry: Minimize a linear function over the intersection of n halfspaces in R^d (= convex polyhedron).
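To make the formulation concrete, here is a minimal sketch (my illustration, not from the talk; the numbers are made up) of a tiny instance of "minimize cx subject to Ax ≥ b" solved with SciPy. linprog expects constraints of the form A_ub x <= b_ub, so Ax ≥ b is passed as −A x ≤ −b.

```python
# Minimal sketch: a tiny LP "minimize c.x subject to Ax >= b" via SciPy.
# The numbers are made up for illustration only.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])          # minimize x1 + 2*x2
A = np.array([[1.0, 1.0],         # x1 + x2 >= 3
              [1.0, 0.0],         # x1      >= 0
              [0.0, 1.0]])        #      x2 >= 0   (n = 3 halfspaces in R^2)
b = np.array([3.0, 0.0, 0.0])

# linprog minimizes c.x subject to A_ub x <= b_ub, so Ax >= b becomes -A x <= -b.
res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(None, None)] * 2)
print(res.x, res.fun)             # an optimal vertex of the polyhedron and its value
```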

Page 4: Generalized Linear Programming

LP Algorithms

• Simplex method [Dantzig 1947]
– very fast in practice
– very good “average case”
– exponential-time examples for almost all pivot rules

• Ellipsoid method [Khachiyan], interior-point methods [Karmarkar], …
– weakly polynomial, but no (worst-case) bound in terms of n and d alone

Page 5: Generalized Linear Programming

Combinatorial LP algorithms

• wanted: time f(d,n) for all inputs

• computations “coordinate independent”; use only combinatorial structure of the feasible set (polyhedron) or of the arrangement of bounding hyperplanes

Page 6: Generalized Linear Programming

Combinatorial LP algorithms

Computational geometry: research started with d fixed (and small)
– [Megiddo] exp(exp(d))·n
– [Clarkson] randomization; d^2·n + d^{d/2} log n
– [Seidel] simple randomized; d!·n
– [Chazelle, M.] exp(O(d))·n, deterministic
– parallel algorithms [Alon, Megiddo], [Ajtai, Megiddo]

Page 7: Generalized Linear Programming

A subexponential algorithm

Theory of convex polytopes (Hirsch conjecture):

[Kalai] 1992

Computational geometry:

[Sharir, Welzl],

[M., Sharir, Welzl] 1992

exp(O(√(d log d)))·n (randomized expected)

– known as RANDOM FACET: in the current vertex of the feasible polytope, choose a random improving facet, recursively find its optimum, and repeat

– still the best known running time!

Page 8: Generalized Linear Programming

Abstract frameworks

• systems of axioms capturing some of the properties of linear programming

• running time of algorithms counted in terms of certain primitive operations

• to apply to a specific problem, need to implement them …

• … and then algorithms become available (such as Kalai/MSW, Clarkson)

Page 9: Generalized Linear Programming

Abstract frameworks

Abstract objective functions [Adler, Saigal 1976], [Williamson Hoke 1988], [Kalai 1988]
– P a (convex) polytope
– f : V(P) → R is an abstract objective function (AOF) if a local minimum on any face F is also the unique global minimum of F

– every generic linear function induces an AOF

– but there are nonrealizable AOFs on the 3-dimensional cube!

Page 10: Generalized Linear Programming

Abstract frameworks

Acyclic Unique Sink Orientations (AUSO)
– an acyclic orientation of the graph of the considered polytope such that every nonempty face has exactly one sink (sink = all edges incoming)

– same as abstract objective functions
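As a sanity check of the definition, here is a brute-force sketch (my illustration, not from the talk) that tests whether an orientation of the d-cube graph, given by an oracle returning the outgoing directions at each vertex, is an AUSO: every nonempty face must have exactly one sink, and there must be no directed cycle.

```python
# Sketch: brute-force AUSO check for an orientation of the d-cube, given by an
# oracle outgoing(v) = set of coordinate directions whose edge leaves vertex v.
from itertools import combinations, product

def flip(v, i):
    return v[:i] + (1 - v[i],) + v[i + 1:]

def is_auso(d, outgoing):
    vertices = list(product((0, 1), repeat=d))
    # 1) every nonempty face has exactly one sink
    for r in range(d + 1):                       # r = number of fixed coordinates
        for fixed in combinations(range(d), r):
            free = [i for i in range(d) if i not in fixed]
            for vals in product((0, 1), repeat=r):
                face = [v for v in vertices
                        if all(v[i] == x for i, x in zip(fixed, vals))]
                sinks = [v for v in face if not (outgoing(v) & set(free))]
                if len(sinks) != 1:
                    return False
    # 2) acyclicity: depth-first search for a directed cycle
    state = {}                                   # 0 = on the DFS stack, 1 = finished
    def has_cycle(v):
        state[v] = 0
        for i in outgoing(v):
            w = flip(v, i)
            if state.get(w) == 0 or (w not in state and has_cycle(w)):
                return True
        state[v] = 1
        return False
    return not any(v not in state and has_cycle(v) for v in vertices)

# Example: the orientation induced by a generic linear function is an AOF,
# hence (per this slide) an AUSO.
d, weights = 3, (1.0, 2.0, 4.0)
obj = lambda v: sum(w * x for w, x in zip(weights, v))
outgoing = lambda v: {i for i in range(d) if obj(flip(v, i)) < obj(v)}
print(is_auso(d, outgoing))                      # True
```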

Page 11: Generalized Linear Programming

Abstract frameworks

LP-type problems [Sharir, Welzl]
– also called Generalized Linear Programs [Amenta]
– encompass many geometric optimization problems [MSW, Amenta, Halman, …]

• smallest enclosing ball of n points in R^d

• smallest enclosing ellipsoid of n points in R^d

• distance of two (convex) polyhedra in R^d

• …

– plus some non-geometric problems (games on graphs)

Page 12: Generalized Linear Programming

LP-type problems
• H a finite set of constraints
• (W, ≤) a linearly ordered set (such as the reals)
• w: 2^H → W a value function; intuitively, w(G) is the minimum value of a solution attainable under the constraints in G

• Axiom M (monotonicity): If F ⊆ G, then w(F) ≤ w(G).

• Axiom L (locality): If F ⊆ G and w(F) = w(G) = w(F ∪ {h}), then w(G) = w(G ∪ {h}).
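A toy illustration (my own example, not from the talk): take the constraints "x ≥ h" for h in a finite set H of reals and let w(G) be the smallest x satisfying all of G, i.e. w(G) = max(G), with w(∅) = −∞. The sketch below brute-forces Axioms M and L for this instance.

```python
# Sketch (illustration only): brute-force check of Axioms M and L for a toy
# LP-type problem: constraints "x >= h" for h in H, w(G) = min x satisfying G,
# i.e. w(G) = max(G), with w(empty set) = -infinity.
from itertools import chain, combinations

H = [3.0, 1.0, 4.0, 1.5, 9.0]

def w(G):
    return max(G) if G else float("-inf")

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

ok_M = all(w(F) <= w(G)                              # Axiom M (monotonicity)
           for G in subsets(H)
           for F in subsets(G))
ok_L = all(w(set(G) | {h}) == w(G)                   # Axiom L (locality)
           for G in subsets(H)
           for F in subsets(G)
           if w(F) == w(G)
           for h in H
           if w(set(F) | {h}) == w(F))
print(ok_M, ok_L)                                    # both True; the dimension here is 1
```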

Page 13: Generalized Linear Programming

Example: Smallest enclosing ball
• H a finite set of points in the plane
• w(G) = radius of the smallest disk containing G

[Figure: five points a, b, c, d, e and their smallest enclosing disk]

monotonicity is trivial

locality depends on uniqueness of the smallest enclosing ball!
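A brute-force sketch of w(G) for this example (my illustration; the coordinates of a–e are made up, since the figure gives none): the smallest enclosing disk is spanned either by two points (a diametral pair) or by three points on its boundary, so for a small set one can try all such candidate disks and keep the smallest one that contains G.

```python
# Sketch: w(G) = radius of the smallest disk containing G, by brute force over
# disks spanned by 2 points (diameter) or 3 points (circumcircle).
from itertools import combinations
from math import hypot, inf

def circumcenter(p, q, r):
    (ax, ay), (bx, by), (cx, cy) = p, q, r
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:                        # collinear points: no circumcircle
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

def w(G, eps=1e-9):
    """Radius of the smallest enclosing disk of the point set G."""
    G = list(G)
    if len(G) <= 1:
        return 0.0
    candidates = []
    for p, q in combinations(G, 2):           # disks with a diametral pair
        candidates.append((((p[0] + q[0]) / 2, (p[1] + q[1]) / 2),
                           hypot(p[0] - q[0], p[1] - q[1]) / 2))
    for p, q, r in combinations(G, 3):        # circumcircles of triples
        c = circumcenter(p, q, r)
        if c is not None:
            candidates.append((c, hypot(c[0] - p[0], c[1] - p[1])))
    best = inf
    for (cx, cy), rad in candidates:
        if all(hypot(x - cx, y - cy) <= rad + eps for x, y in G):
            best = min(best, rad)
    return best

pts = {"a": (0, 0), "b": (4, 0), "c": (2, 3), "d": (1, 1), "e": (3, 1)}   # made-up coordinates
print(w(pts.values()))                                    # w(G) for all five points
print(w([pts["a"], pts["b"]]) <= w(pts.values()))         # monotonicity on a subset: True
```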

Page 14: Generalized Linear Programming

LP-type problems: more notions

• basis for G: an inclusion-minimal B ⊆ G with w(B) = w(G)

• dimension d of (H,w): maximum cardinality of a basis

• computational primitives (B a given basis)

– violation test: is w(B ∪ {h}) > w(B)?

– pivoting: compute a basis of B ∪ {h}
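To show how these two primitives drive a solver, here is a sketch (my own, following one common form of the Sharir–Welzl / MSW recursion; not code from the talk) for a one-dimensional toy LP-type problem: the smallest interval enclosing a set of reals, whose bases have at most two elements. Only the violation test and the basis computation touch the "geometry".

```python
# Sketch of an MSW/Sharir-Welzl-style recursion for an LP-type problem,
# instantiated on a toy example: the smallest interval enclosing a set of reals
# (bases have at most 2 elements). Illustration only.
import random

def w(G):
    """Value function: length of the smallest interval containing G."""
    return max(G) - min(G) if G else float("-inf")

def violates(h, B):
    """Violation test: does adding h increase the value of the basis B?"""
    return w(B | {h}) > w(B)

def basis_of(S):
    """Pivoting step: a basis (inclusion-minimal subset of equal value) of a small set S."""
    return frozenset() if not S else frozenset({min(S), max(S)})

def solve(G, C=frozenset()):
    """Return a basis of G, given a basis C of some subset of G."""
    G = frozenset(G)
    if G == C:
        return C
    h = random.choice(list(G - C))
    B = solve(G - {h}, C)              # solve without a random constraint h
    if violates(h, B):                 # h violates the result: pivot and retry
        return solve(G, basis_of(B | {h}))
    return B

pts = [3.0, 1.0, 4.0, 1.5, 9.0, 2.6]
B = solve(pts)
print(sorted(B), w(B), w(frozenset(pts)))   # basis {1.0, 9.0}; the two values agree
```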

Page 15: Generalized Linear Programming

Abstract frameworks

Abstract Optimization Problems [Gärtner]
– only one parameter: the dimension d = |H| (no n)
– a linear ordering of 2^H

– primitive operation: Is G optimal among all sets containing F? If not, give a better G’

– nice randomized algorithm: exp(O(√d)) [Gärtner]
– allows a (rather) efficient implementation of the “primitives” in Kalai/MSW, e.g., for the smallest enclosing ball problem

Page 16: Generalized Linear Programming

Algorithms in the abstract frameworks

• several algorithms (Kalai/MSW = RANDOM FACET; Clarkson) work for AOFs, with the same analysis
– AUSO given by an oracle: returns the edge orientations at a given vertex
– yields an exp(O(√d))·n randomized algorithm
– analysis tight in this abstract setting [M.]

• for LP-type problems they work too (but…)
– O(n) algorithms for fixed d usually immediate
– but the primitives “depend on d” … may be hard
– sometimes Gärtner’s algorithm helps
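For concreteness, here is one common way RANDOM FACET is formulated for an AUSO of the d-cube accessed through such an oracle (my sketch, not the exact procedure analyzed in the talk): find the sink of a random facet through the current vertex; if it is not the sink of the whole face, the true sink must lie in the opposite facet, so cross over and recurse there. The example oracle below, induced by a linear function, is a made-up instance.

```python
# Sketch: the RANDOM FACET recursion on an AUSO of the d-cube given by an
# oracle outgoing(v) = outgoing coordinate directions at vertex v.
import random

def flip(v, i):
    return v[:i] + (1 - v[i],) + v[i + 1:]

def random_facet(v, free, outgoing):
    """Return the sink of the face containing v in which the coordinates `free` vary."""
    if not free:
        return v
    i = random.choice(sorted(free))
    w = random_facet(v, free - {i}, outgoing)        # sink of the facet x_i = v_i
    if i in outgoing(w):                             # w is not the sink of the whole face:
        return random_facet(flip(w, i), free - {i}, outgoing)   # the sink lies in the opposite facet
    return w

# Made-up example oracle: orientation induced by a linear function with positive weights.
d = 6
def outgoing(v):
    return {i for i in range(d) if v[i] == 1}        # edge i leaves v iff v_i = 1

start = tuple(random.randint(0, 1) for _ in range(d))
print(random_facet(start, set(range(d)), outgoing))  # the all-zeros vertex, the unique sink
```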

Page 17: Generalized Linear Programming

Algorithms in the abstract frameworks

RANDOM EDGE

• the simplex algorithm that selects an improving edge uniformly at random

• for AUSO: random outgoing edge

• great expectations: perhaps always quadratic??? [Williamson Hoke 1988]
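A small simulation sketch of RANDOM EDGE on an AUSO given by an outgoing-edge oracle (my illustration; the oracle below, induced by a linear function, is a made-up example): at each vertex, follow a uniformly random outgoing edge until no outgoing edge remains.

```python
# Sketch: RANDOM EDGE on an AUSO of the d-cube, given an oracle outgoing(v).
import random

def random_edge(start, outgoing):
    v, steps = start, 0
    while True:
        out = sorted(outgoing(v))
        if not out:                       # no outgoing edge: v is the sink
            return v, steps
        i = random.choice(out)            # follow a uniformly random outgoing edge
        v = v[:i] + (1 - v[i],) + v[i + 1:]
        steps += 1

d = 10
def outgoing(v):                          # made-up orientation induced by a linear function
    return {i for i in range(d) if v[i] == 1}

start = tuple(random.randint(0, 1) for _ in range(d))
sink, steps = random_edge(start, outgoing)
print(sink, steps)
```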

Page 18: Generalized Linear Programming

RANDOM EDGE: expected running time

– on the d-dimensional simplex: Θ(log d) [Liebling]

– on d-dimensional polytopes with d+2 facets: Θ(log^2 d) [Gärtner et al. 2001]

– on the d-dimensional Klee-Minty cube:
• O(d^2) Williamson Hoke (1988)
• Ω(d^2/log d) Gärtner, Henk, Ziegler (1995)
• Θ(d^2) Balogh, Pemantle (2004)

Page 19: Generalized Linear Programming

RANDOM EDGE can be (mildly) exponential

There exists an AUSO of the d-dimensional cube such that RANDOM EDGE, started at a random vertex, makes at least exp(c·d^{1/3}) steps before reaching the sink, with probability at least 1 − exp(−c·d^{1/3}).

[M., Szabó, FOCS 2004]

Page 20: Generalized Linear Programming

The Klee-Minty cube

[Figure: KM_m is built from a copy of KM_{m−1} and a reversed copy of KM_{m−1}]
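Assuming the recursive description suggested by the figure (one facet carries KM_{m−1}, the opposite facet a reversed copy of KM_{m−1}, and the connecting edges in direction m all point toward the first facet), the orientation can be written down explicitly; this is my sketch of that convention, not code from the talk.

```python
# Sketch of a recursive Klee-Minty orientation of the m-cube (assumed convention:
# the facet x_m = 0 carries KM_{m-1}, the facet x_m = 1 a reversed copy, and the
# direction-m edges point from x_m = 1 down to x_m = 0). Illustration only.
def km_outgoing(v):
    """Outgoing coordinate directions at vertex v of KM_m, m = len(v)."""
    m = len(v)
    out = set()
    for i in range(m):
        o = (v[i] == 1)                    # in KM_{i+1}, the direction-i edge leaves v iff v_i = 1
        for j in range(i + 1, m):          # each enclosing level reverses the orientation
            if v[j] == 1:                  # when the vertex lies in the "reversed" facet
                o = not o
        if o:
            out.add(i)
    return out

m = 4
vertices = [tuple((x >> i) & 1 for i in range(m)) for x in range(2 ** m)]
print([v for v in vertices if not km_outgoing(v)])   # only the all-zeros vertex is a sink
```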

Page 21: Generalized Linear Programming

A blowup construction

Page 22: Generalized Linear Programming

Hypersink reorientation

Page 23: Generalized Linear Programming

A simpler construction

Let A be a d-dimensional cube (AUSO) on which RANDOM EDGE is slow (constructed recursively)

– take the blowup of A with random copies of KM_m whose sinks lie in the same copy of A, with m = √d

– reorient the hypersink by placing a random copy of A there

– thus, a step from dimension d to d + √d

Page 24: Generalized Linear Programming

A simpler construction

[Figure: the blowup built from copies of A, with a random copy of A placed in the hypersink]

Page 25: Generalized Linear Programming

A typical RANDOM EDGE move

• Move in the frame:
– a RANDOM EDGE move in KM_m
– stay put in A

• Move within a hypervertex:
– a RANDOM EDGE move in A
– move to a random vertex of KM_m on the same level

[Figure: the frame behaves like a random walk with reshuffles on KM_m; inside the hypervertices, RANDOM EDGE runs on A]

Page 26: Generalized Linear Programming

Walk with reshuffles on KM_m

• Start at a random vertex v(0) of KM_m

• v(i) is chosen as follows:
– with probability p_{i,step}, make a step of RANDOM EDGE from v(i−1);
– with probability p_{i,resh}, randomly permute (reshuffle) the coordinates of v(i−1) to obtain v(i);
– with probability 1 − p_{i,step} − p_{i,resh}, v(i) = v(i−1).
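A direct simulation sketch of this walk (my illustration: the KM orientation below and the constant probabilities p_step = p_resh = 0.4 are assumptions for demonstration, since in the construction the p_{i,step}, p_{i,resh} come from the blowup).

```python
# Sketch: simulate the random walk with reshuffles on KM_m.
import random

def km_outgoing(v):
    m = len(v)
    return {i for i in range(m) if sum(v[i:]) % 2 == 1}    # assumed KM orientation

def walk_with_reshuffles(m, p_step=0.4, p_resh=0.4, max_steps=10**6):
    v = tuple(random.randint(0, 1) for _ in range(m))       # random starting vertex
    for t in range(max_steps):
        out = sorted(km_outgoing(v))
        if not out:                                          # reached the sink of KM_m
            return t
        r = random.random()
        if r < p_step:                                       # a RANDOM EDGE step
            i = random.choice(out)
            v = v[:i] + (1 - v[i],) + v[i + 1:]
        elif r < p_step + p_resh:                            # reshuffle the coordinates
            v = tuple(random.sample(v, len(v)))
        # otherwise: stay put
    return max_steps

print(walk_with_reshuffles(m=8))
```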

Page 27: Generalized Linear Programming

Walk with reshuffles on KM_m is slow

Proposition. Suppose that, at every step i, the reshuffle probability p_{i,resh} is at least a constant fraction of max(p_{i,resh}, p_{i,step}). Then, with probability at least 1 − e^{−αm}, the random walk with reshuffles makes at least e^{βm} steps (α and β are constants).

Page 28: Generalized Linear Programming

Reaching the hypersink

• Either we reach the hypersink by first reaching the sink of a copy of A and then performing RANDOM EDGE on KM_m. This takes at least T(d) time.

• Or we reach the hypersink without entering the sink of any copy of A. That is, the random walk with reshuffles reaches the sink of KM_m. This takes at least exp(m)·T(d) time.

Page 29: Generalized Linear Programming

The recursion

• RANDOM EDGE arrives at the hypersink at a random vertex. Then it needs T(d) more steps.

So in passing from dimension d to d + √d, the expected running time of RANDOM EDGE doubles.

• Iterating √d times gives T(2d) ≥ 2^{√d}·T(d).
• In order to guarantee that the reshuffles are frequent enough, we need a more complicated construction, and that is why we are only able to prove a running time of exp(c·d^{1/3}).
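Rough bookkeeping for the idealized recursion in the bullet above (ignoring the reshuffle-frequency issue, so this is only a back-of-the-envelope check): unwinding T(2d) ≥ 2^{√d}·T(d) shows why the recursion alone would already give a bound of order exp(Ω(√d)).

```latex
% Unwinding the idealized recursion T(2d) >= 2^{sqrt(d)} T(d):
\[
  T(d) \;\ge\; 2^{\sqrt{d/2}}\,T(d/2)
       \;\ge\; 2^{\sqrt{d/2}+\sqrt{d/4}}\,T(d/4)
       \;\ge\; \cdots
       \;\ge\; 2^{\sqrt{d/2}+\sqrt{d/4}+\cdots}\,T(O(1))
       \;=\; 2^{\Omega(\sqrt{d})},
\]
% since the exponent is a geometric-type series summing to Theta(sqrt(d)).
% The frequency constraint on the reshuffles is what cuts this down to exp(c*d^{1/3}).
```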

Page 30: Generalized Linear Programming

Open questions

• Obtain any reasonable upper bound on the running time of RANDOM EDGE

• Can one modify the construction such that the cube is realizable? (Probably not …)

• Or at least it satisfies the Holt-Klee condition?

• Or at least each three-dimensional subcube satisfies the Holt-Klee condition?

Page 31: Generalized Linear Programming

More open questions

• Find an algorithm for AOFs on the d-cube better than exp(√d)

• The model of unique sink orientations of cubes (possibly with cycles) includes LP on an arbitrary polytope.

Find a subexponential algorithm!

Page 32: Generalized Linear Programming

THE END