A Unified Continuous Greedy Algorithm for Submodular Maximization
Moran Feldman, Roy Schwartz, Joseph (Seffi) Naor
Technion – Israel Institute of Technology
Set Functions
Definition
• Given a ground set E, a set function f : 2^E → ℝ assigns a number to every subset of the ground set.
Properties
• Normalization: f(∅) = 0.
• Monotonicity: for every two sets A ⊆ B ⊆ E, f(A) ≤ f(B).
• Submodularity: for all sets A, B ⊆ E, f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B).
The Importance of Submodularity
Alternative (more intuitive) Definition
• A function f is submodular if for sets A ⊆ B ⊆ E and e ∉ B:
  f(A ∪ {e}) − f(A) ≥ f(B ∪ {e}) − f(B).
• The "economy of scale" feeling of this definition made submodular functions common in economics and game theory.
Submodular Functions in Combinatorics
• Submodular functions appear frequently in combinatorial settings. Here are two simple examples:
  Ground Set              Submodular Function
  Nodes of a graph        The number of edges leaving a set of nodes.
  A collection of sets    The number of elements in the union of a sub-collection.
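The diminishing-returns definition above can be verified exhaustively on a tiny instance of the first example, the graph-cut function. A minimal sketch, using a hypothetical 4-node cycle graph (all names are illustrative):

```python
from itertools import combinations

# Hypothetical example graph: the edges of a 4-cycle.
EDGES = [(0, 1), (1, 2), (2, 3), (3, 0)]
GROUND_SET = {0, 1, 2, 3}

def cut(S):
    """Number of edges leaving the node set S (a submodular, non-monotone function)."""
    return sum(1 for u, v in EDGES if (u in S) != (v in S))

def is_submodular(f, ground):
    """Brute-force check of f(A+e) - f(A) >= f(B+e) - f(B) for all A ⊆ B and e ∉ B."""
    subsets = [set(c) for r in range(len(ground) + 1)
               for c in combinations(sorted(ground), r)]
    for B in subsets:
        for A in subsets:
            if not A <= B:
                continue
            for e in ground - B:
                if f(A | {e}) - f(A) < f(B | {e}) - f(B):
                    return False
    return True

print(is_submodular(cut, GROUND_SET))  # the cut function passes the check
```

The exhaustive check is exponential in |E|, so it is only a sanity check for small ground sets, not a practical test.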
Polytope Constraints
• We abuse notation and identify a set S with its characteristic vector in {0, 1}^E.
• Using this notation, we can define IP-like problems:
  max f(x)  s.t.  Ax ≤ b,  x ∈ {0, 1}^E
• More generally, maximizing a submodular function subject to a polytope P constraint is the problem:
  max f(x)  s.t.  x ∈ P,  x ∈ {0, 1}^E
• Difficulty: this generalizes integer programming, so it is unlikely to have a reasonable approximation.
Relaxation
• Replace the constraint x ∈ {0, 1}^E with x ∈ [0, 1]^E.
• Use the multilinear extension F (a.k.a. the extension by expectation) [Calinescu et al. 07] as the objective.
  – Given a vector x, let R(x) denote a random set containing every element e ∈ E with probability x_e, independently.
  – F(x) = E[f(R(x))].
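The definition F(x) = E[f(R(x))] translates directly into a sampling estimator; a minimal sketch (the function names and the sample count are illustrative):

```python
import random

def estimate_F(f, x, samples=20000, seed=0):
    """Estimate the multilinear extension F(x) = E[f(R(x))], where R(x)
    contains each element e independently with probability x[e]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        R = {e for e, p in x.items() if rng.random() < p}
        total += f(R)
    return total / samples

# Sanity check on a modular function: for f(S) = |S|, F(x) is exactly sum(x).
x = {'a': 0.5, 'b': 0.25, 'c': 1.0}
print(estimate_F(len, x))  # ≈ 0.5 + 0.25 + 1.0 = 1.75
```

The estimator's error shrinks as O(1/√samples); this is the sampling mentioned in the remark on the continuous greedy slide below.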
The Problem
• Approximating the relaxed program.
Motivation
• For many polytopes, a fractional solution can be rounded without losing too much in the objective.
  – Matroid polytopes – no loss [Calinescu et al. 07].
  – A constant number of knapsacks – a (1 – ε) loss [Kulik et al. 09].
  – Unsplittable flow in trees – an O(1) loss [Chekuri et al. 11].
The Continuous Greedy Algorithm
The Algorithm [Vondrak 08]
• Let δ > 0 be a small number.
1. Initialize: y(0) ← 0̄ and t ← 0.
2. While t < 1 do:
3.   For every e ∈ E, let w_e = F(y(t) ∨ 1_e) – F(y(t)).
4.   Find a solution x in P ∩ [0, 1]^E maximizing w ∙ x.
5.   y(t + δ) ← y(t) + δ ∙ x.
6.   Set t ← t + δ.
7. Return y(t).
Remark
If w_e cannot be evaluated directly, it can be approximated arbitrarily well via sampling.
The Continuous Greedy Algorithm - Demonstration
Observations
• The algorithm is somewhat like gradient ascent.
• The algorithm moves only in positive directions because the extension F is guaranteed to be concave in such directions.
[Figure: the trajectory y(0), y(0.01), y(0.02), y(0.03), y(0.04), … built from the successive directions x1, x2, x3, x4.]
The Continuous Greedy Algorithm - Analysis
Theorem
• Assuming
  – f is a normalized monotone submodular function,
  – P is a solvable polytope,
• the continuous greedy algorithm gives a (1 – 1/e – o(n^(-1)))-approximation.
There are two important lemmata in the proof of the theorem.
Lemma 1
There is a good direction, i.e., w ∙ x ≥ f(OPT) – F(y(t)).
Proof Idea
OPT itself is a feasible direction, and its value is at least f(OPT) – F(y(t)).
The Continuous Greedy Algorithm – Analysis (cont.)
Lemma 2
The improvement is related to w ∙ x, i.e., F(y(t + δ)) ≥ F(y(t)) + δ ∙ w ∙ x.
Proof
• Since δ is small, F(y(t + δ)) − F(y(t)) ≈ δ ∙ Σ_e ∂_e F(y(t)) ∙ x_e.
• We need to relate ∂_e F(y(t)) and w_e:
  – F is multilinear, hence w_e = F(y(t) ∨ 1_e) – F(y(t)) = (1 – y_e(t)) ∙ [F(y(t) ∨ 1_e) – F(y(t) ∧ 1_{E∖e})] = (1 – y_e(t)) ∙ ∂_e F(y(t)).
  – F is monotone, hence ∂_e F(y(t)) is non-negative, and w_e ≤ ∂_e F(y(t)).
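The multilinearity identity w_e = (1 − y_e(t)) ∙ ∂_e F(y(t)) used in the proof can be checked numerically on a small instance; a sketch with an illustrative two-element coverage function (F is evaluated exactly by subset enumeration):

```python
from itertools import combinations

def F(f, x, ground):
    """Exact multilinear extension F(x) = E[f(R(x))] by subset enumeration."""
    total = 0.0
    for r in range(len(ground) + 1):
        for S in combinations(ground, r):
            S = set(S)
            p = 1.0
            for e in ground:
                p *= x[e] if e in S else 1.0 - x[e]
            total += p * f(S)
    return total

# Illustrative submodular function on {a, b}: coverage by two overlapping sets.
SETS = {'a': {1, 2}, 'b': {2, 3}}
f = lambda S: len(set().union(*[SETS[e] for e in S])) if S else 0
ground = ['a', 'b']
y = {'a': 0.3, 'b': 0.6}

for e in ground:
    w_e = F(f, {**y, e: 1.0}, ground) - F(f, y, ground)                 # w_e
    dF_e = F(f, {**y, e: 1.0}, ground) - F(f, {**y, e: 0.0}, ground)    # ∂_e F(y)
    assert abs(w_e - (1 - y[e]) * dF_e) < 1e-12  # w_e = (1 - y_e) * ∂_e F(y)
```

The identity holds because F is linear in each coordinate y_e: F(y) = y_e ∙ F(y ∨ 1_e) + (1 − y_e) ∙ F(y ∧ 1_{E∖e}).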
Insight
• Recall the equality w_e = (1 – y_e(t)) ∙ ∂_e F(y(t)).
• We used the monotonicity of f to get: w_e ≤ ∂_e F(y(t)).
• This is the most significant obstacle to extending the last algorithm to non-monotone functions.
• Idea:
  – Currently, the improvement in each step is proportional to: Σ_e ∂_e F(y(t)) ∙ x_e.
  – We want it to be proportional to: Σ_e w_e ∙ x_e = Σ_e (1 – y_e(t)) ∙ ∂_e F(y(t)) ∙ x_e.
  – This can be done by increasing y_e(t) by only δ ∙ x_e ∙ (1 – y_e(t)), instead of by δ ∙ x_e.
The Measured Continuous Greedy Algorithm
The Algorithm
• Let δ > 0 be a small number.
1. Initialize: y(0) ← 0̄ and t ← 0.
2. While t < T do:
3.   For every e ∈ E, let w_e = F(y(t) ∨ 1_e) – F(y(t)).
4.   Find a solution x in P ∩ [0, 1]^E maximizing w ∙ x.
5.   For every e ∈ E, y_e(t + δ) ← y_e(t) + δ ∙ x_e ∙ (1 – y_e(t)).
6.   Set t ← t + δ.
7. Return y(t).
Remark
• The algorithm never leaves the box [0, 1]^E, so it can be used with arbitrary values of T.
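Relative to the earlier loop, only the update in step 5 changes. A sketch of that step in isolation (`y`, `x`, and `delta` stand for the quantities in the pseudocode above; the numeric example is illustrative):

```python
def measured_update(y, x, delta):
    """Step 5 of the measured algorithm: each coordinate grows by
    delta * x_e * (1 - y_e), so y_e approaches 1 but never exceeds it --
    this is why the algorithm stays inside the box [0, 1]^E for any T."""
    return {e: y[e] + delta * x[e] * (1 - y[e]) for e in y}

# With x = all-ones and delta = 0.01, after t/delta steps
# y_e = 1 - (1 - delta)^(t/delta), which approaches 1 - e^(-t) as delta -> 0.
y = {'a': 0.0}
for _ in range(100):  # t running from 0 to 1
    y = measured_update(y, {'a': 1.0}, 0.01)
print(y['a'])  # ≈ 1 - 0.99^100 ≈ 0.634, close to 1 - 1/e
```

The 1 − e^(−t) behavior of this recurrence is exactly the e^(−t) factor that appears in Lemma 1 of the analysis below.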
The Measured Continuous Greedy Algorithm - Analysis
Theorem
• Assuming
  – f is a non-negative submodular function,
  – P is a solvable down-monotone polytope,
• the approximation ratio of the measured continuous greedy algorithm with T = 1 is 1/e – o(n^(-1)).
Remarks
• The solution is no longer a convex combination of points of P.
• For T ≤ 1, the output is in P since P is down-monotone.
The Measured Continuous Greedy Algorithm – Analysis (cont.)
Lemma 1
There is a good direction, i.e., w ∙ x ≥ e^(-t) ∙ f(OPT) – F(y(t)).
Proof Idea
Again, we show OPT itself is a good direction with at least that value.
Lemma 2
The improvement is related to w ∙ x, i.e., F(y(t + δ)) ≥ F(y(t)) + δ ∙ w ∙ x.
Proof
The insight removes the only place in the previous proof of this lemma that used the monotonicity of f.
Result for Monotone Functions
• For non-monotone functions, the approximation ratio is maximized at T = 1.
• For monotone f, we get an approximation ratio of 1 – e^(-T).
  – For T = 1, this is the same ratio as the previous algorithm.
  – The approximation ratio improves as T increases.
• In general, T > 1 might cause the algorithm to produce solutions outside the polytope.
• However, for some polytopes, somewhat larger values of T can be used.
The Submodular Welfare Problem
Instance
• A set P of n players, and a set Q of m items.
• A normalized monotone submodular utility function w_j : 2^Q → ℝ₊ for each player.
Objective
• Let Q_j ⊆ Q denote the set of items the jth player gets.
• The utility of the jth player is w_j(Q_j).
• Distribute the items among the players, maximizing the sum of utilities.
Approximation
• Can be represented as a problem of the previous form.
• The algorithm can be executed until time T = −n ∙ ln(1 − 1/n).
• The expected value of the solution is then at least a fraction 1 − (1 − 1/n)^n of the optimum.
Open Problem
• The measured continuous greedy algorithm provides a tight approximation for monotone functions [Vondrak 06].
• Is this also the case for non-monotone functions?
• Is the current approximation ratio of e^(-1) a natural barrier?