
Parallel Coordinate Optimization

Julie Nutini

MLRG - Spring Term

March 6th, 2018

1 / 38

Coordinate Descent in 2D

• Contours of a function F : ℝ² → ℝ.
• Goal: Find the minimizer of F.

[Figure: contour plot of F with coordinate descent iterates x^k, x^{k+1}, x^{k+2}, x^{k+3} converging toward the minimizer x*, each step moving along a single coordinate direction.]

2 / 38


Coordinate Descent

• Update a single coordinate at each iteration,

    x_i^{k+1} ← x_i^k − α_k ∇_i f(x^k)

• Easy to implement, low memory requirements, cheap iteration costs.

• Suitable for large-scale optimization (dimension n is large):
  • Certain smooth (unconstrained) problems.
  • Non-smooth problems with separable constraints/regularizers,
    e.g., ℓ1-regularization, bound constraints.

→ Faster than gradient descent if iterations are n times cheaper.

→ Adaptable to distributed settings.

→ For truly huge-scale problems, it is absolutely necessary to parallelize.

3 / 38
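
To make the update concrete, here is a minimal sketch (not from the slides) of serial randomized coordinate descent specialized to the quadratic loss f(x) = (1/2)||Ax − y||²; the uniform coordinate selection and the 1/L_i stepsizes are illustrative assumptions, and A is assumed to have no all-zero columns.

    import numpy as np

    def coordinate_descent(A, y, iters=1000, seed=0):
        """Serial randomized CD on f(x) = 0.5*||Ax - y||^2 (illustrative sketch)."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        x = np.zeros(n)
        r = A @ x - y                      # residual Ax - y, kept up to date cheaply
        L = (A ** 2).sum(axis=0)           # coordinate-wise Lipschitz constants ||A_:i||^2
        for _ in range(iters):
            i = rng.integers(n)            # uniform coordinate selection
            g_i = A[:, i] @ r              # i-th partial derivative of f
            d = -g_i / L[i]                # coordinate step with alpha_k = 1/L_i
            x[i] += d
            r += d * A[:, i]               # O(m) residual update instead of O(mn)
        return x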


Problem

• Consider the optimization problem

    min_{x ∈ ℝⁿ} F(x) := f(x) + g(x),

  where
  • f is the loss function – convex (smooth or nonsmooth)
  • g is the regularizer – convex (smooth or nonsmooth), separable

4 / 38


Regularizer Examples

    g(x) = ∑_{i=1}^n g_i(x_i),    x = (x_1, x_2, . . . , x_n)ᵀ

• No regularizer: g_i(x_i) ≡ 0

• Weighted L1-norm: g_i(x_i) = λ_i|x_i| (λ_i > 0)  ← e.g., LASSO

• Weighted L2-norm: g_i(x_i) = λ_i(x_i)² (λ_i > 0)

• Box constraints: g_i(x_i) = 0 if x_i ∈ X_i, +∞ otherwise  ← e.g., SVM dual

5 / 38
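
Separability of g is what keeps the per-coordinate work cheap: its proximal operator splits into n independent scalar problems. As an illustrative aside (not from the slides), the weighted L1 case reduces to coordinate-wise soft-thresholding; the function name and step argument below are my own.

    import numpy as np

    def prox_weighted_l1(v, lam, step=1.0):
        """Coordinate-wise prox of g(x) = sum_i lam_i * |x_i|, evaluated at the point v."""
        t = step * np.asarray(lam)
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)   # soft-thresholding, one coordinate at a time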


Loss Examples

Name                f(x)                                                     References

Quadratic loss      (1/2)||Ax − y||²₂ = (1/2) ∑_{j=1}^m (A_{j:}x − y_j)²     Bradley et al., 2011
Logistic loss       ∑_{j=1}^m log(1 + exp(−y_j A_{j:}x))                     Richtarik & Takac, 2011b, 2013a
Square hinge loss   (1/2) ∑_{j=1}^m (max{0, 1 − y_j A_{j:}x})²               Takac et al., 2013
L-infinity          ||Ax − y||_∞ = max_{1≤j≤m} |A_{j:}x − y_j|
L1-regression       ||Ax − y||₁ = ∑_{j=1}^m |A_{j:}x − y_j|                  Fercoq & Richtarik, 2013
Exponential loss    log((1/m) ∑_{j=1}^m exp(y_j A_{j:}x))

6 / 38


Parallel Coordinate Descent

• Embarrassingly parallel if objective is separable.

→ Speedup equal to number of processors, τ .

• For partially-separable objectives:
  • Assign the ith processor the task of updating the ith component of x.
  • Each processor communicates its x_i⁺ to the processors that require it.
  • The ith processor needs the current value of x_j only if ∇_i f or ∇²_{ii} f depends on x_j.

→ Parallel implementations suitable when dependency graph is sparse.

7 / 38


Dependency Graph

• Given a fixed serial ordering of updates, some of them (highlighted in red on the slide) can be done in parallel.

→ (i, j) is an arc of the dependency graph iff update function h_j depends on x_i

Update order: {1, 2, 3, 4}

    x_1^{k+1} = h_1(x_1^k, x_3^k)
    x_2^{k+1} = h_2(x_1^{k+1}, x_2^k)
    x_3^{k+1} = h_3(x_2^{k+1}, x_3^k, x_4^k)
    x_4^{k+1} = h_4(x_2^{k+1}, x_4^k)

Better update order: {1, 3, 4, 2}

    x_1^{k+1} = h_1(x_1^k, x_3^k)
    x_3^{k+1} = h_3(x_2^k, x_3^k, x_4^k)
    x_4^{k+1} = h_4(x_2^k, x_4^k)
    x_2^{k+1} = h_2(x_1^{k+1}, x_2^k)

8 / 38
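
As an illustration of why the ordering matters (my own sketch, not from the slides), the following greedy batching groups a fixed serial order into sets of updates that can run in parallel without changing the serial result; deps encodes which coordinates each update function h_j on the slide reads.

    def parallel_groups(order, deps):
        """Greedily batch a serial update order: update j joins the current batch only
        if it does not read any coordinate already updated in that batch."""
        groups, current, written = [], [], set()
        for j in order:
            if deps[j] & written:            # j reads a value this batch changes
                groups.append(current)
                current, written = [], set()
            current.append(j)
            written.add(j)
        groups.append(current)
        return groups

    # Dependency graph from the slide: h_j reads the coordinates in deps[j].
    deps = {1: {1, 3}, 2: {1, 2}, 3: {2, 3, 4}, 4: {2, 4}}
    print(parallel_groups([1, 2, 3, 4], deps))   # [[1], [2], [3, 4]]
    print(parallel_groups([1, 3, 4, 2], deps))   # [[1, 3, 4], [2]]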


Parallel Coordinate Descent

• Synchronous parallelism:
  • Divide iterate updates between processors, followed by a synchronization step.
  • Very slow for large-scale problems (wait for the slowest processor).

• Asynchronous parallelism:
  • Each processor has access to x, chooses an index i, loads the components of x that
    are needed to compute the gradient component ∇_i f(x), then updates the ith component x_i.
  • No attempt to coordinate or synchronize with other processors.
  • Always using 'stale' x: convergence results restrict how stale.

→ Many numerical results actually use an asynchronous implementation, ignoring the
  synchronization step required by theory.

9 / 38
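
As a rough illustration only (not the implementations referenced above), an asynchronous loop can be sketched as below: each worker reads whatever possibly stale shared x it sees and writes its update with no locking. In CPython the GIL prevents any real speedup, and the full matrix-vector product per update is deliberately naive; the sketch only shows the access pattern.

    import threading
    import numpy as np

    def async_cd(A, y, n_threads=4, updates_per_thread=10_000, seed=0):
        """Toy asynchronous coordinate descent on f(x) = 0.5*||Ax - y||^2."""
        m, n = A.shape
        x = np.zeros(n)                          # shared iterate, no locks
        L = (A ** 2).sum(axis=0)                 # coordinate Lipschitz constants

        def worker(worker_id):
            rng = np.random.default_rng(seed + worker_id)
            for _ in range(updates_per_thread):
                i = rng.integers(n)
                g_i = A[:, i] @ (A @ x - y)      # gradient component from a possibly stale x
                x[i] -= g_i / L[i]               # unsynchronized write of the i-th component

        threads = [threading.Thread(target=worker, args=(t,)) for t in range(n_threads)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return x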


Totally Asynchronous Algorithm

Definition: An algorithm is totally asynchronous if

1. each index i ∈ {1, 2, . . . , n} of x is updated at infinitely many iterations, and

2. if ν_j^k denotes the iteration at which component j of the vector x^k was last
   updated, then ν_j^k → ∞ as k → ∞ for all j = 1, 2, . . . , n.

→ No condition on how stale x_j is, just requires that it will be updated eventually.

Theorem (Bertsekas and Tsitsiklis, 1989)

Suppose a mapping T(x) := x − α∇f(x) for some α > 0 satisfies

    ||T(x) − x*||_∞ ≤ η ||x − x*||_∞, for some η ∈ (0, 1).

Then if we set α_k ≡ α in Algorithm 7, the sequence {x^k} converges to x*.

10 / 38


Partly Asynchronous Algorithm

• No convergence rate for totally asynchronous algorithms, given the weak assumptions on x^k.

• The ℓ∞ contraction assumption on the mapping T is quite strong.

• Liu et al. (2015) assume no component of x^k is older than a nonnegative integer τ
  (maximum delay) at any k.
  • τ is related to the number of processors (an indicator of potential parallelism).
  • If all processors complete updates at approximately the same rate, the maximum delay
    is approximately c times the number of processors, for some positive integer c.

→ Linear convergence if "essential strong convexity" holds.
→ Sublinear convergence for general convex functions.
→ Near-linear speedup if the number of processors is:
  • O(n^{1/2}) in unconstrained optimization.
  • O(n^{1/4}) in the separable-constrained case.

11 / 38


Question

Under what structural assumptions does

parallelization lead to acceleration?

12 / 38


Convergence of Randomized Coordinate Descent

• In ℝⁿ, randomized coordinate descent with uniform selection requires

    O(n × ξ(ε)) iterations

  • Strongly convex F : ξ(ε) = log(1/ε)
  • Smooth, or simple nonsmooth F : ξ(ε) = 1/ε
  • 'Difficult' nonsmooth F : ξ(ε) = 1/ε²

→ When dealing with big data, we only care about n.

13 / 38


The Parallelization Dream

Serial (1 coordinate per iteration):     O(n × ξ(ε)) iterations
Parallel (τ coordinates per iteration):  O((n/τ) × ξ(ε)) iterations

• What do we actually get?

    O((nβ/τ) × ξ(ε)) iterations

• Want β = O(1).

→ Depends on the extent to which we can add up individual updates.
→ Properties of F, selection of coordinates at each iteration.

14 / 38

Naive Parallelization

• Consider the function f(x1, x2) = (x1 + x2 − 1)².
• Just compute the update for more/all coordinates and then add up the updates.

[Figure: contours of f, with the level set f(x1, x2) = 0 along the line x1 + x2 = 1 and f(1, 1) = f(0, 0) = 1.
 From x^k, each individual coordinate update x_1^{k+1}, x_2^{k+1} reaches the valley along its own axis,
 but their sum x^{k+1} overshoots to the other side, and the next summed step x^{k+2} jumps back. NO GOOD!]

15 / 38

Naive Parallelization

• Consider the function f(x1, x2) = (x1 + x2 − 1)².
• What about averaging the updates?

[Figure: from x^k, averaging the two individual coordinate updates x_1^{k+1}, x_2^{k+1} lands the combined
 iterate x^{k+1} directly on the minimizer x*. SOLVED!]

16 / 38

Naive Parallelization

• Consider the function f(x1, x2) = (x1 − 1)² + (x2 − 1)².
• What about averaging the updates? COULD BE TOO CONSERVATIVE...

[Figure: for this separable function each coordinate update x_1^{k+1}, x_2^{k+1} jumps straight to the
 minimizer along its own axis, yet the averaged iterates x^{k+1}, x^{k+2}, . . . only move partway at each
 step, creeping toward the solution. AND SO ON...]

17 / 38


Averaging may be too conservative...

• Consider the function f(x) = (x1 − 1)² + (x2 − 1)² + · · · + (xn − 1)².

• Evaluated at x⁰ = 0, f(x⁰) = n.

• We want

    f(x^k) = n(1 − 1/n)^{2k} ≤ ε.

• With averaging, we get

    k ≥ (n/2) log(n/ε)

→ Factor of n is bad!

• We wanted O((nβ/τ) × ξ(ε)).

18 / 38
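
As a quick check (not spelled out on the slide): n(1 − 1/n)^{2k} ≤ ε is equivalent to 2k log(1/(1 − 1/n)) ≥ log(n/ε), and since log(1/(1 − 1/n)) ≈ 1/n for large n, this gives k ⪆ (n/2) log(n/ε).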


What to do?

• We can write the coordinate descent update as follows,

    x⁺ ← x + (1/β) ∑_{i=1}^n h_i e_i,

  where
  • h_i is the update to coordinate i
  • e_i is the ith unit coordinate vector

• Averaging: β = n

• Summation: β = 1

→ When can we safely use β ≈ 1?

19 / 38
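
A small numeric sketch (my own, reusing the deck's toy function) of the β-scaled combination above: with exact per-coordinate steps, β = 1 (summation) keeps overshooting on the coupled function, while β = n = 2 (averaging) solves it immediately.

    import numpy as np

    def aggregate_step(grad, L, beta):
        """Compute h_i = -grad_i / L_i for every coordinate, combined as (1/beta) * sum_i h_i e_i."""
        return -(grad / L) / beta

    # Coupled toy function f(x1, x2) = (x1 + x2 - 1)^2, written as ||Ax - y||^2.
    A = np.array([[1.0, 1.0]])
    y = np.array([1.0])
    L = 2 * (A ** 2).sum(axis=0)                 # per-coordinate curvature (here 2 for both)
    for beta in (1.0, 2.0):                      # summation vs. averaging (n = 2)
        z = np.zeros(2)
        for _ in range(5):
            grad = 2 * A.T @ (A @ z - y)
            z += aggregate_step(grad, L, beta)
        print(beta, z, ((A @ z - y) ** 2).item())
    # beta = 1 oscillates between (0, 0) and (1, 1); beta = 2 lands on the line x1 + x2 = 1.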


When can we use small β?

• Three models for f with small β:

  1. Smooth partially separable f [Richtarik & Takac, 2011b]

       f(x + t e_i) ≤ f(x) + ∇f(x)ᵀ(t e_i) + (L_i/2) t²
       f(x) = ∑_{J∈𝒥} f_J(x),  f_J depends on x_i for i ∈ J only
       ω := max_{J∈𝒥} |J|

  2. Nonsmooth max-type f [Fercoq & Richtarik, 2013]

       f(x) = max_{z∈Q} {zᵀAx − g(z)},  ω := max_{1≤j≤m} |{i : A_{ji} ≠ 0}|

  3. f with 'bounded Hessian' [Bradley et al., 2011, Richtarik & Takac, 2013a]

       f(x + h) ≤ f(x) + ∇f(x)ᵀh + (1/2) hᵀAᵀAh
       L = diag(AᵀA),  σ := λ_max(L^{-1/2} AᵀA L^{-1/2})

→ ω is the degree of partial separability, σ is the spectral radius.

20 / 38
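
Both constants can be computed directly for a loss built from a data matrix A. A small numeric sketch (my own; it assumes A has no all-zero columns): ω is the maximum number of nonzeros in a row of A, and σ is the largest eigenvalue of L^{-1/2} AᵀA L^{-1/2}.

    import numpy as np

    def separability_constants(A):
        """omega = max nonzeros per row of A; sigma = lambda_max(L^{-1/2} A^T A L^{-1/2}), L = diag(A^T A)."""
        omega = int((A != 0).sum(axis=1).max())
        L = (A ** 2).sum(axis=0)                         # diag(A^T A); assumes no zero columns
        M = A / np.sqrt(L)                               # columns scaled by L^{-1/2}
        sigma = float(np.linalg.eigvalsh(M.T @ M).max())
        return omega, sigma

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20)) * (rng.random((50, 20)) < 0.2)   # sparse-ish test matrix
    print(separability_constants(A))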


Parallel Coordinate Descent Method

• At iteration k, select a random set S_k.

• S_k is a realization of a random set-valued mapping (or sampling) S.

• The update h_i depends on F, x, and on the law describing S.

→ Continuously interpolates between serial coordinate descent and gradient descent.
  • Manipulates n and E[|S|].

21 / 38


ESO: Expected Separable Overapproximation

• We say that f admits a (β, w)-ESO with respect to (uniform) sampling S if for all x, h ∈ ℝⁿ

    (f, S) ∼ ESO(β, w)  ⟺  E[f(x + h_[S])] ≤ f(x) + (E[|S|]/n) (∇f(x)ᵀh + (β/2)||h||²_w)

  where
  • h_[S] = ∑_{i∈S} h_i e_i, and ||h||²_w := ∑_{i=1}^n w_i (h_i)²

• We note that ∇f(x)ᵀh + (β/2)||h||²_w is separable in h.

• Minimize with respect to h in parallel → yields the update

    x⁺ ← x − (1/β) ∑_{i∈S} (1/w_i) ∇_i f(x) e_i

• Compute updates for i ∈ S only.

→ Separable quadratic overapproximation of E[f] evaluated at the update.

22 / 38


Convergence Rate for Convex f

• If (f, S) ∼ ESO(β, w), then [Richtarik & Takac, 2011b]

    k ≥ (βn / E[|S|]) · (2R²_w(x⁰, x*) / ε) · log((F(x⁰) − F*) / (ερ)),

  which implies that

    P(F(x^k) − F* ≤ ε) ≥ 1 − ρ.

• ε: error tolerance
• n: # of coordinates
• E[|S|]: average # of updated coordinates per iteration
• β: stepsize parameter

23 / 38


Convergence Rate for Strongly Convex f

• If (f, S) ∼ ESO(β, w), then [Richtarik & Takac, 2011b]

    k ≥ (n / E[|S|]) · ((β + μ_g(w)) / (μ_f(w) + μ_g(w))) · log((F(x⁰) − F*) / (ερ)),

  which implies that

    P(F(x^k) − F* ≤ ε) ≥ 1 − ρ.

• μ_f(w): strong convexity constant of the loss f
• μ_g(w): strong convexity constant of the regularizer g

→ If μ_g(w) is large, then the slowdown effect of β is eliminated.

24 / 38


What if problem is only partially separable?

• Uniform sampling: P(S = {i}) = 1/n

• τ-nice sampling: P(S = S') = 1/C(n, τ) if |S'| = τ, and 0 otherwise  ← for shared-memory systems

• At each iteration:
  → Choose a set of indices; each subset of τ coordinates is chosen with the same probability.
  → Assign each i to a dedicated processor.
  → Compute and apply the update.

→ All blocks are the same size τ (otherwise, the probability is 0).

25 / 38

What if problem is only partially separable?

• Doubly uniform (DU) sampling: P(S = S') = q_{|S'|} / C(n, |S'|)
  • Generates all sets of equal cardinality with equal probability.
  • Can model unreliable processors/machines.
  • Let q_τ = P(|S| = τ), with n = 5 coordinates.

[Figure: bar chart of the cardinality probabilities q_1, q_2, q_3, q_4, q_5 for the sampled set S.]

26 / 38


What if problem is only partially separable?

• Binomial sampling: consider independent, equally unreliable processors.
  • Each of τ processors is available with probability p_b, busy with probability 1 − p_b.
  • The # of available processors (number of blocks that can be updated in parallel) at
    each iteration is a binomial random variable with parameters τ and p_b.
  • Use explicit or implicit selection.

27 / 38
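
For reference, the sampling rules above can be sketched in a few lines (helper names are my own; the doubly uniform sampler takes an arbitrary cardinality distribution q with q[τ] = P(|S| = τ)).

    import numpy as np

    rng = np.random.default_rng(0)

    def tau_nice(n, tau):
        """tau-nice sampling: a uniformly random subset of exactly tau coordinates."""
        return rng.choice(n, size=tau, replace=False)

    def doubly_uniform(n, q):
        """Doubly uniform sampling: draw the cardinality tau ~ q, then a uniform subset of that size."""
        tau = rng.choice(len(q), p=q)
        return rng.choice(n, size=tau, replace=False)

    def binomial_sampling(n, tau, p_b):
        """Binomial sampling: each of tau processors is available with probability p_b;
        update as many coordinates as there are available processors."""
        avail = rng.binomial(tau, p_b)
        return rng.choice(n, size=avail, replace=False)

    print(sorted(tau_nice(10, 3)))
    print(sorted(doubly_uniform(5, [0, 0.2, 0.2, 0.2, 0.2, 0.2])))
    print(sorted(binomial_sampling(10, 4, 0.75)))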


ESO Theory

1. Smooth partially separable f [Richtarik & Takac, 2011b]

     f(x + t e_i) ≤ f(x) + ∇f(x)ᵀ(t e_i) + (L_i/2) t²
     f(x) = ∑_{J∈𝒥} f_J(x),  f_J depends on x_i for i ∈ J only
     ω := max_{J∈𝒥} |J|

Theorem: If S is doubly uniform, then

    E[f(x + h_[S])] ≤ f(x) + (E[|S|]/n) (∇f(x)ᵀh + (β/2)||h||²_w),

  where

    β = 1 + (ω − 1)(E[|S|²]/E[|S|] − 1) / (n − 1),   w_i = L_i,  i = 1, 2, . . . , n

→ β is small if ω is small (i.e., more separable)

28 / 38
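
Putting the pieces together, here is a minimal sketch (my own, not the authors' code) of a PCDM-style iteration on the smooth part of a LASSO objective with τ-nice sampling; for τ-nice sampling E[|S|²]/E[|S|] = τ, so the theorem's stepsize becomes β = 1 + (ω − 1)(τ − 1)/(n − 1) with w_i = L_i. The proximal/regularizer step is omitted, and A is assumed to have no all-zero columns.

    import numpy as np

    def pcdm_smooth(A, y, tau, iters=200, seed=0):
        """Parallel CD sketch on f(x) = 0.5*||Ax - y||^2 with tau-nice sampling and ESO stepsize."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        L = (A ** 2).sum(axis=0)                         # weights w_i = L_i
        omega = int((A != 0).sum(axis=1).max())          # degree of partial separability
        beta = 1 + (omega - 1) * (tau - 1) / (n - 1)     # ESO stepsize for tau-nice sampling
        x = np.zeros(n)
        r = A @ x - y
        for _ in range(iters):
            S = rng.choice(n, size=tau, replace=False)   # tau-nice sample
            g = A[:, S].T @ r                            # gradient components for i in S
            h = -g / (beta * L[S])                       # x_i <- x_i - (1/beta)(1/w_i) grad_i
            x[S] += h                                    # apply the tau updates "in parallel"
            r += A[:, S] @ h                             # keep the residual consistent
        return x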


ESO Theory

→ (Richtarik & Takac, 2013) “Parallel Coordinate Descent Methods for Big Data Optimization”.

29 / 38


Theoretical Speedup

    s(r) = τ / (1 + r(τ − 1)),    r = (ω − 1)/(n − 1)

• ω is often a constant that depends on n.

• r is a measure of 'density'.

[Figure: theoretical speedup s(r); the low-density region is annotated "MUCH OF BIG DATA IS HERE!"]

30 / 38
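
For a sense of scale (my own arithmetic, plugging in the ω = 35 and n = 10⁹ from the LASSO experiment later in the deck, and a hypothetical τ = 1000): r = 34/(10⁹ − 1) ≈ 3.4 × 10⁻⁸, so s(r) = 1000/(1 + 3.4 × 10⁻⁸ · 999) ≈ 999.97, i.e., essentially the ideal τ-fold speedup.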


Theory vs Practice

• τ = # of processors vs. theoretical (left) and experimental (right) speed-up, for n = 1000 coordinates.

[Figure: theoretical (left) and experimental (right) speed-up curves as functions of τ.]

31 / 38


Experiment

• 1 billion-by-2 billion LASSO problem [Richtarik & Takac, 2012]

    f(x) = (1/2)||Ax − b||²₂,   g(x) = ||x||₁

• A has 2 × 10⁹ rows and n = 10⁹ columns.

• ||x*||₀ = 10⁵

• ||A_{:,i}||₀ = 20 (column)

• max_j ||A_{j,:}||₀ = 35 (row) ⇒ ω = 35 (degree of partial separability of f).

• Used an approximation of τ-nice sampling S (independent sampling, τ << n).

• Asynchronous implementation.

→ Older information is used to update coordinates, but the observed slowdown is limited.

32 / 38


Experiment: Coordinate Updates

• For each τ, serial and parallel CD need approximately the same number of coordinate updates.
• The method identifies the active set.

33 / 38


Experiment: Iterations

• Doubling τ roughly translates to halving the number of iterations.

34 / 38


Experiment: Wall Time

• Doubling τ roughly translates to halving the wall time.

35 / 38


Citation  Algorithm  Paper

(Bradley et al., 2011)  Shotgun  Parallel coordinate descent for L1-regularized loss minimization. ICML, 2011 (arXiv:1105.5379)
(Richtarik & Takac, 2011)  SCD  Efficient serial and parallel coordinate descent methods for huge-scale truss topology design. Operations Research Proceedings, 27-32, 2012 (Opt. Online 08/2011)
(Richtarik & Takac, 2012)  PCDM  Parallel coordinate descent methods for big data optimization. Mathematical Programming, 2015 (arXiv:1212.0873)
(Fercoq & Richtarik, 2013)  SPCDM  Smooth minimization of nonsmooth functions with parallel coordinate descent method. 2013 (arXiv:1309.5885)
(Richtarik & Takac, 2013a)  HYDRA  Distributed coordinate descent method for learning with big data. 2013 (arXiv:1310.2059)
(Liu et al., 2013)  AsySCD  An asynchronous parallel stochastic coordinate descent algorithm. ICML 2014 (arXiv:1311.1873)
(Fercoq & Richtarik, 2013)  APPROX  Accelerated, parallel and proximal coordinate descent. 2013 (arXiv:1312.5799)
(Yang, 2013)  DisDCA  Trading computation for communication: distributed stochastic dual coordinate ascent. NIPS 2013
(Bian et al., 2013)  PCDN  Parallel coordinate descent Newton method for efficient ℓ1-regularized minimization. 2013 (arXiv:1306.4080)
(Liu & Wright, 2014)  AsySPCD  Asynchronous stochastic coordinate descent: parallelism and convergence properties. SIAM J. Optim. 25(1), 351-376, 2015 (arXiv:1403.3862)
(Mahajan et al., 2014)  DBCD  A distributed block coordinate descent method for training ℓ1-regularized linear classifiers. arXiv:1405.4544, 2014

36 / 38


Citation  Algorithm  Paper

(Fercoq et al., 2014)  Hydra2  Fast distributed coordinate descent for non-strongly convex losses. MLSP 2014 (arXiv:1405.5300)
(Marecek, Richtarik and Takac, 2014)  DBCD  Distributed block coordinate descent for minimizing partially separable functions. Numerical Analysis and Opt., Springer Proc. in Math. and Stat. (arXiv:1406.0238)
(Jaggi, Smith, Takac et al., 2014)  CoCoA  Communication-efficient distributed dual coordinate ascent. NIPS 2014 (arXiv:1409.1458)
(Qu, Richtarik & Zhang, 2014)  QUARTZ  Randomized dual coordinate ascent with arbitrary sampling. arXiv:1411.5873, 2014
(Ma, Smith, Jaggi et al., 2015)  CoCoA+  Adding vs. averaging in distributed primal-dual optimization. ICML 2015
(Tappenden, Takac & Richtarik, 2015)  PCDM  On the complexity of parallel coordinate descent. arXiv:1503.03033, 2015
(Hsieh, Yu & Dhillon, 2015)  PASSCoDe  PASSCoDe: Parallel ASynchronous Stochastic dual Co-ordinate Descent. ICML, 2015
(Peng et al., 2016)  ARock  ARock: an algorithmic framework for asynchronous parallel coordinate updates. SIAM J. Sci. Comput., 2016
(You et al., 2016)  Asy-GCD  Asynchronous parallel greedy coordinate descent. NIPS, 2016

...

37 / 38


Conclusions

• Coordinate descent scales very well to big data problems with special structure.

→ Requires (partial) separability / a sparse dependency graph.

• Care is needed when combining updates (add them up? average them?).

• Sampling strategies can take unreliable processors into account.

38 / 38