End to End Learning and Optimization on Graphs
Bryan Wilder¹, Eric Ewing², Bistra Dilkina², Milind Tambe¹
¹Harvard University
²University of Southern California
To appear at NeurIPS 2019
Data → ??? → Decisions
Data → $\arg\max_{x \in X} f(x, \theta)$ → Decisions
Standard two-stage: predict, then optimize
Training: maximize accuracy
Challenge: misalignment between "accuracy" and decision quality
Data → Decisions
Pure end-to-end
Training: maximize decision quality
Challenge: optimization is hard
Data → $\arg\max_{x \in X} f(x, \theta)$ → Decisions
Decision-focused learning: differentiable optimization during training
Training: maximize decision quality
Challenge: how to make optimization differentiable?
Relax + differentiate
Forward pass: run a solver
Backward pass: sensitivity analysis via KKT conditions
• Convex QPs [Amos and Kolter 2018, Donti et al. 2018]
• Linear and submodular programs [Wilder, Dilkina, Tambe 2019]
• MAXSAT (via SDP relaxation) [Wang, Donti, Wilder, Kolter 2019]
• MIPs [Ferber, Wilder, Dilkina, Tambe 2019]
  • Monday @ 11am, Room 612
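As an illustrative sketch of this forward/backward pattern (not the implementation from any of the papers above), the snippet below uses the cvxpylayers library, which differentiates through a convex program's KKT conditions. The toy QP and all variable names here are invented for illustration:

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

# Hypothetical decision problem: max_x  theta^T x - ||x||^2  s.t. 0 <= x <= 1,
# a QP relaxation of a 0/1 selection problem.
n = 10
x = cp.Variable(n)
theta = cp.Parameter(n)
problem = cp.Problem(cp.Maximize(theta @ x - cp.sum_squares(x)),
                     [x >= 0, x <= 1])

# Forward pass: solve the QP. Backward pass: implicit differentiation
# of the KKT conditions yields d x* / d theta.
layer = CvxpyLayer(problem, parameters=[theta], variables=[x])

theta_hat = torch.randn(n, requires_grad=True)  # stand-in for model predictions
x_star, = layer(theta_hat)
decision_quality = (theta_hat.detach() * x_star).sum()  # placeholder objective
(-decision_quality).backward()  # gradients flow back through the solver
print(theta_hat.grad)
```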
What's wrong with relaxations?
• Some problems don't have good ones
• Slow to solve the continuous optimization problem
• Slower to backprop through: $O(n^3)$
This work
• Alternative: include a solver for a simpler proxy problem
• Learn a representation that maps the hard problem to the simple one
• Instantiate this idea for a class of graph optimization problems
Graph learning + optimization
Problem classes
• Partition the nodes into K disjoint groups
  • Community detection, max-cut, …
• Select a subset of K nodes
  • Facility location, influence maximization, vaccination, …
• Methods of choice are often combinatorial/discrete
Approach
• Observation: grouping nodes into communities is a good heuristic
  • Partitioning: communities correspond to well-connected subgroups
  • Facility location: put one facility in each community
• Observation: graph learning approaches already embed nodes into $\mathbb{R}^p$
Approach
1. Start with a clustering algorithm (in $\mathbb{R}^p$)
   • Can (approximately) differentiate very quickly
2. Train embeddings (the representation) to solve the particular problem
   • Automatically learns a good continuous relaxation!
ClusterNet Approach
Node embeddings (GCN) → K-means clustering → Locate 1 facility in each community
Differentiable K-means
Forward pass (iterate until convergence):
• Softmax update to node assignments
• Update cluster centers
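A minimal sketch of this forward pass, assuming embeddings in $\mathbb{R}^p$; the function name and the inverse-temperature parameter `beta` are my notation, not the paper's released code:

```python
import torch

def soft_kmeans_step(x, mu, beta=1.0):
    """One iteration of differentiable K-means.
    x:    (n, p) node embeddings
    mu:   (K, p) cluster centers
    beta: softmax inverse temperature (assignments harden as beta grows)
    """
    dist = torch.cdist(x, mu)                # (n, K) node-to-center distances
    r = torch.softmax(-beta * dist, dim=1)   # soft assignments of nodes to clusters
    # Update each center to the assignment-weighted mean of its points
    mu_next = (r.t() @ x) / (r.sum(dim=0, keepdim=True).t() + 1e-8)
    return r, mu_next
```

Iterating this step to (approximate) convergence yields the soft assignments `r` consumed by the downstream optimization.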
Backward pass
• Option 1: differentiate through the fixed-point condition $\mu^t = \mu^{t+1}$
  • Prohibitively slow, memory-intensive
• Option 2: unroll the entire series of updates
  • Cost scales with # iterations
  • Have to stick to differentiable operations
• Option 3: get the solution, then unroll one update (see the sketch below)
  • Do anything to solve the forward pass
  • Linear time/memory, implemented in vanilla PyTorch
Theorem [informal]: provided the clusters are sufficiently balanced and well-separated, the Option 3 approximate gradients converge exponentially quickly to the true ones.
Idea: show that this corresponds to approximating a particular term in the analytical fixed-point gradients.
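Option 3 is simple to realize in PyTorch: run the forward iterations with gradients disabled, then re-apply a single update inside the autograd graph. A sketch, reusing `soft_kmeans_step` from the earlier snippet:

```python
import torch

def soft_kmeans(x, mu0, beta=1.0, iters=50):
    """Option 3: solve the forward pass without autograd, then unroll one step."""
    mu = mu0
    with torch.no_grad():                # solve however we like; no graph is kept
        for _ in range(iters):
            _, mu = soft_kmeans_step(x, mu, beta)
    # One differentiable update from the (detached) fixed point:
    r, mu = soft_kmeans_step(x, mu.detach(), beta)
    return r, mu                         # gradients flow only through this last step
```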
ClusterNet Approach
1. GCN node embeddings
2. K-means clustering
3. Locate 1 facility in each community
4. Loss: quality of facility assignment
5. Differentiate through K-means
6. Update GCN params
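Putting the pieces together, a hedged sketch of one such training step; the GCN interface, the use of embedding-space distances, and the softened max-distance loss are schematic stand-ins rather than the released ClusterNet code (which, e.g., scores facilities with graph distances):

```python
import torch

def train_step(gcn, optimizer, adj, feats, mu0, beta=1.0):
    """One decision-focused update: embed, cluster, score the decision, backprop."""
    optimizer.zero_grad()
    emb = gcn(adj, feats)                  # (n, p) GCN node embeddings (schematic API)
    r, mu = soft_kmeans(emb, mu0, beta)    # differentiable clustering (Option 3)
    # Schematic facility-location loss: treat each cluster center as a facility
    # and softly assign every node; the true objective uses graph distances.
    dist = torch.cdist(emb, mu)            # (n, K)
    node_cost = (r * dist).sum(dim=1)      # expected distance under soft assignment
    loss = node_cost.max()                 # relaxation of the max-distance objective
    loss.backward()                        # gradients reach the GCN through K-means
    optimizer.step()
    return loss.item()
```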
Experiments
• Learning problem: link prediction
• Optimization: community detection and facility location problems
• Train GCNs as the predictive component
Example: community detection
Observe partial graph → Predict unseen edges → Find communities
$$\max_{r}\;\frac{1}{2m}\sum_{u,v\in V}\sum_{k=1}^{K}\left[A_{u,v}-\frac{d_u d_v}{2m}\right]r_{uk}\,r_{vk}$$
$$\text{s.t.}\quad r_{uk}\in\{0,1\}\;\;\forall u\in V,\;k=1\ldots K,\qquad \sum_{k=1}^{K}r_{uk}=1\;\;\forall u\in V$$
• Useful in scientific discovery (social groups, functional modules in biological networks)
• In applications, a two-stage approach is common
[Yan & Gregory '12, Burgess et al. '16, Berlusconi et al. '16, Tan et al. '16, Bahulkar et al. '18, …]
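For reference, the relaxed objective above is easy to express as a differentiable loss over soft assignments $r \in [0,1]^{n \times K}$, using the matrix identity $\frac{1}{2m}\mathrm{Tr}(r^\top B r)$ with modularity matrix $B = A - \frac{d d^\top}{2m}$ (a sketch; the function name is mine):

```python
import torch

def soft_modularity(r, adj):
    """Relaxed modularity: r is (n, K) soft assignments, adj the (n, n) adjacency."""
    deg = adj.sum(dim=1)                       # node degrees d_u
    two_m = deg.sum()                          # 2m: twice the number of edges
    B = adj - torch.outer(deg, deg) / two_m    # modularity matrix A - d d^T / 2m
    return torch.trace(r.t() @ B @ r) / two_m  # equals the double sum above
```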
Comparison
• Two-stage: GCN + expert-designed algorithm (2Stage)
• Pure end-to-end: deep GCN to predict the optimal solution (e2e)
Results: single-graph link prediction
[Bar charts: community detection performance measured by modularity (higher is better) and facility location performance measured by max distance (lower is better), comparing ClusterNet, 2Stage, and e2e.]
Representative example from Cora, Citeseer, protein interaction, Facebook, and adolescent health networks.
Results: generalization across graphs
[Bar charts: community detection (modularity, higher is better) and facility location (max distance, lower is better) on unseen graphs, comparing ClusterNet, 2Stage, and e2e.]
ClusterNet learns generalizable strategies for optimization!
Takeaways
• Good decisions require integrating learning and optimization
• Pure end-to-end methods miss out on useful structure
• Even simple optimization primitives provide good inductive bias
NeurIPS '19 paper: see bryanwilder.github.io
Code available at https://github.com/bwilder0/clusternet