Gradient flows, iterated logarithms, and semistability
Pranav Pandit
joint work with Fabian Haiden, Ludmil Katzarkov, and Maxim Kontsevich
University of Vienna
June 7, 2018
Pranav Pandit (U Vienna) Gradient flows and iterated logs June 7, 2018 1 / 16
This talk is based on:
- arXiv:1706.01073
- arXiv:1802.04123
Goals:
Describe the asymptotic behavior of the flows discussed in the previous talk (in special cases)
Describe a canonical refinement of the Harder-Narasimhan filtration, and its relation to the asymptotic behavior of the flow
Category          | Fuk(X, ω)                                          | Rep(Q)
Object            | Lagrangian, up to isotopy                          | (E_v, (T_α)_{α ∈ Arr(Q)})
Metrized object   | Lagrangian                                         | (E_v, h_v), h_v a hermitian metric
Kähler data       | Ω, a holomorphic volume form                       | P := Σ_v z_v pr_v + Σ_α [T*_α, T_α], z_v ∈ H, v ∈ Q_0
Flow F            | L̇ = Arg Ω|_L                                       | h^{−1} ḣ = Arg P
Mass M            | M(L) = ∫_L |Ω|                                     | M = |P|
Central charge Z  | Z(L) = ∫_L Ω                                       | Z = Σ_v z_v χ(E_v)
Kähler potential  | dS_C(f) = ∫_L Ω f                                  | S_C = Σ_v log det h_v + Σ_α T*_α T_α
Harmonic metric   | fixed points of F = Crit(S_C) = special Lagrangian | fixed points of F/rescaling = Crit(S_C)
Amplitude         | sup / inf Arg Ω|_L                                 | sup / inf Spec(Arg P)
DUY-type theorem  | ??                                                 | King's theorem

Good features (some proven): (i) mass and amplitude decrease along the flow; (ii) the BPS inequality |Z| ≤ M; (iii) "properness" of the mass.
Recall:
- There is a flow on the space of "metrized objects".
- Fixed points of the flow on metrics up to rescaling are harmonic metrics. The underlying objects should be polystable.
- The "speed" of the rescaling action gives the slope/phase of the polystable object.
- For general objects, the flow should "decompose" the object into its polystable constituents, equipped with harmonic metrics.

The Harder-Narasimhan filtration only decomposes an object into semistable constituents.

Basic problem: Describe the decomposition (induced by the flow) of a semistable object into polystable pieces.
C a stable ∞-category equipped with a Bridgeland stability condition ({C^ss_θ}_{θ∈R}, Z : K_0(C) → C).

For each θ ∈ R, C^ss_θ is an Artinian abelian category equipped with a homomorphism

  X := exp(−iθ)Z : K_0(C^ss_θ) → R

which is positive on non-zero objects.
Natural filtrations

A an Artinian abelian category; E ∈ A; 0 = E_0 ⊂ E_1 ⊂ E_2 ⊂ ... ⊂ E_n = E a filtration.

1. Socle: E_k is maximal containing E_{k−1} such that E_k/E_{k−1} is semisimple (one extreme).
2. Cosocle: E_k is minimal contained in E_{k+1} such that E_{k+1}/E_k is semisimple (the other extreme).
3. Let A be the category of pairs (V, N) consisting of a vector space V and a nilpotent endomorphism N. Then there is a unique filtration labelled by half-integers such that

     N^k : Gr_{k/2} V → Gr_{−k/2} V

   is an isomorphism for all k. (Balanced; this is the weight/Lefschetz filtration from mixed Hodge theory.)
Balanced filtration

Theorem (Haiden-Katzarkov-Kontsevich-P.)
A an Artinian abelian category, X : K_0(A) → R positive on non-zero objects, E ∈ A. Then there exists a unique R-filtration

  0 = E_0 ⊂ E_1 ⊂ ... ⊂ E_n = E

labelled by

  λ_1 < λ_2 < ... < λ_n

characterized by:

1. Paracomplementedness: E_j/E_{k−1} is semisimple whenever λ_j − λ_k < 1.
2. Balancing condition: Σ_j λ_j X(E_j/E_{j−1}) = 0.
3. For any F_j with E_{j−1} ⊂ F_j ⊂ E_j such that F_j/F_k is semisimple for λ_j − λ_k ≤ 1, we have Σ_j λ_j X(F_j/E_{j−1}) ≤ 0.
Iterated balanced filtration and asymptotics

The last condition can be formulated as stability of the filtration F_λE considered as an object in an auxiliary abelian category.

Theorem (Haiden-Katzarkov-Kontsevich-P.)
- Iterating the construction of the previous theorem gives a canonical filtration of E labelled by R^∞ equipped with the lexicographic order.
- When A = Rep(Q), this filtration controls the asymptotic behavior of the gradient flow on the space of metrics:

    log |h(t)| = λ_1 log t + λ_2 log log t + ... + λ_n log^(n) t + O(1)

  on the (λ_1, λ_2, ..., λ_n) piece of the filtration.

Meta-principle: The asymptotic dynamics of geometric flows (e.g., mean curvature flow, Yang-Mills flow) can be reduced to the finite-dimensional quiver situation using the theory of center manifolds.
Dynamical systems from quiver representations

Q = (Q_0, Q_1) quiver; Q_0 vertices, Q_1 arrows.

Quiver representation:
- Vertex i ↦ E_i, a vector space.
- Arrow α : i → j ↦ T_α : E_i → E_j, an operator.

Metrized quiver representation: hermitian metric h_i on E_i ⇝ adjoint operator T*_α : E_j → E_i.

Choosing "masses" (m_i)_{i∈Q_0} ⇝ a flow on the space of metrics:

  m_i h_i^{−1} ḣ_i = Σ_{α:i→j} h_i^{−1} T*_α h_j T_α − Σ_{α:k→i} T_α h_k^{−1} T*_α h_i

This is asymptotic to the previous flow when the central charge takes values in a ray (the positive reals).
X compact Riemann surface with Kähler form ω, E a finite-dimensional holomorphic vector bundle on X, and h a hermitian metric on E. Consider the flow:

  h^{−1} ∂_t h = −2i(ΛF − λ)

Theorem (Haiden-Katzarkov-Kontsevich-P.)
There is a canonical filtration F^k E =: E_k on E labelled by

  β_k ∈ Rt ⊕ R log t ⊕ R log log t ⊕ ··· ≃ R^∞

such that
- ||log h(t)|_{E_k}|| = β_k + O(1)
- E_k/E_{k−1} is a sum of stable bundles of slope μ_k, given by

    β_k = 4π (∫_X ω)^{−1} (μ_k − μ(E)) t + ...
Iterated Logarithms from a dynamical system

  log^(1)(t) := log t,   log^(n)(t) := log(log^(n−1) t)

  ẋ_1 = e^{−x_1} ⇒ x_1 = log t
  ẋ_2 = e^{−(x_1+x_2)} ⇔ ẋ_2 = (1/t) e^{−x_2} ⇔ dx_2/d(log t) = e^{−x_2} ⇒ x_2 = log^(2) t
  ẋ_3 = e^{−(x_1+x_2+x_3)} ⇔ dx_3/d(log t) = e^{−(x_2+x_3)} ⇒ x_3 = log^(3) t
  ...
  ẋ_n = e^{−(x_1+x_2+...+x_n)} ⇔ dx_n/d(log t) = e^{−(x_2+...+x_n)} ⇒ x_n = log^(n) t

The original system of n differential equations reduces to a system of the same form in the n−1 variables x_2, ..., x_n upon passing to logarithmic time s := log t (= x_1).
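The claim that x_k = log^(k) t solves this triangular system follows from the chain rule: d/dt log^(n) t = 1/(t · log t ··· log^(n−1) t) = e^{−(x_1+...+x_n)}. A minimal Python sketch (not from the talk) checks this numerically by comparing a finite-difference derivative against the right-hand side:

```python
import math

def iter_log(n, t):
    # n-fold iterated logarithm: log^(1) t = log t, log^(n) t = log(log^(n-1) t)
    for _ in range(n):
        t = math.log(t)
    return t

def rhs(n, t):
    # right-hand side e^{-(x_1 + ... + x_n)}, evaluated at x_k = log^(k) t
    return math.exp(-sum(iter_log(k, t) for k in range(1, n + 1)))

def lhs(n, t, h=1e-4):
    # central finite-difference approximation of (d/dt) log^(n) t
    return (iter_log(n, t + h) - iter_log(n, t - h)) / (2 * h)

for n in (1, 2, 3):
    print(n, lhs(n, 50.0), rhs(n, 50.0))  # the two columns agree
```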
1-dim representations

Input:
1. Directed acyclic graph G = (G_0, G_1).
2. Masses (m_i)_{i∈G_0} ∈ R_{>0}^{G_0} ⇝ metric Σ_i m_i (dx_i)² on R^{G_0}.
3. Weights (c_α)_{α∈G_1} ∈ R_{>0}^{G_1} ⇝ action functional S : R^{G_0} → R,

     S(x) := Σ_{α:i→j} c_α e^{x_j − x_i}

⇝ gradient flow of S with respect to the metric:

  m_i ẋ_i = Σ_{α:i→j} c_α e^{x_j−x_i} − Σ_{α:k→i} c_α e^{x_i−x_k}

Claim: The asymptotic behavior of these dynamical systems is controlled by iterated logarithms.
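Since this is gradient descent for S in the metric Σ m_i (dx_i)², the action decreases along trajectories, and Σ_i m_i ẋ_i = 0 (each arrow contributes opposite forces to its endpoints). A small self-contained sketch checks both facts; the graph, masses, and weights below are illustrative choices, not data from the talk:

```python
import math

def action(x, arrows):
    # S(x) = sum over arrows a: i -> j of c_a * exp(x_j - x_i)
    return sum(c * math.exp(x[j] - x[i]) for i, j, c in arrows)

def euler_step(x, masses, arrows, dt):
    # one Euler step of  m_i x_i' = sum_{a:i->j} c_a e^{x_j-x_i} - sum_{a:k->i} c_a e^{x_i-x_k}
    force = [0.0] * len(x)
    for i, j, c in arrows:
        e = c * math.exp(x[j] - x[i])
        force[i] += e   # outgoing arrow pushes x_i up
        force[j] -= e   # incoming arrow pushes x_j down
    return [xi + dt * f / m for xi, f, m in zip(x, force, masses)]

# illustrative DAG: arrows 0 -> 1, 1 -> 2, 0 -> 2 with chosen weights
masses = [1.0, 2.0, 1.0]
arrows = [(0, 1, 1.0), (1, 2, 0.5), (0, 2, 0.5)]
x = [0.0, 0.0, 0.0]
history = [action(x, arrows)]
for _ in range(5000):
    x = euler_step(x, masses, arrows, dt=1e-3)
    history.append(action(x, arrows))
print(history[0], history[-1])  # S strictly decreases along the discretized flow
```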
An example

The quiver with two vertices, of masses m_1 and m_2, and a single arrow of weight c from the first vertex to the second.

Gradient flow:

  m_1 ẋ_1 = c e^{x_2−x_1}
  m_2 ẋ_2 = −c e^{x_2−x_1}

Solution:

  x_1(t) = (m_2/(m_1+m_2)) log t + log c_1
  x_2(t) = −(m_1/(m_1+m_2)) log t + log c_2

where c_2/c_1 = m_1 m_2 / (c(m_1 + m_2)).
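The log t coefficients can be recovered numerically. The sketch below (mass and weight values are illustrative, not from the talk) integrates the flow by Euler steps in logarithmic time s = log t and fits the slope of x_1 against s:

```python
import math

# Two-vertex flow  m1 x1' = c e^{x2-x1},  m2 x2' = -c e^{x2-x1},
# integrated in logarithmic time s = log t (so dx/ds = t * dx/dt).
m1, m2, c = 1.0, 3.0, 2.0
x1 = x2 = 0.0
h = 1e-3
x1_at_s10 = None
for k in range(15000):            # s runs from 0 to 15
    s = k * h
    if x1_at_s10 is None and s >= 10.0:
        x1_at_s10 = x1            # record x1 once the transient has died out
    t = math.exp(s)
    e = c * math.exp(x2 - x1)
    x1 += h * t * e / m1
    x2 -= h * t * e / m2
slope = (x1 - x1_at_s10) / 5.0    # fitted coefficient of log t in x1
print(slope, m2 / (m1 + m2))      # both close to 0.75
```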
General case

Ansatz: x_i = v_i log t + b_i; v_i, b_i ∈ R. Substituting:

  m_i v_i / t = Σ_{α:i→j} c_α t^{v_j−v_i} e^{b_j−b_i} − Σ_{α:k→i} c_α t^{v_i−v_k} e^{b_i−b_k}

We are interested in the asymptotic behavior of x_i up to O(1), so we look for solutions x_i(t) of the differential equation correct up to terms in L¹(R).

  ⇒ only powers t^{≤−1} in the RHS
  ⇒ v_i − v_j ≥ 1 if ∃α : i → j ⇔ (v_i)_i is an admissible grading (definition)

Comparing coefficients:
  ⇒ m_i v_i = Σ_{α:i→j} c_α u_α − Σ_{α:k→i} c_α u_α, and u_α = 0 if v_i − v_j > 1.

Here u_α := e^{b_j−b_i} ∈ R_{>0} for α : i → j.

Key observation: the u_α's are Lagrange multipliers for a convex optimization problem.
Balanced weight grading

C := set of admissible gradings ⊂ R^{G_0} (a convex body).
M : C → R_{≥0}; M((v_i)_i) := Σ_i m_i v_i², a convex function.

Proposition
The following are equivalent:
1. (v_i)_i minimizes M.
2. There exist u_α ≥ 0, α ∈ G_1, satisfying

     m_i v_i = Σ_{α:i→j} c_α u_α − Σ_{α:k→i} c_α u_α

3. (i) Σ_i m_i v_i = 0 and (ii) Σ_{i∈E} m_i v_i ≤ 0 for all E ⊂ G_0 such that (i ∈ E and ∃α : i → j) ⇒ j ∈ E ("slope semistability").

The unique grading satisfying (1)-(3) is called the balanced weight grading.
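For the two-vertex example (single arrow 1 → 2), the admissible gradings are {v : v_1 − v_2 ≥ 1}, and minimizing M recovers the grading (m_2/(m_1+m_2), −m_1/(m_1+m_2)) from the explicit solution, together with the balancing condition Σ m_i v_i = 0. A grid-search sketch (masses are illustrative):

```python
m1, m2 = 1.0, 2.0

# Minimize M(v) = m1 v1^2 + m2 v2^2 over admissible gradings v1 - v2 >= 1.
# M is convex and v = 0 is inadmissible, so the minimum lies on the boundary
# v1 = v2 + 1; scan that line on a fine grid.
best_v2, best_M = None, float("inf")
steps = 300000
for k in range(steps + 1):
    v2 = -1.5 + 3.0 * k / steps          # v2 in [-1.5, 1.5]
    M = m1 * (v2 + 1.0) ** 2 + m2 * v2 ** 2
    if M < best_M:
        best_M, best_v2 = M, v2
v1, v2 = best_v2 + 1.0, best_v2
print(v1, v2)                 # ≈ ( m2/(m1+m2), -m1/(m1+m2) ) = (2/3, -1/3)
print(m1 * v1 + m2 * v2)      # balancing condition: ≈ 0
```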
Iterated balanced weight grading

There are walls in the parameter space of the m_i's (recall: (m_i)_i ∈ R_{>0}^{G_0} parametrize certain metrics on R^{G_0}) along which some of the u_α = 0 for α : i → j with v_i − v_j = 1.

Simplest example: the graph with four vertices of masses m_1, m_2, m_3, m_4 and three arrows 1 → 2, 3 → 2, 3 → 4, carrying multipliers u_12, u_32, u_34.

It turns out that u_32 = (m_2 m_3 − m_1 m_4) / Σ_i m_i. Wall: m_2 m_3 = m_1 m_4.

Iterating the procedure along a certain subgraph ⇝ asymptotics governed by iterated logarithms along the wall.
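On the stratum where all three constraints are active (v_i − v_j = 1 on each arrow, taking all c_α = 1), the equations of the proposition can be solved by hand; the sketch below checks the stated formula for u_32 with sample masses chosen on the generic side of the wall (an illustrative choice, not from the talk):

```python
# Vertices 1..4 with arrows 1 -> 2, 3 -> 2, 3 -> 4 (all c_a = 1).
# Sample masses with m2*m3 > m1*m4, i.e. away from the wall:
m1, m2, m3, m4 = 1.0, 2.0, 3.0, 1.0
total = m1 + m2 + m3 + m4

# Assume all constraints active: v1 = v2 + 1, v3 = v2 + 1, v3 = v4 + 1 => v4 = v2.
# The balancing condition sum m_i v_i = 0 then fixes v2:
v2 = -(m1 + m3) / total
v1 = v3 = v2 + 1.0
v4 = v2

# Multiplier equations  m_i v_i = sum_{out} u_a - sum_{in} u_a:
u12 = m1 * v1            # vertex 1: m1 v1 = u12
u34 = -m4 * v4           # vertex 4: m4 v4 = -u34
u32 = m3 * v3 - u34      # vertex 3: m3 v3 = u32 + u34
print(u32, (m2 * m3 - m1 * m4) / total)   # both ≈ 5/7 for these masses
```

The remaining vertex-2 equation m_2 v_2 = −(u_12 + u_32) holds automatically, which is the consistency check behind the formula.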