46
QUALIFICATION EXAM UMUT VAROLGUNES Abstract. Here are some notes from my preparation for the exam. My two minors were finite dimensional Lie algebras, and characteristic classes. The Lie algebras section is a standard linear exposition whereas the characteristic classes one is random notes. My major is an assortment of things related to symplectic geometry and mirror symmetry. The notes there are mostly about the parts that I found more esoteric. I enjoyed writing these, someone else will maybe enjoy reading them - very unlikely. 1. Lie Algebras 1.1. Lie algebras from Algebraic Groups. An algebraic group over a field F is a collection of polynomials over F in the variables corresponding to entries of a matrix over F. Condition is that the set of invertible solutions for every base extension (algebra over F) is closed under multiplication and inversion in the group of matrices. Use the algebra of dual numbers F[]/( 2 ) to define the corresponding Lie Algebra, i.e. all X Mat n (F) for which I + X is in G(F[]/( 2 )). 1.2. Engel’s Theorem. Theorem 1. Let g gl(V ) be a representation. If all elements of g act nilpotently on V , then there exists a vector in V which is killed by all of g. Proof. Induction on dimg. Suffices to show for subalgebras. Take a maximal proper subalgebra h of g. Key step: Consider the representation of h on g given by the adjoint action. Clearly, h g is a subrepresentation, so we can consider the quotient representation. It is elementary to check that h acts nilpotently. Hence, there is an element in g/h which is killed by h and let us lift that element to g, and call it a. By construction [a, h] h. Hence, h F · a = g, by maximality. Now we finish with the more obvious use of the induction hypothesis - take the subspace that is killed by h, and show that it is invariant under the action of a. Corollary 1. Let g gl(V ) be a representation. If all elements of g act nilpotently on V , then V has a basis in which the matrices that represent the elements of g are all strictly upper-triangular. Corollary 2. The adjoint representation is nilpotent if and only if the Lie algebra is nilpotent. 1.3. Lie’s Theorem. Lemma 1. Let g be a Lie algebra, and V its finite dimensional representation. Furthermore, let h be an ideal and λ h * . Then the weight space W h λ of λ is invariant under the action of g. 1

Lie Algebras - MIT Mathematicsmath.mit.edu/~umutvg/notesforqual.pdf · 2015. 2. 5. · 0 representations by the magic identity. Let’s assume that ga 0 is not nilpotent. Then, we

  • Upload
    others

  • View
    1

  • Download
    0

Embed Size (px)

Citation preview

  • QUALIFICATION EXAM

    UMUT VAROLGUNES

    Abstract. Here are some notes from my preparation for the exam. My two

    minors were finite dimensional Lie algebras, and characteristic classes. The

    Lie algebras section is a standard linear exposition whereas the characteristicclasses one is random notes. My major is an assortment of things related to

    symplectic geometry and mirror symmetry. The notes there are mostly about

    the parts that I found more esoteric. I enjoyed writing these, someone else willmaybe enjoy reading them - very unlikely.

    1. Lie Algebras

    1.1. Lie algebras from Algebraic Groups.

    • An algebraic group over a field F is a collection of polynomials over F inthe variables corresponding to entries of a matrix over F. Condition is thatthe set of invertible solutions for every base extension (algebra over F) isclosed under multiplication and inversion in the group of matrices.• Use the algebra of dual numbers F[�]/(�2) to define the corresponding Lie

    Algebra, i.e. all X ∈ Matn(F) for which I + �X is in G(F[�]/(�2)).

    1.2. Engel’s Theorem.

    Theorem 1. Let g→ gl(V ) be a representation. If all elements of g act nilpotentlyon V , then there exists a vector in V which is killed by all of g.

    Proof. Induction on dimg. Suffices to show for subalgebras. Take a maximal propersubalgebra h of g. Key step: Consider the representation of h on g given by theadjoint action. Clearly, h ⊂ g is a subrepresentation, so we can consider the quotientrepresentation. It is elementary to check that h acts nilpotently. Hence, there isan element in g/h which is killed by h and let us lift that element to g, and call ita. By construction [a, h] ⊂ h. Hence, h ⊕ F · a = g, by maximality. Now we finishwith the more obvious use of the induction hypothesis - take the subspace that iskilled by h, and show that it is invariant under the action of a. �

    Corollary 1. Let g→ gl(V ) be a representation. If all elements of g act nilpotentlyon V , then V has a basis in which the matrices that represent the elements of g areall strictly upper-triangular.

    Corollary 2. The adjoint representation is nilpotent if and only if the Lie algebrais nilpotent.

    1.3. Lie’s Theorem.

    Lemma 1. Let g be a Lie algebra, and V its finite dimensional representation.

    Furthermore, let h be an ideal and λ ∈ h∗. Then the weight space W hλ of λ isinvariant under the action of g.

    1

  • 2 UMUT VAROLGUNES

    Proof. We need to show that for any g ∈ g and h ∈ h, λ([g, h]) = 0, for W hλnonempty. Let v ∈W hλ .

    Let Wn be the span of v, g · v, . . . , gn · v. Notice that Wn get strictly bigger asn increases up to some point (say N), and then it stabilizes completely. We claimthat for every n, and for every a ∈ h, a is upper triangular with diagonal entriesλ(a) on Wn, with respect to its defining basis. We do this by induction on n. Basecase n = 1 is clear. Let us assume it for Wn−1. We have

    (1) a · gn · v = [a, g] · gn−1 · v + g · (a · gn−1(v)),which finishes the proof. In particular, the trace of a as an operator on WN is equalto Nλ(a). Since WN is closed under the action of g as well, we have TrWN ([g, h]) =0, this finishes the proof if N is coprime with the characteristic of our base field (sowe assume something in the beginning), by setting a = [g, h]. �

    Theorem 2. Let g be a solvable Lie algebra and V its finite dimensional represen-tation. Then there exists a common eigenvector to all elements of g.

    Proof. We do induction on the dimension - base case by algebraically closedness.Let h be a maximal subalgebra of g which contains [g, g]. It is easy to see thath is also an ideal, and hence a solvable ideal. It is also of codimension one bymaximality. By the induction hypothesis, h has a common eigenvector. We takethe corresponding weight space, and apply the previous theorem. Using again thatthe field is algebraically closed, this finishes the proof. �

    Corollary 3. Let V be a finite dimensional representation of a solvable Lie algebrag. Then there exists a basis of V , with respect to which all elements of g are uppertriangular.

    Corollary 4. A Lie algebra g is solvable if and only if its adjoint representationcan be made simultaneously upper-triangular.

    Corollary 5. A Lie algebra g is solvable if and only if [g, g] is nilpotent.

    1.4. Weight space decomposition.

    Lemma 2. Let a ∈ g, α, β ∈ F. We define gaα, and V aβ to be the generalizedeigenspaces of ad(a) and a : V → V with eigenvalues α and β. We then havegaα · V aβ ⊂ V aα+β.

    Proof. We need to show that for v ∈ V aβ , and g ∈ gaα, there exists N ∈ Z suchthat (a − α − β)N · g · v = 0. Now do binomial expansion on the left hand sideaccording to the identity (left multiplication −α− β) = (adjoint action-α) + (rightmultiplication−β), and choose large N �

    We can define the generalized weight spaces Vλ, and in particular gλ, in theobvious way. What we have shown implies gλ · Vγ ⊂ Vλ+γ , and in particular[gλ, gγ ] ⊂ gλ+γ .

    Theorem 3. Let h be a nilpotent Lie algebra, and V its finite dimensional repre-sentation. Then V decomposes into its generalized weight spaces.

    Proof. Let a be any element of h. Then by assumption ha0 = h, and by the lemmah · V aα ⊂ V aα . Hence, if there exists any a with two different eigenvalues, thenby Jordan decomposition for a, we can do induction on dimV . Let us assume

  • QUALIFICATION EXAM 3

    otherwise, then by Lie’s theorem we can find a basis of V such that all elementsof g is upper triangular with the same diagonal entries. This proves that V is ageneralized weight space itself. �

    Corollary 6. We get very fundamental decompositions of representations and ofthe Lie algebra itself, if we are given a nilpotent subalgebra inside.

    1.5. Cartan subalgebras.

    Definition 1. Let h ⊂ g be a nilpotent subalgebra. If it is maximal in either of thefollowing senses (which are equivalent):

    • If [a, h] ⊂ h, then a ∈ h• There doesn’t exist a nilpotent subalgebra strictly containing it.

    we call it a Cartan subalgebra.

    One direction of equality follows from Jacobi identity. For other side, considerthe commutator series of the larger nilpotent algebra and take the largest one whichis contained in the smaller nilpotent algebra.

    Proposition 1. Let h be a Cartan of g. Then the zero part of the generalizedweight space decomposition is h itself.

    Proof. Let g =⊕

    λ∈h∗ gλ. Trivially, h ⊂ g0. Consider g0/h as an h representation.This is by definition a nilpotent action, and Engel’s theorem contradicts maximality.

    Theorem 4. Let a ∈ g be a regular element. Then the 0-eigenspace of ad(a) is aCartan.

    Proof. Let g =⊕

    λ∈F gaλ = g

    a0⊕V . ga0 is a subalgebra, and the second decomposition

    is one of ga0 representations by the magic identity. Let’s assume that ga0 is not

    nilpotent. Then, we can find an element b ∈ ga0 which acts nonsingularly on V , andnonnilpotently on ga0 - as these are both non-empty open sets. This contradicts theregularity of a. Maximality is easy. �

    Corollary 7. Let h ⊂ g is a Cartan, and a ∈ h be a regular element. Then ga0 = h.

    Theorem 5. Any Cartan subalgebra is in fact of the form constructed in the abovetheorem. Moreover, for any two Cartan there exists an automorphism of the Liealgebra taking one to the other.

    Proof. The key point is to find a regular element in h. Consider the generalizedweight space decomposition g = h⊕V . The important observation is that the pureweight elements of V act nilpotently under adjoint representation. This lets usexponentiate them and obtain isomorphisms of the Lie algebra. Heuristically, byapplying exponentials in all directions from h we can cover most of g, which willmake us hit a regular element as they are in abundance - also note that regularelements are preserved under automorphisms.

    Define φ : g → g, by φ(Σαibi + h) = Exp(α1ad(b1)) . . .Exp(αnad(bn))h. It isnot hard to show that for a ∈ h,(2) dφa(b+ h) = h+ [b, a].

    Hence, if we choose a outside the hyperplanes given by the roots, dφa is nonsingular.Now we use that everything is defined by polynomial equations, which implies that

  • 4 UMUT VAROLGUNES

    the image of φ contains a Zariski dense subset. This proves the first claim - notethis only used that the image contains an open set. Doing this for two Cartansubalgebras and taking their intersection proves the second claim - this actuallyuses Zariski open. �

    1.6. Killing form. If we are given a representation V of a Lie algebra g then wecan define a bilinear form < g, h >V = tr(π(g)π(h)). This is trivially symmetricand invariant i.e. < [g, h], g′ >V =< g, [h, g

    ′] >V . The Killing form is the case ofadjoint representation.

    Lemma 3. Let h be a Cartan subalgebra, and V a representation. Let e ⊂ gα andf ⊂ g−α for some root α ∈ h∗. Then, for any λ ∈ h∗ such that Vλ 6= 0, we haveλ(h) = r · α(h), where r is a rational number depending only on α and λ.Proof. Let U be

    ⊕n∈Z Vλ+n·α. Clearly, e and f both preserve this subspace so

    tr(h |U ) = 0. On the other hand h preserves all the summands of U , and usingtr(h |Vλ+n·α) = dim(Vλ+n·α)(λ(h) + n · α(h)) we conclude the result. �

    Theorem 6. Let g ⊂ gl(V ). Then the following are equivalent:(1) < g, [g, g] >V = 0.(2) < a, a >V = 0, for all a ∈ [g, g].(3) g is solvable.

    Proof. (1) implies (2) is obvious. For (3) implies (1), choose an upper-triangularbasis for V , and use that product of an upper-triangular and strictly upper tri-angular matrix is strictly upper triangular. The actual content is in (2) implies(3).

    Consider the derived series of g and call the subalgebra that it stabilizes to p.Take a Cartan h of p. By construction p = [p, p]. Therefore, h has a basis {hαi}such that hαi ∈ [gα, g−α].

    By assumption < hαi, hαi >V = 0. If we compute the LHS using the definition,we get Σλdim(Vλ) · λ(hαi)2. Now we use the previous lemma and get(3) Σλdim(Vλ) · (rαλα(hαi))2 = (α(hαi))2(Σλdim(Vλ) · r2αλ) = 0.

    Hence, V = V0 - as the equation shows that λ(g) = 0 for all λ. This implies thatp = p0. But, then since h is Cartan p = h, which implies p = 0. �

    Corollary 8. g is solvable if and only if K(g, [g, g]) = 0.

    Remark 1. In some of these theorems we can drop the algebraically closed hypoth-esis. These can be deduced from the algebraically closed case by passing to thealgebraic closure, which preserves nilpotency, solvability, and abelianness.

    1.7. Semi-simplicity. We call a Lie algebra semi-simple if it has no solvable (orequivalently abelian) ideals.

    Let R(g) be the maximal solvable ideal of g, which can be constructed as thesum of all solvable ideals. Clearly, g is semisimple iff R(g) = 0.

    Lemma 4. g/R(g) is semisimple.

    Proof. Assume that a is an abelian ideal of g/R(g), and take its preimage in g,which is also a solvable ideal containing R(g) - contradiction. �

    Yet stronger than this is the following theorem (Levi’s theorem), which will beproven later.

  • QUALIFICATION EXAM 5

    Theorem 7. In fact there exists a semisimple subalgebra s such that g = R(g)⊕ s

    Theorem 8. Let g ⊂ glV such that V is irreducible. Then either g is semisimple,or it is the direct sum of F · IV and a semisimple Lie algebra.

    Proof. Consider V as a representation of R(g). By Lie’s theorem, there is at leastone weight space that is nonempty, and by Lie’s lemma this gives a subrepresen-tation of V as a rep. of g. Hence, it should be the whole of V . This implies thatR(g) is either empty or the multiples of identity, as desired. �

    Theorem 9. g is semisimple if and only if the Killing form is nondegenerate.

    Proof. Assume that there is an abelian ideal a, and choose a complement V to itinside g. Let a ∈ a and g ∈ g, and let us compute K(g, a) = tr(ad(g) ad(a)). Usingthat a is an ideal we see that ad(g) is upper-triangular, and also using that a isabelian that ad(a) is strictly upper triangular. Hence the Killing form is degenerate.

    Conversely, if the Killing form is degenerate, we take its kernel a, which is an idealby the invariance property of the Killing form. Consider the adjoint representationof a on g. This shows that a/center is solvable by Cartan’s criterion, which thenshows that a is solvable. �

    Theorem 10. If the Killing form is nondegenerate, then the center of g is trivial,and all its derivations are inner.

    Proof. Its obvious that the center is trivial by the definition of the Killing form.We identify g with its image in glg, so we have inclusions of Lie algebras g ⊂Der(g) ⊂ glg. Consider the trace form on glg. Its restriction to g is nondegenerateby assumption. We can take the orthogonal complement g⊥ of g inside Der(g). Itis easy to see that g⊥ is an ideal, just as g. Hence [g, D] = 0, if D ∈ g⊥ ⊂ Der(g).This shows that D is the zero derivation as the center is trivial, which finishes theproof. �

    1.8. Jordan decompositions. One can characterize the standard Jordan decom-position as writing a matrix as a sum of a semisimple matrix and a nilpotent matrixwhich commute, A = As +An. Some facts:

    • The proof is actually not so hard.• There is a polynomial P such that As = P (A) - do it first for a Jordan

    block.• This decomposition is unique - consider the differences, this is clearly nilpo-

    tent, it is semisimple by the previous item, and finish by noting that theonly semisimple nilpotent element is zero.

    An abstract Jordan decomposition of an element a of g is obtained by writinga = as + an such that ad(as) is semisimple, ad(an) nilpotent, and [an, as] = 0. Thestandard Jordan decomposition of a matrix is an example of this.

    Lemma 5. If the center is trivial then there is at most one Jordan decompositionof an element.

    Proof. This follows immediately from the uniqueness of the standard Jordan de-composition. �

    Theorem 11. If all derivations are inner, and the center is trivial, then thereexists a Jordan decomposition.

  • 6 UMUT VAROLGUNES

    Proof. We embed g in gl(g) by the adjoint representation. We need to show that thesummands in the standard Jordan decomposition ad(a) = An +As are derivationsof g. Generalized eigenspaces of ad(a) are the eigenspaces of As, and generalizedeigenspaces play nicely with respect to Lie bracket by the magic formula. That Asis a derivation can be checked on a basis, and finishes the proof. �

    1.9. Simplifications for the root decomposition in the semisimple case.Let g =

    ⊕gα be a root space decomposition for h. Let K be the Killing form.

    • K(gα, gβ) is zero unless α + β = 0. Notice that ad(a)ad(b) is a nilpotentoperator unless α+ β = 0. This doesn’t use semisimplicity.• K is nondegenerate on gα ⊕ g−α and h. Follows immediately from the

    previous point, and semisimplicity.• h is abelian. This follows from Cartan’s criterion and semisimplicity: h is

    solvable, so K(h, [h, h]) = 0.• h is semisimple. Use Jordan decomposition h = s + n. It follows from

    the properties of the standard Jordan decomposition that anything thatcommutes with h commutes also with s and n. By maximality they shouldboth lie inside h. Since n is nilpotent, and ad(n) commutes with the adjointaction of all elements in h, it follows that n is in the kernel of K |h, whichshows that it is zero as desired.• At this point, the generalized root space decomposition became much sim-

    pler. We make the obvious definition of a root (given a Cartan), and callall of them ∆ ⊂ h∗. K defines an isomorphism ν : h → h∗, and a bilinearform on h∗.• Let e ∈ gα and f ∈ g−α, then we can compute K([e, f ], h), for any h ∈ h

    - use the invariance property of K - and we find K(e, f)α(h). This isequivalent to [e, f ] = K(e, f)ν−1(α).

    • Let us take a root α, apply the previous point, and find an sl2 triple inside g.If α(ν−1(α)) = 0, then one can check that the Cartan criterion is satisfied,and sl2 is solvable, which is a contradiction. So we proved that K(α, α) 6= 0.

    Lemma 6. Let V be a finite dimensional representation of sl2, i.e. < E,F, F >with [E,F ] = H, [H,E] = 2E, and [H,F ] = −2F . Let v ∈ V be such that Ev = 0and Hv = λv, where λ 6= 0. Then the following hold:

    (1) HFnv = (λ− 2n)Fnv(2) EFnv = n(λ− n+ 1)Fn−1v(3) λ ∈ Z+. Moreover, Fλ+1v = 0, and the ones before that are linearly

    independent.

    Proof. The first two are trivial inductions (pay attention to normalizations though).For the last part note that since they are eigenvectors with different eigenvalues,Fnv’s are linearly independent as long as they are nonzero. By finite dimensionalitythey should become zero at some point, let N be the smallest one. By the secondidentity this means that λ = N − 1. �

    • gα is one dimensional. Let’s assume it is not and let us choose an sl2 triplewith E in gα. Then by nondegeneracy of Killing form g−α is at least twodimensional too, and hence contains a vector v which is perpendicular toE. This means that it commutes with E from the formulas above. We alsoknow that [H, v] = −2v, but this is a contradiction from the lemma above.

  • QUALIFICATION EXAM 7

    • The string property. Let α and β be two roots. Then the intersection of{β + nα} with ∆ ∪ {0} is a string consisting of adjacent roots such thatthe length of the positive and negative sides differ by 2K(α,β)K(α,α) . Consider

    the sl2 triple corresponding to α, which acts on⊕

    gβ+nα. Let gβ+pα bethe last nonempty one (and q be the first). Then it follows that H actsas β(H) + 2p on gβ+pα, and it also follows that the ones in between p andp − β(H) − 2p are nonzero. Similarly, it follows that the ones between qand q − β(H) − 2q are nonzero. Using the definitions of q and p, we getq < β(H)−p and β(H)− q < p. When we sum this two we get an equality,so we have that the two chains are in fact the same and that its length is

    p + q = β(H) = 2K(α,β)K(α,α) . This also shows, by going back to the numbers

    that were supposed to be positive, that q < 0 < p.• [gα, gβ ] = gα+β - if α + β is not a root this is obvious. This follows from

    the proof above, as β, β + α is part of the string - so they are connectedwith the sl2 action corresponding to α.• If α is a root, nα is a root iff n = ±1. Assume n > 1. We have that

    2K(nα,α)K(nα,nα) =

    2n is an integer. Then n = 2. This is impossible by the previous

    point.• ∆ spans h∗. Say it doesn’t, so we can choose a nonzero element in theK-orthogonal complement of span(∆). This shows by construction thatthe K-dual of that vector in h commutes with everything.• K(α, β) is rational, for α, β ∈ ∆. If we can show this for K(α, α) we would

    be done by the string property. Let us find a formula for K(α, β) for allα, β ∈ h∗ regardless. Dually, for h1, h2 in the Cartan subalgebra,

    (4) K(h1, h2) = tr(ad(h1)ad(h2)) = Σα∈∆α(h1)α(h2),

    which implies K(α, β) = Σγ∈∆K(α, γ)K(β, γ). Plug in β = α and di-vide both sides by K(α, β)2, and use the rationality obtained by the stringproperty again.• K is positive definite on the Q-linear span of the roots. This follows from

    the previous point (and the formula derived there) directly. Note that therationals here is independent of the ground field.

    1.10. Indecomposable root systems and simple Lie algebras.

    Proposition 2. Let a be an ideal of semisimple g. Then g is isomorphic to a⊕a⊥.

    Proof. It follows that a ∩ a⊥ is empty from Cartan’s criterion. This also impliesthat [a, a⊥] = 0. Dimension counting finishes the proof. �

    We define a simple Lie algebra to be a non-abelian one with no proper ideals.Note that we could equivalently say non-solvable. By the proposition above, wehave that a Lie algebra is semisimple if and only if it is a direct sum of simple Liealgebras.

    Let us say that two roots α and β are connected if there are roots a = γ1, . . . , γn =b such that γi + γi+1’s are also roots. Relatedly, we call a root system indecompos-able, if it can’t be written as the disjoint union of disconnected subsets of roots.

    If the roots we get from a semisimple Lie algebra is decomposable, the decomposi-tion induces in the only way possible a decomposition of the Lie algebra. Moreover,

  • 8 UMUT VAROLGUNES

    if the root system is decomposable, the connected components actually lie on com-plementary orthogonal subspaces. This shows that the decomposition of the rootsystem into indecomposable root systems is unique.

    1.11. Abstract root systems.

    Definition 2. We call a pair (∆, V ), where V is a vector space with an inner product,and ∆ a finite subset of vectors, an abstract root system if

    (1) 0 /∈ ∆, and ∆ spans V .(2) If α ∈ ∆, then nα ∈ ∆ if and only if n = ±1.(3) The string property, i.e. if α, β ∈ ∆ then {α + nβ} ∩ (∆ ∪ {0}) is a

    connected string with nonempty positive and negative sides differing in

    length by 2(α,β)(α,α) .

    Note that there is no extra rationality assumption - except the one imposed bythe string property. There is an obvious notion of an isomorphism of abstract rootsystems.

    Lemma 7. The inner product in the definition is unique up to scaling.

    Proof. Fix a root, which determines the scaling factor. Then for any other rootuse the string property on one side to determine the scaling factor for the innerproduct of the two roots, and then for the length of the root. �

    Here are some examples of abstract root systems.

    • ∆An = {�i − �j | i 6= j, 1 ≤ i, j ≤ n}• ∆Bn = {�i − �j , �i + �j ,−�i − �j , �i,−�i | i 6= j, 1 ≤ i, j ≤ n}• ∆Cn = {�i − �j , �i + �j ,−�i − �j , 2�i,−2�i | i 6= j, 1 ≤ i, j ≤ n}• ∆Dn = {�i − �j , �i + �j ,−�i − �j | i 6= j, 1 ≤ i, j ≤ n}

    Recall the definition of an even lattice: a lattice in a vector space with inner productsuch that the the norms of the elements of the are all even - this is a weaker conditionthan all inner products being integer. The most important example of such latticeis Γr = {�i ∈ Z + 12 ,Σ�i ∈ 2Z} in the standard R

    r, where r is a multiple of 8. It iseasy to see that the minimal norm nonzero elements of the lattice, assuming thatthey span the vector space, form an abstract root system.

    • Let us call the abstract root system we get for Γ8 above E8.• E7 is the set of roots perpendicular to vector (1/2, 1/2, . . . , 1/2) inside the

    perpendicular vector space.• E6 is the one perpendicular to (0, . . . , 0, 1, 1) inside E7.• F4 is the norm 1 or 2 elemensts inside the lattice of all half integer, or all

    integer vectors in standard R4.• G2 lives inside the lattice of A2 and is the elements of the lattice with norm

    2 or 6.

    1.12. Cartan matrix and Dynkin diagrams. Let us now choose a functional fon h∗ which doesn’t vanish on any of the roots. This divides the roots in two parts,positive and negative. Define a simple root to be a positive root which can’t bewritten as a sum of two positive roots. Let us also define a highest root be a rootwith maximum value under f .

    String property lets us prove a bunch of statements about the simple roots. Letα and β denote two different simple roots, θ be a highest root vector, and γ be apositive root.

  • QUALIFICATION EXAM 9

    • (α, β) ≤ 0. This follows from the fact that α − β can’t be a root bysimplicity.• (θ, γ) ≥ 0. This from that θ + γ can’t be a root.• Any positive root is a nonnegative integer linear combination of the simple

    roots. This follows by induction using the filtration given by the value off .• There is no linear relation between the simple roots. Assume that there is

    a relation, which we write as an equality of two sums, we can assume thatthe coefficients are positive integers. Then use the pairing with one side ofthe equation, and get the result by the first point above and the positivedefiniteness of the Killing form.• We define the decomposition of the simple roots in the obvious way. This is

    equivalent to the decomposition of the abstract root system in a functorialway.• If the root system is indecomposable, there is a unique highest root. Write

    the highest root as a nonnegative integer linear combination of the simpleroots. If there is a coefficient which is zero, consider the pairing betweenthe two and deduce that the root system is decomposable using the firsttwo points above - a contradiction. Now assume that there are two highestroots, then their difference or sum can’t be a root, so by the string propertythey are orthogonal. Now notice that since a highest root can’t be simple,it should pair nontrivially with at least one of the simple roots. We geta contradiction to the orthogonality by expanding out one of the highestroots in terms of the simple roots and using everything above.• Let us define the Cartan matrix using the simple roots and the quantity

    that appears in the string property. This has diagonal consisting of 2’s.Other entries are all non-positive, and their negativity is symmetric alongthe diagonal. Most nontrivially, Cartan matrix is a positive definite matrix.This can be seen by writing it as the product of a diagonal matrix withpositive entries and the Gramm matrix of the Killing form with respect tothe basis given by the simple roots.• Cartan matrix can be represented concisely by a quiver called the Dynkin

    diagram. Symmetry described above along with Cauchy-Schwartz inequal-ity simplifies the situation a lot. There can be only four kinds of connec-tions between vertices, no connection, standard one edge connection, anddirected two and three edge connections.• Finally note that if we also add the highest root to the simple roots and

    form the Cartan matrix of one larger size, we get a positive semi-definitematrix, which satisfies the other properties of the Cartan matrix.

    1.13. Classification of simple Lie algebras. We first classify all positive definiteCartan matrices. The strategy for that is to first create a large pool of positive semi-definite matrices, which are barely not positive definite. This is done by taking theextended Dynkin diagrams obtained by adding the highest root to the simple roots.Dynkin diagrams which give positive definite Cartan matrices should not containthese Dynkin diagrams. It is possible to show that the matrices obtained this wayare positive definite of corank one, i.e. any principal minor is positive definite. Thiscan be done in a more or less systematic way by generalizing the definition of aCartan matrix in a suitable manner (see Kac’s book). A particularly delicate point

  • 10 UMUT VAROLGUNES

    is to show that Dynkin diagrams can’t contain multiply laced cycles. As a result weget the A,B,C,D general families and E6, E7, E8, F4, G2. Now, we have to performtwo tasks to finish the classification

    (1) Show that if we start with a simple Lie algebra, make all the choices,and get a Dynkin diagram, any other simple Lie algebra which potentiallyinduces this Dynkin diagram is isomorphic to the original Lie algebra. Thisinvolves writing down the obvious generators and relations, and showingthat the seemingly undetermined Lie brackets are actually determined bysemisimplicity.

    (2) Show that all Dynkin diagrams actually come from Lie algebras. Thisinvolves actually constructing the undetermined Lie brackets.

    Let us start with the first point above. Take generators {Ei, Fi, Hi} for eachnode of the Dynkin diagram such that [Ei, Fi] =

    2ν−1(αi)K(αi,αi)

    = Hi. This implies

    the following commutation relations: [Hi, Hj ] = 0, [Hi, Ej ] = aijEj , [Hi, Fj ] =−aijFj , [Ei, Fj ] = 0. The last one is because the roots are simple. Then define thedecomposition g = n+ ⊕ h ⊕ n− of Lie algebras, by declaring the plus ones to bethe span of the root spaces of the positive roots. In Kac’s choices these become theupper triangular and lower triangular matrices in all of the types A,B,C,D.

    Lemma 8. All ideals of g intersect h nontrivially.

    Proof. Let’s say I doesn’t. Then, I is a subrepresentation of h on g, and the weightspace decomposition of I is just obtained by intersecting the weight spaces of g withI. Note that this fact about the decompositions of the representations of abelianLie algebras is true for infinite dimensional representations as well, we will use itbelow. This implies that I contains gα for some α. But then it would have tocontain an element of h by the nondegeneracy of the Killing form. �

    Let the Lie algebra generated by {Ei, Fi, Hi} with only the relations above beg̃. We will show that g is a uniquely defined quotient of g̃ proving uniqueness.

    Proposition 3. • Define ñ± in the obvious way. All pure commutators areeither inside ñ± or in h.• g̃ = ñ+ ⊕ h⊕ ñ−• There exists a unique maximal ideal J in g̃ which doesn’t intersect h, and

    this ideal is proper.• The natural map g̃ → g factors through g̃/J and induces an isomorphism

    of Lie algebras.

    Proof. The first item is an easy induction. The second one is a consequence of thefirst item. The third one is also easy. The only thing to note is by the above generaleasy fact, if I doesn’t intersect h then I = I+ ⊕ I−, and hence the direct sum of allideals not intersecting h is a proper ideal, which doesn’t intersect h. Fourth item isjust by construction. �

    This finished the first point. Now we come to the second point. Before that notethe exceptional isomorphisms A1 = B1 = C1, B2 = C2, A3 = D3, D2 = A1 ⊕ A1,and there is no D1 (it is abelian). I should remember these statements in the Liegroup level, which we did in Vogan’s class as a homework.

    Now we actually start from Cartan matrix. In the simply-laced case, there is anice reconstruction of all the roots from the simple roots, and you grind your way

  • QUALIFICATION EXAM 11

    to define a Lie algebra structure by considering the relation [Eα, Eβ ] = �(α, β)Eα+βand what should � satisfy. This seems to work only in the simply laced case, soA,D,E. You can in general reconstruct all the roots using the string property butit is not as nice (should look at this at some point though). We already know B,Cexists. To construct F4 and G2, we go back to the previous tilde construction. Itsuffices to show that the quotient is finite dimensional apparently (need to checkthat Lie algebra has the given Cartan matrix and is semisimple) and for that weconstruct maps from g̃(A) to Lie algebras we know E6 for F4 and D4 for G2. Thesemaps are injective on the h’s inside, and hence the kernels can only get bigger whenwe do the actual quotient, which proves finite dimensionality.

    1.14. More on the Killing form.

    Proposition 4. On simple Lie algebra there exists a unique nondegenerate sym-metric invariant bilinear form up to a constant factor.

    Proof. We could go back and do the proofs again with using any such bilinear formand we would find that the string property is still satisfied. Hence we know that itthe statement is true on a Cartan subalgebra. Now we use the properties cleverlyand show it is correct everywhere. �

    In the simply laced case, we have a good construction of the Lie algebra fromthe Cartan matrix. Similarly for the Killing form, we can construct it from theCartan matrix. We let it be what it was on the Cartan, and then declare (h,Eα) =0, (Eα, Eβ) = −δα,−β . The previous proposition shows that this is the Killing form.

    Armed with this description, we now take our base fields to be C, which we seeas the base extension of the one with the base field R. We define the automorphismω |hR= −Id, and ω(Eα) = E−α, and extend to the complexification by antilinearity.This is an automorphism of the complex Lie algebra, and its fixed point set is calledthe compact form of it (because the corresponding Lie group inside is compact) andthe Killing form restricted there is negative definite.

    1.15. Weyl group. Weyl group W ∈ O(V,) is the group generated by thereflections along the orthogonal hyperplanes of the roots - we work with an inde-composable abstract root system here. Here are some of its properties:

    • W preserves the roots. This is a direct consequence of the string property,we know that one side that is A more than the other, this is asking for theAth one in the longer side.• If we assume the previous point, and also that 2 is an integer, the

    string property follows. This is not trivial. Note that if a root is k timesthe other then 2k and 2/k are both integers by integrality and k can onlybe ±1,±2. I don’t think that there is a way to exclude the 2, but thisis ok for the string property anyways. Assuming that the roots are notmultiples of each other: first we show that there is at least one root inthe side that should be longer right next to the existing root, and this isby proving that at least one of 2 should be ±1. Then we show thatthe string is connected, let’s assume it is not, then we write down what isrequired to have that from the sentence below, and get a contradiction tothe positive definiteness. Finally, we prove the exact formula with 2by noting that the reflection along the hyperplane perpendicular to the

  • 12 UMUT VAROLGUNES

    string can send one end only to the other end - the formula gives exactlywhat is needed.• W (An) = Sn, W (Bn) = Sn n Zn2 , W (Cn) = Sn n Zn2 and W (Dn) =Sn n Zn−12

    Choose an f ∈ V ∗ and define positive roots, simple roots and stuff. Call thereflections of the simple roots simple reflections.

    • Reflection of a simple root can’t make any positive root other than itselfnegative. We have that any positive root is a positive combination of simpleones and just reflection can change only one of these.• A positive but not simple root can be simply reflected such that the sum of

    the coefficients in its representation with simple roots decreases. Assumethe contrary, which means that it pairs with all simple roots negatively,which is impossible, as we have shown before - or you can just look at itsnorm and expand one side only.• Hence we can apply simple reflections and bring any positive root to the

    position of a simple root. This shows that W is generated by simple re-flections because if we want to make a reflection at a positive root, wefirst change coordinates by the reflection obtained in the previous sentence(conjugation) and do the simple reflection instead.

    Now we define the chambers to be the closures of the connected componentsof the complement of all the orthogonal hyperplanes to the roots. Define the fun-damental chamber as the closure of the portion that pairs positively with all thesimple roots.

    Lemma 9. The fundamental chamber is a chamber.

    Proof. First, the fundamental chamber can’t intersect with the hyperplanes. Be-cause if they do we find a vector which pairs positively with all the simple roots,but pairs trivially with a positive root which is impossible. Since the fundamentalchamber is connected, it should lie completely in one of the chambers. Finally, ifwe look at the map from that chamber to the reals given by pairing with any ofthe simple roots we see by continuity and connectedness that all of that chambershould be in the fundamental chamber. �

    Theorem 12. (1) W acts transitively on the chambers.(2) If we choose another f ′ and get other positive roots, there is a reflection

    that takes one set of positive roots to the other.

    Proof. First one is quite good. Find points in the two chambers such that thestraight line connecting them doesn’t hit any of the points that are in the intersec-tion of two (or more) of the hyperplanes. We walk along that segment, and once wehit a hyperplane, we make the corresponding reflection (do not change the straightline).

    For the second one, we find the fundamental chambers from the two functionals.By the previous point, there is a reflection sending one to the other. This impliesthat that one reflection sends one set of simple roots to the other, which finishesthe proof. �

    Corollary 9. Cartan matrix is independent of the choice of the functional.

  • QUALIFICATION EXAM 13

    Lemma 10. Let sit−1 , . . . , si1 be simple reflections and αit be a simple root suchthat si1 . . . sit−1(αit) is a negative root. Then si1 . . . sit−1sit can be reduced to theword si1 . . . ŝirsit−1 .

    Proof. There will be a first time when a positive root becomes a negative one. Inthat step, the root we have should be the same as the reflection we are applyingby a previous point. Let w be the reflection that took αit to αir which is thatroot. Then sir . . . sit−1sit is the same as sir+1 . . . sit−1 by the change of basis trick,proving the result. �

    Theorem 13. W acts simply transitively on the chambers.

    Proof. We do this by showing that if a reflection fixes the fundamental chamber,then it is the identity. Indeed, in that case, it has to fix the set of simple roots. Letus write the reflection as a reduced word si1 . . . sit−1sit in the simple reflections,then si1 . . . sit−1(αit) should be a negative root, and by the lemma, this gives acontradiction to the word being reduced. �

    1.16. A bit more genera theory.

    Definition 3. We define the universal enveloping algebra U(g) of a Lie algebra tobe smallest (in the categorical sense) unital associative algebra with a Lie algebramap from g with respect to its commutator Lie algebra structure. It is unique up tounique isomorphism by abstract nonsense. It exists, because we can choose a basisof the Lie algebra, and take the algebra with the commutation relations. Noticethat representations of g is the same as the representations of U(g).

    Theorem 14. Poincare-Birkhoff-Witt theorem. Consider the explicit constructionabove. Denote the basis by a1, . . . , an. Then the ordered monomials form a basis ofU(g).

    Theorem 15. It is easy to see that they span U() - unordered monomials does byconstruction, then we use commutation relations. For the linear independence, wedefine the following map U(g) → B, where B is the vector space generated by theordered monomials in some set of n generators. We send the ordered monomialsto unordered monomials, and define the map for unordered polynomials by makingsome random choices of commutations but in the most effective way (there is anumber only depending on the arrangment of the numbers), we get down to a linearsum of ordered monomials, and define the map by linearity. We need to showthat this map is well defined, i.e. it doesn’t depend on the choices. For this wemake induction on the number of inversions on a word. We end up with two cases(reminiscent of braid relations), and when the commutations are far apart, it iseasy, when they are next to each other, we need Jacobi identity.

    Let us take a symmetric invariant nondegenerate pairing (·, ·) on g. Then takea basis (u1, . . . un) of g, and let (v1, . . . vn) be the dual basis. Then the elementΣuivi doesn’t depend on the choice of basis, and it is called the Casimir element Ω.(do this computation!) The matrix of an adjoint action of an element a ∈ g withrespect to these two bases are negative transposes of each other.

    Given a representation V of g, define the Lie algebra cohomology as follows Thecochains are given by alternating k-multilinear maps Λlg→ V , and the differentialis

    (5) dφ(g1, . . . , gl) =

  • 14 UMUT VAROLGUNES

    (6) Σ(−1)igiφ(g1, . . . , ĝi, . . . , gl) + Σi

  • QUALIFICATION EXAM 15

    and we just fixed an origin for it. Note that M as a subrepresentation of End(V ),where we define the action of g by taking commutators with endomorphisms.

    Now define a one cocycle g→M , by φ(a) = aP0−P0a. By the main theorem oncohomology, this cocycle is in fact of the form φ(a) = ab− ba for some b ∈M . Nowwe can check easily that P0 − b gives a g-invariant splitting, since it is a projectorto U and also commutes with the actions of all elements of g. �

    Now we also give a proof of Levi’s theorem which was promised before.

    Theorem 18. Let Rad(g) be the solvable radical of g. Then, there exists a semisim-ple s such that g = Rad(g) + s.

    Proof. We will do induction the dimension of g. We divide the argument intotwo cases. If Rad(g) is not abelian. Then, we consider g′ = g/[Rad(g),Rad(g)](because what the hell, why not). This has smaller dimension, so we use ourinduction hypothesis, and write g′ = Rad(g′) + s′. Let us call g1 the preimage ofs′ inside g. We have g = g1 + Rad(g). Note that g1 6= g, because the image ofRad(g) is a non-empty solvable ideal in the quotient, so g′ can’t be semisimple.Now we apply our induction hypothesis again to g1, and write g1 = Rad(g1) + s1.By elementary but kind of confusing computations we see that Rad(g) + Rad(g1)is again a solvable ideal. Hence, it should be that it is equal to Rad(g), and we aredone.

    Now assume that Rad(g) is abelian. Do the trick in the proof of Weyl completereducibility for V = g the adjoint representation, and U is the subspace given byRad(g) ∈ End(g). So we get M̃ . Note that we naturally have Rad(g) ∈ M̃ becauseRad(g) is an abelian ideal. We take quotient s = g/Rad(g), and also the quotient

    M̃/Rad(g), which is an s-module because Rad(g) acts trivially on M̃ (so s has anaction there), and then we use again that Rad(g) is an ideal to say that it is asubrepresentation and we can pass to the quotient representation.

    Now we take a projector P0 : g → Rad(g), and define φ : s → M by φ(s) =s̄P0 − P0s̄, where s̄ ∈ g is a lift of s. This is apparently a one cocycle. Then wefind a primitive m ∈ M , and want to use the projection P0 −m, but this time wedon’t directly get what we want because we have to lift everything up to g again,we have the formula

    (8) ad(ã)(P0 − m̃)− (P0 − m̃)ad(ã) = ad(ra),

    where ra is some mystery element in the radical. If ad(ra)’s are all zero, then weare done - in fact we have such a decomposition to ideals. If not, then we considerg1 = {a ∈ g | Pad(a) = ad(a)P , which is a proper subalgebra, and we can put theequation above into the form P (ad(a)− ra) = (ad(a)− ra)P , which shows that anyelement of g can be put in g1 by subtracting an element which is in the radical.Hence, we have g = g1 + Rad(g), so we do the same thing that we did in the abovenonabelian case. �

    There is also the following uniqueness theorem for the Levi decomposition, whichis called Malcev’s theorem. I may write down a proof later, it follows a similarstrategy to Levi’s theorem - it’s easier.

    Theorem 19. Any two Levi decompositions are related by an inner automorphism(exponential of an inner derivation) of the Lie algebra.

  • 16 UMUT VAROLGUNES

    1.17. Representation Theory of Semisimple Lie algebras. In addition to allthe choices we have been making, choose an ordering of all positive roots, this willbe used in conjunction with PBW theorem.

    The key definition is of a highest weight g-module. A highest weight moduleV with highest weight Λ ∈ h∗ is defined by the property that it contains a vectorwhich is in the weight space VΛ, which is killed by n+, and which generates thewhole of V under the action of the universal enveloping algebra. Notice that forthe generation it is equivalent to require that it is generated by the part of theuniversal enveloping algebra generated by n−.

    A singular weight vector is one that is described in the above paragraph withoutthe generation requirement, and such weight is called a singular weight. Let us alsodefine D(Λ) for a weight Λ, the negative quadrant of the lattice generated by thesimple roots with origin taken to be Λ. Let V be a highest weight g-module withhighest weight Λ.

    • V =⊕

    λ∈D(Λ) Vλ. This follows from the PBW theorem and a small com-

    putation of how the eigenvalues of elements in h change when a highestweight vector is hit by E−β1 . . . E−βn - it becomes Λ− β1 − . . .− βn. Notethat some of these weight spaces can be empty.• VΛ is one dimensional and all weight spaces are finite dimensional. This

    follows from the fact there are only finitely many ways to split a nonnegativeinteger vector into other nonnegative (nonzero) vectors. First statement isa degenerate case of this too.• V is irreducible if and only if Λ is its only singular weight. If there is another

    singular weight, then the part that it generates is a proper submodule.Conversely, if there is a submodule U , then its weight space decompositionis given by U =

    ⊕λ∈D(Λ)(Vλ ∩U). Let us take one of the minimal λ’s such

    that Vλ ∩ U is nonempty. Then this weight should be singular.• V contains a unique proper maximal submodule. This also follows fromU =

    ⊕λ∈D(Λ)(Vλ∩U), because if we take the sum of all proper submodules,

    it will not contain the highest weight vector.• Let us compute Ω(v) where v is a singular vector with weight λ. We

    choose a basis of g which looks like E±α, α ∈ ∆+ and Hi a basis of h, suchthat K(Eα, E−α) = 1. Then Ω(v) = (ΣEαE−α + ΣE−αEα + ΣH

    iHi)v =2ΣE−αEαv+Σν

    −1(α)v+Σν−1(Hi)λ(Hi)v = λ(ν−1(Σα+Σλ(Hi)Hi))v =<

    λ, λ+ 2ρ > v, where 2ρ = Σα. Note that if v is the highest weight vector,this implies the same result for the whole of V . Also if λ is another singu-lar weight, then we have < λ, λ+ 2ρ >=< Λ,Λ + 2ρ >, which can also berewritten as < λ+ ρ, λ+ ρ >=< Λ + ρ,Λ + ρ >.

    • If the highest vector happens to be a real valued functional (as opposedto for example complex valued), then this shows that there can be onlyfinitely many singular weigths, because we know that they should lie insome lattice and the previous points shows that they also lie in a compactsubset.

    Let us define M(Λ) to be the universal example of a highest weight module withhighest weight Λ. This can be defined as the obvious quotient of the universalenveloping algebra, and clearly satisfies the universal property (any such module isa quotient of M(Λ)). This comes with a basis once a functional is chosen in h∗. Byone of the points above it has a unique maximal proper submodule, the quotient of

  • QUALIFICATION EXAM 17

    M(Λ) by that submodule is called L(Λ). Note that Λ determines M(Λ) and L(Λ),and more surprisingly the converse is also true.

    We make the last set of important definitions now. Let b = n+ ⊕ h be calleda Borel subalgebra. We have [b, b] = n+. Let us define the coroots to be

    2ν−1(α)(α,α) ,

    where α is a root. Then define the lattice P of functionals in h∗ which evaluate toan integer for all coroots to be the weight lattice. If all the integers are positive,then we call such a weight dominant, of which set we represent by P+. Notice thatthe lattice of roots lies inside P by the string property. Also let ρ = 1/2Σα asbefore.

    Theorem 20. The g-modules {L(Λ}Λ∈P+ are all finite dimensional g-modules.

    Proof. We know that L(Λ)’s are irreducible, and that they are finite dimensionalfollow from Weyl’s dimension formula. For the other direction, note that b issolvable (its commutator subalgebra is nilpotent). By Lie’s theorem, there is aλ ∈ b∗ such that bv = λ(b)v, for every b ∈ b. If n ∈ n+, then n = [b, b”], and itfollows that nv = 0. Since the representation is irreducible v should be a generator.To see that the weight is dominant integral, we use the sl2-triples inside, and ofcourse the main lemma on sl2. �

    Theorem 21. • dimL(Λ) = Πα∈∆ (Λ+ρ,α)(ρ,α) , for Λ ∈ P+• Let us define the formal character of a representation by ch(L(Λ)) = Σλ∈h∗ dimVλeλ.

    Then, for Λ ∈ P+, we have eρRch(L(Λ)) = Σw∈W (detw)ew(Λ+ρ), whereR = Πα∈∆+(1− e−α).

    Proof. The dimension formula follows from the character formula as follows. Wecan define the map Fτ from the formal characters (with finitely many terms) toreal numbers by eλ → e(λ,τ)t for all τ ∈ h∗. Now let us apply Fρ to both sides ofthe Weyl character formula. We find:

    (9) et(ρ,ρ)Π(1− e−t(ρ,α))(Σ dimL(Λ)λet(λ,ρ)) = Fρ+Λ(Σw∈W (detw)e

    w(ρ)).

    By putting Λ = 0 in the character formula we derive the Weyl denominotor formula

    (10) eρR = Σw∈W (detw)ew(ρ),

    which we can substitute in the previous equation.

    (11) e(t(ρ,ρ)(Σ dimL(Λ)λet(λ,ρ)) = et(Λ+ρ,ρ)(Πα∈∆+

    1− e−t(Λ+ρ,α)

    1− e−t(ρ,α).

    We finally do something cool and let t → 0. Using L’Hospital’s rule for the RHSwe get exactly what we want.

    Now we start proving the Weyl character formula. We do it step by step:

    (1) Weyl group acts on formal characters by acting on the exponents. We firstshow that ch(L(Λ) is invariant under this action. This will be by reductionto sl2, in which case the main lemma shows the claim. Let α be a rootand let < E,F,H > be the corresponding sl2 triple, call it s. We will showthat the reflection corresponding to α fixes the character. If we restrict therepresentation to s, we can compute its character simply by projecting theexponents into the line generated by α. Moreover, if V is an irreduciblesubrepresentation then by the main lemma, we know that it should begenerated by one vector by applying F ′s and this shows that the orthogonalcomplement of < α > acts exactly the same way on all of V . Now the only

  • 18 UMUT VAROLGUNES

    things to show is that this splits into irreducible s representations. SinceΛ(H) is integral, we can turn the highest weight vector into another singularvector by applying F enough times, and that should be zero. Now take thedirect sum of all irreducible s subrepresentations of L(Λ). It is not hard tosee that this is a g-submodule and hence it should be everything.

    (2) w(eρR) = det(w)eρR. It suffices to show this for w = si. Here we notethat ρ is also equal to ω1 + . . . + ωn, where ωi are the fundamental roots.This is because the set of positive roots except αi is invariant under thereflection si, and writing down the actualy formula for what the reflectiondoes on ρ gives the desired claim. Using the same argument, w(eρR) =eρ−αi(1− eαi)Πα6=α1(1− e−α), which proves the statement.

    (3) We can compute the characters of the Verma modules directly. RchM(Λ) =eΛ.

    (4) We can write the character of a finite dimensional highest weight repre-sentation of weight Λ in as a positive integer linear combination of thecharacters of L(λ)’s where λ satisfies < λ + ρ, λ + ρ >=< Λ + ρ,Λ + ρ >,and the coefficient of L(Λ) is 1. This is by induction, using the fact that allirreducible subrepresentations are of the form L(λ) for such λ’s, when wetake quotient by such subrepresentation all the properties are still satisfied,and finally that the character splits under short exact sequences.

    (5) The same with M(λ)’s instead of L(λ)’s. For this we use the previousstatement for V = M(λ). This will give us some matrix with positiveinteger entries aµν , where µ and ν are weights satisfying < λ+ρ, λ+ρ >=<Λ + ρ,Λ + ρ >. I don’t quite understand this step. Let me just skip it fornow.

    (6) We use the previous step to decompose the character into characters ofM(λ)’s. We multiply both sides with eρR, and use the formula for char-acters of M(λ)’s. Then we use the fact that the LHS of the equationeρRchL(Λ) = Σλ∈B(Λ)aλe

    λ+ρ is antilinear with respect to the Weyl group

    action, so should be the RHS. We have the terms obtained from the 1 ·eΛ+ρand we want to show that there is no other. If there is one such aλ 6= 0, wecan make it so that λ+ ρ is dominant integral weight by reflections. Also,Λ− λ = α pairs nonnegatively with al Hi’s.

    (12) < λ+ ρ, λ+ ρ > − < Λ + ρ,Λ + ρ >=< Λ + 2ρ, α > + < α, λ > .

    Λ and λ + ρ pairs nonnegatively α, and ρ pairs positively, which finishesthe proof.

    2. Characteristic Classes

    2.1. Rather arbitrary notes.

    • One can consider Pn as the set of symmetric rank 1 projections with traceone by means of exterior product (as physicists do in bra-ket notation).This generalizes in the obvious way to other Grassmanians.

    • For the naturality axiom of characteristic classes, it is better to say it forall vector bundle maps (require tit to be an isomorphism of the fibre vectorspaces) covering the given map.

  • QUALIFICATION EXAM 19

    • Two real manifolds are cobordant if and only if their Stiefel-Whitney num-bers are the same. One side is elementary, but the other is a theorem ofThom.• For Stiefel-Whitney classes you really work with Z2 coefficients, which

    makes the computations with real projective spaces very pleasant.• To relate the tangent bundle of projective spaces to the tautological line

    bundle one either uses Euler sequence or its identification with the bundleHom(γ, γ⊥) - this is easy to see when one passes to the sphere above theprojective space.• If the total space of a principal G-bundle is weakly contractible (aspherical)

• If the total space of a principal G-bundle is weakly contractible (all homotopy groups vanish), then it is a model for a universal principal G-bundle (and its base for a classifying space).
• One can construct an EG → BG explicitly, by either Milnor's join construction or as the geometric realization of a simplicial set built from the group operations. One can also deduce the existence of EG → BG from the Brown representability theorem. In the case of vector bundles (i.e. G = GL_n) there is a more concrete description via infinite Grassmannians.
• A central problem is to compute the cohomology of BG. For example G = GL_n(R) is hard. This is essentially where characteristic classes come from. The method I know is to work with the whole family BG_n and do induction using the SSS. One needs to identify what G_n/G_{n-1} is - in the cases one usually considers for characteristic classes this is a sphere.
• To embed a vector bundle into a trivial bundle there are two approaches. One can first choose sections which generate everything, i.e. a surjection from a trivial bundle, and take the orthogonal complement of its kernel; or one can cover the base with trivializations, pick cutoff functions subordinate to the covering, and map to (R^n)^N, where N is the number of open sets. When the base is paracompact (CW complexes, metric spaces, countable unions of compact regular spaces) the second procedure lands you in R^∞ after a refinement of the covering - by the local finiteness condition. Also note that a bundle map to the tautological bundle over the Grassmannian (infinite or not) is the same thing as a vector bundle embedding as above.
• A very good toy case of classifying space theory: Prin_G(S^n) = π_{n-1}(G); heuristically this is clear from the clutching function construction.
• Homotopy covering theorem: we have the following diagram of fiber bundles, where the upper map is a fibrewise homeomorphism and X is nice:

(1)
        E ----→ F
        |       |
        ↓       ↓
        X ----→ Y

If we are given a homotopy X × I → Y, then there is a covering homotopy (again a fibrewise homeomorphism) E × I → F making the obvious diagram commute.
• The homotopy covering theorem can be used to prove that homotopic maps induce isomorphic bundles. For principal G-bundles over smooth manifolds this can also be done using connections. In MS, they never use this somehow.


• MS does prove that any vector bundle over X gives rise to a well defined homotopy class of maps X → Gr_n(R^∞) via some trickery.
• Schubert cells provide a cell decomposition for Grassmannians. For Gr_n(R^m) these are cells indexed by partitions fitting in an n × (m − n) box, with easily computed dimensions.
• Thom isomorphism for orientable bundles. This is very easy to prove using the Serre spectral sequence. The only tricky point is to use the SSS with fiber the pair (vectors, non-zero vectors), and to realize that orientability is the same as the corresponding local system on the base being trivial. Note that the isomorphism is given by taking the cup product with the Thom class, which restricts to the given orientation class on each fiber. This class can also be constructed using differential geometry.

• The Euler class is a characteristic class in H^n(M, Z) for oriented real vector bundles of rank n. It satisfies all the possible axioms it could satisfy. It is two-torsion when n is odd. Its mod 2 reduction is the top dimensional Stiefel-Whitney class. It can be defined using Chern-Weil theory by taking the invariant polynomial to be the Pfaffian - note that this exists only when the bundle is orientable (and is interesting only in even rank); a small sanity check on the Pfaffian is in the next item. It can also be defined as the image of the Thom class under H^n(E, E_0) → H^n(E) ≅ H^n(M); equivalently, it corresponds to the square of the Thom class under the Thom isomorphism.
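• To back up the Pfaffian remark, here is a tiny symbolic verification (a sketch, with a generic antisymmetric matrix, nothing specific to any bundle) of the identity Pf(A)^2 = det(A) in the 4 × 4 case, which is what makes the Chern-Weil expression square to a determinant-type class.

    # Verify Pf(A)^2 = det(A) for a generic 4x4 antisymmetric matrix.
    import sympy as sp

    a, b, c, d, e, f = sp.symbols('a b c d e f')
    A = sp.Matrix([
        [ 0,  a,  b,  c],
        [-a,  0,  d,  e],
        [-b, -d,  0,  f],
        [-c, -e, -f,  0],
    ])

    pf = a*f - b*e + c*d                  # Pfaffian of a 4x4 antisymmetric matrix
    print(sp.simplify(pf**2 - A.det()))   # prints 0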

• Steenrod squares are usually used in the construction of Stiefel-Whitney classes.
• Obstruction theory: let F → E → B be a fibration, where B is a CW-complex, and let B^[k] be its k-skeleton. Assume that we are given a section σ on B^[k]; then we can extend it to a section on B^[k+1] only if a certain class in H^{k+1}_{CW}(B; π_k(F)) ≅ H^{k+1}(Hom_{π_1(B)}(C^{cell}_*(B̃), π_k(F))) vanishes. This class is constructed as follows: for each (k+1)-cell α : D^{k+1} → B, we send α̃ to the element of π_k(F) represented by σ|_{∂α} : S^k → α^*E, under the isomorphisms π_k(α^*E) ≅ π_k(F_{α̃(0)}) ≅ π_k(F), where the second isomorphism can be chosen consistently for all cells of the universal cover. It is not trivial to see that this is a cocycle. Once that is established, one can also show that the cohomology class of the cocycle is independent of the section chosen on the k-skeleton. The point is that if the section extends to the (k+1)-skeleton, the obstruction class vanishes. This construction is natural with respect to pullbacks. See Hutchings' notes.

• Consider the Stiefel bundle of k-frames associated to a rank n real vector bundle E. One can check that V_k(R^n) is (n−k−1)-connected - consider the bundle GL(k) → V_k(n) → Gr_k(n), note that Gr_k(n) → Gr_k(∞) is (n−k−1)-connected and the map ΩGr_k(∞) → GL(k) is a homotopy equivalence. Also, there is a well defined mod 2 reduction of the obstruction class for extending a section over the (n−k+1)-skeleton, which gives an element of H^{n−k+1}(B, Z/2Z); this is precisely w_{n−k+1}(E). In the case k = 1 with E orientable, the relevant local system is trivial, so we get a class in H^n(B, Z), and this class is the Euler class.

• The way MS defines Chern classes is actually very similar to Grothendieck's method. You define them inductively: take a rank n complex vector bundle E → B and define a rank n−1 complex vector bundle over E_0 by taking orthogonal complements (either by picking a Hermitian metric or by taking the quotient).


Then write down the Gysin sequence for E_0 → B - note that E has a canonical orientation - and define all the Chern classes except the top dimensional one. Finally, define the top Chern class to be the Euler class of the underlying oriented real vector bundle.

• Pontrjagin classes are defined by complexifying the real vector bundle. They live in degrees which are multiples of four. They are directly related to the cohomology of the Grassmannian of oriented real planes with coefficients in a ring where 2 is inverted. If one takes a bundle E which is already complex and complexifies its underlying real bundle, one gets E ⊕ Ē, and correspondingly there is a formula relating the Pontrjagin and Chern classes of E (a small check is in the next item). If we have an oriented even rank real vector bundle, its top degree Pontrjagin class is the square of its Euler class.
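• To make the relation concrete in the smallest interesting case, a sketch for a rank 2 complex bundle E: writing c(E ⊕ Ē) = c(E)c(Ē) with c_i(Ē) = (−1)^i c_i(E) and p_k = (−1)^k c_{2k}(E_R ⊗ C), one gets p_1 = c_1^2 − 2c_2. The script only expands the product and reads off the degree four part.

    # p_1 of the underlying real bundle of a rank 2 complex bundle E,
    # from c(E (+) E-bar) = c(E) * c(E-bar).
    import sympy as sp

    c1, c2, t = sp.symbols('c1 c2 t')     # t is a formal variable marking cohomological degree 2

    cE    = 1 + c1*t + c2*t**2            # total Chern class of E
    cEbar = 1 - c1*t + c2*t**2            # conjugate bundle: c_i(E-bar) = (-1)^i c_i(E)

    total = sp.expand(cE * cEbar)         # total Chern class of the complexification E (+) E-bar
    p1 = -total.coeff(t, 2)               # p_1 = -c_2(complexification)
    print(sp.expand(p1))                  # c1**2 - 2*c2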

• Chern and Pontrjagin numbers - defined in the obvious way. Again these are cobordism invariants. Note that the definition can be modified to use any basis of the nth cohomology of the classifying space (do the pairing there). We will use the one given by taking symmetric sums of monomials (exponents equal to only 1 or 0 give the Chern classes, one-variable monomials give the Newton sums). Notice how a partition directly defines a class this way. There is a product formula for Whitney sums. The nth Newton sum element is crucial for the case of the tangent bundle of an n dimensional manifold - it vanishes for product manifolds, for example; deduce that projective space is not a product (a sample computation is in the next item). A nice theorem is that these numbers are linearly independent - proved very ingeniously by taking manifolds of dimensions 1, . . . , n with the crucial numbers nonzero, and arranging all the possible numbers we get into a triangular matrix with nonzero entries on the diagonal.
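• A sample computation in this spirit, as a sketch: using c(T CP^n) = (1 + h)^{n+1} with h^{n+1} = 0 and ⟨h^n, [CP^n]⟩ = 1, the snippet below reads off the Chern numbers of CP^2, namely c_1^2 = 9 and c_2 = 3 (the Euler characteristic), as well as the Newton sum number s_2 = c_1^2 − 2c_2 = 3 ≠ 0, consistent with CP^2 not being a product.

    # Chern numbers of CP^2 from c(T CP^2) = (1 + h)^3.
    # (h^3 = 0 in cohomology, but we only ever read off the coefficient of h^2,
    #  so no explicit truncation is needed.)
    import sympy as sp

    h = sp.symbols('h')
    n = 2
    c_total = sp.expand((1 + h)**(n + 1))
    c = [c_total.coeff(h, i) for i in range(n + 1)]   # c_i = c[i] * h^i

    c1_squared = sp.expand((c[1]*h)**2).coeff(h, n)   # evaluate against [CP^2]: coefficient of h^2
    c2_number  = (c[2]*h**2).coeff(h, n)
    s2         = c1_squared - 2*c2_number             # Newton sum s_2 = c_1^2 - 2 c_2

    print(c1_squared, c2_number, s2)                  # 9 3 3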

• To prove the splitting principle one should take the projectivization and use Leray-Hirsch. Working with the universal instance makes it all very simple and nice. This is also the way to prove the Cartan formula. Never forget the functorial meaning of BG; it gets you many things for free.

• Boundary maps in the long exact sequence of cohomology are actually not so weird. When we think about the Puppe sequence we see that they are in fact just actual pullback maps composed with the suspension isomorphism. For example this shows that Steenrod squares commute with the connecting boundary maps. This in turn shows that transgression commutes with Steenrod squares.
• Transgression. We say that a class in H^{n−1}(F) transgresses to a class in H^n(B) if they map to the same element under the maps δ : H^{n−1}(F) → H^n(E, F) and H^n(B) → H^n(E, F). Note that this is not a map or anything, just a relation - it is, however, a map from a subspace to a quotient. A better way to understand transgression is through the Serre spectral sequence, which identifies this subspace and quotient in terms of the edge homomorphisms: the well defined map is precisely d_n : E_n^{0,n−1} → E_n^{n,0}. There is also a homological version of this, and the nice point there is that spherical classes always transgress. This transgression explains the relationship between the cohomologies of U and BU, for example.


• Do not confuse the universal vector bundles with the universal principal G-bundles over the infinite Grassmannians. Also keep in mind the frame bundles and the Stiefel manifolds.
• One can compute the cohomologies of the Stiefel manifolds by brute force, with rational coefficients say. First compute the low degree homotopy groups - I talked about this a little bit above. Then compute for the 2-frame bundles using the spherical fibration. Then use two different kinds of fibrations for the induction: when the Euler class should be there you use the spherical one, otherwise you take two vectors and use the one with fiber a 2-frame bundle. In particular, you get the cohomology of O(n) and SO(n).
• The cohomology of Grassmannians is harder. You get pretty combinatorial formulas for the ring structure over the integers, involving Young tableaux machinery; the Littlewood-Richardson numbers give precisely the structure constants. A small illustration of the additive structure is in the next item.
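• To illustrate just the additive structure, as a sketch: the Schubert cells of the complex Grassmannian Gr_k(C^n) are indexed by partitions fitting in a k × (n−k) box, the cell of λ having real dimension 2|λ|, so the Poincare polynomial is a Gaussian binomial coefficient. The snippet tabulates cells by dimension for Gr_2(C^4).

    # Count Schubert cells of Gr_k(C^n) by complex dimension |lambda|,
    # where lambda runs over partitions fitting in a k x (n-k) box.
    from itertools import product
    from collections import Counter

    def schubert_cell_dimensions(k, n):
        box = n - k
        dims = Counter()
        for lam in product(range(box + 1), repeat=k):
            if all(lam[i] >= lam[i + 1] for i in range(k - 1)):   # weakly decreasing = a partition
                dims[sum(lam)] += 1
        return dims

    dims = schubert_cell_dimensions(2, 4)
    print(dict(sorted(dims.items())))   # {0: 1, 1: 1, 2: 2, 3: 1, 4: 1}; total 6 = binom(4, 2)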

2.2. Grothendieck's method for defining characteristic classes. The key point here is the classical Leray-Hirsch theorem.

Theorem 22. Let F → E → B be a fibration such that H^*(E) → H^*(F) is surjective, and assume that this restriction map admits a section q : H^*(F) → H^*(E) (of graded groups). Let us also assume that B is connected, for safety. Then the induced map H^*(F) ⊗ H^*(B) → H^*(E) is an isomorphism of H^*(B)-modules. If q can be chosen to be a ring homomorphism, then we get a ring isomorphism.

Proof. This all follows from the edge homomorphism for the fibre edge, which means that the map H^*(E) → H^*(F) factors through the fibre edge of the E_∞ page. By construction of the Serre spectral sequence, the fibre edge of the E_2 page is the π_1-invariants of the cohomology of the fibre. Since H^*(E) → H^*(F) is surjective, these invariants should actually be all of H^*(F), and the differentials starting on the fibre edge should all vanish from that point on. Hence the local system is trivial, and the spectral sequence collapses at the E_2 page. To finish, we notice that the map H^*(F) ⊗ H^*(B) → H^*(E) respects the filtration by antidiagonals and the filtration on H^*(E) coming from the construction of the spectral sequence. Since this map is an isomorphism at the level of associated gradeds, we are done. □

Now the construction of Grothendieck goes as follows. Construct c_1 or w_1 in some way for line bundles (take sections, or I guess more generally use the Thom isomorphism). Take the projectivization of the given vector bundle E (say of rank n); there is a canonical line bundle over it, and the naturality of c_1 shows that the conditions of the Leray-Hirsch theorem hold, with a preferred section. Consider the image ξ of c_1 ⊗ 1 in H^*(P(E)). The powers 1, ξ, . . . , ξ^{n−1} restrict to a basis of the cohomology of each fibre, so there are unique classes c_i ∈ H^{2i}(B), with c_0 = 1, satisfying Σ_{i=0}^{n} π^*(c_i) ξ^{n−i} = 0; these c_i's are defined to be the required classes. Their naturality follows from, among everything else in the construction, the naturality of the SSS.
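The algebraic identity behind the splitting principle statement "the c_i's become elementary symmetric polynomials in the Chern roots" is worth spelling out once: for E = L_1 ⊕ · · · ⊕ L_n a sum of line bundles with x_i = c_1(L_i), the relation above factors (up to the usual sign conventions) as Π_i (ξ + x_i) = 0, i.e. c_i(E) = e_i(x_1, . . . , x_n). Here is a sketch of the purely symbolic part of that claim for n = 3; the x_i are formal variables, nothing else is assumed.

    # Check prod_i (xi + x_i) = sum_k e_k(x) * xi^(3-k), the expansion used in the splitting principle.
    import sympy as sp
    from itertools import combinations

    xi = sp.symbols('xi')
    roots = sp.symbols('x1 x2 x3')

    def elementary_symmetric(k, variables):
        if k == 0:
            return sp.Integer(1)
        return sum(sp.Mul(*c) for c in combinations(variables, k))

    lhs = sp.expand(sp.Mul(*[xi + r for r in roots]))
    rhs = sp.expand(sum(elementary_symmetric(k, roots) * xi**(3 - k) for k in range(4)))
    print(sp.simplify(lhs - rhs))   # 0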

For the Cartan formula one needs to grind a little bit. Take the natural embedding j : P(E) → P(E ⊕ E′). There is a projection p : P(E ⊕ E′) − j(P(E)) → P(E′). Let us also denote the inclusion P(E ⊕ E′) − j(P(E)) → P(E ⊕ E′) by ρ. Define

α = Σ_{i=0}^{rank E} π^*_{P(E⊕E′)}(c_i(E)) ξ_{E⊕E′}^{rank E − i}

and let β be the analogous class for E′. What we are trying to show is α ∪ β = 0. By naturality of c_1, both j^*α and ρ^*β are zero.


Since ρ^*β is zero, β is in the image of H^*(P(E ⊕ E′), P(E ⊕ E′) − j(P(E))), and since j^*α is zero, α is in the image of H^*(P(E ⊕ E′), j(P(E))). Now generalities on relative cup products show what we are trying to show (the two classes come from relative groups for subspaces which together cover P(E ⊕ E′), so the product lives in H^*(X, X) = 0).

2.3. Computation of the cohomologies of BU, BO, BSO with convenient coefficients.

Cohomology of BU. We have U(n−1) → U(n) → S^{2n−1}. This can be extended to the right as S^{2n−1} → BU(n−1) → BU(n). This is because for any closed subgroup H → G of a Lie group, EG → EG/H is a model for the universal principal H-bundle, and the natural map EG/H → EG/G is the natural map BH → BG given by functoriality; for this it is enough to show that EG pulls back to EG ×_H G under that map.

Note that a model for BU(1) is CP^∞, whose cohomology ring is just a graded polynomial ring on one variable of degree 2. I want to remind myself here that it is actually not so easy to get the ring structure of the cohomology of projective spaces. On that note I also want to remind myself how to construct Poincare duals of submanifolds using the Thom isomorphism (also recall, more generally, the Gysin map), and how it relates the cup product to geometric intersection.

We write the SSS for the fiber sequence above and do induction. First define c_n as the image of the generator of the cohomology of the sphere under the transgression; it must be non-zero, since the generator has to die. Show that the cohomology lies in even dimensions. Using the Leibniz rule, show that all the relevant differentials are multiplication by c_n. Everything in the upper row should die, so we have an injective map which is multiplication by c_n, and the quotient is a polynomial ring. We can choose a section of this exact sequence which is a ring homomorphism. Hence our ring is, additively, a direct sum of the ideal generated by c_n and a polynomial ring. There is an obvious homomorphism from a polynomial algebra, which is clearly surjective, and a bit less clearly injective.
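A numerical illustration of the outcome (a sketch; it assumes the standard answer H^*(BU(n); Z) ≅ Z[c_1, . . . , c_n] with deg c_i = 2i): the rank of H^{2k}(BU(n)) is the number of partitions of k into parts of size at most n, which is also what the cell structure of the Grassmannian predicts. The snippet compares the series expansion of Π_{i=1}^{n} 1/(1 − t^{2i}) with a direct partition count.

    # Rank of H^{2k}(BU(n); Z): coefficient of t^(2k) in prod_{i=1..n} 1/(1 - t^(2i)),
    # compared with the number of partitions of k into parts <= n.
    import sympy as sp

    t = sp.symbols('t')

    def ranks_from_series(n, kmax):
        f = sp.Mul(*[1 / (1 - t**(2 * i)) for i in range(1, n + 1)])
        s = sp.series(f, t, 0, 2 * kmax + 1).removeO()
        return [int(s.coeff(t, 2 * k)) for k in range(kmax + 1)]

    def partitions_with_max_part(k, n):
        if k == 0:
            return 1
        return sum(partitions_with_max_part(k - p, min(p, n)) for p in range(1, min(k, n) + 1))

    n, kmax = 3, 8
    print(ranks_from_series(n, kmax))
    print([partitions_with_max_part(k, n) for k in range(kmax + 1)])   # the two lists agree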

We can use the same proof for computing the cohomologies of BSU(n) and BSp(n). For U(n), SU(n), Sp(n), we write down the fibration and the spectral sequence. This time the spectral sequence collapses by the Leibniz rule, as all the generators in the first column are sent to zero. The short exact sequences split and we deduce the ring structure from the spectral sequence.

Cohomology of BO. We want to do induction again: S^{n−1} → BO(n−1) → BO(n). We know that H^*(BO(1), F_2) is a polynomial ring in one variable. We run the Serre spectral sequence. We see that the generators of H^*(BO(n−1), F_2) are in the image of the induced map H^*(BO(n), F_2) → H^*(BO(n−1), F_2), hence this map is surjective. Therefore the Gysin exact sequence splits, and shows that the critical differential is injective. This differential is multiplication by some element, which is the new generator.

Cohomology of BSO. For this we use a coefficient ring in which 2 is inverted; I use rational coefficients. H^*(BSO(2), Q) is a polynomial ring on one variable of degree two, and H^*(BSO(3), Q) is a polynomial ring on one variable of degree four. Now we would like to do induction, but we see that it doesn't work because we lost the surjectivity we had in the BO(n) case. Hence we do something else, which in fact works for a general compact Lie group.

Let T be a maximal torus in G, and let N be its normalizer. The Weyl group is W = N/T. We factor the map T → G as T → N → G, which induces BT → BN → BG. Using the trick for finding a model of the classifying space of a subgroup, we see that BT = EG/T and BN = EG/N. Let us first analyze


r : H^*(BN, Q) → H^*(BT, Q), using the covering W → BT → BN = BT/W. The important point is that there is also a map H^*(BT, Q) → H^*(BN, Q), the transfer map. These two maps compose to |W| times the identity in one direction, showing injectivity of r. The image of r is invariant under the action of W - one can show easily that the map BT → BN is invariant under the action of W. The final point to make is that r is surjective onto the invariant classes, but this is also trivial, using the fact that the transfer map is defined at the level of chains first.

Now we consider the fibration (G/T)/W → BN → BG. First we show that H^*(G/T, Q) is concentrated in even degrees and that G/T has Euler characteristic equal to |W|. This is a direct consequence of the Bruhat decomposition. There should also be some explanation using Morse theory, but I don't exactly know how to do that. After we mod out by W, the Euler characteristic becomes 1 while the rational cohomology stays concentrated in even degrees (it is the W-invariant part), hence (G/T)/W has trivial reduced rational homology. Hence the SSS tells us that BN → BG induces an isomorphism of rings on cohomology.

We just need to find the invariants of the Weyl group action in the polynomial algebra corresponding to the cohomology of BT. In the case of SO(n), this action is described by permuting the variables and changing their signs. When n is odd, we can change signs whichever way we want, so we get the symmetric polynomials in the squares of the variables (which originally had degree 2). When n is even, we can only change an even number of signs, so we additionally have the product of the variables, which is the Euler class. Note that above I worked with the rationals, but it turns out that the only relevant prime is 2. I am a bit confused about this.
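A concrete check of the last paragraph in the smallest odd/even pair (a sketch; the group elements below are just signed permutation matrices on two variables, and nothing about BSO is used directly): Molien's formula computes the Hilbert series of the invariant ring, and one finds free generators in polynomial degrees {2, 4} for all signed permutations (symmetric functions of the squares, i.e. p_1, p_2 for BSO(5)), versus degrees {2, 2} for the evenly-signed permutations (x_1^2 + x_2^2 and the product x_1 x_2, i.e. p_1 and the Euler class for BSO(4)).

    # Molien series (1/|G|) * sum_g 1/det(I - t g) for the Weyl groups of SO(5) and SO(4)
    # acting on two variables: signed permutations vs. evenly-signed permutations.
    import sympy as sp
    from itertools import product

    t = sp.symbols('t')
    I2 = sp.eye(2)
    SWAP = sp.Matrix([[0, 1], [1, 0]])

    def signed_permutations(even_only):
        mats = []
        for signs in product([1, -1], repeat=2):
            if even_only and signs[0] * signs[1] != 1:
                continue
            for perm in (I2, SWAP):
                mats.append(sp.diag(*signs) * perm)
        return mats

    def molien(mats):
        return sp.simplify(sum(1 / (I2 - t * g).det() for g in mats) / len(mats))

    B2 = molien(signed_permutations(even_only=False))   # Weyl group of SO(5), order 8
    D2 = molien(signed_permutations(even_only=True))    # Weyl group of SO(4), order 4
    print(sp.simplify(B2 - 1 / ((1 - t**2) * (1 - t**4))))   # 0: generators in degrees 2 and 4
    print(sp.simplify(D2 - 1 / (1 - t**2)**2))               # 0: generators in degrees 2 and 2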

There are also the following two very cool theorems, which for example tell us directly that the cohomology of BG is in even degrees. The first one is Hopf's theorem, and was the reason why Hopf algebras were invented in the first place. The second one is called Borel's transgression theorem. If we already know that the generators transgress, then the rest is actually not so hard to prove, by constructing some model and using the comparison theorem for spectral sequences (see Zeeman's paper).

Theorem 23. The rational (or real, for that matter) cohomology of a compact connected Lie group is an exterior algebra on odd-degree generators.

Theorem 24. Suppose we have a spectral sequence that looks like the E_2 page of a Serre spectral sequence with trivial local coefficients, such that the total space has the cohomology of a point, and such that the fibre edge is an exterior algebra on odd-degree generators. Then:

• we can find homogeneous generators of the exterior algebra which transgress;
• the cohomology of the base is a polynomial algebra generated by the transgressions of those generators - in particular, the polynomial generators are in even degree.

    3. Major

    3.1. Classical Integrable Systems.

    3.1.1. Duistermaat’s paper.


    What is here?

• Construction of action-angle coordinates locally
• The monodromy and gluing invariants of a Lagrangian fibration, and how they completely determine the symplectic type of the fibration

Let (X^{2n}, ω) be a symplectic manifold. A classical integrable system can be described as either:

• A Lagrangian fibration π : X → B^n, possibly with singularities.
• A surjective proper map π : X → B^n such that the Poisson bracket of any two functions pulled back from B is zero, and such that π is a submersion on an open dense subset of X.

Initially we assume that the map π : X → B^n doesn't have singularities, i.e. it is a Lagrangian fibration on the nose, or equivalently a submersion everywhere. The equivalence of the two descriptions can be seen using Ehresmann's theorem plus some manipulations with the Lagrangian condition.

Notice that there is a fiberwise action of T^*B on X: take a covector at b, its pullback defines a one-form along π^{-1}(b), which in turn (via ω) defines a vector field on π^{-1}(b), tangent to the fibre by the Lagrangian condition; finally take the time-1 map of this vector field. Since the fibres are compact, this implies that they are disjoint unions of n-tori: the orbits have to be open subsets, so the orbits are precisely the connected components, and the result follows from the compactness of the fibres and the structure of discrete subgroups of R^n. This also gives a smoothly varying full rank lattice P in T^*B, assuming that the fibers are connected (or one can think about the possibly ramified finite covering of B whose points are the connected components of the fibers). The fact that this is an integral affine structure follows from the existence of action coordinates.

Theorem 25. Around each of these tori there are coordinates I_1, . . . , I_n on the base (around the point the torus projects to) and θ_1, . . . , θ_n in the fibers such that ω = Σ dI_i ∧ dθ_i. In other words, there is a fibre-preserving symplectomorphism between a product neighborhood of the torus and an open subset of T^*T^n → R^n; the Weinstein neighborhood theorem alone would only give a symplectomorphism, not a fibre-preserving one.
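Before the proof, the simplest example to keep in mind (a sketch, with the usual harmonic-oscillator conventions, so none of the notation below comes from the text): on R^2 \ {0} with ω = dp ∧ dq, the fibration by circles p^2 + q^2 = const has action variable I = (p^2 + q^2)/2 and angle θ, and ω = dI ∧ dθ; equivalently, the change of variables p = sqrt(2I) cos θ, q = sqrt(2I) sin θ has Jacobian determinant 1.

    # Action-angle coordinates for one harmonic oscillator:
    # p = sqrt(2 I) cos(theta), q = sqrt(2 I) sin(theta)  =>  dp ^ dq = dI ^ dtheta.
    import sympy as sp

    I = sp.symbols('I', positive=True)
    theta = sp.symbols('theta', real=True)
    p = sp.sqrt(2 * I) * sp.cos(theta)
    q = sp.sqrt(2 * I) * sp.sin(theta)

    # dp ^ dq = det(Jacobian) dI ^ dtheta, so it suffices to check the determinant is 1.
    J = sp.Matrix([[sp.diff(p, I), sp.diff(p, theta)],
                   [sp.diff(q, I), sp.diff(q, theta)]])
    print(sp.simplify(J.det()))   # 1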

Proof. Notice first that the key point is really to produce the action coordinates. We want to produce them in such a way that they are coordinates on the base and, if we lift them to functions on the total space and run their Hamiltonian flows, the time-one flows are all the identity. Once we do that, we can first construct a local section over an open subset of the base by locally (in the total space) completing the action variables to a symplectic coordinate system - which we do by a generating function argument. Then we can just move the points around using the Hamiltonian flows of these functions and parametrize the torus that way.

In the first proof, one of the steps is tricky in my opinion. Choose a trivializing chart U with coordinates x_1, . . . , x_n on the base. Also choose a basis of P over this chart, which we see as one-forms over U. We are trying to show that these sections are closed (and hence exact). More explicitly, we have functions T^{(i)}_j(x_1, . . . , x_n), and we want to show that ∂_k T^{(i)}_j = ∂_j T^{(i)}_k. We have a fiberwise action of U × R^n on π^{-1}(U) as described before, hence a map π^{-1}(U) ×_U (U × R^n) → π^{-1}(U). Restricting this action to the graph of T^{(i)} inside U × R^n gives the identity map, so we write the identity map as a composition of the inclusion and the action map, and see what happens to the vector field ∂/∂x_l using the chain rule.


This is essentially equivalent to computing how the action changes when we change x_l along the graph, which changes both x_l and T^{(i)} - Duistermaat does this by heart; I would have to do it by computing the Jacobians.

In the second proof, we start with a product neighborhood which is also a Weinstein neighborhood of a fibre. In this neighborhood the symplectic form is exact. We choose a primitive, and also a smoothly varying basis for the first integral homology of the fibres. We will integrate the primitive along the cycles to find the action coordinates. Note that changing the basis would change the coordinates by the action of GL(n, Z), and changing the primitive changes them by the translation action of R^n - and this is all the freedom there is once the lattice is given.

Let y, θ be the Weinstein coordinates. We have a Lagrangian foliation by the Liouville tori, and all the leaves are of the form (θ, y(θ, I)), i.e. they are graphical over the center leaf, and we use I to parametrize the set of leaves. Also fix some θ_0. Define the multivalued generating function S(θ, I) by integrating y dθ along paths lying inside the leaves and starting at θ_0. Now consider the canonical transformation defined by S and denote the canonical momentum conjugate to I by φ, which is a multivalued function. Let's see what happens to these functions when we go a full turn along the n linearly independent vector fields (the basis is the same basis we used for H_1). The change in the ith φ coordinate after a full turn along the jth basis element is easily computed to be 2πδ_{ij} (just interchange the difference with the derivative; there is also a normalization mistake here, but anyway), which proves the claim.

We can ask when X → B is isomorphic to T^*B/P (as a bundle). Clearly, they are smoothly isomorphic if and only if there is a section of X → B. Moreover, this isomorphism can be taken to be a symplectomorphism if and only if the image of the section is Lagrangian. But there is a better answer.

Cover X with action-angle coordinates, and choose arbitrary Lagrangian sections in each chart. On the intersections of charts the sections differ by a Lagrangian section (over the intersection) of T^*B, defined up to translation by P. These satisfy the cocycle condition, and hence define a Cech cocycle µ and a Cech cohomology class [µ] ∈ H^1(B, Λ_L(T^*B/P)), where Λ_L(T^*B/P) denotes the sheaf of Lagrangian sections. Clearly, the vanishing of [µ] implies the existence of a global Lagrangian section. If we instead view µ as a 1-cocycle with values in Λ(T^*B/P), then the vanishing of its cohomology class implies the existence of a smooth section. Consider the following diagram of short exact sequences.

(1)
        0 → P → Λ_L(T^*B) → Λ_L(T^*B/P) → 0
            =        ↓              ↓
        0 → P → Λ(T^*B)   → Λ(T^*B/P)   → 0

Identifying the sheaf of Lagrangian sections of T^*B with the sheaf of closed one-forms on B, the following short exact sequence (of sheaves) also comes in handy (it always does).

(2)     0 → R → C^∞(B, R) → C^∞_{clsd}(B, T^*B) → 0


Being sheaves of sections of vector bundles (hence modules over the sheaf of smooth functions, and in particular fine), Λ(T^*B) and C^∞(B, R) have no higher cohomology. So we get isomorphisms H^k(C^∞_{clsd}(B, T^*B)) → H^{k+1}(B, R). It is also nice to describe them explicitly. Take a Cech cover of B (with contractible intersections), take sections on the (k+1)-fold intersections which define a cocycle, and pick primitives for the closed forms. The coboundary operator applied to the primitives gives constants on the (k+2)-fold intersections. This is a cocycle because the Cech complex is a complex. This construction of course works in general for the connecting map in the long exact sequence of sheaf cohomologies. For the case k = 1 there is another description, which will be used below. Take a one-cocycle; it is a coboundary in Λ(T^*B), so choose a 0-cochain lifting it there (below we will have a given lift). Now apply the exterior derivative to this lift; the results glue to a well-defined 2-form, whose de Rham cohomology class is what we need.

Using the second row in (1) above, we can push the obstruction class to H^2(B, P). If this class vanishes, then we get a section B → X, since Λ(T^*B) has no cohomology. This doesn't give a Lagrangian section yet. Let us now assume that for some section s : B → X, s^*ω is exact. Take a cover of B by action-angle coordinates; s defines local one-forms s_i over the action coordinate charts, and their coboundary defines an element of H^1(C^∞_{clsd}(B, T^*B)) (modulo P). This class maps to [µ] under the map in the first row of (1), because if instead of the s_i's we took sections of P we would get the same cocycle. Finally, note that ds_i = s_i^*ω, and hence the de Rham 2-cohomology class we constructed above is really the class of ω. Hence, if ω is exact, [µ] = 0.

Having answered the question of when X is the same as T^*B/P, we now turn to the question of when T^*B/P is a trivial bundle.

The local system P is trivial if and only if its monodromy representation is trivial. First notice that by passing to a suitable covering space of the base, we can always make the local system P trivial - we can transfer the whole data to a covering space by pullback, and the local system there, which is also just the pullback local system, will be trivial.

In case P is trivial, we have a trivialization of the cotangent bundle by closed one-forms. If those one-forms happened to be exact, they would define a local diffeomorphism to R^n, and the integral affine structure on B would be pulled back from the standard one on R^n.

Passing to the universal cover (maybe less) we can kill all of H^1 and make these forms exact, but they are also automatically exact if ω is exact. We will show this by showing that ∫_γ a = 0 for any loop γ in B, where a is one of these forms. Let us form a two dimensional cycle β over γ by spinning a lift of γ, capped off to a loop within the fiber over the basepoint, using the vector field associated to a. By a local computation (use partitions of unity and a cover by action-angle coordinates) we can show ∫_γ a = ∫_β ω, which finishes the proof.

3.1.2. Almost toric manifolds.

What is here?

• Integral affine structures associated to Lagrangian fibrations
• Toric manifolds and symplectic boundary reduction
• Singularities of classical integrable systems, in particular the local model for the focus-focus singularity in 4 dimensions
• Base diagrams, and operations on the almost toric manifold that can be described through them


• Classification of almost toric manifolds

In this paper, we tend to see the integral affine structure in the tangent bundle instead of the cotangent bundle. Here one point that confuses me is that a lattice in the cotangent bundle defines an integral affine structure if locally there is a basis given by differentials of functions, whereas in the tangent bundle we should require the Lie brackets of the basis vector fields to vanish. To see the equivalence of the two (for dual lattices) one has to make a computation that I still have not done.

A more direct way to construct the lattice in the tangent bundle is to glue the locally defined integral affine structures (via the action-angle coordinates) together. More precisely, we have

Lemma 12. The fiber-preserving symplectomorphisms of (R^n × T^n, Σ dp_i ∧ dq_i) are exactly the maps (p, q) ↦ (A^{-T} p + c, Aq + f(p)), where A ∈ GL(n, Z), c ∈ R^n, and A^{-1} ∂f/∂p is symmetric.

Proof. Such a map R^n × T^n → R^n × T^n is equivalent to a fiber-preserving map T^*R^n → T^*R^n which sends every translate of the standard lattice to a translate of the standard lattice - apparently I get confused about maps T^n → T^n. Hence if we write the map as (p, q) ↦ (φ(p), A(p)q + f(p)) and write down what it means for it to be a symplectomorphism, we are done - A(p) has to be constant. □

Note that we didn't assume any integrality anywhere. This is kind of surprising actually.
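A symbolic check of the symplectic condition in Lemma 12 for n = 2 (a sketch; the matrix entries and the functions f_1, f_2 below are generic symbols, not anything from the text): writing the map's Jacobian in block form and testing Dφ^T J Dφ = J, the only obstruction that survives is the antisymmetric part of A^{-1} ∂f/∂p.

    # For phi(p, q) = (A^{-T} p + c, A q + f(p)) on R^2 x T^2 with omega = dp ^ dq,
    # Dphi^T J Dphi - J has a single nonzero block: A^{-1} df/dp minus its transpose.
    import sympy as sp

    p1, p2 = sp.symbols('p1 p2')
    A = sp.Matrix(2, 2, sp.symbols('a11 a12 a21 a22'))           # constant invertible matrix
    f = sp.Matrix([sp.Function('f1')(p1, p2), sp.Function('f2')(p1, p2)])
    F = f.jacobian([p1, p2])                                      # df/dp

    Z = sp.zeros(2, 2)
    Dphi = sp.BlockMatrix([[A.inv().T, Z], [F, A]]).as_explicit()            # Jacobian of phi
    J = sp.BlockMatrix([[Z, sp.eye(2)], [-sp.eye(2), Z]]).as_explicit()      # matrix of dp ^ dq

    M = sp.simplify(Dphi.T * J * Dphi - J)
    S = A.inv() * F
    print(sp.simplify(M[:2, :2] - (S - S.T)))    # zero matrix: the top-left block is S - S^T
    print(M[:2, 2:], M[2:, :2], M[2:, 2:])       # the remaining blocks vanish identically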

We can define affine subspaces, and affine length.

There are two different notions of monodromy when one is given a Lagrangian fibration: the monodromy of the first homology bundle of the fibres and the monodromy of the local system. If we take the local system in the cotangent bundle, then these agree under the induced identification of T^*_b B with H_1(F_b, R); hence they are transposed inverses of each other if we use the tangent bundle.

    Interlude: Atiyah-Guillemin-Sternberg convexity

Lemma 13. Let M be a compact connected smooth manifold, and let f be a Morse-Bott function such that all the critical manifolds have index and coindex not equal to one, i.e. the ascending and descending manifolds are not of codimension one. Then the level sets are connected.

Proof. The key point here is that the complement of a submanifold of codimension not equal to one in a connected manifold is also connected, which is in stark contrast with the Jordan-Brouwer separation theorem, for example. To prove this for codimension at least two we use Sard's theorem, which implies that any two submanifolds can be perturbed so that they become transverse.

We first show that there is exactly one index zero critical manifold - the same holds for coindex, of course. We know that M is the union of all the ascending manifolds, and if we use the previous paragraph, we find that the union of all the codimension zero ascending manifolds is connected, as desired. Hence we have a unique local minimum and a unique local maximum.

If c is just a bit above the minimum: take two points in the level set, connect them by going down to the minimal value critical manifold, move the path to miss the critical manifold, and push it up with the gradient flow. It continues like this; I skip it for now. □


One of the ideas is to notice that a point in the Lie algebra of the torus defines a one parameter subgroup of the torus and also a function on the manifold (the corresponding component of the moment map); this function is Morse-Bott, and its critical manifolds are the fixed point sets of that one parameter subgroup. Moreover one can show that the dimensions, indices, and coindices of all its critical manifolds are even, and that the critical submanifolds are symplectic. The key point here seems to be to choose a torus invariant almost complex structure.

Now by combining the two discussions, and doing symplectic reduction one dimension at a time, we can do induction and show that the preimages of the regular values of the moment map are connected. Then, by induction again, we want to show that the image is convex. This clearly holds for circle actions. For the induction step we use codimension one subtori which get denser and denser. The moment map for such a subtorus is the original moment map composed with the projection to some hyperplane. For any two points lying over the same point of that hyperplane, we look at the preimage of the moment map (of the small torus, of course) over this point, which is also assumed to be regular; we know it is connected, so we connect them, and their image under the big moment map has to lie on a line. For any two points, we approximate them by such pairs and prove the claim. With a bit more work, we can show that this convex image is the convex hull of the images of the fixed submanifolds of the action.

We finish the discussion of Atiyah-Guillemin-Sternberg convexity here.

In this paper, we want to work with things like toric manifolds, but we only want to think about the image of the moment map, recover the symplectic manifold from the image of the moment map using extra data, and forget about the action of the torus. More precisely, if we have a toric manifold (not necessarily compact), we have the projection to the quotient space X → B, and an immersion B → R^n induced by the moment map.

The observation is that for a toric manifold, the points on the toric divisors have normal forms for their neighborhoods. Since outside of the divisor a toric manifold is boring - a torus bundle over an open convex polytope in Euclidean space - we can maybe just take the closed pol