Exercises in Advanced Linear and Multilinear Algebra

John M. Erdman

Portland State University

Version May 25, 2009

© 2009 John M. Erdman

E-mail address: [email protected]



Contents

Preface

Some Algebraic Objects

Chapter 1. Abelian Groups
1.1. Background
1.2. Exercises (Due: Wed. Jan. 7)

Chapter 2. Functions and Diagrams
2.1. Background
2.2. Exercises (Due: Fri. Jan. 9)

Chapter 3. Rings
3.1. Background
3.2. Exercises (Due: Mon. Jan. 12)

Chapter 4. Vector Spaces and Subspaces
4.1. Background
4.2. Exercises (Due: Wed. Jan. 14)

Chapter 5. Linear Combinations and Linear Independence
5.1. Background
5.2. Exercises (Due: Fri. Jan. 16)

Chapter 6. Bases for Vector Spaces
6.1. Background
6.2. Exercises (Due: Wed. Jan. 21)

Chapter 7. Linear Transformations
7.1. Background
7.2. Exercises (Due: Fri. Jan. 23)

Chapter 8. Linear Transformations (continued)
8.1. Background
8.2. Exercises (Due: Mon. Jan. 26)

Chapter 9. Duality in Vector Spaces
9.1. Background
9.2. Exercises (Due: Wed. Jan. 28)

Chapter 10. Duality in Vector Spaces (continued)
10.1. Background
10.2. Exercises (Due: Fri. Jan. 30)

Chapter 11. The Language of Categories
11.1. Background
11.2. Exercises (Due: Mon. Feb. 2)

Chapter 12. Direct Sums
12.1. Background
12.2. Exercises (Due: Wed. Feb. 4)

Chapter 13. Products and Quotients
13.1. Background
13.2. Exercises (Due: Mon. Feb. 9)

Chapter 14. Products and Quotients (continued)
14.1. Background
14.2. Exercises (Due: Wed. Feb. 11)

Chapter 15. Projection Operators
15.1. Background
15.2. Exercises (Due: Fri. Feb. 13)

Chapter 16. Algebras
16.1. Background
16.2. Exercises (Due: Mon. Feb. 16)

Chapter 17. Spectra
17.1. Background
17.2. Exercises (Due: Wed. Feb. 18)

Chapter 18. Polynomials
18.1. Background
18.2. Exercises (Due: Fri. Feb. 20)

Chapter 19. Polynomials (continued)
19.1. Background
19.2. Exercises (Due: Mon. Feb. 23)

Chapter 20. Invariant Subspaces
20.1. Background
20.2. Exercises (Due: Wed. Feb. 25)

Chapter 21. The Spectral Theorem for Vector Spaces
21.1. Background
21.2. Exercises (Due: Fri. Feb. 27)

Chapter 22. The Spectral Theorem for Vector Spaces (continued)
22.1. Background
22.2. Exercises (Due: Mon. Mar. 2)

Chapter 23. Diagonalizable Plus Nilpotent Decomposition
23.1. Background
23.2. Exercises (Due: Wed. Mar. 4)

Chapter 24. Inner Product Spaces
24.1. Background
24.2. Exercises (Due: Mon. Mar. 9)

Chapter 25. Orthogonality and Adjoints
25.1. Background
25.2. Exercises (Due: Wed. Mar. 11)

Chapter 26. Orthogonal Projections
26.1. Background
26.2. Exercises (Due: Fri. Mar. 13)

Chapter 27. The Spectral Theorem for Inner Product Spaces
27.1. Background
27.2. Exercises (Due: Mon. Mar. 30)

Chapter 28. Multilinear Maps
28.1. Background (Differential calculus; Permutations; Multilinear maps)
28.2. Exercises (Due: Wed. Apr. 1)

Chapter 29. Determinants
29.1. Background
29.2. Exercises (Due: Fri. Apr. 3)

Chapter 30. Free Vector Spaces
30.1. Background
30.2. Exercises (Due: Mon. Apr. 6)

Chapter 31. Tensor Products of Vector Spaces
31.1. Background
31.2. Exercises (Due: Wed. Apr. 8)

Chapter 32. Tensor Products of Vector Spaces (continued)
32.1. Background
32.2. Exercises (Due: Fri. Apr. 10)

Chapter 33. Tensor Products of Linear Maps
33.1. Background
33.2. Exercises (Due: Mon. April 13)

Chapter 34. Grassmann Algebras
34.1. Background
34.2. Exercises (Due: Wed. April 15)

Chapter 35. Graded Algebras
35.1. Background
35.2. Exercises (Due: Fri. April 17)

Chapter 36. Existence of Grassmann Algebras
36.1. Background
36.2. Exercises (Due: Mon. April 20)

Chapter 37. The Hodge ∗-operator
37.1. Background
37.2. Exercises (Due: Wed. April 22)

Chapter 38. Differential Forms
38.1. Background
38.2. Exercises (Due: Fri. April 24)

Chapter 39. The Exterior Differentiation Operator
39.1. Background
39.2. Exercises (Due: Mon. April 27)

Chapter 40. Differential Calculus on R³
40.1. Background
40.2. Exercises (Due: Wed. April 29)

Chapter 41. Closed and Exact Forms
41.1. Background
41.2. Exercises (Due: Mon. May 4)

Chapter 42. The de Rham Cohomology Group
42.1. Background
42.2. Exercises (Due: Wed. May 6)

Chapter 43. Cochain Complexes
43.1. Background
43.2. Exercises (Due: Fri. May 8)

Chapter 44. Simplicial Homology
44.1. Background
44.2. Exercises (Due: Wed. May 13)

Chapter 45. Simplicial Cohomology
45.1. Background
45.2. Exercises (Due: Fri. May 15)

Chapter 46. Integration of Differential Forms
46.1. Background
46.2. Exercises (Due: Mon. May 18)

Chapter 47. Stokes’ Theorem
47.1. Background
47.2. Exercises (Due: Wed. May 20)

Chapter 48. Quadratic Forms
48.1. Background
48.2. Exercises (Due: Fri. May 22)

Chapter 49. Definition of Clifford Algebra
49.1. Background
49.2. Exercises (Due: Wed. May 27)

Chapter 50. Orthogonality with Respect to Bilinear Forms
50.1. Background
50.2. Exercises (Due: Fri. May 29)

Chapter 51. Examples of Clifford Algebras
51.1. Background
51.2. Exercises (Due: Mon. June 1)

Bibliography

Index


Preface

This collection of exercises is designed to provide a framework for discussion in a two-term senior/first-year graduate level class in linear and multilinear algebra such as the one I have conducted fairly regularly at Portland State University.

Most recently I have been using Douglas R. Farenick’s Algebras of Linear Transformations (Springer-Verlag, 2001) as a text for parts of the course. (In these Exercises it is referred to as AOLT.) For the more elementary parts of linear algebra there is certainly no shortage of readily available texts. In particular there are now a number of excellent online texts which are available free of charge. Among the best are Linear Algebra [14] by Jim Hefferon,

http://joshua.smcvt.edu/linearalgebra

A First Course in Linear Algebra [2] by Robert A. Beezer,

http://linear.ups.edu/download/fcla-electric-2.00.pdf

and Linear Algebra [9] by Paul Dawkins.

http://tutorial.math.lamar.edu/pdf/LinAlg/LinAlg_Complete.pdf

Another very useful online resource is Przemyslaw Bogacki’s Linear Algebra Toolkit [4].

http://www.math.odu.edu/~bogacki/lat

And, of course, many topics in linear algebra are discussed with varying degrees of thoroughness in the Wikipedia [28]

http://en.wikipedia.org

and Eric Weisstein’s Mathworld [27].

http://mathworld.wolfram.com

For more advanced topics in linear algebra some references that I particularly like are Paul Halmos’s Finite-Dimensional Vector Spaces [13], Hoffman and Kunze’s Linear Algebra [15], Charles W. Curtis’s Linear Algebra: An Introductory Approach [8], Steven Roman’s Advanced Linear Algebra [23], and William C. Brown’s A Second Course in Linear Algebra [5]. Readable introductions to some topics in multilinear algebra (exterior, graded, and Clifford algebras, for example) are a bit harder to come by. For these I will suggest specific sources later in these Exercises.

The short introductory Background section which precedes each assignment is intended to fix notation and provide “official” definitions and statements of important theorems for ensuing discussions.


Some Algebraic Objects

Let S be a nonempty set. Consider the following axioms:

(1) + : S × S → S. (+ is a binary operation, called addition, on S)
(2) (x + y) + z = x + (y + z) for all x, y, z ∈ S. (associativity of addition)
(3) There exists 0_S ∈ S such that x + 0_S = 0_S + x = x for all x ∈ S. (existence of an additive identity)
(4) For every x ∈ S there exists −x ∈ S such that x + (−x) = (−x) + x = 0_S. (existence of additive inverses)
(5) x + y = y + x for all x, y ∈ S. (commutativity of addition)
(6) m : S × S → S : (x, y) ↦ xy. (the map (x, y) ↦ xy is a binary operation, called multiplication, on S)
(7) (xy)z = x(yz) for all x, y, z ∈ S. (associativity of multiplication)
(8) (x + y)z = xz + yz and x(y + z) = xy + xz for all x, y, z ∈ S. (multiplication distributes over addition)
(9) There exists 1_S in S such that x 1_S = 1_S x = x for all x ∈ S. (existence of a multiplicative identity or unit)
(10) 1_S ≠ 0_S.
(11) For every x ∈ S such that x ≠ 0_S there exists x⁻¹ ∈ S such that x x⁻¹ = x⁻¹ x = 1_S. (existence of multiplicative inverses)
(12) xy = yx for all x, y ∈ S. (commutativity of multiplication)

Definitions.
• (S, +) is a semigroup if it satisfies axioms (1)–(2).
• (S, +) is a monoid if it satisfies axioms (1)–(3).
• (S, +) is a group if it satisfies axioms (1)–(4).
• (S, +) is an Abelian group if it satisfies axioms (1)–(5).
• (S, +, m) is a ring if it satisfies axioms (1)–(8).
• (S, +, m) is a commutative ring if it satisfies axioms (1)–(8) and (12).
• (S, +, m) is a unital ring (or ring with identity) if it satisfies axioms (1)–(9).
• (S, +, m) is a division ring (or skew field) if it satisfies axioms (1)–(11).
• (S, +, m) is a field if it satisfies axioms (1)–(12).

Remarks.
• A binary operation is often written additively, (x, y) ↦ x + y, if it is commutative and multiplicatively, (x, y) ↦ xy, if it is not. This is by no means always the case: in a commutative ring (the real numbers or the complex numbers, for example), both addition and multiplication are commutative.
• When no confusion is likely to result we often write 0 for 0_S and 1 for 1_S.
• Many authors require a ring to satisfy axioms (1)–(9).
• It is easy to see that axiom (10) holds in any unital ring except the trivial ring S = {0}.
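To make the taxonomy above concrete, here is a short classification of some standard examples (these examples are not part of the original text, but each classification is a well-known fact):

```latex
% Standard algebraic objects sorted by the axioms they satisfy.
\begin{itemize}
  \item $(\mathbb{N},+)$ satisfies (1)--(3) but not (4): a monoid that is not a group.
  \item $(\mathbb{Z},+,\cdot)$ satisfies (1)--(10) and (12) but not (11): a commutative
        unital ring that is not a field (for example, $2$ has no multiplicative
        inverse in $\mathbb{Z}$).
  \item $(\mathbb{Q},+,\cdot)$ satisfies all of (1)--(12): a field.
  \item The quaternions $\mathbb{H}$ satisfy (1)--(11) but not (12): a division ring
        that is not a field, since $ij = k \neq -k = ji$.
\end{itemize}
```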


CHAPTER 1

Abelian Groups

1.1. Background

Topics: groups, Abelian groups, group homomorphisms.

1.1.1. Definition. Let G and H be Abelian groups. A map f : G→ H is a homomorphism if

f(x+ y) = f(x) + f(y)

for all x, y ∈ G. We will denote by Hom(G,H) the set of all homomorphisms from G into H and will abbreviate Hom(G,G) to Hom(G).

1.1.2. Definition. Let G and H be Abelian groups. For f and g in Hom(G,H) we define

f + g : G → H : x ↦ f(x) + g(x).
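A quick check, added here for concreteness, that this pointwise sum is again a homomorphism (the computation is standard and works for any f, g ∈ Hom(G,H)):

```latex
% For f, g in Hom(G, H) and x, y in G:
\begin{align*}
  (f+g)(x+y) &= f(x+y) + g(x+y)            \\
             &= f(x) + f(y) + g(x) + g(y)  \\
             &= \bigl(f(x)+g(x)\bigr) + \bigl(f(y)+g(y)\bigr)
              = (f+g)(x) + (f+g)(y).
\end{align*}
% The regrouping in the third line uses commutativity of addition in H;
% this is where the hypothesis that H is Abelian is needed.
```

For instance, in Hom(ℤ, ℤ) the maps f(x) = 2x and g(x) = 3x have pointwise sum (f + g)(x) = 5x, again a homomorphism.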


1.2. Exercises (Due: Wed. Jan. 7)

1.2.1. Exercise. In AOLT, page 2, the author says that a vector space is, among other things, “an additive Abelian group”. What sort of Abelian group is an additive Abelian group? Give an example of a very common Abelian group which is not written additively.

1.2.2. Exercise. Prove that the identity element in an Abelian group is unique.

1.2.3. Exercise. Let x be an element of an Abelian group. Prove that the inverse of x isunique.

1.2.4. Exercise. Let x be an element of an Abelian group. Prove that if x+x = x, then x = 0.

1.2.5. Exercise (†). Let x be an element of an Abelian group. Prove that −(−x) = x.

1.2.6. Exercise. Let f : G→ H be a homomorphism of Abelian groups. Show that f(0) = 0.

1.2.7. Exercise (†). Let f : G → H be a homomorphism of Abelian groups. Show that f(−x) = −f(x) for each x ∈ G.

1.2.8. Exercise. Let E2 be the Euclidean plane. It contains points (which do not have coordinates) and lines (which do not have equations). A directed segment is an ordered pair of points. Define two directed segments to be equivalent if they are congruent (have the same length), lie on parallel lines, and have the same direction. This is clearly an equivalence relation on the set DS of directed segments in the plane. We denote by →PQ the equivalence class containing the directed segment (P, Q), going from the point P to the point Q. Define an operation on DS by

→PQ + →QR = →PR.

Show that this operation is well defined and that under it DS is an Abelian group.

1.2.9. Exercise. Suppose that A, B, C, and D are points in the plane such that →AB = →CD. Show that →AC = →BD.

1.2.10. Exercise. Let G and H be Abelian groups. Show that with addition as defined in 1.1.2, Hom(G,H) is an Abelian group.


CHAPTER 2

Functions and Diagrams

2.1. Background

Topics: functions, commutative diagrams.

2.1.1. Definition. It is frequently useful to think of functions as arrows in diagrams. For example, the situation h : R → S, j : R → T, k : S → U, f : T → U may be represented by the following diagram.

    R ---j---> T
    |          |
    h          f
    v          v
    S ---k---> U

The diagram is said to commute if k ∘ h = f ∘ j. Diagrams need not be rectangular. For instance,

    R ---h---> S
      \        |
       d       k
        \      v
         '---> U

is a commutative diagram if d = k ∘ h.

2.1.2. Example. Here is one diagrammatic way of stating the associative law for composition of functions: If h : R → S, g : S → T, and f : T → U and we define j and k so that the triangles in the diagram

    R ---j---> T
    |        ↗ |
    h      g   f
    v    /     v
    S ---k---> U

(with g : S → T along the diagonal) commute, then the square also commutes.

2.1.3. Convention. If S, T, and U are sets we will not distinguish between (S × T) × U, S × (T × U), and S × T × U. That is, the ordered pairs ((s, t), u) and (s, (t, u)) and the ordered triple (s, t, u) will be treated as identical.

2.1.4. Notation. Let S be a set. The map

id_S : S → S : x ↦ x

is the identity function on S. When no confusion will result we write id for id_S.

2.1.5. Notation. Let f : S → U and g : T → V be functions between sets. Then f × g denotes the map

f × g : S × T → U × V : (s, t) ↦ (f(s), g(t)).


2.1.6. Convention. We will have use for a standard one-element set, which, if we wish, we can regard as the Cartesian product of an empty family of sets. We will denote it by 1. For each set S there is exactly one function from S into 1. We will denote it by ε_S. If no confusion is likely to arise we write ε for ε_S.

2.1.7. Notation. We denote by δ the diagonal mapping of a set S into S × S. That is,

δ : S → S × S : s ↦ (s, s).

2.1.8. Notation. Let S be a set. We denote by σ the interchange (or switching) operation on S × S. That is,

σ : S × S → S × S : (s, t) ↦ (t, s).

2.1.9. Notation. If S and T are sets we denote by F(S, T) the family of all functions from S into T. We will use F(S) for F(S, R), the set of all real valued functions on S.


2.2. Exercises (Due: Fri. Jan. 9)

2.2.1. Exercise. Let S be a set and a : S × S → S be a function such that the diagram

                a × id
    S × S × S =======⇒ S × S ---a---> S        (D1)
                id × a

commutes; that is, a ∘ (a × id) = a ∘ (id × a). What is (S, a)? Hint. Interpret a as, for example, addition (or multiplication).

2.2.2. Exercise. Let S be a set and suppose that a : S × S → S and η : 1 → S are functions such that both diagram (D1) above and the diagram (D2) which follows commute.

    1 × S ---η×id---> S × S <---id×η--- S × 1
          \             |             /
           f            a            g
            \           v           /
             '--------> S <--------'          (D2)

(Here f and g are the obvious bijections.) What is (S, a, η)?

2.2.3. Exercise (†). Let S be a set and suppose that a : S × S → S and η : 1 → S are functions such that the diagrams (D1) and (D2) above commute. Suppose further that there is a function ι : S → S for which the following diagram commutes.

         δ             ι × id
    S ------> S × S =========⇒ S × S ---a---> S
     \                 id × ι               ↗
      ε                                    η
       \                                  /
        '-------------> 1 --------------'      (D3)

(That is, a ∘ (ι × id) ∘ δ = η ∘ ε = a ∘ (id × ι) ∘ δ.) What is (S, a, η, ι)?

2.2.4. Exercise. Let S be a set and suppose that a : S × S → S, η : 1 → S, and ι : S → S are functions such that the diagrams (D1), (D2), and (D3) above commute. Suppose further that the following diagram commutes.

    S × S ---σ---> S × S
          \          |
           a         a
            \        v
             '-----> S          (D4)

What is (S, a, η, ι, σ)?

2.2.5. Exercise. Let f : G → H be a function between Abelian groups. Suppose that the diagram

    G × G ---f×f---> H × H
      |                |
      +                +
      v                v
      G ------f------> H

commutes. What can be said about the function f?

2.2.6. Exercise (††). Let S be a set with exactly one element. Discuss the cardinality of (that is, the number of elements in) the sets F(∅, ∅), F(∅, S), F(S, ∅), and F(S, S).


CHAPTER 3

Rings

3.1. Background

Topics: rings; zero divisors; cancellation property; ring homomorphisms.

3.1.1. Definition. An element a of a ring is left cancellable if ab = ac implies that b = c. It is right cancellable if ba = ca implies that b = c. A ring has the cancellation property if every nonzero element of the ring is both left and right cancellable.

3.1.2. Definition. A nonzero element a of a ring is a zero divisor (or divisor of zero) if there exists a nonzero element b of the ring such that (i) ab = 0 or (ii) ba = 0.

Most everyone agrees that a nonzero element a of a ring is a left divisor of zero if it satisfies (i) for some nonzero b and a right divisor of zero if it satisfies (ii) for some nonzero b. There, agreement on terminology ceases. Some authors ([7], for example) use the definition above for divisor of zero; others ([17], for example) require a divisor of zero to be both a left and a right divisor of zero; and yet others ([18], for example) avoid the issue entirely by defining zero divisors only for commutative rings. Palmer in [22] makes the most systematic distinctions: a zero divisor is defined as above; an element which is both a left and a right zero divisor is a two-sided zero divisor; and if the same nonzero b makes both (i) and (ii) hold, a is a joint zero divisor.
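A standard concrete example, added here for illustration: in the commutative ring ℤ₆ of integers modulo 6,

```latex
% 2 and 3 are zero divisors in Z_6:
2 \cdot 3 \equiv 0 \pmod 6, \qquad 2 \not\equiv 0, \quad 3 \not\equiv 0 .
% Since Z_6 is commutative, each is automatically a two-sided
% (indeed a joint) zero divisor. Cancellation fails as well:
% 2 * 3 = 2 * 0 (mod 6), yet 3 != 0.
```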

3.1.3. Definition. A function f : R→ S between rings is a (ring) homomorphism if

f(x+ y) = f(x) + f(y)

and f(xy) = f(x)f(y)

for all x and y in R. If in addition R and S are unital rings and f(1_R) = 1_S, then f is a unital (ring) homomorphism.


3.2. Exercises (Due: Mon. Jan. 12)

3.2.1. Exercise. Show that the additive identity of a ring is an annihilator. That is, show that for every element a of a ring 0a = a0 = 0.

3.2.2. Exercise. Show that if a and b are elements of a ring, then (−a)b = a(−b) = −ab and (−a)(−b) = ab.

3.2.3. Exercise. Every division ring has the cancellation property.

3.2.4. Exercise. A division ring has no zero divisors. That is, if ab = 0 in a division ring, then a = 0 or b = 0.

3.2.5. Exercise (†). A ring has the cancellation property if and only if it has no zero divisors.

3.2.6. Exercise. Let G, H, and J be Abelian groups. If f ∈ Hom(G,H) and g ∈ Hom(H,J), then the composite of g with f belongs to Hom(G, J). Note: This composite is usually written as gf rather than as g ∘ f.

3.2.7. Exercise. Let G be an Abelian group. Then Hom(G) is a unital ring (under the operations of addition and composition).

3.2.8. Exercise. Let (G,+) be an Abelian group, F a field, and M : F → Hom(G) be a unital ring homomorphism. What is (G,+,M)?


CHAPTER 4

Vector Spaces and Subspaces

4.1. Background

Topics: vector spaces, linear (or vector) subspaces.

4.1.1. Notation.

F = F[a, b] = {f : f is a real valued function on the interval [a, b]}
P = P[a, b] = {p : p is a polynomial function on [a, b]}
P4 = P4[a, b] = {p ∈ P : the degree of p is three or less}
Q4 = Q4[a, b] = {p ∈ P : the degree of p is equal to 4}
C = C[a, b] = {f ∈ F : f is continuous}
D = D[a, b] = {f ∈ F : f is differentiable}
K = K[a, b] = {f ∈ F : f is a constant function}
B = B[a, b] = {f ∈ F : f is bounded}
J = J[a, b] = {f ∈ F : f is integrable}

(A function f ∈ F is bounded if there exists a number M ≥ 0 such that |f(x)| ≤ M for all x in [a, b]. It is (Riemann) integrable if it is bounded and the Riemann integral ∫_a^b f(x) dx exists.)
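A standard example, added here to illustrate the gap between boundedness and integrability: the Dirichlet function on [a, b] belongs to B but not to J.

```latex
% The characteristic function of the rationals in [a, b]:
f(x) =
  \begin{cases}
    1 & \text{if } x \in \mathbb{Q} \cap [a,b], \\
    0 & \text{otherwise.}
  \end{cases}
% Here |f(x)| <= 1, so f lies in B[a,b]. But every upper Riemann sum
% equals b - a while every lower sum equals 0, so the Riemann integral
% does not exist: f does not lie in J[a,b].
```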

4.1.2. Notation. For a vector space V we will write U ≤ V to indicate that U is a subspace of V. To distinguish this concept from other uses of the word “subspace” (topological subspace, for example) writers frequently use the expressions linear subspace, vector subspace, or linear manifold.

4.1.3. Definition. If A and B are subsets of a vector space then the sum of A and B, denotedby A+B, is defined by

A + B := {a + b : a ∈ A and b ∈ B}.
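A small concrete instance of this definition, added for illustration, computed in the vector space R²:

```latex
% With A = {(1,0), (0,1)} and B = {(0,0), (1,1)} in R^2:
A + B = \{\, a + b : a \in A,\ b \in B \,\}
      = \{ (1,0),\ (2,1),\ (0,1),\ (1,2) \}.
% Whether A + B is a subspace when A and B are subspaces is the
% subject of exercise 4.2.10(c) below.
```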


4.2. Exercises (Due: Wed. Jan. 14)

4.2.1. Exercise (†). Show that condition (b)(ii) in the author’s definition of vector space (AOLT, page 2) is redundant. Using only the other vector space axioms and exercise 1.2.4 prove that in fact αv = 0 if and only if α = 0 or v = 0.

4.2.2. Exercise. Let x be an element of a vector space. Using only the vector space axioms and the preceding exercises prove that (−1)x is the additive inverse of x. That is, show that (−1)x = −x.

4.2.3. Exercise. Let V be the set of all real numbers. Define an operation of “addition” by

x ⊕ y = the maximum of x and y

for all x, y ∈ V. Define an operation of “scalar multiplication” by

α ⊙ x = αx

for all α ∈ R and x ∈ V. Prove or disprove: under the operations ⊕ and ⊙ the set V is a vector space.

4.2.4. Exercise (†). Let V be the set of all real numbers x such that x > 0. Define an operation of “addition” by

x ⊕ y = xy

for all x, y ∈ V. Define an operation of “scalar multiplication” by

α ⊙ x = x^α

for all α ∈ R and x ∈ V. Prove or disprove: under the operations ⊕ and ⊙ the set V is a vector space.

4.2.5. Exercise. Let V be R², the set of all ordered pairs (x, y) of real numbers. Define an operation of “addition” by

(u, v) ⊕ (x, y) = (u + x + 1, v + y + 1)

for all (u, v) and (x, y) in V. Define an operation of “scalar multiplication” by

α ⊙ (x, y) = (αx, αy)

for all α ∈ R and (x, y) ∈ V. Prove or disprove: under the operations ⊕ and ⊙ the set V is a vector space.

4.2.6. Exercise. Find the smallest subspace of R³ containing the vectors (2, −3, −3) and (0, 3, 2).

4.2.7. Exercise. The author’s definition of “subspace” (AOLT, page 2) is not quite correct (especially in view of the sentence following the definition). Explain why. Provide a correct definition.

4.2.8. Exercise. Let α, β, and γ be real numbers. Prove that the set of all solutions to the differential equation

αy′′ + βy′ + γy = 0

is a subspace of the vector space of all twice differentiable functions on R.

4.2.9. Exercise. For a fixed interval [a, b], which sets of functions in the notation list 4.1.1 are vector subspaces of which?

4.2.10. Exercise (††). Let U and V be subspaces of a vector space W, and let {Vλ : λ ∈ Λ} be a (perhaps infinite) family of subspaces of W. Consider the following subsets of W.


(a) U ∩ V
(b) U ∪ V
(c) U + V
(d) U − V
(e) ⋂_{λ∈Λ} Vλ

Which of these are subspaces of W?

4.2.11. Exercise. Let R∞ denote the (real) vector space of all sequences x = (x_k) = (x_k)_{k=1}^∞ = (x1, x2, x3, . . . ) of real numbers. Define addition and scalar multiplication pointwise. That is,

x + y = (x1 + y1, x2 + y2, x3 + y3, . . . )

for all x, y ∈ R∞ and

αx = (αx1, αx2, αx3, . . . )

for all α ∈ R and x ∈ R∞. Which of the following are subspaces of R∞?

(a) the sequences which have infinitely many zeros;
(b) the sequences which have only finitely many nonzero terms;
(c) the decreasing sequences;
(d) the convergent sequences;
(e) the sequences which converge to zero;
(f) the arithmetic progressions (that is, sequences such that x_{k+1} − x_k is constant);
(g) the geometric progressions (that is, sequences such that x_{k+1}/x_k is constant);
(h) the bounded sequences (that is, sequences for which there is a number M > 0 such that |x_k| ≤ M for all k);
(i) the absolutely summable sequences (that is, sequences such that ∑_{k=1}^∞ |x_k| < ∞).


CHAPTER 5

Linear Combinations and Linear Independence

5.1. Background

Topics: linear combinations, linear independence, span. (See AOLT, p. 2.)

5.1.1. Definition. Let A be a subset of a vector space V. The span of A is the set of all linear combinations of elements of A. It is denoted by span(A) (or by span_F(A) if we wish to emphasize the role of the scalar field F). The subset A spans the space V if V = span(A). In this case we also say that A is a spanning set for V.
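For concreteness, a standard example (not in the original text), computed in R³:

```latex
% The span of a two-element set in R^3:
\operatorname{span}\{(1,0,0),\,(0,1,0)\}
  = \{\, \alpha(1,0,0) + \beta(0,1,0) : \alpha, \beta \in \mathbb{R} \,\}
  = \{\, (\alpha, \beta, 0) : \alpha, \beta \in \mathbb{R} \,\},
% i.e. the xy-plane. By the usual convention an empty linear
% combination equals 0, so span(emptyset) = {0}.
```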


5.2. Exercises (Due: Fri. Jan. 16)

5.2.1. Exercise. Prove that if A is a nonempty subset of a vector space V, then spanA is a subspace of V. (This is the claim made in AOLT, page 2, lines 19–20.)

5.2.2. Exercise. Let A be a nonempty subset of a vector space V. What, exactly, do we mean when we speak of the smallest subspace of V which contains A?

5.2.3. Exercise. Let A be a nonempty subset of a vector space V. Then the following sets are equal:

(a) span(A);
(b) the intersection of the family of all subspaces of V which contain A; and
(c) the smallest subspace of V which contains A.

5.2.4. Exercise. Verify that supersets of linearly dependent sets are linearly dependent and that subsets of linearly independent sets are linearly independent. That is, show that if V is a vector space, A is a linearly dependent subset of V, and B is a subset of V which contains A, then B is linearly dependent. Also show that if B is a linearly independent subset of V and A is contained in B, then A is linearly independent.

5.2.5. Exercise. Let w = (1, 1, 0, 0), x = (1, 0, 1, 0), y = (0, 0, 1, 1), and z = (0, 1, 0, 1).

(a) Show that {w, x, y, z} does not span (that is, generate) R⁴ by finding a vector u in R⁴ such that u ∉ span{w, x, y, z}.
(b) Show that {w, x, y, z} is a linearly dependent set of vectors by finding scalars α, β, γ, and δ—not all zero—such that αw + βx + γy + δz = 0.
(c) Show that {w, x, y, z} is a linearly dependent set by writing z as a linear combination of w, x, and y.

5.2.6. Exercise. In the vector space C[0, π] of continuous functions on the interval [0, π] define the vectors f, g, and h by

f(x) = x

g(x) = sin x

h(x) = cos x

for 0 ≤ x ≤ π. Show that f , g, and h are linearly independent.

5.2.7. Exercise. In the vector space C[0, π] of continuous functions on [0, π] let f, g, h, and j be the vectors defined by

f(x) = 1
g(x) = x
h(x) = cos x
j(x) = cos²(x/2)

for 0 ≤ x ≤ π. Show that f, g, h, and j are linearly dependent by writing j as a linear combination of f, g, and h.

5.2.8. Exercise. Let a, b, and c be distinct real numbers. Show that the vectors (1, 1, 1), (a, b, c), and (a², b², c²) form a linearly independent subset of R³.

5.2.9. Exercise (†). In the vector space C[0, 1] define the vectors f , g, and h by

f(x) = x

g(x) = e^x

h(x) = e^(−x)


for 0 ≤ x ≤ 1. Are f , g, and h linearly independent?

5.2.10. Exercise. Let u = (λ, 1, 0), v = (1, λ, 1), and w = (0, 1, λ). Find all values of λ which make {u, v, w} a linearly dependent subset of R³.

5.2.11. Exercise (†). Suppose that {u, v, w} is a linearly independent set in a vector space V. Show that the set {u + v, u + w, v + w} is linearly independent in V.


CHAPTER 6

Bases for Vector Spaces

6.1. Background

Topics: basis, dimension, partial ordering, comparable elements, linear ordering, chain, maximalelement, largest element, axiom of choice, Zorn’s lemma. (See AOLT, page 2 and section 1.6.)

6.1.1. Definition. A relation on a set S is a subset of the Cartesian product S × S. If the relation is denoted by ≤, then it is conventional to write x ≤ y (or equivalently, y ≥ x) rather than (x, y) ∈ ≤.

6.1.2. Definition. A relation ≤ on a set S is reflexive if x ≤ x for all x ∈ S. It is transitive if x ≤ z whenever x ≤ y and y ≤ z. It is antisymmetric if x = y whenever x ≤ y and y ≤ x. A relation which is reflexive, transitive, and antisymmetric is a partial ordering. A partially ordered set is a set on which a partial ordering has been defined.

6.1.3. Example. The set R of real numbers is a partially ordered set under the usual relation ≤.

6.1.4. Example. A family A of subsets of a set S is a partially ordered set under the relation ⊆. When A is ordered in this fashion it is said to be ordered by inclusion.

6.1.5. Example. Let F(S) be the family of real valued functions defined on a set S. For f, g ∈ F(S) write f ≤ g if f(x) ≤ g(x) for every x ∈ S. This is a partial ordering on F(S).

6.1.6. Definition. Let A be a subset of a partially ordered set S. An element u ∈ S is an upper bound for A if a ≤ u for every a ∈ A. An element m in the partially ordered set S is maximal if there is no element of the set which is strictly greater than m; that is, m is maximal if c = m whenever c ∈ S and c ≥ m. An element m in S is the largest element of S if m ≥ s for every s ∈ S.

6.1.7. Example. Let S = {a, b, c} be a three-element set. The family P(S) of all subsets of S is partially ordered by inclusion. Then S is the largest element of P(S)—and, of course, it is also a maximal element of P(S). The family Q(S) of all proper subsets of S has no largest element; but it has three maximal elements {b, c}, {a, c}, and {a, b}.

6.1.8. Definition. A linearly independent subset of a vector space V which spans V is a (Hamel) basis for V. We make a special convention that the empty set is a basis for the trivial vector space {0}.

6.1.9. Definition. Let A be a subset of a partially ordered set S. An element l ∈ S is a lower bound for A if l ≤ a for every a ∈ A. An element l in the partially ordered set S is minimal if there is no element of the set which is strictly less than l; that is, l is minimal if c = l whenever c ∈ S and c ≤ l. An element l in S is the smallest element of S if l ≤ s for every s ∈ S.


6.1.10. Definition. Two elements x and y in a partially ordered set are comparable if either x ≤ y or y ≤ x. A chain (or linearly ordered subset) is a subset of a partially ordered set in which every pair of elements is comparable.
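A small illustration of these order-theoretic notions (standard, added here), using the power set of the three-element set of example 6.1.7 ordered by inclusion:

```latex
% A chain in P({a,b,c}) under inclusion:
\varnothing \subseteq \{a\} \subseteq \{a,b\} \subseteq \{a,b,c\}.
% Any two of these four sets are comparable. By contrast, {a} and {b}
% are not comparable (neither contains the other), so {{a},{b}} is not
% a chain.
```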

6.1.11. Axiom (Axiom of Choice). If A = {Aλ : λ ∈ Λ} is a nonempty family of nonempty pairwise disjoint sets, then there exists a function f : Λ → ⋃A such that f(λ) ∈ Aλ for every λ ∈ Λ.

6.1.12. Axiom (Zorn’s Lemma). If each chain in a nonempty partially ordered set S has an upper bound in S, then S has a maximal element. (See AOLT, page 31, lines 4–7.)

The following theorem is an extremely important item in the set-theoretic foundations of mathematics. Its proof is difficult and can be found in any good book on set theory.

6.1.13. Theorem. The axiom of choice and Zorn’s lemma are equivalent.


6.2. Exercises (Due: Wed. Jan. 21)

6.2.1. Exercise. A linearly independent subset of a vector space V is a basis for V if and only if it is a maximal linearly independent subset. (This is AOLT, proposition 1.35.)

6.2.2. Exercise (††). A spanning subset for a nontrivial vector space V is a basis for V if and only if it is a minimal spanning set for V.

6.2.3. Exercise. Let {e1, . . . , en} be a basis for a vector space V. Then for each v ∈ V there exist unique scalars α1, . . . , αn such that v = ∑_{k=1}^{n} αk ek.

6.2.4. Exercise. Let {e1, . . . , en} be a basis for a vector space V and v = ∑_{k=1}^{n} αk ek be a vector in V. If αp ≠ 0 for some p, then {e1, . . . , e_{p−1}, v, e_{p+1}, . . . , en} is a basis for V.

6.2.5. Exercise. If some basis for a vector space V contains n elements, then every linearly independent subset of V with n elements is also a basis. Hint. Suppose {e1, . . . , en} is a basis for V and {v1, . . . , vn} is linearly independent in V. Start by using exercise 6.2.4 to show that (after perhaps renumbering the ek’s) the set {v1, e2, . . . , en} is a basis for V.

6.2.6. Exercise. In a finite dimensional vector space V any two bases have the same number of elements. (This establishes the claim made in AOLT, page 2, lines 25–27.) This number is the dimension of V , and is denoted by dimV .

6.2.7. Exercise. Let S3 be the vector space of all symmetric 3 × 3 matrices of real numbers. (A matrix [aij ] is symmetric if aij = aji for all i and j.)

(a) What is the dimension of S3?
(b) Find a basis for S3.
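Both parts can be sanity-checked by machine: the sketch below (numpy; not part of the original exercise) builds the obvious candidate basis of diagonal and symmetrized matrix units and confirms it consists of six independent symmetric matrices.

```python
import numpy as np

# Candidate basis for S3: the three diagonal units E_ii together with the
# three symmetrized units E_ij + E_ji for i < j.
def E(i, j):
    M = np.zeros((3, 3))
    M[i, j] = 1.0
    return M

basis = [E(i, i) for i in range(3)] + \
        [E(i, j) + E(j, i) for i in range(3) for j in range(i + 1, 3)]

assert all((M == M.T).all() for M in basis)            # each matrix is symmetric
flat = np.array([M.flatten() for M in basis])
assert np.linalg.matrix_rank(flat) == len(basis) == 6  # independent, so dim S3 = 6
```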

6.2.8. Exercise. AOLT, section 1.8, exercise 1.

6.2.9. Exercise (†). Let U be the set of all matrices of real numbers of the form

    [ u   −u−x ]
    [ 0     x  ]

and V be the set of all real matrices of the form

    [ v    0 ]
    [ w   −v ].

Find bases for U , V , U + V , and U ∩ V .

6.2.10. Exercise. Let A be a linearly independent subset of a vector space V . Then there exists a basis for V which contains A. (This is AOLT, Theorem 1.36. Can you think of any way to shorten the proof?)

6.2.11. Exercise. Every vector space has a basis. (This is AOLT, Corollary 1.37.)


CHAPTER 7

Linear Transformations

7.1. Background

Topics: linear maps, kernel, range, invertible linear map, isomorphism.

7.1.1. Definition. Let V and W be vector spaces over the same field F. A function T : V → W is linear if T (x + y) = Tx + Ty and T (αx) = αTx for all x, y ∈ V and α ∈ F.

7.1.2. Notation. If V and W are vector spaces the family of all linear functions from V into W is denoted by L(V,W ). Linear functions are usually called linear transformations, linear maps, or linear mappings. When V = W we condense the notation L(V, V ) to L(V ) and we call the members of L(V ) operators.

7.1.3. Definition. Let T : V → W be a linear transformation between vector spaces. Then kerT , the kernel (or nullspace) of T , is defined to be the set of all x in V such that Tx = 0. Also, ranT , the range of T (or the image of T ), is the set of all y in W such that y = Tx for some x in V . The rank of T is the dimension of its range and the nullity of T is the dimension of its kernel.

7.1.4. Definition. Let T : V → W be a linear transformation between vector spaces and let U be a subspace of V . Define T→(U) := {Tx : x ∈ U}. This is the (direct) image of U under T .

7.1.5. Definition. Let T : V → W be a linear transformation between vector spaces and let U be a subspace of W . Define T←(U) := {x ∈ V : Tx ∈ U}. This is the inverse image of U under T .

7.1.6. Definition. A linear map T : V → W between vector spaces is left invertible (or has a left inverse, or is a section) if there exists a linear map L : W → V such that LT = 1V . (Note: 1V is the identity mapping v 7→ v on V . See AOLT, page 4, lines -13 to -10, convention. Other notations for the identity map on V are idV and IV .) The map T is right invertible (or has a right inverse, or is a retraction) if there exists a linear map R : W → V such that TR = 1W . We say that T is invertible (or has an inverse, or is an isomorphism) if it is both left invertible and right invertible. If there exists an isomorphism between two vector spaces V and W , we say that the spaces are isomorphic and we write V ∼= W .

Note: This is not the definition of isomorphism given in AOLT (definition on page 3, lines 17–18). You will prove in exercise 7.2.11 that the two definitions are equivalent.


7.2. Exercises (Due: Fri. Jan. 23)

7.2.1. Exercise. Use the notation of exercise 3.2.8 and suppose that (V,+,M) and (W,+,M) are vector spaces over a common field F and that T ∈ Hom(V,W ) is such that the diagram

    V ───T───→ W
    │Mα        │Mα
    ↓          ↓
    V ───T───→ W

commutes for every α ∈ F. What can be said about the homomorphism T?

7.2.2. Exercise. Let T : R3 → R3 : x 7→ (x1 + 3x2 − 2x3, x1 − 4x3, x1 + 6x2).
(a) Identify the kernel of T by describing it geometrically and by giving its equation(s).
(b) Identify the range of T by describing it geometrically and by giving its equation(s).

7.2.3. Exercise. Let T be the linear map from R3 to R3 defined by

T (x, y, z) = (2x+ 6y − 4z, 3x+ 9y − 6z, 4x+ 12y − 8z).

Describe the kernel of T geometrically and give its equation(s). Describe the range of T geometrically and give its equation(s).
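Since this T is given by an explicit matrix, its kernel and range can be computed by machine as a check on the geometric answer. A sketch (sympy; not part of the original exercise):

```python
import sympy as sp

# Matrix of the map in exercise 7.2.3 with respect to the standard basis of R^3.
A = sp.Matrix([[2, 6, -4],
               [3, 9, -6],
               [4, 12, -8]])

kernel = A.nullspace()     # two vectors spanning the plane x + 3y - 2z = 0
image = A.columnspace()    # one vector spanning the line through (2, 3, 4)

print(A.rank())            # 1
```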

7.2.4. Exercise. Let C = C[a, b] be the vector space of all continuous real valued functions on the interval [a, b] and C1 = C1[a, b] be the vector space of all continuously differentiable real valued functions on [a, b]. (Recall that a function is continuously differentiable if it has a derivative and the derivative is continuous.) Let D : C1 → C be the linear transformation defined by

Df = f ′

and let T : C → C1 be the linear transformation defined by

(Tf)(x) = ∫_a^x f(t) dt

for all f ∈ C and x ∈ [a, b].
(a) Compute (and simplify) (DTf)(x).
(b) Compute (and simplify) (TDf)(x).
(c) Find the kernel of T .
(d) Find the range of T .
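Parts (a) and (b) are the two halves of the fundamental theorem of calculus, which sympy can carry out symbolically for an unspecified f. A sketch (the symbol names are mine):

```python
import sympy as sp

t, x, a = sp.symbols('t x a')
f = sp.Function('f')

Tf = sp.Integral(f(t), (t, a, x))                 # (Tf)(x)
DTf = sp.diff(Tf, x).doit()                       # part (a): differentiate Tf
TDf = sp.integrate(sp.diff(f(t), t), (t, a, x))   # part (b): integrate f'

print(DTf)   # f(x)
print(TDf)   # f(x) - f(a)
```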

7.2.5. Exercise. Let T : V → W be a linear map between vector spaces and U a subspace of V . Show that T→(U) is a subspace of W . Conclude that the range of a linear map is a subspace of the codomain of the map. (This establishes the claim made in AOLT, page 3, lines 15–16.)

7.2.6. Exercise. Let T : V → W be a linear map between vector spaces and U a subspace of W . Show that T←(U) is a subspace of V . Conclude that the kernel of a linear map is a subspace of the domain of the map. (This establishes the claim made in AOLT, page 3, lines 13–14.)

7.2.7. Exercise (†). Let T : V → W be a linear map between vector spaces. Show that T is injective (that is, one-to-one) if and only if kerT = {0}.

7.2.8. Exercise. Show that an operator T ∈ L(V ) is invertible if it satisfies the equation

T² − T + 1V = 0.

7.2.9. Exercise. Let s be the vector space of all sequences of real numbers. (Addition and scalar multiplication are defined pointwise.) Define a linear map T by

T : s → s : (a1, a2, a3, . . . ) 7→ (0, a1, a2, . . . ).

Is T left invertible? right invertible? injective? surjective?
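For intuition, here is the shift T acted out on finite prefixes of a sequence (plain Python lists standing in for elements of s; an illustration, not a proof):

```python
# The unilateral shift of exercise 7.2.9 on a finite prefix of (a1, a2, ...).
def T(a):                 # right shift: (a1, a2, ...) -> (0, a1, a2, ...)
    return [0] + a

def L(a):                 # a candidate left inverse: drop the first term
    return a[1:]

a = [1, 2, 3, 4]
assert L(T(a)) == a       # LT = identity, consistent with T being left invertible
assert T(L(a)) != a       # TL forces a 0 in the first slot: T is not right invertible
```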


7.2.10. Exercise. If a linear map T : V → W between two vector spaces has both a left inverse and a right inverse, then these inverses are equal; so there exists a linear map T−1 : W → V such that T−1T = 1V and TT−1 = 1W .

7.2.11. Exercise. A linear map between vector spaces is invertible if and only if it is bijective(that is, one-to-one and onto).


CHAPTER 8

Linear Transformations (continued)

8.1. Background

Topics: matrix representations of linear maps.

8.1.1. Convention. The space Pn([a, b]) of polynomial functions of degree strictly less than n ∈ N on the interval [a, b] (where a < b) is a vector space of dimension n. For each n = 0, 1, 2, . . . let pn(t) = tⁿ for all t ∈ [a, b]. Then we take {p0, p1, p2, . . . , pn−1} to be the standard basis for Pn([a, b]).


8.2. Exercises (Due: Mon. Jan. 26)

8.2.1. Exercise (†). Let P4 be the vector space of polynomials of degree less than 4. Consider the linear transformation D2 : P4 → P4 : f 7→ f ′′.
(a) Find the matrix representation of D2 (with respect to the usual basis {1, t, t², t³} for P4).
(b) Find the kernel of D2.
(c) Find the range of D2.
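Part (a) can be checked mechanically: the k-th column of the matrix holds the coordinates of D²(tᵏ) with respect to {1, t, t², t³}. A sketch (sympy; not part of the original exercise):

```python
import sympy as sp

t = sp.symbols('t')
basis = [1, t, t**2, t**3]          # the usual basis for P4

# Column k holds the coordinates of the second derivative of basis[k].
cols = []
for p in basis:
    q = sp.diff(p, t, 2)
    coeffs = sp.Poly(q, t).all_coeffs()[::-1] if q != 0 else []
    cols.append(list(coeffs) + [0] * (4 - len(coeffs)))

M = sp.Matrix(4, 4, lambda i, j: cols[j][i])
print(M)   # Matrix([[0, 0, 2, 0], [0, 0, 0, 6], [0, 0, 0, 0], [0, 0, 0, 0]])
```

The two zero columns reflect the fact that the kernel (part (b)) is the span of {1, t}.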

8.2.2. Exercise. Let T : P4 → P5 be the linear transformation defined by (Tp)(t) = (2 + 3t)p(t) for every p ∈ P4 and t ∈ R. Find the matrix representation of T with respect to the usual bases for P4 and P5.

8.2.3. Exercise. Show that an operator T on a vector space V is invertible if it has a unique right inverse. Hint. Consider ST + S − idV , where S is the unique right inverse for T .

8.2.4. Exercise. Let T : V → W be a linear map between vector spaces. If A is a subset of V such that T→(A) is linearly independent, then A is linearly independent. Can you think of a way of saying this without using any symbols?

8.2.5. Exercise (†). Let T : V → W be a linear map between vector spaces. If T is injective and A is a linearly independent subset of V , then T→(A) is a linearly independent subset of W .

8.2.6. Exercise. Let T : V → W be a linear map between vector spaces and A ⊆ V . Then T→(spanA) = span T→(A).

8.2.7. Exercise. Let V and W be vector spaces over a field F and E be a basis for V . Every function f : E → W can be extended in a unique fashion to a linear map T : V → W .

8.2.8. Exercise. Let T : V → W be a linear map between vector spaces. If T is injective and B is a basis for a subspace U of V , then T→(B) is a basis for T→(U).

8.2.9. Exercise. Let T : V → W be a linear map between vector spaces. If V is spanned by a set B of vectors and T→(B) is a basis for W , then B is a basis for V and T is an isomorphism.

8.2.10. Exercise. Prove that a linear transformation T : R3 → R2 cannot be one-to-one and that a linear transformation S : R2 → R3 cannot be onto. What is the most general version of these assertions that you can invent (and prove)?

8.2.11. Exercise. Suppose that V and W are vector spaces of the same finite dimension and that T : V → W is a linear map. Then the following are equivalent:
(a) T is injective;
(b) T is surjective; and
(c) T is invertible.


CHAPTER 9

Duality in Vector Spaces

9.1. Background

Topics: linear functionals, dual space, dual basis. (See AOLT, pages 4–6.)

9.1.1. Definition. Let S be a nonempty set, F be a field, and f be an F valued function on S. The support of f , denoted by supp f , is {x ∈ S : f(x) ≠ 0}.

9.1.2. Notation. Let S be a nonempty set and F be a field. We denote by l(S) (or by l(S,F), or by FS , or by F(S,F)) the family of all functions α : S → F. For x ∈ l(S) we frequently write the value of x at s ∈ S as xs rather than x(s). (Sometimes it seems a good idea to reduce the number of parentheses cluttering a page.) Furthermore we denote by lc(S) (or by lc(S,F), or by Fc(S)) the family of all functions α : S → F with finite support; that is, those functions on S which are nonzero at only finitely many elements of S.


9.2. Exercises (Due: Wed. Jan. 28)

9.2.1. Exercise (†). Let V be a vector space (over a field F) with basis B. Define Ω: lc(B) → V by

Ω(v) = ∑_{e∈B} v(e) e.

Prove that l(B) (under the usual pointwise operations) is a vector space, that lc(B) is a subspace of l(B), and that Ω is an isomorphism.

9.2.2. Exercise. A vector v in a vector space V with basis B can be written in one and only one way as a linear combination of elements in B.

9.2.3. Exercise. Let V and W be vector spaces and B be a basis for V . Every function T0 : B → W can be extended in one and only one way to a linear transformation T : V → W .

9.2.4. Convention. If in exercise 9.2.1 we write ṽ for Ω(v) then we see that

ṽ = ∑_{e∈B} v(e) e. (*)

The isomorphism Ω creates a one-to-one correspondence between functions v in lc(B) and vectors ṽ in V . This is an extension of the usual notation in Rn where we write a vector v in terms of its components:

v = ∑_{k=1}^n vk ek.

We will go even further and use Ω to identify V with lc(B) and write

v = ∑_{e∈B} v(e) e

instead of (*). That is, in a vector space with basis we will treat a vector as a scalar valued function on its basis. (Of course, if this identification ever threatens to cause confusion we can always go back to the ṽ or Ω(v) notation.)

9.2.5. Exercise. According to convention 9.2.4 above, what is the value of f(e) when e and f are elements of the basis B?

9.2.6. Exercise. Let V be a vector space with basis B. For every v ∈ V define a function v∗ on V by

v∗(x) = ∑_{e∈B} x(e) v(e) for all x ∈ V .

Then v∗ is a linear functional on V .

9.2.7. Notation. In the preceding exercise 9.2.6 the value v∗(x) of v∗ at x is often written 〈x, v〉.

9.2.8. Exercise. Consider the notation 9.2.7 above in the special case that the scalar field F = R. Then 〈 , 〉 is an inner product on the vector space V . (See the definition of inner product on page 11 of AOLT.)

9.2.9. Exercise. In the special case that the scalar field F = C, things above are usually done a bit differently. For v ∈ V the function v∗ is defined by

v∗(x) = 〈x, v〉 = ∑_{e∈B} x(e) v̄(e),

where the bar denotes complex conjugation. Why do you think things are done this way?


9.2.10. Exercise. Let v be a nonzero vector in a vector space V and E be a basis for V which contains the vector v. Then there exists a linear functional φ ∈ V ∗ such that φ(v) = 1 and φ(e) = 0 for every e ∈ E \ {v}.

9.2.11. Corollary. If v is a vector in a vector space V and φ(v) = 0 for every φ ∈ V ∗, then v = 0.

9.2.12. Exercise. Let V be a vector space with basis B. The map Φ: V → V ∗ : v 7→ v∗ (see exercise 9.2.6) is linear and injective.

9.2.13. Exercise (Riesz-Frechet theorem for vector spaces). (†) In the preceding exercise 9.2.12 the map Φ is an isomorphism if V is finite dimensional. Thus for every φ ∈ V ∗ there exists a unique vector a ∈ V such that a∗ = φ.

9.2.14. Exercise. If V is a finite dimensional vector space with basis {e1, . . . , en}, then {e∗1, . . . , e∗n} is the dual basis for V ∗. (See AOLT, page 5, line -7 and Proposition 1.1.)
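The duality relations e∗i(ej) = δij behind exercise 9.2.14 can be watched numerically: if the basis vectors are the columns of an invertible matrix B, the coordinate functionals are the rows of B⁻¹. A sketch (numpy; the particular basis is made up for illustration):

```python
import numpy as np

# A non-orthogonal basis of R^3, one vector per column.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# Row i of B^{-1} is the functional e_i^*: applying it to column j of B gives δ_ij.
Bstar = np.linalg.inv(B)

print(np.allclose(Bstar @ B, np.eye(3)))   # True: the duality relations hold
```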


CHAPTER 10

Duality in Vector Spaces (continued)

10.1. Background

Topics: annihilators, pre-annihilators, trace. (See AOLT, page 7.)

10.1.1. Notation. Let V be a vector space and M ⊆ V . Then

M⊥ := {f ∈ V ∗ : f(x) = 0 for all x ∈ M}.

We say that M⊥ is the annihilator of M . (The reasons for using the familiar “orthogonal complement” notation M⊥ will become apparent when we study inner product spaces, where “orthogonality” actually makes sense.)

10.1.2. Notation. Let V be a vector space and F ⊆ V ∗. Then

F⊥ := {x ∈ V : f(x) = 0 for all f ∈ F}.

We say that F⊥ is the pre-annihilator of F .


10.2. Exercises (Due: Fri. Jan. 30)

10.2.1. Exercise. In exercise 9.2.12 we showed that the map

Φ: V → V ∗ : v 7→ v∗

is always an injective linear map. In exercise 9.2.13 we showed that if V is finite dimensional, then so is V ∗ and Φ is an isomorphism between V and V ∗. Now prove that if V is infinite dimensional, then Φ is not an isomorphism. Hint. Let B be a basis for V . Is there a functional ψ ∈ V ∗ such that ψ(e) = 1 for every e ∈ B? Could such a functional be Φ(x) for some x ∈ V ?

10.2.2. Exercise (†). Let V be a vector space over a field F. For every x in V define

x̂ : V ∗ → F : φ 7→ φ(x).

(a) Show that x̂ ∈ V ∗∗ for each x ∈ V .
(b) Let ΓV be the map from V to V ∗∗ which takes x to x̂. (When no confusion is likely we write Γ for ΓV , so that Γ(x) = x̂ for each x ∈ V .) Prove that Γ is linear.
(c) Prove that Γ is injective.

(See AOLT, page 33, exercise 5.)

10.2.3. Exercise. Prove that if V is a finite dimensional vector space, then the map Γ: V → V ∗∗ defined in the preceding exercise 10.2.2 is an isomorphism. Prove also that if V is infinite dimensional, then Γ is not an isomorphism. Hint. Let B be a basis for V and ψ ∈ V ∗ be as in exercise 10.2.1. Show that if we let C0 be {e∗ : e ∈ B}, then the set C0 ∪ {ψ} is linearly independent and can therefore be extended to a basis C for V ∗. Find an element τ in V ∗∗ such that τ(ψ) = 1 and τ(φ) = 0 for every other φ ∈ C. Can τ be Γx for some x ∈ V ? (If x ≠ 0 in V = lc(B), then there is a basis vector f in B such that x(f) ≠ 0.)

10.2.4. Exercise (†). Find the annihilator in (R2)∗ of the vector (1, 1) in R2. (Express your answer in terms of the standard dual basis for (R2)∗.)

10.2.5. Exercise. Let M and N be subsets of a vector space V . Then
(a) M⊥ is a subspace of V ∗.
(b) If M ⊆ N , then N⊥ ⊆ M⊥.
(c) (spanM)⊥ = M⊥.
(d) (M ∪ N)⊥ = M⊥ ∩ N⊥.

10.2.6. Exercise. If M and N are subspaces of a vector space V , then

(M + N)⊥ = M⊥ ∩ N⊥.

Explain why it is necessary in this exercise to assume that M and N are subspaces of V and not just subsets of V . Hint. Is it always true that span(M + N) = spanM + spanN?

10.2.7. Exercise. In exercises 10.2.5 and 10.2.6 you established some properties of the annihilator mapping M 7→ M⊥. See to what extent you can prove similar results about the pre-annihilator mapping F 7→ F⊥. What can you say about the sets M⊥⊥ and F⊥⊥?


CHAPTER 11

The Language of Categories

11.1. Background

Topics: categories, objects, morphisms, functors, vector space adjoint of a linear map.

11.1.1. Definition. Let A be a class, whose members we call objects. For every pair (S, T ) of objects we associate a set Mor(S, T ), whose members we call morphisms from S to T . We assume that Mor(S, T ) and Mor(U, V ) are disjoint unless S = U and T = V .

We suppose further that there is an operation ∘ (called composition) that associates with every α ∈ Mor(S, T ) and every β ∈ Mor(T,U) a morphism β ∘ α ∈ Mor(S,U) in such a way that:

(1) γ ∘ (β ∘ α) = (γ ∘ β) ∘ α whenever α ∈ Mor(S, T ), β ∈ Mor(T,U), and γ ∈ Mor(U, V );
(2) for every object S there is a morphism IS ∈ Mor(S, S) satisfying α ∘ IS = α whenever α ∈ Mor(S, T ) and IS ∘ β = β whenever β ∈ Mor(R,S).

Under these circumstances the class A, together with the associated families of morphisms, is a category. For a few more remarks on categories and some examples see section 8.1 of my notes [11].

11.1.2. Definition. If A and B are categories a (covariant) functor F from A to B (written F : A → B) is a pair of maps: an object map F which associates with each object S in A an object F (S) in B and a morphism map (also denoted by F ) which associates with each morphism f ∈ Mor(S, T ) in A a morphism F (f) ∈ Mor(F (S), F (T )) in B, in such a way that

(1) F (g ∘ f) = F (g) ∘ F (f) whenever g ∘ f is defined in A; and
(2) F (idS) = idF (S) for every object S in A.

The definition of a contravariant functor F : A → B differs from the preceding definition only in that, first, the morphism map associates with each morphism f ∈ Mor(S, T ) in A a morphism F (f) ∈ Mor(F (T ), F (S)) in B and, second, condition (1) above is replaced by

(1′) F (g ∘ f) = F (f) ∘ F (g) whenever g ∘ f is defined in A.

For a bit more on functors and some examples see section 15.4 of my notes [11].

11.1.3. Definition. The terminology for inverses of morphisms in categories is essentially the same as for functions. Let α : S → T and β : T → S be morphisms in a category. If β ∘ α = IS , then β is a left inverse of α and, equivalently, α is a right inverse of β. We say that the morphism α is an isomorphism (or is invertible) if there exists a morphism β : T → S which is both a left and a right inverse for α. Such a morphism is denoted by α−1 and is called the inverse of α.

11.1.4. Remark. All the categories that are of interest in this course are concrete categories. A concrete category is, roughly speaking, one in which the objects are sets with additional structure (algebraic operations, inner products, norms, topologies, and the like) and the morphisms are maps (functions) which preserve, in some sense, the additional structure. If A is an object in some concrete category C, we denote by |A| its underlying set. And if f : A → B is a morphism in C we denote by |f | the map from |A| to |B| regarded simply as a function between sets. It is easy to see that | · |, which takes objects in C to objects in Set (the category of sets and maps) and morphisms in C to morphisms in Set, is a functor. It is referred to as a forgetful functor. In the category Vec of vector spaces and linear maps, for example, | · | causes a vector space V to “forget” about its addition and scalar multiplication (|V | is just a set). And if T : V → W is a linear transformation, then |T | : |V | → |W | is just a map between sets; it has “forgotten” about preserving the operations. For more precise definitions of concrete categories and forgetful functors consult any text on category theory.

11.1.5. Definition. Let T : V → W be a linear map between vector spaces. For every g ∈ W ∗ let T ∗(g) = g ∘ T . Notice that T ∗(g) ∈ V ∗. The map T ∗ from the vector space W ∗ into the vector space V ∗ is the (vector space) adjoint map of T .

WARNING: In inner product spaces we will use the same notation T ∗ for a different map. If T : V → W is a linear map between inner product spaces, then the (inner product space) adjoint transformation T ∗ maps W to V (not W ∗ to V ∗).

11.1.6. Definition. A partially ordered set is order complete if every nonempty subset has a supremum (that is, a least upper bound) and an infimum (a greatest lower bound).

11.1.7. Definition. Let S be a set. Then the power set of S, denoted by P(S), is the family of all subsets of S.

11.1.8. Notation. Let f : S → T be a function between sets. Then we define f→(A) = {f(x) : x ∈ A} and f←(B) = {x ∈ S : f(x) ∈ B}. We say that f→(A) is the image of A under f and that f←(B) is the preimage of B under f .


11.2. Exercises (Due: Mon. Feb. 2)

11.2.1. Exercise. Prove that if a morphism in some category has both a left and a right inverse, then it is invertible.

11.2.2. Exercise (The vector space duality functor). Let T ∈ L(V,W ) where V and W are vector spaces. Show that the pair of maps V 7→ V ∗ and T 7→ T ∗ is a contravariant functor from the category of vector spaces and linear maps into itself. Show that (the morphism map of) this functor is linear. (That is, show that (S + T )∗ = S∗ + T ∗ and (αT )∗ = αT ∗.)

11.2.3. Exercise (†). Let T : V → W be a linear map between vector spaces. Show that

kerT ∗ = (ranT )⊥.

Is there a relationship between T being surjective and T ∗ being injective?

11.2.4. Exercise. Let T : V → W be a linear map between vector spaces. Show that

kerT = (ranT ∗)⊥.

Is there a relationship between T being injective and T ∗ being surjective?

11.2.5. Exercise (The power set functors). Let S be a nonempty set.
(a) The power set P(S) of S partially ordered by ⊆ is order complete.
(b) The class of order complete partially ordered sets and order preserving maps is a category.
(c) For each function f between sets let P(f) = f→. Then P is a covariant functor from the category of sets and functions to the category of order complete partially ordered sets and order preserving maps.
(d) For each function f between sets let P(f) = f←. Then P is a contravariant functor from the category of sets and functions to the category of order complete partially ordered sets and order preserving maps.
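Part (c)'s functoriality condition P(g ∘ f) = P(g) ∘ P(f) is easy to watch happen on small finite sets. A sketch (plain Python; the particular sets and functions are made up for illustration):

```python
# The direct-image map of exercise 11.2.5(c): P(f) sends A to f→(A).
def direct_image(f):
    return lambda A: {f(x) for x in A}

f = lambda x: x + 1          # f : S -> T
g = lambda x: 2 * x          # g : T -> U
gf = lambda x: g(f(x))       # g ∘ f

A = {0, 1, 2}
# Covariant functoriality: taking images under g ∘ f agrees with images
# under f followed by images under g.
assert direct_image(gf)(A) == direct_image(g)(direct_image(f)(A))
# P(f) is order preserving: it sends the bottom element ∅ to ∅.
assert direct_image(f)(set()) == set()
```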


CHAPTER 12

Direct Sums

12.1. Background

Topics: internal direct sum, external direct sum. (See AOLT, pages 8–9.)

12.1.1. Definition. Let M and N be subspaces of a vector space V . We say that the space V is the (internal) direct sum of M and N if M + N = V and M ∩ N = {0}. In this case we write V = M ⊕ N and say that M and N are complementary subspaces. We also say that M is a complement of N and that N is a complement of M . Similarly, if M1, . . . , Mn are subspaces of a vector space V , if V = M1 + · · · + Mn, and if Mj ∩ Mk = {0} whenever j ≠ k, then we say that V is the (internal) direct sum of M1, . . . , Mn and we write

V = M1 ⊕ · · · ⊕ Mn = ⊕_{k=1}^n Mk.

12.1.2. Notation. If the vector space V is a direct sum of subspaces M and N , then, according to exercise 12.2.1, every v ∈ V can be written uniquely in the form m + n where m ∈ M and n ∈ N . We will indicate this by writing v = m ⊕ n. It is important to realize that this notation makes sense only when we have in mind a particular direct sum decomposition M ⊕ N of V . Thus in this context when we see v = a ⊕ b we conclude that v = a + b, that a ∈ M , and that b ∈ N .

12.1.3. Definition. Let V and W be vector spaces over a field F. To make the Cartesian product V × W into a vector space we define addition by

(v, w) + (v1, w1) = (v + v1, w + w1)

(where v, v1 ∈ V and w, w1 ∈W ), and we define scalar multiplication by

α(v, w) = (αv, αw)

(where α ∈ F, v ∈ V , and w ∈ W ). The resulting vector space we call the (external) direct sum of V and W . It is conventional to use the same notation V ⊕ W for external direct sums that we use for internal direct sums.

12.1.4. Notation. If S, T , and U are nonempty sets and if f : S → T and g : S → U , then we define the function (f, g) : S → T × U by

(f, g)(s) = (f(s), g(s)).

Suppose, on the other hand, that we are given a function h mapping S into the Cartesian product T × U . Then for each s ∈ S the image h(s) is an ordered pair, which we will write as (h¹(s), h²(s)). (The superscripts have nothing to do with powers.) Notice that we now have functions h¹ : S → T and h² : S → U . These are the components of h. In abbreviated notation h = (h¹, h²).

The (external) direct sum of two vector spaces V1 and V2 is best thought of not just as the vector space V1 ⊕ V2 defined in 12.1.3 but as this vector space together with two distinguished “projection” mappings defined below. (The reason for this assertion may be deduced from definition 13.1.1 and exercise 13.2.1.)


12.1.5. Definition. Let V1 and V2 be vector spaces. For k = 1, 2 define the coordinate projections πk : V1 ⊕ V2 → Vk by πk(v1, v2) = vk. Notice two simple facts:

(i) π1 and π2 are surjective linear maps; and
(ii) idV1⊕V2 = (π1, π2).


12.2. Exercises (Due: Wed. Feb. 4)

12.2.1. Exercise. If V is a vector space and V = M ⊕ N , then for every v ∈ V there exist unique vectors m ∈ M and n ∈ N such that v = m + n.

12.2.2. Exercise. Suppose that a vector space V is the direct sum of subspaces M1, . . . , Mn. Show that for every v ∈ V there exist unique vectors mk ∈ Mk (for k = 1, . . . , n) such that v = m1 + · · · + mn.

12.2.3. Exercise. Let U be the plane x + y + z = 0 and V be the line x = y = z in R3. The purpose of this problem is to confirm that R3 = U ⊕ V . This requires establishing three things: (i) U and V are subspaces of R3 (which is very easy and which we omit); (ii) R3 = U + V ; and (iii) U ∩ V = {0}.

(a) To show that R3 = U + V we need R3 ⊆ U + V and U + V ⊆ R3. Since U ⊆ R3 and V ⊆ R3, it is clear that U + V ⊆ R3. So all that is required is to show that R3 ⊆ U + V . That is, given a vector x = (x1, x2, x3) in R3 we must find vectors u = (u1, u2, u3) in U and v = (v1, v2, v3) in V such that x = u + v. Find two such vectors.
(b) The last thing to verify is that U ∩ V = {0}; that is, that the only vector U and V have in common is the zero vector. Suppose that a vector x = (x1, x2, x3) belongs to both U and V . Since x ∈ U it must satisfy the equation

x1 + x2 + x3 = 0. (1)

Since x ∈ V it must satisfy the equations

x1 = x2 and x2 = x3. (2)

Solve the system of equations (1) and (2).

12.2.4. Exercise. Let U be the plane x + y + z = 0 and V be the line x = −(3/4)y = 3z. The purpose of this exercise is to see (in two different ways) that R3 is not the direct sum of U and V .

(a) If R3 were equal to U ⊕ V , then U ∩ V would contain only the zero vector. Show that this is not the case by finding a vector x ≠ 0 in R3 such that x ∈ U ∩ V .
(b) If R3 were equal to U ⊕ V , then, in particular, we would have R3 = U + V . Since both U and V are subsets of R3, it is clear that U + V ⊆ R3. Show that the reverse inclusion R3 ⊆ U + V is not correct by finding a vector x ∈ R3 which cannot be written in the form u + v where u ∈ U and v ∈ V .
(c) We have seen in part (b) that U + V ≠ R3. Then what is U + V ?

12.2.5. Exercise. Let U be the plane x + y + z = 0 and V be the line x − 1 = (1/2)y = z + 2 in R3. State in one short sentence how you know that R3 is not the direct sum of U and V .

12.2.6. Exercise (††). Let C = C([0, 1]) be the (real) vector space of all continuous real valued functions on the interval [0, 1] with addition and scalar multiplication defined pointwise.

(a) Let f1(t) = t and f2(t) = t⁴ for 0 ≤ t ≤ 1. Let U be the set of all functions of the form αf1 + βf2 where α, β ∈ R. Show that U is a subspace of C.
(b) Let V be the set of all functions g in C which satisfy

∫_0^1 t g(t) dt = 0 and ∫_0^1 t⁴ g(t) dt = 0.

Show that V is a subspace of C.
(c) Show that C = U ⊕ V .
(d) Let f(t) = t² for 0 ≤ t ≤ 1. Find functions u ∈ U and v ∈ V such that f = u + v.
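Part (d) reduces to a 2 × 2 linear system: write v = f − (αf1 + βf2) and impose the two integral conditions defining V. A sketch of that computation (sympy; the symbol names are mine and the code is not part of the original exercise):

```python
import sympy as sp

t, alpha, beta = sp.symbols('t alpha beta')
f1, f2, f = t, t**4, t**2

# Require v = f - (alpha*f1 + beta*f2) to satisfy the two integral
# conditions defining V in exercise 12.2.6(b).
v = f - (alpha*f1 + beta*f2)
eqs = [sp.integrate(f1 * v, (t, 0, 1)), sp.integrate(f2 * v, (t, 0, 1))]
sol = sp.solve(eqs, [alpha, beta])

u = sol[alpha]*f1 + sol[beta]*f2   # the U-component: (3/7)t + (9/14)t**4
print(u, f - u)
```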


12.2.7. Exercise. Let C = C[−1, 1] be the vector space of all continuous real valued functions on the interval [−1, 1]. A function f in C is even if f(−x) = f(x) for all x ∈ [−1, 1]; it is odd if f(−x) = −f(x) for all x ∈ [−1, 1]. Let Co = {f ∈ C : f is odd} and Ce = {f ∈ C : f is even}. Show that C = Co ⊕ Ce.
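The splitting behind this exercise is explicit: every f decomposes as the sum of its even part (f(x) + f(−x))/2 and its odd part (f(x) − f(−x))/2. Illustrated numerically for f = exp (plain Python; an illustration, not a proof):

```python
import math

f = math.exp
f_even = lambda x: (f(x) + f(-x)) / 2     # the even part (here cosh)
f_odd  = lambda x: (f(x) - f(-x)) / 2     # the odd part (here sinh)

x = 0.7
assert abs(f_even(x) + f_odd(x) - f(x)) < 1e-12   # f = f_even + f_odd
assert abs(f_even(-x) - f_even(x)) < 1e-12        # f_even is even
assert abs(f_odd(-x) + f_odd(x)) < 1e-12          # f_odd is odd
```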

12.2.8. Exercise. Prove that the external direct sum of two vector spaces (as defined in 12.1.3) is indeed a vector space.

12.2.9. Exercise. Let V1 and V2 be vector spaces. Prove the following fact about the direct sum V1 ⊕ V2: for every vector space W and every pair of linear maps T1 : W → V1 and T2 : W → V2 there exists a unique linear map S : W → V1 ⊕ V2 which makes the following diagram commute.

                   W
           T1 ↙    │ S    ↘ T2
     V1 ←──π1── V1 ⊕ V2 ──π2──→ V2

12.2.10. Exercise. Every subspace of a vector space has a complement. That is, if M is a subspace of a vector space V , then there exists a subspace N of V such that V = M ⊕ N .

12.2.11. Exercise. Let V be a vector space and suppose that V = U ⊕ W . Prove that if B is a basis for U and C is a basis for W , then B ∪ C is a basis for V . From this conclude that dimV = dimU + dimW .

12.2.12. Exercise (†). Let U , V , and W be vector spaces. Show that if U ∼= W , then U ⊕ V ∼= W ⊕ V . Show also that the converse of this assertion need not be true.

12.2.13. Exercise. A linear transformation has a left inverse if and only if it is injective (one-to-one). It has a right inverse if and only if it is surjective (onto).


CHAPTER 13

Products and Quotients

13.1. Background

Topics: products, coproducts, quotient spaces, exact sequences. (See AOLT, pages 9–10.)

13.1.1. Definition. Let A1 and A2 be objects in a category C. We say that a triple (P, π1, π2), where P is an object and πk : P → Ak (k = 1, 2) are morphisms, is a product of A1 and A2 if for every object B in C and every pair of morphisms fk : B → Ak (k = 1, 2) there exists a unique map g : B → P such that fk = πk ∘ g for k = 1, 2.

                   B
           f1 ↙    │ g    ↘ f2
     A1 ←──π1──    P    ──π2──→ A2

Notice that what you showed in exercise 12.2.9 is that the external direct sum is a product in the category of vector spaces and linear maps.

A triple (P, j1, j2), where P is an object and jk : Ak → P (k = 1, 2) are morphisms, is a coproduct of A1 and A2 if for every object B in C and every pair of morphisms Fk : Ak → B (k = 1, 2) there exists a unique map G : P → B such that Fk = G ∘ jk for k = 1, 2.

     A1 ──j1──→    P    ←──j2── A2
           F1 ↘    │ G    ↙ F2
                   B

13.1.2. Definition. Let M be a subspace of a vector space V . Define an equivalence relation ∼ on V by

x ∼ y if and only if y − x ∈ M.

For each x ∈ V let [x] be the equivalence class containing x. Let V/M be the set of all equivalence classes of elements of V . For [x] and [y] in V/M define

[x] + [y] := [x + y]

and for α ∈ F and [x] ∈ V/M define

α[x] := [αx].

Under these operations V/M becomes a vector space. It is the quotient space of V by M . The notation V/M is usually read “V mod M”. The linear map

π : V → V/M : x 7→ [x]

is called the quotient map.


13.1.3. Definition. A sequence of vector spaces and linear maps

· · · → Vn−1 ──jn──→ Vn ──jn+1──→ Vn+1 → · · ·

is said to be exact at Vn if ran jn = ker jn+1. A sequence is exact if it is exact at each of its constituent vector spaces. A sequence of vector spaces and linear maps of the form

0 → U ──j──→ V ──k──→ W → 0

is a short exact sequence. (Here 0 denotes the trivial 0-dimensional vector space, and the unlabeled arrows are the obvious linear maps.)


13.2. Exercises (Due: Mon. Feb. 9)

13.2.1. Exercise. Show that in the category of vector spaces and linear maps the external direct sum is not only a product but also a coproduct.

13.2.2. Exercise (††). Show that in the category of sets and maps (functions) the product and the coproduct are not the same.

13.2.3. Exercise. Show that in an arbitrary category (or in the category of vector spaces and linear maps, if you prefer) products (and coproducts) are essentially unique. (Essentially unique means unique up to isomorphism. That is, if (P, π1, π2) and (Q, ρ1, ρ2) are both products of two given objects, then P ≅ Q.)

13.2.4. Exercise (†). Verify the assertions made in definition 13.1.2. In particular, show that ∼ is an equivalence relation, that addition and scalar multiplication of equivalence classes are well defined, that under these operations V/M is a vector space, and that the quotient map is linear.

The following exercise is called the fundamental quotient theorem or the first isomorphism theorem for vector spaces. (See AOLT, theorem 1.6, page 10.)

13.2.5. Exercise. Let V and W be vector spaces and M ≼ V. If T ∈ L(V, W) and ker T ⊇ M, then there exists a unique T̃ ∈ L(V/M, W) which makes the following diagram commute.

[Diagram: T : V → W factors as T = T̃ ∘ π through the quotient map π : V → V/M.]

Furthermore, T̃ is injective if and only if ker T = M; and T̃ is surjective if and only if T is. Corollary: ran T ≅ V/ker T.

13.2.6. Exercise. The sequence

0 → U → V → W → 0 (with maps j : U → V and k : V → W)

of vector spaces is exact at U if and only if j is injective. It is exact at W if and only if k is surjective.

13.2.7. Exercise. Let U and V be vector spaces. Then the following sequence is short exact:

0 → U → U ⊕ V → V → 0.

The indicated linear maps are the obvious ones:

ι1 : U → U ⊕ V : u ↦ (u, 0)

and

π2 : U ⊕ V → V : (u, v) ↦ v.


13.2.8. Exercise. Suppose a < b. Let K be the family of constant functions on the interval [a, b], C¹ be the family of all continuously differentiable functions on [a, b], and C be the family of all continuous functions on [a, b]. (A function f is said to be continuously differentiable if its derivative f′ exists and is continuous.)

Specify linear maps j and k so that the following sequence is short exact:

0 → K → C¹ → C → 0 (with maps j : K → C¹ and k : C¹ → C).

13.2.9. Exercise. Let C be the family of all continuous functions on the interval [0, 2]. Let E1 be the mapping from C into R defined by E1(f) = f(1). (The functional E1 is called evaluation at 1.)

Find a subspace F of C such that the following sequence is short exact:

0 → F → C → R → 0 (with maps ι : F → C and E1 : C → R).

13.2.10. Exercise. If j : U → V is an injective linear map between vector spaces, then the sequence

0 → U → V → V/ran j → 0 (with maps j and the quotient map π)

is exact.

13.2.11. Exercise. Consider the following diagram in the category of vector spaces and linear maps: the two rows are the sequences 0 → U → V → W → 0 (with maps j and k) and 0 → U′ → V′ → W′ → 0 (with maps j′ and k′), joined by vertical maps f : U → U′ and g : V → V′.

[Diagram: two parallel rows of short exact sequences joined by vertical maps f, g, and h.]

If the rows are exact and the left square commutes (that is, j′ ∘ f = g ∘ j), then there exists a unique linear map h : W → W′ which makes the right square commute (that is, k′ ∘ g = h ∘ k).

13.2.12. Exercise. Consider the following diagram of vector spaces and linear maps: exact rows 0 → U → V → W → 0 and 0 → U′ → V′ → W′ → 0 (with maps j, k and j′, k′) joined by vertical maps f : U → U′, g : V → V′, and h : W → W′, where the rows are exact and the squares commute. Prove the following.

(a) If g is surjective, so is h.
(b) If f is surjective and g is injective, then h is injective.
(c) If f and h are surjective, so is g.
(d) If f and h are injective, so is g.


CHAPTER 14

Products and Quotients (continued)

14.1. Background

Topics: quotient map in a category, quotient object.

14.1.1. Definition. Let A be an object in a concrete category C. A surjective morphism π : A → B in C is a quotient map for A if a function g : B → C (in SET) is a morphism (in C) whenever g ∘ π is a morphism. An object B in C is a quotient object for A if it is the range of some quotient map for A.


14.2. Exercises (Due: Wed. Feb. 11)

14.2.1. Exercise. Show that if 0 → U → V → W → 0 (with maps j and k) is an exact sequence of vector spaces and linear maps, then V ≅ U ⊕ W. Hint. Compare this sequence with the sequence 0 → U → U ⊕ W → W → 0 (with maps ι1 and π2) and use exercise 13.2.12. The trick is to find the right map g : V → U ⊕ W.

14.2.2. Exercise (†). Prove the converse of the preceding exercise. That is, suppose that U, V, and W are vector spaces and that V ≅ U ⊕ W; prove that there exist linear maps j and k such that the sequence 0 → U → V → W → 0 is exact. Hint. Suppose g : U ⊕ W → V is an isomorphism. Define j and k in terms of g.

14.2.3. Exercise. Show that if 0 → U → V → W → 0 (with maps j and k) is an exact sequence of vector spaces and linear maps, then W ≅ V/ran j. Thus, if U ≼ V and j is the inclusion map, then W ≅ V/U. Give two different proofs of this result: one using exercise 13.2.5 and the other using exercise 13.2.12.

14.2.4. Exercise. Prove the converse of exercise 14.2.3. That is, suppose that j : U → V is an injective linear map between vector spaces and that W ≅ V/ran j; prove that there exists a linear map k which makes the sequence 0 → U → V → W → 0 exact.

14.2.5. Exercise. Let W be a vector space and M ≼ V ≼ W. Then

(W/M)/(V/M) ≅ W/V.

Hint. Exercise 14.2.3.

14.2.6. Exercise. Let M be a subspace of a vector space V. Then the following are equivalent:

(a) dim V/M < ∞;
(b) there exists a finite dimensional subspace F of V such that V = M ⊕ F; and
(c) there exists a finite dimensional subspace F of V such that V = M + F.

14.2.7. Exercise. Suppose that a vector space V is the direct sum of subspaces U and W. Some authors define the codimension of U to be dim W; others define it to be dim V/U. Show that these definitions are equivalent.

14.2.8. Exercise. Prove that if M is a finite dimensional subspace of a vector space V, then dim V/M = dim V − dim M.

14.2.9. Exercise (††). Prove that in the category of vector spaces and linear maps every surjective linear map is a quotient map.

14.2.10. Exercise. Let U, V, and W be vector spaces. If S ∈ L(U, V) and T ∈ L(V, W), then the sequence

0 → ker S → ker TS → ker T → coker S → coker TS → coker T → 0

is exact.


For obvious reasons the next result is usually called the rank-plus-nullity theorem. (See AOLT, theorem 1.8, page 10.)

14.2.11. Exercise. Let T : V → W be a linear map between vector spaces. Give a very simple proof that if V is finite dimensional, then

rank T + nullity T = dimV.
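The identity is easy to illustrate numerically. The following NumPy sketch (an editorial illustration, not part of the text; the matrix is arbitrary) treats a 3 × 5 matrix as a map T : R⁵ → R³, extracts a kernel basis from the singular value decomposition, and checks that rank plus nullity equals the dimension of the domain.

```python
import numpy as np

# Rank-plus-nullity for T : R^5 -> R^3 given by a 3 x 5 matrix.
A = np.array([[1., 2., 0., 1., -1.],
              [0., 1., 1., 0.,  2.],
              [1., 3., 1., 1.,  1.]])   # row 3 = row 1 + row 2, so rank 2
dim_V = A.shape[1]                      # dimension of the domain V

rank = np.linalg.matrix_rank(A)         # dim ran T

# Kernel basis from the SVD: the right singular vectors belonging to
# zero singular values span ker T.
_, s, Vt = np.linalg.svd(A)
kernel_basis = Vt[rank:]                # rows spanning ker T
assert np.allclose(A @ kernel_basis.T, 0)
nullity = kernel_basis.shape[0]         # dim ker T

assert rank + nullity == dim_V          # rank T + nullity T = dim V
```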

14.2.12. Exercise. Show that if V0, V1, . . . , Vn are finite dimensional vector spaces and the sequence

0 → V_n → V_{n−1} → · · · → V_1 → V_0 → 0 (with maps d_k : V_k → V_{k−1})

is exact, then

∑_{k=0}^{n} (−1)^k dim V_k = 0.


CHAPTER 15

Projection Operators

15.1. Background

Topics: idempotent maps, projections. (See AOLT, section 3.2, pages 81–86.) If you have not yet discovered Professor Farenick's web page where he has a link to a list of corrections to AOLT, this might be a good time to look at it. See

http://www.math.uregina.ca/~farenick/fixups.pdf

15.1.1. Definition. Let V be a vector space. An operator E ∈ L(V) is a projection operator if it is idempotent; that is, if E² = E.

15.1.2. Definition. Let V be a vector space and suppose that V = M ⊕ N. We know from an earlier exercise 12.2.1 that for each v ∈ V there exist unique vectors m ∈ M and n ∈ N such that v = m + n. Define a function E_{NM} : V → V by E_{NM} v = m. The function E_{NM} is called the projection of V along N onto M. (This terminology is, of course, optimistic. We must prove that E_{NM} is in fact a projection operator.)

15.1.3. Definition. Let M1 ⊕ · · · ⊕ Mn be a direct sum decomposition of a vector space V. For each k ∈ Nn let Nk be the following subspace of V complementary to Mk:

Nk := M1 ⊕ · · · ⊕ M_{k−1} ⊕ M_{k+1} ⊕ · · · ⊕ Mn.

Also (for each k) let

E_k := E_{N_k M_k}

be the projection onto Mk along the complementary subspace Nk. The projections E1, . . . , En are the projections associated with the direct sum decomposition V = M1 ⊕ · · · ⊕ Mn.


15.2. Exercises (Due: Fri. Feb. 13)

15.2.1. Exercise (†). If E is a projection operator on a vector space V , then

V = ranE ⊕ kerE.

15.2.2. Exercise. Let V be a vector space and E, F ∈ L(V). If E + F = id_V and EF = 0, then E and F are projection operators and V = ran E ⊕ ran F.

15.2.3. Exercise. Let V be a vector space and E1, . . . , En ∈ L(V). If ∑_{k=1}^{n} E_k = id_V and E_iE_j = 0 whenever i ≠ j, then each E_k is a projection operator and V = ⊕_{k=1}^{n} ran E_k.

15.2.4. Exercise. If E is a projection operator on a vector space V, then ran E = {x ∈ V : Ex = x}.

15.2.5. Exercise. Let E and F be projection operators on a vector space V. Then E + F = id_V if and only if EF = FE = 0 and ker E = ran F.

15.2.6. Exercise (†). If M ⊕ N is a direct sum decomposition of a vector space V, then the function E_{NM} defined in 15.1.2 is a projection operator whose range is M and whose kernel is N.

15.2.7. Exercise. If M ⊕ N is a direct sum decomposition of a vector space V, then E_{NM} + E_{MN} = id_V and E_{NM} E_{MN} = 0.

15.2.8. Exercise. If E is a projection operator on a vector space V, then there exist M, N ≼ V such that E = E_{NM}.

15.2.9. Exercise. Let M be the line y = 2x and N be the y-axis in R². Find [E_{MN}] and [E_{NM}].
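One way to check a candidate answer numerically (an editorial sketch, not the intended pencil-and-paper solution): in the basis {m, n} with m spanning M and n spanning N, the projection along N onto M is diag(1, 0); conjugating by the basis-change matrix gives its standard-basis matrix.

```python
import numpy as np

# E_NM : projection along N onto M, where M = span{(1, 2)} (the line
# y = 2x) and N = span{(0, 1)} (the y-axis) in R^2.
B = np.array([[1., 0.],
              [2., 1.]])                 # columns: m = (1, 2), n = (0, 1)
E_NM = B @ np.diag([1., 0.]) @ np.linalg.inv(B)
E_MN = np.eye(2) - E_NM                  # projection along M onto N

assert np.allclose(E_NM @ E_NM, E_NM)    # idempotent
assert np.allclose(E_NM @ np.array([1., 2.]), [1., 2.])  # fixes M
assert np.allclose(E_NM @ np.array([0., 1.]), 0)         # kills N
print(E_NM)   # [[1. 0.]
              #  [2. 0.]]
```

The same conjugation recipe works for any direct sum decomposition of a finite dimensional space.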

15.2.10. Exercise. Let E be the projection of R³ onto the plane 3x − y + 2z = 0 along the z-axis and let F be the projection of R³ onto the z-axis along the plane 3x − y + 2z = 0.

(a) Find [E].
(b) Where does F take the point (4, 5, 1)?

15.2.11. Exercise. Let P be the plane in R³ whose equation is x − y − 2z = 0 and L be the line whose equations are x = 0 and y = −z. Let E be the projection of R³ along L onto P and F be the projection of R³ along P onto L. Then

[E] = ⎡ a   b   b ⎤      [F] = ⎡ b   b   b ⎤
      ⎢−a   c   c ⎥            ⎢ a  −a  −c ⎥
      ⎣ a  −a  −a ⎦            ⎣−a   a   c ⎦

where a = , b = , and c = .

15.2.12. Exercise. Suppose a finite dimensional vector space V has the direct sum decomposition V = M ⊕ N and that E = E_{MN} is the projection along M onto N. Show that E∗ is the projection in L(V∗) along N⊥ onto M⊥.


CHAPTER 16

Algebras

16.1. Background

Topics: algebras, matrix algebras, standard matrix units, ideals, center, central algebra, polynomial functional calculus, quaternions, representation, faithful representation, left regular representation, permutation matrices. (See AOLT, sections 2.1–2.3.)

16.1.1. Definition. Let A be an algebra over a field F. The unitization of A is the unital algebra Ã = A × F in which addition and scalar multiplication are defined pointwise and multiplication is defined by

(a, λ) · (b, µ) = (ab + µa + λb, λµ).


16.2. Exercises (Due: Mon. Feb. 16)

16.2.1. Exercise. In AOLT at the top of page 43 the author says that the ideal generated by a subset S of an algebra A is the intersection of all ideals containing S. For this to be correct, we must know that the intersection of a family of ideals in A is itself an ideal. Prove this.

16.2.2. Exercise. Also in AOLT at the top of page 43 the author says that the ideal generated by a subset S of an algebra A can be characterized as

{ ∑_{j=1}^{m} a_j s_j b_j : m = 1, 2, 3, . . . , s_j ∈ S, and a_j, b_j ∈ A }.

Prove that for a unital algebra this is correct.

16.2.3. Exercise. In AOLT at the bottom of page 43 the author constructs the Cartesian product algebra A1 × · · · × An of algebras A1, . . . , An. Show that this is indeed a product in the category of algebras and algebra homomorphisms. Under what circumstances will the product algebra A1 × · · · × An be unital?

16.2.4. Exercise (†). Prove that the product algebra Mn(F) × Mn(F) is neither simple nor central. (See AOLT, section 2.7, exercise 3.)

16.2.5. Exercise. AOLT, section 2.7, exercise 4.

16.2.6. Exercise. In AOLT at the bottom of page 44 and the top of page 45 the author constructs the quotient algebra A/J, where A is an algebra and J is an ideal in A. Prove that the operations on A/J are well defined and that under these operations A/J is in fact an algebra.

16.2.7. Exercise (†). An element a of a unital algebra A is invertible if there exists an element a⁻¹ ∈ A such that aa⁻¹ = a⁻¹a = 1_A. Explain why no proper ideal in a unital algebra can contain an invertible element.

16.2.8. Exercise. Let φ : A → B be an algebra homomorphism. Prove that the kernel of φ is an ideal in A and that the range of φ is a subalgebra of B.

16.2.9. Exercise. Prove that the unitization Ã of an algebra A is in fact a unital algebra with (0, 1) as its identity. Prove also that A is (isomorphic to) a subalgebra of Ã with codimension 1. (See AOLT, pages 49–50.)

16.2.10. Exercise. Let A and B be algebras and J be an ideal in A. If φ : A → B is an algebra homomorphism and ker φ ⊇ J, then there exists a unique algebra homomorphism φ̃ : A/J → B which makes the following diagram commute.

[Diagram: φ : A → B factors as φ = φ̃ ∘ π through the quotient map π : A → A/J.]

Furthermore, φ̃ is injective if and only if ker φ = J; and φ̃ is surjective if and only if φ is. Corollary: ran φ ≅ A/ker φ. (See AOLT, page 45, Theorem 2.5.)


16.2.11. Exercise. Let V be a vector space, v and w be in V and φ and ψ be in V ∗. Then

(v ⊗ φ)(w ⊗ ψ) = φ(w)(v ⊗ ψ).
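Identifying v ⊗ φ with the rank-one operator w ↦ φ(w)v (in coordinates, the matrix v φᵀ), the identity above can be checked numerically. This is an editorial sketch with arbitrarily chosen vectors, not part of the text.

```python
import numpy as np

# Represent v (x) phi as the rank-one matrix v phi^T, so that
# (v (x) phi)(w) = phi(w) v.  The composition of two such operators
# then satisfies (v (x) phi)(w (x) psi) = phi(w) (v (x) psi).
v, w = np.array([1., 2., 3.]), np.array([0., 1., -1.])
phi, psi = np.array([2., 0., 1.]), np.array([1., 1., 4.])

left = np.outer(v, phi) @ np.outer(w, psi)   # (v (x) phi)(w (x) psi)
right = (phi @ w) * np.outer(v, psi)         # phi(w) (v (x) psi)
assert np.allclose(left, right)
```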

16.2.12. Exercise. Let V be a vector space, v be in V and φ and ψ be in V ∗. Then

(v ⊗ φ)∗(ψ) = ψ(v)φ.


CHAPTER 17

Spectra

17.1. Background

Topics: eigenvalues, spectrum, annihilating polynomial, minimal polynomial, division algebra, centralizer, skew-centralizer, spectral mapping theorem. (See AOLT, sections 2.4–2.5.)

17.1.1. Definition. Let a be an element of a unital algebra A over a field F. The spectrum of a, denoted by σ_A(a) or just σ(a), is the set of all λ ∈ F such that a − λ1 is not invertible.

NOTE: This definition, which is the "official" one for this course, differs from—and is not equivalent to—the one given on page 61 of AOLT.

17.1.2. Theorem (Spectral Mapping Theorem). If T is an operator on a finite dimensional vector space and p is a polynomial, then

σ(p(T)) = p(σ(T)).

That is, if σ(T) = {λ1, . . . , λk}, then σ(p(T)) = {p(λ1), . . . , p(λk)}.


17.2. Exercises (Due: Wed. Feb. 18)

17.2.1. Exercise. Let V be a vector space, T ∈ L(V ), v ∈ V , and φ ∈ V ∗. Then

T (v ⊗ φ) = (Tv)⊗ φ.

17.2.2. Exercise. Let V be a vector space, T ∈ L(V ), v ∈ V , and φ ∈ V ∗. Then

v ⊗ T ∗φ = (v ⊗ φ)T.

17.2.3. Exercise. Prove that every finite rank linear map between vector spaces is a linear combination of rank 1 linear maps.

17.2.4. Exercise (†). Give an example to show that the result stated in AOLT, Theorem 2.24, part 1 need not hold in an infinite dimensional unital algebra. Hint. Let l1(N, R) be the family of all absolutely summable sequences of real numbers, that is, the set of all sequences (a_n) of real numbers such that ∑_{n=1}^{∞} |a_n| < ∞. Consider the operator T in the algebra L(l1(N, R)) which takes the sequence (a_n) to the sequence (a_n / n).

17.2.5. Exercise. Give an example to show that the result stated in AOLT, Theorem 2.24, part 2 need not hold in an infinite dimensional unital algebra. Hint. Consider the unilateral shift operator U in the algebra L(l(N, R)) which takes the sequence (a_n) to the sequence (0, a1, a2, a3, . . . ).

17.2.6. Exercise. Give an example to show that the result stated in AOLT, Theorem 2.24, part 3 need not hold in an infinite dimensional unital algebra.

17.2.7. Exercise. If z is an element of the algebra C of complex numbers, then σ(z) = {z}.

17.2.8. Exercise. Let f be an element of the algebra C([a, b]) of continuous complex valued functions on the interval [a, b]. Find the spectrum of f.

17.2.9. Exercise. The family M3(C) of 3 × 3 matrices of complex numbers is a unital algebra under the usual matrix operations. Show that the spectrum of the matrix

⎡ 5  −6  −6 ⎤
⎢−1   4   2 ⎥
⎣ 3  −6  −4 ⎦

is {1, 2}.
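For a matrix over C the spectrum is just its set of eigenvalues, so the claim is quick to check numerically (an editorial illustration; the exercise, of course, asks for an exact argument):

```python
import numpy as np

# The spectrum of a complex matrix is its set of eigenvalues.
A = np.array([[ 5., -6., -6.],
              [-1.,  4.,  2.],
              [ 3., -6., -4.]])
# Here the eigenvalues are 1, 2, 2, so the spectrum is the set {1, 2}.
spectrum = set(np.round(np.linalg.eigvals(A).real).astype(int))
assert spectrum == {1, 2}
```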

17.2.10. Exercise. Let a be an element of a unital algebra such that a² = 1. Then either

(i) a = 1, in which case σ(a) = {1}, or
(ii) a = −1, in which case σ(a) = {−1}, or
(iii) σ(a) = {−1, 1}.

Hint. In (iii) to prove σ(a) ⊆ {−1, 1}, consider

(1/(1 − λ²))(a + λ1).

17.2.11. Exercise. An element a of an algebra is idempotent if a² = a. Let a be an idempotent element of a unital algebra. Then either

(i) a = 1, in which case σ(a) = {1}, or
(ii) a = 0, in which case σ(a) = {0}, or
(iii) σ(a) = {0, 1}.

Hint. In (iii) to prove σ(a) ⊆ {0, 1}, consider

(1/(λ − λ²))(a + (λ − 1)1).


17.2.12. Exercise. Let T be the operator on R³ whose matrix representation is

⎡ 1  −1  0 ⎤
⎢ 0   0  0 ⎥
⎣−2   2  2 ⎦

and let p(x) = x³ − x² + x − 3. Verify the spectral mapping theorem 17.1.2 in this special case by computing separately σ(p(T)) and p(σ(T)).
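A numerical version of the same check (an editorial sketch, not a substitute for the hand computation): evaluate p at the matrix T and compare the eigenvalues of p(T) with p applied to the eigenvalues of T.

```python
import numpy as np

# sigma(p(T)) = p(sigma(T)) for T as in exercise 17.2.12 and
# p(x) = x^3 - x^2 + x - 3.
T = np.array([[ 1., -1., 0.],
              [ 0.,  0., 0.],
              [-2.,  2., 2.]])
p = lambda M: M @ M @ M - M @ M + M - 3 * np.eye(3)   # p evaluated at a matrix

sigma_T = np.linalg.eigvals(T)                        # spectrum of T
lhs = sorted(np.round(np.linalg.eigvals(p(T))).real)  # sigma(p(T))
rhs = sorted(np.round(sigma_T**3 - sigma_T**2 + sigma_T - 3).real)  # p(sigma(T))
assert lhs == rhs                                     # spectral mapping
```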

17.2.13. Exercise. Prove the spectral mapping theorem 17.1.2.


CHAPTER 18

Polynomials

18.1. Background

Topics: formal power series, polynomials, convolution, indeterminant, degree of a polynomial, polynomial functional calculus, annihilating polynomial, monic polynomial, minimal polynomial.

18.1.1. Notation. We make the convention that the set of natural numbers N does not include zero but the set Z+ of positive integers does. Thus Z+ = N ∪ {0}.

18.1.2. Notation. If S is a set and A is an algebra, l(S, A) denotes the vector space of all functions from S into A with pointwise operations of addition and scalar multiplication, and lc(S, A) denotes the subspace of functions with finite support.

18.1.3. Definition. Let A be a unital commutative algebra. On the vector space l(Z+, A) define a binary operation ∗ (often called convolution) by

(f ∗ g)_n = ∑_{j+k=n} f_j g_k = ∑_{j=0}^{n} f_j g_{n−j}

(where f, g ∈ l(Z+, A) and n ∈ Z+). An element of l(Z+, A) is a formal power series (with coefficients in A) and an element of lc(Z+, A) is a polynomial (with coefficients in A).
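The convolution formula is easy to make concrete for polynomials, where only finitely many coefficients are nonzero. The following sketch (an editorial illustration with A = R, representing a polynomial by the list of its coefficients, index = degree) implements the definition directly:

```python
# Convolution product on coefficient sequences, with A = R.

def convolve(f, g):
    """(f * g)_n = sum over j of f_j * g_(n - j)."""
    h = [0] * (len(f) + len(g) - 1)
    for j, fj in enumerate(f):
        for k, gk in enumerate(g):
            h[j + k] += fj * gk
    return h

# (1 + x)(1 + x) = 1 + 2x + x^2
assert convolve([1, 1], [1, 1]) == [1, 2, 1]
# x * x = x^2 : the indeterminant x is the sequence (0, 1, 0, ...)
assert convolve([0, 1], [0, 1]) == [0, 0, 1]
```

This is exactly the usual rule for multiplying polynomials by collecting coefficients of like powers.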

18.1.4. Remark. We regard the algebra A as a subset of l(Z+, A) by identifying the element a ∈ A with the element (a, 0, 0, 0, . . . ) ∈ l(Z+, A). Thus the map a ↦ (a, 0, 0, 0, . . . ) becomes an inclusion map. (Technically speaking, of course, the map ψ : a ↦ (a, 0, 0, 0, . . . ) is an injective unital homomorphism and A ≅ ran ψ.)

18.1.5. Convention. In the algebra l(Z+, A) we will henceforth write ab for a ∗ b.

18.1.6. Definition. Let A be a unital commutative algebra. In the algebra l(Z+, A) of formal power series the special sequence x = (0, 1_A, 0, 0, 0, . . . ) is called the indeterminant of l(Z+, A). Notice that the sequence x^n = xx · · · x (n factors) has the property that (x^n)_n = 1 and (x^n)_k = 0 whenever k ≠ n. It is conventional to take x⁰ to be the identity (1_A, 0, 0, 0, . . . ) of l(Z+, A).

18.1.7. Remark. The algebra l(Z+, A) of formal power series with coefficients in a unital commutative algebra A is frequently denoted by A[[x]] and the subalgebra lc(Z+, A) of polynomials is denoted by A[x].

For many algebraists scalar multiplication is of little interest, so A is taken to be a unital commutative ring; then A[[x]] is the ring of formal power series (with coefficients in A) and A[x] is the polynomial ring (with coefficients in A). In your text, AOLT, A is always a field F. Since a field can be regarded as a one-dimensional vector space over itself, it is also an algebra. Thus in the text F[x] is the polynomial algebra with coefficients in F and has as its basis {x^n : n = 0, 1, 2, . . . }.

18.1.8. Definition. A nonzero polynomial p, being an element of lc(Z+, A), has finite support. So there exists n0 ∈ Z+ such that p_n = 0 whenever n > n0. The smallest such n0 is the degree of the polynomial. We denote it by deg p. A polynomial of degree 0 is a constant polynomial. The zero polynomial (the additive identity of l(Z+, A)) is also a constant polynomial and many authors assign its degree to be −∞.

If p is a polynomial of degree n, then p_n is the leading coefficient of p. A polynomial is monic if its leading coefficient is 1.

18.1.9. Definition. Let A be a unital algebra over a field F. For each polynomial p = ∑_{k=0}^{n} p_k x^k with coefficients in F define

p̃ : A → A : a ↦ ∑_{k=0}^{n} p_k a^k.

Then p̃ is the polynomial function on A determined by the polynomial p. Also for fixed a ∈ A define

Φ : F[x] → A : p ↦ p̃(a).

The mapping Φ is the polynomial functional calculus determined by the element a.

18.1.10. Definition. Let V be a vector space over a field F and T ∈ L(V). A nonzero polynomial p ∈ F[x] such that p(T) = 0 is an annihilating polynomial for T. A monic polynomial of smallest degree that annihilates T is a minimal polynomial for T.

18.1.11. Notation. If T is an operator on a finite dimensional vector space over a field F, we denote its minimal polynomial in F[x] by m_T.


18.2. Exercises (Due: Fri. Feb. 20)

18.2.1. Exercise. If A is a unital commutative algebra, then under the operations defined in 18.1.3 l(Z+, A) is a unital commutative algebra (whose multiplicative identity is the sequence (1_A, 0, 0, 0, . . . )) and lc(Z+, A) is a unital subalgebra of l(Z+, A).

18.2.2. Exercise. If φ : A → B is a unital algebra homomorphism between unital commutative algebras, then the map

l(Z+, φ) : l(Z+, A) → l(Z+, B) : f ↦ (φ(f_n))_{n=0}^{∞}

is also a unital homomorphism of unital commutative algebras. The pair of maps A ↦ l(Z+, A) and φ ↦ l(Z+, φ) is a covariant functor from the category of unital commutative algebras and unital algebra homomorphisms to itself.

18.2.3. Exercise. Let A be a unital commutative algebra. If p is a nonzero polynomial in lc(Z+, A), then

p = ∑_{k=0}^{n} p_k x^k where n = deg p.

18.2.4. Exercise. If p and q are polynomials with coefficients in a unital commutative algebra A, then

(i) deg(p + q) ≤ max{deg p, deg q}, and
(ii) deg(pq) ≤ deg p + deg q.

If A is a field, then equality holds in (ii).

18.2.5. Exercise. Show that if A is a unital commutative algebra, then so is l(A, A) under pointwise operations of addition, multiplication, and scalar multiplication.

18.2.6. Exercise (†). Prove that for each a ∈ A the polynomial functional calculus Φ : F[x] → A defined in 18.1.9 is a unital algebra homomorphism. Show also that the map Ψ : F[x] → l(A, A) : p ↦ p̃ is a unital algebra homomorphism. (Pay especially close attention to the fact that "multiplication" on F[x] is convolution whereas "multiplication" on l(A, A) is defined pointwise.) What is the image under Φ of the indeterminant x? What is the image under Ψ of the indeterminant x?

18.2.7. Exercise. AOLT, section 2.7, exercise 2.

18.2.8. Exercise (†). Give an example to show that the polynomial functional calculus Φ may fail to be injective. Hint. The preceding exercise 18.2.7.

18.2.9. Exercise. Let A be a unital algebra over a field F with finite dimension m. Show that for every a ∈ A there exists a polynomial p ∈ F[x] such that 1 ≤ deg p ≤ m and p(a) = 0.

18.2.10. Exercise. Let V be a finite dimensional vector space. Prove that every T ∈ L(V) has a minimal polynomial. Hint. Use exercise 18.2.9.

18.2.11. Exercise. Let f and d be polynomials with coefficients in a field F and suppose that d ≠ 0. Then there exist unique polynomials q and r in F[x] such that

(i) f = dq + r and
(ii) r = 0 or deg r < deg d.

Hint. Let f = ∑_{j=0}^{k} f_j x^j and d = ∑_{j=0}^{m} d_j x^j be in standard form. The case k < m is trivial. For k ≥ m suppose the result to be true for all polynomials of degree strictly less than k. What can you say about f̃ = f − p where p = (f_k d_m^{−1}) x^{k−m} d?
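The inductive step in the hint is precisely polynomial long division: subtract (f_k / d_m) x^{k−m} d to kill the leading term and recurse. A sketch of that procedure (an editorial illustration over Q, with polynomials as coefficient lists, index = degree, and the leading coefficient of d assumed nonzero):

```python
# Division algorithm following the hint: repeatedly subtract
# (f_k / d_m) x^(k-m) d until the degree drops below deg d.

def divmod_poly(f, d):
    """Return (q, r) with f = d*q + r and deg r < deg d (or r = 0)."""
    q = [0] * max(len(f) - len(d) + 1, 1)
    r = f[:]
    while len(r) >= len(d) and any(r):
        k, m = len(r) - 1, len(d) - 1
        c = r[k] / d[m]                  # leading coefficient f_k / d_m
        q[k - m] = c
        for j in range(len(d)):          # r <- r - c x^(k-m) d
            r[j + k - m] -= c * d[j]
        r.pop()                          # leading term is now zero
    return q, r

# x^3 - 1 divided by x - 1 gives q = x^2 + x + 1 and r = 0.
q, r = divmod_poly([-1, 0, 0, 1], [-1, 1])
assert q == [1, 1, 1] and not any(r)
```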

18.2.12. Exercise. Let V be a finite dimensional vector space and T ∈ L(V). Prove that the minimal polynomial for T is unique.


CHAPTER 19

Polynomials (continued)

19.1. Background

Topics: irreducible polynomial, prime polynomial, Lagrange interpolation formula, unique factorization theorem, greatest common divisor, relatively prime.

19.1.1. Definition. A polynomial p ∈ F[x] is irreducible (over F) provided that it is not constant and whenever p = fg with f, g ∈ F[x], then either f or g is constant.

19.1.2. Definition. Let t0, t1, . . . , tn be distinct elements of a field F. For 0 ≤ k ≤ n define p_k ∈ F[x] by

p_k = ∏_{j=0, j≠k}^{n} (x − t_j)/(t_k − t_j).

19.1.3. Definition. If F is a field, a polynomial p ∈ F[x] is reducible over F if there exist polynomials q and r in F[x] both of degree at least one such that p = qr. A polynomial which is not reducible over F is irreducible over F. A nonscalar irreducible polynomial is a prime polynomial over F (or is a prime in F[x]).


19.2. Exercises (Due: Mon. Feb. 23)

19.2.1. Exercise. Let T be an operator on a finite dimensional vector space. Show that T is invertible if and only if the constant term of its minimal polynomial is not zero. Explain, for an invertible operator T, how to write its inverse as a polynomial in T.

19.2.2. Exercise (†). Let T be an operator on a finite dimensional vector space over a field F. If p ∈ F[x] and p(T) = 0, then m_T divides p. (If p, p1 ∈ F[x], we say that p1 divides p if there exists q ∈ F[x] such that p = p1 q.)

19.2.3. Exercise. Let T be the operator on the real vector space R² whose matrix representation (with respect to the standard basis) is

⎡ 0  −1 ⎤
⎣ 1   0 ⎦ .

Find the minimal polynomial m_T of T and show that it is irreducible (over R).

19.2.4. Exercise. Let T be the operator on the complex vector space C² whose matrix representation (with respect to the standard basis) is

⎡ 0  −1 ⎤
⎣ 1   0 ⎦ .

Find the minimal polynomial m_T of T and show that it is not irreducible (over C).

19.2.5. Exercise. Let p be a polynomial of degree m ≥ 1 in F[x]. If J_p is the principal ideal generated by p, that is, if

J_p := {pf : f ∈ F[x]},

then dim F[x]/J_p = m. Hint. Show that B = {[x^k] : k = 0, 1, . . . , m − 1} is a basis for the vector space F[x]/J_p.

19.2.6. Exercise. Let T be an operator on a finite dimensional vector space V over a field F and Φ : F[x] → L(V) be the associated polynomial functional calculus. If p is a polynomial of degree m ≥ 1 in F[x] and J_p is the principal ideal generated by p, then the sequence

0 → J_p → F[x] → ran Φ → 0 (the second map being Φ)

is exact.

19.2.7. Exercise (Lagrange Interpolation Formula). Prove that the polynomials defined in 19.1.2 form a basis for the vector space V of all polynomials with coefficients in F and degree less than or equal to n, and that for each polynomial q ∈ V

q = ∑_{k=0}^{n} q(t_k) p_k.

19.2.8. Exercise. Use the Lagrange Interpolation Formula to find the polynomial with coefficients in R and degree no greater than 3 whose values at −1, 0, 1, and 2 are, respectively, −6, 2, −2, and 6.
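The formula q = ∑ q(t_k) p_k can be evaluated mechanically. The following sketch (an editorial illustration using exact rational arithmetic; it evaluates the interpolant pointwise rather than expanding its coefficients, so the exercise's closed-form answer is left to the reader) implements the basis polynomials of 19.1.2 for the data above:

```python
from fractions import Fraction

# Lagrange interpolation, q = sum_k q(t_k) p_k, evaluated pointwise.
# Data from exercise 19.2.8: values -6, 2, -2, 6 at nodes -1, 0, 1, 2.
nodes = [-1, 0, 1, 2]
values = [-6, 2, -2, 6]

def interpolate(x):
    """Evaluate the interpolating polynomial at an integer point x."""
    total = Fraction(0)
    for k, tk in enumerate(nodes):
        pk = Fraction(1)
        for j, tj in enumerate(nodes):
            if j != k:
                pk *= Fraction(x - tj, tk - tj)   # (x - t_j)/(t_k - t_j)
        total += values[k] * pk
    return total

# The interpolant reproduces the prescribed values at the nodes.
assert [interpolate(t) for t in nodes] == values
```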

19.2.9. Exercise. Let F be a field and p, q, and r be polynomials in F[x]. If p is a prime in F[x] and p divides qr, then p divides q or p divides r.

19.2.10. Exercise. Prove the Unique Factorization Theorem: Let F be a field. A nonscalar monic polynomial in F[x] can be factored in exactly one way (except for the order of the factors) as a product of monic primes in F[x].


19.2.11. Exercise. Let F be a field. Then every nonzero ideal in F[x] is principal.

19.2.12. Exercise. If p1, . . . , pn are polynomials, not all zero, with coefficients in a field F, then there exists a unique monic polynomial d in the ideal generated by p1, . . . , pn such that d divides p_k for each k = 1, . . . , n and, furthermore, any polynomial which divides each p_k also divides d. This polynomial d is the greatest common divisor of the p_k's. The polynomials p_k are relatively prime if their greatest common divisor is 1.


CHAPTER 20

Invariant Subspaces

20.1. Background

Topics: invariant subspaces, invariant subspace lattice, reducing subspaces, transitive algebra, Burnside's theorem, triangulable. (See AOLT, sections 3.1 and 3.3.)

20.1.1. Definition. An operator T on a vector space V is reduced by a pair (M, N) of subspaces M and N of V if

(i) V = M ⊕ N,
(ii) M is invariant under T, and
(iii) N is invariant under T.


20.2. Exercises (Due: Wed. Feb. 25)

20.2.1. Exercise. Let S be the operator on R³ whose matrix representation is

⎡ 3  4  2 ⎤
⎢ 0  1  2 ⎥
⎣ 0  0  0 ⎦ .

Find three one dimensional subspaces U , V , and W of R3 which are invariant under S.
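One systematic source of one dimensional invariant subspaces is eigenvectors: S maps span{v} into itself exactly when Sv is a multiple of v. A numerical sketch of this observation (an editorial illustration; the matrix here has three distinct eigenvalues, hence three independent eigenvectors):

```python
import numpy as np

# Each eigenvector of S spans a one-dimensional invariant subspace.
S = np.array([[3., 4., 2.],
              [0., 1., 2.],
              [0., 0., 0.]])
eigenvalues, eigenvectors = np.linalg.eig(S)

for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(S @ v, lam * v)   # S v stays in span{v}
```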

20.2.2. Exercise. Let T be the operator on R³ whose matrix representation is

⎡ 0  0  2 ⎤
⎢ 0  2  0 ⎥
⎣ 2  0  0 ⎦ .

Find a two dimensional subspace U of R3 which is invariant under T .

20.2.3. Exercise. Find infinitely many subspaces of the vector space of polynomial functions on R which are invariant under the differentiation operator.

20.2.4. Exercise. Let T be the operator on R³ whose matrix representation is

⎡ 2   0  0 ⎤
⎢−1   3  2 ⎥
⎣ 1  −1  0 ⎦ .

Find a plane and a line in R³ which reduce T.

20.2.5. Exercise. AOLT, section 3.7, exercise 1.

20.2.6. Exercise. AOLT, section 3.7, exercise 2.

20.2.7. Exercise (†). Let M be a subspace of a vector space V and T ∈ L(V). Show that if M is invariant under T, then ETE = TE for every projection E onto M. Show also that if ETE = TE for some projection E onto M, then M is invariant under T.

20.2.8. Exercise. Suppose a vector space V has the direct sum decomposition V = M ⊕ N. An operator T on V is reduced by the pair (M, N) if and only if ET = TE, where E = E_{MN} is the projection along M onto N.

20.2.9. Exercise. Let M and N be complementary subspaces of a vector space V (that is, V is the direct sum of M and N) and let T be an operator on V. Show that if M is invariant under T, then M⊥ is invariant under T∗ and that if T is reduced by the pair (M, N), then T∗ is reduced by the pair (M⊥, N⊥).

20.2.10. Exercise. Let T : V → W be linear and S : W → V a left inverse for T. Then

(a) W = ran T ⊕ ker S, and
(b) TS is the projection along ker S onto ran T.

20.2.11. Exercise. If V is a vector space, V = M ⊕N = M ′⊕N , and M ⊆M ′, then M = M ′.


CHAPTER 21

The Spectral Theorem for Vector Spaces

21.1. Background

Topics: similarity of operators, diagonalizable, resolution of the identity, spectral theorem for vector spaces. (See section 1.1 of my notes on operator algebras [12].)

21.1.1. Definition. Suppose that on a vector space V there exist projection operators E1, . . . , En such that

(i) I_V = E1 + E2 + · · · + En and
(ii) E_iE_j = 0 whenever i ≠ j.

Then we say that the family E1, E2, . . . , En of projections is a resolution of the identity.

21.1.2. Definition. Two operators on a vector space (or two n × n matrices) R and T are similar if there exists an invertible operator (or matrix) S such that R = S⁻¹TS.

21.1.3. Notation. Let α1, . . . , αn be elements of a field F. Then diag(α1, . . . , αn) denotes the n × n matrix whose entries are all zero except on the main diagonal, where they are α1, . . . , αn. That is, if [d_{jk}] = diag(α1, . . . , αn), then d_{kk} = α_k for each k, and d_{jk} = 0 whenever j ≠ k. Such a matrix is a diagonal matrix.

21.1.4. Definition. Let V be a vector space of finite dimension n. An operator T on V is diagonalizable if it has n linearly independent eigenvectors (or, equivalently, if it has a basis of eigenvectors).

21.1.5. Theorem (Spectral Theorem for Vector Spaces). If T is a diagonalizable operator on a finite dimensional vector space V, then

T = ∑_{k=1}^{n} λ_k E_k

where λ1, . . . , λn are the (distinct) eigenvalues of T and E1, . . . , En is the resolution of the identity whose projections are associated with the corresponding eigenspaces M1, . . . , Mn.
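The decomposition can be exhibited concretely for a diagonalizable matrix (an editorial sketch, not part of the text; the matrix below is arbitrary): if T = S D S⁻¹ with D diagonal, the projection onto the eigenspace for λ_k is E_k = S P_k S⁻¹, where P_k is the diagonal 0–1 matrix selecting the entries of D equal to λ_k.

```python
import numpy as np

# Spectral decomposition T = sum_k lambda_k E_k for a diagonalizable matrix.
T = np.array([[2., 1.],
              [0., 3.]])                       # eigenvalues 2 and 3
eigs, S = np.linalg.eig(T)
S_inv = np.linalg.inv(S)
lambdas = np.unique(np.round(eigs, 8))         # the distinct eigenvalues

# E_k = S P_k S^{-1}, with P_k the 0-1 diagonal selector for lambda_k.
E = [S @ np.diag(np.isclose(eigs, lam).astype(float)) @ S_inv
     for lam in lambdas]

assert np.allclose(sum(lam * Ek for lam, Ek in zip(lambdas, E)), T)
assert np.allclose(sum(E), np.eye(2))              # resolution of the identity
assert np.allclose(E[0] @ E[1], 0)                 # E_i E_j = 0 for i != j
assert all(np.allclose(Ek @ Ek, Ek) for Ek in E)   # each E_k idempotent
```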


21.2. Exercises (Due: Fri. Feb. 27)

21.2.1. Exercise. If E1, E2, . . . , En is a resolution of the identity on a vector space V, then V = ⊕_{k=1}^{n} ran E_k.

21.2.2. Exercise. If M1 ⊕ · · · ⊕ Mn is a direct sum decomposition of a vector space V, then the family E1, E2, . . . , En of the associated projections is a resolution of the identity.

21.2.3. Exercise. If v is a nonzero eigenvector associated with an eigenvalue λ of an operator T ∈ L(V) and p is a polynomial in F[x], then p(T)v = p(λ)v.

21.2.4. Exercise. Let λ1, . . . , λn be the distinct eigenvalues of an operator T on a finite dimensional vector space V and let M1, . . . , Mn be the corresponding eigenspaces. If U = ∑_{k=1}^{n} Mk, then dim U = ∑_{k=1}^{n} dim Mk. Furthermore, if Bk is a basis for Mk (k = 1, . . . , n), then ⋃_{k=1}^{n} Bk is a basis for U.

21.2.5. Exercise. If R and T are operators on a vector space, is RT always similar to TR? What if R is invertible?

21.2.6. Exercise. Let R and T be operators on a vector space. If R is similar to T and p ∈ F[x] is a polynomial, then p(R) is similar to p(T).

21.2.7. Exercise. If R and T are operators on a vector space, R is similar to T, and R is invertible, then T is invertible and T^{−1} is similar to R^{−1}.

21.2.8. Exercise. Let A be an n × n matrix with entries from a field F. Then A, regarded as an operator on Fn, is diagonalizable if and only if it is similar to a diagonal matrix.

21.2.9. Exercise (†). Let E1, . . . , En be the projections associated with a direct sum decomposition V = M1 ⊕ · · · ⊕ Mn of a vector space V and let T be an operator on V. Then each subspace Mk is invariant under T if and only if T commutes with each projection Ek.

21.2.10. Exercise. Prove the spectral theorem for vector spaces 21.1.5.

21.2.11. Exercise. Let T be an operator on a finite dimensional vector space V. If λ1, . . . , λn are distinct scalars and E1, . . . , En are nonzero operators on V such that

(i) T = ∑_{k=1}^{n} λk Ek,
(ii) I = ∑_{k=1}^{n} Ek, and
(iii) EjEk = 0 whenever j ≠ k,

then T is diagonalizable, the scalars λ1, . . . , λn are the eigenvalues of T, and the operators E1, . . . , En are projections whose ranges are the eigenspaces of T.

21.2.12. Exercise. If T is an operator on a finite dimensional vector space V over a field F and p ∈ F[x], then

p(T) = ∑_{k=1}^{n} p(λk) Ek

where λ1, . . . , λn are the (distinct) eigenvalues of T and E1, . . . , En are the projections associated with the corresponding eigenspaces M1, . . . , Mn.

Page 81: Multi Linear Algebra PDF

CHAPTER 22

The Spectral Theorem for Vector Spaces (continued)

22.1. Background

Topics: representation, faithful representation, irreducible representation, Cayley-Hamilton theorem. (See AOLT, section 3.4.)

22.1.1. Definition. If A is an n × n matrix, define the characteristic polynomial cA of A to be the determinant of xI − A. (Note that xI − A is a matrix with polynomial entries.) As you would expect, the characteristic polynomial of an operator on a finite dimensional space (with basis B) is the characteristic polynomial of the matrix representation of that operator (with respect to B).

22.1.2. Theorem (Cayley-Hamilton Theorem). If T is an operator on a finite dimensional vector space, then the characteristic polynomial of T annihilates T.
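A quick numerical illustration of the theorem (a sketch in Python with numpy; the sample matrix is arbitrary, not one from the exercises):

```python
import numpy as np

# An arbitrary 3x3 sample matrix (illustration only).
A = np.array([[1., 2., 0.],
              [0., 1., 3.],
              [4., 0., 1.]])

# Coefficients of the characteristic polynomial det(xI - A),
# highest degree first; np.poly computes exactly this.
c = np.poly(A)

# Evaluate c_A(A) by Horner's scheme, substituting the matrix for x.
cA_of_A = np.zeros_like(A)
for coeff in c:
    cA_of_A = cA_of_A @ A + coeff * np.eye(3)

# Cayley-Hamilton: the characteristic polynomial annihilates A.
assert np.allclose(cA_of_A, np.zeros((3, 3)))
```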


22.2. Exercises (Due: Mon. Mar. 2)

22.2.1. Exercise (†). AOLT, section 3.7, exercise 12.

22.2.2. Exercise. If T is a diagonalizable operator on a finite dimensional vector space V, then the projections E1, . . . , En associated with the decomposition of V as a direct sum ⊕ Mk of its eigenspaces can be expressed as polynomials in T. Hint. Apply the Lagrange interpolation formula with the tk's being the eigenvalues of T.
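The hint can be illustrated numerically. In the sketch below (Python/numpy, using a made-up 2×2 matrix with eigenvalues 3 and −1, not a matrix from the exercises), each projection is literally a polynomial in T: the Lagrange basis polynomial for its eigenvalue, evaluated at T.

```python
import numpy as np

# Hypothetical diagonalizable matrix (illustration only): eigenvalues 3, -1.
T = np.array([[1., 2.],
              [2., 1.]])

def lagrange_projection(T, lam, others):
    """E = prod_{mu != lam} (T - mu*I) / (lam - mu), a polynomial in T."""
    n = T.shape[0]
    E = np.eye(n)
    for mu in others:
        E = E @ (T - mu * np.eye(n)) / (lam - mu)
    return E

E1 = lagrange_projection(T, 3.0, [-1.0])
E2 = lagrange_projection(T, -1.0, [3.0])

assert np.allclose(E1 + E2, np.eye(2))          # resolution of the identity
assert np.allclose(E1 @ E1, E1)                 # idempotent
assert np.allclose(3.0 * E1 + (-1.0) * E2, T)   # spectral decomposition
```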

22.2.3. Exercise. Let T be the operator on R3 whose matrix representation is

[ 0  0  2 ]
[ 0  2  0 ]
[ 2  0  0 ]

Use exercise 22.2.2 to write T as a linear combination of projections.

22.2.4. Exercise. Let T be the operator on R3 whose matrix representation is

[  2  −2   1 ]
[ −1   1   1 ]
[ −1   2   0 ]

Use exercise 22.2.2 to write T as a linear combination of projections.

22.2.5. Exercise. Let T be the operator on R3 whose matrix representation is given in exercise 22.2.2. Write T as a linear combination of projections.

22.2.6. Exercise. Let T be an operator on a finite dimensional vector space over a field F. Show that λ is an eigenvalue of T if and only if cT(λ) = 0.

22.2.7. Exercise (††). Let T be an operator on a finite dimensional vector space. Show that T is diagonalizable if and only if its minimal polynomial is of the form ∏_{k=1}^{n} (x − λk) for some distinct elements λ1, . . . , λn of the scalar field F.

22.2.8. Exercise. If T is an operator on a finite dimensional vector space, then its minimal polynomial and characteristic polynomial have the same roots.

22.2.9. Exercise. Prove the Cayley-Hamilton theorem 22.1.2. Hint. A proof is in the text (see AOLT, pages 92–94). The problem here is to fill in missing details and provide in a coherent fashion any background material necessary to understanding the proof.


CHAPTER 23

Diagonalizable Plus Nilpotent Decomposition

23.1. Background

Topics: primary decomposition theorem, diagonalizable plus nilpotent decomposition, Jordan normal form.

23.1.1. Theorem (Primary Decomposition Theorem). Let T ∈ L(V) where V is a finite dimensional vector space. Factor the minimal polynomial

mT = ∏_{k=1}^{n} pk^{rk}

into powers of distinct irreducible monic polynomials p1, . . . , pn and let Wk = ker( pk(T)^{rk} ) for each k. Then

(i) V = ⊕_{k=1}^{n} Wk,
(ii) each Wk is invariant under T, and
(iii) if Tk = T|Wk, then mTk = pk^{rk}.

In the preceding theorem the spaces Wk are the generalized eigenspaces of the operator T .

23.1.2. Theorem. Let T be an operator on a finite dimensional vector space V. Suppose that the minimal polynomial for T factors completely into linear factors

mT(x) = (x − λ1)^{d1} · · · (x − λr)^{dr}

where λ1, . . . , λr are the (distinct) eigenvalues of T. For each k let Wk be the generalized eigenspace ker(T − λkI)^{dk} and let E1, . . . , Er be the projections associated with the direct sum decomposition

V = W1 ⊕ W2 ⊕ · · · ⊕ Wr.

Then this family of projections is a resolution of the identity, each Wk is invariant under T, the operator

D = λ1E1 + · · · + λrEr

is diagonalizable, the operator

N = T − D

is nilpotent, and N commutes with D.
Furthermore, if D1 is diagonalizable, N1 is nilpotent, D1 + N1 = T, and D1N1 = N1D1, then D1 = D and N1 = N.

23.1.3. Corollary. Every operator on a finite dimensional complex vector space can be written as the sum of two commuting operators, one diagonalizable and the other nilpotent.
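For a concrete illustration (a sketch in Python/numpy with the simplest hypothetical non-diagonalizable matrix, not one from the exercises): when the only eigenvalue is 1, the diagonalizable part is D = 1·E1 = I and the nilpotent part is whatever remains.

```python
import numpy as np

# Hypothetical non-diagonalizable matrix (illustration only):
# single eigenvalue 1 with a one-dimensional eigenspace.
T = np.array([[1., 1.],
              [0., 1.]])

D = np.eye(2)   # diagonalizable part: the lone eigenvalue times I
N = T - D       # nilpotent part

assert np.allclose(D + N, T)
assert np.allclose(D @ N, N @ D)              # D and N commute
assert np.allclose(N @ N, np.zeros((2, 2)))   # N is nilpotent
```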


23.2. Exercises (Due: Wed. Mar. 4)

23.2.1. Exercise. Prove the primary decomposition theorem 23.1.1.

23.2.2. Exercise. Prove theorem 23.1.2.

23.2.3. Exercise. Let T be the operator on R2 whose matrix representation is

[  2   1 ]
[ −1   4 ]

(a) Explain briefly why T is not diagonalizable.
(b) Find the diagonalizable and nilpotent parts of T.

Answer: D =
[ a  b ]
[ b  a ]
and N =
[ −c  c ]
[ −c  c ]
where a = ____, b = ____, and c = ____.

23.2.4. Exercise (†). Let T be the operator on R3 whose matrix representation is

[  0   0  −3 ]
[ −2   1  −2 ]
[  2  −1   5 ]

(a) Find D and N, the diagonalizable and nilpotent parts of T. Express these as polynomials in T.
(b) Find a matrix S which diagonalizes D.
(c) Let

[D1] =
[  2  −1  −1 ]
[ −1   2  −1 ]
[ −1  −1   2 ]
and [N1] =
[ −2   1  −2 ]
[ −1  −1  −1 ]
[  3   0   3 ]

Show that D1 is diagonalizable, that N1 is nilpotent, and that T = D1 + N1. Why does this not contradict the uniqueness claim made in theorem 23.1.2?

23.2.5. Exercise. Let T be the operator on R4 whose matrix representation is

[  0   1   0  −1 ]
[ −2   3   0  −1 ]
[ −2   1   2  −1 ]
[  2  −1   0   3 ]

(a) The characteristic polynomial of T is (λ − 2)^p where p = ____.
(b) The minimal polynomial of T is (λ − 2)^r where r = ____.
(c) The diagonalizable part of T is D =
[ a  b  b  b ]
[ b  a  b  b ]
[ b  b  a  b ]
[ b  b  b  a ]
where a = ____ and b = ____.
(d) The nilpotent part of T is N =
[ −a   b   c  −b ]
[ −a   b   c  −b ]
[ −a   b   c  −b ]
[  a  −b   c   b ]
where a = ____, b = ____, and c = ____.

23.2.6. Exercise. Let T be the operator on R5 whose matrix representation is

[ 1   0   0   1  −1 ]
[ 0   1  −2   3  −3 ]
[ 0   0  −1   2  −2 ]
[ 1  −1   1   0   1 ]
[ 1  −1   1  −1   2 ]

(a) Find the characteristic polynomial of T.
Answer: cT(λ) = (λ + 1)^p (λ − 1)^q where p = ____ and q = ____.
(b) Find the minimal polynomial of T.
Answer: mT(λ) = (λ + 1)^r (λ − 1)^s where r = ____ and s = ____.
(c) Find the eigenspaces V1 and V2 of T.
Answer: V1 = span{(a, 1, b, a, a)} where a = ____ and b = ____; and V2 = span{(1, a, b, b, b), (b, b, b, 1, a)} where a = ____ and b = ____.
(d) Find the diagonalizable part of T.
Answer: D =
[ a  b   b  b   b ]
[ b  a  −c  c  −c ]
[ b  b  −a  c  −c ]
[ b  b   b  a   b ]
[ b  b   b  b   a ]
where a = ____, b = ____, and c = ____.
(e) Find the nilpotent part of T.
Answer: N =
[ a   a  a   b  −b ]
[ a   a  a   b  −b ]
[ a   a  a   a   a ]
[ b  −b  b  −b   b ]
[ b  −b  b  −b   b ]
where a = ____ and b = ____.
(f) Find a matrix S which diagonalizes the diagonalizable part D of T. What is the diagonal form Λ of D associated with this matrix?
Answer: S =
[ a  b  a  a  a ]
[ b  a  b  a  a ]
[ b  a  a  b  a ]
[ a  a  a  b  b ]
[ a  a  a  a  b ]
where a = ____ and b = ____, and Λ =
[ −a  0  0  0  0 ]
[  0  a  0  0  0 ]
[  0  0  a  0  0 ]
[  0  0  0  a  0 ]
[  0  0  0  0  a ]
where a = ____.

23.2.7. Exercise. Prepare and deliver a (30–45 minute) blackboard presentation on the Jordan normal form of a matrix.


CHAPTER 24

Inner Product Spaces

24.1. Background

Topics: inner products, norm, unit vector, square summable sequences, the Schwarz (or Cauchy-Schwarz) inequality. (See AOLT, section 1.3 and also section 1.2 of my lecture notes on operator algebras [12].)

24.1.1. Definition. If x is a vector in Rn, then the norm (or length) of x is defined by

‖x‖ = √⟨x, x⟩ .

A vector of length 1 is a unit vector.

24.1.2. Theorem (Pythagorean theorem). If x and y are perpendicular vectors in an inner product space, then

‖x + y‖^2 = ‖x‖^2 + ‖y‖^2.

24.1.3. Theorem (Parallelogram law). If x and y are vectors in an inner product space, then

‖x + y‖^2 + ‖x − y‖^2 = 2‖x‖^2 + 2‖y‖^2.

24.1.4. Theorem (Polarization identity). If x and y are vectors in a complex inner product space, then

⟨x, y⟩ = (1/4)( ‖x + y‖^2 − ‖x − y‖^2 + i ‖x + iy‖^2 − i ‖x − iy‖^2 ).

What is the correct formula for a real inner product space?
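Both the polarization identity and the parallelogram law are easy to confirm numerically. In the sketch below (Python/numpy), the inner product is taken linear in the first variable, ⟨x, y⟩ = ∑ xk ȳk, which is the convention under which the formula above reproduces ⟨x, y⟩.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# Inner product linear in the first variable: <x, y> = sum x_k conj(y_k).
inner = np.sum(x * np.conj(y))
norm = lambda v: np.sqrt(np.real(np.sum(v * np.conj(v))))

# Polarization identity: <x, y> is recovered from norms alone.
polar = 0.25 * (norm(x + y)**2 - norm(x - y)**2
                + 1j * norm(x + 1j * y)**2 - 1j * norm(x - 1j * y)**2)
assert np.allclose(polar, inner)

# Parallelogram law.
assert np.isclose(norm(x + y)**2 + norm(x - y)**2,
                  2 * norm(x)**2 + 2 * norm(y)**2)
```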


24.2. Exercises (Due: Mon. Mar. 9)

24.2.1. Exercise. Prove the Pythagorean theorem 24.1.2.

24.2.2. Exercise. Prove the parallelogram law 24.1.3.

24.2.3. Exercise. On the vector space C([0, 1]) of continuous real valued functions on the interval [0, 1] the uniform norm is defined by ‖f‖u = sup{ |f(x)| : 0 ≤ x ≤ 1 }. Prove that the uniform norm is not induced by an inner product. That is, prove that there is no inner product on C([0, 1]) such that ‖f‖u = √⟨f, f⟩ for all f ∈ C([0, 1]). Hint. Use exercise 24.2.2.

24.2.4. Exercise. Prove the polarization identity 24.1.4.

24.2.5. Exercise. If a1, . . . , an > 0, then

( ∑_{j=1}^{n} aj ) ( ∑_{k=1}^{n} 1/ak ) ≥ n^2.

The proof of this is obvious from the Schwarz inequality if we choose x and y to be what?

24.2.6. Exercise (†). Notice that part (a) is a special case of part (b).

(a) Show that if a, b, c > 0, then ( (1/2)a + (1/3)b + (1/6)c )^2 ≤ (1/2)a^2 + (1/3)b^2 + (1/6)c^2.
(b) Show that if a1, . . . , an, w1, . . . , wn > 0 and ∑_{k=1}^{n} wk = 1, then

( ∑_{k=1}^{n} ak wk )^2 ≤ ∑_{k=1}^{n} ak^2 wk.

24.2.7. Exercise. Show that if ∑_{k=1}^{∞} ak^2 converges, then ∑_{k=1}^{∞} k^{−1} ak converges absolutely.

24.2.8. Exercise. A sequence (ak) of (real or) complex numbers is square summable if ∑_{k=1}^{∞} |ak|^2 < ∞. The vector space of all square summable sequences of real numbers (respectively, complex numbers) is denoted by l2(R) (respectively, l2(C)). When no confusion will result, both are denoted by l2. If a, b ∈ l2, define

⟨a, b⟩ = ∑_{k=1}^{∞} ak b̄k.

Show that this definition makes sense and makes l2 into an inner product space.

24.2.9. Exercise (†). Use vector methods (no coordinates, no major results from Euclidean geometry) to show that the midpoint of the hypotenuse of a right triangle is equidistant from the vertices. Hint. Let △ABC be a right triangle and O be the midpoint of the hypotenuse AB. What can you say about ⟨AO + OC, CO + OB⟩ (where AO denotes the vector from A to O, and similarly for the others)?

24.2.10. Exercise. Use vector methods to show that if a parallelogram has perpendicular diagonals, then it is a rhombus (that is, all four sides have equal length). Hint. Let ABCD be a parallelogram. Express the dot (inner) product of the diagonal vectors AC and DB in terms of the lengths of the sides AB and BC.

24.2.11. Exercise. Use vector methods to show that an angle inscribed in a semicircle is aright angle.


24.2.12. Exercise. Let a be a vector in an inner product space H. If ⟨x, a⟩ = 0 for every x ∈ H, then a = 0.

24.2.13. Exercise. Let S, T : H → K be linear maps between inner product spaces H and K. If ⟨Sx, y⟩ = ⟨Tx, y⟩ for every x ∈ H and y ∈ K, then S = T.


CHAPTER 25

Orthogonality and Adjoints

25.1. Background

Topics: orthogonality, orthogonal complement, adjoint, involution, ∗-algebra, ∗-homomorphism,self-adjoint, Hermitian, normal, unitary.

25.1.1. Definition. An involution on an algebra A is a map x ↦ x∗ from A into A which satisfies

(i) (x + y)∗ = x∗ + y∗,
(ii) (αx)∗ = ᾱx∗,
(iii) x∗∗ = x, and
(iv) (xy)∗ = y∗x∗

for all x, y ∈ A and α ∈ E. An algebra on which an involution has been defined is a ∗-algebra (pronounced “star algebra”). An algebra homomorphism φ between ∗-algebras which preserves involution (that is, such that φ(a∗) = (φ(a))∗) is a ∗-homomorphism (pronounced “star homomorphism”). A ∗-homomorphism φ : A → B between unital algebras is said to be unital if φ(1A) = 1B.

25.1.2. Definition. Let H and K be inner product spaces and T : H → K be a linear map. If there exists a function T∗ : K → H which satisfies

⟨Tx, y⟩ = ⟨x, T∗y⟩

for all x ∈ H and y ∈ K, then T∗ is the adjoint of T.

When H and K are real vector spaces, the adjoint of T is usually called the transpose of T and the notation T^t is used (rather than T∗).

25.1.3. Definition. An element a of a ∗-algebra A is self-adjoint (or Hermitian) if a∗ = a. It is normal if a∗a = aa∗. And it is unitary if a∗a = aa∗ = 1. The set of all self-adjoint elements of A is denoted by H(A), the set of all normal elements by N(A), and the set of all unitary elements by U(A).


25.2. Exercises (Due: Wed. Mar. 11)

25.2.1. Exercise. Let H be an inner product space and a ∈ H. Define ψa : H → E by ψa(x) = ⟨x, a⟩ for all x ∈ H. Then ψa is a linear functional on H. (See the last sentence of the proof of Theorem 1.15, AOLT, page 15.)

25.2.2. Exercise (†). Give a proof of the Riesz representation theorem that is much simpler than the one given in AOLT, page 15, Theorem 1.15. Hint. Use exercises 9.2.9 and 9.2.13.

25.2.3. Exercise. Show by example that the Riesz representation theorem (AOLT, page 15, Theorem 1.15) does not hold (as stated) in infinite dimensional spaces. Hint. Consider the function φ : lc(N) → E : x ↦ ∑_{k=1}^{∞} αk where x = ∑_{k=1}^{∞} αk e^k, the e^k's being the standard basis vectors for lc(N).

25.2.4. Exercise. Show that if S is a set of mutually perpendicular vectors in an inner product space and 0 ∉ S, then the set S is linearly independent.

25.2.5. Exercise. Let S and T be subsets of an inner product space H.
(a) S⊥ is a subspace of H.
(b) If S ⊆ T, then T⊥ ⊆ S⊥.
(c) (span S)⊥ = S⊥.

25.2.6. Exercise. Show that if M is a subspace of a finite dimensional inner product space H, then H = M ⊕ M⊥. Show also that this need not be true in an infinite dimensional space.

25.2.7. Exercise. Let M be a subspace of an inner product space H.
(a) Show that M ⊆ M⊥⊥.
(b) Prove that equality need not hold in (a).
(c) Show that if H is finite dimensional, then M = M⊥⊥.

25.2.8. Exercise. Let M and N be subspaces of an inner product space H.
(a) Show that (M + N)⊥ = (M ∪ N)⊥ = M⊥ ∩ N⊥.
(b) Show that if H is finite dimensional, then (M ∩ N)⊥ = M⊥ + N⊥.

25.2.9. Exercise. It seems unlikely that the similarity between the results of the exercises 25.2.5 and 25.2.8 and those you obtained in exercises 10.2.5 and 10.2.6 could be purely coincidental. Explain carefully what is going on here.

25.2.10. Exercise. Show that if U is the unilateral shift operator on l2,

U : l2 → l2 : (x1, x2, x3, . . . ) ↦ (0, x1, x2, . . . ),

then its adjoint is given by

U∗ : l2 → l2 : (x1, x2, x3, . . . ) ↦ (x2, x3, x4, . . . ).
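On a finite dimensional truncation the same pattern is visible: the matrix of the shift has ones on the subdiagonal, and its adjoint (here just the transpose) shifts backward. A sketch in Python/numpy, truncating to R^5:

```python
import numpy as np

n = 5
# Truncated unilateral shift on R^n: (x1,...,xn) -> (0, x1, ..., x_{n-1}).
U = np.diag(np.ones(n - 1), k=-1)

x = np.arange(1., n + 1)            # the vector (1, 2, 3, 4, 5)
assert np.allclose(U @ x, [0., 1., 2., 3., 4.])
assert np.allclose(U.T @ x, [2., 3., 4., 5., 0.])   # backward shift

# Defining property of the adjoint: <Ux, y> = <x, U*y>.
rng = np.random.default_rng(1)
y = rng.standard_normal(n)
assert np.isclose((U @ x) @ y, x @ (U.T @ y))
```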

25.2.11. Exercise (Multiplication operators). Let (X, A, µ) be a sigma-finite measure space and L2(X) be the Hilbert space of all (equivalence classes of) complex valued functions on X which are square integrable with respect to µ. Let φ be an essentially bounded complex valued µ-measurable function on X. Define Mφ on L2(X) by Mφ(f) := φf. Then Mφ is an operator on L2(X), ‖Mφ‖ = ‖φ‖∞, and Mφ∗ = Mφ̄.


25.2.12. Exercise. Let a and b be elements of a ∗-algebra. Show that a commutes with b if and only if a∗ commutes with b∗.

25.2.13. Exercise. Show that in a unital ∗-algebra 1∗ = 1.

25.2.14. Exercise. Let a be an element of a unital ∗-algebra. Show that a∗ is invertible if and only if a is. And when a is invertible we have

(a∗)^{−1} = (a^{−1})∗.

25.2.15. Exercise. Let a be an element of a unital ∗-algebra. Show that λ ∈ σ(a) if and only if λ̄ ∈ σ(a∗).


CHAPTER 26

Orthogonal Projections

26.1. Background

Topics: orthogonal projections on an inner product space, projections in an algebra with involution, orthogonality of projections in a ∗-algebra.

26.1.1. Definition. An operator P on an inner product space H is an orthogonal projection if it is self-adjoint and idempotent. (On a real inner product space, of course, the conditions are symmetric and idempotent.)

26.1.2. Definition. A projection in a ∗-algebra A is an element p of the algebra which is idempotent (p^2 = p) and self-adjoint (p∗ = p). The set of all projections in A is denoted by P(A). Notice that projections in ∗-algebras correspond to orthogonal projections on inner product spaces and not to (the more general) projections on vector spaces.

26.1.3. Definition. Two projections p and q in a ∗-algebra are orthogonal, written p ⊥ q,if pq = 0.


26.2. Exercises (Due: Fri. Mar. 13)

26.2.1. Exercise. Let T : H → K be a linear map between inner product spaces. Show that if the adjoint of T exists, then it is unique. (That is, there is at most one function T∗ : K → H that satisfies ⟨Tx, y⟩ = ⟨x, T∗y⟩ for all x ∈ H and y ∈ K.)

26.2.2. Exercise (†). Let T : H → K be a linear map between inner product spaces. Show that if the adjoint of T exists, then it is linear.

26.2.3. Exercise. Let S and T be operators on an inner product space H. Then (S + T)∗ = S∗ + T∗ and (αT)∗ = ᾱT∗ for every α ∈ E. (See AOLT, Theorem 1.17.)

26.2.4. Exercise. Let T : H → K be a linear map between inner product spaces. Show that if the adjoint of T exists, then so does the adjoint of T∗ and T∗∗ = T.

26.2.5. Exercise. Let S : H → K and T : K → L be linear maps between complex inner product spaces. Show that if S and T both have adjoints, then so does their composite TS and

(TS)∗ = S∗T∗.

(See AOLT, Theorem 1.17 and section 1.8, exercise 12.)

26.2.6. Exercise. Let H be an inner product space and M and N be subspaces of H such that H = M + N and M ∩ N = {0}. (That is, we suppose that H is the vector space direct sum of M and N.) Also let P = ENM be the projection of H along N onto M. Prove that P is self-adjoint (P∗ exists and P∗ = P) if and only if M ⊥ N.

26.2.7. Exercise. If P is an orthogonal projection on an inner product space H, then the space is the orthogonal direct sum of the range of P and its kernel; that is, H = ranP + kerP and ranP ⊥ kerP.

26.2.8. Exercise. Let p and q be projections in a ∗-algebra. Then pq is a projection if and only if pq = qp.

26.2.9. Exercise. Let P and Q be orthogonal projections on an inner product space V. If PQ = QP, then PQ is an orthogonal projection whose kernel is kerP + kerQ and whose range is ranP ∩ ranQ.

26.2.10. Exercise (†). Let T : H → K be a linear map between inner product spaces. Show that

kerT∗ = (ranT)⊥.

Is there a relationship between T being surjective and T∗ being injective?

26.2.11. Exercise. Let T : H → K be a linear map between inner product spaces. Show that

kerT = (ranT ∗)⊥.

Is there a relationship between T being injective and T ∗ being surjective?

26.2.12. Exercise. It seems unlikely that the similarity between the results of the two preceding exercises and those you obtained in exercises 11.2.3 and 11.2.4 could be purely coincidental. Explain carefully what is going on here.


26.2.13. Exercise. A necessary and sufficient condition for two projections p and q in a ∗-algebra to be orthogonal is that pq + qp = 0.

26.2.14. Exercise. Let p and q be projections in a ∗-algebra. Then p + q is a projection if and only if p and q are orthogonal.

26.2.15. Exercise. Let P and Q be orthogonal projections on an inner product space V. Then P ⊥ Q if and only if ranP ⊥ ranQ. In this case P + Q is an orthogonal projection whose kernel is kerP ∩ kerQ and whose range is ranP + ranQ.


CHAPTER 27

The Spectral Theorem for Inner Product Spaces

27.1. Background

Topics: subprojections, partial isometries, initial and final projections, orthogonal resolutions of the identity, the spectral theorem for complex inner product spaces.

27.1.1. Definition. If p and q are projections in a ∗-algebra we write p ≤ q if p = pq. In this case we say that p is a subprojection of q or that p is smaller than q. (Note: it is easy to see that the condition p = pq is equivalent to p = qp.)

27.1.2. Definition. An element v of a ∗-algebra is a partial isometry if vv∗v = v. If v is a partial isometry, then v∗v is the initial projection of v and vv∗ is its final projection.
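A minimal concrete example (a sketch in Python/numpy): a truncated shift is a partial isometry, and its initial and final projections are exactly the diagonal projections one expects.

```python
import numpy as np

# A truncated shift on R^3 is a partial isometry: v v* v = v.
v = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [0., 1., 0.]])

assert np.allclose(v @ v.T @ v, v)

p = v.T @ v    # initial projection: onto span{e1, e2}
q = v @ v.T    # final projection: onto span{e2, e3}
assert np.allclose(p, np.diag([1., 1., 0.]))
assert np.allclose(q, np.diag([0., 1., 1.]))
assert np.allclose(v @ p, v)   # p is a projection with vp = v
assert np.allclose(q @ v, v)   # q is a projection with qv = v
```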

27.1.3. Definition. Let M1 ⊕ · · · ⊕ Mn be an orthogonal direct sum decomposition of an inner product space V. For each k let Pk be the orthogonal projection onto Mk. The projections P1, . . . , Pn are the orthogonal projections associated with the orthogonal direct sum decomposition V = M1 ⊕ · · · ⊕ Mn. The family P1, . . . , Pn is an orthogonal resolution of the identity. (Compare this with definitions 15.1.3 and 21.1.1.)

27.1.4. Definition. Two elements a and b of a ∗-algebra A are unitarily equivalent if there exists a unitary element u of A such that b = u∗au.

27.1.5. Definition. An operator T on a complex inner product space V is unitarily diagonalizable if there exists an orthonormal basis for V consisting of eigenvectors of T.

27.1.6. Theorem (Spectral Theorem: Complex Inner Product Space Version). If N is a normal operator on a finite dimensional complex inner product space V, then N is unitarily diagonalizable and can be written as

N = ∑_{k=1}^{n} λk Pk

where λ1, . . . , λn are the (distinct) eigenvalues of N and P1, . . . , Pn is the orthogonal resolution of the identity whose orthogonal projections are associated with the corresponding eigenspaces M1, . . . , Mn.
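The theorem can be checked numerically for a small normal matrix. In this sketch (Python/numpy) the matrix is a made-up normal, non-Hermitian example with eigenvalues 1 ± i, not one of the exercise matrices below.

```python
import numpy as np

# Hypothetical normal matrix (illustration only): N = I + i*[[0,1],[1,0]].
N = np.array([[1., 1j],
              [1j, 1.]])
assert np.allclose(N @ N.conj().T, N.conj().T @ N)   # N is normal

vals, vecs = np.linalg.eig(N)
# For a normal matrix, eigenvectors belonging to distinct eigenvalues are
# orthogonal; normalize them and form rank-one projections P_k = u_k u_k*.
Ps = []
for k in range(2):
    u = vecs[:, k] / np.linalg.norm(vecs[:, k])
    Ps.append(np.outer(u, u.conj()))

assert np.allclose(Ps[0] + Ps[1], np.eye(2))         # orthogonal resolution
assert np.allclose(sum(lam * P for lam, P in zip(vals, Ps)), N)
```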


27.2. Exercises (Due: Mon. Mar. 30)

27.2.1. Exercise. If A is a ∗-algebra, then the relation ≤ defined in 27.1.1 is a partial ordering on P(A). If A is unital, then 0 ≤ p ≤ 1 for every p ∈ P(A).

27.2.2. Exercise. Let p and q be projections in a ∗-algebra. Then q − p is a projection if and only if p ≤ q.

27.2.3. Exercise. Let P and Q be orthogonal projections on an inner product space V. Then the following are equivalent:

(i) P ≤ Q;
(ii) ‖Px‖ ≤ ‖Qx‖ for all x ∈ V; and
(iii) ranP ⊆ ranQ.

In this case Q − P is an orthogonal projection whose kernel is ranP + kerQ and whose range is ranQ ⊖ ranP.
Notation: If M and N are subspaces of an inner product space with N ⊆ M, then M ⊖ N denotes the orthogonal complement of N in M (so N ⊥ (M ⊖ N) and M = N ⊕ (M ⊖ N)).

27.2.4. Exercise. If p and q are commuting projections in a ∗-algebra, then p ∧ q and p ∨ q exist. In fact, p ∧ q = pq and p ∨ q = p + q − pq.

27.2.5. Exercise. Let v be a partial isometry in a ∗-algebra, p be its initial projection, and q be its final projection. Then
(a) v∗ is a partial isometry,
(b) p is a projection,
(c) p is the smallest projection such that vp = v,
(d) q is a projection, and
(e) q is the smallest projection such that qv = v.

27.2.6. Exercise. Let N be a normal operator on a finite dimensional complex inner product space H. Show that ‖Nx‖ = ‖N∗x‖ for all x ∈ H. (See AOLT, page 21, Proposition 1.22.)

27.2.7. Exercise. Let N be the operator on C2 whose matrix representation is

[  0  1 ]
[ −1  0 ]

(a) The eigenspace M1 associated with the eigenvalue −i is the span of (1, ____).
(b) The eigenspace M2 associated with the eigenvalue i is the span of (1, ____).
(c) The (matrix representations of the) orthogonal projections P1 and P2 onto the eigenspaces M1 and M2, respectively, are

P1 =
[  a  b ]
[ −b  a ]
and P2 =
[ a  −b ]
[ b   a ]
where a = ____ and b = ____.
(d) Write N as a linear combination of the projections found in (c).
Answer: [N] = ____ P1 + ____ P2.
(e) A unitary matrix U which diagonalizes [N] is
[  a  a ]
[ −b  b ]
where a = ____ and b = ____. The associated diagonal form Λ = U∗[N]U of [N] is ____.


27.2.8. Exercise. Let H be the self-adjoint matrix

[   2    1 + i ]
[ 1 − i    3   ]

(a) Use the spectral theorem to write H as a linear combination of orthogonal projections.
Answer: H = αP1 + βP2 where α = ____, β = ____,

P1 = (1/3)
[   2    −1 − i ]
[ −1 + i    1   ]
and P2 = (1/3)
[   1    1 + i ]
[ 1 − i    2   ]

(b) Find a square root of H.
Answer: √H = (1/3)
[   4    1 + i ]
[ 1 − i    5   ]

27.2.9. Exercise. Let N = (1/3) ×

[ 4 + 2i   1 − i   1 − i ]
[ 1 − i   4 + 2i   1 − i ]
[ 1 − i    1 − i  4 + 2i ]

(a) The matrix N is normal because NN∗ = N∗N =
[ a  b  b ]
[ b  a  b ]
[ b  b  a ]
where a = ____ and b = ____.
(b) According to the spectral theorem N can be written as a linear combination of orthogonal projections. Written in this form N = λ1P1 + λ2P2 where λ1 = ____, λ2 = ____,
P1 =
[ a  a  a ]
[ a  a  a ]
[ a  a  a ]
and P2 =
[  b  −a  −a ]
[ −a   b  −a ]
[ −a  −a   b ]
where a = ____ and b = ____.
(c) A unitary matrix U which diagonalizes N is
[ a  −b  −c ]
[ a   b  −c ]
[ a   d  2c ]
where a = ____, b = ____, c = ____, and d = ____. The associated diagonal form Λ = U∗NU of N is ____.

27.2.10. Exercise (†). Let A be an n × n matrix of complex numbers. Then A, regarded as an operator on Cn, is unitarily diagonalizable if and only if it is unitarily equivalent to a diagonal matrix.

27.2.11. Exercise. The real and imaginary parts of an element of a ∗-algebra are self-adjoint.

27.2.12. Exercise. An element of a ∗-algebra is normal if and only if its real part and its imaginary part commute. (See AOLT, page 23, proposition 1.25.)

27.2.13. Exercise. AOLT, section 1.8, exercise 19.


CHAPTER 28

Multilinear Maps

28.1. Background

Topics: free objects, free vector space, the “little-oh” functions, tangency, differential, permutations, cycles, symmetric group, multilinear maps, alternating multilinear maps.

Differential calculus

The following four definitions introduce the concepts necessary for a (civilized) discussion of differentiability of real valued functions on Rn. In these definitions f and g are functions from Rn into R and a is a point in Rn.

28.1.1. Definition. For every h ∈ Rn let

∆fa(h) := f(a + h) − f(a).

28.1.2. Definition. The function f belongs to the family o of “little-oh” functions if for every c > 0 there exists a δ > 0 such that |f(x)| ≤ c‖x‖ whenever ‖x‖ < δ.

28.1.3. Definition. Functions f and g are tangent (at 0) if f − g ∈ o.

28.1.4. Definition. The function f is differentiable at the point a if there exists a linear map dfa from Rn into R which is tangent to ∆fa. We call dfa the differential (or total derivative) of f at a.

28.1.5. Remark. If a function f : Rn → R is differentiable, then at each point a in Rn the differential of f at a is a linear map from Rn into R. Thus we regard df : a ↦ dfa (the differential of f) as a map from Rn into L(Rn, R). It is natural to inquire whether the function df is itself differentiable. If it is, its differential at a (which we denote by d²fa) is a linear map from Rn into L(Rn, R); that is,

d²fa ∈ L(Rn, L(Rn, R)).

In the same vein, since d²f maps Rn into L(Rn, L(Rn, R)), its differential (if it exists) belongs to L(Rn, L(Rn, L(Rn, R))). It is moderately unpleasant to contemplate what an element of L(Rn, L(Rn, R)) or of L(Rn, L(Rn, L(Rn, R))) might “look like”. And clearly as we pass to even higher order differentials things look worse and worse. It is comforting to discover that an element of L(Rn, L(Rn, R)) may be regarded as a map from (Rn)² into R which is bilinear (that is, linear in both of its variables), and that L(Rn, L(Rn, L(Rn, R))) may be thought of as a map from (Rn)³ into R which is linear in each of its three variables. More generally, if V1, V2, V3, and W are arbitrary vector spaces it will be possible to identify the vector space L(V1, L(V2, W)) with the space of bilinear maps from V1 × V2 to W, the vector space L(V1, L(V2, L(V3, W))) with the trilinear maps from V1 × V2 × V3 to W, and so on (see exercise 28.2.3).
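The identification L(V1, L(V2, W)) with the bilinear maps is just currying. A sketch in Python (with the scalars R standing in for V1, V2, and W; the map f is a made-up bilinear example):

```python
from typing import Callable

# Currying: a bilinear f : V1 x V2 -> W corresponds to an element of
# L(V1, L(V2, W)) -- fix the first argument to get a linear map in the second.
def curry(f: Callable[[float, float], float]):
    return lambda u: (lambda v: f(u, v))

def uncurry(phi):
    return lambda u, v: phi(u)(v)

f = lambda u, v: 3 * u * v          # a bilinear map R x R -> R
phi = curry(f)                      # phi(u) is the linear map v -> 3uv

assert phi(2.0)(5.0) == f(2.0, 5.0) == 30.0
assert uncurry(curry(f))(2.0, 5.0) == 30.0
```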

Permutations

A bijective map σ : X → X from a set X onto itself is a permutation of the set. If x1, x2, . . . , xn are distinct elements of a set X, then the permutation of X that maps x1 ↦ x2, x2 ↦ x3, . . . , xn−1 ↦ xn, xn ↦ x1 and leaves all other elements of X fixed is a cycle (or cyclic


permutation) of length n. A cycle of length 2 is a transposition. Permutations σ1, . . . , σn of a set X are disjoint if each x ∈ X is moved by at most one σj; that is, if σj(x) ≠ x for at most one j ∈ Nn := {1, 2, . . . , n}.

28.1.6. Proposition. If X is a nonempty set, the set of permutations of X is a group under composition.

Notice that if σ and τ are disjoint permutations of a set X, then στ = τσ. If X is a set with n elements, then the group of permutations of X (which we may identify with the group of permutations of the set Nn) is the symmetric group on n elements (or on n letters); it is denoted by Sn.

28.1.7. Proposition. Any permutation σ ≠ idX of a set X can be written as a product (composite) of cycles of length at least 2. This decomposition is unique up to the order of the factors.

A permutation of a set X is even if it can be written as the product of an even number of transpositions, and it is odd if it can be written as a product of an odd number of transpositions.

28.1.8. Proposition. Every permutation of a finite set is either even or odd, but not both.

The sign of a permutation σ, denoted by sgnσ, is +1 if σ is even and −1 if σ is odd.
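The sign can be computed directly from the cycle decomposition, since a cycle of length m is a product of m − 1 transpositions. A sketch in Python, with permutations of {0, . . . , n − 1} encoded as lists:

```python
def permutation_sign(perm):
    """Sign of a permutation of {0, ..., n-1} given as a list.

    Each cycle of length m contributes m - 1 transpositions, so the
    permutation is even iff n - (number of cycles) is even.
    """
    n = len(perm)
    seen = [False] * n
    cycles = 0
    for start in range(n):
        if not seen[start]:
            cycles += 1
            j = start
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return 1 if (n - cycles) % 2 == 0 else -1

assert permutation_sign([0, 1, 2]) == 1     # identity: even
assert permutation_sign([1, 0, 2]) == -1    # one transposition: odd
assert permutation_sign([1, 2, 0]) == 1     # 3-cycle = two transpositions
```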

Multilinear maps

28.1.9. Definition. Let V1, V2, . . . , Vn, and W be vector spaces over a field F. We say that a function f : V1 × · · · × Vn → W is multilinear (or n-linear) if it is linear in each of its n variables. We ordinarily call 2-linear maps bilinear and 3-linear maps trilinear. We denote by Ln(V1, . . . , Vn; W) the family of all n-linear maps from V1 × · · · × Vn into W. A multilinear map from the product V1 × · · · × Vn into the scalar field F is a multilinear form (or a multilinear functional).

28.1.10. Definition. A multilinear map f : V^n → W from the n-fold product V × · · · × V of a vector space V into a vector space W is alternating if f(v1, . . . , vn) = 0 whenever vi = vj for some i ≠ j.

28.1.11. Definition. If V and W are vector spaces, a multilinear map f : V^n → W is skew-symmetric if

f(v1, . . . , vn) = (sgn σ) f(vσ(1), . . . , vσ(n))

for all σ ∈ Sn.
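The standard example of an alternating (hence skew-symmetric) multilinear map is the determinant regarded as a function of the columns. A numerical sketch in Python/numpy for n = 3:

```python
import numpy as np

# The 3x3 determinant, as a function of the three column vectors, is a
# trilinear map R^3 x R^3 x R^3 -> R that is alternating.
def det3(v1, v2, v3):
    return np.linalg.det(np.column_stack([v1, v2, v3]))

a = np.array([1., 0., 2.])
b = np.array([0., 1., 1.])
c = np.array([3., 1., 0.])

# Alternating: a repeated argument gives 0.
assert np.isclose(det3(a, a, c), 0.0)

# Skew-symmetric: swapping two arguments (an odd permutation) flips the sign.
assert np.isclose(det3(a, b, c), -det3(b, a, c))

# Linear in each variable separately.
assert np.isclose(det3(2 * a + b, b, c), 2 * det3(a, b, c) + det3(b, b, c))
```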


28.2. Exercises (Due: Wed. Apr. 1)

28.2.1. Exercise. Let V and W be vector spaces, u, v, x, y ∈ V, and α ∈ R.
(a) Expand T(u + v, x + y) if T is a bilinear map from V × V into W.
(b) Expand T(u + v, x + y) if T is a linear map from V ⊕ V into W.
(c) Write T(αx, αy) in terms of α and T(x, y) if T is a bilinear map from V × V into W.
(d) Write T(αx, αy) in terms of α and T(x, y) if T is a linear map from V ⊕ V into W.

28.2.2. Exercise. Show that composition of operators on a vector space V is a bilinear map on L(V ).

28.2.3. Exercise. Show that if U , V , and W are vector spaces, then so is L2(U, V ; W ). Show also that the spaces L(U, L(V,W )) and L2(U, V ; W ) are isomorphic. Hint. The isomorphism is implemented by the map

F : L(U, L(V,W )) → L2(U, V ; W ) : φ ↦ φ̃

where φ̃(u, v) := (φ(u))(v) for all u ∈ U and v ∈ V . (Recall remark 28.1.5.)

28.2.4. Exercise. Let V = R2 and f : V 2 → R : (x, y) ↦ x1y2. Is f bilinear? Is it alternating?

28.2.5. Exercise. Let V = R2 and g : V 2 → R : (x, y) ↦ x1 + y2. Is g bilinear? Is it alternating?

28.2.6. Exercise. Let V = R2 and h : V 2 → R : (x, y) ↦ x1y2 − x2y1. Is h bilinear? Is it alternating? If {e1, e2} is the usual basis for R2, what is h(e1, e2)?
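Bilinearity and the alternating property in exercises 28.2.4–28.2.6 can be spot-checked numerically before proving anything. A minimal sketch for the map h of 28.2.6 (the helper names are illustrative, not from the text); such a check cannot replace a proof, but it quickly exposes a wrong guess:

```python
import random

def h(x, y):
    # The map of 28.2.6: h(x, y) = x1*y2 - x2*y1 on V = R^2.
    return x[0] * y[1] - x[1] * y[0]

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def scale(a, x):
    return (a * x[0], a * x[1])

random.seed(0)
for _ in range(100):
    x, y, z = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(3)]
    a = random.uniform(-5, 5)
    # Linearity in the first slot (the second slot is checked the same way).
    assert abs(h(add(x, z), y) - (h(x, y) + h(z, y))) < 1e-9
    assert abs(h(scale(a, x), y) - a * h(x, y)) < 1e-9
    # Alternating: h vanishes on repeated arguments.
    assert h(x, x) == 0

print(h((1, 0), (0, 1)))  # h(e1, e2) = 1
```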

28.2.7. Exercise. Let V and W be vector spaces. Then every alternating multilinear map f : V n → W is skew-symmetric. Hint. Consider f(u + v, u + v) in the bilinear case.


CHAPTER 29

Determinants

29.1. Background

Topics: determinant of a matrix, determinant function on Mn(A).

29.1.1. Definition. A field F is of characteristic zero if n · 1 = 0 for no n ∈ N.

NOTE: In the following material on determinants, we will assume that the scalar fields underlying all the vector spaces we encounter are of characteristic zero. Thus multilinear functions will be alternating if and only if they are skew-symmetric. (See exercises 28.2.7 and 29.2.3.)

29.1.2. Remark. Let A be a unital commutative algebra. In the sequel we identify the algebra (An)n = An × · · · × An (n factors) with the algebra Mn(A) of n × n matrices of elements of A by regarding the term ak in (a1, . . . , an) ∈ (An)n as the kth column vector of an n × n matrix of elements of A. There are many standard notations for the same thing: Mn(A), An × · · · × An (n factors), (An)n, An×n, and An², for example.

The identity matrix, which we usually denote by I, in Mn(A) is (e1, . . . , en), where e1, . . . , en are the standard basis vectors for An; that is, e1 = (1A, 0, 0, . . . ), e2 = (0, 1A, 0, 0, . . . ), and so on.

29.1.3. Definition. Let A be a unital commutative algebra. A determinant function is an alternating multilinear map D : Mn(A) → A such that D(I) = 1A.


29.2. Exercises (Due: Fri. Apr. 3)

29.2.1. Exercise. Let V = Rn. Define

∆ : V n → R : (v1, . . . , vn) ↦ ∑σ∈Sn (sgn σ) v1σ(1) · · · vnσ(n)

(here vkσ(k) denotes the σ(k)th coordinate of the vector vk). Then ∆ is an alternating multilinear form which satisfies ∆(e1, . . . , en) = 1.

Note: If A is an n × n matrix of real numbers we define detA, the determinant of A, to be ∆(v1, . . . , vn) where v1, . . . , vn are the column vectors of the matrix A.

29.2.2. Exercise. Let

A = [  1   3  2
      −1   0  3
      −2  −2  1 ].

Use the definition above to find detA.

29.2.3. Exercise. If V and W are vector spaces over a field F of characteristic zero and f : V n → W is a skew-symmetric multilinear map, then f is alternating.

29.2.4. Exercise. Let ω be an n-linear functional on a vector space V over a field of characteristic zero. If ω(v1, . . . , vn) = 0 whenever vi = vi+1 for some i, then ω is skew-symmetric and therefore alternating.

29.2.5. Exercise. Let f : V n → W be an alternating multilinear map, j ≠ k in Nn, and α be a scalar. Then

f(v1, . . . , vj + αvk, . . . , vn) = f(v1, . . . , vj , . . . , vn),

where the entry vj + αvk on the left (and vj on the right) occupies the jth position.

29.2.6. Exercise. Let A be a unital commutative algebra and n ∈ N. A determinant function exists on Mn(A). Hint. Consider

det : Mn(A) → A : (a1, . . . , an) ↦ ∑σ∈Sn (sgn σ) a1σ(1) · · · anσ(n).

29.2.7. Exercise. Let D be an alternating multilinear map on Mn(A) where A is a unital commutative algebra and n ∈ N. For every C ∈ Mn(A)

D(C) = D(I) detC.

29.2.8. Exercise. Show that the determinant function on Mn(A) (where A is a unital commutative algebra) is unique.

29.2.9. Exercise. Let A be a unital commutative algebra and B, C ∈ Mn(A). Then

det(BC) = detB detC.

Hint. Consider the function D(C) = D(c1, . . . , cn) := det(Bc1, . . . , Bcn), where Bck is the product of the n × n matrix B and the kth column vector of C.

29.2.10. Exercise. For an n × n matrix B let Bt, the transpose of B, be the matrix obtained from B by interchanging its rows and columns; that is, if B = [bji], then Bt = [bij]. Prove that

detBt = detB.
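The identities of exercises 29.2.9 and 29.2.10 are worth testing numerically before proving them. A small self-contained check (integer matrices, so the arithmetic is exact; the helper names are mine, not from the text):

```python
from itertools import permutations
from math import prod
import random

def det(m):
    # Leibniz-sum determinant (exercise 29.2.1).
    n = len(m)
    return sum((-1) ** sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
               * prod(m[k][p[k]] for k in range(n))
               for p in permutations(range(n)))

def matmul(b, c):
    n = len(b)
    return [[sum(b[i][k] * c[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(b):
    return [list(row) for row in zip(*b)]

random.seed(1)
for _ in range(20):
    B = [[random.randint(-3, 3) for _ in range(3)] for _ in range(3)]
    C = [[random.randint(-3, 3) for _ in range(3)] for _ in range(3)]
    assert det(matmul(B, C)) == det(B) * det(C)   # 29.2.9
    assert det(transpose(B)) == det(B)            # 29.2.10
print("all checks passed")
```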


CHAPTER 30

Free Vector Spaces

30.1. Background

Topics: free object, free vector space. (See AOLT, section 6.1.)

30.1.1. Definition. Let F be an object in a (concrete) category C and ι : S → F be a map whose domain is a nonempty set S. We say that the object F is free on the set S (or that F is the free object generated by S) if for every object A in C and every map f : S → A there exists a unique morphism f̃ι : F → A in C such that f̃ι ∘ ι = f .

S ---ι---> F
  \       /
 f \     / f̃ι
    v   v
      A

We will be interested in free vector spaces; that is, free objects in the category VEC of vector spaces and linear maps. Naturally, merely defining a concept does not guarantee its existence. It turns out, in fact, that free vector spaces exist on arbitrary sets. (See exercise 30.2.2.)


30.2. Exercises (Due: Mon. Apr. 6)

30.2.1. Exercise. If two objects in some concrete category are free on the same set, then they are isomorphic.

30.2.2. Exercise. Let S be an arbitrary nonempty set and F be a field. Prove that there exists a vector space V over F which is free on S. Hint. Given the set S let V be the set of all F-valued functions on S which have finite support. Define addition and scalar multiplication pointwise. The map ι : s ↦ χ{s} taking each element s ∈ S to the characteristic function of {s} is the desired injection. To verify that V is free over S it must be shown that for every vector space W and every function f : S → W there exists a unique linear map f̃ : V → W which makes the following diagram commute.

S ---ι---> V
  \       /
 f \     / f̃
    v   v
      W

30.2.3. Exercise. Prove that every vector space is free. Hint. Of course, part of the problem is to specify a set S on which the given vector space is free.

30.2.4. Exercise. Let S = {a, ∗, #}. Then an expression such as

3a − (1/2) ∗ + √2 #

is said to be a formal linear combination of elements of S. Make sense of such expressions.


CHAPTER 31

Tensor Products of Vector Spaces

31.1. Background

Topics: vector space tensor products, elementary tensors. (See AOLT, section 6.2. A more extensive and very careful exposition of tensor products can be found in chapter 14 of [23].)

31.1.1. Definition. Let U and V be vector spaces. A vector space U ⊗ V together with a bilinear map τ : U × V → U ⊗ V is a tensor product of U and V if for every vector space W and every bilinear map B : U × V → W , there exists a unique linear map B̃ : U ⊗ V → W which makes the following diagram commute.

U × V ---τ---> U ⊗ V
    \         /
   B \       / B̃
      v     v
        W


31.2. Exercises (Due: Wed. Apr. 8)

31.2.1. Exercise. Prove that in the category of vector spaces and linear maps if tensor products exist, then they are unique (up to isomorphism).

31.2.2. Exercise. Show that in the category of vector spaces and linear maps tensor products exist. Hint. Let U and V be vector spaces over a field F. Consider the free vector space lc(U × V ) = lc(U × V, F). Define

∗ : U × V → lc(U × V ) : (u, v) ↦ χ(u,v) .

Then let

S1 = {(u1 + u2) ∗ v − u1 ∗ v − u2 ∗ v : u1, u2 ∈ U and v ∈ V },
S2 = {(αu) ∗ v − α(u ∗ v) : α ∈ F, u ∈ U , and v ∈ V },
S3 = {u ∗ (v1 + v2) − u ∗ v1 − u ∗ v2 : u ∈ U and v1, v2 ∈ V },
S4 = {u ∗ (αv) − α(u ∗ v) : α ∈ F, u ∈ U , and v ∈ V },
S = span(S1 ∪ S2 ∪ S3 ∪ S4), and
U ⊗ V = lc(U × V )/S .

Also define

τ : U × V → U ⊗ V : (u, v) ↦ [u ∗ v].

Then show that U ⊗ V and τ satisfy the conditions stated in definition 31.1.1.

NOTE: It is conventional to write u ⊗ v for τ((u, v)) = [u ∗ v]. Tensors of the form u ⊗ v are called elementary tensors (or decomposable tensors or homogeneous tensors). Keep in mind that not every member of U ⊗ V is of the form u ⊗ v.

31.2.3. Exercise. Let u and v be elements of finite dimensional vector spaces U and V , respectively. Show that if u ⊗ v = 0, then either u = 0 or v = 0.

31.2.4. Exercise. Let u1, . . . , un be linearly independent vectors in a vector space U and v1, . . . , vn be arbitrary vectors in a vector space V . Prove that if u1 ⊗ v1 + · · · + un ⊗ vn = 0, then vk = 0 for each k ∈ Nn.

31.2.5. Exercise. Let {e1, . . . , em} and {f1, . . . , fn} be bases for vector spaces U and V , respectively. Show that the family {ei ⊗ fj : 1 ≤ i ≤ m, 1 ≤ j ≤ n} is a basis for U ⊗ V . Conclude that if U and V are finite dimensional, then the dimension of U ⊗ V is the product of the dimensions of U and V .

31.2.6. Exercise. Let U and V be finite dimensional vector spaces and {f1, . . . , fn} be a basis for V . Show that for every element t ∈ U ⊗ V there exist unique vectors u1, . . . , un ∈ U such that

t = u1 ⊗ f1 + · · · + un ⊗ fn .

31.2.7. Exercise. If U and V are vector spaces, then

U ⊗ V ∼= V ⊗ U.

31.2.8. Exercise. If V is a vector space over a field F, then

V ⊗ F ∼= V ∼= F⊗ V.


CHAPTER 32

Tensor Products of Vector Spaces (continued)

32.1. Background

Topics: No additional topics.


32.2. Exercises (Due: Fri. Apr. 10)

32.2.1. Exercise. Let U , V , and W be vector spaces. For every vector space X and every trilinear map k : U × V × W → X there exists a unique linear map k̃ : U ⊗ (V ⊗ W ) → X such that

k̃(u ⊗ (v ⊗ w)) = k(u, v, w)

for all u ∈ U , v ∈ V , and w ∈ W .

32.2.2. Exercise. If U , V , and W are vector spaces, then

U ⊗ (V ⊗W ) ∼= (U ⊗ V )⊗W.

32.2.3. Exercise. If U and V are finite dimensional vector spaces, then

U ⊗ V ∗ ∼= L(V,U).

Hint. Consider the map

T : U × V ∗ → L(V,U) : (u, φ) ↦ T (u, φ)

where T (u, φ)(v) = φ(v)u.

32.2.4. Exercise. If U , V , and W are vector spaces, then

U ⊗ (V ⊕W ) ∼= (U ⊗ V )⊕ (U ⊗W ).

32.2.5. Exercise. If U and V are finite dimensional vector spaces, then

(U ⊗ V )∗ ∼= U∗ ⊗ V ∗.

32.2.6. Exercise. If U , V , and W are finite dimensional vector spaces, then

L(U ⊗ V,W ) ∼= L(U,L(V,W )) ∼= L2(U, V ;W ).

32.2.7. Exercise. Let u1, u2 ∈ U and v1, v2 ∈ V where U and V are finite dimensional vector spaces. Show that if u1 ⊗ v1 = u2 ⊗ v2 ≠ 0, then u2 = αu1 and v2 = βv1 where αβ = 1.


CHAPTER 33

Tensor Products of Linear Maps

33.1. Background

Topics: tensor products of linear maps.

33.1.1. Definition. Let S : U → W and T : V → X be linear maps between vector spaces. We define the tensor product of the linear maps S and T by

S ⊗ T : U ⊗ V → W ⊗ X : u ⊗ v ↦ S(u) ⊗ T (v) .
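In coordinates, if S and T are given by matrices with respect to chosen bases, then S ⊗ T is given by the Kronecker product of those matrices with respect to the lexicographically ordered basis {ei ⊗ fj} of exercise 31.2.5. A hedged sketch checking the defining property (S ⊗ T )(u ⊗ v) = S(u) ⊗ T (v) on one illustrative example (the particular matrices and names are mine, not from the text):

```python
def kron(S, T):
    # Kronecker product: the matrix of S ⊗ T in the lexicographically
    # ordered bases {e_i ⊗ f_j}.
    r, s = len(T), len(T[0])
    return [[S[i // r][j // s] * T[i % r][j % s]
             for j in range(len(S[0]) * s)]
            for i in range(len(S) * r)]

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def tensor_vec(u, v):
    # Coordinates of u ⊗ v in the lexicographic basis e_i ⊗ f_j.
    return [ui * vj for ui in u for vj in v]

S = [[1, 2], [3, 4]]   # illustrative matrices, not taken from the text
T = [[0, 1], [1, 1]]
u, v = [1, -1], [2, 5]
# Defining property of 33.1.1: (S ⊗ T)(u ⊗ v) = S(u) ⊗ T(v).
assert matvec(kron(S, T), tensor_vec(u, v)) == tensor_vec(matvec(S, u), matvec(T, v))
```

The same computation makes several of the exercises below (for instance 33.2.6 and 33.2.12) easy to test numerically before proving them.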


33.2. Exercises (Due: Mon. April 13)

33.2.1. Exercise. Definition 33.1.1 defines the tensor product S ⊗ T of two maps only for homogeneous elements of U ⊗ V . Explain exactly what is needed to convince ourselves that S ⊗ T is well defined on all of U ⊗ V . Then prove that S ⊗ T is a linear map.

33.2.2. Exercise. Some authors hesitate to use the notation S ⊗ T for the mapping defined in 33.1.1 on the (very reasonable) grounds that S ⊗ T already has a meaning; it is a member of the vector space L(U,W ) ⊗ L(V,X). Discuss this problem and explain, in particular, why the use of the notation S ⊗ T in 33.1.1 is not altogether unreasonable.

33.2.3. Exercise. Suppose that R, S ∈ L(U,W ) and that T ∈ L(V,X) where U , V , W , and X are finite dimensional vector spaces. Then

(R+ S)⊗ T = R⊗ T + S ⊗ T.

33.2.4. Exercise. Suppose that R ∈ L(U,W ) and that S, T ∈ L(V,X) where U , V , W , and X are finite dimensional vector spaces. Then

R⊗ (S + T ) = R⊗ S +R⊗ T.

33.2.5. Exercise. Suppose that S ∈ L(U,W ) and that T ∈ L(V,X) where U , V , W , and X are finite dimensional vector spaces. Then for all α, β ∈ R

(αS)⊗ (βT ) = αβ(S ⊗ T ).

33.2.6. Exercise. Suppose that Q ∈ L(U,W ), R ∈ L(V,X), S ∈ L(W,Y ), and that T ∈ L(X,Z) where U , V , W , X, Y , and Z are finite dimensional vector spaces. Then

(S ⊗ T )(Q⊗R) = SQ⊗ TR.

33.2.7. Exercise. Suppose that S ∈ L(U,W ) and that T ∈ L(V,X) where U , V , W , and X are finite dimensional vector spaces. Show that if S and T are invertible, then so is S ⊗ T and

(S ⊗ T )−1 = S−1 ⊗ T−1.

33.2.8. Exercise. Suppose that S ∈ L(U,W ) and that T ∈ L(V,X) where U , V , W , and X are finite dimensional vector spaces. Show that if S ⊗ T = 0, then either S = 0 or T = 0.

33.2.9. Exercise. Suppose that S ∈ L(U,W ) and that T ∈ L(V,X) where U , V , W , and X are finite dimensional vector spaces. Then

ran(S ⊗ T ) = ranS ⊗ ranT.

33.2.10. Exercise. Suppose that S ∈ L(U,W ) and that T ∈ L(V,X) where U , V , W , and X are finite dimensional vector spaces. Then

ker(S ⊗ T ) = kerS ⊗ V + U ⊗ kerT.

33.2.11. Exercise. Suppose that S ∈ L(U,W ) and that T ∈ L(V,X) where U , V , W , and X are finite dimensional vector spaces. Then

(S ⊗ T )t = St ⊗ T t.

33.2.12. Exercise. If U and V are finite dimensional vector spaces, then

IU ⊗ IV = IU⊗V .


CHAPTER 34

Grassmann Algebras

34.1. Background

Topics: group algebras, Grassmann algebras, wedge product.

34.1.1. Definition. Let V be a d-dimensional vector space over a field F. We say that ∧(V ) is the Grassmann algebra (or the exterior algebra) over V if

(1) ∧(V ) is a unital algebra over F (multiplication is denoted by ∧),
(2) V is "contained in" ∧(V ),
(3) v ∧ v = 0 for every v ∈ V ,
(4) dim(∧(V )) = 2^d, and
(5) ∧(V ) is generated by 1∧(V ) and V .

The multiplication ∧ in a Grassmann algebra is called the wedge product (or the exterior product).


34.2. Exercises (Due: Wed. April 15)

34.2.1. Exercise. Near the top of page 48 in AOLT, the author, in the process of defining the group algebra FG, says that, "We regard the elements of [a group] G as linearly independent vectors, . . . ." Explain carefully what this means and why we may in fact do such a thing.

34.2.2. Exercise. In the middle of page 48 of AOLT the author gives a "succinct formula" for the product of two elements in a group algebra. Show that this formula is correct.

34.2.3. Exercise. Check the author's computation of xy in Example 2.10, page 48, AOLT. Find the product both by using the formula given in the middle of page 48 and by direct multiplication.

34.2.4. Exercise. AOLT, section 2.7, exercise 7.

34.2.5. Exercise. In definition 34.1.1 why is "contained in" in quotation marks? Give a more precise version of condition (2).

34.2.6. Exercise. In the last condition of definition 34.1.1 explain more precisely what is meant by saying that ∧(V ) is generated by 1 and V .

34.2.7. Exercise. Show that if ∧(V ) is a Grassmann algebra over a vector space V , then 1∧(V ) ∉ V .

34.2.8. Exercise. Let v and w be elements of a finite dimensional vector space V . Show that in the Grassmann algebra ∧(V ) generated by V

v ∧ w = −w ∧ v .

34.2.9. Exercise. Let V be a d-dimensional vector space with basis E = {e1, . . . , ed}. For each nonempty subset S = {ei1 , ei2 , . . . , eip} of E let eS = ei1 ∧ ei2 ∧ · · · ∧ eip . Also let e∅ = 1∧(V ). Show that {eS : S ⊆ E} is a basis for the Grassmann algebra ∧(V ).


CHAPTER 35

Graded Algebras

35.1. Background

Topics: graded algebras, homogeneous elements, decomposable elements.

35.1.1. Definition. An algebra A is a Z+-graded algebra if it is a direct sum A = ⊕k≥0 Ak of vector subspaces Ak and its multiplication ∧ takes elements in Aj × Ak to elements in Aj+k for all j, k ∈ Z+. Elements in Ak are said to be homogeneous of degree k.

The definitions of Z-graded algebras, N-graded algebras, and Z2-graded algebras are similar. (In the case of a Z2-graded algebra the indices are 0 and 1 and A1 ∧ A1 ⊆ A0.) Usually the unmodified expression "graded algebra" refers to a Z+-graded algebra.

Exercise 35.2.1 says that every Grassmann algebra ∧(V ) over a vector space V is a graded algebra. The set of elements homogeneous of degree k is denoted by ∧k(V ). An element of ∧k(V ) which can be written in the form v1 ∧ v2 ∧ · · · ∧ vk (where v1, . . . , vk all belong to V ) is a decomposable element of degree k.


35.2. Exercises (Due: Fri. April 17)

35.2.1. Exercise. Show that every Grassmann algebra is a Z+-graded algebra. We denote by ∧k(V ) the subspace of all homogeneous elements of degree k in ∧(V ). In particular, ∧0(V ) = R and ∧1(V ) = V . If the dimension of V is d, take ∧k(V ) = 0 for all k > d. (And if you wish to regard ∧(V ) as a Z-graded algebra also take ∧k(V ) = 0 whenever k < 0.)

35.2.2. Exercise. If the dimension of a vector space V is 3 or less, then every homogeneous element of the corresponding Grassmann algebra is decomposable.

35.2.3. Exercise. If the dimension of a (finite dimensional) vector space V is at least four, then there exist homogeneous elements in the corresponding Grassmann algebra which are not decomposable. Hint. Let e1, e2, e3, and e4 be distinct basis elements of V and consider (e1 ∧ e2) + (e3 ∧ e4).

35.2.4. Exercise. The elements v1, v2, . . . , vp in a vector space V are linearly independent if and only if v1 ∧ v2 ∧ · · · ∧ vp ≠ 0 in the corresponding Grassmann algebra ∧(V ).

35.2.5. Exercise. Let T : V → W be a linear map between finite dimensional vector spaces (over the same field). Then there exists a unique extension of T to a unital algebra homomorphism ∧(T ) : ∧(V ) → ∧(W ). This extension maps ∧k(V ) into ∧k(W ) for each k ∈ N.

35.2.6. Exercise. The pair of maps V ↦ ∧(V ) and T ↦ ∧(T ) is a covariant functor from the category of vector spaces and linear maps to the category of unital algebras and unital algebra homomorphisms.

35.2.7. Exercise. If V is a vector space of dimension d, then dim(∧p(V )) = C(d, p), the binomial coefficient "d choose p", for 0 ≤ p ≤ d.

35.2.8. Exercise. If V is a finite dimensional vector space, ω ∈ ∧p(V ), and µ ∈ ∧q(V ), then

ω ∧ µ = (−1)pq µ ∧ ω.


CHAPTER 36

Existence of Grassmann Algebras

36.1. Background

Topics: tensor algebras, shuffle permutations. (A good reference for much of this material—and the following material on differential forms—is chapter 4 of [3].)

36.1.1. Definition. Let V0, V1, V2, . . . be vector spaces (over the same field). Then their (external) direct sum, which is denoted by ⊕∞k=0 Vk, is defined to be the set of all functions v : Z+ → ∪∞k=0 Vk with finite support such that v(k) = vk ∈ Vk for each k ∈ Z+. The usual pointwise addition and scalar multiplication make this set into a vector space.

36.1.2. Definition. Let V be a vector space over a field F. Define T 0(V ) = F, T 1(V ) = V , T 2(V ) = V ⊗ V , T 3(V ) = V ⊗ V ⊗ V , . . . , T k(V ) = V ⊗ · · · ⊗ V (k factors), . . . . Then let T (V ) = ⊕∞k=0 T k(V ). Define multiplication on T (V ) by using the obvious isomorphism

T k(V ) ⊗ T m(V ) ∼= T k+m(V )

and extending by linearity to all of T (V ). The resulting algebra is the tensor algebra of V (or generated by V ).

36.1.3. Notation. If V is a vector space over F and k ∈ N we denote by Altk(V ) the set of all alternating k-linear maps from V k into F. (The space Alt1(V ) is just V ∗.) Additionally, take Alt0(V ) = F.

36.1.4. Definition. Let p, q ∈ N. We say that a permutation σ ∈ Sp+q is a (p, q)-shuffle if σ(1) < · · · < σ(p) and σ(p + 1) < · · · < σ(p + q). The set of all such permutations is denoted by S(p, q).
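A (p, q)-shuffle is completely determined by the set {σ(1), . . . , σ(p)}, so the shuffles can be enumerated with itertools.combinations, and there are exactly C(p + q, p) of them. A minimal sketch (names mine, not from the text):

```python
from itertools import combinations

def shuffles(p, q):
    # Enumerate all (p, q)-shuffles of {1, ..., p+q} as tuples
    # (sigma(1), ..., sigma(p+q)); a shuffle is determined by the set
    # {sigma(1), ..., sigma(p)}, listed in increasing order.
    n = p + q
    for first in combinations(range(1, n + 1), p):
        second = tuple(k for k in range(1, n + 1) if k not in first)
        yield first + second

# There are C(p+q, p) shuffles; e.g. C(5, 2) = 10:
print(len(list(shuffles(2, 3))))  # 10
```

Enumerating shuffles this way makes the sum in definition 36.1.5 directly computable for small p and q.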

36.1.5. Definition. Let V be a vector space. For p, q ∈ N define

∧ : Altp(V ) × Altq(V ) → Altp+q(V ) : (ω, µ) ↦ ω ∧ µ

where

(ω ∧ µ)(v1, . . . , vp+q) = ∑σ∈S(p,q) (sgn σ) ω(vσ(1), . . . , vσ(p)) µ(vσ(p+1), . . . , vσ(p+q)).


36.2. Exercises (Due: Mon. April 20)

36.2.1. Exercise. Show that T (V ) as defined in 36.1.2 is in fact a unital algebra.

36.2.2. Exercise. Let V be a finite dimensional vector space and J be the ideal in the tensor algebra T (V ) generated by the set of all elements of the form v ⊗ v where v ∈ V . Show that the quotient algebra T (V )/J is the Grassmann algebra over V ∗ (or, equivalently, over V ).

36.2.3. Exercise. Show that if V is a finite dimensional vector space and k > dimV , then Altk(V ) = 0.

36.2.4. Exercise. Give an example of a (4, 5)-shuffle permutation σ of the set N9 = {1, . . . , 9} such that σ(7) = 4.

36.2.5. Exercise. Show that definition 36.1.5 is not overly optimistic by verifying that if ω ∈ Altp(V ) and µ ∈ Altq(V ), then ω ∧ µ ∈ Altp+q(V ).

36.2.6. Exercise. Show that the multiplication defined in 36.1.5 is associative. That is, if ω ∈ Altp(V ), µ ∈ Altq(V ), and ν ∈ Altr(V ), then

ω ∧ (µ ∧ ν) = (ω ∧ µ) ∧ ν.

36.2.7. Exercise. Let V be a finite dimensional vector space. Explain in detail how to make Altk(V ) (or, if you prefer, Altk(V ∗) ) into a vector space for each k ∈ Z and how to make the collection of these into a Z-graded algebra. Show that this algebra is the Grassmann algebra generated by V . Hint. Take Altk(V ) = 0 for each k < 0 and extend the definition of the wedge product so that if α ∈ Alt0(V ) = F and ω ∈ Altp(V ), then α ∧ ω = αω.

36.2.8. Exercise. Let ω1, . . . , ωp be members of Alt1(V ) (that is, linear functionals on V ). Then

(ω1 ∧ · · · ∧ ωp)(v1, . . . , vp) = det[ωj(vk)] (j, k = 1, . . . , p)

for all v1, . . . , vp ∈ V .

36.2.9. Exercise. If {e1, . . . , en} is a basis for an n-dimensional vector space V , then

{e∗σ(1) ∧ · · · ∧ e∗σ(p) : σ ∈ S(p, n − p)}

is a basis for Altp(V ).


CHAPTER 37

The Hodge ∗-operator

37.1. Background

Topics: right-handed basis, orientation, opposite orientation, Hodge star operator.

37.1.1. Definition. Let E be a basis for an n-dimensional vector space V . Then the n-tuple (e1, . . . , en) is an ordered basis for V if e1, . . . , en are distinct elements of E.

37.1.2. Definition. Let E = (e1, . . . , en) be an ordered basis for Rn. We say that the basis E is right-handed if det[e1, . . . , en] > 0 and left-handed otherwise.

37.1.3. Definition. Let V be a real n-dimensional vector space and T : Rn → V be an isomorphism. Then the set of all n-tuples of the form (T (e1), . . . , T (en)) where (e1, . . . , en) is a right-handed basis in Rn is an orientation of V . Another orientation consists of the set of n-tuples (T (e1), . . . , T (en)) where (e1, . . . , en) is a left-handed basis in Rn. Each of these orientations is the opposite (or reverse) of the other. A vector space together with one of these orientations is an oriented vector space.


37.2. Exercises (Due: Wed. April 22)

37.2.1. Exercise. For T : V → W a linear map between vector spaces define

Altp(T ) : Altp(W ) → Altp(V ) : ω ↦ Altp(T )(ω)

where [Altp(T )(ω)](v1, . . . , vp) = ω(T v1, . . . , T vp) for all v1, . . . , vp ∈ V . Then Altp is a contravariant functor from the category of vector spaces and linear maps into itself.

37.2.2. Exercise. Let V be an n-dimensional vector space and T ∈ L(V ). If T is diagonalizable, then

cT (λ) = ∑k (−1)k [Altn−k(T )] λk,

the sum taken over k = 0, . . . , n.

37.2.3. Exercise. Let V be an n-dimensional real inner product space. In exercise 9.2.13 we established an isomorphism Φ : v ↦ v∗ between V and its dual space V ∗. Show how this isomorphism can be used to induce an inner product on V ∗. Then show how this may be used to create an inner product on Altp(V ) for 2 ≤ p ≤ n. Hint. For v, w ∈ V let 〈v∗, w∗〉 = 〈v, w〉. Then for ω1, . . . , ωp, µ1, . . . , µp ∈ Alt1(V ) let 〈ω1 ∧ · · · ∧ ωp , µ1 ∧ · · · ∧ µp〉 = det[〈ωj , µk〉].

37.2.4. Exercise. Let V be a d-dimensional oriented real inner product space. Fix a unit vector vol ∈ Altd(V ). This vector is called a volume element. (In the case where V = Rd, we will always choose vol = e∗1 ∧ · · · ∧ e∗d where (e1, . . . , ed) is the usual ordered basis for Rd.)

Let ω ∈ Altp(V ) and q = d − p. Show that there exists a vector ∗ω ∈ Altq(V ) such that

〈∗ω, µ〉 vol = ω ∧ µ

for each µ ∈ Altq(V ). Show that the map ω ↦ ∗ω from Altp(V ) into Altq(V ) is a vector space isomorphism. This map is the Hodge star operator.

37.2.5. Exercise. Let V be a finite dimensional oriented real inner product space of dimension n. Suppose that p + q = n. Show that ∗ ∗ ω = (−1)pq ω for every ω ∈ Altp(V ).


CHAPTER 38

Differential Forms

38.1. Background

Topics: tangents, tangent space, cotangent space, partial derivatives on a manifold, vector fields, differential forms, the differential of a map between manifolds. (One good source for much of this material is chapters 1 and 4 of [3].)

In everything that follows all vector spaces are assumed to be real, finite dimensional, and oriented; and all manifolds are smooth oriented differentiable manifolds.

38.1.1. Notation. Let m be a point on a manifold M . A real valued function f belongs to C∞m if it is defined in a neighborhood of m and is smooth. (Recall that a function is smooth if it has derivatives of all orders.)

38.1.2. Definition. Let m be a point on a manifold M . A tangent at m is a linear map t : C∞m → R such that

t(fg) = f(m) t(g) + t(f) g(m)

for all f , g ∈ C∞m . The set Tm of all tangents at m is called the tangent space at m. The dual T ∗m of the tangent space at m is called the cotangent space at m.

38.1.3. Definition. Let U ⊆ M where M is a manifold. A function X defined on U is a vector field on U if Xm (also written as X(m)) belongs to the tangent space Tm for each point m ∈ U .

For each f ∈ C∞m define the function Xf by

(Xf)(m) = Xm(f)

for all m in the intersection of the domains of X and f . A vector field X is smooth if Xf ∈ C∞m whenever f ∈ C∞m . Thus a smooth vector field is often regarded as a mapping from C∞m into itself. In everything that follows we will assume that all vector fields are smooth.

38.1.4. Definition. Let U ⊆ M where M is a manifold. A function ω defined on U is a differential p-form on U if ωm (also written as ω(m)) belongs to ∧p(T ∗m) for all m ∈ U . A differential p-form is smooth if whenever X1, . . . , Xp are vector fields on U the function ω(X1, . . . , Xp) defined on U by

ω(X1, . . . , Xp)(m) = ωm(X1m, . . . , Xpm)

is smooth.

The vector space of all smooth differential p-forms on U is denoted by Ωp(U). If M is a manifold of dimension d, we agree (as in exercise 35.2.1) that Ωp(U) = 0 whenever p < 0 or p > d. The space of all differential forms on U is denoted by Ω(U). Since ∧0(T ∗m) = R, a member of Ω0(U) is just a smooth real valued function on U , so Ω0(U) = C∞(U). We denote the Z-graded algebra generated


by the vector spaces Ωp(U) and the wedge product by Ω(U). In everything that follows when we refer to a p-form or a differential form we will assume that it is smooth.

38.1.5. Definition. Let φ = (x1, . . . , xd) be a chart (or coordinate system) at a point m on a d-dimensional manifold M . We define the partial derivative at m with respect to xk, denoted by Dxk(m), to be the tangent given by the formula

Dxk(m)(f) = [∂(f ∘ φ−1)/∂uk](φ(m))

where ∂/∂uk is the usual partial derivative with respect to the kth coordinate on Rd. Depending on which variable we are interested in we often write Dxk(f)(m) for Dxk(m)(f). It is common practice to write ∂/∂xk for Dxk. In Rn we often write Dk for Duk, where, as above, u1, . . . , un are the standard coordinates on Rn. Also, if f is a smooth real valued function on a manifold, we may write fxk (or just fk) for Dxk(f).

38.1.6. Theorem. Let φ = (x1, . . . , xd) be a chart (or coordinate system) at a point m on a d-dimensional manifold M . If t is a tangent at m, then

t = t(x1) Dx1(m) + · · · + t(xd) Dxd(m).

For a proof of this theorem consult [3] (section 1.3, theorem 1) or [25] (chapter 1, proposition 3.1).

38.1.7. Definition. Let F : M → N be a smooth function between manifolds and m be a point in M . Define dFm : Tm → TF (m), the differential of F at m, by dFm(t)(g) = t(g ∘ F ) for all t ∈ Tm and all g ∈ C∞F (m).

38.1.8. Notation. In exercise 37.2.4 we defined a volume element vol in Rd. We will adopt the same notation vol for the differential form dx1 ∧ · · · ∧ dxd on a d-manifold M (where φ = (x1, . . . , xd) is a chart on M ). This is called the volume form.


38.2. Exercises (Due: Fri. April 24)

38.2.1. Exercise. Let m be a point on a d-dimensional manifold M . Show that the tangent space Tm is a vector space of dimension d.

38.2.2. Exercise. Let F : M → N be a smooth function between manifolds and m be a point in M . Show that dFm(t) ∈ TF (m) whenever t ∈ Tm and that dFm ∈ L(Tm, TF (m)).

38.2.3. Exercise (Chain Rule). Let F : M → N and G : N → P be smooth functions between manifolds. Prove that

d(G ∘ F )m = dGF (m) ∘ dFm .

38.2.4. Exercise. Help future students. Design some reasonably elementary exercises which you think would help clarify points in the background material for this assignment that you found confusing at first reading.


CHAPTER 39

The Exterior Differentiation Operator

39.1. Background

Topics: exterior derivative.

39.1.1. Definition. Let U be an open subset of a manifold M . The exterior differentiation operator (or exterior derivative) d is a mapping which takes k-forms on U to (k + 1)-forms on U . That is, d : Ωk(U) → Ωk+1(U). It is defined to have the following properties:

(i) if f is a 0-form on U , then d(f) is the usual differential df of f (as defined in 38.1.7);
(ii) d is linear;
(iii) d2 = 0 (that is, d(dω) = 0 for every k-form ω); and
(iv) if ω is a k-form and µ is any differential form, then

d(ω ∧ µ) = (dω) ∧ µ + (−1)k ω ∧ dµ.

Proofs of the existence and uniqueness of such a function can be found in [20] (theorem 12.14),[25] (chapter 1, theorem 11.1), and [3] (section 4.6).


39.2. Exercises (Due: Mon. April 27)

39.2.1. Exercise. Theorem 38.1.6 tells us that at each point m of a d-dimensional manifold the vectors ∂/∂x1(m), . . . , ∂/∂xd(m) constitute a basis for the tangent space at m. Explain why the vectors d(x1)m, . . . , d(xd)m make up its dual basis.

39.2.2. Exercise. If f is a smooth real valued function on an open subset U of Rd, then

df = f1 dx1 + · · · + fd dxd.

39.2.3. Exercise. Let φ = (x, y, z) be a chart at a point m in a 3-manifold M and let U be the domain of φ. Explain why Ω0(U), Ω1(U), Ω2(U), Ω3(U), and ⊕k∈Z Ωk(U) are vector spaces and exhibit a basis for each.

39.2.4. Exercise. In beginning calculus texts some curious arguments are given for replacing the expression dx dy in the integral ∫∫R f dx dy by r dr dθ when we change from rectangular to polar coordinates in the plane. Show that if we interpret dx dy as the differential form dx ∧ dy, then this is a correct substitution. (Assume additionally that R is a region in the open first quadrant and that the integral of f over R exists.)

39.2.5. Exercise. Give an explanation similar to the one in the preceding exercise of the change in triple integrals from rectangular to spherical coordinates.

39.2.6. Exercise. Generalize the two preceding exercises.

39.2.7. Exercise. Let φ = (x1, . . . , xd) be a chart at a point m on a d-dimensional manifold. If ω is a differential form defined on U then it can be written in the form ∑ ωJ dxJ where J ranges over all subsets J = {j1, . . . , jk} of Nd with j1 < · · · < jk, dxJ = dxj1 ∧ · · · ∧ dxjk (letting dxJ = 1 when J = ∅), and ωJ is a smooth real valued function on U . Show that

dωm = ∑ dωJ(m) ∧ dxJ(m).


CHAPTER 40

Differential Calculus on R3

40.1. Background

Topics: association of vector fields with 1-forms, gradient, curl, divergence.

40.1.1. Notation. If, on some manifold, f is a 0-form and ω is a differential form (of arbitrary degree), we write fω instead of f ∧ ω for the product of the forms.

40.1.2. Definition. Let P, Q, and R be scalar fields on an open subset U of R3. In 38.1.3 we defined the term vector field. Recall that this term is used in a somewhat different fashion in beginning calculus. There it refers simply to a mapping from R3 to R3. So in this more elementary context a vector field F can be written in the form P i + Q j + R k. We say that ω = P dx + Q dy + R dz is the differential 1-form associated with the vector field F = P i + Q j + R k. (And, of course, F is the vector field associated with ω.) Notice that a vector field is identically zero if and only if its associated 1-form is.


40.2. Exercises (Due: Wed. April 29)

Recall that in beginning calculus the cross product is defined only for vectors in R3. The following exercise makes clear the sense in which the wedge product is a generalization of the cross product.

40.2.1. Exercise. Show that there exists an isomorphism u ↦ u# from R3 to Alt2(R3) such that

v∗ ∧ w∗ = (v × w)#

for every v, w ∈ R3.
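One concrete candidate for the map is u#(a, b) := ⟨u, a × b⟩, which is alternating and bilinear. A numerical spot check of the displayed identity (an illustration on random vectors, not a proof; all names here are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def wedge(v, w, a, b):
    # (v* ∧ w*)(a, b) = v*(a) w*(b) − v*(b) w*(a)
    return (v @ a) * (w @ b) - (v @ b) * (w @ a)

def sharp(u, a, b):
    # u#(a, b) := <u, a × b>, an alternating bilinear form on R³
    return u @ np.cross(a, b)

v, w, a, b = rng.standard_normal((4, 3))
assert np.isclose(wedge(v, w, a, b), sharp(np.cross(v, w), a, b))
```

The equality being tested is the Lagrange identity (v·a)(w·b) − (v·b)(w·a) = ⟨v × w, a × b⟩.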

40.2.2. Exercise. Let M be a 3-manifold, φ = (x, y, z) : U → R3 be a chart on M, and f : U → R be a 0-form on U. Compute d(f dy).

40.2.3. Exercise. Let M be a 3-manifold and φ = (x, y, z) : U → R3 be a chart on M. Compute d(x dy ∧ dz + y dz ∧ dx + z dx ∧ dy).

40.2.4. Exercise. Let M be a 3-manifold and φ = (x, y, z) : U → R3 be a chart on M. Compute d[(3xz dx + xy^2 dy) ∧ (x^2y dx − 6xy dz)].

40.2.5. Exercise. Let M be a d-manifold, φ = (φ1, . . . , φd) : U → Rd be a chart on M, and f : U → R be a 0-form on U. Show that ∗(f dφk) = f ∗dφk for 1 ≤ k ≤ d.

40.2.6. Exercise. Let M be a 3-manifold, φ = (x, y, z) : U → R3 be a chart on M, and a, b, c : U → R be 0-forms on U. Compute the following:

(1) ∗a;
(2) ∗(a dx + b dy + c dz);
(3) ∗(a dx ∧ dy + b dx ∧ dz + c dy ∧ dz); and
(4) ∗(a dx ∧ dy ∧ dz).

40.2.7. Exercise. Explain how the Hodge star operator is extended to differential forms on Rn. For differential forms on an open subset U of R3 prove that ∗dx = dy ∧ dz, ∗dy = dz ∧ dx, and ∗dz = dx ∧ dy.

40.2.8. Exercise. Let f be a scalar field on an open subset U of R3. Show that grad f (the gradient of f) is the vector field associated with the 1-form df.

40.2.9. Exercise. Let F and G be vector fields on a region in R3. Show that if ω and µ are, respectively, their associated 1-forms, then ∗(ω ∧ µ) is the 1-form associated with F × G.

40.2.10. Exercise. Let ω = x^2yz dx + yz dy + xy^2 dz. Find d∗ω and ∗dω.

40.2.11. Exercise. Let f be a scalar field. Write the left side of Laplace's equation fxx + fyy + fzz = 0 in terms of d, ∗, and f only.

40.2.12. Exercise. Suppose that F is a smooth vector field in R3 and that ω is its associated 1-form. Show that ∗dω is the 1-form associated with curl F.

40.2.13. Exercise. Let F be a vector field on R3 and ω be its associated 1-form. Show that ∗d∗ω = div F. (Here div F is the divergence of F.)

40.2.14. Exercise. Let F be a vector field on an open subset of R3. Use differential forms (but not partial derivatives) to show that div curl F = 0.


40.2.15. Exercise. Let f be a smooth scalar field (that is, a 0-form) in R3. Use differential forms (but not partial derivatives) to show that curl grad f = 0.
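Both of the last two identities can also be corroborated numerically, although the exercises ask for a proof via forms. A quick sketch using sympy's vector module on one sample scalar field and one sample vector field (the particular fields are our choice):

```python
from sympy.vector import CoordSys3D, Vector, curl, divergence, gradient

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

f = x**2 * y * z + x * y**3               # an arbitrary smooth scalar field
F = x**2*y*z*N.i + y*z*N.j + x*y**2*N.k   # an arbitrary smooth vector field

assert curl(gradient(f)) == Vector.zero   # curl grad f = 0
assert divergence(curl(F)) == 0           # div curl F = 0
```

In the language of forms these are the two components of d ∘ d = 0: ∗d(df) = 0 and ∗d∗(∗dω) = 0 for the 1-form ω associated with F.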


CHAPTER 41

Closed and Exact Forms

41.1. Background

Topics: closed differential forms, exact differential forms.

41.1.1. Definition. Let U be an open subset of a manifold and consider the sequence

· · · → Ωp−1(U) --dp−1--> Ωp(U) --dp--> Ωp+1(U) → · · ·

where dp−1 and dp are exterior differentiation operators. Elements of ker dp are called closed p-forms and elements of ran dp−1 are exact p-forms.


41.2. Exercises (Due: Mon. May 4)

41.2.1. Exercise. Writers of elementary texts in calculus and physics, in their otherwise laudable efforts to express complicated ideas in a simple fashion, will occasionally lose all semblance of self control and write things like dr = dx i + dy j + dz k and dS = dy ∧ dz i + dz ∧ dx j + dx ∧ dy k. In the depths of their depravity they may even go so far as to claim for a vector field F = (F1, F2, F3) in R3 that the derivative of F · dr is curl F · dS. Is it possible to make any sense of such a statement?

41.2.2. Exercise. With the shameless notations suggested in exercise 41.2.1 try to make sense of the claim that for a vector field F in R3

d(F · dS) = (div F) dV

where dV is the "volume element" dx ∧ dy ∧ dz.

41.2.3. Exercise. Show that every exact differential form is closed.

41.2.4. Exercise. Show that if ω and µ are closed differential forms, then so is ω ∧ µ.

41.2.5. Exercise. Show that if ω is an exact differential form and µ is a closed form, then ω ∧ µ is exact.

41.2.6. Exercise. Let φ = (x, y, z) : U → R3 be a chart on a manifold and ω = a dx + b dy + c dz be a 1-form on U. Show that if ω is exact, then

∂c/∂y = ∂b/∂z,   ∂a/∂z = ∂c/∂x,   and   ∂b/∂x = ∂a/∂y.

41.2.7. Exercise. Explain why solving the initial value problem

e^x cos y + 2x − e^x (sin y) y′ = 0,   y(0) = π/3

is essentially the same thing as showing that the 1-form (e^x cos y + 2x) dx − e^x (sin y) dy is exact. Do it.
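For the record, here is a symbolic check of the exactness claim and of the resulting implicit solution (a sketch; the potential function F below is our choice of antiderivative):

```python
import sympy as sp

x, y = sp.symbols('x y')
M = sp.exp(x) * sp.cos(y) + 2 * x     # coefficient of dx
N = -sp.exp(x) * sp.sin(y)            # coefficient of dy

# Exactness of M dx + N dy on R² amounts to ∂M/∂y = ∂N/∂x.
assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

# A potential: dF = M dx + N dy with F = e^x cos y + x².
F = sp.exp(x) * sp.cos(y) + x**2
assert sp.simplify(sp.diff(F, x) - M) == 0 and sp.simplify(sp.diff(F, y) - N) == 0

# The IVP solution is the level curve F = F(0, π/3), i.e. e^x cos y + x² = 1/2.
print(sp.Eq(F, F.subs({x: 0, y: sp.pi/3})))
```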


CHAPTER 42

The de Rham Cohomology Group

42.1. Background

Topics: cocycles, coboundaries, the de Rham cohomology group.

42.1.1. Definition. Let M be a d-manifold. For 0 ≤ p ≤ d we denote by Zp(M) (or just Zp) the vector space of all closed p-forms on M. For reasons which will become apparent shortly, members of Zp are sometimes called (de Rham) p-cocycles. Also let Bp(M) (or just Bp) denote the vector space of all exact p-forms on M (sometimes called p-coboundaries). Since there are no differential forms of degree less than 0, we take B0 = B0(M) = 0. For convenience we also set Zp = Zp(M) = 0 and Bp = Bp(M) = 0 whenever p < 0 or p > d. It is a trivial consequence of exercise 41.2.3 that Bp(M) is a subspace of the vector space Zp(M). Thus it makes sense to define

Hp = Hp(M) = Zp(M)/Bp(M).

This is the pth de Rham cohomology group of M. (Clearly this cohomology "group" is actually a vector space. It is usually called a "group" because most other cohomology theories produce only groups.) The dimension of the vector space Hp(M) is the pth Betti number of M.

Another (obviously equivalent) way of phrasing the definition of the pth de Rham cohomology group is in terms of the maps

· · · → Ωp−1(M) --dp−1--> Ωp(M) --dp--> Ωp+1(M) → · · ·

where dp−1 and dp are exterior differentiation operators. Define

Hp(M) := ker dp / ran dp−1

for all p.

42.1.2. Definition. Let F : M → N be a smooth function between smooth manifolds. For p ≥ 1 define

ΩpF : Ωp(N) → Ωp(M) : ω ↦ (ΩpF)(ω)

where

((ΩpF)(ω))_m (v1, . . . , vp) = ω_{F(m)} (dF_m(v1), . . . , dF_m(vp))     (42.1)

for every m ∈ M and v1, . . . , vp ∈ Tm. Also define ((Ω0F)(ω))_m = ω_{F(m)}. We simplify the notation in (42.1) slightly:

(ΩpF)(ω)(v1, . . . , vp) = ω(dF(v1), . . . , dF(vp)).     (42.2)

Denote by F∗ the map induced by the maps ΩpF which takes the Z-graded algebra Ω(N) to the Z-graded algebra Ω(M).

42.1.3. Definition. Let F : M → N be a smooth function between smooth manifolds. For each integer p define

Hp(F) : Hp(N) → Hp(M) : [ω] ↦ [Ωp(F)(ω)].

Denote by H∗(F) the induced map which takes the Z-graded algebra H∗(N) into H∗(M).


42.2. Exercises (Due: Wed. May 6)

42.2.1. Exercise. Show that Ωp (as defined in 38.1.4 and 42.1.2) is a contravariant functor from the category of smooth manifolds and smooth maps to the category of vector spaces and linear maps.

42.2.2. Exercise. If F : M → N is a smooth function between smooth manifolds, ω ∈ Ωp(N), and µ ∈ Ωq(N), then

(Ωp+qF)(ω ∧ µ) = (ΩpF)(ω) ∧ (ΩqF)(µ).

42.2.3. Exercise. In exercise 42.2.1 you showed that Ωp was a functor for each p. What about Ω itself? Is it a functor? Explain.

42.2.4. Exercise. Let F : M → N be a smooth function between smooth manifolds. Prove that

d ∘ F∗ = F∗ ∘ d.

42.2.5. Exercise. Calculate the cohomology group H0(R).

42.2.6. Exercise. For U ⊆ Rn give a very clear description of H0(U) and explain why its dimension is the number of connected components of U. Hint. A function is said to be locally constant if it is constant in some neighborhood of each point in its domain.

42.2.7. Exercise. Let V = {0} be the 0-dimensional Euclidean space. Compute the pth de Rham cohomology group Hp(V) for all p ∈ Z.

42.2.8. Exercise. Compute Hp(R) for all p ∈ Z.

42.2.9. Exercise. Let U be the union of m disjoint open intervals in R. Compute Hp(U) for all p ∈ Z.

42.2.10. Exercise. Let U ⊆ Rn. For [ω] ∈ Hp(U) and [µ] ∈ Hq(U) define

[ω][µ] = [ω ∧ µ] ∈ Hp+q(U).

Explain why exercise 41.2.4 is necessary for this definition to make sense. Prove also that this definition does not depend on the representatives chosen from the equivalence classes. Show that this definition makes H∗(U) = ⊕_{p∈Z} Hp(U) into a Z-graded algebra. This is the de Rham cohomology algebra of U.

42.2.11. Exercise. Prove that with definition 42.1.3, H∗ becomes a contravariant functor from the category of open subsets of Rn and smooth maps to the category of Z-graded algebras and their homomorphisms.


CHAPTER 43

Cochain Complexes

43.1. Background

Topics: cochain complexes, cochain maps.

43.1.1. Definition. A sequence

· · · → Vp−1 --dp−1--> Vp --dp--> Vp+1 → · · ·

of vector spaces and linear maps is a cochain complex if dp ∘ dp−1 = 0 for all p ∈ Z. Such a sequence may be denoted by (V∗, d) or just by V∗.

43.1.2. Definition. We generalize definition 42.1.1 in the obvious fashion. If V∗ is a cochain complex, then the pth cohomology group Hp(V∗) is defined to be ker dp / ran dp−1. (As before, this "group" is actually a vector space.) In this context the elements of Vp are often called p-cochains, elements of ker dp are p-cocycles, elements of ran dp−1 are p-coboundaries, and d is the coboundary operator.

43.1.3. Definition. Let (V∗, d) and (W∗, δ) be cochain complexes. A cochain map G : V∗ → W∗ is a sequence of linear maps Gp : Vp → Wp satisfying

δp ∘ Gp = Gp+1 ∘ dp

for every p ∈ Z. That is, the diagram

· · · → Vp --dp--> Vp+1 → · · ·
        |          |
        Gp         Gp+1
        v          v
· · · → Wp --δp--> Wp+1 → · · ·

commutes.

43.1.4. Definition. A sequence

0 → U∗ --F--> V∗ --G--> W∗ → 0

of cochain complexes and cochain maps is (short) exact if for every p ∈ Z the sequence

0 → Up --Fp--> Vp --Gp--> Wp → 0

of vector spaces and linear maps is (short) exact.


43.2. Exercises (Due: Fri. May 8)

43.2.1. Exercise. Let G : V∗ → W∗ be a cochain map between cochain complexes. For each p ∈ Z define

G∗p : Hp(V∗) → Hp(W∗) : [v] ↦ [Gp(v)]

whenever v is a cocycle in Vp. Show that the maps G∗p are well defined and linear. Hint. To prove that G∗p is well defined we need to show two things: that Gp(v) is a cocycle in Wp and that the definition does not depend on the choice of representative v.

43.2.2. Exercise. If 0 → U∗ --F--> V∗ --G--> W∗ → 0 is a short exact sequence of cochain complexes, then

Hp(U∗) --F∗p--> Hp(V∗) --G∗p--> Hp(W∗)

is exact at Hp(V∗) for every p ∈ Z.

43.2.3. Exercise. A short exact sequence

0 → U∗ --F--> V∗ --G--> W∗ → 0

of cochain complexes induces a long exact sequence

· · · → Hp−1(W∗) --ηp−1--> Hp(U∗) --F∗p--> Hp(V∗) --G∗p--> Hp(W∗) --ηp--> Hp+1(U∗) → · · ·

Hint. If w is a cocycle in Wp, then, since Gp is surjective, there exists v ∈ Vp such that w = Gp(v). It follows that dv ∈ ker Gp+1 = ran Fp+1 so that dv = Fp+1(u) for some u ∈ Up+1. Let ηp([w]) = [u].


CHAPTER 44

Simplicial Homology

44.1. Background

Topics: convex, convex combinations, convex hull, convex independent, closed simplex, open simplex, oriented simplex, face of a simplex, simplicial complex, r-skeleton of a complex, polyhedron of a complex, chains, boundary maps, cycles, boundaries, Betti number, Euler characteristic.

44.1.1. Definition. Let V be a vector space. Recall that a linear combination of a finite set {x1, . . . , xn} of vectors in V is a vector of the form ∑_{k=1}^n αk xk where α1, . . . , αn ∈ R. If α1 = α2 = · · · = αn = 0, then the linear combination is trivial; if at least one αk is different from zero, the linear combination is nontrivial. A linear combination ∑_{k=1}^n αk xk of the vectors x1, . . . , xn is a convex combination if αk ≥ 0 for each k (1 ≤ k ≤ n) and if ∑_{k=1}^n αk = 1.

44.1.2. Definition. If a and b are vectors in the vector space V, then the closed segment between a and b, denoted by [a, b], is {(1 − t)a + tb : 0 ≤ t ≤ 1}.

44.1.3. CAUTION. Notice that there is a slight conflict between this notation, when applied to the vector space R of real numbers, and the usual notation for closed intervals on the real line. In R the closed segment [a, b] is the same as the closed interval [a, b] provided that a ≤ b. If a > b, however, the closed segment [a, b] is the same as the segment [b, a]; it contains all numbers c such that b ≤ c ≤ a, whereas the closed interval [a, b] is empty.

44.1.4. Definition. A subset C of a vector space V is convex if the closed segment [a, b] is contained in C whenever a, b ∈ C.

44.1.5. Definition. Let A be a subset of a vector space V. The convex hull of A is the smallest convex subset of V which contains A.

44.1.6. Definition. A set S = {v0, v1, . . . , vp} of p + 1 vectors in a vector space V is convex independent if the set {v1 − v0, v2 − v0, . . . , vp − v0} is linearly independent in V.

44.1.7. Definition. An affine subspace of a vector space V is any translate of a linear subspace of V.

44.1.8. Definition. Let p ∈ Z+. The closed convex hull of a convex independent set S = {v0, . . . , vp} of p + 1 vectors in some vector space is a closed p-simplex. It is denoted by [s] or by [v0, . . . , vp]. The integer p is the dimension of the simplex. The open p-simplex determined by the set S is the set of all convex combinations ∑_{k=0}^p αk vk of elements of S where each αk > 0. The open simplex will be denoted by (s) or by (v0, . . . , vp). We make the special convention that a single vector v is both a closed and an open 0-simplex.

If [s] is a simplex in Rn then the plane of [s] is the affine subspace of Rn having the least dimension which contains [s]. It turns out that the open simplex (s) is the interior of [s] in the plane of [s].


44.1.9. Definition. Let [s] = [v0, . . . , vp] be a closed p-simplex in Rn and {j0, . . . , jq} be a nonempty subset of {0, 1, . . . , p}. Then the closed q-simplex [t] = [vj0, . . . , vjq] is a closed q-face of [s]. The corresponding open simplex (t) is an open q-face of [s]. The 0-faces of a simplex are called the vertices of the simplex.

Note that distinct open faces of a closed simplex [s] are disjoint and that the union of all the open faces of [s] is [s] itself.

44.1.10. Definition. Let [s] = [v0, . . . , vp] be a closed p-simplex in Rn. We say that two orderings (vi0, . . . , vip) and (vj0, . . . , vjp) of the vertices are equivalent if (j0, . . . , jp) is an even permutation of (i0, . . . , ip). (This is an equivalence relation.) For p ≥ 1 there are exactly two equivalence classes; these are the orientations of [s]. An oriented simplex is a simplex together with one of these orientations. The oriented simplex determined by the ordering (v0, . . . , vp) will be denoted by ⟨v0, . . . , vp⟩. If, as above, [s] is written as [v0, . . . , vp], then we may shorten ⟨v0, . . . , vp⟩ to ⟨s⟩.

Of course, none of the preceding makes sense for 0-simplexes. We arbitrarily assign them two orientations, which we denote by + and −. Thus ⟨s⟩ and −⟨s⟩ have opposite orientations.

44.1.11. Definition. A finite collection K of open simplexes in Rn is a simplicial complex if the following conditions are satisfied:

(1) if (s) ∈ K and (t) is an open face of [s], then (t) ∈ K; and
(2) if (s), (t) ∈ K and (s) ≠ (t), then (s) ∩ (t) = ∅.

The dimension of a simplicial complex K, denoted by dim K, is the maximum dimension of the simplexes constituting K. If r ≤ dim K, then the r-skeleton of K, denoted by Kr, is the set of all open simplexes in K whose dimensions are no greater than r. The polyhedron, |K|, of the complex K is the union of all the simplexes in K.

44.1.12. Definition. Let K be a simplicial complex in Rn. For 0 ≤ p ≤ dim K let Ap(K) (or just Ap) denote the free vector space generated by the set of all oriented p-simplexes belonging to K. For 1 ≤ p ≤ dim K let Wp(K) (or just Wp) be the subspace of Ap generated by all elements of the form

⟨v0, v1, v2, . . . , vp⟩ + ⟨v1, v0, v2, . . . , vp⟩

and let Cp(K) (or just Cp) be the resulting quotient space Ap/Wp. For p = 0 let Cp = Ap and for p < 0 or p > dim K let Cp = 0. The elements of Cp are the p-chains of K.

Notice that for any p we have

[⟨v0, v1, v2, . . . , vp⟩] = −[⟨v1, v0, v2, . . . , vp⟩].

To avoid cumbersome notation we will not distinguish between the p-chain [⟨v0, v1, v2, . . . , vp⟩] and its representative ⟨v0, v1, v2, . . . , vp⟩.

44.1.13. Definition. Let ⟨s⟩ = ⟨v0, v1, . . . , vp+1⟩ be an oriented (p+1)-simplex. We define the boundary of ⟨s⟩, denoted by ∂⟨s⟩, by

∂⟨s⟩ = ∑_{k=0}^{p+1} (−1)^k ⟨v0, . . . , v̂k, . . . , vp+1⟩.

(The caret above the vk indicates that that term is missing; so the boundary of a (p+1)-simplex is an alternating sum of p-simplexes.)

44.1.14. Definition. Let K be a simplicial complex in Rn. For 1 ≤ p ≤ dim K define

∂p = ∂ : Cp(K) → Cp−1(K)

as follows. If ∑ a(s)⟨s⟩ is a p-chain in K, let

∂(∑ a(s)⟨s⟩) = ∑ a(s) ∂⟨s⟩.


For all other p let ∂p be the zero map. The maps ∂p are called boundary maps. Notice that each ∂p is a linear map.

44.1.15. Definition. Let K be a simplicial complex in Rn and 0 ≤ p ≤ dim K. Define Zp(K) = Zp to be the kernel of ∂p : Cp → Cp−1 and Bp(K) = Bp to be the range of ∂p+1 : Cp+1 → Cp. The members of Zp are p-cycles and the members of Bp are p-boundaries.

It is clear from exercise 44.2.2 that Bp is a subspace of the vector space Zp. Thus we may define Hp(K) = Hp to be Zp/Bp. It is the pth simplicial homology group of K. (And, of course, Zp, Bp, and Hp are the trivial vector space whenever p < 0 or p > dim K.)

44.1.16. Definition. Let K be a simplicial complex. The number βp := dim Hp(K) is the pth Betti number of the complex K. And χ(K) := ∑_{p=0}^{dim K} (−1)^p βp is the Euler characteristic of K.


44.2. Exercises (Due: Wed. May 13)

44.2.1. Exercise. Show that definition 44.1.5 makes sense by showing that the intersection of a family of convex subsets of a vector space is itself convex. Then show that a "constructive characterization" is equivalent; that is, prove that the convex hull of A is the set of all convex combinations of elements of A.

44.2.2. Exercise. Let K be a simplicial complex in Rn. Show that ∂² = ∂ ∘ ∂ : Cp+1(K) → Cp−1(K) is identically zero. Hint. It suffices to prove this for generators ⟨v0, . . . , vp+1⟩.
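The computation in the hint is easy to automate. A sketch in Python, representing an oriented simplex by an ordered tuple of vertex labels and a chain by a dictionary from simplexes to coefficients (our own encoding, not notation from the text):

```python
def boundary(simplex):
    """∂<v0,…,vp>: alternating sum of faces, as a chain {face: coefficient}."""
    out = {}
    for k in range(len(simplex)):
        face = simplex[:k] + simplex[k+1:]      # omit the k-th vertex
        out[face] = out.get(face, 0) + (-1) ** k
    return out

def boundary_of_chain(chain):
    """Extend ∂ linearly to chains; drop terms whose coefficients cancel."""
    out = {}
    for s, a in chain.items():
        for face, c in boundary(s).items():
            out[face] = out.get(face, 0) + a * c
    return {s: a for s, a in out.items() if a != 0}

# ∂² vanishes on a generator (and hence, by linearity, on every chain):
assert boundary_of_chain(boundary((0, 1, 2, 3))) == {}
```

Running ∂ twice on any generator leaves an empty chain: each face obtained by deleting two vertices appears twice with opposite signs.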

44.2.3. Exercise. Let K be the topological boundary (that is, the 1-skeleton) of an oriented 2-simplex in R2. Compute Cp(K), Zp(K), Bp(K), and Hp(K) for each p.
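For checking your answer, here is a numerical sketch computing the Betti numbers from ranks of boundary matrices. The matrix D1 encodes ∂1 for the vertices 0, 1, 2 and the oriented edges ⟨0,1⟩, ⟨0,2⟩, ⟨1,2⟩ (a check, not a substitute for the requested computation):

```python
import numpy as np

# K = the 1-skeleton of a 2-simplex: three vertices, three oriented edges.
# Column j of D1 is the boundary of edge j (final vertex minus initial).
D1 = np.array([[-1, -1,  0],    # vertex 0
               [ 1,  0, -1],    # vertex 1
               [ 0,  1,  1]])   # vertex 2

rank1 = np.linalg.matrix_rank(D1)
beta0 = 3 - rank1          # dim H0 = dim C0 − rank ∂1   (Z0 = C0)
beta1 = 3 - rank1          # dim H1 = nullity of ∂1      (B1 = 0: no 2-simplexes)
assert (beta0, beta1) == (1, 1)   # one component, one independent loop
```

Note χ(K) = β0 − β1 = 0, which agrees with the count α0 − α1 = 3 − 3 of exercise 44.2.6.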

44.2.4. Exercise. What changes in exercise 44.2.3 if K is taken to be the oriented 2-simplex itself?

44.2.5. Exercise. Let K be the simplicial complex in R2 comprising two triangular regions, similarly oriented, with a side in common. For all p compute Cp(K), Zp(K), Bp(K), and Hp(K).

44.2.6. Exercise. Let K be a simplicial complex. For 0 ≤ p ≤ dim K let αp be the number of p-simplexes in K. That is, αp = dim Cp(K). Show that

χ(K) = ∑_{p=0}^{dim K} (−1)^p αp.


CHAPTER 45

Simplicial Cohomology

45.1. Background

Topics: simplicial cochain, simplicial cocycles, simplicial coboundaries, simplicial cohomology group, smooth submanifold, smoothly triangulated manifold, de Rham's theorem, pullback of differential forms.

45.1.1. Definition. Let K be a simplicial complex. For each p ∈ Z let C^p(K) = (C_p(K))∗. The elements of C^p(K) are (simplicial) p-cochains. Then the adjoint ∂_p^∗ of the boundary map

∂_p : C_{p+1}(K) → C_p(K)

is the linear map

∂_p^∗ = ∂∗ : C^p(K) → C^{p+1}(K).

(Notice that ∂∗ ∘ ∂∗ = 0.) Also define

(1) Z^p(K) := ker ∂_p^∗;
(2) B^p(K) := ran ∂_{p−1}^∗; and
(3) H^p(K) := Z^p(K)/B^p(K).

Elements of Z^p(K) are (simplicial) p-cocycles and elements of B^p(K) are (simplicial) p-coboundaries. The vector space H^p(K) is the pth simplicial cohomology group of K.

45.1.2. Definition. Let F : N → M be a smooth injection between smooth manifolds. The pair (N, F) is a smooth submanifold of M if dF_n is injective for every n ∈ N.

45.1.3. Definition. Let M be a smooth manifold, K be a simplicial complex in Rn, and h : [K] → M be a homeomorphism. The triple (M, K, h) is a smoothly triangulated manifold if for every open simplex (s) in K the map h|[s] : [s] → M has an extension hs : U → M to a neighborhood U of [s] lying in the plane of [s] such that (U, hs) is a smooth submanifold of M.

45.1.4. Theorem. A smooth manifold can be triangulated if and only if it is compact.

The proof of this theorem is tedious enough that very few textbook authors choose to include it in their texts. You can find a "simplified" proof in [6].

45.1.5. Theorem (de Rham's theorem). If (M, K, φ) is a smoothly triangulated manifold, then

Hp(M) ≅ Hp(K)

for every p ∈ Z.

You can find proofs of de Rham's theorem in [16], chapter IV, theorem 3.1; [20], theorem 16.12; [24], pages 167–173; and [26], theorem 4.17.

45.1.6. Definition (pullbacks of differential forms). Let F : M → N be a smooth mapping between smooth manifolds. Then there exists an algebra homomorphism F∗ : Ω(N) → Ω(M), called the pullback associated with F, which satisfies the following conditions:


(1) F∗ maps Ωp(N) into Ωp(M) for each p;
(2) F∗(g) = g ∘ F for each 0-form g on N; and
(3) (F∗µ)_m(v) = µ_{F(m)}(dF_m(v)) for every 1-form µ on N, every m ∈ M, and every v ∈ Tm.


45.2. Exercises (Due: Fri. May 15)

45.2.1. Exercise. Show that if K is a simplicial complex in Rn, then H^p(K) ≅ (H_p(K))∗ for every integer p.

45.2.2. Exercise. Prove the assertion made in definition 45.1.6.

45.2.3. Exercise. Show that if F : M → N is a smooth map between n-manifolds, then F∗ is a cochain map from the cochain complex (Ω∗(N), d) to the cochain complex (Ω∗(M), d). That is, show that the diagram

Ωp(N) --d--> Ωp+1(N)
  |            |
  F∗           F∗
  v            v
Ωp(M) --d--> Ωp+1(M)

commutes for every p ∈ Z.


CHAPTER 46

Integration of Differential Forms

46.1. Background

Topics: integral of a differential form over a simplicial complex, integral of a differential form over a manifold, manifolds with boundary.

46.1.1. Definition. Let ⟨s⟩ be an oriented p-simplex in Rn (where 1 ≤ p ≤ n) and µ be a p-form defined on a set U which is open in the plane of ⟨s⟩ and which contains [s]. If ⟨s⟩ = ⟨v0, . . . , vp⟩ take (v1 − v0, . . . , vp − v0) to be an ordered basis for the plane of ⟨s⟩ and let x1, . . . , xp be the coordinate projection functions relative to this ordered basis; that is, if a = ∑_{k=1}^p ak(vk − v0) ∈ U, then xj(a) = aj for 1 ≤ j ≤ p. Then φ = (x1, . . . , xp) : U → Rp is a chart on U; so there exists a smooth function g on U such that µ = g dx1 ∧ · · · ∧ dxp. Define

∫_⟨s⟩ µ = ∫_[s] g dx1 . . . dxp

where the right hand side is an ordinary Riemann integral. If ⟨v0⟩ is a 0-simplex, we make a special definition:

∫_⟨v0⟩ f = f(v0)

for every 0 -form f .Extend the preceding definition to p -chains by requiring the integral to be linear as a function

of simplexes; that is, if c =∑as〈s〉 is a p -chain (in some simplicial complex) and µ is a p -form,

define ∫c

µ =∑

a(s)∫〈s〉

µ .

46.1.2. Definition. For a smoothly triangulated manifold (M, K, h) we define a map

∫_p : Ωp(M) → C^p(K)

as follows. If ω is a p-form on M, then ∫_p ω is to be a linear functional on Cp(K); that is, a member of C^p(K) = (Cp(K))∗. In order to define a linear functional on Cp(K) it suffices to specify its values on the basis vectors of Cp(K); that is, on the oriented p-simplexes ⟨s⟩ which generate Cp(K). Let hs : U → M be an extension of h|[s] to an open set U in the plane of ⟨s⟩. Then hs∗ pulls back p-forms on M to p-forms on U so that hs∗(ω) ∈ Ωp(U). Define

(∫_p ω)⟨s⟩ := ∫_⟨s⟩ hs∗(ω).


46.1.3. Notation. Let Hn = {x ∈ Rn : xn ≥ 0}. This is the upper half-space of Rn.

46.1.4. Definition. An n-manifold with boundary is defined in the same way as an n-manifold except that the range of a chart is assumed to be an open subset of Hn.

The interior of Hn, denoted by int Hn, is defined to be {x ∈ Rn : xn > 0}. (Notice that this is the interior of Hn regarded as a subset of Rn, not of Hn.) The boundary of Hn, denoted by ∂Hn, is defined to be {x ∈ Rn : xn = 0}.

If M is an n-manifold with boundary, a point m ∈ M belongs to the interior of M (denoted by int M) if φ(m) ∈ int Hn for some chart φ. And it belongs to the boundary of M (denoted by ∂M) if φ(m) ∈ ∂Hn for some chart φ.

46.1.5. Theorem. Let M and N be smooth n-manifolds with boundary and F : M → N be a smooth diffeomorphism. Then both int M and ∂M are smooth manifolds (without boundary). The interior of M has dimension n and the boundary of M has dimension n − 1. The mapping F induces smooth diffeomorphisms int F : int M → int N and ∂F : ∂M → ∂N.

For a proof of this theorem consult the marvelous text [1], proposition 7.2.6.


46.2. Exercises (Due: Mon. May 18)

46.2.1. Exercise. Let V be an open subset of Rn, F : V → Rn, and c : [t0, t1] → V be a smooth curve in V. Let C = ran c. It is conventional to define the "integral of the tangential component of F over C", often denoted by ∫_C F_T, by the formula

∫_C F_T = ∫_{t0}^{t1} ⟨F ∘ c, Dc⟩ = ∫_{t0}^{t1} ⟨F(c(t)), c′(t)⟩ dt.     (46.1)

The "tangential component of F", written F_T, may be regarded as the 1-form ∑_{k=1}^n Fk dxk.

Make sense of the preceding definition in terms of the definition of the integral of 1-forms over a smoothly triangulated manifold. For simplicity take n = 2. Hint. Suppose we have the following:

(1) ⟨t0, t1⟩ (with t0 < t1) is an oriented 1-simplex in R;
(2) V is an open subset of R2;
(3) c : J → V is an injective smooth curve in V, where J is an open interval containing [t0, t1]; and
(4) ω = a dx + b dy is a smooth 1-form on V.

First show that

(c∗(dx))(t) = Dc1(t)

for t0 ≤ t ≤ t1. (We drop the notational distinction between c and its extension cs to J. Since the tangent space Tt is one-dimensional for every t, we identify Tt with R. Choose v (in (3) of definition 45.1.6) to be the usual basis vector in R, the number 1.)

Show in a similar fashion that

(c∗(dy))(t) = Dc2(t).

Then write an expression for (c∗(ω))(t). Finally conclude that (∫_1 ω)(⟨t0, t1⟩) is indeed equal to ∫_{t0}^{t1} ⟨(a, b) ∘ c, Dc⟩ as claimed in (46.1).

46.2.2. Exercise. Let S1 be the unit circle in R2 oriented counterclockwise and let F be the vector field defined by F(x, y) = (2x^3 − y^3) i + (x^3 + y^3) j. Use your work in exercise 46.2.1 to calculate ∫_{S1} F_T. Hint. You may use without proof two facts: (1) the integral does not depend on the parametrization (triangulation) of the curve, and (2) the results of exercise 46.2.1 hold also for simple closed curves in R2; that is, for curves c : [t0, t1] → R2 which are injective on the open interval (t0, t1) but which satisfy c(t0) = c(t1).
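A symbolic evaluation for checking your answer (sympy; the parametrization c(t) = (cos t, sin t) is the obvious counterclockwise one):

```python
import sympy as sp

t = sp.symbols('t')
c1, c2 = sp.cos(t), sp.sin(t)        # c(t), a counterclockwise parametrization of S¹
F1 = 2*c1**3 - c2**3                 # components of F ∘ c
F2 = c1**3 + c2**3

integrand = F1*sp.diff(c1, t) + F2*sp.diff(c2, t)   # <F ∘ c, c′(t)>
val = sp.integrate(integrand, (t, 0, 2*sp.pi))
print(val)  # 3*pi/2
```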

46.2.3. Exercise. Let V be an open subset of R3, F : V → R3 be a smooth vector field, and (S, K, h) be a smoothly triangulated 2-manifold such that S ⊆ V. It is conventional to define the "integral of the normal component of F over S", often denoted by ∫∫_S F_N, by the formula

∫∫_S F_N = ∫∫_[K] ⟨F ∘ h, n⟩

where n = h1 × h2. (Notation: hk is the kth partial derivative of h.)

Make sense of the preceding definition in terms of the definition of the integral of 2-forms over a smoothly triangulated manifold (with or without boundary). In particular, suppose that F = a i + b j + c k (where a, b, and c are smooth functions) and let ω = a dy ∧ dz + b dz ∧ dx + c dx ∧ dy. This 2-form is conventionally called the "normal component of F" and is denoted by F_N. Notice that F_N is just ∗µ where µ is the 1-form associated with the vector field F. Hint. Proceed as follows.

(a) Show that the vector n(u, v) is perpendicular to the surface S at h(u, v) for each (u, v) in [K] by showing that it is perpendicular to D(h ∘ c)(0) whenever c is a smooth curve in [K] such that c(0) = (u, v).


(b) Let u and v (in that order) be the coordinates in the plane of [K] and x, y, and z (in that order) be the coordinates in R3. Show that h∗(dx) = h^1_1 du + h^1_2 dv, where h^1 is the first component of h and the subscripts denote partial derivatives. Also compute h∗(dy) and h∗(dz).

Remark. If at each point in [K] we identify the tangent plane to R2 with R2 itself and if we use conventional notation, the "v" which appears in (3) of definition 45.1.6 is just not written. One keeps in mind that the components of h and all the differential forms are functions on (a neighborhood of) [K].

(c) Now find h∗(ω). (Recall that ω = F_N is defined above.)

(d) Show for each simplex (s) in K that

(∫_2 ω)(⟨s⟩) = ∫∫_[s] ⟨F ∘ h, n⟩.

(e) Finally show that if ⟨s1⟩, . . . , ⟨sn⟩ are the oriented 2-simplexes of K and c = ∑_{k=1}^n ⟨sk⟩, then

(∫_2 ω)(c) = ∫∫_[K] ⟨F ∘ h, n⟩.

46.2.4. Exercise. Let F(x, y, z) = xz i + yz j and H be the hemisphere of x^2 + y^2 + z^2 = 4 for which z ≥ 0. Use exercise 46.2.3 to find ∫∫_H F_N.
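A symbolic check of the answer via the recipe of exercise 46.2.3, using the standard spherical parametrization of the hemisphere (the parameter names are ours):

```python
import sympy as sp

ph, th = sp.symbols('phi theta')

# h parametrizes the hemisphere x² + y² + z² = 4, z ≥ 0.
h = sp.Matrix([2*sp.sin(ph)*sp.cos(th),
               2*sp.sin(ph)*sp.sin(th),
               2*sp.cos(ph)])
n = h.diff(ph).cross(h.diff(th))           # n = h1 × h2 (outward for this ordering)
F = sp.Matrix([h[0]*h[2], h[1]*h[2], 0])   # F = xz i + yz j along the surface

flux = sp.integrate(F.dot(n), (ph, 0, sp.pi/2), (th, 0, 2*sp.pi))
print(sp.simplify(flux))  # 8*pi
```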


CHAPTER 47

Stokes’ Theorem

47.1. Background

Topics: Stokes’ theorem.

47.1.1. Theorem (Stokes' theorem). Suppose that (M, K, h) is an oriented smoothly triangulated manifold with boundary. Then the integration operator ∫ = (∫_p)_{p∈Z} is a cochain map from the cochain complex (Ω∗(M), d) to the cochain complex (C∗(K), ∂∗).

This is an important and standard theorem, which appears in many versions and with many different proofs. See, for example, [1], theorem 7.2.6; [19], chapter XVII, theorem 2.1; [21], theorem 10.8; or [26], theorems 4.7 and 4.9.

Recall that when we say in Stokes' theorem that the integration operator is a cochain map, we are saying that the following diagram commutes.

Ωp(M) --d--> Ωp+1(M)
  |              |
  ∫p             ∫p+1
  v              v
C^p(K) --∂∗--> C^{p+1}(K)

Thus if ω is a p-form on M and ⟨s⟩ is an oriented (p + 1)-simplex belonging to K, then we must have

(∫_{p+1} dω)(⟨s⟩) = (∂∗(∫_p ω))(⟨s⟩).     (47.1)

This last equation (47.1) can be written in terms of integration over oriented simplexes:

∫_⟨s⟩ d(hs∗ω) = ∫_{∂⟨s⟩} hs∗ω.     (47.2)

In more conventional notation all mention of the triangulating simplicial complex K and of the map h is suppressed. This is justified by the fact that it can be shown that the value of the integral is independent of the particular triangulation used. Then when the equations of the form (47.2) are added over all the (p + 1)-simplexes comprising K we arrive at a particularly simple formulation of (the conclusion of) Stokes' theorem:

∫_M dω = ∫_{∂M} ω.     (47.3)

One particularly important topic that has been glossed over in the preceding is a discussion of orientable manifolds (those which possess nowhere vanishing volume forms), their orientations, and the manner in which an orientation of a manifold with boundary induces an orientation on its boundary. One of many places where you can find a careful development of this material is in sections 6.5 and 7.2 of [1].

47.1.2. Theorem. Let ω be a 1-form on a connected open subset U of R2. Then ω is exact on U if and only if ∫_C ω = 0 for every simple closed curve C in U.

For a proof of this result see [10], chapter 2, proposition 1.


47.2. Exercises (Due: Wed. May 20)

47.2.1. Exercise. Let ω = −y dx/(x² + y²) + x dy/(x² + y²). Show that on the region R² \ {(0, 0)} the 1-form ω is closed but not exact.

47.2.2. Exercise. What classical theorem do we get (for smooth functions) from the version of Stokes' theorem given by equation (47.3) in the special case that ω is a 0-form and M is a 1-manifold (with boundary) in R?

47.2.3. Exercise. What classical theorem do we get (for smooth functions) from the version of Stokes' theorem given by equation (47.3) in the special case that ω is a 0-form and M is a 1-manifold (with boundary) in R³?

47.2.4. Exercise. What classical theorem do we get (for smooth functions) from the version of Stokes' theorem given by equation (47.3) in the special case that ω is a 1-form and M is a 2-manifold (with boundary) in R²?

47.2.5. Exercise. Use exercise 47.2.4 to compute ∫S1 (2x³ − y³) dx + (x³ + y³) dy (where S¹ is the unit circle oriented counterclockwise).

47.2.6. Exercise. What classical theorem do we get (for smooth functions) from the version of Stokes' theorem given by equation (47.3) in the special case that ω is a 1-form and M is a 2-manifold (with boundary) in R³?

47.2.7. Exercise. What classical theorem do we get (for smooth functions) from the version of Stokes' theorem given by equation (47.3) in the special case that ω is a 2-form and M is a 3-manifold (with boundary) in R³?

47.2.8. Exercise. Your good friend Fred R. Dimm calls you on his cell phone seeking help with a math problem. He says that he wants to evaluate the integral of the normal component of the vector field on R³ whose coordinate functions are x, y, and z (in that order) over the surface of a cube whose edges have length 4. Fred is concerned that he's not sure of the coordinates of the vertices of the cube. How would you explain to Fred (over the phone) that it doesn't matter where the cube is located and that it is entirely obvious that the value of the surface integral he is interested in is 192?


CHAPTER 48

Quadratic Forms

48.1. Background

Topics: quadratic forms.

48.1.1. Definition. Let V be a finite dimensional real vector space. A function Q : V → R is a quadratic form if

(i) Q(v) = Q(−v) for all v, and
(ii) the map B : V × V → R : (u, v) 7→ Q(u + v) − Q(u) − Q(v) is a bilinear form.

In this case B is the bilinear form associated with the quadratic form Q. It is obviously symmetric. Note: In many texts B(u, v) is defined to be (1/2)[Q(u + v) − Q(u) − Q(v)].
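The factor of 2 behind the Note can be seen concretely. In the sketch below (illustrative only; the symmetric matrix A is an arbitrary choice, not from the text), Q(v) = v·Av satisfies condition (i), and the associated form B of definition 48.1.1 works out to 2 u·Av, twice the "usual" symmetric form.

```python
# Illustrating definition 48.1.1 numerically (a sketch, not from the text).
A = [[2, 1], [1, 3]]                      # an arbitrary symmetric 2x2 matrix

def Q(v):
    # Q(v) = v·Av
    return sum(v[i] * A[i][j] * v[j] for i in range(2) for j in range(2))

def B(u, v):
    # the bilinear form associated with Q, as in definition 48.1.1
    return Q([u[0] + v[0], u[1] + v[1]]) - Q(u) - Q(v)

u, v = [1, 2], [3, -1]
two_uAv = 2 * sum(u[i] * A[i][j] * v[j] for i in range(2) for j in range(2))
assert Q(u) == Q([-x for x in u])         # condition (i): Q(v) = Q(-v)
assert B(u, v) == two_uAv                 # B(u, v) = 2 u·Av
```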


48.2. Exercises (Due: Fri. May 22)

48.2.1. Exercise. Let B be a symmetric bilinear form on a real finite dimensional vector space V. Define Q : V → R by Q(v) = B(v, v). Show that Q is a quadratic form on V.

48.2.2. Exercise. Let Q be a quadratic form on a real finite dimensional vector space V. Prove that

    Q(u + v + w) − Q(u + v) − Q(u + w) − Q(v + w) + Q(u) + Q(v) + Q(w) = 0

for all u, v, w ∈ V.

48.2.3. Exercise. Let Q be a quadratic form on a real finite dimensional vector space V. Prove that Q(αv) = α²Q(v) for all α ∈ R and v ∈ V. Hint. First use exercise 48.2.2 to show that Q(2v) = 4Q(v) for every v ∈ V.


CHAPTER 49

Definition of Clifford Algebra

49.1. Background

Topics: an algebra universal over a vector space, Clifford map; Clifford algebra.

49.1.1. Definition. Let V be a vector space. A pair (U, ι), where U is a unital algebra and ι : V → U is a linear map, is universal over V if for every unital algebra A and every linear map f : V → A there exists a unique unital algebra homomorphism f̃ : U → A such that f̃ ◦ ι = f.

49.1.2. Definition. Let V be a real finite dimensional vector space with a quadratic form Q and A be a real unital algebra. A map f : V → A is a Clifford map if

(i) f is linear, and
(ii) (f(v))² = Q(v)1A for every v ∈ V.

49.1.3. Definition. Let V be a real finite dimensional vector space with a quadratic form Q. The Clifford algebra over V is a real unital algebra Cl(V,Q), together with a Clifford map j : V → Cl(V,Q), which satisfies the following universal condition: for every real unital algebra A and every Clifford map f : V → A, there exists a unique unital algebra homomorphism f̃ : Cl(V,Q) → A such that f̃ ◦ j = f.
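Assuming the quadratic form has been diagonalized by a basis orthogonal for its associated bilinear form (cf. theorem 50.1.3), so that distinct generators anticommute (cf. exercise 49.2.3), the multiplication forced by condition (ii) of definition 49.1.2 can be sketched on basis "blades". This is an illustration, not a construction required by the text.

```python
def blade_mul(a, b, q):
    """Product of basis blades a, b (strictly increasing tuples of
    generator indices) in Cl(V,Q) for a diagonalized form, using the
    relations e_i e_j = -e_j e_i for i != j and e_i e_i = q[i]·1,
    where q[i] = Q(e_i).  Returns (coefficient, blade)."""
    coeff, result = 1, list(a)
    for g in b:
        pos = len(result)
        while pos > 0 and result[pos - 1] > g:
            pos -= 1
            coeff = -coeff          # each adjacent swap flips the sign
        if pos > 0 and result[pos - 1] == g:
            coeff *= q[g]           # contract: e_g e_g = Q(e_g) 1
            del result[pos - 1]
        else:
            result.insert(pos, g)
    return coeff, tuple(result)

# With one generator e and Q(e) = -1 we recover e·e = -1 (compare
# exercise 51.2.4, where Cl(0,1) is identified with C).
assert blade_mul((0,), (0,), {0: -1}) == (-1, ())
# Generators with distinct indices anticommute: e0 e1 = -(e1 e0).
s1, b1 = blade_mul((0,), (1,), {0: 1, 1: 1})
s2, b2 = blade_mul((1,), (0,), {0: 1, 1: 1})
assert b1 == b2 and s1 == -s2
```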


49.2. Exercises (Due: Wed. May 27)

49.2.1. Exercise. Let V be a vector space. Prove that if U and U′ are unital algebras universal over V, then they are isomorphic.

49.2.2. Exercise. Let V be a vector space. Prove that there exists a unital algebra which is universal over V.

49.2.3. Exercise. Show that condition (ii) in definition 49.1.2 is equivalent to

(ii′) f(u)f(v) + f(v)f(u) = B(u, v)1A for all u, v ∈ V,

where B is the bilinear form associated with Q.

49.2.4. Exercise. Let V be a real finite dimensional vector space with a quadratic form Q. Prove that if the Clifford algebra Cl(V,Q) exists, then it is unique up to isomorphism.

49.2.5. Exercise. Let V be a real finite dimensional vector space with a quadratic form Q. Prove that the Clifford algebra Cl(V,Q) exists. Hint. Recall the definition of the tensor algebra T(V) in 36.1.2. Try T(V)/J where J is the ideal in T(V) generated by elements of the form v ⊗ v − Q(v)1T(V) where v ∈ V.


CHAPTER 50

Orthogonality with Respect to Bilinear Forms

50.1. Background

Topics: orthogonality with respect to bilinear forms, the kernel of a bilinear form, nondegenerate bilinear forms.

50.1.1. Definition. Let B be a symmetric bilinear form on a real vector space V. Vectors v and w in V are orthogonal, in which case we write v ⊥ w, if B(v, w) = 0. The kernel of B is the set of all k ∈ V such that k ⊥ v for every v ∈ V. The bilinear form is nondegenerate if its kernel is {0}.

50.1.2. Definition. Let V be a finite dimensional real vector space and B be a symmetric bilinear form on V. An ordered basis E = (e1, . . . , en) for V is B-orthonormal if

(a) B(ei, ej) = 0 whenever i ≠ j, and
(b) for each i ∈ Nn the number B(ei, ei) is −1 or +1 or 0.

50.1.3. Theorem. If V is a finite dimensional real vector space and B is a symmetric bilinear form on V, then V has a B-orthonormal basis.

Proof. See [5], chapter 1, theorem 7.6.

50.1.4. Convention. Let V be a finite dimensional real vector space, let B be a symmetric bilinear form on V, and let Q be the quadratic form associated with B. Let us agree that whenever E = (e1, . . . , en) is an ordered B-orthonormal basis for V, we order the basis elements in such a way that for some positive integers p and q

    Q(ei) =  1,  if 1 ≤ i ≤ p;
    Q(ei) = −1,  if p + 1 ≤ i ≤ p + q;
    Q(ei) =  0,  if p + q + 1 ≤ i ≤ n.

50.1.5. Theorem. Let V be a finite dimensional real vector space, let B be a symmetric bilinear form on V, and let Q be the quadratic form associated with B. Then there exist p, q ∈ Z+ such that if E = (e1, . . . , en) is a B-orthonormal basis for V and v = ∑ vk ek, then

    Q(v) = ∑k=1,...,p vk² − ∑k=p+1,...,p+q vk².

Proof. See [5], chapter 1, theorem 7.11.
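By Sylvester's law of inertia the integers p and q can be read off from any symmetric matrix representing B: they count its positive and negative eigenvalues. A small numerical sketch (the form B(u, v) = u1v2 + u2v1 and the basis below are arbitrary choices, not from the text):

```python
import math

# B(u, v) = u·Av for A = [[0, 1], [1, 0]], i.e. B(u, v) = u1 v2 + u2 v1.
a, b, d = 0.0, 1.0, 0.0                   # A = [[a, b], [b, d]]
disc = math.sqrt((a - d) ** 2 + 4 * b * b)
eigs = [(a + d + disc) / 2, (a + d - disc) / 2]
p = sum(1 for lam in eigs if lam > 1e-12)
q = sum(1 for lam in eigs if lam < -1e-12)
assert (p, q) == (1, 1)                   # signature of this B

# A B-orthonormal basis realizing convention 50.1.4:
# e1 = (1,1)/sqrt(2) and e2 = (1,-1)/sqrt(2) give B(e1,e1) = 1,
# B(e2,e2) = -1, and B(e1,e2) = 0.
def B(u, v):
    return u[0] * v[1] + u[1] * v[0]

r = 1 / math.sqrt(2)
assert abs(B((r, r), (r, r)) - 1) < 1e-12
assert abs(B((r, -r), (r, -r)) + 1) < 1e-12
assert abs(B((r, r), (r, -r))) < 1e-12
```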


50.2. Exercises (Due: Fri. May 29)

50.2.1. Exercise. One often sees the claim that “the classification of Clifford algebras amounts to classifying vector spaces with quadratic forms”. Explain precisely what is meant by this assertion.

50.2.2. Exercise. Let B be a symmetric bilinear form on a real finite dimensional vector space V. Suppose that V is an orthogonal direct sum V1 ⊕ · · · ⊕ Vn of subspaces. Then B is nondegenerate if and only if the restriction of B to Vk is nondegenerate for each k. In fact, if Vk♮ is the kernel of the restriction of B to Vk, then the kernel of B is the orthogonal direct sum V1♮ ⊕ · · · ⊕ Vn♮.

50.2.3. Exercise. Let B be a nondegenerate symmetric bilinear form on a real finite dimensional vector space V. If W is a subspace of V, then the restriction of B to W is nondegenerate if and only if its restriction to W⊥ is nondegenerate. Equivalently, B is nondegenerate on W if and only if V = W + W⊥.

50.2.4. Exercise. Let Q be a quadratic form on a real finite dimensional vector space V and let e1, . . . , en be a basis for V which is orthogonal with respect to the bilinear form B associated with Q. Show that if Q(ek) is nonzero for 1 ≤ k ≤ p and Q(ek) = 0 for p < k ≤ n, then the kernel of B is the span of ep+1, . . . , en.

50.2.5. Exercise. Let Q be a quadratic form on a real finite dimensional vector space V. Show that if dim V = n, then dim Cl(V,Q) = 2ⁿ.

50.2.6. Exercise. Let V be a real finite dimensional vector space and let Q be the quadratic form which is identically zero on V. Identify Cl(V,Q).


CHAPTER 51

Examples of Clifford Algebras

51.1. Background

Topics: No additional topics.

51.1.1. Notation. If (V,Q) is a finite dimensional real vector space with a nondegenerate quadratic form Q, we often denote the Clifford algebra Cl(V,Q) by Cl(p, q) where p and q are as in theorem 50.1.5.


51.2. Exercises (Due: Mon. June 1)

51.2.1. Exercise. Let f : (V,Q) → (W,R) be a linear map between finite dimensional real vector spaces with quadratic forms. If R(f(v)) = Q(v) for every v ∈ V we say that f is an isometry.

(a) Show that if f is such a linear isometry, then there exists a unique unital algebra homomorphism Cl(f) : Cl(V,Q) → Cl(W,R) such that Cl(f)(v) = f(v) for every v ∈ V.

(b) Show that Cl is a covariant functor from the category of vector spaces with quadratic forms and linear isometries to the category of Clifford algebras and unital algebra homomorphisms.

(c) Show that if the linear isometry f is an isomorphism, then Cl(f) is an algebra isomorphism.

51.2.2. Exercise. Let V be a real finite dimensional vector space, Q be a quadratic form on V, and A = Cl(V,Q) be the associated Clifford algebra.

(a) Show that the map f : V → V : v 7→ −v is a linear isometry.
(b) Let ω = Cl(f). Show that ω² = id.
(c) Let A0 = {a ∈ A : ω(a) = a} and A1 = {a ∈ A : ω(a) = −a}. Show that A = A0 ⊕ A1. Hint. If a ∈ A, let a0 = (1/2)(a + ω(a)).
(d) Show that AiAj ⊆ Ai+j where i, j ∈ {0, 1} and i + j is addition modulo 2. This says that a Clifford algebra is a Z2-graded (or Z/2Z-graded) algebra.

51.2.3. Exercise. Let V = R and Q(v) = v² for every v ∈ V. Show that the Clifford algebra Cl(1, 0) associated with (R, Q) is isomorphic to R ⊕ R. Hint. Consider the map u1 + ve 7→ (u − v, u + v).
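The hinted map can be tested numerically before proving anything. In Cl(1, 0) the relation e² = 1 gives (u1 + v1e)(u2 + v2e) = (u1u2 + v1v2) + (u1v2 + v1u2)e; the sketch below (illustrative only, with a few arbitrary sample points) checks that u1 + ve 7→ (u − v, u + v) carries this product to the coordinatewise product on R ⊕ R.

```python
def cl_mul(x, y):
    # multiplication in Cl(1,0), representing u + v e as the pair (u, v)
    (u1, v1), (u2, v2) = x, y
    return (u1 * u2 + v1 * v2, u1 * v2 + v1 * u2)

def phi(x):
    # the hinted map u + v e -> (u - v, u + v)
    u, v = x
    return (u - v, u + v)

for x in [(1, 0), (0, 1), (2, 3), (-1, 4)]:
    for y in [(1, 0), (5, -2), (0, 1)]:
        px, py = phi(x), phi(y)
        # phi turns the Clifford product into the coordinatewise product
        assert phi(cl_mul(x, y)) == (px[0] * py[0], px[1] * py[1])
```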

51.2.4. Exercise. Let V = R and Q(v) = −v² for every v ∈ V. The Clifford algebra Cl(V,Q) is often denoted by Cl(0, 1).

(a) Show that Cl(0, 1) ≅ C.
(b) Show that Cl(0, 1) can be represented as a subalgebra of M2(R). Hint. AOLT, page 50, Example 2.12.

51.2.5. Exercise. Let V = R² and Q(v) = v1² + v2² for every v ∈ V. The Clifford algebra Cl(V,Q) is often denoted by Cl(2, 0). Show that Cl(2, 0) ≅ M2(R). Hint. Let

    ε1 := [ 1   0 ]        ε2 := [ 0   1 ]
          [ 0  −1 ],             [ 1   0 ],

and ε12 := ε1ε2.
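The relations behind the hint are easy to verify directly. The following sketch (illustrative only) checks that ε1 and ε2 square to the identity and anticommute, so that v 7→ v1ε1 + v2ε2 is a Clifford map of (R², Q) into M2(R) for this Q.

```python
def mm(A, B):
    # product of 2x2 matrices given as nested lists
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
e1 = [[1, 0], [0, -1]]
e2 = [[0, 1], [1, 0]]

# the Clifford relations for Q(v) = v1^2 + v2^2:
assert mm(e1, e1) == I and mm(e2, e2) == I          # ε1² = ε2² = I
e12, e21 = mm(e1, e2), mm(e2, e1)
assert all(e12[i][j] == -e21[i][j]                  # ε1ε2 = -ε2ε1
           for i in range(2) for j in range(2))
```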

51.2.6. Exercise. Let V = R² and Q(v) = −v1² − v2² for every v ∈ V. The Clifford algebra Cl(V,Q) is often denoted by Cl(0, 2).

(a) Show that Cl(0, 2) ≅ H.
(b) Show that Cl(0, 2) can be represented as a subalgebra of M4(R). Hint. AOLT, pages 50–51, Example 2.13.

51.2.7. Exercise. Take a look at the web page written by Pertti Lounesto:

http://users.tkk.fi/∼ppuska/mirror/Lounesto/counterexamples.htm

51.2.8. Exercise. Show that the Clifford algebra Cl(3, 1) (Minkowski space-time algebra) is isomorphic to M4(R). Hint. Exercise 51.2.7.


Bibliography

1. Ralph Abraham, Jerrold E. Marsden, and Tudor Ratiu, Manifolds, Tensor Analysis, and Applications, Addison-Wesley, Reading, MA, 1983. 140, 143, 144
2. Robert A. Beezer, A First Course in Linear Algebra, 2004, http://linear.ups.edu/download/fcla-electric-2.00.pdf. vii
3. Richard L. Bishop and Richard J. Crittenden, Geometry of Manifolds, Academic Press, New York, 1964. 111, 115, 116, 119
4. Przemyslaw Bogacki, Linear Algebra Toolkit, 2005, http://www.math.odu.edu/∼bogacki/lat. vii
5. William C. Brown, A Second Course in Linear Algebra, John Wiley, New York, 1988. vii, 151
6. Stewart S. Cairns, A simple triangulation method for smooth manifolds, Bull. Amer. Math. Soc. 67 (1961), 389–390. 135
7. P. M. Cohn, Basic Algebra: Groups, Rings and Fields, Springer, London, 2003. 7
8. Charles W. Curtis, Linear Algebra: An Introductory Approach, Springer, New York, 1984. vii
9. Paul Dawkins, Linear Algebra, 2007, http://tutorial.math.lamar.edu/pdf/LinAlg/LinAlg Complete.pdf. vii
10. Manfredo P. do Carmo, Differential Forms and Applications, Springer-Verlag, Berlin, 1994. 144
11. John M. Erdman, A Companion to Real Analysis, 2007, http://www.mth.pdx.edu/∼erdman/CTRA/CRAlicensepage.html. 33
12. John M. Erdman, Operator Algebras: K-Theory of C∗-Algebras in Context, 2008, http://www.mth.pdx.edu/∼erdman/614/operator algebras pdf.pdf. 69, 77
13. Paul R. Halmos, Finite-Dimensional Vector Spaces, D. Van Nostrand, Princeton, 1958. vii
14. Jim Hefferon, Linear Algebra, 2006, http://joshua.smcvt.edu/linearalgebra. vii
15. Kenneth Hoffman and Ray Kunze, Linear Algebra, second ed., Prentice Hall, Englewood Cliffs, N.J., 1971. vii
16. S. T. Hu, Differentiable Manifolds, Holt, Rinehart, and Winston, New York, 1969. 135
17. Thomas W. Hungerford, Algebra, Springer-Verlag, New York, 1974. 7
18. Saunders Mac Lane and Garrett Birkhoff, Algebra, Macmillan, New York, 1967. 7
19. Serge Lang, Fundamentals of Differential Geometry, Springer-Verlag, New York, 1999. 143
20. John M. Lee, Introduction to Smooth Manifolds, Springer, New York, 2003. 119, 135
21. Ib Madsen and Jørgen Tornehave, From Calculus to Cohomology: de Rham cohomology and characteristic classes, Cambridge University Press, Cambridge, 1997. 143
22. Theodore W. Palmer, Banach Algebras and the General Theory of ∗-Algebras I–II, Cambridge University Press, Cambridge, 1994/2001. 7
23. Steven Roman, Advanced Linear Algebra, second ed., Springer-Verlag, New York, 2005. vii, 101
24. I. M. Singer and J. A. Thorpe, Lecture Notes on Elementary Topology and Geometry, Springer-Verlag, New York, 1967. 135
25. Gerard Walschap, Metric Structures in Differential Geometry, Springer-Verlag, New York, 2004. 116, 119
26. Frank W. Warner, Foundations of Differentiable Manifolds and Lie Groups, Springer-Verlag, New York, 1983. 135, 143
27. Eric W. Weisstein, MathWorld, A Wolfram Web Resource, http://mathworld.wolfram.com. vii
28. Wikipedia, Wikipedia, The Free Encyclopedia, http://en.wikipedia.org. vii


Index

α−1 (inverse of a morphism α), 33
∧(V) (Grassmann or exterior algebra), 107
∧k(V) (homogeneous elements of degree k), 110
∂〈s〉 (boundary of a simplex), 132
∂p (boundary map), 133
∗-algebra (algebra with involution), 81
∗-homomorphism (star homomorphism), 81
〈x, v〉 (alternative notation for v∗(x)), 28
(f, g) (function into a product), 37
M ⊕ N (direct sum), 37
S ⊗ T (tensor products of linear maps), 105
U ⊗ V (tensor product), 101
V/M (quotient of V by M), 41
ω ∧ µ (wedge product), 107
m ⊕ n (direct sum of vectors), 37
⊕∞k=0 Vk (direct sum), 111
⊕nk=1 Mk (direct sum), 37
A + B (sum of sets of vectors), 9
f × g (product function), 3
U V (U is a subspace of V), 9
[a, b] (closed segment in a vector space), 131
[x] (equivalence class containing x), 41
|K| (polyhedron of a complex), 132
A[x] (algebra of polynomials), 59
A[[x]] (algebra of formal power series), 59
F[x] (polynomial algebra with coefficients in F), 59
1 (standard one-element set), 41
1V, idV, IV (the identity map on V), 21
T←(U) (inverse image of U under T), 21
T→(U) (direct image of U under T), 21
(s) = (v0, . . . , vp) (open p-simplex), 131
[s] = [v0, . . . , vp] (closed p-simplex), 131
〈s〉 = 〈v0, . . . , vp〉 (closed p-simplex), 132
x̂ (= Γ(x)), 32
V ≅ W (isomorphism of vector spaces), 21
F⊥ (pre-annihilator of F), 31
xs (alternative notation for x(s)), 27
M⊥ (annihilator of M), 31
T∗ (adjoint of T), 81
T∗ (vector space adjoint of T), 34
Tt (transpose of T), 81
f←(B) (preimage of B under f), 34
f→(A) (image of A under f), 34
v∗, 28
(V∗, d), V∗ (cochain complex), 129
∂M (boundary of a manifold), 140
∂Hn (boundary of a half-space), 140
int M (interior of a manifold with boundary), 140
int Hn (interior of a half-space), 140
(the forgetful functor), 33

Abelian group, ix
adjoint, 81
  vector space, 34
affine subspace, 131
algebra
  Z+-graded, 109
  Clifford, 149
  de Rham cohomology, 128
  exterior, 107
  Grassmann, 107
  quotient, 52
  tensor, 111
alternating, 94
Altk(V) (set of alternating k-linear maps), 111
annihilating polynomial, 60
annihilator, 8, 31
antisymmetric, 17
axiom of choice, 18
basis, 17
  dual, 29
  ordered, 113
  right-handed, 113
βp (Betti number), 133
Betti number, 127, 133
bilinear, 94
  form
    associated with a quadratic form, 147
    nondegenerate, 151
B-orthonormal, 151
bound
  lower, 17
  upper, 17
boundaries, 133
boundary
  maps, 133
  of a half-space, 140
  of a manifold, 140
  of a simplex, 132
bounded
  real valued function, 9
Bp(M) (space of de Rham p-coboundaries), 127
Bp(K) (space of simplicial p-coboundaries), 135
Bp(K) (space of simplicial p-boundaries), 133


calculuspolynomial functional, 60

cancellation property, 7category, 33

concrete, 33chain, 18chains, 132characteristic

Euler, 133polynomial, 71

χ(K) (Euler characteristic), 133choice

axiom of, 18C∞m (smooth functions at m), 115Clifford

algebra, 149map, 149

closeddifferential form, 125face of a simplex, 132segment, 131simplex, 131

Cl(V, Q) (Clifford algebra), 149coboundary

p-, 129de Rham, 127operator, 129simplicial, 135

cochain, 135p-, 129complex, 129map, 129

cocyclep-, 129de Rham, 127simplicial, 135

codimension, 46coefficient

leading, 60cohomology

de Rham, 128group, 129

de Rham, 127simplicial, 135

combinationconvex, 131formal linear, 100

commutativediagram, 3ring, ix

comparable, 18complementary subspace, 37complete

order, 34complex

cochain, 129simplicial, 132

dimension of a, 132components, 37composition

of morphisms, 33concrete category, 33constant

locally, 128polynomial, 60

continuousdifferentiability, 22

continuously differentiable, 44
contravariant, 33
conventions
  all categories are concrete, 33
  all differential forms are smooth, 116
  all manifolds are smooth and oriented, 115
  all vector fields are smooth, 115
  all vector spaces are real, finite dimensional, and oriented, 115
  on Cartesian products, 3
  on ordering B-orthonormal bases, 151
  on the standard basis for P([a, b]), 25
  write fω for f ∧ ω when f is a 0-form, 121

convexcombination, 131hull, 131independent, 131set, 131

convolution, 59coordinate

projections, 38coproduct

in a category, 41uniqueness of, 43

cotangentspace, 115

covariant, 33Cp(K) (p -chains), 132Cp(K) (p -cochains), 135cycle, 93

length of a, 94cycles, 133

d (exterior derivative), 119de Rham

coboundary, 127cocycle, 127cohomology group, 127

de Rham cohomology algebra, 128decomposable

element of a Grassmann algebra, 109tensor, 102

degreeof a decomposable element, 109of a homogeneous element, 109of a polynomial, 59

δ (diagonal mapping), 4∆fa, 93derivative

exterior, 119partial, 116total, 93

determinant, 98function, 97


dfa (differential of f at a), 93diagonal

mapping, 4matrix, 69

diag(α1, . . . , αn) (diagonal matrix), 69diagonalizable, 69

plus nilpotent decomposition, 73unitarily, 89

diagramcommutative, 3

differentiable, 93continuously, 22, 44

differential, 93p-form, 115

smooth, 115form

closed, 125exact, 125

of a function on a manifold, 116differential form

associated with a vector field, 121dimension

of a simplex, 131of a simplicial complex, 132of a vector space, 19

dim K (dimension of a simplicial complex K), 132dim V (dimension of a vector space V ), 19direct

image, 21sum

as a product, 41external, 37, 111internal, 37projections associated with, 49

disjointpermutations, 94

divides, 64division

ring, ixdivisor

greatest common, 65of zero, 7

dualbasis, 29

duality functorfor vector spaces, 35

Dxk (m) (partial derivative on a manifold), 116

eigenspacesgeneralized, 73

elementary tensor, 102E

NM(projection along N onto M), 49

εS , ε (function from S into 1), 4equivalent

orderings of vertices, 132unitarily, 89

essentialuniqueness, 43

Euler characteristic, 133even

permutation, 94

even function, 40exact

differential form, 125exact sequence

of cochain complexes, 129of vector spaces, 42short, 42

exterioralgebra, 107derivative, 119product, 107

externaldirect sum, 37, 111

F(S) (real valued functions on S), 4F(S, T ) (functions from S to T ), 4face, 132factorization, 64field, ix

vector, 115, 121final projection, 89finite

support, 27first isomorphism theorem, 43forgetful functor, 34form

closed , 125differential p-, 115exact, 125multilinear, 94quadratic, 147

formal linear combination, 100formal power series, 59free

object, 99vector space, 99

functionbounded, 9diagonal, 4integrable, 9interchange, 4product, 3switching, 4

functionalcalculus

polynomial, 60multilinear, 94

functorcontravariant, 33covariant, 33forgetful, 34power set, 35vector space duality, 35

fundamental quotient theorem, 43

Γ (the map from V to V ∗∗ taking x to bx), 32generalized eigenspaces, 73graded algebra, 109Grassmann algebra, 107greatest common divisor, 65group, ix


Abelian, ixcohomology, 129de Rham cohomology, 127simplicial cohomology, 135simplicial homology, 133symmetric, 94

H(A) (self-adjoint elements of A), 81half-space

boundary of a, 140interior of a, 140

half-space, upper, 140Hamel basis, 17Hermitian, 81Hn (upper half-space), 140Hodge star operator, 114Hom(G, H), Hom(G) (group homomorphisms), 1homogeneous

elements of a graded algebra, 109tensor, 102

homology, 133homomorphism

of Abelian groups, 1ring, 7unital ring, 7

Hp(M) (the pth de Rham cohomology group), 127

Hp(K) (the pth simplicial cohomology group), 135

Hp(K) (the pth simplicial homology group), 133hull, convex, 131

idempotent, 49, 56idS (identity function on S), 3identity

function on a set, 3resolution of the, 69

image, 21, 34direct, 21inverse, 21

independentconvex, 131

indeterminant, 59initial projection, 89inner product

spacespectral theorem for, 89

integrablereal valued function, 9

integralof a p -form over a p -chain, 139

interchange operation, 4interior

of a half-space, 140of a manifold with boundary, 140

internaldirect sum, 37

interpolationLagrange, 64

inverseimage, 21left, 33of a morphism, 33

right, 33invertible, 33

element of an algebra, 52left, 21linear map, 21right, 21

involution, 81irreducible, 63isometry, 154

partial, 89isomorphism, 21

in a category, 33

kernel, 21of a bilinear form, 151

ker T (the kernel of T ), 21

l(S, F), l(S) (functions on S), 27l2(R), l2(C) (square summable sequences), 78Lagrange interpolation formula, 64largest, 17lc(S, F), lc(S) (functions on S with finite support), 27lc(S, A) (functions from S into A with finite

support), 59leading coefficient, 60left

-handed basis, 113inverse

of a morphism, 33invertible linear map, 21

lengthof a cycle, 94of a vector, 77

linear, 21combination

formal, 100ordering, 18transformation

invertible, 21transformations, 21

tensor products of, 105L(V, W ), L(V ) (family of linear functions), 21Ln(V1, . . . , Vn) (family of n-linear functions), 94little-oh functions, 93locally constant, 128lower bound, 17l(S, A) (functions from S into A), 59

manifoldboundary of a, 140smoothly triangulated, 135with boundary, 140

interior of a, 140matrix

diagonal, 69similarity, 69symmetric, 19

maximal, 17minimal, 17

polynomial, 60existence of, 61


Mn(A) (n× n matrices of members of A), 97monic polynomial, 60monoid, ixmorphism

map, 33Mor(S, T ) (morphisms from S to T ), 33morphisms, 33

composition of, 33m

T(minimal polynomial for T ), 60

multilinear, 94form, 94functional, 94

N (A) (normal elements of A), 81n-linear function, 94nondegenerate bilinear form, 151norm, 77normal, 81nullity, 21nullspace, 21

o (the family of little-oh functions), 93
object
  map, 33
objects, 33
odd
  permutation, 94
odd function, 40
Ωp(U) (space of p-forms on U), 115
ΩpF, 127
Ω(U) (the algebra of differential forms on U), 116
open

face of a simplex, 132simplex, 131

operator, 21diagonalizable, 69projection, 49similarity, 69unilateral shift, 56

opposite orientation, 113order

complete, 34ordered

basis, 113by inclusion, 17linearly, 18partially, 17

orientationof a simplex, 132of a vector space, 113

orientedsimplex, 132vector space, 113

orthogonal
  direct sum, projections associated with, 89
  projection, 85
    associated with an orthogonal direct sum decomposition, 89
  projections, 85
  resolution of the identity, 89
  with respect to a bilinear form, 151
orthonormal, B-, 151

partialderivative, 116isometry, 89ordering, 17

p -boundaries, 133p -chains, 132p -coboundary, 129

simplicial, 135p -cochain, 129, 135p -cocycle, 129

simplicial, 135p -cycles, 133permutation, 93

cyclic, 93even, 94odd, 94sign of a, 94

p-form, 115plane

of a simplex, 131Pn([a, b]) (polynomials of degree strictly less than n),

25polarization identity, 77polyhedron, 132polynomial, 59

annihilating, 60characteristic, 71constant, 60degree of a, 59function, 60functional calculus, 60irreducible, 63minimal, 60

existence of, 61monic, 60prime, 63reducible, 63

power seriesformal, 59

power set, 34functor, 35

(p, q)-shuffle, 111pre-annihilator, 31preimage, 34primary decomposition theorem, 73prime

polynomial, 63relatively, 65

productexterior, 107in a category, 41tensor, 101uniqueness of, 43wedge, 107

projection
  along one subspace onto another, 49
  associated with direct sum decomposition, 49
  final, 89


  in a ∗-algebra, 85
  initial, 89
  operator, 49
  orthogonal, 85

projectionscoordinate, 38

P(S) (power set of S), 34p -simplex, 131pullback of differential forms, 135

quadratic form, 147associated with a bilinear form, 147

quotientalgebra, 52map, 41, 45, 46object, 45space, 41

R∞ (sequences of real numbers), 11range, 21ran T (the range of T ), 21rank, 21rank-plus-nullity theorem, 47reducible, 63reducing subspaces, 67reflexive, 17relation, 17relatively prime, 65resolution of the identity

in vector spaces, 69orthogonal, 89

retraction, 21reverse orientation, 113Riesz-Frechet theorem

for vector spaces, 29right

-handed basis, 113inverse

of a morphism, 33invertible linear map, 21

ring, ixcommutative, ixdivision, ixhomomorphism, 7

unital, 7with identity, ix

r -skeleton, 132

section, 21segment, closed, 131self-adjoint, 81semigroup, ixsequence

exact, 42series

formal power, 59shift operator, 56short exact sequence, 42, 129shuffle, 111σ (interchange operation), 4sign

of a permutation, 94similar, 69simplex

closed, 131dimension of a, 131face of a, 132open, 131oriented, 132plane of a, 131vertex of a, 132

simplicialcoboundary, 135cocycle, 135cohomology group, 135complex, 132

dimension of a, 132simplicial homology group, 133skeleton of a complex, 132skew-symmetric, 94smallest, 17smooth

differential form, 115function, 115submanifold, 135triangulation, 135vector field, 115

span, 13span(A) (the span of A), 13spectral theorem

for complex inner product spaces, 89for vector spaces, 69

spectrum, 55S(p, q) (shuffle permutations), 111square summable sequence, 78star operator, 114Stokes’ theorem, 143submanifold, 135subprojection, 89subspace, 9

affine, 131complementary, 37reducing, 67

sumdirect, 37, 111of sets of vectors, 9

supp F (the support of f), 27support, 27switching operation, 4symmetric

group, 94matrix, 19skew-, 94

tangent, 93at a point, 115space, 115

tensoralgebra, 111decomposable, 102elementary, 102homogeneous, 102


product, 101of linear maps, 105

T ∗m (cotangent space at m), 115Tm (tangent space at m), 115total

derivative, 93transitive, 17transpose, 81, 98transposition, 94triangulated manifold, 135trilinear, 94T (V ) (the tensor algebra of V ), 111

U(A) (unitary elements of A), 81unilateral shift operator, 56unique factorization theorem, 64uniqueness

essential, 43of products, coproducts, 43

unit vector, 77unital

ring, ixring homomorphism, 7

unitarilydiagonalizable, 89equivalent, 89

unitary, 81unitization, 51universal, 149upper

bound, 17half-space, 140

vectorfield, 115, 121

associated with a differential form, 121smooth, 115

space, 8adjoint, 34dimension of a, 19free, 99orientation of a, 113oriented, 113spectral theorem for, 69

unit, 77vertex

of a simplex, 132vol (volume element or form), 114, 116volume

element, 114form, 116

wedge product, 107

zero divisor, 7
Z+-graded algebra, 109
Zorn's lemma, 18
Zp(M) (space of de Rham p-cocycles), 127
Zp(K) (space of simplicial p-cocycles), 135
Zp(K) (space of simplicial p-cycles), 133