
Chapter 5

Finite-Dimensional Vector Spaces

5.1 Vectors and Linear Transformations

5.1.1 Vector Spaces

• A vector space consists of a set E, whose elements are called vectors, and a field F (such as R or C), whose elements are called scalars. There are two operations on a vector space:

1. Vector addition, + : E × E → E, that assigns to two vectors u, v ∈ E another vector u + v, and

2. Multiplication by scalars, · : F × E → E, that assigns to a vector v ∈ E and a scalar a ∈ F a new vector av ∈ E.

The vector addition is an associative commutative operation with an additive identity. It satisfies the following conditions:

1. u + v = v + u, ∀u, v ∈ E

2. (u + v) + w = u + (v + w), ∀u, v, w ∈ E

3. There is a vector 0 ∈ E, called the zero vector, such that for any v ∈ E there holds v + 0 = v.

4. For any vector v ∈ E, there is a vector (−v) ∈ E, called the opposite of v, such that v + (−v) = 0.

The multiplication by scalars satisfies the following conditions:


1. a(bv) = (ab)v, ∀v ∈ E, ∀a, b ∈ F,

2. (a + b)v = av + bv, ∀v ∈ E, ∀a, b ∈ F,

3. a(u + v) = au + av, ∀u, v ∈ E, ∀a ∈ F,

4. 1v = v, ∀v ∈ E.

• The zero vector is unique.

• For any u, v ∈ E there is a unique vector denoted by w = v − u, called thedifference of v and u, such that u + w = v.

• For any v ∈ E, 0v = 0 and (−1)v = −v .

• Let E be a real vector space and A = {e1, . . . , ek} be a finite collection of vectors from E. A linear combination of these vectors is a vector

a1e1 + · · · + akek ,

where a1, . . . , ak are scalars.

• A finite collection of vectors A = {e1, . . . , ek} is linearly independent if

a1e1 + · · · + akek = 0

implies a1 = · · · = ak = 0.

• A collection A of vectors is linearly dependent if it is not linearly independent.

• Two non-zero vectors u and v which are linearly dependent are also called parallel, denoted by u||v.

• A collection A of vectors is linearly independent if no vector of A is a linear combination of a finite number of vectors from A.

• Let A be a subset of a vector space E. The span of A, denoted by span A, is the subset of E consisting of all finite linear combinations of vectors from A, i.e.

span A = {v ∈ E | v = a1e1 + · · · + akek , ei ∈ A, ai ∈ R} .

We say that the subset span A is spanned by A.


• Theorem 5.1.1 The span of any subset of a vector space is a vector space.

• A vector subspace of a vector space E is a subset S ⊆ E of E which is itself a vector space.

• Theorem 5.1.2 A subset S of E is a vector subspace of E if and only if span S = S .

• The span of A is the smallest subspace of E containing A.

• A collection B of vectors of a vector space E is a basis of E if B is linearly independent and span B = E.

• A vector space E is finite-dimensional if it has a finite basis.

• Theorem 5.1.3 If the vector space E is finite-dimensional, then the number of vectors in any basis is the same.

• The dimension of a finite-dimensional real vector space E, denoted by dim E, is the number of vectors in a basis.

• Theorem 5.1.4 If {e1, . . . , en} is a basis in E, then for every vector v ∈ E there is a unique set of real numbers (vi) = (v1, . . . , vn) such that

v = ∑_{i=1}^n vi ei = v1e1 + · · · + vnen .

• The real numbers vi, i = 1, . . . , n, are called the components of the vector v with respect to the basis {ei}.

• It is customary to denote the components of vectors by superscripts, which should not be confused with powers of real numbers

v2 ≠ (v)2 = vv, . . . , vn ≠ (v)n .

Examples of Vector Subspaces

• Zero subspace {0}.

• Line with a tangent vector u:

S1 = span {u} = {v ∈ E | v = tu, t ∈ R} .


• Plane spanned by two nonparallel vectors u1 and u2

S2 = span {u1, u2} = {v ∈ E | v = tu1 + su2, t, s ∈ R} .

• More generally, a k-plane spanned by a linearly independent collection of k vectors {u1, . . . , uk}

Sk = span {u1, . . . , uk} = {v ∈ E | v = t1u1 + · · · + tkuk, t1, . . . , tk ∈ R} .

• An (n − 1)-plane in an n-dimensional vector space is called a hyperplane.

• Examples of vector spaces: P[t], Pn[t], Mm×n, Ck([a, b]), C∞([a, b])

5.1.2 Inner Product and Norm

• A complex vector space E is called an inner product space if there is a function (·, ·) : E × E → C, called the inner product, that assigns to every two vectors u and v a complex number (u, v) and satisfies the following conditions ∀u, v, w ∈ E, ∀a ∈ C:

1. (v, v) ≥ 0

2. (v, v) = 0 if and only if v = 0

3. (u, v) = (v, u)∗ , where ∗ denotes complex conjugation

4. (u + v,w) = (u,w) + (v,w)

5. (u, av) = a(u, v)

A finite-dimensional real inner product space is called a Euclidean space.

• Examples: On C([a, b]),

( f , g) = ∫_a^b f (t) g(t) w(t) dt ,

where w is a positive continuous real-valued function called the weight function.

• The Euclidean norm is a function || · || : E → R that assigns to every vector v ∈ E a real number ||v|| defined by

||v|| = √(v, v) .


• The norm of a vector is also called the length.

• A vector with unit norm is called a unit vector.

• The natural distance function (a metric) is defined by

d(u, v) = ||u − v||

• Example.

• Theorem 5.1.5 For any u, v ∈ E there holds

||u + v||2 = ||u||2 + 2Re(u, v) + ||v||2 .

• If the norm satisfies the parallelogram law

||u + v||2 + ||u − v||2 = 2||u||2 + 2||v||2

then the inner product can be defined by

(u, v) = (1/4) [ ||u + v||2 − ||u − v||2 − i||u + iv||2 + i||u − iv||2 ] .

• Theorem 5.1.6 A normed linear space is an inner product space if and only if the norm satisfies the parallelogram law.

• Theorem 5.1.7 Every finite-dimensional vector space can be turned into an inner product space.

• Theorem 5.1.8 Cauchy-Schwarz's Inequality. For any u, v ∈ E there holds

|(u, v)| ≤ ||u|| ||v|| .

The equality

|(u, v)| = ||u|| ||v||

holds if and only if u and v are parallel.

• Corollary 5.1.1 Triangle Inequality. For any u, v ∈ E there holds

||u + v|| ≤ ||u|| + ||v|| .


• In a real vector space the angle between two non-zero vectors u and v is defined by

cos θ = (u, v) / ( ||u|| ||v|| ) , 0 ≤ θ ≤ π .

Then the inner product can be written in the form

(u, v) = ||u|| ||v|| cos θ .

• Two non-zero vectors u, v ∈ E are orthogonal, denoted by u ⊥ v, if

(u, v) = 0.

• A basis {e1, . . . , en} is called orthonormal if each vector of the basis is a unit vector and any two distinct vectors are orthogonal to each other, that is,

(ei, ej) = 1 if i = j, and (ei, ej) = 0 if i ≠ j .

• Theorem 5.1.9 Every Euclidean space has an orthonormal basis.

• Let S ⊂ E be a nonempty subset of E. We say that x ∈ E is orthogonal to S , denoted by x ⊥ S , if x is orthogonal to every vector of S .

• The set

S⊥ = {x ∈ E | x ⊥ S }

of all vectors orthogonal to S is called the orthogonal complement of S .

• Theorem 5.1.10 The orthogonal complement of any subset of a Euclidean space is a vector subspace.

• Two subsets A and B of E are orthogonal, denoted by A ⊥ B, if every vector of A is orthogonal to every vector of B.

• Let S be a subspace of E and S⊥ be its orthogonal complement. If every element of E can be uniquely represented as the sum of an element of S and an element of S⊥, then E is the direct sum of S and S⊥, which is denoted by

E = S ⊕ S ⊥ .

• The union of a basis of S and a basis of S ⊥ gives a basis of E.


5.1.3 Exercises

1. Show that if λv = 0, then either v = 0 or λ = 0.

2. Prove that the span of a collection of vectors is a vector subspace.

3. Show that the Euclidean norm has the following properties

(a) ||v|| ≥ 0, ∀v ∈ E;

(b) ||v|| = 0 if and only if v = 0;

(c) ||av|| = |a| ||v||, ∀v ∈ E,∀a ∈ R.

4. Parallelogram Law. Show that for any u, v ∈ E

||u + v||2 + ||u − v||2 = 2 ( ||u||2 + ||v||2 ) .

5. Show that any orthogonal system in E is linearly independent.

6. Gram-Schmidt orthonormalization process. Let G = {u1, · · · , uk} be a linearly independent collection of vectors. Let O = {v1, · · · , vk} be a new collection of vectors defined recursively by

v1 = u1,

vj = uj − ∑_{i=1}^{j−1} vi (vi, uj) / ||vi||2 , 2 ≤ j ≤ k,

and the collection B = {e1, . . . , ek} be defined by

ei = vi / ||vi|| .

Show that: a) O is an orthogonal system and b) B is an orthonormal system. (A numerical sketch of this process is given after the exercise list.)

7. Pythagorean Theorem. Show that if u ⊥ v, then

||u + v||2 = ||u||2 + ||v||2 .

8. Let B = {e1, · · · , en} be an orthonormal basis in E. Show that for any vector v ∈ E

v = ∑_{i=1}^n ei (ei, v)

and

||v||2 = ∑_{i=1}^n (ei, v)2 .


9. Prove that the orthogonal complement of a subset S of E is a vector subspace of E.

10. Let S be a subspace in E. Prove that

a) E⊥ = {0}, b) {0}⊥ = E, c) (S ⊥)⊥ = S .

11. Show that the intersection of orthogonal subsets of a Euclidean space is eitherempty or consists of only the zero vector. That is, for two subsets A and B, ifA ⊥ B, then A ∩ B = {0} or ∅.
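• A minimal NumPy sketch of the Gram-Schmidt process from Exercise 6 (illustrative only; the function name gram_schmidt and the sample vectors are not part of the notes, and the standard inner product on R^n plays the role of (·, ·)):

    import numpy as np

    def gram_schmidt(vectors):
        """Orthonormalize a linearly independent list of vectors."""
        orthogonal = []
        for u in vectors:
            v = u.astype(float).copy()
            # subtract the projections onto the previously constructed v_i
            for w in orthogonal:
                v -= w * np.dot(w, u) / np.dot(w, w)
            orthogonal.append(v)
        # normalize to obtain an orthonormal system
        return [v / np.linalg.norm(v) for v in orthogonal]

    u = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
    e = gram_schmidt(u)
    print(np.round([[np.dot(a, b) for b in e] for a in e], 10))  # identity matrix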

5.1.4 Linear Transformations

• A linear transformation from a vector space V to a vector space W is a map

T : V → W

satisfying the condition:

T (αu + βv) = αTu + βTv

for any u, v ∈ V and α, β ∈ C.

• Zero transformation maps all vectors to the zero vector.

• The linear transformation is called an endomorphism (or a linear operator) if V = W.

• The linear transformation is called a linear functional if W = C.

• A linear transformation is uniquely determined by its action on a basis.

• The set of linear transformations from V to W is a vector space denoted by L(V,W).

• The set of endomorphisms (operators) on V is denoted by End (V) or L(V).

• The set of linear functionals on V is called the dual space and is denoted by V∗.

• Example.

• The kernel (null space) (denoted by Ker T ) of a linear transformation T :V → W is the set of vectors in V that are mapped to zero.


• Theorem 5.1.11 The kernel of a linear transformation is a vector space.

• The dimension of a finite-dimensional kernel is called the nullity of the linear transformation.

null T = dim Ker T

• Theorem 5.1.12 The range of a linear transformation is a vector space.

• The dimension of a finite-dimensional range is called the rank of the linear transformation.

rank T = dim Im T

• Theorem 5.1.13 Dimension Theorem. Let T : V → W be a linear transformation between finite-dimensional vector spaces. Then

dim Ker T + dim Im T = dim V .

• Theorem 5.1.14 A linear transformation is injective if and only if its kernel is zero.

• An endomorphism of a finite-dimensional space is bijective if it is either injective or surjective.

• Two vector spaces are isomorphic if they can be related by a bijective linear transformation (which is called an isomorphism).

• An isomorphism is called an automorphism if V = W.

• The set of all automorphisms of V is denoted by Aut (V) or GL(V).

• A linear surjection is an isomorphism if and only if its nullity is zero.

• Theorem 5.1.15 An isomorphism maps linearly independent sets onto linearly independent sets.

• Theorem 5.1.16 Two finite-dimensional vector spaces are isomorphic if and only if they have the same dimension.

• All n-dimensional complex vector spaces are isomorphic to Cn.

• All n-dimensional real vector spaces are isomorphic to Rn.


• The dual basis fi in the dual space V∗ is defined by

fi(ej) = δij ,

where {ej} is the basis in V .

• Theorem 5.1.17 The dual space V∗ is isomorphic to V.

• The dual (or the pullback) of a linear transformation T : V → W is the linear transformation T ∗ : W∗ → V∗ defined for any g ∈ W∗ by

(T ∗g)v = g(Tv), v ∈ V .

• Graph.

• If T is surjective then T ∗ is injective.

• If T is injective then T ∗ is surjective.

• If T is an isomorphism then T ∗ is an isomorphism.

5.1.5 Algebras

• An algebra A is a vector space together with a binary operation called multiplication satisfying the conditions:

u(αv + βw) = αuv + βuw

(αv + βw)u = αvu + βwu

for any u, v, w ∈ A and α, β ∈ C.

• Examples. Matrices, functions, operators.

• The dimension of the algebra is the dimension of the vector space.

• The algebra is associative if

u(vw) = (uv)w

and commutative if

uv = vu .


• An algebra with identity is an algebra with an identity element 1 satisfying

u1 = 1u = u

for any u ∈ A.

• An element v is a left inverse of u if

vu = 1

and the right inverse if

uv = 1.

• Example. Lie algebras.

• An operator D : A→ A on an algebra A is called a derivation if it satisfies

D(uv) = (Du)v + uDv

• Example. Let A = Mat(n) be the algebra of square matrices of dimension n with the binary operation being the commutator of matrices.

• It is easy to show that for any matrices A, B, C the following identity (Jacobi identity) holds

[A, [B,C]] + [B, [C, A]] + [C, [A, B]] = 0

• Let C be a fixed matrix. We define an operator AdC on the algebra by

AdC B = [C, B]

Then this operator is a derivation since for any matrices A, B

AdC[A, B] = [AdCA, B] + [A, AdC B]

• A linear transformation T : A → B from an algebra A to an algebra B is called an algebra homomorphism if

T (uv) = T (u)T (v)

for any u, v ∈ A.


• An algebra homomorphism is called an algebra isomorphism if it is bijective.

• Example. The isomorphism of the Lie algebra so(3) and R3 with the cross product.

Let Xi, i = 1, 2, 3 be the antisymmetric matrices defined by

(Xi)jk = ε_jik .

They form an algebra with respect to the commutator

[Xi, Xj] = ε^k_ij Xk .

We define a map T : R3 → so(3) as follows. Let v = viei be a vector in R3. Then

T (v) = viXi .

Let R3 be equipped with the cross product. Then

T (v × u) = (Tv)(Tu)

Thus T is an isomorphism (linear bijective algebra homomorphism).

• Any finite-dimensional vector space can be converted into an algebra by defining the multiplication of the basis vectors by

ei ej = ∑_{k=1}^n C^k_ij ek ,

where C^k_ij are some scalars called the structure constants of the algebra.

• Example. Lie algebra su(2).

Pauli matrices are defined by

σ1 = [ 0 1; 1 0 ] , σ2 = [ 0 −i; i 0 ] , σ3 = [ 1 0; 0 −1 ] . (5.1)

They are Hermitian traceless matrices satisfying

σi σj = δij I + i εijk σk . (5.2)


They satisfy the following commutation relations

[σi, σj] = 2i εijk σk (5.3)

and the anti-commutation relations

σi σj + σj σi = 2δij I . (5.4)

Therefore, Pauli matrices form a representation of the Clifford algebra in 2 dimensions.

The matrices

Ji = −(i/2) σi (5.5)

are the generators of the Lie algebra su(2) with the commutation relations

[Ji, Jj] = ε^k_ij Jk . (5.6)
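• A short NumPy check of the relations (5.2), (5.4) and (5.6) (a sketch, not part of the notes; the variable names are illustrative):

    import numpy as np

    s = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
    I2 = np.eye(2, dtype=complex)
    eps = np.zeros((3, 3, 3))            # Levi-Civita symbol
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k], eps[j, i, k] = 1, -1

    for i in range(3):
        for j in range(3):
            rhs = (i == j) * I2 + 1j * sum(eps[i, j, k] * s[k] for k in range(3))
            assert np.allclose(s[i] @ s[j], rhs)                              # eq. (5.2)
            assert np.allclose(s[i] @ s[j] + s[j] @ s[i], 2 * (i == j) * I2)  # eq. (5.4)

    J = [-0.5j * si for si in s]         # generators of su(2), eq. (5.5)
    for i in range(3):
        for j in range(3):
            comm = J[i] @ J[j] - J[j] @ J[i]
            assert np.allclose(comm, sum(eps[k, i, j] * J[k] for k in range(3)))  # eq. (5.6)
    print("Pauli relations verified")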

Algebra homomorphism Λ : su(2) → so(3) is defined as follows. Let v = viJi ∈ su(2). Then Λ(v) is the matrix defined by

Λ(v) = viXi .

• Example. Quaternions. The algebra of quaternions H is defined by (here i, j, k = 1, 2, 3)

e0² = e0 , ei² = −e0 , e0 ei = ei e0 = ei ,

ei ej = ε^k_ij ek , i ≠ j .

There is an algebra homomorphism ρ : H → su(2)

ρ(e0) = I, ρ(ej) = −iσj .

• A subspace of an algebra is called a subalgebra if it is closed under algebra multiplication.

• A subset B of an algebra A is called a left ideal if AB ⊂ B, that is, for any u ∈ A and any v ∈ B, uv ∈ B.

• A subset B of an algebra A is called a right ideal if BA ⊂ B, that is, for any u ∈ A and any v ∈ B, vu ∈ B.


• A subset B of an algebra A is called a two-sided ideal if it is both a left and a right ideal, that is, if ABA ⊂ B, or for any u, w ∈ A and any v ∈ B, uvw ∈ B.

• Every ideal is a subalgebra.

• A proper ideal of an algebra with identity cannot contain the identity element.

• A proper left ideal cannot contain an element that has a left inverse.

• If an ideal does not contain any proper subideals then it is the minimal ideal.

• Examples. Let x be an element of an algebra A. Let Ax be the set defined by

Ax = {ux | u ∈ A} .

Then Ax is a left ideal.

• Similarly xA is a right ideal and AxA is a two-sided ideal.


5.2 Operator Algebra

5.2.1 Algebra of Operators on a Vector Space

• A linear operator on a vector space E is a mapping L : E → E satisfying the condition ∀u, v ∈ E, ∀a ∈ R,

L(u + v) = L(u) + L(v) and L(av) = aL(v).

• Identity operator I on E is defined by

I v = v, ∀v ∈ E

• Null operator 0 : E → E is defined by

0v = 0, ∀v ∈ E

• The vector u = L(v) is the image of the vector v.

• If S is a subset of E, then the set

L(S ) = {u ∈ E | u = L(v) for some v ∈ S }

is the image of the set S and the set

L−1(S ) = {v ∈ E | L(v) ∈ S }

is the inverse image of the set S .

• The image of the whole space E of a linear operator L is the range (or the image) of L, denoted by

Im(L) = L(E) = {u ∈ E | u = L(v) for some v ∈ E} .

• The kernel Ker(L) (or the null space) of an operator L is the set of all vectors in E which are mapped to zero, that is

Ker (L) = L−1({0}) = {v ∈ E | L(v) = 0} .

• Theorem 5.2.1 For any operator L the sets Im(L) and Ker (L) are vector subspaces.


• The dimension of the kernel Ker (L) of an operator L

null (L) = dim Ker (L)

is called the nullity of the operator L.

• The dimension of the range Im(L) of an operator L

rank (L) = dim Im(L)

is called the rank of the operator L.

• Theorem 5.2.2 For any operator L on an n-dimensional Euclidean space E

rank (L) + null (L) = n

• The set L(E) of all linear operators on a vector space E is a vector space with the addition of operators and multiplication by scalars defined by

(L1 + L2)(x) = L1(x) + L2(x), and (aL)(x) = aL(x) .

• The product of the operators A and B is the composition of A and B.

• Since the product of operators is defined as a composition of linear mappings, it is automatically associative, which means that for any operators A, B and C, there holds

(AB)C = A(BC) .

• The integer powers of an operator are defined as the multiple composition of the operator with itself, i.e.

A0 = I, A1 = A, A2 = AA, . . .

• The operator A on E is invertible if there exists an operator A−1 on E, called the inverse of A, such that

A−1A = AA−1 = I .

• Theorem 5.2.3 Let A and B be invertible operators. Then:

(A−1)−1 = A , (AB)−1 = B−1A−1 .


• The operators A and B are commuting if

AB = BA

and anti-commuting if

AB = −BA .

• The operators A and B are said to be orthogonal to each other if

AB = BA = 0 .

• An operator A is involutive if

A2 = I

idempotent if

A2 = A ,

and nilpotent if for some integer k

Ak = 0 .

• Two operators A and B are equal if for any u ∈ V

Au = Bu

• If Aei = Bei for all basis vectors in V then A = B.

• Operators are uniquely determined by their action on a basis.

• Theorem 5.2.4 An operator A is equal to zero if and only if for any u, v ∈ V

(u, Av) = 0

• Theorem 5.2.5 An operator A is equal to zero if and only if for any u

(u, Au) = 0

Proof: Use (w, Aw) = 0 for w = au + bv with a = 1, b = i and a = i, b = 1.

• Theorem 5.2.6 1. The inverse of an automorphism is unique.


2. The product of two automorphisms is an automorphism.

3. A linear transformation is an automorphism if and only if it maps abasis to another basis.

• Polynomials of Operators.

Pn(T ) = an T^n + · · · + a1 T + a0 I,

where I is the identity operator.

• Commutator of two operators A and B is an operator [A, B] defined by

[A, B] = AB − BA

• Theorem 5.2.7 Properties of commutators.

Anti-symmetry

[A, B] = −[B, A]

Linearity

[aA, bB] = ab[A, B]

[A, B + C] = [A, B] + [A,C]

[A + C, B] = [A, B] + [C, B]

Right derivation

[AB,C] = A[B,C] + [A,C]B

Left derivation

[A, BC] = [A, B]C + B[A,C]

Jacobi identity

[A, [B,C]] + [B, [C, A]] + [C, [A, B]] = 0

• Consequences

[A, Am] = 0

[A, A−1] = 0

[A, f (A)] = 0


• Functions of Operators.

Negative powers:

T^m = T · · · T (m factors), T^{−m} = (T^{−1})^m ,

T^m T^n = T^{m+n} , (T^m)^n = T^{mn} .

Let f be an analytic function given by

f (x) = ∑_{k=0}^∞ [ f^(k)(x0) / k! ] (x − x0)^k .

Then for an operator T

f (T ) = ∑_{k=0}^∞ [ f^(k)(x0) / k! ] (T − x0 I)^k .

• Exponential

exp(T ) = ∑_{k=0}^∞ (1/k!) T^k

• Example.

5.2.2 Derivatives of Functions of Operators

• A time-dependent operator is a map

H : R → End (V) .

Note that

[H(t), H(t′)] ≠ 0 .

• Example. A : R2 → R2

A(x, y) = (−y, x)


Note that

A2 = −I ,

so

A^{2n} = (−1)^n I, A^{2n+1} = (−1)^n A .

Therefore

exp(tA) = cos t I + sin t A ,

which is a rotation by the angle t:

exp(tA)(x, y) = (cos t x − sin t y, cos t y + sin t x) .

So A is a generator of rotations.
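• A small NumPy sketch of this example (illustrative, not part of the notes): the matrix of A in the standard basis, its exponential, and the rotation formula above.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, -1.0],       # matrix of A(x, y) = (-y, x) in the standard basis
                  [1.0,  0.0]])
    t = 0.7
    R = expm(t * A)                  # exp(tA)
    R_closed = np.cos(t) * np.eye(2) + np.sin(t) * A
    print(np.allclose(R, R_closed))              # True: exp(tA) = cos(t) I + sin(t) A
    print(R @ np.array([1.0, 0.0]))              # (cos t, sin t): rotation by the angle t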

• Derivative of a time-dependent operator is an operator defined by

dH/dt = lim_{h→0} [ H(t + h) − H(t) ] / h

• Rules of the differentiation

d/dt (AB) = (dA/dt) B + A (dB/dt)

• Example.

d/dt exp(tA) = A exp(tA)

• Exponential of the adjoint. Let

X(t) = e^{tA} B e^{−tA} .

It satisfies the equation

dX/dt = [A, X]

with initial condition

X(0) = B .

Let AdA be defined by

AdA B = [A, B] .

Then

X(t) = exp(t AdA) B = ∑_{k=0}^∞ (t^k / k!) [A, [A, · · · , [A, B] · · · ]]  (k nested commutators).


• Duhamel's Formula.

d/dt exp[H(t)] = exp[H(t)] ∫_0^1 exp[−sH(t)] (dH(t)/dt) exp[sH(t)] ds

Proof. Let

Y(s) = e^{−sH} (d/dt) e^{sH} .

Then

Y(0) = 0

and

d/dt exp[H] = exp[H] Y(1) .

We compute

dY/ds = −[H, Y] + H′ .

So,

(∂s + AdH) Y = H′ .

Therefore

Y(1) = exp(−AdH) ∫_0^1 exp(s AdH) H′ ds ,

which can be written in the form

Y(1) = ∫_0^1 e^{−(1−s)H} H′ e^{(1−s)H} ds .

By changing the variable s → (1 − s) we get the desired result.

• Particular case. If H commutes with H′ then

∂t e^{H(t)} = e^{H} ∂t H .

• Campbell-Hausdorff Formula.

exp A exp B = exp[C(A, B)]

Consider

U(s) = e^A e^{sB} = e^{C(s)} .


Of course

C(0) = A .

We have

d/ds U = U B .

Also,

∂s U = U ∫_0^1 exp[−τC] ∂s C exp[τC] dτ .

Let

F(z) = ∫_0^1 e^{−τz} dτ = (1 − e^{−z}) / z .

Then

∂s U = U F(AdC) ∂s C .

Therefore

F(AdC) ∂s C = B .

Now, let

Ψ(z) = 1 / F(log z) = z log z / (z − 1) .

Note that

e^{AdC} = Ad_{e^C} = Ad_{e^A} · Ad_{e^{sB}} = e^{AdA} e^{s AdB} .

Then

Ψ(e^{AdA} e^{s AdB}) F(AdC) = I .

Therefore, we get a differential equation

∂s C = Ψ(e^{AdA} e^{s AdB}) B

with initial condition

C(0) = A .

Therefore,

C(1) = A + ∫_0^1 Ψ(e^{AdA} e^{s AdB}) B ds .

This gives a power series in AdA, AdB.

• Particular case. If [A, B] commutes with both A and B then

e^A e^B = e^{A+B} exp( (1/2) [A, B] ) .
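• A numerical sketch of this particular case (illustrative, not part of the notes), using strictly upper-triangular matrices for which [A, B] commutes with both A and B:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
    B = np.array([[0.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [0.0, 0.0, 0.0]])
    C = A @ B - B @ A                      # [A, B]
    assert np.allclose(A @ C, C @ A) and np.allclose(B @ C, C @ B)

    lhs = expm(A) @ expm(B)
    rhs = expm(A + B) @ expm(0.5 * C)      # e^A e^B = e^{A+B} exp([A, B]/2)
    print(np.allclose(lhs, rhs))           # True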


5.2.3 Self-Adjoint and Unitary Operators

• The adjoint A∗ of an operator A is defined by

(Au, v) = (u, A∗v), ∀u, v ∈ E.

• Theorem 5.2.8 For any two operators A and B

(A∗)∗ = A , (AB)∗ = B∗A∗ .

(A + B)∗ = A∗ + B∗

(aA)∗ = ā A∗

• An operator A is self-adjoint (or Hermitian) if

A∗ = A

and anti-selfadjoint if

A∗ = −A

• Every operator A can be decomposed as the sum

A = AS + AA

of its selfadjoint part AS and its anti-selfadjoint part AA

AS = (1/2)(A + A∗) , AA = (1/2)(A − A∗) .

• Theorem 5.2.9 An operator H is Hermitian if and only if (u,Hu) is real for any u.

• An operator A on E is called positive, denoted by A ≥ 0, if it is self-adjoint and ∀v ∈ E

(Av, v) ≥ 0.

• An operator H is positive definite (H > 0) if it is positive and

(u,Hu) = 0

only for u = 0.


• Example. H = A∗A ≥ 0

• An operator A is called unitary if

AA∗ = A∗A = I .

• An operator U is isometric if for any v ∈ E

||Uv|| = ||v||

• Example. U = exp(A), A∗ = −A

• Unitary operators preserve the inner product.

• Theorem 5.2.10 Let U be a unitary operator on a real vector space E. Then there exists an anti-selfadjoint operator A such that

U = exp A .

• Recall that the operators U and A satisfy the equations

U∗ = U−1 and A∗ = −A.

5.2.4 Trace and Determinant

• The trace of an operator A is defined by

tr A = ∑_{i=1}^n (ei, A ei) ,

where {ei} is an orthonormal basis.

• The determinant of a positive operator on a finite-dimensional space is defined by

det A = exp(tr log A)
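• A quick numerical check of this definition (a sketch, not part of the notes): for a positive matrix the eigenvalues are positive, so tr log A = ∑ log λi and exp(tr log A) = ∏ λi = det A.

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.standard_normal((4, 4))
    A = M @ M.T + 4 * np.eye(4)            # a positive definite matrix (illustrative)
    lam = np.linalg.eigvalsh(A)            # real positive eigenvalues
    print(np.exp(np.sum(np.log(lam))))     # exp(tr log A)
    print(np.linalg.det(A))                # det A -- the two numbers agree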


• Properties

tr AB = tr BA

det AB = det A det B

tr (RAR−1) = tr A

det(RAR−1) = det A

• Theorem.

d/dt det(I + tA)|_{t=0} = tr A

det(I + tA) = 1 + t tr A + O(t2)

d/dt det A = det A tr( A−1 dA/dt )

• Note that

tr I = n , det I = 1 .

• Theorem 5.2.11 Let A be a self-adjoint operator. Then

det exp A = e^{tr A} .

• Let A be a positive definite operator, A > 0. The zeta-function of the operator A is defined by

ζ(s) = tr A−s = (1/Γ(s)) ∫_0^∞ dt t^{s−1} tr e^{−tA} .

• Theorem 5.2.12 The zeta-function has the properties

ζ(0) = n ,

and

ζ′(0) = − log det A .


5.2.5 Finite Difference Operators

• Let ei be an orthonormal basis. The shift operator E is defined by

E e1 = 0, E ei = ei−1, i = 2, . . . , n,

that is,

E f = ∑_{i=1}^{n−1} fi+1 ei

or

(E f )i = fi+1 .

Let

Δ = E − I ,

∇ = I − E−1 .

Next, define an operator D by

E = exp(hD) ,

that is,

D = (1/h) log E = (1/h) log(I + Δ) = −(1/h) log(I − ∇) .

Also, define an operator J by

J = Δ D−1 .

Then

Δ fi = fi+1 − fi ,

∇ fi = fi − fi−1 .
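• A small NumPy sketch (illustrative) of the shift and difference operators as n × n matrices acting on the components fi; note that E as defined here annihilates e1, so the formulas involving E−1 hold only away from the boundary.

    import numpy as np

    n = 6
    E = np.diag(np.ones(n - 1), k=1)        # E e1 = 0, E ei = e_{i-1}: (E f)_i = f_{i+1}
    I = np.eye(n)
    Delta = E - I                           # forward difference
    nabla = I - np.diag(np.ones(n - 1), k=-1)   # backward difference

    f = np.arange(1, n + 1) ** 2            # f_i = i^2
    print(Delta @ f)    # f_{i+1} - f_i (the last entry is a boundary artifact)
    print(nabla @ f)    # f_i - f_{i-1} (the first entry is a boundary artifact)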

• Problem. Compute U(t) = exp[tD2].


5.2.6 Projection Operators

• A Hermitian operator P is a projection if

P2 = P

• Two projections P1, P2 are orthogonal if

P1P2 = P2P1 = 0 .

• Let S be a subspace of E and E = S ⊕ S⊥. Then for any u ∈ E there exist unique v ∈ S and w ∈ S⊥ such that

u = v + w .

The vector v is called the projection of u onto S .

• The operator P on E defined by

Pu = v

is called the projection operator onto S .

• The operator P⊥ defined by

P⊥u = w

is the projection operator onto S ⊥.

• The operators P and P⊥ are called complementary projections. They have the properties:

P∗ = P, (P⊥)∗ = P⊥ ,

P + P⊥ = I ,

P2 = P , (P⊥)2 = P⊥ ,

PP⊥ = P⊥P = 0 .

• More generally, a collection of projections {P1, . . . , Pk} is a complete orthogonal system of complementary projections if

Pi Pj = 0 if i ≠ j

and

∑_{i=1}^k Pi = P1 + · · · + Pk = I .


• The trace of a projection P onto a vector subspace S is equal to its rank, or the dimension of the vector subspace S ,

tr P = rank P = dim S .

• Theorem 5.2.13 An operator P is a projection if and only if P is idempotentand self-adjoint.

• Theorem 5.2.14 The sum of projections is a projection if and only if theyare orthogonal.

• The projection onto a unit vector |e⟩ has the form

P = |e⟩⟨e| .

• Let {|ei⟩}, i = 1, . . . , m, be an orthonormal set and S = span{|e1⟩, . . . , |em⟩}. Then the operator

P = ∑_{i=1}^m |ei⟩⟨ei|

is the projection onto S .

• If {|ei⟩} is an orthonormal basis then the projections

Pi = |ei⟩⟨ei|

form a complete orthogonal set.

Examples

• Let u be a unit vector and Pu be the projection onto the one-dimensional subspace (line) Su spanned by u, defined by

Pu v = u(u, v) .

The orthogonal complement Su⊥ is the hyperplane with the normal u. The operator Ju defined by

Ju = I − 2Pu

is called the reflection operator with respect to the hyperplane Su⊥. The reflection operator is a self-adjoint involution, that is, it has the following properties

Ju∗ = Ju , Ju² = I .


The reflection operator has the eigenvalue −1 with multiplicity 1 and the eigenspace Su, and the eigenvalue +1 with multiplicity (n − 1) and with eigenspace Su⊥.

• Let u1 and u2 be an orthonormal system of two vectors and Pu1,u2 be the projection operator onto the two-dimensional space (plane) Su1,u2 spanned by u1 and u2

Pu1,u2 v = u1(u1, v) + u2(u2, v) .

Let Nu1,u2 be an operator defined by

Nu1,u2 v = u1(u2, v) − u2(u1, v) .

Then

Nu1,u2 Pu1,u2 = Pu1,u2 Nu1,u2 = Nu1,u2

and

N²u1,u2 = −Pu1,u2 .

A rotation operator Ru1,u2(θ) with the angle θ in the plane Su1,u2 is defined by

Ru1,u2(θ) = I − Pu1,u2 + cos θ Pu1,u2 + sin θ Nu1,u2 .

The rotation operator is unitary, that is, it satisfies the equation

R∗u1,u2 Ru1,u2 = I .
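• A NumPy sketch of these examples (illustrative): the projection onto a unit vector, the reflection Ju = I − 2Pu, and the rotation Ru1,u2(θ) in the plane spanned by an orthonormal pair u1, u2.

    import numpy as np

    def projector(u):
        """Orthogonal projection onto the line spanned by the unit vector u."""
        return np.outer(u, u)

    n = 4
    rng = np.random.default_rng(1)
    u1 = rng.standard_normal(n); u1 /= np.linalg.norm(u1)
    u2 = rng.standard_normal(n); u2 -= u1 * (u1 @ u2); u2 /= np.linalg.norm(u2)

    P = projector(u1) + projector(u2)              # projection onto span{u1, u2}
    N = np.outer(u1, u2) - np.outer(u2, u1)        # N v = u1 (u2, v) - u2 (u1, v)
    J = np.eye(n) - 2 * projector(u1)              # reflection across the hyperplane u1^perp

    theta = 0.6
    R = np.eye(n) - P + np.cos(theta) * P + np.sin(theta) * N   # rotation in the plane

    print(np.allclose(P @ P, P), np.allclose(J @ J, np.eye(n)))  # P idempotent, J involutive
    print(np.allclose(N @ N, -P))                                # N^2 = -P
    print(np.allclose(R.T @ R, np.eye(n)))                       # R is orthogonal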

5.2.7 Exercises

1. Prove that the range and the kernel of any operator are vector spaces.

2. Show that

(aA + bB)∗ = aA∗ + bB∗ , ∀a, b ∈ R ,

(A∗)∗ = A ,

(AB)∗ = B∗A∗

3. Show that for any operator A the operators AA∗ and A + A∗ are selfadjoint.

4. Show that the product of two selfadjoint operators is selfadjoint if and only if theycommute.

5. Show that a polynomial p(A) of a selfadjoint operator A is a selfadjoint operator.


6. Prove that the inverse of an invertible operator is unique.

7. Prove that an operator A is invertible if and only if Ker A = {0}, that is, Av = 0 implies v = 0.

8. Prove that for an invertible operator A, Im(A) = E, that is, for any vector v ∈ E there is a vector u ∈ E such that v = Au.

9. Show that if an operator A is invertible, then

(A−1)−1 = A .

10. Show that the product AB of two invertible operators A and B is invertible and

(AB)−1 = B−1A−1

11. Prove that the adjoint A∗ of any invertible operator A is invertible and

(A∗)−1 = (A−1)∗ .

12. Prove that the inverse A−1 of a selfadjoint invertible operator is selfadjoint.

13. An operator A on E is called isometric if ∀v ∈ E,

||Av|| = ||v|| .

Prove that an operator is unitary if and only if it is isometric.

14. Prove that unitary operators preserve the inner product. That is, show that if A is a unitary operator, then ∀u, v ∈ E

(Au, Av) = (u, v) .

15. Show that for every unitary operator A both A−1 and A∗ are unitary.

16. Show that for any operator A the operators AA∗ and A∗A are positive.

17. What subspaces do the null operator 0 and the identity operator I project onto?

18. Show that for any two projection operators P and Q, PQ = 0 if and only if QP = 0.

19. Prove the following properties of orthogonal projections

P∗ = P, (P⊥)∗ = P⊥, P⊥ + P = I, PP⊥ = P⊥P = 0 .


20. Prove that an operator is a projection if and only if it is idempotent and selfadjoint.

21. Give an example of an idempotent operator in R2 which is not a projection.

22. Show that any projection operator P is positive. Moreover, show that ∀v ∈ E

(Pv, v) = ||Pv||2 .

23. Prove that the sum P = P1 + P2 of two projections P1 and P2 is a projection operator if and only if P1 and P2 are orthogonal.

24. Prove that the product P = P1P2 of two projections P1 and P2 is a projection operator if and only if P1 and P2 commute.


5.3 Matrix Representation of Operators

5.3.1 Matrices

• Cn is the set of all ordered n-tuples of complex numbers, which can be

assembled as columns or as rows.

• Let v be a vector in an n-dimensional vector space V with a basis ei. Then

v = ∑_{i=1}^n vi ei ,

where v1, . . . , vn are complex numbers called the components of the vector v. The column-vector is an ordered n-tuple of the form

(v1, v2, . . . , vn)T

written as a column.

We say that the column vector represents the vector v in the basis ei.

• Let ⟨v| = (|v⟩)∗ be a linear functional dual to the vector |v⟩. Let ⟨ei| be the dual basis. Then

⟨v| = ∑_{i=1}^n vi ⟨ei| .

The row-vector (also called a covector) is an ordered n-tuple of the form

(v1, v2, . . . , vn) .

It represents the dual vector ⟨v| in the same basis.

• A set of nm complex numbers Aij, i = 1, . . . , n; j = 1, . . . , m, arranged in an array that has m columns and n rows

A =
A11 A12 · · · A1m
A21 A22 · · · A2m
. . .
An1 An2 · · · Anm

is called a rectangular n × m complex matrix.


• The set of all complex n × m matrices is denoted by Mat(n,m;C).

• The number Aij (also called an entry of the matrix) appears in the i-th row and the j-th column of the matrix A:

A =
A11 A12 · · · A1j · · · A1m
A21 A22 · · · A2j · · · A2m
. . .
Ai1 Ai2 · · · Aij · · · Aim
. . .
An1 An2 · · · Anj · · · Anm

• Remark. Notice that the first index indicates the row and the second index indicates the column of the matrix.

• The matrix all of whose entries are equal to zero is called the zero matrix.

• Finally, we define the multiplication of column-vectors by matrices from the left and the multiplication of row-vectors by matrices from the right as follows.

• Each matrix defines a natural left action on a column-vector and a right action on a row-vector.

• For each column-vector v and a matrix A = (Aij) the column-vector u = Av is obtained by multiplying each row of A by the column v.

• The components of the vector u are

ui = ∑_{j=1}^n Aij vj = Ai1 v1 + Ai2 v2 + · · · + Ain vn .


• Similarly, for a row vector vT the components of the row-vector uT = vT A are defined by

ui = ∑_{j=1}^n vj Aji = v1 A1i + v2 A2i + · · · + vn Ani .

• Let W be an m-dimensional vector space with a basis fi and A : V → W be a linear transformation. Such an operator defines an m × n matrix (Aij) by

A ei = ∑_{j=1}^m Aji fj

or

Aji = (fj, A ei) .

Thus the linear transformation A is represented by the matrix Aij.

The components of the vector Av are obtained by acting on the column vector (vi) from the left by the matrix (Aij), that is,

(Av)i = ∑_{j=1}^n Aij vj .
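• A minimal sketch (illustrative, with a made-up map) of building the matrix of a linear transformation from its action on a basis, here A : R3 → R2, A(x, y, z) = (x + 2y, 3z), in the standard bases:

    import numpy as np

    def transform(v):
        """A : R^3 -> R^2, A(x, y, z) = (x + 2y, 3z)."""
        x, y, z = v
        return np.array([x + 2 * y, 3 * z])

    # the columns of the matrix are the images of the basis vectors e1, e2, e3
    basis = np.eye(3)
    A = np.column_stack([transform(e) for e in basis])   # a 2 x 3 (m x n) matrix
    v = np.array([1.0, 2.0, 3.0])
    print(A @ v, transform(v))                           # both give (5, 9)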

• Proposition. The vector space L(V,W) of linear transformations A : V →W is isomorphic to the space M(m × n,C) of m × n matrices.

• Proposition. The rank of a linear transformation is equal to the rank of itsmatrix.

5.3.2 Operations on Matrices

• The addition of matrices is defined by

A + B =
A11 + B11 A12 + B12 · · · A1m + B1m
A21 + B21 A22 + B22 · · · A2m + B2m
. . .
An1 + Bn1 An2 + Bn2 · · · Anm + Bnm


and the multiplication by scalars by

cA =
cA11 cA12 · · · cA1m
cA21 cA22 · · · cA2m
. . .
cAn1 cAn2 · · · cAnm

• An n × m matrix is called a square matrix if n = m.

• The numbers Aii are called the diagonal entries. Of course, there are n diagonal entries. The set of diagonal entries is called the diagonal of the matrix A.

• The numbers Aij with i ≠ j are called off-diagonal entries; there are n(n−1) off-diagonal entries.

• The numbers Aij with i < j are called the upper triangular entries. The set of upper triangular entries is called the upper triangular part of the matrix A.

• The numbers Aij with i > j are called the lower triangular entries. The set of lower triangular entries is called the lower triangular part of the matrix A.

• The number of upper-triangular entries and the lower-triangular entries is the same and is equal to n(n − 1)/2.

• A matrix whose only non-zero entries are on the diagonal is called a diagonal matrix. For a diagonal matrix

Aij = 0 if i ≠ j .

• The diagonal matrix

A =
λ1 0 · · · 0
0 λ2 · · · 0
. . .
0 0 · · · λn

is also denoted by

A = diag (λ1, λ2, . . . , λn) .


• A diagonal matrix all of whose diagonal entries are equal to 1,

I =
1 0 · · · 0
0 1 · · · 0
. . .
0 0 · · · 1 ,

is called the identity matrix. The elements of the identity matrix are

δij = 1 if i = j, and δij = 0 if i ≠ j .

• A matrix A of the form

A =
∗ ∗ · · · ∗
0 ∗ · · · ∗
. . .
0 0 · · · ∗ ,

where ∗ represents nonzero entries, is called an upper triangular matrix. Its lower triangular part is zero, that is,

Aij = 0 if i > j .

• A matrix A of the form

A =
∗ 0 · · · 0
∗ ∗ · · · 0
. . .
∗ ∗ · · · ∗ ,

whose upper triangular part is zero, that is,

Aij = 0 if i < j ,

is called a lower triangular matrix.


• The transpose of a matrix A whose ij-th entry is Aij is the matrix AT whose ij-th entry is Aji. That is, AT is obtained from A by switching the roles of rows and columns of A:

(AT )ij = Aji .

• The Hermitian conjugate of a matrix A = (Aij) is a matrix A∗ = (A∗ij) defined by

(A∗)ij = Āji ,

that is, the complex conjugate of the transpose.

• A matrix A is called symmetric if

AT = A

and anti-symmetric if

AT = −A .

• A matrix A is called Hermitian if

A∗ = A

and anti-Hermitian if

A∗ = −A .

• An anti-Hermitian matrix has the form

A = iH

where H is Hermitian.

• A Hermitian matrix has the form

H = A + iB

where A is a real symmetric matrix and B is a real anti-symmetric matrix.


• The number of independent entries of an anti-symmetric matrix is n(n−1)/2.

• The number of independent entries of a symmetric matrix is n(n + 1)/2.

• The diagonal entries of a Hermitian matrix are real.

• The number of independent real parameters of a Hermitian matrix is n2.

• Every square matrix A can be uniquely decomposed as the sum of its diagonal part AD, the lower triangular part AL and the upper triangular part AU

A = AD + AL + AU .

• For an anti-symmetric matrix

(AU)T = −AL and AD = 0 .

• For a symmetric matrix

(AU)T = AL .

• Every square matrix A can be uniquely decomposed as the sum of its symmetric part AS and its anti-symmetric part AA

A = AS + AA ,

where

AS = (1/2)(A + AT ) , AA = (1/2)(A − AT ) .

• The product of square matrices is defined as follows. The ij-th entry of the product C = AB of two matrices A and B is

Cij = ∑_{k=1}^n Aik Bkj = Ai1 B1j + Ai2 B2j + · · · + Ain Bnj .

This is again a multiplication of the "i-th row of the matrix A by the j-th column of the matrix B".

• Theorem 5.3.1 The product of matrices is associative, that is, for any matrices A, B, C

(AB)C = A(BC) .

• Theorem 5.3.2 For any two matrices A and B

(AB)T = BT AT , (AB)∗ = B∗A∗ .


5.3.3 Inverse Matrix

• A matrix A is called invertible if there is another matrix A−1 such that

AA−1 = A−1A = I .

The matrix A−1 is called the inverse of A.

• Theorem 5.3.3 For any two invertible matrices A and B

(AB)−1 = B−1A−1 ,

and

(A−1)T = (AT )−1 .

• A matrix A is called orthogonal if

AT A = AAT = I ,

which means AT = A−1.

• A matrix A is called unitary if

A∗A = AA∗ = I ,

which means A∗ = A−1.

• Every unitary matrix has the form

U = exp(iH)

where H is Hermitian.

• A similarity transformation of a matrix A is a map

A �→ UAU−1

where U is a given invertible matrix.

• The similarity transformation of a function of a matrix is equal to the function of the similar matrix

U f (A)U−1 = f (UAU−1) .


5.3.4 Trace

• The trace is a map tr : Mat(n,C) → C that assigns to each matrix A = (Aij) a complex number tr A equal to the sum of the diagonal elements of the matrix

tr A = ∑_{k=1}^n Akk .

• Theorem 5.3.4 The trace has the properties

tr (AB) = tr (BA) ,

and

tr AT = tr A , tr A∗ = (tr A)∗ .

• Obviously, the trace of an anti-symmetric matrix is equal to zero.

• The trace is invariant under a similarity transformation.

• A natural inner product on the space of matrices is defined by

(A, B) = tr (A∗B)

5.3.5 Determinant

• Consider the set Zn = {1, 2, . . . , n} of the first n integers. A permutation ϕ of the set {1, 2, . . . , n} is an ordered n-tuple (ϕ(1), . . . , ϕ(n)) of these numbers.

• That is, a permutation is a bijective (one-to-one and onto) function

ϕ : Zn → Zn

that assigns to each number i from the set Zn = {1, . . . , n} another number ϕ(i) from this set.

• An elementary permutation is a permutation that exchanges the order of only two numbers.


• Every permutation can be realized as a product (or a composition) of elementary permutations. A permutation that can be realized by an even number of elementary permutations is called an even permutation. A permutation that can be realized by an odd number of elementary permutations is called an odd permutation.

• Proposition 5.3.1 The parity of a permutation does not depend on the representation of a permutation by a product of the elementary ones.

• That is, each representation of an even permutation has an even number of elementary permutations, and similarly for odd permutations.

• The sign of a permutation ϕ, denoted by sign(ϕ) (or simply (−1)ϕ), is defined by

sign(ϕ) = (−1)ϕ = +1 if ϕ is even, and −1 if ϕ is odd.

• The set of all permutations of n numbers is denoted by S n.

• Theorem 5.3.5 The cardinality of this set, that is, the number of different permutations, is

|S n| = n! .

• The determinant is a map det : Mat(n,C) → C that assigns to each matrix A = (Aij) a complex number det A defined by

det A = ∑_{ϕ∈Sn} sign(ϕ) A1ϕ(1) · · · Anϕ(n) ,

where the summation goes over all n! permutations.
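• A direct (and very inefficient, O(n!·n)) implementation of this definition as a sketch (illustrative names); it agrees with np.linalg.det for small matrices.

    import numpy as np
    from itertools import permutations

    def sign(perm):
        """Sign of a permutation, determined by the parity of the number of inversions."""
        inversions = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
                         if perm[i] > perm[j])
        return -1 if inversions % 2 else 1

    def det(A):
        n = A.shape[0]
        return sum(sign(p) * np.prod([A[i, p[i]] for i in range(n)])
                   for p in permutations(range(n)))

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 4.0]])
    print(det(A), np.linalg.det(A))   # both give 18.0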

• The most important properties of the determinant are listed below:

Theorem 5.3.6 1. The determinant of the product of matrices is equal to the product of the determinants:

det(AB) = det A det B .

2. The determinants of a matrix A and of its transpose AT are equal:

det AT = det A .


3. The determinant of the conjugate matrix is

det A∗ = (det A)∗ .

4. The determinant of the inverse A−1 of an invertible matrix A is equal to the inverse of the determinant of A:

det A−1 = (det A)−1

5. A matrix is invertible if and only if its determinant is non-zero.

• The determinant is invariant under the similarity transformation.

• The set of complex invertible matrices (with non-zero determinant) is denoted by GL(n,C).

• A matrix with unit determinant is called unimodular.

• The set of complex matrices with unit determinant is denoted by S L(n,C).

• The set of complex unitary matrices is denoted by U(n).

• The set of complex unitary matrices with unit determinant is denoted byS U(n).

• The set of real orthogonal matrices is denoted by O(n).

• An orthogonal matrix with unit determinant (a unimodular orthogonal matrix) is called a proper orthogonal matrix or just a rotation.

• The set of real orthogonal matrices with unit determinant is denoted by S O(n).

• Theorem 5.3.7 The determinant of an orthogonal matrix is equal to either 1 or −1.

• Theorem. The determinant of a unitary matrix is a complex number of modulus 1.

• A set G of invertible matrices forms a group if it is closed under taking inverse and matrix multiplication, that is, if the inverse A−1 of any matrix A in G belongs to the set G and the product AB of any two matrices A and B in G belongs to G.


5.3.6 Exercises

1. Show that

tr [A, B] = 0

2. Show that the product of invertible matrices is an invertible matrix.

3. Show that the product of matrices with positive determinant is a matrix with positive determinant.

4. Show that the inverse of a matrix with positive determinant is a matrix with positive determinant.

5. Show that GL(n,R) forms a group (called the general linear group).

6. Show that GL+(n,R) is a group (called the proper general linear group).

7. Show that the inverse of a matrix with negative determinant is a matrix with negative determinant.

8. Show that: a) the product of an even number of matrices with negative determinant is a matrix with positive determinant, b) the product of an odd number of matrices with negative determinant is a matrix with negative determinant.

9. Show that the product of matrices with unit determinant is a matrix with unit determinant.

10. Show that the inverse of a matrix with unit determinant is a matrix with unit determinant.

11. Show that S L(n,R) forms a group (called the special linear group or the unimodular group).

12. Show that the product of orthogonal matrices is an orthogonal matrix.

13. Show that the inverse of an orthogonal matrix is an orthogonal matrix.

14. Show that O(n) forms a group (called the orthogonal group).

15. Show that orthogonal matrices have determinant equal to either +1 or −1.

16. Show that the product of orthogonal matrices with unit determinant is an orthogonal matrix with unit determinant.

17. Show that the inverse of an orthogonal matrix with unit determinant is an orthogonal matrix with unit determinant.


18. Show that S O(n) forms a group (called the proper orthogonal group or the rotation group).


5.4 Spectral Decomposition

5.4.1 Direct Sums

• Let U and W be vector subspaces of a vector space V . Then the sum of the vector spaces U and W is the space of all sums of the vectors from U and W, that is,

U + W = {v ∈ V | v = u + w, u ∈ U, w ∈ W} .

• If the only vector common to both U and W is the zero vector then the sum U + W is called the direct sum and denoted by U ⊕ W.

• Theorem 5.4.1 Let U and W be subspaces of a vector space V. Then V = U ⊕ W if and only if every vector v ∈ V can be written uniquely as the sum

v = u + w

with u ∈ U, w ∈ W.

Proof.

• Theorem 5.4.2 Let V = U ⊕W. Then

dim V = dim U + dim W

Proof.

• The direct sum can be naturally generalized for several subspaces so that

V = ⊕_{i=1}^r Ui .

• To such a decomposition one naturally associates orthogonal complementary projections Pi on each subspace Ui such that

Pi² = Pi , Pi Pj = 0 if i ≠ j , ∑_{i=1}^r Pi = I .

• A complete orthogonal system of projections defines the orthogonal decomposition of the vector space

V = U1 ⊕ · · · ⊕ Ur ,

where Ui is the subspace the projection Pi projects onto.


• Theorem 5.4.3 1. The dimensions of the subspaces Ui are equal to the ranks of the projections Pi

dim Ui = rank Pi .

2. The sum of dimensions of the vector subspaces Ui equals the dimension of the vector space V

∑_{i=1}^r dim Ui = dim U1 + · · · + dim Ur = dim V .

• Let M be a subspace of a vector space V . Then the orthogonal complement of M is the vector space M⊥ of all vectors in V orthogonal to all vectors in M

M⊥ = {v ∈ V | (v, u) = 0 ∀u ∈ M} .

• Show that M⊥ is a vector space.

• Theorem 5.4.4 Every vector subspace M of V defines the orthogonal decomposition

V = M ⊕ M⊥

such that the corresponding projection operators P and P⊥ are Hermitian.

• Remark. The projections Pi are Hermitian only in inner product spaces when the subspaces Ui are mutually orthogonal.

5.4.2 Invariant Subspaces

• Let V be a finite-dimensional vector space, M be its subspace and P be the

projection onto the subspace M.

• The subspace M is an invariant subspace of an operator A if it is closed under the action of this operator, that is, A(M) ⊆ M.

• An invariant subspace M is called a proper invariant subspace if M ≠ V .

• Theorem 5.4.5 Let v be a vector in an n-dimensional vector space V and A be an operator on V. Then

M = span {v, Av, . . . , An−1v}


is an invariant space of A.

Proof.

• Theorem 5.4.6 The subspace M is invariant under an operator A if and only if M⊥ is invariant under its adjoint A∗.

Proof.

• The vector subspace M reduces the operator A if both M and its orthogonal complement M⊥ are invariant subspaces of A.

• If a subspace M reduces an operator A then we write

A = A1 ⊕ A2

where A1 acts on M and A2 acts on M⊥.

• If the subspace M reduces the operator A then, in a natural basis, the matrix representation of the operator A has a block-diagonal form

A = [ A1 0; 0 A2 ] .

• An operator whose matrix can be brought to this form by choosing a basis is called reducible; otherwise, it is irreducible.

• Theorem 5.4.7 A subspace M reduces an operator A if and only if it is invariant under both A and A∗.

Proof: follows from above.

• Theorem 5.4.8 A self-adjoint operator is reducible if and only if it has a proper invariant subspace.

• Theorem 5.4.9 The subspace M is invariant under an operator A if and only if

PAP = AP .

Proof.

• Theorem 5.4.10 The subspace M reduces the operator A if and only if A and P commute.

Proof.


5.4.3 Eigenvalues and Eigenvectors

• Let A be an operator on a vector space V . A scalar λ is an eigenvalue of A

if there is a nonzero vector v in V such that

Av = λv, or (A − λI)v = 0

Such a vector is called an eigenvector corresponding to the eigenvalue λ.

• Theorem 5.4.11 1. The eigenvalues of a Hermitian operator are real.

2. A Hermitian operator is positive if and only if all of its eigenvalues arepositive.

3. The eigenvalues of a unitary operator are complex numbers of unit modulus.

4. The eigenvalues of a projection operator can be only 0 and 1.

5. The eigenvalues of a self-adjoint involution can be only 1 and −1.

6. The eigenvalues of an anti-symmetric operator are either 0 or purely imaginary, and the non-zero ones appear in complex conjugate pairs.

• The eigenspace of A corresponding to the eigenvalue λ is the vector space

Mλ = Ker (A − λI)

• The eigenspace Mλ is the span of all eigenvectors corresponding to the eigenvalue λ.

• The dimension of the eigenspace of the eigenvalue λ is called the multiplicity (also called the geometric multiplicity) of λ,

dλ = dim Mλ

• An eigenvalue of multiplicity 1 is called simple (or non-degenerate).

• An eigenvalue of multiplicity greater than 1 is called multiple (or degenerate).

• The norm of an operator A is defined by

||A|| = sup_{v∈V, v≠0} ||Av|| / ||v||


• An operator A is bounded if it has a finite norm.

• In finite dimensions all operators are bounded.

• In finite dimensions the norm of the operator is equal to the largest absolute value of its eigenvalues

||A|| = max_{1≤i≤n} |λi| .

• For an invertible operator A

||A−1|| = max_{1≤i≤n} 1/|λi| = 1 / min_{1≤i≤n} |λi| .

• An operator is invertible if it does not have zero eigenvalues.

• The resolvent of an operator A is the operator

R(λ) = (A − λI)−1

The resolvent is defined for all complex numbers λ for which the operator A − λI is invertible.

• The norm of the resolvent is

||R(λ)|| = max_{1≤i≤n} 1/|λi − λ|

• The resolvent set of an operator A is the set of all complex numbers λ ∈ C such that the operator (A − λI) is invertible,

ρ(A) = {λ ∈ C | (A − λI) is invertible}

• The spectrum of an operator A is the complement of the resolvent set,

σ(A) = C − ρ(A)

• In finite dimensions the spectrum of an operator A is equal to the set of all eigenvalues of A,

σ(A) = {λ ∈ C | λ is an eigenvalue of A}


• The characteristic polynomial of A is defined by

χ(λ) = det(A − λI)

• The eigenvalues of an operator A are the roots of its characteristic polynomial

χ(λ) = 0 .

• Theorem 5.4.12 Every operator on an n-dimensional complex vector space has exactly n eigenvalues, counted with algebraic multiplicity.

• If there are p distinct roots λi then

χ(λ) = (λ1 − λ)^{m1} · · · (λp − λ)^{mp} .

• Here mj is called the algebraic multiplicity of λj.

• The geometric multiplicity di of an eigenvalue λi is less than or equal to the algebraic multiplicity mi,

di ≤ mi .

• Example. Let A : R2 → R2 be defined by

A(x, y) = (x + y, y)

Then it has one eigenvalue λ = 1 with geometric multiplicity 1 and algebraic multiplicity 2. The eigenvectors are of the form (a, 0).

Let the operator B be defined by

B(x, y) = (y, 0) .

Then

A = I + B .

Since B is nilpotent,

B2 = 0 ,

then

An = I + nB

and

exp(tA) = e^t (I + tB) .
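• A numerical check of this example (a sketch, not part of the notes):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, 1.0],      # A(x, y) = (x + y, y)
                  [0.0, 1.0]])
    B = np.array([[0.0, 1.0],      # B(x, y) = (y, 0), nilpotent: B^2 = 0
                  [0.0, 0.0]])
    t = 0.5
    lhs = expm(t * A)
    rhs = np.exp(t) * (np.eye(2) + t * B)   # e^t (I + tB)
    print(np.allclose(lhs, rhs))            # True
    print(np.linalg.eig(A))                 # single eigenvalue 1, eigenvectors (a, 0)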


• An operator A on a vector space V is diagonalizable if there is a basis in V consisting of eigenvectors of A.

• In such a basis the operator A is represented by a diagonal matrix.

• Theorem 5.4.13 Let A be a diagonalizable operator, λj, j = 1, . . . , p, be its distinct eigenvalues, Mj be the corresponding eigenspaces and Pj be the projections onto Mj. Then:

1. I = ∑_{j=1}^p Pj , Pi Pj = 0 if i ≠ j,

2. V = ⊕_{j=1}^p Mj ,

3. A = ∑_{j=1}^p λj Pj .

• In other words, for any

v = ∑_{i=1}^n ei (ei, v) ,

we have

Av = ∑_{i=1}^n λi ei (ei, v) .

5.4.4 Spectral Decomposition

• An operator is normal if it commutes with its adjoint.

• Both Hermitian and unitary operators are normal.

• Theorem 5.4.14 An operator A is normal if and only if for any v ∈ V

||Av|| = ||A∗v||

Proof.


• Theorem 5.4.15 Let A be a normal operator. Then λ is an eigenvalue of A with an eigenvector v if and only if λ∗ (the complex conjugate of λ) is an eigenvalue of A∗ with the same eigenvector v.

Proof.

• Theorem 5.4.16 Let A be a normal operator. Then:

1. every eigenspace Mλ reduces A,

2. the eigenspaces Mλ and Mµ corresponding to distinct eigenvalues λ ≠ µ are orthogonal.

Proof.

• The projections to the eigenspaces of a normal operator are Hermitian.

• Theorem 5.4.17 Spectral Decomposition Theorem. Let A be a normal operator on a complex vector space V. Let λj, j = 1, . . . , p, be the distinct eigenvalues of A, Mj be the corresponding eigenspaces and Pj be the projections on Mj. Then:

1. V = ⊕_{j=1}^p Mj ,

2. ∑_{j=1}^p dim Mj = dim V ,

3. the projections are Hermitian, orthogonal and complete

∑_{j=1}^p Pj = I , Pi Pj = 0 if i ≠ j , P∗j = Pj ,

4. there is the spectral decomposition of the operator

A = ∑_{j=1}^p Aj = ∑_{j=1}^p λj Pj .
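• A NumPy sketch of the spectral decomposition for a Hermitian (hence normal) matrix (illustrative): the projections are built from the orthonormal eigenvectors returned by np.linalg.eigh.

    import numpy as np

    rng = np.random.default_rng(2)
    M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    A = (M + M.conj().T) / 2                       # a Hermitian matrix

    lam, U = np.linalg.eigh(A)                     # eigenvalues and orthonormal eigenvectors
    # one rank-one projection per eigenvector; the projection onto an eigenspace
    # is the sum of these over equal eigenvalues
    P = [np.outer(U[:, k], U[:, k].conj()) for k in range(4)]

    print(np.allclose(sum(P), np.eye(4)))                          # completeness
    print(np.allclose(A, sum(l * p for l, p in zip(lam, P))))      # A = sum lam_j P_j
    print(np.allclose(P[0] @ P[1], 0))                             # orthogonality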


Proof.

• Theorem 5.4.18 Let A be a normal operator on a complex vector space V. Then:

1. there is an orthonormal basis consisting of eigenvectors of A,

2. the operator A is diagonalizable.

Proof.

• Theorem 5.4.19 A Hermitian operator is diagonalizable by a unitary operator, that is, for every Hermitian operator H there is a diagonal operator D and a unitary operator U such that

H = UDU−1 .

• Two operators A and B are simultaneously diagonalizable if there is a complete system of Hermitian orthogonal projections Pi, i = 1, . . . , p, and numbers λi, µi such that

A = ∑_{i=1}^p λi Pi , B = ∑_{i=1}^p µi Pi .

• Theorem 5.4.20 Let A be a normal operator and Pi be its projections to the eigenspaces. Then an operator B commutes with A if and only if B commutes with all projections Pi.

Proof.

• Theorem 5.4.21 Two normal operators are simultaneously diagonalizableif and only if they commute.

• Let A be a self-adjoint operator with distinct eigenvalues {λ1, . . . , λp} with multiplicities mj. Then the trace of the operator and the determinant of the operator A are defined by

tr A = ∑_{j=1}^p mj λj ,


det A = ∏_{j=1}^p λ_j^{m_j} .

• The zeta-function of a positive operator A is defined by

ζ(s) = ∑_{i=1}^p m_i λ_i^{−s} .

• There holds

ζ′(0) = − log det A .

• The trace of a projection P onto a vector subspace S is equal to its rank, or the dimension of the vector subspace S,

tr P = rank P = dim S .
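• A numerical illustration of these definitions (NumPy assumed; the positive matrix below, with the eigenvalue 2 of multiplicity 2, is an arbitrary example), checking tr A, det A and ζ′(0) = − log det A with a finite-difference derivative.

import numpy as np

A = np.diag([1., 2., 2., 5.])                         # positive, eigenvalue 2 has multiplicity 2
lam, mult = np.array([1., 2., 5.]), np.array([1, 2, 1])

zeta  = lambda s: np.sum(mult * lam**(-s))            # zeta(s) = sum_i m_i lambda_i^{-s}
dzeta = lambda s, h=1e-6: (zeta(s + h) - zeta(s - h)) / (2 * h)

print(np.isclose(np.sum(mult * lam), np.trace(A)))               # tr A
print(np.isclose(np.prod(lam**mult), np.linalg.det(A)))          # det A
print(np.isclose(dzeta(0.0), -np.log(np.linalg.det(A))))         # zeta'(0) = -log det A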

5.4.5 Functions of Operators

• Let A be a normal operator on a vector space V given by its spectral decomposition

A = ∑_{i=1}^p λ_i P_i ,

where P_i are the projections to the eigenspaces.

• Let f : C → C be a complex function analytic at 0.

• Then one can define the function of the operator A by

f(A) = ∑_{i=1}^p f(λ_i) P_i .

• The exponential of A is defined by

exp A = ∑_{k=0}^∞ (1/k!) A^k = ∑_{i=1}^p e^{λ_i} P_i .
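• As a sketch (NumPy and SciPy assumed; the symmetric matrix below is an arbitrary example), the spectral formula f(A) = ∑ f(λ_i) P_i can be compared with the matrix exponential computed directly.

import numpy as np
from scipy.linalg import expm

A = np.array([[0., 1.], [1., 0.]])                    # symmetric, eigenvalues -1 and +1
lam, V = np.linalg.eigh(A)
P = [np.outer(V[:, i], V[:, i]) for i in range(2)]    # eigenprojections

expA_spectral = sum(np.exp(l) * p for l, p in zip(lam, P))
print(np.allclose(expA_spectral, expm(A)))             # spectral formula matches expm(A)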


• The trace of a function of a self-adjoint operator A is then

tr f(A) = ∑_{i=1}^p m_i f(λ_i) ,

where m_i is the multiplicity of the eigenvalue λ_i.

• Let A be a positive definite operator, A > 0. The zeta-function of the operator A is defined by

ζ(s) = tr A^{−s} = ∑_{i=1}^p m_i λ_i^{−s} .

• Theorem 5.4.22 For every unitary operator U there is a Hermitian operator H with real eigenvalues λ_j and the corresponding projections P_j such that

U = exp(iH) = ∑_{j=1}^p e^{iλ_j} P_j .

• The positive square root of a positive operator A is defined by

√A = ∑_{j=1}^p √λ_j P_j .
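• A numerical illustration of the square root of a positive operator and of Theorem 5.4.22 (NumPy and SciPy assumed; the matrices below are arbitrary examples, and the Hermitian generator is recovered via the principal matrix logarithm).

import numpy as np
from scipy.linalg import expm, sqrtm, logm

A = np.array([[2., 1.], [1., 2.]])                    # positive definite
R = sqrtm(A)
print(np.allclose(R @ R, A))                          # sqrt(A)^2 = A

U = expm(1j * np.array([[0., 1.], [1., 0.]]))         # exp(iH) with H Hermitian, so U is unitary
H = -1j * logm(U)                                     # a Hermitian logarithm of U
print(np.allclose(U @ U.conj().T, np.eye(2)))         # unitarity
print(np.allclose(expm(1j * H), U))                   # U = exp(iH)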

• Theorem 5.4.23 Let A be a normal operator and P_j, j = 1, . . . , p, be the projections to the eigenspaces. Then

P_j = ∏_{k≠j} (A − λ_k I)/(λ_j − λ_k) .

Proof: Let p_j(z) be the polynomials of the form

p_j(z) = ∏_{k≠j} (z − λ_k)/(λ_j − λ_k) .

• The polynomial p_j(z) has (p − 1) roots λ_k, k ≠ j, that is,

p_j(λ_k) = 0 for k ≠ j .


• Moreover, the polynomial p_j(z) satisfies the equation

p_j(λ_j) = 1 .

• That is,

p_j(λ_k) = δ_{jk}

and, therefore, we have

p_j(A) = ∑_{k=1}^p p_j(λ_k) P_k = P_j .
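• A numerical check of Theorem 5.4.23 (NumPy assumed; the symmetric matrix below, with distinct eigenvalues, is an arbitrary example).

import numpy as np

A = np.array([[2., 1.], [1., 3.]])                    # symmetric, distinct eigenvalues
lam, V = np.linalg.eigh(A)
I = np.eye(2)

for j in range(2):
    Pj = I.copy()
    for k in range(2):
        if k != j:
            Pj = Pj @ (A - lam[k] * I) / (lam[j] - lam[k])   # prod_{k != j} (A - lam_k I)/(lam_j - lam_k)
    print(np.allclose(Pj, np.outer(V[:, j], V[:, j])))        # equals the eigenprojection P_j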

• Dimension-independent definition of the determinant of a positive operator:

1. det A = exp tr log A ,

2. det A = exp[−ζ′(0)] ,

3. det A^{−1/2} = ∫_{Rⁿ} dx exp[−π(x, Ax)] ,

where dx = dx_1 · · · dx_n.
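• The third formula can be checked numerically for a 2×2 positive matrix (SciPy assumed; the integral over R² is approximated by quadrature over a large box).

import numpy as np
from scipy.integrate import dblquad

A = np.array([[2., 1.], [1., 3.]])                    # positive definite, det A = 5

def integrand(y, x):
    v = np.array([x, y])
    return np.exp(-np.pi * v @ A @ v)                 # exp[-pi (x, Ax)]

val, _ = dblquad(integrand, -5, 5, lambda x: -5, lambda x: 5)
print(np.isclose(val, np.linalg.det(A) ** -0.5, rtol=1e-4))   # det(A)^{-1/2}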

5.4.6 Polar Decomposition

• Theorem 5.4.24 Polar Decomposition Theorem. Let A be an operator on a complex vector space. Then there exist a unique positive operator R and a unitary operator U such that

A = UR .

If the operator A is invertible then the operator U is also unique.

• Proof. Suppose A is invertible. Let

R = √(A∗A)

and

U = AR⁻¹ .
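• A sketch of this construction in coordinates (NumPy and SciPy assumed; the invertible complex matrix below is an arbitrary example).

import numpy as np
from scipy.linalg import sqrtm

A = np.array([[1., 2.], [0., 1.]]) + 1j * np.array([[0., 1.], [1., 0.]])   # invertible
R = sqrtm(A.conj().T @ A)                             # positive square root of A* A
U = A @ np.linalg.inv(R)

print(np.allclose(U @ U.conj().T, np.eye(2)))         # U is unitary
print(np.allclose(A, U @ R))                          # A = U R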


5.4.7 Real Vector Spaces

• Theorem 5.4.25 Let A be a symmetric operator on a real vector space V. Let λ_j, j = 1, . . . , p, be the distinct eigenvalues of A, M_j be the corresponding eigenspaces and P_j be the projections on M_j. Then:

1.

V = ⊕_{j=1}^p M_j ,

2.

∑_{j=1}^p dim M_j = dim V ,

3. the projections are symmetric, orthogonal and complete:

∑_{j=1}^p P_j = I ,  P_i P_j = 0 if i ≠ j ,  P_j^T = P_j ,

4. there is the spectral decomposition of the operator

A = ∑_{j=1}^p A_j = ∑_{j=1}^p λ_j P_j .

• Theorem 5.4.26 The only eigenvalues of an orthogonal operator (on a real vector space) are +1 and −1.

Proof.

• Theorem 5.4.27 Let O be an orthogonal operator with det O = 1 on a real vector space V. Then there exists an anti-symmetric operator A such that

O = exp A .

• The (complex) diagonal form of an anti-symmetric operator A is

A = diag (0, . . . , 0, iθ1,−iθ1, . . . , iθk,−iθk)


where θi are real.

Therefore

O = exp A = diag(1, . . . , 1, e^{iθ_1}, e^{−iθ_1}, . . . , e^{iθ_k}, e^{−iθ_k}) .

• Some of the θ's may be equal to ±π. By separating them we get

O = exp A = diag(1, . . . , 1, −1, . . . , −1, e^{iθ_1}, e^{−iθ_1}, . . . , e^{iθ_k}, e^{−iθ_k}) ,

where θ_i ≠ ±π.

• The real block diagonal form of an anti-symmetric operator A is

A = diag (0, . . . , 0, θ1ε, . . . , θkε)

where

ε = [ 0 1 ; −1 0 ] .

• Note that ε² = −I and

ε^{2n} = (−1)ⁿ I ,  ε^{2n+1} = (−1)ⁿ ε .

• Therefore

exp(θε) = I cos θ + ε sin θ .
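• A quick numerical check of this identity (NumPy and SciPy assumed).

import numpy as np
from scipy.linalg import expm

eps = np.array([[0., 1.], [-1., 0.]])
theta = 0.3

R = expm(theta * eps)
print(np.allclose(R, np.cos(theta) * np.eye(2) + np.sin(theta) * eps))   # exp(theta eps) = I cos + eps sin
print(np.allclose(R @ R.T, np.eye(2)))                                   # R is orthogonal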

• Theorem 5.4.28 Spectral Decomposition of Orthogonal Operators on Real Vector Spaces. Let O be an orthogonal operator on a real vector space V. Then the only eigenvalues of O are +1 and −1 (possibly multiple) and there exists an orthogonal decomposition

V = V_+ ⊕ V_− ⊕ V_1 ⊕ · · · ⊕ V_p ,

where V_+ and V_− are the eigenspaces corresponding to the eigenvalues 1 and −1, and V_1, . . . , V_p are mutually orthogonal two-dimensional subspaces such that

dim V = dim V+ + dim V− + 2p .


Let P_+, P_−, P_1, . . . , P_p be the corresponding orthogonal complementary system of projections, that is,

P_+ + P_− + ∑_{i=1}^p P_i = I .

Then there exists a corresponding system of operators N_1, . . . , N_p satisfying the equations

N_i² = −P_i ,  N_i P_i = P_i N_i = N_i ,

N_i P_j = P_j N_i = 0  if i ≠ j ,

and angles θ_1, . . . , θ_p such that −π < θ_i < π and

O = P_+ − P_− + ∑_{i=1}^p R_i(θ_i) ,

where

R_i(θ_i) = cos θ_i P_i + sin θ_i N_i

are the two-dimensional rotation operators in the planes corresponding to P_i.

• Theorem 5.4.29 Every invertible operator A on a real vector space can be written in a unique way as a product

A = OR

of an orthogonal operator O and a symmetric positive operator R.

5.4.8 Heisenberg Algebra, Fock Space and Harmonic Oscillator

• Heisenberg Algebra. The Heisenberg algebra is a 3-dimensional Lie algebra with generators X, Y, Z satisfying the commutation relations

[X, Y] = Z ,  [X, Z] = 0 ,  [Y, Z] = 0 .

• A representation of the Lie algebra A is a homomorphism ρ : A → L(V) from the Lie algebra to the space of operators on a vector space V such that

ρ([S , T ]) = [ρ(S ), ρ(T )].


• The Heisenberg algebra can be represented by the matrices

X = [ 0 1 0 ; 0 0 0 ; 0 0 0 ] ,  Y = [ 0 0 0 ; 0 0 1 ; 0 0 0 ] ,  Z = [ 0 0 1 ; 0 0 0 ; 0 0 0 ] ,

or by the differential operators C∞(R³) → C∞(R³) defined by

X = ∂_x − (1/2) y ∂_z ,  Y = ∂_y + (1/2) x ∂_z ,  Z = ∂_z .

• Properties of the Heisenberg algebra:

[X, Yⁿ] = n Z Yⁿ⁻¹

[X, exp(bY)] = bZ exp(bY)

X exp(bY) = exp(bY) (X + bZ)

exp(−bY) X exp(bY) = X + bZ

X exp(−(1/2)Y²) = exp(−(1/2)Y²) (X − ZY)

[Y, Xⁿ] = −n Z Xⁿ⁻¹

[Y, exp(aX)] = −aZ exp(aX)

exp(aX) Y = (Y + aZ) exp(aX)

exp(aX) Y exp(−aX) = Y + aZ

exp(aX) exp(bY) = exp(abZ) exp(bY) exp(aX)

exp(−bY) exp(aX) exp(bY) = exp(abZ) exp(aX)

exp(aX) exp(bY) exp(−aX) = exp(abZ) exp(bY)

• Campbell-Hausdorff formula:

exp(aX + bY) = exp(−(ab/2) Z) exp(aX) exp(bY)
             = exp((ab/2) Z) exp(bY) exp(aX)


• Heisenberg group. The Heisenberg group is a 3-dimensional Lie group with the generators X, Y, Z.

• An arbitrary element of the Heisenberg group is parametrized by canonical coordinates (a, b, c) as

g(a, b, c) = exp(aX + bY + cZ) .

Obviously, g(0, 0, 0) = I

and the inverse is defined by

[g(a, b, c)]−1 = g (−a,−b,−c)

• The group multiplication law in the Heisenberg group takes the form

g(a, b, c) g(a′, b′, c′) = g( a + a′, b + b′, c + c′ + (1/2)(ab′ − a′b) ) .

• Notice that

g(a, b, c) = g(0, 0, c + ab/2) g(0, b, 0) g(a, 0, 0) .

• A representation of a Lie group G is a homomorphism ρ : G → Aut(V) from the group G to the space of invertible operators on a vector space V such that for any g, h ∈ G

ρ(gh) = ρ(g)ρ(h)

and

ρ(g⁻¹) = [ρ(g)]⁻¹ ,  ρ(e) = I .

• Representations of the Heisenberg group.

• The elements of the Heisenberg group can be represented by upper triangular matrices. Notice that

X² = Y² = Z² = XZ = YZ = 0


and

XY = Z ,  YX = 0 ,

so that, for M = aX + bY + cZ,

M = [ 0 a c ; 0 0 b ; 0 0 0 ] ,  M² = [ 0 0 ab ; 0 0 0 ; 0 0 0 ] ,  M³ = 0 .

Therefore,

g(a, b, c) = exp M = [ 1 a c + ab/2 ; 0 1 b ; 0 0 1 ] .
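• A numerical check of the matrix representation and of the group multiplication law (NumPy and SciPy assumed; the parameter values are arbitrary).

import numpy as np
from scipy.linalg import expm

X = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
Y = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])
Z = np.array([[0., 0., 1.], [0., 0., 0.], [0., 0., 0.]])

g = lambda a, b, c: expm(a * X + b * Y + c * Z)

a, b, c = 1.0, 2.0, 0.5
ap, bp, cp = -0.3, 0.7, 1.1
lhs = g(a, b, c) @ g(ap, bp, cp)
rhs = g(a + ap, b + bp, c + cp + 0.5 * (a * bp - ap * b))
print(np.allclose(lhs, rhs))                                  # group multiplication law
print(np.allclose(g(a, b, c),                                 # explicit matrix form of g(a,b,c)
                  np.array([[1., a, c + a * b / 2.], [0., 1., b], [0., 0., 1.]])))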

• Another representation is defined by the action on functions on R³. Notice that

exp(aX) f(x, y, z) = f(x + a, y, z − (a/2) y) ,

exp(bY) f(x, y, z) = f(x, y + b, z + (b/2) x) ,

exp(cZ) f(x, y, z) = f(x, y, z + c) .

• Therefore,

g(a, b, c) f(x, y, z) = f(x + a, y + b, z + c + (b/2) x − (a/2) y) .

• Fock space.

• Let us define the operator

N = YX .

• It is easy to see that

[N, Y] = ZY, [N, X] = −ZX.

• Suppose that there exists a unit vector v0 called the vacuum state such that

Xv0 = 0

• Let us define a sequence of vectors

v_n = (1/√n!) Yⁿ v_0 .


• By using the properties of the Heisenberg algebra it is easy to show that

Y v_n = √(n + 1) v_{n+1} , n ≥ 0 ,

X v_n = √n Z v_{n−1} , n ≥ 1 .

Therefore

N v_n = n Z v_n , n ≥ 0 .

• Let us define the vectors

w(b) = exp(bY) v_0 = ∑_{n=0}^∞ (bⁿ/√n!) v_n ,

called the coherent states.

• Then by using the properties of the Heisenberg algebra we get

Xw(b) = bZw(b)

• Now, suppose that Y = X∗ and Z = I.

• Then, the vectors vn are orthonormal

(vn, vm) = δnm

and are the eigenvectors of the self-adjoint operator

N = X∗X

with integer eigenvalues n ≥ 0.

• Then the space

span {v_n | n ≥ 0}

is called the Fock space, the operators X and X∗ are called the annihilation and creation operators, and the operator N is called the operator of the number of particles.


• Note also that the coherent states are not orthonormal:

(w(a), w(b)) = e^{ab} .

• Finally, we compute the trace of the heat semigroup operator

Tr exp(−tX∗X) = ∑_{n=0}^∞ e^{−tn} = 1/(1 − e^{−t}) .

• Harmonic oscillator. Let D be an anti-self-adjoint operator and Q be a self-adjoint operator satisfying the commutation relations

[D,Q] = I

• The harmonic oscillator is a quantum system with the (self-adjoint positive) Hamiltonian

H = −(1/2) D² + (1/2) Q² .

• Then the operators

X = (1/√2)(D + Q) ,  X∗ = (1/√2)(−D + Q)

are the annihilation and creation operators.

• The operator of the number of particles is

N = X∗X = −(1/2) D² + (1/2) Q² − 1/2

and, therefore, the Hamiltonian is

H = N + 1/2 .

• The eigenvalues of the Hamiltonian are

λ_n = n + 1/2

with the eigenvectors vn


• It is clear that the vectors

ψ_n(t) = e^{−itλ_n} v_n = exp(−it(n + 1/2)) (1/(2^{n/2} √n!)) (−D + Q)ⁿ v_0

satisfy the equation

(i∂_t − H) ψ_n = 0 ,

which is called the Schrödinger equation.

• The vacuum state is determined from the equation

(D + Q)v0 = 0

and has the form

v_0 = exp(−(1/2) Q²) ψ_0

with ψ_0 satisfying

Dψ_0 = 0 .

• The heat trace (also called the partition function) for the harmonic oscillator is

Tr exp(−tH) = 1/(2 sinh(t/2)) .
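• The Fock-space formulas above can be illustrated on a truncated Fock space (NumPy assumed; the truncation size n_max is an arbitrary choice): the number operator has eigenvalues 0, 1, 2, . . ., and the truncated heat traces approach 1/(1 − e^{−t}) and 1/(2 sinh(t/2)).

import numpy as np

n_max = 60                                            # truncation of the Fock space
X  = np.diag(np.sqrt(np.arange(1, n_max)), k=1)       # annihilation: X v_n = sqrt(n) v_{n-1}
Xd = X.T                                              # creation:     X* v_n = sqrt(n+1) v_{n+1}
N  = Xd @ X                                           # number operator
H  = N + 0.5 * np.eye(n_max)                          # harmonic-oscillator Hamiltonian

t = 1.0
print(np.allclose(np.diag(N), np.arange(n_max)))                               # eigenvalues 0, 1, 2, ...
print(np.isclose(np.sum(np.exp(-t * np.diag(N))), 1 / (1 - np.exp(-t)),        # Tr e^{-t X*X}
                 rtol=1e-3))
print(np.isclose(np.sum(np.exp(-t * np.diag(H))), 1 / (2 * np.sinh(t / 2)),    # Tr e^{-t H}
                 rtol=1e-3))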

5.4.9 Exercises

1. Find the eigenvalues of a projection operator.

2. Prove that the span of all eigenvectors corresponding to the eigenvalue λ of an operator A is a vector space.

3. Let

E(λ) = Ker (A − λI) .

Show that: a) if λ is not an eigenvalue of A, then E(λ) = {0}, and b) if λ is an eigenvalue of A, then E(λ) is the eigenspace corresponding to the eigenvalue λ.

4. Show that the operator A − λI is invertible if and only if λ is not an eigenvalue of the operator A.


5. Let T be a unitary operator. Then the operators A and

Ã = T A T⁻¹

are called similar. Show that the eigenvalues of similar operators are the same.

6. Show that an operator similar to a self-adjoint operator is self-adjoint and an operator similar to an anti-self-adjoint operator is anti-self-adjoint.

7. Show that all eigenvalues of a positive operator A are non-negative.

8. Show that the eigenvectors corresponding to distinct eigenvalues of a unitary operator are orthogonal to each other.

9. Show that the eigenvectors corresponding to distinct eigenvalues of a self-adjoint operator are orthogonal to each other.

10. Show that all eigenvalues of a unitary operator A have absolute value equal to 1.

11. Show that if A is a projection, then it can only have two eigenvalues: 1 and 0.
