Cyclic subspace

From Wikipedia, the free encyclopedia


Contents

1 Canonical basis
   1.1 Representation theory
      1.1.1 Examples
   1.2 Linear algebra
      1.2.1 Method
      1.2.2 Example
   1.3 See also
   1.4 Notes
   1.5 References
2 Cartesian tensor
   2.1 Cartesian basis and related terminology
      2.1.1 Vectors in three dimensions
      2.1.2 Second order tensors in three dimensions
      2.1.3 Vectors and tensors in n dimensions
   2.2 Transformations of Cartesian vectors (any number of dimensions)
      2.2.1 Meaning of invariance under coordinate transformations
      2.2.2 Derivatives and Jacobian matrix elements
      2.2.3 Projections along coordinate axes
   2.3 Transformation of the dot and cross products (three dimensions only)
      2.3.1 Dot product, Kronecker delta, and metric tensor
      2.3.2 Cross product, Levi-Civita symbol, and pseudovectors
      2.3.3 Applications of the δ tensor and ε pseudotensor
   2.4 Transformations of Cartesian tensors (any number of dimensions)
      2.4.1 Second order
      2.4.2 Any order
   2.5 Pseudovectors as antisymmetric second order tensors
   2.6 Vector and tensor calculus
      2.6.1 Vector calculus
      2.6.2 Tensor calculus
   2.7 Difference from the standard tensor calculus
   2.8 History
   2.9 See also
   2.10 References
      2.10.1 Notes
      2.10.2 Further reading and applications
   2.11 External links
3 Category of vector spaces
   3.1 See also
   3.2 References
4 Cauchy–Schwarz inequality
   4.1 Statement of the inequality
   4.2 Proof
   4.3 Alternative Proof
   4.4 Special cases
      4.4.1 R^n
      4.4.2 L^2
   4.5 Applications
      4.5.1 Probability theory
   4.6 Generalizations
      4.6.1 Positive functionals on C*- and W*-algebras
      4.6.2 Positive maps
   4.7 Physics
   4.8 See also
   4.9 Notes
   4.10 References
   4.11 External links
5 Centrosymmetric matrix
   5.1 Examples
   5.2 Algebraic structure
   5.3 Related structures
   5.4 References
   5.5 Further reading
   5.6 External links
6 Change of basis
   6.1 Preliminary notions
      6.1.1 Matrix of a set of vectors
   6.2 Change of coordinates of a vector
      6.2.1 Two dimensions
      6.2.2 Three dimensions
      6.2.3 General case
   6.3 The matrix of a linear transformation
      6.3.1 Change of basis
   6.4 The matrix of an endomorphism
      6.4.1 Change of basis
   6.5 The matrix of a bilinear form
      6.5.1 Change of basis
   6.6 Important instances
   6.7 See also
   6.8 External links
7 Characteristic polynomial
   7.1 Motivation
   7.2 Formal definition
   7.3 Examples
   7.4 Properties
   7.5 Characteristic polynomial of a product of two matrices
   7.6 Secular function and secular equation
      7.6.1 Secular function
      7.6.2 Secular equation
   7.7 See also
   7.8 References
   7.9 External links
8 Choi's theorem on completely positive maps
   8.1 Some preliminary notions
   8.2 Choi's result
      8.2.1 Statement of theorem
      8.2.2 Proof
   8.3 Consequences
      8.3.1 Kraus operators
      8.3.2 Completely copositive maps
      8.3.3 Hermitian-preserving maps
   8.4 See also
   8.5 References
9 Coates graph
   9.1 See also
   9.2 References
10 Codimension
   10.1 Definition
   10.2 Additivity of codimension and dimension counting
   10.3 Dual interpretation
   10.4 In geometric topology
   10.5 See also
   10.6 References
11 Coefficient matrix
   11.1 Example
   11.2 See also
12 Column space
   12.1 Definition
   12.2 Basis
   12.3 Dimension
   12.4 Relation to the left null space
   12.5 For matrices over a ring
   12.6 See also
   12.7 Notes
   12.8 References
      12.8.1 Textbooks
   12.9 External links
13 Column vector
   13.1 Notation
   13.2 Operations
   13.3 See also
   13.4 References
14 Commutation matrix
   14.1 Example
   14.2 References
15 Complex conjugate vector space
   15.1 Antilinear maps
   15.2 Conjugate linear maps
   15.3 Structure of the conjugate
   15.4 Complex conjugate of a Hilbert space
   15.5 See also
   15.6 References
16 Complex spacetime
   16.1 Real and complex spaces
      16.1.1 Mathematics
      16.1.2 Physics
   16.2 History
   16.3 See also
   16.4 References
      16.4.1 Notes
      16.4.2 Further reading
17 Compressed sensing
   17.1 Overview
   17.2 History
   17.3 Method
      17.3.1 Underdetermined linear system
      17.3.2 Solution / reconstruction method
      17.3.3 Total Variation based CS reconstruction
   17.4 Applications
      17.4.1 Photography
      17.4.2 Holography
      17.4.3 Facial recognition
      17.4.4 Magnetic resonance imaging
      17.4.5 Network tomography
      17.4.6 Shortwave-infrared cameras
      17.4.7 Aperture synthesis in radio astronomy
   17.5 Notes
   17.6 See also
   17.7 References
   17.8 Further reading
18 Computing the permanent
   18.1 Definition and naive algorithm
   18.2 Ryser formula
   18.3 Balasubramanian-Bax/Franklin-Glynn formula
   18.4 Special cases
      18.4.1 Planar and K3,3-free
      18.4.2 Computation modulo a number
   18.5 Approximate computation
   18.6 Notes
   18.7 References
19 Cone (linear algebra)
   19.1 Related concepts
      19.1.1 The cone of a set
      19.1.2 Salient cone
      19.1.3 Convex cone
      19.1.4 Affine cone
      19.1.5 Proper cone
   19.2 Properties
      19.2.1 Boolean, additive and linear closure
      19.2.2 Spherical section and projection
   19.3 See also
   19.4 References
20 Conformable matrix
   20.1 Examples
   20.2 References
   20.3 See also
21 Conjugate transpose
   21.1 Example
   21.2 Basic remarks
   21.3 Motivation
   21.4 Properties of the conjugate transpose
   21.5 Generalizations
   21.6 See also
   21.7 External links
22 Controlled invariant subspace
   22.1 Definition
   22.2 Properties
   22.3 References
23 Convex cone
   23.1 Definition
      23.1.1 Convex cones are linear cones
      23.1.2 Alternative definitions
   23.2 Blunt and pointed cones
   23.3 Half-spaces
   23.4 Salient convex cones and perfect half-spaces
   23.5 Cross-sections and projections of a convex set
      23.5.1 Flat section
      23.5.2 Spherical section
   23.6 Dual cone
   23.7 Partial order defined by a convex cone
   23.8 Proper convex cone
   23.9 Examples of convex cones
   23.10 References
   23.11 See also
      23.11.1 Related combinations
24 Coordinate space
   24.1 Definition
   24.2 Discussion
   24.3 See also
25 Coordinate vector
   25.1 Definition
   25.2 The standard representation
   25.3 Examples
      25.3.1 Example 1
      25.3.2 Example 2
   25.4 Basis transformation matrix
      25.4.1 Corollary
      25.4.2 Remarks
   25.5 Infinite-dimensional vector spaces
   25.6 See also
26 Corank
   26.1 See also
27 Cramer's rule
   27.1 General case
   27.2 Proof
   27.3 Finding inverse matrix
   27.4 Applications
      27.4.1 Explicit formulas for small systems
      27.4.2 Differential geometry
      27.4.3 Integer programming
      27.4.4 Ordinary differential equations
   27.5 Geometric interpretation
   27.6 Other proofs
      27.6.1 A short proof
      27.6.2 Proof using Clifford algebra
   27.7 Systems of vector equations: Cramer's Rule extended
      27.7.1 Solving for unknown vectors
      27.7.2 Solving for unknown scalars
      27.7.3 Projecting a vector onto an arbitrary basis
      27.7.4 Projecting a vector onto an orthogonal basis
      27.7.5 Solving a system of vector equations using SymPy
   27.8 Incompatible and indeterminate cases
   27.9 Notes
   27.10 External links
28 CSS code
   28.1 Construction
   28.2 References
   28.3 External links
29 Cyclic decomposition theorem
   29.1 Preliminaries
      29.1.1 Cyclic subspace
      29.1.2 Annihilator of a vector
      29.1.3 Conductor
      29.1.4 Admissible subspace
   29.2 Cyclic decomposition theorem
   29.3 References
   29.4 External links
30 Cyclic subspace
   30.1 Definition
      30.1.1 Examples
   30.2 Companion matrix
   30.3 See also
   30.4 External links
   30.5 References
   30.6 Text and image sources, contributors, and licenses
      30.6.1 Text
      30.6.2 Images
      30.6.3 Content license

Chapter 1

    Canonical basis

In mathematics, a canonical basis is a basis of an algebraic structure that is canonical in a sense that depends on the precise context:

• In a coordinate space, and more generally in a free module, it refers to the standard basis defined by the Kronecker delta.
• In a polynomial ring, it refers to its standard basis given by the monomials, $(X^i)_i$.
• For finite extension fields, it means the polynomial basis.
• In linear algebra, it refers to a set of $n$ linearly independent generalized eigenvectors of an $n \times n$ matrix $A$, if the set is composed entirely of chains.[1]

1.1 Representation theory

In representation theory there are several bases that are called canonical, e.g. Lusztig's canonical basis and the closely related Kashiwara's crystal basis in quantum groups and their representations. There is a general concept underlying these bases:

Consider the ring of integral Laurent polynomials $\mathcal{Z} := \mathbb{Z}\left[v, v^{-1}\right]$ with its two subrings $\mathcal{Z}^{\pm} := \mathbb{Z}\left[v^{\pm 1}\right]$ and the automorphism $\overline{\,\cdot\,}$ defined by $\overline{v} := v^{-1}$.

A precanonical structure on a free $\mathcal{Z}$-module $F$ consists of:

• A standard basis $(t_i)_{i \in I}$ of $F$,
• A partial order $\leq$ on $I$ that is interval finite, i.e. $(-\infty, i] := \{ j \in I \mid j \leq i \}$ is finite for all $i \in I$,
• A dualization operation, i.e. a bijection $F \to F$ of order two that is $\overline{\,\cdot\,}$-semilinear and will be denoted by $\overline{\,\cdot\,}$ as well.

If a precanonical structure is given, then one can define the $\mathcal{Z}^{\pm}$ submodule $F^{\pm} := \sum_j \mathcal{Z}^{\pm} t_j$ of $F$.

A canonical basis at $v = 0$ of the precanonical structure is then a $\mathcal{Z}$-basis $(c_i)_{i \in I}$ of $F$ that satisfies:

$$\overline{c_i} = c_i \quad \text{and} \quad c_i \in \sum_{j \leq i} \mathcal{Z}^{+} t_j \quad \text{and} \quad c_i \equiv t_i \mod vF^{+}$$

for all $i \in I$. A canonical basis at $v = \infty$ is analogously defined to be a basis $(\widetilde{c}_i)_{i \in I}$ that satisfies

$$\overline{\widetilde{c}_i} = \widetilde{c}_i \quad \text{and} \quad \widetilde{c}_i \in \sum_{j \leq i} \mathcal{Z}^{-} t_j \quad \text{and} \quad \widetilde{c}_i \equiv t_i \mod v^{-1}F^{-}$$

for all $i \in I$. The naming "at $v = \infty$" alludes to the fact that $\lim_{v \to \infty} v^{-1} = 0$, and hence the specialization $v \mapsto \infty$ corresponds to quotienting out the relation $v^{-1} = 0$.

One can show that there exists at most one canonical basis at $v = 0$ (and at most one at $v = \infty$) for each precanonical structure. A sufficient condition for existence is that the polynomials $r_{ij} \in \mathcal{Z}$ defined by $\overline{t_j} = \sum_i r_{ij} t_i$ satisfy $r_{ii} = 1$ and $r_{ij} \neq 0 \implies i \leq j$.

A canonical basis at $v = 0$ (resp. $v = \infty$) induces an isomorphism from $F^{+} \cap \overline{F^{+}} = \sum_i \mathbb{Z} c_i$ to $F^{+}/vF^{+}$ (resp. from $F^{-} \cap \overline{F^{-}} = \sum_i \mathbb{Z} \widetilde{c}_i$ to $F^{-}/v^{-1}F^{-}$).

1.1.1 Examples

Quantum groups

The canonical bases of quantum groups in the sense of Lusztig and Kashiwara are canonical bases at $v = 0$.

    Hecke algebras

Let $(W, S)$ be a Coxeter group. The corresponding Iwahori–Hecke algebra $H$ has the standard basis $(T_w)_{w \in W}$, the group is partially ordered by the Bruhat order, which is interval finite, and there is a dualization operation defined by $\overline{T_w} := T_{w^{-1}}^{-1}$. This is a precanonical structure on $H$ that satisfies the sufficient condition above, and the corresponding canonical basis of $H$ at $v = 0$ is the Kazhdan–Lusztig basis

$$C'_w = \sum_{y \leq w} P_{y,w}\left(v^2\right) T_y$$

with $P_{y,w}$ being the Kazhdan–Lusztig polynomials.

1.2 Linear algebra

If we are given an $n \times n$ matrix $A$ and wish to find a matrix $J$ in Jordan normal form similar to $A$, we are interested only in sets of linearly independent generalized eigenvectors. A matrix in Jordan normal form is an "almost diagonal matrix", that is, as close to diagonal as possible. A diagonal matrix $D$ is a special case of a matrix in Jordan normal form. An ordinary eigenvector is a special case of a generalized eigenvector.

Every $n \times n$ matrix $A$ possesses $n$ linearly independent generalized eigenvectors. Generalized eigenvectors corresponding to distinct eigenvalues are linearly independent. If $\lambda$ is an eigenvalue of $A$ of algebraic multiplicity $\mu$, then $A$ will have $\mu$ linearly independent generalized eigenvectors corresponding to $\lambda$.

For any given $n \times n$ matrix $A$, there are infinitely many ways to pick the $n$ linearly independent generalized eigenvectors. If they are chosen in a particularly judicious manner, we can use these vectors to show that $A$ is similar to a matrix in Jordan normal form. In particular,

Definition: A set of $n$ linearly independent generalized eigenvectors is a canonical basis if it is composed entirely of chains.

Thus, once we have determined that a generalized eigenvector of rank $m$ is in a canonical basis, it follows that the $m - 1$ vectors $x_{m-1}, x_{m-2}, \ldots, x_1$ that are in the chain generated by $x_m$ are also in the canonical basis.[2]

1.2.1 Method

Let $\lambda_i$ be an eigenvalue of $A$ of algebraic multiplicity $\mu_i$. First, find the ranks (matrix ranks) of the matrices $(A - \lambda_i I), (A - \lambda_i I)^2, \ldots, (A - \lambda_i I)^{m_i}$. The integer $m_i$ is determined to be the first integer for which $(A - \lambda_i I)^{m_i}$ has rank $n - \mu_i$ ($n$ being the number of rows or columns of $A$, that is, $A$ is $n \times n$).

Now define

$$\rho_k = \operatorname{rank}(A - \lambda_i I)^{k-1} - \operatorname{rank}(A - \lambda_i I)^k \qquad (k = 1, 2, \ldots, m_i).$$

The variable $\rho_k$ designates the number of linearly independent generalized eigenvectors of rank $k$ (generalized eigenvector rank; see generalized eigenvector) corresponding to the eigenvalue $\lambda_i$ that will appear in a canonical basis for $A$. Note that $\operatorname{rank}(A - \lambda_i I)^0 = \operatorname{rank}(I) = n$.


Once we have determined the number of generalized eigenvectors of each rank that a canonical basis has, we can obtain the vectors explicitly (see generalized eigenvector).[3]
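This rank bookkeeping is easy to mechanize. The following is a minimal NumPy sketch (not part of the original article) that computes the ranks of the powers of $(A - \lambda I)$ and the differences $\rho_k$; the matrix is the one from the example below, and numpy.linalg.matrix_rank stands in for an exact rank computation:

```python
import numpy as np

# The matrix from the example below.
A = np.array([
    [4, 1, 1, 0, 0, -1],
    [0, 4, 2, 0, 0,  1],
    [0, 0, 4, 1, 0,  0],
    [0, 0, 0, 5, 1,  0],
    [0, 0, 0, 0, 5,  2],
    [0, 0, 0, 0, 0,  4],
], dtype=float)

def rho(A, lam, mu):
    """Return [rho_1, ..., rho_m] for eigenvalue lam of algebraic multiplicity mu."""
    n = A.shape[0]
    B = A - lam * np.eye(n)
    ranks = [n]                    # rank of (A - lam*I)^0 = rank(I) = n
    P = np.eye(n)
    while ranks[-1] > n - mu:      # stop at the first power m with rank n - mu
        P = P @ B
        ranks.append(np.linalg.matrix_rank(P))
    return [ranks[k - 1] - ranks[k] for k in range(1, len(ranks))]

print(rho(A, 4, 4))   # [1, 1, 1, 1]: one generalized eigenvector of each rank 1..4
print(rho(A, 5, 2))   # [1, 1]: one of rank 1 and one of rank 2
```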

1.2.2 Example

This example illustrates a canonical basis with two chains. Unfortunately, it is a little difficult to construct an interesting example of low order.[4] The matrix

$$A = \begin{pmatrix} 4 & 1 & 1 & 0 & 0 & -1 \\ 0 & 4 & 2 & 0 & 0 & 1 \\ 0 & 0 & 4 & 1 & 0 & 0 \\ 0 & 0 & 0 & 5 & 1 & 0 \\ 0 & 0 & 0 & 0 & 5 & 2 \\ 0 & 0 & 0 & 0 & 0 & 4 \end{pmatrix}$$

has eigenvalues $\lambda_1 = 4$ and $\lambda_2 = 5$ with algebraic multiplicities $\mu_1 = 4$ and $\mu_2 = 2$, but geometric multiplicities

$\gamma_1 = 1$ and $\gamma_2 = 1$.

For $\lambda_1 = 4$, we have $n - \mu_1 = 6 - 4 = 2$:

$(A - 4I)$ has rank 5,
$(A - 4I)^2$ has rank 4,
$(A - 4I)^3$ has rank 3,
$(A - 4I)^4$ has rank 2.

Therefore $m_1 = 4$:

$$\rho_4 = \operatorname{rank}(A - 4I)^3 - \operatorname{rank}(A - 4I)^4 = 3 - 2 = 1,$$
$$\rho_3 = \operatorname{rank}(A - 4I)^2 - \operatorname{rank}(A - 4I)^3 = 4 - 3 = 1,$$
$$\rho_2 = \operatorname{rank}(A - 4I)^1 - \operatorname{rank}(A - 4I)^2 = 5 - 4 = 1,$$
$$\rho_1 = \operatorname{rank}(A - 4I)^0 - \operatorname{rank}(A - 4I)^1 = 6 - 5 = 1.$$

Thus, a canonical basis for $A$ will have, corresponding to $\lambda_1 = 4$, one generalized eigenvector each of ranks 4, 3, 2 and 1.

For $\lambda_2 = 5$, we have $n - \mu_2 = 6 - 2 = 4$:

$(A - 5I)$ has rank 5,
$(A - 5I)^2$ has rank 4.

Therefore $m_2 = 2$:

$$\rho_2 = \operatorname{rank}(A - 5I)^1 - \operatorname{rank}(A - 5I)^2 = 5 - 4 = 1,$$
$$\rho_1 = \operatorname{rank}(A - 5I)^0 - \operatorname{rank}(A - 5I)^1 = 6 - 5 = 1.$$

Thus, a canonical basis for $A$ will have, corresponding to $\lambda_2 = 5$, one generalized eigenvector each of ranks 2 and 1.

A canonical basis for $A$ is


$$\{x_1, x_2, x_3, x_4, y_1, y_2\} = \left\{ \begin{pmatrix} -4 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} -27 \\ -4 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 25 \\ -25 \\ -2 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 36 \\ -12 \\ -2 \\ 2 \\ -1 \end{pmatrix}, \begin{pmatrix} 3 \\ 2 \\ 1 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} -8 \\ -4 \\ -1 \\ 0 \\ 1 \\ 0 \end{pmatrix} \right\}.$$

$x_1$ is the ordinary eigenvector associated with $\lambda_1$. $x_2, x_3$ and $x_4$ are generalized eigenvectors associated with $\lambda_1$. $y_1$ is the ordinary eigenvector associated with $\lambda_2$. $y_2$ is a generalized eigenvector associated with $\lambda_2$.

A matrix $J$ in Jordan normal form, similar to $A$, is obtained as follows:

$$M = (x_1 \mid x_2 \mid x_3 \mid x_4 \mid y_1 \mid y_2) = \begin{pmatrix} -4 & -27 & 25 & 0 & 3 & -8 \\ 0 & -4 & -25 & 36 & 2 & -4 \\ 0 & 0 & -2 & -12 & 1 & -1 \\ 0 & 0 & 0 & -2 & 1 & 0 \\ 0 & 0 & 0 & 2 & 0 & 1 \\ 0 & 0 & 0 & -1 & 0 & 0 \end{pmatrix}, \qquad J = \begin{pmatrix} 4 & 1 & 0 & 0 & 0 & 0 \\ 0 & 4 & 1 & 0 & 0 & 0 \\ 0 & 0 & 4 & 1 & 0 & 0 \\ 0 & 0 & 0 & 4 & 0 & 0 \\ 0 & 0 & 0 & 0 & 5 & 1 \\ 0 & 0 & 0 & 0 & 0 & 5 \end{pmatrix},$$

where the matrix $M$ is a generalized modal matrix for $A$ and $AM = MJ$.[5]
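Since $M$ and $J$ are given explicitly above, the similarity can be checked numerically; a minimal NumPy sketch:

```python
import numpy as np

A = np.array([[4,1,1,0,0,-1],[0,4,2,0,0,1],[0,0,4,1,0,0],
              [0,0,0,5,1,0],[0,0,0,0,5,2],[0,0,0,0,0,4]])
M = np.array([[-4,-27,25,0,3,-8],[0,-4,-25,36,2,-4],[0,0,-2,-12,1,-1],
              [0,0,0,-2,1,0],[0,0,0,2,0,1],[0,0,0,-1,0,0]])
J = np.array([[4,1,0,0,0,0],[0,4,1,0,0,0],[0,0,4,1,0,0],
              [0,0,0,4,0,0],[0,0,0,0,5,1],[0,0,0,0,0,5]])

assert (A @ M == M @ J).all()    # AM = MJ, i.e. J = M^{-1} A M
```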

1.3 See also

• Canonical form
• Normal form (disambiguation)
• Polynomial basis
• Normal basis
• Change of basis

1.4 Notes

[1] Bronson (1970, p. 196)

    [2] Bronson (1970, pp. 196,197)

    [3] Bronson (1970, pp. 197,198)

    [4] Nering (1970, pp. 122,123)

    [5] Bronson (1970, p. 203)

1.5 References

• Bronson, Richard (1970), Matrix Methods: An Introduction, New York: Academic Press, LCCN 70097490
• Deng, Bangming; Du, Jie; Parshall, Brian; Wang, Jianpan (2008), Finite Dimensional Algebras and Quantum Groups, Mathematical Surveys and Monographs 150, Providence, R.I.: American Mathematical Society, ISBN 9780821875315
• Nering, Evar D. (1970), Linear Algebra and Matrix Theory (2nd ed.), New York: Wiley, LCCN 76091646

Chapter 2

    Cartesian tensor

In geometry and linear algebra, a Cartesian tensor uses an orthonormal basis to represent a tensor in a Euclidean space in the form of components. Converting a tensor's components from one such basis to another is done through an orthogonal transformation.

The most familiar coordinate systems are the two-dimensional and three-dimensional Cartesian coordinate systems. Cartesian tensors may be used with any Euclidean space, or more technically, any finite-dimensional vector space over the field of real numbers that has an inner product.

Use of Cartesian tensors occurs in physics and engineering, such as with the Cauchy stress tensor and the moment of inertia tensor in rigid body dynamics. Sometimes general curvilinear coordinates are convenient, as in high-deformation continuum mechanics, or even necessary, as in general relativity. While orthonormal bases may be found for some such coordinate systems (e.g. tangent to spherical coordinates), Cartesian tensors may provide considerable simplification for applications in which rotations of rectilinear coordinate axes suffice. The transformation is a passive transformation, since the coordinates are changed and not the physical system.

    2.1 Cartesian basis and related terminology

2.1.1 Vectors in three dimensions

In 3d Euclidean space, $\mathbb{R}^3$, the standard basis is $\mathbf{e}_x, \mathbf{e}_y, \mathbf{e}_z$. Each basis vector points along the x-, y-, and z-axes, and the vectors are all unit vectors (or normalized), so the basis is orthonormal.

Throughout, when referring to Cartesian coordinates in three dimensions, a right-handed system is assumed; this is much more common than a left-handed system in practice, see orientation (vector space) for details.

For Cartesian tensors of order 1, a Cartesian vector $\mathbf{a}$ can be written algebraically as a linear combination of the basis vectors $\mathbf{e}_x, \mathbf{e}_y, \mathbf{e}_z$:

$$\mathbf{a} = a_x \mathbf{e}_x + a_y \mathbf{e}_y + a_z \mathbf{e}_z$$

where the coordinates of the vector with respect to the Cartesian basis are denoted $a_x, a_y, a_z$. It is common and helpful to display the basis vectors as column vectors

$$\mathbf{e}_x = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad \mathbf{e}_y = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \quad \mathbf{e}_z = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$$

when we have a coordinate vector in a column vector representation:

$$\mathbf{a} = \begin{pmatrix} a_x \\ a_y \\ a_z \end{pmatrix}$$


A row vector representation is also legitimate, although in the context of general curvilinear coordinate systems the row and column vector representations are used separately for specific reasons; see Einstein notation and covariance and contravariance of vectors for why.

The term "component" of a vector is ambiguous: it could refer to:

• a specific coordinate of the vector such as $a_z$ (a scalar), and similarly for x and y, or
• the coordinate scalar-multiplying the corresponding basis vector, in which case the "y-component" of $\mathbf{a}$ is $a_y \mathbf{e}_y$ (a vector), and similarly for x and z.

A more general notation is tensor index notation, which has the flexibility of numerical values rather than fixed coordinate labels. The Cartesian labels are replaced by tensor indices in the basis vectors $\mathbf{e}_x \mapsto \mathbf{e}_1$, $\mathbf{e}_y \mapsto \mathbf{e}_2$, $\mathbf{e}_z \mapsto \mathbf{e}_3$ and coordinates $a_x \mapsto a_1$, $a_y \mapsto a_2$, $a_z \mapsto a_3$. In general, the notation $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3$ refers to any basis, and $a_1, a_2, a_3$ refers to the corresponding coordinate system; although here they are restricted to the Cartesian system. Then:

$$\mathbf{a} = a_1 \mathbf{e}_1 + a_2 \mathbf{e}_2 + a_3 \mathbf{e}_3 = \sum_{i=1}^{3} a_i \mathbf{e}_i$$

It is standard to use the Einstein notation: the summation sign for summation over an index repeated only twice within a term may be suppressed for notational conciseness:

$$\mathbf{a} = \sum_{i=1}^{3} a_i \mathbf{e}_i \equiv a_i \mathbf{e}_i$$

An advantage of the index notation over coordinate-specific notations is the independence of the dimension of the underlying vector space, i.e. the same expression on the right hand side takes the same form in higher dimensions (see below). Previously, the Cartesian labels x, y, z were just labels and not indices. (It is informal to say "$i = x, y, z$".)

2.1.2 Second order tensors in three dimensions

A dyadic tensor T is an order-2 tensor formed by the tensor product ⊗ of two Cartesian vectors $\mathbf{a}$ and $\mathbf{b}$, written $\mathbf{T} = \mathbf{a} \otimes \mathbf{b}$. Analogous to vectors, it can be written as a linear combination of the tensor basis $\mathbf{e}_x \otimes \mathbf{e}_x \equiv \mathbf{e}_{xx}$, $\mathbf{e}_x \otimes \mathbf{e}_y \equiv \mathbf{e}_{xy}$, ..., $\mathbf{e}_z \otimes \mathbf{e}_z \equiv \mathbf{e}_{zz}$ (the right hand side of each identity is only an abbreviation, nothing more):

$$\begin{aligned} \mathbf{T} &= (a_x \mathbf{e}_x + a_y \mathbf{e}_y + a_z \mathbf{e}_z) \otimes (b_x \mathbf{e}_x + b_y \mathbf{e}_y + b_z \mathbf{e}_z) \\ &= a_x b_x \mathbf{e}_x \otimes \mathbf{e}_x + a_x b_y \mathbf{e}_x \otimes \mathbf{e}_y + a_x b_z \mathbf{e}_x \otimes \mathbf{e}_z \\ &\quad + a_y b_x \mathbf{e}_y \otimes \mathbf{e}_x + a_y b_y \mathbf{e}_y \otimes \mathbf{e}_y + a_y b_z \mathbf{e}_y \otimes \mathbf{e}_z \\ &\quad + a_z b_x \mathbf{e}_z \otimes \mathbf{e}_x + a_z b_y \mathbf{e}_z \otimes \mathbf{e}_y + a_z b_z \mathbf{e}_z \otimes \mathbf{e}_z \end{aligned}$$

Representing each basis tensor as a matrix:

$$\mathbf{e}_x \otimes \mathbf{e}_x \equiv \mathbf{e}_{xx} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad \mathbf{e}_x \otimes \mathbf{e}_y \equiv \mathbf{e}_{xy} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad \mathbf{e}_z \otimes \mathbf{e}_z \equiv \mathbf{e}_{zz} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

then T can be represented more systematically as a matrix:

$$\mathbf{T} = \begin{pmatrix} a_x b_x & a_x b_y & a_x b_z \\ a_y b_x & a_y b_y & a_y b_z \\ a_z b_x & a_z b_y & a_z b_z \end{pmatrix}$$
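In coordinates, the dyadic $\mathbf{T} = \mathbf{a} \otimes \mathbf{b}$ is just the outer product of the two coordinate columns; a short sketch (arbitrary example values):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])   # arbitrary vectors
b = np.array([4.0, 5.0, 6.0])

T = np.outer(a, b)              # T_ij = a_i b_j
print(T[0, 2])                  # a_x * b_z = 1.0 * 6.0 = 6.0
```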


See matrix multiplication for the notational correspondence between matrices and the dot and tensor products.

More generally, whether or not T is a tensor product of two vectors, it is always a linear combination of the basis tensors with coordinates $T_{xx}, T_{xy}, \ldots, T_{zz}$:

$$\begin{aligned} \mathbf{T} &= T_{xx} \mathbf{e}_{xx} + T_{xy} \mathbf{e}_{xy} + T_{xz} \mathbf{e}_{xz} \\ &\quad + T_{yx} \mathbf{e}_{yx} + T_{yy} \mathbf{e}_{yy} + T_{yz} \mathbf{e}_{yz} \\ &\quad + T_{zx} \mathbf{e}_{zx} + T_{zy} \mathbf{e}_{zy} + T_{zz} \mathbf{e}_{zz} \end{aligned}$$

while in terms of tensor indices:

$$\mathbf{T} = T_{ij} \mathbf{e}_{ij} \equiv \sum_{ij} T_{ij} \mathbf{e}_i \otimes \mathbf{e}_j ,$$

and in matrix form:

$$\mathbf{T} = \begin{pmatrix} T_{xx} & T_{xy} & T_{xz} \\ T_{yx} & T_{yy} & T_{yz} \\ T_{zx} & T_{zy} & T_{zz} \end{pmatrix}$$

Second order tensors occur naturally in physics and engineering when physical quantities have directional dependence in the system, often in a stimulus-response way. This can be seen mathematically through one aspect of tensors: they are multilinear functions. A second order tensor T which takes in a vector u of some magnitude and direction will return a vector v of a different magnitude and, in general, a different direction to u. The notation used for functions in mathematical analysis leads us to write $\mathbf{v} = \mathbf{T}(\mathbf{u})$,[1] while the same idea can be expressed in matrix and index notations[2] (including the summation convention), respectively:

$$\begin{pmatrix} v_x \\ v_y \\ v_z \end{pmatrix} = \begin{pmatrix} T_{xx} & T_{xy} & T_{xz} \\ T_{yx} & T_{yy} & T_{yz} \\ T_{zx} & T_{zy} & T_{zz} \end{pmatrix} \begin{pmatrix} u_x \\ u_y \\ u_z \end{pmatrix}, \qquad v_i = T_{ij} u_j$$

By "linear", if $\mathbf{u} = \alpha \mathbf{r} + \beta \mathbf{s}$ for two scalars $\alpha$ and $\beta$ and vectors $\mathbf{r}$ and $\mathbf{s}$, then in function and index notations:

$$\mathbf{v} = \mathbf{T}(\alpha \mathbf{r} + \beta \mathbf{s}) = \alpha \mathbf{T}(\mathbf{r}) + \beta \mathbf{T}(\mathbf{s})$$

$$v_i = T_{ij}(\alpha r_j + \beta s_j) = \alpha T_{ij} r_j + \beta T_{ij} s_j$$

and similarly for the matrix notation. The function, matrix, and index notations all mean the same thing. The matrix forms provide a clear display of the components, while the index form allows easier tensor-algebraic manipulation of the formulae in a compact manner. Both provide the physical interpretation of directions: vectors have one direction, while second order tensors connect two directions together. One can associate a tensor index or coordinate label with a basis vector direction.

The use of second order tensors is the minimum needed to describe changes in magnitudes and directions of vectors, since the dot product of two vectors is always a scalar, while the cross product of two vectors is always a pseudovector perpendicular to the plane defined by the vectors, so these products of vectors alone cannot obtain a new vector of any magnitude in any direction. (See also below for more on the dot and cross products.) The tensor product of two vectors is a second order tensor, although this has no obvious directional interpretation by itself.

The previous idea can be continued: if T takes in two vectors p and q, it will return a scalar r. In function notation we write $r = \mathbf{T}(\mathbf{p}, \mathbf{q})$, while in matrix and index notations (including the summation convention), respectively:

$$r = \begin{pmatrix} p_x & p_y & p_z \end{pmatrix} \begin{pmatrix} T_{xx} & T_{xy} & T_{xz} \\ T_{yx} & T_{yy} & T_{yz} \\ T_{zx} & T_{zy} & T_{zz} \end{pmatrix} \begin{pmatrix} q_x \\ q_y \\ q_z \end{pmatrix}, \qquad r = p_i T_{ij} q_j$$

The tensor T is linear in both input vectors. When vectors and tensors are written without reference to components, and indices are not used, sometimes a dot · is placed where summations over indices (known as tensor contractions) are taken. For the above cases:[1][2]

$$\mathbf{v} = \mathbf{T} \cdot \mathbf{u}$$

$$r = \mathbf{p} \cdot \mathbf{T} \cdot \mathbf{q}$$

motivated by the dot product notation:

$$\mathbf{a} \cdot \mathbf{b} \equiv a_i b_i$$

More generally, a tensor of order m which takes in n vectors (where n is between 0 and m inclusive) will return a tensor of order m − n; see Tensor: As multilinear maps for further generalizations and details. The concepts above also apply to pseudovectors in the same way as for vectors. The vectors and tensors themselves can vary throughout space, in which case we have vector fields and tensor fields, and can also depend on time.

Following are some examples:

For the electrical conduction example, relating the current density J to the applied electric field E via the conductivity tensor σ, the index and matrix notations would be:

$$J_i = \sigma_{ij} E_j \equiv \sum_j \sigma_{ij} E_j$$

$$\begin{pmatrix} J_x \\ J_y \\ J_z \end{pmatrix} = \begin{pmatrix} \sigma_{xx} & \sigma_{xy} & \sigma_{xz} \\ \sigma_{yx} & \sigma_{yy} & \sigma_{yz} \\ \sigma_{zx} & \sigma_{zy} & \sigma_{zz} \end{pmatrix} \begin{pmatrix} E_x \\ E_y \\ E_z \end{pmatrix}$$

while for the rotational kinetic energy T:

$$T = \frac{1}{2} \omega_i I_{ij} \omega_j \equiv \frac{1}{2} \sum_{ij} \omega_i I_{ij} \omega_j ,$$

$$T = \frac{1}{2} \begin{pmatrix} \omega_x & \omega_y & \omega_z \end{pmatrix} \begin{pmatrix} I_{xx} & I_{xy} & I_{xz} \\ I_{yx} & I_{yy} & I_{yz} \\ I_{zx} & I_{zy} & I_{zz} \end{pmatrix} \begin{pmatrix} \omega_x \\ \omega_y \\ \omega_z \end{pmatrix} .$$

See also constitutive equation for more specialized examples.

2.1.3 Vectors and tensors in n dimensions

In n-dimensional Euclidean space over the real numbers, $\mathbb{R}^n$, the standard basis is denoted $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3, \ldots, \mathbf{e}_n$. Each basis vector $\mathbf{e}_i$ points along the positive $x_i$ axis, with the basis being orthonormal. Component j of $\mathbf{e}_i$ is given by the Kronecker delta:

$$(\mathbf{e}_i)_j = \delta_{ij}$$

A vector in $\mathbb{R}^n$ takes the form:

$$\mathbf{a} = a_i \mathbf{e}_i \equiv \sum_i a_i \mathbf{e}_i .$$

Similarly for the order-2 tensor above, for each vector a and b in $\mathbb{R}^n$:

$$\mathbf{T} = a_i b_j \mathbf{e}_{ij} \equiv \sum_{ij} a_i b_j \mathbf{e}_i \otimes \mathbf{e}_j ,$$

or more generally:

$$\mathbf{T} = T_{ij} \mathbf{e}_{ij} \equiv \sum_{ij} T_{ij} \mathbf{e}_i \otimes \mathbf{e}_j .$$

    2.2 Transformations of Cartesian vectors (any number of dimensions)

2.2.1 Meaning of invariance under coordinate transformations

The position vector x in $\mathbb{R}^n$ is a simple and common example of a vector, and can be represented in any coordinate system. Consider the case of rectangular coordinate systems with orthonormal bases only. It is possible to have a coordinate system with rectangular geometry if the basis vectors are all mutually perpendicular and not normalized, in which case the basis is orthogonal but not orthonormal. However, orthonormal bases are easier to manipulate and are often used in practice. The following results are true for orthonormal bases, not orthogonal ones.

In one rectangular coordinate system, x as a contravector has coordinates $x^i$ and basis vectors $\mathbf{e}_i$, while as a covector it has coordinates $x_i$ and basis covectors $\mathbf{e}^i$, and we have:

$$\mathbf{x} = x^i \mathbf{e}_i , \qquad \mathbf{x} = x_i \mathbf{e}^i$$

In another rectangular coordinate system, x as a contravector has coordinates $\bar{x}^i$ and bases $\bar{\mathbf{e}}_i$, while as a covector it has coordinates $\bar{x}_i$ and bases $\bar{\mathbf{e}}^i$, and we have:

$$\mathbf{x} = \bar{x}^i \bar{\mathbf{e}}_i , \qquad \mathbf{x} = \bar{x}_i \bar{\mathbf{e}}^i$$

Each new coordinate is a function of all the old ones, and vice versa for the inverse function:

$$\bar{x}^i = \bar{x}^i\left(x^1, x^2, \ldots\right) \quad \rightleftharpoons \quad x^i = x^i\left(\bar{x}^1, \bar{x}^2, \ldots\right)$$

$$\bar{x}_i = \bar{x}_i\left(x_1, x_2, \ldots\right) \quad \rightleftharpoons \quad x_i = x_i\left(\bar{x}_1, \bar{x}_2, \ldots\right)$$

and similarly each new basis vector is a function of all the old ones, and vice versa for the inverse function:

$$\bar{\mathbf{e}}_j = \bar{\mathbf{e}}_j\left(\mathbf{e}_1, \mathbf{e}_2, \ldots\right) \quad \rightleftharpoons \quad \mathbf{e}_j = \mathbf{e}_j\left(\bar{\mathbf{e}}_1, \bar{\mathbf{e}}_2, \ldots\right)$$

$$\bar{\mathbf{e}}^j = \bar{\mathbf{e}}^j\left(\mathbf{e}^1, \mathbf{e}^2, \ldots\right) \quad \rightleftharpoons \quad \mathbf{e}^j = \mathbf{e}^j\left(\bar{\mathbf{e}}^1, \bar{\mathbf{e}}^2, \ldots\right)$$

for all i, j.

A vector is invariant under any change of basis, so if coordinates transform according to a transformation matrix L, the bases transform according to the matrix inverse $L^{-1}$, and conversely if the coordinates transform according to the inverse $L^{-1}$, the bases transform according to the matrix L. The difference between each of these transformations is shown conventionally through the indices as superscripts for contravariance and subscripts for covariance, and the coordinates and bases are linearly transformed according to the following rules:

$$\bar{x}_j = x_i L_{ij} , \qquad \bar{\mathbf{e}}_j = \left(L^{-1}\right)_{jk} \mathbf{e}_k$$

for the coordinates and bases shown, and with the roles of $L$ and $L^{-1}$ interchanged for the dual (covector) components and their bases,


where $L_{ij}$ represents the entries of the transformation matrix (row number is i and column number is j) and $(L^{-1})_{ik}$ denotes the entries of the inverse of the matrix $L_{ik}$.

If L is an orthogonal transformation (orthogonal matrix), the objects transforming by it are defined as Cartesian tensors. This geometrically has the interpretation that a rectangular coordinate system is mapped to another rectangular coordinate system, in which the norm of the vector x is preserved (and distances are preserved).

The determinant of L is $\det(\mathbf{L}) = \pm 1$, which corresponds to two types of orthogonal transformation: (+1) for rotations and (−1) for improper rotations (including reflections).

There are considerable algebraic simplifications; the matrix transpose is the inverse, from the definition of an orthogonal transformation:

$$\mathbf{L}^{\mathrm{T}} = \mathbf{L}^{-1} \implies \left(L^{-1}\right)_{ij} = \left(L^{\mathrm{T}}\right)_{ij} = L_{ji}$$

From the previous table, orthogonal transformations of covectors and contravectors are identical. There is no need to differ between raising and lowering indices, and in this context and applications to physics and engineering the indices are usually all subscripted to remove confusion for exponents. All indices will be lowered in the remainder of this article. One can determine the actual raised and lowered indices by considering which quantities are covectors or contravectors, and the relevant transformation rules.

Exactly the same transformation rules apply to any vector a, not only the position vector. If its components $a_i$ do not transform according to the rules, a is not a vector.

Despite the similarity between the expressions above, for the change of coordinates such as $\bar{x}_j = L_{ij} x_i$, and the action of a tensor on a vector like $b_i = T_{ij} a_j$, L is not a tensor, but T is. In the change of coordinates, L is a matrix, used to relate two rectangular coordinate systems with orthonormal bases together. For the tensor relating a vector to a vector, the vectors and tensors throughout the equation all belong to the same coordinate system and basis.
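These statements are easy to check numerically. A sketch, assuming an arbitrary proper rotation about the z-axis for L:

```python
import numpy as np

th = 0.3                                       # arbitrary rotation angle
L = np.array([[np.cos(th), -np.sin(th), 0],
              [np.sin(th),  np.cos(th), 0],
              [0,           0,          1]])   # proper rotation about z

x = np.array([1.0, 2.0, 3.0])
x_bar = L.T @ x                                # coordinates: x̄_j = x_i L_ij

assert np.allclose(L.T, np.linalg.inv(L))      # L^T = L^{-1} (orthogonal)
assert np.isclose(np.linalg.det(L), 1.0)       # det = +1: a rotation
assert np.isclose(np.linalg.norm(x_bar), np.linalg.norm(x))   # norm preserved
```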

2.2.2 Derivatives and Jacobian matrix elements

The entries of L are partial derivatives of the new or old coordinates with respect to the old or new coordinates, respectively.

Differentiating $\bar{x}_i$ with respect to $x_k$:

$$\frac{\partial \bar{x}_i}{\partial x_k} = \frac{\partial}{\partial x_k}\left(x_j L_{ji}\right) = L_{ji} \frac{\partial x_j}{\partial x_k} = \delta_{kj} L_{ji} = L_{ki}$$

so

$$L_{ij} \equiv \frac{\partial \bar{x}_j}{\partial x_i}$$

is an element of the Jacobian matrix. There is a (partially mnemonical) correspondence between index positions attached to L and in the partial derivative: i at the top and j at the bottom, in each case, although for Cartesian tensors the indices can be lowered.

Conversely, differentiating $x_j$ with respect to $\bar{x}_k$:

$$\frac{\partial x_j}{\partial \bar{x}_k} = \frac{\partial}{\partial \bar{x}_k}\left(\bar{x}_i \left(L^{-1}\right)_{ij}\right) = \frac{\partial \bar{x}_i}{\partial \bar{x}_k} \left(L^{-1}\right)_{ij} = \delta_{ki} \left(L^{-1}\right)_{ij} = \left(L^{-1}\right)_{kj}$$

so

$$\left(L^{-1}\right)_{ij} \equiv \frac{\partial x_j}{\partial \bar{x}_i}$$

is an element of the inverse Jacobian matrix, with a similar index correspondence.

Many sources state transformations in terms of the partial derivatives:

$$\bar{x}_j = \frac{\partial \bar{x}_j}{\partial x_i}\, x_i \qquad \rightleftharpoons \qquad x_j = \frac{\partial x_j}{\partial \bar{x}_i}\, \bar{x}_i$$

and the explicit matrix equations in 3d are:

$$\bar{\mathbf{x}} = \mathbf{L}\mathbf{x}$$

$$\begin{pmatrix} \bar{x}_1 \\ \bar{x}_2 \\ \bar{x}_3 \end{pmatrix} = \begin{pmatrix} \dfrac{\partial \bar{x}_1}{\partial x_1} & \dfrac{\partial \bar{x}_1}{\partial x_2} & \dfrac{\partial \bar{x}_1}{\partial x_3} \\ \dfrac{\partial \bar{x}_2}{\partial x_1} & \dfrac{\partial \bar{x}_2}{\partial x_2} & \dfrac{\partial \bar{x}_2}{\partial x_3} \\ \dfrac{\partial \bar{x}_3}{\partial x_1} & \dfrac{\partial \bar{x}_3}{\partial x_2} & \dfrac{\partial \bar{x}_3}{\partial x_3} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}$$

similarly for

$$\mathbf{x} = \mathbf{L}^{-1}\bar{\mathbf{x}} = \mathbf{L}^{\mathrm{T}}\bar{\mathbf{x}}$$

2.2.3 Projections along coordinate axes

As with all linear transformations, L depends on the basis chosen. For two orthonormal bases

$$\bar{\mathbf{e}}_i \cdot \bar{\mathbf{e}}_j = \mathbf{e}_i \cdot \mathbf{e}_j = \delta_{ij} , \qquad |\bar{\mathbf{e}}_i| = |\mathbf{e}_i| = 1 ,$$

• projecting x onto the $\bar{x}$ axes: $\bar{x}_i = \bar{\mathbf{e}}_i \cdot \mathbf{x} = \bar{\mathbf{e}}_i \cdot x_j \mathbf{e}_j = x_j L_{ji} ,$
• projecting x onto the x axes: $x_i = \mathbf{e}_i \cdot \mathbf{x} = \mathbf{e}_i \cdot \bar{x}_j \bar{\mathbf{e}}_j = \bar{x}_j \left(L^{-1}\right)_{ji} .$

Hence the components reduce to direction cosines between the $x_i$ and $\bar{x}_j$ axes:

$$L_{ij} = \mathbf{e}_i \cdot \bar{\mathbf{e}}_j = \cos\theta_{ij}$$

$$\left(L^{-1}\right)_{ij} = \bar{\mathbf{e}}_i \cdot \mathbf{e}_j = \cos\theta_{ji}$$

where $\theta_{ij}$ and $\theta_{ji}$ are the angles between the $x_i$ and $\bar{x}_j$ axes. In general, $\theta_{ij}$ is not equal to $\theta_{ji}$, because for example $\theta_{12}$ and $\theta_{21}$ are two different angles.

The transformation of coordinates can be written:

$$\bar{x}_j = x_i\left(\mathbf{e}_i \cdot \bar{\mathbf{e}}_j\right) = x_i \cos\theta_{ij} , \qquad x_j = \bar{x}_i\left(\bar{\mathbf{e}}_i \cdot \mathbf{e}_j\right) = \bar{x}_i \cos\theta_{ji}$$

and the explicit matrix equations in 3d are:

$$\bar{\mathbf{x}} = \mathbf{L}\mathbf{x}$$

$$\begin{pmatrix} \bar{x}_1 \\ \bar{x}_2 \\ \bar{x}_3 \end{pmatrix} = \begin{pmatrix} \mathbf{e}_1\cdot\bar{\mathbf{e}}_1 & \mathbf{e}_1\cdot\bar{\mathbf{e}}_2 & \mathbf{e}_1\cdot\bar{\mathbf{e}}_3 \\ \mathbf{e}_2\cdot\bar{\mathbf{e}}_1 & \mathbf{e}_2\cdot\bar{\mathbf{e}}_2 & \mathbf{e}_2\cdot\bar{\mathbf{e}}_3 \\ \mathbf{e}_3\cdot\bar{\mathbf{e}}_1 & \mathbf{e}_3\cdot\bar{\mathbf{e}}_2 & \mathbf{e}_3\cdot\bar{\mathbf{e}}_3 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} \cos\theta_{11} & \cos\theta_{12} & \cos\theta_{13} \\ \cos\theta_{21} & \cos\theta_{22} & \cos\theta_{23} \\ \cos\theta_{31} & \cos\theta_{32} & \cos\theta_{33} \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}$$

similarly for

$$\mathbf{x} = \mathbf{L}^{-1}\bar{\mathbf{x}} = \mathbf{L}^{\mathrm{T}}\bar{\mathbf{x}}$$

The geometric interpretation is that the $\bar{x}_i$ components equal the sum of the projections of the $x_j$ components onto the $\bar{x}_i$ axes.

The numbers $\mathbf{e}_i \cdot \mathbf{e}_j$ arranged into a matrix would form a symmetric matrix (a matrix equal to its own transpose) due to the symmetry in the dot products; in fact, it is the metric tensor g. By contrast $\mathbf{e}_i \cdot \bar{\mathbf{e}}_j$ or $\bar{\mathbf{e}}_i \cdot \mathbf{e}_j$ do not form symmetric matrices in general, as displayed above. Therefore, while the L matrices are still orthogonal, they are not symmetric.

Apart from a rotation about any one axis, in which the $x_i$ and $\bar{x}_i$ for some i coincide, the angles are not the same as Euler angles, and so the L matrices are not the same as the rotation matrices.


2.3 Transformation of the dot and cross products (three dimensions only)

The dot product and cross product occur very frequently in applications of vector analysis to physics and engineering; examples include:

• power transferred P by an object exerting a force F with velocity v along a straight-line path:

$$P = \mathbf{v} \cdot \mathbf{F}$$

• tangential velocity v at a point x of a rotating rigid body with angular velocity ω:

$$\mathbf{v} = \boldsymbol{\omega} \times \mathbf{x}$$

• potential energy U of a magnetic dipole of magnetic moment m in a uniform external magnetic field B:

$$U = -\mathbf{m} \cdot \mathbf{B}$$

• angular momentum J for a particle with position vector r and momentum p:

$$\mathbf{J} = \mathbf{r} \times \mathbf{p}$$

• torque τ acting on an electric dipole of electric dipole moment p in a uniform external electric field E:

$$\boldsymbol{\tau} = \mathbf{p} \times \mathbf{E}$$

• induced surface current density $\mathbf{j}_S$ in a magnetic material of magnetization M on a surface with unit normal n:

$$\mathbf{j}_S = \mathbf{M} \times \mathbf{n}$$

How these products transform under orthogonal transformations is illustrated below.

2.3.1 Dot product, Kronecker delta, and metric tensor

The dot product · of each possible pairing of the basis vectors follows from the basis being orthonormal. For perpendicular pairs we have

$$\mathbf{e}_x \cdot \mathbf{e}_y = \mathbf{e}_y \cdot \mathbf{e}_z = \mathbf{e}_z \cdot \mathbf{e}_x = \mathbf{e}_y \cdot \mathbf{e}_x = \mathbf{e}_z \cdot \mathbf{e}_y = \mathbf{e}_x \cdot \mathbf{e}_z = 0$$

while for parallel pairs we have

$$\mathbf{e}_x \cdot \mathbf{e}_x = \mathbf{e}_y \cdot \mathbf{e}_y = \mathbf{e}_z \cdot \mathbf{e}_z = 1 .$$


Replacing Cartesian labels by index notation as shown above, these results can be summarized by

$$\mathbf{e}_i \cdot \mathbf{e}_j = \delta_{ij}$$

where $\delta_{ij}$ are the components of the Kronecker delta. The Cartesian basis can be used to represent δ in this way.

In addition, each metric tensor component $g_{ij}$ with respect to any basis is the dot product of a pairing of basis vectors:

$$g_{ij} = \mathbf{e}_i \cdot \mathbf{e}_j .$$

For the Cartesian basis the components arranged into a matrix are:

$$\mathbf{g} = \begin{pmatrix} g_{xx} & g_{xy} & g_{xz} \\ g_{yx} & g_{yy} & g_{yz} \\ g_{zx} & g_{zy} & g_{zz} \end{pmatrix} = \begin{pmatrix} \mathbf{e}_x\cdot\mathbf{e}_x & \mathbf{e}_x\cdot\mathbf{e}_y & \mathbf{e}_x\cdot\mathbf{e}_z \\ \mathbf{e}_y\cdot\mathbf{e}_x & \mathbf{e}_y\cdot\mathbf{e}_y & \mathbf{e}_y\cdot\mathbf{e}_z \\ \mathbf{e}_z\cdot\mathbf{e}_x & \mathbf{e}_z\cdot\mathbf{e}_y & \mathbf{e}_z\cdot\mathbf{e}_z \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

so are the simplest possible for the metric tensor, namely the Kronecker delta:

$$g_{ij} = \delta_{ij}$$

This is not true for general bases: orthogonal coordinates have diagonal metrics containing various scale factors (i.e. not necessarily 1), while general curvilinear coordinates could also have nonzero entries for off-diagonal components.

The dot product of two vectors a and b transforms according to

$$\bar{\mathbf{a}} \cdot \bar{\mathbf{b}} = \bar{a}_j \bar{b}_j = a_i L_{ij} b_k \left(L^{-1}\right)_{jk} = a_i \delta_{ik} b_k = a_i b_i$$

which is intuitive, since the dot product of two vectors is a single scalar independent of any coordinates. This also applies more generally to any coordinate systems, not just rectangular ones; the dot product in one coordinate system is the same in any other.
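A numerical sanity check of this invariance, using a random orthogonal L obtained from a QR factorization (a sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
L, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal matrix

a = rng.standard_normal(3)
b = rng.standard_normal(3)
a_bar, b_bar = L.T @ a, L.T @ b                    # ā_j = a_i L_ij, same for b

assert np.isclose(a_bar @ b_bar, a @ b)            # dot product is invariant
```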

2.3.2 Cross product, Levi-Civita symbol, and pseudovectors

[Figures: Cyclic permutations of index values and positively oriented cubic volume; anticyclic permutations of index values and negatively oriented cubic volume. Non-zero values of the Levi-Civita symbol $\varepsilon_{ijk}$ as the volume $\mathbf{e}_i \cdot \mathbf{e}_j \times \mathbf{e}_k$ of a cube spanned by the 3d orthonormal basis.]

For the cross product of two vectors, the results are (almost) the other way round. Again, assuming a right-handed 3d Cartesian coordinate system, cyclic permutations in perpendicular directions yield the next vector in the cyclic collection of vectors:

$\mathbf{e}_x\times\mathbf{e}_y = \mathbf{e}_z \qquad \mathbf{e}_y\times\mathbf{e}_z = \mathbf{e}_x \qquad \mathbf{e}_z\times\mathbf{e}_x = \mathbf{e}_y$
$\mathbf{e}_y\times\mathbf{e}_x = -\mathbf{e}_z \qquad \mathbf{e}_z\times\mathbf{e}_y = -\mathbf{e}_x \qquad \mathbf{e}_x\times\mathbf{e}_z = -\mathbf{e}_y$

while parallel vectors clearly vanish:

$\mathbf{e}_x\times\mathbf{e}_x = \mathbf{e}_y\times\mathbf{e}_y = \mathbf{e}_z\times\mathbf{e}_z = \mathbf{0}$

and replacing Cartesian labels by index notation as above, these can be summarized by:

$\mathbf{e}_i\times\mathbf{e}_j = \begin{cases} +\mathbf{e}_k & \text{cyclic permutations: } (i,j,k) = (1,2,3),\,(2,3,1),\,(3,1,2) \\ -\mathbf{e}_k & \text{anticyclic permutations: } (i,j,k) = (2,1,3),\,(3,2,1),\,(1,3,2) \\ \mathbf{0} & i = j \end{cases}$

where $i$, $j$, $k$ are indices which take values 1, 2, 3. It follows that:

$\mathbf{e}_k\cdot\mathbf{e}_i\times\mathbf{e}_j = \begin{cases} +1 & \text{cyclic permutations: } (i,j,k) = (1,2,3),\,(2,3,1),\,(3,1,2) \\ -1 & \text{anticyclic permutations: } (i,j,k) = (2,1,3),\,(3,2,1),\,(1,3,2) \\ 0 & i = j \text{ or } j = k \text{ or } k = i \end{cases}$

These permutation relations and their corresponding values are important, and there is an object which coincides with this property: the Levi-Civita symbol, denoted by $\varepsilon$. The Levi-Civita symbol entries can be represented by the Cartesian basis:

    "ijk = ei ej ekwhich geometrically corresponds to the volume of a cube spanned by the orthonormal basis vectors, with sign in-dicating orientation (and not a positive or negative volume). Here, the orientation is xed by 123 = +1, for aright-handed system. A left-handed system would x 123 = 1 or equivalently 321 = +1.The scalar triple product can now be written:

$\mathbf{c}\cdot\mathbf{a}\times\mathbf{b} = c_i\mathbf{e}_i\cdot a_j\mathbf{e}_j\times b_k\mathbf{e}_k = \varepsilon_{ijk}\, c_i a_j b_k$


with the geometric interpretation of volume (of the parallelepiped spanned by $\mathbf{a}$, $\mathbf{b}$, $\mathbf{c}$), and algebraically it is a determinant:[3]

$\mathbf{c}\cdot\mathbf{a}\times\mathbf{b} = \begin{vmatrix} c_x & a_x & b_x \\ c_y & a_y & b_y \\ c_z & a_z & b_z \end{vmatrix}$

This in turn can be used to rewrite the cross product of two vectors as follows:

$(\mathbf{a}\times\mathbf{b})_i = \mathbf{e}_i\cdot\mathbf{a}\times\mathbf{b} = \varepsilon_{\ell jk}(\mathbf{e}_i)_\ell\, a_j b_k = \varepsilon_{\ell jk}\,\delta_{i\ell}\, a_j b_k = \varepsilon_{ijk}\, a_j b_k$
$\Rightarrow\quad \mathbf{a}\times\mathbf{b} = (\mathbf{a}\times\mathbf{b})_i\,\mathbf{e}_i = \varepsilon_{ijk}\, a_j b_k\,\mathbf{e}_i$

Contrary to its appearance, the Levi-Civita symbol is not a tensor but a pseudotensor; the components transform according to:

    "pqr = det(L)"ijkLipLjqLkr :

    Therefore the transformation of the cross product of a and b is:

    (a b)i = "ijkajbk= det(L) "pqrLpiLqjLrk amLmj bnLnk= det(L) "pqr Lpi Lqj(L1)jm Lrk(L1)kn am bn= det(L) "pqr Lpi qm rn am bn= det(L) Lpi "pqraqbr= det(L) (a b)pLpi

and so $\mathbf{a}\times\mathbf{b}$ transforms as a pseudovector, because of the determinant factor.

The tensor index notation applies to any object which has entities that form multidimensional arrays; not everything with indices is a tensor by default. Instead, tensors are defined by how their coordinates and basis elements change under a transformation from one coordinate system to another.

Note the cross product of two vectors is a pseudovector, while the cross product of a pseudovector with a vector is another vector.
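The determinant factor can be seen numerically. The following sketch (an illustration, not from the article; NumPy and the seed are assumptions) transforms two vectors by an improper orthogonal matrix and checks the pseudovector rule $(\bar{\mathbf{a}}\times\bar{\mathbf{b}})_i = \det(\mathbf{L})\,(\mathbf{a}\times\mathbf{b})_p L_{pi}$:

```python
import numpy as np

rng = np.random.default_rng(1)
L, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(L) > 0:
    L[:, 0] *= -1.0   # force an improper transformation, det(L) = -1

a = rng.standard_normal(3)
b = rng.standard_normal(3)
a_bar, b_bar = a @ L, b @ L             # a_bar_j = a_i L_ij

lhs = np.cross(a_bar, b_bar)
rhs = np.linalg.det(L) * (np.cross(a, b) @ L)   # det(L) (a x b)_p L_pi
assert np.allclose(lhs, rhs)
```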

2.3.3 Applications of the $\delta$ tensor and $\varepsilon$ pseudotensor

Other identities can be formed from the $\delta$ tensor and $\varepsilon$ pseudotensor. A notable and very useful identity is one that converts two Levi-Civita symbols adjacently contracted over two indices into an antisymmetrized combination of Kronecker deltas:

    "ijk"pqk = ipjq iqjpThe index forms of the dot and cross products, together with this identity, greatly facilitate the manipulation andderivation of other identities in vector calculus and algebra, which in turn are used extensively in physics and engi-neering. For instance, it is clear the dot and cross products are distributive over vector addition:

$\mathbf{a}\cdot(\mathbf{b}+\mathbf{c}) = a_i(b_i + c_i) = a_i b_i + a_i c_i = \mathbf{a}\cdot\mathbf{b} + \mathbf{a}\cdot\mathbf{c}$

    a (b+ c) = ei"ijkaj(bk + ck) = ei"ijkajbk + ei"ijkajck = a b+ a cwithout resort to any geometric constructions - the derivation in each case is a quick line of algebra. Although theprocedure is less obvious, the vector triple product can also be derived. Rewriting in index notation:


$[\mathbf{a}\times(\mathbf{b}\times\mathbf{c})]_i = \varepsilon_{ijk}\, a_j (\varepsilon_{k\ell m} b_\ell c_m) = (\varepsilon_{ijk}\varepsilon_{k\ell m})\, a_j b_\ell c_m$

and because cyclic permutations of indices in the $\varepsilon$ symbol do not change its value, cyclically permuting indices in $\varepsilon_{k\ell m}$ to obtain $\varepsilon_{\ell m k}$ allows us to use the above $\varepsilon$-$\delta$ identity to convert the $\varepsilon$ symbols into $\delta$ tensors:

$\begin{aligned} [\mathbf{a}\times(\mathbf{b}\times\mathbf{c})]_i &= (\delta_{i\ell}\delta_{jm} - \delta_{im}\delta_{j\ell})\, a_j b_\ell c_m \\ &= \delta_{i\ell}\delta_{jm}\, a_j b_\ell c_m - \delta_{im}\delta_{j\ell}\, a_j b_\ell c_m \\ &= a_j b_i c_j - a_j b_j c_i \end{aligned}$

thus:

$\mathbf{a}\times(\mathbf{b}\times\mathbf{c}) = (\mathbf{a}\cdot\mathbf{c})\mathbf{b} - (\mathbf{a}\cdot\mathbf{b})\mathbf{c}$

Note this is antisymmetric in $\mathbf{b}$ and $\mathbf{c}$, as expected from the left-hand side. Similarly, via index notation, or even just cyclically relabelling $\mathbf{a}$, $\mathbf{b}$, and $\mathbf{c}$ in the previous result and taking the negative:

$(\mathbf{a}\times\mathbf{b})\times\mathbf{c} = (\mathbf{c}\cdot\mathbf{a})\mathbf{b} - (\mathbf{c}\cdot\mathbf{b})\mathbf{a}$

and the difference in results shows that the cross product is not associative. More complex identities, like the quadruple products

$(\mathbf{a}\times\mathbf{b})\cdot(\mathbf{c}\times\mathbf{d}), \quad (\mathbf{a}\times\mathbf{b})\times(\mathbf{c}\times\mathbf{d}), \quad \ldots$

    and so on, can be derived in a similar manner.
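Identities of this kind are easy to spot-check numerically. The following sketch (illustrative only; NumPy and the seed are assumptions) builds the Levi-Civita symbol as an array, verifies the $\varepsilon$-$\delta$ identity, and tests the triple-product rule on random vectors:

```python
import numpy as np

# Levi-Civita symbol as a 3x3x3 array (0-based indices).
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0    # cyclic permutations
    eps[j, i, k] = -1.0   # anticyclic permutations

delta = np.eye(3)

# epsilon_ijk epsilon_pqk = delta_ip delta_jq - delta_iq delta_jp
lhs = np.einsum('ijk,pqk->ijpq', eps, eps)
rhs = (np.einsum('ip,jq->ijpq', delta, delta)
       - np.einsum('iq,jp->ijpq', delta, delta))
assert np.allclose(lhs, rhs)

# a x (b x c) = (a.c)b - (a.b)c on random vectors.
rng = np.random.default_rng(2)
a, b, c = rng.standard_normal((3, 3))
assert np.allclose(np.cross(a, np.cross(b, c)), (a @ c) * b - (a @ b) * c)
```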

2.4 Transformations of Cartesian tensors (any number of dimensions)

Tensors are defined as quantities which transform in a certain way under linear transformations of coordinates.

    2.4.1 Second order

Let $\mathbf{a} = a_i\mathbf{e}_i$ and $\mathbf{b} = b_i\mathbf{e}_i$ be two vectors, so that they transform according to $\bar{a}_j = a_i L_{ij}$, $\bar{b}_j = b_i L_{ij}$.

Taking the tensor product gives:

$\mathbf{a}\otimes\mathbf{b} = a_i\mathbf{e}_i\otimes b_j\mathbf{e}_j = a_i b_j\,\mathbf{e}_i\otimes\mathbf{e}_j$

then applying the transformation to the components

$\bar{a}_p\bar{b}_q = a_i L_{ip}\; b_j L_{jq} = L_{ip} L_{jq}\, a_i b_j$

    and to the bases

$\bar{\mathbf{e}}_p\otimes\bar{\mathbf{e}}_q = (\mathbf{L}^{-1})_{pi}\mathbf{e}_i\otimes(\mathbf{L}^{-1})_{qj}\mathbf{e}_j = (\mathbf{L}^{-1})_{pi}(\mathbf{L}^{-1})_{qj}\,\mathbf{e}_i\otimes\mathbf{e}_j = L_{ip} L_{jq}\,\mathbf{e}_i\otimes\mathbf{e}_j$

gives the transformation law of an order-2 tensor. The tensor $\mathbf{a}\otimes\mathbf{b}$ is invariant under this transformation:


$\begin{aligned} \bar{a}_p\bar{b}_q\,\bar{\mathbf{e}}_p\otimes\bar{\mathbf{e}}_q &= L_{kp} L_{\ell q}\, a_k b_\ell\; (\mathbf{L}^{-1})_{pi}(\mathbf{L}^{-1})_{qj}\,\mathbf{e}_i\otimes\mathbf{e}_j \\ &= L_{kp}(\mathbf{L}^{-1})_{pi}\; L_{\ell q}(\mathbf{L}^{-1})_{qj}\; a_k b_\ell\,\mathbf{e}_i\otimes\mathbf{e}_j \\ &= \delta_{ki}\,\delta_{\ell j}\; a_k b_\ell\,\mathbf{e}_i\otimes\mathbf{e}_j \\ &= a_i b_j\,\mathbf{e}_i\otimes\mathbf{e}_j \end{aligned}$

    More generally, for any order-2 tensor

$\mathbf{R} = R_{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j,$

the components transform according to:

$\bar{R}_{pq} = L_{ip} L_{jq} R_{ij}$

    and the basis transforms by:

$\bar{\mathbf{e}}_p\otimes\bar{\mathbf{e}}_q = (\mathbf{L}^{-1})_{pi}\mathbf{e}_i\otimes(\mathbf{L}^{-1})_{qj}\mathbf{e}_j$

If $\mathbf{R}$ does not transform according to this rule, then, whatever quantity $\mathbf{R}$ may be, it is not an order-2 tensor.
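In matrix form the component rule reads $\bar{\mathbf{R}} = \mathbf{L}^{\mathrm{T}}\mathbf{R}\mathbf{L}$; a minimal numerical sketch (illustrative, with NumPy and the seed assumed) checks this and the invariance of the trace:

```python
import numpy as np

rng = np.random.default_rng(3)
L, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal L
R = rng.standard_normal((3, 3))

# R_bar_pq = L_ip L_jq R_ij, which is L^T R L in matrix form.
R_bar = np.einsum('ip,jq,ij->pq', L, L, R)
assert np.allclose(R_bar, L.T @ R @ L)

# Scalar invariants such as the trace are unchanged.
assert np.isclose(np.trace(R_bar), np.trace(R))
```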

2.4.2 Any order

More generally, for any order-$p$ tensor

$\mathbf{T} = T_{j_1 j_2 \cdots j_p}\,\mathbf{e}_{j_1}\otimes\mathbf{e}_{j_2}\otimes\cdots\otimes\mathbf{e}_{j_p}$

the components transform according to

$\bar{T}_{j_1 j_2 \cdots j_p} = L_{i_1 j_1} L_{i_2 j_2} \cdots L_{i_p j_p}\, T_{i_1 i_2 \cdots i_p}$

and the basis transforms by:

$\bar{\mathbf{e}}_{j_1}\otimes\bar{\mathbf{e}}_{j_2}\otimes\cdots\otimes\bar{\mathbf{e}}_{j_p} = (\mathbf{L}^{-1})_{j_1 i_1}\mathbf{e}_{i_1}\otimes(\mathbf{L}^{-1})_{j_2 i_2}\mathbf{e}_{i_2}\otimes\cdots\otimes(\mathbf{L}^{-1})_{j_p i_p}\mathbf{e}_{i_p}$

For a pseudotensor $\mathbf{S}$ of order $p$, the components transform according to

$\bar{S}_{j_1 j_2 \cdots j_p} = \det(\mathbf{L})\, L_{i_1 j_1} L_{i_2 j_2} \cdots L_{i_p j_p}\, S_{i_1 i_2 \cdots i_p}.$
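For arrays of arbitrary order, np.einsum expresses these contractions directly. The sketch below (an illustration; the seed and the order p = 3 are arbitrary choices) transforms an order-3 array, and confirms that the Levi-Civita symbol, transformed with the extra det(L) factor of a pseudotensor, keeps the same entries in every orthonormal basis:

```python
import numpy as np

rng = np.random.default_rng(4)
L, _ = np.linalg.qr(rng.standard_normal((3, 3)))
T = rng.standard_normal((3, 3, 3))

# T_bar_{j1 j2 j3} = L_{i1 j1} L_{i2 j2} L_{i3 j3} T_{i1 i2 i3}
T_bar = np.einsum('ai,bj,ck,abc->ijk', L, L, L, T)

# The Levi-Civita pseudotensor is numerically invariant under its rule.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0
eps_bar = np.linalg.det(L) * np.einsum('ai,bj,ck,abc->ijk', L, L, L, eps)
assert np.allclose(eps_bar, eps)
```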

2.5 Pseudovectors as antisymmetric second order tensors

The antisymmetric nature of the cross product can be recast into a tensorial form as follows.[4] Let $\mathbf{c}$ be a vector, $\mathbf{a}$ be a pseudovector, $\mathbf{b}$ be another vector, and $\mathbf{T}$ be a second order tensor such that:

$\mathbf{c} = \mathbf{a}\times\mathbf{b} = \mathbf{T}\cdot\mathbf{b}$

    As the cross product is linear in a and b, the components of T can be found by inspection, and they are:


$\mathbf{T} = \begin{pmatrix} 0 & -a_z & a_y \\ a_z & 0 & -a_x \\ -a_y & a_x & 0 \end{pmatrix}$

so the pseudovector $\mathbf{a}$ can be written as an antisymmetric tensor. This transforms as a tensor, not a pseudotensor. For the mechanical example above for the tangential velocity of a rigid body, given by $\mathbf{v} = \boldsymbol{\omega}\times\mathbf{x}$, this can be rewritten as $\mathbf{v} = \boldsymbol{\Omega}\cdot\mathbf{x}$ where $\boldsymbol{\Omega}$ is the tensor corresponding to the pseudovector $\boldsymbol{\omega}$:

$\boldsymbol{\Omega} = \begin{pmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{pmatrix}$

For an example in electromagnetism, while the electric field $\mathbf{E}$ is a vector field, the magnetic field $\mathbf{B}$ is a pseudovector field. These fields are defined from the Lorentz force for a particle of electric charge $q$ traveling at velocity $\mathbf{v}$:

$\mathbf{F} = q(\mathbf{E} + \mathbf{v}\times\mathbf{B}) = q(\mathbf{E} - \mathbf{B}\times\mathbf{v})$

and considering the second term containing the cross product of a pseudovector $\mathbf{B}$ and velocity vector $\mathbf{v}$, it can be written in matrix form, with $\mathbf{F}$, $\mathbf{E}$, and $\mathbf{v}$ as column vectors and $\mathbf{B}$ as an antisymmetric matrix:

$\begin{pmatrix} F_x \\ F_y \\ F_z \end{pmatrix} = q\begin{pmatrix} E_x \\ E_y \\ E_z \end{pmatrix} - q\begin{pmatrix} 0 & -B_z & B_y \\ B_z & 0 & -B_x \\ -B_y & B_x & 0 \end{pmatrix}\begin{pmatrix} v_x \\ v_y \\ v_z \end{pmatrix}$

If a pseudovector is explicitly given by a cross product of two vectors (as opposed to entering the cross product with another vector), then such pseudovectors can also be written as antisymmetric tensors of second order, with each entry a component of the cross product. The angular momentum of a classical pointlike particle orbiting about an axis, defined by $\mathbf{J} = \mathbf{x}\times\mathbf{p}$, is another example of a pseudovector, with corresponding antisymmetric tensor:

$\mathbf{J} = \begin{pmatrix} 0 & -J_z & J_y \\ J_z & 0 & -J_x \\ -J_y & J_x & 0 \end{pmatrix} = \begin{pmatrix} 0 & -(x p_y - y p_x) & (z p_x - x p_z) \\ (x p_y - y p_x) & 0 & -(y p_z - z p_y) \\ -(z p_x - x p_z) & (y p_z - z p_y) & 0 \end{pmatrix}$

Although Cartesian tensors do not occur in the theory of relativity, the tensor form of orbital angular momentum $\mathbf{J}$ enters the spacelike part of the relativistic angular momentum tensor, and the above tensor form of the magnetic field $\mathbf{B}$ enters the spacelike part of the electromagnetic tensor.

2.6 Vector and tensor calculus

It should be emphasized that the following formulae are only so simple in Cartesian coordinates; in general curvilinear coordinates there are factors of the metric and its determinant. See tensors in curvilinear coordinates for more general analysis.

2.6.1 Vector calculus

Following are the differential operators of vector calculus. Throughout, let $\Phi(\mathbf{r}, t)$ be a scalar field, and

$\mathbf{A}(\mathbf{r}, t) = A_x(\mathbf{r}, t)\mathbf{e}_x + A_y(\mathbf{r}, t)\mathbf{e}_y + A_z(\mathbf{r}, t)\mathbf{e}_z$

$\mathbf{B}(\mathbf{r}, t) = B_x(\mathbf{r}, t)\mathbf{e}_x + B_y(\mathbf{r}, t)\mathbf{e}_y + B_z(\mathbf{r}, t)\mathbf{e}_z$


be vector fields, in which all scalar and vector fields are functions of the position vector $\mathbf{r}$ and time $t$.

The gradient operator in Cartesian coordinates is given by:

$\nabla = \mathbf{e}_x\frac{\partial}{\partial x} + \mathbf{e}_y\frac{\partial}{\partial y} + \mathbf{e}_z\frac{\partial}{\partial z}$

    and in index notation, this is usually abbreviated in various ways:

$\nabla_i \equiv \partial_i \equiv \frac{\partial}{\partial x_i}$

This operator acts on a scalar field $\Phi$ to obtain the vector field directed in the maximum rate of increase of $\Phi$:

$(\nabla\Phi)_i = \nabla_i\Phi$

The index notation for the dot and cross products carries over to the differential operators of vector calculus.[5]

The directional derivative of a scalar field $\Phi$ is the rate of change of $\Phi$ along some direction vector $\mathbf{a}$ (not necessarily a unit vector), formed out of the components of $\mathbf{a}$ and the gradient:

$\mathbf{a}\cdot\nabla\Phi(\mathbf{r}) = a_j\nabla_j\Phi(\mathbf{r})$

The divergence of a vector field $\mathbf{A}$ is:

$\nabla\cdot\mathbf{A} = \nabla_i A_i$

Note the interchange of the components of the gradient and vector field yields a different differential operator

$\mathbf{A}\cdot\nabla = A_i\nabla_i$

which could act on scalar or vector fields. In fact, if $\mathbf{A}$ is replaced by the velocity field $\mathbf{u}(\mathbf{r}, t)$ of a fluid, this is a term in the material derivative (with many other names) of continuum mechanics, with another term being the partial time derivative:

$\frac{D}{Dt} = \frac{\partial}{\partial t} + \mathbf{u}\cdot\nabla$

which usually acts on the velocity field, leading to the non-linearity in the Navier-Stokes equations.

As for the curl of a vector field $\mathbf{A}$, this can be defined as a pseudovector field by means of the $\varepsilon$ symbol:

$(\nabla\times\mathbf{A})_i = \varepsilon_{ijk}\nabla_j A_k$

which is only valid in three dimensions, or as an antisymmetric tensor field of second order via antisymmetrization of indices, indicated by delimiting the antisymmetrized indices by square brackets (see Ricci calculus):

$(\nabla\times\mathbf{A})_{ij} = \nabla_i A_j - \nabla_j A_i = 2\nabla_{[i} A_{j]}$

which is valid in any number of dimensions. In each case, the order of the gradient and vector field components should not be interchanged, as this would result in a different differential operator:

    "ijkAjrk


$A_i\nabla_j - A_j\nabla_i = 2A_{[i}\nabla_{j]}$

which could act on scalar or vector fields.

Finally, the Laplacian operator is defined in two ways: as the divergence of the gradient of a scalar field $\Phi$:

$\nabla\cdot(\nabla\Phi) = \nabla_i(\nabla_i\Phi)$

or as the square of the gradient operator, which acts on a scalar field $\Phi$ or a vector field $\mathbf{A}$:

$(\nabla\cdot\nabla)\Phi = (\nabla_i\nabla_i)\Phi$
$(\nabla\cdot\nabla)\mathbf{A} = (\nabla_i\nabla_i)\mathbf{A}$

In physics and engineering, the gradient, divergence, curl, and Laplacian operator arise inevitably in fluid mechanics, Newtonian gravitation, electromagnetism, heat conduction, and even quantum mechanics.

Vector calculus identities can be derived in a similar way to those of vector dot and cross products and combinations. For example, in three dimensions, the curl of a cross product of two vector fields $\mathbf{A}$ and $\mathbf{B}$:

$\begin{aligned} [\nabla\times(\mathbf{A}\times\mathbf{B})]_i &= \varepsilon_{ijk}\nabla_j(\varepsilon_{k\ell m} A_\ell B_m) \\ &= (\varepsilon_{ijk}\varepsilon_{\ell m k})\,\nabla_j(A_\ell B_m) \\ &= (\delta_{i\ell}\delta_{jm} - \delta_{im}\delta_{j\ell})(B_m\nabla_j A_\ell + A_\ell\nabla_j B_m) \\ &= (B_j\nabla_j A_i + A_i\nabla_j B_j) - (B_i\nabla_j A_j + A_j\nabla_j B_i) \\ &= (B_j\nabla_j)A_i + A_i(\nabla_j B_j) - B_i(\nabla_j A_j) - (A_j\nabla_j)B_i \\ &= [(\mathbf{B}\cdot\nabla)\mathbf{A} + \mathbf{A}(\nabla\cdot\mathbf{B}) - \mathbf{B}(\nabla\cdot\mathbf{A}) - (\mathbf{A}\cdot\nabla)\mathbf{B}]_i \end{aligned}$

where the product rule was used, and throughout, the differential operator was not interchanged with $\mathbf{A}$ or $\mathbf{B}$. Thus:

$\nabla\times(\mathbf{A}\times\mathbf{B}) = (\mathbf{B}\cdot\nabla)\mathbf{A} + \mathbf{A}(\nabla\cdot\mathbf{B}) - \mathbf{B}(\nabla\cdot\mathbf{A}) - (\mathbf{A}\cdot\nabla)\mathbf{B}$
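Such identities can also be verified symbolically. A minimal sketch (an illustration, using SymPy; the undefined component functions are arbitrary) checks the identity above component by component:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
r = (x, y, z)
A = [sp.Function(f'A{i}')(x, y, z) for i in range(3)]
B = [sp.Function(f'B{i}')(x, y, z) for i in range(3)]

def cross(U, V):
    return [U[1]*V[2] - U[2]*V[1], U[2]*V[0] - U[0]*V[2], U[0]*V[1] - U[1]*V[0]]

def curl(F):
    return [sp.diff(F[2], y) - sp.diff(F[1], z),
            sp.diff(F[0], z) - sp.diff(F[2], x),
            sp.diff(F[1], x) - sp.diff(F[0], y)]

div = lambda F: sum(sp.diff(F[i], r[i]) for i in range(3))
# (U . grad)V, the advection operator acting on V.
adv = lambda U, V: [sum(U[j]*sp.diff(V[i], r[j]) for j in range(3)) for i in range(3)]

lhs = curl(cross(A, B))
rhs = [adv(B, A)[i] + A[i]*div(B) - B[i]*div(A) - adv(A, B)[i] for i in range(3)]
assert all(sp.simplify(lhs[i] - rhs[i]) == 0 for i in range(3))
```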

2.6.2 Tensor calculus

One can continue the operations on tensors of higher order. Let $\mathbf{T} = \mathbf{T}(\mathbf{r}, t)$ denote a second order tensor field, again dependent on the position vector $\mathbf{r}$ and time $t$.

For instance, the gradient of a vector field in two equivalent notations (dyadic and tensor, respectively) is:

$(\nabla\mathbf{A})_{ij} \equiv (\nabla\otimes\mathbf{A})_{ij} = \nabla_i A_j$

which is a tensor field of second order.

The divergence of a tensor is:

$(\nabla\cdot\mathbf{T})_j = \nabla_i T_{ij}$

which is a vector field. This arises in continuum mechanics in Cauchy's laws of motion: the divergence of the Cauchy stress tensor is a vector field, related to body forces acting on the fluid.

2.7 Difference from the standard tensor calculus

Cartesian tensors are as in tensor algebra, but the Euclidean structure and the restriction of the basis bring some simplifications compared to the general theory.


The general tensor algebra consists of general mixed tensors of type $(p, q)$:

$\mathbf{T} = T^{i_1 i_2 \cdots i_p}{}_{j_1 j_2 \cdots j_q}\,\mathbf{e}_{i_1 i_2 \cdots i_p}{}^{j_1 j_2 \cdots j_q}$

with basis elements:

$\mathbf{e}_{i_1 i_2 \cdots i_p}{}^{j_1 j_2 \cdots j_q} = \mathbf{e}_{i_1}\otimes\mathbf{e}_{i_2}\otimes\cdots\otimes\mathbf{e}_{i_p}\otimes\mathbf{e}^{j_1}\otimes\mathbf{e}^{j_2}\otimes\cdots\otimes\mathbf{e}^{j_q}$

the components transform according to:

$\bar{T}^{k_1 k_2 \cdots k_p}{}_{\ell_1 \ell_2 \cdots \ell_q} = L_{i_1}{}^{k_1} L_{i_2}{}^{k_2} \cdots L_{i_p}{}^{k_p}\, (\mathbf{L}^{-1})_{\ell_1}{}^{j_1} (\mathbf{L}^{-1})_{\ell_2}{}^{j_2} \cdots (\mathbf{L}^{-1})_{\ell_q}{}^{j_q}\; T^{i_1 i_2 \cdots i_p}{}_{j_1 j_2 \cdots j_q}$

as for the bases:

$\bar{\mathbf{e}}_{k_1 k_2 \cdots k_p}{}^{\ell_1 \ell_2 \cdots \ell_q} = (\mathbf{L}^{-1})_{k_1}{}^{i_1} (\mathbf{L}^{-1})_{k_2}{}^{i_2} \cdots (\mathbf{L}^{-1})_{k_p}{}^{i_p}\, L_{j_1}{}^{\ell_1} L_{j_2}{}^{\ell_2} \cdots L_{j_q}{}^{\ell_q}\;\mathbf{e}_{i_1 i_2 \cdots i_p}{}^{j_1 j_2 \cdots j_q}$

For Cartesian tensors, only the order $p + q$ of the tensor matters in a Euclidean space with an orthonormal basis, and all $p + q$ indices can be lowered. A Cartesian basis does not exist unless the vector space has a positive-definite metric, and thus cannot be used in relativistic contexts.

2.8 History

Dyadic tensors were historically the first approach to formulating second-order tensors; similarly, triadic tensors for third-order tensors, and so on. Cartesian tensors use tensor index notation, in which the variance may be glossed over and is often ignored, since the components remain unchanged by raising and lowering indices.

2.9 See also

Tensor algebra
Tensor calculus
Tensors in curvilinear coordinates
Rotation group

2.10 References

[1] C. W. Misner, K. S. Thorne, J. A. Wheeler. Gravitation. ISBN 0-7167-0344-0. Used throughout.

[2] T. W. B. Kibble (1973). Classical Mechanics. European Physics Series (2nd ed.). McGraw Hill. ISBN 978-0-07-084018-8. See Appendix C.

[3] M. R. Spiegel, S. Lipschutz, D. Spellman (2009). Vector Analysis. Schaum's Outlines (2nd ed.). McGraw Hill. p. 23. ISBN 978-0-07-161545-7.

[4] T. W. B. Kibble (1973). Classical Mechanics. European Physics Series (2nd ed.). McGraw Hill. pp. 234-235. ISBN 978-0-07-084018-8. See Appendix C.

[5] M. R. Spiegel, S. Lipschutz, D. Spellman (2009). Vector Analysis. Schaum's Outlines (2nd ed.). McGraw Hill. p. 197. ISBN 978-0-07-161545-7.


2.10.1 Notes

D. C. Kay (1988). Tensor Calculus. Schaum's Outlines. McGraw Hill. pp. 18-19, 31-32. ISBN 0-07-033484-6.

M. R. Spiegel, S. Lipschutz, D. Spellman (2009). Vector Analysis. Schaum's Outlines (2nd ed.). McGraw Hill. p. 227. ISBN 978-0-07-161545-7.

J. R. Tyldesley (1975). An Introduction to Tensor Analysis for Engineers and Applied Scientists. Longman. pp. 5-13. ISBN 0-582-44355-5.

2.10.2 Further reading and applications

S. Lipschutz, M. Lipson (2009). Linear Algebra. Schaum's Outlines (4th ed.). McGraw Hill. ISBN 978-0-07-154352-1.

Pei Chi Chou (1992). Elasticity: Tensor, Dyadic, and Engineering Approaches. Courier Dover Publications. ISBN 048-666-958-0.

T. W. Körner (2012). Vectors, Pure and Applied: A General Introduction to Linear Algebra. Cambridge University Press. p. 216. ISBN 11070-3356-X.

R. Torretti (1996). Relativity and Geometry. Courier Dover Publications. p. 103. ISBN 0-4866-90466.

J. L. Synge, A. Schild (1978). Tensor Calculus. Courier Dover Publications. p. 128. ISBN 0-4861-4139-X.

C. A. Balafoutis, R. V. Patel (1991). Dynamic Analysis of Robot Manipulators: A Cartesian Tensor Approach. The Kluwer International Series in Engineering and Computer Science: Robotics: Vision, Manipulation and Sensors 131. Springer. ISBN 0792-391-454.

S. G. Tzafestas (1992). Robotic Systems: Advanced Techniques and Applications. Springer. ISBN 0-792-317-491.

T. Dass, S. K. Sharma (1998). Mathematical Methods in Classical and Quantum Physics. Universities Press. p. 144. ISBN 817-371-0899.

G. F. J. Temple (2004). Cartesian Tensors: An Introduction. Dover Books on Mathematics Series. Dover Publications. ISBN 0-4864-3908-9.

H. Jeffreys (1961). Cartesian Tensors. The University Press.

2.11 External links

Cartesian Tensors
V. N. Kaliakin, Brief Review of Tensors, University of Delaware
R. E. Hunt, Cartesian Tensors, University of Cambridge


Two different 3d orthonormal bases: each basis consists of unit vectors that are mutually perpendicular.


The same position vector $\mathbf{x}$ represented in two 3d rectangular coordinate systems, each with an orthonormal basis; the cuboids illustrate the parallelogram law for adding vector components.


Top: angles $\theta_{ij}$ from the $x_i$ axes to the $\bar{x}_i$ axes. Bottom: vice versa.

Chapter 3

    Category of vector spaces

In mathematics, especially category theory, the category K-Vect (some authors use VectK) has all vector spaces over a fixed field K as objects and K-linear transformations as morphisms. If K is the field of real numbers, then the category is also known as Vec.

Since vector spaces over K (as a field) are the same thing as modules over the ring K, K-Vect is a special case of R-Mod, the category of left R-modules. K-Vect is an important example of an abelian category.

Much of linear algebra concerns the description of K-Vect. For example, the dimension theorem for vector spaces says that the isomorphism classes in K-Vect correspond exactly to the cardinal numbers, and that K-Vect is equivalent to the subcategory of K-Vect which has as its objects the free vector spaces K^n, where n is any cardinal number.

There is a forgetful functor from K-Vect to Ab, the category of abelian groups, which takes each vector space to its additive group. This can be composed with forgetful functors from Ab to yield other forgetful functors, most importantly one to Set.

K-Vect is a monoidal category, with K (as a one-dimensional vector space over K) as the identity and the tensor product as the monoidal product.

3.1 See also

Category of graded vector spaces



Chapter 4

Cauchy–Schwarz inequality

In mathematics, the Cauchy–Schwarz inequality is a useful inequality encountered in many different settings, such as linear algebra, analysis, probability theory, and other areas. It is considered to be one of the most important inequalities in all of mathematics.[1] It has a number of generalizations, among them Hölder's inequality.

The inequality for sums was published by Augustin-Louis Cauchy (1821), while the corresponding inequality for integrals was first proved by Viktor Bunyakovsky (1859). The modern proof of the integral inequality was given by Hermann Amandus Schwarz (1888).[1]

4.1 Statement of the inequality

The Cauchy–Schwarz inequality states that for all vectors x and y of an inner product space it is true that

$|\langle x, y\rangle|^2 \leq \langle x, x\rangle \cdot \langle y, y\rangle,$

where $\langle\cdot,\cdot\rangle$ is the inner product, also known as the dot product. Equivalently, by taking the square root of both sides and referring to the norms of the vectors, the inequality is written as

$|\langle x, y\rangle| \leq \|x\|\,\|y\|.$ [2]

Moreover, the two sides are equal if and only if x and y are linearly dependent (or, in a geometrical sense, they are parallel or one of the vectors' magnitude is zero).

If $x_1, \ldots, x_n \in \mathbb{C}$ and $y_1, \ldots, y_n \in \mathbb{C}$ have an imaginary component, the inner product is the standard inner product, and the bar notation is used for complex conjugation, then the inequality may be restated more explicitly as

$|x_1\bar{y}_1 + \cdots + x_n\bar{y}_n|^2 \leq (|x_1|^2 + \cdots + |x_n|^2)(|y_1|^2 + \cdots + |y_n|^2).$

When viewed in this way the numbers $x_1, \ldots, x_n$ and $y_1, \ldots, y_n$ are the components of x and y with respect to an orthonormal basis of V.

Even more compactly written:

$\left|\sum_{i=1}^n x_i\bar{y}_i\right|^2 \leq \sum_{j=1}^n |x_j|^2\,\sum_{k=1}^n |y_k|^2.$

Equality holds if and only if x and y are linearly dependent, that is, one is a scalar multiple of the other (which includes the case when one or both are zero).

The finite-dimensional case of this inequality for real vectors was proven by Cauchy in 1821, and in 1859 Cauchy's student Bunyakovsky noted that by taking limits one can obtain an integral form of Cauchy's inequality. The general result for an inner product space was obtained by Schwarz in the year 1888.



4.2 Proof

Let u, v be arbitrary vectors in a vector space V over F with an inner product, where F is the field of real or complex numbers. We prove the inequality

$|\langle u, v\rangle| \leq \|u\|\,\|v\|,$

and that equality holds only when either u or v is a multiple of the other.

If v = 0 it is clear that we have equality, and in this case u and v are also linearly dependent (regardless of u). We henceforth assume that v is nonzero. We also assume that $\langle u, v\rangle \neq 0$, otherwise the inequality is obviously true, because neither $\|u\|$ nor $\|v\|$ can be negative.

Let

$z = u - \frac{\langle u, v\rangle}{\langle v, v\rangle}\,v.$

Then, by linearity of the inner product in its first argument, one has

$\langle z, v\rangle = \left\langle u - \frac{\langle u, v\rangle}{\langle v, v\rangle}\,v,\; v\right\rangle = \langle u, v\rangle - \frac{\langle u, v\rangle}{\langle v, v\rangle}\langle v, v\rangle = 0,$

i.e., z is a vector orthogonal to the vector v (indeed, z is the projection of u onto the plane orthogonal to v). We can thus apply the Pythagorean theorem to

$u = \frac{\langle u, v\rangle}{\langle v, v\rangle}\,v + z,$

    which gives

$\|u\|^2 = \left|\frac{\langle u, v\rangle}{\langle v, v\rangle}\right|^2 \|v\|^2 + \|z\|^2 = \frac{|\langle u, v\rangle|^2}{\|v\|^2} + \|z\|^2 \geq \frac{|\langle u, v\rangle|^2}{\|v\|^2},$

and, after multiplication by $\|v\|^2$, the Cauchy–Schwarz inequality. Moreover, if the relation $\geq$ in the above expression is actually an equality, then $\|z\|^2 = 0$ and hence $z = 0$; the definition of z then establishes a relation of linear dependence between u and v. This establishes the theorem.
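As a numeric illustration of this argument (not part of the original article; NumPy, the seed, and the convention that the inner product is linear in its first argument are assumptions), the following checks that z is orthogonal to v and that the slack in the inequality is exactly $\|z\|^2\|v\|^2$:

```python
import numpy as np

rng = np.random.default_rng(6)
u = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# <a, b>, linear in the first argument, conjugate-linear in the second.
inner = lambda a, b: np.vdot(b, a)

z = u - (inner(u, v) / inner(v, v)) * v
assert np.isclose(inner(z, v), 0)          # z is orthogonal to v

lhs = abs(inner(u, v))**2
rhs = (inner(u, u) * inner(v, v)).real
slack = (inner(z, z) * inner(v, v)).real   # ||z||^2 ||v||^2
assert lhs <= rhs and np.isclose(rhs - lhs, slack)
```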

4.3 Alternative Proof

Let u, v be arbitrary vectors in a vector space V over F with an inner product, where F is the field of real or complex numbers.

If $\langle u, v\rangle = 0$, the theorem holds trivially. If not, then $u \neq 0$ and $v \neq 0$. Choose $\lambda = |\langle u, v\rangle| / \langle u, v\rangle$. Then $|\lambda| = 1$ and

$0 \leq \left\|\lambda\frac{u}{\|u\|} - \frac{v}{\|v\|}\right\|^2 = |\lambda|^2\frac{\|u\|^2}{\|u\|^2} - 2\,\mathrm{Re}\left\langle \lambda\frac{u}{\|u\|},\; \frac{v}{\|v\|}\right\rangle + \frac{\|v\|^2}{\|v\|^2} = 2 - 2\,\frac{|\langle u, v\rangle|}{\|u\|\,\|v\|}.$

    It follows that

$|\langle u, v\rangle| = \lambda\langle u, v\rangle \leq \|u\|\,\|v\|.$


    4.4 Special cases

    4.4.1 Rn

In Euclidean space $\mathbb{R}^n$ with the standard inner product, the Cauchy–Schwarz inequality is

$\left(\sum_{i=1}^n x_i y_i\right)^2 \leq \left(\sum_{i=1}^n x_i^2\right)\left(\sum_{i=1}^n y_i^2\right).$

    To prove this form of the inequality, consider the following quadratic polynomial in z.

$(x_1 z + y_1)^2 + \cdots + (x_n z + y_n)^2 = \left(\sum x_i^2\right) z^2 + 2\left(\sum x_i y_i\right) z + \sum y_i^2.$

    Since it is nonnegative it has at most one real root in z, whence its discriminant is less than or equal to zero, that is,

$\left(\sum x_i y_i\right)^2 - \sum x_i^2\,\sum y_i^2 \leq 0,$

which yields the Cauchy–Schwarz inequality.

An equivalent proof for $\mathbb{R}^n$ starts with the summation below. Expanding the brackets we have:

$\sum_{i=1}^n \sum_{j=1}^n (x_i y_j - x_j y_i)^2 = \sum_{i=1}^n x_i^2 \sum_{j=1}^n y_j^2 + \sum_{j=1}^n x_j^2 \sum_{i=1}^n y_i^2 - 2\sum_{i=1}^n x_i y_i \sum_{j=1}^n x_j y_j,$

collecting together identical terms (albeit with different summation indices) we find:

$\frac{1}{2}\sum_{i=1}^n \sum_{j=1}^n (x_i y_j - x_j y_i)^2 = \sum_{i=1}^n x_i^2 \sum_{i=1}^n y_i^2 - \left(\sum_{i=1}^n x_i y_i\right)^2.$

Because the left-hand side of the equation is a sum of the squares of real numbers, it is greater than or equal to zero, thus:

$\sum_{i=1}^n x_i^2 \sum_{i=1}^n y_i^2 - \left(\sum_{i=1}^n x_i y_i\right)^2 \geq 0.$

Yet another approach when $n \geq 2$ ($n = 1$ is trivial) is to consider the plane containing x and y. More precisely, recoordinatize $\mathbb{R}^n$ with any orthonormal basis whose first two vectors span a subspace containing x and y. In this basis only $x_1, x_2, y_1$ and $y_2$ are nonzero, and the inequality reduces to the algebra of the dot product in the plane, which is related to the angle between two vectors, from which we obtain the inequality:

$|x\cdot y| = \|x\|\,\|y\|\,|\cos\theta| \leq \|x\|\,\|y\|.$

When $n = 3$ the Cauchy–Schwarz inequality can also be deduced from Lagrange's identity, which takes the form

$\langle x, x\rangle \cdot \langle y, y\rangle = |\langle x, y\rangle|^2 + |x\times y|^2,$

from which the Cauchy–Schwarz inequality readily follows. Another proof of the general case for $n$ can be done by using the technique used to prove the inequality of arithmetic and geometric means.
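A quick numerical sketch (illustrative only; NumPy and the seed are arbitrary) of Lagrange's identity, whose non-negative cross-product term is exactly the slack in Cauchy–Schwarz:

```python
import numpy as np

rng = np.random.default_rng(7)
x, y = rng.standard_normal((2, 3))

cross_sq = np.cross(x, y) @ np.cross(x, y)        # |x cross y|^2 >= 0
assert np.isclose((x @ x) * (y @ y), (x @ y)**2 + cross_sq)
assert (x @ y)**2 <= (x @ x) * (y @ y)            # Cauchy-Schwarz follows
```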


    4.4.2 L2

    For the inner product space of square-integrable complex-valued functions, one has

$\left|\int_{\mathbb{R}^n} f(x)\overline{g(x)}\,dx\right|^2 \leq \int_{\mathbb{R}^n} |f(x)|^2\,dx \int_{\mathbb{R}^n} |g(x)|^2\,dx.$

A generalization of this is the Hölder inequality.
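The integral form can be spot-checked by simple quadrature; in the sketch below (illustrative; the interval, grid, and test functions are arbitrary choices) the integrals are approximated by Riemann sums on [0, 1]:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]
integ = lambda h: np.sum(h) * dt          # crude Riemann-sum quadrature

f = np.exp(2j * np.pi * t)                # example square-integrable functions
g = t**2 + 1j * t

lhs = abs(integ(f * np.conj(g)))**2
rhs = integ(abs(f)**2) * integ(abs(g)**2)
assert lhs <= rhs
```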

4.5 Applications

The triangle inequality for the standard norm is often shown as a consequence of the Cauchy–Schwarz inequality, as follows: given vectors x and y,

$\begin{aligned} \|x + y\|^2 &= \langle x + y,\, x + y\rangle \\ &= \|x\|^2 + \langle x, y\rangle + \langle y, x\rangle + \|y\|^2 \\ &= \|x\|^2 + 2\,\mathrm{Re}\,\langle x, y\rangle + \|y\|^2 \\ &\leq \|x\|^2 + 2|\langle x, y\rangle| + \|y\|^2 \\ &\leq \|x\|^2 + 2\|x\|\,\|y\| + \|y\|^2 \\ &= (\|x\| + \|y\|)^2. \end{aligned}$

Taking square roots gives the triangle inequality.

The Cauchy–Schwarz inequality allows one to extend the notion of "angle between two vectors" to any real inner product space, by defining:

$\cos\theta_{xy} = \frac{\langle x, y\rangle}{\|x\|\,\|y\|}.$

The Cauchy–Schwarz inequality proves that this definition is sensible, by showing that the right-hand side lies in the interval [−1, 1], and justifies the notion that (real) Hilbert spaces are simply generalizations of the Euclidean space. It can also be used to define an angle in complex inner product spaces, by taking the absolute value of the right-hand side, as is done when extracting a metric from quantum fidelity.

The Cauchy–Schwarz inequality is used to prove that the inner product is a continuous function with respect to the topology induced by the inner product itself.

The Cauchy–Schwarz inequality is usually used to show Bessel's inequality.

4.5.1 Probability theory

Let X, Y be random variables; then:

$\operatorname{Var}(Y) \geq \frac{\operatorname{Cov}(Y, X)\operatorname{Cov}(Y, X)}{\operatorname{Var}(X)}.$

In fact, we can define an inner product on the set of random variables using the expectation of their product:

$\langle X, Y\rangle \triangleq \mathrm{E}(XY),$

and so, by the Cauchy–Schwarz inequality,


$|\mathrm{E}(XY)|^2 \leq \mathrm{E}(X^2)\,\mathrm{E}(Y^2).$

Moreover, if $\mu = \mathrm{E}(X)$ and $\nu = \mathrm{E}(Y)$, then

$\begin{aligned} |\operatorname{Cov}(X, Y)|^2 &= |\mathrm{E}((X - \mu)(Y - \nu))|^2 \\ &= |\langle X - \mu,\, Y - \nu\rangle|^2 \\ &\leq \langle X - \mu,\, X - \mu\rangle\,\langle Y - \nu,\, Y - \nu\rangle \\ &= \mathrm{E}((X - \mu)^2)\,\mathrm{E}((Y - \nu)^2) \\ &= \operatorname{Var}(X)\operatorname{Var}(Y), \end{aligned}$

    where Var denotes variance and Cov denotes covariance.
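An empirical sketch (synthetic data; NumPy, the seed, and the sample size are arbitrary) of the covariance form:

```python
import numpy as np

rng = np.random.default_rng(8)
X = rng.standard_normal(10_000)
Y = 0.3 * X + rng.standard_normal(10_000)   # correlated with X

C = np.cov(X, Y)   # [[Var(X), Cov(X, Y)], [Cov(X, Y), Var(Y)]]
assert C[0, 1]**2 <= C[0, 0] * C[1, 1]
```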

4.6 Generalizations

Various generalizations of the Cauchy–Schwarz inequality exist in the context of operator theory, e.g. for operator-convex functions and operator algebras, where the domain and/or range are replaced by a C*-algebra or W*-algebra.

This section lists a few such inequalities from the operator algebra setting, to give a flavor of results of this type.

4.6.1 Positive functionals on C*- and W*-algebras

One can discuss inner products as positive functionals. Given a Hilbert space $L^2(m)$, $m$ being a finite measure, the inner product $\langle\cdot,\cdot\rangle$ gives rise to a positive functional $\varphi$ by

$\varphi(g) = \langle g, 1\rangle.$

Since $\langle f, f\rangle \geq 0$, $\varphi(f^*f) \geq 0$ for all $f$ in $L^2(m)$, where $f^*$ is the pointwise conjugate of $f$. So $\varphi$ is positive. Conversely, every positive functional $\varphi$ gives a corresponding inner product $\langle f, g\rangle_\varphi = \varphi(g^*f)$. In this language, the Cauchy–Schwarz inequality becomes

$|\varphi(g^*f)|^2 \leq \varphi(f^*f)\,\varphi(g^*g),$

which extends verbatim to positive functionals on C*-algebras.

We now give an operator theoretic proof for the Cauchy–Schwarz inequality which passes to the C*-algebra setting. One can see from the proof that the Cauchy–Schwarz inequality is a consequence of the positivity and anti-symmetry inner-product axioms.

Consider the positive matrix

$M = \begin{pmatrix} f^* \\ g^* \end{pmatrix}\begin{pmatrix} f & g \end{pmatrix} = \begin{pmatrix} f^*f & f^*g \\ g^*f & g^*g \end{pmatrix}.$

Since $\varphi$ is a positive linear map whose range, the complex numbers $\mathbb{C}$, is a commutative C*-algebra, $\varphi$ is completely positive. Therefore

$M' = (I_2 \otimes \varphi)(M) = \begin{pmatrix} \varphi(f^*f) & \varphi(f^*g) \\ \varphi(g^*f) & \varphi(g^*g) \end{pmatrix}$

is a positive $2\times 2$ scalar matrix, which implies it has positive determinant:


$\varphi(f^*f)\,\varphi(g^*g) - |\varphi(g^*f)|^2 \geq 0, \quad\text{i.e.}\quad \varphi(f^*f)\,\varphi(g^*g) \geq |\varphi(g^*f)|^2.$

This is precisely the Cauchy–Schwarz inequality. If $f$ and $g$ are elements of a C*-algebra, $f^*$ and $g^*$ denote their respective adjoints.

We can also deduce from the above that every positive linear functional is bounded, corresponding to the fact that the inner product is jointly continuous.

4.6.2 Positive maps

Positive functionals are special cases of positive maps. A linear map $\Phi$ between C*-algebras is said to be a positive map if $a \geq 0$ implies $\Phi(a) \geq 0$. It is natural to ask whether inequalities of Schwarz-type exist for positive maps. In this more general setting, usually additional assumptions are needed to obtain such results.

Kadison–Schwarz inequality

The following theorem is named after Richard Kadison.

Theorem. If $\Phi$ is a unital positive map, then for every normal element $a$ in its domain, we have $\Phi(a^*a) \geq \Phi(a^*)\Phi(a)$ and $\Phi(a^*a) \geq \Phi(a)\Phi(a^*)$.

This extends the fact $\varphi(a^*a)\cdot 1 \geq \varphi(a)^*\varphi(a) = |\varphi(a)|^2$ when $\varphi$ is a linear functional.

The case when $a$ is self-adjoint, i.e. $a = a^*$, is sometimes known as Kadison's inequality.

    2-positive maps

When $\Phi$ is 2-positive, a stronger assumption than merely positive, one has something that looks very similar to the original Cauchy–Schwarz inequality:

Theorem (Modified Schwarz inequality for 2-positive maps).[3] For a 2-positive map $\Phi$ between C*-algebras, for all $a$, $b$ in its domain,

(1) $\Phi(a)^*\Phi(a) \leq \|\Phi(1)\|\,\Phi(a^*a),$

(2) $\|\Phi(a^*b)\|^2 \leq \|\Phi(a^*a)\|\,\|\Phi(b^*b)\|.$

A simple argument for (2) is as follows. Consider the positive matrix

$M = \begin{pmatrix} a^* & 0 \\ b^* & 0 \end{pmatrix}\begin{pmatrix} a & b \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} a^*a & a^*b \\ b^*a & b^*b \end{pmatrix}.$

By 2-positivity of $\Phi$,

$(I_2 \otimes \Phi)(M) = \begin{pmatrix} \Phi(a^*a) & \Phi(a^*b) \\ \Phi(b^*a) & \Phi(b^*b) \end{pmatrix}$

is positive. The desired inequality then follows from the properties of positive $2\times 2$ (operator) matrices.

Part (1) is analogous. One can replace the matrix $\begin{pmatrix} a & b \\ 0 & 0 \end{pmatrix}$ by $\begin{pmatrix} 1 & a \\ 0 & 0 \end{pmatrix}.$

4.7 Physics

The general formulation of the Heisenberg uncertainty principle is derived using the Cauchy–Schwarz inequality.


4.8 See also

Hölder's inequality
Minkowski inequality
Jensen's inequality

4.9 Notes

[1] The Cauchy–Schwarz Master Class: an Introduction to the Art of Mathematical Inequalities, Ch. 1, by J. Michael Steele.

[2] Strang, Gilbert (19 July 2005). "3.2". Linear Algebra and its Applications (4th ed.). Stamford, CT: Cengage Learning. pp. 154-155. ISBN 978-0030105678.

[3] Paulsen (2002), Completely Bounded Maps and Operator Algebras, ISBN 9780521816694, p. 40.

4.10 References

J. M. Aldaz, S. Barza, M. Fujii and M. S. Moslehian, "Advances in operator Cauchy–Schwarz inequalities and their reverses", Ann. Funct. Anal. 6 (2015), no. 3, 275-295.

Bityutskov, V. I. (2001), "Bunyakovskii inequality", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4.

Bouniakowsky, V. (1859), "Sur quelques inégalités c