
System of linear equations
From Wikipedia, the free encyclopedia

Contents

1 Flag (linear algebra)
    1.1 Bases
    1.2 Stabilizer
    1.3 Subspace nest
    1.4 Set-theoretic analogs
    1.5 See also
    1.6 References

2 Scalar (mathematics)
    2.1 Etymology
    2.2 Definitions and properties
        2.2.1 Scalars of vector spaces
        2.2.2 Scalars as vector components
        2.2.3 Scalars in normed vector spaces
        2.2.4 Scalars in modules
        2.2.5 Scaling transformation
        2.2.6 Scalar operations (computer science)
    2.3 See also
    2.4 References
    2.5 External links

3 Scalar multiplication
    3.1 Definition
        3.1.1 Properties
    3.2 Interpretation
    3.3 See also
    3.4 References

4 Schmidt decomposition
    4.1 Theorem
        4.1.1 Proof
    4.2 Some observations
        4.2.1 Spectrum of reduced states
        4.2.2 Schmidt rank and entanglement
        4.2.3 Von Neumann entropy
    4.3 Crystal plasticity
    4.4 See also
    4.5 Further reading

5 Schur complement
    5.1 Background
    5.2 Application to solving linear equations
    5.3 Applications to probability theory and statistics
    5.4 Schur complement condition for positive definiteness
    5.5 See also
    5.6 References

6 Schur product theorem
    6.1 Proof
        6.1.1 Proof using the trace formula
        6.1.2 Proof using Gaussian integration
        6.1.3 Proof using eigendecomposition
    6.2 References
    6.3 External links

7 Segre classification
    7.1 See also
    7.2 References

8 Self-adjoint
    8.1 See also
    8.2 References

9 Semi-simple operator
    9.1 Notes
    9.2 References

10 Semi-simplicity
    10.1 Introductory example of vector spaces
    10.2 Semi-simple modules and rings
    10.3 Semi-simple matrices
    10.4 Semi-simple categories
    10.5 Semi-simplicity in representation theory
    10.6 See also
    10.7 References
    10.8 External links

11 Semilinear transformation
    11.1 Definition
    11.2 Examples
    11.3 General semilinear group
        11.3.1 Proof
    11.4 Applications
        11.4.1 Projective geometry
        11.4.2 Mathieu group
    11.5 References

12 Sesquilinear form
    12.1 Convention
    12.2 Complex vector spaces
        12.2.1 Geometric motivation
        12.2.2 Hermitian form
        12.2.3 Skew-Hermitian form
    12.3 Over arbitrary fields
        12.3.1 Example
    12.4 In projective geometry
    12.5 Over arbitrary rings
    12.6 See also
    12.7 Notes
    12.8 References
    12.9 External links

13 Seven-dimensional cross product
    13.1 Multiplication table
    13.2 Definition
    13.3 Consequences of the defining properties
    13.4 Coordinate expressions
        13.4.1 Different multiplication tables
        13.4.2 Using geometric algebra
    13.5 Relation to the octonions
    13.6 Rotations
    13.7 Generalizations
    13.8 See also
    13.9 Notes
    13.10 References

14 Shear mapping
    14.1 Definition
        14.1.1 Horizontal and vertical shear of the plane
        14.1.2 General shear mappings
    14.2 Applications
    14.3 References

15 Shear matrix
    15.1 Properties
    15.2 Applications
    15.3 See also
    15.4 Notes
    15.5 References

16 Sherman–Morrison formula
    16.1 Statement
    16.2 Application
    16.3 Verification
    16.4 See also
    16.5 References
    16.6 External links

17 Signal-flow graph
    17.1 History
    17.2 Domain of application
    17.3 Basic flow graph concepts
        17.3.1 Choosing the variables
        17.3.2 Non-uniqueness
    17.4 Linear signal-flow graphs
        17.4.1 Basic components
        17.4.2 Systematic reduction to sources and sinks
        17.4.3 Solving linear equations
    17.5 Relation to block diagrams
    17.6 Interpreting 'causality'
    17.7 Signal-flow graphs for analysis and design
        17.7.1 Signal-flow graphs for dynamic systems analysis
        17.7.2 Signal-flow graphs for design synthesis
    17.8 Shannon and Shannon-Happ formulas
    17.9 Linear signal-flow graph examples
        17.9.1 Simple voltage amplifier
        17.9.2 Ideal negative feedback amplifier
        17.9.3 Electrical circuit containing a two-port network
        17.9.4 Mechatronics: Position servo with multi-loop feedback
    17.10 Terminology and classification of signal-flow graphs
        17.10.1 Standards covering signal-flow graphs
        17.10.2 State transition signal-flow graph
        17.10.3 Closed flowgraph
    17.11 Nonlinear flow graphs
        17.11.1 Examples of nonlinear branch functions
        17.11.2 Examples of nonlinear signal-flow graph models
    17.12 Applications of SFG techniques in various fields of science
    17.13 See also
    17.14 Notes
    17.15 References
    17.16 Further reading
    17.17 External links

18 Singular value decomposition
    18.1 Statement of the theorem
    18.2 Intuitive interpretations
        18.2.1 Rotation, scaling
        18.2.2 Singular values as semiaxes of an ellipse or ellipsoid
        18.2.3 The columns of U and V are orthonormal bases
    18.3 Example
    18.4 Singular values, singular vectors, and their relation to the SVD
    18.5 Applications of the SVD
        18.5.1 Pseudoinverse
        18.5.2 Solving homogeneous linear equations
        18.5.3 Total least squares minimization
        18.5.4 Range, null space and rank
        18.5.5 Low-rank matrix approximation
        18.5.6 Separable models
        18.5.7 Nearest orthogonal matrix
        18.5.8 The Kabsch algorithm
        18.5.9 Signal processing
        18.5.10 Other examples
    18.6 Relation to eigenvalue decomposition
    18.7 Existence
        18.7.1 Based on the spectral theorem
        18.7.2 Based on variational characterization
    18.8 Geometric meaning
    18.9 Calculating the SVD
        18.9.1 Numerical approach
        18.9.2 Analytic result of 2 × 2 SVD
    18.10 Reduced SVDs
        18.10.1 Thin SVD
        18.10.2 Compact SVD
        18.10.3 Truncated SVD
    18.11 Norms
        18.11.1 Ky Fan norms
        18.11.2 Hilbert–Schmidt norm
    18.12 Tensor SVD
    18.13 Bounded operators on Hilbert spaces
        18.13.1 Singular values and compact operators
    18.14 History
    18.15 See also
    18.16 Notes
    18.17 References
    18.18 External links

19 Skew-Hamiltonian matrix
    19.1 Notes

20 Skew-Hermitian
    20.1 See also

21 Special linear group
    21.1 Geometric interpretation
    21.2 Lie subgroup
    21.3 Topology
    21.4 Relations to other subgroups of GL(n,A)
    21.5 Generators and relations
    21.6 Structure of GL(n,F)
    21.7 See also
    21.8 References

22 Spectral theorem
    22.1 Finite-dimensional case
        22.1.1 Hermitian maps and Hermitian matrices
        22.1.2 Normal matrices
    22.2 Compact self-adjoint operators
    22.3 Bounded self-adjoint operators
    22.4 General self-adjoint operators
    22.5 See also
    22.6 References

23 Spectral theory
    23.1 Mathematical background
    23.2 Physical background
    23.3 A definition of spectrum
    23.4 Spectral theory briefly
    23.5 Resolution of the identity
    23.6 Resolvent operator
    23.7 Operator equations
    23.8 Spectral theorem and Rayleigh quotient
    23.9 See also
    23.10 Notes
    23.11 References
    23.12 External links

24 Spherical basis
    24.1 Spherical basis in three dimensions
        24.1.1 Basis definition
        24.1.2 Commutator definition
        24.1.3 Rotation definition
        24.1.4 Coordinate vectors
    24.2 Properties (three dimensions)
        24.2.1 Orthonormality
        24.2.2 Change of basis matrix
        24.2.3 Cross products
        24.2.4 Inner product in the spherical basis
    24.3 See also
    24.4 References
        24.4.1 Notes
    24.5 External links

25 Spinors in three dimensions
    25.1 Formulation
    25.2 Isotropic vectors
    25.3 Reality
    25.4 Reality structures
    25.5 Examples in physics
        25.5.1 Spinors of the Pauli spin matrices
        25.5.2 General remarks
    25.6 See also
    25.7 References

26 Split-complex number
    26.1 Definition
        26.1.1 Conjugate, modulus, and bilinear form
        26.1.2 The diagonal basis
    26.2 Geometry
    26.3 Algebraic properties
    26.4 Matrix representations
    26.5 History
    26.6 Synonyms
    26.7 See also
    26.8 References and external links

27 Spread of a matrix
    27.1 Definition
    27.2 Examples
    27.3 See also
    27.4 References

28 Squeeze mapping
    28.1 Logarithm and hyperbolic angle
    28.2 Group theory
    28.3 Applications
        28.3.1 Corner flow
        28.3.2 Relativistic spacetime
        28.3.3 Bridge to transcendentals
    28.4 See also
    28.5 References

29 Stabilizer code
    29.1 Mathematical background
    29.2 Definition
    29.3 Stabilizer error-correction conditions
    29.4 Relation between Pauli group and binary vectors
    29.5 Example of a stabilizer code
    29.6 References

30 Standard basis
    30.1 Properties
    30.2 Generalizations
    30.3 Other usages
    30.4 See also
    30.5 References

31 Steinitz exchange lemma
    31.1 Statement
    31.2 Proof
    31.3 Applications
    31.4 References

32 Stokes operator
    32.1 Definition
    32.2 Properties
    32.3 References

33 Sublinear function
    33.1 Examples
    33.2 Properties
    33.3 Operators
    33.4 References

34 Sylvester's determinant theorem
    34.1 Proof
    34.2 Applications
    34.3 References

35 Sylvester's law of inertia
    35.1 Statement of the theorem
    35.2 Statement in terms of eigenvalues
    35.3 Law of inertia for quadratic forms
    35.4 Generalizations
    35.5 See also
    35.6 References
    35.7 External links

36 Symplectic vector space
    36.1 Standard symplectic space
        36.1.1 Analogy with complex structures
    36.2 Volume form
    36.3 Symplectic map
    36.4 Symplectic group
    36.5 Subspaces
    36.6 Heisenberg group
    36.7 See also
    36.8 References

37 System of linear equations
    37.1 Elementary example
    37.2 General form
        37.2.1 Vector equation
        37.2.2 Matrix equation
    37.3 Solution set
        37.3.1 Geometric interpretation
        37.3.2 General behavior
    37.4 Properties
        37.4.1 Independence
        37.4.2 Consistency
        37.4.3 Equivalence
    37.5 Solving a linear system
        37.5.1 Describing the solution
        37.5.2 Elimination of variables
        37.5.3 Row reduction
        37.5.4 Cramer's rule
        37.5.5 Matrix solution
        37.5.6 Other methods
    37.6 Homogeneous systems
        37.6.1 Solution set
        37.6.2 Relation to nonhomogeneous systems
    37.7 See also
    37.8 Notes
    37.9 References
        37.9.1 Textbooks
    37.10 Text and image sources, contributors, and licenses
        37.10.1 Text
        37.10.2 Images
        37.10.3 Content license

Chapter 1

    Flag (linear algebra)

In mathematics, particularly in linear algebra, a flag is an increasing sequence of subspaces of a finite-dimensional vector space V. Here "increasing" means each is a proper subspace of the next (see filtration):

\{0\} = V_0 \subset V_1 \subset V_2 \subset \cdots \subset V_k = V.

If we write dim V_i = d_i, then we have

0 = d_0 < d_1 < d_2 < \cdots < d_k = n,

where n is the dimension of V (assumed to be finite-dimensional). Hence, we must have k ≤ n. A flag is called a complete flag if d_i = i; otherwise it is called a partial flag.

A partial flag can be obtained from a complete flag by deleting some of the subspaces. Conversely, any partial flag can be completed (in many different ways) by inserting suitable subspaces.

The signature of the flag is the sequence (d_1, \ldots, d_k).

Under certain conditions the resulting sequence resembles a flag with a point connected to a line connected to a surface.

1.1 Bases

An ordered basis for V is said to be adapted to a flag if the first d_i basis vectors form a basis for V_i for each 0 ≤ i ≤ k. Standard arguments from linear algebra can show that any flag has an adapted basis.

Any ordered basis gives rise to a complete flag by letting the V_i be the span of the first i basis vectors. For example, the standard flag in R^n is induced from the standard basis (e_1, ..., e_n), where e_i denotes the vector with a 1 in the i-th slot and 0s elsewhere. Concretely, the standard flag is the chain of subspaces:

0 < \langle e_1 \rangle < \langle e_1, e_2 \rangle < \cdots < \langle e_1, \ldots, e_n \rangle = K^n.

An adapted basis is almost never unique (trivial counterexamples); see below.

A complete flag on an inner product space has an essentially unique orthonormal basis: it is unique up to multiplying each vector by a unit (a scalar of unit length, such as 1, −1, i). This is easiest to prove inductively, by noting that v_i \in V_{i-1}^{\perp} \cap V_i, which defines it uniquely up to unit.

More abstractly, it is unique up to an action of the maximal torus: the flag corresponds to the Borel group, and the inner product corresponds to the maximal compact subgroup.[1]
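For a concrete illustration, an orthonormal basis adapted to a complete flag can be computed with a QR factorization, which performs Gram–Schmidt on the ordered basis. A minimal numpy sketch, assuming a hypothetical ordered basis A of R^3 whose leading columns span the flag:

```python
import numpy as np

# A hypothetical ordered basis of R^3; its first i columns span V_i.
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# QR factorization applies Gram-Schmidt to the columns: the first i
# columns of Q span the same subspace V_i as the first i columns of A,
# so Q is an orthonormal basis adapted to the flag (unique up to signs).
Q, R = np.linalg.qr(A)

for i in range(1, 4):
    # The rank of [A_i | Q_i] equals i, confirming the spans agree.
    stacked = np.hstack([A[:, :i], Q[:, :i]])
    assert np.linalg.matrix_rank(stacked) == i
print(Q)
```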


1.2 Stabilizer

The stabilizer subgroup of the standard flag is the group of invertible upper triangular matrices.

More generally, the stabilizer of a flag (the linear operators on V such that T(V_i) < V_i for all i) is, in matrix terms, the algebra of block upper triangular matrices (with respect to an adapted basis), where the block sizes are d_i − d_{i−1}.

The stabilizer subgroup of a complete flag is the set of invertible upper triangular matrices with respect to any basis adapted to the flag. The subgroup of lower triangular matrices with respect to such a basis depends on that basis, and can therefore not be characterized in terms of the flag only.

The stabilizer subgroup of any complete flag is a Borel subgroup (of the general linear group), and the stabilizer of any partial flag is a parabolic subgroup.

The stabilizer subgroup of a flag acts simply transitively on adapted bases for the flag, and thus these are not unique unless the stabilizer is trivial. That is a very exceptional circumstance: it happens only for a vector space of dimension 0, or for a vector space over F_2 of dimension 1 (precisely the cases where only one basis exists, independently of any flag).
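The first statement is easy to check numerically; a small sketch on an arbitrary random example (not taken from the text):

```python
import numpy as np

# Any invertible upper-triangular matrix should stabilize the standard
# flag: T(V_i) <= V_i, where V_i = span(e_1, ..., e_i).
rng = np.random.default_rng(0)
T = np.triu(rng.random((4, 4))) + np.eye(4)  # invertible upper triangular

for i in range(1, 5):
    image = T[:, :i]                  # images of e_1, ..., e_i
    # Entries below row i must vanish, i.e. T(V_i) stays inside V_i.
    assert np.allclose(image[i:, :], 0.0)
print("upper-triangular T stabilizes the standard flag")
```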

1.3 Subspace nest

In an infinite-dimensional space V, as used in functional analysis, the flag idea generalises to a subspace nest, namely a collection of subspaces of V that is a total order for inclusion and which further is closed under arbitrary intersections and closed linear spans. See nest algebra.

1.4 Set-theoretic analogs

Further information: Field with one element

From the point of view of the field with one element, a set can be seen as a vector space over the field with one element: this formalizes various analogies between Coxeter groups and algebraic groups.

Under this correspondence, an ordering on a set corresponds to a maximal flag: an ordering is equivalent to a maximal filtration of a set. For instance, the filtration (flag) \{0\} \subset \{0,1\} \subset \{0,1,2\} corresponds to the ordering (0, 1, 2).

1.5 See also

Filtration (mathematics)
Flag manifold
Grassmannian

1.6 References

[1] Harris, Joe (1991). Representation Theory: A First Course, p. 95. Springer. ISBN 0387974954.

Shafarevich, I. R.; A. O. Remizov (2012). Linear Algebra and Geometry. Springer. ISBN 978-3-642-30993-9.

Chapter 2

    Scalar (mathematics)

For other uses, see Scalar (disambiguation).

In linear algebra, real numbers are called scalars and relate to vectors in a vector space through the operation of scalar multiplication, in which a vector can be multiplied by a number to produce another vector.[1][2][3] More generally, a vector space may be defined by using any field instead of real numbers, such as complex numbers. Then the scalars of that vector space will be the elements of the associated field.

[Figure: a Euclidean vector. Its coordinates x and y are scalars, as is its length, but v itself is not a scalar.]

A scalar product operation (not to be confused with scalar multiplication) may be defined on a vector space, allowing two vectors to be multiplied to produce a scalar. A vector space equipped with a scalar product is called an inner product space.

The real component of a quaternion is also called its scalar part.


The term is also sometimes used informally to mean a vector, matrix, tensor, or other usually "compound" value that is actually reduced to a single component. Thus, for example, the product of a 1×n matrix and an n×1 matrix, which is formally a 1×1 matrix, is often said to be a scalar.

The term scalar matrix is used to denote a matrix of the form kI where k is a scalar and I is the identity matrix.

2.1 Etymology

The word scalar derives from the Latin word scalaris, an adjectival form of scala (Latin for "ladder"). The English word scale is also derived from scala. The first recorded usage of the word scalar in mathematics was by François Viète in Analytic Art (In artem analyticen isagoge) (1591):[4]

    Magnitudes that ascend or descend proportionally in keeping with their nature from one kind to another are called scalar terms.
    (Latin: Magnitudines quae ex genere ad genus sua vi proportionaliter adscendunt vel descendunt, vocentur Scalares.)

According to a citation in the Oxford English Dictionary the first recorded usage of the term in English was by W. R. Hamilton in 1846, to refer to the real part of a quaternion:

    The algebraically real part may receive, according to the question in which it occurs, all values contained on the one scale of progression of numbers from negative to positive infinity; we shall call it therefore the scalar part.

2.2 Definitions and properties

2.2.1 Scalars of vector spaces

A vector space is defined as a set of vectors, a set of scalars, and a scalar multiplication operation that takes a scalar k and a vector v to another vector kv. For example, in a coordinate space, the scalar multiplication k(v_1, v_2, \ldots, v_n) yields (kv_1, kv_2, \ldots, kv_n). In a (linear) function space, kf is the function x \mapsto k(f(x)).

The scalars can be taken from any field, including the rational, algebraic, real, and complex numbers, as well as finite fields.

2.2.2 Scalars as vector components

According to a fundamental theorem of linear algebra, every vector space has a basis. It follows that every vector space over a scalar field K is isomorphic to a coordinate vector space where the coordinates are elements of K. For example, every real vector space of dimension n is isomorphic to the n-dimensional real space R^n.

2.2.3 Scalars in normed vector spaces

Alternatively, a vector space V can be equipped with a norm function that assigns to every vector v in V a scalar ||v||. By definition, multiplying v by a scalar k also multiplies its norm by |k|. If ||v|| is interpreted as the length of v, this operation can be described as scaling the length of v by k. A vector space equipped with a norm is called a normed vector space (or normed linear space).

The norm is usually defined to be an element of V's scalar field K, which restricts the latter to fields that support the notion of sign. Moreover, if V has dimension 2 or more, K must be closed under square root, as well as the four arithmetic operations; thus the rational numbers Q are excluded, but the surd field is acceptable. For this reason, not every scalar product space is a normed vector space.


2.2.4 Scalars in modules

When the requirement that the set of scalars form a field is relaxed so that it need only form a ring (so that, for example, the division of scalars need not be defined, or the scalars need not be commutative), the resulting more general algebraic structure is called a module.

In this case the scalars may be complicated objects. For instance, if R is a ring, the vectors of the product space R^n can be made into a module with the n×n matrices with entries from R as the scalars. Another example comes from manifold theory, where the space of sections of the tangent bundle forms a module over the algebra of real functions on the manifold.

2.2.5 Scaling transformation

The scalar multiplication of vector spaces and modules is a special case of scaling, a kind of linear transformation.

2.2.6 Scalar operations (computer science)

Operations that apply to a single value at a time.

Scalar processor

2.3 See also

Scalar (physics)

2.4 References

[1] Lay, David C. (2006). Linear Algebra and Its Applications (3rd ed.). Addison–Wesley. ISBN 0-321-28713-4.
[2] Strang, Gilbert (2006). Linear Algebra and Its Applications (4th ed.). Brooks Cole. ISBN 0-03-010567-6.
[3] Axler, Sheldon (2002). Linear Algebra Done Right (2nd ed.). Springer. ISBN 0-387-98258-2.
[4] Lincoln Collins. Biography Paper: Francois Viete. http://math.ucdenver.edu/~wcherowi/courses/m4010/s08/lcviete.pdf

2.5 External links

Hazewinkel, Michiel, ed. (2001), "Scalar", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
Weisstein, Eric W., "Scalar", MathWorld.
Mathwords.com – Scalar

Chapter 3

    Scalar multiplication

Not to be confused with scalar product.

In mathematics, scalar multiplication is one of the basic operations defining a vector space in linear algebra[1][2][3] (or more generally, a module in abstract algebra[4][5]). In an intuitive geometrical context, scalar multiplication of a real Euclidean vector by a positive real number multiplies the magnitude of the vector without changing its direction. The term scalar itself derives from this usage: a scalar is that which scales vectors. Scalar multiplication is the multiplication of a vector by a scalar (where the product is a vector), and must be distinguished from the inner product of two vectors (where the product is a scalar).

[Figure: scalar multiplication of a vector a by a factor of 3 stretches the vector out to 3a.]

3.1 Definition

In general, if K is a field and V is a vector space over K, then scalar multiplication is a function from K × V to V. The result of applying this function to c in K and v in V is denoted cv.

    3.1.1 Properties

    Scalar multiplication obeys the following rules (vector in boldface):


[Figure: the scalar multiples −a and 2a of a vector a.]

Additivity in the scalar: (c + d)v = cv + dv;

Additivity in the vector: c(v + w) = cv + cw;

Compatibility of product of scalars with scalar multiplication: (cd)v = c(dv);

Multiplying by 1 does not change a vector: 1v = v;

Multiplying by 0 gives the zero vector: 0v = 0;

Multiplying by −1 gives the additive inverse: (−1)v = −v.

Here + is addition either in the field or in the vector space, as appropriate; and 0 is the additive identity in either. Juxtaposition indicates either scalar multiplication or the multiplication operation in the field.
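As a quick illustration, the rules above can be spot-checked numerically for the special case K = R and V = R^3; a minimal numpy sketch:

```python
import numpy as np

# Spot-check the scalar-multiplication axioms for K = R and V = R^3.
c, d = 2.0, -3.0
v = np.array([1.0, 2.0, 3.0])
w = np.array([-1.0, 0.5, 4.0])

assert np.allclose((c + d) * v, c * v + d * v)   # additivity in the scalar
assert np.allclose(c * (v + w), c * v + c * w)   # additivity in the vector
assert np.allclose((c * d) * v, c * (d * v))     # compatibility
assert np.allclose(1 * v, v)                     # multiplicative identity
assert np.allclose(0 * v, np.zeros(3))           # zero scalar
assert np.allclose(-1 * v, -v)                   # additive inverse
```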

3.2 Interpretation

Scalar multiplication may be viewed as an external binary operation or as an action of the field on the vector space. A geometric interpretation of scalar multiplication is that it stretches, or contracts, vectors by a constant factor.

As a special case, V may be taken to be K itself and scalar multiplication may then be taken to be simply the multiplication in the field.

When V is K^n, scalar multiplication is equivalent to multiplication of each component with the scalar, and may be defined as such.

The same idea applies if K is a commutative ring and V is a module over K. K can even be a rig, but then there is no additive inverse. If K is not commutative, the distinct operations left scalar multiplication cv and right scalar multiplication vc may be defined.

    3.3 See also

    Statics

    Mechanics

    Product (mathematics)


3.4 References

[1] Lay, David C. (2006). Linear Algebra and Its Applications (3rd ed.). Addison–Wesley. ISBN 0-321-28713-4.
[2] Strang, Gilbert (2006). Linear Algebra and Its Applications (4th ed.). Brooks Cole. ISBN 0-03-010567-6.
[3] Axler, Sheldon (2002). Linear Algebra Done Right (2nd ed.). Springer. ISBN 0-387-98258-2.
[4] Dummit, David S.; Foote, Richard M. (2004). Abstract Algebra (3rd ed.). John Wiley & Sons. ISBN 0-471-43334-9.
[5] Lang, Serge (2002). Algebra. Graduate Texts in Mathematics. Springer. ISBN 0-387-95385-X.

Chapter 4

    Schmidt decomposition

In linear algebra, the Schmidt decomposition (named after its originator Erhard Schmidt) refers to a particular way of expressing a vector in the tensor product of two inner product spaces. It has numerous applications in quantum information theory, for example in entanglement characterization and in state purification, and in plasticity.

4.1 Theorem

Let H_1 and H_2 be Hilbert spaces of dimensions n and m respectively. Assume n ≥ m. For any vector v in the tensor product H_1 \otimes H_2, there exist orthonormal sets \{u_1, \ldots, u_m\} \subset H_1 and \{v_1, \ldots, v_m\} \subset H_2 such that

v = \sum_{i=1}^{m} \alpha_i \, u_i \otimes v_i,

where the scalars \alpha_i are real, non-negative, and, as a set, uniquely determined by v.

4.1.1 Proof

The Schmidt decomposition is essentially a restatement of the singular value decomposition in a different context. Fix orthonormal bases \{e_1, \ldots, e_n\} \subset H_1 and \{f_1, \ldots, f_m\} \subset H_2. We can identify an elementary tensor e_i \otimes f_j with the matrix e_i f_j^T, where f_j^T is the transpose of f_j. A general element of the tensor product

v = \sum_{1 \le i \le n, \, 1 \le j \le m} \beta_{ij} \, e_i \otimes f_j

can then be viewed as the n × m matrix

M_v = (\beta_{ij})_{ij}.

By the singular value decomposition, there exist an n × n unitary U, an m × m unitary V, and a positive semidefinite diagonal m × m matrix \Sigma such that

M_v = U \begin{bmatrix} \Sigma \\ 0 \end{bmatrix} V^*.

Write U = \begin{bmatrix} U_1 & U_2 \end{bmatrix}, where U_1 is n × m; then we have

M_v = U_1 \Sigma V^*.

Let \{u_1, \ldots, u_m\} be the first m column vectors of U_1, \{v_1, \ldots, v_m\} the column vectors of V, and \alpha_1, \ldots, \alpha_m the diagonal elements of \Sigma. The previous expression is then

M_v = \sum_{k=1}^{m} \alpha_k u_k v_k^*.

Then

v = \sum_{k=1}^{m} \alpha_k \, u_k \otimes v_k,

which proves the claim.
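The proof translates directly into a short numerical routine: reshape v into the coefficient matrix M_v and take its SVD. A minimal numpy sketch (the function name and example state are illustrative assumptions, not from the text):

```python
import numpy as np

def schmidt_decomposition(v, n, m):
    """Sketch of the SVD argument above: v in C^n (x) C^m is reshaped
    into the n x m coefficient matrix M_v = (beta_ij), whose thin SVD
    yields the Schmidt coefficients and Schmidt vectors."""
    M_v = v.reshape(n, m)                  # beta_ij sits at v[i*m + j]
    U1, alphas, Vh = np.linalg.svd(M_v, full_matrices=False)
    return alphas, U1, Vh                  # v_k is the k-th row of Vh

# Example: the (normalized) two-qubit state (|00> + |11>)/sqrt(2).
v = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
alphas, U1, Vh = schmidt_decomposition(v, 2, 2)

# Reconstruct v = sum_k alpha_k u_k (x) v_k and compare.
v_rec = sum(a * np.kron(U1[:, k], Vh[k, :]) for k, a in enumerate(alphas))
assert np.allclose(v, v_rec)
print("Schmidt coefficients:", alphas)     # both equal 1/sqrt(2)
```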

4.2 Some observations

Some properties of the Schmidt decomposition are of physical interest.

4.2.1 Spectrum of reduced states

Consider a vector w in the tensor product H_1 \otimes H_2, in the form of the Schmidt decomposition

w = \sum_{i=1}^{m} \alpha_i \, u_i \otimes v_i.

Form the rank-1 matrix \rho = w w^*. Then the partial trace of \rho, with respect to either system A or B, is a diagonal matrix whose non-zero diagonal elements are |\alpha_i|^2. In other words, the Schmidt decomposition shows that the reduced states of \rho on either subsystem have the same spectrum.
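This can be checked numerically by forming ρ = w w* and tracing out either subsystem; a small sketch using the two-qubit state from the proof example above (an illustrative choice):

```python
import numpy as np

# Form rho = w w* for a bipartite vector w and check that both partial
# traces have spectrum {|alpha_i|^2}.
n = m = 2
w = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(w, w.conj()).reshape(n, m, n, m)   # indices (i, j, k, l)

rho_A = np.einsum('ijkj->ik', rho)                # trace out subsystem B
rho_B = np.einsum('ijil->jl', rho)                # trace out subsystem A

print(np.linalg.eigvalsh(rho_A))                  # [0.5, 0.5]
print(np.linalg.eigvalsh(rho_B))                  # [0.5, 0.5] -- same spectrum
```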

4.2.2 Schmidt rank and entanglement

The strictly positive values \alpha_i in the Schmidt decomposition of w are its Schmidt coefficients. The number of Schmidt coefficients of w, counted with multiplicity, is called its Schmidt rank, or Schmidt number.

If w can be expressed as a product

u \otimes v,

then w is called a separable state. Otherwise, w is said to be an entangled state. From the Schmidt decomposition, we can see that w is entangled if and only if w has Schmidt rank strictly greater than 1. Therefore, two subsystems that partition a pure state are entangled if and only if their reduced states are mixed states.

4.2.3 Von Neumann entropy

A consequence of the above comments is that, for bipartite pure states, the von Neumann entropy of the reduced states is a well-defined measure of entanglement. The von Neumann entropy of both reduced states of \rho is -\sum_i |\alpha_i|^2 \log |\alpha_i|^2, and this is zero if and only if \rho is a product state (not entangled).
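A minimal sketch of this entropy as a function of the Schmidt coefficients (the natural logarithm and the convention 0 log 0 = 0 are assumptions of this sketch):

```python
import numpy as np

def entanglement_entropy(alphas, eps=1e-12):
    """Von Neumann entropy of either reduced state, computed from the
    Schmidt coefficients: -sum |alpha_i|^2 log |alpha_i|^2."""
    p = np.abs(np.asarray(alphas)) ** 2
    p = p[p > eps]                         # apply the 0 log 0 = 0 convention
    return -np.sum(p * np.log(p))

print(entanglement_entropy([1.0, 0.0]))                    # 0: product state
print(entanglement_entropy([1/np.sqrt(2), 1/np.sqrt(2)]))  # log 2: maximally entangled
```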


4.3 Crystal plasticity

In the field of plasticity, crystalline solids such as metals deform plastically primarily along crystal planes. Each plane, defined by its normal vector n, can slip in one of several directions, defined by a vector s. Together a slip plane and direction form a slip system, which is described by the Schmidt tensor P = s \otimes n. The velocity gradient is a linear combination of these across all slip systems, where the scaling factor is the rate of slip along the system.

4.4 See also

Singular value decomposition
Purification of quantum state

4.5 Further reading

Pathak, Anirban (2013). Elements of Quantum Computation and Quantum Communication. London: Taylor & Francis. pp. 92–98. ISBN 978-1-4665-1791-2.

Chapter 5

    Schur complement

In linear algebra and the theory of matrices, the Schur complement of a matrix block (i.e., a submatrix within a larger matrix) is defined as follows. Suppose A, B, C, D are respectively p×p, p×q, q×p and q×q matrices, and D is invertible. Let

M = \begin{bmatrix} A & B \\ C & D \end{bmatrix}

so that M is a (p+q)×(p+q) matrix. Then the Schur complement of the block D of the matrix M is the p×p matrix

A - BD^{-1}C.

It is named after Issai Schur who used it to prove Schur's lemma, although it had been used previously.[1] Emilie Haynsworth was the first to call it the Schur complement.[2] The Schur complement is a key tool in the fields of numerical analysis, statistics and matrix analysis.

5.1 Background

The Schur complement arises as the result of performing a block Gaussian elimination by multiplying the matrix M from the right with the block lower triangular matrix

L = \begin{bmatrix} I_p & 0 \\ -D^{-1}C & I_q \end{bmatrix}.

Here I_p denotes a p×p identity matrix. After multiplication with the matrix L the Schur complement appears in the upper p×p block. The product matrix is

ML = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} I_p & 0 \\ -D^{-1}C & I_q \end{bmatrix} = \begin{bmatrix} A - BD^{-1}C & B \\ 0 & D \end{bmatrix} = \begin{bmatrix} I_p & BD^{-1} \\ 0 & I_q \end{bmatrix} \begin{bmatrix} A - BD^{-1}C & 0 \\ 0 & D \end{bmatrix}.

This is analogous to an LDU decomposition. That is, we have shown that

\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} I_p & BD^{-1} \\ 0 & I_q \end{bmatrix} \begin{bmatrix} A - BD^{-1}C & 0 \\ 0 & D \end{bmatrix} \begin{bmatrix} I_p & 0 \\ D^{-1}C & I_q \end{bmatrix},


and the inverse of M thus may be expressed involving D^{-1} and the inverse of the Schur complement (if it exists) only as

\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} I_p & 0 \\ -D^{-1}C & I_q \end{bmatrix} \begin{bmatrix} (A - BD^{-1}C)^{-1} & 0 \\ 0 & D^{-1} \end{bmatrix} \begin{bmatrix} I_p & -BD^{-1} \\ 0 & I_q \end{bmatrix}

= \begin{bmatrix} (A - BD^{-1}C)^{-1} & -(A - BD^{-1}C)^{-1} BD^{-1} \\ -D^{-1}C (A - BD^{-1}C)^{-1} & D^{-1} + D^{-1}C (A - BD^{-1}C)^{-1} BD^{-1} \end{bmatrix}.

    C.f. matrix inversion lemma which illustrates relationships between the above and the equivalent derivation with theroles of A and D interchanged.If M is a positive-denite symmetric matrix, then so is the Schur complement of D in M.If p and q are both 1 (i.e. A, B, C and D are all scalars), we get the familiar formula for the inverse of a 2-by-2 matrix:

M^{-1} = \frac{1}{AD - BC} \begin{bmatrix} D & -B \\ -C & A \end{bmatrix},

provided that AD − BC is non-zero.

Moreover, the determinant of M is also clearly seen to be given by

\det(M) = \det(D) \det(A - BD^{-1}C),

which generalizes the determinant formula for 2×2 matrices.
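Both identities are easy to sanity-check numerically. This sketch (my own; the random test matrices are assumptions) verifies the determinant formula and that the leading block of M^{-1} is the inverse of the Schur complement:

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 2, 3
A = rng.standard_normal((p, p))
B = rng.standard_normal((p, q))
C = rng.standard_normal((q, p))
D = rng.standard_normal((q, q)) + 5 * np.eye(q)  # keep D safely invertible

M = np.block([[A, B], [C, D]])
S = A - B @ np.linalg.inv(D) @ C   # Schur complement of D in M

# det(M) = det(D) det(A - B D^{-1} C)
assert np.isclose(np.linalg.det(M), np.linalg.det(D) * np.linalg.det(S))

# Top-left p x p block of M^{-1} equals S^{-1}
Minv = np.linalg.inv(M)
assert np.allclose(Minv[:p, :p], np.linalg.inv(S))
```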

5.2 Application to solving linear equations

The Schur complement arises naturally in solving a system of linear equations such as

Ax + By = a
Cx + Dy = b

where x, a are p-dimensional column vectors, y, b are q-dimensional column vectors, and A, B, C, D are as above. Multiplying the bottom equation by BD^{-1} and then subtracting from the top equation one obtains

(A - BD^{-1}C)x = a - BD^{-1}b.

Thus if one can invert D as well as the Schur complement of D, one can solve for x, and then by using the equation Cx + Dy = b one can solve for y. This reduces the problem of inverting a (p+q)×(p+q) matrix to that of inverting a p×p matrix and a q×q matrix. In practice one needs D to be well-conditioned in order for this algorithm to be numerically accurate. In electrical engineering this is often referred to as node elimination or Kron reduction.
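The elimination above translates directly into code. The following hedged sketch (the helper name and test data are my own) solves the block system while only factoring the two smaller matrices D and A − BD^{-1}C:

```python
import numpy as np

def solve_via_schur(A, B, C, D, a, b):
    """Solve Ax + By = a, Cx + Dy = b by eliminating y first."""
    BDinv = B @ np.linalg.inv(D)
    S = A - BDinv @ C                      # Schur complement of D
    x = np.linalg.solve(S, a - BDinv @ b)  # (A - B D^{-1} C) x = a - B D^{-1} b
    y = np.linalg.solve(D, b - C @ x)      # back-substitute: D y = b - C x
    return x, y

rng = np.random.default_rng(1)
p, q = 2, 3
A, B = rng.standard_normal((p, p)), rng.standard_normal((p, q))
C = rng.standard_normal((q, p))
D = rng.standard_normal((q, q)) + 5 * np.eye(q)
a, b = rng.standard_normal(p), rng.standard_normal(q)

x, y = solve_via_schur(A, B, C, D, a, b)
assert np.allclose(A @ x + B @ y, a) and np.allclose(C @ x + D @ y, b)
```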

5.3 Applications to probability theory and statistics

Suppose the random column vectors X, Y live in R^n and R^m respectively, and the vector (X, Y) in R^{n+m} has a multivariate normal distribution whose covariance is the symmetric positive-definite matrix

\Sigma = \begin{bmatrix} A & B \\ B^T & C \end{bmatrix},

where A ∈ R^{n×n} is the covariance matrix of X, C ∈ R^{m×m} is the covariance matrix of Y and B ∈ R^{n×m} is the covariance matrix between X and Y.

Then the conditional covariance of X given Y is the Schur complement of C in Σ:

\operatorname{Cov}(X \mid Y) = A - BC^{-1}B^T,
\operatorname{E}(X \mid Y) = \operatorname{E}(X) + BC^{-1}(Y - \operatorname{E}(Y)).

If we take the matrix Σ above to be, not a covariance of a random vector, but a sample covariance, then it may have a Wishart distribution. In that case, the Schur complement of C in Σ also has a Wishart distribution.
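For concreteness, the conditional moments can be computed exactly as written (an illustrative sketch; all numbers below are made up):

```python
import numpy as np

# Joint covariance of (X, Y) with X in R^2, Y in R^1 (arbitrary values).
A = np.array([[2.0, 0.3],
              [0.3, 1.0]])   # Cov(X)
B = np.array([[0.5],
              [0.2]])        # Cov(X, Y)
C = np.array([[1.5]])        # Cov(Y)

mu_X = np.array([0.0, 1.0])
mu_Y = np.array([2.0])
y_obs = np.array([3.0])      # observed value of Y

BCinv = B @ np.linalg.inv(C)
cond_cov = A - BCinv @ B.T                  # Cov(X | Y): Schur complement of C
cond_mean = mu_X + BCinv @ (y_obs - mu_Y)   # E(X | Y)
print(cond_mean, cond_cov, sep="\n")
```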

5.4 Schur complement condition for positive definiteness

Let X be a symmetric matrix given by

X = \begin{bmatrix} A & B \\ B^T & C \end{bmatrix}.

Let S be the Schur complement of A in X, that is:

S = C - B^T A^{-1} B.

Then:

• X is positive definite if and only if A and S are both positive definite:

  X \succ 0 \Leftrightarrow A \succ 0,\ S = C - B^T A^{-1} B \succ 0.

• X is positive definite if and only if C and A − BC^{-1}B^T are both positive definite:

  X \succ 0 \Leftrightarrow C \succ 0,\ A - BC^{-1}B^T \succ 0.

• If A is positive definite, then X is positive semidefinite if and only if S is positive semidefinite:

  If A \succ 0, then X \succeq 0 \Leftrightarrow S = C - B^T A^{-1} B \succeq 0.

• If C is positive definite, then X is positive semidefinite if and only if A − BC^{-1}B^T is positive semidefinite:

  If C \succ 0, then X \succeq 0 \Leftrightarrow A - BC^{-1}B^T \succeq 0.

The first and third statements can be derived[3] by considering the minimizer of the quantity

u^T A u + 2 v^T B^T u + v^T C v,

as a function of v (for fixed u). Furthermore, since

\begin{bmatrix} A & B \\ B^T & C \end{bmatrix} \succ 0 \Longleftrightarrow \begin{bmatrix} C & B^T \\ B & A \end{bmatrix} \succ 0,

and similarly for positive semi-definite matrices, the second (respectively fourth) statement is immediate from the first (resp. third).
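These equivalences give a cheap test for positive definiteness of a block matrix. A small sketch (my own, with arbitrary sample blocks) compares the Schur-complement criterion against a direct eigenvalue check:

```python
import numpy as np

def is_pd(M):
    """Positive definiteness via eigenvalues (M assumed symmetric)."""
    return np.all(np.linalg.eigvalsh(M) > 0)

A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0], [0.5]])
C = np.array([[2.0]])
X = np.block([[A, B], [B.T, C]])

S = C - B.T @ np.linalg.inv(A) @ B   # Schur complement of A in X
# X > 0  iff  A > 0 and S > 0
assert is_pd(X) == (is_pd(A) and is_pd(S))
```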


5.5 See also

• Woodbury matrix identity
• Quasi-Newton method
• Haynsworth inertia additivity formula
• Gaussian process
• Total least squares

5.6 References

[1] Zhang, Fuzhen (2005). The Schur Complement and Its Applications. Springer. doi:10.1007/b105056. ISBN 0-387-24271-6.

[2] Haynsworth, E. V., "On the Schur Complement", Basel Mathematical Notes, #BNB 20, 17 pages, June 1968.

[3] Boyd, S. and Vandenberghe, L. (2004), Convex Optimization, Cambridge University Press (Appendix A.5.5)

Chapter 6

    Schur product theorem

In mathematics, particularly in linear algebra, the Schur product theorem states that the Hadamard product of two positive definite matrices is also a positive definite matrix. The result is named after Issai Schur[1] (Schur 1911, p. 14, Theorem VII); note that Schur signed as J. Schur in Journal für die reine und angewandte Mathematik.[2][3]

    6.1 Proof

6.1.1 Proof using the trace formula

It is easy to show that for matrices M and N, the Hadamard product M ∘ N considered as a bilinear form acts on vectors a, b as

a^T (M \circ N) b = \operatorname{Tr}(M \operatorname{diag}(a) N \operatorname{diag}(b)),

where Tr is the matrix trace and diag(a) is the diagonal matrix having as diagonal entries the elements of a.

Since M and N are positive definite, we can consider their square roots M^{1/2} and N^{1/2} and write

\operatorname{Tr}(M \operatorname{diag}(a) N \operatorname{diag}(b))
 = \operatorname{Tr}(M^{1/2} M^{1/2} \operatorname{diag}(a) N^{1/2} N^{1/2} \operatorname{diag}(b))
 = \operatorname{Tr}(M^{1/2} \operatorname{diag}(a) N^{1/2} N^{1/2} \operatorname{diag}(b) M^{1/2}).

Then, for a = b, this is written as \operatorname{Tr}(A^T A) for A = N^{1/2} \operatorname{diag}(a) M^{1/2} and thus is positive. This shows that M ∘ N is a positive definite matrix.
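Both the theorem and the trace identity above can be checked numerically. A sketch (my own; the random positive definite matrices are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
# Random positive definite matrices via the Gram construction.
G1, G2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
M, N = G1 @ G1.T + np.eye(n), G2 @ G2.T + np.eye(n)

H = M * N                                  # Hadamard (elementwise) product
assert np.all(np.linalg.eigvalsh(H) > 0)   # Schur product theorem: H is PD

# Trace identity: a^T (M o N) b = Tr(M diag(a) N diag(b))
a, b = rng.standard_normal(n), rng.standard_normal(n)
lhs = a @ H @ b
rhs = np.trace(M @ np.diag(a) @ N @ np.diag(b))
assert np.isclose(lhs, rhs)
```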

6.1.2 Proof using Gaussian integration

Case of M = N

Let X be an n-dimensional centered Gaussian random variable with covariance \langle X_i X_j \rangle = M_{ij}. Then the covariance matrix of X_i^2 and X_j^2 is

\operatorname{Cov}(X_i^2, X_j^2) = \langle X_i^2 X_j^2 \rangle - \langle X_i^2 \rangle \langle X_j^2 \rangle.

Using Wick's theorem to develop \langle X_i^2 X_j^2 \rangle = 2 \langle X_i X_j \rangle^2 + \langle X_i^2 \rangle \langle X_j^2 \rangle, we have

\operatorname{Cov}(X_i^2, X_j^2) = 2 \langle X_i X_j \rangle^2 = 2 M_{ij}^2.

Since a covariance matrix is positive definite, this proves that the matrix with elements M_{ij}^2 is a positive definite matrix.



General case

Let X and Y be n-dimensional centered Gaussian random variables with covariances \langle X_i X_j \rangle = M_{ij} and \langle Y_i Y_j \rangle = N_{ij}, independent from each other so that we have

\langle X_i Y_j \rangle = 0 for any i, j.

Then the covariance matrix of X_i Y_i and X_j Y_j is

\operatorname{Cov}(X_i Y_i, X_j Y_j) = \langle X_i Y_i X_j Y_j \rangle - \langle X_i Y_i \rangle \langle X_j Y_j \rangle.

Using Wick's theorem to develop

\langle X_i Y_i X_j Y_j \rangle = \langle X_i X_j \rangle \langle Y_i Y_j \rangle + \langle X_i Y_i \rangle \langle X_j Y_j \rangle + \langle X_i Y_j \rangle \langle X_j Y_i \rangle,

and also using the independence of X and Y, we have

\operatorname{Cov}(X_i Y_i, X_j Y_j) = \langle X_i X_j \rangle \langle Y_i Y_j \rangle = M_{ij} N_{ij}.

Since a covariance matrix is positive definite, this proves that the matrix with elements M_{ij} N_{ij} is a positive definite matrix.
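A Monte Carlo check of this argument (my own sketch, not part of the article): sampling independent Gaussians with covariances M and N, the empirical covariance of the componentwise products should approach the Hadamard product M ∘ N:

```python
import numpy as np

rng = np.random.default_rng(3)
n, samples = 3, 200_000
G1, G2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
M, N = G1 @ G1.T + np.eye(n), G2 @ G2.T + np.eye(n)

# Independent centered Gaussians with covariances M and N.
X = rng.multivariate_normal(np.zeros(n), M, size=samples)
Y = rng.multivariate_normal(np.zeros(n), N, size=samples)

Z = X * Y                        # componentwise products X_i Y_i
emp = np.cov(Z, rowvar=False)    # empirical Cov(X_i Y_i, X_j Y_j)
print(np.max(np.abs(emp - M * N)))  # small: matches M_{ij} N_{ij}
```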

6.1.3 Proof using eigendecomposition

Proof of positivity

Let M = \sum_i \mu_i m_i m_i^T and N = \sum_i \nu_i n_i n_i^T. Then

M \circ N = \sum_{ij} \mu_i \nu_j (m_i m_i^T) \circ (n_j n_j^T) = \sum_{ij} \mu_i \nu_j (m_i \circ n_j)(m_i \circ n_j)^T.

Each (m_i \circ n_j)(m_i \circ n_j)^T is positive semidefinite (but, except in the 1-dimensional case, not positive definite, since they are rank-1 matrices) and \mu_i \nu_j > 0; thus the sum giving M \circ N is also positive semidefinite.

Complete proof

To show that the result is positive definite requires further proof. We shall show that for any vector a ≠ 0, we have a^T (M \circ N) a > 0. Continuing as above, each a^T (m_i \circ n_j)(m_i \circ n_j)^T a \ge 0, so it remains to show that there exist i and j for which the inequality is strict. For this we observe that

a^T (m_i \circ n_j)(m_i \circ n_j)^T a = \left( \sum_k m_{i,k} n_{j,k} a_k \right)^2.

Since N is positive definite, there is a j for which the vector with components n_{j,k} a_k is not zero, and then, since M is positive definite, there is an i for which \sum_k m_{i,k} n_{j,k} a_k is not zero. For this i and j we then have \left( \sum_k m_{i,k} n_{j,k} a_k \right)^2 > 0. This completes the proof.

6.2 References

[1] "Bemerkungen zur Theorie der beschränkten Bilinearformen mit unendlich vielen Veränderlichen". Journal für die reine und angewandte Mathematik (Crelle's Journal) 1911 (140). 1911. doi:10.1515/crll.1911.140.1.


[2] Zhang, Fuzhen, ed. (2005). The Schur Complement and Its Applications. Numerical Methods and Algorithms 4. doi:10.1007/b105056. ISBN 0-387-24271-6. Page 9, Ch. 0.6 "Publication under J. Schur".

[3] Ledermann, W. (1983). "Issai Schur and His School in Berlin". Bulletin of the London Mathematical Society 15 (2): 97–106. doi:10.1112/blms/15.2.97.

6.3 External links

• "Bemerkungen zur Theorie der beschränkten Bilinearformen mit unendlich vielen Veränderlichen" at EUDML

Chapter 7

    Segre classication

The Segre classification is an algebraic classification of rank two symmetric tensors. The resulting types are then known as Segre types. It is most commonly applied to the energy-momentum tensor (or the Ricci tensor) and primarily finds application in the classification of exact solutions in general relativity.

7.1 See also

• Corrado Segre
• Jordan normal form
• Petrov classification

7.2 References

• Stephani, Hans; Kramer, Dietrich; MacCallum, Malcolm; Hoenselaers, Cornelius; and Herlt, Eduard (2003). Exact Solutions of Einstein's Field Equations. Cambridge: Cambridge University Press. ISBN 0-521-46136-7. See section 5.1 for the Segre classification.

• Segre, C. (1884). "Sulla teoria e sulla classificazione delle omografie in uno spazio lineare ad uno numero qualunque di dimensioni". Memorie della R. Accademia dei Lincei 3a: 127.


Chapter 8

    Self-adjoint

In mathematics, an element x of a star-algebra is self-adjoint if x* = x.

A collection C of elements of a star-algebra is self-adjoint if it is closed under the involution operation. For example, if x* = y then since y* = x** = x in a star-algebra, the set {x, y} is a self-adjoint set even though x and y need not be self-adjoint elements.

In functional analysis, a linear operator A on a Hilbert space is called self-adjoint if it is equal to its own adjoint A* and the domain of A is the same as that of A*. See self-adjoint operator for a detailed discussion. If the Hilbert space is finite-dimensional and an orthonormal basis has been chosen, then the operator A is self-adjoint if and only if the matrix describing A with respect to this basis is Hermitian, i.e. if it is equal to its own conjugate transpose. Hermitian matrices are also called self-adjoint.

In a dagger category, a morphism f is called self-adjoint if f = f†; this is possible only for an endomorphism f : A → A.
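In the finite-dimensional case the criterion is purely mechanical: compare the matrix with its conjugate transpose. An illustrative sketch (my own; the helper name and matrices are assumptions):

```python
import numpy as np

def is_self_adjoint(A, tol=1e-12):
    """A finite-dimensional operator is self-adjoint iff its matrix
    (in an orthonormal basis) equals its conjugate transpose."""
    return np.allclose(A, A.conj().T, atol=tol)

H = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])   # Hermitian: equals its conjugate transpose
print(is_self_adjoint(H))        # True
print(is_self_adjoint(1j * H))   # False: (iH)* = -iH
```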

8.1 See also

• Symmetric matrix
• Self-adjoint operator
• Hermitian matrix

8.2 References

• Reed, M.; Simon, B. (1972). Methods of Mathematical Physics. Vol 2. Academic Press.

• Teschl, G. (2009). Mathematical Methods in Quantum Mechanics; With Applications to Schrödinger Operators. Providence: American Mathematical Society.


Chapter 9

    Semi-simple operator

In mathematics, a linear operator T on a finite-dimensional vector space is semi-simple if every T-invariant subspace has a complementary T-invariant subspace.[1]

An important result regarding semi-simple operators is that a linear operator on a finite-dimensional vector space over an algebraically closed field is semi-simple if and only if it is diagonalizable.[1] This is because such an operator always has an eigenvector; if it is, in addition, semi-simple, then it has a complementary invariant hyperplane, which itself has an eigenvector, and thus by induction is diagonalizable. Conversely, diagonalizable operators are easily seen to be semi-simple, as invariant subspaces are direct sums of eigenspaces, and any basis for this space can be extended to an eigenbasis.

9.1 Notes

[1] Lam (2001), p. 39

9.2 References

• Hoffman, Kenneth; Kunze, Ray (1971). "Semi-Simple operators". Linear algebra (2nd ed.). Englewood Cliffs, N.J.: Prentice-Hall, Inc. MR 0276251.

• Lam, Tsit-Yuen (2001). A first course in noncommutative rings. Graduate texts in mathematics 131 (2nd ed.). Springer. ISBN 0-387-95183-0.


Chapter 10

    Semi-simplicity

    This article is about mathematical use. For the philosophical reduction thinking, see Reduction (philosophy).

In mathematics, semi-simplicity is a widespread concept in disciplines such as linear algebra, abstract algebra, representation theory, category theory and algebraic geometry. A semi-simple object is one that can be decomposed into a sum of simple objects, and simple objects are those which do not contain non-trivial sub-objects. The precise definitions of these words depend on the context.

For example, if G is a finite group, then a finite-dimensional representation V over a field is said to be simple if the only subrepresentations it contains are either {0} or V (these are also called irreducible representations). Then Maschke's theorem says that any finite-dimensional representation of a finite group is a direct sum of simple representations (provided the characteristic of the base field does not divide the order of the group). So, in this case, every representation of a finite group is semi-simple. Especially in algebra and representation theory, semi-simplicity is also called complete reducibility. For example, Weyl's theorem on complete reducibility says a finite-dimensional representation of a semisimple compact Lie group is semisimple.

A square matrix (in other words a linear operator T : V → V with V a finite-dimensional vector space) is said to be simple if the only subspaces which are invariant under T are {0} and V. If the field is algebraically closed (such as the complex numbers), then the only simple matrices are of size 1 by 1. A semi-simple matrix is one that is similar to a direct sum of simple matrices; if the field is algebraically closed, this is the same as being diagonalizable.

These notions of semi-simplicity can be unified using the language of semi-simple modules, and generalized to semi-simple categories.

10.1 Introductory example of vector spaces

If one considers all vector spaces (over a field, such as the real numbers), the simple vector spaces are those which contain no proper nonzero subspaces. Therefore, the one-dimensional vector spaces are the simple ones. So it is a basic result of linear algebra that any finite-dimensional vector space is the direct sum of simple vector spaces; in other words, all finite-dimensional vector spaces are semi-simple.

10.2 Semi-simple modules and rings

Further information: Semisimple module and Semisimple ring

For a fixed ring R, an R-module M is simple if it has no submodules other than 0 and M. The module M is semi-simple if it is the direct sum of simple modules. Finally, R is called a semi-simple ring if it is semi-simple as an R-module. As it turns out, this is equivalent to requiring that any finitely generated R-module M is semi-simple.[1]

Examples of semi-simple rings include fields and, more generally, finite direct products of fields. For a finite group G, Maschke's theorem asserts that the group ring R[G] over some ring R is semi-simple if and only if R is semi-simple and |G| is invertible in R. Since the theory of modules of R[G] is the same as the representation theory of G



on R-modules, this fact is an important dichotomy: it causes modular representation theory, i.e., the case when the characteristic of R does divide |G|, to be more difficult than the case when it does not, in particular when R is a field of characteristic zero. By the Artin–Wedderburn theorem, a unital Artinian ring R is semisimple if and only if it is (isomorphic to) M_{n_1}(D_1) \times M_{n_2}(D_2) \times \cdots \times M_{n_r}(D_r), where each D_i is a division ring and M_{n_i}(D_i) is the ring of n_i-by-n_i matrices with entries in D_i.

As indicated above, the theory of semi-simple rings is much easier than that of general rings. For example, any short exact sequence

0 \to M' \to M \to M'' \to 0

of modules over a semi-simple ring must split, i.e., M \cong M' \oplus M''. From the point of view of homological algebra, this means that there are no non-trivial extensions. The ring Z of integers is not semi-simple: Z is not the direct sum of nZ and Z/n.

10.3 Semi-simple matrices

A matrix or, equivalently, a linear operator T on a finite-dimensional vector space V is called semi-simple if every T-invariant subspace has a complementary T-invariant subspace.[2][3] This is equivalent to the minimal polynomial of T being square-free.

For vector spaces over an algebraically closed field F, semi-simplicity of a matrix is equivalent to diagonalizability.[2] This is because such an operator always has an eigenvector; if it is, in addition, semi-simple, then it has a complementary invariant hyperplane, which itself has an eigenvector, and thus by induction is diagonalizable. Conversely, diagonalizable operators are easily seen to be semi-simple, as invariant subspaces are direct sums of eigenspaces, and any basis for this space can be extended to an eigenbasis.

Actually this notion of semi-simplicity is a special case of the one for rings: T is semi-simple if and only if the subalgebra F[T] ⊆ End_F(V) generated by the powers (i.e., iterations) of T inside the ring of endomorphisms of V is semi-simple.
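The square-free criterion can be tested in exact arithmetic. The sketch below (my own, using SymPy over the rationals) checks whether the square-free part of the characteristic polynomial already annihilates T, which holds exactly when the minimal polynomial is square-free:

```python
import sympy as sp

def is_semisimple(M):
    """T is semi-simple iff its minimal polynomial is square-free;
    equivalently, the square-free part of the characteristic
    polynomial already annihilates T."""
    x = sp.symbols('x')
    p = M.charpoly(x).as_expr()
    q = sp.quo(p, sp.gcd(p, sp.diff(p, x)), x)  # square-free part of p
    coeffs = q.as_poly(x).all_coeffs()
    n = M.rows
    result = sp.zeros(n)
    for c in coeffs:                 # evaluate q(M) by Horner's rule
        result = result * M + c * sp.eye(n)
    return result == sp.zeros(n)

D = sp.Matrix([[2, 0], [0, 3]])   # diagonalizable, hence semi-simple
J = sp.Matrix([[2, 1], [0, 2]])   # Jordan block: minimal poly (x-2)^2
print(is_semisimple(D), is_semisimple(J))   # True False
```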

10.4 Semi-simple categories

Many of the above notions of semi-simplicity are recovered by the concept of a semi-simple category C. Briefly, a category is a collection of objects and maps between such objects, the idea being that the maps between the objects preserve some structure inherent in these objects. For example, R-modules and R-linear maps between them form a category, for any ring R.

An abelian category[4] C is called semi-simple if there is a collection of simple objects X ∈ C, i.e., ones which have no subobject other than the zero object 0 and X itself, such that any object X is the direct sum (i.e., coproduct or, equivalently, product) of simple objects.

With this terminology, a ring R is semi-simple if and only if the category of finitely generated R-modules is semisimple. An example from algebraic geometry is the category of pure motives of smooth projective varieties over a field k, Mot(k)_∼, modulo an adequate equivalence relation ∼. As was conjectured by Grothendieck and shown by Jannsen, this category is semi-simple if and only if the equivalence relation is numerical equivalence.[5] This fact is a conceptual cornerstone in the theory of motives.

10.5 Semi-simplicity in representation theory

One can ask whether the category of (say, finite-dimensional) representations of a group G is semisimple or not (in such a category, irreducible representations are precisely the simple objects). For example, the category is semisimple if G is a semisimple compact Lie group (Weyl's theorem on complete reducibility).

See also: fusion category (which is semisimple).


10.6 See also

• A semisimple Lie algebra is a Lie algebra that is a direct sum of simple Lie algebras.
• A semisimple algebraic group is a linear algebraic group whose radical of the identity component is trivial.
• Semisimple algebra

10.7 References

[1] Lam, Tsit-Yuen (2001). A first course in noncommutative rings. Graduate texts in mathematics 131 (2nd ed.). Springer. ISBN 0-387-95183-0.

[2] Lam (2001), p. 39

[3] Hoffman, Kenneth; Kunze, Ray (1971). "Semi-Simple operators". Linear algebra (2nd ed.). Englewood Cliffs, N.J.: Prentice-Hall, Inc. MR 0276251.

[4] More generally, the same definition of semi-simplicity works for pseudo-abelian additive categories. See for example Yves André, Bruno Kahn: Nilpotence, radicaux et structures monoïdales. With an appendix by Peter O'Sullivan. Rend. Sem. Mat. Univ. Padova 108 (2002), 107–291. http://arxiv.org/abs/math/0203273.

[5] Uwe Jannsen: Motives, numerical equivalence, and semi-simplicity, Invent. math. 107, 447–452 (1992)

10.8 External links

• http://mathoverflow.net/questions/245/are-abelian-nondegenerate-tensor-categories-semisimple
• http://ncatlab.org/nlab/show/semisimple+category

Chapter 11

    Semilinear transformation

In linear algebra, particularly projective geometry, a semilinear transformation between vector spaces V and W over a field K is a function that is a linear transformation "up to a twist", hence semi-linear, where "twist" means "field automorphism of K". Explicitly, it is a function T : V → W that is:

• linear with respect to vector addition: T(v + v') = T(v) + T(v')
• semilinear with respect to scalar multiplication: T(\lambda v) = \lambda^\sigma T(v), where σ is a field automorphism of K, and \lambda^\sigma means the image of the scalar λ under the automorphism. There must be a single automorphism σ for T, in which case T is called σ-semilinear.

The invertible semilinear transforms of a given vector space V (for all choices of field automorphism) form a group, called the general semilinear group and denoted ΓL(V), by analogy with and extending the general linear group. Similar notation (replacing Latin characters with Greek) is used for semilinear analogs of more restricted linear transforms; formally, the semidirect product of a linear group with the Galois group of field automorphisms. For example, PΣU is used for the semilinear analogs of the projective special unitary group PSU. Note, however, that it was only recently noticed that these generalized semilinear groups are not well-defined: as pointed out in (Bray, Holt & Roney-Dougal 2009), isomorphic classical groups G and H (subgroups of SL) may have non-isomorphic semilinear extensions. At the level of semidirect products, this corresponds to different actions of the Galois group on a given abstract group, a semidirect product depending on two groups and an action. If the extension is non-unique, there are exactly two semilinear extensions; for example, symplectic groups have a unique semilinear extension, while SU(n, q) has two extensions if n is even and q is odd, and likewise for PSU.

11.1 Definition

Let K be a field and k its prime subfield. For example, if K is C then k is Q, and if K is the finite field of order q = p^i, then k is Z/pZ.

Given a field automorphism σ of K, a function f : V → W between two K-vector spaces V and W is σ-semilinear, or simply semilinear, if for all x, y in V and l in K it follows that:

1. f(x + y) = f(x) + f(y),
2. f(lx) = l^\sigma f(x),

where l^\sigma denotes the image of l under σ.

Note that σ must be a field automorphism for f to remain additive; for example, σ must fix the prime subfield, as

n^\sigma f(x) = f(nx) = f(x + \cdots + x) = n f(x).

Also



(l_1 + l_2)^\sigma f(x) = f((l_1 + l_2)x) = f(l_1 x) + f(l_2 x) = (l_1^\sigma + l_2^\sigma) f(x),

so (l_1 + l_2)^\sigma = l_1^\sigma + l_2^\sigma. Finally,

(l_1 l_2)^\sigma f(x) = f(l_1 l_2 x) = l_1^\sigma f(l_2 x) = l_1^\sigma l_2^\sigma f(x).

Every linear transformation is semilinear, but the converse is generally not true. If we treat V and W as vector spaces over k (by considering K as a vector space over k first), then every σ-semilinear map is a k-linear map, where k is the prime subfield of K.

    11.2 Examples

• Let K = C, V = C^n, with standard basis e_1, …, e_n. Define the map f : V → V by

  f\left( \sum_{i=1}^n z_i e_i \right) = \sum_{i=1}^n \bar{z}_i e_i.

  f is semilinear (with respect to the complex conjugation field automorphism) but not linear; a numeric check appears after these examples.

• Let K = GF(q), the Galois field of order q = p^i, with p the characteristic. Let l^\sigma = l^p. By the Freshman's dream it is known that this is a field automorphism. To every linear map f : V → W between vector spaces V and W over K we can establish a σ-semilinear map

  \tilde{f}\left( \sum_{i=1}^n l_i e_i \right) := f\left( \sum_{i=1}^n l_i^\sigma e_i \right).

Indeed every linear map can be converted into a semilinear map in such a way. This is part of a general observation collected into the following result.
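A quick numeric illustration of the first example above (my own sketch): complex conjugation on C² is additive and twists scalars by conjugation, so it is semilinear but not C-linear:

```python
import numpy as np

# f(sum z_i e_i) = sum conj(z_i) e_i, realized as elementwise conjugation.
f = np.conj

v = np.array([1 + 2j, 3 - 1j])
w = np.array([2 - 1j, 1 + 1j])
lam = 2 + 5j                      # a scalar in K = C

assert np.allclose(f(v + w), f(v) + f(w))            # additive
assert np.allclose(f(lam * v), np.conj(lam) * f(v))  # sigma-semilinear twist
print(np.allclose(f(lam * v), lam * f(v)))           # False: not C-linear
```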

11.3 General semilinear group

Given a vector space V, the set of all invertible semilinear maps (over all field automorphisms) is the group ΓL(V). Given a vector space V over K, and k the prime subfield of K, then ΓL(V) decomposes as the semidirect product

\Gamma L(V) = GL(V) \rtimes \operatorname{Gal}(K/k),

where Gal(K/k) is the Galois group of K/k. Similarly, semilinear transforms of other linear groups can be defined as the semidirect product with the Galois group, or more intrinsically as the group of semilinear maps of a vector space preserving some properties.

We identify Gal(K/k) with a subgroup of ΓL(V) by fixing a basis B for V and defining the semilinear maps:

\sum_{b \in B} l_b b \mapsto \sum_{b \in B} l_b^\sigma b

for any \sigma \in \operatorname{Gal}(K/k). We shall denote this subgroup by Gal(K/k)_B. We also see that these complements to GL(V) in ΓL(V) are acted on regularly by GL(V), as they correspond to a change of basis.


11.3.1 Proof

Every linear map is semilinear, thus GL(V) ≤ ΓL(V). Fix a basis B of V. Now, given any semilinear map f with respect to a field automorphism \sigma \in \operatorname{Gal}(K/k), define g : V → V by

g\left( \sum_{b \in B} l_b b \right) := \sum_{b \in B} f\left( l_b^{\sigma^{-1}} b \right) = \sum_{b \in B} l_b f(b).

As f(B) is also a basis of V, it follows that g is simply a basis exchange of V and so linear and invertible: g \in GL(V).

Set h := f g^{-1}. For every v = \sum_{b \in B} l_b b in V,

h v = f g^{-1} v = \sum_{b \in B} l_b^\sigma b,

thus h is in the Gal(K/k) subgroup relative to the fixed basis B. This factorization is unique to the fixed basis B. Furthermore, GL(V) is normalized by the action of Gal(K/k)_B, so \Gamma L(V) = GL(V) \rtimes \operatorname{Gal}(K/k).

    11.4 Applications

11.4.1 Projective geometry

The ΓL(V) groups extend the typical classical groups in GL(V). The importance in considering such maps follows from the consideration of projective geometry. The induced action of ΓL(V) on the associated projective space P(V) yields the projective semilinear group, denoted PΓL(V), extending the projective linear group, PGL(V).

The projective geometry of a vector space V, denoted PG(V), is the lattice of all subspaces of V. Although the typical semilinear map is not a linear map, it does follow that every semilinear map f : V → W induces an order-preserving map f : PG(V) → PG(W). That is, every semilinear map induces a projectivity. The converse of this observation (except for the projective line) is the fundamental theorem of projective geometry. Thus semilinear maps are useful because they define the automorphism group of the projective geometry of a vector space.

11.4.2 Mathieu group

Main article: Mathieu group

The group PΓL(3,4) can be used to construct the Mathieu group M24, which is one of the sporadic simple groups; PΓL(3,4) is a maximal subgroup of M24, and there are many ways to extend it to the full Mathieu group.

11.5 References

• Gruenberg, K. W. and Weir, A. J. Linear Geometry 2nd Ed. (English) Graduate Texts in Mathematics. 49. New York Heidelberg Berlin: Springer-Verlag. X, 198 pp. (1977).

• Bray, John N.; Holt, Derek F.; Roney-Dougal, Colva M. (2009), "Certain classical groups are not well-defined", Journal of Group Theory 12 (2): 171–180, doi:10.1515/jgt.2008.069, ISSN 1433-5883, MR 2502211

This article incorporates material from semilinear transformation, which is licensed under the Creative Commons Attribution/Share-Alike License.

Chapter 12

    Sesquilinear form

In mathematics, a sesquilinear form is a generalization of a bilinear form that, in turn, is a generalization of the concept of the dot product of Euclidean space. A bilinear form is linear in each of its arguments, but a sesquilinear form allows one of the arguments to be "twisted" in a semilinear manner, thus the name, which originates from the Latin numerical prefix sesqui- meaning "one and a half". The basic concept of the dot product, producing a scalar from a pair of vectors, can be generalized by allowing a broader range of scalar values and, perhaps simultaneously, by widening the definition of what a vector is.

A motivating special case is a sesquilinear form on a complex vector space, V. This is a map V × V → C that is linear in one argument and twists the linearity of the other argument by complex conjugation (referred to as being antilinear in the other argument). This case arises naturally in mathematical physics applications. Another important case allows the scalars to come from any field and the twist is provided by a field automorphism. Many authors assume that this automorphism is an involution (has order two) to stay in analogy with the complex case, but others prove this property when introducing Hermitian forms.

An application in projective geometry requires that the scalars come from a division ring (skew field), K, and this means that the "vectors" should be replaced by elements of a K-module. In a very general setting, sesquilinear forms can be defined over R-modules for arbitrary rings R.

12.1 Convention

Conventions differ as to which argument should be linear. We shall take the first to be linear, as is common in the mathematical literature, except in the section devoted to sesquilinear forms on complex vector spaces. There we use the other convention and take the first argument to be conjugate-linear (i.e. antilinear) and the second to be linear. This is the convention used mostly by mathematical physicists[1] and originates in Dirac's bra–ket notation in quantum mechanics.

12.2 Complex vector spaces

Over a complex vector space V a map φ : V × V → C is sesquilinear if

\varphi(x + y, z + w) = \varphi(x, z) + \varphi(x, w) + \varphi(y, z) + \varphi(y, w)
\varphi(ax, by) = \bar{a} b \, \varphi(x, y)

for all x, y, z, w ∈ V and all a, b ∈ C. Here \bar{a} is the complex conjugate of a.

A complex sesquilinear form can also be viewed as a complex bilinear map

\bar{V} \times V \to \mathbb{C}



where \bar{V} is the complex conjugate vector space to V. By the universal property of tensor products these are in one-to-one correspondence with complex linear maps

\bar{V} \otimes V \to \mathbb{C}.

For a fixed z in V the map w ↦ φ(z, w) is a linear functional on V (i.e. an element of the dual space V*). Likewise, the map w ↦ φ(w, z) is a conjugate-linear functional on V.

Given any complex sesquilinear form φ on V we can define a second complex sesquilinear form ψ via the conjugate transpose:

\psi(w, z) = \overline{\varphi(z, w)}.

In general, ψ and φ will be different. If they are the same then φ is said to be Hermitian. If they are negatives of one another, then φ is said to be skew-Hermitian. Every sesquilinear form can be written as a sum of a Hermitian form and a skew-Hermitian form.

    12.2.1 Geometric motivation

Bilinear forms are to squaring (z²) what complex sesquilinear forms are to the squared magnitude (|z|² = \bar{z}z). Regarding the complex plane geometrically as a two-dimensional real vector space, the latter corresponds to the square of the Euclidean norm.

The norm associated to a complex sesquilinear form is invariant under multiplication by complex numbers of unit norm (elements of the complex unit circle), while the norm associated to a bilinear form is equivariant (with respect to squaring). Bilinear forms are algebraically more natural, while sesquilinear forms are geometrically more natural.

If B is a bilinear form on a complex vector space and |x|_B := B(x, x) is the associated norm, then |ix|_B = B(ix, ix) = i²B(x, x) = −|x|_B.

By contrast, if S is a sesquilinear form on a complex vector space and |x|_S := S(x, x) is the associated norm, then |ix|_S = S(ix, ix) = \bar{i}\,i\,S(x, x) = |x|_S.

12.2.2 Hermitian form

The term Hermitian form may also refer to a different concept than that explained below: it may refer to a certain differential form on a Hermitian manifold.

A complex Hermitian form (also called a symmetric sesquilinear form) is a sesquilinear form h : V × V → C such that

h(w, z) = \overline{h(z, w)}.

The standard Hermitian form on C^n is given (again, using the "physics" convention of linearity in the second and conjugate linearity in the first variable) by

\langle w, z \rangle = \sum_{i=1}^n \bar{w}_i z_i.

More generally, the inner product on any complex Hilbert space is a Hermitian form.

A vector space with a Hermitian form (V, h) is called a Hermitian space.

If V is a finite-dimensional complex vector space, then relative to any basis {e_i} of V, a complex Hermitian form is represented by a Hermitian matrix H, w by the column vector w, and z by the column vector z:


h(w, z) = \bar{w}^T H z.

The components of H are given by H_{ij} = h(e_i, e_j).

The quadratic form associated to a complex Hermitian form

Q(z) = h(z, z)

is always real. Actually, one can show that a complex sesquilinear form is Hermitian if and only if the associated quadratic form is real for all z ∈ V.
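A small numeric illustration of the physics-convention form and the reality of its quadratic form (my own sketch; the matrix H is an arbitrary Hermitian example):

```python
import numpy as np

H = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])
assert np.allclose(H, H.conj().T)   # H is Hermitian

def h(w, z):
    # h(w, z) = conj(w)^T H z, conjugate-linear in the first argument.
    return w.conj() @ H @ z

rng = np.random.default_rng(4)
w = rng.standard_normal(2) + 1j * rng.standard_normal(2)
z = rng.standard_normal(2) + 1j * rng.standard_normal(2)

assert np.isclose(h(w, z), np.conj(h(z, w)))  # Hermitian symmetry
print(np.isclose(h(z, z).imag, 0.0))          # Q(z) = h(z, z) is real
```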

12.2.3 Skew-Hermitian form

A complex skew-Hermitian form (also called an antisymmetric sesquilinear form) is a complex sesquilinear form s : V × V → C such that

s(w, z) = -\overline{s(z, w)}.

Every complex skew-Hermitian form can be written as i times a Hermitian form.

If V is a finite-dimensional complex vector space, then relative to any basis {e_i} of V, a complex skew-Hermitian form s is represented by a skew-Hermitian matrix S, w by the column vector w, and z by the column vector z:

s(w, z) = \bar{w}^T S z.

The quadratic form associated to a complex skew-Hermitian form

Q(z) = s(z, z)

is always pure imaginary.

12.3 Over arbitrary fields

On a vector space V defined over an arbitrary field F having a distinguished automorphism σ of order two (an involution known as the companion automorphism), a map φ : V × V → F is sesquilinear if

\varphi(x + y, z + w) = \varphi(x, z) + \varphi(x, w) + \varphi(y, z) + \varphi(y, w)
\varphi(cx, dy) = c\, d^\sigma \varphi(x, y) = c\, \sigma(d)\, \varphi(x, y)

for all x, y, z, w ∈ V and all c, d ∈ F. Recall the convention of having the first argument linear, and notice the use of the exponential notation t ↦ t^σ for the action of the automorphism.

If the automorphism σ = id, then the sesquilinear form is a bilinear form.

A sesquilinear form is reflexive if, for every pair x, y ∈ V, φ(x, y) = 0 implies φ(y, x) = 0.

A sesquilinear form is said to be σ-Hermitian (sometimes referred to as being conjugate-symmetric) if

\varphi(x, y) = \varphi(y, x)^\sigma

for all x, y ∈ V. It follows from this definition that