
The Eigenvalue Problem: A Quick Introduction

Concepts of primary interest:

Real Symmetric & Orthogonal Matrices

Hermitian & Unitary Matrices

Determinant

Eigenvalue

Eigenvector

Degenerate eigenvalue - m-fold repeated

Characteristic Polynomial/Equation

Sample calculations:

Eigenvalues for a specific electric dipole problem

Coupled Oscillators

Application examples:

Angular momentum and the moment of inertia

Coupled Differential Equations

Coupled Oscillators

Sample Calculations

Eigenvalues

Eigenvectors

Transformed matrices

Tools of the trade:

Checks on the characteristic equation and eigenvalues

Eigenvalues for $\mathbf M$ and $\mathbf M^{-1}$: $\lambda \to \lambda^{-1}$

Discovery Exercise Worksheets:

Eigenvector Discovery

Inertia Tensor

Contact: [email protected]


German: Eigenschaft => nature, virtue, feature, quality, property, attribute

character, characteristic, characteristic quality

The majority of concepts discussed in introductory physics courses have scalar or vector character.

Equations of cosmic significance proclaim the proportionality or equality of vector quantities.

$$\vec F_{net} = m\,\vec a$$

Further studies lead to situations in which linear relationships are defined between vectors that are

not (necessarily) in the same direction. These relations require a matrix representation:

$$\begin{bmatrix} p_x \\ p_y \\ p_z \end{bmatrix} = \begin{bmatrix}\alpha_{11} & \alpha_{12} & \alpha_{13}\\ \alpha_{21} & \alpha_{22} & \alpha_{23}\\ \alpha_{31} & \alpha_{32} & \alpha_{33}\end{bmatrix}\begin{bmatrix}E_x\\E_y\\E_z\end{bmatrix} \qquad\text{or}\qquad \vec p = \boldsymbol\alpha\,\vec E \qquad \text{[EV.1]}$$

The details are not crucial to the development, but, for definiteness, E is the electric field applied to a molecule, p is the resultant electric dipole moment (charge rearrangement) due to the applied field, and α is the polarizability of the molecule. The matrix character of α is needed because, in a strangely shaped molecule, charges are not expected to move in the direction of the applied field. For example, if a molecule is cigar shaped, charge can move along the cigar more freely than across it. Hence, if the field is applied at 45° relative to the long axis, the direction of the polarization might be expected to lie between the direction of the field and that of the long axis of the cigar, a relation more complex than the simple scalar proportionality that exists between $\vec F_{net}$ and $\vec a$ in Newton's Law.


The Question: Are there directions for the applied field E that cause a polarization p in the same direction as the applied field? That is, $\vec p = \boldsymbol\alpha\,\vec E = \text{constant}\cdot\vec E$. In matrix form,

$$\begin{bmatrix} p_x\\p_y\\p_z\end{bmatrix} = \begin{bmatrix}\alpha_{11}&\alpha_{12}&\alpha_{13}\\\alpha_{21}&\alpha_{22}&\alpha_{23}\\\alpha_{31}&\alpha_{32}&\alpha_{33}\end{bmatrix}\begin{bmatrix}E_x\\E_y\\E_z\end{bmatrix} = \left(\lambda\,\alpha_0\right)\begin{bmatrix}E_x\\E_y\\E_z\end{bmatrix} \qquad\text{[EV.2]}$$

A constant α0 is to be factored out of all the other constants αij. Wisely, α0 is chosen to suck up all the units so that the remaining constants have no units (they are pure numbers). Note that scaling

the problem is a convenience adopted to simplify the nth order polynomial equation to be solved for

the eigenvalues. A price must be paid for this economy as is noted below.

$$\alpha_0\begin{bmatrix}m_{11}&m_{12}&m_{13}\\m_{21}&m_{22}&m_{23}\\m_{31}&m_{32}&m_{33}\end{bmatrix}\begin{bmatrix}E_x\\E_y\\E_z\end{bmatrix} = \lambda\,\alpha_0\begin{bmatrix}E_x\\E_y\\E_z\end{bmatrix}$$

or

$$\begin{bmatrix}m_{11}&m_{12}&m_{13}\\m_{21}&m_{22}&m_{23}\\m_{31}&m_{32}&m_{33}\end{bmatrix}\begin{bmatrix}E_x\\E_y\\E_z\end{bmatrix} = \lambda\begin{bmatrix}E_x\\E_y\\E_z\end{bmatrix} = \begin{bmatrix}\lambda&0&0\\0&\lambda&0\\0&0&\lambda\end{bmatrix}\begin{bmatrix}E_x\\E_y\\E_z\end{bmatrix}$$

After some simple algebra,

$$[\,\mathbf M-\lambda\,\mathbb 1\,]\,\vec E = \begin{bmatrix}m_{11}-\lambda&m_{12}&m_{13}\\m_{21}&m_{22}-\lambda&m_{23}\\m_{31}&m_{32}&m_{33}-\lambda\end{bmatrix}\begin{bmatrix}E_x\\E_y\\E_z\end{bmatrix} = \begin{bmatrix}0\\0\\0\end{bmatrix} \qquad\text{[EV.3]}$$

This equation is a classic matrix eigenvalue problem. Cramer's Rule declares that E must be identically zero if the matrix $\mathbf M - \lambda\,\mathbb 1$ has a non-vanishing determinant. Everything identically zero is designated the trivial solution. At the very best, it is not an exciting solution. If the determinant $|\mathbf M - \lambda\,\mathbb 1|$ does vanish, so that the matrix $[\mathbf M - \lambda\,\mathbb 1]$ is singular, then it is possible to find non-zero (non-trivial) electric fields E that satisfy the equations above. The equations are not linearly independent in this case, so only the ratios of the components can be found, not their absolute sizes. The size scale is set in a final normalization step after all the information in [EV.3] has been extracted.

The Classic Eigenvalue Problem

EIGENVALUE EQUATION: $\mathbf M\,\vec v = \lambda\,\vec v = \lambda\,\mathbb 1\,\vec v$ or $[\mathbf M - \lambda\,\mathbb 1]\,\vec v = 0$ [EV.4]

This equation has only trivial solutions unless the determinant vanishes.

DETERMINANT CONDITION: $|\mathbf M - \lambda\,\mathbb 1| = 0$. [EV.5]
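A quick numerical cross-check of [EV.4] and [EV.5] can be useful when working the examples below. The following is a minimal sketch (not part of the original handout) assuming NumPy is available; numpy.linalg.eig returns the eigenvalues and a matrix whose columns are the corresponding eigenvectors.

```python
# Minimal sketch: verify M v = lambda v and |M - lambda 1| = 0 with NumPy.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
M = (A + A.T) / 2                      # a real symmetric matrix, as assumed in this handout

vals, vecs = np.linalg.eig(M)          # columns of vecs are the eigenvectors
for lam, v in zip(vals, vecs.T):
    assert np.allclose(M @ v, lam * v)                          # eigenvalue equation
    assert np.isclose(np.linalg.det(M - lam * np.eye(3)), 0.0)  # determinant condition
print(np.sort(vals))
```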


For n × n matrices this leads to an nth order equation in λ that is often called the characteristic polynomial: $Poly_n(\lambda) = 0$. Each of the n roots $\lambda_i$ of that equation has an associated column vector $\vec v_i$ that satisfies the eigenvalue equation for that eigenvalue $\lambda_i$. The eigenvector for an eigenvalue is found by substituting that eigenvalue into the eigenvalue equation and solving for the ratios of the components of the vector. As the determinant of coefficients vanishes, there are at most (n – 1) independent relations, limiting the available information to the ratios of the components rather than their values. This limitation is consistent with the stated goal: to find directions of E such that E and p are parallel. As the problem is linear, any solution multiplied by a scalar is also a solution. That is: the direction of the vector can be set, but not its magnitude. The ratio of the components is to be found, and then the vector is to be normalized to be a direction, a vector of length one. The three components of an eigen-direction are its direction cosines with respect to the three coordinate directions.

WARNING: An assumption is to be made about the nature of the matrices for which

eigenvalues and eigenvectors are sought. The two types of matrices considered are real

symmetric and Hermitian. (It should be noted that a real Hermitian matrix is real symmetric.)

Real symmetric matrices often represent physical properties in classical physics. Hermitian

matrices are the complex generalization that represent measurable quantities in quantum

mechanics. Because of this assumption, the conclusions reached LACK GENERAL

VALIDITY. They are only necessarily valid for the restricted cases to be listed. The cases

discussed in this handout are the cases of major interest for applications in science and

engineering. The developments that follow ignore the contrary cases.

Physical principles dictate properties for matrices of interest. The matrices of interest are those in

the following classes.

Real Symmetric Matrix: One for which $\mathbf A = \mathbf A^t$, or $a_{ij} = a_{ji}$ and $a_{ij}\in\mathbb R$, the real numbers. Real symmetric matrices are a subset of Hermitian matrices. Certain classes of physical properties are represented by real symmetric matrices in classical physics.


Hermitian Matrices: A matrix $\mathbf H$ equal to its Hermitian conjugate $\mathbf H^\dagger$. The Hermitian conjugate $\mathbf H^\dagger$ (pronounced "H dagger") of the matrix $\mathbf H$ is the complex conjugate of the transpose of $\mathbf H$: $h^\dagger_{ij} = (h_{ji})^*$. Hermitian matrices are the complex generalization of real symmetric matrices and find application in quantum mechanics.

Orthogonal Matrix: A matrix $\mathbf O$ such that its transpose $\mathbf O^t$ is its inverse, or $\mathbf O^{-1} = \mathbf O^t$ for an orthogonal matrix. Orthogonal matrices are used to represent rotations and inversions of coordinates.

They are transformations that preserve the magnitudes of vectors, and in fact, they preserve the

values of inner products of two vectors. A general rotation of a 3D Cartesian coordinate system

takes the components of a vector in the original system and shuffles them to yield the components as

observed in the rotated system. The sum of the squares of the components in the original system is

identical to the sum of the squares of the components relative to the rotated system.

Unitary Matrix: A matrix $\mathbf U$ such that its Hermitian conjugate $\mathbf U^\dagger$ is its inverse, or $\mathbf U^{-1} = \mathbf U^\dagger$ for a unitary matrix. Unitary matrices are the complex generalization of orthogonal matrices. The associated transformations preserve inner products and hence the magnitudes of complex vectors.

Singular Matrix: A square matrix for which no multiplicative inverse exists is a singular matrix.

An inverse can be constructed for all square matrices with non-zero determinants. A matrix is

singular if its determinant vanishes.

Non-Singular Matrix: A square matrix with a non-zero determinant. An inverse can be constructed

for all square matrices with non-zero determinants.

Hermitian and real symmetric matrices are diagonalizable and have sets of orthogonal eigenvectors. The

eigenvectors for distinct (unequal) eigenvalues are orthogonal, while those for repeated (degenerate)

eigenvalues can be chosen to be orthogonal. For an n x n matrix of these types, with n distinct

eigenvalues, (n – 1) equations constrain each eigenvector which loosely sets its direction while

leaving its magnitude to be adjusted by a normalization procedure. If an eigenvalue appears m


times, (n – m) relations constrain the eigenvector for the repeated eigenvalue restricting it to an m

dimensional space. Every vector in that m dimensional space has the specified eigenvalue, and a set

(many sets) of m mutually orthogonal eigenvectors exist in the space.

A unitary transformation exists which can diagonalize a Hermitian matrix:

$$\begin{bmatrix}\lambda_1 & 0 & \cdots & 0\\ 0 & \lambda_2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \lambda_n\end{bmatrix}$$

The diagonalized form of a matrix has zeros everywhere except on the diagonal, and the eigenvalues

appear as the elements on the diagonal. Unitary transformations basically restate the problem in

terms of a new set of basis vectors which are linear combinations of the original basis set and which

are normalized. For a unitary transformation that diagonalizes the matrix, the new basis vectors are

the eigenvectors.

An orthogonal transformation (an inversion or a rotation; we will choose a rotation of the coordinate

axes) exists which can diagonalize a real symmetric matrix.

$$\begin{bmatrix}\lambda_1 & 0 & \cdots & 0\\ 0 & \lambda_2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \lambda_n\end{bmatrix}$$

Orthogonal transformations are a subset of unitary transformations and include, for example,

rotations. Note that, under a rotation, all the distances between point pairs are preserved. A rigid

body is reoriented without distortion. The moment of inertia matrix for a rigid body is real

symmetric. It has a diagonal representation when calculated relative to some set of rotated

coordinates. A coordinate axis along the long axis of a football plus two axes perpendicular to that

axis through the ball’s center of mass provide such a preferred system for the football. More is to be

presented later in the linear transformations section.


What can go wrong? For more general matrices (matrices that are not Hermitian or real

symmetric), the eigenvectors may not be mutually orthogonal although they will be a linearly

independent set. In the case of degenerate eigenvalues, the matrix may not transform to a diagonal

form, but rather, for m-fold degeneracy [eigenvalue repeated m times] may have m x m blocks along the

diagonal which contain some off-diagonal elements. The eigenvalues and the eigenvectors can be

complex rather than real. All of this depresses the reader. Ignore it until you encounter a case of

interest in which these complications arise.

Sample calculation: Eigenvalues for a specific dipole problem: $\boldsymbol\alpha\,\vec E = \lambda\,\alpha_0\,\vec E$, where

$$\boldsymbol\alpha/\alpha_0 = \begin{bmatrix}2&1&0\\1&2&0\\0&0&2\end{bmatrix} \;\Rightarrow\; \begin{bmatrix}2-\lambda&1&0\\1&2-\lambda&0\\0&0&2-\lambda\end{bmatrix}\begin{bmatrix}E_x\\E_y\\E_z\end{bmatrix} = \begin{bmatrix}0\\0\\0\end{bmatrix}$$

First step, expand the determinant condition.

$$\begin{vmatrix}2-\lambda&1&0\\1&2-\lambda&0\\0&0&2-\lambda\end{vmatrix} = (2-\lambda)^3 - (2-\lambda) = (2-\lambda)\left[(2-\lambda)^2 - 1\right] = 0$$

$$\Rightarrow\; (2-\lambda)(3-\lambda)(1-\lambda) = 0$$

Clearly, the roots are λ = {1, 2, 3}. [The actual eigenvalues are { α0, 2α0 , 3α0 }; a difference of no

consequence when only the eigenvectors are sought. A difference that must be remembered when the

actual eigenvalues are of interest. For example, in quantum mechanics, the eigenvalues represent the

allowed energy levels.]

The next step is to find an eigenvector for each of the eigenvalues. There are various suggestions for

the order in which to solve for the eigenvectors. One rule is to set the eigenvalues in increasing order

and to work from the smallest to the largest. I recommend that the eigenvectors for the distinct

eigenvalues (the values that appear only once in the list) be found first. For problems in our standard

three-dimensional space, the final order rather than the initial order is important. The eigenvectors

should be listed in right-hand rule (RHR) order. One can respect all these suggestions by finding the


eigenvectors for the smallest remaining distinct eigenvalue at each step. For three-dimensional cases,

the eigenvectors can be chosen to be orthogonal, and, if the eigenvectors are not in RHR order, the

last eigenvector can be multiplied by -1 to place them in RHR order.

Start with λ = 1. Substitute the eigenvalue for λ into the general eigenvalue equation:

$[\mathbf M - \lambda\,\mathbb 1]\,\vec v = 0$.

The eigenvector is represented by a column vector of values (aλ,bλ,cλ), the unknown components.

$$\begin{bmatrix}2-1&1&0\\1&2-1&0\\0&0&2-1\end{bmatrix}\begin{bmatrix}a_1\\b_1\\c_1\end{bmatrix} = \begin{bmatrix}1&1&0\\1&1&0\\0&0&1\end{bmatrix}\begin{bmatrix}a_1\\b_1\\c_1\end{bmatrix} = \begin{bmatrix}0\\0\\0\end{bmatrix}$$

Two equations follow: $a_1 + b_1 = 0$ and $c_1 = 0$. The z-component is zero and the y-component is minus the x-component: $\hat v_1 = \tfrac{1}{\sqrt2}\left[\hat i - \hat j\right]$. As there were only two independent equations, only the direction, that is the ratios of the components, is specified. The final form follows by normalizing the result to have length one.

Onward to λ = 2.

$$\begin{bmatrix}2-2&1&0\\1&2-2&0\\0&0&2-2\end{bmatrix}\begin{bmatrix}a_2\\b_2\\c_2\end{bmatrix} = \begin{bmatrix}0&1&0\\1&0&0\\0&0&0\end{bmatrix}\begin{bmatrix}a_2\\b_2\\c_2\end{bmatrix} = \begin{bmatrix}0\\0\\0\end{bmatrix}$$

which yields $b_2 = 0$; $a_2 = 0$; and $0\cdot c_2 = 0$. The x- and y-components must be zero, and the z-component is unrestricted. Hence: $\vec v_2 = c_2\,\hat k$. The direction follows from the eigenvalue equation; the magnitude of $c_2$ must be set by a separate normalization step: $\hat v_2 \to \hat k$.

Finally, λ = 3 is treated.

$$\begin{bmatrix}2-3&1&0\\1&2-3&0\\0&0&2-3\end{bmatrix}\begin{bmatrix}a_3\\b_3\\c_3\end{bmatrix} = \begin{bmatrix}-1&1&0\\1&-1&0\\0&0&-1\end{bmatrix}\begin{bmatrix}a_3\\b_3\\c_3\end{bmatrix} = \begin{bmatrix}0\\0\\0\end{bmatrix}$$


The equation $a_3 - b_3 = 0$ requires that the y-component equal the x-component, while $-c_3 = 0$ requires that there be no z-component:

$$\hat v_3 = \frac{a_3\,\hat i + a_3\,\hat j + 0\,\hat k}{\sqrt{a_3^2 + a_3^2 + 0}} = \frac{\hat i + \hat j}{\sqrt 2}$$

Note 1: Any time a component of an eigenvector is unrestricted, the eigenvector can be chosen to lie entirely along that direction. For example, the equations a = –c and 0·b = 0 impose a condition on a and c, but the choice $\hat v = \hat j$ still works because a = c = 0 satisfies a = –c. The equation 0·b = 0 indicates that the value of b is unrestricted, so a vector entirely along y, $\hat v = \hat j$, works. Given that $\hat j$ is a solution, the remaining independent eigenvectors can be chosen to have no $\hat j$ content. No $\hat j$ plus a = –c would lead to $\tfrac{1}{\sqrt2}(\hat i - \hat k)$.

Note 2: If the solution vector for a particular value λ is forced to be zero (a = 0, b = 0, ….) then a

mistake has been made. Repeat the eigenvector process for that eigenvalue λ. If no error is found,

check to see that λ satisfies the characteristic polynomial. Finally, check the expansion of the

determinant to ensure that you are working with the correct polynomial for the problem.

Observations: The three eigenvectors are mutually orthogonal. The eigenvectors corresponding to

distinct (unequal) eigenvalues λ are always orthogonal. In the case of a repeated root, two (or more)

eigenvalues are equal. Such cases are called degenerate, and, in these cases, the equations permit

the choice of as many independent vectors as the number of times the root is repeated, and these

vectors may be chosen to be orthogonal. Be sure that you always do so.

Note the form of the original matrix:

$$\begin{bmatrix}2&1&0\\1&2&0\\0&0&2\end{bmatrix} \;\to\; \begin{bmatrix}\begin{bmatrix}2&1\\1&2\end{bmatrix}&0\\[4pt] 0&\left[\,2\,\right]\end{bmatrix}$$

This form is block diagonal. The components associated with each block are mixed with

themselves, but not with those of other blocks. There are two eigenvectors with mixed x and y parts,

and one with only z content which has eigenvalue 2.
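As a numerical check of this sample calculation, here is a sketch assuming NumPy; numpy.linalg.eigh is appropriate because the matrix is real symmetric, and it returns orthonormal eigenvectors with the eigenvalues in ascending order.

```python
import numpy as np

M = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])          # the scaled polarizability above

vals, vecs = np.linalg.eigh(M)
print(vals)                              # -> [1. 2. 3.]
print(vecs.T)                            # rows ~ (i - j)/sqrt(2), k, (i + j)/sqrt(2), up to sign
print(np.allclose(vecs.T @ vecs, np.eye(3)))   # the eigenvectors are mutually orthogonal
```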

Exercise: Consider the eigenvalue problem


$$\begin{bmatrix}2-\lambda&1&0\\1&2-\lambda&0\\0&0&1-\lambda\end{bmatrix}\begin{bmatrix}E_x\\E_y\\E_z\end{bmatrix} = \begin{bmatrix}0\\0\\0\end{bmatrix}$$

Show that the eigenvalues are: {1, 1, 3 }. Find a possible set of eigenvectors. Show that the

eigenvector for λ = 3 is orthogonal to both of the eigenvectors for λ = 1. Choose the two

eigenvectors for the degenerate eigenvalue to be orthogonal. Find a second set of orthogonal

eigenvectors for λ = 1, the degenerate eigenvalue. How are all the eigenvectors for λ = 1 oriented

relative to the eigenvector for λ = 3?

Degenerate Eigenvalues:

In the absence of degeneracy, each eigenvalue is distinct from all the other eigenvalues, and n – 1 of

the equations represented by the eigenvalue equation are linearly independent. The eigenvector

sought has n components so the n – 1 equations do not uniquely specify its n components. The n –1

equations set the n – 1 ratios of the n components that specify the direction of the eigenvector. To

finalize the eigenvector, it is scaled by a normalization process.

When an eigenvalue is degenerate, the situation is even less determined. Suppose an eigenvalue is

repeated m times. (The eigenvalue is designated m-fold degenerate.) Then, n – m of the equations

are linearly independent, and the corresponding eigenvectors can be any vectors in an m-dimensional

space. (For the non-degenerate or 1-fold case, the direction is set, but the vector can be scaled by scalar

multiplication. => Any vector in a 1D space.) Consider the exercise above. The eigenvalue 1 is

2-fold degenerate. The eigenvalue condition yields the equations:

$$a_1 + b_1 = 0\,;\qquad 0\cdot c_1 = 0\,.$$

The y-component must be the negative of the x-component, and the z-component is unrestricted.

The eigenvector is any vector of the form:

$$\vec v_1(a,c) = \frac{a\,\hat i - a\,\hat j + c\,\hat k}{\sqrt{2a^2+c^2}} = \left(\frac{\sqrt2\,a}{\sqrt{2a^2+c^2}}\right)\frac{\hat i-\hat j}{\sqrt2} + \left(\frac{c}{\sqrt{2a^2+c^2}}\right)\hat k = \sin(\alpha)\,\frac{\hat i-\hat j}{\sqrt2} + \cos(\alpha)\,\hat k\,.$$

The two free parameters a and c represent the two degrees of freedom allowing $\vec v_1(a,c)$ to be any vector lying in a two dimensional space, the plane perpendicular to the x-y plane and intersecting it


along the line y = –x. One set of orthogonal vectors is $\tfrac{1}{\sqrt2}(\hat i-\hat j)$ and $\hat k$. Another set is found at 45° in between these:

$$\vec v_1(1,1) = \tfrac{1}{\sqrt2}\!\left[\tfrac{1}{\sqrt2}\left(\hat i - \hat j\right) + \hat k\right] \qquad\text{and}\qquad \vec v_1(1,-1) = \tfrac{1}{\sqrt2}\!\left[\tfrac{1}{\sqrt2}\left(\hat i - \hat j\right) - \hat k\right]$$

A set of orthogonal eigenvectors can always be chosen for degenerate eigenvalues of a real

symmetric matrix.

The eigenvectors for distinct eigenvalues are always orthogonal. Every eigenvector corresponding to

a degenerate eigenvalue is orthogonal to every eigenvector corresponding to a distinct eigenvalue.

Eigenvectors for an m-fold degenerate eigenvalue can be chosen to be orthogonal.

Note 3: Efficient attack plan for the case of degenerate eigenvalues and 3 x 3 matrices. Find the

eigenvector for the distinct eigenvalue first. Next find a direction that works for the degenerate

eigenvalue. Choose the final eigenvector to be $\vec v_1\times\vec v_2$, the cross product of the first two. This ensures

that the eigenvectors are mutually orthogonal. Further, eigenvectors should be listed in a right hand

rule order. The cross product ensures that order. If the cross product is used to generate the third

eigen-direction, the result should be substituted into the eigenvalue equation to verify that it

satisfies that equation with the correct eigenvalue.
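A short sketch of this attack plan applied to the exercise matrix above (eigenvalues {1, 1, 3}); NumPy is assumed, and the particular vector choices simply mirror the hand calculation.

```python
import numpy as np

M = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])

e1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)   # distinct eigenvalue 3: a = b, c = 0
e2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)  # one choice satisfying a + b = 0 for lambda = 1
e3 = np.cross(e1, e2)                           # third eigen-direction, RHR order by construction

for lam, v in [(3.0, e1), (1.0, e2), (1.0, e3)]:
    print(lam, np.allclose(M @ v, lam * v))     # verify each candidate in the eigenvalue equation
```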

Mega-Application: Angular Momentum and Tire Balancing

A rotating rigid body has angular momentum L as it rotates with angular velocity ω . If the body is

highly symmetric, L and ω will be in the same direction. If the body is lumpy and misshapen, L

and ω may not be in the same direction. The direction of the angular velocity may be set or

constrained to lie along an axle held in place by bearings. Consider a (wheel plus) tire on your car.

You desire that it rotate smoothly on the axle. This condition is met if the angular momentum L of

the tire is parallel to its angular velocity ω (parallel to the axle). For L not parallel to ω, the angular

momentum changes in time as the angular velocity maintains a fixed direction along the axle. A net

torque must be applied to the wheel (body) to change its angular momentum. If the necessary

torques are not precisely applied, the wheel shimmies. An eigenvalue problem lurks in this!


For a rotating rigid body, the velocity v of a particle is $\vec\omega\times\vec r$, given that ω is the angular velocity of the rigid body's rotation about the origin and that r is the position of the particle relative to the origin. Below, the particle travels around a circle of radius r sin(θ) at speed v = ω r sin(θ). As drawn, the velocity $\vec v = \vec\omega\times\vec r$ is into the page.

[Figure: a particle at position r, making angle θ with the rotation axis; ω has the rotation rate as its magnitude and is directed along the rotation axis in the RHR sense; L = r × m v.]

The angular momentum of the particle is computed as:

$$\vec L = \vec r\times m\vec v = m\,\vec r\times\vec v = m\left[\vec r\times\left(\vec\omega\times\vec r\right)\right] \qquad\text{[EV.6]}$$

Note that L and ω are not necessarily parallel.

This equation is almost enough to inspire fear, but tools are available to crush it.

Please note that the details of this development are presented elsewhere. Just accept the form

proposed for the moment of inertia below.

**** A worksheet development of the inertia tensor is attached at the end of the tools of the trade

section.

The Form of the Inertia Tensor for a Rigid Body: The angular momentum is to be represented as the matrix product of the moment of inertia and the angular velocity vector. That is: $\vec L = \mathbb I\,\vec\omega$ or, in terms of the elements and components, $L_i = \sum_{j=1}^{3} I_{ij}\,\omega_j$. With the target form identified, one begins with the general definition of the angular momentum of a particle: $\vec L = \vec r\times m\vec v = m\,\vec r\times(\vec\omega\times\vec r)$. Each particle in a rigid body that has an angular velocity ω directed along a fixed axis is moving along a circular path about that axis. The velocity relative to a point on that axis is $\vec\omega\times\vec r$ where r is the position of the particle relative to that axis point. The identification of the velocity as $\vec\omega\times\vec r$ is valid for rigid body rotation so the results that follow are valid for rigid bodies. The component form of the angular momentum equation is:

$$L_i = m\left[\vec r\times\left(\vec\omega\times\vec r\right)\right]_i\,.$$

Reviewing some notation, $\vec r = x\,\hat i + y\,\hat j + z\,\hat k = x_1\hat i + x_2\hat j + x_3\hat k$, and the inner product

$$\vec r\cdot\vec r = r^2 = \sum_{i=1}^{3} x_i\,x_i \qquad\text{or}\qquad \vec r\cdot\vec r = r^2 = \sum_{i,j=1}^{3} x_i\,x_j\,\delta_{ij}\,.$$

The cross products can be represented using the permutation symbol and the components $x_i$ of $\vec r$. In general, the x component of a vector is to be identified as the 1 component, the y component as the 2 component and the z component as the 3 component. For example,

$$\left(\vec\omega\times\vec r\right)_k = \sum_{\ell,m=1}^{3}\varepsilon_{k\ell m}\,\omega_\ell\,x_m\,.$$

Applying this notation to the angular momentum,

$$L_i = m\left[\vec r\times\left(\vec\omega\times\vec r\right)\right]_i = m\sum_{j,k=1}^{3}\varepsilon_{ijk}\,x_j\left(\vec\omega\times\vec r\right)_k = m\sum_{j,k,\ell,m=1}^{3}\varepsilon_{ijk}\,\varepsilon_{k\ell m}\,x_j\,x_m\,\omega_\ell\,.$$

Two interchanges convert $\varepsilon_{ijk}$ to $\varepsilon_{kij}$ with no net sign change:

$$L_i = m\sum_{j,k,\ell,m=1}^{3}\varepsilon_{kij}\,\varepsilon_{k\ell m}\,x_j\,x_m\,\omega_\ell\,.$$

This form is dramatically simplified by using the identities:

$$\sum_{k=1}^{3}\varepsilon_{kij}\,\varepsilon_{k\ell m} = \delta_{i\ell}\,\delta_{jm} - \delta_{im}\,\delta_{j\ell} \qquad\text{and}\qquad \sum_{s,t=1}^{3} x_s\,x_t\,\delta_{st} = \sum_{s=1}^{3} x_s^2 = \vec r\cdot\vec r = r^2$$

Note that the first identity is easily established by computing the 81 possible outcomes for each form and demonstrating that they agree. The second identity is established in standard homework problems.

$$L_i = m\sum_{j,k,\ell,m=1}^{3}\varepsilon_{kij}\,\varepsilon_{k\ell m}\,x_j\,x_m\,\omega_\ell = m\sum_{j,\ell,m=1}^{3}\left(\delta_{i\ell}\,\delta_{jm} - \delta_{im}\,\delta_{j\ell}\right)x_j\,x_m\,\omega_\ell$$

The ordering of the components of the vectors was shuffled in the last step. They are scalars so their multiplication is commutative. Next, the sum over j is executed. The Kronecker delta picks out the one term with j = m in the first part and the term with j = ℓ in the second part.

$$L_i = m\sum_{\ell,m=1}^{3}\left(\delta_{i\ell}\,x_m\,x_m - \delta_{im}\,x_\ell\,x_m\right)\omega_\ell$$

Finally, the sum over m is executed. In the first part, the sum of the squares of the components of r is just r², and the Kronecker delta picks out the one term with m = i in the second part.


$$L_i = m\sum_{\ell=1}^{3}\left(r^2\,\delta_{i\ell} - x_i\,x_\ell\right)\omega_\ell = \sum_{\ell=1}^{3} I_{i\ell}\,\omega_\ell$$

Comparison with the target form identifies the elements of the inertia tensor for a particle as:

$$I_{ij} = m\left(r^2\,\delta_{ij} - x_i\,x_j\right)$$

leading to the form for a collection of particles:

$$I_{ij} = \sum_{\alpha}m_\alpha\left(r_\alpha^2\,\delta_{ij} - x_{\alpha i}\,x_{\alpha j}\right)$$

Matrix Form:

$$\mathbb I = m\begin{bmatrix} r^2 - x^2 & -xy & -xz\\ -yx & r^2 - y^2 & -yz\\ -zx & -zy & r^2 - z^2\end{bmatrix} = m\begin{bmatrix} y^2+z^2 & -xy & -xz\\ -yx & x^2+z^2 & -yz\\ -zx & -zy & x^2+y^2\end{bmatrix}$$

This truly ugly relation becomes more transparent in matrix form.

$$\begin{bmatrix}L_x\\L_y\\L_z\end{bmatrix} = m\begin{bmatrix} r^2-x^2 & -xy & -xz\\ -yx & r^2-y^2 & -yz\\ -zx & -zy & r^2-z^2\end{bmatrix}\begin{bmatrix}\omega_x\\\omega_y\\\omega_z\end{bmatrix} = m\begin{bmatrix} y^2+z^2 & -xy & -xz\\ -yx & x^2+z^2 & -yz\\ -zx & -zy & x^2+y^2\end{bmatrix}\begin{bmatrix}\omega_x\\\omega_y\\\omega_z\end{bmatrix}$$

or, in shorthand, as $\vec L = \mathbb I\,\vec\omega$ where

$$\mathbb I = \begin{bmatrix} I_{xx} & I_{xy} & I_{xz}\\ I_{yx} & I_{yy} & I_{yz}\\ I_{zx} & I_{zy} & I_{zz}\end{bmatrix} = \sum_\alpha m_\alpha\begin{bmatrix} y_\alpha^2+z_\alpha^2 & -x_\alpha y_\alpha & -x_\alpha z_\alpha\\ -y_\alpha x_\alpha & x_\alpha^2+z_\alpha^2 & -y_\alpha z_\alpha\\ -z_\alpha x_\alpha & -z_\alpha y_\alpha & x_\alpha^2+y_\alpha^2\end{bmatrix} \qquad\text{[EV.7]}$$

A sum over the particles using the particle index α has been added to generalize to bodies consisting

of several particles. The diagonal elements of the inertia tensor are the familiar moments of inertia

about the x, y and z axes, the sum of the products of each mass and its perpendicular distance from

the axis squared. For example:

$$I_{xx} = \sum_\alpha m_\alpha\,r_{\alpha,\perp}^2 = \sum_\alpha m_\alpha\left(y_\alpha^2 + z_\alpha^2\right)$$

The off-diagonal elements are called skew moments.
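The sum in [EV.7] is easy to automate. The sketch below (assuming NumPy; the helper name inertia_tensor is ours, not from the handout) builds the inertia tensor for a list of point masses and reproduces the tensor of example 1 below.

```python
import numpy as np

def inertia_tensor(masses, positions):
    """I_ij = sum_alpha m_alpha (r_alpha^2 delta_ij - x_alpha,i x_alpha,j)   [EV.7]"""
    I = np.zeros((3, 3))
    for m, r in zip(masses, np.asarray(positions, dtype=float)):
        I += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))
    return I

# Example 1 below: masses m at (3, 4, 0) and (-3, -4, 0), taking m = 1.
print(inertia_tensor([1.0, 1.0], [(3, 4, 0), (-3, -4, 0)]))
# -> 2 * [[16, -12, 0], [-12, 9, 0], [0, 0, 25]]
```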

The entire rotating body problem is frustrating because even before you finish computing all the

entries in the inertia tensor, the body rotates and the particles are in new positions requiring that a

revised inertia tensor be computed. Even this problem can be solved if you understand the


eigenvalue problem. For now, a few examples are to be displayed. The full solution of the rotating

rigid body is left as a topic for your second mechanics course. Just remember that the eigenvalue

problem identifies the axes or coordinate system in which the inertia tensor is a diagonal matrix, its

simplest form.

Angular momentum example 1: two masses: m @ (3,4,0) and m @ (-3,-4,0).

Both masses are 5 units from the origin and lie in the x-y plane.

$$\begin{bmatrix}L_x\\L_y\\L_z\end{bmatrix} = 2m\begin{bmatrix} 5^2-(3)(3) & -(3)(4) & -(3)(0)\\ -(4)(3) & 5^2-(4)(4) & -(4)(0)\\ -(0)(3) & -(0)(4) & 5^2-(0)(0)\end{bmatrix}\begin{bmatrix}\omega_x\\\omega_y\\\omega_z\end{bmatrix} = 2m\begin{bmatrix} 16 & -12 & 0\\ -12 & 9 & 0\\ 0 & 0 & 25\end{bmatrix}\begin{bmatrix}\omega_x\\\omega_y\\\omega_z\end{bmatrix}$$

Note that the two masses make equal contributions to the inertia tensor in this case.

Exercise: Given that the angular velocity is ωx in the x direction compute the angular momentum

using the matrix equation above. Compare it to the value that would arise from using $\vec L = \vec r\times m\vec v$

directly.

The goal is to find axis directions for which L and ω are in the same direction.

$$\begin{bmatrix}L_x\\L_y\\L_z\end{bmatrix} = 2m\begin{bmatrix} 16 & -12 & 0\\ -12 & 9 & 0\\ 0 & 0 & 25\end{bmatrix}\begin{bmatrix}\omega_x\\\omega_y\\\omega_z\end{bmatrix} = 2m\,\lambda\begin{bmatrix}\omega_x\\\omega_y\\\omega_z\end{bmatrix}$$

or the EIGENVALUE EQUATION:

$$\begin{bmatrix} 16-\lambda & -12 & 0\\ -12 & 9-\lambda & 0\\ 0 & 0 & 25-\lambda\end{bmatrix}\begin{bmatrix}\omega_x\\\omega_y\\\omega_z\end{bmatrix} = \begin{bmatrix}0\\0\\0\end{bmatrix}$$

and hence the DETERMINANT CONDITION:

$$\begin{vmatrix} 16-\lambda & -12 & 0\\ -12 & 9-\lambda & 0\\ 0 & 0 & 25-\lambda\end{vmatrix} = 0$$

with its CHARACTERISTIC POLYNOMIAL:

$$(16-\lambda)(9-\lambda)(25-\lambda) - (-12)(-12)(25-\lambda) = 0$$

$$(25-\lambda)\left[(16-\lambda)(9-\lambda) - 144\right] = (25-\lambda)\left[\lambda\,(\lambda - 25)\right] = 0$$

The roots are 25, 25 and 0. [After restoring the factor of 2 m the actual eigenvalues are { 50 m, 50 m , 0};

a difference of no consequence when only the eigenvectors are sought. A difference that must be

remembered when the actual moments of inertia are of interest. ]


Exercise: Show that the eigenvalues 25 and 25 correspond to any directions that are perpendicular

to the line joining the two masses while the eigenvalue 0 is for rotations about an axis lying along

the line joining the two particles. Why is the moment of inertia about the line joining the particles

zero?

Angular momentum example 2: two masses: m @ (3,4,0) and m @ (–3,–4,0) plus four additional masses m located at the corners of a square in the x-y plane, at (±2, ±2, 0). Computing $\mathbb I$ for this mass distribution:

EIGENVALUE EQUATION:

$$\begin{bmatrix}L_x\\L_y\\L_z\end{bmatrix} = 2m\begin{bmatrix} 24 & -12 & 0\\ -12 & 17 & 0\\ 0 & 0 & 41\end{bmatrix}\begin{bmatrix}\omega_x\\\omega_y\\\omega_z\end{bmatrix} = 2m\,\lambda\begin{bmatrix}\omega_x\\\omega_y\\\omega_z\end{bmatrix}$$

DETERMINANT CONDITION:

$$\begin{vmatrix} 24-\lambda & -12 & 0\\ -12 & 17-\lambda & 0\\ 0 & 0 & 41-\lambda\end{vmatrix} = 0$$

$$(41-\lambda)\left[(24-\lambda)(17-\lambda) - (-12)(-12)\right] = 0$$

$$(41-\lambda)\left[\lambda^2 - 41\lambda + 264\right] = 0$$

$$\lambda = 41,\ \ \frac{41\pm\sqrt{41^2 - 4(1)(264)}}{2} = \{\,41,\ 33,\ 8\,\}$$

Somewhat remarkably, this equation has roots {41, 33, 8 }. [ Recall that the actual eigenvalues are

{82 m, 66 m, 16 m } after the 2 m scale factor is restored.]

The eigenvalues are distinct so three orthogonal eigen-directions are expected.

EIGENVECTOR λ = 41:

$$\begin{bmatrix} 24-41 & -12 & 0\\ -12 & 17-41 & 0\\ 0 & 0 & 41-41\end{bmatrix}\begin{bmatrix}a\\b\\c\end{bmatrix} = \begin{bmatrix} -17 & -12 & 0\\ -12 & -24 & 0\\ 0 & 0 & 0\end{bmatrix}\begin{bmatrix}a\\b\\c\end{bmatrix} = \begin{bmatrix}0\\0\\0\end{bmatrix}$$

The first two rows require the x-component to be both larger and smaller in magnitude than the y-component unless they are both zero. The third row states 0·c = 0, or that the z-component is unrestricted.

$$\Rightarrow\ \hat e_{41} = \hat k$$

EIGENVECTOR λ = 33:

$$\begin{bmatrix} 24-33 & -12 & 0\\ -12 & 17-33 & 0\\ 0 & 0 & 41-33\end{bmatrix}\begin{bmatrix}a\\b\\c\end{bmatrix} = \begin{bmatrix} -9 & -12 & 0\\ -12 & -16 & 0\\ 0 & 0 & 8\end{bmatrix}\begin{bmatrix}a\\b\\c\end{bmatrix} = \begin{bmatrix}0\\0\\0\end{bmatrix}$$

The first two rows redundantly require that the y-component is minus three-fourths of the x-component. The third row 8c = 0 states: no z-component.

$$\Rightarrow\ \hat e_{33} = 0.8\,\hat i - 0.6\,\hat j$$

EIGENVECTOR λ = 8:

$$\begin{bmatrix} 24-8 & -12 & 0\\ -12 & 17-8 & 0\\ 0 & 0 & 41-8\end{bmatrix}\begin{bmatrix}a\\b\\c\end{bmatrix} = \begin{bmatrix} 16 & -12 & 0\\ -12 & 9 & 0\\ 0 & 0 & 33\end{bmatrix}\begin{bmatrix}a\\b\\c\end{bmatrix} = \begin{bmatrix}0\\0\\0\end{bmatrix}$$

As expected! The first two rows redundantly require that the x-component is plus three-fourths of the y-component. The third row 33c = 0 states: no z-component.

$$\Rightarrow\ \hat e_{8} = 0.6\,\hat i + 0.8\,\hat j$$

Observations: The three eigenvectors are mutually orthogonal as expected for distinct eigenvalues.

Prepare a sketch of the masses plus the original 3D Cartesian coordinate system. Sketch the three

eigen-directions, each with its tail at the origin. Estimate the moment of inertia of the masses about

axes through the origin and in the direction of each eigen-direction.
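A numerical check of example 2 can be made with the sketch below (assuming NumPy; the common factor 2m is scaled out, and numpy.linalg.eigh lists the eigenvalues in ascending order).

```python
import numpy as np

I = np.array([[ 24.0, -12.0,  0.0],
              [-12.0,  17.0,  0.0],
              [  0.0,   0.0, 41.0]])

vals, vecs = np.linalg.eigh(I)
print(vals)      # -> [ 8. 33. 41.]
print(vecs.T)    # rows ~ 0.6 i + 0.8 j, 0.8 i - 0.6 j, k (up to overall signs)
```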

Mega-Application: Coupled Oscillators or Systems of Linear DEs

A set of coupled linear differential equations of the form:

$$\begin{bmatrix}u_1'\\u_2'\end{bmatrix} = \begin{bmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{bmatrix}\begin{bmatrix}u_1\\u_2\end{bmatrix}\qquad\text{where}\qquad u' \to \frac{du}{dx}\ \text{ or }\ \frac{du}{dt}\ \text{ or }\ \frac{d^2u}{dt^2}\ \text{ or }\ \dots$$

can be solved by assuming a solution of the form

$$\begin{bmatrix}u_1\\u_2\end{bmatrix} = \begin{bmatrix}\alpha\\\beta\end{bmatrix}e^{st}$$

for the du/dt case. The power of this trial solution is that the process of differentiation becomes effectively multiplication by s, reducing the problem to solving the algebraic eigenvalue problem:

$$\begin{bmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{bmatrix}\begin{bmatrix}\alpha\\\beta\end{bmatrix} = s\begin{bmatrix}\alpha\\\beta\end{bmatrix}$$


For the d²u/dt² case, it is traditional to choose the trial form

$$\begin{bmatrix}u_1\\u_2\end{bmatrix} = \begin{bmatrix}\alpha\\\beta\end{bmatrix}e^{i\omega t}$$

leading to the eigenvalue problem

$$\begin{bmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{bmatrix}\begin{bmatrix}\alpha\\\beta\end{bmatrix} = -\,\omega^2\begin{bmatrix}\alpha\\\beta\end{bmatrix}.$$

The treatment of these problems is to be deferred in favor of treating the coupled oscillator problem.

Just the single oscillator:

The motion of a mass about its equilibrium position while subject to a linear restoring force is well-studied. The model equation is $m\,a = m\,\ddot x = -k\,x$ where x is measured relative to the equilibrium position. The solutions are known to be:

$$x(t) = C\cos(\omega t) + D\sin(\omega t) = A\cos(\omega t + \varphi) = \mathrm{Re}\!\left[A\,e^{i(\omega t+\varphi)}\right] \qquad\text{where}\quad \omega^2 = \frac{k}{m}\,.$$

The mass hanging on a vertical spring reduces to the same equation, $m\,a = m\,\ddot y = -k\,y$, when the displacement y is measured relative to the equilibrium position for the mass. That is: the Hooke's Law model for the force due to a spring, $F_x = -k\,x$ for displacements measured relative to equilibrium, can be used for all springs. In this model k is the spring or Hooke's constant.

The fundamental or bare differential equation for the system is $\ddot x = -x$. It is immediately evident

that sine and cosine are solutions as the second derivative of each of those functions is minus itself.

As the equation is a linear second order equation with constant coefficients, it can also be solved by

substitution of the standard trial solution: x(t) = A est.

EXAMPLE: The simple harmonic oscillator provides an example of the procedure outlined above.

Consider the differential equation

$$\frac{d^2x}{dt^2} + \omega^2 x = 0\,.$$

It's a simple case, but let's run through it. Substituting $x(t) = A\,e^{st}$, we find

$$\left[s^2 + \omega^2\right]A\,e^{st} = 0 \qquad\text{hence the indicial equation}\qquad s^2 + \omega^2 = 0$$

which has solutions s = ±i ω. It follows that the DE has solutions


$$x(t) = A\,e^{i\omega t} + B\,e^{-i\omega t}$$

A Digression about DEs and Euler’s Identity

Independent Solutions: How many are there?

If everything is well-behaved, an nth order differential equation has n independent solutions. Recall

that our well-behaved functions can be expanded in a Taylor's Series:

$$f(t) = f(t_o) + \left.\frac{df}{dt}\right|_{t_o}(t-t_o) + \frac{1}{2!}\left.\frac{d^2f}{dt^2}\right|_{t_o}(t-t_o)^2 + \dots + \frac{1}{n!}\left.\frac{d^nf}{dt^n}\right|_{t_o}(t-t_o)^n + \dots$$

A well-behaved function is known if all its derivatives are known for a single value of its argument.

For a function which obeys an nth order differential equation, one need only specify the value of the

function and of its first n–1 derivatives. All the rest of the function's derivatives can be computed using the differential equation. For our example above we find

$$\frac{d^{n}x}{dt^{n}} = -\frac{1}{c_n}\left[c_{n-1}\frac{d^{n-1}x}{dt^{n-1}} + c_{n-2}\frac{d^{n-2}x}{dt^{n-2}} + \dots + c_0\,x\right]$$

and after differentiating the equation once, we find:

$$\frac{d^{n+1}x}{dt^{n+1}} = -\frac{1}{c_n}\left[c_{n-1}\frac{d^{n}x}{dt^{n}} + c_{n-2}\frac{d^{n-1}x}{dt^{n-1}} + \dots + c_0\,\frac{dx}{dt}\right]$$

and by repeating the process, we can find the derivative of x(t) of arbitrary order.

Hence we expect there to be n independent solutions corresponding to setting the value of the

function and its first n-1 derivatives.

EXAMPLE: Consider the equation

$$\frac{d^2x}{dt^2} + x = 0 \qquad\text{which yields}\qquad \frac{d^{\,n+2}x}{dt^{\,n+2}} = -\frac{d^{\,n}x}{dt^{\,n}}\,.$$

From this we see that alternate even (odd) derivatives are just the negative of one another. We can specify two solutions with the choices

$$x_1(t_o) = 1;\ \left.\frac{dx_1}{dt}\right|_{t_o} = 0 \qquad\text{and}\qquad x_2(t_o) = 0;\ \left.\frac{dx_2}{dt}\right|_{t_o} = 1$$

Substituting into the Taylor's Series template, we find:


$$x_1(t) = 1 - \frac{1}{2!}(t-t_o)^2 + \dots + \frac{(-1)^n}{(2n)!}(t-t_o)^{2n} + \dots = \cos(t-t_o)$$

and

$$x_2(t) = (t-t_o) - \frac{1}{3!}(t-t_o)^3 + \dots + \frac{(-1)^n}{(2n+1)!}(t-t_o)^{2n+1} + \dots = \sin(t-t_o)$$

It follows that the solution for any arbitrary set of initial values $x(t_o) = A;\ \left.\tfrac{dx}{dt}\right|_{t_o} = B$ is $x(t) = A\cos(t-t_o) + B\sin(t-t_o)$. That is, all solutions to the DE can be expressed in terms of the two independent solutions that we have found.

Euler's Identity: $e^{it} = \cos(t) + i\sin(t)$ can now be established if we first show that $e^{it}$ is a solution of the DE that satisfies $x(t_o) = 1;\ \left.\tfrac{dx}{dt}\right|_{t_o} = i$ for $t_o = 0$. The completion of the proof is left to the student as an exercise.

Back to the coupled oscillator problem:

[Figure: two blocks, of mass m and 2m, connected to the walls and to each other by springs with Hooke constants k, k and 2k; displacements x1 and x2 are measured from equilibrium.]

Consider a system with two masses, one of mass m and the other of mass 2m. The displacements x1

and x2 are measured relative to the equilibrium positions of the masses which are coupled as shown

by springs with Hooke constants k, k and 2 k.

Step One - Find the equations of Motion for the Masses: When mass one is displaced by x1 to

the right while mass two is held fixed, forces of – k x1 and – k x1 are applied to it by the two springs


in contact with it. When mass one is held fixed at equilibrium while mass two is displaced by x2 to

the right, the center spring exerts a force of + k x2 on mass one. The equation of motion for mass

one is: $m\,\ddot x_1 = -2k\,x_1 + k\,x_2$. Following the same procedure, the forces on mass two are identified when mass two is held fixed and mass one is displaced, and then added to those that arise when mass one is fixed and mass two is displaced: $2m\,\ddot x_2 = +k\,x_1 - 3k\,x_2$.

Equations of Motion: $m\,\ddot x_1 = -2k\,x_1 + k\,x_2$ and $2m\,\ddot x_2 = +k\,x_1 - 3k\,x_2$

Step Two – Identify the Mass and Spring Constant Matrices

Equation of motion summary:

$$\mathbf M\begin{bmatrix}\ddot x_1\\\ddot x_2\end{bmatrix} = -\,\mathbf K\begin{bmatrix}x_1\\x_2\end{bmatrix} \qquad\text{or}\qquad \mathbf M\,\ddot{\vec x} = -\,\mathbf K\,\vec x\,. \qquad\text{[EV.8]}$$

The signs are chosen to resemble the single oscillator equation. $\mathbf M$ is the mass matrix, and $\mathbf K$ is the spring constant matrix.

$$\begin{bmatrix}m&0\\0&2m\end{bmatrix}\begin{bmatrix}\ddot x_1\\\ddot x_2\end{bmatrix} = -\begin{bmatrix}2k&-k\\-k&3k\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix}\qquad\text{where}\qquad \mathbf M = \begin{bmatrix}m&0\\0&2m\end{bmatrix}\quad\text{and}\quad \mathbf K = \begin{bmatrix}2k&-k\\-k&3k\end{bmatrix}. \qquad\text{[EV.9]}$$

Note that the off-diagonal elements can be chosen such that the matrices are real symmetric.

Step Three – Assume Harmonic Solutions in which the masses execute motion at the same

frequency.

$$\begin{bmatrix}x_1(t)\\x_2(t)\end{bmatrix} = \begin{bmatrix}a\\b\end{bmatrix}e^{i(\omega t+\varphi)} \qquad\text{[EV.10]}\qquad\text{and}\qquad \begin{bmatrix}\ddot x_1(t)\\\ddot x_2(t)\end{bmatrix} = -\,\omega^2\begin{bmatrix}a\\b\end{bmatrix}e^{i(\omega t+\varphi)}$$

yielding

$$\begin{bmatrix}2k - m\,\omega^2 & -k\\ -k & 3k - 2m\,\omega^2\end{bmatrix}\begin{bmatrix}a\\b\end{bmatrix}e^{i(\omega t+\varphi)} = \begin{bmatrix}0\\0\end{bmatrix} \qquad\text{[EV.11]}$$

Step Four – Solve the modified eigenvalue problem.

$$\begin{vmatrix}2k - m\,\omega^2 & -k\\ -k & 3k - 2m\,\omega^2\end{vmatrix} = 0 \qquad\text{or}\qquad \left(2k - m\,\omega^2\right)\left(3k - 2m\,\omega^2\right) - k^2 = 0$$


As in so many cases, a little notation is helpful. The equation is rewritten in terms of the reference or scale value for the frequency squared: $\omega_0^2 = k/m$.

Dividing by m²: $\left(2\,\omega_0^2 - \omega^2\right)\left(3\,\omega_0^2 - 2\,\omega^2\right) - \omega_0^4 = 0$ or $2\,\omega^4 - 7\,\omega_0^2\,\omega^2 + 5\,\omega_0^4 = 0$.

Solving for ω²:

$$\omega^2 = \left[\frac{7 \pm \sqrt{7^2 - 4(2)(5)}}{2(2)}\right]\omega_0^2 = \left[\frac{7\pm3}{4}\right]\omega_0^2 = \left\{\,1,\ 2.5\,\right\}\omega_0^2$$

Step Five – Identify the patterns of motion

In turn, substitute each eigen-frequency into the eigenvalue equation to find the ratio of a to b.

First try ω2 = ωo2 = k/m.

$$\begin{bmatrix}2k - m\,\omega_0^2 & -k\\ -k & 3k - 2m\,\omega_0^2\end{bmatrix}\begin{bmatrix}a\\b\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix} \;\Rightarrow\; \begin{bmatrix}k & -k\\ -k & k\end{bmatrix}\begin{bmatrix}a\\b\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix}$$

As is the usual case, the two row equations redundantly state a single relation, which for this case is: a = b. The pattern of motion is:

$$\begin{bmatrix}x_1(t)\\x_2(t)\end{bmatrix} = A\begin{bmatrix}1\\1\end{bmatrix}e^{i(\omega_0 t+\varphi_1)}\,.$$

The prose description of this mode is that the two masses oscillate in phase (in the same direction at the same time) with equal amplitude.

Note that these prose descriptions of the motion are crucial. You need to describe the pattern of

the motion without reference to your choice of x1 and x2. Express your answers in coordinate

independent form! Complex motions are not the order of the day so the real part of the complex

exponential is taken.

$$\begin{bmatrix}x_1(t)\\x_2(t)\end{bmatrix} = \begin{bmatrix}A\cos(\omega_0 t + \varphi_1)\\A\cos(\omega_0 t + \varphi_1)\end{bmatrix} \qquad\text{[EV.12]}$$

This answer is expected as both masses would oscillate at ω0 if the center spring were removed. In

the motion pattern found, x1(t) – x2(t) = 0 so the length of the center spring and hence its tension

does not change from its value when the masses are at their equilibrium positions.

Next try $\omega^2 = \tfrac52\,\omega_0^2$.


$$\Rightarrow\;\begin{bmatrix}-\tfrac{k}{2} & -k\\ -k & -2k\end{bmatrix}\begin{bmatrix}a\\b\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix}$$

Both row equations state that $b = -\tfrac12\,a$. Hence $x_2(t) = -\tfrac12\,x_1(t)$. The prose description of this mode is that the two masses oscillate out of phase (in opposite directions at any instant) with the amplitude of the smaller mass's motion being twice that of the 2m mass.

$$\begin{bmatrix}x_1(t)\\x_2(t)\end{bmatrix} = B\begin{bmatrix}\cos\!\left(\sqrt{\tfrac52}\,\omega_0\,t + \varphi_2\right)\\[4pt] -\tfrac12\cos\!\left(\sqrt{\tfrac52}\,\omega_0\,t + \varphi_2\right)\end{bmatrix}$$

The frequency can be understood if one substitutes x2(t) = - 1/2 x1(t) into the equations of motion.

Mass one: $m\,\ddot x_1 = -2k\,x_1 + k\,x_2 = -2k\,x_1 + k\left(-\tfrac12\,x_1\right) = -\tfrac52\,k\,x_1$

Mass two: $2m\,\ddot x_2 = +k\,x_1 - 3k\,x_2 = k\left(-2\,x_2\right) - 3k\,x_2 = -5k\,x_2$

The frequency of oscillation for mass one is $\sqrt{\tfrac52\tfrac{k}{m}}$ and for mass two is $\sqrt{\tfrac{5k}{2m}}$. Both masses oscillate at $\omega^2 = \tfrac52\,\omega_0^2$ for this motion pattern.
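The same frequencies and mode shapes come out of a direct numerical attack on $\mathbf K\,\vec a = \omega^2\,\mathbf M\,\vec a$. The following is a sketch assuming NumPy only, with k = m = 1 so that ω₀² = 1; the generalized problem is reduced to an ordinary one by multiplying through by M⁻¹.

```python
import numpy as np

k = m = 1.0
M = np.array([[m, 0.0], [0.0, 2.0 * m]])      # mass matrix
K = np.array([[2.0 * k, -k], [-k, 3.0 * k]])  # spring constant matrix

w2, modes = np.linalg.eig(np.linalg.inv(M) @ K)   # omega^2 in units of k/m
print(np.sort(w2))                                # -> [1.  2.5]
for val, vec in zip(w2, modes.T):
    print(val, vec / vec[0])   # omega^2 = 1 gives b = a; omega^2 = 2.5 gives b = -a/2
```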

Step Six – Match Initial Conditions

$$\begin{bmatrix}x_1(t)\\x_2(t)\end{bmatrix} = A\begin{bmatrix}\cos(\omega_0 t+\varphi_1)\\\cos(\omega_0 t+\varphi_1)\end{bmatrix} + B\begin{bmatrix}\cos\!\left(\sqrt{\tfrac52}\,\omega_0 t+\varphi_2\right)\\[4pt] -\tfrac12\cos\!\left(\sqrt{\tfrac52}\,\omega_0 t+\varphi_2\right)\end{bmatrix}$$

so:

$$\begin{bmatrix}x_1(0)\\x_2(0)\end{bmatrix} = A\begin{bmatrix}\cos(\varphi_1)\\\cos(\varphi_1)\end{bmatrix} + B\begin{bmatrix}\cos(\varphi_2)\\-\tfrac12\cos(\varphi_2)\end{bmatrix}$$

and

$$\begin{bmatrix}\dot x_1(0)\\\dot x_2(0)\end{bmatrix} = -A\,\omega_0\begin{bmatrix}\sin(\varphi_1)\\\sin(\varphi_1)\end{bmatrix} - B\,\sqrt{\tfrac52}\,\omega_0\begin{bmatrix}\sin(\varphi_2)\\-\tfrac12\sin(\varphi_2)\end{bmatrix}$$

As there are two masses and as the initial position and initial velocity must be matched for each, the

initial conditions lead to four equations that establish the values of A, B, φ1 and φ2.


Summary:

EIGENVALUE EQUATION: $\mathbf M\,\vec v = \lambda\,\vec v = \lambda\,\mathbb 1\,\vec v$ or $[\mathbf M - \lambda\,\mathbb 1]\,\vec v = 0$

This equation has only trivial solutions unless the determinant vanishes.

DETERMINANT CONDITION: $|\mathbf M - \lambda\,\mathbb 1| = 0$.

An nth order equation results, and its n solutions are the eigenvalues. Substitute each eigenvalue into

the eigenvalue equation to set the ratios of its components. The normalization condition is used to

complete the specification of the eigenvector. Eigenvectors for distinct eigenvalues are always

orthogonal. Eigenvectors for degenerate eigenvalues can be (and should be) chosen to be

orthogonal.

Cramer’s Rule Summary:

Linear Equations: We introduced the concept of a matrix and hence the determinant by means of

consideration of a system of linear equations. Let us now return to that consideration as an example

of the use of determinants.

Consider the set of n inhomogeneous linear equations in n unknowns.

y1 = a11x1 + a12x2 + ............ + a1nxn

y2 = a21x1 + a22x2 + ............ + a2nxn

.

.

yn = an1x1 + an2x2 + ............ + annxn

This set of equations can be expressed in matrix form as

$$\begin{bmatrix}y_1\\y_2\\\vdots\\y_n\end{bmatrix} = \begin{bmatrix}a_{11}&a_{12}&\cdots&a_{1n}\\a_{21}&a_{22}&\cdots&a_{2n}\\\vdots&\vdots&\ddots&\vdots\\a_{n1}&\cdots&\cdots&a_{nn}\end{bmatrix}\begin{bmatrix}x_1\\x_2\\\vdots\\x_n\end{bmatrix}$$


Let $\mathbf A$ be the n × n matrix of the coefficients. This system of n equations in n unknowns always has a unique solution if the determinant of the coefficient matrix is not zero ($|\mathbf A| \neq 0$). If this is the case, we can solve for any unknown $x_n$ by replacing the nth column in $\mathbf A$ with the values $y_n$, obtain the determinant of this new matrix, and divide that result by the value of $|\mathbf A|$. As an example, the value for $x_2$ will be

$$x_2 = \frac{1}{|\mathbf A|}\begin{vmatrix}a_{11}&y_1&\cdots&a_{1n}\\a_{21}&y_2&\cdots&a_{2n}\\\vdots&\vdots&\ddots&\vdots\\a_{n1}&y_n&\cdots&a_{nn}\end{vmatrix}$$

and similarly for all other unknowns, $x_i$. This prescription is known as Cramer's rule. In the case where $|\mathbf A| = 0$, the set of inhomogeneous equations has no solution except in the case where the determinant of the numerator is also equal to zero, in which event solutions may exist, but they are not necessarily unique.

In the event that you have a set of homogeneous equations (that is, all of the $y_n = 0$), then if $|\mathbf A| \neq 0$ the only possible solution is the trivial solution where all $x_n = 0$. If, however, $|\mathbf A| = 0$, then a non-trivial solution does exist. The last case is of special interest for eigenvalue problems such as finding the normal frequencies and modes of a system of coupled oscillators.
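A direct implementation of this prescription is shown below (a sketch assuming NumPy; the function name cramer_solve is ours). It is not an efficient way to solve large systems, but it makes the rule concrete.

```python
import numpy as np

def cramer_solve(A, y):
    """Cramer's rule: x_i = det(A with column i replaced by y) / det(A)."""
    A = np.asarray(A, dtype=float)
    y = np.asarray(y, dtype=float)
    detA = np.linalg.det(A)
    if np.isclose(detA, 0.0):
        raise ValueError("singular coefficient matrix: no unique solution")
    x = np.empty(len(y))
    for i in range(len(y)):
        Ai = A.copy()
        Ai[:, i] = y                       # replace the i-th column with the y values
        x[i] = np.linalg.det(Ai) / detA
    return x

A = [[2.0, 1.0], [1.0, 3.0]]
y = [5.0, 10.0]
print(cramer_solve(A, y), np.linalg.solve(A, y))   # both give [1. 3.]
```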

More Sample Calculations:

The Classic Eigenvalue Problem

EIGENVALUE EQUATION: $\mathbf M\,\vec v = \lambda\,\vec v = \lambda\,\mathbb 1\,\vec v$ or $[\mathbf M - \lambda\,\mathbb 1]\,\vec v = 0$ [EV.13]

This equation has only trivial solutions unless the determinant vanishes.

DETERMINANT CONDITION: $|\mathbf M - \lambda\,\mathbb 1| = 0$, used to find the λ's. [EV.14]

1.) Eigenvalues: Find the eigenvalues for

$$\mathbf M = \begin{bmatrix}3&0&2\\0&5&0\\2&0&3\end{bmatrix}.$$

Set $|\mathbf M - \lambda\,\mathbb 1| = 0$ to find the eigenvalues λ.


$$\begin{vmatrix}3-\lambda&0&2\\0&5-\lambda&0\\2&0&3-\lambda\end{vmatrix} = 0\,. \qquad\text{Expanding by minors of the first row,}$$

$$(3-\lambda)(-1)^{1+1}\left[(5-\lambda)(3-\lambda)\right] + 0\,(-1)^{1+2}(0) + 2\,(-1)^{1+3}\left[(-2)(5-\lambda)\right] = 0$$

When you see a common factor, grab it!

$$(5-\lambda)\left[(3-\lambda)(3-\lambda) - 4\right] = (5-\lambda)\left[5 - 6\lambda + \lambda^2\right] = (5-\lambda)\left[(5-\lambda)(1-\lambda)\right]$$

The eigenvalues are {5, 5, 1}. Note that the quadratic formula could be applied to [5 – 6λ +λ2] = 0.
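The characteristic polynomial itself is a handy check on hand work. A sketch assuming NumPy: numpy.poly applied to a square matrix returns the coefficients of its characteristic polynomial, and numpy.roots finds the eigenvalues from them.

```python
import numpy as np

M = np.array([[3.0, 0.0, 2.0],
              [0.0, 5.0, 0.0],
              [2.0, 0.0, 3.0]])

coeffs = np.poly(M)            # lambda^3 - 11 lambda^2 + 35 lambda - 25
print(coeffs)
print(np.roots(coeffs))        # -> 5, 5, 1
print(np.linalg.eigvalsh(M))   # same eigenvalues directly: 1, 5, 5
```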

2.) Eigenvectors: Given that the eigenvalues are {11, 11, 2}, find the eigenvectors for

$$\mathbf M = \begin{bmatrix}8&-3&-3\\-3&8&-3\\-3&-3&8\end{bmatrix}.$$

$[\mathbf M - \lambda\,\mathbb 1]\,\vec v = 0$. Begin by choosing the distinct eigenvalue 2.

$$\begin{bmatrix}8-2&-3&-3\\-3&8-2&-3\\-3&-3&8-2\end{bmatrix}\begin{bmatrix}a\\b\\c\end{bmatrix} = \begin{bmatrix}0\\0\\0\end{bmatrix} \;\Rightarrow\; \begin{matrix}6a - 3b - 3c = 0\\ -3a + 6b - 3c = 0\\ -3a - 3b + 6c = 0\end{matrix} \;\Rightarrow\; \begin{matrix}2a = b + c\\ 2b = a + c\\ 2c = a + b\end{matrix}$$

Twice the first equation with 2c substituted from the third:

4 a = 2 b + a + b ⇒ a = b. Continuing, it follows that: a = b = c.

$$\hat e_{\lambda=2} = \frac{a\,\hat i + a\,\hat j + a\,\hat k}{\sqrt{a^2+a^2+a^2}} = \frac{\hat i + \hat j + \hat k}{\sqrt3}\qquad\text{Normalized}$$

One could observe that the three equations are symmetric in a, b and c so they must be equal.

The degenerate eigenvalue 11.

$$\begin{bmatrix}8-11&-3&-3\\-3&8-11&-3\\-3&-3&8-11\end{bmatrix}\begin{bmatrix}a\\b\\c\end{bmatrix} = \begin{bmatrix}0\\0\\0\end{bmatrix} \;\Rightarrow\; \begin{matrix}-3a-3b-3c=0\\ -3a-3b-3c=0\\ -3a-3b-3c=0\end{matrix} \;\Rightarrow\; a+b+c = 0$$

There is only one independent relation for a two-fold degenerate eigenvalue of a 3 x 3 matrix.

Choose any combination of {a, b, c} that satisfies a + b + c = 0. For example: {1, -1, 0}.


$$\hat e_{\lambda=11;A} = \frac{1\,\hat i - 1\,\hat j + 0\,\hat k}{\sqrt{1+1+0}} = \frac{\hat i - \hat j}{\sqrt2}\qquad\text{Normalized}$$

If the third eigenvalue were distinct, the eigenvalue equation $[\mathbf M - \lambda\,\mathbb 1]\,\vec v = 0$ would yield a third

eigenvector that would be mutually orthogonal. For the degenerate case, one uses the cross product

of the first two eigenvectors to generate a third mutually orthogonal vector with the extra benefit that

they are automatically in right-hand-rule order.

$$\hat e_{\lambda=11;B} = \left(\frac{\hat i+\hat j+\hat k}{\sqrt3}\right)\times\left(\frac{\hat i-\hat j}{\sqrt2}\right) = \frac{\hat i+\hat j-2\hat k}{\sqrt6}$$

so:

$$\left\{\hat e_{\lambda=2};\ \hat e_{\lambda=11;A};\ \hat e_{\lambda=11;B}\right\} = \left\{\frac{\hat i+\hat j+\hat k}{\sqrt3};\ \frac{\hat i-\hat j}{\sqrt2};\ \frac{\hat i+\hat j-2\hat k}{\sqrt6}\right\}$$

Whenever the cross product is used to generate a candidate to be an eigenvector, the result

should be substituted in the eigenvalue equation ($\mathbf M\,\vec v = \lambda\,\vec v$) to show that it satisfies the equation

with the expected eigenvalue.

Eigenvectors for distinct eigenvalues are orthogonal. Eigenvectors for degenerate (repeated)

eigenvalues can be chosen to be orthogonal. In the example above, the only requirement for an

eigenvector with eigenvalue 11 is that it be perpendicular to the one for eigenvalue 2. That is:

$$\hat e_{\lambda=2}\cdot\left(a\,\hat i + b\,\hat j + c\,\hat k\right) = \left(\frac{\hat i+\hat j+\hat k}{\sqrt3}\right)\cdot\left(a\,\hat i + b\,\hat j + c\,\hat k\right) = \frac{a+b+c}{\sqrt3} = 0$$

This requirement is identical to the one generated by the eigenvalue equation.

3.) Transforming matrices and Eigenvectors:

The matrix

$$\mathbf M = \begin{bmatrix}2&1&0\\1&2&0\\0&0&2\end{bmatrix}$$

has eigenvalues {1, 2, 3} and eigenvectors $\left\{\dfrac{\hat i-\hat j}{\sqrt2};\ \hat k;\ \dfrac{-\hat i-\hat j}{\sqrt2}\right\}$.

Verify that the eigenvectors are listed in right-hand rule order and that the order corresponds to the order in

which the eigenvalues are listed. Form the matrix $\mathbf S$ by using the components of each normalized eigenvector

as a row of the matrix and form the eigenvectors into column vectors.

$$\mathbf S = \begin{bmatrix} e_{\lambda_1 x} & e_{\lambda_1 y} & e_{\lambda_1 z}\\ e_{\lambda_2 x} & e_{\lambda_2 y} & e_{\lambda_2 z}\\ e_{\lambda_3 x} & e_{\lambda_3 y} & e_{\lambda_3 z}\end{bmatrix};\qquad \hat e_{\lambda_1} = \begin{bmatrix} e_{\lambda_1 x}\\ e_{\lambda_1 y}\\ e_{\lambda_1 z}\end{bmatrix};\ \dots;\ \dots$$


Compute: $\mathbf S\,\mathbf M\,\mathbf S^t$; $\mathbf S\,\hat e_{\lambda_1}$; $\mathbf S\,\hat e_{\lambda_2}$; $\mathbf S\,\hat e_{\lambda_3}$.

Next, list the eigenvectors in a distinct RHR order. Form $\mathbf S_2$ using this new order. Compute $\mathbf S_2\,\mathbf M\,\mathbf S_2^t$. Comment.

$$\mathbf S = \begin{bmatrix} \tfrac{1}{\sqrt2} & -\tfrac{1}{\sqrt2} & 0\\ 0 & 0 & 1\\ -\tfrac{1}{\sqrt2} & -\tfrac{1}{\sqrt2} & 0\end{bmatrix};\quad \hat e_{\lambda_1} = \begin{bmatrix}\tfrac{1}{\sqrt2}\\ -\tfrac{1}{\sqrt2}\\ 0\end{bmatrix};\quad \hat e_{\lambda_2} = \begin{bmatrix}0\\0\\1\end{bmatrix};\quad \hat e_{\lambda_3} = \begin{bmatrix}-\tfrac{1}{\sqrt2}\\ -\tfrac{1}{\sqrt2}\\ 0\end{bmatrix}$$

$$\mathbf S\,\hat e_{\lambda_1} = \begin{bmatrix} \tfrac{1}{\sqrt2} & -\tfrac{1}{\sqrt2} & 0\\ 0 & 0 & 1\\ -\tfrac{1}{\sqrt2} & -\tfrac{1}{\sqrt2} & 0\end{bmatrix}\begin{bmatrix}\tfrac{1}{\sqrt2}\\ -\tfrac{1}{\sqrt2}\\ 0\end{bmatrix} = \begin{bmatrix}1\\0\\0\end{bmatrix};\qquad \mathbf S\,\hat e_{\lambda_2} = \mathbf S\begin{bmatrix}0\\0\\1\end{bmatrix} = \begin{bmatrix}0\\1\\0\end{bmatrix};\qquad \mathbf S\,\hat e_{\lambda_3} = \mathbf S\begin{bmatrix}-\tfrac{1}{\sqrt2}\\ -\tfrac{1}{\sqrt2}\\ 0\end{bmatrix} = \begin{bmatrix}0\\0\\1\end{bmatrix}$$

$$\mathbf S\,\mathbf M\,\mathbf S^t = \begin{bmatrix} \tfrac{1}{\sqrt2} & -\tfrac{1}{\sqrt2} & 0\\ 0 & 0 & 1\\ -\tfrac{1}{\sqrt2} & -\tfrac{1}{\sqrt2} & 0\end{bmatrix}\left\{\begin{bmatrix}2&1&0\\1&2&0\\0&0&2\end{bmatrix}\begin{bmatrix} \tfrac{1}{\sqrt2} & 0 & -\tfrac{1}{\sqrt2}\\ -\tfrac{1}{\sqrt2} & 0 & -\tfrac{1}{\sqrt2}\\ 0 & 1 & 0\end{bmatrix}\right\}$$

$$\mathbf S\,\mathbf M\,\mathbf S^t = \begin{bmatrix} \tfrac{1}{\sqrt2} & -\tfrac{1}{\sqrt2} & 0\\ 0 & 0 & 1\\ -\tfrac{1}{\sqrt2} & -\tfrac{1}{\sqrt2} & 0\end{bmatrix}\begin{bmatrix} \tfrac{1}{\sqrt2} & 0 & -\tfrac{3}{\sqrt2}\\ -\tfrac{1}{\sqrt2} & 0 & -\tfrac{3}{\sqrt2}\\ 0 & 2 & 0\end{bmatrix} = \begin{bmatrix}1&0&0\\0&2&0\\0&0&3\end{bmatrix}$$

The transformed matrix is diagonal with the eigenvalues appearing along the diagonal in the same

order as the corresponding eigenvectors were listed as rows in $\mathbf S$.
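This transformation is easy to reproduce numerically. A sketch assuming NumPy: the rows of S are the normalized eigenvectors in the RHR order listed above, and S M Sᵗ comes out diagonal with the eigenvalues in that same order.

```python
import numpy as np

M = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])

s = 1.0 / np.sqrt(2.0)
S = np.array([[  s,  -s, 0.0],    # eigenvector for lambda = 1
              [0.0, 0.0, 1.0],    # eigenvector for lambda = 2
              [ -s,  -s, 0.0]])   # eigenvector for lambda = 3 (RHR order)

print(np.round(S @ M @ S.T, 12))          # -> diag(1, 2, 3)
print(np.allclose(S @ S.T, np.eye(3)))    # S is orthogonal: S^t = S^{-1}
```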


Choosing a new RHR order to give 2, the second or ‘2’-form of the transformation.

$$\mathbf S_2 = \begin{bmatrix} 0 & 0 & 1\\ -\tfrac{1}{\sqrt2} & -\tfrac{1}{\sqrt2} & 0\\ \tfrac{1}{\sqrt2} & -\tfrac{1}{\sqrt2} & 0\end{bmatrix}$$

where the eigenvectors are to be taken in the same order as they appear as rows in $\mathbf S_2$.

With this ordering of the '2'-form eigenvectors, the first eigenvector again transforms into the canonical form for the first coordinate direction, and so on.

$$ S_2\,e_{2;1}=\begin{bmatrix} 0 & 0 & -1\\ \tfrac{1}{\sqrt2} & \tfrac{1}{\sqrt2} & 0\\ \tfrac{1}{\sqrt2} & -\tfrac{1}{\sqrt2} & 0\end{bmatrix}\begin{bmatrix} 0\\ 0\\ -1\end{bmatrix}=\begin{bmatrix} 1\\ 0\\ 0\end{bmatrix}; $$
$$ S_2\,e_{2;2}=\begin{bmatrix} 0 & 0 & -1\\ \tfrac{1}{\sqrt2} & \tfrac{1}{\sqrt2} & 0\\ \tfrac{1}{\sqrt2} & -\tfrac{1}{\sqrt2} & 0\end{bmatrix}\begin{bmatrix} \tfrac{1}{\sqrt2}\\ \tfrac{1}{\sqrt2}\\ 0\end{bmatrix}=\begin{bmatrix} 0\\ 1\\ 0\end{bmatrix}; $$
$$ S_2\,e_{2;3}=\begin{bmatrix} 0 & 0 & -1\\ \tfrac{1}{\sqrt2} & \tfrac{1}{\sqrt2} & 0\\ \tfrac{1}{\sqrt2} & -\tfrac{1}{\sqrt2} & 0\end{bmatrix}\begin{bmatrix} \tfrac{1}{\sqrt2}\\ -\tfrac{1}{\sqrt2}\\ 0\end{bmatrix}=\begin{bmatrix} 0\\ 0\\ 1\end{bmatrix} $$
where $e_{2;1}$, $e_{2;2}$, $e_{2;3}$ are the eigenvectors (for eigenvalues 2, 3, 1) in the '2'-form order.

$$ S_2\,M\,S_2^{\,t}=\begin{bmatrix} 0 & 0 & -1\\ \tfrac{1}{\sqrt2} & \tfrac{1}{\sqrt2} & 0\\ \tfrac{1}{\sqrt2} & -\tfrac{1}{\sqrt2} & 0\end{bmatrix}\left\{\begin{bmatrix} 2 & 1 & 0\\ 1 & 2 & 0\\ 0 & 0 & 2\end{bmatrix}\begin{bmatrix} 0 & \tfrac{1}{\sqrt2} & \tfrac{1}{\sqrt2}\\ 0 & \tfrac{1}{\sqrt2} & -\tfrac{1}{\sqrt2}\\ -1 & 0 & 0\end{bmatrix}\right\} $$
$$ S_2\,M\,S_2^{\,t}=\begin{bmatrix} 0 & 0 & -1\\ \tfrac{1}{\sqrt2} & \tfrac{1}{\sqrt2} & 0\\ \tfrac{1}{\sqrt2} & -\tfrac{1}{\sqrt2} & 0\end{bmatrix}\begin{bmatrix} 0 & \tfrac{3}{\sqrt2} & \tfrac{1}{\sqrt2}\\ 0 & \tfrac{3}{\sqrt2} & -\tfrac{1}{\sqrt2}\\ -2 & 0 & 0\end{bmatrix} $$
$$ S_2\,M\,S_2^{\,t}=\begin{bmatrix} 2 & 0 & 0\\ 0 & 3 & 0\\ 0 & 0 & 1\end{bmatrix} $$

The '2'-form transformed matrix is diagonal with the eigenvalues appearing along the diagonal in the same order as the corresponding eigenvectors were listed as rows in the '2'-form $S_2$.

Tools of the Trade:


Eigenvector techniques:

If a matrix is real symmetric, it has real eigenvalues, and its eigenvectors with distinct eigenvalues

are automatically orthogonal.

For a distinct eigenvalue, the eigenvalue equation provides (n -1) constraints. Adding the

normalization condition, there are n conditions and the eigenvector is determined to within a factor

of +/- 1. Find the eigenvectors for all distinct eigenvalues first.

When you solve the (n - 1) independent equations for the n components, solve for (n - 1) of the unknowns in terms of the remaining one. For a 3 x 3 problem with components (a, b, c), try to solve for the others in terms of a; if that fails, try to solve for the others in terms of b or c. Set that remaining component equal to one, find the others, and then normalize the vector to have magnitude one.

When you solve the n - m equations for the n components, you must solve for n - m unknowns in

terms of the remaining m.

If the matrix has repeated eigenvalues, there will be fewer independent constraint equations, and a

little imagination is required. For example, one might have only one independent equation, say a + b + c = 0. Don't waste time! Just pick a case that works, like {a = 1, b = -1, c = 0}. After normalizing, you have chosen the eigenvector $(\hat i-\hat j)/\sqrt2$.

To generate another vector which is orthogonal, find the cross product of the eigenvector for the

distinct eigenvalue and the eigenvector that you just picked. Always check that the result satisfies

the constraint for the repeated eigenvalue. Ask: is a + b + c = 0?

Use $\hat i\times\hat j=\hat k;\ \hat j\times\hat k=\hat i;\ \hat k\times\hat i=\hat j;\ \hat j\times\hat i=-\hat k;\ \dots$ to compute the cross product of vectors with one or two non-zero components. Use a determinant-based method if the vectors have more than two non-zero components.

Classical physics usually involves real symmetric matrices.


If the matrix is not real symmetric, the eigenvalues need not be real, and the eigenvectors need not be orthogonal. They will, however, still be linearly independent.

For a distinct eigenvalue, the eigenvalue equation provides (n -1) constraints. Adding the

normalization condition, there are n conditions and the eigenvector is determined to within a factor

of +/- 1. Find the eigenvectors for all distinct eigenvalues first.

If the matrix has repeated eigenvalues, there will be fewer independent constraint equations and a

little imagination is required. For example, one might have only one independent equation, say a + b + c = 0. Don't waste time! Just pick a case that works, like {a = 1, b = -1, c = 0}. After normalizing, you have chosen the eigenvector $(\hat i-\hat j)/\sqrt2$.

The eigenvectors may not be orthogonal so the cross product is not helpful. But, the eigenvectors are

linearly independent. Just choose another solution to the constraint equation in which the various

components have different ratios. Working with a + b + c = 0 and having previously chosen {a = 1, b = -1, c = 0}, choose, say, {a = 2, b = -1, c = -1}. After normalizing, you have chosen $(2\hat i-\hat j-\hat k)/\sqrt6$. It is linearly independent, and it satisfies a + b + c = 0. That is all you need, and it is all that the mathematicians guarantee we can get.
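The claim that non-symmetric matrices give linearly independent but generally non-orthogonal eigenvectors is easy to see numerically. This sketch uses an arbitrary upper-triangular matrix, chosen only for illustration (it is not one of the matrices in this handout).

```python
import numpy as np

# An arbitrary non-symmetric matrix with distinct eigenvalues 2, 3, 4.
N = np.array([[2., 1., 0.],
              [0., 3., 1.],
              [0., 0., 4.]])

vals, vecs = np.linalg.eig(N)             # columns of vecs are the eigenvectors

print(vals)                               # distinct eigenvalues: 2, 3, 4
print(np.round(vecs.T @ vecs, 3))         # off-diagonal entries are non-zero: not orthogonal
print(abs(np.linalg.det(vecs)) > 1e-12)   # non-zero determinant: linearly independent
```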

General Note: If you have a constraint equation like 0·b = 0, it means that b is unrestricted; it can be anything. That means that the direction corresponding to b must be an eigenvector. By our convention, b multiplies $\hat j$, so 0·b = 0 means that $\hat j$ is an answer. Given the constraints 0·b = 0 and a + c = 0, choose all b for one answer and no b for the other answer. That is $\hat j$ and $(\hat i-\hat k)/\sqrt2$.

List the eigenvectors in Right-Hand Rule order for real symmetric matrices.

Checks on the characteristic equation and eigenvalues:

The characteristic equation for an eigenvalue problem for the matrix M has the form
$$ D(\lambda)=c_0+c_1\lambda+c_2\lambda^2+\dots+c_n\lambda^n, $$
which has n roots. Some points to check to partially validate your calculations: the constant $c_n=(-1)^n$; $c_0=|M|$; and the product of the eigenvalues $\lambda_1\lambda_2\lambda_3\dots\lambda_n=|M|$.
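These checks are easy to automate. The sketch below uses the Problem 1 matrix as an assumed stand-in for M; note that numpy.poly returns the coefficients of det(λ1 - M), which differs from D(λ) = det(M - λ1) by a factor (-1)ⁿ.

```python
import numpy as np

M = np.array([[ 8., -3., -3.],       # assumed stand-in matrix for the checks
              [-3.,  8., -3.],
              [-3., -3.,  8.]])
n = M.shape[0]

coeffs = (-1) ** n * np.poly(M)      # coefficients of D(lambda), highest power first
c_n, c_0 = coeffs[0], coeffs[-1]
eigenvalues = np.linalg.eigvals(M)

print(np.isclose(c_n, (-1) ** n))                           # c_n = (-1)^n
print(np.isclose(c_0, np.linalg.det(M)))                    # c_0 = |M|
print(np.isclose(np.prod(eigenvalues), np.linalg.det(M)))   # product of eigenvalues = |M|
```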


Eigenvalues for M and M⁻¹: λ → λ⁻¹

Suppose that M v = λ v. Then (M⁻¹ M) v = v = λ M⁻¹ v, or M⁻¹ v = λ⁻¹ v.

Basically, the eigenvalue equation was pre-multiplied by λ⁻¹ M⁻¹ to yield the eigenvalue equation for M⁻¹ with eigenvalue λ⁻¹. So if M has eigenvalues {λ₁, λ₂, λ₃, …, λₙ}, then M⁻¹ has eigenvalues {λ₁⁻¹, λ₂⁻¹, λ₃⁻¹, …, λₙ⁻¹} for the same set of eigenvectors.
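A one-line numerical confirmation, again using the Problem 1 matrix as an assumed stand-in for M:

```python
import numpy as np

M = np.array([[ 8., -3., -3.],
              [-3.,  8., -3.],
              [-3., -3.,  8.]])

# The eigenvalues of the inverse are the reciprocals of the eigenvalues of M.
print(np.sort(1.0 / np.linalg.eigvals(M)))
print(np.sort(np.linalg.eigvals(np.linalg.inv(M))))   # the two lists agree
```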

Coupled Linear Differential Equations with constant coefficients

One example is to be presented. Another example, the coupled oscillator problem, is the topic of a

separate section. Any good differential equations text will include a more general development of

the method.

Consider a charge +q moving perpendicular to a uniform magnetic field $\vec B = B\,\hat k$. The force is $\vec F = q\,\vec v\times\vec B = q\,B\,[\,v_y\,\hat i - v_x\,\hat j\,]$, leading to the equations (with Ω = qB/m):
$$ \frac{\partial v_x}{\partial t}=+\Omega\,v_y\;\Rightarrow\;\frac{\partial v_x}{\partial t}-\Omega\,v_y=0;\qquad \frac{\partial v_y}{\partial t}=-\Omega\,v_x\;\Rightarrow\;\frac{\partial v_y}{\partial t}+\Omega\,v_x=0 $$

A trial solution for the coupled equations is assumed to be:
$$ \begin{bmatrix} v_x(t)\\ v_y(t)\end{bmatrix}=\begin{bmatrix} a\\ b\end{bmatrix}e^{st} $$

The time derivative is equivalent to multiplication by s so the differential equations can be replaced

by coupled algebraic equations.

$$ s\begin{bmatrix} a\\ b\end{bmatrix}e^{st}=\begin{bmatrix} 0 & \Omega\\ -\Omega & 0\end{bmatrix}\begin{bmatrix} a\\ b\end{bmatrix}e^{st}\;\Rightarrow\;\begin{bmatrix} s & -\Omega\\ \Omega & s\end{bmatrix}\begin{bmatrix} a\\ b\end{bmatrix}=\begin{bmatrix} 0\\ 0\end{bmatrix} $$

Cramer’s rule stipulates that a and b can have non-zero values only if the determinant of the

coefficient matrix vanishes.


$$ \begin{vmatrix} s & -\Omega\\ \Omega & s\end{vmatrix}=0\;\Rightarrow\; s=\pm\,i\,\Omega $$
Substituting the first value, s = +iΩ, leads to:
$$ \begin{bmatrix} i\Omega & -\Omega\\ \Omega & i\Omega\end{bmatrix}\begin{bmatrix} a\\ b\end{bmatrix}=\begin{bmatrix} 0\\ 0\end{bmatrix}\;\Rightarrow\; i\,\Omega\,a-\Omega\,b=0\quad\text{or}\quad b=i\,a $$

Substituting, the velocities are found to be:
$$ v_x(t)=a\,e^{i\Omega t};\quad v_y(t)=i\,a\,e^{i\Omega t}\qquad\Rightarrow\qquad v_x(t)=A\,e^{i\delta}e^{i\Omega t};\quad v_y(t)=i\,A\,e^{i\delta}e^{i\Omega t} $$
As complex numbers are in play, a = A e^{iδ} where A and δ are real. As classical physics is not complex, the real parts are the physical results.
$$ v_x(t)=A\cos[\Omega t+\delta];\qquad v_y(t)=-A\sin[\Omega t+\delta] $$

The conclusion is that the particle circles clockwise, as viewed from the positive z direction, at the cyclotron (Larmor) frequency Ω for any non-zero initial speed. Integrating,
$$ x(t)=x_0+\tfrac{A}{\Omega}\sin[\Omega t+\delta];\qquad y(t)=y_0+\tfrac{A}{\Omega}\cos[\Omega t+\delta] $$
The radius of the path is A/Ω for an initial speed A. This restates the standard result v = Ω r, or r = m v/(q B).

Exercise: Develop the result that b = i a. Show that taking the real part leads to the result that $v_y(t)=-A\sin[\Omega t+\delta]$.
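The same conclusions can be reached numerically. This is a minimal sketch; Ω = 1.0 is an arbitrary stand-in value, not a number from the handout.

```python
import numpy as np

Omega = 1.0   # arbitrary value for the sketch

# d/dt [vx, vy] = C [vx, vy], with C read off from dvx/dt = Omega*vy, dvy/dt = -Omega*vx.
C = np.array([[0.0, Omega],
              [-Omega, 0.0]])

vals, vecs = np.linalg.eig(C)
print(vals)                               # +i*Omega and -i*Omega, matching s = +/- i*Omega

# For s = +i*Omega the null vector of the coefficient matrix has b = i a.
idx = np.argmin(np.abs(vals - 1j * Omega))
a, b = vecs[:, idx]
print(b / a)                              # approximately 1j
```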

Discovery Exercise: Eigenvectors

Consider the matrix M and the set of vectors below. For each vector, compute M v_i, then sketch v_i on a set of axes and sketch M v_i on a set of axes next to it. Draw a box around the two sketches made for each pair {v_i, M v_i}. The entities to be studied are:


$$ M=\begin{bmatrix} 0 & 1\\ 1 & 0\end{bmatrix}\quad\text{(rows and columns refer to the x and y components)} $$
$$ \begin{aligned} \vec v_1 &= 1\,\hat i\\ \vec v_2 &= 1\,\hat i+1\,\hat j\\ \vec v_3 &= 1\,\hat i+2\,\hat j\\ \vec v_4 &= 0\,\hat i+1.5\,\hat j\\ \vec v_5 &= -1\,\hat i+2\,\hat j\\ \vec v_6 &= -1\,\hat i+1\,\hat j\\ \vec v_7 &= 1\,\hat i-1\,\hat j\end{aligned} $$

In general, what is the action of M on the vectors $\vec v_i$?

The action of M on a few of the vectors is special. What is special about the action?

Compare and contrast the action of the matrix on $\vec v_2$ and $\vec v_7$. How is $\vec v_2$ oriented relative to $\vec v_7$? Are $\vec v_2$ and $\vec v_7$ independent vectors?

Compare and contrast the action of the matrix on $\vec v_6$ and $\vec v_7$. How is $\vec v_6$ oriented relative to $\vec v_7$? Are $\vec v_6$ and $\vec v_7$ independent vectors?

Propose a vector with the same relation to $\vec v_2$ that $\vec v_6$ has to $\vec v_7$. What is the action of M on that vector?
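If you want to generate the numbers before sketching, a short script does it. This is only a convenience sketch; the vector list mirrors v₁ … v₇ above.

```python
import numpy as np

M = np.array([[0., 1.],
              [1., 0.]])

vectors = {"v1": [ 1.0,  0.0],
           "v2": [ 1.0,  1.0],
           "v3": [ 1.0,  2.0],
           "v4": [ 0.0,  1.5],
           "v5": [-1.0,  2.0],
           "v6": [-1.0,  1.0],
           "v7": [ 1.0, -1.0]}

for name, v in vectors.items():
    v = np.array(v)
    print(name, v, "->", M @ v)   # compare each v_i with M v_i before sketching them
```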

Worksheet: Moment of Inertia Tensor

The angular momentum of a collection of point particles is defined as:

$$ \vec L=\sum_{\alpha\,\in\,\text{particles}}\vec r_\alpha\times m_\alpha\vec v_\alpha $$


In the case of a rigid body rotating about an axis through a reference point with angular velocity $\vec\omega$, the velocity of each mass point is $\vec\omega\times\vec r$, where $\vec r$ is measured from the reference point. The

contribution to the angular momentum due to a single particle is to be developed. It should be

summed over all particles to find the total angular momentum of the rigid body. As our goal is to use

matrix methods, the relation is represented as:

$$ \begin{bmatrix} L_x\\ L_y\\ L_z\end{bmatrix}=\begin{bmatrix} L_1\\ L_2\\ L_3\end{bmatrix}=\begin{bmatrix} I_{11} & I_{12} & I_{13}\\ I_{21} & I_{22} & I_{23}\\ I_{31} & I_{32} & I_{33}\end{bmatrix}\begin{bmatrix} \omega_1\\ \omega_2\\ \omega_3\end{bmatrix} $$
Using summation notation for the matrix multiplication,
$$ L_i=\sum_{j=1}^{3}I_{ij}\,\omega_j\equiv\sum_{k=1}^{3}I_{ik}\,\omega_k $$

?: Why can the symbol for the summation index be changed at will ?

Our goal is to find the form of Iij. The summation representations of the dot product and cross

product are to be reviewed to prepare us for that task.

First,
$$ \vec A\cdot\vec B=\sum_{i=1}^{3}\sum_{j=1}^{3}A_i\,B_j\,\delta_{ij}. $$

?: DOT PRODUCT: Compute the sum over j. Display the result.

It is said that, in a sum, the Kronecker delta picks out the terms for which its indices are

equal. Do you agree with this claim ?

Complete the sum over i explicitly. Interpret the result using the mapping {1,2,3} → {x,y,z}.

Does $\displaystyle\sum_{i=1}^{3}\sum_{j=1}^{3}A_i\,B_j\,\delta_{ij}$ yield a correct evaluation of $\vec A\cdot\vec B$?

The cross product can be represented as
$$ \vec A\times\vec B=\sum_{i=1}^{3}\sum_{j=1}^{3}\sum_{k=1}^{3}\varepsilon_{ijk}\,\hat e_i\,A_j\,B_k $$
where the unit vectors $\{\hat e_1,\hat e_2,\hat e_3\}$ map to $\{\hat i,\hat j,\hat k\}$. In particular,
$$ \left[\vec A\times\vec B\right]_i=\sum_{j=1}^{3}\sum_{k=1}^{3}\varepsilon_{ijk}\,A_j\,B_k. $$

?: CROSS PRODUCT: Let i = 2 (or y) and compute the sums over j and k. Does the result properly represent $\left[\vec A\times\vec B\right]_2=\left[\vec A\times\vec B\right]_y$?

?: Use the summation representation of the cross product to represent the kth component of $\vec\omega\times\vec r$. Choose summation indices s and t; the form should be $\left[\vec\omega\times\vec r\right]_k=\sum_s\sum_t\varepsilon_{kst}\,\omega_s\,r_t$.

?: Use the summation representation of the cross product to represent the ith component of $\vec r\times(\vec\omega\times\vec r)$. Choose summation indices j and k.

Assembling all the pieces, the expression for the ith component of the angular momentum of a single particle in the rigid body becomes
$$ L_i=m\left[\vec r\times(\vec\omega\times\vec r)\right]_i=m\sum_{j=1}^{3}\sum_{k=1}^{3}\varepsilon_{ijk}\,r_j\left(\vec\omega\times\vec r\right)_k=m\sum_{j=1}^{3}\sum_{k=1}^{3}\varepsilon_{ijk}\,r_j\left(\sum_{s=1}^{3}\sum_{t=1}^{3}\varepsilon_{kst}\,\omega_s\,r_t\right)=m\sum_{j,k,s,t}\varepsilon_{ijk}\,\varepsilon_{kst}\,r_j\,\omega_s\,r_t $$

?: Comparing with $\displaystyle L_i=\sum_{j=1}^{3}I_{ij}\,\omega_j\equiv\sum_{s=1}^{3}I_{is}\,\omega_s$, move the $\omega_s$ factor to the right and the sum over s to the left. The form should be $\displaystyle\sum_{s=1}^{3}\{\dots\}\,\omega_s=\sum_{s=1}^{3}I_{is}\,\omega_s$. Identify the expression for $I_{is}$.

This is a rather formidable equation, so a rather formidable tool is to be used to crush it. The identity
$$ \sum_{k=1}^{3}\varepsilon_{kij}\,\varepsilon_{kst}=\delta_{is}\,\delta_{jt}-\delta_{it}\,\delta_{js} $$
is selected.


?: Compare the values of $\displaystyle\sum_{k=1}^{3}\varepsilon_{kij}\,\varepsilon_{kst}$ and $\displaystyle\sum_{k=1}^{3}\varepsilon_{ijk}\,\varepsilon_{kst}$. Explain your claim.

?: Compute $\displaystyle\sum_{k=1}^{3}\varepsilon_{kij}\,\varepsilon_{kst}$ and $\delta_{is}\,\delta_{jt}-\delta_{it}\,\delta_{js}$ for {i,j,s,t} = {1,3,1,3} and {1,3,3,1}.

?: Substitute the Kronecker delta form for the sum over products on permutation symbols

into your expression for Iis. Separate the result into two independent sums over t and j.

Display the results.

?: For the positive term, compute the sum over t and then the sum over j. Recall that {1,2,3} represents {x,y,z}. Interpret the result. Is your result equivalent to $m\,r^2\,\delta_{is}$?

?: For the term with the negative sign, compute the sum over t and then the sum over j. Is your result equivalent to $m\,r_i\,r_s$? ({r₁, r₂, r₃} → {x, y, z}.)

Your final form should be equivalent to: $I_{is}=m\left[\,r^2\,\delta_{is}-r_i\,r_s\,\right]$.

?: Give the expression for $I_{xx}$. Is your result equivalent to $m\left[\,y^2+z^2\,\right]$?

The result represents the moment of inertia of the rigid body about the x axis. The sophomore-physics form of that was
$$ I_x\rightarrow I_{xx}=\sum_{\text{particles}}m_\alpha\,r_{\perp\alpha}^{\,2}=\sum_{\text{particles}}m_\alpha\left[\,y_\alpha^2+z_\alpha^2\,\right] $$
where $r_\perp$ is the (perpendicular) distance from the particle to the x-axis.


?: Give the expression for Ixy. The off-diagonal elements are called skew moments.

?: Complete the moment of inertia tensor.

$$ I=m\begin{bmatrix} r^2-x\,x & (\quad) & (\quad)\\ (\quad) & (\quad) & (\quad)\\ (\quad) & (\quad) & (\quad)\end{bmatrix}=m\begin{bmatrix} (y^2+z^2) & -x\,y & (\quad)\\ (\quad) & (\quad) & (\quad)\\ (\quad) & (\quad) & (\quad)\end{bmatrix} $$
$$ I=\begin{bmatrix} I_{xx} & I_{xy} & I_{xz}\\ I_{yx} & I_{yy} & I_{yz}\\ I_{zx} & I_{zy} & I_{zz}\end{bmatrix}=\sum_\alpha m_\alpha\begin{bmatrix} y_\alpha^2+z_\alpha^2 & -x_\alpha y_\alpha & -x_\alpha z_\alpha\\ -y_\alpha x_\alpha & x_\alpha^2+z_\alpha^2 & -y_\alpha z_\alpha\\ -z_\alpha x_\alpha & -z_\alpha y_\alpha & x_\alpha^2+y_\alpha^2\end{bmatrix} $$
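The boxed result translates directly into a short routine. This is a sketch rather than part of the worksheet; the two point masses used as test data are the ones from Problem 13, with m = 1 as an arbitrary unit.

```python
import numpy as np

def inertia_tensor(masses, positions):
    """I_is = sum over particles of m * (r^2 delta_is - r_i r_s), about the origin."""
    I = np.zeros((3, 3))
    for m, r in zip(masses, positions):
        r = np.asarray(r, dtype=float)
        I += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))
    return I

masses = [1.0, 1.0]                               # m = 1 for each mass (arbitrary units)
positions = [(3.0, 4.0, 0.0), (-3.0, -4.0, 0.0)]

print(inertia_tensor(masses, positions))          # 2*[[16, -12, 0], [-12, 9, 0], [0, 0, 25]]
```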

Problems:

1. For the matrix $M=\begin{bmatrix} 8 & -3 & -3\\ -3 & 8 & -3\\ -3 & -3 & 8\end{bmatrix}$, find the eigenvalues and the directions of the eigenvectors for each eigenvalue. Pick orthogonal eigenvectors. [Hint: One eigenvalue is 11.]

* As 11 is a root, divide (λ - 11) into the polynomial found by expanding the determinant. The quotient will be a quadratic allowing one to find the other two eigenvalues.

Sample Calculation: Given the eigenvalue polynomial λ³ - 8λ + 8 = 0 and the information that one of the eigenvalues is 2, reduce the polynomial to the quadratic that can be solved to find the other two eigenvalues. It will turn out that the other two eigenvalues are -1 ± √5.

Divide by the 'known' factor (λ - 2). If it is a factor, the remainder will be zero, and the quotient will be the desired quadratic for the remaining eigenvalues.

$$ \lambda^3-8\lambda+8=(\lambda-2)\,\lambda^2+2\lambda^2-8\lambda+8 $$
$$ 2\lambda^2-8\lambda+8=(\lambda-2)\,2\lambda-4\lambda+8 $$
$$ -4\lambda+8=(\lambda-2)(-4)+0 $$
The division terminates with zero remainder, so
$$ \lambda^3-8\lambda+8=(\lambda-2)\left(\lambda^2+2\lambda-4\right). $$
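The same division can be checked with numpy.polydiv (coefficients are listed from the highest power down); this is just a quick numerical cross-check of the algebra above.

```python
import numpy as np

# (lambda**3 - 8*lambda + 8) divided by (lambda - 2)
quotient, remainder = np.polydiv([1, 0, -8, 8], [1, -2])

print(quotient)             # [ 1.  2. -4.]  ->  lambda**2 + 2*lambda - 4
print(remainder)            # zero remainder: (lambda - 2) really is a factor
print(np.roots(quotient))   # the remaining eigenvalues, -1 +/- sqrt(5)
```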


2. Two masses are hung in tandem from springs as illustrated. Find the equations of motion for the two masses and apply the coupled-oscillator scheme to find the eigen-frequencies and the eigen (normal) modes (motion patterns). As always, the springs are massless and faithfully obey Hooke's law. Measure the displacements of the two masses relative to their equilibrium positions.

[Figure: springs and masses in series, labeled from the top: spring constant 2k, mass 2m, spring constant k, mass m.]

3. Two masses are hung in tandem from springs as illustrated. Assume that up is positive and that y₁ and y₂ are measured from the equilibrium positions. The spring constants of the upper and lower springs are m ω₁² and m ω₂². Find the equations of motion for the two masses with a vertical harmonic force applied to the upper mass. Assume:
$$ \begin{bmatrix} y_1(t)\\ y_2(t)\end{bmatrix}=\begin{bmatrix} a\\ b\end{bmatrix}\cos(\omega t+\phi_1)\quad\text{where }\phi_1=0. $$

Arrange the equations of motion in matrix form and find the

inhomogeneous solution to the equations by applying Cramer’s

rule. Do not use the eigen-mode process for homogeneous

equations. (Leave the driving force term on the right-hand side!)

Give expressions for y1(t) and y2(t). What happens for ω = ω2?

What happens when ω equals an eigen-frequency for the system

with no external driving force? Note that the eigen-mode

determinant is the denominator in Cramer's rule!

[Figure: the upper spring (constant m ω₁²) supports mass m₁ = m; the lower spring (constant m ω₂²) connects it to mass m₂ = m; the driving force F₀ cos(ωt) is applied to the upper mass.]


4. The characteristic equation for the eigenvalue problem with the matrix M has the form
$$ D(\lambda)=c_0+c_1\lambda+c_2\lambda^2+\dots+c_n\lambda^n, $$
which has n roots.

a.) Show that cₙ = (-1)ⁿ.

b.) Show that c₀ = |M|.

c.) Assuming that the matrix can be diagonalized, argue that λ₁λ₂λ₃…λₙ, the product of the eigenvalues, is the determinant of the diagonalized form. For real symmetric matrices, the determinant is the same in the original and diagonalized forms, so the product of the eigenvalues is |M|.

Result (a) can be established by using expansion by minors and noting that the leading terms in λ come from the factors along the diagonal. Part (b) follows from the definition of the determinant, given that you ignore the -λ wherever it appears as you compute the constant term. The final part follows from expansion by minors. For a real symmetric matrix, the diagonalization is accomplished using an orthogonal transformation, usually a rotation (see the Linear Transformations handout). Orthogonal transformations are represented by matrices whose transpose is their inverse. If the transformation is represented by the matrix S, then the transformed matrix is S M S⁻¹. The point is that the determinant is invariant under such transformations:

M_diag is S M S⁻¹, so |M_diag| = |S M S⁻¹| = |S| |M| |S⁻¹| = |M|, as |S| and |S⁻¹| are inverses (their product is 1).

5. For the matrix $M=\begin{bmatrix} 2 & 0 & 1\\ 0 & 3 & 0\\ 1 & 0 & 2\end{bmatrix}$, find the eigenvalues and the directions of the eigenvectors for each eigenvalue. This matrix is real symmetric, so eigenvectors can be found that are mutually orthogonal.

6. For the matrix $M=\begin{bmatrix} 4 & 1 & -1\\ 2 & 5 & -2\\ 1 & 1 & 2\end{bmatrix}$, find the eigenvalues and the directions of the eigenvectors for each eigenvalue. [Hint: One eigenvalue is 3.] Pick linearly independent eigenvectors, as M is not real symmetric so orthogonal eigenvectors may not exist.

a.) Find the eigenvalues for M.


b.) Find an eigenvector for λ = 3. (Normal practice is to attack the distinct value first.)

c.) Find an eigenvector for a distinct eigenvalue.

d.) Compute the cross product of your results for the previous two parts. Does that result

satisfy an eigenvalue equation?

e.) Find an eigenvector for the remaining eigenvalue that is linearly independent of the

previously identified eigenvectors. Verify that it satisfies the eigenvalue equation.

Linearly independent means that no sum of constants times any pair of the eigenvectors is equal to the third.

That third eigenvector always contains direction information not included in the first two. The eigenvectors

need not be orthogonal if the matrix is not real symmetric so one can not use the cross product of the first two

eigenvectors found to find the third. The eigenvectors and eigenvalues can be complex rather than real.

However, they are all real for this particular problem. See problem 9.

7. Identify a type of matrix that can represent physical properties in classical physics. Supposing that

the eigenvalues found for that matrix are distinct, what can be said about the eigenvectors? In the

case that the eigenvalues are not distinct, what should you do?

8.) Consider the eigenvalue equation M v_i = λ_i v_i. Multiply the equation by M⁻¹ and conclude that v_i is also an eigenvector of M⁻¹, with eigenvalue λ_i⁻¹.

9.) For the matrix $M=\begin{bmatrix} 4 & 1 & -1\\ 2 & 5 & -2\\ 1 & 1 & 2\end{bmatrix}$, find the eigenvalues and the directions of the eigenvectors for each eigenvalue. [Hint: One eigenvalue is 3.] Pick linearly independent eigenvectors, as M is not real symmetric so orthogonal eigenvectors may not exist.

a.) Find the eigenvalues for M.

b.) Find an eigenvector for λ = 3.

c.) Find an eigenvector for a distinct eigenvalue.

d.) Compute the cross product of your results for the previous two parts. Does that result

satisfy an eigenvalue equation?

e.) Find an eigenvector for the remaining eigenvalue that is linearly independent of the

previously identified eigenvectors. Verify that it satisfies the eigenvalue equation.


Linearly independent means that no sum of constants times any pair of the eigenvectors is equal to the third.

That third eigenvector always contains direction information not included in the first two. The eigenvectors

need not be orthogonal if the matrix is not real symmetric so one can not use the cross product of the first two

eigenvectors found to find the third. The eigenvectors and eigenvalues can be complex rather than real. Note

that the inner product for complex vectors is:
$$ \vec u\cdot\vec v=\begin{bmatrix} u_x^{*} & u_y^{*} & u_z^{*}\end{bmatrix}\begin{bmatrix} v_x\\ v_y\\ v_z\end{bmatrix}=u_x^{*}v_x+u_y^{*}v_y+u_z^{*}v_z\;\Rightarrow\;|\vec v\,|^2=\vec v\cdot\vec v=v_x^{*}v_x+v_y^{*}v_y+v_z^{*}v_z $$

10. For the matrices $A=\begin{bmatrix} 4 & 0 & 2\\ 0 & 4 & 0\\ 2 & 0 & 4\end{bmatrix}$ and $B=\begin{bmatrix} 3 & 0 & 2\\ 0 & 5 & 0\\ 2 & 0 & 3\end{bmatrix}$, find the eigenvalues and the directions of the eigenvectors for each eigenvalue. These matrices are real symmetric, so eigenvectors can be found that are mutually orthogonal. Include a detailed description of each step in the process used to identify the eigenvectors. What differences are encountered when you find the eigenvectors for B?

11.) Find the eigenvalues and eigenvectors for the matrix: $M=\begin{bmatrix} 8 & 0 & 0\\ 0 & 5 & 1\\ 0 & 1 & 5\end{bmatrix}$

12.) Find the eigenvalues and eigenvectors for the matrix: $M=\begin{bmatrix} 8 & 0 & 0\\ 0 & 5 & 3\\ 0 & 3 & 5\end{bmatrix}$

13.) Angular momentum example 1: two masses, m @ (3, 4, 0) and m @ (-3, -4, 0).
$$ \begin{bmatrix} L_x\\ L_y\\ L_z\end{bmatrix}=2m\begin{bmatrix} 5^2-(3)(3) & -(3)(4) & -(3)(0)\\ -(4)(3) & 5^2-(4)(4) & -(4)(0)\\ -(0)(3) & -(0)(4) & 5^2-(0)(0)\end{bmatrix}\begin{bmatrix} \omega_x\\ \omega_y\\ \omega_z\end{bmatrix}=2m\begin{bmatrix} 16 & -12 & 0\\ -12 & 9 & 0\\ 0 & 0 & 25\end{bmatrix}\begin{bmatrix} \omega_x\\ \omega_y\\ \omega_z\end{bmatrix} $$

Given that the angular velocity is ω_x in the x direction, compute the angular momentum using the matrix equation above. Compare it to the value that would arise from using $\vec L=\vec r\times m\vec v$ directly. If the rotation axis is fixed by bearings so that the masses rotate as a rigid body at ω_x in the x direction, the tip of the angular momentum vector traces out a circle of radius 24 m ω_x. What is dL/dt? What is


the acceleration of each mass? What force is required to cause the acceleration of each mass? What

is the torque applied by the bearings to maintain the fixed rotation direction? Does τ = dL/dt?
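A sketch of the comparison asked for in this problem, with m = 1 and ω_x = 1 as arbitrary stand-in values:

```python
import numpy as np

m, omega_x = 1.0, 1.0
omega = np.array([omega_x, 0.0, 0.0])

# Matrix route: L = I omega, using the inertia tensor written out above.
I = 2 * m * np.array([[ 16., -12.,  0.],
                      [-12.,   9.,  0.],
                      [  0.,   0., 25.]])
L_matrix = I @ omega

# Direct route: L = sum over the two masses of r x (m v), with v = omega x r.
positions = [np.array([3., 4., 0.]), np.array([-3., -4., 0.])]
L_direct = sum(np.cross(r, m * np.cross(omega, r)) for r in positions)

print(L_matrix, L_direct)   # both give (32, -24, 0)*m*omega_x; the transverse piece has magnitude 24 m omega_x
```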

References:

1. Jerry B. Marion and Stephen T. Thornton, Classical Dynamics of Particles and Systems, 4th Edition, Saunders (1995), chapter twelve.
2. S. Lipschutz and M. Lipson, Linear Algebra, 3rd Edition, chapters 2 & 8, McGraw-Hill (2001).
3. K. F. Riley, M. P. Hobson and S. J. Bence, Mathematical Methods for Physics and Engineering, 3rd Edition, Cambridge University Press, Cambridge UK (2006).
4. Mary L. Boas, Mathematical Methods in the Physical Sciences, 2nd Edition, chapter 3, John Wiley & Sons (1983).
5. V. Barger and M. Olsson, Classical Mechanics: A Modern Perspective, McGraw-Hill (1995).
6. Discovery exercises are inspired by or directly borrowed from the Physics Education Group at Oregon State headed by Corinne A. Manogue.
7. Donald A. McQuarrie, Mathematical Methods for Scientists and Engineers, University Science Books, Sausalito, CA (2003).
