
1. Show that the decomposition of a tensor into the symmetric and anti-symmetric parts is unique.

𝐒 = ½(𝐒 + 𝐒ᵀ) + ½(𝐒 − 𝐒ᵀ) = sym 𝐒 + skw 𝐒

Suppose there is another decomposition into symmetric and antisymmetric parts similar to the above, so that ∃𝐁 such that 𝐒 = ½(𝐁 + 𝐁ᵀ) + ½(𝐁 − 𝐁ᵀ). Now take the inner product of the two expressions for the tensor 𝐒 with a symmetric tensor 𝐃:

𝐒:𝐃 = (sym 𝐒 + skw 𝐒):𝐃 = (sym 𝐒):𝐃
    = (½(𝐁 + 𝐁ᵀ) + ½(𝐁 − 𝐁ᵀ)):𝐃 = (½(𝐁 + 𝐁ᵀ)):𝐃

since the inner products of symmetric and anti-symmetric tensors vanish. The above equation leads to

(sym 𝐒):𝐃 − (½(𝐁 + 𝐁ᵀ)):𝐃 = (sym 𝐒 − ½(𝐁 + 𝐁ᵀ)):𝐃 = 0

Since 𝐃 is arbitrary, it is clear that

sym 𝐒 − ½(𝐁 + 𝐁ᵀ) = 𝟎

or sym 𝐒 ≡ ½(𝐒 + 𝐒ᵀ) = ½(𝐁 + 𝐁ᵀ), showing uniqueness of the symmetric part. The uniqueness of the anti-symmetric part is arrived at by taking the inner product with an antisymmetric tensor at the beginning.

2. The trilinear mapping [∙, ∙, ∙] : V × V × V → ℝ from the product set V × V × V to real space is defined by [𝐚, 𝐛, 𝐜] ≡ 𝐚 ⋅ (𝐛 × 𝐜) = (𝐚 × 𝐛) ⋅ 𝐜. Show that [𝐚, 𝐛, 𝐜] = [𝐛, 𝐜, 𝐚] = [𝐜, 𝐚, 𝐛] = −[𝐛, 𝐚, 𝐜] = −[𝐜, 𝐛, 𝐚] = −[𝐚, 𝐜, 𝐛].

In component form,

[𝐚, 𝐛, 𝐜] = ϵ_ijk a_i b_j c_k

Cyclic permutations of this, upon remembering that (i, j, k) are dummy indices, yield

ϵ_jki b_j c_k a_i = [𝐛, 𝐜, 𝐚] = ϵ_ijk b_i c_j a_k
ϵ_kij c_k a_i b_j = [𝐜, 𝐚, 𝐛] = ϵ_ijk c_i a_j b_k

The other results follow from antisymmetric arrangements and the nature of ϵ_ijk.

3. Given that [𝐚, 𝐛, 𝐜] ≡ 𝐚 ⋅ (𝐛 × 𝐜) = (𝐚 × 𝐛) ⋅ 𝐜, show that this product vanishes if the vectors (𝐚, 𝐛, 𝐜) are linearly dependent.

Suppose it is possible to find scalars α and β such that 𝐚 = α𝐛 + β𝐜. It therefore means that

[𝐚, 𝐛, 𝐜] = ϵ_ijk a_i b_j c_k = ϵ_ijk (αb_i + βc_i) b_j c_k
          = α ϵ_ijk b_i b_j c_k + β ϵ_ijk c_i b_j c_k
          = 0

Note that b_i b_j c_k is symmetric in i and j, c_i b_j c_k is symmetric in i and k, and ϵ_ijk is antisymmetric in i, j and k. Each term is therefore the product of a symmetric and an antisymmetric object and must vanish.

4. Show that the product of a symmetric and an antisymmetric object vanishes.

Consider the product sum ϵ_ijk b_i b_j c_k, in which b_i b_j is symmetric in i and j and ϵ_ijk is antisymmetric in i, j and k. Only the shared symmetric and antisymmetric indices i, j are relevant here:

ϵ_ijk b_i b_j c_k = −ϵ_jik b_i b_j c_k = −ϵ_jik b_j b_i c_k = −ϵ_ijk b_i b_j c_k = 0

The first equality is on account of the antisymmetry of ϵ_ijk in i, j; the second on the symmetry of b_i b_j in i, j; the third on the fact that i, j are dummy indices. The sum vanishes because a non-trivial scalar quantity cannot be the negative of itself.

This is a general rule and its observation makes a number of steps easy to see transparently. Watch out for it.

5. Show that the product

𝐀𝐀 = ( a₁₁ a₁₂ a₁₃ ) ( a₁₁ a₁₂ a₁₃ )
     ( a₂₁ a₂₂ a₂₃ ) ( a₂₁ a₂₂ a₂₃ ) = 𝐁
     ( a₃₁ a₃₂ a₃₃ ) ( a₃₁ a₃₂ a₃₃ )

can be written in indicial notation as a_ij a_jk = b_ik.

To show this, apply the summation convention and see that,

for i = 1, k = 1:  a₁₁a₁₁ + a₁₂a₂₁ + a₁₃a₃₁ = b₁₁
for i = 1, k = 2:  a₁₁a₁₂ + a₁₂a₂₂ + a₁₃a₃₂ = b₁₂
for i = 1, k = 3:  a₁₁a₁₃ + a₁₂a₂₃ + a₁₃a₃₃ = b₁₃
for i = 2, k = 1:  a₂₁a₁₁ + a₂₂a₂₁ + a₂₃a₃₁ = b₂₁
for i = 2, k = 2:  a₂₁a₁₂ + a₂₂a₂₂ + a₂₃a₃₂ = b₂₂
for i = 2, k = 3:  a₂₁a₁₃ + a₂₂a₂₃ + a₂₃a₃₃ = b₂₃
for i = 3, k = 1:  a₃₁a₁₁ + a₃₂a₂₁ + a₃₃a₃₁ = b₃₁
for i = 3, k = 2:  a₃₁a₁₂ + a₃₂a₂₂ + a₃₃a₃₂ = b₃₂
for i = 3, k = 3:  a₃₁a₁₃ + a₃₂a₂₃ + a₃₃a₃₃ = b₃₃

It is necessary to go through this manual process to gain the experience of seeing this transparently in future. Similarly, 𝐀𝐀ᵀ = 𝐁 can be written in indicial notation as a_ij a_kj = b_ik, which again becomes clear after a manual expansion invoking the summation convention.
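As a quick numerical sketch (assuming NumPy is available), the two indicial expressions above can be checked with einsum, which applies the summation convention directly to a random 3×3 array:

import numpy as np

# Random 3x3 array standing in for the components a_ij.
A = np.random.rand(3, 3)

# a_ij a_jk = b_ik : sum over the repeated index j, i.e. the matrix product A A.
B1 = np.einsum('ij,jk->ik', A, A)
assert np.allclose(B1, A @ A)

# a_ij a_kj = b_ik : corresponds to the product A A^T.
B2 = np.einsum('ij,kj->ik', A, A)
assert np.allclose(B2, A @ A.T)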

6. Given that vectors 𝐮, 𝐯 and 𝐰 are linearly independent, and that the tensor 𝐓 is not singular, show that the set 𝐓𝐮, 𝐓𝐯 and 𝐓𝐰 are also linearly independent.

If 𝐓 is not singular, then its determinant exists and is not equal to zero. Therefore,

det 𝐓 = [𝐓𝐮, 𝐓𝐯, 𝐓𝐰] / [𝐮, 𝐯, 𝐰] ≠ 0

Consequently, [𝐓𝐮, 𝐓𝐯, 𝐓𝐰] ≠ 0, which shows that 𝐓𝐮, 𝐓𝐯 and 𝐓𝐰 are also linearly independent.

7. Given that vectors 𝐮, 𝐯 and 𝐰 are linearly independent, and that the tensor 𝐓 is not singular, show that the set 𝐓𝐮, 𝐓𝐯 and 𝐓𝐰 are also linearly independent.

Suppose, for contradiction, that 𝐓𝐮, 𝐓𝐯 and 𝐓𝐰 are linearly dependent. Then ∃ real α, β and γ, not all zero, such that α𝐓𝐮 + β𝐓𝐯 + γ𝐓𝐰 = 𝐨. But 𝐮, 𝐯 and 𝐰 are linearly independent, so for such α, β and γ, α𝐮 + β𝐯 + γ𝐰 ≠ 𝐨. However,

α𝐓𝐮 + β𝐓𝐯 + γ𝐓𝐰 = 𝐓(α𝐮 + β𝐯 + γ𝐰) = 𝐨,

and since 𝐓 is not singular this means that α𝐮 + β𝐯 + γ𝐰 = 𝐨. This states that a set of linearly independent vectors is linearly dependent! That is a contradiction!

8. Given that vectors 𝐮 and 𝐯 are linearly independent, and that the tensor 𝐓 is not singular, show that the set 𝐓𝐮 and 𝐓𝐯 are also linearly independent.

Suppose, for contradiction, that 𝐓𝐮 and 𝐓𝐯 are linearly dependent. Then ∃ real α and β, not both zero, such that α𝐓𝐮 + β𝐓𝐯 = 𝐨. But 𝐮 and 𝐯 are linearly independent, so for such α and β, α𝐮 + β𝐯 ≠ 𝐨. However,

α𝐓𝐮 + β𝐓𝐯 = 𝐓(α𝐮 + β𝐯) = 𝐨,

and since 𝐓 is not singular this means that α𝐮 + β𝐯 = 𝐨. This states that a set of linearly independent vectors is linearly dependent! That is a contradiction!

9. Given that vectors 𝐮 and 𝐯 are linearly independent, and that the tensor 𝐓 is not singular, show that the set 𝐓𝐮 and 𝐓𝐯 are also linearly independent.

If 𝐓 is not singular, then its determinant exists and is not equal to zero. Therefore the cofactor 𝐓ᶜ = 𝐓⁻ᵀ det 𝐓 ≠ 𝟎 also exists and is non-zero. The linear independence of 𝐮 and 𝐯 means that the parallelogram formed by them has a non-trivial area, 𝐮 × 𝐯 ≠ 𝟎. Now, the parallelogram formed by 𝐓𝐮 and 𝐓𝐯 also has non-zero area because

𝐓𝐮 × 𝐓𝐯 = 𝐓ᶜ(𝐮 × 𝐯) ≠ 𝟎

Hence 𝐓𝐮 and 𝐓𝐯 are also linearly independent.

10. Given three vectors 𝐮, 𝐯 and 𝐰, show that (𝐰 × 𝐮) × (𝐰 × 𝐯) = (𝐰 ⊗ 𝐰)(𝐮 × 𝐯)

and that for the unit vector 𝐞, [𝐞, 𝐞 × 𝐮, 𝐞 × 𝐯] = [𝐞, 𝐮, 𝐯]

(𝐰 × 𝐮) × (𝐰 × 𝐯) = [(𝐰 × 𝐮) ⋅ 𝐯]𝐰 − [(𝐰 × 𝐮) ⋅ 𝐰]𝐯

= [(𝐰 × 𝐮) ⋅ 𝐯]𝐰

= [(𝐮 × 𝐯) ⋅ 𝐰]𝐰

= (𝐰 ⊗ 𝐰)(𝐮 × 𝐯)

Consequently,

[𝐞, 𝐞 × 𝐮, 𝐞 × 𝐯] = 𝐞 ⋅ [(𝐞 × 𝐮) × (𝐞 × 𝐯)]

= 𝐞 ⋅ [(𝐞 ⊗ 𝐞)(𝐮 × 𝐯)]

= (𝐮 × 𝐯) ⋅ (𝐞 ⊗ 𝐞)𝐞

= (𝐮 × 𝐯) ⋅ 𝐞 = [𝐞, 𝐮, 𝐯]

making use of the symmetry of (𝐞 ⊗ 𝐞).

11. Given three vectors 𝐮, 𝐯 and 𝐰, using the result (𝐰 × 𝐮) × (𝐰 × 𝐯) = (𝐰 ⊗ 𝐰)(𝐮 × 𝐯), show that [(𝐮 × 𝐯), (𝐯 × 𝐰), (𝐰 × 𝐮)] = [𝐮, 𝐯, 𝐰]².

From the given result,

[(𝐮 × 𝐯), (𝐯 × 𝐰), (𝐰 × 𝐮)] = −(𝐮 × 𝐯) ⋅ [(𝐰 × 𝐯) × (𝐰 × 𝐮)]
                              = −(𝐮 × 𝐯) ⋅ (𝐰 ⊗ 𝐰)(𝐯 × 𝐮)
                              = (𝐮 × 𝐯) ⋅ [(𝐰 ⋅ (𝐮 × 𝐯))𝐰]
                              = [𝐮, 𝐯, 𝐰]²
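The identities of problems 10 and 11 can be spot-checked numerically. A minimal sketch, assuming NumPy, with random vectors and the scalar triple product written as a ⋅ (b × c):

import numpy as np

def triple(a, b, c):
    # Scalar triple product [a, b, c] = a . (b x c).
    return np.dot(a, np.cross(b, c))

u, v, w = np.random.rand(3), np.random.rand(3), np.random.rand(3)

# (w x u) x (w x v) equals outer(w, w) @ (u x v)
lhs = np.cross(np.cross(w, u), np.cross(w, v))
rhs = np.outer(w, w) @ np.cross(u, v)
assert np.allclose(lhs, rhs)

# [(u x v), (v x w), (w x u)] = [u, v, w]^2
assert np.isclose(triple(np.cross(u, v), np.cross(v, w), np.cross(w, u)),
                  triple(u, v, w) ** 2)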

12. For any tensor 𝑨, define (Sym(𝑨))𝑖𝑗

=𝟏

𝟐(𝑨𝑖𝑗 + 𝑨𝑗𝑖). Show that Sym(𝑨T𝑺𝑨) =

𝑨TSym(𝑺)𝑨

Clearly, Sym(𝑺) =𝟏

𝟐(𝑺 + 𝑺T)

It also follows that,

𝑨TSym(𝑺)𝑨 = 1

2𝑨T(𝑺 + 𝑺T)𝑨

=1

2(𝑨T𝑺𝑨 + 𝑨T𝑺T𝑨)

But Sym(𝑨T𝑺𝑨) =𝟏

𝟐(𝑨T𝑺𝑨 + 𝑨T𝑺T𝑨).

Hence Sym(𝑨T𝑺𝑨) = 𝑨TSym(𝑺)𝑨

13. Given that the trace of a dyad 𝐚 ⊗ 𝐛, tr(𝐚 ⊗ 𝐛) = 𝐚 ⋅ 𝐛. By expressing the tensors

𝐓 and 𝐒 in component form, show that tr(𝐒𝐓) = tr(𝐓𝐒) = tr(𝐒T𝐓T) = tr(𝐓T𝐒T)

In component form, 𝐒 = 𝑆𝑖𝑗𝐠𝑖 ⊗ 𝐠𝑗, 𝐓 = 𝑇𝛼𝛽𝐠𝛼 ⊗ 𝐠𝛽.

𝐒𝐓 = 𝑆𝑖𝑗𝑇𝛼𝛽(𝐠𝑖 ⊗ 𝐠𝑗)(𝐠𝛼 ⊗ 𝐠𝛽)

= 𝑆𝑖𝑗𝑇𝛼𝛽(𝐠𝑖 ⊗ 𝐠𝑗)(𝐠𝛼 ⊗ 𝐠𝛽)

= 𝑆𝑖𝑗𝑇𝛼𝛽𝐠𝑖 ⊗ 𝐠𝛽𝑔𝑗𝛼

tr(𝐒𝐓) = 𝑆𝑖𝑗𝑇𝛼𝛽𝐠𝑖 ⋅ 𝐠𝛽𝑔𝑗𝛼 = 𝑆𝑖𝑗𝑇𝛼𝛽𝑔𝑖𝛽𝑔𝑗𝛼

= 𝑆𝑖𝑗𝑇𝑗𝑖

𝐒T𝐓T = 𝑆𝑖𝑗𝑇𝛼𝛽(𝐠𝑗 ⊗ 𝐠𝒊)(𝐠𝛽 ⊗ 𝐠𝛼) = 𝑆𝑖𝑗𝑇𝛼𝛽𝐠𝑗 ⊗ 𝐠𝛼𝑔𝑖𝛽

so that

tr(𝐒T𝐓T) = 𝑆𝑖𝑗𝑇𝛼𝛽𝐠𝑗 ⋅ 𝐠𝛼𝑔𝑖𝛽 = 𝑆𝑖𝑗𝑇𝛼𝛽𝑔𝑗𝛼𝑔𝑖𝛽

= 𝑆𝑖𝑗𝑇𝑗𝑖

Similar computations lead to the conclusion that

tr(𝐒𝐓) = tr(𝐓𝐒) = tr(𝐒T𝐓T) = tr(𝐓T𝐒T)

14. Given an arbitrary tensor 𝐓 a skew tensor 𝐖 and a symmetric tensor 𝐒. Show that

𝐒: 𝐓 = 𝐒: 𝐓T = 𝐒: sym𝐓

𝐖:𝐓 = −𝐖:𝐓T = 𝐒: skw𝐓

𝐒:𝐖 = 0

Note that 𝐓 = sym 𝐓 + skw𝐓 , and 𝐓T = sym 𝐓 − skw𝐓 . Also note that the inner

product between a skew and a symmetric tensor vanishes. Consequently,

𝐒: 𝐓 = 𝐒: (sym 𝐓 + skw𝐓)

= 𝐒: sym 𝐓 + 𝐒: skw𝐓

= 𝐒: sym 𝐓

= 𝐒: 𝐓T

𝐖:𝐓 = 𝐖: (sym𝐓 + skw𝐓)

= 𝐖: sym 𝐓 + 𝐖: skw𝐓

= 𝐖: skw 𝐓

= −𝐒: 𝐓T

To show that 𝐒:𝐖 = 0. Observe that, in component form, 𝐒 = 𝑆𝑖𝑗𝐠𝑖 ⊗ 𝐠𝑗, 𝐖 =

𝑊𝛼𝛽𝐠𝛼 ⊗ 𝐠𝛽 .

𝐒T𝐖 = 𝑆𝑖𝑗𝑊𝛼𝛽(𝐠𝑗 ⊗ 𝐠𝑖)(𝐠𝛼 ⊗ 𝐠𝛽)

= 𝑆𝑖𝑗𝑊𝛼𝛽(𝐠𝑗 ⊗ 𝐠𝑖)(𝐠𝛼 ⊗ 𝐠𝛽)

= 𝑆𝑖𝑗𝑊𝛼𝛽𝐠𝑗 ⊗ 𝐠𝛽𝑔𝑖𝛼

𝐒:𝐖 = tr 𝐒T𝐖

= 𝑆𝑖𝑗𝑊𝛼𝛽𝐠𝑗 ⋅ 𝐠𝛽𝑔𝑖𝛼 = 𝑆𝑖𝑗𝑊𝛼𝛽𝑔𝑗𝛽𝑔𝑖𝛼

= 𝑆𝑖𝑗𝑊𝑖𝑗 = −𝑆𝑖𝑗𝑊

𝑗𝑖

= −𝑆𝑗𝑖𝑊𝑗𝑖 = −𝑆𝑖𝑗𝑊

𝑖𝑗

Which vanishes because it is equal to the negative of itself.

15. Show that if for every skew tensor 𝐖, the inner product 𝐒:𝐖 = 0, it must follow

that 𝐒 is symmetric.

𝐒T𝐖 = 𝑆𝑖𝑗𝑊𝛼𝛽(𝐠𝑗 ⊗ 𝐠𝑖)(𝐠𝛼 ⊗ 𝐠𝛽)

= 𝑆𝑖𝑗𝑊𝛼𝛽(𝐠𝑗 ⊗ 𝐠𝑖)(𝐠𝛼 ⊗ 𝐠𝛽)

= 𝑆𝑖𝑗𝑊𝛼𝛽𝐠𝑗 ⊗ 𝐠𝛽𝑔𝑖𝛼

𝐒:𝐖 = tr 𝐒T𝐖

= 𝑆𝑖𝑗𝑊𝛼𝛽𝐠𝑗 ⋅ 𝐠𝛽𝑔𝑖𝛼 = 𝑆𝑖𝑗𝑊𝛼𝛽𝑔𝑗𝛽𝑔𝑖𝛼

= 𝑆𝑖𝑗𝑊𝑖𝑗 = −𝑆𝑖𝑗𝑊

𝑗𝑖 = 0 = 𝑆𝑖𝑗𝑊𝑗𝑖

Since all inner products 𝐒:𝐖 = 0. But,

𝑆𝑖𝑗𝑊𝑖𝑗 = 𝑆𝑖𝑗𝑊

𝑗𝑖 = 𝑆𝑗𝑖𝑊𝑖𝑗

So that (𝑆𝑖𝑗 − 𝑆𝑗𝑖)𝑊𝑖𝑗 = 0 ⇒ 𝑆𝑖𝑗 = 𝑆𝑗𝑖 Hence, 𝐒 is symmetric.

16. Show that if for every symmetric tensor 𝐒, the inner product 𝐒:𝐖 = 0, it must

follow that 𝐖 is anti-symmetric.

𝐒T𝐖 = 𝑆𝑖𝑗𝑊𝛼𝛽(𝐠𝑗 ⊗ 𝐠𝑖)(𝐠𝛼 ⊗ 𝐠𝛽)

= 𝑆𝑖𝑗𝑊𝛼𝛽(𝐠𝑗 ⊗ 𝐠𝑖)(𝐠𝛼 ⊗ 𝐠𝛽)

= 𝑆𝑖𝑗𝑊𝛼𝛽𝐠𝑗 ⊗ 𝐠𝛽𝑔𝑖𝛼

𝐒:𝐖 = tr 𝐒T𝐖

= 𝑆𝑖𝑗𝑊𝛼𝛽𝐠𝑗 ⋅ 𝐠𝛽𝑔𝑖𝛼 = 𝑆𝑖𝑗𝑊𝛼𝛽𝑔𝑗𝛽𝑔𝑖𝛼

= 𝑆𝑖𝑗𝑊𝑖𝑗 = 𝑆𝑗𝑖𝑊

𝑖𝑗 = 0 = −𝑆𝑗𝑖𝑊𝑗𝑖

Since all inner products 𝐒:𝐖 = 0. But,

𝑆𝑖𝑗𝑊𝑖𝑗 = 𝑆𝑗𝑖𝑊

𝑗𝑖 = −𝑆𝑗𝑖𝑊𝑗𝑖

So that 𝑆𝑖𝑗(𝑊𝑖𝑗 + 𝑊𝑗𝑖) = 0 ⇒ 𝑊𝑖𝑗 = −𝑊𝑗𝑖 Hence, 𝐖 is anti-symmetric

17. If we can find 𝛼, 𝛽 and 𝛾 unit tensor, 𝐈 = 𝛼𝐚 ⊗ 𝐛 + 𝛽𝐛 ⊗ 𝐜 + 𝛾𝐜 ⊗ 𝐚, show that

unless 𝐛 ⋅ 𝐚 = 𝛼−1 then 𝐚, 𝐛 and 𝐜 cannot be linearly independent.

In the expression,

𝐈 = 𝛼𝐚 ⊗ 𝐛 + 𝛽𝐛 ⊗ 𝐜 + 𝛾𝐜 ⊗ 𝐚

Take a product on the right with vector 𝐚,

𝐈𝐚 = 𝛼𝐚(𝐛 ⋅ 𝐚) + 𝛽𝐛(𝐜 ⋅ 𝐚) + 𝛾𝐜(𝐚 ⋅ 𝐚)

⇒ 𝐚(1 − 𝛼(𝐛 ⋅ 𝐚)) = 𝛽𝐛(𝐜 ⋅ 𝐚) + 𝛾𝐜(𝐚 ⋅ 𝐚)

𝐚 = 𝛽𝐛(𝐜 ⋅ 𝐚)

1 − 𝛼(𝐛 ⋅ 𝐚)+

𝛾𝐜(𝐚 ⋅ 𝐚)

(1 − 𝛼(𝐛 ⋅ 𝐚))

So that this expression enables us to write 𝐚 in terms of 𝐛 and 𝐜.

18. Divergence of a product: Given that 𝜑 is a scalar field and 𝐯 a vector field, show

that div(𝜑𝐯) = (grad𝜑) ⋅ 𝐯 + 𝜑 div 𝐯

grad(𝜑𝐯) = (𝜑𝑣𝑖),𝑗 𝐠𝑖 ⊗ 𝐠𝑗

= 𝜑,𝑗 𝑣𝑖𝐠𝑖 ⊗ 𝐠𝑗 + 𝜑𝑣𝑖 ,𝑗 𝐠𝑖 ⊗ 𝐠𝑗

= 𝐯 ⊗ (grad 𝜑) + 𝜑 grad 𝐯

Now, div(𝜑𝐯) = tr(grad(𝜑𝐯)). Taking the trace of the above, we have:

div(𝜑𝐯) = 𝐯 ⋅ (grad 𝜑) + 𝜑 div 𝐯

19. Show that grad(𝐮 · 𝐯) = (grad 𝐮)T𝐯 + (grad 𝐯)T𝐮

𝐮 · 𝐯 = 𝑢𝑖𝑣𝑖 is a scalar sum of components.

grad(𝐮 · 𝐯) = (𝑢𝑖𝑣𝑖),𝑗 𝐠𝑗

= 𝑢𝑖 ,𝑗 𝑣𝑖𝐠𝑗 + 𝑢𝑖𝑣𝑖 ,𝑗 𝐠𝑗

Now grad 𝐮 = 𝑢𝑖 ,𝑗 𝐠𝑖 ⊗ 𝐠𝑗 swapping the bases, we have that,

(grad 𝐮)T = 𝑢𝑖 ,𝑗 (𝐠𝑗 ⊗ 𝐠𝑖).

Writing 𝐯 = 𝑣𝑘𝐠𝑘 , we have that, (grad 𝐮)T𝐯 = 𝑢𝑖 ,𝑗 𝑣𝑘(𝐠𝑗 ⊗ 𝐠𝑖)𝐠𝑘 = 𝑢𝑖 ,𝑗 𝑣𝑘𝐠𝑗𝛿𝑖

𝑘 =

𝑢𝑖 ,𝑗 𝑣𝑖𝐠𝑗

It is easy to similarly show that 𝑢𝑖𝑣𝑖 ,𝑗 𝐠𝑗 = (grad 𝐯)T𝐮. Clearly,

grad(𝐮 · 𝐯) = (𝑢𝑖𝑣𝑖),𝑗 𝐠𝑗 = 𝑢𝑖 ,𝑗 𝑣𝑖𝐠𝑗 + 𝑢𝑖𝑣𝑖 ,𝑗 𝐠𝑗

= (grad 𝐮)T𝐯 + (grad 𝐯)T𝐮

As required.

20. Show that grad(𝐮 × 𝐯) = (𝐮 ×)grad 𝐯 − (𝐯 ×)grad 𝐮

𝐮 × 𝐯 = 𝜖𝑖𝑗𝑘𝑢𝑗𝑣𝑘𝐠𝑖

Recall that the gradient of this vector is the tensor,

grad(𝐮 × 𝐯) = (𝜖𝑖𝑗𝑘𝑢𝑗𝑣𝑘),𝑙 𝐠𝑖 ⊗ 𝐠𝑙

= 𝜖𝑖𝑗𝑘𝑢𝑗 ,𝑙 𝑣𝑘𝐠𝑖 ⊗ 𝐠𝑙 + 𝜖𝑖𝑗𝑘𝑢𝑗𝑣𝑘 ,𝑙 𝐠𝑖 ⊗ 𝐠𝑙

= −𝜖𝑖𝑘𝑗𝑢𝑗 ,𝑙 𝑣𝑘𝐠𝑖 ⊗ 𝐠𝑙 + 𝜖𝑖𝑗𝑘𝑢𝑗𝑣𝑘 ,𝑙 𝐠𝑖 ⊗ 𝐠𝑙

= − (𝐯 ×)grad 𝐮 + (𝐮 ×)grad 𝐯

21. Show that div (𝐮 × 𝐯) = 𝐯 ⋅ curl 𝐮 − 𝐮 ⋅ curl 𝐯

We already have the expression for grad(𝐮 × 𝐯) above; remember that

div (𝐮 × 𝐯) = tr[grad(𝐮 × 𝐯)]

= −𝜖𝑖𝑘𝑗𝑢𝑗 ,𝑙 𝑣𝑘𝐠𝑖 ⋅ 𝐠𝑙 + 𝜖𝑖𝑗𝑘𝑢𝑗𝑣𝑘 ,𝑙 𝐠𝑖 ⋅ 𝐠𝑙

= −𝜖𝑖𝑘𝑗𝑢𝑗 ,𝑙 𝑣𝑘𝛿𝑖𝑙 + 𝜖𝑖𝑗𝑘𝑢𝑗𝑣𝑘 ,𝑙 𝛿𝑖

𝑙

= −𝜖𝑖𝑘𝑗𝑢𝑗 ,𝑖 𝑣𝑘 + 𝜖𝑖𝑗𝑘𝑢𝑗𝑣𝑘 ,𝑖 = 𝐯 ⋅ curl 𝐮 − 𝐮 ⋅ curl 𝐯

22. Given a scalar point function 𝜙 and a vector field 𝐯, show that curl (𝜙𝐯) =

𝜙 curl 𝐯 + (grad 𝜙) × 𝐯.

curl (𝜙𝐯) = 𝜖𝑖𝑗𝑘(𝜙𝑣𝑘),𝑗 𝐠𝑖

= 𝜖𝑖𝑗𝑘(𝜙,𝑗 𝑣𝑘 + 𝜙𝑣𝑘 ,𝑗 )𝐠𝑖

= 𝜖𝑖𝑗𝑘𝜙,𝑗 𝑣𝑘𝐠𝑖 + 𝜖𝑖𝑗𝑘𝜙𝑣𝑘 ,𝑗 𝐠𝑖

= (grad𝜙) × 𝐯 + 𝜙 curl 𝐯

23. Show that div (𝐮 ⊗ 𝐯) = (div 𝐯)𝐮 + (grad 𝐮)𝐯

𝐮 ⊗ 𝐯 is the tensor, 𝑢𝑖𝑣𝑗𝐠𝑖 ⊗ 𝐠𝑗 . The gradient of this is the third order tensor,

grad (𝐮 ⊗ 𝐯) = (𝑢𝑖𝑣𝑗),𝑘 𝐠𝑖 ⊗ 𝐠𝑗 ⊗ 𝐠𝑘

And by divergence, we mean the contraction of the last basis vector:

div (𝐮 ⊗ 𝐯) = (𝑢𝑖𝑣𝑗)𝑘(𝐠𝑖 ⊗ 𝐠𝑗)𝐠

𝑘

= (𝑢𝑖𝑣𝑗)𝑘𝐠𝑖𝛿𝑗

𝑘 = (𝑢𝑖𝑣𝑗)𝑗𝐠𝑖

= 𝑢𝑖 ,𝑗 𝑣𝑗𝐠𝑖 + 𝑢𝑖𝑣𝑗 ,𝑗 𝐠𝑖

= (grad 𝐮)𝐯 + (div 𝐯)𝐮

24. For a scalar field 𝜙 and a tensor field 𝐓 show that grad (𝜙𝐓) = 𝜙grad 𝐓 + 𝐓 ⊗

grad𝜙. Also show that div (𝜙𝐓) = 𝜙 div 𝐓 + 𝐓grad𝜙

grad(𝜙𝐓) = (𝜙𝑇𝑖𝑗),𝑘 𝐠𝑖 ⊗ 𝐠𝑗 ⊗ 𝐠𝑘

= (𝜙,𝑘 𝑇𝑖𝑗 + 𝜙𝑇𝑖𝑗 ,𝑘 )𝐠𝑖 ⊗ 𝐠𝑗 ⊗ 𝐠𝑘

= 𝐓 ⊗ grad𝜙 + 𝜙grad 𝐓

Furthermore, we can contract the last two bases and obtain,

div(𝜙𝐓) = (𝜙,𝑘 𝑇𝑖𝑗 + 𝜙𝑇𝑖𝑗 ,𝑘 )𝐠𝑖 ⊗ 𝐠𝑗 ⋅ 𝐠𝑘

= (𝜙,𝑘 𝑇𝑖𝑗 + 𝜙𝑇𝑖𝑗 ,𝑘 )𝐠𝑖𝛿𝑗𝑘

= 𝑇𝑖𝑘 𝜙,𝑘 𝐠𝑖 + 𝜙𝑇𝑖𝑘 ,𝑘 𝐠𝑖

= 𝐓grad𝜙 + 𝜙 div 𝐓

25. For two arbitrary tensors 𝐒 and 𝐓, show that grad(𝐒𝐓) = (grad 𝐒T)T𝐓 + 𝐒grad𝐓

grad(𝐒𝐓) = (𝑆𝑖𝑗𝑇𝑗𝑘),𝛼 𝐠𝑖 ⊗ 𝐠𝑘 ⊗ 𝐠𝛼

= (𝑆𝑖𝑗,𝛼𝑇𝑗𝑘 + 𝑆𝑖𝑗𝑇𝑗𝑘 ,𝛼 )𝐠𝑖 ⊗ 𝐠𝑘 ⊗ 𝐠𝛼

= (𝑇𝑘𝑗𝑆𝑗𝑖,𝛼 + 𝑆𝑖𝑗𝑇𝑗𝑘 ,𝛼 )𝐠𝑖 ⊗ 𝐠𝑘 ⊗ 𝐠𝛼

= (grad 𝐒T)T𝐓 + 𝐒 grad𝐓

26. For two arbitrary tensors 𝐒 and 𝐓, show that div(𝐒𝐓) = (grad 𝐒): 𝐓 + 𝐓div 𝐒

grad(𝐒𝐓) = (𝑆𝑖𝑗𝑇𝑗𝑘),𝛼 𝐠𝑖 ⊗ 𝐠𝑘 ⊗ 𝐠𝛼

= (𝑆𝑖𝑗,𝛼𝑇𝑗𝑘 + 𝑆𝑖𝑗𝑇𝑗𝑘 ,𝛼 )𝐠𝑖 ⊗ 𝐠𝑘 ⊗ 𝐠𝛼

div(𝐒𝐓) = (𝑆𝑖𝑗,𝛼𝑇𝑗𝑘 + 𝑆𝑖𝑗𝑇𝑗𝑘 ,𝛼 )𝐠𝑖(𝐠𝑘 ⋅ 𝐠𝛼)

= (𝑆𝑖𝑗,𝛼𝑇𝑗𝑘 + 𝑆𝑖𝑗𝑇𝑗𝑘 ,𝛼 )𝐠𝑖𝛿𝑘

𝛼

= (𝑆𝑖𝑗,𝑘𝑇𝑗𝑘 + 𝑆𝑖𝑗𝑇𝑗𝑘 ,𝑘 )𝐠𝑖

= (grad 𝐒): 𝐓 + 𝐒div 𝐓

27. For two arbitrary vectors, 𝐮 and 𝐯, show that grad(𝐮 × 𝐯) = (𝐮 ×)grad𝐯 −

(𝐯 ×)grad𝐮

grad(𝐮 × 𝐯) = (𝜖𝑖𝑗𝑘𝑢𝑗𝑣𝑘),𝑙 𝐠𝑖 ⊗ 𝐠𝑙

= (𝜖𝑖𝑗𝑘𝑢𝑗 ,𝑙 𝑣𝑘 + 𝜖𝑖𝑗𝑘𝑢𝑗𝑣𝑘 ,𝑙 )𝐠𝑖 ⊗ 𝐠𝑙

= (𝑢𝑗 ,𝑙 𝜖𝑖𝑗𝑘𝑣𝑘 + 𝑣𝑘 ,𝑙 𝜖

𝑖𝑗𝑘𝑢𝑗)𝐠𝑖 ⊗ 𝐠𝑙

= −(𝐯 ×)grad𝐮 + (𝐮 ×)grad𝐯

28. For a vector field 𝐮, show that grad(𝐮 ×) is a third ranked tensor. Hence or

otherwise show that div(𝐮 ×) = −curl 𝐮.

The second–order tensor (𝐮 ×) is defined as 𝜖𝑖𝑗𝑘𝑢𝑗𝐠𝑖 ⊗ 𝐠𝑘. Taking the covariant

derivative with an independent base, we have

grad(𝐮 ×) = 𝜖𝑖𝑗𝑘𝑢𝑗 ,𝑙 𝐠𝑖 ⊗ 𝐠𝑘 ⊗ 𝐠𝑙

This gives a third order tensor as we have seen. Contracting on the last two bases,

div(𝐮 ×) = 𝜖𝑖𝑗𝑘𝑢𝑗 ,𝑙 𝐠𝑖 ⊗ 𝐠𝑘 ⋅ 𝐠𝑙

= 𝜖𝑖𝑗𝑘𝑢𝑗 ,𝑙 𝐠𝑖𝛿𝑘𝑙

= 𝜖𝑖𝑗𝑘𝑢𝑗 ,𝑘 𝐠𝑖

= −curl 𝐮

29. Show that div (𝜙𝐈) = grad 𝜙

Note that 𝜙𝐈 = (𝜙𝑔𝛼𝛽)𝐠𝛼 ⊗ 𝐠𝛽 . Also note that

grad 𝜙𝐈 = (𝜙𝑔𝛼𝛽),𝑖 𝐠𝛼 ⊗ 𝐠𝛽 ⊗ 𝐠𝑖

The divergence of this third order tensor is the contraction of the last two bases:

div (𝜙𝐈) = tr(grad 𝜙𝐈) = (𝜙𝑔𝛼𝛽),𝑖 (𝐠𝛼 ⊗ 𝐠𝛽)𝐠𝑖 = (𝜙𝑔𝛼𝛽),𝑖 𝐠

𝛼𝑔𝛽𝑖

= 𝜙,𝑖 𝑔𝛼𝛽𝑔𝛽𝑖𝐠𝛼

= 𝜙,𝑖 𝛿𝛼𝑖 𝐠𝛼 = 𝜙,𝑖 𝐠

𝑖 = grad 𝜙

30. Show that curl (𝜙𝐈) = ( grad 𝜙) ×

Note that 𝜙𝐈 = (𝜙𝑔𝛼𝛽)𝐠𝛼 ⊗ 𝐠𝛽 , and that curl 𝐓 = 𝜖𝑖𝑗𝑘𝑇𝛼𝑘,𝑗 𝐠𝑖 ⊗ 𝐠𝛼 so that,

curl (𝜙𝐈) = 𝜖𝑖𝑗𝑘(𝜙𝑔𝛼𝑘),𝑗 𝐠𝑖 ⊗ 𝐠𝛼

= 𝜖𝑖𝑗𝑘(𝜙,𝑗 𝑔𝛼𝑘)𝐠𝑖 ⊗ 𝐠𝛼 = 𝜖𝑖𝑗𝑘𝜙,𝑗 𝐠𝑖 ⊗ 𝐠𝑘

= ( grad 𝜙) ×

31. Show that the dyad 𝐮 ⊗ 𝐯 is NOT, in general symmetric: 𝐮 ⊗ 𝐯 = 𝐯 ⊗ 𝐮 −

(𝐮 × 𝐯) ×

𝐮 × 𝐯 = 𝜖𝑖𝑗𝑘𝑢𝑗𝑣𝑘 𝐠𝑖

((𝐮 × 𝐯) ×) = 𝜖𝛼𝑖𝛽𝜖𝑖𝑗𝑘𝑢𝑗𝑣𝑘𝐠𝛼 ⊗ 𝐠𝛽

= −(𝛿𝛼𝑗𝛿𝛽

𝑘 − 𝛿𝛼𝑘𝛿𝛽

𝑗 ) 𝑢𝑗𝑣𝑘𝐠𝛼 ⊗ 𝐠𝛽

= (−𝑢𝛼𝑣𝛽 + 𝑢𝛽𝑣𝛼)𝐠𝛼 ⊗ 𝐠𝛽

= 𝐯 ⊗ 𝐮 − 𝐮 ⊗ 𝐯
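A minimal numerical sketch of this identity, assuming NumPy; the helper skew() below builds the matrix of the vector cross (a ×):

import numpy as np

def skew(a):
    # Matrix of the skew tensor (a x): skew(a) @ b == a x b for every b.
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

u, v = np.random.rand(3), np.random.rand(3)

# Sanity check on the helper: (u x) v = u x v.
assert np.allclose(skew(u) @ v, np.cross(u, v))

# ((u x v) x) = outer(v, u) - outer(u, v), so the dyad u (x) v is not symmetric in general.
assert np.allclose(skew(np.cross(u, v)), np.outer(v, u) - np.outer(u, v))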

32. Show that curl (𝐯 ×) = (div 𝐯)𝐈 − grad 𝐯

(𝐯 ×) = 𝜖𝛼𝛽𝑘𝑣𝛽 𝐠𝛼 ⊗ 𝐠𝑘

curl 𝐓 = 𝜖𝑖𝑗𝑘𝑇𝛼𝑘 ,𝑗 𝐠𝑖 ⊗ 𝐠𝛼

so that

curl (𝐯 ×) = 𝜖𝑖𝑗𝑘𝜖𝛼𝛽𝑘𝑣𝛽 ,𝑗 𝐠𝑖 ⊗ 𝐠𝛼

= (𝑔𝑖𝛼𝑔𝑗𝛽 − 𝑔𝑖𝛽𝑔𝑗𝛼) 𝑣𝛽 ,𝑗 𝐠𝑖 ⊗ 𝐠𝛼

= 𝑣𝑗 ,𝑗 𝐠𝛼 ⊗ 𝐠𝛼 − 𝑣𝑖 ,𝑗 𝐠𝑖 ⊗ 𝐠𝑗

= (div 𝐯)𝐈 − grad 𝐯

33. Show that div (𝐮 × 𝐯) = 𝐯 ⋅ curl 𝐮 − 𝐮 ⋅ curl 𝐯

div (𝐮 × 𝐯) = (𝜖𝑖𝑗𝑘𝑢𝑗𝑣𝑘),𝑖

Noting that the tensor 𝜖𝑖𝑗𝑘 behaves as a constant under a covariant differentiation,

we can write,

div (𝐮 × 𝐯) = (𝜖𝑖𝑗𝑘𝑢𝑗𝑣𝑘),𝑖

= 𝜖𝑖𝑗𝑘𝑢𝑗 ,𝑖 𝑣𝑘 + 𝜖𝑖𝑗𝑘𝑢𝑗𝑣𝑘 ,𝑖

= 𝐯 ⋅ curl 𝐮 − 𝐮 ⋅ curl 𝐯

34. Given a scalar point function 𝜙 and a vector field 𝐯, show that curl (𝜙𝐯) =

𝜙 curl 𝐯 + (grad𝜙) × 𝐯.

curl (𝜙𝐯) = 𝜖𝑖𝑗𝑘(𝜙𝑣𝑘),𝑗 𝐠𝑖

= 𝜖𝑖𝑗𝑘(𝜙,𝑗 𝑣𝑘 + 𝜙𝑣𝑘 ,𝑗 )𝐠𝑖

= 𝜖𝑖𝑗𝑘𝜙,𝑗 𝑣𝑘𝐠𝑖 + 𝜖𝑖𝑗𝑘𝜙𝑣𝑘 ,𝑗 𝐠𝑖

= (grad𝜙) × 𝐯 + 𝜙 curl 𝐯

35. Show that curl (grad 𝜙) = 𝐨

For any tensor 𝐯 = 𝑣𝛼𝐠𝛼

curl 𝐯 = 𝜖𝑖𝑗𝑘𝑣𝑘 ,𝑗 𝐠𝑖

Let 𝐯 = grad 𝜙. Clearly, in this case, 𝑣𝑘 = 𝜙,𝑘 so that 𝑣𝑘 ,𝑗 = 𝜙,𝑘𝑗 . It therefore follows

that,

curl (grad 𝜙) = 𝜖𝑖𝑗𝑘𝜙,𝑘𝑗 𝐠𝑖 = 𝐨.

The contraction of symmetric tensors with anti-symmetric led to this conclusion.

Note that this presupposes that the order of differentiation in the scalar field is

immaterial. This will be true only if the scalar field is smooth enough for the mixed second derivatives to be equal (for instance, twice continuously differentiable) – a proposition we have assumed in the above.

36. Show that curl (grad 𝐯) = 𝟎

For any tensor 𝐓 = T𝛼𝛽𝐠𝛼 ⊗ 𝐠𝛽

curl 𝐓 = 𝜖𝑖𝑗𝑘𝑇𝛼𝑘 ,𝑗 𝐠𝑖 ⊗ 𝐠𝛼

Let 𝐓 = grad 𝐯. Clearly, in this case, T𝛼𝛽 = 𝑣𝛼 ,𝛽 so that 𝑇𝛼𝑘 ,𝑗 = 𝑣𝛼 ,𝑘𝑗. It therefore

follows that,

curl (grad 𝐯) = 𝜖𝑖𝑗𝑘𝑣𝛼 ,𝑘𝑗 𝐠𝑖 ⊗ 𝐠𝛼 = 𝟎.

The contraction of symmetric tensors with anti-symmetric led to this conclusion.

Note that this presupposes that the order of differentiation in the vector field is

immaterial. This will be true only if the vector field is smooth enough for the mixed second derivatives to be equal (for instance, twice continuously differentiable) – a proposition we have assumed in the above.

37. Show that curl (grad 𝐯)T = grad(curl 𝐯)

From previous derivation, we can see that, curl 𝐓 = 𝜖𝑖𝑗𝑘𝑇𝛼𝑘,𝑗 𝐠𝑖 ⊗ 𝐠𝛼. Clearly,

curl 𝐓T = 𝜖𝑖𝑗𝑘𝑇𝑘𝛼,𝑗 𝐠𝑖 ⊗ 𝐠𝛼

so that curl (grad 𝐯)T = 𝜖𝑖𝑗𝑘𝑣𝑘,𝛼𝑗 𝐠𝑖 ⊗ 𝐠𝛼. But curl 𝐯 = 𝜖𝑖𝑗𝑘𝑣𝑘,𝑗 𝐠𝑖 . The gradient of

this is,

grad(curl 𝐯) = (𝜖𝑖𝑗𝑘𝑣𝑘,𝑗 ),𝛼 𝐠𝑖 ⊗ 𝐠𝛼 = 𝜖𝑖𝑗𝑘𝑣𝑘 ,𝑗𝛼 𝐠𝑖 ⊗ 𝐠𝛼 = curl (grad 𝐯)T

38. Show that div (grad 𝜙 × grad θ) = 0

grad 𝜙 × grad θ = 𝜖𝑖𝑗𝑘𝜙,𝑗 𝜃,𝑘 𝐠𝑖

The gradient of this vector is the tensor,

grad(grad 𝜙 × grad 𝜃) = (𝜖𝑖𝑗𝑘𝜙,𝑗 𝜃,𝑘 ),𝑙 𝐠𝑖 ⊗ 𝐠𝑙

= 𝜖𝑖𝑗𝑘𝜙,𝑗𝑙 𝜃,𝑘 𝐠𝑖 ⊗ 𝐠𝑙 + 𝜖𝑖𝑗𝑘𝜙,𝑗 𝜃,𝑘𝑙 𝐠𝑖 ⊗ 𝐠𝑙

The trace of the above result is the divergence we are seeking:

div (grad 𝜙 × grad θ) = tr[grad(grad 𝜙 × grad 𝜃)]

= 𝜖𝑖𝑗𝑘𝜙,𝑗𝑙 𝜃,𝑘 𝐠𝑖 ⋅ 𝐠𝑙 + 𝜖𝑖𝑗𝑘𝜙,𝑗 𝜃,𝑘𝑙 𝐠𝑖 ⋅ 𝐠𝑙

= 𝜖𝑖𝑗𝑘𝜙,𝑗𝑙 𝜃,𝑘 𝛿𝑖𝑙 + 𝜖𝑖𝑗𝑘𝜙,𝑗 𝜃,𝑘𝑙 𝛿𝑖

𝑙

= 𝜖𝑖𝑗𝑘𝜙,𝑗𝑖 𝜃,𝑘+ 𝜖𝑖𝑗𝑘𝜙,𝑗 𝜃,𝑘𝑖 = 0

Each term vanishing on account of the contraction of a symmetric tensor with an

antisymmetric.

39. Show that curl curl 𝐯 = grad(div 𝐯) − grad2𝐯

Let 𝐰 = curl 𝐯 ≡ 𝜖𝑖𝑗𝑘𝑣𝑘,𝑗 𝐠𝑖 . But curl 𝐰 ≡ 𝜖𝛼𝛽𝛾𝑤𝛾 ,𝛽 𝐠𝛼 . Upon inspection, we find

that 𝑤𝛾 = 𝑔𝛾𝑖𝜖𝑖𝑗𝑘𝑣𝑘 ,𝑗 so that

curl 𝐰 ≡ 𝜖𝛼𝛽𝛾(𝑔𝛾𝑖𝜖𝑖𝑗𝑘𝑣𝑘,𝑗 ),𝛽 𝐠𝛼 = 𝑔𝛾𝑖𝜖

𝛼𝛽𝛾𝜖𝑖𝑗𝑘𝑣𝑘 ,𝑗𝛽 𝐠𝛼

Now, it can be shown that 𝑔𝛾𝑖𝜖𝛼𝛽𝛾𝜖𝑖𝑗𝑘 = 𝑔𝛼𝑗𝑔𝛽𝑘 − 𝑔𝛼𝑘𝑔𝛽𝑗 so that,

curl 𝐰 = (𝑔𝛼𝑗𝑔𝛽𝑘 − 𝑔𝛼𝑘𝑔𝛽𝑗)𝑣𝑘 ,𝑗𝛽 𝐠𝛼

= 𝑣𝛽 ,𝑗𝛽 𝐠𝑗 − 𝑔𝛽𝑗𝑣𝛼 ,𝑗𝛽 𝐠𝛼

= grad(div 𝐯) − grad2𝐯

Also recall that the Laplacian (grad2) of a scalar field 𝜙 is, grad2𝜙 = 𝑔𝑖𝑗𝜙,𝑖𝑗 . In

Cartesian coordinates, this becomes,

grad2𝜙 = 𝑔𝑖𝑗𝜙,𝑖𝑗 = 𝛿𝑖𝑗 𝜙,𝑖𝑗 = 𝜙,𝑖𝑖

as the unit (metric) tensor now degenerates to the Kronecker delta in this special

case. For a vector field, grad2𝐯 = 𝑔𝛽𝑗𝑣𝛼 ,𝑗𝛽 𝐠𝛼 .

Also note that while grad is a vector operator, the Laplacian (grad2) is a scalar

operator.

40. Given that φ(t) = |𝐀(t)|, show that φ̇(t) = (𝐀/|𝐀(t)|) : 𝐀̇, where 𝐀̇ ≡ d𝐀/dt.

φ² ≡ 𝐀 : 𝐀

Now,

d(φ²)/dt = 2φ dφ/dt = (d𝐀/dt) : 𝐀 + 𝐀 : (d𝐀/dt) = 2𝐀 : (d𝐀/dt)

as the inner product is commutative. We can therefore write that

dφ/dt = (𝐀/φ) : d𝐀/dt = (𝐀/|𝐀(t)|) : 𝐀̇

as required.

41. Given a tensor field 𝐓, obtain the vector 𝐰 ≡ 𝐓T𝐯 and show that its divergence is

𝐓: (∇𝐯) + 𝐯 ⋅ div 𝐓

The gradient of 𝐰 is the tensor, (𝑇𝑗𝑖𝑣𝑗),𝑘 𝐠𝑖 ⊗ 𝐠𝑘. Therefore, divergence of 𝒘 (the

trace of the gradient) is the scalar sum, 𝑇𝑗𝑖𝑣𝑗 ,𝑘 𝑔𝑖𝑘 + 𝑇𝑗𝑖 ,𝑘 𝑣𝑗𝑔𝑖𝑘 . Expanding, we

obtain,

div (𝐓T𝐯) = 𝑇𝑗𝑖𝑣𝑗 ,𝑘 𝑔𝑖𝑘 + 𝑇𝑗𝑖 ,𝑘 𝑣𝑗𝑔𝑖𝑘

= 𝑇𝑗𝑘 ,𝑘 𝑣𝑗 + 𝑇𝑗

𝑘𝑣𝑗 ,𝑘

= (div 𝐓) ⋅ 𝐯 + tr(𝐓Tgrad 𝐯)

= (div 𝐓) ⋅ 𝐯 + 𝐓: (grad 𝐯)

Recall that scalar product of two vectors is commutative so that

div (𝐓T𝐯) = 𝐓: (grad 𝐯) + 𝐯 ⋅ div 𝐓

42. For a second-order tensor 𝐓 define curl 𝐓 ≡ 𝜖𝑖𝑗𝑘𝑇𝛼𝑘 ,𝑗 𝐠𝑖 ⊗ 𝐠𝛼 show that for any

constant vector 𝒂, (curl 𝐓) 𝒂 = curl (𝐓T𝒂)

Express vector 𝒂 in the invariant form with covariant components as 𝒂 = 𝑎𝛽𝐠𝛽 . It

follows that

(curl 𝐓) 𝒂 = 𝜖𝑖𝑗𝑘𝑇𝛼𝑘,𝑗 (𝐠𝑖 ⊗ 𝐠𝛼)𝒂

= 𝜖𝑖𝑗𝑘𝑇𝛼𝑘,𝑗 𝑎𝛽(𝐠𝑖 ⊗ 𝐠𝛼)𝐠𝛽

= 𝜖𝑖𝑗𝑘𝑇𝛼𝑘,𝑗 𝑎𝛽𝐠𝑖𝛿𝛽

𝛼

= 𝜖𝑖𝑗𝑘(𝑇𝛼𝑘),𝑗 𝐠𝑖𝑎𝛼

= 𝜖𝑖𝑗𝑘(𝑇𝛼𝑘𝑎𝛼),𝑗 𝐠𝑖

The last equality resulting from the fact that vector 𝒂 is a constant vector. Clearly,

(curl 𝐓) 𝒂 = curl (𝐓T𝒂)

43. For any two vectors 𝐮 and 𝐯, show that curl (𝐮 ⊗ 𝐯) = [(grad 𝐮)𝐯 ×]𝑇 +

(curl 𝐯) ⊗ 𝒖 where 𝐯 × is the skew tensor 𝜖𝑖𝑘𝑗𝑣𝑘 𝐠𝑖 ⊗ 𝐠𝑗.

Recall that the curl of a tensor 𝑻 is defined by curl 𝑻 ≡ 𝜖𝑖𝑗𝑘𝑇𝛼𝑘,𝑗 𝐠𝑖 ⊗ 𝐠𝛼. Clearly

therefore,

curl (𝐮 ⊗ 𝐯) = 𝜖𝑖𝑗𝑘(𝑢𝛼𝑣𝑘),𝑗 𝐠𝑖 ⊗ 𝐠𝛼

= 𝜖𝑖𝑗𝑘(𝑢𝛼 ,𝑗 𝑣𝑘 + 𝑢𝛼𝑣𝑘 ,𝑗 ) 𝐠𝑖 ⊗ 𝐠𝛼

= 𝜖𝑖𝑗𝑘𝑢𝛼 ,𝑗 𝑣𝑘 𝐠𝑖 ⊗ 𝐠𝛼 + 𝜖𝑖𝑗𝑘𝑢𝛼𝑣𝑘,𝑗 𝐠𝑖 ⊗ 𝐠𝛼

= (𝜖𝑖𝑗𝑘𝑣𝑘 𝐠𝑖) ⊗ (𝑢𝛼 ,𝑗 𝐠𝛼) + (𝜖𝑖𝑗𝑘𝑣𝑘 ,𝑗 𝐠𝑖) ⊗ (𝑢𝛼𝐠𝛼)

= (𝜖𝑖𝑗𝑘𝑣𝑘 𝐠𝑖 ⊗ 𝐠𝑗)(𝑢𝛼 ,𝛽 𝐠𝛽 ⊗ 𝐠𝛼) + (𝜖𝑖𝑗𝑘𝑣𝑘 ,𝑗 𝐠𝑖) ⊗ (𝑢𝛼𝐠𝛼)

= −(𝐯 ×)(grad 𝐮)𝑻 + (curl 𝐯) ⊗ 𝐮

= [(grad 𝐮)𝐯 ×]𝑻 + (curl 𝐯) ⊗ 𝐮

upon noting that the vector cross is a skew tensor.

44. Show that curl (𝐮 × 𝐯) = div(𝐮 ⊗ 𝐯 − 𝐯 ⊗ 𝐮)

The vector 𝐰 ≡ 𝐮 × 𝐯 = 𝑤𝒌𝐠𝒌 = 𝜖𝑘𝛼𝛽𝑢𝛼𝑣𝛽𝐠𝒌 and curl 𝐰 = 𝝐𝒊𝒋𝒌𝑤𝑘 ,𝑗 𝐠𝑖 . Therefore,

curl (𝐮 × 𝐯) = 𝜖𝑖𝑗𝑘𝑤𝑘,𝑗 𝐠𝑖

= 𝜖𝑖𝑗𝑘𝜖𝑘𝛼𝛽(𝑢𝛼𝑣𝛽),𝑗 𝐠𝑖

= (𝛿𝛼𝑖 𝛿𝛽

𝑗− 𝛿𝛽

𝑖 𝛿𝛼𝑗) (𝑢𝛼𝑣𝛽),𝑗 𝐠𝑖

= (𝛿𝛼𝑖 𝛿𝛽

𝑗− 𝛿𝛽

𝑖 𝛿𝛼𝑗) (𝑢𝛼 ,𝑗 𝑣𝛽 + 𝑢𝛼𝑣𝛽 ,𝑗 )𝐠𝑖

= [𝑢𝑖 ,𝑗 𝑣𝑗 + 𝑢𝑖𝑣𝑗 ,𝑗− (𝑢𝑗 ,𝑗 𝑣𝑖 + 𝑢𝑗𝑣𝑖 ,𝑗 )]𝐠𝑖

= [(𝑢𝑖𝑣𝑗),𝑗− (𝑢𝑗𝑣𝑖),𝑗 ]𝐠𝑖

= div(𝐮 ⊗ 𝐯 − 𝐯 ⊗ 𝐮)

since div(𝐮 ⊗ 𝐯) = (𝑢𝑖𝑣𝑗),𝛼 𝐠𝑖 ⊗ 𝐠𝑗 ⋅ 𝐠𝛼 = (𝑢𝑖𝑣𝑗),𝑗 𝐠𝑖 .

45. Given a scalar point function 𝜙 and a second-order tensor field 𝐓, show that

curl (𝜙𝐓) = 𝜙 curl 𝐓 + ((grad𝜙) ×)𝐓T where [(grad𝜙) ×] is the skew tensor

𝜖𝑖𝑗𝑘𝜙,𝑗 𝐠𝑖 ⊗ 𝐠𝑘

curl (𝜙𝐓) ≡ 𝜖𝑖𝑗𝑘(𝜙𝑇𝛼𝑘),𝑗 𝐠𝑖 ⊗ 𝐠𝛼

= 𝜖𝑖𝑗𝑘(𝜙,𝑗 𝑇𝛼𝑘 + 𝜙𝑇𝛼𝑘 ,𝑗 ) 𝐠𝑖 ⊗ 𝐠𝛼

= 𝜖𝑖𝑗𝑘𝜙,𝑗 𝑇𝛼𝑘 𝐠𝑖 ⊗ 𝐠𝛼 + 𝜙𝜖𝑖𝑗𝑘𝑇𝛼𝑘 ,𝑗 𝐠𝑖 ⊗ 𝐠𝛼

= (𝜖𝑖𝑗𝑘𝜙,𝑗 𝐠𝑖 ⊗ 𝐠𝑘) (𝑇𝛼𝛽𝐠𝛽 ⊗ 𝐠𝛼) + 𝜙𝜖𝑖𝑗𝑘𝑇𝛼𝑘,𝑗 𝐠𝑖 ⊗ 𝐠𝛼

= 𝜙 curl 𝐓 + ((grad𝜙) ×)𝐓T

46. For a second-order tensor field 𝑻, show that div(curl 𝐓) = curl(div 𝐓T)

Define the second order tensor 𝑆 as

curl 𝐓 ≡ 𝜖𝑖𝑗𝑘𝑇𝛼𝑘,𝑗 𝐠𝑖 ⊗ 𝐠𝛼 = 𝑆.𝛼𝑖 𝐠𝑖 ⊗ 𝐠𝛼

The gradient of 𝑺 is 𝑆.𝛼𝑖 ,𝛽 𝐠𝑖 ⊗ 𝐠𝛼 ⊗ 𝐠𝛽 = 𝜖𝑖𝑗𝑘𝑇𝛼𝑘,𝑗𝛽 𝐠𝑖 ⊗ 𝐠𝛼 ⊗ 𝐠𝛽

Clearly,

div(curl 𝐓) = 𝜖𝑖𝑗𝑘𝑇𝛼𝑘 ,𝑗𝛽 𝐠𝑖 ⊗ 𝐠𝛼 ⋅ 𝐠𝛽 = 𝜖𝑖𝑗𝑘𝑇𝛼𝑘 ,𝑗𝛽 𝐠𝑖 𝑔𝛼𝛽

= 𝜖𝑖𝑗𝑘𝑇𝛽𝑘 ,𝑗𝛽 𝐠𝑖 = curl(div 𝐓T)

47. The position vector in Cartesian coordinates 𝐫 = 𝑥𝑖𝐞𝑖. Show that (a) div 𝐫 = 𝟑, (b)

div (𝐫 ⊗ 𝐫) = 𝟒𝐫, (c) div 𝐫 = 3, and (d) grad 𝐫 = 𝟏 and (e) curl (𝐫 ⊗ 𝐫) = −𝐫 ×

grad 𝒓 = 𝑥𝑖 ,𝒋 𝒆𝑖 ⊗ 𝒆𝒋

= 𝛿𝑖𝑗𝒆𝑖 ⊗ 𝒆𝒋 = 𝟏

div 𝒓 = 𝑥𝑖 ,𝒋 𝒆𝑖 ⋅ 𝒆𝒋

= 𝛿𝑖𝑗𝛿𝑖𝑗 = 𝛿𝑗𝑗 = 3. 𝒓 ⊗ 𝒓 = 𝑥𝑖𝒆𝑖 ⊗ 𝑥𝑗𝒆𝑗 = 𝑥𝑖𝑥𝑗𝒆𝑖 ⊗ 𝒆𝒋grad(𝒓 ⊗ 𝒓)

= (𝑥𝑖𝑥𝑗),𝑘 𝒆𝑖 ⊗ 𝒆𝒋 ⊗ 𝒆𝒌 = (𝑥𝑖 ,𝑘 𝑥𝑗 + 𝑥𝑖𝑥𝑗 ,𝑘 )𝒆𝑖 ⊗ 𝒆𝒋 ⋅ 𝒆𝒌

= (𝛿𝑖𝑘𝒙𝒋 + 𝒙𝒊𝛿𝑗𝑘)𝛿𝑗𝑘𝒆𝑖 = (𝛿𝑖𝑘𝒙𝒌 + 𝒙𝒊𝛿𝑗𝑗)𝒆𝑖

= 4𝑥𝑖𝒆𝑖 = 4𝒓

curl(𝒓 ⊗ 𝒓) = 𝜖𝛼𝛽𝛾(𝑥𝑖𝑥𝛾),𝛽 𝒆𝛼 ⊗ 𝒆𝒊

= 𝜖𝛼𝛽𝛾(𝑥𝑖 ,𝛽 𝑥𝛾 + 𝑥𝑖𝑥𝛾,𝛽 )𝒆𝛼 ⊗ 𝒆𝒊

= 𝜖𝛼𝛽𝛾(𝛿𝑖𝛽𝑥𝛾 + 𝑥𝑖𝛿𝛾𝛽)𝒆𝛼 ⊗ 𝒆𝒊

= 𝜖𝛼𝑖𝛾𝑥𝛾𝒆𝛼 ⊗ 𝒆𝒊 + 𝜖𝛼𝛽𝛽𝑥𝑖𝒆𝛼 ⊗ 𝒆𝒊 = −𝜖𝛼𝛾𝑖𝑥𝛾𝒆𝛼 ⊗ 𝒆𝒊 = −𝒓 ×

48. Define the magnitude of a tensor 𝐀 as |𝐀| = √tr(𝐀𝐀ᵀ). Show that ∂|𝐀|/∂𝐀 = 𝐀/|𝐀|.

By definition, given a scalar α, the derivative of a scalar function f(𝐀) of a tensor is given by

(∂f(𝐀)/∂𝐀) : 𝐁 = lim(α→0) ∂_α f(𝐀 + α𝐁)

for any arbitrary tensor 𝐁. In the case of f(𝐀) = |𝐀|,

(∂|𝐀|/∂𝐀) : 𝐁 = lim(α→0) ∂_α |𝐀 + α𝐁|

|𝐀 + α𝐁| = √tr[(𝐀 + α𝐁)(𝐀 + α𝐁)ᵀ] = √tr(𝐀𝐀ᵀ + α𝐁𝐀ᵀ + α𝐀𝐁ᵀ + α²𝐁𝐁ᵀ)

Note that everything under the root sign here is scalar and that the trace operation is linear. Consequently, we can write

lim(α→0) ∂_α |𝐀 + α𝐁| = lim(α→0) [tr(𝐁𝐀ᵀ) + tr(𝐀𝐁ᵀ) + 2α tr(𝐁𝐁ᵀ)] / [2√tr(𝐀𝐀ᵀ + α𝐁𝐀ᵀ + α𝐀𝐁ᵀ + α²𝐁𝐁ᵀ)]
                      = 2𝐀:𝐁 / (2√(𝐀:𝐀)) = (𝐀/|𝐀|) : 𝐁

So that

(∂|𝐀|/∂𝐀) : 𝐁 = (𝐀/|𝐀|) : 𝐁,  or  ∂|𝐀|/∂𝐀 = 𝐀/|𝐀|

as required, since 𝐁 is arbitrary.
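The result can be spot-checked with a finite-difference directional derivative in a random direction 𝐁; a minimal sketch assuming NumPy, with the inner product 𝐒:𝐓 computed as the sum of entrywise products:

import numpy as np

def norm(A):
    # |A| = sqrt(tr(A A^T)), the Frobenius norm.
    return np.sqrt(np.trace(A @ A.T))

A, B = np.random.rand(3, 3), np.random.rand(3, 3)
h = 1e-6

# Directional (Gateaux) derivative of |A| in the direction B, by central difference.
dir_deriv = (norm(A + h * B) - norm(A - h * B)) / (2 * h)

# (A / |A|) : B, using S:T = tr(S^T T) = sum of entrywise products.
assert np.isclose(dir_deriv, np.sum(A / norm(A) * B), atol=1e-6)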

49. Show that 𝜕𝑰3(𝐒)

𝜕𝐒=

𝜕det(𝐒)

𝜕𝐒= 𝐒c the cofactor of 𝐒.

Clearly 𝐒c = det(𝐒) 𝐒−T = 𝑰3(𝐒) 𝐒−T. Details of this for the contravariant

components of a tensor is presented below. Let

det(𝐒) ≡ |𝐒| ≡ 𝑆 =1

3!𝜖𝑖𝑗𝑘𝜖𝑟𝑠𝑡𝑆𝑖𝑟𝑆𝑗𝑠𝑆𝑘𝑡

Differentiating wrt 𝑆𝛼𝛽 , we obtain,

𝜕𝑆

𝜕𝑆𝛼𝛽𝐠𝛼 ⊗ 𝐠𝛽 =

1

3!𝜖𝑖𝑗𝑘𝜖𝑟𝑠𝑡 [

𝜕𝑆𝑖𝑟

𝜕𝑆𝛼𝛽𝑆𝑗𝑠𝑆𝑘𝑡 + 𝑆𝑖𝑟

𝜕𝑆𝑗𝑠

𝜕𝑆𝛼𝛽𝑆𝑘𝑡 + 𝑆𝑖𝑟𝑆𝑗𝑠

𝜕𝑆𝑘𝑡

𝜕𝑆𝛼𝛽] 𝐠𝛼 ⊗ 𝐠𝛽

=1

3!𝜖𝑖𝑗𝑘𝜖𝑟𝑠𝑡 [𝛿𝑖

𝛼𝛿𝑟𝛽𝑆𝑗𝑠𝑆𝑘𝑡 + 𝑆𝑖𝑟𝛿𝑗

𝛼𝛿𝑠𝛽𝑆𝑘𝑡 + 𝑆𝑖𝑟𝑆𝑗𝑠𝛿𝑘

𝛼𝛿𝑡𝛽] 𝐠𝛼 ⊗ 𝐠𝛽

=1

3!𝜖𝛼𝑗𝑘𝜖𝛽𝑠𝑡[𝑆𝑗𝑠𝑆𝑘𝑡 + 𝑆𝑗𝑠𝑆𝑘𝑡 + 𝑆𝑗𝑠𝑆𝑘𝑡]𝐠𝛼 ⊗ 𝐠𝛽

=1

2!𝜖𝛼𝑗𝑘𝜖𝛽𝑠𝑡𝑆𝑗𝑠𝑆𝑘𝑡𝐠𝛼 ⊗ 𝐠𝛽 ≡ [𝑆c]𝛼𝛽𝐠𝛼 ⊗ 𝐠𝛽

Which is the cofactor of [𝑆𝛼𝛽] or 𝑺

50. For a scalar variable 𝛼, if the tensor 𝐓 = 𝐓(𝛼) and �̇� ≡𝑑𝐓

𝑑𝛼, Show that

𝑑

𝑑𝛼det(𝐓) =

det(𝐓) tr(�̇�𝐓−𝟏)

Let 𝑨 ≡ �̇�𝑻−1 so that, �̇� = 𝑨𝑻. In component form, we have �̇�𝑗𝑖 = 𝐴𝑚

𝑖 𝑇𝑗𝑚. Therefore,

𝑑

𝑑𝛼det(𝑻) =

𝑑

𝑑𝛼(𝑒𝑖𝑗𝑘𝑇𝑖

1𝑇𝑗2𝑇𝑘

3) = 𝑒𝑖𝑗𝑘(�̇�𝑖1𝑇𝑗

2𝑇𝑘3 + 𝑇𝑖

1�̇�𝑗2𝑇𝑘

3 + 𝑇𝑖1𝑇𝑗

2�̇�𝑘3)

= 𝑒𝑖𝑗𝑘(𝐴𝑙1𝑇𝑖

𝑙𝑇𝑗2𝑇𝑘

3 + 𝑇𝑖1𝐴𝑚

2 𝑇𝑗𝑚𝑇𝑘

3 + 𝑇𝑖1𝑇𝑗

2𝐴𝑛3𝑇𝑘

𝑛)

= 𝑒𝑖𝑗𝑘 [(𝐴11𝑇𝑖

1 + 𝐴21𝑇𝑖

2 + 𝐴31𝑇𝑖

3 ) 𝑇𝑗2𝑇𝑘

3 + 𝑇𝑖1 ( 𝐴1

2𝑇𝑗1 + 𝐴2

2𝑇𝑗2 + 𝐴3

2𝑇𝑗3 ) 𝑇𝑘

3

+ 𝑇𝑖1𝑇𝑗

2 ( 𝐴13𝑇𝑘

1 + 𝐴23𝑇𝑘

2 + 𝐴33𝑇𝑘

3)]

All the boxed terms in the above equation vanish on account of the contraction of a

symmetric tensor with an antisymmetric one.

(For example, the first boxed term yields, 𝑒𝑖𝑗𝑘𝐴21𝑇𝑖

2𝑇𝑗2𝑇𝑘

3

Which is symmetric as well as antisymmetric in 𝑖 and 𝑗. It therefore vanishes. The

same is true for all other such terms.)

∂𝛼det(𝐓) = 𝑒𝑖𝑗𝑘[(𝐴1

1𝑇𝑖1)𝑇𝑗

2𝑇𝑘3 + 𝑇𝑖

1(𝐴22𝑇𝑗

2)𝑇𝑘3 + 𝑇𝑖

1𝑇𝑗2(𝐴3

3𝑇𝑘3)]

= 𝐴𝑚𝑚𝑒𝑖𝑗𝑘𝑇𝑖

1𝑇𝑗2𝑇𝑘

3 = tr(�̇�𝐓−1) det(𝐓)

as required.
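This is Jacobi's formula, and it lends itself to a quick finite-difference check. A minimal sketch assuming NumPy; the function T(α) below is an arbitrary smooth matrix-valued function chosen only for illustration:

import numpy as np

A0, A1 = np.random.rand(3, 3), np.random.rand(3, 3)
T = lambda a: A0 + a * A1 + a**2 * (A1 @ A0)        # arbitrary smooth T(alpha)
Tdot = lambda a: A1 + 2 * a * (A1 @ A0)             # its exact derivative

a, h = 0.3, 1e-6
lhs = (np.linalg.det(T(a + h)) - np.linalg.det(T(a - h))) / (2 * h)
rhs = np.linalg.det(T(a)) * np.trace(Tdot(a) @ np.linalg.inv(T(a)))
assert np.isclose(lhs, rhs, rtol=1e-5, atol=1e-8)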

51. Prove Liouville’s Theorem that for a scalar variable 𝛼, if the tensor 𝐓 = 𝐓(𝛼) and

�̇� ≡𝑑𝐓

𝑑𝛼,

𝑑

𝑑𝛼det(𝐓) = det(𝐓) tr(�̇�𝐓−𝟏) by direct methods.

We choose three constant, linearly independent vectors 𝐚, 𝐛 and 𝐜 so that

[𝐚, 𝐛, 𝐜] det(𝐓) = [𝐓𝐚, 𝐓𝐛, 𝐓𝐜]

Differentiating both sides, noting that the RHS is a product,

[𝐚, 𝐛, 𝐜]𝑑

𝑑𝛼det(𝐓)

=𝑑

𝑑𝛼[𝐓𝐚, 𝐓𝐛, 𝐓𝐜]

= [𝑑𝐓

𝑑𝛼𝐚, 𝐓𝐛, 𝐓𝐜] + [𝐓𝐚,

𝑑𝐓

𝑑𝛼𝐛, 𝐓𝐜] + [𝐓𝐚, 𝐓𝐛,

𝑑𝐓

𝑑𝛼𝐜]

= [𝑑𝐓

𝑑𝛼𝐓−1𝐓𝐚, 𝐓𝐛, 𝐓𝐜] + [𝐓𝐚,

𝑑𝐓

𝑑𝛼𝐓−1𝐓𝐛, 𝐓𝐜] + [𝐓𝐚, 𝐓𝐛,

𝑑𝐓

𝑑𝛼𝐓−1𝐓𝐜]

= tr (𝑑𝐓

𝑑𝛼𝐓−1) [𝐓𝐚, 𝐓𝐛, 𝐓𝐜]

Clearly, 𝑑

𝑑𝛼det(𝐓) = det(𝐓) tr (

𝑑𝐓

𝑑𝛼𝐓−1)

52. If 𝐓 is invertible, show that ∂

∂𝐓(log det(𝐓)) = 𝐓−𝐓

∂𝐓(log det(𝐓)) =

∂(log det(𝐓))

∂det(𝐓) ∂det(𝐓)

∂𝐓

=1

det(𝐓)𝐓c =

1

det(𝐓)det(𝐓) 𝐓−T

= 𝐓−T

53. If 𝐓 is invertible, show that ∂

∂𝐓(log det(𝐓−1)) = −𝐓−𝐓

∂𝐓(log det(𝐓−1)) =

∂(log det(𝐓−1))

∂det(𝐓−1) ∂det(𝐓−1)

∂𝐓−1

∂𝐓−1

∂𝐓

=1

det(𝐓−1)𝐓−𝐜(−𝐓−2)

=1

det(𝐓−1)det(𝐓−1)𝐓𝐓(−𝐓−2)

= −𝐓−𝐓

54. Given that 𝐀 is a constant tensor, Show that ∂

∂𝐒tr(𝐀𝐒) = 𝐀T

In invariant components terms, let 𝐀 = A𝑖𝑗𝐠𝑖 ⊗ 𝐠𝑗 and let 𝐒 = S𝛼𝛽𝐠𝛼 ⊗ 𝐠𝛽.

𝐀𝐒 = A𝑖𝑗S𝛼𝛽(𝐠𝑖 ⊗ 𝐠𝑗)(𝐠𝛼 ⊗ 𝐠𝛽)

= A𝑖𝑗S𝛼𝛽(𝐠𝑖 ⊗ 𝐠𝛽)𝛿𝑗𝛼

= A𝑖𝑗S𝑗𝛽(𝐠𝑖 ⊗ 𝐠𝛽)

tr(𝐀𝐒) = A𝑖𝑗S𝑗𝛽(𝐠𝑖 ⋅ 𝐠𝛽)

= A𝑖𝑗S𝑗𝛽𝛿𝑖𝛽

= A𝑖𝑗S𝑗𝑖

∂𝐒tr(𝐀𝐒) =

∂S𝛼𝛽tr(𝐀𝐒)𝐠𝛼 ⊗ 𝐠𝛽

=∂A𝑖𝑗S𝑗𝑖

∂S𝛼𝛽𝐠𝛼 ⊗ 𝐠𝛽

= A𝑖𝑗𝛿𝑗𝛼𝛿𝑖

𝛽𝐠𝛼 ⊗ 𝐠𝛽 = A𝑖𝑗𝐠𝑗 ⊗ 𝐠𝑖 = 𝐀T =

∂𝐒(𝐀T: 𝐒)

as required.

55. Given that 𝐀 and 𝐁 are constant tensors, show that ∂

∂𝐒tr(𝐀𝐒𝐁T) = 𝐀T𝐁

First observe that tr(𝐀𝐒𝐁T) = tr(𝐁T𝐀𝐒). If we write, 𝐂 ≡ 𝐁T𝐀, it is obvious from

the above that ∂

∂𝐒tr(𝐂𝐒) = 𝐂T. Therefore,

∂𝐒tr(𝐀𝐒𝐁T) = (𝐁T𝐀)𝐓 = 𝐀T𝐁

56. Given that 𝐀 and 𝐁 are constant tensors, show that ∂

∂𝐒tr(𝐀𝐒T𝐁T) = 𝐁T𝐀

Observe that tr(𝐀𝐒T𝐁T) = tr(𝐁T𝐀𝐒T) = tr[𝐒(𝐁T𝐀)T] = tr[(𝐁T𝐀)T𝐒]

[The transposition does not alter trace; neither does a cyclic permutation. Ensure

you understand why each equality here is true.] Consequently,

∂𝐒tr(𝐀𝐒T𝐁T) =

∂𝐒 tr[(𝐁T𝐀)T𝐒] = [(𝐁T𝐀)T]𝐓 = 𝐁T𝐀

57. Let 𝐒 be a symmetric and positive definite tensor and let 𝐼1(𝐒), 𝐼2(𝐒)and𝐼3(𝐒) be

the three principal invariants of 𝐒 show that (a) 𝜕𝑰1(𝐒)

𝜕𝐒= 𝐈 the identity tensor, (b)

𝜕𝑰2(𝐒)

𝜕𝐒= 𝐼1(𝐒)𝐈 − 𝐒 and (c)

𝜕𝐼3(𝐒)

𝜕𝐒= 𝐼3(𝐒) 𝐒−1

𝜕𝑰1(𝑺)

𝜕𝑺 can be written in the invariant component form as,

𝜕𝐼1(𝐒)

𝜕𝐒=

𝜕𝐼1(𝐒)

𝜕𝑆𝑖𝑗

𝐠𝑖 ⊗ 𝐠𝑗

Recall that 𝐼1(𝐒) = tr(𝐒) = 𝑆αα hence

𝜕𝐼1(𝐒)

𝜕𝐒=

𝜕𝐼1(𝐒)

𝜕𝑆𝑖𝑗

𝐠𝑖 ⊗ 𝐠𝑗 =𝜕𝑆α

α

𝜕𝑆𝑖𝑗𝐠𝑖 ⊗ 𝐠𝑗

= 𝛿𝛼𝑖 𝛿𝑗

𝛼𝐠𝑖 ⊗ 𝐠𝑗 = 𝛿𝑗𝑖𝐠𝑖 ⊗ 𝐠𝑗

= 𝐈

which is the identity tensor as expected. 𝜕𝐼2(𝐒)

𝜕𝐒 in a similar way can be written in the invariant component form as,

𝜕𝐼2(𝐒)

𝜕𝐒=

1

2

𝜕𝐼1(𝐒)

𝜕𝑆𝑖𝑗

[𝑆αα𝑆𝛽

𝛽− 𝑆𝛽

α𝑆α𝛽] 𝐠𝑖 ⊗ 𝐠𝑗

where we have utilized the fact that 𝐼2(𝐒) =1

2[tr2(𝐒) − tr(𝐒2)]. Consequently,

𝜕𝐼2(𝐒)

𝜕𝐒=

1

2

𝜕

𝜕𝑆𝑖𝑗[𝑆α

α𝑆𝛽𝛽

− 𝑆𝛽α𝑆α

𝛽] 𝐠𝑖 ⊗ 𝐠𝑗

=1

2[𝛿𝛼

𝑖 𝛿𝑗𝛼𝑆𝛽

𝛽+ 𝛿𝛽

𝑖 𝛿𝑗𝛽𝑆α

α − 𝛿𝛽𝑖 𝛿𝑗

𝛼𝑆α𝛽

− 𝛿𝛼𝑖 𝛿𝑗

𝛽𝑆𝛽

α] 𝐠𝑖 ⊗ 𝐠𝑗

=1

2[𝛿𝑗

𝑖𝑆𝛽𝛽

+ 𝛿𝑗𝑖𝑆α

α − 𝑆𝑖𝑗− 𝑆𝑖

𝑗] 𝐠𝑖 ⊗ 𝐠𝑗 = (𝛿𝑗

𝑖𝑆αα − 𝑆𝑖

𝑗)𝐠𝑖 ⊗ 𝐠𝑗

= 𝐼1(𝐒)𝟏 − 𝐒

det(𝑺) ≡ |𝑺| ≡ 𝑆 =1

3!𝜖𝑖𝑗𝑘𝜖𝑟𝑠𝑡𝑆𝑖𝑟𝑆𝑗𝑠𝑆𝑘𝑡

Differentiating wrt 𝑆𝛼𝛽 , we obtain,

𝜕𝑆

𝜕𝑆𝛼𝛽𝐠𝛼 ⊗ 𝐠𝛽 =

1

3!𝜖𝑖𝑗𝑘𝜖𝑟𝑠𝑡 [

𝜕𝑆𝑖𝑟

𝜕𝑆𝛼𝛽𝑆𝑗𝑠𝑆𝑘𝑡 + 𝑆𝑖𝑟

𝜕𝑆𝑗𝑠

𝜕𝑆𝛼𝛽𝑆𝑘𝑡 + 𝑆𝑖𝑟𝑆𝑗𝑠

𝜕𝑆𝑘𝑡

𝜕𝑆𝛼𝛽] 𝐠𝛼 ⊗ 𝐠𝛽

=1

3!𝜖𝑖𝑗𝑘𝜖𝑟𝑠𝑡 [𝛿𝑖

𝛼𝛿𝑟𝛽𝑆𝑗𝑠𝑆𝑘𝑡 + 𝑆𝑖𝑟𝛿𝑗

𝛼𝛿𝑠𝛽𝑆𝑘𝑡 + 𝑆𝑖𝑟𝑆𝑗𝑠𝛿𝑘

𝛼𝛿𝑡𝛽] 𝐠𝛼 ⊗ 𝐠𝛽

=1

3!𝜖𝛼𝑗𝑘𝜖𝛽𝑠𝑡[𝑆𝑗𝑠𝑆𝑘𝑡 + 𝑆𝑗𝑠𝑆𝑘𝑡 + 𝑆𝑗𝑠𝑆𝑘𝑡]𝐠𝛼 ⊗ 𝐠𝛽

=1

2!𝜖𝛼𝑗𝑘𝜖𝛽𝑠𝑡𝑆𝑗𝑠𝑆𝑘𝑡𝐠𝛼 ⊗ 𝐠𝛽 ≡ [𝑆c]𝛼𝛽𝐠𝛼 ⊗ 𝐠𝛽

Which is the cofactor of [𝑆𝛼𝛽] or 𝑺

58. For a tensor field 𝜩, The volume integral in the region Ω ⊂ ℰ, ∫ (grad 𝜩)Ω

𝑑𝑣 =

∫ 𝜩 ⊗ 𝒏∂Ω

𝑑𝑠 where 𝒏 is the outward drawn normal to 𝜕Ω – the boundary of Ω.

Show that for a vector field 𝒇

∫ (div 𝒇)Ω

𝑑𝑣 = ∫ 𝒇 ⋅ 𝒏𝜕Ω

𝑑𝑠

Replace 𝜩 by the vector field 𝒇 we have,

∫ (grad 𝒇)Ω

𝑑𝑣 = ∫ 𝒇 ⊗ 𝒏∂Ω

𝑑𝑠

Taking the trace of both sides and noting that both trace and the integral are linear

operations, therefore we have,

∫ (div 𝒇)Ω

𝑑𝑣 = ∫ tr(grad 𝒇)Ω

𝑑𝑣

= ∫ tr(𝒇 ⊗ 𝒏)∂Ω

𝑑𝑠

= ∫ 𝒇 ⋅ 𝒏𝜕Ω

𝑑𝑠

59. Show that for a scalar function 𝜙 the divergence theorem becomes ∫_Ω (grad 𝜙) dv = ∫_∂Ω 𝜙𝒏 ds.

Recall that for a vector field 𝒇,

∫_Ω (div 𝒇) dv = ∫_∂Ω 𝒇 ⋅ 𝒏 ds

if we write, 𝐟 = 𝜙𝒂 where 𝒂 is an arbitrary constant vector, we have,

∫ (div[𝜙𝒂])Ω

𝑑𝑣 = ∫ 𝜙𝒂 ⋅ 𝐧𝜕Ω

𝑑𝑠 = 𝒂 ⋅ ∫ 𝜙𝐧𝜕Ω

𝑑𝑠

For the LHS, note that, div[𝜙𝒂] = tr(grad[𝜙𝒂])

grad[𝜙𝒂] = (𝜙𝑎𝑖),𝑗 𝐠𝑖 ⊗ 𝐠𝑗 = 𝑎𝑖𝜙,𝑗 𝐠𝑖 ⊗ 𝐠𝑗

The trace of which is,

𝑎𝑖𝜙,𝑗 𝐠𝑖 ⋅ 𝐠𝑗 = 𝑎𝑖𝜙,𝑗 𝛿𝑖𝑗= 𝑎𝑖𝜙,𝑖 = 𝒂 ⋅ grad 𝜙

For the arbitrary constant vector 𝒂, we therefore have that,

∫ (div[𝜙𝒂])Ω

𝑑𝑣 = 𝒂 ⋅ ∫ grad 𝜙 Ω

𝑑𝑣 = 𝒂 ⋅ ∫ 𝜙𝐧𝜕Ω

𝑑𝑠

∫ grad 𝜙 Ω

𝑑𝑣 = ∫ 𝜙𝐧𝜕Ω

𝑑𝑠

60. For the tensor 𝐓, given that 𝒀𝑻 = 𝑰, show that 𝑻𝒀 = 𝒀𝑻 = 𝑰.

Consider 𝑻𝒀𝑻𝒖 where 𝒖 is a vector. Since 𝒀𝑻 = 𝑰, it follows that 𝑻𝒀𝑻𝒖 = 𝑻𝑰𝒖 =

𝑻𝒖 ≡ 𝒗 where 𝒗 is also a vector. Clearly,

𝑻𝒀𝑻𝒖 = 𝑻𝒀𝒗 = 𝒗

which immediately shows that 𝑻𝒀 = 𝑰 as required to be shown.

61. Given the basis vectors 𝐠𝑖 , 𝑖 = 1,2,3 show that for any tensor 𝐓, the product

𝐓𝐠𝑖 = 𝑇𝛼𝑖𝐠𝛼

In component form, let 𝐓 = 𝑇𝛼𝛽(𝐠𝛼 ⊗ 𝐠𝛼) so that

𝐓𝐠𝑖 = 𝑇𝛼𝛽(𝐠𝛼 ⊗ 𝐠𝛽)𝐠𝑖

= 𝑇𝛼𝛽(𝐠𝛽 ⋅ 𝐠𝑖)𝐠𝛼 = 𝑇𝛼𝛽𝛿𝑖

𝛽𝐠𝛼

= 𝑇𝛼𝑖𝐠𝛼

62. Show that the trace of a tensor 𝐓, in component form, can be written as 𝑇𝑖.𝑖 =

𝑇𝑖𝑗𝑔𝑖𝑗 = 𝑇𝑖𝑗𝑔𝑖𝑗 .

Observe that any three linearly independent vectors can be treated as a basis of a

coordinate system, 𝐠𝑖 , 𝑖 = 1,2,3. The existence of the dual of these vectors can be

taken as given. Consequently,

tr (𝐓) ≡ 𝐼1(𝐓) =[𝐓𝐠1, 𝐠2, 𝐠3] + [𝐠1, 𝐓𝐠2, 𝐠3] + [𝐠1, 𝐠2, 𝐓𝐠3]

[𝐠1, 𝐠2, 𝐠3]

=[(𝑇𝑖1𝐠

𝑖), 𝐠2, 𝐠3] + [𝐠1, (𝑇𝑗2𝐠𝑗), 𝐠3] + [𝐠1, 𝐠2, (𝑇𝑘3𝐠

𝑘)]

[𝐠1, 𝐠2, 𝐠3]

=[(𝑇𝑖1𝑔

𝛼𝑖𝐠𝛼), 𝐠2, 𝐠3] + [𝐠1, (𝑇𝑗2𝑔𝛽𝑗𝐠𝛽), 𝐠3] + [𝐠1, 𝐠2, (𝑇𝑘3𝑔

𝛾𝑘𝐠𝛾)]

[𝐠1, 𝐠2, 𝐠3]

=𝑇1

𝛼[𝐠𝛼 , 𝐠2, 𝐠3] + 𝑇2𝛽[𝐠1, 𝐠𝛽 , 𝐠3] + 𝑇3

𝛾[𝐠1, 𝐠2, 𝐠𝛾]

𝜖123

=𝑇1

𝛼𝜖𝛼23 + 𝑇2𝛽𝜖1𝛽3 + 𝑇3

𝛾𝜖12𝛾

𝜖123

=𝑇1

1𝜖123 + 𝑇22𝜖123 + 𝑇3

3𝜖123

𝜖123 = 𝑇𝑖

.𝑖 = 𝑇𝑖𝑗𝑔𝑖𝑗 = 𝑇𝑖𝑗𝑔𝑖𝑗

63. Show that [𝐓𝐠𝑖 , 𝐠𝑗 , 𝐠𝑘] + [𝐠𝑖 , 𝐓𝐠𝑗 , 𝐠𝑘] + [𝐠𝑖 , 𝐠𝑗 , 𝐓𝐠𝑘] = 𝑇𝛼𝛼[𝐠𝑖 , 𝐠𝑗 , 𝐠𝑘]

Note that,

[𝐓𝐠1, 𝐠2, 𝐠3] + [𝐠1, 𝐓𝐠2, 𝐠3] + [𝐠1, 𝐠2, 𝐓𝐠3]

= [(𝑇𝑖1𝐠𝑖), 𝐠2, 𝐠3] + [𝐠1, (𝑇𝑗2𝐠

𝑗), 𝐠3] + [𝐠1, 𝐠2, (𝑇𝑘3𝐠𝑘)]

= [(𝑇𝑖1𝑔𝛼𝑖𝐠𝛼), 𝐠2, 𝐠3] + [𝐠1, (𝑇𝑗2𝑔

𝛽𝑗𝐠𝛽), 𝐠3] + [𝐠1, 𝐠2, (𝑇𝑘3𝑔𝛾𝑘𝐠𝛾)]

= 𝑇1𝛼[𝐠𝛼 , 𝐠2, 𝐠3] + 𝑇2

𝛽[𝐠1, 𝐠𝛽 , 𝐠3] + 𝑇3

𝛾[𝐠1, 𝐠2, 𝐠𝛾]

= 𝑇1𝛼𝜖𝛼23 + 𝑇2

𝛽𝜖1𝛽3 + 𝑇3

𝛾𝜖12𝛾

= 𝑇11𝜖123 + 𝑇2

2𝜖123 + 𝑇33𝜖123

= 𝑇𝛼𝛼𝜖123 = 𝑇𝛼

𝛼[𝐠1, 𝐠2, 𝐠3]

Swapping 𝐠2 and 𝐠3, it is clear that,

[𝐓𝐠1, 𝐠3, 𝐠2] + [𝐠1, 𝐓𝐠3, 𝐠2] + [𝐠1, 𝐠3, 𝐓𝐠2]

= −[𝐓𝐠1, 𝐠2, 𝐠3] − [𝐠1, 𝐓𝐠2, 𝐠3] − [𝐠1, 𝐠2, 𝐓𝐠3]

= 𝑇𝛼𝛼𝜖132 = 𝑇𝛼

𝛼[𝐠1, 𝐠3, 𝐠2]

Continuing, we have that

[𝐓𝐠𝑖 , 𝐠𝑗 , 𝐠𝑘] + [𝐠𝑖 , 𝐓𝐠𝑗 , 𝐠𝑘] + [𝐠𝑖 , 𝐠𝑗 , 𝐓𝐠𝑘] = 𝑇𝛼𝛼[𝐠𝑖 , 𝐠𝑗 , 𝐠𝑘] = 𝑇𝛼

𝛼𝜖𝑖𝑗𝑘

64. Show that 𝐼1(𝑻) ≡ tr(𝑻) =[𝑻𝒂,𝒃,𝒄]+[𝒂,𝑻𝒃,𝒄]+[𝒂,𝒃,𝑻𝒄]

[𝒂,𝒃,𝒄] is independent of the choice of

the linearly independent vectors 𝒂, 𝒃 and 𝒄.

Let us refer each vector to a covariant basis so that, 𝒂 = 𝑎𝑖𝐠𝑖 , 𝒃 = 𝑏𝑗𝐠𝑗 , and 𝒄 =

𝑐𝑘𝐠𝑘 . Hence,

𝐼1(𝑻) ≡ tr(𝑻) =[𝑻(𝑎𝑖𝐠𝑖), 𝑏

𝑗𝐠𝑗 , 𝑐𝑘𝐠𝑘] + [𝑎𝑖𝐠𝑖 , 𝑻(𝑏𝑗𝐠𝑗), 𝑐

𝑘𝐠𝑘] + [𝑎𝑖𝐠𝑖 , 𝑏𝑗𝐠𝑗 , 𝑻(𝑐𝑘𝐠𝑘)]

[𝒂, 𝒃, 𝒄]

=𝑎𝑖𝑏𝑗𝑐𝑘([𝑻𝐠𝑖 , 𝐠𝑗 , 𝐠𝑘] + [𝐠𝑖 , 𝑻𝐠𝑗 , 𝐠𝑘] + [𝐠𝑖 , 𝐠𝑗 , 𝑻𝐠𝑘])

[𝒂, 𝒃, 𝒄]

But [𝐓𝐠𝑖 , 𝐠𝑗 , 𝐠𝑘] + [𝐠𝑖 , 𝐓𝐠𝑗 , 𝐠𝑘] + [𝐠𝑖 , 𝐠𝑗 , 𝐓𝐠𝑘] = 𝑇𝛼𝛼[𝐠𝑖 , 𝐠𝑗 , 𝐠𝑘]. We have that

𝐼1(𝑻) =𝑎𝑖𝑏𝑗𝑐𝑘𝑇𝛼

𝛼[𝐠𝑖 , 𝐠𝑗 , 𝐠𝑘]

𝜖𝑖𝑗𝑘𝑎𝑖𝑏𝑗𝑐𝑘

=𝜖𝑖𝑗𝑘𝑎𝑖𝑏𝑗𝑐𝑘

𝜖𝑖𝑗𝑘𝑎𝑖𝑏𝑗𝑐𝑘𝑇𝛼

𝛼 = 𝑇𝛼𝛼

Which is obviously independent of the choice of 𝒂, 𝒃 and 𝒄.

65. Write the second tensor invariant in terms of components

As previously observed, any three linearly independent vectors can be treated as the

basis of a coordinate system, 𝐠𝑖 , 𝑖 = 1,2,3. The existence of the dual of these vectors

can be taken as given. Consequently,

𝐼2(𝐓) =[𝐓𝐠1, 𝐓𝐠2, 𝐠3] + [𝐠1, 𝐓𝐠2, 𝐓𝐠3] + [𝐓𝐠1, 𝐠2, 𝐓𝐠3]

[𝐠1, 𝐠2, 𝐠3]

The first of the numerator terms can be simplified as,

[𝐓𝐠1, 𝐓𝐠2, 𝐠3] = [(𝑇𝑖1𝐠𝑖), (𝑇𝑗2𝐠

𝑗), 𝐠3]

= [(𝑇𝑖1𝑔𝛼𝑖𝐠𝛼), (𝑇𝑗2𝑔

𝛽𝑗𝐠𝛽), 𝐠3] = 𝑇1𝛼𝑇2

𝛽[𝐠𝛼 , 𝐠𝛽 , 𝐠3]

The other terms are similarly simplified. Clearly,

𝐼2(𝐓) =𝑇1

𝛼𝑇2𝛽[𝐠𝛼 , 𝐠𝛽 , 𝐠3] + 𝑇2

𝛽𝑇3

𝛾[𝐠1, 𝐠𝛽 , 𝐠𝛾] + 𝑇1

𝛼𝑇3𝛾[𝐠𝛼 , 𝐠2, 𝐠𝛾]

[𝐠1, 𝐠2, 𝐠3]

=𝑇1

𝛼𝑇2𝛽𝜖𝛼𝛽3 + 𝑇2

𝛽𝑇3

𝛾𝜖1𝛽𝛾 + 𝑇1

𝛼𝑇3𝛾𝜖𝛼2𝛾

𝜖123

=[(𝑇1

1𝑇22 − 𝑇1

2𝑇21) + (𝑇2

2𝑇33 − 𝑇2

3𝑇32) + (𝑇3

3𝑇11 − 𝑇3

1𝑇13)]𝜖123

𝜖123

=1

2(𝑇𝛼

𝛼𝑇𝛽𝛽

− 𝑇𝛽𝛼𝑇𝛼

𝛽)

66. Using direct notation, show that the second invariant of a tensor is the trace of its

cofactor.

Given three basis vectors, (𝐠1, 𝐠2, 𝐠3)

𝐼2(𝐓) =[𝐓𝐠1, 𝐓𝐠2, 𝐠3] + [𝐠1, 𝐓𝐠2, 𝐓𝐠3] + [𝐓𝐠1, 𝐠2, 𝐓𝐠3]

[𝐠1, 𝐠2, 𝐠3]

=𝐓c(𝐠1 × 𝐠2) ⋅ 𝐠3 + 𝐠1 ⋅ 𝐓c(𝐠2 × 𝐠3) + [𝐓c(𝐠3 × 𝐠1) ⋅ 𝐠2]

[𝐠1, 𝐠2, 𝐠3]

=(𝐠1 × 𝐠2) ⋅ 𝐓cT𝐠3 + (𝐠2 × 𝐠3) ⋅ 𝐓cT𝐠1 + (𝐠3 × 𝐠1) ⋅ 𝐓cT𝐠2

[𝐠1, 𝐠2, 𝐠3]

=[𝐠1, 𝐠2, 𝐓

cT𝐠3] + [𝐓cT𝐠1, 𝐠2, 𝐠3] + [𝐠3, 𝐠1, 𝐓cT𝐠2]

[𝐠1, 𝐠2, 𝐠3]

= 𝐼1(𝐓cT) = 𝐼1(𝐓

c)

Which is the trace of its cofactor as required.

67. Express the third tensor invariant in terms of its components.

As previously observed, any three linearly independent vectors can be treated as the

basis of a coordinate system, 𝐠𝑖 , 𝑖 = 1,2,3. The existence of the dual of these vectors

can be taken for granted. Consequently, for any tensor 𝐓,

𝐼3(𝐓) =[𝐓𝐠1, 𝐓𝐠2, 𝐓𝐠3]

[𝐠1, 𝐠2, 𝐠3]=

[(𝑇𝑖1𝐠𝑖), (𝑇𝑗2𝐠

𝑗), (𝑇𝑘3𝐠𝑘)]

𝜖123

=[(𝑇𝑖1𝑔

𝛼𝑖𝐠𝛼), (𝑇𝑗2𝑔𝛽𝑗𝐠𝛽), (𝑇𝑘3𝑔

𝛾𝑘𝐠𝛾)]

𝜖123=

𝑇1𝛼𝑇2

𝛽𝑇3

𝛾[𝐠𝛼 , 𝐠𝛽 , 𝐠𝛾]

𝜖123

=𝑇1

𝛼𝑇2𝛽𝑇3

𝛾𝜖𝛼𝛽𝛾

𝜖123= 𝑒𝛼𝛽𝛾𝑇1

𝛼𝑇2𝛽𝑇3

𝛾= det 𝐓

68. For the tensor 𝐓, the cofactor is defined as 𝐓c ≡ 𝐓−T det 𝐓. Using direct notation

and the property of the cofactor that 𝐓𝐚 × 𝐓𝐛 = 𝐓c(𝐚 × 𝐛) for any two vectors 𝐚, 𝐛,

show that the third invariant of a tensor, defined as 𝐼3(𝐓) =[𝐓𝐠1,𝐓𝐠2,𝐓𝐠3]

[𝐠1,𝐠2,𝐠3] for any set

of independent vectors, (𝐠1, 𝐠2, 𝐠3), is its determinant.

[𝐓𝐠1, 𝐓𝐠2, 𝐓𝐠3] = 𝐓𝐠1 ⋅ (𝐓𝐠2 × 𝐓𝐠3)

= 𝐓𝐠1 ⋅ 𝐓c(𝐠2 × 𝐠3)

= (𝐠2 × 𝐠3) ⋅ (𝐓c)T𝐓𝐠1

= (𝐠2 × 𝐠3) ⋅ 𝐓cT𝐓𝐠1

= (𝐠2 × 𝐠3) ⋅ (det 𝐓)𝐠1

so that

𝐼3(𝐓) =[𝐓𝐠1, 𝐓𝐠2, 𝐓𝐠3]

[𝐠1, 𝐠2, 𝐠3]

= det 𝐓(𝐠2 × 𝐠3) ⋅ 𝐠1

[𝐠1, 𝐠2, 𝐠3]

= det 𝐓

69. In component form, the third tensor invariant of a tensor 𝐓, 𝐼3(𝐓) =

𝑒𝛼𝛽𝛾𝑇1𝛼𝑇2

𝛽𝑇3

𝛾= det 𝐓. Show that 𝑒𝑖𝑗𝑘𝑇𝛼

𝑖𝑇𝛽𝑗𝑇𝛾

𝑘 = 𝑒𝛼𝛽𝛾 det 𝐓.

We do this by first establishing the fact that the LHS is completely antisymmetric in

𝛼, 𝛽 and 𝛾. We note that the indices 𝑖, 𝑗 and 𝑘 are dummy and therefore,

𝑒𝑖𝑗𝑘𝑇𝛼𝑖𝑇𝛽

𝑗𝑇𝛾

𝑘 = 𝑒𝑘𝑗𝑖𝑇𝛼𝑘𝑇𝛽

𝑗𝑇𝛾

𝑖 = 𝑒𝑘𝑗𝑖𝑇𝛾𝑖𝑇𝛼

𝑘𝑇𝛽𝑗= −𝑒𝑖𝑗𝑘𝑇𝛾

𝑖𝑇𝛽𝑗𝑇𝛼

𝑘

Showing that a simple swap of 𝛼 and 𝛾 changes the sign. This is similarly true for the

other pairs in the lower symbols. Thus we establish anti-symmetry in 𝛼, 𝛽 and 𝛾.

Noting that both sides take the same values when 𝛼, 𝛽 and 𝛾 are equal to 1, 2 and 3

respectively. The arrangement of the indices makes this value positive or negative in

the same antisymmetric way. This completes the proof. Similarly we can write,

𝑒𝑖𝑗𝑘𝑇𝑖𝛼𝑇𝑗

𝛽𝑇𝑘

𝛾= 𝑒𝑖𝑗𝑘𝑇𝑖

1𝑇𝑗2𝑇𝑘

3𝑒𝛼𝛽𝛾 = 𝑒𝛼𝛽𝛾det 𝐓

70. Given the basis vectors, 𝐠1, 𝐠2, 𝐠3 and their dual, 𝐠1, 𝐠2, 𝐠3, Show that for any

other basis pair, 𝛄1, 𝛄2, 𝛄3 and their dual, 𝛄1, 𝛄2, 𝛄3, the relationship,

(𝛄𝑖 ⋅ 𝐠𝛼)(𝛄𝑖 ⋅ 𝐠𝛽) = 𝛿𝛼𝛽

holds. By simply reversing the step, it is immediately obvious that,

(𝛄𝑖 ⋅ 𝐠𝛼)(𝛄𝑖 ⋅ 𝐠𝛽) = [(𝛄𝑖 ⊗ 𝛄𝑖)𝐠𝛼] ⋅ 𝐠𝛽

Observe that the expression in the parentheses is the unit tensor – having no effect

on a vector; it follows that,

(𝛄𝑖 ⋅ 𝐠𝛼)(𝛄𝑖 ⋅ 𝐠𝛽) = [(𝛄𝑖 ⊗ 𝛄𝑖)𝐠𝛼] ⋅ 𝐠𝛽 = 𝐠𝛼 ⋅ 𝐠𝛽 = 𝛿𝛼𝛽

71. Show that the first invariant has the same value in every coordinate system.

We have shown elsewhere that for any tensor 𝑻 the first invariant, 𝐼1(𝑻)=

tr(𝑻) ≡[{𝐓𝐠1}, 𝐠2, 𝐠3] + [𝐠1, {𝐓𝐠2}, 𝐠3] + [𝐠1, 𝐠2, {𝐓𝐠3}]

[𝐠1, 𝐠2, 𝐠3]= 𝑇𝑖𝑗𝑔

𝑖𝑗 = 𝑇𝑖.𝑖

We proceed to show that this quantity has the same value in any other independent

set of basis vectors. Let (𝜸1, 𝜸2, 𝜸3) be another arbitrary set of linearly independent

vectors. They therefore form a basis of a coordinate system. Let (𝜸1, 𝜸2, 𝜸3) be the

dual to the set. It is easy to establish that the new set will be related to the old in;

𝛄𝑗 = (𝛄𝒋 ⋅ 𝐠𝛼)𝐠𝛼 , and 𝛄𝑖 = (𝛄𝑖 ⋅ 𝐠𝛽)𝐠𝛽

Let us write the (𝑖, 𝑗)𝑡ℎ components in the 𝜸 − bases as �̃�𝑖.𝑗.

Clearly,

�̃�𝑖.𝑗

≡ 𝛄𝑖 ⋅ 𝐓𝛄𝑗 = (𝛄𝒊 ⋅ 𝐠𝛼)𝐠𝛼 ⋅ [𝐓(𝛄𝑗 ⋅ 𝐠𝛽)𝐠𝛽]

= (𝛄𝒊 ⋅ 𝐠𝛼)(𝛄𝑗 ⋅ 𝐠𝛽)[𝐠𝛼 ⋅ 𝐓𝐠𝛽]

= [(𝛄𝑗 ⊗ 𝛄𝑖)𝐠𝛼] ⋅ 𝐠𝛽 [𝑇𝛼

.𝛽]

Contracting 𝑖 with 𝑗, we obtain the sum,

�̃�𝑖.𝑖 = 𝛄𝑖 ⋅ 𝐓𝛄𝑖 = [(𝛄𝑖 ⊗ 𝛄𝑖)𝐠

𝛼] ⋅ 𝐠𝛽 [𝑇𝛼.𝛽

]

= 𝐠𝜶 ⋅ 𝐠𝛽𝑇𝛼.𝛽

= 𝛿𝛽𝛼𝑇𝛼

.𝛽= 𝑇𝛼

.𝛼

So that the trace has the same value in any arbitrarily chosen coordinate system

including curvilinear ones whether orthogonal or not.

72. Express the second invariant of a tensor in terms of traces only. Show that the

second invariant has the same value in all coordinate systems.

For arbitrary vectors bases, (𝐠1, 𝐠2, 𝐠3) the second principal invariant of a tensor 𝐓

𝐼2(𝐓) =[𝐓𝐠1, 𝐓𝐠2, 𝐠3] + [𝐠1, 𝐓𝐠2, 𝐓𝐠3] + [𝐓𝐠1, 𝐠2, 𝐓𝐠3]

[𝐠1, 𝐠2, 𝐠3], or

𝐼2(𝐓) =1

2(𝑇𝛼

𝛼𝑇𝛽𝛽

− 𝑇𝛽𝛼𝑇𝛼

𝛽) =

1

2(𝐼1

2(𝐓) − 𝐼1(𝐓2))

That is, half of the square of the trace minus the trace of the square. These therefore

are unchanged by coordinate transformation because traces are unchanged by coordinate transformation.

73. Show that the third tensor invariant is unaffected by a coordinate transformation.

We begin by showing that we can express the third invariant in terms of traces only.

Given a tensor 𝐒, by Cayley Hamilton theorem,

𝐒3 − 𝐼1𝐒2 + 𝐼2𝐒 − 𝐼3 = 0

Note that 𝐼1 = 𝐼1(𝐒), 𝐼2 = 𝐼2(𝐒) and 𝐼3 = 𝐼3(𝐒) the first, second and third invariants

of 𝐒 are all scalar functions of the tensor 𝐒. Taking the trace of the above equation,

𝐼1(𝐒3) = 𝐼1(𝐒)𝐼1(𝐒

2) − 𝐼2(𝐒)𝐼1(𝐒) + 3𝐼3(𝐒)

= 𝐼1(𝑺)(𝐼12(𝐒) − 2𝐼2(𝐒)) − 𝐼1(𝐒)𝐼2(𝐒) + 3𝐼3(𝐒)

= 𝐼13(𝐒) − 3𝐼1(𝐒)𝐼2(𝐒) + 3𝐼3(𝐒)

Or, 𝐼3(𝐒) =1

3(𝐼1(𝐒

3) − 𝐼13(𝐒) + 3𝐼1(𝐒)𝐼2(𝐒))

which show that the third invariant is itself expressible in terms of traces only. It is

therefore invariant in value as a result of coordinate transformation.

74. For any tensor 𝐓, the arbitrary vector 𝐮 and the scalar 𝜆, show that the eigenvalue

problem, 𝐓𝐮 = 𝜆𝐮 leads to the characteristic equation, 𝜆3 − 𝐼1𝜆2 + 𝐼2𝜆 − 𝐼3 = 0,

where 𝐼1 = 𝐼1(𝐓), 𝐼2 = 𝐼2(𝐓) and 𝐼3 = 𝐼3(𝐓) the first, second and third invariants of

𝐓.

Writing the tensor and vector in component forms, we have

𝐓𝐮 = 𝑇𝑖.𝑗(𝐠𝑖 ⊗ 𝐠𝑗)𝑢𝑘𝐠𝑘 = 𝜆𝐮 = 𝜆𝑢𝑖𝐠

𝑖

So that,

𝐓𝐮 − 𝜆𝐮 = 𝑇𝑖.𝑗𝐠𝑖(𝐠𝑗 ⋅ 𝐠𝑘)𝑢𝑘 − 𝜆𝑢𝑖𝐠

𝑖 = 𝑇𝑖.𝑗𝐠𝑖𝑢𝑗 − 𝜆𝑢𝑖𝐠

𝑖

= 𝑇𝑖.𝑗𝐠𝑖𝑢𝑗 − 𝜆𝑢𝑗𝛿𝑖

𝑗𝐠𝑖 = (𝑇𝑖

.𝑗− 𝜆𝛿𝑖

𝑗)𝑢𝑗𝐠

𝑖 = 𝒐

Which is possible only if the coefficient determinant, |𝑇𝑖.𝑗

− 𝜆𝛿𝑖𝑗| vanishes.

Expanding, we find that,

−𝑇31𝑇2

2𝑇13 + 𝑇2

1𝑇32𝑇1

3 + 𝑇31𝑇1

2𝑇23 − 𝑇1

1𝑇32𝑇2

3 − 𝑇21𝑇1

2𝑇33 + 𝑇1

1𝑇22𝑇3

3 + 𝑇21𝑇1

2𝜆 − 𝑇11𝑇2

2𝜆

+ 𝑇31𝑇1

3𝜆 + 𝑇32𝑇2

3𝜆 − 𝑇11𝑇3

3𝜆 − 𝑇22𝑇3

3𝜆 + 𝑇11𝜆2 + 𝑇2

2𝜆2 + 𝑇33𝜆2 − 𝜆3

= −𝑇31𝑇2

2𝑇13 + 𝑇2

1𝑇32𝑇1

3 + 𝑇31𝑇1

2𝑇23 − 𝑇1

1𝑇32𝑇2

3 − 𝑇21𝑇1

2𝑇33 + 𝑇1

1𝑇22𝑇3

3

+ (𝑇21𝑇1

2 − 𝑇11𝑇2

2 + 𝑇31𝑇1

3 + 𝑇32𝑇2

3 − 𝑇11𝑇3

3 − 𝑇22𝑇3

3)𝜆 + (𝑇11 + 𝑇2

2 + 𝑇33)𝜆2

− 𝜆3 = 0

Or, 𝜆3 − 𝐼1𝜆2 + 𝐼2𝜆 − 𝐼3 = 0, as required.
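As a numerical illustration (a sketch assuming NumPy), the eigenvalues of a random tensor should satisfy the characteristic polynomial built from the three invariants:

import numpy as np

T = np.random.rand(3, 3)
I1 = np.trace(T)
I2 = 0.5 * (np.trace(T)**2 - np.trace(T @ T))
I3 = np.linalg.det(T)

# Every eigenvalue (possibly complex) satisfies lambda^3 - I1 lambda^2 + I2 lambda - I3 = 0.
for lam in np.linalg.eigvals(T):
    assert abs(lam**3 - I1 * lam**2 + I2 * lam - I3) < 1e-10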

75. For arbitrary vectors 𝐚, 𝐛 and 𝐜, and a tensor 𝐓 use the relationship,

[𝜆𝐚 − 𝐓𝐚, 𝜆𝐛 − 𝐓𝐛, 𝜆𝐜 − 𝐓𝐜 ] = det(𝜆𝐈 − 𝐓)[𝐚, 𝐛, 𝐜] and the characteristic equation

𝜆3 − 𝐼1(𝐓)𝜆2 + 𝐼2(𝐓)𝜆 − 𝐼3(𝐓) = 0 to find expressions for the for the invariants of

𝐓. The characteristic equation, det(𝜆𝐈 − 𝐓) = 0 immediately implies that

[𝜆𝐚 − 𝐓𝐚, 𝜆𝐛 − 𝐓𝐛, 𝜆𝐜 − 𝐓𝐜 ] = 0.

Expanding the scalar triple product, we have

[𝐚, 𝐛, 𝐜]𝜆3 − ([𝐓𝐚, 𝐛, 𝐜] + [𝐚, 𝐓𝐛, 𝐜] + [𝐚, 𝐛, 𝐓𝐜])𝜆2

+ ([𝐓𝐚, 𝐓𝐛, 𝐜] + [𝐚, 𝐓𝐛, 𝐓𝐜] + [𝐓𝐚, 𝐛, 𝐓𝐜])𝜆 − [𝐓𝐚, 𝐓𝐛, 𝐓𝐜] = 0

From which we can see that,

𝐼1(𝐓) =[𝐓𝐚, 𝐛, 𝐜] + [𝐚, 𝐓𝐛, 𝐜] + [𝐚, 𝐛, 𝐓𝐜]

[𝐚, 𝐛, 𝐜]

𝐼2(𝐓) =[𝐓𝐚, 𝐓𝐛, 𝐜] + [𝐚, 𝐓𝐛, 𝐓𝐜] + [𝐓𝐚, 𝐛, 𝐓𝐜]

[𝐚, 𝐛, 𝐜], and

𝐼3(𝐓) =[𝐓𝐚, 𝐓𝐛, 𝐓𝐜]

[𝐚, 𝐛, 𝐜]

Assuming we have carefully chosen [𝐚, 𝐛, 𝐜] ≠ 0.

76. For any two vectors 𝐮 and 𝐯, find the principal invariants of the dyad 𝐮 ⊗ 𝐯

The first principal invariant 𝐼1(𝐮 ⊗ 𝐯) is the trace, defined as,

tr(𝐮 ⊗ 𝐯 ) =[{(𝐮 ⊗ 𝐯) 𝐠1}, 𝐠2, 𝐠3] + [𝐠1, {(𝐮 ⊗ 𝐯) 𝐠2}, 𝐠3] + [𝐠1, 𝐠2, {(𝐮 ⊗ 𝐯)𝐠3}]

[𝐠1, 𝐠2, 𝐠3]

=1

𝜖123

{[𝑣1𝐮, 𝐠2, 𝐠3] + [𝐠1, 𝑣2𝐮, 𝐠3] + [𝐠1, 𝐠2, 𝑣3𝐮]}

=1

𝜖123{(𝑣1𝐮) ⋅ (𝜖23𝑖𝐠

𝑖) + (𝜖31𝑖𝐠𝑖) ⋅ (𝑣2𝐮) + (𝜖12𝑖𝐠

𝑖) ⋅ (𝑣3𝐮)}

=1

𝜖123

{(𝑣1𝐮) ⋅ (𝜖231𝐠1) + (𝜖312𝐠

2) ⋅ (𝑣2𝐮) + (𝜖123𝐠3) ⋅ (𝑣3𝐮)} = 𝑣𝑖𝑢

𝑖 .

The second principal invariant 𝐼2(𝐮 ⊗ 𝐯) =

=[{(𝐮 ⊗ 𝐯) 𝐠1}, (𝐮 ⊗ 𝐯)𝐠2, 𝐠3] + [𝐠1, {(𝐮 ⊗ 𝐯) 𝐠2}, (𝐮 ⊗ 𝐯)𝐠3] + [(𝐮 ⊗ 𝐯)𝐠1, 𝐠2, {(𝐮 ⊗ 𝐯)𝐠3}]

[𝐠1, 𝐠2, 𝐠3]

=1

𝜖123{[𝑣1𝐮, 𝑣2𝐮, 𝐠3] + [𝐠1, 𝑣2𝐮, 𝑣3𝐮] + [𝑣1𝐮, 𝐠2, 𝑣3𝐮]} = 0. Vanishes – collinearity.

The third invariant 𝐼3(𝐮 ⊗ 𝐯) =1

𝜖123[𝑣1𝐮, 𝑣2𝐮, 𝑣3𝐮] which also vanishes on account of

collinearity

77. Define the cofactor of a tensor as, cof 𝐓 ≡ 𝐓c ≡ 𝐓−𝐓 det 𝐓. Show that, for any

pair of independent vectors 𝐮 and 𝐯 the cofactor satisfies, 𝐓𝐮 × 𝐓𝐯 = 𝐓c(𝐮 × 𝐯)

First note that if 𝐓 is invertible, the independence of the vectors 𝒖 and 𝒗 implies the

independence of vectors 𝐓𝐮 and 𝐓𝐯. Consequently, we can define the non-vanishing

𝐧 ≡ 𝐓𝐮 × 𝐓𝐯 ≠ 𝟎.

It follows that 𝐧 must be on the perpendicular line to both 𝐓𝐮 and 𝐓𝐯. Therefore,

𝐧 ⋅ 𝐓𝐮 = 𝐧 ⋅ 𝐓𝐯 = 𝟎.

We can also take a transpose and write,

𝐮 ⋅ 𝐓𝐓𝐧 = 𝐯 ⋅ 𝐓𝐓𝐧 = 𝟎

Showing that the vector 𝐓𝐓𝐧 is perpendicular to both 𝐮 and 𝐯. It follows that ∃ 𝛼 ∈𝕽

such that

𝐓𝐓𝐧 = 𝛼(𝐮 × 𝐯)

Therefore, 𝐓T(𝐓𝐮 × 𝐓𝐯) = 𝛼(𝐮 × 𝐯).

Let 𝐰 = 𝐮 × 𝐯 so that 𝐮, 𝐯 and 𝐰 are linearly independent, then we can take a scalar

product of the above equation and obtain,

𝐰 ⋅ 𝐓T(𝐓𝐮 × 𝐓𝐯) = 𝛼(𝐮 × 𝐯 ⋅ 𝐰)

The LHS is also 𝐓𝐰 ⋅ (𝐓𝐮 × 𝐓𝐯) = 𝐓𝐮 × 𝐓𝐯 ⋅ 𝐓𝐰. In the equation, 𝐓𝐮 ×

𝐓𝐯 ⋅ 𝐓𝐰 = 𝛼(𝐮 × 𝐯 ⋅ 𝐰), it is clear that

𝛼 = det 𝐓

We have that, 𝐓𝐮 × 𝐓𝐯 = 𝐓−T det 𝐓 (𝐮 × 𝐯). And therefore, we have that,

𝐓𝐮 × 𝐓𝐯 = 𝐓−T det 𝐓 (𝐮 × 𝐯) = 𝐓c(𝐮 × 𝐯).

78. Find an expression for the components of the cofactor of tensor 𝐓 in terms of

the components of 𝐓.

We now express the cofactor in its general components.

𝐓c = (𝐓c)𝑖𝛼𝐠𝛼 ⊗ 𝐠𝑖 = (𝐠α ⋅ 𝐓c𝐠𝑖)𝐠𝛼 ⊗ 𝐠𝑖

=1

2𝜖𝑖𝑗𝑘[𝐠α ⋅ 𝐓c(𝐠𝑗 × 𝐠𝑘)]𝐠𝛼 ⊗ 𝐠𝑖

=1

2𝜖𝑖𝑗𝑘[𝐠α ⋅ (𝐓𝐠𝑗) × (𝐓𝐠𝑘)]𝐠𝛼 ⊗ 𝐠𝑖 .

The scalar in brackets,

𝐠α ⋅ (𝐓𝐠𝑗) × (𝐓𝐠𝑘) = 𝐠α ⋅ 𝜖𝑙𝑚𝑛(𝐠𝑚 ⋅ 𝐓𝐠𝑗)(𝐠𝑛 ⋅ 𝐓𝐠𝑘)𝐠𝑙

= 𝛿𝑙𝛼𝜖𝑙𝑚𝑛(𝐠𝑚 ⋅ 𝐓𝐠𝑗)(𝐠𝑛 ⋅ 𝐓𝐠𝑘)

= 𝛿𝑙𝛼𝜖𝑙𝑚𝑛𝑇𝑚

𝑗𝑇𝑛

𝑘 = 𝜖𝛼𝑚𝑛𝑇𝑚𝑗𝑇𝑛

𝑘

Inserting this above, we therefore have, in invariant component form,

𝐓c =1

2𝜖𝑖𝑗𝑘[𝐠α ⋅ (𝐓𝐠𝑗) × (𝐓𝐠𝑘)]𝐠𝛼 ⊗ 𝐠𝑖

=1

2𝜖𝑖𝑗𝑘𝜖𝛼𝑚𝑛𝑇𝑚

𝑗𝑇𝑛

𝑘𝐠𝛼 ⊗ 𝐠𝑖

=1

2𝛿𝑖𝑗𝑘

𝑙𝑚𝑛𝑇𝑚𝑗𝑇𝑛

𝑘𝐠𝑙 ⊗ 𝐠𝑖

79. Given that cof 𝐓 =1

2𝛿𝑖𝑗𝑘

𝑙𝑚𝑛𝑇𝑚𝑗𝑇𝑛

𝑘𝐠𝑙 ⊗ 𝐠𝑖 if 𝐓 = 𝐮 ⊗ 𝐩 + 𝐯 ⊗ 𝐪 + 𝐰 ⊗ 𝐫, show that

cof 𝐓 = 𝐮 × 𝐯 ⊗ 𝐩 × 𝐪 + 𝐯 × 𝐰 ⊗ 𝐪 × 𝐫 + 𝐰 × 𝐮 ⊗ 𝐫 × 𝐩, tr 𝐓 = 𝐮 ⋅ 𝐩 + 𝐯 ⋅ 𝐪 +

𝐰 ⋅ 𝐫 and 𝐼2(𝐓) = 𝐮 × 𝐯 ⋅ 𝐩 × 𝐪 + 𝐯 × 𝐰 ⋅ 𝐪 × 𝐫 + 𝐰 × 𝐮 ⋅ 𝐫 × 𝐩

Clearly, 𝑇𝑗𝑚 = 𝑢𝑗𝑝

𝑚 + 𝑣𝑗𝑞𝑚 + 𝑤𝑗𝑟

𝑚. The cofactor

cof 𝐓 =1

2𝛿𝑖𝑗𝑘

𝑙𝑚𝑛(𝑢𝑗𝑝𝑚 + 𝑣𝑗𝑞

𝑚 + 𝑤𝑗𝑟𝑚)(𝑢𝑘𝑝𝑛 + 𝑣𝑘𝑞𝑛 + 𝑤𝑘𝑟𝑛)𝐠𝑙 ⊗ 𝐠𝑖

=1

2𝛿𝑖𝑗𝑘

𝑙𝑚𝑛 ((𝑢𝑗𝑝𝑚𝑢𝑘𝑝𝑛 + 𝑣𝑗𝑞

𝑚𝑣𝑘𝑞𝑛 + 𝑤𝑗𝑟𝑚𝑤𝑘𝑟

𝑛)

+ 2(𝑢𝑗𝑝𝑚𝑣𝑘𝑞𝑛 + 𝑣𝑗𝑞

𝑚𝑤𝑘𝑟𝑛 + 𝑢𝑘𝑝𝑛𝑤𝑗𝑟

𝑚)) 𝐠𝑙 ⊗ 𝐠𝑖

= 𝛿𝑖𝑗𝑘𝑙𝑚𝑛(𝑢𝑗𝑝

𝑚𝑣𝑘𝑞𝑛 + 𝑣𝑗𝑞𝑚𝑤𝑘𝑟

𝑛 + 𝑢𝑘𝑝𝑛𝑤𝑗𝑟𝑚)𝐠𝑙 ⊗ 𝐠𝑖

= 𝐮 × 𝐯 ⊗ 𝐩 × 𝐪 + 𝐯 × 𝐰 ⊗ 𝐪 × 𝐫 + 𝐰 × 𝐮 ⊗ 𝐫 × 𝐩

as the terms in the first inner parentheses vanish on account of symmetry and anti-symmetry in

𝑚, 𝑛, as well as 𝑗, 𝑘. The rest of the results follow by a change in the tensor product to the dot

product once we take the traces of the tensor and its cofactor noting that

𝐼2(𝐓) = tr(cof 𝐓) = 𝐮 × 𝐯 ⋅ 𝐩 × 𝐪 + 𝐯 × 𝐰 ⋅ 𝐪 × 𝐫 + 𝐰 × 𝐮 ⋅ 𝐫 × 𝐩

80. Find an expression for the inverse of the tensor 𝐓 in terms of the components of

𝐓.

The cofactor 𝐓c = 𝐓−T det 𝐓 . Transposing the equation, we have,

𝐓−1 =1

det 𝐓(𝐓c)T

=1

2det 𝐓𝛿𝑖𝑗𝑘

𝑙𝑚𝑛𝑇𝑚𝑗𝑇𝑛

𝑘𝐠𝑖 ⊗ 𝐠𝑙

81. Let 𝐓 be invertible and assume we can select the vectors 𝐮 and 𝐯 such that 1 + 𝐯 ⋅ 𝐓⁻¹𝐮 ≠ 0. Show that (𝐓 + 𝐮 ⊗ 𝐯)⁻¹ = 𝐓⁻¹ − 𝐓⁻¹(𝐮 ⊗ 𝐯)𝐓⁻¹ / (1 + 𝐯 ⋅ 𝐓⁻¹𝐮).

Let (𝐓 + 𝐮 ⊗ 𝐯)𝐱 = 𝐲. In our attempt to find 𝐱, it will be necessary to find the inverse of 𝐓 + 𝐮 ⊗ 𝐯. We proceed indirectly and operate with the tensor 𝐓⁻¹ on this vector equation to obtain

𝐓⁻¹(𝐓 + 𝐮 ⊗ 𝐯)𝐱 = 𝐱 + (𝐓⁻¹𝐮 ⊗ 𝐯)𝐱 = 𝐱 + 𝐓⁻¹𝐮(𝐯 ⋅ 𝐱) = 𝐓⁻¹𝐲

Taking the dot product of both sides with 𝐯,

𝐯 ⋅ 𝐱 + (𝐯 ⋅ 𝐓⁻¹𝐮)(𝐯 ⋅ 𝐱) = 𝐯 ⋅ 𝐓⁻¹𝐲

so that the scalar

𝐯 ⋅ 𝐱 = (𝐯 ⋅ 𝐓⁻¹𝐲) / (1 + 𝐯 ⋅ 𝐓⁻¹𝐮)

But we can write

𝐱 = (𝐓 + 𝐮 ⊗ 𝐯)⁻¹𝐲 = 𝐓⁻¹𝐲 − 𝐓⁻¹𝐮(𝐯 ⋅ 𝐱)

Substituting for 𝐯 ⋅ 𝐱, we have

(𝐓 + 𝐮 ⊗ 𝐯)⁻¹𝐲 = 𝐓⁻¹𝐲 − 𝐓⁻¹𝐮(𝐯 ⋅ 𝐓⁻¹𝐲) / (1 + 𝐯 ⋅ 𝐓⁻¹𝐮) = 𝐓⁻¹𝐲 − [𝐓⁻¹(𝐮 ⊗ 𝐯)𝐓⁻¹]𝐲 / (1 + 𝐯 ⋅ 𝐓⁻¹𝐮)

so that

(𝐓 + 𝐮 ⊗ 𝐯)⁻¹ = 𝐓⁻¹ − 𝐓⁻¹(𝐮 ⊗ 𝐯)𝐓⁻¹ / (1 + 𝐯 ⋅ 𝐓⁻¹𝐮)

provided, as we have assumed, that 1 + 𝐯 ⋅ 𝐓⁻¹𝐮 ≠ 0.
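The rank-one update formula just derived (the Sherman-Morrison identity) is easy to confirm numerically; a minimal sketch assuming NumPy, with 𝐓 shifted by a multiple of the identity only to keep it comfortably invertible:

import numpy as np

T = np.random.rand(3, 3) + 3 * np.eye(3)      # comfortably invertible
u, v = np.random.rand(3), np.random.rand(3)

Tinv = np.linalg.inv(T)
denom = 1.0 + v @ Tinv @ u                    # assumed nonzero here
update = Tinv - (Tinv @ np.outer(u, v) @ Tinv) / denom

assert np.allclose(update, np.linalg.inv(T + np.outer(u, v)))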

82. For the invertible tensor 𝐓 and the tensors 𝐅, 𝐕 and 𝐔, show that

(𝐓 + 𝐔𝐅𝐕)⁻¹ = 𝐓⁻¹ − 𝐓⁻¹𝐔(𝐅⁻¹ + 𝐕𝐓⁻¹𝐔)⁻¹𝐕𝐓⁻¹

First consider the block matrix with blocks 𝐓, −𝐔, 𝐕 and 𝐅⁻¹. Its inverse is obtained by solving the matrix equation

[ 𝐀  𝐁 ] [ 𝐓   −𝐔  ]   [ 𝟏  𝟎 ]
[ 𝐂  𝐃 ] [ 𝐕   𝐅⁻¹ ] = [ 𝟎  𝟏 ]

The first block row yields

𝐀𝐓 + 𝐁𝐕 = 𝟏
−𝐀𝐔 + 𝐁𝐅⁻¹ = 𝟎  ⟹  𝐁 = 𝐀𝐔𝐅

so that

𝐀𝐓 + 𝐀𝐔𝐅𝐕 = 𝐀(𝐓 + 𝐔𝐅𝐕) = 𝟏  ⟹  𝐀 = (𝐓 + 𝐔𝐅𝐕)⁻¹

But from the first equation 𝐀 = (𝟏 − 𝐁𝐕)𝐓⁻¹ = 𝐓⁻¹ − 𝐁𝐕𝐓⁻¹. Substituting this in the second equation, (𝐓⁻¹ − 𝐁𝐕𝐓⁻¹)𝐔 = 𝐁𝐅⁻¹, so that

𝐁 = 𝐓⁻¹𝐔(𝐅⁻¹ + 𝐕𝐓⁻¹𝐔)⁻¹

𝐀 = 𝐓⁻¹ − 𝐁𝐕𝐓⁻¹ = 𝐓⁻¹ − 𝐓⁻¹𝐔(𝐅⁻¹ + 𝐕𝐓⁻¹𝐔)⁻¹𝐕𝐓⁻¹

Finally, 𝐀 = (𝐓 + 𝐔𝐅𝐕)⁻¹ = 𝐓⁻¹ − 𝐓⁻¹𝐔(𝐅⁻¹ + 𝐕𝐓⁻¹𝐔)⁻¹𝐕𝐓⁻¹ as required.

In the special case when 𝐅 is the unit tensor, we have

(𝐓 + 𝐔𝐕)⁻¹ = 𝐓⁻¹ − 𝐓⁻¹𝐔(𝟏 + 𝐕𝐓⁻¹𝐔)⁻¹𝐕𝐓⁻¹
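The same kind of numerical check works for this Woodbury-type identity; a sketch assuming NumPy, with the matrices scaled so the inverses involved exist for a generic random draw:

import numpy as np

n = 3
T = np.random.rand(n, n) + 3 * np.eye(n)
F = np.random.rand(n, n) + 3 * np.eye(n)
U = 0.3 * np.random.rand(n, n)
V = 0.3 * np.random.rand(n, n)

Tinv, Finv = np.linalg.inv(T), np.linalg.inv(F)
rhs = Tinv - Tinv @ U @ np.linalg.inv(Finv + V @ Tinv @ U) @ V @ Tinv
assert np.allclose(rhs, np.linalg.inv(T + U @ F @ V))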

83. Determinant is not a linear operation. For any two tensors 𝐒 and 𝐓, use the

component representation of the cofactor to show that the determinant of the sum

det(𝐒 + 𝐓) = det(𝐒) + tr(𝐓c𝐒T) + tr(𝐒c𝐓T) + det(𝐓)

det(𝐒 + 𝐓) = 𝑒𝑖𝑗𝑘(𝑆𝑖1 + 𝑇𝑖

1)(𝑆𝑗2 + 𝑇𝑗

2)(𝑆𝑘3 + 𝑇𝑘

3)

= 𝑒𝑖𝑗𝑘[𝑆𝑖1𝑆𝑗

2𝑆𝑘3 + (𝑇𝑖

1𝑆𝑗2𝑆𝑘

3 + 𝑆𝑖1𝑇𝑗

2𝑆𝑘3 + 𝑆𝑖

1𝑆𝑗2𝑇𝑘

3) + 𝑇𝑖1𝑇𝑗

2𝑇𝑘3

+ (𝑇𝑖1𝑇𝑗

2𝑆𝑘3 + 𝑆𝑖

1𝑇𝑗2𝑇𝑘

3 + 𝑇𝑖1𝑆𝑗

2𝑇𝑘3)]

Now, we can write,

𝐒T𝐓c = (𝑆𝛼𝛽𝐠𝛼 ⊗ 𝐠𝛽)T(1

2𝛿𝑖𝑗𝑘

𝑙𝑚𝑛𝑇𝑚𝑗𝑇𝑛

𝑘𝐠𝑙 ⊗ 𝐠𝑖)

=1

2𝛿𝑖𝑗𝑘

𝑙𝑚𝑛𝑆𝛼𝛽𝑇𝑚𝑗𝑇𝑛

𝑘(𝐠𝛽 ⊗ 𝐠𝛼)(𝐠𝑙 ⊗ 𝐠𝑖)

=1

2𝛿𝑖𝑗𝑘

𝑙𝑚𝑛𝑆𝛼𝛽𝑇𝑚𝑗𝑇𝑛

𝑘𝛿𝑙𝛼(𝐠𝛽 ⊗ 𝐠𝑖) =

1

2𝛿𝑖𝑗𝑘

𝑙𝑚𝑛𝑆𝑙𝛽𝑇𝑚𝑗𝑇𝑛

𝑘(𝐠𝛽 ⊗ 𝐠𝑖)

So that tr(𝐒T𝐓c) =1

2𝛿𝑖𝑗𝑘

𝑙𝑚𝑛𝑆𝑙𝛽𝑇𝑚𝑗𝑇𝑛

𝑘 = 𝑒𝑖𝑗𝑘(𝑇𝑖1𝑇𝑗

2𝑆𝑘3 + 𝑆𝑖

1𝑇𝑗2𝑇𝑘

3 + 𝑇𝑖1𝑆𝑗

2𝑇𝑘3) after

expansion. Similarly, tr(𝐓T𝐒c) expands to 𝑒𝑖𝑗𝑘(𝑇𝑖1𝑆𝑗

2𝑆𝑘3 + 𝑆𝑖

1𝑇𝑗2𝑆𝑘

3 + 𝑆𝑖1𝑆𝑗

2𝑇𝑘3) from

which the result immediately follows.

84. Given that 𝐓c is the cofactor of the tensor 𝐓, show that 𝐼1(𝐓c) = 𝐼2(𝐓) that is,

that the trace of the cofactor is the second principal invariant of the original tensor:

tr(𝐓c) =1

2𝛿𝑖𝑗𝑘

𝑙𝑚𝑛𝑇𝑚𝑗𝑇𝑛

𝑘𝐠𝑙 ⋅ 𝐠𝑖 = 𝐼1(𝐓c)

=1

2𝛿𝑖𝑗𝑘

𝑙𝑚𝑛𝑇𝑚𝑗𝑇𝑛

𝑘𝛿𝑙𝑖 =

1

2𝛿𝑖𝑗𝑘

𝑖𝑚𝑛𝑇𝑚𝑗𝑇𝑛

𝑘

=1

2(𝛿𝑗

𝑚𝛿𝑘𝑛 − 𝛿𝑘

𝑚𝛿𝑗𝑛)𝑇𝑚

𝑗𝑇𝑛

𝑘

=1

2(𝑇𝑗

𝑗𝑇𝑘

𝑘 − 𝑇𝑘𝑗𝑇𝑗

𝑘) = 𝐼2(𝐓)

85. For a scalar 𝛼 show that det 𝛼𝐀 = 𝛼3 det 𝐀

Given that det 𝐀 =[𝐀𝐚,𝐀𝐛,𝐀𝐜]

[𝐚,𝐛,𝐜], then

det 𝛼𝐀 =[𝛼𝐀𝐚, 𝛼𝐀𝐛, 𝛼𝐀𝐜]

[𝐚, 𝐛, 𝐜]= 𝛼3

[𝐀𝐚, 𝐀𝐛, 𝐀𝐜]

[𝐚, 𝐛, 𝐜]= 𝛼3 det 𝐀

86. Define the inner product of tensors 𝐓 and 𝐒 as 𝐓: 𝐒 = tr(𝐓𝑇𝐒) = tr(𝐓𝐒𝑇) show

that 𝐼1(𝐓) = 𝐓: 𝐈

𝐓: 𝐒 = tr(𝐓𝑇𝐒) = tr(𝐓𝐒𝑇)

Let 𝐒 = 𝐈;

𝐓: 𝐈 = tr(𝐓𝑇𝐈) = tr(𝐓𝐈)

= tr(𝐓) = 𝐼1(𝐓)

87. Given any scalar 𝑘 > 0, for the scalar-valued tensor function, 𝑓(𝐒) = tr(𝐒𝑘),

show that, 𝑑

𝑑𝐒 𝑓(𝐒) = 𝑘(𝐒𝑘−1)𝑇.

When 𝑘 = 1,

𝐷𝑓(𝑺, 𝑑𝑺) =𝑑

𝑑𝛼𝑓(𝑺 + 𝛼𝑑𝑺)|

𝛼=0

=𝑑

𝑑𝛼tr(𝑺 + 𝛼𝑑𝑺)|

𝛼=0 = tr(𝟏 𝑑𝑺)

= 𝐈: 𝑑𝑺

So that,

𝑑

𝑑𝑺 tr(𝑺) = 𝐈.

When 𝑘 = 2, The Gateaux differential in this case,

𝐷𝑓(𝑺, 𝑑𝑺) =𝑑

𝑑𝛼𝑓(𝑺 + 𝛼𝑑𝑺)|

𝛼=0 =

𝑑

𝑑𝛼tr{(𝑺 + 𝛼𝑑𝑺)2}|

𝛼=0

=𝑑

𝑑𝛼tr{(𝑺 + 𝛼𝑑𝑺)(𝑺 + 𝛼𝑑𝑺)}|

𝛼=0 = tr [

𝑑

𝑑𝛼(𝑺 + 𝛼𝑑𝑺)(𝑺 + 𝛼𝑑𝑺)]|

𝛼=0

= tr[𝑑𝑺(𝑺 + 𝛼𝑑𝑺) + (𝑺 + 𝛼𝑑𝑺)𝑑𝑺]|𝛼=0

= tr[𝑑𝑺 𝑺 + 𝑺 𝑑𝑺] = 2(𝑺)T: 𝑑𝑺

When 𝑘 = 3,

𝐷𝑓(𝑺, 𝑑𝑺) =𝑑

𝑑𝛼𝑓(𝑺 + 𝛼𝑑𝑺)|

𝛼=0 =

𝑑

𝑑𝛼tr{(𝑺 + 𝛼𝑑𝑺)3}|

𝛼=0

=𝑑

𝑑𝛼tr{(𝑺 + 𝛼𝑑𝑺)(𝑺 + 𝛼𝑑𝑺)(𝑺 + 𝛼𝑑𝑺)}|

𝛼=0

= tr [𝑑

𝑑𝛼(𝑺 + 𝛼𝑑𝑺)(𝑺 + 𝛼𝑑𝑺)(𝑺 + 𝛼𝑑𝑺)]|

𝛼=0

= tr[𝑑𝑺(𝑺 + 𝛼𝑑𝑺)(𝑺 + 𝛼𝑑𝑺) + (𝑺 + 𝛼𝑑𝑺)𝑑𝑺(𝑺 + 𝛼𝑑𝑺)

+ (𝑺 + 𝛼𝑑𝑺)(𝑺 + 𝛼𝑑𝑺)𝑑𝑺]|𝛼=0

= tr[𝑑𝑺 𝑺 𝑺 + 𝑺 𝑑𝑺 𝑺 + 𝑺 𝑺 𝒅𝑺] = 3(𝑺2)T: 𝑑𝑺

It easily follows by induction that, 𝑑

𝑑𝐒 𝑓(𝐒) = 𝑘(𝐒𝑘−1)𝑇.

88. Find the derivative of the second principal invariant of the tensor 𝐒.

d/d𝐒 I₂(𝐒) = ½ d/d𝐒 [tr²(𝐒) − tr(𝐒²)]
           = ½ [2 tr(𝐒) 𝐈 − 2𝐒ᵀ]
           = tr(𝐒) 𝐈 − 𝐒ᵀ

89. Find the derivative of the third principal invariant of the tensor 𝐒.

By the Cayley-Hamilton theorem, in terms of traces only,

I₃(𝐒) = (1/6)[tr³(𝐒) − 3 tr(𝐒) tr(𝐒²) + 2 tr(𝐒³)]

so that

d/d𝐒 I₃(𝐒) = (1/6) d/d𝐒 [tr³(𝐒) − 3 tr(𝐒) tr(𝐒²) + 2 tr(𝐒³)]
           = (1/6)[3 tr²(𝐒) 𝐈 − 3 tr(𝐒²) 𝐈 − 3 tr(𝐒)·2𝐒ᵀ + 2·3 (𝐒²)ᵀ]
           = I₂(𝐒) 𝐈 − I₁(𝐒) 𝐒ᵀ + (𝐒²)ᵀ
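Both derivative formulas (problems 88 and 89) can be spot-checked by comparing a finite-difference directional derivative with the derivative tensor contracted with the direction d𝐒; a minimal sketch assuming NumPy:

import numpy as np

S, dS = np.random.rand(3, 3), np.random.rand(3, 3)
h = 1e-6

I1 = lambda A: np.trace(A)
I2 = lambda A: 0.5 * (np.trace(A)**2 - np.trace(A @ A))
I3 = lambda A: np.linalg.det(A)

# d/dS I2 = tr(S) 1 - S^T, checked as a directional derivative in the direction dS.
dI2 = np.trace(S) * np.eye(3) - S.T
assert np.isclose((I2(S + h*dS) - I2(S - h*dS)) / (2*h), np.sum(dI2 * dS), atol=1e-6)

# d/dS I3 = I2(S) 1 - I1(S) S^T + (S^2)^T
dI3 = I2(S) * np.eye(3) - I1(S) * S.T + (S @ S).T
assert np.isclose((I3(S + h*dS) - I3(S - h*dS)) / (2*h), np.sum(dI3 * dS), atol=1e-6)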

90. By the representation theorem, every real isotropic tensor function is a function

of its principal invariants. Show that for every isotropic function 𝑓(𝐒) the derivative ∂𝑓(𝐒)

∂𝐒 can be represented as a quadratic function of 𝐒T.

By the representation theorem,

𝑓(𝐒) = 𝜙(𝐼1(𝐒), 𝐼2(𝐒), 𝐼3(𝐒))

consequently,

∂𝑓(𝐒)

∂𝐒=

∂𝑓(𝐒)

∂𝐼1

∂𝐼1∂𝐒

+∂𝑓(𝐒)

∂𝐼2

∂𝐼2∂𝐒

+∂𝑓(𝐒)

∂𝐼3

∂𝐼3∂𝐒

=∂𝑓(𝐒)

∂𝐼1𝐈 +

∂𝑓(𝐒)

∂𝐼2(tr(𝐒)𝐈 − 𝐒T) +

∂𝑓(𝐒)

∂𝐼3(𝐼2𝐈 − 𝐼1(𝐒)𝐒𝑇 + 𝐒2𝑇)

= (∂𝑓(𝐒)

∂𝐼1+

∂𝑓(𝐒)

∂𝐼2𝐼1(𝐒) +

∂𝑓(𝐒)

∂𝐼3𝐼2(𝐒)) 𝐈 − (

∂𝑓(𝐒)

∂𝐼2+

∂𝑓(𝐒)

∂𝐼3𝐼1(𝐒)) 𝐒T

+∂𝑓(𝐒)

∂𝐼3𝐒2𝑇

= 𝛼0𝐈 + 𝛼1𝐒T + α2𝐒

2𝑇

where 𝛼0 =∂𝑓(𝐒)

∂𝐼1+

∂𝑓(𝐒)

∂𝐼2𝐼1(𝐒) +

∂𝑓(𝐒)

∂𝐼3𝐼2(𝐒), 𝛼1 = −

∂𝑓(𝐒)

∂𝐼2−

∂𝑓(𝐒)

∂𝐼3𝐼1(𝐒) and 𝛼2 =

∂𝑓(𝐒)

∂𝐼3 is the desired

quadratic representation of the derivative.

91. Given arbitrary vectors 𝒂 and 𝒃 the tensor 𝐐 is said to be orthogonal if (𝐐𝒂) ⋅

(𝐐𝒃) = 𝒂 ⋅ 𝒃 = 𝒃 ⋅ 𝒂 show that the inverse of 𝐐 is its transpose. and that 𝐐 is the

cofactor of itself.

By definition of the transpose and the given property, we have that

(𝐐𝐚) ⋅ (𝐐𝐛) = 𝐛 ⋅ 𝐐ᵀ(𝐐𝐚) = 𝐛 ⋅ 𝐐ᵀ𝐐𝐚 = 𝐛 ⋅ 𝐚

Clearly, 𝐐ᵀ𝐐 = 𝟏, which makes the transpose the same as the inverse tensor.

A condition necessary and sufficient for a tensor 𝐐 to be orthogonal is that 𝐐 be

invertible and its inverse is equal to its transpose.

92. An orthogonal tensor 𝐐 is said to be “proper orthogonal” if its determinant |𝐐| =

+1. Show that a proper orthogonal tensor is the cofactor of itself. Show also that its

first two invariants are equal.

cof𝐐 = det 𝐐 𝐐−T = +1 (𝐐T)−𝟏 = 1 (𝐐−1)−𝟏 = 𝐐

𝐼2(𝐐) = 𝐼1(𝐐c)

The second principal invariant for any vector is equal to the first principal invariant

of its co-factor. But we find here that 𝐐 = 𝐐c. It follows that the first two invariants

of a proper orthogonal tensor are equal. The third invariant, 𝐼3(𝐐) = det(𝐐)=1. All

essential information on an orthogonal tensor is known once we know its trace!

93. For any invertible tensor 𝐒 show that det(𝐒c) = (det 𝐒)2

First note that the determinant of the product of a tensor C with a scalar 𝛼 is,

det 𝛼𝐂 = 𝜖𝑖𝑗𝑘(𝛼𝐶𝑖1)(𝛼𝐶𝑗

2)(𝛼𝐶𝑘3) = 𝛼3 det 𝐂

The inverse of tensor 𝐒,

𝐒−1 = (det 𝐒)−1(𝐒cT)

Let the scalar 𝛼 = det 𝐒. We can see clearly that,

𝐒c = 𝛼𝐒−𝑇

Taking the determinant of this equation, we have,

det(𝐒c) = 𝛼3 det(𝐒−𝑇) = 𝛼3 det(𝐒−1)

as the transpose operation has no effect on the value of a determinant. Noting that

the determinant of an inverse is the inverse of the determinant, we have,

det(𝐒c) = 𝛼3 det(𝐒−1) =𝛼3

𝛼= (det 𝐒)2

94. For any invertible tensor 𝐒 and a scalar 𝛼 show that show that the cofactor of the

product of 𝛼 and 𝐒 equals 𝛼2 × the cofactor of 𝐒 , that is, (𝛼𝐒)c = 𝛼2𝐒c

(𝛼𝐒)c = (det(𝛼𝐒))(𝛼𝐒)−T

= (𝛼3 det(𝐒))𝛼−1𝑺−T

= (𝛼2 det(𝐒))𝐒−T

= 𝛼2𝐒c

95. For any invertible tensor 𝐒 show that (𝐒−1)c = (det 𝐒)−1𝐒T

(𝐒−1)c = det(𝐒−1) (𝐒−1)−𝑇

= (det 𝐒)−1𝐒T

96. For any invertible tensor 𝐒 show that 𝐒−c = (det 𝐒)−1𝐒T, that is, the inverse of the

cofactor is the transpose divided by the determinant.

𝐒C = det(𝐒) 𝐒−𝑇

Consequently,

𝐒−c = (det 𝐒)−1(𝑺−𝑇)−1 = (det 𝐒)−1𝐒T

97. For an invertible tensor show that the cofactor of the cofactor is the product of

the original tensor and its determinant 𝐒cc = (det 𝐒)𝐒

𝐒c = det(𝐒) 𝐒−𝑇

So that,

𝐒cc = (det 𝐒c)(𝐒c)−𝑇

= (det 𝐒)2[(𝐒c)−1]𝑇 = (det 𝐒)2[(det 𝐒)−1𝐒T]𝑇

= (det 𝐒)2(det 𝐒)−1𝐒

= (det 𝐒)𝐒

as required.

98. Show that for any invertible tensor 𝐒 and any vector 𝐮, [(𝐒𝐮) ×] = 𝐒c(𝐮 ×)𝐒−𝟏

where 𝐒c and 𝐒−𝟏 are the cofactor and inverse of 𝐒 respectively.

By definition,

𝐒𝐜 = (det 𝐒)𝐒−T

We are to prove that,

[(𝐒𝐮) ×] = 𝐒𝐜(𝐮 ×)𝐒−𝟏 = (det 𝐒)𝐒−T(𝐮 ×)𝐒−𝟏

or that,

𝐒T[(𝐒𝐮) ×] = (𝐮 ×)(det 𝐒)𝐒−𝟏 = (𝐮 ×)(𝐒𝐜)𝐓

On the RHS, the contravariant 𝑖𝑗 component of 𝒖 × is

(𝐮 ×)𝑖𝑗 = 𝜖𝑖𝛼𝑗𝑢𝛼

which is exactly the same as writing, (𝐮 ×) = 𝜖𝑖𝛼𝑙𝑢𝛼𝐠𝑖 ⊗ 𝐠𝑙 in the invariant form.

We now turn to the LHS;

[(𝐒𝐮) ×] = 𝜖𝑙𝛼𝑘(𝐒𝐮)𝛼𝐠𝑙 ⊗ 𝐠𝑘 = 𝜖𝑙𝛼𝑘𝑆𝛼𝑗𝑢𝑗𝐠𝑙 ⊗ 𝐠𝑘

Now, 𝐒 = 𝑆.𝛽𝑖. 𝐠𝑖 ⊗ 𝐠𝛽 so that its transpose, 𝐒T = 𝑆𝛽

𝑖 𝐠β ⊗ 𝐠i = 𝑆𝑖𝛽𝐠𝑖 ⊗ 𝐠𝛽 so that

𝐒T[(𝐒𝐮) ×] = 𝜖𝑙𝛼𝑘𝑆𝑖𝛽𝑆𝛼

𝑗𝑢𝑗𝐠

𝑖 ⊗ 𝐠𝛽 ⋅ 𝐠𝑙 ⊗ 𝐠𝑘

= 𝜖𝑙𝛼𝑘𝑆𝑖𝑙𝑆𝛼𝑗𝑢𝑗𝐠

𝑖 ⊗ 𝐠𝑘 = 𝜖𝑙𝛼𝑘𝑆𝑙𝑖𝑆𝛼

𝑗𝑢𝑗𝐠𝑖 ⊗ 𝐠𝑘

= 𝜖𝛼𝛽𝑘𝑢𝑗𝑆𝛼𝑖 𝑆𝛽

𝑗𝐠𝑖 ⊗ 𝐠𝑘 = (𝐮 ×)(𝐒c)T

99. Show that [(𝐒c𝐮) ×] = 𝐒(𝐮 ×)𝐒T

The LHS in component invariant form can be written as:

[(𝐒c𝐮) ×] = 𝜖𝑖𝑗𝑘(𝐒c𝐮)𝑗𝐠𝑖 ⊗ 𝐠𝑘

where (𝐒c)𝑗𝛽

=1

2𝜖𝑗𝑎𝑏𝜖

𝛽𝑐𝑑𝑆𝑐𝑎𝑆𝑑

𝑏 so that

(𝐒𝐜𝐮)𝑗 = (𝐒c)𝑗𝛽𝑢𝛽 =

1

2𝜖𝑗𝑎𝑏𝜖

𝛽𝑐𝑑𝑢𝛽𝑆𝑐𝑎𝑆𝑑

𝑏

Consequently,

[(𝐒c𝐮) ×] =1

2𝜖𝑖𝑗𝑘𝜖𝑗𝑎𝑏𝜖

𝛽𝑐𝑑𝑢𝛽𝑆𝑐𝑎𝑆𝑑

𝑏𝐠𝑖 ⊗ 𝐠𝑘

=1

2𝜖𝛽𝑐𝑑(𝛿𝑎

𝑘𝛿𝑏𝑖 − 𝛿𝑏

𝑘𝛿𝑎𝑖 )𝑢𝛽𝑆𝑐

𝑎𝑆𝑑𝑏𝐠𝑖 ⊗ 𝐠𝑘

=1

2𝜖𝛽𝑐𝑑𝑢𝛽(𝑆𝑐

𝑘𝑆𝑑𝑖 − 𝑆𝑐

𝑖𝑆𝑑𝑘)𝐠𝑖 ⊗ 𝐠𝑘

= 𝜖𝛽𝑐𝑑𝑢𝛽𝑆𝑐𝑘𝑆𝑑

𝑖 𝐠𝑖 ⊗ 𝐠𝑘

On the RHS, (𝐮 ×)𝐒T = 𝜖𝛼𝛽𝛾𝑢𝛽𝑆𝛾𝑘𝐠𝑖 ⊗ 𝐠𝑘 . We can therefore write,

𝐒(𝐮 ×)𝐒T = 𝜖𝛼𝛽𝛾𝑢𝛽𝑆𝛼𝑖 𝑆𝛾

𝑘𝐠𝑖 ⊗ 𝐠𝑘

Which on a closer look is exactly the same as the LHS so that, [(𝐒c𝐮) ×] = 𝐒(𝐮 ×)𝐒T

as required.

100. For a tensor 𝐒, given that [(𝐒c𝐮) ×] = 𝐒(𝐮 ×)𝐒T for any two vectors 𝐮 and 𝐯,

show that ((cof 𝐒)𝐮 ×)𝐯 = 𝐒(𝐮 × 𝐒T𝐯)

The product of the given equation with the vector 𝐯 immediately yields,

[(𝐒c𝐮) ×]𝐯 = 𝐒(𝐮 ×)𝐒T𝐯

⇒ ((cof 𝐒)𝐮 ×)𝐯 = 𝐒(𝐮 × 𝐒T𝐯)

101. Given that 𝛀 is a skew tensor with the corresponding axial vector 𝛚. Given

vectors 𝐮 and 𝐯, show that 𝛀𝐮 × 𝛀𝐯 = (𝛚 ⊗ 𝛚)(𝐮 × 𝐯) and, hence conclude that

𝛀c = (𝛚 ⊗ 𝛚). 𝛀𝐮 × 𝛀𝐯 = (𝝎 × 𝐮) × (𝝎 × 𝐯) = (𝝎 × 𝐮) × (𝝎 × 𝐯)

= [(𝝎 × 𝐮) ⋅ 𝐯]𝝎 − [(𝝎 × 𝐮) ⋅ 𝝎]𝐯 = [𝝎 ⋅ (𝐮 × 𝐯)]𝝎 = (𝝎 ⊗ 𝝎)(𝐮 × 𝐯)

But by definition, the cofactor must satisfy,

𝛀𝐮 × 𝛀𝐯 = 𝛀c(𝐮 × 𝐯)

which compared with the previous equation yields the desired result that

𝛀c = (𝝎 ⊗ 𝝎).

102. Show, using indicial notation, that the cofactor of a tensor can be written as 𝐒c =

(𝐒2 − 𝐼1𝐒 + 𝐼2𝟏)T even if 𝐒 is not invertible. 𝐼1, 𝐼2 are the first two invariants of 𝐒.

The above equation can be written more explicitly as,

𝐒𝑐 = (𝐒2 − tr(𝐒)𝐒 + [1

2[tr2(𝐒) − tr(𝐒2)]] 𝟏)

T

In the invariant component form, this is easily seen to be,

𝑺c = (𝑆𝜂 𝑖 𝑆𝑗

𝜂− 𝑆𝛼

𝛼𝑆𝑗𝑖 +

1

2(𝑆𝛼

𝛼𝑆𝛽𝛽

− 𝑆𝛽𝛼𝑆𝛼

𝛽) 𝛿𝑗

𝑖) 𝐠𝑗 ⊗ 𝐠𝑖

But we know that the cofactor can be obtained directly from the equation,

(𝐒c) =1

2𝜖𝑖𝛽𝛾𝜖𝑗𝜆𝜂𝑆𝛽

𝜆𝑆𝛾𝜂𝐠𝑖 ⊗ 𝐠𝑗 =

1

2[ 𝛿𝑗

𝑖 𝛿𝜆 𝑖 𝛿𝜂

𝑖

𝛿𝑗𝛽

𝛿𝜆 𝛽

𝛿𝜂𝛽

𝛿𝑗𝛾

𝛿𝜆 𝛾

𝛿𝜂𝛾] 𝑆𝛽

𝜆𝑆𝛾𝜂𝐠𝑖 ⊗ 𝐠𝑗

=1

2(𝛿𝑗

𝑖 [𝛿𝜆

𝛽𝛿𝜂

𝛽

𝛿𝜆 𝛾

𝛿𝜂𝛾] − 𝛿𝜆

𝑖 [𝛿𝑗

𝛽𝛿𝜂

𝛽

𝛿𝑗𝛾

𝛿𝜂𝛾] + 𝛿𝜂

𝑖 [𝛿𝑗

𝛽𝛿𝜆

𝛽

𝛿𝑗𝛾

𝛿𝜆 𝛾]) 𝑆𝛽

𝜆𝑆𝛾𝜂𝐠𝑖 ⊗ 𝐠𝑗

=1

2[𝛿𝑗

𝑖 (𝛿𝜆 𝛽𝛿𝜂

𝛾− 𝛿𝜂

𝛽𝛿𝜆

𝛾) − 𝛿𝜆

𝑖 (𝛿𝑗𝛽𝛿𝜂

𝛾− 𝛿𝜂

𝛽𝛿𝑗

𝛾) + 𝛿𝜂

𝑖 (𝛿𝑗𝛽𝛿𝜆

𝛾− 𝛿𝜆

𝛽𝛿𝑗

𝛾)] 𝑆𝛽

𝜆𝑆𝛾𝜂𝐠𝑖

⊗ 𝐠𝑗

=1

2[𝛿𝑗

𝑖 (𝑆𝛼𝛼𝑆𝛽

𝛽− 𝑆𝜂

𝜆𝑆𝜆 𝜂) − 2𝑆𝑗

𝑖𝑆𝛼𝛼 + 2𝑆𝜂

𝑖 𝑆𝑗𝜂

] 𝐠𝑖 ⊗ 𝐠𝑗

= [𝐼2(𝐒)𝟏 − 𝐼1(𝐒)𝐒 + 𝐒2]T
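Because this closed form does not require invertibility, it is convenient for computation; a minimal numerical sketch assuming NumPy, checked against the defining property 𝐒𝐮 × 𝐒𝐯 = 𝐒ᶜ(𝐮 × 𝐯) and against det(𝐒) 𝐒⁻ᵀ for an invertible draw:

import numpy as np

def cof(S):
    # S^c = (S^2 - I1 S + I2 1)^T, valid whether or not S is invertible.
    I1 = np.trace(S)
    I2 = 0.5 * (np.trace(S)**2 - np.trace(S @ S))
    return (S @ S - I1 * S + I2 * np.eye(3)).T

S = np.random.rand(3, 3)
u, v = np.random.rand(3), np.random.rand(3)

# Defining property of the cofactor.
assert np.allclose(np.cross(S @ u, S @ v), cof(S) @ np.cross(u, v))

# For an invertible S it agrees with det(S) S^{-T}.
assert np.allclose(cof(S), np.linalg.det(S) * np.linalg.inv(S).T)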

103. Show, using direct notation, that the cofactor of a tensor can be written as 𝐒c =

(𝐒2 − 𝐼1𝐒 + 𝐼2𝟏)T even if 𝐒 is not invertible. 𝐼1, 𝐼2 are the first two invariants of 𝐒.

For any three linearly independent vectors, the trace of a tensor 𝐓

tr 𝐓 ≡ 𝐼1(𝐓) =[𝐓𝐠1, 𝐠2, 𝐠3] + [𝐠1, 𝐓𝐠2, 𝐠3] + [𝐠1, 𝐠2, 𝐓𝐠3]

[𝐠1, 𝐠2, 𝐠3]

Replacing 𝐠1 by 𝐓𝐠1 in the above equation, we have,

tr 𝐓 [𝐓𝐠1, 𝐠2, 𝐠3] = [𝐓2𝐠1, 𝐠2, 𝐠3] + [𝐓𝐠1, 𝐓𝐠2, 𝐠3] + [𝐓𝐠1, 𝐠2, 𝐓𝐠3]

Or, upon rearrangement,

[𝐓𝐠1, 𝐓𝐠2, 𝐠3] + [𝐓𝐠1, 𝐠2, 𝐓𝐠3] = tr 𝐓 [𝐓𝐠1, 𝐠2, 𝐠3] − [𝐓2𝐠1, 𝐠2, 𝐠3]

But, the second Invariant,

𝐼2(𝐓) =[𝐓𝐠1, 𝐓𝐠2, 𝐠3] + [𝐠1, 𝐓𝐠2, 𝐓𝐠3] + [𝐓𝐠1, 𝐠2, 𝐓𝐠3]

[𝐠1, 𝐠2, 𝐠3]

=tr 𝐓 [𝐓𝐠1, 𝐠2, 𝐠3] − [𝐓2𝐠1, 𝐠2, 𝐠3] + [𝐠1, 𝐓𝐠2, 𝐓𝐠3]

[𝐠1, 𝐠2, 𝐠3]

=tr 𝐓 [𝐓𝐠1, 𝐠2, 𝐠3] − [𝐓2𝐠1, 𝐠2, 𝐠3] + 𝐠1 ⋅ 𝐓c(𝐠2 × 𝐠3)

[𝐠1, 𝐠2, 𝐠3]

=[(tr 𝐓)𝐓𝐠1, 𝐠2, 𝐠3] − [𝐓2𝐠1, 𝐠2, 𝐠3] + [𝐓cT𝐠1, 𝐠2, 𝐠3]

[𝐠1, 𝐠2, 𝐠3]

so that,

[(𝐼2(𝐓)𝟏)𝐠1, 𝐠2, 𝐠3] = [(tr 𝐓)𝐓𝐠1, 𝐠2, 𝐠3] − [𝐓2𝐠1, 𝐠2, 𝐠3] + [𝐓cT𝐠1, 𝐠2, 𝐠3]

From which we can write that

𝐼2(𝐓)𝟏 = (tr 𝐓)𝐓 − 𝐓2 + 𝐓cT

or,

𝐓c = (𝐓2 − 𝐼1(𝐓)𝐓 + 𝐼2(𝐓)𝟏)T

104. Given a Euclidean Vector Space E, a tensor 𝐐 is said to be rotation if in addition to

satisfying (𝐐𝒂) ⋅ (𝐐𝒃) = 𝒂 ⋅ 𝒃 ∀𝒂, 𝒃 ∈ E, its determinant (det 𝐐) = +1. For any pair

of vectors 𝐮, 𝐯 show that 𝐐(𝐮 × 𝐯) = (𝐐𝐮) × (𝐐𝐯) if 𝐐 is a rotation. That is, that the

cofactor of 𝐐 is 𝐐 itself

We can write that

𝐓(𝐮 × 𝐯) = (𝐐𝐮) × (𝐐𝐯)

where

𝐓 = cof(𝐐) = det(𝐐)𝐐−𝐓

Now that 𝐐 is a rotation, det(𝐐) = 1, and because it is orthogonal, its inverse is its

transpose:

𝐐−T = (𝐐−1)T = (𝐐T)T = 𝐐

This implies that 𝐓 = 𝐐 and consequently,

𝐐(𝐮 × 𝐯) = (𝐐𝐮) × (𝐐𝐯)

105. For a proper orthogonal tensor 𝐐, show that the eigenvalue equation always

yields an eigenvalue of +1. This means that there is always a solution for the

equation, 𝐐𝐮 = 𝐮.

For any invertible tensor, note that the cofactor is defined as,

𝐒c = (det 𝐒)𝐒−T

For a proper orthogonal tensor 𝐐, det 𝐐 = 1, and its inverse is its transpose. It

therefore follows that,

𝐐𝐜 = (det𝐐)𝐐−T = 𝐐−T = 𝐐

It is easily shown that tr 𝐐c = 𝐼2(𝐐). It follows from the above that, 𝐼2(𝐐) = 𝐼𝟏(𝐐).

For the general eigenvalue equation, 𝐐𝐮 = 𝜆𝐮, the characteristic equation is,

𝐝et (𝐐 − 𝝀𝟏) = 𝜆3 − 𝜆2𝑄1 + 𝜆𝑄2 − 𝑄3 = 0

where 𝑄1, 𝑄2(= 𝑄1) and 𝑄3(= 1) are the first, second and third invariants of the

orthogonal tensor respectively. In this particular case, the characteristic equation

becomes,

𝜆3 − 𝜆2𝑄1 + 𝜆𝑄1 − 1 = 0

Which is obviously satisfied by 𝜆 = 1. Hence there is always a solution to 𝐐𝐮 = 𝐮.

106. If for an arbitrary unit vector 𝐞, the tensor, 𝐐(𝜃) = cos (𝜃)𝟏 + (1 − cos (𝜃))𝐞 ⊗

𝐞 + sin (𝜃)(𝐞 ×) where (𝐞 ×) is the vector cross of 𝐞. Show that 𝐐(𝜃)(𝟏 − 𝐞 ⊗ 𝐞) =

cos(𝜃)(𝟏 − 𝐞 ⊗ 𝐞) + sin(𝜃)(𝐞 ×) We first observe that,

𝐐(𝜃)(𝐞 ⊗ 𝐞) = cos(𝜃)(𝐞 ⊗ 𝐞) + (1 − cos(𝜃))𝐞 ⊗ 𝐞 + sin(𝜃)[𝐞 × (𝐞 ⊗ 𝐞)]

The last term vanishes immediately on account of the fact that 𝐞 ⊗ 𝐞 is a symmetric

tensor. (The contraction of a symmetric and an antisymmetric tensor always

vanishes). Consequently, we have,

𝐐(𝜃)(𝐞 ⊗ 𝐞) = cos(𝜃)(𝐞 ⊗ 𝐞) + (1 − cos(𝜃))𝐞 ⊗ 𝐞 = 𝐞 ⊗ 𝐞

which again means that 𝐐(𝜃) has no effect on 𝐞 ⊗ 𝐞 so that,

𝐐(𝜃)(𝟏 − 𝐞 ⊗ 𝐞) = cos(𝜃)𝟏 + (1 − cos(𝜃))𝐞 ⊗ 𝐞 + sin(𝜃)(𝐞 ×) − 𝐞 ⊗ 𝐞

= cos(𝜃) (𝟏 − 𝐞 ⊗ 𝐞) + sin(𝜃)(𝐞 ×)

as required.

107. For an arbitrary unit vector 𝐞, show that the skew tensor, 𝐖 = (𝐞 ×) is such that

𝐖2 ≡ (𝐞 ×)(𝐞 ×) = (𝐞 ⊗ 𝐞) − 𝐈

(𝐞 ×)(𝐞 ×) = (𝜖𝑖𝑗𝑘𝑒𝑗𝐠𝑖 ⊗ 𝐠𝑘)(𝜖𝛼𝛽𝛾𝑒𝛽𝐠𝛼 ⊗ 𝐠𝛾)

= 𝛿𝛼𝛽𝛾𝑖𝑗𝑘

𝑒𝑗𝑒𝛽𝐠𝑖 ⊗ 𝐠𝛾𝛿𝑘

𝛼

= 𝛿𝑘𝛽𝛾𝑖𝑗𝑘

𝑒𝑗𝑒𝛽𝐠𝑖 ⊗ 𝐠𝛾

= (𝛿𝛽𝑖 𝛿𝛾

𝑗− 𝛿𝛽

𝑗𝛿𝛾

𝑖) 𝑒𝑗𝑒𝛽𝐠𝑖 ⊗ 𝐠𝛾

= 𝑒𝛾𝑒𝛽𝐠𝛽 ⊗ 𝐠𝛾 − 𝑒𝛽𝑒𝛽𝐠𝑖 ⊗ 𝐠𝑖

= 𝑒𝛾𝑒𝛽𝐠𝛽 ⊗ 𝐠𝛾 − (𝐞 ⋅ 𝐞)𝐠𝑖 ⊗ 𝐠𝑖

= (𝐞 ⊗ 𝐞) − 𝐈

upon noting that the dot product of the unit vector with itself is unity.
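A short numerical sketch of 𝐖² = 𝐞 ⊗ 𝐞 − 𝐈, assuming NumPy and the same kind of skew() helper used earlier:

import numpy as np

def skew(a):
    # Matrix of the skew tensor (a x).
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

e = np.random.rand(3)
e /= np.linalg.norm(e)                     # make e a unit vector

W = skew(e)
assert np.allclose(W @ W, np.outer(e, e) - np.eye(3))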

108. If for an arbitrary unit vector 𝐞, the tensor, 𝐐(𝜃) = cos (𝜃)𝟏 + (1 − cos (𝜃))𝐞 ⊗

𝐞 + sin(𝜃)(𝐞 ×) where (𝐞 ×) ≡ 𝐖 is the vector cross of 𝐞. Show that for, 𝐐(𝜃) =

𝐈 + 𝐖sin 𝜃 + 𝐖2(1 − cos 𝜃) [Note that 𝐞 ⊗ 𝐞 = 𝐖2 + 𝐈] Using the noted result,

𝐐(𝜃) = cos 𝜃 𝐈 + (1 − cos 𝜃)𝐞 ⊗ 𝐞 + sin 𝜃 (𝐞 ×)

= cos 𝜃 𝐈 + (1 − cos 𝜃)(𝐖2 + 𝐈) + 𝐖sin 𝜃

= 𝐈 + 𝐖sin 𝜃 + 𝐖2(1 − cos 𝜃)

109. Use the fact that the tensor 𝐐(𝜃) = 𝐈 + 𝐖 sin 𝜃 + 𝐖²(1 − cos 𝜃), where 𝐖 ≡ (𝐞 ×) is the vector cross of the unit vector 𝐞, rotates every vector about the axis of 𝐞 by the angle 𝜃, to find the tensor that rotates {𝐞1, 𝐞2, 𝐞3} to {𝐞2, −𝐞1, 𝐞3}.

Clearly, the rotation axis here is the unit vector 𝐞3 and the angle of rotation is 𝜋/2.

Consequently, since 𝐞3 = {0,0,1},

\[
\mathbf{W} \equiv (\mathbf{e}_3 \times) = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad
\mathbf{W}^2 = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix}
\]

\[
\mathbf{Q}\!\left(\tfrac{\pi}{2}\right) = \mathbf{I} + \mathbf{W}\sin\tfrac{\pi}{2} + \mathbf{W}^2\left(1 - \cos\tfrac{\pi}{2}\right)
= \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
+ \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
+ \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix}
= \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]

This same tensor can be found directly by recognizing that the tensor 𝐐 = 𝝃1 ⊗ 𝐞1 + 𝝃2 ⊗ 𝐞2 + 𝝃3 ⊗ 𝐞3 rotates {𝐞1, 𝐞2, 𝐞3} to {𝝃1, 𝝃2, 𝝃3}, so that the tensor we seek is,

\[
\mathbf{Q} = \mathbf{e}_2 \otimes \mathbf{e}_1 - \mathbf{e}_1 \otimes \mathbf{e}_2 + \mathbf{e}_3 \otimes \mathbf{e}_3
= \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
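For a quick check of this result (our own sketch, assuming NumPy), the Rodrigues form with axis 𝐞3 and angle 𝜋/2 indeed sends 𝐞1 to 𝐞2 and 𝐞2 to −𝐞1:

```python
import numpy as np

e1, e2, e3 = np.eye(3)
W = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])                 # (e3 x)
theta = np.pi / 2
Q = np.eye(3) + W*np.sin(theta) + W @ W*(1 - np.cos(theta))

assert np.allclose(Q @ e1, e2)                   # e1 -> e2
assert np.allclose(Q @ e2, -e1)                  # e2 -> -e1
assert np.allclose(Q @ e3, e3)                   # the axis e3 is unchanged
```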

110. Given that 𝐞1 = {1,0,0}, 𝐞2 = {0,1,0}, 𝐞3 = {0,0,1}, 𝐞4 = {√3/4, 1/4, √3/2}, 𝐞5 = {3/4, √3/4, −1/2} and 𝐞6 = {−1/2, √3/2, 0}, find the tensor that transforms from {𝐞2, 𝐞1, −𝐞3} to {𝐞4, 𝐞5, 𝐞6}.

The tensor 𝝃1 ⊗ 𝐞1 + 𝝃2 ⊗ 𝐞2 + 𝝃3 ⊗ 𝐞3 rotates {𝐞1, 𝐞2, 𝐞3} to {𝝃1, 𝝃2, 𝝃3}. The tensor we seek is therefore,

\[
\mathbf{Q} = \mathbf{e}_4 \otimes \mathbf{e}_2 + \mathbf{e}_5 \otimes \mathbf{e}_1 - \mathbf{e}_6 \otimes \mathbf{e}_3 =
\begin{pmatrix}
\dfrac{3}{4} & \dfrac{\sqrt{3}}{4} & \dfrac{1}{2} \\[2mm]
\dfrac{\sqrt{3}}{4} & \dfrac{1}{4} & -\dfrac{\sqrt{3}}{2} \\[2mm]
-\dfrac{1}{2} & \dfrac{\sqrt{3}}{2} & 0
\end{pmatrix}
\]
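As a numerical sanity check (ours, assuming NumPy), the tensor assembled from these outer products maps {𝐞2, 𝐞1, −𝐞3} to {𝐞4, 𝐞5, 𝐞6} and is proper orthogonal:

```python
import numpy as np

e1, e2, e3 = np.eye(3)
e4 = np.array([np.sqrt(3)/4, 1/4, np.sqrt(3)/2])
e5 = np.array([3/4, np.sqrt(3)/4, -1/2])
e6 = np.array([-1/2, np.sqrt(3)/2, 0.0])

Q = np.outer(e4, e2) + np.outer(e5, e1) - np.outer(e6, e3)

assert np.allclose(Q @ e2, e4) and np.allclose(Q @ e1, e5) and np.allclose(Q @ -e3, e6)
assert np.allclose(Q.T @ Q, np.eye(3)) and np.isclose(np.linalg.det(Q), 1.0)
```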

111. Find the rotation tensor around an axis parallel to the unit vector 𝐞 = {−1/√6, 1/√6, 2/√6} through an angle 𝜋/6.

The skew tensor and its square are

\[
(\mathbf{e}\times) = \mathbf{W} = \begin{pmatrix}
0 & -\sqrt{2/3} & \dfrac{1}{\sqrt{6}} \\[2mm]
\sqrt{2/3} & 0 & \dfrac{1}{\sqrt{6}} \\[2mm]
-\dfrac{1}{\sqrt{6}} & -\dfrac{1}{\sqrt{6}} & 0
\end{pmatrix}, \qquad
(\mathbf{e}\times)^2 = \mathbf{W}^2 = \begin{pmatrix}
-\dfrac{5}{6} & -\dfrac{1}{6} & -\dfrac{1}{3} \\[2mm]
-\dfrac{1}{6} & -\dfrac{5}{6} & \dfrac{1}{3} \\[2mm]
-\dfrac{1}{3} & \dfrac{1}{3} & -\dfrac{1}{3}
\end{pmatrix}
\]

so that

\[
\mathbf{Q}\!\left(\tfrac{\pi}{6}\right) = \mathbf{I} + \mathbf{W}\sin\tfrac{\pi}{6} + \mathbf{W}^2\left(1 - \cos\tfrac{\pi}{6}\right)
\]
\[
= \begin{pmatrix}
1 - \dfrac{5}{6}\left(1 - \dfrac{\sqrt{3}}{2}\right) & -\dfrac{1}{\sqrt{6}} + \dfrac{1}{6}\left(-1 + \dfrac{\sqrt{3}}{2}\right) & \dfrac{1}{2\sqrt{6}} + \dfrac{1}{3}\left(-1 + \dfrac{\sqrt{3}}{2}\right) \\[2mm]
\dfrac{1}{\sqrt{6}} + \dfrac{1}{6}\left(-1 + \dfrac{\sqrt{3}}{2}\right) & 1 - \dfrac{5}{6}\left(1 - \dfrac{\sqrt{3}}{2}\right) & \dfrac{1}{2\sqrt{6}} + \dfrac{1}{3}\left(1 - \dfrac{\sqrt{3}}{2}\right) \\[2mm]
-\dfrac{1}{2\sqrt{6}} + \dfrac{1}{3}\left(-1 + \dfrac{\sqrt{3}}{2}\right) & -\dfrac{1}{2\sqrt{6}} + \dfrac{1}{3}\left(1 - \dfrac{\sqrt{3}}{2}\right) & 1 + \dfrac{1}{3}\left(-1 + \dfrac{\sqrt{3}}{2}\right)
\end{pmatrix}
\]
\[
= \begin{pmatrix}
0.888354 & -0.430577 & 0.159465 \\
0.385919 & 0.888354 & 0.248782 \\
-0.248782 & -0.159465 & 0.955341
\end{pmatrix}
\]

The inverse of this tensor is its transpose and its determinant is unity. Clearly, it is

the rotation tensor we seek.
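Reproducing this matrix numerically (our sketch, assuming NumPy; skew is our helper) confirms the figures quoted above:

```python
import numpy as np

def skew(e):
    return np.array([[0.0, -e[2], e[1]],
                     [e[2], 0.0, -e[0]],
                     [-e[1], e[0], 0.0]])

e = np.array([-1.0, 1.0, 2.0]) / np.sqrt(6)
W = skew(e)
theta = np.pi / 6
Q = np.eye(3) + W*np.sin(theta) + W @ W*(1 - np.cos(theta))

Q_quoted = np.array([[ 0.888354, -0.430577,  0.159465],
                     [ 0.385919,  0.888354,  0.248782],
                     [-0.248782, -0.159465,  0.955341]])
assert np.allclose(Q, Q_quoted, atol=1e-6)
assert np.allclose(Q @ e, e)                     # the axis is left unchanged
assert np.isclose(np.linalg.det(Q), 1.0)
```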

112. Given the unit vector, 𝐰 = sin 𝛽 cos 𝛼 𝐞1 + sin 𝛽 sin 𝛼 𝐞2 + cos 𝛽 𝐞3. Find its

vector cross, 𝐖 ≡ (𝐰 ×) and use the formula 𝐐(𝜃) = 𝐈 + 𝐖sin 𝜃 + 𝐖2(1 − cos 𝜃)

to determine the rotation tensor around the bisector of the 𝐞1 − 𝐞2 axis through an

angle 𝜃.

\[
\mathbf{W}(\alpha, \beta) = (\mathbf{w}\times) = \begin{pmatrix}
0 & -\cos\beta & \sin\beta\sin\alpha \\
\cos\beta & 0 & -\sin\beta\cos\alpha \\
-\sin\beta\sin\alpha & \sin\beta\cos\alpha & 0
\end{pmatrix}
\]

Along the bisector of the 𝐞1 − 𝐞2 axis, 𝛼 = 𝜋/4 and 𝛽 = 𝜋/2. Consequently, 𝐰 = (1/√2)𝐞1 + (1/√2)𝐞2.

\[
\mathbf{W}\!\left(\tfrac{\pi}{4}, \tfrac{\pi}{2}\right) = (\mathbf{w}\times) = \begin{pmatrix}
0 & 0 & \dfrac{1}{\sqrt{2}} \\[1mm]
0 & 0 & -\dfrac{1}{\sqrt{2}} \\[1mm]
-\dfrac{1}{\sqrt{2}} & \dfrac{1}{\sqrt{2}} & 0
\end{pmatrix}, \qquad
\mathbf{W}^2\!\left(\tfrac{\pi}{4}, \tfrac{\pi}{2}\right) = \begin{pmatrix}
-\dfrac{1}{2} & \dfrac{1}{2} & 0 \\[1mm]
\dfrac{1}{2} & -\dfrac{1}{2} & 0 \\[1mm]
0 & 0 & -1
\end{pmatrix}
\]

And the rotation tensor for this axis is,

𝐐(𝜃) = 𝐈 + 𝐖sin 𝜃 + 𝐖2(1 − cos 𝜃)

\[
= \begin{pmatrix}
\dfrac{1}{2}(1 + \cos\theta) & \dfrac{1}{2}(1 - \cos\theta) & \dfrac{\sin\theta}{\sqrt{2}} \\[2mm]
\dfrac{1}{2}(1 - \cos\theta) & \dfrac{1}{2}(1 + \cos\theta) & -\dfrac{\sin\theta}{\sqrt{2}} \\[2mm]
-\dfrac{\sin\theta}{\sqrt{2}} & \dfrac{\sin\theta}{\sqrt{2}} & \cos\theta
\end{pmatrix}
\]
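The closed-form matrix above can be checked against the Rodrigues formula at any sample angle; a minimal sketch, assuming NumPy (skew and the sample 𝜃 are our choices):

```python
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

theta = 1.1                                       # arbitrary sample angle
w = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)        # bisector of e1 and e2
W = skew(w)
Q = np.eye(3) + W*np.sin(theta) + W @ W*(1 - np.cos(theta))

c, s = np.cos(theta), np.sin(theta)
Q_closed = np.array([[(1 + c)/2, (1 - c)/2,  s/np.sqrt(2)],
                     [(1 - c)/2, (1 + c)/2, -s/np.sqrt(2)],
                     [-s/np.sqrt(2), s/np.sqrt(2), c]])
assert np.allclose(Q, Q_closed)
```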

113. Given the unit vector, 𝐰 = sin 𝛽 cos 𝛼 𝐞1 + sin 𝛽 sin 𝛼 𝐞2 + cos 𝛽 𝐞3. Find its

vector cross, 𝐖 ≡ (𝐰 ×) and use the formula 𝐐(𝜃) = 𝐈 + 𝐖sin 𝜃 + 𝐖2(1 − cos 𝜃)

to determine the general rotation through an angle 𝜃.

\[
\mathbf{W}(\alpha, \beta) = (\mathbf{w}\times) = \begin{pmatrix}
0 & -\cos\beta & \sin\beta\sin\alpha \\
\cos\beta & 0 & -\sin\beta\cos\alpha \\
-\sin\beta\sin\alpha & \sin\beta\cos\alpha & 0
\end{pmatrix}
\]

𝐐(𝛼, 𝛽, 𝜃) = 𝐈 + 𝐖(𝛼, 𝛽) sin 𝜃 + 𝐖²(𝛼, 𝛽)(1 − cos 𝜃), whose rows are:

Row 1:
{(1 − cos 𝜃)(−sin²𝛼 sin²𝛽 − cos²𝛽) + 1,
sin 𝛼 cos 𝛼 sin²𝛽 (1 − cos 𝜃) − cos 𝛽 sin 𝜃,
sin 𝛼 sin 𝛽 sin 𝜃 + cos 𝛼 sin 𝛽 cos 𝛽 (1 − cos 𝜃)}

Row 2:
{sin 𝛼 cos 𝛼 sin²𝛽 (1 − cos 𝜃) + cos 𝛽 sin 𝜃,
(1 − cos 𝜃)(−cos²𝛼 sin²𝛽 − cos²𝛽) + 1,
sin 𝛼 sin 𝛽 cos 𝛽 (1 − cos 𝜃) − cos 𝛼 sin 𝛽 sin 𝜃}

Row 3:
{cos 𝛼 sin 𝛽 cos 𝛽 (1 − cos 𝜃) − sin 𝛼 sin 𝛽 sin 𝜃,
sin 𝛼 sin 𝛽 cos 𝛽 (1 − cos 𝜃) + cos 𝛼 sin 𝛽 sin 𝜃,
(1 − cos 𝜃)(−sin²𝛼 sin²𝛽 − cos²𝛼 sin²𝛽) + 1}
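A spot check of these rows (ours, assuming NumPy; the sample angles are arbitrary): the tensor built from 𝐖(𝛼, 𝛽) is proper orthogonal, fixes 𝐰, and its (1,1) entry agrees with the first entry of Row 1.

```python
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

alpha, beta, theta = 0.4, 1.2, 0.9
w = np.array([np.sin(beta)*np.cos(alpha),
              np.sin(beta)*np.sin(alpha),
              np.cos(beta)])                      # the unit vector of the problem
W = skew(w)
Q = np.eye(3) + W*np.sin(theta) + W @ W*(1 - np.cos(theta))

assert np.allclose(Q.T @ Q, np.eye(3))            # orthogonal
assert np.isclose(np.linalg.det(Q), 1.0)          # proper: a rotation
assert np.allclose(Q @ w, w)                      # w is the rotation axis

q11 = (1 - np.cos(theta))*(-np.sin(alpha)**2*np.sin(beta)**2 - np.cos(beta)**2) + 1
assert np.isclose(Q[0, 0], q11)
```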

114. Given that the skew tensor (𝐞 ×) ≡ 𝐖, and that 𝐐(𝜃) ≡ 𝐈 + 𝐖sin 𝜃 +

𝐖2(1 − cos 𝜃) is the rotation along the axis 𝐞 through the angle 𝜃, Find out if the

set {𝐈,𝐖,𝐖2} is linearly independent.

First note that 𝐖 is antisymmetric, but 𝐖² = (𝐞 ⊗ 𝐞) − 𝐈 is a linear combination of two symmetric tensors and is therefore symmetric. Assume {𝐈, 𝐖, 𝐖²} to be linearly dependent. Then we can find 𝛼, 𝛽 and 𝛾, not all equal to zero, such that

𝛼𝐈 + 𝛽𝐖 + 𝛾𝐖² = 𝟎

Suppose first that 𝛽 ≠ 0. We can then write,

𝐖 = −(𝛼/𝛽)𝐈 − (𝛾/𝛽)𝐖²

in which the antisymmetric tensor 𝐖 has been expressed as a linear combination of two symmetric tensors. A contradiction! If instead 𝛽 = 0, then 𝛼𝐈 + 𝛾𝐖² = 𝟎; acting on 𝐞 and noting that 𝐖²𝐞 = (𝐞 ⊗ 𝐞 − 𝐈)𝐞 = 𝟎 gives 𝛼𝐞 = 𝟎, so 𝛼 = 0 and, since 𝐖² ≠ 𝟎, also 𝛾 = 0, contradicting the assumption that the coefficients are not all zero. We conclude that the set {𝐈, 𝐖, 𝐖²} is linearly independent.
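The independence can also be seen numerically (our sketch, assuming NumPy): flattening 𝐈, 𝐖 and 𝐖² into the rows of a 3×9 matrix gives full row rank.

```python
import numpy as np

def skew(e):
    return np.array([[0.0, -e[2], e[1]],
                     [e[2], 0.0, -e[0]],
                     [-e[1], e[0], 0.0]])

rng = np.random.default_rng(4)
e = rng.standard_normal(3); e /= np.linalg.norm(e)
W = skew(e)

M = np.vstack([np.eye(3).ravel(), W.ravel(), (W @ W).ravel()])
assert np.linalg.matrix_rank(M) == 3              # {I, W, W^2} is linearly independent
```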

115. Given that every rotation tensor 𝐐 can be expressed in terms of the skew tensor

𝐖(≡ 𝐞 ×) as a function of the rotation angle 𝜃: 𝐐(𝜃) = 𝐈 + 𝐖sin 𝜃 +

𝐖2(1 − cos 𝜃) and that {𝐈,𝐖,𝐖2} is linearly independent set of tensors, show that,

{𝐈, 𝐐, 𝐐T} is also a linearly independent set.

Assume that the tensor set, {𝐈, 𝐐, 𝐐T} is linearly dependent. It means we can find 𝛼, 𝛽

and 𝛾 not all equal to zero such that

𝛼𝐈 + 𝛽𝐐 + 𝛾𝐐T = 0

Since 𝐐(𝜃) = 𝐈 + 𝐖sin 𝜃 + 𝐖2(1 − cos 𝜃), we substitute and obtain,

𝛼𝐈 + 𝛽𝐐 + 𝛾𝐐T = 𝛼𝐈 + 𝛽(𝐈 + 𝐖 sin 𝜃 + 𝐖²(1 − cos 𝜃)) + 𝛾(𝐈 − 𝐖 sin 𝜃 + 𝐖²(1 − cos 𝜃))

= (𝛼 + 𝛽 + 𝛾)𝐈 + (𝛽 − 𝛾)𝐖 sin 𝜃 + (𝛽 + 𝛾)𝐖²(1 − cos 𝜃)

= 𝑎𝐈 + 𝑏𝐖 + 𝑐𝐖² = 𝟎

if we write (𝛼 + 𝛽 + 𝛾) = 𝑎, (𝛽 − 𝛾) sin 𝜃 = 𝑏 and (𝛽 + 𝛾)(1 − cos 𝜃) = 𝑐. Since {𝐈, 𝐖, 𝐖²} is a linearly independent set, 𝑎 = 𝑏 = 𝑐 = 0. Provided sin 𝜃 ≠ 0 (so that 𝜃 is not a multiple of 𝜋, and hence also cos 𝜃 ≠ 1), this forces 𝛽 = 𝛾 = 0 and then 𝛼 = 0, contradicting the assumption that 𝛼, 𝛽 and 𝛾 are not all zero. Hence {𝐈, 𝐐, 𝐐T} is also a linearly independent set.

116. If for an arbitrary unit vector 𝐞, the tensor, 𝐐(𝜃) = cos(𝜃)𝟏 + (1 − cos(𝜃))𝐞 ⊗

𝐞 + sin(𝜃)(𝐞 ×) where (𝐞 ×) is the vector cross of 𝐞. Given that for any vector 𝐮,

the vector 𝐯 ≡ 𝐐(𝜃) 𝐮 has the same magnitude as 𝐮, and that, for any scalar 𝛼,

𝐐(𝜃)(𝛼𝐞) = 𝛼𝐞, what is the physical meaning of 𝐐(𝜃)?

𝐐(𝜃) is a rotation about the vector 𝐞, counterclockwise through the angle 𝜃. It therefore alters neither the magnitude nor the direction of any vector along 𝐞; for any other vector, it preserves the magnitude but changes the direction.

117. If for an arbitrary unit vector 𝐞, the tensor, 𝐐(𝜃) = cos(𝜃)𝟏 + (1 − cos(𝜃))𝐞 ⊗

𝐞 + sin(𝜃)(𝐞 ×) where (𝐞 ×) is the vector cross of 𝐞. Show that for any vector 𝐮,

the vector 𝐯 ≡ 𝐐(𝜃) 𝐮 has the same magnitude as 𝐮. What is the physical meaning

of 𝐐(𝜃)?

Let the scalar 𝑥 ≡ 𝐞 ⋅ 𝐮 be the projection of 𝐮 onto the unit vector 𝐞. The square of the magnitude of 𝐯 is

|𝐯|² = 𝐯 ⋅ 𝐯 = (𝐮 cos 𝜃 + (1 − cos 𝜃)𝑥𝐞 + sin 𝜃 (𝐞 × 𝐮)) ⋅ (𝐮 cos 𝜃 + (1 − cos 𝜃)𝑥𝐞 + sin 𝜃 (𝐞 × 𝐮))

= |𝐮|² cos²𝜃 + 2(cos 𝜃 − cos²𝜃)𝑥² + 2(𝐞 × 𝐮 ⋅ 𝐮) sin 𝜃 cos 𝜃 + (1 − cos 𝜃)²𝑥² + 2𝑥(𝐞 × 𝐮 ⋅ 𝐞)(1 − cos 𝜃) sin 𝜃 + sin²𝜃 |𝐞 × 𝐮|²

= |𝐮|² cos²𝜃 + 2(cos 𝜃 − cos²𝜃)𝑥² + (1 − cos 𝜃)²𝑥² + sin²𝜃 (|𝐮|² − 𝑥²)

= |𝐮|²(cos²𝜃 + sin²𝜃) + 𝑥²[2(cos 𝜃 − cos²𝜃) + (1 − cos 𝜃)² − sin²𝜃]

= |𝐮|²

since 𝐞 × 𝐮 ⋅ 𝐮 = 𝐞 × 𝐮 ⋅ 𝐞 = 0, |𝐞 × 𝐮|² = |𝐮|² − 𝑥², and the term in square brackets vanishes when expanded.
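A one-line numerical confirmation that 𝐐(𝜃) preserves magnitudes (ours, assuming NumPy; skew and the sample data are arbitrary):

```python
import numpy as np

def skew(e):
    return np.array([[0.0, -e[2], e[1]],
                     [e[2], 0.0, -e[0]],
                     [-e[1], e[0], 0.0]])

rng = np.random.default_rng(5)
e = rng.standard_normal(3); e /= np.linalg.norm(e)
theta = 2.3
Q = np.cos(theta)*np.eye(3) + (1 - np.cos(theta))*np.outer(e, e) + np.sin(theta)*skew(e)

u = rng.standard_normal(3)
assert np.isclose(np.linalg.norm(Q @ u), np.linalg.norm(u))
```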

118. If for an arbitrary unit vector 𝐞, the tensor, 𝐐(𝜃) = cos(𝜃)𝟏 + (1 − cos(𝜃))𝐞 ⊗

𝐞 + sin(𝜃)(𝐞 ×) where (𝐞 ×) is the vector cross of 𝐞. Show that for arbitrary 0 <

𝛼, 𝛽 ≤ 2𝜋, that 𝐐(𝛼 + 𝛽) = 𝐐(𝛼)𝐐(𝛽).

It is convenient to write 𝐐(𝛼) and 𝐐(𝛽) in terms of their 𝑖, 𝑗 components; we assume

that the unit vector 𝐞 = (𝑥1, 𝑥2, 𝑥3):

[𝐐(𝛼)]𝑖𝑗 = cos𝛼 𝛿𝑖𝑗 + (1 − cos 𝛼)𝑥𝑖𝑥𝑗 − sin 𝛼 𝜖𝑖𝑗𝑘𝑥𝑘

Consequently, we can write for the product 𝐐(𝛼)𝐐(𝛽),

[𝐐(𝛼)𝐐(𝛽)]𝑖𝑗 = [𝐐(𝛼)]𝑖𝑘[𝐐(𝛽)]𝑘𝑗 =

= [cos𝛼 𝛿𝑖𝑘 + (1 − cos 𝛼)𝑥𝑖𝑥𝑘 − sin 𝛼 𝜖𝑖𝑘𝑙𝑥𝑙][cos𝛽 𝛿𝑘𝑗 + (1 − cos 𝛽)𝑥𝑘𝑥𝑗

− sin 𝛽 𝜖𝑘𝑗𝑛𝑥𝑛]

= cos𝛼 cos 𝛽 𝛿𝑖𝑘𝛿𝑘𝑗 + cos𝛼 (1 − cos 𝛽)𝛿𝑖𝑘𝑥𝑘𝑥𝑗 − cos𝛼 sin 𝛽 𝛿𝑖𝑘𝜖𝑘𝑗𝑛𝑥𝑛

+ (1 − cos 𝛼) cos 𝛽 𝑥𝑖𝑥𝑘𝛿𝑘𝑗 + (1 − cos 𝛼)(1 − cos 𝛽)𝑥𝑖𝑥𝑘2𝑥𝑗

− (1 − cos 𝛼) sin 𝛽 𝑥𝑖𝑥𝑘𝑥𝑛𝜖𝑘𝑗𝑛 − sin 𝛼 cos 𝛽 𝜖𝑖𝑘𝑙𝑥𝑙𝛿𝑘𝑗

− sin 𝛼 (1 − cos 𝛽)𝜖𝑖𝑘𝑙𝑥𝑙𝑥𝑘𝑥𝑗 + sin 𝛼 sin 𝛽 𝜖𝑖𝑘𝑙𝜖𝑘𝑗𝑛𝑥𝑛𝑥𝑙

= cos𝛼 cos 𝛽 𝛿𝑖𝑗 + cos 𝛼 (1 − cos 𝛽)𝑥𝑖𝑥𝑗 − cos𝛼 sin 𝛽 𝜖𝑖𝑗𝑛𝑥𝑛 + (1 − cos 𝛼) cos 𝛽 𝑥𝑖𝑥𝑗

+ (1 − cos 𝛼)(1 − cos 𝛽)𝑥𝑖𝑥𝑗 − (1 − cos 𝛼) sin 𝛽 𝑥𝑖𝑥𝑘𝑥𝑛𝜖𝑘𝑗𝑛

− sin 𝛼 cos 𝛽 𝜖𝑖𝑗𝑙𝑥𝑙 − sin 𝛼 (1 − cos 𝛽)𝜖𝑖𝑘𝑙𝑥𝑙𝑥𝑘𝑥𝑗 + sin 𝛼 sin 𝛽 𝜖𝑖𝑘𝑙𝜖𝑘𝑗𝑛𝑥𝑛𝑥𝑙

= cos𝛼 cos 𝛽 𝛿𝑖𝑗 + cos 𝛼 (1 − cos 𝛽)𝑥𝑖𝑥𝑗 − cos𝛼 sin 𝛽 𝜖𝑖𝑗𝑛𝑥𝑛 + (1 − cos 𝛼) cos 𝛽 𝑥𝑖𝑥𝑗

+ (1 − cos 𝛼)(1 − cos 𝛽)𝑥𝑖𝑥𝑗 − (1 − cos 𝛼) sin 𝛽 𝑥𝑖𝑥𝑘𝑥𝑛𝜖𝑘𝑗𝑛

− sin 𝛼 cos 𝛽 𝜖𝑖𝑗𝑙𝑥𝑙 − sin 𝛼 (1 − cos 𝛽)𝜖𝑖𝑘𝑙𝑥𝑙𝑥𝑘𝑥𝑗

+ sin 𝛼 sin 𝛽 (𝛿𝑙𝑗𝛿𝑖𝑛 − 𝛿𝑙𝑛𝛿𝑗𝑖)𝑥𝑛𝑥𝑙

= (cos 𝛼 cos 𝛽 − sin 𝛼 sin 𝛽)𝛿𝑖𝑗 + [1 − (cos 𝛼 cos 𝛽 − sin 𝛼 sin 𝛽)]𝑥𝑖𝑥𝑗 − (cos 𝛼 sin 𝛽 + sin 𝛼 cos 𝛽)𝜖𝑖𝑗𝑛𝑥𝑛

= cos(𝛼 + 𝛽) 𝛿𝑖𝑗 + [1 − cos(𝛼 + 𝛽)]𝑥𝑖𝑥𝑗 − sin(𝛼 + 𝛽) 𝜖𝑖𝑗𝑛𝑥𝑛

= [𝐐(𝛼 + 𝛽)]𝑖𝑗

The terms 𝑥𝑖𝑥𝑘𝑥𝑛𝜖𝑘𝑗𝑛 and 𝜖𝑖𝑘𝑙𝑥𝑙𝑥𝑘𝑥𝑗 vanish because each is the contraction of an antisymmetric symbol with a product that is symmetric in the contracted indices.
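The composition rule is easy to confirm numerically (our sketch, assuming NumPy; Q and skew are our helpers, and the axis and angles are arbitrary):

```python
import numpy as np

def skew(e):
    return np.array([[0.0, -e[2], e[1]],
                     [e[2], 0.0, -e[0]],
                     [-e[1], e[0], 0.0]])

def Q(theta, e):
    """Rotation about the unit vector e through the angle theta."""
    return (np.cos(theta)*np.eye(3)
            + (1 - np.cos(theta))*np.outer(e, e)
            + np.sin(theta)*skew(e))

e = np.array([1.0, 2.0, 2.0]) / 3.0               # a unit vector
alpha, beta = 0.6, 1.9
assert np.allclose(Q(alpha + beta, e), Q(alpha, e) @ Q(beta, e))
```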

119. If for an arbitrary unit vector 𝐞, the tensor, 𝐐(𝜃) = cos(𝜃)𝟏 + (1 − cos(𝜃))𝐞 ⊗ 𝐞 + sin(𝜃)(𝐞 ×) where (𝐞 ×) is the vector cross of 𝐞, show that 𝐐(𝜃) is a periodic tensor function with period 2𝜋. [Hint: 𝐐(𝛼 + 𝛽) = 𝐐(𝛼)𝐐(𝛽)]

Since 𝐐(𝛼 + 𝛽) = 𝐐(𝛼)𝐐(𝛽) we can write that 𝐐(𝛼 + 2𝜋) = 𝐐(𝛼)𝐐(2𝜋). But a

direct substitution shows that, 𝐐(0) = 𝐐(2π) = 𝟏. We therefore have that,

𝐐(𝛼 + 2𝜋) = 𝐐(𝛼)𝐐(2𝜋) = 𝐐(𝛼)

which completes the proof. The above results show that 𝐐(𝛼) is a rotation along the

unit vector 𝐞 through an angle 𝛼.

120. Given that 𝐐 is an orthogonal tensor, show that the principal invariants of a tensor 𝐒 satisfy 𝐼𝑘(𝐐𝐒𝐐T) = 𝐼𝑘(𝐒), 𝑘 = 1, 2 or 3; that is, rotations and orthogonal transformations do not change the invariants.

𝐼1(𝐐𝐒𝐐T) = tr(𝐐𝐒𝐐T)

= tr(𝐐T𝐐𝐒) = tr(𝐒)

= 𝐼1(𝐒)

𝐼2(𝐐𝐒𝐐T) = ½[tr²(𝐐𝐒𝐐T) − tr(𝐐𝐒𝐐T𝐐𝐒𝐐T)]

= ½[𝐼1²(𝐒) − tr(𝐐𝐒²𝐐T)]

= ½[𝐼1²(𝐒) − tr(𝐐T𝐐𝐒²)]

= ½[𝐼1²(𝐒) − tr(𝐒²)] = 𝐼2(𝐒)

𝐼3(𝐐𝐒𝐐T) = det(𝐐𝐒𝐐T)

= det(𝐐T𝐐𝐒) = det(𝐒)

= 𝐼3(𝐒)

Hence 𝐼𝑘(𝐐𝐒𝐐T) = 𝐼𝑘(𝑺), 𝑘 = 1,2, or 3
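A numerical spot check of this invariance (ours, assuming NumPy; the invariants helper and the random tensors are our choices):

```python
import numpy as np

def invariants(A):
    """Principal invariants I1, I2, I3 of a 3x3 tensor A."""
    I1 = np.trace(A)
    I2 = 0.5*(I1**2 - np.trace(A @ A))
    I3 = np.linalg.det(A)
    return np.array([I1, I2, I3])

rng = np.random.default_rng(6)
S = rng.standard_normal((3, 3))
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # an orthogonal tensor

assert np.allclose(invariants(Q @ S @ Q.T), invariants(S))
```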