
MA106 - Linear Algebra, Spring 2011

Anant R. Shastri

Department of Mathematics, Indian Institute of Technology Bombay

January 19, 2011

ARS (IITB) MA106-Linear Algebra January 19, 2011 2 / 45

—————————

Determinants

We are now interested in studying linear transformations from a space to itself:

T : R^n → R^n

These are also called endomorphisms.

We have seen that a rigid motion which fixes the origin is an invertible linear transformation of a euclidean space. It is clear that such a transformation does not change the volume of any solid. Obviously, this cannot be true of all linear transformations. But then how do we measure the change in volume effected by a linear transformation? An answer lies in the notion of determinants. For example, consider the map λ : R^n −→ R^n given by

v ↦ λv

where λ ∈ R. For n = 1 it is clear that a segment of length ℓ in R (no matter where it is situated in R) is mapped to a segment of length |λ|ℓ.


Determinants

Likewise it is easily seen that a cube of volume m in R^3 is mapped to a solid of volume m|λ|^3.

In R^n, the effect on the volume is multiplication by |λ|^n. The matrix of λ is the diagonal matrix diag(λ, λ, . . . , λ).

A little more generally, if we consider the endomorphism which effects a scaling by λi in the i-th direction, then it is clear that the volume of an n-dimensional cube is changed by a factor of |λ1 · · · λn|.

These considerations lead us to believe that each linear transformation effects a scaling of the volume. We must hunt for this scaling factor.
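These scaling factors can be checked numerically; the following is a small sketch in Python, assuming NumPy is available, and it uses NumPy's determinant routine, anticipating the answer we are about to hunt for:

```python
import numpy as np

# Scaling by lambda_i in the i-th direction is the diagonal matrix
# diag(lambda_1, ..., lambda_n); its determinant is lambda_1 * ... * lambda_n,
# so the volume of the unit n-cube is multiplied by |lambda_1 * ... * lambda_n|.
D = np.diag([2.0, 3.0, 0.5])
print(np.linalg.det(D))                     # product 2 * 3 * 0.5, approximately 3.0
print(abs(np.linalg.det(5.0 * np.eye(3)))) # |5|^3, approximately 125.0
```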


Determinants

Example

Consider T : R^2 −→ R^2 such that T(1, 0) = (a, b) =: v1 and T(0, 1) = (c, d) =: v2.

This means that T maps the unit square in R^2 to the parallelogram with vertices (0, 0), (a, b), (c, d), (a + c, b + d).

[Figure: the parallelogram spanned by v1 and v2.]

The area of this parallelogram is given by |ad − bc|. In terms of the vector product it is |v1 × v2|.
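A quick numerical illustration (a sketch in Python, assuming NumPy): the cross product of (a, b, 0) and (c, d, 0) has length |ad − bc|, the area of the parallelogram.

```python
import numpy as np

# Parallelogram spanned by v1 = (a, b) and v2 = (c, d): its area is |ad - bc|,
# the length of the cross product of (a, b, 0) and (c, d, 0).
a, b, c, d = 2.0, 1.0, 1.0, 3.0
area = abs(a * d - b * c)                   # 5.0
cross = np.cross([a, b, 0.0], [c, d, 0.0])  # (0, 0, ad - bc)
print(area, np.linalg.norm(cross))
```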


Determinants

If we do not take the modulus and only take the value ad − bc, it can be thought of as a 'signed' area which is allowed to be negative. The vector product itself has the property

v1 × v2 = −v2 × v1.

We also have

(u + v)×w = u×w + v ×w.

The following simple diagram illustrates this and affirms that we are on the right track. Namely, we have (v1 + v2) × u = v1 × u + v2 × u.

[Figure: geometric proof of the additivity of the area, with vertices labelled O, A, B, C, D, E, F, G; the areas satisfy OBEA + OCFA = OBEA + BDGE − OBD + AEG = ODGA. Caution: the figure is not a 3-dimensional one!]
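Both identities can be checked numerically; a minimal sketch assuming NumPy:

```python
import numpy as np

# Anticommutativity and additivity of the vector product, as in the diagram:
# v1 x v2 = -(v2 x v1) and (v1 + v2) x u = v1 x u + v2 x u.
rng = np.random.default_rng(0)
v1, v2, u = rng.standard_normal((3, 3))
assert np.allclose(np.cross(v1, v2), -np.cross(v2, v1))
assert np.allclose(np.cross(v1 + v2, u), np.cross(v1, u) + np.cross(v2, u))
```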


Determinants

Example

Likewise, if T′ : R^3 −→ R^3 is such that T′(ei) = vi, then the volume of the parallelepiped defined by the vectors v1, v2, v3 is given by the modulus of the scalar triple product (v1 v2 v3). We can even write down the formula for this in terms of the coordinates of the vi, if we wish.

[Figure: the parallelepiped spanned by v1, v2, v3.]
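The triple product and the 3 × 3 determinant agree, as the coordinate formula alluded to above would show; a numerical sketch assuming NumPy:

```python
import numpy as np

# Volume of the parallelepiped spanned by v1, v2, v3: the modulus of the
# scalar triple product v1 . (v2 x v3), which is also the determinant of
# the matrix with rows v1, v2, v3.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([1.0, 2.0, 0.0])
v3 = np.array([0.0, 1.0, 3.0])
triple = np.dot(v1, np.cross(v2, v3))
print(abs(triple))                            # volume, here 6.0
print(np.linalg.det(np.array([v1, v2, v3])))
```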


Axioms for determinant

What do we do in the general case? We would like to approach this problem as follows: first we ask what the basic properties of this 'creature' we are seeking are. After listing sufficiently many of these properties in an economical way, we shall launch a search party to look out for the creature.

In what follows we shall use the symbol K to denote either the field R of real numbers or the field C of complex numbers. (Indeed, for many of the concepts and results, we could have taken K to be any field.)

We are looking for a K-valued function defined on the set of n × n matrices (for each n), say f : Mn(K) −→ K, which is likely to represent the volume of the n-dimensional parallelepiped defined by the row vectors of a given matrix.


Hunt for determinant

(i) If two rows of a matrix A are identical, then f(A) = 0.

(ii) Look at the sign of the area of the parallelogram. Depending upon the order in which we measure it, the sign of the area could be positive or negative. If we interchange any two rows of A, then the effect on f(A) must be to multiply it by −1.

(iii) We have already observed that if we scale one of the vectors, then the entire volume is scaled by the same factor.

(iv) Likewise, the volume is 'additive' in each of the slots:

((u1 + u2) v w) = (u1 v w) + (u2 v w).
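Anticipating that the determinant is the creature being hunted, properties (i)-(iii) can be spot-checked numerically; a sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
det = np.linalg.det

# (i) two identical rows give the value 0
B = A.copy(); B[1] = B[0]
assert np.isclose(det(B), 0.0)

# (ii) interchanging two rows multiplies the value by -1
assert np.isclose(det(A[[1, 0, 2]]), -det(A))

# (iii) scaling one row scales the value by the same factor
C = A.copy(); C[2] *= 7.0
assert np.isclose(det(C), 7.0 * det(A))
```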


Hunt for determinant

With these things in mind we shall now attack the problem systematically. Given a matrix A ∈ Mn(K), say

A =
[ A1 ]
[ A2 ]
[ ⋮  ]
[ An ],

recall that by arranging the n rows of A one after another we can think of it as an ordered n-tuple of n-vectors: A = (A1, . . . , An). More generally, this method may be used to identify Mn,m(K) with K^{mn}. We are going to exploit it now.
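The row-by-row identification of a matrix with a long vector is exactly NumPy's row-major flattening; a minimal sketch:

```python
import numpy as np

# Reading the rows one after another identifies an n x m matrix with a
# vector of length mn; NumPy's row-major ravel/reshape use this very map.
A = np.arange(6).reshape(2, 3)  # rows (0, 1, 2) and (3, 4, 5)
flat = A.ravel()                # (0, 1, 2, 3, 4, 5)
assert (flat.reshape(2, 3) == A).all()
```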


Axioms for determinant

Definition

Let 1 ≤ i ≤ n. A function f : Mn(K) −→ K is called linear in the i-th row if

f(A1, . . . , Ai−1, αA + βB, Ai+1, . . . , An)
= α f(A1, . . . , Ai−1, A, Ai+1, . . . , An) + β f(A1, . . . , Ai−1, B, Ai+1, . . . , An)     (1)

for all vectors Aj, A, B ∈ K^n and all scalars α, β ∈ K. A function f which is linear in the i-th row for each 1 ≤ i ≤ n will be called multi-linear.
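Anticipating that the determinant is the function we will construct, its linearity in each row, as in (1), can be spot-checked numerically (a sketch assuming NumPy; `with_row` is a hypothetical helper introduced here for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
rows = rng.standard_normal((3, 3))
A_, B_ = rng.standard_normal((2, 3))
alpha, beta = 2.0, -1.5

def with_row(i, r):
    """Hypothetical helper: the matrix `rows` with its i-th row replaced by r."""
    M = rows.copy(); M[i] = r
    return M

# Linearity of det in the i-th row, with the other rows held fixed.
i = 1
lhs = np.linalg.det(with_row(i, alpha * A_ + beta * B_))
rhs = alpha * np.linalg.det(with_row(i, A_)) + beta * np.linalg.det(with_row(i, B_))
assert np.isclose(lhs, rhs)
```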


Axioms for determinant

Examples of multi-linear maps that we have are the vector product of two vectors in case n = 2 and the scalar triple product in case n = 3. We do not know many examples. This should not deter us from our goal. Remember that a linear map was completely determined by its values on the standard vectors e1^T, . . . , en^T. We are going to exploit this now.

Let N = {1, 2, . . . , n} and let F(N, N) be the set of all maps σ : N −→ N. (This set can also be thought of as the set of all sequences of length n with values between 1 and n.)

Let Eσ denote the matrix with its i-th row equal to eσ(i)^T. For example, in this notation, if σ = Id then Eσ = In. If σ1 = (1, 1, 2, 2) and σ2 = (3, 2, 1), say, then

Eσ1 =
[ 1 0 0 0 ]
[ 1 0 0 0 ]
[ 0 1 0 0 ]
[ 0 1 0 0 ]

Eσ2 =
[ 0 0 1 ]
[ 0 1 0 ]
[ 1 0 0 ]

Write down a few more examples to get familiar with this notation.
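The matrices Eσ are easy to build by selecting rows of the identity matrix; a small sketch in Python, assuming NumPy (the helper `E` is introduced here for illustration):

```python
import numpy as np

def E(sigma, n):
    """Matrix whose i-th row is e_{sigma(i)}^T (sigma given 1-based)."""
    return np.eye(n, dtype=int)[[s - 1 for s in sigma]]

assert (E((1, 2, 3), 3) == np.eye(3, dtype=int)).all()  # sigma = Id gives I_n
print(E((1, 1, 2, 2), 4))  # rows e1, e1, e2, e2
print(E((3, 2, 1), 3))     # rows e3, e2, e1
```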


Axioms for determinant

Theorem

Any multi-linear map f is completely determined by its values on the matrices Eσ, σ ∈ F(N, N).

Proof: Let us first illustrate this for n = 2. Write

A =
[ a b ]
[ c d ]     (2)

Then A = (A1, A2) where A1 = (a, b) and A2 = (c, d). Further, A1 = a e1^T + b e2^T and A2 = c e1^T + d e2^T. Therefore

f(A) = a f(e1^T, A2) + b f(e2^T, A2)
     = a (c f(e1^T, e1^T) + d f(e1^T, e2^T)) + b (c f(e2^T, e1^T) + d f(e2^T, e2^T))
     = ac f(E(11)) + ad f(E(12)) + bc f(E(21)) + bd f(E(22)).

Thus the pattern in the general case is clear. By applying linearity in one slot at a time we obtain

f(A) = ∑_{σ ∈ F(N,N)} aσ f(Eσ)     (3)
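For n = 2 the expansion (3) can be verified numerically with f = det (a sketch assuming NumPy); note that only the two injective σ contribute, since the other two give Eσ with equal rows:

```python
import numpy as np
from itertools import product

# Verify (3) for n = 2 with f = det: det(A) equals the sum over all four
# maps sigma in F(N, N) of a_sigma * det(E_sigma), where
# a_sigma = a_{1 sigma(1)} * a_{2 sigma(2)}.
A = np.array([[2.0, 5.0], [1.0, 4.0]])
n = 2
I = np.eye(n)
total = 0.0
for sigma in product(range(n), repeat=n):  # sigma as a 0-based sequence
    a_sigma = np.prod([A[i, sigma[i]] for i in range(n)])
    total += a_sigma * np.linalg.det(I[list(sigma)])  # E_sigma = rows of I
assert np.isclose(total, np.linalg.det(A))  # both equal ad - bc = 3.0
```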


Axioms for determinant

f(A) = ∑_σ aσ f(Eσ)     (4)

Here the scalars aσ are given by

aσ = a1σ(1) a2σ(2) · · · anσ(n).     (5)

The verification is straightforward. ♠

Definition

A map f : Mn(K) −→ K is called alternating if for every pair (i, j), i ≠ j,

f(A1, . . . , Ai, . . . , Aj, . . . , An) = −f(A1, . . . , Aj, . . . , Ai, . . . , An).     (6)

That is, if we interchange any two rows, then the sign of the value of the function changes.


Axioms for determinant

Theorem

A multi-linear map f is alternating iff it vanishes on all matrices with two equal rows.

Proof: Let f be multi-linear and alternating. If two rows of A are the same, then interchanging them leaves A unchanged. On the other hand, the sign of the value of f on A has to change. This means that f(A) = −f(A) and hence f(A) = 0.

To prove the converse, let us consider the 2 × 2 case. We assume that f is multi-linear and has the property f(A, A) = 0 for all A. We must show that f(B, C) = −f(C, B). We proceed by taking

0 = f(B + C, B + C)
  = f(B, B + C) + f(C, B + C)
  = f(B, B) + f(B, C) + f(C, B) + f(C, C)
  = f(B, C) + f(C, B).
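The same computation can be checked numerically with f = det on 2 × 2 matrices; a sketch assuming NumPy:

```python
import numpy as np

# The 2 x 2 computation above with f = det: f(B + C, B + C) = 0 forces
# f(B, C) = -f(C, B), i.e. det is alternating.
rng = np.random.default_rng(3)
B, C = rng.standard_normal((2, 2))
f = lambda r1, r2: np.linalg.det(np.array([r1, r2]))
assert np.isclose(f(B + C, B + C), 0.0)
assert np.isclose(f(B, C), -f(C, B))
```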


Axioms for determinant

In the general case, given a matrix A, consider the matrix B whose rows are those of A except that in the i-th and j-th places we have B_i = B_j = A_i + A_j. Then f(B) = 0.

On the other hand, by multi-linearity, we have

0 = f(B)
  = f(A_1, …, A_i, …, A_i + A_j, …, A_n) + f(A_1, …, A_j, …, A_i + A_j, …, A_n)
  = f(A_1, …, A_i, …, A_i, …, A_n) + f(A_1, …, A_i, …, A_j, …, A_n)
    + f(A_1, …, A_j, …, A_i, …, A_n) + f(A_1, …, A_j, …, A_j, …, A_n).

Hence f(A_1, …, A_i, …, A_j, …, A_n) = −f(A_1, …, A_j, …, A_i, …, A_n). ♠


Remark
There is more merit in this equivalent formulation of the alternating property than meets the eye. It is the right one for certain other types of fields which we have not considered here. As a consequence, we can relax the condition in the definition of the alternating property a little.

Theorem
A multi-linear map f : M_n(R) → R is alternating iff f changes sign whenever we interchange two consecutive rows.

Proof: Clearly, if f is alternating then it satisfies this weaker condition. To prove the converse, assume f is as in the theorem; we shall show that f vanishes on matrices with two equal rows. Any two equal rows can be made consecutive by interchanging consecutive rows in between, so it suffices to show that f vanishes on matrices with two consecutive rows equal. The rest of the proof is exactly the same as that of the previous theorem.


Theorem
Let f be a multi-linear alternating map on M_n(K). Consider the elementary matrix E = I + αE_ij. Then for all A ∈ M_n,

f(EA) = f(A)             if i ≠ j,
f(EA) = (1 + α) f(A)     if i = j.

Proof: Let A = (A_1, …, A_n). For i ≠ j, observe that

EA = (A_1, …, A_i + αA_j, …, A_j, …, A_n).

Therefore

f(EA) = f(A_1, …, A_i, …, A_j, …, A_n) + α f(A_1, …, A_j, …, A_j, …, A_n) = f(A).

For i = j, we have EA = (A_1, …, A_{i−1}, (1 + α)A_i, A_{i+1}, …, A_n). Therefore f(EA) = (1 + α) f(A), by linearity in the i-th slot. ♠
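The theorem can be illustrated numerically for n = 3 with f = det. The helper names det3, matmul and elementary below are our own, not from the lecture; det3 is the usual cofactor expansion along the first row:

```python
# Check: multiplying by E = I + alpha*E_ij leaves det unchanged when i != j,
# and scales det by (1 + alpha) when i == j.
def det3(A):
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

def matmul(X, Y):
    return [[sum(X[r][k] * Y[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def elementary(i, j, alpha):
    # E = I + alpha * E_ij
    E = [[1 if r == c else 0 for c in range(3)] for r in range(3)]
    E[i][j] += alpha
    return E

A = [[2, 1, 0], [1, 3, 1], [0, 1, 4]]
# i != j: E adds alpha times row j to row i; determinant unchanged
assert det3(matmul(elementary(0, 2, 5), A)) == det3(A)
# i == j: E scales row i by (1 + alpha)
assert det3(matmul(elementary(1, 1, 5), A)) == 6 * det3(A)
```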


Remark
In the case of linear maps we were free to choose the value of the function on the e_i's. The same is true for a multi-linear map, viz., we are free to choose its value on the E_σ and then extend it by formula (4). However, if in addition f is alternating, then we are forced to choose its value on E_σ to be zero for all those σ which are not one-to-one.

Definition
Let Σ(n) denote the set of those elements σ ∈ F(N, N) which are one-to-one. These are called permutations on n letters. A permutation which changes its value on exactly two indices is called a transposition.


Given σ ∈ Σ(n), let us observe what GEM does on E_σ.

It is clear that GEM simply consists of a sequence of row interchanges, and we obtain G(E_σ) = J(E_σ) = I_n. Observe that interchanging two rows of E_σ is effected by composing σ with a corresponding transposition. Thus GEM yields a sequence of transpositions τ_1, …, τ_k such that

E(τ_k ∘ ⋯ ∘ τ_1 ∘ σ) = I_n.

This means that τ_k ∘ ⋯ ∘ τ_1 ∘ σ = Id. Hence σ = τ_1 ∘ ⋯ ∘ τ_k. Thus we have proved:

Lemma
Every permutation is a (finite) composite of transpositions.

Definition
Let e(σ) denote the number of row interchanges in the GEM applied to E_σ to bring it to G(E_σ) = I_n. Let sgn(σ) = (−1)^{e(σ)}. We call this the signature of σ.
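The definition above is directly computable. The sketch below counts the interchanges needed to sort σ back to the identity (which is what GEM does to the rows of E_σ); sgn is our own function name:

```python
# sgn(sigma) = (-1)^{e(sigma)}, where e(sigma) counts the row interchanges
# that bring E_sigma to I_n -- here, the swaps that sort sigma to the identity.
def sgn(sigma):
    p = list(sigma)
    swaps = 0
    for i in range(len(p)):
        if p[i] != i:
            j = p.index(i)          # interchange positions i and j
            p[i], p[j] = p[j], p[i]
            swaps += 1
    return (-1) ** swaps

assert sgn((0, 1, 2)) == 1           # identity: no interchanges
assert sgn((1, 0, 2)) == -1          # a single transposition
assert sgn((1, 2, 0)) == 1           # composite of two transpositions

# composing with a transposition flips the sign (cf. equation (10) below)
sigma = (2, 0, 3, 1)
tau_sigma = tuple(3 if v == 1 else 1 if v == 3 else v for v in sigma)
assert sgn(tau_sigma) == -sgn(sigma)
```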


Applying the alternating property repeatedly, it follows that

f(E_σ) = sgn(σ) f(I_n).   (7)

Combining (4),

f(A) = ∑_σ a_σ f(E_σ),

and (5),

a_σ = a_{1σ(1)} a_{2σ(2)} ⋯ a_{nσ(n)},

with (7), we get

f(A) = ( ∑_{σ ∈ Σ(n)} sgn(σ) a_σ ) f(I_n).   (8)


Existence

Theorem
For every t ∈ R, there exists a unique f_t : M_n(R) → R which is row-wise multi-linear, alternating, and such that f_t(I_n) = t. Moreover, for any such f_t, we have f_t(E_σ) = sgn(σ) t.

Note that we have already proved the uniqueness of such a function and the formula f_t(E_σ) = sgn(σ) t.


To show that such a function exists, we just define f_t by the formula

f_t(A) = ( ∑_{σ ∈ Σ(n)} sgn(σ) a_σ ) t.   (9)

To verify that f_t is multi-linear is straightforward. Assuming that the function defined by (9) has the alternating property on the set {E_σ : σ ∈ Σ(n)}, it follows easily that the same holds for all matrices. Therefore, it is enough to prove:

Lemma
Let σ, τ ∈ Σ(n), where τ is a transposition. Then

sgn(σ) = −sgn(τ ∘ σ).   (10)


Proof: We prove it by induction on n. If n = 2 this is obvious. (For n = 3 it can be verified in a straightforward way, by considering the various cases. Try this as an exercise before reading the rest of the proof; it will help you understand the proof better. However, this is not a logical necessity.) Inductively, assume that statement (10) holds for n − 1. Let A = E_σ and B = E_{τ∘σ}.

Suppose τ interchanges i and j, i ≠ j. Notice that B is obtained from A by interchanging the i-th and j-th rows. We look at the first step of GEM performed on both A and B, cut down the first row and first column from each, compare the results on either side, use induction to obtain equality (10) for n − 1, and finally relate it to (10) for n.


Suppose GEM applied to A interchanges the 1st row with the k-th row. (We do not exclude the case k = 1; it merely means that the first step of GEM does nothing to A.) The rest of the proof is best understood by glancing at the table below, which is a schematic representation of the proof.


Case 1: Let {1, k} = {i, j}. Here the first step of GEM applied to A is the same as τ applied to A, and hence both produce B. Clearly, the first step of GEM applied to B does nothing, and all the remaining steps coincide for both. Hence (10) is verified in this case.
Case 2: Suppose the set {1, k, i, j} has three elements. Observe that during the first step of GEM on A or B, none of the other rows is affected. (Thus, by simply ignoring these rows and the corresponding columns, this reduces to the case n = 3, for which you have already verified (10). However, we shall write down a detailed proof.)


We first consider the subcase k = 1. Then the first step of GEM does not produce any change in A or B, and we simply pass to the next stage. Equation (10) for n follows from the same equation for n − 1 by induction.


Next, suppose k is equal to one of the indices {i, j}, say k = i for definiteness. This means that the non-zero entry in the first column of A is in the i-th place, and hence the same entry for B is in the j-th place. Therefore the first step of GEM for A swaps rows 1 ↔ i, whereas the same for B swaps rows 1 ↔ j. Therefore, after cutting down the first row and first column, the two matrices are again related by the transposition τ′ = (i − 1, j − 1). Again, equation (10) for n is obtained by multiplying the same equation for n − 1 by −1.


Case 3: Suppose {1, k, i, j} has four elements. Under this condition, it is clear that the first step of GEM on both A and B is to interchange the first and k-th rows. After this, the two (n − 1) × (n − 1) matrices A′, B′ are again related by a transposition. Therefore, by induction, (10) is true for A′, B′, and hence, after multiplying by −1, the same holds for A, B. ♠


Existence of determinant

Schematic representation of Cases 1 and 2 (Y, R, P mark the 1st, i-th and j-th rows; the index in parentheses is the row position):

Case (a): k = 1.
  Initial stage:  A: Y (1), …, R (i), …, P (j), …      B: Y (1), …, P (i), …, R (j), …
  After step 1:   A: Y (1=k), …, R (i), …, P (j), …    B: Y (1=k), …, P (i), …, R (j), …

Case (b): k = i.
  Initial stage:  A: Y (1), …, R (i=k), …, P (j), …    B: Y (1), …, P (i), …, R (j), …
  After step 1:   A: R (1), …, Y (i), …, P (j), …      B: R (1), …, P (i), …, Y (j), …


Axioms for determinant

Case 3 (G marks the k-th row):
  Initial stage:  A: Y (1), …, R (i), …, G (k), …, P (j), …    B: Y (1), …, P (i), …, G (k), …, R (j), …
  After step 1:   A: G (1), …, R (i), …, Y (k), …, P (j), …    B: G (1), …, P (i), …, Y (k), …, R (j), …


Existence of determinant

Definition
A multi-linear alternating map f : M_n(K) → K is called the determinant function if f(I_n) = 1.

Theorem
There exists a unique determinant function for each n ≥ 1. It is given by the expansion formula

det A = ∑_{σ ∈ Σ(n)} sgn(σ) a_σ = ∑_{σ ∈ Σ(n)} sgn(σ) a_{1σ(1)} a_{2σ(2)} ⋯ a_{nσ(n)}.   (11)

Remark
The condition in the above definition is called normalization. Thus a multi-linear, alternating, normalized map is unique. It is denoted by det; often one writes det A or |A| for the determinant of a square matrix A. Clearly it coincides with the concept we already have in the cases n = 1, 2. We shall now establish the other properties of det with which we are familiar.
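The expansion formula (11) can be transcribed directly into code. In this sketch the helper names sgn and det are our own; sgn here counts inversions, which agrees with (−1)^{e(σ)}:

```python
# det A = sum over all permutations sigma of sgn(sigma) * a_{1 sigma(1)} ... a_{n sigma(n)}
from itertools import permutations
from math import prod

def sgn(p):
    inversions = sum(p[a] > p[b]
                     for a in range(len(p)) for b in range(a + 1, len(p)))
    return -1 if inversions % 2 else 1

def det(A):
    n = len(A)
    return sum(sgn(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

assert det([[1, 0], [0, 1]]) == 1                    # normalization
assert det([[1, 2], [3, 4]]) == -2
assert det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]) == 24  # diagonal case
```

Note that this sum has n! terms, so it is a definition rather than a practical algorithm; the GEM-based computation below is far more efficient.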


Computing the determinant

Let us now see how GEM can be applied to compute the determinant of an n × n matrix.

First consider the case when the row rank of A is equal to n. Then we know that J(A) = I_n. Suppose the number of row interchanges performed to arrive at G(A) is e(A). By Theorem 4, it follows that det(A) = (−1)^{e(A)} det(G(A)). Now look at the diagonal entries of G(A). While arriving at J(A) from G(A), we first divide each row by the corresponding diagonal entry c_i and then sweep each column. Since the sweeping of columns does not change the determinant, it follows that

det(G(A)) = c_1 c_2 ⋯ c_n.   (12)

Combining these two, we have

det A = (−1)^{e(A)} c_1 ⋯ c_n,   (13)

where the c_i are the diagonal entries of G(A). Now if the row rank of A is < n, then clearly some row of G(A) is zero, and hence one of the c_i is 0. Also det G(A) = 0, and since det A = ±det G(A), it follows that det A = 0. So formula (13) is still valid.
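Formula (13) can be sketched as a forward elimination that tracks the number of row interchanges e(A) and multiplies the pivots c_i. The function name det_gem is our own, and the pivoting rule (take the first non-zero entry in the column) is one simple choice consistent with GEM:

```python
# det A = (-1)^{e(A)} * c_1 ... c_n, where the c_i are the pivots of G(A)
# and e(A) counts the row interchanges. This is O(n^3), unlike the
# n!-term expansion formula.
def det_gem(A):
    A = [row[:] for row in A]      # work on a copy
    n, swaps, value = len(A), 0, 1.0
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return 0.0             # row rank < n, so some c_i = 0
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            swaps += 1
        value *= A[col][col]       # the pivot c_col
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= factor * A[col][c]
    return (-1) ** swaps * value

assert det_gem([[1, 2], [3, 4]]) == -2.0
assert det_gem([[0, 1], [1, 0]]) == -1.0   # one row interchange
assert det_gem([[1, 2], [2, 4]]) == 0.0    # row rank < n
```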

ARS (IITB) MA106-Linear Algebra January 19, 2011 32 / 45


Computing determinant

Example

(i) In the special case when A is upper triangular, we have G(A) = A. Hence det A is the product of the diagonal entries.

(ii) Consider an n × n matrix M for which a_ij = 0 for k + 1 ≤ i ≤ n, 1 ≤ j ≤ k. Thus M can be written in the form

M = [ A  B ]
    [ 0  D ]        (14)

where A, B, D are respectively of type k × k, k × (n − k), (n − k) × (n − k). Such a matrix is called a block matrix.

Let us compute the determinant of such a matrix by applying GEM to M. Observe that until A has become upper triangular, the process never involves the rows numbered > k. Suppose the number of steps in G(A) is less than k. Then there is a row of zeros in G(A), and hence G(M) has a zero on the diagonal. It follows that in this case det A = 0 = det M.

ARS (IITB) MA106-Linear Algebra January 19, 2011 33 / 45


Computing determinant

Example

(ii) continued: Now suppose the number of steps in G(A) is k. Then GEM applied to M will first make A into an upper triangular matrix, and the rest of the operations on M will not involve the first k rows at all. Therefore these operations can be thought of as two separate sets of row operations on A and D, each corresponding to operations of GEM on that block. Thus we conclude that the diagonal entries of G(M) are those of G(A) and G(D), whereas the number of row swaps for M is the sum of these numbers for A and D separately. Thus we have proved:

Theorem
For a block matrix

M = [ A  B ]
    [ 0  D ]

we have det M = (det A)(det D).

ARS (IITB) MA106-Linear Algebra January 19, 2011 34 / 45
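The block-determinant theorem is easy to check numerically. Below is a self-contained illustrative Python sketch (the helper names `det` and `block_matrix` are mine, not from the slides; `det` is the same elimination-based computation described two slides back):

```python
def det(A):
    """Elimination-based determinant: track row swaps, multiply pivots."""
    A = [row[:] for row in A]
    n, sign = len(A), 1
    for j in range(n):
        p = next((i for i in range(j, n) if A[i][j] != 0), None)
        if p is None:
            return 0.0
        if p != j:
            A[j], A[p] = A[p], A[j]
            sign = -sign
        for i in range(j + 1, n):
            m = A[i][j] / A[j][j]
            for k in range(j, n):
                A[i][k] -= m * A[j][k]
    out = float(sign)
    for j in range(n):
        out *= A[j][j]
    return out

def block_matrix(A, B, D):
    """Assemble M = [[A, B], [0, D]] from a k x k block A,
    a k x (n-k) block B and an (n-k) x (n-k) block D."""
    k = len(A)
    top = [A[i] + B[i] for i in range(k)]
    bottom = [[0.0] * k + row for row in D]
    return top + bottom

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
D = [[2.0, 0.0], [1.0, 3.0]]
M = block_matrix(A, B, D)
# the theorem: det M = (det A)(det D), independent of B
assert abs(det(M) - det(A) * det(D)) < 1e-9
```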


Computing Determinants

Formula (10) has some other interesting applications.

Corollary

Let a permutation σ be the composite of s transpositions. Then sgn(σ) = (−1)^s.

Proof: Follows by induction on s, by successive application of (10). ♠

Corollary

For any two permutations, sgn(σ ◦ τ) = sgn(σ) sgn(τ). In particular, sgn(σ) = sgn(σ⁻¹).

Proof: The first part is clear. Now observe that for any transposition τ we have τ⁻¹ = τ. Therefore, if σ = τ_1 ◦ · · · ◦ τ_k then σ⁻¹ = τ_k ◦ · · · ◦ τ_1. Now apply the previous corollary. ♠

ARS (IITB) MA106-Linear Algebra January 19, 2011 35 / 45
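The first corollary suggests a direct way to compute sgn(σ): swap entries into their home positions, count the swaps s, and return (−1)^s. An illustrative Python sketch (the name `sgn` and the 0-based tuple-of-images encoding of permutations are my conventions, not the slides'):

```python
def sgn(perm):
    """Sign of a permutation of {0, ..., n-1}, given as a tuple of images.
    Each swap below is one transposition; after s swaps we reach the
    identity, so by the corollary sgn(perm) = (-1)^s."""
    perm = list(perm)
    s = 0
    for i in range(len(perm)):
        while perm[i] != i:
            j = perm[i]
            perm[i], perm[j] = perm[j], perm[i]   # one transposition
            s += 1
    return (-1) ** s

def compose(sigma, tau):
    # (sigma o tau)(i) = sigma(tau(i))
    return tuple(sigma[t] for t in tau)

sigma, tau = (1, 2, 0), (1, 0, 2)     # a 3-cycle and a transposition
# second corollary: sgn is multiplicative
assert sgn(compose(sigma, tau)) == sgn(sigma) * sgn(tau)
```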


The Transpose

The entire discussion so far was with respect to the rows. We could have carried out the whole thing with respect to the columns instead. We would then get another normalised alternating multi-linear map, this time with respect to the columns. Let us call it δ. The following theorem tells you that δ = det.

ARS (IITB) MA106-Linear Algebra January 19, 2011 36 / 45


Det Aᵀ = Det A

Theorem

For any square matrix A we have det Aᵀ = det A.

Proof: Putting B = Aᵀ and observing that b_ij = a_ji, from (11) it follows that

det Aᵀ = ∑_σ sgn(σ) a_{σ(1)1} a_{σ(2)2} · · · a_{σ(n)n}.

Put τ = σ⁻¹. Then as σ runs over all permutations of {1, . . . , n}, so does τ. On the other hand, for each i we have i = τ(σ(i)), and for each fixed σ, as i runs over {1, . . . , n} so does σ(i). Therefore

a_{σ(1)1} a_{σ(2)2} · · · a_{σ(n)n} = a_{1τ(1)} · · · a_{nτ(n)}.

Since we have seen that sgn(σ) = sgn(σ⁻¹), the above summation is the same as the right-hand side of (11). ♠

Remark
This result is extremely useful: all the properties of the determinant function that we formulated or verified for rows are now available for columns as well.

ARS (IITB) MA106-Linear Algebra January 19, 2011 37 / 45
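For small matrices the theorem can be checked directly from the permutation expansion (11). An illustrative brute-force Python sketch (the names `det_leibniz` and `transpose` are mine; the sign of each permutation is computed by counting inversions, which agrees with sgn):

```python
from itertools import permutations

def det_leibniz(A):
    """Formula (11): det A = sum over sigma of
    sgn(sigma) * a_{1 sigma(1)} * ... * a_{n sigma(n)}."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        # sgn(p) = (-1)^{number of inversions}
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = (-1) ** inv
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3], [0, 4, 5], [1, 0, 6]]
# the theorem: det A^T = det A
assert det_leibniz(transpose(A)) == det_leibniz(A) == 22
```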


Product formula (skip the proof)

Theorem

Product formula for determinant: For any two n × n matrices A and B, we have det(AB) = (det A)(det B).

Proof: Fix B and consider the map f(A) = det(AB). Then f(I_n) = det B. If we show that f is multi-linear and alternating, then by the uniqueness of such maps f(A) = (det A) f(I_n) = (det A)(det B), and the theorem follows. Observe that the i-th row of AB is nothing but A_iB. Hence if two rows of A are equal then so are the corresponding rows of AB, and so f(A) = det(AB) = 0. This proves the alternating property. Now to check multi-linearity:

f(A_1, . . . , aA_i + a′A_i′, . . . , A_n)
  = det(A_1B, . . . , (aA_i + a′A_i′)B, . . . , A_nB)
  = a det(A_1B, . . . , A_iB, . . . , A_nB) + a′ det(A_1B, . . . , A_i′B, . . . , A_nB)
  = a f(A_1, . . . , A_i, . . . , A_n) + a′ f(A_1, . . . , A_i′, . . . , A_n).

This proves the multi-linearity of f. ♠

ARS (IITB) MA106-Linear Algebra January 19, 2011 38 / 45
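The product formula, too, can be verified numerically for small matrices. An illustrative Python sketch (names mine; `det` here is the brute-force permutation-expansion determinant, fine for small n):

```python
from itertools import permutations

def det(A):
    """Brute-force determinant via the permutation expansion."""
    n, total = len(A), 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = (-1) ** inv
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 1], [1, 1]]
B = [[0, 3], [4, 5]]
# product formula: det(AB) = (det A)(det B)
assert det(matmul(A, B)) == det(A) * det(B) == -12
```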


Laplace determinant

(Skip the proofs)

Definition

Let A_ij denote the submatrix (minor) of A obtained by deleting the i-th row and the j-th column of A. We define the (i, j)-th cofactor of A by cof_ij(A) = (−1)^{i+j} det A_ij.

For example, for a 4 × 4 matrix, A_32 is obtained by deleting the entries marked × below (the 3rd row and the 2nd column):

    [ ∗ × ∗ ∗ ]
    [ ∗ × ∗ ∗ ]
    [ × × × × ]
    [ ∗ × ∗ ∗ ]

ARS (IITB) MA106-Linear Algebra January 19, 2011 39 / 45

Expansion by minors

Remark

(i) Consider a matrix A = ((a_ij)) in which a_1j = a_i1 = 0 for all i, j > 1 and a_11 = 1. Then check that det A = cof_11(A) = det A_11. (This is an easy application of GEM.)

(ii) Now, for any square matrix A and any pair (i, j), let us introduce a temporary notation A#_ij for the matrix obtained from A by replacing the i-th row by e_j. In order to compute the determinant of A#_ij, we can sweep the j-th column and assume that a_kj = 0 for k ≠ i. Then, by repeatedly interchanging the i-th row with the row just above it, we bring this row to the 1st place. Now the j-th column is equal to (1, 0, . . . , 0)ᵀ. Meanwhile the determinant has changed its sign by (−1)^{i−1}. Likewise we now swap this column with the columns to its left and bring it to the first place. The determinant has now changed its sign by (−1)^{i+j−2} = (−1)^{i+j}. Thus, if we denote this new matrix by B, then det B = (−1)^{i+j} det A#_ij. Also B_11 = A_ij. Hence by (i) above we have det B = det B_11 = det A_ij.

ARS (IITB) MA106-Linear Algebra January 19, 2011 40 / 45


Expansion by minors

Thus we have proved:

Lemma
For any square matrix $A$ we have

$$\det A^{\#}_{ij} = \mathrm{cof}_{ij}(A). \qquad (15)$$

The following theorem gives an inductive formula for the determinant of a matrix in terms of its entries.


Expansion by minors

Theorem

Laplace Expansion Formula for the Determinant: For any $n \times n$ matrix $A$ we have

$$\det A = \sum_{j=1}^{n} a_{ij}\,\mathrm{cof}_{ij}(A), \quad \forall\; 1 \le i \le n. \qquad (16)$$

Proof: Using linearity of the determinant in the $i$-th row, we expand:

$$\det A = \det(A_1, \ldots, A_i, \ldots, A_n) = \det\Big(A_1, \ldots, \sum_j a_{ij} e_j, \ldots, A_n\Big) = \sum_j a_{ij} \det(A_1, \ldots, e_j, \ldots, A_n) = \sum_j a_{ij} \det A^{\#}_{ij} = \sum_j a_{ij}\,\mathrm{cof}_{ij}(A),$$

where the last equality is (15).

Remark
The Laplace formula can be used to give an inductive proof of the existence of the determinant. We shall not discuss this any further.
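The Laplace expansion (16), taken along the first row, translates directly into a recursive procedure. The following sketch (not part of the lecture; function names are illustrative) computes a determinant this way for small matrices.

```python
def minor(A, i, j):
    """The matrix A_ij: delete row i and column j (0-indexed) from A."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant by Laplace expansion along the first row, as in (16)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    # det A = sum_j a_{0j} * (-1)^{0+j} * det(A_{0j})
    return sum(A[0][j] * (-1) ** j * det(minor(A, 0, j)) for j in range(n))
```

This recursion makes $n!$-many leaf evaluations, which illustrates the remark later in these slides that cofactor expansions are far more expensive than GEM.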


Cramer’s Rule
Given any square matrix $M$ and $i \neq j$, let $M_{ij}$ denote the matrix obtained by replacing the $j$-th row of $M$ by its $i$-th row. Since two rows of $M_{ij}$ are equal, we know that $\det M_{ij} = 0$. On the other hand, applying (16) to $M_{ij}$ and expanding along the $j$-th row, we get

$$0 = \det M_{ij} = \sum_{k=1}^{n} m_{ik}\,\mathrm{cof}_{jk}(M).$$

This is true for every $i \neq j$. Combining it with (16) we obtain

$$\sum_{k=1}^{n} a_{ik}\,\mathrm{cof}_{jk}(A) = \begin{cases} \det A, & \text{if } i = j;\\ 0, & \text{otherwise.} \end{cases} \qquad (17)$$

Similarly by considering the transpose we have

$$\sum_{k=1}^{n} a_{ki}\,\mathrm{cof}_{kj}(A) = \begin{cases} \det A, & \text{if } i = j;\\ 0, & \text{otherwise.} \end{cases} \qquad (18)$$
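Identity (17) can be checked numerically on a concrete matrix. The sketch below (illustrative, not from the lecture; the helper `cof` is our own) builds the matrix of sums $\sum_k a_{ik}\,\mathrm{cof}_{jk}(A)$ and compares it with $(\det A)\,I$.

```python
import numpy as np

def cof(A, i, j):
    # cof_{ij}(A) = (-1)^{i+j} det(A_ij), where A_ij drops row i and column j
    M = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(M)

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 2.0],
              [0.0, 1.0, 4.0]])
n = len(A)

# S[i][j] = sum_k a_{ik} cof_{jk}(A); by (17) this should be (det A) * I
S = np.array([[sum(A[i, k] * cof(A, j, k) for k in range(n))
               for j in range(n)] for i in range(n)])
expected = np.linalg.det(A) * np.eye(n)
assert np.allclose(S, expected)
```

The off-diagonal entries vanish precisely because they are determinants of matrices with two equal rows, as argued above.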


Cramer’s rule: The Adjoint of a matrix

Given an $n \times n$ matrix $A$, we define the adjoint of $A$ to be the $n \times n$ matrix $\mathrm{adj}\, A = (m_{ij})$ where

$$m_{ij} = \mathrm{cof}_{ji}(A).$$

Then

Theorem

$$(\mathrm{adj}\, A)\,A = A\,(\mathrm{adj}\, A) = \mathrm{diag}(\det A, \ldots, \det A)$$

Check that this is just a reformulation of (17). In particular, if $\det A \neq 0$, then it follows that $A$ is invertible, with the inverse given by

Theorem

$$A^{-1} = (\det A)^{-1}\,(\mathrm{adj}\, A).$$
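The definition of the adjoint and the theorem $(\mathrm{adj}\,A)A = (\det A)I$ can be illustrated as follows. This is a sketch for exposition only (the function name `adj` is ours): it builds the cofactor matrix and transposes it, then checks both the theorem and the inverse formula on a $2 \times 2$ example.

```python
import numpy as np

def adj(A):
    """adj A = (m_ij) with m_ij = cof_ji(A): the transposed cofactor matrix."""
    n = len(A)
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # cof_{ij}(A) = (-1)^{i+j} det(A with row i and column j deleted)
            M = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(M)
    return C.T  # transpose swaps the indices: entry (i,j) becomes cof_{ji}

A = np.array([[2.0, 1.0], [5.0, 3.0]])
d = np.linalg.det(A)
assert np.allclose(adj(A) @ A, d * np.eye(2))      # (adj A) A = (det A) I
assert np.allclose(np.linalg.inv(A), adj(A) / d)   # A^{-1} = (det A)^{-1} adj A
```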


Cramer’s rule
As a simple-minded application we obtain:

Theorem
Cramer’s Rule: Given a system of linear equations $Ax = b$ with an invertible coefficient matrix, let $M_j$ be the matrix obtained from $A$ by replacing the $j$-th column by the column $b$. Then

$$x_i = \frac{\det M_i}{\det A}. \qquad (19)$$

Proof: We have $x = A^{-1}b = (\det A)^{-1}(\mathrm{adj}\, A)\,b$. Equating the $i$-th entries on either side, we get

$$x_i = (\det A)^{-1} \sum_{j=1}^{n} m_{ij} b_j = (\det A)^{-1} \sum_{j=1}^{n} \mathrm{cof}_{ji}(A)\, b_j = (\det A)^{-1} \det M_i,$$

where the last equality is obtained by expanding $\det M_i$ along the $i$-th column. ♠

Remark
Cramer’s rule is not good for computational purposes, since it involves the computation of determinants, which requires a huge number of operations compared to GEM and certain modifications of GEM.
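Formula (19) can be sketched in a few lines. The implementation below is illustrative (the function name `cramer` is ours) and, as the remark above warns, is meant for understanding rather than serious computation; it assumes $\det A \neq 0$.

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's rule (19); assumes det A != 0."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Mi = A.copy()
        Mi[:, i] = b                    # M_i: replace the i-th column by b
        x[i] = np.linalg.det(Mi) / d    # x_i = det M_i / det A
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = cramer(A, b)
assert np.allclose(A @ x, b)
```

Each of the $n + 1$ determinant evaluations is itself as expensive as one full elimination, which is exactly why GEM is preferred in practice.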
