
Multilinear Methods in Linear Algebra

Marvin Marcus*

Department of Computer Science, University of California, Santa Barbara, California 93106

ABSTRACT

Several classical and a few new results are presented in which multilinear algebra has proven to be an effective tool. Some of the topics covered are: Weyl's inequalities; the Hadamard product; mappings on tensor spaces; Gram matrices in tensor spaces; strong nonsingularity and the LDU theorem.

INTRODUCTION

The title of this paper suggests that "multilinear methods" are somehow distinct from the mainstream of linear algebra. In fact, it is true that courses in multilinear algebra do not customarily appear as natural successors to standard courses in linear algebra. There are several reasons for this. Historically, the subject was studied not because it is an extension of linear algebra, but because of its important intersections with other branches of pure and applied mathematics: e.g., differential geometry; algebraic geometry; invariant theory; group representation theory; combinatorics; partial differential equations; quantum mechanics; theory of inequalities; mechanics.

A second reason is the sometimes alarming notation that traditionally has been the hallmark of the subject.

*This work was supported in part by the Air Force Office of Scientific Research under Grant AFOSR-88417.5.



Consider for example the following interesting formula taken from H. Boseck's excellent 1972 monograph, Tensorräume:

[A heavily indexed tensor-component identity, involving the multi-indices $i_{p+1},\dots$ and $j_{p+1},\dots$, is displayed here; it is not legible in this copy.]

A display such as this is not likely to elicit wild applause from an audience of undergraduate computer-science students.

Further, as a follow-on to the usual course in linear algebra, multilinear algebra will inevitably rank far below such real-world topics as planar pin-jointed frameworks, error-correcting codes, electrical networks, digital codes, etc. Finally, numerical linear algebra has emerged in the last ten years as the natural successor to linear algebra in most undergraduate and graduate programs. Many schools recently have begun new programs in scientific computation, and since numerical linear algebra plays a central role in this subject, it is not surprising that a variety of excellent courses in the subject have sprung up in the offerings of these schools.

Despite the rather daunting evidence that multilinear algebra is not about to challenge matrix computation for a place in the curriculum, it is my opinion that the subject is a very attractive possibility for a second course in linear algebra. This view is supported to some extent by the many excellent books that are now available. The bibliography for this paper largely consists of books that are entirely or partly devoted to a direct exposition of the subject; most contain significant applications of multilinear algebra to standard problems in linear algebra.

It would be futile to attempt even a hasty exposition of the foundations of multilinear algebra. Rather, it is my intention to discuss a selection of just a few typical results from the vast literature on this subject, some of which dates back to the last century. Moreover, the choice of results is completely idiosyncratic and is only intended to call the reader's attention to the broad range of applications of the subject.

I. WEYL’S INEQUALITIES

If the eigenvalues $\lambda_j$ and singular values $\alpha_j$ of $A$ are arranged so that $|\lambda_1| \ge \cdots \ge |\lambda_n|$ and $\alpha_1 \ge \cdots \ge \alpha_n$, then

$$\prod_{j=1}^{k} |\lambda_j| \le \prod_{j=1}^{k} \alpha_j, \qquad k = 1, \dots, n. \tag{1}$$


The inequality (1) is usually attributed to H. Weyl in 1949; however, the result appears as an exercise in [79], where it is credited to E. T. Browne in a paper published in 1928. In any event, regarded as an operator on an $n$-dimensional unitary space $V$, $A$ induces a unique operator on the $k$th exterior space over $V$:

$$C_k(A): \bigwedge^k V \to \bigwedge^k V. \tag{2}$$

The mapping $A \mapsto C_k(A)$ is a representation of $\operatorname{Hom}(V, V)$ on $\operatorname{Hom}(\bigwedge^k V, \bigwedge^k V)$, and from this it is not difficult to show that the eigenvalues of $C_k(A)$ are the $\binom{n}{k}$ products, taken $k$ at a time, of the eigenvalues of $A$. The representation property also implies immediately that $C_k(A)^* C_k(A) = C_k(A^*) C_k(A) = C_k(A^*A)$, and hence the singular values of $C_k(A)$ are the products, taken $k$ at a time, of the singular values of $A$. If the inequality $|\lambda_1| \le \alpha_1$ is applied to $C_k(A)$, the inequality (1) results. Now, (1) is an equality for $k = n$ (both sides are $|\det A|$), so that taking logarithms results in

$$\sum_{j=1}^{k} \log|\lambda_j| \le \sum_{j=1}^{k} \log \alpha_j, \qquad k = 1, \dots, n, \tag{3}$$

with equality for $k = n$. The inequalities (3) are the famous majorization inequalities. They imply that $\log|\lambda| = (\log|\lambda_1|, \dots, \log|\lambda_n|)$ is an orthostochastic transform of $\log\alpha$. From this all kinds of interesting results follow, not the least of which are the Weyl inequalities:

$$\sum_{j=1}^{k} |\lambda_j|^s \le \sum_{j=1}^{k} \alpha_j^s, \qquad s > 0, \quad k = 1, \dots, n.$$
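As a quick illustration, here is a minimal numerical check of (1), of the equality at $k = n$, and of the power-sum inequalities. It is a sketch assuming NumPy is available; the random matrix and the exponent s are arbitrary choices.

```python
# Check the product inequalities (1) and the Weyl power-sum inequalities
# for a randomly generated complex matrix.
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

lam = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]   # |lambda_1| >= ... >= |lambda_n|
alpha = np.linalg.svd(A, compute_uv=False)          # alpha_1 >= ... >= alpha_n

for k in range(1, n + 1):
    assert np.prod(lam[:k]) <= np.prod(alpha[:k]) * (1 + 1e-10)   # inequality (1)

s = 0.7                                              # any s > 0
for k in range(1, n + 1):
    assert np.sum(lam[:k] ** s) <= np.sum(alpha[:k] ** s) * (1 + 1e-10)

print(np.prod(lam), np.prod(alpha))                  # equal: both are |det A|
```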

II. THE HADAMARD PRODUCT

If $A$ and $B$ are $n$-square matrices, then $A \cdot B = [a_{ij} b_{ij}]$ is called the Hadamard product of $A$ and $B$. Regarding $A$ and $B$ as operators on an $n$-dimensional unitary space $V$ to itself, the tensor product $A \otimes B$ maps $V \otimes V$ into itself. The eigenvalues of $A \otimes B$ are the $n^2$ products $\lambda_i(A)\lambda_j(B)$, $i, j = 1, \dots, n$. Let $E = \{e_1, \dots, e_n\}$ be an orthonormal basis of $V$. Define $r_t$ to be the integer $r_t = (t-1)n + t$, $t = 1, \dots, n$. If the basis $E^\otimes = \{e_i \otimes e_j,\ i, j = 1, \dots, n\}$ of $V \otimes V$ is ordered lexicographically in the pairs $(i, j)$, then the $r_t$th basis tensor is $e_t \otimes e_t$.


Hence the entry in position $r_s, r_t$ in the matrix representation of $A \otimes B$ with respect to the basis $E^\otimes$ is $((A \otimes B)\, e_t \otimes e_t,\ e_s \otimes e_s) = (Ae_t \otimes Be_t,\ e_s \otimes e_s) = (Ae_t, e_s)(Be_t, e_s) = a_{st} b_{st}$; in other words, $A \cdot B$ is the principal submatrix of $A \otimes B$ lying in rows and columns $r_1, \dots, r_n$. From this it follows that if $A$ and $B$ are hermitian, then the Cauchy interlacing inequalities imply that any eigenvalue of $A \cdot B$ lies between the least and largest products $\lambda_i(A)\lambda_j(B)$. In particular, if $A$ and $B$ are positive semidefinite hermitian, then so is $A \cdot B$. Halmos [41] described this as "a remarkable theorem on positive matrices." Whether this result is indeed "remarkable" is a matter of opinion, but in any event it is due to I. Schur. In a forthcoming paper [44], R. Horn provides the matrix-theory community with an excellent survey of the theory and many applications of the Hadamard product of matrices.
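A minimal numerical check, assuming NumPy: the sketch below verifies that $A \cdot B$ is the indicated principal submatrix of $A \otimes B$ for randomly chosen positive semidefinite $A$ and $B$, and that its eigenvalues lie between the extreme products $\lambda_i(A)\lambda_j(B)$. The random test data are arbitrary.

```python
# The Hadamard product sits inside the Kronecker product as the principal
# submatrix indexed by r_t = (t-1)*n + t; Cauchy interlacing then bounds its
# eigenvalues by the extreme eigenvalues of A (x) B.
import numpy as np

rng = np.random.default_rng(1)
n = 4
X = rng.standard_normal((n, n)); A = X @ X.T         # positive semidefinite
Y = rng.standard_normal((n, n)); B = Y @ Y.T         # positive semidefinite

r = [(t - 1) * n + t for t in range(1, n + 1)]       # 1-based r_t from the text
idx = [i - 1 for i in r]                             # 0-based indices

K = np.kron(A, B)
H = A * B                                            # Hadamard (entrywise) product
assert np.allclose(K[np.ix_(idx, idx)], H)           # A.B is a principal submatrix

eigH = np.linalg.eigvalsh(H)
prods = np.outer(np.linalg.eigvalsh(A), np.linalg.eigvalsh(B)).ravel()
assert prods.min() - 1e-9 <= eigH.min() and eigH.max() <= prods.max() + 1e-9
assert eigH.min() >= -1e-9                           # Schur: A.B is again psd
```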

III. MAPPINGS ON TENSOR SPACES

Let $A: V \to V$ be a linear map of the $n$-dimensional unitary space $V$. The tensor space $\otimes^p V$ is a unitary space with an inner product that satisfies

$$(x_1 \otimes \cdots \otimes x_p,\ y_1 \otimes \cdots \otimes y_p) = \prod_{t=1}^{p} (x_t, y_t). \tag{4}$$

Thus if $E = \{e_1, \dots, e_n\}$ is an orthonormal (o.n.) basis of $V$, then the $n^p$ tensors

$$e_\omega^\otimes = e_{\omega(1)} \otimes \cdots \otimes e_{\omega(p)}, \qquad \omega: \{1, \dots, p\} \to \{1, \dots, n\}, \tag{5}$$

constitute an o.n. basis of $\otimes^p V$. The linear map

$$T: \otimes^p V \to \otimes^p V$$

induced by $A$ satisfies

$$T\, x_1 \otimes \cdots \otimes x_p = Ax_1 \otimes \cdots \otimes Ax_p. \tag{6}$$


If the $\omega$ in (5) are ordered lexicographically, then from (4) and (6),

$$(T e_\beta^\otimes,\ e_\alpha^\otimes) = \prod_{t=1}^{p} a_{\alpha(t)\beta(t)}. \tag{7}$$

In (7) the matrix representation of the map $A$ is the $n$-square matrix $[a_{ij}]$. If $\otimes^p V$ is orthogonally projected by $P$ onto the subspace spanned by those $e_\omega^\otimes$ for which $\omega$ is an injection of $\{1, \dots, p\}$ into $\{1, \dots, n\}$ (i.e., $p \le n$), then obviously $PTP$ has a matrix representation in which the $\alpha, \beta$ entry is (7): $\alpha$ and $\beta$ are injections. Thus we have established a correspondence that associates with any $n$-square matrix $A$ an $n!/(n-p)!$-square matrix $\mathscr{A}$ whose entries are given by (7), in which $\alpha$ and $\beta$ are injections. Note that if $A = [(x_i, x_j)]$ is a Gram matrix then so is $\mathscr{A}$:

$$\mathscr{A}_{\alpha\beta} = \prod_{t=1}^{p} a_{\alpha(t)\beta(t)} = \prod_{t=1}^{p} (x_{\alpha(t)}, x_{\beta(t)}) = (x_{\alpha(1)} \otimes \cdots \otimes x_{\alpha(p)},\ x_{\beta(1)} \otimes \cdots \otimes x_{\beta(p)}) = (x_\alpha^\otimes,\ x_\beta^\otimes).$$

In particular, if $x$ is a complex $n!/(n-p)!$-tuple, then

$$x^*\mathscr{A}x = \Bigl\| \sum_{\alpha} x(\alpha)\, x_{\alpha(1)} \otimes \cdots \otimes x_{\alpha(p)} \Bigr\|^2, \tag{8}$$

the sum being over all injections $\alpha$ of $\{1, \dots, p\}$ into $\{1, \dots, n\}$.

At this point assume $p = n$, so that the summation in (8) is over all permutations $\alpha$ in $S_n$, the symmetric group of degree $n$. The Cholesky factorization of $A$ states that the vectors $x_1, \dots, x_n$ may be chosen so that

$$x_k = \sum_{j=1}^{k} l_{kj} e_j, \qquad k = 1, \dots, n, \tag{9}$$

i.e., $A = LL^*$.


The lower triangular property of $L$ implies that for any permutations $\alpha$ and $\beta$ in $S_n$,

$$(x_{\alpha(1)} \otimes \cdots \otimes x_{\alpha(n)},\ e_{\beta(1)} \otimes \cdots \otimes e_{\beta(n)}) = \delta_{\alpha\beta} \det L. \tag{10}$$

If we set

$$G^\otimes = \sum_{\alpha \in S_n} x(\alpha)\, x_{\alpha(1)} \otimes \cdots \otimes x_{\alpha(n)}$$

in (8) and use Bessel's inequality to project $G^\otimes$ onto the space spanned by the $e_\beta^\otimes$, $\beta \in S_n$, we have

$$x^*\mathscr{A}x = \|G^\otimes\|^2 \ge \sum_{\beta \in S_n} \bigl|(G^\otimes,\ e_\beta^\otimes)\bigr|^2 = \sum_{\beta \in S_n} \Bigl| \sum_{\alpha \in S_n} x(\alpha)\, \delta_{\alpha\beta} \det L \Bigr|^2 \qquad (\text{from } (10))$$

$$= |\det L|^2 \sum_{\beta \in S_n} |x(\beta)|^2 = (\det A)\, x^*x. \tag{11}$$

Thus, $\det A$ is a lower bound for the eigenvalues of $\mathscr{A}$. But more is true.


If we simply set $x = \varepsilon$, the alternating character of $S_n$, then in fact

$$\varepsilon^*\mathscr{A}\varepsilon = \sum_{\beta \in S_n} \sum_{\alpha \in S_n} \varepsilon(\alpha)\varepsilon(\beta) \prod_{t=1}^{n} a_{\alpha(t)\beta(t)} = \sum_{\beta \in S_n} \sum_{\alpha \in S_n} \varepsilon(\beta\alpha^{-1}) \prod_{t=1}^{n} a_{t,\beta\alpha^{-1}(t)}. \tag{12}$$

For each fixed $\beta \in S_n$, the inner sum is $\det A$, so that (12) has the value

$$\varepsilon^*\mathscr{A}\varepsilon = n!\,\det A. \tag{13}$$

Note that $n! = \varepsilon^*\varepsilon$, and thus (11) and (13) imply that $\det A$ is the minimal eigenvalue of $\mathscr{A}$. In an interesting paper G. Soules [78] makes the following conjecture:

The permanent of $A$ (per $A$) and $\det A$ are the maximal and minimal eigenvalues of $\mathscr{A}$, respectively.

Actually, Schur knew that $\det A$ is the minimal eigenvalue of $\mathscr{A}$ [73], and the conjecture concerning per $A$ is unresolved despite the efforts of many talented people. R. Merris has written an interesting history of this problem in [61]. For $n = 3$, Bapat and Sunder [8] proved that per $A$ is the dominant eigenvalue of $\mathscr{A}$. As far as I know, there is nothing more than this available for the general problem.

It is not difficult to show that if $G$ is a subgroup of $S_n$ and $\chi$ is an irreducible character of $G$ of degree $d$, then

$$\frac{1}{d} \sum_{\sigma \in G} \chi(\sigma) \prod_{t=1}^{n} a_{t\sigma(t)} \tag{14}$$

is a value of the hermitian form associated with $\mathscr{A}$ (this is true for any $A$). In fact, if $\chi$ is a character of degree 1, then (14) is an eigenvalue of the principal submatrix of $\mathscr{A}$ indexed by $G$ (exercise for the reader).

It is worth mentioning that the Hadamard and Fischer inequalities come out of specializing $G$ appropriately. A somewhat less obvious fact is that the eigenvalue per $A$ of $\mathscr{A}$ ($G = S_n$, $\chi = 1$) is at least as large as the eigenvalue $\prod_{i=1}^{n} a_{ii}$ ($G = \{e\}$, $\chi = 1$) [54].
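These facts are easy to check numerically in small cases. The sketch below, assuming NumPy, builds the matrix $\mathscr{A}$ for $p = n = 3$ from a random positive semidefinite $A$ and verifies the eigenvalue statements established above (it does not, of course, test the conjecture itself); the helper names and test data are ad hoc.

```python
# For p = n the matrix script-A is indexed by the permutations in S_n, with
# (alpha, beta) entry prod_t a[alpha(t), beta(t)].  For a psd A its smallest
# eigenvalue is det A (Schur); the all-ones vector is an eigenvector for per A
# and the alternating character is an eigenvector for det A.
import numpy as np
from itertools import permutations

rng = np.random.default_rng(2)
n = 3
X = rng.standard_normal((n, n)); A = X @ X.T            # positive semidefinite

perms = list(permutations(range(n)))
N = len(perms)                                           # n! rows and columns
M = np.empty((N, N))
for i, a in enumerate(perms):
    for j, b in enumerate(perms):
        M[i, j] = np.prod([A[a[t], b[t]] for t in range(n)])

def sign(p):
    """Sign of a permutation given as a tuple of 0-based images."""
    s, q = 1, list(p)
    for i in range(len(q)):
        while q[i] != i:
            j = q[i]; q[i], q[j] = q[j], q[i]; s = -s
    return s

eps = np.array([sign(p) for p in perms], dtype=float)    # alternating character
ones = np.ones(N)
det_A = np.linalg.det(A)
per_A = M[0].sum()                                       # row sum of the identity row

assert np.allclose(M @ eps, det_A * eps)                 # eigenvalue det A
assert np.allclose(M @ ones, per_A * ones)               # eigenvalue per A
assert np.min(np.linalg.eigvalsh(M)) >= det_A - 1e-8     # det A is the minimum
```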

IV. STRONGLY NONSINGULAR MATRICES

A square matrix $A = [a_{ij}]$ is strongly nonsingular if the leading sequence of $n$ principal submatrices

$$A_k = A[1, \dots, k \mid 1, \dots, k], \qquad k = 1, \dots, n,$$

are all nonsingular. This concept is important because any strongly nonsingular matrix can be uniquely factored into

$$A = LDU,$$

where $L$ and $U$ are lower and upper triangular matrices with entries 1 along the main diagonal, and $D$ is a diagonal matrix with diagonal entries

$$D_{11} = a_{11}, \qquad D_{kk} = \frac{\det A_k}{\det A_{k-1}}, \qquad k = 2, \dots, n.$$
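As a small illustration, here is a sketch, assuming NumPy, of an LDU factorization by Gaussian elimination; it confirms that the diagonal of $D$ consists of the ratios of leading principal minors. The function name and the test matrix are ad hoc choices.

```python
# LDU factorization of a strongly nonsingular matrix; the pivot produced at
# step k equals det A_k / det A_{k-1}.
import numpy as np

def ldu(A):
    """Return unit lower L, diagonal D, unit upper U with A = L @ D @ U."""
    n = A.shape[0]
    L, D, U = np.eye(n), np.zeros((n, n)), np.eye(n)
    S = A.astype(float)
    for k in range(n):
        D[k, k] = S[k, k]                        # pivot = det A_k / det A_{k-1}
        L[k+1:, k] = S[k+1:, k] / D[k, k]
        U[k, k+1:] = S[k, k+1:] / D[k, k]
        S[k+1:, k+1:] -= np.outer(L[k+1:, k], S[k, k+1:])
    return L, D, U

A = np.array([[4., 2., 1.], [2., 5., 3.], [1., 3., 6.]])
L, D, U = ldu(A)
assert np.allclose(L @ D @ U, A)

minors = [np.linalg.det(A[:k, :k]) for k in range(1, 4)]
assert np.isclose(D[0, 0], minors[0])
assert np.allclose(np.diag(D)[1:], np.array(minors[1:]) / np.array(minors[:-1]))
```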

Of course, strong nonsingularity is a much more stringent assumption than nonsingularity itself. However, there are circumstances under which it is important to know whether some power of a nonsingular matrix $A$ is strongly nonsingular. This question can be answered as follows. First note that the Cayley-Hamilton theorem implies that if $A$ is nonsingular, then for some positive integer $p$, $1 \le p \le n$,

$$(A^p)_{11} \neq 0. \tag{15}$$

If we regard $A$ as a linear operator on the space of $n$-tuples, then the matrix representation of $C_k(A)$ can be taken as the $\binom{n}{k}$-square "compound matrix" whose rows and columns are lexicographically indexed by the strictly increasing sequences $\omega$, $1 \le \omega(1) < \cdots < \omega(k) \le n$. We continue to denote this matrix as $C_k(A)$:

$$C_k(A)_{\alpha\beta} = \det A[\alpha(1), \dots, \alpha(k) \mid \beta(1), \dots, \beta(k)]. \tag{16}$$
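For concreteness, here is a minimal sketch, assuming NumPy, of the compound matrix (16) together with a check of the homomorphism property $C_k(AB) = C_k(A)C_k(B)$ that is used below; the helper name and test matrices are ad hoc.

```python
# k-th compound matrix: rows and columns indexed lexicographically by the
# strictly increasing sequences, entries given by the corresponding minors.
import numpy as np
from itertools import combinations

def compound(A, k):
    n = A.shape[0]
    idx = list(combinations(range(n), k))        # increasing sequences, lexicographic
    C = np.empty((len(idx), len(idx)), dtype=A.dtype)
    for i, rows in enumerate(idx):
        for j, cols in enumerate(idx):
            C[i, j] = np.linalg.det(A[np.ix_(rows, cols)])
    return C

rng = np.random.default_rng(7)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
assert np.allclose(compound(A @ B, 2), compound(A, 2) @ compound(B, 2))
```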

At this point consider the operator

$$T = A \otimes C_2(A) \otimes \cdots \otimes C_{n-1}(A)$$

on the space

$$\mathbb{C}^n \otimes \Bigl(\bigwedge^2 \mathbb{C}^n\Bigr) \otimes \cdots \otimes \Bigl(\bigwedge^{n-1} \mathbb{C}^n\Bigr).$$

Since $A \mapsto C_k(A)$ is a homomorphism, it follows that

$$T^m = A^m \otimes C_2(A^m) \otimes \cdots \otimes C_{n-1}(A^m).$$

Now, by the same argument applied to the nonsingular operator $T$, for some positive integer $p$, $1 \le p \le \prod_{k=1}^{n-1} \binom{n}{k}$,

$$(T^p)_{11} \neq 0. \tag{17}$$

But the (1,1) entry of a tensor (Kronecker) product of matrices is the product of the (1,1) entries of the factors. Thus

$$(T^p)_{11} = (A^p)_{11}\, C_2(A^p)_{11} \cdots C_{n-1}(A^p)_{11} = (A^p)_{11} \det A^p[1, 2 \mid 1, 2] \cdots \det A^p[1, \dots, n-1 \mid 1, \dots, n-1],$$

and hence

$$\det A^p[1, \dots, k \mid 1, \dots, k] \neq 0, \qquad k = 1, \dots, n.$$

It follows that $A^p$ is strongly nonsingular.
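The argument suggests a simple computational search. The sketch below, assuming NumPy, tries successive powers of a nonsingular matrix whose (1,1) entry vanishes until a strongly nonsingular power appears, using the dimension of the space carrying $T$ as the search bound; the test matrix is an arbitrary choice.

```python
# Some power of a nonsingular matrix is strongly nonsingular; search for it.
import numpy as np
from math import comb, prod

def strongly_nonsingular(M, tol=1e-10):
    n = M.shape[0]
    return all(abs(np.linalg.det(M[:k, :k])) > tol for k in range(1, n + 1))

A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])                    # nonsingular, but (A)_11 = 0

n = A.shape[0]
bound = prod(comb(n, k) for k in range(1, n))   # dim of the space carrying T
for p in range(1, bound + 1):
    if strongly_nonsingular(np.linalg.matrix_power(A, p)):
        print(f"A^{p} is strongly nonsingular")
        break
```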


V. THE NUMERICAL RANGE

The numerical range of a linear operator $A$ defined on an $n$-dimensional unitary space $V$ is the image of the surface of the unit sphere under the mapping

$$x \mapsto (Ax, x).$$

The numerical range is usually denoted by $W(A)$. One of the most interesting aspects of the study of $W(A)$ is the interplay between the geometry of $W(A)$ and algebraic properties of $A$. Of course, the most elegant result of this kind is the famous Toeplitz-Hausdorff theorem of 1918-1919, which asserts that $W(A)$ is a convex set in the plane and the boundary of $W(A)$ is the union of algebraic arcs. The literature concerning $W(A)$ and its various generalizations is enormous. I have compiled a preliminary bibliography of 779 items which is incomplete and not altogether correct. However, it is available in its present questionable form.
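For the classical $W(A)$ itself, a crude picture is easy to generate by sampling: the sketch below, assuming NumPy, merely evaluates the defining map $x \mapsto (Ax, x)$ at random unit vectors of a randomly chosen $A$.

```python
# Sample the numerical range W(A) = {(Ax, x) : ||x|| = 1}.
import numpy as np

rng = np.random.default_rng(6)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

pts = []
for _ in range(5000):
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    x /= np.linalg.norm(x)
    pts.append(np.vdot(x, A @ x))          # (Ax, x) in the usual inner product
pts = np.array(pts)
print(pts.real.min(), pts.real.max())      # crude bounding box of the sampled range
```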

One of the generalizations of $W(A)$ that has received a lot of study over the years is the set

$$W_{r,m}(A). \tag{18}$$

In (18), $1 \le r \le m \le n$, and (18) denotes the totality of values of

$$\sum_{\omega \in Q_{r,m}} \bigl(x_1 \wedge \cdots \wedge Ax_{\omega(1)} \wedge \cdots \wedge Ax_{\omega(r)} \wedge \cdots \wedge x_m,\ x_1 \wedge \cdots \wedge x_m\bigr) \tag{19}$$

as $x_1, \dots, x_m$ run over all o.n. sets of vectors in $V$; here $Q_{r,m}$ is the set of increasing sequences $1 \le \omega(1) < \cdots < \omega(r) \le m$, and in each summand $A$ is applied in the $r$ positions $\omega(1), \dots, \omega(r)$. If $r = m = 1$, then $W_{1,1}(A)$ is simply $W(A)$. It is easy to check that

$$\varphi_r(x_1, \dots, x_m) = \sum_{\omega \in Q_{r,m}} x_1 \wedge \cdots \wedge Ax_{\omega(1)} \wedge \cdots \wedge Ax_{\omega(r)} \wedge \cdots \wedge x_m \tag{20}$$

is an alternating multilinear function, and hence there exists a linear map

$$D_r(A): \bigwedge^m V \to \bigwedge^m V$$


such that

$$D_r(A)\, x_1 \wedge \cdots \wedge x_m = \varphi_r(x_1, \dots, x_m).$$

The map $D_r(A)$ is called the $r$th derivation of $A$. By a fairly easy computation one can show that $W_{r,m}(A)$ is the totality of values

$$(D_r(A)\, u_1 \wedge \cdots \wedge u_m,\ u_1 \wedge \cdots \wedge u_m) \tag{21}$$

for all unit decomposable skew-symmetric tensors $u_1 \wedge \cdots \wedge u_m$. At this point it is tempting to conclude that $W_{r,m}(A)$ is convex, simply as a consequence of the Toeplitz-Hausdorff theorem. But this argument is deceptively wrong. The problem is that in (21) the tensors $u^\wedge = u_1 \wedge \cdots \wedge u_m$, while they are of length 1, do not in general fill up all of the surface of the unit sphere in $\bigwedge^m V$. If we write an arbitrary unit tensor in $\bigwedge^m V$ as

$$\sum_{\omega \in Q_{m,n}} p(\omega)\, e_\omega^\wedge, \tag{22}$$

where $e_1, \dots, e_n$ is any o.n. basis of $V$ and $e_\omega^\wedge = e_{\omega(1)} \wedge \cdots \wedge e_{\omega(m)}$, then the quadratic Plücker relations state that (22) is decomposable, i.e., of the form $u_1 \wedge \cdots \wedge u_m$ in $\bigwedge^m V$, iff

$$p(\alpha)p(\beta) = \sum_{t=1}^{m} p(\alpha[s, t : \beta])\, p(\beta[t, s : \alpha]) \tag{23}$$

for all $\alpha$ and $\beta$ in $D_{m,n}$ and each $s = 1, \dots, m$. To explain the notation: $D_{m,n}$ is the set of all sequences of length $m$ consisting of distinct integers chosen from $1, \dots, n$; $p$ is extended to be skew-symmetric by $p(\omega\sigma) = \varepsilon(\sigma)p(\omega)$, $\omega \in D_{m,n}$, $\sigma \in S_m$, and $p(\omega) = 0$ if $\omega$ is any sequence of length $m$ chosen from $1, \dots, n$ with repetitions; $\alpha[s, t : \beta]$ is the sequence obtained from $\alpha$ by replacing $\alpha(s)$ with $\beta(t)$; i.e.,

$$\alpha[s, t : \beta] = (\alpha(1), \dots, \alpha(s-1), \beta(t), \alpha(s+1), \dots, \alpha(m)).$$
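In the simplest case $m = 2$, $n = 4$, taking $\alpha = (1,2)$, $\beta = (3,4)$, $s = 1$ in (23) reduces the relations to the single identity $p(12)p(34) - p(13)p(24) + p(14)p(23) = 0$ used below. Here is a minimal check of that identity, assuming NumPy; the random vectors are arbitrary.

```python
# Plucker coordinates of a decomposable tensor u1 ^ u2 in C^4 are the 2x2
# minors p(ij) = u1_i u2_j - u1_j u2_i, and they satisfy the quadratic relation.
import numpy as np

rng = np.random.default_rng(3)
u1 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
u2 = rng.standard_normal(4) + 1j * rng.standard_normal(4)

def p(i, j):
    """Plucker coordinate of u1 ^ u2 on e_i ^ e_j (1-based indices)."""
    return u1[i - 1] * u2[j - 1] - u1[j - 1] * u2[i - 1]

lhs = p(1, 2) * p(3, 4) - p(1, 3) * p(2, 4) + p(1, 4) * p(2, 3)
print(abs(lhs))          # ~ 0: the relation holds for every decomposable tensor
```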

Now suppose we specialize all this as follows: take $r = m = 2$ and $n = 4$; choose $A$ to be normal with eigenvalues $\lambda_1 = \lambda_2 = 1$, $\lambda_3 = \lambda_4 = i$; let $e_1, \dots, e_4$ be a corresponding o.n. basis of eigenvectors of $A$. In this case $D_r(A)$ becomes $C_2(A): \bigwedge^2 V \to \bigwedge^2 V$. The eigenvalues of $C_2(A)$, with corresponding eigenvectors, are

$$\lambda_i\lambda_j \quad \text{with eigenvector } e_i \wedge e_j, \qquad 1 \le i < j \le 4, \tag{24}$$

i.e., $\lambda_1\lambda_2 = 1$, $\lambda_3\lambda_4 = -1$, and the four remaining products are all equal to $i$. Since the eigenvalues of $C_2(A)$ are in $W_{2,2}(A)$, if this set were convex, it would follow that $0 \in W_{2,2}(A)$ (the midpoint of $1$ and $-1$). Now, suppose

$$(C_2(A)\, u_1 \wedge u_2,\ u_1 \wedge u_2) = 0. \tag{25}$$

If we write

$$u_1 \wedge u_2 = \sum_{\omega \in Q_{2,4}} p(\omega)\, e_{\omega(1)} \wedge e_{\omega(2)}, \tag{26}$$

then it is simple to check that (25) becomes

$$|p(12)|^2 \lambda_1\lambda_2 + |p(13)|^2 \lambda_1\lambda_3 + |p(14)|^2 \lambda_1\lambda_4 + |p(23)|^2 \lambda_2\lambda_3 + |p(24)|^2 \lambda_2\lambda_4 + |p(34)|^2 \lambda_3\lambda_4 = 0.$$

With the given eigenvalues, the real part of the left side is $|p(12)|^2 - |p(34)|^2$ and the imaginary part is $|p(13)|^2 + |p(14)|^2 + |p(23)|^2 + |p(24)|^2$. Thus

$$|p(12)|^2 = |p(34)|^2 \tag{27}$$

and

$$p(13) = p(14) = p(23) = p(24) = 0. \tag{28}$$

But the only nontrivial Plücker relation for $m = 2$, $n = 4$ is

$$p(12)p(34) - p(13)p(24) + p(14)p(23) = 0. \tag{29}$$

Since (27) cannot be 0 ($\|u_1 \wedge u_2\| = 1$), we see that (29) is incompatible with (27) and (28). In other words, $W_{2,2}(A)$ is not convex.


[Figure 1: the triangle in the complex plane with vertices 1, $i$, and $-1$.]

Another, simpler way to state this conclusion is in terms of the diagonal matrix

$$A = \operatorname{diag}(\lambda_1, \lambda_2, \lambda_3, \lambda_4) = \operatorname{diag}(1, 1, i, i).$$

Namely, as $U$ varies over all $4 \times 2$ partial isometries, i.e., $U^*U = I_2$, the values of

$$\det(U^*AU) \tag{30}$$

lie in a triangle in the complex plane whose vertices are 1, $i$, $-1$ (Figure 1). However, the numbers (30) do not fill up the triangle; in particular, 0 is omitted.
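This is easy to observe experimentally. The sketch below, assuming NumPy, samples (30) at random $4 \times 2$ partial isometries obtained from QR factorizations; the sample size is arbitrary.

```python
# Sample det(U* A U) over random 4x2 partial isometries for A = diag(1, 1, i, i).
import numpy as np

rng = np.random.default_rng(4)
A = np.diag([1, 1, 1j, 1j])
vals = []
for _ in range(5000):
    Z = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
    U, _ = np.linalg.qr(Z)                     # 4x2 with U* U = I_2
    vals.append(np.linalg.det(U.conj().T @ A @ U))
vals = np.array(vals)
print(np.min(np.abs(vals)))                    # the sampled values avoid 0
```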

One of the complications of this phenomenon of nonconvexity of $W_{r,m}(A)$ is that the various algorithms that have been devised for obtaining computer-generated images of $W_{r,m}(A)$ are no longer feasible. The geometry of $W_{r,m}(A)$ is not well understood.

VI. LINEAR PRESERVERS

The "linear preserver" problem frequently has the following general form. Let $T$ be a linear transformation defined on a vector space $V$. Let $U$ be some subset of $V$. What are necessary and sufficient conditions (n.a.s.c.) on $T$ such that $U$ is preserved, i.e.,

$$T(U) \subseteq U? \tag{31}$$

Sometimes this problem is stated in terms of an invariant $f$ defined on $U$; that is, what are n.a.s.c. on $T$ such that for each $u \in U$,

$$f(Tu) = f(u)?$$

This invariance problem has been of interest to mathematicians since the last century, and there is a vast literature on the subject. In fact, it is difficult to sensibly limit the extent of a comprehensive bibliography. I understand that Stephen Pierce at San Diego State University is assembling one. No particular purpose would be served by inadequately duplicating a part of his efforts here. Thus I will omit any special references to this part of the paper.

In many cases the invariance problem specializes to $V = M_{m,n}(F)$. In other words, it is required to determine n.a.s.c. on a linear $T$,

$$T: M_{m,n}(F) \to M_{m,n}(F),$$

such that $T(U) \subseteq U$ for some appropriate set $U$, or possibly $f(T(A)) = f(A)$ for all $A \in U$. For example, take $V = M_{m,n}(\mathbb{C})$ and $U$ to be the set of partial isometries, i.e., determine $T$ such that whenever $A^*A = I_n$, it follows that $T(A)^*T(A) = I_n$. Frequently such problems can be reduced to the study of those $T$ that preserve rank, i.e., $U = R_k$ is the totality of matrices of rank $k$:

$$\operatorname{rank} A = k \quad \text{implies} \quad \operatorname{rank} T(A) = k.$$

In turn, this question leads to a study of the structure of subspaces of $M_{m,n}(F)$ whose nonzero elements are all of rank $k$. R. Westwick and H. Flanders, among many others, have done work on this question.

There are several connections of the invariance problem with multilinear algebra, but probably the simplest can be described as follows. The space of matrices $M_{m,n}(F)$ is a tensor product:

$$M_{m,n}(F) = M_{m,1}(F) \otimes M_{1,n}(F),$$

in which the tensor map is the dyad map

$$(x, y) \mapsto xy.$$

It is not difficult to confirm that $A$ has rank $k$ iff

$$A = \sum_{t=1}^{k} x_t y_t,$$

in which $x_1, \dots, x_k$ as well as $y_1, \dots, y_k$ are linearly independent. Thus a linear $T$ preserves rank $k$ iff $T$ sends every tensor

$$\sum_{t=1}^{k} x_t \otimes y_t \tag{32}$$

of irreducible length $k$ into another tensor of irreducible length $k$. A representation such as (32) is said to have irreducible length $k$ if $k$ is minimal over all possible such representations.
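A minimal sketch, assuming NumPy, of the dyad description of rank: the SVD exhibits a rank-$k$ matrix as a sum of $k$ dyads, and the rank is the minimal number of dyads in any such representation. The test data and sizes are arbitrary.

```python
# A rank-k matrix as a sum of k dyads, obtained from its SVD.
import numpy as np

rng = np.random.default_rng(5)
m, n, k = 5, 4, 2
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))    # rank k

U, s, Vh = np.linalg.svd(A)
dyads = [s[t] * np.outer(U[:, t], Vh[t, :]) for t in range(k)]    # k rank-one terms
assert np.allclose(sum(dyads), A)
assert np.linalg.matrix_rank(A) == k          # no representation with fewer dyads
```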

Finally, it is important to mention that there is a considerable amount of current activity related to the isometry-preserver problem mentioned above. The issue is: characterize those linear

$$T: M_{m,n}(\mathbb{C}) \to M_{m,n}(\mathbb{C})$$

which preserve various functions of the singular values.

REFERENCES

1. Ralph Abraham, Linear and Multilinear Algebra, Benjamin, New York, 1966.
2. A. C. Aitken, Determinants and Matrices, Univ. Math. Texts, Oliver and Boyd, Edinburgh, 1948.
3. M. A. Akivis and V. V. Goldberg, An Introduction to Linear Algebra and Tensors, Dover, New York, 1977.
4. Dennis B. Ames, Fundamentals of Linear Algebra, International Textbook Company, Scranton, Pa., 1970.
5. F. V. Atkinson, Multiparameter Eigenvalue Problems, Matrices and Compact Operators, Math. Sci. Engrg., Academic, New York, 1972.
6. Louis Auslander and Robert E. MacKenzie, Introduction to Differentiable Manifolds, Dover, New York, 1977.


7. Thor A. Bak and Jonas Lichtenberg, Vectors, Tensors, and Groups, Benjamin, New York, 1966.
8. R. B. Bapat and V. S. Sunder, An extremal property of the permanent and the determinant, Linear Algebra Appl. 76:153-163 (1986).
9. Abraham Berman, The Permanent and Other Generalized Matrix Functions, Israel Inst. of Technology, Haifa, 1968.
10. Rajendra Bhatia, Perturbation Bounds for Matrix Eigenvalues, Pitman Res. Notes Math. Ser., Wiley, New York, 1987.
11. Richard L. Bishop and Samuel I. Goldberg, Tensor Analysis on Manifolds, Macmillan, New York, 1968.
12. A. Blokhuis and J. J. Seidel, An introduction to multilinear algebra and some applications, Philips J. Res. 39(3/5):111-120 (1984).
13. H. Boseck, Tensorräume, VEB Deutscher Verlag der Wissenschaften, Berlin, 1972.
14. Nicolas Bourbaki, Elements of Mathematics, Algebra I, Adiwes Internat. Ser. Math., Addison-Wesley, Reading, Mass., 1974, Chapters 1-3.
15. Ray M. Bowen and C.-C. Wang, Introduction to Vectors and Tensors, Linear and Multilinear Algebra, Math. Concepts Methods Sci. Eng., Plenum, New York, 1976.
16. Ray M. Bowen and C.-C. Wang, Introduction to Vectors and Tensors, Vector and Tensor Analysis, Math. Concepts Methods Sci. Eng., Plenum, New York, 1976.
17. William C. Brown, A Second Course in Linear Algebra, Wiley, New York, 1987.
18. Élie Cartan, The Theory of Spinors, Hermann, Paris, 1966.
19. Élie Cartan, Differential Forms, Hermann, Paris, 1970.
20. Élie Cartan, Les Systèmes Différentiels Extérieurs et Leurs Applications Géométriques, Hermann, Paris, 1971.
21. L. Chambadal and J. L. Ovaert, Algèbre Linéaire et Algèbre Tensorielle, Dunod, Paris, 1968.
22. S.-S. Chern, Topics in Differential Geometry, Inst. for Advanced Study, 1951.
23. C. Chevalley, The Construction and Study of Certain Important Algebras, Publ. Math. Soc. Japan, Herald Printing Co., Tokyo, 1955.
24. Claude Chevalley, Fundamental Concepts of Algebra, Pure Appl. Math., Academic, New York, 1956.
25. Claude C. Chevalley, The Algebraic Theory of Spinors, Columbia U.P., University Microfilms, Ann Arbor, Mich., 1969.
26. Frank Chorlton, Vector and Tensor Methods, Math. Appl., Ellis Horwood, Chichester, 1976.
27. Graciano Neves de Oliveira, Generalized Matrix Functions, Fundação Calouste Gulbenkian, Instituto Gulbenkian de Ciência, Centro de Cálculo Científico, Coimbra, 1973.
28. P. A. M. Dirac, Spinors in Hilbert Space, Plenum, New York, 1974.
29. C. T. J. Dodson and T. Poston, Tensor Geometry, The Geometric Viewpoint and its Uses, Fearon-Pitman, Belmont, Calif., 1977.
30. John A. Eisele and Robert M. Mason, Applied Matrix and Tensor Analysis, Wiley-Interscience, New York, 1970.


31. Miroslav Fiedler, Special Matrices and their Applications in Numerical Mathematics, Martinus Nijhoff, Kluwer, Boston, 1986.
32. Harley Flanders, Differential Forms with Applications to the Physical Sciences, Math. Sci. Eng., Academic, New York, 1963.
33. Henry George Forder, The Calculus of Extension, Chelsea, New York, 1960.
34. Robert B. Gardner, Exterior Algebras over Commutative Rings, Dept. of Mathematics, Univ. of North Carolina at Chapel Hill, 1972.
35. Wallace Givens, Elementary Divisors and Some Properties of the Lyapunov Mapping X → AX + XA*, Argonne National Lab., 1961.
36. Donald Edward Glassco, Irreducible Sums of Simple Multivectors, Dept. of Mathematics, Univ. of Southern California, 1971.
37. I. M. Glazman and Ju. I. Ljubic, Finite-Dimensional Linear Analysis, A Systematic Presentation in Problem Form, MIT Press, Cambridge, Mass., 1974.
38. Alexander Graham, Kronecker Products and Matrix Calculus with Applications, Math. Appl., Wiley, New York, 1981.
39. Werner Greub, Multilinear Algebra, Springer-Verlag, New York, 1978.
40. K. P. Grotemeyer, Multilineare Algebra, Freie Univ., Berlin, 1968.
41. Paul R. Halmos, Finite-Dimensional Vector Spaces, Univ. Ser. Undergraduate Math., Van Nostrand, Princeton, 1958.
42. Gerhard P. Hochschild, Perspectives of Elementary Mathematics, Springer-Verlag, New York, 1983, pp. 30-60.
43. Roger Horn and Charles R. Johnson, Matrix Analysis, Cambridge U.P., Cambridge, 1985.
44. Roger A. Horn, The Hadamard product, Proc. Sympos. Appl. Math., to appear.
45. Edward W. Hyde, Grassmann's Space Analysis, Math. Monographs, Wiley, New York, 1906.
46. Birger Iversen, Linear Determinants with Applications to the Picard Scheme of a Family of Algebraic Curves, Lecture Notes in Math., Springer-Verlag, New York, 1970.
47. C. R. Johnson, The permanent-on-top conjecture: A status report, in Current Trends in Matrix Theory, Proceedings of the Third Auburn Matrix Theory Conference, March 19-22, 1986, Auburn University, Auburn, Alabama (F. Uhlig and R. Grone, Eds.), Elsevier Science, New York, 1987, pp. 167-174.
48. C. N. Kockinos and R. C. Alverson, Tensors (Multilinear Algebra): The Tensor Product of Finite-Dimensional Vector Spaces and the Foundations of Exterior Algebra, Stanford Research Inst., Menlo Park, Calif., 1966.
49. G. Lemaître, Spinors According to Élie Cartan, Dept. of Mathematics, Univ. of California, Berkeley, 1964.
50. C.-K. Li, Linear operators that preserve the m-numerical radius or the m-numerical range of matrices: Results and questions, in Current Trends in Matrix Theory, Proceedings of the Third Auburn Matrix Theory Conference, March 19-22, 1986, Auburn University, Auburn, Alabama (F. Uhlig and R. Grone, Eds.), Elsevier Science, New York, 1987, pp. 199-202.
51. C. C. MacDuffee, The Theory of Matrices, Ergeb. Math. Grenzgeb., Chelsea, New York, 1946.


52. A. I. Mal'cev, Foundations of Linear Algebra, Freeman, San Francisco, 1963.
53. Marvin Marcus, Lectures on Tensor and Grassmann Products, Dept. of Mathematics, Univ. of British Columbia, Vancouver, 1958.
54. Marvin Marcus, The permanent analogue of the Hadamard determinant theorem, Bull. Amer. Math. Soc. 69:494-496 (1963).
55. Marvin Marcus, Finite Dimensional Multilinear Algebra, Part I, Pure Appl. Math., Marcel Dekker, New York, 1973.
56. Marvin Marcus, Finite Dimensional Multilinear Algebra, Part II, Pure Appl. Math., Marcel Dekker, New York, 1975.
57. Marvin Marcus, Introduction to Modern Algebra, Pure Appl. Math., Marcel Dekker, New York, 1978.
58. Marvin Marcus and Henryk Minc, A Survey of Matrix Theory and Matrix Inequalities, Allyn and Bacon, Boston, 1964.
59. Bernard R. McDonald, Linear Algebra over Commutative Rings, Pure Appl. Math., Marcel Dekker, New York, 1984.
60. Russell Merris, Multilinear Algebra, Inst. for the Interdisciplinary Applications of Algebra and Combinatorics, Santa Barbara, Calif., 1975.
61. Russell Merris, The permanental dominance conjecture, in Current Trends in Matrix Theory, Proceedings of the Third Auburn Matrix Theory Conference, March 19-22, 1986, Auburn University, Auburn, Alabama (F. Uhlig and R. Grone, Eds.), Elsevier Science, New York, 1987, pp. 213-223.
62. Henryk Minc, Permanents, Encyclopedia Math. Appl., Addison-Wesley, Reading, Mass., 1978.
63. Edward Nelson, Tensor Analysis, Princeton U.P., Princeton, 1967.
64. H. K. Nickerson, D. C. Spencer, and N. E. Steenrod, Advanced Calculus, Van Nostrand, Princeton, 1959.
65. Katsumi Nomizu, Vector Spaces and Tensor Algebras, Dept. of Mathematics, Brown Univ., Providence, 1961.
66. D. G. Northcott, Multilinear Algebra, Cambridge U.P., Cambridge, 1983.
67. Günter Pickert, Lineare Algebra, in Enzyklop. d. Math. Wissensch. I, 1, 2. Aufl., Heft 3, I, 1953.
68. Ian R. Porteous, Topological Geometry, New Univ. Math. Ser., Van Nostrand Reinhold, London, 1969.
69. Marcel Riesz, Clifford Numbers and Spinors, Univ. of Maryland, Inst. for Fluid Dynamics and Applied Mathematics, College Park, 1957, Chapters I-IV.
70. Herbert J. Ryser, Lecture Notes in Matrix Theory, Dept. of Mathematics, California Inst. of Technology, Pasadena, 1981.
71. Robert Schatten, A Theory of Cross-Spaces, Ann. Math. Stud., Princeton U.P., Princeton, 1950.
72. M. Schreiber, Differential Forms, a Heuristic Introduction, Springer-Verlag, New York, 1971.
73. I. Schur, Über endliche Gruppen und Hermitesche Formen, Math. Z. 1:184-207 (1918).
74. Ronald Shaw, Linear Algebra and Group Representations: Linear Algebra and Introduction to Group Representations, Academic, New York, 1982.


75. Ronald Shaw, Linear Algebra and Group Representations: Multilinear Algebra and Group Representations, Academic, New York, 1983.
76. G. C. Shephard, Vector Spaces of Finite Dimension, Univ. Math. Texts, Interscience, New York, 1966.
77. Wladyslaw Slebodzinski, Exterior Forms and Their Applications, PWN-Polish Scientific Publishers, Warsaw, 1970.
78. G. W. Soules, Constructing symmetric nonnegative matrices, Linear and Multilinear Algebra 13:241-251 (1983).
79. H. W. Turnbull and A. C. Aitken, An Introduction to the Theory of Canonical Matrices, Blackie & Son, London, 1950.
80. Hermann Weyl, The Classical Groups, Their Invariants and Representations, Princeton Math. Ser., Princeton U.P., Princeton, 1946.
81. Helmut Wielandt, Topics in the Analytic Theory of Matrices, Dept. of Mathematics, Univ. of Wisconsin, Madison, 1967.

Received 24 August 1989; final manuscript accepted 20 October 1989