
(MOSTLY) COMMUTATIVE ALGEBRA

Antoine Chambert-Loir

E-mail : [email protected]

English translation in progress - Version July 9, 2020


But I shall not stop to explain this in more detail, because I would
deprive you of the pleasure of learning it by yourself, and of the benefit
of cultivating your mind by exercising yourself on it...

René Descartes (1596-1650), La géométrie (1637)


CONTENTS

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix

1. Rings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.1. Definitions. First examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

1.2. Nilpotent elements; regular and invertible elements; division rings . . . . . . . . . . . . . . . 10

1.3. Algebras, polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

1.4. Ideals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

1.5. Quotient rings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

1.6. Fraction rings of commutative rings. . . . . . . . . . . . . . . . . . . . . . . . 39

1.7. Relations between quotient rings and fraction rings . . . . . . . . . 47

Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

2. Ideals and divisibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

2.1. Maximal ideals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

2.2. Maximal and prime ideals in a commutative ring . . . . . . . . . . . 67

2.3. Hilbert’s Nullstellensatz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

2.4. Principal ideal domains, euclidean rings . . . . . . . . . . . . . . . . . . . 82

2.5. Unique factorization domains. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

2.6. Polynomial rings are unique factorization domains . . . . . . . . . 94

2.7. Resultants and another theorem of Bézout. . . . . . . . . . . . . . . . . . 98

Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

3. Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

3.1. Definition of a module. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

3.2. Morphisms of modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

3.3. Operations on modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125

3.4. Quotients of modules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130


3.5. Generating sets, free sets; bases . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134

3.6. Localization of modules (commutative rings) . . . . . . . . . . . . . . . 140

3.7. Vector spaces. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145

3.8. Alternate multilinear forms. Determinants . . . . . . . . . . . . . . . . . 152

3.9. Fitting ideals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164

Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168

4. Field extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175

4.1. Integral elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176

4.2. Integral extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180

4.3. Algebraic extensions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182

4.4. Separability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190

4.5. Finite fields. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198

4.6. Galois’s theory of algebraic extensions . . . . . . . . . . . . . . . . . . . . . 200

4.7. Norms and traces. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208

4.8. Transcendence degree. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212

Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218

5. Modules over principal ideal rings . . . . . . . . . . . . . . . . . . . . . . . . . . . 227

5.1. Matrix operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228

5.2. Application to linear algebra. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232

5.3. Hermite normal form. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237

5.4. Finitely generated modules over a principal ideal domain . . . 245

5.5. Application: Finitely generated abelian groups . . . . . . . . . . . . . 253

5.6. Application: Endomorphisms of a finite dimensional vector space . . . . . . . . . 256

Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264

6. Further results on modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271

6.1. Nakayama's lemma. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272

6.2. Length. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274

6.3. The noetherian property. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282

6.4. The artinian property. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290

6.5. Support of a module, associated ideals . . . . . . . . . . . . . . . . . . . . . 297

6.6. Primary decomposition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306

Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312

7. First steps in homological algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323


7.1. Diagrams, complexes and exact sequences. . . . . . . . . . . . . . . . . . 324

7.2. The “snake lemma”. Finitely presented modules . . . . . . . . . . . 328

7.3. Projective modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334

7.4. Injective modules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343

7.5. Exactness conditions for functors. . . . . . . . . . . . . . . . . . . . . . . . . . . 348

7.6. Adjoint functors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357

7.7. Differential modules. Homology and cohomology. . . . . . . . . . 363

Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371

8. Tensor products and determinants . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379

8.1. Tensor product of two modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380

8.2. Tensor product of modules over a commutative ring . . . . . . . . 389

8.3. Tensor algebras, symmetric and exterior algebra . . . . . . . . . . . . 394

8.4. The exterior algebra and determinants . . . . . . . . . . . . . . . . . . . . . 401

8.5. Adjunction and exactness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407

8.6. Flat modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411

8.7. Faithful flatness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421

8.8. Faithfully flat descent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426

8.9. Galois descent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436

Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444

9. Algebras of finite type over a field . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451

9.1. Noether's normalization theorem. . . . . . . . . . . . . . . . . . . . . . . . . . 452

9.2. Dimension and transcendence degree . . . . . . . . . . . . . . . . . . . . . . 458

9.3. Krull’s Hauptidealsatz and applications. . . . . . . . . . . . . . . . . . . . 463

9.4. Heights and dimension. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467

9.5. Finiteness of integral closure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471

9.6. Dedekind rings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472

Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480

Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489

A.1. Algebra. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490

A.2. Set Theory. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493

A.3. Categories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501

Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511


PREFACE

This book stems from lectures on commutative algebra for 4th-year

university students of two French universities (Paris and Rennes). At that

level, students have already followed a basic course in linear algebra and

are essentially fluent with the language of vector spaces over fields. The

new topics to be introduced were the arithmetic of rings, modules, especially

principal ideal rings and the classification of modules over such rings,

Galois theory, as well as an introduction to more advanced topics such as

homological algebra, tensor products, and algebraic concepts involved

in algebraic geometry.

Rewriting the text in view of its publication, I have been led to

reorganizing or expanding many sections, for various motivations which

range from giving interesting applications of a given notion, adopting

a more natural definition which would be valid in a broader context,

to calming the anxiety of the author that “this is not yet enough”. For

example, the “(Mostly)” of the title refers to the fact that basic notions

of rings, ideals, modules are equally important in the context of non-

commutative rings — important in the sense that this point of view is

definitely fruitful in topics such as representation theory — but was

initially motivated by the (accordingly naïve) desire of being able to say

that a structure of an A-module on an abelian group M is a morphism

of rings from A to the ring of endomorphisms of M, the latter ring being

non-commutative in general.

This makes the present book quite different from classic textbooks

on commutative algebra like Atiyah/Macdonald’s, Matsumura’s or

Eisenbud’s. I am certainly less terse than the first two, and won’t go as

far as the last two. Doing so, I believe that it will be already accessible


to younger (undergraduate) students, while giving them an idea of

advanced topics which are usually not discussed at that level.

This book has also not been written as an absolute treatise (such

as Bourbaki’s chapters on the topic). While I have taken the time to

explain basic definitions and elementary examples, its reading probably

presupposes some familiarity with abstract algebra. Sometimes, it even assumes

some understanding of parts which are discussed in detail only later,

but which certainly were the object of earlier study by the reader.

Consequently, and this is especially true for the first chapters, the reader

may be willing to skip some sections for a first reading. The 500 pages of

the book don’t make it fit for a 15-week course anyway. Another hope is

that the reader will enjoy more advanced results of each chapter on the

occasion of another visit.

The main protagonists of the book are rings, even commutative rings,

viewed as generalized numbers in which we have an addition and a

multiplication. The first chapter introduces them, as well as the language

that is needed to understand their properties. Three ways to forge new

rings from old ones are introduced: polynomials, quotients and fractions.

In the second chapter, I study divisibility in general rings. As was

discovered in the xixth century, classical properties of integers such as

unique factorization, may not hold anymore and this leads to different

attitudes. One can either set up new structures — and this leads to the

study of prime ormaximal ideals— or restrict oneself to rings that satisfy

that familiar properties: then, principal ideal domains, euclidean rings,

unique factorization domains come in. First applications to algebraic

geometry already appear in this chapter: Hilbert’s Nullstellensatz and

Bézout’s theorem on the number of intersection points of plane curves.

Modules are to rings what vector spaces are to fields. This third

chapter thus revisits classic notions of linear algebra in this broader context

— operations (intersection, direct sums, products, quotients), linear

independence, bases, (multi-)linear forms. . . — but also new notions

such as Fitting ideals. Conversely, I have found it interesting to retell the

story of vector spaces from this general perspective, establishing the

existence of bases and the well-definedness of dimension. Nakayama's

lemma is also discussed in this chapter, as a tool that sometimes allows


the study of modules over general rings by reduction to the case of linear algebra over fields.

The fourth chapter presents field extensions. While its culminating

point is Galois’s theory, we however do not discuss classical “number-

theoretical” applications of Galois’s theory, such as solvability by radicals

or geometric constructions with compass and straight-edge, and for

which numerous excellent books already exist. The reader will also

observe that the first section of this chapter studies integral dependence

in the context of rings; this is both motivated by the importance of this

notion in later chapters, and by the fact that it emphasizes in a deeper

way how linear algebra is used to study algebraic/integral dependence.

The description of modules over fields, that is, of vector spaces, is par-

ticularly easy: they are classified by a single invariant, their dimension,

which in the finitely generated case, is an integer. The next case where

such an explicit description is possible is the object of the fifth chapter,

namely the classification of modules over a principal ideal ring. This classi-

fication can be obtained in various ways and we chose the algorithmic

approach allowed by the Hermite forms of matrices; it also has the

interest to furnish information on the structure of the linear group over

euclidean domains. Of course, we give the two classical applications

of the general theory, to finitely generated abelian groups (when the

principal ideal domain is Z) and to the reduction of endomorphisms

(when it is the ring k[T] of polynomials in one indeterminate over a field k).

Chapter six introduces various tools to study a module over a general

ring. Modules of finite length, or possessing the noetherian, or the artinian

properties, are analogues of finite dimensional vector spaces over a

field. I then introduce the support of a module, its associated prime

ideals, and establish the existence of a primary decomposition. These

sections have a more geometric flavour and can be thought of as a

first introduction to the algebraic geometry of schemes.

Homological algebra, one of the important concepts invented in the

second half of the xxth century, allows one to systematically quantify the defect

of injectivity/surjectivity of a linear map, and is now an unavoidable

tool in algebra, geometry, topology,. . . The introduction to this topic I


provide here goes in various directions. I define projective and injective

modules, and use them as a pretext for elaborating on the language of

categories, but fall short of defining resolutions and Tor/Ext functors in

general.

The next chapter is devoted to the general study of tensor products. It is

first used to develop the theory of determinants, via the exterior algebra.

The general lack of exactness of this operation leads to the definition of a

flat module and, from that point, to the study of faithfully flat descent and

Galois descent. This theory, invented by Alexander Grothendieck, formal-

izes the following question: imagine you need additional parameters to

define an object, how can you get rid of them?

Algebras of finite type over a field are the algebraic building blocks of

algebraic geometry, and some of the properties studied in this last

chapter, Noether’s normalization theorem, dimension, codimension,

have an obvious geometric content. Algebraically, these rings are also

better behaved than general noetherian rings. A final section is devoted

to Dedekind rings; algebraic counterparts of curves, they are also very

important in number theory and I conclude the chapter by proving the

finiteness of the class groups of number fields.

In the appendix, I first summarize general mathematical conventions.

The second section proves two important results in set theory that are

used all over the text: the Cantor-Bernstein-Schröder theorem and the

Zorn theorem. The main part of the book makes use of the language

of categories and functors, in a hopefully progressive and pedagogical

way, and I have summarized it in a last section.

Every chapter comes with a large supply of exercises of all kinds

(around 300 in total), ranging from a simple application of concepts to

additional results that I couldn’t include in the text.

I would like to thank François Loeser, with whom I first taught this

course, for his trust and friendship over all these years, Yuri Tschinkel

who brought it to a publisher, Yves Laszlo and Daniel Ferrand for their

suggestions at various times of the elaboration of the book.

I also thank Colas Bardavid, Pierre Bernard, Samit Dasgupta, Brandon

Gontmacher, Ondine Meyer and Andrey Mokhov for having pointed out a

few unfortunate typos.


CHAPTER 1

RINGS

Rings, the definition of which is the subject of this chapter, are algebraic

objects in which one can compute as in such classical contexts as integers, real numbers

or matrices: one has an addition, a multiplication, two symbols 0 and 1, and

the usual computation rules are satisfied.

The absence of a division, however, gives rise to various subtleties. For

example, the product of two elements can be zero while neither of them is zero, or

the square of a non-zero element can be zero; the reader has probably already

seen such examples in matrix rings. Fields, or division rings, in which

every non-zero element is invertible, have more familiar algebraic properties.

Some rings may be given by Nature, but it is the task of a mathematician,

even an apprentice one, to construct new rings from old ones.

To that aim, this chapter proposes three different tools, each of them having a

different goal:

– Polynomial rings make it possible to add new “indeterminate” elements to a ring, which are not required to satisfy any particular relation, but with which one can compute. We

introduce them and take the opportunity to review a few facts concerning

polynomials in one indeterminate with coefficients in a field and their roots.

– Ideals embody in general rings the classic concepts of divisibility and

congruences that the reader is certainly familiar with in the case of

integers. Given a (two-sided) ideal I of a ring A, I construct the quotient ring

A/I where the elements of I are formally forced to be “equal to zero”, while

keeping intact the standard computation rules in a ring.

– In the opposite direction, fraction rings are defined so as to force given elements to

be invertible. A particular case of this construction is the field of fractions of an

integral domain.


1.1. Definitions. First examples

Definition (1.1.1). — A ring is a set A endowed with two binary laws, an
addition (a, b) ↦→ a + b, and a multiplication (a, b) ↦→ ab, satisfying the
following axioms.

a) Axioms concerning the addition law and stating that (A, +) is an
abelian group:

(i) For every a, b in A, a + b = b + a (commutativity of addition);

(ii) For every a, b, c in A, (a + b) + c = a + (b + c) (associativity of
addition);

(iii) There exists an element 0 ∈ A such that a + 0 = 0 + a = a for every
a in A (neutral element for the addition);

(iv) For every a ∈ A, there exists b ∈ A such that a + b = b + a = 0
(existence of an opposite for the addition);

b) Axioms concerning the multiplication law and stating that (A, ·) is
a monoid:

(v) There exists an element 1 ∈ A such that 1a = a1 = a for every a ∈ A
(neutral element for the multiplication);

(vi) For every a, b, and c in A, (ab)c = a(bc) (associativity of
multiplication);

c) Axiom relating the addition and the multiplication:

(vii) For every a, b, and c in A, a(b + c) = ab + ac and (b + c)a = ba + ca
(multiplication distributes over the addition).

One says that the ring A is commutative if, moreover,

(viii) For every a and b in A, ab = ba (commutativity of multiplication).

With their usual addition and multiplication, rational integers, real

numbers, or matrices are fundamental examples of rings. In fact, the

ring axioms specify exactly the computation rules to which one is
accustomed. We shall give more examples in a moment, but we first
make explicit a few computation rules which follow from the stated axioms.

1.1.2. — Let A be a ring.

Endowed with the addition law, A is in particular an abelian group.
As a consequence, it admits exactly one zero element, which is usually
denoted by 0, or by 0_A if it is necessary to specify of which ring it is the
zero element. Moreover, any element a ∈ A has exactly one opposite
for the addition, and it is denoted by −a. Indeed, if b and c are two
opposites, then b = b + 0 = b + (a + c) = (b + a) + c = c.

For every a ∈ A, one has a · (0 + 0) = a · 0 + a · 0, hence a · 0 = 0; similarly,
0 · a = 0. Then, for any a, b ∈ A, one has a(−b) + ab = a(−b + b) = a · 0 = 0,
hence a(−b) = −(ab); similarly, (−a)b = −(ab). Consequently, there is no
ambiguity in writing −ab for either (−a)b, −(ab) or a(−b).

For any rational integer n, one defines na by induction, by setting
0a = 0, and na = a + (n − 1)a if n ⩾ 1 and na = −(−n)a if n ⩽ −1.
Observe that for any a, b ∈ A and any rational integer n, one has
a(nb) = n(ab) = (na)b. This is proved by induction on n: if n = 0, then all
three terms are 0; if n ⩾ 1, then a(nb) = a(b + (n − 1)b) = ab + a((n − 1)b) =
ab + (n − 1)(ab) = n(ab), and similarly for the other equality; finally, if
n ⩽ −1, then a(nb) = a(−((−n)b)) = −a((−n)b) = −((−n)(ab)) = n(ab).

Similarly, the multiplicative monoid (A, ·) has exactly one neutral
element, usually denoted by 1, or by 1_A if it is necessary to specify the
ring, and called the unit element of A.

If a belongs to a ring A and n is any positive¹ rational integer n ⩾ 0,
one defines a^n by induction by setting a^0 = 1 and, if n ⩾ 1, a^n = a · a^{n−1}.
For any integers m and n, one has a^{m+n} = a^m a^n and (a^m)^n = a^{mn}, as can
be checked by induction.

However, one should take care that a^n b^n and (ab)^n are generally
distinct, unless ab = ba, in which case one says that a and b commute.

¹ Recall that in this book, positive means greater than or equal to 0, and negative means smaller than or equal to 0.

Proposition (1.1.3) (Binomial formula). — Let A be a ring, let a and b be
elements in A such that ab = ba. Then, for any positive integer n,

(a + b)^n = ∑_{k=0}^{n} \binom{n}{k} a^k b^{n−k}.

Proof. — The proof is the one that is well known when a, b are rational
integers. It runs by induction on n. When n = 0, both sides are equal
to 1. Assume the formula holds for n; then,

(a + b)^{n+1} = (a + b)(a + b)^n = (a + b) ∑_{k=0}^{n} \binom{n}{k} a^k b^{n−k}
  = ∑_{k=0}^{n} \binom{n}{k} a^{k+1} b^{n−k} + ∑_{k=0}^{n} \binom{n}{k} b a^k b^{n−k}
  = ∑_{k=0}^{n} \binom{n}{k} a^{k+1} b^{n−k} + ∑_{k=0}^{n} \binom{n}{k} a^k b^{n+1−k}
  = ∑_{k=1}^{n+1} \binom{n}{k−1} a^k b^{n+1−k} + ∑_{k=0}^{n} \binom{n}{k} a^k b^{n+1−k}
  = ∑_{k=0}^{n+1} ( \binom{n}{k−1} + \binom{n}{k} ) a^k b^{n+1−k}
  = ∑_{k=0}^{n+1} \binom{n+1}{k} a^k b^{n+1−k}

since \binom{n}{k−1} + \binom{n}{k} = \binom{n+1}{k} for any positive integers n and k. This concludes
the proof by induction on n. □
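As a quick numerical sanity check (an illustration added here, not part of the book), the following Python snippet verifies the binomial formula in the commutative ring Z/12Z; the helper name binomial_check and the choice of modulus are arbitrary.

from math import comb

def binomial_check(a, b, n, modulus):
    # Compare (a + b)^n with its binomial expansion, both computed in Z/modulus Z.
    lhs = pow(a + b, n, modulus)
    rhs = sum(comb(n, k) * pow(a, k, modulus) * pow(b, n - k, modulus)
              for k in range(n + 1)) % modulus
    return lhs == rhs

# The formula holds for all pairs of (commuting) elements of Z/12Z.
assert all(binomial_check(a, b, 5, 12) for a in range(12) for b in range(12))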

Examples (1.1.4). — a) As well-known basic examples of commutative
rings, let us mention the ring Z of rational integers, the quotient rings
Z/nZ for n ⩾ 1, the fields Q of rational numbers, R of real numbers, C of
complex numbers, the ring K[X] of polynomials in one indeterminate X
with coefficients in a field (or a commutative ring) K.

b) Let A be a ring and let S be a set. The set A^S of functions from S
to A, with pointwise addition and pointwise multiplication, is a ring.
Explicitly, for f and g ∈ A^S, f + g and fg are the functions such that
(f + g)(s) = f(s) + g(s) and (fg)(s) = f(s)g(s). The set 𝒞(X, R) of all
continuous functions from a topological space X into R is a ring, as well
as the sets 𝒞^k(Ω, R) and 𝒞^k(Ω, C) of all functions of differentiability
class 𝒞^k from an open subset Ω of R^n into R or C (here, k ∈ N ∪ {∞}).

c) Let S be a set and let (A_s)_{s∈S} be a family of rings indexed by S. Let
A = ∏_{s∈S} A_s be the product of this family; this is the set of all families
(a_s)_{s∈S}, where a_s ∈ A_s for all s. We endow A with termwise addition
and termwise multiplication: (a_s) + (b_s) = (a_s + b_s) and (a_s) · (b_s) = (a_s b_s).
Then A is a ring; its zero element is the null family (0_{A_s}), its unit element
is the family (1_{A_s}).

Let us now give non-commutative examples.

Examples (1.1.5). — a) Let A be a ring, let n be a positive integer and
let M_n(A) be the set of n × n matrices with coefficients in A, endowed
with the usual computation rules: the sum of two matrices P = (p_{i,j})
and Q = (q_{i,j}) is the matrix R = (r_{i,j}) such that r_{i,j} = p_{i,j} + q_{i,j} for
all i and j in {1, . . . , n}; the product of these matrices P and Q is the
matrix S = (s_{i,j}) given by

s_{i,j} = ∑_{k=1}^{n} p_{i,k} q_{k,j}.

Endowed with these laws, M_n(A) is a ring; its zero element is the null
matrix; its unit element is the matrix I_n = (δ_{i,j}), where δ_{i,j} = 1 if i = j,
and 0 otherwise.

When n ⩾ 2, or if n ⩾ 1 and A is not commutative, then the ring
M_n(A) is not commutative.
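As an illustration (not taken from the book), the following Python sketch implements the addition and multiplication rules above for matrices with coefficients in Z/mZ, and exhibits two matrices of M_2(Z/5Z) that do not commute; the function names are of course arbitrary.

def mat_add(P, Q, mod):
    n = len(P)
    return [[(P[i][j] + Q[i][j]) % mod for j in range(n)] for i in range(n)]

def mat_mul(P, Q, mod):
    # s_{i,j} = sum over k of p_{i,k} q_{k,j}, reduced modulo mod
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) % mod
             for j in range(n)] for i in range(n)]

P = [[1, 1], [0, 1]]
Q = [[1, 0], [1, 1]]
print(mat_mul(P, Q, 5))   # [[2, 1], [1, 1]]
print(mat_mul(Q, P, 5))   # [[1, 1], [1, 2]], so PQ != QP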

b) Let G be an abelian group. When φ and ψ are any two endomorphisms
of G, the map g ↦→ φ(g) + ψ(g) is again an endomorphism of G, written
φ + ψ; this endows the set End(G) of endomorphisms of G with the
structure of an abelian group, whose zero element is the map g ↦→ 0.
Composition of endomorphisms, (φ, ψ) ↦→ φ ∘ ψ, is an associative
law, and distributes with respect to the addition; the identity map of G,
g ↦→ g, is the unit element. Consequently, these laws endow the set
End(G) of all endomorphisms of the group G with the structure of a
ring.

c) Let K be a field (commutative, say). The set End_K(V) of all en-
domorphisms of a K-vector space V is a ring, non-commutative as
soon as dim(V) ⩾ 2. Here, the addition law is the pointwise addi-
tion, (φ + ψ)(v) = φ(v) + ψ(v) for φ, ψ ∈ End_K(V) and v ∈ V, while
the multiplication law is the composition of endomorphisms, given by
φ ∘ ψ(v) = φ(ψ(v)). In fact, End_K(V) is also a K-vector space and the
multiplication is K-bilinear. One says that End_K(V) is a K-algebra.

If V has finite dimension, say n, we may choose a basis of V and
identify V with K^n; then endomorphisms of V identify with matrices of
size n, and this identifies the rings End_K(V) and M_n(K).

Here is a probably less well-known example.

Example (1.1.6). — Let A be a ring and let M be any monoid (a group,
for example).

Inside the abelian group A^M of all functions from M to A, let us
consider the subgroup A^(M) of functions f with finite support, namely
such that f(g) ≠ 0 for only finitely many g ∈ M.

It also possesses a convolution product, defined by the formula:

(φ ∗ ψ)(g) = ∑_{h,k∈M, g=hk} φ(h)ψ(k).

This product is well-defined: only finitely many terms in the sum are
nonzero, and the convolution of two functions with finite support still
has finite support. Moreover, the convolution product is associative:
indeed, for φ, ψ, θ ∈ A^(M), one has

((φ ∗ ψ) ∗ θ)(g) = ∑_{h,k∈M, g=hk} (φ ∗ ψ)(h) θ(k)
  = ∑_{h,k∈M, g=hk} ∑_{i,j∈M, h=ij} φ(i)ψ(j)θ(k)
  = ∑_{i,j,k∈M, g=ijk} φ(i)ψ(j)θ(k),

and a similar computation shows that this is as well equal to (φ ∗ (ψ ∗ θ))(g).

There is a neutral element for convolution, given by the “Dirac func-
tion” δ such that δ(g) = 1 for g = e, the neutral element of M, and
δ(g) = 0 otherwise. Indeed, for φ ∈ A^(M) and g ∈ M, one has

(φ ∗ δ)(g) = ∑_{h,k∈M, g=hk} φ(h)δ(k) = φ(g),

and similarly, (δ ∗ φ)(g) = φ(g).

These laws endow A^(M) with the structure of a ring. When A is a
commutative ring, the ring A^(M) is also called the monoid algebra of M
(with coefficients in A) (or the group algebra, when M is a group).

For k ∈ M, let δ_k ∈ A^(M) be the function such that δ_k(g) = 1 if g = k,
and δ_k(g) = 0 otherwise. One has δ_h ∗ δ_k = δ_{hk}.
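For readers who like to experiment, here is a minimal Python sketch of the convolution product (an added illustration, not from the book); functions with finite support are encoded as dictionaries, and the monoid operation op is passed as a parameter.

from collections import defaultdict

def convolve(phi, psi, op):
    # (phi * psi)(g) = sum of phi(h) psi(k) over all factorizations g = h k
    result = defaultdict(int)
    for h, x in phi.items():
        for k, y in psi.items():
            result[op(h, k)] += x * y
    return {g: v for g, v in result.items() if v != 0}

# Group algebra Z[Z/3Z]: delta_1 * delta_2 = delta_0.
op = lambda h, k: (h + k) % 3
print(convolve({1: 1}, {2: 1}, op))               # {0: 1}
print(convolve({0: 2, 1: 3}, {1: 1, 2: 5}, op))   # {1: 2, 2: 13, 0: 15}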

Definition (1.1.7). — A subring of a ring A is a subgroup B of A for the

addition, which contains 1, and is stable under the multiplication, so that the

laws of A endow B with the structure of a ring, admitting the same neutral

elements 0 and 1.

Examples (1.1.8). — a) Let A be a ring and let G = A^n, viewed as an
abelian group for addition. With the usual action of matrices on vectors,
M_n(A) is a subset of End(A^n), with compatible addition, multiplication
and unit element, so that M_n(A) is a subring of End(A^n).

b) Let A be a ring.

The set Z(A) of all elements a ∈ A such that ax = xa for every x ∈ A is
a commutative subring of A, called its center.

More generally, let S be a subset of A and let C_S(A) be the set of all
elements a ∈ A such that ax = xa for all x ∈ S. This is a subring of A,
called the centralizer of S in A. By definition, S is contained in the center
of C_S(A), and C_S(A) is the largest subring of A satisfying this property.

c) The intersection of any family of subrings of a ring A is a subring

of A.

1.1.9. — Let A be a ring and let S be a subset of A. Let B be the

intersection of all subrings of A containing S. It is a subring of A; one

calls it the subring of A generated by S.

Lemma (1.1.10). — Let A be a ring and let S be a subset of A; assume that

any two elements of S commute. Then the subring of A generated by S is

commutative.


Proof. — Let B be this subring. First of all, S is contained in the
centralizer C_S(A) of S in A, by assumption, which is a subring of A,
hence B ⊂ C_S(A). Then S is contained in the center of C_S(A), which is a
subring of C_S(A), hence of A, so that B ⊂ Z(C_S(A)). Since Z(C_S(A)) is a
commutative ring, B is commutative. □

Definition (1.1.11). — Let A and B be two rings. A morphism of rings
f : A → B is a map that satisfies the following properties:

(i) One has f(0) = 0 and f(1) = 1;

(ii) For any a and b in A, one has f(a + b) = f(a) + f(b) and f(ab) = f(a)f(b).

Synonyms are homomorphism of rings and ring morphism. An endomor-

phism of the ring A is a ring morphism from A to A.

If A is a ring, the identity map id_A : A → A, a ↦→ a, is a ring morphism.
The composition of two ring morphisms is again a ring morphism, and
composition is associative.

Consequently, rings and their morphisms form a category, denoted
by Rings.²

As in general category theory, we say that a morphism of rings
f : A → B is an isomorphism if there exists a ring morphism g : B → A
such that f ∘ g = id_B and g ∘ f = id_A. In that case, there exists exactly
one such morphism g, and we call it the inverse of f; we also say that the
rings A and B are isomorphic and write A ≃ B. We also write f : A ∼→ B
to say that f is an isomorphism of rings from A to B.

If A is a ring, an automorphism of A is an isomorphism from A to A.
The set of all automorphisms of a ring A is a group for composition.

² The theory of categories furnishes a language that is extremely useful to formulate structural properties of mathematics. We have included a basic summary in appendix A.3.

1.1.12. — Let f : A → B be a morphism of rings. The image f(A) of A
by f is a subring of B. The inverse image f^{−1}(C) of a subring C of B is a
subring of A.

Let f, g : A → B be morphisms of rings. The set C of all a ∈ A such
that f(a) = g(a) is a subring of A.


Proposition (1.1.13). — A ring morphism is an isomorphism if and only if it

is bijective.

Proof. — Let f : A → B be a morphism of rings.

Assume that f is an isomorphism, and let g be its inverse. The relations
g ∘ f = id_A and f ∘ g = id_B respectively imply that f is injective and
surjective, so that f is bijective.

Conversely, let us assume that f is bijective and let g be its inverse. We
have g ∘ f = id_A and f ∘ g = id_B, so we just need to show that g is a ring
morphism from B to A. Since f(0) = 0 and f(1) = 1, one has g(0) = 0
and g(1) = 1. For any a and b ∈ B,

f(g(a + b)) = a + b = f(g(a)) + f(g(b)) = f(g(a) + g(b))

and

f(g(ab)) = ab = f(g(a)) f(g(b)) = f(g(a)g(b)).

Since f is a bijection, g(a + b) = g(a) + g(b) and g(ab) = g(a)g(b). □

Example (1.1.14). — Let A be a commutative ring and let p be a prime number
such that p·1_A = 0. The map φ : A → A defined by φ(x) = x^p is a ring
homomorphism, called the Frobenius homomorphism.

Observe that φ(0) = 0 and φ(1) = 1. Moreover, since A is commutative,
we have φ(xy) = (xy)^p = x^p y^p = φ(x)φ(y) for every x, y ∈ A. It remains
to show that φ is additive. Let x, y ∈ A; by the binomial formula, one
has

φ(x + y) = (x + y)^p = ∑_{k=0}^{p} \binom{p}{k} x^k y^{p−k}.

But \binom{p}{k} = p!/(k!(p − k)!), a fraction the numerator of which is a multiple
of p. On the other hand, since p is prime, p divides neither k! nor (p − k)!
if 1 ⩽ k ⩽ p − 1, hence p does not divide k!(p − k)!. Consequently, p
divides \binom{p}{k} for 1 ⩽ k ⩽ p − 1. Since p·1_A = 0, this implies \binom{p}{k}·1_A = 0 for any
k ∈ {1, . . . , p − 1}, and all these terms \binom{p}{k} x^k y^{p−k} vanish. Consequently,

φ(x + y) = x^p + y^p = φ(x) + φ(y),

as was to be shown.
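The following short Python check (an added illustration, not from the book) confirms that x ↦→ x^p is additive on Z/pZ when p is prime, and that additivity can fail when the exponent is not prime.

def frobenius_is_additive(p):
    # Test (x + y)^p = x^p + y^p for all x, y in Z/pZ.
    return all(pow(x + y, p, p) == (pow(x, p, p) + pow(y, p, p)) % p
               for x in range(p) for y in range(p))

print(frobenius_is_additive(7))    # True: 7 is prime, and 7·1 = 0 in Z/7Z
print(frobenius_is_additive(4))    # False: 4 is not prime, e.g. (1 + 1)^4 = 0 but 1^4 + 1^4 = 2 in Z/4Z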


Example (1.1.15). — Let A be a ring and let a, b be elements of A such
that ab = ba = 1. (We shall soon say that a is invertible and that b is its
inverse.) Then, the map x ↦→ axb is an automorphism of A, called an
interior automorphism. Any C-linear automorphism of M_n(C) is interior
(exercise 1/11).

Example (1.1.16). — Let A = C[X_1, . . . , X_n] be the ring of polynomials in n
variables with coefficients in C. (They will be defined in example 1.3.9.)
Let Q_1, . . . , Q_n be elements of A. Consider the map φ from A to A which
associates to a polynomial P the polynomial P(Q_1, . . . , Q_n) obtained
from P by substituting the polynomial Q_i for the indeterminate X_i; it is
an endomorphism of A. It is the unique endomorphism of A such that
φ(X_i) = Q_i for all i, and φ(c) = c for all c ∈ C.

Similarly, for any permutation σ of {1, . . . , n}, let Φ_σ be the unique
endomorphism of A such that Φ_σ(X_i) = X_{σ(i)} for all i and Φ_σ(c) = c
for all c ∈ C. Observe that Φ_{στ}(X_i) = X_{σ(τ(i))} = Φ_σ(X_{τ(i)}) = Φ_σ(Φ_τ(X_i));
therefore, the endomorphisms Φ_{στ} and Φ_σ ∘ Φ_τ are equal. The map
σ ↦→ Φ_σ is a group morphism from the symmetric group S_n to the
group Aut(C[X_1, . . . , X_n]).

1.2. Nilpotent elements; regular and invertible elements; division rings

Some elements of a ring have nice properties with respect to the

multiplication. This justifies a few more definitions.

Definition (1.2.1). — Let A be a ring and let a ∈ A. One says that a is nilpotent
if there exists an integer n ⩾ 1 such that a^n = 0.

As an example, the nilpotent elements of the matrix ring M_n(C) are exactly
the nilpotent n × n matrices.
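As an added illustration (not from the book), one can check in Python that a strictly upper triangular matrix is nilpotent; here N ∈ M_3(Z) satisfies N^3 = 0.

def mat_mul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

N = [[0, 1, 2],
     [0, 0, 3],
     [0, 0, 0]]
N2 = mat_mul(N, N)
N3 = mat_mul(N2, N)
print(N2)   # [[0, 0, 3], [0, 0, 0], [0, 0, 0]]
print(N3)   # the zero matrix: N is nilpotent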

Definition (1.2.2). — Let A be a ring. We say that an element a ∈ A is left
regular if the relation ab = 0 in A implies b = 0; otherwise, we say that a
is a left zero divisor. Similarly, we say that a is right regular if the relation
ba = 0 in A only holds for b = 0, and that it is a right zero divisor otherwise.
An element is said to be regular if it is both left and right regular.

We say that a ring A is a domain if it is commutative, nonzero, and if every
nonzero element of A is regular.

In a commutative ring, an element which is not regular is simply called
a zero divisor.

Definition (1.2.3). — Let A be a ring and let a be any element of A.

One says that a is right invertible if there exists b ∈ A such that ab = 1; we
then say that b is a right inverse of a. Similarly, we say that a is left invertible
if there exists b ∈ A such that ba = 1; such an element b is called a left inverse
of a. Finally, we say that a is invertible, or a unit, if it is both left and right
invertible.

Assume that a is right invertible and let b be a right inverse for a, so
that ab = 1. If ca = 0, then cab = 0, hence c = 0. This shows that a is
right regular. Similarly, an element which is left invertible is also left
regular.

Assume that a is invertible; let b be a right inverse and c be a left
inverse of a. One has b = 1b = (ca)b = c(ab) = c1 = c. Consequently,
the right and left inverses of a are equal. In particular, a has exactly one
left inverse and one right inverse, and they are equal. This element is
called the inverse of a, and usually denoted by a^{−1}.

If the ring is commutative, the notions of left and right regular coincide;
an element which is left invertible is also right invertible, and conversely.

Let A^× be the set of invertible elements in a ring A.

Proposition (1.2.4). — The set of all invertible elements in a ring A is a group
for multiplication. It is called the group of units of A.

Any ring morphism f : A → B induces by restriction a morphism of groups
from A^× to B^×.

Proof. — Let a and b be two invertible elements of A, with inverses a^{−1}
and b^{−1}. Then, (ab)(b^{−1}a^{−1}) = a(bb^{−1})a^{−1} = aa^{−1} = 1, so that ab is right
invertible, with right inverse b^{−1}a^{−1}. Similarly, (b^{−1}a^{−1})(ab) = 1, so that
ab is left invertible too. The multiplication of A induces an associative
law on A^×. Moreover, 1 is invertible and is the neutral element for that
law. Finally, the inverse of a ∈ A^× is nothing but a^{−1}. This shows that
A^× is a group for the multiplication.

Let f : A → B be a ring morphism. Let a ∈ A^× and let b be its inverse.
Since ab = ba = 1, one has 1 = f(1) = f(a)f(b) = f(b)f(a). This shows
that f(a) is invertible, with inverse f(b). Consequently, the map f
induces by restriction a map from A^× to B^×. It maps the product
of two invertible elements to the product of their images, hence is a
group morphism. □

Let A be a commutative ring. Say that two elements a and b ∈ A are
associated if there exists an invertible element u ∈ A^× such that a = bu.
The relation of “being associated” is an equivalence relation.
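As a concrete example (added here, not part of the book), the group of units of Z/12Z can be computed in Python: it consists of the classes coprime to 12, and in this particular ring every unit is its own inverse.

from math import gcd

n = 12
units = [a for a in range(n) if gcd(a, n) == 1]    # (Z/12Z)^x
print(units)                                       # [1, 5, 7, 11]

# Products of units are units, and every unit has an inverse.
assert all((a * b) % n in units for a in units for b in units)
inverses = {a: next(b for b in units if (a * b) % n == 1) for a in units}
print(inverses)                                    # {1: 1, 5: 5, 7: 7, 11: 11}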

Definition (1.2.5). — One says that a ring A is a division ring (or a division
algebra) if it is not the null ring, and if any nonzero element of A is invertible.
A field is a commutative division ring.³ A subfield of a field is a subring
which is itself a field.

³ Terminology is not always consistent among mathematicians, and is sometimes chosen in order to avoid the repetition of adjectives. Some books call fields what we call division rings; some others use the awkward expression “skew field” to talk about division rings, but then a skew field is not a field. Since fields, i.e., commutative division rings, will play a larger role in this book than general division rings, it is useful to have a shorter name for them.

A few examples of fields are certainly well known to the reader, but it
is not obvious that there exist any noncommutative division rings at all.
Let us begin by quoting the theorem of Wedderburn according to which
any finite division ring is commutative, i.e., is a field; see exercise 4/19
for its proof, borrowed from the Book of Aigner & Ziegler (2014). The ring
of quaternions is probably the most renowned of all noncommutative
division rings; it is in fact the first one to have been discovered, by
Hamilton in 1843.

Example (1.2.6). — The underlying abelian group of the quaternions
is H = R^4; we write (1, i, j, k) for its canonical basis. Besides the
properties of being associative, having 1 as a neutral element, distributing
over addition, and being R-bilinear (that is, t(ab) = (ta)b = a(tb) for any two
quaternions a, b and any real number t), the multiplication H × H → H
is characterized by the following relations: i² = j² = k² = −1 and ij = k.

Provided these laws give H the structure of a ring, other relations follow
quite easily. Indeed, i² = −1 = k² = (ij)k = i(jk), hence i = jk after
multiplying both sides by −i; the equality j = ki is proved similarly; then,
kj = k(ki) = k²i = −i, ik = i(ij) = i²j = −j, and ji = j(jk) = j²k = −k.
It is thus a remarkable discovery that “Hamilton's multiplication table”

      1    i    j    k
  1   1    i    j    k
  i   i   −1    k   −j
  j   j   −k   −1    i
  k   k    j   −i   −1

gives rise to a ring structure on H. Properties of the addition follow
from the fact that H = R^4 is a vector space. Only the products of the
basis vectors are defined, and distributivity is basically a built-in feature
of the multiplication. The crucial point is associativity; using distributivity,
it is enough to check the relation a(bc) = (ab)c when a, b, c belong
to {1, i, j, k}. This is obvious if a = 1 (1(bc) = bc = (1b)c), if b = 1
(a(1c) = ac = (a1)c), or if c = 1 (a(b1) = ab = (ab)1). We may thus
assume that a, b, c belong to {i, j, k} but leave the reader to check these
twenty-seven remaining cases!

Let q = a1 + bi + cj + dk be any quaternion; set \bar{q} = a1 − bi − cj − dk.
For any two quaternions q and q′, one has \overline{qq′} = \bar{q′} \bar{q} and
q\bar{q} = (a² + b² + c² + d²)1; in particular, q\bar{q} is a positive real number
(multiplied by 1), and vanishes only for q = 0.

Any nonzero quaternion q is invertible, with inverse the quaternion
(q\bar{q})^{−1} \bar{q}.

Let q = a1 + bi + cj + dk be a quaternion. Observe that i^{−1}qi = −iqi =
a1 + bi − cj − dk, j^{−1}qj = a1 − bi + cj − dk and k^{−1}qk = a1 − bi − cj + dk.
It follows that the center of H consists of the elements a1, for a ∈ R; it is
a subfield of H, isomorphic to the field of real numbers.

Observe that the set of all quaternions of the form a + bi, for a, b ∈ R,
is a subfield of H isomorphic to C. More generally, for any unit vector
(u_1, u_2, u_3) ∈ R^3, the quaternion u = u_1 i + u_2 j + u_3 k satisfies u² = −1;
moreover, the set of all quaternions of the form a + bu, for a and b ∈ R,
is a subfield of H which is also isomorphic to C.

In particular, we see that the equation X² + 1 = 0 has infinitely many
solutions in H, in marked contrast to what happens in a field, where a
polynomial equation has at most as many solutions as its degree (see
§1.3.19).
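For the experimentally minded reader, here is a small Python sketch (an added illustration, not from the book) of Hamilton's multiplication: quaternions a1 + bi + cj + dk are encoded as 4-tuples (a, b, c, d), and the code checks ij = k, ji = −k and the formula q\bar{q} = (a² + b² + c² + d²)1.

def qmul(p, q):
    # Product determined by the multiplication table above, extended by bilinearity.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def conj(q):
    a, b, c, d = q
    return (a, -b, -c, -d)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(qmul(i, j), qmul(j, i))      # (0, 0, 0, 1) and (0, 0, 0, -1): ij = k, ji = -k
q = (1, 2, 3, 4)
print(qmul(q, conj(q)))            # (30, 0, 0, 0) = (1 + 4 + 9 + 16)·1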

Theorem (1.2.7) (Frobenius, 1878). — Let A be a finite dimensional R-vector
space, endowed with an R-bilinear multiplication law which endows it with
the structure of a division algebra. Then, A is isomorphic to R, C, or H.

Observe that this statement does not hold if one removes the hypothesis
that A is finite dimensional (as shown by the field R(X) of rational
functions with real coefficients), or the hypothesis that the multiplication
turns A into a division ring (consider the product ring R × R). It is
also false without the assumption that the multiplication is R-bilinear,
see Deschamps (2001). The proof below follows quite faithfully the paper
by Palais (1968). It makes use of basic results about polynomials in one
variable that will be reviewed in later sections of the book, hence may
be skipped on a first reading.

Proof. — Let 1_A be the neutral element of A for the multiplication. Let
us identify R with the subring of A consisting of the elements x·1_A, for
x ∈ R.

We first make a few observations that will be useful in the proof.

First of all, any subring B of A which is an R-vector space is a division
algebra. Indeed, for any nonzero b ∈ B, the map x ↦→ bx from B to itself
is R-linear and injective; since B is a finite dimensional vector space, it
is also surjective, so that the inverse of b (the preimage of 1 under this
map) belongs to B.

Let also α be any element of A∖R. The ring R[α] generated by R
and α is commutative. It is a vector subspace of A, hence is a field. The
minimal polynomial P of α over R is irreducible, hence has degree ⩽ 2.
Since α ∉ R, this degree is exactly 2 and there are real numbers u, v
such that P = X² + 2uX + v. Then, (α + u)² = u² − v, so that the element
i = (α + u)/√(v − u²) of R[α] satisfies i² = −1. Let us observe that
R[α] = R[i] is isomorphic to C. In particular, if α² ∈ R, then α² < 0.

Let us now assume that A ≠ R and let us choose α ∈ A∖R. As we
just saw, there exists i ∈ R[α] such that i² = −1 and R[α] = R[i] ≃ C.
We may identify the subfield R[i] with the field of complex numbers
and view A as a C-vector space, complex scalars acting on A by left
multiplication.

Let us now assume that A is commutative; we shall show that in this
case, A = C. More generally, we shall show that there is no element
β ∈ A∖C which commutes with i. By contradiction, let β ∈ A∖C be such
that βi = iβ. The subring C[β] is commutative, and is a field. Moreover,
since β ∉ R, the same argument as above furnishes an element j of the
form (β + u′)/√(v′ − (u′)²) of A such that j² = −1 and R[j] = R[β]. Since
β ∉ C, we see that j ≠ ±i. It follows that the polynomial X² + 1 has at
least four roots (namely, i, −i, j and −j) in the field C[β], which is absurd.

Let φ : A → A be the map given by φ(x) = xi. It is C-linear and
satisfies φ² = −id_A. Since the polynomial X² + 1 is split in C, with
simple roots i and −i, the space A is the direct sum of the eigenspaces
for the eigenvalues i and −i. In other words, A is equal to the direct sum
A⁺ ⊕ A⁻, where A⁺ is the set of all x ∈ A such that φ(x) = xi = ix and
A⁻ is the space of x ∈ A such that φ(x) = xi = −ix.

Let us remark that A⁺ is stable under multiplication; indeed, if xi = ix
and yi = iy, then (xy)i = xiy = ixy. It also contains the inverse of any
of its nonzero elements since, if x ∈ A⁺ is not equal to 0, then xi = ix, so
that x^{−1}i = ix^{−1}. Hence A⁺ is a subfield of A which contains the field C.
Since no element of A∖C commutes with i, we obtain A⁺ = C.

Assume finally that A ≠ A⁺ and let us consider any nonzero element
β ∈ A⁻. The map x ↦→ xβ is C-linear, and injective, hence bijective.
If xi = ix, then xβi = x(−iβ) = −xiβ = −ixβ, so that A⁺β ⊂ A⁻.
Conversely, if yi = −iy, let us choose x ∈ A such that y = xβ; then
xiβ = −xβi = −yi = iy = ixβ, hence xi = ix since β ≠ 0, hence the other
inclusion and A⁺β = A⁻.

The same argument shows that A⁻β = A⁺. In particular, β² ∈ C ∩ R[β].
These two vector spaces, C and R[β], are distinct, have dimension 2, and
contain R. Consequently, their intersection is equal to R. Since β² ∈ R
and β ∉ R, we have β² < 0. Then j = (−β²)^{−1/2} β is an element of A⁻ such
that j² = −1. Moreover, A⁻ = A⁺β = A⁺j and A⁺ = A⁻j.

Set k = ij. One has A⁺ = R·1 ⊕ Ri, A⁻ = A⁺j = Rj ⊕ Rk, and
A = A⁺ ⊕ A⁻. The R-vector space A has dimension 4, and (1, i, j, k) is
a basis. One has k² = ijij = i(−ij)j = −i²j² = −1; more generally, the
multiplication table of A coincides with that of H. This shows that A
and H are isomorphic. □

1.3. Algebras, polynomials

Definition (1.3.1). — Let k be a commutative ring. A k-algebra is a ring A
together with a morphism of rings i : k → A whose image is contained in the
center of A.

Formally, a k-algebra is defined as the ordered pair (A, i : k → A).
However, we will mostly say “Let A be a k-algebra”, thereby leaving the
morphism i implicit. For x ∈ k and a ∈ A, we will also commit the abuse
of writing xa for i(x)a, even when i is not injective. Observe also that a
k-algebra may be non-commutative.

A subalgebra of a k-algebra (A, i) is a subring B containing the image
of i, so that (B, i) is a k-algebra.

Definition (1.3.2). — Let k be a commutative ring and let (A, i) and (B, j) be
k-algebras. A morphism of k-algebras f : A → B is a ring morphism f such
that f(i(x)a) = j(x)f(a) for every x ∈ k and every a ∈ A.

A composition of morphisms of k-algebras is a morphism of k-algebras;
the inverse of a bijective morphism of k-algebras is a morphism of
k-algebras.

Remark (1.3.3). — There is a general notion of k-algebras based on the
language of modules, that we develop later in this book (from chapter 3
on). Namely, a (general) k-algebra is a k-module M endowed with a
k-bilinear “multiplication” μ_M : M × M → M; a morphism of k-algebras
is a k-linear map f : M → N which is compatible with the multiplications:
for a, b ∈ M, f(μ_M(a, b)) = μ_N(f(a), f(b)).

Additional properties of the multiplication can of course be required,
giving rise to the notions of unitary k-algebras (if there is a unit for the
multiplication, and morphisms are also required to map the unit to the
unit), commutative k-algebras (if the multiplication is commutative),
associative k-algebras (if it is associative), but also Lie k-algebras (if it
satisfies the Jacobi identity), etc.

If the multiplication is unitary and associative, then we get a k-algebra
in our sense; if it is moreover commutative, we get a commutative
k-algebra. The terminology we chose reflects the fact that, at least in this
book, we are essentially interested in unitary and associative algebras.

Examples (1.3.4). — a) Let k be a subring of a commutative ring A.
The inclusion k ↪ A endows A with the structure of a k-algebra.

b) Any ring is, in a unique way, a Z-algebra. Indeed, for any ring A,
the map defined by i(n) = n·1_A for n ∈ Z is the unique morphism of
rings i : Z → A.

c) Let K be a division ring and let i : Z → K be the unique morphism of
rings.

Assume that i is injective. Then its image is a subring of K which is
isomorphic to Z. Moreover, the ring K being a division ring, it contains
the set K_0 of all fractions a/b, for a, b ∈ i(Z) and b ≠ 0. This set K_0 is a
subfield of K, isomorphic to the field Q of rational numbers; one says
that K has characteristic 0.

Assume now that i is not injective and let p be the smallest integer
such that p > 0 and i(p) = 0. Let us prove that p is a prime number.
Since i(1) = 1_K ≠ 0, one has p > 1. Let m and n be positive integers such
that p = mn; then i(m)i(n) = i(p) = 0, hence i(m) = 0 or i(n) = 0; by
minimality, one has m ⩾ p or n ⩾ p; this proves that p is a prime number,
as claimed. The image of i is then a subfield K_0 of K of cardinality p and
one says that K has characteristic p.

The field K_0 is sometimes called the prime field of K.

d) Let k be a commutative ring. The ring k[X] of polynomials with
coefficients in k in one indeterminate X is naturally a k-algebra. We will
define below algebras of polynomials in any number of indeterminates.

e) Let k be a commutative ring. The ring M_n(k) of n × n matrices
with coefficients in k is a k-algebra, where the canonical morphism
i : k → M_n(k) associates with a ∈ k the matrix aI_n.
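The characteristic just defined can be computed naively by adding 1 to itself until 0 is reached; the following Python sketch (an added illustration, not from the book) does this for Z/nZ, with an arbitrary search bound since a ring of characteristic 0 would make the loop run forever.

def characteristic(add, one, zero, bound=10**6):
    # Smallest p > 0 with p·1 = 0, or 0 if none is found below the bound.
    x, p = one, 1
    while x != zero:
        x = add(x, one)
        p += 1
        if p > bound:
            return 0
    return p

print(characteristic(lambda a, b: (a + b) % 12, 1, 0))   # 12
print(characteristic(lambda a, b: (a + b) % 7, 1, 0))    # 7, a prime: Z/7Z is a field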

1.3.5. Algebra generated by a set. — Let k be a commutative ring, let
A be a k-algebra and let S be a subset of A. By definition, the k-algebra
k[S] is the smallest subalgebra of A which contains S. It is the set of
all sums of elements of the form λs_1 . . . s_n, where λ ∈ k, n ∈ N, and
s_1, . . . , s_n ∈ S. If S = {a_1, . . . , a_n}, one also writes k[a_1, . . . , a_n] for k[S].

Proposition (1.3.6). — Let k be a commutative ring, let A be a commutative
k-algebra and let B be an A-algebra. Assume that A is generated as a k-algebra
by elements (a_i)_{i∈I} and that B is generated as an A-algebra by elements
(b_j)_{j∈J}. Then B is generated as a k-algebra by the elements (b_j)_{j∈J} and the
elements (a_i)_{i∈I}.

Proof. — Let B′ be the k-subalgebra of B generated by the a_i and the b_j,
and let A′ be its inverse image in A. It is a k-subalgebra of A that contains
the a_i, hence is equal to A, so that B′ contains the image of A in B. Since
it also contains the b_j, one has B′ = B, as was to be shown. □

Corollary (1.3.7). — Let k be a commutative ring. If A is a finitely generated
commutative k-algebra and B is a finitely generated A-algebra, then B is finitely
generated as a k-algebra.

1.3.8. — Let k be a commutative ring, let A be a commutative k-algebra
which is a field, and let S be a subset of A. Then one also considers the
subfield of A generated by S over k, denoted by k(S): this is the smallest
subfield of A that contains k and S. It is the set of all fractions a/b, where
a, b ∈ k[S], the k-subalgebra of A generated by S, and b ≠ 0.

Example (1.3.9) (General polynomial rings). — Let A be a ring and let I be any set. The set N^(I) is the set of all multi-indices indexed by I: its elements are families (n_i)_{i∈I} consisting of positive integers, almost all of which are zero. When I is the finite set {1, …, d}, then N^(I) is naturally identified with the set N^d of d-tuples of positive integers. The termwise addition of N^(I) endows it with the structure of a monoid.

The ring 𝒫_I is defined as the monoid ring associated with this monoid, with coefficients in A. Let us review its definition in this particular case. By definition, 𝒫_I is the set A^(N^(I)) of families (a_m)_{m∈N^(I)} of elements of A, indexed by N^(I), with finite support, that is, of which all but finitely many terms are zero (we also say that such a family is almost null). Endowed with term-by-term addition, it is an abelian group. Let P = (p_m) and Q = (q_m) be elements of 𝒫_I. For any m ∈ N^(I), one may set

r_m = ∑_{m′+m″=m} p_{m′} q_{m″},

because the sum is finite; the family (r_m) is an almost null family of elements of A indexed by N^(I), hence defines an element R of 𝒫_I.

One checks that this law (P, Q) ↦ R is associative: for P = (p_m), Q = (q_m) and R = (r_m), one has (PQ)R = P(QR) = S, where S = (s_m) is given by

s_m = ∑_{m′+m″+m‴=m} p_{m′} q_{m″} r_{m‴}.

The unit element of 𝒫_I is the family (δ_m) such that δ_0 = 1 (for the multi-index 0 = (0, …)) and δ_m = 0 for any m ∈ N^(I) such that m ≠ 0.

The map A → A[(X_i)] given by a ↦ a·1 is a morphism of rings.

For any m ∈ N^(I), write X^m for the element of 𝒫_I whose only nonzero term is the one at m, and it equals 1. One has X^m X^{m′} = X^{m+m′}. In particular, the elements of the form X^m pairwise commute.

For i ∈ I, let ε_i ∈ N^(I) be the multi-index that equals 1 at i and 0 otherwise; one writes X_i = X^{ε_i}. For any multi-index m, one has m = ∑_{i∈I} m_i ε_i (a finite sum, since all but finitely many coefficients m_i are zero), so that X^m = ∏_{i∈I} X_i^{m_i}. Consequently, for any P = (p_m) ∈ 𝒫_I, one has

P = ∑_{m∈N^(I)} p_m X^m = ∑_{m∈N^(I)} p_m ∏_{i∈I} X_i^{m_i}.

The ring 𝒫_I is called the ring of polynomials with coefficients in A in the family of indeterminates (X_i)_{i∈I}; it is denoted by A[(X_i)_{i∈I}]. The map a ↦ aX^0 is a ring morphism from A to A[(X_i)_{i∈I}]; the elements of its image are called constant polynomials.

The choice of the letter X for denoting the indeterminates is totally arbitrary; x_i, Y_i, T_i are other common notations. When I = {1, …, n}, one rather writes A[X_1, …, X_n], or A[x_1, …, x_n], or A[Y_1, …, Y_n], etc. When I has only one element, the indeterminate is often denoted by x, T, X, etc., so that this ring is denoted by A[x], A[T], A[X], accordingly. When I has few elements, the indeterminates are also denoted by distinct letters, as in A[X, T], or A[X, Y, Z], etc.

When A is a commutative ring, the ring of polynomials is a commutative ring, hence an A-algebra via the morphism A → A[(X_i)] given by a ↦ a·1.

Remark (1.3.10). — Let A be a ring, let I be a set, let J be a subset of I and let K = I ∖ J. Extending families (a_j)_{j∈J} by zero, let us identify N^(J) with a subset of N^(I), and similarly for N^(K). Then every element m ∈ N^(I) can be uniquely written as a sum m = m′ + m″, where m′ ∈ N^(J) and m″ ∈ N^(K); in fact, m′_j = m_j for j ∈ J, and m″_k = m_k for k ∈ K.

Then we can write any polynomial P = ∑_{m∈N^(I)} p_m X^m in 𝒫_I under the form

P = ∑_{n∈N^(K)} ( ∑_{m∈N^(J)} p_{m+n} X^m ) X^n = ∑_{n∈N^(K)} P_n X^n,

where P_n = ∑_{m∈N^(J)} p_{m+n} X^m ∈ 𝒫_J. This furnishes a map P ↦ (P_n)_n from 𝒫_I to the ring (A[(X_j)_{j∈J}])[(X_k)_{k∈K}]. One checks readily that this map is bijective and is a ring morphism; it is thus an isomorphism of rings.

1.3.11. Monomials and degrees. — For any m ∈ N^(I), the element X^m = ∏_{i∈I} X_i^{m_i} is called the monomial of exponent m. One defines its degree with respect to X_i to be m_i, and its total degree as ∑_{i∈I} m_i.

Let P be a polynomial in A[(X_i)]. Write P = ∑ a_m X^m; the monomials of P are the polynomials a_m X^m for a_m ≠ 0. For any i ∈ I, the degree of P with respect to X_i, denoted by deg_{X_i}(P), is the least upper bound of the degrees with respect to X_i of all monomials of P. Similarly, the total degree deg(P) of P is the least upper bound of the total degrees of all monomials of P. These least upper bounds are computed in N ∪ {−∞}, so that the degrees of the zero polynomial are equal to −∞; otherwise, they are positive.

The constant term of a polynomial P is the coefficient of the monomial of exponent 0 ∈ N^(I). A polynomial P is constant if it is reduced to this constant term, equivalently if its total degree is at most 0, equivalently if its degree with respect to every indeterminate is at most 0.

When there is only one indeterminate T, the degree deg_T and the total degree deg of a polynomial P ∈ A[T] coincide. Moreover, if deg(P) = n, then P has a unique monomial of degree n, and its coefficient is called the leading coefficient of P. The polynomial P is said to be monic if its leading coefficient is equal to 1.

Let d ∈ N. One says that a polynomial P ∈ 𝒫_I is homogeneous of degree d if all of its monomials have total degree d. Any polynomial P ∈ 𝒫_I can be written uniquely as a sum P = ∑_d P_d, where each P_d ∈ 𝒫_I is homogeneous of degree d (and all but finitely many P_d are zero). In fact, if P = ∑_{m∈N^(I)} p_m X^m, then P_d = ∑_{deg(X^m)=d} p_m X^m.

Let i ∈ I. For any polynomials P and Q, one has

deg_{X_i}(P + Q) ≤ sup(deg_{X_i}(P), deg_{X_i}(Q)),

with equality if deg_{X_i}(P) ≠ deg_{X_i}(Q). Moreover,

deg(P + Q) ≤ sup(deg(P), deg(Q)).

Regarding multiplication, we have

deg_{X_i}(PQ) ≤ deg_{X_i}(P) + deg_{X_i}(Q) and deg(PQ) ≤ deg(P) + deg(Q),

and we shall see below that these inequalities are equalities if A is a domain.

Proposition (1.3.12). — Let P and Q ∈ A[T] be nonzero polynomials in one indeterminate T. Assume that the leading coefficient of P is left regular, or that the leading coefficient of Q is right regular. Then deg(PQ) = deg(P) + deg(Q). In particular, PQ ≠ 0.

Proof. — Write P = p_0 + p_1 T + ⋯ + p_m T^m and Q = q_0 + ⋯ + q_n T^n, with m = deg P and n = deg Q, so that p_m ≠ 0 and q_n ≠ 0. Then

PQ = p_0 q_0 + (p_0 q_1 + p_1 q_0)T + ⋯ + (p_{m−1} q_n + p_m q_{n−1})T^{m+n−1} + p_m q_n T^{m+n}.

By assumption, p_m q_n ≠ 0 and the degree of PQ is equal to m + n, as claimed. □

Corollary (1.3.13). — Let A be a domain and let I be a set. Then, for any polynomials P, Q ∈ A[(X_i)_{i∈I}] and any i ∈ I, one has

deg_{X_i}(PQ) = deg_{X_i}(P) + deg_{X_i}(Q) and deg(PQ) = deg(P) + deg(Q).

In particular, the polynomial ring A[(X_i)] is a domain.

Proof. — We first prove that the product of two nonzero polynomials is not equal to 0, and that its degree with respect to the indeterminate X_i is the sum of their degrees.

Since P and Q have only finitely many monomials, we may assume that there are only finitely many indeterminates. We can then argue by induction on the number of indeterminates that appear in P and Q. If this number is 0, then P = p and Q = q for some p, q ∈ A, and deg_{X_i}(P) = deg_{X_i}(Q) = 0. Since A is a domain, one has pq ≠ 0, hence PQ = pq ≠ 0, and deg_{X_i}(PQ) = 0.

Let m = deg_{X_i}(P) and n = deg_{X_i}(Q). First assume that the indeterminate X_i appears in P or Q, so that m > 0 or n > 0. Let J = I ∖ {i}. There are polynomials P_0, …, P_m and Q_0, …, Q_n in the ring 𝒫_J of polynomials with coefficients in A in the indeterminates X_j, for j ∈ J, such that P = ∑_{k=0}^{m} P_k X_i^k and Q = ∑_{k=0}^{n} Q_k X_i^k. Then one can write PQ = ∑_{k=0}^{m+n} R_k X_i^k for some polynomials R_k ∈ 𝒫_J, and one has R_{m+n} = P_m Q_n. Since X_i appears in P or Q, the number of indeterminates that appear in P_m or Q_n has decreased; by induction, one has P_m Q_n ≠ 0. This proves that deg_{X_i}(PQ) = m + n; in particular, PQ ≠ 0.

If the indeterminate X_i appears neither in P nor in Q, we can redo this argument with some other indeterminate X_j that appears in P or in Q; this shows that PQ ≠ 0. Since the indeterminate X_i does not appear in PQ, this shows that deg_{X_i}(PQ) = 0.

To prove the assertion about total degrees, let m = deg(P), n = deg(Q), and let us write P = ∑_{i=0}^{m} P_i and Q = ∑_{j=0}^{n} Q_j as the sums of their homogeneous components. Then PQ = ∑_{k=0}^{m+n} R_k, where R_k = ∑_{i+j=k} P_i Q_j. Expanding the products P_i Q_j, we observe that for each k, the polynomial R_k is a sum of monomials of degree k, hence R_k is homogeneous of degree k. By assumption, P_m ≠ 0 and Q_n ≠ 0, so that R_{m+n} = P_m Q_n ≠ 0, by the first part of the proof. This proves that deg(PQ) = m + n, as was to be shown. □
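As a quick computational illustration (mine, not the book's), the following Python sketch checks the degree formula for a product over ℤ, which is a domain, and exhibits the failure of equality over ℤ/4ℤ, which is not a domain. The polynomials are arbitrary examples, represented as plain coefficient lists.

```python
# Polynomials are lists of coefficients, p[k] being the coefficient of X^k.

def poly_mul(p, q, reduce=lambda c: c):
    """Multiply two polynomials given by coefficient lists;
    `reduce` maps each coefficient back into the coefficient ring."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] = reduce(r[i + j] + a * b)
    return r

def deg(p):
    """Degree of a polynomial (-infinity for the zero polynomial)."""
    nonzero = [k for k, c in enumerate(p) if c != 0]
    return max(nonzero) if nonzero else float("-inf")

# Over Z (a domain): deg(PQ) = deg(P) + deg(Q).
P, Q = [1, 0, 3], [2, 5]                      # 1 + 3X^2 and 2 + 5X
assert deg(poly_mul(P, Q)) == deg(P) + deg(Q) == 3

# Over Z/4Z (not a domain): (1 + 2X)(1 + 2X) = 1 + 4X + 4X^2 = 1,
# so the inequality deg(PQ) <= deg(P) + deg(Q) is strict here.
mod4 = lambda c: c % 4
R = poly_mul([1, 2], [1, 2], reduce=mod4)
assert R == [1, 0, 0] and deg(R) == 0 < 2
print("degree checks passed")
```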

Corollary (1.3.14). — Let A be a domain and let I be a set. The units of the polynomial ring A[(X_i)_{i∈I}] are the constant polynomials equal to a unit of A.

(For the case of a general commutative ring A, see exercise 1/20.)

Proof. — Since the map a ↦ a·1 is a ring morphism from A to A[(X_i)], it maps units to units, and the constant polynomials equal to a unit of A are units in A[(X_i)].

Conversely, let P be a unit in A[(X_i)] and let Q ∈ A[(X_i)] be such that PQ = 1. Then deg(PQ) = deg(P) + deg(Q) = deg(1) = 0, hence deg(P) = deg(Q) = 0. This proves that P and Q are constant polynomials. Let a, b ∈ A be such that P = a and Q = b; then ab = 1, so that a and b are units in A. In particular, P is a constant polynomial equal to a unit of A. □

Theorem (1.3.15). — Let A be a ring and let P, Q be two polynomials with coefficients in A in one indeterminate X. One assumes that Q ≠ 0 and that the coefficient of its monomial of highest degree is invertible. Then there exists a unique pair (R, S) of polynomials in A[X] such that P = RQ + S and deg(S) < deg(Q).

Proof. — Let us begin with uniqueness. If P = RQ + S = R′Q + S′ for polynomials R, S, R′, S′ such that deg(S) < deg(Q) and deg(S′) < deg(Q), then the degree of (R′ − R)Q = S − S′ is at most sup(deg(S), deg(S′)) < deg(Q). Assume R ≠ R′, that is, R′ − R ≠ 0; then deg((R′ − R)Q) = deg(R′ − R) + deg(Q) ≥ deg(Q) (by proposition 1.3.12, the leading coefficient of Q being invertible, hence regular), a contradiction. Consequently, R = R′ and S = P − RQ = P − R′Q = S′.

Let us now prove the existence of a pair (R, S) as in the statement of the theorem. Let d = deg(Q) and let u be the leading coefficient of Q; by assumption, it is a unit. We argue by induction on n = deg(P). If n < d, it suffices to set R = 0 and S = P. Otherwise, let aX^n be the monomial of highest degree of P. Then P′ = P − au⁻¹X^{n−d}Q is a polynomial of degree at most n. However, by construction, the coefficient of X^n in P′ is equal to a − au⁻¹u = 0, so that deg(P′) < n = deg P. By induction, there exist polynomials R′ and S′ in A[X] such that

P′ = R′Q + S′ and deg(S′) < deg(Q).

Then

P = P′ + au⁻¹X^{n−d}Q = (R′ + au⁻¹X^{n−d})Q + S′.

It now suffices to set R = R′ + au⁻¹X^{n−d} and S = S′. This concludes the proof. □
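The inductive existence proof is effectively an algorithm. Here is a minimal sketch of it (my illustration, not the book's), for polynomials with coefficients in ℤ/nℤ represented as coefficient lists; it assumes the leading coefficient of the divisor is invertible modulo n and computes its inverse with Python's built-in pow(u, -1, n).

```python
def poly_divmod_mod(P, Q, n):
    """Euclidean division P = R*Q + S in (Z/nZ)[X], with deg(S) < deg(Q).
    Polynomials are coefficient lists, P[k] = coefficient of X^k.
    The leading coefficient of Q must be invertible modulo n."""
    P = [c % n for c in P]
    Q = [c % n for c in Q]
    d = len(Q) - 1
    u_inv = pow(Q[-1], -1, n)            # inverse of the leading coefficient of Q
    R = [0] * max(len(P) - d, 1)
    S = P[:]                              # reduced step by step, as in the inductive proof
    while len(S) - 1 >= d and any(S):
        k = len(S) - 1 - d
        a = (S[-1] * u_inv) % n           # cancel the leading monomial of S: subtract a*X^k*Q
        R[k] = a
        for i, q in enumerate(Q):
            S[k + i] = (S[k + i] - a * q) % n
        while len(S) > 1 and S[-1] == 0:  # drop trailing zero coefficients
            S.pop()
    return R, S

# Example in (Z/7Z)[X]: divide X^3 + 2X + 5 by 3X + 1.
R, S = poly_divmod_mod([5, 2, 0, 1], [1, 3], 7)
print(R, S)   # [2, 3, 5] [3], i.e. R = 2 + 3X + 5X^2 and S = 3
```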


Polynomial algebras obey an important universal property.

Proposition (1.3.16). — Let k be a commutative ring. Let A be any k-algebra and let I be a set. For any family (a_i)_{i∈I} of elements of A which commute pairwise, there exists a unique morphism f : k[(X_i)_{i∈I}] → A of k-algebras such that f(X_i) = a_i for every i ∈ I.

Proof. — If there is such a morphism f, it must satisfy

f(λ ∏_{i∈I} X_i^{m_i}) = λ ∏_{i∈I} f(X_i)^{m_i} = λ ∏_{i∈I} a_i^{m_i}

for any multi-index (m_i) ∈ N^(I) and any λ ∈ k. (In the previous formulae, all products are essentially finite, and the order of the factors is not relevant, because the a_i commute.) Consequently, for any polynomial P = ∑ p_m X^m, one must have

f(P) = ∑_{m∈N^(I)} p_m ∏_{i∈I} a_i^{m_i},

which proves first that there exists at most one such morphism f, and second, that if it exists, it has to be defined by this formula.

Conversely, it is straightforward to prove, using the fact that the a_i commute pairwise, that this formula indeed defines a morphism of k-algebras. □

Especially when A = k, this morphism is sometimes called the evaluation morphism at the point (a_i)_{i∈I}, and the image of the polynomial P is denoted by P((a_i)). When I = {1, …, n}, one writes simply P(a_1, …, a_n). This gives, for example, a morphism of k-algebras k[X_1, …, X_n] → ℱ(k^n, k) from the algebra of polynomials in n indeterminates to the k-algebra of functions from k^n to k. The functions which belong to the image of this morphism are naturally called polynomial functions.

Here is an important example. Let k be a field, let V be a k-vector space and let A be the k-algebra of endomorphisms of V. One can also take A to be the k-algebra M_n(k) of all n × n matrices with coefficients in k. Then, for any element a ∈ A and any polynomial P ∈ k[X], one may compute P(a). For P, Q ∈ k[X], one has P(a) + Q(a) = (P + Q)(a) and P(a)Q(a) = (PQ)(a). These formulae just express that the map from k[X] to A given by P ↦ P(a) is a morphism of k-algebras.
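As a small illustration of this morphism (my example, using the sympy library), one can evaluate a polynomial at a 2 × 2 matrix; the chosen polynomial happens to be the characteristic polynomial of the chosen matrix, so the result is the zero matrix by the Cayley–Hamilton theorem.

```python
from sympy import Matrix, symbols, Poly, eye

X = symbols("X")
a = Matrix([[0, 1], [2, 3]])           # an element of M_2(Q)
P = Poly(X**2 - 3*X - 2, X)            # the characteristic polynomial of a

def evaluate(P, a):
    """Image of P under the k-algebra morphism k[X] -> M_n(k) sending X to a (Horner's scheme)."""
    n = a.shape[0]
    result = Matrix.zeros(n, n)
    for c in P.all_coeffs():           # coefficients from the highest degree down
        result = result * a + c * eye(n)
    return result

print(evaluate(P, a))                  # the zero matrix (Cayley-Hamilton)
```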

Lemma (1.3.17). — Let k be a commutative ring, let A be a k-algebra and let a_1, …, a_n be elements of A that commute pairwise. Then the subalgebra of A generated by a_1, …, a_n, namely k[a_1, …, a_n], is the image of the morphism from k[X_1, …, X_n] to A given by evaluation at (a_1, …, a_n).

Proof. — Indeed, let φ be this evaluation morphism. Since φ(X_i) = a_i, the image Im(φ) of φ is a subalgebra of A which contains a_1, …, a_n. Therefore, Im(φ) contains k[a_1, …, a_n]. Conversely, any subalgebra of A which contains a_1, …, a_n also contains all elements of A of the form λa_1^{m_1}⋯a_n^{m_n}, as well as their sums. This shows that k[a_1, …, a_n] contains Im(φ), hence the equality. □

1.3.18. Formal derivatives. — Let A be a ring and let P ∈ A[X] be a polynomial; writing P = ∑_{m=0}^{n} a_m X^m, its (formal) derivative is the polynomial P′ = ∑_{m=1}^{n} m a_m X^{m−1}.

We observe the following rules: (aP)′ = aP′, (P + Q)′ = P′ + Q′ and (PQ)′ = P′Q + PQ′ (Leibniz rule). Leaving to the reader the proofs of the first two, let us prove the third one. By additivity, we may assume that P = X^m and Q = X^n; then PQ = X^{m+n}, P′ = mX^{m−1} and Q′ = nX^{n−1}, hence

(PQ)′ = (m + n)X^{m+n−1} = mX^{m+n−1} + nX^{m+n−1} = P′Q + PQ′,

as claimed.
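A computational aside (not in the text): the formal derivative makes sense over any ring, and in characteristic p it can annihilate nonconstant polynomials. The sketch below works with coefficient lists modulo a prime p, checks the Leibniz rule on an arbitrary example, and shows that the derivative of X^p is zero modulo p.

```python
def derivative_mod(P, p):
    """Formal derivative of a polynomial over Z/pZ (coefficient lists, P[k] = coeff of X^k)."""
    return [(k * c) % p for k, c in enumerate(P)][1:] or [0]

def mul_mod(P, Q, p):
    R = [0] * (len(P) + len(Q) - 1)
    for i, a in enumerate(P):
        for j, b in enumerate(Q):
            R[i + j] = (R[i + j] + a * b) % p
    return R

def add_mod(P, Q, p):
    n = max(len(P), len(Q))
    P, Q = P + [0] * (n - len(P)), Q + [0] * (n - len(Q))
    return [(a + b) % p for a, b in zip(P, Q)]

p = 5
P, Q = [1, 2, 0, 3], [4, 0, 1]          # arbitrary polynomials over Z/5Z
lhs = derivative_mod(mul_mod(P, Q, p), p)                      # (PQ)'
rhs = add_mod(mul_mod(derivative_mod(P, p), Q, p),
              mul_mod(P, derivative_mod(Q, p), p), p)          # P'Q + PQ'
assert lhs == rhs                                              # Leibniz rule

Xp = [0] * p + [1]                      # the polynomial X^p
print(derivative_mod(Xp, p))            # [0, 0, 0, 0, 0]: pX^{p-1} = 0 in characteristic p
```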

1.3.19. Roots and their multiplicities. — Let K be a field and let P ∈ K[X] be a non-constant polynomial. A root of P in K is an element a ∈ K such that P(a) = 0.

If a is a root of P, then the euclidean division of P by X − a takes the form P = (X − a)Q + R, where R ∈ K[X] is a polynomial of degree < 1, hence a constant, and by evaluation, one has 0 = P(a) = (a − a)Q(a) + R(a) = R, hence P = (X − a)Q and X − a divides P. Conversely, if X − a divides P, then P(a) = 0.

By successive euclidean divisions, one may write P = Q ∏_{i=1}^{n} (X − a_i)^{m_i}, where Q ∈ K[X] is a polynomial without a root in K, a_1, …, a_n are distinct elements of K and m_1, …, m_n are strictly positive integers. (The process must stop since the degree of Q decreases by 1 at each step, and is positive.)

Let us prove that m_i is the largest integer m such that (X − a_i)^m divides P. This integer is called the multiplicity of the root a_i. By definition, (X − a_i)^{m_i} divides P. Conversely, assume by contradiction that there exists m > m_i such that (X − a_i)^m divides P; let us write P = Q_1(X − a_i)^m. From the equality P = Q_1(X − a_i)^m = Q ∏_{j=1}^{n} (X − a_j)^{m_j}, we then get Q_1(X − a_i)^{m−m_i} = Q ∏_{j≠i} (X − a_j)^{m_j}; evaluating both sides of this equality at X = a_i, we obtain 0 = Q(a_i) ∏_{j≠i} (a_i − a_j)^{m_j}. Since Q has no root in K and the a_j are pairwise distinct, the right hand side is nonzero, a contradiction.

Let a ∈ K be such that P(a) = 0. Then there exists i ∈ {1, …, n} such that a = a_i (evaluate the factorization at a and use that Q(a) ≠ 0): the elements a_1, …, a_n are the roots of P in K.

One has deg(P) = deg(Q) + ∑_{i=1}^{n} m_i, hence ∑_{i=1}^{n} m_i ≤ deg(P). The number of roots of a polynomial, counted with their multiplicities, is at most its degree.

Let us give an application of this fact.

Proposition (1.3.20). — If K is a field, then every finite subgroup of the multiplicative group of K is cyclic.

Proof. — Let G be a finite subgroup of K^× and let n be its cardinality. For every prime number p that divides n, there exists x_p ∈ G such that (x_p)^{n/p} ≠ 1, because the polynomial T^{n/p} − 1 has at most n/p roots and n > n/p. Let m_p be the exponent of p in the decomposition of n as a product of prime factors; set a_p = x_p^{n/p^{m_p}}. One has (a_p)^{p^{m_p}} = 1 and (a_p)^{p^{m_p−1}} = x_p^{n/p} ≠ 1, so that the order of a_p in G is equal to p^{m_p}.

Set a = ∏_p a_p and let m be its order; let us prove that m = n. Otherwise, there exists a prime number ℓ that divides n/m. In particular, ℓ divides n, m divides n/ℓ and a^{n/ℓ} = 1. If p is a prime number dividing n which is distinct from ℓ, then the integer p^{m_p} divides n/ℓ, hence a_p^{n/ℓ} = 1. Consequently, a^{n/ℓ} = a_ℓ^{n/ℓ}. On the other hand, a_ℓ^{n/ℓ} ≠ 1 because the order of a_ℓ, equal to ℓ^{m_ℓ}, does not divide n/ℓ. We thus have a^{n/ℓ} ≠ 1, a contradiction.

The subgroup G_1 of G generated by a is cyclic, of order n, and its n elements are roots of the polynomial T^n − 1 in K, which has at most n roots. If g ∈ G, then g^n = 1 since n is the cardinality of G, so g is also a root of T^n − 1, hence g ∈ G_1, so that G = G_1. □
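As a concrete check (my illustration, not the book's), the multiplicative group of the field ℤ/pℤ is a finite subgroup of itself, so the proposition predicts a generator; a brute-force search confirms this for a few small primes.

```python
def multiplicative_order(x, p):
    """Order of x in (Z/pZ)^*, assuming p is prime and x is not divisible by p."""
    k, y = 1, x % p
    while y != 1:
        y = (y * x) % p
        k += 1
    return k

for p in [5, 7, 11, 13, 101]:
    # The proposition guarantees some element of order p - 1, i.e. a generator.
    gen = next(x for x in range(2, p) if multiplicative_order(x, p) == p - 1)
    print(f"(Z/{p}Z)^* is cyclic, generated for instance by {gen}")
```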

The following lemma characterizes the multiplicity of a root using

derivatives.


Lemma (1.3.21). — Let P ∈ K[X] be a non-constant polynomial and let a ∈ K be a root of P; let m be its multiplicity.

a) For every integer k ∈ {0, …, m − 1}, a is a root of P^(k) of multiplicity at least m − k.

b) If the characteristic of K is zero, or if it is strictly larger than m, then a is not a root of P^(m).

Proof. — Assume that a is a root of P with multiplicity at least m and let us write P = (X − a)^m Q, where Q ∈ K[X] is a polynomial. Taking formal derivatives, we obtain

P′ = m(X − a)^{m−1}Q + (X − a)^m Q′ = (X − a)^{m−1}(mQ + (X − a)Q′).

In particular, a is a root of P′ with multiplicity at least m − 1. By induction, we conclude that a is a root of P^(k) of multiplicity at least m − k, for every k ∈ {1, …, m − 1}.

If the characteristic of K is zero, or if it is a prime number that does not divide m, then the polynomial mQ + (X − a)Q′ takes the value mQ(a) at a. In this case, we see in particular that a is a root of multiplicity m − 1 of P′ if and only if a is a root of multiplicity m of P.

By induction, we also have P^(m)(a) = m! Q(a).

Assume that m is the multiplicity of a, so that Q(a) ≠ 0. If the characteristic of K is zero, or if it is strictly greater than m, then m! ≠ 0 in K, so that a is not a root of P^(m). □
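A quick sympy illustration of the lemma (my example, not the book's): for P = (X − 2)³(X + 1) over ℚ, the root 2 has multiplicity 3, so P, P′ and P″ vanish at 2 while P‴ does not.

```python
from sympy import symbols, expand, diff

X = symbols("X")
P = expand((X - 2)**3 * (X + 1))       # the root 2 has multiplicity 3

derivatives = [P]
for _ in range(3):
    derivatives.append(diff(derivatives[-1], X))

print([D.subs(X, 2) for D in derivatives])   # [0, 0, 0, 18]: the third derivative is nonzero at 2
```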

Finally, recall that a field K is said to be algebraically closed if every non-constant polynomial P ∈ K[T] has a root in K. By what precedes, this is equivalent to saying that every nonzero polynomial is a product of factors of degree 1; such a polynomial is said to be split. We will return to this topic in more detail in §4.3.

1.4. Ideals

Definition (1.4.1). — Let A be a ring and let I be a subgroup of A (for the addition). One says that I is a left ideal if for any a ∈ I and any b ∈ A, one has ba ∈ I. One says that I is a right ideal if for any a ∈ I and any b ∈ A, one has ab ∈ I. One says that I is a two-sided ideal if it is both a left and a right ideal.

In a commutative ring, it is equivalent for a subgroup to be a left ideal, a right ideal or a two-sided ideal; one then just says that it is an ideal. Let us observe that in any ring A, the subsets 0 and A are two-sided ideals. Moreover, an ideal I is equal to A if and only if it contains some unit, if and only if it contains 1.

For any a ∈ A, the set Aa consisting of all elements of A of the form xa, for x ∈ A, is a left ideal; the set aA consisting of all elements of the form ax, for x ∈ A, is a right ideal. When A is commutative, this ideal is denoted by (a).

To show that a subset I of A is a left ideal, it suffices to prove the following properties:

(i) 0 ∈ I;
(ii) for any a ∈ I and any b ∈ I, a + b ∈ I;
(iii) for any a ∈ I and any b ∈ A, ba ∈ I.

Indeed, since −1 ∈ A and (−1)a = −a for any a ∈ A, these properties imply that I is a subgroup of A; the third one then shows that it is a left ideal.

The similar characterization of right ideals is left to the reader.

Example (1.4.2). — If K is a division ring, the only left ideals (or right ideals) of K are (0) and K. Indeed, let I be a left ideal of K such that I ≠ 0, and let a be any nonzero element of I. Let b be any element of K. Since a ≠ 0, it is invertible in K and, by definition of a left ideal, b = (ba⁻¹)a ∈ I. This shows that I = K.

Example (1.4.3). — For any ideal I of Z, there exists a unique integer n ⩾ 0 such that I = (n). More precisely, if I ≠ (0), then n is the smallest strictly positive element of I.

If I = (0), then n = 0 is the only integer such that I = (n). Assume now that I ≠ (0). If I = (n), with n > 0, one observes that the strictly positive elements of I are {n, 2n, 3n, …}, and n is the smallest of them. This implies the uniqueness of such an integer n. So let n be the smallest strictly positive element of I (such an element exists: I contains a nonzero element, hence also its opposite, hence some strictly positive integer). Since n ∈ I, one has an ∈ I for any a ∈ Z, so that (n) ⊂ I. Conversely, let a be any element of I. Let a = qn + r be the euclidean division of a by n, with q ∈ Z and 0 ⩽ r ⩽ n − 1. Since a and qn belong to I, r = a − qn belongs to I. Since n is the smallest strictly positive element of I and 0 ⩽ r < n, we necessarily have r = 0 and a = qn ∈ (n). This shows that I = (n).

Example (1.4.4). — Let K be a field. For any nonzero ideal I of K[X], there exists a unique monic polynomial P ∈ K[X] such that I = (P); moreover, P is the unique monic polynomial of minimal degree which belongs to I.

The proof is analogous to that of the previous example. If I = (P), then the degree of any nonzero polynomial in I is at least deg(P), and P is the unique monic polynomial in I of minimal degree. This shows the uniqueness part of the assertion.

Let P be a monic polynomial in I of minimal degree. Any multiple of P belongs to I, so (P) ⊂ I. Conversely, let A ∈ I, and let A = PQ + R be the euclidean division of A by P, with deg(R) < deg(P). Since A and P belong to I, R = A − PQ ∈ I. If R ≠ 0, the quotient of R by its leading coefficient is a monic polynomial of degree < deg(P), which contradicts the definition of P. Consequently, R = 0 and A ∈ (P). Hence I = (P).
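To make this concrete (an illustration of mine, not from the text): if an ideal of ℚ[X] is given by finitely many generators, its monic generator of minimal degree is the monic greatest common divisor of those generators — this uses that ℚ[X] is a principal ideal domain, a point taken up again in §2.4 — and sympy can compute it.

```python
from sympy import symbols, gcd, Poly, QQ

X = symbols("X")
# Two generators of an ideal I of Q[X] (arbitrary example):
A = Poly(X**3 - X**2 - X + 1, X, domain=QQ)    # (X - 1)^2 (X + 1)
B = Poly(X**2 - 1, X, domain=QQ)               # (X - 1)(X + 1)

# The monic generator of I = (A, B) is the monic gcd of the generators.
P = gcd(A, B).monic()
print(P)    # Poly(X**2 - 1, X, domain='QQ')
```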

There are various useful operations on ideals.

1.4.5. Intersection. — Let I and J be left ideals of A; their intersection I ∩ J is again a left ideal. More generally, the intersection of any family of left ideals of A is a left ideal of A.

Proof. — Let (I_s)_{s∈S} be a family of left ideals of A and let I = ⋂_s I_s. (If S = ∅, then I = A.) The intersection of any family of subgroups being a subgroup, I is a subgroup of A. Let now x ∈ I and a ∈ A and let us show that ax ∈ I. For any s ∈ S, x ∈ I_s, hence ax ∈ I_s since I_s is a left ideal. It follows that ax belongs to every ideal I_s, hence ax ∈ I. □

We leave it to the reader to state and prove the analogous statements for right and two-sided ideals.

1.4.6. Ideal generated by a subset. — Let S be any subset of A; there exists a smallest left ideal of A containing S, and it is called the left ideal generated by S. Indeed, this ideal is nothing but the intersection of the family of all left ideals containing S. Equivalently, the left ideal generated by S contains S, and is contained in any left ideal which contains S. Moreover, it is equal to the set of all linear combinations ∑_{i=1}^{n} a_i s_i, for n ∈ N, a_1, …, a_n ∈ A and s_1, …, s_n ∈ S. Indexing these elements by the elements of S rather than by a set of integers, let us prove that this ideal is the set of all linear combinations ∑_{s∈S} a_s s, where (a_s) ∈ A^(S) ranges over all almost null families of elements of A.

Proof. — By the preceding proposition, the intersection of the family of all left ideals containing S is a left ideal of A, hence is the smallest left ideal of A that contains S. Let I be the set of all linear combinations ∑_{s∈S} a_s s, for (a_s) ∈ A^(S). We see by induction on the number of nonzero terms in such a sum that ∑_{s∈S} a_s s belongs to any left ideal of A which contains S, so belongs to the left ideal generated by S. This proves that I is contained in the left ideal generated by S. To conclude, it suffices to show that I is a left ideal containing S. Taking a family (a_s) all of whose members are zero except for one of them equal to 1, we see that S ⊂ I. We then check the axioms for a left ideal: we have 0 = ∑_{s∈S} 0·s ∈ I; if ∑ a_s s and ∑ b_s s belong to I, then almost all of the terms of the family (a_s + b_s)_{s∈S} are equal to 0 and ∑ (a_s + b_s)s = ∑ a_s s + ∑ b_s s; moreover, if a = ∑ a_s s ∈ I and b ∈ A, then ba = ∑ (ba_s)s belongs to I. □

Similar arguments show that there exists a smallest right ideal (resp. a smallest two-sided ideal) of A containing S; it is the intersection of the family of all right ideals (resp. of all two-sided ideals) of A which contain S. They are respectively equal to the set of all linear combinations ∑_{s∈S} s a_s, where (a_s) runs along A^(S), and to the set of all finite sums ∑_{i=1}^{n} a_i s_i b_i, where n ∈ N, s_1, …, s_n ∈ S and a_1, …, a_n, b_1, …, b_n ∈ A.
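A toy numerical illustration (mine, not the book's): in the commutative ring ℤ, the ideal generated by S = {4, 6} is the set of linear combinations 4a + 6b, and the enumeration below confirms, within a bounded window, that these are exactly the multiples of 2, i.e. that (4, 6) = (2).

```python
# All linear combinations 4a + 6b with small coefficients
combos = {4 * a + 6 * b for a in range(-20, 21) for b in range(-20, 21)}

window = set(range(-50, 51))
multiples_of_2 = {k for k in window if k % 2 == 0}

# Within the window, the combinations are exactly the even integers
assert combos & window == multiples_of_2
print(sorted(combos & window)[:10])
```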

Definition (1.4.7). — The kernel Ker(f) of a ring morphism f : A → B is the set of all elements a ∈ A such that f(a) = 0.

Since a ring morphism is in particular a morphism of the underlying additive groups, a ring morphism f is injective if and only if Ker(f) = 0.

Proposition (1.4.8). — The kernel of a ring morphism is a two-sided ideal.

Proof. — Let f : A → B be a ring morphism. Since a morphism of rings is a morphism of abelian groups, Ker(f) is a subgroup of A. Moreover, if x ∈ Ker(f) and a ∈ A, then f(ax) = f(a)f(x) = f(a)·0 = 0, so that ax ∈ Ker(f). Similarly, if x ∈ Ker(f) and a ∈ A, then f(xa) = f(x)f(a) = 0, so that xa ∈ Ker(f). This shows that Ker(f) is a two-sided ideal of A. □

1.4.9. Image and inverse image of ideals. — Let f : A → B be a morphism of rings and let J be a left ideal of B; the inverse image of J by f,

f⁻¹(J) = {a ∈ A ; f(a) ∈ J},

is a left ideal of A.

Proof. — Since f(0) = 0 ∈ J, we have 0 ∈ f⁻¹(J). For any a and b ∈ f⁻¹(J), f(a + b) = f(a) + f(b) ∈ J, since f(a) and f(b) ∈ J and J is an ideal of B. Finally, for any a ∈ A and b ∈ f⁻¹(J), we have f(b) ∈ J, hence f(ab) = f(a)f(b) ∈ J. □

Similarly, the inverse image of a right ideal (resp. of a two-sided ideal) by a morphism of rings is a right ideal (resp. a two-sided ideal).

However, the image of a left ideal by a ring morphism is not necessarily a left ideal. Let f : A → B be a morphism of rings and let I be a left ideal of A. We shall write Bf(I), or even BI, for the left ideal of B generated by f(I).

1.4.10. Sum of ideals. — Let I and J be left ideals of a ring A. The set I + J of all sums a + b, for a ∈ I and b ∈ J, is a left ideal of A. It is also the left ideal of A generated by the subset I ∪ J. More generally, for any family (I_s)_{s∈S} of left ideals of A, the set of all sums ∑_s a_s of almost null families (a_s)_{s∈S}, where a_s ∈ I_s for every s, is a left ideal of A, denoted by ∑_s I_s. It is also the left ideal of A generated by ⋃_s I_s.

The similar assertions for right and two-sided ideals hold.

Proof. — Let us prove the result for left ideals. Since 0 = ∑_s 0 and 0 ∈ I_s for every s, one has 0 ∈ ∑_s I_s. Then, if a = ∑_s a_s and b = ∑_s b_s are any two elements of ∑_s I_s, then a + b = ∑_s (a_s + b_s), where, for every s ∈ S, a_s + b_s ∈ I_s, almost all terms of this sum being null; hence a + b ∈ ∑_s I_s. Finally, if a = ∑_s a_s belongs to ∑_s I_s and b ∈ A, then ba = ∑_s (ba_s). For every s, ba_s ∈ I_s, so that ba ∈ ∑_s I_s. We have shown that ∑_s I_s is a left ideal of A.

To prove that ∑_s I_s is the left ideal of A generated by the subset ⋃_s I_s, we must establish two inclusions. Let I denote the latter ideal. First of all, for any t ∈ S and any a ∈ I_t, we have a = ∑_s a_s, where a_s = 0 for s ≠ t and a_t = a. This shows that a ∈ ∑_s I_s, so the ideal ∑_s I_s contains I_t. By definition of the left ideal I, we thus have I ⊂ ∑_s I_s. On the other hand, let a = ∑_s a_s be any element of ∑_s I_s, where a_s ∈ I_s for all s. All terms of this sum belong to I, by definition of I. By definition of a left ideal, this implies a ∈ I, hence I ⊃ ∑_s I_s. □

1.4.11. Product of two-sided ideals. — Let A be a ring and let I, J be two-sided ideals of A. The set of all products ab, for a ∈ I and b ∈ J, is not necessarily a two-sided ideal of A. We define IJ as the two-sided ideal generated by these products.

Let K be the set of all linear combinations ∑_{i=1}^{n} a_i b_i, with n ∈ N, a_i ∈ I and b_i ∈ J for every i. It is contained in IJ. Let us show that K is a two-sided ideal of A. The relation 0 = 0·0 shows that 0 ∈ K. If c = ∑_{i=1}^{n} a_i b_i and c′ = ∑_{i=1}^{n′} a′_i b′_i are elements of K, then c + c′ = ∑_{i=1}^{n+n′} a_i b_i, where, for i ∈ {n + 1, …, n + n′}, we have set a_i = a′_{i−n} and b_i = b′_{i−n}; hence c + c′ ∈ K. Let moreover c = ∑_{i=1}^{n} a_i b_i ∈ K and let a ∈ A. One has

ac = a(∑ a_i b_i) = ∑ (aa_i)b_i;

because I is a left ideal, one has aa_i ∈ I for every i, hence ac ∈ K. Similarly, the equalities

ca = (∑ a_i b_i)a = ∑ a_i(b_i a)

show that ca ∈ K since, J being a right ideal, b_i a ∈ J for every i. Since K is a two-sided ideal that contains all products ab, with a ∈ I and b ∈ J, we have IJ ⊂ K and, finally, IJ = K.

Since I and J are two-sided ideals, for any a ∈ I and any b ∈ J, the product ab belongs to I and to J. It follows that IJ ⊂ I ∩ J.

1.4.12. Nilradical, radical of an ideal. — Let A be a commutative ring. The nilradical of A is the set of its nilpotent elements. It is an ideal of A. One says that the ring A is reduced if its nilradical is (0), that is, if 0 is its only nilpotent element.

More generally, the radical of an ideal I of the commutative ring A is defined by the formula

√I = {a ∈ A ; there exists n ⩾ 1 such that aⁿ ∈ I}.

It is an ideal of A which contains I. By definition, the nilradical of A is the radical √(0) of the null ideal (0).

Proof. — Since a¹ = a ∈ I for any a ∈ I, one has I ⊂ √I, and in particular 0 ∈ √I. For a ∈ √I and b ∈ √I, let us choose integers n and m ⩾ 1 such that aⁿ ∈ I and bᵐ ∈ I. By the binomial formula,

(a + b)^{n+m−1} = ∑_{k=0}^{n+m−1} \binom{n+m−1}{k} a^k b^{n+m−1−k}.

In this sum, all terms belong to I: this holds for those corresponding to k ⩾ n, since then a^k = aⁿ a^{k−n} and aⁿ ∈ I; similarly, if k < n, then n + m − 1 − k ⩾ m, hence b^{n+m−1−k} = bᵐ b^{n−1−k} belongs to I. Therefore, (a + b)^{n+m−1} ∈ I, and a + b ∈ √I. Finally, for any a ∈ √I and b ∈ A, let us choose n ⩾ 1 such that aⁿ ∈ I. Then (ba)ⁿ = bⁿaⁿ ∈ I, hence ba ∈ √I. □
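A concrete case (my illustration, not from the text): in ℤ, the radical of the ideal (n) is generated by the product of the distinct prime divisors of n; for instance √(72) = (6). The sketch below checks this on a small range, using sympy only to factor 72.

```python
from sympy import factorint

def in_radical(a, n, max_exp=60):
    """Test whether a belongs to the radical of (n) in Z, i.e. whether some power of a lies in (n)."""
    power = 1
    for _ in range(max_exp):
        power *= a
        if power % n == 0:
            return True
    return False

n = 72                                   # 72 = 2^3 * 3^2
rad_gen = 1
for p in factorint(n):                   # product of the distinct prime divisors: 6
    rad_gen *= p

for a in range(1, 200):
    assert in_radical(a, n) == (a % rad_gen == 0)
print("radical of (72) is (6)")
```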

1.5. Quotient rings

Given a ring and an adequate equivalence relation on that ring, the goal of this section is to endow the set of equivalence classes with the structure of a ring. This will allow us to formally make all elements of an ideal equal to zero without modifying the other rules for computation.

1.5.1. Construction. — Let ℛ be a binary relation on a set X. Recall that one says that ℛ is an equivalence relation if it is reflexive (for any x, x ℛ x), symmetric (if x ℛ y, then y ℛ x) and transitive (if x ℛ y and y ℛ z, then x ℛ z). The set of all equivalence classes of X for the relation ℛ is denoted by X/ℛ; the map cl_ℛ : X → X/ℛ (such that, for every x ∈ X, cl_ℛ(x) is the equivalence class of x) is a surjection; its important property is that two elements have the same image if and only if they are in relation by ℛ.

Let A be a ring. Let us search for all equivalence relations on A which are compatible with the ring structure, namely such that

if x ℛ y and x′ ℛ y′, then x + x′ ℛ y + y′ and xx′ ℛ yy′.

Let I be the equivalence class of 0. If x ℛ y, since (−y) ℛ (−y), one gets x − y ℛ 0, hence x − y ∈ I, and conversely. Therefore, the relation ℛ can be recovered from I by the property: x ℛ y if and only if x − y ∈ I.

On the other hand, let us show that I is a two-sided ideal of A. Of course, one has 0 ∈ I. Moreover, for any x ∈ I and y ∈ I, one has x ℛ 0 and y ℛ 0, so that (x + y) ℛ 0, which shows that x + y ∈ I. Finally, for x ∈ I and a ∈ A, one has x ℛ 0, hence ax ℛ a·0 and xa ℛ 0·a; since a·0 = 0·a = 0, we get that ax ∈ I and xa ∈ I.

In the opposite direction, the preceding computations show that the following theorem holds.

Theorem (1.5.2). — Let A be a ring and let I be a two-sided ideal of A. The binary relation ℛ_I on A given by x ℛ_I y if and only if x − y ∈ I is an equivalence relation on A which is compatible with its ring structure. There exists a unique structure of a ring on the quotient set A/ℛ_I for which the canonical surjection cl_{ℛ_I} : A → A/ℛ_I is a morphism of rings. This morphism is surjective and its kernel is equal to I.

The quotient ring A/ℛ_I is rather denoted by A/I; the morphism cl_{ℛ_I} is called the canonical surjection from A to A/I and is often denoted by cl_I.

Let a be any element of the center of A; observe that cl_I(a) belongs to the center of A/I. Indeed, let x ∈ A/I; there exists b ∈ A such that x = cl(b); then cl(a)x = cl(a)cl(b) = cl(ab) = cl(ba) since a is central, hence cl(a)x = cl(b)cl(a) = x cl(a). Consequently, for any commutative ring k and any ring morphism i : k → A whose image is contained in the center of A, so that (A, i) is a k-algebra, the composition cl_I ∘ i : k → A → A/I endows A/I with a structure of a k-algebra (in fact, the unique one!) for which the canonical surjection cl_I is a morphism of k-algebras.

Quotient rings are most often used through their universal property, embodied in the following factorization theorem.

Theorem (1.5.3). — Let A and B be rings and let f : A → B be a morphism of rings. For any two-sided ideal I of A which is contained in Ker(f), there exists a unique ring morphism φ : A/I → B such that f = φ ∘ cl_I.

The morphism φ is surjective if and only if f is surjective; the morphism φ is injective if and only if Ker(f) = I.

It is useful to understand this last equality with the help of the following diagram:

          f
     A ────────→ B
      ╲         ↗
  cl_I ╲      ╱ φ
        ↘   ╱
         A/I

in which the two paths that go from A to B (either the arrow f, or the composition φ ∘ cl_I of the two arrows φ and cl_I) coincide. For that reason, one says that this diagram is commutative.

In the context of theorem 1.5.3, the morphisms f and cl_I are given, while the theorem asserts the existence and uniqueness of the morphism φ that completes the diagram and makes it commutative.

Proof. — Necessarily, φ has to satisfy φ(cl_I(a)) = f(a) for every a ∈ A. Since every element of A/I is of the form cl_I(a), for some a ∈ A, this shows that there exists at most one ring morphism φ : A/I → B such that f = φ ∘ cl_I.

Let us now show its existence. Let x ∈ A/I and let a ∈ A be such that x = cl_I(a). Let a′ be any other element of A such that x = cl_I(a′); by definition, a′ − a ∈ I, hence f(a′ − a) = 0 since I ⊂ Ker(f); this shows that f(a) = f(a′). Thus one can set φ(x) = f(a): the result is independent of the chosen element a such that x = cl_I(a). It remains to show that the map φ just defined is a morphism of rings.

Since cl_I(0_A) = 0_{A/I} and cl_I(1_A) = 1_{A/I}, we have φ(0_{A/I}) = 0_B and φ(1_{A/I}) = 1_B. Moreover, let x and y be elements of A/I, and let a and b ∈ A be such that x = cl_I(a) and y = cl_I(b). One has x + y = cl_I(a + b) and

φ(x + y) = φ(cl_I(a + b)) = f(a + b) = f(a) + f(b) = φ(cl_I(a)) + φ(cl_I(b)) = φ(x) + φ(y).

Similarly,

φ(xy) = f(ab) = f(a)f(b) = φ(x)φ(y).

Therefore, φ is a morphism of rings, as claimed, and this completes the proof of the existence and uniqueness of the morphism φ.

By the surjectivity of cl_I, the relation f = φ ∘ cl_I implies that f is surjective if and only if φ is surjective.

Let us assume that φ is injective. Let a ∈ Ker(f); one has 0 = f(a) = φ(cl_I(a)). Consequently, cl_I(a) ∈ Ker(φ), hence cl_I(a) = 0 since φ is injective; this proves that a ∈ I, hence Ker(f) ⊂ I. On the other hand, I ⊂ Ker(f) by assumption, hence I = Ker(f).

Finally, let us assume that I = Ker(f). Let x ∈ A/I be such that φ(x) = 0; let a ∈ A be such that x = cl_I(a); then f(a) = φ(x) = 0, hence a ∈ I and x = 0. This proves that φ is injective. □

Let f : A → B be a morphism of rings. We saw (page 8) that f(A) is a subring of B. Consequently, the factorization theorem allows us to write the morphism f as

A ──cl_{Ker(f)}──→ A/Ker(f) ──φ──→ f(A) ↪ B,

that is, as the composition of a surjective morphism, an isomorphism, and an injective morphism of rings.

Let A be a ring and let I be a two-sided ideal of A. We want to describe the ideals of the quotient ring A/I. So let 𝒥 be a left ideal of A/I. We know that cl⁻¹(𝒥) is a left ideal of A. By construction, it contains the ideal I, for cl(a) = 0 belongs to 𝒥 for any a ∈ I. The important property is the following proposition.

Proposition (1.5.4). — Let A be a ring and let I be a two-sided ideal of A. The map

cl⁻¹ : {left ideals of A/I} → {left ideals of A containing I}, 𝒥 ↦ cl⁻¹(𝒥),

is a bijection. The same result holds for right ideals and for two-sided ideals.

In other words, for every left ideal J of A containing I, there exists a unique left ideal 𝒥 of A/I such that J = cl⁻¹(𝒥). Moreover, 𝒥 = cl(J) (so that, in this case, the image of the ideal J by the canonical surjection cl still is a left ideal).

Proof. — Let us first give the inverse map. Let J be a left ideal of A and let us show that cl(J) is a left ideal of A/I. Obviously, 0 = cl(0) ∈ cl(J). Moreover, let x and y belong to cl(J), and let a and b be elements of J such that x = cl(a) and y = cl(b). Then x + y = cl(a) + cl(b) = cl(a + b); since J is a left ideal of A, a + b belongs to J and x + y is an element of cl(J). Finally, let x be an element of cl(J) and let y be an element of A/I. Choose again a ∈ J and b ∈ A such that x = cl(a) and y = cl(b). Then yx = cl(b)cl(a) = cl(ba) ∈ cl(J) since, J being a left ideal of A, ba ∈ J.

If 𝒥 is a left ideal of A/I, I claim that

cl(cl⁻¹(𝒥)) = 𝒥.

We show the two inclusions separately. Any element x of cl(cl⁻¹(𝒥)) is of the form x = cl(a) for some a ∈ cl⁻¹(𝒥), hence x ∈ 𝒥. Conversely, let x ∈ 𝒥 and let a ∈ A be such that x = cl(a); then cl(a) = x ∈ 𝒥, hence a belongs to cl⁻¹(𝒥) and x is an element of cl(cl⁻¹(𝒥)), as claimed.

Let us show that for any left ideal J of A,

cl⁻¹(cl(J)) = I + J.

Again, we show the two inclusions. Let x ∈ I + J and let us write x = a + b with a ∈ I and b ∈ J. Consequently, cl(x) = cl(a) + cl(b) = cl(b) ∈ cl(J), hence x ∈ cl⁻¹(cl(J)). In the other direction, let x ∈ cl⁻¹(cl(J)); by definition, cl(x) ∈ cl(J) and there exists a ∈ J such that cl(x) = cl(a). We get cl(x − a) = 0, which means that x − a ∈ I. Finally, x = (x − a) + a belongs to I + J, as was to be shown.

If, moreover, J contains I, then I + J = J and the two formulae displayed above show that the map cl⁻¹ induces a bijection from the set of left ideals of A/I onto the set of left ideals of A which contain I, whose inverse map is given by cl. □
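A finite example of this correspondence (mine, not the book's): the ideals of ℤ/12ℤ should correspond to the ideals (d) of ℤ containing (12), that is, to the divisors d of 12. The brute-force enumeration below finds every subset of ℤ/12ℤ that is an ideal and matches it with a divisor of 12.

```python
from itertools import chain, combinations

n = 12
elements = list(range(n))

def is_ideal(s):
    """Check the ideal axioms for a subset of the commutative ring Z/nZ."""
    return (0 in s
            and all((a + b) % n in s for a in s for b in s)
            and all((c * a) % n in s for a in s for c in elements))

subsets = chain.from_iterable(combinations(elements, k) for k in range(n + 1))
ideals = [set(s) for s in subsets if is_ideal(set(s))]

divisors = [d for d in range(1, n + 1) if n % d == 0]
assert len(ideals) == len(divisors) == 6
for d in divisors:
    assert {(d * k) % n for k in elements} in ideals   # the ideal generated by cl(d)
print("the ideals of Z/12Z are exactly the (cl(d)) for d dividing 12")
```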

When J is a left ideal of A which contains I, the ideal cl(J) of A/I is also denoted by J/I. This notation is particularly useful when the map cl is omitted from the notation. In particular, when we write “let J/I be an ideal of A/I…”, we will always mean that J is an ideal of A containing I.

Proposition (1.5.5). — Let A be a ring, let I be a two-sided ideal of A and let J be a two-sided ideal of A containing I. The composition of the canonical surjections A → A/I → (A/I)/(J/I) has kernel J. This gives a canonical isomorphism

A/J ≃ (A/I)/(J/I).

In short, a quotient of a quotient is again a quotient.

Proof. — The composition of two surjective morphisms is still a surjective morphism, hence the morphism A → (A/I)/(J/I) is surjective. An element a ∈ A belongs to its kernel if and only if the element cl_I(a) ∈ A/I belongs to the kernel of the morphism cl_{J/I} : A/I → (A/I)/(J/I), which means cl_I(a) ∈ J/I. Since J/I = cl_I(J) and J contains I, this is equivalent to the condition a ∈ cl_I⁻¹(cl_I(J)) = J.

The factorization theorem (theorem 1.5.3) then asserts the existence of a unique morphism φ : A/J → (A/I)/(J/I) which makes the diagram

     A ────→ A/I ────→ (A/I)/(J/I)
      ╲                    ↗
       ╲                 ╱ φ
        ↘              ╱
          A/J ────────╯

commutative. This map φ is surjective. Let us show that it is injective. Let x ∈ A/J be such that φ(x) = 0. Let a ∈ A be such that x = cl_J(a). By definition, φ(x) = cl_{J/I} ∘ cl_I(a) = 0, that is, a ∈ J. Consequently, x = 0 and the map φ is injective. This proves that φ is an isomorphism. □

The last part of this proof can be generalized and gives an important complement to the factorization theorem.

Proposition (1.5.6). — Let f : A → B be a morphism of rings and let I be a two-sided ideal of A contained in Ker(f). Let φ : A/I → B be the morphism given by the factorization theorem. Then the kernel of φ is given by Ker(f)/I.

Proof. — Indeed, let x ∈ A/I be such that φ(x) = 0 and let a ∈ A be any element with x = cl_I(a). Then f(a) = 0, hence a ∈ Ker(f) and x = cl_I(a) ∈ cl_I(Ker(f)) = Ker(f)/I. Conversely, if x ∈ Ker(f)/I, there exists a ∈ Ker(f) such that x = cl_I(a). It follows that φ(x) = f(a) = 0 and x ∈ Ker(φ). □

1.5.7. Chinese remainder theorem. — We say that two two-sided ideals I and J of a ring A are comaximal if I + J = A. This notion gives rise to the general formulation of the Chinese remainder theorem.

Theorem (1.5.8). — Let A be a ring, let I and J be two-sided ideals of A. Let us assume that I and J are comaximal. Then the canonical morphism A → (A/I) × (A/J) given by a ↦ (cl_I(a), cl_J(a)) is surjective; its kernel is the two-sided ideal I ∩ J of A. Passing to the quotient, we thus have an isomorphism

A/(I ∩ J) ≃ A/I × A/J.

Corollary (1.5.9). — Let I and J be comaximal two-sided ideals of a ring A. For any pair (x, y) of elements of A, there exists a ∈ A such that a ∈ x + I and a ∈ y + J.

Proof. — In the diagram of rings

               (cl_I, cl_J)
         A ────────────────→ A/I × A/J
         │                      ↗
 cl_{I∩J}│                    ╱ φ
         ↓                  ╱
     A/(I ∩ J) ─ ─ ─ ─ ─ ─╯

we have to show the existence of a unique morphism φ, drawn as a dashed arrow, so that this diagram is commutative, and that φ is an isomorphism. But the morphism A → A/I × A/J maps a ∈ A to (cl_I(a), cl_J(a)). Its kernel is thus I ∩ J. By the factorization theorem, there exists a unique morphism φ that makes the diagram commutative, and this morphism is injective. For any a ∈ A, one has φ(cl_{I∩J}(a)) = (cl_I(a), cl_J(a)).

It remains to show that φ is surjective. Since I + J = A, there are x ∈ I and y ∈ J such that x + y = 1. Then we have 1 = cl_I(x + y) = cl_I(y) in A/I and 1 = cl_J(x + y) = cl_J(x) in A/J. Let α = cl_{I∩J}(x) and β = cl_{I∩J}(y). For any a, b ∈ A, we get

φ(bα + aβ) = (cl_I(bx + ay), cl_J(bx + ay)) = (cl_I(a), cl_J(b)).

Since any element of (A/I) × (A/J) is of the form (cl_I(a), cl_J(b)), the morphism φ is surjective. □

Remark (1.5.10). — Let I and J be ideals of a commutative ring A such that I + J = A; then I ∩ J = IJ.

We had already noticed the inclusion IJ ⊂ I ∩ J, which does not require the ring A to be commutative, nor the ideals I and J to be comaximal. Conversely, let a ∈ I ∩ J. Since I + J = A, there are x ∈ I and y ∈ J such that x + y = 1. Then a = (x + y)a = xa + ya. Since x ∈ I and a ∈ J, xa ∈ IJ; since ya = ay, y ∈ J, and a ∈ I, we get ya = ay ∈ IJ. Finally, a ∈ IJ.
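For A = ℤ, I = (m) and J = (n) with m and n coprime, the ideals are comaximal (a Bézout relation gives 1 ∈ I + J) and the surjectivity statement above is the classical Chinese remainder theorem. The sketch below (my illustration) reconstructs an element from its two residues exactly as in the proof, from a relation x + y = 1 with x ∈ (m) and y ∈ (n).

```python
from math import gcd

def crt_pair(a, m, b, n):
    """Find c with c = a (mod m) and c = b (mod n), for coprime m and n,
    following the proof: write x + y = 1 with x in (m), y in (n), and take c = b*x + a*y."""
    assert gcd(m, n) == 1
    y = n * pow(n, -1, m)      # y is a multiple of n and y = 1 (mod m)
    x = 1 - y                  # x is then a multiple of m
    return (b * x + a * y) % (m * n)

c = crt_pair(3, 7, 4, 12)      # c = 3 (mod 7) and c = 4 (mod 12)
print(c, c % 7, c % 12)        # 52 3 4
```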

1.6. Fraction rings of commutative rings

In the preceding section devoted to quotients, we somewhat “forced”

some elements of a ring to vanish. We now want to perform a quite

opposite operation: making invertible all the elements of an adequate

subset. In all of this section, we restrict to the case of commutative rings (for a

preview of the general situation, see remark 1.6.8).

Definition (1.6.1). — Let A be a commutative ring. A subset S of A is said to be multiplicative if the following properties hold:

(i) 1 ∈ S;
(ii) for any a and b in S, ab ∈ S.

Given a commutative ring A and a multiplicative subset S of A, our goal is to construct a ring S⁻¹A together with a morphism i : A → S⁻¹A such that i(S) consists of invertible elements of S⁻¹A, and which is “universal” for this property.

Let us first give a few examples.

Examples (1.6.2). — a) Let A = Z and S = Z ∖ {0}; the ring S⁻¹A will be equal to Q and i : Z → Q will be the usual injection. More generally, if A is a domain, then S = A ∖ {0} is a multiplicative subset of A and the ring S⁻¹A will be nothing else but the field of fractions of A.

b) If all elements of S are invertible in A, then S⁻¹A = A.

c) Let A = Z and let S = {1, 10, 100, …} be the set of all powers of 10 in Z. Then S⁻¹A is the set of decimal numbers, those rational numbers which can be written as a/10ⁿ, for some a ∈ Z and some positive integer n.

As these examples show, we are simply going to mimic the middle school calculus of fractions.

1.6.3. Construction. — On the set A × S, let us define an equivalence relation ∼ by:

(a, s) ∼ (b, t) if and only if there exists u ∈ S such that (at − bs)u = 0.

It is indeed an equivalence relation:

– For any (a, s) ∈ A × S, (a, s) ∼ (a, s), since 1 ∈ S and (as − as)·1 = 0: the relation is reflexive;

– If (a, s) ∼ (b, t), let us choose u ∈ S such that (at − bs)u = 0. Then (bs − at)u = 0, hence (b, t) ∼ (a, s): the relation is symmetric;

– Finally, if (a, s) ∼ (b, t) and (b, t) ∼ (c, u), let v and w ∈ S satisfy (at − bs)v = (bu − ct)w = 0. Since

(au − cs)t = (at − bs)u + (bu − ct)s,

we have (au − cs)vwt = 0 (here we use that the ring A is commutative). Since v, w and t belong to S and S is a multiplicative subset, one has vwt ∈ S, hence (a, s) ∼ (c, u): the relation is transitive.

We write S⁻¹A for the set of equivalence classes; the class of a pair (a, s) is denoted by a/s; let also i : A → S⁻¹A be the map that sends a ∈ A to the class a/1. The set A × S is not a ring, in any reasonable way. However, we are going to endow the quotient set S⁻¹A with the structure of a ring in such a way that the map i is a morphism of rings.

The definition comes from the well known formulae for the sum or product of fractions: we define

(a/s) + (b/t) = (at + bs)/st, (a/s)·(b/t) = ab/st.

Let us first check that these formulae make sense: if (a, s) ∼ (a′, s′) and (b, t) ∼ (b′, t′), we have to show that

(at + bs, st) ∼ (a′t′ + b′s′, s′t′) and (ab, st) ∼ (a′b′, s′t′).

Observe that

(at + bs)s′t′ − (a′t′ + b′s′)st = (as′ − a′s)tt′ + (bt′ − b′t)ss′.

Let us choose u, v ∈ S such that (as′ − a′s)u = (bt′ − b′t)v = 0; we get

((at + bs)s′t′ − (a′t′ + b′s′)st)uv = (as′ − a′s)u(tt′v) + (bt′ − b′t)v(uss′) = 0,

hence (at + bs, st) ∼ (a′t′ + b′s′, s′t′). Similarly,

(abs′t′ − a′b′st)uv = (as′ − a′s)u(bt′v) + (bt′ − b′t)v(a′su) = 0,

so that (ab, st) ∼ (a′b′, s′t′).

Checking that these laws give S⁻¹A the structure of a ring is a bit long, but without surprise, and we shall not do it here. For example, the distributivity of multiplication over addition can be proven in the following way: let a/s, b/t and c/u be elements of S⁻¹A; then

(a/s)(b/t + c/u) = a(bu + ct)/stu = abu/stu + act/stu = ab/st + ac/su = (a/s)(b/t) + (a/s)(c/u).

The unit element of S⁻¹A is 1/1, the zero element is 0/1.

The map i : A → S⁻¹A given, for any a ∈ A, by i(a) = a/1 is a morphism of rings. Indeed, i(0) = 0/1 = 0, i(1) = 1/1 = 1, and, for any a and b ∈ A,

i(a + b) = (a + b)/1 = a/1 + b/1 = i(a) + i(b)

and

i(ab) = (ab)/1 = (a/1)(b/1) = i(a)i(b).

Finally, for s ∈ S, i(s) = s/1 and i(s)(1/s) = s/s = 1, so that i(s) is invertible in S⁻¹A for any s ∈ S.

Remarks (1.6.4). — a) A fraction a/s is zero if and only if (a, s) ∼ (0, 1). By the definition of the relation ∼, this holds if and only if there exists t ∈ S such that ta = 0. In particular, the canonical morphism i : A → S⁻¹A is injective if and only if every element of S is regular.

b) In particular, a ring of fractions S⁻¹A is null if and only if 0 belongs to S. Indeed, S⁻¹A = 0 means that 1/1 = 1 = 0, hence that there exists t ∈ S such that t·1 = t = 0, in other words, that 0 ∈ S. Informally, if you want to divide by 0 and still pretend that all classical rules of algebra apply, then everything becomes 0 (and algebra is not very interesting anymore).

A posteriori, this explains the middle school rule that thou shalt never divide by zero! If you did, the rules of calculus of fractions (imposed by algebra) would make every fraction equal to 0!

c) The definition of the equivalence relation used in the construction of the ring of fractions is at first surprising. It is in any case more complicated than the middle school rule that would claim that a/s = b/t if and only if at = bs. If the ring A is a domain and 0 ∉ S, and more generally when no element of S is a zero divisor, this simpler definition is sufficient. However, in the general case, the simpler definition does not give rise to an equivalence relation.

The importance of this construction is embodied in its universal property,

another factorization theorem.

Theorem (1.6.5). — Let A be a commutative ring and let S be a multiplicative subset of A. Let i : A → S⁻¹A be the ring morphism that has been constructed above. Then, for any (possibly not commutative) ring B and any morphism f : A → B such that f(S) ⊂ B×, there exists a unique ring morphism φ : S⁻¹A → B such that f = φ ∘ i.

One can sum up this last formula by saying that the diagram

          f
     A ────────→ B
      ╲         ↗
     i ╲      ╱ φ
        ↘   ╱
        S⁻¹A

is commutative.

Proof. — If such a morphism φ exists, it has to satisfy

φ(a/s)f(s) = φ(a/s)φ(i(s)) = φ(a/s)φ(s/1) = φ(a/1) = φ(i(a)) = f(a),

hence

φ(a/s) = f(a)f(s)⁻¹,

where f(s)⁻¹ is the inverse of f(s) in B. Similarly, it also has to satisfy

f(s)φ(a/s) = φ(i(s))φ(a/s) = φ(s/1)φ(a/s) = φ(a/1) = f(a),

hence φ(a/s) = f(s)⁻¹f(a). This shows that there can exist at most one such morphism φ.

To establish its existence, it now suffices to check that these two formulae are compatible and define a ring morphism φ : S⁻¹A → B such that φ ∘ i = f.

Let us check that the formula is well posed. So let a, b ∈ A and s, t ∈ S be such that a/s = b/t, and let u ∈ S be such that (at − bs)u = 0. Then f(at)f(u) = f(bs)f(u), hence f(at) = f(bs) since f(u) is invertible in B. Since A is commutative, it follows that f(t)f(a) = f(a)f(t) = f(b)f(s), hence f(a)f(s)⁻¹ = f(t)⁻¹f(b). Applying this equality to (b, t) = (a, s), we also obtain the relation f(a)f(s)⁻¹ = f(s)⁻¹f(a); it then follows that f(a)f(s)⁻¹ = f(t)⁻¹f(b) = f(b)f(t)⁻¹, so the formula does not depend on the chosen representative.

This proves that there is a map φ : S⁻¹A → B such that φ(a/s) = f(a)f(s)⁻¹ = f(s)⁻¹f(a) for any a ∈ A and any s ∈ S.

By construction, for any a ∈ A, one has φ ∘ i(a) = φ(a/1) = f(a)f(1)⁻¹ = f(a), hence φ ∘ i = f.

Let us now verify that φ is a ring morphism. We have

φ(0) = φ(0/1) = f(1)⁻¹f(0) = 0 and φ(1) = φ(1/1) = f(1)⁻¹f(1) = 1.

Then,

φ(a/s) + φ(b/t) = f(a)f(s)⁻¹ + f(b)f(t)⁻¹ = f(st)⁻¹(f(at) + f(bs)) = f(st)⁻¹f(at + bs) = φ((at + bs)/st) = φ((a/s) + (b/t)).

Finally,

φ(a/s)φ(b/t) = f(s)⁻¹f(a)f(t)⁻¹f(b) = f(st)⁻¹f(ab) = φ(ab/st) = φ((a/s)(b/t)).

The map φ is thus a ring morphism. That concludes the proof of the theorem. □

It is also possible to construct the fraction ring as a quotient. I content myself with the particular case of a multiplicative subset consisting of the powers of a single element; the generalization to any multiplicative subset is left as an exercise (exercise 1/48).

Proposition (1.6.6). — Let A be a commutative ring, let s be any element of A and let S = {1, s, s², …} be the multiplicative subset of A consisting of the powers of s. The canonical morphism

f : A[X] → S⁻¹A, P ↦ P(1/s),

is surjective, and its kernel is the ideal (1 − sX). Passing to the quotient, one deduces an isomorphism

φ : A[X]/(1 − sX) ≃ S⁻¹A.

Proof. — Any element of S⁻¹A can be written a/sⁿ for some a ∈ A and some integer n ⩾ 0. Then a/sⁿ = f(aXⁿ), so that f is surjective. Its kernel contains the polynomial 1 − sX, since f(1 − sX) = 1 − s/s = 0. Therefore, it contains the ideal (1 − sX) generated by this polynomial.

By the universal property of quotient rings, we get a well-defined ring morphism φ : A[X]/(1 − sX) → S⁻¹A which maps the class cl(P) of a polynomial P modulo (1 − sX) to f(P) = P(1/s). Let us show that φ is an isomorphism. By proposition 1.5.6, it will then follow that Ker(f) = (1 − sX).

To show that φ is an isomorphism, we shall construct its inverse. This inverse ψ should satisfy ψ(a/sⁿ) = cl(aXⁿ) for all a ∈ A and all n ∈ N; we could check this by a direct computation, but let us rather choose a more abstract way. Let g : A → A[X]/(1 − sX) be the canonical morphism such that g(a) = cl(a), the class of the constant polynomial a modulo 1 − sX. In the ring A[X]/(1 − sX), we have cl(sX) = 1, so that cl(s) is invertible, with inverse cl(X). By the universal property of fraction rings, there exists a unique morphism ψ : S⁻¹A → A[X]/(1 − sX) such that ψ(a/1) = g(a) for any a ∈ A. By construction, for any a ∈ A and any n ⩾ 0, one has ψ(a/sⁿ) = cl(a)cl(Xⁿ) = cl(aXⁿ).

Finally, let us show that ψ and φ are inverses of one another. For P ∈ A[X], one has ψ(φ(cl(P))) = ψ(P(1/s)). Consequently, if P = ∑ aₙXⁿ, we get

ψ(φ(cl(P))) = ψ(f(P)) = ψ(P(1/s)) = ψ(∑ (aₙ/sⁿ)) = ∑ ψ(aₙ/sⁿ) = ∑ cl(aₙXⁿ) = cl(∑ aₙXⁿ) = cl(P),

and ψ ∘ φ = id. Finally,

φ(ψ(a/sⁿ)) = φ(cl(aXⁿ)) = f(aXⁿ) = a/sⁿ,

and φ ∘ ψ = id. The ring morphism φ is therefore an isomorphism, with inverse ψ, as was to be shown. □
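A small sympy check of this description (my example, working over ℚ for convenience, with s = 10): modulo 1 − 10X, every polynomial is congruent to a constant, namely its value at 1/10; equivalently, the remainder of the euclidean division of P by 1 − 10X is the decimal number that cl(P) represents.

```python
from sympy import symbols, div, Rational, Poly, QQ

X = symbols("X")
P = Poly(3*X**2 + 7*X + 2, X, domain=QQ)   # represents 3/100 + 7/10 + 2 in the ring of decimals
Q = Poly(1 - 10*X, X, domain=QQ)

quotient, remainder = div(P, Q)
print(remainder)                            # the constant 273/100
assert remainder.as_expr() == P.eval(Rational(1, 10))
```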

Examples (1.6.7). — a) Let A be a domain. The set S = A ∖ {0} is a multiplicative subset of A. The morphism i : A → S⁻¹A is injective and the ring S⁻¹A is a field, called the field of fractions of A.

Proof. — Since A is a domain, 1 ≠ 0 and 1 ∈ S. On the other hand, for any two nonzero elements a and b ∈ A, their product ab is nonzero, for A is a domain. This shows that S is a multiplicative subset of A. Since no element of S is a zero divisor, the morphism i : A → S⁻¹A is injective.

Any element x of S⁻¹A can be written a/s, for some a ∈ A and some s ≠ 0. Assume x = 0; then there exists b ∈ S such that ab = 0. Since b ≠ 0 and A is a domain, a = 0. Reversing the argument, any element a/s of S⁻¹A with a ≠ 0 is distinct from 0. In particular, 1/1 ≠ 0 and the ring S⁻¹A is nonnull. Let again x = a/s be any nonzero element of S⁻¹A. Then a ≠ 0, so that we can consider the element y = s/a of S⁻¹A. Obviously, xy = (a/s)(s/a) = as/as = 1, hence x is invertible.

We thus have shown that S⁻¹A is a field. □

b) Let A be a commutative ring and let s ∈ A be any element of A which is not nilpotent. Then the subset S = {1, s, s², …} of A is a multiplicative subset (obviously) which does not contain 0. By remark 1.6.4, b) above, the ring S⁻¹A is nonnull; it is usually denoted by A_s.

c) Let f : A → B be a morphism of commutative rings. For any multiplicative subset S of A, T = f(S) is a multiplicative subset of B. Moreover, there is a unique morphism F : S⁻¹A → T⁻¹B extending f, in the sense that F(a/1) = f(a)/1 for any a ∈ A. This follows from the universal property applied to the morphism F₁ from A to T⁻¹B given by a ↦ f(a)/1; one just needs to check that for any s ∈ S, F₁(s) = f(s)/1 is invertible in T⁻¹B, and this holds since f(s) ∈ T, which is the very definition of T. This can also be shown by hand, copying the proof of the universal property in this particular case.

If the morphism f is implicit, for example if B is given as an A-algebra, we will commit the abuse of writing S⁻¹B instead of T⁻¹B.

d) Let f : A → B be a morphism of commutative rings and let T be a multiplicative subset of B. Then S = f⁻¹(T) is a multiplicative subset of A such that f(S) ⊂ T. There is a unique morphism F : S⁻¹A → T⁻¹B extending the morphism f.

e) Let I be an ideal of a commutative ring A. The set S = 1 + I of elements a ∈ A such that a − 1 ∈ I is a multiplicative subset. Indeed, it is the inverse image of the multiplicative subset {1} of A/I by the canonical morphism A → A/I.

f) Let A be a commutative ring and let P be an ideal of A such that P ≠ A and such that the condition ab ∈ P implies a ∈ P or b ∈ P (we shall say that P is a prime ideal of A, see definition 2.2.2). Then A ∖ P is a multiplicative subset of A (proposition 2.2.3) and the fraction ring (A ∖ P)⁻¹A is generally denoted by A_P.

Remark (1.6.8). — Let A be a (not necessarily commutative) ring and let S be a multiplicative subset of A. There are several purely formal ways of proving the existence and uniqueness of a ring A_S, endowed with a ring morphism i : A → A_S, that satisfies the universal property: for any ring morphism f : A → B such that f(S) ⊂ B×, there exists a unique ring morphism φ : A_S → B such that f = φ ∘ i. One of them consists in defining rings of polynomials in noncommuting indeterminates (replacing the commutative monoid N^(I) on a set I by the word monoid I* on I), adding such indeterminates T_s for all s ∈ S, and quotienting this ring by the two-sided ideal generated by the elements sT_s − 1 and T_s s − 1, for s ∈ S.

However, almost nothing can be said about this ring. For example, in 1937, Malcev constructed a ring A without any nonzero zero divisor that admits no morphism to a field (1 would have to map to 0!). (See A. Malcev (1937), “On the immersion of an algebraic ring into a field”, Math. Ann., vol. 113, p. 686–691.)

For rings satisfying the so-called Ore condition, one can check that the construction using fractions furnishes a ring for which the universal property holds; see exercise 1/45.

1.7. Relations between quotient rings and fraction rings

Finally, I want to study briefly the ideals of a fraction ring S⁻¹A. Here is a first result.

Proposition (1.7.1). — Let A be a commutative ring and let S be a multiplicative subset of A; let i : A → S⁻¹A be the canonical morphism. For any ideal ℐ of S⁻¹A, there exists an ideal I of A such that ℐ = i(I)(S⁻¹A). In fact, the ideal I = i⁻¹(ℐ) is the largest such ideal.

Proof. — If ℐ = 8(I)(S−1A), we have in particular 8(I) ⊂ ℐ, that is

I ⊂ 8−1(ℐ). Provided that the ideal 8−1(ℐ) satisfies the given assertion,

this shows that it is the largest such ideal. We thus need to show that

ℐ = 8(8−1(ℐ))(S−1

A).Since 8(8−1(ℐ)) ⊂ ℐ, the ideal generated by 8(8−1(ℐ)) is contained in ℐ,

hence the inclusion

8(8−1(ℐ))(S−1

A) ⊂ ℐ.

Conversely, let G ∈ ℐ, let us choose 0 ∈ A and B ∈ S such that G = 0/B.Then, BG = 8(0) ∈ ℐ so that 0 ∈ 8−1(ℐ). It follows that BG ∈ 8(8−1(ℐ)),hence G = (BG)(1/B) belongs to 8(8−1(ℐ))(S−1

A), the other required

inclusion. �
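The following Python sketch (an illustration added here, with A = Z and S the powers of 10 chosen as an example) exhibits proposition 1.7.1 concretely: an ideal of the ring Z[1/10] of decimal numbers generated by a is also generated by the integer obtained from a by stripping the factors of 2 and 5, which is a generator of the contraction i⁻¹(ℐ).

```python
# Illustration of proposition 1.7.1 for A = Z, S = {1, 10, 100, ...} (a choice made
# for this sketch): every ideal of Z[1/10] is generated by an ordinary integer.

def contract(a):
    """Generator of i^{-1}((a)): strip from a the factors of 2 and 5, which are units in Z[1/10]."""
    a = abs(a)
    for p in (2, 5):
        while a % p == 0:
            a //= p
    return a

for a in (60, 35, 44):
    b = contract(a)
    # a and b generate the same ideal of Z[1/10]: b divides a in Z, and a divides
    # b times a suitable power of 10, so each is a multiple of the other up to units.
    k = 1
    while (b * k) % a != 0:
        k *= 10
    print(f"({a}) = ({b}) in Z[1/10]:  {b} | {a}  and  {a} | {b}*{k}")
```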


To shorten the notation, we can omit the morphism i and write I·S⁻¹A instead of i(I)S⁻¹A. In fact, we shall rather denote this ideal by S⁻¹I, this last notation being the one used in the general context of fraction modules (§3.6).
Proposition (1.7.2). — Let A be a commutative ring, let S be a multiplicative subset of A and let I be an ideal of A. Let T = cl_I(S) ⊂ A/I be the image of S by the canonical surjection cl_I : A → A/I. There exists a unique ring morphism
φ : S⁻¹A/S⁻¹I → T⁻¹(A/I)
such that for any a ∈ A, φ(cl(a/1)) = cl_I(a)/1. Moreover, φ is an isomorphism.
In a more abstract way, the two rings S⁻¹A/S⁻¹I and T⁻¹(A/I) are A-algebras (a quotient, or a fraction ring, of an A-algebra is an A-algebra). The proposition states that there exists a unique morphism of A-algebras between these two rings, and that this morphism is an isomorphism.

Proof. — It is possible to give an explicit proof, but it is more elegant (say, more abstract and less computational) to rely on the universal properties of quotients and fraction rings. Let us consider the composition
f : A → A/I → T⁻¹(A/I),  a ↦ cl(a)/1.
By this morphism, an element s ∈ S is sent to cl(s)/1, an invertible element of T⁻¹(A/I) whose inverse is 1/cl(s). By the universal property of fraction rings, there exists a unique ring morphism
φ₁ : S⁻¹A → T⁻¹(A/I)
such that φ₁(a/1) = cl(a)/1.
If, moreover, a ∈ I, then φ₁(a/1) = cl(a)/1 = 0, since cl(a) = 0 in A/I. It follows that Ker(φ₁) contains the image of I in S⁻¹A; it thus contains the ideal S⁻¹I generated by I in S⁻¹A. By the universal property of quotient rings, there exists a unique ring morphism
φ : S⁻¹A/S⁻¹I → T⁻¹(A/I)
such that φ(cl(a/s)) = cl(a)/cl(s) for any a/s ∈ S⁻¹A.


We have shown that there exists a morphism φ : S⁻¹A/S⁻¹I → T⁻¹(A/I) of A-algebras. We can sum up the construction by a commutative diagram whose top row is A → S⁻¹A → S⁻¹A/S⁻¹I, whose bottom row is A → A/I → T⁻¹(A/I), and whose oblique arrows are φ₁ : S⁻¹A → T⁻¹(A/I) and φ : S⁻¹A/S⁻¹I → T⁻¹(A/I).
We also see that φ is the only possible morphism of A-algebras. Indeed, by the universal property of quotient rings, the morphism φ is characterized by its restriction to S⁻¹A which, by the universal property of fraction rings, is itself characterized by its restriction to A.
It remains to prove that φ is an isomorphism. For that, let us now consider this diagram in the other direction. The kernel of the morphism g = cl_{S⁻¹I} ∘ i contains I, hence there is a unique morphism of A-algebras
ψ₁ : A/I → S⁻¹A/S⁻¹I
satisfying ψ₁(cl(a)) = cl(a/1) for any a ∈ A. For every s ∈ S, ψ₁(cl(s)) = cl(s/1) is invertible, with inverse cl(1/s). Therefore, the image of T by ψ₁ consists of invertible elements of S⁻¹A/S⁻¹I. There thus exists a unique morphism of A-algebras
ψ : T⁻¹(A/I) → S⁻¹A/S⁻¹I
(that is, such that ψ(cl(a)/1) = cl(a/1) for any a ∈ A). Again, these constructions are summarized by an analogous commutative diagram, with g, ψ₁ and ψ in place of the arrows above.
Finally, for a ∈ A and s ∈ S, we have φ(cl(a/s)) = cl(a)/cl(s) in T⁻¹(A/I) and ψ(cl(a)/cl(s)) = cl(a/s) in S⁻¹A/S⁻¹I. Necessarily, φ ∘ ψ and ψ ∘ φ are the identity maps, so that φ is an isomorphism. □
This property will appear later under the fancy denomination of exactness of localization.
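As a concrete companion to proposition 1.7.2, here is a small Python check with A = Z, I = 5Z and S the powers of 2 (a choice made for this sketch): since 2 becomes invertible modulo 5, the ring T⁻¹(A/I) is simply Z/5Z, and the isomorphism sends the class of a/2^k to a·(2^k)⁻¹ mod 5.

```python
# Numerical illustration of proposition 1.7.2 (assumptions: A = Z, I = 5Z, S = powers of 2).
import random

def phi(a, k, p=5, s=2):
    """Image in Z/pZ of the class of a/s^k in S^{-1}Z / S^{-1}(pZ)."""
    return (a * pow(s, -k, p)) % p          # pow(s, -k, p) is the modular inverse power

# well defined: a/2^k = b/2^l in Z[1/2] have equal images
assert phi(3, 1) == phi(6, 2) == phi(12, 3)
# additive and multiplicative on random samples
for _ in range(100):
    a, b = random.randrange(100), random.randrange(100)
    k, l = random.randrange(5), random.randrange(5)
    assert (phi(a, k) + phi(b, l)) % 5 == phi(a * 2**l + b * 2**k, k + l)
    assert (phi(a, k) * phi(b, l)) % 5 == phi(a * b, k + l)
print("phi behaves as a ring isomorphism onto Z/5Z on these samples")
```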

Exercises

1-exo503) a) Let A be a ring. Let us write A° for the abelian group A endowed with the multiplication • defined by a • b = ba. Then A° is a ring, called the opposite ring of A.
b) Let A be the ring of n×n matrices with coefficients in C. Show that the transposition map, M ↦ Mᵗ, is an isomorphism from A to A°.
c) Show that the conjugation q ↦ q̄ of quaternions induces an isomorphism from H to H°.
2-exo515) a) Let A be a ring and let (B_i) be a family of subrings of A. Show that the intersection of all B_i is a subring of A.
b) Let A and B be rings, let f, g : A → B be morphisms of rings, and let C be the set of all a ∈ A such that f(a) = g(a). Prove that C is a subring of A.
c) Let A be a ring, let B be a subring of A and let I be a two-sided ideal of A. Let R be the set of all sums a + b, for a ∈ B and b ∈ I. Show that R is a subring of A.
3-exo513) Let A be a ring and let S be a subset of A. In the following two cases, determine explicitly the centralizer of S in A.
a) When A = End_K(V) is the ring of endomorphisms of a finite-dimensional K-vector space V and S = {u} consists of a diagonalizable endomorphism u;
b) When A = M_n(C) is the ring of complex n×n matrices and S = {M} consists of a matrix with minimal polynomial Xⁿ.
4-exo520) Let K be a field and let V be a finite-dimensional K-vector space. Show that the center of the ring End_K(V) consists of the homotheties x ↦ ax, for a ∈ K.
5-exo525) Let A be a ring and let M be a monoid. Let Z be the center of A.
a) For m ∈ M, let δ_m be the function from M to A with value 1 at m, and 0 elsewhere. Compute the convolution product δ_m ∗ δ_{m′} in the monoid ring A^(M).
b) For m ∈ M, compute the centralizer of the element δ_m in A^(M).
c) Assume that M is a group G. Show that the center of the ring A^(G) consists of all functions f : G → Z with finite support which are constant on every conjugacy class in G.
6-exo527) a) Let ∂ : C[X] → C[X] be the map given by ∂(P) = P′, for P ∈ C[X]. A differential operator on C[X] is a C-linear map from C[X] to itself of the form P ↦ ∑_{i=0}^{n} a_i(X)∂^i(P), where the a_i are polynomials.


b) Show that the set of all differential operators on C[X], endowed with addition and composition, is a ring.
c) Determine its center.
7-exo529) Let f : A → B be a ring morphism.
a) Let R be the set of all ordered pairs (a, b) ∈ A × A such that f(a) = f(b). Show that R, with termwise addition and multiplication, is a ring.
b) One says that f is a monomorphism if g = g′ for any ring C and any pair (g, g′) of ring morphisms from C to A such that f ∘ g = f ∘ g′. Show that a ring morphism is a monomorphism if and only if it is injective.
c) One says that f is an epimorphism if g = g′ for any ring C and any pair (g, g′) of ring morphisms from B to C such that g ∘ f = g′ ∘ f. Show that a surjective ring morphism is an epimorphism. Show also that the inclusion morphism from Z into Q is an epimorphism.
8-exo005) Let Z[√2] and Z[√3] be the subrings of C generated by Z and by √2 and √3 respectively.
a) Show that Z[√2] = {a + b√2 ; a, b ∈ Z} and Z[√3] = {a + b√3 ; a, b ∈ Z}.
b) Show that besides the identity, there is only one automorphism of Z[√2]; it maps a + b√2 to a − b√2 for any a, b ∈ Z.
c) Show that there does not exist a ring morphism from Z[√2] to Z[√3].
d) More generally, what are the automorphisms of Z[i]? of Z[∛2]?
9-exo514) Let α be a complex number. Assume that there exists a monic polynomial P with integral coefficients, say P = X^d + a_{d−1}X^{d−1} + ⋯ + a_0, such that P(α) = 0. Show that the set A of all complex numbers of the form c_0 + c_1α + ⋯ + c_{d−1}α^{d−1}, for c_0, …, c_{d−1} ∈ Z, is a subring of C. Give an example that shows that the result does not hold when P is not monic.
10-exo539) Let K be a field and V be a K-vector space.
a) Let (V_i) be any family of subspaces of V such that V = ⊕V_i. For x = ∑x_i, with x_i ∈ V_i, set p_j(x) = x_j. Show that for any j, p_j is a projector in V (namely, p_j ∘ p_j = p_j) with image V_j and kernel ⊕_{i≠j} V_i. Show that p_j ∘ p_i = 0 for i ≠ j, and that id_V = ∑p_i.
b) Conversely, let (p_i) be a family of projectors in V such that p_i ∘ p_j = 0 for i ≠ j, and such that id_V = ∑p_i. Let V_i be the image of p_i. Show that V is the direct sum of the subspaces V_i, and that p_i is the projector onto V_i whose kernel is the sum of all V_j, for j ≠ i.
11-exo540) Automorphisms of M_n(C). Let A = M_n(C) be the ring of n×n matrices with complex coefficients and let φ be any automorphism of A. Let Z be the center of A; it consists of all scalar matrices aI_n, for a ∈ C.
a) Show that φ induces, by restriction, an automorphism of Z.
In the sequel, we assume that φ|_Z = id_Z. For 1 ≤ i, j ≤ n, let E_{i,j} be the elementary matrix whose only nonzero coefficient is a 1 at place (i, j); let B_{i,j} = φ(E_{i,j}).


b) Show that B_{i,i} is the matrix of a projector p_i of Cⁿ. Show that p_i ∘ p_j = 0 for i ≠ j, and that id_{Cⁿ} = ∑p_i.
c) Using exercise 1/10, show that there exists a basis (f_1, …, f_n) of Cⁿ such that for any i, p_i is the projector onto Cf_i with kernel the sum ∑_{j≠i} Cf_j.
d) Show that there exist elements λ_i ∈ C* such that, setting e_i = λ_i f_i, one has B_{ij}(e_k) = 0 for k ≠ j and B_{ij}(e_j) = e_i. Deduce the existence of a matrix B ∈ GL_n(C) such that φ(M) = BMB⁻¹ for any matrix M in M_n(C).
e) What happens when one does not assume anymore that φ restricts to the identity on Z?
12-exo512) Let n be an integer such that n ≥ 1.
a) What are the invertible elements of Z/nZ? For which integers n is that ring a domain? a field?
b) Let m be an integer such that m ≥ 1. Show that the canonical map from Z/nmZ to Z/nZ is a ring morphism. Show that it induces a surjection from (Z/mnZ)× onto (Z/nZ)×.
c) Determine the nilpotent elements of the ring Z/nZ.
13-exo541) Give an example of a ring morphism f : A → B which is surjective but such that the associated group morphism from A× to B× is not surjective.
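The following brute-force Python check (an illustration added here, with n = 72 as an arbitrary choice) agrees with the expected answers to exercise 12: the units of Z/nZ are the classes prime to n, and the nilpotents are the multiples of the product of the prime divisors of n.

```python
# Brute-force companion to exercise 12 (illustrative; n = 72 is an arbitrary choice).
from math import gcd, prod
from sympy import primefactors, totient

def units(n):
    return {a for a in range(n) if gcd(a, n) == 1}

def nilpotents(n):
    # a is nilpotent in Z/nZ iff a^n ≡ 0 (mod n); the exponent n is large enough.
    return {a for a in range(n) if pow(a, n, n) == 0}

n = 72
r = prod(primefactors(n))                      # radical of n: here 2*3 = 6
assert len(units(n)) == totient(n)             # Euler's phi counts the units
assert nilpotents(n) == {a for a in range(n) if a % r == 0}
print(f"(Z/{n}Z)x has {totient(n)} elements; nilpotents = multiples of {r}")
```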

14-exo504) a) Let K be a field, let V be a K-vector space and let A = End_K(V) be the ring of endomorphisms of V.
Show that the elements of A which are right invertible are the surjective endomorphisms, and that those which are left invertible are the injective endomorphisms.
b) Give an example of a noncommutative ring and of an element which possesses infinitely many right inverses.
15-exo508) Product ring. Let A and B be two rings. The set A × B is endowed with the addition defined by (a, b) + (a′, b′) = (a + a′, b + b′) and the multiplication defined by (a, b)·(a′, b′) = (aa′, bb′), for a and a′ ∈ A, b and b′ ∈ B.
a) Show that A × B is a ring. What is the neutral element for multiplication?
b) Determine the elements of A × B which are respectively right regular, left regular, right invertible, left invertible, nilpotent.
c) Show that the elements e = (1, 0) and f = (0, 1) of A × B satisfy e² = e and f² = f. One says that they are idempotents.
16-exo506) Let A be a ring such that A ≠ 0.
a) Let a be an element of A which has exactly one right inverse. Show that a is invertible. (First prove that a is regular.)
b) If every nonzero element of A is left invertible, then A is a division ring.
c) One assumes that A is finite. Show that any left regular element is right invertible. Deduce that if every nonzero element of A is left regular, then A is a division ring, and that, when A is commutative, any prime ideal of A is a maximal ideal.


d) Same question when K is a field and A is a K-algebra which is finite-dimensional as a K-vector space. (In particular, we impose that the multiplication of A is K-bilinear.)
e) Assume that every nonzero element of A is left regular and that the set of all right ideals of A is finite. Show that A is a division ring. (To show that a nonzero element x is right invertible, introduce the right ideals xⁿA, for n ≥ 1.)
17-exo509) Let A be a ring and let e ∈ A be any idempotent element of A (this means, we recall, that e² = e).
a) Show that 1 − e is an idempotent element of A.
b) Show that eAe = {eae ; a ∈ A} is an abelian subgroup of A, and that the multiplication of A endows it with the structure of a ring. (However, it is not a subring of A, in general.)
c) Make explicit the particular case where A = M_n(k), for some field k, the element e being a diagonal matrix of rank r.
18-exo505) Let A be a ring.
a) Let a ∈ A be a nilpotent element. Let n ≥ 0 be such that a^{n+1} = 0; compute (1 + a)(1 − a + a² − ⋯ + (−1)ⁿaⁿ). Deduce that 1 + a is invertible in A.
b) Let x, y be two elements of A; one assumes that x is invertible, y is nilpotent, and xy = yx. Show that x + y is invertible.
c) Let x and y be nilpotent elements of A which commute. Show that x + y is nilpotent. (Let n and m be two integers ≥ 1 such that xⁿ = y^m = 0; use the binomial formula to compute (x + y)^{n+m−1}.)
19-exo511) Let A be a ring, let a and b be elements of A such that 1 − ab is invertible in A.
a) Show that 1 − ba is invertible in A and give an explicit formula for its inverse. (One may begin by assuming that ab is nilpotent and give a formula for the inverse of 1 − ba in terms of (1 − ab)⁻¹, a and b. Then show that this formula works in the general case.)
b) One assumes that k is a field and A = M_n(k). Show that ab and ba have the same characteristic polynomial. What is the relation with the preceding question?
20-exo033) Let A be a commutative ring, a_0, …, a_n ∈ A and let f be the polynomial f = a_0 + a_1X + ⋯ + a_nXⁿ ∈ A[X].
a) Show that f is nilpotent if and only if all a_i are nilpotent.
b) Show that f is invertible in A[X] if and only if a_0 is invertible in A and a_1, …, a_n are nilpotent. (If g = f⁻¹ = b_0 + b_1X + ⋯ + b_mX^m, show by induction on k that a_n^{k+1}b_{m−k} = 0.)
c) Show that f is a zero-divisor if and only if there exists a nonzero element a ∈ A such that af = 0. (Let g be a nonzero polynomial of minimal degree such that fg = 0; show that a_k g = 0 for every k.)
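A one-line instance of exercise 20, b), can be checked by hand or by machine; the choice A = Z/4Z below is my own. Since 2 is nilpotent, 1 + 2X should be a unit of (Z/4Z)[X], and indeed it is its own inverse, because (1 + 2X)² = 1 + 4X + 4X² = 1.

```python
# Small check of exercise 20, b), with A = Z/4Z (an illustrative choice).
def polymul_mod(p, q, n):
    """Multiply polynomials given as coefficient lists (constant term first) in (Z/nZ)[X]."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] = (r[i + j] + a * b) % n
    while len(r) > 1 and r[-1] == 0:   # drop trailing zero coefficients
        r.pop()
    return r

f = [1, 2]                       # the polynomial 1 + 2X
print(polymul_mod(f, f, 4))      # [1] : (1 + 2X)(1 + 2X) = 1 in (Z/4Z)[X]
```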

21-exo521) Let A be a commutative ring.
a) Use the universal property of polynomial rings to show that there exists a unique morphism of A-algebras φ : A[X,Y] → (A[X])[Y] such that φ(X) = X and φ(Y) = Y. Prove that φ is an isomorphism.


b) More generally, let I be a set, let J be a subset of I and let K = I ∖ J. Show that there exists a unique morphism of A-algebras φ : A[(X_i)_{i∈I}] → (A[(X_j)_{j∈J}])[(X_k)_{k∈K}] mapping each indeterminate X_i to itself, and that this morphism is an isomorphism (see remark 1.3.10).
22-exo526) Let A be a commutative ring, let P and Q be polynomials with coefficients in A in one indeterminate X. Let m = deg(P) and n = deg(Q), let a be the leading coefficient of Q and let N = sup(1 + m − n, 0). Show that there exists a pair (R, S) of polynomials such that a^N P = QR + S and deg S < n. Show that this pair is unique if A is a domain, or if a is regular.
23-exo052) Let k be a field and let A = k[X_1, …, X_n] be the ring of polynomials with coefficients in k in n indeterminates. One says that an ideal of A is monomial if it is generated by monomials.
a) Let (M_α)_{α∈E} be a family of monomials and let I be the ideal they generate. Show that a monomial M belongs to I if and only if it is a multiple of one of the monomials M_α.
b) Let I be an ideal of A. Show that I is a monomial ideal if and only if, for any polynomial P ∈ I, each monomial of P belongs to I.
c) Let I and J be monomial ideals of A. Show that the ideals I + J, IJ, I ∩ J, I : J = {a ∈ A ; aJ ⊂ I} and √I are again monomial ideals. Given monomials which generate I and J, make explicit monomials which generate those ideals.
24-exo071) Let k be a field, let (M_α)_{α∈E} be a family of monomials of A = k[X_1, …, X_n], and let I be the ideal they generate. The goal of this exercise is to show that there exists a finite subset F ⊂ E such that I = (M_α)_{α∈F}. The proof runs by induction on the number n of indeterminates.
a) Treat the case n = 1.
b) In the sequel, one assumes that n ≥ 2 and that the property holds if there are strictly less than n indeterminates.
For i ∈ {1, …, n}, one defines a morphism of rings φ_i by φ_i(P) = P(X_1, …, X_{i−1}, 1, X_{i+1}, …, X_n). Using the induction hypothesis, observe that there exists a finite subset F_i ⊂ E such that for any α ∈ E the monomial φ_i(M_α) can be written φ_i(M_α) = M′ × φ_i(M_β) for some monomial M′ and some β ∈ F_i.
c) Let F_0 be the set of all α ∈ E such that for all i ∈ {1, …, n}, one has deg_{X_i}(M_α) < sup{deg_{X_i}(M_β) ; β ∈ F_i}. Set F = ⋃_{i=0}^{n} F_i. Show that I = (M_α)_{α∈F}.
25-exo522) Let A be a ring, let I be a set and let M be the set N^(I) of all multi-indices on I. Let ℱ_I = A^M be the set of all families of elements of A indexed by M. It is an abelian group for termwise addition.
a) Show that the formulae that define the multiplication of polynomials make sense on ℱ_I and endow it with the structure of a ring, of which the ring of polynomials (the families with finite support) is a subring.


Let X_i be the indeterminate with index i. Any element of ℱ_I is formally written as an infinite sum ∑_m a_m X_1^{m_1}⋯X_n^{m_n}. The ring ℱ_I is called the ring of power series in the indeterminates (X_i)_{i∈I}. When I = {1, …, n}, it is denoted A[[X_1, …, X_n]], and A[[X]] when I has a single element, the indeterminate being written X.
Let k be a commutative ring.
b) For any k-algebra A and any family (a_1, …, a_n) of nilpotent elements which commute pairwise, show that there is a unique morphism of k-algebras φ : k[[X_1, …, X_n]] → A such that φ(X_i) = a_i for all i.
c) Show that an element ∑ a_nXⁿ of k[[X]] is invertible if and only if a_0 is invertible in k.
26-exo619) Let k be a field. Let Φ be the morphism from k[X_1, …, X_n] to ℱ(kⁿ, k) that maps a polynomial to the corresponding polynomial function.
a) Let A_1, …, A_n be subsets of k. Let P ∈ k[X_1, …, X_n] be a polynomial in n indeterminates such that deg_{X_i}(P) < Card(A_i) for every i. One assumes that P(a_1, …, a_n) = 0 for every (a_1, …, a_n) ∈ A_1 × ⋯ × A_n. Show that P = 0.
b) If k is an infinite field, show that Φ is injective but not surjective.
c) One assumes that k is a finite field. Show that Φ is surjective. For any function f ∈ ℱ(kⁿ; k), make explicit a polynomial P such that Φ(P) = f (think of the Lagrange interpolation polynomials). Show that Φ is not injective. More precisely, if q = Card(k), show that Ker(Φ) is generated by the polynomials X_i^q − X_i, for 1 ≤ i ≤ n.
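Exercise 26 can be tested in one variable; the sketch below (my own, with k = F_7 chosen for concreteness) checks that X^7 − X induces the zero function and that a Lagrange interpolation polynomial realizes an arbitrarily chosen function.

```python
# Illustration of exercise 26 for k = F_7 (an arbitrary choice of prime field).
from sympy import symbols, interpolate

p = 7
X = symbols('X')

# X^p - X vanishes at every point of F_p (little Fermat), though it is not the zero polynomial
assert all((a**p - a) % p == 0 for a in range(p))

# an arbitrary function F_7 -> F_7, realized by its interpolation polynomial of degree < 7;
# its rational coefficients can be reduced modulo 7 since their denominators are prime to 7.
values = [3, 1, 4, 1, 5, 2, 6]
P = interpolate(list(zip(range(p), values)), X)
assert all(P.subs(X, a) == values[a] for a in range(p))
print("Phi is surjective on this example, and X^7 - X lies in its kernel")
```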

27-exo867) Let A be a ring.
a) Endow B = A^(N) with termwise addition and with multiplication given by (a_n)·(b_n) = (c_n), with c_n = ∑_{k=0}^{n} a_k b_{n−k}. Prove that B is a ring, with unit element (1, 0, …). Let T = (0, 1, 0, …); prove that for every integer n, Tⁿ is the sequence whose only nonzero term is equal to 1 and has index n. Prove also that (a_n) = ∑_{n=0}^{∞} a_nTⁿ for every (a_n) ∈ B.
Elements of B are called noncommutative polynomials with coefficients in A, and B is denoted by A[T]. The degree deg(P) of an element P = ∑ a_nTⁿ ∈ A[T] is the supremum of all n such that a_n ≠ 0; if P ≠ 0, then deg(P) ∈ N and the coefficient a_{deg(P)} is called the leading coefficient of P.
b) Let P, Q ∈ A[T]; assume that Q ≠ 0 and that the leading coefficient of Q is equal to 1. Prove that there exists a unique pair (U, V) of elements of A[T] such that P = QU + V and deg(V) < deg(Q).
c) Assume that Q = T − a, for some a ∈ A. Prove that there exists a unique polynomial U ∈ A[T] and a unique element b ∈ A such that P = U·(T − a) + b. If P = ∑ c_nTⁿ, prove that b = ∑ c_n aⁿ.
d) Let K be a commutative ring and let A = M_n(K) be the ring of n × n matrices with coefficients in K. Let U ∈ A and let V ∈ M_n(K[T]) be the adjugate (the transpose of the comatrix) of U − TI_n. Using the formula (U − TI_n)·V = P_U(T)I_n, where P_U ∈ K[T] is the characteristic polynomial of U, prove that P_U(U) = 0 (theorem of Cayley–Hamilton).


28-exo524) One says that a ring A admits a right euclidean division if there exists a map φ : A ∖ {0} → N such that for any pair (a, b) of elements of A, with b ≠ 0, there exists a pair (q, r) of elements of A such that a = qb + r and such that either r = 0 or φ(r) < φ(b).
a) Assume that A admits a right euclidean division. Show that any left ideal of A is of the form Aa, for some a ∈ A.
b) Let K be a division ring. Show that the polynomial ring K[X] admits a right euclidean division.
29-exo528) Let A be a ring and let I be a right ideal of A.
a) Show that the left ideal generated by I in A is a two-sided ideal.
b) Show that the set J of all elements a ∈ A such that xa = 0 for any x ∈ I (the right annihilator of I) is a two-sided ideal of A.
30-exo001) Let A be a commutative ring, and let I, J, L be ideals of A. Show the following properties:
a) I·J ⊂ I ∩ J;
b) (I·J) + (I·L) = I·(J + L);
c) (I ∩ J) + (I ∩ L) ⊂ I ∩ (J + L);
d) If J ⊂ I, then J + (I ∩ L) = I ∩ (J + L);
e) Let K be a field and assume that A = K[X,Y]. Set I = (X), J = (Y) and L = (X + Y). Compute (I ∩ J) + (I ∩ L) and I ∩ (J + L); compare these two ideals.
31-exo535) Let A be a ring.
a) Give an example where the set of all nilpotent elements of A is not an additive subgroup of A. (One may check that A = M_2(C) works.)
b) Let N be the set of all elements a ∈ A such that ax is nilpotent for every x ∈ A. Show that N is a two-sided ideal of A, every element of which is nilpotent.
c) Let I be a two-sided ideal of A such that every element of I is nilpotent. Show that I ⊂ N.
32-exo516) Let K be a field, let V be a K-vector space and let A be the ring of endomorphisms of V.
a) For any subspace W of V, show that the set N_W of all endomorphisms whose kernel contains W is a left ideal of A, and that the set I_W of all endomorphisms whose image is contained in W is a right ideal of A.
b) If V has finite dimension, show that all left ideals (resp. right ideals) are of this form.
c) If V has finite dimension, show that the only two-sided ideals of A are (0) and A.
d) Show that the set of all endomorphisms of finite rank of V (that is, those whose image has finite dimension) is a two-sided ideal of A. It is distinct from A if V has infinite dimension.
33-exo519) Let A be a commutative ring and let a, b be two elements of A.


a) If there exists a unit u in A such that a = bu (in which case one says that a and b are associates), show that the ideals (a) = aA and (b) = bA coincide.
b) Conversely, assuming that A is a domain and that (a) = (b), show that a and b are associates.
c) Assume that A is the quotient of the ring Z[a, b, x, y] of polynomials in four indeterminates by the ideal generated by (a − bx, b − ay). Prove that the ideals (a) and (b) coincide but that a and b are not associates.
34-exo006) Let A, B be commutative rings and let f : A → B be a ring morphism. For any ideal I of A, let f_*(I) be the ideal of B generated by f(I); we say it is the extension of I to B. For any ideal J of B, we call the ideal f⁻¹(J) the contraction of J in A.
Let I be an ideal of A and let J be an ideal of B; show the following properties.
a) I ⊂ f⁻¹(f_*(I)) and J ⊃ f_*(f⁻¹(J));
b) f⁻¹(J) = f⁻¹(f_*(f⁻¹(J))) and f_*(I) = f_*(f⁻¹(f_*(I))).
Let 𝒞 be the set of ideals of A which are contractions of ideals of B, and let ℰ be the set of ideals of B which are extensions of ideals of A.
c) Show that 𝒞 = {I ; I = f⁻¹(f_*(I))} and ℰ = {J ; J = f_*(f⁻¹(J))};
d) The map f_* defines a bijection from 𝒞 onto ℰ; what is its inverse?
Let I_1 and I_2 be ideals of A, let J_1 and J_2 be ideals of B. Show the following properties:
e) f_*(I_1 + I_2) = f_*(I_1) + f_*(I_2) and f⁻¹(J_1 + J_2) ⊃ f⁻¹(J_1) + f⁻¹(J_2);
f) f_*(I_1 ∩ I_2) ⊂ f_*(I_1) ∩ f_*(I_2), and f⁻¹(J_1 ∩ J_2) = f⁻¹(J_1) ∩ f⁻¹(J_2);
g) f_*(I_1·I_2) = f_*(I_1)·f_*(I_2), and f⁻¹(J_1·J_2) ⊃ f⁻¹(J_1)·f⁻¹(J_2);
h) f_*(√I) ⊂ √(f_*(I)), and f⁻¹(√J) = √(f⁻¹(J)).
35-exo184) Let I and J be two ideals of a commutative ring A. Assume that I + J = A. Then show that Iⁿ + Jⁿ = A for any positive integer n.
36-exo517) Let A be a commutative ring, let I be an ideal of A and let S be any subset of A. The conductor of S in I is defined by the formula (I : S) = {a ∈ A ; for all s ∈ S, as ∈ I}. Show that it is an ideal of A, more precisely the largest ideal J of A such that JS ⊂ I.
37-exo014) Let K be a field and let A = K[X,Y]/(X², XY, Y²).
a) Compute the invertible elements of A.
b) Determine all principal ideals of A.
c) Determine all ideals of A.
38-exo534) Let A be a ring and let I be the two-sided ideal of A generated by the elements of the form xy − yx, for x and y ∈ A.
a) Show that the ring A/I is commutative.
b) Let J be a two-sided ideal of A such that A/J is a commutative ring. Show that I ⊂ J.
39-exo568) Let A be a ring, let I be a right ideal of A.


a) Show that the set B of all a ∈ A such that aI ⊂ I is a subring of A. Show that I is a two-sided ideal of B.
b) Define an isomorphism from the ring End_A(A/I), where A/I is viewed as a right A-module, onto the ring B/I.
c) Let I be a maximal right ideal of A. Show that B/I is a division ring.
40-exo536) Let A be a ring, let I be an ideal of A. We let I[X] be the set of polynomials P ∈ A[X] all of whose coefficients belong to I.
a) Show that I[X] is a left ideal of A[X].
b) If I is a two-sided ideal of A, show that I[X] is a two-sided ideal of A[X] and construct an isomorphism from the ring A[X]/I[X] to the ring (A/I)[X].
41-exo537) Let A be a ring, let I be a two-sided ideal of A and let M_n(I) be the set of matrices in M_n(A) all of whose coefficients belong to I.
a) Show that M_n(I) is a two-sided ideal of M_n(A) and construct an isomorphism of rings from M_n(A)/M_n(I) onto M_n(A/I).
b) Conversely, show that any two-sided ideal of M_n(A) is of the form M_n(I), for some two-sided ideal I of A.
42-exo544) a) What are the invertible elements of the ring of decimal numbers?
Let A be a commutative ring and let S be a multiplicative subset of A.
b) Show that an element a ∈ A is invertible in S⁻¹A if and only if there exists b ∈ A such that ab ∈ S.
c) Let T be a multiplicative subset of A which contains S. Construct a ring morphism from S⁻¹A to T⁻¹A.
d) Let S̃ be the set of elements of A whose image in S⁻¹A is invertible. Show that S̃ contains S and that the ring morphism from S⁻¹A to S̃⁻¹A is an isomorphism. Give an explicit proof of this fact, as well as a proof relying only on the universal property.
43-exo559) Let A be a commutative ring.
a) Let s be an element of A and let S = {1, s, s², …} be the multiplicative subset generated by s. We write A_s for the ring of fractions S⁻¹A. Show that the following properties are equivalent:
(i) The canonical morphism i : A → A_s is surjective;
(ii) The decreasing sequence of ideals (sⁿA)_n is ultimately constant;
(iii) For any large enough integer n, the ideal sⁿA is generated by an idempotent element.
(To show that (ii) implies (iii), show by induction on k that a relation of the form sⁿ = s^{n+1}a implies that sⁿ = s^{n+k}a^k; then conclude that sⁿaⁿ is an idempotent element.)
b) Let S be a multiplicative subset of A consisting of elements s for which the morphism A → A_s is surjective. Show that the morphism A → S⁻¹A is surjective too.


c) Let A be a ring which is either finite, or a finite-dimensional vector space over a subfield (more generally, an artinian ring). Show that condition (ii) holds for any element s ∈ A.
44-exo543) a) Let A be a subring of Q. Show that there exists a multiplicative subset S of Z such that A = S⁻¹Z.
b) Let A = C[X,Y] be the ring of polynomials with complex coefficients in two indeterminates X and Y. Let B = A[Y/X] be the subring generated by A and Y/X in the field C(X,Y) of rational functions.
Show that the unique ring morphism from C[T,U] to B that maps T to X and U to Y/X is an isomorphism. Deduce that A× = B× = C×, hence that B is not the localization of A with respect to some multiplicative subset.
45-exo538) Let A be a (not necessarily commutative) ring and let S be a multiplicative subset of A.
Let A_S be a ring and let i : A → A_S be a morphism of rings. One says that (A_S, i) is a ring of right fractions for S if the following properties hold:
(i) For any s ∈ S, i(s) is invertible in A_S;
(ii) Any element of A_S is of the form i(a)i(s)⁻¹ for some a ∈ A and s ∈ S.
a) Assume that A admits a ring of right fractions for S. Show the following properties (right Ore conditions):
(i) For any a ∈ A and any s ∈ S, there exist b ∈ A and t ∈ S such that at = sb;
(ii) For any a ∈ A and any s ∈ S such that sa = 0, there exists t ∈ S such that at = 0.
(The second condition is obviously satisfied if every element of S is regular.)
b) Conversely, assume that the right Ore conditions hold. Define an equivalence relation ∼ on the set A × S by “(a, s) ∼ (b, t) if and only if there exist c, d ∈ A and u ∈ S such that u = sc = td and ac = bd.” Construct a ring structure on the quotient set A_S such that the map i : A → A_S sending a ∈ A to the equivalence class of (a, 1) is a ring morphism, and such that for any s ∈ S, i(s) is invertible in A_S with inverse the equivalence class of (1, s). Conclude that A_S is a ring of right fractions for S. If every element of S is regular, prove that i is injective.
46-exo560) Let a and b be positive integers.
a) Show that there exist integers m and n such that m is prime to b, each prime divisor of n divides b, and a = mn. (Beware, n is not the gcd of a and b.)
b) Show that the ring (Z/aZ)_b is isomorphic to Z/mZ. To that aim, construct explicitly a ring morphism from Z/mZ to (Z/aZ)_b and show that it is an isomorphism.
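The decomposition in exercise 46 is easy to compute; the Python sketch below (my own, with a = 360 and b = 6 as arbitrary inputs) extracts m and n and checks the two facts on which part b) rests, namely that b is invertible modulo m and nilpotent modulo n.

```python
# Companion to exercise 46 (illustrative choices of a and b).
from math import gcd

def split(a, b):
    """Return (m, n) with a = m*n, gcd(m, b) = 1, and every prime divisor of n dividing b."""
    m, n = a, 1
    while (g := gcd(m, b)) > 1:
        m //= g
        n *= g
    return m, n

a, b = 360, 6
m, n = split(a, b)
assert a == m * n and gcd(m, b) == 1            # b is invertible modulo m
if n > 1:
    assert pow(b, n, n) == 0                    # b is nilpotent modulo n
print(f"a = {a} = {m} * {n};  inverting {b} in Z/{a}Z leaves Z/{m}Z")
```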

47-exo561) a) Show that the ring Z[i] is isomorphic to the ring Z[X]/(X² + 1).
b) Let a be an integer. Considering the ring Z[i]/(a + i) as a quotient of the ring of polynomials Z[X], define an isomorphism
Z[i]/(a + i) ≅ Z/(a² + 1)Z.
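The isomorphism of part b) can be probed numerically: in Z[i]/(a + i) one has i ≡ −a, so x + yi ≡ x − ay, and a² + 1 = (a + i)(a − i) ≡ 0. The Python sketch below (my own, with a = 7) verifies that this reduction map is additive and multiplicative on random Gaussian integers.

```python
# Numerical check of exercise 47, b) (a = 7 is an arbitrary choice).
import random

a = 7
N = a * a + 1

def red(z):
    """Image of the Gaussian integer z = (x, y) = x + y*i in Z/(a^2+1)Z."""
    x, y = z
    return (x - a * y) % N

def gmul(z, w):
    (x, y), (u, v) = z, w
    return (x * u - y * v, x * v + y * u)

for _ in range(200):
    z = (random.randrange(-50, 50), random.randrange(-50, 50))
    w = (random.randrange(-50, 50), random.randrange(-50, 50))
    assert red(gmul(z, w)) == (red(z) * red(w)) % N
    assert red((z[0] + w[0], z[1] + w[1])) == (red(z) + red(w)) % N
print("the reduction map x + yi -> x - a*y mod a^2+1 is a ring morphism on these samples")
```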


c) More generally, let a and b be two coprime integers. Show that the image of b in Z[i]/(a + bi) is invertible. Write this ring as a quotient of Z_b[X] and then define an isomorphism
Z[i]/(a + bi) ≅ Z/(a² + b²)Z.
(Observe that if 1 = au + bv, then 1 = (a + bi)u + b(v − ui).)
48-exo531) Let A be a commutative ring, let S be a multiplicative subset of A.
a) One assumes that there are elements s and t ∈ S such that S is the set of all sⁿt^m, for n and m in N. Show that the morphism A[X,Y] → S⁻¹A, P(X,Y) ↦ P(1/s, 1/t), is surjective and that its kernel contains the ideal (1 − sX, 1 − tY) generated by 1 − sX and 1 − tY in A[X,Y]. Construct an isomorphism A[X,Y]/(1 − sX, 1 − tY) ≅ S⁻¹A.
b) More generally, assume that S is the smallest multiplicative subset of A containing a subset T, and let ⟨1 − tX_t⟩_{t∈T} be the ideal of the polynomial ring (with a possibly infinite set of indeterminates X_t, for t ∈ T) generated by the polynomials 1 − tX_t, for t ∈ T. Then show that the canonical morphism
A[(X_t)_{t∈T}] → S⁻¹A, P ↦ P((1/t)_t),
induces an isomorphism
A[(X_t)_{t∈T}]/⟨1 − tX_t⟩_{t∈T} ≅ S⁻¹A.
49-exo091) Let A be a commutative ring and let S be a multiplicative subset of A such that 0 ∉ S.
a) If A is a domain, show that S⁻¹A is a domain.
b) If A is reduced, show that S⁻¹A is reduced.
c) Let f : A → S⁻¹A be the canonical morphism a ↦ a/1. Show that the nilradical of S⁻¹A is the ideal of S⁻¹A generated by the image of the nilradical of A.
50-exo532) Let B ⊂ R(X) be the set of all rational functions with real coefficients of the form P/(X² + 1)ⁿ, where P ∈ R[X] is a polynomial and n is an integer. Let A be the subset of B consisting of those fractions P/(X² + 1)ⁿ for which deg(P) ≤ 2n.
a) Show that A and B are subrings of R(X).
b) Determine their invertible elements.
c) Show that B is a principal ideal domain. Show that the ideal of A generated by 1/(X² + 1) and X/(X² + 1) is not principal.


CHAPTER 2

IDEALS AND DIVISIBILITY

In this second chapter, we study deeper aspects of divisibility in rings. This

question appeared historically when mathematicians — notably, in the hope of

proving Fermat’s Last Theorem! — tried to make use of unique factorization in

rings where it didn’t hold. Ideals are one device that was invented then so as to
have a better understanding of divisibility. In this context, there are two natural
analogues of prime numbers, namely maximal and prime ideals.

I introduce maximal ideals in the general framework of possibly noncommuta-

tive rings; I then define the Jacobson radical. Matrix rings furnish interesting

examples.

However, from that point on, the chapter is essentially focused on commutative

rings. I define prime ideals and show their relations with nilpotent elements (via

the nilpotent radical), or local rings. The set of all prime ideals of a commutative

ring is called its spectrum, and it is endowed with a natural topology. The

detailed study of spectra of rings belongs to the algebraic geometry of schemes,

as put forward by Alexander Grothendieck. Although that is a more advanced

subject than what is planned for this book, I believe that the early introduction

of the spectrum of a ring allows to add an insightful geometric interpretation to

some constructions of commutative algebra.

Hilbert’s “Nullstellensatz” describes the maximal ideals of the ring of

polynomials in finitely many indeterminates over an algebraically closed field.

After proving it in the particular case of the field of complex numbers (by a short

argument which some colleagues find too special), I show how this theorem

gives rise to a correspondence between geometry and algebra. (The general case

of the theorem will be proved in chapter 9; a few other proofs also appear as

exercises in that chapter, as well as in this one.)


Another beautiful geometric example is given by Gelfand’s theorem that

describes the maximal ideals of the ring of continuous functions on a compact

metric space.

I then go back to the initial goal of understanding uniqueness of a decom-

position into prime numbers in general rings, and introduce three classes of

commutative rings of increasing generality in which the divisibility relation

is particularly well behaved: euclidean rings, principal ideal domains and

unique factorization domains. In particular, I prove a theorem of Gauss that

asserts that polynomial rings are unique factorization domains.

I conclude the chapter by introducing the resultant of two polynomials and

use it to prove a theorem of Bézout about the number of common roots to two

polynomials in two indeterminates — geometrically, this gives an upper bound

for the number of intersection points of two plane curves.

2.1. Maximal ideals

Definition (2.1.1). — Let A be a ring. One says that a left ideal of A is maximal if it is maximal among the left ideals of A which are distinct from A.
In other words, a left ideal I is maximal if one has I ≠ A and if the only left ideals of A containing I are A and I. Consequently, to check that a left ideal I ≠ A is maximal, it suffices to show that for any a ∈ A ∖ I, the left ideal I + Aa is equal to A.

There is an analogous definition for the right and two-sided ideals.

These notions coincide when the ring A is commutative, but the reader

shall be cautious in the general case: When the ringA is not commutative,

a two-sided ideal can be a maximal ideal as a two-sided ideal but not as

a left ideal, or be a maximal ideal as a left ideal but not as a right ideal.

Examples (2.1.2). — a) The ideals of Z have the form nZ, for some n ∈ Z (example 1.4.3). If n divides m, then mZ ⊂ nZ, and conversely. It follows that the maximal ideals of Z are the ideals pZ, where p is a prime number.
Similarly, if K is a field, the maximal ideals of the ring K[X] of polynomials in one indeterminate are the ideals generated by an irreducible polynomial. When K is algebraically closed, the maximal ideals are the ideals (X − a), for some a ∈ K.


b) Let K be a field and let V be a K-vector space of finite dimension. In exercise 1/32, we asked to determine the left, right, and two-sided ideals of End_K(V).
The left ideals of End(V) are the ideals I_W, where W is a vector subspace of V and I_W is the set of endomorphisms whose kernel contains W. For W ⊂ W′, I_{W′} ⊂ I_W. Consequently, the maximal left ideals of End(V) are those ideals I_W where W is a line in V.
The right ideals of End(V) are the ideals R_W, where W is a vector subspace of V and R_W is the set of endomorphisms whose image is contained in W. For W ⊂ W′, R_W ⊂ R_{W′}. Consequently, the maximal right ideals of End(V) are those ideals R_W where W is a hyperplane in V.
The only two-sided ideals of End(V) are (0) and End(V), so that (0) is the only maximal two-sided ideal of End(V).
c) A maximal left ideal of a ring is a maximal right ideal of the opposite ring.

ring.

The following general theorem asserts the existence of maximal ideals

of a given ring; it is a consequence of Zorn’s lemma (theorem A.2.11).

Theorem (2.1.3) (Krull). — Let A be a ring and let I be a left ideal of A, distinct

from A. There exists a maximal left ideal of A which contains I.

In particular, any nonzero ring possesses at least one maximal left

ideal (take I = 0).

The analogous statements for right ideals and two-sided ideals are

true, and proven in the same way.

Proof. — Let ℐ be the set of left ideals of A which contain I and are distinct from A. Let us endow ℐ with the ordering given by inclusion. Let us show that ℐ is inductive. Let indeed (J_i) be a totally ordered family of left ideals of A such that I ⊂ J_i ⊊ A for every i; let us show that it has an upper bound J in ℐ.
If this family is empty, we set J = I.
Otherwise, let J be the union of the ideals J_i. Let us show that J ∈ ℐ. By construction, J contains I, because there is an index i, and J ⊃ J_i ⊃ I. Since 1 does not belong to J_i, for any index i, we deduce that 1 ∉ J and J ≠ A. Finally, we prove that J is a left ideal of A. Let x, y ∈ J; there are indices i and j such that x ∈ J_i and y ∈ J_j. Since the family (J_i) is totally ordered, we have J_i ⊂ J_j or J_j ⊂ J_i. In the first case, x + y ∈ J_j, in the second, x + y ∈ J_i; in any case, x + y ∈ J. Finally, if x ∈ J and a ∈ A, let i be such that x ∈ J_i; since J_i is a left ideal of A, ax ∈ J_i, hence ax ∈ J.
By Zorn's lemma (theorem A.2.11), the set ℐ possesses a maximal element J. By definition of the ordering on ℐ, J is a left ideal of A, distinct from A, which contains I, and which is maximal for that property. Consequently, J is a maximal left ideal of A containing I, hence Krull's theorem. □

Corollary (2.1.4). — Let A be a ring. For an element of A to be left invertible

(resp. right invertible), it is necessary and sufficient that it belongs to no

maximal left ideal (resp. to no maximal right ideal) of A.

Proof. — Let a be an element of A. That a is left invertible means that the left ideal Aa equals A; then no maximal left ideal of A can contain a. Otherwise, if Aa ≠ A, there exists a maximal left ideal of A containing the left ideal Aa, and that ideal contains a. □

Definition (2.1.5). — Let A be a ring. The Jacobson radical of A is the

intersection of all maximal left ideals of A.

Lemma (2.1.6). — Let A be a ring and let J be its Jacobson radical. Let a ∈ A. The following properties are equivalent:
(i) One has a ∈ J;
(ii) For every x ∈ A, 1 + xa is left invertible;
(iii) For every x, y ∈ A, 1 + xay is invertible.

Proof. — We may assume that A ≠ 0, the lemma being trivial otherwise.
(i)⇒(ii). Let a ∈ J, let x ∈ A and let us show that 1 + xa is left invertible. Otherwise, by Krull's theorem 2.1.3, there exists a maximal left ideal I of A such that 1 + xa ∈ I. Since a ∈ J, one has a ∈ I and xa ∈ I; this implies that 1 ∈ I, a contradiction.
(ii)⇒(i). Let a ∈ A be such that 1 + xa is left invertible, for every x ∈ A. Let I be a maximal left ideal of A and let us prove that a ∈ I. Otherwise, one has I + Aa = A, by the definition of a maximal left ideal, hence there exist b ∈ I and x ∈ A such that b + xa = 1. Then b = 1 − xa is left invertible, by assumption, contradicting the fact that b ∈ I. This proves that a ∈ J.
(ii)⇒(iii). Let a ∈ A be such that 1 + xa is left invertible, for every x ∈ A.
Let us first prove that ay ∈ J, for every y ∈ A. Let I be a maximal left ideal of A; assume that ay ∉ I. Then A = I + Aay, by the definition of a maximal left ideal. In particular, y ∈ I + Aay, so that there exist b ∈ I and x ∈ A such that y = b + xay, hence (1 − xa)y = b ∈ I. Since 1 − xa is left invertible, this implies that y ∈ I, hence ay ∈ I, a contradiction. Thus ay ∈ I, and this proves that ay ∈ J.
Now let x, y ∈ A. Since ay ∈ J, the first implication shows that 1 + xay is left invertible; let u ∈ A be such that u(1 + xay) = 1. Then u = 1 − uxay is itself left invertible, again because ay ∈ J; since u also admits 1 + xay as a right inverse, u is invertible, and 1 + xay = u⁻¹ is invertible as well.
The implication (iii)⇒(ii) being obvious, this concludes the proof of the lemma. □

Corollary (2.1.7). — Let A be a ring and let J be its Jacobson radical.

a) The ideal J is the intersection of all right maximal ideals of A, and is a

two-sided ideal.

b) An element a ∈ A is invertible (resp. left invertible, right invertible) if and only if the same holds of its image in A/J.

Proof. — a) Since characterization (iii) of the Jacobson radical in lemma 2.1.6 is symmetric, it implies that J is equal to the Jacobson radical of the opposite ring A°, that is, the intersection of all maximal right ideals of A. In particular, J is both a left ideal and a right ideal, hence a two-sided ideal.
b) Let a ∈ A. If a is left invertible, then so is its image in A/J. So let us assume that the image of a in A/J is left invertible. Let b ∈ A be any element whose image modulo J is a left inverse of a. One has ba ∈ 1 + J. By lemma 2.1.6, ba is left invertible, so that a is left invertible as well.
By symmetry, a is right invertible if and only if its image in A/J is right invertible. Consequently, a is invertible if and only if its image in A/J is invertible. □

By symmetry, 0 is right invertible if and only if its image in A/J is rightinvertible. Consequently, 0 is invertible if and only if its image in A/J isinvertible. �

Definition (2.1.8). — Let A be a ring and let J be its Jacobson radical. One

says that the ring A is local if the quotient ring A/J is a division ring.

Proposition (2.1.9). — Let A be a ring, let J be its Jacobson radical. The

following properties are equivalent:


(i) The ring A is local;

(ii) The ring A admits a unique maximal left ideal ;

(ii′) The ring A admits a unique maximal right ideal ;

(iii) The ideal J is a maximal left ideal of A;

(iii′) The ideal J is a maximal right ideal of A;

(iv) The set of noninvertible elements of A is an additive subgroup of A.

Proof. — If A = 0, then A is not local, it has no maximal left ideal, no maximal right ideal, and every element of A is invertible; this proves the equivalence of the given properties in this case. Let us now assume that A ≠ 0.
Assume that A is a local ring. Then 0 is the unique maximal left (resp. right) ideal of the division ring A/J, so that J is the unique maximal left (resp. right) ideal of A. Moreover, every nonzero element of A/J is invertible, so that A ∖ J is the set of invertible elements of A, by corollary 2.1.7. This proves that (i) implies all the other assertions (ii)–(iv).
Let us now assume (ii), that A admits a unique maximal left ideal; since J is the intersection of all maximal left ideals of A, it is equal to this ideal, so that J is a maximal left ideal of A. Conversely, if J is a maximal left ideal of A, then every maximal left ideal of A contains J, hence coincides with it, so that J is the unique maximal left ideal of A. This shows the equivalence of (ii) and (iii). Similarly, (ii′) and (iii′) are equivalent.
Let us assume (iii). No element of J is left invertible; conversely, an element of A ∖ J belongs to no maximal left ideal (every maximal left ideal contains J, hence coincides with it), so it is left invertible. Let a ∈ A ∖ J; let x ∈ A be such that xa = 1. Since J is a right ideal, one has x ∉ J, hence x is left invertible. Then x is both left and right invertible, hence x is invertible. This implies that a is invertible. We thus have shown that J is the set of noninvertible elements of A. Since J ≠ A, this implies that A/J is a division ring, hence A is a local ring. We have shown the implication (iii)⇒(i), and the proof of the implication (iii′)⇒(i) is analogous.
Let us now assume (iv), that is, that the set N of noninvertible elements of A is an additive subgroup of A.
We first prove that an element a ∈ A which is left invertible is invertible. Let b ∈ A be such that ba = 1. The element 1 − ab is not left invertible, since an equality x(1 − ab) = 1 would imply a = x(1 − ab)a = x(a − aba) = 0, whereas a ≠ 0 (because ba = 1 and A ≠ 0). Since 1 = ab + (1 − ab) is invertible and N is an additive subgroup, this proves that ab is invertible; then a is right invertible, hence it is invertible.
Let M be a maximal left ideal of A. No element of M can be invertible, hence M ⊂ N. Conversely, let a ∈ A ∖ M; by the maximality of M, one may write 1 = m + xa with m ∈ M and x ∈ A. Since m ∈ N and 1 ∉ N, the element xa does not belong to N, hence is invertible; consequently a is left invertible, hence invertible by what precedes, and a ∉ N. In other words, N ⊂ M. This proves that N is the unique maximal left ideal of A, hence (ii). By what precedes, N = J and A is a local ring. □

2.2. Maximal and prime ideals in a commutative ring

In this section, we restrict ourselves to the case of a commutative ring.

Proposition (2.2.1). — Let A be a commutative ring. An ideal I of A is maximal if and only if the ring A/I is a field.
Proof. — Let us assume that A/I is a field. Its ideals are then (0) and A/I. Thus, the ideals of A containing I are I and A, which implies that I is a maximal ideal. Conversely, if I is a maximal ideal, this argument shows that the ring A/I is nonzero and that its only ideals are 0 and A/I itself. Let x be any nonzero element of A/I. Since the ideal (x) generated by x is nonzero, it equals A/I; therefore, there exists y ∈ A/I such that xy = 1, so that x is invertible. This shows that A/I is a field. □

Definition (2.2.2). — Let A be a commutative ring. An ideal P of A is said to

be prime if the quotient ring A/P is an integral domain.

In particular, any maximal ideal of A is a prime ideal. Together

with Krull’s theorem (theorem 2.1.3), this shows that any nonzero

commutative ring possesses at least one prime ideal.

The set of all prime ideals of a ring A is denoted by Spec(A) and called

the spectrum of A. The set of all maximal ideals of A is denoted by Max(A) and called its maximal spectrum.

Proposition (2.2.3). — Let A be a commutative ring and let P be an ideal of A.

The following properties are equivalent:

(i) The ideal P is a prime ideal;


(ii) The ideal P is distinct from A and the product of any two elements of A

which do not belong to P does not belong to P;

(iii) The complementary subset A ∖ P is a multiplicative subset of A.

Proof. — The equivalence of the last two properties is straightforward.

Moreover, A/P being an integral domain (property (i), by definition)

means that P ≠ A (an integral domain is nonnull) and that the product of

any two elements not in P does not belong to P, hence the equivalence

with property (ii). �

Examples (2.2.4). — a) The ideal P = (0) is prime if and only if A is an

integral domain.

b) In the ring Z, the maximal ideals are the ideals (p), for all prime numbers p; the only remaining prime ideal is (0).
These ideals are indeed prime. Conversely, let I be a prime ideal of Z and let us show that I is of the given form. We may assume that I ≠ (0); let then p be the smallest strictly positive element of I, so that I = pZ. We need to show that p is a prime number. Since Z is not a prime ideal of Z, one has p > 1. If p is not prime, there are integers m, n > 1 such that p = mn. Then 1 < m, n < p, so that neither m nor n belongs to I; but since p = mn belongs to I, this contradicts the definition of a prime ideal.

Proposition (2.2.5). — Let f : A → B be a morphism of rings and let Q be a prime ideal of B. Then P = f⁻¹(Q) is a prime ideal of A.
Proof. — The ideal P is the kernel of the morphism from A to B/Q given by the composition of f and of the canonical morphism B → B/Q. Consequently, passing to the quotient, f defines an injective morphism from A/P to B/Q. In particular, A/P is a domain and the ideal P is prime.
We could also have proved this fact directly. First of all, P ≠ A since f(1) = 1 and 1 ∉ Q. Let then a and b be elements of A such that ab ∈ P. Then f(ab) = f(a)f(b) belongs to Q, hence f(a) ∈ Q or f(b) ∈ Q. In the first case, a ∈ P, in the second, b ∈ P. □

Thus a morphism of rings f : A → B gives rise to a map f* : Spec(B) → Spec(A), given by f*(Q) = f⁻¹(Q) for every Q ∈ Spec(B).


Remarks (2.2.6). — Let A be a commutative ring.

a) For every ideal I of A, let V(I) be the set of prime ideals P of A which contain I. One has V(0) = Spec(A) and V(A) = ∅.
Let (I_j) be a family of ideals of A and let I = ∑ I_j be the ideal it generates. A prime ideal P contains I if and only if it contains every I_j; in other words, ⋂_j V(I_j) = V(I).
Let I and J be ideals of A. Let us show that V(I) ∪ V(J) = V(I ∩ J) = V(IJ). Indeed, if P ⊃ I or P ⊃ J, then P ⊃ I ∩ J ⊃ IJ, so that V(I) ∪ V(J) ⊂ V(I ∩ J) ⊂ V(IJ). On the other hand, let us assume that P ∉ V(I) ∪ V(J). Then P ⊅ I, so that there exists a ∈ I such that a ∉ P; similarly, there is b ∈ J such that b ∉ P. Then ab ∈ IJ but ab ∉ P, hence P ∉ V(IJ). This proves the claim.
These properties show that the subsets of Spec(A) of the form V(I) are the closed subsets of a topology on Spec(A). This topology is called the Zariski topology.

b) Let I be an ideal of A. If I = A, then V(I) = ∅. Conversely, let us

assume that I ≠ A. By Krull’s theorem (theorem 2.1.3), there exists a

maximal ideal P of A such that I ⊂ P; then P ∈ V(I), hence V(I) ≠ ∅.

c) Let f : A → B be a morphism of commutative rings. Let I be an ideal of A; a prime ideal Q of B belongs to the inverse image (f*)⁻¹(V(I)) of V(I) if and only if f⁻¹(Q) contains I; this is equivalent to the fact that Q contains f(I), or, since Q is an ideal, to the inclusion Q ⊃ f(I)B. We thus have shown that (f*)⁻¹(V(I)) = V(f(I)B). This shows that the inverse image of a closed subset of Spec(A) is closed in Spec(B). In other words, the map f* : Spec(B) → Spec(A) induced by f is continuous.

Proposition (2.2.7). — Let A be a commutative ring, let S be a multiplicative subset of A and let f : A → S⁻¹A be the canonical morphism. The map f* : Spec(S⁻¹A) → Spec(A) is injective, and its image is the set of prime ideals of A which are disjoint from S.
Proof. — For every prime ideal Q of S⁻¹A, the ideal P = f*(Q) of A is a prime ideal of A. By proposition 1.7.1, it satisfies Q = P·S⁻¹A. Moreover, P ∩ S = ∅ since, otherwise, one would have 1 ∈ Q. This implies that the map f* : Spec(S⁻¹A) → Spec(A) is injective and that its image is contained in the set of prime ideals of A which are disjoint from S.


Conversely, let P be a prime ideal of A which is disjoint from S. The ideal Q = P·S⁻¹A of S⁻¹A is the set of all fractions a/s, with a ∈ P and s ∈ S.
Let us first show that f⁻¹(Q) = P: let a ∈ A be such that a/1 ∈ Q; then there exist b ∈ P and s ∈ S such that a/1 = b/s, hence there exists t ∈ S such that tsa = tb. In particular tsa ∈ P; one has ts ∉ P since P ∩ S = ∅, hence a ∈ P, by definition of a prime ideal.
Let us show that Q is a prime ideal of S⁻¹A. Since f⁻¹(Q) = P ≠ A, one has Q ≠ S⁻¹A. Let a, b ∈ A and let s, t ∈ S be such that (a/s)(b/t) ∈ Q. Then ab/st ∈ Q, hence ab/1 ∈ Q. By what precedes, ab ∈ P. Since P is a prime ideal of A, either a or b belongs to P. This implies that either a/s or b/t belongs to Q.
We thus have proved that f* is a bijection from Spec(S⁻¹A) to the subset of Spec(A) consisting of the prime ideals P such that P ∩ S = ∅. □

Example (2.2.8). — Let A be a commutative ring and let P be a prime ideal of A. Then the fraction ring A_P is a local ring, and PA_P is its unique maximal ideal. Indeed, the prime ideals of the fraction ring A_P are of the form QA_P, where Q is a prime ideal of A such that Q ∩ (A ∖ P) = ∅, that is, Q ⊂ P.
This observation leads to a useful general proof technique in commutative algebra, namely to first prove the desired result in the case of local rings, and then to reduce to this case by localization.
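A finite experiment can make example 2.2.8 tangible. The Python sketch below (my own, with A = Z and P = (7) chosen for the illustration) works in Z_(7), whose elements are fractions with denominator prime to 7, and checks criterion (iv) of proposition 2.1.9: a sum of two non-units is again a non-unit.

```python
# Example 2.2.8 with A = Z, P = (7) (illustrative choice): Z_(7) is a local ring.
from fractions import Fraction
import random

p = 7

def in_Zp(x: Fraction) -> bool:
    return x.denominator % p != 0

def is_unit(x: Fraction) -> bool:
    return in_Zp(x) and x.numerator % p != 0

for _ in range(500):
    x = Fraction(random.randrange(-40, 40), random.choice([1, 2, 3, 4, 5, 6, 8, 9]))
    y = Fraction(random.randrange(-40, 40), random.choice([1, 2, 3, 4, 5, 6, 8, 9]))
    if not is_unit(x) and not is_unit(y):          # both lie in the maximal ideal 7 Z_(7)
        assert not is_unit(x + y)                  # so does their sum
print("non-units of Z_(7) are stable under addition, as in a local ring")
```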

Remark (2.2.9). — When Spec(A) and Spec(S⁻¹A) are endowed with their Zariski topologies, the map f* is continuous, by remark 2.2.6, c). Let us prove that it is a homeomorphism from Spec(S⁻¹A) onto its image in Spec(A).
It suffices to show that it maps closed sets of Spec(S⁻¹A) to traces of closed sets of Spec(A). Let thus J be an ideal of S⁻¹A and let I = f⁻¹(J) be the set of all a ∈ A such that a/1 ∈ J. Then I is an ideal of A and I·S⁻¹A = J. Let us prove that f*(V(J)) = V(I) ∩ f*(Spec(S⁻¹A)).
Let Q be a prime ideal of S⁻¹A and let P = f*(Q). Assume that Q contains J; let a ∈ I, then a/1 ∈ J, so that a/1 ∈ Q and finally a ∈ P, so that P contains I. Assume conversely that P contains I; let a ∈ A and s ∈ S be such that a/s ∈ J; then a/1 ∈ J, hence a ∈ I, by definition of I, so that a ∈ P, a/1 ∈ Q and a/s ∈ Q; hence Q contains J. This concludes the proof.


Examples (2.2.10). — The next two examples give some geometric content to the terminology “localization” which is used in the context of fraction rings.
a) Let a ∈ A and let S be the multiplicative subset {1, a, a², …}. For a prime ideal P, being disjoint from S is equivalent to not containing a; consequently, f* identifies Spec(S⁻¹A) with the complement of the closed subset V((a)) of Spec(A).
b) Let P be a prime ideal of A and let S be the multiplicative subset A ∖ P. Being disjoint from S is equivalent to being contained in P.
Let us show that f* identifies Spec(S⁻¹A) with the intersection of all neighborhoods of P in Spec(A).
Let Q be a prime ideal of A such that Q ⊂ P. Let U be an open neighborhood of P in Spec(A) and let I be an ideal of A such that U = Spec(A) ∖ V(I). Then P ∈ U, hence P ∉ V(I), hence P does not contain I. A fortiori, Q does not contain I, so that Q ∉ V(I). This proves that the image of Spec(S⁻¹A) is contained in any open neighborhood of P in Spec(A). Conversely, if Q is a prime ideal of A which is not contained in P, then P ∉ V(Q), and Spec(A) ∖ V(Q) is an open neighborhood of P that does not contain Q.

Proposition (2.2.11). — Let A be a commutative ring.

a) An element of A is nilpotent if and only if it belongs to every prime ideal

of A.

b) For any ideal I of A, its radical √I is the intersection of all prime ideals of A containing I.

Proof. — a) Let a ∈ A and let P be a prime ideal of A. If a ∉ P, the definition of a prime ideal implies by induction on n that aⁿ ∉ P for any integer n ≥ 0. In particular, aⁿ ≠ 0 and a is not nilpotent. Nilpotent elements thus belong to every prime ideal.
Conversely, let a ∈ A be a non-nilpotent element. The set S = {1, a, a², …} of all powers of a is a multiplicative subset of A which does not contain 0. The localization S⁻¹A is then nonnull. Let M be a prime ideal of S⁻¹A (for example, a maximal ideal of this nonzero ring) and let P be the set of all elements x ∈ A such that x/1 ∈ M. By definition, P is the inverse image of M by the canonical morphism from A to S⁻¹A. It is therefore a prime ideal of A. Since a ∈ S, a/1 is invertible in S⁻¹A, hence a/1 ∉ M and a ∉ P. We thus have found a prime ideal of A which does not contain a, and this concludes the proof.
Part b) follows formally. Indeed, let B be the quotient ring A/I. An element a ∈ A belongs to √I if and only if its class cl_I(a) is nilpotent in A/I. By a), this is equivalent to the fact that cl_I(a) belongs to every prime ideal of A/I. Recall that every ideal of A/I is of the form P/I, for a unique ideal P of A which contains I; moreover, (A/I)/(P/I) is isomorphic to A/P, so that P/I is a prime ideal of A/I if and only if P is a prime ideal of A which contains I. Consequently, a belongs to √I if and only if it belongs to every prime ideal containing I. In other words, √I is the intersection of all prime ideals of A which contain I, as was to be shown. □
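Proposition 2.2.11 can be seen at work in a finite ring; the Python check below (my own, with A = Z/360Z) compares the nilpotent elements with the intersection of the prime ideals, which in Z/nZ are generated by the prime divisors of n.

```python
# Finite check of proposition 2.2.11 in A = Z/nZ (n = 360 is an arbitrary choice).
from sympy import primefactors

n = 360
nilpotents = {a for a in range(n) if pow(a, n, n) == 0}
intersection_of_primes = {a for a in range(n) if all(a % p == 0 for p in primefactors(n))}
assert nilpotents == intersection_of_primes
print(f"in Z/{n}Z the nilradical has {len(nilpotents)} elements (the multiples of 30)")
```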

Lemma (2.2.12) (Prime avoidance lemma). — Let A be a commutative ring, let n be an integer ≥ 1, let P_1, …, P_n be prime ideals of A and let I be an ideal of A. If I ⊂ ⋃_{i=1}^{n} P_i, then there exists i ∈ {1, …, n} such that I ⊂ P_i.
Proof. — We prove the result by induction on n. It is obvious if n = 1.
Assume that for every i, I is not contained in P_i. By induction, for every i ∈ {1, …, n}, the ideal I is not contained in the union of the prime ideals P_j, for j ≠ i. Consequently, there exists a_i ∈ I such that a_i ∉ P_j if j ≠ i. Since I ⊂ ⋃_{j=1}^{n} P_j, we see that a_i ∈ P_i. Let a = a_1 + a_2⋯a_n; since the elements a_1, …, a_n belong to I, we have a ∈ I. On the other hand, P_1 is prime, hence a_2⋯a_n ∉ P_1; moreover, a_1 ∈ P_1, so that a ∉ P_1. For i ∈ {2, …, n}, a_2⋯a_n ∈ P_i, but a_1 ∉ P_i, hence a ∉ P_i. Consequently, a does not belong to any of the ideals P_1, …, P_n. This contradicts the hypothesis that I is contained in their union. □

The following lemma is another application of Zorn’s lemma; this time it constructs minimal prime ideals.
Lemma (2.2.13). — Let A be a commutative ring and let I be an ideal of A such that I ≠ A. Then, for any prime ideal Q containing I, there exists a prime ideal P of A such that I ⊂ P ⊂ Q and which is minimal among all prime ideals of A containing I.


In particular, every prime ideal of A contains a prime ideal which is minimal for inclusion.
Proof. — Let us order the set 𝒫 of prime ideals of A containing I by reverse inclusion: we say that P ≼ Q if P contains Q. The lemma will follow at once from Zorn’s lemma (theorem A.2.11) once we show that 𝒫 is an inductive set.
To that aim, let (Pⱼ)ⱼ∈J be a totally ordered family of elements of 𝒫. If this family is empty, the prime ideal Q contains I and is an upper bound. Let us assume that this family is not empty and let P be its intersection. It contains I, because I is contained in each Pⱼ, and it is distinct from A because it is contained in Pⱼ ≠ A, for any j ∈ J. Let us show that P is a prime ideal of A. Let a, b ∈ A be such that ab ∈ P but b ∉ P; let us show that a belongs to every ideal Pⱼ. So let j ∈ J. By definition of P, there exists i ∈ J such that b ∉ Pᵢ. Since the family (Pⱼ) is totally ordered, we have either Pᵢ ⊂ Pⱼ or Pⱼ ⊂ Pᵢ. In the first case, the relations ab ∈ Pᵢ and b ∉ Pᵢ imply that a ∈ Pᵢ, hence a ∈ Pⱼ. In the latter case, we have b ∉ Pⱼ; since ab ∈ Pⱼ and Pⱼ is prime, this implies a ∈ Pⱼ. Consequently, P ∈ 𝒫, hence 𝒫 is an inductive set. □

Remark (2.2.14). — Let S be a multiplicative subset of A. The map P ↦ S⁻¹P is a bijection, respecting inclusion, from the set of prime ideals of A disjoint from S to the set of prime ideals of S⁻¹A. In particular, the inverse image of a minimal prime ideal of S⁻¹A by this bijection is a minimal prime ideal of A.
Conversely, if P is a minimal prime ideal of A which is disjoint from S, then S⁻¹P is a minimal prime ideal of S⁻¹A.

Lemma (2.2.15). — Let A be a commutative ring and let P be a minimal prime ideal of A. For every a ∈ P there exist b ∈ A ∖ P and an integer n ≥ 0 such that aⁿb = 0.
Proof. — Let a ∈ P and let S be the set of elements of A of the form aⁿb, for b ∈ A ∖ P and n ≥ 0. It is a multiplicative subset of A that contains A ∖ P. Assume 0 ∉ S. Then S⁻¹A is a nonzero ring, hence contains a prime ideal Q′. The inverse image P′ of Q′ in A is a prime ideal disjoint from S. In particular, P′ ⊂ P. Moreover, a ∉ P′, because a ∈ S, hence P′ ≠ P. This contradicts the hypothesis that P is a minimal prime ideal of A. Consequently, 0 ∈ S, and there exist b ∈ A ∖ P and n ≥ 0 such that aⁿb = 0. □
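Here is a similar brute-force check (ours, not from the text) of lemma 2.2.15 in Z/12Z, taking for P the image of the minimal prime ideal (2): every a ∈ P satisfies aⁿb = 0 for some b outside P and some small exponent n.

# Verify lemma 2.2.15 in Z/12Z for the minimal prime ideal generated by 2.
N = 12
P = {x for x in range(N) if x % 2 == 0}
outside = set(range(N)) - P
assert all(any((a**n * b) % N == 0 for n in range(1, 4) for b in outside) for a in P)
print("lemma 2.2.15 checked for Z/12Z and P = (2)")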

2.3. Hilbert’s Nullstellensatz

The following theorem of Hilbert gives a precise description of the maximal ideals of polynomial rings over an algebraically closed field.
Theorem (2.3.1) (Hilbert’s Nullstellensatz). — Let K be an algebraically closed field and let n be a positive integer. The maximal ideals of K[X₁, …, Xₙ] are the ideals (X₁ − a₁, …, Xₙ − aₙ), for (a₁, …, aₙ) ∈ Kⁿ.

Proof. — Let us first show that for any (a₁, …, aₙ) ∈ Kⁿ, the ideal (X₁ − a₁, …, Xₙ − aₙ) of K[X₁, …, Xₙ] is indeed a maximal ideal. Let φ : K[X₁, …, Xₙ] → K be the “evaluation at (a₁, …, aₙ)” morphism, defined by φ(P) = P(a₁, …, aₙ). It is surjective and induces an isomorphism K[X₁, …, Xₙ]/Ker(φ) ≃ K; since K is a field, Ker(φ) is a maximal ideal of K[X₁, …, Xₙ]. It thus suffices to show that Ker(φ) coincides with the ideal (X₁ − a₁, …, Xₙ − aₙ). One inclusion is obvious: if P = (X₁ − a₁)P₁ + ⋯ + (Xₙ − aₙ)Pₙ for some polynomials P₁, …, Pₙ, then φ(P) = 0. Conversely, let P ∈ K[X₁, …, Xₙ] be such that φ(P) = 0. Let us perform the euclidean division of P by X₁ − a₁ with respect to the variable X₁; we get a polynomial P₁ and a polynomial R₁ ∈ K[X₂, …, Xₙ] such that
P = (X₁ − a₁)P₁ + R₁(X₂, …, Xₙ).
Let us repeat the process and divide by X₂ − a₂, etc.: we see that there exist polynomials P₁, …, Pₙ, where Pᵢ ∈ K[Xᵢ, …, Xₙ], and a constant polynomial Rₙ, such that
P = (X₁ − a₁)P₁ + ⋯ + (Xₙ − aₙ)Pₙ + Rₙ.
Evaluating at (a₁, …, aₙ), we get
φ(P) = P(a₁, …, aₙ) = Rₙ.
Since we assumed φ(P) = 0, we have Rₙ = 0 and P belongs to the ideal (X₁ − a₁, …, Xₙ − aₙ).
We shall only prove the converse assertion under the supplementary hypothesis that the field K is uncountable. (Exercise 2/5 explains how to derive the general case from this particular case; see also corollary 9.1.3 for an alternative proof.) Let M be a maximal ideal of K[X₁, …, Xₙ] and let L be the quotient ring K[X₁, …, Xₙ]/M; it is a field. Let xᵢ denote the class of Xᵢ in L. The image of K by the canonical homomorphism is a subfield of L which we identify with K.
The field L admits a natural structure of K-vector space. As such, it is generated by the countable family of all monomials x₁^{i₁}⋯xₙ^{iₙ}, for i₁, …, iₙ ∈ N. Indeed, any element of L is the class of some polynomial, hence of a linear combination of monomials. Moreover, the set Nⁿ of all possible exponents of monomials is countable.
Let a be any element of L and let φ : K[T] → L be the ring morphism given by φ(P) = P(a). Assume that it is injective. Then φ extends to a (still injective) morphism of fields, still denoted by φ, from K(T) to L. In particular, the elements 1/(a − c), for c ∈ K, are linearly independent over K, since they are the images of the rational functions 1/(T − c), which are linearly independent in K(T) in view of the uniqueness of the decomposition of a rational function into simple terms. However, this would contradict lemma 2.3.2 below: any linearly independent family of L must be countable, while K is not. This implies that the morphism φ is not injective.
Let P ∈ K[T] be a nonzero polynomial such that P(a) = 0. The polynomial P is not constant. Since K is algebraically closed, it has the form c∏_{i=1}^{m}(T − cᵢ), for some strictly positive integer m, elements c₁, …, c_m ∈ K and c ∈ K*. Then c∏_{i=1}^{m}(a − cᵢ) = 0 in L. Since L is a field, there exists i ∈ {1, …, m} such that a = cᵢ; consequently, a ∈ K, and L = K.
In particular, there exists, for any i ∈ {1, …, n}, an element aᵢ ∈ K such that xᵢ = aᵢ. This implies the relations Xᵢ − aᵢ ∈ M, hence the ideal M contains the ideal (X₁ − a₁, …, Xₙ − aₙ). Since the latter is a maximal ideal, we have equality, and this concludes the proof. □

Lemma (2.3.2). — Let K be a field and let V be a K-vector space which is generated by a countable family. Then every linearly independent subset of V is countable.


Proof. — This follows from a general result on the dimension of vector spaces, possibly infinite dimensional. However, one may give an alternative argument that only makes use of finite dimensional vector spaces.
Let J be a countable set and let (eⱼ)ⱼ∈J be a generating family of V. Let (vᵢ)ᵢ∈I be a linearly independent family in V; we have to prove that I is countable. For every finite subset A of J, the subspace V_A of V generated by the elements eⱼ, for j ∈ A, is finite dimensional. In particular, the subset I_A of all i ∈ I such that vᵢ ∈ V_A is finite, for the family (vᵢ)ᵢ∈I_A is linearly independent in V_A. Since the set of all finite subsets of J is countable, the union I′ of all the subsets I_A, when A runs over all finite subsets of J, is countable. For every i ∈ I, there is a finite subset A of J such that vᵢ is a linear combination of the eⱼ, for j ∈ A, hence i ∈ I′. Consequently, I = I′ and I is countable. □

The following theorem is a topological analogue.
Theorem (2.3.3) (Gelfand). — Let X be a topological space and let 𝒞(X) be the ring of real valued continuous functions on X.
For any x ∈ X, the set M_x of all continuous functions on X which vanish at x is a maximal ideal of the ring 𝒞(X).
If X is a compact metric space, the map x ↦ M_x from X to the set of maximal ideals of 𝒞(X) is a bijection.

Proof. — Let φ_x : 𝒞(X) → R be the ring morphism given by f ↦ f(x) (“evaluation at the point x”). It is surjective, with kernel M_x. This proves that M_x is a maximal ideal of 𝒞(X).
Let us assume that X is a compact metric space, with distance d. For any point x ∈ X, the function y ↦ d(x, y) is continuous, vanishes at x, but does not vanish at any other point of X. Consequently, it belongs to M_x but not to M_y if y ≠ x. This shows that M_x ≠ M_y for x ≠ y, and the mapping x ↦ M_x is injective.
Let I be an ideal of 𝒞(X) which is not contained in any of the maximal ideals M_x, for x ∈ X. For any x ∈ X, there thus exists a continuous function f_x ∈ I such that f_x(x) ≠ 0. By continuity, the set U_x of points of X at which f_x does not vanish is an open neighborhood of x. These open sets U_x cover X. Since X is compact, there exists a finite subset S ⊂ X such that the open sets U_s, for s ∈ S, cover X as well. Let us set f = ∑_{s∈S} (f_s)². This is a nonnegative continuous function, and it belongs to I. If x ∈ X is a point such that f(x) = 0, then for any s ∈ S, f_s(x) = 0, that is to say, x ∉ U_s. Since the U_s cover X, we have a contradiction, and f does not vanish at any point of X. It follows that f is invertible in 𝒞(X) (its inverse is the continuous function x ↦ 1/f(x)). Since f ∈ I, we have shown I = 𝒞(X). Consequently, any ideal of 𝒞(X) distinct from 𝒞(X) is contained in one of the maximal ideals M_x, so that these ideals constitute the whole set of maximal ideals of 𝒞(X). □

Let K be an algebraically closed field. Hilbert’s Nullstellensatz is the basis of an admirable correspondence between algebra (some ideals of the polynomial ring K[X₁, …, Xₙ]) and geometry (some subsets of Kⁿ), which we now describe.
Definition (2.3.4). — An algebraic set is a subset of Kⁿ defined by a family of polynomial equations.
Specifically, a subset Z of Kⁿ is an algebraic set if and only if there exists a subset S of K[X₁, …, Xₙ] such that
Z = 𝒱(S) = {(a₁, …, aₙ) ∈ Kⁿ ; for every P ∈ S, P(a₁, …, aₙ) = 0}.
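Over K = C, a computer algebra system can list the points of a simple algebraic set. The following SymPy snippet (our illustration; the two equations are chosen by us, not taken from the text) computes 𝒱({X² + Y² − 1, X − Y}), the intersection of a circle and a line.

# The algebraic set cut out by the circle X^2 + Y^2 = 1 and the line X = Y.
from sympy import symbols, solve

X, Y = symbols('X Y')
points = solve([X**2 + Y**2 - 1, X - Y], [X, Y])
print(points)   # the two points (±sqrt(2)/2, ±sqrt(2)/2)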

Proposition (2.3.5). — a) If S ⊂ S′, then 𝒱(S′) ⊂ 𝒱(S).
b) The empty set and Kⁿ are algebraic sets.
c) If ⟨S⟩ is the ideal generated by S in K[X₁, …, Xₙ], then 𝒱(⟨S⟩) = 𝒱(S).
d) The intersection of a family of algebraic sets and the union of two algebraic sets are algebraic sets.
e) If I is any ideal of K[X₁, …, Xₙ], then 𝒱(I) = 𝒱(√I).

Proof. — a) Let (a₁, …, aₙ) ∈ 𝒱(S′) and let us show that (a₁, …, aₙ) ∈ 𝒱(S). If P ∈ S, we have to show that P(a₁, …, aₙ) = 0, which holds since P ∈ S′.
b) One has ∅ = 𝒱({1}) (the constant polynomial 1 does not vanish at any point of Kⁿ) and Kⁿ = 𝒱({0}) (the zero polynomial vanishes everywhere; we could also write Kⁿ = 𝒱(∅)).
c) Since S ⊂ ⟨S⟩, we have 𝒱(⟨S⟩) ⊂ 𝒱(S). Conversely, let (a₁, …, aₙ) ∈ 𝒱(S) and let us show that (a₁, …, aₙ) ∈ 𝒱(⟨S⟩). Let P ∈ ⟨S⟩; by definition, there are finite families (Pᵢ)ᵢ∈I and (Qᵢ)ᵢ∈I of polynomials in K[X₁, …, Xₙ] such that P = ∑ PᵢQᵢ and Qᵢ ∈ S for every i ∈ I. Then
P(a₁, …, aₙ) = ∑_{i∈I} Pᵢ(a₁, …, aₙ)Qᵢ(a₁, …, aₙ) = 0
since Qᵢ(a₁, …, aₙ) = 0. Consequently, (a₁, …, aₙ) ∈ 𝒱(⟨S⟩).
d) Let (Zⱼ) be a family of algebraic sets and, for every j, let Sⱼ be a subset of K[X₁, …, Xₙ] such that Zⱼ = 𝒱(Sⱼ). We shall show that
⋂ⱼ 𝒱(Sⱼ) = 𝒱(⋃ⱼ Sⱼ).
Indeed, (a₁, …, aₙ) belongs to ⋂ⱼ 𝒱(Sⱼ) if and only if P(a₁, …, aₙ) = 0 for every j and every P ∈ Sⱼ, which means exactly that P(a₁, …, aₙ) = 0 for every P ∈ ⋃ⱼ Sⱼ, that is, (a₁, …, aₙ) ∈ 𝒱(⋃ⱼ Sⱼ).
Let S and S′ be two subsets of K[X₁, …, Xₙ]. Let T = {PP′ ; P ∈ S, P′ ∈ S′}. We are going to show that 𝒱(S) ∪ 𝒱(S′) = 𝒱(T). Indeed, if (a₁, …, aₙ) ∈ 𝒱(S) and Q ∈ T, we may write Q = PP′ with P ∈ S and P′ ∈ S′. Then Q(a₁, …, aₙ) = P(a₁, …, aₙ)P′(a₁, …, aₙ) = 0 since (a₁, …, aₙ) ∈ 𝒱(S). In other words, 𝒱(S) ⊂ 𝒱(T). Similarly, 𝒱(S′) ⊂ 𝒱(T), hence 𝒱(S) ∪ 𝒱(S′) ⊂ 𝒱(T). Conversely, let (a₁, …, aₙ) ∈ 𝒱(T). To show that (a₁, …, aₙ) ∈ 𝒱(S) ∪ 𝒱(S′), it suffices to prove that if (a₁, …, aₙ) ∉ 𝒱(S′), then (a₁, …, aₙ) ∈ 𝒱(S). By definition, there is a polynomial P′ ∈ S′ such that P′(a₁, …, aₙ) ≠ 0. Then, for every P ∈ S, one has PP′ ∈ T, hence 0 = (PP′)(a₁, …, aₙ) = P(a₁, …, aₙ)P′(a₁, …, aₙ), so that P(a₁, …, aₙ) = 0, as was to be shown.
e) Since I ⊂ √I, one has 𝒱(√I) ⊂ 𝒱(I). Conversely, let (a₁, …, aₙ) ∈ 𝒱(I). Let P ∈ √I and let m ≥ 1 be such that P^m ∈ I. Then P^m(a₁, …, aₙ) = 0, hence P(a₁, …, aₙ) = 0 and (a₁, …, aₙ) ∈ 𝒱(√I). □

Remark (2.3.6). — The preceding proposition can be rephrased by saying that there exists a topology on Kⁿ for which the closed sets are the algebraic sets. This topology is called the Zariski topology.
We have constructed one of the two directions of the correspondence: with any ideal I of K[X₁, …, Xₙ], we associate the algebraic set 𝒱(I).


From the proof of the previous proposition, we recall the formulae
𝒱(0) = Kⁿ,  𝒱(K[X₁, …, Xₙ]) = ∅,
𝒱(∑ⱼ Iⱼ) = ⋂ⱼ 𝒱(Iⱼ),  𝒱(IJ) = 𝒱(I) ∪ 𝒱(J).
The other direction of the correspondence associates an ideal with any subset of Kⁿ.

Definition (2.3.7). — Let V be a subset of Kⁿ. One defines
ℐ(V) = {P ∈ K[X₁, …, Xₙ] ; P(a₁, …, aₙ) = 0 for every (a₁, …, aₙ) ∈ V}.

Proposition (2.3.8). — a) For every V ⊂ Kⁿ, ℐ(V) is an ideal of K[X₁, …, Xₙ]. Moreover, ℐ(V) = √ℐ(V).
b) If V ⊂ V′, then ℐ(V′) ⊂ ℐ(V).
c) For any two subsets V and V′ of Kⁿ, one has ℐ(V ∪ V′) = ℐ(V) ∩ ℐ(V′).

Proof. — a) For any (a₁, …, aₙ) ∈ Kⁿ, the map P ↦ P(a₁, …, aₙ) is a ring morphism from K[X₁, …, Xₙ] to K, and ℐ(V) is the intersection of the kernels of those morphisms, for (a₁, …, aₙ) ∈ V. Consequently, it is an ideal of K[X₁, …, Xₙ]. Moreover, if P ∈ K[X₁, …, Xₙ] and m ≥ 1 is such that P^m ∈ ℐ(V), then P^m(a₁, …, aₙ) = 0 for every (a₁, …, aₙ) ∈ V; this implies that P(a₁, …, aₙ) = 0, hence P ∈ ℐ(V). This shows that ℐ(V) = √ℐ(V).
b) Let P ∈ ℐ(V′). For any (a₁, …, aₙ) ∈ V, we have P(a₁, …, aₙ) = 0, since V ⊂ V′. Consequently, P ∈ ℐ(V).
c) By definition, a polynomial P belongs to ℐ(V ∪ V′) if and only if it vanishes at every point of V and of V′. □

Proposition (2.3.9). — a) For any ideal I of K[X₁, …, Xₙ], one has I ⊂ ℐ(𝒱(I)).
b) For any subset V of Kⁿ, one has V ⊂ 𝒱(ℐ(V)).

Proof. — a) Let P ∈ I; let us show that P ∈ ℐ(𝒱(I)). We need to show that P vanishes at every point of 𝒱(I). Now, for any point (a₁, …, aₙ) ∈ 𝒱(I), one has P(a₁, …, aₙ) = 0, since P ∈ I.
b) Let (a₁, …, aₙ) ∈ V and let us show that (a₁, …, aₙ) belongs to 𝒱(ℐ(V)). We thus need to prove that for any P ∈ ℐ(V), one has P(a₁, …, aₙ) = 0. This assertion is clear, since (a₁, …, aₙ) ∈ V. □

We are going to use Hilbert’s Nullstellensatz to establish the following theorem.
Theorem (2.3.10). — For any ideal I of K[X₁, …, Xₙ], one has
ℐ(𝒱(I)) = √I.

Before we pass to the proof, let us show how this gives rise to a bijection between algebraic sets (geometry) and ideals equal to their own radical (algebra).
Corollary (2.3.11). — The maps V ↦ ℐ(V) and I ↦ 𝒱(I) induce bijections, inverse one of the other, between algebraic sets in Kⁿ and ideals I of K[X₁, …, Xₙ] such that I = √I.

Proof of the corollary. — Let I be an ideal of K[X₁, …, Xₙ] such that I = √I. By theorem 2.3.10, one has
ℐ(𝒱(I)) = √I = I.
Conversely, let V be an algebraic set and let S be any subset of K[X₁, …, Xₙ] such that V = 𝒱(S). Letting I = ⟨S⟩, we have V = 𝒱(I). By proposition 2.3.5, we even have V = 𝒱(√I). Then
ℐ(V) = ℐ(𝒱(I)) = √I,
hence
V = 𝒱(I) = 𝒱(√I) = 𝒱(ℐ(V)).
This concludes the proof of the corollary. □

Proof of theorem 2.3.10. — The inclusion √I ⊂ ℐ(𝒱(I)) is easy (and has been proved incidentally in the course of the proof of prop. 2.3.5). Let indeed P ∈ √I and let m ≥ 1 be such that P^m ∈ I. Then, for any (a₁, …, aₙ) ∈ 𝒱(I), one has P^m(a₁, …, aₙ) = 0, hence P(a₁, …, aₙ) = 0. Consequently, P ∈ ℐ(𝒱(I)).
Conversely, let P be a polynomial belonging to ℐ(𝒱(I)). We want to show that there exists m ≥ 1 such that P^m ∈ I. Let us introduce the ideal J of K[X₁, …, Xₙ, T] which is generated by I and by the polynomial 1 − TP. We have 𝒱(J) = ∅. Let indeed (a₁, …, aₙ, t) ∈ K^{n+1} be a point belonging to 𝒱(J). Since I ⊂ J, the point (a₁, …, aₙ) belongs to 𝒱(I), hence P(a₁, …, aₙ) = 0 because P ∈ ℐ(𝒱(I)). On the other hand, 1 − TP belongs to J too, so that we would have tP(a₁, …, aₙ) = 1, a contradiction.
By Hilbert’s Nullstellensatz, the ideal J is contained in no maximal ideal of K[X₁, …, Xₙ, T], hence J = K[X₁, …, Xₙ, T]. Consequently, there are polynomials Qᵢ ∈ I, Rᵢ ∈ K[X₁, …, Xₙ, T] and R ∈ K[X₁, …, Xₙ, T] such that
1 = (1 − TP)R + ∑ᵢ QᵢRᵢ,
and the image of the ideal I in the quotient ring K[X₁, …, Xₙ, T]/(1 − TP) generates the unit ideal.
Now, recall from proposition 1.6.6 that this quotient ring is isomorphic to the localization of K[X₁, …, Xₙ] by the multiplicative set S = {1, P, P², …} generated by P. Consequently, the image of the ideal I in the fraction ring S⁻¹K[X₁, …, Xₙ] generates the full ideal, which means that the multiplicative subset S meets the ideal I. In other words, there is an integer m such that P^m ∈ I, as was to be shown. □

Remark (2.3.12). — The introduction of a new variable T in the preceding proof is known as the Rabinowitsch trick. However, in most presentations of the proof, it is followed by the following, more elementary-looking, argument. We start as above until we reach the equality 1 = (1 − TP)R + ∑ QᵢRᵢ of polynomials in K[X₁, …, Xₙ, T]. Our goal now is to substitute T = 1/P. Formally, we obtain an equality of rational functions in the field K(X₁, …, Xₙ):
1 = ∑ᵢ Qᵢ(X₁, …, Xₙ)Rᵢ(X₁, …, Xₙ, 1/P(X₁, …, Xₙ)).
The denominators are powers of P; chasing them, we shall obtain a relation of the form
P^M = ∑ᵢ Qᵢ(X₁, …, Xₙ)Sᵢ(X₁, …, Xₙ),
which shows that P^M belongs to I. To make this argument more precise, let M be an integer larger than the T-degrees of all of the polynomials Rᵢ and of the polynomial R. We can then write
R(X₁, …, Xₙ, T) = ∑_{m=0}^{M} S_m(X₁, …, Xₙ)T^m
and, for any i,
Rᵢ(X₁, …, Xₙ, T) = ∑_{m=0}^{M} S_{i,m}(X₁, …, Xₙ)T^m,
for some polynomials S_m and S_{i,m} in K[X₁, …, Xₙ]. Multiplying both sides of the initial equality by P^M, we obtain a relation
P^M = (1 − TP) ∑_{m=0}^{M} (PT)^m S_m P^{M−m} + ∑ᵢ ∑_{m=0}^{M} (PT)^m Qᵢ S_{i,m} P^{M−m}.
Collecting the monomials of the various T-degrees, we deduce
P^M = S₀P^M + ∑ᵢ QᵢS_{i,0}P^M,
0 = S_mP^{M−m} − S_{m−1}P^{M−m+1} + ∑ᵢ QᵢS_{i,m}P^{M−m}   for m ∈ {1, …, M},
and 0 = −S_M. Summing all of these relations, we get
P^M = ∑ᵢ ∑_{m=0}^{M} QᵢS_{i,m}P^{M−m}.
Since the polynomials Qᵢ belong to I, we conclude that P^M belongs to I, hence P ∈ √I.
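The Rabinowitsch trick is also how one tests radical membership in practice: P ∈ √I if and only if the ideal generated by I and 1 − TP is the unit ideal, which a Gröbner basis computation detects. Here is a small SymPy sketch (ours, not from the text; the ideal and the polynomial are chosen for illustration only).

# Test whether P lies in the radical of I = (X^2, Y) using the trick above.
from sympy import symbols, groebner

X, Y, T = symbols('X Y T')
I_gens = [X**2, Y]          # an ideal whose radical is (X, Y)
P = X + Y                   # (X + Y)^2 = X^2 + 2*X*Y + Y^2 lies in I
G = groebner(I_gens + [1 - T*P], X, Y, T, order='lex')
print(list(G))              # [1]: (I, 1 - T*P) is the unit ideal, so P is in the radical of I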

2.4. Principal ideal domains, euclidean rings

2.4.1. — Let A be a commutative ring.
Recall that an ideal of A is said to be principal if it is of the form aA, for some a ∈ A. One also writes (a) for aA.
Let a, b ∈ A. The ideal aA is contained in the ideal bA if and only if there exists c ∈ A such that a = bc, that is, if and only if b divides a.
Assume moreover that A is an integral domain. Let a, b ∈ A be such that aA = bA. Then there exist c, d ∈ A such that a = bc and b = ad, hence a = a(cd) and b = b(cd). If a ≠ 0 then b ≠ 0; simplifying by a, we get cd = 1, hence c and d are invertible. In other words, two nonzero elements a and b of an integral domain A generate the same ideal if and only if there exists a unit u ∈ A such that b = au.
The units of the ring A = Z are ±1; it is thus customary to choose, as a generator of a principal ideal, a positive element. Similarly, the units of the ring K[X] of polynomials in one indeterminate X with coefficients in a field K are the nonzero constant polynomials; one then rather chooses, as a generator of a principal ideal, a monic polynomial.

Definition (2.4.2). — One says that a (commutative) ring is a principal ideal domain if it is a domain and if all of its ideals are principal.

Examples (2.4.3). — a) The ring Z is a principal ideal domain (example 1.4.3), as well as the ring K[X] of polynomials in one variable with coefficients in a (commutative) field K (example 1.4.4).
b) In the ring K[X, Y] of polynomials in two variables with coefficients in a field K, the ideal (X, Y) is not principal. For if it were generated by a polynomial P, this polynomial would have to divide both X and Y; necessarily, P would be a nonzero constant. It would then follow that there exist Q, R ∈ K[X, Y] such that 1 = XQ(X, Y) + YR(X, Y). This, however, is absurd, since the right hand side of this equality has no constant term.

2.4.4. Greatest common divisor, least common multiple. — Let A be a principal ideal domain. Let (aᵢ) be a family of elements of A. By the assumption on A, the ideal I generated by the aᵢ is generated by one element, say d. It follows that d divides aᵢ for every i: d is a common divisor of all of the aᵢ. Moreover, if d′ is a common divisor of the aᵢ, then aᵢ ∈ (d′) for every i, hence I ⊂ (d′) and d′ divides d. One says that d is a greatest common divisor (gcd) of the aᵢ. The word “greatest” has to be understood in the sense of divisibility: the common divisors of the aᵢ are exactly the divisors of their gcd. There is in general no preferred choice of a greatest common divisor; all of them differ by multiplication by a unit of A.
Let J be the intersection of the ideals (aᵢ) and let m be a generator of the ideal J. For every i, m ∈ (aᵢ), that is, m is a multiple of aᵢ. Moreover, if m′ ∈ A is a multiple of aᵢ for every i, then m′ ∈ (aᵢ) for every i, hence m′ ∈ (m) and m′ is a multiple of m. One says that m is a least common multiple (lcm) of the aᵢ. Again, the word “least” has to be understood in the sense of divisibility. As for the gcd, there is no preferred choice, and all least common multiples differ by multiplication by a unit of A.
As explained above, when A = Z is the ring of integers, one may choose for the gcd and the lcm the unique positive generator of the ideal generated by the aᵢ, resp. of the intersection of the ideals (aᵢ). Then, except for degenerate cases, d is the greatest common divisor and m is the least common (nonzero) multiple in the naïve sense too.
Similarly, when A = K[X] is the ring of polynomials in one indeterminate X over a field K, it is customary to choose the gcd and the lcm to be monic polynomials (or the zero polynomial).
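For A = Z, the normalized gcd and lcm of a finite family are directly available in Python (our illustration, not from the text; math.lcm requires Python 3.9 or later).

# gcd and lcm of the family (12, 18, 30) in Z, normalized to be nonnegative.
from math import gcd, lcm
from functools import reduce

family = [12, 18, 30]
d = reduce(gcd, family)     # 6:   a generator of the ideal (12, 18, 30)
m = reduce(lcm, family)     # 180: a generator of 12Z ∩ 18Z ∩ 30Z
print(d, m)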

Remark (2.4.5). — Let K be a field, let P, Q ∈ K[X] be polynomials (not both zero) and let D ∈ K[X] be their gcd, chosen to be monic. Let P₁, Q₁ ∈ K[X] be defined by P = DP₁ and Q = DQ₁. Since D is a generator of the ideal (P, Q), there exist polynomials U and V such that D = UP + VQ, hence 1 = UP₁ + VQ₁ and (P₁, Q₁) = K[X].
It follows that for every field L containing K, one has (P₁, Q₁)L[X] = L[X], and D is still a gcd of P and Q in the ring L[X].
Stated in the opposite direction, this means that the monic gcd of P and Q in L[X] already belongs to K[X].

Definition (2.4.6). — Let A be a domain. A map δ : A ∖ {0} → N is called a euclidean gauge¹ on A if it satisfies the following two properties:
– for any a, b ∈ A ∖ {0}, δ(ab) ≥ sup(δ(a), δ(b));
– for any a, b ∈ A such that b ≠ 0, there exist q and r ∈ A such that a = bq + r and such that either r = 0 or δ(r) < δ(b).
If there exists a euclidean gauge on A, then one says that A is a euclidean ring.

¹According for example to Bourbaki and Wedderburn, the official word is stathm.


Examples (2.4.7). — a) The ring of integers and the ring of polynomials in one variable with coefficients in a field are euclidean rings, with gauges given by the usual absolute value and by the degree, respectively.
b) The ring Z[i] of Gaussian integers is a euclidean ring, with gauge δ defined by δ(z) = z z̄ = |z|² (exercise 2/15). See also exercises 2/16, 2/17 and 2/14 for other examples of euclidean rings.
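To see why the gauge δ(z) = |z|² works for Z[i], one can divide a by b by rounding the exact quotient a/b to the nearest Gaussian integer; the remainder then has strictly smaller gauge. The following sketch (ours, not from the text) uses floating-point complex numbers, which is fine for small inputs.

# Euclidean division in Z[i]: round the exact quotient, then take the remainder.
def gauss_divmod(a: complex, b: complex):
    exact = a / b
    q = complex(round(exact.real), round(exact.imag))
    r = a - b * q
    return q, r

a, b = 27 + 23j, 8 + 1j
q, r = gauss_divmod(a, b)
print(q, r, abs(r)**2 < abs(b)**2)   # (4+2j) (-3+3j) True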

Remark (2.4.8). — Property (i) of gauges implies that δ(a) ≤ δ(b) when a divides b. Consequently, if u is a unit, then δ(a) = δ(au) for any nonzero element a of A.
This property (i) is, however, not crucial for the definition of a euclidean ring. Indeed, if δ is any map satisfying property (ii), one may modify it in order to get a gauge; see exercise 2/18.

Proposition (2.4.9). — Any euclidean ring is a principal ideal domain.
Proof. — Let indeed A be a euclidean ring with gauge δ, and let I be a nonzero ideal of A. Let a be a nonzero element of I such that δ(a) is minimal among the values of δ on I ∖ {0}. Let x ∈ I and let us consider a euclidean division x = aq + r of x by a; one has r = x − aq ∈ I. If r ≠ 0, then δ(r) < δ(a), which contradicts the choice of a. So r = 0 and x = aq ∈ aA, hence I = (a). □

Remark (2.4.10). — There exist principal ideal domains which are not euclidean, for any map δ. One such example is the set of all complex numbers of the form a + b(1 + i√19)/2, with a and b ∈ Z. (See D. Perrin, Cours d’algèbre, Ellipses, p. 53–55; the proof that this ring is not euclidean can be found in exercise 2/20.)

2.5. Unique factorization domains

Definition (2.5.1). — Let A be a domain. One says that an element a ∈ A is irreducible if it is not a unit and if the relation a = bc, for some b and c ∈ A, implies that b or c is a unit.

Examples (2.5.2). — a) The irreducible elements of Z are the prime numbers and their opposites.
b) Let k be a field; the irreducible elements of k[X] are the irreducible polynomials, that is, the polynomials of degree ≥ 1 which cannot be written as the product of two polynomials of degree ≥ 1.
c) The element 0 is never irreducible: it can be written 0 = 0 × 0, and 0 is not a unit (A being a domain, 1 ≠ 0).

Proposition (2.5.3) (Gauss’s lemma). — Let A be a principal ideal domain. For a nonzero ideal of A to be prime, it is necessary and sufficient that it be generated by an irreducible element; it is then a maximal ideal.
Proof. — Let I be a prime ideal of A; assume that I ≠ 0. Since A is a principal ideal domain, there exists a ∈ A such that I = (a). Since I ≠ 0, we have a ≠ 0; let us show that a is irreducible. Since a prime ideal is distinct from A, a is not a unit. Let b and c be elements of A such that a = bc. Since I is a prime ideal, b or c belongs to I. Assume that b ∈ I; then there exists u ∈ A such that b = au, hence a = auc and uc = 1 after simplifying by a. This shows that c is a unit. Similarly, if c ∈ I then b is a unit. It follows that a is irreducible, as claimed.
Conversely, let a be an irreducible element of A and let us show that the ideal I = (a) is a maximal ideal of A. Let x ∈ A be any element which is not a multiple of a and let J = I + (x). Let b ∈ A be such that J = (b). Since a ∈ J, there exists c ∈ A such that a = bc. If c were a unit, one would have (a) = (b) = I + (x), hence x ∈ (a), contrary to the assumption. Since a is irreducible, it follows that b is a unit, hence J = A. This shows that I is a maximal ideal of A. □

The first part of the proof shows, more generally, that in a domain, if a nonzero prime ideal is generated by one element, then this element is irreducible. The converse does not hold for general rings and leads to the notion of a unique factorization domain.

Definition (2.5.4). — Let A be a domain. One says that A is a unique factorization domain if it satisfies the following two properties:
(i) every increasing sequence of principal ideals of A is stationary;
(ii) every ideal generated by an irreducible element is a prime ideal.
The first condition will allow us to write any nonzero element as the product of finitely many irreducible elements; it is automatic if the ring A is noetherian (see definition 6.3.5). The second one is the most important and will guarantee, up to minor tweaks, the uniqueness of such a decomposition into irreducible factors.

Let us rewrite this condition somewhat. Let p be an irreducible element of A. Since p is not a unit, the ideal (p) is a prime ideal if and only if a product of two elements of A cannot belong to (p) unless one of them belongs to (p). In other words, ab is divisible by p if and only if either a or b is divisible by p.

Examples (2.5.5). — a) A field is a unique factorization domain.
b) Proposition 2.5.3 states that every irreducible element of a principal ideal domain generates a maximal ideal. Given lemma 2.5.6 below, we conclude that principal ideal domains are unique factorization domains.
c) We will prove (corollary 2.6.7) that for any unique factorization domain A, the ring A[X₁, …, Xₙ] of polynomials with coefficients in A is also a unique factorization domain. In particular, polynomial rings with coefficients in a field, or in Z, are unique factorization domains.

Lemma (2.5.6). — In a principal ideal domain, any increasing sequence of ideals is stationary.²

²In other terms, a principal ideal domain is a noetherian ring (definition 6.3.5).

Proof. — Let A be a principal ideal domain and let (Iₙ) be an increasing sequence of ideals. Let I be the union of all the ideals Iₙ. Since the sequence is increasing, I is again an ideal of A. Since A is a principal ideal domain, there exists a ∈ I such that I = (a). Let then m ∈ N be such that a ∈ I_m. For n ≥ m, we have I = (a) ⊂ Iₙ ⊂ I, hence the equality Iₙ = I: the sequence (Iₙ) is stationary. □

Theorem (2.5.7). — Let A be a unique factorization domain and let a be any nonzero element of A.
a) There exist an integer n ≥ 0, irreducible elements p₁, …, pₙ ∈ A, and a unit u ∈ A, such that a = up₁⋯pₙ (existence of a decomposition into irreducible factors).
b) Let us consider two such decompositions of a, say a = up₁⋯pₙ = vq₁⋯q_m. Then n = m and there exist a permutation σ of {1, …, n} and units uᵢ, for 1 ≤ i ≤ n, such that qᵢ = uᵢp_{σ(i)} for every i (uniqueness of the decomposition into irreducible factors).

This is often taken as the definition of a unique factorization domain; in any case, this explains the chosen denomination!
Let us comment a little bit on the use of the word “uniqueness” in the theorem. Strictly speaking, there is no unique decomposition into irreducible factors; indeed, it is always possible to change the order of the factors, or to multiply some irreducible factor by a unit while dividing the unit in front of the decomposition by the same unit. The content of the uniqueness property is that these are the only two ways in which two decompositions may differ.

Proof. — When a is invertible, property a) is obvious: take n = 0 and u = a. Otherwise, (a) ≠ A, and property (i) allows us to choose a principal ideal (p₁) which is maximal among the proper principal ideals containing a; its generator p₁ is then an irreducible element of A which divides a. Let a₁ ∈ A be such that a = p₁a₁; we then have (a) ⊊ (a₁). If a₁ is not invertible, we may redo the argument, obtaining irreducible elements p₁, …, pₙ of A and elements a₁, …, aₙ ∈ A such that a = p₁⋯pₙaₙ. Were we to go on forever, we would obtain a strictly increasing sequence of principal ideals (a) ⊊ (a₁) ⊊ (a₂) ⊊ ⋯, which contradicts the first axiom of a unique factorization domain. So aₙ is a unit for some n, and property a) is proved.
Let now a = up₁⋯pₙ = vq₁⋯q_m be two decompositions of a as the product of a unit and of irreducible elements. Let us prove property b) by induction on m. If m = 0, then a = v is a unit; consequently, up₁⋯pₙ is a unit too, which implies n = 0 and a = u = v (indeed, if n ≥ 1, up₁⋯pₙ belongs to the prime ideal (p₁), which is distinct from A, so it cannot be a unit). Assume m ≥ 1. Since q_m divides up₁⋯pₙ and the ideal (q_m) is prime, q_m divides one of the factors u, p₁, …, pₙ. Since u is a unit, there exists an integer j ∈ {1, …, n} such that q_m divides pⱼ; let s ∈ A be such that pⱼ = sq_m. Since pⱼ is irreducible and q_m is not a unit, s is necessarily a unit; we then set u_m = s⁻¹, so that q_m = u_mpⱼ. Set b = a/q_m. It admits two decompositions as a product of a unit and of irreducible elements, namely vq₁⋯q_{m−1} and (u/u_m)p₁⋯p̂ⱼ⋯pₙ, where the hat on pⱼ indicates that this factor is omitted from the product. By induction, we have m − 1 = n − 1 and there exist a bijection σ from {1, …, m − 1} to {1, …, ĵ, …, n} and units uᵢ such that qᵢ = uᵢp_{σ(i)} for 1 ≤ i ≤ m − 1. It follows that m = n and that the mapping (still denoted by σ) from {1, …, m} to {1, …, n} which extends σ and maps m to j is a bijection. For every i ∈ {1, …, n}, one has qᵢ = uᵢp_{σ(i)}. This concludes the proof of the uniqueness property. □

Remark (2.5.8). — Conversely, let A be a domain which satisfies the conclusion of the theorem.
For any nonzero a ∈ A, let ω(a) be the number of irreducible factors in a decomposition of a as a product of irreducible elements; it does not depend on the chosen decomposition. Let a, b, c be elements of A such that a = bc and a ≠ 0. The uniqueness property implies that ω(a) = ω(b) + ω(c).
Let (aₙ) be a sequence of elements of A such that the sequence of ideals (a₀), (a₁), … is increasing. For every integer n, there thus exists bₙ ∈ A such that aₙ = bₙaₙ₊₁. Consequently, ω(aₙ₊₁) ≤ ω(aₙ), so that (ω(aₙ))ₙ is a decreasing sequence of positive integers; it is thus stationary. Moreover, if ω(aₙ) = ω(aₙ₊₁), then bₙ is a unit and the ideals (aₙ) and (aₙ₊₁) coincide. This implies that the sequence of ideals ((aₙ)) is itself stationary.
Let p be any irreducible element of A. Let us show that it generates a prime ideal of A. Since p is irreducible, p is not a unit and (p) ≠ A. Let then a, b be elements of A such that p divides ab; let c ∈ A be given by ab = pc. If c = 0, then ab = 0 and either a or b is zero, hence divisible by p. Let us assume that c ≠ 0; then a ≠ 0 and b ≠ 0. Pick decompositions of a, b and c as products of irreducible factors, say a = up₁⋯pₙ, b = vq₁⋯q_m and c = wr₁⋯r_s. (Here u, v, w are units, n, m, s are positive integers, while the pᵢ, qⱼ and r_k are irreducible elements of A.) Then we have two decompositions of ab, namely uvp₁⋯pₙq₁⋯q_m and wpr₁⋯r_s. By the uniqueness property, the factor p which appears in the second one has to intervene in the first one; precisely, there exist a unit t ∈ A and either an integer i ∈ {1, …, n} such that p = tpᵢ, or an integer j ∈ {1, …, m} such that p = tqⱼ. In the first case, p divides a; in the second one, p divides b. This shows that (p) is a prime ideal.
These remarks prove that the ring A is a unique factorization domain.

Remark (2.5.9). — One of the reasons for the nonuniqueness of the decomposition into irreducible factors is that one may multiply those factors by units.
In certain rings, it is possible to distinguish privileged irreducible elements so as to remove this source of ambiguity.
For example, in Z, the irreducible elements are the prime numbers and their opposites, but one may decide to prefer the prime numbers themselves. Up to the order of the factors, any nonzero integer is then uniquely written as the product of ±1 (the only units in Z) by a product of prime numbers.


Similarly, in the ring K[T] of polynomials in one indeterminate T over a field K, one may prefer the monic irreducible polynomials. Still up to the order of factors, any nonzero polynomial can be uniquely written as the product of a nonzero constant (the units in K[T]) by a product of monic irreducible polynomials.
In a general unique factorization domain A, let us show how to normalize the decomposition into irreducible elements so that it can only be modified by the order of the factors.
Let us choose a family (πᵢ) of irreducible elements of A in such a way that
– for i ≠ j, πᵢ and πⱼ are not associated;
– every irreducible element of A is associated to one of the πᵢ.
(To prove the existence of such a family, just choose one element in every equivalence class of irreducible elements for the equivalence relation of being associated.) Then any nonzero element a of A can be uniquely written in the form a = u∏ᵢ πᵢ^{rᵢ}, where u is a unit and the rᵢ are positive integers, all but finitely many of them being equal to zero.
In other words, the map from A^× × N^(I) to A ∖ {0} which maps (u, (rᵢ)) to the product u∏ᵢ πᵢ^{rᵢ} is an isomorphism of monoids.
One interesting aspect of this normalization is that it makes the divisibility relation explicit: an element a = u∏ᵢ πᵢ^{rᵢ} divides an element b = v∏ᵢ πᵢ^{sᵢ} if and only if rᵢ ≤ sᵢ for every i. Indeed, it is clear that this condition is sufficient, for it suffices to set c = (vu⁻¹)∏ᵢ πᵢ^{sᵢ−rᵢ} to get b = ac. Conversely, if c ∈ A is such that b = ac, decompose c as w∏ᵢ πᵢ^{tᵢ}; then
b = v∏ᵢ πᵢ^{sᵢ} = uw∏ᵢ πᵢ^{rᵢ+tᵢ},
hence sᵢ = rᵢ + tᵢ for every i. This implies sᵢ ≥ rᵢ.

2.5.10. Greatest common divisor, least common multiple. — Let A be a unique factorization domain.
Let (aₙ) be a family of elements of A. We are going to show that it possesses a greatest common divisor (gcd) and a least common multiple (lcm). As in the case of principal ideal domains, a gcd of the family (aₙ) is any element d ∈ A which divides each of the aₙ and such that every common divisor of the aₙ divides d, and a lcm of this family is an element m ∈ A which is a multiple of all of the aₙ and such that every common multiple of the aₙ is a multiple of m.
We first treat particular, essentially trivial, cases. Remark that every element of A divides 0, but that the only multiple of 0 is 0 itself. Consequently, if one of the aₙ is equal to 0, the family has only one common multiple, namely 0, which thus is its lcm. On the other hand, in order to show that the family (aₙ) has a gcd, we may remove all the terms equal to 0.
If the family is empty, then every element is a common divisor, so that 0 is a greatest common divisor of the empty family. Similarly, every element is a common multiple, so that 1 is a least common multiple of the empty family.
These remarks allow us to assume that the family (aₙ) is nonempty and consists of nonzero elements. To simplify the construction, assume also that we have normalized as above the decomposition into irreducible factors. For every n, let aₙ = uₙ∏ᵢ πᵢ^{r_{n,i}} be the decomposition of aₙ into irreducible factors. For every i, set dᵢ = inf_n(r_{n,i}); this is a positive integer, and dᵢ = 0 for all but finitely many i. This allows us to set d = ∏ᵢ πᵢ^{dᵢ}.
Let us show that d is a gcd of the family (aₙ). Since dᵢ ≤ r_{n,i} for every i, d divides aₙ for every n. Let b be a common divisor of the aₙ and let b = v∏ᵢ πᵢ^{sᵢ} be its decomposition into irreducible factors. Since b divides aₙ, sᵢ ≤ r_{n,i} for every i. It follows that sᵢ ≤ dᵢ for every i, hence b divides d.
For every i, set mᵢ = sup_n(r_{n,i}); this is an element of N ∪ {+∞}. If every mᵢ is an integer, and if all but finitely many of them are 0, we may set m = ∏ᵢ πᵢ^{mᵢ}; otherwise, we set m = 0. In each case, m is a common multiple of all of the aₙ. Conversely, let b be any nonzero element of A and let b = v∏ᵢ πᵢ^{sᵢ} be its decomposition into irreducible elements. For b to be a multiple of aₙ, it is necessary and sufficient that sᵢ ≥ r_{n,i} for every i; consequently, b is a common multiple of the family (aₙ) if and only if sᵢ ≥ mᵢ for every i. If mᵢ is infinite for some i, or if infinitely many terms of the family (mᵢ) are nonzero, this never holds, so that 0 is the only common multiple of the family (aₙ). Otherwise, we see that b is a multiple of m.
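In A = Z these formulas can be checked directly from the prime factorizations; the following sketch (ours, not from the text) uses SymPy’s factorint to recover the exponents r_{n,i}.

# gcd and lcm of a family of integers via the inf and sup of the exponents.
from sympy import factorint

def gcd_lcm_from_factorizations(family):
    facs = [factorint(a) for a in family]        # a = product of p**r_p
    primes = set().union(*facs)
    d = m = 1
    for p in primes:
        exps = [f.get(p, 0) for f in facs]
        d *= p**min(exps)                        # gcd: inf of the exponents
        m *= p**max(exps)                        # lcm: sup of the exponents
    return d, m

print(gcd_lcm_from_factorizations([12, 18, 30]))  # (6, 180)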


Remark (2.5.11). — Unless they are zero, two greatest common divisors (resp. two least common multiples) of a family (aₙ) differ by multiplication by a unit. As we have seen above, choosing a particular normalization for the decomposition into irreducible factors allows one to get a well-defined representative of the gcd (resp. of the lcm).
When we write equalities involving greatest common divisors or least common multiples, we shall always assume that they are properly normalized. In any case, it is always possible to read these equalities up to multiplication by a unit.

Definition (2.5.12). — One says that a family of elements of a unique factorization domain consists of coprime elements if this family has 1 for a greatest common divisor.

Remark (2.5.13). — Let (aₙ) be a family of elements of A. For every x ∈ A, one has gcd((xaₙ)) = x gcd((aₙ)). This follows easily from the construction above, thanks to the formula inf(rₙ) + s = inf(rₙ + s), valid for any family (rₙ) of integers and any integer s. Conversely, let d = gcd((aₙ)). For every n, there is bₙ ∈ A such that aₙ = dbₙ. Then d gcd((bₙ)) = gcd((aₙ)) = d. Assume that at least one of the aₙ is nonzero. Then d ≠ 0, hence gcd((bₙ)) = 1, so that the bₙ are coprime.

Proposition (2.5.14). — Let A be a unique factorization domain and let (aₙ) be a family of elements of A, with gcd d and lcm m. The ideal dA is the smallest principal ideal containing the ideal ∑ aₙA, and the ideal mA is the largest principal ideal contained in the ideal ⋂ₙ aₙA.
Assume in particular that A is a principal ideal domain. Then two elements a and b of A are coprime if and only if the ideals (a) and (b) are comaximal.
Proof. — Since the inclusion of ideals aA ⊂ bA is equivalent to the fact that b divides a, this is but a reformulation of the discussion above. □

Corollary (2.5.15) (Bézout’s theorem). — Let A be a principal ideal domain and let (aₙ) be a family of elements of A with gcd d. There exist elements uₙ ∈ A, all but finitely many of which are zero, such that d = ∑ aₙuₙ.
In a euclidean ring, there is a simple algorithm to compute the gcd of two elements a and b, as well as a relation d = au + bv.


Proposition (2.5.16) (Euclid’s algorithm). — Let A be a euclidean ring and let a, b be elements of A. One defines four sequences (dₙ), (uₙ), (vₙ) and (qₙ) by induction on n, setting
d₀ = a, u₀ = 1, v₀ = 0,
d₁ = b, u₁ = 0, v₁ = 1,
and then, if dₙ ≠ 0, letting qₙ be the quotient of a euclidean division of dₙ₋₁ by dₙ and
dₙ₊₁ = dₙ₋₁ − qₙdₙ,  uₙ₊₁ = uₙ₋₁ − qₙuₙ,  vₙ₊₁ = vₙ₋₁ − qₙvₙ.
These sequences are finite: there exists a smallest integer n such that dₙ₊₁ = 0; then dₙ = auₙ + bvₙ is a greatest common divisor of a and b.
Proof. — Let δ be the gauge of A. If dₙ ≠ 0, then dₙ₊₁ is the remainder of a euclidean division by dₙ, so that either δ(dₙ₊₁) < δ(dₙ) or dₙ₊₁ = 0. Consequently, the sequence of positive integers (δ(dₙ)) is strictly decreasing as long as it is defined. Let us also remark that the pairs (b, a − bq) and (a, b) have the same common divisors, so that gcd(b, a − bq) = gcd(a, b). By induction, gcd(dₙ₋₁, dₙ) = gcd(dₙ, dₙ₊₁) for every n. If dₙ ≠ 0 and dₙ₊₁ = 0, we then have gcd(a, b) = gcd(d₀, d₁) = gcd(dₙ, dₙ₊₁) = dₙ. By induction, the proof of the relation dₙ = auₙ + bvₙ is immediate. □

The following result is very useful.
Proposition (2.5.17) (Gauss). — Let A be a unique factorization domain and let a, b, c ∈ A.
a) If a is prime to b and to c, then a is prime to bc.
b) If a is prime to c and if a divides bc, then a divides b.
Proof. — a) Let p be an irreducible element of A that divides a. By assumption, p divides neither b nor c. Since A is a unique factorization domain, p does not divide bc. This proves that a is prime to bc.
b) The result is obvious if c is a unit. We then argue by induction on the number of irreducible factors of c. Let p be an irreducible element that divides c; write c = pc′. Then a is prime to c′ and a divides (bp)c′, hence a divides bp by induction. Let d ∈ A be such that bp = ad. Since A is a unique factorization domain, the ideal (p) is prime. One has a ∉ (p), because a is prime to c and p divides c, while ad = bp ∈ (p); consequently, d ∈ (p). Let d′ ∈ A be such that d = pd′; then bp = pad′, hence b = ad′: this shows that a divides b. □

2.6. Polynomial rings are unique factorization domains

One of the most important and basic results in the theory of unique factorization domains is the theorem of Gauss according to which polynomial rings with coefficients in a unique factorization domain are themselves unique factorization domains.
So let A be a unique factorization domain. We first recall that the units of A[T] are the units of A (the elements of A^×), viewed as constant polynomials (corollary 1.3.14).

Definition (2.6.1). — Let A be a unique factorization domain and let P be any polynomial in A[T]. The content of P, denoted by ct(P), is defined as a greatest common divisor of the coefficients of P. One says that a polynomial is primitive if its content is a unit, that is to say, if its coefficients are coprime.
As usual for questions of gcd, the content of a nonzero polynomial is only well defined if we have normalized the decomposition into irreducible factors; otherwise, it is defined up to multiplication by a unit. The content of the zero polynomial is 0.

Lemma (2.6.2). — Let A be a unique factorization domain and let K be its field of fractions.
a) For any polynomial P ∈ K[T], there exist a primitive polynomial P₁ ∈ A[T] and an element a ∈ K such that P = aP₁.
b) Let P = aP₁ be such a decomposition. Then P ∈ A[T] if and only if a ∈ A; in that case, a = ct(P). In particular, P is a primitive polynomial of A[T] if and only if a is a unit in A.

Proof. — a) If P = 0, we set a = 0 and P₁ = 1. Assume P ≠ 0. Let d ∈ A ∖ {0} be a common denominator of all the coefficients of P, so that dP ∈ A[T]. Let then b ∈ A be the content of the polynomial dP, and set P₁ = (dP)/b and a = b/d. The polynomial P₁ belongs to A[T] and is primitive; one has P = aP₁.
b) If a ∈ A, it is clear that P ∈ A[T]. Conversely, assume that P ∈ A[T] and let us show that a ∈ A. Let b and c be elements of A such that a = b/c. We write b = ac, hence bP₁ = acP₁ = cP, from which we deduce that b = ct(cP) = c ct(P). It follows that c divides b and a = ct(P) ∈ A.
If a is a unit in A, then P is a primitive polynomial in A[T]. Conversely, assume that P is a primitive polynomial in A[T]. By the preceding paragraph, we have a ∈ A. Then ct(P) = a ct(P₁) is a unit, hence a is a unit. □

Proposition (2.6.3). — Let A be a unique factorization domain and let P, Q be two polynomials in A[T]. Then ct(PQ) = ct(P) ct(Q).

Proof. — We first treat the particular case where P and Q are primitive. We then need to show that PQ is primitive as well. Let π be any irreducible element of A and let us show that π does not divide all of the coefficients of PQ. Since P is primitive, the reduction cl(P) of P modulo π is a nonzero polynomial with coefficients in the ring A/(π). Similarly, cl(Q) is a nonzero polynomial with coefficients in A/(π). Since π is irreducible and A is a unique factorization domain, the quotient ring A/(π) is a domain, hence the polynomial ring (A/(π))[T] is again a domain (corollary 1.3.13). It follows that the product cl(P) cl(Q) = cl(PQ) is a nonzero polynomial in (A/(π))[T]. This means exactly that π does not divide all of the coefficients of PQ, as was to be shown.
Let us now treat the general case. We may assume that P and Q are nonzero, so that ct(P) and ct(Q) are nonzero too. By definition, we can write P = ct(P)P₁ and Q = ct(Q)Q₁, where P₁, Q₁ ∈ A[T] are primitive polynomials. Then P₁Q₁ is primitive and the equality PQ = ct(P) ct(Q)P₁Q₁ shows that, up to a unit, ct(PQ) is equal to ct(P) ct(Q). □

Corollary (2.6.4). — Let A be a unique factorization domain and let K be its field of fractions. Let P, Q ∈ A[T] be polynomials.
a) Assume that Q is primitive and that Q divides P in K[T]. Then Q divides P in A[T].
b) Let R = gcd(P, Q) be a greatest common divisor of P and Q in A[T]. Then R is a gcd of P and Q in K[T].


Proof. — a) Let R ∈ K[T] be such that P = RQ. Write R = aR₁, where a ∈ K and R₁ ∈ A[T] is a primitive polynomial. Then P = aR₁Q. Since R₁Q is primitive, we have a ∈ A, hence R ∈ A[T], which shows that Q divides P in A[T].
b) Let S be a gcd of P and Q in K[T]. Of course, R divides P and Q in K[T], so that R divides S. Conversely, we need to prove that S divides R. Writing S = aS₁, with a ∈ K and S₁ ∈ A[T] primitive, we may assume that S ∈ A[T] is primitive. By a), S divides P and Q in A[T], so that it divides R. This concludes the proof. □

Thanks to this fundamental proposition, we are now able to determine the irreducible elements of A[T].
Proposition (2.6.5). — Let A be a unique factorization domain and let K be its field of fractions. The irreducible elements of A[T] are the following:
– irreducible elements of A, considered as constant polynomials;
– primitive polynomials in A[T] which are irreducible as polynomials in K[T].

Proof. — We shall begin by proving that these elements are indeed irreducible in A[T], and then show that there are no others.
Let a ∈ A be an irreducible element. It is not invertible in A, hence is not a unit of A[T]. Let P, Q be polynomials in A[T] such that a = PQ. One has deg(P) + deg(Q) = deg(PQ) = 0, hence deg(P) = deg(Q) = 0. In other words, P and Q are constant polynomials. Since a is irreducible, either P or Q is a unit in A, hence in A[T]. This shows that a is irreducible in A[T].
Let now P ∈ A[T] be a primitive polynomial which is irreducible in K[T]. Since constant polynomials are units in K[T], P is not constant; in particular, it is not a unit in A[T]. Let Q, R be polynomials in A[T] such that P = QR. A fortiori, they furnish a decomposition of P in K[T], so that Q or R is a unit in K[T]. In particular, Q or R is constant. To fix the notation, assume that R is the constant a. We thus have P = aQ. It follows that the content of P satisfies
ct(P) = ct(aQ) = a ct(Q).
Since P is primitive, a is a unit in A, hence in A[T]. This shows that P is irreducible in A[T].
Conversely, let P be an irreducible element of A[T]. Let P₁ be a primitive polynomial in A[T] such that P = ct(P)P₁. Necessarily, ct(P) is a unit in A, or P₁ is a unit in A[T].
Assume first that ct(P) is not a unit. Then P₁ is a unit in A[T], which means that P₁ is a constant polynomial and a unit in A. In other words, P is an element a of A. Let us show that a is irreducible in A. It is not a unit (otherwise, P would be a unit). And if a = bc, for some elements b, c ∈ A, we get P = bc. Since P is irreducible in A[T], b or c is a unit in A[T], that is, b or c is a unit in A.
Assume now that P₁ is not a unit in A[T]; then ct(P) is a unit and P is primitive. Let us prove that P is irreducible in K[T]. First of all, deg(P) > 0, for otherwise P would be a unit in A. In particular, P is not a unit in K[T]. Let P = QR be a factorization of P as the product of two polynomials in K[T]. By lemma 2.6.2 above, we may write Q = qQ₁ and R = rR₁, where q, r ∈ K and Q₁, R₁ are primitive polynomials in A[T]. Then P = (qr)Q₁R₁. Since P is a primitive polynomial in A[T], lemma 2.6.2 implies that qr is a unit in A. Since P is irreducible in A[T], either Q₁ or R₁ is a unit in A[T], hence in K[T]. It follows that either Q = qQ₁ or R = rR₁ is a unit in K[T]. □

Theorem (2.6.6) (Gauss). — If A is a unique factorization domain, then so is A[T].

Proof. — Let us prove that A[T] satisfies the two properties of definition 2.5.4 of a unique factorization domain.
1) Let (Pₙ)ₙ be a sequence of polynomials in A[T] such that the sequence ((Pₙ))ₙ of principal ideals is increasing; let us prove that it is stationary.
If P_m ≠ 0, then Pₙ ≠ 0 for all n ≥ m. The case of the constant sequence ((0)) being obvious, we may assume that Pₙ ≠ 0 for all n.
For n ≥ m, Pₙ divides P_m, hence ct(Pₙ) divides ct(P_m). Consequently, the sequence ((ct(Pₙ)))ₙ of principal ideals of A is increasing too. Since A is a unique factorization domain, it is stationary. Moreover, for n ≥ m, deg(Pₙ) ≤ deg(P_m), so that the sequence (deg(Pₙ))ₙ of integers is decreasing, hence stationary.
Let N be an integer such that deg(Pₙ) = deg(P_N) and ct(Pₙ) = ct(P_N) for n ≥ N. Let n be an integer such that n ≥ N. Since Pₙ divides P_N, there exists a polynomial Q such that P_N = QPₙ. Necessarily, deg(Q) = 0 and ct(Q) = 1, so that Q is a constant polynomial, with constant term ct(Q), hence is invertible. Consequently, the ideals (Pₙ) and (P_N) coincide. This shows that the sequence ((Pₙ))ₙ of principal ideals of A[T] is stationary.
2) Let us now show that the irreducible elements of A[T] generate prime ideals. Since irreducible elements are not units, it suffices to show that if an irreducible element of A[T] divides a product PQ of two polynomials in A[T], then it divides P or Q.
Let first π be an irreducible element of A; assume that π divides PQ. Taking contents, we see that π divides ct(PQ) = ct(P) ct(Q). Since π is irreducible in A and A is a unique factorization domain, π divides ct(P) or ct(Q), hence π divides P or Q.
Let now Π be a primitive polynomial of A[T], irreducible in K[T], such that Π divides PQ. Since K[T] is a principal ideal domain, it is a unique factorization domain, and Π divides P or Q in K[T]. By corollary 2.6.4, Π divides P or Q in A[T]. □

Corollary (2.6.7) (Gauss). — Let A be a unique factorization domain. For any integer n, A[X₁, …, Xₙ] is a unique factorization domain. In particular, if K is a field, then K[X₁, …, Xₙ] is a unique factorization domain.
Proof. — This is immediate by induction on n, using the isomorphisms (remark 1.3.10)
A[X₁, …, X_m] ≃ (A[X₁, …, X_{m−1}])[X_m]. □

2.7. Resultants and another theorem of Bézout

In all of this section, A is a commutative ring. We shall make some

use of determinants of matrices.

Definition (2.7.1). — Let<, = be positive integers, letP andQ be two polynomi-

als in A[X] such that deg(P) 6 = and deg(Q) 6 <. Write P = 0=X=+· · ·+ 00

Page 111: (MOSTLY)COMMUTATIVE ALGEBRA

2.7. RESULTANTS AND ANOTHER THEOREM OF BÉZOUT 99

and Q = b_mX^m + · · · + b_0, for a_0, . . . , a_n, b_0, . . . , b_m ∈ A. The resultant (in sizes (n, m)) of (P, Q) is defined as the determinant

Res_{n,m}(P, Q) = det(R_{ij}),

where (R_{ij}) is the square matrix of size m + n whose entry in row i and column j is R_{ij} = a_{i−j} for 1 ≤ j ≤ m, and R_{ij} = b_{i−j+m} for m + 1 ≤ j ≤ m + n, with the conventions a_k = 0 for k ∉ {0, . . . , n} and b_k = 0 for k ∉ {0, . . . , m}.

(Precisely, the column vector (a_0, . . . , a_n) is copied m times, each time shifted down by one row, then the column vector (b_0, . . . , b_m) is copied n times, each time shifted down by one row.)
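As a quick illustration of the definition, take m = n = 1, P = a_1X + a_0 and Q = b_1X + b_0: the matrix consists of one “a” column and one “b” column, and

Res_{1,1}(P, Q) = a_0b_1 − a_1b_0.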

Remark (2.7.2). — Let P and Q ∈ A[X] be polynomials as above, and let f : A → B be a morphism of rings. Let P^f and Q^f be the polynomials in B[X] deduced from P and Q by applying f to their coefficients. One has deg(P^f) ≤ n and deg(Q^f) ≤ m. The resultant Res_{n,m}(P, Q) is the determinant of the matrix of the definition, while the resultant Res_{n,m}(P^f, Q^f) is the determinant of the matrix obtained by applying f to each entry. Since the determinant is a polynomial expression in the entries, we obtain the equality

Res_{n,m}(P^f, Q^f) = f(Res_{n,m}(P, Q)).

Proposition (2.7.3). — Let K be a field. Let P, Q be two polynomials in K[X], let n, m be strictly positive integers such that deg(P) ≤ n and deg(Q) ≤ m. Then Res_{n,m}(P, Q) = 0 if and only if
– either P and Q are not coprime;
– or a_n = b_m = 0.


Proof. — If p is any integer ≥ 0, let K[X]_p be the K-vector space of polynomials of degree ≤ p. The family (1, X, . . . , X^p) is a basis of K[X]_p, hence this space has dimension p + 1. Let us observe that Res_{n,m}(P, Q) is the determinant of the linear map

φ : K[X]_{m−1} × K[X]_{n−1} → K[X]_{m+n−1},  (U, V) ↦→ UP + VQ,

in the bases (1, . . . , X^{m−1}; 1, . . . , X^{n−1}) of K[X]_{m−1} × K[X]_{n−1} and (1, X, . . . , X^{m+n−1}) of K[X]_{m+n−1}. Consequently, Res_{n,m}(P, Q) = 0 if and only if φ is not invertible. We shall compute the kernel of φ.

Assume first that P = 0 or Q = 0. Then some column of this matrix is zero, hence Res_{n,m}(P, Q) = 0.

Let us now assume that P and Q are both nonzero and let D be their gcd; one has D ≠ 0. We can thus write P = DP1 and Q = DQ1, where P1 and Q1 are two coprime polynomials in K[X]. Let (U, V) ∈ Ker(φ). One has UP + VQ = 0, hence UP1 + VQ1 = 0 since D ≠ 0. Since P1 and Q1 are coprime (see §2.5.10), we obtain that Q1 divides U and P1 divides V. We can thus write U = Q1S and V = P1T, for two polynomials S, T ∈ K[X]. Then UP1 + VQ1 = P1Q1(S + T), so that T = −S, U = Q1S and V = −P1S. Since U ∈ K[X]_{m−1} and V ∈ K[X]_{n−1}, one has deg(S) ≤ m − 1 − deg(Q1) and deg(S) ≤ n − 1 − deg(P1). Now,

m − 1 − deg(Q1) = m − deg(Q) + deg(Q) − deg(Q1) − 1 = (m − deg(Q)) + deg(D) − 1,

and, similarly,

n − 1 − deg(P1) = (n − deg(P)) + deg(D) − 1.

Set

s = inf(n − deg(P), m − deg(Q)),

so that s = 0 unless a_n = b_m = 0, in which case s ≥ 1. Conversely, every element of K[X]_{m−1} × K[X]_{n−1} of the form (Q1S, −P1S), for S ∈ K[X]_{s+deg(D)−1}, belongs to Ker(φ).

This shows that Res_{n,m}(P, Q) = 0 if and only if s + deg(D) > 0, that is, if and only if either a_n = b_m = 0 or deg(D) > 0. □
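For example, in Q[X], the polynomials P = X² − 1 and Q = X² − 3X + 2 have the common factor X − 1, so Res_{2,2}(P, Q) = 0; on the other hand, P = X² − 2 and Q = X² − 3 are coprime with invertible leading coefficients, so Res_{2,2}(P, Q) ≠ 0.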

Corollary (2.7.4). — Let K be an algebraically closed field, and let A = K[Y]; let us identify the ring A[X] with K[X, Y] (remark 1.3.10). Let P, Q be


two polynomials in K[X, Y] = A[X]. Let m, n be positive integers such that deg_X(P) ≤ n and deg_X(Q) ≤ m, and let us write

P = P_n(Y)X^n + · · · + P_0(Y) and Q = Q_m(Y)X^m + · · · + Q_0(Y),

where P_0, . . . , P_n, Q_0, . . . , Q_m ∈ K[Y]. Let R = Res_{n,m}(P, Q) ∈ K[Y] be the resultant in sizes (n, m) of the pair (P, Q). Then an element y ∈ K is a root of R if and only if
– either the polynomials P(X, y) and Q(X, y) have a common root in K;
– or P_n(y) = Q_m(y) = 0.

Proof. — By the definition of the resultant (remark 2.7.2), we have

R(y) = (Res_{n,m}(P, Q))(y) = Res_{n,m}(P(X, y), Q(X, y)).

It thus suffices to apply the preceding proposition to the polynomials P(X, y) and Q(X, y) of K[X]. □
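For example, take P = X² + Y² − 1 and Q = X − Y in C[X, Y], viewed as polynomials in X with deg_X(P) = 2 and deg_X(Q) = 1. A direct computation of the 3 × 3 determinant gives R = Res_{2,1}(P, Q) = 2Y² − 1, whose roots y = ±1/√2 are exactly the ordinates of the two intersection points (±1/√2, ±1/√2) of the circle and the line.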

Theorem (2.7.5) (Bézout). — Let K be an algebraically closed field. Let P, Q be two coprime polynomials in K[X, Y], with total degrees p and q respectively. Then the set of common roots of P and Q in K², that is, the pairs (x, y) ∈ K² such that P(x, y) = Q(x, y) = 0, has at most pq elements; in particular, it is finite.

Proof. — Since P and Q are coprime in the ring K[X, Y], they remain coprime in K(Y)[X] (see corollary 2.6.4). By proposition 2.7.3, applied to P and Q viewed as polynomials in K(Y)[X], their resultant with respect to X is a nonzero polynomial R_Y of K[Y]. Consequently, there are only finitely many possibilities for the ordinates y of the common roots (x, y) of P and Q. Exchanging the roles of X and Y, we prove similarly that there are only finitely many possibilities for the abscissae x of these common roots. It follows that the set Σ of common roots of P and Q is finite.

We now show that Card(Σ) is less than or equal to the product pq of the degrees of P and Q. To that end, we make a linear change of variables so that any horizontal line contains at most one point of Σ. Since there are only finitely many directions to avoid, this may be done. This modifies the polynomials P and Q, but not their degrees p and q.

Let us write

P = P_n(Y)X^n + · · · + P_0(Y) and Q = Q_m(Y)X^m + · · · + Q_0(Y),


where P_n and Q_m are nonzero polynomials. Let R = Res_{n,m}(P, Q) (resultant with respect to X). We know that for any y ∈ K, R(y) = 0 if and only if either P_n(y) = Q_m(y) = 0, or y is the ordinate of a point of Σ. It thus suffices to show that deg(R) ≤ pq.

We first observe that for any integer i, deg(P_i) ≤ p − i and deg(Q_i) ≤ q − i. Let us write out the entry R_{ij} at row i and column j of the matrix whose determinant is R:
– for 1 ≤ j ≤ m, one has R_{ij} = P_{i−j} when 0 ≤ i − j ≤ n, and R_{ij} = 0 otherwise;
– for m + 1 ≤ j ≤ m + n, one has R_{ij} = Q_{i−j+m} when 0 ≤ i − j + m ≤ m, and R_{ij} = 0 otherwise.
In particular, deg(R_{ij}) is bounded above by

deg(R_{ij}) ≤ p − i + j if 1 ≤ j ≤ m,  and  deg(R_{ij}) ≤ q − m − i + j if m + 1 ≤ j ≤ m + n.

The determinant R is a sum of products of the form ∏_{j=1}^{m+n} R_{σ(j)j}, for all permutations σ of {1, . . . , m + n}. The degree of such a product is bounded above by

∑_{j=1}^{m+n} deg(R_{σ(j)j}) ≤ ∑_{j=1}^{m} (p − σ(j) + j) + ∑_{j=m+1}^{m+n} (q − m − σ(j) + j)
 ≤ pm + n(q − m) − ∑_{j=1}^{m+n} σ(j) + ∑_{j=1}^{m+n} j
 ≤ pq − (p − n)(q − m) ≤ pq.

Consequently, deg(R) ≤ pq and the theorem is proved. □

Remark (2.7.6). — There is a more precise version of this theorem of Bézout that takes into account the multiplicity of common roots (for example, if the curves with equations P and Q are tangent at some intersection point, this point will have multiplicity ≥ 2), as well as the possible common roots “at infinity”. Then, the number of common roots is exactly pq.
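For example, two coprime polynomials of total degree 2 (two conics without a common component) have at most 4 common zeroes in K², and a line meets a curve of degree d defined by a polynomial coprime to it in at most d points.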

Let us now give two additional properties of the resultant.


Proposition (2.7.7). — Let A be a commutative ring, let P, Q be two polynomials in A[X], let n, m be integers such that deg(P) ≤ n and deg(Q) ≤ m. Then the resultant Res_{n,m}(P, Q) belongs to the ideal (P, Q)A[X] ∩ A of A.

Proof. — We may compute the determinant defining the resultant Res_{n,m}(P, Q) in any (commutative) overring of A, in particular in A[X]. Let us thus add to the first row X times the second one, X² times the third one, etc. We obtain that Res_{n,m}(P, Q) is the determinant of a matrix with coefficients in A[X] whose first row is

P, XP, . . . , X^{m−1}P, Q, XQ, . . . , X^{n−1}Q.

If we expand the determinant with respect to this row, we see that Res_{n,m}(P, Q) has the form UP + VQ, for two polynomials U and V in A[X]. This shows that Res_{n,m}(P, Q) belongs to the ideal (P, Q)A[X] generated by P and Q in A[X]. Since it also belongs to A, this concludes the proof of the proposition. □
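For example, for P = X − a and Q = X − b in A[X], one finds Res_{1,1}(P, Q) = b − a = P − Q, which indeed lies in (P, Q)A[X] ∩ A.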

Proposition (2.7.8). — Let A be a commutative ring and let P, Q be two split polynomials of A[X]: P = a_n∏_{i=1}^{n}(X − t_i) and Q = b_m∏_{j=1}^{m}(X − u_j). Then

Res_{n,m}(P, Q) = (−1)^{mn} a_n^m b_m^n ∏_{i,j} (t_i − u_j) = b_m^n ∏_{j=1}^{m} P(u_j) = (−1)^{mn} a_n^m ∏_{i=1}^{n} Q(t_i).

Proof. — It is obvious that the three formulae written to the right of the first equality sign are pairwise equal. We shall prove that they are equal to Res_{n,m}(P, Q) by induction on n. If n = 0, then P = a_0, hence Res_{0,m}(P, Q) = a_0^m, so that the formula holds in this case. Let us now show, by performing linear combinations on the resultant matrix, that

Res_{n+1,m}((X − t)P, Q) = (−1)^m Q(t) Res_{n,m}(P, Q).

Let us indeed write P = a_nX^n + · · · + a_0. Then

(X − t)P = a_nX^{n+1} + (a_{n−1} − ta_n)X^n + · · · + (a_0 − ta_1)X − ta_0,


and Res_{n+1,m}((X − t)P, Q) is the determinant of the matrix of size m + n + 1 obtained, as in definition 2.7.1, by copying the column vector (−ta_0, a_0 − ta_1, . . . , a_{n−1} − ta_n, a_n) of the coefficients of (X − t)P m times, each copy shifted down by one row, followed by the column vector (b_0, . . . , b_m) copied n + 1 times, each copy shifted down by one row. (There are m “a” columns and n + 1 “b” columns.) Beginning from the bottom, let us add to each row t times the next one. This does not change the determinant, and the first row becomes

(0, . . . , 0, b_0 + tb_1 + · · · + t^m b_m, t(b_0 + tb_1 + · · · ), . . . , t^n(b_0 + · · · )) = (0, . . . , 0, Q(t), tQ(t), . . . , t^nQ(t)).

We observe that Q(t) is a factor of each entry of the first row, so that Res_{n+1,m}((X − t)P, Q) is equal to Q(t) times a determinant whose first row is (0, . . . , 0, 1, t, . . . , t^n).


Beginning from the right, we then may subtract from each “b” column t times the preceding one; we then get a determinant whose first row is (0, . . . , 0, 1, 0, . . . , 0), the entry 1 lying in column m + 1, and such that deleting the first row and the (m + 1)-st column leaves exactly the matrix defining Res_{n,m}(P, Q). It now suffices to expand the determinant with respect to the first row, and we obtain

Res_{n+1,m}((X − t)P, Q) = (−1)^m Q(t) Res_{n,m}(P, Q).

This concludes the proof of the proposition by induction. □
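For example, over R the polynomials P = X² − 2 and Q = X² − 3 are split, and the proposition gives Res_{2,2}(P, Q) = Q(√2)Q(−√2) = (−1)(−1) = 1; in accordance with proposition 2.7.7, one indeed has 1 = P − Q ∈ (P, Q)Z[X] ∩ Z.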

Exercises

1-exo599) a) Show that the ideal (2, X) of the ring Z[X] is not principal.
b) Let A be a commutative ring such that the ring A[X] is a principal ideal domain.

Show that A is a field.

2-exo569) a) Show that the set of continuous functions with compact support, and the set of functions which vanish at every large enough integer, are ideals of the ring 𝒞(R) of continuous functions on the real line R. Prove that they are not contained in any ideal M_x, for x ∈ R.
b) Let A be the ring of holomorphic functions on a neighborhood of the closed unit disk. Show that any ideal of A is generated by a polynomial P ∈ C[z] whose roots have modulus ≤ 1. Prove that the maximal ideals of A are the ideals (z − a), for a ∈ C such that |a| ≤ 1.
c) (Generalization.) Let K be a compact, connected and nonempty subset of C and let ℋ be the ring of holomorphic functions on an open neighborhood of K. Show that the ring ℋ is a principal ideal domain. Show that the maximal ideals of ℋ are the ideals (z − a), for a ∈ K.

3-exo571) Let A be a commutative local ring (definition 2.1.8). Let I and J be two ideals of A, and let a ∈ A be a regular element such that IJ = (a).
a) Show that there exist x ∈ I and y ∈ J such that xy = a. Check that x and y are regular.
b) Deduce from this that I = (x) and J = (y).
4-exo572) Let A be the product ring of all fields Z/pZ, where p runs over the set of all prime numbers. Let N be the subset of A consisting of all families (a_p) such that a_p = 0 for all but finitely many prime numbers p; let B be the quotient ring A/N.


a) Let M be a maximal ideal of A which does not contain N. Show that there exists a prime number q such that M is the set of all families (a_p) with a_q = 0. What is the quotient ring A/M?
b) Let p be a prime number; show that pB = B.
c) Show that the ring B admits a unique structure of a Q-algebra.
d) Let M be a maximal ideal of A containing N. Show that the field A/M has characteristic 0.

5-exo620) Let k be a field, let A be the ring k^N (with termwise addition and multiplication) and let N be the subset k^(N) of all almost null sequences.
a) Show that N is an ideal of A. Explain why there exists a maximal ideal M of A which contains N. Then set K = A/M. Show that K is a field extension of k.
b) Let a = (a_1, . . . , a_m) ∈ A^m be an element of N^m. Show that the set of integers n such that a_{i,n} ≠ 0 for some i ∈ {1, . . . , m} is finite. (Otherwise, construct b_1, . . . , b_m ∈ A such that ∑_{i=1}^{m} b_i a_i maps to 1 in K.)
c*) If k is infinite, show that the cardinality of K is uncountable.
d) Assuming that k is algebraically closed, show that K is algebraically closed too.
e) Let I be an ideal of k[X_1, . . . , X_n] and let J be the K-vector subspace of K[X_1, . . . , X_n] generated by the elements of I. Show that it is an ideal of K[X_1, . . . , X_n] and that J ≠ (1) if I ≠ (1).
f ) Combine the previous construction with the special case of Hilbert's Nullstellensatz proved in the text to derive the general case. (You may admit that any ideal of k[X_1, . . . , X_n] is finitely generated.)

6-exo256) Let A be a commutative ring.
a) Let I and J be ideals of A such that V(I) ∩ V(J) = ∅ in Spec(A). Show that I + J = A.
b) Let I and J be ideals of A such that V(I) ∪ V(J) = Spec(A). Show that every element of I ∩ J is nilpotent.
c) Let I and J be ideals of A such that V(I) ∪ V(J) = Spec(A) and V(I) ∩ V(J) = ∅. Show that there exists an idempotent e ∈ A (this means that e² = e) such that I = (e) and J = (1 − e).
d) Show that Spec(A) is connected if and only if the only idempotents of A are 0 and 1.

7-exo856) Let R be an integral domain which is not a field and let a ∈ R ∖ {0, 1} be such that the fraction ring R_a is a field. Prove that 1 − a is a unit.
8-exo259) One says that a non-empty topological space T is irreducible if for any closed subsets Z and Z′ of T such that T = Z ∪ Z′, either Z = T or Z′ = T. Let A be a commutative ring. In this exercise, we study the irreducible closed subsets of Spec(A).
a) Let P be a prime ideal of A. Show that V(P) is irreducible.
b) Show that Spec(A) is irreducible if and only if it has exactly one minimal prime ideal.


c) Let I be an ideal of A such that V(I) is irreducible. Show that there exists a unique minimal prime ideal P containing I and that V(I) = V(P).
9-exo261) An irreducible closed subset of a topological space is called an irreducible component if it is maximal.
a) Let T be a topological space and let A be a closed irreducible subset of T. Show that the set of closed irreducible subsets of T containing A is inductive; conclude that there exists an irreducible component of T that contains A.
b) Deduce from the preceding question that every prime ideal P of A contains a minimal prime ideal of A.

10-exo260) Let K be an algebraically closed field. Let Z be a subset of K^n which is closed for the Zariski topology.
a) Show that the following properties are equivalent:
(i) Z is irreducible;
(ii) There exists a prime ideal P of K[T_1, . . . , T_n] such that Z = 𝒱(P);
(iii) The ideal ℐ(Z) is prime.
b) Show that a closed subset Z′ of K^n is an irreducible component of Z if and only if ℐ(Z′) is a minimal prime ideal containing ℐ(Z).
11-exo645) Let E be a field and let X be an infinite set. Let A = E^X be the ring of functions X → E. For f ∈ A, let Z(f) = {x ∈ X ; f(x) = 0}.
a) Show that a function f ∈ A is invertible if and only if it does not vanish.
b) Let I be an ideal of A such that I ≠ A and let ℱ_I = {Z(f) ; f ∈ I}. Show that ℱ_I is a filter of subsets of X: (i) ∅ ∉ ℱ_I; (ii) if Y_1, Y_2 are subsets of X such that Y_1 ⊂ Y_2 and Y_1 ∈ ℱ_I, then Y_2 ∈ ℱ_I; (iii) if Y_1, Y_2 are elements of ℱ_I, then Y_1 ∩ Y_2 ∈ ℱ_I.
c) Let I, J be ideals of A such that I ⊂ J ≠ A; prove that ℱ_I ⊂ ℱ_J.
d) Let P be a prime ideal of A. Prove that the filter ℱ_P is an ultrafilter: for every subset Y of X, either Y or X ∖ Y belongs to ℱ_P.
e) Let ℱ be an ultrafilter on X. Prove that the set of all f ∈ A such that Z(f) ∈ ℱ is a maximal ideal of A.
f ) Assume that I is a finitely generated prime ideal. Prove that there exists x ∈ X such that I = {f ∈ A ; f(x) = 0}, that I is principal, and that ℱ_I = {Y ⊂ X ; x ∈ Y}. (One says that ℱ_I is a principal ultrafilter.)
g) Let I be the set of all f ∈ A such that X ∖ Z(f) is finite. Prove that I is an ideal of A and I ≠ A. Prove that if P is a prime ideal that contains I, then P is not principal.

12-exo823) Let P, Q, R ∈ C[T_1, . . . , T_n] be polynomials such that P does not divide R.
a) Assume that P(a) = 0 for every a ∈ C^n such that R(a) ≠ 0. Prove that P = 0.
b) Assume that P(a) = 0 for every a ∈ C^n such that R(a) ≠ 0 and Q(a) = 0. Prove that if Q is irreducible, then Q divides P.
c) Under the hypothesis of b), what can be deduced if Q is no longer assumed to be irreducible?


13-exo646) A ring A is called a von Neumann ring if for every a ∈ A, there exists x ∈ A such that a = axa.
a) Prove that a division ring is a von Neumann ring.
b) Let (A_i)_{i∈I} be a family of von Neumann rings. Then the product ring A = ∏_{i∈I} A_i is a von Neumann ring.
c) Prove that a local von Neumann ring is a division ring.
d) Let K be a division ring and let V be a K-vector space. Prove that End_K(V) is a von Neumann ring.

14-exo012) Let A be the ring C[X, Y]/(XY − 1). Let x and y be the images of X and Y in A.
a) Show that x is invertible in A. Show that any nonzero element a ∈ A can be written uniquely in the form a = x^m P(x), for some integer m ∈ Z and some polynomial P ∈ C[T] whose constant term is nonzero.
b) For a, m and P as above, set e(a) = deg(P). Show that the map e : A ∖ {0} → N is a gauge on A, hence that A is a euclidean ring.
c) Conclude that A is a principal ideal domain.

15-exo644) Let A be the set of all complex numbers of the form a + bi, for a and b ∈ Z. Let K be the set of all complex numbers of the form a + bi, for a and b ∈ Q.
a) Show that K is a subfield of C and that A is a subring of K. Show also that any element of A (resp. of K) can be written in a unique way in the form a + bi with a, b ∈ Z (resp. a, b ∈ Q). (One says that (1, i) is a basis of A as a Z-module, and a basis of K as a Q-vector space.)
b) For x = a + bi ∈ C, set N(x) = |x|² = a² + b². Show that N(xy) = N(x)N(y) for any x, y ∈ K.
c) For x = a + bi ∈ K, set {x} = {a} + {b}i, where {t} denotes the integer which is the closest to the real number t, chosen to be smaller than t if there are two such integers. Show that N(x − {x}) ≤ 1/2.
d) Show that N is a gauge on A, hence that A is a euclidean ring.

16-exo597) Let A be the set of all real numbers of the form a + b√2, for a and b ∈ Z. Let K be the set of all real numbers of the form a + b√2, for a and b ∈ Q.
a) Show that K is a subfield of R and that A is a subring of K. Show also that any element of A (resp. of K) can be written in a unique way in the form a + b√2 with a, b ∈ Z (resp. a, b ∈ Q). — One says that (1, √2) is a basis of A as a Z-module, and a basis of K as a Q-vector space.
b) For x = a + b√2 ∈ K, set N(x) = |a² − 2b²|. Show that N(xy) = N(x)N(y) for any x, y ∈ K.
c) For x = a + b√2 ∈ K, set {x} = {a} + {b}√2, where {t} denotes the integer which is the closest to the real number t, chosen to be smaller than t if there are two such integers. Show that N(x − {x}) ≤ 1/2.


d) Show that N is a gauge on A, hence that A is a euclidean ring.

17-exo598) Let ω be the complex number given by ω = (1 + i√3)/2; let K be the set of all complex numbers of the form a + bω, with a, b ∈ Q, and let A be the set of all such elements of K where a, b ∈ Z.
a) Show that K is a subfield of C and that A is a subring of K.
b) Show that A is a euclidean ring for the gauge given by z ↦ |z|².
18-exo605) Let A be a domain and let δ : A ∖ {0} → N be a map which satisfies the second property of the definition of a euclidean gauge, namely: for any a, b ∈ A such that b ≠ 0, there exist q and r ∈ A such that a = bq + r and, moreover, either r = 0 or δ(r) < δ(b).
For any a ∈ A ∖ {0}, set δ′(a) = inf_{b≠0} δ(ab).
Show that δ′ is a gauge on A, hence that A is a euclidean ring.

19-exo611) Let A be a euclidean ring, with gauge δ.
a) Let a ∈ A be a nonzero, non-unit element of minimal gauge. Show that for any x ∈ A which is not a multiple of a, there exists a unit u ∈ A such that 1 − ux is a multiple of a.
b) Let n be the cardinality of the set of units in A. Show that there exists a maximal ideal M ⊂ A such that the cardinality of the quotient field A/M is at most n + 1.

20-exo612) Let A be the subring of C generated by α = (1 + i√19)/2.
a) Show that α² = α − 5. Deduce that any element of A can be written uniquely as a + αb, for a, b ∈ Z.
b) Show that for any a ∈ A, |a|² is an integer. Show that a ∈ A is a unit if and only if |a|² = 1. Conclude that A^× = {−1, +1}.
c) Let M be any maximal ideal of A. Show that there exists a prime number p such that p ∈ M. Show that A/M has cardinality p² if P = X² − X + 5 is irreducible in Z/pZ, and cardinality p otherwise.
d) Show that the polynomial X² − X + 5 is irreducible in Z/2Z and in Z/3Z. Conclude that the cardinality of A/M is at least 4.
e) Show that A is not a euclidean ring.

21-exo872) Let X be a compact metric space and let A = 𝒞(X) be the ring of real valued, continuous functions on X.
a) If I is an ideal of A, let V(I) be the set of all points x ∈ X such that f(x) = 0 for all f ∈ I. Prove that V(I) is a closed subset of X.
b) Prove that the map f ↦ f^{−1}(0) gives a bijection between idempotents of A and closed open subsets of X.
c) Let P be a prime ideal of A. Prove that there exists a unique point x ∈ X such that V(P) = {x}.
d) Let I be a finitely generated ideal of A such that I = √I. Prove that V(I) is open in X. Prove that there exists an idempotent e ∈ A such that I = eA.


22-exo873) Let X be a metric space and let A = 𝒞(X) be the ring of real valued continuous functions on X. Let P be a prime ideal of A.
a) Let f, g ∈ A; assume that 0 ≤ f ≤ g and g ∈ P. Prove that there exists h ∈ A such that f² = hg. Conclude that f ∈ P.
b) Prove that the (partial) order on A induces a total order on the quotient ring A/P.
c) Let Q, Q′ be prime ideals of A containing P. Prove that either Q ⊂ Q′ or Q′ ⊂ Q (Kohls, 1958).

23-exo888) Let A be a commutative von Neumann ring (see exercise 2/13).
a) Prove that 0 is the only nilpotent element of A.
b) Let S be a multiplicative subset of A. Prove that the fraction ring S^{−1}A is a von Neumann ring.
c) Prove that all prime ideals of A are maximal.

24-exo600) Let A be the set of all complex numbers of the form a + bi√5, for a and b ∈ Z.
a) Show that A is a subring of C.
b) Show that the only units of A are 1 and −1.
c) Show that 2, 3, 1 + i√5 and 1 − i√5 are irreducible in A.
d) Observing that 2 · 3 = (1 + i√5)(1 − i√5), prove that A is not a unique factorization domain; in particular, it is not a principal ideal domain.

25-exo625) Let p be a prime number, let n be an integer such that n ≥ 2 and let P be the polynomial P = X^n + X + p.
a) Assume that p ≠ 2. Show that any complex root z of P satisfies |z| > 1.
b) Still assuming p ≠ 2, show that P is irreducible in Z[X].
c) Assume now that p = 2. If n is even, show that P is irreducible in Z[X]. If n is odd, show that X + 1 divides P but that P/(X + 1) is irreducible in Z[X].
d) More generally, let P = a_nX^n + · · · + a_1X + a_0 be any polynomial with integer coefficients such that |a_0| is a prime number strictly greater than |a_1| + · · · + |a_n|. Show that P is irreducible in Z[X].
26-exo626) Let n be any integer ≥ 2 and let S be the polynomial X^n − X − 1. The goal of this exercise is to show, following Selmer (Math. Scand. 4 (1956), p. 287–302), that S is irreducible in Z[X].
a) Show that S has n distinct roots in C.
b) For any polynomial P ∈ C[X] such that P(0) ≠ 0, set

φ(P) = ∑_{j=1}^{m} (z_j − 1/z_j),

where z_1, . . . , z_m are the complex roots of P, repeated according to their multiplicities. Compute φ(P) in terms of the coefficients of P. In particular, compute φ(S).


c) If P and Q are two polynomials in C[X] such that P(0)Q(0) ≠ 0, show that φ(PQ) = φ(P) + φ(Q).
d) For any complex root z of S, show that

2ℜ(z − 1/z) > 1/|z|² − 1.

(Set z = re^{iθ} and estimate cos(θ) in terms of r.)
e) For any positive real numbers x_1, . . . , x_m with ∏_{j=1}^{m} x_j = 1, establish the inequality ∑_{j=1}^{m} x_j ≥ m.
f ) Let P and Q be polynomials in Z[X] of strictly positive degrees such that S = PQ. Show that |P(0)| = 1 and that φ(P) is a strictly positive integer. Deduce a contradiction and conclude that S is irreducible in Z[X].
27-exo017) Eisenstein's irreducibility criterion. Let A be an integral domain, let K be its fraction field and let P be a prime ideal of A. Let f(X) = ∑_{0≤k≤n} a_kX^k be a polynomial of degree n ≥ 1 with coefficients in A. We assume that a_0, . . . , a_{n−1} ∈ P, a_n ∉ P and a_0 ∉ P².
a) Let g ∈ A[X] be a non-constant polynomial that divides f. Prove that deg(g) = n. (Consider a factorization f = gh and reduce it modulo P.)
b) Show that f is irreducible in K[X].
c) Is f necessarily irreducible in A[X]?

28-exo623) Let P = X^n + a_{n−1}X^{n−1} + · · · + a_0 be a monic polynomial in Z[X] such that a_0 ≠ 0 and

|a_{n−1}| > 1 + |a_{n−2}| + · · · + |a_0|.

a) Using Rouché's theorem from the theory of holomorphic functions, show that P has exactly one complex root with absolute value > 1, and that all other roots have absolute value < 1.
b) Show that P is irreducible in Z[X] (Perron's theorem).

29-exo849) Let K be a field. The Newton polytope N_f of a polynomial f ∈ K[T_1, . . . , T_n] in n indeterminates is the convex hull in R^n of the set of exponents of all monomials that appear in f. If C is a convex subset of R^n, a point u ∈ C is called a vertex if for every v, w ∈ C such that u ∈ [v, w] (the line segment with endpoints v and w in R^n), one has v = w = u.
a) When n = 2, describe all possible Newton polytopes of polynomials of degree ≤ 3.
Let f, g, h ∈ K[T_1, . . . , T_n] be polynomials such that f = gh.
b) Let u be a vertex of N_g + N_h; prove that there exists a unique pair (v, w), with v ∈ N_g and w ∈ N_h, such that u = v + w. Prove that v is a vertex of N_g and that w is a vertex of N_h; conclude that u ∈ N_f.
c) Prove that N_f = N_g + N_h.


d) We assume that n = 2. Let a, b, u, v be coprime integers. Describe the set of polynomials f ∈ K[T_1, T_2] whose Newton polytope is the triangle with vertices (0, 0), (a, b) and (u, v). Prove that these polynomials are irreducible.
30-exo198) One considers the C-algebra C[X, Y] of polynomials with complex coefficients in two variables X and Y. Let I be the ideal of C[X, Y] generated by the polynomial Y² − X³ + X and let A be the quotient ring C[X, Y]/I. One writes x and y for the images of X and Y in A.
The goal of the exercise is to show that A is not a unique factorization domain.
a) Show that Y² − X³ + X is an irreducible polynomial and that A is a domain.
b) Show that the canonical morphism from C[T] to A which sends T to x is injective. Deduce from this that the subring C[x] of A generated by C and x is isomorphic to the ring of polynomials C[T]. We define the degree of an element of C[x] to be the degree of its unique preimage in C[T].
c) Let a ∈ A. Show that there exists a unique pair (p, q) of elements of C[x] such that a = p + qy.
d) Show that there exists a unique C-linear endomorphism σ of A such that σ(x) = x and σ(y) = −y. Show that σ is an automorphism of A. For a ∈ A, show that σ(a) = a if and only if a ∈ C[x].
e) For any a ∈ A, set N(a) = aσ(a). Show that N(a) ∈ C[x], that N(1) = 1, and that deg(N(a)) ≠ 1. Show also that N(ab) = N(a)N(b) for any a, b ∈ A.
f ) Deduce from questions b), c) and e) that C ∖ {0} is the set of units of A.
g) Show that x, y, 1 − x and 1 + x are irreducible in A.
h) Show that y does not divide x, 1 + x, nor 1 − x. Conclude that A is not a unique factorization domain.

31-exo627) Let A be a commutative ring.
a) Let P and Q be polynomials in A[X]. Assume that the coefficients of P, resp. those of Q, generate the unit ideal (1) of A. Show that the coefficients of PQ generate the unit ideal too. (Modify the proof of proposition 2.6.3: observe that the hypothesis implies that for any maximal ideal M of A, P and Q are nonzero modulo M.)
b) Show that there are polynomials

W_i ∈ Z[P_0, . . . , P_n, Q_0, . . . , Q_n, U_0, . . . , U_n, V_0, . . . , V_n]

(for 0 ≤ i ≤ 2n) such that if P = ∑ p_iX^i and Q = ∑ q_iX^i are polynomials of degree at most n, and 1 = ∑ p_iu_i and 1 = ∑ q_iv_i are Bézout relations for their coefficients, then, writing R = PQ = ∑ r_iX^i, the coefficients of R satisfy a Bézout relation

1 = ∑_{i=0}^{2n} r_i W_i(p_0, . . . , p_n, q_0, . . . , q_n, u_0, . . . , u_n, v_0, . . . , v_n).

c) For P ∈ A[X], let ict(P) be the ideal of A generated by the coefficients of P. By the first question, we have ict(PQ) = A if ict(P) = ict(Q) = A. If A is a principal ideal domain, ict(P) is the ideal generated by ct(P), hence ict(PQ) = ict(P) ict(Q) by Gauss's


lemma. Show however that the equality ict(PQ) = ict(P) ict(Q) does not hold in general, for example if A = C[U, V], P = UX + V and Q = VX + U.

32-exo801) Let A be an integral domain and let S be a multiplicative subset of A such that 0 ∉ S. This exercise studies the relations between A and S^{−1}A being unique factorization domains (theorems of Nagata).
a) Assume that A is a unique factorization domain; prove that S^{−1}A is a unique factorization domain.
b) Let p ∈ A be such that the ideal (p) is prime. Assume that S = {p^n ; n ∈ N} and that the ring S^{−1}A is a unique factorization domain. Prove that A is a unique factorization domain.
c) Assume that every increasing sequence of principal ideals of A is stationary. Prove that if S^{−1}A is a unique factorization domain, then A is a unique factorization domain.
d) We consider the particular case of a polynomial ring A = R[T], where R is a unique factorization domain. Taking S = R ∖ {0}, deduce from the preceding question that A is a unique factorization domain.
e) Let k be a field in which −1 is not a square and let A = k[X, Y, Z]/(X² + Y² + Z² − 1). Prove that A is a unique factorization domain. (Take S = {(1 − z)^n ; n ∈ N}, where z is the class of Z in A.)
f ) Let k be an algebraically closed field of characteristic different from 2, let n ≥ 5 and let f = X_1² + · · · + X_n². Prove that k[X_1, . . . , X_n]/(f) is a unique factorization domain. (Prove that X_3² + · · · + X_n² is irreducible, then take S = {(x_1 + ix_2)^m ; m ∈ N}, where i ∈ k satisfies i² = −1.)

33-exo846) Let K be a field.
a) Let a, b ∈ K^× and d ≥ 1. Prove that every irreducible factor of aX^d + bY^d is homogeneous.
b) Let a, b, c ∈ K^× and d ≥ 1. Prove that aX^d + bY^d + c is irreducible.
c) Let P_1, P_2, . . . , P_n be non-constant polynomials in K[T], of degrees d_1, . . . , d_n. Let P = P_1(T_1) + · · · + P_n(T_n). Prove that P is irreducible if n = 2 and d_1, d_2 are coprime integers. (Assign weight d_1 to T_1 and d_2 to T_2, hence m_1d_1 + m_2d_2 to a monomial T_1^{m_1}T_2^{m_2}, and consider the monomials of largest weight in a factorization of P.)
d) Prove that P is irreducible if n ≥ 3 (a theorem of Tverberg (1964)).


CHAPTER 3

MODULES

Modules are to rings what vector spaces are to fields, and this introductory chapter studies them from the angle of linear algebra. The first sections set up a number of definitions, which are the close companions of similar definitions for vector spaces: modules and their submodules; morphisms from one module to another (the analogues of linear maps), their kernels and images, isomorphisms; generating families, independent families, bases. A notable difference with the linear algebra of vector spaces is that modules do not possess a basis in general.

I also present three general constructions of modules: direct sums and

products; quotient of a module by a submodule; and localization of a module

by a multiplicative subset of the base ring. These last constructions have

universal properties which I will use systematically in the rest of the book.

These universal properties not only characterize the modules they consider;

above all, they furnish an explicit way to work with them — in some sense,

they have to be understood as mere computation rules. They also furnish an

appropriate opportunity to introduce a bit of categorical language.

As I said in the introduction, I presuppose in this book a basic acquaintance with elementary algebra, such as the theory of vector spaces. However, in case that theory has only been presented to the reader in a restricted setting (such as with base field R or C), I decided to prove the main theorems here. One excuse I may present is that I allow for general division rings and not only (commutative) fields.

Now restricting myself to commutative rings, I then discuss the theory of determinants, from the point of view of alternating multilinear forms. (I will also revisit this theory in the chapter devoted to tensor products.) As in linear


algebra over fields, determinants allow one to characterize injective, resp. surjective, endomorphisms of a free module of finite rank, but the proofs are a bit subtler. Determinants are also used to define the Fitting ideals of a finitely generated module. These ideals will reappear in the classification of modules over a principal ideal domain.

3.1. Definition of a module

Definition (3.1.1). — Let A be a ring. A right A-module is a set M endowed with two binary laws: an internal addition, M × M → M, (m, n) ↦→ m + n, and a scalar multiplication M × A → M, (m, a) ↦→ ma, subject to the following properties:
a) Axioms concerning the addition law and stating that (M, +) is an abelian group:
(i) For every m, n in M, m + n = n + m (commutativity of addition);
(ii) For every m, n, p in M, (m + n) + p = m + (n + p) (associativity of addition);
(iii) There exists an element 0 ∈ M such that m + 0 = 0 + m = m for every m ∈ M (neutral element for the addition);
(iv) For every m ∈ M, there exists n ∈ M such that m + n = n + m = 0 (existence of an opposite for the addition);
b) Axioms concerning the multiplication law:
(v) For any m ∈ M, m1 = m (the unit of A is neutral for the scalar multiplication);
(vi) For any a, b ∈ A and any m ∈ M, one has m(ab) = (ma)b (associativity of the scalar multiplication);
c) Axioms relating the ring addition, the module addition and the scalar multiplication:
(vii) For any a, b ∈ A and any m ∈ M, one has m(a + b) = ma + mb (scalar multiplication distributes over the ring addition);
(viii) For any a ∈ A and any m, n ∈ M, one has (m + n)a = ma + na (scalar multiplication distributes over the module addition).


Analogously, a left A-module is a set M endowed with two binary laws: an internal addition, M × M → M, (m, n) ↦→ m + n, and a scalar multiplication A × M → M, (a, m) ↦→ am, subject to the following properties:
a) Axioms concerning the addition law and stating that (M, +) is an abelian group:
(i) for every m, n in M, m + n = n + m (commutativity of addition);
(ii) for every m, n, p in M, (m + n) + p = m + (n + p) (associativity of addition);
(iii) there exists an element 0 ∈ M such that m + 0 = 0 + m = m for every m ∈ M (neutral element for the addition);
(iv) for every m ∈ M, there exists n ∈ M such that m + n = n + m = 0 (existence of an opposite for the addition);
b) Axioms concerning the multiplication law:
(v) for any m ∈ M, 1m = m (the unit of A is neutral for the scalar multiplication);
(vi) for any a, b ∈ A and any m ∈ M, one has (ab)m = a(bm) (associativity of the scalar multiplication);
c) Axioms relating the ring addition, the module addition and the scalar multiplication:
(vii) for any a, b ∈ A and any m ∈ M, one has (a + b)m = am + bm (scalar multiplication distributes over the ring addition);
(viii) for any a ∈ A and any m, n ∈ M, one has a(m + n) = am + an (scalar multiplication distributes over the module addition).

Endowed with its internal addition law, any (left or right) module M is in particular an abelian group. The opposite of an element m ∈ M is written −m. The neutral element of M is written 0, as is the neutral element of the ring A for the addition. If needed, one may write 0_M, 0_A to insist on their difference.
Observe that for any right A-module M and any m ∈ M, one has m0_A = m(0_A + 0_A) = m0_A + m0_A, hence m0_A = 0_M; moreover, 0_M = m0_A = m(1 − 1) = m1 + m(−1), hence −m = m(−1). Similarly, for any left A-module M and any m ∈ M, one has 0_Am = 0_M and (−1)m = −m.


3.1.2. — Let M be a right A-module, let I be a set, let (a_i)_{i∈I} be a family of elements of A and let (m_i)_{i∈I} be a family of elements of M. Assume that the family (a_i) has finite support. Then the family (m_ia_i) has finite support; its sum ∑_{i∈I} m_ia_i is called a linear combination of the elements m_i.
If I′, I′′ are disjoint subsets of I such that a_i = 0 for all i ∈ I ∖ (I′ ∪ I′′), then one has

∑_{i∈I} m_ia_i = ∑_{i∈I′} m_ia_i + ∑_{i∈I′′} m_ia_i.

Moreover, for every b ∈ A, one has

(∑_{i∈I} m_ia_i) b = ∑_{i∈I} m_i(a_ib).

Finally, if (a_i)_{i∈I} and (b_i)_{i∈I} are two families with finite support, then the family (a_i + b_i)_{i∈I} has finite support as well, and one has

∑_{i∈I} m_i(a_i + b_i) = ∑_{i∈I} m_ia_i + ∑_{i∈I} m_ib_i.

Examples (3.1.3). — a) Let A be a ring; the multiplication A × A → A endows the abelian group A with two structures of modules, one left, denoted A_s (sinister, Latin for left), and one right, denoted A_d (dexter, Latin for right). Let I be a left ideal of A; the multiplication of A, A × I → I, endows I with the structure of a left A-module. Similarly, if I is a right ideal of A, the multiplication I × A → I endows it with the structure of a right A-module.
b) Let A be a ring and let M be a left A-module. Let A^o be the opposite ring to A, with the same addition and the multiplication computed in the opposite order. Endowed with the scalar multiplication M × A^o → M given by (m, a) ↦→ am, M becomes a right A^o-module.
This correspondence allows one to deduce from a statement for right A-modules an analogous statement for left A-modules.
When A is commutative, A^o is identical to A, hence any left A-module is also a right A-module, with the same addition and the same scalar multiplication, only written in the opposite order.
c) An abelian group G can be converted to a Z-module in a unique way. Indeed, for any integer n and any element g ∈ G, ng is defined by


induction by the rules 0g = 0, ng = (n − 1)g + g if n ≥ 1, and ng = −(−n)g if n ≤ −1.

d) When A is a field, in particular commutative, the notion of an A-module coincides with that of an A-vector space, which has presumably already been studied extensively enough. We keep the terminology of a vector space when A is a division ring, thus talking of left or right A-vector spaces.
e) Let f : A → B be a morphism of rings; the scalar multiplication B × A → B given by (b, a) ↦→ bf(a) endows B with the structure of a right A-module.
More generally, if f : A → B is a morphism of rings and if M is a right B-module, the scalar multiplication M × A → M given by (m, a) ↦→ mf(a) endows M with the structure of a right A-module.
We leave it to the reader to spell out the similar statements for left A-modules.

f) Let A and B be rings. An (A, B)-bimodule is an abelian group M endowed with a structure of a left A-module and a structure of a right B-module such that a(mb) = (am)b for every a ∈ A, m ∈ M and b ∈ B.

This amounts to the datum of a structure of left A × Bo-module, or to

the datum of a structure of right Ao × B-module.

In the sequel, an A-module (without further precision) shall be understood as a right A-module, and most of the stated properties will only be proved for them. I will often leave it to the conscientious reader to prove (and sometimes even to spell out) the analogous statements for left A-modules.

Remark (3.1.4). — Let A be a ring and let M be a left A-module. For a ∈ A, let μ_a : M → M be the map given by μ_a(m) = am. This is an endomorphism of M as an abelian group. Indeed, μ_a(m + n) = a(m + n) = am + an = μ_a(m) + μ_a(n) for any m, n ∈ M.
Moreover, the map a ↦→ μ_a is a morphism of rings from A to the ring End(M) of endomorphisms of the abelian group M. Indeed, μ_0 is the zero map, μ_1 = id_M, and for any a, b ∈ A and any m ∈ M, one has μ_{a+b}(m) = (a + b)m = am + bm = μ_a(m) + μ_b(m) = (μ_a + μ_b)(m) and


μ_{ab}(m) = (ab)m = a(bm) = μ_a(bm) = μ_a ◦ μ_b(m), so that μ_{a+b} = μ_a + μ_b and μ_{ab} = μ_a ◦ μ_b.
Conversely, let M be an abelian group and let μ : A → End(M) be a morphism of rings. Let us define a scalar multiplication A × M → M by (a, m) ↦→ μ(a)(m). One checks that this enriches the abelian group (M, +) with a structure of a left A-module such that for any a ∈ A, μ(a) is the multiplication by a.
Consequently, given an abelian group M, it is equivalent to endow it with a structure of a left A-module and to give oneself a morphism of rings A → End(M). In particular, every abelian group M becomes canonically a left End_Ab(M)-module.
A structure of a right A-module on an abelian group M is equivalent to the datum of a morphism of rings A → End(M)^o.
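For instance, for A = Z there is exactly one morphism of rings Z → End(M), so we recover the statement of example 3.1.3, c): an abelian group carries a unique structure of a Z-module.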

3.1.5. — Let A be a ring and let M be an A-module.
Let m ∈ M. The set Ann_A(m) of all a ∈ A such that ma = 0 is a right ideal of A, called the annihilator of m. (Indeed, for a, b ∈ A, one has m0 = 0, m(a + b) = ma + mb = 0 if ma = mb = 0, and m(ab) = (ma)b = 0 if ma = 0.)
The set Ann_A(M) of all a ∈ A such that ma = 0 for all m ∈ M is called the annihilator of M. Since it is the intersection of all Ann_A(m), for m ∈ M, it is a right ideal of A. On the other hand, if a ∈ Ann_A(M) and b ∈ A, one has m(ba) = (mb)a = 0 for all m ∈ M, so that ba ∈ Ann_A(M). Consequently, Ann_A(M) is a two-sided ideal of A.
If Ann_A(M) = 0, one says that the A-module M is faithful.
In fact, the ideal Ann_A(M) is the kernel of the morphism μ : A → End(M), a ↦→ μ_a. In this way, M can be viewed as an (A/I)-module, for every ideal I of A such that I ⊂ Ann_A(M). Taking I = Ann_A(M), we observe that M is a faithful (A/Ann_A(M))-module.
Conversely, if M is an (A/I)-module, the composition A → A/I → End(M) endows M with the structure of an A-module whose annihilator contains I.

Definition (3.1.6). — Let A be a ring and let M be a right A-module. A

submodule of M is a subset N of M satisfying the following properties:

(i) N is an abelian subgroup of M;


(ii) for every a ∈ A and every m ∈ N, one has ma ∈ N.

There is a similar definition for left A-modules, and for (A,B)-bimodules.

Examples (3.1.7). — a) Let M be an A-module. The subsets {0} and M of M are submodules.
b) Let A be a ring. The left ideals of A are exactly the submodules of the left A-module A_s. The right ideals of A are the submodules of the right A-module A_d.
c) Let A be a division ring. The submodules of an A-vector space are its vector subspaces.

Lemma (3.1.8). — Let A be a ring, let M be a right A-module and let N be a subset of M. To show that N is a submodule of M, it suffices to show that it satisfies the following properties:
(i) 0 ∈ N;
(ii) if a ∈ A and m ∈ N, then ma ∈ N;
(iii) if m ∈ N and n ∈ N, then m + n ∈ N.
Proof. — Indeed, the second property applied to a = −1 and m ∈ N implies that −m ∈ N. Combined with the two other properties, this shows that N is a subgroup of the abelian group M. Then the second property shows that it is a submodule of M. □

Example (3.1.9). — Let A be a commutative ring and let M be an A-module. An element m ∈ M is said to be torsion if there exists a regular element a ∈ A such that ma = 0.
The set T(M) of all torsion elements of M is a submodule of M. Indeed, one has 0 ∈ T(M), because 0_M · 1_A = 0. Let m ∈ T(M) and let a ∈ A; let b be a regular element such that mb = 0. Then (ma)b = m(ab) = (mb)a = 0, hence ma ∈ T(M). Finally, let m, n ∈ T(M), let a, b ∈ A be regular elements such that ma = nb = 0; then ab is a regular element of A and (m + n)ab = (ma)b + (nb)a = 0, hence m + n ∈ T(M).
One says that M is torsion-free if T(M) = 0, and that M is a torsion A-module if M = T(M).
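For instance, the Z-modules Z/nZ (for n ≥ 1) and Q/Z are torsion modules, while Z and Q are torsion-free; the Z-module Z ⊕ (Z/2Z) is neither torsion nor torsion-free.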


3.2. Morphisms of modules

Definition (3.2.1). — Let A be a ring, let M and N be two A-modules. A morphism from M to N is a map f : M → N such that

f(ma + nb) = f(m)a + f(n)b

for every a, b ∈ A and every m, n ∈ M. One writes Hom_A(M, N) for the set of morphisms from M to N.
A morphism from M to itself is called an endomorphism of M. The set of all endomorphisms of an A-module M is denoted End_A(M).

The expressions “A-linear mapping”, or even “linear mapping”, are synonyms for “morphism of A-modules”.
The identity map id_M from an A-module M to itself is an endomorphism.
Let A be a ring, let M, N, P be three A-modules. If f : M → N and g : N → P are morphisms, then their composition g ◦ f : M → P is a morphism of A-modules.
We thus can say that A-modules and morphisms of A-modules form a category Mod_A. Moreover, if f : A → B is a morphism of rings, viewing a B-module as an A-module furnishes a functor f^* : Mod_B → Mod_A.
One says that a morphism of A-modules f : M → N is an isomorphism if there exists a morphism g : N → M such that f ◦ g = id_N and g ◦ f = id_M. Then there is exactly one such morphism g, called the reciprocal (or inverse) of f. Indeed, if f ◦ g = f ◦ h = id_N and g ◦ f = h ◦ f = id_M, then h = h ◦ id_N = h ◦ (f ◦ g) = (h ◦ f) ◦ g = id_M ◦ g = g.

One says that a morphism f : M → N is left invertible if there exists a morphism g : N → M such that g ◦ f = id_M, and that it is right invertible if there exists a morphism g : N → M such that f ◦ g = id_N. A left invertible morphism is injective, a right invertible morphism is surjective, but the converse assertions are not true in general.

Proposition (3.2.2). — For a morphism of A-modules to be an isomorphism, it is necessary and sufficient that it be bijective.

Proof. — If f : M → N is an isomorphism, with reciprocal g, then g is also the inverse bijection of f.


Conversely, let f : M → N be a bijective morphism and let g be its inverse bijection. Then g is a morphism. Indeed, for n, n′ ∈ N and a, a′ ∈ A, we have

f(g(n)a + g(n′)a′) = f(g(n))a + f(g(n′))a′ = na + n′a′,

hence g(n)a + g(n′)a′ = g(na + n′a′), so that g is linear. □

Definition (3.2.3). — Let f : M → N be a morphism of A-modules. The kernel of f, written Ker(f), is the set of all m ∈ M such that f(m) = 0.
It is the kernel of f as a morphism of abelian groups.

Proposition (3.2.4). — Let f : M → N be a morphism of A-modules.
For any submodule M′ of M, f(M′) is a submodule of N. For any submodule N′ of N, f^{−1}(N′) is a submodule of M.
In particular, the kernel Ker(f) and the image Im(f) = f(M) of f are submodules (respectively of M and N).
Proof. — Let us show that f(M′) is a submodule of N. Since f(0_M) = 0_N and 0_M ∈ M′, we have 0_N ∈ f(M′). On the other hand, for n, n′ ∈ f(M′), there exist m, m′ ∈ M′ such that n = f(m) and n′ = f(m′). Then

n + n′ = f(m) + f(m′) = f(m + m′) ∈ f(M′).

Finally, if n = f(m) belongs to f(M′) and if a ∈ A, then na = f(m)a = f(ma) belongs to f(M′) since ma ∈ M′.
Let us show that f^{−1}(N′) is a submodule of M. Since f(0_M) = 0_N ∈ N′, we have 0_M ∈ f^{−1}(N′). Moreover, for m, m′ ∈ f^{−1}(N′) and for a, b ∈ A, we have

f(ma + m′b) = f(m)a + f(m′)b ∈ N′,

since f(m) and f(m′) both belong to N′ and N′ is a submodule of N. Hence ma + m′b belongs to f^{−1}(N′). □

Example (3.2.5). — Let A be a ring and let M, N be two A-modules. The set Hom_Ab(M, N) of all morphisms of abelian groups from M to N is again an abelian group, the sum of two morphisms f and g being the


morphism 5 + 6 given by < ↦→ 5 (<) + 6(<). If 5 and 6 are A-linear,

then 5 + 6 is A-linear too, because

( 5 + 6)(<0) = 5 (<0) + 6(<0) = 5 (<)0 + 6(<)0= ( 5 (<)0 + 6(<)0) = ( 5 + 6)(<)0,

for 0 ∈ A and <, = ∈ M. Similarly, the zero map is a morphism, as

well as the map − 5 : < ↦→ − 5 (<) for any morphism 5 ∈ HomA(M,N).Consequently, HomA(M,N) is a subgroup of Hom(M,N).When M = N, EndA(M) is moreover a ring, whose multiplication is

given by composition of morphisms; it is a subring of End(M).Assume that A is a commutative ring. Let 5 ∈ HomA(M,N) and let

0 ∈ A. Then, the map 5 0 defined by < ↦→ 5 (<)0 is a morphism of

abelian groups, but is also A-linear, since

( 5 0)(<1) = 5 (1<)0 = 5 (<)10 = 5 (<)01 = ( 5 0)(<)1,

for any < ∈ M and any 1 ∈ A. This endowes the abelian

group HomA(M,N)with the structure of an A-module.

However, when the ring A is not commutative, the map < ↦→ 5 (<)0 isnot necessarily A-linear, and the abelian group HomA(M,N)may not

have any natural structure of an A-module.

Assume however that N be a (B,A)-bimodule. One can then endow

HomA(M,N)with the structure of a left B-module by defining a linear

map 1 5 , for 5 ∈ HomA(M,N) and 1 ∈ B, by the formula (1 5 )(<) = 1 5 (<).(It is clearly a morphism of abelian groups; the equalities 1 5 (<0) =1( 5 (<0)) = 1 · 5 (<) · 0 = (1 5 (<))0 prove that it is A-linear.) Moreover,

one has (11′) 5 = 1(1′ 5 ) for any 1, 1′ ∈ B and any 5 ∈ HomA(M,N).Similarly, if M is a (B,A)-bimodule, one can endow HomA(M,N)

with a structure of right B-module by defining, for 5 ∈ HomA(M,N)and 1 ∈ B, a linear map 5 1 by the formula ( 5 1)(<) = 5 (1<). One has

5 (11′) = ( 5 1)1′ for 1, 1′ ∈ B and 5 ∈ HomA(M,N). Indeed, for any

< ∈ M, one has ( 5 11′)(<) = 5 (11′<); on the other hand, ( 5 1)1′maps <

to ( 5 1)(1′<) = 5 (11′<), hence the relation 5 11′ = ( 5 1)1′.

Remark (3.2.6). — We have explained how an abelian group M has a natural structure of an End(M)-module. Similarly, if A is a ring, then


any right A-module M is endowed with a canonical structure of a left End_A(M)-module.
Assume that A is commutative, so that End_A(M) is now naturally an A-module. Let, moreover, u ∈ End_A(M). By the universal property of polynomial rings (proposition 1.3.16), there is a unique ring morphism μ_u : A[X] → End_A(M) that coincides with μ on A and maps X to u. It maps a polynomial P = ∑ p_nX^n to the endomorphism P(u) = ∑ p_nu^n. This endows the A-module M with a structure of an A[X]-module.
This construction will be of great use in the case where A is a field, thanks to the fact that A[X] is then a principal ideal domain and to the classification of modules over such rings.
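For instance, if A = K is a field, V is a K-vector space and u ∈ End_K(V), then V becomes a K[X]-module with P · v = P(u)(v), and its K[X]-submodules are exactly the vector subspaces of V that are stable under u.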

Definition (3.2.7). — Let A be a ring, let M be a right A-module. The dual of M, written M^∨, is the abelian group Hom_A(M, A_d) endowed with its natural structure of a left A-module (for which (aφ)(m) = aφ(m) for any a ∈ A, m ∈ M and φ ∈ M^∨). Its elements are called linear forms on M.
Similarly, the dual of a left A-module is the abelian group Hom_A(M, A_s) endowed with its natural structure of a right A-module, for which (φa)(m) = φ(m)a for any a ∈ A, m ∈ M and φ ∈ M^∨.

3.3. Operations on modules

Proposition (3.3.1). — Let A be a ring, let M be a right A-module and let (N_s)_{s∈S} be a family of submodules of M. Then its intersection N = ⋂_s N_s is a submodule of M.
Proof. — Since 0 ∈ N_s for every s, one has 0 ∈ N. Let m and n be two elements of N. For any s, m and n belong to N_s, hence so does m + n, so that m + n belongs to N. Finally, let m ∈ N and a ∈ A. For every s, m ∈ N_s, hence ma ∈ N_s, and finally ma ∈ N. Therefore, N is a submodule of M. □

Proposition (3.3.2). — Let A be a ring, let M be a right A-module and let X be

a subset of M. There exists a smallest submodule 〈X〉 of M that contains X: it

is the intersection of the family of all submodules of M which contain X. It is


also the set of sums

∑G∈X G0G, where (0G)G∈X runs among the set of all almost

null families of elements of A.

One says that 〈X〉 is the submodule of M generated by X.

Proof. — The intersection 〈X〉 of all of the submodules ofM that containX

is a submodule of M; it contains X. By construction, 〈X〉 is contained in

every submodule of M which contains X; it is therefore the smallest of

them all.

Let (0G)G be an almost-null family of elements of A; then,

∑G∈X G0G is a

linear combination of elements in 〈X〉, hence belongs to 〈X〉. This shows

that the set 〈X〉′ of all such linear combinations is contained in 〈X〉.To obtain the other inclusion, let us first show that 〈X〉′ is a submodule

of M. First of all, 0 =∑G G0 belongs to 〈X〉′. On the other hand, let< and

= be two elements of 〈X〉′, let (0G)G and (1G)G be two almost-null families

such that < =∑G G0G and = =

∑G G1G. Then, the family (0G + 1G)G is

almost-null and one has

< + = =(∑

G

G0G)+

(∑G

G1G)=

∑G

G(0G + 1G)

so that < + = belongs to 〈X〉′. Finally, let < ∈ 〈X〉′ and 0 ∈ A, let (0G)G bean almost-null family such that < =

∑G G0G. Then, <0 =

∑G G(0G0), so

that <0 ∈ 〈X〉′. This concludes the proof that 〈X〉′ is a submodule of M.

Since it contains X, one has 〈X〉 ⊂ 〈X〉′, the other desired inclusion. �

Definition (3.3.3). — Let A be a ring, let M be a right A-module and let (M_s)_{s∈S} be a family of submodules of M. Its sum, written ∑_{s∈S} M_s, is the submodule of M generated by the union ⋃_s M_s of the submodules M_s.
It is also the set of all linear combinations ∑_s m_s, where (m_s)_s is an almost-null family of elements of M such that m_s ∈ M_s for every s ∈ S. Indeed, this set of linear combinations is a submodule of M, it contains ⋃_s M_s, and it is contained in every submodule of M which contains all of the M_s.

Definition (3.3.4). — Let A be a ring, let (M_s)_{s∈S} be a family of right A-modules. Its direct product is the set ∏_{s∈S} M_s endowed with the laws

(m_s)_s + (n_s)_s = (m_s + n_s)_s ,  (m_s)_s a = (m_sa)_s ,


which endow it with the structure of a right A-module.
The direct sum of the family (M_s) is the submodule ⊕_{s∈S} M_s of ∏_{s∈S} M_s consisting of the elements (m_s)_s such that m_s = 0 for all but finitely many s ∈ S.

Remark (3.3.5). — If all of the M_s are equal to a given module M, one writes ∏_{s∈S} M_s = M^S and ⊕_{s∈S} M_s = M^{(S)}.

Lemma (3.3.6). — Let (M_s)_{s∈S} be a family of right A-modules. For every t ∈ S, define maps

i_t : M_t → ⊕_s M_s ,  p_t : ∏_s M_s → M_t

by i_t(m) = (m_s)_{s∈S}, where m_t = m and m_s = 0 for s ≠ t, and p_t((m_s)_{s∈S}) = m_t. These are morphisms of right A-modules.
Proof. — Let t ∈ S, let m, n ∈ M_t, let a, b ∈ A. Then

i_t(ma + nb) = (0, . . . , 0, ma + nb, 0, . . . )

(in the right hand side, the term ma + nb has index t)

= (0, . . . , 0, m, 0, . . . )a + (0, . . . , 0, n, 0, . . . , 0)b = i_t(m)a + i_t(n)b.

Consequently, i_t is a morphism of A-modules. We leave as an exercise the proof that p_t is a morphism. □

The morphism i_t is injective; it is called the canonical injection of index t. The morphism p_t is surjective; it is called the canonical surjection of index t.

Direct products and direct sums of modules satisfy a universal property

which we now state.

Theorem (3.3.7). — Let A be a ring and let (M_s)_{s∈S} be a family of right A-modules.
a) For every right A-module N and any family (f_s)_{s∈S}, where f_s : N → M_s is a morphism, there exists a unique morphism f : N → ∏_s M_s such that p_s ◦ f = f_s for every s ∈ S.


b) For every right A-module N and any family (f_s)_{s∈S}, where f_s : M_s → N is a morphism, there exists a unique morphism f : ⊕_s M_s → N such that f ◦ i_s = f_s for every s ∈ S.

Proof. — a) Assume that f : N → ∏_s M_s satisfies p_s ∘ f = f_s for every s ∈ S. Then, if n ∈ N and f(n) = (m_s)_s, we have

m_s = p_s((m_s)_s) = p_s(f(n)) = (p_s ∘ f)(n) = f_s(n),

so that there is at most one morphism f satisfying p_s ∘ f = f_s for all s.

Conversely, let us define a map f : N → ∏_s M_s by f(n) = (f_s(n))_{s∈S} for n ∈ N and let us show that it is a morphism. Indeed, for any a, b ∈ A and any m, n ∈ N,

f(ma + nb) = (f_s(ma + nb))_s = (f_s(m)a + f_s(n)b)_s = (f_s(m))_s a + (f_s(n))_s b = f(m)a + f(n)b,

so that f is linear.

b) Assume that f : ⊕_s M_s → N satisfies f ∘ i_s = f_s for every s. Then the image by f of any element (0, …, 0, m, 0, …) = i_s(m) (where m ∈ M_s has index s) is necessarily equal to f_s(m). Any element m of ⊕ M_s is a family (m_s)_{s∈S}, where m_s ∈ M_s for every s, all but finitely many of them being nonzero. It follows that m = ∑_s i_s(m_s) (the sum is finite), hence

f(m) = f(∑_s i_s(m_s)) = ∑_s (f ∘ i_s)(m_s) = ∑_s f_s(m_s),

whence the uniqueness of such a map f. Conversely, the map f : ⊕ M_s → N defined for any (m_s)_s ∈ ⊕ M_s by

f((m_s)_s) = ∑_s f_s(m_s)   (finite sum)

is a morphism of A-modules and satisfies f ∘ i_s = f_s for every s ∈ S. Indeed, let a, b ∈ A and let (m_s)_{s∈S}, (n_s)_{s∈S} be two elements of ⊕_s M_s; then one has

f((m_s)_s a + (n_s)_s b) = f((m_s a + n_s b)_s) = ∑_s f_s(m_s a + n_s b) = ∑_s (f_s(m_s)a + f_s(n_s)b)
= (∑_s f_s(m_s))a + (∑_s f_s(n_s))b = f((m_s)_s)a + f((n_s)_s)b.  □

Remark (3.3.8). — One can reformulate this theorem as follows: for every right A-module N, the maps

Hom_A(⊕_s M_s, N) → ∏_s Hom_A(M_s, N),   f ↦ (f ∘ i_s)_s

and

Hom_A(N, ∏_s M_s) → ∏_s Hom_A(N, M_s),   f ↦ (p_s ∘ f)_s

are bijections. In fact, they are isomorphisms of abelian groups (and of A-modules if A is commutative).

In the language of category theory, this says that ∏_s M_s and ⊕_s M_s (endowed with the morphisms i_s and p_s) are respectively a product and a coproduct of the family (M_s).

3.3.9. Internal direct sums. — Let M be a right A-module and let (M_s)_{s∈S} be a family of submodules of M. Then there is a morphism of A-modules, ⊕_s M_s → M, defined by (m_s) ↦ ∑ m_s for every almost-null family (m_s)_{s∈S}, where m_s ∈ M_s for every s. The image of this morphism is ∑_s M_s. One says that the submodules M_s are in direct sum if this morphism is an isomorphism; this means that any element of M can be written in a unique way as a sum ∑_{s∈S} m_s, with m_s ∈ M_s for every s, all but finitely many of them being null. In that case, one writes M = ⊕_{s∈S} M_s ("internal" direct sum).

When the set S has two elements, say S = {1, 2}, the kernel of this morphism is the set of all pairs (m, −m), where m ∈ M_1 ∩ M_2, and its image is the submodule M_1 + M_2. Consequently, M_1 and M_2 are in direct sum if and only if M_1 ∩ M_2 = 0 and M_1 + M_2 = M. The picture is slightly more complicated for families indexed by a set with 3 or more elements; this is already the case for vector spaces, see exercise 3/9.

Let M be a right A-module, let N be a submodule of M; a direct summand of N is a submodule P of M such that M = N ⊕ P (N and P are in direct sum). Contrary to the case of vector spaces, not every submodule has a direct summand. If a submodule N of M has a direct summand, one also says that N is a direct summand.
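A simple example may help fix ideas here: in the Z-module Z, the submodule 2Z is not a direct summand. Indeed, a submodule P of Z with 2Z ∩ P = 0 and 2Z + P = Z would have to be zero, since any nonzero p ∈ P gives 2p ∈ 2Z ∩ P; but then 2Z + P = 2Z ≠ Z.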

Definition (3.3.10). — Let A be a ring, let M be a right A-module and let I be a right ideal of A. One defines the submodule MI of M as the submodule generated by all products ma, for m ∈ M and a ∈ I.

It is the set of all finite linear combinations ∑ m_i a_i, where a_i ∈ I and m_i ∈ M for every i.
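For instance, if A = Z, M = Z ⊕ Z and I = nZ for some integer n, then MI = nZ ⊕ nZ; if M = Z/6Z and I = 2Z, then MI = 2M, the submodule of Z/6Z consisting of the classes of 0, 2 and 4.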

3.4. Quotients of modules

Let A be a ring and let M be a right A-module. We are interested in the equivalence relations ∼ on M which are compatible with its module structure, namely such that for any m, m′, n, n′ ∈ M and any a, b ∈ A, if m ∼ m′ and n ∼ n′, then ma + nb ∼ m′a + n′b.

Let N be the set of all m ∈ M such that m ∼ 0. Since an equivalence relation is reflexive, one has 0 ∈ N. If m, n ∈ N, then m ∼ 0 and n ∼ 0, hence ma + nb ∼ 0a + 0b = 0, hence ma + nb ∈ N. This shows that N is a submodule of M. Moreover, if m and n are elements of M which are equivalent for ∼, then the relations m ∼ n and (−n) ∼ (−n) imply that m − n ∼ 0, that is, m − n ∈ N. Conversely, if m, n ∈ M are such that m − n ∈ N, then m − n ∼ 0, and using that n ∼ n, we obtain m ∼ n.

Conversely, let N be a submodule of M and let ∼ be the relation on M defined by "m ∼ n if and only if m − n ∈ N". This is an equivalence relation on M. Indeed, since 0 ∈ N, m ∼ m for any m ∈ M; if m ∼ n, then m − n ∈ N, hence n − m = −(m − n) ∈ N and n ∼ m; finally, if m ∼ n and n ∼ p, then m − n and n − p belong to N, so that m − p = (m − n) + (n − p) ∈ N and m ∼ p. Moreover, this equivalence relation is compatible with the module structure: if m ∼ m′, n ∼ n′ and a, b ∈ A, then

(m′a + n′b) − (ma + nb) = (m′ − m)a + (n′ − n)b ∈ N,

since m′ − m ∈ N and n′ − n ∈ N, hence m′a + n′b ∼ ma + nb.

Let M/N be the set of all equivalence classes on M for this relation, and let cl_N : M → M/N be the canonical projection. (If no confusion can arise, this map shall simply be written cl.) From the above calculation, we get the following theorem:

Theorem (3.4.1). — Let A be a ring, let M be an A-module and let N be a submodule of M. The relation ∼ on M given by m ∼ n if and only if m − n ∈ N is an equivalence relation on M which is compatible with its structure of module. The quotient set M/N has a unique structure of an A-module for which the map cl_N : M → M/N is a morphism of A-modules.

The map cl_N is surjective; its kernel is N.

We now prove a factorization theorem, the universal property of quotient modules.

Theorem (3.4.2). — Let A be a ring, let M be an A-module and let N be a submodule of M. For any A-module P and any morphism f : M → P such that N ⊂ Ker(f), there exists a unique morphism φ : M/N → P such that f = φ ∘ cl_N.

Moreover, Im(φ) = Im(f) and Ker(φ) = cl_N(Ker(f)). In particular, φ is injective if and only if Ker(f) = N, and φ is surjective if and only if f is surjective.

One can represent graphically the equality f = φ ∘ cl_N of the theorem by saying that the diagram

    M ———— f ————→ P
      cl_N ↘      ↗ φ
             M/N

is commutative. This theorem allows one to factor any morphism f : M → N of A-modules as a composition

    M —cl→ M/Ker(f) —φ→ Im(f) ↪ N

of a surjective morphism, an isomorphism, and an injective morphism.

Proof. — Necessarily, φ(cl(m)) = f(m) for any m ∈ M. Since every element of M/N is of the form cl(m) for a certain m ∈ M, this shows that there is at most one morphism φ : M/N → P such that φ ∘ cl = f.

Let us show its existence. Let x ∈ M/N and let m, m′ ∈ M be two elements such that x = cl(m) = cl(m′). Then m − m′ ∈ N, hence f(m − m′) = 0 since N ⊂ Ker(f). Consequently, f(m) = f(m′) and one may define φ by setting φ(x) = f(m), where m is any element of M such that cl(m) = x; the result does not depend on the chosen element m.

It remains to show that φ is a morphism. So let x, y ∈ M/N and let a, b ∈ A. Let us choose m, n ∈ M such that x = cl(m) and y = cl(n). Then xa + yb = cl(m)a + cl(n)b = cl(ma + nb), hence

φ(xa + yb) = φ(cl(ma + nb)) = f(ma + nb) = f(m)a + f(n)b = φ(x)a + φ(y)b.

Hence φ is linear.

It is obvious that Im(f) ⊂ Im(φ). On the other hand, if p ∈ Im(φ), let us choose x ∈ M/N such that p = φ(x), and m ∈ M such that x = cl(m). Then p = φ(cl(m)) = f(m) ∈ Im(f), so that Im(φ) ⊂ Im(f), hence the equality.

Finally, let x ∈ M/N be such that φ(x) = 0; write x = cl(m) for some m ∈ M. Since f(m) = φ ∘ cl(m) = φ(x) = 0, we have m ∈ Ker(f). Conversely, if x = cl(m) for some m ∈ Ker(f), then φ(x) = φ(cl(m)) = f(m) = 0, hence x ∈ Ker(φ). This shows that Ker(φ) = cl(Ker(f)). □

Example (3.4.3). — Let A be a ring and let f : M → N be a morphism of A-modules. The A-module N/f(M) is called the cokernel of f and is denoted by Coker(f).

One has Coker(f) = 0 if and only if f is surjective. Consequently, a morphism of A-modules f : M → N is an isomorphism if and only if Ker(f) = 0 and Coker(f) = 0.
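For instance, let n be an integer with n ⩾ 2 and let f : Z → Z be the morphism of Z-modules given by m ↦ nm. Then Ker(f) = 0 while Coker(f) = Z/nZ ≠ 0, so that f is injective without being an isomorphism.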

The next proposition describes the submodules of a quotient module M/N.

Proposition (3.4.4). — Let A be a ring, let M be an A-module, let N be a submodule of M. The map

cl_N^{-1} : {submodules of M/N} → {submodules of M containing N},   𝒫 ↦ cl_N^{-1}(𝒫)

is a bijection.

So, for any submodule P of M such that N ⊂ P, there is exactly one submodule 𝒫 of M/N such that P = cl_N^{-1}(𝒫). Moreover, one has 𝒫 = cl_N(P). The submodule cl_N(P) of M/N will be written P/N. This is a coherent notation. Indeed, the restriction of cl_N to the submodule P is a morphism cl_N|_P : P → M/N with kernel P ∩ N = N and image cl_N(P). By the factorization theorem, cl_N|_P induces an isomorphism P/N → cl_N(P).

Proof. — The proof is an immediate consequence of the two following formulae: if P is a submodule of M, then

cl_N^{-1}(cl_N(P)) = P + N,

while if 𝒫 is a submodule of M/N,

cl_N(cl_N^{-1}(𝒫)) = 𝒫.

Indeed, if N ⊂ P, then P + N = P, and these formulae precisely show that the map cl_N^{-1} in the statement of the proposition is bijective, with reciprocal map cl_N.

Let us show the first formula. Let m ∈ cl_N^{-1}(cl_N(P)); we have cl_N(m) ∈ cl_N(P), hence there is p ∈ P such that cl_N(m) = cl_N(p), and it follows that cl_N(m − p) = 0, hence n = m − p ∈ N; then m = p + n belongs to P + N. Conversely, if m ∈ P + N, write m = p + n for some p ∈ P and some n ∈ N; then cl_N(m) = cl_N(p + n) = cl_N(p) belongs to cl_N(P), hence m ∈ cl_N^{-1}(cl_N(P)).

Let us now show the second formula. By definition, one has cl_N(cl_N^{-1}(𝒫)) ⊂ 𝒫. Conversely, if x ∈ 𝒫, let m ∈ M be such that x = cl_N(m). Then cl_N(m) ∈ 𝒫, so that m ∈ cl_N^{-1}(𝒫), hence x = cl_N(m) ∈ cl_N(cl_N^{-1}(𝒫)). □

We now compute quotients of quotients.

Proposition (3.4.5). — Let A be a ring, let M be an A-module and let N, P be two submodules of M such that N ⊂ P. Then there is a unique map

(M/P) → (M/N)/(P/N)

such that cl_P(m) ↦ cl_{P/N}(cl_N(m)) for every m ∈ M, and this map is an isomorphism.

Proof. — The uniqueness of such a map follows from the fact that every element of M/P is of the form cl_P(m), for some m ∈ M. Let f be the morphism given by

f : M → (M/N) → (M/N)/(P/N),   m ↦ cl_{P/N}(cl_N(m)).

It is surjective, as the composition of two surjective morphisms. An element m ∈ M belongs to Ker(f) if and only if cl_N(m) ∈ Ker(cl_{P/N}) = P/N = cl_N(P); since P contains N, this is equivalent to m ∈ P. Consequently, Ker(f) = P and the factorization theorem (theorem 3.4.2) asserts that there is a unique morphism φ : M/P → (M/N)/(P/N) such that φ(cl_P(m)) = cl_{P/N}(cl_N(m)) for every m ∈ M. Since Ker(f) = P, the morphism φ is injective; since f is surjective, the morphism φ is surjective. Consequently, φ is an isomorphism. □
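For instance, take M = Z, N = 12Z and P = 4Z, so that N ⊂ P. The proposition furnishes an isomorphism Z/4Z ≅ (Z/12Z)/(4Z/12Z), which maps the class of an integer m modulo 4Z to the class of cl_{12Z}(m) modulo 4Z/12Z.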

Remark (3.4.6). — Let M be an A-module, let N be a submodule of M. Assume that N has a direct summand P in M. The morphism f from P to M/N given by the composition of the injection of P into M and of the surjection of M onto M/N is an isomorphism.

Let indeed m ∈ P be such that f(m) = 0. Then m ∈ N, hence m ∈ N ∩ P = 0; this shows that Ker(f) = 0, so that f is injective. Observe now that f is surjective: for m ∈ M, let n ∈ N and p ∈ P be such that m = n + p; then cl_N(m) = cl_N(n) + cl_N(p) = f(p), hence Im(f) = cl_N(M) = M/N.

3.5. Generating sets, free sets; bases

Definition (3.5.1). — Let A be a ring and let M be a right A-module. One says that a family (m_i)_{i∈I} of elements of M is:

– generating if the submodule of M generated by the m_i is equal to M;
– free if for every almost-null family (a_i)_{i∈I} of elements of A, the relation ∑_{i∈I} m_i a_i = 0 implies a_i = 0 for every i ∈ I;
– bonded if it is not free;
– a basis of M if it is free and generating.

One defines similar notions of generating subset, free subset, bonded subset, and basis, as subsets S of M for which the family (s)_{s∈S} is respectively generating, free, bonded, or a basis.

When the ring is nonnull, the members of a free family are pairwise distinct (otherwise, the linear combination x − y = 0, for x = y, contradicts the definition of a free family, since 1 ≠ 0). Consequently, with regard to these properties, the only interesting families (m_i)_{i∈I} are those for which the map i ↦ m_i is injective, and the notions for families and for subsets correspond quite faithfully to one another.

A subfamily (subset) of a free family (subset) is again free; a family (set) possessing a generating subfamily (subset) is generating.

Proposition (3.5.2). — Let A be a ring, let M be a right A-module. Let (m_i)_{i∈I} be a family of elements of M and let φ be the morphism given by

φ : A_d^(I) → M,   (a_i)_{i∈I} ↦ ∑_{i∈I} m_i a_i.

Then:

– the morphism φ is injective if and only if the family (m_i)_{i∈I} is free;
– the morphism φ is surjective if and only if the family (m_i)_{i∈I} is generating;
– the morphism φ is an isomorphism if and only if the family (m_i)_{i∈I} is a basis.

Proof. — The kernel of φ is the set of all almost-null families (a_i) such that ∑_{i∈I} m_i a_i = 0. So (m_i)_{i∈I} is free if and only if Ker(φ) = 0, that is, if and only if φ is injective.

The image of φ is the set of all linear combinations of terms of (m_i). It follows that Im(φ) = 〈{m_i}〉, so that φ is surjective if and only if (m_i)_{i∈I} is generating.

By what precedes, (m_i) is a basis if and only if φ is bijective, that is, an isomorphism. □

Example (3.5.3). — Let I be a set and let M = A_d^(I). For every i ∈ I, let e_i ∈ M be the family (δ_{i,j})_{j∈I}, whose only nonzero term is that of index i, equal to 1. For every almost-null family (a_i)_{i∈I}, one has (a_i)_{i∈I} = ∑_{i∈I} e_i a_i. Consequently, the family (e_i)_{i∈I} is a basis of A_d^(I), called its canonical basis. The module A_d^(I) is often called the free right A-module on I.
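A small example in the Z-module Z may be useful here: the family (2, 3) is generating, since 1 = 3 − 2, but it is bonded, because 2·3 + 3·(−2) = 0; the family (2) is free but not generating; the families (1) and (−1) are bases. In the Z-module Z/2Z, on the other hand, no nonempty family is free, since every element m satisfies m·2 = 0; in particular this module has no basis.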

Definition (3.5.4). — One says that a module is free if it has a basis. If a module has a finite generating subset, it is said to be finitely generated.

Proposition (3.5.5). — Let M be an A-module, let N be a submodule of M.

a) If M is finitely generated, then M/N is finitely generated;
b) If N and M/N are finitely generated, then M is finitely generated;
c) If N and M/N are free A-modules, then M is a free A-module.

However, with this notation, it may happen that M is finitely generated but that N is not, and it may happen that M is free, but not N or M/N. More precisely, we shall prove the following statements:

a′) If M has a generating subset of cardinality r, then so does M/N;
b′) If N and M/N have generating subsets of cardinalities respectively r and s, then M has a generating subset of cardinality r + s;
c′) If N and M/N have bases of cardinalities respectively r and s, then M has a basis of cardinality r + s.

Proof. — a′) The images in M/N of a generating subset of M generate M/N, since the canonical morphism from M to M/N is surjective. In particular, M/N is finitely generated if M is.

b′) Let (n_1, …, n_r) be a generating family of N, and let (m_1, …, m_s) be a family of elements of M such that (cl_N(m_1), …, cl_N(m_s)) generate M/N. Let us show that the family (m_1, …, m_s, n_1, …, n_r) generates M.

Let m ∈ M. By hypothesis, cl_N(m) is a linear combination of cl_N(m_1), …, cl_N(m_s). There thus exist elements a_i ∈ A such that cl_N(m) = ∑_{i=1}^s cl_N(m_i) a_i. Consequently, n = m − ∑_{i=1}^s m_i a_i belongs to N and there exist elements b_j ∈ A such that n = ∑_{j=1}^r n_j b_j. Then m = ∑_{i=1}^s m_i a_i + ∑_{j=1}^r n_j b_j is a linear combination of the m_i and of the n_j.

c′) Let us moreover assume that (n_1, …, n_r) is a basis of N and that (cl_N(m_1), …, cl_N(m_s)) is a basis of M/N; let us show that (m_1, …, m_s, n_1, …, n_r) is a basis of M. Since we already proved that this family generates M, it remains to show that it is free. So let 0 = ∑_{i=1}^s m_i a_i + ∑_{j=1}^r n_j b_j be a linear dependence relation between these elements. Applying cl_N, we get a linear dependence relation 0 = ∑_{i=1}^s cl_N(m_i) a_i for the family (cl_N(m_i)). Since this family is free, one has a_i = 0 for every i. It follows that 0 = ∑_{j=1}^r n_j b_j; since the family (n_1, …, n_r) is free, b_j = 0 for every j. The considered linear dependence relation is thus trivial, as was to be shown. □

Corollary (3.5.6). — Let M be an A-module, let M_1, …, M_n be submodules of M which are finitely generated. Their sum ∑_{i=1}^n M_i is finitely generated. In particular, the direct sum of a finite family of finitely generated modules is finitely generated.

Proof. — By induction, it suffices to treat the case of two modules, that is, n = 2. The second projection pr_2 : M_1 × M_2 → M_2 is a surjective linear map, and its kernel, equal to M_1 × {0}, is isomorphic to M_1, hence is finitely generated. Since M_1 and M_2 are finitely generated, their direct product M_1 × M_2 is finitely generated. The morphism (m_1, m_2) ↦ m_1 + m_2 from M_1 × M_2 to M has image M_1 + M_2; consequently, M_1 + M_2 is finitely generated. □

Remark (3.5.7). — When the ring A is commutative (and nonzero), or when A is a division ring, we shall prove in the next sections that all bases of a finitely generated free module have the same cardinality. This is not true in the general case. However, one can prove that if a module M over a nonzero ring is finitely generated and free, then all bases of M are finite (see exercise 3/12).

The case of a zero ring A = 0 is pathological: there is no other A-module than the zero module M = 0, and every family (m_i)_{i∈I} is a basis...

Proposition (3.5.8). — Let A be a ring, let M be a free A-module and let (e_i)_{i∈I} be a basis of M. For any A-module N and any family (n_i)_{i∈I} of elements of N, there is a unique morphism φ : M → N such that φ(e_i) = n_i for every i ∈ I.

Proof. — The map A_d^(I) → M given by (a_i) ↦ ∑ e_i a_i is an isomorphism of modules, so that M inherits the universal property of direct sums of modules: to give oneself a linear map from M to N is equivalent to giving oneself the images of the elements of a basis. The morphism φ such that φ(e_i) = n_i for every i is given by φ(∑ e_i a_i) = ∑ n_i a_i. □

Corollary (3.5.9). — Let A be a ring, let M be a free A-module and let (e_i)_{i∈I} be a basis of M. There exists, for every i ∈ I, a unique linear form φ_i on M such that φ_i(e_i) = 1 and φ_i(e_j) = 0 for j ≠ i. For any element m = ∑_{i∈I} e_i a_i of M, one has φ_i(m) = a_i.

The family (φ_i)_{i∈I} is free. If I is finite, then it is a basis of the dual module M^∨.

Proof. — Existence and uniqueness of the linear forms φ_i follow from the proposition.

Let (a_i)_{i∈I} be an almost-null family such that ∑ a_i φ_i = 0. Apply this relation to e_i; one gets a_i = 0. This proves that the family (φ_i) is free.

Assume that I is finite and let us show that (φ_i) is a generating family of M^∨. Let ψ be a linear form on M and let m = ∑_i e_i a_i be an element of M. One has ψ(m) = ∑_i ψ(e_i) a_i, hence ψ(m) = ∑_i ψ(e_i) φ_i(m). This proves that ψ = ∑_{i∈I} ψ(e_i) φ_i. □

When I is finite, the family (φ_i)_{i∈I} defined above is called the dual basis of the basis (e_i)_{i∈I}.

3.5.10. Matrices. — One of the interests of free modules is that they allow matrix computations. Let Φ = (a_{i,j}) ∈ M_{m,n}(A) be a matrix with m rows and n columns and entries in A. One associates to Φ a map φ : A^n → A^m given by the formula

φ(x_1, …, x_n) = (∑_{j=1}^n a_{i,j} x_j)_{1≤i≤m}.

This is a morphism of right A-modules from (A_d)^n to (A_d)^m. Indeed, for every x = (x_1, …, x_n) ∈ A^n, every y = (y_1, …, y_n) ∈ A^n, and any a, b ∈ A, one has

φ(xa + yb) = φ((x_1 a + y_1 b, x_2 a + y_2 b, …, x_n a + y_n b))
= (∑_{j=1}^n a_{i,j}(x_j a + y_j b))_i
= (∑_{j=1}^n a_{i,j} x_j)_i a + (∑_{j=1}^n a_{i,j} y_j)_i b
= φ(x)a + φ(y)b.

Conversely, any morphism φ : A_d^n → A_d^m of right A-modules has this form.

Let indeed (e_1, …, e_n) be the canonical basis of (A_d)^n (recall that e_j ∈ A^n has all of its coordinates equal to 0, but for the jth coordinate, which is equal to 1). We represent any element of (A_d)^n as a column matrix, i.e., with one column and n rows. Let Φ = (a_{i,j}) be the matrix with m rows and n columns whose jth column is equal to φ(e_j). In other words, we have φ(e_j) = (a_{1,j}, …, a_{m,j}) for 1 ≤ j ≤ n. For any x = (x_1, …, x_n) ∈ A^n, one has x = ∑_{j=1}^n e_j x_j, hence

φ(x) = ∑_{j=1}^n φ(e_j) x_j = (∑_{j=1}^n a_{i,j} x_j)_i = ΦX,

if X is the column matrix (x_1, …, x_n). This shows that φ is given by the m × n matrix Φ = (a_{i,j}).

The identity matrix I_n is the n × n matrix representing the identity endomorphism of A^n; its diagonal entries are equal to 1, all the other entries are 0.

Let Φ′ = (b_{j,k}) ∈ M_{n,p}(A) be a matrix with n rows and p columns, let φ′ : (A_d)^p → (A_d)^n be the morphism it represents. The product matrix Φ″ = ΦΦ′ has m rows and p columns and its coefficient c_{i,k} with index (i, k) is given by the formula

c_{i,k} = ∑_{j=1}^n a_{i,j} b_{j,k}.

It represents the morphism φ″ = φ ∘ φ′ : (A_d)^p → (A_d)^m. Indeed, one has by definition

φ″(x_1, …, x_p) = (∑_k c_{i,k} x_k)_{1≤i≤m} = (∑_{j,k} a_{i,j} b_{j,k} x_k)_i = (∑_j a_{i,j} y_j)_i,

where (y_1, …, y_n) = φ′(x_1, …, x_p). Consequently,

φ″(x_1, …, x_p) = φ(y_1, …, y_n) = φ(φ′(x_1, …, x_p)),

so that φ″ = φ ∘ φ′.

This shows in particular that the map M_n(A) → End_A(A_d^n) that sends a square n × n matrix to the corresponding endomorphism of (A_d)^n is an isomorphism of rings.

Here is the very reason why we preferred right A-modules!
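To make this point explicit, one may check what goes wrong for left modules: letting the same matrix Φ act on column vectors of the left A-module A^n, the ith coordinate of Φ(ax) is ∑_j a_{i,j} a x_j, which in general differs from a ∑_j a_{i,j} x_j when A is not commutative, so that x ↦ Φx need not be left A-linear. With right modules, the scalar comes out on the right, as in the computation above, and composition of morphisms corresponds to the product of matrices taken in the same order.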

3.6. Localization of modules (commutative rings)

Let A be a commutative ring and let S be a multiplicative subset of A. Let M be an A-module. Through the calculus of fractions we constructed a localized ring S^{-1}A as well as a morphism of rings A → S^{-1}A. We are now going to construct by a similar process an S^{-1}A-module S^{-1}M together with a localization morphism of A-modules M → S^{-1}M.

Define a relation ∼ on the set M × S by the formula

(m, s) ∼ (n, t) ⇔ there exists u ∈ S such that u(tm − sn) = 0.

Let us check that it is an equivalence relation. In fact, the argument runs exactly as in the case of rings (p. 40). The equality 1·(sm − sm) = 0 shows that this relation is reflexive; if u(tm − sn) = 0, then u(sn − tm) = 0, which shows that it is symmetric. Finally, let m, n, p ∈ M and s, t, u ∈ S be such that (m, s) ∼ (n, t) and (n, t) ∼ (p, u); let v, w ∈ S be such that v(tm − sn) = 0 and w(tp − un) = 0; then

tvw(um − sp) = uw·v(tm − sn) + sv·w(un − tp) = 0,

so that (m, s) ∼ (p, u), since tvw ∈ S: the relation is transitive.

Let S^{-1}M be the set of equivalence classes; for m ∈ M and s ∈ S, write m/s for the class in S^{-1}M of the pair (m, s) ∈ M × S. We then define two laws on S^{-1}M: first, an addition, given by

(m/s) + (n/t) = (tm + sn)/(st)

for m, n ∈ M and s, t ∈ S, and then an external multiplication, defined by

(a/t)(m/s) = (am)/(ts)

for m ∈ M, a ∈ A, s and t ∈ S.

Theorem (3.6.1). — Endowed with these two laws, S^{-1}M is an S^{-1}A-module. If one views S^{-1}M as an A-module through the canonical morphism of rings A → S^{-1}A, then the map i : M → S^{-1}M given by i(m) = m/1 is a morphism of A-modules.

The proof is left as an exercise; anyway, the computations are totally similar to those done for the localization of rings.

Proposition (3.6.2). — Let A be a commutative ring, let S be a multiplicative subset of A and let M be an A-module. Let i : M → S^{-1}M be the canonical morphism; for s ∈ S, let μ_s : M → M be the morphism given by m ↦ sm.

a) An element m ∈ M belongs to Ker(i) if and only if there exists s ∈ S such that sm = 0;
b) In particular, i is injective if and only if μ_s is injective, for every s ∈ S;
c) The morphism i is an isomorphism if and only if μ_s is bijective, for every s ∈ S.

Proof. — Let m ∈ M. If i(m) = 0, then m/1 = 0/1, hence there exists s ∈ S such that sm = 0, by the definition of S^{-1}M. Conversely, if sm = 0, then m/1 = 0/1, hence i(m) = 0. This proves a), and assertion b) is a direct consequence.

Let us prove c). Let us assume that i is an isomorphism. All morphisms μ_s are then injective, by b). Moreover, in the S^{-1}A-module S^{-1}M, multiplication by an element s ∈ S is surjective, since s·(m/(st)) = m/t for every m ∈ M and t ∈ S. Therefore, μ_s is surjective as well, so that it is an isomorphism. Conversely, let us assume that all morphisms μ_s : M → M are isomorphisms. Then i is injective, by b). Let moreover m ∈ M and s ∈ S; let m′ ∈ M be such that sm′ = m; then m/s = m′/1 = i(m′); this proves that i is surjective. □

Remark (3.6.3). — Let us recall a few examples of multiplicative subsets.

a) First of all, for any s ∈ A, the set S = {1, s, s², …} is multiplicative. In that case, the localized module S^{-1}M is denoted via an index s, so that M_s = S^{-1}M is an A_s-module.

b) If P is a prime ideal of A, then S = A∖P is also a multiplicative subset of A. Again, the localized module and ring are denoted via an index P; that is, A_P = (A∖P)^{-1}A and M_P = (A∖P)^{-1}M.
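For instance, take A = Z and S = {1, 2, 4, …}. For M = Z/3Z, multiplication by 2 is bijective (its inverse is again multiplication by 2, since 4 ≡ 1 mod 3), so that, by proposition 3.6.2, the canonical morphism Z/3Z → (Z/3Z)_2 is an isomorphism. For M = Z/2Z, on the contrary, every element is killed by 2 ∈ S, so that (Z/2Z)_2 = 0.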

Proposition (3.6.4). — Let A be a commutative ring, let S be a multiplicative subset of A. Let f : M → N be a morphism of A-modules. There exists a unique morphism of S^{-1}A-modules φ : S^{-1}M → S^{-1}N such that φ(m/1) = f(m)/1 for every m ∈ M. For every m ∈ M and every s ∈ S, one has φ(m/s) = f(m)/s.

In other words, the diagram

    M ———— f ————→ N
    i ↓              ↓ i
    S^{-1}M ——φ——→ S^{-1}N

is commutative.

Proof. — To check uniqueness, observe that m/s = (1/s)(m/1), so that if such a morphism φ exists, then φ(m/s) must be equal to (1/s)φ(m/1) = (1/s)(f(m)/1) = f(m)/s. To establish the existence of φ, we want to use the formula φ(m/s) = f(m)/s as a definition, but we must verify that it makes sense, that is, we must prove that if m/s = n/t, for some m, n ∈ M and s, t ∈ S, then f(m)/s = f(n)/t.

So assume that m/s = n/t; by the definition of the equivalence relation, there exists u ∈ S such that u(tm − sn) = 0. Then,

f(m)/s = ut f(m)/(uts) = f(utm)/(uts) = f(usn)/(uts) = f(n)/t,

which shows that φ is well defined.

Now, for m, n ∈ M, s, t ∈ S, we have

φ(m/s + n/t) = φ((tm + sn)/(st)) = f(tm + sn)/(st) = t f(m)/(st) + s f(n)/(st) = f(m)/s + f(n)/t = φ(m/s) + φ(n/t),

hence φ is additive. Finally, for m ∈ M, a ∈ A, s and t ∈ S, we have

φ((a/s)(m/t)) = φ(am/(st)) = f(am)/(st) = a f(m)/(st) = (a/s)(f(m)/t) = (a/s)φ(m/t),

which shows that φ is S^{-1}A-linear. □

Localization of modules gives rise to a universal property, analogous to the one we established for the localization of rings.

Corollary (3.6.5). — Let f : M → N be a morphism of A-modules; assume that for every s ∈ S, the morphism μ_s : N → N defined by n ↦ sn is an isomorphism. Then there exists a unique morphism of A-modules f̃ : S^{-1}M → N such that f̃(m/1) = f(m) for every m ∈ M.

Proof. — In fact, if φ : S^{-1}M → S^{-1}N, m/s ↦ f(m)/s, is the morphism constructed in the previous proposition, and i : N → S^{-1}N is the canonical morphism, then the desired property for f̃ is equivalent to the equality i ∘ f̃ = φ. By proposition 3.6.2, i is an isomorphism of A-modules; this implies the existence and uniqueness of an A-linear map f̃ : S^{-1}M → N such that f̃(m/1) = f(m) for every m ∈ M. □

As we already observed for the localization of rings and ideals, localization behaves nicely with respect to submodules; this is the second appearance of the exactness of localization.

Proposition (3.6.6). — Let A be a commutative ring, let S be a multiplicative subset of A. Let M be an A-module and let N be a submodule of M.

Then the canonical morphism j : S^{-1}N → S^{-1}M deduced from the injection i : N → M induces an isomorphism from S^{-1}N to a submodule of S^{-1}M. Moreover, the canonical morphism p̃ : S^{-1}M → S^{-1}(M/N) deduced from the surjection p : M → M/N is surjective and its kernel is j(S^{-1}N).

In practice, the morphism j is removed from the notation, and we still denote by S^{-1}N the submodule of S^{-1}M which is the image of this morphism. Passing to the quotient, we then obtain from p̃ an isomorphism

S^{-1}M / S^{-1}N  ≅  S^{-1}(M/N).

Proof. — Let n ∈ N and let s ∈ S. The image in S^{-1}M of the element n/s ∈ S^{-1}N is still n/s, but n is now seen as an element of M. It vanishes if and only if there exists t ∈ S such that tn = 0 in M. But then tn = 0 in N, so that n/s = 0 in S^{-1}N. This shows that the map S^{-1}N → S^{-1}M is injective. It is thus an isomorphism from S^{-1}N to its image in S^{-1}M.

Since p is surjective, the morphism p̃ is surjective too. Indeed, any element of S^{-1}(M/N) can be written cl_N(m)/s, for some m ∈ M and some s ∈ S, and cl_N(m)/s is the image of m/s ∈ S^{-1}M. Obviously, S^{-1}N is contained in the kernel of p̃; indeed, p̃(n/s) = p(n)/s = 0 for every n ∈ N and s ∈ S. On the other hand, let m ∈ M and let s ∈ S be such that m/s ∈ Ker(p̃); then 0 = cl_N(m)/s, so that there exists t ∈ S such that t·cl_N(m) = 0 in M/N. This implies that tm ∈ N. Then m/s = tm/(ts) = j(tm/(ts)) belongs to j(S^{-1}N). This concludes the proof that Ker(p̃) = j(S^{-1}N). □

Proposition (3.6.7). — Let A be a commutative ring, let S be a multiplicative subset of A and let M be an A-module. Let i : M → S^{-1}M be the canonical morphism.

Let 𝒩 be an S^{-1}A-submodule of S^{-1}M. Then N = i^{-1}(𝒩) is an A-submodule of M such that 𝒩 = S^{-1}N; it is the largest of all such submodules.

Proof. — First of all, S^{-1}N ⊂ 𝒩. Indeed, if m ∈ N, then m/1 ∈ 𝒩 by definition of N; it follows that m/s ∈ 𝒩 for every s ∈ S. Conversely, let x ∈ 𝒩. One may write x = m/s for some m ∈ M and s ∈ S. This implies that sx = m/1 belongs to 𝒩, hence m ∈ N and x ∈ S^{-1}N. We have thus shown that 𝒩 = S^{-1}N.

Finally, let P be a submodule of M such that S^{-1}P = 𝒩. Let m ∈ P; one has m/s ∈ 𝒩, hence s(m/s) = m/1 ∈ 𝒩. Then m ∈ N, so that P ⊂ N. This shows that N is the largest submodule of M such that S^{-1}N = 𝒩. □

Proposition (3.6.8). — Let A be a commutative ring, let S be a multiplicative subset of A. Let M be an A-module and let (N_i) be a family of submodules of M. Then one has the following equality of submodules of S^{-1}M:

∑_i S^{-1}N_i = S^{-1}(∑_i N_i).

Proof. — Let N = ∑ N_i. For every i, N_i ⊂ N, hence an inclusion S^{-1}N_i ⊂ S^{-1}N. It follows that ∑_i S^{-1}N_i ⊂ S^{-1}N. Conversely, let n/s ∈ S^{-1}N. There exists an almost-null family (n_i), where n_i ∈ N_i for every i, such that n = ∑ n_i. Then n_i/s belongs to S^{-1}N_i and n/s = ∑_i (n_i/s) belongs to ∑ S^{-1}N_i, so that S^{-1}N ⊂ ∑ S^{-1}N_i. □

Corollary (3.6.9). — If M is a finitely generated A-module, then S^{-1}M is a finitely generated S^{-1}A-module.

Proof. — Let (m_1, …, m_n) be a generating family of M. For i ∈ {1, …, n}, let N_i = A m_i. By construction, S^{-1}N_i is generated by m_i/1 as an S^{-1}A-module. By the proposition, ∑ S^{-1}N_i = S^{-1}M, hence S^{-1}M is finitely generated. □

3.7. Vector spaces

Let us recall that a vector space is a module over a division ring.

Proposition (3.7.1). — Let K be a division ring and let M be a right K-vector space. Let F, B, G be subsets of M such that F ⊂ B ⊂ G. We assume that F is free and that G is generating.

The following assertions are equivalent:

(i) B is a basis;
(ii) B is maximal among the free subsets of G containing F;
(iii) B is minimal among the generating subsets of G containing F.

Proof. — Let us start with a remark and prove that if G, G′ are subsets of M such that G ⊊ G′ and G is generating, then G′ is not free. Indeed, let x ∈ G′∖G; let us write x as a linear combination of elements of G, x = ∑_{g∈G} g a_g, where (a_g)_{g∈G} is an almost-null family. Set a_x = −1 and a_y = 0 for y ∈ G′∖G such that y ≠ x. The family (a_g)_{g∈G′} is almost null, but not identically null, and one has ∑_{g∈G′} g a_g = 0, so that G′ is not free.

We now prove the equivalence of the three given assertions.

Assume that B is a basis of M. Then B is free and generating. By the initial remark, every subset B′ of M such that B ⊂ B′ and B ≠ B′ is not free. In other words, B is maximal among all free subsets of M, which proves (i)⇒(ii).

Let B′ be a subset of M such that F ⊂ B′ ⊊ B; by the initial remark, if B′ is generating, then B is not free. Consequently, B is minimal among the generating subsets of G containing F, hence (i)⇒(iii).

We now assume that B is a free subset of G, maximal among those containing F. Let us show that B is generating. This holds if G = B. Otherwise, let m ∈ G∖B. The subset B ∪ {m} is not free, so that there are elements a and (a_b)_{b∈B} of K, all but finitely many of them being zero, but not all of them, such that ∑_{b∈B} b a_b + ma = 0. If a = 0, this is a nontrivial linear dependence relation among the elements of B, contradicting the hypothesis that B is free. So a ≠ 0 and m = −∑_{b∈B} b a_b a^{-1} belongs to the subspace M′ of M generated by B. Consequently, M′ contains G. Since G is generating, M′ = M, and B generates M. It is thus a basis of M.

It remains to show that a generating subset B of G which is minimal among those containing F is a basis. If such a subset B were not free, there would exist a nontrivial linear dependence relation ∑_{b∈B} b a_b = 0 among the elements of B. Let β ∈ B be such that a_β ≠ 0; we have β = −∑_{b≠β} b a_b a_β^{-1}. It follows that β belongs to the vector subspace M′ generated by the elements of B∖{β}. Consequently, M′ contains B, hence M′ = M since B is generating. In particular, B∖{β} is generating, contradicting the minimality hypothesis on B. □

Proposition (3.7.2) (Exchange property). — Let K be a division ring, let M be a right K-vector space. Let (u_i)_{i∈I} and (v_j)_{j∈J} be families of elements of M. One assumes that (u_i) is generating and that (v_j) is free. For every k ∈ J, there exists i ∈ I such that the family (w_j)_{j∈J} defined by w_k = u_i and w_j = v_j for j ≠ k is free.

In other words, given a free family and a generating family in a vector space, one can replace any chosen element of the free family by some element of the generating family without losing the property of being free.

Proof. — Set J′ = J∖{k} and let M′ be the subspace of M generated by the family (v_j)_{j∈J′}. Let i ∈ I and assume that we cannot replace v_k by u_i. This means that the family (w_j)_{j∈J} defined by w_k = u_i and w_j = v_j for j ∈ J′ is not free; let ∑_{j∈J} w_j a_j = 0 be a nontrivial linear dependence relation. Since the family (v_j)_{j∈J′} is free, one has a_k ≠ 0. Then we can write

u_i = w_k = w_k a_k a_k^{-1} = −∑_{j∈J′} v_j a_j a_k^{-1},

so that w_k ∈ M′. Consequently, if the exchange property does not hold, then all u_i belong to M′. Since the family (u_i) is generating, one has M′ = M. But then v_k belongs to M′, hence one may write v_k = ∑_{j∈J′} v_j b_j for some almost-null family (b_j)_{j∈J′}. Passing v_k to the right hand side, this gives a nontrivial linear dependence relation for the family (v_j)_{j∈J}, contradicting the hypothesis that it is free. □

One of the fundamental results in the theory of vector spaces is the following theorem, presumably well known to the reader in the case of (commutative) fields!

Theorem (3.7.3). — Let K be a division ring and let M be a right K-vector space.

a) M has a basis.
b) More precisely, if F is a free subset of M and G is a generating subset of M such that F ⊂ G, there exists a basis B of M such that F ⊂ B ⊂ G.
c) All bases of M have the same cardinality.

The common cardinality of all bases of M is called the dimension of M and is denoted dim_K(M), or dim(M) if no confusion on K can arise.

Proof. — Assertion a) results from b), applied to the free subset F = ∅ and to the generating subset G = M.

Let us prove b). Let ℱ be the set of all free subsets of G which contain F, ordered by inclusion. Let (F_i) be a totally ordered family of free subsets of G, let F′ be their union if the family is nonempty, and let F′ = F otherwise. Let us observe that F′ is free. We may assume that the family is nonempty. Let then (a_m)_{m∈F′} be an almost-null family of elements of K such that we have a linear dependence relation 0 = ∑ m a_m between the members of F′. Let F″ be the set of all m ∈ F′ such that a_m ≠ 0; it is a finite set. By induction on the cardinality of F″, there is an index i such that F″ ⊂ F_i. Set a_m = 0 for m ∈ F_i∖F″; we have ∑_{m∈F_i} m a_m = 0. Since F_i is free, a_m = 0 for every m ∈ F_i. In particular, a_m = 0 for every m ∈ F′. This shows that the set F′ is free. Since it is an upper bound for the family (F_i), the ordered set ℱ is inductive.

By Zorn's lemma (theorem A.2.11), the ordered set ℱ has a maximal element B. By proposition 3.7.1, the set B is a basis of M, and F ⊂ B ⊂ G by construction.

c) Let B and B′ be two bases of M. Since B is free and B′ is generating, lemma 3.7.4 asserts the existence of an injection f : B ↪ B′. On the other hand, since B′ is free and B is generating, there is also an injection g : B′ ↪ B. By the Cantor–Bernstein theorem (theorem A.2.1), the bases B and B′ are equipotent. □

Lemma (3.7.4). — Let K be a division ring, let M be a right K-vector space. Let F and G be subsets of M such that F is free and G is generating. Then there exists an injection from F into G.

Proof. — Let Φ be the set of all pairs (F′, φ) where F′ is a subset of F and φ is an injective map from F′ to G such that the set (F∖F′) ∪ φ(F′) is free. We endow this set Φ with the ordering for which (F′_1, φ_1) ≺ (F′_2, φ_2) if and only if F′_1 ⊂ F′_2 and φ_2(m) = φ_1(m) for every m ∈ F′_1.

This set Φ is nonempty, because the pair (∅, φ) belongs to Φ, where φ is the unique map from the empty set to G. Let us show that Φ is inductive. So let (F′_i, φ_i)_i be a totally ordered family of elements of Φ, let F′ be the union of all subsets F′_i and let φ be the map from F′ to G which coincides with φ_i on F′_i. The map φ is well defined: if an element m of F′ belongs both to F′_i and F′_j, we may assume that (F′_i, φ_i) ≺ (F′_j, φ_j), and then φ_j(m) = φ_i(m).

Let us show that (F′, φ) belongs to Φ. The map φ is injective: let m, m′ be distinct elements of F′; since the family (F′_i, φ_i) is totally ordered, there exists an index i such that m and m′ both belong to F′_i; since φ_i is injective, φ_i(m) ≠ φ_i(m′); then φ(m) ≠ φ(m′).

For m ∈ F∖F′, set φ(m) = m, and let (a_m)_{m∈F} be an almost-null family of elements of K such that ∑_{m∈F} φ(m) a_m = 0. There exists an index i such that all elements m ∈ F′ for which a_m ≠ 0 belong to F′_i. Then we get a linear dependence relation among the members of (F∖F′_i) ∪ φ(F′_i), so that a_m = 0 for every m ∈ F, as was to be shown.

By Zorn's lemma (theorem A.2.11), Φ has a maximal element (F′, φ). Let us show that F′ = F. Otherwise, let μ ∈ F∖F′. For every m ∈ F∖F′, write φ(m) = m. By the exchange property (proposition 3.7.2) applied to the free family (φ(m))_{m∈F}, to the index μ and to the generating set G, there exists an element γ ∈ G such that the family (φ′(m))_{m∈F} is free, where φ′ : F → M coincides with φ on F∖{μ}, and φ′(μ) = γ. Define F′_1 = F′ ∪ {μ} and let φ_1 : F′_1 → G be the map given by φ_1(μ) = γ and φ_1(m) = φ(m) for m ∈ F′. The map φ_1 is injective. Consequently, (F′_1, φ_1) belongs to Φ and is strictly bigger than (F′, φ), contradicting the maximality hypothesis on (F′, φ).

So φ : F → G is injective, establishing the lemma. □

Remark (3.7.5). — Let K be a division ring and let M be a right K-vector space. When proving the preceding theorem, we made use of the axiom of choice, by way of Zorn's lemma. This cannot be avoided in general. However, if M is finitely generated, then usual induction is enough. The first question of exercise 3/30 suggests a simple direct proof of lemma 3.7.4 in that case, leading to a simplification of the proof that all bases of a (finitely generated) vector space have the same cardinality.

Corollary (3.7.6). — Let A be a nonzero commutative ring and let M be an A-module. If M is free, then all bases of M have the same cardinality.

This cardinality is called the rank of the free A-module M, and is denoted by rk_A(M).

Proof. — Let J be a maximal ideal of A, so that the quotient ring K = A/J is a field. Let MJ be the submodule of M generated by all products ma, for a ∈ J and m ∈ M; set V = M/MJ. This is an A-module. However, if a ∈ J and m ∈ M, then cl_MJ(m)a = cl_MJ(ma) = 0; consequently, V can be viewed as an A/J-module, that is, a K-vector space. (See also §3.1.5.)

Assume that M is free and let (e_i)_{i∈I} be a basis of M. Let us show that the family (cl_MJ(e_i))_{i∈I} is a basis of V. First, since the e_i generate M, this family generates V as an A-module, hence as a K-vector space. It remains to show that it is free. Let (λ_i)_{i∈I} be an almost-null family of elements of K such that ∑_{i∈I} cl_MJ(e_i) λ_i = 0. For every i ∈ I such that λ_i ≠ 0, let us choose a_i ∈ A such that cl_J(a_i) = λ_i; if λ_i = 0, set a_i = 0. Then the family (a_i)_{i∈I} is almost-null and ∑_{i∈I} cl_MJ(e_i) cl_J(a_i) = 0. In other words, m = ∑_{i∈I} e_i a_i belongs to MJ. Since MJ is generated by the elements of the form e_i a for i ∈ I and a ∈ J, there is a family (b_i)_{i∈I} of elements of J such that m = ∑_{i∈I} e_i b_i. The family (e_i) being free, one has a_i = b_i for every i ∈ I. Consequently, λ_i = cl_J(a_i) = cl_J(b_i) = 0 for every i ∈ I. The linear dependence relation that we have considered is thus trivial, so that the family (cl_MJ(e_i))_{i∈I} is free, as claimed.

Consequently, this family is a basis of the K-vector space V. This implies that Card(I) = dim_K(V), and dim_K(V) is therefore the common cardinality of all bases of M. □

Theorem (3.7.7). — Let K be a division ring and let M be a K-vector space. Every subspace N of M has a direct summand P and one has

dim(N) + dim(P) = dim(M).

Proof. — Let B_1 be a basis of N. By theorem 3.7.3, there exists a basis B of M which contains B_1. The subset B_2 = B∖B_1 of M is then free and generates a vector subspace P of M such that N + P = M. Let us show that N ∩ P = 0. So let m ∈ N ∩ P; one can write m both as a linear combination of elements of B_1 and as a linear combination of elements of B_2. Subtracting these relations, we get a linear dependence relation between the members of B. If m ≠ 0, this relation is nontrivial, contradicting the hypothesis that B is free. Hence N ∩ P = 0 and P is a direct summand of N in M.

By definition, one has dim(N) = Card(B_1) and dim(M) = Card(B). Moreover, B_2 is free and generates P, so that dim(P) = Card(B_2). Since B_1 and B_2 are disjoint, one has

dim(M) = Card(B) = Card(B_1 ∪ B_2) = Card(B_1) + Card(B_2) = dim(N) + dim(P),

as was to be shown. □

Many classical results in the theory of vector spaces over a (commutative) field remain true, with the same proof, in the more general context of vector spaces over a division ring. Let us give two examples.

Corollary (3.7.8). — Let K be a division ring. Let V, W be K-vector spaces and let f : V → W be a K-linear map.

a) One has dim(Im(f)) + dim(Ker(f)) = dim(V);
b) If dim(V) > dim(W), then f is not injective;
c) Assume that dim(V) = dim(W) and that this common dimension is finite. Then the following properties are equivalent: (i) f is injective; (ii) f is surjective; (iii) f is bijective.

Proof. — a) Passing to the quotient, the morphism f defines an injective K-linear map φ : V/Ker(f) → W, which induces an isomorphism from V/Ker(f) to Im(φ) = Im(f), so that dim(Im(f)) = dim(V/Ker(f)). Let V′ be a direct summand of Ker(f) in V; one has dim(V) = dim(Ker(f)) + dim(V′). Finally, the canonical surjection cl_{Ker(f)} : V → V/Ker(f) induces, by restriction, a bijective morphism from V′ to V/Ker(f), so that dim(V′) = dim(V/Ker(f)). Consequently, we have

dim(Im(f)) + dim(Ker(f)) = dim(V/Ker(f)) + dim(Ker(f)) = dim(V′) + dim(Ker(f)) = dim(V).

b) Since Im(f) ⊂ W, one has dim(Im(f)) ≤ dim(W). If dim(V) > dim(W), this forces dim(Ker(f)) > 0, hence f is not injective.

c) If f is injective, then dim(Im(f)) = dim(V) = dim(W), hence Im(f) = W and f is surjective. If f is surjective, then dim(Ker(f)) = 0, hence Ker(f) = 0 and f is injective. This implies the equivalence of the three statements. □

Corollary (3.7.9). — Let K be a division ring. Let V, W be K-vector spaces and let f : V → W be a K-linear map.

a) The map f is injective if and only if it has a left inverse;
b) The map f is surjective if and only if it has a right inverse.

Proof. — If a map f has a left inverse (resp. a right inverse), then it is injective (resp. surjective). It thus suffices to prove the converse assertions.

a) Assume that f is injective, so that f induces an isomorphism f_1 from V to its image f(V). Let W_1 be a direct summand of f(V) in W (theorem 3.7.7) and let p : W → f(V) be the projector with image f(V) and kernel W_1. The morphism g = f_1^{-1} ∘ p : W → V satisfies g ∘ f = id_V, so that f is left invertible.

b) Assume that f is surjective. Let V_1 be a direct summand of Ker(f) in V (theorem 3.7.7). Then the morphism f_1 = f|_{V_1} : V_1 → W is injective and surjective (for w ∈ W, let v ∈ V be such that f(v) = w, and write v = v_1 + m, where v_1 ∈ V_1 and m ∈ Ker(f); one has f(v_1) = f(v) = w), hence an isomorphism. Let j : V_1 → V be the injection and h = j ∘ f_1^{-1}; this is a morphism from W to V such that f ∘ h ∘ f_1 = f ∘ j = f_1, hence f ∘ h = id_W since f_1 is surjective. Consequently, f is right invertible. □

3.8. Alternate multilinear forms. Determinants

Definition (3.8.1). — Let A be a commutative ring, let M and N be A-modules, let p be a positive integer. A map f : M^p → N is said to be p-linear if it is linear with respect to each variable: for every (m_1, …, m_p) ∈ M^p and every i, the map m ↦ f(m_1, …, m_{i−1}, m, m_{i+1}, …, m_p) is A-linear.

One says that a p-linear map f : M^p → N is alternate if for every (m_1, …, m_p) ∈ M^p for which there exist i ≠ j such that m_i = m_j, one has f(m_1, …, m_p) = 0.

When N = A, a p-linear map from M to A is called a p-linear form.

Let A be a commutative ring, let M and N be A-modules, let p be a positive integer. Let f be a p-linear alternate map from M to N; let (m_1, …, m_p) ∈ M^p, let i and j be distinct elements of {1, …, p} and let us apply the alternating property to the family obtained by replacing both m_i and m_j by their sum m_i + m_j. Expanding, one gets (assuming i < j to fix ideas)

0 = f(m_1, …, m_{i−1}, m_i + m_j, …, m_{j−1}, m_i + m_j, …, m_p)
  = f(m_1, …, m_{i−1}, m_i, …, m_{j−1}, m_j, …, m_p) + f(m_1, …, m_{i−1}, m_j, …, m_{j−1}, m_i, …, m_p),

since the two other terms vanish. In other words, f is changed into its opposite when two variables are exchanged. This property is called antisymmetry.
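A basic example to keep in mind is M = A² and p = 2: the 2-linear form f given by f((x_1, x_2), (y_1, y_2)) = x_1 y_2 − x_2 y_1 is alternate, since f((x_1, x_2), (x_1, x_2)) = x_1 x_2 − x_2 x_1 = 0, and exchanging its two arguments indeed changes f into −f.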

Now, any permutation σ of {1, …, p} can be written as a product of finitely many transpositions (permutations exchanging two elements); the number of transpositions in such a product is even or odd according to whether the signature ε(σ) of σ is 1 or −1. One thus gets

(3.8.1.1)   f(m_{σ(1)}, …, m_{σ(p)}) = ε(σ) f(m_1, …, m_p).

Conversely, let f be a p-linear antisymmetric map. If there are two indices i ≠ j such that m_i = m_j, exchanging m_i and m_j changes f(m_1, …, m_p) both into itself and into its opposite, so that f(m_1, …, m_p) = −f(m_1, …, m_p). Consequently, 2f(m_1, …, m_p) = 0, hence 2f is alternate. If 2 is regular in A or, more generally, if multiplication by 2 in the module N is injective, then f is alternate too.

However, if A = Z/2Z, any symmetric p-linear map is antisymmetric. For example, the 2-linear form (x_1, x_2) ↦ x_1 x_2 on the free A-module A is antisymmetric, but it is not alternate.

The set of all alternate p-linear maps from M to N is denoted Λ^p(M, N); it is naturally an A-module. By definition, Λ^1(M, N) = Hom_A(M, N). On the other hand, the A-module Λ^0(M, N) identifies with N. Indeed, M^0 is a singleton, so that a map f : M^0 → N is just an element of N, and every such map is linear in each variable (since there is no variable).

Let u : M → M′ be a morphism of A-modules. For any f ∈ Λ^p(M′, N), the map (m_1, …, m_p) ↦ f(u(m_1), …, u(m_p)) is a p-linear alternate map from M to N, denoted Λ^p(u)(f), or simply u^*(f). The map f ↦ Λ^p(u)(f) is a morphism of A-modules from Λ^p(M′, N) to Λ^p(M, N). One has Λ^p(id_M) = id_{Λ^p(M,N)}, and Λ^p(u ∘ v) = Λ^p(v) ∘ Λ^p(u). When N is fixed, the construction M ↦ Λ^p(M, N) is thus a contravariant functor in M.

Let v : N → N′ be a morphism of A-modules. For any f ∈ Λ^p(M, N), the map v ∘ f : (m_1, …, m_p) ↦ v(f(m_1, …, m_p)) is a p-linear alternate map from M to N′, also denoted Λ^p(v)(f), or v_*(f). The map f ↦ Λ^p(v)(f) is a morphism of A-modules from Λ^p(M, N) to Λ^p(M, N′). One has Λ^p(id_N) = id_{Λ^p(M,N)}, and Λ^p(v ∘ v′) = Λ^p(v) ∘ Λ^p(v′). When M is fixed, the construction N ↦ Λ^p(M, N) is thus a covariant functor in N.

Proposition (3.8.2). — Let A be a commutative ring. Let M be an A-module and let (m_1, …, m_n) be a generating family of M. For any A-module P and any integer p > n, one has Λ^p(M, P) = 0.

Proof. — Let f : M^p → P be an alternate p-linear map. Let (x_1, …, x_p) ∈ M^p. By assumption, there exist elements a_{i,j} (for 1 ≤ i ≤ n and 1 ≤ j ≤ p) such that x_j = ∑ a_{i,j} m_i. Since f is p-linear, one has

f(x_1, …, x_p) = ∑_{i_1=1}^n … ∑_{i_p=1}^n a_{i_1,1} … a_{i_p,p} f(m_{i_1}, …, m_{i_p}).

Since p > n, in each p-tuple (i_1, …, i_p) of elements of {1, …, n}, at least two terms are equal. The map f being alternate, one has f(m_{i_1}, …, m_{i_p}) = 0. Consequently, f(x_1, …, x_p) = 0 and f = 0. □

More generally, the above proof shows that an alternate p-linear map is determined by its values on the p-tuples of distinct elements among (m_1, …, m_n).

Proposition (3.8.3). — Let M be a free A-module. A family (m_1, …, m_p) of elements of M is free if and only if, for every a ∈ A∖{0}, there exists a p-linear alternate form f on M such that a f(m_1, …, m_p) ≠ 0.

Proof. — Assume that the family (m_1, …, m_p) is bonded and let us consider a nontrivial linear dependence relation a_1 m_1 + ⋯ + a_p m_p = 0. Up to reordering the family (m_1, …, m_p), we assume that a_1 ≠ 0. Then for any alternate p-linear form f on M, one has

0 = f(a_1 m_1 + ⋯ + a_p m_p, m_2, …, m_p) = ∑_{i=1}^p a_i f(m_i, m_2, …, m_p) = a_1 f(m_1, …, m_p).

Let us now assume that the family (m_1, …, m_p) is free and let us show the desired property by induction on p.

The property holds for p = 0, for the 0-linear alternate form equal to the constant 1 is adequate. Let us assume that the property holds for p − 1 and let us prove it for p. Let a ∈ A∖{0}. Since the family (m_1, …, m_{p−1}) is free, there exists an alternate (p−1)-linear form f such that a f(m_1, …, m_{p−1}) ≠ 0. Then the map

f′ : M^p → M,   f′(x_1, …, x_p) = ∑_{j=1}^p (−1)^j f(x_1, …, x̂_j, …, x_p) x_j

(the hat indicating that the argument x_j is omitted) from M^p to M is a p-linear alternate map. Assume that a f′(m_1, …, m_p) = 0. This gives a linear dependence relation among the m_i. Since the coefficient of m_p is equal to (−1)^p a f(m_1, …, m_{p−1}), this relation is nontrivial, contradicting the assumption that (m_1, …, m_p) is free. Consequently, a f′(m_1, …, m_p) ≠ 0. In particular, there exists a linear form ψ on M (for example, a suitable coordinate form in a basis of M) such that ψ(a f′(m_1, …, m_p)) ≠ 0. The composite g = ψ ∘ f′ is an alternate p-linear form such that a g(m_1, …, m_p) ≠ 0. □

Corollary (3.8.4). — Let A be a nonzero commutative ring. Let M be a free A-module which is generated by n elements. Then the cardinality of any free family in M is at most n.

Proof. — Let (e_1, …, e_p) be a free family in M. By proposition 3.8.3, applied to a = 1 and the family (e_1, …, e_p), there exists a nonzero alternate p-linear form on M. Proposition 3.8.2 then implies that p ≤ n. □

Corollary (3.8.5). — Let A be a nonzero commutative ring. Let M be a finitely generated free A-module. Then all bases of M are finite and their cardinalities are equal.

Proof. — Let (m_1, …, m_n) be a generating family of M. By the preceding corollary, every free family in M is finite and has cardinality at most n. In particular, all bases of M are finite.

Let (e_1, …, e_m) and (f_1, …, f_p) be bases of M. Since (f_1, …, f_p) generates M and since (e_1, …, e_m) is free, one has m ≤ p. Since (e_1, …, e_m) generates M and since (f_1, …, f_p) is free, one has p ≤ m. Consequently, m = p. □

Definition (3.8.6). — Let A be a nonzero commutative ring. If M is a finitely generated free A-module, the common cardinality of all of the bases of M is called its rank and denoted rk_A(M).

If the ring A is clear from the context, one also writes rk(M) for rk_A(M).

Theorem (3.8.7). — Let A be a nonzero commutative ring and let M be a free A-module of finite rank n > 0. For any basis (e_1, …, e_n) of M, there exists a unique alternate n-linear form φ on M such that φ(e_1, …, e_n) = 1. This form φ is a basis of the A-module Λ^n(M, A). In particular, Λ^n(M, A) is a free A-module of rank 1.

Proof. — Let (φ_1, …, φ_n) be the basis of M^∨ dual to the basis (e_1, …, e_n). It is almost obvious that there is at most one alternate n-linear form as in the theorem. Indeed, let f be an alternate n-linear form on M. For any (m_1, …, m_n) ∈ M^n and any j, one has m_j = ∑_{i=1}^n φ_i(m_j) e_i. Expanding, one gets

f(m_1, …, m_n) = ∑_{i_1=1}^n … ∑_{i_n=1}^n φ_{i_1}(m_1) … φ_{i_n}(m_n) f(e_{i_1}, …, e_{i_n}).

Moreover, f(e_{i_1}, …, e_{i_n}) = 0 if two indices i_j are equal; when they are pairwise distinct, the map j ↦ i_j is a permutation σ of {1, …, n} and

f(e_{i_1}, …, e_{i_n}) = ε(σ) f(e_1, …, e_n).

Consequently, if such an alternate n-linear form exists at all, it is given by the (well known) formula:

(3.8.7.1)   f(m_1, …, m_n) = (∑_{σ∈S_n} ε(σ) ∏_{i=1}^n φ_i(m_{σ(i)})) f(e_1, …, e_n).

Let us now prove the theorem. It suffices to show that the previous formula, with the factor f(e_1, …, e_n) set to 1, defines an alternate n-linear form. It is clear that it is n-linear; the alternating property is not much harder. Assume indeed that m_j = m_k; in formula (3.8.7.1), the terms corresponding to the permutations σ and στ_{j,k}, where τ_{j,k} is the transposition exchanging j and k, only differ by their sign. When σ runs among all even permutations, the permutations σ and στ_{j,k} together meet every permutation of {1, …, n} exactly once, so that the final sum is 0.

We just proved that there is an alternate n-linear form φ on M which maps (e_1, …, e_n) to 1, and that any alternate n-linear form f on M is of the form aφ, where a = f(e_1, …, e_n). This concludes the proof of the theorem. □

Definition (3.8.8). — Let A be a commutative ring.

Let M be a free A-module of rank n and let ℬ = (e_1, …, e_n) be a basis of M. The unique alternate n-linear form on M which maps (e_1, …, e_n) to 1 is called the determinant (with respect to the basis ℬ); it is written det_ℬ.

Let U be an n × n matrix with entries in A. The determinant det(U) of U is the determinant of the column vectors of U in the canonical basis of A^n. For such a matrix U = (a_{i,j}), one thus has

det(U) = ∑_{σ∈S_n} ε(σ) ∏_{i=1}^n a_{i,σ(i)}.
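For n = 2, for instance, the group S_2 consists of the identity, with signature 1, and the transposition, with signature −1, so that the formula gives, for U = (a_{i,j}),

det(U) = a_{1,1} a_{2,2} − a_{1,2} a_{2,1}.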

Example (3.8.9). — Let M be a free A-module of rank n and let ℬ = (e_1, …, e_n) be a basis of M; write (φ_1, …, φ_n) for the dual basis. Let I be a subset of {1, …, n} with cardinality p; enumerate its elements in increasing order, so that I = {i_1, …, i_p} with 1 ≤ i_1 < ⋯ < i_p ≤ n. Let M_I be the submodule of M generated by the e_i, for i ∈ I, and let φ_I be the map from M^p to A defined by

φ_I(m_1, …, m_p) = ∑_{σ∈S_p} ε(σ) ∏_{j=1}^p φ_{i_j}(m_{σ(j)}).

It is p-linear and alternate. This can be shown by a direct computation. Alternatively, let det_{ℬ_I} be the determinant form on M_I with respect to its basis ℬ_I = (e_i)_{i∈I} and let π_I be the projector onto M_I whose kernel is the submodule generated by the e_i for i ∉ I. Then

φ_I(m_1, …, m_p) = det_{ℬ_I}(π_I(m_1), …, π_I(m_p)).

Corollary (3.8.10). — Let M be a free A-module of rank n and let p be any integer such that 0 ≤ p ≤ n. The family (φ_I), where I runs among all subsets of {1, …, n} with cardinality p, is a basis of the A-module Λ^p(M, A). In particular, the A-module Λ^p(M, A) is free of rank equal to the binomial coefficient \binom{n}{p}.

Proof. — Let I = {i_1, …, i_p}, with i_1 < ⋯ < i_p. It follows from the definition of φ_I that φ_I(e_{i_1}, …, e_{i_p}) = 1. Let J be another subset of {1, …, n} with p elements; write J = {j_1, …, j_p}, with j_1 < ⋯ < j_p. Since J ≠ I, there exists k ∈ {1, …, p} such that j_k ∉ I (otherwise, J ⊂ I, hence J = I since they both have p elements); consequently, π_I(e_{j_k}) = 0, hence φ_I(e_{j_1}, …, e_{j_p}) = 0. This implies that for any alternate p-linear form f on M,

f = ∑_{I={i_1<⋯<i_p}} f(e_{i_1}, …, e_{i_p}) φ_I.

Indeed, the two members of the preceding equality are alternate p-linear forms that take the same value at all p-tuples of elements of the basis {e_1, …, e_n}. This shows that the family (φ_I) generates Λ^p(M, A). Moreover, for any family (a_I) indexed by the set of p-element subsets of {1, …, n}, the alternate p-linear form ∑ a_I φ_I takes the value a_I at (e_{i_1}, …, e_{i_p}) if I = {i_1 < ⋯ < i_p}. In particular, if ∑ a_I φ_I = 0, then a_I = 0 for all I. In other words, the family (φ_I) is free. □

Theorem and Definition (3.8.11). — Let M be a free A-module of rank $n$ and let $u$ be an endomorphism of M. The endomorphism $\Lambda^n(u)$ of $\Lambda^n(M,A)$ is a homothety; its ratio is called the determinant of $u$ and denoted by $\det(u)$.

Proof. — Recall that the A-module $\Lambda^n(M,A)$ is isomorphic to A: if ℬ $=(e_1,\dots,e_n)$ is a basis of M, the determinant form $\det_{\mathscr B}$ is a basis of $\Lambda^n(M,A)$. Since $\Lambda^n(u)(\det_{\mathscr B})$ is an alternate $n$-linear form on M, there exists a unique element $\delta\in A$ such that $\Lambda^n(u)(\det_{\mathscr B})=\delta\det_{\mathscr B}$. Let now $f$ be any alternate $n$-linear form on M; there exists $a\in A$ such that $f=a\det_{\mathscr B}$. Since $\Lambda^n(u)$ is linear, one has $\Lambda^n(u)(f)=a\,\Lambda^n(u)(\det_{\mathscr B})=a\delta\det_{\mathscr B}$. Since A is commutative, $\Lambda^n(u)(f)=\delta f$. This shows that $\Lambda^n(u)$ is a homothety of ratio $\delta$. □

3.8.12. — Let A be a commutative ring, let M be a free A-module of rank $n$ and let $u$ be an endomorphism of M. Let ℬ be a basis of M and let U be the matrix of $u$ in the basis ℬ. One has $\det(u)=\det(U)$. Indeed, by definition, the alternate $n$-linear form $\Lambda^n(u)(\det_{\mathscr B})$ maps $(m_1,\dots,m_n)$ to $\det_{\mathscr B}(u(m_1),\dots,u(m_n))$. In particular, it maps $(e_1,\dots,e_n)$ to $\det_{\mathscr B}(u(e_1),\dots,u(e_n))=\det(U)$. This shows that $\Lambda^n(u)(\det_{\mathscr B})=(\det U)\det_{\mathscr B}$, so that $\det(u)=\det(U)$.

Let $u$ and $v$ be endomorphisms of M. One has $\Lambda^n(v\circ u)=\Lambda^n(u)\circ\Lambda^n(v)$, hence
$$\det(v\circ u)=\det(u)\det(v)=\det(v)\det(u).$$
One also has $\Lambda^n(\mathrm{id}_M)=\mathrm{id}$, so that $\det(\mathrm{id}_M)=1$.

Assume that $u$ is invertible, with inverse $v$. Then $\Lambda^n(u)$ is also invertible, with inverse $\Lambda^n(v)$; this implies that $\det(u)$ is invertible, with inverse $\det(v)$. In other words,
$$\det(u^{-1})=\det(u)^{-1}.$$
Of course, the same formulae hold for matrices: if U and V are two $n\times n$ matrices, then $\det(VU)=\det(V)\det(U)$. One also has $\det(U^{\mathrm t})=\det(U)$, either by a direct computation, or using the fact that the transpose of a homothety is a homothety with the same ratio, so that for any endomorphism $u$ of M, $\Lambda^n(u^{\mathrm t})=\Lambda^n(u)^{\mathrm t}$ is the homothety of ratio $\det(u)$ in $\Lambda^n(M^\vee,A)$.

Let $U=(u_{i,j})$ be an $n\times n$ matrix. The cofactor $v_{i,j}$ of the coefficient $u_{i,j}$ of index $(i,j)$ is the determinant of the $(n-1)\times(n-1)$ matrix obtained by removing from U the $i$th row and the $j$th column, multiplied by $(-1)^{i+j}$. The adjugate of U is the matrix $\widetilde U=(v_{j,i})_{1\le i,j\le n}$, the transpose of the matrix whose entries are the cofactors of U.

Proposition (3.8.13) (Adjugate formula). — The following formula holds:
$$\widetilde U\,U=U\,\widetilde U=(\det U)\,I_n.$$

Proof. — For $i\in\{1,\dots,n\}$, let $N_i$ be the set $\{1,\dots,\hat\imath,\dots,n\}=\{1,\dots,n\}\smallsetminus\{i\}$.

Let $i,j\in\{1,\dots,n\}$. Consider the $(n-1)\times(n-1)$ matrix $U_{i,j}$ obtained by deleting the $i$th row and the $j$th column, its rows being indexed by $N_i$ and its columns by $N_j$. If $\sigma\colon N_i\xrightarrow{\ \sim\ }N_j$ is a bijection, let $m(\sigma)$ be the number of inversions of $\sigma$, that is, the number of pairs $(k,\ell)$ of elements of $N_i$ such that $k<\ell$ but $\sigma(k)>\sigma(\ell)$, and let $\varepsilon(\sigma)=(-1)^{m(\sigma)}$. Then, the expansion of the determinant of $U_{i,j}$ writes
$$\det(U_{i,j})=\sum_{\sigma\colon N_i\xrightarrow{\sim}N_j}\varepsilon(\sigma)\prod_{k\ne i}u_{k,\sigma(k)}.$$
Now, a bijection $\sigma\colon N_i\xrightarrow{\sim}N_j$ can be extended to a permutation $\tau\in\mathfrak S_n$ such that $\tau(i)=j$, and conversely, every such permutation $\tau$ is obtained exactly once. Let us compare the number of inversions of $\tau$ with that of $\sigma$. By definition, $m(\tau)-m(\sigma)$ is the number of pairs $(k,\ell)$ in $\{1,\dots,n\}$ such that $k<\ell$, $\tau(k)>\tau(\ell)$, and either $k$ or $\ell$ is equal to $i$. Let $a$ and $b$ be the numbers of such pairs with $k=i$ and $\ell=i$ respectively. A pair $(i,\ell)$ counts if and only if $\ell>i$ and $\tau(\ell)<j$, so that $a$ values among $\tau(i+1),\dots,\tau(n)$ are smaller than $j$. Similarly, a pair $(k,i)$ counts if and only if $k<i$ and $\tau(k)>j$, so that $b$ values among $\tau(1),\dots,\tau(i-1)$ are above $j$; consequently, $i-1-b$ such values are smaller than $j$. This implies that $a+(i-1-b)=j-1$, hence $a-b=j-i$. Consequently, $a+b\equiv i+j\pmod 2$ and
$$m(\sigma)=m(\tau)-a-b\equiv m(\tau)+(i+j)\pmod 2,$$
so that
$$(-1)^{m(\tau)}=(-1)^{m(\sigma)}(-1)^{i+j}.$$
In particular, for any $j\in\{1,\dots,n\}$,
$$\sum_{i=1}^n u_{i,j}(-1)^{i+j}\det(U_{i,j})=\sum_{i=1}^n\sum_{\substack{\tau\in\mathfrak S_n\\ \tau(i)=j}}(-1)^{m(\tau)}u_{i,j}\Big(\prod_{p\ne i}u_{p,\tau(p)}\Big)=\sum_{\tau\in\mathfrak S_n}(-1)^{m(\tau)}\prod_{p=1}^n u_{p,\tau(p)}=\det(U).$$
On the other hand, for $i\ne k$, this formula shows that
$$\sum_{j=1}^n(-1)^{i+j}\det(U_{i,j})\,u_{k,j}$$
is the determinant of the matrix $U_k$ obtained from U by replacing its $i$th row by the $k$th row of U; indeed, this manipulation does not modify any of the matrices $U_{i,j}$ but replaces $u_{i,j}$ by $u_{k,j}$. Since two rows of $U_k$ are equal, $\det(U_k)=0$. (The definition of the determinant implies that it vanishes when two columns are equal; since $\det(U^{\mathrm t})=\det(U)$, it vanishes as well whenever two rows are equal.) Altogether, we have shown that $\widetilde U\,U=\det(U)\,I_n$. The other formula follows by transposition: the adjugate of $U^{\mathrm t}$ is the transpose of $\widetilde U$; therefore, $\widetilde U^{\mathrm t}U^{\mathrm t}=\det(U^{\mathrm t})I_n=\det(U)I_n$, hence $U\widetilde U=\det(U)I_n$. □

In the course of the proof of this formula, we established the following Laplace expansion formula for the determinant, as a linear combination of smaller determinants:

Corollary (3.8.14). — For every integer $i\in\{1,\dots,n\}$, one has
$$\det(U)=\sum_{j=1}^n(-1)^{i+j}u_{i,j}\det(U_{i,j}).$$
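The Laplace expansion and the adjugate formula are easy to test numerically. The sketch below (an illustration, not the book's; it uses a small recursive determinant based on the expansion along the first row) builds the adjugate of an integer matrix and checks that $U\widetilde U=\det(U)I_n$.

```python
# Illustrative sketch: cofactors, the adjugate, and a check of the adjugate
# formula U * adj(U) = det(U) * I for a small integer matrix.
def det(M):
    # Laplace expansion along the first row (fine for tiny matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def adjugate(U):
    n = len(U)
    def deleted(i, j):   # remove row i and column j
        return [[U[r][c] for c in range(n) if c != j] for r in range(n) if r != i]
    # entry (i, j) of the adjugate is the cofactor of index (j, i)
    return [[(-1) ** (i + j) * det(deleted(j, i)) for j in range(n)] for i in range(n)]

U = [[2, 1, 0], [0, 3, 1], [1, 0, 1]]
A, d = adjugate(U), det(U)
product = [[sum(U[i][k] * A[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
assert product == [[d if i == j else 0 for j in range(3)] for i in range(3)]
```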

Definition (3.8.15). — The characteristic polynomial of a matrix $U\in M_n(A)$ is the determinant of the matrix $XI_n-U$, with coefficients in $A[X]$.

Expansion of this determinant shows that the characteristic polynomial of U is a monic polynomial of degree $n$. In fact, one has
$$P_U(X)=X^n-\mathrm{Tr}(U)X^{n-1}+\dots+(-1)^n\det(U),$$
where $\mathrm{Tr}(U)=\sum_{i=1}^n u_{ii}$ is the trace of U.

Remark (3.8.16). — Let $U\in M_n(A)$ and $V\in\mathrm{GL}_n(A)$. Then the matrices U and $V^{-1}UV$ have the same characteristic polynomial. Indeed,
$$P_{V^{-1}UV}(X)=\det(XI_n-V^{-1}UV)=\det(V^{-1}(XI_n-U)V)=\det(XI_n-U)=P_U(X).$$
As a consequence, one may define the characteristic polynomial of an endomorphism $u$ of a free finitely generated A-module M: it is the characteristic polynomial of the matrix U of $u$ in any basis of M.

Similarly, the trace $\mathrm{Tr}(u)$ of $u$ is defined as $\mathrm{Tr}(U)$. The map $u\mapsto\mathrm{Tr}(u)$ is a linear form on $\mathrm{End}_A(M)$. It follows from the properties of traces and determinants of matrices that $\mathrm{Tr}(uv)=\mathrm{Tr}(vu)$ and $\det(uv)=\det(vu)=\det(u)\det(v)$ for any $u,v\in\mathrm{End}_A(M)$.

Corollary (3.8.17) (Cayley–Hamilton). — Let $U\in M_n(A)$ be a matrix and let $P_U$ be its characteristic polynomial. One has $P_U(U)=0$ in $M_n(A)$.

Proof. — Let $V\in M_n(A[X])$ be the matrix defined by $V=XI_n-U$ and let $\widetilde V$ be the adjugate of V. Every entry $\widetilde V_{ij}$ of $\widetilde V$ is the determinant of a matrix of size $n-1$ extracted from $V=XI_n-U$. Consequently, $\widetilde V_{ij}$ is a polynomial with coefficients in A of degree $\le n-1$, so that there exist matrices $V_0,\dots,V_{n-1}\in M_n(A)$ such that
$$\widetilde V=V_{n-1}X^{n-1}+\dots+V_0.$$
By the adjugate formula applied to the matrix V in the ring $A[X]$, one has $V\widetilde V=\det(V)I_n$, and $\det(V)=P_U(X)$. Consequently, we have
$$(XI_n-U)(V_{n-1}X^{n-1}+\dots+V_0)=P_U(X)\,I_n.$$
Let $a_0,\dots,a_{n-1}\in A$ be such that
$$P_U(X)=X^n+a_{n-1}X^{n-1}+\dots+a_0.$$
Identifying the coefficients of each monomial, one obtains the relations
$$V_{n-1}=I_n,\quad V_{n-2}-UV_{n-1}=a_{n-1}I_n,\quad\dots,\quad V_0-UV_1=a_1I_n,\quad -UV_0=a_0I_n.$$
If we multiply the first relation by $U^n$, the second by $U^{n-1}$, etc., and add them up, the left hand sides telescope and we obtain
$$0=U^n+a_{n-1}U^{n-1}+\dots+a_0I_n,$$
that is, $P_U(U)=0$. □
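As a sanity check (an illustration, not part of the text): for a 2×2 matrix the characteristic polynomial is $X^2-\mathrm{Tr}(U)X+\det(U)$, and evaluating it at the matrix itself does give the zero matrix.

```python
# Cayley-Hamilton for a 2x2 integer matrix: U^2 - Tr(U)*U + det(U)*I = 0.
U = [[1, 2], [3, 4]]
tr = U[0][0] + U[1][1]                       # 5
dt = U[0][0] * U[1][1] - U[0][1] * U[1][0]   # -2
U2 = [[sum(U[i][k] * U[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
P_of_U = [[U2[i][j] - tr * U[i][j] + (dt if i == j else 0) for j in range(2)]
          for i in range(2)]
assert P_of_U == [[0, 0], [0, 0]]
```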

Corollary (3.8.18). — Let A be a commutative ring, let M be a finitely generated A-module and let $u$ be an endomorphism of M. There exists a monic polynomial $P\in A[X]$ such that $P(u)=0$.

Proof. — If M is the free A-module $A^n$, this follows from the Cayley–Hamilton theorem (corollary 3.8.17): one may take for P the characteristic polynomial of $u$.

In the general case, let $(m_1,\dots,m_n)$ be a finite family generating M. Let $p\colon A^n\to M$ be the morphism given by $p(a_1,\dots,a_n)=\sum a_im_i$. The morphism $p$ is surjective. In particular, for every $i\in\{1,\dots,n\}$, there exists $v_i\in A^n$ such that $p(v_i)=u(m_i)$. Let $v\colon A^n\to A^n$ be the morphism defined by $v(a_1,\dots,a_n)=\sum a_iv_i$. One has $p\circ v=u\circ p$. Let V be the matrix of $v$ and let P be its characteristic polynomial. By the Cayley–Hamilton theorem, one has $P(V)=0$, hence $P(v)=0$. Consequently, $0=p\circ P(v)=P(u)\circ p$. Since $p$ is surjective, this implies that $P(u)=0$, and this concludes the proof of the corollary. □

Proposition (3.8.19). — Let A be a commutative ring and let M be a free A-module of rank $n$.

a) An endomorphism of M is surjective if and only if its determinant is invertible; it is then an isomorphism.

b) An endomorphism of M is injective if and only if its determinant is regular.

Proof. — Let $u$ be an endomorphism of M. Let us fix a basis ℬ $=(e_1,\dots,e_n)$ of M, let U be the matrix of $u$ in this basis and let $\widetilde U$ be its adjugate. One has $\widetilde UU=U\widetilde U=\det(U)I_n$. If $\det(U)$ is regular, the homothety of ratio $\det(U)$ is injective in M, hence $u$ is injective. On the other hand, if $\det(U)$ is invertible, the endomorphism $v$ of M with matrix $\det(U)^{-1}\widetilde U$ is an inverse of $u$, hence $u$ is an isomorphism, in particular surjective.

Assume that $u$ is surjective. For any $i\in\{1,\dots,n\}$, let $f_i\in M$ be such that $u(f_i)=e_i$; let $w$ be the unique endomorphism of M that maps $e_i$ to $f_i$ for every $i$. One has $u(w(e_i))=e_i$ for every $i$, hence $u\circ w=\mathrm{id}_M$. Consequently, $\det(u)\det(w)=1$ and $\det(u)$ is invertible.

It remains to show that $\det(u)$ is regular in A if $u$ is injective. By hypothesis, the family $(u(e_1),\dots,u(e_n))$ is free. Let $a$ be a nonzero element of A. By proposition 3.8.3, there exists an alternate $n$-linear form $f$ on M such that $a\,f(u(e_1),\dots,u(e_n))\ne0$. As any alternate $n$-linear form, $f$ is a multiple of the determinant form $\det_{\mathscr B}$ in the basis $(e_1,\dots,e_n)$, so that $a\det_{\mathscr B}(u(e_1),\dots,u(e_n))\ne0$, which means exactly that $a\det(u)\ne0$. Since $a$ was chosen arbitrarily, this proves that $\det(u)$ is regular in A. □

Corollary (3.8.20). — Let M be a free A-module and let ℬ $=(e_1,\dots,e_n)$ be a basis of M.

A family $(x_1,\dots,x_n)$ of elements of M is free if and only if its determinant in the given basis is regular in A.

Such a family generates M if and only if its determinant in the basis ℬ is a unit of A; then, this family is even a basis of M.

Proof. — Let $u$ be the unique endomorphism of M that maps $e_i$ to $x_i$. By definition, one has $\det(u)=\det_{\mathscr B}(x_1,\dots,x_n)$. For $(x_1,\dots,x_n)$ to be free, it is necessary and sufficient that $u$ be injective, hence that $\det(u)$ be regular. For $(x_1,\dots,x_n)$ to generate M, it is necessary and sufficient that $u$ be surjective, hence that $\det(u)$ be invertible; then, $u$ is invertible and $(x_1,\dots,x_n)$ is a basis of M. □


3.9. Fitting ideals

3.9.1. — Let A be a commutative ring. For every matrix $U\in M_{n,m}(A)$ and every positive integer $p$, let us denote by $\Delta_p(U)$ the ideal of A generated by the determinants of all $p\times p$ matrices extracted from U. We also write this ideal as $\Delta_p(u_1,\dots,u_m)$, where $u_1,\dots,u_m\in A^n$ are the columns of U. It will be convenient to call any determinant of a $p\times p$ matrix extracted from U a minor of size $p$ of U. By Laplace expansion of determinants, a minor of size $p$ is a linear combination of minors of size $p-1$. This implies that $\Delta_p(U)\subset\Delta_{p-1}(U)$ for every $p$. If $p>\inf(m,n)$, then $\Delta_p(U)=0$. One also has $\Delta_0(U)=A$ (the determinant of a $0\times0$ matrix is equal to 1); for simplicity of notation, we also set $\Delta_p(U)=A$ for $p<0$.

Example (3.9.2). — Let $D=\mathrm{diag}(d_1,\dots,d_{\min(n,m)})\in M_{n,m}(A)$ be a "diagonal matrix", by which we mean that all of its coefficients are zero, unless their row and column indices are equal. Assume moreover that $d_i$ divides $d_{i+1}$ for every integer $i$ such that $1\le i<\min(n,m)$. For any integer $p\in\{0,\dots,\min(n,m)\}$, the ideal $\Delta_p(D)$ is generated by the product $d_1\cdots d_p$.

Let $I\subset\{1,\dots,n\}$ and $J\subset\{1,\dots,m\}$ be two subsets of cardinality $p$, and let $D_{IJ}=(d_{i,j})_{i\in I,\,j\in J}$ be the matrix obtained by extracting from D the rows whose indices belong to I and the columns whose indices belong to J. Assume that $I\ne J$ and let us show that $\det(D_{IJ})=0$. Indeed, $\det(D_{IJ})$ is a sum of products $d_{i_1,j_1}\cdots d_{i_p,j_p}$ (with signs), where $I=\{i_1,\dots,i_p\}$ and $J=\{j_1,\dots,j_p\}$. Since $I\ne J$, there must be an index $k$ such that $i_k\ne j_k$, so that $d_{i_k,j_k}=0$. However, if $I=J$, one such product is $\prod_{i\in I}d_i$, and all others vanish. By the divisibility assumption on the diagonal entries of D, we see that all minors of size $p$ of D are divisible by $d_1\cdots d_p$, and that $d_1\cdots d_p$ is one such minor (namely, for $I=J=\{1,\dots,p\}$). Consequently, $\Delta_p(D)=(d_1\cdots d_p)$.
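Over the integers (a principal ideal domain), each ideal $\Delta_p(U)$ has a single generator, the gcd of the minors of size $p$, so the example is easy to check by brute force. The following sketch is illustrative only.

```python
# Illustrative: for an integer matrix, a generator of Delta_p(U) is the gcd of
# all p x p minors.  For D = diag(2, 6, 12) this recovers 2, 2*6, 2*6*12.
from itertools import combinations
from math import gcd

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def delta_generator(U, p):
    g = 0
    for rows in combinations(range(len(U)), p):
        for cols in combinations(range(len(U[0])), p):
            g = gcd(g, abs(det([[U[i][j] for j in cols] for i in rows])))
    return g

D = [[2, 0, 0], [0, 6, 0], [0, 0, 12]]
assert [delta_generator(D, p) for p in (1, 2, 3)] == [2, 12, 144]
```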

Lemma (3.9.3). — Let $U\in M_{n,m}(A)$. For any $P\in\mathrm{GL}_n(A)$, $Q\in\mathrm{GL}_m(A)$ and $p\in\{1,\dots,\min(n,m)\}$, one has
$$\Delta_p(PUQ)=\Delta_p(U).$$

Proof. — The columns of the matrix UQ are linear combinations of columns of U. By multilinearity of the determinant, each minor of size $p$ of UQ is then a linear combination of minors of size $p$ of U, hence $\Delta_p(UQ)\subset\Delta_p(U)$. Similarly, the rows of the matrix PU are linear combinations of rows of U and $\Delta_p(PU)\subset\Delta_p(U)$. It follows that $\Delta_p(PUQ)\subset\Delta_p(UQ)\subset\Delta_p(U)$. Since P and Q are invertible, we can write $U=P^{-1}(PUQ)Q^{-1}$, hence $\Delta_p(U)\subset\Delta_p(PUQ)$, so that, finally, $\Delta_p(U)=\Delta_p(PUQ)$. □

3.9.4. — Let M be a finitely generated A-module and let $\varphi\colon A^n\to M$ be a surjective morphism of A-modules. For every integer $p\in\{0,\dots,n\}$, let then $J_p(\varphi)$ be the ideal of A generated by the ideals $\Delta_p(U)=\Delta_p(u_1,\dots,u_p)$, where $U=(u_1,\dots,u_p)$ ranges over all elements of $\mathrm{Ker}(\varphi)^p$. One has $J_0(\varphi)=A$. If $p<0$, then we set $J_p(\varphi)=A$.

Lemma (3.9.5). — Let $p$ be an integer such that $1\le p\le n$.

a) One has $J_p(\varphi)\subset J_{p-1}(\varphi)$;

b) Let $(v_i)_{i\in I}$ be a family of elements of $\mathrm{Ker}(\varphi)$ which generates $\mathrm{Ker}(\varphi)$. Then $J_p(\varphi)$ is generated by the ideals $\Delta_p(v_{i_1},\dots,v_{i_p})$, where $i_1,\dots,i_p\in I$.

c) One has $\mathrm{Ann}_A(M)\cdot J_{p-1}(\varphi)\subset J_p(\varphi)$.

Proof. — a) Expanding a minor of size $p$ of U along one column, one gets an inclusion $\Delta_p(U)\subset\Delta_{p-1}(U)$. This implies the inclusion $J_p(\varphi)\subset J_{p-1}(\varphi)$.

b) One inclusion is obvious. Moreover, multilinearity of the determinant implies that if $U=(u_1,\dots,u_p)$ is an $n\times p$ matrix with columns in $\mathrm{Ker}(\varphi)$, then any minor involved in the definition of $\Delta_p(U)$ is a linear combination of minors of size $p$ of matrices of the form $(v_{i_1},\dots,v_{i_p})$, with $i_1,\dots,i_p\in I$.

c) Let $a\in\mathrm{Ann}_A(M)$. Let $u_2,\dots,u_p\in\mathrm{Ker}(\varphi)$; let also $u\in A^n$ be any basis element, and let $u_1=au$. Since $a\in\mathrm{Ann}_A(M)$, one has $u_1\in\mathrm{Ker}(\varphi)$. The Laplace expansion along the first column of the determinant of any $p\times p$ submatrix of $(u_1,\dots,u_p)$ is $\pm a$ times a minor of size $p-1$ of $(u_2,\dots,u_p)$, and all such minors appear when $u$ varies, so that $\Delta_p(u_1,\dots,u_p)=a\,\Delta_{p-1}(u_2,\dots,u_p)$. In particular, $a\,J_{p-1}(\varphi)\subset J_p(\varphi)$. We thus have shown the inclusion $\mathrm{Ann}_A(M)\cdot J_{p-1}(\varphi)\subset J_p(\varphi)$. □

Theorem (3.9.6). — Let A be a commutative ring and let M be a finitely generated A-module. Let $\varphi\colon A^m\to M$ and $\psi\colon A^n\to M$ be surjective morphisms. For every integer $d\ge0$, the ideals $J_{m-d}(\varphi)$ and $J_{n-d}(\psi)$ are equal.

Proof. — We split the proof in 5 steps.

1) We first assume that $m=n$ and that there exists an automorphism $u$ of $A^n$ such that $\psi=\varphi\circ u$. Then $J_p(\varphi)=J_p(\psi)$ for every $p$. Indeed, observe that for $x\in A^n$, one has $x\in\mathrm{Ker}(\psi)$ if and only if $u(x)\in\mathrm{Ker}(\varphi)$. Let $x_1,\dots,x_p\in\mathrm{Ker}(\psi)$; write $X=(x_1,\dots,x_p)$ and $u(X)=(u(x_1),\dots,u(x_p))$. By multilinearity of the determinant, $\Delta_p(u(X))\subset\Delta_p(X)\subset J_p(\psi)$. When X runs among all elements of $\mathrm{Ker}(\psi)^p$, $u(X)$ runs among all elements of $\mathrm{Ker}(\varphi)^p$, so that $J_p(\varphi)\subset J_p(\psi)$. Equality follows by symmetry.

2) Let $(e_1,\dots,e_m)$ be the canonical basis of $A^m$; for every $i\in\{1,\dots,m\}$, there exists $u_i\in A^n$ such that $\psi(u_i)=\varphi(e_i)$, because $\psi$ is surjective. Then, the unique morphism $u$ from $A^m$ to $A^n$ such that $u(e_i)=u_i$ for every $i$ satisfies $\varphi=\psi\circ u$. One constructs similarly a morphism $v\colon A^n\to A^m$ such that $\psi=\varphi\circ v$.

Identify $A^{m+n}$ with $A^m\times A^n$; we define three morphisms $\theta,\theta',\theta''$ from $A^{m+n}$ to M by $\theta(x,y)=\varphi(x)+\psi(y)$, $\theta'(x,y)=\varphi(x)$ and $\theta''(x,y)=\psi(y)$. They are surjective. Observe that one has
$$\theta(x,y)=\varphi(x)+\psi(y)=\varphi(x)+\varphi(v(y))=\varphi(x+v(y))=\theta'(x+v(y),y);$$
likewise,
$$\theta(x,y)=\theta''(x,u(x)+y).$$
Moreover, the endomorphism $(x,y)\mapsto(x+v(y),y)$ of $A^{m+n}$ is an isomorphism (with inverse $(x,y)\mapsto(x-v(y),y)$), and so is the endomorphism $(x,y)\mapsto(x,u(x)+y)$. By the first case above, it follows that for every $p\le m+n$, one has
$$J_p(\theta)=J_p(\theta')=J_p(\theta'').$$

3) By construction, $\psi\colon A^n\to M$ is a surjective morphism and $\theta''\colon A^m\times A^n\to M$ is the composition of the second projection with $\psi$. We then prove that $J_p(\theta'')=J_{p-m}(\psi)$ for any $p\le m+n$. This is trivial if $p\le0$, hence we assume $p\ge1$. By induction on $m$, it suffices to treat the case $m=1$. Then, $\mathrm{Ker}(\theta'')$ is generated by the vector $(1,0,\dots,0)$ together with the vectors of the form $(0,y)$ for $y\in\mathrm{Ker}(\psi)$.

Let $z_1,\dots,z_p$ be such vectors and let Z be the matrix with columns $(z_1,\dots,z_p)$. If the vector $(1,0,\dots)$ appears twice among $z_1,\dots,z_p$, then two columns of Z are equal; since the number of columns of Z is precisely $p$, this proves that any minor of size $p$ of Z vanishes. If that vector appears exactly once, then we see, by expanding a minor of size $p$ along the corresponding column, that $\Delta_p(Z)\subset J_{p-1}(\psi)$. Finally, if all the $z_j$ are of the form $(0,y)$, then $\Delta_p(Z)\subset J_p(\psi)\subset J_{p-1}(\psi)$. This shows that $\Delta_p(Z)\subset J_{p-1}(\psi)$, hence $J_p(\theta'')\subset J_{p-1}(\psi)$. Observe moreover that all generators of $J_{p-1}(\psi)$ can be obtained as minors of size $p$ of a suitable matrix Z: just take $z_1=(1,0,\dots,0)$ and $z_2,\dots,z_p$ of the form $(0,y)$ with $y\in\mathrm{Ker}(\psi)$. (When $p=1$, this gives $\Delta_p(Z)=A=J_0(\psi)$.) Consequently, $J_p(\theta'')=J_{p-1}(\psi)$.

4) By symmetry, one has $J_p(\theta')=J_{p-n}(\varphi)$ for every $p\le m+n$.

5) The conclusion of the proof is now straightforward. Let $d$ be an integer such that $d\ge0$. By step 3, we have $J_{n-d}(\psi)=J_{m+n-d}(\theta'')$; by step 4, we have $J_{m-d}(\varphi)=J_{m+n-d}(\theta')$; by step 2, it follows that $J_{n-d}(\psi)=J_{m-d}(\varphi)=J_{m+n-d}(\theta)$. □

Definition (3.9.7). — Let A be a commutative ring and let M be a finitely generated A-module. Let $m$ be an integer and let $\varphi\colon A^m\to M$ be a surjective morphism of A-modules. For any integer $d\ge0$, the ideal $J_{m-d}(\varphi)$ is called the $d$th Fitting ideal of M and denoted $\mathrm{Fit}_d(M)$.

By theorem 3.9.6, it is independent of the choice of $\varphi$. One has $\mathrm{Fit}_d(M)=A$ for $d$ large enough.

Corollary (3.9.8). — Let A be a commutative ring, let M be a finitely generated A-module and let $I=\mathrm{Ann}_A(M)$ be its annihilator.

a) For any integer $d\ge0$, one has $\mathrm{Fit}_d(M)\subset\mathrm{Fit}_{d+1}(M)$ and $I\cdot\mathrm{Fit}_{d+1}(M)\subset\mathrm{Fit}_d(M)$.

b) If M is generated by $d$ elements, then $\mathrm{Fit}_d(M)=A$.

Proof. — This is a direct application of lemma 3.9.5. Let $\varphi\colon A^m\to M$ be a surjective morphism.

a) One has $\mathrm{Fit}_d(M)=J_{m-d}(\varphi)\subset J_{m-d-1}(\varphi)=\mathrm{Fit}_{d+1}(M)$ for every $d\ge0$. Moreover, $I\cdot\mathrm{Fit}_{d+1}(M)=I\cdot J_{m-d-1}(\varphi)\subset J_{m-d}(\varphi)=\mathrm{Fit}_d(M)$.

b) Assume that M is generated by $d$ elements and let $\psi\colon A^d\to M$ be a surjective morphism of A-modules. Then $\mathrm{Fit}_d(M)=J_0(\psi)=A$. □
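For instance (a standard illustration, not taken from the text), the $\mathbf Z$-module $M=\mathbf Z/2\mathbf Z\oplus\mathbf Z/4\mathbf Z$ is presented by the surjection $\mathbf Z^2\to M$ whose kernel is generated by the columns of $\mathrm{diag}(2,4)$; its Fitting ideals are $\mathrm{Fit}_0(M)=(8)$, $\mathrm{Fit}_1(M)=(2)$ and $\mathrm{Fit}_d(M)=\mathbf Z$ for $d\ge2$. A sketch of the computation:

```python
# Illustrative sketch: Fitting ideals of M = Z/2 + Z/4 from the presentation
# matrix U = diag(2, 4).  Over Z, Fit_d(M) is generated by the gcd of the
# (2 - d) x (2 - d) minors of U (and equals Z when 2 - d <= 0).
from math import gcd

U = [[2, 0], [0, 4]]
fit0 = abs(U[0][0] * U[1][1] - U[0][1] * U[1][0])         # only 2x2 minor: 8
fit1 = gcd(gcd(U[0][0], U[0][1]), gcd(U[1][0], U[1][1]))  # gcd of the entries: 2
fit2 = 1                                                  # Fit_2(M) = Z
assert (fit0, fit1, fit2) == (8, 2, 1)
```

The containments of corollary 3.9.8 are visible here: $(8)\subset(2)\subset\mathbf Z$, and $\mathrm{Ann}(M)\cdot\mathrm{Fit}_1(M)=(4)(2)\subset(8)=\mathrm{Fit}_0(M)$.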

Exercises

1-exo546) Let A be a ring and let M be a right A-module.

a) Let N be a submodule of M. Show that the set $(N:M)$ of all $a\in A$ such that $ma\in N$ for every $m\in M$ is a two-sided ideal of A.

b) Let $m\in M$ and let $\mathrm{Ann}_A(m)=\{a\in A\,;\ ma=0\}$ (the annihilator of $m$). Show that it is a right ideal of A; give an example where it is not a left ideal of A. Show that the submodule generated by $m$ is isomorphic to $A_d/\mathrm{Ann}_A(m)$.

c) Let I be a right ideal of A, let N be a submodule of M and let $(N:_M I)$ be the set of all $m\in M$ such that $ma\in N$ for all $a\in I$. Prove that $(N:_M I)$ is a submodule of M.

2-exo027) Let A and B be two rings and let $f\colon A\to B$ be a morphism of rings.

a) Let M be a right B-module. Show that one defines a right A-module by endowing the abelian group M with the multiplication $M\times A\to M$ given by $(m,a)\mapsto mf(a)$. This A-module shall be denoted $f^*M$.

b) Let $u\colon M\to N$ be a morphism of right B-modules; show that $u$ defines a morphism of A-modules $f^*M\to f^*N$.

c) Show that the so-defined map $\mathrm{Hom}_B(M,N)\to\mathrm{Hom}_A(f^*M,f^*N)$ is an injective morphism of abelian groups. Give an example where it is not surjective.

d) Let M be a right B-module. Compute the annihilator of the A-module $f^*M$ in terms of the annihilator of M.

3-exo553) Let A and B be rings and let $f\colon A\to B$ be a morphism of rings. The additive group of B is endowed with the structure of right A-module deduced from $f$.

a) Assume that the image of $f$ is contained in the center of B (so that B is an A-algebra). Show that the multiplication of B is A-bilinear, namely that the maps $x\mapsto xb$ and $x\mapsto bx$, for fixed $b\in B$, are A-linear.

b) Conversely, if the multiplication of B is A-bilinear, show that the image of $f$ is contained in the center of B.

4-exo028) Let A be a commutative ring, let M and N be two A-modules.

a) Let $u\in\mathrm{End}_A(M)$. Show that there is a unique structure of an $A[X]$-module on M for which $X\cdot m=u(m)$ and $a\cdot m=am$ for every $m\in M$ and every $a\in A$. This $A[X]$-module will be denoted $M_u$.

b) Show that the map $u\mapsto M_u$ is a bijection from $\mathrm{End}_A(M)$ to the set of all structures of $A[X]$-module on M such that $a\cdot m=am$ for every $m\in M$ and every $a\in A$.

c) Let $u\in\mathrm{End}_A(M)$ and $v\in\mathrm{End}_A(N)$. Determine the set of all morphisms from $M_u$ to $N_v$.

d) If M = N, under what necessary and sufficient condition on $u$ and $v$ are the two modules $M_u$ and $M_v$ isomorphic?

e) Translate the results of the exercise when A = K is a field and $M=K^n$ is the standard K-vector space of dimension $n$.

5-exo547) a) Give an example of two submodules of a module whose union is not a submodule.

b) Let $(M_n)_{n\in\mathbf N}$ be an increasing family of submodules of an A-module M, meaning that $M_n\subset M_p$ if $n\le p$. Show that its union $\bigcup_n M_n$ is a submodule of M.

c) Let K be a division ring, let V be a K-vector space and let $(W_i)_{1\le i\le n}$ be a family of subspaces of V such that $W=\bigcup_i W_i$ is a vector subspace. If K is infinite or, more generally, if K has at least $n$ elements, show that there exists $i\in\{1,\dots,n\}$ such that $W=W_i$.

d) Let $V_1$, $V_2$ and $V_3$ be the sets of all pairs $(x,y)\in\mathbf Z^2$ such that $x$, resp. $x+y$, resp. $y$, is even. Show that these are submodules of $\mathbf Z^2$, distinct from $\mathbf Z^2$, and that $\mathbf Z^2=V_1\cup V_2\cup V_3$.

6-exo039) Let A be a ring, let M be an A-module, let $M_1,\dots,M_r$ be submodules of M of which M is the direct sum, and let $I_1=(0:M_1),\dots,I_r=(0:M_r)$ be their annihilators. Assume that the ideals $I_1,\dots,I_r$ are pairwise comaximal.

Set $I=\bigcap_{i=1}^r I_i$ and, for $i\in\{1,\dots,r\}$, set $J_i=\bigcap_{j\ne i}I_j$.

a) Show that for any $i$, $I_i$ and $J_i$ are comaximal two-sided ideals of A.

For every $i$, let $N_i$ be the submodule of M generated by the submodules $M_j$, for $j\ne i$. For any two-sided ideal J of A, let $(0:J)$ be the submodule of M defined by $\{m\in M\,;\ ma=0$ for every $a\in J\}$. Show the following formulas:

b) $J_i=(0:N_i)$ and $N_i=(0:J_i)$;

c) $N_i=MI_i$ and $MJ_i=M_i=\bigcap_{j\ne i}N_j$.

7-exo129) Let M be an A-module and let $m\in M$ be an element of M. Show that the following properties are equivalent:

(i) The annihilator of $mA$ is $(0)$ and $mA$ has a direct summand in M;

(ii) There exists a linear form $f$ on M such that $f(m)=1$.

When they hold, show that $M=mA\oplus\mathrm{Ker}\,f$.

8-exo197) Let $f\colon M\to N$ be a morphism of A-modules.

a) Show that there exists a morphism $g\colon N\to M$ such that $g\circ f=\mathrm{id}_M$ if and only if $f$ is injective and $\mathrm{Im}(f)$ has a direct summand in N.

b) Show that there exists a morphism $g\colon N\to M$ such that $f\circ g=\mathrm{id}_N$ if and only if $f$ is surjective and $\mathrm{Ker}(f)$ has a direct summand in M.

9-exo565) Let A be a ring, let M be a right A-module and let $(M_i)_{i\in I}$ be a family of submodules of M whose sum is equal to M.

a) Show that, for the $M_i$ to be in direct sum, it is necessary and sufficient that for every $i\in I$, the intersection of the submodules $M_i$ and $\sum_{j\ne i}M_j$ is reduced to $\{0\}$.

b) Give an example of an A-module M and of a family $(M_1,M_2,M_3)$ of submodules of M whose sum equals M, which are not in direct sum, but for which $M_i\cap M_j=0$ for every pair $(i,j)$ with $i\ne j$.

10-exo036) Let A be a commutative ring and let M be an A-module.

a) Consider the following definition of a weakly torsion element: an element $m\in M$ is weakly torsion if there exists $a\in A\smallsetminus\{0\}$ such that $ma=0$. Give an example that shows that the set of weakly torsion elements of a module may not be a submodule.

Recall that T(M) is the set of all $m\in M$ for which there exists a regular element $a\in A$ such that $am=0$.

b) Let S be the set of non-zero-divisors of A. Show that T(M) is the kernel of the canonical morphism from M to $S^{-1}M$.

c) Show that the quotient module M/T(M) is torsion free.

d) Let $f\colon M\to N$ be a morphism of A-modules. Show that $f(\mathrm T(M))\subset\mathrm T(N)\cap f(M)$. Give an example where the inclusion is not an equality.

11-exo549) Let A be a ring, let M be an A-module and let N be a submodule of M. A projector in M is an endomorphism $p\in\mathrm{End}_A(M)$ such that $p\circ p=p$.

a) If $p$ is a projector, prove that $M=\mathrm{Ker}(p)\oplus\mathrm{Im}(p)$.

b) Show that the following conditions are equivalent:

(i) N has a direct summand in M;

(ii) N is the kernel of a projector in M;

(iii) N is the image of a projector in M.

c) Let $p_0$ and $p$ be two projectors in M with the same image N, and let $u$ be the map $p-p_0$. Show that $u$ is a morphism such that $\mathrm{Im}(u)\subset N$ and $\mathrm{Ker}(u)\supset N$. Construct, by passing to the quotient, a linear map $\bar u\colon M/N\to N$.

d) Let $p_0$ be a given projector in M and let N be its image. Show that the map $p\mapsto\bar u$ defined in the preceding question induces a bijection from the set of projectors in M with image N to the abelian group $\mathrm{Hom}_A(M/N,N)$.

12-exo701) Let M be a module which is finitely generated.

a) Show that any generating family of M possesses a finite subfamily which is generating.

b) If M is free, show that any basis of M is finite.

13-exo040) Let A be an integral domain and let K be its field of fractions. One assumes that $K\ne A$. Show that K is not a free A-module.

14-exo196) Bidual. Let A be a ring, let M be a right A-module. One writes $M^\vee=\mathrm{Hom}_A(M,A)$ for its dual (a left A-module) and $M^{\vee\vee}=\mathrm{Hom}_A(M^\vee,A)$ for its bidual, that is, the dual of its dual (a right A-module).

a) Let $m\in M$. Show that the map $\lambda_m\colon M^\vee\to A$, $\varphi\mapsto\varphi(m)$, is A-linear. Prove that the map $m\mapsto\lambda_m$ is a morphism of A-modules $\lambda\colon M\to M^{\vee\vee}$.

b) One assumes that $M=A_d^n$ for some integer $n\ge1$. Show that $\lambda$ is an isomorphism.

One says that M is reflexive if the morphism $\lambda\colon M\to M^{\vee\vee}$ is an isomorphism.

c) Give an example where $\lambda$ is not injective and an example where $\lambda$ is not surjective.

15-exo314) Let $A=\mathbf Z^{(\mathbf N)}$ and $B=\mathbf Z^{\mathbf N}$ be the infinite countable direct sum and the infinite countable product of the abelian group $\mathbf Z$ respectively.

a) For $n\in\mathbf N$, let $e_n$ be the element of B whose $n$th term is 1, all others being 0. Prove that $(e_n)_{n\in\mathbf N}$ is a basis of A, as a $\mathbf Z$-module.

b) For every $b=(b_n)\in B$, prove that there exists a unique linear form $\varphi(b)\in A^\vee$ such that $\varphi(b)(e_n)=b_n$ for every $n\in\mathbf N$. Prove that the map $b\mapsto\varphi(b)$ is an isomorphism from B to $A^\vee$.

c) Let $\delta\colon B^\vee\to B$ be the map given by $\delta(f)=(f(e_n))_{n\in\mathbf N}$. Prove that $\delta$ is a morphism of $\mathbf Z$-modules.

d) Let $f\in\mathrm{Ker}(\delta)$. Let $x\in B$; prove that there exist $y,z\in B$ such that $x=y+z$, and such that $2^n\mid y_n$ and $3^n\mid z_n$ for every $n\in\mathbf N$. Prove that $2^n\mid f(y)$ and $3^n\mid f(z)$, for every $n\in\mathbf N$. Conclude that $f(x)=0$, hence $f=0$. This shows that $\delta$ is injective.

e) Let $f\in B^\vee$ and let $a=\delta(f)$; assume that $a\notin A$. Let $(s_n)$ be a strictly increasing sequence of integers and let $b=(b_n)\in B$ be such that $2^{s_n}\mid b_n$ for all $n$. Prove that $f(b)\equiv\sum_{k=0}^{n-1}a_kb_k\pmod{2^{s_n}}$. Construct $(s_n)$ and $(b_n)\in B$ so as to prevent any integer from satisfying all of these congruences. This proves that $\delta(B^\vee)\subset A$.

f) Prove that $\delta\colon B^\vee\to A$ is an isomorphism.

g) Prove that B is not a free $\mathbf Z$-module.

16-exo030) Let M and N be A-modules. Let $f\colon M\to N$ be a morphism of A-modules. Its transpose $f^\vee$ is the map from $N^\vee$ to $M^\vee$ defined by $f^\vee(\varphi)=\varphi\circ f$, for every $\varphi\in N^\vee=\mathrm{Hom}_A(N,A)$.

a) Show that $f^\vee$ is a morphism of A-modules. Show that $(f+g)^\vee=f^\vee+g^\vee$. Show that $(f\circ g)^\vee=g^\vee\circ f^\vee$.

b) From now on, one assumes that M = N and that A is commutative. Let $f\in\mathrm{End}_A(M)$. Show that the set $I(f)$ of all polynomials $P\in A[X]$ such that $P(f)=0$ is an ideal of $A[X]$.

c) Show that $I(f)\subset I(f^\vee)$.

d) If M is reflexive, show that $I(f)=I(f^\vee)$.

17-exo022) Let A be a commutative ring and let I be a nonzero ideal of A, viewed as a submodule of A. Show that I is a free A-module if and only if the ideal I is principal and generated by a regular element.

18-exo025) Let A be a ring, let M and N be right A-modules and let $\varphi\colon M\to N$ be a surjective morphism.

a) If $\mathrm{Ker}(\varphi)$ and N are finitely generated, show that M is finitely generated.

In the sequel, one assumes that M and N are finitely generated and that N is a free A-module.

b) Show that $\varphi$ has a right inverse $\psi$, namely a morphism $\psi\colon N\to M$ such that $\varphi\circ\psi=\mathrm{id}_N$.

c) Show that $M\simeq\mathrm{Ker}(\varphi)\oplus\mathrm{Im}(\psi)$.

d) Recover the fact that any vector subspace of a vector space has a direct summand.

e) Show that $\mathrm{Ker}(\varphi)$ is finitely generated.

19-exo591) Let A be a ring, let M be a right A-module which is the (internal) direct sum of an infinite family of nonzero submodules $(M_i)_{i\in I}$. For $x\in M$, write $x=\sum_{i\in I}x_i$, with $x_i\in M_i$, and let $I(x)$ be the set of all $i\in I$ such that $x_i\ne0$; this is a finite subset of I.

a) Let S be a generating subset of M. Show that I is equal to the union, for $x\in S$, of the subsets $I(x)$.

b) Show that S is infinite.

c*) Show that $\mathrm{Card}(S)\ge\mathrm{Card}(I)$.

20-exo551) Let A be a commutative ring, let S be a multiplicative subset of A, and let M be an A-module.

a) Let $(m_i)_{i\in I}$ be a generating family of M; show that the family $(m_i/1)$ is a generating family of the $S^{-1}A$-module $S^{-1}M$.

b) Let $(m_i)_{i\in I}$ be a linearly independent family in M. Assume that S does not contain any zero-divisor. Show that the family $(m_i/1)$ in $S^{-1}M$ is still linearly independent.

c) Assume that A is an integral domain and that M is generated by $n$ elements, for some integer $n$. Show that the cardinality of any free subset of M is at most $n$. (Take for S the set of all nonzero elements of A.)

21-exo315) Let A be a commutative ring and let S be a multiplicative subset of A. Let M be an A-module and let $i\colon M\to S^{-1}M$ be the canonical morphism.

a) Let $\mathcal N$ be a submodule of $S^{-1}M$ and let $N=i^{-1}(\mathcal N)$. Prove that N satisfies the following property: for every $m\in M$, if there exists $s\in S$ such that $sm\in N$, then $m\in N$. (One says that N is saturated with respect to S.)

b) One assumes in this question that S is the set of regular elements of A. Prove that a submodule N of M is saturated with respect to S if and only if M/N is torsion-free.

c) Let N be a submodule of M which is saturated with respect to S. Prove that $N=i^{-1}(S^{-1}N)$.

d) Deduce from the preceding questions that the map $\mathcal N\mapsto i^{-1}(\mathcal N)$ defines a bijection from the set of submodules of $S^{-1}M$ to the set of submodules of M which are saturated with respect to S.

22-exo865) Let A be a ring and let M be a right A-module. One says that a submodule N of M is small if for every submodule K of M such that $N+K=M$, one has $K=M$; one says that it is maximal if $N\ne M$ and if for every submodule K of M such that $N\subset K\subset M$, one has $K=N$ or $K=M$. Let $J_M$ be the intersection of all maximal submodules of M.

a) Prove that if M is finitely generated and nonzero, then $J_M\ne M$.

b) Prove that $J_M$ is the sum of all small submodules of M.

23-exo866) Let A be a ring, let J be its Jacobson radical and let M be a right A-module. Assume that there exists an A-module N such that $M\oplus N$ is a free A-module (that is, M is projective).

a) Prove that $J_M=MJ$, where $J_M$ is the intersection of all maximal submodules of M (exercise 3/22).

b) Prove that $MJ\ne M$ if $M\ne0$ (Bass (1960), proposition 2.7).

24-exo041) Let A be a ring which is not a division ring. Give examples:

a) of a non-free module;

b) of a free family with $n$ elements of $A^n$ which is not a basis;

c) of a minimal generating set of a module which is not a basis;

d) of a submodule which has no direct summand;

e) of a free module possessing a non-free submodule.

25-exo555) a) Let M be a nonzero submodule of $\mathbf Q$ which is finitely generated. Show that M is free and that any basis of M has cardinality 1. (Show by induction that there exists $a\in\mathbf Q$ such that $M=\mathbf Z a$.)

b) Show that the $\mathbf Z$-module $\mathbf Q$ is not finitely generated.

c) What are the maximal free subsets of $\mathbf Q$?

d) Does the $\mathbf Z$-module $\mathbf Q$ possess minimal generating subsets?

26-exo554) Let K be a field, let V be a right K-vector space and let A be the ring of endomorphisms of V. One assumes that V is not finitely generated. Show that the right A-module $(A_d)^2$ is isomorphic to $A_d$.

27-exo558) Let A be a ring and let $\varphi\colon(A_d)^m\to(A_d)^n$ be a morphism of right A-modules. Let $\Phi$ be the matrix of $\varphi$. For any morphism of rings $f\colon A\to B$, one writes $\Phi^f$ for the matrix whose entries are the images by $f$ of those of $\Phi$.

a) Show that $\Phi^f$ is the matrix of a morphism $\varphi'\colon(B_d)^m\to(B_d)^n$ of right B-modules.

b) Show that if $\varphi$ is an isomorphism, then $\varphi'$ is an isomorphism too. (Introduce the matrix $\Phi'$ of the inverse of $\varphi$, then the matrix $(\Phi')^f$.)

c) Conclude that if A is a ring which possesses a morphism to a division ring K, then the A-module $(A_d)^m$ can only be isomorphic to the A-module $(A_d)^n$ when $m=n$. In other words, in that case, the cardinalities of all bases of a finitely generated free A-module are equal to a common integer, called the rank of this module.

28-exo562) Let K be a division ring, let V be a right K-vector space and let W be a subspace of V.

a) Show that W is the intersection of the kernels of all linear forms on V which vanish on W.

b) More generally, what is the intersection of the kernels of all linear forms on V which vanish on a given subset S of V?

c) Give an example of a ring A and of a nonzero A-module M for which every linear form is zero.

d) Give an example of an integral domain A, of a free A-module M, and of a submodule N of M such that N is not equal to the intersection of the kernels of all linear forms on M which vanish on N.

29-exo564) Let K be a division ring, let $V_1,V_2,V_3,V_4$ be right K-vector spaces, and let $u\colon V_1\to V_2$, $v\colon V_3\to V_4$ and $w\colon V_1\to V_4$ be linear maps.

a) In order that there exist a linear map $f\colon V_2\to V_4$ such that $f\circ u=w$, it is necessary and sufficient that $\mathrm{Ker}(u)\subset\mathrm{Ker}(w)$.

b) In order that there exist a linear map $g\colon V_1\to V_3$ such that $v\circ g=w$, it is necessary and sufficient that $\mathrm{Im}(w)\subset\mathrm{Im}(v)$.

c) In order that there exist a linear map $h\colon V_2\to V_3$ such that $v\circ h\circ u=w$, it is necessary and sufficient that $\mathrm{Ker}(u)\subset\mathrm{Ker}(w)$ and $\mathrm{Im}(w)\subset\mathrm{Im}(v)$.

30-exo702) Let K be a division ring and let M be a right K-vector space. Assume that M is finitely generated.

a) Let $(m_1,\dots,m_n)$ be a generating family in M and let $(v_1,\dots,v_p)$ be a free family in M. Show by induction on $n$ that $p\le n$.

b) Show that all bases of M are finite and have the same cardinality.

31-exo590) a) Let M be a free finitely generated A-module and let $u$ be an endomorphism of M. Show that the A-module $M/\mathrm{Im}(u)$ is annihilated by $\det(u)$.

b) Let M and N be free finitely generated A-modules, let $(e_1,\dots,e_m)$ be a basis of M and let $(f_1,\dots,f_n)$ be a basis of N. Let $u\colon M\to N$ be a linear map and let U be its matrix in the given bases. Assume that $m\ge n$. Show that $N/u(M)$ is annihilated by every $n\times n$ minor of U.


CHAPTER 4

FIELD EXTENSIONS

One aspect of commutative algebra is to consider not only modules over a given fixed ring, but also morphisms of (commutative) rings.

Let A and B be commutative rings. Recall that it is equivalent to give oneself a morphism of rings $f\colon A\to B$ or a structure of an A-algebra on B, related by the formula $f(a)=a\cdot1_B$.

When A and B are fields, any morphism $f\colon A\to B$ is injective and the study of B as an A-algebra often assumes that A is a subfield of B, $f$ being the inclusion. The classical terminology is to say that B is a field extension of A; one reason for this terminology is that A was often assumed to be given, and B was constructed by enlarging A with new "imaginary" elements.

I will generalize this terminology here and say that B is an extension of rings of A if it is given with the structure of an A-algebra. In fact, one can essentially reduce the study of a morphism of rings $f\colon A\to B$ to the study, on the one side, of the pair consisting of B and its subring $f(A)$, and on the other side, of the quotient ring $f(A)\simeq A/I$, where $I=\mathrm{Ker}(f)$. In particular, if $f$ is injective, it induces an isomorphism from A to $f(A)$ and one is in the situation of a subring.

Often, one understates the underlying morphism $f\colon A\to B$, and writes sentences such as "Let $A\to B$ be an extension of rings."

The main theme of this chapter is to study the ring B in terms of whether or not its elements satisfy polynomial equations with coefficients in A. If yes, one says that the extension is algebraic; and one says that the extension is integral if every element of B satisfies a monic polynomial equation with coefficients in A. As we will learn in the first sections, this property is witnessed by adequate finitely generated A-submodules of B.

The importance of integral extensions of commutative rings will be explored in chapter 9, and I rapidly focus here on algebraic extensions of fields. Because they allow to make use of linear algebra, finite extensions are essential.

Although the reader has certainly heard about the notion of algebraically closed fields, I explain it here, and prove Steinitz's theorem that any field has an algebraic closure, and essentially only one.

The next section is devoted to separability, a property of algebraic extensions which makes their study considerably simpler. Fortunately, many fields only have separable extensions, and they are called perfect.

Building on these results, I can then study finite fields. All in all, the approach that I followed to describe them is certainly not the most economical one, and it would have been possible to prove these results earlier, at the only cost of having to redo some constructions later in greater generality.

In the next section, I define Galois extensions of fields and their Galois group, and show the Galois correspondence, namely a bijection between intermediate extensions of a Galois extension and subgroups of the Galois group. However, I do not discuss the classical applications of Galois theory, such as constructions with ruler and compass or the impossibility of resolution by radicals.

I then define norms and traces and establish their basic properties; I also show how separability is equivalent to the non-degeneracy of the trace map.

In a final section, I discuss general extensions of fields and define their transcendence degree.

4.1. Integral elements

Definition (4.1.1). — Let $A\to B$ be an extension of commutative rings. One says that $b$ is integral over A if there exists a monic polynomial $P\in A[X]$ such that $P(b)=0$.

A relation of the form $b^n+a_{n-1}b^{n-1}+\dots+a_1b+a_0=0$, where $a_0,\dots,a_{n-1}$ are elements of A, is called an integral dependence relation for $b$.

Example (4.1.2). — The complex numbers $z=\exp(2i\pi/n)$ and $u=(-1+\sqrt5)/2$ are integral over $\mathbf Z$: they satisfy the relations $z^n=1$ and $u^2+u-1=0$.

To establish the most basic properties of integral elements, we will make use of the following elementary lemma on finitely generated modules.

Lemma (4.1.3). — Let $A\to B$ be an extension of rings and let M be a B-module. Let $(b_i)_{i\in I}$ be a family of elements of B which generates B as an A-module and let $(m_j)_{j\in J}$ be a family of elements of M which generates it as a B-module.

a) The family $(b_im_j)_{(i,j)\in I\times J}$ generates M as an A-module.

b) If $(b_i)_{i\in I}$ is a basis of B as an A-module and $(m_j)_{j\in J}$ is a basis of M as a B-module, then the family $(b_im_j)_{(i,j)}$ is a basis of M as an A-module.

Proof. — a) Let $m\in M$; since the family $(m_j)$ generates M as a B-module, there exists an almost-null family $(c_j)$ in B such that $m=\sum_{j\in J}c_jm_j$. For every $j\in J$ such that $c_j\ne0$, there exists an almost-null family $(a_{ij})_{i\in I}$ of elements of A such that $c_j=\sum a_{ij}b_i$, because $(b_i)_{i\in I}$ generates B as an A-module. For $j\in J$ such that $c_j=0$, set $a_{ij}=0$ for all $i\in I$. Then the family $(a_{ij})_{(i,j)\in I\times J}$ is almost-null and one has
$$m=\sum_{j\in J}c_jm_j=\sum_{j\in J}\sum_{i\in I}a_{ij}b_im_j,$$
which proves that the family $(b_im_j)$ generates M as an A-module.

b) Let $(a_{ij})$ be an almost-null family in A such that $\sum_{i,j}a_{ij}b_im_j=0$. For every $j\in J$, set $c_j=\sum_{i\in I}a_{ij}b_i$. The family $(c_j)_{j\in J}$ in B is almost-null, and one has $\sum_{j\in J}c_jm_j=0$. Since the family $(m_j)$ is free, one has $c_j=0$ for all $j\in J$. Fix $j\in J$; from the relation $\sum_{i\in I}a_{ij}b_i=0$, the freeness of the family $(b_i)$ implies that $a_{ij}=0$ for all $i$. This shows that $a_{ij}=0$ for all $i$ and $j$. This proves that the family $(b_im_j)$ is free, as claimed. □

Corollary (4.1.4). — Let $A\to B$ be an extension of rings and let M be a B-module. If B is finitely generated (resp. free and finitely generated) as an A-module, and M is finitely generated (resp. free and finitely generated) as a B-module, then M is finitely generated (resp. free and finitely generated) as an A-module.

The next theorem gives a very useful characterization of integral elements.

Theorem (4.1.5). — Let $A\to B$ be an extension of rings, let $b$ be an element of B and let $A[b]$ be the A-subalgebra of B generated by $b$. The following assertions are equivalent:

(i) The element $b$ is integral over A;

(ii) The A-algebra $A[b]$ is a finitely generated A-module;

(iii) There exists an $A[b]$-module whose annihilator is zero, and which is finitely generated as an A-module.

Proof. — (i)⇒(ii). Let $P\in A[X]$ be a monic polynomial such that $P(b)=0$. Let us write it as $P=X^n+a_1X^{n-1}+\dots+a_n$, for $a_1,\dots,a_n\in A$, and let us show that the family $(1,b,\dots,b^{n-1})$ generates $A[b]$ as an A-module. By definition, an element of $A[b]$ is of the form $Q(b)$, for some polynomial $Q\in A[X]$. Since P is monic, we may consider a euclidean division of Q by P, say $Q=PQ_1+R$, where $\deg(R)<n$. Then $Q(b)=P(b)Q_1(b)+R(b)=R(b)$, hence is a linear combination of $(1,b,\dots,b^{n-1})$ with coefficients in A, as required.

(ii)⇒(iii). It suffices to set $M=A[b]$. Indeed, the module M is finitely generated as an A-module, by assumption. Moreover, if $x\in A[b]$ is such that $xM=0$, we obtain $x\cdot1_B=x=0$, so that the annihilator of M as an $A[b]$-module is zero.

(iii)⇒(i). Let M be such an $A[b]$-module and let $\varphi\colon M\to M$ be the A-linear morphism defined by $\varphi(m)=bm$. Since M is finitely generated, it follows from corollary 3.8.18 to the Cayley–Hamilton theorem that there exist $n\in\mathbf N$ and $a_0,\dots,a_{n-1}\in A$ such that
$$\varphi^n+a_{n-1}\varphi^{n-1}+\dots+a_0=0$$
in $\mathrm{End}_A(M)$. In particular, for every $m\in M$, one has
$$(b^n+a_{n-1}b^{n-1}+\dots+a_0)\,m=0.$$
Since the annihilator of M is zero, this implies that $b^n+a_{n-1}b^{n-1}+\dots+a_0=0$. Consequently, $b$ is integral over A. □
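As an illustration (not taken from the text; it assumes the sympy library is available), the element $b=\sqrt2+\sqrt3$ is integral over $\mathbf Z$: its minimal polynomial over $\mathbf Q$ is monic with integer coefficients, and $\mathbf Z[b]$ is then generated as a $\mathbf Z$-module by $1,b,b^2,b^3$, in accordance with (i)⇒(ii).

```python
# Illustrative sketch with sympy: the minimal polynomial of sqrt(2) + sqrt(3)
# over Q is X^4 - 10 X^2 + 1, monic with integer coefficients, so
# sqrt(2) + sqrt(3) is integral over Z.
from sympy import sqrt, Symbol, minimal_polynomial

x = Symbol('x')
b = sqrt(2) + sqrt(3)
P = minimal_polynomial(b, x)
assert (P - (x**4 - 10*x**2 + 1)).expand() == 0
```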

Definition (4.1.6). — Let $A\to B$ be an extension of rings. The set of elements of B which are integral over A is called the integral closure of A in B.

Corollary (4.1.7). — If $A\to B$ is an extension of rings, then the integral closure of A in B is a subalgebra of B.

Proof. — Let $f\colon A\to B$ be the morphism of rings which gives B the structure of an A-algebra.

Let $a\in A$. Its image $f(a)$ in B is integral over A, since it is a root of the monic polynomial $X-a$. Consequently, every element of $f(A)$ is integral over A. In particular, the zero and the unit elements of B are integral over A.

Let then $b,c$ be elements of B which are integral over A. By theorem 4.1.5, the subalgebra $A[b]$ of B is a finitely generated A-module. Since $c$ is integral over A, it is also integral over $A[b]$. By theorem 4.1.5 again, the A-subalgebra $A[b,c]=A[b][c]$ of B is a finitely generated $A[b]$-module. It then follows from corollary 4.1.4 that the subalgebra $A[b,c]$ of B is a finitely generated A-module. Corollary 4.2.2 implies that every element of $A[b,c]$ is integral over A. In particular, $b+c$ and $bc$ are integral over A. □

Definition (4.1.8). — a) Let B be a ring and let A be a subring of B. One says that A is integrally closed in B if it coincides with its own integral closure in B.

b) One says that an integral domain is integrally closed if it is integrally closed in its field of fractions.

Theorem (4.1.9). — A unique factorization domain is integrally closed.

In particular, principal ideal domains and polynomial rings over a field are integrally closed.

Proof. — Let A be a unique factorization domain and let K be its field of fractions. Let $x\in K$; let us assume that $x$ is integral over A and let us prove that $x\in A$. Let $P=X^n+a_1X^{n-1}+\dots+a_n$ be a monic polynomial with coefficients in A such that $P(x)=0$. Let $a,b$ be coprime elements of A such that $x=a/b$ and let us multiply the relation $P(x)=0$ by $b^n$; we obtain the relation
$$a^n+a_1a^{n-1}b+\dots+a_{n-1}ab^{n-1}+a_nb^n=0.$$
Consequently, $a^n=-b(a_1a^{n-1}+\dots+a_nb^{n-1})$ is a multiple of $b$. Since $b$ is prime to $a$, it is prime to $a^n$ (proposition 2.5.17). Necessarily, $b$ is invertible and $x\in A$. □

Proposition (4.1.10). — Let A be an integral domain which is integrally closed. For any multiplicative subset $S\subset A$ which does not contain 0, the ring $S^{-1}A$ is integrally closed.

Proof. — Let K be the field of fractions of A, so that we have natural injections $A\subset S^{-1}A\subset K$, and K is the field of fractions of $S^{-1}A$. Let $x\in K$ be any element which is integral over $S^{-1}A$. Let
$$x^n+a_{n-1}x^{n-1}+\dots+a_0=0$$
be an integral dependence relation, where $a_0,\dots,a_{n-1}$ are elements of $S^{-1}A$. Let $s\in S$ be a common denominator of all the $a_i$, so that $b_i=sa_i\in A$ for every $i\in\{0,\dots,n-1\}$. Multiplying the preceding relation by $s^n$, we obtain the relation
$$(sx)^n+b_{n-1}(sx)^{n-1}+\dots+b_1s^{n-2}(sx)+b_0s^{n-1}=0,$$
which shows that $sx$ is integral over A. Since A is integrally closed in K, $sx\in A$. Consequently, $x=(sx)/s\in S^{-1}A$ and $S^{-1}A$ is integrally closed in K. □

4.2. Integral extensions

Definition (4.2.1). — Let $A\to B$ be an extension of commutative rings. One says that B is integral over A if every element of B is integral over A.

We start with corollaries of theorem 4.1.5.

Corollary (4.2.2). — Let A be a ring and let B be a finitely generated A-algebra. Then B is integral over A if and only if B is finitely generated as an A-module.

Proof. — First assume that B is finitely generated as an A-module and let us prove that B is integral over A. Let $b\in B$ and let M be the $A[b]$-module B. Its annihilator is 0, since for any $x\in B$, the relation $xM=0$ implies $0=x\cdot1_B=x$. Finally, it is finitely generated as an A-module, by assumption. Consequently, $b$ is integral over A.

Conversely, let us assume that B is integral over A. Let $(b_1,\dots,b_n)$ be a finite family of elements of B which generates B as an A-algebra. Let $B_0$ be the image of A in B; for every $i\in\{1,\dots,n\}$, let $B_i=A[b_1,\dots,b_i]$; observe that $B_i=B_{i-1}[b_i]$. By assumption, $b_i$ is integral over A, hence it is integral over $B_{i-1}$. By theorem 4.1.5, $B_i$ is a finitely generated $B_{i-1}$-module. By induction, it then follows from corollary 4.1.4 that $B=B_n$ is a finitely generated $B_0$-module, hence a finitely generated A-module. □

Corollary (4.2.3). — Let $A\to B$ and $B\to C$ be extensions of rings. Assume that B is integral over A.

If an element $c$ of C is integral over B, then $c$ is integral over A. In particular, if C is integral over B, then C is integral over A.

Proof. — Let $c\in C$. Let $P\in B[X]$ be a monic polynomial such that $P(c)=0$. Let $B_1$ be the A-subalgebra of B generated by the coefficients of P. It is finitely generated as an A-module, because B is integral over A (corollary 4.2.2). Moreover, $c$ is integral over $B_1$ by construction, so that the algebra $B_1[c]$ is finitely generated as a $B_1$-module. By corollary 4.1.4, $B_1[c]$ is an A-subalgebra which is finitely generated as an A-module. By theorem 4.1.5, $c$ is integral over A. □

Lemma (4.2.4). — Let $A\to B$ be an extension of rings.

a) For any multiplicative subset S of A, the ring $S^{-1}B$ is integral over $S^{-1}A$.

b) Let J be an ideal of B and let I be an ideal of A such that $f(I)\subset J$. By way of the canonical morphism $\varphi\colon A/I\to B/J$, the ring B/J is integral over A/I.

Proof. — a) Let $x\in S^{-1}B$; let us show that $x$ is integral over $S^{-1}A$. Let $b\in B$ and $s\in S$ be such that $x=b/s$. By assumption, there exists an integral dependence relation over A for $b$, say
$$b^n+a_{n-1}b^{n-1}+\dots+a_0=0.$$
Then
$$x^n+\frac{a_{n-1}}{s}\,x^{n-1}+\dots+\frac{a_0}{s^n}=0$$
is an integral dependence relation for $x$ over $S^{-1}A$.

b) Let $x\in B/J$ and let $b\in B$ be any element such that $x=\mathrm{cl}_J(b)$. Let
$$b^n+a_{n-1}b^{n-1}+\dots+a_0=0$$
be an integral dependence relation for $b$ over A. Then
$$x^n+\mathrm{cl}_I(a_{n-1})x^{n-1}+\dots+\mathrm{cl}_I(a_0)=0$$
is an integral dependence relation for $x$ over A/I. □

Proposition (4.2.5). — Let B be a ring and let A be a subring of B such that B is integral over A.

a) Assume that A is a field. Then every regular element of B is invertible in B.

b) Assume that B is an integral domain. Then A is a field if and only if B is a field.

Proof. — a) Let us assume that A is a field, let $b\in B$ be a regular element and let $P=X^n+a_{n-1}X^{n-1}+\dots+a_0\in A[X]$ be a monic polynomial of minimal degree such that $P(b)=0$. Let $Q=X^{n-1}+a_{n-1}X^{n-2}+\dots+a_1$, so that $P=XQ+a_0$. One has $bQ(b)=-a_0$. By the minimality assumption on $n$, $Q(b)\ne0$. Since $b$ is regular, $a_0=-bQ(b)\ne0$. Since $a_0\in A$ and A is a field, this implies that $b$ is invertible, with inverse $b'=-Q(b)/a_0$.

b) Assume that A is a field. Since B is an integral domain, $B\ne0$; moreover, every non-zero element of B is regular, hence invertible by part a). This shows that B is a field.

Conversely, let us assume that B is a field. Since the unit elements of A and B coincide, $A\ne0$. Let $a$ be a non-zero element of A. Then $a$ is invertible in B, so that there exists $b\in B$ such that $ab=1$. Since B is integral over A, there exists a monic polynomial $P=X^n+a_{n-1}X^{n-1}+\dots+a_0$ with coefficients in A such that $P(b)=0$. In other words,
$$b^n+a_{n-1}b^{n-1}+\dots+a_0=0.$$
Multiplying this relation by $a^{n-1}$, we obtain
$$b=a^{n-1}b^n=-a_{n-1}-a_{n-2}a-\dots-a_0a^{n-1}.$$
In particular, $b\in A$, so that $a$ is invertible in A. This proves that A is a field. □

4.3. Algebraic extensions

Definition (4.3.1). — Let $A\to B$ be an extension of rings. One says that an element $b$ of B is algebraic over A if there exists a non-zero polynomial $P\in A[X]$ such that $P(b)=0$.

If every element of B is algebraic over A, one says that B is algebraic over A.

A non-trivial relation of the form $a_nb^n+a_{n-1}b^{n-1}+\dots+a_1b+a_0=0$, where $a_0,\dots,a_{n-1},a_n$ are elements of A, is called an algebraic dependence relation for $b$.

Remarks (4.3.2). — a) Let $b\in B$ be algebraic over A, and let $P=a_nX^n+\dots+a_0$ be a polynomial with coefficients in A such that $P(b)=0$. Then $a_nb$ is integral over A. Indeed, if we multiply the relation $P(b)=0$ by $a_n^{n-1}$, we obtain
$$(a_nb)^n+a_{n-1}(a_nb)^{n-1}+\dots+a_0a_n^{n-1}=0,$$
an integral dependence relation for $a_nb$.

b) Conversely, let $b\in B$ and let $a\in A$ be a non-nilpotent element such that $ab$ is integral over A. Then $b$ is algebraic over A. Indeed, an integral dependence relation
$$(ab)^n+c_{n-1}(ab)^{n-1}+\dots+c_0=0$$
for $ab$ gives rise to an algebraic dependence relation
$$a^nb^n+a^{n-1}c_{n-1}b^{n-1}+\dots+ac_1b+c_0=0$$
for $b$. Since $a$ is not nilpotent, $a^n\ne0$, which implies that this relation is non-trivial.

In particular, if A is a field, then an element $b\in B$ is algebraic over A if and only if it is integral over A.

c) The notion of being algebraic over A is not really interesting if A is not an integral domain or if some non-zero elements of A annihilate B. In fact, we shall mostly use it when A is a field.

Corollary (4.3.3). — Let K be a field and let A be a K-algebra which is an integral domain. The set of all elements of A which are algebraic over K is a field extension of K.

We call it the algebraic closure of K in A. Be cautious to distinguish this notion from that of an "absolute" algebraic closure of a field, which will be defined below.

Remark (4.3.4). — Let K be a field and let A be a K-algebra. Let $a\in A$ be an element which is algebraic over K. The set of all polynomials $P\in K[T]$ such that $P(a)=0$ is the kernel of the morphism from $K[T]$ to A given by $P\mapsto P(a)$. Consequently, the monic generator P of this ideal is the monic polynomial of minimal degree such that $P(a)=0$; one says that it is the minimal polynomial of $a$ over K.

Passing to the quotient, one gets an injective morphism $K[T]/(P)\to A$. Assume moreover that A is an integral domain. Then $K[T]/(P)$ is an integral domain as well, so that the polynomial P is irreducible.

Definition (4.3.5). — One says that a field extension $K\to L$ is finite if L is a finite dimensional K-vector space. We then write $[L:K]=\dim_K(L)$ and call this integer the degree of the extension.

Corollary (4.3.6). — Let K be a field, let A be a K-algebra which is an integral domain and let $a_1,\dots,a_n\in A$ be elements of A which are algebraic over K. Then the subalgebra $K[a_1,\dots,a_n]$ of A generated by these elements is a subfield of A, and a finite extension of K.

Proof. — Let us denote this algebra by B. It follows from the previous corollary that B is a field. Then, corollary 4.2.2 implies that it is a finite extension of K. □

Proposition (4.3.7). — Let $K\to L$ and $L\to M$ be two extensions of fields. Then the composed field extension $K\to M$ is finite if and only if both extensions $K\to L$ and $L\to M$ are finite; in that case, one has
$$[M:K]=[M:L]\,[L:K].$$

Proof. — Let $(y_i)_{i\in I}$ be a basis of M as an L-vector space and let $(x_j)_{j\in J}$ be a basis of L as a K-vector space. Let us show that $(x_jy_i)_{(i,j)\in I\times J}$ is a basis of M as a K-vector space.

Let indeed $m\in M$; one may write $m=\sum_{i\in I}\ell_iy_i$, for an almost-null family $(\ell_i)$ of elements of L. For every $i$, let $(k_{ij})_{j\in J}$ be an almost-null family of elements of K such that $\ell_i=\sum_{j\in J}k_{ij}x_j$. Then the family $(k_{ij})_{(i,j)\in I\times J}$ is almost-null and $m=\sum_{(i,j)\in I\times J}k_{ij}x_jy_i$. This shows that the family $(x_jy_i)$ generates M. On the other hand, let $(k_{ij})$ be a family of elements of K such that $\sum_{(i,j)}k_{ij}x_jy_i=0$. For every $i$, let us set $\ell_i=\sum_{j\in J}k_{ij}x_j\in L$. The family $(\ell_i)$ is almost-null and $\sum_{i\in I}\ell_iy_i=0$. Since $(y_i)$ is free over L, one has $\ell_i=0$ for every $i$. Since $(x_j)$ is free over K, one has $k_{ij}=0$ for every $i$ and every $j$. This shows that the family $(x_jy_i)_{(i,j)\in I\times J}$ is a basis of M over K.

It is finite if and only if I and J are both finite. Then,
$$[M:K]=\mathrm{Card}(I\times J)=\mathrm{Card}(I)\,\mathrm{Card}(J)=[M:L]\,[L:K].\qquad\square$$
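For instance (a standard illustration, not taken from the text): in the tower $\mathbf Q\subset\mathbf Q(\sqrt2)\subset\mathbf Q(\sqrt2,i)$, one has $[\mathbf Q(\sqrt2):\mathbf Q]=2$, since $X^2-2$ is the minimal polynomial of $\sqrt2$ over $\mathbf Q$, and $[\mathbf Q(\sqrt2,i):\mathbf Q(\sqrt2)]=2$, since $X^2+1$ has no root in the real field $\mathbf Q(\sqrt2)$. The proposition then gives $[\mathbf Q(\sqrt2,i):\mathbf Q]=2\cdot2=4$, and its proof exhibits the basis $(1,\sqrt2,i,i\sqrt2)$ of $\mathbf Q(\sqrt2,i)$ over $\mathbf Q$.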

Let K be a field. Recall that a polynomial P ∈ K[X] in one indeterminate X is said to be split if there are elements c, a_1, . . . , a_n ∈ K such that P = c(X − a_1) · · · (X − a_n).

Corollary (4.3.8). — Let K be a field. The following assertions are equivalent:

(i) The field K has no algebraic extension K → L of degree > 1;

(ii) Every irreducible polynomial of K[X] has degree 1;

(iii) Every non-constant polynomial with coefficients in K has a root in K;

(iv) Every non-constant polynomial of K[X] is split.

Proof. — (i)⇒(ii). Let P be an irreducible polynomial with coefficients in K. Then the extension K → K[X]/(P) is algebraic of degree deg(P). Consequently, deg(P) = 1.

(ii)⇒(iii). Let P be a non-constant polynomial with coefficients in K. Let Q be an irreducible factor of P; then there exist c and a in K such that Q = c(X − a); in particular, a is a root of P.

(iii)⇒(iv). Let P ∈ K[X] be a non-constant polynomial; let us prove by induction on deg(P) that P is split. Let a be a root of P in K and let Q be the quotient of the euclidean division of P by X − a, so that P = (X − a)Q. Since deg(Q) < deg(P), the polynomial Q is either constant or split, by induction. It follows that P is split.

(iv)⇒(i). Let K → L be an algebraic extension; let us prove that it has degree 1. We may replace K by its image in L and assume that K is a subfield of L. Let then a be an element of L. Its minimal polynomial M_a over K is irreducible in K[X]. By assumption, it is split. This implies that deg(M_a) = 1, hence a ∈ K. Consequently, L = K and [L : K] = 1. □

Definition (4.3.9). — A field K which satisfies the properties of corollary 4.3.8 is said to be algebraically closed.

Theorem (4.3.10) ("Fundamental theorem of Algebra"). — The field C of complex numbers is algebraically closed.

Contrary to its name, this is not a theorem in Algebra, but in Analysis. Indeed, every proof of this theorem needs to use, at some point, an analytic or topological property of the field R of real numbers. The short proof below is a variant of the Maximum Principle in Complex Analysis. A more algebraic proof is suggested in exercise 4/15.

Proof. — Let P ∈ C[X] be a non-constant polynomial; let us prove that P has a root in C. We may assume that P is monic. Let us write P(X) = X^n + a_{n−1}X^{n−1} + · · · + a_0, where a_0, . . . , a_{n−1} ∈ C, and let M = max(1, |a_0| + · · · + |a_{n−1}|). Observe that for any z ∈ C such that |z| > M, one has

|P(z)| = |z^n + a_{n−1}z^{n−1} + · · · + a_0| ≥ |z|^n − |a_{n−1}z^{n−1}| − · · · − |a_0| ≥ |z|^n − M|z|^{n−1} = |z|^{n−1}(|z| − M).

Let R = M + |P(0)|. Since R ≥ M ≥ 1, the preceding inequality shows that |P(z)| > |P(0)| if |z| > R.

The disk D of center 0 and radius R is compact, and the function z ↦ |P(z)| is continuous on D. Consequently, there exists a point a ∈ D where |P| achieves its infimum. Necessarily, |P(a)| ≤ |P(0)|, hence |a| < R.

Let us now consider the Taylor expansion of P at a, P(a + X) = c_0 + c_1X + · · · + c_nX^n. One thus has c_0 = P(a). Let us argue by contradiction and assume that c_0 ≠ 0. Since P is non-constant, there exists a least integer m > 0 such that c_m ≠ 0. Let u be an mth root of −c_0/c_m. Then, for any small enough real number t > 0, one has |a + ut| ≤ R and

P(a + ut) = c_0 + c_m u^m t^m + negligible terms = c_0(1 − t^m) + negligible terms.

In particular, |P(a + ut)| < |c_0| if t > 0 is small enough. We obtain a contradiction, since inf_{z∈D} |P(z)| = |P(a)| = |c_0|. Consequently, P(a) = 0 and P has a root in C, as was to be shown. □
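For explicit polynomials, the roots whose existence the theorem guarantees can be approximated numerically. The following sketch, assuming the numpy library, computes the three complex roots of X³ − 2 and checks that P is (numerically) zero at them; it is an illustration only, not a proof.

    # Illustrative numerical sketch (assumes numpy): the complex roots of X^3 - 2.
    import numpy as np

    coeffs = [1, 0, 0, -2]                      # X^3 - 2
    roots = np.roots(coeffs)                    # one real root 2**(1/3), two non-real conjugate roots
    print(roots)
    print(np.abs(np.polyval(coeffs, roots)))    # |P| at the roots: ~0 up to rounding error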

Definition (4.3.11). — Let K be a field. An algebraic closure of K is an algebraic extension K → Ω such that Ω is an algebraically closed field.

Knowing one example of an algebraically closed field allows one to construct other ones, as well as algebraic closures of some fields.

Proposition (4.3.12). — Let K be a field and let E be an extension of K. Assume that E is an algebraically closed field. Let Ω be the algebraic closure of K in E. Then Ω is algebraically closed and K → Ω is an algebraic closure of K.

Proof. — Since Ω is algebraic over K by construction, we just need to prove that Ω is an algebraically closed field. Let P be a non-constant polynomial in Ω[X] and let us prove that P has a root in Ω. By assumption, P has a root, say a, in E. Then a is algebraic over Ω; since Ω is algebraic over K, corollary 4.2.3 implies that a is algebraic over K. Consequently, a ∈ Ω. □

Example (4.3.13). — The set Q̄ of all complex numbers which are algebraic over Q (called algebraic numbers) is an algebraic closure of Q.

Theorem (4.3.14) (Steinitz). — Every field has an algebraic closure.

Lemma (4.3.15). — Let K be a field and let ℱ be a family of polynomials in K[X]. There exists an algebraic extension L of K such that every polynomial of ℱ is split in L.

Proof. — We first show that for any polynomial P ∈ K[X], there exists a finite extension K → L such that P is split in L. The proof goes by induction on deg(P). It holds if deg(P) = 0. Now assume that deg(P) ≥ 1 and let Q be an irreducible factor of P. Let L_1 be the field K[X]/(Q); this is a finite extension of K and Q has a root, say a_1, in L_1. Let P_1 be the quotient of the euclidean division of P by X − a_1 in L_1[X]. One has deg(P_1) = deg(P) − 1 < deg(P). By induction, there exists a finite extension L of L_1 such that P_1 is split in L. Then P = (X − a_1)P_1 is split in L.

If ℱ is finite, this particular case suffices to establish the lemma. Indeed, let P be the product of the members of ℱ and let K → L be an extension such that P is split in L. Any divisor of P, hence any element of ℱ, is then split in L, as was to be shown.

Let us now treat the general case. We may assume that every polynomial of ℱ is monic. For P ∈ ℱ, let us set n_P = deg(P) and let c_{P,1}, . . . , c_{P,n_P} be elements of K such that

P(T) = T^{n_P} + c_{P,1}T^{n_P−1} + · · · + c_{P,n_P}.

Let us introduce the polynomial algebra A = K[X] in the family X = (X_{P,j}) of indeterminates X_{P,j}, for P ∈ ℱ and j ∈ {1, . . . , n_P}. For every polynomial P ∈ ℱ, write P(T) − ∏_{j=1}^{n_P}(T − X_{P,j}) = ∑_{j=1}^{n_P} Q_{P,j}T^{n_P−j}, where Q_{P,1}, . . . , Q_{P,n_P} ∈ A. Let I be the ideal of A generated by the elements Q_{P,j}, where P runs among ℱ.

Let us show that I ≠ A. Otherwise, there would exist an almost-null family (S_{P,j}) in A such that

1 = ∑_{P∈ℱ} ∑_{j=1}^{n_P} S_{P,j}Q_{P,j}.

Let ℱ_1 be the set of all P ∈ ℱ such that S_{P,j} ≠ 0 for some j ∈ {1, . . . , n_P}. It is finite. By the first part of the proof, there exists an algebraic extension K → L_1 such that every polynomial P ∈ ℱ_1 is split in L_1. For P ∈ ℱ_1, let a_{P,1}, . . . , a_{P,n_P} be a family of elements of L_1 such that P(T) = ∏_{j=1}^{n_P}(T − a_{P,j}); for P ∉ ℱ_1, set a_{P,j} = 0 for all j. Let a = (a_{P,j}). By construction, Q_{P,j}(a) = 0 if P ∈ ℱ_1, while S_{P,j}(a) = 0 if P ∉ ℱ_1. If one evaluates the relation 1 = ∑ S_{P,j}Q_{P,j} at a, one thus obtains 1 = ∑ S_{P,j}(a)Q_{P,j}(a) = 0, a contradiction.

By Krull's theorem (theorem 2.1.3), there exists a maximal ideal M of A such that I ⊂ M. Let Ω = A/M. For every P ∈ ℱ, the polynomial P is split in Ω because the classes a_{P,1}, . . . , a_{P,n_P} of X_{P,1}, . . . , X_{P,n_P} satisfy Q_{P,j}(a_{P,1}, . . . , a_{P,n_P}) = 0, hence

P(T) = ∏_{j=1}^{n_P}(T − a_{P,j}).

Moreover, as a K-algebra, Ω is generated by the elements a_{P,j}, and those are algebraic over K. Consequently, Ω is algebraic over K. □

Proof of Steinitz's theorem. — Let Ω be an algebraic extension of K such that every polynomial of K[X] is split in Ω. Such an extension exists by the preceding lemma. Let us show that Ω is an algebraic closure of K. Since it is an algebraic extension of K, it suffices to prove that Ω is algebraically closed. Let Ω → L be an algebraic extension of Ω; let us show that it has degree 1. Replacing Ω by its image in L, we may assume that Ω is a subfield of L. Let x ∈ L; since it is algebraic over Ω and Ω is algebraic over K, the element x is algebraic over K. Let P ∈ K[X] be its minimal polynomial over K. One has P(x) = 0 and P is split in Ω, by construction of this extension: this implies that there exists a ∈ Ω such that x = a·1_L. Consequently, the morphism Ω → L is surjective and the extension Ω → L has degree 1, as was to be shown. □

Theorem (4.3.16). — Let K be a field, let i : K → L be an algebraic extension and let j : K → Ω be an algebraically closed extension of K.

The set Φ of morphisms of fields f : L → Ω such that f ∘ i = j is non-empty. If K → L is a finite extension, one has Card(Φ) ≤ [L : K].

Proof. — a) We first prove the theorem under the assumption that the extension K → L is finite. Let (a_1, . . . , a_r) be a finite family of elements of L such that L = K[a_1, . . . , a_r]. For r = 0, one has L = K and the result holds. Let us assume that r ≥ 1 and that the result is true for any extension generated by r − 1 elements. Set L_1 = K[a_1]; let P be the minimal polynomial of a_1 over K, so that [L_1 : K] = deg(P). Since L_1 ≃ K[X]/(P), the map f_1 ↦ f_1(a_1) induces a bijection from the set of K-morphisms f_1 : L_1 → Ω to the set of roots of P in Ω. Observe that P has at least one root in Ω (because Ω is algebraically closed), and at most deg(P) of them, so that there are at least one, and at most deg(P), such K-morphisms f_1 : L_1 → Ω. By induction, each of these morphisms can be extended to L, in at least one, and in at most [L : L_1], ways. This shows that 1 ≤ Card(Φ) ≤ [L_1 : K][L : L_1] = [L : K].

b) It remains to prove that Φ is non-empty in the case where the extension K → L is infinite. Let ℱ be the set of pairs (L′, f), where L′ is a subfield of L containing i(K) and f : L′ → Ω is a field morphism such that f ∘ i = j. We order the set ℱ by the relation (L′_1, f_1) ≺ (L′_2, f_2) if L′_1 ⊂ L′_2 and f_2|_{L′_1} = f_1. Let us prove that the set ℱ is inductive. Let then ((L′_α, f_α))_{α∈A} be a totally ordered family of elements of ℱ. If it is empty, then it is bounded above by the pair (i(K), f) ∈ ℱ, where f : i(K) → Ω is the unique morphism such that f ∘ i = j. Otherwise, let L′ be the union of the subfields L′_α; since the family (L′_α) is non-empty and totally ordered, L′ is a subfield of L that contains i(K). Moreover, there exists a unique map f : L′ → Ω such that f|_{L′_α} = f_α for every α ∈ A, and f is a morphism of fields such that f ∘ i = j.

By Zorn's theorem, ℱ has a maximal element, say (L′, f). We now show that L′ = L. Otherwise, let a be any element of L such that a ∉ L′. The extension L′ ⊂ L′[a] is finite. By part a), there exists a morphism f′ : L′[a] → Ω such that f′|_{L′} = f. Consequently, (L′[a], f′) ∈ ℱ and is strictly larger than (L′, f), a contradiction. □

Corollary (4.3.17). — Two algebraic closures of a field are isomorphic.

Proof. — Let i : K → Ω and j : K → Ω′ be two algebraic closures of a field K. Let f : Ω → Ω′ be any morphism of fields such that f ∘ i = j (theorem 4.3.16). The morphism f induces an isomorphism of fields from Ω to its image L in Ω′. In particular, L is an algebraically closed field, and it remains to prove that L = Ω′. Let thus a ∈ Ω′. Since Ω′ is algebraic over K, the element a is algebraic over K, and, in particular, it is algebraic over L; since L is algebraically closed, one has a ∈ L. This concludes the proof. □

4.4. Separability

4.4.1. — Let K be a field, let P ∈ K[X] be a non-constant polynomial and let a be a root of P in an extension L of K. Generalizing the definition from §1.3.19, the largest integer m such that (X − a)^m divides P in L[X] is called the multiplicity of a as a root of P.

If L′ is a further extension of L, then a is also a root of P in L′; let us remark that its multiplicity remains unchanged. Of course, if m is the multiplicity of a as a root of P in L, then (X − a)^m still divides P in L′[X], so that the multiplicity of a as a root of P in L′ is at least m. Conversely, let m be the multiplicity of a as a root of P in L′; one can write P = (X − a)^m Q, where Q ∈ L′[X]. Since the coefficients of P and (X − a)^m belong to the subfield L of L′, uniqueness of the euclidean division implies that the coefficients of Q belong to L; in particular, (X − a)^m divides P in L[X] and the multiplicity of a as a root of P in L is at least m.

We also recall from lemma 1.3.21 that a is a root of the kth derivative P^{(k)} of multiplicity at least m − k, for every integer k ∈ {0, . . . , m − 1}, with equality if the characteristic of K is zero, or if it is strictly larger than m (in particular, a is then not a root of P^{(m)}).

Definition (4.4.2). — Let K be a field. One says that a polynomial P ∈ K[X] is separable if it has no multiple root in any field extension of K.

Lemma (4.4.3). — Let K be a field and let P ∈ K[X] be a non-zero polynomial. The following properties are equivalent:

(i) The polynomial P is separable;

(ii) There exists a field extension L of K in which P is split and such that P has no multiple root in L;

(iii) The polynomials P and P′ are coprime in K[X].

Proof. — Let n = deg(P) and write P = a_nX^n + · · · + a_0 for some elements a_0, . . . , a_n ∈ K; in particular, a_n ≠ 0. Let D = gcd(P, P′).

The implication (i)⇒(ii) is obvious, because there are extensions of K in which P is split, for example an algebraic closure of K.

To prove (ii)⇒(iii), let us assume that P and P′ are not coprime, so that deg(D) ≥ 1. Let L be a field extension of K in which P is split. Since D divides P, the polynomial D is split in L as well, hence it has a root in L, say a. One has P(a) = P′(a) = 0, so that a is a multiple root of P in L, contradicting (ii).

Finally, let us prove (iii)⇒(i) by contraposition. Let a be a multiple root of P in some extension L of K. Then a is a root of both P and P′. Considering a Bézout relation D = UP + VP′, we obtain D(a) = 0, so that D is not constant. In particular, P and P′ are not coprime. This establishes the implication (iii)⇒(i). □
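Criterion (iii) is easy to test in examples. The following ad hoc sketch (plain Python, written only for this illustration; the helper functions are not part of the text) computes gcd(P, P′) over F_p and detects that X⁹ − X is separable over F₃ while X³ − 1 = (X − 1)³ is not.

    # Ad hoc separability test over F_p via gcd(P, P').  A polynomial is a list
    # of coefficients [c0, c1, ..., cn] modulo p, lowest degree first.
    p = 3

    def trim(f):                    # drop leading zero coefficients
        while f and f[-1] % p == 0:
            f = f[:-1]
        return f

    def deriv(f):
        return trim([(i * c) % p for i, c in enumerate(f)][1:])

    def polymod(f, g):              # remainder of f by g over F_p (g non-zero)
        f, g = trim(f[:]), trim(g)
        inv = pow(g[-1], p - 2, p)  # inverse of the leading coefficient of g
        while f and len(f) >= len(g):
            c, d = (f[-1] * inv) % p, len(f) - len(g)
            f = trim([(a - c * (g[i - d] if 0 <= i - d < len(g) else 0)) % p
                      for i, a in enumerate(f)])
        return f

    def polygcd(f, g):
        while trim(g):
            f, g = g, polymod(f, g)
        return trim(f)

    X9_minus_X = [0, -1] + [0] * 7 + [1]      # X^9 - X
    X3_minus_1 = [-1, 0, 0, 1]                # X^3 - 1 = (X - 1)^3 over F_3
    for P in (X9_minus_X, X3_minus_1):
        G = polygcd(P, deriv(P))
        print("separable" if len(G) == 1 else "not separable")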

Proposition (4.4.4). — Let K be a field.

a) Assume that the characteristic of K is zero. Then any irreducible polynomial in K[X] is separable.

b) Assume that the characteristic of K is a prime number p. An irreducible polynomial P in K[X] is not separable over K if and only if it is a polynomial in X^p, if and only if its derivative is 0.

c) Assume that the characteristic of K is a prime number p and that the Frobenius homomorphism of K is surjective. Then every irreducible polynomial in K[X] is separable.

Proof. — Let P ∈ K[X] be an irreducible polynomial which is not separable. By lemma 4.4.3, the polynomials P and P′ have a common factor. Since P is irreducible, P divides P′. Since deg(P′) < deg(P), one must have P′ = 0.

Let n = deg(P) and write P = a_nX^n + · · · + a_0, so that a_n ≠ 0. One has P′ = na_nX^{n−1} + · · · + 2a_2X + a_1. If K has characteristic 0, then na_n ≠ 0, so that deg(P′) = n − 1, which contradicts the equality P′ = 0. This proves a).

Assume now that K has characteristic p > 0. Since P′ = 0, one has ma_m = 0 for every m ∈ {0, . . . , n}. If m is prime to p, then m is invertible in K, hence a_m = 0. Consequently, P = ∑_m a_{pm}X^{pm} = Q(X^p), where Q(X) = ∑_m a_{pm}X^m. This proves b).

c) Let us moreover assume that the Frobenius homomorphism φ of K is surjective. Let us keep the letter φ for the Frobenius homomorphism of K[X]. For every m, let b_m ∈ K be such that b_m^p = a_{pm}. Then

P(X) = Q(X^p) = ∑_m b_m^p X^{pm} = ∑_m φ(b_mX^m) = φ(∑_m b_mX^m) = R(X)^p,

where R(X) = ∑_m b_mX^m. Consequently, R divides P, contradicting the hypothesis that P is irreducible. □

Definition (4.4.5). — Let p be a prime number and let K be a field of characteristic p. If the Frobenius morphism of K is surjective, one says that K is perfect.

Example (4.4.6). — a) A finite field is perfect. Indeed, the Frobenius homomorphism φ : F → F being injective (as is any morphism of fields), one has Card(φ(F)) = Card(F). Consequently, φ is surjective.

b) Let F be a field of characteristic p. The field K = F(T) is not perfect. Indeed, there is no rational function f ∈ K such that f^p = T. Suppose by contradiction that there is such an element f. The relation f^p = T shows that f is integral over the subring F[T] of K. Since F[T] is a unique factorization domain, it is integrally closed in its field of fractions (theorem 4.1.9), hence f ∈ F[T]. Then 1 = deg(T) = p deg(f), a contradiction.

The polynomial X^p − T ∈ F[X, T] is irreducible in F(X)[T] since it has degree 1 in T; since its content is 1, it is irreducible in F[X][T] = F[X, T]. Consequently, it is also irreducible in F(T)[X] = K[X]. However, if a is a root of X^p − T in an extension L of K, one has T = a^p, hence X^p − T = X^p − a^p = (X − a)^p, so that the polynomial X^p − T, viewed as an element of K[X], is inseparable.
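Part a) of the example can be checked directly for a small prime: the sketch below (plain Python, for illustration only) verifies that x ↦ x⁷ permutes F₇, so that the Frobenius homomorphism of F₇ is surjective; by Fermat's little theorem it is in fact the identity map on the prime field.

    # Illustrative check that x -> x^p permutes F_p (here p = 7), i.e. F_p is perfect.
    p = 7
    image = {pow(x, p, p) for x in range(p)}
    print(image == set(range(p)))   # True: the Frobenius homomorphism is surjective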

Definition (4.4.7). — Let K be a field and let L be an algebraic extension of K. One says that an element a ∈ L is separable over K if its minimal polynomial over K is separable.

One says that L is separable over K if every element of L is separable over K.

Proposition (4.4.8). — Let K be a field. If the characteristic of K is zero, or if K is a perfect field, then every algebraic extension of K is separable over K.

Proof. — Let L be an algebraic extension of K and let a ∈ L. Let P be the minimal polynomial of a over K; it is an irreducible polynomial in K[X]. By proposition 4.4.4, it is separable. By definition, this says that a is separable over K, hence the algebraic extension K → L is separable. □

Lemma (4.4.9). — Let K → L and L → E be two algebraic extensions of fields. If an element a ∈ E is separable over K, then it is separable over L. Consequently, if E is separable over K, then E is separable over L.

Proof. — It suffices to prove the first statement. Let a ∈ E be separable over K and let P ∈ K[X] be its minimal polynomial. Since P(a) = 0, the minimal polynomial of a over L divides P. In particular, its roots are simple in any extension of L, so that a is separable over L. □

Theorem (4.4.10). — Let i : K → L be a finite extension of fields and let j : K → Ω be an algebraically closed extension of K. Let Φ be the set of morphisms of fields f : L → Ω such that j = f ∘ i. The following conditions are equivalent:

(i) The field extension L is separable over K;

(ii) The field extension L is generated by elements which are separable over K;

(iii) Card(Φ) = [L : K].

Proof. — The implication (i)⇒(ii) is obvious.

Let us prove (iii)⇒(i). Let a ∈ L and let L′ = K[a] be the subextension it generates. We know that the K-morphisms L′ → Ω correspond bijectively to the roots R in Ω of the minimal polynomial P of a. Moreover, by theorem 4.3.16, a K-morphism f′ : L′ → Ω can be extended to a K-morphism f : L → Ω in at most [L : L′] ways. This implies that Card(Φ) ≤ Card(R)·[L : L′]. If a is not separable, then Card(R) < deg(P) = [L′ : K], hence Card(Φ) < [L′ : K][L : L′] = [L : K]. This establishes the implication (iii)⇒(i).

Finally, let us prove (ii)⇒(iii). Let (a_1, . . . , a_r) be a finite family of separable elements of L such that L = K[a_1, . . . , a_r], and let us prove by induction on r that Card(Φ) = [L : K]. This is obvious if r = 0, hence assume that r ≥ 1 and set L_1 = K[a_1]. The K-morphisms from L_1 to Ω are in bijection with the roots in Ω of the minimal polynomial P of a_1. By assumption, these roots are simple, so that there are exactly deg(P) = [L_1 : K] such morphisms. Since a_2, . . . , a_r are separable over L_1 (lemma 4.4.9) and L = L_1[a_2, . . . , a_r], the induction hypothesis shows that each of these K-morphisms L_1 → Ω can be extended in exactly [L : L_1] ways to a morphism L → Ω. This proves that Card(Φ) = [L : L_1][L_1 : K] = [L : K], as was to be shown. □

Corollary (4.4.11). — Let f : K → L and g : L → E be field extensions. Assume that L is separable over K.

If an element a ∈ E is separable over L, then it is separable over K. If E is separable over L, then E is separable over K.

Proof. — We first treat the case where these extensions are finite. Let i : K → Ω be an algebraic closure of K. Since L is separable over K, there are exactly [L : K] morphisms j from L to Ω such that j ∘ f = i. Since E is separable over L, for each of these morphisms j, there are exactly [E : L] morphisms k from E to Ω such that k ∘ g = j. In particular, there are [E : L][L : K] = [E : K] morphisms h from E to Ω such that h ∘ (g ∘ f) = i. Consequently, E is separable over K.

Let us now treat the general case. Let a ∈ E be separable over L and let P be the minimal polynomial of a over L; let L_1 ⊂ L be the extension of K generated by the coefficients of P, and let E_1 = L_1[a]. By assumption, the polynomial P is separable. By construction, the minimal polynomial of a over L_1 is equal to P, which implies that a is separable over L_1, hence E_1 is separable over L_1. Moreover, L_1 is separable over K, by assumption. By the case of finite extensions, the extension K → E_1 is separable. In particular, a is separable over K. □

Proposition (4.4.12) (Dirichlet). — Let G be a monoid, let Ω be a field and let Φ be a set of morphisms of monoids from G to the multiplicative monoid of Ω. Then the set Φ is a linearly independent subset of the Ω-vector space of maps from G to Ω.

Proof. — Let us argue by contradiction and let us consider a minimal subset of Φ, say {φ_1, . . . , φ_n}, which is linearly dependent over Ω. Let a_1, . . . , a_n ∈ Ω, not all zero, be such that

a_1φ_1(x) + · · · + a_nφ_n(x) = 0

for every x ∈ G. By minimality, a_i ≠ 0 for every i ∈ {1, . . . , n}, for otherwise the subset of Φ obtained by taking out φ_i would still be linearly dependent, contradicting the minimality assumption. Moreover, n ≥ 2 (otherwise, φ_1 would be zero).

Let y ∈ G and let us apply the relation to xy, for x ∈ G. Since φ(xy) = φ(x)φ(y) for every morphism φ and every x ∈ G, this gives

a_1φ_1(y)φ_1(x) + · · · + a_nφ_n(y)φ_n(x) = 0,

for every x ∈ G. Subtracting from this relation φ_n(y) times the first one, we obtain

a_1(φ_1(y) − φ_n(y))φ_1(x) + · · · + a_{n−1}(φ_{n−1}(y) − φ_n(y))φ_{n−1}(x) = 0.

This holds for every x ∈ G, hence we get a linear dependence relation

a_1(φ_1(y) − φ_n(y))φ_1 + · · · + a_{n−1}(φ_{n−1}(y) − φ_n(y))φ_{n−1} = 0

for φ_1, . . . , φ_{n−1}. By the minimality assumption, a_i(φ_i(y) − φ_n(y)) = 0 for every i, hence φ_i(y) = φ_n(y) since a_i ≠ 0. Since y is arbitrary, we obtain φ_i = φ_n, a contradiction. □

Remark (4.4.13). — Let K → L be a finite extension of fields and let Ω be an algebraically closed extension of K. Every K-linear morphism of fields σ : L → Ω induces a morphism of groups from L^× to Ω^×, hence a morphism of monoids from L^× to Ω. Consequently, the set Φ of all K-linear field morphisms f : L → Ω is a linearly independent subset of the Ω-vector space of maps from L to Ω. Observe that Φ is contained in the subspace of K-linear maps from L to Ω, which is an Ω-vector space of dimension [L : K]. Consequently, we obtain another proof of the inequality Card(Φ) ≤ [L : K] of theorem 4.3.16.

Theorem (4.4.14) (Primitive element theorem). — Let E be a field and let E → F be a finite separable extension. Then there exists a ∈ F such that F = E[a]. More precisely, if E is infinite and if F = E[a_1, . . . , a_d], there exist elements c_2, . . . , c_d ∈ E such that a = a_1 + c_2a_2 + · · · + c_da_d satisfies F = E[a].

Proof. — If E is a finite field, then F is a finite field too. By proposition 1.3.20, the multiplicative group F^× is a finite cyclic group. If a is a generator of this group, then E[a] contains F^×, as well as 0, so that E[a] = F.

We now assume that E is infinite. By induction on d, it suffices to prove the following result, which we state as an independent lemma. □

Lemma (4.4.15). — Let E → F be a finite extension of infinite fields. Assume that there are elements a, b ∈ F such that F = E[a, b]. If b is separable over E, then for all but finitely many elements c ∈ E, one has F = E[a + cb].

Proof. — Let P and Q be the minimal polynomials of a and b respectively; let p = deg(P) and q = deg(Q). Let Ω be an algebraic extension of F in which P and Q are split; let a_1, . . . , a_p be the roots of P in Ω and let b_1, . . . , b_q be the roots of Q in Ω, ordered so that a_1 = a and b_1 = b. By hypothesis, the elements b_1, . . . , b_q are pairwise distinct. Consequently, for any pair (i, j) of integers, where 1 ≤ i ≤ p and 2 ≤ j ≤ q, the equation a_i + cb_j = a_1 + cb_1 in the unknown c has at most one solution, namely c = −(a_i − a_1)/(b_j − b_1). Let c ∈ E be any element which is not equal to any of those, set t = a + cb and let us show that F = E[t].

The polynomial R(X) = P(t − cX) belongs to E[t][X] and b is a root of R since R(b) = P(t − cb) = P(a) = 0. For any j ∈ {2, . . . , q}, observe that t − cb_j = a_1 + cb_1 − cb_j ∉ {a_1, . . . , a_p}, hence P(t − cb_j) ≠ 0 and R(b_j) ≠ 0. Consequently, b = b_1 is the only common root of R and Q in Ω. Since Q is separable, all of its roots are simple, hence gcd(R, Q) = X − b. Since R and Q have their coefficients in E[t], it follows from remark 2.4.5 that X − b ∈ E[t][X]. In other words, b ∈ E[t]. Then a = t − cb ∈ E[t], so that E[a, b] ⊂ E[t] ⊂ E[a, b]. It follows that F = E[t]. □
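For instance, with E = Q, F = Q(√2, √3), a = √2, b = √3 and c = 1, the element √2 + √3 is primitive: its minimal polynomial over Q already has degree 4 = [F : Q]. The sketch below (assuming sympy is available) verifies this degree.

    # Illustrative sketch (assumes sympy): sqrt(2) + sqrt(3) generates Q(sqrt(2), sqrt(3)).
    from sympy import sqrt, Symbol, minimal_polynomial

    X = Symbol('X')
    print(minimal_polynomial(sqrt(2) + sqrt(3), X))   # X**4 - 10*X**2 + 1, of degree 4 = [F : Q]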

We end this section with a few remarks about inseparable extensions.

Lemma (4.4.16). — Let K be a field and let Ω be an algebraic closure of K. Let P be an irreducible monic polynomial in K[X] which has only one root, say a, in Ω.

a) If the characteristic of K is 0, then a ∈ K and P = X − a.

b) If the characteristic of K is a prime number p, there exists a smallest integer n ≥ 0 such that a^{p^n} ∈ K, and P = X^{p^n} − a^{p^n}.

Such an element a is said to be radicial over K.

Proof. — Let m be the multiplicity of a as a root of P; since a is the only root of P in Ω and P is split in Ω, one has P = (X − a)^m.

Assume that K has characteristic zero. Being irreducible, the polynomial P is then separable, so that its root a is simple. In other words, m = 1 and P = X − a.

Now assume that K has characteristic p. If P is separable, then P = X − a as in the characteristic 0 case. Otherwise, we proved that P is a polynomial in X^p. Let n be the largest integer such that P is a polynomial in X^{p^n}; it is the largest integer such that p^n divides m. We thus may write

P(X) = Q(X^{p^n}),    Q(X) = X^r + b_{r−1}X^{r−1} + · · · + b_0.

By assumption, Q is not a polynomial in X^p (otherwise, we could write Q(X) = R(X^p), hence P(X) = R(X^{p^{n+1}}), contradicting the maximality of n). Consequently, Q is separable. Moreover, b = a^{p^n} is the only root of Q in Ω. One thus has Q(X) = X − b, hence P = X^{p^n} − a^{p^n}.

It remains to prove that n is the smallest integer such that a^{p^n} ∈ K. Let d ≤ n be an integer such that a^{p^d} ∈ K. Then X^{p^d} − a^{p^d} divides P in K[X]; since P is irreducible, we conclude that p^d ≥ p^n, hence d = n. □

Remark (4.4.17). — Let K → L be a finite field extension and let L_sep ⊂ L be the subfield of L generated by all elements of L which are separable over K. By corollary 4.4.11, the extension K → L_sep is separable, and it is the largest subextension of L which is separable over K.

If the characteristic of K is zero, then L = L_sep, hence let us assume that the characteristic of K is a prime number p.

By corollary 4.4.11, every element of L ∖ L_sep is inseparable over L_sep. One says that L is a radicial extension of L_sep. The detailed analysis of such extensions is slightly more intricate than that of separable extensions. Let us just show here the following two facts:

a) Every element of L is radicial over L_sep.

Let a ∈ L ∖ L_sep and let P = X^m + c_{m−1}X^{m−1} + · · · + c_0 be its minimal polynomial over L_sep. Since a is not separable over K and L_sep is separable over K, the element a is not separable over L_sep. In particular, P is a polynomial in X^p. Let r be the greatest integer such that there exists Q ∈ L_sep[X] with P = Q(X^{p^r}); by construction, Q is the minimal polynomial of a^{p^r} over L_sep, and it is not a polynomial in X^p. In particular, a^{p^r} is separable over L_sep, hence separable over K (corollary 4.4.11), so that a^{p^r} ∈ L_sep. This implies that Q = X − a^{p^r}, hence P = X^{p^r} − a^{p^r} = (X − a)^{p^r}. This proves that a is radicial over L_sep.

b) There exists an integer n ≥ 0 such that [L : L_sep] = p^n.

Let us consider elements a_1, . . . , a_n of L ∖ L_sep such that L = L_sep[a_1, . . . , a_n] and let us prove by induction on n that [L : L_sep] is a power of p. This holds if n ≤ 1, by the analysis of a). Let E = L_sep[a_1, . . . , a_{n−1}], so that [E : L_sep] is a power of p by induction. To simplify notation, set a = a_n and let r be the integer furnished by a) for a, so that L = E[a] and the minimal polynomial of a over L_sep is P = X^{p^r} − a^{p^r} = (X − a)^{p^r}. Since the minimal polynomial of a over E divides P in E[X], it has only one root, namely a. By lemma 4.4.16, this implies that a is radicial over E; in particular, its degree over E is a power of p. Since [L : L_sep] = [L : E][E : L_sep], this implies that [L : L_sep] is a power of p.

4.5. Finite fields

A finite field is what it says: a field whose cardinality is finite.

Proposition (4.5.1). — The characteristic of a finite field is a prime number p; its cardinality is a power of p.

Proof. — Let F be a finite field. Since Z is infinite, the canonical morphism from Z to F is not injective. Its kernel is a non-zero prime ideal of Z, hence is generated by a prime number, say p, the characteristic of F. This morphism induces a field extension Z/pZ → F, so that F is a vector space over the finite field Z/pZ. Its dimension d must be a finite integer, so that Card(F) = p^d. □

Conversely, let q > 1 be a power of a prime number p. We are going to show that there exists a finite field of characteristic p and cardinality q, and that any two such fields are isomorphic.

Theorem (4.5.2). — Let p be a prime number, let f be an integer such that f ≥ 1 and let q = p^f. Let Ω be a field of characteristic p in which the polynomial X^q − X is split, for example an algebraically closed field of characteristic p. There exists a unique subfield of Ω which has cardinality q: it is the set F of all roots in Ω of the polynomial X^q − X. Moreover, any finite field of cardinality q is isomorphic to F.

Proof. — Let φ : Ω → Ω be the Frobenius homomorphism, defined by x ↦ x^p (see example 1.1.14). Let φ_q = φ^f be the fth power of φ, so that φ_q(x) = x^{p^f} = x^q for every x ∈ Ω. The set F of all x ∈ Ω such that φ_q(x) = x^q = x is then a subfield of Ω, as claimed. The derivative of the polynomial P = X^q − X is P′ = qX^{q−1} − 1 = −1, since Ω has characteristic p and p divides q. Consequently (see §1.3.19), P has no multiple root and Card(F) = q.

Let K be a finite field of cardinality q. The multiplicative group K^× of K has cardinality q − 1, hence x^{q−1} = 1 for every x ∈ K^×, by Lagrange's theorem. This implies that x^q = x for every x ∈ K^×, and also for x = 0. If K is a subfield of Ω, this shows that K ⊂ F, hence K = F since these two fields have the same number of elements.

In the general case, we already know that K is an extension of Z/pZ in which the polynomial P = X^q − X is split, and which is generated by the roots of P. Indeed, K consists only of roots of P! It follows from theorem 4.3.16 that there exists a morphism of fields i : K → Ω. Then i(K) is a finite field of cardinality q contained in Ω, hence i(K) = F and K ≃ F. □

Corollary (4.5.3). — Let F be a finite field of cardinality q and let d be an integer ≥ 1. There exists an irreducible monic polynomial P ∈ F[X] such that deg(P) = d. The F-algebra F[X]/(P) is a finite field of cardinality q^d.

Proof. — We may fix an extension Ω of Z/pZ in which the polynomial X^{q^d} − X is split. Since q − 1 divides q^d − 1, the polynomial X^{q−1} − 1 divides X^{q^d−1} − 1, so that X^q − X divides X^{q^d} − X and is also split in Ω. We may then assume that F ⊂ Ω.

Then Ω has a unique subfield K of cardinality q^d. Let x ∈ K^× be a generator of the finite cyclic group K^× (theorem 5.5.4). The subfield F[x] of K contains the multiplicative subgroup generated by x, which is K^×, so that F[x] = K. Let P be the minimal polynomial of x over F. This is an irreducible monic polynomial with coefficients in F such that P(x) = 0, and K = F[x] ≃ F[X]/(P). In particular, deg(P) = [K : F] = d. □

A defect of the preceding proof is that it does not give an effective way to get one's hands on a given finite field with q elements and characteristic p. However, if q = p^d, we know that it is possible to find, one way or another, an irreducible polynomial of degree d over Z/pZ. Such a polynomial will be a factor of X^q − X over Z/pZ.

Example (4.5.4). — Let us construct a field of cardinality 8. We need to find an irreducible polynomial of degree 3 with coefficients in Z/2Z. There are 8 = 2^3 monic polynomials of degree 3; four of them vanish at 0 and are not irreducible. The remaining four are X^3 + aX^2 + bX + 1 where (a, b) ∈ {0, 1}^2; their value at X = 1 is a + b. Thus exactly two monic polynomials of degree 3 with coefficients in Z/2Z have no root, namely X^3 + X + 1 and X^3 + X^2 + 1. They are irreducible; indeed, they would otherwise have a factor of degree 1, hence a root. Consequently, F_8 ≃ F_2[X]/(X^3 + X + 1).
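The following ad hoc sketch (plain Python, for illustration only) realizes F_8 as F_2[X]/(X^3 + X + 1), encoding an element a_0 + a_1X + a_2X^2 as the integer a_0 + 2a_1 + 4a_2, and checks two facts used in this section: every element satisfies x^8 = x, and every non-zero element has multiplicative order dividing 7.

    # Ad hoc model of F_8 = F_2[X]/(X^3 + X + 1).
    def mul(a, b):
        r = 0
        for i in range(3):                 # carry-less product of the two polynomials
            if (b >> i) & 1:
                r ^= a << i
        for i in (4, 3):                   # reduce modulo X^3 + X + 1 (i.e. X^3 = X + 1)
            if (r >> i) & 1:
                r ^= 0b1011 << (i - 3)     # subtract X^(i-3) * (X^3 + X + 1)
        return r

    def power(a, n):
        r = 1
        for _ in range(n):
            r = mul(r, a)
        return r

    print(all(power(x, 8) == x for x in range(8)))        # True: x^8 = x for all x in F_8
    print(all(power(x, 7) == 1 for x in range(1, 8)))     # True: F_8^x is a group of order 7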

4.6. Galois's theory of algebraic extensions

4.6.1. — Let K → L be an extension of fields. Let Aut(L/K) be the set of field automorphisms of L which are K-linear. In the particularly important case where K is a subfield of L and the morphism K → L is the inclusion, an automorphism σ of L belongs to Aut(L/K) if and only if its restriction to K is the identity.

Observe that Aut(L/K) is a group under composition.

Examples (4.6.2). — a) Let c : C → C be complex conjugation. Then Aut(C/R) = {id, c} ≃ Z/2Z.

It is clear that id and c are elements of Aut(C/R). Conversely, let σ be an automorphism of C such that σ(x) = x for every x ∈ R. Observe that σ(i)^2 = σ(i^2) = σ(−1) = −1, hence σ(i) ∈ {±i}. If z = x + iy ∈ C, with x, y ∈ R, then σ(z) = σ(x) + σ(i)σ(y) = x + σ(i)y. Consequently, σ = id if σ(i) = i and σ = c if σ(i) = −i.

b) Let q be a power of a prime number and let e be an integer ≥ 2. Let F be a field of cardinality q^e and let E be the subfield of F of cardinality q. Then Aut(F/E) is a cyclic group of order e, generated by the automorphism φ_q : x ↦ x^q.

Indeed, φ_q belongs to Aut(F/E). For 0 ≤ i ≤ e − 1, its power φ_q^i is the morphism x ↦ x^{q^i}. If 0 < i < e, the polynomial X^{q^i} − X has at most q^i roots in F, so that φ_q^i ≠ id. This implies that the elements id, φ_q, . . . , φ_q^{e−1} of Aut(F/E) are distinct. By the following lemma, they fill up all of Aut(F/E).

c) Let E be the subfield of C generated by ∛2. Then Aut(E/Q) = {id}.

Let α = ∛2, so that E = Q[α]; in particular, E is a subfield of R. Any automorphism σ of E is determined by σ(α). Moreover, σ(α)^3 = σ(α^3) = σ(2) = 2, so that σ(α) is a cube root of 2 contained in E. Observe that α is the only cube root of 2 which is a real number, hence the only cube root of 2 in E. Consequently, σ(α) = α and σ = id.
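A computational illustration of b): realizing F_9 as F_3[i] with i^2 = −1 (an ad hoc plain-Python encoding, for illustration only), the Frobenius map x ↦ x^3 acts as the "conjugation" a + bi ↦ a − bi and has order 2, so that Aut(F_9/F_3) = {id, φ_3} is cyclic of order 2 = [F_9 : F_3], in close analogy with Aut(C/R).

    # Ad hoc model of F_9 = F_3[i]: elements are pairs (a, b) meaning a + b*i, with i^2 = -1.
    def mul(x, y):
        (a, b), (c, d) = x, y
        return ((a * c - b * d) % 3, (a * d + b * c) % 3)

    def frobenius(x):                  # x -> x^3
        r = (1, 0)
        for _ in range(3):
            r = mul(r, x)
        return r

    elements = [(a, b) for a in range(3) for b in range(3)]
    # The Frobenius acts as a + b*i -> a - b*i, fixing exactly the prime field F_3.
    print(all(frobenius((a, b)) == (a, (-b) % 3) for a, b in elements))   # True
    # Its square is the identity, so {id, frobenius} has order 2 = [F_9 : F_3].
    print(all(frobenius(frobenius(x)) == x for x in elements))            # True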

Definition (4.6.3). — One says that a finite extension of fields K → L is Galois if Card(Aut(L/K)) = [L : K].

Note that the elements of Aut(L/K) coincide with the K-morphisms from L to itself, since the latter are automatically surjective when the extension K → L is finite. Moreover, theorem 4.3.16 (or remark 4.4.13) asserts that Card(Aut(L/K)) ≤ [L : K]. It also follows from theorem 4.4.10 that a Galois extension is separable.

When an extension K → L is Galois, its automorphism group is also denoted by Gal(L/K) and is called its Galois group.

The extension R ⊂ C, as well as the extensions of finite fields given in examples 4.6.2, are Galois extensions; the third example (Q ⊂ Q[∛2]) is not.

The next lemma furnishes another, fundamental, example of a Galois extension.

Lemma (4.6.4) (Artin). — Let L be a field and let G be a finite group of automorphisms of L. Let K = L^G be the set of all x ∈ L such that σ(x) = x for every σ ∈ G. Then K is a subfield of L, the extension K ⊂ L is finite and Galois, and Aut(L/K) = G.

Proof. — We leave it to the reader to prove that K is a subfield of L. Let us prove by contradiction that the extension K ⊂ L is finite and that [L : K] ≤ Card(G). Otherwise, let n be some integer > Card(G) and let a_1, . . . , a_n be elements of L which are linearly independent over K. Let us consider the system of linear equations with coefficients in L,

∑_{i=1}^{n} x_i σ(a_i) = 0    (σ ∈ G),

in the unknowns x_1, . . . , x_n. By assumption, it has more unknowns than equations, hence it has a non-zero solution (x_1, . . . , x_n) ∈ L^n. Let us consider one such solution for which the number of non-zero coefficients is minimal. Up to reordering a_1, . . . , a_n and x_1, . . . , x_n, we may assume that x_1, . . . , x_m are non-zero, but x_{m+1} = · · · = x_n = 0. By linearity, we may assume x_m = 1, so that

∑_{i=1}^{m−1} x_i σ(a_i) + σ(a_m) = 0    (σ ∈ G).

Let then τ ∈ G and apply τ to the preceding equality. One obtains

∑_{i=1}^{m−1} τ(x_i)(τ ∘ σ)(a_i) + (τ ∘ σ)(a_m) = 0.

Subtracting the relation corresponding to the element τ ∘ σ of G, one gets

∑_{i=1}^{m−1} (τ(x_i) − x_i)(τ ∘ σ)(a_i) = 0.

This holds for any σ ∈ G, and the elements τ ∘ σ run among all elements of G. Consequently, for any σ ∈ G,

∑_{i=1}^{m−1} (τ(x_i) − x_i)σ(a_i) = 0.

The n-tuple (y_1, . . . , y_n) defined by y_i = τ(x_i) − x_i for i < m and y_i = 0 for i ≥ m is a solution of the initial system with fewer than m non-zero entries. By the choice of m, one has y_i = 0 for all i. Consequently, τ(x_i) = x_i for every i. Since τ is arbitrary, we conclude that x_i ∈ K for every i. In particular, taking σ = id, ∑_{i=1}^{n} x_i a_i = 0, which contradicts the initial hypothesis that these elements were linearly independent over K.

Consequently, [L : K] ≤ Card(G). In particular, the extension K ⊂ L is finite, and algebraic. Moreover, every element of G is an element of Aut(L/K), so that Card(Aut(L/K)) ≥ Card(G) ≥ [L : K]. Since the opposite inequality Card(Aut(L/K)) ≤ [L : K] always holds by theorem 4.3.16, the extension is Galois and G = Aut(L/K). □

Let K ⊂ L be a Galois extension and let G = Gal(L/K). Let ℱ_L be the set of subfields of L containing K; let 𝒢_G be the set of subgroups of G. We order both sets by inclusion.

Theorem (4.6.5) (Fundamental theorem of Galois theory). — Let K ⊂ L be a finite Galois extension with Galois group G = Gal(L/K).

a) For every subgroup H ⊂ G, the set

L^H = {x ∈ L ; ∀σ ∈ H, σ(x) = x}

is a subfield of L containing K.

b) For every subfield E of L which contains K, the extension L/E is Galois.

c) The maps φ : H ↦ L^H from 𝒢_G to ℱ_L, and γ : E ↦ Gal(L/E) from ℱ_L to 𝒢_G, are decreasing bijections, inverse one of the other.

Proof. — Part a) follows from lemma 4.6.4.

Let us prove b). Elements of Aut(L/E) are automorphisms of L which restrict to the identity on E. In particular, Aut(L/E) ⊂ Gal(L/K). The inequality Card(Aut(L/E)) ≤ [L : E] holds by theorem 4.3.16 and we have to prove the opposite one. Let us consider the map ρ from Gal(L/K) to Hom_K(E, L) given by g ↦ g|_E. By construction, Aut(L/E) = ρ^{−1}(ρ(id_L)). Let Ω be an algebraic closure of L. It follows from theorem 4.3.16 that Card(Hom_K(E, L)) ≤ [E : K] and that the fibers of ρ have cardinality at most [L : E]. Since [E : K][L : E] = Card(Gal(L/K)), we see that Card(Hom_K(E, L)) = [E : K] and that all fibers of ρ have cardinality exactly [L : E]. In particular, the fiber Aut(L/E) of ρ over ρ(id_L) has cardinality [L : E], as required.

c) Parts a) and b) show that the maps φ and γ are well-defined. If H and H′ are two subgroups of G such that H ⊂ H′, then L^{H′} ⊂ L^H, so that φ(H′) ⊂ φ(H); hence φ is decreasing. Similarly, if E and E′ are two subfields of L containing K such that E ⊂ E′, then Gal(L/E′) is a subgroup of Gal(L/E), that is, γ(E′) ⊂ γ(E), and γ is decreasing.

We proved in lemma 4.6.4 that Gal(L/L^H) = H. In other words, γ ∘ φ = id. In particular, the map φ is injective. Let E be a subfield of L which contains K; let E′ = φ(γ(E)) = L^{Gal(L/E)}. By definition of Gal(L/E), one has E ⊂ E′. Moreover, we know by part b) that L/E is Galois, and by part a) that L/E′ is Galois as well, with Gal(L/E′) = Gal(L/E). Consequently, [L : E′] = [L : E]. This implies that [E′ : E] = 1, that is, E = E′. □

It remains to give an effective construction of Galois extensions.

Proposition (4.6.6). — Let K ⊂ L be a finite extension. Then the following properties are equivalent:

(i) The extension K ⊂ L is Galois;

(ii) The extension K ⊂ L is separable and every irreducible polynomial in K[X] which has a root in L is already split in L;

(iii) There exists an element a ∈ L such that L = K[a] and whose minimal polynomial over K is separable and split in L;

(iv) There exists a separable polynomial P ∈ K[X] which is split in L, with roots a_1, . . . , a_n ∈ L, such that L = K[a_1, . . . , a_n].

Proof. — (i)⇒(ii). We already proved that a Galois extension is separable. Let P ∈ K[X] be an irreducible polynomial and let a ∈ L be a root of P in L. Let G = Gal(L/K) and let Q = ∏_{g∈G}(X − g(a)). For every h ∈ G, the polynomial Q^h obtained by applying h to the coefficients of Q is given by

Q^h = ∏_{g∈G}(X − h(g(a))) = ∏_{g∈G}(X − g(a)) = Q,

since the map g ↦ hg is a bijection of G onto itself. Consequently, the coefficients of Q belong to L^G = K. Moreover, Q(a) = 0, hence P divides Q. Since Q is split in L, it follows that P is split in L as well.

(ii)⇒(iii). Since the extension K ⊂ L is separable, the primitive element theorem (theorem 4.4.14) implies that there exists a ∈ L such that L = K[a]. The minimal polynomial of a is separable and irreducible, hence is split in L.

(iii)⇒(iv). It suffices to take for P the minimal polynomial of a.

(iv)⇒(i). Let E_0 = K; for i ∈ {1, . . . , n}, let E_i = K[a_1, . . . , a_i] and let M_i be the minimal polynomial of a_i over E_{i−1}. It divides P, hence is separable and split in L. For every i, let Φ_i be the set of morphisms from E_i to L which restrict to the identity on K; let ρ_i : Φ_i → Φ_{i−1} be the restriction map, σ ↦ σ|_{E_{i−1}}.

Given a morphism f ∈ Φ_{i−1}, there are exactly deg(M_i) = [E_i : E_{i−1}] morphisms from E_i to L which extend f. Consequently, the fibers of the restriction map ρ_i all have cardinality [E_i : E_{i−1}]. Since Φ_0 has exactly one element, one has

Card(Φ_n) = [E_n : E_{n−1}] Card(Φ_{n−1}) = . . . = [E_n : E_{n−1}] · · · [E_1 : E_0] Card(Φ_0) = [L : K].

Moreover, any morphism f ∈ Φ_n is surjective, since [f(L) : K] = dim_K(f(L)) = dim_K(L) = [L : K]. Consequently, Aut(L/K) = Φ_n and the extension L/K is Galois. □

For future use, part of statement (ii) needs to be given a name:

Definition (4.6.7). — One says that a finite extension K ⊂ L is normal if

every irreducible polynomial in K[X] which has a root in L is already split in L.

Lemma (4.6.8). — Let K ⊂ L be a finite extension. The following properties are equivalent:

(i) The extension K ⊂ L is normal;

(ii) There exists a polynomial P ∈ K[X] which is split in L, with roots a_1, . . . , a_n, such that L = K[a_1, . . . , a_n];

(iii) For every extension Ω of L and any K-homomorphism σ : L → Ω, one has σ(L) = L.

Proof. — (i)⇒(ii). Assume that the extension K ⊂ L is normal and let a_1, . . . , a_n be elements of L such that L = K[a_1, . . . , a_n]. For every i, let P_i be the minimal polynomial of a_i over K. It is irreducible, hence split in L. It follows that the product P = P_1 · · · P_n is split in L and that L is generated by its roots.

(ii)⇒(iii). Let P ∈ K[X] be a polynomial which is split in L, with roots a_1, . . . , a_n, such that L = K[a_1, . . . , a_n]. Let σ : L → Ω be a K-homomorphism into an extension Ω of L. Since σ(P(a_i)) = P(σ(a_i)) = 0, σ(a_i) is a root of P, hence σ(a_i) ∈ {a_1, . . . , a_n}. In particular, σ(a_i) ∈ L. Since L = K[a_1, . . . , a_n], it follows that σ(L) ⊂ L. Since σ is injective and K-linear, one has [σ(L) : K] = [L : K], hence σ(L) = L.

(iii)⇒(i). Let Q ∈ K[X] be an irreducible polynomial which has a root b in L. We need to show that Q is split in L. Let Ω be an algebraic closure of L and let b′ ∈ Ω be a root of Q. Let φ : K[b] → Ω be the unique K-morphism such that φ(b) = b′. The morphism φ can be extended to a K-morphism σ from L to Ω. By assumption, σ(L) = L. In particular, b′ = σ(b) ∈ L. Since this holds for every root of Q in Ω and Q is split in Ω, the polynomial Q is split in L. □

Corollary (4.6.9). — Let K ⊂ L be a finite normal extension and let G = Aut(L/K). The extension K → L^G is radicial. In particular, if the characteristic of K is 0 or if K is perfect, then L^G = K.

Proof. — Let us prove that every element of L^G is radicial over K. Let a ∈ L^G and let P ∈ K[T] be its minimal polynomial. Let j : L → Ω be an algebraic closure of L. For every root b of P in Ω, there exists a unique K-morphism f : K[a] → Ω such that f(a) = b. Then f extends to a K-morphism φ : L → Ω and, by definition of a normal extension, the image of φ is equal to j(L), so that φ may be viewed as an element of Aut(L/K). By definition of L^G, one thus has a = φ(a) = b. This implies that a is the only root of P in Ω, and a is radicial over K.

If, moreover, the extension K → L is separable, then the minimal polynomial of a is separable and has a single root, hence has degree 1 and a ∈ K; so L^G = K in this case. □

Remark (4.6.10). — Let K ⊂ L be a finite extension of fields. There exists a finite extension L′ of L such that the composed extension K → L′ is normal. Indeed, let a_1, . . . , a_n be elements of L such that L = K[a_1, . . . , a_n]. For every i, let P_i be the minimal polynomial of a_i over K, and let P be the least common multiple of P_1, . . . , P_n. If Ω is any extension of L in which P is split (for example, an algebraic closure of L), the field L′ generated over K by the roots of P in Ω is normal over K, and it contains L.

Assume moreover that the extension K ⊂ L is separable. By the primitive element theorem, there exists a ∈ L such that L = K[a]. The minimal polynomial P of a is separable, and the field L′ generated by its roots in some extension Ω in which it is split is a Galois extension of K containing L. In fact, this field L′ is the smallest Galois extension of K contained in Ω such that L ⊂ L′. Indeed, every such extension must contain the roots of P, hence L′.

Proposition (4.6.11). — Let K ⊂ L be a finite Galois extension with group G = Gal(L/K). Let H be a subgroup of G and let N_G(H) = {σ ∈ Gal(L/K) ; σHσ^{−1} = H} be the normalizer of H in Gal(L/K).

a) For any σ ∈ Gal(L/K), one has σ(L^H) = L^{σHσ^{−1}}.

b) The restriction morphism N_G(H) → Aut(L^H/K) is surjective and its kernel is H. In particular, the extension K ⊂ L^H is Galois if and only if H is a normal subgroup of G. In that case, Gal(L^H/K) ≃ G/H.

Proof. — a) Let a ∈ L. That a belongs to L^H is equivalent to the relation h(a) = a for every h ∈ H. Consequently, an element b ∈ L belongs to σ(L^H) if and only if h(σ^{−1}(b)) = σ^{−1}(b) for every h ∈ H, hence if and only if (σhσ^{−1})(b) = b for every h ∈ H, hence if and only if b ∈ L^{σHσ^{−1}}.

b) By what precedes, any element σ of N_G(H) induces by restriction a K-morphism from L^H to itself. The restriction map σ ↦ σ|_{L^H} is thus a morphism of groups ρ from N_G(H) to Aut(L^H/K). The kernel of this morphism consists of those elements σ ∈ N_G(H) such that σ(a) = a for every a ∈ L^H, hence Ker(ρ) = N_G(H) ∩ Gal(L/L^H) = H.

Let us show that ρ is surjective. Let τ ∈ Aut(L^H/K). Let us view τ as a K-morphism from L^H to L and let us consider an extension τ_1 : L → Ω of τ, where Ω is an algebraically closed extension of L. Since the extension K ⊂ L is Galois, it is normal and one has τ_1(L) = L; in other words, τ_1 ∈ Aut(L/K) and τ = τ_1|_{L^H}. Since τ_1(L^H) = τ(L^H) = L^H, part a) and the injectivity of H ↦ L^H imply that τ_1Hτ_1^{−1} = H, that is, τ_1 ∈ N_G(H), and τ = ρ(τ_1).

Passing to the quotient by H, the morphism ρ induces an isomorphism N_G(H)/H ≃ Aut(L^H/K). In particular, the extension K ⊂ L^H is Galois if and only if [L^H : K] = Card(N_G(H))/Card(H). Since the extension L^H ⊂ L is Galois with group H, one has [L : L^H] = Card(H); moreover, [L : K] = Card(G), so that [L^H : K] = Card(G)/Card(H). Consequently, the extension K ⊂ L^H is Galois if and only if Card(N_G(H)) = Card(G), that is, if and only if G = N_G(H), which means that H is a normal subgroup of G. □

4.7. Norms and traces

Let A be a ring and let B be an A-algebra; let us assume that B is a free finitely generated A-module. An important case of this situation arises when A is a field and B is a finite extension of A.

Since A is commutative, all bases of B have the same cardinality, equal to n = rk_A(B), by definition of the rank of B. For any b ∈ B, multiplication by b gives rise to an endomorphism x ↦ bx of the A-module B; one defines the characteristic polynomial of b as the characteristic polynomial P_b of this endomorphism. This polynomial can be computed as det(X·I_n − M_b), where M_b ∈ M_n(A) is the matrix of multiplication by b, written in any basis of B (the result does not depend on this choice).

Let us write

P_b(X) = X^n − Tr_{B/A}(b)X^{n−1} + · · · + (−1)^n N_{B/A}(b).

Definition (4.7.1). — The elements Tr_{B/A}(b) and N_{B/A}(b) are called the trace and the norm of b with respect to A.

Note that Tr_{B/A}(b) and N_{B/A}(b) are the trace and the determinant of the matrix M_b ∈ M_n(A) of multiplication by b, written in any basis of B.

Remark (4.7.2). — Using notions that we will introduce later, an intrinsic definition is possible: consider the A[X]-module B[X] = A[X] ⊗_A B; it is free of rank n and multiplication by X − b in B[X] induces an endomorphism of the free rank-1 module Λ^n B[X], which is precisely multiplication by P_b(X). Moreover, Λ^n(B) is a free A-module of rank 1 and multiplication by b in B induces multiplication by N_{B/A}(b) on Λ^n(B).

Proposition (4.7.3). — a) One has Tr_{B/A}(1_B) = rk_A(B)·1_A and N_{B/A}(1_B) = 1_A;

b) For any a ∈ A and any b ∈ B, one has Tr_{B/A}(ab) = a Tr_{B/A}(b) and N_{B/A}(ab) = a^{rk_A(B)} N_{B/A}(b);

c) For any b, b′ ∈ B, Tr_{B/A}(b + b′) = Tr_{B/A}(b) + Tr_{B/A}(b′) and N_{B/A}(bb′) = N_{B/A}(b) N_{B/A}(b′);

d) An element b ∈ B is invertible if and only if its norm N_{B/A}(b) is invertible in A.

Proof. — Let us fix a basis of B as an A-module; let n = rk_A(B).

a) Multiplication by 1_B is the identity, hence its matrix is I_n. Consequently, Tr_{B/A}(1_B) = n·1_A and N_{B/A}(1_B) = 1_A.

b) If U is the matrix of multiplication by b, the matrix of multiplication by ab is aU. Then Tr_{B/A}(ab) = Tr(aU) = a Tr(U) = a Tr_{B/A}(b) and N_{B/A}(ab) = det(aU) = a^n det(U) = a^n N_{B/A}(b).

c) Let U′ be the matrix of multiplication by b′. Since (b + b′)x = bx + b′x for every x ∈ B, the matrix of multiplication by b + b′ is U + U′. Then

Tr_{B/A}(b + b′) = Tr(U + U′) = Tr(U) + Tr(U′) = Tr_{B/A}(b) + Tr_{B/A}(b′).

Moreover, the formula (bb′)x = b(b′x), for x ∈ B, implies that the matrix of multiplication by bb′ is UU′. Then

N_{B/A}(bb′) = det(UU′) = det(U) det(U′) = N_{B/A}(b) N_{B/A}(b′).

d) Assume that b is invertible, with inverse b′. Then bb′ = 1_B, hence multiplication by b is an isomorphism of B, whose inverse is multiplication by b′. Consequently, its determinant is invertible, that is to say, N_{B/A}(b) is invertible.

Conversely, if N_{B/A}(b) is invertible, then multiplication by b is invertible. In particular, there exists b′ ∈ B such that bb′ = 1_B and b is invertible. □
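Concretely, for B = Q(√2) over A = Q with basis {1, √2}, multiplication by b = x + y√2 has matrix [[x, 2y], [y, x]], so that Tr_{B/A}(b) = 2x and N_{B/A}(b) = x² − 2y². The plain-Python sketch below (an ad hoc representation, for illustration only) checks the additivity of the trace and the multiplicativity of the norm on this example.

    # Elements of Q(sqrt(2)) are stored as pairs (x, y) meaning x + y*sqrt(2).
    def mat(b):                     # matrix of multiplication by b in the basis {1, sqrt(2)}
        x, y = b
        return [[x, 2 * y], [y, x]]

    def trace(b):
        m = mat(b)
        return m[0][0] + m[1][1]    # equals 2x

    def norm(b):
        m = mat(b)
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]   # equals x^2 - 2*y^2

    def mul(b, c):
        (x, y), (u, v) = b, c
        return (x * u + 2 * y * v, x * v + y * u)

    b, c = (3, 1), (1, -2)          # 3 + sqrt(2) and 1 - 2*sqrt(2)
    print(trace(b), norm(b))                                            # 6 7
    print(norm(mul(b, c)) == norm(b) * norm(c))                         # True: norm is multiplicative
    print(trace((b[0] + c[0], b[1] + c[1])) == trace(b) + trace(c))     # True: trace is additive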

Proposition (4.7.4). — Let K → L be a finite separable extension, let Ω be an algebraic closure of L and let Σ be the set of K-morphisms from L to Ω. Then, for every x ∈ L, one has

Tr_{L/K}(x) = ∑_{σ∈Σ} σ(x)   and   N_{L/K}(x) = ∏_{σ∈Σ} σ(x).

Proof. — Let x ∈ L, let P ∈ K[X] be its minimal polynomial, and let x = x_1, . . . , x_d be its (pairwise distinct) roots in Ω. By assumption, x is separable, hence P(X) = (X − x_1) · · · (X − x_d). Let a_1, . . . , a_d ∈ K be defined by the relation

P(X) = X^d − a_1X^{d−1} + · · · + (−1)^d a_d,

so that a_1 = x_1 + · · · + x_d and a_d = x_1 · · · x_d. In the basis {1, x, . . . , x^{d−1}} of K[x], the matrix of multiplication by x is the companion matrix C_P of P: it has 1's on the subdiagonal, last column ((−1)^{d−1}a_d, . . . , −a_2, a_1), and 0's elsewhere. In particular, its trace is a_1 and its determinant is a_d. In other words,

Tr_{K[x]/K}(x) = ∑_{i=1}^{d} x_i   and   N_{K[x]/K}(x) = ∏_{i=1}^{d} x_i.

Let now {e_1, . . . , e_s} be a basis of L over K[x]. The family (e_i x^j), indexed by the pairs (i, j) of integers such that 1 ≤ i ≤ s and 0 ≤ j < d, is a basis of L as a K-vector space. When this basis is written in lexicographic order (e_1, e_1x, . . . , e_1x^{d−1}, e_2, . . . , e_sx^{d−1}), the matrix of multiplication by x is block-diagonal, with s blocks all equal to the companion matrix C_P. Consequently, Tr_{L/K}(x) = s·a_1 and N_{L/K}(x) = a_d^s.

Recall that for every i ∈ {1, . . . , d} there is exactly one K-morphism σ_i from K[x] to Ω such that x ↦ x_i, and that every K-morphism from K[x] to Ω is one of them. Moreover, since the extension K[x] ⊂ L is separable of degree s, each of these morphisms extends in exactly s ways to L. Therefore,

∑_{σ∈Σ} σ(x) = s ∑_{i=1}^{d} σ_i(x) = s Tr_{K[x]/K}(x) = s·a_1 = Tr_{L/K}(x)

and

∏_{σ∈Σ} σ(x) = (∏_{i=1}^{d} σ_i(x))^s = (N_{K[x]/K}(x))^s = a_d^s = N_{L/K}(x). □

Corollary (4.7.5). — If K → L is a Galois extension, then

N_{L/K}(x) = ∏_{σ∈Gal(L/K)} σ(x)   and   Tr_{L/K}(x) = ∑_{σ∈Gal(L/K)} σ(x)

for every x ∈ L.

Proof. — Let Ω be an algebraic closure of L. Since the extension K → L is Galois, it is normal, so that the K-morphisms from L to Ω coincide with the elements of Gal(L/K). Since a Galois extension is separable, the result follows from the preceding proposition. □

Corollary (4.7.6). — Let K ⊂ L be a finite extension of fields, let A be a subring of K and let B be its integral closure in L. The trace Tr_{L/K}(x) and the norm N_{L/K}(x) of any element x ∈ B are integral over A. In particular, if A is integrally closed in K, then Tr_{L/K}(x) and N_{L/K}(x) belong to A, for every x ∈ B.

Proof. — Let Ω be an algebraic closure of L, let n = [L : K] and let σ_1, . . . , σ_n : L → Ω be the n distinct K-morphisms from L to Ω. If i ∈ {1, . . . , n} and x ∈ B, then σ_i(x) is integral over A (it is a root of the same monic polynomial with coefficients in A as x), so that Tr_{L/K}(x) = ∑_{i=1}^{n} σ_i(x) is integral over A. Similarly, N_{L/K}(x) = ∏_{i=1}^{n} σ_i(x) is integral over A.

Moreover, Tr_{L/K}(x) and N_{L/K}(x) are elements of K. If A is integrally closed in K, this implies Tr_{L/K}(x) ∈ A and N_{L/K}(x) ∈ A. □

Theorem (4.7.7). — Let K ⊂ L be a finite separable extension. The map t : L × L → K defined by t(x, y) = Tr_{L/K}(xy) is a non-degenerate symmetric K-bilinear form.

Proof. — The map t is symmetric since the multiplication of L is commutative. The formulas

t(ax + a′x′, y) = Tr_{L/K}((ax + a′x′)y) = Tr_{L/K}(axy + a′x′y) = a Tr_{L/K}(xy) + a′ Tr_{L/K}(x′y) = a t(x, y) + a′ t(x′, y)

show that it is K-linear with respect to the first variable. By symmetry, it is also K-linear with respect to the second one.

For every y ∈ L, let λ(y) be the map defined by λ(y)(x) = t(x, y). Since t is K-linear with respect to the first variable, one has λ(y) ∈ Hom_K(L, K). Moreover, the map λ : L → Hom_K(L, K) is K-linear. We need to prove that λ is an isomorphism of K-vector spaces. Let d = [L : K], so that d = dim_K(L) = dim_K(Hom_K(L, K)). Consequently, it suffices to prove that λ is injective.

Let Ω be an algebraic closure of L and let σ_1, . . . , σ_d be the d distinct K-morphisms from L to Ω. For every x ∈ L, one has

Tr_{L/K}(x) = ∑_{j=1}^{d} σ_j(x).

Let us assume that y ∈ Ker(λ). Then, for every x ∈ L, Tr_{L/K}(xy) = 0, hence

0 = ∑_{j=1}^{d} σ_j(xy) = ∑_{j=1}^{d} σ_j(x)σ_j(y).

Since σ_1, . . . , σ_d are linearly independent over Ω as maps from L to Ω (proposition 4.4.12), this implies σ_j(y) = 0 for all j, hence y = 0. □
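For L = Q(√2) and the basis {1, √2}, the Gram matrix of the bilinear form t is [[Tr(1), Tr(√2)], [Tr(√2), Tr(2)]] = [[2, 0], [0, 4]]; its determinant 8 is non-zero, as the theorem predicts. A tiny plain-Python sketch, reusing the formula Tr(x + y√2) = 2x from the previous illustration:

    # Gram matrix of (x, y) -> Tr_{L/K}(x*y) for L = Q(sqrt(2)), basis {1, sqrt(2)}.
    def trace(x, y):                # trace of x + y*sqrt(2)
        return 2 * x

    def mul(b, c):
        (x, y), (u, v) = b, c
        return (x * u + 2 * y * v, x * v + y * u)

    basis = [(1, 0), (0, 1)]        # 1 and sqrt(2)
    gram = [[trace(*mul(b, c)) for c in basis] for b in basis]
    print(gram)                                                 # [[2, 0], [0, 4]]
    print(gram[0][0] * gram[1][1] - gram[0][1] * gram[1][0])    # determinant 8, non-zero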

Remark (4.7.8). — Let K → L be a finite extension which is not separable. Let us prove that Tr_{L/K}(x) = 0 for every x ∈ L (so that the bilinear form of theorem 4.7.7 is degenerate in this case). The characteristic of K is a prime number p. Let L_sep be the subfield of L generated by all elements of L which are separable over K. By remark 4.4.17, the degree [L : L_sep] is a power of p, and it is > 1, hence a multiple of p.

Let x ∈ L. If x ∈ L_sep, one has

Tr_{L/K}(x) = [L : L_sep] Tr_{L_sep/K}(x) = 0

since p divides [L : L_sep]. Otherwise, x ∈ L ∖ L_sep is inseparable over K; then its minimal polynomial is a polynomial in X^p, so has no term of degree one less than its degree, and Tr_{K[x]/K}(x) = 0. Then Tr_{L/K}(x) = [L : K[x]] Tr_{K[x]/K}(x) = 0.

4.8. Transcendence degree

Proposition (4.8.1). — Let K → L be a field extension and let (x_i)_{i∈I} be a family of elements of L. The following propositions are equivalent:

(i) For every non-zero polynomial P ∈ K[(X_i)_{i∈I}], one has P((x_i)) ≠ 0;

(ii) The canonical morphism of K-algebras from K[(X_i)_i] to L such that X_i ↦ x_i is injective;

(iii) There exists a K-morphism of fields K((X_i)_i) → L such that X_i ↦ x_i.

Proof. — Let φ : K[(X_i)_i] → L be the unique morphism of K-algebras such that φ(X_i) = x_i for all i ∈ I.

Condition (i) means that φ(P) ≠ 0 if P ≠ 0; it is thus equivalent to the equality Ker(φ) = 0 of (ii).

Assume (ii). Note that K[(X_i)_{i∈I}] is an integral domain, and that its field of fractions is K((X_i)_i). If φ is injective, then, since L is a field, it extends to a morphism of fields from K((X_i)_i) to L, hence (iii).

Finally, assume (iii). Let ψ : K((X_i)_i) → L be a K-morphism of fields such that ψ(X_i) = x_i for all i. The restriction of ψ to K[(X_i)_i] is the morphism of K-algebras which was denoted by φ. Since ψ is injective, φ is injective too, hence (ii). □

Definition (4.8.2). — If these conditions hold, one says that the family (G8)8∈Iis algebraically independent over K.

Note the analogy with the similar proposition 3.5.2 in linear algebra,

where we considered linear relations between elements of a module,

while we know study algebraic relations in a ring, and the theory of

matroids pushes this analogy farther than mere analogous language.

Based on this analogy, a possible definition of “basis” in this context

can be However, in the present context, “generating” families are not

families that generate the extension L, but those for which L is algebraic

over the field it generates.

Definition (4.8.3). — Let K→ L be a field extension. One says that a family

(G8)8∈I inL is a transcendence basis ofL overK if it is algebraically independent

over K and if L is algebraic over the subfield K((G8)).

By abuse of language, one says that a subset B of L is algebraically

independent (resp. a transcendence basis of L) over K if the family (1)1∈Bis algebraically independent (resp. a transcendence basis of L) over K.

Theorem (4.8.4). — Let K → L be a field extension.

a) Let B, C be subsets of L. Assume that L is algebraic over the subfield K(C) generated by C over K and that B is algebraically independent. Then B is a transcendence basis if and only if, for every c ∈ C \ B, the subset B ∪ {c} is not algebraically independent.

b) Let A and C be subsets of L such that A ⊂ C. One assumes that A is algebraically independent over K and that L is algebraic over the subfield K(C) generated by C over K. Then there exists a transcendence basis B of L such that A ⊂ B ⊂ C.

Proof. — a) Assume that B is a transcendence basis of L. Let c ∈ C \ B. By definition, L is algebraic over K(B); in particular, c is algebraic over K(B). Let P_1 ∈ K(B)[T] be a non-zero polynomial in one indeterminate T such that P_1(c) = 0. Write P_1 = a_n T^n + ... + a_0, where a_0, ..., a_n ∈ K(B). For j ∈ {0, ..., n}, there exist polynomials f_j, g_j ∈ K[(X_b)_{b∈B}] such that a_j = f_j((b))/g_j((b)). Multiplying P_1 by the product of the elements g_j((b)), we reduce to the case where g_j = 1 for every j. Let P = ∑_{j=0}^n f_j((X_b)) T^j; this is a non-zero polynomial in K[(X_b)_{b∈B}, T]. By construction, P((b), c) = 0. This proves that the family deduced from B by adjoining c is not algebraically independent. Since c ∉ B, B ∪ {c} is not algebraically independent.

Conversely, assume that for every c ∈ C \ B, B ∪ {c} is not algebraically independent. Let us prove that every element c ∈ C is algebraic over K(B). This is obvious if c ∈ B; otherwise, consider a non-zero polynomial P ∈ K[(X_b)_{b∈B}, T] such that P((b), c) = 0. Write P = f_n T^n + ... + f_0, where f_0, ..., f_n ∈ K[(X_b)_{b∈B}] and f_n ≠ 0. Since B is algebraically independent, one has f_n((b)) ≠ 0. Consequently, the polynomial P_1 = P((b), T) = f_n((b)) T^n + ... + f_0((b)) is a non-zero element of K(B)[T] and P_1(c) = 0. This proves that c is algebraic over K(B).

Then K(C) is algebraic over K(B) and, since L is algebraic over K(C), L is algebraic over K(B). This proves that B is a transcendence basis of L.

b) Let ℬ be the set of all subsets B of L such that A ⊂ B ⊂ C and such that B is algebraically independent over K.

Let us prove that the set ℬ is inductive when ordered by inclusion. It is non-empty, since A ∈ ℬ. Let then (B_i) be a non-empty totally ordered family of elements of ℬ and let B be its union. One has A ⊂ B because the family is non-empty, and B ⊂ C because every B_i is contained in C. Moreover, B is algebraically independent over K. Let indeed P be a polynomial in the indeterminates (X_b)_{b∈B} such that P((b)_{b∈B}) = 0. The polynomial P depends on only finitely many indeterminates, say X_{b_1}, ..., X_{b_n}, where b_1, ..., b_n ∈ B, and one has P(b_1, ..., b_n) = 0. Since the family (B_i) is totally ordered, there exists an index i such that {b_1, ..., b_n} ⊂ B_i. Since B_i is algebraically independent over K, one has P = 0.

By Zorn's lemma (theorem A.2.11), the set ℬ has a maximal element, say B. One has A ⊂ B ⊂ C, by construction, and, by definition of a maximal element, B ∪ {c} is not algebraically independent, for every c ∈ C \ B. By a), the set B is a transcendence basis of L. ∎

Applying the theorem to A = ∅ and C = L, we obtain:

Corollary (4.8.5). — Any field extension has a transcendence basis.

Theorem (4.8.6) (Exchange lemma). — Let K → L be a field extension, let B be a transcendence basis of L over K and let C be a subset of L such that L is algebraic over K(C). Then, for every β ∈ B \ C, there exists γ ∈ C \ B such that (B \ {β}) ∪ {γ} is a transcendence basis of L over K.

Proof. — Set A = B \ {β}. Since B = A ∪ {β} is algebraically independent, the element β is not algebraic over K(A) and, in particular, L is not algebraic over K(A). On the other hand, L is algebraic over K(C). Consequently, there must exist γ ∈ C such that γ is not algebraic over K(A). Let B′ = A ∪ {γ} and let us show that B′ is a transcendence basis of L over K.

I claim that there exists a non-zero polynomial P in the indeterminates (X_b)_{b∈B} and T such that P((b), γ) = 0. Indeed, since L is algebraic over K(B), there exists an irreducible polynomial R_1 ∈ K(B)[T] such that R_1(γ) = 0. Since B is algebraically independent over K, there exists a unique polynomial R ∈ K((X_b)_{b∈B})[T], whose coefficients are rational functions in the indeterminates X_b, such that R_1 = R((b), T). If we multiply R by a common denominator of its coefficients, we obtain a non-zero polynomial P ∈ K[(X_b)_{b∈B}, T] such that P((b), γ) = 0, as claimed.

Since γ is not algebraic over K(A), the set B′ = A ∪ {γ} is algebraically independent over K. We then write P = ∑_{j=0}^m P_j X_β^j as a polynomial in the indeterminate X_β, whose coefficients P_j are polynomials in the indeterminates X_b, for b ∈ A, and T, where m = deg_{X_β}(P), so that P_m ≠ 0. Since γ is not algebraic over K(A), one has P_m((b)_{b∈A}, γ) ≠ 0. Consequently, the relation

∑_{j=0}^m P_j((b)_{b∈A}, γ) β^j = P((b), γ) = 0

shows that β is algebraic over K(B′). Any other element of B belongs to A, hence to B′, so that K(B) is algebraic over K(B′). Since L is algebraic over K(B), L is algebraic over K(B′). This concludes the proof that B′ is a transcendence basis of L over K. ∎

Corollary (4.8.7). — Let K → L be an extension of fields. All transcendence bases of L over K have the same cardinality.

Proof. — Let B and B′ be two transcendence bases of L over K. According to the following lemma, there exists an injection φ from B to B′, as well as an injection φ′ from B′ to B. It then follows from the Cantor–Bernstein theorem that B and B′ are equipotent. ∎

Lemma (4.8.8). — Let K → L be an extension of fields, let A and B be subsets of L such that A is algebraically independent over K and L is algebraic over K(B). Then there exists an injection from A to B.

Proof. — Let ℱ be the set of all pairs (A′, φ) where A′ is a subset of A and φ : A′ → B is an injection such that (A \ A′) ∪ φ(A′) is algebraically independent over K. We order the set ℱ by saying that (A′_1, φ_1) ≺ (A′_2, φ_2) if A′_1 ⊂ A′_2 and φ_2(a) = φ_1(a) for every a ∈ A′_1. Let us prove that the set ℱ is inductive.

Let (A′_i, φ_i)_i be a totally ordered family of elements of ℱ. Let A′ be the union of the A′_i and let φ : A′ → B be the unique function such that φ(a) = φ_i(a) for any index i such that a ∈ A′_i. The map φ is injective. Let us indeed consider two elements a, a′ ∈ A′ such that φ(a) = φ(a′). Let i be any index such that a and a′ both belong to A′_i. Then φ_i(a) = φ(a) = φ(a′) = φ_i(a′). Since φ_i is injective, a = a′.

Let us show that (A \ A′) ∪ φ(A′) is algebraically independent. Let P be a polynomial in the indeterminates (X_a)_{a∈A\A′} and (Y_a)_{a∈A′} such that P((a)_{a∉A′}, (φ(a))_{a∈A′}) = 0 and let us prove that P = 0. Only finitely many indeterminates Y_a, for a ∈ A′, appear in P; since the family (A′_i) of subsets of A is totally ordered, all the corresponding elements a belong to some common set A′_i. The polynomial P can then be viewed as an algebraic dependence relation for (A \ A′) ∪ φ(A′_i). Since this set is contained in the algebraically independent set (A \ A′_i) ∪ φ_i(A′_i), the polynomial P is zero, as claimed.

The pair (A′, φ) is an element of ℱ such that (A′_i, φ_i) ≺ (A′, φ) for every i. This shows that ℱ is inductive.

By Zorn's theorem, the set ℱ has a maximal element, say (A′, φ). Let us prove that A′ = A. Otherwise, let B_0 be a transcendence basis of L over K which contains (A \ A′) ∪ φ(A′) and let α be any element of A \ A′. By the exchange lemma (theorem 4.8.6), there exists β ∈ B such that the family obtained by merging (A \ A′) \ {α}, φ(A′) and β is a transcendence basis of L over K. Let A′′ = A′ ∪ {α} and let φ′ : A′′ → B be the map which coincides with φ on A′ and such that φ′(α) = β. By construction, the family (φ′(a))_{a∈A′′} is algebraically independent; in particular, the map φ′ is injective. Moreover, (A \ A′′) ∪ φ′(A′′) is algebraically independent. This shows that (A′′, φ′) belongs to ℱ, which contradicts the hypothesis that (A′, φ) is a maximal element. ∎

Definition (4.8.9). — The cardinality of any transcendence basis of L over K is called the transcendence degree of L over K and denoted by tr deg_K(L).

Example (4.8.10). — For any integer n, the field K(X_1, ..., X_n) of rational functions in n indeterminates has transcendence degree n over K. Indeed, it follows directly from the definition that {X_1, ..., X_n} is a transcendence basis.

Example (4.8.11). — Let K be a field, let P ∈ K[X_1, ..., X_n] be an irreducible polynomial and let L be the field of fractions of the integral domain A = K[X_1, ..., X_n]/(P). Let us show that tr deg_K(L) = n − 1. For j ∈ {1, ..., n}, let x_j be the image of X_j in L.

Since P is irreducible, it is non-constant and its degree in at least one of the indeterminates is at least 1. For simplicity of notation, assume that m = deg_{X_n}(P) ≥ 1 and write P = f_m X_n^m + ... + f_0, where f_0, ..., f_m ∈ K[X_1, ..., X_{n−1}] and f_m ≠ 0.

Let us first prove that the family (x_1, ..., x_{n−1}) is algebraically independent. Let Q ∈ K[X_1, ..., X_{n−1}] be a non-zero polynomial such that Q(x_1, ..., x_{n−1}) = 0. This implies that Q belongs to the ideal (P) generated by P in K[X_1, ..., X_n]. Since deg_{X_n}(Q) = 0 < deg_{X_n}(P), we get a contradiction.

In particular, f_m(x_1, ..., x_{n−1}) ≠ 0. Then the expression

0 = P(x_1, ..., x_n) = f_m(x_1, ..., x_{n−1}) x_n^m + ... + f_0(x_1, ..., x_{n−1})

proves that x_n is algebraic over K(x_1, ..., x_{n−1}). Since L = K(x_1, ..., x_n), every element of L is then algebraic over K(x_1, ..., x_{n−1}).

This proves that (x_1, ..., x_{n−1}) is a transcendence basis of L; in particular, tr deg_K(L) = n − 1.
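As a concrete instance, added here for illustration: take n = 2, K = Q and P = X_1² + X_2² − 1, which is irreducible in Q[X_1, X_2]. In L = Frac(Q[X_1, X_2]/(P)), the image x_1 of X_1 is transcendental over Q, while the image x_2 satisfies

    x_2² = 1 − x_1²,

so that (x_1) is a transcendence basis of L over Q and tr deg_Q(L) = 1 = n − 1.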

Exercises

1-exo1151) Let A be a ring and let P ∈ A[T] be a monic polynomial of degree n.

a) Consider the quotient ring B = A[T]/(P). Prove that B is integral over A and that there exists b ∈ B such that P(b) = 0.

b) Prove that B is a free A-module of rank n. In particular, the natural morphism A → B is injective.

c) In the ring C = A[X_1, ..., X_n], let I be the ideal generated by the coefficients of the polynomial

P − ∏_{i=1}^n (T − X_i) ∈ C[T].

Prove that the ring C/I is a free A-module of rank n! = n(n − 1)⋯2·1. (This ring is called the splitting algebra of P.)

d) Assume that A is a field and let M be a maximal ideal of C. Prove that the field C/M is an extension of A in which the polynomial P is split and which is generated by the roots of P.

2-exo1116) Let B be a ring, let A be a subring of B and let x ∈ B be an invertible element of B. Let y ∈ A[x] ∩ A[x^{−1}].

a) Prove that there exists an integer n such that the A-submodule M = A + Ax + ... + Ax^n of B satisfies yM ⊂ M.

b) Prove that y is integral over A.

3-exo1173) Let A be an integral domain and let K be its field of fractions.

a) Let x ∈ K be integral over A. Prove that there exists a ∈ A \ {0} such that a x^n ∈ A for every n ∈ N.

b) Assume that A is a noetherian ring. Let a ∈ A \ {0} and x ∈ K be such that a x^n ∈ A for every n ∈ N. Prove that x is integral over A.


4-exo807) a) Let A be a commutative ring, let B be a commutative A-algebra and let U ∈ M_n(B) be a matrix. Let Δ = det(U). Let M = B^n be the free B-module, with canonical basis (e_1, ..., e_n). The matrix U is viewed as an endomorphism u of M. We assume that U is integral over A.

b) Let V ⊂ M be the submodule generated by the elements u^k(e_i), for i ∈ {1, ..., n} and k ∈ N. Prove that V is a finitely generated A-module, stable under u.

c) Construct a finitely generated submodule of Λ^n(M) that contains e_1 ∧ ... ∧ e_n and that is stable under Λ^n(u). Conclude that det(U) is integral over A.

d) Introducing the ring A[T] and the matrix T·I_n − U, prove that the coefficients of the characteristic polynomial of U are integral over A. In particular, Tr(U) is integral over A.

5-exo1231) Let f : A → B be a morphism of commutative rings.

a) Assume that f is integral. Prove that the continuous map f* : Spec(B) → Spec(A) induced by f is closed: the image of a closed subset is closed.

b) More generally, if f is integral, prove that for every A-algebra C, the continuous map (f_C)* : Spec(B ⊗_A C) → Spec(C) is closed.

c) Let us consider the particular case C = A[T]. Observe that the map f_C identifies with the map g : A[T] → B[T]. We assume that g* : Spec(B[T]) → Spec(A[T]) is closed.

d) Let b ∈ B and let Z = V(bT − 1) in Spec(B[T]). Let I be an ideal of A[T] such that g*(Z) = V(I). Prove that I + (T) = A[T]. (Consider a prime ideal P of A[T] that contains I + (T).)

e) Prove that b is integral over A.

6-exo845) Let A be a commutative ring and let I be an ideal such that every element of I is nilpotent. Let f ∈ A[T] be a polynomial and let a ∈ A be such that f(a) ∈ I and f′(a) is a unit. We define a sequence (a_n) of elements of A by the formula a_{n+1} = a_n − f(a_n) f′(a)^{−1}.

a) By induction on n, prove that, modulo I^n, a_n is the unique element of A such that a_n ≡ a (mod I) and f(a_n) ∈ I^n.

b) Prove that f(a_n) ∈ (f(a)^n) for all n.

c) Conclude that there exists a unique element b ∈ A such that f(b) = 0 and b ≡ a (mod I).

d) Let K be a field and let U ∈ M_n(K) be a matrix. Let P be the characteristic polynomial of U and let Q be the product of its irreducible factors. Assume that Q is separable (this is automatic if K is a perfect field). Prove the Chevalley–Dunford decomposition of U: there exists a unique pair (D, N) of commuting matrices such that D becomes diagonalizable over an algebraic closure of K, N is nilpotent and U = D + N. (Solve the equation Q(D) = 0 in K[U], with unknown D.)

7-exo1119) Let d ∈ Z be an integer which is not divisible by the square of any prime number and let K = Q[√d].

a) What is the minimal polynomial of an element x = a + b√d ∈ K, where a, b ∈ Q?

b) Prove that the integral closure of Z in K is Z[(1 + √d)/2] if d ≡ 1 (mod 4), and is Z[√d] otherwise.

8-exo1164) a) Let A be an integral domain and let t ∈ A be such that the ring A/tA has no nilpotent element except 0. Assume also that the ring of fractions A_t is integrally closed. Prove that A is integrally closed.

b) Prove that the ring A = C[X, Y, Z]/(XZ − Y(Y + 1)) is integrally closed. (Apply the first question with t = X.)

9-exo1060) Let B be a ring and let A be a subring. Assume that A is integrally closed in B.

a) Let P, Q be monic polynomials of B[T] such that PQ ∈ A[T]. Prove that P, Q ∈ A[T]. (Use exercise 4/1 to introduce a ring C containing B such that P = ∏_{i=1}^m (T − a_i) and Q = ∏_{i=1}^n (T − b_i), where a_1, ..., a_m, b_1, ..., b_n ∈ C.)

b) Prove that A[T] is integrally closed in B[T]. (If P ∈ B[T] is integral over A[T], consider the monic polynomials Q = T^m + P, for large enough integers m.)

10-exo1223) Let K → L be a finite algebraic extension and let Ω be an algebraic closure of K.

a) Prove that the algebra L ⊗_K Ω is reduced (0 is the only nilpotent element) if and only if the extension K → L is separable.

b) In this case, prove that for every field extension M of K, the tensor product L ⊗_K M is reduced.

11-exo802) Let E be a field and let P, Q ∈ E[T] be two irreducible polynomials. Let Ω be an algebraic closure of E and let x, y ∈ Ω be such that P(x) = Q(y) = 0. Let P_1, ..., P_r ∈ E[y][T] be pairwise distinct irreducible polynomials and m_1, ..., m_r be strictly positive integers such that P = ∏_{i=1}^r P_i^{m_i}. Similarly, let Q_1, ..., Q_s ∈ E[x][T] be pairwise distinct irreducible polynomials and n_1, ..., n_s be strictly positive integers such that Q = ∏_{j=1}^s Q_j^{n_j}.

a) Introducing the E-algebra R = E[X, Y]/(P(X), Q(Y)), construct isomorphisms of E-algebras

R ≃ E[x][Y]/(Q_1(Y)^{n_1}) ⊕ ... ⊕ E[x][Y]/(Q_s(Y)^{n_s}) ≃ E[y][X]/(P_1(X)^{m_1}) ⊕ ... ⊕ E[y][X]/(P_r(X)^{m_r}).

b) What are the idempotents of R? (If F is a field, m ≥ 1, and P ∈ F[T] is an irreducible polynomial, show that the only idempotents of F[T]/(P^m) are 0 and 1.)

c) Prove that r = s and that there exists a permutation σ ∈ S_r such that if j = σ(i), then E[x][Y]/(Q_j^{n_j}) and E[y][X]/(P_i^{m_i}) are isomorphic as E-algebras.

d) In particular, if j = σ(i), then n_j deg(Q_j) deg(P) = m_i deg(P_i) deg(Q).

12-exo811) Let p be a prime number; let E = F_p.

a) Let n ≥ 2 be an integer prime to p and let F be the extension of E generated by a primitive nth root of unity ζ. Prove that [F : E] is the order of p in (Z/nZ)^×.

b) Describe the action of Gal(F/E) on the set {ζ, ζ², ..., ζ^{n−1}}. Give a simple condition for this action to factor through the alternating group on n − 1 elements. (How does the Frobenius morphism φ act on this set? What is its signature?)

c) Assume in the sequel that p is odd. For a ∈ Z, one sets (a/p) = 0 if a is a multiple of p, (a/p) = 1 if a is a non-zero square modulo p, and (a/p) = −1 otherwise (Legendre symbol). Prove that (a/p) ≡ a^{(p−1)/2} (mod p).

d) Let

δ = ∏_{1≤i<j≤n−1} (φ(ζ^j) − φ(ζ^i)) / (ζ^j − ζ^i).

Prove that δ = ((−1)^{n(n−1)/2} n^{n−2})^{(p−1)/2} = ±1.

e) One assumes moreover that n = q is an odd prime number. Prove that (p/q)(q/p) = (−1)^{(p−1)(q−1)/4} (Gauss's quadratic reciprocity law).

13-exo1214) Let K be a field and let n ≥ 2 be an integer. Let E = K[ζ] be an extension of K generated by a primitive nth root of unity ζ. (In particular, the characteristic of K does not divide n.)

a) Prove that the set C of nth roots of unity in E is a cyclic group of order n, generated by ζ. Prove that E is a Galois extension of K.

b) Let σ ∈ Gal(E/K). Prove that there exists an integer d prime to n such that σ(ζ) = ζ^d. Prove that σ(x) = x^d for every x ∈ C.

c) Prove that there exists a group morphism φ : Gal(E/K) → (Z/nZ)^× such that φ(σ) is the class of d if σ(ζ) = ζ^d. Prove that φ is injective; in particular, Gal(E/K) is an abelian group.

14-exo1215) Let K be a field and let E = K(T) be the field of rational functions in one indeterminate T with coefficients in K.

a) Prove that there exists a unique K-automorphism τ (resp. σ) of E such that τ(T) = 1/T (resp. σ(T) = 1 − T).

b) Let G be the subgroup of Gal(E/K) generated by τ and σ. Prove that it is isomorphic to the symmetric group S_3.

c) Let F = E^G be the subfield of E consisting of the rational functions P ∈ K(T) such that τ(P) = σ(P) = P. Prove that F contains

f = (T² − T + 1)³ / (T²(T − 1)²).

d) Prove that F = K(f).

15-exo1206) In this exercise, we prove that the field C of complex numbers is algebraically closed. In fact, the proof will show a bit more. So let R be a field and let C = R(i) be the field obtained by adjoining an element i such that i² = −1. We make the following assumptions:

(i) The characteristic of R is zero;

(ii) Every polynomial of odd degree in R[T] has a root in R;

(iii) Every element of C is a square.

a) Prove that the field R of real numbers satisfies these assumptions.

b) Let E be a non-trivial finite Galois extension of R containing C. Let G = Gal(E/R) be its Galois group. Prove that the order of G is a power of 2. (Let P be a 2-Sylow subgroup of G. Prove that E^P is an extension of R of odd degree. Conclude that E^P = R.)

c) Let G_1 = Gal(E/C). Prove that G_1 = {1} by introducing a subgroup H of index 2 in G_1, and considering the quadratic extension E^H of C. Conclude that E = C.

d) Prove that C is algebraically closed.

16-exo824) Let F be a finite field. Let q = Card(F).

a) For every a ∈ F^× and b ∈ F, prove that there exists a unique automorphism σ of F(T) such that σ(T) = aT + b.

b) Prove that the automorphisms of this form constitute a finite group G of cardinality q(q − 1). Let A be the subgroup of G consisting of such automorphisms with b = 0, and B the subgroup of G consisting of such automorphisms with a = 1.

c) Show that F(T)^A = F(T^{q−1}) and that F(T)^B = F(T^q − T).

d) Describe the intermediate subfields between F(T)^A and F(T).

e) Compute the subfield F(T)^G. Are the extensions F(T)^G → F(T)^A and F(T)^G → F(T)^B Galois?

17-exo828) Let E be a field, let P ∈ E[T] be a monic polynomial, let n = deg(P), and let E → F be a splitting extension of P. Let R be the set of roots of P in F. Let G = Gal(F/E).

a) Prove that the action of G on F induces an action of G on R.

b) What are the orbits of this action? In particular, prove that this action is transitive if and only if P is irreducible.

c) Let a ∈ R, and let H be the set of σ ∈ G such that σ(a) = a. Prove that H is a subgroup of G. What is its index?

d) If g ∈ G, what is the fixed subfield of the subgroup gHg^{−1}? If P is irreducible, prove that ⋂_{g∈G} gHg^{−1} = {id_F}.

e) Let E → F be a finite Galois extension and let G = Gal(F/E). Prove that it is the splitting extension of an irreducible polynomial P ∈ E[T] of degree n if and only if there exists a subgroup H ⊂ G of index n such that ⋂_{g∈G} gHg^{−1} = {id_F}. What happens in the case where G is abelian?

18-exo1218) For n ≥ 1, let Φ_n ∈ C[X] be the unique monic polynomial whose roots are simple and equal to the primitive nth roots of unity in C.

a) Prove that ∏_{d|n} Φ_d = X^n − 1. Prove by induction that Φ_n ∈ Z[X] for all n.

b) If p is a prime number, compute Φ_p(X). Prove that there are integers a_1, ..., a_{p−1} such that Φ_p(1 + X) = X^{p−1} + p a_1 X^{p−2} + ... + p a_{p−1}, and a_{p−1} = 1. Using Eisenstein's criterion (see exercise 2/27), prove that Φ_p is irreducible in Q[X].

c) Let n ≥ 2. In the rest of the exercise, we will prove that Φ_n is irreducible. Let ζ ∈ C be a primitive nth root of unity and let P be its minimal polynomial. Prove that P ∈ Z[X] and that P divides Φ_n in Z[X]. Let p be a prime number that does not divide n. Prove that there exists b ∈ Z[ζ] such that P(ζ^p) = pb.

d) Prove that ζ^p is a primitive nth root of unity. If P(ζ^p) ≠ 0, prove by differentiating the polynomial X^n − 1 that n ζ^{p(n−1)} ∈ pZ[ζ]. Derive a contradiction, hence that P(ζ^p) = 0 for every prime number p which does not divide n.

e) Prove that Φ_n is irreducible in Q[X].

19-exo618) In this exercise, we give a proof of the following theorem of Wedderburn: Any finite division ring is a field. Let F be a finite division ring.

a) Show that any commutative subring of F is a field.

b) Let Z be the center of F. Show that Z is a subring of F. Let q be its cardinality. Show that there exists an integer n ≥ 1 such that Card F = q^n.

c) Let x ∈ F. Show that the set C_x of all a ∈ F such that ax = xa is a subring of F. Show that there exists an integer n_x dividing n such that Card C_x = q^{n_x}. (Observe that left multiplication by elements of C_x endows F with the structure of a C_x-vector space.)

d) Let x ∈ F^×; compute in terms of n_x the cardinality of the conjugacy class 𝒞(x) of x in F^× (by which we mean the set of all elements of F^× of the form a x a^{−1}, for a ∈ F^×).

e) If x ∉ Z, show that the cardinality of 𝒞(x) is a multiple of Φ_n(q). (We write Φ_n for the nth cyclotomic polynomial, defined in exercise 4/18.)

f) Using the class equation, show that Φ_n(q) divides q^n − q. Conclude that n = 1, hence that F is a field.

20-exo834) For every integer n ≥ 1, we denote by K_n the subfield of C generated by the nth roots of unity. We also write ε_n : Gal(K_n/Q) → (Z/nZ)^× for the morphism of groups characterized by σ(ω) = ω^{ε_n(σ)} for every σ ∈ Gal(K_n/Q) and every nth root of unity ω.

a) Let m and n be integers ≥ 1 and let d = gcd(m, n). Prove that K_m ∩ K_n = K_d.

b) Let p be a prime number and let e ≥ 1 be an integer. Compute the cyclotomic polynomial Φ_{p^e}. Apply the Eisenstein criterion to prove that it is irreducible in Q[T]. Conclude that [K_{p^e} : Q] = φ(p^e) = p^{e−1}(p − 1).

c) Prove that [K_n : Q] = φ(n) for every integer n ≥ 1. Prove that the cyclotomic polynomial Φ_n is irreducible in Q[T].

21-exo829) Let E → F be a finite extension of fields.

a) Assume that this extension is separable. Prove that the set of subfields of F that contain E is finite. (First treat the case where this extension is Galois.)

b) Let p be a prime number, let k be a field of characteristic p, let F = k(X, Y) and let E = k(X^p, Y^p). For every a ∈ k, let F_a = k(X^p, Y^p, X + aY). Prove that F_a is a subfield of F that contains E. Prove that F_a ∩ F_b = E if a ≠ b.


22-exo1219) Let K be an infinite field and let K → L be a finite Galois extension. We let d = [L : K] and write σ_1, ..., σ_d for the elements of Gal(L/K).

a) Let (e_1, ..., e_d) be a family of elements of L. Prove that the determinant of the matrix (σ_i(e_j))_{1≤i,j≤d} is non-zero if and only if (e_1, ..., e_d) is a basis of L as a K-vector space.

In the following, we fix such a basis (e_1, ..., e_d). Let P ∈ L[X_1, ..., X_d] be such that P(σ_1(x), ..., σ_d(x)) = 0 for every x ∈ L.

b) Let Q ∈ L[X_1, ..., X_d] be the polynomial

Q = P(∑_{i=1}^d σ_1(e_i) X_i, ..., ∑_{i=1}^d σ_d(e_i) X_i).

Prove that Q(x_1, ..., x_d) = 0 for every (x_1, ..., x_d) ∈ K^d. Prove that Q = 0.

c) Prove that P = 0 (algebraic independence of σ_1, ..., σ_d).

d) Prove that there exists α ∈ L such that (σ_1(α), ..., σ_d(α)) is a K-basis of L. (Such a basis is called a normal basis of L over K.)

23-exo1217) Let F → E be a Galois extension of fields, with Galois group G = Gal(E/F).

a) Let α ∈ E^× and let c : G → E^× be the map defined by c(σ) = α/σ(α) for every σ ∈ G. Prove that for every σ and τ in G, one has

c(στ) = c(σ) σ(c(τ)).

b) Conversely, let c : G → E^× be any map satisfying this relation. Using proposition 4.4.12, prove that there exists x ∈ E such that

α = ∑_{σ∈G} c(σ) σ(x) ≠ 0.

Conclude that one has c(σ) = α/σ(α) for every σ ∈ G.

c) Let χ : G → F^× be a group morphism. Prove that there exists α ∈ E^× such that χ(σ) = α/σ(α) for every σ ∈ G.

d) We assume that G is a cyclic group; let σ be a generator of G. Let x ∈ E; prove that N_{E/F}(x) = 1 if and only if there exists α ∈ E^× such that x = α/σ(α). Make explicit the particular cases where F → E is the extension R → C or an extension F_q → F_{q^n} of finite fields.

24-exo1220) Let K be a finite field, let p be its characteristic and let q be its cardinality.

a) For m ∈ N, compute S_m = ∑_{x∈K} x^m.

b) For P ∈ K[X_1, ..., X_n], we write S(P) = ∑_{x∈K^n} P(x). If deg P < n(q − 1), prove that S(P) = 0.

c) Let P_1, ..., P_r be polynomials in K[X_1, ..., X_n] such that ∑_{i=1}^r deg P_i < n. Let P = (1 − P_1^{q−1}) ⋯ (1 − P_r^{q−1}); prove that S(P) = 0. Let V be the set of all x ∈ K^n such that P_1(x) = ⋯ = P_r(x) = 0. Prove that Card(V) is divisible by p (Chevalley–Warning theorem).

d) Let P ∈ K[X_1, ..., X_n] be a homogeneous polynomial of degree d > 0. If d < n, prove that there exists x ∈ K^n \ {0} such that P(x) = 0.

25-exo624) This exercise is the basis of Berlekamp's algorithm to factor polynomials over finite fields.

Let p be a prime number and let P be a nonconstant separable polynomial with coefficients in the finite field F_p. Let P = c ∏_{i=1}^r P_i be the factorization of P into irreducible monic polynomials, with c ∈ F_p^×. Set n_i = deg P_i. Let R_P be the ring F_p[X]/(P).

a) Show that the ring R_{P_i} is a finite field with p^{n_i} elements.

b) For a ∈ R_P, let ρ_i(a) be the remainder of a Euclidean division of a by P_i. Show that the map a ↦ (ρ_1(a), ..., ρ_r(a)) induces an isomorphism of rings R_P ≃ ∏_{i=1}^r R_{P_i}.

c) For a ∈ R_P, let t(a) = a^p − a. Show that t is an F_p-linear endomorphism of R_P (viewed as an F_p-vector space) which corresponds, under the preceding isomorphism, to the map

∏_{i=1}^r F_{p^{n_i}} → ∏_{i=1}^r F_{p^{n_i}},  (a_1, ..., a_r) ↦ (a_1^p − a_1, ..., a_r^p − a_r).

d) Show that the kernel of t is a vector subspace of R_P, of dimension r.

e) Let a be any element of Ker(t). Show that there exists a monic polynomial Q ∈ F_p[X], of minimal degree, such that Q(a) = 0. Show that the polynomial Q is separable and split over F_p.

f) (continued) If a ∉ F_p, show that Q is not irreducible. From a nontrivial partial factorization Q = Q_1 Q_2, show how to get a nontrivial partial factorization of P.

26-exo1222) Let K ⊂ L and L ⊂ M be two finitely generated field extensions. Prove that

tr deg_K M = tr deg_K L + tr deg_L M.

27-exo1221) Let k be a field and let K ⊂ k(T) be a subfield containing k, but distinct from k.

a) Prove that the extension K → k(T) is finite. Let n = [k(T) : K] be its degree.

b) Prove that the minimal polynomial of T over K takes the form

f(X) = X^n + a_1 X^{n−1} + ... + a_n,

where a_1, ..., a_n ∈ K, and that there exists j ∈ {1, ..., n} with a_j ∉ k.

c) Let j be such an integer and let u = a_j; let g, h ∈ k[T] be two coprime polynomials such that u = g/h; let m = sup(deg g, deg h). Prove that m ≥ n and that there exists q ∈ K[X] such that g(X) − u h(X) = q(X) f(X).

d) Prove that there exist polynomials c_0, ..., c_n ∈ k[T] such that c_j/c_0 = a_j for all j and gcd(c_0, ..., c_n) = 1.

e) One sets f(X, T) = c_0(T) X^n + ... + c_n(T) = c_0(T) f(X). Prove that f(X, T) is irreducible in k[X, T].

f) Prove that there exists q ∈ k[X, T] such that

g(T) h(X) − g(X) h(T) = q(X, T) f(X, T).

Conclude that m = n and K = k(u) (Lüroth's theorem).


CHAPTER 5

MODULES OVER PRINCIPAL IDEAL RINGS

One result that greatly simplifies undergraduate linear algebra is that vector spaces over a field have a basis. This allows explicit computations in coordinates, as well as the representation of linear maps by matrices. Over a ring which is not a field, there exist modules which are not free, and the classification of modules over general rings is much more delicate, if not impossible.

In contrast, finitely generated modules over principal ideal rings have a nice concise description using invariant factors. Explaining this description and some of its applications is the main objective of this chapter.

There are several possibilities to establish this result, and the approach I have chosen combines algorithmic considerations related to matrix operations with the more abstract theory of Fitting ideals of chapter 3.

Matrix operations are the classical elementary modifications of matrices: permutation of rows, multiplication of a row by an invertible scalar, addition to a row of a multiple of another one, and the analogous operations on columns. Beyond their algorithmic quality, they furnish important results about matrix groups. Here again, the case of fields gives rise to the easier reduced echelon forms, and I present it first, as well as its consequences for classical linear algebra. I then pass to the Hermite and Smith normal forms for matrices with coefficients first in a euclidean ring, and then in a principal ideal domain, and explain the translation in terms of finitely generated modules over a principal ideal domain. I end this chapter by explaining the two classical particular cases: when the principal ideal domain is the ring of integers, we obtain a classification of finitely generated abelian groups; when it is the ring of polynomials in one indeterminate over a field, we obtain a classification (up to conjugacy) of endomorphisms of a finite dimensional vector space, and the Jordan decomposition.

5.1. Matrix operations

5.1.1. Elementary matrices. — Let A be a ring. The group GL_n(A) of n × n invertible matrices with entries in A contains some important elements.

We write (e_1, ..., e_n) for the canonical basis of A^n and (e_{i,j}) for the canonical basis of M_n(A); that is, e_{i,j} is the matrix all of whose entries are 0, except the one in row i and column j, which equals 1.

For i, j ∈ {1, ..., n}, i ≠ j, and a ∈ A, set E_{ij}(a) = I_n + a e_{i,j}, where, we recall, I_n is the identity matrix. We have the relation

E_{ij}(a) E_{ij}(b) = E_{ij}(a + b)

for any a, b ∈ A. Together with the obvious equality E_{ij}(0) = I_n, it implies that the matrices E_{i,j}(a) are invertible and that the map a ↦ E_{i,j}(a) is a morphism of groups from the additive group of A to the group GL_n(A). We say that these matrices E_{i,j}(a) (for i ≠ j and a ∈ A) are elementary and write E_n(A) for the subgroup they generate in GL_n(A).

For σ ∈ S_n, we let P_σ be the matrix of the linear map which sends, for every j, the vector e_j to the vector e_{σ(j)}. Explicitly, if P_σ = (p_{i,j}), one has p_{i,j} = 1 if i = σ(j) and p_{i,j} = 0 otherwise. For all permutations σ, τ ∈ S_n, one has P_{στ} = P_σ P_τ and P_{id} = I_n. Consequently, the map σ ↦ P_σ is an isomorphism of groups from S_n to a subgroup of GL_n(A) which we denote by W. The group W is called the Weyl group of GL_n; its elements are called permutation matrices.

Finally, for 1 ≤ j ≤ n and a ∈ A, let D_j(a) be the diagonal matrix I_n + (a − 1) e_{j,j}. The entries of D_j(a) off the diagonal are zero, those on the diagonal are 1, except for the entry in row j and column j, which equals a. For any a, b ∈ A, one has D_j(a) D_j(b) = D_j(ab) and D_j(1) = I_n; if a ∈ A^×, then D_j(a) belongs to GL_n(A).

Let GE_n(A) be the subgroup of GL_n(A) generated by the elementary matrices E_{i,j}(a), for i ≠ j and a ∈ A, the permutation matrices P_σ, for σ ∈ S_n, and the diagonal matrices D_j(a), for j ∈ {1, ..., n} and a ∈ A^×. One has E_n(A) ⊂ GE_n(A), and the inclusion is strict in general.
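As a quick illustration — not part of the original text — here is a small Python sketch, for A = Z, of the three kinds of generators of GE_n(A), together with a check of the relation E_{ij}(a)E_{ij}(b) = E_{ij}(a + b); the helper names are ad hoc.

    def identity(n):
        return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

    def matmul(M, N):
        n, p, q = len(M), len(N), len(N[0])
        return [[sum(M[i][k] * N[k][j] for k in range(p)) for j in range(q)] for i in range(n)]

    def E(n, i, j, a):
        # elementary matrix I_n + a*e_{i,j} (0-based indices, i != j)
        M = identity(n)
        M[i][j] = a
        return M

    def P(sigma):
        # permutation matrix sending e_j to e_{sigma(j)}; sigma is given as a list
        n = len(sigma)
        M = [[0] * n for _ in range(n)]
        for j in range(n):
            M[sigma[j]][j] = 1
        return M

    def D(n, j, a):
        # diagonal matrix I_n + (a-1)*e_{j,j}
        M = identity(n)
        M[j][j] = a
        return M

    # the map a -> E_{ij}(a) is a group morphism: E_{01}(2) E_{01}(3) = E_{01}(5)
    assert matmul(E(3, 0, 1, 2), E(3, 0, 1, 3)) == E(3, 0, 1, 5)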


5.1.2. Elementary operations. — Let M ∈ M_{n,p}(A) be a matrix with n rows and p columns with entries in A.

Multiplying M on the right by elementary matrices from M_p(A) corresponds to the classical operations on the columns of M. Indeed, the matrix M E_{i,j}(a) is obtained by adding to the jth column of M its ith column multiplied by a, an operation that we represent by writing C_j ← C_j + C_i a. The matrix M P_σ is obtained by permuting the columns of M: the jth column of M P_σ is the σ(j)th column of M. Finally, the matrix M D_j(a) is obtained by multiplying the jth column of M by a (we write C_j ← C_j a).

Similarly, multiplying M on the left by elementary matrices from M_n(A) amounts to performing classical operations on the rows of M. The matrix E_{i,j}(a) M is obtained by adding a times the jth row of M to its ith row (we write R_i ← R_i + a R_j); the ith row of M is the row of index σ(i) of the matrix P_σ M; the rows of D_i(a) M are those of M, the row of index i being multiplied by a on the left (in symbols, R_i ← a R_i).
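Continuing the illustrative sketch above (same ad hoc helpers), the dictionary between multiplication by elementary matrices and row or column operations can be checked on a sample matrix:

    M = [[1, 2, 3],
         [4, 5, 6]]

    # left multiplication by E_{0,1}(10) in M_2(A): R_0 <- R_0 + 10*R_1
    assert matmul(E(2, 0, 1, 10), M) == [[41, 52, 63], [4, 5, 6]]

    # right multiplication by E_{0,1}(10) in M_3(A): C_1 <- C_1 + C_0*10
    assert matmul(M, E(3, 0, 1, 10)) == [[1, 12, 3], [4, 45, 6]]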

5.1.3. Row and column equivalences. — One says that two matrices M, M′ ∈ M_{n,p}(A) are row equivalent if there exists a matrix P ∈ GE_n(A) such that M′ = PM. This means precisely that there exists a sequence of elementary row operations that transforms M into M′. Row equivalence is an equivalence relation; its equivalence classes are the orbits of the group GE_n(A), acting on M_{n,p}(A) by left multiplication.

One says that two matrices M, M′ ∈ M_{n,p}(A) are column equivalent if there exists a matrix Q ∈ GE_p(A) such that M′ = MQ. This amounts to saying that one can pass from M to M′ by a sequence of elementary column operations. Column equivalence is an equivalence relation; its equivalence classes are the orbits of the group GE_p(A) acting by right multiplication on M_{n,p}(A).

5.1.4. Reduced row echelon forms. — One says that a matrix M = (m_{i,j}) ∈ M_{n,p}(A) is in reduced row echelon form if there exist an integer r ∈ {1, ..., inf(n, p)} and integers j_1, ..., j_r such that the following conditions hold:

(i) One has 1 ≤ j_1 < j_2 < ... < j_r ≤ p;

(ii) For any i ∈ {1, ..., r} and any j such that 1 ≤ j < j_i, one has m_{i,j} = 0;

(iii) For any i ∈ {1, ..., r}, one has m_{i,j_i} = 1, and m_{k,j_i} = 0 for any other index k;

(iv) If i ∈ {r + 1, ..., n} and j ∈ {1, ..., p}, then m_{i,j} = 0.

The entries m_{i,j_i} = 1 are called the pivots of the matrix M, the integers j_1, ..., j_r are called the pivot column indices; the integer r is called the row rank of M. In this language, the above conditions thus say that the pivot column indices are strictly increasing (condition (i)), the first nonzero entry of each of the first r rows is a pivot (condition (ii)), all entries of a pivot column other than the pivot itself are 0 (condition (iii)), and the rows of index greater than r are zero (condition (iv)).
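For instance — an illustration added here, not part of the original text — the following 3 × 5 matrix with entries in A is in reduced row echelon form, with row rank r = 2 and pivot column indices j_1 = 2 and j_2 = 4 (the entries marked ∗ are arbitrary):

    ( 0 1 ∗ 0 ∗ )
    ( 0 0 0 1 ∗ )
    ( 0 0 0 0 0 )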

We mention the following important observation as a lemma.

Lemma (5.1.5). — Let M ∈ M_{n,p}(A) be a matrix in reduced row echelon form. For any integer k, the matrix deduced from M by taking its first k columns is still in reduced row echelon form.

5.1.6. Reduced column echelon form. — One says that a matrix M ∈ M_{n,p}(A) is in reduced column echelon form if its transpose matrix M^t is in reduced row echelon form. Explicitly, this means that there exist an integer s ∈ {1, ..., inf(n, p)} and integers i_1, ..., i_s such that the following conditions hold:

(i) One has 1 ≤ i_1 < i_2 < ... < i_s ≤ n;

(ii) For any j ∈ {1, ..., s} and any integer i such that 1 ≤ i < i_j, one has m_{i,j} = 0;

(iii) For any j such that 1 ≤ j ≤ s, one has m_{i_j,j} = 1 and m_{i_j,k} = 0 for any other index k ≠ j;

(iv) If j ∈ {s + 1, ..., p} and i ∈ {1, ..., n}, then m_{i,j} = 0.

In English: the first nonzero entry of each of the first s columns is equal to 1, called a pivot, the pivot row indices are increasing, all entries of a pivot row other than the pivot itself are 0, and the columns of index greater than s are zero. The integer s is called the column rank of M.


5.1.7. Application to the resolution of linear systems. — Let M ∈ M_{n,p}(A).

If M is in reduced row echelon form, the analysis of the linear system MX = 0, with unknown X ∈ A^p, is very easy. In fact, this system is explicitly solved, the variables corresponding to pivot column indices being expressed in terms of the other variables.

More generally, let us assume that there exists a matrix M′ in reduced row echelon form which is row-equivalent to M. Since each row operation can be reversed, it transforms the system MX = 0 into an equivalent one. Consequently, the systems MX = 0 and M′X = 0 are equivalent. Alternatively, there exists by assumption a matrix P ∈ GE_n(A) such that M′ = PM, and since GE_n(A) ⊂ GL_n(A), the condition MX = 0 is equivalent to the condition M′X = 0. The original system MX = 0 is now replaced by a system in reduced row echelon form.

This can also be applied to "inhomogeneous" systems of the form MX = Y, where Y ∈ A^n is given. Indeed, this system is equivalent to the system MX + yY = 0 in the unknown (X, y) ∈ A^{p+1}, to which we add the condition y = −1.

Let us assume that there exists a matrix [M′ Y′] ∈ M_{n,p+1}(A) in reduced row echelon form which is row equivalent to the matrix [M Y]. The system MX = Y is then equivalent to the system M′X = Y′. Two possibilities arise. If the last column of [M′ Y′] is not that of a pivot, the system M′X = Y′ is solved, the pivot variables being expressed in terms of the other variables and of the entries of Y′, and y = −1 is one of these other variables. Otherwise, when the last column of [M′ Y′] is that of a pivot, the system M′X = Y′ contains the equation 0 = y, which is inconsistent with the condition y = −1, so that the system M′X = Y′ has no solution. In that case, the system MX = Y has no solution either.

We can also reason directly on M. Let P ∈ GE_n(A) be such that M′ = PM is in reduced row echelon form, with row rank r and pivot column indices j_1, ..., j_r. The system MX = Y is equivalent to the system M′X = PY. The last n − r rows of this system are of the form 0 = y′_i, where (y′_1, ..., y′_n) are the entries of Y′ = PY. If we think of the entries (y_1, ..., y_n) of the vector Y as indeterminates, these n − r equations are linear conditions on the entries of Y which must be satisfied for the system MX = Y to have a solution. When they hold, the first r rows of the system M′X = Y′ express the pivot variables in terms of the other variables and of the entries of Y′ = PY.

This generalizes directly to conjunctions, that is, systems of the form MX = Y_1, ..., MX = Y_q. Assume that there exists a matrix [M′ Y′_1 ... Y′_q] in reduced row echelon form which is row equivalent to [M Y_1 ... Y_q]. If all of its pivot column indices are ≤ p, the system is solved; otherwise, it is inconsistent.

5.2. Application to linear algebra

Let K be a division ring. In this section, I want to show how to systematically apply row echelon forms to solve standard problems of linear algebra. In fact, we will even recover all the standard and fundamental results of linear algebra.

For any integer n, we consider K^n as a right vector space and we recall that any matrix M ∈ M_{n,p}(K) defines a K-linear map X ↦ MX from K^p to K^n.

We begin by proving that any matrix is row-equivalent to a unique matrix in reduced row echelon form.

′which is in reduced row echelon form and

is row-equivalent to M. Similarly, for any matrix M ∈ M=,?(K), there existsone matrix M

′, and only one, which is in reduced column echelon form and is

column-equivalent to M.

Proof. — By transposition, it is enough to show the assertion for row-

equivalence. The first part of the proofwill show the existence of amatrix

equivalent to M which is in reduced row echelon form by explaining

an explicit method which reduces any matrix to a matrix of this form.

Technically, we show by induction on : ∈ {0, . . . , ?} that we can perform

elementary row operations on M so that the matrix M: obtained by

extracting the first : columns of M can be put in reduced row echelon

form.

There is nothing to show for : = 0; so assume that : > 1 and that

the assertion holds for : − 1. Let A ∈ GE=(K) be such that AM:−1 is in

Page 245: (MOSTLY)COMMUTATIVE ALGEBRA

5.2. APPLICATION TO LINEAR ALGEBRA 233

reduced row echelon form, with row rank A and pivot column indices

91, . . . , 9A .

Let us then consider the matrix (<8 , 9) = AM: in M=,:(K). Its (: − 1)first columns are those of AM:−1.

If the entries <8 ,: for 8 > A in the last column of AM: are all zero,

then AM: is in reduced row echelon form, with row rank A and pivot

indices 91, . . . , 9A . So the assertion holds for : in this case.

Otherwise, let 8 be the smallest integer > A such that<8 ,: ≠ 0 and let us

swap the rows of indices 8 and A. This allows to assume that <A+1,: ≠ 0;

let us then divide the row of index A + 1 by <A+1,: ; we are now reduced

to the case where <A+1,: = 1. Now, if, for every 8 ∈ {1, . . . , =} such that

8 ≠ A + 1, we perform the operation R8 ← R8 − <8 ,:RA+1, we obtain a

matrix in reduced row echelon form whose row rank is A + 1 and pivot

column indices are 91, . . . , 9A , :. This proves the assertion for :.

By induction, the assertion holds for : = ?, so that any = × ?-matrix is

row equivalent to a matrix in reduced row echelon form.

To establish the uniqueness of a matrix in reduced row echelon form

which is row equivalent to our original matrix, we shall make use of the

interpretation through linear systems.

We want to prove that there exists at most one matrix in reduced row

echelon form which is row equivalent to M. To that aim, it suffices to

show that if M and M′are two matrices in reduced row echelon form

which are row equivalent, then M = M′.

As for the existence part of the proof, let us show by induction on

: ∈ {0, . . . , ?} that the matrices M: and M′:obtained by extracting the

: first columns from M and M′are equal. This holds for : = 0. So

assume that assumption for : ∈ {0, . . . , ? − 1} and let us show it for : + 1.

Let 91, . . . , 9A be the pivot column indices of M:. Write Y and Y′for the

(:+1)th columns of M and M′, so that M:+1 = [M:Y] and M

′:+1

= [M′:Y′].

We already know that M: = M′:, so it remains to show that Y = Y

′.

Since the matrices M:+1 and M′:+1

are row-equivalent, the systems

M:X = Y and M′:X = Y

′are equivalent. We now analyse these systems

making use of the fact that both matrices M:+1 and M′:+1

are in reduced

row echelon form.

Page 246: (MOSTLY)COMMUTATIVE ALGEBRA

234 CHAPTER 5. MODULES OVER PRINCIPAL IDEAL RINGS

The systems M:X = Y and M′:X = Y

′have the same solutions. If they

have one, say X, then Y = M:X = M′:X = Y

′since M: = M

′:by the

induction hypothesis. Otherwise, both systems are inconsistent, which

implies that : + 1 is a pivot column index both of M:+1 and of M′:+1

. By

definition of a matrix in reduced row echelon form, Y is the column

vector all of which entries are 0 except for the (A + 1) which equals 1, as

well as Y′, so that Y = Y

′.

We have thus established the induction hypothesis for : + 1. By

induction, M? = M′?, that is M = M

′, as was to be shown. �

Definition (5.2.2). — Let K be a division ring and let M ∈ M_{n,p}(K). The row rank of M and the pivot column indices of M are the row rank and the pivot column indices of the unique matrix in reduced row echelon form which is row equivalent to M. Similarly, the column rank and the pivot row indices of M are the column rank and the pivot row indices of the unique matrix in reduced column echelon form which is column equivalent to M.

Let K be a division ring and let M ∈ M_{n,p}(K). Let r be its row rank and j_1, ..., j_r be its pivot column indices.

The linear system MX = 0 has X = 0 as its unique solution if and only if all variables are pivot variables, that is, if and only if r = p and j_i = i for 1 ≤ i ≤ p. This implies in particular that n ≥ p. We have therefore proved that a linear system which has (strictly) more unknowns than equations has a nonzero solution. Equivalently, if p > n, a linear map from K^p to K^n is not injective.

Let now Y ∈ K^n and let [M′ Y′] be the matrix in reduced row echelon form which is row equivalent to [M Y]. Observe that the matrix M′ is in reduced row echelon form and is row equivalent to M. Consequently, the pivot column indices of [M Y] are either j_1, ..., j_r, or j_1, ..., j_r, p + 1. In the first case, the system MX = Y is equivalent to the solved system M′X = Y′, hence has a solution; in the second case, it has no solution.

Let us show how this implies that for p < n, a linear map from K^p to K^n is not surjective. Indeed, assuming p < n, we have p + 1 ≤ n and we may find vectors Y ∈ K^n such that the pivot column indices of the matrix [M Y] are j_1, ..., j_r, p + 1; it suffices to choose A ∈ GE_n(K) such that M′ = AM and set Y = A^{−1} Y′, where Y′ = (0, ..., 0, 1). In fact, one may find such vectors as soon as r < n. So what we have proved is: if the row rank of M is r < n, then the linear map from K^p to K^n is not surjective.

When n = p, the above discussion implies the following proposition.

Proposition (5.2.3). — Let K be a division ring. For any matrix M ∈ M_n(K), the following seven conditions are equivalent:

(i) The endomorphism of K^n defined by the matrix M is injective;

(ii) The row rank of M is equal to n;

(iii) The matrix M is row equivalent to the identity matrix I_n;

(iv) The matrix M belongs to GE_n(K);

(v) The endomorphism of K^n defined by M is bijective;

(vi) The endomorphism of K^n defined by M is surjective;

(vii) The matrix M belongs to GL_n(K).

In particular, GE_n(K) = GL_n(K): the linear group GL_n(K) is generated by the elementary matrices.

Proof. — Let M ∈ M_n(K). Assuming (i), we have seen that r = n, hence (ii). If r = n, the pivot column indices are j_i = i for i ∈ {1, ..., n}, so that M is row equivalent to I_n, and (iii) is proved. Observe that (iii) and (iv) are equivalent, for M is row-equivalent to I_n if and only if there exists P ∈ GE_n(K) such that M = P I_n = P. Let us suppose (iii); the row rank of M is equal to n and the pivot column indices of any matrix [M Y] can only be 1, ..., n, implying that any system of the form MX = Y has exactly one solution, and (v) holds. Of course, (v) implies both (i) and (vi), and is equivalent to (vii). Finally, if the endomorphism of K^n defined by M is surjective, we have seen that r ≥ n; since one always has r ≤ n, we get r = n, so that all the other properties hold. ∎
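In terms of the Python sketch given after proposition 5.2.1 (illustration only), condition (iii) can be observed directly: the reduced row echelon form of an invertible matrix is the identity matrix.

    R, pivots = rref([[2, 1], [1, 1]])
    print(pivots)  # [0, 1]
    print(R)       # the identity matrix I_2 (with entries represented as fractions)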

5.2.4. Computation of a basis. — Let K be a division ring, let Y_1, ..., Y_p be vectors of K^n (viewed as a right vector space) and let V be the subspace of K^n generated by Y_1, ..., Y_p. A standard problem in linear algebra consists in determining a basis of V, or a system of linear equations defining V.

Let M be the matrix [Y_1 ... Y_p]. Observe that the solutions of the system MX = 0 are exactly the coefficients of the linear dependence relations among the vectors Y_j. Imposing additional constraints of the form x_j = 0 for j in a subset J of {1, ..., p}, where X = (x_1, ..., x_p), amounts to searching for the linear dependence relations among the vectors Y_j, for j ∉ J. Let r be the row rank of M and let j_1, ..., j_r be its pivot column indices. I claim that the family (Y_{j_1}, ..., Y_{j_r}) is a basis of V. Let us first show that it is linearly independent. The system MX = 0 is solved, expressing x_{j_1}, ..., x_{j_r} as linear combinations of the other entries of X. If we search for relations among Y_{j_1}, ..., Y_{j_r}, this amounts to imposing x_j = 0 for j ∉ {j_1, ..., j_r}, which implies x_{j_1} = ... = x_{j_r} = 0. Let us then observe that every vector Y_k is a linear combination of Y_{j_1}, ..., Y_{j_r}. To prove that, we solve the system MX = 0, imposing moreover that x_j = 0 if j ∉ {j_1, ..., j_r, k} and x_k = −1. As above, the solution of the system furnishes values for x_{j_1}, ..., x_{j_r}.

Let Y be a vector (y_1, ..., y_n) whose entries are indeterminates, let M′ be in reduced row echelon form and row equivalent to M, let A ∈ GE_n(K) be such that M′ = AM and set Y′ = AY; the entries of Y′ are linear combinations of the y_i. The last n − r rows of M′ are zero, so that the conditions y′_i = 0, for r + 1 ≤ i ≤ n, are exactly the linear dependence relations among y_1, ..., y_n that are satisfied if and only if Y ∈ V. We have thus obtained a system of n − r linear equations that defines the subspace V in K^n. In practice, we obtain [M′ Y′] by applying the above algorithm for a reduced row echelon form to the matrix [M Y], but stopping this algorithm once we reach the last column.
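With the rref sketch given after proposition 5.2.1, the first part of this recipe reads as follows (again an illustrative fragment over Q): the pivot column indices select the columns of M that form a basis of V.

    # columns Y_1, ..., Y_4 of a matrix M in Q^3; V is their span
    Y = [[1, 2, 3, 1],
         [0, 1, 1, 1],
         [1, 3, 4, 2]]
    R, pivots = rref(Y)
    basis = [[Y[i][j] for i in range(len(Y))] for j in pivots]
    print(pivots)  # [0, 1]: (Y_1, Y_2) is a basis of V
    print(basis)   # [[1, 0, 1], [2, 1, 3]]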

5.2.5. Dimension of a vector subspace. — Let V be a vector subspace of K^n. Let (Y_1, Y_2, ..., Y_p) be a family of vectors in V chosen in such a way that each of them is linearly independent from the preceding ones. For each integer m, the vectors Y_1, ..., Y_m are linearly independent, and the row rank of the matrix [Y_1 ... Y_m] is equal to m, hence necessarily p ≤ n. This allows us to assume that p is as large as possible. Since (Y_1, ..., Y_p) cannot be extended without violating the linear independence condition, every vector of V is a linear combination of Y_1, ..., Y_p. Then (Y_1, ..., Y_p) is a basis of V.

The choice of such a basis (Y_1, ..., Y_p) of V identifies it with K^p. Since a linear map from K^p to K^r can be bijective only if p = r, this shows that all bases of V have the same cardinality, which we call the dimension of V.


5.2.6. The row rank equals the column rank. — It is also useful to put a matrix into reduced column echelon form. Let indeed M = [Y_1 ... Y_p] ∈ M_{n,p}(K) and let M′ = [Y′_1 ... Y′_p] ∈ M_{n,p}(K) be the unique matrix in reduced column echelon form which is column equivalent to M. In particular, the subspace V of K^n generated by the Y_i coincides with the subspace V′ generated by the Y′_i. So dim(V) = dim(V′). By definition, dim(V) = r, the row rank of M. On the other hand, if s is the column rank of M, we see by direct computation that Y′_1, ..., Y′_s are linearly independent, while Y′_{s+1} = ... = Y′_p = 0, so that (Y′_1, ..., Y′_s) is a basis of V′. (This is another way to obtain a basis of a subspace.) In particular, s = dim(V) = r: the row rank and the column rank of a matrix coincide.

Let also i_1, ..., i_s be the pivot row indices of M. For i ∈ {0, ..., n} and t ∈ {1, ..., s}, the intersection V ∩ ({0}^i × K^{n−i}) contains Y′_t if and only if i < i_t. More generally, a linear combination of Y′_1, ..., Y′_s belongs to {0}^i × K^{n−i} if and only if the coefficient of Y′_t is 0 whenever i_t ≤ i. This shows that dim(V ∩ ({0}^i × K^{n−i})) ≥ s − t + 1 if and only if i < i_t.

5.3. Hermite normal form

5.3.1. — Let A be a principal ideal domain. We choose, once and for all, a set ℛ(A) of representatives of (A \ {0})/A^× in A, that is, a privileged generator of each nonzero ideal. We also choose, for every nonzero a ∈ A, a set ℛ(A, a) of representatives of the quotient ring A/(a) in A.

Let us give three important examples:

a) Assume that A is a field. Then we take ℛ(A) = {1} and ℛ(A, a) = {0} for every a ∈ A \ {0}.

b) Assume that A = Z, so that Z^× = {±1}. We take for ℛ(Z) the set of positive integers; for each nonzero n ∈ Z, we set ℛ(Z, n) = {0, 1, ..., |n| − 1}.

c) Assume that K is a field and A = K[T]. One has A^× = K^×. We take ℛ(K[T]) to be the set of monic polynomials. Moreover, for every nonzero polynomial P ∈ K[T], we define ℛ(K[T], P) to be the set of polynomials Q such that deg(Q) < deg(P).
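In code, for A = Z, these choices read as follows (an illustrative fragment; the function names are ad hoc); the same normalizations appear in the sketch of the algorithm given after proposition 5.3.4 below.

    def rep_generator(a):
        # privileged generator of the ideal (a) in Z: the element of R(Z) associated with a
        return abs(a)

    def rep_residue(x, a):
        # representative of x modulo a in R(Z, a) = {0, 1, ..., |a|-1}, for a nonzero
        return x % abs(a)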


5.3.2. Row Hermite form. — Let M = (m_{i,j}) ∈ M_{n,p}(A). We say that the matrix M is in row Hermite form if there exist an integer r ∈ {1, ..., inf(n, p)} and integers j_1, ..., j_r such that the following conditions hold:

(i) One has 1 ≤ j_1 < j_2 < ... < j_r ≤ p;

(ii) For every i ∈ {1, ..., r} and any j such that 1 ≤ j < j_i, one has m_{i,j} = 0;

(iii) For every i ∈ {1, ..., r}, one has m_{i,j_i} ∈ ℛ(A), m_{k,j_i} ∈ ℛ(A, m_{i,j_i}) if k < i, and m_{k,j_i} = 0 if k > i;

(iv) If i ∈ {r + 1, ..., n} and j ∈ {1, ..., p}, then m_{i,j} = 0.

When A is a field (and the sets of representatives have been chosen as above), we recover the definition of a matrix in reduced row echelon form. The entries m_{i,j_i} are thus called the pivots of the matrix M; the integers j_1, ..., j_r are called the pivot column indices; the integer r is called the row rank of M.

Lemma (5.3.3). — Let M ∈ M_{n,p}(A) be a matrix in row Hermite form. For every integer k, the matrix deduced from M by taking its first k rows (resp. its first k columns) is still in row Hermite form.

Proposition (5.3.4). — Let A be a euclidean domain. For every matrix M ∈ M_{n,p}(A), there exists exactly one matrix M′ which is in row Hermite form and which is row-equivalent to M.

Proof. — As for the reduced row echelon form (proposition 5.2.1), the existence part of the proof is an algorithm describing a sequence of elementary operations that transform a matrix M into a matrix in row Hermite form. We show by induction on k ∈ {0, ..., p} that we can perform elementary row operations on M so that the matrix M_k obtained by extracting the first k columns of M is in row Hermite form. We fix a euclidean gauge δ on A.

There is nothing to prove for k = 0; so assume that k ≥ 1 and that the assertion holds for k − 1. Let P ∈ GE_n(A) be such that PM_{k−1} is in row Hermite form, with row rank r and pivot column indices j_1, ..., j_r. Let us then consider the matrix (m_{i,j}) = PM_k in M_{n,k}(A). Its first k − 1 columns are those of PM_{k−1}.

We will now perform row operations on the rows of index i > r, arguing by induction on the infimum of the δ(m_{i,k}), for i > r such that m_{i,k} ≠ 0.

If the entries m_{i,k}, for i > r, of the last column of PM_k are all zero, then PM_k is in row Hermite form, with row rank r and pivot column indices j_1, ..., j_r, hence the assertion holds for k.

Otherwise, let i be such that m_{i,k} has the smallest gauge among the nonzero entries m_{r+1,k}, ..., m_{n,k} and let us swap the rows with indices r + 1 and i, reducing to the case i = r + 1.

Then, for every integer i such that r + 1 < i ≤ n, we consider a euclidean division m_{i,k} = u m_{r+1,k} + v, where either v = 0 or δ(v) < δ(m_{r+1,k}), and perform the row operation R_i ← R_i − u R_{r+1}. The matrix we obtain is row equivalent to M_k, and its entries m_{r+2,k}, ..., m_{n,k} are either zero, or their gauges are strictly smaller than that of m_{r+1,k}. By induction, we may thus assume that m_{r+2,k} = ... = m_{n,k} = 0.

We then divide row r + 1 by the unique unit a ∈ A^× such that m_{r+1,k}/a ∈ ℛ(A); we are thus reduced to the case where m_{r+1,k} ∈ ℛ(A). Now, for every integer i such that 1 ≤ i ≤ r, we consider a congruence m_{i,k} = u m_{r+1,k} + v, where v ∈ ℛ(A, m_{r+1,k}), and perform the row operation R_i ← R_i − u R_{r+1}. The n × k matrix thus obtained is in row Hermite form, with row rank r + 1 and pivot column indices j_1, ..., j_r, k. This proves the assertion for k.

By induction, the assertion holds for k = p, so that any n × p matrix is row equivalent to a matrix in row Hermite form.

To establish the uniqueness assertion, we need to prove the following

assertion: let M,M′ ∈ M=,?(A) be two matrices in row Hermite form

which are row equivalent; then M = M′. We prove this assertion by

induction on ?, the case ? = 0 being void, so we assume now that ? > 1.

Possibly exchanging M and M′, we assume the number A of pivot

columns of M is greater or equal than that for M′. Deleting the last

= − A rows of M and M′, we then assume that all rows of M are nonzero,

and write 91, . . . , 9= for the pivot columns.

There are now two cases, depending on whether 9= = ? or 9= < ?.

First assume that 9= < ?. Then, we write M = [M1 E] and M′ = [M′

1E′],

where M1,M′1∈ M=,?−1(A) and E, E′ ∈ A

=. We observe that M1 and M

′1

Page 252: (MOSTLY)COMMUTATIVE ALGEBRA

240 CHAPTER 5. MODULES OVER PRINCIPAL IDEAL RINGS

are in row Hermite form, and are row equivalent. By the induction

hypothesis, one thus has M1 = M′1. Moreover, M1 has = pivots as well, so

that it has rank =, as a matrix with coefficients in the fraction field K of A.

Consequently, there exists � ∈ K?−1

such that M1� = E. Let 3 ∈ A {0}be a common denominator of the entries of �, and let G = 3�, so that

G ∈ A?−1

; then M1G = 3E. The matrices M = (M1 E) and M′ = (M1 E

′)being row equivalent, the systems M1X + HV = 0 and M1X + HV

′ = 0 (in

unknowns X ∈ A?−1

and H ∈ A) are equivalent so that M1G = 3E′. Since

3 ≠ 0, this implies E = E′ and M = M′.

Let us know treat the casewhere 9= = ?. Nowwewrite M =(

M1 E0 3

)and

M′ =

(M′1E′

D′ 3′

). Again, the matrices

(M1

0

)and

(M′1

D′

)are in row Hermite

form, and are row equivalent. By induction, they are equal, hence

M1 = M′1and D′ = 0. Moreover, M1 has = − 1 pivots, so that it has

rank = − 1 as a matrix with coefficients in the fraction field K of A. As

above, we may find G ∈ A?−1

and 0 ∈ A {0} such that M1G = 0E. Let

G be the image of G in (A/(03))?−1and let 0 be the class of 0 in A/(03),

so that � = (G,−0) is a solution of the system M� = 0 in the unknown

� ∈ (A/(03))?. By row equivalence, this system is equivalent to the

system M′� = 0, so that M1G = 0E

′in (A/(03))=−1

and 3′0 = 0 in A/(03).This last relation implies that 3 divides 3′; by symmetry, 3′ divides 3 aswell, so that 3 and 3′ are associates. By definition of the row Hermite

form, one has 3, 3′ ∈ �(A), hence 3 = 3′.The first relation implies that 0(E−E′) ≡ 0 in (A/(03))=−1

, so that E ≡ E′in (A/(3))=−1

. By definition of the row Hermite form again, this implies

that E = E′ and concludes the proof by induction. �

Theorem (5.3.5). — Let A be a euclidean ring and let M ∈ M=,?(A). Thereexist matrices P ∈ E=(A), Q ∈ E?(A) and a “diagonal matrix” D (meaning

38 9 = 0 for 8 ≠ 9) such that 38 ,8 divides 38+1,8+1 for any integer 8 satisfying

1 6 8 < inf(=, ?) so that M = PDQ. Moreover, the diagonal entries 31,1, . . .

of D are well defined up to units.

Proof. — Let uswrite � for the gauge ofA and let us show the existence of

such a decomposition M = PDQ by a double induction, first on sup(=, ?)and then on the minimum value of the gauge � at nonzero entries of A.

Page 253: (MOSTLY)COMMUTATIVE ALGEBRA

5.3. HERMITE NORMAL FORM 241

We first observe that each of the following 2 × 2 matrices can be

deduced from the preceding by some elementary row operation:(1 0

0 1

)R1←R1+R2−−−−−−−−→

(1 1

0 1

)R2←R2−R1−−−−−−−−→

(1 1

−1 0

)R1←R1+R2−−−−−−−−→

(0 1

−1 0

).

Consequently, the matrix

(0 1

−1 0

)belongs to E2(Z). Let 8 and 9 be distinct

integers in {1, . . . , =}. Performing similar operations on the rows of

indices 8 and 9, we see that there exists, for any transposition (8 , 9) ∈ S=,

an element of E=(A) which exchanges up to sign the 8th and the 9th

vector of the canonical basis of A=and leaves the other fixed.

Now, let (8 , 9) be the coordinates of some nonzero coefficient of M with

minimal gauge. By what precedes, we may apply elementary operations

first on rows 1 and 8, then on columns 1 and 9, of M so as to assume

that this coefficient is in position (1, 1). Then let <1: = <11@: + <′1:

by the euclidean division of <1: by <11. The column operation C: ←C: − C1@: transforms the matrix M into the matrix M

′ = M E1:(−@:)whose entry <1: is now <′

1:. If <′

1:≠ 0, we have �(<′

1:) < 0 and we

conclude by induction, since the minimal value of the gauge function at

nonzero entries of M has decreased. So we may assume that on the first

row, only the first entry is nonzero.

By similar operations on rows, we may also assume by induction that

all entries of the first column, except the first one, are zero.

Assume that some entry <8 , 9 of M, with 8 > 1 and 9 > 1 is not divisible

by <11. Then, let us perform the row operation R1 ← R1 + R8 (which

amounts to left multiply M by E18(1)); this transforms the first row of M

into the row (<1,1, <8 ,2, . . . , <8 ,?). Let <1, 9 = <1,1@ + A be a euclidean

division of<1, 9 by<1,1. The column operation C9 ← C9 −C1@ transforms

the matrix M into a new matrix whose (1, 9) entry is A. By hypothesis,

A ≠ 0 and �(A) < �(<1,1). By induction, this matrix may transformed

into a matrix of the form

©­­­­«<11 0 . . . 0

0

... <11M′

0

ª®®®®¬,

Page 254: (MOSTLY)COMMUTATIVE ALGEBRA

242 CHAPTER 5. MODULES OVER PRINCIPAL IDEAL RINGS

where M′ ∈ M=−1,?−1(A). By induction, there exist matrices P

′ ∈ E=−1(A),Q′ ∈ E?−1(A) and a diagonal matrix D

′ ∈ M=−1,?−1(A), each of whose

diagonal coefficient divides the next one, such that M′ = P

′D′Q′. Let us

then define the following block matrices:

P =

(1 0

0 P′

), D = <11

(1 0

0 D′

), Q =

(1 0

0 Q′

).

We have P1MQ1 = PDQ, hence M = (P1)−1PDQ(Q1)−1

, which shows that

M has a decomposition of the desired form.

The uniqueness assumption follows from the theory of Fitting ideals.

With the notation of section 3.9, one hasΔ?(M) = Δ?(D) = (31,1 · · · 3?,?) =(3?,?)Δ?−1(D) for every integer ?. If Δ?−1(D) ≠ 0, this characterizes 3?,?up to a unit; if Δ?−1(D) = 0, the divisibility assumption on 31,1, . . .

implies that 3?,? = 0. �

Corollary (5.3.6). — Let A be a euclidean ring. One has E=(A) = SL=(A) andGE=(A) = GL=(A).

Proof. — The two inclusions E=(A) ⊂ SL=(A) and GE=(A) ⊂ GL=(A) areobvious (all generators of E=(A) have determinant 1). We need to prove

that any matrix M ∈ GL=(A) belongs to GE=(A), and that any matrix

M ∈ SL=(A) belongs to E=(A).So, let M ∈ GL=(A); let M = PDQ be some decomposition with

P,Q ∈ E=(A) and D ∈ M=(A) a diagonal matrix. Observe that the

diagonal entries 38 ,8 of D are invertible. Consequently,

D = D1(31,1) . . .D=(3=,=)

belongs to GE=(A) and M ∈ GE=(A) as well. It now follows from

lemma 5.3.7 below that there exist matrices P′,Q′ ∈ E=(A), and a

unit 0 ∈ A×such that M = P

′diag(0, 1, . . . , 1)Q′.

Assume, moreover, that M ∈ SL=(A), we obtain that 0 = det(M) = 1.

It follows that M = P′Q′, hence M ∈ E=(A). �

Page 255: (MOSTLY)COMMUTATIVE ALGEBRA

5.3. HERMITE NORMAL FORM 243

Lemma (5.3.7). — Let A be a ring and let 01, . . . , 0= be units in A; let

00 = 0=0=−1 . . . 01. There exist matrices U,V ∈ E=(A) such that

U

©­­­­«01

02

. . .

0=

ª®®®®¬V =

©­­­­«00

1

. . .

1

ª®®®®¬.

Proof. — Let � and � be two units in A. Let us observe that each of

the following matrices is deduced from the preceding one through an

elementary row or column operation:(� 0

0 �

)R1←R1+�−1

R2−−−−−−−−−−→(� 1

0 �

)C1←C1−C2�−−−−−−−−−→

(0 1

−�� �

)R1←R1−R2−−−−−−−−→

(�� 1 − �−�� �

)R2←R2+R1−−−−−−−−→

(�� 1 − �0 1

)C2←C2−C1(��)−1(1−�)−−−−−−−−−−−−−−−−→

(�� 0

0 1

).

Consequently, there exist matrices U,V ∈ E2(A) such that

U

(�

)V =

(��

1

),

which proves the result for = = 2.

In the general case, we can perform the corresponding row and column

operations for indices = − 1 and =, replacing 0=−1 and 0= by 0=0=−1 and 1.

We repeat this process for indices = − 2 and = − 1, etc., until we do it for

indices 2 and 1, which gives the result. �

5.3.8. Principal ideal domains. — Wewant to generalize theorem 5.3.5

to the case of any principal ideal domain A. However, elementary opera-

tions on rows and columns will not suffice anymore, the group E=(A) isnot necessarily equal to SL=(A) andwe need to use the full group SL=(A).

Lemma (5.3.9). — Let A be a principal ideal domain, let 0, 1 be two nonzero

elements of A. Let 3 be a gcd of (0, 1), let D, E ∈ A be such that 3 = 0D + 1E;

Page 256: (MOSTLY)COMMUTATIVE ALGEBRA

244 CHAPTER 5. MODULES OVER PRINCIPAL IDEAL RINGS

let A and B ∈ A be such that 0 = 3A and 1 = 3B. The matrix ( D E−B A ) belongs toSL2(A) and one has (

D E

−B A

) (0 ∗1 ∗

)=

(3 ∗0 ∗

).

Proof. — We have 3 = 0D + 1E = 3(DA + EB). Since 3 ≠ 0, we obtain

DA + EB = 1 so that ( D E−B A ) ∈ SL2(A). The rest is immediate. �

Theorem (5.3.10). — Let A be a principal ideal domain, let M ∈ M=,?(A).There exists matrices P ∈ SL=(A), Q ∈ SL?(A) and D ∈ M=,?(A) such that

M = PDQ, D is “diagonal” (meaning 38 , 9 = 0 for 8 ≠ 9) and such that 38 ,8divides 38+1,8+1 for any integer 8 such that 1 6 8 < inf(=, ?). Moreover, the

diagonal entries 31,1, . . . of D are well defined up to units.

Proof. — The proof is very close to that of theorem 5.3.5, and we only

indicate the modifications to be done. For any nonzero 0 ∈ A, let $(0) bethe size of 0, namely the number of irreducible factors of 0, counted with

multiplicities. We now argue by a double induction, first on sup(=, ?)and then on the minimal size of a nonzero entry of M.

As above, we may assume that <1,1 is a nonzero element of minimal

size. If all entries of the first column are divisible by <1,1, elementary

operations on rows allow us to assume that the entry (1, 1) is the onlynonzero entry of this first column. Otherwise, it follows from lemma5.3.9

that there exists a matrix P1 ∈ SL=(A) of the form

©­­­­­­­­«

D 0 . . . E 0 . . .

0 1 0

.... . .

...

−B A

1

. . .

ª®®®®®®®®¬such that the (1, 1) entry of P1M is a gcd of (<1,1, <1, 9), the integer 9 ∈{2, . . . , ?} being chosen such that <1,8 is not a multiple of <1,1. The

size of the (1, 1)-entry of P1M is now strictly smaller than $(<1,1); byinduction, there exist P ∈ SL=(A) and Q ∈ SL?(A) such that PP1MQ is

diagonal, each diagonal entry dividing the next one.

Page 257: (MOSTLY)COMMUTATIVE ALGEBRA

5.4. FINITELY GENERATED MODULES OVER A PRINCIPAL IDEAL DOMAIN 245

By right multiplication with analogous matrices, we may also assume

that the <1,1 is the only nonzero entry of the first row. Finally, we are

reduced to the case where the matrix M takes the form©­­­­«<11 0 . . . 0

0

... M′

0

ª®®®®¬with M

′ ∈ M=−1,?−1(A). By the same argument as in the euclidean case,

we may then assume that all entries of M′are divisible by <1,1. We

then conclude in the same way, applying the induction hypothesis to

the matrix M′whose size is smaller than that of M.

For the proof of the uniqueness assumption, it suffices to copy the one

we have done in the case of euclidean rings, see theorem 5.3.5. �

5.4. Finitely generated modules over a principal ideal domain

The theory of modules over a given ring A strongly depends on

its algebraic properties. The case of division rings gives rise to the

particularly well behaved theory of vector spaces. In order of complexity,

the next case is that of principal ideal domains to which we now turn.

Proposition (5.4.1). — Let A be a principal ideal domain, let M be a free

A-module of rank = and let N be a submodule of M. Then, N is a free A-module

and its rank is 6 =.

Proof. — It suffices to show that every submodule N of A=is free of

rank 6 =; let us prove this by induction on =.

If = = 0, then A= = 0, hence N = 0 so that N is a free A-module of

rank 0.

Assume that = = 1. Then, N is an ideal of A. If N = 0, then N is free

of rank 0. Otherwise, A being a principal ideal domain, there exists a

nonzero element 3 ∈ A such that N = (3). Since A is a domain, the map

0 ↦→ 30 is an isomorphism from A to N, so that N is free of rank 1.

Let now = be an integer> 2 and let us assume that for any integer A < =,

every submodule of AAis free of rank 6 A. Let N be a submodule of A

=.

Let 5 : A= → A be the linear form given by (01, . . . , 0=) ↦→ 0=; it is

Page 258: (MOSTLY)COMMUTATIVE ALGEBRA

246 CHAPTER 5. MODULES OVER PRINCIPAL IDEAL RINGS

surjective and its kernel is the submodule M0 = A=−1 × {0} of A

=. By

induction, the ideal N1 = 5 (N) of A is free of rank 6 1. The submodule

N0 = N ∩M0 of M0 is isomorphic to a submodule of A=−1

, so is free of

rank 6 = − 1. It then follows from proposition 3.5.5 that N is free of

rank 6 =. �

Remark (5.4.2). — One shall observe that this proposition recovers the

fact that when K is a field, every subspace of K=is free of dimension6 =.

It is important to observe that the hypothesis on the ring A is necessary.

If A is neither a principal ideal domain nor a field, then there exists a

nonzero ideal of A which is not free as a A-module. Moreover, there

are noncommutative rings A possessing a submodule isomorphic to A2.

Finally, observe that it is possible that a submodule N ⊂ A=have rank =

while being distinct from A=, as is witnessed by the simple example

A = Z, = = 1 and N = 2Z ⊂ Z.

The following theorem is more precise: it furnishes a basis of a free

module over a principal ideal domain which is adapted to a given

submodule.

Theorem (5.4.3). — Let A be a principal ideal domain, let M be a free A-module

of rank = and let N be a submodule of M. There exists a basis (41, . . . , 4=)of M, an integer A such that 0 6 A 6 = and elements 31, . . . , 3A of A such

that 38 divides 38+1 for every integer 8 such that 1 6 8 < A and such that

(3141, . . . , 3A4A) is a basis of N.

Moreover, the integer A and the ideals (31), . . . , (3A) do not depend on the

choice of a particular such basis (41, . . . , 4=).

Proof. — We may assume M = A=. By the preceding proposition, the

submodule N is free of some rank ? ∈ {0, . . . , =}. In other words, there

exists a morphism 51 : A? → A

=which induces an isomorphism from A

?

to N. (In fact, the existence of a morphism 5 : A? → A

=with image N

is all that we will need, that is, it suffices to know that N is finitely

generated; we will show below how this follows from the fact that A is

a noetherian ring.) By theorem 5.3.10, there exist matrices P ∈ GL=(A),Q ∈ GL?(A) and D ∈ M=,?(A) such that U = PDQ, the matrix D being

Page 259: (MOSTLY)COMMUTATIVE ALGEBRA

5.4. FINITELY GENERATED MODULES OVER A PRINCIPAL IDEAL DOMAIN 247

diagonal and each of its diagonal entries 31, . . . , 3inf(=,?) dividing the

next one.

If 31 = 0, set A = 0. Otherwise, let A be the largest integer

in {1, . . . , inf(=, ?)} such that 3A ≠ 0; one then has 3A+1 = · · · = 3inf(=,?) =0.

Let D : A? → A

?be the automorphism with matrix Q, E : A

= → A=be

the automorphism with matrix P, and let 6 : A? → A

=be the morphism

with matrix D; one has 5 = E ◦ 6 ◦ D. Let (�1, . . . , �=) for the canonicalbasis of A

=. We see that (31�1, . . . , 3A�A) is a basis of Im(6) = Im(6 ◦ D),

so that (31E(�1), . . . , 3AE(�A)) is a basis of Im(E ◦ 6 ◦ D) = Im( 5 ) = N.

On the other hand, (E(�1), . . . , E(�=)) is a basis of A=, because E is an

automorphism. It thus suffices to set 48 = E(�8) for 8 ∈ {1, . . . , =}.From this description, it follows that the Fitting ideals of the mod-

ule M/N can be computed from 31, . . . , 3=, namely: Fit=−:(M/N) =Δ:(D) = (31 . . . 3:) for : 6 A and Fit=−:(M/N) = (0) for : > A. As in

theorem 5.3.10, this allows to recover the integer A as well as the elements

31, . . . , 3A up to units.

We can also argue directly. Let (4′1, . . . , 4′=) be a basis of A

=, B be

an integer in {0, . . . , =}, 3′1, . . . , 3′B elements of A such that 3′

8|3′8+1

for

every 8 < B and such that (3′14′

1, . . . , 3′B4

′B) is a basis of N. Since any two

bases of a freemodule over a commutative ring have the same cardinality,

we have B = A. The matrix of the canonical injection from N to A=in the

bases (41, . . . , 4A) and (41, . . . , 4=) is equal to D = diag(31, . . . , 3A). Let S

be the matrix expressing the basis (4′1, . . . , 4′=) in the basis (41, . . . , 4=),

let T be the matrix expressing the basis (3′14′

1, . . . , 3′A4

′A) of N in the

basis (3141, . . . , 3A4A) and let D′ = diag(3′

1, . . . , 3′A). We have D = S

−1D′T.

Since S ∈ GL=(A) and T ∈ GLA(A), it follows from the uniqueness

assertion of theorem 5.3.10 that (31) = (3′1),. . . , (3A) = (3′A). �

Corollary (5.4.4). — Let A be a principal ideal domain and let M be a finitely

generated A-module. There exists an integer = and elements 31, . . . , 3= of A,

non-units, such that 38 divides 38+1 for every integer 8 such that 1 6 8 < =

such that M is isomorphic to the direct sum

⊕=8=1

A/(38); the Fitting ideals

of M satisfy Fit?(M) = (31 · · · 3=−?) for every integer ? such that 0 6 ? < =,

and Fit?(M) = A for ? > =.

Page 260: (MOSTLY)COMMUTATIVE ALGEBRA

248 CHAPTER 5. MODULES OVER PRINCIPAL IDEAL RINGS

Moreover, if < is an integer, �1, . . . , �< are non-units of A such that � 9divides � 9+1 for every 9 satisfying 1 6 9 < < and such that M is isomorphic to⊕<

9=1A/(� 9), then < = = and (�8) = (38) for every 8 ∈ {1, . . . , =}.

Proof. — Let (<1, . . . , <=) be a family of elements of M which gener-

ates M. Let 5 : A= → M be the unique morphism that sends the 8th

vector of the canonical basis of A=to <8; it is surjective by construction.

If N denotes its kernel, 5 induces an isomorphism from A=/N to M.

Let (41, . . . , 4=) be a basis of A=, let 31, . . . , 3? be elements of A such

that (3141, . . . , 3?4?) is a basis of N and such that 38 divides 38+1 for

any integer 8 such that 1 6 8 < ? (theorem 5.4.3). Set 38 = 0 for

8 ∈ {? + 1, . . . , =}. The morphism ! from A=to itself which maps

(01, . . . , 0=) to 0141 + · · · + 0=4= is an isomorphism, because (41, . . . , 4=)is a basis of A

=. Moreover, !(01, . . . , 0=) belongs to N if and only if 38

divides 08 for every 8 ∈ {1, . . . , =}. Passing to the quotient, we see that

M = Im( 5 ◦ !) is isomorphic to

⊕=8=1

A/(38).For every 8 ∈ {1, . . . , = − 1} such that 38+1 is invertible, then so is 38,

because 38 divides 38+1. Consequently, there exists an integer A ∈{0, 1, . . . , =} such that 38 is a unit if and only if 8 6 A. Then, M '⊕=

8=A+1A/(38); this proves the first part of the proof.

Resetting notation, we assume that A = 0.

By the definition of Fitting ideals, one has Fit:(M) = J=−:( 5 ◦ !).Moreover, the kernel of 5 ◦! is generated by the vectors (3141, . . . , 3=4=).Consequently, Fit?(M) = (1) if ? > =, Fit?(M) = (31 . . . 3=−?) if 0 6 ? 6= − 1, and Fit?(M) = (0) if ? < 0.

The uniqueness property follows from this description of the Fitting

ideals: if M '⊕<

9=1A/(� 9), where < is an integer and �1, . . . , �< are

non-units in A such that �1 |�2 | . . . |�<, then (31 . . . 3=−?) = (�1 . . . �<−?)for every integer ?. Taking ? = =, this implies that < 6 =, since

otherwise, the ideal (�1 . . . �<−=) would be non-trivial; by symmetry,

one has = 6 <, hence = = <. The equalities (31 . . . 3=−?) = (�1 . . . �=−?)for every ? ∈ {0, . . . , =} imply that there are units D1, . . . , D= such that

�1 . . . �: = D:31 . . . 3: for all : ∈ {1, . . . , =}.Let us show that there are units E1, . . . , E= such that �: = E:3: for

every : ∈ {1, . . . , =}. This holds for : = 1, with E1 = D1. Assume that

it holds for :. If 31 . . . 3: = 0, then �1 . . . �: = 0; there exists ?, @ 6 :

Page 261: (MOSTLY)COMMUTATIVE ALGEBRA

5.4. FINITELY GENERATED MODULES OVER A PRINCIPAL IDEAL DOMAIN 249

such that 3? = �@ = 0, and the divisibilities 3: | 3:+1 and �: | �:+1 imply

that �:+1 = 3:+1 = 0; in this case, we set D:+1 = 1. Otherwise, one has

31 . . . 3: ≠ 0; writing

D:+131 . . . 3:+1 = �1 . . . �:+1 = D:31 . . . 3:�:+1,

we deduce that �:+1 = D:+1D−1

:3:+1. This proves the result by induction.

Here is another approach.

We thus assume that an isomorphism M '⊕=

8=1A/(38) be given,

where 31, . . . , 3= are non-units inA such that 38 divides 38+1 for 1 6 8 < =.

Let ? be an irreducible element of A; by lemma 5.4.5 applied with 0 = ?<

and 1 = ?, the module ?<−1M/?<M is an (A/?A)-vector space and

its dimension is the number of indices 8 ∈ {1, . . . , =} such that ?<

divides 38. Using the divisibility 38 |38+1 for 8 ∈ {1, . . . , = − 1}, we deduce

the equivalence: ?< divides 38 if and only if dimA/?A(?<−1

M/?<M) >< + 1 − 8. This determines the irreducible factors of the elements 38, as

well as their exponents, hence the ideals (38). �

Lemma (5.4.5). — Let A be a principal ideal domain, let 3 be an element of A

and let M = A/(3). Let ? be an irreducible element of A; for every integer <

such that < > 1, let M< = ?<−1M/?<M. Then, M< is an (A/?)-vector

space; its dimension is 1 if ?< divides 3, and is 0 otherwise.

Proof. — Since the multiplication by ? in the A-module M< is zero,

the canonical morphism A → End(M<) factors through A/?A: this

endowes M< with the structure of an (A/?A)-vector space. (Recall thatA/?A is a field.)

Let = be the exponent of ? in the decomposition of 3 into prime factors,

that is, the supremum of all integer = such that ?= divides 3. (If 3 = 0,

then = = +∞; otherwise, = is finite, ?= divides 3 but ?=+1doesn’t.) The

canonical bijection between submodules of A/3A and submodules of A

containing (3)maps ?<M to the ideal (?< , 3) = (?inf(<,=)). This furnishesan isomorphism

M< = ?<−1

M/?<M '?inf(<−1,=)

A/3A

?inf(<,=)A/3A

'?inf(<−1,=)

A

?inf(<,=)A

.

Consequently,M< = 0 if andonly if inf(<−1, =) = inf(<, =), that is, if andonly if< > =, which amounts to say that ?< does not divide 3. Otherwise,

Page 262: (MOSTLY)COMMUTATIVE ALGEBRA

250 CHAPTER 5. MODULES OVER PRINCIPAL IDEAL RINGS

if< 6 =, then inf(<, =) = <, inf(<−1, =) = <−1 andM< ' ?<−1A/?<A.

Now, themapA→ ?<−1A given by 0 ↦→ ?<−1

A induces an isomorphism

from A/?A to ?<−1A/?<A, so that dim

A/?A(M<) = 1 in that case. �

Definition (5.4.6). — Let A be a principal ideal domain and let M be a finitely

generated A-module. Let M '⊕=

8=1A/(38) be a decomposition of M where

31, . . . , 3= are non-units in A such that 38 divides 38+1 for any integer 8 such

that 1 6 8 < =.

The ideals (31),. . . , (3=) are called the invariant factors of M. The number

of those ideals which are zero is called the rank of M.

With these notations, we thus have 38 = 0 if and only if =−A+1 6 8 6 =,

so that M can also be written

M ' AA ⊕

=−A⊕8=1

A/(38).

Moreover, with the terminology of example 3.1.9, the second module⊕=−A8=1

A/(38) in the preceding expression corresponds to the torsion

submodule of M.

Corollary (5.4.7). — A finitely generated module over a principal ideal domain

is free if and only if it is torsion-free.

Corollary (5.4.8). — Let A be a principal ideal domain, let M be a finitely

generated free A-module and let N be a submodule of M. For N to admit a

direct summand in M, it is necessary and sufficient that the quotient M/N be

torsion-free.

Proof. — Assume that N has a direct summand P. Then, the canonical

morphism cl from M to M/N induces an isomorphism from P onto M/N.

Since P is a submodule of the free A-module M, it is torsion-free.

Consequently, M/N is torsion-free.

Conversely, let us assume that M/N is torsion-free. Since it is finitely

generated and A is a principal ideal domain, then M/N is a free A-

module (corollary 5.4.7). Let ( 51, . . . , 5A) be a basis of M/N; for any

8 ∈ {1, . . . , A}, let 48 be an element of M which is mapped to 58 by the

canonical surjection. Let then P be the submodule of M generated by

41, . . . , 4A . Let us show that P is a direct summand of N in M.

Page 263: (MOSTLY)COMMUTATIVE ALGEBRA

5.4. FINITELY GENERATED MODULES OVER A PRINCIPAL IDEAL DOMAIN 251

Let < ∈ N ∩ P and write < =∑0848 for some 01, . . . , 0A ∈ A. We have

cl(<) = 0, because < ∈ N, hence

∑08 58 = 0. Since the family ( 58) is free,

it follows that 08 = 0 for all 8. Consequently, < = 0 and N ∩ P = 0.

Conversely, let < ∈ M; let (01, . . . , 0A) be elements of A such that

cl(<) = ∑08 58. Then, ? =

∑0848 belongs to P and cl(?) = cl(<). It

follows that cl(<− ?) = 0, hence<− ? ∈ N, which proves that< ∈ P+N.

This shows that M = P +N.

We have thus shown that P is a direct summand of N, as claimed. �

Lemma and Definition (5.4.9). — Let A be a principal ideal domain, let M be

an A-module. For every irreducible element ? ∈ A, the set M(?) of all < ∈ M

for which there exists = > 0 with ?=< = 0 is a submodule of M, called the

?-primary component of M.

Proof. — One has 0 ∈ M(?). Let <, <′ ∈ M(?); we want to show that

< + <′ ∈ M(?). Let =, =′ be positive integers such that ?=< = ?=′<′ = 0;

set : = sup(=, =′). Then, ?:(< + <′) = ?:−=(?=<) + ?:−=′(?=′<′) = 0, so

that < + <′ ∈ M(?). Finally, let < ∈ M(?) and let 0 ∈ A; if = > 0 is such

that ?=< = 0, we see that ?=(0<) = 0(?=<) = 0, hence 0< ∈ M(?). Thisshows that M(?) is a submodule of M. �

Let ? and @ be two irreducible elements ofA. If there exists a unit D ∈ A

such that @ = D?, it is clear from the definition that M(?) ⊂ M(@),hence M(?) = M(@) by symmetry. Otherwise, it will follow from

proposition 5.4.10 below that M(?) ∩M(@) = 0.

We fix a set� of irreducible elements in A such that any irreducible

element of A is equal to the product of a unit by a unique element

of �. In particular, when ? varies in �, the ideals (?) are maximal and

pairwise distinct.

Proposition (5.4.10) (Primary decomposition). — Let A be a principal ideal

domain, let M be an A-module. Then the torsion submodule of M decomposes

as the direct sum T(M) =⊕

?∈� M(?).

Proof. — Let us first prove these modules M(?) are in direct sum. Let

thus (<?)?∈� be an almost-null family of elements ofM, where<? ∈ M(?)for every ? ∈ �, and assume that

∑<? = 0. We want to prove that

<? = 0 for every ? ∈ �. Let I be the subset of � consisting of those ?

Page 264: (MOSTLY)COMMUTATIVE ALGEBRA

252 CHAPTER 5. MODULES OVER PRINCIPAL IDEAL RINGS

for which <? ≠ 0; by hypothesis, it is a finite subset. For every ? ∈ I, let

=? be a positive integer such that ?=?<? = 0. For any ? ∈ I, let

0? =∏@∈I@≠?

@=@ .

By construction, 0?<@ = 0 for every @ ∈ I such that @ ≠ ?, and also for

every @ ∈ � I. Consequently, 0?<? = 0?(∑@ <@) = 0. However, let us

observe that 0? and ?=?

are coprime. Since A is a principal ideal domain,

there exist D, E ∈ A such that D0? + E?=? = 1; we get

<? = (D0? + E?=?)<? = D(0?<?) + E(?=?<?) = 0.

We thus have shown that <? = 0 for every ? ∈ �; in other words, the

modules M(?) are in direct sum, as claimed.

By definition, one has M(?) ⊂ T(M) for every ?, hence⊕

? M(?) ⊂T(M). Conversely, let < ∈ T(M) and let 0 be any nonzero element

of A such that 0< = 0; let 0 = D∏

?∈I ?=?

be the decomposition of 0

in irreducible factors, where I is a finite subset of � and D is a unit.

Let us prove by induction on Card(I) that < belongs to

⊕? M(?). This

is obvious if Card(I) 6 1. Otherwise, fix @ ∈ I and let 1 = 0/@=@ ; theelements 1 and @=@ are coprime, hence there exist 2, 3 ∈ A such that

21 + 3@=@ = 1, hence < = 21< + 3@=@<. Since @=@(1<) = 0< = 0, one has

1< ∈ M@. On the other hand, 1@=@< = 0< = 0, hence @=@< ∈⊕

? M?

because 1 has less irreducible factors than 0. By induction, it follows that

< belongs to the submodule

⊕? M(?) of M, hence T(M) =

⊕? M(?), as

claimed. �

Remark (5.4.11). — Let A be a principal ideal domain, let M be a finitely

generated torsion A-module. Let ? ∈ � be an irreducible element of A

and let M(?) be the corresponding ?-primary component of M. By the

primary decomposition, we see that M(?) is isomorphic to a quotient

of M, namely, the quotient by the direct sum of the other primary

components. Consequently, M(?) is finitely generated. Let <1, . . . , <A

be elements of M(?) which generate M(?); for every 8 ∈ {1, . . . , A}, let:8 ∈ N be such that ?:8<8 = 0 and set : = sup(:1, . . . , :A); then ?:<8 = 0

for every 8, hence ?:< = 0 for every < ∈ M(?).

Page 265: (MOSTLY)COMMUTATIVE ALGEBRA

5.5. APPLICATION: FINITELY GENERATED ABELIAN GROUPS 253

The invariant factors of M(?) are of the form (?=1), . . . , (?=B), where

=1 6 · · · 6 =B are positive integers. Their knowledge, for every ? ∈ �,

determines the invariant factors of M, and conversely. We shall see

explicit examples in the next section.

5.5. Application: Finitely generated abelian groups

The main examples of principal ideal rings are Z and K[X] (where K is

a field). In this section we explicit the case of the ring Z; the case of apolynomial ring will be the subject of the next section.

Recall that a Z-module is nothing but an abelian group, so that

finitely generated Z-modules are just finitely generated abelian groups.

Moreover, any ideal of Z has a unique positive generator. In the case

A = Z, corollary 5.4.4 gives the following theorem:

Theorem (5.5.1). — Let G be a finitely generated abelian group. There exists

an integer A > 0 and a family (31, . . . , 3B) of integers at least equal to 2, both

uniquely determined, such that 38 divides 38+1 for 1 6 8 < B and such that

G ' ZA ⊕ (Z/31Z) ⊕ · · · ⊕ (Z/3BZ).

The integer A is the rank of G, the integers 31, . . . , 3B are called its

invariant factors.

Theorem 5.5.1 furnishes a “normal form” for any finitely generated

abelian group, which allows, in particular, to decide whether two such

groups are isomorphic or not.

However, one must pay attention that the divisibility condition is

satisfied. To determine the invariant factors of a group, written as a

direct sum of cyclic group, the simplest procedure consists in using

the Chinese remainder theorem twice: first, decompose each cyclic

group as the direct sum of its primary components; then, collect factors

corresponding to distinct prime numbers, beginning with those of

highest exponents.

Example (5.5.2). — Let us compute the invariant factors of the abelian

groups G1 = (Z/3Z) ⊕ (Z/5Z) and G2 = (Z/6Z) ⊕ (Z/4Z).

Page 266: (MOSTLY)COMMUTATIVE ALGEBRA

254 CHAPTER 5. MODULES OVER PRINCIPAL IDEAL RINGS

Since 3 and 5 are coprime, (Z/3Z) ⊕ (Z/5Z) is isomorphic to Z/15Z,by the Chinese remainder theorem. Consequently, the group G1 has

exactly one invariant factor, namely 15.

The integers 6 and 4 are not coprime, their gcd being 2. Since 6 = 2 · 3,and 2 and 3 are coprime, it follows from the Chinese remainder that

G2 '(Z/6Z) ⊕ (Z/4Z) ' (Z/2Z) ⊕ (Z/3Z) ⊕ (Z/4Z)'

((Z/2Z) ⊕ (Z/4Z)

)⊕

((Z/3Z)

).

Observe that the preceding expression furnishes the primary decom-

position of the group G2: the factors (Z/2Z) ⊕ (Z/4Z) and (Z/3Z) beingrespectively the 2-primary and 3-primary components of G2. We will

now collect factors corresponding to distinct prime numbers, and we

begin by those whose exponents are maximal, namely Z/3Z and Z/4Z.We then obtain

G2 ' ((Z/2Z)) ⊕ ((Z/4Z) ⊕ (Z/3Z)) ' (Z/2Z) ⊕ (Z/12Z).

This shows that the invariant factors of G2 are 2 and 12.

Using this result we can also make the list of all finite abelian groups

whose cardinality is a given integer 6. Indeed, it suffices to make the list

of all families of integers (31, . . . , 3B) such that 31 > 2, 38 divides 38+1 if

1 6 8 < B, and 6 = 31 . . . 3B . Again, the computation is easier if one first

considers the primary decomposition of the abelian group.

Example (5.5.3). — Let us determine all abelian groupsG of cardinality 48.

Since 48 = 24× 3, such a group will have a primary decomposition of the

form G = G2 ⊕ G3, where G2 has cardinality 24and G3 has cardinality 3.

In particular, we have G3 = Z/3Z. Let us now find all possible 2-primary

components. This amounts to finding all families (31, . . . , 3B) of integerssuch that 31 > 2, 38 divides 38+1 if 1 6 8 < B and 31 . . . 3B = 2

4. All of

the 38 must be powers of 2, say of the form 2=8, for integers =1, . . . , =B

such that 1 6 =1 6 =2 6 · · · 6 =B , and one has

∑B8=1=8 = 4. The list is the

following:

– 31 = 2, 32 = 2, 33 = 2, 34 = 2;

– 31 = 2, 32 = 2, 33 = 4;

– 31 = 2, 32 = 4, but then 33 6 2 so that this case does not happen;

Page 267: (MOSTLY)COMMUTATIVE ALGEBRA

5.5. APPLICATION: FINITELY GENERATED ABELIAN GROUPS 255

– 31 = 2, 32 = 8;

– 31 = 4, 32 = 4;

– 31 = 8, but then 32 6 2, so that this case does not happen neither;

– 31 = 16.

We thus find that up to isomorphy, there are only five abelian groups of

cardinality 16:

(Z/2Z)4, (Z/2Z)2 ⊕ (Z/4Z), (Z/2Z) ⊕ (Z/8Z), (Z/4Z)2, (Z/16Z).

The abelian groups of cardinality 48 are obtained by taking the product

of one of these five groups with Z/3Z. We collect the factor Z/3Z with

the 2-primary factor with highest exponent and obtain the following list:

(Z/2Z)3 ⊕ (Z/6Z),(Z/2Z)2 ⊕ (Z/12Z),(Z/2Z) ⊕ (Z/24Z),(Z/4Z) ⊕ (Z/12Z),(Z/48Z).

Let us conclude with an application to field theory.

Theorem (5.5.4). — LetK be a (commutative) field and letG be a finite subgroup

of the multiplicative group of K. Then K is a cyclic group. In particular, the

multiplicative group of a finite field is cyclic.

Proof. — Let = be the cardinality of G. We may assume that G ≠ {1}.By theorem 5.5.1, there exist an integer A > 1 and integers 31, . . . , 3A > 2

such that 38 divides 38+1 for every 8 ∈ {1, . . . , A − 1} and such that

G ' (Z/31Z) × · · · × (Z/3AZ). In particular, G3A = 1 for every G ∈ G.

Since a nonzero polynomial with coefficients in a field has no more

roots than its degree, we conclude that = 6 3A . Since = = 31 . . . 3A , this

implies that A = 1 and 31 = =. Consequently, G ' Z/=Z is a cyclic group

of order =. �

Page 268: (MOSTLY)COMMUTATIVE ALGEBRA

256 CHAPTER 5. MODULES OVER PRINCIPAL IDEAL RINGS

5.6. Application: Endomorphisms of a finite dimensional vectorspace

Let K be a field and let A = K[X] be the ring of polynomials in one

indeterminate.

5.6.1. — Recall from remark 3.2.6 (see also exercise 3/4) that an endo-

morphism D of a K-vector space V endowes it with the structure of a

K[X]-module, given by P · G = P(D)(G) for any polynomial P ∈ K[X] andany vector G ∈ V. Explicitly, if P =

∑3==0

0=X=, one has

P · G =3∑==0

0=D=(G).

Let VD be the K[X]-module so defined. Conversely, let M be a K[X]-module. Forgetting the action of X, we get a K-vector space V. The

action of X on V is a K-linear endomorphism D, and M = VD.

In order for the theory of K[X]-modules to reflect faithfully that of

K-vector spaces endowed with endomorphisms, we need to understand

this correspondence on a categorical point of view, that is, at the level of

morphisms.

Let W be a K-vector space and let E be an endomorphism of W. A

morphism ofK[X]-modules fromVD toWE is an additivemap 5 : V→W

such that 5 (P · G) = P · 5 (G) for every G ∈ V and every P ∈ K[X]. TakingP = � ∈ K, this gives 5 (�G) = � 5 (G): the map 5 is morphism of K-vector

spaces. Taking P = X, this gives 5 (D(G)) = E( 5 (G)), hence 5 ◦ D = E ◦ 5 .Conversely, if 5 : V → W is a morphism of K-vector spaces such that

5 ◦ D = E ◦ 5 , then 5 ◦ P(D) = P(E) ◦ 5 for every P ∈ K[X]. (Indeed,

the set of polynomials that satisfy this relation is a subring of K[X]that contains K and X, hence is the whole of K[X].) Consequently,

P · 5 (G) = P(E)( 5 (G)) = 5 (P(D)(G)) = 5 (P · G) for every P ∈ K[X] and every

G ∈ V, so that 5 is a morphism of K[X]-modules.

In particular, isomorphisms 5 : VD → WE of K[X]-modules are pre-

cisely isomorphisms 5 : V→W of K-vector spaces such that 5 ◦D = E◦ 5 .In the particular case V = W, we conclude that the K[X]-modules VD

and VE are isomorphic if and only if there exists 5 ∈ GL(V) such that

Page 269: (MOSTLY)COMMUTATIVE ALGEBRA

5.6. APPLICATION: ENDOMORPHISMS OF A FINITE DIMENSIONAL VECTOR SPACE 257

D = 5 −1 ◦ E ◦ 5 , that is, if and only if the endomorphisms D and E are

conjugate under GL(V).

Definition (5.6.2). — One says that a nonzero K[X]-module M is cyclic if

there exists a nonzero polynomial P ∈ K[X] such that M ' K[X]/(P).

We remark that the cyclic K[X]-module M = K[X]/(P) is finitely

dimensional as a K-vector space and its dimension is equal to deg(P).Indeed, euclidean division by P shows that any polynomial of K[X] isequalmoduloP to auniquepolynomial of degree< deg(P). Consequently,the elements cl(1), cl(X), . . . , cl(Xdeg(P)−1) form a basis of M.

Lemma (5.6.3). — Let V be a K-vector space, let D be an endomorphism of V.

The K[X]-module VD is cyclic if and only if there exists a vector G ∈ V and an

integer = > 1 such that the family (G, D(G), . . . , D=−1(G)) is a basis of V.

Such a vector G ∈ V is called a cyclic vector for the endomorphism D.

Proof. — Assume that the K[X]-module VD is cyclic and fix an isomor-

phism ! of K[X]/(P) with VD, for some nonzero polynomial P ∈ K[X].Let = = deg(P). The image of cl(1) by ! is an element G of V. Since

(1, cl(X), . . . , cl(X=−1)) is a basis of K[X]/(P) as a K-vector space, the

family (G, D(G), . . . , D=−1(G)) is a basis of V.

Conversely, let G ∈ V be any vector such that (G, D(G), . . . , D=−1(G)) is abasis of V. Write D=(G) = ∑=−1

?=00?D

?(G) in this basis, for some elements

00, . . . , 0=−1 ∈ K. Let Π = X= − ∑=−1

?=00?X

? ∈ K[X]. Then, the map

5 : K[X] → V, P ↦→ P(D)(G), is a surjective morphism of K[X]-modules.

Let us show that Ker( 5 ) = (Π). Indeed, one has

5 (Π) = D=(G) −=−1∑?=0

0?D?(G) = 0,

so that Π ∈ Ker( 5 ); it follows that (Π) ⊂ Ker( 5 ). Let then P ∈ Ker( 5 )and let R be the euclidean division of P by Π; one has deg(R) < =

and 5 (R) = 5 (P) = 0. We get a linear dependence relation between

G, D(G), . . . , D=−1(G); since this family is a basis of V, this dependence

relation must be trivial, which gives R = 0. This shows that Ker( 5 ) =(Π), so that 5 induces, passing to the quotient, an injective morphism

Page 270: (MOSTLY)COMMUTATIVE ALGEBRA

258 CHAPTER 5. MODULES OVER PRINCIPAL IDEAL RINGS

! : K[X]/(P) → V. Since 5 is surjective, ! is surjective as well, hence is

an isomorphism. This concludes the proof of the lemma. �

Remark (5.6.4). — If VD is a cyclic module and G ∈ V is a cyclic vector,

the matrix of D in the basis (G, . . . , D=−1(G)) is of the form

©­­­­­­«

0 00

1 0 01

. . . . . ....

. . . 0...

1 0=−1

ª®®®®®®¬,

It is the companion matrix CΠ of the polynomial

Π = X= − 0=−1X

=−1 − · · · − 01X − 00.

Let us prove that Π is both the minimal and characteristic polynomial

of the matrix CΠ (and of the endomorphism D).

Since G, . . . , D=−1(G) are linearly independent, one has Q(D)(G) ≠ 0

for every non-zero polynomial Q of degree < =. Moreover, one has

D=(G) = 00G + 01D(G) + · · · + D=−1(G), so that Π(D)(G) = 0. Since D: and

Π(D) commute, it follows that Π(D)(D:(G)) = D:(Π(D)(G)) = 0 for every

: ∈ {0, . . . , = − 1}. Consequently, Π(D) = 0. Since deg(Π) = =, this

implies that Π is the minimal polynomial of D, and of CΠ.

The characteristic polynomial of D, or of CΠ, is a monic polynomial

of degree =; by Cayley–Hamilton’s theorem, it is a multiple of Π. It is

therefore equal to Π. One can also compute it explicitly by induction

on =: expanding the determinant along the first row, one has

PCΠ(X) = det

©­­­­­­«

X −00

−1 X −01

−1. . .

.... . . X −0=−2

−1 X − 0=−1

ª®®®®®®¬

Page 271: (MOSTLY)COMMUTATIVE ALGEBRA

5.6. APPLICATION: ENDOMORPHISMS OF A FINITE DIMENSIONAL VECTOR SPACE 259

= X det

©­­­­­­«

X −01

−1 X −02

−1. . .

.... . . X −0=−2

−1 X − 0=−1

ª®®®®®®¬+ (−1)=00 det

©­­­­«−1 X

−1. . .. . . X

−1

ª®®®®¬= X(X=−1 − 0=−1X

=−2 − · · · − 01) + (−1)=00(−1)=−1

= Π(X).

Applied to the K[X]-module defined by an endomorphism of a finite

dimensional K-vector space, corollary 5.4.4 thus gives the following

theorem.

Theorem (5.6.5). — LetK be a field. LetV be a finite dimensionalK-vector space

and let D be an endomorphism of V. There exists a unique family (P1, . . . , PA)of monic, non-constant, polynomials of K[X] such that P8 divides P8+1 for any

integer 8 such that 1 6 8 < A and such that, in some basis of V, the matrix of D

takes the following block-diagonal form

©­­­­«CP1

CP2

. . .

CPA

ª®®®®¬.

The polynomials (P1, . . . , PA) are called the invariant factors of the

endomorphism D. It is apparent from the matrix form above that PA is

the minimal polynomial of D, while P1 . . . PA is its characteristic polynomial.

Corollary (5.6.6). — Two endomorphisms of a finite dimensional vector space

are conjugate if and only if they have the same invariant factors.

Applying the theory of Fitting ideals, we also get the following algo-

rithm to compute the invariant factors of a matrix.

Page 272: (MOSTLY)COMMUTATIVE ALGEBRA

260 CHAPTER 5. MODULES OVER PRINCIPAL IDEAL RINGS

Proposition (5.6.7). — Let K be a field, let U be a matrix in M=(K). For anyinteger ? such that 1 6 ? 6 =, let Δ? ∈ K[X] be the (monic) gcd of the ? × ?minors of the matrix XI= −U. One has Δ1 | Δ2 | · · · | Δ=. Let < ∈ {0, . . . , =}be such that Δ? = 1 for ? 6 < and Δ? ≠ 1 for ? ∈ {< + 1, . . . , =}. Then,

there exist monic polynomials Q1, . . . ,Q=−< ∈ K[X] such thatQ1 = Δ<+1, Q1Q2 = Δ<+2, . . . , Q1 . . .Q=−< = Δ= .

The invariant factors of U are the polynomials Q1, . . . ,Q=−<.

Proof. — Let A = K[X]. The endomorphism U of K=endowes K

=with

a structure of an A-module (K=)U defined by P · E = P(D)(E) for P ∈ A

and E ∈ K=. The proof of the proposition consists in computing in two

different ways the Fitting ideals of (K=)U, once by proving that (K=)U is

isomorphic to A=/Im(XI=−U), and once by the formula of corollary 5.4.4,

Fit:((K=)U) = (P1 . . . PA−:) for all integers : > 0, where P1, . . . , PA are the

invariant factors of U.

Let (41, . . . , 4=) be the canonical basis of the free A-module A=; let also

( 51, . . . , 5=) be the canonical basis of K=. The matrix U ∈ M=(K) induces

an endomorphism of K=, as well as an endomorphism of A

=; with denote

both of them by D. By the universal property of free modules, there is a

unique morphism # of A-modules from A=to (K=)D such that #(48) is

the 8th vector 58 of the canonical basis of K=, and a unique morphism �

of K-vector spaces from K=to A

=such that �( 58) = 48.

By construction, the matrix of the morphism

! : A= → A

= , 48 ↦→ X48 − D(48)is equal to XI= −U. For any 8 ∈ {1, . . . , =}, one has

#(!(48)) = #(X48 − D(48)) = D(#(48)) − #(D(48)) = 0

since # is a morphim of K[X]-modules. Consequently, # ◦ ! = 0, that is,

Im(!) ⊂ Ker(#). Passing to the quotient, # induces a morphism # from

A=/Im(!) to (K=)D. Let � be the composition of � with the projection

to A=/Im(!). For any 8 ∈ {1, . . . , =}, one has

�( 58) = cl(�( 58)) = cl(48),and

�(D( 58)) = cl(D(48)) = cl(X · 48) = X · cl(48),

Page 273: (MOSTLY)COMMUTATIVE ALGEBRA

5.6. APPLICATION: ENDOMORPHISMS OF A FINITE DIMENSIONAL VECTOR SPACE 261

so that � is a morphism of A-modules. For any 8, one has

� ◦ #(cl(48)) = � ◦ #(48) = �( 58) = cl(48);since the vectors cl(48) generate theA-moduleA

=/Im(!), the composition

� ◦ # is the identity of A=/Im(!). On the other hand, one has

# ◦ �( 58) = #(cl(48)) = #(48) = 58

for each 8, so that # ◦ � is the identity of (K=)D. Finally, # is an

isomorphism and (K=)D is isomorphic to A=/Im(!).

Since the matrix of !, as an endomorphism of A=, is equal to XI= −U,

we see that for all integers ?, one has

(Δ?) = J?(XI= −U) = Fit=−?((K=)D).On the other hand, the K[X]-module associated to a companion

matrix CP is isomorphic to K[X]/(P), so that (K=)D has an alternative

presentation as the quotient of AAby the image of the diagonal matrix V

with diagonal entries (P1, . . . , PA), as in theorem 5.6.5. By example 3.9.2,

one then has

Fit=−?((K=)D) = JA−=+?(V) =

(1) if A − = + ? 6 0;

(P1 . . . PA−=+?) if 1 6 A − = + ? 6 A;

(0) if A − = + ? > A.

We thus obtain

(Δ?) =

(1) if ? 6 = − A;(P1 . . . P?+A−=) if = − A + 1 6 ? 6 =;

(0) if ? > =.

This implies that Δ1 = · · · = Δ=−A = 1 and Δ=−A+? = (P1 . . .P?) for1 6 ? 6 A and concludes the proof of the proposition, with< = =−A. �All the preceding theory has the following application, which is

important in subsequent developments of algebra. Note that it would

not be so obvious to prove it without the theory of invariant factors,

especially if the field K is finite.

Corollary (5.6.8). — Let K be a field and let U and V be two matrices in M=(K).Let K → L be an extension of fields; assume that U and V are conjugate as

Page 274: (MOSTLY)COMMUTATIVE ALGEBRA

262 CHAPTER 5. MODULES OVER PRINCIPAL IDEAL RINGS

matrices of M=(L), that is to say, there exists a matrix P ∈ GL=(L) such thatV = P

−1UP. Then, U and V are conjugate over K: there exists an invertible

matrix Q ∈ GL=(K) such that V = Q−1

UQ.

Proof. — Let (P1, . . . , PA) be the family of invariant factors of U, viewed

as a matrix in M=(K). Therefore, there exists a basis of W = K=in

which the matrix of U has a block-diagonal form, the blocks being the

companion matrices with characteristic polynomials P1, . . . , PA . The

matrix that expresses this new basis of W gives also a basis of L=in which

the matrix of U is exactly the same block-diagonal matrix of companion

matrices. In other words, when one “extends the scalars” from K to L,

the invariant factors of U are left unchanged. Since, as matrices with

coefficients in K, the invariant factors of U coincide with those of V, the

matrices U and V have the same invariant factors as matrices in M=(K),hence are conjugate. �

1

Theorem (5.6.9) (Jordan decomposition). — Let K be an algebraically closed

field. Let V be a finite dimensional K-vector space and let D be an endomorphism

of V. There exists a basis of V in which the matrix of D is block-diagonal, each

block being of the form (“Jordan block”)

J=(�) =©­­­­«� 1 0

� . . .. . . 1

0 �

ª®®®®¬∈ M=(K),

where � is an eigenvalue of D. Moreover, two endomorphisms D and D′ areconjugate if and only if, for every � ∈ K, the lists of sizes of Jordan blocks

(written in increasing order) corresponding to � for D and D′ coincide.

Proof. — Let us first write down the primary decomposition of the

K[X]-module VD. Since K is algebraically closed, the irreducible monic

polynomials in K[X] are the polynomials X − �, for � ∈ K. For � ∈ K, let

V� be the (X−�)-primary component of VD; it is the set of all E ∈ V such

that there exists = ∈ N with (X − �)= · E = 0, that is, (D − � id)=(E) = 0.

1À revoir à partir d’ici !

Page 275: (MOSTLY)COMMUTATIVE ALGEBRA

5.6. APPLICATION: ENDOMORPHISMS OF A FINITE DIMENSIONAL VECTOR SPACE 263

In other words, V� is the subspace of V known as the characteristic

subspace of D associated to the eigenvalue �. (It is indeed nonzero if

and only if � is an eigenvalue.) By construction, V� is a K[X]-submodule

of VD, which means that this subspace of V is stable under the action

of D.

The (X − �)-primary component V� of V is isomorphic, as a K[X]-module, to a direct sum

B⊕8=1

K[X]/((X − �)=8),

for some unique increasing family (=1, . . . , =B) of integers. Let us observethe following lemma:

Lemma (5.6.10). — The K[X]-module corresponding to the Jordan matrix

J=(�) is isomorphic to K[X]/(X − �)=.

Proof. — Let W = K=and let F be the endomorphism of W given by

the matrix J=(�). Let (41, . . . , 4=) be the canonical basis of K=; one has

F(4<) = �4< + 4<−1 if < ∈ {2, . . . , =}, and F(41) = �41. In other words,

if we view W as a K[X]-module WF via F, one has (X − �) · 4< = 4<−1

for 2 6 < 6 = and (X − �) · 41 = 0; in other words, (X − �)< · 4= = 4=−<for < ∈ {0, . . . , = − 1} and (X − �)= · 4= = 0.

There is a unique morphism ! of K[X]-modules from K[X] to WF such

that !(1) = 4=; it is given by !(P) = P(X) · !(1) = P(F)(4=). The aboveformulae show that the vectors 4< belong to the image of !, hence ! is

surjective. Moreover, these formulae also show that, (X�)= ∈ Ker(!), sothat ! defines a morphism ! from K[X]/(X − �)= to WE. The morphism

! has the same image as !, hence is surjective. Since, K[X]/(X − �)= hasdimension = as a K-vector space, as well as W, the morphism ! must be

an isomorphism. �

Let us now return to the proof of the Jordan decomposition. Thanks

to the lemma, we see that the subspace V� has a basis in which the

matrix of D (restricted to V�) is block-diagonal, the blocks being Jordan

matrices J=1(�), . . . , J=B(�). Moreover, the polynomials (X−�)=8 (ordered

by nondecreasing degrees) are the invariant factors of V�. This shows

the existence of the Jordan decomposition.

Page 276: (MOSTLY)COMMUTATIVE ALGEBRA

264 CHAPTER 5. MODULES OVER PRINCIPAL IDEAL RINGS

Conversely, the Jordan blocks determine the invariant factors of each

primary component of VD, so that two endomorphisms D and D′ areconjugate if and only if for each � ∈ K, the lists of sizes of Jordan blocks

corresponding � for D and D′ coincide. �

Exercises

1-exo603) a) Show that the permutation matrix P associated to the transposition

(1, 2) ∈ S2 belongs to the subgroup of GL2(Z) generated by E2(Z) together with the

matrices D8(−1) for 8 ∈ {1, 2}.b) Does it belong to the subgroup E2(Z)?c) Prove that the Weyl group of GL=(Z) is contained in the subgroup generated

by E=(Z) together with the matrices D8(−1).2-exo614) Let A be a commutative ring.

a) Show that E=(A) is a normal subgroup of the group GE=(A); show also that

GE=(A) = E=(A) · {D1(0)} so that GE=(A) is a semidirect product of E=(A) by A×.

b) Same question, replacing GE=(A) by GL=(A) and E=(A) by SL=(A).3-exo615) Let K be a division ring and let D = D(K×) be the derived group of the

multiplicative group K×, that is, the quotient of K

×by its subgroup generated by all

commutators [0, 1] = 010−11−1, for 0, 1 ∈ K

×. Write � for the canonical projection

from K×to D.

Let = be an integer and let V be a right K-vector space of dimension =. Let B(V) bethe set of bases of V and let Ω(V) be the set of all maps $ : B(V) → D satisfying the

following properties:

$(E101, . . . , E=0=) = $(E1, . . . , E=)�(01 . . . 0=)and

$(E1, . . . , E8−1, E8 + E 9 , E8+1, . . . , E=) = $(E1, . . . , E=)for (E1, . . . , E=) ∈ B(V), 01, . . . , 0= ∈ K

×and 8 , 9 ∈ {1, . . . , =} such that 8 ≠ 9.

a) For $ ∈ Ω(V), (E1, . . . , E=) ∈ B(V) and � ∈ S= , prove that

$(E�(1), . . . , E�(=)) = �(�(�))$(E1, . . . , E=).b) For � = (E1, . . . , E=) ∈ B(V), let �8 = (E1, . . . , E8−1, E8+1, . . . , E=). Let W be a

hyperplane of V and let E ∈ V W. Let ! ∈ Ω(W). Prove that there exists a unique

$ ∈ Ω(V) such that

$(E1, . . . , E=−1, E) = !(E1, . . . , E=−1)for every (E1, . . . , E=−1) ∈ B(W).c) Let (E1, . . . , E=) ∈ B(V) and let 0 ∈ D. Prove that there exists a unique $ ∈ Ω(V)

such that $(E1, . . . , E=) = 0.

Page 277: (MOSTLY)COMMUTATIVE ALGEBRA

EXERCISES 265

d) Let U ∈ GL(V). Prove that there exists a unique element �(U) ∈ D such that

$(U(E1), . . . ,U(E=)) = �(U)$(E1, . . . , E=) for every (E1, . . . , E=) ∈ B(V). Prove that themap � : GL(V) → D is a morphism of groups.

When K is a commutative, then D = K×, and the morphism � : GL=(K) → K

×is the

determinant. For this reason, the map � is called the noncommutative determinant. This

construction is due to J. Dieudonné.

e) Let U ∈ GL2(K) have matrix

(0 12 3

). If 0 ≠ 0, then �(U) = �(03 − 020−11); if 0 = 0,

then �(U) = �(−21).4-exo604) Let N be a positive integer.

a) Show that the group SL=(Z/NZ) is generated by its elementary matrices.

b) Deduce that the canonical morphism SL=(Z) → SL=(Z/NZ) (reduction mod. N) is

surjective.

c) Is the analogous morphism GL=(Z) → GL=(Z/NZ) surjective?5-exo608) a) Let D ∈ Z=

be a vector whose entries are coprime. Show by induction

on the smallest nonzero entry of D that there exists a matrix M ∈ SL=(Z)whose first

column is D.

b) Let M be a matrix with = rows and ? columns with entries in a principal ideal

domain A. Assume that ? 6 =.

Show that it is possible to complete M into a matrix P ∈ GL=(A) (that is, there existssuch a matrix P whose first ? columns give M) if and only if the ideal Δ?(M) generatedby all minors of size ? of M is equal to (1).6-exo607) Let A be a principal ideal domain and let K be its fraction field.

a) Let D = (01, . . . , 0=) ∈ A=. Let 0 ∈ A be a generator of the ideal (01, . . . , 0=). Show

that there exists a matrix M ∈ E=(A) such that MD = (0, 0, . . . , 0).b) Show that the group E=(A) acts transitively on the set of lines of K

=.

c) Let D1 = (1, 2) and D2 = (2, 1) in Q2, let D1 = QD1 and D2 = QD2 be the lines in Q2

generated by D1 and D2. Show that there does not exist a matrix M ∈ GL2(Z) such that

M(D1) and M(D2) are the two coordinate axes.

d) Show that there does not exist a matrix P ∈ GL2(:[X,Y]) which maps the line

generated by the vector (X,Y) ∈ :(X,Y)2 to the line generated by the vector (1, 0).7-exo160) Let A be a principal ideal domain, let K be its fraction field.

a) Let X be a nonzero vector in K=. Show that there exists a matrix M ∈ GL=(A)

whose first column is proportional to X.

b) Show that each square matrix M ∈ M=(K) can be written as a product PB where

P ∈ GL=(A) and B is an upper-triangular matrix with entries in K. (Argue by induction).

c) Numerical example: In the case A = Z, give such an explicit decomposition for the

matrix

M =©­«1/2 1 −1/42/5 2 2/33/4 1/7 −1

ª®¬ .

Page 278: (MOSTLY)COMMUTATIVE ALGEBRA

266 CHAPTER 5. MODULES OVER PRINCIPAL IDEAL RINGS

8-exo133) Let A be a principal ideal domain and let M be a free finitely generated

A-module.

a) For any < ∈ M, show that the following properties are equivalent:

(i) The vector < belongs to some basis of M;

(ii) There exists 5 ∈ M∨such that 5 (<) = 1;

(iii) In any basis of M, the coordinates of M are coprime;

(iv) There exists a basis of M in which the coordinates of < are coprime;

(v) For any 0 ∈ A and <′ ∈ M such that < = 0<′, one has 0 ∈ A×;

(vi) For any 0, 0′ ∈ A and <′ ∈ M such that 0< = 0′<′ and 0 ≠ 0, then 0′

divides 0.

Such a vector is said to be primitive.

b) Show that any nonzero vector is a multiple of a primitive vector.

c) Example : A = Z, M = Z4, < = (126, 210, 168, 504).

9-exo046) Let A be a principal ideal domain, let L,M be two finitely generated A-

modules. Show that HomA(L,M) is a finitely generated A-module.

10-exo617) Le A be a principal ideal domain and let M be a finitely generated A-

module. Let (31), . . . , (3=) be its invariant factors, where 31, . . . , 3= are non-units and

38 divides 38+1 for 1 6 8 < =.

a) Let< be a vector ofMwhose annihilator equals (3=). Show that the submoduleA<

of M generated by < admits a direct summand in M.

b) One assumes that A = Z and M = (Z/?Z) ⊕ (Z/?2Z). Give a necessary and

sufficient condition on a vector < ∈ M for the subbmodule A< to possess a direct

summand.

11-exo601) Let @(G, H) = 0G2 + 21GH + 2H2be a positive definite quadratic form with

real coefficients. Let < = inf(G,H)∈Z2 {0} @(G, H).a) Show that there exists a nonzero vector 41 ∈ Z2

such that < = @(41).b) Show that there exists 42 ∈ Z2

such that (41, 42) is a basis of Z2.

c) Deduce from the inequality @(42 + =41) > @(41) that < 6 2

√(02 − 12)/3.

12-exo610) Let A be a commutative ring and let I1, . . . , I= be ideals of A such that

I1 ⊂ I2 ⊂ · · · ⊂ I= ( A. Let M =⊕=

8=1A/I8 .

a) Let J be a maximal ideal of A containing I= . Endow the A-module M/JM with the

structure of an (A/J)-vector space. Show that its dimension is equal to =.

b) Show that any generating family of M has at least = elements.

c) One assumes that A is a principal ideal domain. Let M be a finitely generated

A-module and let (31), . . . , (3A) be its invariant factors. Show that any generating set

in M has at least A elements.

13-exo616) The goal of this exercise is to provide another proof of theorem 5.4.3 that

does not use matrix operations.

Let A be a principal ideal domain, let M be a free finitely generated A-module and

let N be a submodule of M.

Page 279: (MOSTLY)COMMUTATIVE ALGEBRA

EXERCISES 267

a) Show that there exists a linear form 5 on M such that the ideal I = 5 (N) of A is

maximal among the ideals of this form (that is, there does not exist 6 ∈ M∨such that

5 (N) ( 6(N)).Let M

′be the kernel of 5 and let N

′ = N ∩M′.

b) Show that for any linear form 5 ′ on M′and any vector < ∈ N

′, one has 5 ′(<) ∈ I.

c) Using the same induction method as the one used for proving proposition 5.4.1,

show that there exists a basis (41, . . . , 4=), an integer A ∈ {0, . . . , =} and elements

31, . . . , 3A ofA such that 38 divides 38+1 for any integer 8 < A and such that (3141, . . . , 3A4A)is a basis of N.

14-exo086) LetA be a principal ideal domain and letM be a finitely generatedA-module.

a) Justify the existence of an integer B, elements <1, . . . , <B of M, elements 31, . . . , 3Bof A such that 38 divides 38+1 for 1 6 8 < B − 1 such that (38) is the annihilator of <8

and M =⊕B

8=1A<8 .

b) Let 8 ∈ {1, . . . , B}. Show that there exists D8 ∈ EndA(M) such that

D8(<1) = · · · = D8(<B−1) = 0, D8(<B) = <8 .

c) Let D be any element in the center of EndA(M). Show that there exists 0 ∈ A such

that D(<) = 0< for every < ∈ M.

d) Let D : M→M be an additive map such that D ◦ E = E ◦ D for every E ∈ EndA(M).Show that there exists 0 ∈ A such that D(<) = 0< for every < ∈ M.

15-exo159) a) Give the list of all abelian groups of order 16 (up to isomorphism).

b) Give the list of all abelian groups of order 45 (up to isomorphism).

16-exo157) Let M be the set of triples (G, H, I) ∈ Z3such that G + H + I is even.

a) Show that M is a submodule of Z3; prove that it is free of rank 3 by exhibiting 3

linearly independent vectors in M.

b) Construct a basis of M.

c) Exhibit a linear map 5 : M→ Z/2Z such that M = Ker( 5 ). Conclude that Z3/M is

isomorphic to Z/2Z.17-exo137) Let L be the set of vectors (G, H, I) ∈ Z3

such that

G − 3H + 2I ≡ 0 (mod 4) et G + H + I ≡ 0 (mod 6).

a) Show that L is a free Z-submodule of Z3. What is its rank?

b) Construct a morphism 5 from Z3/L to (Z/4Z) × (Z/6Z) such that L = Ker( 5 ).Prove that 5 is surjective; conclude that Z3/L is isomorphic to (Z/4Z) × (Z/6Z).

c) Compute integers 31, 32, 33 and a basis (41, 42, 43) of Z3such that 31 |32 |33 and such

that (3141, 3242, 3343) is a basis of L.

18-exo211) a) Let G be a finite abelian group; let = be the smallest positive integer such

that =G = 0. Show that there exists an element 6 ∈ G of exact order =.

b) Let K be a field. Recover the fact that a finite subgroup of K∗is cyclic.

Page 280: (MOSTLY)COMMUTATIVE ALGEBRA

268 CHAPTER 5. MODULES OVER PRINCIPAL IDEAL RINGS

c) Let G be the multiplicative subgroup {±1,±8 ,±9 ,±:} of Hamilton’s quaternions H.

Prove that G is not cyclic (it suffices to remark that G is not commutative!). This shows

that in the previous question, the commutativity assumption on K is crucial.

19-exo635) a) Let G,H be finitely generated abelian groups such that G ×G ' H ×H.

Show that G ' H.

b) Let G,G′,H be finitely generated abelian groups such that G ×H ' G′ ×H. Show

that G ' G′.

c) Show that the finite generation hypothesis is necessary by exhibiting abelian

groups for which the two previous results fail.

20-exo632) IfG is a finite abelian group, onewritesG∗for the set of all groupmorphisms

from G to C∗. An element of G∗is called a character of G.

a) Show that G∗is a group for pointwise multiplication. Show also that it is finite.

b) In this question, one assumes that G = Z/=Z, for some positive integer =. If

" ∈ G∗, show that "(1) is an =th root of unity Prove that the map " ↦→ "(1) from G

to C∗ is an isomorphism from G∗to the group of =th roots of unity in C∗.

Construct an isomorphism from G to G∗.

c) If G = H × K, show that G∗is isomorphic to H

∗ × K∗.

d) Show that for any finite abelian group, G∗is isomorphic to G.

21-exo633) Let G be a finite abelian group. For any function 5 : G → C and any

character " ∈ G∗,

5 (") =∑6∈G

5 (6)"(6).

This defines a function 5 on G∗, called the Fourier transform of 5 .

a) Compute 5 when 5 = 1 is the constant function 6 ↦→ 1 on G.

b) For any 6 ∈ G, show that∑"∈G∗

"(6) ={

Card(G) if 6 = 0;

0 otherwise.

c) Let 5 : G → C be a function. Show that for any 6 ∈ G, one has the following

Fourier inversion formula:

5 (6) = 1

Card(G)∑"∈G∗

5 (")"(6)−1.

d) Show also the Plancherel formula:∑"∈G∗

��� 5 (")���2 = Card(G)∑6∈G

�� 5 (6)��2 .22-exo636) Let A be a matrix in M=(Z), viewed as an endomorphism of Z=

. Let

M = Z=/A(Z=). Show that M is a finite abelian group if and only if det(A) ≠ 0. In that

case, show also that Card(M) = |det(A)|.

Page 281: (MOSTLY)COMMUTATIVE ALGEBRA

EXERCISES 269

23-exo634) Let G be an abelian group and let H be a subgroup of G. One assumes that

H is divisible: for any ℎ ∈ H and any strictly positive integer =, there exists ℎ′ ∈ H that

=ℎ′ = ℎ.

a) Let K be a subgroup of G such that K∩H = {0} and K+H ≠ G. Let 6 ∈ G (K+H)and let K

′ = K + Z6. Show that K′ ∩H = {0} and K

′ ) K.

b) Using Zorn’s lemma, show that there is a maximal subgroup K ⊂ G such that

K ∩H = {0}. Using the previous question, show that G = K +H.

c) Show that G ' H × (G/H).24-exo641) Let K be a field, let V be a finite dimensional K-vector space and let

D ∈ EndK(V). Show that an endomorphism of V which commutes with every

endomorphism which commutes with D is a polynomial in D. (Introduce the K[X]-module structure on V which is defined by D.)

25-exo613) Let : be a field. Determine all conjugacy classes of matrices M ∈ M=(:)satisfying the indicated properties.

a) The characteristic polynomial of M is X3(X − 1).

b) The minimal polynomial of M is X(X − 1).c) One has (M − I=)2 = 0.

Page 282: (MOSTLY)COMMUTATIVE ALGEBRA
Page 283: (MOSTLY)COMMUTATIVE ALGEBRA

CHAPTER 6

FURTHER RESULTS ON MODULES

This chapter explores more advanced results in the theory of modules.

Despite its apparent simplicity, Nakayama’s lemma allows to efficiently

transfer results which are initially proved by reasoning modulo a maximal ideal,

or modulo the Jacobson radical. It is a very important tool in commutative

algebra.

The next three sections are devoted to three different finiteness conditions

for modules, all of them phrased in terms of chains of submodules, that is,

totally ordered sets of submodules: we say that a module has finite length if the

cardinality of all of these chains is bounded above, that it is noetherian if any

chain has a maximal element, and artinian if any chain has a minimal element.

Modules of finite length are probably the closest to finite dimensional vector

spaces. The length function is an integer-valued function that behaves similarly

to the dimension, and it is as useful.

Thenoetherian property, named in honor of EmmyNoether, is of a particular

importance in commutative algebra, as it is often guarantees that given modules

are finitely generated. As applications of this notion, I present two fundamental

finiteness theorems of Hilbert. First, the finite basis theorem asserts that

polynomial rings over a noetherian ring are polynomial rings. Using a method

due to Emil Artin and John Tate, I also prove that when a finite group acts on a

finitely generated algebra, the subalgebra of invariant elements is itself finitely

generated; this theorem is one of the starting results in the theory of invariants.

The role of the artinian property, named in honor of Emil Artin, is more

subtle and will be less obvious in this book. (Its strength would require the

introduction of other techniques, such as completion, which are not studied

here.) Nevertheless, I prove Akizuki’s theorem that asserts that a ring is artinian

if and only if it has finite length; in the case of commutative rings, artinian

Page 284: (MOSTLY)COMMUTATIVE ALGEBRA

272 CHAPTER 6. FURTHER RESULTS ON MODULES

rings can be also be characterized by the fact that they are noetherian and that

all of its prime ideals are maximal.

The last two sections consider commutative rings only and have an implicit

geometric content in algebraic geometry. I define the support of a module, and

its associated ideals. My exposition of associated ideals is slightly different

from the classical one, which is limited to modules over noetherian rings;

contrary to a plausible expectation, I feel that it makes the first results easier,

and their proofs more natural.

I then discuss the existence of primary decomposition, a theorem due

to Emmy Noether and Emmanuel Lasker that gives an approximation of the

decomposition into powers of primes. I conclude the chapter by proving Krull’s

intersection theorem.

6.1. Nakayama’s lemma

Recall (definition 2.1.5) that the Jacobson radical J of a ring A is the

intersection of all maximal left ideals of A; it is a two sided ideal of A

and 1 + 0 is invertible for every element 0 ∈ J (lemma 2.1.6).

Theorem (6.1.1) (Nakayama’s lemma). — LetA be a ring, let J be the Jacobson

radical of A. Let M be a finitely generated A-module such that M = MJ. Then

M = 0.

Proof. — We prove the theorem by induction on the number of elements

of a generating set of M.

If M is generated by 0 element, then M = 0.

Let then = > 1 and assume that the theorem holds for any A-module

which is generated by (= − 1) elements. Let M be an A-module which

is generated by = elements, say 41, . . . , 4=, and such that M = MJ. Let

N = M/41A. Then, N is generated by the classes of 42, . . . , 4= and one

has N = NJ. Consequently, N = 0. In other words, M = 41A.

Since M = MJ, there is 0 ∈ J such that 41 = 410, hence 41(1 − 0) = 0.

Since 0 ∈ J, 1 − 0 is right invertible in A and 41 = 0. This implies that

M = 0. �

Page 285: (MOSTLY)COMMUTATIVE ALGEBRA

6.1. NAKAYAMA’S LEMMA 273

Corollary (6.1.2). — Let A be a ring, let J be its Jacobson radical. Let M be a

finitely generated A-module, let N be a submodule of M such that M = N+MJ.

Then, M = N.

Proof. — Setting P = M/N, the hypothesis M = N +MJ implies that

P = PJ. Consequently, P = 0 and M = N. �

Corollary (6.1.3). — Let A be a ring and let J be its Jacobson radical. Let

U ∈ M=(A) be a matrix and let U be its image in M=(A/J). If U is right

invertible (resp. left invertible, resp. invertible), then U is right invertible (resp.

left invertible, resp. invertible).

Proof. — Let M = A=3and let D be the endomorphism of M represented

by the matrix U in the canonical basis (41, . . . , 4=) of A=3. If U is right

invertible, the endomorphism D of (A/J)=3it defines is surjective, so that,

for every < ∈ M, there exists <′ and <′′ ∈ M such that < = D(<′) + <′′and all coordinates of <′′ belong to J. In other words, M = D(M) +MJ.

By the preceding corollary, M = D(M), so that D is surjective. For every

8 ∈ {1, . . . , =}, choose <8 ∈ M such that D(<8) = 48 and let E be the

endomorphism of M such that E(48) = <8. Then, D ◦ E(48) = D(<8) = 48for every 8 so that D ◦ E = idM. In particular, D is right invertible. The

matrix V of E is then a right inverse to U.

The case of left invertible matrices is proven analogously, or by consid-

ering the transposed matrices, or by working in the opposite ring; the

case of invertible matrices follows. �

Corollary (6.1.4). — Let A be a commutative ring, let I be an ideal of A. Let

M be a finitely generated A-module, let N be a submodule of M such that

M = N + IM. Then, there exists 0 ∈ I such that (1 + 0)M ⊂ N.

Proof. — Let S = 1 + I = {1 + 0 ; 0 ∈ I}; it is a multiplicative subset

of A. The equation M = N +MI implies the equality S−1

M = S−1

N +S−1

M · S−1I of S

−1A-modules. For every 0 ∈ S

−1I, 1 + 0 is invertible

in S−1

A. Indeed, write 0 = D/(1 + E), for some D, E ∈ I; then one has

1+ 0 = (1+ (1+ E)D)/(1+ E), which is invertible since 1+ (1+ E)D ∈ 1+ I.

Consequently, S−1

I is contained in the Jacobson radical of S−1

A. Since

Page 286: (MOSTLY)COMMUTATIVE ALGEBRA

274 CHAPTER 6. FURTHER RESULTS ON MODULES

S−1

M is finitely generated (corollary 3.6.9), the preceding corollary

implies that S−1

M = S−1

N.

It follows that for every < ∈ M, there exists 0 ∈ S such that <0 ∈ N.

More precisely, let (<1, . . . , <=) be a generating family of M; for every

8 ∈ {1, . . . , =}, let 08 ∈ S be such that <808 ∈ N, and set 0 = 01 . . . 0=.

Then <80 ∈ N for every 8 ∈ {1, . . . , =}, hence M0 ⊂ N. �

Remark (6.1.5). — One can also prove the corollary by a similar induction

to the one of Nakayama’s theorem.

Corollary (6.1.6). — Let A be a commutative ring, let M be a finitely generated

A-module and let D ∈ EndA(M) be an endomorphism of M. If D is surjective,

then D is an isomorphism.

Proof. — Let us use D to view M as an A[X]-module, setting P · < =

P(D)(<) for every P ∈ A[X] and every < ∈ M (remark 3.2.6). Then M

is finitely generated as an A[X]-mdule. Since D is surjective, one has

M = D(M) = X ·M = (X)M. By the preceding corollary, there exists

a polynomial P ∈ A[X] such that (1 − XP(X)) ·M = 0. In other words,

0 = < − D(P(D)(<)) for every < ∈ M, so that idM = D ◦ P(D). Since

P(D) ◦ D = D ◦ P(D) = (XP)(D), this shows that P(D) is the inverse of D,

hence D is an isomorphism. �

6.2. Length

Definition (6.2.1). — Let A be a ring. One says that an A-module is simple if

it is nonzero and if its only submodules are 0 and itself.

Examples (6.2.2). — a) The module 0 is not simple.

b) Let A be a ring and let I be a right ideal of A. The bijection

V ↦→ cl−1

I(V) between A-submodules of A3/I and submodules of A3

containing I preserves the containment relation. Consequently, the

A-module A3/I is simple if and only if I ≠ A and the only right ideals

of A which contain I are I and A. In other words, A3/I is a simple right

A-module if and only if I is a maximal right ideal of A.

Page 287: (MOSTLY)COMMUTATIVE ALGEBRA

6.2. LENGTH 275

c) Let M be a simple A-module and let < be a nonzero element of M.

The set <A of all multiples of M is a nonzero submodule of M. Since M

is simple, one has <A = M.

The set I of all 0 ∈ A such that <0 = 0 is is a right ideal of A and the

map 0 ↦→ <0 induces an isomorphism from A3/I to M. Consequently, I

is a maximal right ideal of A.

d) Assume that A is a division ring. Then, a simple A-module is

nothing but a 1-dimensional A-vector space.

e) LetAbe a ring and let Ibe a two-sided ideal ofA. In the identification

between A-modules annihilated by I and (A/I)-modules, A-submodules

correspond to (A/I)-submodules. Consequently, an A-module which

is annihilated by I is a simple A-module if and only if it is simple as an

(A/I)-module.

f) Let M be an A-module and let N be a submodule of M. The

submodules of M/N correspond to the submodules of M containing N.

Consequently, M/N is a simple A-module if and only if N is a maximal

submodule of M, in the sense that M ≠ N and the only submodules of M

containing N are N and M.

Proposition (6.2.3) (Schur lemma). — Let A be a ring and let D : M→ N a

nonzero morphism of A-modules.

a) If M is simple, then D is injective.

b) If N is simple, then D is surjective.

c) If M and N are both simple, then D is an isomorphism.

Proof. — The image of D is a nonzero submodule of N; if N is simple,

then Im(D) = N and D is surjective. The kernel of D is a submodule of M,

distinct from M; if M is simple, then Ker(D) = 0 and D is injective. If both

M and N are simple, then D is bijective, hence an isomorphism. �

Corollary (6.2.4). — The ring of endomorphisms of a simple A-module is a

division ring.

Proof. — Let M be a simple A-module. By the proposition, any nonzero

element of EndA(M) is an isomorphism, hence is invertible in EndA(M).

Page 288: (MOSTLY)COMMUTATIVE ALGEBRA

276 CHAPTER 6. FURTHER RESULTS ON MODULES

Moreover, M ≠ 0, so that idM ≠ 0 and EndA(M) is not the zero ring. This

proves that EndA(M) is a division ring. �

Definition (6.2.5). — Let A be a ring and let M be an A-module. A chain of

submodules of M is a sequence (M0, . . . ,M=) of submodules of M such that

M0 ( M1 · · · ( M=; its length is =.

The length of M is the supremum of the lengths of all chains of submodules

of M; it is denoted by ℓA(M) or ℓ (M).

Examples (6.2.6). — a) The null A-module has length 0, and con-

versely.

b) Simple A-modules are exactly the A-modules of length 1.

Let indeed M be an A-module. If M is simple, then the only chains of

submodules of M are (0), (M) (both of length 0) and (0,M) (of length 1),

so that M has length 1. Conversely, if ℓA(M) = 1, then M ≠ 0 (otherwise,

the only chain of submodules of M would be (0), of length 0) and if there

existed a submodule N such that N ≠ 0 and N ≠ M, the chain (0,N,M)would imply that ℓA(M) > 2.

c) Assume that A is a division ring. The expression “chain of sub-

modules” is equivalent to “increasing sequence of vector subspaces”. In

such a sequence, the dimension increases at least by 1 at each inclusion,

and exactly by 1 if the sequence cannot be enlarged. This implies that

ℓA(M) = dimA(M): the length of M is its dimension as a vector space.

d) The length of the Z-module Z is infinite, as witnessed by the

arbitrarily long chains of ideals: (2=Z, 2=−1Z, . . . ,Z).e) Let I be a two-sided ideal and let M be an A-module annihilated

by I. Then every submodule of M is also annihilated by I, hence can

be viewed as an A/I-module, and conversely. Consequently, the length

of M as an A-module is equal to its length as an A/I-module.

f) A chain (M0,M1, . . . ,M=) of submodules is maximal if it cannot

be made longer by inserting a module at the beginning, between two

elements, or at the end of the sequence without destroying its increasing

character. Therefore, such a chain is maximal if and only if M0 = 0,

M8/M8−1 is a simple A-module for 1 6 8 6 =, and M= = M.

Page 289: (MOSTLY)COMMUTATIVE ALGEBRA

6.2. LENGTH 277

If an A-module has finite length, any chain can be extended to a

maximal one. A chain of length ℓA(M)must be maximal; and we shall

prove below (theorem 6.2.10) that all maximal chains have length ℓA(M),but this is not a priori obvious.

Lemma (6.2.7). — Let A be a ring, let M be a right A-module, let M′,M′′,N be

submodules ofM such thatM′ ⊂ M

′′, M′∩N = M

′′∩N andM′+N = M

′′+N.

Then M′ = M

′′.

Proof. — Let < ∈ M′′. Since M

′′ ⊂ M′′ +N and M

′′ +N = M′ +N, there

exist<′ ∈ M′and = ∈ N such that< = <′+=. Then, = = <−<′ ∈ M

′′∩N,

hence = ∈ M′ ∩ N. This implies that < = <′ + = belongs to M

′.

Consequently, M′′ ⊂ M

′, hence M

′′ = M′. �

Proposition (6.2.8). — Let A be a ring, let M be an A-module and let N be a

submodule of N. Assume that among the modules M, N and M/N, two have

finite length. Then the length of the other one is finite too and one has

ℓA(M) = ℓA(N) + ℓA(M/N).

Proof. — Let (N0,N1, . . . ,N0) be a chain of submodules of N of length 0.

Let (P0, . . . , P1) be a chain of submodules of M/N of length 1; for

each 8 ∈ {0, . . . , 1}, there exists a unique submodule M8 ⊂ M contain-

ing N such that P8 = M8/N and (M0, . . . ,M1) is a chain in M. Then,

(N0,N1, . . . ,N0 ,M1, . . . ,M1) is a chain of length 0 + 1 of submodules

of M, hence the inequality ℓ (M) > ℓ (N) + ℓ (M/N), with the usual con-

vention∞ = = +∞. In particular, if M has finite length, then so have N

and M/N.

Conversely, let us assume that N and M/N have finite length; we want

to prove that M has finite length and that ℓ (M) = ℓ (N) + ℓ (M/N). Letthus (M0, . . . ,M0) be a chain of submodules of M. Applying lemma 6.2.7

to M′ = M8 and M

′′ = M8+8, we see that at least one of the two inclusions

M8 ∩N ⊂ M8+1 ∩N and M8 +N ⊂ M8+1 +N

is strict. Consequently, removing the inclusions which are equalities in

the sequences

M0 ∩N ⊂ M1 ∩N ⊂ · · · ⊂ M0 ∩N

Page 290: (MOSTLY)COMMUTATIVE ALGEBRA

278 CHAPTER 6. FURTHER RESULTS ON MODULES

and

(M0 +N)/N ⊂ (M1 +N)/N ⊂ · · · ⊂ (M0 +N)/N,

we obtain two chains of submodules in N and in M/N whose lengths add

up to 0, at least. In particular, ℓ (N) + ℓ (M/N) > ℓ (M). This concludesthe proof of the proposition. �

Corollary (6.2.9). — A direct sum

⊕=8=1

M8 of a family M1, . . . ,M= of right

A-modules of finite lengths has finite length.

Theorem (6.2.10) (Jordan–Hölder). — Let A be a ring and let M be a right

A-module. Let M0 ( M1 ( . . .M< and N0 ( N1 ( . . .N= be maximal chains

of submodules of A. Then < = = = ℓA(M) and there exists a permutation �of {1, . . . , =} such that for each 8 ∈ {1, . . . , =}, the module M8/M8−1 is

isomorphic to N�(8)/N�(8)−1.

In particular, if M possesses a maximal chain (M0,M1, . . . ,M=) ofsubmodules of M, then ℓA(M) = =. Moreover, up to reordering, the

simple A-modules M8/M8−1, for 1 6 8 6 =, do not depend on the chosen

maximal chain.

On the other hand, if M does not possess a maximal chain, than it

admits arbitrarily long chains, hence ℓA(M) is infinite.Proof. — For 1 6 8 6 < and 0 6 9 6 =, let us set M8 , 9 = M8−1 +M8 ∩N9.

It is a submodule of M such that M8−1 ⊂ M8 , 9 ⊂ M8; Moreover, M8 ,0 =

M8−1 and M8 ,= = M8. Consequently, there exists a smallest integer

�(8) ∈ {1, . . . , =} such that M8 ,�(8) = M8; then, M8−1 +M8 ∩N�(8) = M8

and M8 ∩ N�(8)−1⊂ M8−1. In fact, �(8) is the only integer 9 such that

M8 , 9/M8 , 9−1 ≠ 0.

Similarly, let us set N9 ,8 = N9−1 +N9 ∩M8, for 1 6 9 6 = and 0 6 8 6 <.

For any 9, there exists a smallest integer �(9) ∈ {1, . . . , <} such that

N9 ,8 = N9; it is the unique integer 8 such that N9 ,8/N9 ,8−1 ≠ 0.

Page 291: (MOSTLY)COMMUTATIVE ALGEBRA

6.2. LENGTH 279

By Zassenhaus’s lemma below, for any 8 , 9 such that 1 6 8 6 < and

1 6 9 6 =, we have isomorphisms

M8 , 9

M8 , 9−1

=M8−1 +M8 ∩N9

M8−1 +M8 ∩N9−1

'M8 ∩N9

(M8−1 ∩N9) + (M8 ∩N9−1)

'N9−1 +M8 ∩N9

N9−1 +M8−1 ∩N9

=N9 ,8

N9 ,8−1

of A-modules. In particular, M8 , 9/M8 , 9−1 is nonzero precisely 9 = �(8),hence if and only if N9 ,8/N9 ,8−1 is nonzero. Considering 9 = �(8), we

get �(�(8)) = 8; setting 8 = �(9), we get �(�(9)) = 9. This implies that �is a bijection from {1, . . . , <} to {1, . . . , =}, with reciprocal bijection �;in particular, < = =. Moreover, if 9 = �(8), M8 , 9 = M8, M8 , 9−1 = M8−1,

while N9 ,8 = N9 and N9 ,8−1 = N8−1; the above isomorphisms imply that

the A-modules M8/M8−1 and N�(8)/N�(8)−1are isomorphic, hence the

theorem. �

Lemma (6.2.11) (Zassenhaus). — Let M be an A-module, let N′ ⊂ N and

P′ ⊂ P be submodules of M. There exist isomorphisms of A-modules

N′ + (N ∩ P)

N′ + (N ∩ P

′)∼←− N ∩ P

(N′ ∩ P) + (N ∩ P′)∼−→ P

′ + (N ∩ P)P′ + (N′ ∩ P) ,

respectively induced by the inclusions N ∩ P→ N′ + (N ∩ P) and N ∩ P→

P′ + (N ∩ P).

Proof. — Let 5 : N∩P→ (N′+(N∩P))/(N′+(N∩P′)) be the linear map

defined as the composition of the injection 9 from N∩P into N′+ (N∩P)

and of the canonical surjection � from N′ + (N ∩ P) to its quotient by

N′+(N∩P

′). Let us show that 5 is surjective. Indeed, let G ∈ N′+(N∩P);

one may write G = =1 + =2, with =1 ∈ N′and =2 ∈ N ∩ P. One has

�(=1) = 0, hence �(G) = �(=2) = 5 (=2) and �(G) belongs to the image

of 5 .

Page 292: (MOSTLY)COMMUTATIVE ALGEBRA

280 CHAPTER 6. FURTHER RESULTS ON MODULES

Let now G ∈ N ∩ P be any element such that 5 (G) = 0. By assumption,

one may write G = =1 + =2, where =1 ∈ N′and =2 ∈ N∩ P

′. In particular,

=2 ∈ P so that =1 = G − =2 ∈ P. It follows that =1 ∈ N′ ∩ P and G

belongs to the sum of the submodules N′ ∩ P and N ∩ P

′of N ∩ P.

Conversely, these submodules are both contained in Ker( 5 ), henceKer( 5 ) = (N′ ∩ P) + (N ∩ P

′). Passing to the quotient, 5 induces an

isomorphism

N ∩ P

(N′ ∩ P) + (N ∩ P′) '

N′ + (N ∩ P)

N′ + (N ∩ P

′) ,

so that the first two A-modules in the lemma are indeed isomorphic.

Exchanging the roles of N′,N and of P

′, P, the second and the third

A-modules are isomorphic too, QED. �

Remark (6.2.12). — Let K be a division ring and let M be a K-vector space.

a) Let us explain how the concept of length allows to recover the main

results about the dimension of vector spaces.

If M is a simple K-module, then, for any nonzero element G of M,

GK is a nonzero submodule of M, hence GK = M. Consequently, (G)is a basis of M. Conversely, assume that M is a K-module generated

by one element G; then, for any nonzero H ∈ M, there exists 0 ∈ K

such that H = G0. Necessarily, 0 ≠ 0, so that G = H0−1belongs the

submodule generated by H, hence M = HK. The simple K-modules are

the K-modules generated by a single nonzero element.

Assume that M is finitely generated, say by a family (G1, . . . , G=). Letus also assume that this family is minimal, so that it is a basis of M.

Then, for 8 ∈ {0, . . . , =}, define M8 = Span(G1, . . . , G8). We thus defined

a family of subspaces of M such that M0 ⊂ M1 ⊂ · · · ⊂ M=. If M8 = M8−1,

then G8 ∈ Span(G1, . . . , G8−1), contradicting the hypothesis that the family

(G1, . . . , G=) be minimal among those generating M. Therefore, M8/M8−1

is a nonzero vector space generated by one element (the class of G8), hence

is a simple K-module. Consequently, the theorem of Jordan-Hölder

implies the two following results:

– ℓK(M) = =;– Any minimal generating family of M has exactly = elements.

This reproves the fact that any two bases of M have the same cardinality.

Page 293: (MOSTLY)COMMUTATIVE ALGEBRA

6.2. LENGTH 281

b) Assume that M is finitely dimensional. Let E = (41, . . . , 4=) andF = ( 51, . . . , 5=) be two bases of M.

For 0 6 8 6 =, define M8 = Span(41, . . . , 48) and N8 = Span( 51, . . . , 58).The proof of the theorem of Jordan-Hölder furnishes a (unique) per-

mutation � of {1, . . . , =} such that M8−1 + M8 ∩ N�(8)−1= M8−1 and

M8−1 +M8 ∩ N�(8) = M8, for every 8 ∈ {1, . . . , =}. For any 8, let G8 be

a vector belonging to M8 ∩N�(8) but not to M8−1. For every 8, one has

Span(G1, . . . , G8) = M8; it follows that X = (G1, . . . , G=) is a basis of M;

moreover, there exists a matrix B1, in upper triangular form, such that

X = EB1.

Set � = �−1. Set H 9 = G�(9) for 9 ∈ {1, . . . , =}; one has H 9 ∈ N9 ∩M�(8)

and H 9 ∉ N9−1. Similarly, one thus has Span(G�(1), . . . , G�(9)) = N9 for

every 9. Consequently, there exists a matrix B2, still in upper triangular

form, such that (G�(1), . . . , G�(=)) = FB2. Let P� be the permutation matrix

associated to �, we have (G�(1), . . . , G�(=)) = (G1, . . . , G=)P�. This implies

that FB2 = EB1P�, hence F = EB1P�B−1

2. Therefore, the matrix A =

B1P�P−1

2that expresses the coordinates of the vectors of F in the basis E

is the product of an upper-triangular matrix, a permutation matrix and

another upper-triangular matrix.

In the group GL(=,K), let B be the subgroup consisting of upper-

triangularmatrices, and letW be the subgroup consisting of permutation

matrices. We have proved that GL(=,K) = BWB: this is called the Bruhat

decomposition.

Proposition (6.2.13). — Let A be a commutative ring, let S be a multiplicative

subset of A and let M be a A-module of finite length. Then S−1

M is a

S−1

A-module of finite length, and ℓS−1A(S−1

M) 6 ℓA(M).

Proof. — Let us pose N = S−1

M and let (N0,N1, . . . ,N=) be a chain

of submodules of N. For every 8, let M8 be the inverse image of N8

in M by the canonical morphism M→ S−1

M. By construction, we have

inclusions of submodules M0 ⊂ M1 ⊂ · · · ⊂ M=. More precisely, since

S−1

M8 = N8 for every 8 (see proposition 3.6.7), this is even a chain of

submodules, hence ℓA(M) > =. By definition, the length ℓS−1A(S−1

M) isthe supremum of those integers =, hence ℓA(M) > ℓS−1

A(S−1

M). �

Page 294: (MOSTLY)COMMUTATIVE ALGEBRA

282 CHAPTER 6. FURTHER RESULTS ON MODULES

6.3. The noetherian property

Definition (6.3.1). — Let A be a ring. One says that an A-module M is

noetherian if every nonempty family of submodules of M possesses a maximal

element.

Proposition (6.3.2). — Let A be a ring and let M be an A-module. The

following properties are equivalent:

(i) The module M is noetherian;

(ii) Every submodule of M is finitely generated;

(iii) Every increasing family of submodules is stationary.

Proof. — (i)⇒(ii) Let us assume that M is noetherian, that is, any

nonempty family of submodules of M admits a maximal element and

let us show that every submodule of M is finitely generated.

Let N be a submodule of M and let us consider the family �N of all

finitely generated submodules of N. This family is nonempty because

the null module 0 belongs to�N. By hypothesis,�N possesses a maximal

element, say, N′. By definition, the A-module N

′is a finitely generated

submodule of N and no submodule P of N such that N′ ( P is finitely

generated.

For every < ∈ N, the A-module P = N′ +A< satisfies N

′ ⊂ P ⊂ N and

is finitely generated; by maximality of N′, one has P = N

′, hence < ∈ N

′.

This proves that N′ = N, hence N is finitely generated.

(ii)⇒(iii). Let us assume that every submodule of M is finitely gener-

ated and let us consider an increasing sequence (M=)=∈N of submodules

of M. Let N =⋃

M= be the union of these modules M=. Since the

family is increasing, N is a submodule of M. By hypothesis, N is finitely

generated. Consequently, there exists a finite subset S ⊂ N such that

N = 〈S〉. For every B ∈ S, there exists an integer =B ∈ N such that B ∈ M=B ;

then, B ∈ M= for any integer = such that = > =B . Let us set � = sup(=B),so that S ⊂ M�. It follows that N = 〈S〉 is contained in M�. Finally, for

= > �, the inclusions M� ⊂ M= ⊂ N ⊂ M� for = > � show that M= = M�.

We have shown that the sequence (M=) is stationary.(iii)⇒(i) Let us assume that any increasing sequence of submodules

of M is stationary and let (M8)8∈I be a family of submodules of M indexed

by a nonempty set I. Assuming by contradiction that this family has no

Page 295: (MOSTLY)COMMUTATIVE ALGEBRA

6.3. THE NOETHERIAN PROPERTY 283

maximal element, we are going to construct from the family (M8) a strictlyincreasing sequence of submodules of M. Fix 81 ∈ I. By hypothesis,

M81 is not a maximal element of the family (M8), so that there exists

82 ∈ I such that M81 ( M82. Then, M82 is not maximal neither, hence

the existence of 83 ∈ I such that M82 ( M83. Finally, we obtain a strictly

increasing sequence (M8=)=∈N of submodules of M, hence the desired

contradiction. (This part of the proof has nothing to do with modules, it is

valid in any ordered set.) �

Proposition (6.3.3). — Let A be a ring, let M be an A-module and let N be

a submodule of M. Then M is noetherian if and only if both N and M/N are

noetherian.

Proof. — Let us assume that M is noetherian. Since every submodule

ofN is also a submodule ofM, every submodule ofN is finitely generated,

and N is noetherian. Let � be a submodule of M/N, let P = cl−1(�)

be its inverse image by the canonical morphism cl : M → M/N. By

hypothesis, P is finitely generated, so that � = cl(P) is the image of a

finitely generated module, hence is finitely generated too. This shows

that M/N is noetherian.

Let us assume that N and M/N are noetherian. Let P be a submodule

of M. By assumption, the submodule P ∩N of N is finitely generated, as

well as the submodule cl(P) of M/N. Moreover, cl(P) ' P/(P ∩N). Byproposition 3.5.5 applied to the module P and its submodule P ∩N, P is

finitely generated. �

Corollary (6.3.4). — The direct sum M1 ⊕ · · · ⊕ M= of a finite family of

noetherian A-modules M1, . . . ,M= is noetherian.

Definition (6.3.5). — One says that a ring A is a left noetherian ring if the

left A-module AB is noetherian. One says that A is a right noetherian ring if

the right A-module A3 is noetherian.

When a commutative ringA is left noetherian, it is also right noetherian,

and vice versa; one then simply says that A is noetherian.

Page 296: (MOSTLY)COMMUTATIVE ALGEBRA

284 CHAPTER 6. FURTHER RESULTS ON MODULES

Remark (6.3.6). — Let A be a ring. The submodules of the left A-

module AB are its left ideals. Consequently, the ring A is left noetherian

if and only if one of the following (equivalent) properties holds:

(i) Every nonempty family of left ideals of A possesses a maximal

element;

(ii) Every left ideal of A is finitely generated;

(iii) Every increasing family of left ideals of A is stationary.

In particular, a principal ideal domain (all ofwhose ideals are generated

by one element) is a noetherian ring. We had already observed this fact

in lemma 2.5.6 in the course of the proof that a principal ideal domain is

a unique factorization domain.

Corollary (6.3.7). — Assume that the ring A is left noetherian (resp. right

noetherian) and let = be an integer. Every submodule of the left A-module A=B

(resp. of the right A-module A=3) is finitely generated.

Corollary (6.3.8). — Assume that the ring A be left noetherian (resp. right

noetherian). Then, for any two-sided ideal I of A, the quotient ring A/I is leftnoetherian (resp. right noetherian).

Proof. — Indeed, every left ideal of A/I is of the form J/I for some left

ideal J of A containing I. By hypothesis, J is finitely generated, so that

J/I is finitely generated too. The case of right ideals is analogous. �

Remark (6.3.9). — Let A be a ring and let M be a right A-module. Let

I = AnnA(M) be the annihilator of M; it is a two sided ideal of A. If M is

noetherian, then the quotient ring A/I is right noetherian.Let indeed (<1, . . . , <=) be a finite generating family of M. The map

5 : A3 →M=given by 5 (0) = (<10, . . . , <=0) is a morphism of right A-

modules, and its kernel is equal to I. Consequently, 5 defines an injective

morphism ! : A3/I→M=. Since M is noetherian, M

=is noetherian too

(corollary 6.3.4). Consequently, A3/I is a noetherian right A-module.

Since its submodules are the right ideals of A/I, this proves that A/I isright noetherian.

Page 297: (MOSTLY)COMMUTATIVE ALGEBRA

6.3. THE NOETHERIAN PROPERTY 285

Proposition (6.3.10). — Let A be a commutative ring and let S be a multiplica-

tive subset of A. If M is a noetherian A-module, then S−1

M is a noetherian

S−1

A-module.

Proof. — Let be a submodule of S−1

M. By proposition 3.6.7, there

exists a submodule N of M such that = S−1

N. Since M is a noetherian

A-module,N is finitely generated. Consequently, is afinitely generated

S−1

A-module. This shows that S−1

M is a noetherian S−1

A-module. �

Corollary (6.3.11). — Let A be a commutative noetherian ring and let S be a

multiplicative subset of A. The fraction ring S−1

A is noetherian.

Proof. — By proposition 6.3.10, S−1

A is a noetherian S−1

A-module.

Consequently, S−1

A is a noetherian ring. �

Theorem (6.3.12) (Hilbert). — For any a commutative noetherian ring A, the

ring A[X] is noetherian.

Proof. — Let I be an ideal of A[X]; let us prove that I is finitely generated.

For = > 0, let I= be the set of coefficients of X=, in all polynomials of

degree 6 = belonging to I; explicitly, I= is the set of all 0 ∈ A for which

there exists a polynomial P ∈ I such that P = 0X=+terms of lower degree

(which we will write P = 0X= + . . . ). Observe that I= is an ideal of A.

Indeed, if 0, 1 ∈ I=, there are polynomials P,Q ∈ I of degree 6 =

such that P = 0X= + · · · and Q = 1X= + · · · . Then, for any D, E ∈ A,

DP + EQ = (D0 + E1)X= + · · · ; since I is an ideal of A, DP + EQ ∈ I, hence

D0 + E1 ∈ I=. Moreover, one may write the polynomial 0 as 0 = 0X= + · · · ,

so that 0 ∈ I=.

For any integer =, one has I= ⊂ I=+1; indeed, if 0 ∈ I= and P ∈ I is such

that P = 0X= + · · · , then XP = 0X=+1 + · · · , so that 0 ∈ I=+1. Since A is a

noetherian ring, the sequence (I=) is stationary. Let � ∈ N be such that

I= = I� for any integer = > �.The ideals I0, I1, . . . , I� are finitely generated; let us choose, for every

integer = 6 �, a generating family (0=,8)1686A(=) of I=, as well as polyno-

mials P=,8 ∈ I= such that P=,8 = 0=,8X= + · · · . Let J be the ideal of A[X]

generated by the polynomials P=,8 for 0 6 = 6 � and 1 6 8 6 A(=). Theideal J is finitely generated and, by construction, one has J ⊂ I. We

Page 298: (MOSTLY)COMMUTATIVE ALGEBRA

286 CHAPTER 6. FURTHER RESULTS ON MODULES

shall now show the converse inclusion by induction on the degree of a

polynomial P ∈ I.

Ifdeg(P) = 0, then P is constant and belongs to I0, so that P ∈ J. Assume

now that every polynomial in I whose degree is < = belongs to J. Let

P ∈ I be a polynomial of degree = and let 0 be its leading coefficient. Let

< = inf(=, �); if = 6 �, then < = =, hence 0 ∈ I<; otherwise, < = � and

0 ∈ I� since I= = I� by definition of �. Consequently, there are elements

28 ∈ A, for 1 6 8 6 A(<) such that 0 =∑8 280<,8; let Q be the polynomial

Q = P − X=−< ∑

8 28P<,8. By construction, deg(Q) 6 =; moreover, the

coefficient of X=in Q is equal to 0 −∑

8 280<,8 = 0, so that deg(Q) < =. By

induction, Q ∈ I, so that Q ∈ J. It follows that P ∈ J.

We thus have proved that I = J. In particular, I is finitely generated. �

Corollary (6.3.13). — Let A be a commutative noetherian ring and let = be a

positive integer. The ring A[X1, . . . ,X=] is noetherian. In particular, for any

field K, the ring K[X1, . . . ,X=] is noetherian.

Corollary (6.3.14). — Let A and B be commutative rings. Assume that A is

a noetherian ring and that B is a finitely generated A-algebra. Then, B is a

noetherian ring.

Proof. — By hypothesis, there are elements 11, . . . , 1= ∈ B which gen-

erate B as an A-algebra, so that the canonical morphism of A-algebras

from A[X1, . . . ,X=] to B which maps X8 to 18 is surjective. This shows

that B is a quotient of the noetherian ring A[X1, . . . ,X=]. Consequently,B is noetherian. �

Theorem (6.3.15) (Hilbert, 1893). — Let K be a field, let A be a finitely

generated commutative K-algebra and let G be a finite subgroup of AutK(A).Then, the set A

Gof all 0 ∈ A such that 6(0) = 0 for any 6 ∈ G is a finitely

generated K-algebra.

It is precisely to prove this theorem that Hilbert introduced the notion

of a noetherian ring and proved that polynomial rings over a field are

noetherian.1

1This needs a more precise reference!

Page 299: (MOSTLY)COMMUTATIVE ALGEBRA

6.3. THE NOETHERIAN PROPERTY 287

Example (6.3.16). — Let us assume that A = K[X1, . . . ,X=] and G is the

symmetric group S= acting on A by permuting the indeterminates X8.

Then, AS=

is the algebra of symmetric polynomials. In this case, it

is generated by the elementary symmetric polynomials S0, S1, . . . , S=defined, for 1 6 9 6 =, by

S9 =

∑81<···<8 9

X81 . . .X8 9 .

For example, S0 = 1, S1 = X1 + · · · + X= and S= = X1 . . .X=.

In fact, we prove a more precise result: Let K be any commutative ring,

and let P ∈ K[X1, . . . ,X=] be a symmetric polynomial; there exists a unique

polynomial Q ∈ K[T1, . . . , T=] such that P = Q(S1, . . . , S=); more precisely, if

2T<1

1. . . T

<== is a monomial that appears in Q, then <1 + 2<2 + · · · + =<= 6

deg(P). (We say that the “weighted degree” of Q is at most that of P.)

Let us now prove this result by induction, first on =, then on deg(P).When = = 1, one has S1 = X1 and A

S1 = A = K[X1]. Assume that the

result holds for (= − 1). Writing S′0, . . . , S′

=−1for the symmetric polyno-

mials in the variables X1, . . . ,X=−1, we observe the relations S0 = S′0, S1 =

S′1+X=S

′0, S2 = S

′2+X=S

′1, . . . , S=−1 = S

′=−1+X=S

′=−2

, and S= = X=S′=−1

. Con-

sider a symmetric polynomialP ∈ A[X1, . . . ,X=]. ThenP(X1, . . . ,X=−1, 0)is a symmetric polynomial in A[X1, . . . ,X=−1], hence it can be expressed

as a polynomial in S′1, . . . , S′

=−1. Let thus Q1 ∈ A[T1, . . . , T=1

] be such that

P(X1, . . . ,X=−1, 0) = Q1(S′1, . . . , S′

=−1), where the weighted degree of Q1

is at most that of P(X1, . . . ,X=−1, 0), hence at most that of P. Let us then

consider the polynomial P1 = P −Q1(S1, . . . , S=−1); this is a symmetric

polynomial of degree 6 deg(P) such that P1(X1, . . . ,X=−1, 0) = 0, so that

all monomials apppearing in P1 are multiple of X= and X= divides P1.

By symmetry, X1, . . . ,X=1divide P1, which means that every monomial

appearing in P1 is divisible by the product X1 . . .X= = S=. There exists

a polynomial P2 ∈ K[X1, . . . ,X=] such that P1 = X1 . . .X=P2, and P2

is necessarily symmetric (because X1 . . .X= is symmetric and regular),

and its degree is 6 deg(P) − =. By induction, there exists a polyno-

mial Q2 ∈ A[T1, . . . ,T=] of weighted degree 6 deg(P) − = such that

P2 = Q2(S1, . . . , S=). Let us set Q = Q1 + T=Q2; its weighted degree

is 6 deg(P) and Q(S1, . . . , S=) = Q1(S1, . . . , S=−1) + S=Q2(S1, . . . , S=) =(P − P1) + X1 . . .X=P2 = P.

Page 300: (MOSTLY)COMMUTATIVE ALGEBRA

288 CHAPTER 6. FURTHER RESULTS ON MODULES

This proves existence; let us not establish the uniqueness of such a

polynomial. Let Q ∈ K[T1, . . . ,T=] be such that P = Q(S1, . . . , S=) =0. Then 0 = P(X1, . . . ,X=−1, 0) = Q(S′

1, . . . , S′

=−1, 0). By induction,

the polynomial Q(T1, . . . ,T=−1, 0) vanishes, which means that there

exists a polynomial Q1 ∈ K[T1, . . . ,T=] such that Q = T=Q1. Then

0 = Q(S1, . . . , S=) = S=Q1(S1, . . . , S=). Since S= = X1 . . .X= is a regular

element of K[X1, . . . ,X=], one has Q1(S1, . . . , S=) = 0. By induction on

the degree of Q, one has Q1 = 0, hence Q = T=Q1 = 0.

The proof of theorem 6.3.15 requires three steps.

Lemma (6.3.17). — The set AGis a subalgebra of A.

Proof. — We need to prove that

– The elements 0 and 1 belong to AG;

– If 0 and 1 belong to AG, so do 0 + 1 and 01;

– If 0 belongs to AGand � ∈ K, then �0 belongs to A

G.

Recall that every 6 ∈ G is an automorphism of A as a K-algebra, so that

6(0) = 0, 6(1) = 1, 6(0 + 1) = 6(0) + 6(1), 6(01) = 6(0)6(1) for 0, 1 ∈ A,

6(�0) = �6(0) for 0 ∈ A, 6 ∈ G and � ∈ K. In particular, we see that

0 ∈ AGand 1 ∈ A

G. Moreover, if 0, 1 ∈ A

G, then these formulae imply

that 6(0 + 1) = 6(0) + 6(1) = 0 + 1 and 6(01) = 6(0)6(1) = 01 for every6 ∈ G, so that 0 + 1 and 01 belong to A

G. Finally, if 0 ∈ A

Gand � ∈ K,

then 6(�0) = �6(0) = �0 for every 6 ∈ G, hence �0 ∈ AG. �

Lemma (6.3.18). — Under the hypotheses of theorem 6.3.15, the ring A is a

finitely generated AG-module.

Proof. — Since A is finitely generated as a K-algebra, we may choose

elements 01, . . . , 0A ∈ A such that A = K[01, . . . , 0A].For 8 ∈ {1, . . . , A}, let us consider the polynomial

P8(X) =∏6∈G(X − 6(08))

in A[X]. Let G act on A[X] by 6(P) = ∑6(2=)X= if P =

∑2=X

=and 6 ∈ G.

Let us write

P8(X) = X= + 11X

=−1 + · · · + 1=

Page 301: (MOSTLY)COMMUTATIVE ALGEBRA

6.3. THE NOETHERIAN PROPERTY 289

where 11, . . . , 1= ∈ A and = = Card(G). For any ℎ ∈ G, one then has

ℎ(P8(X)) = ℎ(∏(X− 6(08))) =

∏6∈G(X− ℎ(6(08))) =

∏6∈G(X− 6(08)) = P8(X)

so that 11, . . . , 1= are invariant under ℎ. It follows that P8 ∈ AG[X], hence

11, . . . , 1= ∈ AG. By construction, P8(08) = 0, a monic polynomial relation

for the 08, with coefficients in AG. In the terminology of definition 4.1.1,

we just proved that the elements 08 are integral over AG; we now proceed

to show that A is a finitely generated AG-module. Indeed, let us prove

that A is generated, as an AG-module, by the =A products of the form∏A

8=10=88, where, for every 8, 0 6 =8 6 = − 1. Let A

′be the A

G-submodule

of A generated by these elements.

Since A is generated by all products

∏A8=10=88with =1, . . . , =A ∈ N, as a

K-module, henceforth as an AG-module, it suffices to prove that each

such product belongs toA′. LetX

=8 = Q8(X)P8(X)+R8(X) be the euclideandivision of X

=8by the monic polynomial P8 in A

G[X] (theorem 1.3.15).

Since P8(08) = 0, we have 0=88= R8(08) for all 8, hence

A∏8=1

0=88=

A∏8=1

R8(08).

If we expand this last product, we see that it belongs to A′. This shows

that A′ = A, hence that A is finitely generated as an A

G-module. �

Lemma (6.3.19) (Artin–Tate). — Let K, B,A be three commutative rings such

that K ⊂ B ⊂ A. Assume that K is a noetherian ring, that A is a finitely

generated K-algebra as well as a finitely generated B-module. Then B is a finitely

generated K-algebra.

Proof. — Let (G1, . . . , GA) be a finite family of elements of A which

generates A as a K-algebra; let (01, . . . , 0=) be a finite family of elements

of A which generates A as a B-module. We assume, as we may, that

G1 = 1 and 01 = 1. Then, every element of A can be written both as a

polynomial in G1, . . . , GAwith coefficients inK and as a linear combination

of 01, . . . , 0= with coefficients in B. If we apply this remark to G1, . . . , GAand to the products G8G 9 (for 1 6 8 , 9 6 A), we obtain elements 18 ,ℓ and

Page 302: (MOSTLY)COMMUTATIVE ALGEBRA

290 CHAPTER 6. FURTHER RESULTS ON MODULES

28 , 9 ,ℓ ∈ B such that

G8 =

=∑ℓ=1

18ℓ 0ℓ

for 8 ∈ {1, . . . , A} and

080 9 =

=∑ℓ=1

28 , 9 ,ℓ 0ℓ

for 8 , 9 ∈ {1, . . . , =}.Let B0 be the K-subalgebra of B generated by all the elements 18 ,ℓ and

28 , 9 ,ℓ ; it is a finitely generated K-algebra, in particular a noetherian ring.

Let A0 be the B0-submodule of A generated by the elements 08. It

is a K-submodule of A; moreover, all products 080 9 belong to A0 by

construction, so that A0 is stable by the multiplication law of A; and

1 ∈ A0. Consequently, A0 is a subalgebra of A. Still by construction, the

elements G1, . . . , GA belong to A0, hence A0 = A. This proves that A is a

finitely generated B0-module.

Since B0 is a noetherian ring, the ring A is noetherian as a B0-module,

so that every B0-submodule of A is finitely generated. Since B is such a

B0-submodule, we obtain that B is finitely generated as a B0-module; it

is in particular finitely generated as a B0-algebra.

Recall that B0 is finitely generated as a K-algebra. By corollary 1.3.7,

this implies that B is also finitely generated as a K-algebra, as was to be

shown. �

Proof of theorem 6.3.15. — Set B = AG, so that K ⊂ B ⊂ A. By the second

lemma, A is finitely generated as a B-module; by hypothesis, it is finitely

generated as a K-algebra. Artin–Tate’s lemma (lemma 6.3.19) now

implies that B is finitely generated as a K-algebra. �

6.4. The artinian property

Definition (6.4.1). — Let A be a ring, let M be a right A-module One says

that M is artinian if every nonempty family of submodules of M possesses a

minimal element.

Page 303: (MOSTLY)COMMUTATIVE ALGEBRA

6.4. THE ARTINIAN PROPERTY 291

Definition (6.4.2). — Let A be a ring. One says that the ring A is a right

artinian ring if it is artinian as a right A-module, and that it is left artinian if

it is artinian as a left A-module.

If the ring A is commutative, then it is equivalent for A to be left

artinian or to be right artinian; we then say that A is an artinian ring.

In some sense, the artinian property is symmetric to the noetherian

property. However, we shall see in theorem 6.4.11 that for a ring,

the property of being artinian is quite stronger than the one of being

noetherian. But for themoment, let us observe a few consequenceswhich

are quite analogous to those obtained from the noetherian property.

Remark (6.4.3). — An A-module M is artinian if and only if every de-

creasing sequence of submodules of M is stationary. This is a general

result about ordered sets, which is proved essentially in the same way

as the equivalence of (i) and (iii) in proposition 6.3.2.

Let us indeed assume that M is artinian and let (M=) be a decreasingsequence of submodules. By assumption, it has a minimal element,

say M<, and one has M= = M< for = > <.

In the other direction, let (M8)8∈I be nonempty family of submodules

of M that has no minimal element. Let 80 ∈ I; then M80 is not a minimal

element of (M8)8∈I, hence there exists 81 ∈ I such that M81 ( M80. Similarly,

one constructs by induction a sequence (8=)=∈N of elements of I such that

the sequence (M8=)=∈N is strictly decreasing.

Proposition (6.4.4). — Let A be a ring. Let M be an A-module and let N be a

submodule of M. Then M is artinian if and only if N and M/N are artinian

modules.

Proof. — a) Let clN : M → M/N be the canonical surjection. Let us

first assume that N and M/N are artinian. Let (M=) be a decreasing

sequence of submodules of M. Then (M=∩N) is a decreasing sequence of

submodules of N, hence is stationary; similarly, (clN(M=)) is a decreasingsequence of submodules of M/N, so is stationary too. Consequently,

there exists ? ∈ N such thatM=∩N = M=+1∩N and clN(M=) = clN(M=+1)for = > ?. Applying lemma 6.2.7 to the submodules M= ,M=+1,N of M,

we conclude that M= = M=+1 for = > ?: the sequence (M=) is stationary.

Page 304: (MOSTLY)COMMUTATIVE ALGEBRA

292 CHAPTER 6. FURTHER RESULTS ON MODULES

Conversely, let us assume that M is artinian. A decreasing sequence

of submodules of N is also a decreasing sequence of submodules of M,

hence is stationary. Consequently, N is artinian. Let then (P=) be a

decreasing sequence of submodules of M/N. The sequence (cl−1

N(P=)) of

submodules of M is decreasing, hence is stationary since M is artinian.

Since clN is surjective, one has P= = clN(cl−1

N(P=)), hence the sequence

(P=) is also stationary. �

Corollary (6.4.5). — The direct sum M1⊕ · · · ⊕M= of a finite family of artinian

A-modules M1, . . . ,M= is artinian.

Proposition (6.4.6). — Assume that A is commutative and let S be a multi-

plicative subset of A. If M is an artinian A-module, then S−1

A is an artinian

S−1

A-module.

Proof. — Let us assume that M is an artinian A-module and let 8 : M→S−1

M be the canonical morphism. Let (P=) be a decreasing sequence

of submodules of S−1

M. The sequence (8−1(P=)) of submodules of M

is decreasing, hence is stationary. By proposition 3.6.7, one has P= =

S−1(8−1(P=)) for any integer =. This implies that the sequence (P=) is

stationary. �

Corollary (6.4.7). — Let A be a commutative ring. If A is artinian, then S−1

A

is artinian for any multiplicative subset S of A.

Theorem (6.4.8). — Let A be a ring. An A-module has finite length if and only

if it is both artinian and noetherian.

Proof. — Let us assume that M has finite length and let us consider

a sequence (M=) of submodules of M which is either increasing or

decreasing. The sequence (ℓA(M=)) of their lengths is then increasing

or decreasing, according to the case; it is bounded from below by 0,

and bounded from above by ℓA(M). Consequently, it is stationary. Thisimplies that the sequence (M=) is stationary. (Recall that if P ⊂ Q are

two modules of the same finite length, then ℓA(Q/P) = ℓA(Q) − ℓA(P) = 0;

consequently, Q/P = 0 and Q = P.) Considering decreasing sequences,

we obtain that M is artinian; considering increasing sequences, we

conclude that M is noetherian.

Page 305: (MOSTLY)COMMUTATIVE ALGEBRA

6.4. THE ARTINIAN PROPERTY 293

Let us now assume that M is artinian and that its length is infinite,

and let us prove that M is not noetherian.

Let us set M0 = 0. By assumption, M ≠ 0, so that the set of nonzero

submodules of M is nonempty. Since M is artinian, it admits a minimal

element M1; this module M1 is a nonzero submodule of M such that

any strict submodule of M1 is null; in other words, M1 is simple. Since

ℓA(M1) = 1 and ℓA(M) = ∞, one has M1 ≠ M and we may construct

similarly a submodule M2 of M containing M1 such that M2/M1 is simple,

hence ℓA(M2) = ℓA(M1) + 1 = 2. We thus construct by induction an in-

creasing sequence (M=) of submodules ofM such thatM=+1/M= is simple

and ℓA(M=) = = for every integer =. In particular, the sequence (M=) isnot stationary, hence M is not noetherian. �

Proposition (6.4.9). — Let A be a right artinian ring and let J be the Jacobson

radical of A. There exists an integer = such that J= = 0.

Proof. — Since A is right artinian, the decreasing sequence (J=) of rightideals of A is stationary. Consequently, there exists an integer = > 0

such that J= = J

=+1. Let us assume by contradiction that J

= ≠ 0.

The set of right ideals I of A such that J=I ≠ 0 is nonempty (for example,

I = A is such a right ideal). Since A is right artinian, there exists a

minimal such ideal, say I. Let G be any element of I such that J=G ≠ 0;

then GA is a right ideal of A such that J=(GA) ≠ 0; moreover, GA ⊂ I.

By minimality of I, one has I = GA. In particular, I is finitely generated.

Moreover, J=JI = J

=+1I = J

=I ≠ 0. Since JI ⊂ I, the minimality of I implies

that I = JI. It then follows from Nakayama’s lemma (theorem 6.1.1) that

I = 0, in contradiction with the hypothesis that J=I ≠ 0. �

Lemma (6.4.10). — Let A be a ring, let = be a positive integer. Let M be an

artinian A-module. Assume that M is the sum of its submodules of length 6 =.

Then M has finite length.

Proof. — Let (M8) be a family of submodules of M such that ℓA(M8) 6 =

for every 8, and M =∑8 M8. Let us show that ℓA(M) < ∞ by induction

on =.

Let us first treat the case = = 1. If the length of M is infinite, we

may construct by induction on < a sequence 8(<) such that M8(<) is

Page 306: (MOSTLY)COMMUTATIVE ALGEBRA

294 CHAPTER 6. FURTHER RESULTS ON MODULES

not included in the sum of the modules M8(:) for : < < (otherwise,

one would have M =∑:<< M8(:) and ℓA(M) would be finite). Since all

modules M8 are simple or zero, one has M8(<) ∩∑:<< M8(:) = 0 and

the modules M8(:) are in direct sum in their union N =∑:∈N M8(:). For

every<, set N< =⊕

:>< M8(:). The sequence (N<) is strictly decreasing,

and this contradicts the hypothesis that M is artinian.

Let us now assume that = > 2 and that the result holds for = − 1.

For every 8, let M′8be a submodule of M8 such that ℓA(M′8) 6 = − 1

and ℓA(M8/M′8) 6 1. Set M′ =

∑8 M′8; this is a submodule of M; by

the induction hypothesis, ℓA(M′) < ∞. Moreover, M/M′ is the sum of

the modules M8/(M′ ∩M8) whose lengths are 6 1. By the case = = 1,

ℓA(M/M′) < ∞. By proposition 6.4.4, one has ℓA(M) < ∞, as was to be

shown. �

Theorem (6.4.11) (Akizuki-Hopkins-Levitzki). — Let A be a right artinian

ring. Then the ring A is right noetherian and has finite length as a right

A-module.

Proof2. — It suffices to prove that ℓA(A3) is finite.Let � be the set of right ideals I of A such that A/I has finite length.

Since � is nonempty (it contains A) and A is right artinian, we may

consider a minimal element I of �. Let = = ℓA(A3/I).Let G be an element of A such that ℓA(GA) < ∞, and let J = {0 ∈

A ; G0 = 0} be its right annihilator, so that J is a right ideal of A and

GA ' A3/J. I claim that A3/(I ∩ J) has finite length. First of all, (I + J)/Ihas finite length, as a submodule of A3/I. Then, the composition of the

inclusion morphism J→ I + J and of the projection I + J→ (I + J)/I issurjective, and its kernel is I ∩ J, so that J/(I ∩ J) has finite length. Thismodule is a submodule of A3/(I ∩ J), with quotient A3/J, which has

finite length, by the assumption on G. By proposition 6.2.8, we conclude

that ℓA(A/(I ∩ J)) < ∞. Since I ∩ J ⊂ I, the minimality of I implies that

I ∩ J = I, hence I ⊂ J. In particular, ℓA(GA) = ℓA(A/J) 6 ℓA(A/I) 6 =.

Let K be the sum of all right ideals GA of A such that ℓA(GA) < ∞.By lemma 6.4.10, ℓA(K) < ∞. Moreover, every right ideal K

′such that

ℓA(K′) < ∞ is contained in K. Indeed, if G ∈ K′, then ℓA(GA) 6 ℓA(K′) <

2This proof is borrowed from Bourbaki (2012), §1, no2, théorème 1.

Page 307: (MOSTLY)COMMUTATIVE ALGEBRA

6.4. THE ARTINIAN PROPERTY 295

∞, hence G ∈ K. In other words, K is the largest right ideal of A which is

of finite length.

Let us assume by contradiction that K ≠ A. Let K′be a right ideal of A

which is minimal among the ideals containing K and distinct from K.

Such an ideal exists because A is right artinian. Necessarily, the right

A-module K′/K is simple, hence has length 1. Consequently, ℓA(K′) < ∞,

which contradicts the definition of K. This shows that K = A. In

particular, ℓA(A3) < ∞. �

Corollary (6.4.12). — Let A be a right artinian ring and let M be right

A-module. The following conditions are equivalent:

(i) The module M is finitely generated;

(ii) The module M has finite length;

(iii) The module M is artinian;

(iv) The module M is noetherian.

Proof. — Without any hypothesis on the ring A, (ii) implies (iii) and (iv),

and (iv) implies (i).

Assume that M is finitely generated, so that there exist an integer n and a surjective morphism A_d^n → M of A-modules. Then ℓA(M) ≤ ℓA(A_d^n) = n·ℓA(A_d) is finite. This proves the implication (i)⇒(ii).

It remains to show that if M is artinian, then M has finite length. Let x ∈ M and let J = {a ∈ A ; xa = 0}, so that xA ≅ A/J. Since A is right artinian, its length is finite (theorem 6.4.11) and ℓA(xA) = ℓA(A/J) ≤ ℓA(A_d). This shows that M is generated by its submodules of length ≤ ℓA(A_d). By lemma 6.4.10, ℓA(M) is finite. □

Lemma (6.4.13). — Let A be a commutative artinian ring.

a) If A is an integral domain, then A is a field.

b) Every prime ideal of A is maximal.

c) The set of maximal ideals of A is finite.

Proof. — a) Let us assume that A is a domain. Let a ∈ A ∖ {0} and let us consider the decreasing sequence (a^n A)_{n≥0} of principal ideals. Since A is artinian, there exists an integer n such that a^n A = a^{n+1} A, hence there exists an element b ∈ A such that a^n = a^{n+1} b. Consequently, a^n (1 − ab) = 0. Since A is a domain and a ≠ 0, one has 1 − ab = 0, hence a is invertible.

b) Let P be a prime ideal of A. Then, A/P is an integral domain.

Moreover, it is artinian as an A-module, because it is a quotient of

the A-module A. Since the A-submodules of A/P coincide with its

(A/P)-submodules, we see that the ring A/P is artinian. By a), it is a

field, hence P is a maximal ideal.

c) Let us argue by contradiction and let (M_n) be a sequence of pairwise distinct maximal ideals of A. Let us consider the decreasing sequence of ideals

M_1 ⊃ M_1 M_2 ⊃ · · · ⊃ M_1 · · · M_n ⊃ · · ·

For every i ≤ n, M_i is not contained in M_{n+1}, so there is an element a_i ∈ M_i such that a_i ∉ M_{n+1}. The product a = a_1 · · · a_n belongs to M_1 · · · M_n; since M_{n+1} is a prime ideal, a does not belong to M_{n+1}. This shows that the preceding sequence of ideals is not stationary, contradicting the hypothesis that A is artinian. □

Theorem (6.4.14) (Akizuki). — Let A be a commutative ring. The following

conditions are equivalent:

(i) The ring A is artinian;

(ii) The ring A is noetherian and all of its prime ideals are maximal ideals;

(iii) The A-module A has finite length.

Proof. — By theorem 6.4.8, condition (iii) is equivalent to the fact that A

is both artinian and noetherian. This shows the implication (iii)⇒(i).

We also have shown in theorem 6.4.11 that an artinian ring is noetherian;

moreover, by lemma 6.4.13, all of its prime ideals are maximal, so that

(i)⇒(ii).

It remains to show that under assumption (ii), the ring A has finite

length.

Let ℱ be the set of ideals I of A such that the length of A/I is infinite. If A does not have finite length, then I = (0) belongs to ℱ, hence ℱ is nonempty. Since A is noetherian, the set ℱ, ordered by inclusion, has a maximal element I. The ideal I is such that the length of A/I is infinite, while the length of A/J is finite for any ideal J such that J ⊋ I. The ring A/I is noetherian, and its prime ideals, being in correspondence with the prime ideals of A containing I, are maximal ideals. We may thus replace A by A/I and assume that I = (0).

Let us now show that A is an integral domain. Since ℓA(A) = +∞, we have A ≠ 0. Let then a, b be two nonzero elements of A such that ab = 0. By assumption, A/aA has finite length. Moreover, the map x ↦ bx vanishes on aA, hence induces a surjective morphism from A/aA to bA; in particular, bA has finite length. Since A/bA has finite length, this implies that A has finite length, a contradiction. We have shown that A is an integral domain.

In particular, (0) is a prime ideal of A. By assumption, (0) is maximal, so that A is a field. In particular, A is a simple A-module and its length is finite. This contradiction shows that the hypothesis that ℓA(A) is infinite is false. □

Remark (6.4.15). — We shall establish in the next section (theorem 6.5.16) that for any noetherian commutative ring A, there are an integer n, a sequence (I_0, . . . , I_n) of ideals of A and a sequence (P_1, . . . , P_n) of prime ideals of A such that

0 = I_0 ⊂ I_1 ⊂ · · · ⊂ I_{n−1} ⊂ I_n = A

and I_k/I_{k−1} ≅ A/P_k for every integer k ∈ {1, . . . , n}. Under assumption (ii) of theorem 6.4.14, all quotients I_k/I_{k−1} are simple A-modules. This gives another proof that A has finite length.
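For instance, if k is a field, the ring k[X]/(X³) satisfies the conditions of theorem 6.4.14: the chain 0 ⊂ (x²) ⊂ (x) ⊂ k[X]/(X³), where x denotes the class of X, is a composition series, so that the ring has length 3, and its only prime ideal (x) is maximal. By contrast, the ring Z is noetherian but not artinian: its prime ideal (0) is not maximal, and the chain (2) ⊋ (4) ⊋ (8) ⊋ · · · does not become stationary.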

6.5. Support of a module, associated ideals

In this section, all rings are tacitly assumed to be commutative.

Definition (6.5.1). — Let A be a commutative ring, let M be an A-module.

The support of M is the subset of Spec(A) consisting of all prime ideals P of A

such that MP ≠ 0. It is denoted by SuppA(M), or Supp(M) if no confusion

can arise regarding the ring A.

Let us expand the definition. Let M be an A-module and let P be a prime ideal. The condition M_P ≠ 0 means that there exists m ∈ M such that m/1 ≠ 0 in M_P, hence sm ≠ 0 for every s ∈ A ∖ P. In other words, a prime ideal P of A belongs to SuppA(M) if and only if there exists m ∈ M whose annihilator AnnA(m) is contained in P. This can also be rewritten as

SuppA(M) = ⋃_{m∈M} V(AnnA(m)).

Example (6.5.2). — Let A be a commutative ring and let I be an ideal of A. For m ∈ A/I, one has I ⊂ AnnA(m), hence V(AnnA(m)) ⊂ V(I), so that SuppA(A/I) ⊂ V(I). Conversely, AnnA(clI(1)) = I, hence V(I) ⊂ SuppA(A/I). This proves that SuppA(A/I) = V(I), the set of all prime ideals of A which contain I.
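For instance, SuppZ(Z/6Z) = V(6Z) = {(2), (3)}: the localization of Z/6Z at (2) is isomorphic to Z/2Z, its localization at (3) is isomorphic to Z/3Z, and its localization at every other prime ideal of Z is zero.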

Proposition (6.5.3). — Let A be a commutative ring, let M be an A-module and let N be a submodule of M. Then SuppA(M) = SuppA(N) ∪ SuppA(M/N).

Proof. — Recall (exactness of localization, proposition 3.6.6) that for any prime ideal P of A, N_P identifies with a submodule of M_P and one has an isomorphism (M/N)_P ≅ M_P/N_P. Consequently, M_P = 0 if and only if N_P = (M/N)_P = 0. In other words,

P ∉ SuppA(M) ⇔ P ∉ SuppA(N) ∪ SuppA(M/N),

that is to say, SuppA(M) = SuppA(N) ∪ SuppA(M/N). □

Theorem (6.5.4). — Let A be a commutative ring and let M be an A-module.

a) If M ≠ 0, then SuppA(M) ≠ ∅;

b) Let P be a prime ideal in SuppA(M); then every prime ideal containing P

belongs to SuppA(M);

c) One has SuppA(M) ⊂ V(AnnA(M)): any prime ideal of A belonging

to SuppA(M) contains AnnA(M);

d) If M is finitely generated, then SuppA(M) = V(AnnA(M)) is the set of

all prime ideals of A which contain AnnA(M).

Proof. — a) Let us assume that M ≠ 0 and let m ∈ M ∖ {0}. In other words, AnnA(m) ≠ A, hence V(AnnA(m)) ≠ ∅ (this follows from Krull's theorem, theorem 2.1.3, see remark 2.2.6, b)). Since SuppA(M) contains V(AnnA(m)), this implies that SuppA(M) ≠ ∅.

b) Indeed, every subset of Spec(A) which is a union of closed subsets satisfies this property. Let P and Q be prime ideals of A such that P ⊂ Q and P ∈ SuppA(M). Let m ∈ M be such that P ∈ V(AnnA(m)). Then Q ⊃ P ⊃ AnnA(m), hence Q ∈ V(AnnA(m)), hence Q ∈ SuppA(M).

c) For every m ∈ M, one has AnnA(M) ⊂ AnnA(m), hence V(AnnA(m)) ⊂ V(AnnA(M)). Consequently, SuppA(M) ⊂ V(AnnA(M)).

d) Let (m_1, . . . , m_n) be a finite family of elements of M which generates M. Let us observe that

AnnA(M) = AnnA(m_1) ∩ · · · ∩ AnnA(m_n).

Indeed, the inclusion ⊂ is obvious; conversely, if for every i, one has a ∈ AnnA(m_i), then am = 0 for every linear combination m of the m_i, hence am = 0 for every m ∈ M. In particular,

V(AnnA(M)) = V(AnnA(m_1) ∩ · · · ∩ AnnA(m_n)) = V(AnnA(m_1)) ∪ · · · ∪ V(AnnA(m_n)),

by remark 2.2.6, a). Using c), we thus have

SuppA(M) ⊂ V(AnnA(m_1)) ∪ · · · ∪ V(AnnA(m_n)),

and the other inclusion follows from the definition of SuppA(M), hence the desired equality. □

Remark (6.5.5). — Let M be an A-module. If M is finitely generated, then assertion d) of the theorem says that its support is the closed subset V(AnnA(M)) of Spec(A). In general, SuppA(M) is not necessarily closed. For example, let us consider A = Z and M = ⊕_{n≥1} (Z/nZ). Since every element of M is torsion, one has M_(0) = 0, hence (0) ∉ SuppZ(M). On the other hand, for every prime number p, the module M contains Z/pZ as a submodule, whose support is {(p)}, so that (p) ∈ SuppZ(M). Consequently, SuppZ(M) = {(2), (3), . . .} = Spec(Z) ∖ {(0)}, and this set is not closed in Spec(Z).

By assertion b), the support of an A-module is a union of closed subsets, hence it contains the closure of any of its points. One says that it is closed under specialization.

Definition (6.5.6). — Let A be a commutative ring and let M be an A-module. One says that a prime ideal P of A is associated to M if there exists m ∈ M such that P is minimal among all prime ideals of A containing AnnA(m). The set of all associated prime ideals of M is denoted AssA(M).

Example (6.5.7). — Let A be a commutative ring, let P be a prime ideal of A. Then AssA(A/P) = {P}.

Let clP : A → A/P be the canonical surjection. Let x ∈ A/P. If x = 0, then AnnA(x) = A and no prime ideal of A contains AnnA(x). Let us assume that x ≠ 0 and let us prove that AnnA(x) = P. The inclusion P ⊂ AnnA(x) is obvious. Conversely, let a ∈ A be such that x = clP(a); one has a ∉ P; consequently, if b ∈ AnnA(x), the equality bx = 0 means ba ∈ P, hence it implies b ∈ P by the definition of a prime ideal. In that case, P is the only minimal prime ideal containing AnnA(x). This implies the claim.

Theorem (6.5.8). — Let A be a commutative ring and let M be an A-module.

a) Multiplication by an element a ∈ A is injective in M if and only if a does not belong to any element of AssA(M).

b) An element a ∈ A belongs to every element of AssA(M) if and only if M_a = 0.

c) In particular, AssA(M) = ∅ if and only if M = 0.

Proof. — a) Let P ∈ AssA(M) and let a ∈ P. Let m ∈ M be such that P is minimal among the prime ideals containing AnnA(m). By lemma 2.2.15, a is a zero-divisor in the ring A/AnnA(m). In other words, there exists b ∈ A ∖ AnnA(m) such that ab ∈ AnnA(m). Consequently, bm ≠ 0 and abm = 0. In particular, multiplication by a is not injective in M.

Conversely, let a ∈ A and m ∈ M be such that am = 0 but m ≠ 0. One thus has AnnA(m) ≠ A, so that there exists a minimal prime ideal P that contains AnnA(m) (lemma 2.2.13). Since a ∈ AnnA(m), one has a ∈ P.

b) Let us assume that M_a = 0. Let m ∈ M; since M_a = 0, there exists an integer n ≥ 1 such that a^n m = 0. Consequently, a^n ∈ AnnA(m) and any prime ideal containing AnnA(m) contains a. In particular, a belongs to every element of AssA(M).

Conversely, let us assume that M_a ≠ 0 and let m ∈ M be such that m/1 ≠ 0 in M_a. The ideal AnnA(m) does not contain any power a^n, hence generates an ideal I of A_a such that I ≠ A_a. By lemma 2.2.13 (applied to the ideal I of the ring A_a), there exists a prime ideal Q of A_a which contains I and is minimal among those prime ideals. Let P be the inverse image of Q in A, so that Q = PA_a. Then, the ideal P is prime and does not contain a; it is also a minimal prime ideal among those containing AnnA(m); in particular, P ∈ AssA(M) and a ∉ P.

c) It suffices to apply b) to a = 1. □
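For instance, with A = Z and M = Z/6Z, whose associated ideals are (2) and (3), multiplication by an integer a is injective on M if and only if a is prime to 6, that is, a ∉ (2) ∪ (3), and M_a = 0 if and only if a ∈ (2) ∩ (3) = (6).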

Proposition (6.5.9). — Let A be a commutative ring, let M be an A-module

and let N be a submodule of M. Then,

AssA(N) ⊂ AssA(M) ⊂ AssA(N) ∪AssA(M/N).

Proof. — It follows from the definition that any prime ideal P which is associated to N is also associated to M. Let now P be a prime ideal which is associated to M, and let m ∈ M be such that P is minimal among the prime ideals containing AnnA(m). Let clN : M → M/N be the canonical surjection. One has AnnA(clN(m)) ⊃ AnnA(m). If P contains AnnA(clN(m)), then P is also minimal among the prime ideals containing AnnA(clN(m)), hence P ∈ AssA(M/N). Let us thus assume that P does not contain AnnA(clN(m)), and let b ∈ AnnA(clN(m)) be such that b ∉ P. Let a ∈ AnnA(bm); then abm = 0, hence ab ∈ AnnA(m), hence ab ∈ P; since b ∉ P, this implies a ∈ P, hence the inclusions AnnA(m) ⊂ AnnA(bm) ⊂ P. It follows that the prime ideal P is minimal among those containing AnnA(bm). This implies P ∈ AssA(N) since, by choice of b, we have bm ∈ N. □

Proposition (6.5.10). — Let A be a commutative ring, let S be a multiplicative subset of A, let M be an A-module. A prime ideal P disjoint from S is associated to the A-module M if and only if S−1P is associated to the S−1A-module S−1M.

Viewing Spec(S−1A) as a subset of Spec(A) via proposition 2.2.7, this can be rephrased as the equality Ass_{S−1A}(S−1M) = AssA(M) ∩ Spec(S−1A).

Proof. — Let a prime ideal P, disjoint from S, be associated to M. Let m ∈ M be such that P is minimal among the prime ideals containing AnnA(m). The inverse image in A of Ann_{S−1A}(m/1) contains AnnA(m). Moreover, let a ∈ A be such that (a/1)(m/1) = 0 in S−1M; then, there exists s ∈ S such that sam = 0, hence sa ∈ AnnA(m) and a fortiori sa ∈ P. Since P is disjoint from S, we get a ∈ P and a/1 ∈ S−1P. Consequently, S−1P is minimal among the prime ideals of S−1A containing Ann_{S−1A}(m/1), hence S−1P ∈ Ass_{S−1A}(S−1M).

In the other direction, assume that S−1P is associated to S−1M. Let m ∈ M and s ∈ S be such that S−1P is minimal among the prime ideals of S−1A containing Ann_{S−1A}(m/s). Let a ∈ AnnA(m); then am/s = 0, hence a/1 ∈ Ann_{S−1A}(m/s) and in particular a/1 ∈ S−1P. Therefore, there exists t ∈ S such that ta ∈ P; since S and P are disjoint, a ∈ P. In other words, AnnA(m) ⊂ P.

Let us show that P is minimal among the prime ideals of A containing AnnA(m). Let Q be a prime ideal of A such that AnnA(m) ⊂ Q ⊂ P. Then Ann_{S−1A}(m/1) ⊂ S−1Q ⊂ S−1P. This implies that S−1Q = S−1P. Since P and Q are disjoint from S and prime, we get Q = P, as was to be shown. □

Corollary (6.5.11). — Let A be a commutative ring, let M be an A-module.

a) A prime ideal of A belongs to the support SuppA(M) if and only if it contains an ideal of AssA(M).

b) One has AssA(M) ⊂ SuppA(M), and both sets have the same minimal elements.

Proof. — a) Let P be a prime ideal of A. By definition of the support of a module, P ∈ SuppA(M) if and only if M_P ≠ 0. By theorem 6.5.8, this is itself equivalent to Ass_{A_P}(M_P) ≠ ∅. Finally, proposition 6.5.10 implies that this holds if and only if there exists Q ∈ AssA(M) such that Q ⊂ P.

b) This is essentially a reformulation. Let P ∈ AssA(M); by a), since P ⊃ P, one has P ∈ SuppA(M).

Let us assume that P is minimal in AssA(M), and let us prove that P is also minimal in SuppA(M). Let Q ∈ SuppA(M) be such that Q ⊂ P. By a), there exists an associated ideal P′ ∈ AssA(M) such that Q ⊃ P′; then P′ ⊂ P, hence P = P′ by minimality, hence P = Q.

In the other direction, let P ∈ SuppA(M) be a minimal element of the support of M and let us prove that P belongs to AssA(M). Let m ∈ M be such that P ⊃ AnnA(m). If Q is a prime ideal of A such that AnnA(m) ⊂ Q ⊂ P, then Q ∈ SuppA(M), hence P = Q by minimality of P. In other words, P is minimal among the prime ideals of A that contain AnnA(m). This proves that P ∈ AssA(M). □

Proposition (6.5.12). — Let A be a commutative ring, let M be an A-module and let P be a prime ideal of A which is finitely generated. Then P is associated to M if and only if there exists m ∈ M such that P = AnnA(m).

Proof. — If P = AnnA(m), then P is obviously a minimal prime ideal containing AnnA(m), hence P ∈ AssA(M).

To prove the converse assertion, we first treat the case where A is a local ring and P is its maximal ideal. Let m ∈ M be such that P is the only minimal prime ideal containing AnnA(m). By proposition 2.2.11 applied to A/AnnA(m), every element of P is nilpotent modulo AnnA(m): for every a ∈ P, there exists an integer n ≥ 1 such that a^n ∈ AnnA(m). Since P is finitely generated, there exists an integer n ≥ 1 such that P^n ⊂ AnnA(m). Let us choose a minimal such integer n. Then P^{n−1}·m ≠ 0, hence we may choose a ∈ P^{n−1} such that am ≠ 0; then P ⊂ AnnA(am) ⊂ P, hence P = AnnA(am).

Let us now treat the general case. By proposition 6.5.10, one has PA_P ∈ Ass_{A_P}(M_P). Moreover, the ideal PA_P is finitely generated. By the case of a module over a local ring, there exists an element of M_P, say m/a, with m ∈ M and a ∈ A ∖ P, with annihilator PA_P. Then Ann_{A_P}(m/1) = PA_P.

Let us now assume that P = (a_1, . . . , a_n) and let us construct a sequence (m_0, . . . , m_n) of multiples of m such that for all k ∈ {0, . . . , n}, PA_P = Ann_{A_P}(m_k/1) and a_1 m_k = · · · = a_k m_k = 0. Let us set m_0 = m. Assume m_0, . . . , m_{k−1} are defined. Since a_k(m_{k−1}/1) = 0, there exists b ∈ A ∖ P such that b a_k m_{k−1} = 0; set m_k = b m_{k−1}. Since b ∉ P, one has Ann_{A_P}(m_k/1) = Ann_{A_P}(m_{k−1}/1) = PA_P. This constructs the desired sequence by induction. Finally, m_n is an element of M such that PA_P = Ann_{A_P}(m_n/1) and P ⊂ AnnA(m_n). It follows that P = AnnA(m_n): if a m_n = 0, then a(m_n/1) = 0, hence a ∈ PA_P, hence a ∈ P. □

Corollary (6.5.13). — Let A be a noetherian ring and let M be an A-module. A prime ideal P of A is associated to M if and only if there exists m ∈ M such that P = AnnA(m).


Corollary (6.5.14). — Let A be a commutative ring and let M be a noetherian A-module. A prime ideal P of A is associated to M if and only if there exists m ∈ M such that P = AnnA(m).

Proof. — Let M be a noetherian A-module and let P be a prime ideal which is associated to M. Let m ∈ M be such that P is a minimal prime ideal among those containing I = AnnA(m). The morphism a ↦ am from A to M induces an isomorphism from A/I to the submodule Am of M. Since M is a noetherian A-module, so is its submodule Am. Since the A-submodules of A/I coincide with the ideals of A/I, the ring A/I is noetherian (see also remark 6.3.9). Moreover, P/I is a minimal prime ideal among those containing (0) = Ann_{A/I}(1), so that P/I ∈ Ass_{A/I}(A/I), and there exists x ∈ A/I such that P/I = Ann_{A/I}(x) (corollary 6.5.13). Let a ∈ A be such that x = clI(a). For every b ∈ P, one has ba ∈ I, hence bam = 0; conversely, if b ∈ A satisfies bam = 0, then bx = 0, hence b ∈ P. This proves that P = AnnA(am). □

Remark (6.5.15). — There is a conflicting terminology in the literature between two notions of associated primes. Books where the emphasis lies on noetherian rings, such as Matsumura (1986), usually prefer the one given by corollary 6.5.13, and Bourbaki (1989) calls weakly associated primes those of definition 6.5.6. In more advanced works on commutative algebra, the latter definition is preferred; I also believe that the notion and the exposition of the main results are more natural. However, the following results require the hypothesis that M is noetherian.

Theorem (6.5.16). — Let A be a commutative ring and let M be a noetherian A-module. There exist a finite family (M_0, . . . , M_n) of submodules of M and a finite family (P_1, . . . , P_n) of prime ideals of A such that

0 = M_0 ⊂ M_1 ⊂ · · · ⊂ M_n = M

and M_i/M_{i−1} ≅ A/P_i for every i ∈ {1, . . . , n}.

Proof. — Let us begin with a remark. Let N be a submodule of M such that N ≠ M. Applying corollary 6.5.14 to the nonzero noetherian A-module M/N, there exist m ∈ M and a prime ideal P of A such that (Am + N)/N is isomorphic to A/P.

We set M_0 = 0 and apply this remark inductively, constructing an increasing sequence (M_1, M_2, . . . ) of submodules of M and a sequence (P_1, P_2, . . . ) of prime ideals of A such that M_i/M_{i−1} is isomorphic to A/P_i for every i. Since M is noetherian, this strictly increasing sequence must be finite and the process must stop at some level n. Then M_n = M, hence the theorem. □
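For instance, for A = Z and M = Z/6Z, one may take the chain 0 ⊂ 2Z/6Z ⊂ Z/6Z: the submodule M_1 = 2Z/6Z is isomorphic to Z/3Z = A/(3), and the quotient M_2/M_1 = (Z/6Z)/(2Z/6Z) is isomorphic to Z/2Z = A/(2).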

Corollary (6.5.17). — Let A be a commutative ring and let M be a noetherian

A-module. Then AssA(M) is a finite set.

Proof. — Let us consider a composition series

0 = M_0 ⊂ M_1 ⊂ · · · ⊂ M_n = M,

and a sequence (P_1, . . . , P_n) of prime ideals of A, as given by theorem 6.5.16, so that M_i/M_{i−1} ≅ A/P_i for 1 ≤ i ≤ n.

By proposition 6.5.9, Ass(M) ⊂ Ass(M_{n−1}) ∪ Ass(A/P_n). We had also explained in example 6.5.7 that Ass(A/P_n) = {P_n}. Consequently, Ass(M) ⊂ Ass(M_{n−1}) ∪ {P_n}. By induction on n, we obtain

AssA(M) ⊂ {P_1; . . . ; P_n}.

In particular, AssA(M) is a finite set. □
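For instance, the Z-module Z ⊕ Z/4Z ⊕ Z/9Z, which is noetherian, has exactly three associated prime ideals, namely (0), (2) and (3), one arising from each summand.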

Corollary (6.5.18). — Let A be a commutative ring and let M be a noetherian

A-module. Then, M has finite length if and only if all of its associated ideals are

maximal ideals.

Proof. — Let us assume that M has finite length. Let then

0 = M_0 ⊂ M_1 ⊂ · · · ⊂ M_n = M

be a Jordan–Hölder composition series, so that for every i, there is a maximal ideal P_i of A such that M_i/M_{i−1} ≅ A/P_i. By the same proof as for the preceding corollary, AssA(M) is contained in {P_1; . . . ; P_n}. In particular, all associated prime ideals of M are maximal ideals.

Conversely, let us assume that all associated prime ideals of M are maximal and let us consider a composition series

0 = M_0 ⊂ M_1 ⊂ · · · ⊂ M_n = M,

and prime ideals P_1, . . . , P_n, as given by theorem 6.5.16, so that for every i ∈ {1, . . . , n}, one has M_i/M_{i−1} ≅ A/P_i. By proposition 6.5.3 and example 6.5.2,

SuppA(M) = V(P_1) ∪ · · · ∪ V(P_n).

By corollary 6.5.11, the minimal elements of SuppA(M) are associated prime ideals of M, hence are maximal ideals of A, by assumption. Since each P_i contains a minimal element of SuppA(M), which is a maximal ideal, the ideals P_1, . . . , P_n are themselves maximal. In particular, M_i/M_{i−1} is a simple A-module for every i, and M has finite length. □

6.6. Primary decomposition

Definition (6.6.1). — Let A be a commutative ring.

One says that an A-module M is coprimary if AssA(M) has exactly one element. If AssA(M) = {P}, one also says that M is P-coprimary.

One says that a submodule N of an A-module M is primary if M/N is coprimary. If M/N is P-coprimary, one says that N is P-primary.

Lemma (6.6.2). — Let A be a commutative ring and let M be a nonzero A-module.

a) The module M is coprimary if and only if, for every a ∈ A, either the multiplication by a is injective in M, or the fraction module M_a is 0.

b) Assume that M is coprimary. If M is finitely generated, then √AnnA(M) is the unique associated prime ideal of M.

Proof. — a) By theorem 6.5.8, the second condition is equivalent to saying that the union of the associated prime ideals of M (the set of elements a such that multiplication by a is not injective in M) coincides with the intersection of the associated prime ideals of M (the set of elements a such that M_a = 0). Since M ≠ 0, it possesses associated prime ideals; consequently, the union of the associated prime ideals contains their intersection. If there are at least two distinct associated prime ideals, say P and Q, then P ∪ Q strictly contains P ∩ Q, so that the union of the associated prime ideals strictly contains their intersection. The two sets are therefore equal if and only if there is exactly one associated prime ideal, that is to say, if and only if M is coprimary.

b) Let P be the unique associated prime ideal of M. Let a ∈ A. By a), either a ∉ P and the multiplication by a is injective on M, or a ∈ P and one has M_a = 0. Since M is finitely generated, the condition M_a = 0 is equivalent to the existence of an integer n such that a^n M = 0, that is, a ∈ √AnnA(M). In the first case, the multiplication by every power a^n of a is still injective on M, so that a ∉ √AnnA(M), since M ≠ 0. This proves that √AnnA(M) = P. □
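For instance, if p is a prime number and n ≥ 1, the Z-module Z/p^nZ is (p)-coprimary: multiplication by an integer a is injective on it when p does not divide a, while (Z/p^nZ)_a = 0 when p divides a; accordingly, √AnnZ(Z/p^nZ) = √(p^n) = (p). The module Z/6Z, whose associated ideals are (2) and (3), is not coprimary.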

Examples (6.6.3). — Let A be a commutative ring.

a) Let I be an ideal of A. The ideal I is a primary submodule of A if and only if for any a, b ∈ A such that ab ∈ I and b ∉ I, there exists n ≥ 1 such that a^n ∈ I. If this holds, then √I is a prime ideal of A.

b) If I is a prime ideal of A, then I is primary. This follows either from the characterization in a), or from the computation of AssA(A/I) done in example 6.5.7.

c) Assume that A is a principal ideal domain. Let us show that the primary ideals of A other than (0) are the ideals of the form (p^n), for some irreducible element p and some integer n ≥ 1. Let I be a primary ideal distinct from (0) and let p ∈ A be such that √I = (p). Necessarily, p is the only prime divisor of a generator of I, so that there exists n ≥ 1 such that I = (p^n). Conversely, let us prove that such ideals (p^n) are (p)-primary. Let a, b ∈ A be such that ab ∈ I but b ∉ I. Then p^n divides ab but p^n does not divide b; necessarily, p divides a, so that a^n ∈ I.

In particular, any nonzero ideal of A, say (p_1^{n_1} · · · p_r^{n_r}), can be written as an intersection of the primary ideals (p_i^{n_i}). The primary decomposition, due to Lasker and Noether, generalizes this fact to all noetherian rings.

d) Let I be an ideal of A such that P = √I is a maximal ideal of A. Then I is P-primary. (This generalizes the previous example.) Let a, b ∈ A be such that ab ∈ I but a ∉ P. Since P is maximal, the image of a in A/P is invertible and there exists c ∈ A such that x = 1 − ac ∈ P. Since P = √I, there exists n ≥ 1 such that x^n = (1 − ac)^n ∈ I. Expanding the power, there is y ∈ A such that x^n = 1 − yac. Then b = (x^n + yac)b = x^n b + yabc ∈ I.

It is however simpler to apply the general theory. (Is it really?) Observe that a prime ideal of A contains I if and only if it contains √I. Consequently, the support of A/I is equal to {P}. Since AssA(A/I) and SuppA(A/I) have the same minimal elements (corollary 6.5.11), one has AssA(A/I) = {P} and I is a P-primary ideal.
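The hypothesis in d) that √I be maximal cannot be weakened to √I being prime: in the ring k[X, Y] of polynomials over a field k, the ideal I = (X², XY) has radical √I = (X), which is prime, yet I is not primary, since XY ∈ I and X ∉ I while no power of Y belongs to I.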


Lemma (6.6.4). — Let A be a ring, let M be an A-module and let N, N′ be two submodules of M.

a) Then AssA(M/(N ∩ N′)) ⊂ AssA(M/N) ∪ AssA(M/N′).

b) Let P be a prime ideal of A such that N and N′ are P-primary; then N ∩ N′ is P-primary.

c) Let P be a prime ideal of A such that N is P-primary. If N ∩ N′ ≠ N′, then P ∈ AssA(M/(N ∩ N′)).

Proof. — a) Let us set V = M/(N ∩ N′) and let V′ be the submodule N′/(N ∩ N′) of V, so that V/V′ is isomorphic to M/N′. In particular, AssA(V/V′) = AssA(M/N′). Then N′ ∩ N is the kernel of the composition of the inclusion map N′ → M with the canonical surjection M → M/N, so that V′ = N′/(N ∩ N′) is isomorphic to a submodule of M/N. By proposition 6.5.9, AssA(V′) ⊂ AssA(M/N) and AssA(V) ⊂ AssA(V′) ∪ AssA(V/V′). In particular, AssA(V) ⊂ AssA(M/N) ∪ AssA(M/N′), as was to be shown.

b) Let P be a prime ideal of A and assume, moreover, that both N and N′ are P-primary submodules of M. By a), we have AssA(V) ⊂ {P}. Since V ≠ 0, AssA(V) ≠ ∅; consequently, AssA(V) = {P}, which means that N ∩ N′ is a P-primary submodule of M.

c) By assumption, AssA(M/N) = {P}. Moreover, N′ ∩ N ≠ N′, so that V′ ≠ 0. Being a nonempty subset of AssA(M/N) = {P}, the set AssA(V′) equals {P}. Since AssA(V) contains AssA(V′), the lemma is proved. □

Theorem (6.6.5) (Lasker–Noether). — Let A be a commutative ring and let

M be a noetherian A-module. Any submodule of M is the intersection of a finite

family of primary submodules of M.

Proof. — Assume that the proposition does not hold. Since the module M is noetherian, there is a submodule N of M which is maximal among those which cannot be written as the intersection of finitely many primary submodules of M. Observe that N ≠ M (for M is the intersection of the empty family) and that N is not a primary submodule of M (for one could write N = N). By lemma 6.6.2, there exist a ∈ A such that N_a ≠ M_a and m ∈ M ∖ N such that am ∈ N.

For every integer k, let P_k be the submodule of M consisting of those p ∈ M such that a^k p ∈ N. By assumption, P_k ≠ M: otherwise, we would have a^k M ⊂ N, hence M_a = N_a.

The family (P_k) is an increasing family of submodules of M, hence it is stationary, because M is noetherian. Let k ∈ N be such that P_k = P_{k+1}. It is clear that N ⊂ (N + Am) ∩ (N + a^k M). Conversely, let n ∈ (N + Am) ∩ (N + a^k M); let us write n = n′ + bm = n″ + a^k p, where n′, n″ ∈ N, b ∈ A, and p ∈ M. Then an = an′ + abm ∈ N, hence a^{k+1} p = a(n − n″) ∈ N. Consequently, p ∈ P_{k+1}; by the choice of k, it follows that p ∈ P_k, so that a^k p ∈ N and n ∈ N. This shows that N = (N + Am) ∩ (N + a^k M).

On the other hand, m ∉ N, so that N ⊊ N + Am; moreover, N_a ≠ M_a, so that a^k M ⊄ N, and N ⊊ N + a^k M. By the choice of N, the submodules N + Am and N + a^k M can be written as intersections of finitely many primary submodules of M. Then so can N, a contradiction. □

Definition (6.6.6). — Let A be a ring, let M be an A-module and let N be a submodule of M. A primary decomposition of N in M is a finite sequence (N_1, . . . , N_n) of primary submodules of M such that N = N_1 ∩ · · · ∩ N_n.

A primary decomposition as above is said to be minimal if the following two properties also hold:

(i) For every i ≠ j, AssA(M/N_i) ≠ AssA(M/N_j);

(ii) For every j, N ≠ ⋂_{i≠j} N_i.

Remark (6.6.7). — Let us restate the definition in the important case of ideals. Let A be a ring, let I be an ideal of A. A primary decomposition of I in A is an expression of the form I = Q_1 ∩ · · · ∩ Q_n, where Q_1, . . . , Q_n are primary ideals of A. Such a decomposition is said to be minimal if, moreover,

(i) For every i ≠ j, √Q_i ≠ √Q_j;

(ii) For every j, I ≠ ⋂_{i≠j} Q_i.
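For instance, in the ring k[X, Y] of polynomials over a field k, the ideal I = (X², XY) admits the minimal primary decomposition I = (X) ∩ (X², Y): the ideal (X) is prime, hence primary; the ideal (X², Y) has radical (X, Y), a maximal ideal, hence is (X, Y)-primary by example 6.6.3, d); and an element aX² + bY of (X², Y) which is divisible by X satisfies X | b, hence belongs to (X², XY). Moreover, neither ideal can be omitted, and their radicals (X) and (X, Y) are distinct.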

Corollary (6.6.8). — Let A be a ring and let M be a noetherian A-module.

Every submodule of M possesses a minimal primary decomposition.

Proof. — Let N be a submodule of M. By theorem 6.6.5, we know that N admits a primary decomposition: there are an integer n and primary submodules N_1, . . . , N_n of M such that N = N_1 ∩ · · · ∩ N_n. Let us choose such a decomposition for which n is minimal and let us prove that it is indeed a minimal primary decomposition.

Let P be a prime ideal of A which is associated to at least two modules M/N_j. By lemma 6.6.4, the intersection of all modules N_i such that AssA(M/N_i) = {P} is a P-primary submodule of M. This gives rise to a new primary decomposition of N, with strictly fewer factors. By the minimality assumption on n, this implies that the associated prime ideals of the modules M/N_i are pairwise distinct.

Now, if we could remove one of the modules from the list without changing the intersection, we would get a new primary decomposition with one factor fewer. By the minimality assumption on n, this cannot hold. □

Theorem (6.6.9). — Let A be a ring, let M be an A-module, let N be a submodule of M. Let (N_1, . . . , N_n) be a minimal primary decomposition of N; for every i, let P_i be the unique prime ideal associated to M/N_i.

a) One has AssA(M/N) = {P_1, . . . , P_n}.

b) If P_i is minimal among the associated primes of M/N, then

N_i = M ∩ N_{P_i} = {m ∈ M ; there exists a ∉ P_i such that am ∈ N}.

Proof. — a) By induction on n, it follows from lemma 6.6.4 that AssA(M/N) ⊂ {P_1, . . . , P_n}. This inclusion does not depend on the fact that the given primary decomposition is minimal. Let us now show that for every i, P_i is an associated prime ideal of M/N. Let N′ = ⋂_{j≠i} N_j, so that N = N_i ∩ N′. Since the given decomposition is minimal, N ≠ N′. Since N_i is P_i-primary, it then follows from lemma 6.6.4, c) that AssA(M/N) contains P_i, as required.

b) Let j ∈ {1, . . . , n} be such that j ≠ i. By proposition 6.5.10, Ass_{A_{P_i}}((M/N_j)_{P_i}) is the set of prime ideals of the form PA_{P_i}, where P ∈ Ass(M/N_j) = {P_j} is a prime ideal of A contained in P_i. By minimality of P_i, P_j is not contained in P_i, so that Ass_{A_{P_i}}((M/N_j)_{P_i}) = ∅. By theorem 6.5.8, (M/N_j)_{P_i} = 0 and (N_j)_{P_i} = M_{P_i}. Consequently,

N_{P_i} = (N_1)_{P_i} ∩ · · · ∩ (N_n)_{P_i} = (N_i)_{P_i}.

Let now N′_i = M ∩ (N_i)_{P_i}. It is obvious that N_i ⊂ N′_i. Conversely, let m ∈ N′_i, so that there exists a ∉ P_i such that am ∈ N_i. Since AssA(M/N_i) = {P_i}, multiplication by a is injective on M/N_i, hence m ∈ N_i. □
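In the example I = (X², XY) of k[X, Y] considered after remark 6.6.7, one also has the minimal primary decomposition I = (X) ∩ (X, Y)². The component associated with the minimal prime ideal (X) is the same in both decompositions, as required by b), while the component associated with the prime ideal (X, Y), which is not minimal among the associated primes of A/I, differs from one decomposition to the other.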

Theorem (6.6.10) (Krull's intersection theorem). — Let A be a commutative ring, let M be a noetherian A-module and let I be a finitely generated ideal of A. Let N = ⋂_{n≥0} I^n M. Then N = IN and there exists a ∈ I such that (1 + a)N = 0.

Proof. — We first prove that N = IN. To that aim, let us consider a primary decomposition of the submodule IN of M, namely IN = N_1 ∩ · · · ∩ N_m; for j ∈ {1, . . . , m}, let P_j be the unique prime ideal of A which is associated to M/N_j. Let us assume, by contradiction, that N ≠ IN and let x ∈ N ∖ IN. Let j ∈ {1, . . . , m} be such that x ∉ N_j.

Let a ∈ I; one has ax ∈ IN ⊂ N_j, so that the multiplication by a on M/N_j is not injective. By the definition of a primary submodule, one has (M/N_j)_a = 0. Since M is finitely generated, there exists an integer n ≥ 1 such that a^n M ⊂ N_j. Since the ideal I is finitely generated, there exists an integer k ≥ 1 such that I^k M ⊂ N_j. Now, using that x ∈ N = ⋂_{n≥0} I^n M, we see that x ∈ I^k M ⊂ N_j, a contradiction which establishes that N = IN.

Since M is noetherian, its submodule N is a finitely generated A-module. By Nakayama's theorem (corollary 6.1.4), there exists a ∈ I such that (1 + a)N = 0. □

Corollary (6.6.11). — Let A be a noetherian commutative ring and let I be an ideal of A. Under each of the following hypotheses, one has ⋂_{n≥0} I^n = 0:

(i) Every element of 1 + I is regular;

(ii) The ideal I is contained in the Jacobson radical of A;

(iii) One has I ≠ A and the ring A is an integral domain.

Proof. — Let J = ⋂_{n≥0} I^n. By Krull's intersection theorem 6.6.10, there exists a ∈ I such that (1 + a)J = 0. Under each of the hypotheses (i), (ii), (iii), the element 1 + a of A is regular: this is obvious in case (i); in case (ii), 1 + a is invertible (lemma 2.1.6); in case (iii), 1 + a ≠ 0. Consequently, J = 0. □
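For instance, if A is a noetherian local ring with maximal ideal M, case (ii) shows that ⋂_{n≥0} M^n = 0. Some hypothesis of this kind is needed: in the ring A = k[X]/(X² − X) over a field k, the class x of X is idempotent, so that the ideal I = (x) satisfies I^n = I for every n ≥ 1 and ⋂_{n≥0} I^n = I ≠ 0; here the element 1 − x of 1 + I is a zero divisor and (1 − x)I = 0, in accordance with theorem 6.6.10.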


Exercises

1-exo251) Show that in Nakayama’s lemma, one cannot omit the hypothesis that the

module be finitely generated. (Consider the Z-module Q.)

2-exo250) This exercise leads to another proof of Nakayama’s lemma. Let A be a ring

and let J be its Jacobson radical. Let M be a finitely generated right A-module such

that M ≠ 0.

Let N be a maximal submodule of M (exercise 6/9). Show that MJ ⊂ N; conclude

that M ≠ MJ.

3-exo178) Let A be a ring, let I be a finitely generated two-sided ideal of A such that I = I². Show that there exists a central element e ∈ A such that e = e² and I = eA = Ae. (Apply Nakayama's lemma to find an element a ∈ I such that I(1 + a) = 0.)

4-exo117) Let A be a commutative ring, let V be a finitely generated A-module.

a) For any maximal ideal M of A, one defines

3V(P) = dimAP/PAP

(VP/PVP).b) Let M be a maximal ideal of A. Prove that 3V(M) = dim

A/M(V/MV).c) Let P be a prime ideal of A. Show that the AP-module VP is generated by 3V(P)

elements.

d) Show that there exists 0 ∈ A P such that the A0-module V0 is generated by 3V(P)elements.

e) Let Q be a prime ideal of A such that 0 ∉ Q. Show that 3V(Q) 6 3V(P). (In other

words, the function P ↦→ 3V(P) from Spec(A) to N is upper-semicontinuous, when

Spec(A) is endowed with the Zariski topology.)

5-exo642) Let A be a ring, let M,N be A-modules and let 5 : M→ N be a morphism.

Assume that M has finite length.

a) Show that Ker( 5 ) and Im( 5 ) have finite lengths and that ℓA(Im( 5 )) + ℓA(Ker( 5 )) =ℓA(M).

b) Assume that ℓA(N) > ℓA(M); prove that 5 is not injective.c) Assume that ℓA(N) > ℓA(M); prove that 5 is not surjective.d) Assume that N = M (so that 5 is an endomorphism). Show that the following

conditions are equivalent: (i) 5 is bijective; (ii) 5 is injective; (iii) 5 is surjective.

6-exo566) Let A be a ring, let M be an A-module of finite length and let D be an

endomorphism of M.

a) Show that there exists a smallest integer ? > 0 such that Ker(D=) = Ker(D?) forany = > ?.

b) Show that there exists a smallest integer @ > 0 such that Im(D=) = Im(D@) for any= > @.

c) Show that ? = @.

d) Show that Ker(D?) and Im(D?) are direct summands in M.


7-exo567) Let A be a ring, let M be an A-module, let (S8)8∈I be a family of simple

submodules of M such that M =∑8∈I S8 . (One says that M is a semisimple A-module.)

a) Let N be a submodule of M. Using Zorn’s lemma that there exists a maximal

subset J of I such that N and the submodules S9 , for 9 ∈ J, are in direct sum in their

sum. (You may restrict to the case where I is finite, if you wish to avoid Zorn’s lemma.)

Show that M = N ⊕(⊕

9∈J S9

).

b) In particular, any submodule of M has a direct summmand in M.

c) Show that there exists J ⊂ I such that M is isomorphic to the module

⊕9∈J S9 .

8-exo574) Let K be a field.

a) For any non-zero polynomial P ∈ K[X], show that K[X]/(P) is a K[X]-module of

finite length equal to the number of irreducible factors of P (repeated according to

their multiplicities). What happens if K is algebraically closed.

b) Let A = K[X1, . . . ,X=]. Show that an A-module M has finite length if and only if

dimK(M) < ∞. (Use Hilbert’s Nullstellensatz.)

c) If, moreover, K is algebraically closed, prove that ℓA(M) = dimK(M).9-exo584) Let A be a ring and let M be a non-zero right A-module which is finitely

generated.

a) Show that the set of all submodules of M which are distinct from M is inductive.

b) Show that for any submodule N of M such that N ≠ M, there exists a submodule P

of M such that N ⊂ P ⊂ M and M/P is a simple A-module.

c) Assume that A possesses a unique maximal right ideal I. Show that

HomA(M,A/I) ≠ 0.

d) Set A = Z and M = Q. Show that there does not exist any submodule P ⊂ M such

that M/P is simple.

10-exo622) Let A be a principal ideal domain.

a) Let 0 be any nonzero element of A. Prove that the A-module A/(0) has finite lengthand compute its length in terms of a decomposition of 0 as a product of irreducible

elements.

b) Using the Jordan-Hölder Theorem, give a second proof of the uniqueness property

of decomposition of 0 as a product of irreducible elements.

11-exo881) Let A be a ring. One says that an A-module M is indecomposable if M ≠ 0

and if there does not exist non-zero submodules N,N′ of M such that M = N ⊕ N′.

a) Let M be an A-module such that EndA(M) is a local ring. Prove that M is

indecomposable.

b) Let M be an indecomposable A-module of finite length. Prove that EndA(M) is alocal ring.

c) Let M be an A-module of finite length. Prove that there exists a finite family

(N1, . . . ,NA) of indecomposable submodules of M such that M = N1 ⊕ · · · ⊕ NA .


d) Let M be an A-module and let N,N′ be submodules of M such that M = N ⊕ N′.

Assume that N is indecomposable. Prove that for every D ∈ EndA(M), either D or

idM −D induces an isomorphism from N to a direct summand of N′in M.

e) Let M be an A-module of finite length. Let (N1, . . . ,NA) and (N′1, . . . ,N′B) be finite

families of indecomposable submodules ofM such thatM = N1⊕· · ·⊕NA = N′1⊕· · ·⊕N

′B .

Prove that A = B and that there exists a bijection � ∈ SA such that N′9' N�(9) for every

9 ∈ {1, . . . , A}. (Theorem of Krull–Remak–Schmidt)

12-exo1136) Let A be a local noetherian commutative ring, let M be its maximal ideal.

Let I be an ideal of A. Show that A/I has finite length if and only if there exists = > 0

such that M= ⊂ I.

13-exo066) Let A be a ring, let (I=) be an increasing family of left ideals of A. Let

I =⋃

I= . Show that I is a left ideal of A. If I is finitely generated, show that the

sequence (I=) is stationary.14-exo135) Let A be a ring, let I, J be two-sided ideals of A such that I ∩ J = (0). Showthat A is left noetherian if and only if both A/I and A/J are left noetherian.15-exo050) Examples of non-noetherian rings. Show that the following commutative rings

are not noetherian.

a) The ring :[X1,X2, . . . ,X= , . . . ] of polynomials in infinitely many indeterminates

with coefficients in a non zero commutative ring :;

b) The ring �0(R,R) of real valued continuous functions on R;

c) The ring �∞(R,R) of real valued indefinitely differentiable functions on R. Show

however that the ideal of functions vanishing at 0 is a principal ideal;

d) The subring of C[X,Y] generated as a submodule by C and the ideal (X).16-exo182) Letℱ be the set of all polynomials P ∈ Q[X] such that P(=) ∈ Z for every

= ∈ Z.a) Show thatℱ is a Z-subalgebra of Q[X].b) Let 5 : Z → Z be a function. One assumes that 5 (0) ∈ Z and that there exists

P ∈ ℱ such that 5 (=) = P(= + 1) −P(=) for every = ∈ Z. Show that there exists a unique

element Q ∈ ℱ such that 5 (=) = Q(=) for every = ∈ Z.c) Show that the polynomials

(X

0

)= 1,

(X

1

)= X,

(X

2

)= X(X − 1)/2, . . . ,

(X

?

)=

X(X − 1) . . . (X − ? + 1)/?!, . . . form a basis ofℱ as a Z-module.

d) Show that the ring ℱ is not noetherian.

17-exo064) Let A be a ring, let M be a noetherian right A-module and let ! : M→M

be an endomorphism of M. Show that there exists an integer = > 1 such that

Ker(!=) ∩ Im(!=) = (0).18-exo648) Let A be a commutative ring which is not noetherian. Letℱ be the set of all

ideals of A which are not finitely generated, so thatℱ ≠ ∅.

a) Prove that ℱ is an inductive set for the inclusion relation. Let P be a maximal

element of ℱ. In the rest of the exercise, we shall prove that P is a (non-finitely


generated) prime ideal of A. We argue by contradiction. Let 0, 1 ∈ A be such that

01 ∈ P while 0 ∉ P and 1 ∉ P.

b) Prove that there exist D1, . . . , D< ∈ P and E1, . . . , E= ∈ A such that P + (0) =(D1, . . . , D< , 0) and (P : (0)) = (E1, . . . , E=).

c) Prove that P = (D1, . . . , D< , 0E1, . . . , 0E=). Derive from this contradiction that P is

a prime ideal of A.

This proves that a commutative ring is noetherian if and only if every prime ideal is finitely

generated, a theorem of I. S. Cohen.

19-exo808) Let A be a commutative ring. One assumes that for every maximal ideal M

of A, the fraction ring AM is noetherian and that, for every nonzero 0 ∈ A, the set of

maximal ideals of A containing 0 is finite.

Let I be a nonzero ideal of A. Let M1, . . . ,M= be the maximal ideals of A that

contain I.

a) Prove that there exist elements 01, . . . , 0< ∈ I such that every maximal ideal of A

containing (01, . . . , 0<) is one of the M9 .

b) For every 8 ∈ {1, . . . , =}, prove that there exist an integer <8 and elements 18 , 9 (for

9 ∈ {1, . . . , <8}) such that 18 , 9/1, . . . , 18 ,<8/1 generate the ideal IAM8 of AM8 .

c) Prove that I is finitely generated.

d) Prove that A is noetherian.

20-exo879) Let R be a commutative noetherian ring and let I be an ideal of R; let

(01, . . . , 0=) be a generating family of I. Let J =⋂:>1

I:and let 1 ∈ J.

a) Show that for every integer : > 1, there exists a polynomial P: ∈ R[T1, . . . ,T=],homogeneous of degree :, such that 1 = P:(01, . . . , 0=).

b) Show that there exist an integer < > 1 and, for every : ∈ {1, . . . , <}, a polynomial

Q: ∈ R[T1, . . . , T=] which is homogeneous of degree :, such that P<+1 = Q<P1 + · · · +Q1P< .

c) Prove that there exists 0 ∈ I such that 1 = 01. In particular, one has J = IJ. (This

proof of Krull’s intersection theorem is due to Perdry (2004).)

21-exo1126) One says that a commutative ring R is graded if there is a family (R=)=>0

of subgroups of (R,+) such that R =⊕∞

==0R= and R= · R< ⊂ R=+< for all =, < > 0.

a) Show that R0 is a subring of R and that I =⊕

=>1R= is an ideal of R.

b) One assumes that R0 is a noetherian ring and that R is finitely generated as an R0

algebra. Show that R is noetherian.

c) Conversely, let us assume that R is a noetherian ring. Show that R0 is noetherian.

Show that there is an integer A > 0, elements G1, . . . , GA ∈ R, integers =1, . . . , =A such

that G8 ∈ R=8 for every 8 and such that I = (G1, . . . , GA). Show by induction on = that

R= ⊂ R0[G1, . . . , GA]. Conclude that R is a finitely generated R0-algebra.

22-exo1127) Let R = ⊕R= be a commutative noetherian graded ring. One says that

a R-module M is graded if there is a family (M=)=>0 of subgroups of M such that

M =⊕

=>0M= and such that R= ·M= ⊂ M=+< for all <, = > 0.


Let M be a graded R-module.

a) Observe that M= is a R0-module. If M is finitely generated, show that for each =,

M= is a finitely generated R0-module.

b) Assume that R0 is an artinian ring. Let PM(C) be the power series with integer

coefficients given by

PM(C) =∞∑==0

ℓR0(M=)C= .

Assume that there is an integer A, elements G1, . . . , GA ∈ R, integers =1, . . . , =A such that

G8 ∈ R=8 for every 8 and such that R = R0[G1, . . . , GA]. Prove by induction on A that there

exists a polynomial 5M ∈ Z[C] such that

5M(C) = PM(C)A∏8=1

(1 − C=8 ).

c)One assumesmoreover that =8 = 1 for every 8. Show that there is a polynomial!M ∈Q[C] such that

ℓR0(M=) = !M(=)

for every large enough integer =.

23-exo858) Let A be a noetherian commutative ring and let I be an ideal of A.

a) Let B be the set of all polynomials P ∈ A[T] of the form P =∑0=T

=, where 0= ∈ I

=

for every =. Show that B is a noetherian ring. (Use exercise 6/21.)

b) Let M be a finitely generated A-module and let N be a submodule of M. Let

M[T] = M(N)

be considered as an A[T]-module via T · (<0, <1, . . . ) = (0, <0, <1, . . . ).Let P (resp. Q) be the subset of M[T] consisting of sequences (<=) such that <= ∈ I

=M

(resp. such that <= ∈ N∩ I=M) for every = ∈ N. Prove that P is a B-submodule of M[T],

and that Q is B-submodule of P.

c) Prove that there exists < ∈ N such that N ∩ I=M = I

=−<(N ∩ I<

M) for everyinteger = > <. (A result known as the Artin–Rees lemma.)

d) Let 5 : M→ N be a morphism of finitely generated A-modules. Prove that there

exists an integer < such that

5 −1(I=N) = Ker( 5 ) + I=−< · 5 −1(I<M) and 5 (M) ∩ I

=M ⊂ 5 (I=−<M)

for every integer = > <.

24-exo859) Let A be a noetherian commutative ring and let I be an ideal of A. Let M be

a finitely generated A-module and let N =⋂=∈N I

=M.

a) Prove that there exists 0 ∈ I such that (1+ 0)N = 0. (Use exercise 6/23 and Nakayama’s

lemma.)

b) Assuming that I is contained in the Jacobson radical of A, prove that N = 0.

c) Let P be a prime ideal of A containing I. Prove that there exists 0 ∈ A P such

that 0N = 0.


d) Assume that A is an integral domain, I ≠ A, and M is torsion free. Prove that

N = 0.

25-exo047) In this exercise, we explicit all Z-submodules of Q.

For any nonzero 0 ∈ Q, let E?(0) be the exponent of ? in the decomposition of 0 as a

product of prime factors; we also set E?(0) = +∞. Let V =∏

? prime(Z∪ {−∞,+∞}) and

E : Q→ V be the map given by E(0) = (E?(0))? . The set V is endowed with the product

ordering: for two families 0 = (0?)? and 1 = (1?)? in V, say that 0 6 1 if 0? 6 1? for

every prime number ?.

a) Let G and H be two elements of Q such that GZ ⊂ HZ; show that E(H) 6 E(G).b) Show that any subset of V admits a least upper bound and a greatest lower bound

in V.

c) Let M be a submodule of Q. Assume that M is finitely generated. Show that there

exists < ∈ Q such that M = <Z. Show that the element E(<) of V does not depend on

the choice of a generator < ∈ M; it will be denoted E(M). Then, prove that

M = {G ∈ Q ; E(G) > E(M)}.

d) For an arbitrary submodule of Q, set

E(M) = inf

G∈QE(G).

Conversely, for any D ∈ V, set

MD = {G ∈ Q ; E(G) > D}.

Show that MD = 0 if and only if

∑? sup(0, D?) = +∞; explicitly, MD = 0 if and only if

one of the two following conditions holds:

(i) There exists ? such that D? = +∞;

(ii) The set of all prime numbers ? such that D? > 0 is infinite.

e) Show that the map M ↦→ E(M) induces a bijection from the set of all nonzero

submodules of Q to the subset V0of V consisting of families D ∈ V such that∑

? sup(0, D?) < ∞.

f ) For D ∈ V0, show that MD is finitely generated if and only if

∑?

��D? �� < ∞, that is if

and only if:

(i) For every ?, D? ≠ −∞;

(ii) The set of all primes ? such that D? < 0 is finite.

If this holds, show that MD is generated by one element.

g) For D ∈ V0, show that MD contains Z if and only if D 6 0, namely D? 6 0 for

every ?. If this holds, show that MD/Z is artinian if and only if the set of all primes ?

such that D? < 0 is finite.

h) Let D ∈ V0be such that D 6 0. Show that MD/Z has finite length if and only if

(i) For every ?, D? ≠ −∞;

(ii) The set of all primes ? such that D? < 0 is finite.

More precisely, show that ℓZ(MD/Z) =∑?

��D? ��.


26-exo1154) Let A be a ring, let M be an artinian A-module and let D be an endomor-

phism of M.

a) Assume that D is injective. Show that D is bijective.

b) Give an example where D is surjective but not bijective.

27-exo1162) Let A be a ring, let M be an artinian A-module and let D be an endomor-

phism of M.

a) Show that there exists an integer = > 1 such that Ker(!=) + Im(!=) = M.

b) Assume moreover that M has finite length. Using exercise 6/17, show that the

preceding sum is a direct sum.

28-exo1213) Let A be a commutative ring.

a) Let V be an A-module of finite length. Show that the canonical morphism

V→∏M∈Max(A)VM (sending G ∈ M to the family of fractions G/1, Max(A) being the

set of all maximal ideals of A) is an isomorphism of A-modules.

b) Assume that A is an artinian ring. Show that the canonical morphism A →∏M∈Max(A)AM is an isomorphism of rings: Any commutative artinian ring is a product of

local rings.

29-exo868) Let A be a ring and let M be a right A-module. Let A′ = EndA(M) be

the ring of endomorphisms of M. Then M is seen as a left A′-module and we set

A′′ = EndA

′(M).a) For 0 ∈ A, let �M(0) be the homothety of ratio 0 in M. Prove that �M(0) ∈ A

′′.

Prove that the map �M : A→ A′′is a morphism of rings.

One says that the A-module M is balanced if the morphism �M is an isomorphism.

b) Let = > 1 be an integer such that M=is balanced. Prove that M is balanced.

(Represent endomorphisms of M=by matrices with entries in EndA(M).)

c) Let M be a right A-module. For every 4 ∈ A ⊕ M, prove that there exists

D ∈ EndA(A3 ⊕M) such that D(1, 0) = 4. Prove that A3 ⊕M is balanced.

d) Let M be a right A-module such that A is a quotient of M=, for some = > 1 (one

says that M is generator). Prove that M is balanced.

e) Assume that the ring A is simple. Prove that every nonzero right ideal of A is

generator, hence balanced (Rieffel (1965)).

30-exo869) Let A be a simple artinian ring. Following Henderson (1965) and Nicholson

(1993), we prove in this exercise a theorem of Wedderburn according to which A is

isomorphic to a matrix ring over a division ring.

a) Prove that A contains a right ideal I which is a simple right A-module.

b) Prove that I = A · I and deduce from this that I2 ≠ 0.

c) Prove that there exists an idempotent element 4 ∈ A such that I = 4A and such

that D = 4A4 is a division ring.

d) For 0 ∈ A, the homothety �I(0) of ratio 0 in I is an element of EndD(I); prove thatthe map �I : A→ EndD(I) is an isomorphism. (Use that 1 ∈ A4A.)


e) Prove that I is finitely dimensional as a left D-vector space. (Observe that the

endomorphisms of I of finite rank form a non-trivial two-sided ideal.) Conclude that there

exists an integer = > 1 such that A 'M=(D).31-exo842) Let A be a commutative ring and let M be a finitely generated A-module.

Let I be an ideal of A. Prove that SuppA(M/IM) = V(I) ∩ V(AnnA(M)).

32-exo844) Let A be a commutative ring and let U ∈ M<,=(A). Let D : A= → A

<be the

associated morphism of A-modules and let M = Coker(D).a) Let P be a prime ideal of A. Prove that P ∈ Supp

A(M) if and only if det(V) ∈ P for

every < × < submatrix V of A.

b) Prove that Fit0(M) is finitely generated and V(Fit0(M)) = SuppA(M).

c) Prove that Spec(A) SuppA(M) is a quasi-compact subset of Spec(A).

33-exo1061) a) Let A be a ring. Let M and N be A-modules such that M ⊂ N. Show

that AssA(M) ⊂ AssA(N).b) Give examples that show that, in general, there is no relation between the associated

prime ideals of a module and those of a quotient.

c) Let M be an A-module, let M1 and M2 be submodules of M such that M = M1 ⊕M2.

Prove that AssA(M) = AssA(M1) ∪AssA(M2).d) One only assumes that M = M1+M2. How can you relate AssA(M)with AssA(M1)

and AssA(M2)?34-exo1065) Let A be a commutative ring, let G ∈ A. Assume that G is regular but not

a unit. Show that for any integer = > 1, A/GA and A/G=A have the same associated

prime ideals.

35-exo1304) Let A be a commutative ring and let I be an ideal of A such that I ≠ A.

Show that I is primary if and only if every element of A/I is either nilpotent or regular.36-exo1301) Let A be a commutative ring, let 5 : M

′→M be a morphism of A-modules

and let N be a submodule of M′. If N is primary, show that 5 −1(N) is a primary

submodule of M. Give two proofs, one relying on the manipulation of associated

prime ideals, the other on lemma 6.6.2.

37-exo1302) Let A be a commutative ring.

a) Let M be a maximal ideal of A. For any integer = > 1, show that M=is an

M-primary ideal of A.

b) Let P be a prime ideal of A, let = > 1 be an integer and let P(=)

be the inverse image

in A of the ideal P=AP of AP. Show that P

(=)is a P-primary ideal of A.

c) Prove that P=is P-primary if and only if P

= = P(=)

.

d) Give an example of a prime ideal P such that P=is not a P-primary ideal.

38-exo302) Let : be a field, let A = :[X1,X2, . . . ] be the ring of polynomial in infinitely

many indeterminates X1, . . . . Let M = (X1, . . . ) be the ideal generated by the indeter-

minates, I = (X2

1, . . . ). be the ideal generated by their squares, J = (X1,X

2

2, . . . ) be the

ideal generated by the elements X== .


a) Show that the nilradical of I and J is equal to M and that M is a maximal ideal. In

particular, I and J are primary ideals.

b) Show that there is no integer = such that M= ⊂ J.

c) Show that there is no element 0 ∈ A such that M = {G ∈ A ; 0G ∈ I}.d) Prove that AssA(A/I) = {M} but that there does not exist any 5 ∈ A/I such that

M = AnnA( 5 ).39-exo301) Let A be a commutative ring, let M be an A-module.

Show that the canonical morphism from M to

∏P∈AssA(M)MP is injective.

40-exo258) Let A be a commutative ring, let M be an A-module and let S be a

multiplicative subset of A. Let N be the kernel of the canonical morphism from M

to S−1

M.

a) Show that a prime ideal P of A belongs to AssA(N) if and only if it belongs

to AssA(M) and P ∩ S ≠ ∅.

b) Show that a prime ideal P of A belongs to AssA(M/N) if and only if it belongs

to AssA(M) and P ∩ S = ∅.

41-exo822) Let A be a commutative noetherian ring and let M,N be finitely generated

A-modules.

a) Assume that A is local, with maximal ideal P. Prove that M ≠ 0 if and only if there

exists a non-zero morphism 5 : M→ A/P.b) We still assume that A is local, with maximal ideal P. Prove that

P ∈ AssA(Hom(M,N)) if and only if M ≠ 0 and P ∈ AssA(N).c) Prove that AssA(Hom(M,N)) = Supp(M) ∩AssA(N).

42-exo885) Let 5 : A→ B be a morphism of commutative rings. Let N be a B-module

and let M be an A-submodule of N.

a) Let P ∈ AssA(M); prove that there exists Q ∈ AssB(N) such that P = 5 −1(Q).b) Let Q ∈ AssB(N), let N

′be a Q-primary submodule of N and let M

′ = N′ ∩M.

Assume that M′ ≠ M. Prove that M

′is a P-primary submodule of M, where P = 5 −1(Q).

43-exo647) Let A be a commutative von Neumann ring.

a) Prove that every prime ideal of A is associated to A.

b) Let P be a prime ideal of A which is of the form AnnA(0), for some 0 ∈ A.

Prove that P is finitely generated. (If E is a field, the von Neumann ring ENshows that

proposition 6.5.12 does not hold without the hypothesis that A be noetherian.)

44-exo650) Let A be a commutative ring, let S be a multiplicative subset of A and let M

be an A-module. Let N be the kernel of the canonical morphism 8 : M→ S−1

M.

a) Prove that AssA(M/N) is the set of primes P ∈ AssA(M) which are disjoint from S.

b) Prove that AssA(N) is the set of primes P ∈ AssA(M)which meet S.

45-exo651) Let A be a commutative ring.


a) Let M be an A-module, let P ∈ AssA(M) be an associated ideal of M and let < ∈ M

be such that P is minimal among all prime ideals containing AnnA(<). Prove that forevery 0 ∈ P, there exists 1 ∈ A P and = ∈ N such that 0=1< = 0. (First treat the case

where A is local with maximal ideal P.)

b) If A is reduced, then AssA(A) is the set of minimal prime ideals of A. (First treat

the case where A is local and and P ∈ AssA(A) is its maximal ideal.)

c) Assume that AP is reduced for every prime ideal P of A. prove that A is reduced.

d) Assume that for every P ∈ AssA(A), the local ring AP is a field. Prove that for every

prime ideal P of A, the local ring AP is reduced. (Considering a minimal counterexample,

prove that there exists 0 ∈ P which is not a zero divisor and introduce the ring AP[1/0].)Conclude that A is reduced.

46-exo055) Let A be the ring of continuous functions on [−1; 1].a) Is the ring A an integral domain? Is it reduced?

b) Let I be the ideal of A consisting of functions 5 such that 5 (0) = 0. Show that I is

not finitely generated. Show that I = I2.

c) Show that the ideal (G) is not primary. (In fact, its radical is not prime.)

d) Prove that

⋂=(G=) ≠ 0.

47-exo890) Let A = �([0; 1],R) be the ring of real valued, continuous functions on [0; 1].The support S( 5 ) of a function 5 ∈ A is the closure of {G ; 5 (G) ≠ 0}.

a) Let 5 ∈ A; what is AnnA( 5 )? Prove in particular it is a radical ideal and that

A/AnnA( 5 ) ' �(S( 5 ); R).b) Prove that AnnA( 5 ) is not a prime ideal.

c) Prove that the zero ideal (0) has no primary decomposition in A.

48-exo058) Let : be a field, let A be the ring A = :[X,Y,Z]/(XY − Z2). One writes

G = cl(X), etc. Let P be the ideal (G, I) ⊂ A.

a) Show that P is a prime ideal.

b) Show that P2is not a primary ideal. (Consider the multiplication by H in A/P2

.)

c) Show that (G) ∩ (G2, H, I) is a minimal primary decomposition of P2.

49-exo825) Let E be a field, let I be the ideal (X) ∩ (X,Y)2 of the polynomial ring E[X,Y].a) Prove that I = (X2,XY).Let A = E[X,Y]/I be the quotient ring and let G, H be the classes of X,Y modulo I.

Let = be an integer such that = > 1.

b) Prove that the ideal (H=) of A is (G, H)-primary and that 0 = (G) ∩ (H=) is a minimal

primary decomposition of the ideal 0 in A.

c) Let 0 ∈ E×. Prove that the ideal (G + 0H=) is (G, H)-primary and that 0 = (G + 0H=)

is a (G, H)-primary ideal and that 0 = (G) ∩ (G + 0H=) is also a minimal primary

decomposition of 0.

50-exo887) Let A be a commutative noetherian ring.

Page 334: (MOSTLY)COMMUTATIVE ALGEBRA

322 CHAPTER 6. FURTHER RESULTS ON MODULES

a) Let I be an ideal of A and let 0 ∈ I be a regular element Let P be a prime ideal of A

such that P ∈ AssA(A/I). Prove that there exist a P-primary ideal Q and an ideal J of A

such that I = Q ∩ J and P ∉ AssA(A/J).b) Prove that J contains a regular element G.

c) Prove that I : (G) = Q : (G) and that this ideal is P-primary.

d) Prove that there exists an integer = > 0 such that P= ⊄ I : (G) and P

=+1 ⊂ I : (G).Prove that P

=contains a regular element H such that H ∉ I : (G).

e) Prove that P = I : (GH).f ) Let 0 ∈ A be a regular element, letP be a prime ideal ofA such thatP ∈ AssA(A/(0)).

Let 1 ∈ A be a regular element such that 1 ∈ P; prove that P ∈ AssA(A/(1)). (Find

regular elements G, H ∈ A such that 1G = 0H and P = (0) : (G); then prove that P = (1) : (H).)51-exo889) Let K be a field and let = ∈ N. Let A = K[T1, . . . , T=].a) Let 01, . . . , 0= be integers. Prove that the ideal (T01

1, . . . , T

0== ) of A is primary and

compute its radical.

b) Let J be a monomial ideal of A and let 5 , 6 be two monomials. If 5 and 6 are

coprime, then J + ( 5 6) = (J + ( 5 )) ∩ (J + (6)).c) Compute a minimal primary decomposition of the ideal (X2

YZ,Y2Z,YZ

3) ofK[X,Y,Z].

d) Let J be a monomial ideal of A. Show that it admits a minimal primary decompo-

sition which consists in monomial ideals.

Page 335: (MOSTLY)COMMUTATIVE ALGEBRA

CHAPTER 7

FIRST STEPS IN HOMOLOGICALALGEBRA

In this chapter, we explain a few elementary notions in homological algebra.

At the heart of algebraic topology, where it has striking applications (such

as Brouwer’s theorem, for example), the algebraic formalism of homological

algebra is now spread in many fields of mathematics — algebraic topology,

commutative algebra, algebraic geometry, representation theory for example —

and its applications range as far as robotics!

Algebraic topology and differential calculus naturally furnish sequences

M0,M1, . . . of modules linked by morphisms 50 : M0→M1, 51 : M1→M2,

etc., such that the composites 51 ◦ 50, 52 ◦ 51, etc. of two successive morphisms

vanish. One important example of such a situation, the De Rham complex of

an open subset U of R2, is given in the last section of this chapter, where elements

of M0, M1, M2 respectively are functions, vector fields, and functions on U, and

50 and 51 correspond to gradient and curl. The relation 5? ◦ 5?−1 = 0 means

that Im( 5?−1) ⊂ Ker( 5?), and a good reason for the inclusion is the equality,

in which case one says that the sequence is exact. The game of homological

algebra starts by systematically associating with such sequences its homology

modules, defined has Ker( 5?)/Im( 5?−1) which quantify in what respect that

inclusion is not an equality.

Two classes of modules have particularly good properties, that of projective

and injective modules. I define them in sections 7.3 and 7.4. While their role

in more advanced homological algebra is very important, it is barely touched

here, and I only prove the theorem of Kaplansky that projective modules over a

local ring are free, and the theorem of Baer that every module is a submodule of

an injective module.

Although one may start with exact sequences, algebraic constructions tend

to lose this property and I initiate the study of the exactness of some functors,

Page 336: (MOSTLY)COMMUTATIVE ALGEBRA

324 CHAPTER 7. FIRST STEPS IN HOMOLOGICAL ALGEBRA

such as the functor of homomorphisms. This is the occasion of introducing

adjoint functors.

7.1. Diagrams, complexes and exact sequences

The notion of an exact sequence allows to combine many algebraic

properties in a diagram which is quite simple to write down as well as

to read.

7.1.1. — We have already seen a few diagrams in this book, such as the

one on p. 34 that immediately follows and illustrates theorem 1.5.3.

Such a diagram starts from a quiver, that is a collection of vertices and

arrows linking one vertex to another; moreover, the vertices are labeled

by an object from a category, and an arrow linking one vertex to another

is labeled by a morphism from the object that labels its source to the

object that labels its target.

Some arrows might be drawn using a dashed line; this usually reflects

that the existence or the uniqueness of the corresponding morphisms is

the conclusion of a mathematical statement that the diagram illustrates.

A path in a diagram is a finite sequence of consecutive arrows, meaning

that the target of an arrow is the source of the next one. Given such a

path, one may compose the morphisms by which the arrows are labeled.

A diagram is said to be commutative if for any two paths linking one

vertex to another, the two morphisms that they define coincide.

Definition (7.1.2). — Let A be a ring. A complex of A-modules is a diagram

M1

51−→M2

52−→ · · · →M=−1

5=−1−−−→M=

where M1, . . . ,M= are A-modules and 51, . . . , 5=−1 are homomorphisms such

that 58 ◦ 58−1 = 0 for 8 ∈ {2, . . . , = − 1}.One says that such a diagram is an exact sequence if Ker( 58) = Im( 58−1)

for every 8 ∈ {2, . . . , = − 1}.

Observe that the condition 58 ◦ 58−1 = 0 means that Im( 58−1) ⊂ Ker( 58).Consequently, an exact sequence is a complex.

Sometimes, complexes are implicitly extended indefinitely by adding

null modules and null morphisms.

Page 337: (MOSTLY)COMMUTATIVE ALGEBRA

7.1. DIAGRAMS, COMPLEXES AND EXACT SEQUENCES 325

Definition (7.1.3). — Let

M1

51−→M2→ · · · →M=−1

5=−1−−−→M=

be a complex ofA-modules. Its homologymodules are themodulesH2, . . . ,H=−1

defined by H8 = Ker( 58)/Im( 58−1) for 2 6 8 6 = − 1.

As a consequence of this definition, a complex is an exact sequence if

and only if its homology modules are zero.

If the complex is extended by null modules, we get also H0 = Ker( 51)and H= = M=/Im( 5=−1).

Example (7.1.4). — Let A be a ring and let 5 : M→ N be a morphism of

A-modules.

The diagram 0 → M

5−→ N → 0 is a complex, and its homology

modules are Ker( 5 ) and N/Im( 5 ) = Coker( 5 ) (see example 3.4.3). In

particular, this diagram is an exact sequence if and only if 5 is an

isomorphism.

In general, note that Ker( 5 ) and Coker( 5 ) sit in an exact sequence

0→ Ker( 5 ) →M

5−→ N→ Coker( 5 ) → 0.

Proposition (7.1.5). — A diagram of A-modules

0→ N

8−→M

?−→ P→ 0

is an exact sequence if and only if

– The morphism 8 is injective;

– One has Ker(?) = Im(8) ;– The morphism ? is surjective.

Then, 8 induces an isomorphism from N to the submodule 8(N) of M, and ?

induces an isomorphism of M/8(N) with P.

Such diagrams are very important in practice, and called short exact

sequences.

Proof. — It suffices to write down all the conditions of an exact sequence.

The image of themap 0→ N is 0; it has to be the kernel of 8, whichmeans

that 8 is injective. The next condition is Im(8) = Ker(?). Finally, the image

of ? is equal to the kernel of the morphism P→ 0, which means that ? is

Page 338: (MOSTLY)COMMUTATIVE ALGEBRA

326 CHAPTER 7. FIRST STEPS IN HOMOLOGICAL ALGEBRA

surjective. The rest of the proof follows from the factorization theorem:

if ? is surjective, it induces an isomorphism from M/Ker(?) to P; if 8 is

injective, it induces an isomorphism from N to 8(N) = Ker(?). �

Lemma (7.1.6). — Let A be a ring and let

0→ N

8−→M

?−→ P→ 0

be a short exact sequence of A-modules.

a) The map @ ↦→ Im(@) is a bijection from the set of right inverses of ?

(morphisms @ : P→M such that ? ◦ @ = idP) to the set of direct summands

of 8(N) in M.

b) Themap 9 : Ker(9) is a bijection from the set of left inverses of 8 (morphisms

9 : M→ N such that 9 ◦ 8 = idN) to the set of direct summands of 8(N) in M.

c) The morphism ? has a right inverse if and only if the morphism 8 has

a left inverse, if and only if the submodule 8(N) = Ker(?) of M has a direct

summand.

Definition (7.1.7). — An exact sequence which satisfies the three equivalent

properties of lemma 7.1.6, c), is said to be split

Proof. — a) Let @ : P → M be a right inverse of ?, let Q = @(P) bethe image of @ and let us show that Q is a direct summand of 8(N)in M. It suffices to prove that for any < ∈ M, there is a unique pair

(G, H) such that G ∈ N, H ∈ P and < = 8(G) + @(H). If < = 8(G) + @(H),then ?(<) = ?(8(G)) + ?(@(H)) = H since ? ◦ 8 = 0 and ? ◦ @ = idP. Then,

8(G) = <−@(?(<)). Observe that ?(<−@(?(<))) = ?(<)−(?◦@)(?(<)) = 0;

by the definition of an exact sequence, Im(8) = Ker(?), so that < −@(?(<)) ∈ Im(8). Since 8 is injective, there is a unique G ∈ N such that

8(G) = < − @(?(<)). Then, H = ?(<) satisfies < − 8(G) = @(?(<)) = @(H).Moreover, if< = 8(G)+ @(H) = 8(G′)+ @(H′), then 8(G−G′) = @(H′− H); then0 = ? ◦ 8(G − G′) = ? ◦ @(H′ − H) = H′ − H, hence H′ = H; then 8(G) = 8(G′),hence G = G′ since 8 is injective.Conversely, let Q ⊂ M be a direct summand of 8(N) and let us prove

that the morphism ? |Q : Q → P is an isomorphism. Its kernel is

Q ∩ Ker(?) = Q ∩ 8(N) = 0, so that it is injective. On the other hand, let

G ∈ P; since ? is surjective, there exists < ∈ M such that ?(<) = G; let us

Page 339: (MOSTLY)COMMUTATIVE ALGEBRA

7.1. DIAGRAMS, COMPLEXES AND EXACT SEQUENCES 327

write < = H + =, with = ∈ 8(N) and H ∈ Q; one has G = ?(<) = ?(H), sothat ? |Q is surjective. Let @ : P→ Q be the inverse of this isomorphism;

one has @ ◦ ? |Q = idQ and ? ◦ @ = idP; in particular, @ is a right inverse

of ?.

These constructions @ ↦→ Q and Q ↦→ @ are inverse one to another; this

proves a).

b) Let Q be the kernel of 9. Let us show that Q is a direct summand

of 8(N). If G ∈ Q ∩ 8(N), one may write G = 8(H) for some H ∈ N, so

that 0 = 9(G) = 9(8(H)) = H. Consequently, H = 0 and G = 0. Moreover,

let G ∈ M and set H = G − 8(9(G)). Then, 9(H) = 9(G) − 9(8(9(G)) = 0 so

that H ∈ Q. Therefore, G = 8(9(G)) + H belongs to 8(N) + Q. Hence

M = Q ⊕ 8(N).Conversely, let Q be a direct summand of 8(N) in M. Since 8 is injective,

any element of M can be written uniquely as G + 8(H), for some G ∈ Q

and some H ∈ N. Let 9 : M → N be the map such that 9(<) = H if

< = G + 8(H) with G ∈ Q and H ∈ N. One checks directly that 9 is a

morphism. Finally, for any H ∈ N, the decomposition < = 8(H) = 0+ 8(H)shows that 9(<) = H, so that 9 ◦ 8 = idN.

Again, these two constructions 9 ↦→ Q and Q ↦→ 9 are inverse one to

the other, which proves b).

Finally, assertion c) follows from a) and b). �

Examples (7.1.8). — 1) Any subspace of a vector space has a direct

summand. Consequently, every exact sequence of modules over a

division ring is split.

2) Let A = Z, let M = Z, let N = 2Z and let P = M/N = Z/2Z. The

natural injection 8 of N to M and the canonical surjection from M to P

give rise to an exact sequence

0→ 2Z8−→ Z

?−→ Z/2Z→ 0.

This exact sequence is not split. Indeed, if @ were a right inverse to ?, its

image would be a submodule of Z isomorphic to Z/2Z. However, there

is no nonzero G element of Z such that 2G = 0 while there is such an

element in Z/2Z.

Page 340: (MOSTLY)COMMUTATIVE ALGEBRA

328 CHAPTER 7. FIRST STEPS IN HOMOLOGICAL ALGEBRA

7.2. The “snake lemma”. Finitely presented modules

Theorem (7.2.1) (Snake lemma). — Let A be a ring. Let us consider a diagram

of morphisms of A-modules

N M P 0

0 N′

M′

P′.

←→8

←→ 5

←→?

←→ 6

←→

←→ ℎ

←→ ←→8′ ←→?

Let us assume the two rows are exact sequences, and that the two squares are

commutative, meaning that 8′ ◦ 5 = 6 ◦ 8 and ?′ ◦ 6 = ℎ ◦ ?.a) There is a unique morphism of A-modules % : Ker(ℎ) → Coker( 5 ) such

that %(?(H)) = cl(G′) for H ∈ ?−1(Ker(ℎ)) and G′ ∈ N′such that 6(H) = 8′(G′).

b) These morphisms induce an exact sequence

Ker( 5 ) 8∗−→ Ker(6)?∗−→ Ker(ℎ) %−→ Coker( 5 )

(8′)∗−−→ Coker(6)(?′)∗−−−→ Coker(ℎ);

c) If 8 is injective, then 8∗ is injective;

d) If ?′ is surjective, then (?′)∗ is surjective.

Proof. — The proof needs a number of steps. In particular, assertion a)

is proved in step 7; assertions c) and d) are proved in steps 3 and 5.

1) We first show that 8(Ker( 5 )) ⊂ Ker(6) and ?(Ker(6)) ⊂ Ker(ℎ). Letindeed G ∈ N be such that 5 (G) = 0; then 6(8(G)) = (6◦ 8)(G) = (8′◦ 5 )(G) =8′( 5 (G)) = 0, so that 8(G) ∈ Ker(6). Similarly, for any H ∈ Ker(6), one hasℎ(?(H)) = (ℎ ◦ ?)(H) = (?′ ◦ 6)(H) = ?′(6(H)) = 0, hence ?(H) ∈ Ker(ℎ).Consequently, the morphisms 8 and ? induce by restriction morphisms

8∗ : Ker( 5 ) → Ker(6) and ?∗ : Ker(6) → Ker(ℎ).2) One has 8′(Im 5 ) ⊂ Im 6 and ?′(Im 6) ⊂ Im ℎ. Indeed, let G′ ∈ Im( 5 )

and let G ∈ N be such that G′ = 5 (G). Then 8′(G′) = (8′◦ 5 )(G) = (6◦ 8)(G) =6(8(G)), which shows that 8′(G′) belongs to Im(6). Similarly, let H′ ∈ Im(6),let H ∈ M be such H′ = 6(H). One has ?′(H′) = (?′ ◦ 6)(H) = (ℎ ◦ ?)(H) =ℎ(?(H)), hence ?′(H′) belongs to Im(ℎ).Consequently, the kernel of the composition

N′ 8′−→M

′→M′/Im 6 = Coker(6)

Page 341: (MOSTLY)COMMUTATIVE ALGEBRA

7.2. THE “SNAKE LEMMA”. FINITELY PRESENTED MODULES 329

contains Im( 5 ). Passing to the quotient, we obtain a morphism

(8′)∗ : Coker( 5 ) = N′/Im( 5 ) → Coker(6). In the same way, we deduce

from ?′ a morphism (?′)∗ : Coker(6) → Coker(ℎ).3) Assume that 8 is injective. Then the morphism 8∗ is injective: if

8∗(G) = 0, then 8(G) = 0 hence G = 0.

4) Since ?∗ is the restriction of ? to Ker(6), and since ? ◦ 8 = 0, we

have ?∗ ◦ 8∗ = 0, hence Im 8∗ ⊂ Ker ?∗. Conversely, let H ∈ Ker ?∗, that is,H ∈ Ker(6) and ?(H) = 0. Since the first row of the diagram is an exact

sequence, H ∈ Im(8) and there exists G ∈ N such that H = 8(G). Then

0 = 6(H) = 6(8(G)) = (6 ◦ 8)(G) = (8′ ◦ 5 )(G) = 8′( 5 (G)). Since 8′ is injective,5 (G) = 0 and G ∈ Ker( 5 ). This implies that H = 8(G) ∈ 8(Ker( 5 )) =8∗(Ker( 5 )).5) Assume that ?′ is surjective. Then the morphism ?′∗ is surjective:

let �′ ∈ Coker(ℎ), and let I′ ∈ P′be such that �′ = cl(I′). Since ?′ is

surjective, there exists H′ ∈ M′such that I′ = ?′(H′). By definition of ?′∗,

�′ = ?′∗(cl(H′)), hence �′ ∈ Im(?′∗).6) One has ?′∗ ◦ 8′∗ = 0. Indeed, the definition of 8′∗ implies that for any

G′ ∈ N′, 8′∗(cl(G′)) = cl(8′(G′)), hence

?′∗(8′∗(cl(G′)) = ?′∗(cl(8′(G′))) = cl(?′(8′(G′))) = 0.

Conversely, if ?′∗(cl(H′)) = 0, then cl(?′(H′)) = 0, so that ?′(H′) ∈ Im(ℎ).Let us thus write ?′(H′) = ℎ(I) for some I ∈ P. Since ? is surjective, there

exists H ∈ M such I = ?(H) and ?′(H′) = ℎ(?(I)) = ?′(6(I)). Consequently,H′ − 6(I) belongs to Ker(?′), hence is of the form 8′(G′) for some G′ ∈ N

′.

Finally,

cl(H′) = cl(6(I) + 8′(G′)) = cl(8′(G′)) = 8′∗(G′),which proves that Ker(?′∗) = Im(8′∗).7) Let us now define the morphism % : Ker(ℎ) → Coker( 5 ). Let

H ∈ ?−1(Ker(ℎ)); then ?(H) ∈ Ker(ℎ), hence ?′(6(H)) = ℎ(?(H)) = 0; one

thus has 6(H) ∈ Ker(?′) = Im(8′), since the bottom row of the diagram is

exact; since 8′ is injective, there exists a unique element G′ ∈ N′such that

6(H) = 8′(G′). Let then %1(H) ∈ Coker( 5 ) be the class of G′ modulo Im( 5 ).This defines a map %1 : ?−1(Ker(ℎ)) → Coker( 5 ). This map %1 is a

morphism of A-modules. This may be proved by a direct elementary

computation, but we simply observe that %1 is the composition of the

Page 342: (MOSTLY)COMMUTATIVE ALGEBRA

330 CHAPTER 7. FIRST STEPS IN HOMOLOGICAL ALGEBRA

morphism 6 |?−1(Ker(ℎ)) (which lands into Ker(?′)) with the inverse of the

isomorphism N′→ Ker(?′) = Im(8′) induced by 8′, with the projection

to Coker( 5 ).The surjectivemorphism ? : M→ P induces, by restriction, a surjective

morphism from ?−1(Ker(ℎ)) → Ker(ℎ) whose kernel is Ker(?). Let usshow that there exists a unique morphism % : Ker(ℎ) → Coker( 5 ) suchthat % ◦ ? = %1. By the factorization theorem, it suffices to prove that

Ker(?) ⊂ Ker(?1). Let H ∈ Ker(?) ⊂ ?−1(Ker(ℎ)) and G′ ∈ N′be such

that 8′(G′) = 6(H). Since the top row of the diagram is exact, there exists

G ∈ N such that H = 8(G); then 6(H) = 6(8(G)) = 8′( 5 (G)), so that G′ = 5 (G)and %1(H) = cl(G′) = 0. Consequently, Ker(%1) contains Ker(?), as was to

be shown.

Conversely, if %′ : Ker(ℎ) → Coker( 5 ) satisfies the conditions of as-

sertion a), one has %′ ◦ ? = %1 = % ◦ ?, hence % = %′ since ? induces a

surjective morphism from ?−1(Ker(ℎ)) to Ker(ℎ).8) Let us show that Im(?∗) = Ker(%). Let I ∈ Im(?∗). There exists

H ∈ Ker(6) such that ?(H) = I. In other words, 6(H) = 0; with the

notation of the preceding paragraph, we may take G′ = 0, hence %(I) = 0

and I ∈ Ker(%).Conversely, let I ∈ Ker(%). Keeping the same notation, we have

G′ ∈ Im( 5 ), so that there is G ∈ N such that G′ = 5 (G), hence 6(H) =8′(G′) = 6(8(G)) and H − 8(G) ∈ Ker(6). It follows that I = ?(H) =?(H − 8(G)) ∈ ?(Ker(6)) = Im(?∗), as required.9) Finally, let us show that Im(%) = Ker(8′∗). Let I ∈ N; with the

same notation, 8′(%(I)) = 8′(cl(G′)) = cl(8′(G′)) = cl(6(H)) = 0, hence

%(I) ∈ Ker(8′∗) and Im(%) ⊂ Ker(8′∗).In the other direction, let �′ ∈ Ker(8′∗). There is G′ ∈ N

′such that

�′ = cl(G′), so that 8′∗(�′) = cl(8′(G′)). This shows that 8′(G′) ∈ Im(6).Let H ∈ M be such that 8′(G′) = 6(H); the definition of % shows that

%(?(H)) = cl(G′) = �′ so that Ker(8′∗) ⊂ Im(%).This concludes the proof of the theorem. �

Corollary (7.2.2). — Let us retain the hypotheses of theorem 7.2.1.

a) If 5 and ℎ are injective, then 6 is injective. If 5 and ℎ are surjective, then

6 is surjective.

Page 343: (MOSTLY)COMMUTATIVE ALGEBRA

7.2. THE “SNAKE LEMMA”. FINITELY PRESENTED MODULES 331

b) If 5 is surjective and 6 is injective, then ℎ is injective.

c) If 6 is surjective and ℎ is injective, then 5 is surjective.

Proof. — a) Assume that 5 and ℎ are injective. The exact sequence

given by the snake lemma begins with 0

8∗−→ Ker(6)?∗−→ 0. Necessarily,

Ker(6) = 0. If 5 and ℎ are surjective, the exact sequence ends with

0

8′∗−→ Coker(6)?′∗−→ 0, so that Coker(6) = 0 and 6 is surjective.

b) If 5 is surjective and 6 is injective, one hasKer(6) = 0 andCoker( 5 ) =0. The middle of the exact sequence can thus be rewritten as 0

?∗−→Ker(ℎ) %−→ 0, so that ℎ is injective.

c) Finally, if 6 is surjective and ℎ is injective, we have Ker(ℎ) = 0,

Coker(6) = 0, hence an exact sequence 0

%−→ Coker( 5 )8′∗−→ 0, which

implies that Coker( 5 ) = 0 and 5 is surjective. �

Definition (7.2.3). — Let A be a ring. Let M be a right A-module. A

presentation of M consists in an exact sequence

F

#−→ E

!−→M→ 0,

where E and F are free A-modules.

One says that this presentation is finite if moreover F and G are finitely

generated.

If an A-module M has a finite presentation, one also says that M is

finitely presented.

Remark (7.2.4). — Let M be an A-module and let F

#−→ E

!−→ M →

0 be a presentation of M. By definition of an exact sequence, the

morphism ! is surjective. Let (48)8∈I be a basis of E. Then (!(48))8∈I is agenerating family in M. Let ( 59)9∈J be a basis of F. The family (#( 58))9∈Jgenerates Im(#). By the definition of an exact sequence, we see that this

family generates Ker(!).Reversing the argument, we can construct a presentation of an A-

module M as follows. First choose a generating family (<8)8∈I in M. Let

then ! : A(I) → M be the A-linear map defined by !((08)) =

∑8∈I <808.

Let (= 9)9∈J be a generating family of Ker(!) and let # : A(J)→ A

(I)be the

Page 344: (MOSTLY)COMMUTATIVE ALGEBRA

332 CHAPTER 7. FIRST STEPS IN HOMOLOGICAL ALGEBRA

A-linear map defined by #((0 9)) =∑= 90 9. Then A

(J) #−→ A

(I) !−→ M→ 0

is a presentation of M.

Lemma (7.2.5). — Let A be a ring and let M be a right A-module. Let

0→ P

@−→ N

?−→M→ 0 be an exact sequence. If M is finitely presented and N

is finitely generated, then P is finitely generated.

Proof. — Let F

#−→ E

!−→M→ 0 be a finite presentation ofM. Let (48)8∈I be

a finite generating family of E and let ( 59)9∈J be a finite generating family

of F. For every 8 ∈ I, choose an element D8 ∈ N such that ?(D8) = !(48);such an element exists because ? is surjective. Let D : E → N be the

unique morphism of A-modules such that D(48) = D8 for every 8 ∈ I. One

has ? ◦ D = !.Let 9 ∈ J. One has ?(D(#( 59)) = !(#( 59)) = 0. Consequently, D(#( 59)) ∈

Ker(?). By definition of an exact sequence, the morphism @ induces

an isomorphism from P to Im(@). Consequently, there exists a unique

element E 9 ∈ P such that D(#( 59)) = @(E 9). Let then E : A(J) → P be

the unique morphism such that E( 59) = E 9 for every 9 ∈ J. One has

D ◦ # = @ ◦ E.We have constructed a commutative diagram

F E M 0

0 P N M 0

←→#

←→ E

←→!

←→ D

←→

←→ idM

←→ ←→@ ←→? ←→

the two rows of which are exact sequences. By the snake lemma

(theorem 7.2.1), we deduce from this diagram an exact sequence

Ker(idM) → Coker(E) → Coker(D) → Coker(idM).

Since idM is an isomorphism, one has Ker(idM) = Coker(idM) = 0 so

that Coker(E) and Coker(D) are isomorphic. Since Coker(D) = N/Im(D)is a quotient of the finitely generated module N, it is finitely generated.

Consequently, Coker(E) = P/Im(E) is finitely generated.

On the other hand, Im(E) is a quotient of the finitely generated A-

module F, hence Im(E) is finitely generated.

Page 345: (MOSTLY)COMMUTATIVE ALGEBRA

7.2. THE “SNAKE LEMMA”. FINITELY PRESENTED MODULES 333

By proposition 3.5.5, the module P is finitely generated, as was to be

shown. �

7.2.6. — We now let A be a commutative ring and explore the behavior

of homomorphisms with respect to localization. Let S be a multiplicative

subset of A. Let M and N be A-modules. Every morphism 5 : M →N gives rise to a morphism S

−1 5 : S−1

M → S−1

N of S−1

A-modules,

characterized by (S−1 5 )(</B) = 5 (<)/B for < ∈ M and B ∈ S. The map

5 ↦→ S−1 5 from HomA(M,N) to Hom

S−1

A(S−1

M, S−1N) is A-linear, and

its target is an S−1

A-module; consequently, it extends to a morphism

�M,N : S−1

HomA(M,N) → HomS−1

A(S−1

M, S−1N) of S

−1A-modules.

Proposition (7.2.7). — Let A be a commutative ring, let S be a multiplicative

subset of A, let M and N be A-modules. If M is a finitely presented A-module,

then the morphism �M,N is an isomorphism.

Proof. — We first treat the case where M is a free finitely generated

A-module. Let (41, . . . , 4<) be a basis of M. Then every morphism

5 : M → N is determined by the image of the 48, and the map

HomA(M,N) → N<given by 5 ↦→ ( 5 (41), . . . , 5 (4<)) is an isomorphism

of A-modules. Moreover, S−1

M is a free S−1

A–module, with basis

(41/1, . . . , 4</1), and the map HomS−1

A(S−1

M, S−1N) → (S−1

N)< given

by 5 ↦→ ( 5 (41/1), . . . , 5 (4</1)) is an isomorphism of S−1

A-modules.

Given these isomorphism, themorphism �M,N identifies as themorphism

from S−1(N<) to S

−1N)< given by (=1, . . . , =<)/B ↦→ (=1/B, . . . , =</B).

Since this morphism is an isomorphism, this implies that �M,N is an

isomorphism if M is a free finitely generated A-module.

Let now F

D−→ E

?−→ M → 0 be a finite presentation of M. By left

exactness of the functor HomA(•,N), it induces an exact sequence

0→ HomA(M,N)?∗

−→ HomA(E,N)D∗−→ HomA(F,N)

of A-modules. By exactness of localization, it then induces an exact

sequence

0→ S−1

HomA(M,N)S−1?∗

−−−→ S−1

HomA(E,N)S−1D∗−−−−→ S

−1

HomA(F,N)of S

−1A-modules.

Page 346: (MOSTLY)COMMUTATIVE ALGEBRA

334 CHAPTER 7. FIRST STEPS IN HOMOLOGICAL ALGEBRA

ThegivenpresentationofM also induces afinite presentationS−1

F

S−1D−−−→

S−1

E

S−1?−−−→ S−→

−1

M→ 0 and, similarly, an exact sequence

0→ HomS−1

A(S−1

M, S−1

N)?∗

−→ HomS−1

A(S−1

E, S−1

N) D∗−→ Hom

S−1

A(S−1

F, S−1

N)

of S−1

A-modules.

Then the morphisms �M,N, �E,N and �F,N sit within a commutative

diagram

0 S−1

HomA(M,N) S−1

HomA(E,N) S−1

HomA(F,N)

0 HomS−1

A(S−1

M, S−1N) Hom

S−1

A(S−1

E, S−1N) Hom

S−1

A(S−1

F, S−1N)

←→ ← →S−1?∗

←→ �M,N

← →S−1D∗

←→ �E,N ←→ �F,N

←→ ←→(S−1?)∗ ←→(S

−1D)∗

the two rows of which are exact sequences. We also know that �E,N and

�F,N are isomorphisms, because E and F are free and finitely generated.

To deduce from this that �M,N is an isomorphism, we will apply

the snake lemma. However, the morphism S−1D∗ is not surjective a

priori, hence me replace the module S−1

HomA(F,N) by the image of

S−1D∗ and the morphism �F,N by its restriction �′

F,N. The snake lemma

(theorem 7.2.1) then furnishes a morphism % sitting in an exact sequence

0→ Ker(�M,N) → Ker(�E,N) → Ker(�′F,N)

%−→ Coker(�M,N) → Coker(�E,N) → Coker(�′F,N).

Themorphism �E,N is an isomorphism, henceKer(�E,N) = Coker(�E,N) =0; the morphism �′

F,Nis injective, hence Ker(�′

F,N) = 0. The preceding

exact sequence thus becomes

0→ Ker(�M,N) → 0→ 0

%−→ Coker(�M,N) → 0

so that Ker(�M,N) = Coker(�M,N) = 0. This proves that �M,N is an

isomorphism, as was to be shown. �

7.3. Projective modules

Definition (7.3.1). — One says that an A-module P is projective if every short

exact sequence

0→ N→M→ P→ 0

Page 347: (MOSTLY)COMMUTATIVE ALGEBRA

7.3. PROJECTIVE MODULES 335

is split.

Proposition (7.3.2). — Let A be a ring and let P be an A-module. The following

properties are equivalent:

(i) The module P is projective;

(ii) The module P is isomorphic to a direct summand of a free A-module;

(iii) For every A-modules M and N, any surjective morphism ? : M→ N

and any morphism 5 : P→ N, there exists a morphism 6 : P→M such that

5 = ? ◦ 6;(iv) For every A-module M, every surjective morphism 5 : M→ P has a

right inverse.

The third property of the theorem is often taken as the definition of a

projective module. It can be summed up as a diagram

M

P N

←� ?← →6←→5

where the solid arrows represent the given maps, and the dashed arrow

is the one whose existence is asserted by the property.

Proof. — (i)⇒(ii). Let us assume that P is projective. We need to

construct a free A-module L, submodules Q and Q′of L such that

L = Q ⊕ Q′and P ' Q. Let S be a generating family of P and let

L = A(S)

be the free A-module on S; let (4B)B∈S be the canonical basis

of L (see example 3.5.3). Let ? : L→ P be the unique morphism such

that ?(4B) = B for every B ∈ S; it is surjective (proposition 3.5.2). Let

N = Ker(?); we thus have a short exact sequence 0→ N→ L

?−→ P→ 0.

Since P is projective, this short exact sequence is split and N has a

direct summand Q in L, namely the image of any right inverse of ?

(lemma 7.1.6). In particular, P ' Q and L = Q ⊕ N. This shows that P is

(isomorphic) to a direct summand of free module.

(ii)⇒(iii). Let us assume that there exists an A-module Q such that

L = P ⊕ Q is a free A-module. Let S be a basis of L.

Page 348: (MOSTLY)COMMUTATIVE ALGEBRA

336 CHAPTER 7. FIRST STEPS IN HOMOLOGICAL ALGEBRA

Let M and N be A-modules, let ? : M → N and 5 : P → N be

morphisms of A-modules, ? being surjective. We need to show that

there exists a morphism 6 : P→M such that ? ◦ 6 = 5 .

Let us define a morphism ! : L → N by !(G + H) = 5 (G) for G ∈ P

and H ∈ Q. By construction, ! |P = 5 . Since ? is surjective, there exists

for every B ∈ S an element <B ∈ M such that ?(<B) = !(B). Then the

universal property of free A-modules implies that there exists a unique

morphism � : L→M such that �(B) = <B for every B ∈ S, in particular,

? ◦ �(B) = ?(<B) = !(B). Since S is a basis of L, ? ◦ � = !. The restriction6 = ! |P of ! to P satisfies ? ◦ 6 = ! |P = 5 .

(iii)⇒(iv). Let ? : M → P be a surjective morphism of A-modules.

Let us apply the hypothesis of (iii) to N = P and 5 = idP. There exists

6 : P→M such that ? ◦ 6 = idP; in other words, ? has a right inverse.

(iv)⇒(i). Let 0 → N

8−→ M

?−→ P → 0 be a short exact sequence of

A-modules. By assumption, ? has a right inverse. It then follows from

lemma 7.1.6 that this exact sequence is split. �

Corollary (7.3.3). — Every free A-module is projective.

Remark (7.3.4). — Let P be a projective A-module. Let (48)8∈I be a

generating family of elements of P and let 5 : A(I)→ P be the surjective

morphism given by 5 ((08)) =∑0848. Let 6 : P→ A

(I)be a right inverse

of 5 . For every 8, let !8 : P → A be the morphism such that !8(G) isthe 8th coordinate of 6(G), for every G ∈ P. Then (!8)8∈I is a family of

elements of P∨, for every G ∈ I, the family (!8(G)) has finite support and

one has G = 5 (6(G)) = 5 ((!8(G))8) =∑8∈I !8(G)48.

In this case, the family (!8) plays more or less the role of a “dual basis”

of the generating family (48).Conversely, let P be an A-module and let us assume that there exist a

family (48)8∈I in P and a family (!8)8∈I in P∨such that, for every G ∈ I, the

family (!8(G)) has finite support and G =∑8∈I !8(G)48. Let us prove that P

is projective. As above, we consider the morphism 5 : A(I)→ P such that

5 ((08)) =∑0848 and the map 6 : P→ A

(I)given by 6(G) = (!8(G))8. By

assumption, one has 5 ◦ 6(G) = G for every G ∈ P, so that 5 is surjective

and has a right inverse. This implies that 0→ Ker( 5 ) → A(I) 5−→ P→ 0

Page 349: (MOSTLY)COMMUTATIVE ALGEBRA

7.3. PROJECTIVE MODULES 337

is a split exact sequence and P is isomorphic to a direct summand of a

free A-module. This proves that P is projective.

Example (7.3.5). — Let A be a commutative ring, let S be a multiplicative

subset of A and let P be a projective A-module. Then S−1

P is a projective

S−1

A-module.

Let indeed L be a free A-module admitting submodules L′and L

′′such

that L = L′⊕L

′′and L

′ ' P. Then themodules S−1

L′and S

−1L′′identify to

submodules of the free S−1

A-module S−1

L such that S−1

L = S−1

L′⊕S

−1L′′.

In particular, S−1

L′is a projective S

−1A-module. Moreover, S

−1L′is

isomorphic to S−1

P, so that S−1

P is a projective S−1

A-module.

Remark (7.3.6). — Let P be a projective A-module and let S ⊂ P be a

generating set. The proof of implication (i)⇒(ii) in proposition 7.3.2

shows that P is (isomorphic to) a direct summand of the free A-module

A(S). In particular, if P is a finitely generated projective A-module, then

P is a direct summand of a free finitely generated A-module.

Example (7.3.7). — Let A be a principal ideal domain and let P be a

finitely generated projective A-module. Then, there exists an integer =

such that P is a direct summand of A=. In particular, P is isomorphic to

a submodule of A=. By proposition 5.4.1, P is free.

Theorem (7.3.8) (Kaplansky). — Let A be a local ring and let P be a projective

A-module. Then P is free.

Let A be a ring and let J be its Jacobson radical. We recall from

definition 2.1.8 that A is local if and only if A/J is a division ring; then J

is the unique maximal left ideal of A.

Remark (7.3.9). — Let us first prove this theorem under the additional

assumption that P be finitely generated. Let J be the maximal left ideal

of A. Let (48)1686= be a family of minimal cardinality in P such that the 48generate P/PJ; let us prove that it is a basis of P.

We first consider the morphism ! : A= → P defined by (01, . . . , 0=) ↦→∑

4808. Let P′ = Im(!); by assumption, one has P = P

′ + P′J. By

Nakayama’s lemma (corollary 6.1.2), one has P = P′and ! is surjective.

Let us now prove that ! is an isomorphism. Let Q = Ker(!). SinceP is projective, the morphism ! has a right inverse #, and the image

Page 350: (MOSTLY)COMMUTATIVE ALGEBRA

338 CHAPTER 7. FIRST STEPS IN HOMOLOGICAL ALGEBRA

of #, which is isomorphic to Q, is a direct summand of Q. We get an

isomorphism P ⊕ Q ' A=. Taking quotients modulo J, we also have an

isomorphism P/PJ ⊕ Q/QJ ' (A/J)=. Since = = dimA/J(P/PJ), we obtain

the equality Q = QJ. Since Q is a direct summand of A=, it is finitely

generated. By Nakayama’s lemma again (theorem 6.1.1), one has Q = 0.

Consquently, ! is an isomorphism and P is free.

The remaining of this section is devoted to the proof of Kaplansky’s

theorem.

Lemma (7.3.10). — Let A be a ring, let P be a projective A-module. There

exists a family (P8) of submodules of P, each of them generated by a countable

family, such that P =⊕

P8.

Proof. — Wemay assume that there are a free A-module M containing P

and a submodule Q of M such that M = P ⊕ Q. Let (48)8∈I be a basis

of M, so that we identify M with A(I). For any subset J of I, let MJ be the

submodule of M generated by the elements 4 9, for 9 ∈ J. Observe that

MJ has a direct summand in M, namely the submodule of M generated

by the elements 48, for 8 ∈ I J; in particular, a submodule of some MJ

which has a direct summand in MJ also has a direct summand in M.

By transfinite induction,1 we construct an increasing sequence (J ) ofsubsets of I satisfying the following properties:

(i) For any ordinal �, J�+1 J� is at most countable, and is empty if and

only if J� = I;

(ii) If is a limit ordinal, then J is the union of the J� for � < ;

(iii) For any , MJ = (P ∩MJ ) ⊕ (Q ∩MJ ).Let be an ordinal.

Is is a limit ordinal, then the prescription (ii) defines a subset J of I.

Let us show that condition (iii) is satisfied. Since P ∩Q = 0, it suffices to

show that MJ is the sum of P ∩ LJ and Q ∩MJ . So let < ∈ MJ . Only

finitely many coordinates of < in the basis (48) are nonzero, so that there

exists � < such that < ∈ MJ� ; consequently, < ∈ (P ∩ LJ�) + (Q ∩MJ�)and a fortiori, < ∈ (P ∩ LJ ) + (Q ∩MJ ).

1Faut-il zornifier cette démonstration?

Page 351: (MOSTLY)COMMUTATIVE ALGEBRA

7.3. PROJECTIVE MODULES 339

Let us now assume that is of the form � + 1, for some ordinal �. IfI = J�, we let J = J�. Otherwise, choose some element 9 ∈ I J� and

define a sequence (K=)=∈N of finite subsets of I as follows. One sets

K0 = { 9}. Assume K= has been defined; for every element : ∈ K=, write

4: as a sum ?:+ @: , where ?: ∈ P and @: ∈ Q. Then, let K=+1 be the union,

for all : ∈ K=, of the indices 8 ∈ I such that the 8th coordinate of ?: or @:is nonzero. For every integer =, K= is finite, so that the union K =

⋃K=

is either finite or countable. Then, set J = J�∪K; condition (i) is satisfied.

Let us prove that (iii) holds. Again, it suffices to show that any element

< ∈ MJ can be written as the sum of an element of P ∩MJ and of an

element of Q ∩MJ . It suffices to prove this property for every element

of the form 4: , for some : ∈ J . It holds by induction if : ∈ J�, hence we

may assume that : ∈ K. By definition, there is an integer = such that

: ∈ K=. Then 4: = ?: + @: and by construction of K=+1, ?: ∈ P ∩MK=+1

and @: ∈ Q∩MK=+1. In particular, ?: ∈ P∩MJ and @: ∈ Q∩MJ , as was

to be shown.

This concludes the construction of the family (J ) by transfinite induc-

tion.

Since not all ordinals can be embedded in a given set, this transfinite

construction is necessarily stationary. Let � be an ordinal such that

J�+1 = J�. By condition (i), one has J� = I.

We then define a sequence (S ) of submodules of P such that P∩MJ =⊕�6 S� for every ordinal 6 �. If is a limit ordinal, we set S = 0.

Otherwise, there exists an ordinal � such that = � + 1; then, P ∩MJ�

is a direct summand of MJ� , hence a direct summand of M. Restricting

to P ∩MJ a left inverse of the injection (P ∩MJ�) → M, we see that

P ∩MJ� has a direct summand S in P ∩MJ . Moreover, S is countably

generated. Then P = P ∩MJ� =⊕

6� S . �

Say that an A-module M has property (K) if for every element < ∈ M, there

exists a direct summand N of M containing < which is free.

Lemma (7.3.11). — Let A be a ring, let P be a countably generated A-module.

Assume that any direct summand M of P satisfies property (K). Then P is a free

A-module.

Page 352: (MOSTLY)COMMUTATIVE ALGEBRA

340 CHAPTER 7. FIRST STEPS IN HOMOLOGICAL ALGEBRA

Proof. — Let (?=)=>1 be a generating family of P. We construct by

induction families (P=)>1 and (Q=)=>0 of submodules of P such that P0 =

P, P=−1 = P=⊕Q= for every = > 1, and ?1, . . . , ?= ∈ Q1⊕· · ·⊕Q=, and such

that for every integer =, Q= is free. Assume P0, . . . , P=−1,Q1, . . . ,Q=−1

have been defined. Observe that P = P=−1 ⊕ (Q1 ⊕ · · · ⊕ Q=−1) and let

5 : P → P=−1 be the projection with kernel Q1 ⊕ · · · ⊕ Q=−1. Apply

property (K) to the element 5 (?=) of P=−1: there exists a decomposition

of P=−1 as a direct sum P= ⊕ Q=, where Q= is free and contains 5 (?=).Consequently, ?= ∈ Q1 ⊕ · · · ⊕ Q=. By induction on =, this concludes the

asserted construction.

Let us now show that P is the direct sum of the family (Q=)=>1. By

construction, this family is in direct sum in

∑=>1

Q=. Moreover, for

every < > 1, one has ?< ∈ Q< ⊂∑=>1

Q=. Since the family (?<)generates P, this proves that P =

∑=>1

Q= and concludes the proof of

the lemma. �

Lemma (7.3.12). — Let A be a local ring. Any projective A-module has

property (K).

Proof. — Let P be a projective A-module. Let Q be a A-module such

that M = P ⊕ Q is a free A-module. Let ? ∈ P. Let (48)8∈I be a basis of M

in which the coordinates (08) of ? have the least possible nonzero entries.

For simplicity, assume that these coordinates have indices 1, . . . , =. For

each 8 ∈ {1, . . . , =}, write 48 = ?8 + @8, where ?8 ∈ P and @8 ∈ Q. Then

? =

=∑8=1

4808 =

=∑8=1

?808 +=∑8=1

@808 ,

is a decomposition of ? as the sum of an element of P and an element

of Q, so that

? =

=∑8=1

4808 =

=∑8=1

?808 .

We may then decompose each vector ?8 along the basis (48): letting 1 98be the 9th coordinate, we write

?8 =

=∑9=1

4 91 98 + ?′8 ,

Page 353: (MOSTLY)COMMUTATIVE ALGEBRA

7.3. PROJECTIVE MODULES 341

where ?′8=

∑9∈J {1,...,=} 4 91 98 is the sum of terms corresponding to the

coordinates other than 1, . . . , =. Combining the last two equations, we

obtain

=∑8=1

4808 =

=∑8=1

=∑9=1

4 91 9808 +=∑8=1

∑9∈J {1,...,=}

4 91 9808 .

Equating the coefficients of 41, . . . , 4=, we get

0 9 =

=∑8=1

1 9808 , (1 6 9 6 =).

If 9 ∈ J {1, . . . , =}, we also get

=∑8=1

1 9808 = 0,

hence

? =

=∑9=1

4 90 9 =

=∑8=1

=∑9=1

4 91 9808 .

Let us assume that there exists 8 , 9 such that 1 6 8 , 9 6 = such that

1 − 188 (if 9 = 8) or 1 98 (if 9 ≠ 8) is invertible in A. Then we can then

rewrite 0 9 as a left linear combination

0 9 =

=∑8=1

8≠9

2808

of the 08, for 8 ≠ 9. Let then write

? =

=∑8=1

4808 =

=∑8=1

8≠9

(4808 + 4 92808) ==∑8=1

8≠9

(48 + 4 928)08 .

Let us then define a family (4′8) by 4′

8= 48 if 8 ∉ {1, . . . , =}, 4′9 = 4 9, and

4′8= 48 + 4 928 if 8 ∈ {1, . . . , =} and 8 ≠ 9. Observe that it is a basis of M.

However, the vector ? has = − 1 nonzero coordinates in this new basis,

which contradicts the assumption made at the beginning of the proof.

This shows that 1− 188 (for 1 6 8 6 =) and 18 9 (for 1 6 8 ≠ 9 6 =) belong

to the Jacobson radical J of the local ring A. In other words, the image of

the matrix B = (1 98) in the quotient ring A/J is the identity matrix. By

Page 354: (MOSTLY)COMMUTATIVE ALGEBRA

342 CHAPTER 7. FIRST STEPS IN HOMOLOGICAL ALGEBRA

corollary 6.1.3, the matrix B = (1 98) is invertible in M=(A). Consequently,the family ( 58) defined by 59 =

∑=8=14 91 98 (for 9 ∈ {1, . . . , =}) and 59 = 4 9

(otherwise) is a basis of M.

Since ?8 = 58 + ?′8 and ?′8 belongs to the submodule spanned by the 4 9,

for 9 ∉ {1, . . . , =}, we see that the family consisting of ?1, . . . , ?= and

the 4 9, for 9 ∈ J {1, . . . , =} is a basis of M.

By construction, the submodule F of P generated by ?1, . . . , ?=; con-

tains ?. It is also free and it admits a direct summand in M, because

it is generated by a subset of a basis of M. Consequently, the injection

F→M admits a left inverse, so that the injection F→ P has a left inverse

too; this implies that F has a direct summand in P (lemma 7.1.6). This

concludes the proof of the lemma. �

Proof of Kaplansky’s theorem. — Let A be a local ring and let P be a projec-

tive A-module. By lemma 7.3.10, there exists a family (P8) of countablegenerated submodules of P of which P is the direct sum. Since P8 is a

direct summand of a projective A-module, it is itself projective. The

ring A being local, P8 satisfies property (K), by lemma 7.3.12. Moreover,

each direct summand of P8 is projective, hence satisfies property (K), so

that P8 satisfies the the hypotheses of lemma 7.3.11. Consequently, P8

is a free A-module. It follows that P is a direct sum of free A-modules,

hence is free. �

Proposition (7.3.13). — Let A be a commutative ring and let M be a finitely

presented A-module. The following properties are equivalent:

(i) The A-module M is projective;

(ii) For every prime ideal P of A, the AP-module MP is free;

(iii) For every maximal ideal P of A, the AP-module MP is free.

Proof. — The implication (i)⇒(ii) follows from Kaplansky’s theorem

(theorem 7.3.8) while the implication (ii)⇒(iii) is obvious. Let us thus

assume that the AP-module MP is free, for every maximal ideal P of A.

Let D : N→ N′be a surjective morphism of A-modules and let us show

that the map D∗ : HomA(M,N) → HomA(M,N′) is surjective. It is a

morphism of A-modules; let Q be its cokernel. Let P be a maximal ideal

Page 355: (MOSTLY)COMMUTATIVE ALGEBRA

7.4. INJECTIVE MODULES 343

of A. By exactness of localization, the exact sequence

HomA(M,N) D∗−→ HomA(M,N′) → Q→ 0

gives rise to an exact sequence of AP-modules

HomA(M,N)P(D∗)P−−−→ HomA(M,N′)P→ QP→ 0.

On the other hand, the hypothesis that M is a finitely presented A-

module allows to identify HomA(M,N)P with HomAP(MP,NP), and

HomA(M,N′)P with HomAP(MP,N

′P) (proposition 7.2.7). The preceding

exact sequence then becomes

HomAP(MP,NP)

(DP)∗−−−→ HomAP(MP,N

′P) → QP→ 0.

Since MP is a projective AP-module, the morphism (DP)∗ is surjective,hence QP = 0.

This implies that Supp(Q) contains nomaximal ideal ofA. On the other

hand, if Supp(Q) contains a prime ideal P, then it contains all maximal

ideals that contain P (theorem 6.5.4, b) and Krull’s theorem 2.1.3), so that

Supp(Q) = ∅. By theorem 6.5.4, a), this shows that Q = 0, so that D∗ issurjective. Consequently, M is a projective A-module. �

7.4. Injective modules

Definition (7.4.1). — Let A be a ring. One says that an A-module M is

injective if every short exact sequence

0→M→ N→ P→ 0

is split.

The notion of an injective module is kind of dual of that of a projective

module. However, the category of modules is quite different from its

opposite category. As is pointed out by Matsumura in (Matsumura,

1986, p. 277), there does not exist a notion that would be the dual to that

of a free module. Consequently, proposition 7.3.2 has no analogue for

injective modules and the existence of injective modules is not obvious.

We start with a partial counterpart to proposition 7.3.2.

Page 356: (MOSTLY)COMMUTATIVE ALGEBRA

344 CHAPTER 7. FIRST STEPS IN HOMOLOGICAL ALGEBRA

Proposition (7.4.2). — Let A be a ring, let M be a right A-module. The

following conditions are equivalent.

(i) The module M is injective;

(ii) For every right A-modules N and P, every injective morphism 8 : N→ P

and any morphism 5 : N→M, there exists a morphism 5 : P→M such that

5 = 6 ◦ 8;(iii) For every right ideal I of A and any morphism 5 : I→M, there exists a

morphism 6 : A→M such that 5 = 6 |I;(iv) Every injective morphism from M to any A-module has a left inverse.

The second condition is often taken as the definition of an injective

module.

Proof. — The equivalence of (i) and (iv) follows from lemma 7.1.6.

(iv)⇒(ii). Let 8 : N→ P be an injectivemorphism and let 5 : N→M be

a morphism. We want to prove that there exists a morphism 6 : P→M

such that 5 = 6 ◦ 8.Let Q be the image of the morphism G ↦→ ( 5 (G),−8(G)) from N to M×P.

let ? : M × P→ (M × P)/Q be the canonical projection and let 9 : M→(M × P)/Q be the map given by 9(G) = ?(G, 0) for G ∈ M. Let us show

that 9 is injective; let G ∈ M be such that (G, 0) ∈ Q. Then there exists

H ∈ N such that 5 (H) = G and −8(H) = 0. Since 8 is injective, H = 0, hence

G = 0, as was to be shown.

By (iv), there exists a morphism � : (M × P)/Q→M such that � ◦ 9 =idM, that is, �(?(G, 0)) = G for every G ∈ M. Let 6 : P → M be the

morphism given by H ↦→ �(?(0, H)). For every G ∈ N, one has

6(8(G)) = �(?(0, 8(G))) = �(?(− 5 (G), 8(G))) + �(?( 5 (G), 0)) = 5 (G),so that 5 = 6 ◦ 8, as required.(ii)⇒(iii). It suffices to take for modules N = I and P = A, the injective

morphism 8 being the injection from I into A.

(iii)⇒(iv). Let 8 : M→ N be an injective morphism. We want to prove

that there exists a morphism 5 : N→M which is a left inverse of 8, that

is, 5 ◦ 8 = idM.

Let ℱ be the set of all pairs (N′, 5 ) where N′is a submodule of N

containing 8(M) and 5 : N′→M is a morphism of A-modules such that

5 ◦ 8 = idM.

Page 357: (MOSTLY)COMMUTATIVE ALGEBRA

7.4. INJECTIVE MODULES 345

Let us define an ordering ≺ onℱ by setting

(N′1, 51) ≺ (N′

2, 52) ⇔ N

′1⊂ N

′2

and 52 |N′1

= 51.

Let us show that the ordered set ℱ is inductive. Let indeed (N , 5 )be a totally ordered family of elements of ℱ. If it is empty, we take

for its upper bound the pair (N′ = 8(M), 5 ), where 5 : 8(M) →M is the

inverse of the injective map 8. Otherwise, since the nonempty family

(N ) of submodules of N is totally ordered, its union N′ =

⋃N is a

submodule of N. Wemay then define amorphism 5 : N′→M by setting

5 (H) = 5 (H) for any such that H ∈ N . Indeed, it does not depend on

the choice of .By Zorn’s lemma (theorem A.2.11), the ordered setℱ has a maximal

element (N′, 5 ). Let us prove by contradiction that N′ = N. Otherwise,

let H ∈ N N′and let I = {0 ∈ A ; H0 ∈ N

′}.Let ! : I → N be the morphism given by 0 ↦→ 5 (H0), for 0 ∈ I. By

assumption, there exists a morphism # : A→ N such that # |I = !. Letthen set N

′1= N

′ + HA and define a morphism 5 ′1

: N′1→M by setting

51(G + H0) = 5 (G) + #(0) for G ∈ N′and 0 ∈ A. It is well-defined; indeed,

if G + H0 = G′ + H0′, with G, G′ ∈ N′and 0, 0′ ∈ A, then G − G′ = H(0′ − 0),

so that 0′ − 0 ∈ I and

#(0′) − #(0) = !(0′ − 0) = 5 (H(0′ − 0)) = 5 (G − G′) = 5 (G) − 5 (G′),so that 5 (G′)+#(0′) = 5 (G)+#(0). It is easy to show that 51 is amorphism

of A-modules. Moreover, the construction of 51 shows that 51 |N′ = 5 . In

particular, (N′1, 51) is an element ofℱ, but this contradicts the hypothesis

that (N′, 5 ) is a maximal element ofℱ, because N′1) N

′.

It follows that any maximal element of ℱ is of the form (N, 5 ), where

5 : N→M is a left inverse to 8. This concludes the proof of (iv). �

Corollary (7.4.3). — Products of injective A-modules are injective.

Proof. — Let (MB)B∈S be a family of injective right A-modules and let

M =∏

B∈S MB . For every B ∈ S, let ?B be the canonical projection from M

to MB .

Let I be a right ideal of A, let 5 : I→M be amorphism. For every B ∈ S,

let 5B = ?B ◦ 5 . Since MB is injective, there exists a morphism 6B : A→MB

such that 6B |I = 5B . Let 6 = (6B) : A→M be the unique morphism such

Page 358: (MOSTLY)COMMUTATIVE ALGEBRA

346 CHAPTER 7. FIRST STEPS IN HOMOLOGICAL ALGEBRA

that ?B ◦ 6 = 6B for every B ∈ S. One has 6 |I = (6B |I) = ( 5B) = 5 . This

shows that M is an injective A-module. �

Corollary (7.4.4). — Let A be a principal ideal domain and let M be an A-

module. Then M is an injective module if and only if for any nonzero 0 ∈ A,

the morphism �0 : M→M, G ↦→ 0G, is surjective.

The condition that the morphisms �0 be bijective, for all 0 ≠ 0, is

rephrased as saying that M is a divisible A-module.

Proof. — Let us assume that M is an injective module; let 0 ∈ A {0}.Let G ∈ M. Let I be the ideal (0) in A. Since 0 ≠ 0 and A is a domain, the

map from A to I given by C ↦→ 0C is an isomorphism; consequently, there

exists a unique morphism 5 : I→M such that 5 (0C) = CG for every C ∈ A.

By property (iii) in proposition 7.4.2, there exists a morphism 6 : A→M

such that 6(0C) = 5 (0C) = CG for any C ∈ A. Then 06(1) = 6(0) = G. Thisimplies that �0 is surjective.Conversely, let us assume that �0 be surjective for every nonzero 0 ∈ A.

Let I be an ideal ofA, let 5 : I→M be amorphismofA-modules; we need

to show that there exists a morphism 6 : A→ M such that 6 |I = 5 . If

I = 0, we may take 6 = 0. Otherwise, since A is a principal ideal domain,

there exists a nonzero element 0 ∈ A such that I = (0). By hypothesis,

there exists G ∈ M such that 0G = 5 (0). Let 6 : A→M be the morphism

given by 6(C) = CG. For any C ∈ A, one has 6(0C) = 0CG = C 5 (0) = 5 (0C),so that 6 |I = 5 . By proposition 7.4.2, the A-module M is injective. �

Example (7.4.5). — The Z-module Q/Z is injective.

Recall that the existence of projective A-modules is no great mistery,

since free modules are projective. More generally, every A-module is the

quotient of a free A-module, a fortiori a projective A-module. The dual

property asserts that every A-module is a submodule of an injective

A-module and is more difficult. It is the object of the following theorem.

Theorem (7.4.6) (Baer). — Let A be a ring, let M be an A-module. There exists

an injective A-module N and an injective morphism 8 : M→ N.

Let A be a ring, let M be a right A-module (resp. a left A-module).

Then M∗ = HomZ(M,Q/Z) is an abelian group. We endow it with

Page 359: (MOSTLY)COMMUTATIVE ALGEBRA

7.4. INJECTIVE MODULES 347

the structure of a left A-module (resp. of a right A-module) given by

(0 · 5 )(G) = 5 (G0) (resp. ( 5 · 0)(G) = 5 (0G)) for every 0 ∈ A, 5 ∈ M∗and

G ∈ M.

Lemma (7.4.7). — Let A be a ring, let M be a right A-module. For any G ∈ M,

let 8(G) be the map from M∗to Q/Z given by 5 ↦→ 5 (G). One has 8(G) ∈ M

∗∗

and the map 8 : M→M∗∗so defined is an injective morphism of A-modules.

Proof. — It is clear that 8(G) is a morphism of abelian groups, hence

8(G) ∈ M∗∗. Moreover, for every G ∈ M and every 0 ∈ A, one has

8(G0)( 5 ) = 5 (G0) = (0 · 5 )(G) = 8(G)(0 · 5 ) = (8(G) · 0)( 5 ).

Finally,

8(G + H)( 5 ) = 5 (G + H) = 5 (G) + 5 (H) = 8(G)( 5 ) + 8(H)( 5 ),

for every G, H ∈ M, which shows that 8 : M→M∗∗is a morphism of right

A-modules.

Let us finally prove that 8 is injective. Let G ∈ M be a nonzero

element. We need to show that there exists a morphism of abelian

groups, 6 : M → Q/Z, such that 6(G) ≠ 0. The abelian group GZgenerated by G in M is either isomorphic to Z, or to Z/=Z, for some

integer = > 2. We define 5 : GZ → Q/Z by 5 (G<) = </2 (mod Z) inthe former case, and 5 (G<) = </= (mod Z) in the latter. Observe that

5 (G) ≠ 0.

Since Q/Z is an injective Z-module, there exists a morphism 6 : M→Q/Z of Z-modules such that 6 |GZ = 5 (proposition 7.4.2). The mor-

phism 6 belongs to M∗and satisfies 6(G) ≠ 0. �

Lemma (7.4.8). — Let A be a ring.

a) The right A-module (AB)∗ is injective.b) For every free left A-module M, the right A-module M

∗is injective.

Proof. — a) Let I be a right ideal of A and let 5 : I→ (AB)∗ be amorphism

of A-modules. Let ! : I → Q/Z be the map given by !(G) = 5 (G)(1),for G ∈ I. It is a morphism of abelian groups. Since Q/Z is an injective

Z-module, there exists a morphism � : A→ Q/Z of abelian groups such

that � |I = !.

Page 360: (MOSTLY)COMMUTATIVE ALGEBRA

348 CHAPTER 7. FIRST STEPS IN HOMOLOGICAL ALGEBRA

Let 6 : A→ (AB)∗ be the map defined by 6(G)(H) = �(GH), for G, H ∈ A.

It is additive. Let 0, G ∈ A; then 6(G0) is the element H ↦→ �(G0H)of (AB)∗, while 6(G) · 0 is the map H ↦→ 6(G)(0H) = �(G0H). Consequently,6(G0) = 6(G) · 0 and 6 is A-linear.

Let G ∈ I. For any H ∈ A, one has 6(G)(H) = �(GH) = !(GH) sinceGH ∈ I and � extends !. Consequently, 6(G)(H) = 5 (GH)(1). Since 5 isA-linear, 5 (GH) = 5 (G) · H, so that 5 (GH)(1) = 5 (G)(H). This shows that

6(G)(H) = 5 (G)(H), hence 6(G) = 5 (G). We thus have shown that 6 |I = 5 .

By proposition 7.4.2, the right A-module (AB)∗ is injective.b) Let (48)8∈I be a basis ofM, so thatM is identifiedwith (AB)(I). Then, the

morphism 5 ↦→ ( 5 (48)) is an isomorphism of right A-modules from M∗

to (A∗B)I. Consequently, M∗is a product of injective A-modules. It then

follows from corollary 7.4.3 that M∗is an injective A-module. �

Proof of theorem 7.4.6. — We are now able to prove Baer’s theorem that

every A-module is a submodule of an injective A-module.

Let M be a right A-module. Let F be a free left A-module together with

a surjective A-linear morphism ? : F→M∗. For example, one may take

for F the free-A-moduleA(M∗)

with basis indexed byM∗, and for ? themap

that sends the basis vector 4 5 indexed by 5 ∈ M∗to 5 , for every 5 ∈ M

∗.

Since ? is surjective and A-linear the map ?∗ from M∗∗to F

∗given by

! ↦→ ! ◦ ? is injective and A-linear. Let 8 be the morphism from M to M∗∗

defined in lemma 7.4.7; it is injective. Then, the map ?∗ ◦ 8 : M→ F∗is

A-linear and injective. By lemma 7.4.8, the right A-module F∗is injective.

This concludes the proof. �

7.5. Exactness conditions for functors

7.5.1. Definitions. — Let A and B be two rings. Let us recall that a

functor F from the categoryModA ofA-modules to the categoryModB of

B-modules associates to each A-module M a B-module F(M) and to each

morphism 5 : M→ N of A-modules a morphism F( 5 ) : F(M) → F(N)subject to the following constraints:

– For every A-module M, one has F(idM) = idF(M);

Page 361: (MOSTLY)COMMUTATIVE ALGEBRA

7.5. EXACTNESS CONDITIONS FOR FUNCTORS 349

– For all A-modules M,N, P and all morphisms 5 : M → N and

6 : N→ P, one has F(6 ◦ 5 ) = F(6) ◦ F( 5 ).Such a functor is said to be covariant by opposition to contravariant functors

which reverse the direction of maps.

Indeed, a contravariant functor F from the category of A-modules

to the category of B-modules associates to each A-module M a B-

module F(M) and to each morphism 5 : M→ N of A-modules a mor-

phism F( 5 ) : F(N) → F(N) subject to the analogous requirements:

– For every A-module M, one has F(idM) = idF(M);

– For all A-modules M,N, P and all morphisms 5 : M → N and

6 : N→ P, one has F(6 ◦ 5 ) = F( 5 ) ◦ F(6).

Example (7.5.2) (Forgetful functors). — LetA andBbe rings, let D : B→ A

be a morphism of rings. Then, every A-module M may be viewed as a B-

module D∗(M), with external multiplication given by the composition of

the morphism D and of the external multiplication of M. Let 5 : M→ N

be a morphism of A-modules, the map 5 can be viewed as a map

D∗( 5 ) : D∗(M) → D∗(N)which is obviously B-linear. So D∗(M) and D∗( 5 )are just M and 5 with different names to indicate that we no more

consider them as A-modules but only as B-modules through D. In

particular, D∗(6 ◦ 5 ) = D∗(6) ◦ D∗( 5 ) if 6 : N→ P is a second morphism

of A-modules.

We thus have defined a covariant functor from the category of A-

modules to the category of B-modules. In particular, if B = Z, we

obtain a functor from the category ModA of A-modules to the category

AbGr =ModZ of abelian groups. Since these functors forget the initial

A-module structure (at least, part of it), they are called forgetful functors.

7.5.3. — One says that a functor F (covariant or contravariant) from the

category ModA of A-modules to the category ModB of B-modules is

additive if the maps ! ↦→ F(!) from HomA(M,N) to HomB(F(M), F(N))(in the covariant case, and toHomB(F(N), F(M)) in the contravariant case)

are morphisms of abelian groups: F(0) = 0 and F( 5 + 6) = F( 5 ) + F(6)for every 5 , 6 ∈ HomA(M,N).

Remark (7.5.4). — Let $f\colon M\to N$ and $g\colon N\to P$ be two morphisms of A-modules such that $g\circ f=0$, so that the diagram $M\xrightarrow{f}N\xrightarrow{g}P$ is a complex. Then, for any covariant additive functor $F\colon\mathbf{Mod}_A\to\mathbf{Mod}_B$, the diagram $F(M)\xrightarrow{F(f)}F(N)\xrightarrow{F(g)}F(P)$ is a complex of B-modules. Indeed, one has $F(g)\circ F(f)=F(g\circ f)=F(0)=0$.

Of course, an analogous remark holds for contravariant functors.

Definition (7.5.5). — Let F be an additive (covariant) functor from the category of A-modules to the category of B-modules.

One says that F is left exact if for every exact sequence
\[0\to M\to N\to P,\]
the complex
\[0\to F(M)\to F(N)\to F(P)\]
is exact.

One says that F is right exact if for every exact sequence
\[M\to N\to P\to 0,\]
the complex
\[F(M)\to F(N)\to F(P)\to 0\]
is exact.

One says that F is exact if it is both left and right exact.

A left exact functor preserves injective morphisms; a right exact functor preserves surjective morphisms. Observe that the left-exactness of F means that the morphism $F(M)\to F(N)$ is an isomorphism from $F(M)$ to the kernel of the morphism $F(N)\to F(P)$. Similarly, the right-exactness of F means that the morphism $F(N)\to F(P)$ is surjective and its kernel is the image of the morphism $F(M)\to F(N)$. Saying that a functor F is exact means that for any module M and any submodule N of M, $F(N)$ is identified (via $F(i)$) with a submodule of $F(M)$, and $F(M)/F(N)$ is identified (via $F(p)$) with $F(M/N)$, where $i\colon N\to M$ and $p\colon M\to M/N$ are the canonical injection and surjection.

There are analogous definitions for contravariant additive functors. Such a functor F is right exact if for every exact sequence of the form $0\to M\to N\to P$, the complex $F(P)\to F(N)\to F(M)\to 0$ is exact. One says that F is left exact if for every exact sequence $M\to N\to P\to 0$, the complex $0\to F(P)\to F(N)\to F(M)$ is an exact sequence. One says that F is exact if it is both left and right exact.

Forgetful functors associated to a morphism of rings transform any

diagram of modules into the same diagram, where only part of the

linearity of the maps has been forgotten. In particular, they transform

any exact sequence into an exact sequence: these forgetful functors are

exact.
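By contrast, many natural functors are left exact without being exact. The following computation (added here for illustration; it is not part of the original text, but only uses facts established below) is the standard example: apply $F=\mathrm{Hom}_{\mathbf{Z}}(\mathbf{Z}/2\mathbf{Z},\bullet)$, which is left exact by proposition 7.5.8, to the exact sequence
\[0\to\mathbf{Z}\xrightarrow{\times2}\mathbf{Z}\to\mathbf{Z}/2\mathbf{Z}\to 0.\]
Since $\mathrm{Hom}_{\mathbf{Z}}(\mathbf{Z}/2\mathbf{Z},\mathbf{Z})=0$ while $\mathrm{Hom}_{\mathbf{Z}}(\mathbf{Z}/2\mathbf{Z},\mathbf{Z}/2\mathbf{Z})\simeq\mathbf{Z}/2\mathbf{Z}$, the image complex is
\[0\to 0\to 0\to\mathbf{Z}/2\mathbf{Z}\to 0,\]
which is not exact at the last term: the identity of $\mathbf{Z}/2\mathbf{Z}$ does not lift to a morphism $\mathbf{Z}/2\mathbf{Z}\to\mathbf{Z}$. This functor is therefore left exact but not exact.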

Remark (7.5.6). — a) If a functor F is exact, then for every short exact sequence $0\to M\xrightarrow{u}N\xrightarrow{v}P\to 0$, the complex $0\to F(M)\xrightarrow{F(u)}F(N)\xrightarrow{F(v)}F(P)\to 0$ is exact, since the two complexes obtained by removing an extremity are exact, by assumption.

Conversely, let us show that the exactness of such complexes is sufficient. If $u\colon M\to N$ is an injective morphism, considering the exact sequence $0\to M\xrightarrow{u}N\to N/u(M)\to 0$ and its image by the functor F shows that $F(u)$ is injective. Similarly, if $v\colon N\to P$ is a surjective morphism, we see, considering the exact sequence $0\to\mathrm{Ker}(v)\to N\xrightarrow{v}P\to 0$ and its image by F, that $F(v)$ is surjective.

Let now $0\to M\xrightarrow{u}N\xrightarrow{v}P$ be an exact sequence. Let $v'\colon N\to\mathrm{Im}(v)$ be the morphism deduced from $v$ and let $j\colon\mathrm{Im}(v)\to P$ be the inclusion, so that $v=j\circ v'$. Then $0\to M\xrightarrow{u}N\xrightarrow{v'}\mathrm{Im}(v)\to 0$ is a short exact sequence, so that its image by F, $0\to F(M)\xrightarrow{F(u)}F(N)\xrightarrow{F(v')}F(\mathrm{Im}(v))\to 0$, is an exact sequence too. In particular, $\mathrm{Ker}(F(v'))=\mathrm{Im}(F(u))$. On the other hand, $j$ is injective, so that $F(j)$ is injective as well, and the relation $F(v)=F(j)\circ F(v')$ implies that $\mathrm{Ker}(F(v'))=\mathrm{Ker}(F(v))$. This proves that the complex $0\to F(M)\xrightarrow{F(u)}F(N)\xrightarrow{F(v)}F(P)$ is exact. In particular, F is left exact.

We prove analogously that F is right exact. Let indeed $M\xrightarrow{u}N\xrightarrow{v}P\to 0$ be an exact sequence. Let $u'\colon M/\mathrm{Ker}(u)\to N$ be the morphism deduced from $u$, and let $p\colon M\to M/\mathrm{Ker}(u)$ be the canonical projection, so that $u=u'\circ p$. Then $0\to M/\mathrm{Ker}(u)\xrightarrow{u'}N\xrightarrow{v}P\to 0$ is an exact sequence, hence $0\to F(M/\mathrm{Ker}(u))\xrightarrow{F(u')}F(N)\xrightarrow{F(v)}F(P)\to 0$ is exact as well, so that $\mathrm{Ker}(F(v))=\mathrm{Im}(F(u'))$. Since $p$ is surjective, $F(p)$ is surjective; the relation $F(u)=F(u')\circ F(p)$ then implies that $\mathrm{Im}(F(u))=\mathrm{Im}(F(u'))$. Consequently, $F(M)\xrightarrow{F(u)}F(N)\xrightarrow{F(v)}F(P)\to 0$ is an exact sequence, as was to be shown.

b) Assume that F is an exact functor. Let us prove that for every exact sequence $M\xrightarrow{u}N\xrightarrow{v}P$, the complex $F(M)\xrightarrow{F(u)}F(N)\xrightarrow{F(v)}F(P)$ is exact.

Let $p\colon M\to M/\mathrm{Ker}(u)$ be the projection, let $u'\colon M/\mathrm{Ker}(u)\to N$ be the unique morphism such that $u'\circ p=u$. Let $j\colon\mathrm{Im}(v)\to P$ be the inclusion and let $v'\colon N\to\mathrm{Im}(v)$ be the morphism deduced from $v$, so that $v=j\circ v'$. The exactness of the initial diagram furnishes an exact sequence
\[0\to M/\mathrm{Ker}(u)\xrightarrow{u'}N\xrightarrow{v'}\mathrm{Im}(v)\to 0.\]
Since F is exact, the diagram
\[0\to F(M/\mathrm{Ker}(u))\xrightarrow{F(u')}F(N)\xrightarrow{F(v')}F(\mathrm{Im}(v))\to 0\]
is thus an exact sequence, which means that the image of $F(u')$ is the kernel of $F(v')$.

On the other hand, the relation $u'\circ p=u$ implies that $F(u')\circ F(p)=F(u)$; since F is right exact and $p$ is surjective, $F(p)$ is surjective as well, so that $\mathrm{Im}(F(u'))=\mathrm{Im}(F(u))$. Similarly, the relation $v=j\circ v'$ implies that $F(v)=F(j)\circ F(v')$; since F is left exact and $j$ is injective, $F(j)$ is injective and $\mathrm{Ker}(F(v))=\mathrm{Ker}(F(v'))$.

This proves that $\mathrm{Ker}(F(v))=\mathrm{Im}(F(u))$, as claimed.

7.5.7. The Hom functors. — Let A and B be rings, let Q be a fixed (A, B)-bimodule. For every left A-module M, let us consider the abelian group $\mathrm{Hom}_A(Q,M)$, endowed with its natural structure of a left B-module. (For $u\in\mathrm{Hom}_A(Q,M)$ and $b\in B$, let $bu\in\mathrm{Hom}_A(Q,M)$ be the map $x\mapsto u(xb)$. For $b,b'\in B$ and $x\in Q$, $b(b'u)$ is the morphism given by $x\mapsto(b'u)(xb)=u(xbb')$, so that $b(b'u)=(bb')u$.) If $f\colon M\to N$ is a morphism of left A-modules, let us consider the map $f_*\colon\mathrm{Hom}_A(Q,M)\to\mathrm{Hom}_A(Q,N)$ given by $\varphi\mapsto f\circ\varphi$. For $f\colon M\to N$ and $g\colon N\to P$, one has $(g\circ f)_*(\varphi)=g\circ f\circ\varphi=g\circ(f\circ\varphi)=g_*(f_*(\varphi))=(g_*\circ f_*)(\varphi)$ for every $\varphi\in\mathrm{Hom}_A(Q,M)$. Consequently, we have defined a covariant functor $\mathrm{Hom}_A(Q,\bullet)$ from the category of left A-modules to the category of left B-modules.

We can also consider the abelian group $\mathrm{Hom}_A(M,Q)$, endowed with its natural structure of a right B-module defined by $(ub)(x)=u(x)b$ for $u\in\mathrm{Hom}_A(M,Q)$, $b\in B$ and $x\in M$. For two left A-modules M and N, and any morphism $f\colon M\to N$, let us consider the map $f^*\colon\mathrm{Hom}_A(N,Q)\to\mathrm{Hom}_A(M,Q)$ defined by $\varphi\mapsto\varphi\circ f$. For $f\colon M\to N$ and $g\colon N\to P$, one has $(g\circ f)^*(\varphi)=\varphi\circ g\circ f=(\varphi\circ g)\circ f=f^*(g^*(\varphi))$, so that $(g\circ f)^*=f^*\circ g^*$. We thus have defined a contravariant functor $\mathrm{Hom}_A(\bullet,Q)$ from the category of left A-modules to the category of right B-modules.

Proposition (7.5.8). — Let A, B be rings and let Q be an (A, B)-bimodule.

a) The functor $\mathrm{Hom}_A(Q,\bullet)$ is left exact. It is exact if and only if the A-module Q is projective.

b) The functor $\mathrm{Hom}_A(\bullet,Q)$ is left exact. It is exact if and only if the A-module Q is injective.

Proof. — a) Let us consider an exact sequence of A-modules
\[0\to M\xrightarrow{f}N\xrightarrow{g}P\]
and let
\[0\to\mathrm{Hom}_A(Q,M)\xrightarrow{f_*}\mathrm{Hom}_A(Q,N)\xrightarrow{g_*}\mathrm{Hom}_A(Q,P)\]
be the complex obtained by applying the functor $\mathrm{Hom}_A(Q,\bullet)$. We need to show that this complex is exact.

Exactness at $\mathrm{Hom}_A(Q,M)$. Let $\varphi\in\mathrm{Hom}_A(Q,M)$ be such that $f_*(\varphi)=f\circ\varphi=0$. This means that for every $x\in Q$, $f(\varphi(x))=0$; since $f$ is injective, $\varphi(x)=0$. Consequently, $\varphi=0$ and $f_*$ is injective.

Exactness at $\mathrm{Hom}_A(Q,N)$. We want to show that $\mathrm{Ker}(g_*)=\mathrm{Im}(f_*)$; the inclusion $\mathrm{Im}(f_*)\subset\mathrm{Ker}(g_*)$ holds since we have a complex. So let $\varphi\in\mathrm{Hom}_A(Q,N)$ be such that $g_*(\varphi)=0$ and let us show that there exists $\psi\in\mathrm{Hom}_A(Q,M)$ such that $\varphi=f_*(\psi)$. Let $x\in Q$; then $g(\varphi(x))=0$, hence $\varphi(x)\in\mathrm{Ker}(g)$. Since $\mathrm{Ker}(g)=\mathrm{Im}(f)$ by assumption, $\mathrm{Im}(\varphi)\subset f(M)$. In other words, $\varphi$ is really a morphism from Q to $f(M)$. Since $f$ is injective, it induces an isomorphism from M to $f(M)$, and there exists a morphism $\psi\colon Q\to M$ such that $\varphi=f\circ\psi=f_*(\psi)$, as desired.

Exactness. Since left exactness has just been established, the functor $\mathrm{Hom}_A(Q,\bullet)$ is exact if and only if for every exact sequence
\[0\to M\xrightarrow{f}N\xrightarrow{g}P\to 0,\]
the complex
\[0\to\mathrm{Hom}_A(Q,M)\xrightarrow{f_*}\mathrm{Hom}_A(Q,N)\xrightarrow{g_*}\mathrm{Hom}_A(Q,P)\to 0\]
is exact. The exactness at $\mathrm{Hom}_A(Q,P)$ is the only missing point, that is, the surjectivity of $g_*$ assuming that of $g$. Precisely, $g_*$ is surjective if and only if, for every morphism $\varphi\colon Q\to P$, there exists a morphism $\psi\colon Q\to N$ such that $\varphi=g_*(\psi)=g\circ\psi$. Since $g$ is surjective, proposition 7.3.2 asserts precisely that $g_*$ is surjective if and only if Q is a projective A-module.

b) Let us now consider an exact sequence
\[M\xrightarrow{f}N\xrightarrow{g}P\to 0\]
and let
\[0\to\mathrm{Hom}_A(P,Q)\xrightarrow{g^*}\mathrm{Hom}_A(N,Q)\xrightarrow{f^*}\mathrm{Hom}_A(M,Q)\]
be the complex obtained by applying the contravariant functor $\mathrm{Hom}_A(\bullet,Q)$. We need to show that this complex is exact.

Exactness at $\mathrm{Hom}_A(P,Q)$. Let us show that $g^*$ is injective. Let $\varphi\in\mathrm{Hom}_A(P,Q)$ be such that $g^*(\varphi)=\varphi\circ g=0$. This means that $\mathrm{Ker}(\varphi)$ contains $g(N)$. Since $g$ is surjective, $g(N)=P$ and $\varphi=0$.

Exactness at $\mathrm{Hom}_A(N,Q)$. We have to show that $\mathrm{Ker}(f^*)=\mathrm{Im}(g^*)$. Let $\varphi\in\mathrm{Hom}_A(N,Q)$ be such that $\varphi\circ f=0$ and let us show that there exists $\psi\in\mathrm{Hom}_A(P,Q)$ such that $\varphi=g^*(\psi)=\psi\circ g$. Since $\varphi\circ f=0$, $\mathrm{Im}(f)$ is contained in $\mathrm{Ker}(\varphi)$. Passing to the quotient, we deduce a morphism $\psi_0\colon N/\mathrm{Im}(f)\to Q$ such that $\varphi(x)=\psi_0(\mathrm{cl}(x))$ for every $x\in N$, where $\mathrm{cl}\colon N\to N/\mathrm{Im}(f)$ is the canonical projection. Since $g$ is surjective, it induces an isomorphism $g'$ from $N/\mathrm{Ker}(g)$ to P. By assumption, $\mathrm{Ker}(g)=\mathrm{Im}(f)$; let then $\psi\colon P\to Q$ be the composition $\psi=\psi_0\circ(g')^{-1}$. By construction, for every $x\in N$, $g^*(\psi)(x)=\psi_0\circ(g')^{-1}\circ g(x)=\psi_0(\mathrm{cl}(x))=\varphi(x)$. In other words, $g^*(\psi)=\varphi$, as requested.

Exactness. Let us now consider an exact sequence
\[0\to M\xrightarrow{f}N\xrightarrow{g}P\to 0\]
and let
\[0\to\mathrm{Hom}_A(P,Q)\xrightarrow{g^*}\mathrm{Hom}_A(N,Q)\xrightarrow{f^*}\mathrm{Hom}_A(M,Q)\to 0\]
be the complex obtained by applying the contravariant functor $\mathrm{Hom}_A(\bullet,Q)$. We need to show that this complex is exact if and only if Q is an injective A-module.

Only the exactness at $\mathrm{Hom}_A(M,Q)$ has not been shown, that is, assuming the injectivity of $f$, the surjectivity of $f^*$. Surjectivity of $f^*$ means that for every $\varphi\in\mathrm{Hom}_A(M,Q)$, there exists $\psi\in\mathrm{Hom}_A(N,Q)$ such that $\psi\circ f=\varphi$. By proposition 7.4.2, this condition holds if and only if Q is an injective A-module. □
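The contravariant case can be illustrated by the same short exact sequence as before (this computation is added for illustration and is not part of the original text): applying $\mathrm{Hom}_{\mathbf{Z}}(\bullet,\mathbf{Z})$ to $0\to\mathbf{Z}\xrightarrow{\times2}\mathbf{Z}\to\mathbf{Z}/2\mathbf{Z}\to 0$ yields
\[0\to 0\to\mathbf{Z}\xrightarrow{\times2}\mathbf{Z}\to 0,\]
since $\mathrm{Hom}_{\mathbf{Z}}(\mathbf{Z}/2\mathbf{Z},\mathbf{Z})=0$ and precomposition with multiplication by 2 is again multiplication by 2 on $\mathrm{Hom}_{\mathbf{Z}}(\mathbf{Z},\mathbf{Z})\simeq\mathbf{Z}$. The last map is not surjective, in accordance with the fact that $\mathbf{Z}$ is not a divisible, hence not an injective, $\mathbf{Z}$-module.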

In fact, the Hom functor can be used to detect exactness of complexes.

Lemma (7.5.9). — Let A be a ring. A complex
\[M\xrightarrow{f}N\xrightarrow{g}P\to 0\]
of A-modules is exact if and only if the complex
\[0\to\mathrm{Hom}_A(P,Q)\xrightarrow{g^*}\mathrm{Hom}_A(N,Q)\xrightarrow{f^*}\mathrm{Hom}_A(M,Q)\]
is exact for every A-module Q.

Proof. — One direction of the assertion is given by the left-exactness of the functor $\mathrm{Hom}_A(\bullet,Q)$. Conversely, let us assume that
\[0\to\mathrm{Hom}_A(P,Q)\xrightarrow{g^*}\mathrm{Hom}_A(N,Q)\xrightarrow{f^*}\mathrm{Hom}_A(M,Q)\]
is exact for every A-module Q and let us show that the diagram
\[M\xrightarrow{f}N\xrightarrow{g}P\to 0\]
is exact.

We first prove that $g$ is surjective. Let $Q=\mathrm{Coker}(g)=P/g(N)$ and let $\varphi\colon P\to Q$ be the canonical surjection. By construction, $g^*(\varphi)=\varphi\circ g=0$. Since $g^*$ is injective, $\varphi=0$. This means that $Q=0$, hence $g$ is surjective.

By assumption, $g\circ f=0$, hence $\mathrm{Im}(f)\subset\mathrm{Ker}(g)$. Let $Q=N/\mathrm{Im}(f)$ and let $\varphi\colon N\to Q$ be the canonical surjection. By construction, $\varphi\circ f=0$, hence $\varphi$ belongs to the kernel of the map $f^*$ from $\mathrm{Hom}_A(N,Q)$ to $\mathrm{Hom}_A(M,Q)$. By assumption, there exists $\psi\in\mathrm{Hom}_A(P,Q)$ such that $\varphi=g^*(\psi)=\psi\circ g$. In particular, $\mathrm{Ker}(g)$ is contained in $\mathrm{Ker}(\varphi)=\mathrm{Im}(f)$, as was to be shown. □

Remark (7.5.10). — It is tempting to state a dual version, asserting that a complex
\[0\to M\xrightarrow{f}N\xrightarrow{g}P\]
of A-modules is exact if and only if the complex
\[0\to\mathrm{Hom}_A(Q,M)\xrightarrow{f_*}\mathrm{Hom}_A(Q,N)\xrightarrow{g_*}\mathrm{Hom}_A(Q,P)\]
is exact for every A-module Q. Of course, one direction of this statement follows from the left-exactness of the functor $\mathrm{Hom}_A(Q,\bullet)$. However, the other is trivial, since setting $Q=A$ recovers the initial complex, which is thus an exact sequence.

7.5.11. Localization functors. — Let A be a commutative ring and let S be a multiplicative subset of A. For any A-module M, we have defined in section 3.6 an $S^{-1}A$-module $S^{-1}M$. Moreover, if $f\colon M\to N$ is any morphism of A-modules, we have constructed (cf. proposition 3.6.4) a morphism $S^{-1}f\colon S^{-1}M\to S^{-1}N$ characterized by the property that $(S^{-1}f)(m/s)=f(m)/s$ for every $m\in M$ and every $s\in S$. Moreover, $S^{-1}(f\circ g)=S^{-1}f\circ S^{-1}g$.

In other words, localization induces a functor from the category of A-modules to that of $S^{-1}A$-modules. It is crucial that these localization functors are exact; this is in fact a reformulation of proposition 3.6.6.

Proposition (7.5.12) (Exactness of localization). — Let A be a ring and let S be a multiplicative subset of A. For any exact sequence $0\to M\xrightarrow{f}N\xrightarrow{g}P\to 0$ of A-modules, the complex $0\to S^{-1}M\xrightarrow{S^{-1}f}S^{-1}N\xrightarrow{S^{-1}g}S^{-1}P\to 0$ is an exact sequence. In other words, the localization functor from A-modules to $S^{-1}A$-modules is an exact functor.

Proof. — We can use the injective morphism $f$ to identify M with the submodule $f(M)$ of N, and the surjective morphism $g$ to identify P with the quotient $N/f(M)$ of N. By proposition 3.6.6, the morphism $S^{-1}f$ is injective; moreover, the map $S^{-1}g\colon S^{-1}N\to S^{-1}P$ identifies the quotient $S^{-1}N/S^{-1}M$ with $S^{-1}(N/M)=S^{-1}P$. In other words, the morphism $S^{-1}g$ is surjective and its kernel is the image of $S^{-1}f$. This means that the diagram
\[0\to S^{-1}M\xrightarrow{S^{-1}f}S^{-1}N\xrightarrow{S^{-1}g}S^{-1}P\to 0\]
is an exact sequence. □
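To make the statement concrete (this example is added for illustration and is not part of the original text), take $A=\mathbf{Z}$, $S=\{1,2,4,8,\dots\}$, and localize the exact sequence $0\to\mathbf{Z}\xrightarrow{\times2}\mathbf{Z}\to\mathbf{Z}/2\mathbf{Z}\to 0$. One obtains
\[0\to\mathbf{Z}[1/2]\xrightarrow{\times2}\mathbf{Z}[1/2]\to S^{-1}(\mathbf{Z}/2\mathbf{Z})\to 0.\]
In $S^{-1}(\mathbf{Z}/2\mathbf{Z})$ the class of 2 is both invertible and zero, so that this module vanishes; the localized sequence is indeed exact, multiplication by 2 having become an automorphism of $\mathbf{Z}[1/2]$.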

7.6. Adjoint functors

Let A and B be rings. Let F be a functor from the category of A-modules to the category of B-modules, and let G be a functor from the category of B-modules to the category of A-modules.

Definition (7.6.1). — An adjunction for the pair (F, G) is the datum, for any A-module M and any B-module N, of a bijection
\[\Phi_{M,N}\colon\mathrm{Hom}_B(F(M),N)\to\mathrm{Hom}_A(M,G(N))\]
such that for any morphism $f\colon M\to M'$ of A-modules and any morphism $g\colon N\to N'$ of B-modules, one has
\[f^*\circ G(g)_*\circ\Phi_{M',N}=\Phi_{M,N'}\circ F(f)^*\circ g_*.\]

Explicitly, this condition means that for every $\varphi\in\mathrm{Hom}_B(F(M'),N)$, the two elements
\[G(g)\circ\Phi_{M',N}(\varphi)\circ f\quad\text{and}\quad\Phi_{M,N'}(g\circ\varphi\circ F(f))\]
of $\mathrm{Hom}_A(M,G(N'))$ are equal. It is often visually represented by saying that the diagram
\[\begin{array}{ccc}
\mathrm{Hom}_B(F(M'),N) & \xrightarrow{\ \Phi_{M',N}\ } & \mathrm{Hom}_A(M',G(N))\\[2pt]
\Big\downarrow{\scriptstyle F(f)^*\circ g_*} & & \Big\downarrow{\scriptstyle f^*\circ G(g)_*}\\[2pt]
\mathrm{Hom}_B(F(M),N') & \xrightarrow{\ \Phi_{M,N'}\ } & \mathrm{Hom}_A(M,G(N'))
\end{array}\]
is commutative.

If there exists such an adjunction, then one says that F and G form a pair of adjoint functors, or, simply, are adjoint; one also says that G is a right adjoint of F and that F is a left adjoint of G.

Lemma (7.6.2). — Assume that F and G are adjoint and let $\Phi=(\Phi_{M,N})$ be an adjunction for the pair (F, G).

a) For any A-module M, let $\eta_M\colon M\to G(F(M))$ be the morphism $\Phi_{M,F(M)}(\mathrm{id}_{F(M)})$. For any B-module N and any morphism $g\colon F(M)\to N$, one has $\Phi_{M,N}(g)=G(g)\circ\eta_M$.

b) For any B-module N, let $\varepsilon_N\colon F(G(N))\to N$ be the unique morphism such that $\Phi_{G(N),N}(\varepsilon_N)=\mathrm{id}_{G(N)}$. Then, for any A-module M and any morphism $f\colon M\to G(N)$, one has $\Phi_{M,N}^{-1}(f)=\varepsilon_N\circ F(f)$.

Proof. — For a), one applies the definition with $M'=M$, $f=\mathrm{id}_M$ and $\varphi=\mathrm{id}_{F(M)}$. This gives
\[\Phi_{M,N}(g)=\Phi_{M,N}(g\circ\mathrm{id}_{F(M)}\circ F(\mathrm{id}_M))=G(g)\circ\Phi_{M,F(M)}(\mathrm{id}_{F(M)})\circ\mathrm{id}_M=G(g)\circ\eta_M.\]
For b), one considers $M'=G(N)$, $N'=N$ and $g=\mathrm{id}_N$. Then one gets
\[\Phi_{M,N}(\varepsilon_N\circ F(f))=\Phi_{M,N}(\mathrm{id}_N\circ\varepsilon_N\circ F(f))=G(\mathrm{id}_N)\circ\Phi_{G(N),N}(\varepsilon_N)\circ f=\mathrm{id}_{G(N)}\circ\mathrm{id}_{G(N)}\circ f=f.\ \square\]

Example (7.6.3). — Let A and B be rings, let $f\colon B\to A$ be a morphism of rings and let F be the corresponding forgetful functor from right A-modules to right B-modules. Let P be the ring A viewed as an (A, B)-bimodule via the laws $a\cdot x\cdot b=axf(b)$ for any $a\in A$, $x\in P$ and $b\in B$. Let then G be the functor that associates with any right B-module N the right A-module $\mathrm{Hom}_B(P,N)$.

Let M be a right A-module, let N be a right B-module; let us define a bijection $\Phi_{M,N}$ from $\mathrm{Hom}_B(F(M),N)$ to $\mathrm{Hom}_A(M,G(N))$. Let $\varphi\in\mathrm{Hom}_B(F(M),N)$; in other words, $\varphi$ is an additive map from M to N such that $\varphi(mf(b))=\varphi(m)b$ for any $m\in M$ and any $b\in B$. For any $m\in M$, let $\psi_m\colon P\to N$ be the map given by $a\mapsto\varphi(ma)$; it is additive; for any $a\in A$ and any $b\in B$, one has $\psi_m(a)b=\varphi(ma)b=\varphi(maf(b))=\psi_m(af(b))$. Consequently, $\psi_m$ belongs to $\mathrm{Hom}_B(P,N)=G(N)$. The map $\psi\colon m\mapsto\psi_m$ from M to G(N) is additive. It is even A-linear. Indeed, let $a\in A$ and let $m\in M$; the map $\psi_{ma}$ sends $a'\in A$ to $\varphi(maa')$; on the other hand, $\psi_m\cdot a$ is the map $a'\mapsto\psi_m(aa')=\varphi(maa')$. Consequently, $\psi_{ma}=\psi_m\cdot a$, as was to be shown. Let us then set $\Phi_{M,N}(\varphi)=\psi$.

The map $\Phi_{M,N}\colon\mathrm{Hom}_B(F(M),N)\to\mathrm{Hom}_A(M,G(N))$ is bijective. Indeed, if $\psi\colon M\to\mathrm{Hom}_B(P,N)$ is a morphism of A-modules, then the morphism $\varphi\colon F(M)\to N$ given by $\varphi(m)=\psi(m)(1)$ satisfies $\Phi_{M,N}(\varphi)=\psi$, since
\[\Phi_{M,N}(\varphi)(m)(a)=\varphi(ma)=\psi(ma)(1)=\psi(m)(a),\]
and it is the only one.

We now check that the maps $\Phi_{M,N}$ satisfy the condition given in the definition of adjoint functors. Let $f\colon M\to M'$ be a morphism of right A-modules, let $g\colon N\to N'$ be a morphism of right B-modules. Let $\varphi\in\mathrm{Hom}_B(F(M'),N)$ and let $m\in M$. Then
\[f^*\circ G(g)_*\circ\Phi_{M',N}(\varphi)(m)\quad\text{and}\quad\Phi_{M,N'}\circ F(f)^*\circ g_*(\varphi)(m)\]
are both equal to the map $a\mapsto g(\varphi(f(m)a))$. Consequently, F and G form a pair of adjoint functors.

Let us also make explicit the maps $\eta_M$ and $\varepsilon_N$. Let M be an A-module, so that G(F(M)) is the right A-module $\mathrm{Hom}_B(A,M)$. The morphism $\eta_M\colon M\to G(F(M))$ is defined as $\Phi_{M,F(M)}(\mathrm{id}_{F(M)})$. Consequently, for any $m\in M$, $\eta_M(m)$ is the element $a\mapsto ma$ of $\mathrm{Hom}_B(A,M)$. In particular, the morphism $\eta_M$ is injective.

Let N be a B-module; F(G(N)) is the right B-module $\mathrm{Hom}_B(A,N)$. Moreover, $\varepsilon_N\colon F(G(N))\to N$ maps an element $u\in\mathrm{Hom}_B(A,N)$ to $u(1)\in N$. Indeed, let us write $\varepsilon'_N$ for this morphism and let us compute $\Phi_{G(N),N}(\varepsilon'_N)$. With the previous notation, we set $M=G(N)$, $\varphi=\varepsilon'_N$ and $\psi=\Phi_{G(N),N}(\varepsilon'_N)$. For $u\in G(N)$, $\psi_u\colon P\to N$ is the map $a\mapsto\varepsilon'_N(u\cdot a)$. By definition, $\varepsilon'_N(u\cdot a)=(u\cdot a)(1)=u(a)$, so that $\psi_u=u$. Consequently, $\psi=\mathrm{id}_{G(N)}$ and $\varepsilon'_N=\varepsilon_N$, as was to be shown. We observe in particular that the morphism $\varepsilon_N$ is surjective.

In the preceding example, the functor G, being of the form $\mathrm{Hom}_B(P,\bullet)$, is left exact. The next proposition shows that this property is shared by all functors which are right adjoints.

Proposition (7.6.4). — Assume that F and G are adjoint. Then F is right exact and G is left exact.

Proof. — a) Let $M\xrightarrow{f}N\xrightarrow{g}P\to 0$ be an exact sequence of A-modules and let us prove that the complex
\[F(M)\xrightarrow{F(f)}F(N)\xrightarrow{F(g)}F(P)\to 0\]
is exact. By lemma 7.5.9, it suffices to show that for every B-module Q, the complex
\[0\to\mathrm{Hom}_B(F(P),Q)\xrightarrow{F(g)^*}\mathrm{Hom}_B(F(N),Q)\xrightarrow{F(f)^*}\mathrm{Hom}_B(F(M),Q)\]
is exact. Applying the adjunction property, this complex identifies with the complex
\[0\to\mathrm{Hom}_A(P,G(Q))\xrightarrow{g^*}\mathrm{Hom}_A(N,G(Q))\xrightarrow{f^*}\mathrm{Hom}_A(M,G(Q)),\]
which is exact, because the functor $\mathrm{Hom}_A(\bullet,G(Q))$ is left exact.

b) Let $0\to M\xrightarrow{f}N\xrightarrow{g}P$ be an exact sequence of B-modules and let us prove that the complex
\[0\to G(M)\xrightarrow{G(f)}G(N)\xrightarrow{G(g)}G(P)\]
of A-modules is exact. Let Q be an A-module and let us apply the functor $\mathrm{Hom}_A(Q,\bullet)$ to the preceding complex: we get
\[0\to\mathrm{Hom}_A(Q,G(M))\xrightarrow{G(f)_*}\mathrm{Hom}_A(Q,G(N))\xrightarrow{G(g)_*}\mathrm{Hom}_A(Q,G(P)).\]
Using the adjunction property of the functors F and G, we can rewrite this complex as
\[0\to\mathrm{Hom}_B(F(Q),M)\xrightarrow{f_*}\mathrm{Hom}_B(F(Q),N)\xrightarrow{g_*}\mathrm{Hom}_B(F(Q),P),\]
which is an exact sequence since the functor $\mathrm{Hom}_B(F(Q),\bullet)$ is left exact. □

Proposition (7.6.5). — Let A and B be rings. Let F be a functor from the category of A-modules to the category of B-modules, let G be a functor from the category of B-modules to the category of A-modules. Assume that F and G form an adjoint pair.

a) If F is exact, then G(N) is an injective A-module for any injective B-module N.

b) If G is exact, then F(M) is a projective B-module for any projective A-module M.

Proof. — Let $(\Phi_{M,N})$ be an adjunction for the pair (F, G). For any A-module M, let $\eta_M=\Phi_{M,F(M)}(\mathrm{id}_{F(M)})$. For any B-module N, let $\varepsilon_N=\Phi_{G(N),N}^{-1}(\mathrm{id}_{G(N)})$.

a) Let M be an A-module and let $f\colon G(N)\to M$ be an injective morphism; let us show that $f$ has a left inverse. Let $P=\mathrm{Coker}(f)$, so that we have an exact sequence
\[0\to G(N)\xrightarrow{f}M\to P\to 0.\]
Applying the exact functor F, we obtain an exact sequence
\[0\to F(G(N))\xrightarrow{F(f)}F(M)\to F(P)\to 0.\]
In particular, $F(f)$ is injective. Since N is an injective B-module and $F(f)$ is injective, proposition 7.4.2 implies that there exists a morphism $v\colon F(M)\to N$ such that $v\circ F(f)=\varepsilon_N$ (apply assertion (ii) with $i=F(f)$ and $f=\varepsilon_N$). Set $g=\Phi_{M,N}(v)$. By the definition of adjoint functors, one has
\[g\circ f=\Phi_{M,N}(v)\circ f=\Phi_{G(N),N}(v\circ F(f))=\Phi_{G(N),N}(\varepsilon_N)=\mathrm{id}_{G(N)}.\]
This shows that $g$ is a left inverse of $f$ and concludes the proof that G(N) is an injective A-module.

b) This is analogous. Let N be a B-module and let $f\colon N\to F(M)$ be a surjective morphism of B-modules. Let $P=\mathrm{Ker}(f)$, so that we have an exact sequence of B-modules
\[0\to P\to N\xrightarrow{f}F(M)\to 0.\]
Applying the exact functor G, we obtain an exact sequence
\[0\to G(P)\to G(N)\xrightarrow{G(f)}G(F(M))\to 0\]
of A-modules; in particular, the morphism $G(f)$ is surjective. Since M is a projective A-module, there exists a morphism $u\colon M\to G(N)$ such that $G(f)\circ u=\eta_M$. Let $g=\Phi_{M,N}^{-1}(u)$. By definition of an adjoint pair, one has
\[\Phi_{M,F(M)}(f\circ g)=G(f)\circ\Phi_{M,N}(g)=G(f)\circ u=\eta_M.\]
Consequently, $f\circ g=\mathrm{id}_{F(M)}$. In particular, $f$ has a right inverse. This shows that F(M) is a projective B-module. □

Remark (7.6.6). — As an application of the fact that right adjoints of exact functors preserve injectives, let us give another (easier) proof that every module can be embedded in an injective module.

We first treat the case of the ring $\mathbf{Z}$. So let M be a $\mathbf{Z}$-module. Write M as the quotient of a free $\mathbf{Z}$-module: let $(m_i)_{i\in I}$ be a generating family in M, let $f\colon\mathbf{Z}^{(I)}\to M$ be the unique morphism given by $(a_i)_i\mapsto\sum m_ia_i$, and let $N=\mathrm{Ker}(f)$, so that $M\simeq\mathbf{Z}^{(I)}/N$. View $\mathbf{Z}^{(I)}$ as a submodule of $\mathbf{Q}^{(I)}$ and define $M'=\mathbf{Q}^{(I)}/N$. The injection $\mathbf{Z}^{(I)}\to\mathbf{Q}^{(I)}$ induces an injection from M to $M'$. Since $\mathbf{Q}^{(I)}$ is divisible, so is $M'$. By corollary 7.4.4, $M'$ is an injective $\mathbf{Z}$-module.

Let now A be a ring, let F be the forgetful functor from right A-modules to $\mathbf{Z}$-modules. By example 7.6.3, the functor $G=\mathrm{Hom}_{\mathbf{Z}}(A,\bullet)$ is a right adjoint to F. Moreover, F is exact and G is left exact (proposition 7.5.8), so that G preserves injective modules. Let M be a right A-module. Let $f\colon F(M)\to M'$ be an injection from the $\mathbf{Z}$-module F(M) into an injective $\mathbf{Z}$-module $M'$. Then $G(M')$ is an injective A-module and the morphism $G(f)$ is injective.

Moreover, we have seen that in this case the morphism $\eta_M\colon M\to G(F(M))$ is injective. The composition $G(f)\circ\eta_M$ is thus an injective morphism from M to an injective A-module.

7.7. Differential modules. Homology and cohomology

Definition (7.7.1). — Let A be a ring. A differential A-module is a pair (M, d) where M is an A-module and $d$ is an endomorphism of M such that $d^2=0$, called the differential of M.

Let $(M,d_M)$ and $(N,d_N)$ be differential A-modules. One says that a morphism $f\colon M\to N$ of A-modules is a morphism of differential modules if $d_N\circ f=f\circ d_M$.

Definition (7.7.2). — To any differential A-module (M, d), one associates the following A-modules:

– the module $Z(M)=\mathrm{Ker}\,d$ of cycles;

– the module $B(M)=\mathrm{Im}\,d$ of boundaries;

– the module $H(M)=Z(M)/B(M)=\mathrm{Ker}(d)/\mathrm{Im}(d)$ of homologies.

Since $d^2=0$, observe that $\mathrm{Im}(d)\subset\mathrm{Ker}(d)$, so that the definition of the module of homologies makes sense.
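For a first example (added here for illustration, not part of the original text), take $A=\mathbf{Z}$, $M=\mathbf{Z}/4\mathbf{Z}$ and let $d$ be multiplication by 2: then $d^2$ is multiplication by 4, hence zero, and
\[Z(M)=\mathrm{Ker}(d)=2\mathbf{Z}/4\mathbf{Z}=\mathrm{Im}(d)=B(M),\]
so that $H(M)=0$. If instead $M=\mathbf{Z}/2\mathbf{Z}$ with $d=0$, then $Z(M)=M$, $B(M)=0$ and $H(M)=M$: the homology module measures how far the sequence $M\xrightarrow{d}M\xrightarrow{d}M$ is from being exact.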

Lemma (7.7.3). — Let $f\colon(M,d_M)\to(N,d_N)$ be a morphism of differential modules. One has $f(Z(M))\subset Z(N)$ and $f(B(M))\subset B(N)$. Consequently, $f$ induces a morphism $H(f)\colon H(M)\to H(N)$ of A-modules.

Proof. — By definition, $f\circ d_M=d_N\circ f$. Consequently, if $x\in M$ is such that $d_M(x)=0$, one has $d_N(f(x))=f(d_M(x))=0$, so that $f(x)\in Z(N)$. Similarly, if $x\in M$, one has $f(d_M(x))=d_N(f(x))$, so that $f(d_M(x))\in B(N)$ and $f(B(M))\subset B(N)$. The definition of $H(f)$ follows by passing to the quotient. □

Remark (7.7.4). — Morphisms of differential A-modules can be composed, and the identity morphism is a morphism of differential A-modules, so that differential A-modules form a category. We have associated with every differential A-module $(M,d_M)$ its homology module H(M), which is an A-module, and with every morphism $f\colon(M,d_M)\to(N,d_N)$ of differential A-modules a morphism $H(f)\colon H(M)\to H(N)$ of A-modules.

Observe that if $f=\mathrm{id}_M$, then $H(f)=\mathrm{id}_{H(M)}$. Moreover, if $f\colon(M,d_M)\to(N,d_N)$ and $g\colon(N,d_N)\to(P,d_P)$ are morphisms of differential modules, then $H(g\circ f)=H(g)\circ H(f)$, since both of these morphisms map the class of $x\in Z(M)$ modulo B(M) to the class of $g(f(x))\in Z(P)$ modulo B(P). Moreover, if $f,g\colon(M,d_M)\to(N,d_N)$ are morphisms of differential modules, one has $H(f+g)=H(f)+H(g)$.

In other words, H is an additive functor from the category of differential A-modules to the category of A-modules.

Many important differential A-modules are graded differential A-modules.

Definition (7.7.5). — A graded differential A-module is a differential module (M, d) such that

– the module M is the direct sum of a family $(M_n)_{n\in\mathbf{Z}}$ (one says that M is $\mathbf{Z}$-graded, or simply, graded);

– there exists an integer $r$ such that $d(M_n)\subset M_{n+r}$ for every $n\in\mathbf{Z}$ (one says that $d$ has degree $r$).

In this case, one defines $Z_n(M)=Z(M)\cap M_n$, $B_n(M)=B(M)\cap M_n$ and $H_n(M)=Z_n(M)/B_n(M)$.

Lemma (7.7.6). — The modules of cycles, boundaries and homologies of a graded differential A-module are graded. Explicitly, if (M, d) is a graded differential A-module, one has equalities $Z(M)=\bigoplus_n Z_n(M)$ and $B(M)=\bigoplus_n B_n(M)$; they induce an isomorphism $H(M)\simeq\bigoplus_{n\in\mathbf{Z}}H_n(M)$.

Proof. — Observe that $Z_n(M)$ is a submodule of $M_n$; since the modules $M_n$ are in direct sum, so are the $Z_n(M)$. Moreover, $Z_n(M)\subset Z(M)$ for every $n$. On the other hand, let $x\in Z(M)$; one can write $x$ as a sum $x=\sum_{n\in\mathbf{Z}}x_n$ of an almost-null family, where $x_n\in M_n$ for every $n\in\mathbf{Z}$. Then $0=d(x)=\sum_{n\in\mathbf{Z}}d(x_n)$. For every $n$, $d(x_n)\in M_{n+r}$; since the modules $M_n$ are in direct sum, $d(x_n)=0$ for every $n$. Consequently, $x_n\in Z_n(M)$ for every $n$ and $Z(M)=\bigoplus_n Z_n(M)$.

The modules $B_n(M)$ are in direct sum, and are submodules of B(M). Conversely, let $x\in B(M)$; let $y\in M$ be such that $x=d(y)$. One may write $y=\sum_n y_n$, where $y_n\in M_n$ for every $n$. It follows that $x=\sum_n d(y_n)$. For every $n$, $d(y_n)\in B_{n+r}(M)$, hence $x\in\bigoplus_n B_n(M)$.

Finally, one has isomorphisms
\[H(M)=Z(M)/B(M)=\Bigl(\bigoplus_n Z_n(M)\Bigr)\Big/\Bigl(\bigoplus_n B_n(M)\Bigr)=\bigoplus_n\bigl(Z_n(M)/B_n(M)\bigr)=\bigoplus_n H_n(M).\]
This proves the lemma. □

Example (7.7.7). — Let us consider a complex
\[\cdots\xrightarrow{f_{n-1}}M_{n-1}\xrightarrow{f_n}M_n\xrightarrow{f_{n+1}}M_{n+1}\to\cdots\]
of A-modules. Let us define $M=\bigoplus_{n\in\mathbf{Z}}M_n$ and let $d\in\mathrm{End}(M)$ be the unique endomorphism of M such that $d(x)=f_{n+1}(x)$ for every $x\in M_n$ and every $n\in\mathbf{Z}$.

Then (M, d) is a graded differential A-module whose differential has degree 1. Moreover, $H_n(M)=\mathrm{Ker}(f_{n+1})/\mathrm{Im}(f_n)$. In other words, the vanishing of $H_n(M)$ witnesses the exactness of the complex at the module $M_n$. These modules are traditionally denoted $\mathrm{H}^n(M)$ and called the cohomology modules of the complex.
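Over a field, the dimensions of these cohomology modules can be computed mechanically by rank–nullity. The following minimal numpy sketch (added for illustration; the function name and the encoding of the complex as a dictionary of matrices are my own conventions, not the book's) computes $\dim\mathrm{H}^n$ for a bounded complex of finite-dimensional $\mathbf{Q}$-vector spaces.

```python
import numpy as np

def cohomology_dims(maps, dims):
    """Dimensions of H^n = Ker(f_{n+1}) / Im(f_n) for a complex of
    finite-dimensional vector spaces ... -> V_{n-1} -> V_n -> V_{n+1} -> ...
    `dims[n]` is dim V_n; `maps[n]` is the matrix of the map leaving V_n
    (omit it when the target is 0).  Uses dim Ker = dim V_n - rank."""
    h = {}
    for n in dims:
        fout = maps.get(n)        # map leaving V_n
        fin = maps.get(n - 1)     # map arriving in V_n
        rank_out = np.linalg.matrix_rank(fout) if fout is not None else 0
        rank_in = np.linalg.matrix_rank(fin) if fin is not None else 0
        h[n] = int(dims[n] - rank_out - rank_in)
    return h

# The complex 0 -> Q -> Q^2 -> Q -> 0, x |-> (x, x), (u, v) |-> u - v, is exact:
dims = {0: 1, 1: 2, 2: 1}
maps = {0: np.array([[1.0], [1.0]]),     # V_0 -> V_1
        1: np.array([[1.0, -1.0]])}      # V_1 -> V_2
print(cohomology_dims(maps, dims))        # {0: 0, 1: 0, 2: 0}
```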

Example (7.7.8) (De Rham complex of a manifold). — Let U be an open subset of $\mathbf{R}^2$. For $p\in\{0,1,2\}$, let $\Omega^p(U)$ be the $\mathbf{R}$-vector space of differential forms of degree $p$ on U. These spaces can be described explicitly as follows. For $p=0$, $\Omega^0(U)=\mathscr{C}^\infty(U)$ is the real vector space of $\mathscr{C}^\infty$-functions on U. For $p=1$, $\Omega^1(U)$ is a free $\mathscr{C}^\infty(U)$-module of rank 2, with basis $(dx,dy)$. In other words, a differential form $\omega$ of degree 1 on U can be uniquely written as
\[\omega=r(x,y)\,dx+s(x,y)\,dy.\]

Finally, $\Omega^2(U)$ is a free $\mathscr{C}^\infty(U)$-module of rank 1, with basis denoted $dx\wedge dy$. The "exterior differential" consists of the maps
\[d\colon\Omega^0(U)\to\Omega^1(U),\qquad f\mapsto\frac{\partial f}{\partial x}\,dx+\frac{\partial f}{\partial y}\,dy,\]
\[d\colon\Omega^1(U)\to\Omega^2(U),\qquad r(x,y)\,dx+s(x,y)\,dy\mapsto\Bigl(\frac{\partial s(x,y)}{\partial x}-\frac{\partial r(x,y)}{\partial y}\Bigr)\,dx\wedge dy,\]
and $d$ vanishes identically on $\Omega^2(U)$.

Observe that $d\circ d=0$. The only necessary computation is that of $d^2(f)$ for $f\in\Omega^0(U)$. Then,
\[d^2(f)=d\Bigl(\frac{\partial f}{\partial x}\,dx+\frac{\partial f}{\partial y}\,dy\Bigr)=\Bigl(\frac{\partial^2 f}{\partial x\,\partial y}-\frac{\partial^2 f}{\partial y\,\partial x}\Bigr)\,dx\wedge dy,\]
hence $d^2(f)=0$ by Schwarz's theorem.
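This computation can be replayed symbolically; the following sympy sketch (added for illustration; the helper names `d0` and `d1` are my own and simply encode the two formulas above by their coefficients) checks $d\circ d=0$ on a generic smooth function.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)

def d0(f):
    # d on 0-forms: f |-> (df/dx, df/dy), the coefficients of dx and dy
    return (sp.diff(f, x), sp.diff(f, y))

def d1(r, s):
    # d on 1-forms: r dx + s dy |-> (ds/dx - dr/dy) dx ^ dy
    return sp.simplify(sp.diff(s, x) - sp.diff(r, y))

print(d1(*d0(f)))   # prints 0, since sympy equates the mixed partial derivatives
```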

Consequently, we have defined a complex $\Omega^\bullet$:
\[0\to\Omega^0(U)\xrightarrow{d}\Omega^1(U)\xrightarrow{d}\Omega^2(U)\to 0,\]
the de Rham complex of U. Its cohomology groups $\mathrm{H}^i_{\mathrm{dR}}(U)$ are called the de Rham cohomology groups of U. These real vector spaces are of fundamental interest for topology.

Let us compute $\mathrm{H}^0_{\mathrm{dR}}(U)$. Let $f\in Z^0(\Omega^\bullet)$; this means that $f\in\mathscr{C}^\infty(U)$ and that $df=0$, that is, $\frac{\partial f}{\partial x}=\frac{\partial f}{\partial y}=0$. Consequently, $f$ is constant on every connected component of U. Since $B^0(\Omega^\bullet)=0$, we obtain an isomorphism $\mathrm{H}^0_{\mathrm{dR}}(U)=\mathbf{R}^{\pi_0(U)}$, where $\pi_0(U)$ is the set of connected components of U.

Let us assume that U is simply connected; for example, U could be $\mathbf{R}^2$, or contractible, or star-shaped. Then Poincaré's lemma asserts that any differential form $\omega$ on U of degree $>0$ which is a cycle (one says that $\omega$ is closed) is a boundary (one says that $\omega$ is exact). In physics or vector calculus, this lemma appears under a more elementary formulation: a vector field whose curl is zero is a gradient.

Let us prove it when U is star-shaped with respect to the origin 0 of $\mathbf{R}^2$. Let $\omega=r(x,y)\,dx+s(x,y)\,dy$ be any closed differential form of degree 1. Since U is star-shaped with respect to the origin, for every point $(x,y)\in U$ and every $t\in[0,1]$, one has $(tx,ty)\in U$. We may thus set
\[f(x,y)=\int_0^1\bigl(x\,r(tx,ty)+y\,s(tx,ty)\bigr)\,dt.\]
Then $f$ is $\mathscr{C}^\infty$ (see a course in calculus) and its partial derivatives can be computed by differentiating under the integral sign. Thus one obtains
\[\frac{\partial f}{\partial x}(x,y)=\int_0^1\Bigl(r(tx,ty)+tx\,\frac{\partial r}{\partial x}(tx,ty)+ty\,\frac{\partial s}{\partial x}(tx,ty)\Bigr)\,dt.\]
Since $d\omega=0$, one has $\frac{\partial s}{\partial x}=\frac{\partial r}{\partial y}$, so that
\begin{align*}
\frac{\partial f}{\partial x}(x,y)&=\int_0^1\Bigl(r(tx,ty)+tx\,\frac{\partial r}{\partial x}(tx,ty)+ty\,\frac{\partial r}{\partial y}(tx,ty)\Bigr)\,dt\\
&=\int_0^1\Bigl(r(tx,ty)+t\,\frac{d}{dt}\bigl(r(tx,ty)\bigr)\Bigr)\,dt
=\int_0^1\frac{d}{dt}\bigl(t\,r(tx,ty)\bigr)\,dt
=\bigl[t\,r(tx,ty)\bigr]_0^1=r(x,y).
\end{align*}
One proves similarly that $\frac{\partial f}{\partial y}(x,y)=s(x,y)$, so that $df=\omega$.
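The homotopy formula used in this proof can be tested on a concrete closed form; the following sympy sketch (added for illustration; the particular form $\omega=(2xy+y^3)\,dx+(x^2+3xy^2)\,dy$ is my own choice, not taken from the text) computes the primitive $f$ by the integral above and checks that $df=\omega$.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

# A closed 1-form on R^2: omega = r dx + s dy with ds/dx = dr/dy.
r = 2*x*y + y**3
s = x**2 + 3*x*y**2
assert sp.simplify(sp.diff(s, x) - sp.diff(r, y)) == 0   # omega is closed

# Poincaré's homotopy formula: f(x, y) = ∫_0^1 ( x r(tx, ty) + y s(tx, ty) ) dt
f = sp.integrate(x*r.subs({x: t*x, y: t*y}) + y*s.subs({x: t*x, y: t*y}), (t, 0, 1))

print(sp.simplify(sp.diff(f, x) - r), sp.simplify(sp.diff(f, y) - s))   # 0 0
```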

On the other hand, if $U=\mathbf{R}^2\smallsetminus\{0\}$, one can prove that $\mathrm{H}^1_{\mathrm{dR}}(U)$ has dimension 1 and is generated by the class of the differential form
\[\frac{-y}{x^2+y^2}\,dx+\frac{x}{x^2+y^2}\,dy.\]
This construction can be generalized to any open subset of $\mathbf{R}^n$, and even to any differentiable manifold.

Theorem (7.7.9). — Let A be a ring, let M, N, P be differential modules, let $f\colon M\to N$ and $g\colon N\to P$ be morphisms of differential modules. Assume that one has an exact sequence $0\to M\xrightarrow{f}N\xrightarrow{g}P\to 0$. (These conditions can be summed up by saying that we have an exact sequence of differential modules.)

Then there exists a morphism of A-modules $\partial\colon H(P)\to H(M)$ such that
\[\mathrm{Ker}(\partial)=\mathrm{Im}(H(g)),\qquad\mathrm{Ker}(H(g))=\mathrm{Im}(H(f)),\qquad\mathrm{Ker}(H(f))=\mathrm{Im}(\partial).\]
Consequently, the morphism $\partial$ sits in an "exact triangle":
\[\begin{array}{ccc}
 & H(P) & \\
{\scriptstyle\partial}\swarrow & & \nwarrow{\scriptstyle H(g)}\\
H(M) & \xrightarrow{\ H(f)\ } & H(N)
\end{array}\]

Proof. — a) Let us show that $\mathrm{Ker}(H(g))=\mathrm{Im}(H(f))$. Since $g\circ f=0$, one has $H(g)\circ H(f)=H(g\circ f)=0$ and $\mathrm{Im}(H(f))\subset\mathrm{Ker}(H(g))$. Conversely, let $\xi\in\mathrm{Ker}(H(g))$. By definition, $\xi$ is the class of an element $x\in\mathrm{Ker}(d_N)$. Then $H(g)(\xi)=\mathrm{cl}(g(x))=0$, so that there exists $y\in P$ such that $g(x)=d_P(y)$. Since the morphism $g$ is surjective, there exists $z\in N$ such that $y=g(z)$. Then $g(x)=d_P(y)=d_P(g(z))=g(d_N(z))$, hence $x-d_N(z)$ belongs to $\mathrm{Ker}(g)$. Since $\mathrm{Ker}(g)=\mathrm{Im}(f)$, there exists $t\in M$ such that $x-d_N(z)=f(t)$. It follows that $\xi=\mathrm{cl}(x)=\mathrm{cl}(d_N(z)+f(t))=\mathrm{cl}(f(t))=H(f)(\mathrm{cl}(t))$ (observe that $f(d_M(t))=d_N(f(t))=d_N(x-d_N(z))=0$, hence $d_M(t)=0$ since $f$ is injective, so that $\mathrm{cl}(t)$ makes sense in H(M)). Consequently, $\mathrm{Ker}(H(g))\subset\mathrm{Im}(H(f))$, as was to be shown.

b) Let us now define a morphism $\partial\colon H(P)\to H(M)$. Let $\xi\in H(P)$. Write $\xi=\mathrm{cl}(x)$ for some $x\in\mathrm{Ker}\,d_P$. Since $g$ is surjective, there exists $y\in N$ such that $x=g(y)$. Then $0=d_P(x)=d_P(g(y))=g(d_N(y))$, hence $d_N(y)\in\mathrm{Ker}(g)$. Since $\mathrm{Ker}(g)=\mathrm{Im}(f)$, there exists $z\in M$ such that $d_N(y)=f(z)$. One has $f(d_M(z))=d_N(f(z))=d_N\circ d_N(y)=0$ since $d_N^2=0$. Since $f$ is injective, $d_M(z)=0$. We are going to set $\partial(\xi)=\mathrm{cl}(z)\in H(M)$.

Let us first check that $\mathrm{cl}(z)$ is independent of any choice. Recall that $x,y,z$ have been chosen in such a way that one has $f(z)=d_N(y)$ and $x=g(y)$. Let $x',y',z'$ be such that $f(z')=d_N(y')$ and $x'=g(y')$. Assume that $\mathrm{cl}(x)=\mathrm{cl}(x')$; then there exists $x''\in P$ such that $x'=x+d_P(x'')$. Let us also choose $y''\in N$ such that $x''=g(y'')$. One has $g(y'-y)=x'-x=d_P(x'')=d_P(g(y''))=g(d_N(y''))$, hence there exists $z''\in M$ such that $y'-y-d_N(y'')=f(z'')$. Consequently, $f(z'-z)=d_N(y')-d_N(y)=d_N(d_N(y'')+f(z''))=d_N(f(z''))=f(d_M(z''))$. Since $f$ is injective, $z'-z=d_M(z'')$ and $\mathrm{cl}(z')=\mathrm{cl}(z)$ in H(M).

Moreover, $\partial$ is a morphism of A-modules. Indeed, having chosen the triple $(x_1,y_1,z_1)$ for $\xi_1$ and the triple $(x_2,y_2,z_2)$ for $\xi_2$, we may choose the triple $(x_1a_1+x_2a_2,\,y_1a_1+y_2a_2,\,z_1a_1+z_2a_2)$ for $\xi_1a_1+\xi_2a_2$. Then $\partial(\xi_1a_1+\xi_2a_2)=\partial(\xi_1)a_1+\partial(\xi_2)a_2$.

c) Let us show that $\mathrm{Ker}(H(f))=\mathrm{Im}(\partial)$. Let $\alpha\in H(M)$ be such that $H(f)(\alpha)=0$ and let $x\in Z(M)$ be such that $\alpha=\mathrm{cl}(x)$. Then $H(f)(\alpha)=\mathrm{cl}(f(x))$, hence $f(x)\in\mathrm{Im}(d_N)$ and there exists $y\in N$ such that $f(x)=d_N(y)$. One has $d_P(g(y))=g(d_N(y))=g(f(x))=0$, so that $g(y)\in Z(P)$; by definition of the morphism $\partial$, one has $\partial(\mathrm{cl}(g(y)))=\mathrm{cl}(x)=\alpha$, hence $\mathrm{Ker}(H(f))\subset\mathrm{Im}(\partial)$.

Conversely, let $\alpha\in\mathrm{Im}(\partial)$, let $\xi\in H(P)$ be such that $\alpha=\partial(\xi)$ and let $x\in Z(P)$ be such that $\xi=\mathrm{cl}(x)$. Let $y\in N$ be such that $x=g(y)$ and let $z\in M$ be such that $d_N(y)=f(z)$; then $\alpha=\partial(\xi)=\mathrm{cl}(z)$, by the construction of the morphism $\partial$ in b). We then have $H(f)(\alpha)=\mathrm{cl}(f(z))=\mathrm{cl}(d_N(y))=0$, so that $\alpha\in\mathrm{Ker}(H(f))$, as was to be shown.

d) Let us show that $\mathrm{Im}(H(g))=\mathrm{Ker}(\partial)$.

Let $\xi\in H(P)$ be such that $\partial(\xi)=0$; let $x\in Z(P)$ be such that $\xi=\mathrm{cl}(x)$, let $y\in N$ be such that $x=g(y)$ and let $z\in M$ be such that $d_N(y)=f(z)$, so that $\partial(\xi)=\mathrm{cl}(z)$. By assumption, $z\in\mathrm{Im}(d_M)$; let then $z'\in M$ be such that $z=d_M(z')$ and set $y'=y-f(z')$. One has $d_N(y')=d_N(y)-d_N(f(z'))=f(z)-f(d_M(z'))=0$, so that $y'\in Z(N)$; moreover, $g(y')=g(y)-g(f(z'))=g(y)=x$. The class of $y'$ in H(N) therefore satisfies $H(g)(\mathrm{cl}(y'))=\mathrm{cl}(g(y'))=\mathrm{cl}(x)=\xi$, which proves that $\xi\in\mathrm{Im}\,H(g)$.

Conversely, let $\xi\in\mathrm{Im}(H(g))$; there exists $y\in Z(N)$ such that $\xi=H(g)(\mathrm{cl}(y))$, that is, $\xi=\mathrm{cl}(g(y))$. Set $x=g(y)$; one has $x\in Z(P)$; let then $z\in M$ be such that $d_N(y)=f(z)$. Then $f(z)=0$, hence $z=0$ since $f$ is injective, and the definition of $\partial$ implies that $\partial(\xi)=\mathrm{cl}(z)=0$. □

Corollary (7.7.10). — Let us assume moreover that M, N, P are graded differential modules and that the morphisms $f$ and $g$ are of degree 0 (meaning $f(M_n)\subset N_n$ and $g(N_n)\subset P_n$ for every $n$). Then the morphism $\partial$ constructed in theorem 7.7.9 has degree 1 and its restriction $\partial_n$ to $\mathrm{H}^n(P)$ gives rise to an exact sequence
\[\cdots\to\mathrm{H}^n(M)\to\mathrm{H}^n(N)\to\mathrm{H}^n(P)\xrightarrow{\partial_n}\mathrm{H}^{n+1}(M)\to\cdots\]

The conditions of the corollary are often summarized by saying that one has an exact sequence of complexes, namely a diagram
\[\begin{array}{ccccccccc}
& & 0 & & 0 & & 0 & &\\
& & \downarrow & & \downarrow & & \downarrow & &\\
\cdots & \to & M_{n-1} & \to & M_n & \to & M_{n+1} & \to & \cdots\\
& & \downarrow & & \downarrow & & \downarrow & &\\
\cdots & \to & N_{n-1} & \to & N_n & \to & N_{n+1} & \to & \cdots\\
& & \downarrow & & \downarrow & & \downarrow & &\\
\cdots & \to & P_{n-1} & \to & P_n & \to & P_{n+1} & \to & \cdots\\
& & \downarrow & & \downarrow & & \downarrow & &\\
& & 0 & & 0 & & 0 & &
\end{array}\]
where the rows are complexes and the columns are exact sequences.

Proof. — Recall that for any $\xi\in H(P)$, $\partial(\xi)$ has been defined as $\mathrm{cl}(z)$, where $x\in Z(P)$, $y\in N$ and $z\in M$ are such that $\xi=\mathrm{cl}(x)$, $x=g(y)$ and $d_N(y)=f(z)$. Assume that $\xi\in\mathrm{H}^n(P)$. Then one may replace $x$, $y$, $z$ by their components in $P_n$, $N_n$ and $M_{n+1}$ respectively, so that $\partial(\xi)\in\mathrm{H}^{n+1}(M)$. The rest of the corollary follows from lemma 7.7.11. □

Lemma (7.7.11). — Let I be a set, let $(M_i)_{i\in I}$ and $(N_i)_{i\in I}$ be families of A-modules; for every $i$, let $f_i\colon M_i\to N_i$ be a morphism of A-modules. Set $M=\bigoplus_{i\in I}M_i$, $N=\bigoplus_{i\in I}N_i$ and let $f\colon M\to N$ be the morphism of A-modules given by $f((m_i))=(f_i(m_i))$ for $(m_i)\in M$. Then for every $i\in I$, one has $\mathrm{Ker}(f_i)=\mathrm{Ker}(f)\cap M_i$, $\mathrm{Im}(f_i)=\mathrm{Im}(f)\cap N_i$, and
\[\mathrm{Ker}(f)=\bigoplus_{i\in I}\mathrm{Ker}(f_i)\quad\text{and}\quad\mathrm{Im}(f)=\bigoplus_i\mathrm{Im}(f_i).\]

Proof. — For $m=(m_i)\in M$, one has $f(m)=(f_i(m_i))$, so that $m\in\mathrm{Ker}(f)$ if and only if $m_i\in\mathrm{Ker}(f_i)$ for every $i$. This shows that $\mathrm{Ker}(f)\cap M_i=\mathrm{Ker}(f_i)$. Moreover, for $m\in\mathrm{Ker}(f)$, one has $m=\sum m_i$, hence $\mathrm{Ker}(f)=\bigoplus\mathrm{Ker}(f_i)$.

Let also $n=(n_i)\in N$. If $n=f(m)$ for some $m\in M$, then $f_i(m_i)=n_i$ for every $i$, so that $n_i\in\mathrm{Im}(f_i)$. Conversely, assume that $n_i\in\mathrm{Im}(f_i)$ for every $i$; let J be the finite set of $i\in I$ such that $n_i\neq 0$. For $i\in J$, choose $m_i\in M_i$ such that $n_i=f_i(m_i)$; for $i\in I\smallsetminus J$, set $m_i=0$. Then $f((m_i))=n$, hence $n\in\mathrm{Im}(f)$. The relation $n=\sum n_i$ also shows that $n\in\bigoplus\mathrm{Im}(f_i)$. □

Exercises

1-exo800) Let A be a ring, let $M_1,M_2,M_3,M_4$ be A-modules and let $w\colon M_1\to M_2$, $v\colon M_2\to M_3$ and $u\colon M_3\to M_4$ be morphisms. If $f$ is a morphism of A-modules, we denote by $\lambda(f)$ the length of $\mathrm{Im}(f)$, or $+\infty$ if this length is infinite.

a) Deduce from $w$ an A-morphism $f\colon\mathrm{Ker}(u\circ v\circ w)/\mathrm{Ker}(v\circ w)\to\mathrm{Ker}(u\circ v)/\mathrm{Ker}(v)$.

b) Deduce from $v$ an A-morphism $g\colon\mathrm{Ker}(u\circ v)/\mathrm{Ker}(v)\to\mathrm{Im}(v)/\mathrm{Im}(v\circ w)$.

c) Deduce from $u$ an A-morphism $h\colon\mathrm{Im}(v)/\mathrm{Im}(v\circ w)\to\mathrm{Im}(u\circ v)/\mathrm{Im}(u\circ v\circ w)$.

d) Show that the diagram
\[0\to\mathrm{Ker}(u\circ v\circ w)/\mathrm{Ker}(v\circ w)\xrightarrow{f}\mathrm{Ker}(u\circ v)/\mathrm{Ker}(v)\xrightarrow{g}\mathrm{Im}(v)/\mathrm{Im}(v\circ w)\xrightarrow{h}\mathrm{Im}(u\circ v)/\mathrm{Im}(u\circ v\circ w)\to 0\]
is an exact sequence.

e) Prove the inequality (due to Frobenius):
\[\lambda(v\circ w)+\lambda(u\circ v)\leqslant\lambda(v)+\lambda(u\circ v\circ w).\]

2-exo1035) Let A be a ring.

a) Let $M_0\xrightarrow{f_0}M_1\xrightarrow{f_1}\cdots\xrightarrow{f_{n-1}}M_n$ be a complex of A-modules. Show that this complex is an exact sequence if and only if, for every $i$, the diagram $(0)\to\mathrm{Ker}(f_i)\to M_i\xrightarrow{f_i}\mathrm{Ker}(f_{i+1})\to(0)$ is a short exact sequence.

b) Let $0\to M_1\xrightarrow{f_1}M_2\xrightarrow{f_2}\cdots\xrightarrow{f_{n-1}}M_n\to 0$ be an exact sequence. Assuming that the modules $M_i$ have finite length, show that $\sum_{i=1}^n(-1)^i\,\ell_A(M_i)=0$.

c) More generally, let $0\to M_1\xrightarrow{f_1}M_2\xrightarrow{f_2}\cdots\xrightarrow{f_{n-1}}M_n\to 0$ be a complex. Assume that the modules $M_i$ have finite length. For every $i$, let $H_i=\mathrm{Ker}(f_i)/\mathrm{Im}(f_{i-1})$; prove that $H_1,\dots,H_n$ have finite length and that
\[\sum_{i=1}^n(-1)^i\,\ell_A(M_i)=\sum_{i=1}^n(-1)^i\,\ell_A(H_i).\]

3-exo1207) Let A be a commutative ring, let M be an A-module and let (0, 1) be two

elements of A.


a) Show that the map 31 : M→M ×M and 32 : M ×M→M defined by

31(G) = (0G, 1G) et 32(G, H) = 1G − 0Hgive rise to a complex M

•:

0→M

31−→M ×M

32−→ 0.

b) Show that H0(M•) = {G ∈ M ; 0G = 1H = 0} and that H

2(M•) = M/(0M + 1M).c) Assume that themultiplication by 0 is injective inM. Then show thatmultiplication

by 1 in M/0M is injective if and only if H1(M•) = 0.

4-exo1210) Let A be a commutative ring and let I be an ideal of A.

a) Assume that the canonical morphism cl : A→ A/I admits a right inverse 5 . Show

that there exists 0 ∈ I such that 0 = 02and I = (0).

b) Assume that A is an integral domain. Then show that A/I is a projective A-module

if and only if I = 0 or I = A.

5-exo257) Let : be a field, let A be the subring :[X2,XY,Y2] of :[X,Y]. Show that the

ideal I = (X2,XY) of A is not a projective A-module.

6-exo253) Let A = �([0, 1],R) be the ring of continuous functions on [0, 1] with real

values, let I be the set of functions 5 ∈ A which are identically 0 in some neighborhood

of 0.

a) Show that I is not finitely generated.

b) Show that I = I2.

c) Show that there exists a family (�=)=>1 of continuous functions on [0, 1] suchthat �=(G) = 0 if G ∉ [ 1= , 2

= ], and that

∑=>1

�=(G) > 0 for every G ∈ (0, 1]. (For example,

set �=(G) = (=G − 1)(2 − =G) for G ∈ [ 1= , 2

= ].)d) Show that the ideal I is a projective A-module.

7-exo254) Let : be a ring, let A = M=(:) be the ring of matrices of size = with entries

in :.

a) Let M = := , viewed as a left A-module. Show that A 'M=.

b) Assume that : is commutative or that it is a division ring. Show that M is not free.

8-exo255) Let A be a ring, let P be a finitely generated projective A-module. Show that

EndA(P) is again a finitely generated and projective A-module.

9-exo1208) Let A be a ring and let 0 → M

5−→ N

6−→ P → 0 be an exact sequence of

A-modules.

a) Assume that this exact sequence is split. Then show that for every A-module X,

the diagrams

0→ Hom(X,M)5∗−→ Hom(X,N)

6∗−→ Hom(X, P) → 0

and

0→ Hom(P,X)6∗

−→ Hom(N,X)5 ∗

−→ Hom(M,X) → 0


are split exact sequences.

b) Conversely, assume that for every A-module X, the diagram

0→ Hom(X,M)5∗−→ Hom(X,N)

6∗−→ Hom(X, P) → 0

is an exact sequence. Show that the initial exact sequence is split.

c) Solve the same question for the second sort of exact sequences.

10-exo882) a) Prove that Z/3Z is a projective (Z/6Z)-module which is not free.

b) Let A be a commutative ring. Let 4 ∈ A {0, 1} be a non-trivial idempotent

(42 = 4). Prove that 4A is a projective submodule of A which is not free.

11-exo852) Let A be a ring and let

M N

M′

N′

←→5←→ D ←→ E

←→5′

be a commutative diagram of A-modules.

a) Let # : M → M′ ⊕ N and ! : M

′ ⊕ N → N′be the maps given by by #(G) =

(D(G), 5 (H)) and !(G, H) = 5 ′(G) − E(H). Prove that ! ◦ # = 0.

b) Assume that # is injective and Im(#) = Ker(!). (One says that this diagram

is a pull-back.) Prove that for every A-module P, every morphism ? : P → N and

every morphism @ : P→M′such that 5 ′ ◦ @ = E ◦ ?, there exists a unique morphism

A : P→M such that ? = 5 ◦ A and @ = D ◦ A.c) Assume that ! is surjective and Im(#) = Ker(!). (One says that this diagram

is a push-out.) Prove that for every A-module Q, every morphism ? : N → Q and

every morphism @ : M′→ Q such that ? ◦ 5 = @ ◦ D, there exists a unique morphism

A : N′→ Q such that ? = A ◦ E and @ = A ◦ 5 ′.

d) Let A be a ring and let I, J be two-sided ideals of A. Prove that the diagram

A/(I ∩ J) A/I

A/J A/(I + J)

← →

←→ ←→

← →

(whose maps are the obvious ones) is both a pull-back and a push-out.

12-exo851) Let A be a ring and let

N P

N′

P′

←→5

←→ D ←→ E

←→5′

be a commutative diagram of A-modules.


a) Assume that this diagram is a pull-back (see exercise 7/11). Let 6′ : M→ N′be a

morphism of A-modules such that Im(6′) = Ker( 5 ′). Prove that there exists a uniquemorphism 6 : P→M such that 6′ = D ◦ 6 and the sequence

M

6−→ N

5−→ P

is exact.

b) Assume that this diagram is a push-out (see exercise 7/11). Let 6 : P→ Q′be a

morphism of A-modules such that Ker(6) = Im( 5 ). Prove that there exists a unique

morphism 6′ : P′→ Q

′such that 6 = 6′ ◦ E and the sequence

N′ 5 ′

−→ P′ 6′

−→ Q′

is exact.

13-exo850) Prove this generalization of the snake lemma. Let A be a ring and let

M N P

M′

N′

P′

←→5

←→ D

←→6←→ E ←→ F

←→5′ ←→6

be a commutative diagram of A-modules with exact rows, namely Ker(6) = Im( 5 ) andKer(6′) = Im( 5 ′).

a) Prove that the given morphisms induce exact sequences

0→ Ker( 5 ) D−→ Ker( 5 ′ ◦ D)5−→ Ker(E)

6−→ Ker(F) ∩ Im(6)

and

M′/(Im(D) + Ker( 5 ′))

5 ′

−→ Coker(E)6′

−→ Coker(F ◦ 6)id

P′

−−→ Coker(6′) → 0.

b) Construct a morphism of modules

� : Ker(F) ∩ Im(6) →M′/(Im(D) + Ker( 5 ′))

that connects the preceding two exact sequences and gives rise to an exact sequence.

(In other words, Im(6) = Ker(�) and Im(�) = Ker( 5 ′).)c) Discuss the particular cases : (i) 5 ′ is injective; (ii) 6 is surjective; (iii) 5 ′ is injective

and 6 is surjective.

d) Show that the given morphisms induce an exact sequence

0 Ker( 5 ) Ker(6 ◦ 5 ) Ker(6)

0 Coker(6) Coker(6 ◦ 5 ) Coker( 5 )

←→ ← →idM ←→5

←→ �←→ ←→

idP

←→6

in which the morphism � is defined as in b).


14-exo1239) Give examples of commutative ringsA, multiplicative subsets S ofA, andA-

modulesM,N such that the naturalmap of proposition 7.2.7 �M,N : S−1

HomA(M,N) →Hom

S−1

A(S−1

M, S−1N) is not injective, resp. not surjective.

15-exo853) Let A be a ring, let M be a A-module and let N be a submodule of M.

a) If N is injective, then it is a direct summand.

b) If M/N is projective, then N is a direct summand.

16-exo308) Let A be a ring, let P be an A-module and let M be a direct summand of P.

If P is an injective A-module, show that M is injective.

17-exo309) Let A be a ring.

a) Assume that A is right noetherian. Show that any direct sum of injective right

A-modules is injective. (Apply criterion (ii) of proposition 7.4.2.)

b) Let (I=)=∈N be an increasing sequence of right ideals of A and let I be its union. For

every =, let M= be a right A-module containing A/I= , let 5= : A→M= be the canonical

surjection from A to A/I= composed with the injection to M= . Let M be the direct sum

of the modules M= .

Show that there exists a unique map 5 : I→M such that 5 (0) = ( 5=(0))= for every0 ∈ I.

c) (followed) Assuming that M is injective, prove that there exists = ∈ N such that

I= = I.

d) (followed) Assume that every direct sum of injective right A-modules is injective.

Show that A is right noetherian (theorem of Bass-Papp).

18-exo307) Let A be a ring, let P be an A-module and let M be a submodule of P. One

says that a submodule N of P essential for M if it contains M and if N1 ∩M ≠ 0 for every

non-zero submodule N1 of N.

a) Show that there exists a maximal submodule N of P containing M which is

essential for M. (Let � be the set of submodules of P containing M which are essential. Show

that � is inductive and apply Zorn’s lemma.)

b) Show that there exists a maximal submodule N′of P such that M ∩N

′ = 0. (Show

that the set of such submodules is inductive.)

c) Assume that P is an injectiveA-module. Show that P = N⊕N′. Using exercise 7/16,

conclude that N is an injective A-module.

19-exo310) Let A be a ring and let M be a right A-module. Let 9 : M → N and

9′ : M→ N′be two morphisms of M into right A-modules N and N

′.

a) Assume that N is essential with respect to 9(M) and that N′is injective. Prove that

there exists an injective morphism 5 : N→ N′such that 9′ = 5 ◦ 9.

b) Assume furthermore that N is injective and that N′is essential with respect

to 9(M′). Prove that 5 is an isomorphism.

If M is an A-module, theorem 7.4.6 shows that there exists an injective A-module N

containing M. By exercise 7/18, there exists such an injective module N which is essential with

respect to M. The result of the present exercise shows that N is well-defined.


20-exo311) Let A be a ring, let M,N be A-modules and let 5 : M→ N be a morphism.

a) Prove that 5 is an epimorphism in the category of A-modules if and only 5 is

surjective.

b) Prove that 5 is a monomorphism in the category of A-modules if and only if 5 is

injective.

21-exo312) Let A, B be rings. Let F be a functor from the category of A-modules to that

of B-modules, let G be a right adjoint to F. For any A-module M and any B-module N,

let ΦM,N : HomB(F(M),N) → HomA(M,G(N)) be the adjunction map.

a) Assume that the functor F is faithful. Show that for any injective morphism 6 of

B-modules, the morphism G(6) is injective. (Use the characterization of exercise 7/20.)

b) Assume that the functor G is faithful. Show that for any surjective morphism 5 of

A-modules, the morphism F( 5 ) is surjective.22-exo316) Let A and B be rings, let F be a functor from the category of A-modules to

the category of B-modules, let G be a functor from the category of B-modules to the

category of A-modules. Assume that these functors form a pair of adjoint functors

and let (ΦM,N) be an adjonction.

a) If F and G are additive, prove that the maps ΦM,N are additive.

b) Assume that A = B is a commutative ring and that the functors F,G are A-linear,

that is they are additive and map a homothety of ratio 0 to a homothety of ratio 0.

Prove that the maps ΦM,N are morphisms of A-modules.

23-exo895) Let A be a ring, let P be an A-module. Assume that for every injective

A-module Q, every surjective morphism ? : Q→ Q′′and every morphism 5 : P→ Q

′′,

there exists a morphism 6 : P → Q such that ? ◦ 6 = 5 . The goal of the first two

questions is to prove that P is projective.

a) Let 0 → M′ 9−→ M

?−→ M

′′ → 0 be an exact sequence of A-modules and let

5 : P → M′′be a morphism of A-modules. Let Q be an injective A-module and

8 : M → Q an injective morphism (Baer). Let Q′ = 8(9(M′)), let Q

′′ = Q/Q′ and let

@ : Q→ Q′′be the canonical surjection Prove that there exists amorphism : : M

′′→ Q′′

such that : ◦ ? = @ ◦ 8.b) Let 6 : P→ Q be a morphism such that @ ◦ 6 = : ◦ 5 . Prove that Im(6) ⊂ Im(8).

Conclude.

c) Let Q be an A-module. Show that Q is injective if and only if, for every projective

A-module P, every injective morphism 8 : P′ → P and every morphism 5 : P

′ → Q,

there exists a morphism 6 : P→ Q such that 6 ◦ 8 = 5 .

24-exo894) One says that a ring A is right hereditary if every right ideal of A is a

projective A-module.

a) Let A be a right hereditary ring and let = be an integer. Prove that every

submodule M of A=is a projective A-module. (Argue by induction on =; consider the

morphism from M to A given by the last coordinate.)


b) More generally, if A is right hereditary, then every submodule of a free right

A-module is a direct sum of ideals. (Let L be a free A-module and let M be a submodule

of L. Choose a well-ordering of a basis of L, and argue similarly.)

c) Using exercise 7/24, prove that A is right hereditary if and only if every quotient

of an injective A-module is injective.

(Commutative) integral domains which are right hereditary are called Dedekind rings. They

will be studied in §9.6.

25-exo896) Let A = Z[T] and let K = Q(T) be its fraction field.

a) Let I = (2, T). Prove that there exists a morphism 5 : I→ K/A such that 5 (2) = 0

and 5 (T) is the class of 1/2. Prove that 5 does not extend to A.

b) Prove that K/A is divisible but is not an injective A-module.

26-exo898) Schanuel’s lemma. Let A be a ring.

a) Let 0 → P′ → P → M → 0 and 0 → Q

′ → Q → M → 0 be exact sequences of

right A-modules.

If P and Q are projective, then P ⊕ Q′ ' P

′ ⊕ Q.

b) Give an example that shows that the projectivity assumption is necessary.

c) Let 0 → M → P → P′ → 0 and 0 → M → Q → Q

′ → 0 be exact sequences of

right A-modules.

If P and Q are injective, then P ⊕ Q′ ' P

′ ⊕ Q.


CHAPTER 8

TENSOR PRODUCTS AND DETERMINANTS

A companion chapter to linear algebra, multilinear algebra studies situations where quantities depend on many variables and vary linearly in each of them. The tensor product construction takes two modules M and N and makes a module $M\otimes N$ out of them, which reflects bilinear maps from $M\times N$. When M and N are free, then so is $M\otimes N$, and the tensors of old-style differential/Riemannian

geometry, a bunch of real numbers with indices or exponents, are nothing but

the coordinates of such algebraic objects. Tensor products are also important in

other fields of mathematics, such as representation theory where they are used

to define induced representations.

This construction can also serve to associate various algebras with a module

over a commutative ring. Despite its commutative character, the symmetric

algebra will not be studied thoroughly here; in the simplest case, when the

given module is free, it furnishes another construction of rings of polynomials.

However, because of its relation with determinants and, hence, with linear

algebra, I give a more detailed presentation of the exterior algebra.

In the next section, I prove that tensoring with a given module preserves some

exact sequences. This is an extremly important property which I first prove by

relatively explicit computations. I then give another, more abstract, proof, based

on the fact that this functor is right exact, a property that itself derives from

the universal property of the tensor product construction.

However, tensoring with a given module is not left exact in general, and

modules which preserve all exact sequences are called flat. Free modules, more

generally projective modules, are flat, and it is useful to characterize flat modules.

Over principal ideal domains, flat modules are just torsion-free modules, but

this condition is not sufficient in general. I then prove a theorem of Lazard that

shows that a finitely presented module is flat if and only if it is projective.


Faithful flatness is a reinforcement of flatness. Its importance is revealed in

the next section, where I explain Grothendieck's idea of faithfully flat descent. If $f\colon A\to B$ is a morphism of commutative rings, the tensor product construction furnishes a functor $M\mapsto B\otimes_AM$ from A-modules to B-modules. When the morphism $f$ is faithfully flat, the module $B\otimes_AM$ reflects many properties of M, and I will show some of them. Even more remarkably, Grothendieck proved that the module $B\otimes_AM$ carries an additional structure, called a descent datum, which allows one to recover the initial A-module M.

In a final section, I show how, when $f$ is a finite Galois extension of fields, this general theorem translates to Galois descent, via an admittedly lengthy computation.

8.1. Tensor product of two modules

Let A be a ring, let M be a right A-module and N be a left A-module. We are interested here in the balanced bi-additive maps from $M\times N$ into an abelian group P, that is to say, in the maps $f\colon M\times N\to P$ which satisfy the following properties:

(i) $f(m+m',n)=f(m,n)+f(m',n)$ (left additivity);

(ii) $f(m,n+n')=f(m,n)+f(m,n')$ (right additivity);

(iii) $f(ma,n)=f(m,an)$ (balancing condition).

Our first goal is to linearize the map $f$, that is, to replace it by a linear map $\varphi$ between appropriate modules so that the study of $\varphi$ would be more or less equivalent to that of $f$.

8.1.1. Construction. — Let $T_1=\mathbf{Z}^{(M\times N)}$ be the free $\mathbf{Z}$-module with basis $M\times N$; for $m\in M$ and $n\in N$, let $e_{m,n}$ be the element $(m,n)$ of the basis of $T_1$. Let $T_2$ be the submodule of $T_1$ generated by the following elements:
\[e_{m+m',n}-e_{m,n}-e_{m',n},\qquad e_{m,n+n'}-e_{m,n}-e_{m,n'},\qquad e_{ma,n}-e_{m,an},\]
where $m,m'$ run along M, $n,n'$ run along N and $a$ runs along A.

Finally, let $T=T_1/T_2$ and let $m\otimes n$ denote the class of $e_{m,n}$ in the quotient T. Let $\theta$ be the map from $M\times N$ to $M\otimes N$ given by $\theta(m,n)=m\otimes n$. By construction, the map $\theta$ is balanced and bi-additive. Indeed, $\theta(m+m',n)-\theta(m,n)-\theta(m',n)$ is the class in $T_1/T_2$ of $e_{m+m',n}-e_{m,n}-e_{m',n}$, hence is 0; the two other axioms are verified in the same way.

By the universal property of free $\mathbf{Z}$-modules, there is a unique morphism $\varphi_1$ from $T_1$ to P such that $f(m,n)=\varphi_1(e_{(m,n)})$ for every $(m,n)\in M\times N$. Then the kernel of $\varphi_1$ contains $T_2$ if and only if $\varphi_1$ vanishes on each of the given generators of $T_2$, in other words, if and only if $f$ is A-balanced and bi-additive. Consequently, there exists a morphism of abelian groups $\varphi\colon T\to P$ such that $\varphi(m\otimes n)=f(m,n)$.

Definition (8.1.2). — The abelian group T defined above is called the tensor product of the A-modules M and N; it is denoted $M\otimes_AN$. Its elements are called tensors. The map $\theta\colon M\times N\to M\otimes_AN$ defined by $\theta(m,n)=m\otimes n$ is called the canonical A-balanced bi-additive map. The elements of $M\otimes_AN$ of the form $m\otimes n$, for some $(m,n)\in M\times N$ (the image of $\theta$), are called split tensors.

Since the split tensors are the image in $T=T_1/T_2$ of a basis of $T_1$, they form a generating family of T.

Proposition (8.1.3) (Universal property of tensor products). — For any abelian group P and any A-balanced and bi-additive map $f\colon M\times N\to P$, there exists a unique morphism of abelian groups $\varphi\colon M\otimes_AN\to P$ such that $\varphi\circ\theta=f$ (explicitly: $f(m,n)=\varphi(m\otimes n)$ for every $(m,n)\in M\times N$).

Proof. — We just showed the existence of a morphism $\varphi$ such that $\varphi\circ\theta=f$. Let $\varphi,\varphi'$ be two such morphisms, and let us prove that $\varphi=\varphi'$. Let $\delta=\varphi'-\varphi$; it is a morphism of abelian groups from $M\otimes_AN$ to P. By assumption,
\[\delta(m\otimes n)=\varphi'(m\otimes n)-\varphi(m\otimes n)=f(m,n)-f(m,n)=0\]
for every $(m,n)\in M\times N$. Consequently, the kernel of $\delta$ contains every split tensor. Since they generate $M\otimes_AN$, one has $\mathrm{Ker}(\delta)=M\otimes_AN$ and $\delta=0$. □
� = 0. �

Remark (8.1.4). — It is probably useful to repeat things once more.

The property for the map θ : (m, n) ↦ m ⊗ n to be A-balanced and bi-additive is equivalent to the following relations:

(m + m′) ⊗ n = m ⊗ n + m′ ⊗ n,
m ⊗ (n + n′) = m ⊗ n + m ⊗ n′,
ma ⊗ n = m ⊗ an,

for m, m′ ∈ M, n, n′ ∈ N and a ∈ A.

Moreover, both the universal property of the tensor product M ⊗A N expressed by proposition 8.1.3 and its construction above show that to define a morphism φ from the abelian group M ⊗A N to an abelian group P, we just need to construct an A-balanced and bi-additive map f from M × N to P, the relation between φ and f being the equality f(m, n) = φ(m ⊗ n).

Finally, to check that a morphism from a tensor product M ⊗A N to an abelian group P vanishes, it suffices to check that it maps all split tensors to 0. Similarly, to check that two morphisms from a tensor product M ⊗A N to P coincide, it suffices to check that their difference vanishes, hence that these two morphisms coincide on split tensors.
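These relations can force many tensors to vanish. For example, take A = Z, M = Z/2Z and N = Z/3Z: for m ∈ M and n ∈ N, one has 4m = 0 and 4n = n, hence m ⊗ n = m ⊗ 4n = 4m ⊗ n = 0 ⊗ n = 0 (the last equality because 0 ⊗ n + 0 ⊗ n = (0 + 0) ⊗ n = 0 ⊗ n). Since the split tensors generate the tensor product, (Z/2Z) ⊗Z (Z/3Z) = 0.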

8.1.5. Functoriality. — Let M′ be another right A-module, let N′ be another left A-module. Let u : M → M′ and v : N → N′ be morphisms of A-modules. The map from M × N to M′ ⊗A N′ that sends (m, n) to u(m) ⊗ v(n) is A-balanced and bi-additive, as the following computations show:

u(m + m′) ⊗ v(n) = (u(m) + u(m′)) ⊗ v(n) = u(m) ⊗ v(n) + u(m′) ⊗ v(n),
u(m) ⊗ v(n + n′) = u(m) ⊗ v(n) + u(m) ⊗ v(n′),
u(ma) ⊗ v(n) = u(m)a ⊗ v(n) = u(m) ⊗ av(n) = u(m) ⊗ v(an).

By the universal property of the tensor product, there exists a unique morphism of abelian groups from M ⊗A N to M′ ⊗A N′, written u ⊗ v, such that (u ⊗ v)(m ⊗ n) = u(m) ⊗ v(n) for every (m, n) ∈ M × N.

Let M′′ and N′′ be yet other respectively right and left A-modules, let u′ : M′ → M′′ and v′ : N′ → N′′ be morphisms of A-modules. Applied to a split tensor m ⊗ n, the morphisms (u′ ◦ u) ⊗ (v′ ◦ v) and (u′ ⊗ v′) ◦ (u ⊗ v) both furnish the split tensor (u′ ◦ u(m)) ⊗ (v′ ◦ v(n)). Consequently, these two morphisms are equal.

Let us now assume that M = M′, N = N′ and let us take for morphisms the identity maps of M and N. One has idM ⊗ idN = idM⊗N, since these two morphisms send a split tensor m ⊗ n to itself.

Let us still assume that M′ = M and N′ = N, but let u, v be arbitrary endomorphisms of M and N, respectively. Then, u ⊗ v is an endomorphism of M ⊗A N. Moreover, the maps EndA(M) → End(M ⊗A N) and EndA(N) → End(M ⊗A N) respectively given by u ↦ u ⊗ idN and v ↦ idM ⊗ v are morphisms of rings.

Remark (8.1.6). — Let A be a ring, let M be a right A-module and let N be a left A-module. Let t be an element of the tensor product M ⊗A N. By definition, it can be written ∑_{i∈I} m_i ⊗ n_i for a finite set I and finite families (m_i)_{i∈I} in M and (n_i)_{i∈I} in N. Let M0 be the submodule of M generated by the family (m_i), and let N0 be the submodule of N generated by the family (n_i); they are finitely generated, by construction. Moreover, the element t is the image of an element t0 ∈ M0 ⊗A N0 by the canonical morphism f0 ⊗ g0 : M0 ⊗A N0 → M ⊗A N deduced from the inclusions f0 : M0 → M and g0 : N0 → N: it suffices to set t0 = ∑_{i∈I} m_i ⊗ n_i, where these split tensors are viewed in M0 ⊗A N0.

However, a representation of a tensor t = ∑ m_i ⊗ n_i is by no means unique, and we may well have t = 0 and t0 ≠ 0. If t = 0, the construction of the tensor product module M ⊗A N shows that the element ∑ e_{m_i,n_i} of T1 = Z^(M×N), in the notation of 8.1.1, belongs to T2, hence is itself a finite linear combination of basic elements of the form e_{m+m′,n} − e_{m,n} − e_{m′,n}, e_{m,n+n′} − e_{m,n} − e_{m,n′} and e_{ma,n} − e_{m,an}. Let M1 be the submodule of M generated by M0 and by all the elements m, m′ that appear; similarly, let N1 be the submodule of N generated by N0 and by all the elements n, n′ that appear in such a linear combination. Let f1 : M1 → M and g1 : N1 → N be the inclusions. Under the morphism f1 ⊗ g1, the tensor t1 = ∑ m_i ⊗ n_i of M1 ⊗A N1 maps to t. Moreover, one has t1 = 0.

In other words, the nullity of an explicitly written tensor t of M ⊗A N can be witnessed in the tensor product M1 ⊗A N1, where M1 and N1 are finitely generated submodules of M and N respectively.
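For example, take A = Z, M = Z, N = Z/2Z and t = 2 ⊗ cl(1). In M ⊗A N, one has t = 1 ⊗ 2 cl(1) = 1 ⊗ 0 = 0. However, with M0 = 2Z and N0 = N, the corresponding element t0 = 2 ⊗ cl(1) of M0 ⊗A N0 is nonzero: by proposition 8.1.10 below, M0 ⊗A (Z/2Z) ≅ M0/2M0 = 2Z/4Z, and t0 corresponds to the class of 2, which is nonzero.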


8.1.7. Bimodules. — Let us assume in particular that M is endowed with the structure of a (B,A)-bimodule and that N is endowed with the structure of an (A,C)-bimodule. If we have no such structure at our disposal, we can always take B = Z or C = Z. If A is a commutative ring, we can also choose B = C = A.

In any case, we have a morphism of rings B → EndA(M) (which sends b ∈ B to the left-multiplication λ_b by b in M) and a morphism of rings C^o → EndA(N) (which sends c ∈ C to the right-multiplication ρ_c by c in N). This shows that we would essentially consider the most general situation by assuming that B = EndA(M) and C = EndA(N)^o; the above morphisms of rings would then be the identities.

By composition, we get two ring morphisms B → End(M ⊗A N) and C^o → End(M ⊗A N), given by b ↦ λ_b ⊗ idN and c ↦ idM ⊗ ρ_c. They endow the abelian group M ⊗A N with the structure of a left B-module and of a right C-module. Moreover, for b ∈ B, c ∈ C, m ∈ M and n ∈ N, we have

b · ((m ⊗ n) · c) = b · (m ⊗ nc) = bm ⊗ nc = (b · (m ⊗ n)) · c,

so that these two structures are compatible. This shows that M ⊗A N has a natural structure of a (B,C)-bimodule.

Let P be a (B,C)-bimodule. For a morphism of abelian groups φ : M ⊗A N → P to be (B,C)-linear, it is necessary and sufficient that the A-balanced and bi-additive map f = φ ◦ θ : M × N → P satisfy f(bm, nc) = b f(m, n) c for every m ∈ M, every n ∈ N, every b ∈ B and every c ∈ C.

In particular, if u : M → M′ is a morphism of (B,A)-bimodules and v : N → N′ is a morphism of (A,C)-bimodules, then the morphism u ⊗ v from M ⊗A N to M′ ⊗A N′ is a morphism of (B,C)-bimodules.

If A is a commutative ring and B = C = A, we then have a(m ⊗ n) = (ma) ⊗ n = m ⊗ (an) = (m ⊗ n)a, so that the two given structures of an A-module on M ⊗A N coincide.

Proposition (8.1.8) (Compatibility with direct sums). — Let (M_i)_{i∈I} be a family of right A-modules and (N_j)_{j∈J} be a family of left A-modules. For every (i, j) ∈ I × J, let P_{i,j} = M_i ⊗A N_j. Set M = ⊕_{i∈I} M_i, N = ⊕_{j∈J} N_j, P = ⊕_{(i,j)∈I×J} P_{i,j}. For i ∈ I and j ∈ J, let α_i : M_i → M, β_j : N_j → N, γ_{i,j} : P_{i,j} → P be the canonical injections. There is a unique morphism λ : P → M ⊗A N such that λ ◦ γ_{i,j} = α_i ⊗ β_j for every (i, j) ∈ I × J. Moreover, λ is an isomorphism.

Proof. — Existence and uniqueness of λ follow from the universal property of the direct sum. To prove that λ is an isomorphism, we shall construct its inverse explicitly. For i ∈ I and j ∈ J, let p_i : M → M_i, q_j : N → N_j and r_{i,j} : P → M_i ⊗ N_j be the canonical projections.

By construction of the direct sums M = ⊕ M_i and N = ⊕ N_j, the families (p_i(m))_{i∈I} and (q_j(n))_{j∈J} are almost null, for any (m, n) ∈ M × N. Consequently, in the family (p_i(m) ⊗ q_j(n))_{(i,j)∈I×J}, only finitely many terms are nonzero, so that this family is not only an element of the product module ∏_{(i,j)∈I×J} P_{i,j}, but one of the direct sum P = ⊕ P_{i,j}. This defines a map f : M × N → P. Let us show that this map is A-balanced and bi-additive. Indeed, for any m, m′ ∈ M and any n ∈ N,

f(m + m′, n) = (p_i(m + m′) ⊗ q_j(n))_{i,j} = (p_i(m) ⊗ q_j(n) + p_i(m′) ⊗ q_j(n))_{i,j}
= (p_i(m) ⊗ q_j(n))_{i,j} + (p_i(m′) ⊗ q_j(n))_{i,j} = f(m, n) + f(m′, n),

and a similar proof shows that f(m, n + n′) = f(m, n) + f(m, n′). Moreover, for every a ∈ A, one has

f(ma, n) = (p_i(ma) ⊗ q_j(n))_{i,j} = (p_i(m)a ⊗ q_j(n))_{i,j} = (p_i(m) ⊗ aq_j(n))_{i,j} = (p_i(m) ⊗ q_j(an))_{i,j} = f(m, an),

since the maps p_i and q_j are A-linear. Consequently, there exists a unique morphism of abelian groups φ : M ⊗A N → P such that φ(m ⊗ n) = f(m, n) for every (m, n) ∈ M × N.


For every (m, n) ∈ M × N, one has

λ(φ(m ⊗ n)) = λ(f(m, n)) = λ(∑_{i,j} γ_{i,j}(p_i(m) ⊗ q_j(n)))
= ∑_{i,j} (α_i ⊗ β_j)(p_i(m) ⊗ q_j(n)) = ∑_{i,j} α_i(p_i(m)) ⊗ β_j(q_j(n))
= (∑_i α_i(p_i(m))) ⊗ (∑_j β_j(q_j(n))) = m ⊗ n.

Since the split tensors generate M ⊗A N, one has λ(φ(t)) = t for every t ∈ M ⊗A N, so that λ ◦ φ is the identity on M ⊗A N. On the other hand, for any (k, l) ∈ I × J, any m ∈ M_k and any n ∈ N_l, one has

φ(λ(γ_{k,l}(m ⊗ n))) = φ(α_k(m) ⊗ β_l(n)) = ∑_{i,j} γ_{i,j}(p_i(α_k(m)) ⊗ q_j(β_l(n))).

By definition of the projections p_i, one has p_i(α_k(m)) = m if i = k, and p_i(α_k(m)) = 0 if i ≠ k; similarly, q_j(β_l(n)) = n if j = l and is zero otherwise. This implies that

φ(λ(γ_{k,l}(m ⊗ n))) = γ_{k,l}(m ⊗ n).

Since the split tensors m ⊗ n, for m ∈ M_k and n ∈ N_l, generate M_k ⊗A N_l = P_{k,l}, one has φ(λ(γ_{k,l}(t))) = γ_{k,l}(t) for every t ∈ P_{k,l}. Since the submodules γ_{k,l}(P_{k,l}) of P generate P, it follows that φ ◦ λ = id_P.

This implies that λ is an isomorphism and that φ is its inverse. □

Remark (8.1.9). — Let us keep the hypotheses of the proposition. Let B, C be rings and let us assume moreover that the modules M_i are (B,A)-bimodules, and the modules N_j are (A,C)-bimodules. Then, M is a (B,A)-bimodule, N is an (A,C)-bimodule and λ is an isomorphism of (B,C)-bimodules.

We need to show that λ(btc) = bλ(t)c for every b ∈ B, every c ∈ C and every t ∈ P. It follows from the definition of λ that

λ(b γ_{i,j}(m ⊗ n) c) = λ(γ_{i,j}((bm) ⊗ (nc))) = α_i(bm) ⊗ β_j(nc) = b λ(γ_{i,j}(m ⊗ n)) c

for every (i, j) ∈ I × J, m ∈ M_i, n ∈ N_j, b ∈ B and c ∈ C. Since the elements γ_{i,j}(m ⊗ n) generate P, the result follows.


Proposition (8.1.10). — Let A be a ring, let M be a right A-module, let I be a left ideal of A. Then, the map f from M to M ⊗A (A/I) which maps m to the tensor m ⊗ cl(1) is a surjective morphism of abelian groups. Its kernel is the subgroup MI of M, and f induces an isomorphism of abelian groups φ : M/MI → M ⊗A (A/I).

Moreover, if B is a ring and M is a (B,A)-bimodule, then f is a morphism of left B-modules.

Proof. — It is obvious that f is additive (and B-linear if M is a (B,A)-bimodule). Let m ∈ M and let a ∈ I; we have

f(ma) = ma ⊗ cl(1) = m ⊗ a cl(1) = m ⊗ 0 = 0,

so that the kernel of f contains the abelian subgroup MI of M. Consequently, f defines, by passing to the quotient, a morphism φ : M/MI → M ⊗A (A/I). To prove that φ is an isomorphism, we will construct its inverse.

The map g : M × (A/I) → M/MI sending (m, cl(a)) to the class of ma is well defined: if cl(a) = cl(b), there exists x ∈ I such that b = a + x; then, mb = ma + mx ≡ ma (mod MI). It is obviously bi-additive, as well as A-balanced (for m ∈ M and a, b ∈ A, g(mb, cl(a)) = cl(mba) = g(m, cl(ba))). Therefore, there exists a unique morphism of abelian groups ψ : M ⊗A (A/I) → M/MI such that ψ(m ⊗ cl(a)) = cl(ma) for every m ∈ M and every a ∈ A.

If m ∈ M, one has ψ(φ(cl_MI(m))) = ψ(m ⊗ cl(1)) = cl_MI(m), hence ψ ◦ φ is the identity of M/MI. On the other hand, for any m ∈ M and any a ∈ A, one has

φ(ψ(m ⊗ cl(a))) = φ(cl(ma)) = ma ⊗ cl(1) = m ⊗ a cl(1) = m ⊗ cl(a).

Since the split tensors generate M ⊗A (A/I), φ ◦ ψ is the identity morphism of M ⊗A (A/I). This concludes the proof of the proposition. □
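For example, applying this proposition with A = Z, M = Z/mZ and I = nZ gives (Z/mZ) ⊗Z (Z/nZ) ≅ (Z/mZ)/n(Z/mZ) ≅ Z/dZ, where d = gcd(m, n).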

Remark (8.1.11). — Let M be a right A-module, N be a left A-module, P be an abelian group, and let f : M ⊗A N → P be a morphism of abelian groups. Suppose we need to prove that f is an isomorphism. Surjectivity of f is often easy, because in most cases one can readily spot preimages of suitable generators of P. On the other hand, injectivity is more difficult. A direct approach would begin by considering a tensor ∑ m_i ⊗ n_i with image 0, and then trying to prove that this tensor is zero. By the given definition of the tensor product, this requires proving that the element ∑ e_{m_i,n_i} of the free abelian group Z^(M×N) is a linear combination of the elementary relations. But how?

In all important cases, an efficient way of proving that f is an isomorphism consists in defining its inverse g. To that aim, the proof of the surjectivity of f is useful. Indeed, one has often written a formula for the inverse g(z) of some elements z which generate P. Then try to prove that there exists a unique morphism g from P to M ⊗A N; in general, this amounts to checking that some formula does not depend on the choices needed to write it down. To conclude the proof, show that f and g are inverses one of the other.

8.1.12. Change of base ring. — Let A, C be rings, let α : A → C be a morphism of rings. Let M be a right A-module. The ring C can be seen as a left A-module via the morphism α, and as a right C-module; consequently, the abelian group M ⊗A C has a natural structure of right C-module.

In this setting, the universal property of the tensor product furnishes the following result: For every right C-module P and any A-linear morphism f : M → P, there exists a unique C-linear morphism φ : M ⊗A C → P such that φ(m ⊗ 1) = f(m) for every m ∈ M.

Let indeed f : M → P be an A-linear map, that is, an additive map such that f(ma) = f(m)α(a) for every a ∈ A and every m ∈ M. Let us first show that there is at most one C-linear map φ as required. Indeed, if φ and φ′ are two such maps, one has

φ(m ⊗ c) = φ((m ⊗ 1)c) = φ(m ⊗ 1)c = f(m)c

for every pair (m, c) ∈ M × C, and φ′(m ⊗ c) = f(m)c likewise. Consequently, the C-linear map φ′ − φ vanishes on all split tensors of M ⊗A C, hence is zero.

To prove the existence of a map φ such that φ(m ⊗ 1) = f(m) for every m ∈ M, we first consider the map f1 : (m, c) ↦ f(m)c from M × C to P. It is obviously bi-additive, as well as A-balanced, since f1(ma, c) = f(ma)c = f(m)α(a)c = f1(m, ac). Consequently, there exists a unique morphism of abelian groups φ : M ⊗A C → P such that φ(m ⊗ c) = f(m)c for every m ∈ M and every c ∈ C. In particular, φ(m ⊗ 1) = f(m).

Let us show that φ is C-linear. For any c′ ∈ C, one has φ((m ⊗ c)c′) = φ(m ⊗ cc′) = f(m)cc′ = φ(m ⊗ c)c′. Since the split tensors generate M ⊗A C, one has φ(tc′) = φ(t)c′ for every t ∈ M ⊗A C and every c′ ∈ C. In other words, φ is C-linear.
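For example, one checks directly that the map C → A ⊗A C, c ↦ 1 ⊗ c, is an isomorphism of right C-modules, with inverse a ⊗ c ↦ α(a)c; combining this with proposition 8.1.8, one gets A^r ⊗A C ≅ C^r for every integer r. For instance, Z^r ⊗Z Q ≅ Q^r.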

8.2. Tensor product of modules over a commutative ring

This is probably the most important case and we first sum up the situation.

Let A be a commutative ring, let M, N be A-modules. We constructed an abelian group M ⊗A N and an A-balanced bi-additive map θ : M × N → M ⊗A N. We also endowed the group M ⊗A N with the unique structure of an A-module for which

a(m ⊗ n) = (am) ⊗ n = m ⊗ (an).

Consequently, the map θ satisfies the relations

(i) θ(m + m′, n) = θ(m, n) + θ(m′, n),
(ii) θ(m, n + n′) = θ(m, n) + θ(m, n′),
(iii) θ(am, n) = θ(m, an) = aθ(m, n),

for m ∈ M, n ∈ N and a ∈ A. In other words, θ is A-bilinear.

Let f be an A-bilinear map from M × N to an A-module P. There exists a unique morphism of abelian groups φ : M ⊗A N → P such that φ(m ⊗ n) = f(m, n) for every (m, n) ∈ M × N. Moreover, if a ∈ A, then

φ(a(m ⊗ n)) = φ(am ⊗ n) = f(am, n) = a f(m, n) = aφ(m ⊗ n).

Since the split tensors generate M ⊗A N, we have φ(at) = aφ(t) for every t ∈ M ⊗A N. This means that φ is A-linear and is the unique A-linear map from M ⊗A N to P such that φ ◦ θ = f.

Proposition (8.2.1). — Let A be a commutative ring, let M and N be A-modules. Let (e_i)_{i∈I} and (f_j)_{j∈J} be families of elements respectively in M and N.

If the families (e_i) and (f_j) are generating families (resp. are bases), then the family (e_i ⊗ f_j)_{(i,j)∈I×J} generates (resp. is a basis of) M ⊗A N.

Proof. — Let us assume that both families (e_i) and (f_j) are generating families and let us show that the family (e_i ⊗ f_j) generates M ⊗A N. Since the split tensors of M ⊗A N generate M ⊗A N, it suffices to show that every split tensor is a linear combination of elements of the form e_i ⊗ f_j. Let thus m ∈ M and n ∈ N. By assumption, there exists a family (a_i) ∈ A^(I) such that m = ∑ a_i e_i, as well as a family (b_j) ∈ A^(J) such that n = ∑ b_j f_j. Then,

m ⊗ n = (∑_i a_i e_i) ⊗ (∑_j b_j f_j) = ∑_{i,j} (a_i e_i) ⊗ (b_j f_j) = ∑_{i,j} a_i b_j e_i ⊗ f_j,

hence the claim.

Let us now assume that the families (e_i) and (f_j) are bases. Since we already showed that the family (e_i ⊗ f_j) generates M ⊗A N, we just need to show that it is free. To that aim, let (a_{i,j}) be an almost-null family of elements of A, indexed by I × J, such that ∑_{i,j} a_{i,j} e_i ⊗ f_j = 0. This implies

∑_{i∈I} e_i ⊗ (∑_{j∈J} a_{i,j} f_j) = 0.

Let (e*_i) be the “dual basis” of the basis (e_i) and let (f*_j) be the “dual basis” of the basis (f_j) (they do not form bases unless I is finite and J is finite). Let (p, q) ∈ I × J; the map from M × N to A given by (m, n) ↦ e*_p(m) f*_q(n) is A-bilinear, hence there exists a unique morphism of A-modules from M ⊗A N to A which maps m ⊗ n to e*_p(m) f*_q(n) for every (m, n) ∈ M × N. Let us apply this morphism to our linear combination ∑_{i,j} a_{i,j} e_i ⊗ f_j. We get

0 = ∑_{i∈I, j∈J} a_{i,j} e*_p(e_i) f*_q(f_j) = a_{p,q}.

Consequently, a_{i,j} = 0 for every (i, j) ∈ I × J, so that the family (e_i ⊗ f_j) is free, as we needed to show. □

Example (8.2.2). — Let A be a commutative ring, let S be a multiplicative subset of A and let M be an A-module.

The map m ↦ m ⊗ 1 from M to M ⊗A S⁻¹A is a morphism of A-modules. Since M ⊗A S⁻¹A is an S⁻¹A-module, it extends to a morphism of S⁻¹A-modules, φ : S⁻¹M → M ⊗A S⁻¹A, given, explicitly, by m/s ↦ m ⊗ (1/s) for m ∈ M and s ∈ S. Let us show that φ is an isomorphism.

Let g : M × S⁻¹A → S⁻¹M be the map given by g(m, a/s) = am/s. One checks that it is well defined (that is, if a/s = b/t, then am/s = bm/t) and A-bilinear. Consequently, there exists a unique morphism of A-modules, ψ : M ⊗A S⁻¹A → S⁻¹M, such that ψ(m ⊗ a/s) = am/s for all a ∈ A, s ∈ S and m ∈ M.

One has

φ ◦ ψ(m ⊗ a/s) = φ(am/s) = am ⊗ (1/s) = m ⊗ a/s,

so that φ ◦ ψ is the identity morphism. Similarly,

ψ ◦ φ(m/s) = ψ(m ⊗ (1/s)) = m/s,

hence ψ ◦ φ is the identity morphism as well. Consequently, φ and ψ are isomorphisms of A-modules, inverse one of the other.

8.2.3. — Let M be an A-module, let M∨ = HomA(M, A) be its dual. Let N be an A-module.

Let d : M∨ × N → HomA(M, N) be the map such that, for every φ ∈ M∨ and every n ∈ N, d(φ, n) is the morphism x ↦ φ(x)n from M to N. It is A-bilinear. By the universal property of the tensor product, there exists a unique morphism of A-modules μ_{M,N} : M∨ ⊗A N → HomA(M, N) such that μ_{M,N}(φ ⊗ n) = d(φ, n) for every φ ∈ M∨ and every n ∈ N.

Proposition (8.2.4). — Let M be an A-module.

a) If M is finitely generated and projective, then the morphism μ_{M,N} : M∨ ⊗A N → HomA(M, N) is an isomorphism for every A-module N.

b) If the morphism μ_{M,M} : M∨ ⊗A M → EndA(M) is surjective, then M is finitely generated and projective.

Proof. — a) Let (e_i)_{1≤i≤r} be a finite generating family of M, let f : A^r → M be the surjective morphism given by f(a_1, …, a_r) = ∑_{i=1}^r a_i e_i. Since M is projective, there exists a morphism g : M → A^r such that f ◦ g = idM. For i ∈ {1, …, r}, let φ_i ∈ M∨ be the composition of g with the i-th projection A^r → A. By construction, one has m = ∑_{i=1}^r φ_i(m) e_i for every m ∈ M. In other words, idM = μ_{M,M}(∑_{i=1}^r φ_i ⊗ e_i).

Let N be an A-module, let u : M → N be a morphism of A-modules. Let t = ∑_{i=1}^r φ_i ⊗ u(e_i). By definition, the morphism μ_{M,N}(t) sends an element m ∈ M to ∑_{i=1}^r φ_i(m) u(e_i) = u(∑_{i=1}^r φ_i(m) e_i) = u(m), hence u = μ_{M,N}(t). This proves that μ_{M,N} is surjective.

Let then β : HomA(M, N) → M∨ ⊗A N be the map given by β(u) = ∑_{i=1}^r φ_i ⊗ u(e_i). It is A-linear and the previous computation shows that μ_{M,N} ◦ β = id_{HomA(M,N)}. Conversely, let φ ∈ M∨ and n ∈ N; since μ_{M,N}(φ ⊗ n)(e_i) = φ(e_i)n, we have

β ◦ μ_{M,N}(φ ⊗ n) = ∑_{i=1}^r φ_i ⊗ φ(e_i)n = ∑_{i=1}^r φ(e_i)φ_i ⊗ n = φ′ ⊗ n,

where φ′ = ∑_{i=1}^r φ(e_i)φ_i. For every m ∈ M, one has

φ′(m) = ∑_{i=1}^r φ(e_i)φ_i(m) = φ(∑_{i=1}^r φ_i(m) e_i) = φ(m),

so that φ′ = φ. This proves that β ◦ μ_{M,N}(φ ⊗ n) = φ ⊗ n. Consequently, β ◦ μ_{M,N} coincides with the identity morphism on split tensors, hence on the whole of M∨ ⊗A N. In particular, the morphism μ_{M,N} is injective. We thus have shown that μ_{M,N} is an isomorphism.

b) Assume that μ_{M,M} is surjective and let t ∈ M∨ ⊗A M be such that μ_{M,M}(t) = idM; write t = ∑_{i=1}^r φ_i ⊗ e_i, with φ_1, …, φ_r ∈ M∨ and e_1, …, e_r ∈ M.

Let m ∈ M. The equality m = idM(m) = μ_{M,M}(t)(m) rewrites as

m = ∑_{i=1}^r φ_i(m) e_i.

This proves that (e_1, …, e_r) generates M, which is thus finitely generated.

Let f : A^r → M be the morphism given by f(a_1, …, a_r) = ∑_{i=1}^r a_i e_i. Let g : M → A^r be the morphism given by g(m) = (φ_1(m), …, φ_r(m)). One has f ◦ g = idM. This implies that M is isomorphic to the submodule g(M) of A^r, and that g(M) is a direct summand of A^r. In particular, M is projective. □


8.2.5. — Let M be an A-module, let M∨ = HomA(M, A). Consider the morphism μ_{M,M} : M∨ ⊗A M → EndA(M) of §8.2.3.

Let t : M∨ × M → A be the map given by t(φ, m) = φ(m) for φ ∈ M∨ and m ∈ M. Again, it is A-bilinear. Consequently, there exists a unique linear form τ_M on M∨ ⊗A M such that τ_M(φ ⊗ m) = φ(m) for every φ ∈ M∨ and every m ∈ M.

Example (8.2.6). — Let us assume that M is free and finitely generated. In this case, proposition 8.2.4 asserts that μ_{M,N} : M∨ ⊗A N → HomA(M, N) is an isomorphism for every A-module N.

Let (e_1, …, e_r) be a basis of M and let (e*_1, …, e*_r) be its dual basis. Take N to be free and finitely generated and let (f_j)_{1≤j≤s} be a basis of N. Then the family (e*_i ⊗ f_j)_{1≤i≤r, 1≤j≤s} is a basis of M∨ ⊗A N. Let t = ∑_{i=1}^r ∑_{j=1}^s a_{i,j} e*_i ⊗ f_j be any element of M∨ ⊗A N and let u = μ_{M,N}(t). By definition, u(e_i) = ∑_{j=1}^s a_{i,j} f_j, so that U = (a_{i,j}) ∈ M_{s,r}(A) is the matrix of u in the bases (e_1, …, e_r) of M and (f_1, …, f_s) of N.

Now assume that M = N and take f_j = e_j for every j. Then

τ_M(t) = ∑_{i,j=1}^r a_{i,j} e*_i(e_j) = ∑_{i=1}^r a_{i,i} = Tr(U),

and the linear form τ_M : M∨ ⊗A M → A corresponds to the trace of an endomorphism of M.

8.2.7. Tensor product of algebras. — Let k be a commutative ring, let A and B be (not necessarily commutative) k-algebras. Let us show how to endow the k-module A ⊗k B with the structure of a k-algebra.

For x ∈ A (or B), write λ_x for the endomorphism of A (or B) given by left-multiplication by x. Let (a, b) ∈ A × B. The endomorphism λ_a ⊗ λ_b of A ⊗k B is k-linear. Moreover, the map (a, b) ↦ λ_a ⊗ λ_b from A × B to Endk(A ⊗k B) is k-bilinear, since the images of k in A and B are central. By the universal property of the tensor product, this gives a canonical k-linear morphism from A ⊗k B to Endk(A ⊗k B), denoted by t ↦ λ_t, which maps a split tensor a ⊗ b to λ_a ⊗ λ_b. This morphism then furnishes a k-bilinear map μ from (A ⊗k B) × (A ⊗k B) to A ⊗k B given by (t, t′) ↦ λ_t(t′).

Observe that

μ(a ⊗ b, a′ ⊗ b′) = λ_{a⊗b}(a′ ⊗ b′) = (λ_a ⊗ λ_b)(a′ ⊗ b′) = (aa′) ⊗ (bb′).

From that point on, it is straightforward (but a bit tedious) to prove that the composition law defined by μ is associative, that 1 ⊗ 1 is its unit element, and that it is commutative if A and B are commutative. This endows A ⊗k B with the structure of a k-algebra.
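For example, let A = k[X] and B = k[Y] be polynomial algebras. By proposition 8.2.1, the family (X^i ⊗ Y^j) is a basis of the k-module A ⊗k B, and the multiplication just defined sends (X^i ⊗ Y^j)(X^p ⊗ Y^q) to X^{i+p} ⊗ Y^{j+q}; the k-linear map sending X^i ⊗ Y^j to X^i Y^j is therefore an isomorphism of k-algebras from k[X] ⊗k k[Y] to k[X, Y].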

8.3. Tensor algebras, symmetric and exterior algebra

In this section, A is a commutative ring.

The tensor product of two modules represents bilinear maps; we first

generalize the construction for multilinear maps on products of more

than two modules.

Lemma (8.3.1). — Let (M8)8∈I be a family of A-modules. There exists an

A-module T and a multilinear map � :

∏8 M8 → T that possesses the fol-

lowing universal property: for every A-module N and every multilinear map

5 :

∏M8 → P, there exists a unique morphism of A-modules ! : T→ P such

that ! ◦ � = 5 .

Proof. — Let F be the free A-module with basis

∏8 M8. For any element

< = (<8)8∈I of∏

8 M8, one writes 4< for the corresponding basis element

of F. One then defines T as the quotient of the free A-module F by the

submodule R generated by the following elements:

– the elements 4<− 4<′− 4<′′whenever< = (<8),<′ = (<′8),<′′ = (<′′8 )are three elements of

∏8 M8 for which there exists 9 ∈ I such that

<8 = <′8= <′′

8if 8 ≠ 9, and < 9 = <

′9+ <′′

9;

– the elements 04< − 4<′ whenever 0 ∈ A and < = (<8), <′ = (<′8) aretwo elements of

∏8 M8 for which there exists 9 ∈ I such that <8 = <

′8if

8 ≠ 9, and <′9= 0< 9.

For < ∈∏8 M8, let �(<) be the class of 4< in T. The map � is multilinear,

because we preciselymodded out by all the necessary elements. Observe

that the elements of the form 4< generate F, hence the image of �generates T.


Let 5 :

∏8 M8 → N be any multilinear map, let 50 : F → N be the

unique morphism such that 50(4<) = 5 (<), for every < ∈∏8 M8. Since

5 is multilinear, the submodule R is contained in Ker( 50); consequently,there exists a morphism ! : T→ N such that !(�(<)) = 50(4<) = 5 (<)for every < ∈∏

8 M8. In other words, ! ◦ � = 5 .

If ! and !′ are two morphisms such that 5 = ! ◦ � = !′ ◦ �, then! − !′ vanishes on the image of �, hence on the submodule generated

by � which is T. Consequently, ! = !′. �

The A-module T is called the tensor product of the family (M8) of A-

modules; it is denoted

⊗8∈I M8. When I = {1, . . . , =}, one writes it rather

M1 ⊗ · · · ⊗M=. For any element < = (<8) ∈∏

M8, the element �(<) of T

is written ⊗8<8, or <1 ⊗ · · · ⊗ <= if I = {1, . . . , =}.

Remark (8.3.2). — As any solution of a universal problem, the pair

(�,T) is uniquely characterized by this universal property. Let indeed

(�′,T′) be a pair consisting of an A-module T′and of a multilinear

map �′ :∏

M8 → T′satisfying the universal property. First of all, by

the universal property applied to (�,T) and to the multilinear map �′,there exists a morphism of A-modules 5 : T→ T

′such that �′ = 5 ◦ �.

Similarly, there exists a morphism of A-modules 5 ′ : T′→ T such that

� = 5 ′ ◦ �′. Then, one has � = ( 5 ′ ◦ 5 ) ◦ �; by the universal property

of (�,T), idT is the unique morphism 6 : T → T such that � = 6 ◦ �;consequently, 5 ′ ◦ 5 = idT. Reversing the roles of T and T

′, we also have

5 ◦ 5 ′ = idT′. This implies that 5 is an isomorphism.

Remark (8.3.3). — When I = {1, . . . , =}, one can also construct the tensor

product M1 ⊗ · · · ⊗M= by induction. More precisely, let M1, . . . ,M= be

A-modules. Let P be a way of parenthesizing the product G1 . . . G=. They

are defined by induction using the following rules:

– If = = 1, then G1 has a unique parenthesizing, G1;

– If = > 2, one obtains the parenthesizings of G1 . . . G= by choosing

9 ∈ {1, . . . , =−1}, andP = (P′)(P′′)whereP′is a parenthesizing of G1 . . . G 9

and P′′is a parenthesizing of G 9+1 . . . G=.

For example, G1G2 has a unique parenthesizing, namely (G1)(G2); for= = 5, ((G1)(G2))(((G3)(G4))(G5)) is a parenthesizing of G1G2G3G4G5.


To any such parenthesizing, one can define by induction a module MP

and a multilinear map �P :

∏M8 →MP. When = = 1, one sets MP = M1

and �P = id. When = > 2 and P = (P′)(P′′), one defines MP = MP′⊗A MP

′′

and �P = �P′ ⊗ �P

′′. For exemple, the previous parenthesizing P of

G1G2G3G4G5 gives rise to

MP = (M1 ⊗M2) ⊗ ((M3 ⊗M4) ⊗M5)and

�P(<1, <2, <3, <4, <5) = (<1 ⊗ <2) ⊗ ((<3 ⊗ <4) ⊗ <5).However, a map on M1 × · · · ×M= is multilinear if and only if it is mul-

tilinear with respect to both the first group of variables (<1, . . . , < 9) andthe second group (< 9+1, . . . , <=). By induction, we see that the map �P is

multilinear hence that the module MP is also a solution of the universal

property. Consequently, there exists a unique morphism of A-modules

from

⊗=8=1

M8 to MP which maps �(<1, . . . , <=) to �P(<1, . . . , <=).In the sequel, we shall neglect these various “associativity” isomor-

phisms.

Proposition (8.3.4). — Let M1, . . . ,M= be A-modules. For every

8 ∈ {1, . . . , =}, let (4 89)9∈J8 be a family of elements of M8. Let J = J1 × · · · × J=;

for every 9 = (91, . . . , 9=) ∈ J, let 4 9 = 41

91⊗ · · · ⊗ 4=

9=.

a) If for each 8, (4 89)9∈J8 generates M8, then the family (4 9)9∈J generates the

tensor product M1 ⊗ · · · ⊗M=.

b) If for each 8, (4 89)9∈J8 is a basis of M8, then the family (4 9)9∈J is a basis of the

tensor product M1 ⊗ · · · ⊗M=.

Proof. — The case = = 1 is trivial, the case = = 2 is proposition 8.2.1. The

general case follows by induction thanks to the step by step construction

of M1 ⊗ · · · ⊗M=. �

8.3.5. The tensor algebra. — Let M be an A-module. Set M0 = A and,

for any positive integer =, set M= be the tensor product of = copies of the

A-module M. Modulo the associativity isomorphisms, we have M= =

M? ⊗A M@ whenever ?, @, = are nonnegative integers such that ? + @ = =.Themap (<1, . . . , <=) ↦→ <1⊗ · · ·⊗<= from M

=to M= is =-linear and the


module M= satisfies the following universal property: for every =-linear

map 5 from M=to an A-module P, there exists a unique morphism of

A-modules, ! : M= → P, such that !(<1 ⊗ · · · ⊗<=) = 5 (<1, . . . , <=) forevery (<1, . . . , <=) ∈ M

=.

If M is free and (48)8∈I is a basis of M, then for every = ∈ N, the

A-module M= is free with basis the family (481 ⊗ · · · ⊗ 48=) indexed by

(81, . . . , 8=) ∈ I=. This follows indeed from proposition 8.3.4.

Let T(M) be the direct sum of all A-modules M=, for = ∈ N. The

submodule M= of T(M)will be written T=(M). An element of T

=(M)will

be said to be of degree =.

The associativity isomorphismsM?⊗M@ 'M?+@ furnish bilinearmaps

T?(M) × T

@(M) → T?+@(M), for ?, @ > 0. There is a unique structure

of an A-algebra on the A-module T(M) of which multiplication law is

given by these maps. The resulting algebra is called the tensor algebra of

the A-module M. It contains M = T1(M) as a submodule. The elements <,

for < ∈ M = T1(M), generate T(M) as an A-algebra.

This algebra satisfies the following universal property: for every A-

algebra (assocative, with unit) B, and every morphism ofA-modules 5 : M→ B,

there exists a unique morphism of A-algebras, ! : T(M) → B, such that

!(<) = 5 (<) for every < ∈ M.

If M is free with basis (48)8∈I, then T(M) is a free A-module with

basis the disjoint union of the bases 1 ∈ T0(M) and 481 ⊗ · · · ⊗ 48= for

(81, . . . , 8=) ∈ I=and = > 1.
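For example, if M is free of rank 1 with basis e, then T^n(M) is free of rank 1 with basis e ⊗ ⋯ ⊗ e (n factors), and the tensor algebra T(M) is isomorphic, as an A-algebra, to the polynomial algebra A[X], the isomorphism sending the basis element of degree n to X^n.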

8.3.6. The symmetric algebra. — The tensor algebra T(M) is not com-

mutative in general. By definition, the symmetric algebra of the A-

module M is the quotient of T(M) by the two-sided ideal I generated

by elements of the form <1 ⊗ <2 − <2 ⊗ <1, for <1, <2 ∈ M. It is

denoted S(M).For = ∈ N, let I= = I∩T

=(M); this is a subbmodule of T=(M). The ideal I

is homogeneous in the sense that one has I =⊕

= I=. The inclusion⊕= I= ⊂ I is obvious. Conversely, let <1, <2 ∈ M and write <1 =

∑<(=)1

and <2 =∑<(=)2

, where <(=)8∈ T

=(M) for all =. Then

<1 ⊗ <2 − <2 ⊗ <1 =

∑=∈N

∑?+@==

(<(?)1⊗ <(@)

2− <(@)

1⊗ <(?)

2

)


belongs to

⊕= I=; this implies that the generators of I are contained in

the two sided ideal

⊕I=, hence that I =

⊕I=.

These computations also show that I0 = 0 and I1 = 0.

Write S=(M) for the image of T

=(M) in S(M); one has S=(M) = T

=(M)/I=and S(M) =

⊕S=(M); moreover, S

0(M) = A and S1(M) = M.

Observe that, by construction, all elements of S1(M) pairwise commute,

and that S(M) is generated by S1(M). Consequently, the algebra S(M) is

commutative.

The algebra S(M) is a commutativeA-algebra togetherwith amorphism

M→ S(M) which satisfies the following universal property: For every

commutative A-algebra B and every morphism 5 : M → B of A-modules,

there exists a unique morphism of A-algebras, ! : S(M) → B, such that

!(<) = 5 (<) for every < ∈ M = S1(M).

For every integer =, the A-module S=(M) is endowed with an =-

linear symmetric map M= → S

=(M). It satisfies the following universal

property: For everyA-moduleP and any =-linear symmetric map 5 : M= → P,

there exists a unique morphism of A-modules ! : S=(M) → P such that

!(<1 . . . <=) = 5 (<1, . . . , <=) for every <1, . . . , <= ∈ M.


8.3.7. The exterior algebra. — As we have observed in section 3.8,

alternatemultilinearmaps are a very important object. Let us understand

them with the tool of the tensor product, analogously to the description

of symmetric multilinear maps from the the symmetric algebra.

Let J be the two-sided ideal of T(M) generated by all elements of the

form < ⊗ <, for < ∈ M. It is an “homogeneous ideal” of T(M), that is tosay, J =

⊕∞==0

J=, where J= = J ∩ T=(M). Indeed,

⊕= J= is a two-sided

ideal which contains all elements of the form < ⊗ <, for < ∈ M, hence

is equal to J. Moreover, J0 = 0 and J1 = 0.

Let Λ(M) be the quotient of the algebra T(M) by this two-sided

ideal J. As an A-module, one has Λ(M) =⊕∞

==0Λ=(M), where Λ=(M) =

T=(M)/J=. In particular, Λ0(M) = A and Λ1(M) = M. The multiplication

of Λ(M) is denoted with a symbol ∧. For example, one has < ∧ < = 0

for every < ∈ M.

1. Should the proofs be given here? (cf. the alternate case and exercise 8/20.)


For any <1, <2 ∈ M, one has

<1 ⊗ <2 + <2 ⊗ <1 = (<1 + <2) ⊗ (<1 + <2) − <1 ⊗ <1 − <2 ⊗ <2 ∈ J,

so that <1 ∧ <2 = −<2 ∧ <1. More generally, let <1, . . . , <= ∈ M and let

� ∈ S= be any permutation of {1, . . . , =}. Then,

<�(1) ∧ · · · ∧ <�(=)) = �(�)<1 ∧ · · · ∧ <= ,

where �(�) is the signature of �.The A-algebra Λ(M) satisfies the following universal property: For

every A-algebra B and any morphism 5 : M→ B such that 5 (<) · 5 (<) =0 for every < ∈ M, there exists a unique morphism of A-algebras,

! : Λ(M) → B, such that !(<) = 5 (<) for every < ∈ M = Λ1(M).An element of Λ?(M) is called a ?-vector. Any ?-vector can be written

as a sum of split ?-vectors, namely ?-vectors of the form <1 ∧ · · · ∧ <?,

for <1, . . . , <? ∈ M. By ?-linearity, we see that we can restrict ourselves

to vectors <8 in some generating family of M.

More precisely, let us assume that (<8)1686= generates M. Then

any ?-vector in Λ?(M) is a linear combination of vectors of the form

<81 ∧ · · · ∧ <8? , for 81, . . . , 8? ∈ {1, . . . , =}. By the alternate property, we

can even assume that 81 6 · · · 6 8?. Then, if ? > =, we see that two

consecutive indices are equal hence <81 ∧ · · · ∧ <8? = 0 by the alternate

property. This shows that Λ?(M) = 0 for ? > =.
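For example, if M = A^2 with basis (e_1, e_2), then Λ^0(M) = A, Λ^1(M) = M, the module Λ^2(M) is generated by e_1 ∧ e_2, and Λ^p(M) = 0 for p ≥ 3; proposition 8.4.1 below shows that e_1 ∧ e_2 is in fact a basis of Λ^2(M), so that Λ(M) is a free A-module of rank 4, with basis (1, e_1, e_2, e_1 ∧ e_2).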

Proposition (8.3.8). — Let = ∈ N. The A-module Λ=(M) is endowed with

an alternate =-linear map M= → Λ=(M), (<1, . . . , <=) ↦→ <1 ∧ · · · ∧ <=.

It satisfied the following universal property: For every A-module P and

any =-linear alternate map 5 : M= → P, there exists a unique morphism

! : Λ=(M) → P of A-modules such that !(<1 ∧ · · · ∧ <=) = 5 (<1, . . . , <=).

Proof. — The uniqueness of the morphism ! follows from the fact that

the products <1 ∧ · · · ∧ <=, for <1, . . . , <= ∈ M, generate Λ=(M). Let

us show its existence. Let !1 : T=(M) → P the canonical morphism

of A-modules which is deduced from the multilinear map 5 ; one has

!1(<1 ⊗ · · · ⊗ <=) = 5 (<1, . . . , <=) for every (<1, . . . , <=) ∈ M=. Let

us show that !1 vanishes on the kernel J= of the canonical morphism

from T=(M) to Λ=(M).


Let <1, . . . , <= ∈ M. If there exists 8 ∈ {1, . . . , = − 1} such that

<8 = <8+1, then the tensor

<1 ⊗ · · · ⊗ <= = (<1 ⊗ · · · ⊗ <8−1) ⊗ (<8 ⊗ <8) ⊗ (<8+2 ⊗ . . . <=)belongs to the ideal J, kernel of the canonical morphism of algebras

from T(M) toΛ(M). Moreover, when = varies, as well as<1, . . . , <= ∈ M,

we see that these tensors generate a two-sided ideal of T(M) containingall elements of the form < ⊗ <, hence is equal to J. In particular, any

element of J= = J ∩ T=(M) is a linear combination of such tensors.

Since 5 is alternate, one has

!1(<1 ⊗ · · · ⊗ <=) = 5 (<1, . . . , <=) = 0

as soon as there exists 8 ∈ {1, . . . , = − 1} such that <8 = <8+1. Conse-

quently, !1 vanishes on J=. This implies that there exists a morphism

ofA-modules! : Λ=(M) → P such that!(<1∧· · ·∧<=) = !1(<1⊗. . . <=)for every (<1, . . . , <=) ∈ M

=, that is, !(<1 ∧ · · · ∧ <=) = 5 (<1, . . . , <=).

This concludes the proof. �

Proposition (8.3.9). — Let M and N be A-modules, let D : M → N be a

morphism. There exists a unique morphism of A-algebras, Λ(D) : Λ(M) →Λ(N), which coincides with D on Λ1(M). One has Λ(idM) = idΛ(M). If P is

another A-module and E : N → P is a morphism, then one has Λ(E ◦ D) =Λ(E) ◦Λ(D).

Proof. — This follows at once from the universal property. Indeed, let

� : N→ Λ(N) be the canonical injection; then, the map � ◦D maps every

element < of M to the element D(<) of Λ1(N), hence its square is zero.Consequently, there exists a morphism of algebras from Λ(M) to Λ(N)which coincides with D on M ' Λ1(M).Morover, Λ(E) ◦ Λ(D) is a morphism of algebras from Λ(M) to Λ(P)

which coincides with E ◦ D on M, hence is equal to Λ(E ◦ D). �

Let = ∈ N. By restriction to Λ=(M), the morphism of algebras Λ(D)defines a morphism Λ=(D) : Λ=(M) → Λ=(M) of A-modules. It is

characterized by the formula

Λ=(D)(<1 ∧ . . . <=) = D(<1) ∧ · · · ∧ D(<=)for every (<1, . . . , <=) ∈ M

=.


8.4. The exterior algebra and determinants

Proposition (8.4.1). — Let M be a free A-module, let (48)8∈I be a basis of M and

let � be a total ordering on I. Then, the family of ?-vectors 481 ∧ . . . 48? , where(81, . . . , 8?) runs along the set of stricly increasing sequences of indices, is a

basis of Λ?(M). In particular, if M is finitely generated and = is its rank, then

Λ?(M) is free of rank(=?

).

Let J be a subset of I with ? elements, and let (81, . . . , 8?) be the uniquestrictly increasing tuple such that J = {81, . . . , 8?}; we set 4J = 481 ∧ . . . 48? .Proof. — Any ?-vector in M is a linear combination of ?-vectors of the

form 481∧· · ·∧48? , where 81, . . . , 8? belong to I. Using the relations 48∧4 9 =−4 9 ∧ 48 for 8 ≠ 9, as well as 48 ∧ 48 = 0, we may only consider families

(81, . . . , 8?) such that 81 < · · · < 8?. This shows that the elements 4J, for

all ?-subsets J of I, generate Λ?(M).Let us show that these elements actually form a basis of Λ?(M) and let

us consider a linear dependence relation

∑J0J4J = 0 among them. The

A-module T?(M) is free and the family (481 ⊗ · · · ⊗ 48?), for (81, . . . , 8?) ∈ I

?,

is a basis. Let us fix a subset J ⊂ {1, . . . , =} with cardinality ?; let us

write J = { 91, . . . , 9?} where 91 < · · · < 9?. Let 5J : M=be the unique

?-linear form on M which maps (481 , . . . , 48?) to 0 if {81, . . . , 8?} ≠ J and

(481 , . . . , 48?) to the signature of the permutation which maps 8: to 9: , for

any : ∈ {1, . . . , ?}. It is alternate, hence there exists a unique morphism

of A-modules !J : Λ?(M) → A which maps 4I to 0 if I ≠ J and 4J to 1.

Let us apply !J to the dependence relation

∑J0J4J = 0; we get 0J = 0.

Consequently, the given linear dependence relation is trivial, hence the

family (4J) is free. It is thus a basis of Λ?(M). �

Lemma (8.4.2). — Let M be an A-module and let 5 : M=−1 → A be an

alternate (= − 1)-linear form. The map 5 ′ : M= →M given by

5 ′(G1, . . . , G=) ==∑?=1

(−1)? 5 (G1, . . . , G? , . . . , G=)G? ,

where the G? means that this element is omitted from the tuple, is =-linear and

alternate.


Proof. — It is obvious that 5 ′ is linear with respect to each variable.

Moreover, if G8 = G8+1, the terms 5 (G1, . . . , G? , . . . , G=)G? are 0 whenever

? ≠ 8 and ? ≠ 8 + 1, and the terms with indices 8 and 8 + 1 are opposite

one of the other. This proves the lemma. �

Proposition (8.4.3). — Let M be a free finitely generated A-module and let

G1, . . . , G= be elements of M. For the elements G1, . . . , G= to be linearly

dependent, it is necessary and sufficient that there exists � ∈ A {0} such that�G1 ∧ · · · ∧ G= = 0.

Proof. — Assume that G1, . . . , G= are linearly dependent and let 01G1 +· · · + 0=G= = 0 be a nontrivial linear dependence relation. To fix the ideas,

assume that 01 ≠ 0. Then, 01G1 is a linear combination of G2, . . . , G=, so

that (01G1) ∧ G2 ∧ · · · ∧ G= = 0; this implies 01G1 ∧ · · · ∧ G= = 0.

Conversely, let us show by induction on = that if �G1 ∧ · · · ∧ G= = 0,

for some nonzero � ∈ A, then G1, . . . , G= are linearly dependent.

This holds when = = 0 (there is nothing to prove) and when = = 1

(by definition). So, let us assume that this assertion holds up to = − 1

and assume that �G1 ∧ · · · ∧ G= = 0, where � ≠ 0. If G2, . . . , G= are

linearly dependent, then G1, . . . , G= are also linearly dependent too, so

that we may assume that G2, . . . , G= are linearly independent and that

G2 ∧ · · · ∧ G= ≠ 0. Since Λ=−1(M) is a free A-module, there exists a

linear form ! on Λ=−1(M) such that � = !(G2 ∧ · · · ∧ G=) ≠ 0. In other

words, the map 5 : M=−1→ A given by (E2, . . . , E=) ↦→ !(E2 ∧ · · · ∧ E=)

is an alternate (= − 1)-linear form on M and 5 (G2, . . . , G=) = � ≠ 0. Let

5 ′ : M= →M be the alternate =-linear map defined in lemma 8.4.2 above.

Since �G1 ∧ · · · ∧ G= = 0, one has 5 ′(�G1, G2, . . . , G=) = 0, hence a relation

−� 5 (G2, . . . , G=)G1 + �=∑?=2

5 (G1, . . . , G? , . . . , G=)G? = 0.

Since � 5 (G2, . . . , G=) ≠ 0, the vectors G1, . . . , G= are linearly dependent,

hence the proposition. �

Corollary (8.4.4). — Let A be a nonzero commutative ring, let M be a free

A-module, let = > 1 be an integer, and let 41, . . . , 4= be elements of M. Then

(41, . . . , 4=) is a basis of M if and only if 41 ∧ · · · ∧ 4= is a basis of Λ=(M).


Proof. — If 41, . . . , 4= is a basis of M, we proved in proposition 8.4.1 that

41 ∧ · · · ∧ 4= is a basis of Λ=(M).Conversely, let us assume that 41 ∧ · · · ∧ 4= be a basis of Λ=(M). By

the preceding proposition, the family (41, . . . , 4=) is free. It then follows

from proposition 8.4.1 that M is finitely generated, that its rank is equal

to =, that Λ=−1(M) ≠ 0 and that Λ=+1(M) = 0. Let 5 be the map from M=

to A such that 5 (G1, . . . , G=) is the unique element � ∈ A such that

G1 ∧ · · · ∧ G= = �41 ∧ · · · ∧ 4=, for any G1, . . . , G= ∈ M. The map 5 is

=-linear and alternate (it is the composition of the =-linear alternate map

M= → Λ=(M) given by (G1, . . . , G=) ↦→ G1 ∧ · · · ∧ G=, and of the inverse

of the isomorphism A → Λ=(M), given by 0 ↦→ 041 ∧ · · · ∧ 4=). Since

Λ=+1(M) = 0, the map 5 ′ from M=+1

to M defined in lemma 8.4.2 is zero,

because it is (= + 1)-linear and alternate. In particular, for every G ∈ M,

5 ′(G, 41, . . . , 4=) = − 5 (41, . . . , 4=)G+=∑?=1

(−1)?+1 5 (G, 41, . . . , 4? , . . . , 4=)4? = 0.

Since 5 (41, . . . , 4=) = 1, one gets

G =

=∑?=1

(−1)?+1 5 (G, 41, . . . , 4? , . . . , 4=)4? = 0,

which proves that the family (41, . . . , 4=) generates M. It is thus a basis

of M. �

Corollary (8.4.5). — Let A be a a nonzero commutative ring and let M be an

A-module which possesses a basis of cardinality =.

a) Every free family in M has at most = elements.

b) Every generating family in M has at least = elements.

Proof. — Let G1, . . . , G? be elements of M. If (G1, . . . , G?) is free, thenG1∧· · ·∧G? ≠ 0, henceΛ?(M) ≠ 0 and ? 6 =. If (G1, . . . , G?) is generating,then Λ:(M) = 0 for : > ?, hence = 6 ?. �

Consequently, when A is a nonzero commutative ring, it is unam-

biguous to define the rank of a free finitely generated A-module as the

cardinality of any basis of M.


Definition (8.4.6). — Let M be a free A-module of rank = and let D be an

endomorphism of M. The endomorphism Λ=(D) of Λ=(M) is an homothety. Its

ratio is called the determinant of D and written det(D).

By theprecedingproposition,Λ=(M) is isomorphic toA. Let � be a basisof Λ=(M) and let � ∈ A by such that Λ=(D)(�) = ��. For any G ∈ Λ=(M),let 0 ∈ A be such that G = 0�; then,Λ=(G) = Λ=(0�) = 0Λ=(�) = 0�� = �G.This shows that Λ=(D) is an homothety.

Let D and E be endomorphism of a free A-module of rank =. One has

Λ=(E ◦ D) = Λ=(E) ◦ Λ=(D), hence det(E ◦ D) = det(E)det(D). One also

has Λ=(idM) = id, so that det(idM) = 1. If D is invertible, with inverse E,

then Λ=(D) is invertible too, with inverse Λ=(E). It follows that det(D) isan invertible element of A, with inverse det(E). In other words,

det(D−1) = det(D)−1.

Proposition (8.4.7). — Let A be a nonzero commutative ring, let M be a free

A-module of rank = and let D be an endomorphism of M.

a) The morphism D is an isomorphism if and only if D is surjective, if and

only if det(D) is invertible.b) The morphism D is injective if and only if det(D) is regular in A.

Proof. — Let (41, . . . , 4=) be a basis of M.

If D is an automorphism of M, we just proved that det(D) is invertiblein A. More generally, let us assume that D is surjective. Since M is

free, the morphism D has a right inverse E. (For every 8 ∈ {1, . . . , =},let 58 ∈ M be any element such that D( 58) = 48, and let E be the unique

endomorphism of M such that E(48) = 58 for every 8.) One has D◦E = idM,

hence det(D)det(E) = det(idM) = 1. This shows that det(D) is invertible.Let us now prove that if det(D) is invertible, then D is an automorphism

ofM. Bydefinition, onehasD(41)∧· · ·∧D(4=) = det(D)41∧· · ·∧4=. Ifdet(D)is invertible, then D(41)∧ · · ·∧D(4=) is a basis ofΛ=(M). By corollary 8.4.4,

(D(41), . . . , D(4=)) is a basis of M. Let E be the unique endomorphism

of M such E(D(48)) = 48 for every 8 ∈ {1, . . . , =}. One has E ◦ D(48) = 48 forevery 8, hence E ◦ D = id. Consequently, D ◦ E(D(48)) = D(48) for every 8;since (D(41), . . . , D(4=)) is a basis of M, this implies that D ◦ E = id. In

other words, D is invertible, and E is its inverse.


For themorphism D to be injective, it is necessary and sufficient that the

family (D(41), . . . , D(4=)) be free. If this holds, proposition 8.4.3 asserts

that for every � ∈ A {0}, �D(41) ∧ · · · ∧ D(4=) ≠ 0. Since 41 ∧ · · · ∧ 4= isa basis of Λ=(M), we see that D is injective if and only if �det(D) ≠ 0 for

� ≠ 0, in other words, if and only if det(D) is regular in A. �

8.4.8. Determinant of a matrix. — One defines the determinant of a matrix U ∈ Mn(A) as the determinant of the endomorphism u of A^n it represents in the canonical basis (e_1, …, e_n) of A^n. One has

Λ^n(u)(e_1 ∧ ⋯ ∧ e_n) = u(e_1) ∧ ⋯ ∧ u(e_n)
= ∑_{i_1=1}^{n} ⋯ ∑_{i_n=1}^{n} U_{i_1,1} ⋯ U_{i_n,n} e_{i_1} ∧ e_{i_2} ∧ ⋯ ∧ e_{i_n}
= ∑_{σ∈S_n} ε(σ) U_{σ(1),1} ⋯ U_{σ(n),n} e_1 ∧ ⋯ ∧ e_n.

Consequently,

det(U) = ∑_{σ∈S_n} ε(σ) U_{σ(1),1} ⋯ U_{σ(n),n}.

Since a permutation and its inverse have the same signature, we also obtain

det(U) = ∑_{σ∈S_n} ε(σ) U_{1,σ(1)} ⋯ U_{n,σ(n)}.

In particular, a matrix and its transpose have the same determinant.
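For instance, for n = 2 the sum has only two terms, corresponding to the identity and to the transposition of {1, 2}, and the formula reads det(U) = U_{1,1}U_{2,2} − U_{2,1}U_{1,2}.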

8.4.9. Laplace expansion of a determinant. — Let = be an integer and

let U be an = × =-matrix with coefficients in a commutative ring A.

Let D be the endomorphism of A=whose matrix in the canonical basis

(41, . . . , 4=) is equal to U. For any subset I of {1, . . . , =}, let I be the

complementary subset. If I = {81, . . . , 8?}, with 81 < · · · < 8?, we denote

4I = 481 ∧ · · · ∧ 48? . If I and J are any two subsets of {1, . . . , =}, one has4I ∧ 4J = 0 if I∩ J ≠ ∅. Assume that I∩ J = ∅, and let < be the number of

pairs (8 , 9) ∈ I× J such that 8 > 9 and set �I,J = (−1)<; then, 4I∧ 4J = �I,J4I∪J.

Let I and J be subsets of {1, . . . , =}; let UI,J be the submatrix (U8 , 9)8∈I9∈J

of U obtained by selecting the rows with indices in I and the columns


with indices in J. When I and J have the same cardinality, the determinant

of UI,J is called the (I, J)-minor of U. With these notations, we can state

the following proposition.

Proposition (8.4.10). — a) Let I be any subset of {1, . . . , =}, let ? =

Card(I). Then, Λ?(D)(4I) =∑

Jdet(UJ,I)4J, where J runs among all subsets of

cardinality ? of {1, . . . , =}.b) Let J and K be subsets of {1, . . . , =} with the same cardinality. Then∑

I

�J,J�K,K det(UI,J)det(U

I,K) ={

det(U) if J = K;

0 otherwise,

the sum being over all subsets I of {1, . . . , =} with cardinality ?.

Proof. — a) Let 81, . . . , 8? be the elements of I, written in increasing order.

One has

Λ?(D)(4I) = D(481) ∧ · · · ∧ D(48?)

=

=∑91=1

. . .

=∑9?=1

U91 ,81 . . .U9? ,8? 4 91 ∧ · · · ∧ 4 9?

=

∑91<···< 9?

©­«∑�∈S?

�(�)U9�(1) ,81 . . .U9�(?) ,8?ª®¬ 4 91 ∧ · · · ∧ 4 9?

=

∑J

det(UJ,I)4J,

as was to be shown.

b) By definition,

Λ?(D)(4I) ∧Λ=−?(D)(4K) = Λ=(D)(4I ∧ 4K)

vanishes if I and K have a common element, that is if I ≠ K, since they

have the same cardinality. Otherwise, if I = K, it is equal to

Λ=(D)(�I,I41 ∧ · · · ∧ 4=) = �

I,I det(U)41 ∧ · · · ∧ 4= .


On the other hand, it follows from a) that

Λ?(D)(4I) ∧Λ=−?(D)(4K) =∑

Card(J)=?det(UJ,I)4J

∑Card(L)=?

det(UL,K)4L

=

∑Card(J)=?

∑Card(L)=?

det(UJ,I)det(UL,K)4J ∧ 4L

=

∑Card(J)=?

L=J

det(UJ,I)det(UJ,K)4J ∧ 4J

=©­«

∑Card(J)=?

det(UJ,I)det(UJ,K)�J,J

ª®¬ 41 ∧ · · · ∧ 4= .Comparing these two formulae, we obtain that

�I,I

∑Card(J)=?

�J,K det(UJ,I)det(U

J,K)

equals 0 if I ≠ K, and det(U) otherwise. This concludes the proof of the

proposition. �

Applied with ? = 1, the formula of b) recovers the well-known formula

for the expansion of a determinant along a column.

Corollary (8.4.11). — Let U be any n × n matrix with coefficients in A. Let C be the adjugate matrix of U: for every pair (i, j), the coefficient C_{i,j} is equal to (−1)^{i+j} times the determinant of the matrix obtained from U by deleting row i and column j. One has C^t · U = U · C^t = det(U)I_n.

Proof. — The formula C^t · U = det(U)I_n is nothing but the Laplace expansion formula for p = 1. The other one then follows by transposition. Indeed, the adjugate matrix of U^t is the transpose of C; consequently, C · U^t = det(U^t)I_n = det(U)I_n, hence, by transposition, U · C^t = det(U)I_n. □
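For example, for a 2 × 2 matrix U with entries U_{1,1} = a, U_{1,2} = b, U_{2,1} = c, U_{2,2} = d, the cofactors are C_{1,1} = d, C_{1,2} = −c, C_{2,1} = −b and C_{2,2} = a, so that C^t · U = U · C^t = (ad − bc)I_2; when det(U) = ad − bc is invertible, this recovers the familiar formula for the inverse of U.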

8.5. Adjunction and exactness

Theorem (8.5.1). — Let A be a ring, let M be a right A-module, let

N1 −f→ N2 −g→ N3 → 0

be an exact sequence of left A-modules. Then, the following diagram

M ⊗A N1 −(idM ⊗ f)→ M ⊗A N2 −(idM ⊗ g)→ M ⊗A N3 → 0

is exact.

Proof. — Exactness atM⊗AN3. Weneed to prove that idM ⊗6 is surjective.Since every element of M ⊗A N3 is a sum of split tensors, it suffices

to show that for any < ∈ M and any I ∈ N3, the tensor < ⊗ I has a

preimage in M ⊗A N2. Since 6 is surjective, there exists H ∈ N2 such that

6(H) = I. Then < ⊗ I = (idM ⊗6)(< ⊗ H), hence the claim.

Exactness atM⊗AN2. Weneed to prove thatKer(idM ⊗6) = Im(idM ⊗ 5 ).The inclusion Im(idM ⊗ 5 ) ⊂ Ker(idM ⊗6) is easy, since (idM ⊗6) ◦(idM ⊗ 5 ) = idM ⊗(6 ◦ 5 ) = 0. The other inclusion is the difficult one.

Indeed, it is quite illusory to prove it directly, each element at a time.

The incredulous reader should try by himself, and, after some effort,

read again remark 8.1.11.

Since Im(idM ⊗ 5 ) ⊂ Ker(idM ⊗6), the morphism idM ⊗6 induces a

morphism of abelian groups 6′ : (M ⊗A N2)/Im(idM ⊗ 5 ) → M ⊗A N3.

Since idM ⊗6 is surjective, 6′ is surjective. In fact, we need to prove that

6′ is an isomorphism. Let us construct an inverse ℎ′ of 6′. To shorten the

notation, let P = (M ⊗A N2)/Im(idM ⊗ 5 ) and let ? : M ⊗A N2→ P be the

canonical surjection. The sought-for morphism ℎ′ goes from a tensor

product M ⊗A N3 to P. By the universal property of the tensor product,

its definition amounts to the definition of a biadditive and A-balanced

map ℎ : M×N3→ P such that ℎ(<, I) = ℎ′(< ⊗ I). Moreover, ℎ′(< ⊗ I)is supposed to be a preimage of < ⊗ I by idM ⊗6. After this paragraph

of explanation, let us pass to the construction.

Let < ∈ M, let I ∈ N3. For any H and H′ ∈ N2 such that 6(H) = 6(H′),one has H′ − H ∈ Ker(6) = Im( 5 ), so there exists G ∈ N1 such that

H′ − H = 5 (G); consequently, < ⊗ H′ − < ⊗ H belongs to Im(idM ⊗ 5 ) and?(< ⊗ H′) = ?(< ⊗ H). Therefore, one defines a map ℎ : M ×N3 → P

by setting ℎ(<, I) = ?(< ⊗ H) where H is any element of N2 such that

6(H) = I.


The map ℎ is biadditive. Indeed, for I, I′ ∈ N3, H, H′ ∈ N2 such that

6(H) = I and 6(H′) = I′, one has 6(H + H′) = I + I′, so that

ℎ(<, I+ I′) = ?(< ⊗ (H+ H′)) = ?(< ⊗ H)+ ?(< ⊗ H′) = ℎ(<, I)+ ℎ(<, I′).Similarly, for <, <′ ∈ M, I ∈ N3 and H ∈ N2 such that 6(H) = I, one hasℎ(<+<′, I) = ?((<+<′)⊗ H) = ?(<⊗ H)+?(<⊗ H′) = ℎ(<, I)+ ℎ(<′, I).Finally, the map ℎ is A-balanced: let < ∈ M, I ∈ N3, H ∈ N2 such that

6(H) = I, and 0 ∈ A. Then, 6(0H) = 06(H) = 0I, since 6 is A-linear, so

that

ℎ(<0, I) = ?(<0 ⊗ H) = ?(< ⊗ 0H) = ℎ(<, 0I).Consequently, there exists a unique morphism of abelian groups

ℎ′ : M ⊗A N3→ P such that ℎ′(< ⊗ I) = ℎ(<, I) for any (<, I) ∈ M×N3.

It remains to show that ℎ′ is the inverse of 6′. Let< ∈ M and H ∈ N2, let

C = ?(< ⊗ H); one has 6′(C) = 6′(?(< ⊗ H)) = (idM ⊗6)(< ⊗ H) = < ⊗ 6(H),so that ℎ′(6′(C)) = ?(< ⊗ H) = C. Since the tensors of the form < ⊗ Hgenerate M ⊗A N2 and ? is surjective, this implies that ℎ′ ◦ 6′ = idP. In

particular, 6′ is injective. Since it is surjective, this is an isomorphism.

Had we wished so, we could also have computed 6′ ◦ ℎ′: let < ∈ M

and let I ∈ N3, let H ∈ N2 be such that 6(H) = I; then 6′(ℎ′(< ⊗ I)) =6′(ℎ(<, I)) = 6′(?(< ⊗ H)) = < ⊗ 6(H) = < ⊗ I. Since the tensors of theform < ⊗ I in M ⊗ N3 generate this module, it follows that 6′(ℎ′(C)) = Cfor every C ∈ M ⊗ N3, hence 6

′ ◦ ℎ′ = idM⊗N3. �

8.5.2. — This important result gains by being rephrased in the categorical language. Let A be a ring, let P be a right A-module. Observe that one defines a functor TP by associating to any left A-module M the abelian group TP(M) = P ⊗A M, and to any morphism f : M → N of left A-modules the morphism TP(f) = idP ⊗ f. Indeed, TP(f) goes from P ⊗A M = TP(M) to P ⊗A N = TP(N), one has TP(idM) = idP ⊗ idM = id_{P⊗AM} = id_{TP(M)}, and

TP(g ◦ f) = idP ⊗ (g ◦ f) = (idP ⊗ g) ◦ (idP ⊗ f) = TP(g) ◦ TP(f)

for all morphisms f : M1 → M2 and g : M2 → M3 of left A-modules. This “tensor product by P” functor is additive and we may rephrase theorem 8.5.1 as follows:


Corollary (8.5.3). — Let A be a ring, let M be a right A-module. The functor

“tensor product by M” is right exact.

In fact, once stated in this language, theorem 8.5.1 is justiciable of

another, more categorical, proof. Thanks to the following proposition,

it will be a consequence of the general result that left adjoints are right

exact (proposition 7.6.5).

Let us generalize this functor a little bit. Let B be a second ring and let

us assume that P is a (B,A)-bimodule. Then, for any left A-module M,

P ⊗A M is naturally a left B-module and for any morphism 5 : M→ N

of left A-modules, idP ⊗ 5 is B-linear. Consequently, the functor TP goes

from the category of left A-modules to the category of left B-modules.

Proposition (8.5.4). — Let A and B be rings, let P be a (B,A)-bimodule. The

“tensor product by P” functor TP defined above is a left adjoint of the functor

HomA(P, •).

Proof. — For any left A-module M and any left B-module N, we need to

define a bijection ΦM,N : HomB(P ⊗A M,N) → HomA(M,HomB(P,N))satisfying the relations required by the definition of a pair of adjoint

functors.

Before giving such a formula, let us first explain it in words. By the universal property of the tensor product, HomB(P ⊗A M, N) is the set of biadditive, A-balanced maps u from P × M to N which are moreover B-linear with respect to the variable in P. Such a map induces, for every m ∈ M, a B-linear map u(·, m) from P to N; moreover, u(pa, m) = u(p, am) for every a ∈ A, p ∈ P and m ∈ M. Conversely, a family (u_m)_{m∈M} of B-linear maps from P to N such that the map m ↦ u_m is A-linear gives exactly a map u as required.

For the sake of the reader, let us put this in formulae. Let u : P ⊗A M → N be a morphism of B-modules. Let m ∈ M. Then the map p ↦ u(p ⊗ m) is B-linear, hence defines an element ΦM,N(u)(m) of HomB(P, N). For a, a′ ∈ A, m, m′ ∈ M and p ∈ P, one has

ΦM,N(u)(am + a′m′)(p) = u(p ⊗ (am + a′m′)) = u(pa ⊗ m) + u(pa′ ⊗ m′) = ΦM,N(u)(m)(pa) + ΦM,N(u)(m′)(pa′),


so that

ΦM,N(u)(am + a′m′) = a · ΦM,N(u)(m) + a′ · ΦM,N(u)(m′).

Consequently, the map m ↦ ΦM,N(u)(m) is A-linear, so that ΦM,N(u) belongs to HomA(M, HomB(P, N)). We thus have defined a map ΦM,N.

Let us prove that ΦM,N is bijective. Let v ∈ HomA(M, HomB(P, N)). For u ∈ HomB(P ⊗A M, N), the relation ΦM,N(u) = v is equivalent to the relations u(p ⊗ m) = v(m)(p) for all p ∈ P and m ∈ M. Since the tensors of the form p ⊗ m generate P ⊗A M, this implies that ΦM,N is injective. Moreover, the map h from P × M to N given by h(p, m) = v(m)(p) is biadditive. Since m ↦ v(m) is A-linear, one has v(am)(p) = (a · v(m))(p) = v(m)(pa), so that h is A-balanced. Consequently, there exists a unique morphism of abelian groups u : P ⊗A M → N such that u(p ⊗ m) = h(p, m) for p ∈ P and m ∈ M. Since v(m) is B-linear for every m ∈ M, one has

u(b · (p ⊗ m)) = u(bp ⊗ m) = v(m)(bp) = b (v(m)(p)) = b u(p ⊗ m),

and the map u is B-linear. It is the unique preimage of v in HomB(P ⊗A M, N). This shows that ΦM,N is bijective.

Let M, M′ be left A-modules, let N, N′ be left B-modules, and let f : M → M′ and g : N → N′ be morphisms of modules. By construction, for any morphism u : P ⊗A M′ → N of B-modules, any p ∈ P and any m ∈ M, one has

(g ◦ ΦM′,N(u) ◦ f)(m)(p) = g(ΦM′,N(u)(f(m))(p)) = g(u(p ⊗ f(m))),

while

ΦM,N′(g ◦ u ◦ (idP ⊗ f))(m)(p) = (g ◦ u ◦ (idP ⊗ f))(p ⊗ m) = (g ◦ u)(p ⊗ f(m)) = g(u(p ⊗ f(m))).

This shows that the maps ΦM,N satisfy the adjunction relations and concludes the proof of the proposition. □
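For instance (a standard special case, added here as an illustration): if f : A → B is a morphism of rings, then B is a (B, A)-bimodule, and proposition 8.5.4 furnishes a natural bijection

HomB(B ⊗A M, N) ≃ HomA(M, N),

for every left A-module M and every left B-module N, where, on the right hand side, N is viewed as an A-module via f. In other words, extension of scalars along f is a left adjoint of restriction of scalars.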

8.6. Flat modules

Definition (8.6.1). — Let A be a ring and let M be a right A-module. One

says that M is flat if the functor “tensor product by M”, TM = M ⊗A •, is exact.


Since this functor is always right exact (corollary 8.5.3), a module M is flat if and only if, for any injective morphism f : N → N′ of left A-modules, idM ⊗ f is injective.
One defines similarly the notion of a flat left A-module M, by the

exactness of the right exact functor • ⊗A M.

Example (8.6.2). — The map N → Ad ⊗A N given by n ↦ 1 ⊗ n is an isomorphism of A-modules, and a morphism u is changed into idA ⊗ u under this isomorphism. Consequently, the right A-module Ad is flat.

Example (8.6.3). — Let A be a commutative ring and let S be a multiplicative subset of A. According to example 8.2.2, tensoring a module M with S⁻¹A corresponds to taking the fraction module S⁻¹M. If u : M → N is a morphism of A-modules, then, under the isomorphisms of that example, the morphisms S⁻¹u : S⁻¹M → S⁻¹N and u ⊗ id : M ⊗A S⁻¹A → N ⊗A S⁻¹A coincide. This identifies the functor • ⊗A S⁻¹A with the functor M ↦ S⁻¹M, which is exact, by proposition 7.5.12. Consequently, S⁻¹A is a flat A-module.
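On the other hand (an example added for illustration), not every module is flat: take A = Z and M = Z/2Z. The injective morphism Z → Z given by multiplication by 2 becomes, after tensoring with M, the zero map Z/2Z → Z/2Z, which is not injective since Z/2Z ≠ 0. Consequently, Z/2Z is not a flat Z-module; the same argument shows that Z/nZ is not flat for any n ⩾ 2 (compare with corollary 8.6.9 below).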

Remark (8.6.4). — It may happen that the module M is not flat, but that the functor TM nevertheless preserves the exactness of some exact sequences 0 → N → N′ → N′′ → 0.

For example, if f : N → N′ is a split injection of A-modules, with left inverse g : N′ → N, then idM ⊗ g is a left inverse of idM ⊗ f, so that idM ⊗ f is again a split injection.

A more advanced discussion of homological algebra would construct A-modules Tor_p^A(M, N), for p ⩾ 1, and a long exact sequence

··· → Tor_2^A(M, N′′) → Tor_1^A(M, N) → Tor_1^A(M, N′) → Tor_1^A(M, N′′) → M ⊗A N → M ⊗A N′ → M ⊗A N′′ → 0.

Proposition (8.6.5). — Let A be a ring and let M be a direct sum of a family (M_i)_{i∈I} of A-modules. Then M is flat if and only if M_i is flat for every i.

Proof. — Let M = ⊕_{i∈I} M_i be a direct sum of submodules; for i ∈ I, let j_i be the injection of M_i into M and let p_i : M → M_i be the projection. Then, for any left A-module N, one has an isomorphism φ = ∑ p_i ⊗ idN : M ⊗A N ≃ ⊕_i (M_i ⊗A N), whose inverse is induced by the natural maps j_i ⊗ idN from


M_i ⊗A N to M ⊗A N. Let u : N → N′ be a morphism of left A-modules. Under the isomorphism φ, the morphism idM ⊗ u corresponds to the direct sum of the morphisms id_{M_i} ⊗ u. In particular, idM ⊗ u is injective if and only if id_{M_i} ⊗ u is injective for every i.

Let u : N → N′ be an injective morphism of A-modules. If all the modules M_i are flat, then id_{M_i} ⊗ u is injective for every i, hence idM ⊗ u is injective. This proves that M is flat.

Conversely, if M is flat, then id_{M_i} ⊗ u is injective for every injective morphism u : N → N′ and every i, so that M_i is flat for every i. □

Corollary (8.6.6). — Any projectiveA-module, in particular any freeA-module,

is flat.

Proof. — Since the right A-module Ad is flat, every free right A-module is

flat. Any projective module, being a direct summand of a free A-module,

is flat. �

Example (8.6.7). — Any module over a division ring is flat.

Theorem (8.6.8). — Let A be a ring, let M be a right A-module. Then M is flat

if and only if for every left ideal J of A, the canonical morphism from M ⊗A J

to M (sending m ⊗ a to ma, for every m ∈ M and every a ∈ J) is injective.

Proof. — Assume that M is flat. Let J be a left ideal of A and let us

consider the exact sequence 0→ J→ A→ A/J→ 0 of left A-modules.

Since M is flat, tensoring it with M, we obtain an exact sequence

0→M ⊗ J→M→M ⊗ A/J→ 0 of abelian groups, where the injection

M ⊗A J → M is the morphism of the statement.

Conversely, let us now assume that the natural map M ⊗A J→ M is

injective, for every left ideal J, and let us prove that M is a flat A-module.

Let f : N → N′ be an injective morphism of left A-modules and let us show that idM ⊗ f is injective.

Step 1. We first prove this in the case N′ = Aⁿ, by induction on n, f being the inclusion of the submodule N. The case n = 0 is obvious (N = N′ = 0), and the case n = 1 is exactly the assumption (identifying N′ with A, N is then a left ideal). Let q′ : N′ → A^{n−1} = N′1 be the projection to the (n − 1) first coordinates and let N′2 = {0}^{n−1} × A


be its kernel; let j′ : N′2 → N′ be the canonical inclusion. Let N1 = q′(N) and N2 = N ∩ N′2; this gives rise to a commutative diagram with exact rows

0 → N2 →(j) N →(q) N1 → 0

0 → N′2 →(j′) N′ →(q′) N′1 → 0,

with vertical maps f2 : N2 → N′2, f : N → N′ and f1 : N1 → N′1,

where j and q are the restrictions of j′ and q′ to N2 and N1, and the morphisms f1 and f2 are deduced from f. Let us apply the functor “tensor product with M” to this diagram. We obtain a commutative diagram

M ⊗A N2 → M ⊗A N → M ⊗A N1 → 0

0 → M ⊗A N′2 → M ⊗A N′ → M ⊗A N′1 → 0,

with horizontal maps idM ⊗ j, idM ⊗ q and idM ⊗ j′, idM ⊗ q′, and vertical maps idM ⊗ f2, idM ⊗ f and idM ⊗ f1.

The upper row is an exact sequence, because the functor TM is right exact. The lower row is exact as well, because the lower sequence of the initial diagram is split (remark 8.6.4). By the case n = 1 and by induction, the morphisms idM ⊗ f2 and idM ⊗ f1 are injective; by the snake lemma (theorem 7.2.1), the morphism idM ⊗ f is injective, as was to be shown.

Step 2. Let us now prove the result under the assumption that N′ is finitely generated. Let s′ : P′ = Aⁿ → N′ be a surjective morphism, let K = Ker(s′) and let P = (s′)⁻¹(N). Write j : P → P′ and k : K → P for the canonical inclusions; let also k′ = j ◦ k : K → P′ and s = s′|_P, so that K = Ker(s). We sum up these constructions by the commutative diagram with exact rows:

0 → K →(k) P →(s) N → 0

0 → K →(k′) P′ →(s′) N′ → 0,

with the identity of K on the left, j : P → P′ in the middle, and f : N → N′ on the right.

Let us now apply the functor “tensor product with M”. By right-exactness of the tensor product, we obtain a commutative diagram with


exact rows

M ⊗A K → M ⊗A P → M ⊗A N → 0

M ⊗A K → M ⊗A P′ → M ⊗A N′ → 0,

with horizontal maps idM ⊗ k, idM ⊗ s and idM ⊗ k′, idM ⊗ s′, the identity of M ⊗A K on the left, and vertical maps idM ⊗ j and idM ⊗ f.

By the first step, we know that the morphisms idM ⊗ j and idM ⊗ k′ are injective, so that idM ⊗ k is injective as well. Let us prove that idM ⊗ f is injective; the argument is analogous to that of theorem 7.2.1. Let ν ∈ M ⊗A N be an element of Ker(idM ⊗ f) and let μ ∈ M ⊗A P be an element such that (idM ⊗ s)(μ) = ν. One has

(idM ⊗ s′) ◦ (idM ⊗ j)(μ) = (idM ⊗ f) ◦ (idM ⊗ s)(μ) = (idM ⊗ f)(ν) = 0.

Consequently, there exists an element μ′ ∈ M ⊗A K such that (idM ⊗ j)(μ) = (idM ⊗ k′)(μ′); since idM ⊗ k′ = (idM ⊗ j) ◦ (idM ⊗ k) and idM ⊗ j is injective, this gives μ = (idM ⊗ k)(μ′). Then

ν = (idM ⊗ s)(μ) = (idM ⊗ s) ◦ (idM ⊗ k)(μ′) = 0,

as was to be shown.

Step 3. We finally prove the assertion in general. Let ν ∈ Ker(idM ⊗ f) be given as a finite sum of split tensors, ν = ∑ m_i ⊗ n_i, with m_i ∈ M and n_i ∈ N. By assumption, one has ∑ m_i ⊗ f(n_i) = 0 in M ⊗A N′.

By remark 8.1.6, there exists a finitely generated submodule N′0 of N′, containing the elements f(n_i), such that ∑ m_i ⊗ f(n_i) = 0 in M ⊗A N′0. Let N0 be the submodule of N generated by the n_i, let j : N0 → N be the inclusion, and let f0 : N0 → N′0 be the map induced by f; let ν0 = ∑ m_i ⊗ n_i, viewed in M ⊗A N0. By construction, one has (idM ⊗ f0)(ν0) = 0, that is, ν0 ∈ Ker(idM ⊗ f0). Since N′0 is finitely generated, step 2 proves that ν0 = 0. Then ν = (idM ⊗ j)(ν0) = 0. This proves that idM ⊗ f is injective. □

Corollary (8.6.9). — Let A be an integral domain and let M be an A-module.

If M is flat, then M is torsion-free. The converse holds if A is a principal ideal

domain.

Proof. — Let a be a nonzero element of A. The ideal (a) is a free A-module, with basis a, so that the morphism M → (a) ⊗A M given by


m ↦ a ⊗ m is an isomorphism. Under this identification, the morphism (a) ⊗A M → M given by b ⊗ m ↦ bm identifies with the morphism m ↦ am. If M is flat, then this morphism is injective (theorem 8.6.8), so that no nonzero element m ∈ M satisfies am = 0. This proves that a flat A-module is torsion-free. Conversely, let us assume that A is a principal ideal domain and that M is torsion-free. In this case, the same argument shows that the canonical morphism J ⊗A M → M given by b ⊗ m ↦ bm is injective for every nonzero ideal J of A. On the other hand, if J = 0, one has J ⊗A M = 0 and this morphism is injective as well. By theorem 8.6.8, this shows that M is flat. □
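For instance (illustrations added to the text): the Z-module Q is torsion-free, hence flat; this also follows from example 8.6.3, since Q = S⁻¹Z for S = Z ∖ {0}. On the other hand, Z/nZ, for n ⩾ 2, has torsion and therefore is not flat. When A is an integral domain which is not a principal ideal domain, the converse may fail; for instance, one can show that the ideal (X, Y) of A = k[X, Y] (k a field) is torsion-free but not flat.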

8.6.10. — Let M and P be right A-modules. Let M∨ = HomA(M, A) be the dual of M, viewed as a left A-module. For λ ∈ M∨ and p ∈ P, the map pλ : x ↦ pλ(x) from M to P is a morphism of A-modules, and the map (p, λ) ↦ pλ is biadditive and A-balanced, since (pa)λ(x) = paλ(x) = p(aλ)(x) for every x ∈ M, every p ∈ P and every λ ∈ M∨. Consequently, there exists a unique morphism

θM : P ⊗A M∨ → HomA(M, P)

such that θM(p ⊗ λ) = pλ for every λ ∈ M∨ and every p ∈ P. (See also proposition 8.2.4.)

Proposition (8.6.11). — Let A be a ring and let P be a flat right A-module. Then, for every finitely presented right A-module M, the morphism θM : P ⊗A M∨ → HomA(M, P) is an isomorphism.

Proof. — Since M is finitely presented, there exists an exact sequence of A-modules

L′ →(u) L →(v) M → 0,

where L and L′ are free and finitely generated. Let us apply the left exact functor •∨ = HomA(•, A) to this exact sequence; this furnishes an exact sequence

0 → M∨ →(v∗) L∨ →(u∗) (L′)∨.


We now tensor this exact sequence with P and obtain, since P is flat, an exact sequence

0 → P ⊗A M∨ →(idP ⊗ v∗) P ⊗A L∨ →(idP ⊗ u∗) P ⊗A (L′)∨.

Similarly, we apply the left exact functor HomA(•, P) and obtain an exact sequence

0 → HomA(M, P) →(v∗) HomA(L, P) →(u∗) HomA(L′, P).

Via the various morphisms θ, these last two exact sequences can be combined into the following diagram:

0 → P ⊗A M∨ → P ⊗A L∨ → P ⊗A (L′)∨

0 → HomA(M, P) → HomA(L, P) → HomA(L′, P),

with horizontal maps idP ⊗ v∗, idP ⊗ u∗ on the first row and v∗, u∗ on the second row, and vertical maps θM, θL and θL′.

The left-hand square is commutative: if λ ∈ M∨ and p ∈ P, then θL ◦ (idP ⊗ v∗)(p ⊗ λ) and (v∗ ◦ θM)(p ⊗ λ) are both the morphism y ↦ pλ(v(y)) from L to P; consequently, the morphisms θL ◦ (idP ⊗ v∗) and v∗ ◦ θM coincide on split tensors, hence are equal. The commutativity of the right-hand square is analogous: the maps θL′ ◦ (idP ⊗ u∗) and u∗ ◦ θL both map a split tensor p ⊗ λ (with p ∈ P and λ ∈ L∨) to the morphism y ↦ pλ(u(y)), hence they coincide.

The module L is free and finitely generated. As in the proof of proposition 8.2.4 (which concerns the case where A is commutative), we prove that the morphism θL is an isomorphism. Indeed, if (e_i)_{i∈I} is a finite basis of L, with dual basis (e_i^∗), and if w ∈ HomA(L, P), then ξ = ∑_i w(e_i) ⊗ e_i^∗ is the unique element of P ⊗A L∨ such that θL(ξ) = w. Similarly, θL′ is an isomorphism.

Let us deduce that θM is an isomorphism. One has θL ◦ (idP ⊗ v∗) = v∗ ◦ θM, and θL ◦ (idP ⊗ v∗) is injective, so that θM is injective. Let f ∈ HomA(M, P); consider the morphism v∗(f) = f ◦ v ∈ HomA(L, P). Since θL is an isomorphism, there exists a unique element ξ ∈ P ⊗A L∨ such that θL(ξ) = f ◦ v. Then

θL′ ◦ (idP ⊗ u∗)(ξ) = u∗ ◦ θL(ξ) = u∗(v∗(f)) = f ◦ v ◦ u = 0.


Since θL′ is an isomorphism, one has (idP ⊗ u∗)(ξ) = 0. By the exactness of the first row of the diagram, there exists η ∈ P ⊗A M∨ such that ξ = (idP ⊗ v∗)(η). Then

θM(η) ◦ v = θL ◦ (idP ⊗ v∗)(η) = θL(ξ) = f ◦ v.

Since v is surjective, f = θM(η). This concludes the proof that θM is an isomorphism. □

Theorem (8.6.12) (Lazard). — Let A be a ring and let P be a right A-module. The module P is flat if and only if, for every finitely presented right A-module M and every morphism f : M → P, there exist a free finitely generated right A-module L and morphisms g : M → L and h : L → P such that f = h ◦ g.

Proof. — Assume that P is flat, let M be a finitely presented right A-module and let f : M → P be a morphism of A-modules. By proposition 8.6.11, we may choose a tensor ξ ∈ P ⊗A M∨ such that θM(ξ) = f; let us write ξ = ∑_{i=1}^n p_i ⊗ λ_i, with λ_1, ..., λ_n ∈ M∨ and p_1, ..., p_n ∈ P. Let λ = (λ_1, ..., λ_n) : M → Aⁿ and let p : Aⁿ → P be given by p(a_1, ..., a_n) = p_1a_1 + ··· + p_na_n; these are morphisms of A-modules. Moreover, for every m ∈ M, one has

p ◦ λ(m) = ∑_{i=1}^n p_i λ_i(m) = θM(ξ)(m) = f(m),

so that f = p ◦ λ. This proves the desired property.

Conversely, let us assume that this property holds and let us prove

that P is flat. Let J be a left ideal of A and let us show that the canonical

morphism 5 : P ⊗A J→ P is injective. Let G be any element of Ker( 5 ),written as G =

∑=8=1G8 ⊗ 08, where 01, . . . , 0= ∈ A and G1, . . . , G= ∈ P.

Let M be the quotient of the free A-module A=3, with canonical basis

(41, . . . , 4=), by its submodule generated by 4101+ · · · + 4=0=; let ? : A=3→

M be the canonical surjection. The module M is finitely presented. By

construction, the morphism from A=3to P such that 48 ↦→ G8 for each 8

vanishes on (01, . . . , 0=)A. Consequently, there exists a uniquemorphism

5 : M → P such that 5 (?(48)) = G8 for every 8. Let < be an integer, let

6 : M → A<3

and ℎ : A<3→ P be morphisms such that 5 = ℎ ◦ 6.

Let ( 51, . . . , 5<) be the canonical basis of A<3; For every 9 ∈ {1, . . . , <},

let H 9 = ℎ( 59). For each 8 ∈ {1, . . . , =}, let (18 , 9) be the coordinates


of 6(?(48)) in the basis ( 51, . . . , 5<); one has 6(?(48)) =∑<9=1

5918 , 9. Since∑8 ?(48)08 = ?(

∑4808) = 0, one has

0 =

=∑8=1

6(?(48)08) =<∑9=1

59

(=∑8=1

18 , 908

),

hence

∑=8=118 , 908 = 0 for every 9. Moreover,

G8 = 5 (?(48)) = ℎ(6(?(48))) =<∑9=1

H 918 , 9

for every 8. Then one has, in P ⊗A J, the relations

=∑8=1

G8 ⊗ 08 ==∑8=1

©­«<∑9=1

H 918 , 9ª®¬ ⊗ 08

=

<∑9=1

(=∑8=1

H 918 , 9 ⊗ 08

)=

<∑9=1

H 9 ⊗(=∑8=1

18 , 908

)= 0.

This proves that the canonical morphism P ⊗A J → P is injective and

concludes the proof that P is a flat A-module. �

Corollary (8.6.13). — Let A be a commutative ring and let M be a finitely

presented A-module. Then M is flat if and only if M is projective.

Proof. — A projective module is flat (corollary 8.6.6), hence it suffices to prove that M is projective if it is flat. By theorem 8.6.12, applied to the identity morphism idM : M → M, there exist a free finitely generated A-module L and morphisms f : M → L and g : L → M such that idM = g ◦ f. The morphism f is injective and g is a left inverse of f. By lemma 7.1.6, applied to the exact sequence 0 → M →(f) L → L/f(M) → 0, the submodule f(M) of L is a direct summand of L, hence is projective (proposition 7.3.2). Since f induces an isomorphism from M to its image f(M), the module M is projective, as was to be shown. □
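The finite presentation hypothesis cannot be dropped (a remark added for illustration): the Z-module Q is flat (corollary 8.6.9) but not projective, since a projective module over the principal ideal domain Z is free, and Q is not a free Z-module.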


Remark (8.6.14). — By definition, a finitely presented right A-module M is the cokernel of an A-morphism u : Aⁿ → Aᵐ of free right A-modules. Such a morphism can be represented by a matrix U ∈ M_{m,n}(A). Moreover, a morphism from M to a free right A-module Aᵖ corresponds to a morphism v : Aᵐ → Aᵖ that vanishes on the image of u. The morphism v is represented by a matrix V ∈ M_{p,m}(A), and the condition that v(Im(u)) = 0 translates as the relation V · U = 0.

Thanks to these remarks, theorem 8.6.12 can be rewritten as the following equational criterion of flatness. Namely, the following properties are equivalent:

(i) The A-module M is flat;

(ii) For every family (x_1, ..., x_m) of elements of M and every matrix (a_{i,j}) ∈ M_{m,n}(A) such that ∑_{i=1}^m x_i a_{i,j} = 0 for every j ∈ {1, ..., n}, there exist an integer p, a family (y_1, ..., y_p) in M and a matrix (b_{k,i}) ∈ M_{p,m}(A) such that

x_i = ∑_{k=1}^p y_k b_{k,i} for every i ∈ {1, ..., m}, and ∑_{i=1}^m b_{k,i} a_{i,j} = 0 for every k and every j;

(iii) Same assertion as (ii), with n = 1.

The equivalence of (i) and (ii) is the content of theorem 8.6.12, and in the proof we have seen that it suffices to consider A-modules M which are cokernels of a morphism u : A → Aᵐ, that is, with n = 1.
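As an illustration of the criterion (an example added to the text, with n = 1): take A = Z and M = Q, and consider the relation x_1a_1 + x_2a_2 = 0 with x_1 = 1/2, x_2 = −1/3 in Q and a_1 = 2, a_2 = 3 in Z. One may take p = 1, y_1 = 1/6 and (b_{1,1}, b_{1,2}) = (3, −2): indeed y_1b_{1,1} = 1/2 = x_1, y_1b_{1,2} = −1/3 = x_2, and b_{1,1}a_1 + b_{1,2}a_2 = 6 − 6 = 0, as the criterion predicts for the flat Z-module Q.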

Proposition (8.6.15). — Let A be a commutative ring and let M be an A-module.

a) Let S be a multiplicative subset of A. If M is flat, then the S⁻¹A-module S⁻¹M is flat.

b) If M_P is a flat A_P-module for every maximal ideal P of A, then M is flat.

Proof. — a) For any S⁻¹A-module N, there is a morphism of S⁻¹A-modules M ⊗A N → S⁻¹M ⊗_{S⁻¹A} N sending m ⊗ n to (m/1) ⊗ n. This


morphism is an isomorphism. Moreover, if u : N → N′ is a morphism of S⁻¹A-modules, this isomorphism transforms idM ⊗ u into id_{S⁻¹M} ⊗ u. If u is injective and M is flat, it follows that id_{S⁻¹M} ⊗ u is injective. Since u is arbitrary, S⁻¹M is a flat S⁻¹A-module.

b) Let u : N → N′ be an injective morphism of A-modules and let Q = Ker(idM ⊗ u), so that we have an exact sequence

0 → Q → M ⊗A N → Im(idM ⊗ u) → 0.

Let P be a maximal ideal of A and let us tensor this exact sequence of A-modules by the flat A-module A_P; we obtain an exact sequence

0 → Q_P → M_P ⊗_{A_P} N_P → Im(idM ⊗ u)_P → 0,

and Im(idM ⊗ u)_P identifies with the image of id_{M_P} ⊗ u_P. Since A_P is a flat A-module (example 8.6.3), the morphism u_P is injective; since M_P is assumed to be flat, the morphism id_{M_P} ⊗ u_P is injective. Consequently, Q_P = 0 for every maximal ideal P of A, so that Supp(Q) = ∅, hence Q = 0 by theorem 6.5.4.

This implies that the map idM ⊗ u is injective. Since u is arbitrary, M is a flat A-module. □

8.7. Faithful flatness

Definition (8.7.1). — Let A be a commutative ring and let M be an A-module. One says that M is faithfully flat if a complex N1 →(u) N2 →(v) N3 of A-modules is exact if and only if the complex

M ⊗A N1 →(idM ⊗ u) M ⊗A N2 →(idM ⊗ v) M ⊗A N3

is exact.

The “only if” direction of the definition shows that a faithfully flat

A-module is flat.

Proposition (8.7.2). — Let M be a flat A-module. The following properties are

equivalent:

(i) The A-module M is faithfully flat;

(ii) For every morphism u : N → N′ of A-modules such that idM ⊗ u = 0, one has u = 0;


(iii) For every nonzero A-module N, one has M ⊗A N ≠ 0;

(iv) For every prime ideal P of A, the A-module M ⊗A KP is nonzero, where

K_P is the field of fractions of the integral domain A/P;
(v) For every maximal ideal P of A, one has M ≠ PM.

Proof. — (i)⇒(ii). Assume that idM ⊗ u = 0. Let us consider the complex N →(u) N′ →(id) N′ and let us tensor it with M. We obtain the complex

M ⊗A N →(idM ⊗ u) M ⊗A N′ →(id) M ⊗A N′,

which is exact since idM ⊗ u = 0, so that its image, 0, equals the kernel of the identity isomorphism. Since M is faithfully flat, the initial complex was exact as well, hence Im(u) = Ker(id_{N′}) = 0 and u = 0.

(ii)⇒(iii). One has M ⊗ N = 0 if and only if idM ⊗ idN = 0; if (ii) holds,

then idN = 0, hence N = 0.

The implications (iii)⇒(iv) and (iv)⇒(v) are obvious.

Let us prove the implication (iii)⇒(i). Let N1 →(u) N2 →(v) N3 be a complex of A-modules which becomes exact after tensorization by M. Let K = Ker(v)/Im(u).

Let j : u(N1) → N2 be the inclusion; since M is flat, the morphism idM ⊗ j is injective and its image is a submodule of M ⊗A N2 isomorphic to M ⊗A Im(u). By right exactness, this submodule is also Im(idM ⊗ u). Then consider the exact sequence 0 → Ker(v) →(k) N2 →(v) N3, where k is the inclusion morphism, and apply the tensor product by M; since M is flat, this gives an exact sequence

0 → M ⊗A Ker(v) →(idM ⊗ k) M ⊗A N2 →(idM ⊗ v) M ⊗A N3,

so that idM ⊗ k is an isomorphism from M ⊗A Ker(v) onto Ker(idM ⊗ v). By right exactness of the tensor product, one has

M ⊗A K = M ⊗A (Ker(v)/Im(u)) ≃ (M ⊗A Ker(v))/(M ⊗A Im(u)) ≃ Ker(idM ⊗ v)/Im(idM ⊗ u).

Since the tensorized complex is exact, this quotient is 0, hence M ⊗A K = 0. By (iii), K = 0 and the initial complex is exact.

We finally prove that (v)⇒(iii). Let N be an A-module such that M ⊗A N = 0 and let us prove that N = 0. Let x ∈ N and let I = {a ∈ A ; ax = 0}, so that the morphism a ↦ ax induces an injection A/I ↪ N.


By flatness of M, we thus have an injection M ⊗A (A/I) ↪ M ⊗A N, so that M ⊗A (A/I) = 0 and M = IM. If I ≠ A, there exists a maximal ideal P of A that contains I; then M = IM ⊂ PM, which contradicts (v). Hence I = A and x = 0, so that N = 0. □
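For example (an illustration added to the text), the Z-module Q is flat but not faithfully flat: one has Q ⊗Z (Z/2Z) = 0 although Z/2Z ≠ 0, and, in the terminology of (v), Q = 2Q for the maximal ideal (2) of Z.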

Definition (8.7.3). — One says that a morphism 5 : A→ B of commutative

rings is flat (resp. faithfully flat) if B is a flat (resp. faithfully flat) A-module.

Proposition (8.7.4). — Let 5 : A → B be a flat morphism of commutative

rings. The following properties are equivalent:

(i) The morphism f is faithfully flat;
(ii) The continuous map f∗ : Spec(B) → Spec(A) is surjective;
(iii) For every maximal ideal M of A, there exists a prime ideal P of B such that M = f⁻¹(P).

Proof. — Let P be a prime ideal of A and let K_P be the fraction field of the residue ring A/P. Note that K_P = A_P/PA_P. Let T = f(A ∖ P); it is a multiplicative subset of B and, by abuse of language, we write B_P for T⁻¹B.

Let us consider the following commutative diagram of ring morphisms:

A → B → B/f(P)B

A_P → B_P → B_P/f(P)B_P,

where the vertical arrows are the obvious localization morphisms, the left horizontal arrows are f and the morphism f_P deduced from it, and the right horizontal arrows are the obvious quotient maps. Moreover, one has an isomorphism B_P/f(P)B_P ≃ T⁻¹(B/f(P)B) ≃ B ⊗A K_P.

Under these morphisms, a prime ideal Q of B_P/f(P)B_P corresponds to a prime ideal of B_P that contains f(P)B_P, hence to a prime ideal of B containing f(P) and disjoint from T, and its preimage f⁻¹(Q) is a prime ideal of A containing P and disjoint from A ∖ P, hence f⁻¹(Q) = P. Conversely, such a prime ideal of B furnishes a prime ideal of B_P/f(P)B_P ≃ B ⊗A K_P.

If f is faithfully flat, then B ⊗A K_P is nonzero, so that it admits a prime ideal Q, and f⁻¹(Q) = P belongs to the image of f∗. This proves the implication (i)⇒(ii).


If M is a maximal ideal of A, then K_M = A/M, and the above argument shows that (iii) implies B ⊗A (A/M) ≠ 0, that is, B ≠ MB. By proposition 8.7.2, (v), B is then faithfully flat; this proves the implication (iii)⇒(i).

Finally, the implication (ii)⇒(iii) is obvious. □

Examples (8.7.5). — Let A be a commutative ring.

a) For every multiplicative subset S of A, the morphism f : A → S⁻¹A given by f(a) = a/1 is flat (example 8.6.3). It is faithfully flat if and only if S consists of invertible elements, in which case it is an isomorphism. Indeed, a prime ideal P of A is of the form f⁻¹(Q) for a prime ideal Q of S⁻¹A if and only if P ∩ S = ∅. If S contains a noninvertible element a, then there is a prime ideal of A containing a, and this prime ideal does not belong to the image of f∗.

b) Let a_1, ..., a_n be a family of elements of A, let B = ∏_{i=1}^n A_{a_i} and let f : A → B be the morphism given by f(a) = (a/1, ..., a/1) for every a ∈ A. The morphism f is flat.

It is faithfully flat if and only if, for every prime ideal P of A, there exists i ∈ {1, ..., n} such that a_i ∉ P, in other words, if and only if ⋂_{i=1}^n V((a_i)) = ∅ in Spec(A). Indeed, the prime ideals of B are of the form Q_1 × ··· × Q_n, where, for some i ∈ {1, ..., n}, Q_i is a prime ideal of A_{a_i}, and Q_j = A_{a_j} for j ≠ i. Its preimage by f is then f⁻¹(Q_i), a prime ideal of A that does not contain a_i, and every such prime ideal of A is obtained in this way.
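For instance (a numerical example added to the text): take A = Z, n = 2, a_1 = 2 and a_2 = 3. Since 2 and 3 generate the unit ideal of Z, no prime ideal contains both, so that V((2)) ∩ V((3)) = ∅ and the flat morphism Z → Z[1/2] × Z[1/3] is faithfully flat.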

Example (8.7.6). — Let f : A → B be a flat morphism of commutative rings. Assume that f has a left inverse, that is, that there exists a morphism f′ : B → A such that f′ ◦ f = idA. Then f is faithfully flat. Indeed, one has f∗ ◦ (f′)∗ = idSpec(A), hence f∗ is surjective.

8.7.7. — Let f : A → B be a morphism of commutative rings. We consider the diagram C_f of A-modules:

0 → A →(f) B →(g) B ⊗A B,

where g(b) = 1 ⊗ b − b ⊗ 1 for every b ∈ B. For a ∈ A, one has 1 ⊗ f(a) = f(a) ⊗ 1 = a · (1 ⊗ 1) in B ⊗A B, so that g ◦ f = 0: the diagram C_f is a complex.


More generally, for every A-module M, let f_M = f ⊗ idM : M → B ⊗A M be the map given by m ↦ 1 ⊗ m, and let g_M = g ⊗ idM : B ⊗A M → B ⊗A B ⊗A M be given by g_M(b ⊗ m) = 1 ⊗ b ⊗ m − b ⊗ 1 ⊗ m; we consider the diagram C_f^M:

0 → M →(f_M) B ⊗A M →(g_M) B ⊗A B ⊗A M.

Again, g_M ◦ f_M = 0, so that C_f^M is a complex of A-modules.

Proposition (8.7.8). — Assume that there exists a morphism of rings f′ : B → A such that f′ ◦ f = idA. Then the diagram C_f is an exact sequence. More generally, for every A-module M, the complex C_f^M is exact.

We will generalize this proposition below, in theorem 8.7.10, and prove that its conclusion holds as soon as f is faithfully flat.

Proof. — Since f′ ◦ f = idA, the morphism f is injective.

One has g ◦ f = 0, hence Im(f) ⊂ Ker(g). Conversely, the map from B × B to B given by (b, b′) ↦ f′(b)b′ is A-bilinear, hence there exists a unique morphism of A-modules g′ : B ⊗A B → B such that g′(b ⊗ b′) = f′(b)b′. One has g′ ◦ g(b) = g′(1 ⊗ b) − g′(b ⊗ 1) = b − f(f′(b)), hence g′ ◦ g = idB − f ◦ f′. Assume that g(b) = 0. Then g′(g(b)) = 0, hence b = f(f′(b)) and b ∈ Im(f), so that Ker(g) ⊂ Im(f). This proves that Ker(g) = Im(f).

Let us prove the more general result. Since the map from B × M to M given by (b, m) ↦ f′(b)m is A-bilinear, there exists a unique morphism of A-modules f′_M : B ⊗A M → M such that f′_M(b ⊗ m) = f′(b)m. One has f′_M ◦ f_M(m) = f′_M(1 ⊗ m) = m for every m ∈ M, hence f′_M ◦ f_M = idM. This proves that f_M is injective.

Similarly, there exists a unique morphism g′_M : B ⊗A B ⊗A M → B ⊗A M such that g′_M(b ⊗ b′ ⊗ m) = g′(b ⊗ b′) ⊗ m = f′(b)b′ ⊗ m. One has

g′_M ◦ g_M(b ⊗ m) = g′_M(1 ⊗ b ⊗ m − b ⊗ 1 ⊗ m) = b ⊗ m − f(f′(b)) ⊗ m

for every b ∈ B and every m ∈ M. Consequently, g′_M ◦ g_M = idB⊗AM − f_M ◦ f′_M. Let us now prove that Ker(g_M) = Im(f_M). Since g_M ◦ f_M = 0, one has Im(f_M) ⊂ Ker(g_M).


Conversely, let ν ∈ Ker(g_M). Then g′_M(g_M(ν)) = 0, that is, ν = f_M ◦ f′_M(ν): we have proved that ν ∈ Im(f_M). □

Corollary (8.7.9). — For every A-module M, the diagram of B-modules

0 → B ⊗A M →(idB ⊗ f_M) B ⊗A B ⊗A M →(idB ⊗ g_M) B ⊗A B ⊗A B ⊗A M

is exact.

Proof. — The map f_B : B → B ⊗A B given by f_B(b) = b ⊗ 1 is a morphism of rings, and the unique morphism of A-modules μ : B ⊗A B → B such that μ(b ⊗ b′) = bb′ for all b, b′ ∈ B is a morphism of rings satisfying μ ◦ f_B = idB. Under the natural isomorphisms

(B ⊗A B) ⊗B (B ⊗A M) ≃ B ⊗A B ⊗A M and (B ⊗A B) ⊗B (B ⊗A B) ⊗B (B ⊗A M) ≃ B ⊗A B ⊗A B ⊗A M,

the diagram of the corollary identifies with the complex associated, as in 8.7.7, with the ring morphism f_B and the B-module B ⊗A M. By proposition 8.7.8, applied to f_B and its retraction μ, this complex is an exact sequence. □

Theorem (8.7.10). — Let f : A → B be a morphism of commutative rings. If f is faithfully flat, then for every A-module M, the complex C_f^M is exact.

Proof. — Let M be an A-module. By corollary 8.7.9, the complex deduced from C_f^M by tensoring with B is exact. Since B is a faithfully flat A-module, the complex C_f^M is exact as well. □

8.8. Faithfully flat descent

8.8.1. — Let f : A → B be a faithfully flat morphism of commutative rings. Set C = B ⊗A B; there are two ring morphisms g_1, g_2 : B → C, respectively given by g_1(b) = b ⊗ 1 and g_2(b) = 1 ⊗ b, so that g = g_2 − g_1. Consequently, the exactness of the complex C_f asserted by theorem 8.7.10 says that the ring A can be recovered from B as the subring on which these two ring morphisms coincide.


8.8.2. — Let M be an A-module, write M_B = B ⊗A M and M_C = C ⊗A M = B ⊗A M_B. By base change, these are a B-module and a C-module respectively, and we may view M_C as a B-module in two ways, via g_1 or g_2. The map g_1^M = g_1 ⊗ idM : M_B → M_C maps b ⊗ m to b ⊗ 1 ⊗ m = g_1(b) ⊗ m, hence is a morphism of B-modules if M_C is viewed as a B-module via g_1; similarly, the map g_2^M = g_2 ⊗ idM is a morphism of B-modules if M_C is viewed as a B-module via g_2. Moreover, theorem 8.7.10 asserts that M can be recovered from M_B as the A-submodule on which these two morphisms coincide.

It is therefore natural to ask which properties of the A-module M can be witnessed on the B-module M_B. Similarly, a morphism u : M → M′ of A-modules gives rise to a morphism of B-modules u_B = idB ⊗ u, and one may wonder which properties of u are witnessed by u_B.

Proposition (8.8.3). — Let f : A → B be a faithfully flat morphism of commutative rings. Let u : M → N be a morphism of A-modules.

a) If u_B = 0, then u = 0;

b) If u_B is surjective, then u is surjective;

c) If u_B is injective, then u is injective.

Proof. — Assertion a) holds by proposition 8.7.2.

Let us prove b) and c). Let K = Ker(u) and let j : K → M be the inclusion; let P = Coker(u) and let v : N → P be the canonical projection. By construction, the diagram 0 → K →(j) M →(u) N →(v) P → 0 is exact. Since f is flat, the diagram 0 → K_B →(j_B) M_B →(u_B) N_B →(v_B) P_B → 0 is exact as well.

If u_B is surjective, then P_B = 0, hence P = 0 since f is faithfully flat; this shows that u is surjective.

If u_B is injective, then K_B = 0, hence K = 0 since f is faithfully flat, and u is injective. □

Proposition (8.8.4). — Let 5 : A→ B be a faithfully flat morphism of commu-

tative rings and let M be an A-module.

a) If MB is a finitely generated B-module, then M is a finitely generated

A-module;


b) If MB is a finitely presented B-module, then M is a finitely presented

A-module;

c) If MB is a flat (resp. faithfully flat) B-module, then M is a flat (resp.

faithfully flat) A-module;

d) If MB is a finitely generated projective B-module, then M is a finitely generated projective A-module;

e) If MB is a noetherian (resp. artinian) B-module, then M is a noetherian

(resp. artinian) A-module.

Proof. — a) Let (m_i)_{i∈I} be a generating family of M. The family (1 ⊗ m_i)_{i∈I} generates M_B. Since M_B is finitely generated, there exists a finite subset J of I such that the family (1 ⊗ m_i)_{i∈J} generates M_B. Let u : A^J → M be the morphism that maps (a_j)_{j∈J} to ∑ a_j m_j. Then u_B : B^J → M_B is surjective, hence u is surjective. This proves that M is finitely generated.

b) By a), M is finitely generated. Let u : Aⁿ → M be a surjective morphism and let K = Ker(u); we want to show that K is finitely generated. The morphism u_B : Bⁿ → M_B is surjective, and its kernel is finitely generated, because M_B is finitely presented. Since f is flat, one has K_B = Ker(u_B). By a), K is finitely generated, as was to be shown.

c) We first show that M is flat if M_B is flat. Let us consider a complex D of A-modules, N1 →(u) N2 →(v) N3, and let us consider the complex D_M:

M ⊗A N1 →(idM ⊗ u) M ⊗A N2 →(idM ⊗ v) M ⊗A N3.

If we tensor it by B, we obtain the complex D_{M,B}:

B ⊗A M ⊗A N1 → B ⊗A M ⊗A N2 → B ⊗A M ⊗A N3,

which identifies with the complex

M_B ⊗B N_{1,B} → M_B ⊗B N_{2,B} → M_B ⊗B N_{3,B},

which is deduced from the complex D_B of B-modules

N_{1,B} →(u_B) N_{2,B} →(v_B) N_{3,B}

by tensor product with M_B. Assume that the complex D is exact. Since B is a flat A-module, the complex D_B is exact; since M_B is a flat B-module,


the complex D_{M,B} is exact as well. Since B is a faithfully flat A-module, the complex D_M is exact. This shows that M is a flat A-module.

Let us now assume that M_B is faithfully flat. By what precedes, M is flat. Assume that the complex D_M is exact. Since B is flat, the complex D_{M,B} is exact. Since M_B is faithfully flat, the complex D_B is exact; since B is faithfully flat, the complex D is exact. This shows that M is faithfully flat.

d) Since MB is finitely generated and projective, it is a finitely presented

and flat B-module. By b) and c), M is a finitely presented and flat

A-module. It then follows from corollary 8.6.13 that M is a finitely

generated projective A-module.

e) Let us assume that M_B is noetherian. Let (M_n) be an increasing sequence of submodules of M. Since B is flat, the modules M_{n,B} identify with submodules of M_B and the sequence (M_{n,B}) is increasing. Consequently, it is stationary and there exists m ∈ N such that M_{n,B} = M_{m,B} for n ⩾ m. Since (M_n/M_m)_B ≃ M_{n,B}/M_{m,B}, one has (M_n/M_m)_B = 0 for n ⩾ m; since B is a faithfully flat A-module, one has M_n/M_m = 0, hence the sequence (M_n) is stationary. This shows that M is a noetherian A-module.

The proof that M is artinian if M_B is artinian is analogous; just replace “increasing” by “decreasing” and the module M_n/M_m by the module M_m/M_n in the preceding proof. □

Proposition (8.8.5). — Let f : A → B be a faithfully flat morphism of commutative rings. Then B/A is a flat A-module.

Proof. — Since f is faithfully flat, it is injective. Let us consider the exact sequence 0 → A →(f) B →(π) B/A → 0, where π is the canonical projection. By tensor product with B (which is a flat A-module), we obtain the exact sequence 0 → B →(f_B) B ⊗A B →(π_B) B ⊗A (B/A) → 0, where f_B(b) = b ⊗ 1 and π_B(b ⊗ b′) = b ⊗ π(b′). On the other hand, the morphism of rings h : B ⊗A B → B such that h(b ⊗ b′) = bb′ satisfies h ◦ f_B = idB and is a morphism of B-modules. Consequently, the latter exact sequence is split and B ⊗A (B/A) is a direct summand of B ⊗A B. Since B is a flat A-module, B ⊗A B is a flat B-module, hence B ⊗A (B/A) is a flat B-module. By proposition 8.8.4, B/A is a flat A-module. □


Corollary (8.8.6). — Let f : A → B be a morphism of integral domains that induces an isomorphism on their fields of fractions. If f is faithfully flat, then f is an isomorphism.

Proof. — Since f is faithfully flat, it is injective and we may consider A as a subring of B. Every element b ∈ B can be written as a fraction a′/a′′, for a′, a′′ ∈ A and a′′ ≠ 0, so that the image of b in B/A is a torsion element. Since B/A is a flat A-module (proposition 8.8.5), it is torsion-free (corollary 8.6.9), so that b ∈ A. This proves that B = A. □

8.8.7. — Let 5 : A→ B be a faithfully flat morphism of commutative

rings. Following the impetus of Grothendieck (1960), we can go further

and try to recover:

a) A morphism u : M → M′ of A-modules from the B-morphism u ⊗ idB : M ⊗A B → M′ ⊗A B; in particular, describe which morphisms v : M ⊗A B → M′ ⊗A B are of the form u ⊗ idB;

b) An A-module M from the B-module M ⊗A B and an additional

datum.

This process has been coined faithfully flat descent by Grothendieck.

8.8.8. — The map B × B → B ⊗A B given by (b, b′) ↦ b′ ⊗ b is A-bilinear, hence there exists a morphism τ : B ⊗A B → B ⊗A B of A-modules such that τ(b ⊗ b′) = b′ ⊗ b for all b, b′ ∈ B. It is a morphism of A-algebras. For b ∈ B, notice the relation

g(b) = 1 ⊗ b − b ⊗ 1 = (id − τ)(1 ⊗ b).

Let M be an A-module. We let τ_M = τ ⊗ idM be the automorphism of B ⊗A B ⊗A M. Observe that although the A-module B ⊗A B ⊗A M can be viewed as a B ⊗A B-module, the morphism τ_M is not B ⊗A B-linear, but satisfies the relation

τ_M(β · ξ) = τ(β) · τ_M(ξ)

for β ∈ B ⊗A B and ξ ∈ B ⊗A B ⊗A M. We say that τ_M is τ-linear. Indeed, if β = b_1 ⊗ b′_1 and ξ = b ⊗ b′ ⊗ m are split tensors, this just follows from


the equalities

τ_M(β · ξ) = τ_M((b_1 ⊗ b′_1) · (b ⊗ b′ ⊗ m)) = τ_M(b_1b ⊗ b′_1b′ ⊗ m) = b′_1b′ ⊗ b_1b ⊗ m = (b′_1 ⊗ b_1) · (b′ ⊗ b ⊗ m) = τ(b_1 ⊗ b′_1) · τ_M(b ⊗ b′ ⊗ m) = τ(β) · τ_M(ξ).

For b ∈ B and m ∈ M, notice the relation

g_M(b ⊗ m) = 1 ⊗ b ⊗ m − b ⊗ 1 ⊗ m = (id − τ_M)(1 ⊗ b ⊗ m).

Consequently, for ξ ∈ B ⊗A M, one has g_M(ξ) = (id − τ_M)(1 ⊗ ξ).

Proposition (8.8.9). — Let f : A → B be a faithfully flat morphism of commutative rings and let M, M′ be A-modules. The map u ↦ idB ⊗ u from HomA(M, M′) to HomB(B ⊗A M, B ⊗A M′) is injective; its image is the set of all B-morphisms v : B ⊗A M → B ⊗A M′ such that (idB ⊗ v) ◦ τ_M = τ_{M′} ◦ (idB ⊗ v).

Proof. — The injectivity of this map follows from proposition 8.8.3, a).

Let E ∈ HomB(B⊗A M, B⊗A M′). If the morphism E is of the form idB ⊗D,

for D ∈ HomA(M,M′), then for all 1, 1′ ∈ B and < ∈ M, one has

�M′ ◦ (idB ⊗E)(1 ⊗ 1′ ⊗ <) = �M

′(1 ⊗ 1′ ⊗ D(<))= 1′ ⊗ 1 ⊗ D(<)= (idB ⊗E)(1′ ⊗ 1 ⊗ <)= (idB ⊗E) ◦ �M(1 ⊗ 1′ ⊗ <).

This proves that the two morphisms (idB ⊗E) ◦ �M and �M′ ◦ (idB ⊗E)

coincide on split tensors, hence are equal, as was to be shown.

Conversely, let us assume that (idB ⊗E) ◦ �M = �M′ ◦ (idB ⊗E). Then,

for every < ∈ M, one has

�M′(1 ⊗ E( 5M(<))) = �M

′ ◦ (idB ⊗E)(1 ⊗ 1 ⊗ <)= (idB ⊗E) ◦ �M(1 ⊗ 1 ⊗ <)= 1 ⊗ E(1 ⊗ <) = 1 ⊗ E( 5M(<)),

and 6M′(E( 5 (<))) = 0. Consequently, the image of the morphism E ◦

5M : M→ B⊗A M′is contained in Ker(6M

′). Since 5M′ is an isomorphism

from M′to Ker(6M

′), by theorem 8.7.10, this implies that there exists a


morphism D : M→ M′such that E ◦ 5M = 5M′ ◦ D. Then, for 1 ∈ B and

< ∈ M, one has

E(1 ⊗ <) = E(1 · (1 ⊗ <)) = 1 · E(1 ⊗ <) = 1 · (1 ⊗ D(<)) = 1 ⊗ D(<),so that E and idB ⊗D coincide on split tensors, hence are equal. �

8.8.10. — If a B-module N is of the form B ⊗A M, then �M is a �-linearautomorphism of the B ⊗A B-module B ⊗A N = B ⊗A B ⊗A M.

In fact, this automorphism �M satisfies an additional relation that we

have to make explicit.

To every �-linear endomorphism � of a B ⊗A B-module B ⊗A N, we

associate two endomorphisms of the A-module B⊗A B⊗A N, respectively

given by

�1 = idB ⊗� : 1 ⊗ 1′ ⊗ = ↦→ 1 ⊗ �(1′ ⊗ =)and

�1 = � ⊗ idN : 1 ⊗ 1′ ⊗ = ↦→ 1′ ⊗ 1 ⊗ =.The maps � = idB ⊗� and �′ = � ⊗ idB are automorphisms of the A-

algebraB⊗AB⊗AB. For 1, 1′ and 1′′ ∈ B, one has �(1⊗1′⊗1′′) = 1⊗1′′⊗1′and �′(1 ⊗ 1′ ⊗ 1′′) = 1′ ⊗ 1 ⊗ 1′′: the automorphism � swaps the last

two components, while the automorphism �′ swaps the first two ones.

As a reflection of the braid identity (2 3)(1 2)(2 3) = (1 3) = (1 2)(2 3)(1 2) in the symmetric group S3, one has

� ◦ �′ ◦ � = �′ ◦ � ◦ �′.If N = B ⊗A M and � = �M = � ⊗ idM, then �1 = � ⊗ idM and

�1 = �′ ⊗ idM, so that

�1 ◦ �1 ◦ �1 = �1 ◦ �1 ◦ �1.

Definition (8.8.11). — Let 5 : A→ B be a morphism of rings and let � : B ⊗A

B→ B ⊗A B be the automorphism of A-algebras such that �(1 ⊗ 1′) = 1′ ⊗ 1for 1, 1′ ∈ B. A descent datum of a B-module N with respect to the morphism

5 : A→ B is a �-linear automorphism � of B ⊗A N such that

�1 ◦ �1 ◦ �1 = �1 ◦ �1 ◦ �1.


If N,N′ are B-modules endowed with descent data �, �′ with respect to 5 , a

B-morphism D : N→ N′is compatible with the given descent data if one has

(idB ⊗D) ◦ � = �′ ◦ (idB ⊗D).

With this definition, we can reformulate the preceding results as

follows, for a ring morphism 5 : A→ B:

a) If M is an A-module, the map �M = � ⊗ idM is a descent datum of

the B-module B ⊗A M with respect to 5 ;

b) If M,M′ are A-modules, and if 5 is faithfully flat, then the map

D ↦→ idB ⊗D induces a bijection from HomA(M,M′) to the set of all

B-morphisms HomB(B ⊗A M,B ⊗A M′)which are compatible with the

descent data �M and �M′.

One says that �M is the canonical descent datum of the B-module B ⊗A M.

Observe that B-modules N endowed with a descent datum � form a

category: objects are pairs (N, �), and a morphism from (N, �) to (N′, �′) is a B-morphism f : N → N′ which is compatible with the descent data.

Let us denote this category by ModB/A. By what precedes, the tensor

product functor M ↦→ (B ⊗A M, �M) is a functor from ModA to ModB/A.

In this language, proposition 8.8.9 says that this functor is fully faithful;

the following theorem of Grothendieck implies that it is an equivalence

of categories.

Theorem (8.8.12) (Faithfully flat descent; Grothendieck)

Let 5 : A→ B be a faithfully flat morphism of rings. Let N be a B-module,

let � be a descent datum of N with respect to 5 and let M be the subset of all

G ∈ N such that �(1 ⊗ G) = 1 ⊗ G. Then the following properties hold:

a) M is an A-submodule of N;

b) There exists a uniquemorphism# : B⊗AM→ N such that#(1⊗G) = 1Gfor all 1 ∈ B and all G ∈ M;

c) The morphism # is compatible with the descent datum �M of B ⊗A M and

the given descent datum � of N;

d) The morphism # is an isomorphism of B-modules.

Proof. — a) Since the morphism idB ⊗� is A-linear, the set M is an

A-submodule of N.


b) The map from B ×A M to N given by (1, G) ↦→ 1G is A-multilinear,

hence there exists a unique morphism # : B ⊗A M → N such that

#(1 ⊗ <) = 1< for all 1 ∈ B and all < ∈ M.

c) Let us show that # is compatible with the given descent data. Let

1, 1′ ∈ B and < ∈ M; then

� ◦ (idB ⊗#)(1 ⊗ 1′ ⊗ <) = �(1 ⊗ #(1′ ⊗ <)) = �(1 ⊗ 1′<)= �((1 ⊗ 1′)(1 ⊗ <)) = �(1 ⊗ 1′) · �(1 ⊗ <)= (1′ ⊗ 1) · (1 ⊗ <) = 1′ ⊗ 1<.

On the other hand,

(idB ⊗#) ◦ �M(1 ⊗ 1′ ⊗ <) = (idB ⊗#)(1′ ⊗ 1 ⊗ <) = 1′ ⊗ 1<,

so that � ◦ (idB ⊗#) = (idB ⊗#) ◦ �M, since these two A-morphisms

coincide on split tensors. This proves that # is compatible with the given

descent data.

d) By definition, M is the kernel of the morphism 5N − � ◦ 5N, so that

we have an exact sequence

0 M N B ⊗A N

← → ← →9 ← →5N

−�◦ 5N

of A-modules, where 9 is the inclusion of M into N. Since B is a faithfully

flat A-module, tensoring this exact sequence by B furnishes again an

exact sequence, which is the upper row of the following diagram, the

second row being the exact sequence�N

5:

0 B ⊗A M B ⊗A N B ⊗A B ⊗A N

0 N B ⊗A N B ⊗A B ⊗A N.

← →

←→ #

← →idB ⊗ 9

←→ �

← →idB ⊗ 5N− idB ⊗�◦ 5N

←→ �1

← → ← →5N ← →6N

Let us check that the diagram is commutative. For 1 ∈ B and < ∈ M,

one has

� ◦ (idB ⊗ 9)(1 ⊗ <) = �(1 ⊗ <) = �((1 ⊗ 1) · (1 ⊗ <)

)= (1 ⊗ 1)�(1 ⊗ <) = (1 ⊗ 1) · (1 ⊗ <)= 1 ⊗ 1< = 5N ◦ #(1 ⊗ <),


so that the morphisms � ◦ (idB ⊗ 9) and 5N ◦ # coincide on split tensors;

they are thus equal, hence the commutativity of the left square.

Recall that one has 6N = 61

N− 62

N, where 61

N(1 ⊗ =) = 1 ⊗ 1 ⊗ = and

62

N(1 ⊗ =) = 1 ⊗ 1 ⊗ =, for 1 ∈ B and = ∈ N. We first prove that

�1 ◦ idB ⊗ 5N = 61

B◦ �. Indeed, for 1 ∈ B and = ∈ N, one has

�1 ◦ �1 ◦ (idB ⊗ 5N)(1 ⊗ =) = �1 ◦ �1(1 ⊗ 1 ⊗ =)= �1(1 ⊗ 1 ⊗ =)= 1 ⊗ �(1 ⊗ =)= 61

N◦ �(1 ⊗ =).

This proves that �1 ◦ �1 ◦ (idB ⊗ 5N) = 61

N◦ �.

Let 1 ∈ B and = ∈ N; let (18) and (=8) be finite families such that

�(1 ⊗ =) = ∑18 ⊗ =8; then

�(1 ⊗ =) = �((1 ⊗ 1) · (1 ⊗ =)

)= (1 ⊗ 1) · �(1 ⊗ =) =

∑18 ⊗ 1=8 .

Moreover,

�1 ◦ �1 ◦ �1(1 ⊗ 1 ⊗ =) = �1 ◦ �1(1 ⊗ 1 ⊗ =)= �1(1 ⊗ �(1 ⊗ =))

= �1(∑

1 ⊗ 18 ⊗ 1=8)

=

∑18 ⊗ 1 ⊗ 1=8

= 62

N

(∑18 ⊗ 1=8

)= 62

N◦ �(1 ⊗ =).

In other words,

62

N◦ � = �1 ◦ �1 ◦ �1 ◦ 62

N.

On the other hand,

(idB ⊗� ◦ 5N)(1 ⊗ =) = 1 ⊗ �(1 ⊗ =)= �1(1 ⊗ 1 ⊗ =)= �1 ◦ 62

N(1 ⊗ =),

so that

�1 ◦ �1 ◦ (idB ⊗� ◦ 5N) = �1 ◦ �1 ◦ �1 ◦ 62

N.


Thus the relation

�1 ◦ �1 ◦ (idB ⊗(� ◦ 5N)) = 62

N◦ �

follows from the definition of a descent datum, and the right hand

square is commutative as well.

We now deduce from the snake lemma (theorem 7.2.1) that # is

an isomorphism. In the above diagram, replace the source of the

morphism �1 with the image of the morphism idB ⊗( 5N − � ◦ 5N), andthe morphism �1 with its restriction �′

1, which is then injective. Since

idB ⊗ 9 is injective, the exact sequence of theorem 7.2.1 then writes

0→ Ker(#) → Ker(�) → Ker(�′1) → Coker(#) → Coker(�).

Since Ker(�) = Ker(�′1) = Coker(�) = 0, we conclude that Ker(#) =

Coker(#) = 0, hence # is an isomorphism, as was to be shown. �

8.9. Galois descent

Proposition (8.9.1). — 2 LetK→ L be a finite extension of fields. The following

properties are equivalent:

a) The extension is Galois;

b) There exists a finite set I and an isomorphism of L-algebras L ⊗K L ' LI.

Assume that they hold and let G = Gal(L/K). There exists a unique morphism

of K-algebras from L ⊗K L to LGsuch that 0 ⊗ 1 ↦→ (0�(1))�∈G, and this

morphism is an isomorphism of L-algebras.

Proof. — Assume that the extension K → L is Galois and let G =

Gal(L/K). The map from L × L to LGgiven by (0, 1) ↦→ (0�(1))�∈G is K-

bilinear, hence there exists a uniquemorphismofK-algebras! : L⊗KL→L

Gsuch that !(0 ⊗ 1) = (0�(1))�∈G for all 0, 1 ∈ L. It is also L-linear

when L ⊗K L is viewed as an L-algebra via the morphism 0 ↦→ 0 ⊗ 1.

Let us prove that ! is injective. Let (11, . . . , 13) be a basis of L as

a K-vector space; then (1 ⊗ 11, . . . , 1 ⊗ 13) is a basis of L ⊗K L as an

L-vector space. Let (01, . . . , 03) ∈ L3be such that !(∑ 08(1 ⊗ 18)) = 0.

By assumption, this implies that

∑38=108�(18) = 0 for every � ∈ G. As

2. Perhaps the converse should be left as an exercise?


a consequence,

∑38=108�(H) = 0 for every H ∈ L, so that

∑38=108� = 0 in

the set of K-linear morphism from L to L. By linear independence of

characters, 01 = · · · = 03 = 0.

Since the extension K→ L is Galois, one has Card(G) = [L : K] = 3,

hence dimL(L ⊗K L) = dimL(LG) so that the injective L-linear map ! is

an isomorphism.

Let us now assume that there exists a finite set I and an isomorphism

of L-algebras ! : L ⊗K L ' LI. Let Ω be an algebraic closure of L. Using

the associativity isomorphism Ω ⊗L L ⊗K L ' Ω ⊗K L, the morphism

idΩ ⊗! induces an isomorphism of Ω-algebras Ω ⊗K L ' ΩI. Let 1 ∈ L

and let P ∈ K[T] be its minimal polynomial; then K[1] is a sub-algebraof L isomorphic to K[T]/(P), and Ω[T]/(P) identifies with a sub-algebra

of ΩI. Since ΩI

is reduced, this implies that P has only simple roots

in Ω: it is a separable polynomial, and 1 is separable. Consequently,

L is a separable extension of K. Let us now use the primitive element

and choose 1 ∈ L such that L = K[1]; let P be its minimal polynomial.

Then LI ' L ⊗K L ' L[T]/(P). On the other hand, if P =

∏<8=1

P8 is the

decomposition of P as a product of irreducible polynomials in L[T],we get an isomorphism L

I ' ∏<8=1

L[T]/(P8). The maximal ideals of

the left hand side are of the kernels of the projections from LIto L

and their residue fields are L; the maximal ideals of the right hand

side are the kernels of the projections to L[T]/(P8), for 8 ∈ {1, . . . , <},and their residue fields are L[T]/(P8). This implies that deg(P8) = 1

for every 8 ∈ {1, . . . , <}: the polynomial P is split in L[T]. Let us

write P =∏<

8=1(T − 18). For every 8, there exists a unique morphism

of K-algebras �8 from L = K[1] to itself such that �8(1) = 18. Since the

extension K→ L is finite, these are isomorphisms, hence are elements of

Gal(L/K). This shows that Card(Gal(L/K)) = < = [L : K]; the extensionK→ L is then Galois. �
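For example (an illustration added to the text): the extension R → C is Galois with group G = {id, σ}, where σ is complex conjugation, and the isomorphism of the proposition is the map C ⊗R C → C × C sending a ⊗ b to (ab, aσ(b)); in particular, C ⊗R C is not a field, but a product of two copies of C.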

8.9.2. — Let f : K → L be a Galois extension and let G be its Galois group. Since every module over a field is flat, the morphism f is flat, and it is even faithfully flat since Spec(L) and Spec(K) are both reduced to one point. Faithfully flat descent asserts that the tensor product functor from K-vector spaces to L-vector spaces, V ↦ L ⊗K V, induces


an equivalence of categories from ModK to the category ModL/K of

L-vector spaces W endowed with descent data �.The definition of a descent datum involves L ⊗K L-modules and

L ⊗K L ⊗K L-modules, and the goal here is to reformulate it in terms of

L-vector spaces plus various maps indexed by G, using proposition 8.9.1

to replace the algebra L ⊗K L with the isomorphic algebra LG. The result

of this translation will be called Galois descent.

In fact, while the proposition describes an isomorphism L ⊗K L ' LG

of L-algebras, when L ⊗K L is viewed as an L-algebra by the morphism

0 ↦→ 0 ⊗ 1, it will be more convenient in what follows to view L ⊗K L as

an L-algebra by the morphism 1 ↦→ 1 ⊗ 1. It is also isomorphic to LG,

but via the morphism ! : L ⊗K L→ LGsuch that !(0 ⊗ 1) = (�(0)1) for

every 0, 1 ∈ L.

8.9.3. — For every � ∈ G, we let �� = (��,�)�∈G be the element of LG

such that ��,� = 1 if � = � and ��,� = 0 otherwise. One has ���� = 0 if

� ≠ �, and �2

� = ��; moreover, 1 =∑

�∈G ��.Let �� ∈ L ⊗K L be the unique element such that !(��) = ��. In

particular, for all �, � ∈ G, one has ���� = 0 if � ≠ �, and �2

� = ��;moreover, 1 =

∑�∈G ��.

Lemma (8.9.4). — Let � : L ⊗K L → L ⊗K L be the automorphism of K-

algebras such that �(0 ⊗ 1) = 1 ⊗ 0 for every 0, 1 ∈ L. Then ! ◦ � ◦ !−1is

the automorphism (0�) ↦→ (�(0�−1)).

Proof. — Let (G�) ∈ LG. Then (G�) =

∑�∈G ��G�, so that (G�) = !(G),

where G =∑

�∈G ��(1 ⊗ G�). For every � ∈ G, fix a representation �� =∑0�8⊗ 1�

8. Then G =

∑�∈G

∑8 0

�8⊗ 1�

8G�, so that �(G) =

∑�∈G

∑8 1

�8G�⊗ 0�8 .

Let H = (H�)�∈G = !(�(G)) = !(�(!−1(G�))). By definition, one has

H� =∑�∈G

∑8

�(1�8 G�)0�8 = �( ∑�∈G

∑8

�−1(0�8 )1�8 G�)

= �( ∑�∈G

!(��)�−1G�)= �(G�−1),

as was to be shown. �

Proposition (8.9.5). — Let W be an L-vector space.


a) There exists a unique morphism of K-vector spaces !W : L ⊗K W→WG

such that !W(0 ⊗ E) = (�(0)E)�∈G for all 0 ∈ L and all E ∈W. It is bijective.

b) A map � : L ⊗K W → L ⊗K W is a descent datum if and only if there

exists a family (��)�∈G satisfying the properties:

(i) For every � ∈ G, �� is a bijective K-linear map W→W such that

��(0E) = �(0)��(E) for every 0 ∈ L and every E ∈W;

(ii) One has !W ◦ � ◦ !−1

W((E�)) = (��(E�−1)) for every (E�) ∈W

G;

(iii) For every �, � ∈ G, one has ��� = �� ◦ ��.

Proof. — a) The map L ×W → WGgiven by (0, E) ↦→ (�(0)E)�∈G is

K-bilinear; consequently, there exists a uniquemorphism !W : L⊗KW→W

Gsuch that !W(0 ⊗ E) = (�(0)E).

By definition, one has !W((0⊗1)·(2⊗E)) = !W(02⊗1E) = (�(02)1E)� =!(0 ⊗ 1)!W(2 ⊗ E). This proves that !W( · �) = !( )!W(�) for every ∈ L ⊗K L and every � ∈ L ⊗K W.

Inparticular,!W(��(1⊗E)) = ��(E, . . . , E). Since (E�) =∑

�∈G ��(E� , . . . ),this implies that (E�) =

∑�∈G !W(��(1 ⊗ E�)). Therefore, the K-linear

map #W : WG→ L ⊗K W given by #W((E�)) =

∑�∈G ��(1 ⊗ E�) satisfies

!W ◦ #W = id; in particular, !W is surjective. If dimK(W) is finite,

then dimK(L ⊗K W) = [L : K]dimK(W) = dimK(WG) so that !W is an

isomorphism. Let � be an element of Ker(!W). Let us write � =∑08 ⊗ E8

and let W1 be the subspace of W generated by the family (E8); it is finitelydimensional. One has � ∈ L ⊗K W1, hence !W1

(�) = 0, hence � = 0. We

thus have proved that !W is bijective.

b) Let � : L ⊗K W→ L ⊗K W be a K-linear map. We express in terms

of the K-linear map �′ = !W ◦�◦!−1

Wthe conditions for � to be a descent

datum.

Let �′ = ! ◦ � ◦ !−1; by lemma 8.9.4, one has �′((0�)) = (�(0�−1)) for

every (0�) ∈ LG. In particular, �′(��) = ��−1.

First of all, � has to be bijective and �-linear, that is, �( �) = �( )�(�)for every ∈ L⊗K L and every � ∈ L⊗K W. Since !W is an isomorphism,

this means �′(0E) = �′(0)�′(E) for every 0 ∈ LGand every E ∈W

G.

Let us assume that this relation holds. Let E ∈ WG; since 1 =

∑�∈G ��,

one has E =∑

�∈G ��E =∑

�∈G ��E�, where, by abuse, we write ��E�for the element of W

Gwhose �-coordinate is equal to E�, and all other


coordinates are 0. We obtain

�′(E) =∑�∈G

�′(��E) =∑�∈G

�′(��)�′(E) =∑�∈G

��−1�′(E).

Moreover, using that �2

� = 1, we have

��−1�′(E) = �′(��E) = �′(�2

�E) = ��−1�(��E).

In other words, the �−1-component of �′(E), being equal to �(��E), only

depends of E�. This proves the existence of maps �� : W → W such

that �′((E�)) = (��(E�−1)); these maps �� are necessarily K-linear. Let

E = (E�) ∈WGand 0 = (0�) ∈ L

G; one has

�′(0E) = �′((0�E�)) = (��(0�−1E�−1))

and

�′(0)�′(E) = (�(0�−1)(��(E�−1) = (�(0�−1)��(E�−1)),so that the relation �′(0E) = �′(0)�′(E) is equivalent to the relations

��(0E) = �(0)��(E) for all 0 ∈ L, E ∈W and � ∈ G.

The map � is bijective if and only if �′ is bijective, which is equivalent

to the maps �� to be bijective. We thus see that this first condition for a

descent datum is equivalent to properties (i) and (ii).

Property (iii) will be a reformulation of the equality “�1 ◦ �1 ◦ �1 =

�1 ◦ �1 ◦ �1” of maps from L ⊗K L ⊗K W to itself.

With the present notation, “�1” is the map � = � ⊗ idW and “�1” is the

map � = idL ⊗�. The map

!W = !W

G ◦ (idL ⊗!W) : L ⊗K ⊗KW→WG×G

satisfies

!W(0 ⊗ 1 ⊗ E) = !W

G(0 ⊗ (�(1)E)) = (�(0)�(1)E)�,�∈G,

for every 0, 1 ∈ L and every E ∈W; it is a K-linear isomorphism.

Let �′ = !W ◦ � ◦ !−1

Wand �′ = !W ◦ � ◦ !−1

W. The second condition

for � to be a descent datum writes �′ ◦ �′ ◦ �′ = �′ ◦ �′ ◦ �′.Let us first compute �′. For 0, 1 ∈ L and E ∈ W, one has �(0 ⊗ 1 ⊗ E) =

1⊗ 0⊗E, so that �′((�(0)�(1)E)) = (�(0)�(1)E). Consequently, �′((E�,�)) =(E�,�) for every element (E�,�) of W

G×Gof the form !W(0 ⊗ 1 ⊗ E), with


0, 1 ∈ L and E ∈W. Since they generate WG×G

as a K-vector space, this

proves that

�′((E�,�)) = (E�,�)�,�∈Gfor every (E�,�) ∈W

G×G.

Let us now compute �′. To that aim, we will decompose an element

(E�,�) as a sum ∑�,�∈G

�(�,�)�(E�,�),

where �(�,�) is the element of LG×G

all of whose coordinates are zero,

but for the coordinate (�, �)which is equal to 1, and E�,� is an abuse of

language for the element of WG×G

all of whose coordinates are equal

to E�,�.

Fix �0, �0 ∈ G, as well as decompositions ��0=

∑08 ⊗ 18 and ��0

=∑2 9 ⊗ 3 9 in L ⊗K L — in other words,

∑�(08)18 = 1 if � = �0, and 0

otherwise, and

∑�(2 9)3 9 = 1 if � = �0 and 0 otherwise. Let E ∈ W and let

� =∑8 , 9

(2 9 ⊗ 1 ⊗ 3 9)(1 ⊗ 08 ⊗ 18)(1 ⊗ 1 ⊗ E) =∑8 , 9

2 9 ⊗ 08 ⊗ 183 9E.

For �, � ∈ G, the (�, �)-component of !W(�) is equal to∑8 , 9

�(2 9)�(08)(183 9E) =∑9

�(2 9)�(3 9)∑8

�(08)18E

=

∑9

�(2 9)�(3 9)��0E

=

{E if � = �0 and � = �0

0 otherwise.

Moreover,

� =∑8 , 9

(2 9 ⊗ 08 ⊗ 183 9)(1 ⊗ 1 ⊗ E),

so that

idL ⊗�(�) =∑8 , 9

(2 9 ⊗ 183 9 ⊗ 08)(1 ⊗ �(1 ⊗ E)),


since � is �-linear. Let us then choose a decomposition �(1 ⊗ E) =∑D: ⊗ E:, for some elements D: ∈ L and E: ∈W. One thus has

idL ⊗�(�) =∑8 , 9 ,:

(2 9 ⊗ 183 9 ⊗ 08)(1 ⊗ D: ⊗ E:) =∑8 , 9 ,:

2 9 ⊗ 183 9D: ⊗ 08E: .

It thus follows that for every �, � ∈ G, one has

�′(�(�0 ,�0)E) = !W(id⊗�(�))�,�=

∑8 , 9 ,:

�(2 9)�(183 9D:)08E:

=

∑8

�(18)08∑9

�(2 9)�(3 9)∑:

�(D:)E: .

One has ∑8

�(18)08 = �(∑8

�−1(08)18) = �(��,�−1

0

) = ��,�−1

0

and ∑9

�(2 9)�(3 9) = �(∑9

�−1�(2 9)3 9) = �(��−1�,�0

) = ��−1�,�0

.

Moreover,

(∑

�(D:)E:)� = !W(�(1 ⊗ E)) == �′(!W(1 ⊗ E)) = �′((E)�) = (��(E)).

Consequently,

E�,� = ��,�−1

0

��−1�,�0

��(E).

This is 0 unless � = �−1

0and �−1� = �0, that is � = �−1

0�0; in this case, one

has

E�−1

0,�−1

0�0

= ��−1

0

(E).We thus obtain that

�′(�(�0 ,�0)E) = �(�−1

0,�−1

0�0)��−1

0

(E).

The relation (�, �) = (�−1

0, �−1

0�0) is equivalent to (�0, �0) = (�−1, �0�) =

(�−1, �−1�), so that, for every element E = (E�,�) ∈WG×G

, we have

�′(E) = (��(E�−1 ,�−1�)).

Page 455: (MOSTLY)COMMUTATIVE ALGEBRA

8.9. GALOIS DESCENT 443

At this point, we can finally express the relation �′◦�′◦�′ = �′◦�′◦�′.For E = (E�,�) ∈W

G×G, one has

(�′ ◦ �′ ◦ �′(E))�,� = ��((�′ ◦ �′(E))�−1 ,�−1�)= ��((�′(E))�−1�,�−1)= ��(��−1�(E�−1�,�−1

)).On the other hand, one has

(�′ ◦ �′ ◦ �′(E))�,� = (�′ ◦ �′(E))�,�= ��((�′(E))�−1 ,�−1�) = ��(E�−1�,�−1).

Consequently, the relation �′ ◦ �′ ◦ �′ = �′ ◦ �′ ◦ �′ is equivalent to the

equalities �� ◦ ��−1� = �� for all �, � ∈ G. This concludes the proof of

the proposition.

Theorem (8.9.6) (Galois descent). — Let W be an L-vector space and let $(u_\sigma)$ be a family of K-linear maps, $u_\sigma : W \to W$, satisfying the following properties:

(i) One has $u_\sigma(av) = \sigma(a)\,u_\sigma(v)$ for all $a \in L$ and all $v \in W$;

(ii) One has $u_{\sigma\tau} = u_\sigma \circ u_\tau$ for all $\sigma, \tau \in G$.

Let V be the set of all elements $v \in W$ such that $u_\sigma(v) = v$ for all $\sigma \in G$. Then the following properties hold:

a) V is a K-vector subspace of W;

b) There exists a unique morphism $\psi : L \otimes_K V \to W$ such that $\psi(a \otimes v) = av$ for all $a \in L$ and all $v \in V$;

c) The morphism $\psi$ is an isomorphism.

Proof. — This is just a reformulation of theorem 8.8.12 in the case of the Galois extension $K \to L$. Let $\varphi_W : L \otimes_K W \to W^G$ be the K-linear isomorphism such that $\varphi_W(a \otimes v) = (\sigma(a)v)_\sigma$ for all $a \in L$ and all $v \in W$. The given conditions on the family $(u_\sigma)$ furnish a descent datum $\varphi$ on W, given by $\varphi_W \circ \varphi \circ \varphi_W^{-1}((v_\sigma)) = (u_\sigma(v_{\sigma^{-1}}))$ for all $(v_\sigma) \in W^G$. The condition $\varphi(1 \otimes v) = 1 \otimes v$ defining the submodule denoted by M in theorem 8.8.12 rewrites as $u_\sigma(v) = v$ for all $\sigma \in G$, hence M is equal to V. This concludes the proof. ∎
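For instance, when $K = \mathbf R$ and $L = \mathbf C$, so that $G = \{\mathrm{id}, c\}$ where c denotes complex conjugation, a family $(u_\sigma)$ as in the theorem amounts to a single conjugate-linear involution $u = u_c$ of the complex vector space W. The theorem then asserts that the real subspace of fixed vectors $V = \{v \in W : u(v) = v\}$ satisfies $\mathbf C \otimes_{\mathbf R} V \simeq W$; in other words, W has a basis of u-fixed vectors. The standard instance is
\[ W = \mathbf C^n, \qquad u(z_1, \dots, z_n) = (\bar z_1, \dots, \bar z_n), \qquad V = \mathbf R^n, \qquad \mathbf C \otimes_{\mathbf R} \mathbf R^n \simeq \mathbf C^n. \]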


Exercises

1-exo074) Let m and n be two coprime integers. Show that $(\mathbf Z/m\mathbf Z) \otimes_{\mathbf Z} (\mathbf Z/n\mathbf Z) = 0$.

2-exo076) Let I and J be ideals of a commutative ring A. Construct an isomorphism of A-algebras
\[ (A/I) \otimes_A (A/J) \simeq A/(I + J). \]

3-exo094) Let X be a topological space. Show that $\mathscr C(X, \mathbf R) \otimes_{\mathbf R} \mathbf C \simeq \mathscr C(X, \mathbf C)$.

4-exo077) Let M be a $\mathbf Z$-module.
a) Show that $M \otimes_{\mathbf Z} \mathbf Q$ is torsion-free.
b) Let $S = \mathbf Z \smallsetminus \{0\}$. Construct an isomorphism between $M \otimes_{\mathbf Z} \mathbf Q$ and $S^{-1}M$.
c) Show that $M_{\mathrm{tor}}$ is the kernel of the canonical morphism from M to $M \otimes_{\mathbf Z} \mathbf Q$.

5-exo585) Let A, B, C, D be rings. Let M be an (A,B)-bimodule, N be a (B,C)-bimodule and P be a (C,D)-bimodule. Show that there is a unique morphism of abelian groups
\[ M \otimes_B (N \otimes_C P) \to (M \otimes_B N) \otimes_C P \]
which maps $m \otimes (n \otimes p)$ to $(m \otimes n) \otimes p$. Show that it is an isomorphism of (A,D)-bimodules.

6-exo814) Let A be a ring, let M be a right A-module and let N be a left A-module. Let $(x_i)_{i\in I}$ be a generating family in M, let $(y_j)_{j\in J}$ be a generating family in N and let $(m_j)_{j\in J}$ be an almost-null family of elements of M; let $\theta = \sum m_j \otimes y_j \in M \otimes_A N$.
a) Assume that there exists an almost-null family $(a_{i,j})$ in A, indexed by $I \times J$, such that $m_j = \sum_{i\in I} x_i a_{i,j}$ for all $j \in J$, and $0 = \sum_{j\in J} a_{i,j} y_j$ for all $i \in I$. Prove that $\theta = 0$.
b) Conversely, one assumes that $\theta = 0$. Introducing the kernel K of the unique morphism $u : A^{(J)} \to N$ that maps the basis element of index j to $y_j$, for every $j \in J$, prove that there exists an almost-null family $(a_{i,j})$ as above.

7-exo813) Let M be a right A-module and let N be a submodule of M; let us denote by $j : N \to M$ the inclusion morphism. One says that j is pure, or that N is a pure submodule of M, if, for every left A-module X, the morphism $j \otimes \mathrm{id}_X : N \otimes_A X \to M \otimes_A X$ is injective.
a) Assume that N is pure. Let m, n be positive integers, let $U = (a_{i,j}) \in \mathrm M_{m,n}(A)$ and let $(y_1, \dots, y_n) \in N^n$. Assume that there exists $(x_1, \dots, x_m) \in M^m$ such that $y_j = \sum_{i=1}^m x_i a_{i,j}$ for every j. Prove that there exists such a family in $N^m$.
b) Conversely, prove that this property implies that for every finitely presented left A-module X, the map $j \otimes \mathrm{id}_X$ is injective.
c) Under this assumption, prove that N is a pure submodule of M.


8-exo826) Let k be a field, let A be the polynomial ring $k[X,Y]$ and let $M = (X,Y)$ be the ideal of A consisting of polynomials without constant term. Let $g : A \to A \oplus A$ be the morphism given by $g(P) = (YP, -XP)$ and $f : A \oplus A \to A$ be the morphism defined by $f(P,Q) = XP + YQ$.
a) Show that one has an exact sequence
\[ 0 \to A \xrightarrow{\ g\ } A \oplus A \xrightarrow{\ f\ } M \to 0. \]
b) By restriction and tensor product, deduce the following exact sequences:
\[ 0 \to A \xrightarrow{\ g\ } M \oplus M \xrightarrow{\ f\ } M^2 \to 0 \]
and
\[ 0 \to M \xrightarrow{\ g\otimes\mathrm{id}_M\ } M \oplus M \xrightarrow{\ f\otimes\mathrm{id}_M\ } M \otimes_A M \to 0. \]
(One writes $M^2$ for the square of the ideal M.)
c) Define a commutative diagram of A-modules with exact rows:
\[ \begin{array}{ccccccccc}
0 & \to & A & \to & M \oplus M & \to & M^2 & \to & 0 \\
  &     & \downarrow & & \downarrow & & \| & & \\
0 & \to & k & \to & M \otimes_A M & \to & M^2 & \to & 0.
\end{array} \]
d) Let $t = X \otimes Y - Y \otimes X \in M \otimes_A M$. Prove that $t \neq 0$ and that it generates the torsion submodule of $M \otimes_A M$.
e) Prove that the submodule of $M \otimes_A M$ generated by $X \otimes X$, $X \otimes Y$ and $Y \otimes Y$ is isomorphic to $M^2$.
f) Construct an isomorphism $M \otimes_A M \simeq M^2 \oplus k$.

9-exo081) Let k be a field, let $P \in k[X]$ be an irreducible polynomial of degree $> 1$. Let $K = k[X]/(P)$.
a) Show that K is a field.
b) Show that there exists a morphism $\varphi$ of k-algebras from $K \otimes_k K$ to K which maps $a \otimes b$ to $ab$ for every $a, b \in K$.
c) Show that $\varphi$ is not injective and conclude that $K \otimes_k K$ is not a field.

10-exo147) Let A be a local ring, let J be its maximal ideal and $K = A/J$ its residue field. Let M and N be two finitely generated A-modules; one also assumes that N is free. Let $f : M \to N$ be a morphism of A-modules such that the morphism
\[ f \otimes \mathrm{id}_K : M \otimes_A K \to N \otimes_A K \]
is an isomorphism.
a) Using Nakayama's lemma, show that f is surjective.
b) Show that f has a right inverse g, that is, show that there exists a morphism $g : N \to M$ such that $f \circ g = \mathrm{id}_N$.
c) Show that g is surjective and conclude that f is an isomorphism.

11-exo583) Let A be a commutative ring, let M and N be A-modules.


a) One assumes that M and N are finitely generated; show that $M \otimes_A N$ is finitely generated.
b) One assumes that M and N are simple. Show that $M \otimes_A N$ is isomorphic to M if M and N are isomorphic, and is 0 otherwise.
c) One assumes that M and N are A-modules of finite length. Show that $M \otimes_A N$ has finite length, and that
\[ \ell_A(M \otimes_A N) \leq \ell_A(M)\, \ell_A(N). \]

12-exo087) Let A be a local commutative ring, let J be its maximal ideal and let $K = A/J$ be its residue field, let M and N be two A-modules.
a) Recall the structures of K-modules on M/JM and N/JN. Construct a surjective homomorphism from $M \otimes_A N$ to $(M/JM) \otimes_K (N/JN)$.
b) One assumes that $M \otimes_A N = 0$. Show that $M/JM = 0$ or $N/JN = 0$.
c) If, moreover, M and N are finitely generated, show using Nakayama's lemma that $M = 0$ or $N = 0$.
d) Give an example of a commutative local ring A and of non-zero A-modules M and N such that $M \otimes_A N = 0$.

13-exo864) Let A be a commutative ring, let M and N be finitely generated A-modules. Using exercise 8/12, prove that $\mathrm{Supp}(M \otimes_A N) = \mathrm{Supp}(M) \cap \mathrm{Supp}(N)$.

14-exo171) Let A be a local commutative ring, let J be its maximal ideal. One assumes that A is an integral domain; let K be its field of fractions. Let M be a finitely generated A-module such that
\[ \dim_{A/J} M/JM = \dim_K M \otimes_A K. \]
Prove that M is a free A-module.

15-exo582) Let k be a field, let K and L be k-algebras which are (commutative) fields.
a) Observe that $K \otimes_k L$ is a commutative k-algebra.
b) Show that there are canonical morphisms from K to $K \otimes_k L$ and from L to $K \otimes_k L$, given by $a \mapsto a \otimes 1$ and $b \mapsto 1 \otimes b$ respectively. Observe that they are injective.
c) Show that there exists a field E together with field morphisms $i : K \to E$ and $j : L \to E$. (In other words, there exists a field E which contains simultaneously isomorphic copies of K and L.)

16-exo817) Let $f : A \to B$ be a morphism of commutative rings such that for every A-module M, the map $f_M : M \to M \otimes_A B$ (given by $m \mapsto m \otimes 1$) is injective. (One says that f is pure.)
a) Prove that f is injective.
b) Prove that for every ideal I of A, one has $I = f^{-1}(f(I) \cdot B)$.
c) Assume that B is a domain. Prove that A is a domain.
d) Assume that B is a domain and is integrally closed; prove that the same holds for A.


17-exo818) Let $f : A \to B$ be a morphism of commutative rings such that $I = f^{-1}(f(I) \cdot B)$ for every ideal I of A. For every A-module M, one denotes by $f_M$ the canonical morphism $M \to M \otimes_A B$.
a) Let I be an ideal of A. Prove that $f_{A/I}$ is injective.
b) Let M be an A-module and let N be a submodule of M. Assume that $f_N$ and $f_{M/N}$ are injective; prove that $f_M$ is injective.
c) Let M be a finitely generated A-module. Prove that $f_M$ is injective.
d) Prove that f is pure, that is, $f_M$ is injective for every A-module M.

18-exo587) Let M be a free A-module of rank n. Show that the symmetric algebra of M is isomorphic to the ring $A[T_1, \dots, T_n]$ of polynomials in n indeterminates.

19-exo588) Let M be an A-module. Let n be a nonnegative integer.
a) Define an action of the symmetric group $\mathfrak S_n$ on $\mathrm T^n(M)$ in such a way that $\sigma(m_1 \otimes \dots \otimes m_n) = m_{\sigma^{-1}(1)} \otimes \dots \otimes m_{\sigma^{-1}(n)}$ for every $\sigma \in \mathfrak S_n$ and any $m_1, \dots, m_n \in M$.
b) One says that a tensor x in $\mathrm T^n(M)$ is symmetric if $\sigma(x) = x$ for every $\sigma \in \mathfrak S_n$. Let $\mathrm T^n(M)^{\mathrm{sym}}$ be the set of all symmetric tensors in $\mathrm T^n(M)$. Show that it is a submodule of $\mathrm T^n(M)$.
c) Let then $\mathrm T(M)^{\mathrm{sym}}$ be the direct sum of the submodules $\mathrm T^n(M)^{\mathrm{sym}}$ in $\mathrm T(M)$. Show that it is a commutative subalgebra of $\mathrm T(M)$.
d) Assume that $n!$ is invertible in A. Show that one defines an endomorphism s of $\mathrm T^n(M)$ by setting $s(x) = \frac{1}{n!} \sum_{\sigma\in\mathfrak S_n} \sigma(x)$, for $x \in \mathrm T^n(M)$ (symmetrization of the tensor x). Show that $s(x) \in \mathrm T^n(M)^{\mathrm{sym}}$ for every $x \in \mathrm T^n(M)$. Deduce that the canonical map from $\mathrm T^n(M)$ to $\mathrm S^n(M)$ induces an isomorphism from $\mathrm T^n(M)^{\mathrm{sym}}$ to $\mathrm S^n(M)$.
e) In particular, if A contains $\mathbf Q$, the A-module $\mathrm T(M)^{\mathrm{sym}}$ is canonically isomorphic to $\mathrm S(M)$. This gives another structure of an A-algebra on $\mathrm T(M)^{\mathrm{sym}}$, which is the one deduced from that of $\mathrm S(M)$ by this canonical isomorphism. Explicitly, if x and y are symmetric tensors, show that their new product is the symmetrization of the tensor $x \otimes y$.
f) Assume that $A = \mathbf Z$ and that $M = \mathbf Z^2$. Let $(e, f)$ be the canonical basis of M. Show that $(e \otimes e, e \otimes f, f \otimes e, f \otimes f)$ is a basis of $\mathrm T^2(M)$. Show that $(e \otimes e, e \otimes f + f \otimes e, f \otimes f)$ is a basis of $\mathrm T^2(M)^{\mathrm{sym}}$. Show that the canonical morphism from $\mathrm T^2(M)^{\mathrm{sym}}$ to $\mathrm S^2(M)$ is not surjective. More precisely, show that the quotient of $\mathrm S^2(M)$ by the submodule generated by the image of $\mathrm T^2(M)^{\mathrm{sym}}$ is isomorphic to $\mathbf Z/2\mathbf Z$.

20-exo589) Let M and P be A-modules, let $f : M^n \to P$ be a symmetric n-linear map from $M^n$ to P. Show that there exists a unique morphism of A-modules $\varphi : \mathrm S^n(M) \to P$ such that $\varphi(m_1 \cdots m_n) = f(m_1, \dots, m_n)$ for every $(m_1, \dots, m_n) \in M^n$.

21-exo593) Let M be a free A-module of rank n, let u be an endomorphism of M. Let $M[X]$ be the $A[X]$-module $M \otimes_A A[X]$.
a) Identify $M[X]$ with the abelian group $\bigoplus_{n\in\mathbf N} M$ endowed with the external multiplication $(\sum a_n X^n) \cdot (m_n) = (\sum_k a_k m_{n-k})_{n\in\mathbf N}$.
b) Show that $(m_n) \mapsto (u(m_n))$ is an endomorphism of $M[X]$, still denoted u.
c) Show that the determinant of $X\,\mathrm{id}_{M[X]} - u$ is the characteristic polynomial $P_u$ of u.
d) Identify the quotient of $M[X]$ by the image of the endomorphism $X - u : m \mapsto Xm - u(m)$ with the $A[X]$-module $M_u$. (Recall that $M_u$ is the abelian group M with external multiplication given by the morphism $P \mapsto P(u)$ from $A[X]$ to $\mathrm{End}(M)$.)
e) Using exercise 3/31, show that $P_u(u) = 0$ (theorem of Hamilton–Cayley).

22-exo592) Let M be a finitely generated A-module, let u be an endomorphism of M. Let $f : A^n \to M$ be any surjective morphism of A-modules.
a) Show that there exists an endomorphism v of $A^n$ such that $f(v(x)) = u(f(x))$ for every $x \in A^n$.
b) Show that there exist elements $a_1, \dots, a_n \in A$ such that
\[ u^n(x) + a_1 u^{n-1}(x) + \dots + a_{n-1} u(x) + a_n x = 0 \]
for every $x \in M$.

23-exo595) Let K be a field, let V be a K-vector space. Let $V' = \mathrm{Span}(x_1, \dots, x_n)$ and $V'' = \mathrm{Span}(y_1, \dots, y_n)$ be subspaces of V with the same dimension n.
a) If $V' = V''$, show that the vectors $x_1 \wedge \dots \wedge x_n$ and $y_1 \wedge \dots \wedge y_n$ in $\Lambda^n(V)$ are collinear.
b) Let $\varphi_1, \dots, \varphi_n$ be linear forms on V. Show that there exists a unique linear form $\Phi$ on $\Lambda^n(V)$ such that $\Phi(e_1 \wedge \dots \wedge e_n) = \det(\varphi_i(e_j))$ for every $e_1, \dots, e_n \in V$.
c) Show that two vectors of a vector space are collinear if and only if any linear form which vanishes on one vanishes on the other.
d) If $x_1 \wedge \dots \wedge x_n$ and $y_1 \wedge \dots \wedge y_n$ are collinear, show that $V' = V''$.

24-exo594) Let M be an A-module, let $M'$ and $M''$ be submodules of M such that $M = M' \oplus M''$.
a) Show that there exists a unique morphism of A-modules $\lambda : \Lambda(M') \otimes \Lambda(M'') \to \Lambda(M)$ which maps $(e_1 \wedge \dots \wedge e_p) \otimes (f_1 \wedge \dots \wedge f_q)$ to $e_1 \wedge \dots \wedge e_p \wedge f_1 \wedge \dots \wedge f_q$ for every $e_1, \dots, e_p \in M'$ and $f_1, \dots, f_q \in M''$.
b) If $M'$ and $M''$ are free, show that $\lambda$ is an isomorphism.
c) Show that $\lambda$ is not a morphism of algebras (unless $M' = 0$ or $M'' = 0$).
d) Show that there exists a unique structure of an A-algebra on $\Lambda(M') \otimes \Lambda(M'')$ such that $(\alpha' \otimes \alpha'')(\beta' \otimes \beta'') = (-1)^{q'p''} (\alpha' \wedge \beta') \otimes (\alpha'' \wedge \beta'')$, if $\alpha' \in \Lambda^{p'}(M')$, $\alpha'' \in \Lambda^{p''}(M'')$, $\beta' \in \Lambda^{q'}(M')$, $\beta'' \in \Lambda^{q''}(M'')$.
e) Show that $\lambda$ is an isomorphism of A-algebras for this structure.

25-exo804) Let A and B be noetherian local rings, let $M_A$ and $M_B$ denote their maximal ideals and let $f : A \to B$ be a morphism of rings such that $M_A = f^{-1}(M_B)$.
a) Prove that $\dim(B) \leq \dim(A) + \dim(B/M_A B)$. (Use theorem 9.3.3.)
b) Assume, moreover, that f is flat. Let $n = \dim(A)$ and let $(P_0, \dots, P_n = M_A)$ be a chain of prime ideals of A. Prove that there exists a chain $(Q_0, \dots, Q_n)$ of prime ideals of B such that $P_j = f^{-1}(Q_j)$ for all j. (This is a going-down theorem for flat morphisms.)
c) Still assuming that f is flat, prove that $\dim(B) = \dim(A) + \dim(B/M_A B)$.


26-exo815) Let A be a principal ideal domain and let $a \in A$. Let M be an $A/(a)$-module.
a) Let $b, c \in A$ be such that $a = bc$, let $I = bA/aA$. Show that the morphism $x \mapsto bx$ induces an isomorphism from $A/cA$ to I, and an isomorphism from $M/cM$ to $I \otimes_{A/(a)} M$.
b) Prove that M is flat if and only if the following condition holds: for every pair $(b, c)$ in A such that $a = bc$, and for every $x \in M$ such that $bx = 0$, there exists $y \in M$ such that $x = cy$.

27-exo816) Let $f : A \to B$ be a flat morphism of commutative rings. Let $a \in A$ be a regular element. Prove that $f(a)$ is a regular element of B.

28-exo839) Let A be a commutative ring, let $a_0, a_1, \dots, a_n \in A$ and let $B = A[T_1, \dots, T_n]/(a_0 + a_1 T_1 + \dots + a_n T_n)$.
a) Assume that there exists $i \in \{1, \dots, n\}$ such that $a_i$ is a unit. Prove that B is a free A-module.
b) Assume that $(a_1, \dots, a_n)$ generate the unit ideal. Prove that B is a faithfully flat A-module.
c) Assume that $(a_0, \dots, a_n)$ generate the unit ideal. Prove that B is a flat A-module, and that it is faithfully flat if and only if $(a_1, \dots, a_n)$ generate the unit ideal. (Prove that a prime ideal $P \in \mathrm{Spec}(A)$ is of the form $A \cap Q$, for $Q \in \mathrm{Spec}(B)$, if and only if $P \notin V((a_1, \dots, a_n))$.)
(The more general exercise 6.4 of Eisenbud (1995) treats the case where $a_0 + a_1 T_1 + \dots + a_n T_n$ is replaced by any polynomial $f \in A[T_1, \dots, T_n]$, but its proof goes beyond the techniques of this book.)

29-exo838) Let $f : A \to B$ be a flat morphism of commutative local rings. Let $J_A$ be the maximal ideal of A and let $J_B$ be the maximal ideal of B; assume that $f^{-1}(J_B) = J_A$. Prove that for every A-module M, one has
\[ \ell_B(M \otimes_A B) = \ell_A(M)\, \ell_B(B/J_A B). \]

30-exo862) Let A be a commutative ring.
a) Let $a \in A$ be such that the A-module $A/(a)$ is flat. Prove that there exists $b \in A$ such that $a = a^2 b$.
b) If every A-module is flat, then A is a von Neumann ring.
c) Conversely, if A is a von Neumann ring, then every A-module is flat.

31-exo861) Let A be a (possibly noncommutative) local ring and let J be its maximal ideal. Let M be a finitely presented flat right A-module.
a) Construct a morphism $f : A^n \to M$ that induces an isomorphism $(A/J)^n \to M/MJ$.
b) Let $P = \mathrm{Ker}(f)$. Construct a morphism $g : A^m \to P$ that induces an isomorphism $(A/J)^m \to P/PJ$.
c) Prove that the canonical morphism $P \otimes_A J \to P$ is an isomorphism.
d) Conclude that $P = 0$ and that M is a free A-module.
e) Let A be any commutative ring (not necessarily local), and let M be a finitely presented flat A-module. Prove that for every prime ideal P of A, the $A_P$-module $M_P$ is free. Conclude that M is projective (cf. corollary 8.6.13).


CHAPTER 9

ALGEBRAS OF FINITE TYPE OVER A FIELD

In this final chapter, where all rings are assumed to be commutative, I initiate the study of general noetherian rings, with a special emphasis on finitely generated algebras over a field. Once translated into geometric properties of algebraic subsets, these results form the foundations of algebraic geometry, but this interpretation will only be slightly evoked here.

Noether's normalization theorem is a tool to reduce the study of a general finitely generated algebra over a field to the study of a polynomial ring. The proof I explain is valid over any field, and the classical proof over infinite fields is given as an exercise. I then explore a few of its consequences: a general proof of Hilbert's Nullstellensatz and Zariski's version (which, however, can be proved more directly, see the exercises), the fact that every prime ideal of such an algebra is an intersection of maximal ideals, a property which gives rise to the notion of a Jacobson ring, etc.

The next three sections are devoted to the notions of dimension and codimension of rings. Based on lengths of chains of prime ideals, they embody the idea that a point is contained in a curve, which is contained in a surface, etc. For integral finitely generated algebras over a field, I show that the dimension equals the transcendence degree of the field of fractions. The theory of codimension requires Krull's Hauptidealsatz, which embodies another intuition, that $p$ equations, if well chosen, define a subset of codimension $p$. I then establish the expected relation between dimension and codimension.

I give another application of Noether's normalization theorem to prove that if an integral domain A is a finitely generated algebra over a field, then the integral closure of A is a finitely generated A-module.

A last section studies Dedekind rings, the algebraic analogues of curves. Their importance was first revealed in algebraic number theory, for rings of integers in number fields furnish fundamental examples. Although Dedekind rings are not unique factorization domains in general, their (nonzero) ideals admit a similar factorization as products of maximal ideals. Their defect of unique factorization is encoded in their class groups; in the case of rings of integers of number fields, I prove Minkowski's theorem that this class group is finite.

9.1. Noether’s normalization theorem

Combined with the properties of integral morphisms, Noether's normalization theorem is a powerful tool for the study of finitely generated algebras over a field. This section and the following illustrate this fact.

Theorem (9.1.1) (Normalization theorem). — Let K be a field, let A be a finitely generated K-algebra. Then there exist an integer $n \geq 0$ and elements $a_1, \dots, a_n \in A$ such that the unique morphism of K-algebras from $K[X_1, \dots, X_n]$ to A such that $X_i \mapsto a_i$ is injective and integral.

Proof. — Let $(x_1, \dots, x_m)$ be a family of elements of A such that $A = K[x_1, \dots, x_m]$. Let us prove the theorem by induction on m. If $m = 0$, then $A = K$ and the result holds with $n = 0$. We thus assume that $m \geq 1$ and that the result holds for any K-algebra which is generated by at most $m - 1$ elements.

Let $\varphi : K[X_1, \dots, X_m] \to A$ be the unique morphism of K-algebras such that $\varphi(X_i) = x_i$. If $\varphi$ is injective, the result holds, taking $n = m$ and $a_i = x_i$ for every i.

We may thus assume that there is a non-zero polynomial $P \in K[X_1, \dots, X_m]$ such that $P(x_1, \dots, x_m) = 0$. Let $(c_{\mathbf n})$ be the coefficients of P, so that
\[ P = \sum_{\mathbf n \in \mathbf N^m} c_{\mathbf n} \prod_{i=1}^m X_i^{n_i}. \]
Let r be an integer strictly greater than the degree of P in each variable; in other words, for every $\mathbf n \in \mathbf N^m$, one has $c_{\mathbf n} = 0$ unless $n_i < r$ for all i; then set $y_i = x_i - x_1^{r^{i-1}}$ for $i \in \{2, \dots, m\}$. Let $B = K[y_2, \dots, y_m]$ be the subalgebra of A generated by $y_2, \dots, y_m$; we are going to show that A is integral over B.

We define a polynomial $Q \in B[T]$ by
\begin{align*}
Q(T) &= P(T, y_2 + T^r, \dots, y_m + T^{r^{m-1}}) \\
&= \sum_{\mathbf n \in \mathbf N^m} c_{\mathbf n}\, T^{n_1} (y_2 + T^r)^{n_2} \cdots (y_m + T^{r^{m-1}})^{n_m} \\
&= \sum_{\mathbf n \in \mathbf N^m} \sum_{j_2=0}^{n_2} \cdots \sum_{j_m=0}^{n_m} \binom{n_2}{j_2} \cdots \binom{n_m}{j_m}\, c_{\mathbf n}\, y_2^{n_2-j_2} \cdots y_m^{n_m-j_m}\, T^{\,n_1 + \sum_{i=2}^m j_i r^{i-1}}
\end{align*}
and observe that
\[ Q(x_1) = P(x_1, y_2 + x_1^r, \dots, y_m + x_1^{r^{m-1}}) = P(x_1, x_2, \dots, x_m) = 0. \]
Order $\mathbf N^m$ with the “reverse lexicographic order”: $(n'_1, \dots, n'_m) < (n_1, \dots, n_m)$ if and only if $n'_m < n_m$, or $n'_m = n_m$ and $n'_{m-1} < n_{m-1}$, etc. If one only considers sequences $\mathbf n$ such that $\sup(n_j) < r$, in particular for elements $\mathbf n \in \mathbf N^m$ such that $c_{\mathbf n} \neq 0$, this ordering corresponds with the one given by the expansion in base r (written in the reverse order): $(n'_1, \dots, n'_m) < (n_1, \dots, n_m)$ if and only if
\[ n'_m r^{m-1} + n'_{m-1} r^{m-2} + \cdots + n'_2 r + n'_1 < n_m r^{m-1} + n_{m-1} r^{m-2} + \cdots + n_2 r + n_1. \]
Let $\mathbf n$ be the largest multi-index in $\mathbf N^m$, for that ordering, such that $c_{\mathbf n} \neq 0$. For any $\mathbf n' \in \mathbf N^m$ such that $c_{\mathbf n'} \neq 0$, one has $n'_i < r$ for every i, by definition of r, so that for any $j_2 \in \{0, \dots, n'_2\}, \dots, j_m \in \{0, \dots, n'_m\}$,
\[ n'_1 + j_2 r + \cdots + j_m r^{m-1} \leq n'_1 + n'_2 r + \cdots + n'_m r^{m-1} \leq n_1 + n_2 r + \cdots + n_m r^{m-1}, \]
and equalities are only possible if $\mathbf n' = \mathbf n$ and $j_2 = n'_2, \dots, j_m = n'_m$. This implies that the degree of Q is equal to $n_1 + n_2 r + \cdots + n_m r^{m-1}$ and that only the term with $j_k = n_k$ for $k \in \{2, \dots, m\}$ contributes to the leading coefficient, which thus equals $c_{\mathbf n}$, a non-zero element of K. In particular, Q is a polynomial in B[T] whose leading coefficient is a unit, so that $x_1$ is integral over B.

Consequently, $B[x_1]$ is integral over B. For every $i \in \{2, \dots, m\}$, one has $x_i = y_i + x_1^{r^{i-1}} \in B[x_1]$. Since $A = K[x_1, \dots, x_m]$, we conclude that $A = B[x_1]$ and A is integral over B.

By induction, there exist an integer $n \leq m - 1$ and elements $a_1, \dots, a_n \in B$ such that the unique morphism $f : K[T_1, \dots, T_n] \to B$ of K-algebras such that $f(T_i) = a_i$ for all i is injective and such that B is integral over $K[a_1, \dots, a_n]$. Then A is integral over $K[a_1, \dots, a_n]$ as well, and this concludes the proof of the theorem. ∎
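To see the substitution of the proof at work on a small example, consider the K-algebra $A = K[x_1, x_2] = K[X_1, X_2]/(X_1X_2 - 1)$, so that $P = X_1X_2 - 1$; one may take $r = 2$, set $y_2 = x_2 - x_1^2$, and compute
\[ Q(T) = P(T, y_2 + T^2) = T(y_2 + T^2) - 1 = T^3 + y_2 T - 1. \]
This polynomial is monic and satisfies $Q(x_1) = 0$, so that $x_1$ is integral over $B = K[y_2]$ and $A = B[x_1]$; the morphism $K[T] \to A$ sending T to $x_2 - x_1^2$ is injective and integral.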

As a first application, let us prove the general case of Hilbert's Nullstellensatz (theorem 2.3.1), which we had only proved under the assumption that the field was uncountable.

The most fundamental form of this theorem is due to Zariski and claims:

Theorem (9.1.2) (Zariski). — Let K be a field and let A be a finitely generated K-algebra. One assumes that A is a field. Then A is a finite extension of K.

Proof. — By Noether's normalization theorem, there exist an integer $n \geq 0$ and an injective morphism of K-algebras $f : K[X_1, \dots, X_n] \to A$ such that A is integral over the image B of f. Since A is a field, proposition 4.2.5 implies that B is a field as well. However, B, being the image of the injective morphism f, is isomorphic to the ring $K[X_1, \dots, X_n]$ of polynomials in n variables with coefficients in K. For $n \geq 1$, this ring is not a field. Consequently $n = 0$ and A is algebraic over K. Being moreover finitely generated as an algebra, corollary 4.2.2 implies that it is a finite extension of K. ∎

As a corollary, we can also give a complete proof of theorem 2.3.1. Let us first recall its statement.

Corollary (9.1.3). — Let n be a positive integer, let K be an algebraically closed field and let M be a maximal ideal of the ring $K[X_1, \dots, X_n]$. There exists a unique element $(a_1, \dots, a_n) \in K^n$ such that $M = (X_1 - a_1, \dots, X_n - a_n)$.

Proof. — Let L be the residue field $K[X_1, \dots, X_n]/M$ and let $\pi : K[X_1, \dots, X_n] \to L$ be the canonical surjection. By construction, L is a finitely generated K-algebra, and it is a field. By Zariski's theorem (theorem 9.1.2), L is an algebraic extension of K. Since K is algebraically closed, $L = K$. For $i \in \{1, \dots, n\}$, let $a_i = \pi(X_i)$; by definition of the quotient ring, it is the only element of K such that $X_i - a_i \in M$. Then M contains the ideal $(X_1 - a_1, \dots, X_n - a_n)$. On the other hand, the same proof as for theorem 2.3.1 implies that this ideal is maximal, so that $M = (X_1 - a_1, \dots, X_n - a_n)$. ∎


At this point, we advise the reader to read again about the correspondence between ideals and algebraic sets discussed in section 2.3.

We now pass to other important consequences of theorem 9.1.2.

Corollary (9.1.4). — Let K be a field, let A be a finitely generated K-algebra and let M be a prime ideal of A. Then M is maximal if and only if A/M is finite-dimensional over K; it is then a finite extension of K.

Proof. — Assume that M is maximal; then A/M is a finitely generated K-algebra which is a field. By theorem 9.1.2, it is a finite extension of K and, in particular, its dimension as a K-vector space is finite.

Conversely, assume that the quotient K-algebra $L = A/M$ is finite-dimensional. It is also an integral domain. For every non-zero $a \in L$, the map $b \mapsto ab$ from L to itself is K-linear and injective, hence it is bijective; in particular, every non-zero element of L is invertible and L is a field. Consequently, M is a maximal ideal of A. ∎

Corollary (9.1.5). — Let K be a field and let $f : A \to B$ be a morphism of finitely generated K-algebras. For every maximal ideal M of B, $f^{-1}(M)$ is a maximal ideal of A.

Proof. — We know that the operation $f^{-1}$ induces a map from $\mathrm{Spec}(B)$ to $\mathrm{Spec}(A)$, so that $f^{-1}(M)$ is a prime ideal; we need to prove that it is even maximal. Passing to the quotient, f induces an injective morphism $\varphi : A/f^{-1}(M) \to B/M$ of finitely generated K-algebras. Moreover, B/M is a field. By theorem 9.1.2, it is thus a finite extension of K, hence $f^{-1}(M)$ is a prime ideal of A such that $A/f^{-1}(M)$ is finite-dimensional. By the preceding corollary, $f^{-1}(M)$ is a maximal ideal. ∎

The following corollary strengthens proposition 2.2.11.

Corollary (9.1.6). — Let K be a field and let A be a finitely generated K-algebra.

a) The nilradical and the Jacobson radical of A coincide.

b) For every ideal I of A, $\sqrt I$ is the intersection of all maximal ideals of A which contain I.

c) In particular, every prime ideal P of A is the intersection of the maximal ideals of A which contain P.

One sums up assertions b) and c) by saying that A is a Jacobson ring.

Proof. — a) We need to prove that an element $a \in A$ is nilpotent if and only if it belongs to every maximal ideal of A. One direction is clear: if a is nilpotent, it belongs to every prime ideal of A, hence to every maximal ideal of A. Conversely, let us assume that a is not nilpotent and let us show that there exists a maximal ideal M of A such that $a \notin M$. Since a is not nilpotent, the K-algebra $A_a = A[1/a]$ is not null; it is also a finitely generated K-algebra. By corollary 9.1.5, the inverse image in A of a maximal ideal of $A_a$ is a maximal ideal of A which does not contain a. This concludes the proof of assertion a).

b) Let $B = A/I$; it is a finitely generated K-algebra and its maximal ideals are of the form M/I, where M is a maximal ideal of A containing I. By part a), the nilradical of B is the intersection of the maximal ideals of B. Since the class in B of an element $a \in A$ is nilpotent if and only if $a \in \sqrt I$, this implies that $\sqrt I$ is the intersection of all maximal ideals of A which contain I.

c) follows from b) applied to $I = P$. ∎
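The finite generation hypothesis cannot be removed: for instance, the ring of formal power series K[[T]] is a local ring which is not a finitely generated K-algebra; its nilradical is $(0)$ whereas its Jacobson radical is the maximal ideal $(T)$, and the prime ideal $(0)$ is not an intersection of maximal ideals.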

Let us give another important application.

Theorem (9.1.7). — Let K be an algebraically closed field and let A, B be two K-algebras.

a) If A and B are integral domains, then $A \otimes_K B$ is also an integral domain.

b) If A and B are reduced, then $A \otimes_K B$ is also reduced.

Proof. — We first prove the theorem under the assumption that A and B are finitely generated.

a) Assume that A and B are integral domains.

The tensor product of two non-zero K-vector spaces is a non-zero K-vector space; consequently, $A \otimes_K B \neq 0$.

Let then f and g be two elements of $A \otimes_K B$ such that $fg = 0$. We may decompose f as a sum $\sum_{i=1}^r a_i \otimes b_i$ of split tensors, where $b_1, \dots, b_r$ are linearly independent over K. Similarly, we write $g = \sum_{j=1}^s a'_j \otimes b'_j$, where $b'_1, \dots, b'_s$ are linearly independent over K.

Let M be a maximal ideal of A. The quotient ring A/M is a finitely generated K-algebra, and is a field; consequently, it is an algebraic extension of K, hence is isomorphic to K since K is algebraically closed. Let $\mathrm{cl}_M : A \to K$ be the corresponding morphism of K-algebras with kernel M. Let also $\theta_M : A \otimes_K B \to B$ be the morphism $\mathrm{cl}_M \otimes \mathrm{id}_B$; it is a morphism of K-algebras.

Since
\[ \theta_M(f)\,\theta_M(g) = \theta_M(fg) = 0 \]
and B is an integral domain, either $\theta_M(f) = 0$ or $\theta_M(g) = 0$. Moreover, one has
\[ \theta_M(f) = \sum_{i=1}^r \mathrm{cl}_M(a_i)\, b_i \quad\text{and}\quad \theta_M(g) = \sum_{j=1}^s \mathrm{cl}_M(a'_j)\, b'_j. \]
Assume that $\theta_M(f) = 0$. Since $b_1, \dots, b_r$ are linearly independent over K, we conclude that $\mathrm{cl}_M(a_i) = 0$ for every $i \in \{1, \dots, r\}$; in other words, the ideal $I = (a_1, \dots, a_r)$ is contained in M.

Similarly, if $\theta_M(g) = 0$, we obtain that the ideal $J = (a'_1, \dots, a'_s)$ is contained in M.

In any case, $I \cap J \subset M$.

This is valid for any maximal ideal M of A. By corollary 9.1.6, every element of $I \cap J$ is nilpotent. Since A is an integral domain, $I \cap J = 0$.

Assume that $f \neq 0$. Then $I \neq 0$; let thus x be a non-zero element of I. For every $y \in J$, one has $xy \in I \cap J$, hence $xy = 0$. Since A is an integral domain, this implies $y = 0$, hence $J = 0$, hence $a'_1 = \dots = a'_s = 0$ and $g = 0$. This concludes the proof that $A \otimes_K B$ is an integral domain.

b) The proof is analogous. Let $f \in A \otimes_K B$ be a nilpotent element. We write $f = \sum_{i=1}^r a_i \otimes b_i$, where $a_1, \dots, a_r \in A$ and $b_1, \dots, b_r \in B$, chosen such that $b_1, \dots, b_r$ are linearly independent over K.

Let M be a maximal ideal of A, let $\mathrm{cl}_M : A \to A/M \simeq K$ be the quotient morphism and let $\theta_M = \mathrm{cl}_M \otimes \mathrm{id}_B : A \otimes_K B \to B$. Since f is nilpotent, $\theta_M(f)$ is nilpotent in B, hence $\theta_M(f) = 0$, because B is reduced. Since $\theta_M(f) = \sum_{i=1}^r \mathrm{cl}_M(a_i) b_i$, this implies that the ideal $I = (a_1, \dots, a_r)$ is contained in M.

By corollary 9.1.6, every element of I is nilpotent. Since A is reduced, $I = (0)$ and $a_1 = \dots = a_r = 0$. This proves $f = 0$.

We now remove the assumption that A and B are finitely generated. For a), we consider $f, g \in A \otimes_K B$ such that $fg = 0$; for b), we consider $f \in A \otimes_K B$ such that f is nilpotent. In both cases, there exist finitely generated subalgebras $A' \subset A$ and $B' \subset B$ such that f and g belong to $A' \otimes_K B'$ (with the notation introduced above, just take $A' = K[a_1, \dots, a_r, a'_1, \dots, a'_s]$ and $B' = K[b_1, \dots, b_r, b'_1, \dots, b'_s]$ in case a), and $A' = K[a_1, \dots, a_r]$ and $B' = K[b_1, \dots, b_r]$ in case b)). These algebras $A'$ and $B'$ are integral domains (resp. reduced), and $A' \otimes_K B'$ is a subalgebra of $A \otimes_K B$. One thus has $fg = 0$ in $A' \otimes_K B'$ (resp. f is nilpotent in $A' \otimes_K B'$). By the finitely generated case, $f = 0$ or $g = 0$ (resp. $f = 0$), as was to be shown. ∎
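The hypothesis that K be algebraically closed is essential: for instance,
\[ \mathbf C \otimes_{\mathbf R} \mathbf C \simeq \mathbf C[X]/(X^2 + 1) \simeq \mathbf C \times \mathbf C, \]
so that the tensor product over $\mathbf R$ of the two fields $\mathbf C$ and $\mathbf C$ is not an integral domain (it is, however, reduced).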

9.2. Dimension and transcendence degree

Definition (9.2.1). — Let A be a ring.

Let P be a prime ideal of A. The height of P is the supremum, denoted by $\mathrm{ht}_A(P)$, of the lengths n of chains of prime ideals $(P_0, P_1, \dots, P_n)$ where $P_n = P$.

The dimension of A, denoted $\dim(A)$, is the supremum of the heights of its prime ideals.

Recall that a chain is a strictly increasing finite sequence. We say that such a chain $(P_0, \dots, P_n)$ is maximal if it cannot be extended by inserting another prime ideal, either before $P_0$, between $P_{k-1}$ and $P_k$ for any $k \in \{1, \dots, n\}$, or after $P_n$.

Example (9.2.2). — One has $\mathrm{ht}_A(P) = 0$ if and only if P is a minimal prime ideal of A. If A is a field or, more generally, if A is an artinian ring, then $\dim(A) = 0$.

Example (9.2.3) (Principal ideal domains). — If A is a principal ideal domain which is not a field, then $\dim(A) = 1$. In particular, $\dim(\mathbf Z) = 1$ and $\dim(K[X]) = 1$ for any field K.

The prime ideals of A are $(0)$ and the maximal ideals $(p)$, where p ranges over all irreducible elements of A. One has $\mathrm{ht}_A((0)) = 0$. Moreover, the chains of prime ideals beginning with $(0)$ are $((0))$ and $((0), (p))$, where p is an irreducible element of A, so that $\mathrm{ht}_A((p)) = 1$. This shows that $\dim(A) = 1$.

Example (9.2.4). — Recall that the map $Q \mapsto QA_P$ induces a bijection between the prime ideals of A contained in P and the prime ideals of $A_P$, and this bijection respects the inclusion relation. Consequently, $\mathrm{ht}_A(P) = \dim(A_P)$.

Example (9.2.5). — Let A be a unique factorization domain. A prime ideal of A has height 1 if and only if it is generated by an irreducible element of A.

Let P be a prime ideal of A such that $\mathrm{ht}_A(P) = 1$. In particular, $P \neq 0$; let thus f be any non-zero element of P. Since P is a proper ideal, f is not a unit in A, and since P is prime, one of the irreducible factors of f, say a, must belong to P. Since A is a unique factorization domain, the ideal $(a)$ is prime and one has inclusions $(0) \subsetneq (a) \subset P$. The hypothesis that $\mathrm{ht}(P) = 1$ implies the equality $P = (a)$.

Conversely, let a be an irreducible element of A, so that $(a)$ is a prime ideal of A. The strict inclusion $(0) \subsetneq (a)$ implies that $\mathrm{ht}((a)) \geq 1$. Let P be a non-zero prime ideal of A contained in $(a)$. Let $x \in P$ be a non-zero element whose number of irreducible factors is minimal. Since $P \subset (a)$, there exists $y \in A$ such that $x = ay$; then the number of irreducible factors of y is one less than the number of irreducible factors of x, so that $y \notin P$. Since P is prime, this implies that $a \in P$, hence $P = (a)$. This shows that $\mathrm{ht}((a)) = 1$.

Remark (9.2.6) (Geometrical interpretations). — a) Let K be an algebraically closed field, let I be an ideal of $K[X_1, \dots, X_n]$ and let $A = K[X_1, \dots, X_n]/I$. Any prime ideal of A defines an irreducible algebraic set contained in $\mathscr V(I)$, and conversely (see exercise 2/10). Consequently, the dimension of A is the supremum of the lengths of chains of irreducible algebraic sets contained in $\mathscr V(I)$.

b) Let A be an arbitrary ring. Irreducible closed subsets of $\mathrm{Spec}(A)$ are subsets of the form V(P), for some prime ideal P of A, and conversely (see exercise 2/8). Consequently, $\dim(A)$ is the supremum of the lengths of chains of irreducible closed subsets of $\mathrm{Spec}(A)$. For any prime ideal P of A, $\mathrm{ht}(P)$ is the supremum of the lengths of those chains of closed subsets which contain V(P) or, equivalently, which contain the point $P \in \mathrm{Spec}(A)$. This is interpreted as the codimension of V(P) in $\mathrm{Spec}(A)$.

c) Let K be a field. Generalizing the equality $\dim(K[X]) = 1$ from example 9.2.3, we will prove in theorem 9.2.10 below that $\dim(K[X_1, \dots, X_n]) = n$. The chain of prime ideals
\[ (0) \subsetneq (X_1) \subsetneq (X_1, X_2) \subsetneq \dots \subsetneq (X_1, \dots, X_n) \]
shows that $\dim(K[X_1, \dots, X_n]) \geq n$, but the opposite inequality is more difficult.

Remark (9.2.7). — Without any hypotheses, the dimension of a ring may

be infinite. We shall show that the dimension of a finitely generated

algebra over a field is finite. We will also prove in theorem 9.3.3 that if

the ring A is noetherian, then the height of any prime ideal is finite, and

that the dimension of a noetherian local ring is finite. However, there

exist noetherian rings of infinite dimension (exercise 9/12).

In order to relate dimension and transcendence degrees of K-algebras,

we first study the behavior of dimension with respect to integral exten-

sions of rings.

Theorem (9.2.8) (Going-up theorem of Cohen–Seidenberg). — Let B be a ring and let A be a subring of B. Assume that B is integral over A.

a) Let Q be a prime ideal of B and let $P = Q \cap A$. Then P is a maximal ideal of A if and only if Q is a maximal ideal of B.

b) Let $Q \subset Q'$ be prime ideals of B such that $Q \cap A = Q' \cap A$. Then $Q = Q'$.

c) The canonical map from $\mathrm{Spec}(B)$ to $\mathrm{Spec}(A)$ is surjective: for every prime ideal P of A, there exists a prime ideal Q of B such that $Q \cap A = P$.

Proof. — a) Passing to the quotients, one gets an integral extension of integral domains $A/P \subset B/Q$. By proposition 4.2.5, A/P is a field if and only if B/Q is a field; in other words, P is maximal in A if and only if Q is maximal in B.

b) Let $P = Q \cap A$ and let us consider the integral extension of rings $A_P \subset B_P$ induced by localization by the multiplicative subset $A \smallsetminus P$ (it is indeed injective by proposition 3.6.6, and integral by lemma 4.2.4). The ideal $A_P \cap QB_P$ of $A_P$ contains the maximal ideal $PA_P$ of $A_P$, but does not contain 1, hence it is equal to $PA_P$. Then, a) implies that $QB_P$ is a maximal ideal of $B_P$.

Since $Q \cap A = P = Q' \cap A$, the same arguments imply that $Q'B_P$ is a maximal ideal of $B_P$. However, the inclusion $Q \subset Q'$ implies an inclusion $QB_P \subset Q'B_P$. One thus has $QB_P = Q'B_P$.

Since localization induces a bijection from the set of prime ideals of B disjoint from $A \smallsetminus P$ to the set of prime ideals of $B_P$, one gets $Q = Q'$.

c) Let P be a prime ideal of A and let us consider the integral extension $A_P \subset B_P$ of localized rings. Let M be a maximal ideal of $B_P$. By a), $M \cap A_P$ is a maximal ideal of $A_P$, hence $M \cap A_P = PA_P$, and $P \subset M \cap A$. There exists a unique prime ideal Q of B such that $Q \cap (A \smallsetminus P) = \varnothing$ and $M = QB_P$. The intersection $Q \cap A$ is a prime ideal of A, contained in P by construction; by what precedes, it contains P, hence $Q \cap A = P$. This concludes the proof of the theorem of Cohen–Seidenberg. ∎

Corollary (9.2.9). — Let B be a ring, let A be a subring of B. If B is integral over A, then $\dim(A) = \dim(B)$.

Proof. — Let $(Q_0, \dots, Q_n)$ be a chain of prime ideals of B. Let us intersect these ideals with A; this gives an increasing family $(Q_0 \cap A, \dots, Q_n \cap A)$ of prime ideals of A. By part b) of theorem 9.2.8, this is even a chain of prime ideals, so that $\dim(A) \geq \dim(B)$.

Conversely, let $P_0 \subsetneq \dots \subsetneq P_n$ be a chain of prime ideals of A. For each $m \in \{0, \dots, n\}$, let us construct by induction a prime ideal $Q_m$ of B such that $Q_m \cap A = P_m$ and such that $Q_0 \subset \dots \subset Q_n$. This will imply that $\dim(B) \geq \dim(A)$, hence the corollary.

By part c) of theorem 9.2.8, there exists a prime ideal $Q_0$ of B such that $Q_0 \cap A = P_0$. Assume $Q_0, \dots, Q_m$ are defined. Let us consider the integral extension $A/P_m \subset B/Q_m$ of integral domains. By theorem 9.2.8, c), applied to the prime ideal $P_{m+1}/P_m$ of $A/P_m$, there exists a prime ideal Q of the ring $B/Q_m$ such that $Q \cap (A/P_m) = P_{m+1}/P_m$. Then, there exists a prime ideal $Q_{m+1}$ of B containing $Q_m$ such that $Q = Q_{m+1}/Q_m$. Moreover, $Q_{m+1} \cap A = P_{m+1}$. ∎
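For instance, $B = \mathbf Z[i]$ is integral over $A = \mathbf Z$, and the corollary gives $\dim(\mathbf Z[i]) = \dim(\mathbf Z) = 1$; the chain $(0) \subsetneq (1+i)$ of prime ideals of $\mathbf Z[i]$ lies above the chain $(0) \subsetneq (2)$ of $\mathbf Z$, since $(1+i) \cap \mathbf Z = (2)$.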

The next theorem is the foundation of dimension theory in algebraic geometry.

Theorem (9.2.10). — Let K be a field, let A be a finitely generated K-algebra. Assume that A is an integral domain and let F be its field of fractions. Then
\[ \dim(A) = \operatorname{tr\,deg}_K(F). \]

Proof. — We prove the theorem by induction on the transcendence degree of F.

If $\operatorname{tr\,deg}_K(F) = 0$, then A is integral over K; being a finitely generated K-algebra, it is finite-dimensional, hence a field. One thus has $\dim(A) = 0$.

Now assume that the theorem holds for finitely generated K-algebras whose field of fractions has transcendence degree strictly less than $\operatorname{tr\,deg}_K(F)$.

By Noether's normalization theorem (theorem 9.1.1), there exist an integer $n \geq 0$ and elements $a_1, \dots, a_n$ of A such that the morphism $f : K[X_1, \dots, X_n] \to A$ such that $f(X_i) = a_i$ is injective, and such that A is integral over the subring $B = K[a_1, \dots, a_n] = f(K[X_1, \dots, X_n])$. Moreover, $n = \operatorname{tr\,deg}_K(F)$. By corollary 9.2.9, it suffices to prove that the dimension of the polynomial ring $K[X_1, \dots, X_n]$ is equal to n.

We already stated in remark 9.2.6 that $\dim(K[X_1, \dots, X_n]) \geq n$, in view of the chain $((0), (X_1), (X_1, X_2), \dots, (X_1, \dots, X_n))$ of prime ideals of $K[X_1, \dots, X_n]$.

Conversely, let $((0), P_1, \dots, P_m)$ be a chain of prime ideals of $K[X_1, \dots, X_n]$ and let us set $A' = K[X_1, \dots, X_n]/P_1$. Since $P_1$ is a prime ideal, the ring $A'$ is an integral domain; let $F'$ be its field of fractions. Moreover, $A'$ is a finitely generated K-algebra and $\dim(A') \geq m - 1$, because $((0), P_2/P_1, \dots, P_m/P_1)$ is a chain of prime ideals of $A'$ of length $m - 1$. On the other hand, any non-zero polynomial $f \in P_1$ furnishes a non-trivial algebraic dependence relation between the classes $x_1, \dots, x_n$ of $X_1, \dots, X_n$ in $A'$. Consequently, $\operatorname{tr\,deg}_K(F') \leq n - 1$. (See also example 4.8.11.) By induction, one has $\operatorname{tr\,deg}_K(F') = \dim(A')$, hence $m - 1 \leq n - 1$, and $m \leq n$. This concludes the proof. ∎

In the course of the proof of the theorem, we established the following particular case.

Corollary (9.2.11). — For any field K, one has $\dim(K[X_1, \dots, X_n]) = n$.
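As an illustration of theorem 9.2.10, the K-algebra $A = K[X,Y]/(Y^2 - X^3)$ is an integral domain whose field of fractions is generated by the classes x, y subject to $y^2 = x^3$; it has transcendence degree 1 over K, so that $\dim(A) = 1$, as one expects for the ring of functions of a plane curve.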


9.3. Krull’s Hauptidealsatz and applications

By theorem 9.2.10, the dimension of a finitely generated algebra over

a field is finite. One could think that the same property holds for

noetherian rings: after all, the dimension of a ring involves strictly

increasing sequences of prime ideals, and the noetherian property tells

us that every such sequence is finite. However, it does not: although

there are no infinite strictly increasing sequences of prime ideals, there

may be such sequences of arbitrarily large length.

One of the main consequences of this section is that local noetherian

rings are finite dimensional. Stated differently, the height of any prime

ideal of a noetherian ring is finite.

Since any such prime ideal is generated by a finite set (this is the

definition of a noetherian ring!), it is natural to investigate the behavior of

dimension when one quotients a ring by a principal ideal. Geometrically,

this amounts to understanding the dimension of a hypersurface.

Theorem (9.3.1) (Krull's Hauptidealsatz). — Let A be a noetherian ring, let $a \in A$ and let P be a prime ideal of A, minimal among those containing a.

a) One has $\mathrm{ht}(P) \leq 1$.

b) If, moreover, a is regular, then $\mathrm{ht}(P) = 1$.

Proof. — a) We need to prove that there is no chain $(Q', Q, P)$ of prime ideals in A. So let $Q', Q, P$ be prime ideals of A such that $Q' \subset Q \subsetneq P$ and let us prove that $Q = Q'$.

We first pass to the quotient by $Q'$: in the noetherian ring $A/Q'$, we have the prime ideals $0 \subset Q/Q' \subsetneq P/Q'$. Moreover, the prime ideal $P/Q'$ of $A/Q'$ is minimal among the prime ideals of $A/Q'$ containing the class of a. This allows us to assume that A is an integral domain and $Q' = 0$.

We then replace A by its localization at the prime ideal P. The ring $A_P$ is a noetherian local ring, with inclusions $0 \subset QA_P \subsetneq PA_P$ of prime ideals, and the maximal ideal $PA_P$ is minimal among the prime ideals of $A_P$ containing $a/1$. We are thus reduced to the following particular case: the ring A is an integral domain, local, noetherian, its maximal ideal P is the only prime ideal containing the element a, the ideal Q of A is prime and distinct from P, and we need to prove that $Q = 0$.

For every integer $n \geq 1$, let $Q_n = Q^n A_Q \cap A$. By definition of localization, $Q_n$ is the set of elements $x \in A$ such that there exists $y \notin Q$ such that $xy \in Q^n$.

The sequence $(Q_n)_n$ is decreasing; let us prove that it is stationary. The ring $A/aA$ is noetherian, and since its prime ideals are in bijection with the prime ideals of A containing a, $P(A/aA)$ is its only prime ideal. Consequently (see theorem 6.4.14), $A/aA$ is an artinian ring. In particular, the decreasing sequence $(Q_n(A/aA))_n$ of ideals of $A/aA$ is stationary and there exists an integer $n_0$ such that
\[ Q_n + aA = Q_{n+1} + aA \]
for every integer $n \geq n_0$.

Let n be an integer $\geq n_0$ and let $x \in Q_n$. There exists $y \in A$ such that $x + ay \in Q_{n+1}$; since $Q_{n+1} \subset Q_n$, we have $ay \in Q_n$, hence there exists $z \notin Q$ such that $ayz \in Q^n$. Since $a \notin Q$, the product $az$ does not belong to Q, so that y belongs to $Q_n$. (We could have used the result of exercise 6/37 that $Q_n$ is a Q-primary ideal.) Consequently, $x \in aQ_n + Q_{n+1}$, hence $x \in Q_{n+1} + PQ_n$. This implies the equality of ideals
\[ Q_n = Q_{n+1} + PQ_n. \]
Since the ring A is noetherian, $Q_n$ is finitely generated and Nakayama's lemma (theorem 6.1.1; observe that P is the Jacobson radical of A) implies that $Q_n = Q_{n+1}$ for all $n \geq n_0$.

Then,
\[ Q^n A_Q = Q_n A_Q = Q_{n+1} A_Q = Q^{n+1} A_Q = (QA_Q)\, Q^n A_Q. \]
Again, $A_Q$ is noetherian and $Q^n A_Q$ is finitely generated; consequently, Nakayama's lemma implies that $Q^n A_Q = 0$. Since A is an integral domain, this implies that $Q = 0$, as was to be shown.

b) If a is regular, then it does not belong to any associated prime ideal of A (theorem 6.5.8, a)), hence no minimal prime ideal of A contains a. Since $a \in P$, the prime ideal P is not minimal and $\mathrm{ht}_A(P) > 0$. By part a), the height of P is equal to 1. ∎
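For instance, in the ring $A = K[X,Y]/(XY)$, the class x of X is not regular (one has $xy = 0$ with $y \neq 0$), and the prime ideal $P = (x)$ is minimal among those containing x; here $\mathrm{ht}_A(P) = 0$, since P is a minimal prime ideal of A. This shows that the regularity assumption in b) cannot be omitted, while the bound of a) of course still holds.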


Corollary (9.3.2). — Let A be a noetherian integral domain. Then A is a unique factorization domain if and only if every prime ideal of height 1 is a principal ideal.

Proof. — We have already proved in example 9.2.5 that if A is a unique factorization domain, then its prime ideals of height 1 are principal ideals (generated by irreducible elements).

Let us now assume that every prime ideal of A which is of height 1 is a principal ideal and let us prove that A is a unique factorization domain. Since A is noetherian, every increasing sequence of principal ideals of A is stationary, so that it suffices to prove that irreducible elements generate prime ideals. Let $p \in A$ be an irreducible element and let P be a prime ideal of A, minimal among those containing p. Since A is a domain and $p \neq 0$, theorem 9.3.1 implies that $\mathrm{ht}_A(P) = 1$. By assumption, the ideal P is principal, hence there exists $a \in A$ such that $P = (a)$. The inclusion $(p) \subset P = (a)$ implies that a divides p; let $b \in A$ be such that $p = ab$. Since P is prime, a is not a unit. By definition of an irreducible element, b is a unit and $P = (a) = (p)$. This concludes the proof of the corollary. ∎
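For instance, the noetherian domain $\mathbf Z[\sqrt{-5}]$ is not a unique factorization domain (one has $6 = 2 \cdot 3 = (1 + \sqrt{-5})(1 - \sqrt{-5})$, with all four factors irreducible), and indeed its prime ideal $(2, 1 + \sqrt{-5})$, which has height 1, is not principal.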

Theorem (9.3.3). — Let A be a noetherian ring and let P be a prime ideal of A.

a) The height $\mathrm{ht}_A(P)$ of P is finite.

b) More precisely, $\mathrm{ht}_A(P)$ is the least integer n such that there exist $a_1, \dots, a_n \in P$ such that the prime ideal P is minimal among those containing $a_1, \dots, a_n$.

Proof. — a) We first prove the following statement by induction on n: Let A be a noetherian ring, let $a_1, \dots, a_n \in A$ and let P be a prime ideal which is minimal among those containing $a_1, \dots, a_n$; then $\mathrm{ht}(P) \leq n$. (Observe that for $n = 1$, this is exactly Krull's Hauptidealsatz, theorem 9.3.1.)

The assertion holds if $\mathrm{ht}(P) = 0$ (trivially), or if $n = 0$ (because P is then a minimal prime ideal of A, hence $\mathrm{ht}(P) = 0$). Let us thus assume that $\mathrm{ht}(P) > 0$ and $n > 0$. In the fraction ring $A_P$, the ideal $PA_P$ is the maximal ideal, and is minimal among those containing $a_1/1, \dots, a_n/1$; moreover, $\mathrm{ht}_A(P) = \mathrm{ht}_{A_P}(PA_P)$. We may thus assume that A is a local ring with maximal ideal P. By the definition of the height of a prime ideal, to establish that $\mathrm{ht}_A(P) \leq n$, we need to prove that for any prime ideal Q of A such that $Q \subsetneq P$, one has $\mathrm{ht}_A(Q) \leq n - 1$. Since A is noetherian, there exists a prime ideal $Q'$ such that $Q \subset Q' \subsetneq P$, and which is maximal among those ideals. Since one has $\mathrm{ht}_A(Q) \leq \mathrm{ht}_A(Q')$, it suffices to prove that $\mathrm{ht}_A(Q') \leq n - 1$. We may thus assume that there is no prime ideal $Q'$ in A such that $Q \subsetneq Q' \subsetneq P$.

Since $Q \subsetneq P$, the definition of P implies that there exists an integer i such that $a_i \notin Q$. To fix the notation, let us assume that $a_1 \notin Q$. Then $Q \subsetneq Q + (a_1) \subset P$, so that P is a minimal prime ideal which contains $Q + (a_1)$, hence is the unique prime ideal containing $Q + (a_1)$. Consequently, every element of P is nilpotent modulo $Q + (a_1)$. In particular, there exist an integer $m \geq 1$ and elements $x_2, \dots, x_n \in A$, $y_2, \dots, y_n \in Q$, such that
\[ a_2^m = a_1 x_2 + y_2, \quad \dots, \quad a_n^m = a_1 x_n + y_n. \]
These relations show that any prime ideal of A which contains $a_1$ and $y_2, \dots, y_n$ also contains $a_2, \dots, a_n$, hence contains P. In other words, $P/(y_2, \dots, y_n)$ is a prime ideal of the quotient ring $A/(y_2, \dots, y_n)$ which is minimal among those containing the class of $a_1$. By Krull's Hauptidealsatz (theorem 9.3.1), the height of $P/(y_2, \dots, y_n)$ in $A/(y_2, \dots, y_n)$ is $\leq 1$. Since $(y_2, \dots, y_n) \subset Q \subsetneq P$, the ideal $Q/(y_2, \dots, y_n)$ is then a minimal prime ideal of $A/(y_2, \dots, y_n)$. Equivalently, Q is a minimal prime ideal of A among those containing $(y_2, \dots, y_n)$. By induction, $\mathrm{ht}_A(Q) \leq n - 1$, as was to be shown.

b) It follows that the height of any prime ideal P of A is finite. Indeed, since A is noetherian, the ideal P is finitely generated and there exist $a_1, \dots, a_n \in A$ such that $P = (a_1, \dots, a_n)$. By the part of the proof already shown, one has $\mathrm{ht}_A(P) \leq n$.

c) We now prove by induction on n the following statement: Let A be a noetherian ring, let P be a prime ideal of A and let $n = \mathrm{ht}_A(P)$. There exist elements $a_1, \dots, a_n \in P$ such that P is a minimal prime ideal among those containing $(a_1, \dots, a_n)$.

This holds if $n = 0$, for P is then a minimal prime ideal.

Assume that $n = \mathrm{ht}_A(P) > 0$ and that the result holds for every prime ideal of A of height $< n$. Since $n > 0$, P is not contained in any minimal prime ideal of A. Since the set of minimal prime ideals of A is finite, the prime avoidance lemma 2.2.12 implies that there exists an element $a_1 \in P$ which does not belong to any minimal prime ideal of A. Consequently, the height of any prime ideal of A which is minimal among those containing $a_1$ is strictly positive. Thus, any chain of prime ideals of A containing $a_1$ and contained in P can be extended by one of the minimal prime ideals of A, so that the height of the prime ideal $P/(a_1)$ of the ring $A/(a_1)$ is at most $\mathrm{ht}_A(P) - 1 = n - 1$. By induction, there exist $a_2, \dots, a_n \in P$ such that the height of the prime ideal $P/(a_1, \dots, a_n)$ of the ring $A/(a_1, \dots, a_n)$ is zero. This proves that P is a minimal prime ideal among those containing $(a_1, \dots, a_n)$. This concludes the proof of the theorem. ∎

Corollary (9.3.4). — Let A be a noetherian local ring. Then $\dim(A)$ is finite. More precisely, $\dim(A)$ is the least integer n such that there exist $a_1, \dots, a_n$ such that $\sqrt{(a_1, \dots, a_n)}$ is the maximal ideal of A.

Proof. — If A is local, every maximal chain of prime ideals ends at its maximal ideal M, hence $\dim(A) = \mathrm{ht}_A(M)$. The corollary thus follows from theorem 9.3.3 applied to M. ∎
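For instance, let $A = K[X_1, \dots, X_n]_{(X_1, \dots, X_n)}$ be the localization of a polynomial ring at the maximal ideal generated by the indeterminates. Its maximal ideal is generated by the n elements $X_1, \dots, X_n$, so the corollary gives $\dim(A) \leq n$; together with the chain $(0) \subsetneq (X_1) \subsetneq \dots \subsetneq (X_1, \dots, X_n)$, this shows that $\dim(A) = n$.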

9.4. Heights and dimension

Proposition (9.4.1). — Let A be an integral domain, let E be its field of fractions and let $E \to F$ be a finite normal extension of fields. Assume that A is integrally closed in E and let B be the integral closure of A in F.

For every prime ideal P of A, the group $\mathrm{Aut}(F/E)$ acts transitively on the set of prime ideals Q of B such that $Q \cap A = P$.

Proof. — Let $G = \mathrm{Aut}(F/E)$. It is a finite group of automorphisms of F; the subfield $F^G$ of F is a radicial extension of E (corollary 4.6.9) and $F^G \subset F$ is a Galois extension of group G (Artin's lemma 4.6.4).

First of all, according to the going-up theorem (theorem 9.2.8), the set of prime ideals Q of B such that $Q \cap A = P$ is nonempty.

Let Q and $Q'$ be two prime ideals of B such that $Q \cap A = Q' \cap A = P$.

Let $x \in Q'$, let $y = \prod_{\sigma\in G} \sigma(x)$; since $y \in F^G$ and since the extension $E \to F^G$ is radicial, there exists an integer $q \geq 1$ such that $y^q \in E$ (lemma 4.4.16; one has $q = 1$ if the characteristic of E is zero).

By definition of B, x is integral over A, as well as its images $\sigma(x)$, for $\sigma \in G$. Consequently, y is integral over A, hence $y^q$ is integral over A. Since $y^q \in E$ and A is integrally closed in E, one has $y^q \in A$. Since $x \in Q'$, it follows that $y^q \in Q' \cap A = P$. Using that $P = Q \cap A$, we deduce that $y^q \in Q$. Finally, since Q is a prime ideal, we conclude that $y \in Q$. Moreover, since $y = \prod_{\sigma\in G} \sigma(x)$, there exists an element $\sigma \in G$ such that $\sigma(x) \in Q$. Since G is a group, this gives the inclusion $Q' \subset \bigcup_{\sigma\in G} \sigma(Q)$.

Let $\sigma \in G$. Let us show that $\sigma(Q)$ is a prime ideal of B. Indeed, if $b \in F$, then b and $\sigma(b)$ satisfy the same polynomial relations with coefficients in E. In particular, b is integral over A if and only if $\sigma(b)$ is integral over A. This implies that $\sigma(B) = B$, hence $\sigma(Q)$ is a prime ideal of B.

By the prime avoidance lemma (lemma 2.2.12), there exists $\sigma \in G$ such that $Q' \subset \sigma(Q)$. Then $P = Q' \cap A \subset \sigma(Q) \cap A = Q \cap A = P$, hence $Q' \cap A = \sigma(Q) \cap A$. By the first Cohen–Seidenberg theorem (theorem 9.2.8), we thus have $Q' = \sigma(Q)$. ∎
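For instance, take $A = \mathbf Z$, $E = \mathbf Q$, $F = \mathbf Q(i)$ and $B = \mathbf Z[i]$. Above the prime ideal $P = (5)$ of $\mathbf Z$ lie exactly the two prime ideals $(2+i)$ and $(2-i)$ of $\mathbf Z[i]$, since $5 = (2+i)(2-i)$, and complex conjugation, which generates $\mathrm{Aut}(F/E)$, exchanges them.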

Theorem (9.4.2) (Going-down theorem of Cohen–Seidenberg). — Let B be an integral domain, let A be a subring of B. One assumes that A is integrally closed in its field of fractions and that B is integral over A.

Let $P_0, \dots, P_n$ be prime ideals of A such that $P_0 \subset \dots \subset P_n$ and let $Q_n$ be a prime ideal of B such that $Q_n \cap A = P_n$. Then, there exist prime ideals $Q_0, \dots, Q_{n-1}$ in B such that $Q_0 \subset \dots \subset Q_n$ and $Q_m \cap A = P_m$ for every $m \in \{0, \dots, n\}$.

Proof. — Let E be the field of fractions of A, let F be the field of fractions of B and let $F'$ be a finite normal extension of E containing F. Let $B'$ be the integral closure of A in $F'$. According to theorem 9.2.8, there exist prime ideals $Q'_m$ in $B'$ (for $0 \leq m \leq n$), such that $Q'_0 \subset \dots \subset Q'_n$ and $P_m = Q'_m \cap A$ for every $m \in \{0, \dots, n\}$.

Let $Q'$ be a prime ideal of $B'$ such that $Q' \cap B = Q_n$. Since A is integrally closed in its field of fractions, proposition 9.4.1 asserts the existence of an automorphism $\sigma \in \mathrm{Aut}(F'/E)$ such that $\sigma(Q'_n) = Q'$. For every integer $m \in \{0, \dots, n-1\}$, let $Q_m = \sigma(Q'_m) \cap B$. By construction, $(Q_0, \dots, Q_{n-1})$ is an increasing family of prime ideals of B contained in $Q_n$. Moreover, for every $m \in \{0, \dots, n-1\}$, one has
\[ Q_m \cap A = \sigma(Q'_m) \cap B \cap A = \sigma(Q'_m \cap A) = \sigma(P_m) = P_m. \]
The theorem is proved. ∎

Corollary (9.4.3). — Let B be an integral domain, let A be a subring of B. One assumes that A is integrally closed in its field of fractions and that B is integral over A. Then for every prime ideal Q of B, one has $\mathrm{ht}_B(Q) = \mathrm{ht}_A(Q \cap A)$.

Proof. — Let $P = Q \cap A$. Let $(Q_0, \dots, Q_n)$ be a chain of prime ideals of B, contained in Q. By theorem 9.2.8, their intersections with A, $(Q_0 \cap A, \dots, Q_n \cap A)$, form a chain of prime ideals of A contained in P. This implies that $\mathrm{ht}_A(P) \geq \mathrm{ht}_B(Q)$.

Conversely, let $(P_0, \dots, P_n)$ be a chain of prime ideals of A contained in P. By theorem 9.4.2, there exists a family $(Q_0, \dots, Q_n)$ of prime ideals of B such that $Q_0 \subset \dots \subset Q_n \subset Q$ and $Q_m \cap A = P_m$ for every $m \in \{0, \dots, n\}$. By theorem 9.2.8, $(Q_0, \dots, Q_n)$ is a chain of prime ideals in B. Consequently, $\mathrm{ht}_B(Q) \geq \mathrm{ht}_A(P)$.

This concludes the proof of the corollary. ∎

Theorem (9.4.4). — Let K be a field and let A be a finitely generated K-algebra. Assume that A is an integral domain. Then, for every prime ideal P of A, one has
\[ \mathrm{ht}_A(P) = \dim(A) - \dim(A/P). \]
In particular, for every maximal ideal M of A, one has $\mathrm{ht}_A(M) = \dim(A)$.

Proof. — Let $(P_0, \dots, P_n)$ be a chain of prime ideals of A contained in P. Let also $(Q_0, \dots, Q_m)$ be a chain of prime ideals of A/P. For every integer $i \in \{0, \dots, m\}$, let $Q'_i$ be the preimage of $Q_i$ in A by the canonical surjection from A to A/P. Then $(Q'_0, \dots, Q'_m)$ is a chain of prime ideals of A containing P. Now observe that $(P_0, \dots, P_n, Q'_1, \dots, Q'_m)$ is a chain of prime ideals of A — the inclusions $P_n \subset P \subset Q'_0 \subsetneq Q'_1$ establish the strict inclusion $P_n \subsetneq Q'_1$, and the others are obvious. We thus have shown the inequality $\dim(A) \geq \mathrm{ht}_A(P) + \dim(A/P)$.

The converse inequality is more delicate and we prove it by induction on $\dim(A)$. It clearly holds if $\dim(A) = 0$, since then A is a field and $P = (0)$. Let us assume that it holds for all prime ideals of a finitely generated K-algebra which is an integral domain of dimension $< \dim(A)$.

Let us first apply Noether's normalization theorem: there exist an integer n and elements $a_1, \dots, a_n \in A$, algebraically independent over K, such that A is integral over the subring $B = K[a_1, \dots, a_n]$, and $\dim(A) = \dim(B) = n$. Let $Q = P \cap B$. Since the natural morphism from B/Q to A/P is injective and integral, one also has $\dim(A/P) = \dim(B/Q)$. Moreover, corollary 9.4.3 asserts that $\mathrm{ht}_A(P) = \mathrm{ht}_B(Q)$. Consequently, we may assume that $A = B$, hence that $A = K[X_1, \dots, X_n]$.

If $P = (0)$, then $\mathrm{ht}_A(P) = 0$ and $A \simeq A/P$, so that the equality $\mathrm{ht}_A(P) + \dim(A/P) = \dim(A)$ holds.

Let us assume that $P \neq (0)$. Then, since it is a prime ideal, P contains an irreducible element, say f. By example 9.2.5, the ideal $(f)$ has height 1. Then set $A' = A/(f)$ and $P' = P/(f)$. The K-algebra $A'$ is finitely generated; it is an integral domain and, by example 4.8.11, its field of fractions has transcendence degree $n - 1$ over K. By theorem 9.2.10,
\[ \dim(A') = n - 1 = \dim(A) - 1. \]
Moreover, $A'/P'$ is isomorphic to A/P, so that
\[ \dim(A/P) = \dim(A'/P'). \]
On the other hand, a chain of length m of prime ideals of $A'$ contained in $P'$ corresponds to a chain of prime ideals of A contained in P and containing $(f)$; adjoining $(0)$, this gives a chain of length $m + 1$ of prime ideals of A contained in P. Consequently,
\[ \mathrm{ht}_{A'}(P') + 1 \leq \mathrm{ht}_A(P). \]
Finally, the induction hypothesis asserts that
\[ \mathrm{ht}_{A'}(P') + \dim(A'/P') = \dim(A'). \]
Combining these four relations, we obtain
\[ \mathrm{ht}_A(P) + \dim(A/P) \geq \mathrm{ht}_{A'}(P') + 1 + \dim(A'/P') = \dim(A') + 1 = \dim(A). \]
This concludes the proof of the theorem. ∎

Page 483: (MOSTLY)COMMUTATIVE ALGEBRA

9.5. FINITENESS OF INTEGRAL CLOSURE 471

Corollary (9.4.5). — Let K be a field and let A be a finitely generated K-algebra

which is an integral domain. Every maximal chain of prime ideals of A has

length dim(A).

Proof. — Let (P0, P1, . . . , P=) be a maximal chain of prime ideals of A.

This means that this chain cannot be extended by inserting prime ideals

(which is a different assertion than asserting that this sequence has

maximal length). In particular, one has P0 = (0) because A is an integral

domain, and P= is a maximal ideal.

Let us now argue by induction on =. If = = 0, then P0 is a maximal

ideal, hence A is a field and dim(A) = 0.

Let us now assume that = > 1. Since the chain (P0, P1) is maximal

among all chains terminating at P1, we have htA(P1) = 1. The K-algebra

A/P1 is finitely generated and an integral domain, and (P1/P1, . . . , P= ⊂P1) is a maximal chain of prime ideals of A/P1. By induction, one has

= − 1 = dim(A/P1). Consequently,= = (= − 1) + 1 = dim(A/P1) + htA(P1) = dim(A),

by theorem 9.4.4. �

9.5. Finiteness of integral closure

Proposition (9.5.1). — Let A be a noetherian integral domain and let E be its

field of fractions. Let F be a separable finite algebraic extension of F and let B

be the integral closure of A in F. If A is integrally closed, then B is a finitely

generated A-module, in particular, a noetherian ring.

Proof. — Let (41, . . . , 4=) be a basis of F as an E-vector space. Up to

multiplying them by a non-zero element of A, we may assume that

they belong to B. Since the extension E ⊂ F is separable, the symmetric

bilinear form defined by the trace is non-degenerate (theorem 4.7.7).

Consequently, there exists a basis ( 51, . . . , 5=) of F as an E-evector space

such that for every 8 and 9 ∈ {1; . . . ; =}, TrE/F(48 59) = 0 if 8 ≠ 9, and

TrE/F(48 58) = 1. Let D be a non-zero element of A such that D 58 ∈ B for

every 8.

Let then G be an element of B, and let G1, . . . , G= ∈ E be such that

G =∑=8=1G848. For every 8 ∈ {1; . . . ; =}, one has (D 58)G ∈ B, hence

Page 484: (MOSTLY)COMMUTATIVE ALGEBRA

472 CHAPTER 9. ALGEBRAS OF FINITE TYPE OVER A FIELD

TrF/E(D 58G) ∈ A, by corollary 4.7.6, hence DG8 ∈ A. Consequently,

B ⊂ D−1

∑=8=1

A48. In other words, B is an A-submodule of a free A-

module of rank =. Since A is a noetherian ring, this implies that B is a

finitely generated A-module.

Since ideals of B are A-submodules, this also implies that B is noethe-

rian. �

Theorem (9.5.2). — Let : be a field and let A be a finitely generated :-algebra

which is an integral domain. Let E be the field of fractions of A and let E ⊂ F

be a finite algebraic extension. Finally, let B be the integral closure of A in F.

Then, B is a finitely generated A-module.

To simplify the proof, we assume that the characteristic of : is zero. The

general case can be proved along the same lines but requires an additional

study of inseparable extensions in the style of remark 4.4.17.

Proof. — Let us apply to A the Noether normalization lemma (theo-

rem 9.1.1). Let G1, . . . , G= be elements of A, algebraically independent

over :, such that A is integral over A0 = :[G1, . . . , G=].Let us remark that an element of F is integral over A if and only if it

is integral over A0. Consequently, B is the integral closure of A0 in F.

Since the extension E0 = :(G1, . . . , G=) ⊂ E is finite, the extension E0 ⊂ F

is finite too.

Let us also remark that A0 is a unique factorization domain (theo-

rem 6.3.15) hence is integrally closed (theorem 4.1.9). Since we assumed

that the characteristic of : is zero, the extension E0 ⊂ F is separable.

Consequently, proposition 9.5.1 implies that B is a finitely generated

A0-module, hence a finitely generated A-module. �

9.6. Dedekind rings

Definition (9.6.1). — Let A be an integral domain. One says that A is a

Dedekind ring if every ideal of A is a projective A-module.

Example (9.6.2). — a) A field is a Dedekind ring.

b) Since non-zero principal ideals of an integral domain A are free

A-modules of rank 1, any principal ideal domain is a Dedekind ring.

Page 485: (MOSTLY)COMMUTATIVE ALGEBRA

9.6. DEDEKIND RINGS 473

c) Let A be a Dedekind ring and let S be a multiplicative subset of A.

Then S−1

A is a Dedekind ring.

Let indeed J be an ideal of S−1

A. There exists an ideal I of A such

that J = S−1

I. Since A is a Dedekind ring, the A-module I is projective.

Consequently, the S−1

A-module S−1

I is projective (example 7.3.5).

9.6.3. — Let A be an integral domain, let K be its field of fractions.

The set ℐ(A) of non-zero ideals of A is a commutative monoid for the

multiplication of ideals. The subset�(A) ofℐ(A) consisting of principal

ideals is a submonoid. The quotient monoid �(A) = ℐ(A)/�(A) iscalled the class monoid of A. This monoid is trivial if and only if every

ideal of A is principal. As we will show, Dedekind rings are actually

characterized by the property that �(A) is a group.To study �(A), it is convenient to consider the more general setup

of fractional ideals. By definition, a fractional ideal of A is a non-zero

submodule of K of the form 0I, where I is an ideal of A and 0 ∈ K×.

Fractional ideals can be multiplied as ideals can be, and the set ℐ0(A) offractional ideals is a monoid, with the set�0(A) of principal fractionalideals as a submonoid.

The inclusion ℐ(A) → ℐ0(A) induces a morphism of monoids

from �(A) to ℐ0(A)/�0(A). Since the class of a fractional ideal 0I,

where I is an ideal of A, is equal to the class of I, this morphism is

surjective. Let then I and J be non-zero ideals of A which have the

same class in ℐ0(A)/�0(A); let 0, 1 ∈ K×be such that 0I = 1J; write

0 = 01/02 and 1 = 11/12, with 01, 02, 11, 12 ∈ A; then 0112I = 1102J and

[I] = [0112I] = [1102J] = [J] in �(A).Finally, a fractional ideal of A is isomorphic, as an A-module to an

ideal of A, so that a ring A is a Dedekind ring if and only if fractional

ideal of A is projective.

9.6.4. — Let A be an integral domain and let K be its field of fractions.

Let I be a fractional ideal of A. One write I−1

for the set of elements

0 ∈ K such that 0I ∈ A. One has I · I−1 ⊂ A.

Themap I−1→ I

∨given by 0 ↦→ (G ↦→ 0G) is amorphism ofA-modules.

It is injective, because I ≠ 0. Let us show that it is surjective; let ! ∈ I∨.

Page 486: (MOSTLY)COMMUTATIVE ALGEBRA

474 CHAPTER 9. ALGEBRAS OF FINITE TYPE OVER A FIELD

Let G be a non-zero element of I and let 0 = !(G)G−1. Let H be a non-

zero element of I; there exist D, E ∈ A {0} such that DG = EH; then

D!(G) = E!(H), so that !(H) = DE−1!(G) = 0H. In particular, 0H ∈ A for

every H ∈ I, and ! is the image of 0.

Lemma (9.6.5). — Let I be a fractional ideal of A. The following properties are

equivalent:

(i) One has I · I−1 = A;

(ii) The A-module I is projective;

(iii) The fractional ideal I is invertible in the monoidℐ0(A);(iv) The class of I is invertible in �(A).

If they hold, I is finitely generated.

If these properties hold, one simply says that the fractional ideal I is

invertible.

Proof. — (i)⇒(ii). Since I · I−1 = A, there exist elements 01, . . . , 0= ∈ I

and 11, . . . , 1= ∈ I−1

such that 1 =∑=8=10818. Then the morphism

�I,I : I∨ ⊗ I→ EndA(I)maps

∑18 ⊗ 08 to idI, so that I is finitely generated

and projective.

(ii)⇒(i). Assume that I is projective. Let (!8)8 and (08) be families in I∨

and I respectively such that G =∑!8(G)08 for every G ∈ I (remark 7.3.4).

For every 8, let 18 ∈ I−1

be such that !8(G) = 18G for every G ∈ I. Since

I ≠ 0, this implies

∑0818 = 1. Then 1 ∈ I · I−1

, so that I · I−1 = A.

(i)⇒(iii). Assume that I · I−1 = A. By definition, this says that I is

invertible in ℐ0(A), with inverse I−1.

(iii)⇒(iv). If I is invertible in the monoid ℐ0(A), then its class [I] isinvertible in the class monoid �(A).(iv)⇒(i). Finally, assume that the class [I] of I is invertible in �(A)

and let J be a fractional ideal of A such that I · J is a principal fractionalideal 0A. Replacing J by 0−1

J, we may assume that I · J = A. Then, J ⊂ I−1,

hence A = I · J ⊂ I · I−1 ⊂ A, so that I · I−1 = A. This proves (i). �

Corollary (9.6.6). — An integral domain A is a Dedekind domain if and only

if the monoid �(A) is a group.

Page 487: (MOSTLY)COMMUTATIVE ALGEBRA

9.6. DEDEKIND RINGS 475

When I is an invertible ideal, one can define I=for every integer =

by the formula I= = (I−1)−= if = < 0. From the relation I · I−1 = A, one

deduces that I=+< = I

= · I< for every <, = ∈ Z.

Proposition (9.6.7). — Let A be a local integral domain. Then A is a Dedekind

ring if and only if it is a principal ideal domain.

Proof. — By example 9.6.2, principal ideal domains are Dedekind rings;

it thus suffices to prove that a Dedekind ring A which is a local ring is

principal. Let P be the maximal ideal of A.

Let I be an ideal of A; it is finitely generated because A is noetherian.

If I = PI, then Nakayama’s lemma (theorem 6.1.1) implies that I = 0. In

particular, I is principal.

Otherwise, let 0 ∈ I PI and let us prove that I = (0). Since 0A ⊂ I, one

has 0I−1 ⊂ A. If 0I−1 ≠ A, then 0I−1is contained in the maximal ideal P

of A, that is, 0I−1 ⊂ P; this implies 0A = 0I−1 · I ⊂ PI, a contradiction.

Consequently, the ideal I is principal. �

Theorem (9.6.8). — Let A be an integral domain and let S be the set of its

maximal ideals. Assume that every maximal ideal of A is invertible.

a) The ring A is a Dedekind ring.

b) For every fractional ideal I of A, there exists a unique element (0M)M∈Sin Z(S) such that I =

∏M∈S M

0M.

c) If I =∏

M∈S M0M

and J =∏

M∈S M1M, then I ⊂ J is equivalent to the

inequalities 1M 6 0M for every M ∈ S.

Recall that we write M=for (M−1)−= when = < 0.

This result serves as a partial replacement, in Dedekind rings, to the

possible non-uniqueness of decomposition in irreducible elements.

Proof. — We first prove that for every non-zero ideal I of A, there exists

a family (0M) ∈ N(S) such that I =∏

M∈S M0M. This will also prove that I

is invertible, hence A is a Dedekind ring.

Let us argue by contradiction; since A is noetherian, there exists a

maximal counterexample I. Since A itself can be written A =∏

M∈S M0,

one has I ≠ A, hence there exists a maximal ideal P of A such that

I ⊂ P. Set J = IP−1; one has J ⊂ A and I = IP

−1 · P = PJ. In particular,

I ≠ J, since otherwise, Nakayama’s lemma (corollary 6.1.4) implies

Page 488: (MOSTLY)COMMUTATIVE ALGEBRA

476 CHAPTER 9. ALGEBRAS OF FINITE TYPE OVER A FIELD

that I = 0. Consequently, there exists a family (1M)M∈S ∈ N(S) suchthat J =

∏M∈S M

1M. Set 0M = 1M for M ≠ P and 0P = 1P + 1; then

(0M)M∈S ∈ N(S) and I = PJ =∏

M∈S M0M, a contradiction. In particular,

the result holds for every non-zero ideal of A.

Let then I be an arbitrary fractional ideal. There exists 0 ∈ A {0} suchthat 0I ⊂ A. Let (1M) ∈ N(S) be such that 0I =

∏M∈S M

1M. Let also (0M) ∈

N(S) be such that 0A =∏

M∈S M0M. Then the ideal J =

∏M∈S M

1M−0M

satisfies 0J = 0I, hence J = I.

The uniqueness of a family (0M)M∈S ∈ Z(S) such that I =∏

M∈S M0M

will follow from the final assertion.

Let thus I =∏

M∈S M0M

and J =∏

M∈S M1M

be two fractional ideals. If

0M 6 1M for all M ∈ S, then J = I ·∏M∈S M

1M−0M, hence J ⊂ I. Conversely,

assume that J ⊂ I and let us prove that 0M 6 1M for all M. Multiplying

both sides of this inclusion by

∏M∈S M

− inf(0M ,1M), we may assume that

inf(0M, 1M) = 0 for every M ∈ S; we now need to prove that 0M = 0

for every M. Assume that there exists P ∈ S with 0P > 0. Then

I =∏

M∈S M0M ⊂ P. on the other hand, 1P = 0 hence J =

∏M∈S M

1M ⊄ P

(choose an element GM ∈ M P for every M ∈ S such that 0M > 0; their

product belongs to J P). This contradicts the assumption that J ⊂ I and

concludes the proof of the theorem. �

Theorem (9.6.9). — Let A be an integral domain. The following conditions are

equivalent:

(i) The ring A is a Dedekind ring;

(ii) The ring A is noetherian and for every maximal ideal M of A, the ring AM

is principal;

(iii) The ring A is noetherian, integrally closed and dim(A) 6 1.

Proof. — (i)⇒(ii). Assume that A is a Dedekind ring. We have already

seen that every fractional ideal, a fortiori, every ideal of A is finitely

generated; in particular, A is noetherian.

Let P be a maximal ideal of A. The local ring AP is a Dedekind ring;

by proposition 9.6.7, it is a principal ideal domain.

(ii)⇒(iii). By assumption, the ring A is noetherian and that AP is a

principal ideal domain, for every maximal ideal P of A. In particular,

Page 489: (MOSTLY)COMMUTATIVE ALGEBRA

9.6. DEDEKIND RINGS 477

dim(AP) 6 1, that is, htA(P) 6 1. Since P is arbitrary, this implies

dim(A) 6 1.

Let us prove that A is integrally closed. Let G ∈ K be integral over A.

Let I be the set of 0 ∈ A such that 0G ∈ A; this is an ideal of A. Let P

be a maximal ideal of A. The element G of K is integral over AP, hence

belongs to AP because a principal ideal domain is integrally closed. This

proves that there exists 0 ∈ A P such that 0G ∈ A; in other words, I ⊄ P.

Since P is arbitrary, it follows from Krull’s theorem (theorem 2.1.3) that

I = A. In particular, G ∈ A.

(iii)⇒(i) Let A be an integral domain which is noetherian, integrally

closed and such that dim(A) 6 1. If dim(A) = 0, then A is a field, hence

a Dedekind ring. We now assume that A is not a field; then dim(A) = 1

and every non-zero prime ideal of A is maximal.

We first prove that every maximal ideal of A is invertible. Let M be a

maximal ideal of A; in particular, M ≠ 0. Then M−1

is a fractional ideal

of A; let us prove that M ·M−1 = A.

One has M ⊂ A, hence A ⊂ M−1. We first prove that M

−1 ≠ A.

Let 0 be a non-zero element of M. The ring A/0A has dimension 0

and is noetherian. By Akizuki’s theorem (theorem 6.4.14), it has finite

length. Then there exists a finite sequence (M1, . . . ,M=) of maximal

ideals of A such that M1 . . .M= ⊂ 0A, and we may choose this sequence

so that = is minimal. Then M1 . . .M= ⊂ 0A ⊂ M. Since M is a

prime ideal, there exists 8 ∈ {1, . . . , =} such that M8 ⊂ M (otherwise,

choose elements 08 ∈ M8 M; the product 01 . . . 0= does not belong

to M). Since M8 is maximal, this implies M = M8. Up to renumbering

the M8, we thus assume that M = M1. By the minimality assumption

on =, one has M2 . . .M= ⊄ 0A. Let 1 ∈ M2 . . .M= 0A. One has

1M ∈ MM2 . . .M= ⊂ 0A, hence (10−1)M ⊂ A. By the definition of M−1,

this implies 10−1 ∈ M−1. Since 10−1 ∉ A, we thus have shown that

M−1 ≠ A.

From the inclusion A ⊂ M−1, we deduce that M ⊂ M ·M−1

. Since M is

a maximal ideal of A, it follows that either M ·M−1 = A, or M ·M−1 = M.

Arguing by contradiction, let us assume that M ·M−1 = M.

Let G ∈ M−1; one has GM ⊂ M. Since A is noetherian, M is a finitely

generated A-module and theorem 4.1.5 implies that G is integral over A.

Page 490: (MOSTLY)COMMUTATIVE ALGEBRA

478 CHAPTER 9. ALGEBRAS OF FINITE TYPE OVER A FIELD

Since A is integrally closed, this shows that G ∈ A. Consequently,

M−1 ⊂ A, hence M

−1 = A, a contradiction.

By theorem 9.6.8, the ring A is then a Dedekind ring. �

The following corollary will be a plethoric source of examples of

Dedekind rings. We refer the reader to exercise 9/16 for a stronger

result that does not require the separable hypothesis (theorem of Krull–

Akizuki).

Corollary (9.6.10). — Let A be a Dedekind ring, let K be its field of fractions.

Let L be a separable extension of K and let B be the integral closure of A in L.

Then B is a finitely generated A-module and a Dedekind ring.

Proof. — By theorem 9.6.9, the ring A is noetherian, integrally closed

and its dimension is at most 1. By proposition 9.5.1, B is a finitely

generated A-module and a noetherian ring. Since B is integral over A,

one has dim(B) = dim(A) 6 1 (corollary 9.2.9). Finally, B is integrally

closed in its field of fractions, by construction. Theorem 9.6.9 then

implies that B is a Dedekind ring. �

Example (9.6.11). — Let K be a number field, that is, a finite extension

of Q, and let A be the integral closure of Z in K: it is called the ring of

integers of K. Many exercises of this book consider particular cases of

this situation.

Let = = [K : Q]. By corollary 9.6.10, A is a finitely generated Z-module

and a Dedekind ring. Since A ⊂ K, it is torsion free, hence is a free

Z-module. For every ∈ K, there exists an integer 2 > 1 such that

2 ∈ A, for example the leading coefficient of the minimal polynomial

of . This implies that A contains a basis of K as a Q-vector space, hence

A ' Z=.

The study of these rings, in particular the discovery that they are

not necessarily unique factorization domains, is at the heart of the

developement of algebraic number theory since the xviiithcentury. By

corollary 9.6.10, A is a Dedekind ring, and an important result of

Minkowski asserts that its class group �(A) is finite. However, many

mysteries remain about these groups.

Page 491: (MOSTLY)COMMUTATIVE ALGEBRA

9.6. DEDEKIND RINGS 479

Theorem (9.6.12) (Minkowski). — Let K be a finite extension of Q and let A

be its ring of integers of K. The class group �(A) of A is finite.

The proof that we give is due to Hurwitz; it follows the book of Ireland

& Rosen (1990).

Proof. — Let = = [K : Q]. Let M be the integer provided by lemma 9.6.13

and let be the set of ideals of A that contain M!. The elements of

are in bijection with the ideals of the quotient ring A/M!A. Since A is

isomorphic to Z=as an abelian group, this quotient ring is finite and the

set is finite. To prove that �(A) is finite, we will now prove that every

non-zero ideal of A has the same class that an element of .

Let I be a non-zero ideal of A. Choose a non-zero element 0 ∈ I

such that

��N

K/Q(0)��is minimal (recall from corollary 4.7.6 that it is an

integer). Let 1 ∈ I. By the following lemma there exists an integer M

(depending on K only), an integer D ∈ {1, . . . ,M} and an element 2 ∈ A

such that

��N

K/Q(D1 − 02)�� < ��

NK/Q(0)

��. Since D1 − 02 ∈ A, the definition

of 0 implies that D1 − 02 = 0. In particular D1 ∈ 0A, hence M! I ⊂ 0A, so

that J = M!0−1I is an ideal of A. By construction, I and J have the same

class in �(A). Since 0 ∈ I, one has M! ∈ J, hence J ∈ . This concludes

the proof. �

Lemma (9.6.13) (Hurwitz). — There exists an integer M such that the follow-

ing property holds: for every 0, 1 ∈ A with 0 ≠ 0, there exists D ∈ {1, . . . ,M}and 2 ∈ A such that

��N

K/Q(D1 − 02)�� 6 ��

NK/Q(0)

��.

Proof. — Let = = [K : Q] and let 1, . . . , = be a basis ofA as aZ-module;

it is a basis of K as a Q-vector space. Let �1, . . . , �= : K → C be the

= distinct embeddings of K into C. Set

C =

=∏9=1

(=∑8=1

���9( 8)��)and let < = bC1/=c + 1.

For : = (:1, . . . , :=) ∈ {1, . . . , <}=, let P: be the cube [(:1 −1)/<, :1/<] × · · · × [(:= − 1)/<, :=/<] of side 1/< in R=

. These <=

cubes cover the unit cube P = [0, 1]=. For G ∈ R=, we write

bGc = (bG1c , . . . , bG=c) ∈ Z=and {G} = G − bGc ∈ P.

Page 492: (MOSTLY)COMMUTATIVE ALGEBRA

480 CHAPTER 9. ALGEBRAS OF FINITE TYPE OVER A FIELD

Let G = (G1, . . . , G=) ∈ Q=be such that 1/0 = ∑=

9=1G 9 9. By the pigeon-

hole principle, there exists D ∈ {1, . . . , <}= such that PD contains at least

two terms of the sequence (0, {G}, . . . , {<=G}) of length <= + 1; let B < C

be their indices and let D = C − B; observe that 1 6 D 6 <= = M. Let

H = bCGc − bBGc and I = {CG} − {BG}; by construction, one has H ∈ Z=

and the coordinates of I satisfy��I 9�� 6 1/< for 9 ∈ {1, . . . , =}. Moreover

DG = CG − BG = H + I.Set 2 = H1 1 + · · · + H= = and 3 = I1 1 + · · · + I= =; one has 2 ∈ A,

A ∈ K, and D1 = 02 + 0A.Moreover,��

NK/Q(A)

�� = =∏9=1

(=∑8=1

I8�9( 8))

6 sup(|I1 | , . . . , |I= |)==∏9=1

(=∑8=1

���9( 8)��)6 <−=C < 1,

so that��N

K/Q(D1 − 02)�� = ��

NK/Q(0A)

�� = ��N

K/Q(0)�� ��

NK/Q(A)

�� < ��N

K/Q(0)�� .

This concludes the proof of the lemma. �

Exercises

1-exo649) Let : be a field and let A be a finitely generated :-algebra. Assume that A is

a field.

a) Prove that there exists an integer = > 0, elements G1, . . . , G= ∈ A which are

algebraically independent such that A is algebraic over :[G1, . . . , G=].In the rest of the exercice, we will prove that = = 0: A is algebraic over : (this

furnishes a rather direct proof of theorem 9.1.2). We argue by contradiction, assuming

that = > 1.

b) Prove that there exists 5 ∈ :[G1, . . . , G=] {0} such that A is integral over

:[G1, . . . , G=][1/ 5 ]. Conclude that :[G1, . . . , G=][1/ 5 ] is a field.c) Prove that 5 belongs to every maximal ideal of :[G1, . . . , G=].d) Observing that 1 + 5 is a unit of :[G1, . . . , G=], deduce from this that 5 ∈ : and

derive a contradiction.

Page 493: (MOSTLY)COMMUTATIVE ALGEBRA

EXERCISES 481

2-exo871) Let A be a noetherian integral domain which is not a field and let E be its

field of fractions.

Prove the equivalence of the following assertions:

a) The A-algebra E is finitely generated;

b) There exists a non-zero element of A which belongs to every non-zero prime ideal;

c) The ring A has only finitely many non-zero prime ideals of height 1;

d) One has dim(A) = 1 and the ring A has only finitely many maximal ideals.

(For c)⇒d), use Krull’s Hauptidealsatz. For b)⇒a), take an element 0 ∈ A and consider a

primary decomposition of the ideal 0A.)

3-exo841) Let 9 : A → B be an injective morphism of integral domains; we assume

that B is finitely generated over A. The goal of the exercise is to prove that for every

non-zero 1 ∈ B, there exists 0 ∈ A such that for every algebraically closed field K and

every morphism 5 : A → K such that 5 (0) ≠ 0, there exists a morphism 6 : B → K

such that 5 = 6 ◦ 9 and 6(1) ≠ 0.

a) Treat the case where B = A[G], for some G ∈ B which is transcendental over A.

b) Treat the case where B = A[G] for some G ∈ B which is algebraic over A.

c) Treat the general case.

d) Let V be a non-empty open subset of Spec(B); prove that its image 9∗(V) in Spec(A)contains a non-empty open subset.

4-exo840) Recall that a Jacobson ring is a commutative ring in which every prime ideal

is the intersection of the maximal ideals that contain it (see p. 9.1).

a) Prove that Z is a Jacobson ring. What principal ideal domains are Jacobson rings?

b) Let A be a Jacobson ring and let I be an ideal of A. Prove that A/I is a Jacobsonring.

Let A be a Jacobson ring and let B be an A-algebra.

c) If B is integral over A, then B is a Jacobson ring.

d) Assume that B is finitely generated over A. Prove that for every maximal ideal M

of B, the intersection M ∩ A is a maximal ideal of A, and the residue field B/M is a

finite extension of A/(M ∩A). Prove that B is a Jacobson ring.

e) Let M be a maximal ideal of Z[T1, . . . ,T=]. Prove that the residue field

Z[T1, . . . , T=]/M is a finite field.

f ) Let K be a field and let A be a K-algebra. Assume that there exist a set I such

that Card(I) < Card(K), and a family (08)8∈I such that A = K[(08)]. Prove that for everymaximal ideal M of A, the residue field A/M is an algebraic extension of K, and that A

is a Jacobson ring. (Follow the arguments in the proof of theorem 2.3.1.)

5-exo1227) Let : be an infinite field and let A be a :-algebra.

a) Let P ∈ :[T1, . . . ,T=] be a non-zero homogeneous polynomial. Show that there

exists 2 ∈ := such that P(2) ≠ 0 and 2= = 1.

Page 494: (MOSTLY)COMMUTATIVE ALGEBRA

482 CHAPTER 9. ALGEBRAS OF FINITE TYPE OVER A FIELD

b) Let 01, . . . , 0= ∈ A and let P ∈ :[T1, . . . ,T=] be a non-zero polynomial such that

P(01, . . . , 0=) = 0. Let 3 = deg(P) and let P3 be the homogeneous component of

degree 3 of P. Let 2 ∈ := be such that P3(2) ≠ 0 and 2= = 1; for < ∈ {1, . . . , = − 1}, let1< = 0< + 2<0= . Prove that 01, . . . , 0= are integral over :[11, . . . , 1=−1].c) Let 01, . . . , 0N ∈ A be elements such that A = :[01, . . . , 0N]. Prove that there exist

an integer = > 0 and algebraically independent elements 11, . . . , 1= which are :-linear

combinations of 01, . . . , 0N such that A is integral over its subring :[11, . . . , 1=]. (A

version of Noether’s normalization lemma.)

6-exo1226) Let : be a field and let E, F be two finitely generated field extensions of :. Let

R = E⊗: F. The goal of the exercise is to prove that dim(R) = inf(tr deg:(E), tr deg:(F)),a formula due to Grothendieck.

a) Let = = tr deg:(E) and let (01, . . . , 0=) be a transcendence basis of E; let E1 =

:(01, . . . , 0=) and R1 = E1 ⊗: F. Prove that R1 is a subring of R and that R is integral

over R1. Conclude that dim(R) = dim(R1).b) Show that A1 = :[01, . . . , 0=] ⊗: F is a subring of R of which R1 is a fraction ring.

Prove that A1 is isomorphic to F[T1, . . . , T=] and that dim(R) 6 =.

c) Assume, moreover, that tr deg:(F) > =. Prove that there exists a surjective

morphism of F-algebras # : R1→ F.

d) Using the fact that Ker(#) ∩A1 is a maximal ideal of A1, prove that dim(R1) > =.

7-exo810) Let 5 : A→ B be a morphism of commutative rings.

a) Show that 5 is an epimorphism in the category of rings if and only if the unique

morphism B ⊗A B→ B such that 1 ⊗ 1′ ↦→ 11′ for all 1, 1′ ∈ B is an isomorphism.

b) Let M be a finitely generated non-zero A-module. Using a surjective morphism of

the form 6 : M→ A/I, prove that M ⊗A M ≠ 0.

c) Assume that 5 is an epimorphism and that B is finitely generated as an A-module;

prove that 5 is surjective. (First prove that (B/A) ⊗A B = 0 and that (B/A) ⊗A (B/A) = 0.)

8-exo880) Let A be a ring. For every 0 ∈ A, let S0 be the set of elements of A of the

form 0=(1 + 01), for = ∈ N and 1 ∈ A.

a) Let 0 ∈ A. Prove that S0 is a multiplicative subset of A. Prove that S0 contains 0 if

and only if 0 is invertible or 0 is nilpotent.

b) Let 0 ∈ A. Prove that M ∩ S0 ≠ ∅ for every maximal ideal M of A. Conclude that

dim(S−1

0 A) + 1 6 dim(A).c) Let M be a maximal ideal of A, let P be a prime ideal of A such that P ⊂ M. Prove

that P ∩ S0 = ∅ for every 0 ∈ M P. Conclude that there exists 0 ∈ A such that

dim(S−1

0 A) + 1 = dim(A).d) Let = ∈ N. Prove that dim(A) 6 = if and only if, for every 00, . . . , 0= ∈ A, there

exist 10, . . . , 1= ∈ A and <0, . . . , <= ∈ N such that

0=0

0(0010 + 0=1

1(0111 + · · · + 0<=

= (1 + 0=1=))) = 0.

Page 495: (MOSTLY)COMMUTATIVE ALGEBRA

EXERCISES 483

e) Let K be a field. Prove that dim(K[T1, . . . , T=]) = =. (This characterization of Krull’s

dimension is due to Coquand & Lombardi (2005).)

9-exo843) Let A be a ring and let B = A[X1, . . . ,X=].a) Prove that dim(B) > dim(A) + =.b) Let P be a prime ideal of A, let (Q0, . . . ,Q<) be a chain of prime ideals of B such

that Q9 ∩A = P for every 9. Prove that < > =. (Passing to the quotient by P and Q0, reduce

to the case where P = 0; by localization, then reduce to the case where A is a field.)

c) Prove that dim(B) 6 dim(A) + =(1 + dim(A)).In the case = = 1, i.e., B = A[X], this exercise says that dim(A) + 1 6 dim(A[X]) 6

2 dim(A) = 1, and Seidenberg (1954) has shown that all possibilities actually appear. When

A is noetherian, exercise 9/14 shows that one has dim(B) = dim(A) + =.10-exo847) Let A be a noetherian ring and let I be an ideal of A. Let B be the subalgebra

of A[T] consisting of polynomials P =∑0=T

=such that 0= ∈ I

=for all =.

a) Using exercise 6/21, prove that B is a noetherian ring.

b) Assume that there exists a prime ideal P of A containing I such that dim(A/P) =dim(A); prove that dim(B) = dim(A) + 1.

c) Otherwise, prove that dim(B) = dim(A).11-exo848) Let X be a topological space and let A = �(X; R) be the ring of real valued

continuous functions on X.

a) Assume that every function 5 ∈ A is locally constant. Prove that A is a von

Neumann ring and that dim(A) = 0.

Let � ∈ A be a continuous function which is not locally constant in any neighborhood

of point � ∈ X; we assume �(�) = 0.

b) Let P be the set of functions 5 ∈ A such that

lim

G→�5 (G)/�(G)C = 0

for every C ∈ N. Prove that P is an ideal of A, contained in the maximal ideal M� of

functions vanishing at �. Prove that P is a radical ideal and that P ≠ M�.

c) Let U be an ultrafilter (see exercise 2/11) which converges to �, in the sense that all

neighborhoods of � belong to U. For every 2 ∈ R+, let P2 be the set of functions 5 ∈ A

such that

lim

U5 (G) exp(C�(G)−2) = 0

for all C ∈ R.

d) Prove that P2 is a prime idal of A.

e) Let 2, 2′ ∈ R+ be such that 2 < 2′. Prove that P2′ ( P2 .

f ) Prove that dim(A) = +∞.

12-exo805) Let K be a field and let A = K[T1,T2, . . . ] be the ring of polynomials in

(countably) infinitely many indeterminates.

a) Prove that A is not noetherian and that dim(A) is infinite.

Page 496: (MOSTLY)COMMUTATIVE ALGEBRA

484 CHAPTER 9. ALGEBRAS OF FINITE TYPE OVER A FIELD

b) Let (<=)=>1 be a strictly increasing sequence of strictly positive integers that that

the sequence (<=+1 − <=) is unbounded. For every = > 1, let P= be the ideal of A

generated by the indeterminates T8 , for <= 6 8 < <=+1. Prove that P= is a prime ideal

of A.

c) Let S be the intersection of the multiplicative subsets S= = A P= , for all = > 1.

Prove that dim(S−1A) is infinite.

d) Prove that S−1

A is a noetherian ring.

13-exo806) Let A be a local noetherian ring and let M be its maximal ideal, and let

K = A/M be its residue field .

a) Explain why the A-module structure of M/M2endows it with the structure of a

K-vector space.

b) Let 01, . . . , 0= be elements of M which generate M/M2. Prove that M = (01, . . . , 0=).

Deduce from this that dim(A) 6 dimK(M/M2).In the rest of the exercise, one assumes that dim(A) = dimK(M/M2) (one says that A

is a regular ring) and the goal is to prove that A is an integral domain. We argue by

induction on dim(A).c) Treat the case dim(A) = 0.

d) Assume that dim(A) > 0. Let P1, . . . , P= be the minimal prime ideals of A. Prove

that there exists 0 ∈ M such that 0 ∉ M2 ∪⋃=

8=1P8 .

e) Let B = A/(0) and letN = MB be itsmaximal ideal. Using the induction hypothesis,

prove that (0) is a prime ideal of A.

f ) Let 8 ∈ {1, . . . , =} be such that P8 ⊂ (0). Using Nakayama’s lemma, prove that

P8 = (0). Conclude.14-exo870) Let A be a noetherian ring and let B = A[X].

a) Let P be a prime ideal of A, let Q and Q′be two prime ideals of B such that Q ( Q

and P = A ∩A = Q′ ∩ B. Prove that Q = PB. (Argue as in question b) of exercise 9/9.)

b) Let I be an ideal of A and let P be a minimal prime ideal containing I. Prove that

PB is a minimal prime ideal of B containing IB.

c) Let P be a prime ideal of A. Prove that htA(P) = htB(PB).d) Prove that dim(B) = dim(A) + 1.

e) More generally, prove that dim(A[X1, . . . ,X=]) = dim(A) + =.15-exo830) Let A be a noetherian integral domain such that dim(A) = 1. Let K be its

fraction field. Let M be a finitely generated A-module.

a) Let S = A {0}. Prove that S−1

M is a finite dimensional K-vector space. Let A be

its dimension. Prove that A is the maximal number of A-linearly independent subsets

of M.

b) Prove the equivalence of the following assertions: (i) Every element of M is torsion;

(ii) ℓA(M) is finite; (iii) A = 0.

Page 497: (MOSTLY)COMMUTATIVE ALGEBRA

EXERCISES 485

c) From now on, we assume that M is torsion-free. Show that M contains a free

submodule L of rank A such that ℓA(M/L) is finite.d) Prove that for every integer = > 1, one has

ℓA(M/0=M) 6 ℓA(L/0=L) + ℓA(M/L).Also prove that

ℓA(M/0=M) = = ℓA(M/0M).e) Prove that ℓA(M/0M) 6 A ℓA(A/0A).f ) Generalize the previous inequality to any torsion-free A-module. Give an example

where this inequality is strict.

16-exo831) Let A be a integral domain, let K be its field of fractions, let L be finite

extension of K and let B be a subring of L containing A.

a) Let I be a non-zero ideal of B. Prove that I ∩A ≠ 0.

In the sequel, one assumes that A is noetherian and dim(A) = 1.

b) Let I be an ideal of B and let 0 be a non-zero element of I ∩A. Prove that B/0Band I/0B are finite length B-modules. (Use exercise 9/15.)

c) Prove that B is noetherian and that dim(B) 6 1. (Theorem of Krull–Akizuki.)

17-exo819) Let A be a noetherian integral domain such that every non-zero prime ideal

of A is maximal.

a) Let 0 ∈ A {0}. Prove that the A-module A/0A has finite length.

b) Let 0, 1 ∈ A {0}. Prove that ℓA(A/(01)) = ℓA(A/(0)) + ℓA(A/(1)).c) Prove that there exists a unique morphism of groups $A : K

× → Z such that

$A(0) = ℓA(A/(0)) for every 0 ∈ A {0}.d) Assume that A is a principal ideal domain. Compute $A(0) in terms of the

decomposition of 0 as a product of irreducible elements.

18-exo820) Let A be a ring, let ! : M → N be a morphism of A-modules. One says

that ! is admissible if Ker(!) and Coker(!) have finite length; one then defines

4A(!) = ℓA(Coker(!)) − ℓA(Ker(!)).a) If M and N have finite length, then ! is admissible and 4A(!) = ℓA(N) − ℓA(M).b) Let ! : M→ N and # : N→ P be morphisms of A-modules. If two among !, #,

# ◦ ! are admissible, then so is the third and

4A(# ◦ !) = 4A(#) + 4A(!).c) One assumes that A is an integral domain and that every non-zero prime ideal

of A is maximal. Let M be a free and finitely generated A-module and let ! be an

injective endomorphism of M. Show that ! is admissible and that ℓA(Coker(!)) =ℓA(A/(det(!))). (First treat the case where the matrix of ! in some basis of M is an elementary

matrix.)

d) In the case where A is a principal ideal domain, deduce the result of the previous

question from theorem 5.4.3.

Page 498: (MOSTLY)COMMUTATIVE ALGEBRA

486 CHAPTER 9. ALGEBRAS OF FINITE TYPE OVER A FIELD

19-exo809) Let A be an integral domain and let K be its field of fractions.

a) Assume that for every 0 ∈ A {0} and every prime ideal P which is associated

with A/(0), the fraction ring AP is a principal ideal domain. Prove that A is integrally

closed.

Conversely, one assumes that A is noetherian and integrally closed. Let 0 ∈ A {0}and let P be a prime ideal of A which is associated with A/(0).b) Prove that there exists 1 ∈ A such that P = {G ∈ A ; G1 ∈ (0)}.c) Prove that the element 0/1 of K belongs to AP and that PAP = (0/1)AP. (If

(1/0)PAP ≠ AP, prove that 1/0 is integral over A.)

d) Prove that AP is a principal ideal domain and that every prime ideal associated

with A/(0) is minimal.

20-exo877) Let : be a field and let A = :[X,Y] be the ring of polynomials in two

indeterminates.

a) Let I = (X,Y). Prove that I−1 = A. Conclude that I is not invertible.

b) Prove that a fractional ideal of A is invertible if and only if it is principal.

21-exo876) Let A be a Dedekind ring.

a) Assume that Spec(A) is finite. Prove that A is a principal ideal domain.

b) Let I be an ideal of A and let 0 ∈ I {0}. Prove that there exists 1 ∈ I such that

I = 0A + 1A. (Start by proving that the ring A/I is artinian.)22-exo832) Let A be a principal ideal domain which is a local ring. Let ? be a generator

of its maximal ideal and let K be its field of fractions. Let P ∈ A[T] be an Eisenstein

polynomial (see exercise 2/27 where it is proved that P is irreducible in K[T]), that is,P = T

= + 0=−1T=−1 + · · · + 00, where 00, . . . , 0=−1 ∈ (?) and 0= ∉ (?2).

Let L = K[T]/(P), let $ be the class of T in L, and let B = A[$].a) Prove that the ideal ($) of B is maximal.

b) Prove that ($) is the unique maximal ideal of B. (Use Nakayama’s lemma to prove

that every maximal ideal of B contains ?.)

c) Prove that for every 1 ∈ B {0}, there exists a unit D ∈ B×and an integer = ∈ N

such that 1 = D$=.

d) Prove that B is a principal ideal domain and is the integral closure of A in L.

23-exo833) Let A be a local ring, let K be its field of fractions, let M be its maximal ideal,

and let : = A/M be its residue field. Let P ∈ A[T] be a monic irreducible polynomial

whose image P ∈ :[T] is separable; let P =∏<

9=1P9 be the decomposition as a product

of monic irreducible polynomials; for every 9, fix a monic polynomial P9 ∈ A[T]with

reduction P9 modulo M. Let L = K[T]/(P) and let B = A[T]/(P).a) Let N be a maximal ideal of B. Prove that there exists a unique element 9 ∈{1, . . . , <} such that N = MB + (P9).

Page 499: (MOSTLY)COMMUTATIVE ALGEBRA

EXERCISES 487

b) Assume that A is integrally closed in its field of fractions. Prove that B is the

integral closure of A in L. (Using the separability of the polynomial P, prove that the

determinant of the = × = matrix (TrL/K($8+9))068 , 9<= , the discriminant of P, is a unit of A.)

c) Assume that A is a principal ideal domain. Prove that B is a principal ideal domain.

(Deduce from theorem 9.6.9that for every 9, the local ring BN9 is a principal ideal domain.)

24-exo878) Let : be a field and let A = :[X2,X3] be the subalgebra of :[X] generatedby X

2and X

3.

a) Prove that A is a noetherian integral domain and that dim(A) = 1.

b) Prove that A is not integrally closed. (Prove that X is integral over A.)

c) Prove that the ideal (X2,X3) of A is not invertible.

25-exo891) Let A = R[X,Y]/(X2 + Y2 − 1).

a) Prove that A is a Dedekind ring.

b) Let M be a maximal ideal of A. Prove that A/M ' R or A/M ' C. Give examples

of both cases.

c) Let M be a maximal ideal of A such that A/M ' C. Prove that M is principal.

d) Let M be a maximal ideal of A such that A/M ' R; prove that there exists a

unique pair (0, 1) ∈ R2such that 02 + 12 = 1 and A = (X− 0,Y− 1). Prove that M is not

principal but that M2is principal. Prove also that all of these maximal ideals have the

same class in the class group �(A) of A.

e) What is the class group of A?

26-exo893) Let 3 be an integer such that 3 > 2 and 3 is squarefree, let K = Q(8√3) and

let A be the integral closure of Z in K; it is a Dedekind ring.

a) Prove that A = Z[(−1+ 8√3)/2] is 3 ≡ −1 (mod 4), and that A = Z[8

√3] otherwise.

b) Let I be a non-zero ideal of A. Let (4 , 5 ) be a basis of I as a Z-module. Applying

exercise 5/11 to the quadratic form (G, H) ↦→ NK/Q(G4 + H 5 ), prove that there exists

a non-zero element 0 ∈ I such that NK/Q(0) 6 Card(A/I)

√�/3, where � = 3 if 3 ≡ −1

(mod 4), and � = 43 otherwise.

c) (followed) Let J = (0) : I. Prove that J is non-zero ideal of A such that IJ = (0) andCard(A/J) 6

√�/3.

d) Prove that�(A) is generated by the maximal ideals P of A such that Card(A/P) 6√�/3. This bound is better than the one implicitly given in the proof of theorem 9.6.12; it is

also slightly better than the general Minkowski bound.

27-exo892) Let A = Z[8√

5] (see exercise 2/24).a) Prove that A is a Dedekind ring.

b) Prove that M = (2, 1 + 8√

5) is the unique maximal ideal of A containing 2. Prove

that M2 = (2) and that M is not principal.

c) Prove that the ideals P = (3, 1 + 8√

5) and P′ = (3, 1 − 8

√5) are the unique maximal

ideals of A containing 3. Prove that P · P′ = (3) and that neither P nor P′is principal.

Page 500: (MOSTLY)COMMUTATIVE ALGEBRA

488 CHAPTER 9. ALGEBRAS OF FINITE TYPE OVER A FIELD

d) Prove that the classes of P, P′,M in the class group �(A) are equal.e) Using exercise 9/26, prove that �(A) ' Z/2Z.

28-exo929) Let : be a finite field, let K = :(T) be the field of rational functions in one

indeterminate and let L be a finite separable extension of K. Let A = :[T] and let B be

the integral closure of A in L.

a) Show that B is a Dedekind ring and a free A-module of rank =, where = = [L : K].b) Let 0 ∈ :(T); prove that there exists a unique pair (1, 2) where 1 ∈ :[T] and

2 ∈ :(T) are such that 0 = 1 + 2 and deg(2) < 0. (The degree deg(0) of an element

0 ∈ :(T) is defined as deg( 5 ) − deg(6), where 5 , 6 ∈ :[T] are polynomials such that

6 ≠ 0 and 0 = 5 /6.)c) Let (41, . . . , 4=) be a basis of L as an K-vector space. Prove that there exists an

integer M such that deg(NL/K(

∑0848) 6 M + = sup8 deg(08) for every (01, . . . , 0=) ∈ K

=.

d) Prove that there exists an integer M such that the following property holds: for

every 0, 1 ∈ B with 0 ≠ 0, there exists a polynomial 5 ∈ A such that deg( 5 ) 6 M and

6 ∈ B such that deg(NL/K( 5 1 − 02)) 6 deg(N

L/K(0)).e) Prove that the class group �(B) is finite. (Adapt the proof of theorem 9.6.12, using the

preceding question to replace Hurwitz’s lemma.)

29-exo897) Let A be a Dedekind ring. Prove that every divisible A-module is injective.

(Conversely, it follows from exercise 7/24 that if A is a ring such that every divisible right

A-module is injective, then A is right hereditary. If A is an integral domain, it is then a

Dedekind ring.)

30-exo899) Let A be a Dedekind ring and let M be a finitely generated A-module.

a) If M is torsion-free, then M is projective.

b) Prove that the torsion submodule T(M) of M has a direct summand.

c) Let I, J be fractional ideals of A and let 5 : J → I be a morphism of A-modules.

Prove that there exists a unique element 0 ∈ A such that 5 (G) = 0G for every G ∈ J.

Prove that 0 ∈ IJ−1.

d) Let I1, . . . , I< and J1, . . . , J= be fractional ideals of A, let M = I1 ⊕ · · · ⊕ I< and

N = J1 ⊕ · · · ⊕ J= , and let 5 : N → M be an isomorphism of A-modules. Prove that

< = = and I1 . . . I< ' J1 . . . J= . (Represent 5 by a matrix U = (08 9) ∈ M<,=(K), where08 9 ∈ I8J

−1

9. Prove that I = (det(U))J.)

e) Let I, J be fractional ideals of A. Prove that there exist non-zero elements 0 ∈ I and

1 ∈ J such that (I ⊕ J)/(0, 1)A is torsion-free. Conclude that I ⊕ J ' A ⊕ IJ.

f ) Let M be a finitely generated A-module. Assume that M is torsion free and

non-zero. Prove that there exists an integer = > 1 and a fractional ideal I such that

M ' A=−1 ⊕ I.

Page 501: (MOSTLY)COMMUTATIVE ALGEBRA

APPENDIX

This appendix consists in three different sections.

The first one, called algebra, collects a few definitions and facts which are

essentially prerequisites for the reading of this book. Their role is also to precise

a few conventions on which there is no unanimity, such as the precise meaning

of a “positive” number or an “increasing” sequence.

The second section is devoted to results of set theory which are used at a few

places of the book to manage infinite cardinals. TheCantor-Bernstein theorem

will imply that the dimension of a vector space, or the transcendence degree

of a field extensions, are well defined. On the other hand, Zorn’s theorem

asserts the existence of maximal elements in adequate (“inductive”) ordered

sets; it is used to prove the existence of bases of a vector space, of maximal

ideals, of minimal prime ideals, characterize injective modules, etc. The proof of

Kaplansky’s theorem, however, relies on a transfinite inductive construction,

and I couldn’t “zornify” it fully. For this reason, rather than giving a direct (but

maybe unnatural proof) of Zorn’s theorem, I chose to start from the inductive

principle for well-ordered sets, and Hartogs’s lemma. To make this appendix

reasonably complete, I also give a proof of the theorems of Cantor and Zermelo.

Finally, I give a brief introduction to category theory. Mathematical practice

progressively showed the importance of considering not only mathematical

objects butmorphisms between them, and a category is the abstraction of this

idea. It then appeared that categories can themselves be treated as mathematical

objects, their morphisms are called functors, and that this point of view is

extremly fruitful in many fields of mathematics. While categories are only used

as a language in the first chapters of this book, they really are a useful tool in the

chapters devoted to homological algebra and tensor products.

Page 502: (MOSTLY)COMMUTATIVE ALGEBRA

490 APPENDIX

A.1. Algebra

A.1.1. Numbers. —

A.1.1.1. — The notation N, Z, Q, R and C are respectively used

for the sets positive integers (0, 1, 2, . . . ), the rational integers

(0, 1, 2, . . . ,−1,−2,−3, . . . ), the rational numbers (fractions of rational

integers), the real numbers and the complex numbers.

A.1.1.2. — Between real numbers, the ordering relation6 is pronounced

as “less than”, or sometimes “less than or equal to” if we feel it necessary

to insist on the possibility of equality. The expression “G < H” reads “G is

strictly smaller than H”. A similar convention for the opposite ordering

relation > (“greater than”, or “greater than or equal to”), and to the

relation > (“strictly greater than”).

In particular, a real number G is said to be positive if G > 0, strictly

positive if G > 0, negative if G 6 0, and strictly negative if G < 0.

A.1.1.3. Integers. — An integer = is said to be a prime number if it is

strictly greater than 1 and if its only positive divisors are 1 and itself.

The first prime numbers are 2, 3, 5, 7, 11, . . .

A.1.1.4. Binomial coefficients. — Let = be a positive integer. One sets =!

(“factorial =”) to be the product

=! = 1 · 2 . . . =.

One has 0! = 1.

Let <, = be positive integers such that < 6 =. The binomial coefficient(=<

)is defined by (

=

<

)=

=!

<!(= − <)! .

It is always an integer. This follows by induction from the equalities(=0

)= 1 and from the recurrence relation(

= + 1

< + 1

)=

(=

<

)+

(=

< + 1

),

whenever 0 6 < 6 =.

A.1.2. Sets, subsets, maps. —

Page 503: (MOSTLY)COMMUTATIVE ALGEBRA

A.1. ALGEBRA 491

A.1.2.1. — As usual, I write G ∈ S to mean that G is an element of the

set S.

If S and T are two sets, I write S ⊂ T or T ⊃ S to mean the equivalent

formulations S is a subset of T, S is contained in T, or T contains S, that

is: every element of S belongs to T. The symbols S ( T mean that S ⊂ T

but S ≠ T.

The set of all subsets of a set S is denoted by P(S).A.1.2.2. — A map 5 : S → T determines, for every element B ∈ T, an

element 5 (B) ∈ T; the element 5 (B) of T is called the image of B by 5 . The

map 5 has a graph Γ 5 which is the subset of S × T consisting in all pairs

of the form (B, 5 (B)), for B ∈ S.

The identity map, 83S : S → S, is such that every element of S is its

own image.

A.1.2.3. — If A is a subset of S, one writes 5 (A) for the set of all elements

5 (0), for 0 ∈ A.

If B is a subset of T, one writes 5 −1(T) for the set of all elements B ∈ S

such that 5 (B) ∈ T.

A.1.2.4. — One says that the map 5 is injective if distinct elements of S

have distinct images. One says that the map 5 is bijective if every element

of T is the image of some element of S.

One says that the map 5 is bijective if it is both injective and surjective.

Then there exists a unique map 6 : T → S such that 6 ◦ 5 = idS and

5 ◦ 6 = idT; the map 6 is called the inverse of 5 and is often denoted

by 5 −1.

Two sets S and T are said to be equipotent if there exists a bijection

from S to T.

A.1.3. Ordered sets. —

A.1.3.1. — Let us recall that an order ≺ on a set S is a relation satisfying

the following axioms:

– The assertions G ≺ H and H ≺ G are incompatible;

– If G ≺ H and H ≺ I, then G ≺ I.

Page 504: (MOSTLY)COMMUTATIVE ALGEBRA

492 APPENDIX

An ordered set is a set together with an order on it. In an ordered set,

one writes G � H as a shorthand to G ≺ H or G = H; one also defines

G � H and G � H as synonyms for H ≺ G and H � G.A.1.3.2. — Let S be an ordered set and (0=)=∈N be a sequence of elements

of S. One says that it is increasing if 0=+1 � 0= for every = ∈ N, and that

it is decreasing if 0=+1 � 0= for every = ∈ N.

The usual US terminology would be “non-decreasing” and “non-

increasing”. However, this terminology is terribly misleading, since

the two sentences “the sequence is non-decreasing” and “the sequence

is not decreasing” have totally different meanings: while the first one

means that 0=+1 � 0= for every = ∈ N, the second one holds if and only

if there exists = ∈ N such that 0=+1 � 0= is false.If one wants to insist that 0=+1 � 0= for every =, one says that it is

strictly increasing. One defines similarly strictly decreasing sequences.

Finally, a sequence (0=) is stationary if there exists < ∈ N such that

0= = 0< for every integer = > <.

A.1.3.3. — Let S be an ordered set. If, for every pair (G, H) of elements

of S, one has G ≺ H, or H ≺ G, or G = H, one says that S is totally ordererd.

An initial segment of an ordered set S is a subset I such that for

every G ∈ I and any H ∈ S such that H ≺ G, then H ∈ I.

A.1.3.4. — Let A be a subset of a totally ordered set S.

An upper bound of A is an element D ∈ S such that 0 � D for every

0 ∈ A; if A has an upper bound, one says that A is bounded above. A

maximal element of A is an element 0 of A such that there is no G ∈ A such

that G � 0. which is an upper bound of A. Observe that A may have an

upper bound but no maximal element, and that a maximal element of A

may not be an upper bound of A, unless A is totally ordered. A largest

element of A is an element 0 of A such that G � 0 for every G ∈ A.

A lower bound of A is an element ; ∈ S such that ; � 0 for every 0 ∈ A;

if A has a lower bound, one says that it is bounded below. A minimal

element of A is an element 0 of A such that there is no G ∈ A such that

G ≺ 0. A smallest element of A is an element 0 of A such that 0 � G for

every G ∈ A.

Page 505: (MOSTLY)COMMUTATIVE ALGEBRA

A.2. SET THEORY 493

A least upper bound is a minimal element of the set of all upper bounds

for A. A largest lower bound is a maximal element of the set of all lower

bounds for A.

A.2. Set Theory

Theorem (A.2.1) (Cantor-Bernstein theorem). — Let A and B be sets. As-

sume that there exists an injection from A into B, as well as an injection from B

into A. Then, the sets A and B are equipotent.

Proof. — Let 5 ′ : 5 (A) → A be the map such that for any 1 ∈ 5 (A), 5 ′(1)is the unique preimage of 1 by 5 , so that 5 ◦ 5 ′ = id 5 (A) and 5 ′ ◦ 5 = idA.

Define similarly 6′ : 6(B) → B. The idea of the proof consists in iterating

successively 5 ′ and 6′ as much as possible. Let A4 be the set of elements

of A of the form 6 5 6 5 . . . 5 (1), where 1 ∈ B 5 (A); let A> be the set of

elements of A of the form 6 5 6 5 6 5 . . . 5 6(0), where 0 ∈ A 6(B). Theset A4 consists on all elements of A where you can apply an even number

of times 5 ′ and 6′, so that one ends up by 6′; the set A> consists on all

elements of A where you can apply an odd number of times 5 ′ and 6′,ending up by 5 ′. Let A∞ be the complementary subset of A> ∪A4 in A.

Define analogously B4 , B> and B∞.By construction, the maps 5 induces bijections from A4 to B> and

from A∞ to B∞. The map 6 induces a bijection from B4 to A>.

Let ℎ : A→ B be themap that coincides with 5 on A4∪A∞ andwith 6′

on A>. It is a bijection. This concludes the proof of the theorem. �

A.2.2. Cardinals. — Set theory defines the cardinal Card(A) of an arbi-

trary set in such a way that Card(A) = Card(B) if and only if A and B are

equipotent. (The precise construction often uses von Neumann ordinals

but is irrelevant in this book.)

The Cantor–Bernstein theorem implies that one defines an ordering

relation on cardinals by writing Card(A) 6 Card(B) if there exists an

injection from A to B.

Cantor’s theorem (corollary A.2.16 below) will imply that this ordering

relation is total.

Page 506: (MOSTLY)COMMUTATIVE ALGEBRA

494 APPENDIX

A.2.3. Well-orderings. — Let S be an ordered set. One says that S

is well-ordered (or that the given order on S is a well-ordering) if every

non-empty subset of S possesses a smallest element.

Let S be a non-empty well-ordered set. Applying the definition to S

itself, we see that S has a smallest element. In particular, S has a lower

bound.

Applying the definition to a pair (G, H) of elements of S, we obtain that

S is totally ordered: the smallest element of {G, H} is smaller than the

other.

The set of all natural integers with the usual order is well-ordered. On

the contrary, the set of all real numbers which are positive or zero is not

well-ordered: the set of all real numbers G > 1 has no smallest element.

A subset of a well-ordered set is itself well-ordered.

Properties depending of an element of awell-ordered set can be proved

by induction, in a similar way to the induction principle for properties

depending of an integer.

Lemma (A.2.4) (Transfinite induction principle). — Let S be a well-ordered

set and let A be a subset of S. Assume that for every 0 ∈ S such that A contains

{G ∈ S ; G ≺ 0}, one has 0 ∈ A (“induction hypothesis”). Then A = S.

Proof. — Arguing by contradiction, assume that A ≠ S and let B = S A.

Then B is a non-empty subset of S; let 0 be the smallest element of B.

By assumption, A contains every element G ∈ S such that G ≺ 0; by the

induction hypothesis, one has 0 ∈ A, but this contradicts the definition

of 0. Consequently, A = S. �

A.2.5. — The induction principle can be also reformulated by consider-

ing the nature of the elements of a well-ordered set S. If S has a largest

element, we set S∗ = S {sup(S)}; otherwise, we set S∗ = S.

Let 0 ∈ S∗; since 0 is not the largest element of S, the set {G ∈ S ; G � 0}is non-empty; its smallest element is called the successor of 0; let us

denote it by �(0). One has 0 ≺ �(0).Let 0, 1 ∈ S∗ be such that 0 ≺ 1. Then �(0) � 1, by definition of �(0),

hence �(0) ≺ �(1). This proves that the map � : S∗ → S is strictly

increasing; in particular, it is injective.

Page 507: (MOSTLY)COMMUTATIVE ALGEBRA

A.2. SET THEORY 495

An element 0 ∈ S is a successor if and only if {G ∈ S ; G ≺ 0} has alargest element 0′; then 0 = �(0′). Otherwise, one says that 0 is limit.

A.2.6. Transfinite constructions. — It is as well possible to make trans-

finite constructions indexed by a well-ordered set S. For 0 ∈ S, let us

write S0 = {G ∈ S ; G ≺ 0}, so that 0 is the smallest element of S S0.

Proposition (A.2.7) (Transfinite construction principle). — Let X be a set. Let S be a well-ordered set and let ℱ_X be the set of pairs (a, f), where a ∈ S and f : S_a → X is a function. For every function Φ : ℱ_X → X, there exists a unique function F : S → X such that F(a) = Φ(a, F|_{S_a}) for every a ∈ S.

Proof. — Let F, F′ be two such functions. Let us prove that F = F′ by transfinite induction: let A be the set of a ∈ S such that F(a) = F′(a) and let us prove that A satisfies the transfinite induction hypothesis. Let a ∈ S be such that S_a ⊂ A; by definition, one has F(x) = F′(x) for every x ∈ S_a, so that F|_{S_a} = F′|_{S_a}; by the definition of F and F′, one has F(a) = Φ(a, F|_{S_a}) = Φ(a, F′|_{S_a}) = F′(a); this proves that a ∈ A.

The existence of F is proved by transfinite induction as well. Let S* be the well-ordered set obtained by adding to S a largest element ω, and let A be the set of a ∈ S* such that there exists a function F : S_a → X satisfying F(b) = Φ(b, F|_{S_b}) for every b ∈ S_a; let us prove that A satisfies the transfinite induction hypothesis. Let a ∈ S* be such that S_a ⊂ A. We distinguish two cases:

– If a is a successor, let b ∈ S_a be such that a = σ(b) and let F : S_b → X be a function such that F(x) = Φ(x, F|_{S_x}) for every x ∈ S_b. Then S_a = S_b ∪ {b}; let F′ : S_a → X be defined by F′(x) = F(x) if x ∈ S_b and F′(b) = Φ(b, F|_{S_b}). Then F′ satisfies the required relation, hence a ∈ A.

– Otherwise, a is a limit element; for every b ∈ S_a, there exists a function F_b : S_b → X satisfying F_b(x) = Φ(x, F_b|_{S_x}) for every x ∈ S_b. By uniqueness, if b ≺ c, then F_b and F_c coincide on S_b. Since a is a limit element, the union of all S_b, for b ∈ S_a, is equal to S_a, so that there exists a unique function F : S_a → X which coincides with F_b on S_b for every b ∈ S_a. This function F satisfies the desired relation F(x) = Φ(x, F|_{S_x}) for all x ∈ S_a, since for such x there exists b ∈ S_a such that x ∈ S_b, and F(x) = F_b(x) = Φ(x, F_b|_{S_x}) = Φ(x, F|_{S_x}).

By transfinite induction, one has ω ∈ A. Since S_ω = S, this proves that there exists a function F : S → X satisfying the relation F(a) = Φ(a, F|_{S_a}) for every a ∈ S. □
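This statement contains ordinary definitions by recursion as a special case. For instance, take S = ℕ, so that S_n = {0, 1, …, n − 1} for every n. Given an element x_0 ∈ X and a map φ : X → X, define Φ by Φ(0, f) = x_0 (here f is the empty function) and Φ(n, f) = φ(f(n − 1)) for n ≥ 1. The proposition then furnishes the unique sequence F defined by the recursion F(0) = x_0 and F(n) = φ(F(n − 1)) for n ≥ 1.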

Corollary (A.2.8). — Let S, T be two well-ordered sets. One and only one of the following assertions holds:

(i) There exists an increasing bijection f : S → T;
(ii) There exists a ∈ S and an increasing bijection f : S_a → T;
(iii) There exists b ∈ T and an increasing bijection f : S → T_b.

Moreover, in each case, the indicated bijection (and the element a, resp. b) is uniquely determined.

This is a very strong rigidity property of well-ordered sets.

Proof. — a) Let us add to T an element ω and endow the set T* = T ∪ {ω} with the order such that ω > b for all b ∈ T, so that ω is the largest element of T*. Then T* is well-ordered. By the transfinite construction principle (proposition A.2.7), there exists a unique map f : S → T* such that f(a) = inf((T* ∖ f(S_a)) ∪ {ω}) for every a ∈ S. Explicitly, one has f(a) = inf(T ∖ f(S_a)) if T ⊄ f(S_a), and f(a) = ω if T ⊂ f(S_a).

It follows from this definition that if ω ∈ f(S), then f(S) = T*. Let moreover x, y ∈ S be such that x < y and f(x) = f(y); by the definition of f, one has f(y) = ω. Consequently, the map f induces a bijective increasing map from f^{-1}(T) to f(f^{-1}(T)).

We will show that the three cases of the corollary correspond to the three mutually incompatible possibilities: f(S) = T, ω ∈ f(S), and f(S) ⊊ T.

First assume that f(S) = T. Then f is an increasing bijection from S to T.

Assume that ω ∈ f(S) and let a = inf(f^{-1}(ω)). Then f(S_a) ⊂ T and f(a) = ω = inf((T* ∖ f(S_a)) ∪ {ω}), so that f(S_a) = T. Consequently, f induces an increasing bijection from S_a to T.

Assume finally that f(S) ⊊ T and let b = inf(T ∖ f(S)), so that T_b ⊂ f(S) and b ∉ f(S). Conversely, if x ∈ S, then f(x) = inf(T ∖ f(S_x)) ≤ inf(T ∖ f(S)) = b, since S_x ⊂ S; as b ∉ f(S), this forces f(x) < b. This implies that f(S) = T_b, and f is an increasing bijection from S to T_b.


b) To establish the uniqueness assertion, we first prove that if f : S → S is a strictly increasing map, then f(x) ≥ x for all x ∈ S. By transfinite induction, it suffices to prove that for every a ∈ S such that f(x) ≥ x for all x < a, one has f(a) ≥ a. If this does not hold, that is, if f(a) < a, then one has f(f(a)) < f(a) because f is strictly increasing, which contradicts the induction hypothesis applied to the element f(a).

Let then f, g : S → T be two increasing bijective maps and let us prove that f = g. Applying the initial result to g^{-1} ∘ f, we have g^{-1} ∘ f(x) ≥ x for all x ∈ S, hence f(x) ≥ g(x) because g is increasing. By symmetry, one has g(x) ≥ f(x) for all x ∈ S, hence f = g.

Let a ∈ S and let f : S → S_a be an increasing bijection. Viewing f as a strictly increasing map from S to itself, the initial result gives a ≤ f(a); on the other hand, f(a) ∈ S_a, hence f(a) < a, a contradiction.

This implies that case (i) of a) is incompatible with cases (ii) and (iii), and it establishes the uniqueness of a bijection f as indicated in each case, as well as the fact that the elements a (in (ii)) and b (in (iii)) are well defined. Cases (ii) and (iii) are themselves incompatible: if f : S_a → T and g : S → T_b are increasing bijections, let a′ ∈ S_a be such that f(a′) = b; then f induces an increasing bijection from S_{a′} to T_b, hence g^{-1} ∘ f induces an increasing bijection from S_{a′} to S, a contradiction. □
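Note that no such rigidity holds for ordered sets in general: for instance, x ↦ x and x ↦ x³ are two distinct increasing bijections from R to itself, while the corollary implies that the identity is the only increasing bijection from a well-ordered set to itself.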

Proposition (A.2.9) (Hartogs). — Let X be a set. There exists a well-ordered set S such that no map f : S → X is injective.

Proof. — Let 𝒜 be the set of all pairs (A, ≤_A), where A is a subset of X and ≤_A is a well-ordering on A. Let us define two relations ∼ and ≼ on 𝒜. First, we say that (A, ≤_A) ∼ (B, ≤_B) if there exists an increasing bijection from A to B; it is an equivalence relation. We then say that (A, ≤_A) ≼ (B, ≤_B) if there exists an increasing bijection from A to B, or an increasing bijection from A to B_b for some b ∈ B. The relation ≼ is compatible with the equivalence relation ∼, and corollary A.2.8 implies that on the quotient set S = 𝒜/∼, the relation ≼ induces a total order.

In fact, S is even well-ordered. Let indeed Σ be a non-empty subset of S and let Σ′ be its preimage in 𝒜. It suffices to find an element (B, ≤_B) ∈ Σ′ such that (B, ≤_B) ≼ (C, ≤_C) for every (C, ≤_C) ∈ Σ′.

Fix an element (A, ≤_A) in Σ′. Let then ℬ′ be the set of all elements (B, ≤_B) in Σ′ such that (B, ≤_B) ≺ (A, ≤_A). If ℬ′ is empty, then the class of (A, ≤_A) modulo ∼ is the smallest element of Σ. Otherwise, for every B ∈ ℬ′, let λ(B) be the unique element of A such that there exists an increasing bijection f_B from B to A_{λ(B)}; let then a be the smallest element of A which is of this form; fix B ∈ ℬ′ such that a = λ(B).

Let (C, ≤_C) ∈ Σ′. If (C, ≤_C) ∉ ℬ′, then (A, ≤_A) ≼ (C, ≤_C), hence (B, ≤_B) ≼ (C, ≤_C). Otherwise, one has (C, ≤_C) ∈ ℬ′ and λ(C) ≥ λ(B). Let then f_B : B → A_{λ(B)} and f_C : C → A_{λ(C)} be increasing bijections. If λ(B) = λ(C), then (B, ≤_B) ∼ (C, ≤_C); in particular, (B, ≤_B) ≼ (C, ≤_C). Otherwise, one has λ(B) < λ(C), so that there exists c ∈ C such that f_C(c) = λ(B); then f_C induces an increasing bijection from C_c to A_{λ(B)}, so that C_c ∼ B and (B, ≤_B) ≺ (C, ≤_C). This proves that the class of (B, ≤_B) is the smallest element of Σ, and concludes the proof that S is well-ordered.

To conclude the proof of the proposition, it remains to establish that there does not exist an injection from S to X. Let us argue by contradiction and consider such a map h : S → X. Let A = h(S). Transferring the ordering of S to A by h, one gets a well-ordering ≤_A on A, so that (A, ≤_A) is an element of 𝒜; let α be its class in S. For every x ∈ S, A_{h(x)} is a well-ordered subset of A; let g(x) be its class in S; one has g(x) < α by corollary A.2.8. The map g : S → S is strictly increasing, so that g(x) ≥ x for every x ∈ S. Taking x = α, we get the desired contradiction α ≤ g(α) < α. □

A.2.10. — From now on, we will make use of the axiom of choice: the product ∏_{i∈I} S_i of any family (S_i)_{i∈I} of non-empty sets is non-empty.

Theorem (A.2.11) (Zorn’s lemma). — Let S be an ordered set. Assume that

every well-ordered subset of S has an upper bound. Then S has a maximal

element.

Proof. — Let us argue by contradiction, assuming that S doesn’t have a

maximal element. Let S∗be the ordered set obtained by adjoining to S a

largest element $.

Page 511: (MOSTLY)COMMUTATIVE ALGEBRA

A.2. SET THEORY 499

Let A be a well-ordered subset of S. let 0 be an upper bound of A;

since 0 is not a maximal element of S, there exists 0′ ∈ S such that 0 < 0′.Using the axiom of choice, there exists a function 5 that assigns, to any

well-ordered subset A of S, an element 5 (A) ∈ S such that G < 5 (A) forevery G ∈ A. We extend 5 to P(S∗) by setting 5 (A) = $ if A is not a

well-ordered subset of S.

Let T be a well-ordered set that admits no injection to S. By the

transfinite construction principle, there exists a unique map ℎ : T→ S∗

such that ℎ(C) = 5 (TC) for every C ∈ T. Let us prove that ℎ is strictly

increasing and that ℎ(T) ⊂ S. By the transfinite induction principle,

it suffices to prove that if ℎ |TC is strictly increasing and its image is

contained in S, then ℎ(C) ∈ S and ℎ(G) < ℎ(C) for every G ∈ TC . Then

ℎ(TC) is a well-ordered subset of S, hence ℎ(C) = 5 (ℎ(TC)) ∈ S and satisfies

ℎ(C) > ℎ(G) for every G ∈ TC , hence the claim.

In particular, ℎ is an injective map from T to S, and this contradicts the

choice of T. �

A.2.12. — Classical applications of Zorn's lemma (theorem A.2.11) occur in situations where the ordered set S satisfies stronger assumptions.

One says that an ordered set S is inductive if every totally ordered subset of S has an upper bound. Applying the definition to the empty subset, we observe that an inductive set is not empty. Moreover, if S is an inductive set, then for every a ∈ S, the set {x ∈ S ; a ≼ x} is also inductive. The following result is then an immediate corollary of Zorn's lemma.

Corollary (A.2.13). — Every inductive set has a maximal element. More precisely, if S is an inductive set and a is an element of S, then S has a maximal element b such that a ≼ b.
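A typical application: let A be a nonzero commutative ring and let I be an ideal of A distinct from A. The set of ideals of A which contain I and are distinct from A, ordered by inclusion, is inductive: it contains I, and the union of a non-empty totally ordered family of such ideals is again an ideal distinct from A, because it does not contain 1. Corollary A.2.13 therefore furnishes a maximal ideal of A containing I.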

A.2.14. — Let X be a set and let S be a subset of P(X), ordered by

inclusion. One says that S is of finite character if it contains the empty set

and if a subset A of X belongs to S if and only if every finite subset of A

belongs to S. (An important example is given by the set of free subsets

of a vector space, or of a module.)


Lemma (A.2.15). — Let X be a set and let S be a subset of P(X), ordered by inclusion. If S is of finite character, then S is inductive.

Proof. — Let T be a totally ordered subset of S; let us prove that T has an upper bound in S. If T is empty, then ∅ is an upper bound for T, and ∅ ∈ S by definition. Otherwise, let V be the union of all elements of T. By definition, one has A ⊂ V for every A ∈ T, hence V is an upper bound for T. To conclude the proof of the lemma, it suffices to prove that V belongs to S.

By the definition of a set of finite character, in order to prove that V ∈ S, it suffices to prove that every finite subset A of V belongs to S.

In fact, let us prove by induction on the cardinality of A that there exists a subset B ∈ T such that A ⊂ B. This holds if A = ∅, since T is non-empty. Otherwise, let a ∈ A and let A′ = A ∖ {a}; by induction, there exists B′ ∈ T such that A′ ⊂ B′. Since a ∈ A ⊂ V, there exists a set B″ ∈ T such that a ∈ B″. Since T is totally ordered, one has either B′ ⊂ B″ or B″ ⊂ B′; since A = A′ ∪ {a}, we get A ⊂ B″ or A ⊂ B′, which proves the claim by induction.

Since A is a finite subset of an element B of S, the definition of a set of finite character implies that A belongs to S. Applying once more the definition of a set of finite character, this proves that V ∈ S. □
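For instance, let E be a vector space over a field. The set of free (linearly independent) subsets of E is of finite character, since a subset of E is free if and only if every finite subset of it is free. By the lemma and corollary A.2.13, every free subset of E is therefore contained in a maximal free subset, and a maximal free subset of E is a basis of E.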

Corollary (A.2.16) (Cantor). — Let X, Y be sets. At least one of the two following possibilities holds:

(i) There exists an injective map f : X → Y;
(ii) There exists an injective map f : Y → X.

Proof. — Let 𝒜 be the set of all pairs (A, f), where A is a subset of X and f : A → Y is an injective map. We order 𝒜 as follows: (A, f) ≤ (B, g) if A ⊂ B and g|_A = f. Let us prove that the set 𝒜 is inductive.

Let ((A_i, f_i))_{i∈I} be a totally ordered family of elements of 𝒜; let A = ⋃_i A_i. I claim that there exists a unique map f : A → Y such that f(x) = f_i(x) for every i ∈ I and every x ∈ A_i. Let indeed i, j ∈ I be such that x ∈ A_i and x ∈ A_j; if (A_i, f_i) ≤ (A_j, f_j), then A_i ⊂ A_j and f_i = f_j|_{A_i}, so that f_i(x) = f_j(x); by symmetry, the same equality holds if (A_j, f_j) ≤ (A_i, f_i).

Let x, y ∈ A be such that f(x) = f(y). Let i, j ∈ I be such that x ∈ A_i and y ∈ A_j. If (A_i, f_i) ≤ (A_j, f_j), then x, y ∈ A_j, hence f(x) = f_j(x) and f(y) = f_j(y), hence f_j(x) = f_j(y); since f_j is injective, this implies x = y. By symmetry, the same property holds if (A_j, f_j) ≤ (A_i, f_i). This proves that f is injective, so that (A, f) belongs to 𝒜 and is an upper bound of the given family.

By Zorn's lemma (theorem A.2.11), the set 𝒜 admits a maximal element, say (A, f). If A = X, then f is an injective map from X to Y. Assume that A ≠ X; I then claim that f is surjective. Otherwise, there exist a ∈ X ∖ A and b ∈ Y ∖ f(A); the map g : A ∪ {a} → Y such that g|_A = f and g(a) = b is injective, and one has (A, f) ≤ (A ∪ {a}, g) with A ≠ A ∪ {a}, contradicting the hypothesis that (A, f) is a maximal element of 𝒜. The map f : A → Y is thus bijective; its inverse f^{-1} : Y → A is an injective map from Y to X. This concludes the proof of Cantor's theorem. □

Corollary (A.2.17) (Zermelo, 1904). — Every set can be well-ordered.

Proof. — Let X be a set. By proposition A.2.9, there exists a well-ordered set S such that there is no injection from S to X. By Cantor's theorem (corollary A.2.16), there then exists an injection f from X to S. Since f is injective, there is a unique ordering relation on X for which f is increasing, namely the one for which x ≤ y if and only if f(x) ≤ f(y); since f(X) is a subset of the well-ordered set S, this ordering is a well-ordering on X, establishing the corollary. □

A.3. Categories

Category theory provides a very useful and common vocabulary for describing certain algebraic structures (the so-called categories) and the ways in which one can connect them (functors).

A.3.1. Categories. — A category C consists of the following data:

– A collection ob C of objects;
– For any two objects M, N, a set C(M, N), whose elements are called morphisms from M to N;
– For any three objects M, N, P, a composition map C(M, N) × C(N, P) → C(M, P), (f, g) ↦ g ∘ f,

so that the following axioms are satisfied:

(i) For any object M, there is a distinguished morphism id_M ∈ C(M, M), called the identity;
(ii) One has id_N ∘ f = f for any f ∈ C(M, N);
(iii) One has g ∘ id_N = g for any g ∈ C(N, P);
(iv) For any four objects M, N, P, Q and any three morphisms f ∈ C(M, N), g ∈ C(N, P), h ∈ C(P, Q), the two morphisms h ∘ (g ∘ f) and (h ∘ g) ∘ f in C(M, Q) are equal (associativity of composition).

A common notation for C(M, N) is also Hom_C(M, N). Finally, instead of f ∈ C(M, N), one often writes f : M → N.

Let f : M → N be a morphism in a category C. One says that f is left-invertible, resp. right-invertible, resp. invertible, if there exists a morphism g : N → M such that g ∘ f = id_M, resp. f ∘ g = id_N, resp. g ∘ f = id_M and f ∘ g = id_N. One proves in the usual way (see p. ) that if f is both left- and right-invertible, then it is invertible. An invertible morphism is also called an isomorphism.

Let us now give examples of categories. It will appear clearly that all the basic structures studied in algebra fall within the categorical framework.

Examples (A.3.2). — The category Set of sets has for objects the sets, and for morphisms the usual maps between sets.

The category Gr of groups has for objects the groups and for morphisms the morphisms of groups. The category AbGr of Abelian groups has for objects the Abelian groups and for morphisms the morphisms of groups. Observe that objects of AbGr are objects of Gr, and that morphisms in AbGr coincide with those in Gr; one says that AbGr is a full subcategory of Gr.

The category Ring of rings has for objects the rings and for morphisms the morphisms of rings.

Similarly, there is the category Field of fields and, if k is a field, the category Ev_k of k-vector spaces. More generally, for any ring A, there is a category Mod_A of right A-modules, and similarly a category of left A-modules.


Example (A.3.3). — Let C be a category; its opposite category C^o has the same objects as C, but the morphisms of C^o are defined by C^o(M, N) = C(N, M) and are composed in the opposite direction.

This resembles the definition of an opposite group. However, a category is usually different from its opposite category.

Remark (A.3.4). — Since there is no set containing all sets, nor a set containing all vector spaces, the word collection in the above definition cannot be replaced by the word set (in the sense of the Zermelo–Fraenkel theory of sets). In fact, a proper treatment of categories involves set-theoretical issues. There are at least three ways to solve them:

– The easiest one is to treat a category as a formula (in the sense of first-order logic). For example, Ring is a formula φ_Ring with one free variable A that expresses that A is a ring. This requires encoding a ring A and all its laws as a tuple: for example, one may consider a ring to be a triple (A, S, P) where A is the underlying set, S is the graph of the addition law and P is the graph of the multiplication law. The formula φ_Ring(x) then checks that x is a triple of the form (A, S, P), where S ⊂ A³ and P ⊂ A³, that S is the graph of a map A × A → A which is associative, commutative, has a neutral element, and for which every element has an opposite, etc.

Within such a framework, one can also consider functors (defined below), but only those which can be defined by a formula. This treatment would be sufficient at the level of this book.

– One can also use another theory of sets, such as that of Bernays–Gödel–von Neumann, which allows for two kinds of collections: sets and classes. Sets obey the classical formalism of sets, but classes are more general, so that one can consider the class of all sets (but not the class of all classes). Functors are then defined as classes.

This is a very convenient possibility at the level of this book. However, at a more advanced stage of the development of algebra, one is led to consider the category of categories, or categories of functors, and this approach then becomes insufficient as well.

– Within the classical theory of sets, Grothendieck introduced universes, which are very large sets, so large that no usual construction of sets leads outside a given universe. One then needs to add the axiom that there exists a universe or, more generally, that every set belongs to some universe. This axiom is equivalent to the existence of inaccessible cardinals, an axiom which is well studied and often used in advanced set theory.

In this book, categories mostly provide a language in which to state algebraic results of a quite formal nature.

Definition (A.3.5). — Let C be a category, let M, N be objects of C and let f ∈ C(M, N).

One says that f is a monomorphism if for any object P of C and any morphisms g_1, g_2 ∈ C(P, M) such that f ∘ g_1 = f ∘ g_2, one has g_1 = g_2.

One says that f is an epimorphism if for any object P of C and any morphisms g_1, g_2 ∈ C(N, P) such that g_1 ∘ f = g_2 ∘ f, one has g_1 = g_2.

Example (A.3.6). — Monomorphisms and epimorphisms in Set or in categories of modules are respectively the injections and the surjections (see exercise 7/20).
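In other categories, epimorphisms need not be surjective. For instance, in the category Ring, the inclusion of Z into Q is an epimorphism: if g_1, g_2 : Q → A are ring morphisms which coincide on Z, then for every nonzero integer n the elements g_1(1/n) and g_2(1/n) are both inverses of the common image of n in A, hence are equal, so that g_1 = g_2. It is, however, not surjective.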

A.3.7. Functors. — Functors are to categories what maps are to sets. Let C and D be two categories.

A functor F from C to D consists of the following data:
– an object F(M) of D for any object M of C;
– a morphism F(f) ∈ D(F(M), F(N)) for any objects M, N of C and any morphism f ∈ C(M, N),
subject to the two following requirements:
(i) For any object M of C, F(id_M) = id_{F(M)};
(ii) For any objects M, N, P of C and any morphisms f ∈ C(M, N) and g ∈ C(N, P), one has F(g ∘ f) = F(g) ∘ F(f).

A contravariant functor F from C to D is a functor from C^o to D. Explicitly, it consists of the following data:
– an object F(M) of D for any object M of C;
– a morphism F(f) ∈ D(F(N), F(M)) for any objects M, N of C and any morphism f ∈ C(M, N),
subject to the two following requirements:
(i) For any object M of C, F(id_M) = id_{F(M)};
(ii) For any objects M, N, P of C and any morphisms f ∈ C(M, N) and g ∈ C(N, P), one has F(g ∘ f) = F(f) ∘ F(g).

One says that such a functor F is faithful, resp. full, resp. fully faithful, if for any objects M, N of C, the map f ↦ F(f) from C(M, N) to D(F(M), F(N)) is injective, resp. surjective, resp. bijective. A similar definition applies to contravariant functors.

Example (A.3.8) (Forgetful functors). — Many algebraic structures are defined by enriching other structures. Often, forgetting this enrichment gives rise to a functor, called a forgetful functor.

For example, a group is already a set, and a morphism of groups is a map. There is thus a functor that associates with every group its underlying set, thus forgetting the group structure. One gets a forgetful functor from Gr to Set. It is faithful, because a group morphism is determined by the map between the underlying sets. It is however not full, because there are maps between two (non-trivial) groups which are not morphisms of groups.

Example (A.3.9). — Let C be a category and let P be an object of C. One defines a functor F from the category C to the category of sets as follows:
– For any object M of C, one sets F(M) = C(P, M);
– For any morphism f : M → N in C, F(f) is the map u ↦ f ∘ u from C(P, M) to C(P, N).
This functor is often denoted Hom_C(P, •). Such a functor is also called a representable functor.

One can also define a contravariant functor G, denoted Hom_C(•, P), as follows:
– For any object M of C, one sets G(M) = C(M, P);
– For any morphism f : M → N in C, G(f) is the map u ↦ u ∘ f from C(N, P) to C(M, P).
In other words, G is the functor Hom_{C^o}(P, •). One says that it is corepresentable.
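For instance, the forgetful functor from Ring to Set is representable. Indeed, for every ring A, the map φ ↦ φ(T) from Hom_Ring(Z[T], A) to A is a bijection, where Z[T] denotes the ring of polynomials in one indeterminate T, and these bijections identify the forgetful functor with Hom_Ring(Z[T], •).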


Let F and G be two functors from a category C to a category D. A morphism of functors from F to G consists of the datum, for every object M of C, of a morphism α_M : F(M) → G(M) such that the following condition holds: for any morphism f : M → N in C, one has α_N ∘ F(f) = G(f) ∘ α_M.

Morphisms of functors can be composed, and for any functor F, one has an identity morphism from F to itself. Consequently, the functors from C to D themselves form a category, denoted F(C, D).
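For instance, let A be a commutative ring. For every A-module M, let ε_M : M → Hom_A(Hom_A(M, A), A) be the map sending m to the linear form φ ↦ φ(m). For every morphism of A-modules f : M → N, one has ε_N ∘ f = F(f) ∘ ε_M, where F denotes the bidual functor M ↦ Hom_A(Hom_A(M, A), A); the family (ε_M) is therefore a morphism of functors from the identity functor of the category of A-modules to F.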

A.3.10. Universal properties and representable functors. — This book contains many universal properties: the free module on a given basis, quotient rings, quotient modules, direct sums and products of modules, localization, the algebra of polynomials on a given set of indeterminates. They are all of the following form: "in such and such an algebraic situation, there exists an object and a morphism satisfying such and such a property, and such that any other morphism which satisfies this property factors through it".

The prototype of a universal property is the following.

Definition (A.3.11). — Let C be a category.

One says that an object I of C is an initial object if for every object M of C, the set C(I, M) has exactly one element.

One says that an object T of C is a terminal object if for every object M of C, the set C(M, T) has exactly one element.

Observe that an initial object of a category is a terminal object of the

opposite category, and vice versa.

Examples (A.3.12). — In the category Set of sets, the empty set is the

only initial object, and any set of cardinality one is a terminal object.

In the category Ring of rings, the ring Z is an initial object (for any

ring A, there is exactly one morphism from Z to A, see Example 1.3.4).

Moreover, the ring 0 is a terminal object.

In the category of modules over a ring A, the null module is both an

initial and a terminal object.

The property for an object I to be an initial object can be rephrased as a property of the representable functor Hom_C(I, •), namely that this functor coincides with (or, rather, is isomorphic to) the functor F that sends any object of C to a fixed set with one element.

This makes it possible to rephrase the definition of an initial object as follows: an object I is an initial object if it represents the functor F defined above.

Definition (A.3.13). — Let C be a category and let F be a functor from C to the category Set of sets. Let P be an object of C. One says that P represents the functor F if the functor Hom_C(P, •) is isomorphic to F.

Similarly, if G is a contravariant functor from C to Set, one says that an object P of C co-represents G if the functors Hom_C(•, P) and G are isomorphic.

Objects that represent a given functor are unique up to isomorphism:

Proposition (A.3.14) (Yoneda's lemma). — Let C be a category and let A and B be two objects of C.

For any morphism of functors α from Hom_C(A, •) to Hom_C(B, •), there is a unique morphism f : B → A such that α_M(u) = u ∘ f for any object M of C and any morphism u ∈ C(A, M).

In particular, α is an isomorphism if and only if f is an isomorphism.

Proof. — Let us write F = Hom_C(A, •) and G = Hom_C(B, •). Recall the definition of a morphism of functors: for every object M of C, one has a map α_M : F(M) → G(M) such that G(u) ∘ α_M = α_N ∘ F(u) for any two objects M, N of C and every morphism u : M → N. In the present case, this means that for every object M of C, α_M is a map from C(A, M) to C(B, M) and that u ∘ α_M(f) = α_N(u ∘ f) for every f ∈ C(A, M) and every u ∈ C(M, N). In particular, taking M = A and f = id_A, one obtains α_N(u) = u ∘ α_A(id_A) for every u ∈ C(A, N). This proves the existence of a morphism f as required by the lemma, namely f = α_A(id_A). The uniqueness of f also follows from this formula, applied to the object A and the morphism id_A: if f′ : B → A is any morphism possessing the required property, one has f = α_A(id_A) = id_A ∘ f′ = f′. □
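In particular, two objects representing the same functor are isomorphic: if P and Q both represent a functor F from C to Set, then composing the given isomorphisms of functors yields an isomorphism from Hom_C(P, •) to Hom_C(Q, •), hence, by the proposition, an isomorphism Q → P in C.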

A.3.15. Adjunction. — Let C and D be two categories, let F be a functor from C to D and let G be a functor from D to C.

An adjunction for the pair (F, G) is the datum, for any objects M of C and N of D, of a bijection

Φ_{M,N} : D(F(M), N) → C(M, G(N))

such that the following holds: for any objects M, M′ of C, any morphism f ∈ C(M, M′), any objects N, N′ of D, any morphism g ∈ D(N, N′), and any morphism u ∈ D(F(M′), N), one has

G(g) ∘ Φ_{M′,N}(u) ∘ f = Φ_{M,N′}(g ∘ u ∘ F(f)).

If there exists an adjunction for the pair (F, G), one says that F is a left adjoint to G, or that G is a right adjoint to F.

Section 7.6 of the book offers a first study of adjoint functors in the framework of categories of modules. The reader is invited to try to generalize what is explained there to other categories.
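A basic example: let A be a ring, let G be the forgetful functor from the category of left A-modules to Set, and let F be the functor sending a set I to the free A-module F(I) on the set I, with canonical basis (e_i)_{i∈I}. For every set I and every A-module M, the universal property of free modules provides a bijection from Hom_A(F(I), M) to the set of maps from I to G(M), namely u ↦ (i ↦ u(e_i)); these bijections form an adjunction for the pair (F, G), so that the free module functor is a left adjoint of the forgetful functor.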

Remark (A.3.16). — Assume that F and G are adjoint functors as above. Then for any object M of C, the object F(M) represents the functor Hom_C(M, G(•)) from D to Set.

Conversely, let G be a functor from D to C. One proves that if the functor Hom_C(M, G(•)) is representable for every object M of C, then G has a left adjoint. The proof consists in choosing, for every object M of C, an object F(M) which represents the given functor. Moreover, any morphism f : M → M′ in C gives rise to a morphism from the functor Hom_C(M′, G(•)) to the functor Hom_C(M, G(•)), hence, by Yoneda's lemma (proposition A.3.14), to a morphism F(f) : F(M) → F(M′).



INDEX

A
additive

— functor, 349

adjugate, 407

adjunction, 508

Akizuki’s theorem, 296

algebra, 6, 16

group —, 7

integral, 180

monoid —, 7

algebraic closure, 186

— of a field in an algebra, 183

existence of an —, 187

algebraic set, 77

algebraically closed field, 185

algebraically independent family,

213

alternate

see p-linear map, 152

annihilator

of a module, 120

of an element, 120

artinian

— module, 290

— ring, 291

Artin–Rees lemma, 316

Artin–Tate lemma, 289

associated

— elements, 12

associated prime ideal, 300, 305

automorphism

— of ring, 8

interior —, 10

axiom of choice, 498

BBaer’s theorem, 346

balanced bi-additive map, 380

normal basis, 224

basis, 135

canonical — of (A^d)^n, 139
Bass–Papp

theorem of —, 375

Berlekamp’s algorithm, 225

bimodule, 119

binomial formula, 33

Bézout
— theorem, 101

Bézout’s theorem, 92

bonded subset, 135

Bruhat decomposition, 281

C
canonical basis, 136

Cantor-Bernstein, 493

category, 501


Cayley–Hamilton’s theorem, 55

center of a ring, 7

centralizer of a subset, 7

character of a finite abelian

group, 268

characteristic of a field, 17, 198

characteristic polynomial

— of an endomorphism, 259

Chevalley–Dunford

decomposition of a matrix,

219

Chinese remainder theorem, 38

Cohen’s theorem, 315

cokernel, 132

column equivalence of matrices,

229

column rank, 230, 234

comaximal ideals, 92

Commuting elements in a ring, 3

companion matrix, 258

complex, 324

conductor, 57

content, 94

contraction, 57

coprimary

— module, 306

coprime elements, 92

coproduct

in a category, 129

corps

fini, 225

cyclic module, 257

cyclic vector, 257

D
decreasing, 492

strictly —, 492

degree of a polynomial

total —, 20

with respect to an

indeterminate, 20

derivative of a polynomial, 25

determinant

— of a square matrix, 157

— of an endomorphism of a

free module, 158

(alternate linear form on a free

module), 157

diagram, 324

commutative, 324

differential, 363

differential module, 363

morphism of —s, 363

dimension

— of a vector space, 147

direct summand, 130, 335

divisible, 346

division algebra, 12

division ring, 12

domain, 11

unique factorization —, 86

dual

— of a module, 125

dual basis, 138

E
elementary operations on rows

and columns of a matrix,

229

endomorphism

— of module, 122

euclidean

gauge, 84

ring, 84

Euclidean division

for polynomials, 23

exact sequence, 324

short —, 325


split —, 326

exactness

— of localization, 50, 143

exactness of localization, 356

Exchange lemma, 215

exponent

of a monomial, 20

extension, 57

of rings, 175

F
factorization theorem, 34, 38

field, 12

— of fractions, 45

algebraically closed —, 27

finite field, 198, 201

finitely generated

— algebra, 286, 289

— module, 136, 289

Fitting ideal

of a module, 167

flat module, 411

fractional ideal, 473

free

— module, 136

free module, 335

free subset, 135

Frobenius, 191

Frobenius homomorphism, 9,

199, 201

functor, 348, 504

— contravariant, 349

— covariant, 349

contravariant —, 504

exact —, 350

faithful —, 505

full —, 505

fully faithful —, 505

left exact —, 350

right exact —, 350

Fundamental theorem of

Algebra, 185

G
Galois extension, 201, 203

Galois group, 201

Galois’s theorem, 203

gauge, see also euclidean, 84

generating subset, 134

greatest common divisor, 90

H
height of a prime ideal, 458

Hilbert’s Nullstellensatz, 80

Hilbert’s theorem, 286

I
ideal, 28

comaximal —s, 38

left —, 27

right —, 27

two-sided —, 27

idempotent

— element, 52

increasing, 492

strictly —, 492

indecomposable module, 313

inductive ordered set, 499

injective module, 343, 353

integral

— element, 176

integral closure, 178

integrally closed, 179

invariant factors, 250

inverse, 11

invertible element, 11

invertible fractional ideal, 474

irreducible element, 85

isomorphism


— of modules, 122

J
Jacobson ring, 455

Jordan block, 262

Jordan decomposition, 262

Jordan–Hölder theorem, 278

K
kernel

— of a morphism of modules,

123

— of a ring morphism, 30

Krull’s theorem, 188

L
Laplace expansion, 407

Laplace expansion of the

determinant, 160

largest lower bound, 493

leading coefficient, 21

least common multiple, 90

least upper bound, 493

linear combination, 118

linear form, 125

Lüroth’s theorem, 226

M
matrix

diagonal —, 164

elementary —, 228

permutation —, 228

maximal spectrum, 67

minimal polynomial

— of an endomorphism, 259

minor

— of a matrix, 164

module, 117

direct product of —s, 126

direct sum of —s, 127

dual —, 125

faithful, 120

finitely generated, 288, 305

finitely generated —, 282, 304

finitely presented —, 331

free — on a set, 136

graded —, 364

left —, 117

of finite length, 305

right —, 116

semisimple, 313

monic polynomial, 21

monomial, 20

morphism

— of modules, 122

— of rings, 8

(in a category), 501

multiplicative

— subset, 40

multiplicity

of a root of a polynomial, 26

NNakayama’s lemma, 312

nilpotent, 10

— element, 319

nilradical, 32

Noether normalization theorem,

472

noetherian

— module, 282, 283

— ring, 87, 285

ring, 304, 305

Noether’s normalization

theorem, 452

norm, 208

normal extension, 467

normalizer, 207


O
object

(of a category), 501

initial —, 506

terminal —, 506

Ore condition, 47, 59

P
perfect field, 192

permutation

— matrix, 228

Perron’s theorem, 111

pivot column indices, 234

pivot row indices, 234

p-linear form, 152

alternate —, 152

p-linear map, 152

alternate —, 152

antisymmetric —, 152

Poincaré’s lemma, 366

polynomial

split —, 27

presentation, 331

finite —, 331

primary

— submodule, 306

primary component, 251

primary decomposition, 309

minimal —, 309

prime field

of a division ring, 17

prime ideal, 67

Primitive element theorem, 196

primitive polynomial, 94

principal ideal, 82

principal ideal domain, 83, 346

product

in a category, 129

product of ideals, 32

projective

— module, 334

projective module, 335, 353

Q
quaternion, 12

quotient ring, 34

R
Rabinowitsch trick, 81

radical

— of an ideal, 32

Jacobson —, 64

nilpotent —, 32

rank

— of a module, 250

of a free module, 149

reduced

ring, 32

reduced column echelon form,

230

reduced row echelon form, 229

regular

— element, 10, 319

resultant, 99, 103

— and roots, 103

degree of the —, 102

ring, 2

commutative, 2

noetherian ring, 283

ring morphism

pure —, 446

ring of polynomials, 19

ring of power series, 55

root, 25

Rouché’s theorem, 111

row equivalence of matrices, 229

row rank, 230, 234


S
separable

— element, 193

— extension, 193

— polynomial, 191

set of finite character, 499

signature of a permutation, 152

simple

— module, 274

snake lemma, 328

spectrum

— of a ring, 67

split polynomial, 185

stathm, see also euclidean gauge, 84

stationary, 492

Steinitz’s theorem, 187

submodule, 120

— generated, 126

intersection of —s, 125

pure —, 444

sum of —s, 126

subring, 7

support, 297

T
tensor, 381

split —, 381

theorem

— of Cohen–Seidenberg, 460

— of Krull–Akizuki, 485

— of Chevalley–Warning, 225
— of d'Alembert–Gauß, 222
— of Lüroth, 226
— of Wedderburn, 223

torsion element, 121

torsion module, 121

torsion submodule, 121, 250

torsion-free module, 121

totally ordered set, 492

trace, 208

transcendence degree, 217

U
unit, 11

unit element of a ring, 3

universal property, 34, 394

— of direct products of

modules, 127

— of direct sums of modules,

127

— of free A-modules, 336

— of polynomial algebras, 24

— of quotient modules, 131

— of quotient rings, 34

— of tensor products, 381

of fraction rings, 42

V
von Neumann ring, 108

WWedderburn’s theorem, 318

well-ordered set, 494

Weyl group, 228

Z
Zariski topology, 69, 78

zero divisor, 11

zero element of a ring, 3

Zorn’ lemma, 345

Zorn’s lemma, 498