
Algebras

Karin Erdmann

Mathematical Institute, University of Oxford

October 2007


Introduction

You will probably have studied groups acting on sets, and have seen that group actions occur in many situations. Very often, the underlying set has the structure of a vector space, and the group acts by linear transformations. This is very good because in this case one gets new information, for example by exploiting invariant subspaces, or eigenvalues, or other properties coming from linear algebra.

The action of a group element on a vector space is always invertible. But in many applications one needs to deal with linear maps which are not invertible. As an extreme, take a linear map whose iteration eventually maps everything to zero. For example, taking derivatives of functions is a linear map, and if a polynomial of degree 2 is differentiated three times one always gets zero.

Therefore one would also like to study actions by linear transformations which are not necessarily invertible. One appropriate structure to model such actions is that of an 'algebra', more precisely an associative K-algebra where K is a field. This includes many known examples, like polynomials K[X], or square matrices. New examples are group algebras, which can be thought of as a 'linearisation' of groups; these ensure that group actions by linear transformations can be viewed from this new perspective.

We will start by introducing these algebras and giving examples, and we will investigate some general properties.

If an algebra A acts on a vector space V, then V together with this action is said to be an A-module. [This is analogous to the approach for group actions. 'If G acts on a set Ω, then Ω is a G-set' becomes 'If A acts on a vector space V, then V is an A-module'.] The second chapter studies modules and their general properties. Furthermore, we define actions of algebras on vector spaces, and show that A-modules and actions of A on vector spaces are the same. For some purposes the language of modules is more convenient, but sometimes it is more natural to think of actions.

An A-module V is said to be simple (or 'irreducible') if it is non-zero, and if it does not have any subspaces which are invariant under the action of A except V and 0. Simple modules are the 'building blocks' for arbitrary finite-dimensional A-modules. The Jordan-Hölder Theorem makes this precise, and we will prove this theorem, for modules, in the third chapter.

The nicest modules are the ones which are direct sums of simple modules; they are called 'semisimple'. An algebra for which all modules are semisimple is said to be a semisimple algebra. Semisimple modules and algebras are investigated in chapter 4. Fortunately, the structure of semisimple algebras is completely understood; it is described by the Wedderburn Theorem. This is a very important result on algebras, and it is used in many situations. We will give a proof when K = C, in chapter 5.

Maschke's Theorem characterizes precisely when algebras arising from group actions are semisimple. Namely, this is the case if and only if the characteristic of the field does not divide the order of the group. This is proved in chapter 6.

Maschke's Theorem has numerous important applications. Combining it with Wedderburn's theorem gives a complete description of the irreducible representations of G over C. This can be taken as the starting point for the study of group characters. Given a representation ρ : G → GL(n, C), the character associated to this representation is the function χ : G → C which takes g ∈ G to the trace of ρ(g), that is χ(g) = ∑i aii where ρ(g) = [aij]. Characters have very remarkable properties. For example, one can detect by just looking at the characters whether or not two representations are equivalent. Characters have applications in many other parts of mathematics (or even in other sciences).

The last chapter deals with general properties of characters, and gives some applications. This chapter is short since the subject is well covered by existing literature. Further material can be found for example in the books by Ledermann, or by James and Liebeck.

It is expected that you are familiar with first and second year basic linear algebra, such as elementary properties of vector spaces and linear maps. We also expect that you remember the group axioms. We make some conventions: we only consider rings with identity, and all vector spaces will be finite-dimensional, except for the polynomial ring K[X]. We mention some occasions when results also hold without assuming that the vector spaces are finite-dimensional.

Oxford, 2007 KE

1 Algebras

The main objects we want to study are algebras over some field. Roughly speaking, an algebra is a ring which is also a vector space, in which scalars commute with everything. We first recall the definition of a ring.

Definition 1.1 (Reminder)

A ring R is an abelian group (R, +) which in addition has another operation R × R → R, (r, s) → rs, called multiplication, such that for all r, s, t, x, y ∈ R:

(i) (Distributivity) r(x + y) = rx + ry and (r + s)x = rx + sx.
(ii) (Associativity) r(st) = (rs)t.

The ring is commutative if rs = sr for all r, s ∈ R. An identity of R is an element 1R ∈ R such that 1Rx = x1R = x for all x ∈ R.

Convention In this course, all rings are assumed to have an identity. (If no confusion is likely we write 1 for 1R.) Rings are usually not commutative. You have already seen various examples:

(1) The integers, Z. The rational numbers, Q, etc.

(2) The polynomials K[X] in one variable X, with coefficients in K. Similarly, the polynomials K[X, Y] in two commuting variables X and Y, with coefficients in K.

(3) The n × n matrices Mn(K), with entries in a field K, with respect to matrix multiplication and addition. This is not commutative for n ≥ 2.

(4) If R and S are rings, the direct product of R and S is defined as

R × S = {(r, s) : r ∈ R, s ∈ S}

where addition and multiplication are componentwise.


1.1 Algebras

The above examples (2) and (3) are not just rings but also vector spaces. There are many more rings which are vector spaces, and this has led to the definition of an algebra.

Definition 1.2

An algebra A over a field K (or a K-algebra) is a ring, with multiplication

(a, b) → a.b (a, b ∈ A)

which also is a K-vector space, with scalar multiplication

(λ, a) → λa (λ ∈ K, a ∈ A),

and where the scalar multiplication and the ring multiplication satisfy

λ(a.b) = (λa).b = a.(λb) (λ ∈ K, a, b ∈ A).

The algebra A is finite-dimensional if dimK(A) < ∞.

The condition relating scalar multiplication and ring multiplication says that scalars commute with everything. One might want to spell out the various axioms. We have already listed the ones for a ring. To say that A is a K-vector space means that for all a, b, c ∈ A and λ, µ ∈ K we have

(i) λ(b + c) = λb + λc;
(ii) (λ + µ)a = λa + µa;
(iii) (λµ)a = λ(µa);
(iv) 1Ka = a.

Properties (i) and (ii) are sometimes summarized by saying that 'scalar multiplication is bilinear'.

Strictly speaking, we should say that A is an associative algebra; the underlying multiplication in the ring is associative. There are other types of algebras, for example Lie algebras; but we will only consider associative algebras.

Since A is a vector space and 1A is a non-zero vector, it follows that the map λ → λ1A from K to A is 1-1. We will therefore view K as a subset of A, using this map as identification. This also means that it is not really necessary to have different notation for scalar multiplication and ring multiplication, so we will usually write ab instead of a.b; this should not cause confusion.


Example 1.3

Let A be the set of upper triangular 2 × 2-matrices over R, that is

A = { ( x  y
        0  z ) : x, y, z ∈ R }

with respect to matrix addition and multiplication. This is clearly a subspace of M2(R). Furthermore, it is a subring: since it is a subspace, we know that (A, +) is a subgroup of (M2(R), +); furthermore a product of two upper triangular matrices is again upper triangular; and the identity of M2(R) lies in A. So A is a ring. Scalar multiples of the identity matrix commute with all matrices, so the condition relating scalar multiplication and ring multiplication holds.

1.2 The multiplication

Suppose A is an algebra; what does one need to understand the multiplication? Take any vector space basis of A, say v1, . . . , vn. If we know the products vivj for all i, j then we know all products. Namely, take two arbitrary elements a, b ∈ A; then a = ∑i aivi and b = ∑j bjvj for ai, bj ∈ K, and

ab = (∑i aivi)(∑j bjvj) = ∑i,j aibj(vivj).

This is very useful; in practice one would aim to use a basis where the products vivj are 'easy'; for example one may take the identity 1A as one of the basis elements.
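To make this concrete, here is a minimal sketch in Python (the names and the coefficient-list representation are our own, not from the text) of multiplication determined by the products of basis elements. The table mult stores each product vivj as its coefficient vector.

    # Multiply two elements of an algebra, given the products of basis elements.
    # An element is a list of coefficients over a fixed basis v1, ..., vn;
    # mult[i][j] is the coefficient vector of the product vi * vj.
    def multiply(a, b, mult):
        n = len(a)
        product = [0] * n
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    product[k] += a[i] * b[j] * mult[i][j][k]
        return product

    # Example: C as a 2-dimensional R-algebra with basis 1, i and i*i = -1.
    mult = [[[1, 0], [0, 1]],     # 1*1 = 1,  1*i = i
            [[0, 1], [-1, 0]]]    # i*1 = i,  i*i = -1
    print(multiply([1, 2], [3, 4], mult))   # (1+2i)(3+4i) = [-5, 10]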

Example 1.4

In Example 1.3, you would probably choose as a basis the 'matrix units' in A, that is

E11 = ( 1  0        E12 = ( 0  1        E22 = ( 0  0
        0  0 ),             0  0 ),             0  1 ).

Then the products are easy to describe: Eii² = Eii for i = 1, 2, and E11E12 = E12 = E12E22, and E12E11 = 0 = E22E12. One might visualize the multiplication by a diagram like

E11 •  —E12→  • E22

1.3 Constructing algebras

You can also construct algebras using the fact that the multiplication is already determined by products of some basis. You might start with some vector space V, fix a basis, and just define the products of any two basis elements. However, you need to make sure that the associative law holds.

Exercise 1.1

Let V have basis v1, v2. Which of the following products satisfies the associative law? If so, does this define an algebra (with identity)? Here c, d ∈ K.

(i) v1v2 = v2v1 = v2, v1² = v1, v2² = cv1 + dv2.

(ii) v1v2 = v2v1 = v2, v1² = v2, v2² = v1 + v2.

Solution 1.5

Consider the definition in (i). The products in which v1 occurs tell us that v1 is the identity. So to check associativity, we only need to compare (v2v2)v2 and v2(v2v2), and one checks that these are equal. So definition (i) defines an algebra with identity.

Now consider the definition in (ii). We have (v1v1)v2 = v2v2 = v1 + v2 but v1(v1v2) = v1v2 = v2. These are not equal, so this multiplication does not satisfy the associative law.
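The check in the solution can be mechanized: a small sketch in Python (reusing the coefficient-list representation from 1.2; the function name is ours) that tests associativity on all triples of basis elements, which suffices because the product is bilinear.

    # Associativity of a product given by a table of basis products:
    # it is enough to test (vi*vj)*vk == vi*(vj*vk) on basis elements.
    def is_associative(mult):
        n = len(mult)
        def mul(a, b):
            out = [0] * n
            for i in range(n):
                for j in range(n):
                    for k in range(n):
                        out[k] += a[i] * b[j] * mult[i][j][k]
            return out
        basis = [[1 if t == s else 0 for t in range(n)] for s in range(n)]
        return all(mul(mul(u, v), w) == mul(u, mul(v, w))
                   for u in basis for v in basis for w in basis)

    # Exercise 1.1 with c = d = 1:
    mult_i  = [[[1, 0], [0, 1]], [[0, 1], [1, 1]]]   # (i):  v1 is the identity
    mult_ii = [[[0, 1], [0, 1]], [[0, 1], [1, 1]]]   # (ii): v1*v1 = v2
    print(is_associative(mult_i), is_associative(mult_ii))   # True False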

1.4 Some important examples

(1) The field K is a K-algebra.

(2) Polynomial rings K[X], or K[X,Y ], are K-algebras.

(3) The n × n matrices Mn(K), with respect to matrix multiplication and addition.

There are many more algebras consisting of matrices. For example, take the upper triangular matrices

Tn(K) := { [aij] ∈ Mn(K) : aij = 0 for i > j }.

They also form an algebra, with respect to matrix multiplication and addition.

(4) Let V be a K-vector space, and define

EndK(V) := { α : V → V : α is K-linear },

the set of K-linear transformations V → V. This is a K-algebra if one takes as multiplication the composition of maps, and where the addition and scalar multiplication are pointwise, as usual.

(5) The field C is also an algebra over R, of dimension 2. Similarly, the field Q(√2) is an algebra over Q, of dimension 2. More generally, if K is a subfield of a larger field L, then L is an algebra over K.

(6) The algebra H of quaternions is the 4-dimensional algebra over R with basis 1, i, j, k and where the multiplication is defined by

i² = j² = k² = −1,  ij = −ji = k,  jk = −kj = i,  ki = −ik = j.


This algebra is a division ring: the general element of H is of the form u = a + bi + cj + dk with a, b, c, d ∈ R. Let ū := a − bi − cj − dk; then uū = a² + b² + c² + d², which is ≠ 0 for u ≠ 0, and so one can write down the inverse of a non-zero element u, namely u⁻¹ = (uū)⁻¹ū.
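A quick numerical sanity check of the division-ring claim, as a sketch in Python (the helper names are ours): represent u = a + bi + cj + dk by its 4-tuple of coefficients and verify that uū = a² + b² + c² + d².

    # Quaternion product from i^2 = j^2 = k^2 = -1,
    # ij = -ji = k, jk = -kj = i, ki = -ik = j.
    def qmul(u, v):
        a, b, c, d = u
        e, f, g, h = v
        return (a*e - b*f - c*g - d*h,
                a*f + b*e + c*h - d*g,
                a*g - b*h + c*e + d*f,
                a*h + b*g - c*f + d*e)

    def conj(u):
        a, b, c, d = u
        return (a, -b, -c, -d)

    u = (1.0, 2.0, -1.0, 3.0)
    print(qmul(u, conj(u)))   # (15.0, 0.0, 0.0, 0.0), and 1+4+1+9 = 15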

(7) Let G be a group and K any field. The group algebra A = KG has underlying vector space with basis {vg : g ∈ G}. The multiplication on the basis elements is defined as

vgvh := vgh,

and it is extended to linear combinations. This defines an associative multiplication:

(vgvh)vx = vghvx = v(gh)x = vg(hx) = vgvhx = vg(vhvx).

The identity of KG is the element v1 where 1 = 1G is the identity of G. This algebra has dimension equal to the order of G.

Some authors write simply g for the vector in KG, instead of vg.
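For a concrete model, here is a sketch in Python of multiplication in KG for the cyclic group G = Z/nZ (our own representation: an element of KG is a dictionary mapping group elements to coefficients); the product is vgvh = vgh extended bilinearly.

    # Multiplication in the group algebra KG for G = Z/nZ.
    # An element of KG is a dict {g: coefficient} with g in {0, ..., n-1}.
    def kg_mul(u, v, n):
        out = {}
        for g, a in u.items():
            for h, b in v.items():
                gh = (g + h) % n                 # the group operation
                out[gh] = out.get(gh, 0) + a * b
        return out

    # In K(Z/3Z): (v0 + v1)(v1 - v2) = v1 - v2 + v2 - v0 = -v0 + v1
    print(kg_mul({0: 1, 1: 1}, {1: 1, 2: -1}, 3))   # {1: 1, 2: 0, 0: -1}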

(8) If A is any K-algebra, then the 'opposite algebra' Aop of A has underlying space A, and the multiplication in Aop is defined by

a ∗ b := ba (a, b ∈ A).

It is easy to check that this is again an algebra. Clearly (Aop)op ≅ A.

1.5 Subalgebras and ideals, factor rings

We recall some standard constructions which are completely analogous to those you have seen for commutative rings.

Example (3) in 1.4 suggests that we should define a 'subalgebra'. Suppose A is a K-algebra; then a subalgebra B of A is a subset of A which is an algebra with respect to the operations in A, that is:

Definition 1.6

Let B be a subset of A. Then B is a subalgebra if B is a subspace such that
(i) for all b1, b2 ∈ B, the product b1b2 belongs to B; and
(ii) the identity 1A belongs to B.

1.5.1 Examples

Let A = Mn(K). This has many important subalgebras.

(1) The upper triangular matrices Tn(K) form a subalgebra of A. This is not commutative for n ≥ 2; for example the matrix units E11 and E12 do not commute (see 1.4).

(2) Let α ∈ A, and define Aα to be the span of 1, α, α², . . .. That is, Aα is the space of all matrices which are polynomials in α. This is a subalgebra of A, and it is always commutative.

(3) The diagonal matrices Dn(K) form a subalgebra of A, of dimension n.

(4) The 'three-subspace algebra' is the subalgebra of M4(K) defined by

{ ( a1 b1 b2 b3
    0  a2 0  0
    0  0  a3 0
    0  0  0  a4 ) : ai, bj ∈ K }.

(5) There are also subalgebras such as

{ ( a b 0 0
    c d 0 0
    0 0 x y
    0 0 z u ) : a, b, c, d, x, y, z, u ∈ K } ⊂ M4(K).

Not every subring of Mn(K) is a subalgebra. For example, Mn(Z) is a subring of Mn(R) but it is not a subalgebra, since it is not an R-subspace.

Definition 1.7

If R is a ring (or an algebra) then I is a left ideal of R provided (I, +) is a subgroup of (R, +) such that rx ∈ I for all x ∈ I and r ∈ R.

Similarly I is a right ideal of R if (I, +) is a subgroup such that xr ∈ I for all x ∈ I and r ∈ R. I is an ideal if it is both a left ideal and a right ideal.

For example, if z ∈ R then Rz = {rz : r ∈ R} is a left ideal. For non-commutative rings, Rz need not be an ideal.

Exercise 1.2

Let R = Mn(K), n ≥ 1, and let z be the 'matrix unit' z = E11. Calculate Rz, and also zR. Are they equal?
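A sketch of the computation with numpy, for n = 2 (this previews the answer to the exercise): multiplying a typical r ∈ R against z = E11 on either side shows that Rz is supported on the first column while zR is supported on the first row.

    import numpy as np

    z = np.array([[1, 0],
                  [0, 0]])        # the matrix unit E11
    r = np.array([[1, 2],
                  [3, 4]])        # a typical element of R = M2(K)

    print(r @ z)   # [[1 0], [3 0]] -- rz keeps the first column of r
    print(z @ r)   # [[1 2], [0 0]] -- zr keeps the first row of r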

1.5.2 Factor rings

If I is an ideal of R, consider cosets r + I for r ∈ R. Recall that the cosets R/I form a ring, with + and · defined 'as usual' by

(r + I) + (s + I) := (r + s) + I,  (r + I)(s + I) := (rs) + I

for r, s ∈ R. When the ring is a K-algebra then we have some extra structure.


Lemma 1.8

Assume A is an algebra.
(a) Suppose I is a left (or right or 2-sided) ideal of A. Then I is a subspace of A.
(b) If I is an ideal of A then A/I is an algebra.

Proof

(a) By definition, (I, +) is a group. We need to show that if c ∈ K and x ∈ I then cx ∈ I. But c1A ∈ A, and

cx = c(1Ax) = (c1A)x ∈ I.

(b) We know already that the cosets A/I form a ring, and they also form a vector space (see A1). We only have to check that scalars commute with everything, but this property is inherited from A. Explicitly, let λ ∈ K and a, b ∈ A; then

(λ1A + I)[(a + I)(b + I)] = (λ1A + I)(ab + I)
= λ(ab) + I
= (λa)b + I = (λa + I)(b + I),

but since λ(ab) = a(λb), it is also equal to (a + I)(λb + I).

Example 1.9

Let A = K[X], the algebra of polynomials, and let I be a non-zero ideal of A; then there is some non-zero polynomial f(X) such that I = (f(X)), a principal ideal, and A/I = K[X]/(f(X)). Such a factor algebra is finite-dimensional; its dimension is the degree of f(X).

1.6 Algebra homomorphisms

Definition 1.10

Let A and B be K-algebras. A map φ : A → B is a K-algebra homomorphism if
(i) φ is K-linear,
(ii) φ(ab) = φ(a)φ(b) for all a, b ∈ A; and
(iii) φ(1A) = 1B.
The map φ is a K-algebra isomorphism if it is a K-algebra homomorphism and is in addition bijective.

Example 1.11

Let A be the algebra of upper triangular 2 × 2-matrices over R (see 1.3), and let B be the direct product of two copies of R, that is B = R × R. Define φ : A → B by

φ( ( a  b
     0  c ) ) := (a, c).

Then φ is linear; as a vector space map it is a projection onto some coordinates. You should check that φ preserves the multiplication and maps the identity of A to the identity of B.

When you write linear transformations of a vector space as matrices with respect to a fixed basis, you basically prove that the algebra of linear transformations is isomorphic to the algebra of square matrices. We recall the proof, partly as a reminder, but also since we will later need a generalization.

Lemma 1.12

Suppose V is an n-dimensional vector space over the field K. Then the algebras EndK(V) and Mn(K) are isomorphic.

Proof

We fix a K-basis of V. Suppose α is a linear transformation of V; let M(α) be the matrix of α with respect to the fixed basis. Then define a map

ψ : EndK(V) → Mn(K),  ψ(α) := M(α).

One checks that ψ is K-linear. One also checks that it preserves the multiplication, that is M(β)M(α) = M(β ∘ α). [This is done in first year linear algebra.] The map ψ is also one-to-one: suppose M(α) = 0; then by definition of the matrix, α maps the fixed basis to zero, but then α = 0. The map ψ is surjective, because every n × n matrix defines a linear transformation of V.

In general, homomorphisms and isomorphisms are very important for comparing different algebras.

Exercise 1.3

Suppose φ : A → B is an isomorphism of K-algebras. Show that then

(i) If a ∈ A then a² = 0 if and only if φ(a²) = 0.

(ii) a ∈ A is a zero divisor if and only if φ(a) is a zero divisor.

(iii) A is a field if and only if B is a field.

1.6.1 Some common algebra homomorphisms

Some algebra homomorphisms occur very often; we list some here. You are encouraged to check these in detail.


(1) Let I be an ideal of A; then the 'canonical map' π : A → A/I, defined as π(a) := a + I, is an algebra homomorphism.

(2) Substitution is an algebra homomorphism whenever it makes sense. Let B be any K-algebra and b ∈ B. Define ψ : K[X] → B by

ψ(f) = f(b)  (f ∈ K[X]).

(3) Let A = A1 × A2, the direct product of two algebras. Then the projection π1 : A → A1 defined by π1(a1, a2) := a1 is an algebra homomorphism, and similarly the projection π2 from A onto A2 is an algebra homomorphism.

Note however that the inclusion map a1 → (a1, 0) is not an algebra homomorphism, as it does not take the identity of A1 to the identity of A.

Exactly as for rings we have an Isomorphism Theorem.

Theorem 1.13 (Isomorphism Theorem)

Let A and B be K-algebras, and suppose φ : A → B is a K-algebra homomorphism. Then ker(φ) is an ideal of A, im(φ) is a subalgebra of B and

A/ker(φ) ≅ im(φ).

Proof

Almost everything follows from the isomorphism theorem for rings. We only need to check that im(φ) is actually a subalgebra of B. Since φ is linear, the image im(φ) is a subspace, and we know it is a subring containing the identity of B, and therefore it is a subalgebra.

Example 1.14

Suppose A = A1 × A2, the direct product of two K-algebras. Then as we have seen, the projection π1 : A → A1 is an algebra homomorphism, and it is onto. By the Isomorphism Theorem we have A/ker(π1) ≅ A1. Furthermore, the definition of π1 gives that

ker(π1) = {(0, a2) : a2 ∈ A2} = 0 × A2.

This also shows that 0 × A2 is an ideal of A.

1.7 Some algebras of small dimensions

One might like to know how many K-algebras there are of a given dimension, up to isomorphism, and if possible have a complete description. Looking at small dimensions, we observe that any 1-dimensional K-algebra is isomorphic to K. Namely, it must contain the scalar multiples of the identity, and this is then the whole algebra, by dimension.

We consider now algebras of dimension 2 over R. The construction in 1.9 produces many examples. Namely, take any polynomial f(X) ∈ R[X] of degree 2, and take A := R[X]/(f(X)). We ask when two such algebras are isomorphic. We would also want to know whether there are others.

We will now classify 2-dimensional algebras over R, up to isomorphism. Take such an algebra A. We can choose a basis which contains the identity of A, say {1A, b}.

Then b² must be a linear combination of 1A and b, so there are scalars c, d ∈ R such that b² = c1A + db. We consider the polynomial X² − dX − c, and we complete squares:

X² − dX − c = (X − d/2)² − (c + (d/2)²).

Let β′ := b − (d/2)1A; this is an element in the algebra, and we also set r := c + (d/2)², which is a scalar. Then we have

β′² = r1A.

Then set

β := (1/√|r|) β′  if r ≠ 0,   β := β′  if r = 0.

Then the set {1A, β} also is a basis of A, and we have β² = 0 or β² = ±1A. This brings A into only three possible forms. We write Aj for the algebra in which β² = j1A, for j = 0, 1, −1. We want to show that no two of these three algebras are isomorphic. We use Exercise 1.3.

(1) The algebra A0 has a non-zero element with square zero. By Exercise 1.3, any algebra isomorphic to A0 must have such an element.

(2) The algebra A1 does not have a non-zero element whose square is zero: suppose α² = 0 for α ∈ A. Write α = x1A + yβ with x, y ∈ R; then

α² = (x² + y²)1A + 2xyβ = 0,

and it follows that 2xy = 0 and x² + y² = 0, since 1A and β are linearly independent. Now x, y ∈ R and we deduce x = y = 0, and therefore α = 0. This shows that the algebra A1 is not isomorphic to A0.

(3) Consider the algebra A−1. This occurs in nature; namely C is such an R-algebra, taking β = i.

In fact we can see directly that A−1 is a field, from

(c + dβ)(c − dβ) = (c² + d²)1A,

and if c + dβ ≠ 0 we can write down its inverse with respect to multiplication. Clearly A0 is not a field, and A1 also is not a field; it has zero divisors: (β − 1A)(β + 1A) = 0.

So A−1 is not isomorphic to A0 or A1.

We can list a 'canonical representative' for each of the three algebras. Consider the algebra R[X]/(X²); this is 2-dimensional and is generated by a non-zero element with square zero. So it is isomorphic to A0. Next, consider R[X]/(X² − 1); this has a generator with square equal to 1, so it is isomorphic to A1. Similarly R[X]/(X² + 1) is isomorphic to A−1. To summarize, we have proved:

Lemma 1.15

Up to isomorphism, there are precisely three 2-dimensional algebras over R. Any 2-dimensional algebra over R is isomorphic to precisely one of

R[X]/(X²),  R[X]/(X² − 1),  R[X]/(X² + 1).

One might ask what happens for different fields. There are infinitely many non-isomorphic 2-dimensional algebras over Q, and there are only two 2-dimensional algebras over C (see exercises).
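One can realize the three algebras concretely inside M2(R): a sketch with numpy (our own model, not from the text), taking β to be the matrix [[0, j], [1, 0]], whose square is j·I.

    import numpy as np

    def beta(j):
        # The span of I and beta(j) is a 2-dimensional subalgebra of M2(R)
        # with beta(j) @ beta(j) = j * I, so it models the algebra A_j.
        return np.array([[0, j],
                         [1, 0]])

    for j in (0, 1, -1):
        b = beta(j)
        print(j, (b @ b == j * np.eye(2)).all())   # True for each j

    # Zero divisors in A_1: (beta - 1)(beta + 1) = beta^2 - 1 = 0.
    b, I = beta(1), np.eye(2)
    print((b - I) @ (b + I))                       # the zero matrix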

Definition 1.16

The K-algebra A is generated by a set S = {α1, . . . , αk} if A is the K-span of 1 together with all 'monomials' αi1 · · · αir for αiν ∈ S.

Sometimes it is useful to have a small set of generators, for practical purposes. For example, the polynomial algebra A = K[X] is generated by X. Or, let A = KG be the group algebra. If G = 〈g〉 is cyclic, then A is generated by vg.

1.8 Finite-dimensional algebras A which can be generated by one element

Suppose A is a finite-dimensional algebra that is generated by one element, α say. Then A is spanned by the set {1, α, α², . . .}. [The algebra Aα in 1.5.1(2) is an example.] There is a polynomial of smallest degree, m(X), such that m(α) = 0. This is the same argument as in Linear Algebra, when we prove that a linear map has a minimal polynomial. Namely, since A is finite-dimensional, the elements αʲ cannot all be linearly independent. Let n be smallest such that 1, α, . . . , αⁿ are linearly dependent. Then write

αⁿ = ∑i ciαⁱ  (summing over 0 ≤ i ≤ n − 1)

for some ci ∈ K. Then, as in linear algebra, if m(X) = Xⁿ − ∑i ciXⁱ then m(X) is the unique monic polynomial of smallest degree such that m(α) = 0.

Define ψ : K[X] → A by substituting α,

ψ(f) = f(α).


This is a K-algebra homomorphism (see 1.6.1). It is surjective, by definition of A. The Isomorphism Theorem shows that

K[X]/ker(ψ) ≅ A.

Moreover, ker(ψ) = (m(X)); namely, as in linear algebra, we have f(α) = 0 if and only if m(X) divides f(X). This also shows that the dimension of K[X]/(m(X)) is equal to the degree of m(X) (which you have probably seen).

Example 1.17

Let A = KG where G = 〈g〉, the cyclic group of order 3. Then A is generated by α = vg. The previous discussion shows that A ≅ K[X]/(m(X)) where m(X) is the minimal polynomial of vg. We have

(vg)³ = vg³ = v1 = 1A,

and therefore the minimal polynomial of vg divides X³ − 1. We know that dim A = 3, and hence m(X) must have degree 3; it follows that m(X) = X³ − 1.
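The argument can be traced numerically: a sketch with numpy (our own setup) that finds the degree of the minimal polynomial of α = vg, acting on KG in the basis v1, vg, vg² as the 3-cycle permutation matrix, by looking for the first linear dependence among the powers of α.

    import numpy as np

    # vg acting on KG for G cyclic of order 3, in the basis v1, vg, vg^2.
    alpha = np.array([[0, 0, 1],
                      [1, 0, 0],
                      [0, 1, 0]])

    # Find the smallest n with 1, alpha, ..., alpha^n linearly dependent.
    powers = [np.linalg.matrix_power(alpha, i).flatten() for i in range(4)]
    for n in range(1, 4):
        if np.linalg.matrix_rank(np.stack(powers[:n + 1])) <= n:
            print("degree of minimal polynomial:", n)   # prints 3
            break

    print(np.linalg.matrix_power(alpha, 3))   # the identity, so m(X) divides X^3 - 1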

EXERCISES

1.4. Let A = B = C and φ(a) := ā, the map which takes a to its complex conjugate. Verify that

(i) the map φ is a ring homomorphism;

(ii) A is a 2-dimensional R-algebra, and the map φ is an R-algebra homomorphism;

(iii) We know that A and B are C-algebras. Show that φ is not a C-algebra homomorphism.

1.5. Let A be the set of matrices

A = { (  a  b
        −b  a ) : a, b ∈ R }.

Show that A is an R-subalgebra of M2(R). [Which of the three algebras in Lemma 1.15 is it?]

1.6. Let K = Z/2Z, the field with two elements. Let A be the set of matrices

A = { ( a    b
        b  a + b ) : a, b ∈ K }.

Show that A is a subalgebra of M2(K). [Note that A is generated by

( 0  1
  1  1 ).

Find its square.]

1.7. Show that the algebra in 1.5.1(5) is isomorphic to the direct product M2(K) × M2(K).

1.8. Consider the three-subspace algebra S in 1.5.1(4). Show that there is a surjective algebra homomorphism from S onto the direct product K × K × K × K.

1.9. Let S be the three-subspace algebra in 1.5.1(4). Find a description of S which is similar to the algebra in Example 1.4.

1.10. (Continuation) Find the opposite algebra Sop. Is it isomorphic to S?

1.11. Show that there are precisely two 2-dimensional algebras over C, up to isomorphism.

1.12. Consider 2-dimensional algebras over Q. Show that the algebras Q[X]/(X² − p) and Q[X]/(X² − q) are not isomorphic if p and q are distinct primes.

Solution 1.18

We fix the basis {1A, α} of A = Q[X]/(X² − p), where 1A is the coset of 1 and α is the coset of X; that is, α² = p1A. Similarly we fix the basis of B = Q[X]/(X² − q) consisting of 1B and β, with β² = q1B.

Suppose ψ : A → B is an algebra isomorphism; then

ψ(1A) = 1B,  ψ(α) = c1B + dβ

where c, d ∈ Q and d ≠ 0 (if d = 0 then ψ(α − c1A) = 0 and ψ would not be injective). We must have

p1B = pψ(1A) = ψ(p1A) = ψ(α²) = ψ(α)²
= (c1B + dβ)² = c²1B + d²β² + 2cdβ
= (c² + qd²)1B + 2cdβ.

But 1B and β are linearly independent (√q ∉ Q), so 2cd = 0 and p = c² + qd².

Since d ≠ 0 we must have c = 0, and then p = qd². Writing d = s/t in lowest terms gives pt² = qs², which is impossible for distinct primes p and q (compare the exponent of p on both sides). This shows that A and B are not isomorphic.

2 Representations, Modules

We want to study actions of groups and algebras on vector spaces. If V is a vector space, then EndK(V) is the algebra of linear transformations of V, and this contains GL(V), the group of invertible linear transformations of V. When V is n-dimensional and we use matrices with respect to some fixed basis, then EndK(V) becomes Mn(K), and GL(V) becomes GLn(K).

Convention We consistently write all maps on the left, following the practice used in Linear Algebra. To be consistent, we then also let groups act on the left of vector spaces.

Definition 2.1

Let G be a group, and let V be some vector space. A (linear) representation of G on V is a group homomorphism

ρ : G → GL(V).

The representation has degree n where n = dim V. If we write linear transformations as matrices with respect to a fixed basis, we get a group homomorphism ρ : G → GL(n, K). This is sometimes called a matrix representation of G.

Definition 2.2

Let A be a K-algebra and V be a vector space. A representation of A on V is a K-algebra homomorphism

θ : A −→ EndK(V ).


The representation has degree n where n = dim V. If we write linear transformations as matrices with respect to a fixed basis, we get an algebra homomorphism

θ : A → Mn(K).

This is sometimes called a matrix representation of A.

The definitions of a representation as above also make sense when V is not finite-dimensional. But recall that in these notes, all vector spaces are assumed to be finite-dimensional (except for K[X]).

2.0.1 Examples

(1) Let A be a subalgebra of EndK(V) where V is a vector space. Then the inclusion map (a ↦ a) from A to EndK(V) is clearly an algebra homomorphism, hence is a representation. Similarly, if A is a subalgebra of Mn(K) then the inclusion map is a representation of A.

For example, take A = EndK(V), or Mn(K). Or take A as in Exercise 1.5 and V = R².

(2) Let V = A and define θ : A → EndK(A) by

θ(a) = [x → ax].

Then θ(a) ∈ EndK(A). This is known as the 'left regular representation'. You should check that θ is an algebra homomorphism.

(3) Let A = K[X], the algebra of polynomials. Take a vector space V and a linear transformation α of V. Define

θ : A → EndK(V),  θ(f) := f(α),

that is, substitute α into f. This is a representation of A. [In chapter 1 we have seen that θ is an algebra homomorphism.] For each α there is such a representation, and we write θ = θα.

Lemma 2.3

Every representation of the algebra A = K[X] is of the form θα for some linear transformation α.

Proof

Suppose φ : A → EndK(V) is a representation. Then set α := φ(X). This is a linear transformation of V. Furthermore, if f ∈ A, say f = ∑i aiXⁱ, then we have

φ(f) = ∑i aiφ(X)ⁱ = ∑i aiαⁱ = f(α),

where the first equality holds since φ is an algebra homomorphism. So φ = θα.


2.0.2 Examples

(1) Suppose Ω is a finite G-set. Take a vector space V with a basis labelled by the elements of Ω, say V = span{bω : ω ∈ Ω}. We will call V = KΩ later. Define

ρ : G → GL(V)

as follows. For g ∈ G, we take for ρ(g) the linear map which takes bω to bg(ω). One checks that ρ is a group homomorphism. This comes from the fact that G acts on Ω. Note that we write maps on the left.

For example let n = 3 and G = S3. If we take Ω = {1, 2, 3} and if g is the transposition permuting 1 and 2 then ρ(g) has matrix

ρ(g) = ( 0  1  0
         1  0  0
         0  0  1 ).
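A sketch generating these permutation matrices for G = S3 (our own code, using itertools), and checking the homomorphism property ρ(gh) = ρ(g)ρ(h); since maps are written on the left, gh means 'first h, then g'.

    import itertools
    import numpy as np

    def perm_matrix(g):
        # g is a tuple with g[i] the image of i (0-indexed, so the text's
        # transposition of 1 and 2 is (1, 0, 2)); column i of the matrix
        # is e_{g(i)}, so the matrix sends b_i to b_{g(i)}.
        n = len(g)
        m = np.zeros((n, n), dtype=int)
        for i in range(n):
            m[g[i], i] = 1
        return m

    S3 = list(itertools.permutations(range(3)))
    compose = lambda g, h: tuple(g[h[i]] for i in range(3))  # (g o h)(i) = g(h(i))

    print(all((perm_matrix(compose(g, h)) == perm_matrix(g) @ perm_matrix(h)).all()
              for g in S3 for h in S3))        # True
    print(perm_matrix((1, 0, 2)))              # the matrix rho(g) displayed above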

(2) As a special case of (1), take Ω = G, where the action is by left multiplication. The corresponding representation is the left regular representation. That is, for g ∈ G, ρ(g) is the linear map with

vx → vgx.

Its degree is the order of G.

(3) Let G be any group and take V = K. Define ρ : G → GL(K) by

ρ(g) = 1K  (g ∈ G).

This is a representation of G, called the trivial representation.

(4) Let G be the group of symmetries of a square. As a group, this is isomorphic to the dihedral group of order 8. Draw the square in the plane, such that the center is at the origin, and such that the corners are at (±1, ±1). Let V = R². For g ∈ G, let ρ(g) be the linear map of V which induces the symmetry given by g. We write ρ(g) as a matrix with respect to the standard basis.

Let σ be the rotation by π/2 (anti-clockwise); then

ρ(σ) = ( 0  −1
         1   0 ).

Suppose τ ∈ G is the reflection taking (1, 1) to (1, −1); then

ρ(τ) = ( 1   0
         0  −1 ).

The group G can be generated by σ and τ. Since we want to define a group homomorphism, these two matrices already determine the action of all elements of G. But we must check that this really defines a group homomorphism.

The group G has a presentation

〈σ, τ : σ⁴ = 1, τ² = 1, τ⁻¹στ = σ⁻¹〉.

All we have to do is to check that the matrices ρ(σ) and ρ(τ) satisfy the relations defining G, and this is an easy calculation. Then we have shown that we have a well-defined representation

ρ : G → GL2(R)

which takes σ, τ to the matrices ρ(σ) and ρ(τ) defined above.
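The 'easy calculation' can also be done numerically; a sketch with numpy:

    import numpy as np

    sigma = np.array([[0, -1],
                      [1,  0]])     # rotation by pi/2
    tau   = np.array([[1,  0],
                      [0, -1]])     # reflection

    I = np.eye(2, dtype=int)
    print((np.linalg.matrix_power(sigma, 4) == I).all())        # sigma^4 = 1
    print((tau @ tau == I).all())                                # tau^2 = 1
    print((np.linalg.inv(tau) @ sigma @ tau
           == np.linalg.matrix_power(sigma, 3)).all())           # tau^-1 sigma tau = sigma^-1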

Exercise 2.1

Show that the trivial representation (3) can be viewed as a special case of (1).

Remark 2.4

(a) Let A = KG, the group algebra of a finite group G. Suppose we have a representation of A, say θ : A → EndK(V). For g ∈ G, the element vg ∈ A is invertible, and hence θ(vg) also is invertible and therefore lies in GL(V). Moreover, if g, h ∈ G then

θ(vgvh) = θ(vg)θ(vh),  θ(v1) = IdV

since θ preserves the multiplication. So we can define

ρ : G → GL(V),  ρ(g) := θ(vg)

and this is a group homomorphism. This shows that any representation of the group algebra A = KG automatically gives a group representation of G.

(b) Conversely, suppose V is a vector space over K and ρ : G → GL(V) is a representation of G. We view G as a basis of the group algebra A = KG, and therefore we get a linear map θ : A → EndK(V) by setting θ(∑g agvg) := ∑g agρ(g). One checks that this is an algebra homomorphism. This shows that every representation of G also gives a representation of the group algebra KG.

Definition 2.5

Given two representations θ1, θ2 of the algebra A, where θ1 : A → EndK(V1) and θ2 : A → EndK(V2). Then θ1 and θ2 are said to be equivalent if there is a vector space isomorphism ψ : V1 → V2 such that

ψ⁻¹ ∘ θ2(a) ∘ ψ = θ1(a)

for all a ∈ A.

This means that θ1(a) and θ2(a) should be simultaneously similar, for all a ∈ A. In the special case when A = K[X] we therefore have the following.


Lemma 2.6

Let A = K[X]; then two representations θα and θβ are equivalent if and only if the linear transformations α and β are similar.

Take any ideal I of the algebra A, and let B = A/I. Since the canonical map A → A/I is an algebra map, we can view representations of the factor algebra as representations of the original algebra. More precisely:

Lemma 2.7

Let A be any algebra, and I an ideal of A. Let B := A/I. Then the representations of B are precisely the representations of A which map I to zero.

Proof

Let γ : A → EndK(V) be a K-algebra homomorphism such that γ(x) = 0 for all x ∈ I. Then define

γ̄ : B → EndK(V)

by γ̄(a + I) := γ(a). This is well-defined: if a + I = a′ + I, then a − a′ ∈ I and γ(a − a′) = 0, and therefore γ(a) = γ(a′). One checks that γ̄ is an algebra homomorphism; this is straightforward.

Conversely, let θ̄ : B → EndK(V) be a representation of B. Define θ : A → EndK(V) by

θ(a) := θ̄(a + I).

That is, θ is the composition of θ̄ and the canonical map π : A → A/I, and therefore θ is an algebra homomorphism.

Definition 2.8

Given a representation θ̄ of B where B = A/I, the corresponding representation θ of A as in the Lemma is called the inflation of θ̄.

2.1 Modules

If we have a representation of the algebra A on the vector space V, then A 'acts on V', and V together with this action is said to be an A-module. This is analogous to the case of groups acting on sets. [Given a group homomorphism from G to Sym(Ω), then Ω together with this action is a G-set.]

Modules can be defined for arbitrary rings, not just algebras. They are very common (and important); for example modules over Z occur frequently. The basic concepts are the same; therefore we give the general definition.


Definition 2.9

Let R be a ring. An R-module is an abelian group (M, +) together with a map

R × M → M,  (r, m) → rm  (r ∈ R, m ∈ M)

such that for all r, s ∈ R and all m, n ∈ M:
(i) (r + s)m = rm + sm;
(ii) r(m + n) = rm + rn;
(iii) r(sm) = (rs)m;
(iv) 1Rm = m.

One can think of a module as a generalization of a vector space: a vector space is an abelian group M together with a scalar multiplication, that is a map K × M → M, satisfying the usual axioms. If one replaces K by a ring R, then one gets an R-module. When R = K, that is R is a field, then R-modules are therefore exactly the same as K-vector spaces.

The above defines a left R-module, and one defines right R-modules analogously. When R is not commutative the behaviour of left modules and of right modules can be different; to go into details is beyond this course (see however an exercise in chapter 3).

We will consider only left modules, since our rings are K-algebras and scalars are usually written on the left.

Example 2.10

Take any left ideal I of R; then I is an R-module. First, (I, +) is a group, by definition. The properties (i) to (iv) hold for all m, n, r, s ∈ R, and then in particular for m, n ∈ I and r, s ∈ R.

In this course, we will focus on the case when the ring is a K-algebra. Some of the general properties are the same for rings.

Convention We write R and M if we think of an R-module for a general ring, and we write A and V if we work with an A-module where A is a K-algebra.

Lemma 2.11

Let A be a K-algebra. Then any A-module is automatically a K-vector space.

Proof

Recall that we view K as a subset of A, so this gives us a map K × V → V, and it satisfies the vector space axioms; they are just (i) to (iv) in 2.9.


2.1.1 Relating A-modules and representations of A

The following shows that 'modules' and 'representations' of an algebra are the same. This is a formal matter; nothing is 'done' to the modules or representations, and it only describes two different views of the same object. It is convenient as it often saves one a lot of checking, and it gives twice as much information.

Lemma 2.12

Let A be a K-algebra.
(a) Suppose V is an A-module. Then we have a representation of A on V,

θ : A → EndK(V),  θ(a) = [v → av]  (a ∈ A, v ∈ V).

(b) Suppose σ : A → EndK(V) is a representation. Then V becomes an A-module by setting

av := σ(a)(v)  (a ∈ A, v ∈ V).

Proof

(a) The map θ(a) lies in EndK(V): it is a linear transformation of V,

θ(a)(λ1v1 + λ2v2) = a(λ1v1 + λ2v2) = a(λ1v1) + a(λ2v2)
= λ1(av1) + λ2(av2)
= λ1θ(a)(v1) + λ2θ(a)(v2).

To show that it is an algebra homomorphism:

θ(ab)(v) = (ab)v = a(bv)
= θ(a)[bv] = θ(a)[θ(b)(v)]
= [θ(a) ∘ θ(b)](v),

which holds for all v ∈ V; hence θ(ab) = θ(a)θ(b). Similarly one checks that θ(1A) = IdV.

(b) It is straightforward to check the axioms for an A-module. For example,

(a1 + a2)v = σ(a1 + a2)(v)
= [σ(a1) + σ(a2)](v)
= σ(a1)(v) + σ(a2)(v) = a1v + a2v.

We leave the rest as an exercise.


2.1.2 Examples

(1) When A = K, then A-modules are the same as K-vector spaces.

(2) The 'natural module'. Assume A is a subalgebra of EndK(V). Then V is an A-module, where the action of A is just applying the linear maps to the vectors. That is,

(a, v) → a(v)  (a ∈ A, v ∈ V).

This is the action where the representation is the inclusion map A ⊆ EndK(V). So V is an A-module.

Alternatively, one can check the module axioms: Let a, b ∈ A and v, w ∈ V , then

(i) (a+ b)(v) = a(v) + b(v)

by the definition of the sum of two maps; and

(ii) a(v + w) = a(v) + a(w)

since a is linear; and

(iii) (ab)(v) = a(b(v))

since the multiplication in EndK(V ) is defined to be composition of maps; and

(iv) 1A(v) = v.

(3) The natural module also has a matrix version. Let A be a subalgebra of Mn(K), and let V := (Kⁿ)ᵗ, the space of column vectors. Then V is an A-module if one takes as action the multiplication of a matrix with a column vector.

(4) Permutation modules. Let A = KG where G is a finite group. Suppose Ω is any G-set, and let

V = KΩ = Span{bω : ω ∈ Ω}

as in 2.0.2(1). Define an action of A on KΩ by setting

vgbω := bg(ω),

and extend to linear combinations in A. This defines an A-module.

To see this, take the group representation in 2.0.2(1) and view it as a representation of KG (as in 2.4(b)), then use 2.12.

Alternatively, you can check the axioms.

(5) The 'trivial module'. Let A = KG where G is a finite group. The trivial module has underlying vector space K, and the action of A is defined by

vgx = x  (x ∈ K, g ∈ G).

Take the trivial representation in 2.0.2(3), view it as a representation of KG, and use 2.12. Or else, check axioms.


Lemma 2.13

Let A be an algebra and B = A/I where I is an ideal of A. Then the B-modules are precisely the A-modules V on which I acts as zero, and where the actions are related by

(a + I)v = av  (a ∈ A, v ∈ V).

Proof

This is a reformulation of what we called 'inflation'. Apply 2.7 and use 2.12.

2.2 K[X]-modules

Take a vector space V and some linear transformation α : V → V. We have defined the representation θα from A := K[X] to EndK(V) by θα(f) = f(α). This gives that V is an A-module by setting

fv := f(α)(v)  (f ∈ A, v ∈ V).

We denote this K[X]-module by Vα. Since every representation of A is of the form θα for some α, every K[X]-module is isomorphic to Vα for some α.
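A sketch of this action with numpy (the function name and coefficient convention are ours): a polynomial f acts on v ∈ Vα as f(α)(v), i.e. by evaluating f at the matrix α.

    import numpy as np

    def act(f, alpha, v):
        # f is a coefficient list [c0, c1, ..., cn] for c0 + c1*X + ... + cn*X^n;
        # the module action of K[X] on V_alpha is f . v = f(alpha)(v).
        result = np.zeros_like(v)
        power = np.eye(alpha.shape[0], dtype=v.dtype)
        for c in f:
            result = result + c * (power @ v)
            power = power @ alpha
        return result

    alpha = np.array([[0, -1], [1, 0]])   # alpha^2 = -identity
    v = np.array([1, 0])
    print(act([1, 0, 1], alpha, v))       # (1 + X^2).v = v + alpha^2(v) = [0 0]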

The following relates K[X]-modules with modules for factor algebras K[X]/I. This is important since for I ≠ 0, these factor algebras are finite-dimensional, and many finite-dimensional algebras occurring 'in nature' are of this form.

Lemma 2.14

Let A = K[X]/(f) where f is some non-zero polynomial in K[X]. Then the A-modules can be viewed as the K[X]-modules Vα which satisfy f(α) = 0.

Proof

This is a special case of 2.13 with I = (f). Note that on a K[X]-module Vα the ideal I acts by fv = f(α)(v); so I maps Vα to zero if and only if f(α) = 0.

Note that we only change the point of view, and we 'don't do anything' to the module.

One advantage of modules as compared with representations is that this perspective naturally leads to new concepts. We introduce some of these now.


2.3 Submodules, factor modules

Definition 2.15

Let R be a ring and M some R-module. A submodule of M is a subgroup (U, +) which is closed under the action of R, that is, ru ∈ U for all r ∈ R and u ∈ U.

2.3.1 Examples

(1) The left ideals I of R are precisely the submodules of the R-module R.

(2) Suppose M1 and M2 are R-modules. Then the direct product

M1 × M2 := {(m1, m2) : mi ∈ Mi}

is an R-module if one defines the action of R componentwise, that is

r(m1,m2) := (rm1, rm2) (r ∈ R,mi ∈Mi).

(3) Consider the 2-dimensional R-algebra A0 at the end of Chapter 1. The 1-dimensional subspace spanned by β is a submodule of A0.

On the other hand, if you look at the algebra A1 in the same section, then the subspace spanned by β is not a submodule. (But the space spanned by β − 1A is a submodule.)

Exercise 2.2

Let A = A1 × A2, the product of two K-algebras. Suppose M is some A-module. Define

M1 := {(1A1, 0)m : m ∈ M},  M2 := {(0, 1A2)m : m ∈ M}.

Show that M1 and M2 are submodules of M and that M = M1 ⊕M2.

You might note that we have seen direct products of modules, and also direct sums. The products are needed to construct a new module from given ones which are not related. On the other hand, if we write M = M1 ⊕ M2 then we always implicitly mean that M is a given module and M1, M2 are submodules. [Some books distinguish these two constructions by calling them 'external' and 'internal' direct sums.]

Exercise 2.3

Let A = Mn(K), the algebra of n × n matrices over K, and consider the A-module A. We define Ci ⊂ A to be the set of matrices which are zero outside the i-th column. Show that Ci is a submodule of A, and that

A = C1 ⊕ C2 ⊕ . . . ⊕ Cn.


Suppose U is a submodule of an R-module M; you know that the cosets M/U := {m + U : m ∈ M} form an abelian group. In the case when M is an R-module and U is a submodule, the set of cosets has the structure of an R-module.

Definition 2.16

Let M be an R-module and U a submodule of M. Then the cosets M/U form an R-module, if one defines

r(m + U) := rm + U  (r ∈ R, m ∈ M).

This is called the factor module.

One has to check that the action is well-defined: if m + U = m′ + U then m − m′ ∈ U and then r(m − m′) ∈ U as well. But r(m − m′) = rm − rm′ and therefore rm + U = rm′ + U. The axioms are inherited from M.

Example 2.17

Let M = R as an R-module; then for any d ∈ R, M has the submodule I = Rd, and a factor module R/Rd. When R = Z, you will have seen these. In general, a module of the form Rd = {rd : r ∈ R} is said to be a cyclic R-module.

2.4 Module homomorphisms

We have said that a 'module' is a generalization of a vector space where scalars are replaced by elements in the ring. Accordingly, R-module homomorphisms are the analog of linear maps of vector spaces.

Definition 2.18

Suppose R is a ring, and M and N are R-modules. A map φ : M → N is an R-module homomorphism if for all m, m1, m2 ∈ M and r ∈ R we have

(i) φ(m1 + m2) = φ(m1) + φ(m2); and
(ii) φ(rm) = rφ(m).

An isomorphism of R-modules is an R-module homomorphism which is also bijective.

The set of all R-module homomorphisms from M to N is denoted by

HomR(M,N).

An R-module homomorphism from M to M is called an R-endomorphism of M, and the set of all R-endomorphisms of M is denoted by

EndR(M).


In the case when the ring is a K-algebra A, this definition also says that φ must be K-linear. Namely, we view λ ∈ K as an element of A, by taking λ1A, and then we have for λ, µ ∈ K that

φ(λm1 + µm2) = φ((λ1A)m1 + (µ1A)m2)
= (λ1A)φ(m1) + (µ1A)φ(m2)
= λφ(m1) + µφ(m2).

Exercise 2.4

Suppose V is an A-module where A is a K-algebra. The set EndA(V) of all A-module homomorphisms V → V is, by what we just noted, a subset of EndK(V). Check that it is actually a subalgebra.

Example 2.19

Consider A = K[X]. The algebra is generated by X as an algebra, so an A-module homomorphism between A-modules is just a linear map that commutes with the action of X. We have described the A-modules (see 2.2); let Vα and Wβ be A-modules; an A-module homomorphism is a linear map θ such that θ(Xv) = Xθ(v) (v ∈ Vα). On Vα, the element X acts by α, and on Wβ, the action of X is given by β. So this means

θ(α(v)) = β(θ(v))  (v ∈ Vα).

This holds for all v, so we have θ ∘ α = β ∘ θ.

In particular Vα ≅ Wβ if and only if there is an invertible linear map θ such that

θ⁻¹ ∘ β ∘ θ = α.

Exercise 2.5

Suppose A is a K-algebra, and assume V and W are A-modules. Show that V ≅ W as A-modules if and only if the corresponding representations are equivalent.

2.4.1 Some common module homomorphisms

(1) Suppose U is a submodule of an R-module M; then the 'canonical map' π : M → M/U, defined by π(m) = m + U, is an R-module homomorphism.

(2) Suppose M is an R-module, and m ∈ M. Then we always have an R-module homomorphism φ : R → M, given by

φ(r) := rm  (r ∈ R).

This is a very common homomorphism; perhaps we might call it a 'multiplication homomorphism'. There is a general version of this, which also is very common. Namely, suppose m1, m2, . . . , mn are given elements in M. Now take the R-module Rⁿ := R × R × . . . × R, and define

ψ : Rⁿ → M,  ψ(r1, r2, . . . , rn) := r1m1 + r2m2 + . . . + rnmn

(r1, . . . , rn ∈ R). You should check that this is indeed an R-module homomorphism.

(3) Suppose M = M1 × M2, the direct product of two R-modules. Then the projection maps πi onto the coordinates are R-module homomorphisms. Similarly, the inclusion maps

ι1 : M1 → M,  ι1(m1) := (m1, 0),

and similarly ι2, are R-module homomorphisms. You should check this as well.

(4) Similarly, if M = U ⊕ V, the direct sum of two submodules U and V, then the projection maps and the inclusion maps are R-module homomorphisms.

Theorem 2.20 (Isomorphism theorems)

(a) Suppose φ : M → N is an R-module homomorphism. Then ker(φ) is a submodule of M and im(φ) is a submodule of N, and

M/ker(φ) ≅ im(φ).

(b) Suppose U, V are submodules of M; then so are U + V and U ∩ V, and

(U + V)/U ≅ V/(U ∩ V).

(c) Suppose U ⊆ V ⊆ M are submodules; then V/U is a submodule of M/U, and

(M/U)/(V/U) ≅ M/V.

Proof

(a) Since φ is in particular a homomorphism of the additive groups, we know that ker(φ) is a subgroup of M, so we just have to check that it is R-invariant. Let m ∈ ker(φ) and r ∈ R; then

φ(rm) = rφ(m) = r.0 = 0

and rm ∈ ker(φ). Similarly one checks that im(φ) is a submodule of N. The isomorphism theorem for abelian groups shows that the map

ψ : M/ker(φ) → im(φ),  ψ(m + ker(φ)) = φ(m)

is well-defined and is an isomorphism of abelian groups. One now checks that this map is in fact an R-module homomorphism.

Parts (b) and (c) hold for abelian groups, and one just has to check that the maps used in that case are also compatible with the action of R. For example, in (b), the general element of (U + V)/U can be written as v + U for v ∈ V. Then the map is defined as

ψ : (U + V)/U → V/(U ∩ V),  ψ(v + U) = v + U ∩ V.

If r ∈ R then

ψ(r(v + U)) = ψ((rv) + U) = rv + U ∩ V = r(v + U ∩ V) = rψ(v + U).

2.5 The submodule correspondence

Suppose M is an R-module and U is a submodule. Then there is a 1-1 correspondence, inclusion preserving, between

(i) the submodules of M/U, and
(ii) the submodules of M that contain U.

Namely, given a submodule X̄ of M/U, define

X := {m ∈ M : m + U ∈ X̄}.

This is a submodule of M and it contains U:
(a) First, X is a subgroup of M. It contains 0, and if x1, x2 ∈ X then

(x1 ± x2) + U = (x1 + U) ± (x2 + U),

and this lies in X̄ since X̄ is a subgroup of M/U.
(b) X is a submodule: let r ∈ R and x ∈ X; then

rx + U = r(x + U) ∈ X̄

since X̄ is an R-submodule of M/U, and therefore rx ∈ X.

Conversely, given a submodule V of M such that U ⊆ V, then V/U := {v + U : v ∈ V} is a submodule of M/U, as we have seen. We leave it as an exercise to show that these correspondences preserve inclusions.

To get the 1-1 correspondence, we must check that the two constructions are mutually inverse. First, starting from X̄ with associated X, we have

X/U = {x + U : x ∈ X} = {x + U : x + U ∈ X̄} = X̄.

Second, starting from V, the submodule of M corresponding to V/U is

{x ∈ M : x + U ∈ V/U} = {x ∈ M : x + U = v + U for some v ∈ V}.

Now x + U = v + U if and only if x − v ∈ U, that is, x − v = u ∈ U. But U ⊆ V by assumption, so x − v = u ∈ U if and only if x ∈ V. Hence the submodule corresponding to V/U is exactly V.


2.6 Tensor products

This is not part of the B2 syllabus

Definition 2.21

Suppose V and W are vector spaces over some field K with bases v1, . . . , vm and w1, . . . , wn respectively. For each i, j with 1 ≤ i ≤ m and 1 ≤ j ≤ n, we introduce a symbol vi ⊗ wj. The tensor product space V ⊗ W is defined to be the mn-dimensional vector space over K with a basis given by

{vi ⊗ wj : 1 ≤ i ≤ m, 1 ≤ j ≤ n}.

Thus V ⊗ W consists of all expressions of the form

∑i,j λij(vi ⊗ wj)  (λij ∈ K).

For v ∈ V and w ∈ W with v = ∑i λivi and w = ∑j µjwj (with λi, µj ∈ K) we define v ⊗ w by

v ⊗ w := ∑i,j λiµj(vi ⊗ wj).

For example,

(2v1 − v2)⊗ (w1 + w2) = 2(v1 ⊗ w1)− v2 ⊗ w1 + 2(v1 ⊗ w2)− (v2 ⊗ w2).

Note that not every element of V ⊗W is of the form v ⊗ w.

Exercise 2.6

Show that v1 ⊗ w1 + v2 ⊗ w2 cannot be expressed in the form v ⊗ w for v ∈ V and w ∈ W.

Proposition 2.22

If e1, . . . , em is any basis of V and f1, . . . , fn is any basis of W then

{ei ⊗ fj : 1 ≤ i ≤ m, 1 ≤ j ≤ n}

is a basis for V ⊗W .

Proof

Write vi = ∑k ckiek and wj = ∑l dljfl with cki and dlj ∈ K. Then

vi ⊗ wj = ∑k,l ckidlj(ek ⊗ fl).

This shows that the mn elements ei ⊗ fj span V ⊗ W. But dim(V ⊗ W) = mn and therefore they form a basis.


Now suppose G is a group, and ρV : G → GL(V) and ρW : G → GL(W) are representations of G. Then we have a representation of G on V ⊗ W. In the following we use the bases of V, W and V ⊗ W as above.

Proposition 2.23

Let g ∈ G and define ρ : G→ GL(V ⊗W ) by

ρ(g)(vi ⊗ wj) := ρV (g)(vi)⊗ ρW (g)(wj).

Then ρ is a representation of G.

Proof

We must show that ρ(gh) = ρ(g) ∘ ρ(h) for all g, h ∈ G. One way is first to check that for all v ∈ V and w ∈ W we have

ρ(g)(v ⊗ w) = ρV(g)(v) ⊗ ρW(g)(w).

When this is done, then we get

ρ(gh)(vi ⊗ wj) = ρV(gh)(vi) ⊗ ρW(gh)(wj)
= ρV(g)[ρV(h)(vi)] ⊗ ρW(g)[ρW(h)(wj)]
= ρ(g)[ρV(h)(vi) ⊗ ρW(h)(wj)]
= ρ(g)[ρ(h)(vi ⊗ wj)].
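In the chosen bases, ρ(g) on V ⊗ W is the Kronecker product of the matrices of ρV(g) and ρW(g), so the homomorphism property can be checked numerically; a sketch reusing the dihedral matrices from 2.0.2(4):

    import numpy as np

    sigma = np.array([[0, -1], [1, 0]])
    tau   = np.array([[1, 0], [0, -1]])

    # On V (x) V, a group element acts by the Kronecker product of its matrix
    # with itself; kron satisfies kron(AB, AB) = kron(A, A) @ kron(B, B).
    rho = lambda g: np.kron(g, g)

    print((rho(sigma @ tau) == rho(sigma) @ rho(tau)).all())   # True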

EXERCISES

2.7. Suppose M is an A-module with submodules U, V and W.

(a) Check that U + V and U ∩ V are submodules of M. Show by means of an example that it is not in general the case that U ∩ (V + W) = (U ∩ V) + (U ∩ W). [Try A = R and let U, V, W be subspaces of R².]

(b) Show that U \ V is never a submodule. Show also that U ∪ V is a submodule if and only if U ⊆ V or V ⊆ U.

2.8. Suppose M = U × V, the direct product of A-modules U and V. Check that Ū := {(u, 0) : u ∈ U} is a submodule of M, isomorphic to U. Write down a similar submodule V̄ of M isomorphic to V, and show that M = Ū ⊕ V̄, the direct sum of submodules.

2.9. Let A = KG be the group algebra where G is a finite group. The trivial A-module is defined to be the 1-dimensional module with underlying space K, with action

vgx = x  (g ∈ G, x ∈ K).

Show that the corresponding representation ρ : A → EndK(K) satisfies ρ(vg) = IdK for all g ∈ G. Check that this is indeed a representation.

2.10. Let Ω be a transitive G-set, and let V = KΩ be the corresponding permutation module. Let ζ := ∑ω∈Ω bω. Show that vgζ = ζ for all g ∈ G, and deduce that Kζ is a submodule of V and that it is isomorphic to the trivial A-module.

2.11. (Continuation) Show also that Kζ is the unique submodule of V which is isomorphic to the trivial module. Is this still true when Ω is not transitive?

2.12. Let A = K[X]/(Xⁿ), and let V = A, as an A-module. By applying the submodule correspondence, or otherwise, find all submodules of V. Deduce that if V1 and V2 are submodules, then either V1 ⊆ V2, or V2 ⊆ V1.

2.13. Let A = CG be the group algebra of the dihedral group of order 10,

G = 〈σ, τ : σ⁵ = 1, τ² = 1, τστ⁻¹ = σ⁻¹〉.

Suppose ω is some 5-th root of 1. Show that the matrices

ρ(σ) = ( ω   0
         0  ω⁻¹ ),   ρ(τ) = ( 0  1
                              1  0 )

satisfy the defining relations for G, hence give rise to a group representation ρ : G → GL(2, C), and an A-module.

2.14. (Continuation) Let G and ρ be as above, and view ρ : G → GL(V) where V = C². Consider the tensor product V ⊗ V as a G-module. Does this have a 1-dimensional submodule? That is, does there exist some ζ = ∑ cij(vi ⊗ vj) ∈ V ⊗ V which is a common eigenvector for all group elements?

3 The Jordan-Hölder Theorem

Let A be a finite-dimensional K-algebra. We have seen that every A-module V is also a K-vector space. This allows us to apply results from linear algebra to A-modules.

Definition 3.1

Suppose V is an A-module. Then V is simple (or irreducible) if V is non-zero, and if it does not have any submodules other than 0 and V.

For example, take a module V such that dimK(V) = 1; then V must be simple. It does not have any subspace except 0 and V, and therefore it cannot have a submodule except 0 or V. The converse is not true: simple modules can have arbitrarily large dimensions, or can even be infinite-dimensional (see an exercise).

Example 3.2

Let A = Mn(K), and take V to be the natural module, the space of column vectors V = (Kⁿ)ᵗ.

We claim that V is simple: we have to show that if U is a non-zero submodule of V then actually U = V. So take such a U, and take a non-zero element u ∈ U, say u = (x1, x2, . . . , xn)ᵗ.

The algebra A contains all 'matrix units' Eij, and one checks that Eiju has xj in the i-th coordinate, and all other coordinates are zero.


Since u ≠ 0, for some j we know that xj is non-zero. So for this value of j, (xj⁻¹)Eiju is the basis vector εi of V. But (xj⁻¹)Eij lies in A and therefore εi ∈ U. But i is arbitrary, so U contains a basis for V and therefore U = V.
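The argument is constructive and can be traced in code; a sketch with numpy (our own helper E builds the matrix units): starting from any non-zero column vector u, each standard basis vector is produced as (1/xj)Eiju.

    import numpy as np

    def E(i, j, n):
        # The matrix unit E_ij: a single 1 in row i, column j.
        m = np.zeros((n, n))
        m[i, j] = 1
        return m

    n = 3
    u = np.array([0.0, 2.0, -1.0])      # any non-zero element of the module
    j = np.flatnonzero(u)[0]            # a coordinate with u[j] != 0

    for i in range(n):
        print((E(i, j, n) @ u) / u[j])  # the standard basis vector e_i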

The method we used to show that V is simple is more general. If m ∈ V where V is some module, let Am := {am : a ∈ A}. This is a submodule of V; you should check this.

Lemma 3.3

Let V be an A-module. Then V is simple if and only if for each 0 ≠ m ∈ V we have Am = V.

Proof

⇒ Suppose V is simple, and take 0 ≠ m ∈ V. We know that Am is a submodule, and it contains m (= 1Am), and so Am is non-zero and therefore Am = V.

⇐ Suppose 0 ≠ U is a submodule of V; then there is some non-zero m ∈ U. Since U is a submodule, we have Am ⊆ U, but by the hypothesis,

V = Am ⊆ U ⊆ V,

and hence U = V.

Example 3.4

Let A = RG, where G is the symmetry group of the square, see chapter 2. We have seen there is the representation ρ : G → GL2(R) such that

ρ(σ) = ( 0  −1        ρ(τ) = ( 1   0
         1   0 ),              0  −1 ).

The corresponding A-module is V = R², and for g ∈ G, the basis element vg acts on V through

vg (x1, x2)ᵗ = ρ(g) (x1, x2)ᵗ.

We claim that V is simple. Suppose, for a contradiction, that V has a submodule 0 ≠ U ⊂ V with U ≠ V. Then U is 1-dimensional, say U is spanned by u. But then vσu = λu for some λ ∈ R, which means that u ∈ R² is an eigenvector of ρ(σ). But the matrix ρ(σ) does not have a real eigenvalue, a contradiction.

We will also need to understand when a factor module is simple. This is done by using the submodule correspondence.


Lemma 3.5

Suppose V is an A-module and U is a submodule of V. Then the module V/U is simple ⇐⇒ U is a maximal submodule of V. [That is, if U ⊆ W ⊆ V then W = U or W = V.]

Proof

Apply the submodule correspondence.

Definition 3.6

Suppose V is an A-module. A composition series of V is a finite chain of submodules

0 = V0 ⊂ V1 ⊂ V2 ⊂ . . . ⊂ Vn = V

such that the factor modules Vi/Vi−1 are simple, for 1 ≤ i ≤ n. The length of the composition series is n, the number of quotients.

3.1 Examples

(1) If V is simple then 0 = V0 ⊂ V1 = V is a composition series.

(2) Given a composition series as in the definition, if Vk is one of the terms, then Vk 'inherits' the composition series

0 = V0 ⊂ V1 ⊂ . . . ⊂ Vk.

(3) Let K = R and take A to be the 2-dimensional algebra over R with basis 1A, β such that β² = 0 (the algebra A0 of 1.7). Take V := A. If V1 is the space spanned by β, then V1 is a submodule. [It is a subspace, and it is invariant under the action of the basis of A.] Since V1 and V/V1 are 1-dimensional, they are simple. Hence V has composition series

0 = V0 ⊂ V1 ⊂ V2 = V.

(4) Let A = Mn(K) and V = A. In Exercise ?? we have seen that A = C1 ⊕C2 ⊕ . . .⊕Cn,a direct sum of simple A-modules. So we have a finite chain of submodules

0 ⊂ C1 ⊂ C1 ⊕ C2 ⊂ . . . ⊂ C1 ⊕ . . .⊕ Cn−1 ⊂ A

Each factor module is simple: By the isomorphism theorem

(C1 ⊕ . . . ⊕ Ck)/(C1 ⊕ . . . ⊕ Ck−1) ≅ Ck/(Ck ∩ (C1 ⊕ . . . ⊕ Ck−1)) = Ck/0 = Ck.

So this chain is a composition series.
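For instance, when n = 2 the submodule C1 consists of the matrices \begin{pmatrix} a & 0 \\ c & 0 \end{pmatrix}, and the chain is 0 ⊂ C1 ⊂ M2(K); both factors C1 and A/C1 ≅ C2 are isomorphic to the natural module.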


Lemma 3.7

Assume V is a non-zero finite-dimensional A-module. Then V has a composition series.

Proof

This is proved by induction on dim V. If dim V = 1 then V is simple, and we are done by 3.1(1).

So assume now that dim V > 1. If V is simple then, by 3.1(1), V has a composition series. Otherwise, V has proper non-zero submodules. Take a proper submodule U ⊂ V of largest possible dimension. Then U is a maximal submodule of V, so V/U must be simple, by the Submodule Correspondence. Since dim U < dim V, we can apply the inductive hypothesis. So U has a composition series, say

0 = U0 ⊂ U1 ⊂ U2 ⊂ . . . ⊂ Uk = U.

This gives us a composition series of V,

0 = U0 ⊂ U1 ⊂ U2 ⊂ . . . ⊂ Uk = U ⊂ V.

In general, a module can have many composition series (we will see examples). The Jordan-Holder Theorem shows that any two composition series have the same length, and the same factors up to isomorphism counted with multiplicities:

Theorem 3.8 (Jordan-Holder Theorem)

Suppose V has two composition series

(I) 0 ⊂ V1 ⊂ V2 ⊂ . . . ⊂ Vn−1 ⊂ Vn = V

(II) 0 ⊂W1 ⊂W2 ⊂ . . . ⊂Wm−1 ⊂Wm = V.

Then n = m, and there is a permutation σ of {1, 2, . . . , n} such that Vi/Vi−1 ≅ Wσ(i)/Wσ(i)−1 for each i.

The simple factor modules Vi/Vi−1 are called the composition factors of V. By this theorem, they only depend on V, and not on the composition series.

Example 3.9

Let A = Mn(K) and V = A. With the notation as in Chapter 2.1, V has submodules

V1 := C1,

V2 := C1 ⊕ C2,

. . .

Vi := C1 ⊕ C2 ⊕ . . .⊕ Ci.


These submodules form a series

0 = V0 ⊂ V1 ⊂ . . . ⊂ Vn−1 ⊂ Vn = V.

This is a composition series, as we have seen. The module V also has submodules

W1 := Cn,

W2 := Cn ⊕ Cn−1,

. . .

Wj := Cn ⊕ Cn−1 ⊕ . . .⊕ Cn−j+1,

and this gives us a series of submodules

0 = W0 ⊂W1 ⊂ . . . ⊂Wn−1 ⊂Wn = V.

This also is a composition series, since Wj/Wj−1 ≅ Cn−j+1, which is simple. Both composition series have length n, and if we take the permutation

σ = (1 n)(2 n−1) . . .

then Wσ(i)/Wσ(i)−1 ≅ Ci ≅ Vi/Vi−1.
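Concretely, for n = 2 the two series are 0 ⊂ C1 ⊂ M2(K) and 0 ⊂ C2 ⊂ M2(K), with factors C1, C2 and C2, C1 respectively, and σ = (1 2) matches them up.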

For the proof of the Jordan-Holder Theorem, we need to compare two given composition series. The case when Vn−1 is different from Wm−1 requires more work, and we will use the following.

Lemma 3.10

With the notation as in the Jordan-Holder Theorem, suppose Vn−1 ≠ Wm−1. Let D := Vn−1 ∩ Wm−1. Then

(a) Vn−1/D ≅ V/Wm−1, and hence it is simple;
(b) Wm−1/D ≅ V/Vn−1, and hence it is simple.

Proof

We first show that Vn−1 + Wm−1 = V. We have Vn−1 ⊆ Vn−1 + Wm−1 ⊆ V, and since V/Vn−1 is simple, Vn−1 is a maximal submodule of V. So either Vn−1 + Wm−1 = V, or Vn−1 + Wm−1 = Vn−1. Assume (for a contradiction) that Vn−1 + Wm−1 = Vn−1; then we have

Wm−1 ⊆ Vn−1 +Wm−1 ⊆ Vn−1 ⊂ V

But Wm−1 also is a maximal submodule of V, therefore Wm−1 = Vn−1, a contradiction to the hypothesis. Therefore Vn−1 + Wm−1 = V, as stated.


Now we apply an isomorphism theorem, and get

V/Wm−1 = (Vn−1 + Wm−1)/Wm−1 ≅ Vn−1/(Vn−1 ∩ Wm−1) = Vn−1/D.

Similarly one shows that V/Vn−1 ≅ Wm−1/D.

Proof (of the Jordan-Holder Theorem)

Given two composition series (I) and (II) as above, we say that they are equivalent provided n = m and there is a permutation σ ∈ Sym(n) such that Vi/Vi−1 ≅ Wσ(i)/Wσ(i)−1. In this proof we will abbreviate 'composition series' by CS.

We use induction on n. Assume first n = 1. Then V is simple, so W1 = V (since there is no non-zero submodule except V); and m = 1.

Now suppose n > 1. The inductive hypothesis is that the theorem holds for modules which have a composition series of length ≤ n − 1.

(a) Assume first that Vn−1 = Wm−1 =: U, say. Then the module U inherits a CS of length n − 1 from (I). By the inductive hypothesis, any two composition series of U have length n − 1. So the CS of U inherited from (II), which has length m − 1, also has length n − 1, and therefore m − 1 = n − 1 and m = n. Moreover, by the inductive hypothesis, there is a permutation σ ∈ Sym(n − 1) such that Vi/Vi−1 ≅ Wσ(i)/Wσ(i)−1. We also have Vn/Vn−1 = Wn/Wn−1. So if we view σ as an element of Sym(n) fixing n then we have the required permutation.

(b) Now assume Vn−1 ≠ Wm−1, and define D := Vn−1 ∩ Wm−1. Take a composition series of D, say

0 = D0 ⊂ D1 ⊂ . . . ⊂ Dt = D

Then V has composition series

(III) 0 = D0 ⊂ D1 ⊂ . . . ⊂ Dt = D ⊂ Vn−1 ⊂ V

(IV ) 0 = D0 ⊂ D1 ⊂ . . . ⊂ Dt = D ⊂Wm−1 ⊂ V

since, by 3.10, the quotients Vn−1/D and Wm−1/D are simple. Moreover, by Lemma 3.10, we know that (III) and (IV) are equivalent: take the permutation σ = (t+1 t+2), which exchanges the two top factors.

Next, we claim that m = n. The module Vn−1 inherits a CS of length n − 1 from (I). So by the inductive hypothesis, all CS of Vn−1 have length n − 1. But the CS which is inherited from (III) has length t + 1, and hence n − 1 = t + 1. Now, the module Wm−1 inherits from (IV) a CS of length t + 1 = n − 1, so by the inductive hypothesis all CS of Wm−1 have length n − 1. In particular the CS inherited from (II) does, and therefore m − 1 = n − 1 and m = n.

The series (I) and (III) are equivalent: By the inductive hypothesis (applied to Vn−1, which inherits a CS from each of (I) and (III)), there is a permutation of n − 1 letters, γ say, such that

Di/Di−1 ≅ Vγ(i)/Vγ(i)−1 for i ≤ n − 2, and Vn−1/D ≅ Vγ(n−1)/Vγ(n−1)−1.


We view γ as a permutation of n letters fixing n, and then also V/Vn−1 = Vn/Vn−1 ≅ Vγ(n)/Vγ(n)−1, which proves the claim.

Similarly one shows that (II) and (IV) are equivalent. We have already seen that (III) and (IV) are equivalent as well, and it follows that (I) and (II) are equivalent. This completes the proof.

Lemma 3.11

Suppose V is a finite-dimensional A-module, and N is a submodule of V. Then there is a composition series of V in which N is one of the terms.

Proof

Take a composition series of N , say

0 = N0 ⊂ N1 ⊂ . . . ⊂ Nt = N

Now take a composition series of V/N. By the Submodule Correspondence, we can write such a composition series as

0 = U0/N ⊂ U1/N ⊂ . . . ⊂ Us/N = V/N

since any submodule of V/N is of the form U/N where U is a submodule of V containing N. Moreover, by the submodule correspondence, Ui/N ⊆ Ui+1/N if and only if Ui ⊆ Ui+1. So we have U0 = N and Us = V, and we get a series of submodules of V

(∗) 0 = N0 ⊂ N1 ⊂ . . . ⊂ Nt = N ⊂ U1 ⊂ U2 ⊂ . . . ⊂ Us = V.

We know that Ni/Ni−1 is simple. Moreover, by an isomorphism theorem we have

Ui/Ui−1 ≅ (Ui/N)/(Ui−1/N)

which is simple. This proves that (*) is a composition series of V .

3.2 Examples

(1) Let A = Mn(K) and V = A. We have constructed a composition series in 3.9, and we have seen that all composition factors of A are isomorphic to the natural module.

(2) This example shows that non-isomorphic composition factors can occur.

Let K = R and A = R × C, the direct product of R-algebras. Let S1 := {(r, 0) : r ∈ R} ⊂ A; this is a left ideal of A and therefore a submodule. Let also S2 := {(0, z) : z ∈ C} ⊂ A; this also is a left ideal of A and therefore a submodule.


Consider the series

(∗) 0 ⊂ S1 ⊂ A

We claim that A/S1 ≅ S2. Define ψ : A → S2 to be the projection onto the second coordinate. By ??, this is an A-module homomorphism, it is clearly onto, and it has kernel S1; the claim follows by the Isomorphism Theorem.

To show that (*) is a composition series, we must verify that S1 and S2 are simple. This is clear for S1 since it is 1-dimensional. To prove it for S2 we apply 3.3.

Take 0 ≠ (0, w) ∈ S2; we must show that the submodule A(0, w) generated by (0, w) is equal to S2.

Since w is a non-zero complex number, (0, w^{-1}) lies in A and therefore (0, w^{-1})(0, w) = (0, 1) is contained in the submodule generated by (0, w), and it follows that this submodule is S2.

Exercise 3.1

Let A = T2(K), the algebra of upper triangular 2 × 2 matrices. Find a compositionseries of the A-module A. Verify that non-isomorphic composition factors occur.

For a finite-dimensional algebra A, the composition series of V = A are very important because they actually give information on all simple A-modules. We will show this now; it is based on the following:

Lemma 3.12

Suppose S is a simple A-module, so that S = As for some non-zero s ∈ S (see 3.3). Let

I := Ann(s) = {a ∈ A : as = 0}.

Then I is a left ideal of A, and S ≅ A/I as A-modules.

Proof

Define a map

ψ : A → S, ψ(a) := as.

This is an A-module homomorphism, by ??. It is onto since S = As, and hence by the Isomorphism Theorem we have

A/Ker(ψ) ≅ Im(ψ) = S.

By definition, the kernel of ψ is Ann(s). Being a kernel, it is a submodule of A, that is, a left ideal.

Corollary 3.13

Let A be a finite-dimensional algebra. Then every simple A-module occurs as a composition factor of A, up to isomorphism. Hence there are only finitely many simple A-modules (up to isomorphism).


Proof

By Lemma 3.12 we know that if S is a simple A-module then S ≅ A/I for some left ideal I. Now, I is then a submodule, so by 3.11 there is some composition series of A in which I is one of the terms. So A/I is a composition factor of A.

Example 3.14

Let A = Mn(C); this has a unique simple module, up to isomorphism, namely the natural module (C^n)^t of column vectors. This follows from 3.1 and 3.13.

3.3 Simple A1 × A2-modules

Let A = A1 × A2, the direct product of two K-algebras. Recall from 1.14 that A1 and A2 are factor algebras of A (taking the projections). Recall that we can 'inflate' modules from factor algebras to the large algebra, see 2.7. So we inflate the simple modules for A1 and A2, and we get A-modules. These inflations are still simple as A-modules, roughly speaking since we 'don't do anything'. But you should check this.

Now consider a simple A-module S. We apply exercise ??; this shows that S = S1 ⊕ S2 where Si is a module for the algebra Ai. But S is simple, so S = S1 and S2 = 0, or S = S2 and S1 = 0. Say S = S1; then from the definition of S1 in exercise ?? we see that elements of the ideal {0} × A2 of A annihilate S1. This shows that S1 really is the inflation of a module for A1, and it is still simple as such. Hence every simple A-module can be viewed as a module for A1 or for A2 (not both). We have now proved the following.

Lemma 3.15

The simple A-modules are precisely the inflations of the simple A1-modules, together with the inflations of the simple A2-modules to A.

Example 3.16

Let A = A1 × A2 where A1 = M2(K) and A2 = M3(K). By 3.14, the natural 2-dimensional module of column vectors is the only simple A1-module, up to isomorphism, and similarly the natural 3-dimensional module is the only simple A2-module, up to isomorphism. By the lemma, the algebra A has precisely two simple modules, up to isomorphism. The action on the 2-dimensional module is explicitly

(a1, a2)v = a1v (v ∈ (K^2)^t, ai ∈ Ai).


Remark 3.17

Let R be any ring. The definition of 'simple' makes sense equally well for R-modules. For general rings, however, the concept of 'simple' modules is far less important, even when the module in question is 'small', such as being generated by one element.

For example, take R = Z and M = R. This does not have a simple submodule. Namely, any non-zero submodule U of M is a left ideal, hence of the form U = Za for some 0 ≠ a ∈ Z. Then for example Z(2a) is a proper non-zero submodule of U, so U is not simple.
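In fact, iterating this gives a strictly decreasing chain of submodules

Z ⊃ 2Z ⊃ 4Z ⊃ 8Z ⊃ . . .

which never reaches a simple term; in particular, Z has no composition series as a Z-module.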

EXERCISES

3.2. Find a composition series for the 3-subspace algebra.

3.3. This extends example 3.4. Let A = CG where G is the group of symmetries of the square. Let V = C^2; this is an A-module if we take the representation as in example 3.4 (or example 3.2). Show that V is simple.

3.4. Suppose V and W are A-modules and φ : V →W is an A-module isomorphism.

(a) Show that V is simple if and only if W is simple.

(b) Suppose 0 = V0 ⊂ V1 ⊂ . . . ⊂ Vn = V is a composition series of V. Show that then

0 ⊂ φ(V1) ⊂ . . . ⊂ φ(Vn) = W

is a composition series of W .

3.5. Suppose M is an A-module and that U and V are maximal submodules of M. Suppose U ≠ V; show that then U + V = M.

3.6. Let A be the (3-dimensional) algebra of all upper triangular 2 × 2 matrices over a field K. Find a composition series of the A-module A. Show that A has precisely two simple modules, up to isomorphism.

3.7. Let A be the matrix ring

A = \begin{pmatrix} C & C \\ 0 & R \end{pmatrix}.

[That is, A is the subring of M2(C) of upper triangular matrices with (2,2)-entry in R.] Show that A is an R-algebra. What is its dimension (over R)? Let

I = \begin{pmatrix} 0 & C \\ 0 & 0 \end{pmatrix}.

(a) Show that I is a simple left ideal of A. It is also a right ideal. Is I simple as a right ideal?

(b) Show that A/I is isomorphic to C⊕ R, as R-algebras.


[A simple left ideal of A is a left ideal I such that there are no left ideals J of A such that 0 ≠ J ⊂ I and J ≠ I.]

3.8. Let V be a 2-dimensional vector space over K, and let A be a subalgebra of EndK(V). Recall that V is then an A-module (by applying linear transformations to vectors). Show that V is not simple as an A-module if and only if there is some 0 ≠ v ∈ V which is an eigenvector for all α ∈ A.

3.9. Let A be the R-algebra in question 3.7. Find a composition series of the A-module A. Find also all simple A-modules (up to isomorphism).

3.10. Let A = K[X]/I where I = (f(X)).

(a) Let f(X) = X^4 − 1 and K = R. Find all simple A-modules (up to isomorphism).

(b) Let f(X) = X^3 − 2 and K = Q. Find all simple A-modules (up to isomorphism).

3.11. Let V be an A-module where A is a finite-dimensional algebra, and let M and N be maximal submodules of V such that M ≠ N. Prove that

(i) M +N = V , and

(ii) V/M ≅ N/(M ∩ N) and V/N ≅ M/(M ∩ N).

Suppose now that M ∩ N = 0. Deduce that then M and N are simple, and hence that V has two composition series

0 ⊂M ⊂ V, and 0 ⊂ N ⊂ V.

Write down the permutation as in the Jordan-Holder Theorem.

3.12. Let A = K[X]/(X^n) and V = A. Show that V has a unique composition series, which has length n. [You might use 2.12.]

3.13. Find the simple modules for the algebra A = K[X]/(X^2) × K[X]/(X^3).

3.14. Let A = M2(R), and V = A. The following will show that A has infinitely many different composition series.

(a) Let e ∈ A be a projection, that is, e² = e. Show that then Ae := {ae : a ∈ A} is a submodule of A. Show that if e ≠ 0, 1 then

0 ⊂ Ae ⊂ A

is a composition series of A. [You may apply the Jordan-Holder Theorem].

(b) For λ ∈ R, check that eλ is a projection, where

eλ = \begin{pmatrix} 1 & λ \\ 0 & 0 \end{pmatrix}.

(c) Show that for λ ≠ µ, the modules Aeλ and Aeµ are distinct. Hence deduce that V has infinitely many different composition series.


3.15. Suppose A = CG where G is the dihedral group of order 10, as in Exercise 2.13. Suppose V is a simple A-module.

(a) Prove that dim V ≤ 2. [Show that if w is an eigenvector of the linear map x ↦ vσx, then so is vτw, and Span{w, vτw} is an A-submodule of V.]

(b) Show that if dim V = 1, then vτ has eigenvalue ±1, and v_{σ²} has eigenvalue 1 on V. Hence find all 1-dimensional simple A-modules.

3.16. Let V be any vector space over K, and let A = EndK(V). Show that V is a simple A-module. [The interesting case is when V is infinite-dimensional.]

4 Simple and semisimple modules, semisimple algebras

Let A be a finite-dimensional K-algebra. The Jordan-Holder Theorem shows that simple modules are 'building blocks' for arbitrary finite-dimensional A-modules. So it is important to understand simple modules. The first question one might ask is: given two simple A-modules, how can we find out whether or not they are isomorphic? This is answered by Schur's lemma, which we will now present. In fact Schur's lemma has many applications (and we'll give a few).

Lemma 4.1 (Schur’s Lemma)

Suppose S and T are simple A-modules and φ : S → T is an A-module homomorphism. Then either φ = 0, or φ is an isomorphism.

Suppose S = T and K = C. If dim S < ∞ then φ = λ IdS for some scalar λ ∈ C.

Proof

Suppose φ is non-zero. The kernel ker(φ) is an A-submodule of S, but S is simple. Since φ ≠ 0, ker(φ) ≠ S. So ker(φ) = 0 and φ is 1-1.

The image im(φ) is a submodule of T, and T is simple. Since φ ≠ 0, we know im(φ) ≠ 0 and therefore im(φ) = T. So φ is onto, and we have proved that φ is an isomorphism.

For the last part, we know that over C, φ has an eigenvalue, λ say (here we use dim S < ∞). That is, there is some non-zero v ∈ S such that φ(v) = λv. The map λ IdS is also an A-module homomorphism, and so is φ − λ IdS. The kernel of φ − λ IdS is a submodule and is non-zero (it contains v). It follows that ker(φ − λ IdS) = S, so that we have φ = λ IdS.


This is very general; in the first part S and T need not be finite-dimensional. It has many applications. One is that elements in the centre of some algebra act as scalars on simple modules when A is a C-algebra.

The centre of A is defined to be

Z(A) := {z ∈ A : za = az for all a ∈ A}.

Lemma 4.2

Let A be a C-algebra, and S a simple A-module. If z ∈ Z(A) then there is some λ = λz ∈ C such that zx = λz x for all x ∈ S.

Proof

The linear map ρ : S → S defined by ρ(s) = zs is an A-module homomorphism (this is easy to check). But S is simple, and A is an algebra over C. So by Schur's Lemma there is some λ ∈ C such that ρ(s) = λs for all s ∈ S, that is, zs = λs.

Corollary 4.3

Assume A is a commutative algebra over C. Then every simple A-module is 1-dimensional.

Proof

Let S be a simple A-module. We have A = Z(A), so by the previous lemma, every a ∈ A acts as scalar multiplication on S. Take 0 ≠ v ∈ S; then for every a ∈ A, av belongs to the span of v. So the span of v is a non-zero submodule, so it must be equal to S.

The assumption that the field is C is important. For example, consider the 2-dimensional algebra A over R with basis {1A, β} where β² = −1A. Take V = A; this is a simple A-module:

Suppose, for a contradiction, V has a non-trivial submodule; this must be 1-dimensional, say it is spanned by some 0 ≠ v ∈ A. Then βv is a scalar multiple of v, that is, v is an eigenvector of the map x ↦ βx. But this map has no real eigenvalue. So we have a commutative algebra over R which has a 2-dimensional simple module.

Infinite-dimensional algebras can behave differently.
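Explicitly, in the basis {1A, β} the map x ↦ βx has matrix

\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}

with characteristic polynomial t² + 1, which has no real roots. (This algebra is of course just C, viewed as an algebra over R.)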

Example 4.4

The Heisenberg algebra H is the algebra over C generated by two non-commuting elements X and Y, the only relation which holds in the algebra being XY − YX = q·1H, where q is a non-zero element of C. It is not finite-dimensional; for example the monomials 1, X, X², . . . are linearly independent.

The Heisenberg algebra does not have any finite-dimensional simple modules:


Suppose, for a contradiction, S is a finite-dimensional simple H-module. Fix a basis of S, and write multiplication by X, Y as matrices with respect to this basis. Then the matrix of XY − YX is qIn, where In is the identity matrix and n = dim S. Take the trace of this matrix:

0 = Tr(XY − YX) = Tr(qIn) = qn,

using that Tr(XY) = Tr(YX). Since q ≠ 0 this forces n = 0, that is, dim S = 0; but S ≠ 0, a contradiction.
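Infinite-dimensional modules do exist: for instance, a standard realization lets H act on the polynomial ring C[x], with X acting as q(d/dx) and Y as multiplication by x. Then

(XY − YX)f = q (d/dx)(xf) − x q (df/dx) = qf,

so the defining relation holds.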

4.1 Some classifications of simple modules

Let A be a finite-dimensional algebra over K. We have seen that all simple A-modules occur as composition factors of the A-module A (see 3.13). In particular this implies that simple modules of a finite-dimensional algebra are always finite-dimensional.

One could ask whether it is possible, given A, to completely describe all simple modules, that is, one simple module for each isomorphism class. In general this is rather hard. But in special cases it is possible.

4.1.1 Simple modules of A = CG where G is a cyclic group

Let A = CG where G = 〈g〉, a cyclic group of order n. Then A is commutative, so by 4.3 every simple A-module is 1-dimensional. So let S = Span{x} be a 1-dimensional A-module. Then the structure of S is completely determined by the action of vg, since vg generates the algebra A. We must have vgx = λx for some λ ∈ C. We have then

x = v1x = v_{g^n}x = (vg)^n x = λ^n x

and λ^n = 1. Hence λ = exp(2kπi/n) for some k with 0 ≤ k ≤ n − 1.

This really does define an A-module; to see this it suffices to note that the corresponding map from G to GL(S) is a group homomorphism.

Choose and fix a primitive n-th root of unity, ω say. Then λ = ω^k for some k with 0 ≤ k ≤ n − 1. The representation we want is the map

ρ : G → GL(1, C), ρ(g^j) := ω^{jk}

(1 ≤ j ≤ n), and one checks that this is a group homomorphism. Note also that for different k we get representations which are not equivalent. In total we have n distinct simple modules.
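For example, for n = 3 and ω = exp(2πi/3), the three simple modules are given by vg x = x, vg x = ωx and vg x = ω²x, one for each cube root of unity.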


4.1.2 Simple modules for G where G has order p^r, over Fp

Another type of algebra where we can find all simple modules, up to isomorphism, is the group algebra A = KG where G is some group of order p^r for p a prime, and K = Fp, the field with p elements.

Lemma 4.5

Assume |G| = p^r, p prime, and K = Fp, the finite field with p elements. Then the trivial module is the only simple A-module (up to isomorphism).

Proof

Let V be a simple A-module, and let ρ : G → GL(V) be the corresponding representation. View V as a G-set; then it is a disjoint union of orbits, and each orbit has size dividing |G|, i.e. some power of p.

If dim(V) = n, then the set V has size p^n, which is a power of p. Since |V| is the sum of the orbit sizes, the number of orbits of size 1 is divisible by p. Now {0} is an orbit of size 1, so the number of orbits of size 1 is non-zero, and hence must be at least p. So there is some 0 ≠ x ∈ V such that ρ(g)x = x for all g ∈ G.

For the module, this means vgx = x for all g ∈ G. Then Span{x} is a submodule. But V is simple, so V = Span{x}, and this is the trivial module.
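As a consistency check, take G cyclic of order p. Then KG ≅ Fp[X]/(X^p − 1), and over Fp we have X^p − 1 = (X − 1)^p. The only maximal ideal of this quotient is generated by the class of X − 1, so the only simple module is the 1-dimensional one on which g acts as 1, that is, the trivial module.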

Definition 4.6

Let A be a K-algebra, and let V be a non-zero (finite-dimensional) A-module. Then V is semi-simple if V has simple submodules S1, S2, . . . , Sk such that

V = S1 ⊕ S2 ⊕ . . .⊕ Sk.

4.1.3 Examples

(1) Any simple module is semisimple. (So the name 'semisimple' is reasonable.)

(2) Let A = K. Then A-modules are the same as vector spaces. Given a vector space V, take a basis b1, . . . , bn and set Si := Span{bi}. Then Si is a simple A-submodule of V, and V = S1 ⊕ . . . ⊕ Sn. This shows that every A-module is semisimple.

(3) Let A = Mn(K) and V = A. We know from ?? that V = C1 ⊕ C2 ⊕ . . . ⊕ Cn, where Ci is the space of matrices which are zero outside the i-th column. We have also seen that each Ci is a simple A-module. So V is semisimple.

(4) Not every module is semisimple. Let A be the 2-dimensional algebra over R with basis {1A, β} such that β² = 0. Let V = A; this is not semisimple.


Assume for a contradiction that V is semisimple. In 3.1 we have proved that V has a composition series of length two, with two 1-dimensional composition factors. So V is not simple, and then we can only have V = S1 ⊕ S2 where S1 and S2 are 1-dimensional submodules of V. Then we have a basis of V consisting of eigenvectors for every element in A, and in particular eigenvectors for x ↦ βx. But this map is not diagonalizable (its only eigenvalue is 0, and if it were diagonalizable it would follow that the map, and hence β, is zero).

Given some A-module V, how can we decide whether or not it is semisimple? There are several criteria, and each of them has advantages, depending on the circumstances.

Lemma 4.7

Let V be an A-module; then the following are equivalent.
(1) If U is a submodule of V then there is a submodule C of V such that U ⊕ C = V.
(2) V is a direct sum of simple submodules (that is, V is semisimple).
(3) V is a sum of simple submodules.

Proof

(1) ⇒ (2) We may assume that V ≠ 0; then V has at least one simple submodule. There is then a submodule U of V which is a direct sum of simple modules, of largest possible dimension. We must show that U = V. By (1), there is a submodule C such that V = U ⊕ C. Assume (for a contradiction) that U ≠ V; then C is non-zero, and then C must have a simple submodule, S say. Consider now U′ := U + S. We have S ∩ U ⊆ C ∩ U = 0, that is, U′ = U ⊕ S. Since U is a direct sum of simple submodules, so is U′. But U is a proper submodule of U′ and dim U < dim U′, which contradicts the choice of U. This shows that U = V, that is, V is a direct sum of simple modules.

(2) ⇒ (3) This is clear.

(3) ⇒ (1) Let U be a submodule of V. Consider the set of submodules of V given by

S = {W : W is a submodule of V with U ∩ W = 0}.

Then S ≠ ∅ (it contains the zero submodule). Take C ∈ S of largest possible dimension. We claim that then U ⊕ C = V. By construction U ∩ C = 0. Assume, for a contradiction, that U + C ≠ V. Since V = S1 + S2 + . . . + Sk, where the Sj are simple submodules of V, there must be a simple submodule Si of V with Si ⊄ U + C, and then Si ⊄ C. So C ⊂ C + Si is a proper submodule and dim C < dim(C + Si). So we get a contradiction if we show that the module C + Si belongs to the set S. So we must show that

(C + Si) ∩ U = 0 :

Take u ∈ (C + Si) ∩ U, say u = c + x with c ∈ C and x ∈ Si. Then x = u − c ∈ (U + C) ∩ Si. But (U + C) ∩ Si is a submodule of Si and is not equal to Si (since Si is not contained in U + C). So (U + C) ∩ Si = 0. It follows that x = 0 and u = c ∈ U ∩ C = 0.

So we now have a contradiction. This completes the proof that V = U ⊕ C.


Lemma 4.8

(a) Submodules and factor modules of semi-simple modules are semi-simple.
(b) If V1 and V2 are semisimple A-modules then V1 × V2 is a semisimple A-module.

Exercise 4.1

Suppose f : S → X is an A-module homomorphism where S is simple. Show that then f(S) is either simple, or is zero.

Solution: The Isomorphism Theorem gives f(S) ≅ S/ker(f). Since S is simple, we have ker(f) = 0 or ker(f) = S. In the first case, f(S) ≅ S and f(S) is simple; otherwise f(S) = 0.

Proof (of 4.8)

(a) Suppose V is semi-simple with factor module V/U. Let π : V → V/U be the canonical map π(v) = v + U; this is an A-module homomorphism. Suppose V = S1 + S2 + . . . + Sk with Si simple; then π(V) = π(S1) + π(S2) + . . . + π(Sk), and π(Si) is either zero or simple. So V/U is a sum of simple modules, hence is semi-simple [here we use part (3) of 4.7].

If U is a submodule of V then by part (1) of 4.7 we know V = U ⊕ C, and then U ≅ V/C, so U is semi-simple by what we have just proved.

(b) Let V1 and V2 be semisimple. By exercise 2.8 we have V := V1 × V2 = (V1 × {0}) ⊕ ({0} × V2), with V1 × {0} ≅ V1 and {0} × V2 ≅ V2. So each of these summands is a direct sum of simple modules, and hence V is also a direct sum of simple modules.

In Example 4.1.3(2) we have seen that the algebra A = K has the property that every A-module is semisimple. Other algebras have the same property, and this has inspired the following definition.

Definition 4.9

The algebra A is semisimple if every A-module is semisimple.

How can one see whether or not A is semisimple without having to check all modules? This is easy, because of the following.

Lemma 4.10

Let A be a finite-dimensional K-algebra. Then A is semisimple ⇔ the A-module A is semisimple.

Proof

If all A-modules are semisimple then in particular A as an A-module is semisimple.


To prove the other implication, suppose A as an A-module is semisimple. Take an arbitrary A-module M. Take a K-basis of M, say m1, . . . , mn. Write A^n = A × A × . . . × A, the direct product of n copies of A. Define ψ : A^n → M by

ψ(a1, . . . , an) = a1m1 + . . . + anmn.

This is an A-module homomorphism (by 1.6.1) and it is surjective. So the Isomorphism Theorem gives that M ≅ A^n/ker(ψ).

The module A is semi-simple, and then so is A^n, by 4.8 part (b) and induction on n. Now 4.8(a) shows that M is semi-simple.

4.1.4 Examples

(1) The algebra A = Mn(K) is semi-simple. (See 4.1.3).

(2) Let A be the 2-dimensional algebra over R as in 4.1.3(4). We have found there a module which is not semisimple, and hence A is not semisimple.

The algebra is also isomorphic to the algebra of matrices

{ \begin{pmatrix} a & b \\ 0 & a \end{pmatrix} : a, b ∈ R }.

[Namely, the algebra is 2-dimensional and it contains a non-zero element with square zero; see ??.] So this algebra of matrices also is not semisimple. However, it is a subalgebra of M2(R), which is semisimple!

The last example shows that a subalgebra of a semisimple algebra need not be semisimple. On the other hand, factor algebras of semisimple algebras are always semisimple; we show this now.

Lemma 4.11

Let I be an ideal of A and B = A/I. The following are equivalent:
(i) V is a semisimple B-module;
(ii) V is a semisimple A-module with IV = 0.
Hence if A is semisimple then B is semisimple.

Proof

Recall from 2.7 that B-modules can be viewed as the A-modules V with IV = 0, where the two actions are related by the equation

ax = (a + I)x (a ∈ A, x ∈ V).

[We write IV for the span of the set {xv : x ∈ I, v ∈ V}.]


(i) ⇒ (ii) Let V = S1 ⊕ . . . ⊕ Sk where the Si are simple B-submodules of V. Then we view V as an A-module with IV = 0. Then ISi ⊆ IV = 0, so each Si can also be viewed as an A-module. As an A-module it is simple as well: namely, if 0 ≠ x ∈ Si then Ax = Bx = Si. So V is also semisimple as an A-module.

(ii) ⇒ (i) Suppose V = S1 ⊕ . . . ⊕ Sk with Si simple A-submodules of V, and IV = 0. Then ISi ⊆ IV = 0, so Si can be viewed as a B-module, and it is still simple as a B-module, since for 0 ≠ x ∈ Si we have Bx = Ax = Si.

For the last part, suppose A is semisimple. Take any B-module V; then we can view V as an A-module satisfying IV = 0. But A is semisimple, therefore V is semisimple as an A-module. By the implication (ii) ⇒ (i) it is also semisimple as a B-module. This shows that B is semisimple.

Proposition 4.12

Let A = A1 × A2, the direct product of algebras. Then A is semisimple if and only if both A1 and A2 are semisimple.

Proof

⇒ Suppose A is semisimple. The projection π1 : A → A1 onto the first coordinate is a surjective algebra homomorphism, so A1 is a factor algebra of A. By 4.11, A1 is semisimple. Similarly, A2 is semisimple.

⇐ Assume A1 and A2 are semisimple. Write A1 = S1 ⊕ S2 ⊕ . . . ⊕ Sk where the Si are simple A1-submodules of A1, and similarly A2 = T1 ⊕ . . . ⊕ Tl with Ti simple A2-submodules of A2. Then the A-module A1 × A2 can be written as the direct sum of all Si × {0} and {0} × Tj. These are simple A-modules, by 3.15.

EXERCISES

4.2. For each of the following subalgebras A of M2(K), consider the natural module V = (K^2)^t of column vectors. Show that V is simple, and find EndA(V), that is, the algebra of linear maps φ : V → V which commute with all elements in A.

By Schur's Lemma, this algebra is a division ring. Identify it with 'something known'.

(i) K = R, A = { \begin{pmatrix} a & b \\ -b & a \end{pmatrix} : a, b ∈ R }.

(ii) K = Z/2Z, A = { \begin{pmatrix} a & b \\ b & a+b \end{pmatrix} : a, b ∈ Z/2Z }.

[Note: to see that in each case A really is an algebra, see Exercise 1.5.]


4.3. Suppose A is a finite-dimensional algebra over a finite field F, and S is a (finite-dimensional) simple A-module. Let D := EndA(S). Show that then D must be a field.

4.4. An idempotent of an algebra A is an element e ∈ A such that e² = e. Let e1 and e2 be idempotents such that 1A = e1 + e2 and e1e2 = 0 = e2e1. Assume also that e1 and e2 are central in A, that is, eia = aei for all a ∈ A.

(a) Suppose V is an A-module; show that then e1V and e2V are submodules of V and that V = e1V ⊕ e2V. Moreover, show that e1V = {v ∈ V : v = e1v}.

Suppose now that S1 and S2 are simple A-modules such that S1 = e1S1 and S2 = e2S2.

(b) Show that then S1 is not isomorphic to S2.

(c) Assume K = C. Let V = S1 ⊕ S2; show that EndA(V) is isomorphic to the algebra of diagonal matrices in M2(C). [Hint: apply Schur's Lemma.]

Show also that if W = S1 ⊕ S1 then EndA(W) ≅ M2(C).

4.5. (Continuation) With the notation of the previous question, what can you say about EndA(N) where N = (S1 ⊕ S1) ⊕ (S2 ⊕ S2 ⊕ S2)?

4.6. Suppose A is an algebra and N is some A-module. We define a subquotient of N to be a module Y/X where X, Y are submodules of N such that 0 ⊆ X ⊆ Y ⊆ N.

Suppose N has composition length 3, and assume that every subquotient of N which has composition length 2 is semisimple. Show that then N must be semisimple.

[Suggestion: choose a simple submodule X of N and show that there are submodules U1 ≠ U2 of N, both containing X, of composition length 2. Then show that U1 + U2 is the direct sum of three simple modules.]

4.7. Let G be the symmetric group Sym(n) and Ω the natural G-set, so that the permutation module KΩ has basis {b1, b2, . . . , bn}. Let

W := {a1b1 + . . . + anbn : ai ∈ K, a1 + . . . + an = 0}.

Show that W is a submodule of KΩ. Show also that if K = C then W is simple, and CΩ is the direct sum of W with a copy of the trivial module.

5 The Wedderburn Theorem

Given a K-algebra A, when is it semisimple? Wedderburn's theorem answers this, and it gives a complete description of arbitrary semisimple algebras. We will prove the theorem for the case K = C, and we will explain what the answer is for arbitrary fields. To start, we consider commutative finite-dimensional semisimple algebras over C. The classification of such algebras is attributed to Weierstrass and Dedekind.

Proposition 5.1

Suppose A is a finite-dimensional commutative algebra over C. Then A is semisimple ⇔ A is isomorphic to a direct product of copies of C, as an algebra: A ≅ C × C × . . . × C.

Proof

⇐ We know that C as an algebra over C is semisimple, and by Proposition 4.12, so is the direct product of finitely many copies of C.

⇒ Suppose A is the direct sum of simple submodules,

A = S1 ⊕ S2 ⊕ . . .⊕ Sk.

By Corollary 4.3, each Si is 1-dimensional. We write the identity of A as

1A = e1 + e2 + . . .+ ek, ei ∈ Si.

(a) We claim that ei² = ei and eiej = 0 for i ≠ j. Namely, we have

ei = ei1A = eie1 + eie2 + . . . + eiek

and therefore

ei − ei² = eie1 + . . . + eiei−1 + eiei+1 + . . . + eiek.


The left hand side belongs to Si and the right hand side belongs to Σ_{j≠i} Sj. But the sum is direct, therefore

Si ∩ Σ_{j≠i} Sj = 0.

So ei² = ei, and now 0 = eie1 + . . . + eiei−1 + eiei+1 + . . . + eiek, and since we have a direct sum, each summand must be zero.

We claim that ei ≠ 0 for all i. Take some non-zero x ∈ Si; then

x = x1A = xe1 + . . . + xei + . . . + xek.

Now x − xei ∈ Si ∩ Σ_{j≠i} Sj = 0, and therefore x = xei ≠ 0. It follows that ei must be non-zero. Since Si is 1-dimensional, we deduce that Si = Cei, and therefore for each a ∈ A we have aei = aiei with ai ∈ C.

Now, for every a ∈ A we have

(∗) a = a1A = ae1 + ae2 + . . . + aek = a1e1 + . . . + akek.

Define a map ψ : A → C × . . . × C by

ψ(a) := (a1, a2, . . . , ak) (ai as in (∗)).

This is clearly linear. It is also onto, as ψ(ei) = (0, . . . , 0, 1, 0, . . . , 0), with the 1 in the i-th place, by the above. It is 1-1: if all ai are zero then a = 0, by (∗). The map ψ is an algebra homomorphism:

ψ(a)ψ(b) = (a1, . . . , ak)(b1, b2, . . . , bk) = (a1b1, a2b2, . . . , akbk) = ψ(ab)

since

ab·1A = a(b1e1 + b2e2 + . . . + bkek)
      = ab1e1 + ab2e2 + . . . + abkek
      = b1(ae1) + b2(ae2) + . . . + bk(aek)
      = b1a1e1 + b2a2e2 + . . . + bkakek
      = a1b1e1 + a2b2e2 + . . . + akbkek.
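As a concrete illustration, take A = C[X]/(X² − 1) and write x for the class of X. Setting

e1 = (1 + x)/2, e2 = (1 − x)/2,

one checks that e1² = e1, e2² = e2, e1e2 = 0 and e1 + e2 = 1; the resulting isomorphism A ≅ C × C sends x to (1, −1).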

For arbitrary semisimple algebras, the building blocks are matrix algebras Mn(C), and the structure theorem for semisimple algebras over C is due to Wedderburn (though, according to [1], a more general version was proved by Artin).

Theorem 5.2

[The Wedderburn Theorem for C] Let A be a finite-dimensional algebra over C. Then A is semisimple if and only if A is isomorphic to a direct product of matrix rings,

A ≅ Mn1(C) × Mn2(C) × . . . × Mnk(C).


One direction is already known: we know that a direct product of matrix algebras is always semisimple. Namely, in 4.1.3 we have seen that Mn(K) is semisimple for each n ≥ 1, in fact for any field K. Now Proposition 4.12 shows that a direct product of matrix algebras also is semisimple.

To prove that any finite-dimensional semisimple algebra over C is isomorphic to a direct product of such matrix rings takes more work.

The first thing we might ask is: where do the matrices come from? We are used to writing linear maps as matrices, and a good guess might be that this should be generalized in some way. If a linear map is written as a matrix, one starts by fixing a basis of the space, and works with coordinates with respect to this basis. For example, if we take a 2-dimensional space then we identify the vector space with column vectors in K².

For our generalization we imitate this, and we consider an A-module which is a direct product. So let V = U1 × U2 = {(u1, u2) : ui ∈ Ui}, where U1 and U2 are A-modules. We define a 'matrix algebra', with underlying space

Λ = {[γij] : γij ∈ HomA(Uj, Ui)}

and with matrix addition and matrix multiplication. One checks that this really is an algebra. Note however that the matrix entries do not commute in general.

Next, we want to relate this algebra to the endomorphism algebra of V. Let πi : V → Ui be the projection,

πi(u1, u2) = ui.

This is an A-module homomorphism, by 1.6.1. Similarly, let κ1 : U1 → V be the inclusion map,

κ1(u1) = (u1, 0),

and similarly define κ2. These are also A-module homomorphisms, by 1.6.1. These maps have a very important property:

(∗) We have κ1π1 + κ2π2 = IdV :

Namely, if m = (u1, u2) then ui = πi(m) and

κ1π1(m) + κ2π2(m) = κ1(u1) + κ2(u2) = (u1, 0) + (0, u2) = (u1, u2) = m.

Lemma 5.3

Let V = U1 × U2. Then the algebra EndA(V ) is isomorphic to Λ.


Remark 5.4

When A = K and the Ui are just 1-dimensional vector spaces, then this is the same as writing linear maps of a 2-dimensional space as matrices. The proof in general is completely analogous.

Proof

Given an A-module homomorphism γ : V → V, let

γij := πi ∘ γ ∘ κj : Uj → Ui;

this is an A-module homomorphism. Now define

Ψ : EndA(V) → Λ, γ ↦ [γij].

We claim that this map Ψ is an algebra homomorphism.

(a) It is linear: We have

πi(cγ + δ)κj = cπiγκj + πiδκj

for all i, j, where c is a scalar and γ, δ are A-module endomorphisms of V, and therefore

Ψ(cγ + δ) = cΨ(γ) + Ψ(δ).

(b) Ψ commutes with taking products: Consider Ψ(γ)Ψ(δ) = [γij][δij]. This matrix product has ij-entry

Σ_{t=1}^{2} γit δtj = Σ_t (πiγκt)(πtδκj) = πiγ(κ1π1 + κ2π2)δκj.

By (∗) we have κ1π1 + κ2π2 = IdV, and so the ij-th entry is equal to

πiγδκj,

and this is precisely the ij-th entry of Ψ(γδ). This is true for all i, j and therefore

Ψ(γ)Ψ(δ) = Ψ(γδ).

(c) If γ = IdV then one gets γij = 0 for i ≠ j and γii = IdUi. This shows that Ψ(IdV) is the identity matrix in Λ.

(d) The map Ψ is one-to-one: Suppose πiγκj = 0 for all i, j. Applying (∗) on both sides of γ shows that for all m ∈ V we have

γ(m) = Σ_{i,j} κi(πiγκj)(πj(m)),

and so this is zero. That is, γ = 0.

(e) The map Ψ is also onto: Given a matrix [φij] in Λ, define φ : V → V by setting

φ(u1, u2) = [φij] \begin{pmatrix} u1 \\ u2 \end{pmatrix}.

One checks that Ψ(φ) = [φij].


Example 5.5

Let A be a C-algebra, and let S1, S2 be simple A-modules, where we assume S1 is not isomorphic to S2. Let V = S1 × S2 × S2 = {(s1, s2, s3) : s1 ∈ S1, s2, s3 ∈ S2}.

Let φ : V → V be an A-module homomorphism; then

φ(s1, s2, s3) = φ(s1, 0, 0) + φ(0, s2, 0) + φ(0, 0, s3).

So we look at each of the three terms. Consider φ(s1, 0, 0) = (x1, x2, x3), with x1 ∈ S1 and the other two components in S2.

We get a map s1 ↦ x1, and by Schur's Lemma this map is a scalar multiple of the identity; in other words there is a scalar λ such that x1 = λs1, for any s1 ∈ S1. Again by Schur's Lemma, x2 = 0 and x3 = 0 (these components come from homomorphisms S1 → S2). We start writing down the matrix for φ, and what we have found tells us that the first column of this matrix is

\begin{pmatrix} λ \\ 0 \\ 0 \end{pmatrix}.

If we continue in this way, we get a matrix of the form

\begin{pmatrix} λ & 0 \\ 0 & B \end{pmatrix}, where B ∈ M2(C)

(written in block form, with B occupying the 2 × 2 block corresponding to the two copies of S2).

Proposition 5.6

Assume V is a semisimple A-module. Then EndA(V) is isomorphic to a direct product of matrix rings,

Mn1(C) × . . . × Mnk(C).

Proof

Let V = S1 ⊕ S2 ⊕ . . . ⊕ Sn where the Si are simple. By 5.3 and induction, we have EndA(V) ≅ Λ where

(∗) Λ = {[φij] : φij ∈ HomA(Sj, Si)}.

Now we apply Schur's Lemma. If Sj is not isomorphic to Si then φij = 0. Suppose Sj ≅ Si. Then we identify Sj and Si, and by Schur's Lemma, φij = λij Id with λij ∈ C.

We label the simple modules so that isomorphic ones come together: Let S1 ≅ S2 ≅ . . . ≅ Sn1, then Sn1+1 ≅ Sn1+2 ≅ . . . ≅ Sn1+n2, where Sn1+1 is not isomorphic to S1, and so on.


Then a matrix in Λ has block diagonal shape: the off-diagonal blocks are zero, and on the diagonal we have arbitrary matrices,

\begin{pmatrix} A1 & 0 & \cdots & 0 \\ 0 & A2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & Ak \end{pmatrix}, with Ai ∈ Mni(C).

Multiply two of these:

\begin{pmatrix} A1 & & \\ & \ddots & \\ & & Ak \end{pmatrix} \begin{pmatrix} B1 & & \\ & \ddots & \\ & & Bk \end{pmatrix} = \begin{pmatrix} A1B1 & & \\ & \ddots & \\ & & AkBk \end{pmatrix}.

This shows that the multiplication is precisely as in the direct product of matrix rings. This suggests defining

Θ(diag(A1, A2, . . . , Ak)) := (A1, A2, . . . , Ak),

which is an element in Mn1(C) × . . . × Mnk(C). This map Θ is clearly C-linear and bijective. We have just shown that it preserves products, hence it is an isomorphism of algebras.
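For example, if V = S1 ⊕ S1 ⊕ S2 with S1, S2 simple and not isomorphic, the proposition gives

EndA(V) ≅ M2(C) × M1(C),

an algebra of dimension 4 + 1 = 5.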

Recall from 1.4 that the 'opposite algebra' Aop of A is the algebra with underlying vector space A, and with multiplication

a ∗ b := ba.

Lemma 5.7

Let A = Mn(K), then Aop is isomorphic to A.

Proof

For any n × n matrix X, let τ(X) := X^t, the transpose of the matrix. This is linear (from basic linear algebra), and it is a vector space isomorphism since τ² = id. Furthermore, as one learns in elementary linear algebra,

τ(XY) = (XY)^t = Y^t X^t = τ(Y)τ(X) = τ(X) ∗ τ(Y).

This shows that τ is an isomorphism of algebras A → Aop.

Exercise 5.1

Let A = A1 × A2, the direct product of algebras. Check that Aop is isomorphic to A1^op × A2^op.


Solution 5.8

The underlying vector spaces for Aop and A1^op × A2^op are the same. The multiplication in Aop is

(a1, a2) ∗ (b1, b2) = (b1, b2)(a1, a2) = (b1a1, b2a2).

The multiplication in A1^op × A2^op is

(a1, a2)(b1, b2) = (a1 ∗ b1, a2 ∗ b2) = (b1a1, b2a2).

Hence the identity map gives us an algebra isomorphism from Aop to A1^op × A2^op.

Lemma 5.9

Let A be any K-algebra. Then A is isomorphic to EndA(A)op.

Proof

(a) Let a ∈ A; define 'right multiplication' ra : A → A by

ra(x) = xa (x ∈ A).

Then ra is an A-module homomorphism, by 1.6.1. Furthermore, we see that if a = 1A then ra is the identity map of A.

(b) We have EndA(A) = {ra : a ∈ A}: One inclusion holds by (a), and for the other inclusion, take f : A → A to be an A-module homomorphism. Set a := f(1A). Then for any x ∈ A,

f(x) = f(x1A) = xf(1A) = xa = ra(x).

This is true for all x ∈ A, and hence f = ra.

(c) Consider composition. We have

(ra ∘ rb)(x) = ra(xb) = (xb)a = x(ba) = rba(x).

So rb ∗ ra = ra ∘ rb = rba. Hence we define a map ψ : A → EndA(A)op by setting

ψ(a) = ra.

By (a), it takes the identity to the identity, and by part (c), it preserves products. One checks that ψ is K-linear. Finally, by (b) it is bijective, and we have now proved that ψ is an isomorphism.


5.1 The proof of Wedderburn’s theorem

We only have to put the previous results together. The A-module A is semisimple, so by 5.9 and 5.6 we have

A ≅ (EndA(A))op ≅ (Mn1(C) × . . . × Mnk(C))op

and by Exercise 5.1 and Lemma 5.7, this is isomorphic to

(Mn1(C))op × . . . × (Mnk(C))op ≅ Mn1(C) × . . . × Mnk(C).

Remark 5.10

We get the result for the commutative case now as a corollary. Namely, for a commutative semisimple algebra, all matrix blocks in the Wedderburn theorem must be commutative, and this is only true if ni = 1 for all i.

We can now give a complete description of the simple modules of a semisimple algebra over C.

Corollary 5.11

Let A be a finite-dimensional semisimple algebra over C, and suppose A ≅ Mn1(C) × . . . × Mnk(C) as algebras. Then A has precisely k simple modules (up to isomorphism). They are of the form S1, S2, . . . , Sk, where we can take Si = (C^{ni})^t, and such that

(a) the i-th factor of A acts on Si by matrix multiplication;
(b) the i-th factor of A acts on Sj as zero for j ≠ i.

In particular the dimensions of the simple modules are n1, n2, . . . , nk.

Proof

In 3.15 we have classified the simple modules of an algebra of the form A1 × A2. Namely, these are precisely the modules of the form

S × {0}, {0} × T

where S runs through the simple A1-modules, and T runs through the simple A2-modules. We apply this inductively, and see that all but one of the factors of our product of matrix blocks act as zero on a simple A-module.

From 3.14 we know that Mni(C) has a unique simple module (up to isomorphism), namely the natural module of column vectors (C^{ni})^t.

Remark 5.12

What can we say about a finite-dimensional semisimple algebra A over an arbitrary field K? The answer is that A is always isomorphic to a product of matrix rings where the matrix blocks are Mni(Di) and Di is some division ring containing K.


One can see this with little trouble if one goes through the proof and checks where we used that the field was C. In fact, this is only used when we apply Schur's Lemma, to say that the endomorphism ring of the simple module Si is isomorphic to C. In general we can only say that the endomorphism ring of Si is a division ring; this is the general version of Schur's Lemma.

Everything else stays the same, and then the proof gives

A ≅ Mn1(D1) × . . . × Mnk(Dk).

EXERCISES

5.2. Suppose A is a finite-dimensional commutative semisimple algebra over C. Find all ideals of A.

5.3. Show that A = Mn(C) does not have any ideals except 0 and A. Hence find all ideals of a finite-dimensional semisimple algebra over C.

5.4. Suppose A = K1 × K2 × K3, the direct product of three fields. Find all the ideals of A.

5.5. Suppose A = Mn1(C) × . . . × Mnk(C). Show that the center of A is commutative and semisimple, and has dimension k.

5.6. Suppose A is a finite-dimensional semisimple algebra over C. Suppose x is an element in the center Z(A). Show that if x is nilpotent then x = 0.

5.7. Which of the following commutative algebras over C are semisimple? Note that the algebras in (1) have dimension 2, and the others have dimension 4.

(1) C[X]/(X² − X), C[X]/(X²), C[X]/(X² − 1).

(2) C[X1]/(X1² − X1) × C[X2]/(X2² − X2)

(3) C[X1, X2]/(X1² − X1, X2² − X2)

(4) C[X1]/(X1²) × C[X2]/(X2²)

(5) C[X1, X2]/(X1², X2²)

6 Maschke's Theorem

In the previous chapter we have proved a main structure theorem for semisimple algebras. One would now like to know to which algebras this can be applied. For example, you might ask when a group algebra of a finite group is semisimple. This is answered by Maschke's theorem.

Theorem 6.1 (Maschke’s Theorem)

Let G be a finite group and A = KG the group algebra, where K is some field. Then A is semisimple if and only if the characteristic of K does not divide the order of G.

The main idea of the proof which we are going to present is that from any linear map between KG-modules one can always construct a KG-homomorphism, by 'averaging over the group'.

Lemma 6.2

Suppose M and N are KG-modules, and f : M → N is a K-linear map. Define

T(f) : M → N, m ↦ Σ_{g∈G} vg(f(v_{g^{-1}}m)).

Then T (f) is a KG-homomorphism.

Proof

One checks that T(f) is linear. Alternatively, one could argue that multiplication by elements of A is linear, f is linear, and T(f) is a linear combination of compositions of linear maps and is therefore linear. To see that it is an A-homomorphism it suffices to check on the basis of A; so let x ∈ G, then

T(f)(vxm) = Σ_g vg f(v_{g^{-1}} vx m) = Σ_g vx v_{x^{-1}g} f(v_{(x^{-1}g)^{-1}} m) = vx(T(f)(m)),

where the last equality holds because x^{-1}g runs through G as g does.
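For instance, if G = {1, g} has order 2, the formula reads T(f)(m) = f(m) + vg f(vg m) (using g^{-1} = g), and one can check directly, using vg² = v1, that T(f)(vg m) = vg T(f)(m).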

6.1 Proof of Maschke’s Theorem

⇐ Assume that the characteristic of K does not divide |G|. Let W be a submodule of A = KG; we must show that there is a submodule C of A such that W ⊕ C = A.

There is a subspace V such that W ⊕ V = A. Let π : A → A be the projection onto W with kernel V; this is just a linear map. Define

γ := (1/|G|) T(π).

This is a scalar multiple of an A-module homomorphism and hence is also an A-module homomorphism. So C := Ker(γ) is an A-submodule. We will now show that KG = W ⊕ C as KG-modules.

(a) We claim that the restriction of γ to W is the identity map, and that Im(γ) = W: If m ∈ W then v_{g^{-1}}m ∈ W, and so π(v_{g^{-1}}m) = v_{g^{-1}}m; therefore vgπ(v_{g^{-1}}m) = m and

γ(m) = (1/|G|) Σ_{g∈G} m = m.

This implies W ⊆ Im(γ). Conversely, let m ∈ A. Since π(v_{g^{-1}}m) ∈ W and W is a submodule, it follows that vgπ(v_{g^{-1}}m) ∈ W and then γ(m) ∈ W.

(b) We claim that W ∩ C = 0: If m ∈ W ∩ C then γ(m) = m (by (a)) and γ(m) = 0, so m = 0.

(c) W + C = A: We have dim(W + C) = dim(W) + dim(C) (by (b) and linear algebra), which is dim Im(γ) + dim Ker(γ) (using (a)), and this is equal to dim(A) by rank-nullity.

⇒ For the converse of Maschke's Theorem, suppose A = KG is semisimple. We claim that char(K) does not divide the order of G:

Let ω := Σ_{g∈G} vg, which is an element of KG. Check that

(∗) vxω = ω (all x ∈ G).

Therefore, if U is the span of ω, then U is a (1-dimensional) submodule of A. Since A is semisimple, there is a submodule C of A such that U ⊕ C = A. Then one checks that U = Ae, where e is an idempotent of A, and so e = cω for some c ∈ K. From e² = e ≠ 0 we get, using (∗), that

0 ≠ ω² = |G|ω

and consequently |G| ≠ 0 in K.
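To see the converse in action, take K = F2 and G = {1, g} of order 2. Then ω = v1 + vg satisfies ω² = 2ω = 0, so the span of ω cannot have a complement (a complement would produce a non-zero idempotent e = cω, but e² = 0), and indeed F2G is not semisimple.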


6.1.1 Exploiting the map T further

In the first Lemma of this section we have seen that by 'averaging over G' we can produce KG-module homomorphisms starting with arbitrary linear maps. This is a very powerful tool which is used in other contexts. Recall that the trace tr(φ) of a linear transformation is the trace of some matrix representing φ. For a detailed reminder, see the beginning of chapter 7.

Corollary 6.3

Suppose V and W are simple CG-modules, and suppose f : V → W is a linear transformation.

(a) Assume V and W are not isomorphic; then T(f) = 0.
(b) Assume V = W; then T(f) = λI where

λ = (|G|/dim V) tr(f).

Proof

We apply 6.2 and Schur's Lemma; this gives (a), and also that in (b) T(f) is a multiple of the identity. We calculate the trace of T(f). We have

tr(vg f v_{g^{-1}}) = tr(f) (g ∈ G),

hence tr T(f) = |G| tr(f). On the other hand, the trace of T(f) is equal to λ dim V. The statement follows.

6.1.2 Permutation modules

Suppose G is a finite group and Ω is a left G-set. Recall that the permutation module KΩ is defined to be the vector space over K with basis

{bω : ω ∈ Ω},

where the action is given by vg bω := bgω.

Let ζ := Σ_ω bω. We have seen that vgζ = ζ for all g ∈ G, and hence 〈ζ〉 is a submodule isomorphic to the trivial module. The following is a 'Maschke type' argument.

Lemma 6.4

If char(K) does not divide |Ω| then the submodule spanned by ζ is a direct summand of KΩ.


Proof

Let ψ : KΩ → 〈ζ〉 be the linear map with ψ(bω) = ζ for each ω. Check that this is a KG-homomorphism.

Then ψ(ζ) = |Ω|ζ. So if |Ω| ≠ 0 in K then the intersection of Ker(ψ) with the trivial module 〈ζ〉 is zero, and by dimensions KΩ = 〈ζ〉 ⊕ Ker(ψ).

6.2 Some consequences of Maschke’s Theorem

Suppose G is a finite group; then by Maschke's Theorem the group algebra CG is semisimple. We can therefore apply Wedderburn's theorem and obtain that

CG ≅ Mn1(C) × Mn2(C) × . . . × Mnk(C).

This contains a lot of information. First, by comparing dimensions, we have

|G| = Σ_{i=1}^{k} ni².

Moreover, the sizes of the matrix blocks are the dimensions of the simple modules for CG, by 5.11. There is always the trivial representation, which is 1-dimensional. We can take this to correspond to the matrix block Mn1(C), that is, n1 = 1.

The number k of matrix blocks has an interpretation in terms of the group. It is equal to the number of conjugacy classes (see the exercises below).

Example 6.5

We can sometimes find the dimensions of simple modules. Let G be the dihedral group of order 10, as in Exercise 2.13; then by Exercise 3.15, the dimension of a simple module is ≤ 2, and there are precisely two 1-dimensional simple modules. We have now

10 = 1 + 1 + Σ_{i=3}^{k} ni²

and the only possible solution is 10 = 1 + 1 + 4 + 4. So we know that there are two non-isomorphic 2-dimensional simple modules. You might find these explicitly.

Then we know there are four matrix blocks, so there should be four conjugacy classes. Perhaps you know this anyway.
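The same counting works for G = Sym(3), of order 6: there are two 1-dimensional simple modules (the trivial and the sign representation), and 6 = 1 + 1 + 2² forces exactly one further simple module, of dimension 2; this matches the three conjugacy classes of Sym(3).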

EXERCISES

6.1. Show that the center of the group algebra KG has a basis consisting of the class sums. The class sum [C] of a conjugacy class C = g^G is defined to be the sum of all elements in C (it has |G|/|CG(g)| terms).


6.2. Let G be the symmetric group Sym(3), of order 6, and let V = KΩ where Ω is the natural G-set, of size 3.

(a) Suppose K = C; express V as a direct sum of two simple CG-modules.

(b) Suppose K has characteristic 3. Show that then V has a composition series of length 3.

6.3. Let G be the dihedral group of order 2n where n is odd. Generalize 2.13 and 3.15. Find the dimensions of all simple CG-modules. Now do the same when n is even and n > 2. What happens if n = 2?

6.4. Let A = KG, the group algebra of a finite group. If C is a conjugacy class of G, define the class sum to be

[C] := Σ_{g∈C} vg.

Show that [C] belongs to the center Z(A) of A. Show also that the class sums form a K-basis for Z(A).

6.5. (Continuation) Suppose now that K = C. Deduce from Wedderburn's Theorem that the number of matrix blocks is equal to the number of conjugacy classes of G. What does this tell us if G is abelian?

6.6. Let A = CG where G is the symmetric group Sym(3), and take σ = (1 2 3) and τ = (1 2). We want to show directly that A is a direct product of three matrix algebras. [We know from 6.3 that there should be two blocks of size 1 and one block of size 2.]

(a) Let e± := (1/6)(1 ± vτ)(1 + vσ + v_{σ²}); show that e± are idempotents in the centre of A, and e+e− = 0.

(b) Let

f = (1/3)(1 + ω^{-1}vσ + ωv_{σ²})

where ω is a primitive 3rd root of 1. Let f1 := vτ f v_{τ^{-1}}. Show that f and f1 are orthogonal idempotents, and that

f + f1 = 1A − e− − e+.

(c) Show that Span{f, fvτ, vτf, f1} is an algebra, isomorphic to M2(C).

Apply these calculations, and show directly that A is isomorphic to a product of three matrix algebras. [Taking direct sums might be more natural.]

7 Characters

Suppose that ρ : G → GL(n, C) is a representation of the group G. With each n × n matrix ρ(g) we can associate its trace, that is, we add its diagonal entries. We write χ(g) for this trace. The function χ : G → C is defined to be the character associated to the representation ρ.

Characters of representations have many nice properties, and they are a very important tool for calculating with representations of groups.

7.1 Definition, examples, basic properties

Suppose A is some n × n matrix; recall that the trace of A is defined as the sum of the diagonal entries,

tr(A) := Σ_{i=1}^{n} aii.

Recall also that tr(AB) = tr(BA) if B is some other n × n matrix, and therefore tr(P^{-1}AP) = tr(A) if P is an invertible n × n matrix.

If φ : V → V is a linear transformation of a finite-dimensional vector space V then we write tr(φ) for the trace of a matrix of φ, with respect to some basis. By the above, this does not depend on the choice of a basis.

Over C, the trace tr(A) is also equal to the sum of the eigenvalues of A (counted with multiplicity). Actually, most of our matrices will satisfy an equation of the form A^m = I over C, and then A is diagonalizable, by linear algebra.


Definition 7.1

Suppose ρ : G → GL(n, C) is a representation of G. The associated character χρ : G → C is defined by

χρ(g) := tr(ρ(g)).

Write also χV if V is the CG-module corresponding to the representation ρ.

Example 7.2

The trivial representation of G is the map ρ : G → GL1(C) with ρ(g) = 1 for each g ∈ G. The associated character is known as the 'trivial character'. Write χ1 for this character; then χ1(g) = 1 for each g ∈ G.

Example 7.3

Let Ω be a G-set, and V = CΩ the permutation module corresponding to Ω. Call its character χΩ, the 'permutation character'. Then

χΩ(g) = |FixΩ(g)|

where FixΩ(g) = {ω ∈ Ω : gω = ω}.

Example 7.4

Recall that the (left) regular representation has underlying vector space V = CG, and the action is by left multiplication. Let χreg be its character. Then

χreg(g) = |G| if g = 1G, and χreg(g) = 0 otherwise.

This is also a special case of a permutation character.

Given a finite group G and a representation ρ : G → GL(V), we want to find the trace of ρ(g) for g ∈ G. Since g has finite order, we know that ρ(g)^m = ρ(g^m) = ρ(1) = I for some m ≥ 1. So the linear transformation ρ(g) satisfies the polynomial

X^m − 1,

and therefore it is diagonalizable, with eigenvalues some m-th roots of 1. This is very good to know in many applications.

Definition 7.5

Let χ be a character of G. Then χ is said to be irreducible if χ = χV where V is a simple CG-module.


For example, the trivial character is irreducible. More generally, if the corresponding module is 1-dimensional then the associated character is irreducible.

Example 7.6

Let G = Sn, the symmetric group, and let CΩ be the natural permutation module, where Ω = {1, 2, . . . , n}. In chapter 4 we have seen that CΩ ≅ K ⊕ W as a CG-module, where W is simple of dimension n − 1 (and K is a copy of the trivial module). So χW is an irreducible character. By 7.3, we have

χW(g) = |FixΩ(g)| − 1 (g ∈ G).
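For instance, for n = 3 this gives χW(1) = 3 − 1 = 2, χW((1 2)) = 1 − 1 = 0 and χW((1 2 3)) = 0 − 1 = −1.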

Lemma 7.7

Suppose ρ1 and ρ2 are equivalent representations of G, with associated characters χ1 and χ2. Then χ1 = χ2.

Proof

By assumption, there is an invertible matrix P such that for all g ∈ G we have

ρ1(g) = Pρ2(g)P−1

Then for all g ∈ G,

χ1(g) = tr(ρ1(g)) = tr(Pρ2(g)P−1) = tr(ρ2(g)) = χ2(g).

Proposition 7.8

Let ρ : G → GL(n,C) be a representation of the finite group G, and let χ be the associated character. Then

(i) χ(1) = n;
(ii) χ(g−1) = \overline{χ(g)} for all g ∈ G;
(iii) χ(ygy−1) = χ(g) for all y, g ∈ G. That is, χ is a class function.

Proof

(i) We have ρ(1) = In, the identity n × n matrix. It has trace n.
(ii) Fix some g ∈ G. The matrix ρ(g) is diagonalizable (as we noted before, since g has finite order), so let P be some invertible n × n matrix with Pρ(g)P−1 = D, a diagonal matrix, and let ω1, . . . , ωn be its diagonal entries. Then the ωi are m-th roots of unity, where gm = 1. The inverse of this matrix is diagonal, with diagonal entries ωi−1 = \overline{ωi} (since each ωi lies on the unit circle). But the inverse of this matrix


is Pρ(g−1)P−1, and hence we have

χ(g−1) = ∑_i \overline{ωi} = \overline{∑_i ωi} = \overline{χ(g)}.

(iii) This is clear, since ρ(ygy−1) = ρ(y)ρ(g)ρ(y)−1, and conjugate matrices have the same trace.

Lemma 7.9

Suppose W1, W2 are CG-modules, with corresponding characters χ1, χ2. Then the character χ associated to W1 ⊕ W2 is equal to χ1 + χ2.

Proof

Write ρi for the representation of G on Wi for i = 1, 2, and write ρ for the representation of G on W1 ⊕ W2. Take a basis of W1 and one of W2; then the union is a basis of W1 ⊕ W2. Since W1 and W2 are submodules, for each g ∈ G the matrix of ρ(g) is block diagonal, where the diagonal blocks are ρ1(g) and ρ2(g). Therefore tr(ρ(g)) = tr(ρ1(g)) + tr(ρ2(g)).

Let S1, S2, . . . , Sk be the simple CG-modules, up to isomorphism, and let χ1, χ2, . . . , χk be the corresponding characters. That is, they are the irreducible characters of G.

Corollary 7.10

Suppose W is any CG-module; write W ≅ S1^{a1} ⊕ · · · ⊕ Sk^{ak} as a direct sum of simple CG-modules, where the ai ≥ 0. Then the character χW is given by

χW = ∑_{i=1}^{k} ai χi.

Hence, to understand all characters, we need to understand the irreducible characters.

7.2 The orthogonality relations

Characters are functions from G to C. So we set

C^G := {φ : G → C}.

This is a vector space over C, with + and scalar multiplication defined pointwise; it has dimension |G|. We define an inner product on C^G by setting

〈φ, ψ〉 := (1/|G|) ∑_{g∈G} φ(g) \overline{ψ(g)}.
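As a sketch of how one computes with this inner product, the following Python fragment lists the elements of the cyclic group of order 3 explicitly and stores class functions as dictionaries; the names chi1, chi2 and the string labels for group elements are illustrative choices.

```python
from cmath import exp, pi

w = exp(2j * pi / 3)                      # primitive 3rd root of unity
G = ["1", "g", "g2"]                      # the cyclic group of order 3

chi1 = {"1": 1, "g": 1, "g2": 1}          # trivial character
chi2 = {"1": 1, "g": w, "g2": w**2}       # g acts with eigenvalue w

def inner(phi, psi):
    """<phi, psi> = (1/|G|) * sum over g of phi(g) * conj(psi(g))."""
    return sum(phi[g] * psi[g].conjugate() for g in G) / len(G)

assert abs(inner(chi1, chi1) - 1) < 1e-12     # norm 1
assert abs(inner(chi1, chi2)) < 1e-12         # orthogonal, as 1 + w + w^2 = 0
```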


Exercise 7.1

Show that 〈−,−〉 is an inner product on C^G.

Let Cl(G) be the set of φ ∈ C^G which are constant on conjugacy classes. This is a subspace of C^G, and its dimension is the number of conjugacy classes. The characters of G are contained in Cl(G). Moreover, the number of irreducible characters is precisely the dimension of Cl(G).

The motivation for the inner product comes from orthogonality properties of the irreducible characters. The following is sometimes called the 'first orthogonality relation'.

Theorem 7.11

Suppose χ and χ′ are irreducible characters of G, corresponding to simple modules V and W. Then

〈χ, χ′〉 = 1 if V ≅ W, and 〈χ, χ′〉 = 0 if V ≇ W.

Proof

By 7.7 we can assume that V = W in the case V ≅ W. Let ρV and ρW be the corresponding representations, and write ρW(g) = [akl(g)] and ρV(g−1) = [bλτ(g−1)]. Since \overline{χ(g)} = χ(g−1) by 7.8, and since 〈χ, χ′〉 = \overline{〈χ′, χ〉}, it suffices to evaluate

(∗) 〈χ′, χ〉 = (1/|G|) ∑_{g∈G} χ′(g)χ(g−1) = (1/|G|) ∑_{g∈G} (∑_{i=1}^{dim W} aii(g))(∑_{j=1}^{dim V} bjj(g−1)) = ∑_{i,j} (1/|G|) ∑_{g∈G} aii(g) bjj(g−1).

From Chapter 6, for any linear map h : V → W we have

(∗∗) T(h) := ∑_{g∈G} ρW(g) h ρV(g−1) = λI if V = W, and T(h) = 0 if V ≇ W,

where λ = tr(h)|G|/dim V. Now we fix i, j and take h := Eij, which has trace δij. Taking matrices,

ρW(g) Eij ρV(g−1) = [aki(g) bjτ(g−1)]k,τ.

Then the kτ-th entry of the matrix (∗∗) is

∑_{g∈G} aki(g) bjτ(g−1) = δij δkτ |G|/dim V if V = W, and = 0 if V ≇ W.

Now take k = i and τ = j, and sum over all i, j. We get that (∗) is 0 if V ≇ W, and otherwise (∗) is equal to

(1/|G|) ∑_{i,j} δij |G|/dim V = (1/|G|) ∑_{i=1}^{dim V} |G|/dim V = 1,

as stated.


Theorem 7.12

Suppose V is a finite-dimensional CG-module with character χV. Write V = V1 ⊕ V2 ⊕ . . . ⊕ Vr, where the Vi are simple. If S is any simple CG-module with character χS, then

〈χV, χS〉 = #{i : Vi ≅ S}.

Proof

We have χV = χ1 + χ2 + . . . + χr, where χi is the character of Vi. Then

〈χV, χS〉 = 〈χ1, χS〉 + . . . + 〈χr, χS〉.

By 7.11, the summand 〈χi, χS〉 is 1 if Vi ≅ S and 0 otherwise, so the sum counts those i with Vi ≅ S.
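For instance, the following Python sketch applies 7.12 to the natural permutation character of S3, with characters stored by conjugacy class; the class sizes (1, 3, 2, for the classes of the identity, the transpositions, and the 3-cycles) and the character values used are standard, while the variable names are illustrative.

```python
sizes = [1, 3, 2]
order = sum(sizes)                       # |S3| = 6

chi_triv = [1,  1,  1]                   # trivial character
chi_sign = [1, -1,  1]                   # sign character
chi_std  = [2,  0, -1]                   # the 2-dimensional simple module W
chi_perm = [3,  1,  0]                   # fixed points on {1, 2, 3}

def inner(phi, psi):
    # psi is real-valued here, so complex conjugation can be omitted
    return sum(s * a * b for s, a, b in zip(sizes, phi, psi)) / order

assert inner(chi_perm, chi_triv) == 1    # trivial module occurs once
assert inner(chi_perm, chi_std) == 1     # W occurs once
assert inner(chi_perm, chi_sign) == 0    # sign module does not occur
```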

Theorem 7.13

Suppose V and W are CG-modules, with characters χV and χW. Then V ≅ W if and only if χV = χW.

Proof

(⇒) This is 7.7.
(⇐) Suppose χV = χW; then 〈χV, χS〉 = 〈χW, χS〉 for all simple modules S. By 7.12, every simple module occurs in V and in W with the same multiplicity, and it follows that V is isomorphic to W.

7.3 The character table

The irreducible characters of a finite group G are the building blocks for all characters of G, and it is convenient to display them in the form of a matrix, which is known as the character table of G.

We have seen that characters are constant on conjugacy classes. We know that the number of conjugacy classes is equal to the number of matrix blocks in the Wedderburn decomposition, hence is equal to the number of irreducible characters.

We recall that the irreducible characters are labelled as χ1, . . . , χk, and we take χ1 to be the trivial character.

Let C1, C2, . . . , Ck be the conjugacy classes of G. Pick gi ∈ Ci. We make the convention that g1 = 1.

Definition 7.14

The character table of G is the k × k matrix

[χi(gj)]i,j


Example 7.15

Let G be a cyclic group of order n, say G = 〈g〉. In chapter 4 we have classified the irreducible representations of G over C. Take ω to be a primitive n-th root of unity; then for each t with 1 ≤ t ≤ n we have the 1-dimensional simple module on which g acts with eigenvalue ω^t, and hence g^j acts with eigenvalue ω^{jt}. The group is abelian, so each g^j is the only element in its conjugacy class.

So, for example, when n = 3 all conjugacy classes have size 1, and the character table is

        1    g    g²
χ1      1    1    1
χ2      1    ω    ω²
χ3      1    ω²   ω
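The whole table can be generated mechanically. Here is a minimal Python sketch (the function name is an illustrative choice); its rows for n = 3, taking t = 0, 1, 2, reproduce the table above.

```python
from cmath import exp, pi

def cyclic_character_table(n):
    """Character table of the cyclic group of order n: entry (t, j) is w^(t*j)."""
    w = exp(2j * pi / n)                 # primitive n-th root of unity
    return [[w**(t * j) for j in range(n)] for t in range(n)]

for row in cyclic_character_table(3):
    print([complex(round(z.real, 3), round(z.imag, 3)) for z in row])
```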

Example 7.16

Let G be the Klein 4-group, say G = 〈x, y〉. This has four 1-dimensional (simple) modules. We describe them by the eigenvalues of x and y on a generator.

        x    y
S1      1    1
S2     −1    1
S3      1   −1
S4     −1   −1

Let χ1, χ2, χ3, χ4 be the corresponding irreducible characters. Then the character table is

        1    x    y    xy
χ1      1    1    1    1
χ2      1   −1    1   −1
χ3      1    1   −1   −1
χ4      1   −1   −1    1

Example 7.17

Let G be the dihedral group of order 8. We take the presentation from chapter 2,

G = 〈σ, τ : σ4 = 1, τ2 = 1, τστ−1 = σ−1〉.

The element σ² commutes with all elements of G, and hence the subgroup N := 〈σ²〉 is normal. One checks that G/N is isomorphic to the Klein 4-group. Any representation of G/N can be inflated to a representation of G, and therefore we have four 1-dimensional representations (by the previous example). These are still irreducible viewed as representations of G, and this gives us four 1-dimensional irreducible characters.

In chapter 2 we have constructed a 2-dimensional representation of G. This is checked to be simple (alternatively, check that the inner product of its character with itself is 1). The formula |G| = ∑_i ni² shows that we have found all irreducible characters.


Moreover, G has five conjugacy classes. We choose representatives for the classes:

g1 = 1, g2 = σ², g3 = σ, g4 = τ, g5 = στ.

The element σ² acts trivially on the 1-dimensional modules, and we can write down the character values of the 1-dimensional modules by just copying appropriately the character table of the Klein 4-group. For the 2-dimensional representation we calculate the trace of the matrices constructed in chapter 2. We get the character table:

        1    σ²    σ    τ    στ
χ1      1    1     1    1    1
χ2      1    1    −1    1   −1
χ3      1    1     1   −1   −1
χ4      1    1    −1   −1    1
χ5      2   −2     0    0    0
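One can check the first orthogonality relation (7.11) on this table numerically. In the following Python sketch each class representative is weighted by the size of its class (1, 1, 2, 2, 2 in the order of the columns, a standard fact about D8 quoted here without proof); all the values happen to be real, so conjugation is omitted.

```python
sizes = [1, 1, 2, 2, 2]                  # class sizes for 1, s^2, s, t, st
order = 8                                # |G|

table = [
    [1,  1,  1,  1,  1],                 # chi1
    [1,  1, -1,  1, -1],                 # chi2
    [1,  1,  1, -1, -1],                 # chi3
    [1,  1, -1, -1,  1],                 # chi4
    [2, -2,  0,  0,  0],                 # chi5
]

def inner(phi, psi):
    return sum(s * a * b for s, a, b in zip(sizes, phi, psi)) / order

for i, chi in enumerate(table):
    for j, chi_prime in enumerate(table):
        assert inner(chi, chi_prime) == (1 if i == j else 0)
```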

By 7.11 the rows of the character table satisfy an orthogonality relation (with each class representative weighted by the size of its class, since the inner product sums over all of G). We will now see that this implies orthogonality of the columns of the character table.

Theorem 7.18

Fix j, ℓ. Then we have

∑_{i=1}^{k} χi(gj) \overline{χi(gℓ)} = 0 if j ≠ ℓ, and = |CG(gj)| if j = ℓ.

Proof

Define a matrix X := [xij] with

xij := |CG(gj)|^{−1/2} χi(gj).

Consider the iℓ-th entry of X\overline{X}^t. This is

∑_{j=1}^{k} χi(gj) \overline{χℓ(gj)} |CG(gj)|^{−1} = (1/|G|) ∑_{g∈G} χi(g) \overline{χℓ(g)} = 〈χi, χℓ〉 = δiℓ,

where the first equality holds because the conjugacy class of gj contains |G|/|CG(gj)| elements and characters are constant on it.

This means that X\overline{X}^t = I. Therefore \overline{X}^t X = I as well. We spell this out and get

δjℓ = ∑_{i=1}^{k} |CG(gj)|^{−1/2} |CG(gℓ)|^{−1/2} \overline{χi(gj)} χi(gℓ)

which gives the statement.
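As a check, the following Python sketch verifies this relation on the character table of the cyclic group of order 3 from Example 7.15; the group is abelian, so |CG(gj)| = |G| = 3 for every j.

```python
from cmath import exp, pi

w = exp(2j * pi / 3)
table = [                                # rows chi1, chi2, chi3 on 1, g, g^2
    [1, 1,    1   ],
    [1, w,    w**2],
    [1, w**2, w   ],
]
k = 3

for j in range(k):
    for l in range(k):
        s = sum(table[i][j] * table[i][l].conjugate() for i in range(k))
        expected = 3 if j == l else 0    # |C_G(g_j)| = 3 on the diagonal
        assert abs(s - expected) < 1e-12
```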


Example 7.19

Let G = A4, the alternating group on four letters. Recall that |G| = 12. We want to find the character table.

(1) Let N be the Klein 4-group inside G. Then N is a normal subgroup of G, and G/N is cyclic of order 3. Each simple module for G/N over C can be viewed as a CG-module, by 'inflation'. We therefore get three simple CG-modules of dimension 1, that is, three linear characters.

(2) The group G has four conjugacy classes (!). Representatives are g1 = 1, g2 = (12)(34), g3 = (123) and g4 = (132). The character table is square, so there must be precisely one more irreducible character. The sum-of-squares formula shows that it has degree three.

We start constructing the character table. Let ω be a primitive 3rd root of 1, and recall 1 + ω + ω² = 0. Then we have (using Example 7.15):

        1    g2    g3    g4
χ1      1    1     1     1
χ2      1    1     ω     ω²
χ3      1    1     ω²    ω
χ4      3

We find the last row by using the orthogonality relations. From the orthogonality of the second and first columns we get

0 = 1 + 1 + 1 + 3χ4(g2),

and hence χ4(g2) = −1. Next, the orthogonality of the third and first columns shows that

0 = 1 + ω + ω² + 3χ4(g3),

and therefore χ4(g3) = 0. Similarly χ4(g4) = 0.
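This computation is easy to mechanize. Here is a Python sketch (variable names are illustrative) which recovers the last row from the orthogonality of each column with the first.

```python
from cmath import exp, pi

w = exp(2j * pi / 3)
degrees = [1, 1, 1, 3]                   # chi_i(1), the first column
known = [                                # chi_1, chi_2, chi_3 on g2, g3, g4
    [1, 1,    1   ],
    [1, w,    w**2],
    [1, w**2, w   ],
]

# Column orthogonality against the first column:
# 0 = sum_i chi_i(g_j) * chi_i(1),  so  chi_4(g_j) = -(partial sum) / 3.
chi4 = [-sum(known[i][j] * degrees[i] for i in range(3)) / degrees[3]
        for j in range(3)]

print([complex(round(z.real, 6), round(z.imag, 6)) for z in chi4])
# prints values equal to -1, 0, 0 (up to floating-point noise)
```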

This shows that there is an irreducible representation of A4 of degree 3, and it also tells us what the character of this representation is. So one would like to actually construct such a representation!

Take the natural permutation module CΩ of S4. This is the direct sum of the trivial module and a (simple) module W of dimension 3 (see 7.6 and chapter 4). We view this module W as a module for A4, and we see that the character of the corresponding representation is precisely χ4.

So we can deduce that W is a simple CA4-module (and then it must also be simple as a CS4-module: a proof without calculations!).

7.4 Products of characters

This part is not in the B2 syllabus.


We have defined tensor products of vector spaces, and tensor products of group representations. The importance of this is that the character of a tensor product is the product of the characters. This gives a very powerful tool to construct new characters from known ones.

Theorem 7.20

Assume G is finite. Suppose V, W are CG-modules, with characters χV, χW respectively. Then V ⊗ W has character χV · χW.

Proof

Let ρV, ρW be the representations corresponding to V, W respectively, and write ρ for the representation corresponding to V ⊗ W. Let g ∈ G. We can choose a basis {ei} of eigenvectors of g on V, with eigenvalues λi, and a basis {fj} of eigenvectors of g on W, with eigenvalues µj (say). Then we use the basis {ei ⊗ fj} of V ⊗ W to calculate the character of V ⊗ W. We have

ρ(g)(ei ⊗ fj) = ρV(g)(ei) ⊗ ρW(g)(fj) = λiei ⊗ µjfj = λiµj(ei ⊗ fj).

Hence

χV⊗W(g) = ∑_{i,j} λiµj = (∑_i λi)(∑_j µj) = χV(g)χW(g).

EXERCISES

7.2. Calculate the character table of the symmetric group G = S4. [You might exploit the factor groups C2 (≅ G/A4) and S3 (≅ G/V4, where V4 is the Klein 4-group). You might also exploit products of characters χχ′ where χ′ is a linear character.]

7.3. Calculate the character table of the symmetric group S5.

7.4. Find the character table of the quaternion group of order 8. Verify that this is the same as the character table of the dihedral group of order 8.
