Statistical Mechanics Notes: Honours

Statistical Mechanics 2014: full lecture notes


Contents

    Preface

    1 Classical Thermodynamics
      1.1 Brief Review of Classical Thermodynamics
        1.1.1 The First Law of Thermodynamics
        1.1.2 The Second Law of Thermodynamics
        1.1.3 Third Law of Thermodynamics
        1.1.4 Zeroth Law of Thermodynamics
      1.2 Entropy
        1.2.1 Origin of the Entropy Function: Clausius Theorem
        1.2.2 Properties of the Entropy Function
      1.3 Mini assignment 1

    2 Fundamental Equation
      2.1 Fundamental Equation of Thermodynamics
      2.2 Alternative Forms of the Fundamental Relation
      2.3 Thermodynamic Potentials
        2.3.1 In terms of S and P: Enthalpy
        2.3.2 In terms of T and V: Helmholtz Free Energy
        2.3.3 In terms of T and P: Gibbs Free Energy
      2.4 The Maxwell Relations
      Exercises

    3 Models of Thermodynamic Systems
      3.1 Quantum Models
        3.1.1 Stationary States
        3.1.2 External Parameters
        3.1.3 Particle Number
        3.1.4 Interaction
        3.1.5 Independent Particle Models
      3.2 Classical Models
        3.2.1 Classical Specification of State
        3.2.2 External Parameters
      Exercises

    4 Isolated Systems: Fundamental Postulates
      4.1 Thermodynamic Specification of States
      4.2 Equilibrium States of Isolated Systems
      4.3 Microscopic View of Thermodynamic Equilibrium
      4.4 Isolation of a System
      4.5 Dependence of Ω on E and δE
      4.6 Thermal Interaction
        4.6.1 The Probability Distribution
        4.6.2 Sharpness of Peak
      4.7 Entropy
        4.7.1 Meaning of … and …
      4.8 Density of States
      4.9 Approximate value for total Ω
      4.10 Fundamental equation
      Exercises

    5 Microcanonical formalism: Magnetic Systems
      5.1 Magnetic Materials and Behaviour
      5.2 Description of Magnetic Behaviour
      5.3 Paramagnetic Materials
        5.3.1 A Model for Paramagnetism
        5.3.2 Single Particle States Available to Each Atom of the Solid
        5.3.3 Microstates of the Solid
        5.3.4 Number of Accessible States in the range E to E + δE
        5.3.5 The Fundamental Relation
        5.3.6 Thermodynamics of the Paramagnetic system
        5.3.7 Equations of State
        5.3.8 Predictions of the Model
        5.3.9 Heat Capacities
        5.3.10 Entropy
      Exercises

    6 Classical Model of a Gas
      6.1 Classical phase space density
      6.2 General Model of a Gas
      6.3 Ideal Gas
        6.3.1 Assumptions
        6.3.2 Fundamental Relation
        6.3.3 Properties of the Gas
      6.4 Monatomic Ideal Gas
        6.4.1 Evaluation of Ω
        6.4.2 Fundamental Equation
        6.4.3 Heat Equation and Specific Heat
        6.4.4 Dependence of S on N
      6.5 Correct Classical Counting of States
      6.6 Gibbs Paradox
      Exercises

    7 Canonical Formalism
      7.1 A New Formalism
      7.2 The Probability Distribution
        7.2.1 Probability of Occurrence of State r
        7.2.2 Probability of Occurrence of Energy E
        7.2.3 Probability of Energy in range E to E + δE
      7.3 Statistical Calculation of System Parameters
        7.3.1 Energy
        7.3.2 Conjugate Variables
      7.4 Fundamental Relation
      7.5 Entropy
      Exercises

    8 Heat Capacity of Solids
      8.1 Modelling Specific Heats
      8.2 Experimental Facts
      8.3 Historical
      8.4 Einstein's Model
        8.4.1 Partition Function
        8.4.2 Thermodynamics of the Oscillator System
        8.4.3 Entropy
        8.4.4 Energy Equation
        8.4.5 Heat Capacity
      8.5 Discussion of Results
        8.5.1 Energy
        8.5.2 Heat Capacity
      8.6 Comparison with Experiment
      Exercises

    9 Paramagnetism: Canonical Approach
      9.1 Magnetic Moment of Spin-S Particles
      9.2 Quantum States of Spin-S Particles
      9.3 Statistics of a Single Paramagnetic Atom
        9.3.1 Average Magnetic Moment
        9.3.2 Average Energy
      9.4 Properties of the Brillouin Functions
      9.5 Properties of Paramagnetic Solids of Spin S
      9.6 Thermodynamics of Paramagnetic Materials
      Exercises

    10 Canonical Formalism in Classical Models
      10.1 Ideal Monatomic Gas
      10.2 Monatomic Gas: Another Method
      10.3 Maxwell's Distribution
        10.3.1 Distribution of a Component of Velocity
        10.3.2 Distribution of Speed
      10.4 Gas in a Gravitational Field
      10.5 Equipartition of Energy
      Exercises

    11 Quantum Theory of Ideal Gases
      11.1 Quantum Theory of Gases
        11.1.1 Indistinguishability of particles
        11.1.2 The Pauli Exclusion Principle
        11.1.3 Spin and Statistics
        11.1.4 Effect of Indistinguishability and Exclusion on Counting Procedures
        11.1.5 Maxwell-Boltzmann Case
        11.1.6 Bose-Einstein Case
        11.1.7 Fermi-Dirac Case
      11.2 The Partition Functions
        11.2.1 Maxwell-Boltzmann Gas
        11.2.2 Bose-Einstein Gas
        11.2.3 Fermi-Dirac Gas
      11.3 Mean occupation numbers

    12 Blackbody Radiation
      12.1 Equilibrium States of a Radiation Field
      12.2 Modelling Cavity Radiation
      12.3 Cavity Modes
      12.4 Partition Function of the Radiation Field
      12.5 Statistics of the Radiation Field in Thermal Equilibrium
      12.6 Planck's Law for Blackbody Radiation
      12.7 Spectral Properties of the Radiation Field
      12.8 Fundamental Relation for Radiation Field
      12.9 Thermodynamics of the Radiation Field
      Exercises

    13 Grand Canonical Formalism
      13.1 Another Formalism
      13.2 Chemical Potential
      13.3 Grand Canonical Distribution
      13.4 The Grand Partition Function
      13.5 Grand Canonical Potential
      13.6 Thermodynamics via the Grand Canonical Potential
      13.7 Relation to the Canonical Potential
      13.8 Application to Boson and Fermion Gases
        13.8.1 Grand Partition Function for Gas of Bosons
        13.8.2 Grand Partition Function for Fermion Gas
      13.9 Occupation numbers
        13.9.1 Bosons
        13.9.2 Fermions
      13.10 Quantum statistics in the classical limit
        Low concentration
        High temperature

    14 Ideal Fermi Gas
      14.1 Fermi-Dirac Particles
      14.2 Ideal Fermi Gas
        14.2.1 Classical limit
      14.3 Formal Criteria for a Degenerate Fermion Gas
      14.4 Density of States
        14.4.1 Properties of the Degenerate Gas
      14.5 Fundamental Equation
        14.5.1 Fermi-Dirac Functions
      14.6 Simplistic Model of a white dwarf star
        14.6.1 Relativistic Density of states
        14.6.2 Energy of a Relativistic Fermi Gas at T = 0
        14.6.3 Pressure of a Relativistic Fermi Gas at T = 0
        14.6.4 Stability of the White Dwarf Star
      Exercises

    15 Phase transitions and critical exponents
      15.1 Dynamical model of phase transitions
      15.2 Ising model in the zeroth approximation
      15.3 Critical Exponents

    A Statistical Calculations
      A.1 The Integral ∫ e^{−x²} dx
      A.2 The Integral ∫_0^∞ x^n e^{−x} dx
      A.3 Calculation of n!
        A.3.1 Approximate Formulae for n!
        A.3.2 Lowest Order Approximation
        A.3.3 Stirling's Formula
        A.3.4 Infinite Series for n!
      A.4 The Gamma Function
        A.4.1 Definition
        A.4.2 Recurrence Relation
        A.4.3 Γ and the Factorial Function

    B Volume of a Sphere in R^n

    C Evaluation of ∫_0^∞ x³ (e^x − 1)^{−1} dx

    D Fermi-Dirac Functions

Preface

    These notes are based on notes written by F A M Frescura. They are intended for the honours course on Statistical Mechanics at the School of Physics, University of the Witwatersrand, and may not be reproduced or used for any other purpose.

    DPJ January 2009

Chapter 1
Classical Thermodynamics

    1.1 Brief Review of Classical Thermodynamics

    Historically, thermodynamics developed out of the need to increase the efficiency of early steam engines. Classical Thermodynamics, TD, (from the Greek thermos meaning heat and dynamics meaning power) is a branch of physics that studies the effects of changes in thermodynamic variables on physical systems at the macroscopic scale by analyzing the collective behaviour of the constituent parts. Roughly, heat means "energy in transit" and dynamics relates to "movement"; thus, in essence, thermodynamics studies the movement of energy and how energy instills movement.

    The starting point for most thermodynamic considerations is the laws of thermodynamics, which postulate that energy can be exchanged between physical systems as heat or work. They also postulate the existence of a quantity named entropy, which can be defined for any system. In thermodynamics, interactions between large ensembles of objects are studied and categorized. Central to this are the concepts of system and surroundings. A system is composed of particles, whose average motions define its properties, which in turn are related to one another through equations of state. Properties can be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes. With these tools, thermodynamics describes how systems respond to changes in their surroundings. This can be applied to a wide variety of topics in science and engineering, such as engines, phase transitions, chemical reactions, transport phenomena, and even black holes. The results of thermodynamics are essential for other fields of physics and for chemistry, chemical engineering, aerospace engineering, mechanical engineering, cell biology, biomedical engineering, and materials science, to name a few.

    Classical Thermodynamics deals with the macroscopic properties of macroscopic systems. It does so without making any assumptions about the ultimate constitution of matter, and does not really depend on whether matter has any ultimate constitution at all. In order to develop a statistical description of thermodynamic systems from a microscopic description, the basic facts that we will need about macroscopic systems are as follows:

    1. Left for a sufficiently long time, each macroscopic system eventually settles into a condition in which its macroscopic properties no longer change with time but remain constant until the system is disturbed by outside influences. These settled states are called equilibrium states, or states of thermodynamic equilibrium, of the system. The characteristic time needed for the system to settle into an equilibrium state is called the relaxation time for the system. Relaxation times differ hugely for different systems, and can be as short as 10^{-6} s for gases, and as long as several centuries for glass.

    2. In each equilibrium state, the configuration of the system can be specified by a small number of macroscopic configuration variables. These normally include variables to specify the spatial dimensions of the system (volume, area, length), the amount of material it contains (mole numbers or masses of each constituent chemical species, total mole number or mass of the system), and its electrical and magnetic condition (polarisation, magnetisation). The configuration of the simplest systems, called simple hydrostatic systems, is specified by the volume V it occupies and the total amount of material that it contains. Classically, this was specified by its total mass m. However, in view of the atomic nature of matter, it is better to specify the material content of the system either by the total number of moles it contains (commonly used in chemistry), or by the total number N of atoms, molecules or particles it contains (commonly used in physics).

    3. For each configuration variable of a given system, there is an associated generalised force which is responsible for changing the value of that configuration variable. Thus, pressure is responsible for altering volume, surface tension alters area, force alters length, chemical potentials alter species mole numbers, electric field alters polarisation, magnetic field alters magnetisation, and so on.

    4. In each equilibrium state of the system, each generalised force has a fixed constant value.


    5. The generalised forces are conjugate to their associated configuration variables in the sense that they occur conjointly in the expression for the work done by the system in a given change of configuration. Denote the configuration variables by x_i, and their associated generalised forces by X_i. Then, if the configuration of the system is changed quasistatically and non-dissipatively (that is, reversibly) from value x_i to value x_i + dx_i, the amount of work done by the system is given by

    đW = −Σ_i X_i dx_i = −X_i dx_i    (1.1)

    where the last form uses the summation convention. (For a simple hydrostatic system, for instance, the force conjugate to V in this sense is −P, so that đW = P dV.)

    6. Each equilibrium state has a fixed definite temperature T. This is the only variable which, in the initial stages of the theory, does not have a conjugate. After the introduction of the Second Law, which leads to the discovery of a new configuration variable for the system called its entropy S, we discover that T is the conjugate of S. However, S is not a directly measurable macroscopic parameter for the system, so we do not include it here.

    7. In each equilibrium state, each macroscopic measurable variable, that is x_i, X_i, and T, has a well defined constant value, and the equilibrium state is fully characterised by these values. The variables (x_i, X_i, T) are therefore called state variables, since they characterise the equilibrium state of the system.

    8. Not all of the state variables are independent. Fixing a certain number of them uniquely fixes the values of all the others. This means that the state variables are subject to a certain number of relations, called the equations of state of the system. The number of independent state variables in the system is called the number of degrees of freedom of the TD system. For example, a simple hydrostatic system has state variables (V, P, T, N). Of these, only two are independent. This system thus has two degrees of freedom. Mathematically, we can represent each set of values (x_i, X_i, T) as a point in a (2n + 1)-dimensional Cartesian space, R^{2n+1}. The existence of equations of state means that not all points of this space represent possible equilibrium states of the system. Only those points that lie on the lower dimensional hypersurface defined by the equations of state represent equilibrium states. The points off that surface have no physical interpretation at all. Denote the number of equations of state for the system by r. The existence of r relations among 2n + 1 variables defines a (2n + 1 − r)-dimensional hypersurface. The space of equilibrium points is thus a manifold, or space, of dimension (2n + 1 − r). This space is called the state space for the TD system.

    The basic principles of classical TD that we need to use are as follows:

    1.1.1 The First Law of Thermodynamics

    In each equilibrium state, the system has a definite, well defined amount of energy E, called its internal energy. The internal energy is therefore a function of the equilibrium state, and so can be expressed mathematically as a function of any set of independent variables taken from (x_i, X_i, T). For example, for a simple hydrostatic system, we have E = E(V, P), E = E(V, T), or E = E(P, T).

    If the equilibrium state of the system is changed, its internal energy increases by a fixed definite amount ΔE = E(f) − E(i), where i and f represent the initial and final equilibrium states of the system. This energy difference must be supplied by the surroundings of the system, and may be supplied in one of two forms only: work and heat. Work is energy transferred to or from the system by virtue of changes in its configuration coordinates alone. Energy transferred by any means other than a change of system configuration is called heat. From a microscopic point of view, heat is energy transferred by changes in all of the configurational degrees of freedom of the system which remain unaccounted for in the macroscopic description, that is, the microscopic degrees of freedom. The sign conventions for work and heat are as follows: positive work is energy transferred out of the system by the configurational degrees of freedom; positive heat is energy transferred into the system.

    1.1.1.1 The Principle of the Conservation of Energy

    This principle states that the total energy of the system and its surroundings is constant. Denote the work done by the system in a given change of equilibrium by W, and the heat into the system by Q. Then, the conservation law gives E(f) = E(i) + Q − W, or

    Q = ΔE + W    (1.2)

    For infinitesimal changes of equilibrium state, this becomes

    đQ = dE + đW    (1.3)

    where the symbol đ indicates a small amount of the associated quantity, and not an infinitesimal difference of function values. In other words, there are no functions Q and W that can be differentiated to give đQ and đW. These two equations are often called "the First Law of Thermodynamics". In fact, they are not. The First Law asserts the existence of the function E. These two equations are a combination of two independent laws, the First Law and the Principle of the Conservation of Energy. But no confusion can or will arise if you continue to refer to these equations as "the First Law".
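    As a simple numerical illustration (the figures are hypothetical), suppose a gas absorbs Q = 100 J of heat from its surroundings while doing W = 30 J of work on them. Then (1.2) gives ΔE = Q − W = 70 J for the increase in its internal energy, irrespective of how the process was carried out.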

    1.1.2 The Second Law of Thermodynamics

    This law states that heat does not, of itself, flow from a cold body to a hotter one (Clausius Statement). An equivalent statement is that no system can of itself convert a given amount of heat completely into work (Kelvin-Planck Statement). Logic applied relentlessly to the second law shows that, for every thermodynamic system, there exists a function of state, S, called the entropy of the system, which has the property that for all processes of the system,

    T dS = dE − X_i dx_i    (1.4)

    For simple hydrostatic systems, where the force conjugate to V is −P and that conjugate to N is the chemical potential μ, this relation is

    T dS = dE + P dV − μ dN    (1.5)

    If, furthermore, the system is closed, then dN = 0 and this relation reduces to the familiar

    T dS = dE + P dV    (1.6)

    This relation is the single most important equation in TD, and we consider it in detail later.

    The most common enunciation of the second law of thermodynamics is essentially due to Rudolf Clausius:

    "The entropy of an isolated system not in equilibrium will tend to increase over time, approaching a maximum value at equilibrium."

    Note that the content of (1.6) is different in general from that of (1.3). In a very special subclass of idealised processes, called reversible (these are the TD analogue of frictionless motion in Mechanics), we have

    đQ = T dS and đW = −X_i dx_i    (1.7)

    and the content of (1.3) and (1.6) becomes identical. However, for irreversible processes, which hugely outnumber the reversible ones and include all real processes which occur in nature (as opposed to ideal processes, which do not), we have

    đQ < T dS and đW < −X_i dx_i    (1.8)

    For these processes, equations (1.3) and (1.6) are different in content and provide independent pieces of information.

    1.1.3 Third Law of Thermodynamics

    The third law of thermodynamics is an axiom of nature regarding entropy and the impossibility of reaching the absolute zero of temperature. The most common enunciation of the third law of thermodynamics is: "As a system approaches the absolute zero of temperature, all processes cease and the entropy of the system approaches a minimum value."


    1.1.4 Zeroth Law of Thermodynamics

    The zeroth law of thermodynamics is a generalized statement about bodies in contact at thermal equilibrium and is the basis for the concept of temperature. The most common enunciation of the zeroth law of thermodynamics is: "If two thermodynamic systems are in thermal equilibrium with a third, they are also in thermal equilibrium with each other."

    1.2 Entropy

    Entropy is at the centre of statistical mechanics.

    1.2.1 Origin of the Entropy Function: Clausius Theorem

    Using the Second Law, Clausius proved the existence of a new state function S for any given thermodynamic system. He called the new state function the entropy of the system. Entropy means conversion. Clausius chose this name because S determines the maximum amount of work that can be derived from a given amount of heat. In other words, it determines the conversion of heat into work. The existence of S follows directly from a result which we now call the Clausius Theorem.

    Theorem 1 (Clausius Theorem)

    If Γ is any quasistatic cyclic process for a given system, then

    ∮_Γ đQ/T ≤ 0    (1.9)

    The integral here is taken over one complete cycle. The equality holds if and only if the process is also non-dissipative, and thus reversible.

    A general proof of this theorem is found in Fermi (Fermi, 1937, p 46-49). Other books, like Zemansky and Dittmann (Zemansky and Dittmann, 1997, p 186-189) and Sears and Salinger (Sears and Salinger, 1975, p 127-129), generally give a proof that is valid only for systems of 2 degrees of freedom, and prove only half of Clausius' result.

    If we restrict ourselves to reversible cyclic processes alone, this theorem states that, for all reversible cyclic processes of the system,

    ∮ đQ/T = 0

    This means that the integral taken between any two given equilibrium states is path independent, and so can be used to define a function S by putting

    S(f) = ∫_i^f đQ/T + S(i)    (1.10)

    Here i is any fixed chosen equilibrium state, and f is any other equilibrium state. The value S(i) is effectively a constant of integration and may be assigned arbitrarily. With its value fixed, the integral may then be evaluated for each state f, thus assigning a unique value S(f) to each equilibrium state of the system. S is thus defined uniquely for the system up to an additive constant.

    Note that (1.10) defines S as a function of the equilibrium states of the system, and thus gives S as a function of the state variables. So, though we use reversible processes to infer the value of S for each equilibrium state, once S is known, its value is determined only by the given equilibrium state and does not in any way depend on how the system arrived in that state. S is therefore a function of the equilibrium state alone, and is not in any way a function of process.
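    As an illustration (the ideal gas results PV = nRT and E = E(T) are assumed here rather than derived; they are developed in Chapter 6), consider n moles of gas expanding reversibly and isothermally from volume V_i to V_f. Since dE = 0 along this path, đQ = đW = P dV, and (1.10) gives

    ΔS = ∫_i^f đQ/T = ∫_{V_i}^{V_f} (nR/V) dV = nR ln(V_f/V_i)

    Because S is a state function, this is also the entropy change for any other process, reversible or not, joining the same two equilibrium states.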


    With S determined, we may now use S to calculate the change in entropy ΔS = S(f) − S(i) for any process Γ, quasistatic or non-quasistatic, in which the system begins in equilibrium state i and ends in equilibrium state f. And furthermore, if the process Γ is quasistatic, be it reversible or irreversible, we may also calculate the value of the integral ∫_Γ đQ/T for the process. How does the value of this integral compare with the value of ΔS? To answer this, let R be any reversible process that takes the system from the given initial state i to the given final state f. Then the combined process Γ + (−R), where −R is the reverse process of R, is a quasistatic cyclic process for the system. By the Clausius Theorem (equation (1.9)) we then have

    0 ≥ ∮_{Γ+(−R)} đQ/T = ∫_Γ đQ/T + ∫_{−R} đQ/T = ∫_Γ đQ/T − ∫_R đQ/T = ∫_Γ đQ/T − ΔS    (1.11)

    so that for any quasistatic process Γ whatever, we have

    ∫_Γ đQ/T ≤ ΔS    (1.12)

    We have therefore shown that,

    for any quasistatic process of the system, the change in entropy of the system is always greater than or equal to the integral ∫ đQ/T. The equality holds only for reversible processes.

    For infinitesimal processes, the above result gives

    dS ≥ đQ/T    (1.13)

    or, since T > 0 always,

    T dS ≥ đQ    (1.14)

    with the equality holding only for reversible infinitesimal processes. We thus see that, in a general process, we do not have đQ = T dS. Consequently also, in a general process, đW ≤ −X_i dx_i, with the equality holding only in a reversible process. For a simple hydrostatic system, this means đW ≤ P dV. These results for quasistatic processes are summarised in the following table:

        Reversible              Irreversible

        ΔS = ∫ đQ/T             ΔS > ∫ đQ/T
        T dS = đQ               T dS > đQ
        đW = P dV               đW < P dV
        đW = −X_i dx_i          đW < −X_i dx_i
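    A standard illustration of the strict inequality is the free expansion of an ideal gas into a vacuum (the ideal gas is used here purely for definiteness). No heat enters the system and no work is done, so ∫ đQ/T = 0 along the process; yet the gas ends in the same equilibrium state as in the reversible isothermal expansion discussed above, so ΔS = nR ln(V_f/V_i) > 0. The entropy of the system increases even though no heat is supplied, and đQ = T dS clearly fails for this process.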

    1.2.2 Properties of the Entropy Function

    For reversible and irreversible processes, we have

    đQ = dE + đW.    (1.15)

    For reversible processes, we also have

    đQ = T dS and đW = −X_i dx_i    (1.16)

    Combining these relations, we get

    T dS = dE − X_i dx_i    (1.17)


    Though we arrived at this relation by considering reversible processes, the relation itself makes no reference at all to any process. It contains only state variables, and infinitesimal differences (that is, differentials) of state variables. It is therefore an equation among the state variables of the system that is valid at all times, and not only while the system undergoes a reversible process. This equation is called the fundamental equation for the system.

    Equation (1.17) is a differential expression. It expresses the differential dS in terms of the differentials dE and dx_i. Since S is a state function, dS is an exact differential. Equation (1.17) thus shows that the primitive of the differential dS is a function of the variables E and x_i, and assumes also that all other state variables, including T and the X_i, have been expressed in terms of the independent variables E, x_1, ..., x_a. The entropy of the system is therefore properly a function of the internal energy E of the system, and of the system configuration variables x_i. Thus,

    S = S(E, x_1, x_2, ..., x_a)    (1.18)

    The following properties of S can be deduced:

    1. S is continuous and differentiable. This fact follows from its defining equation, and is used implicitly every time we use the differential dS.

    2. S is an extensive variable. Thus, if a given system containing n moles of material in a given equilibrium state has entropy S, then a system containing λn moles of the same material, in the same equilibrium state, will have entropy λS. For a given equilibrium state of a system, S is therefore proportional to N, the total number of moles contained in the system.

    3. The function S(E, x_1, ..., x_a) is homogeneous of degree 1. That is, for any λ ∈ R, we have

    S(λE, λx_1, λx_2, ..., λx_a) = λ S(E, x_1, x_2, ..., x_a)

    This property has important consequences that are often exploited. The most important is contained in Euler's Theorem for homogeneous functions, which states that

    Euler's Theorem: If f(x_1, ..., x_n) is homogeneous of degree k, that is, f has the property that for all λ ∈ R

    f(λx_1, ..., λx_n) = λ^k f(x_1, ..., x_n)

    then

    x_1 (∂f/∂x_1)(x_1, ..., x_n) + ... + x_n (∂f/∂x_n)(x_1, ..., x_n) = k f(x_1, ..., x_n)

    Using the summation convention, this result can be written more concisely in the form

    x_i (∂f/∂x_i) = k f

    Applied to the entropy, this theorem gives

    E (∂S/∂E)_{x_1,...,x_a} + x_i (∂S/∂x_i)_{E,x_1,...,x̂_i,...,x_a} = S    (1.19)

    where we have used the summation convention, and where the symbol x̂_i means "omit the variable x_i from the list of variables".
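    For example, for a simple hydrostatic system with S = S(E, V, N), equation (1.5) identifies the partial derivatives as ∂S/∂E = 1/T, ∂S/∂V = P/T and ∂S/∂N = −μ/T, so the theorem (with degree k = 1) gives

    E/T + PV/T − μN/T = S,  that is,  E = TS − PV + μN,

    which is the integrated (Euler) form of the fundamental relation.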

    4. S is a monotonically increasing function of E. That is,

    (∂S/∂E)_{x_1,...,x_a} > 0    (1.20)

    We shall see later that, physically, this means that the (absolute) temperature T of the system is positive,

    T > 0


    This fact incorporates several results, including the Third Law of Thermodynamics, which leads to the conclusion that the absolute zero of temperature is an unattainable lower limit of temperature. We can approach as close as we like to the absolute zero, but can never reach it.

    5. S is additive over subsystems. Thus if a given system is a compound system consisting of separate constituents α, each with its own variables and entropy function S^(α) of those variables, then the total entropy of the composite system is

    S = Σ_α S^(α)    (1.21)

    References

    Wikipedia, http://en.wikipedia.org/wiki/
    Fermi, E., 1937, Thermodynamics, Dover Publications Inc., New York.
    Sears, F. W., and Salinger, G. L., 1975, Thermodynamics, Kinetic Theory, and Statistical Thermodynamics, Addison-Wesley Publishing Company, Reading, Massachusetts.
    Zemansky, M. W., and Dittmann, R. H., 1997, Heat and Thermodynamics, Seventh Edition, McGraw-Hill, Boston, Massachusetts.

    Brief reviews of Classical Thermodynamics can be found in the following texts:

    Callen, H. B., 1985, Thermodynamics and an Introduction to Thermostatics, John Wiley and Sons, New York, p 5-26.
    Reif, F., 1965, Fundamentals of Statistical and Thermal Physics, McGraw-Hill Book Company, New York, p 122-123.


    1.3 Mini assignment 1

    Various terms and concepts used in Statistical Mechanics and Thermodynamics

    This assignment is intended to give you the opportunity to make sure that you are familiar with various terms and concepts used in Statistical Mechanics and Thermodynamics. The list is not exhaustive and we may add more terms and concepts as the course progresses. Write short explanatory definitions or descriptions of the following terms and concepts.

    1. Thermodynamics
    2. Statistical Mechanics
    3. Chemical Thermodynamics
    4. Intensive variables and extensive variables
    5. Thermodynamic state
    6. Reversible process
    7. Irreversible process
    8. Thermodynamic systems
    9. Conjugate variables
    10. Thermodynamic process:
        (a) isobaric
        (b) isochoric
        (c) isothermal
        (d) isentropic
        (e) isenthalpic
        (f) adiabatic
    11. The laws of thermodynamics

Chapter 2
Fundamental Equation

    2.1 Fundamental Equation of Thermodynamics

    By combining the First and the Second Laws of thermodynamics, we arrive at the equation

    T dS = dE − X_i dx_i.    (2.1)

    This relation is the most important single equation in thermodynamics. The reason for its importance is that it contains within it everything that can be known about a thermodynamic system. For ease of notation in the rest of this section, we shall only consider simple closed hydrostatic systems, where this expression reduces to

    T dS = dE + P dV.    (2.2)

    (A closed hydrostatic system is a system of constant mass that exerts a uniform pressure on its surroundings. Its equilibrium states can be described in terms of pressure, volume and temperature. The equation of state takes the form F(P, V, T) = 0, which implies that there are only two independent variables.) Remember, however, that in general the thermodynamic functions will depend on other variables such as particle numbers as well.

    The importance of equation (2.1) or (2.2) can be seen as follows. Express equation (2.2) in the form

    dS = (1/T) dE + (P/T) dV    (2.3)

    This is a differential expression in three variables: S, E and V. The fact that S, E and V are the variables in this relation, and not any other combination of P, T, S, E and V, is clear from the fact that each of them appears in the expression as a differential. In contrast, neither dP nor dT appears in (2.2) or (2.3).

    In (2.3), dS has been expressed in terms of dE and dV. dE and dV therefore determine dS. This means, by virtue of the way in which we have written (2.3), that we have chosen to regard E and V as the independent variables, with S dependent. Implicitly therefore, S has been expressed as a function of E and V.

    Since E and V are the independent variables in (2.3), with S a function of E and V, the right hand side must be a differential expression in E and V. The coefficients of dE and dV are thus functions of E and V. More explicitly, (2.3) assumes that both 1/T and P/T, and therefore also T and P, are functions of E and V. In other words, the right hand side of (2.3) is a differential expression of the type

    f(E, V) dE + g(E, V) dV    (2.4)

    where f(E, V) = 1/T and g(E, V) = P/T respectively.

    By the Clausius Theorem, the differential dS is exact. Equation (2.3) must therefore arise by differentiation of a function of the form

    S = S(E, V)    (2.5)

    Differentiating (2.5) we get

    dS = (∂S/∂E)_V dE + (∂S/∂V)_E dV    (2.6)

    Comparing with (2.3) we get

    1/T = (∂S/∂E)_V    (2.7)


    and

    P/T = (∂S/∂V)_E    (2.8)

    Equations (2.7) and (2.8) are essentially the heat equation (the energy as a function of temperature and volume) and the mechanical equation of state (the pressure as a function of the configuration variables excluding energy) for the system, albeit in a form different from the usual, and requiring a little manipulation to reduce them to something more familiar.

    Consider first equation (2.7). Since S is a function of E and V, so is the partial derivative (∂S/∂E)_V. Equation (2.7) is therefore an equation relating E, T and V. It is therefore an implicit equation for E in terms of V and T, and can be made explicitly into such an equation by solving it for E in terms of T and V. This equation must therefore be the heat equation for the system. For, if it were not, then it, together with the heat equation, would provide two relations between E, T and V. This would mean that there is only one independent variable among E, V and T. This is a contradiction, since this system has two independent variables and not one alone. So the heat equation cannot be different from equation (2.7). Thus (2.7) is the heat equation for the system in implicit form.

    Now consider equation (2.8). This is a relation between the variables P, T, E and V. We can use equation (2.7) to eliminate E, leaving an equation between P, T and V. This relation must be the equation of state for the system. For if it were not, then it, together with the equation of state for the system, would provide two relations between the three variables P, T and V. This would mean that there is only one independent variable among the three, which contradicts the fact that there are two independent variables among them. The equation obtained by eliminating E from (2.8) must therefore be the equation of state for the system.

    This is a remarkable result. All thermodynamic information of a simple hydrostatic system is contained in its two equations of state, the mechanical equation and the heat equation. From these, every possible macroscopic property of the system, and its behaviour in every conceivable quasistatic non-dissipative process, can be calculated. Since both of these equations can be calculated from equation (2.5), all thermodynamic information of the system is contained in this single equation. To fully specify the thermodynamics of a hydrostatic system, therefore, all we need do is determine the single equation

    S = S(E,V )

    From it, we can deduce, by simple differentiation and elementary manipulations, both of the equations of state for the system, and from these equations we can then deduce by differentiation CP, the heat capacity at constant pressure, CV, the heat capacity at constant volume, and every other system parameter of interest. To know (2.5), therefore, is to know everything that there is to know about the system.
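    As a concrete illustration, take the monatomic ideal gas, whose fundamental relation is derived in Chapter 6 and has, for fixed N and up to additive and multiplicative constants, the form

    S(E, V) = Nk ln[(E/E0)^{3/2} (V/V0)] + S0

    with E0, V0 and S0 constants. Then (2.7) gives 1/T = (∂S/∂E)_V = 3Nk/2E, i.e. E = (3/2)NkT (the heat equation), and (2.8) gives P/T = (∂S/∂V)_E = Nk/V, i.e. PV = NkT (the mechanical equation of state).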

    This makes equation (2.5) the single most important relation in all of thermodynamics. To emphasise its importance, it is called the Fundamental Relation of Thermodynamics.

    We shall see in later sections that the content of the fundamental relation can be expressed in a variety of different, equivalent, ways. To distinguish the particular form (2.5) from its equivalent representations, we shall refer to the fundamental equation as expressed in (2.5) as the fundamental equation in entropy representation. This name reflects the fact that (2.5) expresses the entropy S of the system as a function of E and V.

    The entropy representation of the fundamental equation, like all its other representations that we shall encounter, is deduced from the differential relation (2.2). We shall therefore refer to equation (2.2) as the fundamental relation in differential form.

    Note that the argument presented above explicitly uses the fact that S has been expressed in terms of E and V. We may in fact express S, like any other state variable, in terms of any convenient set of independent variables. However, if we do so, we lose some of the information that is contained in equation (2.5). The argument presented above explicitly requires that S be expressed in terms of E and V, and not in terms of any other set of independent variables. In this sense, E and V are the natural variables for the function S. If we choose to express S in any other form, we forfeit one or other of the equations of state and so arrive only at partial information about the system. S, expressed in terms of any other pair of variables, is therefore not a fundamental relation for the system.


    2.2 Alternative Forms of the Fundamental Relation

    To arrive at relation (2.5), we isolated dS in (2.2) and expressed it in terms of dE and dV, so obtaining equation (2.3). This was rather arbitrary. We could equally well have isolated dE, expressing it in terms of dS and dV, to obtain

    dE = T dS − P dV    (2.9)

    or dV, expressing it in terms of dS and dE, to obtain

    dV = (T/P) dS − (1/P) dE    (2.10)

    Then by an argument analogous to the one used above, we could have shown that complete information about the thermodynamic system is contained in the functions E = E(S, V) or V = V(S, E) respectively. Each of the functions E = E(S, V) or V = V(S, E) is therefore another fundamental relation for the system, called respectively the energy representation and the volume representation of the fundamental relation. The details are as follows.

    Energy Representation:

    Write equation (2.2) in the form

    dE = T dS − P dV    (2.11)

    This presupposes that E is given in the form

    E = E(S, V)    (2.12)

    Differentiating (2.12), we get

    dE = (∂E/∂S)_V dS + (∂E/∂V)_S dV    (2.13)

    Comparing with (2.11) gives

    T = (∂E/∂S)_V    (2.14)

    and

    P = −(∂E/∂V)_S    (2.15)

    Equations (2.14) and (2.15) are then, up to some elementary manipulations, the equations of state for the system. This is seen as follows.

    Equation (2.14) gives T as a function of S and V. Equation (2.15) gives P as a function of S and V. If we eliminate S from these two equations, we get a relation between P, V and T. By the same reasoning as above, this must be the mechanical equation of state. Also, if we eliminate S from (2.12), using either (2.14) or (2.15), we get E as a function either of T and V, or of P and V. Either way, we get the heat equation in one of its familiar forms.
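    For example, the monatomic ideal gas relation of the previous illustration can be inverted to give, for fixed N, E(S, V) = c V^{−2/3} e^{2S/3Nk} with c a constant. Then (2.14) gives T = (∂E/∂S)_V = 2E/3Nk, i.e. E = (3/2)NkT, while (2.15) gives P = −(∂E/∂V)_S = 2E/3V; eliminating E between the two yields PV = NkT, the mechanical equation of state.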

    Volume Representation:

    Write equation (2.2) in the form

    dV = (T/P) dS − (1/P) dE    (2.16)

    This presupposes that V is given in the form

    V = V(S, E)    (2.17)


    Differentiating (2.17), we get

    dV = (∂V/∂S)_E dS + (∂V/∂E)_S dE    (2.18)

    Comparing with (2.16) gives

    T/P = (∂V/∂S)_E    (2.19)

    and

    −1/P = (∂V/∂E)_S    (2.20)

    Equation (2.20) gives P as a function of S and E. Equation (2.19) gives T/P as a function of S and E. And since we already know P as a function of S and E, we get also T as a function of S and E. Eliminating S from P = P(S, E) and T = T(S, E), we get a relation between P, T and E, which is the heat equation. Further, eliminating S and E from (2.17) using (2.19) and (2.20), we get V as a function of T and P, which gives the mechanical equation of state.

    2.3 Thermodynamic Potentials

    It is clear from the previous section that the fundamental relation can be expressed in many forms. In each of its forms, some thermodynamic variable of the system is expressed as a function of two other, uniquely determined, state variables. That function is called a thermodynamic potential. The reason for this name appears to be that all thermodynamic quantities can be calculated from the chosen thermodynamic potential by simple differentiation, in a manner analogous to the way that forces in mechanics are calculated from force potentials. The analogy between thermodynamics and mechanics in fact is very close. The thermodynamic potentials play a role in thermodynamics entirely analogous to that played by the Lagrangian function in Lagrangian mechanics and the Hamiltonian function in Hamiltonian mechanics. All the system information of a mechanical system is contained implicitly in the Lagrangian or the Hamiltonian, and is extracted from these by differentiations. So also all the system information of a thermodynamic system is contained implicitly in the thermodynamic potentials and is extracted from them by differentiations.

    Note that each representation of the fundamental relation gives rise to a corresponding thermodynamic potential. Any one of these potentials is sufficient for a complete description of the system. In fact, all of the potentials are equivalent in information content.

    The fundamental relation can be represented in a variety of ways. However, not all its representations are equally useful. For example, of the representations considered above, the energy and entropy representations are often used, but the volume representation is not.

    The usefulness of a particular representation is determined by what variables are controlled in a given experimental situation. Generally, we obtain the simplest description of a system if we choose as independent those variables that are controlled in the experiment, and express all others in terms of them.

    When altering the representation of the fundamental equation to suit a particular context, we must be careful not to lose information. Incorrect manipulation leads to loss of information, and yields expressions which are not representations of the fundamental relation. The method to be followed when changing representation is illustrated in the following important examples.

    2.3.1 In terms of S and P: Enthalpy

    Suppose we wish to write the fundamental equation in a way that regards S and P as the independent variables. The fundamental equation in differential form is

    T dS = dE + P dV    (2.21)

    This equation is a differential relation among the variables S, E and V. P is not among them. To make


    P appear among the variables, we must manipulate (2.21) in such a way as to generate a term containing dP. This is done by using the product rule of differentiation to rewrite the term P dV in the form

    P dV = d(PV) − V dP    (2.22)

    Substituting into (2.21) we get

    T dS = dE + d(PV) − V dP = d(E + PV) − V dP    (2.23)

    Here we have combined the differentials dE and d(PV) into a single differential d(E + PV). The reason for this step is that our system has only two independent variables. Any other is expressible in terms of the two independent ones alone. Any single differential relation must therefore be reducible to an expression containing three variables only, with two regarded as independent, and the third a function of the two. The obvious differentials to combine in (2.23) are the ones without coefficients. Combining them yields a new state variable. In this case, the new variable is

    H = E + PV,    (2.24)

    which is the enthalpy of the system. Thus (2.21) becomes

    T dS = dH − V dP    (2.25)

    This is a new differential representation of (2.21). It is equivalent to it, since we can pass from (2.21) to (2.25) and back again.

    It is interesting that the enthalpy appears in a natural way in these manipulations. It shows that, expressed in terms of the correct variables, enthalpy gives a complete representation of the thermodynamic system.

    We may now choose which of the three variables P, S and H we wish to regard as independent. A particularly useful representation is obtained by expressing H in terms of P and S. In terms of (2.25), this means writing

    dH = T dS + V dP    (2.26)

    Equation (2.26) assumes implicitly that H is given as a function

    H = H(S, P)    (2.27)

    so that

    dH = (∂H/∂S)_P dS + (∂H/∂P)_S dP    (2.28)

    Comparing with (2.26), we get

    T = (∂H/∂S)_P    (2.29)

    and

    V = (∂H/∂P)_S    (2.30)

    Equations (2.29) and (2.30) are essentially the equations of state for the system. Eliminating S from (2.29) and (2.30) gives the mechanical equation of state. The heat equation is then obtained by noting that H = E + PV, so that

    E = H(S, P) − P V(S, P) = H(S, P) − P (∂H/∂P)_S

    which gives E as a function of S and P. This is the heat equation, expressed in terms of S and P. If we want it in terms of T and P, we can eliminate S using (2.29). If we want it in terms of V and P, we can eliminate S using (2.30). Equation (2.27) therefore contains full information about the thermodynamic system, so it is a representation of the fundamental equation. It is called the fundamental equation in enthalpy representation.


    The representations obtained by writing S = S(H, P) and P = P(S, H) are hardly ever used and so have no name.

    As an exercise, work out the details of these representations.

    2.3.2 In terms of T and V: Helmholtz Free Energy

    Suppose we wish to write the fundamental relation in terms of T and V. In differential form, the fundamental relation is

    T dS = dE + P dV    (2.31)

    V appears as one of the variables in this equation. However, T does not. To make it appear as a variable, we must manipulate (2.31) in such a way as to generate a term containing dT. This is done by rewriting the term T dS, using the product rule of differentiation, in the form

    T dS = d(TS) − S dT

    Substituting into (2.31) we get

    d(TS) − S dT = dE + P dV    (2.32)

    Since we wish to regard T and V as independent, we need to express everything in terms of dT and dV. This gives

    d(E − TS) = −S dT − P dV    (2.33)

    The function F = E − TS is a new state function that we have not previously encountered. It is called the Helmholtz Free Energy. In terms of it, (2.33) becomes

    dF = −S dT − P dV    (2.34)

    Since F = E − TS is a state function, dF is exact, and so (2.34) is the differential form of

    F = F(T, V)    (2.35)

    Differentiating (2.35), we get

    dF = (∂F/∂T)_V dT + (∂F/∂V)_T dV    (2.36)

    and comparing with (2.34), we get

    S = −(∂F/∂T)_V    (2.37)

    and

    P = −(∂F/∂V)_T    (2.38)

    Equations (2.37) and (2.38) are essentially the equations of state for the system. In fact, equation (2.38) is a relation between P, V and T and so is directly the mechanical equation of state. The heat equation is obtained by noting that F = E − TS, so that

    E = F(T, V) + T S(T, V) = F(T, V) − T (∂F/∂T)_V    (2.39)

    which is the heat equation in terms of T and V. Equation (2.35) thus contains all thermodynamic information and so is a representation of the fundamental equation. It is called the Helmholtz Representation.


    2.3.3 In terms of T and P: Gibbs Free Energy

    This is the last representation that we consider. There are still others that are useful, but the above are sufficient to illustrate the method.

    We wish to write the fundamental equation in terms of T and P. The fundamental equation in differential form is

    T dS = dE + P dV    (2.40)

    To make both T and P appear in this equation as variables, we need to alter two terms by means of the product rule. By the same method as described in the previous examples, we get

    d(TS) − S dT = dE + d(PV) − V dP

    or, collecting together the differentials without coefficients,

    d(E + PV − TS) = −S dT + V dP    (2.41)

    The function G = E + PV − TS is another new function, not previously encountered. It is called the Gibbs Free Energy. In terms of it, (2.41) becomes

    dG = −S dT + V dP    (2.42)

    which is a differential form of

    G = G(T, P)    (2.43)

    Differentiating,

    dG = (∂G/∂T)_P dT + (∂G/∂P)_T dP    (2.44)

    and comparing with (2.42), we get

    S = −(∂G/∂T)_P    (2.45)

    and

    V = (∂G/∂P)_T    (2.46)

    Equation (2.46) is directly the equation of state. The heat equation is obtained by noting that G = E + PV − TS, so that

    E = G(T, P) + T S(T, P) − P V(T, P) = G(T, P) − T (∂G/∂T)_P − P (∂G/∂P)_T    (2.47)

    which is the heat equation in terms of T and P. Equation (2.43) thus contains all thermodynamic information and so is a representation of the fundamental equation. It is called the Gibbs Representation.

    2.4 The Maxwell Relations

    Maxwell's relations are a set of equations in thermodynamics which are derivable from the definitions of the thermodynamic potentials. The Maxwell relations are statements of equality among the second derivatives of the thermodynamic potentials. They follow directly from the fact that the order of differentiation of an analytic function of two variables is irrelevant. If Φ is a thermodynamic potential and x_i and x_j are two independent variables for that potential, then the Maxwell relation for that potential and those variables is:

    ∂/∂x_i (∂Φ/∂x_j) = ∂/∂x_j (∂Φ/∂x_i)


    Of the representations of the fundamental relation considered above, four are particularly important. They are the energy, enthalpy, Helmholtz and Gibbs representations, given respectively in differential form by

    dE = T dS − P dV
    dH = T dS + V dP
    dF = −S dT − P dV
    dG = −S dT + V dP    (2.48)

    Since each of the differentials on the left hand side of these relations is exact, the differential expressions on the right hand side must each be exact. The equations (2.48) are therefore of the form

    dΦ = (∂Φ/∂x)_y dx + (∂Φ/∂y)_x dy.    (2.49)

    For continuous functions with continuous second derivatives, mixed partial derivatives are equivalent:

    (∂/∂y)_x (∂Φ/∂x)_y = (∂/∂x)_y (∂Φ/∂y)_x = ∂²Φ/∂x∂y = ∂²Φ/∂y∂x    (2.50)

    Applying the exactness (or integrability) condition to each of the equations (2.48), we obtain a set of four useful relations called the Maxwell equations:

    (∂T/∂V)_S = −(∂P/∂S)_V

    (∂T/∂P)_S = (∂V/∂S)_P

    (∂S/∂V)_T = (∂P/∂T)_V

    (∂S/∂P)_T = −(∂V/∂T)_P    (2.51)

    For example, for the energy representation,

    (∂/∂V)_S (∂E/∂S)_V = (∂/∂S)_V (∂E/∂V)_S

    so that, by (2.14) and (2.15),

    (∂T/∂V)_S = −(∂P/∂S)_V    (2.52)

    References

    Callen, H. B., 1985, Thermodynamics and an Introduction to Thermostatics, John Wiley and Sons, New York, p 27-33.
    Carrington, G., 1994, Basic Thermodynamics, Oxford University Press, Ch 9 & 10.


    Exercises

    Lower case symbols are used for "specific" quantities, that is, for thermodynamic quantities per mole of substance.

    1. Show that if F is known as a function of V and T, then

    H = F − T (∂F/∂T)_V − V (∂F/∂V)_T

    and

    G = F − V (∂F/∂V)_T

    2. The specific Gibbs function of a gas is given by

    g = G/N = RT ln(P/P0) − AP

    where A is a function of T only. (a) Derive expressions for the equation of state of the gas and its specific entropy. (b) Derive expressions for the other thermodynamic potentials. (c) Derive expressions for cP and cV. (d) Derive expressions for the isothermal compressibility (κ = −(1/v)(∂v/∂P)_T) and the expansivity (β = (1/v)(∂v/∂T)_P). (e) Derive an expression for the Joule-Thomson coefficient, which is defined by

    μ = (∂T/∂P)_h

    [Hint: show that (∂T/∂P)_h = [T(∂v/∂T)_P − v]/cP, and that cP = T(∂s/∂T)_P.]

    3. The specific Gibbs function of a gas is given by

    g = RT ln(v/v0) + Bv

    where B is a function of T only. (a) Show explicitly that this form of the Gibbs function does not completely specify the properties of the gas. (b) What further information is necessary so that the properties of the gas can be completely specified?
    Solution:

    g = g(T, v)

    4. Define a property of a system represented by Φ which is given by the equation

    Φ = S − (E + PV)/T

    Show that

    V = −T (∂Φ/∂P)_T

    E = T [ T (∂Φ/∂T)_P + P (∂Φ/∂P)_T ]

    and

    S = Φ + T (∂Φ/∂T)_P


    5. The fundamental equation of a certain system is given by

    E = (v0 θ / R²) S³/(NV)

    where v0, θ and R are constants. (a) In terms of what representation is this fundamental relation expressed? Explain. (b) Find the equation of state for the system, and the heat equation.

    6. A particular system obeys the relation

    u = E/N = A v^{−2} e^{s/R}

    N moles of this substance, initially at a temperature T0 and pressure P0, are expanded isentropically until the pressure is halved. Find the final temperature of the system. (Answer: Tf = 0.63 T0)

    7. A simple hydrostatic system is such that PV^k is constant in a reversible adiabatic process, where k > 0 is a given constant. Show that its internal energy has the form

    E = PV/(k − 1) + N f(PV^k / N^k)

    where f is an arbitrary function.
    Hint: PV^k must be a function of S (why?), so that (∂E/∂V)_S = g(S, N) V^{−k}, where g(S, N) is an arbitrary function.



Chapter 3
Models of Thermodynamic Systems

    Any system containing more than two particles in interaction may be regarded as a thermodynamic system. Thus a complex atom, or a large nucleus, or a spray of elementary particles in a collider may be treated by the methods of thermodynamics. More typically, however, thermodynamic systems are macroscopic. These consist of vast numbers of interacting particles, with even the smallest containing well in excess of 10^{15}. Chemists, solid and condensed state physicists, and material scientists regularly deal with systems containing anywhere between 10^{24} and 10^{30} particles, and astrophysicists with ones that exceed these figures by many orders of magnitude.

    Modelling systems of such complexity requires us to be judiciously selective. The choice of basic unit for the model depends on the type of system considered, and on which of its properties are of interest. Particle physicists are interested in properties that depend on the fundamental interactions of elementary particles. The basic entities in their models are thus quarks and gluons. Nuclear physicists will build their models from protons, neutrons and electrons. Chemists, solid state physicists and material scientists use atoms, molecules and macromolecules as basic building blocks, while cosmologists build their models using galaxies as their "particles". But however the model is constructed, all face the same difficulty in the end: how to keep track of so many basic entities. Complete modelling of a thermodynamic system would require us to keep track of each and every one of its constituents. This is clearly impossible. Tracking in detail for only a few minutes the particles even of a system as small as a single oxygen atom would require so many calculations that it is estimated that the most powerful computers available today would take longer than the age of the universe to perform the calculations.

    If we are to make any headway in developing tractable models of thermodynamic systems, we need to adopt a different procedure. The answer is in statistics. Large assemblies of items display predictable regularities and trends when averaged. The detailed behaviour under the same conditions of two identical systems can differ substantially. But, on average, their behaviour and properties are the same. This phenomenon is called statistical regularity. The way to deal with huge systems of particles, then, is not by detailed modelling, but by developing methods for predicting their average behaviour, the likelihood of observing departures from their predicted average behaviour, and the expected size of these departures. The set of techniques by which this is done is called Statistical Physics, because each particle is assumed to obey the laws of Physics, or Statistical Thermodynamics, because the systems modelled are macroscopic in size and are thus thermodynamic systems.

    Strictly speaking, the behaviour of the constituent particles is governed by the laws of quantum mechanics. Classical mechanics is now known to be incorrect in the sense that it does not describe the behaviour of systems on the atomic scale. It provides a tolerably accurate account of the general mechanical behaviour of macroscopic systems, but does not adequately describe that of subatomic particles. Any viable statistical approach will need to describe the behaviour of large numbers of quantum particles and must therefore be based on quantum mechanics. In certain limits, however, such as that of low density and high temperature for example, quantum properties are not important. In these cases, the behaviour of the constituent particles is adequately described by the laws of classical mechanics. Quantum mechanics is significantly more difficult to deal with than classical mechanics, both conceptually and mathematically. It is therefore advantageous in these limiting cases to use classical mechanics to model the system. Ironically, classical statistical models are much more difficult to implement than quantum models. This fact substantially compromises the usefulness of classical statistical models and makes them less attractive than might otherwise have been expected.

In this chapter, we look at some general features of both quantum and classical models of large numbers of particles. The details of these models are not always easy to grasp at first reading, or to implement. The best way to understand them is thus not by protracted discussion of the general theory, but by working through enough examples to illustrate it. So, if at first you fail to understand the principles outlined in this chapter, don't give up. Rather, press on to suitably chosen illustrative examples and return to the theory described here after you have seen it applied in particular cases.


    3.1 Quantum Models

3.1.1 Stationary States

A quantum system of f degrees of freedom is described by a wavefunction \Psi(q_1, q_2, ..., q_f, t) of the f coordinates q_1, q_2, ..., q_f of the system, and of the time t. This wave function satisfies the time dependent Schrödinger equation

i\hbar \frac{\partial \Psi}{\partial t} = H\Psi \qquad (3.1)

where H is the Hamiltonian operator for the system.

The first law of thermodynamics asserts that a thermodynamic system in equilibrium has a well defined energy. We are therefore interested in the stationary state solutions of equation (3.1), since these are the states of fixed energy for the quantum system. Denote the energy of the quantum system by E. Then the stationary state with energy E has wavefunction

\Psi(q_1, q_2, ..., q_f, t) = \psi(q_1, q_2, ..., q_f)\, e^{-iEt/\hbar} \qquad (3.2)

where \psi(q_1, q_2, ..., q_f) is a function of the coordinates q_1, q_2, ..., q_f alone, and satisfies the time independent Schrödinger equation

H\psi = E\psi \qquad (3.3)

Equation (3.3) is not by itself sufficient to determine the solution completely. It needs to be supplemented by boundary conditions. These distinguish the physically acceptable solutions from those which are not. Once the boundary conditions are specified, equation (3.3) generally does not admit solutions for arbitrarily chosen values of E, but only for certain well defined values called the characteristic energies, or eigenenergies, or energy eigenvalues of the system. These values are the only energies at which the quantum system can be found. It will never be found with any other value of the energy.

Equation (3.3) can sometimes (approximately) be reduced, by suitable choice of coordinates q_1, q_2, ..., q_f, to a set of f simultaneous ordinary differential equations, one for each coordinate. One technique by which this reduction is effected is the method of separation of variables, which is valid if the particle interactions are negligible. Each of the resulting ordinary differential equations then contains a constant of separation, and admits solutions for each choice of value of this constant. However, the boundary conditions for the original problem induce boundary conditions on each of these equations, whose net effect is that the individual equation for each coordinate admits solutions not for all possible values of the constant of separation, but only for certain well defined values of it. We thus obtain a restricted family of physically acceptable solutions, parametrised by a single variable, usually taking only discrete values (normally integers), called the quantum number for that equation. The value of the separation constant thus depends on the quantum number of the solution. There is therefore exactly one quantum number associated with each degree of freedom of the quantum system.

The solutions admitted by (3.3) thus occur in general only for a restricted set of values of E. In the idealised case, where interactions between particles are neglected, E is a sum of the separation constants. Each solution is then labelled by a set of f quantum numbers n_1, n_2, ..., n_f. We show this explicitly by denoting the solutions as \psi_{n_1,n_2,...,n_f}. Each solution \psi_{n_1,n_2,...,n_f} of (3.3) occurs at a well defined energy, which we denote by E_{n_1,n_2,...,n_f}. This energy is uniquely determined by the quantum numbers n_1, ..., n_f. Since each solution \psi_{n_1,n_2,...,n_f} defines a unique stationary state (3.2) of the quantum system,

\Psi_{n_1,...,n_f}(q_1, ..., q_f, t) = \psi_{n_1,...,n_f}(q_1, ..., q_f)\, e^{-iE_{n_1,...,n_f}\,t/\hbar} \qquad (3.4)

the quantum numbers n_1, n_2, ..., n_f uniquely specify a stationary state of the quantum system and its energy. The quantum numbers n_1, n_2, ..., n_f are thus in one to one correspondence with the quantum states. Each set n_1, n_2, ..., n_f defines exactly one quantum state, and conversely.

It is important to note that while each given set of quantum numbers n_1, n_2, ..., n_f uniquely defines one stationary state of the system, and therefore also its energy E_{n_1,n_2,...,n_f}, a given admissible value of the energy E does not in general define uniquely a corresponding stationary state. More often than not, there are many stationary states with that same energy. If so, we say that the energy level with value E is degenerate, and we denote the number of stationary states which occur at energy E by g_E. The number g_E is called the degeneracy of the energy level with energy E. In the special case when g_E = 1, we say that the energy level at energy E is non-degenerate.

Note the difference between "energy levels" and "quantum states". The quantum states are the actual stationary states of the system. The energy levels are the values of the energy E that are allowed for the system. These are not the same concept. Each state has a definite energy. But there may exist more than one state with that energy. So in general there are more quantum states than there are energy levels. In systems with 1 degree of freedom, there is no degeneracy and therefore there are as many energy levels as there are quantum states, but this is an exception and not the rule. Systems with more than 1 degree of freedom almost always display degeneracy, and the larger the number of degrees of freedom of the system, the larger the degree of degeneracy.

It is a general feature of quantum systems that the degree of degeneracy of the energy levels increases dramatically as the number of degrees of freedom of the system increases. When the number of degrees of freedom is huge, as it is in the case of thermodynamic systems, the degree of degeneracy of the levels can be astronomical. This feature is exploited in statistical physics.

SUMMARY:

• The stationary states of a quantum system with f degrees of freedom are specified by a set of f quantum numbers n_1, n_2, ..., n_f.

• Each stationary state has a unique energy E_{n_1,n_2,...,n_f}.

• Any given allowed energy value E may correspond to many quantum states, that is, there may be many sets of values n_1, n_2, ..., n_f which give the same value E of the energy.

• The number of states with given energy E is called the degeneracy of the energy level E, and is denoted by g_E.
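To make the growth of degeneracy concrete, here is a minimal numerical sketch (not part of the original notes) that counts the states of a single particle in a cubic box sharing the same value of n_x^2 + n_y^2 + n_z^2, and hence the same energy (compare Exercise 1 at the end of this chapter). The cut-off n_max and all variable names are illustrative assumptions.

from collections import Counter

# Particle in a cubic box: the energy is proportional to nx^2 + ny^2 + nz^2,
# with nx, ny, nz = 1, 2, 3, ...  (illustrative model; see Exercise 1).
def degeneracies(n_max):
    """Count how many states (nx, ny, nz) share each value of nx^2 + ny^2 + nz^2."""
    counts = Counter()
    for nx in range(1, n_max + 1):
        for ny in range(1, n_max + 1):
            for nz in range(1, n_max + 1):
                counts[nx**2 + ny**2 + nz**2] += 1
    return counts

g = degeneracies(n_max=10)
for level in sorted(g)[:8]:
    print(f"nx^2 + ny^2 + nz^2 = {level:3d}   degeneracy g = {g[level]}")
# Even this three-degree-of-freedom system already has degenerate levels
# (for example 6 = 1+1+4 in three distinct ways); the typical degeneracy grows
# rapidly with energy and, far more dramatically, with the number of particles.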

3.1.2 External Parameters

The characteristic energies E of the stationary states of a quantum system are determined by the Hamiltonian operator H in equation (3.3). For a system of particles, H is the sum of kinetic and potential energy operators for the system,

H = T + V \qquad (3.5)

The kinetic energy contains information about the masses of the particles. The potential, on the other hand, contains two types of information: information about how the particles interact with each other, and information about the environment. For example

V = \sum_{i<j}^{N} u(\mathbf{r}_i, \mathbf{r}_j) + \sum_{i=1}^{N} v_{\mathrm{external}}(\mathbf{r}_i)

where u(\mathbf{r}_i, \mathbf{r}_j) represents the interaction potential between two particles and v_{\mathrm{external}}(\mathbf{r}_i) the interaction of a particle with the environment. The environmental information is contained in the form of parameters, called external parameters, that specify the strength of interaction between system and environment, the dimensions (such as volume, or length, breadth and height) of the potential that confines the system to a specific region of space, the magnitude and direction of the applied electric and magnetic fields, and so on. Denote these external parameters by \alpha_1, \alpha_2, ..., or, more briefly, by \alpha_i:

v_{\mathrm{external}}(\mathbf{r}_i; \alpha_1, \alpha_2, ...)

When solving equation (3.3), we assume that the external parameters each have a given, fixed value. This means that the solutions obtained, both for \psi and for E, are implicitly functions of these fixed values. Change the values of the \alpha_i, and the solutions and their eigenvalues must also change.


By their nature, the external parameters are continuous variables. Small changes in their values are not expected to produce catastrophic changes in the state of the system. We thus expect small changes in the \alpha_i to produce only small changes in the eigenfunctions \psi_{n_1,n_2,...,n_f} for the system and in their corresponding energies E_{n_1,n_2,...,n_f}. Put differently, we expect both \psi_{n_1,n_2,...,n_f} and E_{n_1,n_2,...,n_f} to be continuous functions of the parameters \alpha_i.

3.1.3 Particle Number

Another important parameter in thermodynamics is the number N of particles in the system. In a quantum model, N enters explicitly into both T and V via the number of terms in the summations that make up each of these operators. It thus enters explicitly also into the operator H. It also enters into the wavefunctions via the number of variables q_i on which they depend.

H = \sum_{i=1}^{N} \left(-\frac{\hbar^2}{2m_i}\nabla_i^2\right) + \sum_{i<j}^{N} u(\mathbf{r}_i, \mathbf{r}_j) + \sum_{i=1}^{N} v_{\mathrm{external}}(\mathbf{r}_i; \alpha_j)

The principal difference between particle number and the external parameters is that N is not a continuous variable, but discrete. A change in N produces a discontinuous change of the system. This is reflected in the fact that if N is changed, the number f of degrees of freedom of the system changes. This means that a different Schrödinger equation needs to be solved and not, as was the case with changes in the values of the external parameters, the same equation but with different constants. The system stationary states and their corresponding energy eigenvalues will thus be labelled by a different number of quantum numbers n_1, n_2, ..., n_f.

It often happens, however, that the changes introduced by changing the value of N are not as dramatic as might have been expected, and that the energy eigenvalues can be expressed as a function of N. Further, when N is large, even quite large changes \Delta N in particle number constitute only a very small relative change in N, that is, \Delta N / N \ll 1, and we may use a series expansion in the variable \Delta N / N to good effect. We shall therefore treat N as if it were a continuous parameter for the system. This puts N effectively on the same footing as the external parameters \alpha_i. We thus often regard E as a continuous function of \alpha_1, \alpha_2, ..., and N.
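As a simple numerical illustration (not from the notes) of why even a large change \Delta N can be treated as small, the sketch below uses a made-up smooth energy function E(N) and compares the exact change E(N + \Delta N) - E(N) with the first-order term of the series; the function, constants and numbers are illustrative assumptions only.

N = 10**23
dN = 10**10                          # a huge change in particle number ...
print(dN / N)                        # ... is still a tiny relative change: 1e-13

def E(n, c=1.0e-22):
    """Illustrative smooth energy function of the particle number (made up for this example)."""
    return c * n ** (5.0 / 3.0)

exact = E(N + dN) - E(N)
first_order = (5.0 / 3.0) * (E(N) / N) * dN   # dE ~ (dE/dN) dN, the first term of the expansion
print(exact, first_order)            # the two agree closely; the small residual seen here is
                                     # floating-point round-off, not the expansion error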

3.1.4 Interaction

There are essentially two ways to change the energy of any given quantum system. In the first, all external parameters are held fixed. The energy levels of the system therefore do not change. We make the system change its state by supplying it with a quantum of energy of the right magnitude, or by removing one. The system will then make a transition from an initial quantum state to another at a different energy. In the second, we prevent the system from absorbing or emitting quanta. It thus cannot make any transitions to states of different energy. However, if we change its external parameters very slowly, we gradually change the values of the eigenenergies of the system for each given state (n_1, ..., n_f) without altering the quantum state in which it is found. In this way, we transfer energy into or out of the system while the system remains in a fixed quantum state. (Remember that the quantum state is defined by its quantum numbers. These are assumed to remain unchanged as the external parameters are changed.)

Physically, the two types of interaction are interpreted as follows. The external parameters of a system include things like the size of the system (that is, its length, breadth and height, or its volume), the applied electric and magnetic fields, and so on. These parameters are analogous to, and in fact are closely related to, the configuration variables of thermodynamic systems. We will see later that the external parameters do not always coincide with the thermodynamic configuration parameters. But often they do. And when they do not coincide, they are nonetheless closely related to them. In the first type of interaction, these external parameters are held constant. This type of interaction is thus analogous to a process in thermodynamics in which there is no change of the configuration variables of the system. In such a process, no work is done, and the system changes its equilibrium state by heat flow alone. This kind of interaction is thus analogous to heat flow. Of course, we have not yet sufficiently developed the microscopic theory to enable us to explain heat flow completely. But this first method for transferring energy to the system forms the basis of the explanation that we will eventually give of heating. We thus refer to this kind of interaction as thermal interaction.

In the second type of interaction, we force the system to remain in a given stationary state and alter its energy by manipulating the external parameters. This is analogous in thermodynamics to an adiabatic process, in which we completely inhibit heat flows to and from the system, and force it to alter its equilibrium state purely by changing the values of its configuration variables. This kind of process is thus called adiabatic perturbation of the quantum system. It provides the basis for a microscopic model of work done by the thermodynamic system.

Manipulation of the external parameters changes the energy of the system, even though its quantum state remains the same. This process therefore transfers energy to and from the system. We shall call energy transferred in this way work, and the interaction, work interaction. If the external parameters are distances, areas or volumes, the work is said to be mechanical work, and the interaction a mechanical interaction. If they are electric or magnetic fields, the work and interaction are said to be respectively electric or magnetic.

Mathematically, the work done by the system is calculated as follows. The energy eigenvalues of the system are functions of the external parameters,

E_R = E_R(\alpha_1, \alpha_2, ...) \qquad (3.6)

Here we have used R as an abbreviation for the quantum numbers n_1, n_2, ..., n_f that define the quantum state of the system. Suppose the system is in a given quantum state R. Change the values of the external parameters, each by an amount d\alpha_i. The value of the energy eigenvalue of state R then changes by the amount

dE_R = \sum_{i} \frac{\partial E_R}{\partial \alpha_i}(\alpha_1, \alpha_2, ...)\, d\alpha_i \qquad (3.7)

    The work done by the system on its surroundings is therefore

dW_R = -dE_R = -\sum_{i} \frac{\partial E_R}{\partial \alpha_i}(\alpha_1, \alpha_2, ...)\, d\alpha_i \qquad (3.8)

    The coefficient

X_{R,i} = -\frac{\partial E_R}{\partial \alpha_i} \qquad (3.9)

is called the generalised force conjugate to the external parameter \alpha_i exerted by the environment on the system. If the parameter \alpha_i is an ordinary Cartesian distance, then X_{R,i} is an ordinary force; if it is an angle, then X_{R,i} is a torque; and if it is a volume, then X_{R,i} is a pressure.

Note that the work dW_R done by the system when the external parameters are changed depends on the state R in which the system is found. For different states R, the same change of external parameters results in general in different amounts of work being done. The generalised force X_{R,i} exerted by the environment on the system when its external parameters have given values \alpha_1, \alpha_2, ... also depends on the state R of the system. Thus, in general, for given \alpha_i, the same system in different states will experience different forces.
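As a concrete illustration of (3.8) and (3.9), here is a minimal sketch assuming the particle-in-a-box levels of Exercise 1, with the volume V of a cubic box (L_x = L_y = L_z = V^{1/3}) as the single external parameter, so that E_R = h^2(n_x^2 + n_y^2 + n_z^2)/(8mV^{2/3}) and the generalised force conjugate to V is a pressure. The particle mass, volume and finite-difference step are arbitrary illustrative values, not taken from the notes.

h = 6.626e-34        # Planck constant (J s)
m = 6.64e-27         # illustrative particle mass (kg), roughly one helium atom
V = 1.0e-27          # illustrative box volume (m^3)

def E_R(state, volume):
    """Energy eigenvalue of state R = (nx, ny, nz) for a particle in a cubic box of volume V."""
    nx, ny, nz = state
    return h**2 * (nx**2 + ny**2 + nz**2) / (8.0 * m * volume**(2.0 / 3.0))

def X_RV(state, volume, dV=1.0e-33):
    """Generalised force conjugate to V, X_{R,V} = -dE_R/dV, estimated by a central difference."""
    return -(E_R(state, volume + dV) - E_R(state, volume - dV)) / (2.0 * dV)

for state in [(1, 1, 1), (2, 2, 2), (5, 5, 5)]:
    print(state, "E_R =", E_R(state, V), "J   X_R,V =", X_RV(state, V), "Pa")
# Different states R give different generalised forces for the same value of the
# external parameter, as noted above; analytically X_{R,V} = (2/3) E_R / V here.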

Remark 1  The concept of a generalised force comes from Lagrangian Mechanics and is a generalisation of the Newtonian concept. The definition arises as follows. The work done in displacing a particle by d\vec{r} increases its energy by an amount \Delta E = \vec{F} \cdot d\vec{r}. Expressing this in Cartesian coordinates gives

F_x = \frac{\partial E}{\partial x}, \quad F_y = \frac{\partial E}{\partial y}, \quad F_z = \frac{\partial E}{\partial z} \qquad (3.10)

We can now reverse the order of the definitions. Given the energy E of the particle as a function of x, y, z, we can define the ith component of the force on the particle to be the partial derivative of E with respect to the ith coordinate. This definition permits us now to use arbitrary coordinate systems. It is easy to show that, in polar coordinates, the derivative of E with respect to the angle is the torque, or angular force, exerted by the force on the particle about the origin. Lagrangian Mechanics makes the obvious generalisation: the derivative with respect to any type of coordinate yields the generalised force conjugate to that coordinate. "Conjugate" here means the "force" responsible for changes in that coordinate. So, for example, if volume is used as a coordinate, the "force" conjugate to it is pressure. The advantage of this concept of generalised force is that it can be imported into any context where the system changes its energy by a change of some configuration coordinate, even those contexts where the Newtonian concepts fail (as in quantum mechanics) or are absent (as in thermodynamics).

If the above two effects occur simultaneously, we have a model for a general quasistatic thermodynamic process. It is possible also to change the external parameters suddenly rather than gradually. This normally results in two things simultaneously: the system changes its stationary state and the energy levels of each given stationary state alter as a result of the change in the values of the external parameters. In such a situation however, we do not know into which stationary state the system has moved or how much energy was supplied by the sudden influx of heat. This provides a model for non-quasistatic processes.

These facts form the basis of a microscopic understanding of both heat and work in thermodynamics. They do not yet provide a complete theory of them. There is one more factor involved which will be discussed in a later section. It is this: there are in general very many stationary states of the system compatible with any given set of thermodynamic state variables, and each such state has a certain probability of occurrence. Thermodynamic heat and work are therefore statistical averages of the above effects, taken over all these compatible states.

3.1.5 Independent Particle Models

In many situations of interest, the individual particles of the system are identical and interact only weakly with each other. By "interact weakly with each other" we mean that, compared with their kinetic energy and with their energy of interaction with the environment, the energy of their interaction with each other is negligible. The strength of an interaction is measured by the amount of energy involved in the interaction. So if the particle-particle interaction energy is much less than the particle-environment interaction energy, or the particle kinetic energy, then we commit only a small error by neglecting it. The resultant model provides a good first approximation to the real behaviour of the system. At worst, we only get a qualitative explanation of the observed phenomena by taking this approach, but often we get a powerful predictive model in spite of the severity of the approximation. So instead of using

H = \sum_{i=1}^{N} \left(-\frac{\hbar^2}{2m_i}\nabla_i^2\right) + \sum_{i<j}^{N} u(\mathbf{r}_i, \mathbf{r}_j) + \sum_{i=1}^{N} v_{\mathrm{external}}(\mathbf{r}_i; \alpha_j)

we use a simplified approximation

H = \sum_{i=1}^{N} \left(-\frac{\hbar^2}{2m_i}\nabla_i^2\right) + \sum_{i=1}^{N} v_{\mathrm{external}}(\mathbf{r}_i; \alpha_j)

Ignoring inter-particle interactions produces considerable mathematical simplification in the quantum model. This is the principal reason for using this approximation. For a general system, the Hamiltonian consists of a sum of terms of the type

H = \sum_{A=1}^{N} H_A + \sum_{\substack{A,B=1 \\ A \neq B}}^{N} H_{AB} \qquad (3.11)

where H_A involves the coordinates of particle A alone and H_{AB} describes the interaction between particles A and B. When the interaction terms are negligible, we may write

H \approx \sum_{A=1}^{N} H_A \qquad (3.12)

Since each H_A = H_A(\mathbf{r}_A, \mathbf{p}_A) involves only the position and momentum of the Ath particle in the external potential, the time independent Schrödinger equation is separable. If we write

\psi(\mathbf{r}_1, \mathbf{r}_2, ..., \mathbf{r}_N) = P\left[\psi^{(1)}(\mathbf{r}_1)\,\psi^{(2)}(\mathbf{r}_2) \cdots \psi^{(N)}(\mathbf{r}_N)\right], \qquad (3.13)

the time independent Schrödinger equation separates into N independent equations, one for each particle,

H_A\,\psi^{(A)}(\mathbf{r}_A) = E^{(A)}\,\psi^{(A)}(\mathbf{r}_A) \qquad (3.14)

In (3.13), P is a permutation operator that depends on the statistics of the particles (Fermions or Bosons). If, furthermore, the particles are identical, each of these equations is identical in form to every other, and they therefore have solutions that are identical in form.

    We write the common equation symbolically as

h\,\psi = \varepsilon\,\psi \qquad (3.15)

where

h = -\frac{\hbar^2}{2m}\nabla^2 + v_{\mathrm{external}}(\mathbf{r}; \alpha_j)

is the common single particle Hamiltonian, \psi is the single particle wave function, and \varepsilon is the eigenenergy of the single particle alone in interaction with the environment.

The state of the N particle system is then specified by giving the state of each individual particle. The single particle state is specified by the 3 quantum numbers n, l, m (1 particle has 3 degrees of freedom, so 3 quantum numbers), and its energy is \varepsilon_{nlm}. It is customary to abbreviate (n, l, m) by the single letter r = (n, l, m). The corresponding energy for the single particle is then denoted \varepsilon_r. The state of the system of N particles is now specified by the quantum numbers r_i for each particle, that is, by the set of numbers r_1, r_2, ..., r_N, and the total energy of the system is given by

E_{r_1, r_2, ..., r_N} = \varepsilon_{r_1} + \varepsilon_{r_2} + \cdots + \varepsilon_{r_N} \qquad (3.16)

This independent particle model requires a lot of words to describe it, but in fact it is very easy to use. How to use it will become clear in particular examples. Keep in mind that the independent particle approximation is usually a very severe approximation and we only expect to get qualitative results that will help us to develop a quantitative intuition of the physics that occurs in real systems.
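A minimal sketch (not from the notes) of how the independent particle model is used in practice: given a list of single-particle energies \varepsilon_r, the energy of any system state (r_1, ..., r_N) is just the sum (3.16). The particular level values, and the simplification of treating the particles as distinguishable, are illustrative assumptions.

from itertools import product

# Hypothetical single-particle levels epsilon_r, labelled r = 0, 1, 2 (made-up values).
epsilon = [0.0, 1.0, 2.5]

def system_energy(occupied):
    """Energy of the system state (r_1, ..., r_N): the sum (3.16) of single-particle energies."""
    return sum(epsilon[r] for r in occupied)

N = 2
# Enumerate every system state for N particles (treated as distinguishable, i.e. the
# permutation operator P of (3.13) is ignored) and group the states by total energy.
levels = {}
for state in product(range(len(epsilon)), repeat=N):
    levels.setdefault(system_energy(state), []).append(state)

for E in sorted(levels):
    print(f"E = {E:4.1f}   states: {levels[E]}   degeneracy g_E = {len(levels[E])}")
# Even with three single-particle levels and N = 2, several total energies E are
# realised by more than one system state (r_1, r_2).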

3.2 Classical Models

Classical mechanics is quite different in concept and structure from quantum mechanics. The basic features of the theory that are needed to construct a classical statistical mechanics are accordingly different. This section outlines the principal points on which the theory is built.

3.2.1 Classical Specification of State

There are three principal approaches to classical mechanics that we need to note: the Newtonian, the Lagrangian, and the Hamiltonian. Newtonian mechanics is the simplest and most direct of these. The picture it offers is immediate and intuitive.

Any mechanical system can be considered to consist of a fixed number N of point particles. Each particle is subject to the action of a force \vec{F}_A, A = 1, 2, ..., N, which determines its acceleration via Newton's Second Law,

\vec{F}_A = m_A \vec{a}_A \qquad (3.17)

The acceleration then determines, together with 6N initial values of position and velocity, the N


trajectories \vec{r}_A = \vec{r}_A(t) of the particles by the 3N second order ordinary differential equations

\frac{d^2\vec{r}_A}{dt^2} = \vec{a}_A = \frac{\vec{F}_A}{m_A} \qquad (3.18)

These trajectories together define the position and velocity of each particle in space at each time t. The position of each particle at time t is defined by 3 coordinates. To specify the configuration of the system at time t therefore, we need 3N coordinates. The set of all possible configurations is called the configuration space for the system, and we say that the system has 3N degrees of freedom (in the absence of constraints). However, knowing the configuration of the system alone is not sufficient for determining how the system will move, or change its configuration, in the next instant. For this, we also need to know the velocity of each particle. We thus need to specify 6N quantities, three position coordinates and three components of velocity, for each particle in order to specify the state of motion of the system. These 6N quantities can be represented in a space of 6N dimensions, called the velocity phase space. The state of motion of the system is called its phase.

It is often convenient to express the state of motion of the system not in terms of position and velocity, but of position and momentum. Since there is a unique momentum associated with each velocity, and conversely, these two descriptions are equivalent. We may thus specify the state of motion of the system by three position coordinates and three components of momentum for each particle. These 6N quantities may again be represented in a space of 6N dimensions. This one is called the momentum phase space.

For convenience, denote the configuration coordinates of the system by q_i, i = 1, 2, ..., f. Here, f is called the number of degrees of freedom of the system, and for a system of unconstrained particles, f = 3N. Denote also the momenta of the particles by p_i, i = 1, 2, ..., f. Newton's law then determines the q_i and the p_i as functions of time, and the state of motion of the system at time t is given by (q_i(t), p_i(t)). The set of states for a classical system of f degrees of freedom is thus a 2f-dimensional space, or continuum.

In each state, and at each time, the classical system has a well defined energy. This means that the energy of the system is a function H of the 2f + 1 variables (q_i, p_i, t). If we denote the energy by E, this means that

E = H(q_1, ..., q_f, p_1, ..., p_f, t) \qquad (3.19)

The function H is called the Hamiltonian function for the system. The Hamiltonian function is usually easily constructed from a knowledge of the physical interactions of the particles in the system. In Cartesian coordinates, it takes the standard form

H(q_1, ..., q_f, p_1, ..., p_f, t) = T + V \qquad (3.20)

where T is the total kinetic energy of the system, and consists of the sum of the kinetic energies of each of the particles,

T = \frac{\vec{p}_1^{\,2}}{2m} + \frac{\vec{p}_2^{\,2}}{2m} + \cdots + \frac{\vec{p}_N^{\,2}}{2m} = \sum_{A=1}^{N} \frac{\vec{p}_A^{\,2}}{2m} \qquad (3.21)

and V is the sum of the potential energies of the particles due both to external fields (gravitational, electric, magnetic) and to mutual interactions (Coulomb, van der Waals, etc.).

All that is needed for the construction of a classical statistical mechanics is a knowledge of the Hamiltonian function H. Strangely, this does not require us to solve the equations of motion for the system. This should be contrasted with the quantum case, where we need to solve the Schrödinger equation explicitly to uncover the possible energies of the eigenstates of the quantum system.
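To emphasise that only the Hamiltonian function itself is needed, here is a minimal sketch (an illustration, not from the notes) of H(q, p) = T + V for N particles held by a harmonic confining potential; the choice of potential, its "spring constant" k, and all numerical values are illustrative assumptions.

import numpy as np

def hamiltonian(q, p, m=1.0, k=1.0):
    """H(q, p) = T + V for N particles in a harmonic confining potential (illustrative).

    q, p : arrays of shape (N, 3) holding the positions and momenta of the particles.
    The 'spring constant' k plays the role of an external parameter.
    """
    kinetic = np.sum(p**2) / (2.0 * m)      # T = sum_A p_A^2 / 2m, as in (3.21)
    potential = 0.5 * k * np.sum(q**2)      # V = sum_A v_external(r_A; k)
    return kinetic + potential

rng = np.random.default_rng(0)
q, p = rng.normal(size=(5, 3)), rng.normal(size=(5, 3))   # one phase-space point (q, p)
for k in (1.0, 2.0):
    print(f"k = {k}:  E = H(q, p; k) = {hamiltonian(q, p, k=k):.3f}")
# The same state (q, p) has a different energy for different values of the external
# parameter k -- the classical analogue of what was described for the quantum model.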

3.2.2 External Parameters

As in the quantum model, the effect of the environment on the system (applied potentials, external forces, electric and magnetic fields, confinement potentials, etc.) is described by means of parameters \alpha_i, i = 1, ..., r, that enter into the external potentials. This means that the energy of the system in state (q_i, p_i) at time t depends not only on its state, but also on the values of the external parameters. We therefore have

E = H(q_1, ..., q_f, p_1, ..., p_f, t; \alpha_1, ..., \alpha_r) \qquad (3.22)

For a given state at a given time, therefore, it is possible to change the energy of that state by altering the values of the external parameters. If we denote the state at time t by R = (q_i, p_i), we can adopt a notation analogous to the one used above when discussing interaction in the quantum case and write

E = H_R(\alpha_1, ..., \alpha_r) \qquad (3.23)

where the argument t has been suppressed for simplicity. We may now divide possible interactions with the system into two types. In the first, the external parameters for the system are held constant and energy is exchanged by the system with the surroundings through interaction with external potentials. This type of interaction forms the basis for a theory of thermal interaction when modelling thermodynamic systems. In the second, no energy is exchanged by the system with the surroundings via the external potentials, but the external parameters are changed. This type forms the basis for a theory of work in a model of a thermodynamic system.

References:

Reif, F., 1965, Fundamentals of Statistical and Thermal Physics, McGraw-Hill Book Company, New York, Ch 2, 6 and 7.

Tolman, R. C., 1979, The Principles of Statistical Mechanics, Dover Publications, Inc., New York, Ch 2, 3, 7, 8.


Exercises

1. Consider a non-relativistic particle of mass m in a rectangular box of dimensions L_x, L_y, L_z. The potential inside the box can be set to zero; outside the box the potential is infinite.

(a) Show that the allowed energies of the independent particle states are

\varepsilon_{n_x,n_y,n_z} = \frac{h^2}{8m}\left(\frac{n_x^2}{L_x^2} + \frac{n_y^2}{L_y^2} + \frac{n_z^2}{L_z^2}\right)

with n_x, n_y, n_z = 1, 2, 3, ... positive integers.

(b) Show that the number of states in an infinitesimal interval d\varepsilon is given by

n(\varepsilon) = \rho(\varepsilon)\, d\varepsilon

where

\rho(\varepsilon) = 2\pi V \left(\frac{2m}{h^2}\right)^{3/2} \varepsilon^{1/2} \qquad (3.24)

is the density of states and V the volume of the box. (Consider \varepsilon \gg 0.)

2. Result (3.24) is specific to non-relativistic particles, since the Schrödinger equation from which it was derived is non-relativistic. Result (3.24) is therefore also correct only for non-relativistic particles. We may arrive at the same result in a more general way as follows. A free particle with momentum \vec{p} and energy \varepsilon has wavefunction

\psi_{\vec{p}}(\vec{r}, t) = A\, e^{i(\vec{r}\cdot\vec{p} - \varepsilon t)/\hbar} = \phi(\vec{r})\, e^{-i\varepsilon t/\hbar} \qquad (3.25)

(Recall that \hbar\vec{k} = \vec{p}, where \vec{k} is the wavevector.) The parameter \varepsilon is not a free parameter, but is determined by the three parameters \vec{p}. For a non-relativistic particle, it satisfies the equation

\varepsilon = \frac{p^2}{2m} \qquad (3.26)

For a relativistic particle, it satisfies the equation

\varepsilon^2 = p^2 c^2 + m^2 c^4 \qquad (3.27)

The following steps are valid for both relativistic and non-relativistic cases.

(a) Show that by demanding that \phi(\vec{r}) obeys periodic boundary conditions (\phi(\vec{r}) = \phi(x, y, z) = \phi(x + L_x, y, z), and similarly for the y and z directions and any combination of coordinates), we get the quantisation condition

p_x = \frac{h n_x}{L_x}, \quad p_y = \frac{h n_y}{L_y}, \quad p_z = \frac{h n_z}{L_z} \qquad (3.28)

and n_x, n_y, n_z integers. (We can also use the rectangular box to arrive at the same result - try this and compare.)

(b) According to (3.28), the quantum states for the particle may be represented by a lattice of integer points in a 3-d state space with axes n_x, n_y, n_z. Denote the number of quantum states with parameter values in the range p_x to p_x + dp_x, p_y to p_y + dp_y, p_z to p_z + dp_z by \rho(\vec{p})\, d^3p. Show, with appropriate explanations, that

\rho(\vec{p})\, d^3p = \frac{V}{h^3}\, d^3p \qquad (3.29)

where V = L_x L_y L_z.


(c) p^2 = p_x^2 + p_y^2 + p_z^2. Denote the number of states with parameter p in the range p to p + dp by \rho(p)\, dp, and show that this number is given by

\rho(p)\, dp = \frac{V}{h^3}\, 4\pi p^2\, dp. \qquad (3.30)

(d) In spite of appearances, \vec{p} in the above formulae is not the momentum of a particle in a box. Explain this assertion as fully as you are able. Attempt a physical interpretation of \vec{p}. (Hint: this question is not trivial. You might find Reif, pp. 353-360 useful in this regard.)

(e) Non-relativistic density of states: The energy of the particle is given by \varepsilon = p^2/2m. Use this relation, together with result (3.30), to show that the number \rho(\varepsilon)\, d\varepsilon of states with energy in the range \varepsilon to \varepsilon + d\varepsilon is given by

\rho(\varepsilon)\, d\varepsilon = 2\pi V\, \frac{(2m)^{3/2}}{h^3}\, \varepsilon^{1/2}\, d\varepsilon

(f) Relativistic density of states: The energy of the particle is given by \varepsilon = +\sqrt{p^2 c^2 + m^2 c^4}. Use this relation, together with result (3.30), to find an expression for the number \rho(\varepsilon)\, d\varepsilon of states with energy in the range \varepsilon to \varepsilon + d\varepsilon.
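Not part of the exercises, but a numerical sanity check you might try (everything below is an illustrative assumption): for a cubic box (L_x = L_y = L_z = L), count the momentum lattice points of (3.28) with |p| below a cut-off and compare with the continuum estimate that follows from (3.29), namely (V/h^3)(4/3)\pi p^3. Working in units where h/L = 1 makes V/h^3 = 1.

from math import pi

def count_states(R):
    """Number of integer lattice points (nx, ny, nz) of (3.28) with |p| <= R, in units h/L = 1."""
    R_int = int(R)
    total = 0
    for nx in range(-R_int, R_int + 1):
        for ny in range(-R_int, R_int + 1):
            for nz in range(-R_int, R_int + 1):
                if nx * nx + ny * ny + nz * nz <= R * R:
                    total += 1
    return total

for R in (5, 10, 20, 40):
    counted = count_states(R)
    estimate = 4.0 / 3.0 * pi * R**3     # (V/h^3)(4/3) pi p^3 with V/h^3 = 1
    print(f"R = {R:3d}   counted = {counted:7d}   continuum = {estimate:10.1f}   ratio = {counted/estimate:.3f}")
# The ratio approaches 1 as R grows: for large quantum numbers the discrete states can be
# counted with the continuous density of states, which is the content of (3.29)-(3.30).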


Chapter 4
Isolated Systems: Fundamental Postulates

The simplest thermodynamic situations to imagine are those where the system of interest is isolated. "Isolated" means that the system does not interact at all with its surroundings. Heat does not flow into it, it can do no work, and the number of particles that it contains does not change. So, its internal energy remains constant. According to quantum theory therefore, the system is in a stationary state, and its external parameters do