
ENTROPY AND THE SECOND LAW OF THERMODYNAMICS 331

J. Newman, Physics of the Life Sciences, DOI: 10.1007/978-0-387-77259-2_13, © Springer Science+Business Media, LLC 2008

Our discussion of thermodynamics in the last chapter was limited to energy considerations. Although energy conservation is a necessary requirement for any process to occur, it is not a sufficient condition. There are many energy-conserving processes that occur spontaneously, but that are not reversible even though the reversed process would also conserve energy. In this chapter we continue our introduction to thermodynamics with a discussion of entropy and the second law of thermodynamics. We relate entropy to the degree of disorder in an isolated system through a microscopic picture and we show that this disorder always increases with time. Life is a constant struggle to maintain a high degree of order. The corresponding reduction in entropy is accomplished at the expense of even more disorder in our environment in order to satisfy the second law of thermodynamics. We next discuss the Gibbs free energy, related to the chemical potential, the most important energy concept in biology. This thermodynamic state variable is a measure of the energy available for useful work at constant temperature and pressure, the usual conditions of life. The chapter concludes with several biological applications of these concepts, including ATP hydrolysis, photosynthesis, and conformational changes in biomolecules.

1. ENTROPY AND THE SECOND LAW OF THERMODYNAMICS

Many processes in nature that conserve energy and do not violate any of the other fundamental principles we have introduced so far in our study of physics simply do not occur. Now, when a basic physical process never happens even though it seems to satisfy all of the fundamental principles of our theories, there is something amiss. From many historical examples, it is usually the case that there is some new principle that would be violated by the occurrence of such a process. We begin this section with a brief discussion of some examples of processes in different areas of physics that never occur, leading to a qualitative presentation of the common principle that prohibits them.

In mechanics, all sliding objects eventually come to rest because their kinetic energy has been lost due to what we call friction, the process by which mechanical energy is transferred to heat. Energy has not been lost, but the "useful" form of energy, which in mechanics is the sum of kinetic and potential energy, called mechanical energy, has been lost through its transfer to internal energy. Once a sliding object comes to rest, it is never the case that the internal energy of the object and surroundings spontaneously transfers back to the object in the form of mechanical energy, making it move again. We conclude that although energy would be conserved in the reverse process, once "organized" energy, such as kinetic energy in which all molecules of the moving object translate together, is converted to random thermal motions of molecules, the process is irreversible. It is too improbable that all the molecules will spontaneously coordinate their motions in order to propel the object again.

13 Thermodynamics: Beyond the First Law


In fluid mechanics there are many similar examples. Suppose that a gas is confined to half of a closed container by means of a partition. If a hole is punctured in the partition, the gas will leak into the other half of the container, eventually reaching a uniform distribution throughout the container (Figure 13.1). Energy conservation would not be violated if the molecules on one side of the partition spontaneously re-entered the other side and all the gas returned to only one side of the container. We know, however, that this process is fundamentally irreversible because it is again too improbable that such a sequence of events would occur.

An example from thermodynamics further illustrating this point is the inevitable cooling of hot coffee. A thermos bottle can be used to reduce the rate of heat loss from the coffee compared to when it is just in a cup. "Vacuum" bottles reduce the conduction of heat, and the silver coating on the glass inner bottle reduces radiation losses. Despite this, the coffee eventually will lose heat to its surroundings, and the cooling process is irreversible. Irreversibility here means that the original situation cannot be restored without additional energy input. Of course the coffee can be heated again, but the heat lost to the surrounding air cannot be collected and used alone to reheat the coffee to its initial temperature, even though such a process would conserve energy.

What is common to all of these examples is the notion of the probability of the occurrence of an event and of its time-reversed event. The bookkeeping of energy conservation is satisfied for both events; however, the likelihood of the reversal is essentially zero. Here we see how a methodology, known as statistical mechanics, can be developed for calculating the likelihood of events. We start with some simple notions from coin-tossing problems.

If we flip a legitimate coin in the air, there is an equal probability of getting a head or a tail when it lands. Flipping three coins in the air at the same time results in a variety of possible "outcomes" (0, 1, 2, or 3 heads), but these do not occur with equal probability. There is only one combination that gives either 0 or 3 heads, whereas there are 3 possible combinations that will result in either 1 or 2 heads, giving a total of 8 possible distinct "states" for the coin flip (Figure 13.2). As more and more coins are flipped together, the total number of different possible states grows rapidly (with N coins, the number is 2^N; with N = 100, the number is about 10^30, more than the number of protons in your body!), whereas the number of possible outcomes is much smaller (with N coins, there are simply N + 1 possible outcomes; what are they?). No matter how many coins are flipped, the number of states resulting in the most "ordered" outcomes of all heads or all tails remains just 1, so that those events become essentially impossible as the number of coins grows toward 100. Flipping 100 coins and finding 100 heads would be the equivalent of a cold cup of coffee spontaneously heating up by absorbing heat from the room-temperature air.
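The contrast between states and outcomes is easy to check directly. The following sketch (plain Python, standard library only; the function names are ours, introduced just for this illustration) tallies both for small N and for the N = 100 case:

```python
import math

def num_states(n):
    # Each coin is independently heads or tails: 2^n equally likely states.
    return 2 ** n

def num_outcomes(n):
    # An "outcome" is the total number of heads: 0, 1, ..., n.
    return n + 1

# Three coins: 8 states but only 4 outcomes, with 1, 3, 3, 1 states apiece.
assert num_states(3) == 8
assert [math.comb(3, k) for k in range(4)] == [1, 3, 3, 1]

# One hundred coins: about 1.27 x 10^30 states, yet only 101 outcomes,
# and still just one state each for all heads or all tails.
print(num_states(100))    # 1267650600228229401496703205376
print(num_outcomes(100))  # 101
```

Note how the state count explodes exponentially while the outcome count grows only linearly; this gap is the whole story of the statistical arguments that follow.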

FIGURE 13.1 When a small hole is made in the partition between the two chambers shown on the left, the gas distributes itself uniformly as shown on the right. The reverse process never occurs.

FIGURE 13.2 The three-coin-flip experiment, with up and down arrows indicating heads or tails.

Example 13.1 Find the number of states and outcomes for the case when 4 fair coins are tossed, and then find the probabilities of each of the outcomes.

Solution: With 4 coins there are 5 possible outcomes (ranging from 4 heads to 0 heads) and 2^4 = 16 possible states. There is only one way to have 4 heads and only one way to have 0 heads, so the probability of each of these is 1/16 = 6.25%. There are 4 different ways to have 1 head (each of the 4 coins could be the head) and similarly there are 4 different ways to have 3 heads (each of the 4 coins could be the tail), so that each of these has a probability of 4/16 = 0.25 = 25%. For 2 heads, the two coins that are heads could be any of the four coins, and we can find 6 ways for this to occur, so the probability of 2 heads is 6/16 = 37.5%. Of course, this last value could have been obtained by noting that the probabilities must add up to 1 (or 100%). Check this.
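These counts follow the binomial pattern, which a few lines of Python (an illustrative check of the example, not part of the original text) confirm:

```python
from math import comb

n = 4            # four fair coins
states = 2 ** n  # 16 equally likely states

# Probability of k heads = C(n, k) / 2^n
probs = {k: comb(n, k) / states for k in range(n + 1)}

assert probs[0] == probs[4] == 1 / 16  # 6.25%
assert probs[1] == probs[3] == 4 / 16  # 25%
assert probs[2] == 6 / 16              # 37.5%
assert sum(probs.values()) == 1.0      # probabilities add up to 100%
```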

E N T R O P Y A N D T H E S E C O N D L AW O F T H E R M O DY N A M I C S 333

Of the 101 different possible outcomes of the 100-coin-flip experiment, which are most likely? As you should already have guessed, the most likely case is 50 heads and 50 tails. Probability theory tells us that if this experiment is repeated over and over again, then roughly 73% of the time we will find between 45 and 55 heads out of the 100 coins. The distribution of possible outcomes is fairly sharply and symmetrically peaked around 50.
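That probability is an exact binomial sum, and can be computed directly (a sketch for checking the claim; variable names are ours):

```python
from math import comb

total = 2 ** 100  # all equally likely states for 100 coins

# Probability that the number of heads falls between 45 and 55 inclusive
p = sum(comb(100, k) for k in range(45, 56)) / total
print(p)  # roughly 0.73
```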

In real physical systems, what is analogous to the notion of "states" and "outcomes" in the coin-toss experiment? To answer this we need to jump ahead a bit. We show later in this book that the world is governed by quantum mechanics and that, for atoms and molecules, possible energy values are quantized, or discrete, so that there are a countable number of different values for the energy of an atom or molecule. Atoms or molecules cannot have just any value of energy, but must exist in a quantum state with one of a discrete list of possible energies that can be labeled by a number, a so-called quantum number. These energy states can be pictured as energy levels, which may be familiar to you from a previous physics or chemistry course, with the atom or molecule "residing" in a particular level. In Figure 13.3 the energy levels have a quantum energy separation of ε, with this particular atom in the third "excited state", with an "excitation level" of 3 and an allowed state of motion corresponding to a total energy of 3ε. We can think of the atom in this state as having 3 quanta of energy, each worth ε joules.

Now, suppose we have a large number NA of such atoms. Each of the atoms has its own excitation level and its own corresponding energy. To find the internal energy of the large system due to atomic motions we just add up all of the individual atomic motional energies. Thus, we can write the internal energy of the system as NE ε, where NE, the total excitation level of the whole system, is just the sum of all of the atomic excitation levels. (For example, if there were three atoms in the system with excitation levels 4, 5, and 6, the excitation level of the system would be 15.) NE is the total number of energy quanta the system contains. Typically, for a macroscopic system both NE and NA will be huge, perhaps 10^25 or so.

A microstate of this system is one of the very large number of states described by a particular set of excitation levels, one for each atom in the system. It is one of the premises of statistical mechanics that at equilibrium all allowed microstates of a system (those satisfying conservation of energy) are equally probable. Microstates are analogous to the 2^N different possible "states" of the coin-flip experiment. Unlike the heads-or-tails options for a coin, however, it is as if we are rolling a huge number of special dice, one for each atom, each die having an enormous number of faces representing the different excitation levels of an individual atom rather than the usual six.

However, just as with the coin-flip experiment, when all is said and done what is most important are the "outcomes": how many heads we will get, with what probability, for N coin flips. The details of which particular coin landed as a head or a tail are not important. In our atomic system, the analog of an outcome is a macrostate. This is specified by the total numbers of atoms with each of the possible excitation levels, known as the occupation numbers. The occupation numbers, together with the associated excitation levels, represent the information needed to determine the total energy of the system. There will be many microstates corresponding to each particular macrostate, just as there are many different possible coin-flip sequences that result in the same outcome (except for all heads or all tails). Because, as we have noted, each microstate is equally likely to occur, the probability of a particular macrostate will depend solely on the number of microstates corresponding to that macrostate. Thus, as we saw in the coin-flip experiment, the possible outcomes (or macrostates) may be limited by probability to those that are most likely to occur, namely those with the largest number of states (microstates) leading to that outcome.

FIGURE 13.3 Typical energy-level diagram for an atom, with the lowest (ground) state at 0 and several excited states at ε, 2ε, 3ε, and 4ε shown (there are many more levels above the fourth excited state not shown; also, in many cases the energy levels are not equally spaced). A typical energy spacing is 10^−21 J for atoms in a solid and 10^−23 J for atoms in a gas.

The information on the numbers of microstates in a given macrostate (the occupation numbers) is contained in a function Ω, known as the statistical weight of the system, that is directly related to the entropy S of the system:

S = kB ln Ω,    (13.1)

where kB is the Boltzmann constant. Entropy is thus a statistical function depending ultimately on the occupation and quantum numbers but indirectly on the state variables, such as pressure, temperature, and volume, and is a measure of the likelihood of that particular macrostate, given total values for energy and other conserved quantities. One immediate question is how much choice there is in the macrostate that the system occupies. In our coin-flip experiment with only 100 coins we saw that the probabilities are fairly sharply peaked, with a probability of roughly 73% that the outcome is between 45 and 55 heads. With the typically much larger numbers of microstates in thermodynamic systems, the range of parameters of the final macrostate is extremely sharply peaked.

Having introduced some concepts that can be used to describe a thermodynamic system (with large numbers of atoms), we are now in a position to state a new law of physics.
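In the coin-flip analogy, the statistical weight of the outcome "k heads out of 100" is Ω(k) = C(100, k), and the entropy kB ln Ω of Equation (13.1) is largest at k = 50. A short sketch (our illustration; the coin system stands in for a real thermodynamic one):

```python
from math import comb, log

KB = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def entropy(omega):
    # Equation (13.1): S = kB ln(Omega)
    return KB * log(omega)

# Statistical weight of each outcome for 100 coins
omegas = {k: comb(100, k) for k in range(101)}

# The most probable outcome (largest Omega, hence largest S) is 50 heads.
assert max(omegas, key=omegas.get) == 50

# The perfectly "ordered" outcome of all heads (Omega = 1) has S = 0.
assert entropy(omegas[100]) == 0.0
```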


Example 13.2 Suppose that there are four identical atoms, each with the equally spaced energy levels given in Figure 13.3, and with a total energy of 6ε. Find all the possible macrostates of the system by defining their occupation numbers.

Solution: Because the total energy is 6ε, we need to include energy levels up to that value, because one possible macrostate has 3 atoms in the zero-energy ground state and 1 atom with excitation level 6. If we write out the occupation numbers of this state as (3,0,0,0,0,0,1), where from left to right we show the number of atoms at increasing excitation levels from the ground state to 6, we can use this notation to find the other possible macrostates. These can be written as: (2,1,0,0,0,1,0), (2,0,1,0,1,0,0), (2,0,0,2,0,0,0), (1,2,0,0,1,0,0), (1,1,1,1,0,0,0), (1,0,3,0,0,0,0), (0,3,0,1,0,0,0), and (0,2,2,0,0,0,0). We note that not all of these macrostates are equally likely. For example, there are 4 microstates that correspond to the macrostate given by (3,0,0,0,0,0,1), one for each choice of which of the 4 atoms has excitation level 6. For the macrostate given by (1,1,1,1,0,0,0) there are 4 choices for filling the first state, 3 for the second, 2 for the third, and the remaining atom fills the fourth, so that there are 4! ("4 factorial") = (4)(3)(2)(1) = 24 different possible microstates in this case. Thus this macrostate is six (24/4) times as likely as the one in which only one atom has all the energy.
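The bookkeeping in Example 13.2 can be checked by brute force. This sketch (our check, not the book's) enumerates every macrostate of 4 atoms sharing 6 quanta and counts each one's microstates as the multinomial coefficient of its occupation numbers:

```python
from math import factorial
from itertools import product

NA, NE = 4, 6  # four atoms, six quanta of energy in total

def microstates(occ):
    # Distinct assignments of atoms consistent with occupation numbers
    # (n0, ..., n6): NA! / (n0! n1! ... n6!)
    denom = 1
    for n in occ:
        denom *= factorial(n)
    return factorial(NA) // denom

# A macrostate is a tuple (n0, ..., n6): nk atoms at excitation level k,
# with the nk summing to NA and the energy levels summing to NE quanta.
macrostates = [
    occ for occ in product(range(NA + 1), repeat=NE + 1)
    if sum(occ) == NA and sum(k * n for k, n in enumerate(occ)) == NE
]

assert len(macrostates) == 9  # the nine macrostates listed in the example
assert microstates((3, 0, 0, 0, 0, 0, 1)) == 4
assert microstates((1, 1, 1, 1, 0, 0, 0)) == 24

# Summed over all macrostates, the microstates give the full statistical
# weight (NE + NA - 1)! / (NE! (NA - 1)!) = 9!/(6! 3!) = 84.
assert sum(microstates(m) for m in macrostates) == 84
```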

The second law of thermodynamics states that the total entropy of a closed system always increases,

ΔS ≥ 0,    (13.2)

with ΔS = 0 only in the special case of a reversible process.

A reversible process is an idealization of a process that is performed slowly enough that the system remains in equilibrium throughout, a so-called quasistatic process. In general, the total entropy of a closed system must increase; this is fundamentally a statistical statement about the probabilities of occupation numbers. As we saw in the last chapter, the internal energy of a system can change over time either by work being done or by a flow of heat. Given a variety of different events that can occur (satisfying energy conservation and other conservation laws), the one having the most possible microstates will be the one that occurs. The number of different microstates of a particular macrostate is intrinsically related to its degree of "randomness."

Mechanical energies are more "organized" and much less "random" in nature than thermal energies. The second law implies that, although both forms of energy may be equal in magnitude, statistics drives reactions or events toward producing thermal energy from mechanical energy in order to maximize entropy. Frictional forces are nonconservative precisely because the thermal energy they produce cannot be reversibly transformed back to mechanical energy. A general conclusion is that whenever the entropy of a closed system increases, the amount of energy available to do work decreases. Increasing entropy degrades the usefulness of energy. To see this from a microscopic picture, let's now return to our atomic model system of NA atoms with the energy levels shown in Figure 13.3 and ask how we can change the energy of the system.

We can change the internal energy of a system of atoms in three ways. We might add or subtract atoms (change NA). We might increase or decrease the atomic energy-level spacing (ε). And we might increase or decrease the total number of energy quanta of the system (NE), leaving the number of atoms fixed. As in previous discussions, we consider only closed systems, ones with fixed numbers of atoms, so that only the latter two options are available.

So, how can ε be changed? The exact value of ε depends on the details of how the atoms interact with each other and their container, but if the average region in which an atom is confined has length L, the value of ε is roughly proportional to 1/L^2. (We study this in some detail in Chapter 25, but the proportionality arises from quantum mechanics.) That is, by changing the volume in which the system is confined we change ε. In fact, if we change the volume very slowly, each atom will stay in its allowed state of motion and the total excitation number will not change. (This is a formal result derived from quantum mechanics.) A very slow change in volume changes only ε and not NE. That sounds a lot like what we have previously called work.

Similarly, we can change NE without changing ε. We place our system in close contact with a second system so that the atoms at the interface can swap energy. If one system has more energy per atom than the other (a larger value of εNE/NA) and if atomic interactions can effectively be taken to be random processes, then random scrambling will cause a preferential flow of quanta of energy from the system with the higher energy per atom to the system with the lower. This is demonstrated in Figure 13.4. Here white means "hot" (high number of energy quanta) and dark means "cold" (low number of quanta). Random swapping of energy quanta between atoms preferentially moves energy from the hot side to the cold because there are more quanta to select from on the hot side. (Quanta from the cold side move to the hot side also, but there are just fewer of them at first from which the random swapping process can choose.) As time goes on the quanta become more-or-less evenly distributed throughout the container. All of this sounds a lot like what we have called heat flow. So here is the atomic-level interpretation of work and heat flow. Changing the energy-level spacing ε of the atoms in a system corresponds to work; changing the number of energy quanta a system has corresponds to heat flow.

FIGURE 13.4 Sequence of snapshots of the flow of energy quanta from the initially hot (left) side to the colder (right) side of a system.
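The random-swapping picture of Figure 13.4 is easy to caricature in a few lines. This sketch (an illustration with arbitrarily chosen sizes, not a quantitative model) starts all the quanta on the "hot" left half and lets randomly chosen pairs of atoms exchange single quanta:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

NA = 40                       # 20 "left" atoms + 20 "right" atoms
quanta = [5] * 20 + [0] * 20  # all 100 quanta start on the hot left side

for _ in range(20000):
    donor, recipient = random.randrange(NA), random.randrange(NA)
    if quanta[donor] > 0 and donor != recipient:
        quanta[donor] -= 1   # one quantum hops between randomly chosen atoms
        quanta[recipient] += 1

left, right = sum(quanta[:20]), sum(quanta[20:])
assert left + right == 100   # every swap conserves energy
assert right > 0             # quanta have spread to the initially cold side
print(left, right)           # typically near an even split
```

The swaps have no built-in preference for direction; the net flow from hot to cold emerges purely because there are more quanta on the hot side to choose from, which is exactly the point of the text.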

Entropy has something to do with assessing how much internal energy is available to do work. To be useful, internal energy has to be concentrated. The more dilute or disorganized the internal energy, the less useful it is and the larger the entropy. Microscopically, entropy is a measure of the number of different ways you can distribute NE quanta over NA atoms. This is precisely the statistical weight Ω of Equation (13.1), and the occupation numbers represent the bookkeeping needed to keep track of this. The more ways you can divvy up the fixed total energy, in packages of quanta, over the atoms of the system, the less concentrated and the less useful the energy will be. Equivalently, the more "mixed up" and disordered the energy is, the greater is the system's entropy. Microscopically, entropy is a measure of disorder.

The formal expression for counting all of the different arrangements of energy is

Ω = (NE + NA − 1)! / (NE! (NA − 1)!),

where "!" means "factorial": N! = N(N − 1)(N − 2)(N − 3) . . . 1, as in Example 13.2. The number of arrangements of energy quanta over atoms increases extremely rapidly as either NE or NA increases. For example, suppose NA = 10 and NE = 10. Then Ω = 92,378. If the number of atoms just doubles to NA = 20, still with NE = 10, then Ω = 20,030,010, an increase of a factor of more than 200. Similarly, if NA = 10 but the number of quanta doubles to NE = 20, then Ω = 10,015,005, an increase of a factor of about 110.

One immediate consequence of all of this is that unusually concentrated arrangements of energy in a large system are extremely unlikely. Suppose we have 20 atoms and 40 energy quanta. The number of ways to arrange 40 quanta over the 20 atoms is Ω ≈ 1.4 × 10^15. The number of ways of arranging the 40 quanta on just 18 of the atoms is about 1.4 × 10^14. Thus, if all of the different arrangements of energy over atoms are equally likely (that's the random-swapping, microscopic form of "thermodynamic equilibrium") the chance of finding this system with all of its energy located on just 18 of the 20 atoms is (18)(19)/(58)(59) ≈ 0.10, about a ten percent chance of this happening spontaneously. This result is for just 20 atoms and 40 quanta. In a real macroscopic system, where the numbers of atoms and quanta are about 10^25, the chance that even a slight spontaneous concentration of energy would occur is unimaginably small (although, of course, it could happen). The point is, if we start a system off with its energy concentrated and let random atomic swapping processes mix energy units around for a while, the chance that the energy will spontaneously (just via the swapping processes) reassemble itself into a concentrated state is essentially zero. Increasing the number of ways to distribute the available energy among the atoms of a system degrades the usefulness of the energy. Thus, from this perspective thermodynamic equilibrium is just a matter of counting: there are vastly more states a large system can be in with its energy scattered about (and less useful) than states with energy clumped (and more useful).

It can be shown that entropy can be defined in an equivalent, strictly thermodynamic way based on the heat flow into or out of a system and its temperature. In these terms the second law of thermodynamics is written as

ΔS ≥ Q/T,    (13.3)

where Q is the heat input to the system at absolute temperature T and the equality again holds only for processes that are quasistatic. Loosely speaking, TΔS is a measure of the energy content of the "order" in a system. From Equation (13.3) it is seen that entropy has units of J/K or kcal/K, but it is often expressed in molar units of kcal/(mol·K).
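The counting formula is just the binomial coefficient Ω = C(NE + NA − 1, NA − 1), so the quoted values can be reproduced directly (a sketch for checking the arithmetic; the function name is ours):

```python
from math import comb

def omega(ne, na):
    # Statistical weight: ways to distribute ne identical quanta over
    # na distinguishable atoms = C(ne + na - 1, na - 1)
    return comb(ne + na - 1, na - 1)

assert omega(10, 10) == 92_378
assert omega(10, 20) == 20_030_010   # doubling the atoms
assert omega(20, 10) == 10_015_005   # doubling the quanta

# Concentrating 40 quanta on only 18 of 20 atoms is suppressed by the
# exact ratio (18)(19)/(58)(59), roughly a factor of ten.
ratio = omega(40, 18) / omega(40, 20)
assert abs(ratio - (18 * 19) / (58 * 59)) < 1e-12
print(round(ratio, 3))  # 0.1
```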

So, then, how do we understand the macroscopic relation ΔS = Q/T in terms of atoms? This macroscopic relation says that when heat flows into a system at constant temperature, the system's entropy increases. But microscopically entropy is disorder. So how does Q/T > 0 imply greater disorder? Well, if heat flows into a system at constant temperature, the system's volume has to increase; otherwise, the internal energy would increase and the temperature would rise as well. Because the volume increases, the region of confinement of each atom increases and the energy-level spacing decreases (recall that ε ∝ 1/L^2). The internal energy of the system starts out at εNE, but ε changes to a smaller value when the volume increases. To keep the internal energy constant (the temperature is constant), NE must therefore increase. This means that the number of quanta in the system increases when heat flows in at constant temperature. An increase in quanta, as we have argued, produces an increase in the number of ways of dividing up quanta among atoms, or more disorder. This is the microscopic reason why the entropy change is Q/T.
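The argument can be made concrete with the toy numbers used earlier (a sketch; the initial spacing of 2 × 10^−23 J and the factor-of-two volume change are arbitrary choices of ours): halving ε at fixed internal energy forces NE to double, and the entropy of Equation (13.1) rises.

```python
from math import comb, log

KB = 1.380649e-23  # Boltzmann constant, J/K

def omega(ne, na):
    # ways to distribute ne quanta over na atoms
    return comb(ne + na - 1, na - 1)

NA = 10
eps, ne = 2.0e-23, 10  # initial level spacing (J) and quanta (illustrative)
U = eps * ne           # internal energy, held fixed (constant temperature)

# The volume increases so that eps halves; to keep U fixed, NE doubles.
eps2, ne2 = eps / 2, 2 * ne
assert abs(eps2 * ne2 - U) < 1e-30

# Entropy change from the extra ways of distributing the extra quanta:
dS = KB * (log(omega(ne2, NA)) - log(omega(ne, NA)))
assert dS > 0  # heat in at constant T means more quanta, higher entropy
print(dS)      # a positive entropy change, of order 10^-23 J/K here
```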

Remember that the second law speaks of the total entropy of a closed system. We have seen that there are two classes of systems: closed, exchanging only heat but not mass with the surroundings, and open, exchanging mass as well as heat. The second law applies directly only to closed systems. Open systems can appear to violate the second law and have a decreasing entropy. Life itself is fundamentally a process that reduces entropy in a series of self-organizing processes. There is no violation of the second law because life cannot occur as a closed system. When the surroundings are included, the total entropy of the larger closed system always tends to increase. We are able to create ordered structures within our cells and organs at the expense of excess energy that we acquire from food. Said differently, we are able to live (and reduce our entropy) by increasing the entropy of our surroundings even more.

An interesting and important example of a molecular application of entropy is the structure of water. Water molecules are polar structures that form long-range hydrogen bonds, which we study in Chapter 15. Those bonds are relatively weak and constantly break and re-form on a picosecond (10^−12 s) timescale. Because each water molecule has two hydrogen atoms and therefore can have two possible hydrogen bonds, water can form a network of bonds (illustrated in Figure 13.5), known as a cluster, that may persist for ~30 ps before "dissolving." Pure water can be pictured as a dynamic assembly of clusters that constantly break and re-form, so that there is a fairly high degree of ordering in the water. In fact, the highly unusual thermal expansion property of water below 4°C, discussed in Section 2 of the previous chapter, is due precisely to the nature of the growing cluster formation as the temperature approaches the freezing point.

When a macromolecule is immersed in water it disrupts the organized clustering of water molecules in its neighborhood. Because of this effect, polar regions on the macromolecule will tend to lie near water, whereas hydrophobic portions tend to pack together internally to minimize contacts with water. An unfolded macromolecule will spontaneously fold into a characteristic native conformation (see Section 3 below). This phenomenon appears to be driven by strong interactions between the hydrophobic portions of the macromolecule, and is therefore called the hydrophobic interaction, but in fact the dominant interactions are entropic and are driven by the water's hydrogen bonding. The minimum energy with the macromolecule present is achieved in the more ordered state, with the water structure maintained as well as possible. The same effect occurs in membranes, where the hydrophobic lipids aggregate within the membrane bilayer so that the polar heads can be exposed to water, minimizing the decrease in ordering of the water. This explains the very common bilayer structure of biological membranes.

2. GIBBS FREE ENERGY

So far in our discussion of thermodynamics we have studied two energy functions, the internal energy U and the enthalpy H = U + PV, both introduced in the last chapter. We have also seen that in a closed system the entropy will be maximized, and in an open system the entropy of the {system + surroundings} will be maximized. It is useful to introduce another energy function, the Gibbs free energy G, which is particularly useful in open systems at constant temperature and pressure, the usual conditions in biology. We show that the free energy of an open system tends to decrease and that events (such as chemical reactions) will proceed spontaneously so long as the free energy decreases.

FIGURE 13.5 Cluster of water molecules formed by hydrogen bonding.

The Gibbs free energy is defined by

G = H − TS = U + PV − TS.    (13.4)

Under conditions of constant temperature and pressure, the only energy changes that can occur within an open system are PΔV work, heat flow to or from the surroundings, and other forms of useful work such as chemical or electrical work. Under those conditions, changes in free energy represent just those changes in "useful" work, hence the term "free," meaning available to do such useful work. The discussion in the box shows this, and that the Gibbs free energy must always decrease as a system approaches equilibrium and must remain at that minimum value at equilibrium.

For an isothermal process ΔG = ΔH − TΔS; thus, depending on the signs of ΔH and ΔS for a particular system, we can distinguish four different possibilities (see Table 13.1). If ΔH < 0 and ΔS > 0 then ΔG is certainly negative and the process will occur spontaneously, decreasing the free energy until equilibrium occurs. Similarly, if ΔH > 0 and ΔS < 0, then ΔG is positive and the process cannot proceed spontaneously, but could only proceed with some outside energy source. The two other cases are not as clear. If ΔH > 0 and ΔS > 0, then ΔG will be positive at low temperature, but may become negative at high temperature. Similarly, if ΔH < 0 and ΔS < 0, then ΔG will be negative at low temperature, but will become positive at high temperature. In these cases the process will only be spontaneous below or above a threshold temperature.

Table 13.1 Spontaneity of Thermodynamic Processes

ΔH     ΔS     ΔG               Reaction Occurs
< 0    > 0    < 0              Always
> 0    < 0    > 0              Never
< 0    < 0    < 0 at low T     Only at low T
> 0    > 0    < 0 at high T    Only at high T
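The logic of Table 13.1 is easy to check numerically. Below is a minimal sketch (Python; the function names are ours, and the ΔH and ΔS values are illustrative, not from the text):

```python
def delta_G(delta_H, delta_S, T):
    """Free energy change for an isothermal process: dG = dH - T*dS.

    delta_H in kcal/mol, delta_S in kcal/(mol*K), T in kelvin.
    """
    return delta_H - T * delta_S

def spontaneous(delta_H, delta_S, T):
    """A process can proceed spontaneously only when dG < 0."""
    return delta_G(delta_H, delta_S, T) < 0

# Row 1 of Table 13.1: dH < 0 and dS > 0 -> spontaneous at any temperature.
assert spontaneous(-1.0, 0.01, 300.0)
# Row 4: dH > 0 and dS > 0 -> spontaneous only above the threshold T = dH/dS.
assert not spontaneous(3.0, 0.01, 250.0)   # below the 300 K threshold
assert spontaneous(3.0, 0.01, 350.0)       # above it
```

The threshold temperature in the mixed-sign rows is just the point T = ΔH/ΔS where ΔG changes sign.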

The rest of this section explores the application of Gibbs free energy to various types of chemical reactions as a prelude to the next section on biological applications. In a solution, chemical work can be done by changing the numbers and types of components (reactants and products) within the system. In this case the change in the Gibbs free energy can be written as

ΔG = Σ(ΔGi) = Σ(μi Δni),    (13.5)

where the summation is over all the species {i} in solution, ni is the number of moles of species i, and μi is the Gibbs free energy per mole, known as the chemical potential, of species i.

In the special simple case of a phase equilibrium between two species, for example, water and ice, Equation (13.5) becomes

We can show, in a straightforward way, that the Gibbs free energy must decrease with time t in an isobaric, isothermal process. Using the first law, dU = Q − W, for our system, and writing W = PdV, we have

Q = dU + PdV = dU + d(PV) = dH,

where we have assumed an isobaric process (dP = 0, so that d(PV) = PdV), and used the definition H = U + PV. Inserting this expression for Q into the thermodynamic form of the second law (Equation (13.3)), we have

Q = dH ≤ TdS,

and by differentiating with respect to time we can write

dH/dt ≤ T dS/dt.

Putting both terms on the left side of the inequality,

d(H − TS)/dt = dG/dt ≤ 0,

where we have also assumed an isothermal process (dT = 0). We conclude that the free energy decreases with time for all such systems until a minimum is reached, at which point thermal equilibrium has been established. In the special case when no heat flows (dH/dt = 0) the decrease in free energy is matched by the increase in entropy alone.

To investigate the significance of the free energy, we start with its definition (Equation (13.4)) and write (using the product rule) that

dG = dU + PdV + VdP − TdS − SdT,

so that in an isobaric, isothermal process (P and T constant), we have

dG = dU + PdV − TdS.

Writing the first law as dU = Q − W, and noting that for a reversible process Q = TdS, we have on substituting for dU,

dG = (TdS − W) + PdV − TdS = PdV − W,

or

−dG = W − PdV.

We conclude that for a reversible isobaric, isothermal process the decrease in Gibbs free energy is equal to the "useful," non-PdV, work done by the system.

ΔG = μw Δnw + μi Δni.    (13.6)

Because nw + ni = constant, we know that any change in the number of moles of one species is due to the opposite change in the other, so that Δnw = −Δni. It follows from Equation (13.6) that at equilibrium, when ΔG = 0, we must have

μw = μi,

so that the molar chemical potentials of both species must be equal at equilibrium.

Let's consider in some detail the thermodynamics of a general bimolecular chemical reaction

nA A + nB B ⇌ nC C + nD D,    (13.7)

where nA is the relative number of moles of species A reacting with nB moles of B to produce nC moles of C and nD moles of D. To proceed, we need to know that the chemical potential can be written for the ith ideal solution component as

μi = μi0 + RT ln(ci),    (13.8)

where μi0 is its chemical potential at some standard condition and ci is its molar concentration. At equilibrium the total Gibbs free energy of the reactants must equal that of the products, resulting in ΔG = 0 for the reaction, so that we can write

nC μC + nD μD = nA μA + nB μB.

Substituting expressions from Equation (13.8) with appropriate subscripts for each term, we have

(nC μC0 + nD μD0 − nA μA0 − nB μB0) + RT(nC ln(cC) + nD ln(cD) − nA ln(cA) − nB ln(cB)) = 0,

and after using the mathematical facts that n ln(c) = ln(c^n) as well as that ln A + ln B = ln(AB), we find

ΔG0total + RT ln[cC^nC cD^nD / (cA^nA cB^nB)] = 0,    (13.9)

where ΔG0total is the first term in parentheses in the previous equation, equal to the net standard free energy for the reaction. In this expression the ci are now the equilibrium molar concentrations, although for clarity we do not label them differently. Defining the equilibrium constant for the reaction, Keq, as the term in brackets, we have that

ΔG0total = −RT ln Keq.    (13.10)

Note the general form of the equilibrium constant, having its numerator equal to the product of the equilibrium molar concentrations of the reaction products, each raised to the appropriate relative number of moles (as in the balanced chemical reaction equation), and its denominator equal to the same relation for the reactants.

Let's pause to digest these important results. In Equation (13.10), we note that if Keq > 1 then ΔG0total < 0 and the reaction will proceed spontaneously under standard conditions, a so-called exothermic reaction. If Keq < 1 then ΔG0total > 0 and the


reaction cannot proceed spontaneously, but requires external energy in order to occur, a so-called endothermic reaction. In biology many reactions are coupled reactions in which energy from a spontaneous exothermic reaction may be used to drive an otherwise unallowed endothermic reaction. The hydrolysis of ATP to ADP is the most common such spontaneous reaction in cells, with a value of ΔG0total = −7 kcal/mole at standard conditions (25°C, pH 7; not those in a cell), and is used to "drive" many endothermic coupled reactions. We discuss some aspects of the thermodynamics of ATP hydrolysis in the next section.
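In these units the link between ΔG0total and Keq is easy to evaluate; a minimal sketch (Python; the −7 kcal/mole ATP value is from the text, and R ≈ 1.99 × 10⁻³ kcal/(mol·K) is the gas constant in these units):

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)

def Keq_from_dG0(dG0, T=298.0):
    """Equilibrium constant from the standard free energy change,
    Keq = exp(-dG0/(R*T)), the inverse of Equation (13.10)."""
    return math.exp(-dG0 / (R * T))

def dG0_from_Keq(Keq, T=298.0):
    """Equation (13.10): dG0 = -R*T*ln(Keq)."""
    return -R * T * math.log(Keq)

# dG0 < 0 gives Keq > 1 (products favored); dG0 > 0 gives Keq < 1.
assert Keq_from_dG0(-7.0) > 1e4     # ATP hydrolysis is strongly product-favored
assert Keq_from_dG0(+3.4) < 1.0
# The two relations are inverses of one another.
assert abs(dG0_from_Keq(Keq_from_dG0(-7.0)) - (-7.0)) < 1e-9
```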

A second important point about Equation (13.10) is that it can be solved for Keq:

Keq = e^(−ΔG0total/RT),    (13.11)

so that a measurement of ΔG0total can be used to determine the equilibrium constant of the reaction. The term e^(−ΔG/RT), or, when more commonly written in per particle instead of per mole form, e^(−ΔE/kBT), is known as the Boltzmann factor and gives the relative populations of the two states separated by energy ΔE (Figure 13.6). Note that, as we saw in the last chapter, the term kBT is an energy, equal at room temperature (20°C) to 4 × 10⁻²¹ J or 1/40 eV (electron-volt, where 1 eV = 1.6 × 10⁻¹⁹ J). In the previous chapter we saw that kBT is roughly the thermal energy of a gas molecule, so that the ratio in the exponent of the Boltzmann factor compares the energy difference between the two states to the thermal energy of a particle. When ΔE is large compared to thermal energies, the exponent is large and negative, so that the population of the higher energy state is very small compared to that of the lower energy state. There is not enough thermal energy to excite reasonable numbers of particles to the higher energy state. On the other hand, if ΔE is small compared to kBT, then the exponent is close to zero and the exponential is close to one, so that the populations of the two states are comparable because there is sufficient thermal energy available to make an upward energy transition easily. We use the Boltzmann factor in later studies of atomic and molecular systems.

Our discussion has been based on equilibrium thermodynamics alone and as such does not give any information on times to reach equilibrium. Predictions can be made of whether reactions will occur spontaneously, but the rates of reactions cannot be determined from equilibrium thermodynamics alone. In concluding this section, we briefly consider some issues from reaction kinetics that concern the time dependence of reactions. We focus on a simplified version of the bimolecular reaction given by Equation (13.7) in which two reactant molecules, A and B, produce two product molecules, C and D (so that all ni = 1 in Equation (13.7)).

In a general way the steps of the reaction can be divided into three parts: the approach of A and B (often by diffusion), the reaction, and the separation of C and D. The free energy of interaction between molecules can be schematically represented as a function of the reaction coordinate, as shown in Figure 13.7, where the reaction coordinate is a parameter that indicates the progress of the reaction and so is related, but not necessarily proportional, to the elapsed time. The overall free energy change for the reaction is the net difference between the free energies of the final and initial states.

Typically in such a reaction, there will be an energy barrier, or activation energy, that needs to be overcome before the reaction products can be formed. This may be due to charge interactions or to steric effects requiring a more ordered arrangement of A and B before they can react. If this activation energy is small, then the "rate-limiting step" may be the simple coming together of A and B. In this case the reaction is known as diffusion-controlled (or diffusion-limited). With larger activation energy, the reaction is said to be reaction-controlled. In this case, remembering that the thermal energies of A and B are not equal but distributed about an average, only the more energetic molecules with sufficient energy to "climb" the

FIGURE 13.6 The Boltzmann factor gives the relative populations of two energy levels with populations N1, N2 and energies E1, E2: N2/N1 = e^(−ΔE/kBT), where ΔE = E2 − E1.
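The population ratio in Figure 13.6 can be evaluated directly; a sketch (Python; the illustrative energy gaps are ours):

```python
import math

kB = 1.381e-23  # Boltzmann constant, J/K

def population_ratio(delta_E, T):
    """Relative populations of two states, N2/N1 = exp(-dE/(kB*T)),
    with delta_E = E2 - E1 in joules."""
    return math.exp(-delta_E / (kB * T))

kT_room = kB * 293.0  # ~4e-21 J at room temperature, as quoted in the text

# dE >> kBT: the upper state is essentially unpopulated.
assert population_ratio(10 * kT_room, 293.0) < 1e-4
# dE << kBT: the two populations are nearly equal.
assert population_ratio(0.01 * kT_room, 293.0) > 0.98
```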

FIGURE 13.7 An energy diagram for a general chemical reaction, showing the activation energy Eact of the forward reaction and the overall free energy change ΔE as the reactants A + B pass through the intermediate complex (AB)* to the products C + D. The particular reaction shown here is endothermic.

energy barrier can interact and form an intermediate complex (AB)* that can then form products.

Many reaction-controlled processes in biology are modulated by enzymes, proteins that effectively lower the activation energy of reactions to enhance their completion in a process known as catalysis. Enzymes are highly specific, each having a unique active site at which binding to a specific macromolecule (substrate) occurs. Lowering of activation energies by enzymes may speed up that particular reaction by tremendous factors, often as much as 10¹⁴ times. Upon completion of the enzyme-assisted reaction, the enzyme molecules are released unchanged and can bind another substrate.
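The enormous speed-up factors quoted above follow from the Boltzmann factor: if the rate is proportional to e^(−Eact/RT), then lowering the barrier by ΔEact multiplies the rate by e^(ΔEact/RT). A sketch (Python; the 19 kcal/mol figure is our illustrative number, chosen to reproduce the order of magnitude in the text):

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def rate_enhancement(delta_Eact, T=298.0):
    """Factor by which a reaction speeds up when the activation energy is
    lowered by delta_Eact (kcal/mol), assuming rate ~ exp(-Eact/(R*T))."""
    return math.exp(delta_Eact / (R * T))

# Lowering a barrier by ~19 kcal/mol gives a speed-up of order 1e14,
# comparable to the enzyme enhancements quoted in the text.
assert 1e13 < rate_enhancement(19.0) < 1e15
```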

3. BIOLOGICAL APPLICATIONS OF STATISTICAL THERMODYNAMICS

In this section several examples of important biological processes are considered from a thermodynamic point of view. The energy-driving mechanisms of ATP hydrolysis and photosynthesis are first considered from an overall energy and molecular perspective. As they are important molecular processes in many facets of biology, we also briefly consider conformational transitions in macromolecules, including protein folding, helix–coil transitions in biopolymers, and the self-assembly processes in polymerization.

If the food we eat were to be simply burned, all its energy would go to heat. In order for our bodies to utilize some fraction of this energy, elaborate reactions occur that convert some of the energy stored in various foods into ATP. For example, each molecule of glucose, when completely oxidized, yields about 36 molecules of ATP, with an energy conversion efficiency of over 50%. Such an efficiency is much higher than that of man-made motors or engines, with typical efficiencies of 10–20%. As you probably know, ATP is the predominant source of energy for chemical reactions in all living cells and is usually present at fairly high concentrations of 1–10 mM (where 1 mM = 10⁻³ M).

The ATP molecule consists of the parts shown schematically in Figure 13.8: adenine with ribose attached and the three phosphate groups. Under physiological conditions, ATP is highly negatively charged and has divalent cations (Mg²⁺ or Ca²⁺) bound. Hydrolysis (or splitting) of ATP involves the combining of a water molecule with the phosphate group farthest from the ribose to produce ADP and inorganic phosphate. The reaction releases a relatively large amount of energy; the farthest phosphate bond in ATP is said to be a high-energy phosphate bond. The precise total free energy change from the hydrolysis of ATP to ADP will depend on local concentrations of ATP, ADP, and phosphate, but typical actual free energy changes in cells are quite large, ranging from ΔG = −11 to −13 kcal/mole. This reaction is so favorable and likely to proceed spontaneously that ATP must be constantly replenished in the mitochondria of cells. If allowed to reach thermal equilibrium, the cell would die. Rather, ATP concentration is maintained in a complex nonequilibrium steady-state reaction.

ATP plays an essential role in nearly all biosynthetic reactions, producing new protein (using most of the cell's ATP) as well as DNA, RNA, and polysaccharides in all cells. Each day an average adult hydrolyzes, as well as produces, over 70 kg (roughly the person's weight) of ATP. The large free energy change of ATP hydrolysis can be linked with other reactions that have positive free energy changes, so that the coupled reactions become energetically feasible.
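As a rough check on the 70 kg figure (a sketch; the molar mass of ATP, about 507 g/mol, is a number we have added, and −7 kcal/mole is the standard value quoted above):

```python
ATP_MOLAR_MASS = 507.0   # g/mol, approximate (our added value, not from the text)
DG0_HYDROLYSIS = -7.0    # kcal/mol, standard value from the text

mol_per_day = 70.0 * 1000.0 / ATP_MOLAR_MASS      # ~138 mol of ATP per day
kcal_per_day = mol_per_day * abs(DG0_HYDROLYSIS)  # ~970 kcal/day at standard conditions

# This is a sizable fraction of a resting metabolic output of roughly 2000 kcal/day,
# consistent with ATP being the cell's main energy currency.
assert 100 < mol_per_day < 200
assert 500 < kcal_per_day < 1500
```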

Let's consider an example reaction to illustrate the role of ATP in synthesizing glutamine, an amino acid. As with all such syntheses, the key to ATP's effect is the energetic coupling via a common intermediate. Figure 13.9 shows the free energy

BIOLOGICAL APPLICATIONS OF STATISTICAL THERMODYNAMICS 341

FIGURE 13.8 Block diagram of the ATP (adenosine triphosphate) molecule: adenine, ribose, and three phosphate groups, joined by its high-energy bonds (~).

FIGURE 13.9 Free energy diagram for glutamine synthesis, combining reaction #1 (ATP + H2O → ADP + phosphate) and reaction #2 (glutamic acid + NH3 → glutamine + H2O) into the overall reaction #1+#2 (ATP + glutamic acid + NH3 → ADP + phosphate + glutamine). The energy from ATP hydrolysis is used to form a high-energy intermediate from glutamic acid that subsequently combines with ammonia to form glutamine. The separate reaction #2 does not occur without energy input. Coupling of the two reactions #1 and #2 leads to an overall reaction that proceeds through the common intermediate with a net release of free energy.


changes associated with ATP hydrolysis and the unfavorable reaction forming glutamine from glutamic acid and ammonia. This latter reaction alone has a standard free energy change of ΔG0 = +3.4 kcal/mole and cannot proceed without a source of energy. Coupling with ATP hydrolysis to form a "high energy intermediate" allows the biosynthesis to occur with a net standard free energy change of (−7 + 3.4 =) −3.6 kcal/mole. In order to replace the macromolecular building blocks of the organism, ATP must be continually produced. All animals and most microorganisms rely on photosynthesis as their ultimate source of food.
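The bookkeeping for coupled reactions is just the addition of standard free energy changes; a minimal sketch (Python; values from the text):

```python
def couple(*dG0_values):
    """Net standard free energy change of coupled reactions sharing a
    common intermediate: the individual dG0 values simply add."""
    return sum(dG0_values)

dG0_ATP_hydrolysis = -7.0        # kcal/mol, reaction #1
dG0_glutamine_synthesis = +3.4   # kcal/mol, reaction #2, unfavorable alone

net = couple(dG0_ATP_hydrolysis, dG0_glutamine_synthesis)
assert abs(net - (-3.6)) < 1e-9  # net standard free energy change of -3.6 kcal/mol
assert net < 0                   # the coupled reaction proceeds spontaneously
```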

Green, chlorophyll-containing plants are the ultimate converters of energy supplied by the sun into oxygen and organic molecules that sustain life. In its most simplistic form, photosynthesis converts carbon dioxide and water to glucose and oxygen in an overall reaction

6CO2 + 6H2O → C6H12O6 + 6O2.

The free energy change for this reaction is ΔG0 = +686 kcal/mole, so that clearly the process does not occur spontaneously, but must have an outside energy supply in the form of photons of light. Photosynthesis is a unique process that harvests photon energy into chemical energy. More than 100 sequential reaction steps have been elucidated in the overall reaction, each with a specific enzyme.

Briefly, the overall process can be divided into two major portions known as the light and dark reactions. The light reactions, requiring photons and unique to photosynthesis, first convert water to free oxygen, protons, and electrons. The protons are pumped across a membrane, generating ATP, and the electrons bind to an enzyme (NADP) to be used in a subsequent coupled reaction. The dark reactions use the ATP and the electron-donor enzyme NADP to convert carbon dioxide to glucose. For each carbon dioxide molecule 8 photons are needed, for a total of 48 photons per glucose molecule. The efficiency of conversion of photon energy at the site of photon absorption, the reaction center, is about 20%, whereas the overall efficiency of photosynthesis is about 5% under optimal conditions. Uncovering the molecular details of photosynthesis is an active area of research involving lots of physics. For example, pulsed laser experiments carried out at very low temperatures have shown that the earliest steps in the direct absorption of a photon occur faster than 1 ps (10⁻¹² s). Spectroscopy of various types has been essential in unraveling the kinetics and conformational changes that occur as the photon energy is distributed to various chemical bonds.

Finally, we consider some thermodynamic aspects of the conformations of macromolecules. As discussed in Section 5 of Chapter 3, there are certain biostructural motifs that are common in nature: the α-helix in proteins, the Watson–Crick double helix in DNA, or the self-association of identical protein molecules to form complex structures such as the filamentous polymer actin or smaller aggregates such as hemoglobin (Figure 13.10). Under certain conditions, macromolecules may spontaneously form these ordered conformations or aggregates from less well-ordered states of random coil or from isolated monomer subunits, respectively. The driving mechanisms are the detailed electrical bonds that form between portions of the macromolecules, or between individual subunits, stabilizing the overall structures. Even without that detailed electrical information, thermodynamic quantities can give some general information about the possible conformational reactions and some insight as to the mechanisms and stability of various ordered configurations of macromolecules.

Proteins in their native form have unique conformations that consist of regions of more (helix, β-sheet) or less (random coil) order. If a protein is mildly heated so that enough thermal energy is added to break the weaker bonds that maintain the secondary conformation, but not so much as to break covalent bonds along the protein backbone, then the protein can lose its overall structure and become entirely random coil in a process known as denaturation. If cooled under controlled conditions, proteins will often spontaneously renature to form native, functioning protein molecules.

We can understand this behavior from some simple thermodynamic arguments. Comparing the denatured and native helical (for example) conformations, it is clear that the entropy of the denatured state is greater. This is due to the fact that the coil is a much more random structure, with many more possible ways to distribute its energy and thus a much larger statistical weight Ω and entropy, related through Equation (13.1). We can write this as ΔScoil > 0, with reference to the helix state. Furthermore, it is clear that in order to disrupt the secondary bonding to form the coil from the helix, heat must be input, and so ΔHcoil > 0 for the coil, again compared to the helix. Combining these, we see that

ΔGcoil = ΔHcoil − TΔScoil

may be positive at low temperatures, but may become negative at a sufficiently high temperature (see Table 13.1). Thus, the helix is stable at lower temperatures whereas the coil is stable at higher temperatures.

Furthermore, we know that

Keq = e^(−ΔGcoil/RT),  where  Keq = ccoil/chelix,

with these concentrations representing the fraction of protein residues in each conformation. The transition from having most residues in the helix (small Keq and therefore large ΔGcoil > 0) to having most in the coil (large Keq and therefore large ΔGcoil < 0) will occur over a range of temperature as the protein is heated. Note that if both ΔH and ΔS are themselves large as well as positive, then whether their


FIGURE 13.10 Three structural motifs in biomolecules: (a) alpha helix, (b) beta-sheet, and (c) double helix. (d) The protein lysozyme, showing regions of alpha helix (red) and beta sheet (green) as well as random coil, and (e) hemoglobin, composed of four identical subunits shown in colors.


difference in the expression for ΔG (= ΔH − TΔS) is positive or negative becomes a very sensitive function of T, and the "melting transition" of a protein will occur over a narrow temperature range, as is actually observed for most proteins (Figure 13.11). Because the values of ΔH and ΔS are modest for each residue's bonds, this sharp melting occurs as a result of a cooperative transition in which many residues melt simultaneously.
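The sharpness of the melting transition can be seen numerically: writing the helix fraction as 1/(1 + Keq) with Keq = e^(−ΔGcoil/RT), large ΔH and ΔS (many residues melting together) concentrate the transition around Tm = ΔH/ΔS. A sketch (Python; the numerical values are illustrative, not from the text):

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def helix_fraction(dH, dS, T):
    """Fraction of residues in the helix state.

    Keq = c_coil/c_helix = exp(-dG_coil/(R*T)) with dG_coil = dH - T*dS,
    so f_helix = 1/(1 + Keq)."""
    Keq = math.exp(-(dH - T * dS) / (R * T))
    return 1.0 / (1.0 + Keq)

# Illustrative cooperative-unit values giving Tm = dH/dS = 333 K.
dH, dS = 100.0, 100.0 / 333.0
assert helix_fraction(dH, dS, 323.0) > 0.9   # 10 K below Tm: mostly helix
assert helix_fraction(dH, dS, 343.0) < 0.1   # 10 K above Tm: mostly coil
```

Making dH and dS ten times smaller (same Tm) spreads the same change in helix fraction over a far wider temperature range, which is the content of the cooperativity argument.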

Similarly, if the coil-to-helix transition is monitored, one discovers that this transition is also cooperative, meaning that after several energetically costly bonds are formed, subsequent bonding occurs with less energy required per bond. The large initial energy needed to form the several bonds that greatly restrict possible conformations of the backbone substantially decreases the entropy. Once that initial start is formed in the helix, additional neighboring bonds form rapidly with less energy per bond required.

For the case of subunit assembly in a protein or other biopolymer, there is a decrease in entropy as subunits form a larger structure. This is true because the overall translational and rotational motions of the subunits are coupled together and many side chains become immobilized as well, reducing the number of degrees of freedom and thereby increasing the order. A typical decrease in entropy of dimerization is about 0.1 kcal/mol-K, corresponding to about +30 kcal/mol of free energy (the term −TΔS, with T ~ 300 K) at room temperature. In order for dimerization to proceed spontaneously, there must be a source of free energy for the reaction so that the overall free energy change is negative. Most of this energy comes from hydrophobic interactions when water is excluded from the surface area of subunit contact. Because on dimerization less total protein surface is exposed to water, there is a decrease in this contribution to the free energy, as discussed in Section 1 above. Estimates are that in a typical dimerization of a protein, 10–20 nm² of surface area previously exposed to water becomes internalized within the dimer. At an average free energy change of about −2.5 kcal/mol/nm² of surface area, hydrophobic interactions result in a ΔG of −25 to −50 kcal/mol. In addition there are specific bonds (hydrogen, van der Waals) between the protein subunits causing the dimer to be stabilized. Many macromolecules can continue to add subunits spontaneously and rapidly to form a long polymer molecule. Included are such important molecules as DNA, RNA, and the proteins actin and tubulin.
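The dimerization budget above can be tallied in a few lines (a sketch using the text's estimates; the function name is ours):

```python
def net_dG(T, delta_S, buried_nm2, dG_per_nm2, bonds_dG=0.0):
    """Net free energy of dimerization (kcal/mol): the entropic cost -T*dS
    plus the hydrophobic gain from buried surface area plus any specific
    (hydrogen, van der Waals) bond contributions."""
    return -T * delta_S + buried_nm2 * dG_per_nm2 + bonds_dG

entropic_cost = -300.0 * (-0.1)   # -T*dS ~ +30 kcal/mol, as in the text
assert abs(entropic_cost - 30.0) < 1e-9

# Burying 15 nm^2 at -2.5 kcal/mol/nm^2 (-37.5 kcal/mol) overcomes the
# entropic cost, so dimerization can proceed spontaneously.
assert net_dG(300.0, -0.1, 15.0, -2.5) < 0
```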

FIGURE 13.11 Typical temperature dependence of the melting of a helical protein: the helix fraction falls from 1.0 to 0 around the melting temperature Tmelting. The cooperative transition is characterized by a relatively sharp decrease in the helix content of the protein.

CHAPTER SUMMARY

In treating macroscopic systems composed of large numbers of particles, statistical methods are used. A microstate is defined as a detailed specific state (one of an extremely large number) in which each atom in the system has a particular energy level. A macrostate, in contrast, is defined by the set of energy levels and the numbers of atoms in each level, the occupation numbers; this information defines the overall energy of the system, but, in general, there are many, many microstates that all produce the same macrostate. Entropy, S, is defined in terms of the statistical weight of the system Ω, which is a function that contains all the occupation number information, as

S = kB ln Ω.    (13.1)

Whereas overall energy conservation holds for an isolated system, various forms of energy have different degrees of "order," or "usefulness," or entropy. For example, such a system of particles with only thermal energy, in the form of random diffusive motions, is less ordered and less useful than the equivalent amount of energy in the form of overall translational kinetic energy. The system with only an overall translational energy will have lower entropy than the thermal system because such a translating system is much more ordered and there are very many fewer ways that the energy can be distributed over the possible macrostates. On the other hand, such a system will tend to thermalize, or randomize its motion over time, heading toward the thermal system, and thus increasing its entropy over time. This idea is contained in the second law of thermodynamics, which states that the total entropy of a closed system always increases,

ΔS ≥ 0,    (13.2)

with ΔS = 0 only in the special case of a reversible process. An alternate statement of this law is contained in

ΔS ≥ Q/T,    (13.3)

where Q is the heat input to the system at absolute temperature T.

Another thermodynamic variable that is particularly useful in open systems at constant pressure and temperature, conditions often occurring in biology, is the Gibbs free energy, G,

G = H − TS = U + PV − TS.    (13.4)

As an example of its utility, G can be related to the equilibrium constant of a chemical reaction, Keq,

ΔG0total = −RT ln Keq.    (13.10)

Given a set of energy levels in an atomic system, with energies Ei and populations Ni, the Boltzmann factor gives the relative populations of any two states, for example 1 and 2, as

N2/N1 = e^(−(E2−E1)/kBT).

In Section 3, we considered two specific applications of some of these ideas: coupled kinetic reactions in the hydrolysis of ATP and the helix–coil melting transition of a protein. Analysis of both of these involves studying the Gibbs free energy changes, resulting from both enthalpy and entropy changes.

QUESTIONS

1. The figure below shows a P–V diagram in which an ideal gas goes from state A back to state A in a reversible cycle via the processes A→B, B→C, C→A. In each entry of the following table insert +, −, or 0 to indicate the sign of the associated quantity.

FIGURE: P–V diagram for Question 1, showing the cycle through states A, B, and C.

         ΔU    Q    W    ΔS
A→B
B→C
C→A
Total

2. In the following table check the boxes of those quantities that must be zero in the respective reversible process. Assume the system is an ideal gas.

      Isobaric    Isothermal    Isochoric    Adiabatic
ΔU
ΔT
ΔP
ΔV
ΔS
Q
W

3. Please order the following from highest to lowest entropy: 1 kg of ice, water, and water vapor.

4. Discuss a colloquial statement of the second law: the energy available for useful work always decreases.

5. Find three examples of a system going from less ordered to more ordered and discuss why the second law of thermodynamics is not violated in each case.

6. Some cashiers arrange dollar bills to all face the same way, whereas others do not. Which pile of bills has more entropy?


7. Discuss the molecular basis of the hydrophobic effect. In particular, which is the more fundamental process: the attraction of hydrophobic portions of a macromolecular structure, or the minimization of the disruption of hydrogen bonding in water?

8. Discuss why the Gibbs free energy is appropriately named "free."

9. Discuss the difference between an endothermic and an exothermic reaction. What state variable determines which one a particular reaction is?

10. Explain the difference between reaction-controlled and diffusion-controlled chemical processes.

11. What is the difference between the reversible melting of a biopolymer and its irreversible denaturation?

12. What does it mean for a transition in a macromolecule to be cooperative? Give an example.

13. What is the function of an enzyme?

MULTIPLE CHOICE QUESTIONS

1. Which of the following statements is false? The entropy of a closed system (a) is a measure of its disorder, (b) always increases unless the process is quasistatic, (c) is a measure of the dilution of internal energy among allowed microstates of the system, (d) is proportional to the statistical weight of the system.

2. Suppose there are three identical atoms, each with energy levels given in Figure 13.3. If the total energy of the system is 3ε, the number of macrostates of the system is (a) 1, (b) 2, (c) 3, (d) 4.

3. In the previous question one of the macrostates is (1, 1, 1, 0), using the notation of Example 13.2. How many microstates correspond to this macrostate? (a) 1, (b) 2, (c) 3, (d) 6.

4. A hypothetical engine operates in a cycle taking in 10,000 J from a hot reservoir and 5000 J from a cold reservoir. In the cycle it performs 15,000 J of work. Such an engine (a) obeys both the first and second laws of thermodynamics, (b) obeys the first law but violates the second law of thermodynamics, (c) violates the first law but obeys the second law of thermodynamics, (d) violates both the first and second laws of thermodynamics.

5. The zeroth law of thermodynamics concerns bodies A, B, and C, and the relation "is in thermal equilibrium with." Suppose each of the following relations is substituted for "is in thermal equilibrium with." For which relation will the "zeroth law" fail? (a) "communicates via email with," (b) "is as tall as," (c) "works in the same building with" (assume one job for each), (d) "owns the same model car as" (assume one car for each).

6. Living cells constitute a low entropy state of matter. Living cells (a) violate the second law of thermodynamics, (b) can exist because they help increase the entropy of the rest of the universe, (c) are not subject to physical laws such as thermodynamics, (d) demonstrate that the laws of thermodynamics are incomplete.

PROBLEMS

1. At rest, our bodies generate heat at a rate of about 100 W. Calculate the minimum amount of entropy we generate in a day, neglecting the small entropy increase from eating.

2. What is the entropy change of a cube of water 1 cm on a side that freezes at 0°C?

3. Repeat the calculations of Example 13.1 for the case of six coins. Make a table showing the possible microstates and macrostates and find the probabilities of each macrostate.

4. The splitting of ATP can be schematically given as ATP + H2O → ADP + P. If the reaction has a ΔG = −7 kcal/mole at 25°C, what is the equilibrium constant at that temperature?

5. If Equation (13.10) is solved for ln Keq and (ΔH − TΔS) is substituted for ΔG, we can write that

ln Keq = −ΔH/(RT) + ΔS/R.

Describe how you might use this equation to determine both ΔH and ΔS from a knowledge of Keq as a function of temperature. Such a graphing procedure is known as a van't Hoff graph. What assumptions are involved in your analysis?

6. Suppose there are three identical atoms, each with energy levels shown in Figure 13.3. If the total energy of the system is 4ε, find all possible macrostates and the number of microstates for each of them. Use the notation of Example 13.2.

7. Re-do the previous problem for the case when the total energy of the three atoms is 6ε.