Energy Transformation


First Edition, 2011
ISBN 978-93-81157-53-4

© All rights reserved. Published by: The English Press 4735/22 Prakashdeep Bldg, Ansari Road, Darya Ganj, Delhi - 110002 Email: [email protected]


Table of Contents

Chapter 1 - Introduction to Energy Transformation

Chapter 2 - Thermoelectric materials

Chapter 3 - Geothermal Electricity

Chapter 4 - Heat Engine

Chapter 5 - Ocean Thermal Energy Conversion

Chapter 6 - Hydroelectricity

Chapter 7 - Electrical Generator

Chapter 8 - Fuel Cell

Chapter 9 - Wave Power

Chapter 10 - Piezoelectric Sensor

Chapter 11 - Friction

Chapter 12 - Battery (Electricity)


Chapter- 1 Introduction to Energy Transformation

Energy Transformation in Energy Systems Language

In physics, the term energy describes the capacity to produce changes within a system, without regard to limitations in transformation imposed by entropy. Changes in total energy of systems can only be accomplished by adding or subtracting energy from them, as energy is a quantity which is conserved, according to the first law of thermodynamics. According to special relativity, changes in the energy of systems will also coincide with changes in the system's mass, and the total amount of mass of a system is a measure of its energy.

Energy in a system may be transformed so that it resides in a different state. Energy in many states may be used to do many varieties of physical work. Energy may be used in natural processes or machines, or else to provide some service to society (such as heat, light, or motion). For example, an internal combustion engine converts the potential chemical energy in gasoline and oxygen into heat, which is then transformed into propulsive energy (the kinetic energy that moves the vehicle). A solar cell converts solar radiation into electrical energy that can then be used to light a bulb or power a computer.

The generic name for a device which converts energy from one form to another is a transducer.

In general, most types of energy, save for thermal energy, may be converted to any other kind of energy with a theoretical efficiency of 100%. In practice, such efficiencies, as when chemical potential energy is completely converted into kinetic energy and vice versa, can occur only in isolated systems.

Conversion of other types of energy to heat may also occur with high efficiency, but perfect efficiency would likewise be possible only in an isolated system.

If there is nothing beyond the frontiers of the universe then the only real isolated system would be the universe itself. Currently we do not have the knowledge or technology to create an isolated system from a portion of the universe.

Exceptions to perfect efficiency (even for isolated systems) occur when energy has already been partly distributed among many available quantum states for a collection of particles, which are freely allowed to explore any state of momentum and position (phase space). In such circumstances, a measure called entropy, or evening-out of energy distribution in such states, dictates that future states of the system must be of at least equal evenness in energy distribution. (There is no way, taking the universe as a whole, to collect energy into fewer states, once it has spread to them.)

A consequence of this requirement is that there are limitations to the efficiency with which thermal energy can be converted to other kinds of energy, since thermal energy in equilibrium at a given temperature already represents the maximal evening-out of energy between all possible states. Such energy is sometimes considered "degraded energy," because it is not entirely usable. The second law of thermodynamics is a way of stating that, for this reason, thermal energy in a system may be converted to other kinds of energy with efficiencies approaching 100% only if the entropy (evenness or disorder) of the universe is increased by other means, to compensate for the decrease in entropy associated with the disappearance of the thermal energy and its entropy content.


Otherwise, only a part of thermal energy may be converted to other kinds of energy (and thus, useful work), since the remainder of the heat must be reserved to be transferred to a thermal reservoir at a lower temperature, in such a way that the increase in entropy for this process more than compensates for the entropy decrease associated with transformation of the rest of the heat into other types of energy.
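
The arithmetic behind this limit is the Carnot bound: at most 1 − TC/TH of the heat drawn from a hot source can become work, with the remainder rejected to the cold reservoir. A minimal sketch in Python; the temperatures are illustrative, not figures from the text:

```python
# Carnot limit: the largest fraction of heat convertible to work when
# operating between a hot source and a cold sink (temperatures in kelvin).
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    return 1.0 - t_cold_k / t_hot_k

# Example: an 800 K source rejecting heat to a 300 K environment.
print(f"{carnot_efficiency(800.0, 300.0):.2%}")  # 62.50%
```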

History of energy transformation from the early universe

Energy transformations in the universe over time are (generally) characterized by various kinds of energy which has been available since the Big Bang, later being "released" (that is, transformed to more active types of energy such as kinetic or radiant energy), when a triggering mechanism is available to do it. A direct transformation of energy occurs when hydrogen produced in the Big Bang collects into structures such as planets, in a process during which gravitational potential energy may be converted directly into heat. In Jupiter, Saturn, Uranus, and Neptune, for example, such heat from the continued collapse of the planets' large gaseous atmospheres continues to drive most of the planets' weather systems, with atmospheric bands, winds, and powerful storms.

Familiar examples of other such processes transforming energy from the Big Bang include nuclear decay, in which energy is released which was originally "stored" in heavy isotopes, such as uranium and thorium. This energy was stored at the time of these elements' nucleosynthesis, a process which ultimately uses the gravitational potential energy released from the gravitational collapse of supernovae to store energy in the creation of these heavy elements before they were incorporated into the solar system and the Earth. This energy in uranium is triggered for sudden release in nuclear fission bombs, and similar stored energies in atomic nuclei are released spontaneously, during most types of radioactive decay. In this process, the energy released by the decay of these atoms in the core of the Earth is transformed immediately to heat. This heat in turn may lift mountains, via plate tectonics and orogenesis. This slow lifting of terrain thus represents a kind of gravitational potential energy storage of the heat energy. The stored potential energy may be released to active kinetic energy in landslides, after a triggering event. Earthquakes also release stored elastic potential energy in rocks, a kind of mechanical potential energy which has been produced ultimately from the same radioactive heat sources. Thus, according to present understanding, familiar events such as landslides and earthquakes release energy which has been stored as potential energy in the Earth's gravitational field, or elastic strain (mechanical potential energy) in rocks. Prior to this, the energy represented by these events had been stored in heavy atoms, ever since the time that gravitational potentials transforming energy in the collapse of long-destroyed stars created these atoms, and in doing so, stored the energy within them.

In another similar chain of transformations beginning at the dawn of the universe, nuclear fusion of hydrogen in the Sun releases another store of potential energy which was created at the time of the Big Bang. At that time, according to theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This meant that hydrogen represents a store of potential energy which can be released by nuclear fusion. Such a fusion process is triggered by heat and pressure generated from gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into sunlight. Such sunlight may again be stored as gravitational potential energy after it strikes the Earth, as (for example) snow avalanches, or when water evaporates from oceans and is deposited high above sea level (where, after being released at a hydroelectric dam, it can be used to drive turbine/generators to produce electricity). Sunlight also drives many weather phenomena on Earth. An example of a solar-mediated weather event is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, give up some of their thermal energy suddenly to power a few days of violent air movement. Sunlight is also captured by plants as chemical potential energy, when carbon dioxide and water are converted into a combustible combination of carbohydrates, lipids, and oxygen. Release of this energy as heat and light may be triggered suddenly by a spark, in a forest fire; or it may be available more slowly for animal or human metabolism, when these molecules are ingested, and catabolism is triggered by enzyme action.

Through all of these transformation chains, potential energy stored at the time of the Big Bang is later released by intermediate events, sometimes being stored in a number of ways over time between releases, as more active energy. In all these events, one kind of energy is converted to other types of energy, including heat.

Examples of sets of energy conversions in machines

For instance, a coal-fired power plant involves these power transfers:

1. Chemical energy in the coal converted to thermal energy
2. Thermal energy converted to kinetic energy in steam
3. Kinetic energy converted to mechanical energy in the turbine
4. Mechanical energy of the turbine converted to electrical energy, which is the ultimate output

In such a system, the last step is almost perfectly efficient, the first and second steps are fairly efficient, but the third step is relatively inefficient. The most efficient gas-fired electrical power stations can achieve 50% conversion efficiency. Oil and coal fired stations achieve less.
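
The overall conversion efficiency of such a chain is the product of the per-step efficiencies, which is why one weak step dominates the result. A small sketch; the step values below are assumed round numbers for illustration, not measured plant data:

```python
# Chained energy conversions: overall efficiency is the product of the
# per-step efficiencies. All numbers are illustrative assumptions.
steps = {
    "chemical -> thermal (boiler)": 0.90,
    "thermal -> kinetic (steam)": 0.85,
    "kinetic -> mechanical (turbine)": 0.45,   # the relatively inefficient step
    "mechanical -> electrical (generator)": 0.98,
}

overall = 1.0
for name, eta in steps.items():
    overall *= eta
    print(f"{name}: {eta:.0%} (running total {overall:.1%})")
# The final running total lands near 34%, in line with typical coal plants.
```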

In a conventional automobile, these power transfers are involved:

1. Potential energy in the fuel converted to kinetic energy of expanding gas via combustion

2. Kinetic energy of expanding gas converted to linear piston movement
3. Linear piston movement converted to rotary crankshaft movement
4. Rotary crankshaft movement passed into transmission assembly
5. Rotary movement passed out of transmission assembly
6. Rotary movement passed through differential
7. Rotary movement passed out of differential to drive wheels
8. Rotary movement of drive wheels converted to linear motion of the vehicle.

Other energy conversions

There are many different machines and transducers that convert one energy form into another. A short list of examples follows:

• Thermoelectric (Heat → Electricity)
• Geothermal power (Heat → Electricity)
• Heat engines, such as the internal combustion engine used in cars, or the steam engine (Heat → Mechanical energy)
• Ocean thermal power (Heat → Electricity)
• Hydroelectric dams (Gravitational potential energy → Electricity)
• Electric generator (Kinetic energy or Mechanical work → Electricity)
• Fuel cells (Chemical energy → Electricity)
• Battery (electricity) (Chemical energy → Electricity)
• Fire (Chemical energy → Heat and Light)
• Electric lamp (Electricity → Heat and Light)
• Microphone (Sound → Electricity)
• Wave power (Mechanical energy → Electricity)
• Windmills (Wind energy → Electricity or Mechanical energy)
• Piezoelectrics (Strain → Electricity)
• Acoustoelectrics (Sound → Electricity)
• Friction (Kinetic energy → Heat)


Chapter- 2

Thermoelectric materials

Power generation

Approximately 90% of the world’s electricity is generated by heat energy, typically operating at 30-40% efficiency, losing roughly 15 terawatts of power in the form of heat to the environment. Thermoelectric devices could convert some of this waste heat into useful electricity. Thermoelectric efficiency depends on the figure of merit, ZT. There is no theoretical upper limit to ZT, and as ZT approaches infinity, the thermoelectric efficiency approaches the Carnot limit. However, no known thermoelectrics have a ZT>3. As of 2010, thermoelectric generators serve application niches where efficiency and cost are less important than reliability, light weight, and small size.

Internal combustion engines capture 20-25% of the energy released during fuel combustion. Increasing the conversion rate can increase mileage and provide more electricity for on-board controls and creature comforts (stability controls, telematics, navigation systems, electronic braking, etc.). It may be possible to shift energy draw from the engine (in certain cases) to the electrical load in the car, e.g. electrical power steering or electrical coolant pump operation.

Cogeneration power plants use the heat produced during electricity generation for alternative purposes. Thermoelectrics may find applications in such systems or in solar thermal energy generation.

Refrigeration

Peltier effect devices could reduce the emission of ozone-depleting refrigerants into the atmosphere. Hydrochlorofluorocarbons (HCFCs) and now-obsolete chlorofluorocarbons (CFCs) deplete the ozone layer. CFCs were replaced by HCFCs; however, the latter also impact the ozone layer and are being phased out. International legislation caps HCFC production and prohibits production after 2020 in developed countries and 2030 in developing countries. Thermoelectric refrigeration units could reduce the use of such harmful chemicals and reduce noise levels because they do not require compressors. Common (vapor compression) refrigerators remain more efficient than Peltier refrigerators, but they are larger and require more maintenance. A ZT>3 (about 20-30% Carnot efficiency) is required to replace traditional coolers.

Materials selection criteria

Figure of merit

The primary criterion for thermoelectric device viability is the figure of merit given by:

$$Z = \frac{\sigma S^2}{\lambda},$$

which depends on the Seebeck coefficient, S, thermal conductivity, λ, and electrical conductivity, σ. The product (ZT) of Z and the use temperature, T, serves as a dimensionless parameter to evaluate the performance of a thermoelectric material.
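
Plugging in representative numbers makes the scale of Z and ZT concrete. The property values below are assumed, Bi2Te3-like room-temperature magnitudes, not figures from the text:

```python
# Figure of merit Z = sigma * S**2 / kappa and dimensionless ZT.
# Assumed, Bi2Te3-like values; real materials data will differ.
S = 220e-6       # Seebeck coefficient, V/K
sigma = 1.0e5    # electrical conductivity, S/m
kappa = 1.5      # thermal conductivity (lambda in the text), W/(m*K)
T = 300.0        # use temperature, K

Z = sigma * S**2 / kappa       # units: 1/K
print(f"Z  = {Z:.2e} 1/K")
print(f"ZT = {Z * T:.2f}")     # ~0.97, near the ZT ~ 1 quoted for Bi2Te3
```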

Phonon-Glass, electron-crystal behavior

Notably, in the above equation, thermal conductivity and electrical conductivity intertwine. G. A. Slack proposed that in order to optimize the figure of merit, phonons, which are responsible for thermal conductivity, must experience the material as they would in a glass (experiencing a high degree of phonon scattering, lowering thermal conductivity) while electrons must experience it as a crystal (experiencing very little scattering, maintaining electrical conductivity). The figure of merit can be improved through the independent adjustment of these properties.

Semiconductors

Semiconductors are ideal thermoelectric devices because of their band structure and electronic properties at high temperatures. Device efficiency is proportional to ZT, so ideal materials have a large Z value at high temperatures. Since temperature is easily adjustable, electrical conductivity is crucial. Specifically, maximizing electrical conductivity at high temperatures and minimizing thermal conductivity optimizes ZT.

Thermal conductivity

κ = κ_electron + κ_phonon

According to the Wiedemann–Franz law, the higher the electrical conductivity, the higher κ_electron becomes. Therefore, it is necessary to minimize κ_phonon. In semiconductors, κ_electron < κ_phonon, so it is easier to decouple κ and σ in a semiconductor through engineering κ_phonon.


Electrical conductivity

Metals are typically good electrical conductors, but the higher the temperature, the lower the conductivity, given by the equation for electrical conductivity:

σ_metal = ne²τ/m

• n is carrier density
• e is electron charge
• τ is electron lifetime
• m is mass

As temperature increases, τ decreases, thereby decreasing σ_metal. By contrast, electrical conductivity in semiconductors correlates positively with temperature.

σ_semiconductor = neμ

• n is carrier density
• e is electron charge
• μ is carrier mobility

Carrier mobility increases with increasing temperature, thereby increasing σ_semiconductor.
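
The two formulas above pull in opposite directions with temperature, which is the point of the comparison. A quick sketch under assumed, illustrative values (the carrier densities, lifetimes, and mobilities are not material data from the text):

```python
# sigma_metal = n * e**2 * tau / m: tau falls as T rises, so sigma falls.
# sigma_semiconductor = n * e * mu: mobility (per the text) rises with T.
# All numeric values are illustrative assumptions.
e = 1.602e-19      # electron charge, C
m_e = 9.109e-31    # electron mass, kg

def sigma_metal(n: float, tau: float) -> float:
    return n * e**2 * tau / m_e

def sigma_semiconductor(n: float, mu: float) -> float:
    return n * e * mu

for t_k, tau in [(300, 2e-14), (600, 1e-14)]:    # tau shrinks with T
    print(f"metal @ {t_k} K: {sigma_metal(8.5e28, tau):.2e} S/m")
for t_k, mu in [(300, 0.05), (600, 0.12)]:       # mu grows with T
    print(f"semiconductor @ {t_k} K: {sigma_semiconductor(1e22, mu):.2e} S/m")
```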

State density

The band structure of semiconductors offers better thermoelectric effects than the band structure of metals.

The Fermi energy is below the conduction band causing the state density to be asymmetric around the Fermi energy. Therefore, the average electron energy is higher than the Fermi energy, making the system conducive for charge motion into a lower energy state. By contrast, the Fermi energy lies in the conduction band in metals. This makes the state density symmetric about the Fermi energy so that the average conduction electron energy is close to the Fermi energy, reducing the forces pushing for charge transport. Therefore, semiconductors are ideal thermoelectric materials.

Materials of interest

Strategies to improve thermoelectrics include both advanced bulk materials and the use of low-dimensional systems. Such approaches to reduce lattice thermal conductivity fall under three general material types:

1. Alloys: create point defects, vacancies, or rattling structures (heavy-ion species with large vibrational amplitudes contained within partially filled structural sites) to scatter phonons within the unit cell crystal.
2. Complex crystals: separate the phonon-glass from the electron crystal using approaches similar to those for superconductors. The region responsible for electron transport would be an electron-crystal of a high-mobility semiconductor, while the phonon-glass would be ideal to house disordered structures and dopants without disrupting the electron-crystal (analogous to the charge reservoir in high-Tc superconductors).
3. Multiphase nanocomposites: scatter phonons at the interfaces of nanostructured materials, be they mixed composites or thin film superlattices.

Materials under consideration for thermoelectric device applications include:

Bismuth chalcogenides

Materials such as Bi2Te3 and Bi2Se3 comprise some of the best performing room temperature thermoelectrics with a temperature-independent thermoelectric effect, ZT, between 0.8 and 1.0. Nanostructuring these materials to produce a layered superlattice structure of alternating Bi2Te3 and Bi2Se3 layers produces a device within which there is good electrical conductivity but perpendicular to which thermal conductivity is poor. The result is an enhanced ZT (approximately 2.4 at room temperature for p-type). Note that this high value has not entirely been independently confirmed.

Skutterudite thermoelectrics

Recently, skutterudite materials have sparked the interest of researchers in search of new thermoelectrics. These structures are of the form (Co,Ni,Fe)(P,Sb,As)3 and are cubic with space group Im3. Unfilled, these materials contain voids into which low-coordination ions (usually rare earth elements) can be inserted in order to alter thermal conductivity by producing sources for lattice phonon scattering, decreasing the thermal conductivity due to the lattice without reducing electrical conductivity. Such qualities make these materials exhibit PGEC (phonon-glass, electron-crystal) behavior.

Oxide thermoelectrics

Due to the natural superlattice formed by the layered structure in homologous compounds (such as those of the form (SrTiO3)n(SrO)m, the Ruddlesden-Popper phase), oxides have potential for high-temperature thermoelectric devices. These materials exhibit low thermal conductivity perpendicular to these layers while maintaining electrical conductivity within the layers. The figure of merit in oxides is still relatively low (~0.34 at 1,000 K), but the enhanced thermal stability, as compared to conventional high-ZT bismuth compounds, makes the oxides superior in high-temperature applications.

Nanomaterials

In addition to the nanostructured Bi2Te3/Bi2Se3 superlattice thin films that have shown a great deal of promise, other nanomaterials show potential in improving thermoelectric materials. One example involving PbTe/PbSeTe quantum dot superlattices provides an enhanced ZT (approximately 1.5 at room temperature) that was higher than the bulk ZT value for either PbTe or PbSeTe (approximately 0.5). Individual silicon nanowires can act as efficient thermoelectric materials, with ZT values approaching 1.0 for their structures, even though bulk silicon is a poor thermoelectric material (ZT approximately 0.01 at room temperature) because of its high thermal conductivity.

Thermoelectric effect

The thermoelectric effect is the direct conversion of temperature differences to electric voltage and vice versa. A thermoelectric device creates a voltage when there is a different temperature on each side. Conversely, when a voltage is applied to it, it creates a temperature difference (known as the Peltier effect). At the atomic scale (specifically, among charge carriers), an applied temperature gradient causes charged carriers in the material, whether they are electrons or electron holes, to diffuse from the hot side to the cold side, similar to a classical gas that expands when heated; hence, the thermally induced current.

This effect can be used to generate electricity, to measure temperature, to cool objects, or to heat them or cook them. Because the direction of heating and cooling is determined by the polarity of the applied voltage, thermoelectric devices make very convenient temperature controllers.

Traditionally, the term thermoelectric effect or thermoelectricity encompasses three separately identified effects, the Seebeck effect, the Peltier effect, and the Thomson effect. In many textbooks, thermoelectric effect may also be called the Peltier–Seebeck effect. This separation derives from the independent discoveries of French physicist Jean Charles Athanase Peltier and Estonian-German physicist Thomas Johann Seebeck. Joule heating, the heat that is generated whenever a voltage is applied across a resistive material, is somewhat related, though it is not generally termed a thermoelectric effect (and it is usually regarded as being a loss mechanism due to non-ideality in thermoelectric devices). The Peltier–Seebeck and Thomson effects can in principle be thermodynamically reversible, whereas Joule heating is not.

Seebeck effect

The Seebeck effect is the conversion of temperature differences directly into electricity.

Seebeck discovered that a compass needle would be deflected when a closed loop was formed of two metals joined in two places with a temperature difference between the junctions. This is because the metals respond differently to the temperature difference, which creates a current loop, which produces a magnetic field. Seebeck, however, at this time did not recognize there was an electric current involved, so he called the phenomenon the thermomagnetic effect, thinking that the two metals became magnetically polarized by the temperature gradient. The Danish physicist Hans Christian Ørsted played a vital role in explaining and conceiving the term "thermoelectricity".

The effect is that a voltage, the thermoelectric EMF, is created in the presence of a temperature difference between two different metals or semiconductors. This causes a continuous current in the conductors if they form a complete loop. The voltage created is of the order of several microvolts per kelvin difference. One such combination, copper-constantan, has a Seebeck coefficient of 41 microvolts per kelvin at room temperature.

In the circuit (which can be in several different configurations and be governed by the same equations), the voltage developed can be derived from:

$$V = \int_{T_1}^{T_2} \left( S_B(T) - S_A(T) \right) \, dT$$


SA and SB are the Seebeck coefficients (also called thermoelectric power or thermopower) of the metals A and B as a function of temperature, and T1 and T2 are the temperatures of the two junctions. The Seebeck coefficients are non-linear as a function of temperature, and depend on the conductors' absolute temperature, material, and molecular structure. If the Seebeck coefficients are effectively constant for the measured temperature range, the above formula can be approximated as:

$$V = (S_B - S_A) \cdot (T_2 - T_1)$$

The Seebeck effect is commonly used in a device called a thermocouple (because it is made from a coupling or junction of materials, usually metals) to measure a temperature difference directly or to measure an absolute temperature by setting one end to a known temperature. A metal of unknown composition can be classified by its thermoelectric effect if a metallic probe of known composition, kept at a constant temperature, is held in contact with it. Industrial quality control instruments use this Seebeck effect to identify metal alloys. This is known as thermoelectric alloy sorting.
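
In the constant-coefficient approximation just given, the thermocouple output is simply the effective couple coefficient times the junction temperature difference. A minimal sketch using the copper-constantan value of about 41 μV/K quoted above; the junction temperatures are illustrative:

```python
# Constant-coefficient Seebeck voltage: V = (S_B - S_A) * (T2 - T1).
# S_PAIR is the ~41 uV/K copper-constantan figure quoted in the text;
# the temperatures are assumed for illustration.
S_PAIR = 41e-6  # V/K

def thermocouple_voltage(t_hot_k: float, t_cold_k: float) -> float:
    return S_PAIR * (t_hot_k - t_cold_k)

# A 100 K difference between junctions gives about 4.1 mV.
print(f"{thermocouple_voltage(398.15, 298.15) * 1e3:.2f} mV")
```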

Several thermocouples connected in series are called a thermopile, which is sometimes constructed in order to increase the output voltage since the voltage induced over each individual couple is small.

This is also the principle at work behind thermal diodes and thermoelectric generators (such as radioisotope thermoelectric generators or RTGs) which are used for creating power from heat differentials.

The Seebeck effect is due to two effects: charge carrier diffusion and phonon drag (described below). If both connections are held at the same temperature, but one connection is periodically opened and closed, an AC voltage is measured, which is also temperature dependent. This application of the Kelvin probe is sometimes used to argue that the underlying physics only needs one junction. And this effect is still visible if the wires only come close, but do not touch, thus no diffusion is needed.

Thermopower

The thermopower, thermoelectric power, or Seebeck coefficient of a material measures the magnitude of an induced thermoelectric voltage in response to a temperature difference across that material. The thermopower has units of (V/K), though in practice it is more common to use microvolts per kelvin. Values in the hundreds of μV/K, negative or positive, are typical of good thermoelectric materials. The term thermopower is a misnomer since it measures the voltage or electric field induced in response to a temperature difference, not the electric power. An applied temperature difference causes charged carriers in the material, whether they are electrons or holes, to diffuse from the hot side to the cold side, similar to a classical gas that expands when heated. Mobile charged carriers migrating to the cold side leave behind their oppositely charged and immobile nuclei at the hot side thus giving rise to a thermoelectric voltage (thermoelectric refers to the fact that the voltage is created by a temperature difference).


Since a separation of charges also creates an electric potential, the buildup of charged carriers onto the cold side eventually ceases at some maximum value since there exists an equal amount of charged carriers drifting back to the hot side as a result of the electric field at equilibrium. Only an increase in the temperature difference can resume a buildup of more charge carriers on the cold side and thus lead to an increase in the thermoelectric voltage. Incidentally the thermopower also measures the entropy per charge carrier in the material. To be more specific, the partial molar electronic entropy is said to equal the absolute thermoelectric power multiplied by the negative of Faraday's constant.

The thermopower of a material, represented by S (or sometimes by α), depends on the material's temperature and crystal structure. Typically metals have small thermopowers because most have half-filled bands. Electrons (negative charges) and holes (positive charges) both contribute to the induced thermoelectric voltage thus canceling each other's contribution to that voltage and making it small. In contrast, semiconductors can be doped with excess electrons or holes, and thus can have large positive or negative values of the thermopower depending on the charge of the excess carriers. The sign of the thermopower can determine which charged carriers dominate the electric transport in both metals and semiconductors.

If the temperature difference ΔT between the two ends of a material is small, then the thermopower of a material is defined (approximately) as:

$$S = \frac{\Delta V}{\Delta T},$$

and a thermoelectric voltage ΔV is seen at the terminals.

This can also be written in relation to the electric field E and the temperature gradient ∇T, by the approximate equation:

$$S = \frac{E}{\nabla T}$$

In practice one rarely measures the absolute thermopower of the material of interest. This is because electrodes attached to a voltmeter must be placed onto the material in order to measure the thermoelectric voltage. The temperature gradient then also typically induces a thermoelectric voltage across one leg of the measurement electrodes. Therefore the measured thermopower includes a contribution from the thermopower of the material of interest and the material of the measurement electrodes.

The measured thermopower is then a contribution from both and can be written as:

$$S_{AB} = S_A - S_B,$$

where A is the material of interest and B is the material of the measurement electrodes.


Superconductors have zero thermopower since the charged carriers produce no entropy. This allows a direct measurement of the absolute thermopower of the material of interest, since it is the thermopower of the entire thermocouple as well. In addition, a measurement of the Thomson coefficient, μ, of a material can also yield the thermopower through the relation:

$$S(T) = \int_0^T \frac{\mu(T')}{T'} \, dT'$$
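
The integral relation above can be evaluated numerically from tabulated Thomson-coefficient measurements. A sketch with a made-up, smooth μ(T) standing in for real data:

```python
# Absolute thermopower from the Thomson coefficient:
# S(T) = integral from 0 to T of mu(T') / T' dT'.
# mu(T) below is an assumed linear model, not measured data; for this
# model mu(T)/T is constant, so the result is easy to check by hand.
def mu(t_k: float) -> float:
    return 1.0e-9 * t_k  # V/K, illustrative

def thermopower(t_k: float, steps: int = 10_000) -> float:
    dt = t_k / steps
    # midpoint rule avoids evaluating the integrand at T' = 0 exactly
    return sum(mu((i + 0.5) * dt) / ((i + 0.5) * dt) * dt for i in range(steps))

print(f"S(300 K) = {thermopower(300.0) * 1e6:.2f} uV/K")  # 0.30 uV/K
```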

The thermopower is an important material parameter that determines the efficiency of a thermoelectric material. A larger induced thermoelectric voltage for a given temperature gradient will lead to a larger efficiency. Ideally one would want very large thermopower values since only a small amount of heat is then necessary to create a large voltage. This voltage can then be used to provide power.

Charge-carrier diffusion

Charge carriers in the materials (electrons in metals, electrons and holes in semiconductors, ions in ionic conductors) will diffuse when one end of a conductor is at a different temperature to the other. Hot carriers diffuse from the hot end to the cold end, since there is a lower density of hot carriers at the cold end of the conductor. Cold carriers diffuse from the cold end to the hot end for the same reason.

If the conductor were left to reach thermodynamic equilibrium, this process would result in heat being distributed evenly throughout the conductor. The movement of heat (in the form of hot charge carriers) from one end to the other is called a heat current. As charge carriers are moving, it is also an electric current.

In a system where both ends are kept at a constant temperature difference (a constant heat current from one end to the other), there is a constant diffusion of carriers. If the rate of diffusion of hot and cold carriers in opposite directions were equal, there would be no net change in charge. However, the diffusing charges are scattered by impurities, imperfections, and lattice vibrations (phonons). If the scattering is energy dependent, the hot and cold carriers will diffuse at different rates. This creates a higher density of carriers at one end of the material, and the distance between the positive and negative charges produces a potential difference; an electrostatic voltage.

This electric field, however, opposes the uneven scattering of carriers, and an equilibrium is reached where the net number of carriers diffusing in one direction is canceled by the net number of carriers moving in the opposite direction from the electrostatic field. This means the thermopower of a material depends greatly on impurities, imperfections, and structural changes (which often vary themselves with temperature and electric field), and the thermopower of a material is a collection of many different effects.

Early thermocouples were metallic, but many more recently developed thermoelectric devices are made from alternating p-type and n-type semiconductor elements connected by metallic interconnects as pictured in the figures below. Semiconductor junctions are especially common in power generation devices, while metallic junctions are more common in temperature measurement. Charge flows through the n-type element, crosses a metallic interconnect, and passes into the p-type element. If a power source is provided, the thermoelectric device may act as a cooler, as in the figure to the left below. This is the Peltier effect, described below. Electrons in the n-type element will move opposite the direction of current and holes in the p-type element will move in the direction of current, both removing heat from one side of the device. If a heat source is provided, the thermoelectric device may function as a power generator, as in the figure to the right below. The heat source will drive electrons in the n-type element toward the cooler region, thus creating a current through the circuit. Holes in the p-type element will then flow in the direction of the current. The current can then be used to power a load, thus converting the thermal energy into electrical energy.

Phonon drag

Phonons are not always in local thermal equilibrium; they move against the thermal gradient. They lose momentum by interacting with electrons (or other carriers) and imperfections in the crystal. If the phonon-electron interaction is predominant, the phonons will tend to push the electrons to one end of the material, losing momentum in the process. This contributes to the already present thermoelectric field. This contribution is most important in the temperature region where phonon-electron scattering is predominant. This happens for

$$T \approx \frac{\theta_D}{5},$$

where θD is the Debye temperature. At lower temperatures there are fewer phonons available for drag, and at higher temperatures they tend to lose momentum in phonon-phonon scattering instead of phonon-electron scattering.


This region of the thermopower-versus-temperature function is highly variable under a magnetic field.

Peltier effect

The Peltier effect bears the name of Jean-Charles Peltier, a French physicist who in 1834 discovered the calorific effect of an electric current at the junction of two different metals. When a current is made to flow through the circuit, heat is evolved at the upper junction (at T2), and absorbed at the lower junction (at T1). The Peltier heat absorbed by the lower junction per unit time, Q̇, is equal to

$$\dot{Q} = \Pi_{AB} I = (\Pi_A - \Pi_B) I,$$

where Π_AB is the Peltier coefficient of the entire thermocouple, and Π_A and Π_B are the coefficients of each material. p-type silicon typically has a positive Peltier coefficient (though not above ~550 K), and n-type silicon is typically negative.

The Peltier coefficients represent how much heat current is carried per unit charge through a given material. Since charge current must be continuous across a junction, the associated heat flow will develop a discontinuity if ΠA and ΠB are different. This causes a non-zero divergence at the junction and so heat must accumulate or deplete there, depending on the sign of the current. Another way to understand how this effect could cool a junction is to note that when electrons flow from a region of high density to a region of low density, this "expansion" causes cooling (as with an ideal gas).
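
A compact numeric sketch of the junction heat flow, combining Q̇ = (Π_A − Π_B)·I with the Kelvin relation Π = S·T (stated in the Thomson relationships section below). The Seebeck coefficients, temperature, and current are assumed for illustration:

```python
# Peltier heat moved at a junction: Q = (Pi_A - Pi_B) * I, where each
# Peltier coefficient comes from the Kelvin relation Pi = S * T.
# All values below are illustrative assumptions, not measured data.
def peltier_coefficient(seebeck_v_per_k: float, t_k: float) -> float:
    return seebeck_v_per_k * t_k  # Pi = S * T

pi_p = peltier_coefficient(+200e-6, 300.0)  # p-type leg, +200 uV/K
pi_n = peltier_coefficient(-200e-6, 300.0)  # n-type leg, -200 uV/K

current_a = 2.0
q_watts = (pi_p - pi_n) * current_a  # heat pumped at the junction
print(f"{q_watts:.3f} W")            # 0.240 W
```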

The carriers are attempting to return to the electron equilibrium that existed before the current was applied by absorbing energy at one connector and releasing it at the other. The individual couples can be connected in series to enhance the effect.

An interesting consequence of this effect is that the direction of heat transfer is controlled by the polarity of the current; reversing the polarity will change the direction of transfer and thus the sign of the heat absorbed/evolved.

A Peltier cooler/heater or thermoelectric heat pump is a solid-state active heat pump which transfers heat from one side of the device to the other. Peltier cooling is also called thermo-electric cooling (TEC).

Thomson effect

The Thomson effect was predicted and subsequently experimentally observed by William Thomson (Lord Kelvin) in 1851. It describes the heating or cooling of a current-carrying conductor with a temperature gradient.

Any current-carrying conductor (except for a superconductor), with a temperature difference between two points, will either absorb or emit heat, depending on the material.


If a current density J is passed through a homogeneous conductor, heat production per unit volume is:

$$q = \rho J^2 - \mu J \frac{dT}{dx},$$

where

ρ is the resistivity of the material

dT/dx is the temperature gradient along the wire

μ is the Thomson coefficient.

The first term ρ J² is simply the Joule heating, which is not reversible.

The second term is the Thomson heat, which changes sign when J changes direction.
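
Putting rough numbers into q = ρJ² − μJ·dT/dx shows how small the reversible Thomson term usually is next to Joule heating. All values below are illustrative assumptions:

```python
# Heat production per unit volume in a current-carrying conductor with a
# temperature gradient: q = rho * J**2 - mu * J * dT/dx.
# Copper-like resistivity; the other values are illustrative assumptions.
rho = 1.7e-8     # resistivity, ohm*m
J = 1.0e6        # current density, A/m^2
mu = 2.0e-6      # Thomson coefficient, V/K
dT_dx = 100.0    # temperature gradient, K/m

joule = rho * J**2            # irreversible, always heating
thomson = -mu * J * dT_dx     # reversible, flips sign when J reverses
print(f"Joule:   {joule:9.1f} W/m^3")    #   17000.0
print(f"Thomson: {thomson:9.1f} W/m^3")  #    -200.0
```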

In metals such as zinc and copper, which have a hotter end at a higher potential and a cooler end at a lower potential, when current moves from the hotter end to the colder end, it is moving from a high to a low potential, so there is an evolution of heat. This is called the positive Thomson effect.

In metals such as cobalt, nickel, and iron, which have a cooler end at a higher potential and a hotter end at a lower potential, when current moves from the hotter end to the colder end, it is moving from a low to a high potential, there is an absorption of heat. This is called the negative Thomson effect.

The Thomson coefficient is unique among the three main thermoelectric coefficients because it is the only thermoelectric coefficient directly measurable for individual materials. The Peltier and Seebeck coefficients can only be determined for pairs of materials. Thus, there is no direct experimental method to determine an absolute Seebeck coefficient (i.e. thermopower) or absolute Peltier coefficient for an individual material. However, as mentioned elsewhere in this article there are two equations, the Thomson relations, also known as the Kelvin relations (see below), relating the three thermoelectric coefficients. Therefore, only one can be considered unique.

If the Thomson coefficient of a material is measured over a wide temperature range, including temperatures close to zero, one can then integrate the Thomson coefficient over the temperature range using the Kelvin relations to determine the absolute (i.e. single-material) values for the Peltier and Seebeck coefficients. In principle, this need only be done for one material, since all other values can be determined by measuring pairwise Seebeck coefficients in thermocouples containing the reference material and then adding back the absolute thermoelectric power (thermopower) of the reference material.


It is commonly asserted that lead has a zero Thomson effect. While it is true that the thermoelectric coefficients of lead are small, they are in general non-zero. The Thomson coefficient of lead has been measured over a wide temperature range and has been integrated to calculate the absolute thermoelectric power (thermopower) of lead as a function of temperature.

Unlike lead, the thermoelectric coefficients of all known superconductors are zero.

The Thomson relationships

The Seebeck effect is a combination of the Peltier and Thomson effects. In 1854 Thomson found two relationships, now called the Thomson or Kelvin relationships, between the corresponding coefficients. The absolute temperature T, the Peltier coefficient Π and Seebeck coefficient S are related by the second Thomson relation

$$\Pi = S T,$$

which predicted the Thomson effect before it was actually formalized. These are related to the Thomson coefficient μ by the first Thomson relation

$$\mu = T \frac{dS}{dT}.$$

Thomson's theoretical treatment of thermoelectricity is remarkable in the fact that it is probably the first attempt to develop a reasonable theory of irreversible thermodynamics (non-equilibrium thermodynamics). This occurred at about the time that Clausius, Thomson, and others were introducing and refining the concept of entropy.

Figure of merit

The figure of merit for thermoelectric devices is defined as

$$Z = \frac{\sigma S^2}{\kappa},$$

where σ is the electrical conductivity, κ is the thermal conductivity, and S is the Seebeck coefficient or thermopower (conventionally in μV/K). This is more commonly expressed as the dimensionless figure of merit ZT by multiplying it with the average temperature ((T2 + T1) / 2). Greater values of ZT indicate greater thermodynamic efficiency, subject to certain provisions, particularly the requirement that the two materials of the couple have similar Z values. ZT is therefore a very convenient figure for comparing the potential efficiency of devices using different materials. Values of ZT=1 are considered good, and values of at least the 3–4 range are considered to be essential for thermoelectrics to compete with mechanical generation and refrigeration in efficiency. To date, the best reported ZT values have been in the 2–3 range. Much research in thermoelectric materials has focused on increasing the Seebeck coefficient and reducing the thermal conductivity, especially by manipulating the nanostructure of the materials.

Device efficiency

The efficiency of a thermoelectric device for electricity generation is given by η, defined as

$$\eta = \frac{\text{energy provided to the load}}{\text{heat energy absorbed at the hot junction}},$$

and the maximum efficiency is

$$\eta_{max} = \frac{T_H - T_C}{T_H} \cdot \frac{\sqrt{1 + Z\bar{T}} - 1}{\sqrt{1 + Z\bar{T}} + \frac{T_C}{T_H}},$$

where TH is the temperature at the hot junction and TC is the temperature at the surface being cooled. Z T̄ is the modified dimensionless figure of merit, which now takes into consideration the thermoelectric capacity of both thermoelectric materials being used in the power generating device, and is defined as

$$Z\bar{T} = \frac{(S_p - S_n)^2 \, \bar{T}}{\left[ (\rho_n \kappa_n)^{1/2} + (\rho_p \kappa_p)^{1/2} \right]^2},$$

where ρ is the electrical resistivity, T̄ is the average temperature between the hot and cold surfaces, and the subscripts n and p denote properties related to the n- and p-type semiconducting thermoelectric materials, respectively. It is worthwhile to note that the efficiency of a thermoelectric device is limited by the Carnot efficiency (hence the TH and TC terms in η_max), since thermoelectric devices are still inherently heat engines.
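
Evaluating the η_max expression above at a sample operating point shows how far below Carnot real devices sit. The temperatures and Z T̄ here are assumed for illustration:

```python
# Maximum thermoelectric generator efficiency:
# eta_max = (TH - TC)/TH * (sqrt(1 + ZT) - 1) / (sqrt(1 + ZT) + TC/TH)
# The operating point (500 K hot, 300 K cold, ZT = 1) is an assumption.
from math import sqrt

def eta_max(t_hot_k: float, t_cold_k: float, zt_bar: float) -> float:
    carnot = (t_hot_k - t_cold_k) / t_hot_k
    root = sqrt(1.0 + zt_bar)
    return carnot * (root - 1.0) / (root + t_cold_k / t_hot_k)

print(f"Carnot:  {(500 - 300) / 500:.1%}")           # 40.0%
print(f"eta_max: {eta_max(500.0, 300.0, 1.0):.1%}")  # ~8.2%
```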

The COP of current commercial thermoelectric refrigerators ranges from 0.3 to 0.6, only about one-sixth the value of traditional vapor-compression refrigerators.

Uses

Seebeck effect

The Seebeck effect can be used to turn any heat source into electrical power. These devices, called thermoelectric generators (or "thermogenerators"), function like heat engines, but are less bulky, have no moving parts, and are typically more expensive and less efficient.

Any power plant produces waste heat, which can be used to generate additional power using a thermoelectric generator (a form of energy recycling). This is potentially an enormous market.


Space probes often use radioisotope thermoelectric generators (RTGs).

Solar cells use only the high frequency part of the radiation, while the low frequency heat energy is wasted. Several patents about the use of thermoelectric devices in tandem with solar cells have been filed. The idea is to increase the efficiency of the combined solar/thermoelectric system to convert the solar radiation into useful electricity.

Peltier effect

The Peltier effect can be used to create a refrigerator which is compact and has no circulating fluid or moving parts.

Temperature measurement

Thermocouples and thermopiles are commonly used to measure temperatures. They use the Seebeck effect. More precisely, they do not directly measure temperature; they measure the temperature difference between the probe and the voltmeter at the other end of the wires. The temperature of the voltmeter, usually the same as room temperature, can be measured separately using "cold junction compensation" techniques.


Chapter- 3

Geothermal Electricity

Geothermal electricity is electricity generated from geothermal energy. Technologies in use include dry steam power plants, flash steam power plants and binary cycle power plants. As a more recent technology, geothermal electricity generation is currently used only in 24 countries while geothermal heating is in use in 70 countries.

Estimates of the electricity generating potential of geothermal energy vary from 35 to 2000 GW. Current worldwide installed capacity is 10,715 megawatts (MW), with the largest capacity in the United States (3,086 MW), the Philippines, and Indonesia.

Geothermal power is considered to be sustainable because the heat extraction is small compared to the Earth's heat content. The emission intensity of existing geothermal electric plants is on average 122 kg of CO2 per megawatt-hour (MW·h) of electricity, a small fraction of that of conventional fossil fuel plants.

History and development


Global geothermal electric capacity. Upper red line is installed capacity; lower green line is realized production.

In the 20th century, demand for electricity led to the consideration of geothermal power as a generating source. Prince Piero Ginori Conti tested the first geothermal power generator on 4 July 1904 in Larderello, Italy. It successfully lit four light bulbs. Later, in 1911, the world's first commercial geothermal power plant was built there. Experimental generators were built in Beppu, Japan and the Geysers, California, in the 1920s, but Italy was the world's only industrial producer of geothermal electricity until New Zealand built a plant in 1958.

In 1960, Pacific Gas and Electric began operation of the first successful geothermal electric power plant in the United States at The Geysers in California. The original turbine lasted for more than 30 years and produced 11 MW net power.

The binary cycle power plant was first demonstrated in 1967 in Russia and later introduced to the USA in 1981. This technology allows the use of much lower temperature resources than were previously recoverable. In 2006, a binary cycle plant in Chena Hot Springs, Alaska, came on-line, producing electricity from a record low fluid temperature of 57°C.

Geothermal electric plants have until recently been built exclusively where high temperature geothermal resources are available near the surface. The development of binary cycle power plants and improvements in drilling and extraction technology may enable enhanced geothermal systems over a much greater geographical range. Demonstration projects are operational in Landau-Pfalz, Germany, and Soultz-sous-Forêts, France, while an earlier effort in Basel, Switzerland was shut down after it triggered earthquakes. Other demonstration projects are under construction in Australia, the United Kingdom, and the United States of America.

The thermal efficiency of geothermal electric plants is low, around 10-23%, because geothermal fluids are at a low temperature compared to steam from boilers. By the laws of thermodynamics this low temperature limits the efficiency of heat engines in extracting useful energy during the generation of electricity. Exhaust heat is wasted, unless it can be used directly and locally, for example in greenhouses, timber mills, and district heating. The efficiency of the system does not affect operational costs as it would for a coal or other fossil fuel plant, but it does factor into the viability of the plant. In order to produce more energy than the pumps consume, electricity generation requires high temperature geothermal fields and specialized heat cycles. Because geothermal power does not rely on variable sources of energy, unlike, for example, wind or solar, its capacity factor can be quite large – up to 96% has been demonstrated. The global average was 73% in 2005.
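
The capacity-factor figures in this paragraph translate directly into annual energy. A quick check in Python using the installed-capacity and capacity-factor numbers quoted in this chapter:

```python
# Annual generation from installed capacity and capacity factor:
# energy = capacity * capacity_factor * hours_per_year.
HOURS_PER_YEAR = 8760

def annual_generation_gwh(capacity_mw: float, capacity_factor: float) -> float:
    return capacity_mw * capacity_factor * HOURS_PER_YEAR / 1000  # MWh -> GWh

# 10,715 MW installed worldwide at the 2005 global average factor of 73%
# lands near the ~67,000 GWh expected for 2010 in the text.
print(f"{annual_generation_gwh(10_715, 0.73):,.0f} GWh/yr")  # ~68,520
```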

Resources


Enhanced geothermal system. 1: Reservoir, 2: Pump house, 3: Heat exchanger, 4: Turbine hall, 5: Production well, 6: Injection well, 7: Hot water to district heating, 8: Porous sediments, 9: Observation well, 10: Crystalline bedrock

The Earth's heat content is about 10³¹ joules. This heat naturally flows to the surface by conduction at a rate of 44.2 terawatts (TW), and is replenished by radioactive decay at a rate of 30 TW. These power rates are more than double humanity's current energy consumption from primary sources, but most of this power is too diffuse (approximately 0.1 W/m² on average) to be recoverable. The Earth's crust effectively acts as a thick insulating blanket which must be pierced by fluid conduits (of magma, water or other) to release the heat underneath.
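
The 0.1 W/m² figure follows from spreading the conductive heat flow over the planet's surface, which a few lines of Python can confirm:

```python
# Sanity check: average geothermal heat flux = total conductive flow
# divided by the Earth's surface area.
from math import pi

EARTH_RADIUS_M = 6.371e6
surface_m2 = 4 * pi * EARTH_RADIUS_M**2   # ~5.1e14 m^2

flux = 44.2e12 / surface_m2               # W/m^2
print(f"{flux:.3f} W/m^2")                # ~0.087, i.e. roughly 0.1 W/m^2
```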


Electricity generation requires high temperature resources that can only come from deep underground. The heat must be carried to the surface by fluid circulation, either through magma conduits, hot springs, hydrothermal circulation, oil wells, drilled water wells, or a combination of these. This circulation sometimes exists naturally where the crust is thin: magma conduits bring heat close to the surface, and hot springs bring the heat to the surface. If no hot spring is available, a well must be drilled into a hot aquifer. Away from tectonic plate boundaries the geothermal gradient is 25-30°C per kilometre (km) of depth in most of the world, and wells would have to be several kilometres deep to permit electricity generation. The quantity and quality of recoverable resources improves with drilling depth and proximity to tectonic plate boundaries.

In ground that is hot but dry, or where water pressure is inadequate, injected fluid can stimulate production. Developers bore two holes into a candidate site, and fracture the rock between them with explosives or high pressure water. Then they pump water or liquefied carbon dioxide down one borehole, and it comes up the other borehole as a gas. This approach is called hot dry rock geothermal energy in Europe, or enhanced geothermal systems in North America. Much greater potential may be available from this approach than from conventional tapping of natural aquifers.

Estimates of the electricity generating potential of geothermal energy vary from 35 to 2000 GW depending on the scale of investments. This does not include non-electric heat recovered by co-generation, geothermal heat pumps and other direct use. A 2006 report by the Massachusetts Institute of Technology (MIT), that included the potential of enhanced geothermal systems, estimated that investing 1 billion US dollars in research and development over 15 years would allow the creation of 100 GW of electrical generating capacity by 2050 in the United States alone. The MIT report estimated that over 200 zettajoules (ZJ) would be extractable, with the potential to increase this to over 2,000 ZJ with technology improvements - sufficient to provide all the world's present energy needs for several millennia.

At present, geothermal wells are rarely more than 3 kilometres (2 mi) deep. Upper estimates of geothermal resources assume wells as deep as 10 kilometres (6 mi). Drilling at this depth is now possible in the petroleum industry, although it is an expensive process. The deepest research well in the world, the Kola superdeep borehole, is 12 kilometres (7 mi) deep. This record has recently been imitated by commercial oil wells, such as Exxon's Z-12 well in the Chayvo field, Sakhalin. Wells drilled to depths greater than 4 kilometres (2 mi) generally incur drilling costs in the tens of millions of dollars. The technological challenges are to drill wide bores at low cost and to break larger volumes of rock.

Geothermal power is considered to be sustainable because the heat extraction is small compared to the Earth's heat content, but extraction must still be monitored to avoid local depletion. Although geothermal sites are capable of providing heat for many decades, individual wells may cool down or run out of water. The three oldest sites, at Larderello, Wairakei, and the Geysers, have all reduced production from their peaks. It is not clear whether these plants extracted energy faster than it was replenished from greater depths, or whether the aquifers supplying them are being depleted. If production is reduced, and water is reinjected, these wells could theoretically recover their full potential. Such mitigation strategies have already been implemented at some sites. The long-term sustainability of geothermal energy has been demonstrated at the Larderello field in Italy since 1913, at the Wairakei field in New Zealand since 1958, and at The Geysers field in California since 1960.

Power station types

Dry steam plant


Flash steam plant

Dry steam power plants

Dry steam plants are the simplest and oldest design. They directly use geothermal steam of 150°C or more to turn turbines.

Flash steam power plants

Flash steam plants pull deep, high-pressure hot water into lower-pressure tanks and use the resulting flashed steam to drive turbines. They require fluid temperatures of at least 180°C, usually more. This is the most common type of plant in operation today.


Binary cycle power plants

Binary cycle power plants are the most recent development, and can accept fluid temperatures as low as 57°C. The moderately hot geothermal water is passed by a secondary fluid with a much lower boiling point than water. This causes the secondary fluid to flash to vapor, which then drives the turbines. This is the most common type of geothermal electricity plant being built today. Both Organic Rankine and Kalina cycles are used. The thermal efficiency is typically about 10%.

Worldwide production

The International Geothermal Association (IGA) has reported that 10,715 megawatts (MW) of geothermal power in 24 countries is online, which is expected to generate 67,246 GWh of electricity in 2010. This represents a 20% increase in geothermal power online capacity since 2005. IGA projects this will grow to 18,500 MW by 2015, due to the large number of projects presently under consideration, often in areas previously assumed to have little exploitable resource.

In 2010, the United States led the world in geothermal electricity production with 3,086 MW of installed capacity from 77 power plants; the largest group of geothermal power plants in the world is located at The Geysers, a geothermal field in California. The Philippines follows the US as the second highest producer of geothermal power in the world, with 1,904 MW of capacity online; geothermal power makes up approximately 18% of the country's electricity generation.

Utility-grade plants

The largest group of geothermal power plants in the world is located at The Geysers, a geothermal field in California, United States. As of 2004, five countries (El Salvador, Kenya, the Philippines, Iceland, and Costa Rica) generate more than 15% of their electricity from geothermal sources.

Naknek Electric Association (NEA) plans to drill an exploration well near King Salmon, in Southwest Alaska. It could cut the cost of electricity production by 71 percent, and the planned capacity is 25 megawatts.

Geothermal electricity is generated in the 24 countries listed in the table below. During 2005, contracts were placed for an additional 500 MW of electrical capacity in the United States, while there were also plants under construction in 11 other countries. Enhanced geothermal systems that are several kilometres in depth are operational in France and Germany and are being developed or evaluated in at least four other countries.

Installed geothermal electric capacity

Country             Capacity (MW) 2007   Capacity (MW) 2010   % of national production
USA                 2687                 3086                 0.3%
Philippines         1969.7               1904                 27%
Indonesia           992                  1197                 3.7%
Mexico              953                  958                  3%
Italy               810.5                843
New Zealand         471.6                628                  10%
Iceland             421.2                575                  30%
Japan               535.2                536                  0.1%
El Salvador         204.2                204                  14%
Kenya               128.8                167                  11.2%
Costa Rica          162.5                166                  14%
Nicaragua           87.4                 88                   10%
Russia              79                   82
Turkey              38                   82
Papua-New Guinea    56                   56
Guatemala           53                   52
Portugal            23                   29
China               27.8                 24
France              14.7                 16
Ethiopia            7.3                  7.3
Germany             8.4                  6.6
Austria             1.1                  1.4
Australia           0.2                  1.1
Thailand            0.3                  0.3
TOTAL               9,731.9              10,709.7

Environmental impact


Krafla Geothermal Station in northeast Iceland

Fluids drawn from the deep earth carry a mixture of gases, notably carbon dioxide (CO2), hydrogen sulfide (H2S), methane (CH4) and ammonia (NH3). These pollutants contribute to global warming, acid rain, and noxious smells if released. Existing geothermal electric plants emit an average of 122 kg of CO2 per megawatt-hour (MW·h) of electricity, a small fraction of the emission intensity of conventional fossil fuel plants. Plants that experience high levels of acids and volatile chemicals are usually equipped with emission-control systems to reduce the exhaust. Geothermal plants could theoretically inject these gases back into the earth, as a form of carbon capture and storage.

In addition to dissolved gases, hot water from geothermal sources may hold in solution trace amounts of toxic chemicals such as mercury, arsenic, boron, antimony, and salt. These chemicals come out of solution as the water cools, and can cause environmental damage if released. The modern practice of injecting geothermal fluids back into the Earth to stimulate production has the side benefit of reducing this environmental risk.

Plant construction can adversely affect land stability. Subsidence has occurred in the Wairakei field in New Zealand. Enhanced geothermal systems can trigger earthquakes as part of hydraulic fracturing. The project in Basel, Switzerland was suspended because more than 10,000 seismic events measuring up to 3.4 on the Richter Scale occurred over the first 6 days of water injection.


Geothermal has minimal land and freshwater requirements. Geothermal plants use 3.5 square kilometres per gigawatt of electrical production (not capacity) versus 32 and 12 square kilometres for coal facilities and wind farms respectively. They use 20 litres of freshwater per MW·h versus over 1000 litres per MW·h for nuclear, coal, or oil.

Economics

Geothermal power requires no fuel, and is therefore immune to fuel cost fluctuations, but capital costs tend to be high. Drilling accounts for over half the costs, and exploration of deep resources entails significant risks. A typical well doublet in Nevada can support 4.5 megawatts (MW) of electricity generation and costs about $10 million to drill, with a 20% failure rate. In total, electrical plant construction and well drilling cost about 2-5 million € per MW of electrical capacity, while the levelised energy cost is 0.04-0.10 € per kW·h. Enhanced geothermal systems tend to be on the high side of these ranges, with capital costs above $4 million per MW and levelised costs above $0.054 per kW·h in 2007.
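
A back-of-the-envelope check of the drilling figures above, as a minimal sketch (the failure rate is folded in as an expected extra drilling cost, which is an assumption about how it is borne):

# Expected drilling cost per MW for the Nevada well-doublet example
doublet_cost = 10e6        # dollars per successfully drilled doublet
capacity_mw = 4.5          # MW supported per doublet
failure_rate = 0.20        # fraction of wells that fail
expected_cost = doublet_cost / (1 - failure_rate)  # dollars per usable doublet
print(f"${expected_cost / capacity_mw / 1e6:.1f}M per MW")  # ~$2.8M per MW
# roughly half of the 2-5 million EUR/MW total plant cost, consistent with the text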

Geothermal power is highly scalable: a large geothermal plant can power entire cities while a smaller power plant can supply a rural village.

Chevron Corporation is the world's largest private producer of geothermal electricity. The most developed geothermal field is the Geysers in California. In 2008, this field supported 15 plants, all owned by Calpine, with a total generating capacity of 725 MW.


Chapter- 4

Heat Engine

In thermodynamics, a heat engine performs the conversion of heat energy to mechanical work by exploiting the temperature gradient between a hot "source" and a cold "sink". Heat is transferred from the source, through the "working body" of the engine, to the sink, and in this process some of the heat is converted into work by exploiting the properties of a working substance (usually a gas or liquid).

Heat engines are often confused with the cycles they attempt to mimic. Typically when describing the physical device the term 'engine' is used. When describing the model the term 'cycle' is used.

Overview


Figure 1: Heat engine diagram

In thermodynamics, heat engines are often modeled using a standard engineering model such as the Otto cycle. The theoretical model can be refined and augmented with actual data from an operating engine, using tools such as an indicator diagram. Since very few actual implementations of heat engines exactly match their underlying thermodynamic cycles, one could say that a thermodynamic cycle is an ideal case of a mechanical engine. In any case, fully understanding an engine and its efficiency requires gaining a good understanding of the (possibly simplified or idealized) theoretical model, the practical nuances of an actual mechanical engine, and the discrepancies between the two.

In general terms, the larger the difference in temperature between the hot source and the cold sink, the larger is the potential thermal efficiency of the cycle. On Earth, the cold side of any heat engine is limited to being close to the ambient temperature of the environment, or not much lower than 300 Kelvin, so most efforts to improve the thermodynamic efficiencies of various heat engines focus on increasing the temperature of the source, within material limits. The maximum theoretical efficiency of a heat engine (which no engine ever obtains) is equal to the temperature difference between the hot and cold ends divided by the temperature at the hot end, all expressed in absolute temperature or kelvins.
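
That statement of the maximum, expressed as a small helper (a sketch; the example temperatures are illustrative):

# Maximum (Carnot) efficiency from hot- and cold-end temperatures in kelvins
def carnot_efficiency(t_hot_k, t_cold_k):
    return (t_hot_k - t_cold_k) / t_hot_k

print(carnot_efficiency(850.0, 300.0))  # a ~577 C source against a ~27 C sink -> ~0.65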

The efficiency of various heat engines proposed or used today ranges from 3 percent (97 percent waste heat) for the OTEC ocean power proposal, through 25 percent for most automotive engines, to 45 percent for a supercritical coal plant, and about 60 percent for a steam-cooled combined cycle gas turbine.

All of these processes gain their efficiency (or lack thereof) due to the temperature drop across them.

Power

Heat engines can be characterized by their specific power, which is typically given in kilowatts per litre of engine displacement (in the U.S. also horsepower per cubic inch). The result offers an approximation of the peak-power output of an engine. This is not to be confused with fuel efficiency, since high efficiency often requires a lean fuel-air ratio, and thus lower power density. A modern high-performance car engine makes in excess of 75 kW/L (1.65 hp/in³).
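
The two units quoted above are consistent, as a quick conversion shows (a minimal sketch using standard conversion factors):

# Unit check: 75 kW/L expressed in horsepower per cubic inch
kw_per_litre = 75.0
watts_per_cubic_inch = kw_per_litre * 1000 / 61.024  # 1 L = 61.024 in^3
hp_per_cubic_inch = watts_per_cubic_inch / 745.7     # 1 hp = 745.7 W
print(f"{hp_per_cubic_inch:.2f} hp/in^3")            # ~1.65, matching the text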

Everyday examples

Examples of everyday heat engines include the steam engine, the diesel engine, and the gasoline (petrol) engine in an automobile. A common toy that is also a heat engine is a drinking bird; the Stirling engine is a heat engine as well. All of these familiar heat engines are powered by the expansion of heated gases. The general surroundings are the heat sink, providing relatively cool gases which, when heated, expand rapidly to drive the mechanical motion of the engine.

Examples of heat engines

It is important to note that although some cycles have a typical combustion location (internal or external), they can often be implemented with the other combustion type. For example, John Ericsson developed an externally heated engine running on a cycle very much like the earlier Diesel cycle. In addition, externally heated engines can often be implemented in open or closed cycles.

What this boils down to is that there are thermodynamic cycles and a large number of ways of implementing them with mechanical devices called engines.

Phase change cycles


In these cycles and engines, the working fluids are gases and liquids. The engine converts the working fluid from a gas to a liquid, from liquid to gas, or both, generating work from the fluid expansion or compression.

• Rankine cycle (classical steam engine)
• Regenerative cycle (steam engine, more efficient than the Rankine cycle)
• Organic Rankine cycle (coolant changing phase in the temperature ranges of ice and hot liquid water)
• Vapor to liquid cycle (drinking bird, injector, Minto wheel)
• Liquid to solid cycle (frost heaving — water changing from ice to liquid and back again can lift rock up to 60 cm)
• Solid to gas cycle (dry ice cannon — dry ice sublimes to gas)

Gas only cycles

In these cycles and engines the working fluid is always a gas (i.e., there is no phase change):

• Carnot cycle (Carnot heat engine)
• Ericsson cycle (Caloric Ship John Ericsson)
• Stirling cycle (Stirling engine, thermoacoustic devices)
• Internal combustion engine (ICE):
  o Otto cycle (e.g. gasoline/petrol engine, high-speed diesel engine)
  o Diesel cycle (e.g. low-speed diesel engine)
  o Atkinson cycle (Atkinson engine)
  o Brayton cycle or Joule cycle, originally the Ericsson cycle (gas turbine)
  o Lenoir cycle (e.g. pulse jet engine)
  o Miller cycle

Liquid only cycles

In these cycles and engines the working fluid is always a liquid:

• Stirling cycle (Malone engine)
• Heat regenerative cyclone

Electron cycles

• Johnson thermoelectric energy converter
• Thermoelectric (Peltier-Seebeck effect)
• Thermionic emission
• Thermotunnel cooling

Magnetic cycles

• Thermo-magnetic motor (Tesla)


Cycles used for refrigeration

A domestic refrigerator is an example of a heat pump: a heat engine in reverse. Work is used to create a heat differential. Many cycles can run in reverse to move heat from the cold side to the hot side, making the cold side cooler and the hot side hotter. Internal combustion engine versions of these cycles are, by their nature, not reversible.

Refrigeration cycles include:

• Vapor-compression refrigeration
• Stirling cryocoolers
• Gas-absorption refrigerator
• Air cycle machine
• Vuilleumier refrigeration
• Magnetic refrigeration

Evaporative Heat Engines

The Barton evaporation engine is a heat engine based on a cycle producing power and cooled moist air from the evaporation of water into hot dry air.

Efficiency

The efficiency of a heat engine relates how much useful work is output for a given amount of heat energy input.

From the laws of thermodynamics:

dW = dQh + dQc

where dW = −PdV is the work extracted from the engine (it is negative since work is done by the engine); dQh = Th dSh is the heat energy taken from the high temperature system (it is negative since heat is extracted from the source, hence (−dQh) is positive); and dQc = Tc dSc is the heat energy delivered to the cold temperature system (it is positive since heat is added to the sink).

In other words, a heat engine absorbs heat energy from the high temperature heat source, converting part of it to useful work and delivering the rest to the cold temperature heat sink.

In general, the efficiency of a given heat transfer process (whether it be a refrigerator, a heat pump or an engine) is defined informally by the ratio of "what you get out" to "what you put in."


In the case of an engine, one desires to extract work and puts in a heat transfer:

η = (−dW)/(−dQh) = (dQh + dQc)/dQh = 1 + (Tc dSc)/(Th dSh)

The theoretical maximum efficiency of any heat engine depends only on the temperatures it operates between. This efficiency is usually derived using an ideal imaginary heat engine such as the Carnot heat engine, although other engines using different cycles can also attain maximum efficiency. Mathematically, this is because in reversible processes, the change in entropy of the cold reservoir is the negative of that of the hot reservoir (i.e., dSc = −dSh), keeping the overall change of entropy zero. Thus:

η_max = 1 + (Tc dSc)/(Th dSh) = 1 − Tc/Th

where Th is the absolute temperature of the hot source and Tc that of the cold sink, usually measured in kelvin. Note that dSc is positive while dSh is negative; in any reversible work-extracting process, entropy is overall not increased, but rather is moved from a hot (high-entropy) system to a cold (low-entropy) one, decreasing the entropy of the heat source and increasing that of the heat sink.

The reasoning behind this being the maximal efficiency goes as follows. It is first assumed that if a more efficient heat engine than a Carnot engine is possible, then it could be driven in reverse as a heat pump. Mathematical analysis can be used to show that this assumed combination would result in a net decrease in entropy. Since, by the second law of thermodynamics, this is statistically improbable to the point of exclusion, the Carnot efficiency is a theoretical upper bound on the reliable efficiency of any process.

Empirically, no engine has ever been shown to run at a greater efficiency than a Carnot cycle heat engine.

Figure 2 and Figure 3 show variations on Carnot cycle efficiency. Figure 2 indicates how efficiency changes with an increase in the heat addition temperature for a constant compressor inlet temperature. Figure 3 indicates how the efficiency changes with an increase in the heat rejection temperature for a constant turbine inlet temperature.


Carnot cycle efficiency with changing heat rejection temperature.

Endoreversible heat engines

The problem with using Carnot efficiency as a criterion of heat engine performance is the fact that, by its nature, any maximally efficient Carnot cycle must operate at an infinitesimal temperature gradient. This is because any transfer of heat between two bodies at differing temperatures is irreversible, and therefore the Carnot efficiency expression only applies in the infinitesimal limit. The major problem with that is that the object of most heat engines is to output some sort of power, and infinitesimal power is usually not what is being sought.

A different measure of ideal heat engine efficiency is given by considerations of endoreversible thermodynamics, where the cycle is identical to the Carnot cycle except in that the two processes of heat transfer are not reversible (Callen 1985):

η = 1 − √(Tc/Th)

(Note: units K or °R)

This model does a better job of predicting how well real-world heat engines can do:

Efficiencies of Power Plants

Power plant                                 Tc (°C)   Th (°C)   η (Carnot)   η (Endoreversible)   η (Observed)
West Thurrock (UK) coal-fired power plant      25       565       0.64          0.40                 0.36
CANDU (Canada) nuclear power plant             25       300       0.48          0.28                 0.30
Larderello (Italy) geothermal power plant      80       250       0.33          0.178                0.16

As shown, the endoreversible efficiency much more closely models the observed data.
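
Both columns can be reproduced directly from the two formulas above; a minimal sketch:

# Reproduce the Carnot and endoreversible columns from Tc and Th
plants = [("West Thurrock coal", 25, 565),
          ("CANDU nuclear", 25, 300),
          ("Larderello geothermal", 80, 250)]
for name, tc_c, th_c in plants:
    tc, th = tc_c + 273.15, th_c + 273.15
    eta_carnot = 1 - tc / th
    eta_endo = 1 - (tc / th) ** 0.5
    print(f"{name}: Carnot {eta_carnot:.2f}, endoreversible {eta_endo:.2f}")
# matches the table's values to within rounding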

History

Heat engines have been known since antiquity but were only made into useful devices at the time of the industrial revolution in the eighteenth century. They continue to be developed today.

Heat engine enhancements

Engineers have studied the various heat engine cycles extensively in an effort to improve the amount of usable work they could extract from a given power source. The Carnot cycle limit cannot be reached with any gas-based cycle, but engineers have worked out at least two ways to possibly go around that limit, and one way to get better efficiency without bending any rules.

1. Increase the temperature difference in the heat engine. The simplest way to do this is to increase the hot side temperature, which is the approach used in modern combined-cycle gas turbines. Unfortunately, physical limits (such as the melting point of the materials from which the engine is constructed) and environmental concerns regarding NOx production restrict the maximum temperature on workable heat engines. Modern gas turbines run at temperatures as high as possible within the range of temperatures necessary to maintain acceptable NOx output. Another way of increasing efficiency is to lower the output temperature. One new method of doing so is to use mixed chemical working fluids, and then exploit the changing behavior of the mixtures. One of the most famous is the so-called Kalina cycle, which uses a 70/30 mix of ammonia and water as its working fluid. This mixture allows the cycle to generate useful power at considerably lower temperatures than most other processes.

2. Exploit the physical properties of the working fluid. The most common such exploitation is the use of water above the so-called critical point, or so-called supercritical steam. The behavior of fluids above their critical point changes radically, and with materials such as water and carbon dioxide it is possible to exploit those changes in behavior to extract greater thermodynamic efficiency from the heat engine, even if it is using a fairly conventional Brayton or Rankine cycle. A newer and very promising material for such applications is CO2. SO2 and xenon have also been considered for such applications, although SO2 is toxic.

3. Exploit the chemical properties of the working fluid. A fairly new and novel exploit is to use exotic working fluids with advantageous chemical properties. One such is nitrogen dioxide (NO2), a toxic component of smog, which has a natural dimer, dinitrogen tetroxide (N2O4). At low temperature, the N2O4 is compressed and then heated. The increasing temperature causes each N2O4 to break apart into two NO2 molecules. This lowers the molecular weight of the working fluid, which drastically increases the efficiency of the cycle. Once the NO2 has expanded through the turbine, it is cooled by the heat sink, which causes it to recombine into N2O4. This is then fed back to the compressor for another cycle. Such species as aluminium bromide (Al2Br6), NOCl, and Ga2I6 have all been investigated for such uses. To date, their drawbacks have not warranted their use, despite the efficiency gains that can be realized.

Heat engine processes

Cycle          Process 1-2     Process 2-3                   Process 3-4   Process 4-1        Notes
               (Compression)   (Heat Addition)               (Expansion)   (Heat Rejection)

Power cycles normally with external combustion - or heat pump cycles:

Bell Coleman   adiabatic       isobaric                      adiabatic     isobaric           A reversed Brayton cycle
Brayton        adiabatic       isobaric                      adiabatic     isobaric           Jet engines; aka the first Ericsson cycle, from 1833
Carnot         isentropic      isothermal                    isentropic    isothermal
Diesel         adiabatic       isobaric                      adiabatic     isochoric
Ericsson       isothermal      isobaric                      isothermal    isobaric           The second Ericsson cycle, from 1853
Rankine        adiabatic       isobaric                      adiabatic     isobaric           Steam engine
Scuderi        adiabatic       variable pressure and volume  adiabatic     isochoric
Stirling       isothermal      isochoric                     isothermal    isochoric
Stoddard       adiabatic       isobaric                      adiabatic     isobaric

Power cycles normally with internal combustion:

Lenoir         isobaric        isochoric                     adiabatic     isobaric           Pulse jets (note: 3 of the 4 processes are different)
Otto           adiabatic       isochoric                     adiabatic     isochoric          Gasoline / petrol engines

(The steam engine row is listed here as the Rankine cycle, consistent with the phase change cycles above; it is an externally heated cycle.)

Each process is one of the following:

• isothermal (at constant temperature, maintained with heat added or removed from a heat source or sink)
• isobaric (at constant pressure)
• isometric/isochoric (at constant volume), also referred to as iso-volumetric
• adiabatic (no heat is added to or removed from the system; for a reversible adiabatic process this is equivalent to saying that the entropy remains constant)


Chapter- 5

Ocean Thermal Energy Conversion

Ocean thermal energy conversion (OTEC or OTE) uses the difference between cooler deep and warmer shallow waters to run a heat engine. As with any heat engine, greater efficiency and power come from larger temperature differences. This temperature difference generally increases with decreasing latitude, i.e. near the equator, in the tropics. Historically, the main technical challenge of OTEC was to generate significant amounts of power efficiently from small temperature differences. Modern designs allow performance approaching the theoretical maximum Carnot efficiency.

OTEC offers total available energy that is one or two orders of magnitude higher than other ocean energy options such as wave power, but the small temperature difference makes energy extraction comparatively difficult and expensive, due to low thermal efficiency. Early OTEC systems had thermal efficiencies of 1 to 3%, well below the theoretical maximum of between 6 and 7%; current designs are expected to come closer to that maximum. The energy carrier, seawater, is free, though access to it carries a cost in pumps and pump energy. OTEC plants can operate continuously as a base load power generation system. Accurate cost-benefit analyses include these factors to assess performance, efficiency, operational and construction costs, and returns on investment.
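
The "6 to 7%" ceiling quoted above is just the Carnot limit for typical tropical surface and deep water temperatures; a minimal check:

# Carnot limit for typical OTEC temperatures
T_surface = 25 + 273.15   # warm surface water, K (typical tropical value)
T_deep = 5 + 273.15       # cold deep water, K (typical value)
print(f"{1 - T_deep / T_surface:.1%}")  # ~6.7%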


View of a land based OTEC facility at Keahole Point on the Kona coast of Hawaii (United States Department of Energy)

A heat engine is a thermodynamic device placed between a high temperature reservoir and a low temperature reservoir. As heat flows from one to the other, the engine converts some of the heat energy to work energy. This principle is used in steam turbines and internal combustion engines, while refrigerators reverse the direction of flow of both the heat and work energy. Rather than using heat energy from the burning of fuel, OTEC power draws on temperature differences caused by the sun's warming of the ocean surface. Much of the energy used by humans passes through a heat engine.

The only heat cycle suitable for OTEC is the Rankine cycle using a low-pressure turbine. Systems may be either closed-cycle or open-cycle. Closed-cycle engines use working fluids that are typically thought of as refrigerants such as ammonia or R-134a. Open-cycle engines use the water heat source as the working fluid.

The Earth's oceans are heated by the sun and cover over 70% of the Earth's surface.

History


Attempts to develop and refine OTEC technology started in the 1880s. In 1881, Jacques Arsene d'Arsonval, a French physicist, proposed tapping the thermal energy of the ocean. D'Arsonval's student, Georges Claude, built the first OTEC plant, in Cuba in 1930. The system generated 22 kW of electricity with a low-pressure turbine.

In 1931, Nikola Tesla released "Our Future Motive Power", which described such a system. Tesla ultimately concluded that the scale of engineering required made it impractical for large scale development.

In 1935, Claude constructed a plant aboard a 10,000-ton cargo vessel moored off the coast of Brazil. Weather and waves destroyed it before it could generate net power. (Net power is the amount of power generated after subtracting power needed to run the system.)

In 1956, French scientists designed a 3 MW plant for Abidjan, Ivory Coast. The plant was never completed, because new finds of large amounts of cheap oil made it uneconomical.

In 1962, J. Hilbert Anderson and James H. Anderson, Jr. focused on increasing component efficiency. They patented their new "closed cycle" design in 1967.

Although Japan has no potential sites, it is a major contributor to the development of the technology, primarily for export. Beginning in 1970 the Tokyo Electric Power Company successfully built and deployed a 100 kW closed-cycle OTEC plant on the island of Nauru. The plant became operational on 14 October 1981, producing about 120 kW of electricity; 90 kW was used to power the plant and the remaining electricity was used to power a school and other places. This set a world record for power output from an OTEC system in which the power was sent to a real power grid.

The United States became involved in 1974, establishing the Natural Energy Laboratory of Hawaii Authority at Keahole Point on the Kona coast of Hawaiʻi. Hawaii is the best U.S. OTEC location, due to its warm surface water, access to very deep, very cold water, and Hawaii's high electricity costs. The laboratory has become a leading test facility for OTEC technology.

India built a one MW floating OTEC pilot plant near Tamil Nadu, and its government continues to sponsor research.

Land, shelf and floating sites

OTEC has the potential to produce gigawatts of electrical power, and in conjunction with electrolysis, could produce enough hydrogen to completely replace all projected global fossil fuel consumption. Reducing costs remains an unsolved challenge, however. OTEC plants require a long, large diameter intake pipe, which is submerged a kilometer or more into the ocean's depths, to bring cold water to the surface.


Left: Pipes used for OTEC. Right: Floating OTEC plant constructed in India in 2000

Land-based

Land-based and near-shore facilities offer three main advantages over those located in deep water. Plants constructed on or near land do not require sophisticated mooring, lengthy power cables, or the more extensive maintenance associated with open-ocean environments. They can be installed in sheltered areas so that they are relatively safe from storms and heavy seas. Electricity, desalinated water, and cold, nutrient-rich seawater could be transmitted from near-shore facilities via trestle bridges or causeways. In addition, land-based or near-shore sites allow plants to operate with related industries such as mariculture or those that require desalinated water.

Favored locations include those with narrow shelves (volcanic islands), steep (15-20 degrees) offshore slopes, and relatively smooth sea floors. These sites minimize the length of the intake pipe. A land-based plant could be built well inland from the shore, offering more protection from storms, or on the beach, where the pipes would be shorter. In either case, easy access for construction and operation helps lower costs.

Land-based or near-shore sites can also support mariculture. Tanks or lagoons built on shore allow workers to monitor and control miniature marine environments. Mariculture products can be delivered to market via standard transport.

One disadvantage of land-based facilities arises from the turbulent wave action in the surf zone. Unless the OTEC plant's water supply and discharge pipes are buried in protective trenches, they will be subject to extreme stress during storms and prolonged periods of heavy seas. Also, the mixed discharge of cold and warm seawater may need to be carried several hundred meters offshore to reach the proper depth before it is released. This arrangement requires additional expense in construction and maintenance.

OTEC systems can avoid some of the problems and expenses of operating in a surf zone if they are built just offshore in waters ranging from 10 to 30 meters deep (Ocean Thermal Corporation 1984). This type of plant would use shorter (and therefore less costly) intake and discharge pipes, which would avoid the dangers of turbulent surf. The plant itself, however, would require protection from the marine environment, such as breakwaters and erosion-resistant foundations, and the plant output would need to be transmitted to shore.

Shelf-based

To avoid the turbulent surf zone as well as to move closer to the cold-water resource, OTEC plants can be mounted to the continental shelf at depths up to 100 meters (328 ft). A shelf-mounted plant could be towed to the site and affixed to the sea bottom. This type of construction is already used for offshore oil rigs. The complexities of operating an OTEC plant in deeper water may make them more expensive than land-based approaches. Problems include the stress of open-ocean conditions and more difficult product delivery. Addressing strong ocean currents and large waves adds engineering and construction expense. Platforms require extensive pilings to maintain a stable base. Power delivery can require long underwater cables to reach land. For these reasons, shelf-mounted plants are less attractive.

Floating

Floating OTEC facilities operate off-shore. Although potentially optimal for large systems, floating facilities present several difficulties. The difficulty of mooring plants in very deep water complicates power delivery. Cables attached to floating platforms are more susceptible to damage, especially during storms. Cables at depths greater than 1000 meters are difficult to maintain and repair. Riser cables, which connect the sea bed and the plant, need to be constructed to resist entanglement.

As with shelf-mounted plants, floating plants need a stable base for continuous operation. Major storms and heavy seas can break the vertically suspended cold-water pipe and interrupt warm water intake as well. To help prevent these problems, pipes can be made of flexible polyethylene attached to the bottom of the platform and gimballed with joints or collars. Pipes may need to be uncoupled from the plant to prevent storm damage. As an alternative to a warm-water pipe, surface water can be drawn directly into the platform; however, it is necessary to prevent the intake flow from being damaged or interrupted during violent motions caused by heavy seas.

Connecting a floating plant to power delivery cables requires the plant to remain relatively stationary. Mooring is an acceptable method, but current mooring technology is limited to depths of about 2,000 meters (6,562 ft). Even at shallower depths, the cost of mooring may be prohibitive.

Cycle types

Cold seawater is an integral part of each of the three types of OTEC systems: closed-cycle, open-cycle, and hybrid. To operate, the cold seawater must be brought to the surface. The primary approaches are active pumping and desalination. Desalinating seawater near the sea floor lowers its density, which causes it to rise to the surface.

The alternative to costly pipes to bring condensing cold water to the surface is to pump vaporized low boiling point fluid into the depths to be condensed, thus reducing pumping volumes and reducing technical and environmental problems and lowering costs.

Closed

Diagram of a closed cycle OTEC plant

Closed-cycle systems use fluid with a low boiling point, such as ammonia, to power a turbine to generate electricity. Warm surface seawater is pumped through a heat exchanger to vaporize the fluid. The expanding vapor turns the turbo-generator. Cold water, pumped through a second heat exchanger, condenses the vapor into a liquid, which is then recycled through the system.

In 1979, the Natural Energy Laboratory and several private-sector partners developed the "mini OTEC" experiment, which achieved the first successful at-sea production of net electrical power from closed-cycle OTEC. The mini OTEC vessel was moored 1.5 miles (2 km) off the Hawaiian coast and produced enough net electricity to illuminate the ship's light bulbs and run its computers and televisions.


In 1999, the Natural Energy Laboratory tested a 250 kW pilot closed-cycle plant, the largest of its kind. Since that time, there have been no OTEC tests in the United States, largely because energy economics made such facilities impractical.

Open

Open-cycle OTEC uses warm surface water to make electricity. Placing warm seawater in a low-pressure container causes it to boil. The expanding steam drives a low-pressure turbine attached to an electrical generator. The steam, which left its salt and other contaminants in the low-pressure container, is pure fresh water. It is condensed into a liquid by exposure to cold temperatures from deep-ocean water. This method produces desalinized fresh water, suitable for drinking water or irrigation.

In 1984, the Solar Energy Research Institute (now the National Renewable Energy Laboratory) developed a vertical-spout evaporator to convert warm seawater into low-pressure steam for open-cycle plants. Conversion efficiencies were as high as 97% for seawater-to-steam conversion (overall efficiency using a vertical-spout evaporator would still only be a few per cent). In May 1993, an open-cycle OTEC plant at Keahole Point, Hawaii, produced 50,000 watts of electricity during a net power-producing experiment. This broke the record of 40 kW set by a Japanese system in 1982.

Hybrid

A hybrid cycle combines the features of the closed- and open-cycle systems. In a hybrid, warm seawater enters a vacuum chamber and is flash-evaporated, similar to the open-cycle evaporation process. The steam vaporizes the ammonia working fluid of a closed-cycle loop on the other side of an ammonia vaporizer. The vaporized fluid then drives a turbine to produce electricity. The steam condenses within the heat exchanger and provides desalinated water.

Some proposed projects

OTEC projects under consideration include a small plant for the U.S. Navy base on the British-occupied island of Diego Garcia in the Indian Ocean. OCEES International, Inc. is working with the U.S. Navy on a design for a proposed 13-MW OTEC plant, to replace the current diesel generators. The OTEC plant would also provide 1.25 million gallons per day (MGD) of potable water. A private U.S. company has proposed building a 10-MW OTEC plant on Guam.

Hawaii

Lockheed Martin's Alternative Energy Development team is currently in the final design phase of a 10-MW closed cycle OTEC pilot system which will become operational in Hawaii in the 2012-2013 time frame. This system is being designed to expand to 100-MW commercial systems in the near future. In November, 2010 the U.S. Naval Facilities Engineering Command (NFEC) awarded the company a US$4.4 million contract modification to develop critical system components and designs for the plant, adding to the 2009 $8.1 million contract and two Department of Energy grants totaling $1 million in 2008 and March 2010.

Related activities

OTEC has uses other than power production.

Air conditioning

The 41 °F (5 °C) cold seawater made available by an OTEC system creates an opportunity to provide large amounts of cooling to operations near the plant. The water can be used in chilled-water coils to provide air-conditioning for buildings. It is estimated that a pipe 1 foot (0.30 m) in diameter can deliver 4,700 gallons per minute of water. Water at 43 °F (6 °C) could provide more than enough air-conditioning for a large building. Operating 8,000 hours per year in place of electrical air-conditioning that would otherwise be bought at 5-10¢ per kilowatt-hour, it would save $200,000-$400,000 in energy bills annually.
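
Those savings figures imply a displaced electrical load of roughly half a megawatt, as a quick check shows (a sketch; it assumes the savings scale linearly with the electricity price):

# Implied displaced chiller load behind the $200,000-$400,000/yr estimate
hours_per_year = 8000
for price_usd_kwh, savings_usd in [(0.05, 200_000), (0.10, 400_000)]:
    load_kw = savings_usd / (hours_per_year * price_usd_kwh)
    print(f"{load_kw:.0f} kW displaced at {price_usd_kwh*100:.0f} cents/kWh")
# both ends of the range correspond to ~500 kW of displaced load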

The InterContinental Resort and Thalasso-Spa on the island of Bora Bora uses an OTEC system to air-condition its buildings. The system passes seawater through a heat exchanger where it cools freshwater in a closed loop system. This freshwater is then pumped to buildings and directly cools the air.

Chilled-soil agriculture

OTEC technology supports chilled-soil agriculture. When cold seawater flows through underground pipes, it chills the surrounding soil. The temperature difference between roots in the cool soil and leaves in the warm air allows plants that evolved in temperate climates to be grown in the subtropics. Dr. John P. Craven, Dr. Jack Davidson and Richard Bailey patented this process and demonstrated it at a research facility at the Natural Energy Laboratory of Hawaii Authority (NELHA). The research facility demonstrated that more than 100 different crops can be grown using this system. Many normally could not survive in Hawaii or at Keahole Point.

Aquaculture

Aquaculture is the best-known byproduct, because it reduces the financial and energy costs of pumping large volumes of water from the deep ocean. Deep ocean water contains high concentrations of essential nutrients that are depleted in surface waters due to biological consumption. This "artificial upwelling" mimics the natural upwellings that are responsible for fertilizing and supporting the world's largest marine ecosystems, and the largest densities of life on the planet.

Cold-water delicacies, such as salmon and lobster, thrive in this nutrient-rich, deep, seawater. Microalgae such as Spirulina, a health food supplement, also can be cultivated.


Deep-ocean water can be combined with surface water to deliver water at an optimal temperature.

Non-native species such as salmon, lobster, abalone, trout, oysters, and clams can be raised in pools supplied by OTEC-pumped water. This extends the variety of fresh seafood products available for nearby markets. Such low-cost refrigeration can be used to maintain the quality of harvested fish, which deteriorate quickly in warm tropical regions.

Desalination

Desalinated water can be produced in open- or hybrid-cycle plants using surface condensers to turn evaporated seawater into potable water. System analysis indicates that a 2-megawatt plant could produce about 4,300 cubic metres (150,000 cu ft) of desalinated water each day. Another system, patented by Richard Bailey, creates condensate water by regulating deep ocean water flow through surface condensers in correlation with fluctuating dew-point temperatures. This condensation system uses no incremental energy and has no moving parts.

Hydrogen production

Hydrogen can be produced via electrolysis using OTEC electricity. Generated steam with electrolyte compounds added to improve efficiency is a relatively pure medium for hydrogen production. OTEC can be scaled to generate large quantities of hydrogen. The main challenge is cost relative to other energy sources and fuels.

Mineral extraction

The ocean contains 57 trace elements in salts and other forms and dissolved in solution. In the past, most economic analyses concluded that mining the ocean for trace elements would be unprofitable, in part because of the energy required to pump the water. Mining generally targets minerals that occur in high concentrations and can be extracted easily, such as magnesium. With OTEC plants supplying water, the only cost is for extraction. The Japanese investigated the possibility of extracting uranium and found developments in other technologies (especially materials sciences) were improving the prospects.

Political concerns

Because OTEC facilities are more-or-less stationary surface platforms, their exact location and legal status may be affected by the United Nations Convention on the Law of the Sea treaty (UNCLOS). This treaty grants coastal nations 3-, 12-, and 200-mile zones of varying legal authority from land, creating potential conflicts and regulatory barriers. OTEC plants and similar structures would be considered artificial islands under the treaty, giving them no independent legal status. OTEC plants could be perceived as either a threat or potential partner to fisheries or to seabed mining operations controlled by the International Seabed Authority.


Cost and economics

For OTEC to be viable as a power source, the technology must have tax and subsidy treatment similar to competing energy sources. Because OTEC systems have not yet been widely deployed, cost estimates are uncertain. One study estimates power generation costs as low as US $0.07 per kilowatt-hour, compared with $0.05 - $0.07 for subsidized wind systems.

Beneficial factors that should be taken into account include OTEC's lack of waste products and fuel consumption, the area in which it is available (often within 20° of the equator), the geopolitical effects of petroleum dependence, compatibility with alternate forms of ocean power such as wave energy, tidal energy and methane hydrates, and supplemental uses for the seawater.

Variation of ocean temperature with depth

The total insolation received by the oceans (covering 70% of the earth's surface, with a clearness index of 0.5 and average energy retention of 15%) is:

5.457 × 10^18 MJ/yr × 0.7 × 0.5 × 0.15 = 2.87 × 10^17 MJ/yr

We can use Lambert's law to quantify the solar energy absorption by water:

−dI(y)/dy = μI

where y is the depth of water, I is intensity and μ is the absorption coefficient. Solving the above differential equation gives:

I(y) = I0 exp(−μy)

where I0 is the intensity at the surface.

The absorption coefficient μ may range from 0.05 m−1 for very clear fresh water to 0.5 m−1 for very salty water.

Since the intensity falls exponentially with depth y, heat absorption is concentrated at the top layers. Typically in the tropics, surface temperature values are in excess of 25 °C (77 °F), while at a depth of 1 kilometer (0.62 mi), the temperature is about 5-10 °C (41-50 °F). The warmer (and hence lighter) waters at the surface mean there are no thermal convection currents. Due to the small temperature gradients, heat transfer by conduction is too low to equalize the temperatures. The ocean is thus both a practically infinite heat source and a practically infinite heat sink.
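
The exponential law above makes that concentration at the top layers concrete; a minimal sketch over the μ range quoted:

# Fraction of solar intensity absorbed above depth y, from I(y) = I0*exp(-mu*y)
import math
for mu in (0.05, 0.5):        # absorption coefficient, 1/m (clear vs very salty water)
    for depth_m in (1, 10, 100):
        absorbed = 1 - math.exp(-mu * depth_m)
        print(f"mu = {mu}/m, y = {depth_m} m: {absorbed:.0%} absorbed")
# with mu = 0.5/m, over 99% of the energy is absorbed in the top 10 m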

This temperature difference varies with latitude and season, with the maximum in tropical, subtropical and equatorial waters. Hence the tropics are generally the best OTEC locations.


Open/Claude cycle

In this scheme, warm surface water at around 27 °C (81 °F) enters an evaporator at a pressure slightly below the saturation pressure, causing it to vaporize:

H1 = Hf

where Hf is the enthalpy of liquid water at the inlet temperature, T1.

This temporarily superheated water undergoes volume boiling, as opposed to pool boiling in conventional boilers, where the heating surface is in contact. Thus the water partially flashes to steam, with two-phase equilibrium prevailing. Suppose that the pressure inside the evaporator is maintained at the saturation pressure corresponding to T2. Then:

H2 = H1 = Hf + x2 Hfg (properties evaluated at T2)

Here, x2 is the fraction of water by mass that vaporizes. The warm water mass flow rate per unit turbine mass flow rate is 1/x2.
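
To get a feel for the magnitudes, here is a rough evaluation of x2 (a sketch; the temperatures and the approximate steam-table enthalpies are illustrative assumptions, not design figures):

# Rough flash fraction for warm water entering at ~27 C and flashing at ~22 C
hf_t1 = 113.2    # kJ/kg, saturated liquid enthalpy near 27 C (approximate)
hf_t2 = 92.3     # kJ/kg, saturated liquid enthalpy near 22 C (approximate)
hfg_t2 = 2449.0  # kJ/kg, latent heat of vaporization near 22 C (approximate)
x2 = (hf_t1 - hf_t2) / hfg_t2   # from H1 = Hf(T2) + x2*Hfg(T2)
print(f"x2 = {x2:.4f}; warm water per kg of steam = {1/x2:.0f} kg")
# well under 1% flashes, so over 100 kg of warm water is pumped per kg of steam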

The low pressure in the evaporator is maintained by a vacuum pump that also removes the dissolved non-condensable gases from the evaporator. The evaporator now contains a mixture of water and steam of very low vapor quality (steam content). The steam is separated from the water as saturated vapor:

H3 = Hg

The remaining water is saturated and is discharged to the ocean in the open cycle. The steam is a low pressure/high specific volume working fluid. It expands in a special low pressure turbine.

Here, Hg corresponds to T2. For an ideal isentropic (reversible adiabatic) turbine:

s5,s = s3 = sf + x5,s sfg (entropies evaluated at T5)

The above equation corresponds to the temperature at the exhaust of the turbine, T5. x5,s is the mass fraction of vapor at state 5.

The enthalpy at T5 is:

H5,s = Hf + x5,s Hfg (evaluated at T5)

This enthalpy is lower. The adiabatic reversible turbine work = H3 − H5,s.

Actual turbine work WT = (H3 − H5,s) × polytropic efficiency

The condenser temperature and pressure are lower. Since the turbine exhaust is to be discharged back into the ocean, a direct contact condenser is used to mix the exhaust with cold water, which results in near-saturated water. That water is now discharged back to the ocean.

H6 = Hf at T5. T7 is the temperature of the exhaust mixed with cold sea water; the vapour content at this point is negligible.

The temperature differences between stages include that between warm surface water and working steam, that between exhaust steam and cooling water, and that between cooling water reaching the condenser and deep water. These represent external irreversibilities that reduce the overall temperature difference.

The same energy balances yield the cold water flow rate per unit turbine mass flow rate, and from the required plant output follow the turbine mass flow rate, the warm water mass flow rate, and the cold water mass flow rate.

Closed/Anderson cycle

The closed Anderson cycle was developed starting in the 1960s by J. Hilbert Anderson of Sea Solar Power, Inc. In this cycle, QH is the heat transferred in the evaporator from the warm sea water to the working fluid. The working fluid exits the evaporator as a gas near its dew point.

The high-pressure, high-temperature gas then is expanded in the turbine to yield turbine work, WT. The working fluid is slightly superheated at the turbine exit and the turbine typically has an efficiency of 90% based on reversible, adiabatic expansion.

From the turbine exit, the working fluid enters the condenser where it rejects heat, -QC, to the cold sea water. The condensate is then compressed to the highest pressure in the cycle, requiring condensate pump work, WC. Thus, the Anderson closed cycle is a Rankine-type cycle similar to the conventional power plant steam cycle except that in the Anderson cycle the working fluid is never superheated more than a few degrees Fahrenheit. Owing to viscous effects, working fluid pressure drops in both the evaporator and the condenser. This pressure drop, which depends on the types of heat exchangers used, must be considered in final design calculations but is ignored here to simplify the analysis. Thus, the parasitic condensate pump work, WC, computed here will be lower than if the heat exchanger pressure drop was included. The major additional parasitic energy requirements in the OTEC plant are the cold water pump work, WCT, and the warm water pump work, WHT. Denoting all other parasitic energy requirements by WA, the net work from the OTEC plant, WNP, is:

WNP = WT + WC + WCT + WHT + WA

where the pump and parasitic terms are negative quantities, since they are work inputs.

The thermodynamic cycle undergone by the working fluid can be analyzed without detailed consideration of the parasitic energy requirements. From the first law of thermodynamics, the energy balance for the working fluid as the system is:

WN = QH + QC

where WN = WT + WC is the net work for the thermodynamic cycle. For the idealized case in which there is no working fluid pressure drop in the heat exchangers, the heat transfers follow from δQ = T dS:

QH = ∫ T dS (over the evaporator)

and

QC = ∫ T dS (over the condenser)

so that the net thermodynamic cycle work becomes:

WN = ∮ T dS

Subcooled liquid enters the evaporator. Due to the heat exchange with warm sea water, evaporation takes place and usually superheated vapor leaves the evaporator. This vapor drives the turbine and the 2-phase mixture enters the condenser. Usually, the subcooled liquid leaves the condenser and finally, this liquid is pumped to the evaporator completing a cycle.

Working fluids

A popular choice of working fluid is ammonia, which has superior transport properties, easy availability, and low cost. Ammonia, however, is toxic and flammable. Fluorinated carbons such as CFCs and HCFCs are not toxic or flammable, but they contribute to ozone layer depletion. Hydrocarbons too are good candidates, but they are highly flammable; in addition, this would create competition for use of them directly as fuels. The power plant size is dependent upon the vapor pressure of the working fluid. With increasing vapor pressure, the size of the turbine and heat exchangers decreases, while the wall thickness of the pipe and heat exchangers increases to endure high pressure, especially on the evaporator side.

Technical difficulties

Dissolved gases

The performance of direct contact heat exchangers operating at typical OTEC boundary conditions is important to the Claude cycle. Many early Claude cycle designs used a surface condenser since their performance was well understood. However, direct contact condensers offer significant disadvantages. As cold water rises in the intake pipe, the pressure decreases to the point where gas begins to evolve. If a significant amount of gas comes out of solution, placing a gas trap before the direct contact heat exchangers may be justified. Experiments simulating conditions in the warm water intake pipe indicated about 30% of the dissolved gas evolves in the top 8.5 meters (28 ft) of the tube. The trade-off between pre-deaeration of the seawater and expulsion of non-condensable gases from the condenser depends on the gas evolution dynamics, deaerator efficiency, head loss, vent compressor efficiency and parasitic power. Experimental results indicate vertical spout condensers perform some 30% better than falling jet types.


Microbial fouling

Because raw seawater must pass through the heat exchanger, care must be taken to maintain good thermal conductivity. Biofouling layers as thin as 25 to 50 micrometres (0.00098 to 0.0020 in) can degrade heat exchanger performance by as much as 50%. A 1977 study in which mock heat exchangers were exposed to seawater for ten weeks concluded that although the level of microbial fouling was low, the thermal conductivity of the system was significantly impaired. The apparent discrepancy between the level of fouling and the heat transfer impairment is the result of a thin layer of water trapped by the microbial growth on the surface of the heat exchanger.

Another study concluded that fouling degrades performance over time, and determined that although regular brushing was able to remove most of the microbial layer, over time a tougher layer formed that could not be removed through simple brushing. The study passed sponge rubber balls through the system. It concluded that although the ball treatment decreased the fouling rate it was not enough to completely halt growth and brushing was occasionally necessary to restore capacity. The microbes regrew more quickly later in the experiment (i.e. brushing became necessary more often) replicating the results of a previous study. The increased growth rate after subsequent cleanings appears to result from selection pressure on the microbial colony.

Treatment with continuous chlorination for 1 hour per day, and treatment with intermittent periods of free fouling followed by chlorination (again 1 hour per day), were studied. Chlorination slowed but did not stop microbial growth; however, chlorination levels of 0.1 mg per liter for 1 hour per day may prove effective for long term operation of a plant. The study concluded that although microbial fouling was an issue for the warm surface water heat exchanger, the cold water heat exchanger suffered little or no biofouling and only minimal inorganic fouling.

Besides water temperature, microbial fouling also depends on nutrient levels, with growth occurring faster in nutrient rich water. The fouling rate also depends on the material used to construct the heat exchanger. Aluminium tubing slows the growth of microbial life, although the oxide layer which forms on the inside of the pipes complicates cleaning and leads to larger efficiency losses. In contrast, titanium tubing allows biofouling to occur faster but cleaning is more effective than with aluminium.

Sealing

The evaporator, turbine, and condenser operate in partial vacuum ranging from 3% to 1% of atmospheric pressure. The system must be carefully sealed to prevent in-leakage of atmospheric air that can degrade or shut down operation. In open-cycle OTEC, the specific volume of the low-pressure steam is very large compared to that of the pressurized working fluid used in closed-cycle systems. Components must have large flow areas to ensure steam velocities do not attain excessively high values.


Parasitic power consumption by exhaust compressor

An approach for reducing the exhaust compressor parasitic power loss is as follows. After most of the steam has been condensed by spout condensers, the non-condensible gas steam mixture is passed through a counter current region which increases the gas-steam reaction by a factor of five. The result is an 80% reduction in the exhaust pumping power requirements.

Cold air/warm water conversion

In winter in coastal Arctic locations, seawater can be 40 °C (72 °F) warmer than ambient air temperature. Closed-cycle systems could exploit the air-water temperature difference. Eliminating seawater extraction pipes might make a system based on this concept less expensive than OTEC.


Chapter- 6

Hydroelectricity

The Gordon Dam in Tasmania is a large conventional dammed-hydro facility, with an installed capacity of up to 430 MW.


Hydroelectricity is the term referring to electricity generated by hydropower; the production of electrical power through the use of the gravitational force of falling or flowing water. It is the most widely used form of renewable energy. Once a hydroelectric complex is constructed, the project produces no direct waste, and has a considerably lower output level of the greenhouse gas carbon dioxide (CO2) than fossil fuel powered energy plants. Worldwide, an installed capacity of 777 GWe supplied 2998 TWh of hydroelectricity in 2006. This was approximately 20% of the world's electricity, and accounted for about 88% of electricity from renewable sources.
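
As an aside on those 2006 figures, the implied world-average capacity factor can be computed directly (a minimal sketch):

# Implied world-average capacity factor of hydro in 2006
installed_gw = 777
generated_twh = 2998
hours_per_year = 8760
capacity_factor = generated_twh * 1000 / (installed_gw * hours_per_year)
print(f"{capacity_factor:.0%}")  # ~44%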

History

Hydropower has been used since ancient times to grind flour and perform other tasks. In the mid-1770s, the French engineer Bernard Forest de Bélidor published Architecture Hydraulique, which described vertical- and horizontal-axis hydraulic machines. By the late 19th century, the electrical generator had been developed and could now be coupled with hydraulics. The growing demand of the Industrial Revolution would drive development as well. In 1878, the world's first house to be powered with hydroelectricity was Cragside in Northumberland, England. The old Schoelkopf Power Station No. 1 near Niagara Falls, on the U.S. side, began to produce electricity in 1881. The first Edison hydroelectric power plant - the Vulcan Street Plant - began operating September 30, 1882, in Appleton, Wisconsin, with an output of about 12.5 kilowatts. By 1886 there were about 45 hydroelectric power plants in the U.S. and Canada. By 1889, there were 200 in the U.S.

At the beginning of the 20th century, a large number of small hydroelectric power plants were being constructed by commercial companies in the mountains that surrounded metropolitan areas. By 1920, when 40% of the power produced in the United States was hydroelectric, the Federal Power Act was enacted into law. The Act created the Federal Power Commission, whose main purpose was to regulate hydroelectric power plants on federal land and water. As the power plants became larger, their associated dams developed additional purposes, including flood control, irrigation and navigation. Federal funding became necessary for large-scale development, and federally owned corporations like the Tennessee Valley Authority (1933) and the Bonneville Power Administration (1937) were created. Additionally, the Bureau of Reclamation, which had begun a series of western U.S. irrigation projects in the early 20th century, was now constructing large hydroelectric projects under authorizations such as the 1928 Boulder Canyon Project Act. The U.S. Army Corps of Engineers was also involved in hydroelectric development, completing the Bonneville Dam in 1937 and being recognized by the Flood Control Act of 1936 as the premier federal flood control agency.

Hydroelectric power plants continued to become larger throughout the 20th century. After the Hoover Dam's initial 1,345 MW power plant became the world's largest hydroelectric power plant in 1936, it was soon eclipsed by the 6,809 MW Grand Coulee Dam in 1942. Brazil's and Paraguay's Itaipu Dam opened in 1984 as the largest, producing 14,000 MW, but was surpassed in 2008 by the Three Gorges Dam in China with a production capacity of 22,500 MW. Hydroelectricity would eventually supply countries like Norway, the Democratic Republic of the Congo, Paraguay and Brazil with over 85% of their electricity. The United States currently has over 2,000 hydroelectric power plants which supply 49% of its renewable electricity.

Generating methods

Turbine row at Los Nihuiles Power Station in Mendoza, Argentina


Cross section of a conventional hydroelectric dam.


A typical turbine and generator

Conventional

Most hydroelectric power comes from the potential energy of dammed water driving a water turbine and generator. The power extracted from the water depends on the volume and on the difference in height between the source and the water's outflow. This height difference is called the head. The amount of potential energy in water is proportional to the head. To deliver water to a turbine while maintaining pressure arising from the head, a large pipe called a penstock may be used.

Pumped-storage

This method produces electricity to supply high peak demands by moving water between reservoirs at different elevations. At times of low electrical demand, excess generation capacity is used to pump water into the higher reservoir. When there is higher demand, water is released back into the lower reservoir through a turbine. Pumped-storage schemes currently provide the most commercially important means of large-scale grid energy storage and improve the daily capacity factor of the generation system.
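
The energy stored in the upper reservoir follows directly from the head and volume; a minimal sketch (the volume, head, and round-trip efficiency are illustrative assumptions):

# Energy stored by pumping water uphill: E = rho * V * g * h
rho, g = 1000.0, 9.8      # water density (kg/m^3), gravity (m/s^2)
volume_m3 = 1e6           # water moved to the upper reservoir (illustrative)
head_m = 300.0            # elevation difference (illustrative)
round_trip_eff = 0.75     # assumed round-trip efficiency
energy_j = rho * volume_m3 * g * head_m
print(f"{energy_j / 3.6e9:.0f} MWh stored, "
      f"~{energy_j * round_trip_eff / 3.6e9:.0f} MWh recoverable")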


Run-of-the-river

Run-of-the-river hydroelectric stations are those with smaller reservoir capacities, which makes it impossible to store water for later generation.

Tide

A tidal power plant makes use of the daily rise and fall of water due to tides; such sources are highly predictable, and if conditions permit construction of reservoirs, can also be dispatchable to generate power during high demand periods. Less common types of hydro schemes use water's kinetic energy or undammed sources such as undershot waterwheels.

Sizes and capacities of hydroelectric facilities

Large and specialized industrial facilities

The Three Gorges Dam is the largest operating hydroelectric power station, with an installed capacity of 22,500 MW.

Although no official definition exists for the capacity range of large hydroelectric power stations, facilities from over a few hundred megawatts to more than 10 GW are generally considered large hydroelectric facilities. Currently, only three facilities over 10 GW (10,000 MW) are in operation worldwide: Three Gorges Dam at 22.5 GW, Itaipu Dam at 14 GW, and Guri Dam at 10.2 GW. Large-scale hydroelectric power stations are more commonly seen as the largest power producing facilities in the world, with some hydroelectric facilities capable of generating more than double the installed capacities of the current largest nuclear power stations.

While many hydroelectric projects supply public electricity networks, some are created to serve specific industrial enterprises. Dedicated hydroelectric projects are often built to provide the substantial amounts of electricity needed for aluminium electrolytic plants, for example. The Grand Coulee Dam was switched to support Alcoa aluminium production in Bellingham, Washington, United States for American World War II airplanes before it was allowed to provide irrigation and power to citizens (in addition to aluminium power) after the war. In Suriname, the Brokopondo Reservoir was constructed to provide electricity for the Alcoa aluminium industry. New Zealand's Manapouri Power Station was constructed to supply electricity to the aluminium smelter at Tiwai Point.

The construction of these large hydroelectric facilities, and the changes they make to the environment, also occur on a very large scale, and the damage they cause can offset the benefit of hydroelectricity being a renewable resource. Many specialized organizations, such as the International Hydropower Association, study these matters on a global scale.

Small

Small hydro is the development of hydroelectric power on a scale serving a small community or industrial plant. The definition of a small hydro project varies, but a generating capacity of up to 10 megawatts (MW) is generally accepted as the upper limit of what can be termed small hydro. This may be stretched to 25 MW in Canada and 30 MW in the United States. Small-scale hydroelectricity production grew by 28% between 2005 and 2008, raising the total world small-hydro capacity to 85 GW. Over 70% of this was in China (65 GW), followed by Japan (3.5 GW), the United States (3 GW), and India (2 GW).

Small hydro plants may be connected to conventional electrical distribution networks as a source of low-cost renewable energy. Alternatively, small hydro projects may be built in isolated areas that would be uneconomic to serve from a network, or in areas where there is no national electrical distribution network. Since small hydro projects usually have minimal reservoirs and civil construction work, they are seen as having a relatively low environmental impact compared to large hydro. This decreased environmental impact depends strongly on the balance between stream flow and power production.

Micro

A micro-hydro facility in Vietnam.

Micro hydro is a term used for hydroelectric power installations that typically produce up to 100 kW of power. These installations can provide power to an isolated home or small community, or are sometimes connected to electric power networks. There are many of these installations around the world, particularly in developing nations, as they can provide an economical source of energy without the purchase of fuel. Micro hydro systems complement photovoltaic solar energy systems because in many areas water flow, and thus available hydro power, is highest in the winter, when solar energy is at a minimum.

Pico

Pico hydro is a term used for hydroelectric power generation of under 5 kW. It is useful in small, remote communities that require only a small amount of electricity, for example to power one or two fluorescent light bulbs and a TV or radio for a few homes. Even smaller turbines of 200-300 W may power a single home in a developing country with a drop of only 1 m (3 ft). Pico-hydro setups are typically run-of-the-river, meaning that dams are not used; instead, pipes divert some of the flow, drop it down a gradient, and pass it through the turbine before returning it to the stream.

Calculating the amount of available power

A simple formula for approximating electric power production at a hydroelectric plant is: P = ρhrgk, where

• P is power in watts,
• ρ is the density of water (~1000 kg/m3),
• h is height in meters,
• r is flow rate in cubic meters per second,
• g is acceleration due to gravity of 9.8 m/s2,
• k is a coefficient of efficiency ranging from 0 to 1. Efficiency is often higher (that is, closer to 1) with larger and more modern turbines.

Annual electric energy production depends on the available water supply. In some installations the water flow rate can vary by a factor of 10:1 over the course of a year.
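
A direct translation of the formula above into Python may help make the units concrete; the plant figures in the example are hypothetical.

    # P = rho * h * r * g * k, exactly as defined above.
    RHO_WATER = 1000.0  # kg/m3
    G = 9.8             # m/s2

    def hydro_power_watts(head_m, flow_m3_per_s, efficiency):
        """Approximate electric power output of a hydroelectric plant."""
        return RHO_WATER * head_m * flow_m3_per_s * G * efficiency

    # A hypothetical plant with a 100 m head, 10 m3/s of flow and a modern
    # turbine at k = 0.9 produces roughly 8.8 MW.
    print(hydro_power_watts(100, 10, 0.9))  # -> 8820000.0 W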

Advantages and disadvantages of hydroelectricity

Advantages

The Ffestiniog Power Station can generate 360 MW of electricity within 60 seconds of the demand arising.

Economics

The major advantage of hydroelectricity is elimination of the cost of fuel. The cost of operating a hydroelectric plant is nearly immune to increases in the cost of fossil fuels such as oil, natural gas or coal, and no imports are needed.

Hydroelectric plants also tend to have longer economic lives than fuel-fired generation, with some plants now in service which were built 50 to 100 years ago. Operating labor cost is also usually low, as plants are automated and have few personnel on site during normal operation.

Where a dam serves multiple purposes, a hydroelectric plant may be added with relatively low construction cost, providing a useful revenue stream to offset the costs of dam operation. It has been calculated that the sale of electricity from the Three Gorges Dam will cover the construction costs after 5 to 8 years of full generation.

CO2 emissions

Since hydroelectric dams do not burn fossil fuels, they do not directly produce carbon dioxide. While some carbon dioxide is produced during manufacture and construction of the project, this is a tiny fraction of the operating emissions of equivalent fossil-fuel electricity generation. One comparison of greenhouse gas emissions and other externalities between energy sources can be found in the ExternE project by the Paul Scherrer Institut and the University of Stuttgart, which was funded by the European Commission. According to this project, hydroelectricity produces the least greenhouse gases and externalities of any energy source. Coming in second place was wind, third was nuclear energy, and fourth was solar photovoltaic. The extremely positive greenhouse gas impact of hydroelectricity is found especially in temperate climates. The above study was for local energy in Europe; presumably similar conditions prevail in North America and Northern Asia, which all see a regular, natural freeze/thaw cycle (with associated seasonal plant decay and regrowth).

Other uses of the reservoir

Reservoirs created by hydroelectric schemes often provide facilities for water sports, and become tourist attractions themselves. In some countries, aquaculture in reservoirs is common. Multi-use dams installed for irrigation support agriculture with a relatively constant water supply. Large hydro dams can control floods, which would otherwise affect people living downstream of the project.

Disadvantages

Ecosystem damage and loss of land

Hydroelectric power stations that use dams submerge large areas of land due to the requirement of a reservoir.

Large reservoirs required for the operation of hydroelectric power stations result in submersion of extensive areas upstream of the dams, destroying biologically rich and productive lowland and riverine valley forests, marshland and grasslands. The loss of land is often exacerbated by the fact that reservoirs cause habitat fragmentation of surrounding areas.

Hydroelectric projects can be disruptive to surrounding aquatic ecosystems both upstream and downstream of the plant site. For instance, studies have shown that dams along the Atlantic and Pacific coasts of North America have reduced salmon populations by preventing access to spawning grounds upstream, even though most dams in salmon habitat have fish ladders installed. Salmon spawn are also harmed on their migration to sea when they must pass through turbines. This has led to some areas transporting smolt downstream by barge during parts of the year. In some cases dams, such as the Marmot Dam, have been demolished due to the high impact on fish. Turbine and power-plant designs that are easier on aquatic life are an active area of research. Mitigation measures such as fish ladders may be required at new projects or as a condition of re-licensing of existing projects.

Generation of hydroelectric power changes the downstream river environment. Water exiting a turbine usually contains very little suspended sediment, which can lead to scouring of river beds and loss of riverbanks. Since turbine gates are often opened intermittently, rapid or even daily fluctuations in river flow are observed. For example, in the Grand Canyon, the daily cyclic flow variation caused by Glen Canyon Dam was found to be contributing to erosion of sand bars. Dissolved oxygen content of the water may change from pre-construction conditions. Depending on the location, water exiting from turbines is typically much warmer than the pre-dam water, which can change aquatic faunal populations, including endangered species, and prevent natural freezing processes from occurring. Some hydroelectric projects also use canals to divert a river at a shallower gradient to increase the head of the scheme. In some cases, the entire river may be diverted leaving a dry riverbed. Examples include the Tekapo and Pukaki Rivers in New Zealand.

Flow shortage

Changes in the amount of river flow will correlate with the amount of energy produced by a dam. Lower river flows, because of drought, climate change, or upstream dams and diversions, will reduce the amount of live storage in a reservoir, thereby reducing the amount of water that can be used for hydroelectricity. The result of diminished river flow can be power shortages in areas that depend heavily on hydroelectric power.

Methane emissions (from reservoirs)

The Hoover Dam in the United States is a large conventional dammed-hydro facility, with an installed capacity of 2,080 MW.

The positive greenhouse gas impact of hydroelectricity is lower in tropical regions, as it has been noted that the reservoirs of power plants in tropical regions may produce substantial amounts of methane. This is due to plant material in flooded areas decaying in an anaerobic environment and forming methane, a very potent greenhouse gas. According to the World Commission on Dams report, where the reservoir is large compared to the generating capacity (less than 100 watts per square metre of surface area) and no clearing of the forests in the area was undertaken prior to impoundment of the reservoir, greenhouse gas emissions from the reservoir may be higher than those of a conventional oil-fired thermal generation plant. Although these emissions represent carbon already in the biosphere, not fossil deposits that had been sequestered from the carbon cycle, there is a greater amount of methane due to anaerobic decay, causing greater damage than would otherwise have occurred had the forest decayed naturally.

In boreal reservoirs of Canada and Northern Europe, however, greenhouse gas emissions are typically only 2% to 8% of any kind of conventional fossil-fuel thermal generation. A new class of underwater logging operation that targets drowned forests can mitigate the effect of forest decay.

In 2007, International Rivers accused hydropower firms of cheating with fake carbon credits under the Clean Development Mechanism (CDM), for hydropower projects that were already finished or under construction at the time they applied to join the CDM. These carbon credits, earned by hydropower projects under the CDM in developing countries, can be sold to companies and governments in rich countries in order to comply with the Kyoto Protocol.

Relocation

Another disadvantage of hydroelectric dams is the need to relocate the people living where the reservoirs are planned. In February 2008, it was estimated that 40-80 million people worldwide had been physically displaced as a direct result of dam construction. In many cases, no amount of compensation can replace ancestral and cultural attachments to places that have spiritual value to the displaced population. Additionally, historically and culturally important sites can be flooded and lost.

Such problems have arisen at the Aswan Dam in Egypt between 1960 and 1980, the Three Gorges Dam in China, the Clyde Dam in New Zealand, and the Ilisu Dam in Turkey.

Failure hazard

Because large conventional dammed-hydro facilities hold back large volumes of water, a failure due to poor construction, terrorism, or other causes can be catastrophic to downriver settlements and infrastructure. Dam failures have been some of the largest man-made disasters in history. Also, good design and construction are not an adequate guarantee of safety. Dams are tempting industrial targets for wartime attack, sabotage and terrorism, such as Operation Chastise in World War II.

The Banqiao Dam failure in Southern China directly resulted in the deaths of 26,000 people, and another 145,000 from epidemics. Millions were left homeless. The creation of a dam in a geologically inappropriate location may also cause disasters such as that of the Vajont Dam in Italy, where almost 2,000 people died in 1963.

Smaller dams and micro hydro facilities create less risk, but can form continuing hazards even after they have been decommissioned. For example, the small Kelly Barnes Dam failed in 1977, causing 39 deaths in the Toccoa Flood, twenty years after its power plant was decommissioned in 1957.

Comparison with other methods of power generation

Hydroelectricity eliminates the flue gas emissions from fossil fuel combustion, including pollutants such as sulfur dioxide, nitric oxide, carbon monoxide, dust, and mercury in the coal. Hydroelectricity also avoids the hazards of coal mining and the indirect health effects of coal emissions. Compared to nuclear power, hydroelectricity generates no nuclear waste, has none of the dangers associated with uranium mining, nor nuclear leaks. Unlike uranium, hydroelectricity is also a renewable energy source.

Compared to wind farms, hydroelectric power plants have a more predictable load factor. If the project has a storage reservoir, it can be dispatched to generate power when needed. Hydroelectric plants can be easily regulated to follow variations in power demand.

Unlike fossil-fuelled combustion turbines, construction of a hydroelectric plant requires a long lead-time for site studies, hydrological studies, and environmental impact assessment. Hydrological data up to 50 years or more is usually required to determine the best sites and operating regimes for a large hydroelectric plant. Unlike plants operated by fuel, such as fossil or nuclear energy, the number of sites that can be economically developed for hydroelectric production is limited; in many areas the most cost effective sites have already been exploited. New hydro sites tend to be far from population centers and require extensive transmission lines. Hydroelectric generation depends on rainfall in the watershed, and may be significantly reduced in years of low rainfall or snowmelt. Long-term energy yield may be affected by climate change. Utilities that primarily use hydroelectric power may spend additional capital to build extra capacity to ensure sufficient power is available in low water years.

World hydroelectric capacity

World renewable energy share as of 2008, with hydroelectricity accounting for more than 50% of all renewable energy sources.

The ranking of hydroelectric capacity is either by actual annual energy production or by installed capacity power rating. A hydroelectric plant rarely operates at its full power rating over a full year; the ratio between annual average power and installed capacity rating is the capacity factor. The installed capacity is the sum of all generator nameplate power ratings. The figures are from the BP Statistical Review - Full Report 2009.
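
The capacity factor defined here is a simple ratio, sketched below in Python; the plant figures are indicative only, chosen to be of roughly Itaipu's scale rather than taken from this text.

    # Capacity factor = average power delivered / installed nameplate rating.
    def capacity_factor(energy_mwh_per_year, installed_mw):
        hours_per_year = 8760
        return energy_mwh_per_year / (installed_mw * hours_per_year)

    # Indicative numbers only: a 14,000 MW plant generating ~94,700,000 MWh
    # in a year operates at a capacity factor near 0.77.
    print(round(capacity_factor(94_700_000, 14_000), 2))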

Brazil, Canada, Norway, Paraguay, Switzerland, and Venezuela are the only countries in the world where the majority of the internal electric energy production is from hydroelectric power. Paraguay produces 100% of its electricity from hydroelectric dams, and exports 90% of its production to Brazil and to Argentina. Norway produces 98–99% of its electricity from hydroelectric sources.

Chapter- 7

Electrical Generator

U.S. NRC image of a modern steam turbine generator

In electricity generation, an electric generator is a device that converts mechanical energy to electrical energy. The reverse conversion of electrical energy into mechanical energy is done by a motor; motors and generators have many similarities. A generator forces electrons in the windings to flow through the external electrical circuit. It is somewhat analogous to a water pump, which creates a flow of water but does not create the water inside. The source of mechanical energy may be a reciprocating or turbine steam engine, water falling through a turbine or waterwheel, an internal combustion engine, a wind turbine, a hand crank, compressed air or any other source of mechanical energy.

Early 20th century alternator made in Budapest, Hungary, in the power generating hall of a hydroelectric station

Generator in Zwevegem, West Flanders, Belgium

Historical developments

Before the connection between magnetism and electricity was discovered, electrostatic generators were invented that used electrostatic principles. These generated very high voltages and low currents. They operated by using moving electrically charged belts, plates and disks to carry charge to a high potential electrode. The charge was generated using either of two mechanisms:

• Electrostatic induction
• The triboelectric effect, where the contact between two insulators leaves them charged.

Because of their inefficiency and the difficulty of insulating machines producing very high voltages, electrostatic generators had low power ratings and were never used for generation of commercially significant quantities of electric power. The Wimshurst machine and Van de Graaff generator are examples of these machines that have survived.

Jedlik's dynamo

In 1827, Hungarian Anyos Jedlik started experimenting with electromagnetic rotating devices which he called electromagnetic self-rotors. In the prototype of the single-pole electric starter (finished between 1852 and 1854) both the stationary and the revolving parts were electromagnetic. He formulated the concept of the dynamo at least 6 years before Siemens and Wheatstone but didn't patent it as he thought he wasn't the first to realize this. In essence the concept is that instead of permanent magnets, two electromagnets opposite to each other induce the magnetic field around the rotor. It was also the discovery of the principle of self-excitation.

Faraday's disk

Faraday disk, the first electric generator. The horseshoe-shaped magnet (A) created a magnetic field through the disk (D). When the disk was turned this induced an electric current radially outward from the center toward the rim. The current flowed out through the sliding spring contact m, through the external circuit, and back into the center of the disk through the axle.

In the years of 1831–1832, Michael Faraday discovered the operating principle of electromagnetic generators. The principle, later called Faraday's law, is that a potential difference is generated between the ends of an electrical conductor that has a varying magnetic flux. He also built the first electromagnetic generator, called the 'Faraday disk', a type of homopolar generator, using a copper disc rotating between the poles of a horseshoe magnet. It produced a small DC voltage.

This design was inefficient due to self-cancelling counterflows of current in regions not under the influence of the magnetic field. While current was induced directly underneath the magnet, the current would circulate backwards in regions outside the influence of the magnetic field. This counterflow limits the power output to the pickup wires and induces waste heating of the copper disc. Later homopolar generators would solve this problem by using an array of magnets arranged around the disc perimeter to maintain a steady field effect in one current-flow direction.

Another disadvantage was that the output voltage was very low, due to the single current path through the magnetic flux. Experimenters found that using multiple turns of wire in a coil could produce higher, more useful voltages. Since the output voltage is proportional to the number of turns, generators could be easily designed to produce any desired voltage by varying the number of turns. Wire windings became a basic feature of all subsequent generator designs.
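
The proportionality between output voltage and turn count follows directly from Faraday's law, emf = -N dΦ/dt. The short Python sketch below, with illustrative flux and frequency values, shows the peak emf doubling when the number of turns doubles.

    import math

    # For a coil of N turns in a sinusoidal flux Phi(t) = Phi0 * cos(w*t),
    # emf(t) = N * Phi0 * w * sin(w*t), so the peak emf is N * Phi0 * w.
    def peak_emf(turns, flux_amplitude_wb, freq_hz):
        return turns * flux_amplitude_wb * 2 * math.pi * freq_hz

    # Illustrative values: 0.01 Wb of flux amplitude at 50 Hz.
    print(peak_emf(10, 0.01, 50))  # ~31.4 V
    print(peak_emf(20, 0.01, 50))  # ~62.8 V: double the turns, double the emf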

Dynamo

Dynamos are no longer used for power generation due to the size and complexity of the commutator needed for high power applications. This large belt-driven high-current dynamo produced 310 amperes at 7 volts, or 2,170 watts, when spinning at 1400 RPM.

Dynamo Electric Machine [End View, Partly Section] (U.S. Patent 284,110)

The dynamo was the first electrical generator capable of delivering power for industry. The dynamo uses electromagnetic principles to convert mechanical rotation into a pulsing direct electric current through the use of a commutator. The first dynamo was built by Hippolyte Pixii in 1832.

Through a series of accidental discoveries, the dynamo became the source of many later inventions, including the DC electric motor, the AC alternator, the AC synchronous motor, and the rotary converter.

A dynamo machine consists of a stationary structure, which provides a constant magnetic field, and a set of rotating windings which turn within that field. On small machines the constant magnetic field may be provided by one or more permanent magnets; larger machines have the constant magnetic field provided by one or more electromagnets, which are usually called field coils.

Large power generation dynamos are now rarely seen due to the now nearly universal use of alternating current for power distribution and solid state electronic AC to DC power conversion. But before the principles of AC were discovered, very large direct-current dynamos were the only means of power generation and distribution. Now power generation dynamos are mostly a curiosity.

Other rotating electromagnetic generators

Without a commutator, a dynamo becomes an alternator, which is a synchronous singly-fed generator. When used to feed an electric power grid, an alternator must always operate at a constant speed that is precisely synchronized to the electrical frequency of the power grid. A DC generator can operate at any speed within mechanical limits but always outputs a direct current waveform.

Other types of generators, such as the asynchronous or induction singly-fed generator, the doubly-fed generator, or the brushless wound-rotor doubly-fed generator, do not incorporate permanent magnets or field windings (i.e., electromagnets) that establish a constant magnetic field, and as a result, are seeing success in variable speed constant frequency applications, such as wind turbines or other renewable energy technologies.

The full output performance of any generator can be optimized with electronic control, but only the doubly-fed generators or the brushless wound-rotor doubly-fed generator incorporate electronic control with power ratings that are substantially less than the power output of the generator under control, which by itself offers cost, reliability and efficiency benefits.

MHD generator

A magnetohydrodynamic generator directly extracts electric power from moving hot gases through a magnetic field, without the use of rotating electromagnetic machinery. MHD generators were originally developed because the output of a plasma MHD generator is a flame, well able to heat the boilers of a steam power plant. The first practical design was the AVCO Mk. 25, developed in 1965. The U.S. government funded substantial development, culminating in a 25 MW demonstration plant in 1987. In the Soviet Union from 1972 until the late 1980s, the MHD plant U 25 was in regular commercial operation on the Moscow power system with a rating of 25 MW, the largest MHD plant rating in the world at that time. MHD generators operated as a topping cycle are currently (2007) less efficient than combined-cycle gas turbines.

Terminology

Rotor from generator at Hoover Dam, United States

The two main parts of a generator or motor can be described in either mechanical or electrical terms:

Mechanical:

• Rotor: The rotating part of an electrical machine
• Stator: The stationary part of an electrical machine

Electrical:

• Armature: The power-producing component of an electrical machine. In a generator, alternator, or dynamo the armature windings generate the electric current. The armature can be on either the rotor or the stator.

• Field: The magnetic field component of an electrical machine. The magnetic field of the dynamo or alternator can be provided by either electromagnets or permanent magnets mounted on either the rotor or the stator.

Because the power transferred into the field circuit is much less than the power transferred in the armature circuit, AC generators nearly always have the field winding on the rotor and the armature winding on the stator. Only a small amount of field current must be transferred to the moving rotor, using slip rings. Direct current machines (dynamos) require a commutator on the rotating shaft to convert the alternating current produced by the armature to direct current, so the armature winding is on the rotor of the machine.

Excitation

A small early 1900s 75 kVA direct-driven power station AC alternator, with a separate belt-driven exciter generator.

An electric generator or electric motor that uses field coils rather than permanent magnets requires a current to be present in the field coils for the device to be able to work. If the field coils are not powered, the rotor in a generator can spin without producing any usable electrical energy, while the rotor of a motor may not spin at all.

Smaller generators are sometimes self-excited, which means the field coils are powered by the current produced by the generator itself. The field coils are connected in series or parallel with the armature winding. When the generator first starts to turn, the small amount of remanent magnetism present in the iron core provides a magnetic field to get it started, generating a small current in the armature. This flows through the field coils, creating a larger magnetic field which generates a larger armature current. This "bootstrap" process continues until the magnetic field in the core levels off due to saturation and the generator reaches a steady state power output.

Very large power station generators often utilize a separate smaller generator to excite the field coils of the larger. In the event of a severe widespread power outage where islanding of power stations has occurred, the stations may need to perform a black start to excite the fields of their largest generators, in order to restore customer power service.

DC Equivalent circuit

Equivalent circuit of generator and load:
• G = generator
• VG = generator open-circuit voltage
• RG = generator internal resistance
• VL = generator on-load voltage
• RL = load resistance

The equivalent circuit of a generator and load is shown in the diagram to the right. The generator's VG and RG parameters can be determined by measuring the winding resistance (corrected to operating temperature), and measuring the open-circuit and loaded voltage for a defined current load.
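
The measurement procedure just described amounts to solving the series circuit VL = VG - I·RG. A minimal numerical sketch, with hypothetical component values:

    # Series equivalent circuit of a DC generator and its load.
    def load_voltage(v_g, r_g, r_l):
        """Terminal (on-load) voltage VL for the circuit in the diagram."""
        current = v_g / (r_g + r_l)  # the same current flows through RG and RL
        return v_g - current * r_g   # internal drop subtracted from VG

    # Hypothetical generator: 12 V open-circuit, 0.5 ohm internal resistance,
    # feeding a 5.5 ohm load -> 2 A of current and 11 V at the terminals.
    print(load_voltage(12.0, 0.5, 5.5))  # -> 11.0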

Vehicle-mounted generators

Early motor vehicles until about the 1960s tended to use DC generators with electromechanical regulators. These have now been replaced by alternators with built-in rectifier circuits, which are less costly and lighter for equivalent output. Automotive alternators power the electrical systems on the vehicle and recharge the battery after starting. Rated output will typically be in the range 50-100 A at 12 V, depending on the designed electrical load within the vehicle. Some cars now have electrically-powered steering assistance and air conditioning, which places a high load on the electrical system. Large commercial vehicles are more likely to use 24 V to give sufficient power at the starter motor to turn over a large diesel engine. Vehicle alternators do not use permanent magnets and are typically only 50-60% efficient over a wide speed range. Motorcycle alternators often use permanent magnet stators made with rare earth magnets, since they can be made smaller and lighter than other types.

Some of the smallest generators commonly found power bicycle lights. These tend to be 0.5 ampere, permanent-magnet alternators supplying 3-6 W at 6 V or 12 V. Being powered by the rider, efficiency is at a premium, so these may incorporate rare-earth magnets and are designed and manufactured with great precision. Nevertheless, the maximum efficiency is only around 80% for the best of these generators—60% is more typical—due in part to the rolling friction at the tyre–generator interface from poor alignment, the small size of the generator, bearing losses and cheap design. The use of permanent magnets means that efficiency falls even further at high speeds because the magnetic field strength cannot be controlled in any way. Hub generators remedy many of these flaws since they are internal to the bicycle hub and do not require an interface between the generator and tyre. Until recently, these generators have been expensive and hard to find. Major bicycle component manufacturers like Shimano and SRAM have only just entered this market. However, significant gains can be expected in future as cycling becomes more mainstream transportation and LED technology allows brighter lighting at the reduced current these generators are capable of providing.

Sailing yachts may use a water or wind powered generator to trickle-charge the batteries. A small propeller, wind turbine or impeller is connected to a low-power alternator and rectifier to supply currents of up to 12 A at typical cruising speeds.

Engine-generator

An engine-generator is the combination of an electrical generator and an engine (prime mover) mounted together to form a single piece of self-contained equipment. The engines used are usually piston engines, but gas turbines can also be used. Many different versions are available, ranging from very small portable petrol powered sets to large turbine installations.

Human powered electrical generators

A generator can also be driven by human muscle power (for instance, in field radio station equipment).

Human powered direct current generators are commercially available, and have been the project of some DIY enthusiasts. Typically operated by means of pedal power, a converted bicycle trainer, or a foot pump, such generators can be practically used to charge batteries, and in some cases are designed with an integral inverter. The average adult could generate about 125-200 watts on a pedal powered generator. Portable radio receivers with a crank are made to reduce battery purchase requirements.

Linear electric generator

In the simplest form of linear electric generator, a sliding magnet moves back and forth through a solenoid, a spool of copper wire. An alternating current is induced in the loops of wire by Faraday's law of induction each time the magnet slides through. This type of generator is used in the Faraday flashlight. Larger linear electricity generators are used in wave power schemes.

Tachogenerator

Tachogenerators are frequently used to power tachometers to measure the speeds of electric motors, engines, and the equipment they power. Generators generate voltage roughly proportional to shaft speed. With precise construction and design, generators can be built to produce very precise voltages for certain ranges of shaft speeds.

Chapter- 8

Fuel Cell

Demonstration model of a direct-methanol fuel cell. The actual fuel cell stack is the layered cube shape in the center of the image

A fuel cell is an electrochemical cell that converts a source fuel into an electric current. It generates electricity inside a cell through reactions between a fuel and an oxidant, triggered in the presence of an electrolyte. The reactants flow into the cell, and the reaction products flow out of it, while the electrolyte remains within it. Fuel cells can operate continuously as long as the necessary reactant and oxidant flows are maintained.

Fuel cells are different from conventional electrochemical cell batteries in that they consume reactant from an external source, which must be replenished – a thermodynamically open system. By contrast, batteries store electrical energy chemically and hence represent a thermodynamically closed system.

Many combinations of fuels and oxidants are possible. A hydrogen fuel cell uses hydrogen as its fuel and oxygen (usually from air) as its oxidant. Other fuels include hydrocarbons and alcohols. Other oxidants include chlorine and chlorine dioxide.

Design

Fuel cells come in many varieties; however, they all work in the same general manner. They are made up of three segments which are sandwiched together: the anode, the electrolyte, and the cathode. Two chemical reactions occur at the interfaces of the three different segments. The net result of the two reactions is that fuel is consumed, water or carbon dioxide is created, and an electric current is created, which can be used to power electrical devices, normally referred to as the load.

At the anode a catalyst oxidizes the fuel, usually hydrogen, turning the fuel into a positively charged ion and a negatively charged electron. The electrolyte is a substance specifically designed so ions can pass through it, but the electrons cannot. The freed electrons travel through a wire creating the electric current. The ions travel through the electrolyte to the cathode. Once reaching the cathode, the ions are reunited with the electrons and the two react with a third chemical, usually oxygen, to create water or carbon dioxide.

A block diagram of a fuel cell

The most important design features in a fuel cell are:

• The electrolyte substance. The electrolyte substance usually defines the type of fuel cell.
• The fuel that is used. The most common fuel is hydrogen.
• The anode catalyst, which breaks down the fuel into electrons and ions. The anode catalyst is usually made up of very fine platinum powder.
• The cathode catalyst, which turns the ions into waste chemicals like water or carbon dioxide. The cathode catalyst is often made up of nickel.

A typical fuel cell produces a voltage from 0.6 V to 0.7 V at full rated load. Voltage decreases as current increases, due to several factors:

• Activation loss
• Ohmic loss (voltage drop due to resistance of the cell components and interconnects)
• Mass transport loss (depletion of reactants at catalyst sites under high loads, causing rapid loss of voltage).

To deliver the desired amount of energy, the fuel cells can be combined in series and parallel circuits, where series yields higher voltage, and parallel allows a higher current to be supplied. Such a design is called a fuel cell stack. The cell surface area can also be increased, to allow a stronger current from each cell.
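
A small sketch of this series/parallel arithmetic, using the 0.7 V per-cell figure from above; the per-string current is an illustrative assumption, not a figure from the text.

    # Series connections add voltage; parallel strings add current.
    def stack_output(cells_in_series, parallel_strings,
                     cell_voltage=0.7, string_current=50.0):
        """Return (voltage, current, power) for a fuel cell stack."""
        voltage = cells_in_series * cell_voltage
        current = parallel_strings * string_current
        return voltage, current, voltage * current

    # 100 cells in series, two strings in parallel: 70 V, 100 A, 7 kW.
    print(stack_output(100, 2))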

Proton exchange fuel cells

In the archetypal hydrogen–oxygen proton exchange membrane fuel cell (PEMFC) design, a proton-conducting polymer membrane, (the electrolyte), separates the anode and cathode sides. This was called a "solid polymer electrolyte fuel cell" (SPEFC) in the early 1970s, before the proton exchange mechanism was well-understood. (Notice that "polymer electrolyte membrane" and "proton exchange mechanism" result in the same acronym.)

On the anode side, hydrogen diffuses to the anode catalyst, where it dissociates into protons and electrons. The protons are conducted through the membrane to the cathode, but the electrons are forced to travel in an external circuit (supplying power) because the membrane is electrically insulating. On the cathode catalyst, oxygen molecules react with the electrons (which have traveled through the external circuit) and protons to form water, which in this example is the only waste product, either liquid or vapor.
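
For the pure hydrogen case, this can be summarized by the standard half-reactions (written here in the same style as the SOFC equations later in this chapter):

Anode Reaction: H2 → 2H+ + 2e-

Cathode Reaction: 1/2 O2 + 2H+ + 2e- → H2O

Overall Reaction: H2 + 1/2 O2 → H2O + electrical energy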

In addition to this pure hydrogen type, there are hydrocarbon fuels for fuel cells, including diesel, methanol (see: direct-methanol fuel cells and indirect methanol fuel cells) and chemical hydrides. The waste products with these types of fuel are carbon dioxide and water.

Construction of a high temperature PEMFC: bipolar plate as electrode with in-milled gas channel structure, fabricated from conductive plastics (enhanced with carbon nanotubes for more conductivity); porous carbon papers; reactive layer, usually applied to the polymer membrane; polymer membrane.

Condensation of water produced by a PEMFC on the air channel wall. The gold wire around the cell ensures the collection of electric current.

The materials used in fuel cells differ by type. In a typical membrane electrode assembly (MEA), the electrode–bipolar plates are usually made of metal, nickel or carbon nanotubes, and are coated with a catalyst (like platinum, nano iron powders or palladium) for higher efficiency. Carbon paper separates them from the electrolyte. The electrolyte could be ceramic or a membrane.

Proton exchange membrane fuel cell design issues

• Costs. In 2002, typical fuel cell systems cost US$1000 per kilowatt of electric power output. In 2009, the Department of Energy reported that 80-kW automotive fuel cell system costs in volume production (projected to 500,000 units per year) were $61 per kilowatt. The goal is $35 per kilowatt. In 2008, UTC Power offered 400 kW stationary fuel cells at an installed cost of $1,000,000 per 400 kW unit. The goal is to reduce the cost in order to compete with current market technologies, including gasoline internal combustion engines. Many companies are working on techniques to reduce cost in a variety of ways, including reducing the amount of platinum needed in each individual cell. Ballard Power Systems has experimented with a catalyst enhanced with carbon silk, which allows a 30% reduction (1 mg/cm² to 0.7 mg/cm²) in platinum usage without reduction in performance. Monash University, Melbourne uses PEDOT as a cathode.

• The production costs of the PEM (proton exchange membrane). The Nafion membrane currently costs $566/m². In 2005 Ballard Power Systems announced that its fuel cells will use Solupor, a porous polyethylene film patented by DSM.

• Water and air management (in PEMFCs). In this type of fuel cell, the membrane must be hydrated, requiring water to be evaporated at precisely the same rate that it is produced. If water is evaporated too quickly, the membrane dries, resistance across it increases, and eventually it will crack, creating a gas "short circuit" where hydrogen and oxygen combine directly, generating heat that will damage the fuel cell. If the water is evaporated too slowly, the electrodes will flood, preventing the reactants from reaching the catalyst and stopping the reaction. Methods to manage water in cells are being developed like electroosmotic pumps focusing on flow control. Just as in a combustion engine, a steady ratio between the reactant and oxygen is necessary to keep the fuel cell operating efficiently.

• Temperature management. The same temperature must be maintained throughout the cell in order to prevent destruction of the cell through thermal loading. This is particularly challenging as the 2H2 + O2 -> 2H2O reaction is highly exothermic, so a large quantity of heat is generated within the fuel cell.

• Durability, service life, and special requirements for some type of cells. Stationary fuel cell applications typically require more than 40,000 hours of reliable operation at a temperature of -35 °C to 40 °C (-31 °F to 104 °F), while automotive fuel cells require a 5,000 hour lifespan (the equivalent of 150,000 miles) under extreme temperatures. Current service life is 7,300 hours under cycling conditions. Automotive engines must also be able to start reliably at -30 °C (-22 °F) and have a high power to volume ratio (typically 2.5 kW per liter).

• Limited carbon monoxide tolerance of the anode catalyst.

High temperature fuel cells

SOFC

A solid oxide fuel cell (SOFC) is extremely advantageous “because of a possibility of using a wide variety of fuel”. Unlike most other fuel cells which only use hydrogen, SOFCs can run on hydrogen, butane, methanol, and other petroleum products. The different fuels each have their own chemistry.

For SOFC methanol fuel cells, on the anode side, a catalyst breaks methanol and water down to form carbon dioxide, hydrogen ions, and free electrons. The hydrogen ions meet oxide ions that have been created on the cathode side and passed across the electrolyte to the anode side, where they react to create water. A load connected externally between the anode and cathode completes the electrical circuit. Below are the chemical equations for the reaction:

Anode Reaction: CH3OH + H2O + 3O= → CO2 + 3H2O + 6e-

Cathode Reaction: 3/2 O2 + 6e- → 3O=

Overall Reaction: CH3OH + 3/2 O2 → CO2 + 2H2O + electrical energy

At the anode, SOFCs can use nickel or other catalysts to break apart the methanol and create hydrogen ions and carbon monoxide. A solid called yttria-stabilized zirconia (YSZ) is used as the electrolyte. Like all fuel cell electrolytes, YSZ is conductive to certain ions, in this case the oxide ion (O=), allowing passage from the cathode to the anode, but is non-conductive to electrons. YSZ is a durable solid and is advantageous in large industrial systems. Although YSZ is a good ion conductor, it only works at very high temperatures; the standard operating temperature is about 950 °C. Running the fuel cell at such a high temperature easily breaks down the fuel and oxygen into ions. A major disadvantage of the SOFC, as a result of the high heat, is that it "places considerable constraints on the materials which can be used for interconnections". Another disadvantage of running the cell at such a high temperature is that other unwanted reactions may occur inside the fuel cell. It is common for carbon dust (graphite) to build up on the anode, preventing the fuel from reaching the catalyst. Much research is currently being done to find alternatives to YSZ that will carry ions at a lower temperature.

MCFC

Molten carbonate fuel cells (MCFCs) operate in a similar manner, except the electrolyte consists of liquid (molten) carbonate, which is a negative ion and an oxidizing agent. Because the electrolyte loses carbonate in the oxidation reaction, the carbonate must be replenished through some means. This is often performed by recirculating the carbon dioxide from the oxidation products into the cathode where it reacts with the incoming air and reforms carbonate.

Unlike proton exchange fuel cells, the catalysts in SOFCs and MCFCs are not poisoned by carbon monoxide, due to much higher operating temperatures. Because the oxidation reaction occurs in the anode, direct utilization of the carbon monoxide is possible. Also, steam produced by the oxidation reaction can shift carbon monoxide and steam reform hydrocarbon fuels inside the anode. These reactions can use the same catalysts used for the electrochemical reaction, eliminating the need for an external fuel reformer.

MCFCs can be used to reduce CO2 emissions from coal-fired power plants as well as gas turbine power plants.

History

Sketch of William Grove's 1839 fuel cell

The principle of the fuel cell was discovered by German scientist Christian Friedrich Schönbein in 1838 and published in one of the scientific magazines of the time. Based on this work, the first fuel cell was demonstrated by Welsh scientist and barrister Sir William Robert Grove in the February 1839 edition of the Philosophical Magazine and Journal of Science and later sketched, in 1842, in the same journal. The fuel cell he made used similar materials to today's phosphoric-acid fuel cell.

In 1955, W. Thomas Grubb, a chemist working for the General Electric Company (GE), further modified the original fuel cell design by using a sulphonated polystyrene ion-exchange membrane as the electrolyte. Three years later another GE chemist, Leonard Niedrach, devised a way of depositing platinum onto the membrane, which served as catalyst for the necessary hydrogen oxidation and oxygen reduction reactions. This became known as the 'Grubb-Niedrach fuel cell'. GE went on to develop this technology with NASA and McDonnell Aircraft, leading to its use during Project Gemini. This was the first commercial use of a fuel cell. It wasn't until 1959 that British engineer Francis Thomas Bacon successfully developed a 5 kW stationary fuel cell. In 1959, a team led by Harry Ihrig built a 15 kW fuel cell tractor for Allis-Chalmers, which was demonstrated across the US at state fairs. This system used potassium hydroxide as the electrolyte and compressed hydrogen and oxygen as the reactants. Later in 1959, Bacon and his colleagues demonstrated a practical five-kilowatt unit capable of powering a welding machine. In the 1960s, Pratt and Whitney licensed Bacon's U.S. patents for use in the U.S. space program to supply electricity and drinking water (hydrogen and oxygen being readily available from the spacecraft tanks).

United Technologies Corporation's UTC Power subsidiary was the first company to manufacture and commercialize a large, stationary fuel cell system for use as a co-generation power plant in hospitals, universities and large office buildings. UTC Power continues to market this fuel cell as the PureCell 200, a 200 kW system (although soon to be replaced by a 400 kW version, expected for sale in late 2009). UTC Power continues to be the sole supplier of fuel cells to NASA for use in space vehicles, having supplied the Apollo missions and, currently, the Space Shuttle program, and is developing fuel cells for automobiles, buses, and cell phone towers; the company has demonstrated the first fuel cell capable of starting under freezing conditions with its proton exchange membrane technology.

Efficiency

Fuel cell efficiency

The efficiency of a fuel cell is dependent on the amount of power drawn from it. Drawing more power means drawing more current, which increases the losses in the fuel cell. As a general rule, the more power (current) drawn, the lower the efficiency. Most losses manifest themselves as a voltage drop in the cell, so the efficiency of a cell is almost proportional to its voltage. For this reason, it is common to show graphs of voltage versus current (so-called polarization curves) for fuel cells. A typical cell running at 0.7 V has an efficiency of about 50%, meaning that 50% of the energy content of the hydrogen is converted into electrical energy; the remaining 50% will be converted into heat. (Depending on the fuel cell system design, some fuel might leave the system unreacted, constituting an additional loss.)

For a hydrogen cell operating at standard conditions with no reactant leaks, the efficiency is equal to the cell voltage divided by 1.48 V, based on the enthalpy, or heating value, of the reaction. For the same cell, the second law efficiency is equal to the cell voltage divided by 1.23 V. (This voltage varies with the fuel used, and the quality and temperature of the cell.) The difference between these numbers represents the difference between the reaction's enthalpy and Gibbs free energy. This difference always appears as heat, along with any losses in electrical conversion efficiency.
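
These two reference voltages make the efficiency calculation a one-line ratio; below is a small sketch for the 0.7 V operating point mentioned earlier.

    # Voltage-based efficiency of a hydrogen fuel cell at standard conditions.
    E_THERMONEUTRAL = 1.48  # V, from the reaction enthalpy (heating value)
    E_REVERSIBLE = 1.23     # V, from the Gibbs free energy

    def hhv_efficiency(cell_voltage):
        return cell_voltage / E_THERMONEUTRAL

    def second_law_efficiency(cell_voltage):
        return cell_voltage / E_REVERSIBLE

    # A cell at 0.7 V converts ~47% of the heating value to electricity and
    # reaches ~57% of the theoretical (Gibbs) limit.
    print(round(hhv_efficiency(0.7), 2), round(second_law_efficiency(0.7), 2))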

Fuel cells do not operate on a thermal cycle. As such, they are not constrained, as combustion engines are, in the same way by thermodynamic limits, such as Carnot cycle efficiency. At times this is misrepresented by saying that fuel cells are exempt from the laws of thermodynamics, because most people think of thermodynamics in terms of combustion processes (enthalpy of formation). The laws of thermodynamics also hold for chemical processes (Gibbs free energy) like fuel cells, but the maximum theoretical efficiency is higher (83% efficient at 298K in the case of hydrogen/oxygen reaction) than the Otto cycle thermal efficiency (60% for compression ratio of 10 and specific heat ratio of 1.4). Comparing limits imposed by thermodynamics is not a good predictor of practically achievable efficiencies. Also, if propulsion is the goal, electrical output of the fuel cell has to still be converted into mechanical power with another efficiency drop. In reference to the exemption claim, the correct claim is that the "limitations imposed by the second law of thermodynamics on the operation of fuel cells are much less severe than the limitations imposed on conventional energy conversion systems". Consequently, they can have very high efficiencies in converting chemical energy to electrical energy, especially when they are operated at low power density, and using pure hydrogen and oxygen as reactants.

It should also be noted that a fuel cell (especially a high temperature one) can be used as a heat source for a conventional heat engine (such as a gas turbine system). In this configuration, ultra-high efficiency (above 70%) is predicted.

In practice

For a fuel cell operating on air, losses due to the air supply system must also be taken into account. This refers to pressurizing and dehumidifying the air. This reduces the efficiency significantly and brings it near to that of a compression ignition engine. Furthermore, fuel cell efficiency decreases as load increases.

The tank-to-wheel efficiency of a fuel cell vehicle is greater than 45% at low loads and shows average values of about 36% when a driving cycle like the NEDC (New European Driving Cycle) is used as test procedure. The comparable NEDC value for a Diesel vehicle is 22%. In 2008 Honda released a fuel cell electric vehicle (the Honda FCX Clarity) with fuel stack claiming a 60% tank-to-wheel efficiency.

It is also important to take losses due to fuel production, transportation, and storage into account. Fuel cell vehicles running on compressed hydrogen may have a power-plant-to-wheel efficiency of 22% if the hydrogen is stored as high-pressure gas, and 17% if it is stored as liquid hydrogen. In addition to the production losses, over 70% of US' electricity used for hydrogen production comes from thermal power, which only has an efficiency of 33% to 48%, resulting in a net increase in carbon dioxide production by using hydrogen in vehicles.

Fuel cells cannot store energy like a battery, but in some applications, such as stand-alone power plants based on discontinuous sources such as solar or wind power, they are combined with electrolyzers and storage systems to form an energy storage system. The overall efficiency (electricity to hydrogen and back to electricity) of such plants (known as round-trip efficiency) is between 30 and 50%, depending on conditions. While a much cheaper lead-acid battery might return about 90%, the electrolyzer/fuel cell system can store indefinite quantities of hydrogen, and is therefore better suited for long-term storage.
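
The round-trip figure is simply the product of the component efficiencies; the values below are illustrative assumptions, chosen to land inside the 30-50% range quoted above.

    # Electricity -> hydrogen (electrolyzer) -> electricity (fuel cell).
    def round_trip(electrolyzer_eff, fuel_cell_eff):
        return electrolyzer_eff * fuel_cell_eff

    # Assumed 70% electrolysis and 50% fuel cell efficiency -> 35% round trip.
    print(round_trip(0.70, 0.50))  # -> 0.35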

Solid-oxide fuel cells produce exothermic heat from the recombination of the oxygen and hydrogen. The ceramic can run as hot as 800 degrees Celsius. This heat can be captured and used to heat water in a micro combined heat and power (m-CHP) application. When the heat is captured, total efficiency can reach 80-90% at the unit, but does not consider production and distribution losses. CHP units are being developed today for the European home market.

Fuel cell applications

Type 212 submarine with fuel cell propulsion of the German Navy in dry dock

Power

Fuel cells are very useful as power sources in remote locations, such as spacecraft, remote weather stations, large parks, rural locations, and in certain military applications. A fuel cell system running on hydrogen can be compact and lightweight, and have no major moving parts. Because fuel cells have no moving parts and do not involve combustion, in ideal conditions they can achieve up to 99.9999% reliability. This equates to around one minute of down time in a two year period.
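
The downtime claim is easy to verify: 99.9999% availability over two years leaves about a minute of outage.

    # 99.9999% availability over a two-year period.
    minutes_in_two_years = 2 * 365 * 24 * 60   # = 1,051,200 minutes
    downtime_minutes = minutes_in_two_years * (1 - 0.999999)
    print(round(downtime_minutes, 2))  # ~1.05 minutes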

Since electrolyzer systems do not store fuel in themselves, but rather rely on external storage units, they can be successfully applied in large-scale energy storage, rural areas being one example. In this application, batteries would have to be largely oversized to meet the storage demand, but fuel cells only need a larger storage unit (typically cheaper than an electrochemical device).

One such pilot program is operating on Stuart Island in Washington State. There the Stuart Island Energy Initiative has built a complete, closed-loop system: Solar panels power an electrolyzer which makes hydrogen. The hydrogen is stored in a 500 gallon tank at 200 PSI, and runs a ReliOn fuel cell to provide full electric back-up to the off-the-grid residence.

Cogeneration

Micro combined heat and power (MicroCHP) systems such as home fuel cells and cogeneration for office buildings and factories are in the mass production phase. The system generates constant electric power (selling excess power back to the grid when it is not consumed), and at the same time produces hot air and water from the waste heat. MicroCHP is usually less than 5 kWe for a home fuel cell or small business. A lower fuel-to-electricity conversion efficiency is tolerated (typically 15-20%), because most of the energy not converted into electricity is utilized as heat. Some heat is lost with the exhaust gas just as in a normal furnace, so the combined heat and power efficiency is still lower than 100%, typically around 80%. In terms of exergy, however, the process is inefficient, and one could do better by maximizing the electricity generated and then using the electricity to drive a heat pump. Phosphoric-acid fuel cells (PAFC) comprise the largest segment of existing CHP products worldwide and can provide combined efficiencies close to 90% (35-50% electric + remainder as thermal). Molten-carbonate fuel cells have also been installed in these applications, and solid-oxide fuel cell prototypes exist.

Hydrogen transportation and refueling

The world's first certified Fuel Cell Boat (HYDRA), in Leipzig/Germany

Toyota FCHV PEM FC fuel cell vehicle.

Mercedes-Benz (Daimler AG) Citaro fuel cell bus on Aldwych, London.

Element One fuel cell vehicle.

Hydrogen fueling station.

Toyota's FCHV-BUS at the Expo 2005.

Land vehicles

In 2003, President George Bush proposed the Hydrogen Fuel Initiative (HFI), which was later implemented by legislation through the 2005 Energy Policy Act and the 2006 Advanced Energy Initiative. These aimed at further developing hydrogen fuel cells and their infrastructure technologies, with the ultimate goal of producing commercial fuel cell vehicles by 2020. By 2008, the U.S. had contributed 1 billion dollars to this project.

In May 2009, however, the Obama Administration announced that it will "cut off funds" for the development of fuel cell hydrogen vehicles, since other vehicle technologies will lead to quicker reduction in emissions in a shorter time. The US Secretary of Energy explained that hydrogen vehicles "will not be practical over the next 10 to 20 years", and also mentioned the challenges involved in the development of the required infrastructure to distribute hydrogen fuel. Nevertheless, the U.S. government will continue to fund research related to stationary fuel cells. The National Hydrogen Association and the U.S. Fuel Cell Council criticized this decision arguing that "...the cuts proposed in the DOE hydrogen and fuel cell program threaten to disrupt commercialization of a family of technologies that are showing exceptional promise and beginning to gain market traction."

There are numerous prototype or production cars and buses based on fuel cell technology being researched or manufactured by motor car manufacturers.

The GM 1966 Electrovan was the automotive industry's first attempt at an automobile powered by a hydrogen fuel cell. The Electrovan, which weighed more than twice as much as a normal van, could travel up to 70 mph for 30 seconds.

The 2001 Chrysler Natrium used its own on-board hydrogen processor. It produced hydrogen for the fuel cell by reacting sodium borohydride fuel with borax, both of which Chrysler claimed were naturally occurring in great quantity in the United States. The hydrogen produced electric power in the fuel cell for near-silent operation and a range of 300 miles without impinging on passenger space. Chrysler also developed vehicles which separated hydrogen from gasoline in the vehicle, the purpose being to reduce emissions without relying on a nonexistent hydrogen infrastructure and to avoid large storage tanks.

In 2005 the British firm Intelligent Energy produced the first working hydrogen-powered motorcycle, the ENV (Emission Neutral Vehicle). The motorcycle holds enough fuel to run for four hours and to travel 100 miles in an urban area, at a top speed of 50 miles per hour. In 2004 Honda developed a fuel-cell motorcycle which utilized the Honda FC Stack.

In 2007, the Revolve Eco-Rally (launched by HRH the Prince of Wales) demonstrated several fuel cell vehicles on British roads for the first time, driven by celebrities and dignitaries from Brighton to London's Trafalgar Square. Fuel cell powered race vehicles, designed and built by university students from around the world, competed in the world's first hydrogen race series, the 2008 Formula Zero Championship, which began on August 22, 2008 in Rotterdam, the Netherlands, with more races planned for 2009 and 2010. After this first race, Greenchoice Forze from the Delft University of Technology (the Netherlands) became the leader in the competition. Other competing teams are Element One (Detroit), HerUCLAs (LA), EUPLAtecH2 (Spain), Imperial Racing Green (London) and Zero Emission Racing Team (Leuven).

In 2008, Honda released a hydrogen fuel cell vehicle, the FCX Clarity. There are also examples of motorbikes and bicycles powered by hydrogen fuel cells.

A few companies are conducting hydrogen fuel cell research and practical fuel cell bus trials. Daimler AG completed a successful three-year trial, ending in January 2007, of thirty-six experimental buses powered by Ballard Power Systems fuel cells in eleven cities. There are also fuel cell powered buses currently active or in production, such as a fleet of Thor buses with UTC Power fuel cells in California, operated by SunLine Transit Agency. The Fuel Cell Bus Club is a global cooperative effort in trial fuel cell buses.

The first Brazilian hydrogen fuel cell bus prototype will begin operation in São Paulo during the first half of 2009. The hydrogen bus was manufactured in Caxias do Sul and the hydrogen fuel will be produced in São Bernardo do Campo from water through electrolysis. The program, called "Ônibus Brasileiro a Hidrogênio" (Brazilian Hydrogen Autobus), includes three additional buses.

Airplanes

Boeing researchers and industry partners throughout Europe conducted experimental flight tests in February 2008 of a manned airplane powered only by a fuel cell and lightweight batteries. The Fuel Cell Demonstrator Airplane, as it was called, used a proton exchange membrane (PEM) fuel cell/lithium-ion battery hybrid system to power an electric motor coupled to a conventional propeller. In 2003, the world's first propeller-driven airplane to be powered entirely by a fuel cell was flown. That fuel cell used a unique FlatStack™ stack design which allowed it to be integrated with the aerodynamic surfaces of the plane.

Boats

The world's first fuel cell boat, HYDRA, used an AFC system with 6.5 kW net output.

Submarines

The Type 212 submarines of the German and Italian navies use fuel cells to remain submerged for weeks without the need to surface.

Fueling stations

The first public hydrogen refueling station was opened in Reykjavík, Iceland in April 2003. This station serves three buses built by DaimlerChrysler that are in service in the public transport network of Reykjavík. The station produces the hydrogen it needs by itself, with an electrolyzing unit (produced by Norsk Hydro), and does not need refilling: all that enters is electricity and water. Royal Dutch Shell is also a partner in the project. The station has no roof, in order to allow any leaked hydrogen to escape to the atmosphere.

The California Hydrogen Highway is an initiative by the Governor of California to implement a series of hydrogen refueling stations across the state. These stations are used to refuel hydrogen vehicles such as fuel cell vehicles and hydrogen combustion vehicles. As of July 2007 California had 179 fuel cell vehicles, twenty-five stations were in operation, and ten more had been planned for assembly in California. However, three hydrogen fueling stations have already been decommissioned.

South Carolina also has a hydrogen freeway in the works. There are currently two hydrogen fueling stations, in Aiken and Columbia, SC. Additional stations are expected in places around South Carolina such as Charleston, Myrtle Beach, Greenville, and Florence. According to the South Carolina Hydrogen & Fuel Cell Alliance, the Columbia station has a current capacity of 120 kg a day, with future plans to develop on-site hydrogen production from electrolysis and reformation; the Aiken station has a current capacity of 80 kg. There is extensive funding for hydrogen fuel cell research and infrastructure in South Carolina: the University of South Carolina, a founding member of the South Carolina Hydrogen & Fuel Cell Alliance, received 12.5 million dollars from the United States Department of Energy for its Future Fuels Program.

Japan also has a hydrogen highway, as part of the Japan hydrogen fuel cell project. Twelve hydrogen fueling stations have been built in 11 cities in Japan. Canada, Sweden and Norway also have hydrogen highways implemented.

Other applications

• Providing power for base stations or cell sites
• Off-grid power supply
• Distributed generation
• Forklifts
• Emergency power systems, which may include lighting, generators and other apparatus, to provide backup resources in a crisis or when regular systems fail. They find uses in a wide variety of settings, from residential homes to hospitals, scientific laboratories, data centers, telecommunication equipment and modern naval ships.
• Uninterruptible power supplies (UPS), which provide emergency power and, depending on the topology, line regulation as well, by supplying power from a separate source when utility power is not available. Unlike a standby generator, a UPS can provide instant protection from a momentary power interruption.
• Base load power plants
• Electric and hybrid vehicles
• Notebook computers, for applications where AC charging may not be available for weeks at a time
• Portable charging docks for small electronics (e.g. a belt clip that charges a cell phone or PDA)
• Smartphones with high power consumption due to large displays and additional features like GPS, which might be equipped with micro fuel cells
• Small heating appliances

Market structure

Not all geographic markets are ready for SOFC-powered m-CHP appliances. Currently, the regions that lead in distributed generation and in deployment of fuel cell m-CHP units are the EU and Japan.

Fuel cell economics

Use of hydrogen to fuel vehicles would be a critical feature of a hydrogen economy. Unlike an internal combustion engine, a fuel cell and electric motor combination is not directly limited by the Carnot efficiency.


Low-temperature fuel cell stacks, such as the proton exchange membrane fuel cell (PEMFC), direct methanol fuel cell (DMFC) and phosphoric acid fuel cell (PAFC), use a platinum catalyst. Impurities cause catalyst poisoning (reducing activity and efficiency) in these low-temperature fuel cells, so high hydrogen purity or higher catalyst densities are required. Although there are sufficient platinum resources for future demand, most predictions of platinum running out or platinum prices soaring do not take into account the effects of reduced catalyst loading and of recycling. Recent research at Brookhaven National Laboratory could lead to the replacement of platinum by a gold-palladium coating, which may be less susceptible to poisoning and thereby improve fuel cell lifetime considerably. Another method would use iron and sulphur instead of platinum, via an intermediate conversion by bacteria. This would lower the cost of a fuel cell substantially: the platinum in a regular fuel cell costs around $1,500, while the same amount of iron costs only around $1.50. The concept is being developed by a coalition of the John Innes Centre and the University of Milan-Bicocca.

The current target for transport PEM fuel cells is 0.2 g of platinum per kW, a factor-of-five decrease from current loadings, and recent comments from major original equipment manufacturers (OEMs) indicate that this is achievable. Recycling of fuel cell components, including platinum, will conserve supplies. High-temperature fuel cells, including molten carbonate fuel cells (MCFCs) and solid oxide fuel cells (SOFCs), do not use platinum as a catalyst, but instead use cheaper materials such as nickel and nickel oxide. They also do not experience catalyst poisoning by carbon monoxide, and so do not require high-purity hydrogen to operate. They can directly use fuels with an existing and extensive infrastructure, such as natural gas, without having to first reform it externally to hydrogen and CO followed by CO removal.

Research and development

• August 2005: Georgia Institute of Technology researchers use triazole to raise the operating temperature of PEM fuel cells from below 100 °C to over 125 °C, claiming this will require less carbon-monoxide purification of the hydrogen fuel.
• 2008: Monash University, Melbourne, uses PEDOT as a cathode.
• 2009: Researchers at the University of Dayton, in Ohio, show that arrays of vertically grown carbon nanotubes can be used as the catalyst in fuel cells.
• 2009: Y-Carbon begins developing a carbide-derived-carbon-based ultracapacitor with high energy density, which may lead to improvements in fuel cell technology.
• 2009: A nickel bisdiphosphine-based catalyst for fuel cells is demonstrated.


Chapter- 9

Wave Power

Large storm waves pose a challenge to wave power development

Wave power is the transport of energy by ocean surface waves, and the capture of that energy to do useful work: for example, electricity generation, water desalination, or the pumping of water into reservoirs.

Wave power is distinct from the diurnal flux of tidal power and the steady gyre of ocean currents. Wave power generation is not currently a widely employed commercial technology although there have been attempts at using it since at least 1890. In 2008, the first commercial wave farm was opened in Portugal, at the Aguçadoura Wave Park.


Physical concepts

When an object bobs up and down on a ripple in a pond, it follows an elliptical trajectory.


Motion of a particle in an ocean wave. A = At deep water. The orbital motion of fluid particles decreases rapidly with increasing depth below the surface. B = At shallow water (ocean floor is now at B). The elliptical movement of a fluid particle flattens with decreasing depth. 1 = Propagation direction. 2 = Wave crest. 3 = Wave trough.

Waves are generated by wind passing over the surface of the sea. As long as the waves propagate slower than the wind speed just above the waves, there is an energy transfer from the wind to the waves. Both air pressure differences between the upwind and the lee side of a wave crest, and friction on the water surface by the wind (which puts the water under shear stress), cause the waves to grow.

Wave height is determined by wind speed, the duration of time the wind has been blowing, fetch (the distance over which the wind excites the waves) and by the depth and topography of the seafloor (which can focus or disperse the energy of the waves). A given wind speed has a matching practical limit over which time or distance will not produce larger waves. When this limit has been reached the sea is said to be "fully developed".

In general, larger waves are more powerful but wave power is also determined by wave speed, wavelength, and water density.

Oscillatory motion is highest at the surface and diminishes exponentially with depth. However, for standing waves (clapotis) near a reflecting coast, wave energy is also present as pressure oscillations at great depth, producing microseisms. These pressure fluctuations at greater depth are too small to be interesting from the point of view of wave power.

The waves propagate on the ocean surface, and the wave energy is also transported horizontally with the group velocity. The mean transport rate of the wave energy through a vertical plane of unit width, parallel to a wave crest, is called the wave energy flux (or wave power, which must not be confused with the actual power generated by a wave power device).

Wave power formula

In deep water, where the water depth is larger than half the wavelength, the wave energy flux is

P = ρ g² Hm0² T / (64π) ≈ 0.5 Hm0² T kW/m (Hm0 in metres, T in seconds)

with P the wave energy flux per unit of wave-crest length, Hm0 the significant wave height, T the wave period, ρ the water density and g the acceleration due to gravity. The formula states that wave power is proportional to the wave period and to the square of the wave height. When the significant wave height is given in meters and the wave period in seconds, the result is the wave power in kilowatts (kW) per meter of wavefront length.

Example: Consider moderate ocean swells, in deep water, a few kilometers off a coastline, with a wave height of 3 meters and a wave period of 8 seconds. Using the formula to solve for power, we get

P ≈ 0.5 · 3² · 8 = 36 kW/m

meaning there are 36 kilowatts of power potential per meter of coastline.

In major storms, the largest waves offshore are about 15 meters high and have a period of about 15 seconds. According to the above formula, such waves carry about 1.7 MW of power across each meter of wavefront.
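The formula is easy to check numerically. A minimal Python sketch (using standard sea-water constants, not figures from this text) reproduces both estimates above:

import math

def wave_power_kw_per_m(height_m, period_s):
    # Deep-water wave energy flux, P = rho * g^2 * H^2 * T / (64 * pi), in kW/m.
    rho = 1025.0  # sea-water density, kg/m^3
    g = 9.81      # gravitational acceleration, m/s^2
    return rho * g**2 * height_m**2 * period_s / (64 * math.pi) / 1000.0

print(wave_power_kw_per_m(3, 8))    # ~35 kW/m (36 with the rounded 0.5 coefficient)
print(wave_power_kw_per_m(15, 15))  # ~1,660 kW/m, i.e. about 1.7 MW per meter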


An effective wave power device captures as much as possible of the wave energy flux. As a result the waves will be of lower height in the region behind the wave power device.

Wave energy and wave energy flux

In a sea state, the average energy density per unit area of gravity waves on the water surface is proportional to the wave height squared, according to linear wave theory:

E = (1/16) ρ g Hm0²

where E is the mean wave energy density per unit horizontal area (J/m²), the sum of kinetic and potential energy density per unit horizontal area. The potential energy density is equal to the kinetic energy density, both contributing half to the wave energy density E, as can be expected from the equipartition theorem. In ocean waves, surface tension effects are negligible for wavelengths above a few decimetres.

As the waves propagate, their energy is transported. The energy transport velocity is the group velocity. As a result, the wave energy flux, through a vertical plane of unit width perpendicular to the wave propagation direction, is equal to:

P = E cg

with cg the group velocity (m/s). Due to the dispersion relation for water waves under the action of gravity, the group velocity depends on the wavelength λ, or equivalently, on the wave period T. Further, the dispersion relation is a function of the water depth h, so the group velocity behaves differently in the limits of deep and shallow water and at intermediate depths: in deep water cg = gT/(4π), half the phase velocity, while in shallow water cg = √(gh), equal to the phase velocity.
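For intermediate depths, the group velocity can be computed numerically. The sketch below solves the linear dispersion relation ω² = g k tanh(kh) with a damped fixed-point iteration (the iteration count and the test depths are arbitrary choices for illustration):

import math

def group_velocity(period_s, depth_m):
    # Solve w^2 = g * k * tanh(k * h) for the wavenumber k, then evaluate
    # c_g = n * c_phase with n = 0.5 * (1 + 2kh / sinh(2kh)).
    g = 9.81
    omega = 2 * math.pi / period_s
    k = omega**2 / g                    # deep-water first guess
    for _ in range(100):                # damped update keeps the iteration stable
        k = 0.5 * (k + omega**2 / (g * math.tanh(k * depth_m)))
    c_phase = omega / k
    n = 0.5 * (1 + 2 * k * depth_m / math.sinh(2 * k * depth_m))
    return n * c_phase

print(group_velocity(8, 1000))  # deep water: ~6.2 m/s, i.e. g*T/(4*pi)
print(group_velocity(8, 2))     # shallow water: ~4.2 m/s, approaching sqrt(g*h)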

Deep water characteristics and opportunities

Deep water corresponds with a water depth larger than half the wavelength, which is the common situation in the sea and ocean. In deep water, longer period waves propagate faster and transport their energy faster. The deep-water group velocity is half the phase velocity. In shallow water, for wavelengths larger than twenty times the water depth, as found quite often near the coast, the group velocity is equal to the phase velocity.

The regularity of deep-water ocean swells, where "easy-to-predict long-wavelength oscillations" are typically seen, offers the opportunity for the development of energy harvesting technologies that are potentially less subject to physical damage by near-shore cresting waves.

History


The first known patent to utilize energy from ocean waves dates back to 1799 and was filed in Paris by Girard and his son. An early application of wave power was a device constructed around 1910 by Bochaux-Praceique to light and power his house at Royan, near Bordeaux in France; it appears to have been the first oscillating water column type of wave energy device. Between 1855 and 1973, 340 patents were filed in the UK alone.

Modern scientific pursuit of wave energy was pioneered by Yoshio Masuda's experiments in the 1940s. He tested various concepts of wave energy devices at sea, with several hundred units used to power navigation lights. Among these was the concept of extracting power from the angular motion at the joints of an articulated raft, which Masuda proposed in the 1950s.

A renewed interest in wave energy was motivated by the oil crisis in 1973. A number of university researchers reexamined the potential of generating energy from ocean waves, among whom notably were Stephen Salter from the University of Edinburgh, Kjell Budal and Johannes Falnes from Norwegian Institute of Technology (now merged into Norwegian University of Science and Technology), Michael E. McCormick from U. S. Naval Academy, David Evans from Bristol University, Michael French from University of Lancaster, John Newman and Chiang C. Mei from MIT.

One of these researchers, Stephen Salter of the University of Edinburgh, invented in 1974 what became known as Salter's Duck or nodding duck, although it was officially referred to as the Edinburgh Duck. In small-scale controlled tests, the Duck's curved cam-like body can stop 90% of wave motion and can convert 90% of that to electricity, giving 81% overall efficiency.

In the 1980s, as the oil price went down, wave-energy funding was drastically reduced; nevertheless, a few first-generation prototypes were tested at sea. More recently, prompted by concern over climate change, there is again growing interest worldwide in renewable energy, including wave energy.

Modern technology

Wave power devices are generally categorized by the method used to capture the energy of the waves. They can also be categorized by location and power take-off system. Method types are point absorber or buoy; surface-following or attenuator, oriented parallel to the direction of wave propagation; terminator, oriented perpendicular to the direction of wave propagation; oscillating water column; and overtopping. Locations are shoreline, nearshore and offshore. Types of power take-off include hydraulic ram, elastomeric hose pump, pump-to-shore, hydroelectric turbine, air turbine, and linear electrical generator. Some of these designs incorporate parabolic reflectors as a means of increasing the wave energy at the point of capture. These capture systems use the rise and fall motion of waves to capture energy. Once the wave energy is captured at a wave source, power must be carried to the point of use or to a connection to the electrical grid by transmission power cables.

These are descriptions of some wave power systems:

The front of the Pelamis machine bursting through a wave at the Agucadoura Wave Park


Wave Dragon seen from reflector, prototype 1:4½

• In the United States, the Pacific Northwest Generating Cooperative is funding the building of a commercial wave-power park at Reedsport, Oregon. The project will utilize the PowerBuoy technology of Ocean Power Technologies, which consists of modular, ocean-going buoys. The rising and falling of the waves moves hydraulic fluid within the buoy; this motion is used to spin a generator, and the electricity is transmitted to shore over a submerged transmission line. A 150 kW buoy has a diameter of 36 feet (11 m) and is 145 feet (44 m) tall, with approximately 30 feet of the unit rising above the ocean surface. Using a three-point mooring system, the buoys are designed to be installed one to five miles (8 km) offshore in water 100 to 200 feet (60 m) deep.

• An example of a surface following device is the Pelamis Wave Energy Converter. The sections of the device articulate with the movement of the waves, each resisting motion between it and the next section, creating pressurized oil to drive a hydraulic ram which drives a hydraulic motor. The machine is long and narrow (snake-like) and points into the waves; it attenuates the waves, gathering more energy than its narrow profile suggests. Its articulating sections drive internal hydraulic generators (through the use of pumps and accumulators).

• With the Wave Dragon wave energy converter large wing reflectors focus waves up a ramp into an offshore reservoir. The water returns to the ocean by the force of gravity via hydroelectric generators.


• The Anaconda Wave Energy Converter is in the early stages of development by the UK company Checkmate SeaEnergy. The concept is a 200 metre long rubber tube tethered underwater. Passing waves instigate a wave inside the tube, which then propagates down its walls, driving a turbine at the far end.

• The AquaBuOY is a technology developed by Finavera Renewables Inc. In 2009 Finavera Renewables surrendered its wave energy permits to FERC. In July 2010 Finavera announced that it had entered into a definitive agreement to sell all assets and intellectual property related to the AquaBuOY wave energy technology to an undisclosed buyer.

• The FlanSea is a so-called "point absorber" buoy, developed for use in southern North Sea conditions. It works by means of a cable that, driven by the bobbing motion of the buoy, generates electricity.

• The SeaRaser, built by Alvin Smith, uses an entirely new technique (pumping) for gathering the wave energy.

• A device called CETO, currently being tested off Fremantle, Western Australia, consists of a single piston pump attached to the sea floor, with a float tethered to the piston. Waves cause the float to rise and fall, generating pressurized water, which is piped to an onshore facility to drive hydraulic generators or run reverse osmosis water desalination.

• Another type of wave buoy, using special polymers, is being developed by SRI.
• Wavebob is an Irish company that has conducted some ocean trials.
• The Oyster wave energy converter is a hydro-electric wave energy device currently being developed by Aquamarine Power. The device captures the energy found in nearshore waves and converts it into clean usable electricity. The system consists of a hinged mechanical flap connected to the seabed at around 10 m depth. Each passing wave moves the flap, which drives hydraulic pistons to deliver high-pressure water via a pipeline to an onshore turbine that generates electricity. In November 2009, the first full-scale demonstrator Oyster began producing power when it was launched at the European Marine Energy Centre (EMEC) on Orkney.

• Ocean Energy has developed the OE buoy, which completed a two-year sea trial at one-quarter scale in September 2009. The OE buoy has only one moving part.

• The Lysekil Project is based on a concept with a direct driven linear generator placed on the seabed. The generator is connected to a buoy at the surface via a line. The movements of the buoy will drive the translator in the generator. The advantage of this setup is a less complex mechanical system with potentially a smaller need for maintenance. One drawback is a more complicated electrical system.

• An Australian firm, Oceanlinx, is developing a deep-water technology to generate electricity from, ostensibly, easy-to-predict long-wavelength ocean swell oscillations. Oceanlinx recently began installation of a third and final demonstration-scale, grid-connected unit near Port Kembla, near Sydney, Australia, a 2.5 MWe system that is expected to go online in early 2010, when its power will be connected to the Australian grid. The company's much smaller first-generation prototype unit, in operation since 2006, is now being disassembled.

• An Israeli firm, SDE Energy Ltd., has developed a breakwater-based wave energy converter. The device sits close to the shore and utilizes the vertical motion of buoys to create hydraulic pressure, which in turn operates the system's generators. SDE is currently building a new 250 kW model in the port of Jaffa, Tel Aviv, and preparing to fulfil its standing orders for 100 MW power plants on the islands of Zanzibar and Kosrae, Micronesia.

• A Finnish firm, AW-Energy Oy, is developing the WaveRoller, a plate anchored to the sea bottom by its lower part. The back-and-forth movement of the surge moves the plate, and the kinetic energy transferred to the plate is collected by a piston pump.

Potential

Deep water wave power resources are truly enormous, estimated at between 1 TW and 10 TW, but it is not practical to capture all of this. The useful worldwide resource has been estimated to be greater than 2 TW. Locations with the most potential for wave power include the western seaboard of Europe, the northern coast of the UK, and the Pacific coastlines of North and South America, Southern Africa, Australia, and New Zealand. The north and south temperate zones have the best sites for capturing wave power, as the prevailing westerlies in these zones blow strongest in winter. Waves are very predictable; waves that are caused by winds can be predicted five days in advance.

Challenges

• There is a potential impact on the marine environment. Noise pollution, for example, could have a negative impact if not monitored, although the noise and visible impact of each design varies greatly.

• In terms of socio-economic challenges, wave farms can result in the displacement of commercial and recreational fishermen from productive fishing grounds, can change the pattern of beach sand nourishment, and may represent hazards to safe navigation.

• Waves generate about 2,700 gigawatts of power. Of those 2,700 gigawatts, only about 500 gigawatts can be captured with the current technology.

Wave farms

The Aguçadoura Wave Farm was the world's first commercial wave farm. It was located 5 km (3 mi) offshore near Póvoa de Varzim, north of Oporto, in Portugal. The farm was designed to use three Pelamis wave energy converters to convert the motion of the ocean surface waves into electricity, for a total installed capacity of 2.25 MW. The farm first generated electricity in July 2008 and was officially opened on 23 September 2008 by the Portuguese Minister of Economy. The wave farm was shut down in November 2008, two months after the official opening, as a result of the financial collapse of Babcock & Brown due to the global economic crisis. The machines were off-site at the time due to technical problems, and although these were resolved, they have not returned to the site for lack of financial backing. A second phase of the project, planned to increase the installed capacity to 21 MW using a further 25 Pelamis machines, is in doubt following Babcock's financial collapse.

Funding for a 3 MW wave farm in Scotland was announced on 20 February 2007 by the Scottish Executive, at a cost of over 4 million pounds, as part of a £13 million funding package for marine power in Scotland. The first of 66 machines was launched in May 2010.

Funding has also been announced for the development of a Wave hub off the north coast of Cornwall, England. The Wave hub will act as a giant extension cable, allowing arrays of wave energy generating devices to be connected to the electricity grid. The Wave hub will initially allow 20 MW of capacity to be connected, with potential expansion to 40 MW. Four device manufacturers have so far expressed interest in connecting to the Wave hub.

Scientists have calculated that the wave energy gathered at Wave Hub will be enough to power up to 7,500 households, saving about 300,000 tons of carbon dioxide emissions over the next 25 years.

A CETO wave farm off the coast of Western Australia has been operating to prove commercial viability and, after preliminary environmental approval, is poised for further development.


Chapter- 10

Piezoelectric Sensor

A piezoelectric disk generates a voltage when deformed (change in shape is greatly exaggerated)

A piezoelectric sensor is a device that uses the piezoelectric effect to measure pressure, acceleration, strain or force by converting them to an electrical signal.

Applications


Piezoelectric disk used as a guitar pickup

Piezoelectric sensors have proven to be versatile tools for the measurement of various processes. They are used for quality assurance, process control and for research and development in many different industries. Although the piezoelectric effect was discovered by Jacques and Pierre Curie in 1880, it was only in the 1950s that it started to be used for industrial sensing applications. Since then, this measuring principle has been increasingly used and can be regarded as a mature technology with outstanding inherent reliability. It has been successfully used in various applications, such as in medical, aerospace and nuclear instrumentation, and as a pressure sensor in the touch pads of mobile phones. In the automotive industry, piezoelectric elements are used to monitor combustion when developing internal combustion engines. The sensors are either mounted directly in additional holes in the cylinder head, or the spark/glow plug is equipped with a built-in miniature piezoelectric sensor.

The rise of piezoelectric technology is directly related to a set of inherent advantages. The high modulus of elasticity of many piezoelectric materials is comparable to that of many metals and goes up to 10⁶ N/m². Even though piezoelectric sensors are electromechanical systems that react to compression, the sensing elements show almost zero deflection. This is why piezoelectric sensors are so rugged, have an extremely high natural frequency, and show excellent linearity over a wide amplitude range.


Additionally, piezoelectric technology is insensitive to electromagnetic fields and radiation, enabling measurements under harsh conditions. Some of the materials used (especially gallium phosphate and tourmaline) have extreme stability even at high temperature, enabling sensors to have a working range of up to 1000°C. Tourmaline shows pyroelectricity in addition to the piezoelectric effect: the ability to generate an electrical signal when the temperature of the crystal changes. This effect is also common to piezoceramic materials.

Principle         Strain sensitivity [V/µε]   Threshold [µε]   Span-to-threshold ratio
Piezoelectric     5.0                         0.00001          100,000,000
Piezoresistive    0.0001                      0.0001           2,500,000
Inductive         0.001                       0.0005           2,000,000
Capacitive        0.005                       0.0001           750,000

One disadvantage of piezoelectric sensors is that they cannot be used for truly static measurements. A static force results in a fixed amount of charge on the piezoelectric material. With conventional readout electronics, imperfect insulating materials and the reduction in internal sensor resistance cause a constant loss of electrons and yield a decreasing signal. Elevated temperatures cause an additional drop in internal resistance and sensitivity. The main effect on the piezoelectric effect is that with increasing pressure loads and temperature, the sensitivity is reduced due to twin formation. While quartz sensors need to be cooled during measurements at temperatures above 300°C, special crystals like gallium phosphate (GaPO4) show no twin formation up to the melting point of the material itself.
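The droop of a nominally static signal can be pictured as an RC discharge: the step charge produced by the load leaks away through the finite insulation and load resistance. A minimal Python sketch with purely illustrative component values:

import math

q0 = 1e-9     # charge from a step load, C (assumed)
c = 100e-12   # sensor capacitance, F (assumed)
r = 10e9      # combined insulation/load resistance, ohm (assumed)
tau = r * c   # time constant: 1 s here

for t in (0.0, 1.0, 5.0):
    v = (q0 / c) * math.exp(-t / tau)   # decaying output voltage
    print(t, round(v, 3))               # 10 V decays to ~0.07 V after 5 s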

However, it is not true that piezoelectric sensors can only be used for very fast processes or at ambient conditions. In fact, there are numerous applications that show quasi-static measurements, while there are other applications with temperatures higher than 500°C.

Piezoelectric sensors are also seen in nature. Dry bone is piezoelectric, and is thought by some to act as a biological force sensor.

Principle of operation

Depending on how a piezoelectric material is cut, three main modes of operation can be distinguished: transverse, longitudinal, and shear.

Transverse effect

A force applied along a neutral axis (y) generates charges along the (x) direction, perpendicular to the line of force. The amount of charge depends on the geometrical dimensions of the respective piezoelectric element. When dimensions a, b, c apply, Cx = dxy Fy b / a, where a is the dimension in line with the neutral axis, b is in line with the charge-generating axis and dxy is the corresponding piezoelectric coefficient.


Longitudinal effect

The amount of charge produced is strictly proportional to the applied force and is independent of the size and shape of the piezoelectric element. Using several elements that are mechanically in series and electrically in parallel is the only way to increase the charge output. The resulting charge is Cx = dxx Fx n, where dxx is the piezoelectric coefficient for a charge in the x-direction released by forces applied along the x-direction (in pC/N), Fx is the applied force in the x-direction [N] and n corresponds to the number of stacked elements.

Shear effect

Again, the charges produced are strictly proportional to the applied forces and are independent of the element's size and shape. For n elements mechanically in series and electrically in parallel the charge is Cx = 2 dxx Fx n.

In contrast to the longitudinal and shear effects, the transverse effect opens the possibility to fine-tune sensitivity on the force applied and the element dimension.
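The three mode formulas can be compared side by side in a short Python sketch (the coefficient, force, stack count and element dimensions are illustrative assumptions, with the coefficient of the order of quartz):

d = 2.3e-12        # piezoelectric coefficient, C/N (order of quartz; assumed)
f = 10.0           # applied force, N (assumed)
n = 4              # number of stacked elements (assumed)
a, b = 2e-3, 10e-3 # dimensions along the neutral and charge axes, m (assumed)

q_transverse = d * f * b / a   # Cx = dxy*Fy*b/a: tunable via geometry
q_longitudinal = d * f * n     # Cx = dxx*Fx*n: independent of geometry
q_shear = 2 * d * f * n        # Cx = 2*dxx*Fx*n

print(q_transverse, q_longitudinal, q_shear)   # charges in coulombs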

Electrical properties

Schematic symbol and electronic model of a piezoelectric sensor

A piezoelectric transducer has very high DC output impedance and can be modeled as a proportional voltage source and filter network. The voltage V at the source is directly proportional to the applied force, pressure, or strain. The output signal is then related to this mechanical force as if it had passed through the equivalent circuit.


Frequency response of a piezoelectric sensor; output voltage vs applied force

A detailed model includes the effects of the sensor's mechanical construction and other non-idealities. The inductance Lm is due to the seismic mass and inertia of the sensor itself. Ce is inversely proportional to the mechanical elasticity of the sensor. C0 represents the static capacitance of the transducer, resulting from an inertial mass of infinite size. Ri is the insulation leakage resistance of the transducer element. If the sensor is connected to a load resistance, this also acts in parallel with the insulation resistance; the lowered total resistance raises the high-pass cutoff frequency.


In the flat region, the sensor can be modeled as a voltage source in series with the sensor's capacitance or a charge source in parallel with the capacitance

For use as a sensor, the flat region of the frequency response plot is typically used, between the high-pass cutoff and the resonant peak. The load and leakage resistance need to be large enough that low frequencies of interest are not lost. A simplified equivalent circuit model can be used in this region, in which Cs represents the capacitance of the sensor surface itself, determined by the standard formula for capacitance of parallel plates. It can also be modeled as a charge source in parallel with the source capacitance, with the charge directly proportional to the applied force, as above.
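A minimal numeric sketch of this flat-region model, with assumed values for the charge sensitivity, sensor capacitance and load resistance:

import math

d = 2.3e-12         # charge sensitivity, C/N (assumed)
force = 10.0        # applied force, N (assumed)
c_sensor = 100e-12  # sensor capacitance, F (parallel-plate estimate; assumed)
r_load = 1e9        # load plus leakage resistance, ohm (assumed)

q = d * force                 # charge-source model: Q proportional to force
v_out = q / c_sensor          # equivalent flat-region output voltage
f_cut = 1 / (2 * math.pi * r_load * c_sensor)   # high-pass corner frequency

print(round(v_out, 3), "V, cutoff", round(f_cut, 2), "Hz")  # 0.23 V, ~1.6 Hz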

Sensor design

Metal disks with piezo material, used in buzzers or as contact microphones

Based on piezoelectric technology various physical quantities can be measured; the most common are pressure and acceleration. For pressure sensors, a thin membrane and a massive base are used, ensuring that an applied pressure specifically loads the elements in one direction. For accelerometers, a seismic mass is attached to the crystal elements. When the accelerometer experiences a motion, the invariant seismic mass loads the elements according to Newton's second law of motion, F = ma.
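For the accelerometer case, a two-step sketch (seismic mass and charge sensitivity are assumed values) shows how F = ma turns an acceleration into a measurable charge:

m = 0.005      # seismic mass, kg (assumed)
a = 10 * 9.81  # a 10 g acceleration event, m/s^2
d = 2.3e-12    # charge sensitivity, C/N (assumed)

force = m * a       # F = m*a loads the crystal elements
charge = d * force  # resulting charge on the elements
print(round(force, 2), "N ->", round(charge * 1e12, 2), "pC")  # 0.49 N -> 1.13 pC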


The main difference in the working principle between these two cases is the way forces are applied to the sensing elements. In a pressure sensor a thin membrane is used to transfer the force to the elements, while in accelerometers the forces are applied by an attached seismic mass.

Sensors often tend to be sensitive to more than one physical quantity. Pressure sensors, for example, show a false signal when they are exposed to vibrations. Sophisticated pressure sensors therefore use acceleration compensation elements in addition to the pressure sensing elements. By carefully matching those elements, the acceleration signal (released from the compensation element) is subtracted from the combined signal of pressure and acceleration to derive the true pressure information.

Vibration sensors can also be used to harvest otherwise wasted energy from mechanical vibrations. This is accomplished by using piezoelectric materials to convert mechanical strain into usable electrical energy.

Sensing materials

Two main groups of materials are used for piezoelectric sensors: piezoelectric ceramics and single-crystal materials. The ceramic materials (such as PZT ceramic) have a piezoelectric constant/sensitivity that is roughly two orders of magnitude higher than those of single-crystal materials and can be produced by inexpensive sintering processes. The piezo effect in piezoceramics is "trained", so unfortunately their high sensitivity degrades over time; the degradation is highly correlated with temperature. The less sensitive crystal materials (gallium phosphate, quartz, tourmaline) have a much higher, when carefully handled almost infinite, long-term stability.


Chapter- 11

Friction

Friction is the force resisting the relative motion of solid surfaces, fluid layers, and/or material elements sliding against each other. It may be thought of as the opposite of "slipperiness".

There are several types of friction:

• Dry friction resists relative lateral motion of two solid surfaces in contact. Dry friction is subdivided into static friction between non-moving surfaces, and kinetic friction between moving surfaces.

• Fluid friction describes the friction between layers within a viscous fluid that are moving relative to each other.

• Lubricated friction is a case of fluid friction where a fluid separates two solid surfaces.

• Skin friction is a component of drag, the force resisting the motion of a solid body through a fluid.

• Internal friction is the force resisting motion between the elements making up a solid material while it undergoes deformation.

When surfaces in contact move relative to each other, the friction between the two surfaces converts kinetic energy into heat. This property can have dramatic consequences, as illustrated by the use of friction between pieces of wood to start a fire.

Another important consequence of many types of friction can be wear, which may lead to performance degradation and/or damage to components. Friction is a component of the science of tribology.

Friction is not a fundamental force but occurs because of the electromagnetic forces between the charged particles which constitute the surfaces in contact. Because of the complexity of these interactions, friction cannot be calculated from first principles, but instead must be found empirically.

History

Several famous scientists and engineers contributed to our understanding of dry friction, including Leonardo da Vinci, Guillaume Amontons, John Theophilus Desaguliers, Leonhard Euler, and Charles-Augustin de Coulomb. Nikolai Pavlovich Petrov and Osborne Reynolds later supplemented this understanding with theories of lubrication.

Basic properties

The basic properties of friction have been described as laws:

• Amontons' 1st Law: The force of friction is directly proportional to the applied load.

• Amontons' 2nd Law: The force of friction is independent of the apparent area of contact.

• Coulomb's Law of Friction: Kinetic friction is independent of the sliding velocity.

Amontons' 2nd Law is an idealization assuming perfectly rigid and inelastic materials. For example, wider tires on cars provide more traction than narrow tires for a given vehicle mass because of surface deformation of the tire.

Dry friction

Dry friction resists relative lateral motion of two solid surfaces in contact. The two regimes of dry friction are static friction between non-moving surfaces, and kinetic friction (sometimes called sliding friction or dynamic friction) between moving surfaces.

Coulomb friction, named after Charles-Augustin de Coulomb, is an approximate model used to calculate the force of dry friction. It is governed by the equation:

Ff ≤ μ Fn

where

• Ff is the force exerted by friction (in the case of equality, the maximum possible magnitude of this force),

• μ is the coefficient of friction, which is an empirical property of the contacting materials,

• Fn is the normal force exerted between the surfaces.


The Coulomb friction may take any value from zero up to μ Fn, and the direction of the frictional force against a surface is opposite to the motion that surface would experience in the absence of friction. Thus, in the static case, the frictional force is exactly what it must be in order to prevent motion between the surfaces; it balances the net force tending to cause such motion. In this case, rather than providing an estimate of the actual frictional force, the Coulomb approximation provides a threshold value for this force, above which motion would commence. This maximum force is known as traction.
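The threshold character of the model is easy to express in code. A minimal Python sketch (the block mass and both coefficients are assumed, illustrative values):

def friction_force(applied_n, normal_n, mu_s, mu_k):
    # Coulomb model: static friction exactly cancels the applied force up
    # to mu_s * Fn; beyond that the surfaces slide and kinetic friction acts.
    if abs(applied_n) <= mu_s * normal_n:
        return -applied_n
    return -mu_k * normal_n * (1 if applied_n > 0 else -1)

# 10 kg block on a level floor, mu_s = 0.6, mu_k = 0.4:
fn = 10 * 9.81
print(friction_force(30.0, fn, 0.6, 0.4))  # -30.0 N: the block stays put
print(friction_force(70.0, fn, 0.6, 0.4))  # -39.24 N: the block slides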

The force of friction is always exerted in a direction that opposes movement (for kinetic friction) or potential movement (for static friction) between the two surfaces. For example, a curling stone sliding along the ice experiences a kinetic force slowing it down. For an example of potential movement, the drive wheels of an accelerating car experience a frictional force pointing forward; if they did not, the wheels would spin, and the rubber would slide backwards along the pavement. Note that it is not the direction of movement of the vehicle they oppose, it is the direction of (potential) sliding between tire and road.

In the case of kinetic friction, the direction of the friction force may or may not match the direction of motion: a block sliding atop a table with rectilinear motion is subject to friction directed along the line of motion; an automobile making a turn is subject to friction acting perpendicular to the line of motion (in which case it is said to be 'normal' to it). The direction of the static friction force can be visualized as directly opposed to the force that would otherwise cause motion, were it not for the static friction preventing motion. In this case, the friction force exactly cancels the applied force, so the net force given by the vector sum, equals zero. It is important to note that in all cases, Newton's first law of motion holds.


The normal force

Block on a ramp (top) and corresponding free body diagram of just the block (bottom).

The normal force is defined as the net force compressing two parallel surfaces together, and its direction is perpendicular to the surfaces. In the simple case of a mass resting on a horizontal surface, the only component of the normal force is the force due to gravity, so that Fn = mg. In this case, the magnitude of the friction force is the product of the mass of the object, the acceleration due to gravity, and the coefficient of friction. However, the coefficient of friction is not a function of mass or volume; it depends only on the material. For instance, a large aluminum block has the same coefficient of friction as a small aluminum block. However, the magnitude of the friction force itself depends on the normal force, and hence on the mass of the block.

If an object is on a level surface and the force tending to cause it to slide is horizontal, the normal force between the object and the surface is just its weight, which is equal to its mass multiplied by the acceleration due to earth's gravity, g. If the object is on a tilted surface such as an inclined plane, the normal force is less, because less of the force of gravity is perpendicular to the face of the plane. Therefore, the normal force, and ultimately the frictional force, is determined using vector analysis, usually via a free body diagram. Depending on the situation, the calculation of the normal force may include forces other than gravity.
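For the block-on-a-ramp case shown in the figure, a short sketch (mass, ramp angle and coefficient are assumed values) resolves gravity into its components and applies the static threshold:

import math

m, g = 10.0, 9.81          # block mass (assumed), gravity
theta = math.radians(20)   # ramp angle (assumed)
mu_s = 0.5                 # static coefficient of friction (assumed)

normal = m * g * math.cos(theta)    # component pressing the block into the ramp
driving = m * g * math.sin(theta)   # component pulling the block down the slope
max_static = mu_s * normal          # friction threshold, mu_s * Fn

print(round(normal, 1), round(driving, 1), round(max_static, 1))
print("slides" if driving > max_static else "stays put")   # stays put here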


Coefficient of friction

The 'coefficient of friction' (COF), also known as a 'frictional coefficient' or 'friction coefficient' and symbolized by the Greek letter µ, is a dimensionless scalar value which describes the ratio of the force of friction between two bodies and the force pressing them together. The coefficient of friction depends on the materials used; for example, ice on steel has a low coefficient of friction, while rubber on pavement has a high coefficient of friction. Coefficients of friction range from near zero to greater than one – under good conditions, a tire on concrete may have a coefficient of friction of 1.7.

For surfaces at rest relative to each other, F ≤ μs Fn, where μs is the coefficient of static friction. This is usually larger than its kinetic counterpart.

For surfaces in relative motion, F = μk Fn, where μk is the coefficient of kinetic friction. The Coulomb friction is equal to μk Fn, and the frictional force on each surface is exerted in the direction opposite to its motion relative to the other surface.

The coefficient of friction is an empirical measurement: it has to be measured experimentally, and cannot be found through calculations. Rougher surfaces tend to have higher effective values. Both static and kinetic coefficients of friction depend on the pair of surfaces in contact; for a given pair of surfaces, the coefficient of static friction is usually larger than that of kinetic friction, although for some pairs, such as Teflon on Teflon, the two coefficients are equal.

Most dry materials in combination have friction coefficient values between 0.3 and 0.6. Values outside this range are rarer, but teflon, for example, can have a coefficient as low as 0.04. A value of zero would mean no friction at all, an elusive property – even magnetic levitation vehicles have drag. Rubber in contact with other surfaces can yield friction coefficients from 1 to 2. Occasionally it is maintained that µ is always < 1, but this is not true. While in most relevant applications µ < 1, a value above 1 merely implies that the force required to slide an object along the surface is greater than the normal force of the surface on the object. For example, silicone rubber or acrylic rubber-coated surfaces have a coefficient of friction that can be substantially larger than 1.

While it is often stated that the COF is a "material property," it is better categorized as a "system property." Unlike true material properties (such as conductivity, dielectric constant, yield strength), the COF for any two materials depends on system variables like temperature, velocity, atmosphere and also what are now popularly described as aging and deaging times; as well as on geometric properties of the interface between the materials. For example, a copper pin sliding against a thick copper plate can have a COF that varies from 0.6 at low speeds (metal sliding against metal) to below 0.2 at high speeds when the copper surface begins to melt due to frictional heating. The latter speed, of course, does not determine the COF uniquely; if the pin diameter is increased so that the frictional heating is removed rapidly, the temperature drops, the pin remains solid and the COF rises to that of a 'low speed' test.


Static friction

Static friction is friction between two solid objects that are not moving relative to each other. For example, static friction can prevent an object from sliding down a sloped surface. The coefficient of static friction, typically denoted as μs, is usually higher than the coefficient of kinetic friction.

The static friction force must be overcome by an applied force before an object can move. The maximum possible friction force between two surfaces before sliding begins is the product of the coefficient of static friction and the normal force: Fmax = μs Fn. When there is no sliding occurring, the friction force can have any value from zero up to Fmax. Any force smaller than Fmax attempting to slide one surface over the other is opposed by a frictional force of equal magnitude and opposite direction. Any force larger than Fmax overcomes the force of static friction and causes sliding to occur. The instant sliding occurs, static friction is no longer applicable, and the friction between the two surfaces is then called kinetic friction.

An example of static friction is the force that prevents a car wheel from slipping as it rolls on the ground. Even though the wheel is in motion, the patch of the tire in contact with the ground is stationary relative to the ground, so it is static rather than kinetic friction.

The maximum value of static friction, when motion is impending, is sometimes referred to as limiting friction, although this term is not used universally. It is also known as traction.

Kinetic friction

Kinetic (or dynamic) friction occurs when two objects are moving relative to each other and rub together (like a sled on the ground). The coefficient of kinetic friction is typically denoted as μk, and is usually less than the coefficient of static friction for the same materials. In fact, Richard Feynman reports that "with dry metals it is very hard to show any difference."

New models are beginning to show how kinetic friction can be greater than static friction. Contrary to earlier explanations, kinetic friction is now understood not to be caused by surface roughness but by chemical bonding between the surfaces. Surface roughness and contact area, however, do affect kinetic friction for micro- and nano-scale objects where surface area forces dominate inertial forces.

Angle of friction

For certain applications it is more useful to define static friction in terms of the maximum angle before which one of the items will begin sliding. This is called the angle of friction or friction angle. It is defined as:

tan θ = μs

where θ is the angle from horizontal and μs is the static coefficient of friction between the objects. This formula can also be used to calculate μs from empirical measurements of the friction angle.
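A two-line check of this relation, recovering the coefficient from a hypothetical measured slip angle:

import math

theta_slip = math.radians(31)          # angle at which sliding begins (assumed measurement)
print(round(math.tan(theta_slip), 2))  # mu_s = tan(theta), about 0.60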

Friction at the atomic level

Determining the forces required to move atoms past each other is a challenge in designing nanomachines. In 2008 scientists for the first time were able to move a single atom across a surface, and measure the forces required. Using ultrahigh vacuum and nearly-zero temperature (5 K), a modified atomic force microscope was used to drag a cobalt atom, and a carbon monoxide molecule, across surfaces of copper and platinum.

Limitations of the Coulomb model

The Coulomb approximation mathematically follows from the assumptions that surfaces are in atomically close contact only over a small fraction of their overall area, that this contact area is proportional to the normal force (until saturation, which takes place when all area is in atomic contact), and that frictional force is proportional to the applied normal force, independently of the contact area. Such reasoning aside, however, the approximation is fundamentally an empirical construction. It is a rule of thumb describing the approximate outcome of an extremely complicated physical interaction. The strength of the approximation is its simplicity and versatility – though in general the relationship between normal force and frictional force is not exactly linear (and so the frictional force is not entirely independent of the contact area of the surfaces), the Coulomb approximation is an adequate representation of friction for the analysis of many physical systems.

When the surfaces are conjoined, Coulomb friction becomes a very poor approximation (for example, adhesive tape resists sliding even when there is no normal force, or a negative normal force). In this case, the frictional force may depend strongly on the area of contact. Some drag racing tires are adhesive in this way. However, despite the complexity of the fundamental physics behind friction, the relationships are accurate enough to be useful in many applications.

Fluid friction

Fluid friction occurs between layers within a fluid that are moving relative to each other. This internal resistance to flow is described by viscosity. In everyday terms viscosity is "thickness": water is "thin", having a lower viscosity, while honey is "thick", having a higher viscosity. Put simply, the less viscous the fluid is, the greater its ease of movement.

All real fluids (except superfluids) have some resistance to stress and therefore are viscous, but a fluid which has no resistance to shear stress is known as an ideal fluid or inviscid fluid.


Lubricated friction

Lubricated friction is a case of fluid friction where a fluid separates two solid surfaces. Lubrication is a technique employed to reduce wear of one or both surfaces in close proximity moving relative to one another by interposing a substance called a lubricant between the surfaces.

In most cases the applied load is carried by pressure generated within the fluid due to the frictional viscous resistance to motion of the lubricating fluid between the surfaces. Adequate lubrication allows smooth continuous operation of equipment, with only mild wear, and without excessive stresses or seizures at bearings. When lubrication breaks down, metal or other components can rub destructively over each other, causing destructive damage, heat, and failure.

Skin friction

Skin friction arises from the interaction between a fluid and the "skin" of a body moving through it, and is directly related to the area of the surface of the body that is in contact with the fluid. Skin friction follows the drag equation and rises with the square of the velocity.

Skin friction is caused by viscous drag in the boundary layer around the object. There are two ways to decrease skin friction: the first is to shape the moving body so that smooth flow is possible, like an airfoil. The second method is to decrease the length and cross-section of the moving object as much as is practicable.

Internal friction

Internal friction is the force resisting motion between the elements making up a solid material while it undergoes plastic deformation.

Plastic deformation in solids is an irreversible change in the internal molecular structure of an object. This change may be due to either (or both) an applied force or a change in temperature. The change of an object's shape is called strain. The force causing it is called stress. Stress does not necessarily cause permanent change. As deformation occurs, internal forces oppose the applied force. If the applied stress is not too large these opposing forces may completely resist the applied force, allowing the object to assume a new equilibrium state and to return to its original shape when the force is removed. This is what is known in the literature as elastic deformation (or elasticity). Larger forces in excess of the elastic limit may cause a permanent (irreversible) deformation of the object. This is what is known as plastic deformation.

Other types of friction


Rolling resistance

Rolling resistance is the force that resists the rolling of a wheel or other circular object along a surface caused by deformations in the object and/or surface. Generally the force of rolling resistance is less than that associated with kinetic friction. Typical values for the coefficient of rolling resistance are 0.001. One of the most common examples of rolling resistance is the movement of motor vehicle tires on a road, a process which generates heat and sound as by-products.

Triboelectric effect

Rubbing dissimilar materials against one another can cause a build-up of electrostatic charge, which can be hazardous if flammable gases or vapours are present. When the static build-up discharges, explosions can be caused by ignition of the flammable mixture.

Belt friction

Belt friction is a physical property observed from the forces acting on a belt wrapped around a pulley, when one end is being pulled. The resulting tension, which acts on both ends of the belt, can be modeled by the belt friction equation.

In practice, the theoretical tension acting on the belt or rope calculated by the belt friction equation can be compared to the maximum tension the belt can support. This helps a designer of such a rig to know how many times the belt or rope must be wrapped around the pulley to prevent it from slipping. Mountain climbers and sailing crews demonstrate a standard knowledge of belt friction when accomplishing basic tasks.
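The belt friction (capstan) equation referred to above relates the two tensions exponentially, $T_{load} = T_{hold}\,e^{\mu \phi}$, where $\mu$ is the friction coefficient and $\phi$ the total wrap angle in radians. A minimal sketch, with illustrative load and friction values, showing how a designer might estimate the number of wraps needed:

import math

def holding_force(t_load, mu, wraps):
    # Capstan equation rearranged: T_hold = T_load * exp(-mu * phi)
    phi = 2 * math.pi * wraps
    return t_load * math.exp(-mu * phi)

def wraps_needed(t_load, t_hold_max, mu):
    # Smallest whole number of wraps keeping the required holding
    # force at or below t_hold_max
    phi = math.log(t_load / t_hold_max) / mu
    return math.ceil(phi / (2 * math.pi))

# Holding a 2000 N load with a rope on a post, mu = 0.3 (illustrative):
print(holding_force(2000.0, 0.3, wraps=2))  # ~46 N of hand force suffices
print(wraps_needed(2000.0, 50.0, 0.3))      # 2 wraps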

Reducing friction

Devices

Devices such as wheels, ball bearings, roller bearings, and air cushion or other types of fluid bearings can change sliding friction into a much smaller type of rolling friction.

Many thermoplastic materials such as nylon, HDPE and PTFE are commonly used in low friction bearings. They are especially useful because the coefficient of friction falls with increasing imposed load. For improved wear resistance, very high molecular weight grades are usually specified for heavy duty or critical bearings.

Lubricants

A common way to reduce friction is by using a lubricant, such as oil, water, or grease, which is placed between the two surfaces, often dramatically lessening the coefficient of friction. The science of friction and lubrication is called tribology. Lubricant technology is the application of this science to the development and use of lubricants, especially for industrial or commercial objectives.

Superlubricity, a recently discovered effect, has been observed in graphite: it is a substantial decrease of the friction between two sliding objects, with friction approaching zero (a very small amount of frictional energy is still dissipated).

Lubricants to overcome friction need not always be thin, turbulent fluids or powdery solids such as graphite and talc; acoustic lubrication actually uses sound as a lubricant.

Another way to reduce friction between two parts is to superimpose micro-scale vibration on one of the parts. This can be sinusoidal vibration, as used in ultrasound-assisted cutting, or vibration noise, known as dither.

Energy of friction According to the law of conservation of energy, no energy is destroyed due to friction, though it may be lost to the system of concern. Energy is transformed from other forms into heat. A sliding hockey puck comes to rest because friction converts its kinetic energy into heat. Since heat quickly dissipates, many early philosophers, including Aristotle, wrongly concluded that moving objects lose energy without a driving force.

When an object is pushed along a surface, the energy converted to heat is given by

$E_{th} = \int \mu_k F_n \, dx$

where

$F_n$ is the normal force,
$\mu_k$ is the coefficient of kinetic friction, and
$x$ is the coordinate along which the object traverses.
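For a constant normal force the integral reduces to a simple product, $E_{th} = \mu_k F_n d$. A minimal sketch with illustrative mass, coefficient, and distance values:

def friction_heat(mu_k, normal_force, distance):
    # Heat from kinetic friction over distance d, assuming a constant
    # normal force along the path: E = mu_k * F_n * d
    return mu_k * normal_force * distance

# A 5 kg box pushed 3 m across a floor with mu_k = 0.4 (illustrative):
print(friction_heat(0.4, 5.0 * 9.81, 3.0))  # ~58.9 J converted to heat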

Energy lost to a system as a result of friction is a classic example of thermodynamic irreversibility.

Work of friction

In the reference frame of the interface between two surfaces, static friction does no work, because there is never displacement between the surfaces. In the same reference frame, kinetic friction is always in the direction opposite the motion, and does negative work. However, friction can do positive work in certain frames of reference. One can see this by placing a heavy box on a rug, then pulling on the rug quickly. In this case, the box slides backwards relative to the rug, but moves forward relative to the frame of reference in which the floor is stationary. Thus, the kinetic friction between the box and rug accelerates the box in the same direction that the box moves, doing positive work.

The work done by friction can translate into deformation, wear, and heat that can affect the contact surface properties (even the coefficient of friction between the surfaces). This can be beneficial, as in polishing. The work of friction is used to mix and join materials, as in the process of friction welding. Excessive erosion or wear of mating surfaces occurs when work due to frictional forces rises to unacceptable levels. Harder corrosion particles caught between mating surfaces (fretting) exacerbate wear. Bearing seizure or failure may result from excessive wear due to the work of friction. As surfaces are worn by work due to friction, the fit and surface finish of an object may degrade until it no longer functions properly.


Chapter- 12

Battery (Electricity)

Various cells and batteries (top-left to bottom-right): two AA, one D, one handheld ham radio battery, two 9-volt (PP3), two AAA, one C, one camcorder battery, one cordless phone battery.

An electrical battery is one or more electrochemical cells that convert stored chemical energy into electrical energy. Since the invention of the first battery (or "voltaic pile") in 1800 by Alessandro Volta, batteries have become a common power source for many household and industrial applications. According to a 2005 estimate, the worldwide battery industry generates US$48 billion in sales each year, with 6% annual growth.

There are two types of batteries: primary batteries (disposable batteries), which are designed to be used once and discarded, and secondary batteries (rechargeable batteries), which are designed to be recharged and used multiple times. Miniature cells are used to power devices such as hearing aids and wristwatches; larger batteries provide standby power for telephone exchanges or computer data centers.

Principle of operation


A voltaic cell for demonstration purposes. In this example the two half-cells are linked by a salt bridge separator that permits the transfer of ions, but not water molecules.

A battery is a device that converts chemical energy directly to electrical energy. It consists of a number of voltaic cells; each voltaic cell consists of two half cells connected in series by a conductive electrolyte containing anions and cations. One half-cell includes electrolyte and the electrode to which anions (negatively charged ions) migrate, i.e., the anode or negative electrode; the other half-cell includes electrolyte and the electrode to which cations (positively charged ions) migrate, i.e., the cathode or positive electrode. In the redox reaction that powers the battery, reduction (addition of electrons) occurs to cations at the cathode, while oxidation (removal of electrons) occurs to anions at the anode. The electrodes do not touch each other but are electrically connected by the electrolyte. Some cells use two half-cells with different electrolytes. A separator between half cells allows ions to flow, but prevents mixing of the electrolytes.

Each half cell has an electromotive force (or emf), determined by its ability to drive electric current from the interior to the exterior of the cell. The net emf of the cell is the difference between the emfs of its half-cells, as first recognized by Volta. Therefore, if the electrodes have emfs $\mathcal{E}_1$ and $\mathcal{E}_2$, then the net emf is $\mathcal{E}_2 - \mathcal{E}_1$; in other words, the net emf is the difference between the reduction potentials of the half-reactions.
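As a numeric illustration of taking the difference of reduction potentials, the sketch below uses standard textbook potentials for a Daniell (copper-zinc) cell; these values are quoted for illustration only, not taken from this text.

# Standard reduction potentials (volts) for the two half-reactions
# of a Daniell cell -- textbook values used purely for illustration.
E_RED = {"Cu2+/Cu": +0.34, "Zn2+/Zn": -0.76}

def net_emf(cathode, anode):
    # Net cell emf as the difference between the reduction potentials
    # of the two half-reactions, as described above.
    return E_RED[cathode] - E_RED[anode]

print(net_emf("Cu2+/Cu", "Zn2+/Zn"))  # 1.10 V for the Daniell cell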

The electrical driving force or $\Delta V_{bat}$ across the terminals of a cell is known as the terminal voltage (difference) and is measured in volts. The terminal voltage of a cell that is neither charging nor discharging is called the open-circuit voltage and equals the emf of the cell. Because of internal resistance, the terminal voltage of a cell that is discharging is smaller in magnitude than the open-circuit voltage, and the terminal voltage of a cell that is charging exceeds the open-circuit voltage. An ideal cell has negligible internal resistance, so it would maintain a constant terminal voltage of $\mathcal{E}$ until exhausted, then drop to zero. If such a cell maintained 1.5 volts and stored a charge of one coulomb then on complete discharge it would perform 1.5 joules of work. In actual cells, the internal resistance increases under discharge, and the open-circuit voltage also decreases under discharge. If the voltage and resistance are plotted against time, the resulting graphs typically are a curve; the shape of the curve varies according to the chemistry and internal arrangement employed.
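A minimal sketch of the internal-resistance behaviour just described, with an assumed 1.5 V emf and 0.2 ohm internal resistance (both values are illustrative):

def terminal_voltage(emf, current, r_internal):
    # Terminal voltage of a cell with internal resistance: V = emf - I*r.
    # Discharging (current > 0) gives V below the open-circuit voltage;
    # charging (current < 0) gives V above it.
    return emf - current * r_internal

emf, r = 1.5, 0.2
print(terminal_voltage(emf, 0.0, r))   # 1.5 V open-circuit
print(terminal_voltage(emf, 1.0, r))   # 1.3 V while discharging at 1 A
print(terminal_voltage(emf, -1.0, r))  # 1.7 V while charging at 1 A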

As stated above, the voltage developed across a cell's terminals depends on the energy release of the chemical reactions of its electrodes and electrolyte. Alkaline and carbon-zinc cells have different chemistries but approximately the same emf of 1.5 volts; likewise NiCd and NiMH cells have different chemistries, but approximately the same emf of 1.2 volts. On the other hand, the high electrochemical potential changes in the reactions of lithium compounds give lithium cells emfs of 3 volts or more.

Categories and types of batteries


From top to bottom: SR41/AG3, SR44/AG13 (button cells), a 9-volt PP3 battery, an AAA cell, an AA cell, a C cell, a D cell, and a large 3R12. The ruler is marked in centimeters.

Batteries are classified into two broad categories, each type with advantages and disadvantages.

• Primary batteries irreversibly (within limits of practicality) transform chemical energy to electrical energy. When the initial supply of reactants is exhausted, energy cannot be readily restored to the battery by electrical means.

• Secondary batteries can be recharged; that is, they can have their chemical reactions reversed by supplying electrical energy to the cell, restoring their original composition.


Historically, some types of primary batteries used, for example, for telegraph circuits, were restored to operation by replacing the components of the battery consumed by the chemical reaction. Secondary batteries are not indefinitely rechargeable due to dissipation of the active materials, loss of electrolyte and internal corrosion.

Primary batteries

Primary batteries can produce current immediately on assembly. Disposable batteries are intended to be used once and discarded. These are most commonly used in portable devices that have low current drain, are only used intermittently, or are used well away from an alternative power source, such as in alarm and communication circuits where other electric power is only intermittently available. Disposable primary cells cannot be reliably recharged, since the chemical reactions are not easily reversible and active materials may not return to their original forms. Battery manufacturers recommend against attempting to recharge primary cells.

Common types of disposable batteries include zinc-carbon batteries and alkaline batteries. Generally, these have higher energy densities than rechargeable batteries, but disposable batteries do not fare well under high-drain applications with loads under 75 Ω.

Secondary batteries

Secondary batteries must be charged before use; they are usually assembled with active materials in the discharged state. Rechargeable batteries or secondary cells can be recharged by applying electric current, which reverses the chemical reactions that occur during its use. Devices to supply the appropriate current are called chargers or rechargers.

The oldest form of rechargeable battery is the lead-acid battery. This battery is notable in that it contains a liquid in an unsealed container, requiring that the battery be kept upright and the area be well ventilated to ensure safe dispersal of the hydrogen gas produced by these batteries during overcharging. The lead-acid battery is also very heavy for the amount of electrical energy it can supply. Despite this, its low manufacturing cost and its high surge current levels make its use common where a large capacity (over approximately 10 Ah) is required or where the weight and ease of handling are not concerns.

A common form of the lead-acid battery is the modern car battery, which can generally deliver a peak current of 450 amperes. An improved type of liquid electrolyte battery is the sealed valve regulated lead acid (VRLA) battery, popular in the automotive industry as a replacement for the lead-acid wet cell. The VRLA battery uses an immobilized sulfuric acid electrolyte, reducing the chance of leakage and extending shelf life. VRLA batteries have the electrolyte immobilized, usually by one of two means:

• Gel batteries (or "gel cell") contain a semi-solid electrolyte to prevent spillage.


• Absorbed Glass Mat (AGM) batteries absorb the electrolyte in a special fiberglass matting.

Other portable rechargeable batteries include several "dry cell" types, which are sealed units and are therefore useful in appliances such as mobile phones and laptop computers. Cells of this type (in order of increasing power density and cost) include nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH) and lithium-ion (Li-ion) cells. By far, Li-ion has the highest share of the dry cell rechargeable market. Meanwhile, NiMH has replaced NiCd in most applications due to its higher capacity, but NiCd remains in use in power tools, two-way radios, and medical equipment. NiZn is a new technology that is not yet well established commercially.

Recent developments include batteries with embedded functionality such as USBCELL, with a built-in charger and USB connector within the AA format, enabling the battery to be charged by plugging into a USB port without a charger, and low self-discharge (LSD) mix chemistries such as Hybrio, ReCyko, and Eneloop, where cells are precharged prior to shipping.

Battery cell types

There are many general types of electrochemical cells, according to the chemical processes applied and the design chosen. Variations include galvanic cells, electrolytic cells, fuel cells, flow cells and voltaic piles.

Wet cell

A wet cell battery has a liquid electrolyte. Other names are flooded cell, since the liquid covers all internal parts, or vented cell, since gases produced during operation can escape to the air. Wet cells were a precursor to dry cells and are commonly used as a learning tool for electrochemistry. They are often built with common laboratory supplies, like beakers, to demonstrate how electrochemical cells work. A particular type of wet cell known as a concentration cell is important in understanding corrosion. Wet cells may be primary cells (non-rechargeable) or secondary cells (rechargeable). Originally all practical primary batteries such as the Daniell cell were built as open-topped glass-jar wet cells. Other primary wet cells are the Leclanché cell, Grove cell, Bunsen cell, chromic acid cell, Clark cell and Weston cell. The Leclanché cell chemistry was adapted to the first dry cells. Wet cells are still used in automobile batteries and in industry for standby power for switchgear, telecommunications, and large uninterruptible power supplies, but in many places batteries with gel cells have been used instead. These applications commonly use lead-acid or nickel-cadmium cells.


Dry cell

Line art drawing of a dry cell: 1. brass cap, 2. plastic seal, 3. expansion space, 4. porous cardboard, 5. zinc can, 6. carbon rod, 7. chemical mixture.

A dry cell has the electrolyte immobilized as a paste, with only enough moisture in the paste to allow current to flow. Unlike a wet cell, a dry cell can be operated in any orientation, and will not spill its electrolyte if inverted.

While a dry cell's electrolyte is not truly completely free of moisture and must contain some moisture to function, it has the advantage of containing no sloshing liquid that might leak or drip out when inverted or handled roughly, making it highly suitable for small portable electric devices. By comparison, the first wet cells were typically fragile glass containers with lead rods hanging from the open top, and needed careful handling to avoid spillage. An inverted wet cell would leak, while a dry cell would not. Lead-acid batteries would not achieve the safety and portability of the dry cell until the development of the gel battery.

A common dry cell battery is the zinc-carbon battery, using a cell sometimes called the dry Leclanché cell, with a nominal voltage of 1.5 volts, the same nominal voltage as the alkaline battery (since both use the same zinc-manganese dioxide combination).

The makeup of a standard dry cell is a zinc anode (negative pole), usually in the form of a cylindrical pot, with a carbon cathode (positive pole) in the form of a central rod. The electrolyte is ammonium chloride in the form of a paste next to the zinc anode. The remaining space between the electrolyte and carbon cathode is taken up by a second paste consisting of ammonium chloride and manganese dioxide, the latter acting as a depolariser. In some more modern types of so called 'high power' batteries, the ammonium chloride has been replaced by zinc chloride.

Molten salt

A molten salt battery is a primary or secondary battery that uses a molten salt as its electrolyte. Its energy density and power density make it potentially useful for electric vehicles, but it must be carefully insulated to retain heat.

Reserve

A reserve battery can be stored for a long period of time and is activated when its internal parts (usually electrolyte) are assembled. For example, a battery for an electronic fuze might be activated by the impact of firing a gun, breaking a capsule of electrolyte to activate the battery and power the fuze's circuits. Reserve batteries are usually designed for a short service life (seconds or minutes) after long storage (years). A water-activated battery for oceanographic instruments or military applications becomes activated on immersion in water.

Battery cell performance

A battery's characteristics may vary over load cycle, charge cycle and over lifetime due to many factors including internal chemistry, current drain and temperature.

Battery capacity and discharging


A device to check battery voltage.

The more electrolyte and electrode material there is in the cell, the greater the capacity of the cell. Thus a small cell has less capacity than a larger cell, given the same chemistry (e.g. alkaline cells), though they develop the same open-circuit voltage.

Because of the chemical reactions within the cells, the capacity of a battery depends on the discharge conditions such as the magnitude of the current (which may vary with time), the allowable terminal voltage of the battery, temperature and other factors. The available capacity of a battery depends upon the rate at which it is discharged. If a battery is discharged at a relatively high rate, the available capacity will be lower than expected.

The battery capacity that manufacturers print on a battery is usually the product of 20 hours multiplied by the maximum constant current that a new battery can supply for 20 hours at 68 °F (20 °C), down to a predetermined terminal voltage per cell. A battery rated at 100 A·h will thus deliver 5 A over a 20-hour period at room temperature. However, if it is instead discharged at 50 A, it will have a lower apparent capacity.

The relationship between current, discharge time, and capacity for a lead-acid battery is approximated (over a certain range of current values) by Peukert's law:

$Q_P = I^k t$

where

$Q_P$ is the capacity when discharged at a rate of 1 amp,
$I$ is the current drawn from the battery (A),
$t$ is the amount of time (in hours) that the battery can sustain, and
$k$ is a constant of around 1.3.

For low values of I, internal self-discharge must also be included.
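Rearranged for runtime, Peukert's law gives $t = Q_P / I^k$. The sketch below evaluates it for an assumed $Q_P$ of 100 and $k = 1.3$ (both illustrative), showing how the apparent capacity shrinks as the discharge current grows:

def runtime_hours(q_p, current, k=1.3):
    # Peukert's law rearranged for runtime: from Q_P = I**k * t,
    # t = Q_P / I**k. q_p is the capacity at a 1 A discharge rate.
    return q_p / current ** k

for amps in (1, 5, 50):
    t = runtime_hours(100.0, amps)
    print(f"{amps:>2} A -> {t:6.1f} h, apparent capacity {amps * t:5.1f} A·h")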

In practical batteries, internal energy losses and the limited rate of diffusion of ions through the electrolyte cause the efficiency of a battery to vary at different discharge rates. When discharging at a low rate, the battery's energy is delivered more efficiently than at higher discharge rates, but if the rate is too low, the battery will self-discharge over the long operating time, again lowering its efficiency.

Installing batteries with different A·h ratings will not affect the operation of a device rated for a specific voltage unless the load limits of the battery are exceeded. High-drain loads like digital cameras can result in lower actual energy, most notably for alkaline batteries. For example, a battery rated at 2000 mA·h would not sustain a current of 1 A for the full two hours, if it had been rated at a 10-hour or 20-hour discharge.

Fastest charging, largest, and lightest batteries

Lithium iron phosphate (LiFePO4) batteries are the fastest charging and discharging, next to supercapacitors. The world's largest battery is in Fairbanks, Alaska, composed of Ni-Cd cells. Sodium-sulfur batteries are being used to store wind power. Lithium-sulfur batteries have been used on the longest and highest solar powered flight. The speed of recharging for lithium-ion batteries may be increased by manipulation.

Battery lifetime

Life of primary batteries

Even if never taken out of the original package, disposable (or "primary") batteries can lose 8 to 20 percent of their original charge every year at a temperature of about 20°–30°C. This is known as the "self discharge" rate and is due to non-current-producing "side" chemical reactions, which occur within the cell even if no load is applied to it. The rate of the side reactions is reduced if the batteries are stored at low temperature, although some batteries can be damaged by freezing. High or low temperatures may reduce battery performance. This will affect the initial voltage of the battery. For an AA alkaline battery this initial voltage is approximately normally distributed around 1.6 volts.
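Compounding the quoted annual self-discharge rates gives a rough shelf-life estimate; the three-year horizon in this sketch is an arbitrary illustration:

def remaining_charge(annual_loss, years):
    # Fraction of the original charge left after compounded
    # annual self-discharge
    return (1.0 - annual_loss) ** years

# After 3 years on the shelf, using the 8% and 20% rates quoted above:
print(remaining_charge(0.08, 3))  # ~0.78 of the original charge remains
print(remaining_charge(0.20, 3))  # ~0.51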


Discharging performance of all batteries drops at low temperature.


Lifespan of rechargeable batteries

Rechargeable batteries.

Rechargeable batteries self-discharge more rapidly than disposable alkaline batteries, especially nickel-based batteries; a freshly charged NiCd loses 10% of its charge in the first 24 hours, and thereafter discharges at a rate of about 10% a month. However, modern lithium designs have reduced the self-discharge rate to a relatively low level (but still poorer than for primary batteries). Most nickel-based batteries are partially discharged when purchased, and must be charged before first use.

Although rechargeable batteries have their energy content restored by charging, some deterioration occurs on each charge/discharge cycle. Low-capacity nickel metal hydride (NiMH) batteries (1700-2000 mA·h) can be charged for about 1000 cycles, whereas high capacity NiMH batteries (above 2500 mA·h) can be charged for about 500 cycles. Nickel cadmium (NiCd) batteries tend to be rated for 1,000 cycles before their internal resistance permanently increases beyond usable values. Normally a fast charge, rather than a slow overnight charge, will shorten battery lifespan. However, if the overnight charger is not "smart" and cannot detect when the battery is fully charged, then overcharging is likely, which also damages the battery. Degradation usually occurs because electrolyte migrates away from the electrodes or because active material falls off the electrodes. NiCd batteries suffer the drawback that they should be fully discharged before recharge. Without full discharge, crystals may build up on the electrodes, thus decreasing the active surface area and increasing internal resistance. This decreases battery capacity and causes the "memory effect". These electrode crystals can also penetrate the electrolyte separator, thereby causing shorts. NiMH, although similar in chemistry, does not suffer from memory effect to quite this extent. When a battery reaches the end of its lifetime, it will not suddenly lose all of its capacity; rather, its capacity will gradually decrease.

Automotive lead-acid rechargeable batteries have a much harder life. Because of vibration, shock, heat, cold, and sulfation of their lead plates, few automotive batteries last beyond six years of regular use. Automotive starting batteries have many thin plates to provide as much current as possible in a reasonably small package. In general, the thicker the plates, the longer the life of the battery. Typically they are only drained a small amount before recharge. Care should be taken to avoid deep discharging a starting battery, since each charge and discharge cycle causes active material to be shed from the plates.

"Deep-cycle" lead-acid batteries such as those used in electric golf carts have much thicker plates to aid their longevity. The main benefit of the lead-acid battery is its low cost; the main drawbacks are its large size and weight for a given capacity and voltage. Lead-acid batteries should never be discharged to below 20% of their full capacity, because internal resistance will cause heat and damage when they are recharged. Deep-cycle lead-acid systems often use a low-charge warning light or a low-charge power cut-off switch to prevent the type of damage that will shorten the battery's life.

Extending battery life

Battery life can be extended by storing the batteries at a low temperature, as in a refrigerator or freezer, which slows the chemical reactions in the battery. Such storage can extend the life of alkaline batteries by about 5%, while the charge of rechargeable batteries can be extended from a few days up to several months. To reach their maximum voltage, batteries must be returned to room temperature; discharging an alkaline battery at 250 mA at 0 °C is only half as efficient as at 20 °C. As a result, alkaline battery manufacturers like Duracell do not recommend refrigerating or freezing batteries.

Prolonging life in multiple cells through cell balancing

Analog front ends that balance cells and eliminate mismatches of cells in series or parallel combination significantly improve battery efficiency and increase the overall pack capacity. As the number of cells and load currents increase, the potential for mismatch also increases. There are two kinds of mismatch in the pack: state-of-charge (SOC) and capacity/energy (C/E) mismatch. Though the SOC mismatch is more common, each problem limits the pack capacity (mAh) to the capacity of the weakest cell.


Cell balancing principle

Battery pack cells are balanced when all the cells in the battery pack meet two conditions:

1. If all cells have the same capacity, then they are balanced when they have the same State of Charge (SOC). In this case, the Open Circuit Voltage (OCV) is a good measure of the SOC. If, in an out-of-balance pack, all cells can be differentially charged to full capacity (balanced), then they will subsequently cycle normally without any additional adjustments. This is mostly a one-shot fix.

2. If the cells have different capacities, they are also considered balanced when the SOC is the same. But, since SOC is a relative measure, the absolute amount of capacity for each cell is different. To keep the cells with different capacities at the same SOC, cell balancing must provide differential amounts of current to cells in the series string during both charge and discharge on every cycle.

Cell balancing electronics

Cell balancing is defined as the application of differential currents to individual cells (or combinations of cells) in a series string. Normally, of course, cells in a series string receive identical currents. A battery pack requires additional components and circuitry to achieve cell balancing. However, the use of a fully integrated analog front end for cell balancing reduces the required external components to just balancing resistors.
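A toy sketch of the differential-current idea just described: cells above the weakest cell's state of charge are bled through a balancing resistor so the series string converges to a common SOC. The function name and values are illustrative, not a real battery-management API.

def balancing_currents(cell_socs, bleed_current=0.05):
    # Apply a differential bleed current (A) to every cell whose state
    # of charge is above the weakest cell's, so the string equalizes.
    weakest = min(cell_socs)
    return [bleed_current if soc > weakest else 0.0 for soc in cell_socs]

# A four-cell series string with an SOC mismatch:
print(balancing_currents([0.95, 0.90, 0.97, 0.93]))
# [0.05, 0.0, 0.05, 0.05] -- only the fuller cells are bled down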

It is important to recognize that cell mismatch results more from limitations in process control and inspection than from variations inherent in the lithium-ion chemistry. The use of a fully integrated analog front end for cell balancing can improve the performance of series-connected Li-ion cells by addressing both SOC and C/E issues. SOC mismatch can be remedied by balancing the cells during an initial conditioning period and subsequently only during the charge phase. C/E mismatch remedies are more difficult to implement, harder to measure, and require balancing during both charge and discharge periods.

This type of solution reduces the number of external components, such as discrete capacitors, diodes, and most resistors, otherwise needed to achieve balance.

Hazards

Explosion

A battery explosion is caused by the misuse or malfunction of a battery, such as attempting to recharge a primary (non-rechargeable) battery, or short circuiting a battery. With car batteries, explosions are most likely to occur when a short circuit generates very large currents. In addition, car batteries liberate hydrogen when they are overcharged (because of electrolysis of the water in the electrolyte). Normally the amount of overcharging is very small, as is the amount of explosive gas developed, and the gas dissipates quickly. However, when "jumping" a car battery, the high current can cause the rapid release of large volumes of hydrogen, which can be ignited by a nearby spark (for example, when removing the jumper cables).

When a battery is recharged at an excessive rate, an explosive gas mixture of hydrogen and oxygen may be produced faster than it can escape from within the walls of the battery, leading to pressure build-up and the possibility of the battery case bursting. In extreme cases, the battery acid may spray violently from the casing of the battery and cause injury. Overcharging—that is, attempting to charge a battery beyond its electrical capacity—can also lead to a battery explosion, leakage, or irreversible damage to the battery. It may also cause damage to the charger or device in which the overcharged battery is later used. Additionally, disposing of a battery in fire may cause an explosion as steam builds up within the sealed case of the battery.

Leakage

Leaked alkaline battery.

Many battery chemicals are corrosive, poisonous, or both. If leakage occurs, either spontaneously or through accident, the chemicals released may be dangerous.

For example, disposable batteries often use a zinc "can" as both a reactant and as the container to hold the other reagents. If this kind of battery is run all the way down, or if it is recharged after running down too far, the reagents can emerge through the cardboard and plastic that form the remainder of the container. The active chemical leakage can then damage the equipment that the batteries were inserted into. For this reason, many electronic device manufacturers recommend removing the batteries from devices that will not be used for extended periods of time.

Environmental concerns

The widespread use of batteries has created many environmental concerns, such as toxic metal pollution. Battery manufacture consumes resources and often involves hazardous chemicals. Used batteries also contribute to electronic waste. Some areas now have battery recycling services available to recover some of the materials from used batteries. Batteries may be harmful or fatal if swallowed. Recycling or proper disposal prevents dangerous elements (such as lead, mercury, and cadmium) found in some types of batteries from entering the environment. In the United States, Americans purchase nearly three billion batteries annually, and about 179,000 tons of those end up in landfills across the country.

In the United States, the Mercury-Containing and Rechargeable Battery Management Act of 1996 banned the sale of mercury-containing batteries, enacted uniform labeling requirements for rechargeable batteries, and required that rechargeable batteries be easily removable. California and New York City prohibit the disposal of rechargeable batteries in solid waste, and along with Maine require recycling of cell phones. The rechargeable battery industry has nationwide recycling programs in the United States and Canada, with drop-off points at local retailers.

The Battery Directive of the European Union has similar requirements, in addition to requiring increased recycling of batteries, and promoting research on improved battery recycling methods.

Ingestion

Small button/disk batteries can be swallowed by young children. While in the digestive tract the battery's electrical discharge can burn the tissues and can be serious enough to lead to death. Disk batteries do not usually cause problems unless they become lodged in the gastrointestinal (GI) tract. The most common place disk batteries become lodged, resulting in clinical sequelae, is the esophagus. Batteries that successfully traverse the esophagus are unlikely to lodge at any other location. The likelihood that a disk battery will lodge in the esophagus is a function of the patient's age and the size of the battery. Disk batteries of 16 mm have become lodged in the esophagi of two children younger than 1 year. Older children do not have problems with batteries smaller than 21–23 mm. For comparison, a dime is 18 mm, a nickel is 21 mm, and a quarter is 24 mm. Liquefaction necrosis may occur because sodium hydroxide is generated by the current produced by the battery (usually at the anode). Perforation has occurred as rapidly as 6 hours after ingestion.

Homemade cells


Almost any liquid or moist object that has enough ions to be electrically conductive can serve as the electrolyte for a cell. As a novelty or science demonstration, it is possible to insert two electrodes made of different metals into a lemon, potato, etc. and generate small amounts of electricity. "Two-potato clocks" are also widely available in hobby and toy stores; they consist of a pair of cells, each consisting of a potato (lemon, et cetera) with two electrodes inserted into it, wired in series to form a battery with enough voltage to power a digital clock. Homemade cells of this kind are of no real practical use, because they produce far less current—and cost far more per unit of energy generated—than commercial cells, due to the need for frequent replacement of the fruit or vegetable. In addition, one can make a voltaic pile from two coins (such as a nickel and a penny) and a piece of paper towel dipped in salt water. Such a pile would make very little voltage itself, but when many of them are stacked together in series, they can replace normal batteries for a short amount of time.

Sony has developed a biological battery that generates electricity from sugar in a way that is similar to the processes observed in living organisms. The battery generates electricity through the use of enzymes that break down carbohydrates, which are essentially sugar. A similarly designed cell can power a phone, using enzymes to generate electricity from the carbohydrates in a sugary drink to cover the phone's electrical needs; it needs only a pack of the sugary drink, and it produces water and oxygen as the battery runs down.

Lead acid cells can easily be manufactured at home, but a tedious charge/discharge cycle is needed to 'form' the plates. This is a process whereby lead sulfate forms on the plates, and during charge is converted to lead dioxide (positive plate) and pure lead (negative plate). Repeating this process results in a microscopically rough surface, with far greater surface area being exposed. This increases the current the cell can deliver.

Daniell cells are also easy to make at home. Aluminum-air batteries can also be produced with high-purity aluminum. Aluminum foil batteries will produce some electricity, but they are not very efficient, in part because a significant amount of hydrogen gas is produced.