

[Cover figure: orbits of the inner solar system (Mercury, Venus, Earth, Mars) compared with the newly detected extrasolar planets and their masses: 47 UMa (2.4 M_Jup), 16 Cyg B (1.7 M_Jup), 51 Peg (0.47 M_Jup), 55 Cancri (0.84 M_Jup), Tau Bootis (3.8 M_Jup), Upsilon Andromedae (0.68 M_Jup), 70 Vir (6.6 M_Jup), HD 114762 (10 M_Jup). See caption below.]


Physics News in 1996, prepared by the Public Information Division of the American Institute of Physics, is the 27th in a series of annual reviews of research highlights. The articles in Physics News in 1996 were selected and prepared by the AIP Member Societies. The editors would like to thank the following individuals for helping to organize the chapters and in some cases to write the articles:

Phillip F. Schewe, Editor
Ben P. Stein, Assistant Editor

The American Physical Society
Division of Atomic, Molecular and Optical Physics: Michael H. Prior, Lawrence Berkeley Laboratory
Division of Biological Physics: Frank Moss, University of Missouri at St. Louis
Division of Chemical Physics: Robin M. Hochstrasser, University of Pennsylvania
Division of Condensed Matter Physics: David Litster, MIT
Division of Fluid Dynamics: James J. Riley, University of Washington
Division of High Polymer Physics: Jeffrey Koberstein, University of Connecticut
Division of Nuclear Physics: Bunny Clark, Ohio State
Division of Particles and Fields: Paul Grannis, SUNY, Stony Brook
Division of Physics of Beams: William Herrmannsfeldt, SLAC
Division of Plasma Physics: Richard Petrasso, MIT Plasma Fusion Center

Acoustical Society of America: Daniel W. Martin, ASA
American Astronomical Society: Stephen P. Maran, NASA/Goddard Space Flight Center
American Association of Physicists in Medicine: Steve Thomas, University of Cincinnati Medical Center
American Association of Physics Teachers: Bernard V. Khoury, AAPT
American Crystallographic Association: Marcia Vair, ACA
American Geophysical Union: Dave Thomas, AGU
American Vacuum Society: Gary McGuire, MCNC

Before last year, only one solar system was known: our own. Now several extrasolar planetary systems have been detected through a method that looks for a slight change in a star's spectrum owing to the presence of a nearby planet. The figure shows the planets' masses (in comparison to the mass of Jupiter) and the sizes of their orbits (scaled to Earth's orbit). See the associated article in the astrophysics chapter. (Adapted from Leigh Anne McConnaughey by Malcolm Tarlton.)

Additional copies of Physics News in 1996 can be obtained by sending $5.00 for the first copy and $3.00 for each additional copy to: American Institute of Physics, c/o AIDC, PO Box 20, Williston, VT 05495-0020, (800) 809-2247, (802) 862-0095 for Vermont residents. Checks made payable to the American Institute of Physics and drawn on a bank in the USA and in US dollars must accompany the order. If billing is required, shipping charges of $2.75 for the first copy and $0.75 for each additional copy will be added to the invoice.

A Supplement to APS News May 1997

ACOUSTICS

Speech Production Parameters for Automatic Speech Recognition

What we hear as speech is produced by the continuous movement of the speech articulators, such as the tongue, lips, and larynx. These articulators modulate air flow in such a way that speech sounds reach our ear. Do we in any way perceive the movements of those articulators as part of our perception of human speech? One of the mysteries of speech production and perception is the transformation between the discrete units of linguistics and the continuous nature of speech production. For instance, the three distinct sounds, or phones, in the word cop are blended together in a continuous waveform, which is created by the continuous movement of the speech articulators. The human listener is able to decompose the continuous sound stream and thereby recall the component sounds of the word. In this process, does the listener work strictly from the acoustic signal, without reference to how the articulators move, or is that movement itself an object of perception?

A similar decomposition problem exists in automatic speech recognition (ASR) by machine, where a continuous signal must be decoded into a string of discrete units. Would it help statistical ASR algorithms to incorporate constraints inherent in the speech production process, which would at least partly characterize the sending channel? Furthermore, there appears to be a similarity between the difficulty of answering whether humans perceive speech directly as gestures of the tongue and lips and the difficulty of deciding how to include articulatory constraints in the statistical models used in ASR.

Most of the progress toward incorporating articulatory representations into ASR has been enabled by experiments that measure and model speech articulatory movement and coordination. The task-dynamic model, for example, describes the coordinated movement of the speech articulators in terms of control parameters of dynamic systems. The model can provide a bridge from the discrete phone units to the continuous movement of articulators. The sequence of these parameters used to produce a word can be expressed in the form of a list called a gestural score. Something akin to gestural scores may provide part of the abstract representation of articulatory movement that several researchers are considering.
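
As a loose illustration of the data structure (hypothetical gestures and timings, not the task-dynamic model itself), a gestural score can be represented as a list of articulatory gestures, each active over an interval of time:

```python
# Hypothetical gestural score for the word "cop": each entry names an
# articulatory gesture and the interval (in ms) over which it is active.
# Labels and times are illustrative, not measured data.
gestural_score = [
    ("tongue-body closure (velar)",  0,   80),   # /k/
    ("tongue-body lowering (vowel)", 60,  220),  # /a/ (overlaps the /k/ release)
    ("lip closure (bilabial)",       200, 280),  # /p/
]

for gesture, onset, offset in gestural_score:
    print(f"{onset:3d}-{offset:3d} ms  {gesture}")
```

The overlapping intervals are the point: discrete gestures coarticulate in time, which is exactly the discrete-to-continuous bridge described above.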

Recent work at The Ohio State University (OSU) has shown how gestural scores can be derived from data on tongue and lip movement.1 The articulatory movement data were obtained using the x-ray microbeam machine at the University of Wisconsin, which produces x-ray images of tongue and lip movement using very low dosages of x rays. The OSU group was further able to train a neural network program (essentially an algorithm that "learns" from past experience) to perform phone recognition from the gestural scores that they derived. To do ASR with gestural scores, in the sense of recognizing the words of a spoken message from the acoustic waveform, will require that gestural scores be derived from the waveform. Researchers at Haskins Laboratories are experimenting with recovering gestural scores from the speech waveform in computer simulation experiments.2 In addition, Li Deng of the University of Waterloo has been incorporating gestural scores into speech recognition systems.3 Further progress in all of these areas can be expected.

Richard S. McGowan ([email protected])
Alice Faber ([email protected])
Haskins Laboratories

1. T.-P. Jung, A. K. Krishnamurthy, S. C. Ahalt, M. E. Beckman, and S.-H. Lee, IEEE Trans. Speech Audio Process. 14, 2 (1996).
2. R. S. McGowan and M. Lee, J. Acoust. Soc. Am. 99, 595 (1996).
3. L. Deng and D. X. Sun, J. Acoust. Soc. Am. 95, 2702 (1994).


ASTROPHYSICS

A Plethora of Extrasolar Planets

The 12 months from October 1995 to the time of writing witnessed a revolution in the understanding of planetary systems other than our own. In the fall of 1995, when Michel Mayor and Didier Queloz of the Geneva Observatory announced the detection of a Jupiter-sized planet orbiting the star 51 Pegasi,1 only one similar object was known to orbit a star like our Sun. That was a 10-Jupiter-mass companion of the star HD 114762, which had been discovered in 1989 by David Latham, Robert Stefanik, and colleagues at the Harvard–Smithsonian Center for Astrophysics. (That the HD star's companion was actually a planet or other substellar object was not widely accepted until after the Mayor–Queloz announcement.) But by the close of 1996, following announcements by Geoffrey Marcy and Paul Butler of San Francisco State University, William Cochran and Artie P. Hatzes of the University of Texas, and George Gatewood of Allegheny Observatory, the count of new solar systems had risen to at least nine (see the cover figure).2–5 The abundance of such solar systems implied that planets around stars like our sun were commonplace, although the properties of the new planets were sufficiently strange to cast doubt on the adequacy of current models of how they formed.

None of these new objects could be seen directly, since they were swamped by the light of their mother stars. As a planet orbits, however, its gravitational pull causes the star to wobble back and forth. That wobble can be measured most easily by noting the Doppler shift in the spectrum of the star, a subtle change in the wavelength of absorption lines in response to the motion of the star. If a star is moving away, spectral lines appear at longer wavelengths, and if the star is moving toward the observer, the lines are shifted to shorter wavelengths. A star with a companion planet will typically wobble back and forth at speeds ranging from a few meters to a few hundred meters per second. The corresponding Doppler shifts are so small (about one part in ten million) that they are difficult to measure, since variations in the temperature and alignment of astronomical spectrographs readily produce variations in wavelength calibration that mask the measurements. A steady improvement in both the construction of spectrographs and the analysis of spectra in recent years, however, has made precision of a few meters per second possible. Marcy and Butler, for instance, impress a spectrum of iodine vapor over the spectrum of the star, producing a set of reference absorption lines that can be used to continuously monitor and update the wavelength calibration.6
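
The quoted sensitivity is easy to check (a back-of-the-envelope sketch, not the observers' actual analysis): the non-relativistic Doppler formula gives a fractional wavelength shift of v/c, so a reflex speed of a few tens of meters per second shifts the lines by about one part in ten million.

```python
# Scale of the Doppler wobble: fractional wavelength shift is v/c
# (non-relativistic), for representative stellar reflex speeds.
c = 2.998e8                               # speed of light, m/s
for v in (3.0, 30.0, 300.0):              # stellar reflex speeds, m/s
    print(f"v = {v:5.0f} m/s -> dlambda/lambda = {v / c:.1e}")
# 30 m/s gives ~1e-7, i.e., about one part in ten million, as quoted.
```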

Even so, discovering planets is not easy. Taking our solar system as a paradigm, observers expected that recognizing the telltale wobble of stars caused by planets would take many years, since the orbital periods of the largest planets, Jupiter and those beyond, are five years or more. Marcy and Butler had been making high-precision velocity measurements of a sample of over 100 nearby sunlike stars since the late 1980s, but had not analyzed the data fully.2 Their sense of urgency took a quantum leap, however, with Mayor and Queloz's discovery. The planet around 51 Pegasi, though about half the mass of Jupiter, orbited every 4.2 days in a circular orbit with a radius of about 5 million miles. Scrutinizing their collected measurements, Marcy and Butler confirmed the Geneva discovery, and announced two new planets, orbiting 47 Ursae Majoris and 70 Virginis, at the January 1996 meeting of the American Astronomical Society. The planet around 47 Ursae Majoris, nearly three times the mass of Jupiter, moved in an almost circular orbit, slightly larger than the orbit of Mars, with a period of 2.98 years. This is the sort of orbit expected for a giant planet in our solar system. The planet around 70 Virginis, in contrast, moved in a quite different fashion. Its 116-day orbit was highly eccentric, bringing it briefly closer to its star than the planet Mercury is to the sun, then swinging out again to the distance of Venus.
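
Those numbers are mutually consistent: assuming a roughly solar-mass host star (an assumption for this sketch), Kepler's third law turns the 4.2-day period into an orbital radius of about five million miles.

```python
# Kepler's third law check for the 51 Peg planet:
# a = (G*M*T^2 / (4*pi^2))^(1/3), assuming a one-solar-mass host star.
import math

G     = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30         # solar mass, kg
T     = 4.2 * 86400      # orbital period, s

a = (G * M_sun * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
print(f"orbital radius ~ {a:.2e} m ~ {a / 1609.34:.1e} miles")
# ~7.6e9 m, i.e., roughly 5 million miles, matching the figure in the text.
```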

The succeeding months brought additional discoveries, as researchers analyzed their data and made new observations. A number of additional planets like 51 Pegasi were discovered, as massive as Jupiter, but in circular orbits of short period. These included planets around 55 Cancri (14.7-day period), Upsilon Andromedae (4.6-day period), and Tau Bootis (3.3-day period), all discovered by Marcy and Butler. In October they, along with Cochran and Hatzes, announced the discovery of another planet, orbiting 16 Cygni every 2.2 years in a highly elliptical orbit like the one around 70 Virginis.6 An additional planetary candidate was contributed by George Gatewood of Allegheny Observatory, who detected a telltale astrometric wobble in the star Lalande 21185 by a long series of precise measurements of its position relative to nearby stars, deducing the presence of a Jupiter-mass object with a period of 5.8 years, along with a possible second planet with a period of 30 years.7

To the initial consternation and eventual delight of theorists, none of these planets met our reasonable expectations. According to current understanding, planets arise in the flattened disk of gas and dust surrounding a forming star by a process called accretion. In the accretion process, solid particles begin to condense and collide, forming larger chunks which can then gravitationally attract even more matter.3 In the inner solar system, planetary accretion continues until the new sun's heat clears away remaining gas and dust. In the outer solar system, however, where the temperature is cooler, rocky planetary cores can continue to attract gas for a much longer time, producing "gas giant" planets like Jupiter and Saturn, where the original rock and ice core is surrounded by tens or hundreds of times additional mass from the presolar nebula. Thus we would expect small, rocky planets (a tenth the mass of Jupiter) within a few hundred million miles of a sun, with gas giants out at a half billion miles or more. In addition, the averaging effects of collisions during the accretion process would ensure that all planetary orbits were circular.

Acoustic Waveguides as Tools in Fundamental Nonlinear Physics

Imagine a beam of intense green light guided by an optical fiber. If the light is made to alternate between bright and dim at the source, there will be periodic locations down the fiber where you will see light alternating between red and blue. Moving further along, you will again see green light alternating between bright and dim. Provided that the fiber is long, and the light not subject to energy losses along the fiber, this periodic variation in the color will repeat indefinitely. This mechanism thus allows the possibility of providing tunable coherent light from a single-frequency source.

We have recently probed this concept in acoustics by using a long pipe as a way of guiding high-intensity sound waves.1 This "AM–FM conversion" effect (so named because of the way in which the wave alternates between amplitude and frequency modulation) occurs only when a wave interacts with itself, an interaction that is normally negligible unless the amplitude is sufficiently high. AM–FM conversion is thus an example of a nonlinear effect.

Since the days of Lord Rayleigh, acoustic waveguides such as the one employed above have played a major role in the advancement of fundamental physics as well as technology. In investigations of both low- and high-intensity sound, waveguides are frequently useful in restricting the propagation to one dimension. This is accomplished by operating at sound frequencies below those of the "transverse modes," in which the wave front bounces along the walls of the waveguide. In one-dimensional propagation in acoustic media, sound waves at all frequencies typically move with the same speed. A wave packet, which is the superposition of waves at many frequencies, will therefore not spread but will retain its shape as it propagates. The medium is thus referred to as nondispersive. By contrast, transverse modes in a waveguide are dispersive, and for that reason waveguides are frequently employed precisely to introduce dispersion.
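
For a rough sense of scale (a standard duct-acoustics estimate, not a result from the article): in a rigid-walled duct of transverse dimension $d$, only plane waves propagate below the first transverse-mode cutoff,

$$ f_c \sim \frac{c}{2d}, $$

so a pipe a few centimeters across, with $c \approx 343$ m/s in air, restricts propagation to one dimension for frequencies up to a few kilohertz.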

In particular, waveguides have been used to further our understanding of solitons, which are localized waves of constant shape that behave like particles. Acting alone, nonlinearity causes points of greater amplitude in a wave to move faster, so that the wave will eventually "break" in the case of water waves or "shock" in the case of acoustic waves.

Dispersion, on the other hand, acts to spread a wave. Solitons result from a stable balance between nonlinearity and dispersion. Discovered in 1834 by Scott Russell, these waves have attracted interest in many fields of physics and in biology, and are dramatic examples of inherently nonlinear behavior. Envelope solitons, which are among the best-known types, result from an instability in which an amplitude modulation of a wave initially grows and eventually self-localizes. Certain transverse modes in special (e.g., elliptical) waveguides might support these solitons, leading to novel effects.1 For example, because nonlinearities may cause a greater temperature increase in a compression than the corresponding temperature decrease in a rarefaction, it may be possible to employ acoustic solitons to transport localized "heat patches" in heat pumps.

Another recent application of waveguides is in the determination of the relation between stress and strain in rocks. Nonlinearity must be at work, since the measured strain for a given stress does not increase in proportion to the stress. Furthermore, the static and dynamic behavior of rock exhibits hysteresis; that is, the new state of the system depends upon, or remembers, previous states. Yet another experiment is concerned with the extreme nonlinear behavior observed in the propagation of pulses2 in meter-long sandstone rods, in which the frequency of the pulses doubles and triples as the pulse propagates, for changes in the strain of only 10⁻⁷ (extrapolation to a 1-mm sample implies deformations of only one atomic diameter). This nonlinear behavior of rock has important consequences for processes such as earthquake slip3 and stress fatigue damage in concrete.
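
Spelling out that parenthetical (simple arithmetic, not an additional measurement): a strain of $10^{-7}$ over a 1-mm sample corresponds to a displacement of

$$ 10^{-7} \times 10^{-3}\ \mathrm{m} = 10^{-10}\ \mathrm{m} \approx 1\ \text{\AA}, $$

which is indeed about one atomic diameter.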

Waveguides can also be used to probe the nature of systems driven far from thermodynamic equilibrium. We have recently measured the attenuation of a weak signal4 due to high-intensity shockless acoustic noise in one dimension. Except for viscous and thermal losses, a signal would not experience any substantial attenuation in the presence of low-intensity acoustic noise. For larger noise levels, a signal propagating in three dimensions attenuates exponentially, with an attenuation length that depends only on the frequency of the signal, the intensity of the noise, and the properties of the medium. However, if the sound is constrained to propagate in only one dimension, the attenuation length depends also on the distance the signal has traveled, thus displaying memory. Consequently, the signal attenuates as half of a Gaussian (or "bell") curve. The noise itself has interesting properties, which include the redistribution of energy over different frequencies through nonlinearities. In addition, the noise may support new types of sound modes, referred to as collective modes5 because they involve all frequencies, in which a modulation of the noise propagates at a different speed than normal sound.
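
In symbols (notation mine; the article states only the qualitative shapes), the contrast between the two geometries is

$$ A_{3\mathrm{D}}(x) \propto e^{-x/\ell}, \qquad A_{1\mathrm{D}}(x) \propto e^{-(x/\ell')^{2}}, \quad x \ge 0, $$

where $\ell$ depends only on the signal frequency, the noise intensity, and the medium, while the half-Gaussian form in one dimension reflects the memory of the distance already traveled.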

Andrés Larraza
Naval Postgraduate School
([email protected])

Bruce Denardo
University of Mississippi
([email protected])

1. A. Larraza and W. Coleman, J. Acoust. Soc. Am. 100, 139 (1996); W. F. Coleman, M.S. thesis, U.S. Naval Postgraduate School (1993).
2. J. A. TenCate, K. E. A. Van Den Abeele, T. J. Shankland, and P. A. Johnson, J. Acoust. Soc. Am. 100, 1383 (1996).
3. I. A. Beresnev and K.-L. Wen, Bull. Seism. Soc. Am. 86 (August 1996).
4. A. Larraza, B. Denardo, and A. Atchley, J. Acoust. Soc. Am. (to be published).
5. A. Larraza and G. Falkovich, Phys. Rev. B 48, 9855 (1993).


Why then are Jupiter-mass planets, like those of 51 Pegasi, 55 Cancri, Upsilon Andromedae, and Tau Bootis, found only a few million miles away from their stars? As of this writing, the situation is unclear. Douglas Lin, of the University of California at Santa Cruz, has suggested that the close Jupiter-mass planets were formed at larger distances from the star and spiraled inward through interactions with the disk of material surrounding the star.4 The inward motion of the planet ceases, Lin suggests, when it encounters the inner edge of the dissipating nebula several million miles from the star, or when it is locked into orbit by tides raised on the rotating central star.

This leaves open the origin of the planets in more distant, eccentric orbits, like those around 70 Virginis, 16 Cygni, and HD 114762. It is possible that these are not planets at all, but brown dwarf stars, formed in the same fashion as larger stars, from the contraction and fragmentation of clouds of gas, but too low in mass to sustain nuclear reactions. Binary-star companions formed from fragments of a collapsing cloud, unlike planets formed by collective accretion, are expected to emerge in highly elongated orbits. Lin has suggested an alternate theory in which the eccentric-orbit objects are indeed large planets, formed by mergers of somewhat smaller accretion-formed planets in the condensing nebula.

Two things are clear, however, from the current uncertain state of affairs. First, as more observations of stellar wobbles are made and as the time baselines of the data sets get longer, the number of known extrasolar planets is bound to grow, perhaps by several orders of magnitude. And second, as knowledge of these systems grows, so will our understanding of the way, or ways, in which planetary systems and stars form by natural processes.

Laurence A. Marschall
Gettysburg College

1. M. Mayor and D. Queloz, Nature 378, 355 (1995).
2. M. Lemonick, Time, 53 (February 1996).
3. A. P. Boss, Physics Today 49, 32 (September 1996).
4. D. N. C. Lin, P. Bodenheimer, and D. C. Richardson, Nature 380, 606 (1996).
5. J. Schneider, Extrasolar Planets Encyclopedia, http://www.obspm.fr/planets.
6. G. Marcy and P. Butler, San Francisco State University Planet Search Page, http://cannon.sfsu.edu/~williams/planetsearch/planetsearch.html.
7. G. Gatewood, Bull. Am. Astron. Soc. 28, 885 (1996).

Denizens of the Hubble Deep Field

There is an enlarged reproduction of the Hubble deep field (HDF) on a wall at the Waimea, Hawaii headquarters of the W. M. Keck Observatory, on which there is a transparent overlay. On the overlay, next to each of many distant galaxies in the HDF, are numbers that represent the redshifts of these galaxies, as obtained with the 10-m Keck I telescope. (The galaxies are so faint that, near the detection limit, a collecting area equal to that of the human eye, located above the Earth's atmosphere, would receive only one photon per week from a whole galaxy of hundreds of billions of stars.) To an astronomer knowledgeable in the ways of the profession, the fact that these data represent the efforts of eight different science teams, each granted precious observing time on the world's largest telescope for what is substantially a common cause, can only underline the high esteem in which our profession holds the HDF, a product of ten consecutive days of observations with the much smaller Hubble Space Telescope in December 1995. Interested scientists do not need to fly to Hawaii to jot down the redshifts on the wall poster, but can find some of them on a World Wide Web page maintained by the Faint Galaxy Consortium at the California Institute of Technology, which laconically notes, "There are plans to complete the sample in various photometric band passes but in view of the tremendous interest in these results we are making them public now."

The HDF is so much more revealing than comparable images from ground-based telescopes because of the high spatial resolution (about 0.1 arcsec) attained with the orbiting 2.4-m Hubble, and the black sky of space against which it was observed. Exposures were made in each of four wavelength bands, spanning the range from the violet to the far red. Superposed, they yield a stunning color image of more than 1600 galaxies (some as faint as 30th magnitude, or 4 billion times fainter than the normal limit of the unaided human eye) and a handful of faint stars, all within a tiny area of Ursa Major, 5.3 square arcminutes (see Fig. 1). Considered separately and intercompared, the images made in different color channels permit estimates of spectral energy distributions that may tell how far away the galaxies are, for the great majority of objects for which Keck has not yet obtained a redshift and for those which may be beyond its reach; the "photometric redshifts" derived from these energy distributions are controversial and not very reliable individually, although useful statistically.
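
That brightness factor follows from the magnitude scale (taking the naked-eye limit to be 6th magnitude, an assumption of this check):

```python
# Brightness ratio between the naked-eye limit (assumed 6th magnitude)
# and the faintest HDF galaxies (30th magnitude). Each 5 magnitudes
# is a factor of 100 in brightness, i.e., ratio = 10**(dm/2.5).
ratio = 10 ** ((30 - 6) / 2.5)
print(f"{ratio:.1e}")   # ~4e9: about 4 billion times fainter, as quoted
```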

The energy distributions, properly interpreted (and different groups may do so differently), reveal whether the galaxies are in the throes of massive early star formation or other evolutionary stages with strong consequences for the collective light of a galaxy. Likewise, the HDF images are fertile ground for investigators' attempts to classify galaxies of different types over a wide range of distances and corresponding eras of time since the Big Bang, to see how these building blocks of the universe have undergone dramatic changes (and how some of them have not).

There is great ferment nowadays in observational cosmology in general and in interpretations of the HDF in particular. Not only do relatively few of the conclusions from published studies of HDF data agree in detail, but the researchers often disagree on the very data. There is no single anointed way to classify the image of a very distant galaxy whose received light has been highly redshifted. There is not even universal agreement on which objects in the HDF are to be counted or considered as individual galaxies, as against those that may be subgalactic clumps, which either should not be counted as galaxies or which should be counted in a special way, for example, by counting all alleged clumps within a certain radius as one, implying that they pertain to a single, perhaps future, galaxy. The "clumps" may be objects in the process of merging to form galaxies, or they may be bright, compact regions of otherwise unseen galaxies whose overall structures have too dim a surface brightness to show up even in the HDF. Or the little clumps may be transient structures in pregalactic regions that formed and subsequently faded or dissipated. Unlike most data from the Hubble Space Telescope, which remain proprietary to the observers for a period of one year, the HDF observations, installed on a computer at the Space Telescope Science Institute in Baltimore, were made public the moment they were ready, on January 15, 1996, a few short weeks after the observations were made. The release prompted a media report that the number of galaxies thought to exist in the universe had increased from 10 to 50 billion by extrapolation from the HDF images.1 However, no such statement appears in the formal publication of the HDF images and a catalog of objects, nor in the numerous scientific papers that have already discussed the HDF.2

What the literature of the past year does reveal, however, is that astronomers inspired by the Hubble deep field have searched for evidence of statistical gravitational lensing of very distant galaxies by less remote objects, finding from the detected lensing effects evidence for the increase in dark matter in the halos of earlier galaxies than previously studied.3 Investigating the distribution of the HDF objects on the sky, astronomers have concluded that many small objects are subgalactic clumps, as noted above,4 and that the presence of distant, immense "great walls" of galaxies echoes findings in the nearer universe.5

Infrared images of areas in the HDF made with the European Space Agency's orbiting Infrared Space Observatory suggest that "stars formed at a prodigious rate in some of the remotest known galaxies in the universe."6 Additional infrared images were made with the University of Hawaii's 2.2-m telescope on Mauna Kea. Radio astronomers conducted a 50-h observation of the whole HDF and the adjacent "Flanking Fields" (which were imaged with Hubble, although not to as faint magnitudes as the HDF) with the Very Large Array radio telescope.7

Astronomers have compared the HDF data on distant galaxies with the predictions of various cosmological models, finding that models which may satisfy existing ground-based observations fail when confronted with the deeper data from Hubble.8 Catalogs of the numerous galaxies in the HDF have been formulated by various techniques, including work on 1683 galaxies sorted according to their spectral energy distributions, with claimed redshifts, derived thereby, that exceed 6 (four galaxies), although no proper spectroscopic redshift as large as 5 has yet been observed in any galaxy on the sky.9 Also, a catalog of the faint galaxies sorted according to their shapes showed that interacting galaxies are much more common in the HDF than among nearby galaxies, that barred spiral galaxies seem to be missing at the fainter magnitudes, and that the fraction of distant galaxies that are the spectacular "grand design spirals" (among the most impressive galaxies in the local universe) is much smaller than among local systems.10

An image of the HDF has been posted on a World Wide Web page keyed to a catalog prepared at the University of Hawaii, so that any Web surfer can click on a randomly selected galaxy and bring up known data such as coordinates, spectral energy distribution, morphological classification, etc.11

Various kinds of less numerous objects in the HDF have been identified and catalogued, including quasars and active galactic nuclei, previously unrecognized strongly gravitationally lensed distant galaxies, and very faint stars, including those that might be associated with the so-called dark-matter halo of our own Galaxy.12 The results, if not compromised fatally by small sample statistics, may constrain models of the halo's stellar composition, as well as models of the relative numbers of nearby faint stars that belong to different populations.

The data in the Hubble Deep Field, carefully analyzed and supported by follow-up observations with other telescopes and with two new instruments that (at the time of writing) were scheduled to be installed onboard Hubble in February 1997, should be capable of telling us how galaxies have evolved in form and content over the aeons, and when during that process there were great episodes of star formation. We may learn as well the correct values of long-sought cosmological parameters. Already, some specific results have been claimed, but few appear to be definitive. Many more redshifts must be measured, and many more correlative observations must be made in other wavelength bands, including the infrared and the ultraviolet, before the last word, or possibly even the first definitive word, will have been said on the Hubble deep field.

FIGURE 1. The target for the Hubble Deep Field was a carefully selected piece of sky near the handle of the Big Dipper (part of the northern circumpolar constellation Ursa Major, the Great Bear). The field is far from the plane of our galaxy and so is "uncluttered" of nearby objects, such as foreground stars. The target field is, by necessity, in the continuous viewing zone (CVZ) of Hubble's orbit, a special region where Hubble can view the sky without being blocked by Earth or interference from the Sun or Moon.

Inevitably, the results mentioned above and also the corresponding bibliographical references are incomplete. We apologize to those not mentioned, while looking forward to the next exciting findings.

Stephen P. Maran
NASA/Goddard Space Flight Center

1. J. N. Wilford, The New York Times (January 16, 1996).
2. R. E. Williams et al., Astron. J. 112, 1335 (1996).
3. I. P. Dell'Antonio and J. A. Tyson, Astrophys. J. Lett. 473, L13 (1996).
4. W. N. Colley et al., Astrophys. J. Lett. 473, L63 (1996).
5. J. G. Cohen et al., Astrophys. J. Lett. 471, L5 (1996).
6. M. Rowan-Robinson, Royal Astronomical Society Press Notice 96/28 (November 7, 1996).
7. E. B. Fomalont et al., Astrophys. J. (in press).
8. H. C. Ferguson, in Proceedings of the 37th Herstmonceaux Conference on "HST and the High Redshift Universe" (in press).
9. K. M. Lanzetta, A. Yahil, and A. Fernandez-Soto, Nature 381, 759 (1996).
10. S. van den Bergh et al., Astron. J. 112, 359 (1996).
11. A. S. Cowie, Hawaii Active Catalog of the Hubble Deep Field, http://www.ifa.hawaii.edu/~cowie/tts/tts.html (1996).
12. A. Conti et al., Bull. Am. Astron. Soc. (in press); S. E. Zepf, L. A. Mostakas, and M. Davis, Astrophys. J. Lett. 474, L1 (1997); C. Flynn, A. Gould, and J. Bahcall, Astrophys. J. Lett. 466, L55 (1996); I. N. Reid et al., Astron. J. 112, 1472 (1996).



ATOMIC, MOLECULAR, & OPTICAL PHYSICS

Ultrashort Light Pulses: Pushing the Limits. Femtosecond lasers—devices that can deliver pulses of light lasting only quadrillionths of a second—are one of the most powerful new tools developed in laser science during the past 20 years. These state-of-the-art devices can take snapshots of ultrafast chemical reactions, characterize silicon surfaces during the manufacture of computer chips, separate different isotopes of the same chemical element, and perform a host of other applications. As described by Henry Kapteyn and Margaret Murnane of the University of Michigan, recent breakthroughs include the fastest waveform measurement ever—a picture of the electric field associated with a femtosecond pulse lasting only 10 quadrillionths of a second. Separate research demonstrated that high-power femtosecond pulses do not rip apart atoms immediately, but only after a few femtoseconds; in this study, researchers generated the shortest x-ray pulses ever, lasting only several quadrillionths of a second. As the authors point out, it is now possible not only to make these ultrashort bursts of light energy, but to carefully control their properties, including their shape.

A New "Momentum Microscope" Views Atomic Collision Dynamics. Atomic physicists have developed a new type of microscope—one that does not yield spatial pictures of an object, but rather the momentum distributions of particles involved in atomic breakup processes. Known as "cold target recoil ion momentum spectroscopy," or COLTRIMS, this technique involves the collection of low-energy collision products by a weak electric field and the detection of their motion via two-dimensional position-sensitive detectors. As explained by researchers at the University of Frankfurt and the GSI-Darmstadt facility in Germany, COLTRIMS has helped to solve some longstanding puzzles in atomic collision physics. For instance, Robert Moshammer of GSI-Darmstadt and his coworkers have shown that a fast, highly charged ion can act in a fashion similar to a high-intensity light pulse to ionize an atom into a positive core and one or more electrons.

Precision Molecular Spectroscopy with Cold Trapped Atoms. By trapping atoms and cooling them to ultralow temperatures, researchers have managed to produce some of the most unusual diatomic (two-atom) molecules ever—and perform record-breaking measurements with them. For instance, physicists at the National Institute of Standards and Technology (NIST) created a sodium diatomic molecule (Na₂) whose two atoms are spaced as much as 30 times farther apart than the atoms in an atmospheric oxygen molecule. The NIST molecule exists in what is known as a "pure long-range state," in which one atom is in the lowest-energy or "ground" state, and the other is in the next highest energy level, known as the first excited state. The forces between the atoms depend on their separation and their "dipole strength"—a quantity which indicates how well an atom absorbs light. In turn, the dipole strength can provide information on how long it takes an atom to decay from an excited state to its lowest energy level. By measuring these and other novel diatomic molecules, researchers have determined the excited-state lifetimes to record accuracies of 0.1% for sodium atoms and 0.03% for lithium atoms. In his article, Paul Julienne of NIST describes how the creation of molecules in ultracold collisions between atoms is leading to these and other types of precision measurements.

Experimental Realizations of Quantum Thought Experiments. In 1996, researchers performed a number of breathtaking demonstrations in fundamental quantum physics. In March, a team in France announced that they had provided the first truly compelling demonstration that an electromagnetic field bounded by a pair of walls (a "cavity") is quantized—made up not of continuous waves, but of discrete packets of energy, or "quanta." In May, researchers in the U.S. published a paper providing an atom-level demonstration of the famous Schrödinger's cat thought experiment, in which the fate of a nonquantum property is intimately interlinked with a quantum property. In December, physicists announced the first quantitative measurement of quantum decoherence—the breakdown of a system's quantum properties—in a system made of several particles separated over a mesoscopic scale (a distance scale between the quantum and classical worlds). Serge Haroche of the Ecole Normale Supérieure and David Wineland of the National Institute of Standards and Technology belong to the research teams that performed the aforementioned experiments. In their article, they explain that all of these experiments have something in common: they are powerful new demonstrations of the Jaynes–Cummings model, a long-time favorite of theorists which describes the exchange of energy between two special quantum systems, a two-level quantum system (one with two possible energy levels) and a quantized harmonic oscillator, a quantum system with many equally spaced energy levels, whose behavior is similar in form to that of a tiny weight attached to a simple spring.

Ben P. Stein
AIP

Ultrashort Light Pulses: Pushing the Limits

Light is an electromagnetic wave, similar to a radio wave or the electrical signals in a computer circuit, but at a much higher oscillation frequency. The time for two successive crests of a light wave to pass through a point is only a few femtoseconds (1 fs = 10⁻¹⁵ s, a quadrillionth of a second). This time scale also represents the fundamental "clock speed" for the basic processes (such as chemical reactions) underlying the world we live in. As a result, femtosecond pulses are often used as an ultrafast camera shutter, to "freeze" the motion of chemical reactions in time, so that we can understand their basic mechanisms. Femtosecond time scales are profoundly short; the geometric mean of 10 fs and the age of the universe (obtained by multiplying the two numbers and taking the square root) is a time of only one minute. In the 1990s, there has been a revolution in the technology of laser sources, resulting in the ability to simply and reliably produce light pulses as short as 3 optical cycles in duration, both in the visible and x-ray regions of the spectrum. More recently, powerful new techniques to obtain accurate pictures of the exact shape of these ultrashort light pulses have been developed, which are revolutionizing the way we think about and use light.
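
That claim checks out numerically (taking the age of the universe to be roughly 14 billion years, an assumption; mid-1990s estimates ranged from about 10 to 15 billion, which changes the answer only slightly):

```python
# Geometric mean of 10 fs and the age of the universe.
import math

fs = 1e-15
year = 3.156e7                      # seconds per year
age = 14e9 * year                   # ~4.4e17 s

print(math.sqrt(10 * fs * age))     # ~66 s, i.e., about one minute
```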

The technology of femtosecond lasers began in the 1980s, and as early as 1986 resulted in the generation of 6-fs pulses by researchers at Bell Labs.1 However, the technology remained very difficult until the discovery by Wilson Sibbett at the University of St. Andrews in 1990 of a way to make a simple, reliable, crystal-based laser which spontaneously emits short pulses.2 Work by our group and that of the Technical University of Vienna made it possible to routinely generate light pulses shorter than 10 fs using such a laser.3,4 The stability, reliability, and average power of these pulses are vastly superior to those of femtosecond pulses generated previously. These improvements made it possible to develop new ways of looking at and using this light. For example, the work of Rick Trebino and coworkers at Sandia Labs now makes it possible to take a "snapshot" of the exact shape of an ultrashort pulse in time, using a combination of new experimental techniques and calculational algorithms similar to those used in medical CAT-scan images.5 This technique is referred to as frequency-resolved optical gating (FROG).

FIGURE 1. Measured electric field waveform of a light pulse only 10 fs (quadrillionths of a second) in duration.

In recent work in our group, we applied FROG to pulses as short as 10 fs, allowing our students, Greg Taft and Andy Rundquist, to make the fastest electrical waveform measurement ever.6 The FROG output, shown in Fig. 1, is equivalent to an "optical oscilloscope," but with a speed thousands of times faster than even the fastest conventional oscilloscope. Furthermore, techniques have been developed which make it possible to manipulate the shape of these ultrashort pulses.7,8 The result of all of this work is that in the past year, it has become possible to produce, measure, and manipulate light pulses as electromagnetic waveforms, rather than simply as bursts of energy. This revolution in thinking has implications for our study of dynamic processes in nature, our ability to generate light from the far infrared to the x-ray regions of the spectrum, and the possibility of communications networks with terabit/s (1 terabit/s = 10¹² bits/s) data rates.
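
A toy numerical sketch of the gating idea behind FROG follows (illustrative only: a real FROG measurement also runs an iterative phase-retrieval algorithm on the measured trace, and the pulse parameters below, including the 800-nm carrier, are hypothetical):

```python
# Build a model FROG-style trace: the pulse gates a delayed copy of
# itself, and the gated product is spectrally resolved at each delay.
import numpy as np

fs = 1e-15
t = np.linspace(-50, 50, 512) * fs
sigma = 10 * fs / (2 * np.sqrt(np.log(2)))       # 10-fs intensity FWHM
w0 = 2 * np.pi * 3e8 / 800e-9                    # carrier (800 nm, assumed)
E = np.exp(-t**2 / (2 * sigma**2)) * np.exp(1j * w0 * t)

delays = np.linspace(-30, 30, 121) * fs
trace = []
for d in delays:
    # delayed copy of the field (interpolate real and imaginary parts)
    gate = np.interp(t - d, t, E.real) + 1j * np.interp(t - d, t, E.imag)
    # spectrum of the gated product at this delay
    trace.append(np.abs(np.fft.fftshift(np.fft.fft(E * gate))) ** 2)
trace = np.array(trace)     # rows: delay, columns: frequency
print(trace.shape)          # (121, 512) delay-frequency map
```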

In very recent work, we generated high-power versions of these ultrashort optical pulses, which can produce focusable intensities greater than what would be obtained by focusing the entire solar flux incident on the earth onto a pinhead.9 Because the size of a laser depends primarily on its pulse energy, it is only by reducing the duration of a pulse in time that we can achieve such immense power densities using relatively small lasers. High-intensity light pulses allow us to access previously unexplored extreme states of matter—for example, to generate very short pulses of light at new wavelengths. Although atoms within such a focused laser are very quickly torn apart, in recent work we demonstrated that this does not occur instantly, but takes a few femtoseconds. We then used this fact to our advantage to generate the shortest x-ray pulses ever, only a few femtoseconds in duration. Using a technique for converting optical-frequency light into x rays,10,11 we found that by using very short laser pulses we could generate higher-energy x rays than previously possible—more efficiently, with greater tunability, and with an unprecedented short duration.12 These x rays, by virtue of x rays' ability to more directly observe atomic structure, will make it possible to study with unprecedented clarity the most basic processes which occur in our natural world—for example, the motion of atoms in chemical reactions and in material phase transitions. This new, table-top sized x-ray apparatus will allow some of the types of experiments previously performed using synchrotron facilities to now be performed with an added dimension: ultrahigh time resolution.

Many other potential applications of very short optical pulses exist. Ongoing research by others may mean that extremely short duration optical pulses will be used to perform on-site characterization of silicon surfaces during chip manufacture,13 and to take 3-D pictures of living cells.14 The applications are becoming a reality because ultrashort pulses combine two seemingly contradictory characteristics: extremely high peak power (which makes it possible to access nonlinear processes, such as how molecules jump to a higher-energy state by simultaneously absorbing two photons instead of a single photon) with very low average power (which makes the laser small and the disturbance caused by it relatively noninvasive). Very short duration ultrahigh-power optical pulses have the potential to efficiently drive future laser-based particle accelerators15 and to perform more accurate micromachining and laser surgery.16





A number of students and scientists contributed to this work—Sterling Backus, Zenghu Chang, Ivan Christov, Charles Durfee, Kira Maginnis, Greg Taft, Kendall Read, Andy Rundquist, Haiwen Wang, Erik Zeek, and Jianping Zhou. We gratefully acknowledge support by the National Science Foundation, the Air Force Office of Scientific Research, and the U.S. Department of Energy. M. Murnane and H. Kapteyn acknowledge Sloan Foundation Fellowships.

Henry C. Kapteyn ([email protected])
Margaret M. Murnane ([email protected])
University of Michigan

1. R. L. Fork, C. H. B. Cruz, P. C. Becker, and C. V. Shank, Opt. Lett. 12, 483 (1987).
2. D. E. Spence, P. N. Kean, and W. Sibbett, Opt. Lett. 16, 42 (1991).
3. J. P. Zhou, G. Taft, C. P. Huang, M. M. Murnane, H. C. Kapteyn, and I. P. Christov, Opt. Lett. 19, 1149 (1994).
4. A. Stingl, M. Lenzner, C. Spielmann, F. Krausz, and R. Szipocs, Opt. Lett. 20, 602 (1995).
5. D. J. Kane and R. Trebino, IEEE J. Quantum Electron. 29, 571 (1993).
6. G. Taft, A. Rundquist, M. M. Murnane, H. C. Kapteyn, K. W. DeLong, R. Trebino, and I. P. Christov, Opt. Lett. 20, 743 (1995).
7. A. M. Weiner, J. P. Heritage, and E. M. Kirschner, J. Opt. Soc. Am. B 5, 1563 (1988).
8. C. W. Hillegas, J. X. Tull, D. Goswami, D. Strickland, and W. S. Warren, Opt. Lett. 19, 737 (1995).
9. J. Zhou, C. P. Huang, M. M. Murnane, and H. C. Kapteyn, Opt. Lett. 20, 64 (1995).
10. A. McPherson, G. Gibson, H. Jara, U. Johann, T. S. Luk, I. A. McIntyre, K. Boyer, and C. K. Rhodes, J. Opt. Soc. Am. B 4, 595 (1987).
11. J. J. Macklin, J. D. Kmetec, and C. L. Gordon, III, Phys. Rev. Lett. 70, 766 (1993).
12. J. Zhou, J. Peatross, M. M. Murnane, H. C. Kapteyn, and I. P. Christov, Phys. Rev. Lett. 76, 752 (1996).
13. J. I. Dadap, X. F. Hu, M. H. Anderson, M. C. Downer, J. K. Lowell, and O. A. Aktsipetrov, Phys. Rev. B 53, R7607 (1996).
14. G. J. Brakenhoff, J. Squier, T. Norris, A. Bliton, M. H. Wade, and B. Athey, J. Microscopy 181, 253 (1996).
15. D. Umstadter, S.-Y. Chen, A. Maksimchuk, G. Mourou, and R. Wagner, Science 273, 472 (1996).
16. S. D. P. Pronko, J. Squier, J. Rudd, D. Du, and G. Mourou, Opt. Commun. 114, 106 (1995).

A New "Momentum Microscope" Views Atomic Collision Dynamics

A new "microscope" is providing spectacular views of the correlated motion of the fragments of atomic breakup processes. Whereas ordinary microscopes reveal the spatial structure of an object, this new approach provides multidimensional "pictures" of momentum distributions for all particles produced in fundamental atomic reactions, such as ionization of an atom by charged particles or by photons. The technique, termed "cold target recoil ion momentum spectroscopy" (COLTRIMS), allows the measurement of previously undetectable small momenta of ions emerging from atomic reactions. Furthermore, this is done with high resolution, over large 3-D slices of the collision volume, and often includes measurement of the momenta of electrons produced in the reaction.

In 10 years of development, researchers have progressed from making the first successful measurements of such ion momenta1 to creating today's high-resolution reaction "microscopes." This effort was pushed by groups at the University of Frankfurt and GSI (Germany), and at Kansas State University.2 Today additional systems are in use at CIRIL/GANIL (France),3 Riken (Japan), and the Lawrence Berkeley National Laboratory, and are being installed at the University of Missouri at Rolla and at Argonne National Laboratory.

The COLTRIMS technique collects the low-energy collision products (a recoil ion and electrons) in a weak, uniform electric field which projects their 3-D motions onto 2-D position-sensitive ion and electron detectors. Combined with measurement of the time-of-flight to the detectors, the 3-D momentum distributions can be determined. The result is similar to that obtained from nuclear or high-energy particle collision experiments, i.e., an event-by-event accounting of the momentum components of the reaction products. To gain the highest precision measurements of final momenta, a cold, supersonic helium jet target is used. In the case of atomic reactions, the relevant particle energies are truly tiny, only fractions of an electron volt, whereas nuclear or high-energy particle collision products have energies millions (or billions) of times higher.
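
A minimal sketch of the reconstruction arithmetic, assuming an idealized spectrometer with a single uniform-field region (real instruments add drift regions and electrostatic focusing, so this is illustrative only):

```python
# Recover an ion's initial momentum from detector position and
# time-of-flight, for a uniform extraction field along z.
def recoil_momentum(x, y, t, m, q, E, d):
    """Initial momentum (kg*m/s) of an ion born at the origin, extracted
    by field E (V/m) over length d (m), hitting the detector at
    transverse position (x, y) (m) after time-of-flight t (s)."""
    px = m * x / t                  # transverse motion is field-free
    py = m * y / t
    # longitudinal: d = (pz/m)*t + (q*E/(2*m))*t**2  ->  solve for pz
    pz = m * d / t - q * E * t / 2.0
    return px, py, pz

# Example with hypothetical numbers: singly charged helium ion,
# 1 V/cm extraction field, 3-cm spectrometer.
px, py, pz = recoil_momentum(x=1e-3, y=0.0, t=5e-6, m=6.64e-27,
                             q=1.602e-19, E=100.0, d=0.03)
print(px, py, pz)    # momenta of order 1e-24 kg*m/s, i.e., atomic scale
```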

In many cases, images of the momentum distributions of the ions and electrons from atomic reactions directly "display" the processes responsible for the breakup of the atom. Thus some longstanding puzzles in atomic collision physics were solved recently using this new approach, and many new questions and challenges to theory were raised.

For example, Lutz Spielberger (U. Frankfurt) and coworkers4 have recently separated different mechanisms of photoionization—in which light removes electrons from atoms or molecules to produce ions—by measurement of the final ion momenta. If a photon completely surrenders its energy to an atom, the atom breaks into two fragments (a positive ion and an electron) of equal momenta, while for Compton scattering (in which the photon loses only part of its energy) the photon makes a billiard-ball-like collision with one target electron, without direct momentum transfer to the atom's nucleus (see Fig. 2). This allowed for the first time the study of how the helium atom's second electron responds to these different interactions with the first electron. The new measurements provided a long-needed, conclusive experimental answer to intense controversy among theorists about the ratio of the probability for Compton-scattered light to remove two helium electrons to that for single ionization.
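
For a rough sense of the momentum scales involved (an estimate, not the measured data): full absorption of a 9-keV photon ejects a photoelectron carrying p = sqrt(2mE), and momentum balance forces the recoil ion to nearly match it, whereas Compton scattering leaves the ion close to rest.

```python
# Momentum scale of a photoelectron from full absorption of a 9-keV
# photon (binding energy neglected); the recoil ion nearly balances it.
import math

m_e, eV = 9.109e-31, 1.602e-19      # electron mass (kg), J per eV
E = 9e3 * eV                        # photon energy fully absorbed
p = math.sqrt(2 * m_e * E)
print(f"p ~ {p:.1e} kg m/s ~ {p / 1.993e-24:.0f} atomic units")
# ~26 atomic units: the "reef" sits far from the near-zero Compton needle.
```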

In another application of this new approach, Robert Moshammer (GSI-Darmstadt) and coworkers5 have demonstrated that the impact of a fast, highly charged projectile ion (moving 16 times faster than the mean velocity of electrons within the target atom) acts like a high-intensity light pulse, causing the atom to break into an ionic core and one or more electrons. The recoil of the ionic core is so large that it almost compensates the total momentum of the electrons. The measured momentum distributions of the freed electrons carry information on their correlated movement inside the atom on a time scale of 10⁻¹⁸ s—a billionth of a billionth of a second.

In lower-energy collisions, with proton projectiles moving slower than the electrons in the target helium atoms, Reinhard Dörner (U. Frankfurt) and coworkers and Scott Kravis (Kansas State Univ.) and coworkers were able to see that electrons appear to be "left stranded" on the saddle point between the two receding heavy partners6,7 (in this case, the saddle point refers to the location between the proton and helium ion at which they exert a net force of zero on the electron). This effect was predicted by Ron Olson (U. Missouri, Rolla) in 1983 and has since been a subject of intense theoretical and experimental research. The new momentum spectroscopy results have provided a definitive observation of this phenomenon.

A collaboration led by Kansas State University and the University of Frankfurt has employed this new technique to investigate photon-induced breakup of a helium atom into its three constituents (nucleus and two electrons) using light from the Lawrence Berkeley National Laboratory Advanced Light Source. The photons used were chosen near threshold for the process; that is, they had barely enough energy to achieve double ionization.8 The results show how the mutual forces between the three charged fragments yield simple momentum configurations in the final state which follow from basic arguments proposed by Wannier in 1953. Owing to the long-range nature of the Coulomb potential, the behavior of as few as three particles in the continuum is hard to treat by first-principles quantum mechanical calculations, and successful theoretical approaches have only recently been developed.

Although this new technique has already brought a rich harvest in ion–atom, electron–atom, and photon–atom collision physics, its use is just beginning to flower. Future applications can extend to fields like molecular physics or to tests of quantum electrodynamics (the theory that unifies quantum mechanics, special relativity, and classical electromagnetism) in collisions between heavy ions travelling near light speeds. Another exciting application is in the study of the radioactive process known as beta decay. With this new technique, one can expect precise measurements of the angular correlations between the decay products (the daughter nucleus, a positron or electron, and a neutrino). One may even imagine a high-resolution neutrino mass measurement for each beta decay event. Such a measurement, which would provide coveted information on whether neutrinos have mass, would employ an accurately determined neutrino momentum value deduced from measuring the momentum values of the daughter nucleus and the emitted electron or positron.
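In outline (standard kinematics rather than anything spelled out in the article), a decay at rest would give the neutrino momentum and mass from the conservation laws

    \vec p_\nu = -\left(\vec p_{\rm daughter} + \vec p_{e^\pm}\right), \qquad m_\nu c^2 = \sqrt{E_\nu^{\,2} - |\vec p_\nu|^2 c^2},

with E_ν fixed by the known decay energy minus the measured kinetic energies of the charged products.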

Joachim Ullrich
GSI-Darmstadt, Germany
([email protected])

FIGURE 2. The momentum distribution of ions produced by collisions of 9-keV photons with helium atoms. The sharp needle in the middle (at near-zero momentum) contains ions produced by Compton scattering (in which photons surrender only part of their energy in the collision), while the circular "reef" shows ion momenta created by photoabsorption (in which photons surrender all their energy). In the experiment, the photon electric field is directed along the x axis.


Reinhard Dörner
([email protected])

Horst Schmidt-Böcking
University of Frankfurt, Germany

1. J. Ullrich and H. Schmidt-Böcking, Phys. Lett. A 125, 193 (1987).
2. W. Wu, R. Ali, C. L. Cocke, V. Frohne, J. P. Giese, B. Walch, K. L. Wong, R. Dörner, V. Mergel, H. Schmidt-Böcking, and W. E. Meyerhof, Phys. Rev. Lett. 72, 3170 (1994), and references therein.
3. A. Cassimi, S. Duponchel, X. Flechard, P. Jardin, P. Sortais, D. Hennecart, and R. E. Olson, Phys. Rev. Lett. 76, 3679 (1996).
4. L. Spielberger, O. Jagutzki, B. Krässig, U. Meyer, Kh. Khayyat, V. Mergel, Th. Tschentscher, Th. Buslaps, H. Bräuning, R. Dörner, T. Vogt, M. Achler, J. Ullrich, D. S. Gemmell, and H. Schmidt-Böcking, Phys. Rev. Lett. 76, 4685 (1996), and references therein.
5. R. Moshammer, J. Ullrich, H. Kollmus, W. Schmidt, O. Jagutzki, V. Mergel, H. Schmidt-Böcking, R. Mann, C. Woods, and R. E. Olson, Phys. Rev. Lett. 77, 1242 (1996).
6. S. D. Kravis, M. Abdallah, C. L. Cocke, C. D. Lin, M. Stöckli, B. Walch, Y. Wang, R. E. Olson, V. D. Rodriguez, W. Wu, M. Pieksma, and N. Watanabe, Phys. Rev. A 54, 1394 (1996).
7. R. Dörner, H. Khemliche, M. H. Prior, C. L. Cocke, J. A. Gary, R. E. Olson, V. Mergel, J. Ullrich, and H. Schmidt-Böcking, Phys. Rev. Lett. 77, 4520 (1996).
8. R. Dörner, J. M. Feagin, C. L. Cocke, H. Bräuning, O. Jagutzki, M. Jung, E. P. Kanter, H. Khemliche, S. Kravis, V. Mergel, M. H. Prior, H. Schmidt-Böcking, L. Spielberger, J. Ullrich, M. Unverzagt, and T. Vogt, Phys. Rev. Lett. 77, 1024 (1996).

Precision Molecular Spectroscopy with Cold Trapped Atoms

By using light to combine two colliding, cold trapped atoms into a molecule, a new kind of high-precision molecular spectroscopy is probing the long-range forces between the atoms. This approach of "photoassociation spectroscopy" has been used to make the most precise measurements yet of the lifetimes of the first excited state of the Na and Li atoms, and to observe the influence of retardation corrections to the long-range force between one ground-state and one excited-state atom. (Retardation refers to the ultrashort, but finite, time for light to cross the molecule—in seconds, roughly the fraction 10 divided by a billion times a billion.) Since particles interact electromagnetically through the exchange of photons, the finite speed of light affects the long-range forces between the atoms.
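That light-crossing time is easy to check. Taking, for illustration, the ~71 a₀ atom separation of the long-range molecular state discussed below as the relevant distance,

    t \approx \frac{71\,a_0}{c} = \frac{71 \times 5.29 \times 10^{-11}\ \text{m}}{3.00 \times 10^{8}\ \text{m/s}} \approx 1.3 \times 10^{-17}\ \text{s},

consistent with the figure quoted above.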

In any atomic or molecular system, absorption of light often occurs at a set of unique narrow frequencies. A plot of the response of the system, which can be observed in many ways, against the frequency of the light is the "spectrum" of that system, and it displays unique narrow features, often called "lines" because they were first observed as lines on photographic plates. Each of the spectral lines has a width, that is, a narrow but finite range of frequencies where the light is absorbed. A part of this width arises from the spread of velocities of the moving atoms, by the familiar Doppler effect: the shift in frequency caused by a moving source. (The shift heard from a moving train whistle is an example of this effect for sound waves.) For the case of a collection of atoms moving at different velocities, one has a superposition of many different Doppler shifts which contribute to the total spread seen in the absorption line. If one can arrange to have all the atoms moving with the same velocity, the Doppler spread is nearly zero. One does not, however, reach zero width because (even for motionless atoms or molecules) there is an irreducible width—the natural linewidth—which is determined by the lifetime of the excited state.

To measure the lifetimes of the atoms' excited states, a laser of variable frequency ν is tuned so that the energy of its photons, hν, matches the energy difference between an excited state of the diatomic molecule formed from the atom pair and the ground state of the colliding atoms. The formation of the excited molecular state is observed either by ionizing it with a second photon and detecting the ions or by detecting the loss of trapped atoms due to decay of the excited state to untrapped states. Scanning the laser frequency gives an excitation spectrum (the photoassociation spectrum) for production of the molecular excited states. The spectral line shapes are very sharp, rivaling those seen with conventional methods of high-resolution laser spectroscopy. This is because the kinetic energy of the cold ground-state atoms is sharply defined; the very small kinetic energy of the cold atoms yields a Doppler spread less than a natural linewidth of the atomic line used to laser cool the atoms. A careful consideration of the actual spectral line shapes is necessary for the most precise use of these spectra, since they are strongly influenced by the quantum nature of the cold colliding atoms.1
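A rough numerical comparison (Python) shows why the Doppler spread becomes negligible for laser-cooled atoms. The temperature and transition frequency below are illustrative textbook-scale numbers, not values from the experiments described here.

    import math

    K_B = 1.380649e-23    # Boltzmann constant, J/K
    AMU = 1.66053907e-27  # atomic mass unit, kg
    C = 2.99792458e8      # speed of light, m/s

    def doppler_fwhm(nu0_hz, t_kelvin, mass_amu):
        """Doppler-broadened full width at half maximum for a thermal gas."""
        m = mass_amu * AMU
        return nu0_hz / C * math.sqrt(8 * math.log(2) * K_B * t_kelvin / m)

    # Na D-line (~589 nm, ~5.1e14 Hz); a gas near the ~240 microkelvin
    # Doppler cooling limit versus room temperature (illustrative values):
    cold = doppler_fwhm(5.1e14, 240e-6, 23.0)
    room = doppler_fwhm(5.1e14, 300.0, 23.0)
    print(f"cold: {cold/1e6:.1f} MHz, room: {room/1e6:.0f} MHz")
    # ~1 MHz for the cold gas -- below the ~10 MHz natural linewidth of the
    # Na line -- versus GHz-scale Doppler broadening at room temperature.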

A combined experimental and theoretical effort at NIST2 has concentrated on a very special excited state of the sodium diatomic molecule. This state is a "pure long-range state"3 which correlates, at large separations of the atoms, to the combination of one atom in the ground state and one in its first excited state. The long-range form of the force between two atoms of the same species, one in the ground state and one excited, is similar to that between two bar magnets; it varies as the inverse cube of the separation between them and is proportional to the product of their dipole strengths. (The dipole strength is a quantity which determines how strongly an atom absorbs light.) Here, the dipole strength product is proportional to the decay rate of the excited-state atom, that is, to the inverse of the atomic lifetime. The potential energy of this state, plotted against the interatomic separation, has a very shallow minimum about 55 GHz deep. (1 GHz = 1 billion cycles/s; spectroscopists often express energies as frequencies, omitting the conversion factor of Planck's constant. Visible light photons have energies equivalent to ~1/2 million GHz.) The minimum occurs at a separation near 71 a₀ (1 a₀, the radius of the hydrogen atom, is ~50 trillionths of a meter), and a repulsive wall never lets the atoms get closer than about 55 a₀. (In contrast, in the common diatomic molecule formed from two oxygen atoms, O₂, the atoms are separated by about 2.4 a₀.)
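In symbols (a standard long-range form, supplied here for orientation rather than quoted from the article), the resonant dipole–dipole potential behaves as

    V(R) \approx \pm\,\frac{C_3}{R^3}, \qquad C_3 \propto d^2 \propto \frac{1}{\tau},

so fitting the measured vibrational levels of this potential determines C₃ and hence the atomic lifetime τ; retardation modifies this form by small corrections whose relative size grows with R compared to the optical wavelength divided by 2π.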

The long-range molecular state is entirely determined by the properties of the separated atoms. However, unlike a single atom, a diatomic molecule can absorb light at frequencies which excite vibrational motion (that is, motion of the two atoms toward or away from each other) or rotational motion (where the two atoms rotate as if at the ends of a twirling baton), or a combination of both. Because the molecule is a quantum system, these motions occur at sets of discrete vibrational or rotational energies. Careful measurement of the photoassociation spectrum of the long-range state, along with theoretical modeling of the line shapes, determines the positions of the rotational lines of the lowest seven vibrational levels to ~5 MHz (1 MHz = 1 million cycles/s). By adjusting the dipole strength used to match these molecular spectra, the NIST researchers determined the Na excited-state lifetime to an accuracy of 0.1%. The vast majority of atomic lifetime measurements have uncertainties at least 10 times larger. In addition, the 121 MHz shift in binding energy of the ground vibrational level due to the retardation corrections to the interatomic force is evident from the measurements.

A group at Rice University, led by Randall Hulet, has recently reported a Li atomic lifetime with 0.03% accuracy4 measured using photoassociation spectroscopy of trapped Li atoms. These researchers did not use a "pure long-range state" of the Li₂ molecule, but rather one which experiences chemical bonding at short range. The improved accuracy follows from the wider range of lines surveyed, supplemented by conventional spectroscopic data.5 The Rice group also detected the effect of retardation corrections to the long-range force.

The atomic lifetimes determined from these molecular spectra are the most accurate to date for Li and Na. Although these measurements are much more accurate, they agree (within error limits) with other recent determinations, thus essentially setting to rest longstanding questions and discrepancies about the lifetimes of these simple excited atoms. Photoassociation spectroscopy has also determined an atomic Rb lifetime at the 1% level,6 and work is in progress on K.7 In addition, conventional molecular spectroscopy on long-range excited states of the Na molecule has also been used to determine the atomic Na lifetime with 0.3% accuracy.8 In the study of excited-state lifetimes, researchers have therefore made dramatic progress not only in opening new avenues but also in enhancing traditional routes.

Paul Julienne
National Institute of Standards and Technology
([email protected])

1. K. Burnett, P. Julienne, P. Lett, and K.-A. Suominen, Phys. World 8, 42 (1995).
2. K. M. Jones, P. S. Julienne, P. D. Lett, W. D. Phillips, E. Tiesinga, and C. J. Williams, Europhys. Lett. 35, 85 (1996).
3. W. C. Stwalley, Y.-H. Uang, and G. Pichler, Phys. Rev. Lett. 41, 1164 (1978).
4. W. I. McAlexander, E. R. I. Abraham, and R. G. Hulet, Phys. Rev. A 54, R5 (1996).
5. C. Linton, F. Martin, I. Russier, A. J. Ross, P. Crozet, S. Churassy, and R. Bacis, J. Mol. Spectrosc. 175, 340 (1996).
6. H. M. J. M. Boesten, C. J. Tsai, J. R. Gardner, D. J. Heinzen, and B. J. Verhaar, Phys. Rev. Lett. 77, 5194 (1996); a 3% determination is in J. R. Gardner et al., ibid. 74, 3764 (1995).
7. W. C. Stwalley, P. Gould, and H. Wang, private communication.
8. E. Tiemann, H. Knoeckel, and H. Richling, Z. Phys. D 37, 323 (1996).

Experimental Realizations of Quantum Thought Experiments

Physicists traditionally construct imaginary experiments, termed "thought experiments," to explain the strange behavior of physical systems to which quantum or relativity theories (or both) must be applied. These thought experiments "run" only on paper because they are generally technically impossible to realize with hardware in the laboratory. However, a class of thought experiments—in which single atoms, photons, and other quantum systems are precisely manipulated and observed—have recently graduated to become real experiments in quantum optics laboratories. Some of the most intriguing experiments involve investigations of the behavior of quantum parts of a total quantum system. These new studies have provided fundamental insights into the quantum nature of the electromagnetic field, the "spooky" (in the words of Einstein) entanglement between a pair of quantum systems, and the way in which the dissipation of energy in a system can destroy its quantum-mechanical properties.

FIGURE 3. (a) Data from an experiment explicitly showing, for the first time, how the quantized nature of an electromagnetic field affects the interchange of energy between the field and an atom. Pe–g(t) is the probability of finding the atom in energy level g after interacting with a weak coherent microwave field for a time t. The four recordings correspond (from top to bottom) to increasing field intensities. In the present experiments, the probability that the atom resides in level g exhibits a simple oscillation when the field is in its vacuum state (top) but exhibits more complex oscillations when the higher field excitation levels are considered. The complex oscillations demonstrate the quantum nature of the field and, moreover, tell us that under experimental conditions the field is in a superposition of photon states. (b) Performing a mathematical operation known as a Fourier transformation on the lower data recordings reveals that there are several distinct "Rabi frequencies," each corresponding to the frequency of energy exchange for a different quantum energy level of the electromagnetic field.



Central to these experiments is the Jaynes–Cummings model, a long-time favorite of theorists. It describes the behavior of a composite system consisting of a two-level quantum system (one with two possible energy levels, such as an electron in a magnetic field) coupled to a quantum-mechanical system that is a simple harmonic oscillator (a system with many equally spaced energy levels, such as a tiny weight attached to a simple spring). Two groups working independently have now realized good approximations of this ideal model and performed complementary experiments which test fundamental aspects of quantum mechanics.
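For reference, the standard textbook Hamiltonian usually invoked for this composite system (not written out in the article itself) is

    H = \hbar\omega\, a^\dagger a + \tfrac{1}{2}\hbar\omega_0\,\sigma_z + \hbar g\left(a\,\sigma_+ + a^\dagger\sigma_-\right),

where a and a† remove and add one quantum of the oscillator, σ± flip the two-level system, and the last term performs the one-quantum-at-a-time energy exchange responsible for the Rabi oscillations discussed below.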

One example consists of dramatic experiments performed at the Ecole Normale Supérieure (ENS) in Paris.1 In these studies, a two-level atom interacts with a single quantized harmonic oscillator—namely, a microwave field inside a superconducting cavity consisting of a pair of closely spaced niobium slabs. Readers given to shower-stall arias are familiar with the resonant properties of sound waves in a cavity; the niobium cavity is a highly refined, miniature version operating with microwave electromagnetic fields. By virtue of its shape and the high conductivity of its walls, it can contain electromagnetic microwave fields only with certain discrete frequencies and localized spatial properties.

Classical, nonquantum physics describes a sinusoidal cycling of energy between the atom and the cavity's microwave field. Quantum mechanics, however, dictates that the microwave field may take up energy only in discrete steps. These steps manifest themselves as individual packets of energy known as photons. Until this work, the discrete nature of the coherent exchange of energy between an atom and the quantized electromagnetic field had not been clearly demonstrated. The ENS group has provided the first experiment explicitly demonstrating the effect of field quantization on the interchange of energy between field and atom.

In the experiment, the physicists first cooled the cavity to the zero point, the minimum-energy quantum field state corresponding to the presence of zero photons in the cavity. (Quantum theory shows that every harmonic oscillator has a zero-point state with a minimum, but nonzero, energy. For the electromagnetic field this energy is half that of a single photon.) But our strange quantum world actually permits an undisturbed, unmeasured quantum system to "exist" in many quantum states simultaneously. When the ENS team added microwave energy to the cavity, the cavity's electromagnetic field simultaneously entered into a number of quantum states, each corresponding to a different, but small, number of microwave photons in the cavity. The researchers then passed a beam of rubidium atoms through the cavity. Classical physics describes a single Rabi frequency, the frequency at which energy is exchanged between the atom and the cavity. However, observations of energy exchange between individual rubidium atoms and the cavity electromagnetic field [Fig. 3(a)] revealed several discrete sinusoidal subcomponents [Fig. 3(b)], i.e., many different Rabi frequencies, each corresponding to the frequency of energy exchange for a different quantum energy level of the electromagnetic field. These experiments demonstrate field quantization through back-and-forth energy exchange rather than a one-time transfer of energy, e.g., the photoelectric effect, in which a photon of light strikes a surface to remove electrons.
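The quantized-field prediction behind Fig. 3 can be simulated in a few lines. In the Python sketch below, an atom entering in its upper level exchanges energy with a field whose photon number n is Poisson distributed, so the observed signal is a weighted sum of cosines at the discrete Rabi frequencies 2g√(n+1); the coupling strength and mean photon number are arbitrary illustrative values, not the ENS parameters.

    import numpy as np
    from math import factorial

    g = 2 * np.pi * 25e3   # vacuum Rabi coupling, rad/s (illustrative)
    nbar = 2.0             # mean photon number of the weak coherent field
    n = np.arange(40)
    # Poisson photon-number distribution of a coherent state
    pn = np.exp(-nbar) * nbar**n / np.array([factorial(int(k)) for k in n])

    t = np.linspace(0.0, 400e-6, 4000)
    rabi = 2 * g * np.sqrt(n + 1)   # one discrete Rabi frequency per n
    # Probability of finding the atom in the lower level g: a weighted
    # sum of oscillations, one for each photon-number state of the field.
    p_g = 0.5 * (1.0 - np.cos(np.outer(t, rabi))) @ pn

    # Fourier transforming the record reveals separate peaks at
    # 2*g*sqrt(n+1), the fingerprint of field quantization in Fig. 3(b).
    spectrum = np.abs(np.fft.rfft(p_g - p_g.mean()))
    freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])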

Remarkably, physicists at the National Institute of Standards and Technology in Boulder, Colorado, realized a formally equivalent system by manipulating a single beryllium ion while it was trapped by nonuniform electric fields.2 In this case, the two-level quantum system is provided by the energy state of the ion's outer electron, because the researchers allowed it to shuttle between two different energy levels. The quantized harmonic oscillator is the ion's vibrational motion in the trap, an equally spaced set of energy levels which describe the amount of energy in the ion's oscillation about the center of the trap. The researchers created a situation where an exchange of energy between electronic and vibrational states occurred, mimicking almost perfectly the Jaynes–Cummings description of an atom exchanging energy with a photon field. By performing many successive experiments involving a single ion, the evolution of population back and forth between the two electronic levels was observed, providing information on how the ion's quantum vibrational level changed. Adding energy to increase the ion's vibrational levels is directly analogous to the injection of extra microwave energy in the Paris experiments. This energy exchange is shown in Fig. 4, along with a breakdown of the energy exchange into a range or spectrum of vibrational frequencies corresponding to different numbers of quanta of vibrational motion.

Note the striking similarity between Figs. 3 and 4. In both experiments, the two-level system becomes "entangled" with the quantum harmonic oscillator. Entanglement implies that a measurement performed on one system determines the outcome of a similar, but unmade, measurement on the other, even though the two are separated and apparently unconnected. Mathematically, no separate independent description is possible for either part of the system, only for the whole. In the ENS experiments, this entanglement survives when the atom leaves the cavity. Even if the atom is far away from the cavity, measuring its state would instantly determine that the cavity field is in the complementary state dictated by the combination created when the atom was in the cavity. Einstein called such nonlocal effects "spooky action at a distance."
FIGURE 4. Data showing the energy exchange that occurs within a single beryllium ion held in an ion trap. In this case, energy is exchanged between vibrations of the ion in the trap and an energy state of the ion's outer electron. Pg(t) is the probability of finding the ion's outer electron in state g after first exciting the ion's motion from its lowest energy state to a state of coherent motion and then enabling it to exchange energy with the ion's electron state for a time t. Each data point represents an average of 4000 separate experiments, taking about 1 s. The oscillations in the data show "collapse" and "revival" characteristic of the quantized nature of the ion's motional energy states. (Inset) Applying mathematical transformations to the time data shows that there is a mean of 3.1 (±1) quanta of vibration.

Entangling the electronic state of the ion with its vibrational motion has allowed the NIST Boulder group to demonstrate a version of the fabled Schrödinger's cat thought experiment, in which the decay of a radioactive atom (a quantum system) is entangled with a cat's life or death (a decidedly nonquantum concept). The researchers split the wave associated with the ion's electronic state into two parts which are separated by as much as 80 nm (hundreds of times the size of the atom).3 In this study, the atom's electronic state (a quantum property) becomes entangled with a definite, distinguishable position of the ion in the trap (two separated, classically valid positions of the atom play the role of the living or dead cat).

The ENS counterpart of this Schrödinger cat state is a superposition of two microwave fields with different phases (different relative positions of the wave's crests and valleys) simultaneously present in the cavity.4 A single atom in a superposition of states is sent through a cavity containing the microwave field, and entanglement ensues between the electronic state of the atom and the cavity field state, which depends on the phase. The field-state superposition can then be analyzed by observing interference effects in the energy state of a subsequent atom crossing the cavity. The investigation of such "cat states" opens the way to the study of the much-debated quantum decoherence phenomenon, in which the dissipation of energy can lead to a breakdown of a system's quantum properties. As shown by the Paris group,4 the relative phase between the two parts of the cavity field is sensitive to the unavoidable exchange of energy between the entire system and its environment as the separation between the atom and cavity is increased. By varying the number of photons in the cavity and monitoring the rate at which the quantum interference effects disappear, physicists can now study quantitatively how quantum interference phenomena vanish in the macroscopic world.

Serge Haroche
Ecole Normale Supérieure, France
([email protected])

David Wineland
National Institute of Standards and Technology
([email protected])

1. M. Brune, F. Schmidt-Kaler, A. Maali, J. Dreyer, E. Hagley, J. M. Raimond, and S. Haroche, Phys. Rev. Lett. 76, 1800 (1996).
2. D. M. Meekhof, C. Monroe, B. E. King, W. M. Itano, and D. J. Wineland, Phys. Rev. Lett. 76, 1796 (1996).
3. C. Monroe, D. M. Meekhof, B. E. King, and D. J. Wineland, Science 272, 1131 (1996).
4. M. Brune, E. Hagley, J. Dreyer, X. Maitre, A. Maali, C. Wunderlich, J. M. Raimond, and S. Haroche, Phys. Rev. Lett. 77, 4887 (1996).

BEAM PHYSICS

The following items are meant to provide a brief summary of new particle-beam developments over the past year at a number of accelerators around the country.

Thomas Jefferson National Accelerator Facility

Formerly called the Continuous Electron Beam Accelerator Facility, this $600M construction project was completed on schedule and within budget. It was officially dedicated on May 24, 1996 and renamed the Thomas Jefferson National Accelerator Facility (Jefferson Lab). The performance of the accelerator has been excellent, delivering in the last 10 months over 3200 h of beam time for physics. Superconducting accelerator cavities have performed so well at a nominal energy of 4 GeV that a 5 GeV run is planned for early 1997, while 6 GeV is expected in early 1998. Initial data shows that the accelerator is stable enough for serious parity-violation experiments. In one of the experimental areas, Hall C, the equipment is installed and fully functioning for users. Five experiments have run, with two more expected to be complete by the end of the calendar year. The first beam has been sent to Hall A, and Hall B is on track for completion, with all six calorimeters installed and a first commissioning run scheduled for after Thanksgiving 1996. (See the related article in the Nuclear Physics chapter.)

Stanford Linear Accelerator Center

SLAC's newest accelerator, the Next Linear Collider Test Accelerator (NLCTA), reached a major milestone in August with the first electron beam accelerated to 65 MeV with an acceleration gradient of 50 MV/m. The NLCTA is part of SLAC's on-going development of accelerator and microwave power technology for a future electron–positron linear collider, dubbed the "Next Linear Collider" (NLC). The NLCTA operates at a radio frequency of 11.4 GHz, four times the SLAC frequency and with a gradient more than double that of the SLAC linac.
Snowmass Meeting on the Future of Accelerator-Based Physics in the U.S.

Nearly 500 physicists attended a three-week-long workshop at Snowmass, Colorado to contemplate the future of accelerator-based physics in the U.S. The meeting this year marked a turning point in the collective recovery from the cancellation of the SSC, which itself grew out of a discussion at the 1982 Snowmass meeting. The participants considered a first-time move to support a large project overseas, the Large Hadron Collider (LHC) at CERN. They also considered the Next Linear Collider (NLC) electron–positron collider based on the presentation of the extensive "Zeroth-Order Design Report for the NLC." Numerous study groups discussed various advanced acceleration techniques, possible upgrade plans at the Fermilab Tevatron, ideas for a muon collider, and concepts for a large hadron collider.

Los Alamos National Laboratory

The Accelerator Production of Tritium (APT) Facility plans call for an accelerator with an average beam power of over 150 MW. The APT would produce tritium in sufficient quantities to replace the amount that decays owing to its 12.3-year half-life. The APT design is based on a 1700 MeV, 100 mA cw beam of protons produced by a linear accelerator. The beam strikes a tungsten target, producing neutrons that are moderated in a surrounding blanket and then captured in helium-3 to make tritium. The assembly and testing of the 20 MeV Low Energy Demonstration Accelerator (LEDA) has started at LANL. The proton injector for LEDA routinely provides 120 mA at 75 keV with the required beam quality. The project is headed by LANL and includes LLNL, BNL, and Westinghouse Savannah. The DOE has selected Burns and Roe Enterprises, Inc. as the prime contractor.

Sandia National Laboratory, Particle Beam Fusion Accelerator

The PBFA-Z achieved a milestone by generating 1.6 MJ of soft x rays by operating in the z-pinch mode. A plasma pinch is created by passing the stored energy of the PBFA through an array of tiny wires. The pulse of current in the axial, or z, direction causes the plasma to implode, releasing over half of the stored energy in an x-ray pulse. In the PBFA-Z experiment, the peak x-ray power was above 110 TW, and the x-ray pulse width was about 8 ns full width at half-maximum. X-ray data were taken using x-ray diodes, resistive bolometers, time-integrated and time-resolved spectrometers, and photoconducting detectors. The z-pinch load consisted of 120 tungsten wires, 10 µm in diameter, configured in a cylindrical array at a diameter of 4 cm and a length of 2 cm. The design goal for PBFA-Z is 1.5 MJ and 150 TW of x-ray energy and power. This energetic, intense source of soft x rays will be used in experiments at high energy density for studying inertially confined fusion (ICF), weapons physics, and weapons effects applications.

Fermilab

On February 25, 1996 the 1994–96 Tevatron collider run ended, and the reconfiguration of the Tevatron (where protons and antiprotons are collided head on) in support of fixed-target operations began. The success of this extensive running period can be measured partly in terms of luminosity, the parameter (in units of inverse "barns," a barn being equivalent to a cross section of 10⁻²⁴ cm²) which describes the intensity of the proton and antiproton beams that are brought to bear in the interaction areas. In this case an integrated luminosity of about 150 inverse picobarns was delivered to each of the two detectors (CDF and D0), while the peak luminosity was 2.5 × 10³¹ cm⁻² s⁻¹—25 times the initial Tevatron performance specification. Data collected during this run led to the long-sought discovery of the top quark (see Physics News in 1995, p. 55).

Significant progress has been made on the Main Injector project at Fermilab. The project is approximately 70% complete, with commissioning expected to commence in Spring 1998. The goal is to improve the collider luminosity by a factor of 5. In addition, a design has been completed for a new antiproton storage ring, the "Recycler," which would reside in the existing Main Injector enclosure and boost luminosity performance by an additional factor of 2. This project is currently under review by the Department of Energy. Longer-term efforts at Fermilab include R&D into electron cooling, muon colliders, and plasma beat-wave acceleration.

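These luminosity figures translate directly into event counts. As a purely illustrative calculation (the cross section below is a made-up round number, not a measured value), a process with σ = 1 nb = 10⁻³³ cm² would occur at the quoted peak luminosity at a rate

    R = \sigma \mathcal{L} = (10^{-33}\ \text{cm}^2)(2.5\times10^{31}\ \text{cm}^{-2}\,\text{s}^{-1}) = 0.025\ \text{s}^{-1},

and would yield N = σ∫L dt = (10³ pb)(150 pb⁻¹) ≈ 1.5 × 10⁵ events over the run, before detection efficiencies.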

William Herrmannsfeldt
SLAC
([email protected])


BIOLOGICAL PHYSICS

Stochastic Resonance Moves Toward Medical Science. The phenomenon of "stochastic resonance" describes how introducing a certain amount of noise into a system can actually increase the transmission or detection of a signal so as to maximize the signal-to-noise ratio. First explored in climate studies and then in laser science, stochastic resonance has turned, in the past few years, towards understanding how biological organisms may use noise to enhance the transmission of weak signals through nervous systems. Frank Moss of the University of Missouri at St. Louis and William Ditto of Georgia Tech describe new experimental results in which researchers have used stochastic resonance to boost the transmission of a weak signal in rat brain tissue. Moss and Ditto also describe a study in which noise increased the distinguishability of electrical signals in a frog nervous system having similar structural and electrical characteristics to those of the human auditory nerve.

Peptide and Receptor Protein Signature Matches. A protein folds into its final three-dimensional shape from a one-dimensional sequence of building blocks called amino acids. In efforts to identify the proteins that bind to specific receptors in the body, researchers have come up with a tool for computing characteristic signatures for proteins and receptors based on their one-dimensional amino acid sequences. Each of the 20 different amino acids from which all proteins are composed is assigned a value of hydrophobicity, that is, the degree to which it repels water. One then constructs a sequence of hydrophobicity values from which distinctive signatures can be determined. Viewing proteins in terms of their hydrophobicity is a reasonable approach, because proteins naturally exist in a water-based environment and many of their properties depend on how they interact with the water that surrounds them. In their article, Arnold Mandell and Karen Selz of the Emory University School of Medicine and Michael Shlesinger of the Office of Naval Research report that this powerful new method has uncovered intriguing new details about the binding of certain hormones and neurotransmitters in the brain, such as those which transport dopamine.

Ben P. Stein
AIP


Stochastic Resonance Moves Toward Medical Science

Stochastic resonance (SR) is the phenomenon whereby information about a weak, nontriggering signal is detected or transmitted through a system by the action of random processes, or noise. Since single neurons are, in the simplest view, detectors triggered by incoming signals exceeding a certain threshold, they are ideal systems in which to demonstrate SR.1 But can this process operate in networks of neurons, for example, as an aid to the processing of information?2 And can it conceivably be of use to humans?3 Two recent experiments point in these directions.
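Before turning to those experiments, the threshold mechanism can be illustrated numerically. The Python sketch below drives a simple threshold detector with a subthreshold sine wave plus Gaussian noise; every amplitude, frequency, and threshold value here is an arbitrary illustrative choice, not a parameter from the experiments. The periodic component of the output grows, peaks, and then falls as the noise level rises, which is the signature of SR.

    import numpy as np

    rng = np.random.default_rng(0)
    fs, f_sig = 1000.0, 5.0          # sample rate and signal frequency (Hz)
    t = np.arange(0, 60.0, 1.0 / fs)
    signal = 0.8 * np.sin(2 * np.pi * f_sig * t)  # amplitude below threshold
    threshold = 1.0

    def output_component(noise_rms):
        """Fourier amplitude of the detector output at the signal frequency."""
        x = signal + rng.normal(0.0, noise_rms, t.size)
        spikes = (x > threshold).astype(float)    # simple threshold detector
        return abs(np.sum(spikes * np.exp(-2j * np.pi * f_sig * t))) / t.size

    for d in (0.05, 0.2, 0.5, 1.0, 2.0):
        print(f"noise rms {d:4.2f} -> signal component {output_component(d):.4f}")
    # Too little noise: the threshold is never crossed; too much: the
    # periodicity is swamped. In between, the noise carries the signal through.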

In the first, SR has been demonstrated for the first time in mammalian brain tissue.4 A weak electric field, comprising both signal and noise, was imposed on a slice of rat hippocampus, a brain region essential for memory and other tasks. The slice was cut parallel to the plane where the neurons convert incoming signals into electrical nerve impulses. In this way, a weak signal can be applied via the field to all neurons in the network, a configuration which had been studied previously only in simulations.5 Recording from a cell body within the network revealed SR as the noise intensity was increased, as shown by (a) in Fig. 1.

In the second, SR has been shown to increase the distinguishability of the electrical signals recorded by a simple nervous system that is similar to the human auditory nerve.6 The signals had a range of characteristic frequencies, called the formants, corresponding to those of spoken vowels. The experiment was performed on the sciatic nerve of the toad Xenopus; this well-understood system has structural and electrical similarities to the human auditory nerve. The sciatic nerve was stimulated by an external source, and the combined neural response of many of its nerve fibers was recorded.

Thus in this experiment as well, the tissue sample consists of many active elements connected in parallel, all subject to the same weak signal.5 Noise was added to signals which mimic electrical stimuli applied by cochlear implants to the auditory nerves of profoundly deaf patients. SR was observed in the response of the nerve, as shown in (b) of Fig. 1. This experiment suggests that SR may be only a few steps away from a practical application of importance to hearing-impaired people.

FIGURE 1. Stochastic resonance in (a) a mammalian brain slice and (b) an auditory nerve model—the sciatic nerve of a toad. The signal-to-noise ratios are plotted vs the noise intensity added to the information-carrying signals, by themselves too weak to evoke neuronal responses. In both cases the SNR of the response shows a maximum at an optimal noise intensity—the signature of SR.
William L. Ditto
Georgia Institute of Technology
([email protected])

Frank Moss
University of Missouri at St. Louis
([email protected])

1. K. Wiesenfeld and F. Moss, Nature 373, 33 (1995); F. Moss and K. Wiesenfeld, Sci. Am. 273, 50 (August 1995); K. S. Brown, New Scientist 150, 28 (1 June 1996).
2. X. Pei, L. Wilkens, and F. Moss, J. Neurophysiol. (in press).
3. F. Y. Chiou-Tan et al., Int. J. Bifurc. Chaos (in press); P. Cordo et al., Nature (in press).
4. B. J. Gluckman, T. I. Netoff, E. J. Neel, W. L. Ditto, M. L. Spano, and S. J. Schiff, Phys. Rev. Lett. 77, 4098 (1996).
5. J. J. Collins et al., Nature 376, 236 (1995); E. Pantazelou, F. Moss, and D. Chialvo, in Noise in Physical Systems and 1/f Noise, edited by P. Handel and A. Chung (AIP Press, New York, 1993), p. 549.
6. R. P. Morse and E. F. Evans, Nature Medicine 2, 928 (1996).

Peptide and Receptor Protein Signature Matches

Proteins are the workhorses of the body, carrying out myriad biological functions. Each protein is composed of a relatively short linear chain of amino acids, which range in length from the 18–140 amino acid peptide messengers (which include hormones) to the generally 200–2500 amino acid membrane receptors. For a peptide messenger to initiate or terminate an action, it must bind to a receptor protein. Peptides with a specific sequence can be made in the laboratory. Whether a given peptide will bind a given receptor has traditionally been resolvable only through trial-and-error experiments. Generating peptides to act on a known receptor has usually been approached through random permutation followed by tests to see if and how well they bind to that receptor. Taking an alternative approach, we have had success in addressing the question of binding by computing characteristic signatures for each peptide and receptor based on their one-dimensional amino acid sequence.

Specifically, we look at each amino acid's hydrophobicity, a value that reflects its local water-repelling characteristics, and use the series of these values which correspond to the amino acid sequence to make a hydrophobic representation of the peptide or receptor protein. Each of the 20 amino acids from which all proteins are composed can be assigned an established real value between 0 and 3.77. Through a cascade of linear transformations,* we use this hydrophobic sequence to compute a hydrophobic signature for each peptide or receptor protein. When these values are relatively similar, the peptide messenger and receptor bind; when they are disparate, they do not. (An illustrative numerical sketch of this kind of pipeline appears below.)

We have used these signatures to successfully match peptide messengers with transmembrane brain opioid receptors (the binding sites for opiate-like substances such as endorphins), and in other systems to uncover testable, previously unknown binding relationships. As an example, our analysis (see Fig. 2) finds a signature match between corticotropin releasing factor (CRF, a hormone that acts on the brain's hypothalamus) and its receptor. There is a mismatch, however, between CRF and a mutant CRF receptor (found in patients with a pituitary-adrenal glandular disorder) known not to bind CRF.

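The transformation cascade itself is not spelled out in this summary, so the following Python sketch is only a generic, hypothetical illustration of the signature idea: map a sequence onto hydrophobicity values, apply linear transformations, and reduce the result to a single comparable number. The hydrophobicity table, the smoothing window, and the use of a dominant Fourier amplitude as the "signature" are all invented for this example and are not the published method.

    import numpy as np

    # Toy hydrophobicity scale (values invented for illustration only; the
    # published work uses an established scale between 0 and 3.77).
    HYDRO = {"A": 1.6, "R": 0.0, "N": 0.4, "D": 0.3, "C": 2.0, "Q": 0.4,
             "E": 0.3, "G": 1.0, "H": 0.6, "I": 3.1, "L": 2.8, "K": 0.1,
             "M": 2.3, "F": 3.7, "P": 0.9, "S": 0.7, "T": 0.8, "W": 1.9,
             "Y": 1.3, "V": 2.6}

    def signature(sequence, window=5):
        """A stand-in 'hydrophobic signature': smooth the hydrophobicity
        series with a moving average (one linear transformation), then take
        the amplitude of its dominant Fourier mode (another linear step)."""
        h = np.array([HYDRO[a] for a in sequence])
        smooth = np.convolve(h - h.mean(), np.ones(window) / window, mode="valid")
        return np.abs(np.fft.rfft(smooth))[1:].max() / smooth.size

    # Hypothetical use: similar signature values would suggest binding.
    print(signature("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))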



Additionally, this analysis may have clarified a mystery. The gastrointestinal peptide CCK is known to affect brain levels of the neurotransmitter dopamine, but our analysis showed a mismatch between the CCK and dopamine receptor signatures. We did find a signature match between CCK and DAT, a dopamine-transport peptide. This suggested to us that it might be fruitful to examine dopamine binding to DAT alone and with CCK present. These experiments are being carried out in collaboration with Mike Owens and Charles Nemeroff at the Emory University School of Medicine. Preliminary evidence suggests that CCK may reduce dopamine binding to DAT by as much as 50%.

Knowledge that a peptide binds to a receptor does not address what the effect of this binding is, but it is a necessary first step. We are now completing the development of algorithms such that we can use the signature of a known receptor to generate a small set of new peptides (amino acid sequences) that will bind to the receptor, some of which should initiate or terminate biological functions.
FIGURE 2. A characteristic signature value of 2.18 calculated for the brain hormone CRF (top) closely approximates the 2.22 signature found for its receptor (middle). The mutant receptor, which fails to recognize and bind CRF, resulting in miscommunication between the brain and pituitary gland, has a 3.15 signature (bottom).

Arnold J. Mandell and Karen A. Selz
Emory University School of Medicine
([email protected])
(704-251-9794)

Michael F. Shlesinger
Office of Naval Research
([email protected])

*D. S. Broomhead, R. Jones, and G. P. King, J. Phys. A 20, L563 (1987).

CHEMICAL PHYSICS

In the past year, chemical physics has seen great advances in understanding the forces that molecules experience in complex condensed-matter environments. The three examples featured below each provide new perspectives and challenge existing concepts of molecular dynamics.

Vibration Echoes. A novel experimental technique, known as picosecond echo spectroscopy, is providing new insights into the vibrational properties of molecules. Earlier echo experiments, first developed in the 1970s, probed small gaseous molecules like CH₃F using microsecond pulses of infrared radiation to excite the vibrations. However, Michael D. Fayer of Stanford University has recently extended these methods to complex condensed-matter systems where changes in vibrational properties can occur millions of times faster than those in gases. His research is providing a unique window into the nature of the energy fluctuations in liquids, glasses, and proteins. Dissecting the vibrational-energy spectrum of a condensed-matter system over a wide range of temperatures represents a significant advance in vibrational spectroscopy.

Fast Protein Folding. Just a few years ago, investigations of protein folding were confined to the arena of classical biochemistry, and the "ultrafast" steps in folding were those faster than a millisecond. As a result of a combination of theoretical and experimental advances, these questions have sharpened. Studying the forces which change a protein's complex shape has become an exciting part of chemical physics, in which the time range has been extended to the picosecond regime and quantitative macroscopic and atomic-level descriptions are now expected. Experimental challenges include the design of methods whereby the folding or unfolding process can be triggered, and the refinement of spectroscopic techniques that are able to detect time-dependent changes in structure. In a fast-folding experiment of William A. Eaton and his coworkers at the National Institutes of Health, a laser pulse starts the folding process by dissociating the carbon monoxide that maintains the unfolded protein structure. The folding steps were studied by measuring the electronic spectra of the protein.

Snapshots of Electrons at Interfaces. Chemical reactions that occur on metal surfaces are greatly dependent upon the exchange of electrons between the metal and adsorbed atoms and molecules on the surface. Therefore it is of great importance to determine the properties of electrons on the surface, particularly as these may be modified by the presence of adsorbates. Charles B. Harris and his colleagues at the University of California at Berkeley and Lawrence Berkeley National Laboratory have greatly advanced our understanding of surface electronic states by devising techniques whereby electron localization and the vibrational properties of the metals are studied in real time with 30 fs pulses.

Robin M. Hochstrasser
University of Pennsylvania

Vibration Echoes

Chemical bonds between atoms in a molecule can be thought of as springs holding the atoms together. Like a complicated collection of weights and springs, the atoms of a molecule have many vibrational motions called "modes." By examining the vibrational modes, a great deal can be learned about a molecule. A new experimental technique, known as picosecond infrared vibrational echo spectroscopy, is providing detailed information on the vibrational properties of liquids, glasses, proteins, and other molecular systems in complex condensed-matter environments.



The vibrational echo method provides information that cannot be obtained by previous approaches. It operates on the scale of trillionths of a second, a time scale on which important chemical processes occur. The new approach promises to provide fresh insights into how motions of the environment of a molecule affect the molecule, or how protein dynamics influence the active site in the protein. These insights will improve our understanding and control of chemical and biological processes.

When infrared (IR) light is passed through a sample of molecules, it can be absorbed by the molecular vibrations, causing the atoms to oscillate at characteristic frequencies. These frequencies reflect the strengths of the chemical bonds. The set of frequencies produces a vibrational spectrum, with each frequency value producing a peak or "line" in the spectrum. Vibrational spectra of even large molecules reveal beautiful structure, a large number of lines that can be assigned to the various types of motions that occur in a molecule. The position of a peak associated with a particular mode gives the energy of the vibration.

In a molecule isolated from its surroundings, vibrations have energies that do not change with time. In a solvent, the vibrational energies are shifted, and they reflect an average influence of the solvent on the molecule that does not change with time. But there is an additional factor in a real-world condensed-matter system. At any temperature above absolute zero, the system contains heat. Therefore, molecules are continually moving. Since interactions between molecules depend on the distance between them and their orientations, the interactions inherently change over time. Thus the vibrational energies, which reflect the molecular structure, are time dependent; they are constantly fluctuating. These fluctuations are intimately involved in chemical processes, and they can also provide information on the solvent and how it interacts with dissolved species.

Even a well-resolved vibrational spectroscopic line generally does not provide information on dynamics (how motions of a solvent or protein influence the molecule of interest). A vibrational absorption peak is not infinitely narrow. A vibration will absorb infrared light over a range of wavelengths, giving the vibrational line a width and shape. In principle, dynamical information is contained in a spectroscopic line shape. However, the line shape is also caused by the essentially static structural perturbations associated with the distribution of arrangements of solvent molecules or parts of a protein that surround and interact with the molecule of interest (inhomogeneous broadening). For example, carbon monoxide (CO) binds to the active site of myoglobin, a protein found in muscle tissue. The vibrational line shape of CO, measured in an absorption spectrum, is wide because the surrounding protein, which interacts with CO, has many slightly different configurations. These configurations change so slowly on the time scale important for the CO vibration that they are essentially static. The different protein configurations exert different forces on the CO oscillator, giving rise to a distribution of absorption energies. The absorption line is inhomogeneously broadened because its width is caused by the basically static range of forces rather than by motions of the protein. If there were no inhomogeneous broadening, the vibrational line shape would reflect the protein dynamics rather than the fact that there are many slightly different protein structures.

While vibrational spectroscopy has been widely applied to complex condensed-matter systems throughout the 20th century, it is only in the last few years that researchers at Stanford University developed the vibrational echo method to investigate vibrational dynamics and interactions between molecules in condensed-matter molecular systems.1–4 Vibrational echo experiments1,3 measure the homogeneous vibrational line shape (the true line shape that gives information on dynamics) even when this line shape is obscured by inhomogeneous broadening. The vibrational echo experiment involves a two-pulse excitation sequence.

The first pulse puts each solute molecule's vibration into a superposition state, which is a mixture of the lowest-energy vibrational level and the next higher-energy level. For each molecule, the superposition can be pictured as an arrow rotating such that the point of the arrow describes a circle, with the tail of the arrow at the center. At time t = 0, the arrows (one for each molecule) are all pointing in the same direction. The arrows are rotating at speeds equal to their vibrational frequencies. Since there is an inhomogeneous distribution of frequencies, the arrows are rotating at different speeds. Therefore, shortly after t = 0, they are no longer pointing in the same direction. The rotating arrows have gotten out of phase.

The second pulse, entering the sample at a time t after the first pulse, changes the phases of the vibrational superposition states (the relative positions of the arrows) and starts a rephasing process. At a certain time after the second pulse, the vibrational superposition states (the arrows) are rephased (brought back into alignment), and the sample emits a third coherent pulse of light.

So in this process, two pulses of light go into the sample, but three pulses of light come out. The third pulse is the vibrational echo. Adjusting the time between the two incoming pulses generates a vibrational echo decay curve as shown in Fig. 1. Such decay curves, taken on liquid,1–3 glass,1–3 and protein4 samples, are providing an entirely new method for examining molecular dynamics in complex molecular systems.
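The rotating-arrow description maps directly onto a small simulation. In the Python sketch below, each molecule is a phasor with a random static frequency offset (inhomogeneous broadening) plus a random walk in frequency (homogeneous fluctuations); conjugating the phases at the time of the second pulse produces a partial rephasing, an echo, at twice that time. All rates and times are arbitrary illustrative values.

    import numpy as np

    rng = np.random.default_rng(1)
    n_mol = 20000
    delta = rng.normal(0.0, 2 * np.pi * 1.0, n_mol)  # static offsets (rad/ps)
    jitter = 2 * np.pi * 0.05                        # homogeneous fluctuation scale
    dt, t_pulse2 = 0.01, 2.0                         # time step and pulse delay (ps)

    phase = np.zeros(n_mol)
    echo = []
    for step in range(int(2.5 * t_pulse2 / dt)):
        # Each phasor advances at its own frequency plus a random walk
        # representing thermally induced frequency fluctuations.
        phase += (delta + jitter * rng.normal(size=n_mol) / np.sqrt(dt)) * dt
        if abs(step * dt - t_pulse2) < dt / 2:
            phase = -phase               # second pulse: conjugate the phases
        echo.append(abs(np.exp(1j * phase).mean()))  # macroscopic coherence

    # The coherence collapses quickly after the first pulse, then revives
    # near t = 2 * t_pulse2; the homogeneous jitter caps the height of that
    # revival, which is what the echo decay curve in Fig. 1 measures.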

To obtain a physical feel for the manner in which the vibrational echo experiment reveals homogeneous energy fluctuations in spite of a broad inhomogeneous spread of vibrational energies, consider the following foot race. Initially, all the runners are lined up at the starting line. At t = 0, the starting gun (analogous to the first IR pulse) is fired, and the runners take off down the track. After running for some time, the faster runners have pulled out in front, and the slower runners are somewhat behind. The runners are no longer in a line because of the inhomogeneity in their speeds. At time t, the gun is again fired (analogous to the second laser pulse), and everyone turns around and runs back toward the starting line. If each runner maintains a constant speed, out and back, then all the runners will cross the starting line exactly in line again at a time t after the second gun (the echo). When the second gun was fired, the faster runners were farther away from the starting line than the slower runners, but since they run faster, the differences in distances are exactly made up for by the differences in speeds. At the starting line, the group is rephased; the inhomogeneity in speeds has been nullified. If the runners do not run at exactly constant speeds, but each runner has some fluctuation in speed about his average (homogeneous fluctuations), then the runners will not cross the starting line exactly in a line. The rephasing will not be perfect. A snapshot of the group as it crosses the starting line will reveal the small fluctuations in the runners' speeds, in spite of the large inhomogeneous distribution of speeds. In the same manner, the vibrational echo experiment reveals the shifts in the vibrational energies that are caused by thermally induced fluctuations in molecular structure.

Measurements of vibrational fluctuations are providing unique information on the dynamics of molecules1–3 and proteins.4 For example, vibrational echo experiments on CO bound to the active site of myoglobin have revealed how motions of the protein are communicated to the active site.4 CO is bound to heme, a large molecular group that resides in a pocket in the myoglobin protein. Because different parts of a protein have different electrical charges, protein motions produce fluctuating electric fields. The time-dependent electric fields influence the heme, which acts as an antenna vibrating in response to the field. The electric-field-induced changes in the heme cause the heme–CO bond and, thus, the CO vibrational frequencies to fluctuate. Vibrational echo experiments have made it possible to examine the CO frequency fluctuations and, therefore, examine protein dynamics and the communication of dynamics to CO bound to the active site of myoglobin.
FIGURE 1. Vibrational echo data taken at 10 K on a vibration of carbon monoxide (CO) (1985 cm⁻¹ frequency, 5.04-µm wavelength) bound to the molecule tungsten hexacarbonyl in 2-methylpentane glass, with exponential fit. The fact that the echo signal decays exponentially allows researchers to infer that the CO vibrational energy evolves in time over a range of energies in a manner such that subsequent changes in energy do not depend on the details of previous changes. In other words, a memory of the system's path to reach a given energy does not determine the future energy path.

Michael D. Fayer
Stanford University
(415-723-4446)

1. A. Tokmakoff and M. D. Fayer, J. Chem. Phys. 102, 2810 (1995).
2. A. Tokmakoff, A. S. Kwok, R. S. Urdahl, R. S. Francis, and M. D. Fayer, Chem. Phys. Lett. 234, 289 (1995).
3. A. Tokmakoff, D. Zimdars, R. S. Urdahl, R. S. Francis, A. S. Kwok, and M. D. Fayer, J. Phys. Chem. 99, 13310 (1995).
4. C. W. Rella, A. Kwok, K. D. Rector, J. R. Hill, H. A. Schwettman, D. D. Dlott, and M. D. Fayer, Phys. Rev. Lett. 77, 1648 (1996).
5. C. W. Rella, K. D. Rector, A. Kwok, J. R. Hill, H. A. Schwettman, D. D. Dlott, and M. D. Fayer, J. Phys. Chem. 100, 15620 (1996).

Fast Protein Folding

Massive DNA sequencing over the past decade has led to the discovery of tens of thousands of new protein sequences, producing a revolution in all aspects of the physics and chemistry of proteins. Christian Anfinsen, in his Nobel Prize winning experiments at the National Institutes of Health,1 showed that a protein will spontaneously fold from a completely random configuration of the amino acid chain to form a unique, compact, biologically functional, 3-D structure—the so-called "native" protein. Of the astronomically large number of possible configurations (~10⁵⁰ for a protein consisting of 100 amino acids), only one is found for the native protein. Understanding exactly how this unique structure forms has become a great challenge to physical scientists. Recent theoretical work describes folding as motion of the chain of amino acids on a rough, funnel-shaped energy landscape as it searches through the enormous number of possible configurations on its way to the unique native structure.2 The theoretical advances have stimulated chemical physicists to develop new experimental methods to study the motions of the folding protein. This research has led to the first glimpse of the initial events in protein folding, providing new insights into this fascinating process.3

Folding and unfolding can be initiated by changing the temperature or changing the concentration of a chemical called a "denaturant." In the past, experiments using a denaturant have been limited to millisecond time resolution by the time required to mix solutions and transfer them to an observation chamber before stopping the flow. Millisecond time resolution is insufficient to observe the early events in protein folding, such as collapse of the amino acid chain and the formation of helices. Moreover, with conventional mixing methods, formation of the complete native structure often occurs before any observations can be made. Methods with much better time resolution are needed.

Improvements in mixing technology have helped considerably. An improvement of almost two orders of magnitude in time resolution has recently been achieved by pumping liquids through a small gap (~10 µm) to create turbulence that markedly reduces the mixing time.4,5 Instead of stopping the flow after filling an observation chamber, the solutions flow continuously. The folding reaction is monitored at different positions (corresponding to different times) along the microjet of liquid emerging from the mixer.



In experiments on cytochrome c, collapse of the protein was measured by using fluorescence methods to measure the change in distances within the molecule.4 Partial collapse of the unfolded structures in response to the decrease in concentration of denaturant occurs in less than 50 µs. This is followed by formation of the native structure in ~600 µs, a new speed record in protein folding. The observation that folding follows a single, exponential time course suggests that the interconversion among the myriad of structures in the unfolded state is fast compared to the rate of crossing the energy barrier separating the unfolded and folded states, consistent with the less-than-50-µs partial collapse time.

Dramatic improvements in time resolution can be obtained by initiating folding with a laser pulse. The problem has been to find the right system. So far two methods have been successful. One takes advantage of the fact that some proteins unfold at low as well as high temperatures. This permits rapid initiation of folding6 as well as unfolding7 by heating the solution with a short pulse of laser light. In the second method a laser pulse produces a very fast chemical reaction in the unfolded state under solution conditions where the product of the reaction will fold to the native conformation.8,9 Using this approach, the laser dissociation of the carbon monoxide complex of the 104-amino-acid protein cytochrome c has been exploited to study folding events on the nanosecond–microsecond time scale (Fig. 2).8,10 In these experiments nanosecond snapshots of the optical absorption spectra were used to measure the time required for two regions of a polypeptide chain separated by ~50 amino acids to diffuse together (Fig. 2). These results led to an estimate of a lower bound for the folding time of ~1 µs. (Because of the much larger-scale motions involved, this time is about one-million-fold longer than the shortest times observed for chemical reactions of small molecules.) Knowing this lower bound is important for using experimental folding times to estimate the height of the energy barrier that separates the unfolded and folded states, an essential parameter for theoretical descriptions of the energy surface.2–4 It is also important for designing new experiments to study fast-folding proteins.
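The role of that lower bound can be stated compactly. In a simple two-state picture (a standard framework, assumed here rather than taken from the article), the folding time τf and the barrier height ΔG‡ are related by

    \tau_f \approx \tau_0\, e^{\Delta G^{\ddagger}/k_B T},

so a measured τf, together with the ~1 µs diffusion-limited prefactor τ₀ estimated above, can be inverted for ΔG‡.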

The "fast folding field" is in its infancy, but it has already demonstrated that there is much to be discovered. There is enormous room for improvement in the existing methods and a need to develop totally new technologies. Fast protein folding studies will not only uncover basic mechanisms, but may have additional biological significance. One speculation is that evolution selects amino acid sequences not only to form 3-D structures that perform a specific biological function, but also to rapidly form compact structures so as to avoid aggregation and precipitation.3,10 Finally, the physics and physical chemistry of fast folding could contribute to understanding the pathology of human diseases caused by protein misfolding.

William A. Eaton
National Institutes of Health

(301-496-6030)

1. C. B. Anfinsen, Science 181, 223 (1973).
2. J. D. Bryngelson, J. N. Onuchic, N. D. Socci, and P. G. Wolynes, Proteins 21, 167 (1995).
3. W. A. Eaton, P. A. Thompson, C.-K. Chan, S. J. Hagen, and J. Hofrichter, Structure 4, 1133 (1996).
4. C.-K. Chan, Y. Hu, S. Takahashi, D. L. Rousseau, W. A. Eaton, and J. Hofrichter, Proc. Natl. Acad. Sci. USA (in press).
5. S. Takahashi, S.-R. Yeh, T. Das, C.-K. Chan, D. S. Gottfried, and D. L. Rousseau, Nature Structural Biology (in press).
6. R. M. Ballew, J. Sabelko, and M. Gruebele, Proc. Natl. Acad. Sci. USA 93, 5759 (1996).
7. C. M. Phillips, Y. Mizutani, and R. M. Hochstrasser, Proc. Natl. Acad. Sci. USA 92, 7292 (1995).
8. C. M. Jones, E. R. Henry, Y. Hu, C. K. Chan, S. D. Luck, A. Bhuyan, H. Roder, J. Hofrichter, and W. A. Eaton, Proc. Natl. Acad. Sci. USA 90, 11860 (1993).
9. T. Pascher, J. P. Chesick, J. R. Winkler, and H. B. Gray, Science 271, 1558 (1996).
10. S. J. Hagen, J. Hofrichter, A. Szabo, and W. A. Eaton, Proc. Natl. Acad. Sci. USA 93, 11615 (1996).

Snapshots of Electrons at Interfaces

Whether by chance or design, the boundary or interface where two substances meet often dominates the performance of technologically important materials.

For example, interfaces between metals and the semiconductor silicon not only abound in computer microchips but control the operation of the myriad tiny transistors comprising the chip. The all-important role of interfaces arises because of the abrupt change in structure (where the atoms are and how they are bonded to one another) at these locations. These changes invariably affect the behavior of electrons at the interfaces, and these in turn affect overall properties such as chemical reactivity, electrical conductivity, and magnetism.

To understand the fundamentals of how electrons behave at interfaces and, in particular, to investigate how electron behavior changes with time (electron dynamics), researchers at the University of California at Berkeley and the Lawrence Berkeley National Laboratory led by Charles Harris have been studying electrons at interfaces using lasers whose wavelength can be adjusted (tunable lasers) while emitting ultrashort pulses of light lasting only a few femtoseconds (fs). After each pulse of laser light, they record the spectrum of kinetic energies of electrons escaping from the sample as the result of absorbing the laser light, a process known as photoelectron spectroscopy. By measuring photoelectron spectra at different times after the laser pulse, they were able to study the dynamics of electrons at interfaces at intervals as short as 30 fs. Additional information comes from measuring the electron emission angle as well.
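As a rough illustration of how a lifetime can be extracted from such delay-resolved spectra, consider the following sketch (hypothetical throughout: the 200-fs lifetime, noise level, and fitting recipe are ours, with only the 30-fs delay step taken from the text):

import numpy as np

rng = np.random.default_rng(0)
tau_true = 200.0                        # assumed lifetime (fs)
delays = np.arange(0.0, 1200.0, 30.0)   # pump-probe delays in 30-fs steps
signal = np.exp(-delays / tau_true) * (1.0 + 0.02 * rng.standard_normal(delays.size))

# For a single-exponential decay, a straight-line fit of log(signal)
# against delay gives a slope of -1/tau.
slope, _ = np.polyfit(delays, np.log(signal), 1)
print(f"fitted lifetime = {-1.0/slope:.0f} fs")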

Except for those electrons tightly bound to atomic nuclei, electrons in metals are normally free to roam throughout the material and are said to be "delocalized," because the quantum-mechanical wavefunctions that describe these electrons extend throughout the metal. One important thing that happens near an interface (including a surface, which is the special case of an interface between a solid and a gas or vacuum) is that the delocalized electrons can become confined (they become localized) in one or more directions.

FIGURE 2. Measuring the rate k_D+uni for two regions of an unfolded protein to come in contact by diffusion (adapted from Ref. 10). A laser pulse dissociates carbon monoxide (not shown) from an iron-containing molecule known as a "heme" (ellipsoid), which is covalently connected to a chain on the protein. (Upper) In the "unimolecular" experiment (involving chemical interactions within a single molecule), the amino acid methionine (sphere), separated by about 50 amino acids along the protein chain, can now bind at the site vacated by the carbon monoxide. (Lower) The equivalent "bimolecular" experiment (involving interactions between two separate molecules) can be carried out with free methionines and a heme (ellipsoid) attached to a small amino acid chain cut from the protein. Since the rate of chemical bond formation k_gem (and the rate of chemical bond dissociation k_diss) is presumed to be the same in the two experiments, the unimolecular diffusion-limited rate k_D+uni (3 × 10^4 s^-1) can be calculated from the observed overall unimolecular and bimolecular binding rates (using well-established theory to calculate the missing quantities).
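The bookkeeping hidden in the phrase "using well-established theory" can be illustrated with a deliberately simplified two-step scheme (our sketch, not the actual theory of Ref. 10): diffusion brings the two chain regions into contact, and chemical bond formation then competes with diffusive separation. All rate values below are placeholders except k_Dplus, which the caption quotes as about 3 × 10^4 s^-1.

# Two-step contact-then-bind scheme (illustrative only):
#   open chain  --k_Dplus-->  contact pair  --k_gem-->  bound
#              <--k_Dminus--
# A steady-state treatment of the contact pair gives
#   k_obs = k_Dplus * k_gem / (k_Dminus + k_gem),
# so the diffusion-limited rate can be recovered from the observed rate
# once k_gem and k_Dminus are pinned down by the bimolecular experiment.

k_Dplus = 3.0e4    # diffusion-limited contact rate (1/s), from the caption
k_gem = 1.0e7      # bond-formation rate (1/s), assumed placeholder
k_Dminus = 2.0e7   # contact-pair separation rate (1/s), assumed placeholder

k_obs = k_Dplus * k_gem / (k_Dminus + k_gem)
k_Dplus_recovered = k_obs * (k_Dminus + k_gem) / k_gem
print(f"k_obs = {k_obs:.3g} 1/s; recovered k_Dplus = {k_Dplus_recovered:.3g} 1/s")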

FIGURE 3. The figure depicts the two-photon photoelectron process used to study the two classes of interface discussed in the text. The first, ultraviolet photon (a photon is the smallest unit or package of light), labeled 2hν, excites an electron from the bulk of the metal to the region of the metal–overlayer–vacuum interface. The second, visible photon, labeled hν, ejects the excited interface electron into the vacuum, where its angle and energy are measured. The heavy black lines plot the energy of the electron as a function of distance from the surface. The negative-affinity interface has a high-energy barrier in the overlayer which tends to push the electron into the vacuum. On the other hand, the positive-affinity case has a favorable (lower) energy in the overlayer region, and the electron tends to reside there.

To take one example, an extra electron near a metal interface results in the buildup of a separation of positive and negative electrical charge perpendicular to the interface. This separation, or polarization, gives rise to new quantum states called image states. Electrons in image states are held close to the interface but can move relatively freely parallel to it.

Electrons from the image states are sensitive indicators of changes at the interface, so it is the electrons emitted from these states that are studied with the laser photoelectron spectroscopy technique (schematically illustrated in Fig. 3).1–4 Specifically, the Berkeley group investigated the case of a metal covered with a thin, electrically insulating layer (dielectric overlayer) while the sample was in a vacuum. The overlayer tends to attract or repel electrons, a property called the electron affinity, which depends on its composition and associated electrical properties.

When the affinity is negative, the resulting repulsive potential tends to push the electrons in image states away from the metal, towards the interface between the overlayer and the vacuum, as illustrated in Fig. 3(a). The potential is also responsible for a barrier at the dielectric layer surface through which the electrons must quantum-mechanically tunnel to return to the metal.4 These tunneling times can be directly measured with the techniques developed by the group at Berkeley.

In addition, if both the angle and time dependence of the photoelectrons are measured, they can show how an initially delocalized electron becomes localized. The Berkeley experiments have demonstrated that the localization occurs by means of small shifts in the positions of the positively charged atomic nuclei around the negatively charged electron (a local deformation of the crystal lattice).5 The experiments also measure the lattice reorganization dynamics as they occur in real time.

In contrast, for positive-affinity interfaces, the resulting attractive potential associated with the overlayer forms potential wells that are bounded by the metal on one side and the image potential in the vacuum on the other [Fig. 3(b)].6


The two-dimensional normal state of the high-temperature superconductors is highly unusual, characterized by optical, magnetic, and transport properties unlike anything previously studied. These normal-state properties are sufficiently strange that many physicists believe they hold the key to eventually understanding the physical mechanism for high-temperature superconductivity.1

One example is the normal-state resistivity; as the crystal is cooled, the "in-plane" resistivity decreases linearly with temperature, while the "out-of-plane," or "c-axis," resistivity increases rapidly with decreasing temperature.2 This contrasting behavior between the "metallic" in-plane resistivity and "insulating" c-axis resistivity persists down to the superconducting transition temperature Tc. Below Tc the behavior of the normal-state resistivities is hidden by the presence of superconductivity. A recent series of experiments3 has utilized an intense magnetic field exceeding 60 T (600,000 G, or 1.5 million times the Earth's magnetic field) to destroy the superconductivity and reveal the low-temperature behavior of the unusual normal-state resistivities in the high-temperature superconductors. As shown in Fig. 1, the superconducting transition temperature can be optimized by tuning the concentration of charge carriers. (Charge carriers can be either electrons or holes, their less commonly known positively charged analogs. In most of the high-temperature superconductors, the charge carriers are actually holes.) Varying the hole concentration is accomplished in the case of La2–xSrxCuO4 (LSCO) by varying the Sr content x. Figure 1 shows that the maximum Tc (40 K) in LSCO is achieved for an x of 0.16 (arrow), which is thus called "optimum doping." If the carrier concentration is reduced below optimum doping, Tc is reduced and superconductivity is eventually destroyed. Compounds in this regime are insulators ("INSUL" in Fig. 1); that is, the resistivities increase with decreasing temperature and extrapolate to an infinite resistivity (no current flow) in the zero-temperature limit. In this regime, the compounds exhibit "variable range hopping,"4 a well-known type of conduction in which charge carriers are mostly localized near individual impurities and hop between impurities in response to an applied electric field. Near the transition between insulator and superconductor, both variable range hopping5 and an unusual logarithmic temperature dependence of the in-plane resistivity6 have been reported.

Increasing the carrier concentration above optimum doping also decreases Tc, until the destruction of the superconducting state reveals a metallic phase ("METAL" in Fig. 1), which shares some of the properties of a conventional metal but retains some evidence of the unusual linear temperature dependence of the in-plane resistivity that is evident in samples near optimal doping.7 One would like to study the insulator-to-metal crossover in the normal state; however, once again the superconducting phase ("SUPERCONDUCTING" in Fig. 1) gets in the way, unless an intense magnetic field is applied.

Intense magnetic fields provide a relatively gentle way to destroy superconductivity. In the high-temperature superconductors, however, the unusually large energies involved in forming the superconducting phase require a high temperature or magnetic field to destroy it. When a field of 60 T is applied to LSCO, superconductivity is completely destroyed and new features of the normal state are revealed. Figure 2 shows the most obvious new finding: the insulating phase extends up to optimum doping, and the crossover between insulator and metal occurs very near the same hole concentration that leads to optimum superconductivity, a coincidence which seems difficult to ascribe to mere chance. Furthermore, while the insulating phase in compounds without strong superconductivity (x less than 0.06) exhibits conventional variable-range hopping, the insulating phase in the shaded region of Fig. 2 is revealed to be quite unusual in that both the in-plane and c-axis resistivities diverge as the logarithm of the temperature, as illustrated schematically below.
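The contrast between the two insulating temperature dependences can be tabulated side by side (a schematic of the functional forms only; the prefactor, T0, and the other parameters below are invented, not fit to the data of Ref. 3):

import numpy as np

T = np.array([40.0, 20.0, 10.0, 5.0, 2.0, 1.0])   # temperature (K)

# Mott variable-range hopping in three dimensions: rho ~ exp((T0/T)^(1/4));
# the exponent would be 1/3 for two-dimensional hopping.
rho0, T0 = 1.0, 100.0
rho_vrh = rho0 * np.exp((T0 / T) ** 0.25)

# The unusual behavior near optimum doping: rho grows only as log(1/T).
A, T1 = 0.5, 40.0
rho_log = rho0 * (1.0 + A * np.log(T1 / T))

for Ti, rv, rl in zip(T, rho_vrh, rho_log):
    print(f"T = {Ti:5.1f} K   VRH: {rv:9.2f}   log: {rl:5.2f}")

The table makes plain why the logarithmic behavior is considered so strange for an insulator: the divergence is far weaker than any conventional hopping law.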

Future high-magnetic-field experiments will seek to determine which of the strange new normal-state behaviors revealed in LSCO are common to other high-temperature superconductors. Many of the high-temperature superconductors have such large Tc's (near or above 100 K) that still more intense magnetic fields would be required to perform these experiments, but the puzzle of high-temperature superconductivity provides strong incentive to build still stronger magnets.8

Gregory S. Boebinger
Bell Laboratories, Lucent Technologies

([email protected])

1. For example, see P. W. Anderson, Science 256, 1526 (1992).

FIGURE 1. Schematic phase diagram of the high-temperature superconductors as it applies to La2–xSrxCuO4. At low temperatures, as the carrier concentration is increased, the normal state is first an insulator, which gives way to high-temperature superconductivity, then re-emerges as a metal. The arrow denotes optimum doping, the carrier concentration which gives a maximum superconducting transition temperature. Near optimum doping, the normal state exhibits particularly unusual behavior which is obscured at low temperatures by the superconducting phase.

FIGURE 2. By applying a 60 T magnetic field, the superconducting phase (dashed line) is lifted, and the transition between insulating and metallic normal states is revealed to occur near optimum doping. The behavior of the insulating state is particularly bizarre in the shaded region, where the resistivity diverges logarithmically with temperature.

From measurements of the angular distribution of the photoemission for a series of different overlayer thicknesses, the Berkeley group was able to completely determine the electron quantum states in the overlayer region and to determine how the electron states in the interfacial region changed as the interface was made thicker or thinner on an atomic scale. They were also able to model the results of measurements of the electron dynamics in these states (femtosecond lifetime measurements) in terms of the electron quantum states at the interface (the interfacial conduction band structure) and the overlap of the electron's wave function with the metal.5,7

These measurements, while of a fundamental nature, are directly related to the properties and performance of many new devices, some with potentially large economic impacts. For example, many laboratories are actively investigating structures comprising two metal layers separated by an insulating polymer for use in large, multicolor displays, such as the long-sought flat-screen television. Making long-lived, efficient light emitters depends on understanding what controls the fate of electrons as they move across the interfaces separating the layers.

Charles B. Harris
University of California and
Lawrence Berkeley National Laboratory
(510-642-2814)

1. K. Giesen, F. Hage, F. J. Himpsel, H. J. Reiss, and W. Steinmann, Phys. Rev. Lett. 55, 300 (1985).
2. R. W. Schoenlein, J. G. Fujimoto, G. L. Eesley, and T. W. Capehart, Phys. Rev. Lett. 61, 2596 (1988).
3. D. F. Padowitz, W. R. Merry, R. E. Jordan, and C. B. Harris, Phys. Rev. Lett. 69, 3583 (1992).
4. R. L. Lingle, Jr., N.-H. Ge, R. E. Jordan, J. D. McNeill, and C. B. Harris, Chem. Phys. 205, 191 (1996).
5. R. L. Lingle, Jr., D. F. Padowitz, R. E. Jordan, J. D. McNeill, and C. B. Harris, Phys. Rev. Lett. 72, 2243 (1994); J. D. McNeill, R. L. Lingle, Jr., N. H. Ge, R. E. Jordan, C. M. Wong, and C. B. Harris, in Ultrafast Phenomena X, edited by P. F. Barbara, J. G. Fujimoto, W. H. Knox, and W. Zinth (Springer, New York, 1996), p. 445.
6. J. D. McNeill, R. L. Lingle, Jr., R. E. Jordan, D. F. Padowitz, and C. B. Harris, J. Chem. Phys. 105, 3883 (1996).
7. M. Wolf, E. Knoesel, and T. Hertel, Phys. Rev. B 54, 5298 (1996).

CONDENSED-MATTER PHYSICS

Normal-State Properties of High-Temperature Superconductors at Low Temperatures. To better understand superconductivity in copper oxide materials, it is desirable to study how charge carriers move under normal-state (i.e., nonsuperconducting) conditions. Above the transition temperature this is no problem, but at low temperatures a way must be found to suppress the superconductivity that would otherwise develop. The answer is the use of high magnetic fields. Gregory Boebinger describes new experiments in which the very unusual normal-state conduction properties of superconductors are revealed at low temperatures.

Spin Ladders. The magnetic properties of arrays of atoms in one-dimensional chains and two-dimensional planes are very different. To understand the crossover from one- to two-dimensional systems, physicists have devised low-dimensional materials from "ladders" of copper–oxide chains. The magnetic interactions among the atoms (arising from the atoms' magnetic moments, or spins) in various configurations can be quite complicated. Martin Greven reports on theoretical, experimental, and numerical studies of such spin ladders.

Phillip F. Schewe
AIP

Revealing the Low-Temperature Normal State of High-Temperature Superconductors

At temperatures above the superconducting transition temperature, superconductors exhibit a so-called "normal state," in which the motion of charge carriers is accompanied by energy dissipation, which leads to the finite resistivity observed in ordinary metals. In high-temperature superconductors, the normal state is two-dimensional: charge carriers travel easily within parallel planes of the crystal that consist of copper and oxygen atoms, while motion between different copper–oxygen planes is difficult.


A surprising experimental discovery has been that the presence of nonmagnetic sites in even-chain ladders leads to an enhancement of the tendency of the spins to order, rather than to a diminution of the correlations, as one might have intuitively expected.9 In one theoretical study10 it was argued that random depletion of spins leads to a destructive interference of quantum fluctuations, and thus to the disappearance of the spin-liquid ground state and the concomitant enhancement of classical correlations. These observations have been verified by recent numerical simulations.11

Finally, theoretical studies have suggested that even-chain ladders may possibly become superconducting when they are doped with charge carriers.5 One of the most exciting recent developments has thus been the discovery of superconductivity in the doped S = 1/2 ladder system Sr0.4Ca13.6Cu24O41.84 under pressure.12 This material consists of both layers of chains and layers of two-chain ladders, and was found to have a maximum superconducting transition temperature of 12 K (onset) at a pressure of 3 GPa.

At present, it is still an open question whether the superconductivity in this material can be attributed to the ladders. It also remains a challenge to find variants of the ladder systems that are superconducting at ambient pressure, as well as to grow single crystals. Once these quasi-1D superconductors exist, a wealth of issues may be addressed experimentally. In particular, researchers hope that from such studies new insights on the high-temperature superconductivity in the related charge-carrier-doped, square-lattice materials will emerge.

Martin Greven
MIT

(617-258-8651)

Robert Birgeneau
MIT

(617-253-8900)

Uwe-Jens Wiese
MIT

(617-253-4853)

1. H. A. Bethe, Z. Phys. 71, 205 (1931).
2. F. D. M. Haldane, Phys. Lett. A 93, 464 (1983).
3. M. Greven et al., Z. Phys. B 96, 465 (1995); U.-J. Wiese and H.-P. Ying, ibid. 93, 147 (1994); P. Hasenfratz and F. Niedermayer, Phys. Lett. B 268, 231 (1991); M. Makivić and H.-Q. Ding, Phys. Rev. B 43, 3562 (1991); S. Chakravarty, B. I. Halperin, and D. R. Nelson, ibid. 39, 2344 (1989).
4. D. C. Johnston et al., Phys. Rev. B 35, 219 (1987); Z. Hiroi et al., J. Solid State Chem. 95, 230 (1991).
5. For a review, see E. Dagotto and T. M. Rice, Science 271, 618 (1996).
6. M. Greven, R. J. Birgeneau, and U.-J. Wiese, Phys. Rev. Lett. 77, 1865 (1996).
7. D. G. Shelton, A. A. Nersesyan, and A. M. Tsvelik, Phys. Rev. B 53, 8521 (1996).
8. S. Chakravarty, Phys. Rev. Lett. 77, 4446 (1996); G. Sierra (preprint).
9. M. Azuma et al. (preprint).
10. M. Sigrist and A. Furusaki, J. Phys. Soc. Jpn. 65, 2385 (1996); N. Nagaosa, A. Furusaki, M. Sigrist, and H. Fukuyama (preprint).
11. Y. Motome et al., J. Phys. Soc. Jpn. 65, 1949 (1996); Y. Iino and M. Imada (preprint); M. Greven and R. J. Birgeneau (unpublished work).
12. M. Uehara et al., J. Phys. Soc. Jpn. 65, 2764 (1996).

CRYSTALLOGRAPHY

A Brilliant Star in the Midwest Illuminates the Future of Macromolecular Crystallography

The Advanced Photon Source (APS), the first third-generation synchrotron source in the hard x-ray range in America, was dedicated on May 1, 1996. This milestone signals the beginning of a bright future for the community of structural investigators in general, and in particular for macromolecular crystallographers, scientists devoted to the study of the structure and function of proteins and other large molecules.

Synchrotrons are devices that accelerate charged particles within circular orbits; synchrotron radiation is the electromagnetic radiation given off tangentially from the ring as the particles are confined to their circular orbits. Because the particles travel around the ring in bunches, the radiation beam itself consists of pulses approximately corresponding to the time it takes for the bunches to complete an orbit. The critical characteristics of synchrotron radiation for macromolecular crystallography1 are high brilliance, tunability, and the time-resolved structure of the beam. Brilliance is an indication of the intensity (photons/second) of the beam as seen on a unit of area, within a unit of angle, and within a narrow band (0.1%) of accepted wavelengths. High brilliance permits collection of quality data from small and weakly diffracting crystals. Tunability is the ability to "dial in" the wavelength of the x rays (akin to selecting a color in the visible spectrum) to perturb certain heavy atoms in the atomic structure. This dramatically facilitates the determination of the protein structure by collecting data at different wavelengths and combining the observations by the method of multiwavelength anomalous diffraction (MAD).2 The time-resolved nature of the beam can be utilized to analyze enzymatic processes, typically on the millisecond-to-microsecond time scale.
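In the convention usually adopted for synchrotron sources (a standard convention, not spelled out in this article), brilliance collects all of these factors into a single figure of merit:

brilliance = photons per second / [(mm^2 of source area) x (mrad^2 of beam divergence) x (0.1% wavelength bandwidth)]

A source is therefore "brilliant" only if it is simultaneously intense, small, well collimated, and spectrally concentrated.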

It is often not realized that the dramatic increase in the number and complexity of the macromolecular structures determined in the last decade or so is, in no small measure, due to the availability of tunable x-ray sources of high brilliance (see Fig. 1).

FIGURE 3. Schematic illustration of spin ladders. (a) A single one-dimensional chain (n = 1); (b) ladders composed of two (n = 2) and three (n = 3) chains; (c) a two-dimensional square lattice, which may be thought of as an infinite array of coupled chains (n → ∞).

These two different types of quantum ground states would lead to fundamentally different thermal properties for integer and half-integer spin chains. There is now much evidence for the correctness of Haldane's famous conjecture, which initially took many researchers by complete surprise.

For the two-dimensional (2D) analog of the 1D chain, a configuration referred to as a square-lattice Heisenberg antiferromagnet, it has been established over the past decade that the ground state is fully ordered for all values of spin.3 Thus the 1D and 2D systems behave very differently. What happens if the dimensionality of an S = 1/2 spin system is somewhere between 1D and 2D? To examine this intriguing question, physicists have studied spin ladders, which are formed by coupling together two, three, or any number (n) of chains of atoms (see Fig. 3). Actual S = 1/2 spin ladders have been realized in the laboratory, such as the compounds V–O–P (n = 2) and Sr–Cu–O (n = 2, 3, ...).4

The goal now is to unravel the behavior as one moves from the slowly decaying antiferromagnetic correlations of the S = 1/2 chain to the long-range order of the square lattice. This dimensional crossover is surprisingly complex because, as has been discovered, the behavior of S = 1/2 ladders is fundamentally different for even and odd numbers of coupled chains.5 Ladders with an odd number of chains have slowly decaying correlations in the ground state, and thus behave much like a single S = 1/2 chain. On the other hand, ladders with an even number of chains exhibit a spin-liquid ground state characterized by correlations that decay exponentially fast, and thus behave much like integer-spin chains! This behavior is at variance with the smooth dimensional crossover that one would expect based on classical intuition.
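The even/odd dichotomy can be stated numerically. The sketch below contrasts the two decay laws (a schematic of the functional forms, not a simulation from Ref. 5; the 1/r power law and the correlation length are illustrative choices of ours):

import numpy as np

xi = 5.0   # assumed correlation length for an even-chain ladder (lattice units)

# Odd-chain ladders: slow power-law decay of spin correlations, C(r) ~ 1/r.
# Even-chain ladders: exponential (spin-liquid) decay, C(r) ~ exp(-r/xi).
for r in (1, 5, 10, 25, 50):
    print(f"r = {r:2d}: odd-chain ~ {1.0/r:.3f}   even-chain ~ {np.exp(-r/xi):.2e}")

By r = 50 lattice spacings the power law has fallen only to 0.02, while the exponential is smaller by several orders of magnitude, which is the sense in which even-chain ladders "behave much like integer-spin chains."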

The balance of this note will treat recent developments in the field of spin ladders. Our current understanding of the properties of the S = 1/2 square-lattice Heisenberg antiferromagnet represents a triumph of the synergism between experiment, theory, and numerical methods.3 Motivated by the results for the square lattice, and with the intention of providing a basis of comparison for future theory and experiment, the temperature dependence of the antiferromagnetic spin correlations in S = 1/2 Heisenberg ladders has been studied numerically.6 Experiments with a degree of detail similar to those in the square-lattice materials have not yet been carried out. However, a recently developed theory,7 which exploits a mapping of the quasi-1D, two-chain Heisenberg ladder to the 2D Ising model (in which spins can only point up or down), is in very good agreement with the numerically determined low-temperature correlation function. The numerical low-temperature correlation length has furthermore been quantitatively predicted.8

2. A particularly striking example is given in S. Martin, A. T. Fiory, R. M. Fleming, L. F. Schneemeyer, and J. V. Waszczak, Phys. Rev. B 41, 846 (1990).
3. Y. Ando, G. S. Boebinger, A. Passner, T. Kimura, and K. Kishio, Phys. Rev. Lett. 75, 4662 (1995); Y. Ando, G. S. Boebinger, A. Passner, N. L. Wang, C. Geibel, and F. Steglich, ibid. 77, 2065 (1996); G. S. Boebinger, Y. Ando, A. Passner, T. Kimura, M. Okuya, J. Shimoyama, K. Kishio, K. Tamasaku, N. Ichikawa, and S. Uchida, ibid. 77 (1996).
4. B. Ellman, H. M. Jaeger, D. P. Katz, T. F. Rosenbaum, A. S. Cooper, and G. P. Espinosa, Phys. Rev. B 39, 9012 (1989).
5. K. Karpinska, A. Malinowski, M. Z. Cieplak, S. Guha, S. Gershman, G. Kotliar, T. Skoskiewicz, W. Plesiewicz, M. Berkoski, and P. Lindenfeld, Phys. Rev. Lett. 77, 3033 (1996).
6. N. W. Preyer, M. A. Kastner, C. Y. Chen, R. J. Birgeneau, and Y. Hidaka, Phys. Rev. B 44, 407 (1991); T. W. Jing, N. P. Ong, T. V. Ramakrishnan, J. M. Tarascon, and K. Remschnig, Phys. Rev. Lett. 67, 761 (1991).
7. A. P. Mackenzie, S. R. Julian, D. C. Sinclair, and C. T. Lin, Phys. Rev. B 53, 5848 (1996).
8. G. Boebinger, Physics Today 49, 36 (June 1996).

Spin Ladders

Low-dimensional magnets, materials in which magnetism arises from a configuration (such as a chain or plane) of atoms with a dimensionality less than three, exhibit fascinating collective properties at low temperatures, largely owing to the increased importance of quantum mechanical effects. Research in this area is bringing new insights to the study of magnetism and may also lead to the development of new materials, especially superconductors.

In the 1930s, Bethe studied what could be considered one-dimensional (1D) magnetism. He looked at the Heisenberg-type interactions of atoms in a simple 1D antiferromagnetic chain; that is, the spins of neighboring atoms (each with a value of S = 1/2) prefer to orient themselves in opposite directions.1 He found that because of quantum fluctuations (the propensity of spins to spontaneously flip occasionally) the correlation of spins along the chain would slowly decay with increasing distance, even at a temperature of absolute zero.

In the 1980s, Haldane suggested that chains of atoms with integer spin (S = 1, 2, ...) should behave very differently from those with half-integer spin values (S = 1/2, 3/2, etc.).2 For integer spin values, he predicted that the ordering of spins would decrease much faster, indeed exponentially fast with distance, and that the spins would essentially constitute a quantum mechanical "spin liquid."


In this respect alone, the APS marks a turning point in the development of macromolecular crystallography. The discovery of x rays by Wilhelm Konrad Röntgen 100 years ago has had a tremendous impact on our knowledge of the material world that surrounds us. Since Röntgen's discovery, the brilliance of available x-ray sources has increased a trillion-fold (10^12), and the impact that the next-generation x-ray sources will have on our understanding of the material world can only be a matter of conjecture.

The first protein crystallography stations using synchrotron radiation operated in the so-called "parasitic" mode, utilizing the radiation given off by synchrotrons designed for experiments in high-energy physics. These first-generation synchrotrons provided approximately 10,000 times more brilliance than the conventional sources of the time. Then a new generation of synchrotron sources came online in the early 1980s, specifically designed to produce more intense x-ray beams and achieving nearly an additional 10,000-fold increase in brilliance (the Daresbury Synchrotron Radiation Source, SRS, England, and the National Synchrotron Light Source, NSLS, Upton, NY).

There are two general considerations that attest to the future importance of third-generation synchrotron x-ray sources like the ALS (Berkeley), ESRF (Grenoble), APS (Chicago), and SPring-8 (currently under construction in Japan). On the technical side, these facilities have been designed specifically to generate tunable x-ray beams of unprecedented brilliance (an additional 10,000-fold over second-generation sources). On the sociological side, these installations have been planned and executed with the large and diverse community of macromolecular crystallographers and structural scientists in mind. It is estimated that 2000 researchers will visit the APS per year, with an approximate floating population of 300–400 investigators at any one time. The planning team of the APS has provided additional space on the periphery of the ring for offices and laboratories (laboratory office modules, or LOMs) adjacent to the experimental hall. These clusters of five triangular extensions give the APS its unique serrated appearance. A user-residence facility located within half a kilometer of the storage ring will provide accommodations for up to 240 users and computer links to the experimental hall.
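The successive gains quoted here and above compound neatly (a back-of-the-envelope check of the article's own numbers, nothing more):

# Brilliance gains over conventional x-ray tubes, as quoted in the text:
gain_first_generation = 1e4    # parasitic operation on high-energy rings
gain_second_generation = 1e4   # dedicated sources of the early 1980s
gain_third_generation = 1e4    # ALS, ESRF, APS, SPring-8

total = gain_first_generation * gain_second_generation * gain_third_generation
print(f"cumulative gain ~ {total:.0e}")   # ~1e12

This is consistent with the trillion-fold (10^12) increase since Röntgen cited earlier.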

For experiments in the realm of macromolecular structure, four Collaborative Access Teams, or CATs, are currently building experimental stations with different but complementary interests. Bio-CARS, within the Consortium for Advanced Radiation Sources (at the University of Chicago), will construct three experimental stations designed for crystals with very large unit cells, such as crystalline samples of viruses and aggregates of molecules such as ribosomes or ribosomelike structures (ribosomes are RNA-containing biomolecules involved in protein synthesis). The stations will allow data to be collected in both monochromatic (single-wavelength) and polychromatic (many-wavelength) modes, including MAD experiments involving measurements at different wavelengths.

The Structural Biology Center (SBC-CAT, Argonne National Laboratory) is designed as a user facility for macromolecular crystallographers. The technical facilities have been designed specifically to support (i) MAD experiments; (ii) data collection from microcrystals; (iii) data collection from crystals of large macromolecular aggregates; and (iv) routine rapid data collection from similar or related crystal structures.

The Industrial Macromolecular Crystallography Association (IMCA, a consortium of 12 chemical and pharmaceutical companies), together with the Illinois Institute of Technology (IIT, Chicago), forms the IMCA-CAT. This group is pursuing the construction of two beamlines (one sector) with the objective of collecting diffraction data from macromolecular crystals of biomedical interest to aid in the companies' drug-design and protein-engineering programs.

Bio-CAT (IIT) is planning experimental stations to study the structure and dynamics of aggregates whose atoms or molecules are intrinsically disordered. Systems of interest include biological structures such as cell membranes, fibers, and biopolymers.

Macromolecular crystallography has reached a stage where every year several hundred protein and nucleic acid structures are routinely reported and deposited into the Protein Data Bank, which currently holds over 5000 entries. This macromolecular repository celebrated its 25th year of existence in 1996.

Within the multitude of lectures, presentations, and posters (2250 abstracts from 65 countries) at the XVII Congress and General Assembly of the International Union of Crystallography (held in Seattle on August 8–17, 1996), outstanding crystallographic results were discussed or presented with impact in fields as diverse as cell cycle regulation (involving proteins such as cyclin) and cell death (BCL-XL),3 energy production (cytochrome c oxidase),4 catalytic hammerhead ribozymes5,6 (enzymes made up of RNA, not protein), eukaryotic transcription complexes,7 protein folding (chaperonins8), and the action of cell toxins, as well as many more.

A striking example of the impact of synchrotron radiation in elucidating the structure of aggregates of biological macromolecules is the work of Paul B. Sigler and colleagues7 of the Howard Hughes Medical Institute at Yale University on the structure of the three-part molecular complex between the piece of DNA called the TATA promoter, the molecule TFIIA, which is involved in protein transcription, and the TATA-binding protein TBP. This complex is a critical component of the machinery in the cell that translates the genetic information contained in DNA into the proteins that make up the majority of the chemically active parts of organisms. The structure was determined by employing synchrotron radiation tuned to two different heavy atoms in the structure: selenium in the protein part and bromine in the DNA. The macromolecular aggregate contains a bundle of four helices that extends away perpendicularly from the TBP/TATA complex and allows for further recognition by other participants in the process of protein production in cells. In addition, the pulsed nature of the x-ray beam has recently permitted an analysis of the release of carbon monoxide from myoglobin, the oxygen-carrying protein in skeletal muscle, on the time scale of a billionth of a second (a nanosecond), using synchrotron radiation from the ESRF.9

A truly brilliant star has been born in the Midwest sky which, together with the ones already existing in the world and the ones currently under construction, will illuminate our path towards a better understanding of the details and subtleties within the atomic landscapes, islands, continents, and universes that surround us.

Cele Abad-Zapatero
Abbott Laboratories, Abbott Park, IL

([email protected])

1. J. R. Helliwell, Macromolecular Crystallography with Synchrotron Radiation (Cambridge University Press, Cambridge, 1992).
2. W. Hendrickson, Physics Today 48, 42 (November 1995).
3. S. Muchmore, M. Sattler, H. Liang, R. P. Meadows, J. E. Harlan, H. S. Yoon, D. Nettesheim, B. S. Chang, C. B. Thompson, S.-L. Wong, S.-C. Ng, and S. Fesik, Nature 381, 335 (1996).
4. T. Tsukihara, H. Aoyama, E. Yamashita, T. Tomizaki, H. Yamaguchi, K. Shinzawa-Itoh, R. Nakashima, R. Yaono, and S. Yoshikawa, Science 272, 1136 (1996).
5. W. G. Scott, J. T. Finch, and A. Klug, Cell 81, 991 (1995).
6. J. H. Cate, A. R. Gooding, E. Podell, K. Zhou, B. L. Golden, C. E. Kundrot, T. R. Cech, and J. A. Doudna, Science 273, 1678 (1996).
7. J. H. Geiger, S. Hahn, S. Lee, and P. B. Sigler, Science 272, 830 (1996).
8. J. F. Hunt, A. J. Weaver, S. J. Landry, L. Gierasch, and J. Deisenhofer, Nature 379, 37 (1996).
9. V. Srajer, T. Teng, T. Ursby, C. Pradervand, Z. Ren, S. Adachi, W. Schildkamp, D. Bourgeois, M. Wulff, and K. Moffat, Science 274, 1726 (1996).

PHYSICS EDUCATION

Recent Developments in Physics Education Research

Physics education research (PER) focuses on improving instruction by studying how students learn the complex concepts of physics. Although this type of work has been going on for decades, it is only within the past few years that the field has seen substantial growth. An expanding international body of scholars has been striving to establish PER as a recognized subfield of physics. This has been promoted by disseminating findings from many studies which highlight the ineffectiveness of traditional instruction. Upon discovering substantial weaknesses in understanding among even their best students, faculty are usually eager for a scientific approach to addressing the problem. This has led to widespread adoption of newer, student-centered pedagogies based on the findings of rigorous studies. The strong interaction between PER, curriculum development, and teacher education is a hallmark of the field.

There continues to be a deepening examination of student understanding of many topical areas in physics, including mechanics, optics, electricity, thermodynamics, and even relativity and quantum mechanics.1–3 Teachers at the high school and college levels are probing their students' knowledge through the use of research-based testing instruments like the Force Concept Inventory, the Mechanics Baseline Test, and the Test of Understanding Graphs in Kinematics. Newer projects are developing similar instruments4 for other topics. Explorations of knowledge structure,5 student attitudes,6 and the viability of specific instructional approaches are also underway.7–9 Most indicate that computers do not make a sizable impact on learning unless they are embedded within a carefully structured curriculum that emphasizes student activity and involvement with the material. In fact, PER studies10–12 generally indicate that the most successful instructional techniques appear to be those that account for preexisting ideas brought into the classroom and that place students in situations where they must reexamine their own understanding in light of hands-on activities that challenge their intuition.

FIGURE 1. Analysis of the structures of macromolecules using synchrotron radiation. Proteins or macromolecules of biomedical interest are cloned, expressed, and produced in milligram quantities using recombinant DNA technology. Samples are crystallized (top left) and exposed to x rays, and complete diffraction data (bottom left) are collected to determine or "solve" the structure. The availability of tunable x-ray sources of high brilliance such as the APS (top right) dramatically accelerates the process of structure determination of biological macromolecules of various sizes (bottom right). Knowledge of the three-dimensional structures of protein targets is extremely valuable for designing more effective drugs and for engineering proteins with specific and novel catalytic properties. (Photos courtesy of Cele Abad-Zapatero, Abbott Laboratories, and Susan Barr, Advanced Photon Source, Argonne National Laboratory.)


Several new research-based textbooks12,13 and instructional software packages have been published recently. National meetings of the American Association of Physics Teachers have also seen rapid growth in the number of talks, tutorials, and workshops dealing with the findings and classroom applications of PER. Teacher education is expanding at all levels, by virtue of several important efforts for elementary teachers and dozens of projects involving college faculty. One of the most exciting approaches has been the effort to find cost-effective ways of applying the many pedagogical techniques whose worth has been demonstrated in small classrooms to large-enrollment sections.

The Raleigh Conference on Issues in Physics Education Research, held in October of 1994, was an important milestone for PER. At the meeting, plans were formulated to discuss the establishment of a journal devoted to PER, to develop an electronic network of PER resources, to promote the creation and strengthening of PER groups in physics departments, and to write a position paper on the relationship of PER to other subfields of physics. Since then, good progress has been made on all these fronts. Finding ways to improve the learning of physics through scientifically rigorous investigations is proving to be a fruitful and exciting endeavor.

Robert J. Beichner
North Carolina State University

([email protected])

1. B. Adrian and R. G. Fuller, AAPT Announcer 26, 80 (1996).
2. B. S. Ambrose, S. Vokos, P. S. Shaffer, and L. C. McDermott, AAPT Announcer 26, 80 (1996).
3. D. Styer, Am. J. Phys. 64, 31 (1996).
4. C. J. Hieggelke, D. Maloney, T. O'Kuma, and A. V. Heuvelen, AAPT Announcer 26, 80 (1996).
5. J. K. Mohapatra and B. K. Parida, Int. J. Sci. Ed. 17, 663 (1995).
6. R. Beichner, Am. J. Phys. (in press).
7. R. Hake, AAPT Announcer 26, 61 (1996).
8. R. K. Thornton, AAPT Announcer 26, 57 (1996).
9. E. F. Redish, J. M. Saul, and R. N. Steinberg, Am. J. Phys. (in press).
10. H. G. Weller, J. Res. Sci. Teaching 32, 271 (1995).
11. J. M. Saul, R. N. Steinberg, and E. F. Redish, AAPT Announcer 26, 98 (1996).
12. R. Chabay and B. Sherwood, Electric and Magnetic Interactions (Wiley, New York, 1994).
13. F. Reif, Understanding Basic Mechanics (Wiley, New York, 1995).

NSF Reviews Undergraduate Science Education

On July 11–13, 1996, the National Science Foundation (NSF) and the National Research Council (NRC) sponsored a conference entitled "Shaping the Future: Strategies for Revitalizing Undergraduate Education" in Washington, DC. The conference was guided by two recently released reports. "From Analysis to Action: Undergraduate Education in Science, Mathematics, Engineering, and Technology" summarized the conclusions of a national convocation organized by the NRC and NSF. The April 1995 convocation began the "Year of Dialog," which ran in parallel with a year-long, nationwide review of the status of undergraduate education in science, mathematics, engineering, and technology (SME&T), sponsored by NSF.

The review considered all aspects of undergraduate SME&T education, including the needs of SME&T majors, nonscience majors, and preservice teachers. Institutions ranging from two-year colleges to research-intensive universities participated, with additional input solicited from government and industry. The resulting report, "Shaping the Future: New Expectations for Undergraduate Education in Science, Mathematics, Engineering, and Technology," produced by the Advisory Committee to the NSF Directorate for Education and Human Resources, was the first comprehensive review of undergraduate science education in nearly ten years.

Both reports came to similar conclusions, with the primary imperative being that "all students have access to supportive, excellent undergraduate education in SME&T, and that all students learn these subjects by direct experience with the methods and processes of inquiry." The emphasis is on all students, not just science majors, and on a shift in focus from teaching to learning. Both reports agree that significant improvement will require large-scale changes; hence the focus on institution-wide reform. Further information about NSF's efforts toward undergraduate SME&T reform may be found on the World Wide Web at http://www.her.nsf.gov/HER/DUE/start.html.

Diandra L. Leslie-Pelecky
University of Nebraska-Lincoln

([email protected])

Robert C. Hilborn
Amherst College

([email protected])

Physics Olympiad

The United States Physics Team consisted of 20 of the most capable high school physics students in the country, students with a special interest and ability in physics. Five of them represented the team and their country at the 27th International Physics Olympiad, held in Oslo, Norway in July 1996. The five students represented the U.S. quite effectively, bringing home three gold medals and two bronze medals.

The competition consisted of a five-hour theory exam and a five-hour practical or laboratory exam. The students worked separately, and each was graded on his or her own paper.

The questions included motion of a charged particle in crossed electric and magnetic fields within a coaxial cable, a calculation of the magnitude of the ocean tidal effect caused by the motion of the earth–moon system, and several shorter questions concerning electrical resistors, the downhill motion of a skier, heat flow due to radiation, heat capacity, and current in conducting wires. The experimental question involved a physical pendulum, measurement of the acceleration due to gravity, and use of the same apparatus to determine a magnetic field and to measure the magnetic moment of a permanent magnet.
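To give a flavor of the analysis such an experimental exam calls for, the sketch below extracts g from timed oscillations. It uses the small-angle simple-pendulum relation T = 2π·sqrt(L/g); the actual Olympiad apparatus was a physical pendulum, whose period also involves the moment of inertia, and the length and timing data here are invented:

import numpy as np

L = 0.850                                          # pendulum length (m), invented
periods = np.array([1.851, 1.849, 1.853, 1.850])   # timed periods (s), invented

T = periods.mean()
g = 4.0 * np.pi**2 * L / T**2    # from T = 2*pi*sqrt(L/g)
print(f"mean period = {T:.3f} s  ->  g = {g:.2f} m/s^2")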

As is true in each year's Olympiad, the questions were devised and graded by physicists in the host country. The questions and answers are reviewed and approved by an International Board consisting of two delegates from each of the 56 participating nations.

In addition to the physics competition, the program developed by the Norwegian hosts included numerous opportunities to sample Norwegian cuisine and culture, and for students of the various countries to learn about each other's cultures and the universality of physics.

Bernard V. Khoury
AAPT

(301-209-3311)

FLUID DYNAMICS

Predicting Supersonic Jet Noise from First Principles

The noise of supersonic jets used in military airplanes is familiar to anyone who has been to an air show. Smaller supersonic gaseous jets are used in many manufacturing and materials-processing applications and can be a significant source of noise in an industrial plant. It is now possible to calculate, from first principles, the noise that is radiated by a jet (a continuous stream of air or other fluid) that is supersonic (travels faster than sound) and turbulent (exhibits irregular flow with random variations in space and time).

FIGURE 1. Simulation of the sound field radiated by a supersonic jet. Black lines mark the regions where fluid elements rub against each other in irregular patterns due to turbulence. The grey scale shows regions in which the fluid volume changes; these regions provide a visualization of the acoustic waves radiated by the jet.

These first-principles calculations of supersonic jet noise involve numerical simulations based on the basic mass, momentum, and energy conservation equations describing the gas flow. A combination of accurate numerical algorithms with the power of parallel supercomputers is enabling a new fundamental look into what makes some flows more noisy than others. This understanding will, in the future, allow new strategies to make the flows quieter.

However, to achieve this goal researchers must successfully meet several challenges involved in creating an accurate computational model. The first is the apparently counterintuitive fact that most aerodynamic flows generate sound very inefficiently. This is true even though the annoying noise from jet airplanes, automobiles, and industrial machinery is disturbingly loud to our ears. Typically, only a tenth of a percent, or a significantly smaller fraction, of the mechanical power involved is actually converted into radiated sound power. Thus, a great disparity exists between the energy levels of the sound and of the unsteady flow (i.e., any flow with time-varying properties) responsible for its generation. Length scales are similarly disparate. For example, the "singing" of a high-voltage electricity cable in a cross wind (Aeolian tones) has a frequency of 160 Hz (for a wire of diameter 2.5 cm in a cross wind of 20 m/s), whose wavelength of about 2 m is 80 times greater than the wire diameter (the unsteady forces on the wire due to vortices shed in the cross wind generate the sound). These disparities between energy levels and length scales make numerical simulations of flow-generated sound extremely challenging.
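The Aeolian-tone numbers can be checked with the standard vortex-shedding relation f = St·U/d, where the Strouhal number St ≈ 0.2 for a circular cylinder (a textbook value, not stated in the article):

St = 0.2       # Strouhal number for a circular cylinder, assumed
U = 20.0       # wind speed (m/s), from the text
d = 0.025      # wire diameter (m), from the text
c = 340.0      # nominal speed of sound in air (m/s)

f = St * U / d            # shedding (tone) frequency
wavelength = c / f        # acoustic wavelength
print(f"f = {f:.0f} Hz, wavelength = {wavelength:.2f} m "
      f"= {wavelength/d:.0f} wire diameters")
# -> 160 Hz and about 2.1 m, i.e., roughly 80-85 wire diameters,
#    matching the figures quoted above.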

The sound field of a simple source can be visualized as a pattern of waves of compression and expansion; each wave can be regarded as having an amplitude (which defines the maximum amount of compression or expansion) and a phase (the time at which the maximum compression is reached, relative to a chosen reference point). When many simple sources act together, the individual fields of each source combine and interfere constructively (making the sound level higher) or destructively (reducing the sound level). The sound fields arising in most practical applications involve such superpositions.


A numerical simulation must preserve these constructive or destructive interferences accurately. The accuracy needed cannot be achieved with the simple algorithms usually used for steady-flow calculations. To reduce the computational time it is desirable to simulate only a selected portion of the flow region, which is called the computational domain. It is important to ensure that the boundaries of the computational domain allow flow disturbances to leave "quietly," without spuriously reflecting their energy back.

Over the last several years, researchers at Stanford have simulated the sound generated by simple unsteady gas flows by numerically solving the fundamental mass, momentum, and energy conservation equations describing these flows. The equations describing the flow are solved using a highly accurate "finite-difference method." In this approach, the equations are applied to a large but finite number of discrete points which form a mesh that covers the flow region. The Stanford researchers studied simple flow situations in order to develop the computational procedures systematically. Furthermore, since a theoretical prediction of the radiated sound is also available for such flows, they could assess the fidelity of the numerical simulations.
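A deliberately minimal sketch of the finite-difference idea follows: sample the governing equation on a mesh and march it forward in time. It solves only the 1-D linear advection equation du/dt + c·du/dx = 0 with a first-order upwind difference; the Stanford simulations solve the full compressible equations with far more accurate schemes, so everything below is a toy of ours, not their method:

import numpy as np

c = 340.0                  # advection speed (m/s)
nx, L = 200, 10.0          # number of mesh points and domain length (m)
dx = L / nx
dt = 0.9 * dx / c          # time step chosen for stability (CFL number 0.9)
x = np.arange(nx) * dx

u = np.exp(-((x - 2.0) / 0.3) ** 2)   # initial Gaussian pulse

for _ in range(100):
    # Upwind difference: information travels in +x, so look backward.
    u[1:] -= c * dt / dx * (u[1:] - u[:-1])
    u[0] = 0.0   # nothing enters from the left boundary

# The pulse has advected about 4.5 m; with first-order upwinding it also
# smears, which is why high-accuracy schemes are needed for sound.
print(f"pulse peak now near x = {x[np.argmax(u)]:.2f} m")

The outflow behavior at the right edge comes for free here because upwind differencing never looks downstream; in the full multidimensional problem, designing boundaries that let disturbances leave "quietly" is itself a challenge, as noted above.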

Recently, the noise radiated by a supersonic turbulent jet was simulated for the first time. The calculation, performed by Stanford graduate student Jonathan Freund, represents a breakthrough on two counts: it is the first direct numerical simulation of a supersonic turbulent jet, and it is the first time that noise radiation from a turbulent flow has been simulated from first principles. Simulations of turbulent flows to date have been either incompressible (in which density variations within the fluid are negligible) or, when compressible, restricted to the case of homogeneous turbulence (a flow which is statistically the same at different locations), which excludes the possibility of studying acoustic radiation (turbulence and sound fields coexist everywhere in such a flow, and defining a sound-radiation problem is plagued with technical difficulties). The flow shown in Fig. 1 is a supersonic jet at a Mach number (the ratio of jet velocity to the ambient sound speed) close to 2.

The jet flow entering the computational domain from the left boundary is laminar (a smooth, orderly flow), has the same temperature as its surroundings, and contains small-amplitude disturbances that occur over a broad range, or "band," of acoustical frequencies. Three-dimensional instability waves amplify in the jet shear layers (the region of the jet where fluid velocity varies rapidly with radius) and cause the jet to become turbulent. The turbulent eddies mix the jet fluid with the surroundings, and the turbulent jet gradually spreads outwards. Its potential core (the central region of the jet) disappears at a distance of about 5 jet diameters (about 1/3 of the distance from the left edge of the figure). The turbulent flow visualized in Fig. 1 is a snapshot at an instant. Contours of vorticity magnitude (black lines, marking the regions where fluid elements are being sheared in irregular patterns by turbulence) are overlaid with contours of dilatation (grey scale, showing regions in which the fluid volume changes). The dilatation contours provide a visualization of the acoustic field radiated by the jet. Dark regions mark the zones where the fluid is compressing (black representing the steepest compression waves), and lighter regions are zones of expansion. Evidently, the turbulent supersonic jet radiates an intense field of Mach waves (akin to the sonic boom of a supersonic airplane) in addition to the broadband noise associated with the turbulent mixing. The similarity between the simulation and laboratory experiments (e.g., Fig. 257 of Van Dyke's Album of Fluid Motion, a popular reference text) is striking.

Numerical simulations of aerodynamically generated sound offer the ability to probe simultaneously the sound field and the nonlinear unsteady flow that generated it. Since simulations based on first principles are not restricted to only those regimes for which aeroacoustic theories apply, this capability opens a new chapter in the exploration of the fundamental mechanisms of sound generation in a nonlinear flow. This understanding can, in the future, be applied to designing flows of engineering interest that are in some fundamental way intrinsically quieter.

Sanjiva K. Lele
([email protected])
Parviz Moin and Jonathan Freund
Stanford University

GEOPHYSICS

Unexpected Shock Rocks an "Aseismic" Area. Earthquakes occur over wide regions in the continents, but scientists over the years have identified apparently "aseismic" zones that are seemingly free of earthquakes. For example, there are parts of Greece which have had no evidence of seismicity either from historical records or from 20th-century seismology data. Western Macedonia in northern Greece was labelled as one of numerous aseismic zones in the region; yet a magnitude 6.6 earthquake struck the center of this assumed aseismic block in May 1995. Scientists went back to the region and conducted "paleoseismological studies," investigations in which they attempted to uncover evidence of earthquakes that occurred in the region before seismological data were available. Stathis Stiros of the Institute of Geology and Mineral Exploration in Athens, Greece explains how paleoseismological studies have shown evidence that earthquakes are not unusual for this area. Stiros and his colleagues have also uncovered other earthquake-prone areas that were previously identified as being part of an aseismic region.

Ice Mass "Moves" the Earth. Glaciers, which are always located on land, are large masses of ice that flow over the earth that supports them. The largest and longest glacier in North America, Bering Glacier in Alaska, recently underwent a major "surge," a rapid movement of ice within the glacier. Scientists closely studied the surge because it increased iceberg production in the Gulf of Alaska and represented a possible hazard to the tankers that use the area. The surge also created tectonic strain in southern Alaska; scientists measured that a station located 50 km east of the glacier was moving at 34 mm/yr as a result of tectonic activity. Jeanne Sauber of NASA Goddard Space Flight Center and Bruce Molnia of the U.S. Geological Survey describe how the Global Positioning System was used to study the 1993–1995 Bering Glacier surge. Understanding this event has a broader purpose as well: surges in Antarctica might lead to variations in sea level.

Ben P. Stein
AIP

Unexpected Shock Rocks an "Aseismic" Area

Earthquake epicenters deduced from global seismographic networks covering a period of a few decades are confined to narrow zones in the ocean but are distributed over broad zones in the continents. Yet, inside such broad zones of seismicity, there are areas, several hundreds of kilometers wide, that are apparently free of earthquakes. These areas exist in Italy, Greece, Turkey, Iran (Fig. 1),1 China, and elsewhere. The scarce available historical data, occasionally covering a period of up to 2000 years, seem to support the absence of earthquakes from these regions.

This situation led most scientists to conclude that such areas are "aseismic": oases in deserts of earthquakes and seismic faults. Their earthquake risk was hence estimated to be minimal:2 houses, bridges, and dams were constructed with limited care, and some of the areas were candidates for the establishment of nuclear power plants.

FIGURE 1. Seismicity in Greece (inset) and the wider area, 1961–1988, from U.S. Geological Survey data (Ref. 1). Earthquake epicenters are confined to narrow zones along plate margins (for instance, the mid-Atlantic Ridge), but are distributed over wide regions in the continents. Inside some of these wide regions are zones traditionally believed to be "aseismic" (free of earthquakes and active faulting). The 1995 Grevena earthquake of magnitude 6.6 hit one such aseismic area in Greece; its epicenter is marked by a white star. The area around Istanbul, the former Constantinople (marked by C), has been totally aseismic in the last decades and centuries, but was very active for at least one millennium. A stands for Aegean, T for Thrace. (Adapted from Refs. 1 and 2.)

FIGURE 2. Major earthquakes that have hit the assumed aseismic zone of western Macedonia, Greece.



limited care, and some of them were candidates for the establishment of nuclear power plants.

On May 13, 1995 a surprise earthquake of magnitude 6.6 hit the area of Grevena–Kozani, in the center of the assumed aseismic block of western Macedonia in northern Greece. Luckily this earthquake occurred on Saturday morning, when schools were closed and other public activities interrupted. In addition, it was preceded by foreshocks which gave people time to escape from the thousands of houses that subsequently collapsed.

The earthquake caused no fatalities, but it was a real shock for society, the government, and scientists. Earthquakes of this or larger magnitude are not unusual in Greece, and cause relatively little harm, for houses and buildings in most parts of this country are traditionally constructed in a way to withstand strong seismic forces. But the Grevena earthquake hit a rural area of assumed minimum seismic risk, with poorly built houses and hydroelectric power dams constructed without the necessary care. Seismic forces reached engineering strength design limits in one of these dams but caused no damage to it. The area of Grevena was at risk of abandonment, and this made necessary an investment of millions of dollars from the government to cover the full cost of reconstruction of private and public buildings.

The major shock, however, from this unexpected earthquake was for the international seismological community: theories about continental tectonics and earthquake risk maps in one of the most seismically active and well-studied regions of the world proved unsuccessful, and need to be revised. Some of the questions that arise are: Was this earthquake really unexpected, or was it a systematically recurring event? In the case of the Grevena earthquake, what is its recurrence interval and the expected maximum magnitude? Why has this area appeared aseismic for at least one century? Can the knowledge obtained from the aseismic zones of western Macedonia be extrapolated to other apparently aseismic zones in Greece and other countries?

In order to answer at least some of these questions, “paleoseismological” studies in this area were made. The aim of such studies is to identify traces of, and study, major earthquakes which occurred before the advent of seismographs, from their effects on the physical environment (permanent deformation, such as surface faulting, coastal land uplift and subsidence) and on ancient constructions. Some of the latter are found either offset by seismic faults, or bearing traces of ground shaking, and they can hence be regarded as “fossil seismograms.”3

The first paleoseismic results, summarized in Fig. 2, reveal that seismic coastal uplifts, somewhat analogous to those in Crete4 and Alaska,5 or even seismic disasters like those identified in archeological excavations,3,6,7 are not unusual in the study area. Hence, western Macedonia was seismically dormant for around 100 years, but is certainly far from being aseismic.

Paleoseismic evidence of strong earthquakes exists also for other apparently aseismic zones in Greece and adjacent regions. The most characteristic example is modern Istanbul, the former Constantinople. In the past this city was affected by destructive earthquakes which left their traces on the Byzantine monuments. The dome of the church of Aghia Sophia, for instance, collapsed twice during earthquakes, and buttresses were added to prevent a total collapse of the building. In recent centuries, by contrast, this city has been totally aseismic8 (Fig. 1); houses and other constructions are built without much care, and the risk of a death toll of unprecedented scale during a forthcoming earthquake is high!

The recent Grevena earthquake was, in this respect, an alert. Seismic safety regulations must not be based entirely on seismograph recordings—study of a whole spectrum of paleoseismic data is necessary before reliable maps of seismic risk are compiled. If scientists and governments fail to do so, we may face another Grevena-type earthquake, but with much more severe consequences.

Stathis C. Stiros
Institute of Geology and Mineral Exploration
Athens, Greece
([email protected])

1. J. Jackson and D. McKenzie, Geophys. J. 93, 45 (1988).
2. S. Stiros, Eos 76, 513 (1995).

FIGURE 3. Map of the region affected by the Bering Glacier surge and the predicted horizontal (a) and vertical (b) deformation of the solid earth caused by the ice redistribution. Triangles indicate the global positioning stations occupied in June 1993 and 1995; solid triangles indicate the stations nearest to the Bering Glacier. In (a) thin lines outline the major glaciers. The thick line shows the Gulf of Alaska coastline.

3. S. Stiros and R. Jones, eds., Archaeoseismology (The British School of Archaeology, Athens, Greece, 1996); also S. Stiros, Eos 69, 1633 (1988).

4. P. Pirazzoli, J. Laborel, and S. Stiros, J. Geophys. Res. 101, 6083 (1996).
5. G. Plafker, Science 148, 1675 (1965).
6. D. Soren, National Geographic 174, 30 (July 1988).
7. Y. Sakellarakis and E. Sapouna-Sakellaraki, National Geographic 159, 204 (February 1981).
8. N. Ambraseys, Nature 232, 375 (1971).

Ice Mass “Moves” the Earth

Bering Glacier, the largest and longest glacier in continental North America, is now under study from a unique vantage point: space. A series of satellites that estimate precise positions on Earth from space, the Global Positioning System (GPS), is helping scientists study Earth movements and the movement in Bering Glacier, which has recently undergone a major surge. A surge is a periodic, rapid movement of large quantities of ice within a glacier, alternating with much longer periods of near stagnation or retreat. During a surge, ice velocities may increase by hundreds of times their normal values.

Located in southern Alaska, Bering Glacier has ice thicknesses exceeding 1 km. It has undergone at least six surges this century,1 and it surged again in 1993–1995.2 When a surge removes ice from the upper reaches of the glacier, its surface lowers by tens or hundreds of meters and ice is added downstream, where the ice thickens and advances the glacier.3 In three consecutive years, surface lowering of as much as 50 m was observed at locations 60–80 km upstream from the advancing face of the glacier. Although the advance of the face of the glacier of up to 9 km has been well documented, the extent and magnitude of surge-related changes in the upper regions of the glacier are less well known. However, satellite and aerial surveys showed that more than 3000 km2 of the Bering’s 5300 km2 surface displayed fracturing and deep cracks indicative of active surging.

The mechanics of the surge process is recognized as one of the outstanding unsolved problems in glaciology. Surges are of wider interest because of the possibilities that surging glaciers may impinge upon works of man and that surging of the Antarctic ice sheet may be a factor in the cyclic variations of sea level.4 The Bering Glacier surge, in fact, increased iceberg production and posed a potential threat to the Gulf of Alaska tanker lanes that exit and enter Prince William Sound, about 100 km to the west.

The dramatic changes in a surging glacier’s extent and thickness cause the solid earth to undergo elastic deformations, temporary distortions which disappear after the load of the glacier is removed. Elastic deformations due to the changing glacier load occur instantly. Glacier changes were not the only source of deformation in this region, however. In southern Alaska, two tectonic plates are slowly but continuously colliding as one (the Pacific plate) is forced into the Earth’s interior beneath a second, overriding plate (the North American plate). At shallow depths, above ~30–40 km, the plate interface is commonly locked and slips primarily in large earthquakes. Above this region, the overriding plate undergoes distortion. A station 50 km east of the Bering’s terminus is moving 34 mm/yr in a direction 25 degrees west of north relative to a fixed North America. Other stations, including a wide distribution of stations across the plate boundary zone, have been used to estimate the rate and distribution of ongoing tectonic strain in southern Alaska.

Based on theoretical calculations of the predicted displacement due to ice redistribution, we determined that four sites within close proximity to the glacier would have large enough displacements to be useful in understanding ice redistribution during the surge (Fig. 3). After accounting for the tectonic component of distortion, we estimated uplift and subsidence values for the 1993–1995 time interval. The three stations near the surge source region, where unloading of ice occurred, showed uplift values which ranged from ~3–5 cm, similar to the predicted values given in Fig. 3. The one station located south of the advancing face of the Bering Glacier showed ~4 cm of subsidence, slightly lower than the predicted value given in Fig. 3. In the future, these results will be combined with other glaciological information to estimate the overall ice transfer during the surge.
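The bookkeeping behind “accounting for the tectonic component” can be sketched in a few lines of Python. Only the rate and azimuth quoted above for the station east of the Bering terminus are taken from this article; the measured offsets below are hypothetical placeholders, and the actual analysis is of course far more involved:

import math

# Steady tectonic velocity quoted in the text: 34 mm/yr, 25 degrees west of north.
speed_mm_per_yr = 34.0
azimuth_deg = -25.0  # west of north taken as negative

v_north = speed_mm_per_yr * math.cos(math.radians(azimuth_deg))  # ~30.8 mm/yr
v_east = speed_mm_per_yr * math.sin(math.radians(azimuth_deg))   # ~-14.4 mm/yr

years = 2.0  # June 1993 to June 1995 station occupations
measured_north, measured_east = 75.0, -20.0  # hypothetical GPS offsets (mm)

# Whatever displacement the steady tectonic motion cannot explain is attributed
# to elastic deformation of the solid earth by ice redistribution during the surge.
residual_north = measured_north - v_north * years
residual_east = measured_east - v_east * years
print(f"residual: {residual_north:.1f} mm N, {residual_east:.1f} mm E")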

Jeanne Sauber
NASA Goddard Space Flight Center

([email protected])

Bruce Molnia
U.S. Geological Survey, Reston, VA

([email protected])

1. B. F. Molnia and A. Post, Physical Geography 16, 87 (1995).
2. B. F. Molnia, Eos 76, 291 (1995).
3. M. F. Meier and A. S. Post, Can. J. Earth Sci. 6, 807 (1969).
4. B. Kamb, C. F. Raymond, W. D. Harrison, H. Englehardt, K. A. Echelmeyer, N. Humphrey, M. M. Brugman, and T. Pfeffer, Science 227, 469 (1985).

PHYSICS AND GOVERNMENT

The year began and ended under very different circumstances. Bitter disputes between the Congress and the Administration led to a shutdown of significant parts of the federal government early in the year, including the National Science Foundation,



MEDICAL PHYSICS

Functional Magnetic Resonance Imaging

Functional magnetic resonance imaging (FMRI) is a new tool which uses high-speed versions of magnetic resonance imaging (MRI) techniques to noninvasively map the functioning, intact brain. In contrast to conventional MRI, which provides high-resolution images of brain anatomy, FMRI identifies only those brain structures that are actually performing, controlling, or monitoring specific functions.1–9 FMRI is more available, less invasive, easier to apply, and yields sharper, faster snapshots of the working brain than the pictures obtainable by previous functional imaging methods. Prime among these earlier methods is positron emission tomography (PET), which uses a radioactive tracer to map metabolic activity of the brain’s response to a specific task.10

The principle of FMRI is that the signal used to construct MR images increases by a small amount (typically on the order of 1%) in regions of the brain cortex that are activated by appropriate sensory stimuli, mental processes, or even the intake of drugs. With current MRI systems, it is possible to monitor changing regional activation patterns in response to stimuli with temporal resolution on the order of 1 s and spatial resolution on the order of 2 mm. For monitoring a single image slice of the brain, FMRI images can be obtained in times as short as 100 ms at the spatial resolution of MRI.

An example of an FMRI study is shown in Fig. 1. Image data were acquired as a series of images through the brain cortex, while the subject was presented cyclically with two different sound patterns that alternated in time. The tracing shows how the strength of the MRI signal from the human auditory cortex varies over time in response to the cyclic application of the stimulus.

The signal used to construct a MR image is derived from the net magnetization of hydrogen nuclei within tissue water, which results from the polarizing effect of the field of the MRI magnet on the human body. Specifically, the MR signal is produced when the magnetization is manipulated by means of electromagnetic pulses applied at radio frequencies to induce an alternating current in a receiver coil placed near the body part of interest. The rate of decay of the alternating current signal is known to be a function of the uniformity of the magnetic-field strength within the tissue, i.e., the more uniform the field the longer the signal duration. FMRI relies on the fact that capillaries and red cells within tissues induce changes (gradients) in the magnetic field on a microscopic scale. These microscopic field gradients shorten the signal duration to a degree that depends on the precise value of the blood’s magnetic susceptibility (a quantity that indicates the net magnetization it will acquire when exposed to a magnetic field), which in turn depends on the local oxygen concentration. Because of this, FMRI is also referred to as BOLD (blood oxygenation level dependent) imaging. Blood containing oxyhemoglobin (hemoglobin with oxygen molecules attached to it) has a susceptibility close to that of tissue water, whereas that of deoxygenated blood (in which oxygen has been removed) is significantly different. An increase in nerve-cell activity produces a local flow increase that introduces oxygenated blood to such an extent that there is a net decrease in the concentration of deoxyhemoglobin. As a result, the magnetic susceptibility within the blood vessels then more closely matches the surrounding tissue than it does when the vessels contain deoxyhemoglobin, and the MR image intensity increases. As the stimulus is cyclically applied and removed, the activated area will show a corresponding pattern of increased and decreased signal intensity. It was the work of Ogawa11 and Turner12 which demonstrated that deoxygenated blood could serve as a natural MRI contrast agent.
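In compact form (a standard description of the BOLD effect, sketched here rather than quoted from the authors), the image intensity acquired at echo time TE decays as

\[ S(TE) = S_0\, e^{-TE/T_2^*}, \qquad \frac{\Delta S}{S} \approx -TE\,\Delta\!\left(\frac{1}{T_2^*}\right), \]

where the rate 1/T_2^* lumps together the intrinsic tissue relaxation and the extra dephasing from the microscopic susceptibility gradients. Activation lowers the local deoxyhemoglobin concentration, reduces 1/T_2^*, and thereby raises S by the roughly 1% noted above.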

FMRI, building on the use of imaging to study brain function established with PET, promises to have a significant impact on the management of clinical patients by differentiating healthy from pathological brain function. FMRI also promises to extend our fundamental knowledge of neurophysiology by mapping the functional neuroanatomy of distributed processing systems within the brain.

FMRI is now being exploited by centers throughout the world to study basic brain function.13–16 It is also being evaluated for numerous potential clinical applications. These include the noninvasive mapping of the functions of different

NASA, and the National Institute of Standards and Technology. In contrast, negotiations in the summer and fall over fiscal year (FY) 1997 appropriations were more cooperative. While plans for balancing the federal budget suggest reductions in most federal science funding in the future, probably the biggest news this year is that Congress provided most of the Administration’s requests for FY 1997 physics-related funding.

Some highlights: most NSF FY 1997 program budgets increased over FY 1996; the Department of Energy’s High Energy Physics and Nuclear Physics budgets stayed even or increased for 1997, while the fusion budget suffered another cut; NASA program budgets dropped a few percentage points from the previous year, in line with the Administration’s strategic realignment of the space agency; outcomes for NIST programs varied—the Advanced Technology Program survived congressional elimination attempts, although Congress cut the Administration’s request by one-third.

Department of Energy

The High Energy Physics budget for FY 1997 increased by 0.5% over 1996. The $670 million budget generally follows the guidance of an advisory panel’s 1994 recommendations. During the year, the U.S. initiated negotiations with CERN on participation in the Large Hadron Collider.

The Nuclear Physics budget increased by 3.6% to $316 million. The Continuous Electron Beam Accelerator Facility at Newport News, VA became operational, and construction continued on Brookhaven’s Relativistic Heavy Ion Collider.

The Fusion Energy Sciences program was renamed, and suffered a cut of 4.8% from 1996 funding to $233 million. Directed by Congress to restructure the fusion energy program after taking more than a one-third budget cut in 1995, a DOE advisory committee recommended increased emphasis on basic plasma science and less emphasis on technology development for a fusion energy source. U.S. participation continued in the engineering design activity for the International Thermonuclear Experimental Reactor.

The Basic Energy Sciences FY 1997 appropriation is $650 million. This budget supports, among other programs, materials sciences research and several user facilities. Construction was completed on the 6–7 GeV Synchrotron Radiation Source at Argonne and construction funding continued for the Combustion Research Facility. After adjusting for changes in program content, the FY 1997 budget slightly decreased.

National Science Foundation

NSF continued to receive bipartisan support on Capitol Hill. The $3,270 million appropriation is 1.6% over that for 1996.

Research and Related Activities funding includes budgets for physics, materials science, astronomy, geosciences, and, for the first time, large-scale academic research instrumentation. Congress provided $2,432 million for Research and Related Activities, an increase of 5.1% over 1996.

The Education and Human Resources budget request of $619 million was fully funded. Congress also provided enough money to keep the Laser Interferometer Gravitational Wave Observatory and the South Pole Station Safety Project on schedule.

NASA

NASA’s FY 1997 budget declined by 1.4%, compared to 1996, to $13,704 million. Within this appropriation, the Human Space Flight budget, which primarily consists of the space station and shuttle programs, was set at $5,363 million, a 1.8% reduction.

The Science, Aeronautics and Technology budget for FY 1997 was reduced by 2.9% to $5,762 million. Within this account, space science received $1,857 million, a reduction of 8.6%. Part of this decline was due to the winding down of work on the AXAF and Cassini spacecraft. Congress tried, but failed, to reduce funding for the Mission to Planet Earth program budget request of $1.4 billion.

National Institute of Standards and Technology

Congressional response to NIST’s budget requests varied. The overall appropriation was $588 million, a decline of 5.2% from 1996.

The Scientific and Technical Research and Services budget request was almost funded in full, receiving $268 million. This appropriation, 3.5% over 1996, provides money for NIST’s in-house laboratory programs, including physics and material sciences research.

The Advanced Technology Program budget did not fare nearly as well, although it was not eliminated as many in Congress had sought. The Administration’s FY 1997 request was cut by more than one-third to $225 million, close to the 1996 budget.

Richard M. Jones
Audrey T. Leath

AIP
(301-209-3095)

AIP offers electronically distributed summaries of science policy developments affecting the physics community. FYI, The American Institute of Physics Bulletin of Science Policy News, is available without charge. To subscribe, send an e-mail message to [email protected]. Leave the subject line blank. In the message space, type add fyi.

FIGURE 1. (a) A magnetic resonance image showing a slice through the middle of the brain depicting activity, on both sides, in the human auditory cortex (i.e., the area of the temporal lobes responsible for processing sounds). The activity occurs in response to audio stimuli in which the subject hears patterns of repeated sounds that alternately switch from one to the other. The bright areas are those that increase blood flow during the task. (b) The tracing shows how the MRI signal (and response from the brain) changes in time when the sound pattern changes. (Figure courtesy of Dr. Rene Marois, Yale University.)



regions of cortex, including the identification of brain language areas and other vital regions in patients who are candidates for brain surgery,17–19 identification of seizure foci in epileptics,20 sensory and motor-function evaluation in stroke patients, and monitoring of fine-motor function in recovering alcoholics. In addition, a variety of neuropsychiatric disorders, including autism, schizophrenia, and Tourette’s syndrome, as well as learning disabilities, dyslexia, and attention disorders, are being better understood and characterized using FMRI.

John C. Gore
([email protected])

Yale University School of Medicine

Ronald R. Price
Vanderbilt University School of Medicine
([email protected])

1. J. W. Belliveau et al., Science 254, 716 (1991).
2. P. A. Bandettini et al., Magn. Reson. Med. 25, 390 (1992).
3. J. Frahm et al., JMRI 2, 501 (1992).
4. R. Menon et al., Magn. Reson. Med. 30, 380 (1993).
5. J. R. Binder et al., Ann. Neurol. 35, 662 (1994).
6. F. Z. Yetkin et al., Am. J. Neuroradiol. 16, 1087 (1995).
7. A. Puce et al., J. Neurophysiology 74, 1192 (1995).
8. G. McCarthy et al., Human Brain Mapping 2, 234 (1995).
9. A. Puce et al., J. Neurosci. 16, 5205 (1996).

10. P. T. Fox et al., J. Neurosurg. 67, 34 (1987).
11. S. Ogawa et al., Magn. Reson. Med. 14, 68 (1990).
12. R. Turner et al., Magn. Reson. Med. 22, 159 (1991).
13. B. A. Shaywitz et al., Nature 373, 607 (1995).
14. R. M. Hinke et al., Neuroreport 4, 675 (1993).
15. L. Rueckert et al., J. Neuroimaging 4, 67 (1994).
16. K. Pugh et al., Brain 119, 1221 (1996).
17. A. Puce et al., J. Neurosurg. 83, 262 (1995).
18. T. A. Hammeke et al., Neurosurgery 35, 677 (1995).
19. C. R. Jack et al., Radiology 190, 85 (1994).
20. G. L. Morris et al., Epilepsia 35, 1194 (1994).

NUCLEAR PHYSICS

CEBAF (TJNAF) Comes on Line. An important new nuclear physics facility to watch is the Thomas Jefferson National Accelerator Facility (TJNAF), formerly known as the Continuous Electron Beam Accelerator Facility, or CEBAF. This facility, located in Newport News, Virginia, can accelerate electrons to energies of 4 GeV and produce continuous streams of electrons, rather than short bursts; it has now undergone its first experiments. Some of the first experimental results show off its impressive capabilities, such as high momentum resolution and a potential to perform important studies of nuclei containing strange quarks. Other experiments are providing information on the interactions between nucleons (protons and neutrons) as they move through the nucleus, and investigations of how the deuteron (a nucleus consisting of a neutron and a proton) dissociates into a proton and neutron show how effects at the quark level are important; the data support the picture that the six quarks in the deuteron act in a concerted fashion as they exchange the gluons that hold them together. John Dirk Walecka and his colleagues at TJNAF describe the first results from this brand-new accelerator facility.

More Evidence for Neutrino Oscillations. Neutrinos are chargeless particles abundantly produced in nuclear reactions such as those that occur in the Sun. Like quarks, which come in six types, neutrinos come in three varieties: the ones produced in the Sun’s nuclear reactions are known as “electron neutrinos.” Neutrinos are notoriously hard to detect because they interact with other particles only via the weak nuclear force. Even when their hard-to-detect nature is accounted for, the flux of electron neutrinos from the Sun reaching Earth-bound detectors is much less than that predicted by the Standard Model of particle physics and our knowledge of the Sun. To address this problem, theorists proposed in the late 1970s and mid-1980s that neutrinos may transform from one type to another. In 1995, experimenters at Los Alamos announced that they obtained evidence that neutrinos “oscillate” from one type to another. Gerry Garvey of the Office of Science and Technology Policy and Los Alamos describes how the Los Alamos team has now amassed an additional four months of data—with results that remain consistent with the neutrino oscillation scenario. As Garvey explains, if neutrinos oscillate, then they must have mass; whether or not neutrinos have mass has been a long-standing question since their discovery earlier in the century. The abundant but hard-to-detect neutrinos may therefore constitute some of the invisible “dark” matter that plays a large role in holding together galaxies and clusters of galaxies in our universe.

Simulating the Cosmic Cauldrons. Nuclear physicists have employed innovative techniques in existing accelerators and have come up with new ways of upgrading their abilities to produce beams of short-lived nuclei with energies similar to those of the nuclei found in stars. Experiments at accelerators around the world are creating conditions that simulate the nuclear processes responsible for making medium and heavy elements in stars. Such studies may provide insights into the exact chain of nuclear reactions inside stars as they manufacture the elements that appear in the periodic table. As Walter Henning of Argonne National Laboratory explains, researchers are also employing beams of short-lived nuclei to create reactions that may not occur in stars, but may indirectly provide invaluable information on astrophysical properties. For example, the breakup of the boron-8 nucleus—which can occur through electromagnetic, not nuclear, forces—can provide information on the reverse reaction in which beryllium-7 combines with a proton to make boron-8. This process can, in turn, provide information on fusion reactions involving protons in our sun and even neutrino oscillations in nuclear reactions.

Ben P. Stein
AIP

CEBAF (TJNAF) Comes on Line

On Sunday morning, May 26, 1996, at a plenary session of the Particles and Nuclei International Conference (PANIC XIV) held on the College of William and Mary campus, over two decades of planning and dedicated hard work came to fruition. The first experimental nuclear physics results from the Continuous Electron Beam Accelerator Facility (CEBAF), renamed the Thomas Jefferson National Accelerator Facility (TJNAF) at its dedication on the previous Friday, were presented to the physics community. The 4 GeV electron accelerator provides a powerful new microscope for examining the nucleus over an enormous range of distance scales—from distance scales where only the constituent nucleons, protons and neutrons, can be resolved, to distance scales where the meson–nucleon substructure becomes apparent, to distance scales where the underlying quarks and gluons, held together by the remarkable strong force called quantum chromodynamics (QCD), are expected to play an explicit role (see the article in Physics News in 1994, p. 42).

At the conference four results stood out, two concerning the new experimental equipment and two concerning the first nuclear physics investigations. First, Fig. 1 displays the initial spectrum from one of the pair of high resolution spectrometers in Hall A (Ref. 1), which, within a few hours, reached a momentum resolution (δp/p) of 4 × 10–4 and, within a few days, a value of 2.5 × 10–4. The resolution is now limited only by the window thickness of the apparatus. This milestone gives confidence that both spectrometers will reach their design specifications when fully operational.

FIGURE 1. First 12C(e, e' ) spectrum taken with one of the Hall A spectrometers at CEBAF (Ref. 1).

FIGURE 2. A brief test of the system for kaon detection in H(e, e' K+) by the Hall C Collaboration during E91-13 at CEBAF (Ref. 2).

FIGURE 3. Preliminary measurements (stars) by the E91-13 collaboration at CEBAF of nuclear transparency in the quasifree (e, e' p) reaction on three nuclei as a function of momentum transfer Q (Ref. 3). The ordinate is the ratio of the experimental cross section to that calculated in the plane wave impulse approximation. Also shown are previous data from NE18 at SLAC and from Bates.

FIGURE 4. The product of s (the square of total center-of-mass energy) to the 11th power times the cross section at a scattering angle of 90º for the reaction 2H(γ, p)n as a function of photon energy measured in E89-12 at CEBAF (Ref. 4). The short dashed curve is the predicted flat scaling behavior expected for a simple constituent quark counting rule. Also shown are data from the previous NE8 and NE17 experiments at SLAC, as well as data from lower energy.



Second, during the experiment E91-13, a brief test of the High Momentum and Short Orbit spectrometer pair for kaon detection was performed by the Hall C collaboration. In this experiment an electron collides with a proton, the nucleus of the element hydrogen, producing an electron and a K+ meson (a kaon) which are detected in coincidence, along with a variety of other particles which are not detected. The short-hand notation for this reaction is H(e, e' K+).

Quarks of positive and negative strangeness are created in pairs in this high-energy reaction, with the K+ meson being the signature (and the carrier) of the positive-strangeness quark. The missing mass (the collective masses of the unobserved particles) spectrum for this reaction, shown in Fig. 2 (Ref. 2), is indicative of the effectiveness of the system for kaon electroproduction studies, which provide unique access to this process that implants strangeness (through the presence of strange quarks) into the nucleus. Here the remaining Lambda and Sigma hyperons produced in this reaction, which now carry negative strangeness, are clearly identified.
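The missing mass follows from energy and momentum conservation (a standard kinematic relation, stated here for concreteness rather than taken from the report). With the target proton at rest,

\[ M_X^2 c^4 = \left(E_e + M_p c^2 - E_{e'} - E_{K^+}\right)^2 - \left|\vec p_e - \vec p_{e'} - \vec p_{K^+}\right|^2 c^2 , \]

so peaks in the M_X spectrum at the Λ and Σ masses tag events in which those hyperons were produced.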

Third, the results of CEBAF E91-13 on nuclear transparency in the quasifree (e, e' p) reaction, in which the emerging proton is detected in coincidence with the scattered electron, are shown in Fig. 3 (Ref. 3), together with previous results from SLAC experiment NE18 and from Bates Laboratory. The goal of E91-13 is to understand the motion of nucleons through the nucleus in an energy range—0.3 to 2.0 GeV—where the underlying nucleon–nucleon interaction is changing rapidly. At the lower energies, nuclear many-body effects dramatically influence the propagation of nucleons, while at high energies pion production dominates nucleon–nucleon scattering and dynamical QCD effects may become evident. The experiment performed the first broad-range (in terms of the target and energy used) longitudinal–transverse separation (separation in terms of the scattering of the electron by the electric and magnetic fields emanating from the charge and current densities of the nucleons and mesons comprising the nuclear target) of the nuclear response in the (e, e' p) reactions. This work should answer significant questions about the role of multinucleon interactions in the reaction. Note that the statistical errors of E91-13 lie within the star symbols themselves in Fig. 3. (These preliminary results represent only a fraction of the data taken in each of these Hall C commissioning experiments. The systematic error bars will decrease significantly as the understanding of the new experimental equipment progresses.)

Finally, there are the results from CEBAF E89-12, shown in Fig. 4 (Ref. 4), along with results of the previous SLAC experiments NE8 and NE17, in addition to other measurements at lower photon (γ-ray) energy. In the CEBAF experiment the electron beam strikes a target, producing a beam of γ rays which are used for the investigation of the quark structure of nucleons and nuclei. In E89-12 the photon strikes a deuteron nucleus, causing it to dissociate into a proton and a neutron. The SLAC experiments gave indications that QCD effects, such as the presence of substructure in the nucleons, played a role, because the observed reaction obeyed what are called simple constituent quark-counting rules at high energy. This behavior is signaled by the fact that the appropriately energy-weighted cross section “scales”: in this case the product of the 11th power of the square of the total energy in the center-of-mass frame and the cross section becomes constant (see Fig. 4).
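The counting rule can be stated compactly (a standard result, paraphrased here for orientation): for an exclusive two-body reaction at fixed center-of-mass angle,

\[ \frac{d\sigma}{dt} \propto s^{\,2-n}, \]

where n is the total number of elementary constituents in the initial and final states. For γ + d → p + n, n = 1 + 6 + 3 + 3 = 13, so dσ/dt ∝ s^{-11}, and the combination s^{11} dσ/dt plotted in Fig. 4 should level off to a constant wherever the rule holds.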

The new CEBAF data exhibit a flat scaling behavior consistent with this rule, in the photon energy range from approximately 1 to 4 GeV at a reaction angle of 90º (between the incident photon beam and the detected proton) for the 2H(γ, p)n reaction. Furthermore, the new data also suggest that there is an onset of the scaling behavior above an energy of 3 GeV at a reaction angle of 37º. The results are consistent with an onset of scaling occurring at a transverse momentum of about 1 GeV/c. The new data support the picture in which the six constituent quarks in the deuteron (each nucleon contains three constituent quarks) organize a concerted response involving the exchange of gluons among themselves.

Initial results from the CEBAF Large Acceptance Spectrometer (CLAS) in Hall B are scheduled for Fall of 1996; they are eagerly awaited by a large community anxious to do nuclear electromagnetic coincidence studies with this powerful new detector.

The accelerator itself is a superb device, now routinely delivering beam to Hall C for experiments, and in August 1996, rf separation (beam splitting) was used for the first time to deliver beam simultaneously to both Halls C and A. Currents of 135 µA on one-pass operation (one circuit around the accelerator beam pipe) and 77 µA on five-pass, 4 GeV operation have been achieved. Because of the splendid performance of the superconducting rf cavities, it has already been possible to achieve 1 GeV on one-pass operation. One-pass beam from the polarized source has now also been achieved. The emittance (quality of the beam focus) and energy resolution of the beam are outstanding, and there is every indication that the design values will be obtained.

It is evident that the nuclear physics community is now well on its way in pursuing CEBAF’s nuclear physics goals: the study of the nuclear many-body system, its quark substructure, and the nature of the strong and electroweak interactions governing the behavior of this fundamental form of matter.

John Dirk Walecka, TJNAF,
with the assistance of
Joe Bisognano, Don Geesaman,
Roy Holt, Eddy Offerman,
and Ben Zeidman

1. Conceptual Design Report Basic Experimental Equipment, CEBAF (1990), A4-1.
2. Hall C Collaboration.
3. D. Potterveld for the E91-13 Collaboration, Proceedings of the PANIC XIV Conference, Williamsburg, VA, 1996 (to be published).
4. D. J. Abbot et al., Proceedings of the PANIC XIV Conference, Williamsburg, VA, 1996 (to be published).

More Evidence for Neutrino Oscillations

The Liquid Scintillator Neutrino Detector (LSND) collaboration1 at Los Alamos National Laboratory has presented strong evidence for an excess of electron antineutrinos from the neutrino production target.2 Since announcing first results in early 1995, the LSND group has operated their experiment for an additional four months, providing a total of nine months of data that are consistent with neutrinos changing from one neutrino type to another, a process which requires that neutrinos have mass. If confirmed by other experiments, massive neutrinos would have an enormous impact on our understanding of the makeup and evolution of the universe.

In the Los Alamos experiment, the half-mile-long LAMPF accelerator fires an intense beam of 800 MeV protons into a water target, producing pions, short-lived particles whose subsequent radioactive decays produce the neutrinos for study. The processes in the source are such that copious numbers of muon neutrinos and antineutrinos are generated while the number of electron antineutrinos is suppressed by more than a factor of 1000. The LSND researchers observe the neutrinos by the light produced from their interactions in the 170 tons of mineral oil contained in a large detector placed nearby (see Fig. 5). Neutrinos are notoriously difficult to detect because they only interact via the so-called weak interactions, forces between particles that are about one hundred trillion times weaker than the strong interaction that holds together a nucleus. As a result, only about one in every billion billion neutrinos passing through this large detector is detectable.

A careful examination of the nine months of data led the LSND experimentalists to conclude that they had firm evidence for the presence of electron antineutrinos far in excess of the number expected. An article published in Physical Review C reports 22 events that are consistent with muon antineutrinos that had changed (oscillated) into electron antineutrinos.3 The expected number of such events from cosmic rays and known accelerator sources of electron antineutrinos is only 4.6 with an uncertainty of 0.6 events, making it extremely unlikely (less than one chance in ten million) that the observed excess is due to a statistical fluctuation. If the excess is indeed due to neutrino oscillations, then it would imply that at least one neutrino has a mass greater than 0.2 eV/c2. The lightest particle with a known mass is the electron, with a mass of 511,000 eV/c2. The most likely explanation for these electron antineutrinos is that they were made by the transmutation of muon antineutrinos that are copiously produced at the water target.

FIGURE 5. The LSND at Los Alamos.

FIGURE 6. Experiments at Los Alamos have found evidence that neutrinos (chargeless particles abundantly produced in nuclear reactions) have mass, suggesting that the hard-to-detect particles may make up some of the mysterious dark matter in the universe. The experiments have provided intriguing evidence that muon antineutrinos may transform into electron antineutrinos, a process that would require that the neutrinos have mass. This graph shows the most probable amounts of mass difference (∆m2) between muon and electron neutrinos, as well as the most probable value for the mixing angle θ, a quantity that describes the degree to which a neutrino created in a physical process contains components of different neutrino types [45º corresponds to an equal mixture of two neutrino types; a 45º angle θ corresponds to sin2(2θ) = 1 in the graph]. The light gray areas of the graph show the pairs of mass and mixing angle values that would be consistent with the results of the LSND experiment to a 99% confidence level. (The dark gray areas are regions of 90% confidence level.) Also shown are limits from the KARMEN experiment at ISIS in the UK, the E776 experiment at Brookhaven National Laboratory, and the BUGEY reactor experiment in France. In the case of the other experiments shown, the values in the area to the right of their boundaries are excluded, and the values to the left are allowed.



This transmutation process is commonly referred to as “neutrino oscillation.” For neutrino oscillations to occur, neutrinos must have different masses and there must be “mixing” among the neutrino types.

Mixing requires that a neutrino created in a physical process is not purely a single type but has small components of the other neutrino types. In spite of intensive efforts by many research groups there is no conclusive experimental evidence to date for either mixing among the neutrino types or for neutrino mass, though both are theoretically expected to occur at some level.

Figure 6 is a plot of the regions of mass difference squared, ∆m2, versus sin2(2θ), where θ is the mixing angle, allowed by the LSND measurement. Sin2(2θ) is a measure of the mixing between the neutrino types; a mixing angle of 45º corresponds to an equal mixture of the two neutrino types. A neutrino mass of 0.2 eV/c2 would account for a small fraction of the “dark matter” in the universe.
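For two neutrino types, these quantities are tied to the measured oscillation probability by the standard two-flavor formula (included here for orientation; the article does not write it out):

\[ P(\bar\nu_\mu \to \bar\nu_e) = \sin^2(2\theta)\, \sin^2\!\left( \frac{1.27\,\Delta m^2\,[\mathrm{eV}^2]\; L\,[\mathrm{m}]}{E\,[\mathrm{MeV}]} \right), \]

where L is the source-to-detector distance and E the neutrino energy. A single experiment with fixed L and E therefore constrains a band of (sin^2 2θ, ∆m^2) pairs, which is why the LSND result appears in Fig. 6 as allowed regions rather than as a point.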

Gerald Garvey
Office of Science and Technology Policy
and Los Alamos National Laboratory
([email protected])

1. The LSND experiment involves 12 institutions: the University of California campuses at Riverside, Santa Barbara, and San Diego; the University of California Intercampus Institute for Research at Particle Accelerators; Embry–Riddle Aeronautical University; Linfield College; Los Alamos National Laboratory; Louisiana State University; Louisiana Tech University; Southern University; Temple University; and the University of New Mexico. Funding for the experiment came from the Department of Energy’s Office of Energy Research. Other funding sources were the National Science Foundation, Associated Western Universities, and the Murdoch Charitable Trust.

2. The LSND Collaboration, Phys. Rev. Lett. 77, 3082 (1996).
3. The LSND Collaboration, Phys. Rev. C 54, 2685 (1996).

Simulating the Cosmic Cauldrons

The quest of nuclear astrophysics is to understand how the chemical elements that make up our world are formed in the cosmos. The pioneering work won Nobel prizes for Hans Bethe and William Fowler. They began to unravel the complex cycles and chains of nuclear reactions that they believed would explain how nuclei are created and destroyed in the hot cauldrons inside stars, starting from the lightest and simplest nuclei that were first manufactured in the primordial fireball of the Big Bang. Today’s most advanced observatories, such as the Hubble Space Telescope, are providing new details on the nuclear composition of the cosmos by making maps that show the abundances of chemical elements in almost primordial gas clouds and the very earliest stars. It is the role of nuclear physics to establish how behavior at the submicroscopic nuclear level leads to effects of large-scale, stellar proportions.

The creation of nuclei typically occurs under explosive conditions like those found in novae (stars that brighten as a result of receiving nuclear material from an exploding partner), supernovae (massive stars that explode as a result of gravitational collapse), and shortly after the Big Bang itself. Many of the important chains of nuclear reactions proceed along paths which involve very short-lived nuclear species having far-from-stable configurations. Ever since the earliest theoretical models of nucleosynthesis (the name for the process of creating nuclei, generally under high-temperature, high-pressure conditions), it has been a challenge to experimentalists to find ways of investigating these processes in the laboratory. This challenge is now being addressed with new and powerful experimental techniques.

Major advances have recently been achieved by nuclear physicists who are forming beams of short-lived nuclear species with energies equal to or exceeding those found in stars. Such beams have been generated through several quite different technical approaches, involving innovative uses of existing accelerators and incorporating novel concepts for upgrading their abilities. For example, researchers studied crucial Big Bang reactions1 with 8Li ions (lifetime of 0.8 s). The short-lived 8Li ions were generated in-flight in a nuclear reaction using beams of stable 7Li from an existing low-energy accelerator at Notre Dame University. At the other end of the energy scale, uranium beams of several tens of GeV at the GSI Laboratory in Darmstadt, Germany, were fragmented in a reaction target.2 From the fission-product beams (which were “forward focused” by virtue of their high, relatively uniform momentum values in the forward direction), more than a hundred new neutron-rich isotopes were observed for the first time in a region which is near the astrophysical rapid neutron-capture process (r process). The r process is responsible for making most of the heavy elements (heavier than iron) and is important during the few seconds of a supernova explosion. The lifetimes and other properties of these exotic nuclei that are required for modeling the r process remain to be measured.

At other laboratories beams of short-lived 13N (lifetime 10 min), 18F (lifetime 110 min), and 19Ne (lifetime 17 s) were generated by a different method: by first producing these nuclei with a primary production accelerator and then introducing them into the ion source for acceleration in a second accelerator. This two-accelerator concept is complementary to the in-flight production of radioactive ion beams mentioned before. It has the advantage that the secondary beam is produced with the quality and precise energy necessary to measure the specific cross sections (reaction likelihoods) of astrophysical interest, which often depend sensitively on energy because of resonances (preferred quantum states for the nuclei) or thresholds (the minimum energy required for a reaction to occur). In this way, important reactions have recently been studied with beams of 18F and 19Ne that provide the opportunity for a progression beyond the CNO cycle in hot stars. The carbon–nitrogen–oxygen (CNO) cycle is the normal path for the conversion of hydrogen into helium in stars slightly more massive than the sun, involving the participation of carbon, nitrogen, and oxygen nuclei. “Breakouts” from the CNO cycle, in which nuclear reactions proceed to a new stage beyond the production of light elements, are the assumed gateways leading to the generation of elements heavier than oxygen via the rapid proton capture (rp) process, the mechanism believed to be responsible for generating most of the medium-sized elements, namely those that are heavier than oxygen but have no more than 100 nucleons (neutrons and protons). At Louvain-la-Neuve in Belgium, where much of the two-accelerator technique was pioneered, researchers recently measured3 the 18F(p,α) reaction (the parentheses after the nucleus indicate the light particles going into and coming out of the reaction, respectively; p denotes a proton, and α indicates a helium nucleus, or alpha particle) and determined an upper limit for the breakout reaction 19Ne(p,γ) (where γ denotes a gamma ray) using two cyclotrons4 (cyclotrons are devices that accelerate charged particles in spirals until they eventually collide with a target at high energies). At Argonne National Laboratory the (p,α) and (p,γ) reactions were studied with 18F beams generated from an ion-source sample produced by a medical cyclotron (18F is frequently used for positron tomography in medicine) and accelerated with the ATLAS superconducting ion linac.5,6 Figure 7 illustrates schematically the path of the rp process in the nuclear chart, details of the reaction network in the region near the CNO cycle with possible breakout pathways to heavier elements, and an example of a reaction cross section taken from the work at Argonne.
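The premium placed on precise beam energy can be seen from the narrow-resonance form of the thermonuclear reaction rate (a textbook expression, not one quoted in this article):

\[ \langle \sigma v \rangle = \left(\frac{2\pi}{\mu k T}\right)^{3/2} \hbar^2\, \omega\gamma\, \exp\!\left(-\frac{E_r}{k T}\right), \]

where μ is the reduced mass of the colliding pair, E_r the resonance energy, and ωγ the resonance strength. The exponential factor is why a single low-lying resonance, such as the one in the 18F(p,α) data of Fig. 7, can dominate the rate at nova temperatures.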

Energetic beams of short-lived nuclei also allow types of reactions that, while not themselves occurring in stellar processes, indirectly provide important information. An interesting example is the Coulomb dissociation of 8B (the breaking up of 8B via purely electromagnetic interactions—the rapidly changing electric and magnetic fields near the target nucleus—rather than through nuclear forces), which was measured at the RIKEN Laboratory, Japan.7 This process may provide important information on the inverse reaction, the proton capture on 7Be leading to 8B in a state that decays by γ-ray emission. This process is an important link in the hydrogen burning cycle of our sun and is important for understanding the deficiency observed for neutrinos from the Sun and a possible connection with neutrino oscillations (see the article in this chapter by Gerald Garvey).

The potential of these new approaches for studying, in the laboratory, key reactions of importance for nucleosynthesis, as well as novel aspects of nuclear structure in itself, has triggered interest worldwide in the upgrade and construction of accelerators to generate the high-quality beams of short-lived nuclei required. Such projects are underway in the U.S., Canada, Japan, and several countries in Europe.

Walter F. Henning
Argonne National Laboratory

(630-252-4004)

1. X. Gu et al., Phys. Lett. B 343, 31 (1995) and references therein.

FIGURE 7. (top) The path of the rapid proton capture (rp) process, believed responsible for generating many medium-sized elements having up to 100 nucleons (protons and neutrons). The chart of nuclides indicates closed proton shells (particularly stable configurations of nuclei) and a magnified view of the detailed reaction network near the region of the CNO cycle. (bottom) The experimental reaction cross section (which indicates the likelihood of the nuclear reaction) as a function of energy for a previously unknown resonance (a favored quantum state for the nucleus) for the 18F(p,α) process (the parentheses after the nucleus indicate the light particles going into and coming out of the reaction, respectively; p denotes a proton, and α indicates a helium nucleus, or alpha particle). This resonance dominates the reaction rate at temperatures present in nova explosions, brightenings of stars believed to be caused by material falling onto the star’s surface from the explosion of a neighboring partner star (Refs. 5 and 6).



2. M. Bernas et al., Phys. Lett. B 331, 19 (1994).
3. R. Coszach et al., Phys. Lett. B 353, 184 (1995).
4. R. D. Page et al., Phys. Rev. Lett. 73, 3066 (1994).
5. K. E. Rehm et al., Phys. Rev. C 52, R460 (1995); 53, 1950 (1996).
6. K. E. Rehm et al. (submitted).
7. T. Motobayashi et al., Phys. Rev. Lett. 73, 2680 (1994); Proceedings of the IVth International Conference on RIB, Omiya (Japan), 1996; see also J. v. Schwarzenberg et al., Phys. Rev. C 53, 2598 (1996).

PARTICLE PHYSICS

The Standard Model

Particle physics is blessed and cursed with a “Standard Model” (SM) of interactions at the very smallest distance scale: blessed in that the SM enables very precise prediction of thousands of experimental quantities, and cursed because the SM rests upon a variety of miraculous accidents of nature, and because so many of the fundamental parameters on which the theory depends, such as the masses of elementary particles, cannot be calculated from first principles.

The Standard Model prescribes how the forces of nature are propagated by the exchange of gauge bosons between matter particles. (The term “gauge” refers to the requirement that the force-carrying particles respect gauge symmetry, or invariance under local changes of their complex phase.) The matter particles are categorized into two classes: the quarks and leptons. Each class has three doublets of particles, with the members of each doublet bearing very closely related properties. For example, the first doublet consists of the up and down quarks, which are the primary constituents of protons and neutrons. The second doublet consists of the strange and charm quarks. The sixth quark, called top, was identified only a year ago in experiments at Fermilab after more than a decade of search. Surprisingly, it weighs some 35 times as much as its partner, the bottom quark. Meanwhile, each lepton doublet comprises an electrically charged particle (e.g., the electron) and a companion chargeless neutrino. No clear evidence has been given to date that the neutrinos have mass, although there is no fundamental prohibition to this being the case.

The forces operating in ordinary matter divide into the strong (nuclear) and the electroweak interactions, each arising because the appropriate matter particles possess some characteristic property to which the gauge bosons can connect. Thus the quarks bear a property called color, to which the appropriate gauge boson, the gluon, couples, giving rise to the strong force, as described in the theory of quantum chromodynamics (QCD). This color property is analogous to electric charge, to which the electromagnetic force connects. The strong force (also called the color force) between two quarks actually gets stronger as the quarks are pushed apart (during a high-energy collision). Because of this, isolated (colored) quarks and gluons cannot be liberated and observed in the lab. Instead, excess collision energy is converted into additional quarks and gluons which combine in such a way as to form only colorless composite particles (mesons or baryons). These emerge from the collision region as sprays of particles referred to as jets.
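The distance dependence of the color force is encoded in the running of the strong coupling; to lowest order (a standard QCD formula, added here for illustration),

\[ \alpha_s(Q^2) = \frac{12\pi}{(33 - 2 n_f)\,\ln(Q^2/\Lambda^2)}, \]

where Q is the momentum transferred in a collision, n_f is the number of quark flavors light enough to contribute, and Λ (a few hundred MeV) is the QCD scale. The coupling shrinks at large Q^2 (short distances, the “asymptotic freedom” of QCD) and grows as Q^2 falls toward Λ^2, the perturbative hint of confinement.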

Another important ingredient of the Standard Model is the Higgs boson, a particle which endows the carriers of the weak force, the W and Z bosons, with large mass. Because of this the Higgs is also responsible for breaking the symmetry between the weak and electromagnetic forces. It is thought that these two forces are really two different aspects of a single force, the electroweak (EW) force, and that only now, in a universe that is much colder than in earlier epochs, do the two forces appear vastly different. Theorists would like to carry the unification further by devising a single framework for the EW and color forces. There is indeed evidence that this unification should occur at energies some 14 orders of magnitude larger than the W and Z scale, but in the SM this requires a “miraculous” fine tuning to effect the electroweak unification at so small an energy compared with the full unification. Gravity, the weakest of the fundamental forces, must also be brought into a totally unified theory for particle interactions.

To date, the SM has successfully withstood experimental test in thousands of individual measurements, within the uncertainties inherent in theory and experiment. There are, however, compelling reasons to believe that the SM is but an approximation to the correct description of nature. First, we do not understand how the rather bewildering range of quark and lepton masses arises, why there should be precisely six types of each in the first place, or how the electroweak symmetry is broken. Second, when measurements of the strengths of the forces at energy levels available at current accelerators are extrapolated to the much higher energy levels (equivalent to the much hotter conditions which existed in the universe shortly after the big bang) at which the different forces were supposedly unified, it becomes apparent that still more particles are needed in the theory.

One candidate cure for this problem has been the idea of supersymmetry. According to the supersymmetry model, each known fermion particle would have a boson counterpart, and vice versa. Thus, for every quark (a fermion) there would exist a “squark” (a boson); conversely, bosons such as gluons would possess a fermion sister, the “gluino.” It is expected that these new companion particles, if they exist, should be found either at our existing accelerators or in experiments at facilities now under construction, and that such discoveries would fundamentally alter our view of the world. Other alternative theoretical ideas for remedying the SM’s ills would also result in observable new particles, or in deviations of precisely measured quantities from their SM-calculated values.

Electroweak Interaction Studies

In the past year, much progress has been made in testing the EW model. In these tests, hundreds of quantities are subjected to overall constrained fits for consistency with the SM. The tests require knowledge of a few basic parameter values, such as the electromagnetic coupling constant, the weak decay coupling, the mass of the Z boson, and the mass of the top quark. New determinations of the top quark mass were presented at the biennial international “Rochester” conference (so named because of the site of the very first meeting) in Warsaw by the Collider Detector at Fermilab (CDF) and D0 experiments at the Fermilab proton–antiproton collider. These gave the result of 175 ± 6 GeV/c2 for the top mass—an improvement in precision by over a factor of 2 in one year. The W boson mass determination also improved; the overall uncertainty came down from 160 to 130 MeV/c2 (0.16% of the W mass).

Extremely precise measurements of the parameters associated with the Z boson continued to emerge from the ALEPH, DELPHI, L3, and OPAL experiments at CERN’s Large Electron–Positron (LEP) collider and the SLD experiment at SLAC. The impressive precision of these results is indicated by the determination of the Z boson mass as 91.1863 ± 0.0020 GeV/c2, a measurement requiring allowances for the perturbations caused by the tidal forces in the underlying rock and current fluctuations due to the passage of the high-speed trains from Geneva to France. For the most part, these measurements, taken together with the new results from the Tevatron and improved information from neutrino scattering at Fermilab, provide a beautiful confirmation of the SM.

However, one year ago, a few measurements at LEP and SLAC seemed to indicate possible deviations: the rate of decays of the Z into charm and anticharm quarks was too small (by about two standard deviations, or, equivalently, by about twice the size of the experimental uncertainty), while the rate for Z decay into bottom and antibottom quarks was too large by more than three standard deviations. These results were eagerly analyzed as possible harbingers of a chink in the SM armor and indirect evidence for supersymmetry. In 1996 both rates were scrutinized again, with new experimental measurements being reported and with updated input parameters taken from the CLEO experiment at Cornell (the measurement of Z boson charm quark decays rests on knowing the rate for charm mesons to decay into K and phi mesons, recently updated at CLEO) and an improved understanding of the physics processes giving rise to the production of heavy quarks at LEP and SLAC. The result was to bring the charm rates into almost perfect agreement with the SM and to reduce the bottom quark anomaly to only 1.8 standard deviations. Thus the SM has been given at least a temporary reprieve.

Within the SM, the masses of the quarks and leptons are not specified, but must be measured experimentally. Thus the large disparity in the quark masses (ranging from a few MeV/c² for the up and down quarks to 175,000 MeV/c² for the top quark) poses a continuing mystery and challenge for theory. However, the fact that the top quark mass is numerically similar to the scale of the electroweak-symmetry-breaking mechanism (an energy level of roughly 100 GeV) suggests that the top quark may be intimately tied to the source of the EW symmetry breaking. A variety of theoretical investigations continue to explore this connection.

Similarly, the pattern of lepton masses is puzzling; though smaller than the quark masses, the masses of the charged leptons (e, mu, tau) vary by as large a factor as the variation in quark masses. As for the neutral leptons, the neutrinos, evidence has accumulated this past year that their masses may not be strictly zero. After presenting somewhat controversial evidence in 1995, the LSND collaboration working at Los Alamos has offered new data in 1996 to strengthen its claim for having observed possible oscillations between muon antineutrinos and electron antineutrinos (see the article by Gerald Garvey in the Nuclear Physics chapter). Nonzero mass neutrinos can be straightforwardly incorporated into the SM, and the possibility exists that massive neutrinos can help resolve the "dark matter" issue in astrophysics insofar as neutrinos with mass would represent a sea of noninteracting massive particles left over from the Big Bang. The study of astrophysical and terrestrial neutrinos will be given a boost with the CHORUS and NOMAD detectors at the CERN accelerator and the Super-Kamiokande underground detector in Japan, which came on line in the past year.

Why are there six quarks and six leptons? Although equal numbers of quark and lepton doublets are expected theoretically, there is no fundamental reason why there should be three such pairings. Also, why should the weak and electromagnetic forces have the relative strengths they do? It is known theoretically that at least three doublets are necessary for particle–antiparticle asymmetry to occur, as typified by CP (charge conjugation/parity) nonconservation, observed in the universe as a preponderance of matter over antimatter. But the character of this CP violation has not yet been fully explored. The parameters (elements of a "mixing matrix") that describe the degree to which the weak and strong forces view the quarks differently have been measured with increasing accuracy over the past year through the CLEO studies of the decay rates of bottom and charm quark mesons, through the study of oscillations between bottom quark mesons and antimesons in the SLD experiment, and in the ALEPH, DELPHI, L3, and OPAL experiments.

These detailed determinations now give good predictions for the CP-violating effects to be expected in the decays of bottom quark mesons, which should be clearly observable in a new generation of experiments at SLAC, KEK in Japan, DESY in Germany, and the Tevatron.

Strong Interaction Studies

The strong-interaction theory QCD is on relatively firm footing, with few doubting its basic validity. Nevertheless, physicists keep looking for surprises. For example,


great interest arose in 1996 when the CDF experiment announced measurements of quark or gluon jets at large angles with respect to the proton and antiproton beams and with energies approaching half the energy of the incoming beam particles. The data agreed with QCD predictions qualitatively quite well over seven orders of magnitude of variation in the production rate. But for the largest-energy jets the data exhibited an excess of nearly a factor of 2 over prediction. These experiments are a close analog of the Rutherford alpha-particle scattering experiments of 90 years ago, in that Rutherford was surprised by the number of alpha particles scattered through large angles by his target (gold atoms in a film). This anomaly in Rutherford's data was later attributed to the existence of constituents (the nuclei) inside atoms. The search for an excess in high-energy quark scattering could similarly reveal a new layer of substructure within the quarks. By comparison, the other Tevatron experiment, D0, weighed in with jet data which, though agreeing with CDF within their mutual errors, came closer to agreement with the QCD prediction. Subsequent reevaluation (by several groups) of data concerning the makeup of the proton showed that modifying the fraction of the proton momentum carried by energetic gluons could bring the data and theory into much better agreement, without upsetting the large body of data previously acquired from lower-energy probes of the proton with leptons. As the dust settled, it appeared that the jet data could be adequately understood within the SM.

Studies in recent years have shown that the proton does not consist merely of three valence quarks, but is stocked with a myriad of gluons plus virtual particles that momentarily pop into and out of existence. A further modification of this picture occurred last year because of experiments at the HERA electron–proton collider in Germany. They reported that the density of low-momentum gluons in the proton was much larger than expected; indeed, from these results the proton looks much more like a loaded cattle car than the structureless elementary particle we thought of only 30 years ago. With new special detectors installed, the H1 and ZEUS experiments have been able to search for the presence of low-momentum constituents with very-low-energy probes. What they find is that an excess of gluons persists all the way down to those carrying as little as one-millionth of the proton's momentum. At some point, this effect must level off so as to satisfy the constraints from other experiments in which the proton is viewed as a single unperturbed entity, but the effect is striking and points to an interesting new regime where perturbative treatment must be supplanted by the collective effects of multiple-gluon states.

Until recently, the jets of ordinary particles evolving from quark and gluon jets were experimentally indistinguishable. Recent work by the LEP collaborations has been able to isolate quite pure samples of quarks or gluons emerging from Z boson decay. They have been able to show that the jets from gluons are more spread out, and result in more particles of lower momentum than the corresponding quark jets of the same energy. These effects are expected insofar as the gluons have a larger intrinsic color charge than quarks, and thus should give more radiated gluons. The ability to distinguish quarks and gluons, even imperfectly, could become an important experimental tool in disentangling the decays of particles which decay dominantly into quarks (e.g., the Higgs boson) from the large QCD backgrounds.

Searches for New Particles

As noted above, several recent hints of effects not accounted for by the SM have been at least partially resolved without the need for new physics. Nevertheless, the continued strong belief that such new phenomena are likely dictates that every possible search be made. The most noteworthy development in the past year has been the increase in the energy at which the LEP electron–positron collider operates. Early in 1996, LEP produced collisions at about 135 GeV, followed in midyear by a run at 162 GeV. This latter energy is sufficient to allow production of pairs of W bosons. LEP thus initiated a program of W studies which in time will rival that of the Tevatron, where Ws are produced in proton–antiproton collisions. The increase in LEP energy has also permitted the extension of searches for new particles, such as those expected in supersymmetry. Previous limits for such particles had all been limited to about 45 GeV/c², or half the energy of previous LEP operation. The sixfold increase in data samples by the Tevatron collider experiments over the past year also enabled a substantial increase in the reach of searches for new particles.

The search for supersymmetric partners for the known quarks, leptons, and gauge bosons is complicated by the fact that the character of the supersymmetry-breaking mechanism chosen by nature is not pinned down, so there are many as-yet free parameters permitting a wide range of supersymmetric particle masses and decay schemes. Thus a systematic search over a variety of masses and final observed particles is required. However, general guidelines for the expected masses of such objects now exist, and in many cases current experimental searches are reaching into this favored territory. At the time of writing, no indications of new particles have been found. However, the LEP experiments have considerably extended the excluded region for the supersymmetric partners of the gauge bosons and leptons, while the Tevatron experiments have placed new limits on possible partners of the quarks and gluon. An intriguing anomalous concentration of events with four jets in the final state, seen by the ALEPH experiment at LEP at 135 GeV, did not rematerialize in the early running at 162 GeV; once again, departures from SM behavior have proved illusory.

The experiments have also searched for other possible departures from the SM. Neither the HERA nor Tevatron experiments have seen any evidence of new states which decay into a quark and a lepton, as would be characteristic of theories in which there is a new symmetry connecting the quark and lepton sectors. Searches have been made for new heavy gauge bosons, members of a fourth doublet of quarks, new massive neutrinos, right-handed W bosons (a right-handed particle is one whose spin points along its direction of motion; the SM favors left-handed Ws), or any new state which can decay into a pair of quark jets. Though the limits for these have all been improved, no new signals have been seen.

Developments in Particle Theory

Particle theory is at once practical (serving to improve calculations of phenomena observed in present-day experiments) and visionary (seeking to frame the global structure of matter at its elemental level). In almost all of the topics reviewed above, advances in theoretical understanding have contributed to the improved experimental knowledge. Theoretical treatment of the strong collective effects in the interactions of quarks and gluons has been especially active in the past year, and has helped open several new windows for tests of QCD in situations where many colored objects interact in collective ways. Good progress has been made in developing the phenomenology of supersymmetry theory to give better and more comprehensive strategies for future searches.

In one important area, theory has recently produced results for measurable quantities as precise as experiment. For instance, computations involving simulations of particle interactions on a finite lattice of space-time points permit calculations incorporating the full complications of the theory. Such lattice calculations of good precision have been made for the masses of the known mesons and baryons composed of quarks and antiquarks, and predict new states made wholly from collections of gluons. The decay properties of heavy-quark states have also been accurately determined using these techniques, and have refined our understanding of CP violation and of how quarks turn from one type into another. Furthermore, the lattice calculations of the strong coupling constant based on the masses and transitions of bottom–antibottom bound states give precision comparable to more traditional methods. The computations themselves have been at the cutting edge of the development of new massively parallel computing techniques.

At the other end of the theoretical spectrum, the theory of fundamental interactions at the very highest energies has developed over the past two decades into a very rich and provocative subject. Usually called string theory, these investigations postulate that at the very smallest distance scale, the ordinary four dimensions of space-time are augmented by additional dimensions reflecting internal symmetries of matter. The domain of these added dimensions (typically six or seven) is small enough to be undetected in most low-energy experiments, but they induce powerful constraints on those properties which were introduced as ad hoc parameters in the SM. An analogy for how the added dimensions function can be found in the familiar waveguide structures which channel the microwaves used for radio-frequency power and information transfer. Studied from afar, the waveguide looks like a simple linear structure in which energy is transmitted in one dimension, with a collection of modes of characteristic frequencies and speeds. Seen more closely, the waveguide is found to have two additional dimensions (breadth and width); the shape and size of the transverse structure dictate precisely which modes exist.

Some have suggested that these string theories, with their focus on phenomena at energies enormously higher than any conceivable experimental investigation, are akin to speculating about the number of angels on the head of a pin. Recent work in string theory has, however, provided some results that we can directly probe. Indeed, string theories promise to establish the patterns of quarks and leptons and their masses, and, if correct, strongly suggest a fundamental necessity for the supersymmetric extension of the SM. In the past year, new understanding has emerged of the connection between such theories in the seemingly disparate limits of strong and weak coupling. These connections have shed new light on longstanding puzzles related to the possible existence of magnetic monopoles and to the evolution of black hole states. They promise to illuminate the basic theoretical structures applied to all of particle physics. String theories may yet lead to the unification of all four physical forces, including gravity, and perhaps even to a demonstration that space-time itself is quantized.

Summary

The year in particle physics has seen great progress on many experimental and theoretical fronts. Much of this work has supported the predictions of the existing Standard Model, and in this sense no dramatic new paradigm shifts have occurred. However, the current precision of experiments, and the improved capability of theory to quantify SM predictions and systematize the possible departures from the SM, give great hope that the long-standing pressure on the mysterious success of this apparently flawed theory will soon give rise to real clues for the next stage of particle physics understanding. Nonetheless, the tremendous success of the SM in providing accurate predictions of diverse phenomena suggests that it will survive as at least an approximation to a more general picture, much as classical physics is retained via the correspondence principle in quantum physics.

Paul Grannis
SUNY, Stony Brook

([email protected])

PLASMA PHYSICS

Shear-Stabilized Tokamak Plasmas. Physicists continue on their long-term quest to make nuclear fusion a viable alternative energy source. An approach known as magnetic fusion is a leading candidate for achieving this goal. The most widely used magnetic fusion device is the tokamak, a doughnut-shaped chamber which employs magnetic fields to confine a plasma, a collection of electrically charged particles. In addition to confining the plasma, the tokamak allows the charged particles to reach high temperatures and densities. These extreme conditions enable nuclear fusion to occur, but they also can create turbulence in the plasma. Turbulence can rob energy from the plasma, preventing fusion reactions from taking place and even enabling plasma particles to escape the tokamak. E. J. Strait and his colleagues of General Atomics and M. C. Zarnstorff and his colleagues at the Princeton Plasma Physics Laboratory describe experimental confirmations of theories predicting how to reduce this turbulence.

Inertial Fusion Research at the OMEGA Laser Facility. Another promising road to practical nuclear fusion is the inertial confinement fusion method, in which a pellet containing nuclear fuel is compressed to great densities and heated to high temperatures. In efforts to improve and better understand this fusion technique, important experiments are being performed at the OMEGA facility in Rochester, NY. John M. Soures of the Laboratory for Laser Energetics at the University of Rochester describes experiments under way at OMEGA, which opened in 1995. The experiments at OMEGA will investigate, among other things, approaches for achieving a condition known as "ignition," in which enough fusion power is produced to enable the fuel to heat itself and consequently initiate significant amounts of energy-producing nuclear fusion reactions. These experiments will provide important information for experiments planned at the National Ignition Facility, a proposed U.S. research center where scientists plan to push fusion research to new frontiers.

Ben P. Stein
AIP

Shear-Stabilized Tokamak Plasmas

Confining a high-temperature, high-pressure plasma presents the scientific challenge of studying the complex ways in which charged particles interact with electric and magnetic fields, with important practical applications to controlled nuclear fusion. A plasma is a collection of electrically charged particles, typically consisting of negatively charged electrons and positively charged ions. A single ion or electron can be trapped or confined indefinitely in a suitable magnetic field, but a hot plasma made up of many such particles is subject to a wide variety of small- and large-scale instabilities that can drain the plasma's energy through turbulence, or even cause a complete loss of confinement. Dramatic recent tokamak experiments have shown that these instabilities can be suppressed with the appropriate configuration of confining magnetic fields, reducing the rate of energy and particle leakage to the minimum theoretically predicted level and allowing the stable confinement of very high-pressure plasmas.

In a tokamak, the plasma is confined by helical magnetic fields which fill the interior of the tokamak to form a nested set of doughnut-shaped surfaces, produced in part by electric currents in the plasma itself. Magnetic shear (the change in the twist of the helical magnetic field from one surface to the next) and velocity shear (the change in the plasma's flow velocity from one surface to the next) are important stabilizing influences. It was predicted more than 15 years ago1 that in a configuration with "negative magnetic shear," the reverse of the usual magnetic shear direction for a tokamak, the stability of certain short-scale oscillations in the plasma would actually improve with increasing plasma pressure. More recent theories (as cited in Refs. 2–5) show that both velocity shear and negative magnetic shear can inhibit the small-scale turbulence believed to be responsible for energy loss in tokamaks. High pressures at the center of the tokamak can then be realized by giving the plasma a "D"-shaped cross section, which raises the pressure limit set by large-scale distortions of the plasma.

FIGURE 1. The measured ion temperature in a shear-stabilized DIII-D plasma agrees well with the prediction of "neoclassical theory," in which one only considers energy transfer caused by collisions between particles and neglects energy transfer by turbulence.

FIGURE 2. Photograph of fusion capsule irradiation on the OMEGA laser at the University of Rochester. Ultrahigh temperatures and densities can be attained in laser-driven implosions.
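For reference, the magnetic shear discussed above has a standard quantitative definition in terms of the tokamak safety factor q(r), the number of toroidal transits a field line makes per poloidal transit on the flux surface of minor radius r:

\[ s(r) = \frac{r}{q}\,\frac{dq}{dr}. \]

"Negative magnetic shear" then corresponds to a core region where q decreases with radius (s < 0), the reverse of the usual monotonically rising q profile.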

In the past two years, negative magnetic shear experiments have led to greatly improved energy and particle confinement in the DIII-D tokamak (San Diego, CA)2 and the Tokamak Fusion Test Reactor (Princeton, NJ),3 and more recently in other devices including JT-60U (Japan), JET (European Community), and Tore Supra (France). A dramatic reduction of turbulence is observed,4,5 allowing plasma pressures up to 6 atm at temperatures of 200 to 400 million degrees Kelvin. One result of improved confinement is an increase in velocity shear, leading to further suppression of measured turbulence, in good agreement with theory. Thus, surprisingly, the improved confinement can become self-sustaining even if the magnetic field evolves to a positive-shear state. In these experiments3,4 the rates of both ion-energy loss (Fig. 1) and particle loss have for the first time dropped to the minimum level predicted by "neoclassical" plasma theory, in which one only considers energy transfer caused by simple collisions between particles and neglects energy transfer by more complex turbulent processes.

The success of shear stabilization for plasma stability and confinement may point the way to a more compact and economical fusion power plant, the ultimate goal of much research in magnetically confined plasmas.6 A well-confined, high-pressure plasma is crucial since the fusion power output increases roughly as the square of the plasma pressure. Furthermore, the negative magnetic shear configuration is compatible with the distribution of self-generated electric currents predicted by neoclassical theory, reducing the technological requirements for sustaining the tokamak plasma current. Shear-stabilization techniques have led to record plasma temperatures in several tokamaks, and have tripled the fusion power in DIII-D experiments using deuterium plasmas. The extension to more reactive deuterium–tritium plasmas is being tested in TFTR. A promising theoretical basis exists for understanding the physics of shear-stabilized plasmas, and for extrapolating this regime to steady-state operation in the next generation of fusion devices.
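The pressure-squared scaling quoted above follows from the form of the fusion power density; schematically, for a deuterium–tritium plasma held at fixed temperature T (a standard estimate, not a result specific to these experiments):

\[ P_{\mathrm{fus}} \propto n_D\, n_T\, \langle \sigma v \rangle \propto n^2 \propto \left( \frac{p}{T} \right)^2, \]

so doubling the plasma pressure at a given operating temperature roughly quadruples the fusion power output.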

E. J. Strait([email protected])

The DIII-D TeamGeneral Atomics

M. C. ZarnstorffThe TFTR Team

Princeton Plasma Physics Laboratory

1. J. M. Greene and M. S. Chance, Nucl. Fusion 21, 453 (1981).
2. E. J. Strait et al., Phys. Rev. Lett. 75, 4421 (1995).
3. F. M. Levinton et al., Phys. Rev. Lett. 75, 4417 (1995).
4. E. A. Lazarus et al., Phys. Rev. Lett. 77, 2714 (1996).
5. E. Mazzucato et al., Phys. Rev. Lett. 77, 3145 (1996).
6. H. P. Furth, Sci. Am. 273, 174 (Sept. 1995).

Inertial Fusion Research at the OMEGA Laser Facility

The demonstration of a self-heating nuclear fusion reaction1 in the laboratory is a major goal of the inertial fusion research program. To create a self-heating fusion reaction, researchers must achieve a condition called ignition, in which the heat produced by the particle byproducts of nuclear fusion exceeds the heating provided by the external energy source (i.e., laser or particle beam) that initiates the reaction. The demonstration of ignition is a major objective of the National Ignition Facility (NIF), a 1.8-million-joule fusion laser facility currently being designed by a consortium consisting of Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), and the University of Rochester Laboratory for Laser Energetics (LLE). Once the NIF is completed in the early part of the next century, it will be applied to a variety of research and development programs, including near-term defense missions and longer-term energy research efforts.

Two approaches are being investigated for demonstrating ignition at the NIF and for achieving a maximum in the gain, the ratio of the energy released in fusion reactions to the energy deposited in the target. The indirect-drive approach makes use of hohlraums (cylinders made of a heavy element) to convert a significant portion of the laser energy to x rays that are used to compress and heat the fusion capsule. This is the main-line approach that will be investigated on the NIF. The alternative approach, direct drive,2 involves the placement of laser energy directly onto the capsule. This approach is potentially more efficient and may lead to ignition (the "match" that starts a self-heating reaction) and burn (the actual "combustion" that results in a large release of energy) with lower laser energy than the indirect-drive approach.
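In code, the gain figure of merit defined above is just a ratio; the sketch below uses round numbers on the scale quoted later in this article for OMEGA (a 40 kJ laser, with yield exceeding 1% of the energy delivered to the capsule), purely for illustration:

```python
# Target gain: fusion energy released divided by driver energy deposited.
# The numbers below are illustrative, chosen to match the scales quoted
# in this article, not measured results.
def target_gain(fusion_energy_j, driver_energy_j):
    return fusion_energy_j / driver_energy_j

# A capsule releasing 400 J from 40 kJ of laser energy has gain 0.01;
# ignition-class designs aim for gain much greater than 1.
print(target_gain(400.0, 40e3))  # 0.01
```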

To pursue the direct-drive approach, the OMEGA laser facility was constructed at LLE and began operations in 1995. OMEGA is a 40 kJ, ultraviolet, 60-beam laser system (see Fig. 2). OMEGA and the companion facility, NOVA, at LLNL are the two largest fusion laser facilities in the world. OMEGA is unique in that it is the highest-energy laser able to conduct high-quality experiments using both direct-drive and indirect-drive targets.

An important goal of experiments to be conducted on OMEGA is to address the key physics issues of hydrodynamic stability for direct-drive capsules. Successful demonstration of direct-drive implosions requires precise control of the irradiation uniformity and the finish of capsule surfaces. OMEGA will be used to test various concepts for achieving uniform laser irradiation, including a technique devised at LLE known as SSD (smoothing by spectral dispersion).3 It will also serve as a test bed to validate the cryogenic fuel capsules required to demonstrate ignition on the NIF. Initial experiments with OMEGA have already established a record thermonuclear energy release for inertial fusion capsules (the thermonuclear energy release exceeded 1% of the laser energy delivered to the capsule). OMEGA also was used to demonstrate the ability to conduct high-quality indirect-drive experiments in a collaboration with LANL and LLNL.

The OMEGA laser is also the centerpiece of the National Laser Users Facility (NLUF). The NLUF is a Department of Energy-funded program that enables qualified users to conduct experiments in the general area of high-energy-density physics. The ultrahigh energy density conditions that can be attained on OMEGA (temperatures in excess of 100 million degrees, pressures in excess of billions of atmospheres, densities in excess of several hundred times solid density for deuterium and tritium) make it an ideal laser for experiments in fundamental plasma physics, inertial fusion, x-ray lasers, equation-of-state research, and laboratory astrophysics.

John M. Soures
Laboratory for Laser Energetics
University of Rochester
([email protected])

1. J. Lindl, Phys. Plasmas 2, 3933 (1995).
2. R. L. McCrory et al., in Plasma Physics and Controlled Nuclear Fusion Research 1994 (International Atomic Energy Agency, Seville, 1994), p. 33.
3. S. Skupsky et al., J. Appl. Phys. 66, 3456 (1989).

POLYMER PHYSICS

Demixing of Polymer Blends in Ultrathin Films. Polymers are long chain molecules composed of simple molecular building blocks called monomers. Two examples of polymers are polystyrene (used in foam cups) and polyvinyl methyl ether (used to make cosmetic creams). While individual polymers may have specific useful properties, mixing them can result in a polymer blend which combines these properties. But such polymer blends are often unstable, and the two different polymer "species" often tend to separate. This demixing process can occur in ultrathin polymer-blend films, only hundreds or thousands of atoms thick, which are used in such settings as the electronics industry to fabricate semiconductors. Computer simulations have shown that phase separation is different in three and two dimensions. Now, Alamgir Karim and his colleagues at the National Institute of Standards and Technology describe experiments that show ultrathin blend films phase-separating similarly to the two-dimensional fluids examined in simulation.

Anisotropic Phase Separation of Polymers and Liquid Crystals. When researchers combine polymers and liquid crystals, they can create versatile materials with highly desirable optical properties. Applying the insights of polymer theory, and using clever experimental techniques, Timothy Bunning of Science Applications International Corporation, Wade Adams of the Wright–Patterson Air Force Base, and their colleagues have created a polymer material which contains lines of liquid crystals spaced only 550 billionths of a meter apart. These materials can be used as fiber-optic switching elements, which, for instance, redirect an incoming optical fiber beam and send it off into one of several new directions. A major advantage of the method is that such materials can be created on the desired surfaces in a one-step process. These experimental demonstrations also open up new areas of inquiry for polymer theory.

Ben P. Stein
AIP

Demixing of Polymer Blends in Ultrathin Films

Predicting the properties of thin polymer films is a recurrent problem in manufacturing. Many commercially important polymer films, such as those found in paints and anti-rust coatings, are blends of chemically distinct polymer species and other suspended matter that may be uniformly or nonuniformly mixed. These complex mixtures combine the characteristics of the components in a way which depends on how they are spatially distributed. Making the best use of these mixtures requires an understanding of how factors such as film thickness affect mixing. We investigated ultrathin polymer blend films and found that they demix differently than bulk mixtures and more moderately thin films, showing that film thickness has an important influence on the properties of thin blend films.

In general, three stages of demixing are commonly identified: early, intermediate, and late-stage phase separation. Consider a uniformly mixed blend of two chemically distinct polymer species. When this material demixes, it eventually separates into different coexisting phases, regions in which one species predominates over the other. The concentrations of polymers in these phases depend strongly on temperature. At an early stage of phase separation, the concentration of one polymer species over the other becomes enhanced on a local scale, but the material is still uniformly mixed on a larger scale. If the polymers are mixed in roughly equal amounts, a tubular threadlike pattern is formed in the next stage through a self-organization process where the two polymer species form a bicontinuous structure, i.e., intertwined in such a way that each species forms a continuous strand throughout the material (known as a "spinodal pattern"; see Fig. 1). During this intermediate stage of phase separation the spinodal pattern grows in size and the initially rough interfaces between the different phases become more sharply defined. In Fig. 1 we show a typical optical photograph of a phase-separating polymer blend film at an intermediate stage of phase separation. Numerical model calculations of phase separation give rise to patterns similar to Fig. 1. The fluid nature of the film becomes important in the final, late stage of phase separation, where the tubular structures forming the spinodal pattern (Fig. 1) break up into droplets as a result of a flow instability.1 This "capillary instability" superficially resembles the breakup of a water stream into droplets in a hose as it is shut off.
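Numerical model calculations of the kind mentioned above are often based on Cahn–Hilliard-type dynamics. The following minimal two-dimensional sketch (our illustration, not the authors' code; all parameters are arbitrary choices) develops a bicontinuous spinodal pattern from small random fluctuations about a 50/50 composition:

```python
import numpy as np

# Minimal explicit Cahn-Hilliard simulation of spinodal decomposition.
# phi is the local composition difference between the two species; the
# grid size, mobility M, gradient coefficient kappa, and time step are
# illustrative choices, not values from the experiments described above.
N, M, kappa, dt = 128, 1.0, 1.0, 0.01
rng = np.random.default_rng(0)
phi = 0.01 * rng.standard_normal((N, N))  # small fluctuations about phi = 0

def laplacian(f):
    # Five-point Laplacian with periodic boundaries (grid spacing = 1).
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)

for step in range(20000):
    mu = phi**3 - phi - kappa * laplacian(phi)   # chemical potential
    phi += dt * M * laplacian(mu)                # conserved dynamics

# phi now shows an intertwined bicontinuous ("spinodal") pattern;
# visualize with, e.g., matplotlib: plt.imshow(phi); plt.show()
```

Running the loop longer coarsens the pattern, mimicking the growth of the spinodal length scale during the intermediate stage.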

In bulk materials, phase separation occurs uniformly in all directions, while if the geometry of the material is a film, the phase separation occurs in a different way in the lateral (parallel to the surface of the film) and normal (perpendicular to the surface of the film) directions because of the perturbing influence of the surface. The type of phase separation occurring in blend films depends on film thickness. In thick films a process similar to that in bulk mixtures is found, while a range of behavior is found in thinner films. Thin blend films (on the order of a micron in thickness) do not phase-separate laterally because they tend to form stable layered structures which lie parallel to the plane of the substrate.2,3

Our investigation emphasizes ultrathin films, where the layering-type phase separation process is suppressed so that phase separation occurs predominantly within the plane of the film.4 In this case a nearly two-dimensional phase separation process occurs which is similar to that in a bulk fluid (i.e., a bicontinuous spinodal structure forms).

A qualitative change in the phase separation process for near-two-dimensional films is predicted during the late stage of phase separation, where fluid flow becomes appreciable. Basic differences between fluid flow in three and two dimensions make it especially interesting to investigate how the late stage, and the transition from intermediate to late stage, is affected by the ultrathin character of the film. Unlike thick films, where the spinodal structure has a tubular form at intermediate stages of phase separation, the spinodal structure attains a highly flattened tubular or ribbonlike form in near-two-dimensional films. Since the capillary instability is only associated with the breakup of tubular fluid filaments, it cannot exist in the ribbonlike structures of ultrathin films, so these ribbons must break up by some other mechanism.5 San Miguel et al.5 suggested that this breakup in two dimensions occurs through a process involving two-dimensional diffusion rather than by fluid flow, leading to qualitatively different late-stage phase separation kinetics than in three dimensions. The verification of this prediction provides a serious challenge for experiment.

Recent two-dimensional numerical simulations6 indicate that the kinetics of late-stage phase separation differs in three- and two-dimensional fluids, as suggested by San Miguel et al.,5 and these findings stimulated us to investigate this transition experimentally.4 Blend films were prepared having thicknesses of 200, 1000, and 2000 Å, and the demixing kinetics was studied for these films.4 Optical photographs of the phase separation of the 200 and 1000 Å blend films showed the development of phase separation patterns similar to bulk blends, and complementary atomic force microscopy measurements (Fig. 2) revealed that the features apparent in the optical micrograph (Fig. 1) arise from undulations of the polymer–air interface.7 Variations of the interfacial tension (arising from an overall unfavorable interaction between the polymer components) within the phase-separating layer cause the surface to buckle, and the phase separation may then be easily followed by optical microscopy4 (Fig. 1). An examination of the kinetics of this pattern growth indicated that the thicker ultrathin film (1000 Å) phase-separated similarly to bulk samples, while the kinetics of the 200 Å film is similar to numerical simulations of phase separation in ideally two-dimensional fluid mixtures.6 These observations are consistent with a transition between three- and near-two-dimensional phase separation processes in going from thick to thin ultrathin films. The transition to a layering-type phase separation was found in a film whose thickness was expected to be sufficient for this type of phase separation (~2000 Å). This film exhibited no detectable surface pattern formation over an extended period of time.

FIGURE 1. Optical micrograph of an ultrathin (~200 Å) film of polystyrene and polyvinyl methyl ether at an intermediate stage of phase separation. The variations in darkness in the micrograph reflect height variations in the phase-separated film thickness (see Fig. 2).

FIGURE 2. Atomic force microscopy image of an ultrathin film (~1000 Å) of polystyrene and polyvinyl methyl ether at a transition stage between intermediate and late-stage phase separation. This surface buckling has a similar origin (variations of surface tension within the film) to the Bénard–Marangoni patterns formed on the surface of water when it is heated from below.8 The specific pattern form is different, of course, in the phase separation problem.

Alamgir Karim([email protected])

J. F. Douglas, L. Sung, and B. ErmiNational Institute of Standards

and Technology

1. E. D. Siggia, Phys. Rev. A 20, 595 (1979).
2. R. A. Jones et al., Phys. Rev. Lett. 66, 1326 (1991).
3. G. Krausch et al., Ber. Bunsenges. Phys. Chem. 98, 446 (1994).
4. L. Sung et al., Phys. Rev. Lett. 76, 4368 (1996).
5. M. San Miguel et al., Phys. Rev. A 31, 1001 (1985).
6. Y. Wu et al., Phys. Rev. Lett. 74, 3852 (1995).
7. A. Karim et al., Am. Chem. Soc. Polym. Mater. Sci. Eng. 71, 280 (1994); A. Karim, B. Ermi, and J. F. Douglas (in preparation).
8. M. F. Schatz et al., Phys. Rev. Lett. 75, 1938 (1995).

Anisotropic Phase Separation of Polymers and Liquid Crystals

Under the proper conditions, homogeneously mixed substances can separate into different phases, each having a chemically distinct composition and sharp boundaries. Phase separation processes in polymer science have been the subject of many experimental and theoretical examinations since the beginning of polymer science itself, more than a half century ago. Interest stems from the fact that most useful polymer-based products are usually processed not from pure polymer, but from some type of solution or mixture. Examples range from adhesive tape to automobile bumpers.

One particular area of interest over the last decade has been studying how an initially homogeneous mixture of polymer and liquid crystal (LC) molecules separates into a continuous phase of solid polymer with micron-size regions of LC molecules embedded within.1 A good analogy for the structure is Swiss cheese, wherein the holes correspond to domains of LC molecules surrounded by the polymer "cheese." Various techniques can be used to create this Swiss cheese structure from a homogeneous mixture of polymer and LC molecules. In one such technique, researchers begin with a mixture of liquid crystal molecules and light-sensitive monomers (the building blocks of polymers), and then use light to form the polymer phase and subsequently create the Swiss cheese structures. As the monomers grow into polymers, the LC molecules eventually come together to form micron-sized "domains" surrounded by polymer. Such phase-separated materials can possess highly desirable optical properties and serve as components for fiber-optic lines, flat-panel displays, and switchable windows. But to optimize the performance of any of these components, one needs to control the phase separation process and thus the structure of the Swiss cheese (the number, size, and locations of the holes).

Since these films consist of micron-sized domains of liquid crystals randomly dispersed in a polymer, they are termed polymer-dispersed liquid crystals (PDLCs). The small LC domains act to scatter light because the host (polymer) and the LC droplets have different refractive indices, a material property which determines the bending angle of light as it passes from one phase to another. But the effective index of refraction of the LC domains can be easily changed owing to the anisotropic shape of the molecules. If the liquid crystals are visualized as cigars, there is an average direction that the long axes possess within each domain, which differs from domain to domain. However, applying an electric field allows scientists to control the orientation of these cigars within the LC domains (i.e., reorienting the average direction of all domains to point in a single direction), thereby changing the effective refractive index. By proper selection of the polymer and LC, it is possible to match refractive indices, thereby allowing transmission (rather than scattering) of light. Films with variable, controllable scattering properties can be formed in this way.

Normal PDLC phase separation processes tend to create the LC domains randomly throughout the polymer host. The properties of the resulting materials are isotropic, or the same in every direction. Adding anisotropy to the phase separation problem was recently demonstrated in a mixture of liquid crystal and photopolymer.2 Applying two interfering laser beams created bright and dark regions in the monomer mixture. Monomers grow into polymers preferentially in the bright regions, forcing the liquid crystal molecules to diffuse towards the dark areas.

What can result from a carefully controlled anisotropic diffusion process is a structure with regularly repeating bands of nanometer-sized LC domains alternating with polymer bands. Figure 3 shows two examples of these gratings (0.55 µm periodicity), each consisting of different-size LC domains. Because the LC domains are smaller than visible light wavelengths in either case, these films do not scatter light. However, the grating spacings are such that incoming light can be bent (diffracted) to change direction in an orderly way. Specifically, they exhibit Bragg diffraction, which results from the regularly repeating submicron domains with different indices of refraction (polymer-rich bands versus LC-rich bands). Moreover, because one can still use an electric field to change the average LC direction within the domains, switchable diffractive optical elements have become feasible. Considerable success was demonstrated very recently in forming switchable transmission and reflection gratings2,3 with high diffraction efficiencies (causing more than 90% of the desired light to diffract), low angular selectivity (diffracting only light coming in within 1° of the desired angle), and fast switching speeds (millionths of a second).
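For a rough sense of the geometry, the Bragg condition for a thick grating, λ = 2nΛ sin θ_B, sets the diffraction angle. A minimal sketch using the 0.55 µm period quoted above; the probe wavelength and average film index are assumed values chosen only for illustration:

```python
import math

# Bragg condition for a thick grating: wavelength = 2 * n * period * sin(theta_B).
# The 0.55 um period is from the text; wavelength and index are assumptions.
def bragg_angle_deg(wavelength_um, period_um, n_avg):
    return math.degrees(math.asin(wavelength_um / (2 * n_avg * period_um)))

# e.g., a 633 nm He-Ne beam in a film of average index ~1.5:
print(bragg_angle_deg(0.633, 0.55, 1.5))  # ~22.5 degrees inside the medium
```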

A major advantage of this approach is that researchers can make optical elements easily in a one-step process: researchers begin with a uniform mixture of liquid crystal and monomer, place it on the desired surface, and, in a single step using laser light, create a film in situ with the desired optical properties. With these materials, researchers can make fiber-optic switching components, which can redirect incoming light to different output optical fibers. Complex switchable holographic elements have also been demonstrated4,5 wherein changing the electric field causes the holograms to appear and then disappear. Figure 4 shows the projected hologram of a half-dollar coin recorded using this one-step process.4

For this anisotropic phase separation process, one must account for the periodic migration, or diffusion, of liquid crystals from the bright regions to the dark regions. In conventional isotropic PDLC systems, there are two time scales to consider: one for droplet nucleation and another for subsequent droplet growth. The location of the holes within the cheese occurs randomly, with the number and size of the LC domains controlled by these two rates. For the anisotropic phase separation process described here, an additional time scale must be considered, namely, how long it takes the LC molecules to migrate from the bright to the dark areas. The relative balance of all three rates dictates the resulting structures that will form, which in turn determines the performance of optical components. This balance is critical in forcing the holes to occur only in the dark regions. Experimental results for isotropic PDLCs do not generically apply to these anisotropic systems, where small changes in the diffusion conditions (which affect the spacings of the bright and dark bands) lead to large changes in the resulting phase-separated structures.

Although considerable success in making initial switchable diffractive elements has been demonstrated, fundamentally there is still much to understand. The anisotropic nature of the diffusion process needs to be considered in current theoretical studies of the phase separation problem. The coupled time-dependent diffusion and photopolymerization processes must be treated with periodic boundary conditions. Initial work along these lines will appear shortly.6 The domains currently examined are an order of magnitude smaller than those typically found in scattering-based PDLCs. The domain structures, whose shape and interface are strongly dictated by the anisotropic polymerization process, are of a size scale not yet examined in any detail. The subtleties of the phase separation process have yet to be investigated in detail; the size scale of the LC domains prohibits the use of conventional light scattering techniques to explore growth rates. Although initial work appears promising regarding the utility of such structures for a variety of diffractive optics applications, the area is ripe for both theoretical and experimental studies which fundamentally explore the anisotropic phase separation problem.

FIGURE 3. Electron micrographs of materials consisting of polymers and liquid crystals (LCs) separated in a controlled fashion. Liquid crystal molecules start off by consolidating into droplets which form nanometer-scale domains of LC molecules. The left image (high magnification) reveals liquid crystal bands consisting of very small, nearly spherical LC domains (<100 nm) with little coalescence among the individual droplets. The right image reveals LC bands of larger (200–250 nm) domains. The size and shape of these LC domains have a drastic impact on the electrical and optical properties of the resulting material. Both images reveal that the LC bands are separated by polymer-rich regions with an overall spacing of 0.55 µm.

FIGURE 4. Hologram of a half-dollar coin recorded in a one-step process onto a phase-separated polymer/liquid crystal film.4 (Courtesy of Optics Letters.)

Timothy J. Bunning
Science Applications International Corporation
([email protected])

W. Wade Adams
Wright–Patterson Air Force Base
([email protected])

1. P. S. Drzaic, Liquid Crystal Dispersions (World Scientific, Singapore, 1995).
2. T. J. Bunning, L. V. Natarajan, V. P. Tondiglia, R. L. Sutherland, D. L. Vezie, and W. W. Adams, Polymer 36, 2699 (1995).
3. T. J. Bunning, L. V. Natarajan, V. P. Tondiglia, R. L. Sutherland, D. L. Vezie, and W. W. Adams, Polymer 37, 3147 (1996).
4. V. P. Tondiglia, L. V. Natarajan, R. L. Sutherland, T. J. Bunning, and W. W. Adams, Opt. Lett. 20, 1325 (1995).
5. L. H. Domash, Y. M. Chen, B. Gomatam, C. Gozewski, R. L. Sutherland, L. V. Natarajan, V. P. Tondiglia, T. J. Bunning, and W. W. Adams, Proc. SPIE 2689, 188 (1996).
6. X. Y. Wang, K. Yu, and P. L. Taylor, J. Appl. Phys. (in press).

VACUUM PHYSICS

Vacuum Microelectronic Devices

Vacuum microelectronics deals with submicron- and micron-sized devices and components whose operation involves field emission of electrons from a solid. Electrons tunnel quantum mechanically and then move along electric field lines through a vacuum.1 The first vacuum microelectronic device was demonstrated by Spindt et al. at SRI.2 Since then, the impetus for the development of vacuum microelectronic devices has been provided primarily by their applications as matrix-addressable electron sources for field emission flat-panel displays,3 and as compact and efficient cathodes for microwave power amplifier tubes.4

Field emitters are sharp metal or semiconductor tips surrounded by an integrated gate electrode which is electrically insulated from the electron-emitting material by a dielectric film. As an example, Fig. 1 shows a scanning electron microscope (SEM) picture of a section of a field emission array (FEA) recently fabricated in the Electronic Technologies Division of MCNC, a nonprofit corporation in North Carolina offering advanced electronic and information technologies and services. Looking like missiles in underground silos, the tip-on-post emitters shown in Fig. 1 are formed in monocrystalline silicon using silicon etching and oxidation sharpening techniques.5 The metallic gate electrode openings (with radii of less than 1 µm) are self-aligned with the emitter tips. When the gate electrode is biased to a positive potential, typically in the range of 10 to 100 V, a strong electric field enhanced by the sharp tip causes field emission of electrons from the emitter material.
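Field emission from such tips is conventionally described by the Fowler–Nordheim relation. The sketch below evaluates its elementary form (without image-charge corrections), using an assumed work function and tip field chosen only for illustration:

```python
import math

# Elementary Fowler-Nordheim current density
#   J = (A * F^2 / phi) * exp(-B * phi^1.5 / F),
# with F the surface field (V/m) and phi the work function (eV).
# A and B are the standard first and second FN constants; the example
# work function and field below are assumed illustrative values.
A = 1.54e-6   # A * eV / V^2
B = 6.83e9    # V / (m * eV^1.5)

def fn_current_density(field_v_per_m, work_function_ev):
    return (A * field_v_per_m**2 / work_function_ev) * \
           math.exp(-B * work_function_ev**1.5 / field_v_per_m)

# e.g., phi = 4.5 eV and a tip-enhanced field of 5e9 V/m:
print(fn_current_density(5e9, 4.5))  # J in A/m^2, of order 1e7
```

The strong exponential dependence on the field is why the sharp tip geometry, which locally enhances the field, matters so much.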

Cathodes for field emission displays comprise a matrix of FEAs, each array corresponding to one pixel or subpixel of the display. The required electron emission currents are in the modest range of 10–100 µA per array, which is readily achievable for gate voltages in the range of 10 to 50 V. On the other hand, cathodes for microwave power amplifier tubes need to exhibit emission currents on the order of 100 mA per array, with current densities at least in the A/cm² range. Achieving this level of performance requires the use of large (more than 1000 tips) arrays with tip packing densities on the order of ten million tips per cm², and average currents per tip on the order of 5–10 µA. At MCNC we are developing high-performance FEAs for a 10 GHz klystrode, which is a type of microwave amplifier tube.6 This work is being done in collaboration with Litton EDD, CPI, SRI, Varian Associates, and Lincoln Laboratory at MIT, with support from ARPA via NRL and NASA.
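The tube-cathode requirements quoted above can be checked with back-of-envelope arithmetic using only the numbers given in the text:

```python
# Back-of-envelope check of the array requirements quoted above,
# using only figures given in the text.
current_per_array = 100e-3   # 100 mA per array (microwave-tube cathodes)
current_per_tip = 5e-6       # 5 uA average per tip (lower end of 5-10 uA)
tip_density = 1e7            # ten million tips per cm^2

tips_needed = current_per_array / current_per_tip   # 20,000 tips
array_area_cm2 = tips_needed / tip_density          # 2e-3 cm^2
print(tips_needed, array_area_cm2)
```

Hence the "large (more than 1000 tips) arrays": tens of thousands of tips fit comfortably in a fraction of a square millimeter at these packing densities.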

As fabrication technologies for FEAs mature, with the scaling down of device geometries achieving the desired submicron level, more and more effort in the vacuum microelectronics community is being expended on developing a better understanding of phenomena affecting the long-term reliability of the devices in environments dictated by specific applications, the stability of the emission current, and the enhancement of the electron current via modifications of the emitter surface. These phenomena are affected by physical and chemical aspects of the emitting surfaces, such as the microscopic structure and geometry of the tips, adsorption and diffusion phenomena, cathode bombardment by ions generated as a result of collisions of electrons with ambient molecules, emission noise, and cathode-initiated vacuum breakdown.7 Many of these surface-related issues remain to be investigated, and some others probably have yet to be fully identified. The level of research in this area is increasing and is likely to pay off in increased performance, yield, and reliability of vacuum microelectronic devices, some of which are near commercialization.

FIGURE 1. Section of an array of field emitters manufactured at MCNC using monocrystalline silicon as the emitter material.

Dorota Temple
Dev Palmer
MCNC
([email protected])

1. I. Brodie and P. R. Schwoebel, Proc. IEEE 82, 1006 (1994).
2. C. A. Spindt, J. Appl. Phys. 39, 3504 (1968).
3. R. Meyer, A. Ghis, P. Rambaud, and F. Muller, Proceedings of the First International Vacuum Microelectronics Conference, Williamsburg, VA, 1988 (unpublished).
4. R. K. Parker and R. H. Abrams, IEDM Tech. Dig., December 1990, pp. 967–970.
5. D. Temple, C. A. Ball, W. D. Palmer, L. N. Yadon, D. Vellenga, J. Mancusi, G. E. McGuire, and H. F. Gray, J. Vac. Sci. Technol. B 13, 150 (1995).
6. D. Palmer, J. Shaw, R. B. True, J. Mancusi, D. Temple, D. Vellenga, L. Yadon, H. F. Gray, T. A. Hargreaves, and R. S. Symons, 1996 IEEE International Conference on Plasma Science, Boston, MA, 1996 (unpublished).
7. P. R. Schwoebel and I. Brodie, J. Vac. Sci. Technol. B 13, 1391 (1995).

PHYSICS NOBEL PRIZE

The 1996 Nobel Prize for physics recognizes the discovery of superfluid helium-3. David Lee and Robert Richardson of Cornell and Douglas Osheroff of Stanford, working at Cornell in the early 1970s, had to chill their helium-3 sample to a temperature of only about 2 mK before it transformed into a superfluid, a special liquid state of matter which can flow without viscosity.

Superfluidity in the two helium isotopes is very different, a difference that stems from the fact that He-4, which consists of two electrons and a nucleus containing two protons and two neutrons, is a boson, while He-3, which consists of two electrons and a nucleus containing two protons and only one neutron, is a fermion.1 In He-4, the superfluid state is essentially a Bose–Einstein condensation of He atoms into a single quantum state. In contrast, the He-3 superfluid state consists of a condensation of pairs of atoms, somewhat analogous to the pairing of electrons in low-temperature superconductivity. The discovery of superfluidity in He-4, at the much warmer temperature of 2 K, occurred in 1938.2

Furthermore, because its constituents (pairs of atoms) are magnetic and possess an internal structure, the He-3 superfluid is more complex than its He-4 counterpart. Indeed, superfluid He-3 exists in three different forms (or phases) related to different magnetic or temperature conditions. In one of these phases, the A phase, the superfluid is highly anisotropic; that is, it is directional, somewhat like a liquid crystal. To put it another way, this phase of He-3 (unlike He-4) has texture. This property was exploited in a recent experiment3 in which vortices set in motion within a He-3 sample simulated the formation of topological defects ("cosmic strings") in the early universe.

Another notable experiment in recent years was the verification (by Douglas Osheroff) of the "baked Alaska" model.4 This theory, formulated by Anthony Leggett of the University of Illinois, explains the somewhat piecemeal transition from the A phase of superfluid He-3 into the lower-temperature B phase by supposing that B-phase droplets can be nucleated within the supercooled A phase by the ionizing energy of passing cosmic rays.

Phillip F. Schewe
AIP
([email protected])

1. N. D. Mermin and D. M. Lee, Sci. Am. 235, 56 (December 1976).
2. R. J. Donnelly, Physics Today 47, 30 (July 1995).
3. C. Bauerle et al., Nature 382, 332 (1996); V. M. H. Ruutu et al., ibid. 382, 334 (1996).
4. P. Schiffer et al., Phys. Rev. Lett. 69, 120 (1992).


ANNOUNCING
CENTENARY CELEBRATION OF THE AMERICAN PHYSICAL SOCIETY

In 1999 the March and April meetings of the APS will be joined together with several divisional meetings into a joint APS Centennial meeting.

JOIN US AND CELEBRATE PHYSICS!

We have changed the world in the 20th century and will continue to forge the future.

MARCH 20–26, 1999
ATLANTA, GA, USA

See the Centennial web page for updated information: http://www.aps.org/centenary/