
Emerging Complexity in Physics

How does Physical Complexity arise from Basic Particles and Simple Principles?

Ronald L. Westra
Department of Mathematics, Universiteit Maastricht

1. Simple versus Complex

The word ‘complexity’ has an almost mythical reverberation. In everyday experience complexity is mostly synonymous with concepts like difficult, problematic, and tough, and stands directly opposite to ‘simplicity’. It is the very thing we try hard to avoid in our daily existence. What we mean by ‘complexity’ nobody knows precisely, as the concept itself is fuzzy and relative, differing from person to person and between theoretical and practical areas.

In science the concept of complexity in some cases assumes a more tangible meaning. In the mathematical context complexity often correlates with the notion of computability. For instance, in mathematics there exists complexity theory, which contains a quantity called the combinatorial complexity of an algorithm that is precisely defined in terms of algorithmic iterations and the size of the problem. In computer science there is an objective measure of the complexity of an object, provided by the Kolmogorov-Chaitin complexity, which quantifies the number of symbols required to specify an object in any conceivable formal language. In natural sciences like physics the situation is unfortunately less clear, as

the concept of complexity is not so precisely defined. Rather than a concise definition, in physics we can provide characteristics of, and conditions for, complexity, and associate it with certain circumstances. Understanding complexity also means studying simplicity. How does complexity originate? Can complexity evolve from simplicity? Can we distinguish order in, and laws for, complexity? It is these and related questions that we will study in this course, highlighted by – hopefully – interesting examples from different areas of physics.

Fig. 1: Intrinsic Complexity?

We will classify complexity along different gauges. One such gauge is intrinsic versus extrinsic complexity. If we compare the physical world of interacting elementary particles to a classical theater with various dramatis personae, ‘intrinsic’ refers to the complexity of one specific player or of the plot of the play, and ‘extrinsic’ refers to the complexity that stems from increasing the number of actors involved in our drama. The latter reflects the duality of the one versus the many. Can complex systems evolve merely from increasing the number of effectively simple agents?

Research Challenge: Compare this with neural networks. Here we have ‘simple’ neurons with specific rules for (not) transmitting electric signals. However, when multiple neurons interact, their complex interactions create a new level of reality: complexity emerges from many local neural interactions. Thoughts are emergent properties of a neural network. What are the emergent properties of an assemblage of interacting identical particles in vacuo?

A second question concerns the (non-)linearity of the system. Consider a system in which any combination of subsystems behaves in exactly the same way, and any linear scaling results in a similar system. Equally, consider systems whose time evolution results in similar systems. If the similarity relation involved is linear – we will later specify this concept in more detail – the system is open to mathematical scrutiny, hence to understanding, hence to predictability. More precisely stated: linear systems are systems that obey the so-called superposition principle; any two solutions of the system can be added together to form a valid new solution. In contrast, two solutions of a nonlinear equation cannot in general be added together to form another solution.
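To make the superposition principle tangible, here is a minimal numerical sketch; the two example equations (linear x' = −x, nonlinear x' = −x²), the Euler integrator, and the step sizes are illustrative choices, not taken from the text:

```python
def integrate(f, x0, dt=1e-4, steps=20000):
    """Integrate x' = f(x) from x0 with the explicit Euler method."""
    x = x0
    for _ in range(steps):
        x += dt * f(x)
    return x

linear = lambda x: -x          # linear system: obeys the superposition principle
nonlinear = lambda x: -x ** 2  # nonlinear system: violates it

for f, name in [(linear, "linear"), (nonlinear, "nonlinear")]:
    a = integrate(f, 1.0)         # solution starting from initial state 1.0
    b = integrate(f, 2.0)         # solution starting from initial state 2.0
    ab = integrate(f, 1.0 + 2.0)  # solution starting from the summed initial state
    print(f"{name}: sum of solutions = {a + b:.4f}, solution of sum = {ab:.4f}")
```

For the linear system the two printed numbers agree (both ≈ 0.41); for the nonlinear system they do not, which is exactly the failure of superposition just described.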

Research Challenge: Kolmogorov-Chaitin complexity: What does it mean that complexity is asymptotically independent of the language in which it is described?
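Kolmogorov-Chaitin complexity is uncomputable, but one can build intuition for it with a general-purpose compressor, whose output length is a crude upper bound on the description length; the use of zlib and the particular strings are illustrative assumptions, not part of the text. The invariance behind the research challenge is that switching description language changes the complexity only by an additive constant, independent of the object.

```python
import random
import zlib

def description_length(s: bytes) -> int:
    """Compressed size in bytes: a computable upper bound on the description length."""
    return len(zlib.compress(s, 9))

regular = b"ab" * 5000                     # highly regular 10000-byte string
noisy = random.Random(0).randbytes(10000)  # pseudo-random 10000-byte string

print(description_length(regular))  # small: the pattern admits a short description
print(description_length(noisy))    # close to 10000: no shorter description is found
```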

A third gauge for understanding complexity is the guiding principles nature employs. These concern the laws physical systems obey in their evolution towards complexity. Examples of such principles were introduced in the course ‘Modeling Nature’. They include the Principle of Self-Organization and Self-Organized Criticality. More specific to nature is the principle of Conservation Laws. This profound and guiding principle sets physics (and chemistry) apart from other disciplines such as biological and economic systems. It is valid on all levels, for individual particles and large ensembles. Some conservation laws only concern assemblages of particles, like the second law of thermodynamics, which you encountered in the course ‘Elements of Physics’. This principle states that the entropy of a system always increases. For physical systems, entropy relates to the uncertainty or chaos in the system, or alternatively to the amount of information required to describe the system. One specific conserved quantity, or rather quality, is so distinguished that it can be seen as a separate guiding principle: the principle of Symmetry. For specific combinations of particle properties, fields, and geometric operations, symmetry (or sometimes antisymmetry) is a conserved property (for instance electromagnetic charge and point reflection). Conserved properties are called invariants, and they themselves can be a guideline for classifying particles. For instance, if we consider the quantum wave function of two identical particles, we find that particles fall into just two categories. For particles called Bosons, the wave function is fully symmetric under interchange of the pair of particles, whereas for particles called Fermions the wave function is antisymmetric under this operation.

Research Challenge: Why does physics lend itself so well to mathematical description, compared to many other sciences?

Reading: R.P. Feynman, The Character of Physical Law, Chapter 2

Research Challenge 2: [Figure: the hurricane Isabelle, September 2003.] Argue how this complex phenomenon ultimately arises from basic interactions between the molecules in the air. Describe what mechanisms drive the hurricane on each perceivable level.

Reading:
• Philip Ball, “The Self-made Tapestry”, Chapter 10: “Principles of Pattern Formation”
• G. Nicolis, I. Prigogine, “Exploring Complexity”, Chapter 1


2. The Character of Physical Complexity

Physical complexity is based on the one hand on reductionistic principles, i.e. how we can compose complex systems from numerous simple agents, and on the other hand on the character of the fundamental physical laws. Here considerations like Symmetry, Entropy, Probability, and Uncertainty are the prime concepts. Following Richard Feynman’s outstanding “The Character of Physical Law”, we look in more detail at the principle of Conservation Laws.

Reading:

• Richard Feynman, “The Character of Physical Law”, Chapter: “Conservation Principles”

3. Nonlinear Phenomena

Complex spatial or temporal patterns emerge when simple systems are driven from equilibrium in ways that cause them to undergo instabilities. In this lecture we explore the differences between linear and non-linear systems, and their relevance for intrinsic, non-reductionistic complexity in Physics. Key items are:

• Chaos and Fractals (see the sketch below)
• Solitons, Wavelets and Particles
• Complex Patterns
• Entropy
• Percolation
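As a concrete taste of the first key item, a minimal sketch; the logistic map and the parameter r = 4 are standard textbook choices for chaos, not examples from these notes. Two nearly identical initial conditions separate until they are completely uncorrelated:

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r x (1 - x), a canonical chaotic system."""
    return r * x * (1.0 - x)

x, y = 0.200000, 0.200001  # two almost indistinguishable initial conditions
for n in range(1, 41):
    x, y = logistic(x), logistic(y)
    if n % 10 == 0:
        print(f"step {n:2d}: |x - y| = {abs(x - y):.6f}")
```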

Reading:

• D. Campbell, Introduction to Nonlinear Phenomena, in Lectures of Complexity (Stein, Ed), pp. 5 – 23.

• Philip Ball, “The Self-made Tapestry”, Chapter 10: “Principles of Pattern Formation”


4. Many Particle Systems

In this lecture we consider physical systems that consist of a great number of particles belonging to only a few particle families. Consider, for instance, a gas consisting of 1 gram of Helium. This is approximately ¼ mole, so about 1.505 × 10²³ He atoms. Though we have a really large number of particles, only one type of particle – the He atom – is present. Such systems are well suited for theoretical and empirical studies, as the basic constituents are simple, and the fundamental interactions can be directly studied and computed. Here we will consider a few examples, and study how macroscopic properties emerge from microscopic interactions.

Case 1: The Ideal Gas Law of Boyle and Gay-Lussac

Consider the above gas in a sufficiently large container. Let us for simplicity assume that all He atoms have the same velocity in a given x-direction: v_x. This is of course wrong, but it will help us to deduce a macroscopic characteristic of this system. Consider the atoms that will collide with a given portion A of the interior wall of the container in the next dt seconds. The number of atoms that will collide with A during dt is equal to the number of atoms that are within a cylinder with base A and length v_x·dt, and are directed towards area A. Let N represent the number of atoms in the container (in our case N = 1.505 × 10²³), and V its entire volume. Then the number of atoms in the cylinder is (N/V)·A·v_x·dt. On average half of the He atoms will move towards A, and the other half away from A. The mean number of collisions of He atoms with A is therefore ½·(N/V)·A·v_x·dt. A He atom that collides with A transfers to A a momentum that equals

Δp = m·v_x − (−m·v_x) = 2·m·v_x

with m the mass of one He atom. Therefore the total momentum dp received by A is

dp = (number of collisions) × (momentum transfer) = ½·(N/V)·A·v_x·dt · 2·m·v_x = N·A·m·v_x²·dt / V

So the rate of change of momentum experienced by A is

dp/dt = N·A·m·v_x² / V

According to Newton’s second law, this equals the force exerted by part A of the wall on the atoms:

dp/dt = F_wall

According to Newton’s third law, this force equals in size the force F exerted by the atoms on area A: F = F_wall. Using the expression for the pressure P as the force F exerted on an area A, P = F/A, we can write

P = F/A = N·m·v_x² / V

We can equate the velocity v_x to the average velocity in the x-direction. It is clear that the average velocities in all directions will be equal: v_x = v_y = v_z. According to Pythagoras, the size of the average velocity satisfies

v² = v_x² + v_y² + v_z² = 3·v_x²


So:

v_x² = v² / 3

and we obtain:

PV = N·m·v² / 3

Notice that this expression contains the average kinetic energy of a He atom, ½·m·v². We can compare the above result with the ideal gas law for this He gas, obtained by Boyle in the seventeenth and Gay-Lussac in the early nineteenth century:

PV = n·R·T

with n the number of moles (n = N/N_A, where N_A is Avogadro’s number) and R the ideal gas constant, R = 8.31 J/(mol·K). Comparing with the above result, we find that the average kinetic energy of a He atom equals

½·m·v² = (3/2)·k·T

where k = R/N_A is the constant of Boltzmann: k = 1.38 × 10⁻²³ J/K. What does this extensive calculation teach us? We have obtained the macroscopic deterministic behavior of a tremendous number of atoms from very simple considerations based on a typical, average atom. Yet the result is so exact that it fits very well with empirical observations. Moreover, we learn that observables that are relevant on the microscopic level (such as the velocity of an atom) lose their importance on the macroscopic level (where temperature and pressure take over). Also, we find that properties on the macroscopic level emerge from the microscopic behavior. The concept of temperature, for instance, is an ensemble parameter that loses its significance on the microscopic level. Finally, we find correspondences between the microscopic and the macroscopic level, such as the relation between temperature and the average velocity of an atom.
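This relation can be evaluated directly; a minimal sketch, where the mass of a He atom (about 4 atomic mass units) is a standard value assumed here, not given in the text:

```python
import math

k = 1.38e-23     # Boltzmann constant in J/K, as in the text
m_He = 6.64e-27  # mass of one He atom in kg (approx. 4 u; standard value, assumed)

def v_rms(T):
    """Root-mean-square speed from (1/2) m v^2 = (3/2) k T."""
    return math.sqrt(3 * k * T / m_He)

print(f"v_rms of He at T = 100 K: {v_rms(100.0):.0f} m/s")  # roughly 0.8 km/s
```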

Assignment: Compute the average velocity of a Helium atom at room temperature.

Assignment: Consider a container filled with Helium at room temperature. Imagine that we put the container in our car and drive at a constant velocity of 50 km/h. The average velocity of the gas atoms has now increased by 50 km/h; does that mean that the temperature of the gas has increased?

The above computation is a very crude and simplified example of Statistical Physics. In statistical physics, macroscopic ensemble quantities (such as temperature and entropy) are calculated from the basic characteristics of the particles and their interactions. With Statistical Physics the laws of Thermodynamics can be derived from such microscopic considerations.
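In that spirit, a minimal statistical-physics sketch: sample microscopic x-velocities and insert them into the kinetic formula P = N·m·v_x²/V derived above. The Maxwellian (Gaussian) form of the velocity distribution and the one-liter volume are assumptions for illustration, not from the text.

```python
import math
import random

k, T, m = 1.38e-23, 300.0, 6.64e-27  # SI units; the He mass is a standard value (assumed)
N, V = 1.505e23, 1.0e-3              # the text's 1 g of He, in an assumed 1-liter volume

rng = random.Random(1)
sigma = math.sqrt(k * T / m)         # Maxwell-Boltzmann: v_x is Gaussian with variance kT/m
vx2 = sum(rng.gauss(0.0, sigma) ** 2 for _ in range(200_000)) / 200_000

print(f"kinetic   P = N m <v_x^2> / V = {N * m * vx2 / V:.3e} Pa")
print(f"ideal gas P = N k T / V       = {N * k * T / V:.3e} Pa")
```

The two pressures agree to within sampling error: the macroscopic observable emerges from nothing but the statistics of the microscopic velocities.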


Case 2: The Phases of Matter

Now we will study in some detail how the macroscopic state we commonly call a phase emerges from basic microscopic properties.

Phases of matter

A phase is a set of states of a macroscopic physical system that have relatively uniform chemical composition and physical properties (i.e. density, crystal structure, index of refraction, and so forth). The most familiar examples of phases are solids, liquids, and gases. Less familiar phases include the paramagnetic and ferromagnetic phases of magnetic materials. Phases are sometimes called states of matter, but this term can lead to confusion with thermodynamic states. For example, two gases maintained at different pressures are in different thermodynamic states, but the same “state of matter”.

Although phases are conceptually simple, they are hard to define precisely. In “Elements of Physics” we encountered the thermodynamic quantity Free Energy. A good definition of a phase of a system is a certain well-defined region in the parameter space of the system’s thermodynamic variables in which the free energy is analytic. ‘Analytic’ here means that it is well-behaved as a function of the properties of the system, such as mass, size, electric charge, spin, and so on. Equivalently, two states of a system are in the same phase if they can be transformed into each other without abrupt changes in any of their thermodynamic properties.

All the thermodynamic properties of a system – the entropy, heat capacity, magnetization, compressibility, and so forth – may be expressed in terms of the free energy and its derivatives. For example, the entropy is simply the first derivative of the free energy with respect to temperature. As long as the free energy remains analytic, all the thermodynamic properties will be well-behaved.

When a system goes from one phase to another, there will generally be a stage where the free energy is non-analytic. This is known as a phase transition. Familiar examples of phase transitions are melting (solid to liquid), freezing (liquid to solid), boiling (liquid to gas), and condensation (gas to liquid). Due to this non-analyticity, the free energies on either side of the transition are two different functions, so one or more thermodynamic properties will behave very differently after the transition. The property most commonly examined in this context is the heat capacity. During a transition, the heat capacity may become infinite, jump abruptly to a different value, or exhibit a “kink” (i.e. a discontinuity in its derivative).

[Figure: possible graphs of heat capacity (C) against temperature (T) at a phase transition.]

In practice, each type of phase is distinguished by a handful of relevant thermodynamic properties, out of all the possible properties one could imagine. For example, the distinguishing feature of a solid is its rigidity: unlike a liquid or a gas, a solid does not easily change its shape. Liquids are distinct from gases because they have much lower compressibility: a gas placed in a large container expands to fill the container, whereas a liquid forms a puddle in the bottom of the container. Not all the properties of solids, liquids, and gases are distinct; for example, it is not useful to compare their magnetic properties. On


the other hand, the ferromagnetic phase of a magnetic material is distinguished from the paramagnetic phase by the presence of bulk magnetization without an applied magnetic field.

Emergence and universality

Phases are emergent phenomena produced by the self-organization of a macroscopic number of particles. Typical samples of matter, for example, contain around 10²³ particles (Avogadro’s number). In systems that are too small – even, say, a thousand atoms – the distinction between phases disappears, since the appearance of non-analyticity in the free energy requires a huge, formally infinite, number of particles to be present. One might ask why real systems exhibit phases, since they are not actually infinite. The reason is that real systems contain thermodynamic fluctuations. When a system is far from a phase transition, these fluctuations are unimportant, but as it approaches a phase transition, the fluctuations begin to grow in size (i.e. spatial extent). At the ideal transition point, their size would be infinite, but before that can happen the fluctuations will have become as large as the system itself. In this regime, “finite-size” effects come into play, and we are unable to accurately predict the behavior of the system. Thus, phases in a real system are only well-defined away from phase transitions, and how far away they need to be depends on the size of the system.

There is a corollary to the emergent nature of phase phenomena, known as the principle of universality. The properties of phases are largely independent of the underlying microscopic physics, so that the same types of phases arise in a wide variety of systems. This is a familiar fact of life. We know, for example, that the property that defines a solid – resistance to deformation – is exhibited by materials as diverse as iron, ice, and silly putty. The only differences are matters of scale. Iron may resist deformation more strongly than silly putty, but both maintain their shape if the applied forces are not too strong.

Phase diagrams

The different phases of a system may be represented using a phase diagram. The axes of the diagrams are the relevant thermodynamic variables. For simple mechanical systems, we generally use the pressure and temperature.

A typical phase diagram.


The figure above shows a phase diagram for a typical material exhibiting solid, liquid and gaseous phases. The markings on the phase diagram show the points where the free energy is non-analytic. The open spaces, where the free energy is analytic, correspond to the phases. The phases are separated by lines of non-analyticity, where phase transitions occur; these are called phase boundaries.

In this diagram, the phase boundary between liquid and gas does not continue indefinitely. Instead, it terminates at a point on the phase diagram called the critical point. This reflects the fact that, at extremely high temperatures and pressures, the liquid and gaseous phases become indistinguishable. In water, the critical point occurs at around 647 K (374°C/705°F) and 22.064 MPa.

The existence of the liquid-gas critical point reveals a slight ambiguity in our above definitions. When going from the liquid to the gaseous phase, one usually crosses the phase boundary, but it is possible to choose a path that never crosses the boundary by going to the right of the critical point. Thus, phases can sometimes blend continuously into each other. We should note, however, that this does not always happen. For example, it is impossible for the solid-liquid phase boundary to end in a critical point in the same way as the liquid-gas boundary, because the solid and liquid phases have different symmetry.

An interesting thing to note is that the solid-liquid phase boundary in the phase diagram of most substances, such as the one shown above, has a positive slope. This is due to the solid phase having a higher density than the liquid, so that increasing the pressure raises the melting temperature. In the phase diagram of water, however, the solid-liquid phase boundary has a negative slope. This reflects the fact that ice has a lower density than liquid water, which is an unusual property for a material.

Polymorphism

Many substances can exist in a variety of solid phases, each corresponding to a unique crystal structure. These varying crystal phases of the same substance are called polymorphs. Diamond and graphite are examples of polymorphs of carbon. Graphite is composed of layers of hexagonally arranged carbon atoms, in which each carbon atom is strongly bound to three neighboring atoms in the same layer and weakly bound to atoms in the neighboring layers. By contrast, in diamond each carbon atom is strongly bound to four neighboring carbon atoms in a cubic array. The unique crystal structures of graphite and diamond are responsible for the vastly different properties of these two materials.

Each polymorph of a given substance is usually only stable over a specific range of conditions. For example, diamond is only stable at extremely high pressures. Graphite is the stable form of carbon at normal atmospheric pressures. Although diamond is not stable at atmospheric pressures and should transform to graphite, we know that diamonds exist at these pressures. This is because at normal temperatures the transformation from diamond to graphite is extremely slow. If we were to heat the diamond, the rate of transformation would increase and the diamond would become graphite. At normal temperatures, however, a diamond can persist for a very long time. Non-equilibrium phases like diamond that exist for long periods of time are said to be metastable.

Another important example of metastable polymorphs occurs in the processing of steel. Steels are often subjected to a variety of thermal treatments designed to produce various combinations of stable and metastable iron phases. In this way steel properties such as hardness and strength can be adjusted by controlling the relative amounts and crystal sizes of the various phases that form.


Phase separation

Different parts of a system may exist in different phases, in which case the phases are usually separated by boundary surfaces. Gibbs’ phase rule (stated explicitly below) describes the number of phases that can be present at equilibrium for a given system at various conditions. The phase rule indicates that for a single-component system at most three phases (usually gas, liquid and solid) can coexist in equilibrium. The three phases can all coexist only at a single specific temperature and pressure, characteristic of the material, called the triple point. The conditions where two phases become indistinguishable are called a critical point. The phase rule also indicates that two phases can only coexist at equilibrium for specific combinations of temperature and pressure. For example, for a liquid-gas system, if the vapor pressure is lower than that corresponding to the temperature, the system will not be at equilibrium; rather, the liquid will tend to evaporate until the vapor pressure reaches the appropriate level, or all of the liquid is consumed. Likewise, if the vapor pressure is too great for the given temperature, condensation will occur.

For the case of multi-component systems the phase rule indicates that additional phases are possible. A common example of this occurs in mixtures of mutually insoluble substances such as water and oil. If a few drops of oil are poured into pure water, there will be a small amount of intermixing, but there will be two distinct phases: one primarily oil and the other primarily water. The exact composition of the phases will be a function of the temperature and pressure but not of the amount of oil. It may be possible to change the temperature such that one of the phases disappears: for example, if the mixture is heated, it is possible that at some temperature all of the oil is dissolved in the water. Above this temperature there is only one phase, and the composition of that phase does depend on how much oil was put in.

Phase separation can also exist in two dimensions. The boundaries between phases, the surfaces of materials, and the grain boundaries between different crystallographic orientations of a single material can all show distinct phases. For example, surface reconstructions on metal and semiconductor surfaces are two-dimensional phases.

Phase transition

A phase transition is the transformation of a thermodynamic system from one phase to another. The distinguishing characteristic of a phase transition is an abrupt change in one or more physical properties, in particular the heat capacity, under a small change in a thermodynamic variable such as the temperature. Examples of phase transitions are: the transitions between the solid, liquid, and gaseous phases (boiling, melting, etc.), the transition between the ferromagnetic and paramagnetic phases of magnetic materials at the Curie point, and the emergence of superconductivity in certain metals when cooled below a critical temperature.
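For reference, Gibbs’ phase rule invoked above can be stated explicitly (a standard result of thermodynamics, not spelled out in these notes):

F = C − P + 2

where C is the number of components, P the number of coexisting phases, and F the number of degrees of freedom (independently variable intensive parameters). For a single-component system (C = 1): with P = 3 we get F = 1 − 3 + 2 = 0, so three phases coexist only at one unique temperature and pressure, the triple point; with P = 2 we get F = 1, coexistence along a line in the (T, P) plane, the phase boundary.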


Case 3: Semiconductors and the Free Electron Fermi Gas in Crystals

We will now study the solid phase of matter in more detail. In the solid phase, the individual atoms or molecules are bound in an overall potential that results from their cumulative force fields. This “mean field” fastens the individual molecules to a fixed position. When this ordering is only local, we have an amorphous material, such as glass or clay (disordered solids). When the order is high, we have a crystal, like metals and minerals (ordered solids).

An example of a mineral is ruby, with chemical formula Al₂O₃. This mineral is built from one constructive element, the unit cell, which in the case of ruby consists of 2 Al ions and 3 O ions. This unit cell is repeated ad nauseam in all three directions, and thus a pure crystal of ruby is obtained. For this reason crystals are very ordered and regular objects; they can be seen as a large repetition of the fundamental unit cell. The unit cell is the orderly spatial arrangement of atoms in the molecular structure of a crystal. The simplest and most symmetrical, the isometric system, is represented by the cube. There are six other systems (in order of decreasing symmetry: hexagonal, tetragonal, trigonal, orthorhombic, monoclinic and triclinic). A crystal structure is completely described by its unit cell lattice parameters, its space group symbol, and the positions of the atoms that, when operated on by the essential rotational and translational symmetry operations, generate the entire contents of the unit cell.

Metals and conductors: the free electron Fermi gas

We can understand a number of important physical properties of metals, particularly the simple metals (like the alkali metals Li, Na, K, Cs, and Rb), in terms of the free electron model. According to this model the most weakly bound electrons of the constituent atoms move about freely through the volume of the metal. The valence electrons of the atoms become the conduction electrons. Forces between conduction electrons and ion cores are neglected in this approximation. The usefulness of this model is greatest for experiments that depend essentially upon the kinetic properties of the conduction electrons. Conduction electrons in a simple metal arise from the valence electrons of the constituent atoms. In a Na atom the valence electron is in a 3s state; in the metal this electron becomes a conduction electron, roving throughout the metal. Such a wandering electron can be seen as hopping from one Na atom to the next, acting temporarily as its valence 3s electron. A conduction electron is scattered only infrequently by other conduction electrons. This property is a consequence of the so-called Pauli exclusion principle¹. By a free electron Fermi gas, we mean a gas of free and noninteracting electrons subject to this Pauli principle.

¹ The Pauli exclusion principle is a quantum mechanical principle which states that no two identical fermions may occupy the same quantum state. Formulated by Wolfgang Pauli in 1925, it is also referred to as the “exclusion principle” or “Pauli principle”. The Pauli principle only applies to fermions, particles which form antisymmetric quantum states and have half-integer spin. Fermions include protons, neutrons, and electrons, the three types of particles which constitute ordinary matter. The Pauli exclusion principle governs many of the distinctive characteristics of matter. Particles like the photon and graviton do not obey the Pauli exclusion principle, because they are bosons (i.e. they form symmetric quantum states and have integer spin) rather than fermions. The Pauli exclusion principle plays a role in a huge number of physical phenomena. One of the most important, and the one for which it was originally formulated, is the electron shell structure of atoms. An electrically neutral atom contains bound electrons equal in number to the protons in the nucleus. Since electrons are fermions, the Pauli exclusion principle forbids them from occupying the same quantum state.

Bragg reflection of electrons

Let us now consider the reflection of free electrons by a crystal. According to quantum mechanics, as we have learned in ‘Elements of Physics’, we may represent free electrons as planar waves. Consider a planar wave of wavelength λ incident on a crystal surface under an angle θ. Let the spacing between the layers of the crystal be d. In the figure below we see that waves will interfere constructively when the difference in path length is a multiple of the wavelength. This is the so-called Bragg condition for reflection of the waves: only those waves will be reflected that obey:

2d sinθ = nλ

[Figure: Bragg reflection geometry – two parallel crystal planes a distance d apart, with waves incident and reflected at angle θ; the extra path length is d sinθ on each leg, giving a total path difference of 2d sinθ.]
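A minimal numerical sketch of the Bragg condition; the lattice spacing and electron wavelength below are illustrative values, not taken from the text:

```python
import math

d = 3.0e-10    # lattice plane spacing in m (illustrative)
lam = 1.5e-10  # electron wavelength in m (illustrative)

# Solve 2 d sin(theta) = n lambda for each reflection order n.
for n in range(1, 6):
    s = n * lam / (2 * d)
    if s <= 1.0:  # a reflection exists only if sin(theta) <= 1
        print(f"order n = {n}: theta = {math.degrees(math.asin(s)):.1f} deg")
    else:
        print(f"order n = {n}: no reflection (n lambda > 2d)")
```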

Energy bands of electrons

The free electron model of metals gives us good insight into the heat capacity etc. of metals. But the model fails to help us with other large questions, such as the distinction between metals, semimetals, conductors and insulators. We need a more detailed look at the interaction between the free electrons and the ions fixed in the crystal lattice. This extension of the model already improves the macroscopic picture considerably.

The difference between a good conductor and a good insulator is striking. The electrical resistivity of a pure metal may be as low as 10⁻¹² Ω·m at 1 K. The resistivity of a good insulator may be as high as 10²⁰ Ω·m. This range of 10³² may be the widest of any physical property of solids. Every solid contains electrons. The important question is how the electrons respond to an applied electric field. Electrons in crystals are arranged in energy bands separated by regions in energy for which no wavelike electron orbitals exist. Such forbidden regions are called energy gaps or band gaps. These bands are the direct result of the interaction between the conduction electron waves and the ion cores of the crystal. The crystal behaves as an insulator if the allowed energy bands are either filled or empty, for then no electrons can move in an electric field. The crystal behaves as a metal if one or more bands are partly filled. The crystal is a semiconductor or a semimetal if all bands are entirely filled, except for one or two bands that are slightly filled or slightly empty. To understand the difference between insulators and conductors, we must extend the free electron model to take account of the periodic lattice of the solid. The possibility of a band gap is the most important new property that emerges.

Valence bands and semiconduction

In the free electron model the allowed energy values are distributed continuously from zero to infinity. The band structure of a crystal can often be described by the nearly free electron model, in which the band electrons are treated as only weakly perturbed by the periodic potential of the ion cores. The band structure can be explained in this model if we take into account this interaction between the free electron Fermi gas and the ion core lattice. We saw earlier that Bragg reflection is a characteristic of wave propagation in crystals. Bragg reflection of electron waves is the cause of the energy gaps: there arise substantial regions of energy in which wavelike solutions to the Schrödinger equation do not exist, as in the figure below. These energy gaps are of decisive significance in determining whether a solid is an insulator or a conductor. Let us consider the origin of energy gaps in the simple case of a linear solid of lattice constant a. The low-energy portions of the band structure are shown in the figure below. The upper panel depicts the energy of entirely free electrons, and the lower panel shows electrons that are nearly free, but encounter an energy gap, due to Bragg reflection, at reciprocal wavelengths (wave numbers) of k = 2π/λ = ±π/a.

[Figure: electron energy versus wave number k (reciprocal wavelength). Upper panel: the free electron energy, a smooth parabola centered on k = 0. Lower panel: the nearly free (bound) electron energy, showing a first band, a forbidden band (the energy gap opened by Bragg reflection at k = ±π/a), and a second band.]


Valence bands

The valence band is the highest range of electron energies where electrons are normally present at zero temperature. In semiconductors and insulators there is a band gap above the valence band, followed by a conduction band above that. In metals, the conduction band is the valence band.

Semiconductors and insulators owe their low conductivity to the properties of the valence band in those materials. It just so happens that the number of electrons is precisely equal to the number of states available up to the top of the valence band. There are no available states in the band gap. This means that when an electric field is applied, the electrons cannot increase their energy (i.e. accelerate), because there are no states available in which they would be moving faster than they already are. There is some conductivity in insulators, however. This is due to thermal excitation: some of the electrons get enough energy to jump the band gap in one go. Once they are in the conduction band, they can conduct electricity, as can the hole they left behind in the valence band. The hole is an empty state which allows electrons in the valence band some degree of freedom.

Note: it is a common misconception to refer to electrons in insulators as “bound”, as if they were somehow attached to the nucleus and couldn’t move. Electrons in insulators are quite free to move; in fact they move at speeds on the order of 100 km (60 mi) per second! They are also delocalised, having no well-defined position within the sample. It is just that for every electron moving left, there is another moving right; all the velocities cancel out precisely, leaving no overall current.


Electron hole

An electron hole is the absence of an electron from the otherwise full valence band. A full (or nearly full) valence band is present in semiconductors and insulators. The concept of a hole is essentially a simple way to analyze the movement of large numbers of electrons in such substances.

Hole conduction can be explained by the use of the following analogy. Imagine a row of people seated in an auditorium, where there are no spare chairs. Someone in the middle of the row wants to leave, so they jump over the back of the seat into an empty row, and walk out. This empty row is analogous to the conduction band, and the person walking out is analogous to a free electron. Now imagine someone else comes along and wants to sit down (the empty row has a poor view; no one wants to sit there). Instead, the person next to the empty seat moves along and sits in it, leaving an empty seat one spot closer to the edge. The next person follows, and the next. One could say that the empty seat moves towards the edge of the row. Once the empty seat reaches the edge, the new person can sit down. But in the process, everyone in the row has moved along. If those people were charged (like electrons), this movement would constitute conduction. This is how hole conduction works.

Instead of analyzing the movement of an empty state in the valence band as the movement of billions of electrons, physicists propose an imaginary particle called a “hole”. In an applied electric field, all the electrons move one way, so the hole moves the other way. Physicists therefore say that the hole must have positive charge; in fact, they assign it a charge of +e, precisely the opposite of the electron charge. Using Coulomb’s law, we can calculate the force on the “hole” due to an electric field. Physicists then propose an effective mass which relates the (imaginary) force on the (imaginary) hole to the acceleration of that hole. It turns out that the effective mass is fairly independent of velocity and direction, which means physicists can (in some cases) pretend that the hole is simply a positive charge moving through a vacuum, with a mass of, say, 0.36 mₑ (a value typical for silicon). For this reason an electron hole is called a pseudoparticle.

Semiconductor

Semiconductors are materials with electrical conductivities that are intermediate between those of conductors and insulators. Semiconductors are useful for electronic purposes because they can carry an electric current by electron propagation or hole propagation, because this current is generally uni-directional, and because the amount of current may be influenced by an external agent (see diode, transistor, amplifier, etc.). Electron propagation is the same sort of current flow seen in a standard copper wire: heavily ionized atoms pass excess electrons down the wire from one atom to another in order to move from a more negatively ionized area to a less negatively ionized area. “Hole” propagation is a rather different proposition: in a semiconductor experiencing hole propagation, the charge moves from a more positively ionized area to a less positively ionized area by the movement of the electron hole created by the absence of an electron in a nearly full electron shell. The properties of semiconductors, e.g. the number of carriers (and therefore the prevalence of electron propagation or hole propagation), can be controlled by “doping” the semiconductor with impurities.
A semiconductor with more electrons than holes is called an n-type semiconductor, while a semiconductor with more holes than electrons is called a p-type semiconductor. Semiconductors are the fundamental materials in many modern electronic devices.


Electronic Structure of Semiconductors

Semiconductors exhibit a number of useful and unique properties related to their electronic structure. In solids the electrons tend to occupy various energy bands. The highest energy band occupied by electrons in their ground state is called the valence band. The lowest energy band occupied by excited electrons is called the conduction band. As the name implies, electrons in the conduction band are able to conduct electricity. The energy spacing between the valence band and the conduction band is called the band gap and corresponds to the energy necessary to excite an electron from the valence band into the conduction band.

For some metals, such as magnesium, the valence and conduction bands overlap, corresponding to a negative band gap. In this situation there are always some electrons in the conduction band, and the material is highly conductive. Other metals, such as copper, have empty states in the valence band. In this case electrons in the valence band can conduct electricity by moving between the various states, and again the material is highly conductive. For insulators the valence band is completely filled and the band gap is relatively large, preventing conduction.

Semiconductors have an electronic structure similar to that of insulators, but with a relatively small band gap, generally less than 2 eV. Because the band gap is relatively small, electrons can be thermally excited into the conduction band, making semiconductors somewhat conductive at room temperature. Electrons in the conduction band are free to move through the material, conducting electricity. In addition, when an electron is excited into the conduction band it leaves behind an empty state in the valence band, corresponding to a missing electron in one of the covalent bonds. Under the influence of an electric field, an adjacent valence electron may move into the missing electron position, effectively moving the location of the missing electron. Thus, like the electron, this missing electron or hole is also able to move through the material, conducting electricity. Holes are considered to have a charge of the same magnitude as an electron (1.6 × 10⁻¹⁹ C), but of opposite sign. Thus, in the presence of an electric field, excited electrons and holes move in opposite directions. Electrons are somewhat more mobile than holes and are thus more efficient at conducting electricity. Because both electrons and holes are capable of carrying electricity, they are collectively called carriers. The concentration of carriers is strongly dependent on the temperature. Increasing the temperature leads to an increase in the number of carriers and a corresponding increase in conductivity. This contrasts sharply with most conductors, which tend to become less conductive at higher temperatures.

Doping and extrinsic semiconduction

Intrinsic semiconductors are those in which the electrical behavior depends on the electronic structure of the pure material. For the case of intrinsic semiconductors, all carriers are created by exciting electrons into the conduction band. By the principle of doping we can influence the carrier type of the solid. n-type doping: the purpose of n-type doping is to produce an abundance of carrier electrons in the material. p-type doping: the purpose of p-type doping is to create an abundance of holes. A p-n junction may be created by doping adjacent regions of a semiconductor with p-type and n-type dopants.
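Before turning to the biased p-n junction below, a minimal sketch of the strong temperature dependence of the intrinsic carriers just described. The Boltzmann-factor form exp(−E_g/2kT) and the silicon band gap of about 1.1 eV are standard textbook values, assumed here rather than taken from these notes:

```python
import math

k_eV = 8.617e-5  # Boltzmann constant in eV/K
E_g = 1.1        # band gap of silicon in eV (standard value, assumed)

def excitation_factor(T):
    """Boltzmann factor exp(-E_g / (2 k T)) that governs the intrinsic carrier density."""
    return math.exp(-E_g / (2.0 * k_eV * T))

for T in (250.0, 300.0, 350.0):
    print(f"T = {T:5.1f} K: factor = {excitation_factor(T):.2e}")
```

A modest rise in temperature multiplies the carrier density manyfold, which is why semiconductor conductivity grows with temperature while metallic conductivity falls.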
If a positive bias voltage is placed on the p-type side, the dominant positive carriers (holes) are pushed toward the junction. At the same time, the dominant negative carriers (electrons) in the n-type material are attracted toward the junction. Since there is an abundance of carriers at the junction, current can flow through the junction from a power supply, such as a battery. However, if the bias is reversed, the holes and electrons are pulled away from the junction, leaving a region of relatively non-conducting silicon which inhibits current flow. The p-n junction is the basis of an electronic device called a diode, which allows electric current to flow in only one direction. Similarly, a third region can be doped n-type or p-type to form a three-terminal device. These n-p-n and p-n-p junction devices form the basis for most semiconductor devices, including the transistor.

Case 4: Superconductivity

Superconductivity is an electromagnetic phenomenon occurring in certain materials at low temperatures, characterized by the complete absence of electrical resistance and the exclusion of the interior magnetic field. In conventional superconductors, superconductivity is caused by a force of attraction between certain conduction electrons arising from the exchange of phonons (explained below), which causes the fluid of conduction electrons to exhibit a superfluid phase composed of correlated pairs of electrons. Superconductivity occurs in a wide variety of materials, including simple elements like tin and aluminum, various metallic alloys, some heavily doped semiconductors, and certain ceramic compounds containing planes of copper and oxygen atoms.

The pseudoparticles called phonons

Earlier we encountered electron holes, which could be regarded as fake or pseudo-particles, and which were useful in explaining the properties of semiconductors. We can apply the same approach to vibrations in the crystal lattice. Consider an ion core that is displaced from its normal position. This ion pulls on the other ions in the lattice, especially in its direct vicinity. This is comparable to a system of marbles connected by springs. The displacement of the ion will thus propagate throughout the crystal lattice. We can imagine this propagating displacement as a pseudoparticle called a phonon. A phonon is thus a quantized mode of vibration occurring in a rigid crystal lattice. The study of phonons is an important part of solid state physics, because they contribute to many of the physical properties of materials, such as thermal and electrical conductivity. For example, the propagation of phonons is responsible for the conduction of heat in insulators, and the properties of long-wavelength phonons give rise to sound in solids. According to a well-known result in classical mechanics, any vibration of a lattice can be decomposed into a superposition of normal modes of vibration. When these modes are analysed using quantum mechanics, they are found to possess some particle-like properties. When treated as pseudoparticles, phonons are bosons possessing zero spin.

Elementary properties of superconductors

Most of the physical properties of superconductors, such as the heat capacity and the critical temperature at which superconductivity is destroyed, vary from material to material. On the other hand, there is a class of properties that are independent of the underlying material. For instance, all superconductors have exactly zero resistivity at low applied currents when there is no magnetic field present. The existence of these “universal” properties implies that superconductivity is a thermodynamic phase, and thus possesses certain distinguishing properties which are largely independent of microscopic details.

Zero electrical resistance

Suppose we were to attempt to measure the electrical resistance of a piece of superconductor. The simplest method is to place the sample in an electrical circuit, in series with a voltage source V (such as a battery), and measure the resulting current. If we carefully account for the resistance R of the remaining circuit elements (such as the leads connecting the sample to the rest of the circuit), we would find that the current is simply V/R. According to Ohm’s law, this means that the resistance of the superconducting sample is zero.

In a normal conductor, an electrical current may be visualized as a fluid of electrons moving across a heavy ionic lattice. The electrons are constantly colliding with the ions in the lattice, and during each collision some of the energy carried by the current is absorbed by the lattice and converted into heat (which is essentially the vibrational kinetic energy of the lattice ions). As a result, the energy carried by the current is constantly being dissipated. This is the phenomenon of electrical resistance.

The situation is different in a superconductor. In a conventional superconductor, the electronic fluid cannot be resolved into individual electrons; instead it consists of bound pairs of electrons known as Cooper pairs. This pairing is caused by an attractive force between electrons arising from the exchange of phonons. Due to quantum mechanics, the energy spectrum of this Cooper pair fluid possesses an energy gap, meaning there is a minimum amount of energy ΔE that must be supplied in order to excite the fluid. Therefore, if ΔE is larger than the thermal energy of the lattice (given by kT, where k is Boltzmann’s constant and T is the temperature), the fluid will not be scattered by the lattice. The Cooper pair fluid is thus a superfluid, meaning it can flow without energy dissipation. Experiments have in fact demonstrated that currents in superconducting rings persist for years without any measurable degradation.

Superconducting phase transition

In superconducting materials, the characteristics of superconductivity appear when the temperature T is lowered below a critical temperature Tc. The value of this critical temperature varies from material to material. Conventional superconductors usually have critical temperatures ranging from less than 1 K to around 20 K. Solid mercury, for example, has a critical temperature of 4.2 K. As of 2001, the highest critical temperature found for a conventional superconductor is 39 K, for magnesium diboride (MgB₂), although this material displays enough exotic properties that there is some doubt about classifying it as a “conventional” superconductor. Cuprate superconductors can have much higher critical temperatures: YBa₂Cu₃O₇, one of the first cuprate superconductors to be discovered, has a critical temperature of 92 K, and mercury-based cuprates have been found with critical temperatures in excess of 130 K.

The onset of superconductivity is accompanied by abrupt changes in various physical properties, which is the hallmark of a phase transition. For example, the electronic heat capacity is proportional to the temperature in the normal (non-superconducting) regime. At the superconducting transition it suffers a discontinuous jump and thereafter ceases to be linear.

The complete microscopic theory of superconductivity was finally proposed in 1957 by Bardeen, Cooper, and Schrieffer. This BCS theory explains the superconducting current as a superfluid of Cooper pairs, pairs of electrons interacting through the exchange of phonons. For this work, the authors were awarded the Nobel Prize in 1972.
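The text’s qualitative criterion – the Cooper-pair fluid is not scattered when kT is below the gap ΔE – can be put in numbers; the zero-temperature estimate Δ ≈ 1.76·k·Tc is the standard BCS weak-coupling result, assumed here rather than derived in these notes:

```python
k = 1.38e-23  # Boltzmann constant, J/K

Tc = 4.2             # critical temperature of solid mercury, K (from the text)
gap = 1.76 * k * Tc  # BCS weak-coupling estimate of the zero-temperature gap (assumed)

# Compare the thermal energy kT with the gap deep inside the superconducting state.
# Caveat: the gap itself closes as T approaches Tc, so this test is only indicative.
for T in (0.5, 2.0, 4.0):
    kT = k * T
    print(f"T = {T:3.1f} K: kT = {kT:.2e} J, Delta(0) = {gap:.2e} J, kT < Delta: {kT < gap}")
```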


Conclusions

In this lecture we have studied physical systems containing zillions of particles. Though the fundamental properties of, and interactions between, these particles are simple, they give rise on the macroscopic level to some remarkable qualities. The complexity on the macroscopic level can only be understood from these microscopic interactions. We can describe this as if these properties emerge from the fundamental features of the system. This is the traditional reductionistic approach, in which the whole is but the sum of the parts. This kind of emergent complexity is of the utmost importance in the historical development of science. These systems could be understood because the reductionistic approach applies so well to them. In the next lectures we will encounter examples where the reductionistic approach fails, and we need another formalism for understanding and analyzing such systems. In recent decades the term ‘holistic’ became fashionable as an opposite to ‘reductionistic’. Whatever its meaning, it takes more than just finding a new word to solve such complicated “holistic” problems.


5. The Structure of the Sub-Atomic World

Our quest is to understand how physical complexity arises from basic particles that interact according to simple principles. In this lecture we follow this question as far as modern physics allows: we look for the smallest building blocks of nature and the laws of engagement that physics dictates.

1. Are Fundamental Particles Fundamental?

Around 1940 the success of Quantum Mechanics was complete and the world seemed very comprehensible. The large number of chemical elements could be reduced to just three fundamental particles, called electrons, protons and neutrons. From these constituents all chemical elements could be composed using the principles of quantum mechanics. During the following three decades, however, large numbers of new particles became known, and the subatomic world became an ever-expanding zoo of exotic particles.

The concept of a particle is a natural idealization of our everyday observation of matter. Dust particles or baseballs, under ordinary conditions, are stable objects that move as a whole and obey simple laws of motion. However, neither of these is actually a structureless object. That is, if sufficiently large forces are applied to them, they can readily be broken apart into smaller pieces. The idea that there must be some set of smallest constituent parts, which are the building blocks of all matter, is a very old one.


Democritus (born about 460 BC in Abdera, Thrace, Greece) is often credited with introducing this idea, though his concept of the building block was quite different from ours today. He introduced the word which in English translates as atom to describe these parts, whatever they might be. History plays tricks with language, however: the word atom has acquired a meaning today that only partly matches Democritus' idea. Certainly we know that matter is indeed composed of the objects we call atoms. Atoms were originally thought to be indivisible, that is, the smallest particles. However, we now understand that atoms are built up of smaller parts: electrons and a nucleus. The nucleus is much smaller than the atom and is itself composed of protons and neutrons.

What Does "Fundamental" Mean?

In the 1930s, it seemed that protons, neutrons, and electrons were the smallest objects into which matter could be divided, and they were termed "elementary particles". The word elementary then meant "having no smaller constituent parts", or "indivisible" -- the new "atoms", in the original sense. Again, later knowledge changed our understanding as physicists discovered yet another layer of structure within the protons and neutrons. It is now known that protons and neutrons are made up of quarks. Over 100 other "elementary" particles were discovered between 1930 and the present time. These particles, all made from quarks and/or antiquarks, are called hadrons. Once quarks were discovered, it was clear that all these hadrons were composite objects, so only in outdated textbooks are they still called "elementary". Leptons, on the other hand, still appear to be structureless. Today, quarks and leptons, and their antiparticles, are candidates for being the fundamental building blocks from which all else is made. Particle physicists call them the "fundamental" or "elementary" particles -- both names denoting that, as far as current experiments can tell, they have no substructure.

What are Fundamental Particles?

In the modern theory, known as the Standard Model, there are 12 fundamental matter particle types and their corresponding antiparticles. The matter particles divide into two classes, quarks and leptons; there are six particles of each class and six corresponding antiparticles. In addition, there are gluons, photons, and W and Z bosons: the force carrier particles that are responsible for the strong, electromagnetic, and weak interactions respectively. These force carriers are also fundamental particles.

Are Quarks and Leptons Structureless?

All we know is that quarks and leptons are smaller than 10^-19 meters in radius. As far as we can tell, they have no internal structure or even any size. It is possible that future evidence will, once again, show this understanding to be an illusion and demonstrate that there is substructure within the particles that we now view as fundamental. Currently, superstring theory tries to describe the sub-quark/lepton level, but so far without convincing results.

Quantum Electrodynamics
Quantum electrodynamics (QED) is the quantum field theory that describes the properties of electromagnetic radiation and its interaction with electrically charged matter in the framework of quantum theory. QED deals with processes involving the creation of elementary particles from electromagnetic energy, and with the reverse processes, in which a particle and its antiparticle annihilate each other and produce energy. The fundamental equations of QED apply to the emission and absorption of light by atoms and the basic interactions of light with electrons and other elementary particles. Charged particles interact by emitting and absorbing photons, the particles of light that transmit electromagnetic forces. For this reason, QED is also known as the quantum theory of light.

QED is based on the elements of quantum mechanics laid down by such physicists as P. A. M. Dirac, W. Heisenberg, and W. Pauli during the 1920s, when photons were first postulated. In 1928 Dirac discovered an equation describing the motion of electrons that incorporated both the requirements of quantum theory and the theory of special relativity. During the 1930s, however, it became clear that QED as it was then postulated gave the wrong answers for some relatively elementary problems. For example, although QED correctly described the magnetic properties of the electron and its antiparticle, the positron, it proved difficult to calculate specific physical quantities such as the mass and charge of the particles. It was not until the late 1940s, when experiments conducted during World War II that had used microwave techniques stimulated further work, that these difficulties were resolved. Proceeding independently, Freeman J. Dyson, Richard P. Feynman, and Julian S. Schwinger in the United States and Shinichiro Tomonaga in Japan refined and fully developed QED. They showed that two charged particles can interact in a series of processes of increasing complexity, and that each of these processes can be represented graphically through a diagramming technique developed by Feynman. Not only do these diagrams provide an intuitive picture of the process, but they also show how to precisely calculate the variables involved. The mathematical structures of QED were later adapted to the study of the strong interactions between quarks, which is called quantum chromodynamics.

2. Feynman Diagrams Richard Feynman was the physicist who developed the method still used today to calculate rates for electromagnetic and weak interaction particle processes. The diagrams he introduced provide a convenient shorthand for the calculations. They are a code physicists use to talk to one another about their calculations.

See also: http://www.feynmanonline.com/


The Basic Idea
If the electromagnetic field is defined in terms of the force on a charged particle, then it is tempting to say that the field itself consists of photons, which cause a force on a charged particle by being absorbed by it or simply colliding with it -- as in the photoelectric effect. The electric repulsion between two electrons could then be understood as follows: one electron emits a photon and recoils; the second electron absorbs the photon and acquires its momentum. This simple process can be pictured with a Feynman diagram.

Clearly the recoil of the first electron and the impact of the second electron with the photon drive the electrons away from each other. So much for repulsive forces. How can attraction be represented in this way?

The Feynman Rules
The Feynman Rules for a theory are very simple, but lead to increasingly complicated mathematical expressions as increasingly complicated diagrams are constructed. The rules for any process are:

• Draw all possible diagrams (up to some number of photons, depending on the accuracy desired). Different time-orderings of a given process are represented by the same diagram.
• Given the initial momentum and energy, define how momentum and energy flow for each line in the diagram. Where a diagram has a closed loop, there is an arbitrary momentum and energy flow around the loop, and we must integrate over all possible choices for these quantities. Each intermediate line in the diagram contributes a factor of 1/(E² - p²c² - m²c⁴) to the amplitude, where m is the appropriate mass for the particle type represented by the line. Note that this says that the more "virtual" the particle represented by a line is, the smaller the contribution of the diagram.
• Add the amplitude factors from all possible diagrams to get the total amplitude for the process.

The expected rate for the process can then be calculated -- it is proportional to the absolute value of the total amplitude squared.

In Feynman diagrams (time-ordered form), left-to-right in the diagram represents time; a process begins on the left and ends on the right.
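The bookkeeping in these rules -- add the complex amplitudes of all diagrams, then square the total -- can be shown in a few lines of Python. This is a sketch with invented amplitude values, not the result of evaluating real QED diagrams:

    # Sketch: combine per-diagram complex amplitudes into an expected rate.
    # The numbers are illustrative placeholders only.
    amplitudes = [
        0.80 + 0.10j,   # e.g. the leading one-photon diagram
        -0.05 + 0.02j,  # e.g. a suppressed higher-order correction
    ]

    total_amplitude = sum(amplitudes)
    rate = abs(total_amplitude) ** 2  # rate is proportional to |amplitude|^2
    print(total_amplitude, rate)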


Every line in the diagram represents a particle; the three types of particles in the simplest theory (QED) are:

• straight line, arrow to the right: electron
• straight line, arrow to the left: positron
• wavy line: photon

Up and down (vertical) displacement in a diagram indicates particle motion, but no attempt is made to show direction or speed, except schematically. Any vertex (point where three lines meet) represents an electromagnetic interaction; the possible vertices are:

• An electron emits a photon
• An electron absorbs a photon
• A positron emits a photon
• A positron absorbs a photon
• A photon produces an electron and a positron (an electron-positron pair)
• An electron and a positron meet and annihilate (disappear), producing a photon

(Notice that all six of these processes are just different orientations of the same three elements.)

Any diagram which can be built using these parts is a possible process, provided:

1. Conservation of energy and momentum is required at every vertex.
2. Lines entering or leaving the diagram represent real particles and must have E² = p²c² + m²c⁴.
3. Lines in intermediate stages in the diagram represent "virtual particles", which do not need to have the right relationship between E, p, and m, but which can never be observed if they do not!

The first thing to realize is that no single-vertex diagram represents a possible process: no matter how you try, you cannot satisfy rules (1) and (2) above at the same time for such a process. The simplest process we can consider is a two-particle collision or "scattering" event. Let us start and end the process with one electron and one positron -- only their momenta and energies change in the process.

Feynman tells us to draw all possible diagrams. First, let's add one intermediate photon line. We find three time-ordered diagrams, (a), (b), and (c).

[Figure: the three time-ordered one-photon diagrams (a), (b), and (c).]

The first two figures (a and b) are just different orientations (time-orderings) of the same event; a single diagram without time orderings, just a start and a stop, is used as a shortcut to show both orientations. The third diagram (c) is really quite a different process -- it has an intermediate stage with only a photon (a virtual photon) present.

We can also draw more complicated diagrams with more photons; in fact, we could have any number of photons!

What makes the diagrams useful is that each diagram has a definite complex number quantity -- called an amplitude -- related to it by a set of rules (the Feynman rules). One part of these rules is that there is a multiplication factor of e²/ℏc ≈ 1/137 (the fine structure constant²) for each photon, so the amplitudes for diagrams with many photons are small compared to those with only one. The quantity e here is the electromagnetic coupling, or electric charge.

Technically, the Feynman rules give the rate as a power series expansion in the coupling parameter. The technique is only useful when this parameter is small, that is, for electromagnetic or weak interactions, but not for strong interactions except at very high energies. Calculations in QED keeping up to four photons have been made for certain quantities. They give a result that matches experimental data up to the twelfth decimal place!

Real and Virtual Particles
Because Feynman diagrams represent terms in a quantum calculation, the intermediate stages in any diagram cannot be observed. Physicists call the particles that appear in the intermediate, unobservable stages of a process "virtual particles". Only the initial and final particles in the diagram represent observable objects, and these are called "real particles".

Real Particles
Particles that can be observed either directly or indirectly in experiments are real particles. Any isolated real particle satisfies the generalized Einstein relativistic relationship between its energy E, its momentum p, and its mass m (c is the speed of light):

E² = p²c² + m²c⁴

Notice that for a particle at rest, p = 0, this becomes E = mc². This is the minimum possible energy for an isolated real particle.

Virtual Particles
Virtual particles are a language invented by physicists in order to talk about processes in terms of Feynman diagrams. These diagrams are a shorthand for a calculation, derived from quantum field theory, that gives the probability of the process. Feynman diagrams have lines that represent mathematical expressions, but each line can also be viewed as representing a particle. However, in the intermediate stages of a process the lines represent particles that can never be observed. These particles do not have the required Einstein relationship between their energy, momentum, and mass. They are called "virtual" particles.
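The propagator factor from the rules above can also be made concrete. The sketch below is an addition to these notes, in natural units with c = 1 and illustrative GeV-scale numbers: a nearly on-shell line contributes a large factor, while a strongly virtual line contributes little.

    # Sketch: a real particle satisfies E^2 = p^2 + m^2 (c = 1); each internal
    # line is weighted by 1/(E^2 - p^2 - m^2), so far-off-shell lines are
    # suppressed. Input values are illustrative.
    def propagator_factor(E: float, p: float, m: float) -> float:
        off_shell = E**2 - p**2 - m**2  # zero for an on-shell (real) particle
        return 1.0 / off_shell

    print(propagator_factor(E=1.0, p=0.9, m=0.0))   # nearly on-shell: large
    print(propagator_factor(E=10.0, p=1.0, m=0.0))  # far off-shell: small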

2 The Mysterious 137. If you have ever read Cargo Cult Science by Richard Feynman, you know that he believed that there were still many things that experts, or in this case physicists, did not know. One of these 'unknowns' that he pointed out often to his colleagues was the mysterious number 137. This number is the inverse of the fine-structure constant, which (in Gaussian units) is defined as the square of the charge of the electron divided by the product of Planck's constant (divided by 2π) and the speed of light. This number represents the probability that an electron will absorb a photon. However, it has more significance in the fact that it relates three very important domains of physics: electromagnetism in the form of the charge of the electron, relativity in the form of the speed of light, and quantum mechanics in the form of Planck's constant. Since the early 1900s, physicists have thought that this number might be at the heart of a GUT, or Grand Unified Theory, which could relate the theories of electromagnetism, quantum mechanics, and most especially gravity. However, physicists have yet to find any link between the number 137 and any other physical law in the universe. It was expected that such an important equation would generate an important number, like one or pi, but this was not the case. In fact, about the only thing that the number relates to at all is the room in which the great physicist Wolfgang Pauli died: room 137. So whenever you think that science has finally discovered everything it possibly can, remember Richard Feynman and the number 137.
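As a quick numerical aside (added here; CODATA values), the SI form of the fine-structure constant, α = e²/(4πε₀ℏc), is the same quantity as the Gaussian-units e²/ℏc of the footnote:

    # Compute the fine-structure constant from SI constants.
    import math

    e    = 1.602176634e-19    # elementary charge, C (exact)
    hbar = 1.054571817e-34    # reduced Planck constant, J*s
    c    = 2.99792458e8       # speed of light, m/s (exact)
    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

    alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
    print(alpha, 1 / alpha)   # ~0.0072974 and ~137.036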


A neutron decays to a proton, an electron, and an anti-neutrino via a virtual (mediating) W boson. This is neutron beta decay.

In beta decay one can readily see that the energy available for the intermediate W boson cannot be greater than the mass-energy difference between a neutron and a proton, which is very much less than the mass-energy of a W boson. Thus, the W boson here cannot be observed, but the calculation based on this diagram correctly predicts the rate of the process. Particle physicists talk about these processes as if the particles exchanged in the intermediate stages of a diagram are actually there, but they are really only part of a quantum probability calculation. It is meaningless to argue whether they are or are not there, as they cannot be observed. Any attempt to observe them changes the outcome of the process.
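To put numbers on this (a small added check using rounded PDG masses): the energy available in neutron beta decay is the neutron-proton mass difference, about 1.3 MeV, while the W rest energy is about 80 GeV.

    # Why the W in neutron beta decay must be virtual (masses in MeV, rounded).
    m_neutron = 939.565
    m_proton  = 938.272
    m_W       = 80_377.0  # about 80.4 GeV

    available = m_neutron - m_proton  # at most ~1.29 MeV
    print(f"available energy ~ {available:.3f} MeV")
    print(f"W rest energy is ~ {m_W / available:.0f} times larger")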


3. The Standard Model
Particle physics seeks to answer two questions:

• What are the fundamental (smallest) building blocks from which all matter is made?

• What are the interactions between them that govern how they combine and decay?

The name given to the theory that best incorporates all observations to date in the particle realm is "the Standard Model". This theory describes the strong, weak, and electromagnetic interactions of quarks and gluons. It also describes the weak and electromagnetic interactions of leptons, the particles that do not participate in strong interactions. Gravitational interactions have yet to be successfully incorporated into quantum field theory, and are a tiny effect in high-energy particle collisions, so they are ignored in the Standard Model. A fifth interaction, associated with the Higgs field, is needed to complete our understanding of particle masses. Particle physics theories are written in a mathematical language called relativistic quantum field theory. This builds on the two major early twentieth century advances in physics, relativity and quantum mechanics. These advances have withstood the test of many experiments and are now well established as the proper description of processes at atomic and sub-atomic scales, even though they contain ideas and effects that, in the light of our human-scale experience, seem counter-intuitive. See also: http://www2.slac.stanford.edu/vvc/theory.html

The realisation of the Standard Model

Halfway through the 20th century, the only matter particles known to exist were the proton, neutron, and electron. Then additional particles were discovered in cosmic rays and particle accelerators. By the mid-1960s, physicists realized that their previous understanding, in which all matter is composed of the fundamental proton, neutron, and electron, was insufficient to explain the myriad of new particles being discovered. Gell-Mann's and Zweig's quark theory solved these problems. Over the last thirty years, the theory that is now called the Standard Model of particles and interactions has gradually grown and gained increasing acceptance with new evidence from new particle accelerators.

In 1964, Murray Gell-Mann and George Zweig tentatively put forth the idea of quarks. They suggested that mesons and baryons are composites of three quarks or antiquarks, called up, down, and strange (u, d, s), with spin ½ and electric charges 2/3, -1/3, -1/3, respectively. (It turns out that this picture is not completely accurate: mesons are quark-antiquark pairs, while baryons are made of three quarks.) Since fractional charges had never been observed, the introduction of quarks was treated more as a mathematical explanation of flavor patterns of particle masses than as a postulate of actual physical objects. Later theoretical and experimental developments allow us to now regard the quarks as real physical objects, even though they cannot be isolated. It is sort of the opposite of "lines of force", which were originally considered physical entities but are now regarded as mathematical constructs.

Murray Gell-Mann thought up the name "quark", taking it from a line in James Joyce's "Finnegans Wake": "three quarks for Muster Mark." Gell-Mann said that initially he didn't know where he got the name, and then he realized where he had heard it. It seemed appropriate, since at that time only three quarks -- up, down, and strange -- were theorized. Gell-Mann also said the line suggested to him "three quarts for Mister Mark", implying a guy drinking at a pub. James Joyce invented the word "quark" after hearing seagulls cawing, and took the title "Finnegans Wake" from a popular Irish folk song of the same name.


Since quarks and leptons had a certain pattern, several papers suggested a fourth quark carrying another flavor, to give a similar repeated pattern for the quarks -- now seen as the generations of matter. Very few physicists took this suggestion seriously at the time. Sheldon Glashow and James Bjorken coined the term "charm" for the fourth (c) quark.

In 1965, O. W. Greenberg, M. Y. Han, and Yoichiro Nambu introduced the quark property of color charge. All observed hadrons are color neutral.

In 1967, Steven Weinberg and Abdus Salam separately proposed a theory that unifies electromagnetic and weak interactions into the electroweak interaction. Their theory requires the existence of a neutral, weakly interacting boson (now called the Z0) that mediates a weak interaction that had not been observed at that time. They also predicted an additional massive boson, called the Higgs boson, that has not yet been observed.

In 1968-69, at the Stanford Linear Accelerator, in an experiment in which electrons were scattered off protons, the electrons appeared to be bouncing off small hard cores inside the proton. This is similar to the discovery of the atomic nucleus. James Bjorken and Richard Feynman analyzed this data in terms of a model of constituent particles inside the proton. They didn't use the name "quark" for the constituents, even though this experiment provided evidence for quarks.

Sheldon Glashow, John Iliopoulos, and Luciano Maiani recognized the critical importance of a fourth type of quark in the context of the Standard Model. A fourth quark allows a theory that has flavor-conserving Z0-mediated weak interactions but no flavor-changing ones. Donald Perkins, spurred by a prediction of the Standard Model, re-analyzed some old data from CERN and found indications of weak interactions with no charge exchange, those due to a Z0 exchange.

Then a quantum field theory of the strong interaction was formulated. This theory of quarks and gluons, now part of the Standard Model, is similar in structure to quantum electrodynamics (QED), but since the strong interaction deals with color charge this theory is called quantum chromodynamics (QCD). Quarks were determined to be real particles, carrying a color charge. Gluons are massless quanta of the strong-interaction field. This strong interaction theory was first suggested by Harald Fritzsch and Murray Gell-Mann.

In 1973, David Politzer, David Gross, and Frank Wilczek discovered that the color theory of the strong interaction has a special property, now called "asymptotic freedom". The property is necessary to describe the 1968-69 data on the substructure of the proton.

In 1974, in a summary talk for a conference, John Iliopoulos presented, for the first time in a single report, the view of physics now called the Standard Model. That same year, Burton Richter and Samuel Ting, leading independent experiments, announced on the same day that they had discovered the same new particle. Ting and his collaborators at Brookhaven called this particle the "J" particle, whereas Richter and his collaborators at SLAC called it the psi particle. Since the discoveries are given equal weight, the particle is commonly known as the J/psi particle. The J/psi particle is a charm-anticharm meson.
In 1976, Gerson Goldhaber and Francois Pierre found the D0 meson (composed of an anti-up quark and a charm quark). The theoretical predictions agreed dramatically with the experimental results, offering support for the Standard Model. That same year, the tau lepton was discovered by Martin Perl and collaborators at SLAC. Since this lepton was the first recorded particle of the third generation, it was completely unexpected. In 1977, Leon Lederman and his collaborators at Fermilab discovered yet another quark and its antiquark. This quark was called the "bottom" quark. Since physicists assumed that quarks come in pairs, this discovery added impetus to the search for the sixth quark, "top".


Charles Prescott and Richard Taylor observed a Z0-mediated weak interaction in the scattering of polarized electrons from deuterium which shows a violation of parity conservation, confirming the Standard Model's prediction. The W± and Z0 intermediate bosons demanded by the electroweak theory were observed by two experiments using the CERN synchrotron, using techniques to collide protons and antiprotons developed by Carlo Rubbia and Simon van der Meer.

In 1989, experiments carried out at SLAC and CERN strongly suggested that there are three and only three generations of fundamental particles. This was inferred by showing that the Z0-boson lifetime is consistent only with the existence of exactly three very light or massless neutrinos. According to the Standard Model, there are three generations of particles, each containing two quarks and two leptons, one of which is a neutrino. Neutrinos are very weakly interacting particles. The electron neutrino was first theorized by Wolfgang Pauli in 1931. The last neutrino to be observed, the tau neutrino, was first observed in 2000.

In 1995, after eighteen years of searching at many accelerators, the CDF and D0 experiments at Fermilab discovered the top quark at the unexpectedly large mass of 175 GeV. No one understands why this mass is so different from that of the other five quarks.

Basic Constituents of the Standard Model
In order to write down the Standard Model Lagrangian, one needs the notation of the Dirac equation to express the spin structure, the requirements of gauge invariance that tell us to begin with a free-particle Lagrangian and rewrite it with a covariant derivative, and the idea of internal symmetries. In order to describe the particles and interactions known today, three internal symmetries are needed.


Today, all experiments are consistent with the idea that these three symmetries are necessary and sufficient to describe the interactions of the known particles. It is easiest to describe how these symmetries act in the language of group theory.

All particles appear to have a so-called U(1) invariance; this invariance is related to the electromagnetic interaction. All particles also appear to have a second invariance, under a set of transformations that form an SU(2) group, called the electroweak SU(2) invariance. These lead to a non-Abelian gauge phase invariance, analogous to the strong isospin invariance. The associated gauge bosons necessary to maintain the invariance of the theory are called Wi; there is one boson for each of the three generators of SU(2) transformations, so i = 1, 2, or 3. There is a third internal invariance, under a set of transformations that form an SU(3) group, giving an additional independent non-Abelian invariance. The associated gauge bosons are labeled Ga, where a = 1, 2, ..., 8, since there is one spin-one boson for each of the eight generators of SU(3). These bosons are called gluons, and the theory of particle interactions via gluon exchange is called Quantum Chromodynamics (QCD).

There are six color charges: red, blue, green, antired, antiblue, and antigreen. The six quarks, six antiquarks, and the gluons carry color charge. Particles with color charge can only combine in ways which cause the colors to cancel out. There are 2 × 3 colors (three colors and three anticolors), and currently four color-neutral combinations are known:

• 3 quarks = red, blue, green = baryon
• 3 antiquarks = antired, antiblue, antigreen = antibaryon

• quark-antiquark pairs = red-antired, blue-antiblue, or green-antigreen = meson
• gluon-antigluon pairs = red-antired, blue-antiblue, or green-antigreen = glueball

If you had two red quarks, two blue quarks, and two green quarks, the color charges would also cancel, but that would be two baryons. It has been suggested that there could possibly exist other combinations of various numbers of quarks, antiquarks, and gluons, but that remains tentative conjecture. Here are some baryons (the antibaryons of these would be the same, except with particles and antiparticles reversed). Note how some previously "fundamental" particles are composed of these quarks:

• electron = lepton (no quark)

• proton = two up quarks and a down quark = uud
• neutron = two down quarks and an up quark = udd
• lambda = an up quark, a down quark, and a strange quark = uds
• sigma+ = uus
• sigma- = dds
• sigma0 = uds
• xi0 = uss
• xi- = dss

Here are some mesons.


• pion+ = an up quark and a down antiquark = u[d bar]
• pion- = an up antiquark and a down quark = [u bar]d
• K+ = u[s bar]
• K- = [u bar]s
• K0 = d[s bar]
• [K0 bar] = [d bar]s
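As a consistency check (an added sketch, not part of the original notes), the quark charges u = +2/3 and d = s = -1/3 reproduce the hadron charges listed above:

    # Check hadron charges from quark content; charges in units of e.
    QUARK_CHARGE = {"u": 2 / 3, "d": -1 / 3, "s": -1 / 3}

    def hadron_charge(quarks: str) -> float:
        """Space-separated quarks; a '~' prefix marks an antiquark."""
        total = 0.0
        for q in quarks.split():
            sign = -1.0 if q.startswith("~") else 1.0
            total += sign * QUARK_CHARGE[q.lstrip("~")]
        return total

    print(round(hadron_charge("u u d"), 3))  # proton:  1.0
    print(round(hadron_charge("u d d"), 3))  # neutron: 0.0
    print(round(hadron_charge("u ~d"), 3))   # pion+:   1.0
    print(round(hadron_charge("u ~s"), 3))   # K+:      1.0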

According to the Standard Model, there are three generations of fermions, each containing two quarks and two leptons. The first generation is the up quark, down quark, electron, and electron neutrino. The second generation is the charm quark, strange quark, muon, and muon neutrino. The third generation is the top quark, bottom quark, tau particle, and tau neutrino. The fundamental bosons are the photon, the eight gluons, and the W+, W-, and Z0 vector bosons (plus, if gravity is quantized, the graviton, which is not part of the Standard Model). What you think of as "normal matter" is composed of up quarks, down quarks, and leptons.


4. Beyond the Standard Model: Supersymmetry and String Theory

While the Standard Model has been very successful in describing most of the phenomena that we can experimentally investigate with the current generation of particle accelerators, it leaves many unanswered questions about the fundamental nature of the universe. The goal of modern theoretical physics has been to find a "unified" description of the universe. This has historically been a very fruitful approach. For example, Einstein's theory of Special Relativity shows that electricity and magnetism are two aspects of a single electromagnetic force. The Nobel prize winning work of Glashow, Salam, and Weinberg successfully showed that the electromagnetic and weak forces can be unified into a single electroweak force. There is actually some fairly strong evidence that the forces of the Standard Model should all unify as well: when we examine how the relative strengths of the strong force and the electroweak force behave as we go to higher and higher energies, we find that they become the same at an energy of about 10^16 GeV. In addition, the gravitational force should become equally important at an energy of about 10^19 GeV.

The goal of string theory is to explain the "?" in this unification picture: what happens at the scale where all four forces meet. The characteristic energy scale for quantum gravity is called the Planck mass; it is given in terms of Planck's constant, the speed of light, and Newton's gravitational constant as m_Planck = √(ℏc/G) ≈ 1.22 × 10^19 GeV (a numerical check follows the list below). It seems that in its final form string theory will be able to answer questions like:

• Where do the four forces that we see come from?
• Why do we see the various types of particles that we do?
• Why do particles have the masses and charges that we see?
• Why do we live in 4 spacetime dimensions?
• What is the nature of spacetime and gravity?
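The Planck mass quoted above follows directly from the constants; here is a small added check (CODATA values):

    # Evaluate m_Planck = sqrt(hbar*c/G) and convert the rest energy to GeV.
    import math

    hbar = 1.054571817e-34      # J*s
    c    = 2.99792458e8         # m/s
    G    = 6.67430e-11          # m^3 kg^-1 s^-2
    J_PER_GEV = 1.602176634e-10

    m_planck_kg = math.sqrt(hbar * c / G)          # ~2.18e-8 kg
    m_planck_gev = m_planck_kg * c**2 / J_PER_GEV  # ~1.22e19 GeV
    print(f"{m_planck_kg:.3e} kg ~ {m_planck_gev:.3e} GeV")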

We are used to thinking of fundamental particles (like electrons) as point-like, 0-dimensional objects. A generalization of this is fundamental strings, which are 1-dimensional objects. They have no thickness but do have a length, typically 10^-33 cm [that's a decimal point followed by 32 zeros and a 1]. This is very small compared to the length scales that we can reasonably measure, so these strings are so small that they practically look like point particles. However, their stringy nature has important implications, as we will see.

Strings can be open or closed. As they move through spacetime they sweep out an imaginary surface called a worldsheet. These strings have certain vibrational modes which can be characterized by various quantum numbers such as mass, spin, etc. The basic idea is that each mode carries a set of quantum numbers that correspond to a distinct type of fundamental particle. This is the ultimate unification: all the fundamental particles we know can be described by one object, a string! [A very loose analogy can be made with, say, a violin string. The vibrational modes are like the harmonics or notes of the violin string, and each type of particle corresponds to one of these notes.]

Strings interact by splitting and joining; for example, two closed strings can annihilate into a single closed string. The worldsheet of the interaction is a smooth surface. This essentially accounts for another nice property of string theory: it is not plagued by infinities in the way that point-particle quantum field theories are. Compare this with a Feynman diagram in a point-particle field theory, where the interaction occurs at a topological singularity in the diagram. If we glue two of the basic closed-string interactions together, we get a process by which two closed strings interact by joining into an intermediate closed string which splits apart into two closed strings again. This is the leading contribution to the process and is called a tree-level interaction.

To compute quantum mechanical amplitudes using perturbation theory we add contributions from higher-order quantum processes. Perturbation theory provides good answers as long as the contributions get smaller and smaller as we go to higher and higher orders, so that we only need to compute the first few diagrams to get accurate results. In string theory, higher-order diagrams correspond to the number of holes (or handles) in the worldsheet. The nice thing about this is that at each order in perturbation theory there is only one diagram, whereas in point-particle field theories the number of diagrams grows exponentially at higher orders. The bad news is that extracting answers from diagrams with more than about two handles is very difficult, due to the complexity of the mathematics involved in dealing with these surfaces. Perturbation theory is a very useful tool for studying physics at weak coupling, and most of our current understanding of particle physics and string theory is based on it. However, it is far from complete. The answers to many of the deepest questions will only be found once we have a complete non-perturbative description of the theory.

READING:

• Young, chapter 46
• Brian Greene, "The Elegant Universe", Chapter 1: "Space, Time, and the Eye of the Beholder"



6. The Large-scale Organization of the Universe

We end our quest for complexity in physics by studying the largest visible structures in space, and by considering theories of the structure and history of the universe itself. Key items:

• General Theory of Relativity
• Black Holes and time travel
• Cosmology
• MAP, COBE, etc.
• The flat and infinite universe

Reading:

• Freedman, Kaufmann, "The Universe", 6th edition, Chapters 28, 29