Numerical Simulation of Anderson Localization


Copublished by the IEEE CS and the AIP, 1521-9615/11/$26.00 © 2011 IEEE. Computing in Science & Engineering, May/June 2011.

COMPUTER SIMULATIONS

Editor: Muhammad Sahimi, [email protected]

By Reza Sepehrinia and Ameneh Sheikhan

The concept of electron localization in materials with disorder or defects was first introduced by Philip W. Anderson, a Nobel Laureate in physics, in 1958 and has remained an active research field. Because the problem is difficult to study analytically, researchers have carried out extensive computer simulation studies. Here, we review the most efficient numerical tools and approaches that researchers have developed to study the problem.

Problem Overview

As characterized by Anderson, localization originally referred to the absence of quantum particle or electron diffusion in a random potential,1 with the randomness due to a material's disorder or impurity. Anderson predicted a transition from localization, whereby an electron can't move far, to diffusive motion over large length and time scales at a critical value of the disorder strength. Such a transition is also well known in classical percolation, which is a purely geometrical transition.2

Differences with percolation arise from electrons' wave nature. In subsequent work,3 Elihu Abrahams, Anderson, and their colleagues showed that, in two dimensions, a quantum particle is always localized. Such localization in low-dimensional systems is absent in classical (non-quantum mechanical) cases, as we know from percolation theory.

Nevill Mott tied the localization idea to a material's conductivity. He introduced the idea of a mobility edge,4 which is the energy level within the spectrum that separates the localized states from the extended ones in which diffusion occurs. Accordingly, we can speak about a single eigenstate's localization. The energy eigenstate is an eigenvector of the target system's Hamiltonian, and the corresponding eigenvalue is an energy level (as we discuss later). The set of all eigenvalues of the Hamiltonian is called the energy spectrum.

A remarkable property of the localized states is that they're exponentially small away from the localization center. By tuning the Fermi energy across the mobility edge, a transition from metal (which has nonzero conductivity) to insulator (which has zero conductivity) happens without a gap opening in the spectrum; such a gap is the defining feature of ordinary insulators. (The energy gap is an energy range in which no electron states can exist.) This transition has been largely formulated using scaling concepts.5 The work of Anderson and Mott led to Nobel Prizes in physics for both.

Since Anderson's original work, researchers have developed several approaches to studying various aspects of localization. However, except for 1D systems, few analytical results are available on the subject. Thus, localization in higher-dimensional systems, which is important to practical applications, has been studied mostly by numerical methods. Despite a vast literature on the subject, some fundamental questions are still open, waiting for solutions. Here, we describe and review efficient numerical methods that researchers have used successfully to study localization and its properties.

The system is represented by a lattice. We consider a general problem formulation on the lattice; the solutions are the quantum states of an electron in a crystal (lattice), the vibrating modes of an elastic medium, electromagnetic waves, and so on:

H Ψ_ν = e_ν Ψ_ν,  (1)

where H is the Hamiltonian, Ψ_ν is the eigenvector whose components represent the values of the wave function at the lattice points, and e_ν is the eigenvalue, which can be an electron's energy, an eigenfrequency of mechanical or electromagnetic oscillations, and so on, depending on the problem. The implications of the problem's solution for localization can be deduced from Equation 1's eigenfunctions and eigenvalues. Various numerical criteria for localization have been proposed; we discuss their efficiency later. Two main approaches that we use to discuss such criteria are the transfer-matrix (TM) method, from which we compute such quantities as the localization length (the length scale over which localization occurs),



a material’s conductance, and the eigenvalues’ statistics.

Anderson's original work was based on the tight-binding Hamiltonian:

H = Σ_i e_i |i⟩⟨i| + t Σ_{⟨ij⟩} |i⟩⟨j|.  (2)

The tight-binding model is an approach for calculating the energy spectrum of solids using an approximate set of wave functions that are superpositions of the wave functions of isolated atoms located at each atomic site. The first term on the right side contains the random onsite energies,

e_i ∈ [−W/2, W/2],

with W the disorder strength. The second term represents the hopping of electrons to the nearest neighbors of each lattice site. Equation 2, known as the Anderson model, has been the basis for most subsequent localization studies.
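As a concrete illustration, Equation 2 can be assembled as a sparse matrix. The following minimal sketch (the function name and parameters are ours, not the article's) builds the 2D Anderson Hamiltonian with uniformly distributed onsite energies and nearest-neighbor hopping:

```python
import numpy as np
import scipy.sparse as sp

def anderson_hamiltonian_2d(L, W, t=1.0, seed=0):
    """2D Anderson model (Eq. 2) on an L x L square lattice, open boundaries."""
    rng = np.random.default_rng(seed)
    onsite = rng.uniform(-W / 2, W / 2, L * L)   # e_i in [-W/2, W/2]
    H = sp.diags(onsite).tolil()
    for x in range(L):
        for y in range(L):
            i = x * L + y
            if x + 1 < L:                        # hopping along x
                H[i, i + L] = H[i + L, i] = t
            if y + 1 < L:                        # hopping along y
                H[i, i + 1] = H[i + 1, i] = t
    return H.tocsr()

H = anderson_hamiltonian_2d(L=10, W=2.0)
```

Because the matrix has only five nonzero bands, sparse storage keeps even large lattices cheap to build and diagonalize.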

The Transfer-Matrix Method

We now describe the TM method and how researchers use it to compute the various quantities of interest.

Localization Length

The TM method has been a successful approach for calculating the localization length. The TM method6 offered the first strong numerical evidence for Anderson's predictions for 3D systems, and for the scaling theory of localization.3 TM can be applied directly to 1D and quasi-1D systems, but to reach the true 2D and 3D limits, we need an additional scaling assumption. We explain the method for a quasi-1D bar geometry; strip and line geometries are treated in much the same way.

As Figure 1 shows, the lattice is sliced into L squares of side M. Periodic boundary conditions are typically assumed in the x and y directions. Equation 1 can be viewed as a recursive equation: given the values of Ψ on the sites of the first two slices, its values on the next slice are determined using Equation 1:

Ψ_{n+1} = h_{n,n+1}^{−1} [(e − h_n) Ψ_n − h_{n−1,n} Ψ_{n−1}].  (3)

In matrix form, we have

( Ψ_{n+1} )   ( h_{n,n+1}^{−1}(e − h_n)   −h_{n,n+1}^{−1} h_{n−1,n} ) ( Ψ_n     )
( Ψ_n     ) = (           1                           0            ) ( Ψ_{n−1} ).  (4)

Here, Ψ_n is a vector of length M² that contains the values of Ψ on slice n; h_n is the part of the Hamiltonian that acts on the nth slice, while h_{n,n+1} connects slice n to its nearest-neighbor slices. Two kinds of randomness often appear in the Hamiltonian: diagonal and off-diagonal disorder, in which the diagonal elements and some off-diagonal elements (depending on the model) of the Hamiltonian are randomly distributed, respectively. The Hamiltonian has different symmetries in each case, which give rise to differences in the localization properties. We can also consider cases that represent a mixture of both.

Iterating Equation 4, we obtain a solution for a given eigenvalue e. The wave function at the end of the bar is then related to its initial value by successively multiplying the transfer matrices. Several theorems in the theory of random matrix multiplication describe the properties of such products of noncommuting objects as a generalization of the central-limit theorem. By increasing the number of matrices (that is, the length of the bar), the norm of the resulting matrix grows exponentially. The matrix norm can be defined in several ways; here, we use the norm of a nonzero vector multiplied by the matrices. The rate of the norm's exponential growth is determined by the Lyapunov exponent (LE) and is proven to be a self-averaging quantity, that is, nonrandom (deterministic) in the large-system-size limit.

For a stationary sequence of D × D matrices P_N (in the present case, D = 2M²), Valery Oseledec proved that the following limit exists:

lim_{L→∞} (P_L† P_L)^{1/2L} = V.  (5)

The matrix V has D eigenvalues e^{γ_i}, where the exponents γ_i are the LEs. The largest exponent governs the growth, so naive multiplication of the matrices yields only the largest exponent. We must also obtain the smallest positive LE, the inverse of which represents the spatial extent of the wave function.

[Figure 1. Bar geometry for the transfer-matrix calculation: a bar of cross section M × M (axes x, y), sliced along z into slices 1, 2, …, n − 1, n, n + 1, …, L − 1, L.]


Direct numerical multiplication of numerous matrices isn't possible, because the matrix elements grow rapidly and generate large round-off errors. Giancarlo Benettin and his colleagues developed a procedure to overcome this problem,7 which is (briefly) as follows:

1. Start with D (the dimension of the transfer matrices) normalized vectors v_i.

2. Multiply the normalized vectors by l matrices from a random sequence.

3. Implement a Gram-Schmidt orthogonalization. Store the lengths of the new vectors, d_k^{(i)}, and normalize them to unity.

4. Multiply by the next l matrices and continue the procedure.

The LEs are then obtained from

γ_i = (1/(n l)) Σ_{k=1}^{n} ln d_k^{(i)}.  (6)
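The steps above can be sketched for a quasi-1D Anderson strip with t = 1, where h_{n,n+1} is the identity and the TM of Equation 4 simplifies. This is our own minimal implementation (function and parameter names are ours; open transverse boundaries are used for simplicity, whereas the article assumes periodic ones), using QR factorization for the Gram-Schmidt step:

```python
import numpy as np

def lyapunov_exponents(M, W, e=0.0, n=2000, l=8, seed=1):
    """LEs of an Anderson strip of width M (D = 2M) via Benettin's procedure."""
    rng = np.random.default_rng(seed)
    D = 2 * M
    # transverse hopping within one slice (open boundaries across the strip)
    h_perp = np.diag(np.ones(M - 1), 1) + np.diag(np.ones(M - 1), -1)
    Q = np.eye(D)                  # D orthonormal vectors v_i
    log_d = np.zeros(D)            # accumulated ln d_k^(i)
    for _ in range(n):
        for _ in range(l):
            h_n = np.diag(rng.uniform(-W / 2, W / 2, M)) + h_perp
            T = np.zeros((D, D))   # Eq. 4 with h_{n,n+1} = identity (t = 1)
            T[:M, :M] = e * np.eye(M) - h_n
            T[:M, M:] = -np.eye(M)
            T[M:, :M] = np.eye(M)
            Q = T @ Q
        Q, R = np.linalg.qr(Q)     # Gram-Schmidt step; |R_ii| are the lengths
        log_d += np.log(np.abs(np.diag(R)))
    return log_d / (n * l)         # Eq. 6

g = lyapunov_exponents(M=5, W=8.0)
xi_M = 1.0 / g[g > 0].min()        # localization length of the strip, Eq. 8
```

For such Hamiltonian transfer matrices, the LEs should come out in ±γ pairs, and the smallest positive one gives the quasi-1D localization length of Equation 8.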

By increasing n, the results approach the exact values, but for finite n they fluctuate with a variance proportional to 1/n. The required bar length to reach a relative error Δγ/γ = ε is

L_convergence = c M / ε².  (7)

The proportionality factor c is of the order of unity and depends on the eigenvalue e and the lattice structure. By decreasing the disorder strength, the LEs tend to zero and the relative fluctuations become large, which leads to a large relative error. Such a computation doesn't need much memory, but the convergence slows down rapidly in the weak-disorder limit; thus, with increasing system size, the method requires a long computation time. The localization length is defined as the inverse of the minimum positive LE:

ξ_M = 1/γ_min.  (8)

Finite-Size Scaling and the Thermodynamic Limit

With the method described so far, we can compute the localization length for quasi-1D systems with size L >> M. But it's not possible to increase the size M in the transverse direction enough to model 3D systems directly. Therefore, we need to extrapolate to that limit from the information computed with small sizes. The criterion for localization of a state in the 3D limit is

lim_{M→∞} ξ_M / M = 0.

For an extended state, this ratio grows with M; in between, at the mobility edge, the limit has a finite value. The corresponding state is called the critical state.

According to the scaling hypothesis, the dimensionless quantity Λ = ξ_M / M depends only on ξ(a)/M, not on the parameters and M separately:

Λ = f(ξ(a)/M),  (9)

where a denotes the eigenvalue (usually, energy or frequency) or the strength of disorder. ξ(a) is a characteristic length scale that, on the localized side, is the localization length of the infinite system. It's also possible to show that on approaching the mobility edge, ξ should diverge as

ξ ∝ |a − a_c|^{−ν},  (10)

where ν is the localization length exponent.

In practice, we need to implement a finite-size scaling analysis. To do this, we calculate Λ for several values of M and the parameter a; the corresponding curves cross each other at the mobility edge. We take one set of data, Λ versus 1/M for a given a, as the reference set and, by choosing the appropriate value of the scale ξ(a) for the horizontal axis of the other sets, we try to superpose the overlapping regions. In this way, we obtain ξ(a) up to a constant factor. The remaining factor can be fixed using the fact that, for large values of the disorder strength, we have

Λ ≈ ξ / M.  (11)

The most appropriate way to do this is to use the minimum residual test for the data collapse.6 Another way is to use a polynomial expansion of Equation 9 in the vicinity of the mobility edge and perform a data fit, which will give us the divergence exponent ν and the critical parameter a_c:

Λ = Σ_{i=0}^{n} g_i χ^i M^{i/ν},  (12)

χ = Σ_{i=1}^{m} b_i a_r^i,  (13)

a_r = (a − a_c) / a_c.  (14)
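To illustrate the fitting step, here's a toy sketch of our own: we generate synthetic Λ(M, a) data from the expansion in Equations 12-14 (with m = 1, so χ = a_r) at chosen "true" values of a_c and ν, then recover them with a least-squares fit. The parameter values (a_c = 16.5, ν = 1.57, the g_i coefficients, and the noise level) are purely illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(X, a_c, nu, g0, g1, g2):
    """Eqs. 12-14 with m = 1 (chi = a_r): Lambda = sum_i g_i (a_r M^(1/nu))^i."""
    M, a = X
    u = (a - a_c) / a_c * M ** (1.0 / nu)
    return g0 + g1 * u + g2 * u ** 2

# synthetic Lambda(M, a) data at "true" parameters
Ms = np.repeat([8, 12, 16], 13).astype(float)
As = np.tile(np.linspace(15.0, 18.0, 13), 3)
Lam = scaling_law((Ms, As), 16.5, 1.57, 0.58, -1.0, 0.3)
Lam += np.random.default_rng(0).normal(0, 1e-3, Lam.size)  # small noise

popt, _ = curve_fit(scaling_law, (Ms, As), Lam,
                    p0=[16.0, 1.5, 0.5, -0.8, 0.2])
a_c_fit, nu_fit = popt[:2]
```

In a real analysis, Λ would of course come from TM runs, and error bars on Λ would enter the fit as weights.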


Several sets of numerical results support these predictions for different models, ranging from the Anderson model6 for an electron on the square lattice, to localization of elastic waves8 in a medium with random Lamé coefficients, and electromagnetic wave propagation and localization.9

We can run the same analysis10 using the higher-order LEs, which converges rapidly; of course, the finite-size effects are then also larger.

Conductance

According to Mott, the conductance must be zero on the localized side and finite on the metallic side of the mobility edge; we can thus use it to characterize the mobility edge. Using the formalism developed by Rolf Landauer,11 we can calculate the conductance of a disordered sample. To do this, we assume the sample is connected to semi-infinite perfect leads on two sides. The dimensionless conductance is given by the multichannel Landauer formula:

G = tr T†T,  (15)

where T is the transmission matrix whose element T_ij is the transmission amplitude from channel i to channel j. The transmission matrix relates propagating waves on the sample's left and right sides, instead of the wave functions in the site representation. We can express it in terms of the TM as

T^{−1} = U_left P_L U_right,  (16)

where U_left and U_right are D × 2D and 2D × D matrices containing the lead eigenvectors of left- and right-moving waves, which must be computed by diagonalizing the leads' TM. For the numerical implementation, and to avoid numerical instability, John Pendry and his colleagues offer a useful procedure.12 Suppose

P_L U_right = ( Y_{1L} ; Y_{2L} ),  (17)

where Y_{1L} and Y_{2L} denote the upper and lower halves of the result. The procedure is:

1. Construct the matrix U_right.

2. Multiply it by l matrices from the sequence of transfer matrices P_n.

3. Multiply the result from the right by the inverse of its top half, Y_{1L}^{−1}, and store the product of these inverses separately.

4. Multiply by the next l matrices and continue; T is finally calculated from U_left and the stored product of the Y_{1L}^{−1} factors.

The calculated conductance for a finite sample will be a random number, depending on the realization of the disorder. We can study the ensemble-averaged conductance, like Λ, as a scaling variable to characterize the localization properties of the states. An accurate calculation based on this approach is given elsewhere13 for the 3D Anderson model.

Apart from the average value of the conductance G, its distribution function P(G) is itself scale-invariant at the mobility edge and a nonanalytic function of G.14

Statistics of Eigenvalues

Statistics of a quantum system's energy levels were first studied by Eugene P. Wigner,15 who developed random matrix theory. He suggested that the complicated Hamiltonian H of a nucleus of many particles with complex interactions can be replaced by a random matrix whose elements are drawn from a random distribution function. Later, the substantial contributions of Freeman J. Dyson and Madan L. Mehta led to a classification of random matrices according to the Hamiltonian's symmetries. Three main ensembles were identified, the Gaussian orthogonal ensemble (GOE), the Gaussian unitary ensemble (GUE), and the Gaussian symplectic ensemble (GSE), known collectively as the Wigner-Dyson ensembles. Lev Petrovich Gor'kov and Gerasim M. Eliashberg suggested that these results can be applied to electrons in a random potential,16 and Konstantin Efetov offered proof.17

Level-Spacing Distribution

The distribution of the spacing s between adjacent levels is a well-studied quantity.

If we have an N × N Hamiltonian matrix for a 3D sample of linear size L, we can calculate N = L³ energies e_1, e_2, e_3, …, e_N. The level spacing is introduced as

s_i = (e_{i+1} − e_i) / ⟨e_{i+1} − e_i⟩ = (e_{i+1} − e_i) / Δ(e),  (18)

where the average is over different realizations of the disorder.


In the case of localized states, where the overlap between states is small, levels can lie close together, and the level spacings have an almost Poisson distribution:

P(s) = exp(-s). (19)

On the opposite side, where the states are extended and have large overlap, the levels repel each other and the spacing distribution follows Wigner-Dyson statistics. Based on the invariance of the Hamiltonian under time reversal and spin rotation, there are three main classes, for which the distribution function P(s) is as follows:18

P(s) = (π/2) s exp(−πs²/4)   GOE,  (20)

P(s) = (32/π²) s² exp(−4s²/π)   GUE, and  (21)

P(s) = (2^18/(3^6 π^3)) s^4 exp(−64s²/(9π))   GSE.  (22)

For small s, these have the power-law form

P(s) ∝ s^β,  (23)

with β = 1, 2, and 4 for GOE, GUE, and GSE, respectively. This power law is caused by the level repulsion of the extended states (see Figure 2); the difference in the power β is due to the different symmetries. Figure 3 shows the level-spacing densities for the different Wigner-Dyson distributions.

In an interval of energy [e, e + δe] that contains a few levels, the mean level spacing Δ(e) is calculated by averaging over different realizations of the disorder and over all spacings in the interval. Spacings in each interval should be scaled with its own Δ(e). The distribution function of the scaled level spacing, P(s), can be obtained for the whole spectrum or part of it. According to the definition of s_i in Equation 18, we always have s_i = 1 for an ordered system; the distribution function is therefore P(s) = δ(s − 1). By increasing the disorder's intensity, the peak of the Dirac delta broadens.
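As an illustration of these statistics, the following sketch (our own, not from the article) samples GOE matrices, scales each spacing by a local mean spacing Δ(e) as just described, and checks the result against the Wigner surmise, whose variance is 4/π − 1 ≈ 0.273:

```python
import numpy as np

def goe_scaled_spacings(n_dim=800, n_real=8, window=10, seed=0):
    """Nearest-neighbor spacings of GOE matrices, each divided by a running
    local mean spacing (a simple stand-in for spectral unfolding)."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_real):
        A = rng.normal(size=(n_dim, n_dim))
        e = np.linalg.eigvalsh((A + A.T) / np.sqrt(2))
        e = e[n_dim // 4: -n_dim // 4]        # keep the bulk of the spectrum
        d = np.diff(e)
        # local mean spacing Delta(e): running average over 2*window+1 spacings
        kernel = np.ones(2 * window + 1) / (2 * window + 1)
        local = np.convolve(d, kernel, mode='valid')
        out.append(d[window:-window] / local)
    return np.concatenate(out)

s = goe_scaled_spacings()
```

By construction ⟨s⟩ ≈ 1, the variance sits near the GOE value, and very small spacings are strongly suppressed (level repulsion), in contrast to the Poisson case.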

Again, the finite-size effects are important. As long as the system size is comparable with the localization length, the eigenfunctions have large overlap and thus the level repulsion is large. In the localized regime, by increasing the system size we reach L >> ξ, where the level repulsion is suppressed and P(s) becomes a Poisson distribution. The crossover between the Wigner distribution (GOE) and the Poisson distribution for the 2D Anderson model is discussed elsewhere.19 (According to the TM calculations, the 2D Anderson model should always be localized.)

At the critical point, the distribution function changes from the Wigner-Dyson to the Poisson distribution, and it is scale invariant there. Various forms have been suggested for the critical distribution. One that's in good agreement with the spectrum at the Anderson model's critical point is the semi-Poisson distribution, P(s) = 4s e^(−2s). This function behaves like the Wigner-Dyson distribution for small s and like the Poisson distribution for large s.

The Anderson model without spin and magnetic field follows the GOE statistics. Elastic and electromagnetic waves are also in this universality class and have the same statistics.8,9

Level-Number Variance

Another useful quantity is the variance of the number of levels,

Σ²(l) = ⟨N²(l)⟩ − ⟨N(l)⟩²,  (24)

[Figure 3. Level-spacing distributions P(s) for the different Wigner-Dyson classes (GOE, β = 1; GUE, β = 2; GSE, β = 4) and for random (Poisson) levels, which are expected in the metallic and localized regimes, respectively.]

[Figure 2. Typical sequence of energy levels of the 3D Anderson model in the metal, critical, and insulator phases.]


where N(l) is the number of levels in a given interval l. Here, l is the width of the energy interval, normalized by the average level spacing.18 An analytic expression for this quantity is known from random matrix theory. In the metallic (conducting or extended) regime, for the different symmetries, we have

Σ²(l) = (2/π²) [ln(2πl) + γ + 1 − π²/8] + O(1/l)   GOE,  (25)

Σ²(l) = (1/π²) [ln(2πl) + γ + 1] + O(1/l)   GUE, and  (26)

Σ²(l) = (1/(2π²)) [ln(4πl) + γ + 1 + π²/8] + O(1/l)   GSE,  (27)

where γ = 0.5772… is the Euler-Mascheroni constant. For the localized and critical cases,

Σ²(l) = l   Poisson, and  (28)

Σ²(l) = l/2 + (1 − e^(−4l))/8   semi-Poisson.  (29)

Figure 4 presents some of these functions.

Finite-Size Scaling

The appropriate quantity for a finite-size scaling analysis is the variance of the distribution P(s):

σ_s² = ⟨s²⟩ − ⟨s⟩² = ∫₀^∞ s² P(s) ds − 1.  (30)

By increasing the system's volume, the variance approaches that of the Poisson distribution, σ_s² = 1, for the localized states, and the variance of the Wigner-Dyson (GOE) distribution, σ_s² = 4/π − 1, for the delocalized states. The intermediate states have a different, scale-invariant variance that's the signature of the localization-delocalization transition. To calculate the critical disorder or critical energy a_c and the critical exponent ν, σ_s² is expressed in the one-parameter scaling form σ_s²(L, a) = g(χ(a) L^(1/ν)) and is expanded as in Equation 13.

Diagonalization

The Hamiltonian matrices that we deal with have the advantage of being sparse. Several computer packages are available to calculate their eigenvalues and eigenvectors. Lanczos and Jacobi-Davidson are suitable methods for obtaining the eigenvectors of sparse matrices close to a given energy.20

For full diagonalization of band matrices in which the nonzero bands are not too far from the principal diagonal, LAPACK is a useful package. Researchers have also developed parallel versions of the Lanczos and TM methods to study the Anderson localization problem.21
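For instance, SciPy exposes ARPACK's shift-invert Lanczos mode, which returns the eigenpairs nearest a target energy σ. The chain size and disorder below are illustrative choices of ours:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Small 1D Anderson chain as a sparse matrix (illustrative parameters)
rng = np.random.default_rng(0)
N, W = 2000, 1.0
H = sp.diags([rng.uniform(-W / 2, W / 2, N), np.ones(N - 1), np.ones(N - 1)],
             [0, 1, -1], format='csc')

# Shift-invert Lanczos: the k eigenpairs closest to energy sigma = 0
vals, vecs = eigsh(H, k=5, sigma=0.0, which='LM')
```

Shift-invert is exactly what's needed for mobility-edge studies, where one wants states near a prescribed energy rather than at the spectrum's edges.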

Statistics of Eigenfunctions

We'll now describe the computation of another important set of statistics: that of the eigenfunctions.

Participation Ratio

A measure of the number of lattice sites that contribute to a single eigenstate is given by the participation ratio (PR):

PR = (Σ_n |Ψ_n|²)² / Σ_n |Ψ_n|⁴.  (31)

For a wave function localized at a single site (say, the origin, Ψ_n = δ_{n,0}), we obtain PR = 1, whereas for an extended plane wave, for which Ψ_n = 1/√N, we have PR = N, where N ∝ L^d for a d-dimensional system of linear size L. In a finite system and in the presence of randomness, we need to study the scaling of the PR with the system size. In general, we expect the following scaling form for the ensemble-averaged PR:

⟨PR⟩ ∝ N^κ,  (32)

where κ = 0 for localized states and κ = 1 for extended ones. For a localized state, we might obtain nonzero κ at small system sizes, but the PR should saturate to ⟨PR⟩ ∝ ξ^d at larger system sizes, with ξ being the localization length. At the mobility edge, the scaling22 holds with a nontrivial exponent 0 < κ < 1, implying a fractal structure of the wave function at the critical point.
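A quick numerical check of this behavior on a 1D Anderson chain, where all states are localized for any W > 0 (the function name and parameters are ours):

```python
import numpy as np

def participation_ratios(N, W, seed=0):
    """PR (Eq. 31) of all eigenstates of a 1D Anderson chain of N sites."""
    rng = np.random.default_rng(seed)
    H = np.diag(rng.uniform(-W / 2, W / 2, N)) \
        + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
    _, vecs = np.linalg.eigh(H)
    psi2 = vecs ** 2                      # |Psi_n|^2; columns are normalized
    return 1.0 / (psi2 ** 2).sum(axis=0)  # (sum psi^2)^2 / sum psi^4

pr_clean = participation_ratios(400, W=0.0)   # extended states: PR ~ O(N)
pr_disor = participation_ratios(400, W=4.0)   # localized states: PR ~ O(xi)
```

For the clean chain the standing waves give PR of order 2N/3, while at W = 4 the PR saturates at a few sites, independent of N.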

Usually, the eigenstates are assumed to be normalized, and the inverse of the PR, which is simply given by the moments of the wave functions, is used in numerical simulations. So, the generalized

[Figure 4. Level-number variance Σ²(l) as a function of the width l of the energy window, scaled with the average level spacing, for the Poisson, semi-Poisson, and Gaussian orthogonal ensemble cases.]


inverse PR is defined through the higher moments of the wave function:

I_q = Σ_n |Ψ_n|^(2q).  (33)

For extended states, I_q ∝ L^(d(1−q)), while for localized states, I_q ∝ ξ^(d(1−q)). At the mobility edge, we have I_q ∝ L^(d_q(1−q)), where d_q < d and depends on q. This is the generic feature of a multifractal measure, which appears frequently in critical phenomena.

Finite-size scaling based on such quantities is possible and yields the localization transition's critical exponents. But the analysis is rather difficult compared to computing the LE and the mean conductance, because even at the critical point these quantities aren't invariant; they depend on the system size. The averaged logarithm of the inverse PR for finite sizes scales as

⟨ln I_q⟩ = a + d_q(1 − q) ln L + b (W − W_c) L^(1/ν).  (34)

Researchers have calculated the critical parameters for the 3D Anderson model based on this scaling ansatz.23

Generalized Multifractality

The concept of multifractality can be extended away from the critical point.22 In that case, it's possible to locate the critical point a_c and compute the critical parameter ν and the multifractal exponents by examining the wave functions' distribution, without using other calculations.

A multifractal measure (here, μ_b = Σ_{n∈Ω_b} |Ψ_n|², with Ω_b a box of size l) is characterized by an infinite set of exponents. The mass exponent governs the scaling of the inverse PR, ⟨I_q⟩ ∝ L^(−τ_q), from which the generalized dimension is defined by τ_q = (q − 1)d_q. By the change of variable α(q) = dτ(q)/dq and a Legendre transform, it's related to the singularity spectrum, f(α(q)) = α(q)q − τ(q).

We can directly calculate the exponents in the critical region using α̃ = ln μ_b / ln λ and τ̃_q = ln⟨R_q⟩ / ln λ, with R_q = Σ_b μ_b^q and λ = l/L. For finite sizes and away from the critical point, these exponents depend on the system size. By increasing the size, for an extended state τ_q → (q − 1)d, whereas for a localized state τ_q → 0; at the mobility edge they're invariant, τ_q = (q − 1)d_q. The distribution functions are also invariant at the critical point.

Local Density of States: Kernel Polynomial Method

According to the Landau theory of phase transitions, the transition can be characterized by an order parameter: a quantity that is zero on one side of the transition and nonzero on the other. One candidate order parameter for the Anderson transition is the geometric average of the local density of states (LDOS). The density of states describes the number of states per energy interval available to be occupied at each energy level; the local density of states is the same quantity defined at each lattice point. It's used as a measure of localization, especially in interacting many-particle problems. It's given in terms of the eigenvalues and eigenstates as

ρ_i(e) = Σ_ν |Ψ_ν(i)|² δ(e − e_ν),  (35)

that is, the amplitude of the wave function at site i at energy e. Once the eigenstates and eigenvalues are obtained from diagonalization, we can calculate ρ. The most efficient method for calculating ρ for relatively large system sizes is the kernel polynomial method (KPM), which was recently reviewed in great detail, with applications to localization.24 KPM is based on expanding the function in a finite number of Chebyshev polynomials. To remove the truncation's effect, the result is convolved with a kernel, such as the Jackson kernel,24 which is appropriate for most cases.

The method's accuracy is proportional to the number of Chebyshev polynomials (M). In the study of Anderson localization, two parameters were investigated: ρav and ρtyp, which are the arithmetic and geometric averages of ρi:

ρav(e) = (1/KrKs) ∑_{k=1}^{Kr} ∑_{i=1}^{Ks} ρi(e),  (36)

and

ρtyp(e) = exp[(1/KrKs) ∑_{k=1}^{Kr} ∑_{i=1}^{Ks} ln ρi(e)],  (37)

where Kr is the number of samples and Ks the number of randomly chosen sites, which is usually much smaller than the total number of sites. The sum over k is actually a statistical average over different realizations of the disorder. ρav is the standard density of states for large enough Kr and Ks. The ratio of the two averages, ρtyp(e)/ρav(e), vanishes for the localized states. Therefore, it can be a measure of localization.
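The two averages in Equations 36 and 37 are simple to compute once the ρi(e) values are in hand. A minimal sketch (with hypothetical log-normally distributed LDOS values standing in for real KPM output) shows why the ratio ρtyp/ρav is a useful localization marker:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy LDOS values at Ks sites for Kr disorder realizations (hypothetical
# numbers; in practice rho_i(e) comes from the KPM expansion).
rho = rng.lognormal(mean=0.0, sigma=2.0, size=(100, 50))  # Kr x Ks

rho_av = rho.mean()                     # arithmetic average, Eq. (36)
rho_typ = np.exp(np.log(rho).mean())    # geometric average, Eq. (37)

# For the broad distributions typical of localized states, the geometric
# mean falls far below the arithmetic one, so the ratio tends to zero.
ratio = rho_typ / rho_av
```

By the arithmetic-geometric mean inequality, ρtyp never exceeds ρav; the ratio approaches 1 only when the LDOS distribution is narrow, as for extended states.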

We briefly explain the KPM formulation as follows. Because the Chebyshev polynomials are defined on the interval [−1, 1], we must rescale the Hamiltonian matrix H at the beginning of the computation: H̃ = (H − b)/a and ẽ = (e − b)/a, where a = (emax − emin)/(2 − ε) and b = (emax + emin)/2, with emin and emax being the extremum energies of the Hamiltonian, which can be calculated by, for example, the Lanczos algorithm. ε is a small cutoff introduced to avoid stability problems that arise if the spectrum exceeds the interval's boundaries [−1, 1]. The approximate LDOS can be expanded in Chebyshev polynomials as

ρ̃i(x) = (1/(π√(1 − x²))) [g0μ0 + 2 ∑_{n=1}^{M−1} gn μn Tn(x)],  (38)

with the Jackson factors

gn = [(M − n + 1) cos(nφ) + sin(nφ)/tan(φ)] / (M + 1),

where φ = π/(M + 1). The Chebyshev polynomials satisfy the recursion Tn+1(x) = 2xTn(x) − Tn−1(x), with T0(x) = 1 and T1(x) = x. The moments μn are given by

μn = ⟨i| Tn(H̃) |i⟩.  (39)

Ultimately, the energies should be scaled back. The critical disorder strength of the 3D Anderson model24 and of electromagnetic wave localization9 obtained with this approach is in agreement with the TM method's results.
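A minimal Python sketch of the KPM reconstruction in Equations 38 and 39 (the function name and the clean-chain test case are our own; production codes use sparse matrix-vector products and many more moments):

```python
import numpy as np

def kpm_ldos(H_tilde, site, M, x):
    """LDOS at one site via KPM with the Jackson kernel, following
    Eqs. (38)-(39); assumes H_tilde is already rescaled into [-1, 1]."""
    N = H_tilde.shape[0]
    # Chebyshev recursion applied to |i>: T_{n+1} = 2 H T_n - T_{n-1}.
    v0 = np.zeros(N); v0[site] = 1.0
    t_prev, t_curr = v0, H_tilde @ v0
    mu = np.empty(M)
    mu[0], mu[1] = v0 @ t_prev, v0 @ t_curr   # moments mu_n = <i|T_n|i>
    for n in range(2, M):
        t_prev, t_curr = t_curr, 2.0 * (H_tilde @ t_curr) - t_prev
        mu[n] = v0 @ t_curr
    # Jackson damping factors g_n, with phi = pi/(M + 1).
    phi = np.pi / (M + 1)
    n = np.arange(M)
    g = ((M - n + 1) * np.cos(n * phi)
         + np.sin(n * phi) / np.tan(phi)) / (M + 1)
    # rho(x) = [g0 mu0 + 2 sum g_n mu_n T_n(x)] / (pi sqrt(1 - x^2)),
    # using T_n(x) = cos(n arccos x).
    Tn = np.cos(np.arange(M)[:, None] * np.arccos(x)[None, :])
    series = g[0] * mu[0] \
        + 2.0 * (g[1:, None] * mu[1:, None] * Tn[1:]).sum(axis=0)
    return series / (np.pi * np.sqrt(1.0 - x**2))

# Usage: LDOS at the center of a clean 64-site chain (spectrum in
# [-2, 2], rescaled by 2.1 so it fits inside [-1, 1]).
N = 64
H = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
x = np.linspace(-0.99, 0.99, 2001)
rho = kpm_ldos(H / 2.1, N // 2, M=200, x=x)
# The Jackson kernel keeps rho nonnegative, and it integrates to ~1.
```

The Jackson factors guarantee a nonnegative, Gibbs-oscillation-free result, which is why this kernel is the default choice for densities of states.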

The KPM calculation is based on sparse matrices. The time and memory used scale as N for sparse matrices and as N² otherwise, where N is the size of the Hamiltonian matrix. For 3D samples of size L, N equals L³ for the Anderson model. In full diagonalization with the IMSL library or the Lapack routines, dense and banded matrices are handled, and the sparsity of the tight-binding matrices isn't exploited. In the routines EVCSF (in the International Mathematics and Statistics Library package) and DSYEV (in the Lapack package), which compute all the eigenvalues and eigenvectors of real symmetric matrices, the memory storage scales as N² and the number of operations and the computation time scale as N³.

Although for band matrices only the nonzero diagonals are stored, there's no advantage compared to full matrices. On a personal computer with 1 Gbyte of RAM and a 2-GHz CPU, the typical density of states of samples of size 100³ is easy to calculate using KPM, whereas with full diagonalization only 3D samples with sizes of about 30³ are feasible. Moreover, the Lanczos method is an algorithm for calculating the eigenstates of sparse matrices; in contrast to the stable KPM, however, Lanczos isn't stable, because loss of orthogonality in the recursion produces spurious copies of already-converged states. The algorithm is an excellent tool for calculating extremal eigenstates of large sparse matrices, but KPM is better for spectral densities.
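For instance, SciPy's eigsh routine, a Lanczos-type solver for sparse symmetric matrices, can supply the extremal energies emin and emax needed for the rescaling H̃ = (H − b)/a described earlier (a sketch; the 1D chain and the parameter values are our own choices):

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import eigsh

# 1D Anderson Hamiltonian: random on-site energies, unit hopping
# (illustrative; the article's 3D model is built the same way).
N, W = 500, 2.0
rng = np.random.default_rng(1)
eps = rng.uniform(-W / 2, W / 2, N)
H = diags([np.ones(N - 1), eps, np.ones(N - 1)], [-1, 0, 1], format="csr")

# Extremal energies via the Lanczos algorithm (smallest/largest algebraic).
e_min = eigsh(H, k=1, which="SA", return_eigenvectors=False)[0]
e_max = eigsh(H, k=1, which="LA", return_eigenvectors=False)[0]

# Rescale so the spectrum fits inside [-1, 1] with a small cutoff epsilon.
epsilon = 0.01
a = (e_max - e_min) / (2 - epsilon)
b = (e_max + e_min) / 2
H_tilde = (H - b * identity(N, format="csr")) / a
```

Only extremal eigenvalues are needed here, which is exactly the regime where Lanczos iteration is cheap and reliable.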

Time-Dependent Simulation
To check Anderson's original idea, we must do time-dependent simulations—that is, numerical simulation of the dynamical equations, such as the time-dependent Schrödinger equation and the elastic and electromagnetic wave equations. In particular, in the presence of nonlinearities, this approach is inevitable, because most of the approaches discussed above require linear equations. The most suitable quantity is the width of a wave packet that is initially localized at the origin:

⟨r²⟩(t) = ∑r r² |Ψ(r, t)|²,  (40)

where r is the displacement. There are several regimes of motion in which this quantity behaves differently as a function of time. In the absence of disorder, the motion is ballistic, ⟨r²⟩(t) ∝ t². There's a diffusive regime, ⟨r²⟩(t) ∝ t, and a subdiffusive regime, ⟨r²⟩(t) ∝ t^a, with 0 < a < 1. In the localized regime, diffusion stops and ⟨r²⟩(t) approaches a constant.
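These regimes can be checked in a small simulation: propagating an initially localized packet on a 1D disordered chain by exact diagonalization and evaluating Equation 40 (a sketch; the chain length, disorder strengths, and times are our own choices):

```python
import numpy as np

def spread(W, N=400, times=(5.0, 10.0, 20.0), seed=2):
    """<r^2>(t) for a packet started at the chain center, Eq. (40).
    Propagation by exact diagonalization of a 1D tight-binding chain
    (illustrative; large 2D/3D runs use time-stepping instead)."""
    rng = np.random.default_rng(seed)
    H = np.diag(rng.uniform(-W / 2, W / 2, N)) \
        + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
    e, U = np.linalg.eigh(H)
    psi0 = np.zeros(N); psi0[N // 2] = 1.0
    c = U.T @ psi0                       # expand |psi0> in eigenstates
    r = np.arange(N) - N // 2
    out = []
    for t in times:
        psi_t = U @ (np.exp(-1j * e * t) * c)   # exact time evolution
        out.append(float((r**2 * np.abs(psi_t)**2).sum()))
    return out

# Without disorder the spreading is ballistic, <r^2> ~ t^2; with strong
# disorder it saturates at a value set by the localization length.
clean = spread(W=0.0)
strong = spread(W=10.0)
```

Doubling t from 10 to 20 roughly quadruples ⟨r²⟩ in the clean case, while the strongly disordered packet stays pinned near its starting site.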

More than 50 years after the birth of Anderson localization, research in this field is still active. Open questions remain; in particular, the role of interactions still isn't well understood. It's an important issue because experiments in 2D show that interactions between electrons might reverse the conclusion of Abrahams, Anderson, and their colleagues,3 and there might be a metallic phase in 2D. Another remarkable area is the experimental observation of the localization of classical waves, which several groups are now working on. The most challenging issue in such experiments is absorption.

References
1. P.W. Anderson, "Absence of Diffusion in Certain Random Lattices," Physical Rev., vol. 109, no. 5, 1958, pp. 1492–1505.
2. M. Sahimi, Heterogeneous Materials, vol. 1, Springer-Verlag, 2003.
3. E. Abrahams et al., "Scaling Theory of Localization: Absence of Quantum Diffusion in Two Dimensions," Physical Rev. Letters, vol. 42, no. 10, 1979, pp. 673–676.
4. N.F. Mott, "The Electrical Properties of Liquid Mercury," Philosophical Magazine, vol. 13, no. 125, 1966, pp. 989–1014.
5. F. Wegner, "Electrons in Disordered Systems. Scaling near the Mobility Edge," Zeitschrift für Physik B Condensed Matter, vol. 25, no. 4, 1976, pp. 327–337.
6. A. MacKinnon and B. Kramer, "The Scaling Theory of Electrons in Disordered Solids: Additional Numerical Results," Zeitschrift für Physik B Condensed Matter, vol. 53, no. 1, 1983; doi:10.1007/BF01578242.
7. G. Benettin et al., "Lyapunov Characteristic Exponents for Smooth Dynamical Systems and for Hamiltonian Systems; A Method for Computing All of Them," Meccanica, vol. 15, no. 1, 1980, pp. 21–30.
8. R. Sepehrinia, M.R.R. Tabar, and M. Sahimi, "Numerical Simulation of the Localization of Elastic Waves in Two- and Three-Dimensional Heterogeneous Media," Physical Rev. B, vol. 78, no. 2, 2008; doi:10.1103/PhysRevB.78.024207.
9. A. Sheikhan, M.R.R. Tabar, and M. Sahimi, "Numerical Simulations of Localization of Electromagnetic Waves in Two- and Three-Dimensional Disordered Media," Physical Rev. B, vol. 80, no. 3, 2009; doi:10.1103/PhysRevB.80.035130.
10. P. Markos, "Metal-Insulator Transition in Three-Dimensional Anderson Model: Scaling of Higher Lyapunov Exponents," J. Physics A: Math. and General, vol. 33, no. 42, 2000; doi:10.1088/0305-4470/33/42/103.
11. R. Landauer, "Electrical Resistance of Disordered One-Dimensional Lattices," Philosophical Magazine, vol. 21, 1970, pp. 863–867.
12. J.B. Pendry, A. MacKinnon, and P.J. Roberts, "Universality Classes and Fluctuations in Disordered Systems," Proc. Royal Soc. London A, vol. 437, no. 1899, 1992, pp. 67–83.
13. K. Slevin, P. Markos, and T. Ohtsuki, "Reconciling Conductance Fluctuations and the Scaling Theory of Localization," Physical Rev. Letters, vol. 86, no. 16, 2001, pp. 3594–3597.
14. P. Markos, "Conductance Statistics near the Anderson Transition," Anderson Localization and Its Ramifications: Disorder, Phase Coherence and Electron Correlations, Springer-Verlag, 2003.
15. E.P. Wigner, "On a Class of Analytic Functions from the Quantum Theory of Collisions," Annals Math., vol. 53, no. 1, 1951, pp. 36–67.
16. L.P. Gorkov and G.M. Eliashberg, "Minute Metallic Particles in an Electromagnetic Field," J. Experimental and Theoretical Physics, vol. 21, 1965, pp. 940–947.
17. K.B. Efetov, "Supersymmetry and Theory of Disordered Metals," Advances in Physics, vol. 32, no. 1, 1983, pp. 53–127.
18. M.L. Mehta, Random Matrices, Academic, 1991.
19. I.K. Zharekeshev, M. Batsch, and B. Kramer, "Crossover of Level Statistics Between Strong and Weak Localization in Two Dimensions," Europhysics Letters, vol. 34, no. 8, 1996, pp. 587–592.
20. G.W. Stewart, Matrix Algorithms, vol. 2, SIAM, 2001.
21. P. Cain et al., "Use of Cluster Computing for the Anderson Model of Localization," Computer Physics Comm., vol. 147, nos. 1-2, 2002, pp. 246–250.
22. A. Rodriguez et al., "Critical Parameters from a Generalized Multifractal Analysis at the Anderson Transition," Physical Rev. Letters, vol. 105, no. 4, 2010; doi:10.1103/PhysRevLett.105.046403.
23. J. Brndiar and P. Markoš, "Universality of the Metal-Insulator Transition in Three-Dimensional Disordered Systems," Physical Rev. B, vol. 74, no. 15, 2006; doi:10.1103/PhysRevB.74.153103.
24. A. Weiße et al., "The Kernel Polynomial Method," Rev. Modern Physics, vol. 78, no. 1, 2006; doi:10.1103/RevModPhys.78.275.

Reza Sepehrinia is a postdoctoral fellow at the Institute for Research in Fundamental Sciences (IPM), Tehran, Iran. His research interests include disordered systems and Anderson localization. Sepehrinia has a PhD in physics from Sharif University of Technology, Tehran. Contact him at [email protected].

Ameneh Sheikhan is a postdoctoral fellow at the National Institute for Theoretical Physics in Stellenbosch, South Africa. Her research interests include Anderson localization, light localization, disordered systems, and coherent backscattering of electromagnetic waves from random media. Sheikhan has a PhD in physics from Sharif University of Technology, Tehran. Contact her at [email protected].
