
Palgrave Handbook of Econometrics
Volume 2: Applied Econometrics

Edited By

Terence C. Mills

and

Kerry Patterson


Editorial and selection matter © Terence C. Mills and Kerry Patterson 2009
Individual chapters © Contributors 2009

All rights reserved. No reproduction, copy or transmission of this publication may be made without written permission.

No paragraph of this publication may be reproduced, copied or transmitted save with written permission or in accordance with the provisions of the Copyright, Designs and Patents Act 1988, or under the terms of any licence permitting limited copying issued by the Copyright Licensing Agency, Saffron House, 6–10 Kirby Street, London EC1N 8TS.

Any person who does any unauthorised act in relation to this publication may be liable to criminal prosecution and civil claims for damages.

The author has asserted his right to be identified as the author of this work in accordance with the Copyright, Designs and Patents Act 1988.

First published in 2009 by
PALGRAVE MACMILLAN

PALGRAVE MACMILLAN in the UK is an imprint of Macmillan Publishers Limited, registered in England, company number 785998, of Houndmills, Basingstoke, Hampshire RG21 6XS.

PALGRAVE MACMILLAN in the US is a division of St Martin’s Press LLC, 175 Fifth Avenue, New York, NY 10010.

PALGRAVE MACMILLAN is the global academic imprint of the above companies and has companies and representatives throughout the world.

Palgrave® and Macmillan® are registered trademarks in the United States, the United Kingdom, Europe and other countries.

This book is printed on paper suitable for recycling and made from fully managed and sustained forest sources. Logging, pulping and manufacturing processes are expected to conform to the environmental regulations of the country of origin.

A catalogue record for this book is available from the British Library.

A catalog record for this book is available from the Library of Congress.

10 9 8 7 6 5 4 3 2 1
18 17 16 15 14 13 12 11 10 09

ISBN 978-1-4039-1800-0
ISBN 978-0-230-24440-5 (eBook)
DOI 10.1057/9780230244405


Contents

Notes on Contributors viii

Editors’ Introduction xi

Part I The Methodology and Philosophy of Applied Econometrics

1 The Methodology of Empirical Econometric Modeling: Applied Econometrics Through the Looking Glass 3
   David F. Hendry, Nuffield College, Oxford University

2 How Much Structure in Empirical Models? 68
   Fabio Canova, Universitat Pompeu Fabra

3 Introductory Remarks on Metastatistics for the Practically Minded Non-Bayesian Regression Runner 98
   John DiNardo, University of Michigan

Part II Forecasting

4 Forecast Combination and Encompassing 169
   Michael P. Clements, Warwick University, and David I. Harvey, School of Economics, University of Nottingham

5 Recent Developments in Density Forecasting 199
   Stephen G. Hall, University of Leicester, and James Mitchell, National Institute of Economic and Social Research

Part III Time Series Applications

6 Investigating Economic Trends and Cycles 243
   D.S.G. Pollock, University of Leicester

7 Economic Cycles: Asymmetries, Persistence, and Synchronization 308
   Joe Cardinale, Air Products and Chemicals, Inc., and Larry W. Taylor, College of Business and Economics, Lehigh University

8 The Long Swings Puzzle: What the Data Tell When Allowed to Speak Freely 349
   Katarina Juselius, University of Copenhagen

9 Structural Time Series Models for Business Cycle Analysis 385
   Tommaso Proietti, University of Rome ‘Tor Vergata’

10 Fractional Integration and Cointegration: An Overview and an Empirical Application 434
   Luis A. Gil-Alana and Javier Hualde, Universidad de Navarra


Part IV Cross-section and Panel Data Applications

11 Discrete Choice Modeling 473
   William Greene, Stern School of Business, New York University

12 Panel Data Methods and Applications to Health Economics 557
   Andrew M. Jones, University of York

13 Panel Methods to Test for Unit Roots and Cointegration 632
   Anindya Banerjee, University of Birmingham, and Martin Wagner, Institute for Advanced Studies, Vienna

Part V Microeconometrics

14 Microeconometrics: Current Methods and Some Recent Developments 729
   A. Colin Cameron, University of California, Davis

15 Computational Considerations in Empirical Microeconometrics: Selected Examples 775
   David T. Jacho-Chávez and Pravin K. Trivedi, Indiana University

Part VI Applications of Econometrics to Economic Policy

16 The Econometrics of Monetary Policy: An Overview 821
   Carlo Favero, IGIER-Bocconi University

17 Macroeconometric Modeling for Policy 851
   Gunnar Bårdsen, Norwegian University of Science and Technology, and Ragnar Nymoen, University of Oslo

18 Monetary Policy, Beliefs, Unemployment and Inflation: Evidence from the UK 917
   S.G.B. Henry, National Institute of Economic and Social Research

Part VII Applications to Financial Econometrics

19 Estimation of Continuous-Time Stochastic Volatility Models 951
   George Dotsis, Essex Business School, University of Essex, Raphael N. Markellos, Athens University of Economics and Business, and Terence C. Mills, Loughborough University

20 Testing the Martingale Hypothesis 972
   J. Carlos Escanciano, Indiana University, and Ignacio N. Lobato, Instituto Tecnológico Autónomo de México

21 Autoregressive Conditional Duration Models 1004
   Ruey S. Tsay, Booth School of Business, University of Chicago

22 The Econometrics of Exchange Rates 1025
   Efthymios G. Pavlidis, Ivan Paya, and David A. Peel, Lancaster University Management School


Part VIII Growth Development Econometrics

23 The Econometrics of Convergence 1087
   Steven N. Durlauf, University of Wisconsin-Madison, Paul A. Johnson, Vassar College, New York State, and Jonathan R.W. Temple, Bristol University

24 The Methods of Growth Econometrics 1119
   Steven N. Durlauf, University of Wisconsin-Madison, Paul A. Johnson, Vassar College, New York State, and Jonathan R.W. Temple, Bristol University

25 The Econometrics of Finance and Growth 1180
   Thorsten Beck, European Banking Center, Tilburg University, and CEPR

Part IX Spatial Econometrics

26 Spatial Hedonic Models 1213
   Luc Anselin, School of Geographical Sciences and Urban Planning, and Nancy Lozano-Gracia, GeoDa Center for Geospatial Analysis and Computation, Arizona State University

27 Spatial Analysis of Economic Convergence 1251
   Sergio J. Rey, Arizona State University, and Julie Le Gallo, Université de Franche-Comté

Part X Applied Econometrics and Computing

28 Testing Econometric Software 1293
   B.D. McCullough, Drexel University

29 Trends in Applied Econometrics Software Development 1985–2008: An Analysis of Journal of Applied Econometrics Research Articles, Software Reviews, Data and Code 1321
   Marius Ooms, VU University Amsterdam

Author Index 1349

Subject Index 1374


Notes on Contributors

Luc Anselin is Foundation Professor of Geographical Sciences and Director of the School of Geographical Sciences and Urban Planning at Arizona State University, USA.

Anindya Banerjee is Professor of Econometrics at the University of Birmingham, UK.

Gunnar Bårdsen is Professor of Economics at the Norwegian University of Science and Technology, Norway.

Thorsten Beck is Professor of Economics and Chair at the European Banking Center, Tilburg University and Research Fellow, CEPR.

Colin Cameron is Professor of Economics at the University of California, Davis, USA.

Fabio Canova is ICREA Research Professor in Social Science at Universitat Pompeu Fabra, Barcelona, Spain.

Joe Cardinale is a Manager, Economics at Air Products and Chemicals, Inc., USA.

Michael P. Clements is Professor of Economics at Warwick University, UK.

John DiNardo is Professor of Economics and Public Policy at the University of Michigan, Ann Arbor, USA.

George Dotsis is Lecturer in Finance at the Essex Business School, University of Essex, UK.

Steven N. Durlauf is Professor of Economics at the University of Wisconsin-Madison, USA.

Juan Carlos Escanciano is Assistant Professor of Economics at Indiana University, Bloomington, USA.

Carlo A. Favero is Professor of Economics at IGIER-Bocconi University, Italy.

Julie Le Gallo is Professor of Economics and Econometrics at the Université de Franche-Comté, France.

Luis A. Gil-Alana is Professor of Econometrics at the University of Navarra, Spain.

William Greene is Professor of Economics at the Stern School of Business, New York, USA.

Stephen G. Hall is Professor of Economics at University of Leicester, UK.

David I. Harvey is Reader in Econometrics at the School of Economics, University of Nottingham, UK.

David F. Hendry is Professor of Economics and Fellow, Nuffield College, Oxford University, UK.


Brian Henry is Visiting Fellow at the National Institute of Economic and Social Research, NIESR, UK.

Javier Hualde is Ramon y Cajal Research Fellow in Economics at the Public University of Navarra, Spain.

David Jacho-Chávez is Assistant Professor of Economics at Indiana University, Bloomington, USA.

Paul A. Johnson is Professor of Economics at Vassar College, New York State, USA.

Andrew M. Jones is Professor of Economics at the University of York, UK.

Katarina Juselius is Professor of Empirical Time Series Econometrics at the University of Copenhagen, Denmark.

Ignacio N. Lobato is Professor of Econometrics at the Instituto Tecnológico Autónomo de México, Mexico.

Nancy Lozano-Gracia is Postdoctoral Research Associate in the GeoDa Center for Geospatial Analysis and Computation at Arizona State University, USA.

Raphael N. Markellos is Assistant Professor of Quantitative Finance at the Athens University of Economics and Business (AUEB), Greece.

Bruce D. McCullough is Professor of Decision Sciences and Economics at Drexel University, Philadelphia, USA.

Terence C. Mills is Professor of Applied Statistics and Econometrics at Loughborough University, UK.

James Mitchell is Research Fellow at the National Institute of Economic and Social Research, UK.

Ragnar Nymoen is Professor of Economics at University of Oslo, Norway.

Marius Ooms is Associate Professor of Econometrics at the VU University, Amsterdam, The Netherlands.

Kerry Patterson is Professor of Econometrics at the University of Reading, UK.

Efthymios G. Pavlidis is Lecturer in Economics at the Lancaster University Management School, Lancaster University, UK.

Ivan Paya is Senior Lecturer in Economics at the Lancaster University Management School, Lancaster University, UK.

David A. Peel is Professor in Economics at the Lancaster University Management School, Lancaster University, UK.

D. Stephen G. Pollock is Professor of Economics at the University of Leicester, UK.

Tommaso Proietti is Professor of Economic Statistics at the University of Rome ‘Tor Vergata’, Italy.

Sergio Rey is Professor of Geographical Sciences at Arizona State University, USA.

Larry W. Taylor is Professor of Economics at the College of Business and Economics, Lehigh University, Pennsylvania, USA.


Jonathan R.W. Temple is Professor of Economics at Bristol University, UK.

Pravin Trivedi is Professor of Economics at Indiana University, Bloomington, USA.

Ruey S. Tsay is Professor of Econometrics and Statistics at the University of ChicagoBooth School of Business, USA.

Martin Wagner is Senior Economist at the Institute for Advanced Studies in Vienna, Austria.


Editors’ Introduction

Terence C. Mills and Kerry Patterson

The Palgrave Handbook of Econometrics was conceived to provide an understanding of major developments in econometrics, both in theory and in application. Over the last twenty-five years or so, econometrics has grown in a way that few could have contemplated, and it became clear to us, as to others, that no single person could have command either of the range of technical knowledge that underpins theoretical econometric developments or the extent of the application of econometrics. In short, econometrics is not, as it used to be considered, a set of techniques that is applied to a previously well-defined problem in economics; it is not a matter of finding the “best” estimator from a field of candidates, applying that estimator and reporting the results. The development of economics is now inextricably entwined with the development of econometrics.

The first Nobel Prize in Economics was awarded to Ragnar Frisch and Jan Tinbergen, both of whom made significant contributions to what we now recognize as applied econometrics. More recently, Nobel Prizes in Economics have been awarded to Clive Granger, Robert Engle, James Heckman and Daniel McFadden, who have all made major contributions to applied econometrics. It is thus clear that the discipline has recognized the influential role of econometrics, both theoretical and applied, in advancing economic knowledge.

The aim of this volume is to make major developments in applied econometrics accessible to those outside their particular field of specialization. The response to Volume 1 was universally encouraging and it has become clear that we were fortunate to be able to provide a source of reference for others for many years to come. We hope that this high standard is continued and achieved here. Typically, applied econometrics, unlike theoretical econometrics, has always been rather poorly served for textbooks, making it difficult for both undergraduate and postgraduate students to get a real “feel” for how econometrics is actually done. To some degree, the econometric textbook market has responded, so that now the leading textbooks include many examples; even so, these examples typically are of an illustrative nature, focusing on simple points, simply exposited, rather than on the complexity that is revealed in practice. Thus our hope is that this volume will provide a genuine entry into the detailed considerations that have to be made when combining economics and econometrics in order to carry out serious empirical research.

As in the case of Volume 1, the chapters here have been specially commissioned from acknowledged experts in their fields; further, each of the chapters has been reviewed by the editors, one or more of the associate editors and a referee. Thus, the process is akin to submission to a journal; however, whilst ensuring the highest standards in the evaluation process, the chapters have been conceived of as part of a whole rather than as a set of unrelated contributions. It has not, however, been our intention to provide just a series of surveys or overviews of some areas of applied econometrics, although the survey element is directly or indirectly served in part here. By its very nature, this volume is about econometrics as it is applied and, to succeed in its aim, the contributions, conceived as a whole, have to meet this goal.

We have organized the chapters of this volume of the Handbook into ten parts. The parts are certainly not watertight, but serve as a useful initial organization of the central themes. Part I contains three chapters under the general heading of “The Methodology and Philosophy of Applied Econometrics.” The lead chapter is by David Hendry, who has been making path-breaking contributions in theoretical and applied econometrics for some forty years or so. It is difficult to conceive how econometrics would have developed without David’s many contributions. This chapter first places the role of applied econometrics in an historical context and then develops a theory of applied econometrics. As might be expected, the key issues are confronted head-on.

In introducing the first volume we noted that the “growth in econometrics is to be welcomed, for it indicates the vitality and importance of the subject. Indeed, this growth and, arguably, the dominance over the last ten or twenty years of econometric developments in taking economics forward, is a notable change from the situation faced by the subject some twenty-five years or so ago.” Yet in Chapter 1, Hendry notes that, next to data measurement, collection and preparation, on the one hand, and teaching, on the other, “Applied Econometrics” does not have a high credibility in the profession. Indeed, whilst courses in theoretical econometrics or econometric techniques are de rigueur for a good undergraduate or Masters degree, courses in applied econometrics have no such general status.

The intricacies, possibly even alchemy (Hendry, 1980), surrounding the mix of techniques and data seem to defy systematization; perhaps they should be kept out of the gaze of querulous students, who may – indeed should – be satisfied with illustrative examples! Often to an undergraduate or Masters student undertaking a project, applied econometrics is the application of econometrics to data, no more, no less, with some relief if the results are at all plausible. Yet, in contrast, leading journals, for example, the Journal of Econometrics, the Journal of Applied Econometrics and the Journal of Business and Economic Statistics, and leading topic journals, such as the Journal of Monetary Economics, all publish applied econometric articles having substance and longevity in their impact and which serve to change the direction of the development of econometric theory (for a famous example, see Nelson and Plosser, 1982). To some, applying econometrics seems unsystematic and so empirical results are open to question; however, as Hendry shows, it is possible to formalize a theory of applied econometrics which provides a coherent basis for empirical work. Chapter 1 is a masterful and accessible synthesis and extension of Hendry’s previous ideas and is likely to become compulsory reading for courses in econometrics, both theory and applied; moreover, it is completed by two applications using the Autometrics software (Doornik, 2007). The first application extends the work of Magnus and Morgan (1999) on US food expenditure, which was itself an update of a key study by Tobin (1950) estimating a demand function for food. This application shows the Autometrics algorithm at work in a simple context. The second application extends the context to a multiple equation setting relating industrial output, the number of bankruptcies and patents, and real equity prices. These examples illustrate the previously outlined theory of applied econometrics combined with the power of the Autometrics software.

In Chapter 2, Fabio Canova addresses the question of how much structure there should be in empirical models. This has long been a key issue in econometrics, and some old questions, particularly those of identification and the meaning of structure, resurface here in a modern context. The last twenty years or so have seen two key developments in macroeconometrics. One has been the development of dynamic stochastic general equilibrium (DSGE) models. Initially, such models were calibrated rather than estimated, with the emphasis on “strong” theory in their specification; however, as Canova documents, more recently likelihood-based estimation has become the dominant practice. The other key development has been that of extending the (simple) vector autoregression (VAR) to the structural VAR (SVAR) model. Although both approaches involve some structure, DSGE models, under the presumption that the model is correct, rely more on an underlying theory than do SVARs. So which should be used to analyze a particular set of problems? As Canova notes: “When addressing an empirical problem with a finite amount of data, one has . . . to take a stand on how much theory one wants to use to structure the available data prior to estimation.” Canova takes the reader through the advantages and shortcomings of these methodologies; he provides guidance on what to do, and what not to do, and outlines a methodology that combines elements of both approaches.

In Chapter 3, John DiNardo addresses some philosophical issues that are at the heart of statistics and econometrics, but which rarely surface in econometric textbooks. As econometricians, we are, for example, used to working within a probabilistic framework, but we rarely consider issues related to what probability actually is. To some degree, we have been prepared to accept the axiomatic or measure theoretic approach to probability, due to Kolmogorov, and this has provided us with a consistent framework that most are happy to work within. Nevertheless, there is one well-known exception to this unanimity: when it comes to the assignment and interpretation of probability measures and, in particular, the interpretation of some key conditional probabilities; this is whether one adopts a Bayesian or non-Bayesian perspective. In part, the debate that DiNardo discusses relates to the role of the Bayesian approach, but it is more than this; it concerns metastatistics and philosophy, because, in a sense, it relates to a discussion of the theory of theories. This chapter is deliberately thought-provoking and certainly controversial – two characteristics that we wish to encourage in a Handbook that aims to be more than just an overview. For balance, the reader can consult Volume 1 of the Handbook, which contains two chapters devoted to the Bayesian analysis of econometric models (see Poirier and Tobias, 2006, and Strachan et al., 2006). The reader is likely to find familiar concepts here, such as probability and testing, but only as part of a development that takes them into potentially unfamiliar areas. DiNardo’s discussion of these issues is wide-ranging, with illustrations taken from gambling and practical examples taken as much from science, especially medicine, as economics. One example from the latter is the much-researched question of the causal effect of union status on wages: put simply, do unions raise wages and, if so, by how much? This example serves as an effective setting in which to raise issues and to show that differences in approach can lead to differences in results.

For some, the proof of the pudding in econometrics is the ability to forecast accurately, and to address some key issues concerning this aspect of econometrics, Part II contains two chapters on forecasting. The first, Chapter 4, by Michael Clements and David Harvey, recognizes that quite often several forecasts are available and, rather than considering a selection strategy that removes all but the best on some criterion, it is often more fruitful to consider different ways of combining forecasts, as suggested in the seminal paper by Bates and Granger (1969). In an intuitive sense, one forecast may be better than another, but there could still be some information in the less accurate forecast that is not contained in the more accurate forecast. This is a principle that is finding wider application; for example, in some circumstances, as in unit root testing, there is more than one test available and, indeed, there may be one uniformly powerful test, yet there is still potential merit in combining tests.

In the forecasting context, Clements and Harvey argue that the focus for multiple forecasts should not be on testing the null of equal accuracy, but on testing for encompassing. Thus it is not a question of choosing forecast A over forecast B, but of whether the combination of forecasts A and B is better than either individual forecast. Of course, this may be of little comfort from a structuralist point of view if, for example, the two forecasts come from different underlying models; but it is preferable when the loss function rewards good fit in some sense. Bates and Granger (1969) suggested a simple linear combination of two unbiased forecasts, with weights depending on the relative accuracy of the individual forecasts, and derived the classic result that, even if the forecasts are equally accurate in a mean squared error loss sense, then there will still be a gain in using the linear combination unless the forecasts are perfectly correlated, at least theoretically. Clements and Harvey develop from this base model, covering such issues as biased forecasts, nonlinear combinations, and density or distribution forecasts. The concept of forecast encompassing, which is not unique in practice, is then considered in detail, including complications arising from integrated variables, non-normal errors, serially correlated forecast errors, ARCH errors, the uncertainty implied by model estimation, and the difficulty of achieving tests with the correct actual size. A number of recent developments are examined, including the concept of conditional forecast evaluation (Giacomini and White, 2006), evaluating quantile forecasts, and relaxing the forecast loss function away from the traditional symmetric squared error. In short, this chapter provides a clear, critical and accessible evaluation of a rapidly developing area of the econometrics literature.
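To make the Bates–Granger result concrete, here is a minimal numerical sketch (ours, not the chapter's; the forecast error series are simulated) that computes the variance-minimizing weight on the first of two unbiased forecasts and shows the mean squared error gain when their errors are imperfectly correlated:

```python
import numpy as np

def bates_granger_weight(e1, e2):
    # Weight on forecast 1 in the combination w*f1 + (1 - w)*f2 that
    # minimises the variance of the combined error w*e1 + (1 - w)*e2.
    var1, var2 = e1.var(), e2.var()
    cov12 = np.cov(e1, e2, bias=True)[0, 1]
    return (var2 - cov12) / (var1 + var2 - 2.0 * cov12)

rng = np.random.default_rng(42)
common = rng.normal(size=500)           # shared component of the two error series
e1 = common + rng.normal(size=500)      # errors of forecast A
e2 = common + rng.normal(size=500)      # errors of forecast B (equally accurate)
w = bates_granger_weight(e1, e2)
ec = w * e1 + (1 - w) * e2              # combined forecast error
print(f"weight={w:.2f}  MSE A={np.mean(e1**2):.2f}  "
      f"MSE B={np.mean(e2**2):.2f}  MSE combined={np.mean(ec**2):.2f}")
```

The combined MSE falls below either individual MSE; as the error correlation approaches one, the gain disappears, exactly as the classic result states.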

Chapter 5 is by Stephen Hall and James Mitchell, who focus on density forecasting. There has been a great deal of policy interest in forecasting key macroeconomic variables such as output growth and inflation, some of which has been institutionally enshrined by granting central banks independence in inflation targeting. In turn, there has been a movement away from simply reporting point forecasts of inflation and GDP growth in favor of a fan chart representation of the distribution of forecasts. A density forecast gives much more information than a simple point forecast, which is included as just one realization on the outcome axis. As a corollary, forecast evaluation should also include techniques that evaluate the accuracy, in some well-defined sense, of the density forecast. However, given that generally we will only be able to observe one outcome (or event) per period, some thought needs to be given to how the distributional aspect of the forecast is evaluated. Hall and Mitchell discuss a number of possibilities and also consider methods of evaluating competing density forecasts. A further aspect of density forecasting is the ability of the underlying model to generate time variation in the forecast densities. If the underlying model is a VAR, or can be approximated by a VAR, then, subject to some qualifications, the only aspect of the forecast density which is able to exhibit time variation is the mean; consequently, models have been developed that allow more general time variation in the density through, for example, ARCH and GARCH errors and time-varying parameters. This chapter also links in with the previous chapter by considering combinations of density forecasts. There are two central possibilities: the linear opinion pool is a weighted linear combination of the component densities, whereas the logarithmic opinion pool is a multiplicative combination. Hall and Mitchell consider the problem of determining the weights in such combinations and suggest that predictive accuracy improves when the weights reflect shifts in volatility, a characteristic of note for the last decade or so in a number of economies.
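A minimal sketch of the two pooling schemes, assuming two Gaussian component densities and an arbitrary fixed weight (the chapter's interest is precisely in how such weights should be chosen):

```python
import numpy as np
from scipy.stats import norm

grid = np.linspace(-6.0, 6.0, 1201)        # grid for the outcome variable
p1 = norm.pdf(grid, loc=-0.5, scale=1.0)   # forecaster 1's density
p2 = norm.pdf(grid, loc=1.0, scale=2.0)    # forecaster 2's density
w = 0.6                                    # illustrative weight on forecaster 1

linear_pool = w * p1 + (1.0 - w) * p2      # weighted average of densities

log_pool = p1**w * p2**(1.0 - w)           # weighted geometric mean...
log_pool /= np.trapz(log_pool, grid)       # ...renormalised to integrate to one

print(np.trapz(linear_pool, grid), np.trapz(log_pool, grid))  # both ~ 1
```

The two pools behave quite differently: the linear pool can be multimodal and is fatter-tailed than its components, whereas the logarithmic pool of Gaussian components remains Gaussian and unimodal.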

Part III contains four chapters under the general heading of “Time Series Applications.” A key area in which the concept of a time series is relevant is in characterizing and determining trends and cycles. Chapter 6, by Stephen Pollock, is a tour de force on modeling trends and cycles, and on the possibilities and pitfalls inherent in the different approaches. In the simplest of models, cyclical fluctuations are purely sinusoidal and the trend is exponential; although simple, this is a good base from which to understand the nature of developments that relax these specifications. Such developments include the view that economic time series evolve through the accumulation of stochastic shocks, as in an integrated Wiener process. The special and familiar cases of the Beveridge–Nelson decomposition, the Hodrick–Prescott filter, the Butterworth filter and the unifying place of Wiener–Kolmogorov filtering are all covered with admirable clarity. Other considerations include the complications caused by the limited data that is often available in economic applications, contrary to the convenient assumptions of theory. In an appealing turn of phrase, Pollock refers to obtaining a decomposition of components based on the periodogram “where components often reside within strictly limited frequency bands which are separated by dead spaces where the spectral ordinates are virtually zeros.” The existence of these “spectral dead spaces” is key to a practical decomposition of an economic time series, however achieved. In practice, trend fitting requires judgment and a clear sense of what it is that the trend is capturing. Other critical issues covered in this chapter include the importance of structural breaks, a topic that has been influential elsewhere (for example, in questioning the results of unit root testing: Perron, 1989); and to aid the reader, practical examples are included throughout the exposition.
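Pollock treats these filters in depth; purely as a point of reference, here is a minimal penalized least-squares implementation of the Hodrick–Prescott trend (the function name and the dense-matrix solution are our illustrative choices; λ = 1600 is the conventional setting for quarterly data):

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    # HP trend: minimise sum((y - tau)^2) + lam * sum((second diff of tau)^2).
    # Closed form: tau = (I + lam*K'K)^{-1} y, with K the second-difference matrix.
    n = len(y)
    K = np.zeros((n - 2, n))
    for i in range(n - 2):
        K[i, i:i + 3] = [1.0, -2.0, 1.0]
    tau = np.linalg.solve(np.eye(n) + lam * (K.T @ K), y)
    return tau, y - tau  # trend and cycle
```

The smoothing parameter governs the decomposition: as λ → 0 the trend reproduces the series itself, while as λ → ∞ it approaches a linear trend.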

Chapter 7, by Joe Cardinale and Larry Taylor, continues the time series theme of analyzing economic cycles whilst focusing on asymmetries, persistence and synchronization. This is a particularly timely and somewhat prophetic chapter given that we are currently experiencing what is perhaps the deepest recession in recent economic history. How can we analyze the critical question “When will it end?” This chapter provides the analytical and econometric framework to answer such a question. The central point is that cycles are much more interesting than just marking their peaks and troughs would suggest. Whilst “marking time” is important, it is just the first part of the analysis, and should itself be subjected to methods for distinguishing phases (for example, expansions and contractions of the output cycle). Once phases have been distinguished, their duration and characteristics become of interest; for example, do long expansions have a greater chance of ending than short expansions? Critical to the analysis is the hazard function: “the conditional probability that a phase will terminate in period t, given that it has lasted t or more periods.” Cardinale and Taylor consider different models and methods of estimating the hazard function and testing hypotheses related to particular versions of it. They also consider tests of duration dependence, the amplitudes of cycles, and the synchronization of cycles for different but related variables; for example, output and unemployment. Their theoretical analysis is complemented with a detailed consideration of US unemployment.
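The hazard function quoted above has a simple empirical counterpart in discrete time. A minimal sketch, using hypothetical expansion durations rather than the authors' data:

```python
import numpy as np

def empirical_hazard(durations, max_t=None):
    # Discrete-time hazard: Pr(phase ends at t | it has lasted t or more periods),
    # estimated from a sample of completed phase durations.
    durations = np.asarray(durations)
    max_t = max_t or durations.max()
    hazard = []
    for t in range(1, max_t + 1):
        at_risk = np.sum(durations >= t)   # phases lasting t or more periods
        ending = np.sum(durations == t)    # phases ending exactly at t
        hazard.append(ending / at_risk if at_risk else np.nan)
    return np.array(hazard)

# Hypothetical expansion durations in quarters; a hazard that rises with t
# would indicate positive duration dependence (long expansions more likely to end).
exp_durations = [8, 12, 20, 6, 15, 9, 30, 11, 14, 7]
print(np.round(empirical_hazard(exp_durations), 2))
```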

No handbook of econometrics could be without a contribution indicating the importance of cointegration analysis for non-stationary data. In Chapter 8, Katarina Juselius considers one of the most enduring puzzles in empirical economics, namely, if purchasing power parity (PPP) is the underlying equilibrium state that determines the relationship between real exchange rates, why is there “pronounced persistence” away from this equilibrium state? This has been a common finding of empirical studies using data from a wide range of countries and different sample periods. Juselius shows how a careful analysis can uncover important structures in the data; however, these structures are only revealed by taking into account the different empirical orders of integration of the component variables, the identification of stationary relationships between non-stationary variables, the dynamic adjustment of the system to disequilibrium states, the appropriate deterministic components, and the statistical properties of the model. As Juselius notes, and in contrast to common approaches, the order of integration is regarded here as an empirical approximation rather than a structural parameter. This opens up a distinction between, for example, a variable being empirically I(d) rather than structurally I(d); a leading example here is the I(2) case which, unlike the I(1) case, has attracted some “suspicion” when applied in an absolute sense to empirical series. The challenging empirical case considered by Juselius is the relationship between German and US prices and nominal exchange rates within a sample that includes the period of German reunification. The methodology lies firmly within the framework of general-to-specific modeling, in which a general unrestricted model is tested down (see also Hendry, Chapter 1) to gain as much information without empirical distortion. A key distinction in the methodological and empirical analysis is between pushing and pulling forces: in the current context, prices push whereas the exchange rate pulls. PPP implies that there should be just a single stochastic trend in the data, but the empirical analysis suggests that there are two, with the additional source of permanent shocks being related to speculative behaviour in the foreign exchange market.

In an analysis of trends and cycles, economists often characterize the state of the economy in terms of indirect or latent variables, such as the output gap, core inflation and the non-accelerating inflation rate of unemployment (NAIRU). These are variables that cannot be measured directly, but are nevertheless critical to policy analysis. For example, the need to take action to curb inflationary pressures is informed by the expansionary potential in the economy; whether or not a public sector budget deficit is a matter for concern is judged by reference to the cyclically adjusted deficit. These concepts are at the heart of Chapter 9 by Tommaso Proietti, entitled “Structural Time Series Models for Business Cycle Analysis,” which links with the earlier chapters by Pollock and Cardinale and Taylor. Proietti focuses on the measurement of the output gap, which he illustrates throughout using US GDP. In the simplest case, what is needed is a framework for decomposing a time series into a trend and cycle and Proietti critically reviews a number of methods to achieve such a decomposition, including the random walk plus noise (RWpN) model, the local linear trend model (LLTM), methods based on filtering out frequencies associated with the cycle, multivariate models that bring together related macroeconomic variables, and the production function approach. The estimation and analysis of a number of models enables the reader to see how the theoretical analysis is applied and what kind of questions can be answered. Included here are a bivariate model of output and inflation for the US and a model of mixed data frequency, with quarterly observations for GDP and monthly observations for industrial production, the unemployment rate and CPI inflation. The basic underlying concepts, such as the output gap and core inflation, are latent variables and, hence, not directly observable: to complete the chapter, Proietti also considers how to judge the validity of the corresponding empirical measures of these concepts.

To complete the part of the Handbook on Time Series Applications, in Chapter 10 Luis Gil-Alana and Javier Hualde provide an overview of fractional integration and cointegration, with an empirical application in the context of the PPP debate. A time series is said to be integrated of order d, where d is an integer, if d is the minimum number of differences necessary to produce a stationary time series. This is a particular form of non-stationarity and one which dominated the econometrics literature in the 1980s and early 1990s, especially following the unit root literature. However, the integer restriction on d is not necessary to the definition of an integrated series (see, in particular, Granger and Joyeux, 1980), so that d can be a fraction – hence the term “fractionally integrated” for such series. Once the integer restriction is relaxed for a single series, it is then natural to relax it for the multivariate case, which leads to the idea of fractional cointegration. Gil-Alana and Hualde give an overview of the meaning of fractional integration and fractional cointegration, methods of estimation for these generalized cases, which can be approached in either the time or frequency domains, the underlying rationale for the existence of fractionally integrated series (for example, through the aggregation of micro-relationships), and a summary of the empirical evidence for fractionally integrated univariate series and fractionally cointegrated systems of series. The various issues and possible solutions are illustrated in the context of an analysis of PPP for four bivariate series. It is clear that the extension of integration and cointegration to their corresponding fractional cases is not only an important generalization of the theory, but one which finds a great deal of empirical support.
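For intuition, fractional differencing can be implemented directly from the binomial expansion (1−L)^d = Σ_k π_k L^k, with π_0 = 1 and π_k = π_{k−1}(k−1−d)/k. A minimal sketch on simulated data (this is an illustration of the definition, not one of the chapter's time- or frequency-domain estimators):

```python
import numpy as np

def frac_diff(x, d):
    # Apply (1 - L)^d via the truncated binomial expansion:
    # pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k.
    n = len(x)
    pi = np.empty(n)
    pi[0] = 1.0
    for k in range(1, n):
        pi[k] = pi[k - 1] * (k - 1 - d) / k
    # y_t = sum_{k=0}^{t} pi_k * x_{t-k}
    return np.array([pi[:t + 1] @ x[t::-1] for t in range(n)])

rng = np.random.default_rng(0)
eps = rng.normal(size=300)
x = frac_diff(eps, -0.4)   # (1 - L)^{-0.4} eps: a series that is I(0.4) by construction
y = frac_diff(x, 0.4)      # differencing with d = 0.4 recovers (approximately) the noise
```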

One of the most significant developments in econometrics over the last twenty years or so has been the increase in the number of econometric applications involving cross-section and panel data (see also Ooms, Chapter 29). Hence Part IV is devoted to this development. One of the key areas of application is to choice situations which have a discrete number of options; examples include the “whether to purchase” decision, which has wide application across consumer goods, and the “whether to participate” decision, as in whether to enter the labor force, to retire, or to join a club. Discrete choice models are the subject of Chapter 11 by Bill Greene, who provides a critical, but accessible, review of a vast literature. The binary choice model is a key building block here and so provides a prototypical model with which to examine such topics as specification, estimation and inference; it also allows the ready extension to more complex models such as bivariate and multivariate binary choice models and multinomial choice models. Models involving count data are also considered as they relate to the discrete choice framework. A starting point for the underlying economic theory is the extension of the classical theory of consumer behavior, involving utility maximization subject to a budget constraint, to the random utility model. The basic model is developed from this point and a host of issues are considered that arise in practical studies, including estimation and inference, specification tests, measuring fit, complications from endogenous right-hand-side variables, random parameters, the use of panel data, and the extension of the familiar fixed and random effects. To provide a motivating context, Greene considers an empirical application involving a bivariate binary choice model. This is where two binary choice decisions are linked; in this case, in the first decision the individual decides whether to visit a physician, which is a binary choice, and the second involves whether to visit the hospital, again a binary choice: together they constitute a bivariate (and ordered) choice. An extension of this model is to consider the number of times that an individual visits the doctor or a hospital. This gives rise to a counts model (the number of visits to the doctor and the number of visits to the hospital) with its own particular specification. Whilst a natural place to start is the Poisson model, this, as Greene shows, is insufficient as a general framework; the extension is provided and illustrated with panel data from the German health care system. A second application illustrates a mixed logit and error components framework for modeling modes of transport choice (air, train, bus, car). Overall, this chapter provides an indication, through the variety of its applications, as to why discrete choice models have become such a significant part of applied econometrics.
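As a reminder of the building block underlying these models, the sketch below fits a binary logit by maximum likelihood on simulated data; it is a textbook illustration of the prototypical model, not Greene's application:

```python
import numpy as np
from scipy.optimize import minimize

def neg_logit_loglik(beta, X, y):
    # Negative log-likelihood of the binary logit implied by a random
    # utility specification: Pr(y = 1 | x) = exp(x'b) / (1 + exp(x'b)).
    xb = X @ beta
    return -np.sum(y * xb - np.log1p(np.exp(xb)))

rng = np.random.default_rng(3)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # constant + 2 regressors
true_beta = np.array([-0.5, 1.0, -0.8])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)

res = minimize(neg_logit_loglik, x0=np.zeros(3), args=(X, y), method="BFGS")
print(res.x)  # estimates should be close to true_beta
```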

The theme of panel data methods and applications is continued in Chapter 12 by Andrew Jones. The application of econometrics to health economics has been an important area of development over the last decade or so. However, this has not just been a case of applying existing techniques: rather, econometrics has been able to advance the subject itself, asking questions that had not previously been asked – and providing answers. This chapter will be of interest not only to health economics specialists, but also to those seeking to understand how treatment effects in particular are estimated and to those investigating the extent of the development and application of panel data methods (it is complemented by Colin Cameron in Chapter 14). At the center of health economics is the question “What are the impacts of specific health policies?” Given that we do not observe experimental data, what can we learn from non-experimental data? Consider the problem of evaluating a particular treatment; for an individual, the treatment effect is the difference in outcome between the treated and the control, but since an individual is either treated or not at a particular time, the treatment effect cannot be observed. “Treatment” is here a general term that covers not only single medical treatments but also broad policies, and herein lies its generality, since a treatment could equally be a policy to reduce unemployment or to increase the proportion of teenagers receiving higher education. In a masterful understanding of a complex and expanding literature, Jones takes the reader through the theoretical and practical solutions to the problems associated with estimating and evaluating treatment effects, covering, inter alia, identification strategies, dynamic models, estimation methods, different kinds of data, and multiple equation models; throughout the chapter the methods and discussion are motivated by practical examples illustrating the breadth of applications.
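The fundamental evaluation problem described here can be stated in a few lines of simulation: each individual has two potential outcomes but reveals only one. Under randomized assignment the difference in group means recovers the average effect; the chapter's subject is what to do when, as with most health data, assignment is not random. A hypothetical sketch:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Potential outcomes: y1 if treated, y0 if not; only one is ever observed.
y0 = rng.normal(loc=10.0, size=n)
y1 = y0 + 2.0                      # true individual treatment effect = 2
d = rng.integers(0, 2, size=n)     # randomised treatment assignment
y_obs = np.where(d == 1, y1, y0)   # the observable data

# Under random assignment the difference in group means identifies the
# average treatment effect; with non-experimental data this simple
# comparison is generally biased, which is the problem Jones surveys.
ate_hat = y_obs[d == 1].mean() - y_obs[d == 0].mean()
print(round(ate_hat, 2))           # close to 2
```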

A key development in econometrics over the last thirty years or so has been the attention given to the properties of the data, as these enlighten the question of whether the underlying probability structure is stationary or not. In a terminological shorthand, we refer to data that is either stationary or non-stationary. Initially, this was a question addressed to individual series (see Nelson and Plosser, 1982); subsequently, the focus expanded, through the work of Engle and Granger (1987) and Johansen (1988), to a multivariate approach to non-stationarity. The next step in the development was to consider a panel of multivariate series. In Chapter 13, Anindya Banerjee and Martin Wagner bring us up to date by considering panel methods to test for unit roots and cointegration. The reader will find in this chapter a theoretical overview and critical assessment of a vast and growing body of methods, combined with practical recommendations based on the insights obtained from a wide base of substantive applications. In part, as is evident in other areas of econometric techniques and applications, theory has responded to the much richer sources of data that have become available, not only at a micro or individual level, as indicated in Chapter 12, combined with increases in computing power. As Banerjee and Wagner note, we now have long time series on macroeconomic and industry-level data. Compared to just twenty years ago, there is thus a wealth of data on micro, industry and macro-panels. A panel dataset embodies two dimensions: the cross-section dimension and the time-series dimension, so that, in a macro-context, for example, we can consider the question of convergence not just of a single variable (say, of a real exchange rate to a comparator, be that a PPP hypothetical or an alternative actual rate), but of a group of variables, which is representative of the multidimensional nature of growth and cycles. A starting point for such an analysis is to assess the unit root properties of panel data but, as in the univariate case, issues such as dependency, the specification of deterministic terms, and the presence of structural breaks are key practical matters that, if incorrectly handled, can lead to misleading conclusions. Usually, the question of unit roots is a precursor to cointegration analysis, and Banerjee and Wagner guide the reader through the central methods, most of which have been developed in the last decade. Empirical illustrations, based on exchange rate pass-through in the euro-area and the environmental Kuznets curve, complement the theoretical analysis.

Whilst the emphasis in Chapter 13 is on panels of macroeconomic or industry-level data, in Chapter 14, Colin Cameron, in the first of two chapters in Part V, provides a survey of microeconometric methods, with an emphasis on recent developments. The data underlying such developments are at the level of the individual, households and firms. A prototypical question in microeconometrics relates to the identification, estimation and evaluation of marginal effects using individual-level data; for example, the effect on earnings of an additional year of education. This example is often used to motivate some basic estimation methods, such as least squares, maximum likelihood and instrumental variables, in undergraduate and graduate texts in econometrics, so it is instructive to see how recent developments have extended these methods. The development of the basic methods includes generalized method of moments (GMM), empirical likelihood, simulation-based methods, quantile regression and nonparametric and semiparametric estimation, whilst developments in inference include robustifying standard tests and bootstrap methods. Apart from estimation and inference, Cameron considers a number of other issues that occur frequently in microeconometric studies: in particular, issues related to causation, as in estimating and evaluating treatment effects; heterogeneity, for example due to regressors or unobservables; and the nature of microeconometric data, such as survey data and the sampling scheme, with problems such as missing data and measurement error.

The development of econometrics in the last decade or so in particular has been symbiotic with the development of advances in computing, particularly that of personal computers. In Chapter 15, David Jacho-Chávez and Pravin Trivedi focus on the relationship between empirical microeconometrics and computational considerations, which they call, rather evocatively, a “matrimony” between computing and applied econometrics. No longer is it the case that the mainstay of empirical analysis is a set of macroeconomic time series, often quite limited in sample period. Earlier chapters in this part of the volume emphasize that the data sources now available are much richer than this, both in variety and length of sample period. As Jacho-Chávez and Trivedi note, the electronic recording and collection of data has led to substantial growth in the availability of census and survey data. However, the nature of the data leads to problems that require theoretical solutions: for example, problems of sample selection, measurement errors and missing or incomplete data. On the computing side, the scale of the datasets and estimation based upon them implies that there must be reliability in the high-dimensional optimization routines required by the estimation methods and an ability to handle large-scale Monte Carlo simulations. The increase in computing power has meant that techniques that were not previously feasible, such as simulation-assisted estimation and resampling, are now practical and in widespread use. Moreover, nonparametric and semiparametric methods that involve the estimation of distributions rather than simple parameters, as in regression models, have been developed through drawing on the improved power of computers. Throughout the chapter, Jacho-Chávez and Trivedi motivate their discussion by the use of examples of practical interest, including modeling hedonic prices of housing attributes, female labor force participation, Medicare expenditure, and number of doctor visits. Interestingly, they conclude that there are important problems, particularly those related to assessing public policy, such as identification and implementation in the context of structural, dynamic and high-dimensional models, which remain to be solved.

In Part VI, the theme of the importance of economic policy is continued, but with the emphasis now on monetary policy and macroeconomic policy, which remain of continued importance. Starting in the 1970s and continuing into the 1990s, the development of macroeconometric models for policy purposes was a highly regarded area; during that period computing power was developing primarily through mainframe computers, allowing not so much the estimation as the simulation of macroeconomic models of a dimension that had not been previously contemplated. Government treasuries, central banks and some non-governmental agencies developed their own empirical macro-models comprising hundreds of equations. Yet, these models failed to live up to their promise, either wholly or in part. For some periods there was an empirical failure, the models simply not being good enough; but, more radically, the theoretical basis of the models was often quite weak, at least relative to the theory of the optimizing and rational agent and ideas of intertemporal general equilibrium.

In Chapter 16, Carlo Favero expands upon this theme, especially as it relates to the econometrics of monetary policy and the force of the critiques by Lucas (1976) and Sims (1980). A key distinction in the dissection of the modeling corpse is between structural identification and statistical identification. The former relates to the relationship between the structural parameters and the statistical parameters in the reduced form, while the latter relates to the properties of the statistical or empirical model which represents the data. Typically, structural identification is achieved by parametric restrictions seeking to classify some variables as “exogenous,” a task that some have regarded as misguided (or indeed even “impossible”). Further, a failure to assess the validity of the reduction process in going from the (unknown) data-generating process to a statistical representation, notwithstanding criticisms related to structural identification, stored up nascent empirical failure awaiting the macroeconometric model. Developments in cointegration theory and practice have “tightened” up the specification of empirical macromodels, and DSGE models, preferred theoretically by some, have provided an alternative “modellus operandi.” Subsequently, the quasi-independence of some central banks has heightened the practical importance of questions such as “How should a central bank respond to shocks in macroeconomic variables?” (Favero, Chapter 16). In practice, although DSGE models are favored for policy analysis, in their empirical form the VAR reappears, but with its own set of issues. Favero considers such practical developments as calibration and model evaluation, the identification of shocks, impulse responses, structural stability of the parameters, VAR misspecification and factor-augmented VARs. A summary and analysis of Sims’ (2002) small macroeconomic model (Appendix A) helps the reader to understand the relationship between an optimizing specification and the resultant VAR model.

In Chapter 17, Gunnar Bårdsen and Ragnar Nymoen provide a paradigm for the construction of a dynamic macroeconometric model, which is then illustrated with a small econometric model of the Norwegian economy that is used for policy analysis. Bårdsen and Nymoen note the two central critiques of “failed” macroeconometric models: the Lucas (1976) critique and the Clements and Hendry (1999) analysis of forecast failure involving “location” shifts (rather than behavioral parameter shifts). But these critiques have led to different responses: first, the move to explicit optimizing models (see Chapter 16); and, alternatively, greater attention to the effects of regime shifts, viewing the Lucas critique as a possibility theorem rather than a truism (Ericsson and Irons, 1995). Whilst it is de rigueur to accept that theory is important, Bårdsen and Nymoen consider whether “theory” provides the (completely) correct specification or whether it simply provides a guideline for the specification of an empirical model. In their approach, the underlying economic model is nonlinear and specified in continuous time; hence, the first practical steps are linearization and discretization, which result in an equilibrium correction model (EqCM). Rather than remove the data trends, for example by applying the HP filter, the common trends are accounted for through a cointegration analysis. The approach is illustrated step by step by building a small-scale econometric model of the Norwegian economy, which incorporates the ability to analyze monetary policy; for example, an increase in the market rate, which shows the channels of the operation of monetary policy. Further empirical analysis of the New Keynesian Phillips curve provides an opportunity to illustrate their approach in another context. In summary, Bårdsen and Nymoen note that cointegration analysis takes into account non-stationarities that arise through unit roots, so that forecast failures are unlikely to be attributable to misspecification for that reason. In contrast to the econometric models of the 1970s, the real challenges arise from non-stationarities in functional relationships due to structural breaks; however, there are ways to “robustify” the empirical model and forecasts from it so as to mitigate such possibilities, although challenges remain in an area that continues to be of central importance in economic policy.

One of the key developments in monetary policy in the UK and elsewhere in the last decade or so has been the move to give central banks a semi-autonomous status. In part, this was thought to avoid the endogenous “stop–go” cycle driven by political considerations. It also carried with it the implication that it was monetary policy, rather than fiscal policy, which would become the major macroeconomic policy tool, notwithstanding the now apparent practical limitations of such a move. In Chapter 18, Brian Henry provides an overview of the institutional and theoretical developments in the UK in particular, but with implications for other countries that have taken a similar route. The key question that is addressed in this chapter is whether regime changes, such as those associated with labor market reforms, inflation targeting and instrument independence for the Bank of England, have been the key factors in dampening the economic cycle and improving inflation, unemployment and output growth, or whether the explanation is more one of beneficial international events (the “good luck” hypothesis) and monetary policy mistakes. Henry concludes, perhaps controversially, that the reforms to the labor market and to the operation of the central bank are unlikely to have been the fundamental reasons for the improvement in economic performance. He provides an econometric basis for these conclusions, which incorporates a role for international factors such as real oil prices and measures of international competitiveness. Once these factors are taken into account, the “regime change” explanation loses force.

The growth of financial econometrics in the last two decades was noted in the first volume of this Handbook. Indeed, this development was recognized in the award of the 2003 Nobel Prize in Economics (jointly with Sir Clive Granger) to Robert Engle for “methods of analyzing economic time series with time-varying volatility (ARCH).” Part VII of this volume reflects this development and is thus devoted to applications in the area of financial econometrics.

In Chapter 19, George Dotsis, Raphael Markellos and Terence Mills consider continuous-time stochastic volatility models. What is stochastic volatility? To answer that question, we start from what it is not. Consider a simple model of an asset price, Y(t), such as geometric Brownian motion, which in continuous time takes the form of the stochastic differential equation dY(t) = μY(t)dt + σY(t)dW(t), where W(t) is a standard Brownian motion (BM) input; then σ (or σ²) is the volatility parameter that scales the stochastic BM contribution to the diffusion of Y(t). In this case the volatility parameter is constant, although the differential equation is stochastic. However, as Dotsis et al. note, a more appropriate specification for the accepted characteristics of financial markets is a model in which volatility also evolves stochastically over time. For example, if we introduce the variance function v(t), then the simple model becomes dY(t) = μY(t)dt + √v(t)Y(t)dW(t), and this embodies stochastic volatility. Quite naturally, one can then couple this equation with one that models the diffusion over time of the variance function. ARCH/GARCH models are one way to model time-varying volatility, but there are a number of other attractive specifications; for example, jump diffusions, affine diffusions, affine jump diffusions and non-affine diffusions. In motivating alternative specifications, Dotsis et al. note some key empirical characteristics of financial markets that underlie the rationale for stochastic volatility models, namely fat tails, volatility clustering, leverage effects, information arrivals, volatility dynamics and implied volatility. The chapter then continues by covering such issues as specification, estimation and inference in stochastic volatility models. A comparative evaluation of five models applied to the S&P 500, for daily data over the period 1990–2007, is provided to enable the reader to see some of the models “in action.”
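
To fix ideas, the following minimal sketch – ours, not the chapter's – simulates a Heston-type square-root stochastic volatility model, one member of the affine diffusion class just mentioned, using an Euler–Maruyama discretization; all parameter values are purely illustrative.

```python
import numpy as np

# Euler-Maruyama sketch of a Heston-type stochastic volatility model:
#   dY(t) = mu*Y(t)dt + sqrt(v(t))*Y(t)dW1(t)
#   dv(t) = kappa*(theta - v(t))dt + xi*sqrt(v(t))dW2(t),  corr(dW1, dW2) = rho
# Parameter values are illustrative, not estimates from any dataset.
rng = np.random.default_rng(0)
mu, kappa, theta, xi, rho = 0.05, 2.0, 0.04, 0.3, -0.7
T, n = 1.0, 252                                   # one "year" of daily steps
dt = T / n
Y, v = np.empty(n + 1), np.empty(n + 1)
Y[0], v[0] = 100.0, theta
for i in range(n):
    z1 = rng.standard_normal()
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal()  # leverage effect
    v[i + 1] = abs(v[i] + kappa * (theta - v[i]) * dt
                   + xi * np.sqrt(v[i] * dt) * z2)               # reflect to keep v >= 0
    Y[i + 1] = Y[i] * (1 + mu * dt + np.sqrt(v[i] * dt) * z1)
print(f"terminal price {Y[-1]:.2f}, terminal variance {v[-1]:.4f}")
```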

One of the most significant ideas in the area of financial econometrics is that the underlying stochastic process for an asset price is a martingale. Consider a stochastic process X = (Xt, Xt−1, . . .), which is a sequence of random variables; then the martingale property is that the expectation (at time t−1) of Xt, conditional on the information set It−1 = (Xt−1, Xt−2, . . .), is Xt−1; that is, E(Xt | It−1) = Xt−1 (almost surely), in which case X is said to be a martingale (the definition is sometimes phrased in terms of the σ-field generated by It−1, or indeed some other “filtration”). Next, define the related process Y = (ΔXt, ΔXt−1, . . .); then Y is said to be a martingale difference sequence (MDS). The martingale property for X translates to the property for Y that E(Yt | It−1) = 0 (see, for example, Mikosch, 1998, sec. 1.5). The martingale property is attractive from an economic perspective because of its link to efficient markets and rational expectations; for example, in terms of X, it says that the best predictor of Xt given the information available at t−1, in a minimum mean squared error (MSE) sense, is simply Xt−1.

In Chapter 20, J. Carlos Escanciano and Ignacio Lobato consider tests of the martingale difference hypothesis (MDH). The MDH generalizes the MDS condition to E(Yt | It−1) = μ, where μ is not necessarily zero; it implies that past and current information (as defined in It) is of no value, in an MSE sense, in forecasting future values of Yt. Tests of the MDH can be seen as tests of the equivalent moment condition E[(Yt − μ)w(It−1)] = 0, where w(It−1) is a weighting function. A useful means of organizing the extant tests of the MDH is in terms of the type of functions w(.) that are used. For example, if w(It−1) = Yt−j, j ≥ 1, then the resulting MDH test is of E[(Yt − μ)Yt−j] = 0, which is just the covariance between Yt and Yt−j. This is just one of a number of tests, but it serves to highlight some generic issues. In principle, the condition should hold for all j ≥ 1 but, in practice, j has to be truncated at some finite value. Moreover, this is just one choice of w(It−1), whereas the MDH condition is not so restricted. Escanciano and Lobato consider issues such as the nature of the conditioning set (finite or infinite), robustifying standard test statistics (for example, the Ljung–Box and Box–Pierce statistics), and developing tests in both the time and frequency domains; whilst standard tests are usually of linear dependence, for example autocorrelation-based tests, it is important also to consider tests based on nonlinear dependence. To put the various tests into context, the chapter includes an application to four daily and weekly exchange rates against the US dollar. The background to this is that the jury is still out on the validity of the MDH for such data: some studies have found against the MDH, whereas others have found little evidence against it. In this context, applying a range of tests, Escanciano and Lobato find general support for the MDH.
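
As a concrete illustration of the covariance-based choice w(It−1) = Yt−j described above, the following sketch – ours, with simulated data – computes the first m sample autocorrelations of a series and the portmanteau Ljung–Box statistic; as the chapter stresses, such a statistic has power only against linear departures from the MDH.

```python
import numpy as np

# Covariance-based MDH check: with w(I_{t-1}) = Y_{t-j}, the moment conditions
# E[(Y_t - mu) * Y_{t-j}] = 0, j = 1, ..., m, are summarized by the Ljung-Box Q(m).
rng = np.random.default_rng(1)
y = rng.standard_normal(1000)        # i.i.d. placeholder series, an MDS by construction
n, m = len(y), 10
yc = y - y.mean()
rho = np.array([np.sum(yc[j:] * yc[:-j]) for j in range(1, m + 1)]) / np.sum(yc**2)
Q = n * (n + 2) * np.sum(rho**2 / (n - np.arange(1, m + 1)))
# Under the null of no linear dependence, Q(10) is approximately chi-squared with
# 10 degrees of freedom; the 5% critical value is about 18.31.
print(f"Ljung-Box Q({m}) = {Q:.2f}")
```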

Chapter 19 by Dotsis et al. was concerned with models of stochastic volatility, primarily using the variance as a measure of volatility. Another measure of volatility is provided by the range of a price; for example, the trading-day range of an asset price. In turn, the range can be related to the interval between consecutive trades, known as the duration. Duration is a concept that is familiar from counting processes, such as the Poisson framework for modeling arrivals (for example, at a supermarket checkout or an airport departure gate).

Chapter 21 by Ruey Tsay provides an introduction to modeling duration that is illustrated with a number of financial examples. That duration can carry information about market behavior is evident not only from stock markets, where a cluster of short durations indicates active trading relating to, for example, information arrival, but from many other markets; for example, durations in the housing market and their relation to banking failure. The interest in duration modeling owes much to Engle and Russell (1998), who introduced the autoregressive conditional duration (ACD) model for irregularly spaced transactions data. Just as the ARCH/GARCH family of models was introduced to capture volatility clusters, the ACD model captures short-duration clusters indicating the persistence of periods of active trading, perhaps uncovering and evaluating information arrivals. To see how an ACD model works, let the ith duration be denoted xi = ti − ti−1, where ti is the time of the ith event, and model xi as xi = ψiεi, where {εi} is an i.i.d. sequence and β(L)ψi = α0 + α(L)xi, where α(L) and β(L) are lag polynomials; this is the familiar GARCH form and, when the innovations εi are standard exponential, the model is known as the exponential ACD or EACD. To accommodate the criticism that the hazard function of duration is not constant over time, unlike the assumption implicit in the EACD model, alternative innovation distributions have been introduced, specifically the Weibull and the Gamma, leading to the Weibull ACD (WACD) and the Gamma ACD (GACD). The chapter includes some motivating examples. Evidence of duration clusters is shown in Figures 21.1, 21.4 and 21.7a for IBM, Apple and General Motors stock, respectively. The development and application of duration models can then exploit the development of other forms of time series models, such as (nonlinear) threshold autoregressive (TAR) models. ACD models have also been developed to incorporate explanatory variables; an example is provided which shows that the change to decimal “tick” sizes in the US stock markets reduced the price volatility of Apple stock.
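
The clustering mechanism is easy to see by simulation. The sketch below – our illustration, with arbitrary parameter values – generates an EACD(1,1) process in which a short duration raises the likelihood of further short durations.

```python
import numpy as np

# EACD(1,1): x_i = psi_i * eps_i, with eps_i i.i.d. standard exponential and
#            psi_i = alpha0 + alpha1 * x_{i-1} + beta1 * psi_{i-1}.
rng = np.random.default_rng(2)
alpha0, alpha1, beta1 = 0.1, 0.2, 0.7      # alpha1 + beta1 < 1 ensures stationarity
n = 5000
x, psi = np.empty(n), np.empty(n)
psi[0] = alpha0 / (1 - alpha1 - beta1)     # unconditional mean duration
x[0] = psi[0] * rng.exponential()
for i in range(1, n):
    psi[i] = alpha0 + alpha1 * x[i - 1] + beta1 * psi[i - 1]
    x[i] = psi[i] * rng.exponential()      # mean-one exponential innovation
# Persistence in psi produces the clusters of short durations described above.
print(f"mean duration {x.mean():.3f} (theory: {alpha0 / (1 - alpha1 - beta1):.3f})")
```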

The determination of exchange rates has long been of interest to econometricians and, as a result, there is an extensive literature that includes two constituencies: on the one hand, there have been contributions from economists who have employed econometric techniques and, on the other, to risk a simple bifurcation, the modeling of exchange rates has become an area in which to test out advances in nonlinear econometrics. Chapter 22, by Efthymios Pavlidis, Ivan Paya and David Peel, provides an evaluative overview of this very substantial area. As they note, the combination of econometric developments, the availability of high-quality and high-frequency data, and the move to floating exchange rates in 1973 has led to a considerable number of empirical papers in this area. Thus, the question “Where are we now?” is not one with a short answer. Perhaps prototypically, the econometrics of exchange rates is an area that has moved in tandem with developments in the economic theory of exchange rates (for the latter, the reader is referred to, for example, Sarno and Taylor, 2002). An enduring question over the last thirty years (at least), and one that is touched upon in two earlier chapters (Juselius, Chapter 8, and Gil-Alana and Hualde, Chapter 10), has been the status of purchasing power parity (PPP), regarded as a bedrock of economic theory and macroeconomic models. One early finding that has puzzled many is the apparent failure to find PPP supported across a range of different exchange rates and sample periods. Consider a stylized version of the PPP puzzle: there are two countries, with a freely floating exchange rate, flexible prices (for tradable goods and services), no trade constraints, and so on. In such a situation, at least in the long run, the nominal exchange rate should equal the ratio of the (aggregate) price levels; otherwise, as the price ratio moves, the nominal exchange rate does not compensate for such movements and the real exchange rate varies over time, contradicting PPP; indeed, on this basis the exchange rate is not tied to what is happening to prices. Early studies used an essentially linear framework – for example, ARMA models combined with unit root tests – to evaluate PPP, and rarely found that it was supported by the data; moreover, estimated speeds of adjustment to shocks were so slow as to be implausible. Another puzzle, in which tests indicated that the theory (of efficient speculative markets) was not supported, was the “forward bias puzzle.” In this case, the prediction was that prices should fully reflect publicly available information, so that it should not be possible to make a systematic (abnormal) return; however, this appeared not to be the case. In this chapter, Pavlidis et al. carefully dissect these and other puzzles and show how the move away from simple linear models to a range of essentially nonlinear models, the development and application of multivariate models, and the use of panel data methods have provided some explanation of the exchange rate “puzzles.”
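
A minimal version of the linear testing approach described above might look as follows; the real exchange rate is simulated as a near-unit-root AR(1), and the adfuller routine from the statsmodels package (our choice of tooling, not the authors') supplies the unit root test.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Long-run PPP implies the log real exchange rate q_t = s_t - (p_t - p*_t) is
# mean-reverting; a unit root in q_t contradicts it. Here q_t is simulated as a
# very persistent AR(1): with phi = 0.99 the half-life of a shock is about 69
# periods (roughly six years of monthly data), in line with the PPP puzzle.
rng = np.random.default_rng(3)
n, phi = 300, 0.99
q = np.empty(n)
q[0] = 0.0
for t in range(1, n):
    q[t] = phi * q[t - 1] + 0.02 * rng.standard_normal()
stat, pvalue, *_ = adfuller(q)
# With persistence this close to a unit root, the ADF test usually fails to
# reject non-stationarity in samples of this size, even though q_t is stationary.
print(f"ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
```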

Part VIII of this volume of the Handbook comprises three chapters related to what has become referred to as “growth econometrics”; broadly speaking, this is the area concerned with variations in growth rates and productivity levels across countries or regions. Chapters 23 and 24 are a coordinated pair by Steven Durlauf, Paul Johnson and Jonathan Temple; in addition, looking ahead, Chapter 27 by Sergio Rey and Julie Le Gallo takes up aspects of growth econometrics, with an emphasis on spatial connections. In Chapter 23, Durlauf et al. focus on the econometrics of convergence. Of course, convergence could and does mean a number of things: first, the convergence of what? Usually this is a measure of income or output but, in principle, the question of whether two (or more) economies are/have converged relates to multiple measures – for example, output, inflation, unemployment rates, and so on – and possibly includes measures of social welfare, such as literacy and mortality rates. The first concept to be considered in Chapter 23 is β-convergence (so called because the key regression coefficient is referred to as β): consider two countries; there is β-convergence if the one with the lower initial income grows faster than the other and so “catches up” with the higher-income country. Naturally, underlying the concept of convergence is an economic model, typically a neoclassical growth model (with diminishing returns to capital and labor), which indicates the sources of economic growth and a steady state which the economy will (eventually) attain. At its simplest, growth econometrics leads to cross-country regressions of output growth rates on variables motivated by the underlying growth model and, usually, some “control” variables that are additionally thought to influence the growth rate. It is the wide range of choice for these control variables, and the resultant multiplicity of studies, that has led to the, perhaps pejorative, description of this activity as the “growth regression industry.” One response has been the technique of model averaging, so that no single model necessarily provides the empirical wisdom. A second central convergence concept is σ-convergence. As the notation suggests, this form of convergence relates to the cross-section dispersion of a measure, usually log per capita output, across countries. As Durlauf et al. note, whilst many studies use the log variance, other measures, such as the Gini coefficient or those suggested in Atkinson (1970), may be preferred. In this measure of convergence, a reduction in the dispersion measure across countries suggests that they are getting closer together. As in Chapter 22 on exchange rates, an important methodological conclusion of Durlauf et al. is that nonlinearity (due in this case to endogenous growth models) is likely to be an important modeling characteristic, one that is not well captured in many existing studies, whether based on cross-section or panel data.
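
On simulated data, the two concepts can be illustrated in a few lines. The sketch below – hypothetical numbers throughout – estimates β from a cross-country regression of average growth on initial log income and compares the cross-section dispersion of log income at the start and end of the period.

```python
import numpy as np

# beta-convergence: regress growth on initial log income; beta < 0 implies catch-up.
# sigma-convergence: compare the cross-section dispersion of log income over time.
rng = np.random.default_rng(4)
n = 80                                            # hypothetical countries
y0 = rng.normal(8.0, 1.0, n)                      # initial log per capita income
growth = 0.05 - 0.004 * y0 + 0.005 * rng.standard_normal(n)
X = np.column_stack([np.ones(n), y0])
alpha, beta = np.linalg.lstsq(X, growth, rcond=None)[0]
yT = y0 + 25 * growth                             # log income 25 periods later
print(f"estimated beta = {beta:.4f} (beta < 0 suggests beta-convergence)")
print(f"sd of log income: {y0.std():.3f} -> {yT.std():.3f} (a fall suggests sigma-convergence)")
```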

Having considered the question of convergence in Chapter 23, in Chapter 24 Durlauf et al. turn to the details of the methods of growth econometrics. Whilst concentrating on the methods, they first note some salient facts that inform the structure of the chapter. Broadly, these are that vast income disparities exist despite the general growth in real income; that distinct winners and losers have begun to emerge; and that, for many countries, growth rates have tended to slow while the dispersion of growth rates has increased. At the heart of the growth literature is the one-sector neoclassical growth model, transformed to yield an empirical form in terms of the growth rate of output per labor unit, such that growth is decomposed into growth due to technical progress and the gap between initial output per worker and its steady-state value. Typically, an error term is then added to the deterministic equation derived in this way and this forms the basis of a cross-country regression, usually augmented with “control” variables that are also thought to influence growth rates. However, as Durlauf et al. note, there are a number of problems with this approach: for example, the errors are implicitly assumed to be exchangeable, but country dependence of the errors violates this assumption; the plethora of candidate control variables leads to a multiplicity of empirical models; and parameters may be heterogeneous across countries. To assess the question of model uncertainty, an extreme bounds analysis (Leamer, 1983) can be carried out, and model averaging, as in a Bayesian analysis, can be fruitful. Parameter heterogeneity is related to the Harberger (1987) criticism that questions the inclusion of countries with different characteristics in a cross-country regression. The key to criticisms of this nature is the meaning of such regressions: is there a data-generating process (DGP) that these regressions can be taken as empirically parameterizing? The chapter continues by providing, inter alia, an overview of the different kinds of data that have been used and an assessment of the econometric problems that have arisen and how they have been solved; the conclusion evaluates the current state of growth econometrics and suggests directions for future research.
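
The model averaging response to control-variable uncertainty can be sketched very simply; in the code below, BIC-based weights stand in for posterior model probabilities (a common approximation), and all data and variable choices are hypothetical.

```python
import numpy as np
from itertools import combinations

# Average the coefficient on initial income across all subsets of candidate
# controls, weighting each model by exp(-BIC/2) as a stand-in for its posterior
# probability. Data are simulated; only the first control truly matters.
rng = np.random.default_rng(5)
n, k = 80, 3
y0 = rng.normal(8.0, 1.0, n)                       # initial log per capita income
Z = rng.standard_normal((n, k))                    # candidate control variables
g = 0.05 - 0.004 * y0 + 0.01 * Z[:, 0] + 0.005 * rng.standard_normal(n)
betas, bics = [], []
for size in range(k + 1):
    for subset in combinations(range(k), size):
        X = np.column_stack([np.ones(n), y0, Z[:, list(subset)]])
        b = np.linalg.lstsq(X, g, rcond=None)[0]
        rss = np.sum((g - X @ b) ** 2)
        bics.append(n * np.log(rss / n) + X.shape[1] * np.log(n))
        betas.append(b[1])                         # coefficient on initial income
bics = np.array(bics)
w = np.exp(-0.5 * (bics - bics.min()))             # rescaled to avoid overflow
w /= w.sum()
print(f"model-averaged coefficient on initial income: {np.dot(w, betas):.4f}")
```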

A concern with long antecedents is the relationship between financial development and growth: is there a causal relationship running from the former to the latter? In Chapter 25, Thorsten Beck evaluates how this key question has been approached from an econometric perspective. Do financial institutions facilitate economic growth, for example by reducing information asymmetries and transaction costs? Amongst other functions, as Beck notes, financial institutions provide payment services, pool and allocate savings, evaluate information, exercise corporate governance and diversify risk. It would seem, a priori, that the provision of such services must surely move out the aggregate output frontier. However, merely finding positive correlations between indicators of financial development, such as monetization measures, the development of banking institutions and stock markets, and economic growth is insufficient evidence from an econometric viewpoint. One of the most fundamental problems in econometrics is that of identification: by themselves, the correlations do not provide evidence of a causal direction. Beck takes the reader through the detail of this problem and how it has been approached in the finance-growth econometric literature. A classical method for dealing with endogenous regressors is instrumental variables (IV) and, in this context, some ingenuity has been shown in suggesting such variables, including exogenous country characteristics; for example, settler mortality, latitude and ethnic fractionalization. Early regression-based studies used cross-section data on a number of countries; however, more recent datasets include dynamic panels, and methods include GMM and cointegration. More recent developments have been able to access data at the firm and household level, and this has led to much larger samples being used. For example, Beck, Demirgüç-Kunt and Maksimovic (2005) use a sample of over 4,000 firms in 54 countries to relate firm growth to firm-level financing obstacles as well as other variables, including a country-level financial indicator. As Beck notes, the evidence suggests a strong case for a causal link running from financial development to economic growth, but there is still much to be done, both in terms of techniques, such as GMM, and in exploiting advances at the micro level.
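
The identification logic is the standard IV one. The sketch below uses simulated data with an instrument that is valid by construction – precisely what real applications must argue for – and shows how two-stage least squares removes the bias that ordinary least squares suffers when financial development and growth share a common shock.

```python
import numpy as np

# Simulated finance-growth cross-section: "fd" (financial development) is
# endogenous because the shock u drives both fd and growth; "origin" is a
# hypothetical exogenous country characteristic used as an instrument.
rng = np.random.default_rng(6)
n = 70
origin = rng.standard_normal(n)                    # instrument
u = rng.standard_normal(n)                         # common shock -> endogeneity
fd = 0.8 * origin + 0.5 * u + 0.3 * rng.standard_normal(n)
growth = 0.4 * fd + 0.5 * u + 0.3 * rng.standard_normal(n)   # true effect is 0.4

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

Z = np.column_stack([np.ones(n), origin])
fd_hat = Z @ ols(Z, fd)                            # first stage: project fd on Z
b_ols = ols(np.column_stack([np.ones(n), fd]), growth)[1]
b_2sls = ols(np.column_stack([np.ones(n), fd_hat]), growth)[1]
print(f"OLS slope (biased upwards): {b_ols:.3f}; 2SLS slope: {b_2sls:.3f}")
```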

In Volume 1 of the Handbook, we highlighted recent developments in theoretical econometrics as applied to problems with a spatial dimension; this is an area that has grown in application and importance, particularly over the last decade, and it is natural that we should continue to emphasize its developmental importance by including two chapters in Part IX. These chapters show how spatial econometrics can bring into focus the importance of the dimension of space in economic decisions and the particular econometric problems, and solutions, that result. In Chapter 26, Luc Anselin and Nancy Lozano-Gracia consider spatial hedonic models applied to house prices. Hedonic price models are familiar from microeconomics and, in particular, from the seminal contributions of Lancaster (1966) and Rosen (1974). In the context of house prices, there are key characteristics, such as aspects of neighborhood, proximity to parks and schools, and measures of environmental quality, that are critical in assigning a value to a house. These characteristics lead to the specification of a hedonic price function that provides an estimate of the marginal willingness to pay (MWTP) for a characteristic; a related aim, but one not so consistently pursued, is to retrieve the implied inverse demand function for house characteristics. Two key problems in the estimation of hedonic house price functions, in particular, are spatial dependence and spatial heterogeneity. As Anselin and Lozano-Gracia note, spatial dependence, or spatial autocorrelation, recognizes the importance of geographical or, more generally, network space in leading to a structure in the covariance matrix between observations. Whilst there is an analogy with temporal autocorrelation, spatial autocorrelation is not simply an extension of that concept but requires its own conceptualization and methods. Spatial heterogeneity can be viewed as a special case of structural instability; two (of several) examples of heterogeneity are spatial regimes (for example, ethnically based sub-neighborhoods) and spatially varying coefficients (for example, different valuations of housing and neighborhood characteristics). In this chapter, Anselin and Lozano-Gracia provide a critical overview of methods, such as spatial two-stage least squares and spatial feasible GLS, a summary of the literature on spatial dependence and spatial heterogeneity, and a discussion of the remaining methodological challenges.
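
A standard first diagnostic for spatial dependence is Moran's I applied to regression residuals. The sketch below computes the statistic directly from simulated locations and residuals, using a distance-band, row-standardized weight matrix – our simplifying choices, not necessarily the chapter's.

```python
import numpy as np

# Moran's I = (n / S0) * (e'We) / (e'e), with e the demeaned residuals, W the
# spatial weight matrix and S0 the sum of its elements. Here residuals are
# spatially smoothed by construction, so positive dependence should appear.
rng = np.random.default_rng(7)
n = 100
xy = rng.uniform(0, 10, (n, 2))                       # house locations
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
W = ((d > 0) & (d < 1.5)).astype(float)               # neighbors within distance 1.5
W /= np.maximum(W.sum(axis=1, keepdims=True), 1)      # row-standardize
z = rng.standard_normal(n)
e = z + W @ z                                         # "residuals" with spatial structure
ec = e - e.mean()
I = (n / W.sum()) * (ec @ W @ ec) / (ec @ ec)
# Under no spatial dependence E[I] = -1/(n-1); inference typically uses a normal
# approximation or permutation of the residuals across locations.
print(f"Moran's I = {I:.3f} versus E[I] = {-1 / (n - 1):.3f} under independence")
```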

In Chapter 27, Sergio Rey and Julie Le Gallo consider an explicitly spatial analysis of economic convergence. Recall that Chapter 23, by Durlauf et al., is concerned with the growing interest in the econometrics of convergence; for example, whether there has been an emergence of convergence clubs, perhaps suggesting “winners and losers” in the growth race. There is an explicitly spatial dimension to the evaluation of convergence: witness, for example, the literature on the convergence of European countries or regions, the convergence of US states, and so on. Rey and Le Gallo bring this spatial dimension to the fore. The recognition of the importance of this dimension brings with it a number of problems, such as spatial dependence and spatial heterogeneity; these problems are highlighted in Chapter 26, but in Chapter 27 they are put in the context of the convergence of geographical units. Whilst Rey and Le Gallo consider what might be regarded as purely econometric approaches to these problems, they also show how exploratory data analysis (EDA), extended to the spatial context, has been used to inform the theoretical and empirical analysis of convergence. As an example, a typical focus in a non-spatial context is on σ-convergence, which relates to a cross-sectional dispersion measure, such as the variance of log per capita output, across regions or countries. However, in a broader context, there is interest in the complete distribution of regional incomes and the dynamics of distributional change, leading, for example, to the development of spatial Markov models, with associated concepts such as spatial mobility and spatial transition. EDA can then provide the tools to visualize what is happening over time: see, for example, the space-time paths and the transitions of regional income densities shown in Figures 27.5 and 27.6. Rey and Le Gallo suggest that explicit recognition of the spatial dimension of convergence, combined with the use of EDA and its extensions to include the spatial element, offers a fruitful way of combining different methods to inform the overall view on convergence.
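
The spatial Markov idea can be conveyed with a toy example. In the sketch below – simulated incomes, with regions placed on a ring to give a simple neighbor structure – transitions between income classes are tabulated separately by neighborhood context, so that the estimated dynamics are allowed to differ with spatial context.

```python
import numpy as np

# Tabulate transitions between income classes (below/above the cross-section
# median) conditional on whether a region has at least one "rich" neighbor.
rng = np.random.default_rng(8)
n, T = 50, 30
y = 9.0 + np.cumsum(0.05 * rng.standard_normal((T, n)), axis=0)   # log incomes
state = (y > np.median(y, axis=1, keepdims=True)).astype(int)     # 0 poor, 1 rich
rich_nbr = (np.roll(state, 1, axis=1) + np.roll(state, -1, axis=1)) >= 1
counts = np.zeros((2, 2, 2))                       # [neighbor context, from, to]
for t in range(T - 1):
    for i in range(n):
        counts[int(rich_nbr[t, i]), state[t, i], state[t + 1, i]] += 1
P = counts / np.maximum(counts.sum(axis=2, keepdims=True), 1)
print("transition matrix, no rich neighbor:\n", P[0].round(2))
print("transition matrix, rich neighbor present:\n", P[1].round(2))
```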

Part X comprises two chapters on applied econometrics and its relationship to computing. In Chapter 28, Bruce McCullough considers the problem of testing econometric software. The importance of this issue is hard to overstate. Econometric programs that are inaccurate, for whatever reason, will produce misleading results, not only for the individual researcher but, if published, for the profession more generally, and will lead to applications that are impossible to replicate. The development of sophisticated methods of estimation means that we must also be ever-vigilant in ensuring that software meets established standards of accuracy. A seminal contribution to the development of accuracy benchmarks was Longley (1967). As McCullough notes, Longley worked out by hand the solution to a linear regression problem with a constant and six explanatory variables; when the problem was run through the computers of the time, he found that the answers were worryingly different. Of course, the Longley benchmark is now passed by the econometric packages that are familiar to applied econometricians. However, the nature of the problems facing the profession has changed (sophisticated estimators, large datasets, simulation-based estimators) and McCullough's results imply that there is no reason for complacency. Many econometric estimators involve problems of a nonlinear nature – for example, the GARCH and multivariate GARCH estimators and the probit estimator – and it is where a nonlinear solver is involved that the user will find problems, especially when relying on the default options. Another area that has seen substantial growth in the last two decades has been the use of Monte Carlo experimentation, an area that makes fundamental use of random numbers, and hence any package must have a reliable random number generator (RNG). But are the numbers so generated actually random? The answer is: not necessarily! (The reader may wish to refer to Volume 1 of this Handbook, which includes a chapter by Jurgen Doornik on random number generation.) The importance of maintaining standards of numerical accuracy has been recognized in the National Institute of Standards and Technology's Statistical Reference Datasets, which have resulted in a number of articles using these datasets to evaluate software for econometric problems. To illustrate some of the issues in software evaluation, for example in establishing a benchmark, McCullough includes a study of the accuracy of a number of packages in estimating ARMA models. The central methods for the estimation of such models include unconditional least squares (UCLS), conditional least squares (CLS) and exact maximum likelihood. The questions of interest lie not only in the accuracy of the point estimates from these methods in different packages, but also in the method of standard error calculation being used. Overall, McCullough concludes that we, as a profession, have some way to go in ensuring that the software being used is accurate, that the underlying methods are well documented, and that published results are replicable.
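
The flavor of the Longley problem is easy to reproduce with synthetic data: with trending, nearly collinear regressors, solving the normal equations (which squares the condition number) can destroy most significant digits, while an orthogonalization-based solver does not. The data below are engineered for illustration, not Longley's.

```python
import numpy as np

# Ill-conditioned least squares, Longley-style: a trend plus a near-duplicate.
rng = np.random.default_rng(9)
n = 50
x1 = np.linspace(1947.0, 1962.0, n)                 # trending regressor
x2 = x1 + 1e-5 * rng.standard_normal(n)             # nearly identical regressor
X = np.column_stack([np.ones(n), x1, x2])
beta_true = np.array([1.0, 2.0, -1.0])
y = X @ beta_true                                   # exact, so errors are purely numerical
b_ne = np.linalg.solve(X.T @ X, X.T @ y)            # normal equations
b_ls = np.linalg.lstsq(X, y, rcond=None)[0]         # SVD-based solver
print(f"condition number of X: {np.linalg.cond(X):.1e}")
print(f"max coefficient error, normal equations: {np.abs(b_ne - beta_true).max():.2e}")
print(f"max coefficient error, lstsq:            {np.abs(b_ls - beta_true).max():.2e}")
```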

In Chapter 29, Marius Ooms takes a historical perspective on the nature of applied econometrics as it has been represented by publications and reviews of econometric and statistical software in the Journal of Applied Econometrics (JAE).

Over the 14-year review period, 1995–2008, there were 513 research articles published in the JAE, of which 253 were categorized as applications in time series, 140 as panel data applications and 105 as cross-section applications. Ooms notes that there has been a gradual shift from macroeconometrics to microeconometrics and applications using panel data. The software review section of the JAE has been a regular feature, enabling an analysis of the programs that have been in use – and continue to be in use, reflecting the development policy of the providers. This section is likely to be a very useful summary for research and teaching purposes. Ooms also notes the growth of high-level programming languages, such as Gauss, MATLAB, Stata and Ox, and illustrates their use with a simple program. In combination, the profession is now very much better served for econometric software than it was twenty years ago. Of course, these developments have not taken place in isolation, but rather as a response to developments in theoretical and applied econometrics. A leading example in this context, noted by Ooms, is the Arellano and Bond (1991) approach to the estimation of dynamic panel data (DPD) models, which led to the widespread implementation of new code in existing software and many new applications; an example in the area of time series applications is the growth of ARCH- and GARCH-based methods and the implementation of estimation routines in econometric software. As noted in Chapter 28 by McCullough, reproducibility of results is a key aspect of the progression and reputation of applied econometrics. Results that are irreproducible, by reason of either inaccurate software or unavailability of data, will do long-term harm to the profession. In this respect the JAE, through Hashem Pesaran's initiative, has been a leader in requiring authors to provide the data and code which they used. The JAE data archive is indexed and carefully managed, and provides the standard for other journals.

As a final comment, and as we hope is evident from the chapters contained in this volume, one cannot help but be struck by the ingenuity of those involved in pushing forward the frontiers of applied econometrics. Had this volume been compiled even, say, just twenty years ago, how different it would have been! Viewed from above, the landscape of applied econometrics has changed markedly. Time series econometrics and macroeconometrics, whilst still important, are no longer predominant. The combination of the availability of large datasets of a microeconomic nature and enormous increases in computing power has meant that econometrics is now applied to a vast range of areas. What will the next twenty years bring?

Finally, thanks are due to many people for enabling this volume to appear. First, our thanks go collectively to the authors who have cooperated in contributing chapters; they have, without exception, responded positively to our several, and sometimes many, requests, especially in meeting deadlines and accommodating editorial suggestions. We hope that the quality of these chapters will be an evident record of the way the vision of the Handbook has been embraced. We would also like to record our gratitude to the Advisory Editors for this volume: Bill Greene, Philip Hans Franses, Hashem Pesaran and Aman Ullah, whose support was invaluable, especially at an early stage.

Thanks also go to the production team at Palgrave Macmillan, only some of whom can be named individually: Taiba Batool, the commissioning editor; Ray Addicott, the production editor; and Tracey Day, the indefatigable copy-editor. A special mention goes to Lorna Eames, secretary to one of the editors, for her willing and invaluable help at several stages of the project.

References

Arellano, M. and S. Bond (1991) Some tests of specification for panel data: Monte Carlo evidence and an application to employment equations. Review of Economic Studies 58, 177–97.
Atkinson, A.B. (1970) On the measurement of inequality. Journal of Economic Theory 2, 244–63.
Bates, J.M. and C.W.J. Granger (1969) The combination of forecasts. Operations Research Quarterly 20, 451–68.
Beck, T., A. Demirgüç-Kunt and V. Maksimovic (2005) Financial and legal constraints to firm growth: does firm size matter? Journal of Finance 60, 137–77.
Clements, M.P. and D.F. Hendry (1999) Forecasting Non-stationary Economic Time Series. Cambridge, Mass.: MIT Press.
Doornik, J.A. (2007) Autometrics. Working paper, Economics Department, University of Oxford.
Engle, R.F. and C.W.J. Granger (1987) Co-integration and error-correction: representation, estimation and testing. Econometrica 55, 251–76.
Engle, R.F. and J.R. Russell (1998) Autoregressive conditional duration: a new model for irregularly spaced transaction data. Econometrica 66, 1127–62.
Ericsson, N.R. and J.S. Irons (1995) The Lucas critique in practice: theory without measurement. In K.D. Hoover (ed.), Macroeconometrics: Developments, Tensions and Prospects, Ch. 8. Dordrecht: Kluwer Academic Publishers.
Giacomini, R. and H. White (2006) Tests of conditional predictive ability. Econometrica 74, 1545–78.
Granger, C.W.J. and R. Joyeux (1980) An introduction to long memory time series and fractional differencing. Journal of Time Series Analysis 1, 15–29.
Harberger, A. (1987) Comment. In S. Fischer (ed.), Macroeconomics Annual 1987. Cambridge, Mass.: MIT Press.
Hendry, D.F. (1980) Econometrics: alchemy or science? Economica 47, 387–406.
Johansen, S. (1988) Statistical analysis of cointegration vectors. Journal of Economic Dynamics and Control 12, 231–54.
Koop, G., R. Strachan, H. van Dijk and M. Villani (2006) Bayesian approaches to cointegration. In T.C. Mills and K.D. Patterson (eds.), Palgrave Handbook of Econometrics, Volume 1: Econometric Theory, pp. 871–900. Basingstoke: Palgrave Macmillan.
Lancaster, K.J. (1966) A new approach to consumer theory. Journal of Political Economy 74, 132–56.
Leamer, E. (1983) Let’s take the con out of econometrics. American Economic Review 73, 31–43.
Longley, J.W. (1967) An appraisal of least-squares programs from the point of view of the user. Journal of the American Statistical Association 62, 819–41.
Lucas, R.E. (1976) Econometric policy evaluation: a critique. In K. Brunner and A. Meltzer (eds.), The Phillips Curve and Labour Markets, Volume 1 of Carnegie-Rochester Conferences on Public Policy, pp. 19–46. Amsterdam: North-Holland.
Magnus, J.R. and M.S. Morgan (eds.) (1999) Methodology and Tacit Knowledge: Two Experiments in Econometrics. Chichester: John Wiley and Sons.
Mikosch, T. (1998) Elementary Stochastic Calculus. London and New Jersey: World Scientific Publishers.
Nelson, C.R. and C.I. Plosser (1982) Trends and random walks in macroeconomic time series. Journal of Monetary Economics 10, 139–62.
Perron, P. (1989) The great crash, the oil price shock and the unit root hypothesis. Econometrica 57, 1361–401.
Poirier, D.J. and J.L. Tobias (2006) Bayesian econometrics. In T.C. Mills and K.D. Patterson (eds.), Palgrave Handbook of Econometrics, Volume 1: Econometric Theory, pp. 841–70. Basingstoke: Palgrave Macmillan.
Rosen, S.M. (1974) Hedonic prices and implicit markets: product differentiation in pure competition. Journal of Political Economy 82, 534–57.
Sarno, L. and M. Taylor (2002) The Economics of Exchange Rates. Cambridge and New York: Cambridge University Press.
Sims, C.A. (1980) Macroeconomics and reality. Econometrica 48, 1–48.
Sims, C.A. (2002) Solving linear rational expectations models. Computational Economics 20, 1–20.
Tobin, J. (1950) A survey of the theory of rationing. Econometrica 26, 24–36.