From Elementary Probability to Stochastic Differential Equations With MAPLE




Journal of the American Statistical Association. Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/uasa20

From Elementary Probability to Stochastic Differential Equations With MAPLE
Walter Gander, Swiss Federal Institute of Technology. Published online: 31 Dec 2011.

To cite this article: Walter Gander (2003), From Elementary Probability to Stochastic Differential Equations With MAPLE, Journal of the American Statistical Association, 98:461, 254–255, DOI: 10.1198/jasa.2003.s260

To link to this article: http://dx.doi.org/10.1198/jasa.2003.s260




254 Book Reviews

in (relational, probabilistic, and possibilistic) networks, evaluating the quality of learned models, algorithms for searching the model space, and a brief account of experimental results, based on the first author’s INES (induction of network structures) program, which can be downloaded at http://fuzzy.cs.uni-magdeburg.de/books/gm/software.html. The presentation is very compressed, with only three pages devoted to the experimental design, results on eight evaluation measures, three search procedures, and discussion. The authors’ evaluation of possibilistic methods is limited (p. 240): “We did not divide the dataset into training and test data. The main reason is that it is not clear how to evaluate a possibilistic network w.r.t. test data, since, obviously, the measure used to evaluate it w.r.t. the training data cannot be used to evaluate it w.r.t. the test data.”

Chapter 8, “Learning Local Structure,” concerns detecting and responding to regularities in a graphical model. Chapter 9 consists of 12 pages on the topic of inductive causation. The textual and polemical bulk of this chapter involves a critique of the assumptions underlying the inductive causation algorithm of Pearl and Verma (1991). The main text concludes with a 10-page chapter, “Applications,” that provides a series of prose descriptions of application of AT&T’s Advanced Pattern Recognition and Identification system for a large telecommunications database, application of a confidential (and not detailed) graphical modeling procedure for configuration of automobiles at Volkswagen, and a confidential application of INES to fault analysis at Daimler Chrysler. One appendix provides formal proofs of all theorems in the text, and another provides URLs for graphical modeling software tools. The authors are maintaining the list at http://fuzzy.cs.uni-magdeburg.de/books/gm/tools.html. The reference to TETRAD is obsolete; the URL http://www.phil.cmu.edu/tetrad should be used instead.

The quality of the book is mixed. It includes a useful collection of definitions and theorems on formal tools of relational algebra, graphical models, and measures of model quality. The material on relational algebra and modeling based on relational decompositions is unusual in statistics texts and is a valuable element of this book. The book provides considerable formalism and discussion of possibility theory, but unfortunately without a strong compelling argument or example to motivate interest in the approach. The applications and worked examples are sometimes frustratingly compressed, and do not make a strong case for possibility theory–based methods. Thus I believe that this book will be most appreciated by statisticians with an independent interest in fuzzy sets and possibility theory, and by those interested in how relational algebra can be used in conjunction with probability or possibility theory to develop graphical models.

Vincent J. Carey

Channing Laboratory, Harvard Medical School

REFERENCES

Dempster, A. P. (1968), “A Generalization of Bayesian Inference,” Journal of the Royal Statistical Society, Ser. B, 30, 205–232.

Lindley, D. V. (1975), Making Decisions, Chichester, UK: Wiley.

Parsons, S., and Hunter, A. (1998), “A Review of Uncertainty Handling Formalisms,” in Applications of Uncertainty Formalisms, eds. A. Hunter and S. Parsons, New York: Springer-Verlag.

Pearl, J., and Verma, T. S. (1991), “A Theory of Inferred Causation,” Proceedings of the International Conference on Principles of Knowledge Representation and Reasoning, 2, 441–452.

Zadeh, L. A. (1978), “Fuzzy Sets as a Basis for a Theory of Possibility,” Fuzzy Sets and Systems, 1, 1–28.

Conducting Meta-Analysis Using SAS.

Winfred Arthur, Jr., Winston Bennett, Jr., and Allen I. Huffcutt. Mahwah, NJ: Lawrence Erlbaum, 2001. ISBN 0-8058-3809-0. xvi + 188 pp. $19.95 (P).

Conducting Meta-Analysis Using SAS is a user’s guide for researchers planning to carry out a meta-analysis in the educational or social sciences. Its intention is to instruct the reader in how to carry out a meta-analysis generally and, in particular, how to use certain SAS procedures to carry out computations suggested by leading practitioners in the field. The authors have done a good job compiling a set of instructions that might encourage some researchers to successfully complete a reasonable empirical study in place of a more traditional qualitative literature review. The book presents a well-researched look at successful applications. The reference list includes many examples of influential meta-analyses in industrial/organizational psychology. Some of the statistical presentations might not satisfy most statisticians, but they are probably reasonably effective for the intended audience.

Meta-analysis is a tricky field for statisticians to discuss. The techniques developed for combining estimates of quantities like effect size and correlation are not conceptually difficult, but they require assumptions that are unlikely to be met in practice. One of the most vexing assumptions is the almost certain lack of homogeneity from study to study. Attempts to account for violations of the homogeneity assumption, such as stratifying the set of studies by some moderator variable and conducting a separate meta-analysis on each subset, seem simplistic. The resulting outcomes are still difficult to interpret. Conducting Meta-Analysis Using SAS does a good job of making the reader aware of these difficulties, and this would be its strength in practice. Additionally, the detailed instructions for using SAS PROC MEANS to compute a variety of meta-analytic statistics could be a great time-saver for researchers. The Preface claims that sample programs can be downloaded from the publisher’s website, but I was not able to find these. Substantial portions of the output are shown, so this lack of access to the software did not hinder reading, but it might prove a roadblock for a researcher.
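The core computation behind such combined estimates is simple enough to sketch outside SAS. The following Python fragment, an illustration and not code from the book, computes a sample-size-weighted mean correlation and the observed variance across studies in the bare-bones Hunter–Schmidt style; the study values are invented for the example:

```python
# Bare-bones meta-analytic summary: sample-size-weighted mean correlation
# and observed variance across studies (Hunter-Schmidt style).
# Illustration only; the book performs these computations with SAS PROC MEANS.

def bare_bones(correlations, sample_sizes):
    """Return the weighted mean correlation and the weighted observed variance."""
    total_n = sum(sample_sizes)
    r_bar = sum(n * r for r, n in zip(correlations, sample_sizes)) / total_n
    var_r = sum(n * (r - r_bar) ** 2
                for r, n in zip(correlations, sample_sizes)) / total_n
    return r_bar, var_r

# Three hypothetical studies with correlations and sample sizes.
r_bar, var_r = bare_bones([0.20, 0.35, 0.28], [50, 120, 80])
```

A stratified analysis of the kind criticized above would simply apply the same function to each subset of studies defined by a moderator variable.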

The book’s organization is sensible and well thought out. Chapter 1 presents a brief overview of a suggested procedure for conducting a meta-analysis. Chapter 5 presents additional details, with an emphasis on SAS programs. The advice presented is generally sound, although some additional discussion of the difficulties of selecting reasonable inclusion criteria would be welcome. Most notably, a description of the problem of selection bias and the need to search beyond the most obvious sources of studies is postponed to near the end of the book. Chapters 2 and 3 are the technical core of the work, presenting SAS programs for combining effect sizes and correlations. These chapters live up to the authors’ claim of independence, so that a researcher interested in one topic needs to read only one chapter. Chapter 4 includes an extensive discussion of outliers in meta-analytic data and provides a good technical extension to the material on inclusion criteria.

While this book is strong in applications, I found some difficulties in the presentation of statistical concepts. Perhaps the best example occurs in the first paragraph of the main text body, which reinforces the incorrect notion that the amount of information in a sample from a large population is more than marginally dependent on the size of the population. The authors state that “a typical sample of size 50 represents a meager 0.0000008 percent of the world’s population.” What is left unsaid is that a sample of size 50,000 would still represent a tiny percentage, but one such sample done carefully would certainly mitigate the need for a meta-analysis in most cases. The difficulty lies in the fact that 50 is small, not in the fact that 6 billion is large. Although the distinction is meaningful for those of us who are statisticians, perhaps it is of less import to scholars trying to make sense of a large number of similar studies. Used carefully by a researcher with considerable statistical training, or in consultation with a statistician, Conducting Meta-Analysis Using SAS would be of value to the research community.
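The point is easy to make concrete. Under simple random sampling, the standard error of a sample mean depends on n, with the population size N entering only through the finite-population correction, which is negligible for any large N. A quick numerical check (the numbers here are illustrative, not from the book):

```python
import math

def se_mean(sigma, n, N):
    """Standard error of a sample mean with the finite-population correction."""
    return (sigma / math.sqrt(n)) * math.sqrt((N - n) / (N - 1))

# For n = 50, the standard error is essentially the same whether the
# population is one million or six billion: what matters is n, not N.
se_million = se_mean(1.0, 50, 10**6)
se_world = se_mean(1.0, 50, 6 * 10**9)
```

Both values are indistinguishable from 1/sqrt(50); the population size contributes nothing of practical consequence.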

Richard J. Cleary

Bentley College

From Elementary Probability to Stochastic Differential Equations With MAPLE.

Sasha Cyganowski, Peter Kloeden, and Jerzy Ombach. Berlin: Springer-Verlag, 2002. ISBN 3-540-42666-3. vi + 310 pp. $44.95 (P).

This book covers quite a range of topics in 310 pages: basics of probability, measures and integrals, random variables and distributions, the most important distributions, parameters of probability distributions, numerical simulations and statistical inference, stochastic processes and calculus, stochastic differential equations, and numerical methods for stochastic differential equations. The authors have done a good job. The material is didactically well presented, and motivations are given before definitions are stated. MAPLE, a computer algebra system, is used to illustrate the mathematics and help readers understand the concepts.

Why use MAPLE and not a typical statistical software system such as S-PLUS? There are several reasons. The book is intended to be a textbook for students, not a toolbox for professional data analysts. Students may already be familiar with a computer algebra system like MAPLE, thus eliminating the learning step required to use a professional system. MAPLE—a general-purpose integrated environment for doing mathematics on a computer—is more flexible than a statistical software system. Unlimited precision allows the user to compute expressions like binomial(365, 20) · 20!/365^20 exactly as a rational number; there is no need to take precautions for possible overflow of intermediate results. MAPLE performs algebraic manipulations, and the authors make use of this feature in proofs and in computations of nonnumerical results. For instance, they compute the expectation of a random variable having the Poisson distribution by first defining the function

g := k -> exp(-lambda) * lambda^k / k!;

and then

E := sum(k * g(k), k = 0 .. infinity);

to get the result E = λ. This example might seem like “killing a fly with a sledgehammer,” but students can become preoccupied with tedious work and be distracted from the underlying principles. Using MAPLE removes some of the tedium, thus allowing more emphasis to be placed on principles.
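For readers without MAPLE, the exact-arithmetic point carries over to any environment with rational arithmetic. As an illustration (not code from the book), Python’s standard library can evaluate the binomial expression above, which is the probability that 20 people all have distinct birthdays, as an exact fraction:

```python
from fractions import Fraction
from math import comb, factorial

# Exact rational value of binomial(365, 20) * 20! / 365^20 --
# no overflow and no rounding of the huge intermediate integers.
p = Fraction(comb(365, 20) * factorial(20), 365 ** 20)
print(float(p))  # roughly 0.5886
```

The intermediate numerator has dozens of digits, yet the result is exact; converting to float only at the end mirrors MAPLE’s unlimited-precision behavior.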

Principles can also be stressed too much. Chapter 2 has a very good introduction to the theory of measure and integration; however, in the MAPLE session at the end of the chapter, too much emphasis is put on numerical methods with left, right, and middle sums, the trapezoid rule, and Simpson’s rule. The student might get a wrong impression of numerical methods for quadrature.

MAPLE offers packages for statistical computations. With MAPLE, it is easy to run probabilistic and statistical experiments and to visualize the results using the plot commands. Many functions are available to do simple simulations, such as rolling a die, and more complicated simulations, like Markov chains and Wiener processes. The authors of the book make use of the stats package and subpackages like statsplot and combinat. S. Cyganowski and P. Kloeden also developed the MAPLE stochastic package, which offers a number of functions for stochastic differential equations.

Almost every chapter contains paragraphs labeled “MAPLE Assistance.” These (mostly short) MAPLE scripts simulate or illustrate the preceding theory. Simulations are often used to confirm the theoretical results. The scripts can all be downloaded from the authors’ web pages. Each chapter also ends with a set of exercises, most of which do not require programming or the use of MAPLE.

The book can serve very well as a textbook for a modern course in computational statistics. It fits into the growing set of mathematical textbooks that use software systems for computer-aided mathematics. To date, more than 400 books have been written about MAPLE or provide MAPLE supplements. Of these, nine deal with probability and statistics (see www.maplesoft.com/publications/books/browseAll.shtml). For MATLAB, the situation is similar; there are more than 500 books, including 20 books in statistics and probability (see www.mathworks.com/support/books/index.jsp). From Elementary Probability to Stochastic Differential Equations with MAPLE is the only one that deals with stochastic differential equations.

Walter Gander

Swiss Federal Institute of Technology

Time Series Analysis by State-Space Methods.

J. Durbin and S. J. Koopman. New York: Oxford University Press, 2001. ISBN 0-19-852354-8. ix + 253 pp. $65.00 (H).

In this book, James Durbin and Siem Jan Koopman provide an interesting and fresh treatment of standard, linear Gaussian state-space methods as well as a thorough treatment of nonlinear, non-Gaussian methods by means of importance sampling. The text is presented in a largely self-contained manner that should appeal to students who are just becoming acquainted with the field and more experienced researchers alike. Students new to the field will benefit from the fact that the text presupposes only a basic knowledge of elementary statistics and matrix algebra, whereas more advanced students and researchers will benefit from the text’s clear and concise presentation of both old and new results, which makes this book especially useful as a reference.

At every stage, the book strikes an excellent balance, providing a detailed and rigorous treatment of the material while presenting a broad array of empirical examples, ranging from financial economics to environmental science, that enhance the theoretical exposition. The book also contains two chapters (9 and 14) devoted to providing useful illustrations of the methods developed. Additionally, the book is well integrated with two modern computing tools for state-space analysis—SsfPack and STAMP 6.0—developed and supported by one or more of the authors. In many instances, the book directs the reader to the specific functions within SsfPack or STAMP that can be used for empirical implementation of the methods discussed. In this way, the book provides an invaluable reference for those students and applied researchers wanting a text that links the theoretical treatment of state-space models with their practical implementation. The authors also maintain a website (www.ssfpack.com/dkbook) containing the data used for illustrations in the text as well as links to related research and sites for STAMP and SsfPack.

The book is organized in two parts. Part I deals with the linear Gaussian state-space model and covers the first two-thirds (Chaps. 1–9) of the text. The essential tools of forecasting, filtering, smoothing, and simulation of linear Gaussian systems are all treated in a concise, uncluttered manner. In particular, Chapters 2 and 3 cover the local level and other linear Gaussian models, Chapters 4 and 5 cover filtering and smoothing, and Chapters 6–9 cover computational aspects of the Kalman filter, including estimation and illustrations.
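The local level model that anchors those early chapters is y_t = alpha_t + eps_t with alpha_{t+1} = alpha_t + eta_t, for which the Kalman filter collapses to a few scalar recursions. The following Python sketch is an illustration of those recursions under a large-variance initialization, not code from the book or its companion software:

```python
def local_level_filter(y, sigma2_eps, sigma2_eta, a0=0.0, p0=1e7):
    # Scalar Kalman filter for the local level model:
    #   y_t       = alpha_t + eps_t,   eps_t ~ N(0, sigma2_eps)
    #   alpha_t+1 = alpha_t + eta_t,   eta_t ~ N(0, sigma2_eta)
    # A large p0 crudely approximates a diffuse initial state; the book's
    # exact initial filter handles the diffuse case analytically.
    a, p = a0, p0
    filtered = []
    for obs in y:
        f = p + sigma2_eps           # prediction-error variance F_t
        k = p / f                    # Kalman gain K_t
        a_filt = a + k * (obs - a)   # filtered mean E[alpha_t | y_1..y_t]
        p_filt = p * (1 - k)         # filtered variance
        filtered.append((a_filt, p_filt))
        a = a_filt                   # state transition is the identity
        p = p_filt + sigma2_eta      # add state noise for the next prediction
    return filtered

states = local_level_filter([1.0, 2.0, 3.0], sigma2_eps=1.0, sigma2_eta=1.0)
```

With the near-diffuse start, the first filtered mean essentially equals the first observation, after which the gain settles toward its steady-state value.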

In addition to a thorough exposition of the main tools of forecasting, smoothing, and simulating state-space models, the book provides a number of practical suggestions and tools that will be of interest to applied researchers. Examples include discussions of handling missing observations (Sec. 4.8), the univariate treatment of multivariate observations (Sec. 6.4), and spline smoothing (Sec. 3.11). An entire chapter devoted to the computational aspects of the Kalman filter (Chap. 6) is also included. A particularly useful feature of the first part of the text is the treatment (in Chaps. 2 and 7) of diagnostic checking of model residuals for departures from the linear model’s maintained assumptions. Apart from providing the reader with a useful set of diagnostics, the discussion foreshadows the necessity and relevance of the nonlinear and non-Gaussian techniques explored in Part II.

Estimation of standard linear models is handled from both the classical (Chap. 7) and Bayesian (Chap. 8) perspectives. The classical treatment covers the standard prediction-error decomposition of the likelihood and then moves on to a detailed treatment of the likelihood when some elements of the initial state vector are treated as diffuse. The details on computation of the score vector, as well as on numerical maximization algorithms such as BFGS and the EM algorithm, are discussed at length. The chapter on Bayesian methods of estimation is rather abbreviated and is more of an overview, providing a number of useful references. What is important about this chapter is that it introduces the reader to the idea of simulation through importance sampling, the central tool used in the development of the nonlinear, non-Gaussian methods of Part II.

Although many of the topics covered in Part I will be familiar to the initiated, there is a considerable amount of new material in this section of the text that derives from the authors’ original research. One of the major new developments contained in the first part of the text is the exact Kalman filter (Sec. 5.2) in the presence of a diffuse initial state vector, presented as an alternative to (and alongside) the augmented Kalman filter of Rosenberg (1973) and de Jong (1988, 1991). The authors provide a derivation of the exact filter as the variance of the diffuse elements of the initial state vector tends to infinity. Aside from its natural motivation, one principal advantage of the new exact filter is reduced computational burden. In comparing the exact filter with the augmented filter, the authors demonstrate that across a variety of standard state-space models, the new filter reduces the number of filtering computations by between 50% and 60%. The text also shows how the diffuse treatment of the initial state vector can ease the computation of disturbance and simulation smoothing. The effects of treating the initial state vector as diffuse on the likelihood function and on maximum likelihood estimation are also explored. Interestingly, the authors show that, in many relevant circumstances, use of the exact initial Kalman filter also eases computation of maximum likelihood estimates relative to those computed using the augmented Kalman filter.

Unlike Part I, all of the material in Part II is new, on the frontier of nonlinear and non-Gaussian state-space research. Much of this material is a direct result of the authors’ original research (most notably Durbin and Koopman 1997, 2000). Part II comprises five chapters (10–14) and accounts for the final one-third of the text. It opens with a chapter of illustrations and examples in which the standard linear assumptions of Part I are naturally
