
  • Solving Frontier Problems of Physics: The Decomposition Method

  • Fundamental Theories of Physics

An International Book Series on The Fundamental Theories of Physics: Their Clarification, Development and Application

Editor: ALWYN VAN DER MERWE, University of Denver, U.S.A.

    Editorial Advisory Board:

ASIM BARUT, University of Colorado, U.S.A.
BRIAN D. JOSEPHSON, University of Cambridge, U.K.
CLIVE KILMISTER, University of London, U.K.
GÜNTER LUDWIG, Philipps-Universität, Marburg, Germany
NATHAN ROSEN, Israel Institute of Technology, Israel
MENDEL SACHS, State University of New York at Buffalo, U.S.A.
ABDUS SALAM, International Centre for Theoretical Physics, Trieste, Italy
HANS-JÜRGEN TREDER, Zentralinstitut für Astrophysik der Akademie der Wissenschaften, Germany

Volume 60

• Solving Frontier Problems of Physics: The Decomposition Method

George Adomian
General Analytics Corporation, Athens, Georgia, U.S.A.

KLUWER ACADEMIC PUBLISHERS, DORDRECHT / BOSTON / LONDON

  • Library of Congress Cataloging-in-Publication Data

Adomian, G.
Solving frontier problems of physics : the decomposition method /
George Adomian. p. cm. -- (Fundamental theories of physics ; v. 60)
Includes index.
ISBN 0-7923-2644-X (alk. paper)
1. Decomposition method. 2. Mathematical physics. I. Title. II. Series.
QC20.7.D4A36 1994 530.1'594--dc20 93-39561

ISBN 0-7923-2644-X

Published by Kluwer Academic Publishers, P.O. Box 17, 3300 AA Dordrecht, The Netherlands.

Kluwer Academic Publishers incorporates the publishing programmes of D. Reidel, Martinus Nijhoff, Dr W. Junk and MTP Press. Sold and distributed in the U.S.A. and Canada by Kluwer Academic Publishers, 101 Philip Drive, Norwell, MA 02061, U.S.A.

In all other countries, sold and distributed by Kluwer Academic Publishers Group, P.O. Box 321, 3300 AH Dordrecht, The Netherlands.

Printed on acid-free paper

All Rights Reserved. © 1994 Kluwer Academic Publishers. No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without written permission from the copyright owner.

Printed in the Netherlands

• IN MEMORY OF MY FATHER AND MOTHER HAIG AND VARTUHI ADOMIAN

  • EARLIER WORKS BY THE AUTHOR

Applied Stochastic Processes, Academic Press, 1980.
Stochastic Systems, Academic Press, 1983; also Russian transl., ed. H.G. Volkova, Mir Publications, Moscow, 1987.
Partial Differential Equations (with R.E. Bellman), D. Reidel Publishing Co., 1985.
Nonlinear Stochastic Operator Equations, Academic Press, 1986.
Nonlinear Stochastic Systems Theory and Applications to Physics, Kluwer Academic Publishers, 1989.

  • TABLE OF CONTENTS

PREFACE
FOREWORD
CHAPTER 1    ON MODELLING PHYSICAL PHENOMENA
CHAPTER 2    THE DECOMPOSITION METHOD FOR ORDINARY DIFFERENTIAL EQUATIONS
CHAPTER 3    THE DECOMPOSITION METHOD IN SEVERAL DIMENSIONS
CHAPTER 4    DOUBLE DECOMPOSITION
CHAPTER 5    MODIFIED DECOMPOSITION
CHAPTER 6    APPLICATIONS OF MODIFIED DECOMPOSITION
CHAPTER 7    DECOMPOSITION SOLUTIONS FOR NEUMANN BOUNDARY CONDITIONS
CHAPTER 8    INTEGRAL BOUNDARY CONDITIONS
CHAPTER 9    BOUNDARY CONDITIONS AT INFINITY
CHAPTER 10   INTEGRAL EQUATIONS
CHAPTER 11   NONLINEAR OSCILLATIONS IN PHYSICAL SYSTEMS
CHAPTER 12   SOLUTION OF THE DUFFING EQUATION
CHAPTER 13   BOUNDARY-VALUE PROBLEMS WITH CLOSED IRREGULAR CONTOURS OR SURFACES
CHAPTER 14   APPLICATIONS IN PHYSICS
APPENDIX I   PADÉ AND SHANKS TRANSFORMS
APPENDIX II  ON STAGGERED SUMMATION OF DOUBLE DECOMPOSITION SERIES
APPENDIX III CAUCHY PRODUCTS OF INFINITE SERIES
INDEX

  • PREFACE

I discovered the very interesting Adomian method and met George Adomian himself some years ago at a conference held in the United States. This new technique was very surprising for me, an applied mathematician, because it allowed one to solve exactly nonlinear functional equations of various kinds (algebraic, differential, partial differential, integral, ...) without discretizing the equations or approximating the operators. The solution, when it exists, is found in a rapidly converging series form, and time and space are not discretized. At this time an important question arose: why does this technique, involving special kinds of polynomials (Adomian polynomials), converge? I worked on this subject with some young colleagues at my research institute and found that it was possible to connect the method to more well-known formulations where classical theorems (fixed point theorem, substituted series, ...) could be used. A general framework for decomposition methods has even been proposed by Lionel Gabet, one of my researchers, who has obtained a Ph.D. thesis on this subject. During this period a fruitful cooperation has developed between George Adomian and my research institute. We have frequently discussed advances and difficulties, and we exchange ideas and results.

With regard to this new book, I am very impressed by the quality and the importance of the work, in which the author uses the decomposition method for solving frontier problems of physics. Many concrete problems involving differential and partial differential equations (including the Navier-Stokes equations) are solved by means of the decomposition technique developed by Dr. Adomian. The basic ideas are clearly detailed with specific physical examples so that the method can be easily understood and used by researchers of various disciplines. One of the main objectives of this method is to provide a simple and unified technique for solving nonlinear functional equations.

Of course some problems remain open. For instance, practical convergence may be ensured even if the hypotheses of known methods are not satisfied. That means that there still exist opportunities for further theoretical studies to be done by pure or applied mathematicians, such as proving convergence in more general situations. Furthermore, it is not always easy to take into account the boundary conditions for complex domains. In conclusion, I think that this book is a fundamental contribution to the theory and practice of decomposition methods in functional analysis. It completes and clarifies the previous book of the author published by Kluwer in 1989. The decomposition method has now lost its mystery but it has won in seriousness and power. Dr. Adomian is to be congratulated for his fundamental contribution to functional and numerical analysis of complex systems.

Yves Cherruault
Professor, Director of Medimat
Université Pierre et Marie Curie (Paris VI)
Paris, France
September 9, 1993

  • FOREWORD

This book is intended for researchers and (primarily graduate) students of physics, applied mathematics, engineering, and other areas such as biomathematics and astrophysics where mathematical models of dynamical systems require quantitative solutions. A major part of the book deals with the necessary theory of the decomposition method and its generalizations since earlier works. A number of topics are not included here because they were dealt with previously. Some of these are delay equations, integro-differential equations, algebraic equations and large matrices, comparisons of decomposition with perturbation and hierarchy methods requiring closure approximation, stochastic differential equations, and stochastic processes [1]. Other topics had to be excluded due to time and space limitations as well as the objective of emphasizing utility in solving physical problems.

Recent works, especially by Professor Yves Cherruault in journal articles and by Lionel Gabet in a dissertation, have provided a rigorous theoretical foundation supporting the general effectiveness of the method of decomposition. The author believes that this method is relevant to the field of mathematics as well as physics because mathematics has been essentially a linear operator theory while we deal with a nonlinear world. Applications have shown that accurate and easily computed quantitative solutions can be determined for nonlinear dynamical systems without assumptions of "small" nonlinearity or computer-intensive methods.

The evolution of the research has suggested a theory to unify linear and nonlinear, ordinary or partial differential equations for solving initial or boundary-value problems efficiently. As such, it appears to be valuable in the background of applied mathematicians and theoretical or mathematical physicists. An important objective for physics is a methodology for solution of dynamical systems which yields verifiable and precise quantitative solutions to physical problems modelled by nonlinear partial differential equations in space and time. Analytical methods which do not require a change of the model equation into a mathematically more tractable, but necessarily less realistic, representation are of primary concern. Improvement of analytical methods would in turn allow more sophisticated modelling and possible further progress. The final justification of theories of physics is in the correspondence of predictions with nature rather than in rigorous proofs which may well


    restrict the stated problem to a more limited universe. The broad applicability of the methodology is a dividend which may allow a new approach to mathematics courses as well as being useful for the physicists who will shape our future understanding of the world.

    Recent applications by a growing community of users have included areas such as biology and medicine, hydrology, and semiconductors. In the author's opinion this method offers a fertile field for pure mathematicians and especially for doctoral students looking for dissertation topics. Many possibilities are included directly or indirectly. Some repetition of objectives and motivations (for research on decomposition and connections with standard methods) was believed to be appropriate to make various chapters relatively independent and permit convenient design of courses for different specialties and levels.

Partial differential equations are now solved more efficiently, with less computation, than in the author's earlier works. The Duffing oscillator and other generic oscillators are dealt with in depth. The last chapter concentrates on a number of frontier problems. Among these are the Navier-Stokes equations, the N-body problem, and the Yukawa-coupled Klein-Gordon-Schrödinger equation. The solutions of these involve no linearization, perturbation, or limit on stochasticity. The Navier-Stokes solution [2] differs from earlier analyses [3]. The system is fully dynamic, considering pressure changing as the velocity changes. It now allows high velocity and possible prediction of the onset of turbulence.

The references listed are not intended to be an exhaustive or even a partial bibliography of the valuable work of many researchers in these general areas. Only those papers are listed which were considered relevant to the precise area and method treated. (New work is appearing now at an accelerating rate by many authors for submission to journals or for dissertations and books. A continuing bibliography could be valuable to future contributors, and reprints received by the author will be recorded for this purpose.)

The author appreciates the advice, questions, comments, and collaboration of early workers in this field such as Professors R.E. Bellman, N. Bellomo, Dr. R. McCarty, and other researchers over the years; the important work by Professor Yves Cherruault on convergence and his much appreciated review of the entire manuscript; the support of my family; and the editing and valuable contributions of collaborator and friend, Randolph Rach, whose insights and willingness to share his time and knowledge on difficult problems have been an important resource. The book contains work originally typeset by Arlette

Revells and Karin Haag. The camera-ready manuscript was prepared with the dedicated effort of Karin Haag, assisted by William David. Laura and William David assumed responsibility for office management so that research results could be accelerated. Computer results on the Duffing equation were obtained by Dr. McLowery Elrod with the cooperation of the National Science Center Foundation headed by Dr. Fred C. Davison, who has long supported this work. Gratitude is due to Ronald E. Meyers, U.S. Army Research Laboratories, White Sands Missile Range, who supported much of this research and also contributed to some of the development. Thanks are also due to the Office of Naval Research, Naval Research Laboratories, and Paul Palo of the Naval Civil Engineering Laboratories, who have supported work directed toward applications as well as intensive courses at NRL and NCEL. The author would also like to thank Professor Alwyn Van der Merwe of the University of Denver for his encouragement that led to this book. Most of all, the unfailing support by my wife, Corinne, as well as her meticulous final editing, is deeply appreciated.

REFERENCES
1. G. Adomian, Stochastic Processes, Encyclopedia of Science and Technology, 16, 2nd ed., Academic Press (1992).
2. G. Adomian, An Analytic Solution of the Stochastic Navier-Stokes System, Foundations of Physics, 21, 831-834 (July 1991).
3. G. Adomian, Nonlinear Stochastic Systems Theory and Applications to Physics, Kluwer, 192-216 (1989).

  • ON MODELLING PHYSICAL PHENOMENA

Our use of the term "mathematical model" or "model" will refer to a set of consistent equations intended to describe the particular features or behavior of a physical system which we seek to understand. Thus, we can have different models of the system dependent on the questions of interest and on the features relevant to those questions. To derive an adequate mathematical description with a consistent set of equations and relevant conditions, we clearly must have in mind a purpose or objective and limit the problem to exclude factors irrelevant to our specific interest. We begin by considering the pertinent physical principles which govern the phenomena of interest along with the constitutive properties of materials with which the phenomena may interact.

Depending on the problem, a model may consist of algebraic equations, integral equations, or ordinary, partial, or coupled systems of differential equations. The equations can be nonlinear and stochastic in general, with linear or deterministic equations being special cases. (In some cases, we may have delays as well.) Combinations of these equations such as integro-differential equations also occur.

A model using differential equations must also include the initial/boundary conditions. Since nonlinear and nonlinear stochastic equations are extremely sensitive to small changes in inputs or initial conditions, solutions may change rather radically with such changes. Consequently, exact specification of the model is sometimes not a simple matter. Prediction of future behavior is therefore limited by the precision of the initial state. When significant nonlinearity is present, small changes (perhaps only 1%) in the system may make possible one or many different solutions. If small but appreciable randomness, or, possibly, accumulated round-off error in iterative calculation is present, we may observe a random change from one solution to another: an apparently chaotic behavior.

To model the phenomena, process, or system of interest, we first isolate the relevant parameters. From experiments, observations, and known relationships, we seek mathematical descriptions in the form of equations which we can then solve for desired quantities. This process is neither universal nor can it take everything into account; we must tailor the model to fit


  • the questions to which we need answers and neglect extraneous factors. Thus a model necessarily excludes the universe external to the problem and region of interest to simplify as much as possible, and reasonably retain only factors relevant to the desired solution.

Modelling is necessarily a compromise between physical realism and our ability to solve the resulting equations. Thus, development of understanding based on verifiable theory involves both modelling and analysis. Any incentive for more accurate or realistic modelling is limited by our ability to solve the equations; customary modelling uses restrictive assumptions so that well-known mathematics can be used. Our objective is to minimize or avoid altogether this compromise for mathematical tractability which requires linearization and superposition, perturbation, etc., and instead, to model the problem with its inherent nonlinearities and random fluctuations or uncertain data.

We do this because the decomposition method is intended to solve nonlinear and/or stochastic ordinary or partial differential equations, integro-differential equations, delay equations, matrix equations, etc., avoiding customary restrictive assumptions and methods, to allow solutions of more realistic models. If the deductions resulting from solution of this model differ from accurate observation of physical reality, then this would mean that the model is a poor one and we must re-model the problem. Hence, modelling and the solution procedure ought to be applied interactively. Since we will be dealing with a limited region of space-time which is of interest to the problem at hand, we must consider conditions on the boundaries of the region to specify the problem completely. If we are interested in dynamical problems such as a process evolving over time, then we must consider solutions as time increases from some initial time; i.e., we will require initial conditions. We will be interested generally in differential equations which express relations between functions and derivatives. These equations may involve use of functions, ordinary or partial derivatives, and nonlinearities and even stochastic processes to describe reality. Also, of course, initial and boundary conditions must be specified to make the problem completely determinable.

If the solution is to be valid, it must satisfy the differential equation and the properly specified conditions, so appropriate smoothness must exist. We have generally assumed that nonlinearities are analytic but will discuss some exceptions in a later chapter. An advantage, other than the fact that problems are considered more realistically than by customary constraints, is that


solutions are not obtained here by discretized methods: solutions are continuous and computationally much more efficient, as we shall see. If we can deal with a physical problem as it is, we can expect a useful solution, i.e., one in which the mathematical results correspond to reality. If our model is poor because the data are found from measurements which have some error, it is usual to require that a small change in the data must lead to a small change in the solution. This does not apply to nonlinear equations because small changes in initial data can cause significant changes in the solution, especially in stochastic equations. This is a problem of modelling. If the data are correct and the equation properly describes the problem, we expect a correct and convergent solution.

    The initial/boundary conditions for a specific partial differential equation, needless to say, cannot be arbitrarily assigned: they must be consistent with the physical problem being modelled.

Suppose we consider a solid body where u(x,y,z,t) represents a temperature at x,y,z at time t. If we consider a volume V within the body which is bounded by a smooth closed surface S and consider the change of heat in V during an interval (t₁,t₂), we have, following the derivation of N.S. Koshlyakov, M.M. Smirnov, and E.B. Gliner [1],

    Q₁ = ∫ from t₁ to t₂ [ ∮ over S k (∂u/∂n) ds ] dt

where n is the normal to S in the direction of decreasing temperatures and k is the internal heat conductivity, a positive function independent of the direction of the normal. The amount of heat to change the temperature of V is

    Q₂ = ∫∫∫ over V c(x,y,z) ρ(x,y,z) [u(x,y,z,t₂) − u(x,y,z,t₁)] dV

where c(x,y,z) is the specific heat and ρ(x,y,z) is the density. If heat sources with density g(x,y,z,t) exist in the body, we have

    Q₃ = ∫ from t₁ to t₂ [ ∫∫∫ over V g(x,y,z,t) dV ] dt

Since Q₂ = Q₁ + Q₃, it follows that

    cρ ∂u/∂t = div(k grad u) + g

If cρ and k are constants, we can write a² = k/cρ and f(x,y,z,t) = g(x,y,z,t)/cρ. Then

    ∂u/∂t = a²∇²u + f

(which neglects heat exchange between S and the surrounding medium). Now, to determine a solution, we require the temperature at an initial instant u(x,y,z,t = 0) and either the temperatures at every point of the surface or the heat flow on the surface. These are constraints or, commonly, the boundary conditions. If we do not neglect heat exchange to the surrounding medium, which is assumed to have uniform temperature u₁, a third boundary condition can be written as a(u − u₁) = −k ∂u/∂n on S (if we assume the coefficient of exchange a is uniform for all of S).

Thus the solution must satisfy the equation, the initial condition, and one of the above boundary conditions or constraints which make the problem specific. We have assumed a particular model which is formulated using fundamental physical laws such as conservation of energy, so the initial distribution must be physically correct and not arbitrary. If it is correct, it leads to a specific physically correct solution. The conditions and the equation must be consistent and physically correct. The conditions must be smooth, bounded, and physically realizable. The initial conditions must be consistent with the boundary conditions and the model. The derived "solution" is verified to be consistent with the model equation and the conditions and is therefore the solution.
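As a quick illustration of this requirement that a candidate solution satisfy both the equation and the conditions, the following sketch (not from the book; the particular separable solution is an assumption chosen only for the check) verifies symbolically that a standard solution of the homogeneous heat equation satisfies ∂u/∂t = a²∇²u and reads off the initial condition it implies.

```python
# Hedged illustration: check that a separable candidate solution of the
# homogeneous heat equation du/dt = a^2 * laplacian(u) satisfies the PDE,
# and read off its initial condition u(x,y,z,0).
# The choice of u below is an assumption for the demonstration only.
import sympy as sp

x, y, z, t, a = sp.symbols('x y z t a', real=True)
u = sp.exp(-3 * a**2 * t) * sp.sin(x) * sp.sin(y) * sp.sin(z)

lhs = sp.diff(u, t)                                   # du/dt
rhs = a**2 * (sp.diff(u, x, 2) + sp.diff(u, y, 2) + sp.diff(u, z, 2))

residual = sp.simplify(lhs - rhs)
print(residual)          # 0: the PDE is satisfied identically
print(u.subs(t, 0))      # the initial temperature distribution this u fixes
```

A consistent problem statement would then pair this u(x,y,z,0) with matching boundary values on S; assigning an unrelated boundary condition would leave the problem over-determined, as the text notes.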

NOTE: Koshlyakov et al. [1] state that we must specify u(t = 0) within the body and one of the boundary conditions such as u on S. However, S is not insulated from the body. The initial condition u(t = 0) fixes u on S also if surroundings are ignored. It seems that either one or the other should be enough in a specific problem, and if you give both, they must be consistent with each other and the model (equation). The same situation arises when, e.g., in a square or rectangular domain, we assign boundary conditions on the four sides, which means that physically we have discontinuity at the corners.


REFERENCE
1. N.S. Koshlyakov, M.M. Smirnov, and E.B. Gliner, Differential Equations of Mathematical Physics, North-Holland Publishers (1964).


  • THE DECOMPOSITION METHOD FOR ORDINARY DIFFERENTIAL EQUATIONS

A critically important problem in frontier science and technology is the physically correct solution of nonlinear and/or stochastic systems modelled by equations with initial/boundary conditions. The usual procedures of analysis necessarily change such problems in essential ways in order to make them mathematically tractable by established methods. Unfortunately, these changes necessarily

intensive research in attempting to develop codes for study of transonic and hypersonic flow. Because of the symbiosis between such existing methodology and supercomputers, as well as the complexity, these methods are computationally intensive. Massive printouts are the result, and functional dependences are difficult to see. We have a constant demand for faster computers, superconductivity, parallelism, etc., because of the necessity to cut down computation time. Thus a continuous solution and considerably decreased computation is evidently a desirable goal.

Closed-form analytical solutions are considered ideal when possible. However, they may necessitate changing the actual or real-life problem to a more tractable mathematical problem. Except for a small class of equations in which clever transformations can result in linear equations, it becomes necessary to resort to linearization or statistical linearization techniques, or assumptions of "weak nonlinearity," etc. What we get then is a solution of the simpler mathematical problem. The resulting solution can deviate significantly from the solution of the actual problem; nonlinear systems can be extremely sensitive to small changes. These small changes can occur because of inherent stochastic effects or computer errors; the resulting solutions (especially in strongly nonlinear equations) can show violent, erratic (or "chaotic") behavior. Of course, it is clear that considerable progress has been made with the generally used procedures, and, in many problems, these methods remain adequate. Thus, in problems which are close to linear, or where perturbation theory is adequate, excellent solutions are obtained.

In many frontier problems, however, we have strong nonlinearities or stochasticity in parameters, so that it becomes important to find a new approach, and that is our subject here.

We begin with the (deterministic) form Fu = g(t) where F is a nonlinear ordinary differential operator with linear and nonlinear terms. We could represent the linear term by Lu where L is the linear operator. In this case L would need to be easily invertible, which may not be the case, i.e., L⁻¹ may be difficult to obtain, with a consequently difficult integration. Instead, we write the linear term as Lu + Ru where we choose L as the highest-ordered derivative. Now L⁻¹ is simply an n-fold integration for an nth-order L. The remainder of the linear operator is R. (In cases where stochastic terms are present in the linear operator, we can include a stochastic operator term.) The nonlinear term is represented by Nu. Thus Lu + Ru + Nu = g, and we write

    Lu = g − Ru − Nu

For initial-value problems, define L⁻¹ for L = dⁿ/dtⁿ as the n-fold definite integration operator from 0 to t. For the operator L = d²/dt², for example, we have L⁻¹Lu = u − u(0) − tu′(0), and therefore

    u = u(0) + tu′(0) + L⁻¹g − L⁻¹Ru − L⁻¹Nu
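The operator identity L⁻¹Lu = u − u(0) − tu′(0) can be checked directly; a small sympy sketch (illustrative, with an arbitrary smooth test function as the only assumption):

```python
# Check that for L = d^2/dt^2, the two-fold definite integration from
# 0 to t recovers L^{-1} L u = u - u(0) - t u'(0).
# The test function u is an arbitrary smooth choice for illustration.
import sympy as sp

t, s1, s2 = sp.symbols('t s1 s2')
u = sp.exp(2 * t) + sp.sin(t)        # illustrative smooth u(t)

Lu = sp.diff(u, t, 2)                                  # L u = u''
inner = sp.integrate(Lu.subs(t, s2), (s2, 0, s1))      # first integration
LinvLu = sp.integrate(inner, (s1, 0, t))               # second integration

expected = u - u.subs(t, 0) - t * sp.diff(u, t).subs(t, 0)
print(sp.simplify(LinvLu - expected))   # 0: identity holds for this u
```

The same n-fold pattern gives L⁻¹Lu = u minus the first n Taylor terms at 0 for an nth-order L, which is why u(0), u′(0), ... reappear explicitly in the solution form above.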

For the same operator equation but now considering a boundary-value problem, we let L⁻¹ be an indefinite integral and write u = A + Bt for the first two terms, and evaluate A, B from the given conditions. The first three terms are identified as u₀ in the assumed decomposition u = Σ_{n=0}^∞ uₙ. Finally,

assuming Nu is analytic, we write Nu = Σ_{n=0}^∞ Aₙ(u₀, u₁, ..., uₙ) where the Aₙ are specially generated (Adomian) polynomials for the specific nonlinearity. They depend only on the u₀ to uₙ components and form a rapidly convergent series. For Nu = f(u), the first polynomial is given as

    A₀ = f(u₀)

and the Aₙ can be found from the formula (for n ≥ 1)

    Aₙ = Σ_{ν=1}^n c(ν,n) f⁽ᵛ⁾(u₀)

In the linear case where f(u) = u, the Aₙ reduce to uₙ. Otherwise Aₙ = Aₙ(u₀, u₁, ..., uₙ). For f(u) = u², for example, A₀ = u₀², A₁ = 2u₀u₁, A₂ = u₁² + 2u₀u₂, A₃ = 2u₁u₂ + 2u₀u₃, ... . It is to be noted that, in this scheme, the sums of the subscripts in each term of the Aₙ are equal to n. The c(ν,n) are products (or sums of products) of ν components of u whose subscripts sum to n, divided by the factorial of the number of repeated subscripts. Thus c(1,3) can only be u₃; c(2,3) is u₁u₂ and c(3,3) = (1/3!)u₁³. For a nonlinear equation in

u, one may express any given function f(u) in the Aₙ by f(u) = Σ_{n=0}^∞ Aₙ. We have previously pointed out that the polynomials are not unique, e.g., for f(u) = u², A₀ = u₀², A₁ = 2u₀u₁, A₂ = u₁² + 2u₀u₂, ... . But A₁ could also be 2u₀u₁ + u₁², i.e., it could include the first term of A₂ since u₀ and u₁ are known when u₂ is to be calculated. It is now established that the sum of the series Σ_{n=0}^∞ Aₙ for Nu is equal to

the sum of a generalized Taylor series about f(u₀), that Σ_{n=0}^∞ uₙ is equal to a generalized Taylor series about the function u₀, and that the series terms approach zero as 1/(mn)! if m is the order of the highest linear differential operator. Since the series converges (in norm) and does so very rapidly, the n-term partial sum φₙ = Σ_{i=0}^{n-1} uᵢ can serve as a practical solution for synthesis and design. Then lim_{n→∞} φₙ = u.

Other convenient algorithms have been developed for composite and multidimensional functions as well as for particular functions of interest. As an example, for solution of the Duffing equation, we use the notation Aₙ[f(u)] = Aₙ(u₀, u₁, ..., uₙ). If we write f(u) = Σ_{n=0}^∞ Aₙ[f(u)], or more simply f(u) = Σ_{n=0}^∞ Aₙ, and let f(u) = u, we have u = Σ_{n=0}^∞ uₙ since then A₀ = u₀, A₁ = u₁, ... . Thus we

can say u and f(u), i.e., the solution and any nonlinearity, are written in terms of the Aₙ, or, that we do this for the nonlinearity and think of u as simply decomposed into components uᵢ to be evaluated such that the n-term approximation φₙ = Σ_{i=0}^{n-1} uᵢ approaches u = Σ_{n=0}^∞ uₙ as n → ∞. The solution can now be written as:

    u₀ = Φ + L⁻¹g
    uₙ₊₁ = −L⁻¹Ruₙ − L⁻¹Aₙ   (n ≥ 0)

so that

    u₁ = −L⁻¹Ru₀ − L⁻¹A₀
    u₂ = −L⁻¹Ru₁ − L⁻¹A₁

etc. All components are determinable since A₀ depends only on u₀, A₁ depends on u₀ and u₁, etc. The practical solution will be the n-term approximation or approximant to u, sometimes written φₙ[u] or simply φₙ.
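As a concrete sketch of this recursion (an illustrative example of my choosing, not one worked in the text): take u′ + u² = 0 with u(0) = 1, whose exact solution is 1/(1+t). Here L = d/dt, R = 0, g = 0, and Nu = u² has the simple Adomian polynomials Aₙ = Σ_{i=0}^n uᵢu_{n−i}. Each component is a polynomial in t, handled below with exact rational coefficients.

```python
# Decomposition recursion u_0 = u(0), u_{n+1} = -L^{-1} A_n for the
# illustrative problem u' + u^2 = 0, u(0) = 1 (exact solution 1/(1+t)).
# Polynomials in t are stored as coefficient lists [c0, c1, ...].
from fractions import Fraction

def poly_mul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def L_inv(p):
    # L^{-1}: definite integration from 0 to t
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(p)]

u = [[Fraction(1)]]                      # u_0 = Phi = u(0) = 1; g = 0
for n in range(5):
    # Adomian polynomial for f(u) = u^2: A_n = sum_{i=0}^{n} u_i u_{n-i}
    A_n = [Fraction(0)]
    for i in range(n + 1):
        A_n = poly_add(A_n, poly_mul(u[i], u[n - i]))
    u.append([-c for c in L_inv(A_n)])   # u_{n+1} = -L^{-1} A_n

# The components come out as u_n = (-t)^n, the series of 1/(1+t)
for n, comp in enumerate(u):
    print(n, [str(c) for c in comp])
```

The approximants φₙ are the partial sums 1 − t + t² − ..., which converge to 1/(1+t) for |t| < 1; few terms already give a usable solution near t = 0, illustrating the rapid convergence claimed below.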

• Convergence has been rigorously established by Professor Yves Cherruault [1]. Also, further rigorous re-examination has most recently been done by Lionel Gabet [2]. The rapidity of this convergence means that few terms are required, as shown in examples, e.g., [1].

BASIS FOR THE EFFECTIVENESS OF DECOMPOSITION:

Let's consider the physical basis for the accuracy and rapid rate of convergence. The initial term u₀ is an optimal first approximation containing essentially all a priori information about the system. Thus, u₀ = Φ + L⁻¹g contains the given input g (which is bounded in a physical system) and the initial or boundary conditions included in Φ, which is the solution of LΦ = 0. Furthermore, the following terms converge for bounded t as 1/(mn)! where n is the order of L and m is the number of terms in the approximant φₘ. Hence even with very small m, the φₘ will contain most of the solution. Of the following derived terms, u₁ is particularly simple, since A₀, the first of the

which can be rearranged as

    Σ Aₙ = f(u₀) + (u₁ + u₂ + ...)f⁽¹⁾(u₀) + [(u₁²/2!) + u₁u₂ + ...]f⁽²⁾(u₀) + ...
         = f(u₀) + [(u − u₀)/1!]f⁽¹⁾(u₀) + [(u − u₀)²/2!]f⁽²⁾(u₀) + ...
         = Σ_{n=0}^∞ [(u − u₀)ⁿ/n!]f⁽ⁿ⁾(u₀)

A REFERENCE LIST OF THE ADOMIAN POLYNOMIALS:

A_0 = f(u_0)
A_1 = u_1 f^(1)(u_0)
A_2 = u_2 f^(1)(u_0) + (1/2!)u_1^2 f^(2)(u_0)
A_3 = u_3 f^(1)(u_0) + u_1u_2 f^(2)(u_0) + (1/3!)u_1^3 f^(3)(u_0)
A_4 = u_4 f^(1)(u_0) + [(1/2!)u_2^2 + u_1u_3] f^(2)(u_0) + (1/2!)u_1^2u_2 f^(3)(u_0) + (1/4!)u_1^4 f^(4)(u_0)
A_5 = u_5 f^(1)(u_0) + [u_2u_3 + u_1u_4] f^(2)(u_0) + [(1/2!)u_1u_2^2 + (1/2!)u_1^2u_3] f^(3)(u_0)
    + (1/3!)u_1^3u_2 f^(4)(u_0) + (1/5!)u_1^5 f^(5)(u_0)
A_6 = u_6 f^(1)(u_0) + [(1/2!)u_3^2 + u_2u_4 + u_1u_5] f^(2)(u_0)
    + [(1/3!)u_2^3 + u_1u_2u_3 + (1/2!)u_1^2u_4] f^(3)(u_0)
    + [(1/2!)u_1^2(1/2!)u_2^2 + (1/3!)u_1^3u_3] f^(4)(u_0) + (1/4!)u_1^4u_2 f^(5)(u_0) + (1/6!)u_1^6 f^(6)(u_0)
A_7 = u_7 f^(1)(u_0) + [u_3u_4 + u_2u_5 + u_1u_6] f^(2)(u_0)
    + [(1/2!)u_2^2u_3 + u_1(1/2!)u_3^2 + u_1u_2u_4 + (1/2!)u_1^2u_5] f^(3)(u_0)
    + [u_1(1/3!)u_2^3 + (1/2!)u_1^2u_2u_3 + (1/3!)u_1^3u_4] f^(4)(u_0)
    + [(1/3!)u_1^3(1/2!)u_2^2 + (1/4!)u_1^4u_3] f^(5)(u_0) + (1/5!)u_1^5u_2 f^(6)(u_0) + (1/7!)u_1^7 f^(7)(u_0)
A_8 = u_8 f^(1)(u_0) + [(1/2!)u_4^2 + u_3u_5 + u_2u_6 + u_1u_7] f^(2)(u_0)
    + [(1/2!)u_2u_3^2 + (1/2!)u_2^2u_4 + u_1u_3u_4 + u_1u_2u_5 + (1/2!)u_1^2u_6] f^(3)(u_0)
    + [(1/4!)u_2^4 + u_1(1/2!)u_2^2u_3 + (1/2!)u_1^2(1/2!)u_3^2 + (1/2!)u_1^2u_2u_4 + (1/3!)u_1^3u_5] f^(4)(u_0)
    + [(1/2!)u_1^2(1/3!)u_2^3 + (1/3!)u_1^3u_2u_3 + (1/4!)u_1^4u_4] f^(5)(u_0)
    + [(1/4!)u_1^4(1/2!)u_2^2 + (1/5!)u_1^5u_3] f^(6)(u_0) + (1/6!)u_1^6u_2 f^(7)(u_0) + (1/8!)u_1^8 f^(8)(u_0)
A_9 = u_9 f^(1)(u_0) + [u_4u_5 + u_3u_6 + u_2u_7 + u_1u_8] f^(2)(u_0)
    + [(1/3!)u_3^3 + u_2u_3u_4 + (1/2!)u_2^2u_5 + u_1(1/2!)u_4^2 + u_1u_3u_5 + u_1u_2u_6 + (1/2!)u_1^2u_7] f^(3)(u_0)
    + [(1/3!)u_2^3u_3 + u_1u_2(1/2!)u_3^2 + u_1(1/2!)u_2^2u_4 + (1/2!)u_1^2u_3u_4 + (1/2!)u_1^2u_2u_5 + (1/3!)u_1^3u_6] f^(4)(u_0)
    + [u_1(1/4!)u_2^4 + (1/2!)u_1^2(1/2!)u_2^2u_3 + (1/3!)u_1^3(1/2!)u_3^2 + (1/3!)u_1^3u_2u_4 + (1/4!)u_1^4u_5] f^(5)(u_0)
    + [(1/3!)u_1^3(1/3!)u_2^3 + (1/4!)u_1^4u_2u_3 + (1/5!)u_1^5u_4] f^(6)(u_0)
    + [(1/5!)u_1^5(1/2!)u_2^2 + (1/6!)u_1^6u_3] f^(7)(u_0) + (1/7!)u_1^7u_2 f^(8)(u_0) + (1/9!)u_1^9 f^(9)(u_0)
A_10 = u_{10} f^(1)(u_0) + [(1/2!)u_5^2 + u_4u_6 + u_3u_7 + u_2u_8 + u_1u_9] f^(2)(u_0)
    + [(1/2!)u_3^2u_4 + u_2(1/2!)u_4^2 + u_2u_3u_5 + (1/2!)u_2^2u_6 + u_1u_4u_5 + u_1u_3u_6 + u_1u_2u_7 + (1/2!)u_1^2u_8] f^(3)(u_0)
    + [(1/2!)u_2^2(1/2!)u_3^2 + (1/3!)u_2^3u_4 + u_1(1/3!)u_3^3 + u_1u_2u_3u_4 + u_1(1/2!)u_2^2u_5 + (1/2!)u_1^2(1/2!)u_4^2 + (1/2!)u_1^2u_3u_5 + (1/2!)u_1^2u_2u_6 + (1/3!)u_1^3u_7] f^(4)(u_0)
    + [(1/5!)u_2^5 + u_1(1/3!)u_2^3u_3 + (1/2!)u_1^2u_2(1/2!)u_3^2 + (1/2!)u_1^2(1/2!)u_2^2u_4 + (1/3!)u_1^3u_3u_4 + (1/3!)u_1^3u_2u_5 + (1/4!)u_1^4u_6] f^(5)(u_0)
    + [(1/2!)u_1^2(1/4!)u_2^4 + (1/3!)u_1^3(1/2!)u_2^2u_3 + (1/4!)u_1^4(1/2!)u_3^2 + (1/4!)u_1^4u_2u_4 + (1/5!)u_1^5u_5] f^(6)(u_0)
    + [(1/4!)u_1^4(1/3!)u_2^3 + (1/5!)u_1^5u_2u_3 + (1/6!)u_1^6u_4] f^(7)(u_0)
    + [(1/6!)u_1^6(1/2!)u_2^2 + (1/7!)u_1^7u_3] f^(8)(u_0) + (1/8!)u_1^8u_2 f^(9)(u_0) + (1/10!)u_1^{10} f^(10)(u_0)
Notice that for f(u) = u^m each individual term of A_n is the product of m factors u_i. In each term the sum of the exponents (superscripts) is m (or 5 in this case), and the sum of the subscripts is n. A very convenient check on the numerical coefficients in each term is the following: each coefficient is m! divided by the product of the factorials of the exponents for that term. Thus, for f(u) = u^5, the term 20u_0^3u_2u_3 of A_5 has the coefficient 5!/(3!)(1!)(1!) = 20, and the term 30u_0^2u_1^2u_2 of A_4 has the coefficient 5!/(2!)(2!)(1!) = 30. Continuing with the A_n for f(u) = u^3, we have

A_9 = u_3^3 + 3u_0^2u_9 + 3u_1^2u_7 + 3u_2^2u_5 + 3u_1u_4^2 + 6u_0u_1u_8 + 6u_0u_2u_7 + 6u_0u_3u_6 + 6u_0u_4u_5 + 6u_1u_2u_6 + 6u_1u_3u_5 + 6u_2u_3u_4

A_10 = 3u_0^2u_{10} + 3u_1^2u_8 + 3u_2^2u_6 + 3u_3^2u_4 + 3u_0u_5^2 + 3u_2u_4^2 + 6u_0u_1u_9 + 6u_0u_2u_8 + 6u_0u_3u_7 + 6u_0u_4u_6 + 6u_1u_2u_7 + 6u_1u_3u_6 + 6u_1u_4u_5 + 6u_2u_3u_5

EXAMPLE: Nu = sin u

A_0 = sin u_0
A_1 = u_1 cos u_0
A_2 = u_2 cos u_0 - (u_1^2/2!) sin u_0
A_3 = u_3 cos u_0 - u_1u_2 sin u_0 - (u_1^3/3!) cos u_0

EXAMPLE: f(u) = u^{-m}

A_0 = u_0^{-m}
A_1 = -m u_0^{-(m+1)} u_1
A_2 = (1/2!)m(m+1) u_0^{-(m+2)} u_1^2 - m u_0^{-(m+1)} u_2
A_3 = -(1/3!)m(m+1)(m+2) u_0^{-(m+3)} u_1^3 + m(m+1) u_0^{-(m+2)} u_1u_2 - m u_0^{-(m+1)} u_3

EXAMPLE: f(u) = u^γ where γ is a decimal (noninteger) number.

EXAMPLE: Consider the linear (deterministic) ordinary differential equation d²u/dx² - kx^p u = g with u(1) = u(-1) = 0. Write L = d²/dx², so that Lu = g + kx^p u. Operating with L^{-1}, we have L^{-1}Lu = L^{-1}g + L^{-1}kx^p u. Then let u = Σ_{m=0}^∞ u_m with u_0 = c_1 + c_2x + gx²/2, and u_{m+1} = L^{-1}kx^p u_m for m ≥ 0. The components are thereby generated successively; for example, the c_1 series has the general term

c_1 k^m x^{m(p+2)} / [(p+1)(p+2)(2p+3)(2p+4)⋯(mp+2m-1)(mp+2m)]

Since u(1) = u(-1) = 0, two linear equations result, and hence c_1 and c_2 are determined. Suppose that in the above example we let k = 40, p = 1, g = 2. Thus we consider the equation d²u/dx² - 40xu = 2 with u(-1) = u(1) = 0. This is the one-dimensional case of the elliptic equation ∇²u = g(x,y,z) + k(x,y,z)u.

The homogeneous equation has Airy-like solutions, and the non-zero forcing function yields an additional Airy-like function. Operating with L^{-1} yields u = A + Bx + L^{-1}(2) + L^{-1}(40xu). Let u_0 = A + Bx + L^{-1}(2) = A + Bx + x², and let u = Σ_{n=0}^∞ u_n with the components to be determined so that the sum is u. We identify u_{n+1} = L^{-1}(40x u_n). Then all components can be determined, e.g.,

An n-term approximant φ_n = Σ_{i=0}^{n-1} u_i with n = 12 for x = 0.2 is given by -0.135639, for x = 0.4 by -0.113969, for x = 0.6 by -0.083321, for x = 0.8 by -0.050944, and for x = 1.0 is, of course, zero. These easily obtained results are correct to seven digits. We see that a better solution is obtained, and much more easily, than by variational methods. The solution is found just as easily for nonlinear versions, without linearization.

ANALYTIC SIMULANTS OF DECOMPOSITION SOLUTIONS:

We now introduce the convenient concept of "simulants" of solutions by decomposition. The m-term "approximant" φ_m to the solution u, indicated by φ_m[u], will mean m terms of the convergent series Σ_{n=0}^∞ u_n which represents

u in the decomposition method. If we have an equation Γu = g(t), where Γ is a general differential operator, and we write g(t) = Σ_{n=0}^∞ g_n t^n but use only m terms of the series, we have the m-term approximant

φ_m[g] = Σ_{n=0}^{m-1} g_n t^n

The corresponding solution of the equation is the simulant of the solution u; thus

Γσ_m[u] = φ_m[g]

Analogous to the limit φ_m[g] → g as m → ∞, the limit of σ_m[u] as m → ∞ is u. Possible stopping rules arise in computerized calculation, e.g., when the last computed simulant σ_m[u] agrees with σ_{m-1}[u] to the number of decimal places of interest to us.

We can also conceive of using the entire series for g, for example writing the sum of the infinite series but using a sequence of approximants to parameters α, β, ... in Γ and a corresponding sequence of simulants σ_m[u] parametrized by φ_m[α] or φ_m[β]. In solving a partial differential equation by decomposition, we may develop a sequence of simulants σ_m[u] for the solution u by concurrently improving the level of approximation of the coefficients, or the given conditions, or the input functions. For example, in

L_t σ_m[u] - R_x σ_m[u] = φ_m[g],  where g(t,x) = Σ_{n=0}^∞ Σ_{k=0}^∞ g_{n,k} t^n x^k,

we can compute each σ_m[u] for given approximations of g, of the initial conditions, or finally of the coefficients.

We can also use the concept of simulants with asymptotic decomposition, which is discussed in [4]. Consider the equation

d²u/dx² + u = g(x) = Σ_{n=0}^∞ g_n x^n,  u(0) = c_1, u'(0) = c_2

By the asymptotic decomposition technique [5], the equation is written as

u = g(x) - (d²/dx²)u

Then u_0 = g(x) = Σ_{n=0}^∞ g_n x^n and u_m = -(d²/dx²)u_{m-1} for m ≥ 1. We use an approximant of g, i.e.,

φ_m[g] = Σ_{n=0}^{m-1} g_n x^n

Then the simulant, or analytic simulant, of u is the solution of

d²σ_m[u]/dx² + σ_m[u] = φ_m[g]

and σ_m[u] is a series which we write as Σ_{n=0}^∞ a_n x^n.

It is straightforward that if we do not use the entire series, we have only the approximant φ_m = Σ_{i=0}^{m-1} u_i, which approaches u in the limit as m → ∞. In the same equation

u = g(x) - (d²/dx²)u

we can write the components successively, so that the coefficients a_n collect the contributions of all the components; the resulting series

σ_m[u] = Σ_{n=0}^∞ a_n x^n

is the solution for asymptotic decomposition. We make two observations:

1) The method works very well for nonlinear equations, where we solve for the nonlinear term and express it in the above polynomials.

2) Ordinary differential equations with singular coefficients offer no special difficulty with asymptotic decomposition.
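For a polynomial input the asymptotic scheme terminates exactly. For example (my own illustrative input, not from the text), for u'' + u = x² the recursion u_0 = g, u_m = -u''_{m-1} gives u = x² - 2 after two terms, and indeed u'' + u = x². A sketch:

```python
def d2(p):
    """Second derivative of a polynomial given as a coefficient list (index = power)."""
    return [k * (k - 1) * c for k, c in enumerate(p)][2:] or [0]

g = [0, 0, 1]              # g(x) = x^2
u, term = list(g), list(g)
for _ in range(5):         # u_m = -u''_{m-1}; the iteration terminates for polynomial g
    term = [-c for c in d2(term)]
    n = len(u) + len(term)
    u = [a + b for a, b in zip(u + [0]*(n - len(u)), term + [0]*(n - len(term)))]
u = u[:3]
assert u == [-2, 0, 1]     # u(x) = x^2 - 2
```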

REFERENCES
1. Y. Cherruault, Convergence of Adomian's Method, Kybernetes, 18, (31-38) (1989).
2. L. Gabet, Esquisse d'une théorie décompositionnelle, Modélisation Mathématique et Analyse Numérique, in publication.
3. G. Adomian and R. Rach, Smooth Polynomial Approximations of Piecewise-differentiable Functions, Appl. Math. Lett., 2, (377-379) (1989).
4. G. Adomian, Nonlinear Stochastic Operator Equations, Academic Press (1986).
5. G. Adomian, A Review of the Decomposition Method and Some Recent Results for Nonlinear Equations, Comp. and Math. with Applic., 21, (101-127) (1991).

SUGGESTED READING
1. G. Adomian, R. E. Meyers, and R. Rach, An Efficient Methodology for the Physical Sciences, Kybernetes, 20, (24-34) (1991).
2. G. Adomian, Nonlinear Stochastic Differential Equations, J. Math. Anal. and Applic., 55, (441-451) (1976).
3. G. Adomian, Solution of General Linear and Nonlinear Stochastic Systems, in Modern Trends in Cybernetics and Systems, J. Rose (ed.), (203-214) (1977).
4. G. Adomian and R. Rach, Linear and Nonlinear Schrödinger Equations, Found. of Physics, 21, (983-991) (1991).
5. N. Bellomo and R. Riganti, Nonlinear Stochastic Systems in Physics and Mechanics, World Scientific (1987).
6. N. Bellomo and R. Monaco, A Comparison between Adomian's Decomposition Method and Perturbation Techniques for Nonlinear Random Differential Equations, J. Math. Anal. and Applic., 110, (1985).
7. N. Bellomo, R. Cafaro, and G. Rizzi, On the Mathematical Modelling of Physical Systems by Ordinary Differential Stochastic Equations, Math. and Comput. in Simul., 4, (361-367) (1984).
8. N. Bellomo and D. Sarafyan, On Adomian's Decomposition Method and Some Comparisons with Picard's Iterative Scheme, J. Math. Anal. and Applic., 123, (389-400) (1987).
9. R. Rach and A. Baghdasarian, On Approximate Solution of a Nonlinear Differential Equation, Appl. Math. Lett., 3, (101-102) (1990).
10. R. Rach, On the Adomian Method and Comparisons with Picard's Method, J. Math. Anal. and Applic., 10, (139-159) (1984).
11. R. Rach, A Convenient Computational Form for the A_n Polynomials, J. Math. Anal. and Applic., 102, (415-419) (1984).

12. A. K. Sen, An Application of the Adomian Decomposition Method to the Transient Behavior of a Model Biochemical Reaction, J. Math. Anal. and Applic., 131, (232-245) (1988).

13. Y. Yang, Convergence of the Adomian Method and an Algorithm for Adomian Polynomials, submitted for publication.

14. K. Abbaoui and Y. Cherruault, Convergence of Adomian's Method Applied to Differential Equations, Comput. & Math. with Applic., to appear.

15. B. K. Datta, Introduction to Partial Differential Equations, New Central Book Agency Ltd., Calcutta (1993).

  • THE DECOMPOSITION METHOD IN SEVERAL DIMENSIONS

Mathematical physics deals with physical phenomena by modelling the phenomena of interest, generally in the form of nonlinear partial differential equations. It then requires an effective analysis of the mathematical model, such that the processes of modelling and of analysis yield results in accordance with observation and experiment. By this, we mean that the mathematical solution must conform to physical reality, i.e., to the real world of physics. Therefore, we must be able to solve differential equations, in space and time, which may be nonlinear and often stochastic as well, without the concessions to tractability which have been customary both in graduate training and in research in physics and mathematics. Nonlinear partial differential equations are very difficult to solve analytically, so methods such as linearization, statistical linearization, perturbation, quasi-monochromatic approximations, white noise representation of actual stochastic processes, etc., have been customary resorts. Exact solutions in closed form are not a necessity. In fact, for the world of physics only a sufficiently accurate solution matters. All modelling is approximation, so finding an improved general method of analysis of models also contributes to allowing

development of more sophisticated modelling [1, 2]. Our objective in this chapter is to see how to use the decomposition method

    for partial differential equations. (In the next chapter, we will also introduce double decomposition which offers computational advantages for nonlinear ordinary differential equations and also for nonlinear partial differential

equations.) These methods are applicable in problems of interest to theoretical physicists, applied mathematicians, engineers, and workers in other disciplines, and suggest developments in pure mathematics.

We now consider some generalizations for partial differential equations. Just as we solved for the linear differential operator term Lu and then operated on both sides with L^{-1}, we can now do the same for the highest-ordered linear operator terms in all independent variables. If we have differentiations, for example, with respect to x and t, represented by L_x u and L_t u, we obtain equations for either of these. We can operate on each with the appropriate inverse. We begin by considering some illuminating examples.

Consider the example ∂u/∂t + ∂u/∂x + f(u) = 0 with u(t = 0) = 1/2x and u(x = 0) = -1/t. For simplicity assume f(u) = u². By decomposition,

writing L_t u = -(∂/∂x)u - u², then writing u = Σ_{n=0}^∞ u_n and representing u² by Σ_{n=0}^∞ A_n, with the A_n derived for the specific function u², we have

u_{n+1} = -L_t^{-1}(∂/∂x)u_n - L_t^{-1}A_n

Consequently,

u_0 = u(x,0) = 1/2x

Substituting the A_n{u²} and summing, we have

u = (1/2x)[1 + t/2x + (t/2x)² + ⋯]

which converges if |t/2x| < 1 to u = 1/(2x - t). If we solve for L_x u, we have

or u = -(1/t)[1 + 2x/t + ⋯], which converges near the initial condition if |2x/t| < 1 to u = 1/(2x - t).
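Both partial solutions here are geometric series, so their convergence regions can be verified directly; a quick numerical sketch (the sample points are my choices):

```python
# L_t partial solution: u = (1/2x) * sum (t/2x)^n, valid where |t/2x| < 1
x, t = 1.0, 0.5
s = sum((t / (2*x))**n for n in range(40)) / (2*x)
assert abs(s - 1.0 / (2*x - t)) < 1e-9

# L_x partial solution: u = -(1/t) * sum (2x/t)^n, valid where |2x/t| < 1
x, t = 0.1, 1.0
s = -sum((2*x / t)**n for n in range(80)) / t
assert abs(s - 1.0 / (2*x - t)) < 1e-9
```

Both series sum to the same function 1/(2x - t), but in complementary regions, illustrating that the choice of operator equation changes the convergence region.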

Both operator equations yield distinct series which converge to the same function, with different convergence regions and different convergence rates. It is interesting to observe that convergence can be changed by the choice of the operator equation. In earlier work, the solutions (called "partial solutions") of each operator equation were combined to ensure use of all the

given conditions. We see that partial differential equations are solvable by considering each dimension separately, and thus our assertion holds about the connection between the fields of ordinary and partial differential equations. There is, of course, much more to be said about this, and we must leave further discussion to the growing literature, perhaps beginning with the introduction represented by the given references.

Consider the equation u_tt - u_xx + (∂/∂t)f(u) = 0 where f(u(x,t)) is an analytic function. Let L_t represent ∂²/∂t² and let L_x represent ∂²/∂x². We now write the equation in the form

    Using the decomposition method, we can solve for either linear term; thus,

    Operating with the inverse operators, we have

where the integration constants are evaluated from the given initial/boundary conditions. Generally, either can be used to get a solution, so solving a partial differential equation is analogous to solving an ordinary differential equation, with the L_x operator in the first equation and the L_t operator in the second assuming the role of the remainder operator R in an ordinary differential equation. The (∂/∂t)f(u) is a nonlinearity handled as before. The solution depends on the explicit f(u) and the specified conditions on the wave equation.

To illustrate the procedure, we first consider the case with f(u) = 0 in order to see the basic procedure most clearly. We have, therefore, u_tt - u_xx = 0, and we will take as given conditions u(0,x) = 0, u(t,0) = 0, u(π/2, x) = sin x, u(t, π/2) = sin t.

Let L_t = ∂²/∂t² and L_x = ∂²/∂x² and write the equation as L_t u = L_x u. Following our procedure, we can write either u = c_1 k_1(x) + c_2 k_2(x)t + L_t^{-1}L_x u or u = c_3 k_3(t) + c_4 k_4(t)x + L_x^{-1}L_t u.

Define Φ_t = c_1 k_1(x) + c_2 k_2(x)t and Φ_x = c_3 k_3(t) + c_4 k_4(t)x to rewrite the above as u = Φ_t + L_t^{-1}L_x u and u = Φ_x + L_x^{-1}L_t u.

The first approximant φ_1 is u_0 = Φ_t. The two-term approximant φ_2 is u_0 + u_1, where u_1 = L_t^{-1}L_x u_0. Applying the t conditions u(0,x) = 0 and u(π/2,x) = sin x to the one-term approximant φ_1 = u_0 = c_1 k_1(x) + c_2 k_2(x)t, we have

c_1 k_1(x) = 0
c_2 k_2(x)(π/2) = sin x

or c_2 = 2/π and k_2(x) = sin x. The next term is u_1 = L_t^{-1}L_x u_0 = L_t^{-1}L_x[c_2 t sin x], and we continue in the same manner to obtain u_2, u_3, ..., u_n for some n. Clearly, for any n,

u_n = (L_t^{-1}L_x)^n u_0 = c_2 (sin x)(-1)^n t^{2n+1}/(2n + 1)!

If we write the m-term approximant, we have for the two cases:

φ_m = c_2 sin x Σ_{n=0}^{m-1} (-1)^n t^{2n+1}/(2n + 1)!

Since φ_m(π/2, x) = sin x and c_1 k_1(x) = 0,

c_2 sin x Σ_{n=0}^{m-1} (-1)^n (π/2)^{2n+1}/(2n + 1)! = sin x

As m → ∞, c_2 → 1. The sum approaches sin t in the limit. Hence our approximation becomes an exact solution u = sin x sin t. (The same result can be found from the other operator equation.)
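That c_2 → 1 can be seen numerically: the bracketed sum is the Maclaurin series of sin t evaluated at t = π/2. A quick check:

```python
import math

def partial(t, m):
    """m-term partial sum of the Maclaurin series of sin t."""
    return sum((-1)**n * t**(2*n + 1) / math.factorial(2*n + 1) for n in range(m))

# c_2 is fixed by the condition at t = pi/2: c_2 * partial(pi/2, m) = 1
assert abs(1.0 / partial(math.pi / 2, 8) - 1.0) < 1e-9   # c_2 -> 1 as m grows
assert abs(partial(1.2, 8) - math.sin(1.2)) < 1e-9       # the sum approaches sin t
```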

Thus, in this case, the series is summed. In general, it is not, and we get a series with a convergence region in which numerical solutions stabilize quickly to a solution within the range of accuracy needed. Adding the nonlinear term does not change this; the A_n converge rapidly, and the procedure amounts to a generalized Taylor expansion for the solution about the function u_0 rather than about a point. We call the solution an approximation because it is usually not a closed-form solution; however, we point out that all modelling is approximation, and a closed form which

necessarily changes the physical problem by employing linearization is not more desirable and is, in fact, generally less desirable in that the problem has been changed to a different one. Recent work by Y. Cherruault and L. Gabet on the mathematical framework has provided the rigorous basis for the decomposition method. The method is clearly useful to physicists and other disciplines in which real problems must be mathematically modelled and solved. The method is also adaptable to systems of nonlinear, stochastic, or coupled boundary conditions (as shown in the author's earlier books). The given conditions must be consistent with the physical problem being solved.

Consider the same example u_tt - u_xx = 0 with 0 ≤ x ≤ π and t ≥ 0, assuming now conditions which yield an interesting special case for the methodology:

u(x,0) = sin x     u(0,t) = 0
u_t(x,0) = 0       u(π,t) = 0

    Decomposition results in the equations

The one-term approximant φ_1 = u_0 in the first equation is

u_0 = c_1 k_1(t) + c_2 k_2(t)x

Satisfying the conditions on x, we have c_1 k_1(t) = 0 and c_2 k_2(t)π = 0. Hence u_0 = 0. The first equation clearly does not contribute in this special case; we need only the second. Thus,

u_0 = c_3 k_3(x) + c_4 k_4(x)t

Applying the conditions on t, c_3 k_3(x) = sin x and c_4 k_4(x)t = 0. Hence

u_0 = sin x

u = (1 - t²/2! + t⁴/4! - ⋯) sin x = sin x cos t

We are dealing with a methodology for solution of physical systems which have a solution, and we seek to find this solution without changing the problem to make it tractable. The conditions must be known; otherwise, the model is not complete. If the solution is known, but the initial conditions are not, they can be found by mathematical exploration and consequent verification.

    Finally, we consider the general form

We now let u = Σ_{n=0}^∞ u_n and f(u) = Σ_{n=0}^∞ A_n, and note this is equivalent to letting u as well as f(u) be equal to Σ_{n=0}^∞ A_n where the A_n are generated for the specific f(u). If f(u) = u we obtain A_0 = u_0, A_1 = u_1, ..., i.e.,

    To go further we must have the conditions on u. Suppose we choose

Satisfying the conditions, we have c_1 k_1(t) = c_2 k_2(t) = 0, or u_0 = 0. Therefore the equation involving L_x^{-1} does not contribute. In the remaining equation we get c_3 k_3(x) = f(x) and c_4 k_4(x)t = 0. Hence,

Thus, the components of u are determined, and we can write φ_n = Σ_{m=0}^{n-1} u_m as an n-term approximation converging to u as n → ∞. To complete the problem, f(u) must be explicitly given so that we can generate the A_n. We see that the solution depends both on the specified f(x) and on the given conditions.

RESULTS AND POTENTIAL APPLICATIONS:

The decomposition method provides a single method for linear and nonlinear multidimensional problems and has now been applied to a wide variety of problems in many disciplines. One application of the decomposition method is in the development of numerical methods for the solution of nonlinear partial differential equations. Decomposition permits us to have an essentially unique numerical method tailored individually for each problem. In a preliminary test of this notion, a numerical decomposition was performed on Burgers' equation. It was found that the same degree of accuracy could be achieved in two percent of the time required to compute a solution using Runge-Kutta procedures. The reasons for this are discussed in [2].

    EXAMPLE: Consider the dissipative wave equation

u_tt - u_xx + (∂/∂t)(u²) = g = -2 sin²x sin t cos t

with specified conditions u(0,t) = u(π,t) = 0 and u(x,0) = sin x, u_t(x,0) = 0. We have

u_0 = k_1(x) + k_2(x)t + L_t^{-1}g

from the L_t u equation and use of the two-fold definite integration L_t^{-1}, and

from the L_x u equation and application of the two-fold indefinite integration L_x^{-1}. Either solution, which we have called "a partial solution," is already correct; they are equal when the spatial boundary conditions depend on t and

the initial conditions depend on x. When the conditions on one variable are independent of the other variable, the partial solutions are asymptotically equal. From the specified conditions u(x,0) = sin x and u_t(x,0) = 0,

k_1(x) = sin x
k_2(x) = 0

so that u_0 = sin x - (sin²x)(t/2 - (1/4) sin 2t)

The n-term approximant φ_n is formed as before, where for f(u) = u² the polynomials are A_0 = u_0², A_1 = 2u_0u_1, A_2 = 2u_0u_2 + u_1², ....

The contribution of the term L_t^{-1}g to u_0 results in self-canceling terms, or "noise" terms. Hence, rather than calculating exactly, we observe that if we use only u_0 = sin x, we get u_1 = (-t²/2!) sin x, u_2 = (t⁴/4!) sin x, etc., which appears to be u = cos t sin x. Thus the solution is u = cos t sin x + other terms. We write u = cos t sin x + N, substitute in the equation for u, and find that N = 0, i.e., the neglected terms are self-canceling and u = cos t sin x is the correct solution. It is often useful to look for patterns to minimize or avoid unnecessary work.
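The statement that N = 0 can be confirmed by substitution: u = cos t sin x makes u_tt - u_xx + (∂/∂t)(u²) - g vanish identically, since u_tt = u_xx and (∂/∂t)(cos²t sin²x) = -2 sin²x sin t cos t = g. A finite-difference spot check (step size and sample points are my choices):

```python
import math

u = lambda x, t: math.cos(t) * math.sin(x)
g = lambda x, t: -2.0 * math.sin(x)**2 * math.sin(t) * math.cos(t)
h = 1e-4

def residual(x, t):
    """u_tt - u_xx + (u^2)_t - g, with derivatives by central differences."""
    u_tt = (u(x, t + h) - 2*u(x, t) + u(x, t - h)) / h**2
    u_xx = (u(x + h, t) - 2*u(x, t) + u(x - h, t)) / h**2
    usq_t = (u(x, t + h)**2 - u(x, t - h)**2) / (2*h)
    return u_tt - u_xx + usq_t - g(x, t)

for (x, t) in [(0.7, 0.3), (1.9, 1.1), (2.5, 0.8)]:
    assert abs(residual(x, t)) < 1e-5
```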

    To summarize the procedure, we can write the two operator equations

Applying the operator L_t^{-1} to the first equation or L_x^{-1} to the second,

Substituting u = Σ_{n=0}^∞ u_n and f(u) = Σ_{n=0}^∞ A_n, where the A_n are defined for f(u), we have

u_0 = k_1(x) + k_2(x)t + L_t^{-1}g
u_{n+1} = L_t^{-1}L_x u_n - L_t^{-1}(∂/∂t)A_n

where, if both make a contribution, either can be solved to get an n-term approximation φ_n satisfying the conditions to get the approximant to u. The partial solution, as noted earlier, is sufficient. The integration "constants" are evaluated from the specific conditions, i.e., each φ_n satisfies the appropriate conditions for any n.

The following example illustrates avoidance of often difficult integrations for solution to sufficient accuracy in a particular problem. The exact solution, which would then be evaluated to two or three decimal places in physical problems, is often unnecessary. We can sometimes guess the sum of the decomposition series in a closed form and sometimes determine it by Euler, Padé, Shanks, or modified Shanks acceleration techniques. However, whether we see this final form or not, the series is the solution we need.

EXAMPLE: u_tt - u_xx + (∂/∂t)f(u) = g(x,t)

Let g = 2e^{-t} sin x - 2e^{-2t} sin x cos x and f(u) = uu_x. The initial/boundary conditions are:

u(x,0) = sin x     u_t(x,0) = -sin x     u(0,t) = u(π,t) = 0

Let L_t = ∂²/∂t² and write the equation as

(By the partial solutions technique, we need only the one operator equation; the ∂²/∂x² is treated like the R operator in an ordinary differential equation.)

Operating with L_t^{-1}, defined as the two-fold integration from 0 to t, and writing u = Σ_{n=0}^∞ u_n and f(u) = Σ_{n=0}^∞ A_n, where the A_n are generated for f(u) = uu_x, we obtain the decomposition components

u_0 = u(x,0) + t u_t(x,0) + L_t^{-1}g
u_{m+1} = -L_t^{-1}(∂/∂t)A_m + L_t^{-1}(∂²/∂x²)u_m

for m ≥ 0. Then, since Σ_{n=0}^∞ u_n is a (rapidly) converging series, the partial sum φ_n = Σ_{i=0}^{n-1} u_i is our approximant to the solution.

We can calculate the above terms u_0, u_1, ..., u_n as given. However, since we calculate approximants, we can simplify the integrations by approximating g by a few terms of its double Maclaurin series representation. Thus we drop terms involving t³ and x³ and higher. Then

sin x ≈ x
cos x ≈ 1 - x²/2
e^{-t} ≈ 1 - t
e^{-2t} ≈ 1 - 2t

so that g ≈ 2tx. Then L_t^{-1}g ≈ 0 to the assumed approximation. Hence u_0 ≈ sin x - t sin x, and the two-term approximation φ_2 = u_0 + u_1 follows. Although we can calculate more terms using u_{m+1} for m ≥ 0, substitution verifies that u = e^{-t} sin x is already the correct solution.

If we need to recognize the exact solution, we can carry the series for g to a higher approximation to see the clear convergence to e^{-t} sin x. Once we guess that the solution may be e^{-t} sin x, we can verify it by substitution, or substitute e^{-t} sin x + N and show that N = 0.
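Such a numerical substitution is easy to carry out: with derivatives approximated by central differences, the residual of u = e^{-t} sin x in the equation is negligibly small (step size and sample points are my choices):

```python
import math

u = lambda x, t: math.exp(-t) * math.sin(x)
g = lambda x, t: 2*math.exp(-t)*math.sin(x) - 2*math.exp(-2*t)*math.sin(x)*math.cos(x)
h = 1e-4

def f(x, t):
    """The nonlinearity f(u) = u u_x, with u_x by central differences."""
    return u(x, t) * (u(x + h, t) - u(x - h, t)) / (2*h)

def residual(x, t):
    """u_tt - u_xx + (d/dt) f(u) - g."""
    u_tt = (u(x, t + h) - 2*u(x, t) + u(x, t - h)) / h**2
    u_xx = (u(x + h, t) - 2*u(x, t) + u(x - h, t)) / h**2
    f_t = (f(x, t + h) - f(x, t - h)) / (2*h)
    return u_tt - u_xx + f_t - g(x, t)

for (x, t) in [(0.6, 0.2), (1.4, 0.9), (2.8, 0.4)]:
    assert abs(residual(x, t)) < 1e-5
```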

EQUALITY OF PARTIAL SOLUTIONS:

In solving linear or nonlinear partial differential equations by decomposition, we can solve separately for the term involving either of the highest-ordered linear operators* and apply the appropriate inverse operator to each equation. Each equation is solved for an n-term approximation. The solutions of the individual equations (e.g., for L_x u, L_y u, L_z u, or L_t u in a four-dimensional problem) have been called "partial solutions" in earlier work. The reason was that they were to be combined into a general solution using all the conditions. However, it has now been shown [4] that in the general case the partial solutions are equal and each is the solution. This permits a simpler solution which is not essentially different from solution of an ordinary differential equation. The other operators, if we solve for L_x u, for example, simply become the R operator in Lu + Ru + Nu = g. The procedure is now a single procedure for linear or nonlinear, ordinary or partial differential equations. (When the u_0 term in one operator equation is zero, that equation does not contribute to the solution and the complete solution is obtained from the remaining equation or equations.) We will show that the partial solutions from each equation lead to the same solution (and explain why the one partial solution above does not contribute to the solution).

Consider the equation L_x u + L_y u + Nu = 0 with L_x = ∂²/∂x² and L_y = ∂²/∂y², although no limitation is implied, and Nu an analytic term accurately representable by the A_n polynomials. We choose the conditions:

u(a_1, y) = α_1(y)     u(x, b_1) = β_1(x)
u(a_2, y) = α_2(y)     u(x, b_2) = β_2(x)

    Solving for the linear operator terms

Using the "x-solution", we have

* Purely nonlinear equations, or equations in which the highest-ordered operator is nonlinear, require further consideration [3].

where L_x^{-1} is an indefinite two-fold integration. Now

which is the solution to the equation in x. We can proceed in the same manner with the y equation; however, we return to the ordinary or regular decomposition for clarity. The additional decomposition is of no advantage for initial-value problems but speeds up convergence in boundary-value problems by giving us results for u_0, u_1, ... that are obtained by correcting constants of integration as we proceed, so that we can then use a corrected initial value without more matching to conditions. From the x partial solution,

    From the y partial solution

The mth approximant φ_m = Σ_{n=0}^{m-1} u_n in each case above. The integration constants are determined by satisfying the given conditions by solution of the

  • matrix equations

    to determine

The equation in t is L_t u = L_x u. Applying the L_t^{-1} operator, we get u = u(x,0) + L_t^{-1}L_x Σ_{n=0}^∞ u_n.

u_0 = sin(πx/ℓ)
u_1 = L_t^{-1}L_x u_0 = -(π²t/ℓ²) sin(πx/ℓ)
u_2 = (π⁴t²/2!ℓ⁴) sin(πx/ℓ)

u = Σ_{n=0}^∞ u_n = e^{-π²t/ℓ²} sin(πx/ℓ)

which is the complete solution, usually obtained more easily than the textbook solutions of this problem. The x equation is L_x u = L_t u. Applying L_x^{-1}, we see u_0 = k_1(t) + xk_2(t) = 0, which means all following components must be zero, so this equation, as previously stated, makes no contribution. Here, the x conditions (boundary conditions) u(0,t) and u(ℓ,t) do not depend on t. Hence the partial solutions are asymptotically equal: both are zero as t → ∞.
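The t-equation series is just the exponential series, so the partial sums φ_n approach e^{-π²t/ℓ²} sin(πx/ℓ) rapidly; a quick check (the numeric values are my test choices):

```python
import math

ell, x, t = 1.0, 0.3, 0.05
a = math.pi**2 * t / ell**2

def phi(n):
    """n-term partial sum of the decomposition series from the t equation."""
    return math.sin(math.pi * x / ell) * sum((-a)**k / math.factorial(k) for k in range(n))

exact = math.exp(-a) * math.sin(math.pi * x / ell)
assert abs(phi(10) - exact) < 1e-8
assert abs(phi(3) - exact) < 2e-2   # even three terms are close for small t
```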

Use of the partial solutions technique, as compared with the author's earlier treatments of partial differential equations [4], leads to substantially decreased computation and minimization of extraneous noise [5]. Also, we note that the convergence region can be changed by the choice of the operator equation.

Since the partial solutions are equal, we need solve only one operator equation. (Exceptions occur when the u_0 term is zero in one of the equations or the initial/boundary conditions for one operator equation do not involve the remaining variables.) The remaining highest-ordered linear differential operators can now be treated like the remainder operator R. Thus ordinary or partial differential equations are solved by a single method. The decision as to which operator equation to solve in a multidimensional problem is made

on the basis of the best-known conditions, and possibly also on the basis of the operator of lowest order, to minimize integrations.

To make the procedure as clear as possible, we consider first the case where Nu = 0, i.e., a linear partial differential equation in R²,

where L_x = ∂²/∂x² and L_y = ∂²/∂y², with the boundary conditions specified by boundary-operator equations

Solving for L_x u, we have L_x u = g - L_y u - Ru, and operating with L_x^{-1} we have

u = Φ_x + L_x^{-1}g - L_x^{-1}L_y u - L_x^{-1}Ru

where Φ_x satisfies L_x Φ_x = 0. The inverse operator L_x^{-1} is a two-fold (indefinite) integration operator since L_x is a second-order differential operator. The "constants of integration" are added for each indefinite integration.** (This makes notation consistent with decomposition solution of initial-value problems where for L_t = ∂/∂t we define

as the initial term of the decomposition.) Since L_x^{-1} is a two-fold integration, Φ_x = c_0(y) + xc_1(y) and u_0 = c_0(y) + xc_1(y) + L_x^{-1}g. Hence, with u = Σ_{n=0}^∞ u_n, we have for m ≥ 0

u_{m+1} = -L_x^{-1}L_y u_m - L_x^{-1}Ru_m

for the components after u_0. Consequently, all components of the decomposition are identified and calculable. We can now form successive approximants φ_n = Σ_{m=0}^{n-1} u_m as n increases, which we match to the boundary conditions. Thus φ_1 = u_0, φ_2 = φ_1 + u_1, φ_3 = φ_2 + u_2, ... serve as approximate solutions of increasing accuracy as n → ∞ and must, of course, satisfy the boundary conditions. Beginning with φ_1 = u_0 = c_0(y) + xc_1(y) + L_x^{-1}g, we evaluate c_0 and c_1 from the boundary conditions.

Thus φ_1 is now determined. Since u_0, or φ_1, is now completely known, we form

u_1 = -L_x^{-1}L_y u_0 - L_x^{-1}Ru_0

Then

φ_2 = φ_1 + u_1

which must satisfy the boundary conditions. Continuing to an acceptable approximation φ_n, we must match the conditions at each value of n for a satisfactory solution, as decided either by substitution or by a stabilized numerical answer to a desired number of decimals.

(For the special case of a linear ordinary differential equation, we have the simpler alternative of using the unevaluated u_0 to get u_1, simply carrying along the constants in u_0 and continuing in this way to some φ_n. Thus, in this case, only one evaluation of the constants is necessary. For nonlinear cases or for partial differential equations, the simpler procedure is not generally possible.) To make this clear, consider some examples:

u(-1) = u(1) = 0. Write

Lu = 2 + 40xu

or

u = c_1 + c_2 x + L^{-1}(2) + 40 L^{-1} x Σ_{n=0}^∞ u_n

We can identify u_0 = c_1 + c_2 x + L^{-1}(2) = c_1 + c_2 x + x²

    Now instead of evaluating the constants at this stage, we write

    If, for example, cp, is sufficient,

    Imposing the boundary conditions at - 1. and 1 on q3 we have q,(-1) = &(I) = 0 or

    determining c, and c, and therefore ~ o , in a single evaluation. (By cp,, this yielded seven-digit accuracy.)
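    As a cross-check, the single-evaluation procedure for this boundary-value problem can be sketched with a computer algebra system (our own sketch, not from the text; the number of retained terms is arbitrary):

    ```python
    import sympy as sp

    x, c1, c2 = sp.symbols('x c1 c2')

    def Linv(f):
        # two-fold indefinite integration; the constants are carried in c1, c2
        return sp.integrate(sp.integrate(f, x), x)

    # Lu = 2 + 40*x*u with u(-1) = u(1) = 0; carry the constants in u0
    u0 = c1 + c2*x + Linv(2)
    terms = [u0]
    for _ in range(10):                      # u_{n+1} = 40 L^{-1} x u_n
        terms.append(sp.expand(Linv(40*x*terms[-1])))

    phi = sp.expand(sum(terms))
    # single evaluation: impose both boundary conditions on phi_n
    sol = sp.solve([phi.subs(x, -1), phi.subs(x, 1)], [c1, c2])
    u = phi.subs(sol)
    residual = sp.diff(u, x, 2) - 2 - 40*x*u
    print(float(residual.subs(x, sp.Rational(1, 2))))
    ```

    With enough terms the residual stabilizes near zero, which is the substitution check the text describes.
    
    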

    Another example is d²y/dx² + 2x dy/dx = 0 with y(0) = 0 and y(a) = 1. The solution is y(x) = (erf x)/(erf a).

    Write Ly = -2x(d/dx)y, or y = y_0 - 2L^{-1}x(d/dx)y, with y_0 = c_1 + c_2 x.

    If we satisfy the conditions with φ_1 = y_0, we have y_0 = x/a as our first approximant. If we continue to some φ_n and evaluate only then, we obtain increasing accuracy. If we stop at φ_3 = y_0 + y_1 + y_2,

    φ_3 = c_1 + c_2 x - c_2 x³/3 + c_2 x⁵/10

    and now satisfy the conditions, we have

    φ_3 = (x - x³/3 + x⁵/10)/(a - a³/3 + a⁵/10)

    which is (erf x)/(erf a) to this approximation and can, of course, be carried as far as we like.
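    This example is easy to verify directly; a sketch (the truncation order and the choice a = 1 are ours):

    ```python
    import sympy as sp

    x, c1, c2 = sp.symbols('x c1 c2')
    a = 1  # boundary point y(a) = 1, taken as a = 1 for the check

    def Linv(f):
        # two-fold indefinite integration in x
        return sp.integrate(sp.integrate(f, x), x)

    # y'' + 2x y' = 0, y(0) = 0, y(a) = 1; carry constants in y0 = c1 + c2*x
    terms = [c1 + c2*x]
    for _ in range(8):                               # y_{n+1} = -L^{-1}(2x y_n')
        terms.append(sp.expand(-Linv(2*x*sp.diff(terms[-1], x))))

    phi = sum(terms)
    sol = sp.solve([phi.subs(x, 0), phi.subs(x, a) - 1], [c1, c2])
    y = phi.subs(sol)

    exact = sp.erf(x) / sp.erf(a)
    print(float(y.subs(x, sp.Rational(1, 2))), float(exact.subs(x, sp.Rational(1, 2))))
    ```

    The truncated approximant agrees with (erf x)/(erf a) to several digits even at this low order.
    
    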

    For linear ordinary differential equations, both procedures will work; i.e., we can use the evaluated u_0 to get u_1, add it to u_0 to get φ_2, then satisfy the conditions at the boundaries again; or carry the constants along as in the examples. The last procedure is most convenient because of the single evaluation; the first is more general, since it applies to nonlinear ordinary differential equations and to linear or nonlinear partial differential equations as well [6,7].

    EVALUATION OF COEFFICIENTS FOR A LINEAR PARTIAL DIFFERENTIAL EQUATION:

    u_xx - u_yy = 0

    for 0 ≤ x ≤ π/2, 0 ≤ y ≤ π/2, with the conditions given as

    u(0,y) = u(x,0) = 0,  u(π/2, y) = sin y,  u(x, π/2) = sin x

    Let L_x = ∂²/∂x² and L_y = ∂²/∂y² to write L_x u = L_y u. If we apply inversion to the L_x operator, we have u = k_1(y) + x k_2(y) + L_x^{-1}L_y u.

    Now Φ_x = k_1(y) + x k_2(y). Hence u = Φ_x + L_x^{-1}L_y u. The one-term approximant is φ_1 = u_0 = Φ_x. A two-term approximant is φ_2 = φ_1 + u_1, with u_1 = L_x^{-1}L_y u_0. The x conditions are u(0,y) = 0 and u(π/2,y) = sin y. Applying these conditions to k_1(y) + x k_2(y), we see that k_1 = 0 and k_2 = (2/π) sin y. Thus, if the one-term approximant φ_1 were sufficient, the "solution" would be φ_1 = (2/π) x sin y.

    The next term is u_1 = L_x^{-1}L_y u_0 = L_x^{-1}L_y[(2/π) x sin y]. Then φ_2 = u_0 + u_1 is given by φ_2 = k_1 + x k_2 - (2/π)(x³/3!) sin y. Because of the condition at x = 0, we have k_1 = 0. From the condition on x at π/2,

    (π/2)k_2(y) - (2/π)((π/2)³/3!) sin y = sin y
    k_2(y) = (2/π + π/12) sin y

    hence

    φ_2 = ((2/π + π/12) x - (2/π)(x³/3!)) sin y

    The first coefficient was (2/π) ≈ 0.637. The second (from φ_2) is ≈ 0.899. As n → ∞, the leading coefficient approaches 1.0 and the successive terms build up the sine series, so that u = sin x sin y is the solution.

    Notice that if we try to carry along the constants of integration k_1 and k_2 to some φ_n and do a single evaluation for determination of the constants, we have u_1 = L_x^{-1}L_y u_0 = L_x^{-1}L_y[k_1(y) + x k_2(y)], which we cannot carry out; we must use the evaluated preceding terms rather than a single evaluation at φ_n. We have used only the one operator equation; the same results are obtained from either.
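    The successive leading coefficients can be generated by a short recursion: at stage n the approximant has the form φ_n = Σ_{m=0}^{n-1} a_{n-m}(-1)^m x^{2m+1}/(2m+1)! · sin y, and matching φ_n(π/2, y) = sin y determines the newest coefficient. A sketch (our own, following the pattern above):

    ```python
    import math

    half_pi = math.pi / 2
    a = []  # a[k] holds the stage-(k+1) leading coefficient a_{k+1}
    for n in range(1, 8):
        # t_m = (-1)^m (pi/2)^(2m+1) / (2m+1)!
        t = [(-1) ** m * half_pi ** (2 * m + 1) / math.factorial(2 * m + 1)
             for m in range(n)]
        # matching phi_n(pi/2, y) = sin y:  sum_m a_{n-m} t_m = 1
        known = sum(a[n - m - 1] * t[m] for m in range(1, n))
        a.append((1 - known) / t[0])

    # approaches 1 as n grows; cf. 0.637 and 0.899 quoted in the text
    print([round(v, 7) for v in a])
    ```

    The first entry is 2/π, the second 2/π + π/12, and the sequence converges rapidly toward 1.
    
    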

    Let us consider a more general form of the linear partial differential equation, L_x u + L_y u + Ru = g, where L_x = ∂²/∂x² and L_y = ∂²/∂y², with conditions specified by B_1 u|_{x=b_1} = β_1(y) and B_2 u|_{x=b_2} = β_2(y). Solving for L_x u and applying the inverse operator, we have

    u = Φ + L_x^{-1}g - L_x^{-1}L_y u - L_x^{-1}Ru

    where Φ = c_1(y) + x c_2(y) is the solution of L_xΦ = 0.

    Let u = Σ_{m=0}^∞ u_m and identify the initial term as u_0 = c_1(y) + x c_2(y) + L_x^{-1}g. Now for m ≥ 0, the remaining components are identified by

    u_{m+1} = -L_x^{-1}L_y u_m - L_x^{-1}R u_m

    We now form successive approximants φ_n = Σ_{m=0}^{n-1} u_m, which we match to the boundary conditions. Thus φ_1 = u_0, φ_2 = φ_1 + u_1, φ_3 = φ_2 + u_2, ... serve as approximate solutions of increasing accuracy as n approaches infinity and must satisfy the boundary conditions. Beginning with

    φ_1 = c_1(y) + x c_2(y) + L_x^{-1}g

    we use B_1 φ_1|_{x=b_1} = β_1(y) and B_2 φ_1|_{x=b_2} = β_2(y) to determine c_1(y) and c_2(y), so that φ_1 is completely determined. Since u_0 is now known, we can form u_1 = -L_x^{-1}L_y u_0 - L_x^{-1}Ru_0. Then φ_2 = φ_1 + u_1, which must also satisfy the boundary conditions. Continuing to some φ_n, we match the conditions for a sufficient approximation. Thus carrying along constants to φ_n for a single evaluation doesn't work except for linear ordinary differential equations. For linear partial differential equations, we must use the already evaluated preceding terms, and we can do so also for nonlinear ordinary differential equations.

    COEFFICIENT-GENERATING ALGORITHMS FOR SOLUTION OF PARTIAL DIFFERENTIAL EQUATIONS IN SERIES FORM:

    Let's consider a model system in the form

    ∂²u/∂t² + αu + β ∂²u/∂x² = g(t,x)

    assuming conditions given in the form u(0,x) = Φ(x) and ∂u/∂t(0,x) = Ψ(x). Write

    We note that

    THE DECOMPOSITION METHOD IN SEVERAL DIMENSIONS

    where

    Defining L = ∂²/∂t² and L^{-1} as a two-fold definite integration from 0 to t, we can write

    Lu + αu + β(∂²/∂x²)u = g(t,x)

    or

    Lu = g(t,x) - αu - β(∂²/∂x²)u

    Operating with L^{-1}, we have

    u = u_0 - L^{-1}αu - L^{-1}β(∂²/∂x²)u

    where

    which we will write as

    where

    Thus the coefficients are

    Using decomposition, u = Σ_{m=0}^∞ u_m,


    so that for m = 0,

    and for m > 0,

    Since we can also write

    we note

    The next component u_1 is

    Since

    Thus

    which we write as


    where

    Proceeding in the same way, write

    which is now rewritten as

    where

    Continuing, we calculate u_3, u_4, ... and see that we can write, for the p-th component of u,

    with

    as a coefficient-generating algorithm. Thus for p = 0

    Then the solution is u = Σ_{p=0}^∞ u_p. Consequently,

    is the solution, with

    as the ν-th approximant to the solution, which becomes an increasingly accurate representation of u as ν increases.

    MIXED DERIVATIVES: Consider the equation u_xy = -u given the conditions u(x,0) = e^x and u(0,y) = e^{-y}. Let

    L_x = ∂/∂x,  L_y = ∂/∂y

    Then L_x^{-1}(·) = ∫_0^x (·)dx and L_y^{-1}(·) = ∫_0^y (·)dy. In operator form, we have

    L_x L_y u = -u. Operating with L_x^{-1}, we have

    L_x^{-1}L_x(L_y u) = -L_x^{-1}u
    L_y u - L_y u(0,y) = -L_x^{-1}u
    L_y u = L_y u(0,y) - L_x^{-1}u

    Operating now with L_y^{-1},

    u = u(x,0) + u(0,y) - u(0,0) - L_y^{-1}L_x^{-1}u = e^x + e^{-y} - 1 - L_y^{-1}L_x^{-1}u

    Let u = Σ_{n=0}^∞ u_n. Hence u_0 = e^x + e^{-y} - 1 and u_{n+1} = -L_y^{-1}L_x^{-1}u_n.

    Since

    we have

    Because u = Σ_{m=0}^∞ u_m,

    is the solution. However, we can rearrange the terms to get a simpler representation using staggered summation:

    where

    We recognize the binomial expansion of (x - y)^m and write

    which is, of course, the exponential series of (x - y), so that u = e^{x-y}, which is the same result in a convenient form.
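    Assuming the inversion just derived, u_0 = e^x + e^{-y} - 1 with u_{n+1} = -L_y^{-1}L_x^{-1}u_n, the convergence to e^{x-y} can be checked numerically (a sketch; the iteration count and test point are ours):

    ```python
    import sympy as sp

    x, y = sp.symbols('x y')

    # u_xy = -u, u(x,0) = exp(x), u(0,y) = exp(-y)
    u0 = sp.exp(x) + sp.exp(-y) - 1
    terms = [u0]
    for _ in range(8):
        # u_{n+1} = -Ly^{-1} Lx^{-1} u_n, both definite integrations from 0
        inner = sp.integrate(terms[-1], (x, 0, x))
        terms.append(sp.expand(-sp.integrate(inner, (y, 0, y))))

    u = sum(terms)
    approx = float(u.subs({x: sp.Rational(1, 2), y: sp.Rational(3, 10)}))
    exact = float(sp.exp(sp.Rational(1, 5)))    # e^(x-y) at (0.5, 0.3)
    print(approx, exact)
    ```

    A handful of components already reproduce e^{x-y} to many digits at interior points.
    
    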

    REMARK: If we write φ_2 = u_0 + u_1, we can recognize the first six terms of the product (1 + x + x²/2)(1 - y + y²/2), i.e., of e^x e^{-y}. Write u = e^x e^{-y} + N, substitute into the original equation, and see that N must vanish in the limit.

    EXERCISE: u_xy = u_x + u_y - u with u(x,0) = e^x and u(0,y) = e^{-y}. (The solution is u = e^{x-y}.)

    EXERCISE: u_xy = [4xy/(1 + x²y²)]u with u(x,0) = u(0,y) = 1. (The solution is u = 1 + x²y².) A generalization to u_xy + k(x,y)u = g(x,y) with u(0,y) = ζ(y) and u(x,0) = η(x) can also be considered, using power-series expansions of the functions to several terms.

    MODIFIED DECOMPOSITION SOLUTION: u_xy = u_x + u_y - u with u(x,0) = e^x and u(0,y) = e^{-y}. Let

    u = Σ_{m=0}^∞ Σ_{n=0}^∞ a_{m,n} x^m y^n

    Then we note that u(x,0) = Σ_{m=0}^∞ x^m/m! and u(0,y) = Σ_{n=0}^∞ (-y)^n/n!. Substituting in the equation,

    Σ_{m=0}^∞ Σ_{n=0}^∞ (m+1)(n+1) a_{m+1,n+1} x^m y^n = Σ_{m=0}^∞ Σ_{n=0}^∞ (m+1) a_{m+1,n} x^m y^n + Σ_{m=0}^∞ Σ_{n=0}^∞ (n+1) a_{m,n+1} x^m y^n - Σ_{m=0}^∞ Σ_{n=0}^∞ a_{m,n} x^m y^n

    Equating like powers,

    (m+1)(n+1) a_{m+1,n+1} = (m+1) a_{m+1,n} + (n+1) a_{m,n+1} - a_{m,n}

    with

    a_{m,0} = 1/m!
    a_{0,n} = (-1)^n/n!

    We can now compute a table of coefficients in a convenient triangular form:

    which is given as:

    and by induction,

    Therefore

    Consequently, u = e^{x-y}. From the table of coefficients, we see that

  • Since

    ADDENDUM: From the table of coefficients in triangular form, we have

    Therefore by substitution,

    Consequently we can derive the recurrence relation by substitution:

    a_{m,0} = 1/m!
    a_{0,n} = (-1)^n/n!

    so that

    u = Σ_{m=0}^∞ Σ_{n=0}^∞ a_{m,n} x^m y^n
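    The recurrence is easy to tabulate; a sketch, filling the table from the two known edges:

    ```python
    from math import factorial

    M = N = 6
    a = [[0.0] * (N + 1) for _ in range(M + 1)]
    for m in range(M + 1):
        a[m][0] = 1 / factorial(m)             # a_{m,0} = 1/m!
    for n in range(N + 1):
        a[0][n] = (-1) ** n / factorial(n)     # a_{0,n} = (-1)^n/n!

    # (m+1)(n+1) a_{m+1,n+1} = (m+1) a_{m+1,n} + (n+1) a_{m,n+1} - a_{m,n}
    for m in range(M):
        for n in range(N):
            a[m + 1][n + 1] = ((m + 1) * a[m + 1][n] + (n + 1) * a[m][n + 1]
                               - a[m][n]) / ((m + 1) * (n + 1))

    # closed form a_{m,n} = (-1)^n/(m! n!), i.e. u = e^(x-y)
    print(a[2][3], (-1) ** 3 / (factorial(2) * factorial(3)))
    ```

    Every computed entry matches the closed form (-1)^n/(m! n!), confirming u = e^{x-y}.
    
    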

    GENERALIZATION OF THE A_n POLYNOMIALS TO FUNCTIONS OF SEVERAL VARIABLES:

    In applying the decomposition method to nonlinear differential equations arising in physical problems, we may encounter nonlinearities involving several variables [8]. We now generalize the algorithm for the A_n for f(u) to analytic functions of several variables such as f(u,v), where f(u,v) is not factorable into f_1(u)f_2(v). (The latter case, of course, is solvable as a "product nonlinearity" by developing the A_n for each factor and obtaining their product.) Examples appear in [2]. Our objective is to extend the class of solvable systems.

    In the use of the method, the solution of a differential or partial differential equation is written u = Σ_{n=0}^∞ u_n and f(u) = Σ_{n=0}^∞ A_n(u_0, u_1, ..., u_n), where u_0 is a function involving initial/boundary conditions, the forcing function, and an integral operator. This amounts to the assumption that the solution and functions of the solution are expanded in the A_n polynomials, since A_n reduces to u_n for f(u) = u. For development of an algorithm for the A_n, it is convenient to assume parametrized forms of the u and f(u). The following expressions have been given by the author [2]:

    or simply A_n = (1/n!) D^n f|_{λ=0}, where D^n = d^n/dλ^n, and

    The D^n f term for n > 0 can be written as a sum from ν = 1 to n of terms d^ν f/du^ν with coefficients which are polynomials in the d^ν u/dλ^ν. Thus,

    The result for A_n can finally be given in a very convenient form which we have referred to as Rach's Rule,

    A_n = Σ_{ν=1}^n c(ν,n) f^{(ν)}(u_0)

    Here f^{(ν)}(u_0) means the ν-th derivative of f(u) at u = u_0, and the 1/n! is absorbed in the c(ν,n). The first index of c(ν,n) progresses from 1 to n along with the order of the derivative. The second index is the order of the polynomial.

    The A_n is a function of u_0, u_1, ..., u_n, i.e., of the components of the solution u in the decomposition. The c(ν,n) are products (or sums of products) of ν components of u whose subscripts sum to n, with the result divided by the factorial of the number of repeated subscripts. For example, c(1,3) can only be u_3 (a single subscript, which must be 3). c(2,3) can only be u_1 u_2 (two subscripts adding to 3). c(3,3) = (1/3!)u_1³. c(2,6) has two subscripts adding to 6, for which we have the three possibilities (2,4), (1,5), and (3,3). Hence c(2,6) = u_2 u_4 + u_1 u_5 + (1/2!)u_3².
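    The same polynomials can be generated mechanically from the parametrization A_n = (1/n!) d^n/dλ^n f(Σ_k u_k λ^k)|_{λ=0}, which is a convenient check on the c(ν,n); a sketch with a generic f:

    ```python
    import sympy as sp

    N = 4
    lam = sp.symbols('lam')
    u = sp.symbols('u0:%d' % (N + 1))       # u0, u1, ..., uN
    f = sp.Function('f')

    # parametrized solution u(lam) = sum u_k lam^k
    ulam = sum(uk * lam**k for k, uk in enumerate(u))
    A = [(sp.diff(f(ulam), lam, n).subs(lam, 0) / sp.factorial(n)).doit()
         for n in range(N + 1)]

    for n, An in enumerate(A):
        print('A%d =' % n, sp.expand(An))
    ```

    The output reproduces, e.g., A_2 = u_2 f'(u_0) + (1/2!)u_1² f''(u_0) and A_3 = u_3 f'(u_0) + u_1 u_2 f''(u_0) + (1/3!)u_1³ f'''(u_0), in agreement with the c(ν,n) rules above.
    
    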

    ANALYTIC FUNCTION OF TWO VARIABLES f(u,v): Proceeding analogously to the case of one variable,

    The A_n for f(u) are written A_n{f(u)}. Generalizing to A_n{f(u,v)}, or A_n{f(u(λ),v(λ))}, we introduce the notation

    f_{μ,ν} = (∂^μ/∂u_0^μ)(∂^ν/∂v_0^ν) f(u_0, v_0)

    Proceeding analogously to the c(ν,n) and f^{(ν)}(u_0) for f(u), we can now write c(μ,ν;n) and f_{μ,ν}, or f_{μ,ν}(u_0, v_0), for a function f(u,v).

    Comparing with D¹f, we see that c(1,0;1) = du/dλ, which must be evaluated at λ = 0. Since u = Σ_{n=0}^∞ λ^n u_n, du/dλ|_{λ=0} = u_1. Hence

    A_1 = c(1,0;1) f_{1,0} + c(0,1;1) f_{0,1} = u_1 f_{1,0} + v_1 f_{0,1}

    Proceeding in the same way, we can list the A_n{f(u,v)}:

    Perhaps more conveniently, we will write it in symmetric form [1,2], where the indices of f_{μ,ν} start from n,0, subtracting 1 from the first index and adding 1 to the second for the next set, ..., and finally reaching 0,n. Thus

    For A_0, we have c(0,0;0) = 1. We can list the A_n as follows:


    For A_1 = Σ_{μ+ν=1} c(μ,ν;1) f_{μ,ν}, we need only c(1,0;1) = u_1 and c(0,1;1) = v_1. For A_2 = Σ c(μ,ν;2) f_{μ,ν}, we need

    CONVENIENT RULES FOR USE: The A_n have been written out in detail as a convenient reference and an aid in calculations. However, they can now be written by simply remembering the algorithm. The c(μ,ν;n) are written by considering all possibilities for μ and ν with 1 ≤ μ + ν ≤ n. Inspection of the listed c(μ,ν;n) will make it clear that μ tells us how many times a component of u appears and ν tells us how many times a component of v appears. Further, we see that the sum of all the subscripts is n and, as with functions of a single variable, repeated indices require division by the factorial of the number of repetitions.

    ANALYTIC FUNCTION OF SEVERAL VARIABLES: Let's consider f(u,v,w) ≠ f_1(u)f_2(v)f_3(w). Thus N[u,v,w], with N a nonlinear operator acting on u, is an analytic function f(u,v,w) which we set equal to Σ_{n=0}^∞ A_n. Now we define

    f_{μ,ν,ω} = (∂^μ/∂u_0^μ)(∂^ν/∂v_0^ν)(∂^ω/∂w_0^ω) f(u_0, v_0, w_0)

    Now

    A_0 = f_{0,0,0}
    A_1 = c(1,0,0;1)f_{1,0,0} + c(0,1,0;1)f_{0,1,0} + c(0,0,1;1)f_{0,0,1}
    A_2 = c(1,0,0;2)f_{1,0,0} + c(0,1,0;2)f_{0,1,0} + c(0,0,1;2)f_{0,0,1} + c(2,0,0;2)f_{2,0,0} + c(1,1,0;2)f_{1,1,0} + c(1,0,1;2)f_{1,0,1} + c(0,1,1;2)f_{0,1,1} + c(0,2,0;2)f_{0,2,0} + c(0,0,2;2)f_{0,0,2}

    The values of the c(μ,ν,ω;n) above are

    c(1,0,0;1) = u_1    c(1,1,0;2) = u_1 v_1
    c(0,1,0;1) = v_1    c(1,0,1;2) = u_1 w_1
    c(0,0,1;1) = w_1    c(0,1,1;2) = v_1 w_1
    c(1,0,0;2) = u_2    c(2,0,0;2) = u_1²/2!
    c(0,1,0;2) = v_2    c(0,2,0;2) = v_1²/2!
    c(0,0,1;2) = w_2    c(0,0,2;2) = w_1²/2!

    Thus

    A_0 = f(u_0, v_0, w_0)
    A_1 = u_1(∂f/∂u_0) + v_1(∂f/∂v_0) + w_1(∂f/∂w_0)
    A_2 = u_2(∂f/∂u_0) + v_2(∂f/∂v_0) + w_2(∂f/∂w_0) + u_1v_1(∂²f/∂u_0∂v_0) + u_1w_1(∂²f/∂u_0∂w_0) + v_1w_1(∂²f/∂v_0∂w_0) + (u_1²/2!)(∂²f/∂u_0²) + (v_1²/2!)(∂²f/∂v_0²) + (w_1²/2!)(∂²f/∂w_0²)

    etc. for A_n. We can proceed analogously for the determination of the A_n for functions of m variables.
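    The listed A_1 and A_2 can be checked by applying the same parametrization to all three component series; a sketch:

    ```python
    import sympy as sp

    lam = sp.symbols('lam')
    u = sp.symbols('u0:3')
    v = sp.symbols('v0:3')
    w = sp.symbols('w0:3')
    f = sp.Function('f')

    # parametrized series for u, v, w, truncated at order 2
    U = sum(uk * lam**k for k, uk in enumerate(u))
    V = sum(vk * lam**k for k, vk in enumerate(v))
    W = sum(wk * lam**k for k, wk in enumerate(w))

    A = [(sp.diff(f(U, V, W), lam, n).subs(lam, 0) / sp.factorial(n)).doit()
         for n in range(3)]

    # A1 should equal u1 f_u + v1 f_v + w1 f_w at (u0, v0, w0)
    f0 = f(u[0], v[0], w[0])
    A1_expected = (u[1]*sp.diff(f0, u[0]) + v[1]*sp.diff(f0, v[0])
                   + w[1]*sp.diff(f0, w[0]))
    print(sp.simplify(A[1] - A1_expected))
    ```

    The difference simplifies to zero, and the analogous check of A_2 against the nine-term expression above succeeds as well.
    
    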

    APPLICATIONS: The A_n for f(u,v,w) are needed to solve three coupled nonlinear differential equations. In the author's form [2] for coupled equations, using decomposition, we have

    We let u = Σ_{n=0}^∞ u_n, v = Σ_{n=0}^∞ v_n, w = Σ_{n=0}^∞ w_n, and we write N_i(u,v,w) = f_i(u,v,w) = Σ_{n=0}^∞ A_n{f_i(u,v,w)} for i = 1, 2, 3. Then

    u_0 = Φ_1 + L_1^{-1}g_1 where L_1Φ_1 = 0
    v_0 = Φ_2 + L_2^{-1}g_2 where L_2Φ_2 = 0
    w_0 = Φ_3 + L_3^{-1}g_3 where L_3Φ_3 = 0

    Similarly, we require A_n{f(u_1, ..., u_m)} for m coupled operator equations. An example for a non-factorable nonlinearity f(u,v) is the pair of coupled equations

    du/dx + a_1u + b_1v + f_1(u,v) = g_1
    dv/dx + a_2u + b_2v + f_2(u,v) = g_2

    Finally, we consider N(u,v) = f(u,v) = e^{u+v}. This is an interesting case for comparison purposes, since it is a factorable nonlinearity: e^{u+v} = e^u · e^v, so we can solve it as a product nonlinearity using the A_n{f(u)} for each factor, or with the present results for A_n{f(u,v)}. We can now consider a set of two coupled equations in the general form

    L_u u + R_1(u,v) + N(u,v) = g_1
    L_v v + R_2(u,v) + N(u,v) = g_2

    where N(u,v) = e^{u+v}.

    SOME FINAL REMARKS: The definition of the L operator avoids difficult integrations involving Green's functions. The use of a finite approximation in series form for the excitation term, and calculation only to necessary accuracy, simplifies integrations still further. (With Maclaurin expansion, for example, of trigonometric terms, one needs only integrals of t^n.) The avoidance of the necessity for perturbation and linearization means physically more correct solutions in many cases. The avoidance of discretized or grid methods avoids the computationally intensive procedures inherent in such methods. The decomposition method is continuous and requires significantly less processing time for the computation of results. It has been demonstrated that very few terms of the decomposition series are necessary for an accurate solution, and also that the integrations can be made simple by the suggested methods, or by symbolic methods, and use quite simple computer codes in comparison with methods such as finite differences or finite elements.

    As we have shown, partial differential equations can be solved by choosing one operator for the inversion and considering all other derivatives to be included in the R operator. Hence we solve exactly as with an ordinary differential equation. We have the additional advantage of a single global method (for ordinary or partial differential equations as well as many other types of equations). The convergence is always sufficiently rapid to be valuable for numerical work. The initial term must be bounded (a reasonable assumption for a physical system) and L must be the highest-ordered differential.

    REFERENCES
    1. G. Adomian, Stochastic Systems, Academic Press (1983).
    2. G. Adomian, Nonlinear Stochastic Operator Equations, Academic Press (1986).
    3. G. Adomian and R. Rach, Purely Nonlinear Equations, Comput. Math. Applic., 20, (1-3) (1990).
    4. G. Adomian and R. Rach, Equality of Partial Solutions in the Decomposition Method for Linear or Nonlinear Partial Differential Equations, Comput. Math. Applic., 19, (9-12) (1990).
    5. G. Adomian and R. Rach, Noise Terms in Decomposition Solution Series, Comput. Math. Applic., 23, (79-83) (1992).
    6. G. Adomian, Solving Frontier Problems Modeled by Nonlinear Partial Differential Equations, Comput. Math. Applic., 22, (91-94) (1991).
    7. G. Adomian, R. Rach, and M. Elrod, On the Solution of Partial Differential Equations with Specified Boundary Conditions, J. Math. Anal. and Applic., 140, (569-581) (1989).
    8. G. Adomian and R. Rach, Generalization of Adomian Polynomials to Functions of Several Variables, Comput. Math. Applic., 24, (11-24) (1992).

    SUGGESTED READING
    N. S. Koshlyakov, M. M. Smirnov, and E. B. Gliner, Differential Equations of Mathematical Physics, North-Holland (1964).
    M. M. Smirnov, Second-Order Partial Differential Equations, S. Chomet (ed.), Noordhoff (1964).
    E. A. Kraut, Fundamentals of Mathematical Physics, McGraw-Hill (1967).
    N. Bellomo, Z. Brzezniak, and L. M. de Socio, Nonlinear Stochastic Evolution Problems in Applied Sciences, Kluwer (1992).
    A. Blaquière, Nonlinear System Analysis, Academic Press (1966).

  • CHAPTER 4

    DOUBLE DECOMPOSITION

    In solving boundary-value problems by the decomposition method, we have seen that we can either retain the "constants" of integration in the u_0 term, for the case of linear ordinary differential equations, re-evaluating the constants as more terms of the approximate solution φ_n are computed, or we can use the evaluated u_0 to satisfy the boundary conditions and add constants of integration for each successive term u_n.

    For a linear ordinary differential equation, it is more efficient to calculate an n-term approximation φ_n carrying along the constants in u_0, and finally force φ_n to satisfy the boundary conditions, thus evaluating the constants of integration.

    We now introduce an effective procedure which allows decreased computation, especially in partial differential equations. This is done by a further decomposition, i.e., we now decompose the initial term u_0 = Φ into Σ_{m=0}^∞ Φ_m.

    At first thought, this seems like an ill-advised procedure which can only slow convergence, since the new initial term Φ_0, or u_{0,0}, will be farther from the optimum value for u_0. However, we will see that, as a result, we can use Φ_0 to determine a Φ_1 which can then be used for further terms of Φ without further evaluations. The boundary-value problem becomes an equivalent initial-value formulation in terms of Φ. This eliminates further matching to boundary conditions.

    Let us again consider the equation u_xx - u_yy = 0 with u(0,y) = 0, u(x,0) = 0, u(π/2,y) = sin y, and u(x,π/2) = sin x, whose solution by decomposition is u(x,y) = sin x sin y. We will again use decomposition and also the concept of equality of the partial solutions of the operator equations, so only one operator equation needs to be considered. Also, we will decompose the u_0 term of the decomposition series, which means a double decomposition of the solution u. (This is a much preferable method to that of eigenvalue expansion in m dimensions.)

    We have L_x u = L_y u and can apply the inverse operator L_x^{-1} to both sides. Thus L_x^{-1}L_x u = u - Φ_x, or u = Φ_x + L_x^{-1}L_y u, with u(0,y) = 0 and u(π/2,y) = sin y. Equivalently, we can start with L_y u = L_x u and apply L_y^{-1} to write u = Φ_y + L_y^{-1}L_x u, with u(x,0) = 0 and u(x,π/2) = sin x.


    As usual, we assume u = Σ_{m=0}^∞ u_m, but now we also decompose u_0 into Σ_{m=0}^∞ u_{0,m}. For the x conditions, we have

    with u_0 = Φ_{x,0} and u_m = Φ_{x,m} + L_x^{-1}L_y u_{m-1}. We can also write, using the y conditions, the equation

    with u_0 = Φ_{y,0} and u_m = Φ_{y,m} + L_y^{-1}L_x u_{m-1}. Since L_xΦ_x = 0 and L_yΦ_y = 0, we have

    Φ_{x,0} = ξ_0(y) + x ξ_1(y)
    Φ_{y,0} = η_0(x) + y η_1(x)

    where the ξ's and η's arise from the indefinite integrations. The conditions given determine these integration "constants" for the approximate solutions

    φ_{m+1} = Σ_{n=0}^m u_n. Thus φ_{m+1}(0,y) = 0 and φ_{m+1}(π/2,y) = sin y determine ξ_0(y) and ξ_1(y).

    A two-term approximation is given by φ_2 = φ_1 + u_1 (or u_0 + u_1); hence

    Since φ_2(0,y) = 0, we have

    etc., or

    φ_1 = (0.6366198) x sin y
    φ_2 = (0.8984192) x sin y + (0.6366198)(-x³/3!) sin y
    φ_3 = (0.9737817) x sin y + (0.8984192)(-x³/3!) sin y + (0.6366198)(x⁵/5!) sin y

    which converges very rapidly to the given solution. It is interesting to write the result as

    φ_m = a_{m,0} x sin y + a_{m,1}(-x³/3!) sin y + a_{m,2}(x⁵/5!) sin y + ...

    or

    φ_m = Σ_{n=0}^{m-1} a_{m,n} (-1)^n x^{2n+1}/(2n+1)! sin y

    where the a_{m,n} are numerical sequences whose limit is 1; each is delayed one stage behind the preceding term. Now,

    lim_{m→∞} φ_m = lim_{m→∞} Σ_{n=0}^{m-1} a_{m,n} (-1)^n x^{2n+1}/(2n+1)! sin y

    where lim_{m→∞} a_{m,n} = 1 for all n. Then

    u = lim_{m→∞} φ_m = Σ_{n=0}^∞ ((-1)^n x^{2n+1}/(2n+1)!) sin y = sin x sin y

    The y-dimensional solution is u = sin y sin x since, by symmetry, y is interchanged with x; i.e., the partial solutions are identical.

    Consider the example u_xx + u_yy = g(x,y) = x² + y² with u(0,y) = 0, u(x,0) = 0, u(1,y) = y²/2, u(x,1) = x²/2. We have shown previously, using decomposition, that the solution u = x²y²/2 can be obtained in only two terms. It is also clear that either the operator equation for L_x u or for L_y u can be used with appropriate inversions. Thus

    L_x u = x² + y² - L_y u
    L_x^{-1}L_x u = L_x^{-1}(x² + y²) - L_x^{-1}L_y u

    and since L_x^{-1}L_x u = u - Φ_x,


    u = Φ_x + L_x^{-1}(x² + y²) - L_x^{-1}L_y u    (1)

    Similarly,

    u = Φ_y + L_y^{-1}(x² + y²) - L_y^{-1}L_x u    (2)

    Using (1),

    u_0 = Φ_x + L_x^{-1}(x² + y²)
    Σ_{m=0}^∞ u_m = u_0 - L_x^{-1}L_y Σ_{m=0}^∞ u_m
    u_{m+1} = -L_x^{-1}L_y u_m

    for m ≥ 0. Now, if we decompose the u_0 term as well, we write

    Identifying u_0 = Φ_{x,0} + L_x^{-1}(x² + y²), all other components are determined by

    u_m = Φ_{x,m} - L_x^{-1}L_y u_{m-1}    (3)

    Proceeding analogously, using (2),

    u_m = Φ_{y,m} - L_y^{-1}L_x u_{m-1}    (4)

    Continuing with the x equation, i.e., (1) and (3),

    Φ_x = ξ_0(y) + x ξ_1(y),  Φ_{x,m} = ξ_{0,m}(y) + x ξ_{1,m}(y)    (5)

    or, from (2) and (4),

    Φ_y = η_0(x) + y η_1(x),  Φ_{y,m} = η_{0,m}(x) + y η_{1,m}(x)

    The "constants" of (indefinite) integration are now matched with the approximate solutions φ_{m+1} = Σ_{ν=0}^m u_ν. Thus φ_{m+1}(0,y) = 0 and φ_{m+1}(1,y) = y²/2 determine ξ_{0,m}(y) and ξ_{1,m}(y) in (5). Similarly, φ_{m+1}(x,0) = 0 and φ_{m+1}(x,1) = x²/2 determine η_{0,m}(x) and η_{1,m}(x).

    Proceeding with the x-dimensional solution, Φ_x = ξ_0(y) + x ξ_1(y) and u_0 = ξ_0 + x ξ_1 + L_x^{-1}(x² + y²); after decomposition of u_0,

    Our first approximation is φ_1 = u_0, or

    φ_1 = ξ_0 + x ξ_1 + x⁴/12 + x²y²/2

    where φ_1(0,y) = 0, φ_1(1,y) = y²/2. Since φ_1(0,y) = 0, ξ_0 = 0. Since φ_1(1,y) = y²/2,

    ξ_1 + 1/12 + y²/2 = y²/2

    or ξ_1 = -1/12. Hence

    u_0 = -x/12 + x⁴/12 + x²y²/2

    Then

    u_1 = ξ_{0,1} + x ξ_{1,1} - L_x^{-1}L_y u_0

    Since L_y u_0 = x² and L_x^{-1}L_y u_0 = L_x^{-1}x² = x⁴/12,

    u_1 = ξ_{0,1} + x ξ_{1,1} - x⁴/12

    Matching φ_2 = u_0 + u_1 to the conditions φ_2(0,y) = 0 and φ_2(1,y) = y²/2 gives ξ_{0,1} = 0 and ξ_{1,1} = 1/12.

    We now have φ_2 = x²y²/2, i.e., the exact solution in two terms. If we proceed further,

    u_2 = ξ_{0,2} + x ξ_{1,2} - L_x^{-1}L_y u_1

    We have L_y u_1 = 0, so L_x^{-1}L_y u_1 = 0,

    and since φ_3(0,y) = 0, ξ_{0,2} = 0. Since φ_3(1,y) = y²/2, ξ_{1,2} = 0 as well; thus u_2 = 0 and the solution x²y²/2 is unchanged.
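    The two-stage calculation above can be reproduced mechanically (a sketch; the symbol names are ours):

    ```python
    import sympy as sp

    x, y = sp.symbols('x y')

    def Lxinv(f):
        # two-fold indefinite integration in x; stage constants added separately
        return sp.integrate(sp.integrate(f, x), x)

    def match(phi, consts):
        # impose u(0,y) = 0 and u(1,y) = y^2/2 on an approximant
        s = sp.solve([phi.subs(x, 0), phi.subs(x, 1) - y**2 / 2], consts)
        return phi.subs(s)

    xi00, xi10, xi01, xi11 = sp.symbols('xi00 xi10 xi01 xi11')

    # stage 0: u0 = xi00 + x*xi10 + Lx^{-1}(x^2 + y^2)
    u0 = match(xi00 + xi10 * x + Lxinv(x**2 + y**2), [xi00, xi10])
    # stage 1: u1 = xi01 + x*xi11 - Lx^{-1} Ly u0, matched through phi2 = u0 + u1
    phi2 = match(u0 + xi01 + xi11 * x - Lxinv(sp.diff(u0, y, 2)), [xi01, xi11])

    print(sp.expand(u0))    # u0 = -x/12 + x^4/12 + x^2 y^2/2, as in the text
    print(sp.expand(phi2))  # equals x^2 y^2/2, the exact solution in two terms
    ```

    The boundary matching at each stage replaces any global evaluation, which is exactly the double-decomposition bookkeeping described above.
    
    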

    if the determinant of the first matrix is non-zero. We now go to the next approximation φ_2 by first determining

    to get φ_2 = φ_1 + u_1. Matching φ_2 to the boundary conditions to evaluate the constants, φ_2 is determined completely. Continuing in this manner, we determine u_2, u_3, ... until we arrive at a satisfactory φ_m, verifiable by substitution or stabilized numerically to sufficient accuracy. We have

    where Φ_m = c_{0,m} + x c_{1,m} and φ_{m+1} = φ_m + u_m. Matching φ_{m+1} to the boundary conditions, we require

    where

  • Thus,

    Now