
Stochastic linearization: what is available and what is not

Pierre Bernard

Laboratoire de Mathématiques Appliquées, U.R.A. C.N.R.S. 1501, Université Blaise Pascal (Clermont-Ferrand), 63177 Aubière Cedex, France

Abstract

Stochastic equivalent linearization methods are the most popular among all approximation methods for the dynamics of a nonlinear system under random excitation. A complete presentation of these methods can be found in Roberts, J. B., Spanos, P. D., Random Vibration and Statistical Linearization. J. Wiley & Sons, 1990 [5]. Despite the fact that they were introduced 40 years ago, the first justification, concerning the so-called "true linearization", was proposed by Kozin (Kozin, F., The Method of Statistical Linearization for Non-Linear Stochastic Vibrations. In Nonlinear Stochastic Dynamic Engineering Systems, ed. F. Ziegler, G. I. Schuëller. Springer Verlag, 1987.) [4] in 1987. The so-called "Gaussian linearization" is the most used of all. The goal of this contribution is to present a mathematical approach to the problem of stochastic linearization, recently introduced in Bernard, P., Wu, L., Stochastic Linearization: The Theory, to appear [2], based on the use of a large deviation principle. This approach can be considered as an extension of Kozin's work [4]. Several linearization methods are justified, among which the "true linearization". The "Gaussian linearization" unfortunately cannot be justified. Moreover, an example from Alaoui, M., Bernard, P., Asymptotic Analysis and Linearization of the Randomly Perturbated Two-wells Duffing Oscillator [1] shows that it can give rise to wrong results. This fact was also noticed by Grundmann at this international conference on uncertain structures. © 1998 Elsevier Science Ltd. All rights reserved.

1. Introduction

Engineers concerned with random vibrations have to deal with non-linear oscillators with white noise excitation, modelled by the second order stochastic differential equation

$$\ddot{x}(t) + f(x(t), \dot{x}(t)) = \sigma \dot{w}, \qquad (1)$$

where $\dot{w}$ denotes standard white noise.

From a mathematical point of view, Eq. (1) is interpreted in the phase space as an Itô stochastic differential equation:

$$dx_t = y_t\,dt, \qquad dy_t = \sigma\,dw_t - f(x_t, y_t)\,dt. \qquad (2)$$

The solution we are concerned with is the weak stationary solution, which is assumed to exist and to be unique. Let $\Omega = C(\mathbb{R}, \mathbb{R}^2)$ be the space of continuous functions from $\mathbb{R}$ to $\mathbb{R}^2$, endowed with the compact convergence topology, so that $\Omega$ is a Polish space. By a stochastic process, we mean a probability measure $P$ on the Borel $\sigma$-field of $\Omega$. For a process $P$, $P_t$ is the order one marginal; if $P$ is stationary, this marginal is denoted by $\mu_P$.

The following two examples of Eq. (1) are of particular interest:

(1) The linear oscillator, such that $f(x, y) = k(x - m) + cy$, where $k$ and $c$ are positive constants, and $m$ is real. Existence and uniqueness of a stationary solution can be proved; this solution is Gaussian, with mean $m$, and known explicitly.

(2) The Duffing oscillator, such that $f(x, y) = g(x) + cy$, where $g$ has odd polynomial growth and $x g(x) \to +\infty$ as $|x| \to \infty$. The primitive $G$ of $g$ vanishing at 0 is the stiffness potential of the vibrating structure. In this case too, existence and uniqueness of the stationary solution of Eq. (2) can be proved, and the invariant probability measure is absolutely continuous
with respect to Lebesgue measure, with density function

$$\varphi(x, y) = K \exp\!\left(-\frac{2c}{\sigma^2}\, H(x, y)\right),$$

where $H(x, y) = G(x) + \tfrac{1}{2}y^2$ is the Hamiltonian of the associated conservative system. The exact transition probability density is unknown in this example: it is only known in the linear case.
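Since the transition density of Eq. (2) is not known in closed form, stationary statistics are in practice obtained by simulation. The following minimal Euler-Maruyama sketch, in Python, integrates the phase-space system (2) with a Duffing-type drift $f(x, y) = g(x) + cy$; the step size, horizon and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def euler_maruyama_duffing(g, c=0.05, sigma=0.1, dt=1e-3, n_steps=500_000,
                           burn_in=100_000, seed=0):
    """Simulate dx = y dt, dy = sigma dW - (g(x) + c y) dt by Euler-Maruyama and
    return post-burn-in samples of (x, y) approximating the stationary law."""
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0
    out = np.empty((n_steps - burn_in, 2))
    sqrt_dt = np.sqrt(dt)
    for i in range(n_steps):
        dw = rng.normal(0.0, sqrt_dt)
        x, y = x + y * dt, y + sigma * dw - (g(x) + c * y) * dt
        if i >= burn_in:
            out[i - burn_in] = (x, y)
    return out

# Example: two-well stiffness g(x) = k(-x + lam x^3) with assumed values k = 1, lam = 10
samples = euler_maruyama_duffing(lambda x: -x + 10.0 * x**3)
print(samples[:, 0].mean(), samples[:, 0].var())
```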

In spite of its apparent simplicity, apart from the linear case, only very little is known concerning the solution of Eq. (2). Mechanical engineers confronted with this issue developed several methods, all known as equivalent statistical linearization, whose basic idea is to design a linear oscillator whose response to a white noise excitation with the same probability distribution as the white noise in the RHS of Eq. (1) can be used as an approximation of the stationary response of Eq. (1).

Let us recall here the main steps, in the case when $f$ is skew-symmetric for simplicity (consult [5] for a complete exposition).

Eq. (1) is written as:

$$\ddot{x}(t) + \bar{k}x(t) + \bar{c}\dot{x}(t) + \big[f(x(t), \dot{x}(t)) - (\bar{k}x(t) + \bar{c}\dot{x}(t))\big] = \sigma\dot{w}. \qquad (3)$$

Positive constants $\bar{k}$ and $\bar{c}$ minimizing the mean square of the term between brackets are computed:

$$(\bar{c}, \bar{k}) = \arg\min\Big\{ \int \big[f(x, y) - (kx + cy)\big]^2\, \mu_P(dx, dy);\ c > 0,\ k > 0 \Big\}.$$

This problem is convex in the variables $(c, k)$, and the necessary conditions for optimality are:

$$\bar{c} = \int y f(x, y)\,\mu_P(dx, dy) \Big/ \int y^2\,\mu_P(dx, dy), \qquad
\bar{k} = \int x f(x, y)\,\mu_P(dx, dy) \Big/ \int x^2\,\mu_P(dx, dy). \qquad (4)$$

This solution is known as the "true linearization" [4].

As the probability distribution $\mu_P$ is generally unknown, the marginal $\mu_{Q_{c,k}}$ of the stationary solution of the linear system with characteristics $\bar{c}$ and $\bar{k}$ is substituted for $\mu_P$ at this step. This is a fixed point problem, whose solution (when existence can be proved; we will later give a counter-example, borrowed from [5]) is the pair of parameters defining the equivalent linear oscillator. This solution is known as the "Gaussian linearization". The corresponding S.D.E. is:

$$dx_t = y_t\,dt, \qquad dy_t = \sigma\,dw_t - (\bar{k}x_t + \bar{c}y_t)\,dt. \qquad (5)$$
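To make the fixed-point character of the Gaussian linearization explicit, here is a minimal sketch (not the paper's procedure; the quadrature-based moment evaluation and the parameter values are assumptions) that iterates Eq. (4) with $\mu_P$ replaced by the zero-mean Gaussian marginal of the current linear system, for a drift of the form $f(x, y) = g(x) + c_0 y$. For such a drift the equivalent damping is simply $c_0$, so only the stiffness is iterated.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def gaussian_linearization_k(g, c0, sigma, k0=1.0, n_iter=200, n_quad=64):
    """Fixed-point iteration for the equivalent stiffness k_eq of
    x'' + c0 x' + g(x) = sigma w', assuming a zero-mean Gaussian response:
        k_eq = E[x g(x)] / E[x^2],   with   E[x^2] = sigma^2 / (2 c0 k_eq).
    Gaussian moments are computed by Gauss-Hermite quadrature."""
    nodes, weights = hermegauss(n_quad)      # probabilists' Hermite nodes/weights
    weights = weights / weights.sum()        # normalized: expectations under N(0, 1)
    k = k0
    for _ in range(n_iter):
        var_x = sigma**2 / (2.0 * c0 * k)    # stationary variance of the current linear system
        x = np.sqrt(var_x) * nodes
        k_new = np.sum(weights * x * g(x)) / var_x
        if k_new <= 0:
            raise RuntimeError("iteration left the admissible region k > 0")
        if abs(k_new - k) < 1e-12:
            break
        k = k_new
    return k

# Hardening Duffing stiffness g(x) = k0 (x + lam x^3): closed form k_eq = k0 (1 + 3 lam E[x^2])
print(gaussian_linearization_k(lambda x: 1.0 * (x + 10.0 * x**3), c0=0.05, sigma=0.04))
```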

Until the contribution [2], the links between the solution $P$ of the initial problem and the solution $Q_{\bar{c},\bar{k}}$ of the linearized problem were unknown. The first part of this paper is a short presentation of the main results of [2]. No proofs are given; they can be found in the referred paper. The second part is dedicated to the study of an example taken from [5], highlighting the problems that can appear with the use of Gaussian linearization. Details concerning this example can be found in [1].

2. Hypotheses and notations

2.1. Hypotheses concerning Eq. (1) and Eq. (2)

Throughout the paper, we assume the following:

(H1) There is existence and uniqueness of a strong solution of Eq. (2), non-explosion of the solution, and there exists a stationary solution.

(H2) The following integrals exist for any zero-mean Gaussian probability distribution $\nu$ with diagonal covariance on $\mathbb{R}^2$, as well as for the invariant probability measure of Eq. (2):

$$\int f^k(x, y)\,\nu(dx, dy),\ k = 1, 2; \qquad \int x^2\,\nu(dx, dy), \qquad \int y^2\,\nu(dx, dy).$$

(H3) $h(\nu; \mu) < +\infty$ and $h(\mu; \nu) < +\infty$.

2.2. Relative entropy and Donsker-Varadhan entropy

Let $\Omega$ be a Polish space (metrizable, separable and complete topological space), $\mathcal{S}$ its Borel $\sigma$-field, $B(\mathcal{S})$ the space of all real-valued $\mathcal{S}$-measurable bounded functions on $\Omega$ with the topology of uniform convergence, and $C_b(\Omega)$ the subspace of bounded continuous functions on $\Omega$. Let $\lambda$ and $\mu$ be probability measures on $(\Omega, \mathcal{S})$. We define the relative entropy (or Kullback information) of $\lambda$ with respect to $\mu$ by:

$$h(\lambda; \mu) = \int f(x) \log f(x)\, \mu(dx) \quad \text{if } \lambda \ll \mu,\ f = \frac{d\lambda}{d\mu} \text{ and } f \log f \text{ is } \mu\text{-integrable};$$
$$h(\lambda; \mu) = +\infty \quad \text{in all other cases.}$$

Remark. The total variation distance satisfies $\|\lambda - \mu\|^2 \le 2\, h(\lambda; \mu)$. That is why, though it is not a distance, the relative entropy can almost be used in the same way.
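Between two nondegenerate Gaussian measures the relative entropy just defined is available in closed form, which is convenient for the Gaussian marginals $\nu_{k,c,m}$ used later in the paper. The sketch below is only an illustration of the definition: it evaluates the standard formula, with parameter values chosen purely for the example.

```python
import numpy as np

def gaussian_relative_entropy(m0, S0, m1, S1):
    """h(N(m0, S0); N(m1, S1)) = 1/2 [tr(S1^{-1} S0) + (m1 - m0)^T S1^{-1} (m1 - m0)
                                       - d + log det S1 - log det S0]."""
    m0, m1 = np.asarray(m0, float), np.asarray(m1, float)
    S0, S1 = np.asarray(S0, float), np.asarray(S1, float)
    d = m0.size
    S1_inv = np.linalg.inv(S1)
    dm = m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0) + dm @ S1_inv @ dm - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Marginals of two linearized oscillators (k, c, m), with sigma = 1 (assumed values):
def marginal(k, c, m, sigma=1.0):
    return np.array([m, 0.0]), np.diag([sigma**2 / (2*k*c), sigma**2 / (2*c)])

print(gaussian_relative_entropy(*marginal(1.0, 0.05, 0.0), *marginal(1.2, 0.05, 0.2)))
```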

Let now $E$ be a Polish space, $\Omega$ the space of all continuous functions from $\mathbb{R}$ to $E$, endowed with the topology
of compact convergence. This is also a Polish space. Denote by $\Omega_t^+$ the quotient space of $\Omega$ of all functions from $[t, \infty)$ to $E$, the state space of the process, and by $\mathcal{F}_s^t$ the $\sigma$-field on $\Omega$ generated by $\{\omega(u):\ s \le u \le t\}$. $\Omega_t^+$ can be identified with $\Omega$ endowed with the $\sigma$-field $\mathcal{F}_t^{\infty}$. Denote by $(\theta_t)$ the group of shift operators on $\Omega$: $\theta_t\omega(s) = \omega(t + s)$.

Let $(P_x)_{x \in E}$ be the family of probability distributions on $\Omega_0^+$ associated with a Markov process. Any stationary probability distribution $Q$ on $\Omega_0^+$ can be uniquely extended to a stationary probability distribution on $\Omega$, still denoted by $Q$, whose regular conditional probability distribution given $\mathcal{F}_{-\infty}^0$ is denoted by $Q_{0,\omega}$. The next result is from Varadhan [6]:

Definition and theorem 1. For all $t > 0$, define $H(t, Q) = E^Q\{h_{\mathcal{F}_0^t}(Q_{0,\omega};\ P_{0,\omega(0)})\}$. Then, either $H(t, Q) \equiv \infty$ for all $t > 0$, or there exists a constant $H(Q) < \infty$ such that $H(t, Q) = t\, H(Q)$ for all $t > 0$. $H(Q)$ is the Donsker-Varadhan entropy of $Q$ with respect to $(P_x)_{x \in E}$.

When any confusion is possible, we denote this entropy by $H(Q; P)$.

2.3. Equivalence of diffusions

The next result is a direct consequence of the Girsanov theorem.

Lemma 2. Let $Z_t$ be a diffusion process in $\mathbb{R}^2$, solution of

$$dZ_t = b(Z_t)\,dt + \Sigma\,dW_t, \qquad (6)$$

where $b = \binom{b_1}{b_2}$ and $\Sigma = \binom{0}{\sigma}$, with initial distribution $\nu$. Let us assume existence and uniqueness of a non-explosive strong solution of Eq. (6). If $(X_t)$ denotes the solution of Eq. (2) with initial distribution $\mu$, the probability distributions of $X$ and $Z$ are equivalent on $\mathcal{F}_0^t$ if and only if $b_1(z_1, z_2) = z_2$ and $\nu$ is equivalent to $\mu$. Then, the density $dP_Z/dP_X(t, X)$ is:

$$\frac{dP_Z}{dP_X}(t, X) = \frac{d\nu}{d\mu}(X_0)\,\exp\!\left(\frac{1}{\sigma}\int_0^t (b_2 + f)(X_s)\,dW_s - \frac{1}{2\sigma^2}\int_0^t (b_2 + f)^2(X_s)\,ds\right),$$

and the relative entropy is

$$h_{\mathcal{F}_0^1}(P_Z; P_X) = \frac{1}{2\sigma^2}\,E^{P_Z}\!\left[\int_0^1 (b_2 + f)^2(X_t)\,dt\right] + h(\nu; \mu).$$

Lemma 3. Let $(x_t, y_t)_{t \in [0,1]}$ be the coordinate process on $C([0, 1], \mathbb{R}^2)$, $Q$ a probability distribution on $C([0, 1], \mathbb{R}^2)$, and $P$ a weak solution of

$$dx_t = y_t\,dt, \qquad dy_t = \sigma\,dw_t, \qquad (7)$$

with any initial distribution. Assume $Q$ is Gaussian, Markov (with respect to the canonical filtration), with a constant marginal flow. If $Q$ is absolutely continuous with respect to $P$, there exist constants $k > 0$, $c > 0$, $m \in \mathbb{R}$ such that $Q$ is a weak stationary solution of

$$dx_t = y_t\,dt, \qquad dy_t = \sigma\,dw_t - \big(k(x_t - m) + c y_t\big)\,dt. \qquad (8)$$

In this case, $\nu = \mu_Q$ has mean vector $(m, 0)$ and covariance matrix

$$\begin{pmatrix} \sigma^2/2kc & 0 \\ 0 & \sigma^2/2c \end{pmatrix}.$$
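The diagonal stationary covariance in Lemma 3 can be checked numerically by solving the continuous Lyapunov equation of the linear system (8); a minimal sketch follows, with illustrative values for $k$, $c$ and $\sigma$ (assumptions, not values from the paper).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

k, c, sigma = 1.0, 0.05, 0.04          # assumed parameter values
A = np.array([[0.0, 1.0], [-k, -c]])   # drift matrix of (x, y) around the mean (m, 0)
B = np.array([[0.0], [sigma]])         # diffusion matrix

# The stationary covariance P solves A P + P A^T + B B^T = 0
P = solve_continuous_lyapunov(A, -B @ B.T)
print(P)
print(np.diag([sigma**2 / (2*k*c), sigma**2 / (2*c)]))   # closed form of Lemma 3
```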

3. Main theoretical results

3.1. The first approach

The time set is $[0, 1]$. Let us consider a sample $(x_1, x_2, \ldots)$ of $P_\mu|_{[0,1]}$, which can be regarded as coordinates of $\{(C([0, 1], \mathbb{R}^2))^{\mathbb{N}}, P_\mu^{\mathbb{N}}\}$. Sanov's theorem claims that the empirical measures $L_n = \frac{1}{n}\sum_{k=0}^{n-1} \delta_{x_k}$ satisfy a large deviations principle on $M_1(C([0, 1], \mathbb{R}^2))$ with rate function $h(\cdot\,; P_\mu)$.

In other words,

$$P(L_n \in A) = \exp\big(-n \inf\{h(Q; P_\mu);\ Q \in A\} + o(n)\big), \qquad (9)$$

for every measurable subset $A$ of $M_1(C([0, 1], \mathbb{R}^2))$ satisfying

$$\inf\{h(Q; P_\mu),\ Q \in \bar{A}\} = \inf\{h(Q; P_\mu),\ Q \in \mathring{A}\},$$

where $\bar{A}$ and $\mathring{A}$ are the closure and the interior of $A$ with respect to the topology of weak convergence. Let

$$G = \{Q \in M_1(C([0, 1], \mathbb{R}^2)):\ Q \text{ Gaussian},\ Q_t = Q_s\ \forall\, t, s \in [0, 1]\}, \qquad (10)$$

and, for some fixed Gaussian $\nu \in M_1(\mathbb{R}^2)$:

$$G_\nu = \{Q \in G:\ Q_t = \nu\ \forall\, t \in [0, 1]\}. \qquad (11)$$

By Eq. (9) we have that, for all $Q \in G$ with $h(Q; P_\mu) < +\infty$ and all $\varepsilon > 0$, there is a neighborhood $N(Q)$ such that:

$$P(L_n \in N(Q)) = \exp\big(-n\,(h(Q; P_\mu) \pm \varepsilon) + o(n)\big).$$

Hence $\bar{Q} \in G$ is the most probable probability distribution among $G$ realized by the empirical measures $L_n$ if and only if $\bar{Q}$ solves the variational problem:

$$h(\bar{Q}; P_\mu) = \inf\{h(Q; P_\mu);\ Q \in G\}. \qquad (12)$$

Lemma 4. Let $\nu$ be a Gaussian probability distribution on $\mathbb{R}^2$.
(1) Assume (H1), (H2) and

$$I_G(\nu) = \inf\{h(Q; P_\mu),\ Q \in G_\nu\} < \infty. \qquad (13)$$

Then there is one and only one $\bar{Q}_\nu \in G_\nu$ which reaches the minimum in Eq. (13). Moreover, $\bar{Q}_\nu$ is Markov.

(2) Moreover, assume (H3). Then $I_G(\nu) < +\infty$ if and only if there exist two constants $k > 0$, $c > 0$ such that the covariance matrix of $\nu$ has the form

$$\begin{pmatrix} \sigma^2/2kc & 0 \\ 0 & \sigma^2/2c \end{pmatrix},$$

and $\int y\, d\nu = 0$. In this case, if $m = \int x\, d\nu$, $\bar{Q}_\nu$ is the unique stationary weak solution, denoted by $Q_{k,c,m}$, of:

$$dx_t = y_t\,dt, \qquad dy_t = \sigma\,dw_t - \big(k(x_t - m) + c y_t\big)\,dt. \qquad (14)$$

Define

$$E_1(k, c, m) = h(Q_{k,c,m}; P_\mu) = h(\nu_{k,c,m}; \mu) + \frac{1}{2\sigma^2}\int \big[f(x, y) - (k(x - m) + cy)\big]^2\, \nu_{k,c,m}(dx, dy).$$

The following result is then available:

Theorem 2. Assume (H1), (H2) and (H3). Then:

(1) $I_G(\nu) < \infty$ if and only if the Gaussian probability distribution $\nu$ has a diagonal covariance matrix

$$\begin{pmatrix} \sigma^2/2kc & 0 \\ 0 & \sigma^2/2c \end{pmatrix}$$

with $k > 0$, $c > 0$, and $\int x\, d\nu = m \in \mathbb{R}$, $\int y\, d\nu = 0$. We denote $\nu = \nu_{k,c,m}$; for this probability measure $\nu_{k,c,m}$, the stationary Gaussian probability distribution $Q_{k,c,m}$, solution of Eq. (14), is the unique solution of the variational problem Eq. (13).

(2) Let $(\bar{k}, \bar{c}, \bar{m})$ be a solution of the minimization problem $\inf\{E_1(k, c, m):\ k > 0, c > 0, m \in \mathbb{R}\}$. Then $Q_{\bar{k},\bar{c},\bar{m}}$ is a solution of Eq. (12). This implies that the variational problem Eq. (12) has a solution $\bar{Q}$ if and only if the minimization problem $\inf\{E_1(k, c, m):\ k > 0, c > 0, m \in \mathbb{R}\}$ has a solution; the same holds for uniqueness.

3.2. Second approach

It is similar to the first one, just exchanging the roles of $P$ and $Q$. We then have to solve the following variational problem:

$$h(P_\mu; Q_{\bar{k},\bar{c},\bar{m}}) = \inf\{h(P_\mu; Q_{k,c,m});\ k > 0, c > 0, m \in \mathbb{R}\}. \qquad (15)$$

Theorem 3. Assume that $f(x, y)$ satisfies hypotheses (H1) and (H2). Then there exist solutions to the variational problem Eq. (15); $Q_{\bar{k},\bar{c},\bar{m}}$ is such a solution if and only if $(\bar{k}, \bar{c}, \bar{m})$ is a solution of

$$(\bar{k}, \bar{c}, \bar{m}) = \arg\min\Big\{ h(\mu; \nu_{k,c,m}) + \frac{1}{2\sigma^2}\int \big[f(x, y) - (k(x - m) + cy)\big]^2\, d\mu;\ k > 0, c > 0, m \in \mathbb{R} \Big\}.$$

This problem is different from both the "true linearization" and the "Gaussian linearization" used in the engineering literature.

3.3. Third approach

In contrast with the preceding methods, we now observe the trajectories on a large time interval. This means that we now consider a family of empirical processes on $\Omega$ constructed from the Markov family $(P_{0,x})$ by time averaging on $[0, t]$ of the trajectories starting at point $x$. To be more precise, for every $t > 0$ and $\omega \in \Omega$, let us define the process $R_{t,\omega}$ on $\Omega$ by $R_{t,\omega}(A) = \frac{1}{t}\int_0^t 1_A(\theta_s\omega)\, ds$ for all $A \in \mathcal{F}_0^{\infty}$.

For every $t > 0$, $\omega \mapsto R_{t,\omega}$ is a measurable map from $\Omega$ to $M_1(\Omega_0^+)$, the set of all probability measures on $\mathcal{F}_0^{\infty}$. For all $t > 0$ and $x \in E$, this map gives rise to a probability measure $\Gamma_{t,x}$ on $M_1(\Omega_0^+)$ defined by $\Gamma_{t,x}(B) = P_{0,x}\{\omega \in \Omega:\ R_{t,\omega} \in B\}$ for any measurable $B \subset M_1(\Omega_0^+)$. If the Markov process $(P_{0,x})$ is ergodic with invariant probability measure $\mu$ on $E$, and if $P_\mu \in M_S(\Omega)$, the space of all stationary probability distributions on $\Omega$, is the stationary Markov process with marginal $\mu$, then, by the ergodic theorem, $\Gamma_{t,x}$ converges to $\delta_{P_\mu}$ for all $x$ as $t \to \infty$. Varadhan [6] proved that the weak large deviations principle holds for this problem with rate function $H(Q)$, the Donsker-Varadhan entropy of $Q$ with respect to the Markov process $P$. This means that, for any stationary probability distribution $Q$ such that $H(Q; P) < +\infty$, if $N(Q)$ is a closed convex neighborhood of $Q$ in $M_S(\Omega)$ such that $\{Q'_0 = Q'(\omega(0) \in \cdot);\ Q' \in N(Q)\}$ is compact in $M_1(E)$ and $\inf\{H(Q');\ Q' \in N(Q)\} \ge H(Q) - \varepsilon$, then, as $t \to \infty$,

$$\exp(-t H(Q) + o(t)) \le P_\mu(R_t \in N(Q)) \le \exp(-t (H(Q) - \varepsilon) + o(t)). \qquad (16)$$

This relation can be interpreted as follows: the stationary Gaussian probability distribution $\bar{Q}$ which has the highest probability to be a realization of the empirical measure $R_{t,\omega}$ is a solution of the following variational problem:

$$H(\bar{Q}) = \inf\{H(Q):\ Q \text{ stationary Gaussian}\}. \qquad (17)$$

For any Gaussian probability distribution $\nu$ on $E$, define:

$$S_G(\nu) = \{Q \in M_S:\ Q \text{ Gaussian},\ \mu_Q = \nu\},$$
$$J_G(\nu) = \inf\{H(Q);\ Q \in S_G(\nu)\}\ \ (= +\infty \text{ if } S_G(\nu) \text{ is empty}), \qquad (18)$$

and

$$E_3(k, c, m) = H(Q_{k,c,m}) + \frac{1}{2\sigma^2}\int \big[f(x, y) - (k(x - m) + cy)\big]^2\, \nu_{k,c,m}(dx, dy).$$

The next result can be proved in the same way as Theorem 2:

Theorem 4. Assume (H1) and (H2). Then:

(1) $J_G(\nu) < \infty$ if and only if the Gaussian probability distribution $\nu$ has a diagonal covariance matrix

$$\begin{pmatrix} \sigma^2/2kc & 0 \\ 0 & \sigma^2/2c \end{pmatrix},$$

with $k > 0$, $c > 0$ and $\int y\, d\nu = 0$. Denote $\nu = \nu_{k,c,m}$, where $m = \int x\, d\nu$.

(2) Let $(k, c, m)$ be a solution of the minimization problem $\inf\{E_3(k, c, m):\ k > 0, c > 0, m \in \mathbb{R}\}$; $Q_{k,c,m}$ is then a solution of Eq. (17).

We can then conclude that the variational problem Eq. (17) has a solution $Q$ if and only if the problem $\inf\{E_3(k, c, m):\ k > 0, c > 0, m \in \mathbb{R}\}$ has a solution; the same holds for uniqueness.

Remark. This third approach is what the "Gaussian" linearization method should be. All computations, from the beginning (the criterion), are made with the Gaussian probability $\nu_{k,c,m}$. This implies that when deriving the first order optimality conditions from the criterion $E_3(k, c, m)$, the measure $\nu_{k,c,m}$ has to be differentiated as well, which is not done in the classical Gaussian linearization approach.

3.4. Fourth approach

It is very similar to the third one, just exchanging the roles of $P$ and $Q$. In fact, $Q$ is also Markov, and one can define the Donsker-Varadhan entropy of the stationary process $P_\mu$ with respect to the Markov process $Q_{k,c,m}$. Starting from the Markov model $Q_{k,c,m}$ defined by Eq. (14), one can choose $(k, c, m)$ such that $Q_{k,c,m}(R_t \in N(P_\mu))$ is asymptotically the greatest as $t \to \infty$, where $N(P_\mu)$ is a neighborhood of $P_\mu$ which is small enough in $M_1(\Omega)$.

As a consequence of the Donsker-Varadhan large deviations theory,

$$Q_{k,c,m}(R_t \in N(P_\mu)) = \exp\{-t\,(H(P_\mu; Q_{k,c,m}) \pm \varepsilon) + o(t)\},$$

and the variational problem formulation is now to minimize $H(P_\mu; Q_{k,c,m})$ with respect to $k > 0$, $c > 0$, $m \in \mathbb{R}$.

From Lemma 2,

$$H(P_\mu; Q_{k,c,m}) = \frac{1}{2\sigma^2}\int \big[f(x, y) - (k(x - m) + cy)\big]^2\, \mu(dx, dy) = E_4(k, c, m). \qquad (19)$$

Let us notice that this last approach yields exactly the engineers' true equivalent linearization. There is existence and uniqueness of the solution $(\bar{k}, \bar{c}, \bar{m})$, which can be expressed by the formulas:

$$\bar{k} = \int (x - \bar{m}) f(x, y)\, \mu_P(dx, dy) \Big/ \int (x - \bar{m})^2\, \mu(dx, dy),$$
$$\bar{c} = \int y f(x, y)\, \mu_P(dx, dy) \Big/ \int y^2\, \mu(dx, dy),$$
$$\bar{k} \int (x - \bar{m})\, d\mu = \int f(x, y)\, d\mu. \qquad (20)$$

In the case where $f$ is skew-symmetric, we obtain $\bar{m} = 0$, recovering the results of Eq. (4).
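Since $\mu$ is unknown in practice, the coefficients of Eq. (20) can only be estimated, for instance from a long simulated stationary trajectory. The sketch below is an illustration, not the paper's procedure: it replaces the integrals of Eq. (20) with empirical averages, and the fixed-point sweep on $m$ is an assumed numerical strategy.

```python
import numpy as np

def true_linearization(samples, f):
    """Estimate (k, c, m) of Eq. (20) by replacing the stationary measure mu with
    the empirical measure of `samples`, an array of shape (N, 2) of (x, y) values."""
    x, y = samples[:, 0], samples[:, 1]
    fxy = f(x, y)
    c = np.mean(y * fxy) / np.mean(y**2)
    # Couple k = E[(x - m) f] / E[(x - m)^2] with k E[x - m] = E[f] through a simple
    # fixed-point sweep on m (an assumed, purely illustrative numerical strategy).
    m = np.mean(x)
    for _ in range(200):
        k = np.mean((x - m) * fxy) / np.mean((x - m)**2)
        m = np.mean(x) - np.mean(fxy) / k
    return k, c, m

# Example with a hardening drift f(x, y) = (x + 10 x^3) + 0.05 y and `samples`
# produced by the Euler-Maruyama sketch above (all values are assumptions):
# k, c, m = true_linearization(samples, lambda x, y: (x + 10.0 * x**3) + 0.05 * y)
```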

4. The two-wells Duffing oscillator

The dynamical equation of this oscillator is:

$$\ddot{x}_t + c\dot{x}_t + k(-x_t + \lambda x_t^3) = \sigma \dot{W}_t, \qquad (21)$$

where $c$ is the damping, $\lambda$ is a non-negative constant, $g(x) = k(-x + \lambda x^3)$ is the non-linear stiffness and $\dot{W}_t$ is a standard white noise.

This oscillator has three static equilibria: one unstable equilibrium at $x = 0$ and two stable equilibria at $x_c = \pm 1/\sqrt{\lambda}$. The corresponding potential energy function has two potential wells, symmetrically disposed about $x = 0$, with minima at $x_c = \pm 1/\sqrt{\lambda}$.

The existence and uniqueness of a stationary solution can be proved by using Lyapunov functions. In the phase space, the exact invariant measure has a probability density with respect to the Lebesgue measure:

$$\Phi(x, y) = C \exp\!\left(-\frac{2c}{\sigma^2}\left[k\left(\frac{\lambda x^4}{4} - \frac{x^2}{2}\right) + \frac{y^2}{2}\right]\right). \qquad (22)$$

Fig. 1(a) and Fig. 2(a) show a sample trajectory of the stationary response and the invariant probability density for a weak excitation with intensity $\sigma = 0.04$. Fig. 1(b) and Fig. 2(b) show the same characteristics for a high excitation with intensity $\sigma = \sqrt{0.1} \approx 0.3162$.

At very low levels of excitation the mean duration time in a potential well is large (Fig. 1(a)) and the invariant probability density presents two bumps centered about the stable equilibrium static positions
(Fig. 2(a)). However, these bumps tend to disappear as the level of excitation increases (Fig. 2(b)). An important change of behavior is observed (Fig. 1).

Hence, even if the statistical linearization method can be applied for high level excitation, it fails to represent the particular behavior in the neighborhood of the static equilibrium positions for weak noise.
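The behavior just described can be quantified directly from Eq. (22): integrating out $y$ leaves a displacement density proportional to $\exp\!\big(-(2ck/\sigma^2)(\lambda x^4/4 - x^2/2)\big)$. The sketch below uses the parameter values of the numerical example in Section 5.1 (the grid and the comparison criterion are illustrative choices) and compares the density at a well with the density at the origin for the two excitation levels of Figs. 1 and 2.

```python
import numpy as np

def x_marginal(k=1.0, c=0.05, lam=10.0, sigma=0.04, x=np.linspace(-1.5, 1.5, 3001)):
    """Stationary displacement density of Eq. (21): integrating y out of Eq. (22)
    leaves p(x) proportional to exp(-(2 c k / sigma^2)(lam x^4 / 4 - x^2 / 2))."""
    logp = -(2.0 * c * k / sigma**2) * (lam * x**4 / 4.0 - x**2 / 2.0)
    p = np.exp(logp - logp.max())          # stabilize before normalizing
    p /= p.sum() * (x[1] - x[0])
    return x, p

for s in (0.04, np.sqrt(0.1)):             # the two excitation levels of Figs. 1 and 2
    x, p = x_marginal(sigma=s)
    at_well = p[np.argmin(np.abs(x - 1.0 / np.sqrt(10.0)))]
    at_origin = p[np.argmin(np.abs(x))]
    print(f"sigma = {s:.4f}: p(well) / p(0) = {at_well / at_origin:.3f}")
```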

5. Statistical linearization

In order to apply the statistical linearization technique to the Duffing oscillator Eq. (21), Roberts and Spanos [5] considered the following linear oscillator:

$$\ddot{x}_t + c\dot{x}_t + k_{eq}(x_t - m) = \sigma \dot{W}_t, \qquad (23)$$

where $k_{eq}$ and $m$ are unknown constants. It is clear that the stationary solution $x_t$ of Eq. (23) is such that $E[x_t] = m$. The parameters $k_{eq}$ and $m$ can be determined by minimizing:

$$J(k_{eq}, m) = E\big\{\big[k(\lambda x_t^3 - x_t) - k_{eq}(x_t - m)\big]^2\big\}. \qquad (24)$$

The necessary conditions for $J$ to be minimum are:

$$-2E\big[\{k(-x_t + \lambda x_t^3) - k_{eq}(x_t - m)\}(x_t - m)\big] = 0,$$
$$2k_{eq}E\big[k(-x_t + \lambda x_t^3) - k_{eq}(x_t - m)\big] = 0. \qquad (25)$$

Introducing at this step of the computation the assumption that $x_t$ is a solution of the linear system Eq. (23), the following equivalent system of equations is obtained:

$$\tilde{\sigma}^2\big(-1 + 3(\tilde{\sigma}^2 + \tilde{m}^2)\big) = \beta, \qquad \tilde{m}\big(-1 + 3\tilde{\sigma}^2 + \tilde{m}^2\big) = 0, \qquad (26)$$

where $\tilde{m} = \sqrt{\lambda}\, m$, $\tilde{\sigma} = \sqrt{\lambda}\, \sigma_y$, $\sigma_y$ is the standard deviation of the stationary displacement, and $\beta = \lambda\sigma^2/2kc$ is a dimensionless parameter which measures the strength of the excitation. The solution depends on the level of the excitation through the coefficient $\beta$. Three solutions are obtained (Table 1).

Table 1
Solution number   Range of validity      $\tilde{\sigma}^2$             $\tilde{m}^2$
1                 $0 < \beta < 1/6$      $(1 + \sqrt{1 - 6\beta})/6$    $(1 - \sqrt{1 - 6\beta})/2$
2                 $0 < \beta < 1/6$      $(1 - \sqrt{1 - 6\beta})/6$    $(1 + \sqrt{1 - 6\beta})/2$
3                 $0 < \beta < \infty$   $(1 + \sqrt{1 + 12\beta})/6$   $0$

Fig. 1. (a) Sample trajectory of the stationary response for the parameters $k = 1$, $c = 0.05$, $\lambda = 10$ and $\sigma^2 = 16\times 10^{-4}$; (b) sample trajectory of the stationary response for the parameters $k = 1$, $c = 0.05$, $\lambda = 10$ and $\sigma^2 = 0.1$.

Fig. 2. (a) Invariant probability density of the Duffing oscillator for the parameters $k = 1$, $c = 0.05$, $\lambda = 10$ and $\sigma^2 = 16\times 10^{-4}$; (b) invariant probability density for the parameters $k = 1$, $c = 0.05$, $\lambda = 10$ and $\sigma^2 = 0.1$.
Let us note that the solutions obtained are not necessarily minima, and $\beta_c = 1/6$ appears to be a bifurcation value. The second order conditions cannot be checked properly, since the law of the linear system would have to be used to calculate the Hessian. The only possible way is to verify by direct calculation whether the solutions obtained are real minima.

5.1. Numerical example

The parameters of the Duffing oscillator considered here are given by Eq. (27):

$$k = 1, \quad \lambda = 10, \quad c = 0.05, \quad \sigma = 0.04. \qquad (27)$$

In this case $\beta < 1/6$ and, according to the results above, the statistical linearization method provides the three solutions illustrated in Table 2. The criterion $J(k, m)$ is given by Eq. (24).

Table 2
Solution number   $k$       $m$       $J(k, m)$
1                 0.8       0.2       0.0778
2                 1.2       0.2449    0.1927
3                 0.3544    0         0.0234

As shown in Table 2, at most one of these can be an actual solution. The other solutions are parasitic; they are due to the fact that the law of the equivalent linear system is used to evaluate the moments of the true solution during the calculation, whereas the first order conditions are expressed by means of the true solution. This shows that the "Gaussian" linearization is incorrect, a fact that was also recently observed through numerical computations by Grundmann et al. in 1996.

6. Local linearization

6.1. Presentation of the method

A complete description of this method can be found in [1]. The purpose is to determine a locally linear oscillator with random coefficients, which coincides, over the domain attracted to each equilibrium position, with a linearization in the neighborhood of this equilibrium position.

Let us start with some generalities concerning the perturbation problem:

$$dX_{1,\epsilon}(t) = b_1(X_\epsilon)\,dt, \qquad dX_{2,\epsilon}(t) = b_2(X_\epsilon)\,dt + \epsilon\sigma(X_\epsilon)\,dW(t), \qquad (28)$$

where $X_\epsilon = (X_{1,\epsilon}, X_{2,\epsilon}) \in \mathbb{R}^n$, $b = (b_1, b_2) \in \mathbb{R}^n$, $b_2$ and $X_{2,\epsilon} \in \mathbb{R}^q$ ($q < n$), $\sigma(x)$ is a matrix-valued function such that $A = \sigma\sigma^T \in \mathbb{R}^{q\times q}$ is uniformly positive definite, and $W(t)$ stands for an $m$-dimensional standard Wiener process. The parameter $\epsilon$ is introduced to characterize the smallness of the perturbations.

We associate to this system the unperturbed one:

$$dX(t) = b(X(t))\,dt. \qquad (29)$$

The large deviations theory provides results on the behavior of the perturbed system Eq. (28) over large time in the form of deviations as $\epsilon \to 0$ [3]; that is, asymptotics of the probabilities of events such as ($\delta > 0$):

$$\Big\{\sup_{0 \le t \le T} |X_\epsilon(t) - \phi(t)| < \delta\Big\}, \qquad (30)$$

where $\phi$ is a trajectory in $C_{ac}([0, T], \mathbb{R}^n)$, the space of absolutely continuous $\mathbb{R}^n$-valued functions over the time interval $[0, T]$. The action functional $S_{x,T}$ is defined over the space $C_{ac}([0, T], \mathbb{R}^n)$ as follows:

$$S_{x,T}(\phi) = \int_0^T L(\phi(t), \dot{\phi}(t))\,dt, \qquad (31)$$

where $\phi(0) = x$ and the Lagrangian $L$ has the form:

$$L(x, \beta) = \begin{cases} \tfrac{1}{2}\,(\beta_2 - b_2(x))^T A^{-1}(x)\,(\beta_2 - b_2(x)) & \text{if } \beta_1 = b_1(x), \\ +\infty & \text{otherwise.} \end{cases} \qquad (32)$$

Let $O \in \mathbb{R}^n$ be an asymptotically stable equilibrium position of the unperturbed system, i.e., for every neighborhood $D_1$ of $O$ let there exist a smaller neighborhood $D_2$ such that the trajectories of system Eq. (29) starting in $D_2$ converge to the equilibrium position $O$ without leaving $D_1$ as $t \to \infty$.

We say that $D$ is attracted to $O$ if the trajectories $x_t(x)$, $x \in D$, converge to the equilibrium position $O$ without leaving $D$ as $t \to \infty$.

The quasipotential of the dynamical system Eq. (28) with respect to the point $O$ is the function $V(O, x)$ defined by

$$V(O, x) = \inf_{\phi \in C_{ac}([0, T], \mathbb{R}^n),\ T > 0} \{S_{O,T}(\phi):\ \phi(0) = O,\ \phi(T) = x\}. \qquad (33)$$

We have $V(O, x) \ge 0$, $V(O, O) = 0$, and the function $V(O, x)$ is continuous. The quasipotential $V(O, x)$ is a measure of the difficulty of large deviations of the process $X_\epsilon$ from the equilibrium position $O$ in the presence of random perturbations. Hence it characterizes the stability of $O$.

The asymptotics of the probability density $\Phi_\epsilon(x)$ of the invariant measure of $X_\epsilon(t)$ is given by

$$\Phi_\epsilon(x) = \exp\big(-\epsilon^{-2}V(O, x) + o(\epsilon^{-2})\big). \qquad (34)$$

Let us note that the quasipotential can be obtained by solving a Hamilton-Jacobi equation:

$$\sum_i b_i\,\partial_i V + \frac{1}{2}\sum_{i,j} a_{ij}\,(\partial_i V)(\partial_j V) = 0, \qquad V(O) = 0 \ \text{and}\ V(x) > 0 \ \text{for}\ x \ne O. \qquad (35)$$

Concerning a stochastic oscillator with non-linear stiffness, the following result is available [1]:

Proposition 6.1. Let us consider the non-linear oscillator

$$dx_t = y_t\,dt, \qquad dy_t = -(cy_t + g(x_t))\,dt + \epsilon\sigma\,dW_t, \qquad (36)$$

where $\sigma$ is constant and $g(0) = 0$ is such that the primitive $G(x)$ of $g(x)$ vanishing at $x = 0$ fulfills $G(x) > 0$ for $x \ne 0$. Then the quasipotential with respect to the equilibrium position $O = (0, 0)$ is the Hamiltonian of the system multiplied by the constant $2c$.

Let us now examine the situation when the unperturbed dynamical system admits multiple asymptotically stable equilibrium positions $x_i$. Local quasipotentials can be defined as follows:

$$V_{x_i}(y) = \inf_{\phi \in C_{ac}} \{S_{x_i,T}(\phi):\ \phi(0) = x_i,\ \phi(T) = y\}. \qquad (37)$$

The global quasipotential $V(y)$ can be constructed from the local quasipotentials by the formula:

$$V(y) = \min_i V_{x_i}(y). \qquad (38)$$

Let us apply this to the two-wells Duffing oscillator. Over $D(x_i)$, the domain attracted to the asymptotically stable equilibrium position $x_i$, we substitute for the non-linear solution $x_t$ the solution $x_t^i$ of the linear system

$$\ddot{x}_t^i + c_i\dot{x}_t^i + k_i(x_t^i - m_i) = \sigma \dot{W}_t, \qquad (39)$$

where $c_i, k_i > 0$ and $m_i \in D(x_i)$ are unknown constants to be determined. The linear system Eq. (39) admits an asymptotically stable equilibrium position at $x = m_i$. Its quasipotential $\tilde{V}_i$ must be close to the local quasipotential of the non-linear system with respect to the $i$th equilibrium position.

Let us note that the quasipotential of the linear system Eq. (39) may easily be determined by solving the Hamilton-Jacobi Eq. (35). That is, in the phase space, we have

$$\tilde{V}_i(x, y) = c_i\big(y^2 + k_i(x - m_i)^2\big). \qquad (40)$$

Furthermore, the global quasipotential $V(x, y)$ of the non-linear system may be approximated by

$$V(x, y) = \min_i \tilde{V}_i(x, y) = \min_i\big\{c_i\big(y^2 + k_i(x - m_i)^2\big)\big\}. \qquad (41)$$

This leads to an estimation of the invariant probability density $\tilde{\Phi}_\epsilon(x, y)$ of the non-linear system with logarithmic precision as $\epsilon \to 0$:

$$\tilde{\Phi}_\epsilon(x, y) = K_\epsilon \exp\big(-\epsilon^{-2}V(x, y)\big), \qquad (42)$$

where the constant $K_\epsilon$ is determined by a normalization condition.

The expression of $\tilde{\Phi}_\epsilon$ permits us to determine the form of the equivalent system (Section 6.2). The parameters $c_i$, $k_i$ and $m_i$ are determined by minimizing the entropy between the invariant measure of the non-linear system and that of the equivalent system (Bernard and Wu, 1994).

6.2. Application

In this section our interest is particularly drawn to the dynamics approximation of the following weakly perturbed two-wells Duffing oscillator:

$$\ddot{x}_t + c_0\dot{x}_t + k_0(-x_t + \lambda x_t^3) = \epsilon \dot{W}_t, \qquad (43)$$

where $\epsilon \ll 1$. Since the potential energy function of the unperturbed system is symmetric, and using the results of Section 6.1, the local quasipotentials with respect to the two asymptotically stable equilibrium positions are approximated by:

$$V_1(x, y) = c\big(y^2 + k(x + m)^2\big), \qquad V_2(x, y) = c\big(y^2 + k(x - m)^2\big). \qquad (44)$$

The global quasipotential is

$$V(x, y) = c\big(y^2 + k(x - m\,\mathrm{sign}(x))^2\big). \qquad (45)$$

Formula Eq. (42) becomes

$$\Phi_\epsilon(x, y) = K_\epsilon \exp\big(-\epsilon^{-2}c\big(y^2 + k(x - m\,\mathrm{sign}(x))^2\big)\big), \qquad (46)$$

where $K_\epsilon$ is determined by a normalization condition and satisfies

$$K_\epsilon \sim \frac{c\sqrt{k}}{2\pi\epsilon^2} \quad \text{as } \epsilon \to 0. \qquad (47)$$

The invariant probability density Eq. (46) is that of the following locally linear oscillator:

$$\ddot{x}_t + c\dot{x}_t + k\big(x_t - m\,\mathrm{sign}(x_t)\big) = \epsilon \dot{W}_t. \qquad (48)$$

In order to determine the parameters $k$, $c$ and $m$, we minimize the entropy $h(\nu_{k,c,m}; \mu_P)$ between the invariant measure $\nu_{k,c,m}$ of the oscillator Eq. (48) and the invariant measure $\mu_P$ of the non-linear oscillator Eq. (43). A unique solution is found:

$$c = c_0, \qquad k = \frac{\epsilon^2}{2c_0}\,\frac{1}{E\big[(x - m\,\mathrm{sign}(x))^2\big]}, \qquad m = E[|x|]. \qquad (49)$$
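The expectations in Eq. (49) involve only the displacement, whose exact stationary density is known from Eq. (22) with $\sigma = \epsilon$, so the parameters can be evaluated by one-dimensional quadrature. The sketch below does this with the parameter values of the numerical example; taking the expectations under the exact invariant measure $\mu_P$ is an assumed reading of Eq. (49), and the grid is an illustrative choice.

```python
import numpy as np

def local_linearization_params(k0=1.0, c0=0.05, lam=10.0, eps=0.04,
                               x=np.linspace(-2.0, 2.0, 4001)):
    """Evaluate Eq. (49): c = c0, m = E|x|, k = eps^2 / (2 c0 E[(x - m sign x)^2]),
    with the expectations taken under the exact invariant x-marginal of Eq. (43)
    (an assumed reading of Eq. (49))."""
    dx = x[1] - x[0]
    logp = -(2.0 * c0 * k0 / eps**2) * (lam * x**4 / 4.0 - x**2 / 2.0)
    p = np.exp(logp - logp.max())
    p /= p.sum() * dx                                   # normalized displacement density
    m = np.sum(np.abs(x) * p) * dx                      # m = E[|x|]
    msd = np.sum((x - m * np.sign(x))**2 * p) * dx      # E[(x - m sign x)^2]
    return eps**2 / (2.0 * c0 * msd), c0, m

print(local_linearization_params())   # Table 3 of the paper reports k = 1.3681, m = 0.2683
```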

6.3. Numerical results

In this section, numerical applications involving the proposed local linearization method are examined, and a comparison with the Gaussian statistical linearization is presented. The parameters of the studied Duffing oscillator are the same as those considered in Section 5.1. The results obtained by the two methods are given in Table 3.

Table 3
Method                               $k$       $m$
Gaussian statistical linearization   0.3544    0
Local linearization                  1.3681    0.2683

Fig. 3 shows the exact invariant probability density of the displacement, the invariant probability density obtained by the Gaussian statistical linearization, and that obtained by the local linearization. We can see that the local linearization method provides a quite acceptable result, whereas the Gaussian statistical linearization provides a probability density centered about the origin. A sample trajectory of the stationary response of the equivalent locally linear system with random parameters obtained by our approach is shown in Fig. 4. The particular behavior of the system in the neighborhood of the equilibrium positions is preserved.

Fig. 3. Invariant probability density of displacement: exact (solid line), local linearization (dashed line) and Gaussian linearization (dash-dotted line).

Fig. 4. A sample trajectory of the stationary response of the equivalent locally linear system.

7. Conclusion

The results of [2], establishing mathematical proofs for some linearization methods, were summarized. Among these, the "true" linearization method was justified by a different approach than Kozin's [4].

Concerning the popular "Gaussian" linearization, it was established that this method has no justification, and that it can give rise to strange results, for example in the case of the two-wells Duffing oscillator. The point is that one should use the Gaussian probability from the beginning of the computations, and that, as this probability depends on the parameters, it should be differentiated when expressing the first order optimality conditions.


Following [1], a local linearization approach was introduced with a view to approximating the dynamics of stochastically perturbed systems with several equilibria. Numerical results show good agreement with simulation results.

References

[1] Alaoui, M., Bernard, P., Asymptotic Analysis and Linearization of the Randomly Perturbated Two-wells Duffing Oscillator. Prob. Eng. Mech., 12(3), pp. 171-178, 1997.

[2] Bernard, P., Wu, L., Stochastic Linearization: The Theory, to appear.

[3] Freidlin, M. I., Wentzell, A. D., Random Perturbations of Dynamical Systems. Springer-Verlag, 1984.

[4] Kozin, F., The Method of Statistical Linearization for Non-Linear Stochastic Vibrations. In Nonlinear Stochastic Dynamic Engineering Systems, ed. F. Ziegler, G. I. Schuëller. Springer Verlag, 1987.

[5] Roberts, J. B., Spanos, P. D., Random Vibration and Statistical Linearization. J. Wiley & Sons, 1990.

[6] Varadhan, S. R. S., Large Deviations and Applications, Vol. 46. SIAM Publications, 1984.
