
Lecture: EPS 236, 17 Sep 2012


Page 1: Lecture: EPS 236,  17 Sep 2012

Lecture: EPS 236, 17 Sep 2012

This lecture:
•Autoregressive time series; variance; relationship to a Markov process. [Examples: 1st-order cases] [autoregressive_var.r]

Coming next:

•Fitting lines to data—ordinary least squares
•Why ordinary least squares is often incorrect as applied to our data sets [ols.r]


Page 2: Lecture: EPS 236,  17 Sep 2012

N = 1000
tt = 1:N
k = .10
theta = exp(-k)
a = rnorm(N)

quartz()  # X11()
plot(tt, a)

x = 0
for(i in 1:N) { x = c(x, x[i]*theta + a[i]) }

# var(x) = 5.5 ; var(a) = .92 ; 1/(1 - θ^2) = 5.5
X11()
plot(c(0,tt), x, type="l")

X11()
plot(c(0,tt), x, type="l", xlim=c(100,200), ylim=c(-5,5), err=-1)
axis(side=3, col="red", at=seq(100,200,10), tck=.1, labels=F)
abline(v=seq(100,200,10), lty=2, col="red")

θ=.904

Discrete 1st order Markov process

Note the larger variance of the autocorrelated time series, and the prevalence of the decorrelation time (1/k = 10 yr).
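The variance quoted in the code comment (var(x) ≈ 5.5) follows from the AR(1) relation var(x) = var(a)/(1 − θ²). A minimal sketch checking this numerically; it uses a longer series than the slide's N = 1000 to tighten the estimate:

```r
# AR(1): x[i+1] = theta*x[i] + a[i], with white noise a ~ N(0, 1).
# Stationary variance: var(x) = var(a) / (1 - theta^2).
set.seed(1)
N     <- 200000                 # longer than the slide's N = 1000
k     <- 0.10
theta <- exp(-k)                # = 0.904..., as on the slide
a     <- rnorm(N)
x     <- numeric(N)
for (i in 2:N) x[i] <- theta * x[i - 1] + a[i - 1]

empirical   <- var(x)
theoretical <- 1 / (1 - theta^2)   # var(a) = 1 here
c(empirical = empirical, theoretical = theoretical)
```

With k = 0.1 the theoretical value is 1/(1 − e^(−0.2)) ≈ 5.52, matching the ≈5.5 reported on the slide.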

Page 3: Lecture: EPS 236,  17 Sep 2012

p(x) = 2/(σ√π) exp(−x²/σ²)

Prob(x < X) = 2/(σ√π) ∫₀^X exp(−t²/σ²) dt

[Figure: the density p(x), divided at x = X into Prob(x < X) and Prob(x > X), with X in units of σ.]
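With this normalization the cumulative probability reduces to the error function, Prob(x < X) = erf(X/σ). Base R has no erf, but it can be written via pnorm; a sketch checking the closed form against direct quadrature (the values of σ and X here are arbitrary illustrations, not from the slide):

```r
# Density from the slide: p(x) = 2/(sigma*sqrt(pi)) * exp(-x^2/sigma^2), x >= 0.
# Its cumulative probability is Prob(x < X) = erf(X/sigma).
# Base R lacks erf; it follows from pnorm via erf(z) = 2*pnorm(z*sqrt(2)) - 1.
erf <- function(z) 2 * pnorm(z * sqrt(2)) - 1

sigma <- 1.5                      # illustrative scale
p     <- function(t) 2 / (sigma * sqrt(pi)) * exp(-t^2 / sigma^2)

X <- 2.0                          # illustrative threshold
closed.form <- erf(X / sigma)
numerical   <- integrate(p, 0, X)$value   # direct quadrature as a check
c(closed.form = closed.form, numerical = numerical)
```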

Page 4: Lecture: EPS 236,  17 Sep 2012


Page 5: Lecture: EPS 236,  17 Sep 2012


Page 6: Lecture: EPS 236,  17 Sep 2012

tr1 = function(x, p1) {
  if(x > p1) ans = 1 else ans = 0
  ans
}

# symmetrical 2-state markov chain; eqm => equal pops.
# transition probability matrix --
markov2 = function(nnn, p1=.3) {  # 2-state markov process
  dum = runif(nnn+1)  ## not rnorm
  state = 1 + as.numeric(rnorm(1) > 0)  # random initial state
  state.cum = NULL
  for(ii in dum) {
    if(state == 1) { state = state + tr1(ii, p1) }
    if(state == 2) { state = state - 1 + tr1(ii, p1) }
    state.cum = c(state.cum, state)
  }
  state.cum1 = state.cum[(1:nnn) + 1]
  dx = state.cum1 - state.cum[1:nnn]  # detect if we have a transition
  pp = c(sum(state.cum[1:nnn]==1 & dx==0), sum(dx==1), sum(dx==-1),
         sum(state.cum[1:nnn]==2 & dx==0))
  pz = c(pp[1]/(pp[1]+pp[2]), pp[2]/(pp[1]+pp[2]),
         pp[3]/(pp[3]+pp[4]), pp[4]/(pp[3]+pp[4]))
  ans = c(p1, nnn, pp, pz)
  names(ans) = c("prob","n","n11","n12","n21","n22","p11","p12","p21","p22")
  ans
}
#
markov.res = NULL
for(nn in c(rep(10,3000), rep(25,1000), rep(50,500), rep(250,100),
            rep(1000,60), rep(5000,40))) {
  markov.res = rbind(markov.res, markov2(nn))
}
zum = tapply(markov.res[,"p11"] - .3, markov.res[,"n"], var, na.rm=T)
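The tapply at the end measures how the sampling variance of the estimated transition probability p11 shrinks with chain length. The same 1/n behavior can be seen in a stripped-down Bernoulli model of the stay/leave decisions from one state (an illustration of the idea, not the slide's two-state simulation):

```r
# Each visit to state 1 is a Bernoulli trial: stay with prob p, leave with prob 1-p.
# The estimate phat = (# stays)/(# visits) then has variance about p*(1-p)/n.
set.seed(2)
p <- 0.3
est.var <- function(n, reps = 4000) {
  phat <- replicate(reps, mean(runif(n) < p))  # n visits per simulated chain
  var(phat)
}
ns   <- c(10, 50, 250, 1000)
vars <- sapply(ns, est.var)
rbind(observed = vars, binomial = p * (1 - p) / ns)  # ~1/n falloff in both rows
```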

[Diagram: two-state Markov chain with states 1 and 2; self-transition probabilities p11, p22; cross-transition probabilities p12, p21.]
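The transition probabilities in the diagram can be collected into a matrix, and the equilibrium populations read off as the left eigenvector of P with eigenvalue 1; for a symmetric choice p12 = p21 the populations come out equal. A generic sketch (the numerical probabilities below are illustrative, not taken from the slide):

```r
# Two-state transition matrix: rows are "from", columns are "to"; rows sum to 1.
# Symmetric illustrative choice: p12 = p21 = 0.3.
P <- matrix(c(0.7, 0.3,
              0.3, 0.7), nrow = 2, byrow = TRUE)

# The equilibrium populations pi satisfy pi %*% P = pi:
# the left eigenvector of P with eigenvalue 1, normalized to sum to 1.
e       <- eigen(t(P))
pi.stat <- Re(e$vectors[, 1]) / sum(Re(e$vectors[, 1]))
pi.stat  # equal populations, 0.5 and 0.5, for the symmetric case
```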