Repeated measures design (16.6 OLRT)
STAT:5201
Week 15 - Lecture 1
1 / 51
Comparison and contrast to a Type I split-plot
As mentioned earlier, some repeated measures designs may be analyzed essentially the same as a Type I split-plot.
Example: (a modification of OLRT p. 438)
To study the effects of infant formula on growth, 30 infants are recruited and 10 are randomly assigned to each of 3 distinct formulas. Weight is measured twice: at 1 week of age and again at 4 weeks.
An infant is nested in a formula. An infant is observed over both levels of time (1 week and 4 weeks).
Researchers are interested in a formula effect, a time effect, and/or an interaction between formula and time.
2 / 51
Comparison and contrast to a Type I split-plot
Consider the model that allows for correlation due to repeated measures on the same subject:
Yijk = µ + αi + Sj(i) + βk + (αβ)ik + εijk

Sj(i) iid∼ N(0, σ²S(A)), εijk iid∼ N(0, σ²), with the random effects independent of each other,

i = 1, 2, 3; j = 1, ..., 10; k = 1, 2.
The above model looks the same as the model fitted to Lenth’s split-plot laundry example (temp/fabric) we saw earlier.
However, this is not a ‘true’ split-plot because age was not randomized; indeed, age cannot be randomized. A split-plot design has two sizes of units and two randomizations. This experiment has two sizes of units, but only one randomization.
3 / 51
Comparison and contrast to a Type I split-plot
Inclusion of random effects induces a correlation structure in the data.
This model coincides with a block diagonal correlation structure as below:
Because there were only 2 repeat measurements per subject, the analysis (or correlation structure) is the same as in the split-plot example.
4 / 51
Comparison and contrast to a Type I split-plot
So, when is the analysis of a repeated measures design different from a split-plot? (HINT: more time points)
Let’s now consider the experiment on infant formula but take measurements at birth, 1 week, 2 weeks, 3 weeks, and 4 weeks (i.e. at 5 time points).
If we assume all 5 observations on a subject have the same correlation strength, then the covariance of the 5 observations would look like...
5 / 51
Comparison and contrast to a Type I split-plot
The above covariance structure can be called compound symmetry (SAS) or sphericity (SPSS), depending on the software being used.
Compound symmetry is a block-diagonal variance-covariance structure (independence between subjects).
A compound symmetry (CS) variance structure is assumed to be true for a split-plot design (with 2 randomizations).
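As a quick numeric sketch (with made-up variance components, not values from the course), the compound-symmetry block for one subject with 5 time points can be built directly:

```python
import numpy as np

def compound_symmetry(n, var_subject, var_resid):
    # CS block: var_subject on every off-diagonal entry,
    # var_subject + var_resid on the diagonal
    return var_subject * np.ones((n, n)) + var_resid * np.eye(n)

# hypothetical variance components, for illustration only
V = compound_symmetry(5, var_subject=4.0, var_resid=1.0)
rho_cs = 4.0 / (4.0 + 1.0)  # common within-subject correlation
```

Every off-diagonal entry is the same, which is exactly the "same correlation strength" assumption for the 5 repeated measurements.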
6 / 51
Modeling autocorrelation
Does this CS structure seem most appropriate for the 5-time-point data collection?
Even if we have ‘time’ as a main effect in our model (linear, quadratic, etc.), we may expect that residuals that are closer in time are more strongly correlated than those that are farther apart in time.
Can I incorporate this into my correlation structure?
NOTE: If there are three or more levels of a repeated measures factor, then the assumption of sphericity (or compound symmetry) can be evaluated.
7 / 51
Modeling autocorrelation
Yes. We can include this autocorrelation in modeling the residuals.
Mixed Model Notation: Y = Xβ + Zu + e
where u is the vector of random effects (e.g. subject effects) and e is the vector of residuals,

Var(u) = G, Var(e) = R

Var(Y) = Var(Zu + e) = ZGZ′ + R

For a true split-plot we assume Var(e) = σ²I_N, i.e. that the residuals are independent.
Autocorrelation can be modeled with R.
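A toy numeric check of this identity (assumed sizes and variances, not from the example): with 2 subjects and 3 observations each, ZGZ′ + R comes out block diagonal:

```python
import numpy as np

# two subjects, three observations each (assumed toy sizes)
n_sub, n_per = 2, 3
Z = np.kron(np.eye(n_sub), np.ones((n_per, 1)))  # subject indicator matrix (6 x 2)
G = 4.0 * np.eye(n_sub)           # Var(u): independent subject effects
R = 1.0 * np.eye(n_sub * n_per)   # Var(e) = sigma^2 I, the true split-plot case
V = Z @ G @ Z.T + R               # Var(Y): block-diagonal compound symmetry
```

Observations on the same subject share covariance 4, observations on different subjects have covariance 0, so the blocks on the diagonal are CS blocks.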
8 / 51
Modeling autocorrelation
Regardless of how we specify the correlation within a subject, we still have correlation within the subject and independence between subjects, or a block-diagonal variance-covariance structure...
We are now allowing for a wider variety of structures for W besides compound symmetry.
9 / 51
PROC MIXED: RANDOM and REPEATED
Before we use SAS to code our autocorrelation, we will consider two different ways to specify the split-plot compound symmetry structure in PROC MIXED...
The RANDOM statement specifies ‘G’, the covariance structure of the random (subject) effects.
The REPEATED statement specifies ‘R’, the covariance structure of the residuals.
10 / 51
PROC MIXED: RANDOM and REPEATED
These two fitted models are the same:
1. PROC MIXED data=dt;
     class formula subject;
     model weight=formula|time;
     random subject(formula);
     repeated / type=vc sub=subject(formula); /* default */
   run;

   /* ‘vc’ says variance components, or σ²I, is used for the residuals;
      ‘sub’ specifies the ‘whole plot’ experimental unit (EU). */

2. PROC MIXED data=dt;
     class formula subject;
     model weight=formula|time;
     repeated / type=cs sub=subject(formula);
   run;

   /* ‘cs’ says compound symmetry for the residuals on each EU */
11 / 51
PROC MIXED: RANDOM and REPEATED
Recall the earlier example with secretary nested in number of periods, but observed over both time periods.
First way to state the model, using RANDOM and G:
proc mixed data=data;
class Periods Secretary TimeDay;
model NumSorted=Periods TimeDay TimeDay*Periods/
ddfm=satterth residual;
random Secretary(Periods);
run;
12 / 51
PROC MIXED: RANDOM and REPEATED
OUTPUT FROM FIRST CODING:
Cov Parm Estimate
Secretary(Periods) 334.47
Residual 81.1333
Type 3 Tests of Fixed Effects
Num Den
Effect DF DF F Value Pr > F
Periods 2 12 9.04 0.0040
TimeDay 1 12 11.19 0.0058
Periods*TimeDay 2 12 6.28 0.0136
13 / 51
PROC MIXED: RANDOM and REPEATED
Second way to state the model using REPEATED and R:
proc mixed data=data;
class Periods Secretary TimeDay;
model NumSorted=Periods TimeDay TimeDay*Periods/
ddfm=satterth residual;
repeated /type=cs sub=Secretary(Periods);
run;
14 / 51
PROC MIXED: RANDOM and REPEATED
OUTPUT FROM SECOND CODING:
Cov Parm Subject Estimate
CS Secretary(Periods) 334.47
Residual 81.1333
Type 3 Tests of Fixed Effects
Num Den
Effect DF DF F Value Pr > F
Periods 2 12 9.04 0.0040
TimeDay 1 12 11.19 0.0058
Periods*TimeDay 2 12 6.28 0.0136
⇒ The same model is fitted by both.
15 / 51
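As a side check (not shown in the slides), the two equivalent fits imply a common within-subject correlation that can be computed from the printed variance components:

```python
# variance component estimates from the SAS output above
var_secretary = 334.47
var_resid = 81.1333

# implied correlation between the two measurements on the same secretary
rho_cs = var_secretary / (var_secretary + var_resid)  # about 0.80
```

This is the single CS correlation that the type=cs specification estimates directly.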
Modeling autocorrelation
For data collected at uniformly spaced time points, the first-order autoregressive model AR(1) is a common choice.
This structure allows observations taken closer in time to be more strongly correlated than those taken farther apart in time.
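A small sketch of this structure (ρ = 0.6 is an arbitrary illustrative value, not an estimate from the data):

```python
import numpy as np

def ar1_corr(n, rho):
    # corr(e_j, e_k) = rho ** |j - k| for equally spaced times
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

C = ar1_corr(5, 0.6)
```

Correlations decay geometrically with the time lag, unlike CS where every pair has the same correlation.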
16 / 51
Modeling autocorrelation
Including autocorrelation using the REPEATED statement:
proc mixed data=data;
class formula subject;
model weight = formula|time/
ddfm=satterth residual;
random subject(formula);
repeated /type=AR(1) sub=subject(formula);
run;
17 / 51
Example: Repeated measures for sales
Example (Sales)
Advertising Sales Example
A company is interested in comparing the success of two different advertising campaigns. It has 10 test markets, and 5 test markets will be randomly assigned to each campaign. The sales for each test market will be recorded after 1 month, 2 months, and 3 months.
Response: Sales
Factors: Campaign (1 or 2)
         Time (1, 2, 3)
         Test Market (10 unique markets)
18 / 51
Example: Repeated measures for sales
Example (Sales)
THE DATA:
The SAS System
Test
Obs Market Campaign Time Sales
1 1 1 1 958
2 1 1 2 1047
3 1 1 3 933
4 2 1 1 1005
5 2 1 2 1122
6 2 1 3 986
. . . . .
. . . . .
. . . . .
19 / 51
Example: Repeated measures for sales
Example (Sales)
Plot of the observations: Sales vs. time
NOTE: This seems quadratic in time so we will set time as a continuous variable.
20 / 51
Example: Repeated measures for sales
Example (Sales)
1) SAS statements for specifying Compound Symmetry (CS) correlation structure (no time-based autocorrelation here):
We set G = Var(u) and R = Var(e) to have variance-components structure (i.e. diagonal matrices). The overall variance-covariance structure of Y is then of compound symmetry form (block diagonal). We will request the G and R matrices from SAS. This model assumes a quadratic trend in time (a different curve for each campaign).
proc mixed data=sales plot(only)=residualpanel(conditional);
class TestMarket Campaign;
model Sales=Campaign|Time Campaign|Time*Time/
ddfm=satterth outp=outdata residual;
random Testmarket(Campaign)/ G;
repeated / type=vc sub=TestMarket(Campaign) R;
run;
21 / 51
Example: Repeated measures for sales
The above gives the same correlation structure as a split-plot analysis using ‘sphericity’ or ‘compound symmetry’ correlation.
Example (Sales)
The Mixed Procedure
Model Information
Data Set WORK.SALES
Dependent Variable Sales
Covariance Structure Variance Components
Subject Effect TestMarket(Campaign)
Estimation Method REML
Residual Variance Method Parameter
Fixed Effects SE Method Model-Based
Degrees of Freedom Method Satterthwaite
22 / 51
Example: Repeated measures for sales
Example (Sales)
Class Level Information
Class Levels Values
TestMarket 10 1 2 3 4 5 6 7 8 9 10
Campaign 2 1 2
Dimensions
Covariance Parameters 2
Columns in X 9
Columns in Z 10
Subjects 1
Max Obs Per Subject 30
Covariance Parameter Estimates
Cov Parm Subject Estimate
TestMarket(Campaign) 76284
Residual TestMarket(Campaign) 357.97
23 / 51
Example: Repeated measures for sales
Example (Sales)
Requested G matrix shown below.
G = Var(u)
24 / 51
Example: Repeated measures for sales
Example (Sales)
Requested R matrix (per subject) shown below.
R = Var(ei ) {per subject i}
25 / 51
Example: Repeated measures for sales
Example (Sales)
Fit Statistics
-2 Res Log Likelihood 273.4
AIC (Smaller is Better) 277.4
AICC (Smaller is Better) 277.9
BIC (Smaller is Better) 278.0
Type 3 Tests of Fixed Effects
Num Den
Effect DF DF F Value Pr > F
Campaign 1 9.42 0.48 0.5035
Time 1 16 155.76 <.0001
Time*Campaign 1 16 0.07 0.8007
Time*Time 1 16 173.07 <.0001
Time*Time*Campaign 1 16 0.01 0.9144
26 / 51
Example: Repeated measures for sales
Example (Sales)
1a) SAS statements for specifying Compound Symmetry (CS) correlation structure (no time-based autocorrelation here):
Same model, but specified using only the REPEATED statement and the R matrix.
proc mixed data=sales plot(only)=residualpanel(conditional);
class TestMarket Campaign;
model Sales=Campaign|Time Campaign|Time*Time/
ddfm=satterth outp=outdata residual;
repeated / type=cs sub=TestMarket(Campaign) R;
run;
27 / 51
Example: Repeated measures for sales
Example (Sales)
Covariance Parameter Estimates
Cov Parm Subject Estimate
TestMarket(Campaign) 76284
Residual TestMarket(Campaign) 357.97
R = Var(ei ) {per subject i}, shows compound symmetry.
28 / 51
Example: Repeated measures for sales
Example (Sales)
Fit Statistics
-2 Res Log Likelihood 273.4
AIC (Smaller is Better) 277.4
AICC (Smaller is Better) 277.9
BIC (Smaller is Better) 278.0
⇒ This is the same model as the first fit.
29 / 51
Example: Repeated measures for sales
Example (Sales)
2) SAS statements for first-order autoregressive correlation structure:
Same fixed effects, include the random subject effect, but also include an AR(1) correlation structure for the residuals within a TestMarket, i.e. for Var(ei).
proc mixed data=sales plot(only)=residualpanel(conditional);
class TestMarket Campaign;
model Sales=Campaign|Time Campaign|Time*Time/
ddfm=satterth residual;
random TestMarket(Campaign)/ G;
repeated / type=ar(1) sub=TestMarket(Campaign) R;
run;
NOTE: The repeated statement is different here.
30 / 51
Example: Repeated measures for sales
Example (Sales)
The Mixed Procedure
Model Information
Data Set WORK.SALES
Dependent Variable Sales
Covariance Structures Variance Components,
Autoregressive
Subject Effect TestMarket(Campaign)
Estimation Method REML
Residual Variance Method Profile
Fixed Effects SE Method Model-Based
Degrees of Freedom Method Containment
31 / 51
Example: Repeated measures for sales
Example (Sales)
Class Level Information
Class Levels Values
TestMarket 10 1 2 3 4 5 6 7 8 9 10
Campaign 2 1 2
Dimensions
Covariance Parameters 3
Columns in X 9
Columns in Z 10
Subjects 1
Max Obs Per Subject 30
32 / 51
Example: Repeated measures for sales
Example (Sales)
Covariance Parameter Estimates
Cov Parm Subject Estimate
TestMarket(Campaign) 77790
AR(1) TestMarket(Campaign) -0.7119
Residual 273.26
Requested G matrix and R matrix (per subject) shown on next page.
33 / 51
Example: Repeated measures for sales
Example (Sales)
G = Var(u)
34 / 51
Example: Repeated measures for sales
Example (Sales)
Cov Parm Subject Estimate
AR(1) TestMarket(Campaign) -0.7119
Residual 273.26
R = Var(ei ) {per subject i}
35 / 51
Example: Repeated measures for sales
Example (Sales)
The W in the ‘block diagonal’ structure for individual i:
36 / 51
Example: Repeated measures for sales
Example (Sales)
Fit Statistics
-2 Res Log Likelihood 269.5
AIC (Smaller is Better) 275.5
AICC (Smaller is Better) 276.7
BIC (Smaller is Better) 276.4
Type 3 Tests of Fixed Effects
Num Den
Effect DF DF F Value Pr > F
Campaign 1 10.1 0.45 0.5154
Time 1 8.08 97.86 <.0001
Time*Campaign 1 8.08 0.04 0.8438
Time*Time 1 8 107.03 <.0001
Time*Time*Campaign 1 8 0.01 0.9337
37 / 51
Comparing models (same fixed effects, different covariances) using fit statistics
When comparing models with different covariance structures but the same fixed effects...
If you have ‘nested’ covariance structures, you can use a likelihood ratio test:

LRT = −2 log λ = −2 log[ L(θ̂0) / L(θ̂) ] = −2 log( Likelihood_smaller / Likelihood_larger )
    = −2(LL_smaller) + 2(LL_larger)
    = [−2 Res log likelihood]_θ̂0 − [−2 Res log likelihood]_θ̂   *on SAS output
Otherwise, you can compare using fit statistics like AIC and BIC.
Example (Sales)
By looking at the AIC fit statistics (or the BIC), where smaller is better, we can see that we gained by including the AR(1) structure because the CS model had an AIC of 277.4 and the model with AR(1) had an AIC of 275.5.
38 / 51
Comparing models (same fixed effects, different covariances) using fit statistics
Example (Sales)
Likelihood ratio test for H0: ρ = 0 vs. H1: ρ ≠ 0 (nested models)

Model 1: −2 Res Log Likelihood = 273.3645
Model 2: −2 Res Log Likelihood = 269.5389

−2 log λ = −2 log[ L(θ̂0) / L(θ̂) ] = [−2 log L(θ̂0)] − [−2 log L(θ̂)]
         = 273.3645 − 269.5389 = 3.8256

−2 log λ is approximately ∼ χ²_1 under H0

p-value: P(χ²_1 > 3.8256) = 0.05047
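The p-value above can be checked numerically; for a χ² with 1 df the upper tail reduces to an erfc expression, so no statistics library is needed (this computation is a supplement, not from the slides):

```python
from math import erfc, sqrt

# -2 Res Log Likelihoods from the two fits above
lrt = 273.3645 - 269.5389          # = 3.8256

# chi-square(1) upper-tail probability: P(X > x) = erfc(sqrt(x / 2))
p_value = erfc(sqrt(lrt / 2))      # about 0.05
```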
39 / 51
Using PROC GLIMMIX
This LRT can actually be requested directly if using PROC GLIMMIX with the ‘covtest’ statement.
Example (Sales)
40 / 51
Using PROC GLIMMIX
Example (Sales)
It appears that there is no significant Campaign effect, Sales is quadratic in time, and there is some evidence of first-order autocorrelation in the residuals.
41 / 51
Other covariance structures
If there are only two time points, compound symmetry covers all possible variance-covariance structures. If there are many repeated measures, other structures should be considered.
With four repeated measurements per subject, one could try an ‘unstructured’ form (Type=UN in SAS) or a Toeplitz form.
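A sketch of the Toeplitz idea (the lag covariances below are made up for illustration): each diagonal holds a common value, so observations the same number of steps apart share a covariance.

```python
import numpy as np

# hypothetical lag covariances for 4 repeated measures:
# variance at lag 0, then lag-1, lag-2, lag-3 covariances
lag_cov = [1.0, 0.5, 0.3, 0.1]
T = np.array([[lag_cov[abs(i - j)] for j in range(4)] for i in range(4)])
```

Unlike AR(1), the lag covariances are not forced to decay geometrically; unlike unstructured, there are only 4 parameters instead of 10.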
42 / 51
Other covariance structures
If the time points in your repeated measures design are not uniformly spaced among subjects, you can use the ‘spatial power’ covariance structure, where the subject-specific times between observations are utilized (dij is the Euclidean distance between the i-th and j-th observations on a subject).
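A sketch of the spatial power structure, using hypothetical unevenly spaced times and parameter values:

```python
import numpy as np

def spatial_power(times, rho, sigma2):
    # Cov(e_i, e_j) = sigma2 * rho ** d_ij, with d_ij = |t_i - t_j|
    t = np.asarray(times, dtype=float)
    d = np.abs(t[:, None] - t[None, :])
    return sigma2 * rho ** d

# unevenly spaced measurement times (hypothetical)
V = spatial_power([0, 1, 3, 7], rho=0.8, sigma2=2.0)
```

When the times are equally spaced this reduces to AR(1), which is why spatial power is its natural generalization.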
43 / 51
Repeated measures in a blocked design
Example (Pigeons and stimuli)
Consider a situation that looks a lot like an RCBD but is not, because the treatments are given in sequential order rather than randomized.
Let’s compare an incorrect RCBD analysis with a more appropriate repeated measures analysis.
44 / 51
Repeated measures in a blocked design
Example (Pigeons and stimuli)
Pigeon example (repeated measures)
We have data on 6 pigeons. Each pigeon was given a sequence of 8 stimuli, and a response was observed after each. The stimuli were not randomized, making this a repeated-measures study (AKA longitudinal data). The data are in a separate file with variable names in the first row.

data pigeon;
  infile "pigeon.dat" firstobs=2;
  input PIGEON stim Resp;

Here is an initial analysis, treating this like an ordinary block design.

proc mixed;
  class PIGEON stim;
  model resp = stim / ddfm = satterth outp = diags;
  random PIGEON;
  lsmeans stim / diff;

The Mixed Procedure
Convergence criteria met.

Covariance Parameter Estimates
Cov Parm    Estimate
PIGEON      153.00
Residual    55.8097

Fit Statistics
-2 Res Log Likelihood      304.4
AIC (smaller is better)    308.4
AICC (smaller is better)   308.7
BIC (smaller is better)    308.0

Type 3 Tests of Fixed Effects
        Num  Den
Effect  DF   DF   F Value  Pr > F
stim    7    35   4.91     0.0006

Least Squares Means
                                Standard
Effect  stim  Estimate  Error   DF    t Value  Pr > |t|
stim    1     37.9167   5.8992  8.41  6.43     0.0002
stim    2     41.8667   5.8992  8.41  7.10     <.0001
stim    3     32.9833   5.8992  8.41  5.59     0.0004
stim    4     33.7500   5.8992  8.41  5.72     0.0004
stim    5     35.9667   5.8992  8.41  6.10     0.0002
stim    6     44.2500   5.8992  8.41  7.50     <.0001
stim    7     45.5167   5.8992  8.41  7.72     <.0001
stim    8     52.7167   5.8992  8.41  8.94     <.0001

Differences of Least Squares Means
                                       Standard
Effect  stim  _stim  Estimate   Error   DF  t Value  Pr > |t|
stim    1     2       -3.9500   4.3131  35  -0.92    0.3660
stim    1     3        4.9333   4.3131  35   1.14    0.2605
stim    1     4        4.1667   4.3131  35   0.97    0.3407
stim    1     5        1.9500   4.3131  35   0.45    0.6540
stim    1     6       -6.3333   4.3131  35  -1.47    0.1509
stim    1     7       -7.6000   4.3131  35  -1.76    0.0868
stim    1     8      -14.8000   4.3131  35  -3.43    0.0016
stim    2     3        8.8833   4.3131  35   2.06    0.0469
stim    2     4        8.1167   4.3131  35   1.88    0.0682
stim    2     5        5.9000   4.3131  35   1.37    0.1801
stim    2     6       -2.3833   4.3131  35  -0.55    0.5841
stim    2     7       -3.6500   4.3131  35  -0.85    0.4032
stim    2     8      -10.8500   4.3131  35  -2.52    0.0166
stim    3     4       -0.7667   4.3131  35  -0.18    0.8599
45 / 51
Repeated measures in a blocked design
Example (Pigeons and stimuli)
< similar for rest >
Here, we point out that all pairwise comparisons between the 8 stimuli have the same standard error of 4.3131.
47 / 51
Repeated measures in a blocked design
Example (Pigeons and stimuli)
stim    3     5       -2.9833   4.3131  35  -0.69    0.4937
stim    3     6      -11.2667   4.3131  35  -2.61    0.0132
stim    3     7      -12.5333   4.3131  35  -2.91    0.0063
stim    3     8      -19.7333   4.3131  35  -4.58    <.0001
stim    4     5       -2.2167   4.3131  35  -0.51    0.6105
stim    4     6      -10.5000   4.3131  35  -2.43    0.0202
stim    4     7      -11.7667   4.3131  35  -2.73    0.0099
stim    4     8      -18.9667   4.3131  35  -4.40    <.0001
stim    5     6       -8.2833   4.3131  35  -1.92    0.0630
stim    5     7       -9.5500   4.3131  35  -2.21    0.0334
stim    5     8      -16.7500   4.3131  35  -3.88    0.0004
stim    6     7       -1.2667   4.3131  35  -0.29    0.7707
stim    6     8       -8.4667   4.3131  35  -1.96    0.0576
stim    7     8       -7.2000   4.3131  35  -1.67    0.1040

With repeated-measures data, we didn’t have the opportunity to randomize like in a regular block design, so this is cause to seriously question the independence of the errors. Thus, before looking too hard at these results, let’s check the autocorrelation of the data, using the Durbin-Watson test. This can be done using PROC REG. Only the D-W results are shown.

proc reg data = diags;
  model resid = / DW;

The REG Procedure
Model: MODEL1
Dependent Variable: Resid Residual
Durbin-Watson D              1.319
Number of Observations       48
1st Order Autocorrelation    0.334

We have an autocorrelation of .334, and, roughly, the standard error is 1/√n = 1/√48 ≈ .14, so the autocorrelation is evidently nonzero. This autocorrelation includes transitions between the 8th observation on one bird and the 1st observation on the next bird, which one would guess would tend to decrease it. It seems like we should take this autocorrelation seriously.

The following code is the same as the previous PROC MIXED run, except there’s an additional REPEATED statement that specifies a first-order autocorrelation structure (TYPE = AR(1)) for each pigeon (SUBJECT = PIGEON). The RCORR option asks to print the estimated residual correlation matrix.

proc mixed;
  class PIGEON stim;
  model resp = stim / ddfm = satterth outp = diags;
  random PIGEON;
  repeated / type = AR(1) subject = PIGEON rcorr;
  lsmeans stim / diff;

The Mixed Procedure
Convergence criteria met.

Estimated R Correlation Matrix for PIGEON 1
Row   Col1     Col2     Col3     Col4     Col5     Col6     Col7     Col8
1     1.0000   0.6706   0.4497   0.3015   0.2022   0.1356   0.09092  0.06097
2     0.6706   1.0000   0.6706   0.4497   0.3015   0.2022   0.1356   0.09092
3     0.4497   0.6706   1.0000   0.6706   0.4497   0.3015   0.2022   0.1356
4     0.3015   0.4497   0.6706   1.0000   0.6706   0.4497   0.3015   0.2022
5     0.2022   0.3015   0.4497   0.6706   1.0000   0.6706   0.4497   0.3015
6     0.1356   0.2022   0.3015   0.4497   0.6706   1.0000   0.6706   0.4497
7     0.09092  0.1356   0.2022   0.3015   0.4497   0.6706   1.0000   0.6706
8     0.06097  0.09092  0.1356   0.2022   0.3015   0.4497   0.6706   1.0000

Covariance Parameter Estimates
Cov Parm  Subject  Estimate
PIGEON             145.77
AR(1)     PIGEON   0.6706
Residual           85.6493
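As a sanity check on the printed R correlation matrix (a supplement, not part of the original handout), the off-diagonal entries are simply powers of the AR(1) estimate ρ̂ = 0.6706:

```python
rho_hat = 0.6706  # AR(1) estimate from the SAS output above

# lag-1, lag-2, lag-3 correlations implied by the AR(1) structure
lag_corr = [rho_hat ** k for k in range(1, 4)]  # matches 0.6706, 0.4497, 0.3015
```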
48 / 51
Repeated measures in a blocked design
Example (Pigeons and stimuli)
stim 3 5 -2.9833 4.3131 35 -0.69 0.4937stim 3 6 -11.2667 4.3131 35 -2.61 0.0132stim 3 7 -12.5333 4.3131 35 -2.91 0.0063stim 3 8 -19.7333 4.3131 35 -4.58 <.0001stim 4 5 -2.2167 4.3131 35 -0.51 0.6105stim 4 6 -10.5000 4.3131 35 -2.43 0.0202stim 4 7 -11.7667 4.3131 35 -2.73 0.0099stim 4 8 -18.9667 4.3131 35 -4.40 <.0001stim 5 6 -8.2833 4.3131 35 -1.92 0.0630stim 5 7 -9.5500 4.3131 35 -2.21 0.0334stim 5 8 -16.7500 4.3131 35 -3.88 0.0004stim 6 7 -1.2667 4.3131 35 -0.29 0.7707stim 6 8 -8.4667 4.3131 35 -1.96 0.0576stim 7 8 -7.2000 4.3131 35 -1.67 0.1040
With repeated-measures data, we didn’t have the opportunity to randomize like in a regular block de-sign, so this is cause to seriously question the independence of the errors. Thus, before looking too hardat these results, let’s check the autocorrelation of the data, using the Durbin-Watson test. This can be doneusing PROC REG. Only the D-W results are shown.
proc reg data = diags;model resid = / DW;
The REG ProcedureModel: MODEL1Dependent Variable: Resid ResidualDurbin-Watson D 1.319Number of Observations 481st Order Autocorrelation 0.334
We have an autocorrelation of .334, and, roughly, the standard error is 1/p
n = 1/p
48 ⇡ .14, so theautocorrelation is evidently nonzero. This autocorrelation includes transitions between the 8th observationon one bird and the 1st observation on the next bird, which one would guess would tends to decrease it. Itseems like we should take this autocorrelation seriously.
The following code is thwe same as the previous PROC MIXED run, except there’s an additional RE-PEATED statement that specidies a first-order autocorrelation structure (TYPE = AR(1)) for each pigeon(SUBJECT = PIGEON). The RCORR option asks to print the estimated residual correlation matrix.
proc mixed;class PIGEON stim;model resp = stim / ddfm = satterth outp = diags;random PIGEON;repeated / type = AR(1) subject = PIGEON rcorr;lsmeans stim / diff;
The Mixed ProcedureConvergence criteria met.
Estimated R Correlation Matrix for PIGEON 1Row Col1 Col2 Col3 Col4 Col5 Col6 Col7 Col8
1 1.0000 0.6706 0.4497 0.3015 0.2022 0.1356 0.09092 0.060972 0.6706 1.0000 0.6706 0.4497 0.3015 0.2022 0.1356 0.090923 0.4497 0.6706 1.0000 0.6706 0.4497 0.3015 0.2022 0.13564 0.3015 0.4497 0.6706 1.0000 0.6706 0.4497 0.3015 0.20225 0.2022 0.3015 0.4497 0.6706 1.0000 0.6706 0.4497 0.30156 0.1356 0.2022 0.3015 0.4497 0.6706 1.0000 0.6706 0.44977 0.09092 0.1356 0.2022 0.3015 0.4497 0.6706 1.0000 0.67068 0.06097 0.09092 0.1356 0.2022 0.3015 0.4497 0.6706 1.0000
Covariance Parameter EstimatesCov Parm Subject EstimatePIGEON 145.77AR(1) PIGEON 0.6706Residual 85.6493
2
stim 3 5 -2.9833 4.3131 35 -0.69 0.4937stim 3 6 -11.2667 4.3131 35 -2.61 0.0132stim 3 7 -12.5333 4.3131 35 -2.91 0.0063stim 3 8 -19.7333 4.3131 35 -4.58 <.0001stim 4 5 -2.2167 4.3131 35 -0.51 0.6105stim 4 6 -10.5000 4.3131 35 -2.43 0.0202stim 4 7 -11.7667 4.3131 35 -2.73 0.0099stim 4 8 -18.9667 4.3131 35 -4.40 <.0001stim 5 6 -8.2833 4.3131 35 -1.92 0.0630stim 5 7 -9.5500 4.3131 35 -2.21 0.0334stim 5 8 -16.7500 4.3131 35 -3.88 0.0004stim 6 7 -1.2667 4.3131 35 -0.29 0.7707stim 6 8 -8.4667 4.3131 35 -1.96 0.0576stim 7 8 -7.2000 4.3131 35 -1.67 0.1040
With repeated-measures data, we didn’t have the opportunity to randomize like in a regular block de-sign, so this is cause to seriously question the independence of the errors. Thus, before looking too hardat these results, let’s check the autocorrelation of the data, using the Durbin-Watson test. This can be doneusing PROC REG. Only the D-W results are shown.
proc reg data = diags;model resid = / DW;
The REG ProcedureModel: MODEL1Dependent Variable: Resid ResidualDurbin-Watson D 1.319Number of Observations 481st Order Autocorrelation 0.334
We have an autocorrelation of .334, and, roughly, the standard error is 1/p
n = 1/p
48 ⇡ .14, so theautocorrelation is evidently nonzero. This autocorrelation includes transitions between the 8th observationon one bird and the 1st observation on the next bird, which one would guess would tends to decrease it. Itseems like we should take this autocorrelation seriously.
The following code is thwe same as the previous PROC MIXED run, except there’s an additional RE-PEATED statement that specidies a first-order autocorrelation structure (TYPE = AR(1)) for each pigeon(SUBJECT = PIGEON). The RCORR option asks to print the estimated residual correlation matrix.
proc mixed;class PIGEON stim;model resp = stim / ddfm = satterth outp = diags;random PIGEON;repeated / type = AR(1) subject = PIGEON rcorr;lsmeans stim / diff;
The Mixed ProcedureConvergence criteria met.
Estimated R Correlation Matrix for PIGEON 1Row Col1 Col2 Col3 Col4 Col5 Col6 Col7 Col8
1 1.0000 0.6706 0.4497 0.3015 0.2022 0.1356 0.09092 0.060972 0.6706 1.0000 0.6706 0.4497 0.3015 0.2022 0.1356 0.090923 0.4497 0.6706 1.0000 0.6706 0.4497 0.3015 0.2022 0.13564 0.3015 0.4497 0.6706 1.0000 0.6706 0.4497 0.3015 0.20225 0.2022 0.3015 0.4497 0.6706 1.0000 0.6706 0.4497 0.30156 0.1356 0.2022 0.3015 0.4497 0.6706 1.0000 0.6706 0.44977 0.09092 0.1356 0.2022 0.3015 0.4497 0.6706 1.0000 0.67068 0.06097 0.09092 0.1356 0.2022 0.3015 0.4497 0.6706 1.0000
Covariance Parameter EstimatesCov Parm Subject EstimatePIGEON 145.77AR(1) PIGEON 0.6706Residual 85.6493
2
49 / 51
Repeated measures in a blocked design
Example (Pigeons and stimuli)
Differences of Least Squares Means (first model, continued)

                                Standard
Effect  stim  _stim  Estimate     Error   DF  t Value  Pr > |t|
stim      3      5    -2.9833    4.3131   35    -0.69    0.4937
stim      3      6   -11.2667    4.3131   35    -2.61    0.0132
stim      3      7   -12.5333    4.3131   35    -2.91    0.0063
stim      3      8   -19.7333    4.3131   35    -4.58    <.0001
stim      4      5    -2.2167    4.3131   35    -0.51    0.6105
stim      4      6   -10.5000    4.3131   35    -2.43    0.0202
stim      4      7   -11.7667    4.3131   35    -2.73    0.0099
stim      4      8   -18.9667    4.3131   35    -4.40    <.0001
stim      5      6    -8.2833    4.3131   35    -1.92    0.0630
stim      5      7    -9.5500    4.3131   35    -2.21    0.0334
stim      5      8   -16.7500    4.3131   35    -3.88    0.0004
stim      6      7    -1.2667    4.3131   35    -0.29    0.7707
stim      6      8    -8.4667    4.3131   35    -1.96    0.0576
stim      7      8    -7.2000    4.3131   35    -1.67    0.1040
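Why is the SE column constant here? Under the first model's compound-symmetry structure, the pigeon random effect cancels in any within-pigeon difference, so every pairwise comparison has SE = √(2σ²/b). A back-of-envelope sketch (the residual variance below is inferred from the table, not a value reported by SAS; b = 6 pigeons is assumed from the design, 48 observations / 8 stimuli):

```python
import math

# In the compound-symmetry model, SE(diff of LS means) = sqrt(2 * sigma2 / b)
# for every pair of stimuli -- hence the constant 4.3131 in the SE column.
b, se = 6, 4.3131

# Solving the SE formula for sigma2 gives the implied residual variance
# (a rough check only, not a number printed by PROC MIXED).
sigma2_implied = b * se ** 2 / 2
print(round(sigma2_implied, 2))
```

The key point is structural: compound symmetry forces one common SE for all pairs, regardless of how far apart the stimuli are in time.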
Fit Statistics

-2 Res Log Likelihood          293.1
AIC  (smaller is better)       299.1
AICC (smaller is better)       299.7
BIC  (smaller is better)       298.4
Type 3 Tests of Fixed Effects

          Num   Den
Effect     DF    DF   F Value   Pr > F
stim        7    21      3.90   0.0072
Least Squares Means

                          Standard
Effect  stim  Estimate       Error     DF   t Value   Pr > |t|
stim       1   37.9167      6.2104   7.65      6.11     0.0003
stim       2   41.8667      6.2104   7.65      6.74     0.0002
stim       3   32.9833      6.2104   7.65      5.31     0.0008
stim       4   33.7500      6.2104   7.65      5.43     0.0007
stim       5   35.9667      6.2104   7.65      5.79     0.0005
stim       6   44.2500      6.2104   7.65      7.13     0.0001
stim       7   45.5167      6.2104   7.65      7.33     0.0001
stim       8   52.7167      6.2104   7.65      8.49     <.0001
Differences of Least Squares Means

                                Standard
Effect  stim  _stim  Estimate      Error     DF  t Value  Pr > |t|
stim      1      2    -3.9500     3.0668   32.3    -1.29    0.2069
stim      1      3     4.9333     3.9638   33.7     1.24    0.2219
stim      1      4     4.1667     4.4655   25.4     0.93    0.3596
stim      1      5     1.9500     4.7725   18.3     0.41    0.6876
stim      1      6    -6.3333     4.9678   13.8    -1.27    0.2233
stim      1      7    -7.6000     5.0945   11.2    -1.49    0.1634
stim      1      8   -14.8000     5.1777    9.5    -2.86    0.0179
stim      2      3     8.8833     3.0668   32.3     2.90    0.0067
stim      2      4     8.1167     3.9638   33.7     2.05    0.0485
stim      2      5     5.9000     4.4655   25.4     1.32    0.1982
stim      2      6    -2.3833     4.7725   18.3    -0.50    0.6235
stim      2      7    -3.6500     4.9678   13.8    -0.73    0.4748
stim      2      8   -10.8500     5.0945   11.2    -2.13    0.0562
stim      3      4    -0.7667     3.0668   32.3    -0.25    0.8042
stim      3      5    -2.9833     3.9638   33.7    -0.75    0.4569
stim      3      6   -11.2667     4.4655   25.4    -2.52    0.0182
stim      3      7   -12.5333     4.7725   18.3    -2.63    0.0170
stim      3      8   -19.7333     4.9678   13.8    -3.97    0.0014
stim      4      5    -2.2167     3.0668   32.3    -0.72    0.4750
stim      4      6   -10.5000     3.9638   33.7    -2.65    0.0122
stim      4      7   -11.7667     4.4655   25.4    -2.63    0.0141
stim      4      8   -18.9667     4.7725   18.3    -3.97    0.0009
stim      5      6    -8.2833     3.0668   32.3    -2.70    0.0109
stim      5      7    -9.5500     3.9638   33.7    -2.41    0.0216
stim      5      8   -16.7500     4.4655   25.4    -3.75    0.0009
stim      6      7    -1.2667     3.0668   32.3    -0.41    0.6823
stim      6      8    -8.4667     3.9638   33.7    -2.14    0.0400
stim      7      8    -7.2000     3.0668   32.3    -2.35    0.0252
Note that the fit statistics (−2 log likelihood, AIC, etc.) are all smaller than for the first model, indicating that this is a better model. We estimate the first-order autocorrelation to be .67, a fairly large value. The P value for the overall test of stim is .0072, compared with .0006 from the first model; so it appears that the first model exaggerates the significance of that factor. The standard errors of the least-squares means are somewhat larger in this second model, and the SEs of the differences between least-squares means range from 3.07 to 5.18, whereas in the first model they are all 4.31. Thus, this second analysis is very different from the first. Owing to the positive autocorrelation, we have more power (lower SEs) for comparing adjacent stimuli, and less for comparing distant ones.
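The lag-dependence of those SEs can be reproduced directly from the covariance parameter estimates. Here is a sketch (assuming b = 6 pigeons, i.e. 48 observations / 8 stimuli; the pigeon random effect cancels in a within-pigeon difference):

```python
import math

# SEs of LS-mean differences under the AR(1) model, built from the
# "Covariance Parameter Estimates" shown earlier.
sigma2, rho, b = 85.6493, 0.6706, 6  # residual variance, AR(1), # pigeons

def se_diff(lag):
    # The pigeon effect cancels within a pigeon, so
    # Var(diff of LS means) = 2 * sigma2 * (1 - rho**lag) / b
    return math.sqrt(2 * sigma2 * (1 - rho ** lag) / b)

for lag in (1, 2, 7):
    print(lag, round(se_diff(lag), 3))
# Matches the table up to rounding: 3.0668 for adjacent stimuli,
# up through 5.1777 for stim 1 vs stim 8.
```

This makes the power trade-off concrete: the (1 − ρ^lag) factor is small for adjacent stimuli and approaches 1 for distant ones, so nearby comparisons are the most precise.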
With this model, treatment (stimulus) is still significant, but the p-value is not quite as small as before.
50 / 51
51 / 51