EPSY 651: Structural Equation Modeling I
Where does SEM fit in Quantitative Methodology?
Draws on three traditions in mathematics and science:
Psychology (Spearman, Kelley, Thurstone, Cronbach, etc.)
Sociology (Wright)
Agriculture and statistics (Pearson, Fisher, Neyman, Rao, etc.)
Largely due to Jöreskog in the 1960s & 1970s
Map below shows its positioning
[Concept map: STRUCTURAL EQUATION MODELS (SEM) at the top, spanning LATENT and MANIFEST traditions. Latent side: factor analysis (confirmatory and exploratory), structural path models, true score theory (validity: concurrent/predictive; reliability: generalizability), IRT. Manifest side: canonical analysis, discriminant analysis, GLM, multiple regression, ATI, ANOVA/ANCOVA, 2-group t-test, bivariate and partial correlation, logistic models, loglinear models (causal: Grizzle et al.; associational: Holland et al.), HLM. Distributional characteristics: multinormal, Poisson, censored, ordinal, categorical. Estimation methods: OLS, ML, EM, Bayesian.]
MANIFEST MODELING
• Classical statistics within the parametric tradition
• Canonical analysis subsumes most methods as special cases
LATENT MODELING
• Psychological concept of “FACTOR” is central to latent modeling: unobserved directly but “indicated” through observed variables
• Emphasis on error as individual differences and as a problem of observation (measurement), rather than the “lack of fit” conception of manifest modeling
STRUCTURAL EQUATION MODELING PURPOSES
• MODEL real world phenomena in social sciences with respect to
– POPULATIONS
– ECOLOGIES
– TIME
SEM PROCEDURE
• FOCUS ON DECOMPOSITION OF THE COVARIANCE MATRIX:
Σxy = f(μx, μy, σ²x, σ²y, σxy) + f(σex, σey, σexy)
x = Λξ + δ;  y = By + Γx + e
TESTING in SEM
• SEM tests A PRIORI (theoretically specified) MODELS
• SEM has potential to consider model revisions
• SEM is not necessarily good for exploratory modeling
SEM COMPARISONS
• SEM can COMPARE Ecologies or Populations for identical models or
• Simultaneously compare multiple groups or ecologies with each having unique models
• Statistical testing is available for all parts of all models as well as overall model fit
CORRELATION
Karl Pearson (1857–1936) (excerpted from E. S. Pearson, Karl Pearson: An Appreciation of Some Aspects of His Life and Works, Cambridge University Press, 1938).
Pearson Correlation
rxy = [Σi=1..n (xi − x̄)(yi − ȳ) / (n − 1)] / (sx sy) = sxy / (sx sy)
    = Σ zxi zyi / (n − 1)
    = COVARIANCE / [SD(x) SD(y)]
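The equivalences above can be checked numerically. A minimal Python sketch with made-up illustrative data (the variable names echo the SAT/Calculus example that follows; they are not the course data):

```python
# Pearson r as covariance over the product of SDs, and as the mean
# cross-product of z-scores. Data values are illustrative only.
import numpy as np

sat_math = np.array([540.0, 610.0, 490.0, 700.0, 580.0])
calc_grade = np.array([2.7, 3.3, 2.0, 3.9, 3.0])

n = len(sat_math)
# Sample covariance: sum of cross-products of deviations over (n - 1)
s_xy = np.sum((sat_math - sat_math.mean()) * (calc_grade - calc_grade.mean())) / (n - 1)
r_xy = s_xy / (sat_math.std(ddof=1) * calc_grade.std(ddof=1))

# Equivalent: sum of cross-products of z-scores over (n - 1)
z_x = (sat_math - sat_math.mean()) / sat_math.std(ddof=1)
z_y = (calc_grade - calc_grade.mean()) / calc_grade.std(ddof=1)
r_z = np.sum(z_x * z_y) / (n - 1)

print(r_xy, r_z)  # the two computations agree
```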
COVARIANCE
• DEFINED AS CO-VARIATION
• COVxy = sxy
• “UNSTANDARDIZED CORRELATION”
• Distribution is statistically workable
• Basis of Structural Equation Modeling (SEM) is constructing models for covariances of variables
[Figure 3.4: Path model representation of the correlation between SAT Math scores and Calculus grades: path coefficient .364 (covariance 40 in parentheses); error coefficient .932 (.955). The error coefficient equals √(1 − r²); se = standard deviation of the errors.]
Path Models
• path coefficient: the standardized coefficient next to the arrow, with the covariance in parentheses
• error coefficient: the correlation between the errors (discrepancies between observed and predicted Calc Grade scores) and the observed Calc Grade scores
• Predicted(Calc Grade) = .00364 SAT-Math + .5
• errors are sometimes called disturbances
[Figure 3.2: Path model representations of correlation; three panels (a, b, c), each relating X and Y, with panel (c) including an error term e.]
BIVARIATE DATA
• 2 VARIABLES
• QUESTION: DO THEY COVARY?
• IF SO, HOW DO WE INTERPRET?
• IF NOT, IS THERE A THIRD INTERVENING (MEDIATING) VARIABLE OR EXOGENOUS VARIABLE THAT SUPPRESSES THE RELATIONSHIP? OR MODERATES THE RELATIONSHIP?
IDEALIZED SCATTERPLOT
• POSITIVE RELATIONSHIP
[Scatterplot: Y plotted against X with an upward-sloping prediction line]
IDEALIZED SCATTERPLOT
• NEGATIVE RELATIONSHIP
[Scatterplot: Y plotted against X with a downward-sloping prediction line and a 95% confidence interval around the prediction]
IDEALIZED SCATTERPLOT
• NO RELATIONSHIP
[Scatterplot: Y plotted against X with a flat prediction line]
SUPPRESSED SCATTERPLOT
• NO APPARENT RELATIONSHIP
[Scatterplot: Y plotted against X with separate prediction lines for males and females]
MODERATION AND SUPPRESSION IN A SCATTERPLOT
• NO APPARENT RELATIONSHIP
[Scatterplot: Y plotted against X with separate prediction lines for males and females]
IDEALIZED SCATTERPLOT
• POSITIVE CURVILINEAR RELATIONSHIP
[Scatterplot: Y plotted against X with a linear prediction line and a quadratic prediction line]
Hypotheses about Correlations
• One sample tests for Pearson r
• Two sample tests for Pearson r
• Multisample test for Pearson r
• Assumptions: normality of the x and y variables being correlated
One Sample Test for Pearson r
• Null hypothesis: ρ = 0; Alternate: ρ ≠ 0
• test statistic: t = r / [(1 − r²) / (n − 2)]^(1/2)
with degrees of freedom = n − 2
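A small sketch of this test in Python, plugging in the r = .784, n = 76 values from the kindergarten reading example on the next slide (scipy is assumed for the p-value):

```python
# One-sample t test for a Pearson r against rho = 0.
import numpy as np
from scipy import stats

r, n = 0.784, 76
t = r / np.sqrt((1 - r**2) / (n - 2))   # test statistic with df = n - 2
p = 2 * stats.t.sf(abs(t), df=n - 2)    # two-tailed p-value
print(t, p)                             # large t, p < .001
```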
One Sample Test for Pearson r
• ex. Descriptive Statistics for Kindergarteners on a Reading Test (from SPSS)
                  Mean    Std. Deviation   N
Naming letters    .5750   .3288            76
Overall reading   .6427   .2414            76

Correlations (1-tailed, N = 76):
                  Naming   Overall
Naming letters    1.000    .784**
Overall reading   .784**   1.000
** Correlation is significant at the 0.01 level (1-tailed); Sig. = .000
One Sample Test for Pearson r
• Null hypothesis: ρ = c; Alternate: ρ ≠ c
• test statistic: z = (Zr − Zc) / [1/(n − 3)]^(1/2)
where z = normal statistic, Zr = Fisher Z transform of r
Fisher’s Z transform
• Zr = tanh⁻¹ r = (1/2) ln[(1 + r) / (1 − r)]
• This creates a new variable with mean Zρ and SD 1/√(n − 3) that is approximately normally distributed
Non-null r example
• Null: ρ(girls) = .784
• Alternate: ρ(girls) ≠ .784
• Data: r = .845, n = 35
• Zρ(.784) = 1.055, Zr(.845) = 1.238
z = (1.238 − 1.055) / [1/(35 − 3)]^(1/2)
  = .183 / (1/5.65685) = 1.035, nonsig.
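The worked example can be reproduced in a few lines of Python (numpy's arctanh is the Fisher Z transform tanh⁻¹):

```python
# One-sample test of r = .845 against a non-null rho of .784.
import numpy as np

r, rho0, n = 0.845, 0.784, 35
z_r = np.arctanh(r)      # Fisher Z of the sample r, ~1.238
z_0 = np.arctanh(rho0)   # Fisher Z of the hypothesized value, ~1.055
z = (z_r - z_0) / np.sqrt(1 / (n - 3))
print(z)                 # ~1.035, below 1.96, so nonsignificant
```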
Two Sample Test for Difference in Pearson r’s
• Null hypothesis: ρ1 = ρ2
• Alternate hypothesis: ρ1 ≠ ρ2
• test statistic:
z = (Zr1 − Zr2) / [1/(n1 − 3) + 1/(n2 − 3)]^(1/2)
where z = normal statistic
Example
• Null hypothesis: ρgirls = ρboys
• Alternate hypothesis: ρgirls ≠ ρboys
• test statistic: rgirls = .845, rboys = .717, ngirls = 35, nboys = 41
z = [Z(.845) − Z(.717)] / [1/(35 − 3) + 1/(41 − 3)]^(1/2)
  = (1.238 − .901) / [1/32 + 1/38]^(1/2)
  = .337 / .240 = 1.405, nonsig.
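The same example in Python, using the two-sample formula above:

```python
# Two-sample z test for the difference between two Pearson r's.
import numpy as np

r_girls, n_girls = 0.845, 35
r_boys, n_boys = 0.717, 41
z = (np.arctanh(r_girls) - np.arctanh(r_boys)) / np.sqrt(
    1 / (n_girls - 3) + 1 / (n_boys - 3))
print(z)  # ~1.405, below 1.96, so nonsignificant
```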
Multisample test for Pearson r
• Three or more samples:
• Null hypothesis: ρ1 = ρ2 = ρ3, etc.
• Alternate hypothesis: some ρi ≠ ρj
• Test statistic: χ² = Σ wi Zi² − w. Z̄w²
which is chi-square distributed with (#groups − 1) degrees of freedom, where
wi = ni − 3, w. = Σ wi, and
Z̄w = Σ wi Zi / w.
Example Multisample test for Pearson r
GROUP   r       n    w    Zr         wZr        wZr²
AA      0.795   31   28   1.084875   30.37650   32.95472
HISP    0.747   22   19   0.966133   18.35653   17.73484
CAUC    0.815   23   20   1.141742   22.83485   26.07152

w. = 67, Z̄w = 1.068177
χ² = 0.313892, nonsig.
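The table's arithmetic can be verified with a short Python sketch (scipy supplies the chi-square p-value):

```python
# Multisample test for homogeneity of several Pearson r's.
import numpy as np
from scipy import stats

r = np.array([0.795, 0.747, 0.815])   # AA, HISP, CAUC
n = np.array([31, 22, 23])
w = n - 3                              # weights w_i = n_i - 3
z = np.arctanh(r)                      # Fisher Z for each group
z_w = np.sum(w * z) / np.sum(w)        # weighted mean Z, ~1.068
chi2 = np.sum(w * z**2) - np.sum(w) * z_w**2   # ~0.314
p = stats.chi2.sf(chi2, df=len(r) - 1)
print(chi2, p)                         # nonsignificant
```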
Multiple Group Models of Correlation
• The SEM approach models several groups with either the same or different correlations:
[Path diagrams: two groups, boys and girls, each with x related to y and the correlation constrained equal across groups: ρxy = a in both]
Multigroup SEM
• SEM Analysis produces chi-square test of goodness of fit (lack of fit) for the hypothesis about ALL groups at once
• Other indices: Comparative Fit Index (CFI), Normed Fit Index (NFI), Root Mean Square Error of Approximation (RMSEA)
• CFI, NFI > .95 means good fit
• RMSEA < .06 means good fit
Multigroup SEM
• SEM assumes large sample size, multinormality of all variables
• Robust as long as skewness and kurtosis are less than 3, sample size is probably > 100 per group (200 is better), or few parameters are being estimated (sample size as low as 70 per group may be OK with good distribution characteristics)
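As a rough single-group illustration of obtaining such fit statistics in software, here is a hedged sketch using the Python semopy package (lavaan in R is a common alternative); the model string, file name, and data are hypothetical, and multigroup constraints would require additional setup beyond this sketch:

```python
# Fit a minimal structural model and print fit indices.
import pandas as pd
import semopy

data = pd.read_csv("girls.csv")   # hypothetical data file with columns x, y
model = semopy.Model("y ~ x")     # simplest structural model: x predicts y
model.fit(data)
print(semopy.calc_stats(model))   # fit indices such as chi-square, CFI, NFI, RMSEA
```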
Multiple regression analysis
• The test of the overall hypothesis that y is unrelated to all predictors, equivalent to
• H0: ρ²y·123… = 0
• H1: ρ²y·123… ≠ 0
• is tested by
• F = [R²y·123… / p] / [(1 − R²y·123…) / (n − p − 1)]
• F = [SSreg / p] / [SSe / (n − p − 1)]
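A minimal sketch computing this F directly from R². The values are chosen to match the SPSS output below (R² ≈ .600, p = 3 predictors, n = 393), so F comes out near the printed 194.162 up to rounding of R²:

```python
# Overall F test for multiple regression computed from R-squared.
from scipy import stats

r2, p, n = 0.600, 3, 393
F = (r2 / p) / ((1 - r2) / (n - p - 1))
pval = stats.f.sf(F, p, n - p - 1)   # upper-tail F probability
print(F, pval)                        # F ~ 194.5, p < .001
```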
Multiple regression analysis
SOURCE         df          Sum of Squares   Mean Square       F
x1, x2, …      p           SSreg            SSreg / p         (SSreg / p) / [SSe / (n − p − 1)]
e (residual)   n − p − 1   SSe              SSe / (n − p − 1)
total          n − 1       SSy              SSy / (n − 1)
Multiple regression analysis predicting Depression
Model Summary

Model   R       R Square   Adjusted R Square   Std. Error of the Estimate
1       .774a   .600       .596                6.120

a. Predictors: (Constant), t11, t9, t10

ANOVAb

Model 1      Sum of Squares   df    Mean Square   F         Sig.
Regression   21819.235        3     7273.078      194.162   .000a
Residual     14571.498        389   37.459
Total        36390.733        392

a. Predictors: (Constant), t11, t9, t10
b. Dependent Variable: t6
LOCUS OF CONTROL, SELF-ESTEEM, SELF-RELIANCE
[Fig. 8.4: Venn diagram for multiple regression with two predictors and one outcome measure, showing the overlap of SSx1 and SSx2 with SSy (SSreg) and the residual SSe]
[Fig. 8.5: Venn diagram of Type I contributions: x1 is credited with its full overlap with SSy (Type I SSx1), x2 only with its unique overlap (Type III SSx2); SSe is the unexplained remainder]
[Fig. 8.6: Venn diagram of Type III unique contributions: each of x1 and x2 is credited only with its unique overlap with SSy (Type III SSx1 and Type III SSx2); SSe is the unexplained remainder]
Multiple Regression ANOVA table
SOURCE   df      Sum of Squares   Mean Square       F (Type I)
Model    2       SSreg            SSreg / 2         (SSreg / 2) / [SSe / (n − 3)]
x1       1       SSx1             SSx1 / 1          (SSx1 / 1) / [SSe / (n − 3)]
x2       1       SSx2·x1          SSx2·x1 / 1       (SSx2·x1 / 1) / [SSe / (n − 3)]
e        n − 3   SSe              SSe / (n − 3)
total    n − 1   SSy              SSy / (n − 1)
PATH DIAGRAM FOR REGRESSION

[Path diagram: X1 and X2, correlated r = .4, predicting Y with standardized coefficients β1 = .5 and β2 = .6; error path = .387]

R² = [.74² + .8² − 2(.74)(.8)(.4)] / (1 − .4²) = .85

(.74 and .8 are the implied correlations of X1 and X2 with Y; the error path is √(1 − R²) = .387)
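The diagram's arithmetic can be verified directly. In the sketch below, the validities .74 and .80 follow from standardized regression algebra (r_y1 = β1 + β2·r12 and r_y2 = β2 + β1·r12); the symbol names are illustrative:

```python
# Verify the R-squared and error-path arithmetic from the path diagram.
b1, b2, r12 = 0.5, 0.6, 0.4
r_y1 = b1 + b2 * r12               # .74
r_y2 = b2 + b1 * r12               # .80
R2 = (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r12) / (1 - r12**2)
print(R2)                          # .85
print(b1 * r_y1 + b2 * r_y2)       # same R2 via beta * validity: .85
print((1 - R2) ** 0.5)             # error path, ~.387
```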
Coefficientsa (predicting Depression)

Model 1       B        Std. Error   Beta     t        Sig.
(Constant)    51.939   3.305                 15.715   .000
t9            .440     .034         .471     12.842   .000
t10           −.302    .036         −.317    −8.462   .000
t11           −.181    .035         −.186    −5.186   .000

B = unstandardized coefficient; Beta = standardized coefficient
a. Dependent Variable: t6
[Path diagram: Locus of Control (.471), Self-Esteem (−.317), and Self-Reliance (−.186) predicting Depression; R² = .60, with error term e]
Shrinkage R2
• Different definitions; ask which is being used:
– What is the population value for a sample R²?
  R²s = 1 − (1 − R²)(n − 1) / (n − k − 1)
– What is the cross-validation from sample to sample?
  R²sc = 1 − (1 − R²)(n + k) / (n − k)
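Both formulas in one short sketch; n and k are illustrative, chosen to match the depression regression above (R² = .600, k = 3 predictors, n = 393):

```python
# Two shrinkage estimates for a sample R-squared.
r2, n, k = 0.600, 393, 3
r2_pop = 1 - (1 - r2) * (n - 1) / (n - k - 1)   # estimated population R2
r2_cv = 1 - (1 - r2) * (n + k) / (n - k)         # estimated cross-validation R2
print(r2_pop, r2_cv)                             # ~.597 and ~.594
```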
Estimation Methods
• Types of Estimation:
– Ordinary Least Squares (OLS): minimizes the sum of squared errors around the prediction line
– Generalized Least Squares (GLS): a regression technique used when the error terms from an OLS regression display non-random patterns such as autocorrelation or heteroskedasticity
– Maximum Likelihood (ML)
Maximum Likelihood Estimation
• There is nothing visual about the maximum likelihood method, but it is a powerful method and, at least for large samples, very precise.
• Maximum likelihood estimation begins with writing a mathematical expression known as the likelihood function of the sample data. Loosely speaking, the likelihood of a set of data is the probability of obtaining that particular set of data, given the chosen probability distribution model. This expression contains the unknown model parameters. The values of these parameters that maximize the sample likelihood are known as the maximum likelihood estimates, or MLEs. Maximum likelihood estimation is a totally analytic maximization procedure.
• MLEs and likelihood functions generally have very desirable large-sample properties:
– they become unbiased minimum-variance estimators as the sample size increases
– they have approximately normal distributions and approximate sample variances that can be calculated and used to generate confidence bounds
– likelihood functions can be used to test hypotheses about models and parameters
• With small samples, MLEs may not be very precise and may even generate a line that lies above or below the data points. There are only two drawbacks to MLEs, but they are important ones:
– with small numbers of failures (less than 5, and sometimes less than 10), MLEs can be heavily biased and the large-sample optimality properties do not apply
– calculating MLEs often requires specialized software for solving complex non-linear equations; this is less of a problem as more statistical packages add MLE capability every year
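A minimal numerical illustration of the idea (not tied to any SEM package): estimate the mean and SD of a normal sample by minimizing the negative log-likelihood with a general-purpose optimizer. The data here are simulated:

```python
# MLE by direct numerical maximization of the likelihood.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=200)   # simulated sample

def neg_log_likelihood(params):
    mu, sigma = params
    if sigma <= 0:                 # keep the optimizer in the valid region
        return np.inf
    return -np.sum(stats.norm.logpdf(data, loc=mu, scale=sigma))

result = optimize.minimize(neg_log_likelihood, x0=[0.0, 1.0], method="Nelder-Mead")
print(result.x)   # ML estimates: close to the sample mean and SD
```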
Outliers
• Leverage (for a single predictor):
Li = 1/n + (Xi − Mx)² / Σ(Xi − Mx)²   (min = 1/n, max = 1)
• Values larger than 1/n by a large amount should be of concern
• Cook's Di = Σ(Ŷj − Ŷj(i))² / [(k + 1) MSres]
– the difference between the predicted Y values with and without case i
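A hand-rolled sketch of both diagnostics for a single-predictor regression. The data values are illustrative, and Cook's D is computed via the standard leverage-based form rather than by literally refitting without each case:

```python
# Leverage and Cook's D for simple (one-predictor) regression.
import numpy as np

x = np.array([50.0, 55.0, 60.0, 39.0, 80.0, 65.0, 45.0])
y = np.array([44.0, 46.0, 57.0, 41.0, 68.0, 54.0, 60.0])
n = len(x)

# Leverage: 1/n + squared deviation over the total squared deviation
leverage = 1 / n + (x - x.mean())**2 / np.sum((x - x.mean())**2)

# OLS fit, residuals, and residual mean square (k = 1 predictor)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)
ms_res = np.sum(resid**2) / (n - 2)

# Cook's D: (e_i^2 / ((k + 1) * MSres)) * h_i / (1 - h_i)^2
cooks_d = resid**2 / (2 * ms_res) * leverage / (1 - leverage)**2
print(leverage)
print(cooks_d)
```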
Outliers
• In SPSS under SAVE options, Cook's and Leverage values are options you can select
• The result is new variables in your SPSS data set with the values for each case
• You can sort on either one to investigate the largest values for each
• You can delete the cases with the largest values and recompute the regression to see if it changed
t12   t13   t14   COO_1    LEV_1
63    50    44    .03855   .01520
42    50    68    .02422   .04943
41    55    46    .02065   .02010
56    55    52    .01915   .02349
56    60    57    .01696   .01056
41    39    41    .01689   .02435
77    39    65    .01525   .01520
52    65    54    .01448   .01607
39    39    65    .01425   .02289
30    45    60    .01242   .01346
53    60    68    .01133   .03147
52    55    68    .01060   .00693
55    39    41    .01047   .00512
42    80    68    .00918   .02459
59    80    68    .00907   .01098
48    65    46    .00885   .00160