
EVALUATING THE USE OF FACTOR SCORES IN LATENT MEDIATION MODELS

Chris L. Strauss

A thesis submitted to the faculty at the University of North Carolina at Chapel Hill in partial

fulfillment of the requirements for the degree of Master of Arts in the Department of

Psychology and Neuroscience.

Chapel Hill

2021

Approved by:

Patrick Curran

Daniel Bauer

Oscar Gonzalez


© 2021

Chris L. Strauss

ALL RIGHTS RESERVED


ABSTRACT

Chris L. Strauss: Evaluating the Use of Factor Scores in Latent Mediation Models

(Under the direction of Patrick Curran)

Factor scores are a useful tool to examine relationships between latent factors in

circumstances when simultaneous estimation of a measurement and structural model is not

possible, such as when evaluating complex models with small sample sizes (Schumacker &

Lomax, 1996; Valluzzi, Larson, & Miller, 2003). Previous research has provided clear

recommendations on how best to use factor scores when scores will be included in a subsequent

ordinary least squares regression model (Skrondal & Laake, 2001), but these results do not easily

extend to latent mediation models (Lu et al., 2011; Devlieger, Mayer, & Rosseel, 2016). Current

recommended procedures to score latent mediation models either require that the measurement

and structural models can be estimated simultaneously (McDonald, 1981) or that a covariance

matrix of scores is a sufficient statistic to estimate a mediation model (Croon, 2002).

Thus, the problem remains unanswered when a structural equation model cannot be estimated

in a single step or when raw scores are of interest. I examined the performance of regression scores,

Bartlett scores, and mean scores in scoring mediation models in which all variables, including

the mediator, are latent. I varied sample size, item to factor ratio, coefficients of determination,

and effect size, and evaluated bias, mean squared error, and coverage in parameter estimates.

Findings suggested no numerical difference in the estimate of the indirect effect when using

regression scores to score the mediator versus Bartlett scores. In addition, mean scores

performed similarly to, if not better than, regression and Bartlett scores in a number of conditions,


particularly when the population generating measurement model more closely resembled that

imposed by mean scores (e.g., high factor loadings near each other in value). Finally,

explanations for these findings are developed and translated into recommendations for their use

in practice.


TABLE OF CONTENTS

LIST OF TABLES

LIST OF FIGURES

LIST OF ABBREVIATIONS

1. INTRODUCTION

1.1 Factor Analysis

Confirmatory Factor Analysis

1.2 Structural Equation Modeling

1.3 Mediation

Latent Mediation Models

1.4 Limitations to Structural Equation Modeling

1.5 Factor Scores

Mean Scores

Regression Scores

Bartlett Scores

Skrondal and Laake (2001)

Factor Scores in Latent Mediation Models

2. METHODS

2.1 Population Generating Model

2.2 Data Generation

2.3 Factor Scores and Parameter Estimates


2.4 Outcomes

3. RESULTS

3.1 Data Validation

3.2 Comparing Factor Scores to Latent Variable Models

3.3 The Indirect Effect

Relative Bias of the Indirect Effect

MSE of the Indirect Effect

Coverage of the Confidence Interval of the Indirect Effect

4. DISCUSSION

4.1 Hypotheses

Hypotheses One and Two

Hypothesis Three

Hypothesis Four

4.2 Limitations

4.3 Future Directions

APPENDIX A: BOX PLOTS OF COMPONENTS OF INDIRECT EFFECT

APPENDIX B: TABLES OF RELATIVE BIAS AND SQUARED ERROR OF INDIRECT EFFECT

APPENDIX C: META-MODEL RESULTS

References


LIST OF TABLES

1. Decomposition of effects in latent mediation models

2. Design matrix of simulation conditions

3. Parameter estimates comparing factor score models to full SEM

4. Coverage of confidence interval of indirect effect

5. MSE of indirect effect across simulation conditions

6. Relative bias of indirect effect across simulation conditions

7. Relative bias of indirect effect meta-model

8. Squared error of indirect effect meta-model

9. Coverage of confidence interval of indirect effect meta-model


LIST OF FIGURES

Figure 1 - Single-mediator model with observed variables

Figure 2 - Single-mediator model with latent variables

Figure 3 - Population generating model structure

Figure 4 - Summary of data generation and model fitting

Figure 5 - Sampling distributions of indirect effect from full SEM

Figure 6 - Relative bias of indirect effect when N = 40

Figure 7 - Relative bias of indirect effect when N = 80

Figure 8 - Squared error of indirect effect when N = 40

Figure 9 - Squared error of indirect effect when N = 80

Figure 10 - Relative bias of γ11 when N = 40

Figure 11 - Relative bias of γ11 when N = 80

Figure 12 - Relative bias of β21 when N = 40

Figure 13 - Relative bias of β21 when N = 80

Figure 14 - Squared error of γ11 when N = 40

Figure 15 - Squared error of γ11 when N = 80

Figure 16 - Squared error of β21 when N = 40

Figure 17 - Squared error of β21 when N = 80


LIST OF ABBREVIATIONS

CFA confirmatory factor analysis

SEM structural equation model

FSR factor score regression

EFA exploratory factor analysis

ML maximum likelihood

RMSEA root mean square error of approximation

TLI Tucker-Lewis index

CFI comparative fit index

FIML full information maximum likelihood

MSE mean squared error

SE squared error

GLMM generalized linear mixed model

MLM multilevel model

IQR interquartile range


1 INTRODUCTION

Researchers in the social and behavioral sciences often aim to empirically evaluate

behaviors, traits, and phenomena that are not directly observable, such as self-esteem, anxiety,

depression, or life satisfaction (e.g., Raykov & Marcoulides, 2011; Jöreskog, Sörbom, &

Magidson, 1979). Unobserved constructs are often referred to as latent variables. While there

have been a number of proposed definitions of latent variables, one of the most inclusive and

comprehensive states that a latent variable is a “variable for which there is no sample realization

for at least some observations in a given sample” (Bollen, 2002, p. 612). That is, a latent

variable is any construct not directly observed in a given sample.

Researchers not only aim to develop appropriate measures of latent variables, but they

often seek to obtain a numerical score representing where a given individual lies in relation to

others on a latent variable (Thissen & Wainer, 2001). For example, a psychologist might be

interested in measuring the level of anxiety and/or depression that exists for any given research

participant or patient in a sample (e.g., Ebesutani et al., 2011; Beiter et al., 2015; Kroenke, Baye,

& Lourens, 2019). By obtaining scores that accurately represent levels of anxiety and depression,

researchers can observe the distributional properties of these constructs within a sample and

make informed decisions about the presence or absence of disorders such as major depressive

disorder or generalized anxiety disorder (e.g., Kessler et al., 2010). They can also empirically

evaluate relations with other observable constructs such as presence or absence of past traumatic

experiences, behaviors aligning with common symptomatology, or biological factors including

cortisol levels and brain activity in regions associated with depression and anxiety (e.g., Walss-Bass, Suchting, Olvera, & Williamson, 2018). Finally, scores can be used to better understand

relations between more than one unobservable construct, such as how anxiety and depression

coexist or predict the other (e.g., Tunvirachaisakul et al., 2018). Given the large number of uses

for scores, it is important that they accurately represent the construct or constructs they intend to

measure, both in their own right and when included in subsequent models.

I begin this section by setting up background information on measuring and modeling

latent variables. I will discuss this from a factor analytic approach, focusing on confirmatory

factor analysis (CFA). From CFA, I will move into the structural equation model (SEM), which

allows simultaneous estimation of a measurement and structural model. Next, I will introduce

mediation in terms of both observed and latent variables. After this, I will describe a number of

limitations of the SEM, leading to an introduction of the topic of factor scores. I will consider

three types of factor scores: (1) mean scores; (2) regression scores; and (3) Bartlett scores, and

will review literature examining qualities of these scores as well how to use them in subsequent

models. I will directly build off of work by Skrondal and Laake (2001) which provides a focused

analysis of the use of factor scores in linear regression, or factor score regression (FSR). I will

then explain how this work does not easily extend into scoring latent mediation models. Next, I

will consider possible alternatives that have previously been developed and evaluate the benefits

and shortfalls of these methods. Finally, this discussion will set up initial hypotheses regarding

the use of factor scores in latent mediation models.

1.1 Factor Analysis

A common way to measure a latent variable is through the use of multiple observed

indicators. Factor analysis and structural equation modeling are based on analyzing relationships

between observed indicators believed to measure a smaller set of latent variables. Returning to the


anxiety and depression example, suppose a measure contained 10 items designed to assess

anxiety and 10 items designed to assess depression. For a well-established measure, the anxiety

items would be strongly related to each other and the depression items would be strongly related

to each other. A respondent with a higher level of anxiety would score higher on the ten anxiety

items, while someone with a low level of anxiety would score lower on these items. Thus, a

person’s overall levels of anxiety and depression influence responses to the items assessing these constructs,

respectively.

There are two main types of factor analysis: (1) exploratory factor analysis (EFA); and

(2) confirmatory factor analysis (CFA). Both techniques aim to reproduce the observed

relationships among a collection of items believed to measure a smaller set of latent variables

(Tabachnick & Fidell, 2013). The techniques differ in terms of the amount of substantive theory

involved in specifying the models and the types of mathematical restrictions placed on parameter

estimates. Further, there are a number of limitations to EFA overcome by CFA. For instance,

EFA does not allow factor loadings to be fixed to zero. All items must load on all factors,

regardless of whether or not the item substantively has any feasible relation with one or more of

the latent variables. In addition, EFA does not allow for correlated errors. Correlated errors occur

when something other than the latent factors accounts for a relationship between two or more

items. CFA allows a researcher to constrain loadings to zero and include correlated errors,

among other things, making it desirable in a number of applications (Bollen, 1989). Finally, only

CFA allows for a formal test regarding whether a specific prespecified measurement model

can be adequately reproduced by available data. While EFA is a useful tool when little is known

about the underlying factor structure of a set of items, CFA is often the preferred method.


Because of this, my project will solely examine scores in a CFA framework, so details

regarding EFA will be omitted.

Confirmatory Factor Analysis

The goal of CFA is to test a factor structure prespecified by the researcher based on

available theory. Essentially, CFA is an a priori test of a theoretical structure believed to exist. Its

purpose is to identify a set of latent factors that explain the variation and covariation among a set

of observed variables. Specifically, CFA estimates model parameters (factor loadings, factor

covariances, error variances, etc.) that produce a predicted covariance matrix best resembling the

sample covariance matrix of observed variables (Brown, 2006). In general, 𝚺 will refer to the population covariance matrix, 𝚺(𝛉) to the population covariance matrix under the null hypothesis, 𝐒 to the sample covariance matrix, and 𝚺(θ̂) to the sample predicted covariance matrix under the null hypothesis. In addition, 𝛍(𝛉) will be used to refer to the population factor means under the null hypothesis, 𝐲̄ to the sample means, and 𝛍(θ̂) to the sample factor means under the null.

A CFA model with p items believed to describe m underlying latent factors can be

written as:

𝐲𝐢 = 𝛎𝐲 + 𝚲𝐲𝛈𝐢 + 𝛜𝐢 (1)

𝛈𝐢 = 𝛋 + 𝛇𝐢 (2)

where 𝐲𝐢 is a 𝑝 × 1 vector of item responses for case i, 𝛎𝐲 is a 𝑝 × 1 vector of item intercepts, 𝚲𝐲 is a 𝑝 × 𝑚 matrix of factor loadings, 𝛈𝐢 is an 𝑚 × 1 vector of the m latent variables for case i, 𝛜𝐢 is a 𝑝 × 1 vector of error terms for case i with 𝛜𝐢~𝑁(𝟎, 𝚯𝛜), 𝛋 is an 𝑚 × 1 vector of factor means, and 𝛇𝐢 is an 𝑚 × 1 vector of error terms for the latent factors with 𝛇𝐢~𝑁(𝟎, 𝚿). Here, 𝚿 is an 𝑚 × 𝑚 covariance matrix of relationships among factors and 𝚯𝛜 is a 𝑝 × 𝑝


covariance matrix of item residuals, which is typically diagonal but can have off-diagonal elements when residual correlations are included in the model specification.

The measurement and structural models above imply a specific structure for the model-implied means and covariances of the observed variables. For complete derivations see Bollen (1989, pp. 236, 306):

\boldsymbol{\mu}(\hat{\boldsymbol{\theta}}) = \boldsymbol{\nu}_y + \boldsymbol{\Lambda}_y\boldsymbol{\kappa} \quad (3)

\boldsymbol{\Sigma}(\hat{\boldsymbol{\theta}}) = \boldsymbol{\Lambda}_y\boldsymbol{\Psi}\boldsymbol{\Lambda}_y' + \boldsymbol{\Theta}_{\epsilon} \quad (4)

where 𝛍(θ̂) is the model-implied mean structure and 𝚺(θ̂) is the model-implied covariance structure.

The first step of conducting a CFA involves properly specifying the measurement model,

that is, determining which items should load on which latent factors (Bollen, 1989; Hoyle, 2012).

In the previous example involving 10 items believed to measure anxiety and 10 items believed to

measure depression, the model can be specified by constraining the 10 depression items to only

load on the depression factor, the 10 anxiety items to only load on the anxiety factor, and

constraining all cross loadings to zero. After the model is specified, it must meet identification

criteria in order to be estimable. Model parameters are identified if all elements can be uniquely

determined by the elements in the covariance matrix of observed indicators. There are a number

of well-established rules that can be used to determine if identification criteria have been met,

including assuming minimal cross-loadings between items, assuming minimal covariances

between residuals, and having at least two or three observed indicators per latent factor (e.g.,

Bollen, 1989; Kenny & Milan, 2012).
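As a concrete illustration, the two-factor measurement model described above could be specified in lavaan as in the following minimal sketch; the item names (anx1-anx10, dep1-dep10) and the data frame dat are hypothetical stand-ins, not measures used in this thesis.

# A minimal lavaan sketch of the hypothetical two-factor CFA; cross-loadings
# are implicitly fixed to zero by listing each item under only one factor.
library(lavaan)

cfa_model <- '
  anxiety    =~ anx1 + anx2 + anx3 + anx4 + anx5 +
                anx6 + anx7 + anx8 + anx9 + anx10
  depression =~ dep1 + dep2 + dep3 + dep4 + dep5 +
                dep6 + dep7 + dep8 + dep9 + dep10
'
fit <- cfa(cfa_model, data = dat, meanstructure = TRUE)
summary(fit, standardized = TRUE)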

Once a measurement model has been specified and deemed identifiable, it can be

estimated. In CFA, the goal of estimation is to minimize the discrepancy between an observed


covariance matrix and a model implied covariance matrix (Lei & Wu, 2012). A common means

of doing this is through maximum likelihood (ML) estimation. ML minimizes this difference by maximizing the log-likelihood, or equivalently minimizing the deviance:

-2l = \sum_{i=1}^{N}\left[p_i \ln(2\pi) + \ln\left|\boldsymbol{\Sigma}_i(\hat{\boldsymbol{\theta}})\right| + \left(\mathbf{y}_i - \boldsymbol{\mu}_i(\hat{\boldsymbol{\theta}})\right)'\boldsymbol{\Sigma}_i^{-1}(\hat{\boldsymbol{\theta}})\left(\mathbf{y}_i - \boldsymbol{\mu}_i(\hat{\boldsymbol{\theta}})\right)\right] \quad (5)

where 𝑝ᵢ is the number of observed variables for case i and 𝚺ᵢ(θ̂) is the model-implied covariance matrix for case i, removing rows and columns missing for that case.

Maximum likelihood estimation also has a number of underlying assumptions, namely,

independence of error terms, continuous dependent variables, and normally distributed

disturbances (Brown, 2006). In addition, maximum likelihood performs best in large samples.

When these assumptions have not been met, alternative estimators are available and

recommended, though these are beyond the scope of my project and will not be discussed.

After the model is estimated, CFA allows an overall assessment of model fit through

goodness-of-fit indices. Goodness-of-fit indices measure how closely the model implied

covariance matrix matches the observed variances and covariances (West, Taylor, & Wu, 2012).

Common indices include the chi-square (χ2) test statistic for the hypothesized model, root mean

square error of approximation (RMSEA), the Tucker-Lewis index (TLI), and the comparative fit

index (CFI), among others (Widaman & Thompson, 2003). Often, indices suggest poor fit for the

initial model. This is commonly the result of model misspecification and often suggests that

model modifications should be made. Models can be adjusted using either theoretical or

substantive knowledge, data-driven approaches (e.g., modification indices), or a combination of

both (Bollen, 1989). After a model has been modified and respecified, it can be reassessed, and

parameters can be interpreted.
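For instance, under the hypothetical lavaan sketch introduced earlier (the fitted object fit), global fit and data-driven modification indices can be inspected as follows:

# Global goodness-of-fit indices discussed above.
fitMeasures(fit, c("chisq", "df", "pvalue", "rmsea", "tli", "cfi"))

# Modification indices flag fixed parameters (e.g., cross-loadings or error
# covariances) whose release would most improve fit; any respecification
# should also be defensible on substantive grounds.
modindices(fit, sort. = TRUE, maximum.number = 10)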


The interpreted parameters include factor loadings, item residuals, and correlations between

factors, among others. In addition, item communalities, which refer to the amount of variance in

an indicator that can be explained by the latent factor it purports to measure, can be obtained

(Brown, 2006; Raykov & Marcoulides, 2011). These can be likened to, and interpreted similarly to, the

multiple 𝑅2 in linear regression. High communalities suggest a large portion of the variance in

the indicator can be attributed to the latent variable it measures while low communalities suggest

the opposite.

CFA is a powerful tool to assess how closely a set of latent factors can reproduce

relationships between a larger set of observed variables. The procedure allows for great

flexibility in specifying and testing models, and provides goodness-of-fit indices to determine

overall fit of a pre-specified measurement structure. For these reasons, it is often the preferred

method of factor analysis alongside available measurement and substantive theory (Brown,

2006). Next, I will extend my discussion of CFA into structural equation modeling, which builds

off of this model and allows for the examination of structural relations between latent factors.

1.2 Structural Equation Modeling

Confirmatory factor analysis focuses solely on establishing a measurement model in

which one or more latent variables are explained by a larger set of observed indicators. Structural

equation modeling allows researchers to simultaneously test a measurement and structural

model. The measurement model follows directly from CFA and involves determining which

items measure which latent variables, and the structural model consists of structural relationships

between latent variables, such as explanatory or response latent variables or mediating latent

variables.


The measurement portion of a latent variable model with m latent endogenous variables

explained by p observed indicators and n latent exogenous variables explained by q observed

indicators can be expressed with the following equations:

𝐱𝐢 = 𝛎𝐱 + 𝚲𝐱𝛏𝐢 + 𝛅𝒊 (6)

𝐲𝐢 = 𝛎𝐲 + 𝚲𝐲𝛈𝐢 + 𝛜𝐢 (7)

where 𝐱𝐢 and 𝐲𝐢 represent vectors of observed indicators for exogenous and endogenous latent

variables, respectively. For exogenous latent variables, 𝛎𝐱 is a 𝑞 × 1 vector of item intercepts, 𝚲𝐱

is a 𝑞 × 𝑛 matrix of factor loadings, 𝛏𝐢 is an 𝑛 × 1 vector of the n latent exogenous variables for

case i, 𝛅𝒊 is a 𝑞 × 1 vector of error terms for case i with 𝛅𝒊~𝑁(0, 𝚯𝛅). For the latent endogenous

variables, 𝛎𝐲 is a 𝑝 × 1 vector of item intercepts, 𝚲𝐲 is a 𝑝 × 𝑚 matrix of factor loadings, 𝛈𝐢 is an

𝑚 × 1 vector of the m latent endogenous variables for case i, and 𝛜𝐢 is a 𝑝 × 1 vector of error

terms for case i with 𝛜𝐢~𝑁(0, 𝚯𝛜).

The structural portion of the model is then:

\boldsymbol{\eta}_i = \boldsymbol{\alpha} + \mathbf{B}\boldsymbol{\eta}_i + \boldsymbol{\Gamma}\boldsymbol{\xi}_i + \boldsymbol{\zeta}_i \quad (8)

where 𝛂 is an 𝑚 × 1 vector of latent intercepts, 𝚩 is an 𝑚 × 𝑚 matrix of coefficients representing paths among the endogenous latent variables, 𝚪 is an 𝑚 × 𝑛 matrix of coefficients representing paths from 𝛏𝐢 to 𝛈𝐢, 𝚽 is an 𝑛 × 𝑛 covariance matrix of the exogenous latent variables (whose means are collected in the 𝑛 × 1 vector 𝛋), and 𝛇𝐢 is an 𝑚 × 1 vector of latent residuals for case i.

The model-implied mean structure for 𝐱𝐢 becomes:

\boldsymbol{\mu}_x(\hat{\boldsymbol{\theta}}) = \boldsymbol{\nu}_x + \boldsymbol{\Lambda}_x\boldsymbol{\kappa} \quad (9)

and the model-implied mean structure for 𝐲𝐢 becomes:

\boldsymbol{\mu}_y(\hat{\boldsymbol{\theta}}) = \boldsymbol{\nu}_y + \boldsymbol{\Lambda}_y(\mathbf{I}-\mathbf{B})^{-1}(\boldsymbol{\alpha} + \boldsymbol{\Gamma}\boldsymbol{\kappa}) \quad (10)

Finally, the model-implied covariance structure can be derived through covariance algebra (Bollen, 1989, pp. 85-86):


\boldsymbol{\Sigma}(\boldsymbol{\theta}) = \begin{pmatrix} \boldsymbol{\Sigma}_{y} & \\ \boldsymbol{\Sigma}_{yx} & \boldsymbol{\Sigma}_{x} \end{pmatrix} = \begin{pmatrix} \boldsymbol{\Lambda}_{y}(\mathbf{I}-\mathbf{B})^{-1}(\boldsymbol{\Gamma}\boldsymbol{\Phi}\boldsymbol{\Gamma}' + \boldsymbol{\Psi})(\mathbf{I}-\mathbf{B})^{-1\prime}\boldsymbol{\Lambda}_{y}' + \boldsymbol{\Theta}_{\epsilon} & \\ \boldsymbol{\Lambda}_{y}(\mathbf{I}-\mathbf{B})^{-1}\boldsymbol{\Gamma}\boldsymbol{\Phi}\boldsymbol{\Lambda}_{x}' & \boldsymbol{\Lambda}_{x}\boldsymbol{\Phi}\boldsymbol{\Lambda}_{x}' + \boldsymbol{\Theta}_{\delta} \end{pmatrix} \quad (11)

where 𝚿 is an 𝑚 × 𝑚 matrix of covariances among the endogenous disturbances (only the lower triangle of the symmetric matrix is shown).

Conducting an SEM is a direct extension of CFA and involves many of the same steps.

Specification includes determining the measurement model and the relations between latent

factors, that is, establishing all structural paths leading from one latent variable to another. As in

CFA, there are a number of set identification rules that can be used to determine if all model

parameters are appropriately estimable, including 𝚩 and 𝚪. Maximum likelihood estimation can

also be used to estimate both measurement and structural model parameters. Finally, the same

goodness-of-fit indices can be used, and similar model modification methods can be considered to

improve overall model fit (Bollen, 1989, pp. 80-131).

Structural equation modeling allows for statistical testing of a variety of models relevant

and important within the social sciences (e.g., Bauer & Curran, 2004; Bollen, 1989; Bollen &

Curran, 2006). Given flexibility inherent in this modeling framework, it is suited to evaluate

numerous different types of relations between latent variables. Specifically, the SEM can be used

to assess mediational relationships where predictor, criterion, and mediating variables are

unobserved or latent.

Statistical mediation refers to the process of evaluating how and why two variables are

related (MacKinnon, 2008). That is, statistical mediation examines the effect of a mediating

variable, or any variable that fully or partly explains the relationship between a predictor and an

outcome. While the SEM can be utilized to examine mediational effects in latent variables, it is

also important to begin by considering these models in terms of observed variables. In the

following section I will provide an overview of mediation, beginning with mediation models

focusing on observed variables, then expanding this to latent variable models. Throughout, I will


focus on the single-mediator model, as this will be utilized in my subsequent analyses. As

discussed further, future research should continue this line of inquiry in more complex mediation

models.

1.3 Mediation

According to Baron and Kenny (1986, p. 1179), a mediator is any variable that “accounts

for the relation between predictor and criterion.” That is, mediators are variables that can be used

to explain the relation between an independent variable and dependent variable. The single-

mediator model with observed variables can be represented using the following path diagram:

Figure 1. Single-mediator model with observed variables

where X represents the independent variable, M represents the mediator, and Y represents the

dependent variable. This model also gives way to three regression equations (MacKinnon, 2008):

𝑌 = 𝑖1 + 𝑐X + 𝑒1 (12)

𝑌 = 𝑖2 + 𝑐′X + 𝑏M + 𝑒2 (13)

𝑀 = 𝑖3 + 𝑎X + 𝑒3 (14)

In Equations 12-14, 𝑖1-𝑖3 represent intercepts; 𝑐 represents the total relation between the

independent variable and the dependent variable, and is referred to as the total effect; 𝑐′


represents the partial relation between the independent variable and dependent variable, adjusting

for the mediator, and is referred to as the direct effect; 𝑎 represents the path from X to M; 𝑏 represents the path from M to Y; and 𝑒1 − 𝑒3 represent residuals or errors.

Often, it is of primary interest in mediation models to directly quantify the effect of X on

Y through M using a single parameter estimate. This is referred to as the indirect effect, or

mediated effect. A common means of computing this value involves multiplying 𝑎 and 𝑏

together (MacKinnon, 2008). Thus, the total effect can be decomposed into the direct effect, 𝑐′,

and the indirect effect 𝑎𝑏. This implies that we can express the mediated effect 𝑎𝑏 as the

difference between the total effect and the direct effect, or 𝑎𝑏 = 𝑐 − 𝑐′. Based on this definition,

𝑎𝑏 provides a quantity that reflects how much a one unit change in X impacts Y through the

mediator M (MacKinnon, 2008).
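As a brief sketch, Equations 12-14 and this decomposition of the total effect can be reproduced with ordinary regression in R; the data frame dat with columns X, M, and Y is hypothetical.

fit_total    <- lm(Y ~ X, data = dat)      # Equation 12: total effect c
fit_direct   <- lm(Y ~ X + M, data = dat)  # Equation 13: c' and b
fit_mediator <- lm(M ~ X, data = dat)      # Equation 14: a

a       <- coef(fit_mediator)["X"]
b       <- coef(fit_direct)["M"]
c_total <- coef(fit_total)["X"]
c_prime <- coef(fit_direct)["X"]

a * b              # indirect effect ab
c_total - c_prime  # equals ab exactly in this linear, complete-data case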

Often researchers wish to obtain a significance test of the estimate 𝑎𝑏 in order to

determine whether or not a significant mediation effect exists. Given this effect is the product

term of two estimated parameters, additional work must be done to compute its corresponding

standard error and confidence interval. Traditionally, standard errors were derived analytically

using the delta method (Sobel, 1986; Bollen, 1987); however, this method assumes that the

population distribution of the mediated effect is normal, which is typically not the case because the product of two normally distributed variables is generally not itself normally distributed (MacKinnon, 2008). Thus,

standard errors and confidence intervals must be derived using alternative methods before

accurate confidence intervals can be obtained. A common means of computing standard errors or

directly obtaining asymmetric confidence intervals involves the use of bootstrapping.

Bootstrapped confidence intervals can then be used to determine significance of the indirect

effect using appropriate sampling distributions (MacKinnon, 2008).


Bootstrapping is the process of estimating model parameters in a number of samples

drawn from the original sample with replacement (Preacher & Hayes, 2008). This can be applied

to standard errors or to indirect effects in order to obtain associated confidence intervals for

significance tests. There are a number of different options to compute confidence intervals using

bootstrapping. One simple option is to create a percentile bootstrap interval by selecting the

lower and upper limits as the values that correspond to the 2.5th and 97.5th percentile of a

distribution of bootstrapped indirect effects (Davison & Hinkley, 1997). Another option is the

bias-corrected and accelerated bootstrap method (Efron & Tibshirani, 1993). These intervals rely

on a bias-correction factor, ẑ₀, and an acceleration factor, â. The bias-correction factor is based on the proportion of bootstrapped estimates less than the original estimate, while the acceleration factor

is calculated using jackknife resampling. These values can then be used to obtain parameter

standard errors or can be used directly to compute the upper and lower limits of the confidence interval (see Wright & Herrington, 2011, and Jung, Lee, Gupta, & Cho,

2019). There are other methods of obtaining bootstrapped confidence intervals, and of testing for

the presence of a significant mediated effect, but these are beyond the scope of this paper. My

analyses will utilize the bias-corrected and accelerated bootstrap method to obtain standard errors

and associated confidence intervals of the indirect effect.
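A minimal sketch of such an interval using the boot package, continuing the hypothetical dat from the previous sketch:

library(boot)

# Statistic function: the indirect effect ab recomputed in each resample.
indirect <- function(data, idx) {
  d <- data[idx, ]
  a <- coef(lm(M ~ X, data = d))["X"]
  b <- coef(lm(Y ~ X + M, data = d))["M"]
  a * b
}

set.seed(1234)
boot_out <- boot(dat, statistic = indirect, R = 2000)
boot.ci(boot_out, type = c("perc", "bca"))  # percentile and BCa intervals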

Mediation models are thus powerful tools to empirically test the relation between an

explanatory and response variable through a mediating variable. A limitation, however, is that

these models assume that all variables are measured without error (see Cohen, Cohen, West, &

Aiken, 2003). When this assumption is not met, path estimates are attenuated by unreliability. For example, given any structural path 𝛾, 𝐸(𝛾̂) = 𝜌𝛾, where 𝜌 is a measure of reliability. Thus,

the expected value of all path estimates equals the product of the true population generating path


and reliability. It is often the case that variables of interest in the social and behavioral sciences

are not perfectly reliably measured. Thus, in order to model mediation among latent variables,

we must move the aforementioned modeling framework into the SEM.

In sum, mediation models allow researchers to examine how mediating variables impact

the relationship between independent variables and dependent variables. These models uniquely

allow for testing the significance of the indirect effect, which is the effect of the independent

variable on the dependent variable, taking into account the mediating variable. Mediation models

with observed variables assume all variables are measured without error. In order to account for

measurement error in mediation models, we can use the tools and framework of the SEM to

model multiple indicator latent factors within a mediation model.

Latent Mediation Models

The SEM can be used to evaluate models in which one or more latent variables accounts

for the relationship between any number of observed and/or latent predictors and responses

(MacKinnon, 2008). Figure 2 shows a path diagram of an SEM with one predictor, one mediator, and one response latent variable.


Figure 2. Single-mediator model with latent variables

The model in Figure 2 is fairly simple and these models can quickly increase in complexity with

the addition of more latent and observed variables. Note that, while not directly expressed in Figure

2, these models also estimate mean structures (i.e., Equations 9 and 10).

Much of what was previously discussed in regards to the SEM applies to latent mediation

models. For instance, Equations (6) and (7) relate to the measurement portion, Equation (8)

describes the structural relations between the independent variable, mediator, and dependent

variable, and Equations (9), (10), and (11) describe the model implied mean and covariance

structure. Further, maximum likelihood estimation (Equation 5) can be used to obtain all model

parameters.

Similar to mediation models with observed variables, latent mediation models can also be

understood by decomposing all paths and combinations of paths into total effects, direct effects,

and indirect effects. Total effects refer to all paths leading from a latent predictor to a latent


outcome and are the sum of all direct effects and indirect effects. Direct effects refer to all paths of

length one and indirect effects refer to the sum of all possible paths from predictor to outcome

that pass through one or more mediating variable. One can also examine specific indirect effects

which refer to any single indirect path leading from predictor to outcome (Sobel, 1986). Table 1,

taken from Bollen (1987), shows that estimates of these effects can be expressed algebraically

provided that 𝚩ᵏ converges to 𝟎 as 𝑘 → ∞.

Table 1. Decomposition of effects in latent mediation models

Effects of 𝛏 on 𝛈:

Direct: 𝚪
Indirect: (𝐈 − 𝐁)⁻¹𝚪 − 𝚪
Total: (𝐈 − 𝐁)⁻¹𝚪

Effects of 𝛈 on 𝛈:

Direct: 𝐁
Indirect: (𝐈 − 𝐁)⁻¹ − 𝐈 − 𝐁
Total: (𝐈 − 𝐁)⁻¹ − 𝐈

For the model in Figure 2, 𝐁 and 𝚪 can be expressed as:

\mathbf{B} = \begin{bmatrix} 0 & 0 \\ \beta_{21} & 0 \end{bmatrix} \qquad \boldsymbol{\Gamma} = \begin{bmatrix} \gamma_{11} \\ \gamma_{21} \end{bmatrix} \quad (15)

where 𝛽21 is the effect of the mediator on the endogenous latent variable, 𝛾11 is the effect of the

exogenous latent variable on the mediator, and 𝛾21 is the effect of the exogenous latent variable

on the endogenous latent variable. Assuming all latent variables are continuous and that the

relations between latent variables are linear, we can obtain the indirect effect as the product term


𝛾11𝛽21, as in the single-mediator model with observed variables (Falk & Biesanz, 2015). This

product reflects how much a one unit change in 𝜉1 impacts 𝜂2 indirectly through 𝜂1.
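The decomposition in Table 1 can also be verified numerically. The short R sketch below uses purely illustrative values, γ11 = 0.4, β21 = 0.5, and γ21 = 0.2 (not values from this study), to show that the indirect element of (𝐈 − 𝐁)⁻¹𝚪 − 𝚪 equals γ11β21:

B     <- matrix(c(0,   0,
                  0.5, 0), nrow = 2, byrow = TRUE)  # beta_21 in row 2, column 1
Gamma <- matrix(c(0.4,
                  0.2), nrow = 2)                   # gamma_11 and gamma_21

total_xi    <- solve(diag(2) - B) %*% Gamma  # (I - B)^-1 Gamma
indirect_xi <- total_xi - Gamma              # (I - B)^-1 Gamma - Gamma
indirect_xi  # second element is 0.4 * 0.5 = 0.20, i.e., gamma_11 * beta_21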

Again, we cannot assume that the sampling distribution of the

indirect effect is Gaussian. Both percentile and bias-corrected and accelerated bootstrap methods

to obtain standard errors and confidence intervals work with latent mediation models. Some

research suggests percentile confidence intervals may function better, specifically in terms of

Type I error rate (Falk & Biesanz, 2015), while others advocate for use of the bias-corrected and

accelerated method (Cheung & Lau, 2008). Further, there are numerous other methods to

compute confidence intervals of the indirect effect, all with their own collection of benefits and

caveats.
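For reference, a latent single-mediator model such as the one in Figure 2 can be estimated in lavaan with a defined indirect effect and bootstrap intervals. This is a minimal sketch under assumed, hypothetical item names (x1-x5, m1-m5, y1-y5) and data frame dat, not the specification used in this study:

library(lavaan)

med_model <- '
  xi   =~ x1 + x2 + x3 + x4 + x5
  eta1 =~ m1 + m2 + m3 + m4 + m5
  eta2 =~ y1 + y2 + y3 + y4 + y5

  eta1 ~ g11 * xi               # gamma_11
  eta2 ~ b21 * eta1 + g21 * xi  # beta_21 and gamma_21

  indirect := g11 * b21
  total    := g21 + g11 * b21
'
fit_med <- sem(med_model, data = dat, se = "bootstrap", bootstrap = 1000)

# Note: lavaan's "bca.simple" interval is bias-corrected but omits the
# acceleration factor of the full BCa method.
parameterEstimates(fit_med, boot.ci.type = "bca.simple")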

Taken together, using the SEM to estimate latent mediation models allows researchers to

examine whether unobserved variables can explain relations between other observed and

unobserved variables, and to systematically account for measurement error by using multiple-

indicator latent factors. This method can be used to test simple single-mediator models, such as

the example in Figure 2, or more complex models that may have more than one independent

variable, dependent variable, and/or mediator. While simultaneously estimating the measurement

and structural portions of a latent mediation model is often the most efficient and preferred

modeling technique due to its systematic handling of measurement error, it is not always

possible, particularly as model complexity increases. More broadly, while the SEM is suited to

test a large variety of models, there are also several limitations that restrict its usage.

1.4 Limitations to Structural Equation Modeling

There are many situations in which latent variable models cannot be estimated in a single

step using the SEM. For example, in small samples, it is often the case that an SEM may not be


estimable or may lead to inconsistent results, particularly if researchers wish to examine more

complex models (Schumacker & Lomax, 1996; Valluzzi, Larson, & Miller, 2003). As will be

demonstrated, when fitting mediation models in small samples with a large number of items, or

variable factor loadings, even simple single-mediator models are not always identified. As model

complexity increases (e.g., number of items and latent factors) larger samples are required to

simultaneously estimate the measurement and structural portions of a model. One example of a

highly complex model in which factor scores can be used is the trifactor model (Bauer et al.,

2013). This model allows for the inclusion of multiple rater data, for example, reports from

children, parents, and teachers. Research has shown that scores appropriate for use with categorical and binary items (e.g., EAPs; Thissen & Orlando, 2001) can be used in these complex multi-

rater models to recover mediation effects (Curran, Georgeson, Bauer, & Hussong, 2021). Thus,

while natively modeling multiple indicator latent factors to examine mediation effects provides

the best way of accounting for measurement error, some models may not allow for simultaneous

estimation due to limitations in sample sizes or overall complexity of models.

There are a number of other modeling frameworks in which latent factors cannot be

natively estimated simultaneously with structural paths due to limitations in the models

themselves, as opposed to limited samples. One example is data collected longitudinally over a

large number of time points, such as intensive longitudinal or time series data. These types of

models often require the use of scores, or other alternative methods (e.g., Curran et al., 2014).

These and other scenarios indicate there are situations in which measurement and structural

model cannot be simultaneously estimated for a variety of reasons, including limitations in

available data and/or the modeling frameworks themselves. In these instances, scores can be used to


evaluate relations among latent variables. Thus, it is important to continue examining how and

when scores can be appropriately used in subsequent analyses.

Factor scores represent a broad range of scoring techniques that can be used when a full SEM

cannot be estimated. This process involves first estimating the measurement portion of the model

through factor analysis. Factor scores can then be computed using estimated model parameters

and these scores can be used in a subsequent model such as a linear regression or path analysis

(e.g., Devlieger & Rosseel, 2017; Skrondal & Laake, 2001). Next, I will discuss factor scoring

broadly and provide specific details on the scoring procedures I will consider in my

simulation design.

1.5 Factor Scores

Factor scores can be computed using closed-form expressions. Building off notation from

Skrondal and Laake (2001), these can be written as:

\hat{\mathbf{F}}_{\boldsymbol{\xi},i} = \mathbf{A}_{\boldsymbol{\xi}}\mathbf{x}_i \quad (16)

\hat{\mathbf{F}}_{\boldsymbol{\eta},i} = \mathbf{A}_{\boldsymbol{\eta}}\mathbf{y}_i \quad (17)

where 𝐅̂𝛏,ᵢ and 𝐅̂𝛈,ᵢ refer to the factor score estimates for case i, and 𝐀𝛏 and 𝐀𝛈 refer to factor scoring matrices (each a finite series of matrix operations) premultiplying the observed indicators 𝐱𝐢 and 𝐲𝐢, respectively. The matrices that make up 𝐀𝛏 and 𝐀𝛈 are those directly obtained from a CFA. When computing factor scores from a CFA, observed indicators are typically mean deviated. Because of this, the following sections will no longer discuss mean structures.
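In lavaan, these closed-form scores are available through lavPredict(). A minimal sketch, assuming a fitted CFA object fit as in the earlier hypothetical examples:

scores_reg <- lavPredict(fit, method = "regression")  # regression scores
scores_bar <- lavPredict(fit, method = "Bartlett")    # Bartlett scores
head(scores_reg)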

My study will consider three factor scoring methods: (1) mean scores; (2) regression

scores; and (3) Bartlett scores.


Mean Scores

Mean scores (and consequently sum scores) are based on a factor structure in which all

factor loadings are constrained to unity and residuals are fixed to zero (McNeish & Wolf, 2020).

Therefore, 𝚲𝐱 and 𝚲𝐲 become matrices of zeros and ones. When each latent variable has the same number of indicators, the mean-score scoring matrices can be expressed as:

\mathbf{A}_{\boldsymbol{\xi}}^{M} = \frac{1}{l}\boldsymbol{\Lambda}_x' \quad (18)

\mathbf{A}_{\boldsymbol{\eta}}^{M} = \frac{1}{k}\boldsymbol{\Lambda}_y' \quad (19)

where l and k represent the number of items per exogenous and endogenous latent factor, respectively. This formula also holds when each latent factor is scored one factor at a time (to account for differing numbers of items per factor), allowing l and k to differ for each factor depending on its unique number of underlying items.
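In practice, mean scores are simply row averages of each factor's items. A minimal sketch using the hypothetical anxiety and depression item names from the earlier CFA example:

anx_items <- paste0("anx", 1:10)
dep_items <- paste0("dep", 1:10)

dat$anxiety_mean    <- rowMeans(dat[, anx_items])
dat$depression_mean <- rowMeans(dat[, dep_items])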

While mean scores are some of the most widely utilized scores in the psychological

sciences, they come with a number of notable limitations (Bauer & Curran, 2015). These scores

are built from a very specific and highly constrained measurement model that might not

accurately reflect the relationships between observed items and latent variables. In particular,

mean scores stem from a measurement model in which all items contribute equally to their corresponding latent variables (loadings are fixed to unity) and measure the latent variables without error (residuals are fixed to zero). When factor loadings are fairly similar across items and residual variances are small, these scores more closely align with the true underlying factor analytic structure and may be appropriate (McNeish & Wolf, 2020); however, when loadings differ, there is a large discrepancy between these and other

scoring procedures indicating mean scores may not be the best scores to represent a latent factor


(Wainer, 1976). In spite of these shortcomings, these scores are quite common and thus

important to examine alongside other methods of factor scoring.

Regression Scores

Regression scores have been attributed to both Thurstone (1935) and Thomson (1935a,

1935b, 1939). The goal in formulating these scores was to predict a given individual’s aptitude

for an occupation using a set of multiple test scores (Bartholomew, Deary, & Lawn, 2009). It is

interesting to note that, accordingly, these scores were originally created to have a usable single

number summary associated with an individual that could be included in a subsequent model for

predictive purposes. The notation I will use to describe both regression and Bartlett scores below

stems from Skrondal and Laake (2001) who derived these scores in context of the full SEM. The

factor scoring matrices for regression scores are:

\mathbf{A}_{\boldsymbol{\xi}}^{R} = \hat{\boldsymbol{\Phi}}\boldsymbol{\Lambda}_x'\left(\boldsymbol{\Sigma}_x(\hat{\boldsymbol{\theta}})\right)^{-1} \quad (20)

\mathbf{A}_{\boldsymbol{\eta}}^{R} = \left(\hat{\boldsymbol{\Gamma}}\hat{\boldsymbol{\Phi}}\hat{\boldsymbol{\Gamma}}' + \hat{\boldsymbol{\Psi}}\right)\boldsymbol{\Lambda}_y'\left(\boldsymbol{\Sigma}_y(\hat{\boldsymbol{\theta}})\right)^{-1} \quad (21)

Regression scores take into account the variances and covariances of

endogenous and exogenous latent variables, factor loadings, and the model-implied covariances

among observed indicators. It is also the case that these estimates correlate highly with the true

factor scores (McDonald & Burr, 1967). In addition, simulation studies comparing reliability of

factor scores found that, in general, regression scores had the highest reliability among the types of factor scores compared, including Bartlett scores, which will be discussed next (Beauducel, Harms, & Hilger, 2016).

Bartlett Scores

Bartlett scores were first introduced by Bartlett (1937) and were derived with the goal of

assessing whether factor scores could be designed to accurately represent “hypothetical factors”


or whether they were simply “mathematical descriptions of observed test scores” (Bartlett, 1937,

p. 7). While regression scores focused primarily on using intelligence as a predictor of some

desired outcome, Bartlett’s goal was to define scores that produced the best estimate of the

underlying latent variable for each individual, and were not necessarily motivated by using these

scores to accurately model prediction (Bartholomew et al., 2009). Again, pulling from notation

from Skrondal and Laake (2001) to associate scores with the full SEM, the factor scoring

matrices of the Bartlett method are:

\mathbf{A}_{\boldsymbol{\xi}}^{B} = \left(\boldsymbol{\Lambda}_x'\boldsymbol{\Theta}_{\boldsymbol{\delta}}^{-1}\boldsymbol{\Lambda}_x\right)^{-1}\boldsymbol{\Lambda}_x'\boldsymbol{\Theta}_{\boldsymbol{\delta}}^{-1} \quad (22)

\mathbf{A}_{\boldsymbol{\eta}}^{B} = \left(\boldsymbol{\Lambda}_y'\boldsymbol{\Theta}_{\boldsymbol{\epsilon}}^{-1}\boldsymbol{\Lambda}_y\right)^{-1}\boldsymbol{\Lambda}_y'\boldsymbol{\Theta}_{\boldsymbol{\epsilon}}^{-1} \quad (23)

Unlike regression scores, these do not take relations among factors or the model-implied

covariances between observed items into account. Rather, they are computed solely from factor

loadings and error covariances. Similar to regression scores, Bartlett scores are also highly

correlated with true factor scores (McDonald & Burr, 1967), but unlike regression scores, Bartlett scores satisfy the property of conditional unbiasedness such that 𝐸(𝐹̂ᵢ | 𝐹ᵢ) = 𝐹ᵢ (Bentler & Yuan, 1997).
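Equations 22-23 can also be computed directly from estimated CFA parameters. The sketch below pulls the loading and residual matrices from the hypothetical lavaan object fit used earlier and should closely reproduce lavPredict(fit, method = "Bartlett") when the items are mean deviated:

est    <- lavInspect(fit, "est")
Lambda <- est$lambda  # estimated factor loadings
Theta  <- est$theta   # estimated item residual (co)variances

# Bartlett scoring matrix: (Lambda' Theta^-1 Lambda)^-1 Lambda' Theta^-1
A_bartlett <- solve(t(Lambda) %*% solve(Theta) %*% Lambda) %*%
  t(Lambda) %*% solve(Theta)

# Apply the scoring matrix to mean-deviated item responses.
y_centered      <- scale(lavInspect(fit, "data"), center = TRUE, scale = FALSE)
bartlett_manual <- y_centered %*% t(A_bartlett)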

While there are a number of other types of factor scores, I have chosen to focus on

examining mean scores, regression scores, and Bartlett scores for a number of reasons. I will be

including mean scores in my analyses given their wide popularity and frequent usage in the

social sciences. Further, since my design and hypotheses stem directly from work of Skrondal

and Laake, I will consider regression and Bartlett scores.

Skrondal and Laake (2001)

Beyond their formulation, definition, and properties, much work has been done to

examine the use of factor scores in subsequent models. Early research by Tucker (1971) found


that when scores were to be entered as predictors in a regression model, regression scores were

most appropriate, whereas when scores were to be entered as an outcome predicted by group

membership, Bartlett scores were most appropriate. Skrondal and Laake (2001) built off this

initial work by examining the use of scores in regression models when both predictor and

outcome were latent, as in factor score regression.

Factor score regression refers to using factor scores in a regression model where all

latent variables are either predictors or responses. Skrondal and Laake focused on blockwise

FSR, a procedure that involves conducting separate CFAs on predictor variables and response

variables, then scoring these using either regression scores or Bartlett scores. Skrondal and Laake

compared two blockwise FSR methods: (1) conventional blockwise FSR; and (2) revised

blockwise FSR.

Conventional blockwise FSR begins by estimating a CFA on all predictor latent variables

and a separate CFA on all latent response variables. Empirical factor scores are computed using

either the regression or Bartlett method. After this, regression parameters are estimated using

factor scores in place of latent variables. 𝚪�� represents sample estimates of the population

parameter 𝚪 when regression scores are used in conventional blockwise FSR and 𝚪�� represents

sample estimates of the population parameter 𝚪 when Bartlett scores are used in conventional

blockwise FSR.

Skrondal and Laake proved that the probability limit of 𝚪�� was not equal to 𝚪 in the

population when all latent factors were scored using the regression method and results follow

similarly in terms of 𝚪�� when all latent factors are scored using the Bartlett method. Specifically,

they proved regression slopes are consistently underestimated when this method is employed.


Given that conventional blockwise FSR led to inconsistent estimates of 𝚪 asymptotically,

Skrondal and Laake examined the asymptotic performance of revised blockwise FSR, by scoring

all predictors using regression scores and all response variables using Bartlett scores. As in

conventional blockwise FSR, factor scores are then used in a regression model. Here they used

𝚪̂_BR to represent parameter estimates obtained from revised blockwise FSR. Regarding the consistency of revised blockwise FSR, they proved that the probability limit of 𝚪̂_BR was equal to

𝚪. Thus, revised blockwise FSR leads to consistent estimates of 𝚪. In addition, they

asymptotically examined standard errors and 𝑅2 values and found that the probability limit of

these estimators equaled the true population values.

In order to test whether these results generalize to finite samples, Skrondal and Laake

included a brief simulation study assessing relative bias. They varied sample size, number of

items per factor, and coefficients of determination of items. Findings suggested conventional

blockwise FSR consistently underestimated elements of 𝚪 unless the true relationship between

latent variables was zero. Revised blockwise FSR provided model parameters consistent with

results of full-information maximum likelihood (FIML), and produced unbiased estimates.

Further, standard errors and coefficients of determination were also unbiased when using the

proposed estimators. Given these findings, the authors especially cautioned against the use of

conventional blockwise factor scoring for FSR.

Overall, this paper showed conventional blockwise FSR performs poorly and leads to

biased estimates, while revised blockwise FSR provides a consistent means of reproducing

relationships between latent variables, performs better than conventional FSR, and performs

similarly to FIML. These results thus support clear recommendations on how to use factor scores in a

subsequent regression model.
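A minimal sketch of revised blockwise FSR for a single latent predictor and a single latent response, under the hypothetical item names used in earlier sketches:

fit_x <- cfa('xi  =~ x1 + x2 + x3 + x4 + x5', data = dat)
fit_y <- cfa('eta =~ y1 + y2 + y3 + y4 + y5', data = dat)

f_xi  <- lavPredict(fit_x, method = "regression")[, "xi"]  # predictor block
f_eta <- lavPredict(fit_y, method = "Bartlett")[, "eta"]   # response block

coef(lm(f_eta ~ f_xi))  # structural slope estimated from the scores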


Factor Scores in Latent Mediation Models

Skrondal and Laake (2001) demonstrated how to score latent variable models in which all

latent variables serve as either a predictor or a response in a given model. More recent literature

is mixed on whether it is possible to extend their work into scoring mediators in a latent

mediation model, as these simultaneously serve as both predictors and response variables. Some

argue this is an insurmountable limitation and suggest scoring mediation models is not possible

unless alternative methods are considered (Devlieger, Mayer, & Rosseel, 2016). This argument is

founded on the fact that Skrondal and Laake’s recommendations require that latent variables are

scored based on their placement in the model, or rather, whether they serve as independent

variables or dependent variables. Thus, if a latent variable is both an independent variable and a

dependent variable, there is no clear scoring option that does not result in biased parameter

estimates.

In spite of this apparent limitation, other researchers have suggested possible ways to

follow Skrondal and Laake’s recommendations within the context of mediation. For example,

some have suggested estimating a latent mediation model as a series of regression models (Lu,

Kwan, Thomas, & Cedzynski, 2011). Using this framework, mediators are scored using

regression scores for parts of the model in which they serve as predictors and Bartlett scores for

parts of the model in which they serve as responses. Thus, different scoring methods are used for

the same latent variable depending on its structural placement. While this method adheres most

closely to the recommendations of Skrondal and Laake, it does not allow simultaneous

estimation of all aspects of the structural model. Further, this method does not provide clear

guidelines on how to compute total and specific indirect effects and their corresponding

confidence intervals, which are often of primary interest in mediation models.
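A sketch of this series-of-regressions idea, continuing the hypothetical scores f_xi and f_eta from the blockwise FSR sketch above: the mediator is scored twice, with Bartlett scores where it serves as a response and regression scores where it serves as a predictor.

fit_m <- cfa('eta1 =~ m1 + m2 + m3 + m4 + m5', data = dat)

m_bartlett <- lavPredict(fit_m, method = "Bartlett")[, "eta1"]    # mediator as outcome
m_regress  <- lavPredict(fit_m, method = "regression")[, "eta1"]  # mediator as predictor

a_hat <- coef(lm(m_bartlett ~ f_xi))["f_xi"]              # X -> M path
b_hat <- coef(lm(f_eta ~ m_regress + f_xi))["m_regress"]  # M -> Y path

a_hat * b_hat  # an indirect-effect estimate; note the caveats above about
               # confidence intervals and non-simultaneous estimation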


There are some additional alternative approaches to scoring latent mediation models

outside of the Skrondal and Laake framework. One option is to score all latent variables using a

constrained least squares estimator (McDonald, 1981). This method produces factor scores with

covariances equal to those obtained from the SEM, but requires that the full SEM is estimable in

order for scores to be computed. As previously discussed, this is often not possible. In fact, in

many instances factor scores are used in place of a full SEM because the measurement and

structural components of the model cannot be simultaneously estimated. Another solution

introduced by Croon (2002) involves estimating each latent variable using separate CFAs,

scoring each variable using either the regression or Bartlett method, then correcting the factor

score covariance matrix using a closed-form expression. The corrected covariance matrix is then

used to estimate parameters of a subsequent path model. Research has shown this method leads

to unbiased parameter estimates over a variety of conditions, and thus it has been highly

recommended (Devlieger et al., 2016; Devlieger & Rosseel, 2017; Lu et al., 2011; Lu & Thomas,

2008). In many situations, Croon’s correction provides a simple and consistent technique to use

scores in latent mediation models. This method, however, only works when a covariance matrix

is a sufficient statistic for estimating a model; that is, when data are complete, continuous,

normal, and there are no influential outliers. Thus, there are many instances in which this method

is not applicable and other options must be considered.

In spite of the many difficulties associated with using factor scores in latent mediation

models, research continues to use these scores to test for the presence of mediation effects. For

example, Corwin et al. (2018) used factor scores to test for the presence of the mediating effect

of project ownership on learning outcomes in the context of course-based research experiences

for undergraduates. In both this paper and the supplemental materials, it is clear that all latent


variables (independent variables, mediator, and dependent variables) were scored using the same

type of factor score, but it is unclear which specific factor score was utilized. Further, the

justification for the use of factor scores made no reference to the work of Skrondal and Laake

and there was no discussion of potential bias in parameter estimates due to paths in which both

the predictor and outcome were scored with the same type of factor score. A number of

additional articles also suggest that when factor scores are used in mediation models, it is

common to compute only one type of factor score for all latent variables, regardless of their

placement in subsequent models (Brewster, Moradi, DeBlaere, & Velez, 2013; Cox, Enns, &

Taylor, 2001; Damian & Spengler, 2020). These examples indicate that the work of Skrondal and

Laake has been infrequently applied to mediation models, either due to quantitative complexities

of the article itself, or to the inherent difficulty associated with scoring mediators. Thus, given

the frequent use of factor scores in mediation models, it is important to provide clear

recommendations to researchers on how to appropriately select the type of factor score in

mediation models to obtain unbiased parameter estimates, or to minimize bias in parameter

estimates for all practical purposes.

My project approaches the problem of estimating latent mediation models from a

measurement and scoring perspective, with the goal of augmenting known methods. I am

interested in evaluating scoring procedures when simultaneous estimation of the measurement

and structural model is not possible, or leads to highly unreliable estimates. Thus, I formulate

recommendations that directly apply to research centered around computing scores from raw

data, obtaining a data matrix of scores that can be used to observe patterns or locate outliers,

and using this data matrix of scores to estimate a subsequent model. I build off of the framework


established in Skrondal and Laake (2001) while also considering mean scores, due to their

widespread use in the social and behavioral sciences.

Next, I synthesize this motivation into testable hypotheses that build off of previous research

to enhance current understanding on the use of factor scores in subsequent latent mediation

models. The hypotheses will examine different ways of scoring mediators in a model where all

latent variables are replaced with factor scores. Further, a number of conditions will be varied to

enhance generalizability of these findings. The primary goal of testing these hypotheses is to

provide researchers with clear recommendations on how to use a data matrix of factor scores to

test mediational effects. These hypotheses are:

1. When the regression method is used to score a mediator (and the independent variable

and dependent variable are scored in alignment with Skrondal and Laake’s

recommendations), the path from the independent variable to the mediator will be biased,

leading to bias in the indirect effect. Specifically, it will be underestimated and this will

be more pronounced with fewer items per latent factor, smaller communalities, and in

smaller sample sizes.

2. When the Bartlett method is used to score a mediator (and the independent variable and

dependent variable are scored in alignment with Skrondal and Laake’s

recommendations), the path from the mediator to the dependent variable will be biased,

leading to bias in the indirect effect. Specifically, it will be underestimated and this will

be more pronounced with fewer items per latent factor, smaller communalities, and in

smaller sample sizes.


3. When all latent factors in a mediation model are scored using mean scores, all components of the indirect effect will be biased, leading to bias in the indirect effect.

Bias will increase as the measurement model less closely resembles that imposed by

mean scores, specifically when loadings are heterogeneous and communalities are low.

4. When deciding between regression and Bartlett scores for a mediator, bias can be

minimized by selecting scores such that the biased path also corresponds to the smaller

effect. That is, if the path from the independent variable to the mediator has a smaller

effect than the path from the mediator to the dependent variable, the mediator should be

scored using Bartlett scores.

In the next section, I outline how I systematically tested the above four hypotheses using

a simulation design. I specify details on model conditions, data generation, and outcomes. The

primary goal of testing these hypotheses was to provide researchers with clear recommendations

on how to use a data matrix of factor scores to evaluate a path model that involves latent

mediators.


2 METHODS

In this section I outline how I simulated data and computed relevant outcomes to evaluate

my hypotheses. I first introduce the structure of the population generating model (Figure 3), which leads to a discussion of data generation (Figure 4) and model conditions (Table 2). Next, I

discuss how factor scores were used to fit mediation models and obtain parameter estimates.

Finally, I provide details on how outcomes were computed and how meta-models were used to

analyze results to empirically test my hypotheses. All data simulation and analyses were

conducted in R, using lavaan for model fitting (Rosseel, 2012) and ggplot2 for graphics (Wickham, 2016). Meta-models were fit using the rstatix package in R (Kassambara, 2021)

to assess continuous outcomes, and PROC GLIMMIX in SAS to assess binary outcomes.

2.1 Population Generating Model

The model I examined included one independent variable, one mediating variable, and

one dependent variable. Skrondal and Laake showed that their results extended to cases with

more than one independent variable, so my results should also extend to such a scenario. The

purpose of including only one mediator is to test my hypotheses within the most basic mediation

model. Lastly, since multivariate analysis is not of central focus to my line of inquiry, my

population generating model included only one dependent variable. This model is depicted in

Figure 3.


Figure 3. Population generating model structure

a) True Score Model

b) Multiple Indicator Model


2.2 Data Generation

First, data generation was split into three different models based on the three effect size

conditions. In condition (a), model parameters were 𝛾11 = 0.300, 𝛽21 = 0.300, 𝛾12 = 0.050,

and 𝛾11 ∗ 𝛽21 = 0.090 leading to 𝑅2 = 0.083 for the regression of 𝜂1 on 𝜉1 and 𝑅2 = 0.099 for

the regression of 𝜂2 on 𝜉1 and 𝜂1. In condition (b), model parameters were 𝛾11 = 0.200,

𝛽21 = 0.450, 𝛾12 = 0.050, and 𝛾11 ∗ 𝛽21 = 0.090 leading to 𝑅2 = 0.038 for the regression of

𝜂1 on 𝜉1 and 𝑅2 = 0.182 for the regression of 𝜂2 on 𝜉1 and 𝜂1. Finally, in condition (c), model

parameters were 𝛾11 = 0.450, 𝛽21 = 0.200, 𝛾12 = 0.050, and 𝛾11 ∗ 𝛽21 = 0.090 leading to

𝑅2 = 0.168 for the regression of 𝜂1 on 𝜉1 and 𝑅2 = 0.056 for the regression of 𝜂2 on 𝜉1 and 𝜂1.

Due to the data generating process discussed below, these paths were very near their

standardized values. In condition (a), standardized parameters were 𝛾11 = 0.287, 𝛽21 = 0.297,

𝛾12 = 0.047, and 𝛾11 ∗ 𝛽21 = 0.085. In condition (b), standardized parameters were

𝛾11 = 0.196, 𝛽21 = 0.415, 𝛾12 = 0.045, and 𝛾11 ∗ 𝛽21 = 0.081. Finally, in condition (c),

standardized parameters were 𝛾11 = 0.410, 𝛽21 = 0.213, 𝛾12 = 0.049, and 𝛾11 ∗ 𝛽21 = 0.087.

Population generating values were based on results obtained in Corwin et al. (2018). Values

were also chosen so as to systematically vary effect size in the components of the indirect effect

while maintaining a consistent value of the indirect effect to best address hypothesis four.

Within each of these conditions, true scores were simulated for 𝜉1, 𝜂1, and 𝜂2, for each of

the sample size conditions: N = 40 and N = 80. Only small samples were considered in order to

evaluate the use of factor scores in scenarios in which a full SEM cannot be simultaneously

estimated, or leads to highly variable results. This will be further demonstrated in the following

section. True scores for 𝜉1 were drawn from a standard normal distribution. In condition (a) 𝜂1

had a variance of 1.09 and 𝜂2 had a variance of 1.11; in condition (b) 𝜂1 had a variance of 1.04


and 𝜂2 had a variance of 1.22; and in condition (c) 𝜂1 had a variance of 1.20 and 𝜂2 had a

variance of 1.06. These variances, as well as population generating covariances among true scores, were derived using formulas from MacKinnon (2008, pp. 86-88).

After true scores were generated, these were used to compute 12 observed indicators per

latent factor using Equations (6) and (7) in all three effect size conditions and in sample sizes of

either N = 40 or N = 80. Indicators were drawn from a normal distribution with a mean of zero

and variance equal to 1 − 𝜆²,¹ where 𝜆 is the population factor loading for the item. These 12

indicators either had loadings all set to 0.8 on their respective latent factors (with no cross-

loadings) leading to communalities of 0.64, or were drawn in triplets of 0.4, 0.6, and 0.8, (again,

with no cross-loadings) leading to communalities of 0.16, 0.36, and 0.64 respectively. This entire

procedure led to datasets with 12 items per factor with loadings of either all 0.8, or triplets of 0.4,

0.6, and 0.8. I simulated 500 iterations per combination of the aforementioned conditions leading

to a total of 6000 datasets of items. Finally, I defined a 6-item and 12-item measurement model.

Again, these values were based on constructs considered in Corwin et al. (2018). To minimize

the total number of items simulated, the 6-item model used the first six simulated items per latent

factor and the 12-item model used all simulated items. In the 6-item per factor model with varying loadings, loadings were 0.4 for two items, 0.6 for two items, and 0.8 for two items. Given the near standardization of population path parameters and the fact that all items were drawn from a

normal distribution with a mean of zero, latent variable means and item intercepts were not

included in analyses. Table 2 outlines a summary of simulation conditions.

¹ Item error variance was not adjusted to account for variance in the endogenous latent variables. Given these had variances slightly larger than one, standardized factor loadings of the endogenous latent variables were only slightly larger than specified population generating values (maximally by 0.02). Large sample simulations confirmed these slight differences had no meaningful impact on raw or standardized path estimates.
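To make this procedure concrete, the following minimal R sketch generates one replication under condition (a) with N = 40, six items per factor, and all loadings set to 0.8. The variable names are illustrative and this is not the original simulation code; note that unit-variance disturbances reproduce the stated true score variances of 1.09 and 1.11 for 𝜂1 and 𝜂2.

```r
# Minimal sketch of one replication: condition (a), N = 40, 6 items per
# factor, all loadings 0.8. Illustrative only; not the original code.
set.seed(123)
n <- 40
g11 <- 0.30; b21 <- 0.30; g12 <- 0.05

xi1  <- rnorm(n)                            # true scores for xi1
eta1 <- g11 * xi1 + rnorm(n)                # var(eta1) = .3^2 + 1 = 1.09
eta2 <- g12 * xi1 + b21 * eta1 + rnorm(n)   # var(eta2) ~= 1.11

# Each indicator: loading * true score + normal error with variance 1 - lambda^2
make_items <- function(f, lambdas) {
  sapply(lambdas, function(l) l * f + rnorm(length(f), sd = sqrt(1 - l^2)))
}
lams  <- rep(0.8, 6)
items <- data.frame(make_items(xi1, lams), make_items(eta1, lams),
                    make_items(eta2, lams))
names(items) <- c(paste0("x", 1:6), paste0("m", 1:6), paste0("y", 1:6))
```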


Table 2. Design matrix of simulation conditions (all conditions fully crossed)

| Paths | Sample size | Factor loadings | Items per factor |
|---|---|---|---|
| 𝛾11 = 0.30, 𝛽21 = 0.30, 𝛾12 = 0.05, 𝛾11 ∗ 𝛽21 = 0.09 | N = 40; N = 80 | All .8; triplets of .4, .6, .8 | 12; 6 |
| 𝛾11 = 0.20, 𝛽21 = 0.45, 𝛾12 = 0.05, 𝛾11 ∗ 𝛽21 = 0.09 | N = 40; N = 80 | All .8; triplets of .4, .6, .8 | 12; 6 |
| 𝛾11 = 0.45, 𝛽21 = 0.20, 𝛾12 = 0.05, 𝛾11 ∗ 𝛽21 = 0.09 | N = 40; N = 80 | All .8; triplets of .4, .6, .8 | 12; 6 |


2.3 Factor Scores and Parameter Estimates

After all items were generated, these were used to compute factor scores. I fit separate

CFAs for the independent variable, mediator, and dependent variable. I computed regression and

mean scores for the independent variables; regression, Bartlett, and mean scores for the

mediator; and Bartlett and mean scores for the dependent variable, within each simulated dataset.

Again, there were 500 iterations per combination of simulation conditions. This process yielded

a total of 12,000 datasets of factor scores.
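As a hedged illustration of this scoring step, the sketch below scores the mediator from the `items` data frame of the earlier sketch using lavaan's lavPredict(), which supports both the regression and Bartlett methods; the mean score is simply the row mean of the mediator's items.

```r
# Sketch of the scoring step (assumes `items` from the earlier sketch).
library(lavaan)
cfa_m <- cfa('M =~ m1 + m2 + m3 + m4 + m5 + m6', data = items)

med_reg  <- as.numeric(lavPredict(cfa_m, method = "regression"))  # regression scores
med_bart <- as.numeric(lavPredict(cfa_m, method = "Bartlett"))    # Bartlett scores
med_mean <- rowMeans(items[, paste0("m", 1:6)])                   # mean scores
```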

After computing factor scores in all simulation conditions, these scores were used to

estimate a single-mediator path analysis using FIML. Each dataset containing factor scores was

used to estimate three separate path models: (1) regression → regression → Bartlett; (2)

regression → Bartlett → Bartlett; and (3) mean → mean → mean. The first two models were

selected in order to examine the difference between scoring mediators using regression or

Bartlett scores within the framework of Skrondal and Laake. That is, independent variables were

always scored using regression scores and dependent variables were always scored using Bartlett

scores to ensure that any observed bias was due to the choice of score for the mediator and not

the independent and dependent variables. The mean score model utilized mean scores for all

three latent variables, as this is how mean scores are typically used in practice.

All parameter estimates, standard errors, and confidence intervals from each scoring

model were retained in a total of 36,000 datasets. Standard errors were obtained using the bias-

corrected and accelerated bootstrap method and these were used to compute confidence intervals.

In order to address the fact that mean scores exist on a different scaling metric than regression

and Bartlett scores, standardized parameter estimates were computed for all models fit using

mean scores.
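A minimal sketch of this model-fitting step is given below, assuming hypothetical objects iv_reg and dv_bart hold regression scores for the independent variable and Bartlett scores for the dependent variable, computed analogously to the mediator scores above. One caveat: lavaan's built-in boot.ci.type = "bca.simple" intervals are bias-corrected but not accelerated, so fully BCa intervals would require, for example, bootstrapLavaan() together with the boot package.

```r
# Sketch of one scoring model (regression -> Bartlett -> Bartlett);
# iv_reg and dv_bart are assumed to be computed as shown for the mediator.
library(lavaan)
scores <- data.frame(X = iv_reg, M = med_bart, Y = dv_bart)

path_model <- '
  M ~ a * X
  Y ~ b * M + cp * X
  ind := a * b            # indirect effect
'
fit <- sem(path_model, data = scores, missing = "fiml",
           se = "bootstrap", bootstrap = 1000)

# Bias-corrected (not accelerated) bootstrap confidence intervals
parameterEstimates(fit, boot.ci.type = "bca.simple", level = 0.95)
```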


2.4 Outcomes

Finally, after obtaining parameter estimates from the final mediation models within all

conditions, outcomes were computed. I computed relative bias of 𝛾11, 𝛽21, and 𝛾11 ∗ 𝛽21, mean

squared error (MSE) of 𝛾11, 𝛽21, and 𝛾11 ∗ 𝛽21, and coverage of the confidence interval of the

indirect effect.

Relative bias can be expressed as:

$$\mathrm{rBias} = \frac{\sum_{j=1}^{n_r}\left(\dfrac{\hat{\theta}_{ij}-\theta_i}{\theta_i}\times 100\right)}{n_r} \tag{24}$$

where $\hat{\theta}_{ij}$ is the $j$th sample estimate of the $i$th true population generating value $\theta_i$ and $n_r$ is the number of repetitions per cell, or 500 in all simulation conditions. The numerator of this equation was used directly as an outcome in my meta-models, where cell-level means represented the value presented in Equation 24. For the mediation models using regression and Bartlett scores, the true population generating value was the raw value (which differed across effect size conditions), and for the mediation models using mean scores, the true population generating value was the standardized value.

In addition to relative bias, MSE was computed to assess parameter efficiency. This can be expressed with the following formula:

$$\mathrm{MSE} = \frac{\sum_{j=1}^{n_r}\left(\hat{\theta}_{ij}-\theta_i\right)^2}{n_r} \tag{25}$$

In order to test MSE in the context of a GLMM, I also computed squared error (SE) for each repetition as $(\hat{\theta}_{ij}-\theta_i)^2$. Thus, cell-level averages represented the mean of this value, namely MSE. Again,

outcomes associated with the mean score model were computed using standardized population

generating values.


Lastly, coverage of the bootstrapped confidence interval of the indirect effect was

determined in all scoring conditions (using the standardized indirect effect in the mean score

model). This was coded as a binary variable where a one indicated the confidence interval

contained the true population parameter and a zero indicated the confidence interval did not

contain the true population parameter.
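As a small worked sketch of these computations for one repetition (hypothetical inputs est, ci_lower, and ci_upper for the indirect-effect estimate and its bootstrap interval, and theta for the true population value):

```r
# Per-repetition outcomes; cell-level means of rbias_j and sqerror_j
# give relative bias (Equation 24) and MSE (Equation 25).
rbias_j   <- (est - theta) / theta * 100
sqerror_j <- (est - theta)^2
covered_j <- as.integer(ci_lower <= theta & theta <= ci_upper)  # coverage
```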

To test my hypotheses and examine parameter estimates and outcomes in all scoring and

simulation conditions, I used extensive graphics and visualizations. In addition, I used meta-

models to further examine differences in relative bias, MSE, and coverage of the indirect effect.

Due to nesting of some conditions within simulated datasets, the generalized linear mixed model (GLMM) and multilevel model (MLM) were used to assess relative bias, MSE, and coverage

across scoring and simulation conditions. Specifically, since all score types were computed for

each simulated dataset and number of items was selected to be either six of the 12 items or all 12

items, these were treated as within-dataset conditions.

Given the large sample size associated with this simulation study, I focused on measures

of effect size, rather than p-values, in determining the impact of my simulation conditions on

differences in relative bias, MSE, and coverage within the context of my final meta-models. As

nesting was present in the data, standard measures of effect size such as η² (eta-squared) and ηp² (partial eta-squared) that do not account for within-condition effects were not utilized, as these produce estimates that cannot be accurately compared across between- and within-condition effects. Instead, for relative bias and MSE, effect size was assessed using ηG² (generalized eta-squared), which is recommended in mixed designs (Bakeman, 2005; Olejnik & Algina, 2003).

Given coverage was coded as a binary outcome, I relied on odds ratios to capture effect size in

this model.
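For the continuous outcomes, a sketch of such a meta-model using rstatix is shown below; the column names are hypothetical, but anova_test() and its effect.size = "ges" option are part of the package.

```r
# Sketch of a mixed-design meta-model for relative bias with generalized
# eta-squared; column names are hypothetical.
library(rstatix)
meta <- anova_test(
  data        = cell_results,             # one row per repetition x condition
  dv          = rbias,                    # e.g., relative bias outcome
  wid         = dataset_id,               # simulated-dataset identifier
  between     = c(n, loadings, paths),    # between-dataset conditions
  within      = c(score_type, n_items),   # within-dataset conditions
  effect.size = "ges"                     # generalized eta-squared
)
```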


A review of the entire data-generation process from simulation of true scores to

computing outcome measures is outlined in Figure 4. In sum, true scores were simulated based

on a population generating mediation model, items were simulated based on true score estimates,

items were then used in CFAs to compute factor scores, factor scores were used to fit mediation

models, and parameter estimates from these mediation models were used to compute relative

bias, MSE, and coverage. This process contained numerous check-points and error traps to

ensure data were generated as anticipated, and these are outlined in detail in the next section. In

addition, the following section contains numerous tables and figures of results to empirically

evaluate my four primary hypotheses and assess the general use of factor scores in place of

simultaneous SEMs to test mediation effects.


Figure 4. Summary of data generation and model fitting


3 RESULTS

This section contains an in-depth presentation of all relevant results. First, I present a

detailed review of the data validation procedures, or rather, all additional analyses conducted to

certify data were correctly simulated and models were correctly fit. Next, I demonstrate how my

specific simulation conditions represented scenarios in which it is necessary to use scores to

examine mediation effects as opposed to natively modeling multiple indicator latent factors.

Finally, I extensively discuss relative bias, MSE, and coverage of the indirect effect using tables

and graphics.

3.1 Data Validation

I performed a number of checks at each level of data generation and model fitting in

order to ensure data were correctly simulated. These involved examinations of true scores, items,

and factor scores in all simulation conditions.

First, before simulating true scores within my specific conditions, I performed a large

sample true score analysis in the three effect size conditions. I simulated N = 500,000 true scores

based on the three population generating path models and fit a true score SEM to each. In all

conditions, raw and standardized parameter estimates and 𝑅2 values were perfectly retained to at

least the third decimal place. Next, before simulating items in specific model conditions, I

performed a large sample item analysis in the three effect size conditions. In this analysis, I

simulated 12 items per latent factor, each with a factor loading of .8, using a sample size of N =

500,000. I then fit a simultaneous measurement and structural model to these items. Again, raw

and standardized model parameters and 𝑅2 values were reproduced to at least the third decimal

place. In addition, in the item level analysis, raw factor loadings² matched their population

² Standardized factor loadings were slightly higher for the mediator and dependent variable due to small differences in true score variances, as anticipated, but standardized effects were not impacted by these slight differences.


generating values. This suggested the process of simulating true scores and then drawing items

based on these true scores was carried out correctly.

Next, I fit single factor CFAs separately on the independent variable, mediator, and

dependent variable within all simulated datasets of items. I then used these CFAs to obtain factor

score estimates. For the independent variable I computed regression scores and mean scores, for

the mediator I computed regression scores, Bartlett scores, and mean scores, and for the

dependent variable I computed Bartlett scores and mean scores. I also examined factor loadings

to inspect overall variability. Generally, average raw estimates of factor loadings resembled the

true population generating values. Standard deviations of sampling distributions of factor

loadings were around 0.14 in the smaller sample size condition (N = 40) and around 0.10 in the

larger sample size condition (N = 80). Taken together, this suggested that the process used to

generate items worked as anticipated within my simulation conditions, with variability in

estimates of factor loadings due to small sample sizes.

When conducting CFAs to obtain factor score estimates, a small number of solutions

resulted in Heywood cases, that is, the model produced variance estimates that were negative.

Any set of items that resulted in a Heywood case when fitting either the independent variable

CFA, mediator CFA, or dependent variable CFA were removed from subsequent analyses.

Heywood cases were located in a total of four out of 24 design cells, and did not exceed 5% in

any single cell.
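A simple trap of this kind can be written against lavaan's parameter table, as in the hedged sketch below (one possible implementation, not necessarily the one used here):

```r
# Flag a Heywood case: any negative residual or latent variance estimate.
pe <- parameterEstimates(cfa_m)
heywood <- any(pe$est[pe$op == "~~" & pe$lhs == pe$rhs] < 0)
if (heywood) message("Heywood case: replication excluded from analyses")
```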

Finally, once all factor score estimates were obtained, these were correlated with each

other and with true scores. Overall, correlations were large, ranging from 0.88 to 1.00. As

expected, correlations were larger with larger sample sizes, item communalities, and number of

items, and there were no substantial differences across effect size conditions. Of note, since


regression and Bartlett scores were computed for the mediator in each model, these could be correlated with each other. This was consistently among the strongest relationships observed, with a correlation of 1.00 across all conditions. Reasoning for this

relationship will be provided in future sections. Interestingly, when factor loadings were all set to

0.8, mean scores were more strongly related to true scores than the other factor scores, though

this difference was never more than .02. When factor loadings were drawn in triplets of 0.4, 0.6, 0.8, the correlations of regression and Bartlett scores with true scores and of mean scores with true scores were more similar, with some conditions favoring regression and Bartlett scores. Again, differences were no larger than .02.

After examining these correlations, I determined that data had been properly and

diligently simulated. Next, I compared distributions of parameter estimates in a subset of my

model conditions to those computed from fitting a full SEM to the data, when simultaneous

estimation was possible.

3.2 Comparing Factor Scores to Latent Variable Models

In order to demonstrate that my simulation conditions accurately represented scenarios in

which it was not possible to simultaneously estimate the structural and measurement components

of a model, or in which doing so led to highly variable results, I attempted to fit a full SEM to a subset of my

conditions and compare sampling distributions of parameters estimates to those obtained from

models estimated using factor scores. Models were only fit to the first effect size condition

(𝛾11 = 0.30, 𝛽21 = 0.30, 𝛾12 = 0.05, 𝛾11 ∗ 𝛽21 = 0.09) as the full SEM estimation was

computationally time-intensive, but results for the other effect size conditions are expected to be

similar. Table 3 shows means and standard deviations of sampling distributions of parameter


estimates across simulation conditions and demonstrates that a number of cells did not allow for

estimation of multiple indicator latent factors, necessitating factor scores.
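For reference, a minimal lavaan sketch of the full SEM attempted in these comparisons might look as follows (item names follow the earlier data-generation sketch and are illustrative):

```r
# Sketch of the simultaneous measurement + structural (full SEM) model.
full_model <- '
  Xi   =~ x1 + x2 + x3 + x4 + x5 + x6
  Eta1 =~ m1 + m2 + m3 + m4 + m5 + m6
  Eta2 =~ y1 + y2 + y3 + y4 + y5 + y6
  Eta1 ~ a * Xi
  Eta2 ~ b * Eta1 + cp * Xi
  ind := a * b
'
full_fit <- sem(full_model, data = items)
lavInspect(full_fit, "converged")   # FALSE signals estimation failure
```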

Table 3. Parameter estimates comparing factor score models to full SEM

Raw population generating paths: 𝛾11 = 0.300, 𝛽21 = 0.300, 𝛾12 = 0.050, 𝛾11 ∗ 𝛽21 = 0.090
Standardized population generating paths: 𝛾11 = 0.287, 𝛽21 = 0.297, 𝛾12 = 0.047, 𝛾11 ∗ 𝛽21 = 0.085

Mean (SD) of parameter estimates by mediator score type (R = regression, B = Bartlett, M = mean):

| N | Factor loadings | Items per factor | Parameter | R | B | M | Full SEM |
|---|---|---|---|---|---|---|---|
| 40 | All .8 | 12 | 𝛾11 | 0.26 (0.14) | 0.27 (0.15) | 0.26 (0.14) | NA* |
| 40 | All .8 | 12 | 𝛽21 | 0.29 (0.16) | 0.28 (0.15) | 0.28 (0.15) | NA* |
| 40 | All .8 | 12 | 𝛾11 ∗ 𝛽21 | 0.08 (0.07) | 0.08 (0.07) | 0.07 (0.06) | NA* |
| 40 | All .8 | 6 | 𝛾11 | 0.25 (0.14) | 0.27 (0.15) | 0.25 (0.14) | 0.30 (0.18) |
| 40 | All .8 | 6 | 𝛽21 | 0.29 (0.16) | 0.27 (0.15) | 0.27 (0.15) | 0.31 (0.19) |
| 40 | All .8 | 6 | 𝛾11 ∗ 𝛽21 | 0.07 (0.07) | 0.07 (0.07) | 0.07 (0.06) | 0.10 (0.09) |
| 40 | Triplets .4/.6/.8 | 12 | 𝛾11 | 0.25 (0.15) | 0.26 (0.16) | 0.24 (0.15) | NA* |
| 40 | Triplets .4/.6/.8 | 12 | 𝛽21 | 0.29 (0.17) | 0.27 (0.15) | 0.26 (0.15) | NA* |
| 40 | Triplets .4/.6/.8 | 12 | 𝛾11 ∗ 𝛽21 | 0.07 (0.07) | 0.07 (0.07) | 0.06 (0.06) | NA* |
| 40 | Triplets .4/.6/.8 | 6 | 𝛾11 | 0.22 (0.15) | 0.26 (0.17) | 0.21 (0.15) | 0.30** (0.21) |
| 40 | Triplets .4/.6/.8 | 6 | 𝛽21 | 0.28 (0.18) | 0.25 (0.15) | 0.23 (0.15) | 0.33** (0.24) |
| 40 | Triplets .4/.6/.8 | 6 | 𝛾11 ∗ 𝛽21 | 0.06 (0.07) | 0.06 (0.07) | 0.05 (0.05) | 0.10** (0.13) |
| 80 | All .8 | 12 | 𝛾11 | 0.27 (0.10) | 0.29 (0.10) | 0.27 (0.10) | 0.30 (0.12) |
| 80 | All .8 | 12 | 𝛽21 | 0.29 (0.11) | 0.28 (0.11) | 0.28 (0.11) | 0.30 (0.13) |
| 80 | All .8 | 12 | 𝛾11 ∗ 𝛽21 | 0.08 (0.04) | 0.08 (0.04) | 0.08 (0.04) | 0.09 (0.05) |
| 80 | All .8 | 6 | 𝛾11 | 0.26 (0.10) | 0.29 (0.11) | 0.26 (0.10) | 0.31 (0.12) |
| 80 | All .8 | 6 | 𝛽21 | 0.29 (0.11) | 0.26 (0.10) | 0.26 (0.10) | 0.30 (0.13) |
| 80 | All .8 | 6 | 𝛾11 ∗ 𝛽21 | 0.08 (0.04) | 0.08 (0.04) | 0.07 (0.04) | 0.09 (0.05) |
| 80 | Triplets .4/.6/.8 | 12 | 𝛾11 | 0.26 (0.10) | 0.28 (0.11) | 0.25 (0.10) | 0.31 (0.13) |
| 80 | Triplets .4/.6/.8 | 12 | 𝛽21 | 0.29 (0.12) | 0.26 (0.11) | 0.26 (0.11) | 0.30 (0.13) |
| 80 | Triplets .4/.6/.8 | 12 | 𝛾11 ∗ 𝛽21 | 0.08 (0.04) | 0.08 (0.04) | 0.07 (0.04) | 0.09 (0.06) |
| 80 | Triplets .4/.6/.8 | 6 | 𝛾11 | 0.24 (0.10) | 0.28 (0.12) | 0.23 (0.10) | 0.31 (0.14) |
| 80 | Triplets .4/.6/.8 | 6 | 𝛽21 | 0.28 (0.12) | 0.24 (0.10) | 0.23 (0.10) | 0.30 (0.15) |
| 80 | Triplets .4/.6/.8 | 6 | 𝛾11 ∗ 𝛽21 | 0.07 (0.04) | 0.07 (0.04) | 0.05 (0.03) | 0.09 (0.06) |

*Full SEM was not identified in these conditions
**2 out of 500 repetitions were not identified in this condition

Table 3 demonstrates that with a sample size of 40 and 12 items per factor, it was not

possible to simultaneously estimate the structural and measurement components of the mediation

model, regardless of factor loadings, due to underidentification; however, parameter estimates

could be obtained when using factor scores. In the condition with N = 40, six items per factor,

and variable factor loadings there were a number of issues with model estimation. First, two of

the 500 models did not properly converge due to empirical underidentification. While this is


nearly negligible, all parameter estimates were highly variable in the 498 models that did

properly converge. Standard errors of the sampling distribution of point estimates were most

markedly reduced when utilizing factor scores compared to the full SEM in this specific

condition. While the rest of the simulation conditions did converge across all 500 repetitions in

the full SEM condition, there was still substantial variability in parameter estimates. Some

examples are presented in Figure 5. Note, the x-axis is rescaled across all three example

conditions to better observe distributional properties.


Figure 5. Sampling distributions of indirect effect from full SEM

N = 40, 6 items per factor, variable factor loadings

N = 80, 12 items per factor, factor loadings set to 0.8

N = 80, 6 items per factor, triplets of factor loadings

Lastly, it is also important to note that Table 3 demonstrates that bias generally followed what

was expected based on results of Skrondal and Laake (2001). That is, average path estimates

across replications were farther from their true population generating values when the same type

of factor score was used for both predictor and outcome, and this bias was reduced in paths using


regression scores for the predictor and Bartlett scores for the outcome. Figures 10-17 in

Appendix A provide an in-depth evaluation of relative bias and squared error of the components

of the indirect effect and further demonstrate similarities to what was found by Skrondal and

Laake.

Next, I moved to a thorough investigation of the indirect effect, to evaluate how bias and

inefficiency in 𝛾11 and 𝛽21 contributed to bias and inefficiency in the indirect effect. Further, I

examined differences in coverage of the confidence interval of the indirect effect to determine how well

these models could detect the presence of a mediated effect.

3.3 The Indirect Effect

The primary purpose of this study was to evaluate how scoring a mediator in a latent

mediation model impacts bias, MSE, and coverage of the indirect effect. This section focuses on

evaluating these outcomes using extensive graphics to describe differences across all simulation

conditions. While meta-models were also conducted to accompany graphics, these will not be the

primary focus. Due to extensive within-cell variability, estimates of effect size (ηG²) were small

and uninterpretable. This trend persisted when splitting up data to account for dependencies

introduced by score-type and number of items per factor, and also when outliers were taken into

account by using Winsorized outcomes (Hastings et al., 1947). Given the general lack of

interpretability of effect size estimates, I present these meta-models in Appendix C, but will not

rely on these models to determine meaningfulness of effects.

Relative Bias of the Indirect Effect

First, I looked at relative bias of the indirect effect across all simulation conditions. A

table of average values and standard deviations is presented in Appendix B. Figures 6 and 7

show boxplots of relative bias across all conditions.


Figure 6. Relative bias of indirect effect when N = 40

* Effect size conditions are separated via color and labeled as “paths.” Condition 1 refers to the indirect effect computed from population values 𝛾11 = 0.30, 𝛽21 = 0.30, and 𝛾12 = 0.05; Condition 2 refers to the indirect effect computed from population values 𝛾11 = 0.20, 𝛽21 = 0.45, and 𝛾12 = 0.05; and Condition 3 refers to the indirect effect computed from population values 𝛾11 = 0.45, 𝛽21 = 0.20, and 𝛾12 = 0.05. Mean relative bias is represented by diamonds in each box and median relative bias by horizontal lines.


Figure 7. Relative bias of indirect effect when N = 80

* Effect size conditions are separated via color and labeled as “paths.” Condition 1 refers to the indirect effect computed from population values 𝛾11 = 0.30, 𝛽21 = 0.30, and 𝛾12 = 0.05; Condition 2 refers to the indirect effect computed from population values 𝛾11 = 0.20, 𝛽21 = 0.45, and 𝛾12 = 0.05; and Condition 3 refers to the indirect effect computed from population values 𝛾11 = 0.45, 𝛽21 = 0.20, and 𝛾12 = 0.05. Mean relative bias is represented by diamonds in each box and median relative bias by horizontal lines.


Before discussing differences across simulation conditions, I will first point out a number

of striking general observations. First, there is substantial variability in relative bias and

numerous outliers over all scoring conditions. In the condition using a sample size of 40, some

repetitions had bias levels around 350% and in sample sizes of 80 this only decreased to near

200%. Overall, interquartile ranges (IQRs) of relative bias within design cells had widths ranging from a minimum of 61% to a maximum of 95%. These wide distributions of relative bias suggest high

variability in parameter estimates within any given repetition. This further explicates issues

associated with effect size measures in meta-models.

Second, there is no difference in relative bias of the indirect effect in the two scoring

conditions using regression scores and Bartlett scores. Recall, these two models were scored in

the following manner: regression → regression → Bartlett and regression → Bartlett → Bartlett.

In addition, score type was a within-dataset condition, that is, all score types were computed

within each simulated dataset. It was persistently noted that the numerical value of the indirect

effect was identical within a given dataset, when the independent variable was scored using

regression scores and the dependent variable was scored using Bartlett scores, regardless of

whether regression or Bartlett scores were used to score the mediator. The exactness of this

relationship will be further explicated in the following section.

While there was no difference between the scoring models using regression and Bartlett

scores, there were some notable trends when comparing these to the mean score model. First,

when all factor loadings were set to be high in value and equal across latent factors, relative bias

of the mean score model was either comparable or better, on average, when compared to the

other scoring models. This trend persisted in both sample size conditions and favored the mean score model more strongly as the number of items per factor increased. For example, in the


first effect size condition with a sample size of N = 40, 12 items per factor, and all loadings set at

0.8 the mean score model had an average relative bias nearer to zero (𝑀 = −13.67, 𝑆𝐷 =

75.10) compared to the other scoring models (𝑀 = −15.52, 𝑆𝐷 = 74.52). In contrast, when

factor loadings were variable, and thus the measurement model less closely adhered to that

imposed by mean scores, the indirect effect was more biased on average than the other scoring

models. This was also more pronounced with fewer items per latent factor. For example, in the

first effect size condition with a sample size of N = 40, 6 items per factor, and variable factor

loadings the mean score model showed more extreme relative bias (𝑀 = −42.48, 𝑆𝐷 = 49.48)

when compared to the other scoring models (𝑀 = −29.48, 𝑆𝐷 = 75.25). Notice, however, in

this condition the mean score model had less overall variability than the other scoring models,

even though average relative bias was stronger. In fact, the first example discussed was the only

design cell in which the standard deviation of relative bias of the mean score model was higher

than the other scoring models.

MSE of the Indirect Effect

Next, I moved to examining MSE of the indirect effect across all simulation conditions

(Figures 8 and 9). In order to be able to observe distribution properties, Figures 8 and 9 show

squared error across simulation conditions. MSE per cell can be seen in Appendix B.


Figure 8. Squared error of indirect effect when N = 40

* Effect size conditions are separated via color and labeled as “paths.” Condition 1 refers to the indirect effect computed from population values 𝛾11 = 0.30, 𝛽21 = 0.30, and 𝛾12 = 0.05; Condition 2 refers to the indirect effect computed from population values 𝛾11 = 0.20, 𝛽21 = 0.45, and 𝛾12 = 0.05; and Condition 3 refers to the indirect effect computed from population values 𝛾11 = 0.45, 𝛽21 = 0.20, and 𝛾12 = 0.05. MSE is represented by diamonds in each box and median SE by horizontal lines.


Figure 9. Squared error of indirect effect when N = 80

* Effect size conditions are separated via color and labeled as “paths.” Condition 1 refers to the indirect effect computed from population values 𝛾11 = 0.30, 𝛽21 = 0.30, and 𝛾12 = 0.05; Condition 2 refers to the indirect effect computed from population values 𝛾11 = 0.20, 𝛽21 = 0.45, and 𝛾12 = 0.05; and Condition 3 refers to the indirect effect computed from population values 𝛾11 = 0.45, 𝛽21 = 0.20, and 𝛾12 = 0.05. MSE is represented by diamonds in each box and median SE by horizontal lines.
**Note, the y-axis has been rescaled from the previous plot to better display the distribution and outliers.


Similar to what was observed in terms of relative bias, there were numerous outliers here

and values were identical across the scoring conditions using regression and Bartlett scores. SE

values were also all very near zero and did not exceed 0.10, though this was to be expected given

the small effect sizes utilized in this study.

When comparing MSE of the mean score model to the other scoring models, the mean

score model consistently had MSE values nearer to zero. This difference was less pronounced as

sample size increased. For example, in the first effect size condition, with 12 items per factor and

variable factor loadings MSE of the mean score model was 0.0041 compared to 0.0049 in the

other scoring models when N = 40, and 0.0019 in the mean score model compared to 0.0021 in

the other scoring models when N = 80.

Coverage of the Confidence Interval of the Indirect Effect

Finally, I evaluated coverage of the confidence intervals in all scoring and simulation

conditions. Recall, standard errors were computed using the bias-corrected and accelerated

method. Table 4 presents the percentage of confidence intervals within each cell that contained the

true population parameter (or true standardized population parameter for the mean score

models).


Table 4. Coverage of confidence interval of indirect effect

N = 40:

| Paths | Mediator score type | All .8, 6 items | All .8, 12 items | Triplets .4/.6/.8, 6 items | Triplets .4/.6/.8, 12 items |
|---|---|---|---|---|---|
| Condition 1 | R | 93.40% | 93.60% | 91.79% | 91.60% |
| Condition 1 | B | 93.60% | 94.20% | 91.37% | 92.20% |
| Condition 1 | M | 93.80% | 93.80% | 89.89% | 92.20% |
| Condition 2 | R | 92.00% | 93.00% | 90.32% | 91.00% |
| Condition 2 | B | 92.20% | 93.00% | 90.74% | 91.00% |
| Condition 2 | M | 92.20% | 92.20% | 87.79% | 90.20% |
| Condition 3 | R | 91.60% | 92.80% | 88.17% | 92.40% |
| Condition 3 | B | 92.60% | 92.80% | 89.00% | 92.20% |
| Condition 3 | M | 92.00% | 92.60% | 89.00% | 90.00% |

N = 80:

| Paths | Mediator score type | All .8, 6 items | All .8, 12 items | Triplets .4/.6/.8, 6 items | Triplets .4/.6/.8, 12 items |
|---|---|---|---|---|---|
| Condition 1 | R | 97.20% | 96.00% | 95.20% | 96.40% |
| Condition 1 | B | 96.80% | 96.40% | 95.00% | 96.60% |
| Condition 1 | M | 96.40% | 96.66% | 91.40% | 94.40% |
| Condition 2 | R | 94.00% | 93.80% | 93.80% | 92.80% |
| Condition 2 | B | 94.40% | 94.00% | 92.79% | 92.80% |
| Condition 2 | M | 94.40% | 95.20% | 90.81% | 94.20% |
| Condition 3 | R | 93.80% | 93.80% | 93.20% | 93.40% |
| Condition 3 | B | 94.20% | 94.60% | 94.00% | 93.80% |
| Condition 3 | M | 93.20% | 94.20% | 91.20% | 93.40% |

* Effect size conditions are labeled as “paths.” Condition 1 refers to the indirect effect computed from population values 𝛾11 = 0.30, 𝛽21 = 0.30, and 𝛾12 = 0.05; Condition 2 refers to the indirect effect computed from population values 𝛾11 = 0.20, 𝛽21 = 0.45, and 𝛾12 = 0.05; and Condition 3 refers to the indirect effect computed from population values 𝛾11 = 0.45, 𝛽21 = 0.20, and 𝛾12 = 0.05.


Table 4 suggests that while very few of these conditions resulted in confidence intervals

that contained the true population parameter 95% of the time, most were quite close to 95%

coverage. There were also slight differences between the scoring models using regression and

Bartlett scores which warrant an explanation. These differences were not due to meaningful

differences between these models, as the indirect effect point estimate was identical, but rather

due to differences in bootstrapping starting seeds. When specifying a consistent seed on which to

begin bootstrapping, confidence intervals of the indirect effect were identical across these two

models.

Given that observed differences between the models using regression and Bartlett scores

were not due to qualities of the scores themselves, I moved to comparing these models to the

mean score model. Coverage of the confidence interval of the indirect effect computed from the

mean score model was generally comparable to the other scoring models when factor loadings

were all high. In contrast to what was observed in relative bias and MSE, the mean score model

did not consistently outperform the others in terms of coverage in this measurement structure,

but persistently performed on par with the other models. When factor loadings were variable, the

mean score model typically had lower coverage than the other scoring models, and this was more

pronounced with fewer items per latent factor.

In the final section I will review these findings and discuss them within the broader framework of this analysis and of factor scoring research in general, providing proposed underlying reasons for the observed effects.


4 DISCUSSION

The motivation for this study was to investigate if and when it is appropriate to use factor

scores to examine mediational relations between latent variables. From this, a number of

interesting results were obtained, some anticipated and others unexpected. In this section, I

briefly review the most salient findings from this project within the context of my original

hypotheses. I also use prior research and analytic derivations to further explicate the underlying

mechanisms that likely gave way to these results. Within this discussion, I build in suggestions

and recommendations for researchers. Finally, I discuss limitations and shortcomings of the

present study which leads to suggestions for future lines of research and final closing remarks.

4.1 Hypotheses

First and foremost, these results suggest that it is generally best practice to natively model multiple indicator latent factors, unless the model simply cannot be estimated without the use of factor scores. While

there appears to be a slight decrease in variability of point estimates when using factor scores,

this is negligible and likely does not make up for the bias introduced by estimating paths using

regression and Bartlett scores, or by imposing the restrictive factor structure required by mean

scores; however, there are some instances in which the full SEM is underidentified. In these

scenarios, scores can be used in place of simultaneously estimating the structural and

measurement components of the model in order to statistically test for the presence of a mediation

effect.

Hypotheses One and Two

My first two hypotheses followed directly from Skrondal and Laake (2001) and were

used primarily to confirm the presence of bias in one component of the indirect effect when

scoring the mediator using regression or Bartlett scores. Trends in bias of the components of the


indirect effect generally followed what was to be expected based on Skrondal and Laake (2001).

Translated into recommendations for researchers, Skrondal and Laake’s findings can be applied

to mediation models, allowing a researcher to prespecify a biased path to be the one of least

theoretical interest. For example, if a researcher is more interested in interpreting the main effect

of the mediator on the outcome, regression scores can be used to score this mediator, reducing

bias in this specific effect. In either case, when using regression and Bartlett scores in a single-

mediator model, it should be noted that one component of the indirect effect will be biased due to

the choice of mediator score type, and that this biased path will introduce bias into the indirect

effect.

Hypothesis Three

My third hypothesis stated that mean scores would lead to bias in both components of the

indirect effect which would then bias the indirect effect. This bias would be more pronounced

when the measurement model was less resemblant of that imposed by mean scores (i.e., when

factor loadings varied across items). Generally, findings were consistent with this hypothesis;

however, there were some unexpected differences when comparing the mean score model to the

other factor scoring models. In particular, the mean score model performed similarly to the

others in terms of relative bias, MSE, and coverage in some cases, and also outperformed the

others in terms of relative bias and MSE, specifically when population generating factor loadings

were set to be equal and generally high. This finding indicates that when empirical or theoretical

evidence suggest all items equally and highly contribute to a given latent factor, mean scores

may be appropriate to use in mediation models, when sample size limitations prevent the full

SEM from being estimable. The reason for this stems from the fact that factor scores utilize exact

parameter estimates in their computations, which may be highly variable particularly in small


samples. For example, regression and Bartlett scores are computed using the factor loadings

estimated from a given factor analysis, while mean scores simply assume all loadings are set to

one. At first glance, the latter seems like a sweeping assumption, but the former has the potential

to overly capitalize on sampling variability. That is, when computing a mean score, a researcher

is selecting and imposing a potentially incorrect factor structure. When computing a regression

or Bartlett score, a researcher is allowing sampling fluctuations to deterministically contribute to

each score estimate across all latent factors. For these reasons, alongside the overall simplicity of

mean scores and frequency of their use, mean scores are a decent alternative to natively

modeling multiple indicator latent factors under certain conditions. This suggestion builds off of

prior work that has gone so far as to recommend the use of composite scores, such as mean and

sum scores, over natively modeling latent factors when examining mediation in small samples

(Ledgerwood & Shrout, 2011).

Hypothesis Four

Finally, my fourth hypothesis proposed that one might be able to cleverly score mediation

models using regression and Bartlett scores by selecting the biased path to be the one with the

smaller effect size, in order to minimize overall bias in the indirect effect. This hypothesis

proved to be false, and instead I found no difference in the point estimate of the indirect effect, or

bias of the indirect effect when swapping out regression or Bartlett scores in a mediation model,

across the different effect size conditions. In fact, in any given dataset, the two scoring conditions

using regression and Bartlett scores (regression → regression → Bartlett and regression →

Bartlett → Bartlett) led to identical estimates of the indirect effect. While this relationship was

initially unexpected, it can be traced back to an earlier derivation from Thomson (1938). In a

response to the formulation of Bartlett scores, Thomson showed an analytic relation existed


between regression and Bartlett scores. This derivation can be translated and reframed within the

full SEM framework:

Theorem 1. $\mathbf{A}_{\xi}^{R} = f(\mathbf{A}_{\xi}^{B})$

Proof. Using (20) and (22) we obtain

$$
\begin{aligned}
\mathbf{A}_{\xi}^{B} &= \left(\boldsymbol{\Lambda}_{x}'\,\boldsymbol{\Theta}_{\delta}^{-1}\boldsymbol{\Lambda}_{x}\right)^{-1}\boldsymbol{\Lambda}_{x}'\,\boldsymbol{\Theta}_{\delta}^{-1} \\
\Rightarrow\; \left(\boldsymbol{\Lambda}_{x}'\,\boldsymbol{\Theta}_{\delta}^{-1}\boldsymbol{\Lambda}_{x}\right)\mathbf{A}_{\xi}^{B}\,\boldsymbol{\Theta}_{\delta} &= \boldsymbol{\Lambda}_{x}' \\
\Rightarrow\; \hat{\boldsymbol{\Phi}}\left(\boldsymbol{\Lambda}_{x}'\,\boldsymbol{\Theta}_{\delta}^{-1}\boldsymbol{\Lambda}_{x}\right)\mathbf{A}_{\xi}^{B}\,\boldsymbol{\Theta}_{\delta}\left(\boldsymbol{\Sigma}_{x}(\hat{\theta})\right)^{-1} &= \hat{\boldsymbol{\Phi}}\,\boldsymbol{\Lambda}_{x}'\left(\boldsymbol{\Sigma}_{x}(\hat{\theta})\right)^{-1} \\
\Rightarrow\; \hat{\boldsymbol{\Phi}}\left(\boldsymbol{\Lambda}_{x}'\,\boldsymbol{\Theta}_{\delta}^{-1}\boldsymbol{\Lambda}_{x}\right)\mathbf{A}_{\xi}^{B}\,\boldsymbol{\Theta}_{\delta}\left(\boldsymbol{\Sigma}_{x}(\hat{\theta})\right)^{-1} &= \mathbf{A}_{\xi}^{R} \\
\Rightarrow\; \mathbf{A}_{\xi}^{R} &= f(\mathbf{A}_{\xi}^{B})
\end{aligned}
$$

A similar proof shows that $\mathbf{A}_{\eta}^{R} = f(\mathbf{A}_{\eta}^{B})$.

The above proof demonstrates that regression scores are a function of Bartlett scores (and

thus vice versa). Consequently, the indirect effect computed from a model in which the path from independent variable to mediator uses regression scores for both variables and the path from mediator to dependent variable uses regression and Bartlett scores, respectively (regression → regression → Bartlett), is numerically equivalent to the indirect effect computed from a model scored regression → Bartlett → Bartlett. This theorem nullifies hypothesis four, which claimed that clever score choices based on the size of paths could be used to reduce bias, and instead suggests there is no difference in bias of the indirect effect when the mediator is scored with regression or Bartlett scores.
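This relation is easy to verify numerically. The sketch below (illustrative, single-factor case) shows that regression and Bartlett scores from the same CFA correlate exactly 1.0, differing only by a multiplicative constant, which is why swapping them for the mediator leaves the indirect effect unchanged.

```r
# Numerical illustration of Theorem 1 for a single factor.
library(lavaan)
set.seed(1)
eta <- rnorm(200)
dat <- as.data.frame(sapply(rep(0.8, 6), function(l)
  l * eta + rnorm(200, sd = sqrt(1 - l^2))))
names(dat) <- paste0("m", 1:6)

fit <- cfa('F =~ m1 + m2 + m3 + m4 + m5 + m6', data = dat)
fr <- as.numeric(lavPredict(fit, method = "regression"))
fb <- as.numeric(lavPredict(fit, method = "Bartlett"))

cor(fr, fb)        # 1.00: the two score types are proportional
coef(lm(fr ~ fb))  # slope gives the scalar linking the two score matrices
```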


4.2 Limitations

There were a number of major limitations to this study that mostly encompassed the

choice to base the project on the single-mediator model within the framework of small sample

research. While these choices were justifiable given the desire to focus results within the context

of the simplest form of mediation models and to demonstrate the use of factor scores in scenarios

in which simultaneous estimation of the full SEM is not possible due to limitations in available

data, they also introduced highly variable results and limited overall generalizability.

Since results were only taken from the single-mediator model, they may not be

generalizable to modeling more complex mediational relations. Specifically, results do not

directly explain how bias in the indirect effect would increase with the addition of multiple

biased components. In addition, in terms of limiting sample size, all findings must be considered

only in the contextual framework of small sample research, and were heavily influenced by the

impacts of sampling variability inherent in small samples. Thus, results may not generalize to

large samples where latent variables cannot be natively estimated for other reasons, such as

model complexity. Furthermore, lavaan was actually able to successfully estimate model

parameters for the majority of the simulation conditions in this study using a full multiple

indicator latent factor SEM. Only a subset of simulation conditions led to models that could not

be simultaneously estimated. In general, these findings suggest that SEM software is capable of

natively modeling latent factors even with very small samples and in other non-ideal conditions.

Since factor scoring models did little to mitigate highly variable sampling distributions of

parameter estimates, small sample research may not be the most effective way to evaluate the

usefulness of factor scores in subsequent models.


4.3 Future Directions

These limitations naturally lead to a discussion of how future research should continue to

examine the use of factor scores in subsequent models. Further, these limitations suggest how

future improvements can build upon these findings to enhance usability and generalizability of

results.

In terms of usability, factor scoring research broadly can be improved by focusing on

examining the use of factor scores in subsequent models not from the perspective of

necessitating factor scores due to limited available information (e.g., small samples), but from

the perspective of necessitating factor scores due to limitations in modeling frameworks

themselves. Said another way, factor scoring research is most useful when it tests the appropriateness of factor scores in models that are inherently not capable of natively estimating

latent factors, such as longitudinal models with a large number of timepoints (e.g., Curran et al.,

2014), or highly complex models involving many latent factors or bi- and trifactor structures

(e.g., Curran et al., 2021). This is because, as previously demonstrated, current software has the

capability to natively estimate latent factors in simple mediation models, even when sample sizes

are small. Future studies should continue to demonstrate the benefits of taking full advantage of

the modeling capabilities of the SEM in order to better demonstrate if and when scores are a

viable alternative.

This also has important implications in terms of generalizability. The present study

existed solely in the realm of small samples and consequently became governed by issues that

present themselves in small sample research, such as extremes in sampling variability. Thus, it is

difficult to parse apart which results were due primarily to sample size limitations and which

results would generalize to more ideal modeling conditions. In addition, generalizability was


limited by only examining the single-mediator model. While using regression and Bartlett scores

to score a single mediator added bias to parameter estimates, one can only posit that this would

increase in multiple mediator models. These models should be considered in future research to

specifically quantify the extent to which bias in incorrectly scored mediators impacts tests of

more complicated mediation pathways.

In conclusion, as modeling becomes more widespread and complex, it is important to

continually consider and examine the role of measurement and scoring in psychological research.

The types of numerical estimates that are used to estimate the relations between latent constructs

have the potential to bias results if not chosen systematically. In addition, there are numerous

difficulties associated with model fitting in small samples that must be carefully considered

when using scores due to data limitations associated with small samples. Skrondal and Laake

concluded their 2001 paper stating “From a practical point of view, the most important results is

perhaps that conventional factor score regression performs very badly and should definitely be

abandoned” (p. 575). I take a similarly cautious approach in concluding that factor scores are not

an infallible solution to modeling complex relations between latent factors in the presence of

major sample size limitations, but rather one among many possible modeling frameworks to be

considered and continually evaluated, with its many advantages and limitations in mind.


APPENDIX A: BOX PLOTS OF COMPONENTS OF INDIRECT EFFECT

Figure 10. Relative bias of 𝛾11 when N = 40

* Effect size conditions are separated via color and labeled as “paths.” Condition 1 refers to the indirect effect computed from population values 𝛾11 = 0.30, 𝛽21 = 0.30, and 𝛾12 = 0.05; Condition 2 refers to the indirect effect computed from population values 𝛾11 = 0.20, 𝛽21 = 0.45, and 𝛾12 = 0.05; and Condition 3 refers to the indirect effect computed from population values 𝛾11 = 0.45, 𝛽21 = 0.20, and 𝛾12 = 0.05. Mean relative bias is represented by diamonds in each box and median relative bias by horizontal lines.


Figure 11. Relative bias of 𝛾11 when N = 80

* Effect size conditions are separated via color and labeled as “paths.” Condition 1 refers to the indirect effect computed from population values 𝛾11 = 0.30, 𝛽21 = 0.30, and 𝛾12 = 0.05; Condition 2 refers to the indirect effect computed from population values 𝛾11 = 0.20, 𝛽21 = 0.45, and 𝛾12 = 0.05; and Condition 3 refers to the indirect effect computed from population values 𝛾11 = 0.45, 𝛽21 = 0.20, and 𝛾12 = 0.05. Mean relative bias is represented by diamonds in each box and median relative bias by horizontal lines.


Figure 12. Relative bias of 𝛽21 when N = 40

* Effect size conditions are separated via color and labeled as “paths.” Condition 1 refers to the indirect effect computed from population values 𝛾11 = 0.30, 𝛽21 = 0.30, and 𝛾12 = 0.05; Condition 2 refers to the indirect effect computed from population values 𝛾11 = 0.20, 𝛽21 = 0.45, and 𝛾12 = 0.05; and Condition 3 refers to the indirect effect computed from population values 𝛾11 = 0.45, 𝛽21 = 0.20, and 𝛾12 = 0.05. Mean relative bias is represented by diamonds in each box and median relative bias by horizontal lines.


Figure 13. Relative bias of 𝛽21 when N = 80

* Effect size conditions are separated via color and labeled as “paths.” Condition 1 refers to the indirect effect computed from population values 𝛾11 = 0.30, 𝛽21 = 0.30, and 𝛾12 = 0.05; Condition 2 refers to the indirect effect computed from population values 𝛾11 = 0.20, 𝛽21 = 0.45, and 𝛾12 = 0.05; and Condition 3 refers to the indirect effect computed from population values 𝛾11 = 0.45, 𝛽21 = 0.20, and 𝛾12 = 0.05. Mean relative bias is represented by diamonds in each box and median relative bias by horizontal lines.


Figure 14. Squared error of 𝛾11 when N = 40

* Effect size conditions are separated via color and labeled as “paths.” Condition 1 refers to the indirect effect computed from population values 𝛾11 = 0.30, 𝛽21 = 0.30, and 𝛾12 = 0.05; Condition 2 refers to the indirect effect computed from population values 𝛾11 = 0.20, 𝛽21 = 0.45, and 𝛾12 = 0.05; and Condition 3 refers to the indirect effect computed from population values 𝛾11 = 0.45, 𝛽21 = 0.20, and 𝛾12 = 0.05. MSE is represented by diamonds in each box and median SE by horizontal lines.


Figure 15. Squared error of 𝛾11 when N = 80

* Effect size conditions are separated via color and labeled as “paths.” Condition 1 refers to the indirect effect computed from population values 𝛾11 = 0.30, 𝛽21 = 0.30, and 𝛾12 = 0.05; Condition 2 refers to the indirect effect computed from population values 𝛾11 = 0.20, 𝛽21 = 0.45, and 𝛾12 = 0.05; and Condition 3 refers to the indirect effect computed from population values 𝛾11 = 0.45, 𝛽21 = 0.20, and 𝛾12 = 0.05. MSE is represented by diamonds in each box and median SE by horizontal lines.


Figure 16. Squared error of 𝛽21 when N = 40

* Effect size conditions are separated via color and labeled as “paths.” Condition 1 refers to the indirect effect computed from population values 𝛾11 = 0.30, 𝛽21 = 0.30, and 𝛾12 = 0.05; Condition 2 refers to the indirect effect computed from population values 𝛾11 = 0.20, 𝛽21 = 0.45, and 𝛾12 = 0.05; and Condition 3 refers to the indirect effect computed from population values 𝛾11 = 0.45, 𝛽21 = 0.20, and 𝛾12 = 0.05. MSE is represented by diamonds in each box and median SE by horizontal lines.


Figure 17. Squared error of 𝛽21 when N = 80

* Effect size conditions are separated by color and labeled as “paths.” Condition 1 refers to the indirect effect computed from population values (𝛾11 = 0.30, 𝛽21 = 0.30), Condition 2 to population values (𝛾11 = 0.20, 𝛽21 = 0.45, and 𝛾12 = 0.05), and Condition 3 to population values (𝛾11 = 0.45, 𝛽21 = 0.20, and 𝛾12 = 0.05). MSE is represented by diamonds in each box and median squared error by horizontal lines.
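For reference, the quantities plotted in the figures above can be computed from the replication-level estimates as in the following minimal R sketch; the data frame reps, its column estimate, and the population value are illustrative assumptions rather than the thesis simulation code.

# Relative bias and squared error of a path estimate across replications,
# given a known population value (all names here are illustrative).
theta    <- 0.30                                   # e.g., population beta21
rel_bias <- 100 * (reps$estimate - theta) / theta  # relative bias, in percent
sq_error <- (reps$estimate - theta)^2              # squared error
mean(rel_bias)    # diamonds in each box
median(rel_bias)  # horizontal lines in each box
mean(sq_error)    # the MSE reported in Appendix B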


APPENDIX B: TABLES OF RELATIVE BIAS AND SQUARED ERROR OF INDIRECT EFFECT
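Both tables below report results separately by mediator score type (R = regression scores, B = Bartlett scores, M = mean scores). To make these columns concrete, the following minimal R sketch shows one way the three score types could be obtained with lavaan (Rosseel, 2012); the item names and the one-factor measurement model are assumed for illustration and are not the thesis code.

# R = regression scores, B = Bartlett scores, M = mean scores,
# for a mediator measured by six items m1-m6 (names assumed).
library(lavaan)
fit <- cfa("eta2 =~ m1 + m2 + m3 + m4 + m5 + m6", data = dat)
R_scores <- lavPredict(fit, method = "regression")
B_scores <- lavPredict(fit, method = "Bartlett")
M_scores <- rowMeans(dat[, paste0("m", 1:6)])  # unit-weighted mean score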

Table 5. MSE of indirect effect across simulation conditions

Paths (std paths)       | Sample size | Factor loadings        | Items per factor | R      | B      | M
------------------------|-------------|------------------------|------------------|--------|--------|-------
γ11 ∗ β21 = .09 (0.085) | N = 40      | All .8                 | 12               | 0.0047 | 0.0047 | 0.0042
                        |             | All .8                 | 6                | 0.0047 | 0.0047 | 0.0040
                        |             | Triplets of .8, .4, .6 | 12               | 0.0049 | 0.0049 | 0.0041
                        |             | Triplets of .8, .4, .6 | 6                | 0.0053 | 0.0053 | 0.0041
                        | N = 80      | All .8                 | 12               | 0.0020 | 0.0020 | 0.0019
                        |             | All .8                 | 6                | 0.0020 | 0.0020 | 0.0017
                        |             | Triplets of .8, .4, .6 | 12               | 0.0021 | 0.0021 | 0.0019
                        |             | Triplets of .8, .4, .6 | 6                | 0.0023 | 0.0023 | 0.0022
γ11 ∗ β21 = .09 (0.081) | N = 40      | All .8                 | 12               | 0.0047 | 0.0047 | 0.0042
                        |             | All .8                 | 6                | 0.0047 | 0.0047 | 0.0039
                        |             | Triplets of .8, .4, .6 | 12               | 0.0049 | 0.0049 | 0.0039
                        |             | Triplets of .8, .4, .6 | 6                | 0.0051 | 0.0051 | 0.0037
                        | N = 80      | All .8                 | 12               | 0.0020 | 0.0020 | 0.0018
                        |             | All .8                 | 6                | 0.0020 | 0.0020 | 0.0016
                        |             | Triplets of .8, .4, .6 | 12               | 0.0021 | 0.0021 | 0.0018
                        |             | Triplets of .8, .4, .6 | 6                | 0.0023 | 0.0023 | 0.0020
γ11 ∗ β21 = .09 (0.087) | N = 40      | All .8                 | 12               | 0.0047 | 0.0047 | 0.0043
                        |             | All .8                 | 6                | 0.0047 | 0.0047 | 0.0041
                        |             | Triplets of .8, .4, .6 | 12               | 0.0049 | 0.0049 | 0.0042
                        |             | Triplets of .8, .4, .6 | 6                | 0.0052 | 0.0052 | 0.0042
                        | N = 80      | All .8                 | 12               | 0.0020 | 0.0020 | 0.0019
                        |             | All .8                 | 6                | 0.0020 | 0.0020 | 0.0018
                        |             | Triplets of .8, .4, .6 | 12               | 0.0021 | 0.0021 | 0.0020
                        |             | Triplets of .8, .4, .6 | 6                | 0.0023 | 0.0023 | 0.0024

Note. MSE by mediator score type: R = regression scores; B = Bartlett scores; M = mean scores.


Table 6. Relative bias of indirect effect across simulation conditions

Paths (std paths)       | Sample size | Factor loadings        | Items per factor | R              | B              | M
------------------------|-------------|------------------------|------------------|----------------|----------------|----------------
γ11 ∗ β21 = .09 (0.085) | N = 40      | All .8                 | 12               | -15.53 (74.52) | -15.53 (74.52) | -13.67 (75.1)
                        |             | All .8                 | 6                | -19.23 (73.55) | -19.23 (73.55) | -20.23 (71.46)
                        |             | Triplets of .8, .4, .6 | 12               | -20.63 (75.1)  | -20.63 (75.1)  | -27.3 (69.43)
                        |             | Triplets of .8, .4, .6 | 6                | -29.48 (75.25) | -29.48 (75.25) | -42.08 (62.58)
                        | N = 80      | All .8                 | 12               | -11.12 (48.79) | -11.12 (48.79) | -10.05 (49.48)
                        |             | All .8                 | 6                | -15.54 (46.82) | -15.54 (46.82) | -17.65 (45.64)
                        |             | Triplets of .8, .4, .6 | 12               | -16.45 (48.1)  | -16.45 (48.1)  | -23.54 (46.02)
                        |             | Triplets of .8, .4, .6 | 6                | -25.71 (46.33) | -25.71 (46.33) | -38.99 (38.97)
γ11 ∗ β21 = .09 (0.081) | N = 40      | All .8                 | 12               | -15.53 (74.52) | -15.53 (74.52) | -9.40 (78.81)
                        |             | All .8                 | 6                | -19.23 (73.55) | -19.23 (73.55) | -16.28 (75.00)
                        |             | Triplets of .8, .4, .6 | 12               | -20.63 (75.1)  | -20.63 (75.1)  | -23.71 (72.86)
                        |             | Triplets of .8, .4, .6 | 6                | -27.96 (74.39) | -27.96 (74.39) | -37.01 (65.3)
                        | N = 80      | All .8                 | 12               | -11.12 (48.79) | -11.12 (48.79) | -5.61 (51.93)
                        |             | All .8                 | 6                | -15.54 (46.82) | -15.54 (46.82) | -13.57 (47.9)
                        |             | Triplets of .8, .4, .6 | 12               | -16.45 (48.1)  | -16.45 (48.1)  | -19.75 (48.3)
                        |             | Triplets of .8, .4, .6 | 6                | -25.66 (46.36) | -25.66 (46.36) | -35.97 (40.94)
γ11 ∗ β21 = .09 (0.087) | N = 40      | All .8                 | 12               | -15.53 (74.52) | -15.53 (74.52) | -15.63 (73.39)
                        |             | All .8                 | 6                | -19.23 (73.55) | -19.23 (73.55) | -22.05 (69.83)
                        |             | Triplets of .8, .4, .6 | 12               | -20.63 (75.1)  | -20.63 (75.1)  | -28.96 (67.84)
                        |             | Triplets of .8, .4, .6 | 6                | -29.21 (74.43) | -29.48 (75.25) | -42.67 (60.56)
                        | N = 80      | All .8                 | 12               | -11.12 (48.79) | -11.12 (48.79) | -12.10 (48.35)
                        |             | All .8                 | 6                | -15.54 (46.82) | -15.54 (46.82) | -19.52 (44.6)
                        |             | Triplets of .8, .4, .6 | 12               | -16.45 (48.1)  | -16.45 (48.1)  | -25.28 (44.97)
                        |             | Triplets of .8, .4, .6 | 6                | -25.71 (46.33) | -25.71 (46.33) | -40.38 (38.08)

Note. Relative bias by mediator score type: R = regression scores; B = Bartlett scores; M = mean scores.


APPENDIX C: META-MODEL RESULTS

All meta-models presented in this appendix use non-Winsorized outcomes and were not separated by score type or by number of items per factor, steps that would otherwise minimize dependencies in the data. Differences in results were trivial when these issues were taken into account.
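The meta-models in Tables 7 and 8 are factorial ANOVAs on the simulation outcomes, with generalized eta squared (Olejnik & Algina, 2003) as the effect size. A minimal sketch, assuming a long-format results data frame with illustrative column names, follows.

# Factorial ANOVA meta-model; rstatix::anova_test() reports generalized
# eta squared ("ges") as its default effect size. Column names are assumed.
library(rstatix)
meta_bias <- anova_test(
  data = sim_results,
  formula = rel_bias ~ score_type * items_per_factor * effect_size *
                       sample_size * loadings
)
# Tables 7 and 8 report the main effects and the two- and three-way
# interactions from models of this form.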

Table 7. Relative bias of indirect effect meta-model

Effect                                                              | F       | p       | ηG²
--------------------------------------------------------------------|---------|---------|-------
Mediator score type                                                 | 395.00  | < .001* | .0010
Number of items per factor                                          | 577.935 | < .001* | .0040
Effect size                                                         | 0.708   | .492    | .0004
Sample size                                                         | 5.247   | < .05*  | .0008
Factor loadings                                                     | 49.08   | < .001* | .0080
Mediator score type × Number of items per factor                    | 351.48  | < .001* | .0002
Mediator score type × Effect size                                   | 57.16   | < .001* | .0003
Mediator score type × Sample size                                   | 2.878   | .056    | .0000
Mediator score type × Factor loadings                               | 535.970 | < .001* | .0010
Number of items per factor × Effect size                            | 0.04    | .961    | .0000
Number of items per factor × Sample size                            | 1.79    | .182    | .0000
Number of items per factor × Factor loadings                        | 76.52   | < .001* | .0005
Effect size × Sample size                                           | 0.02    | .982    | .0000
Effect size × Factor loadings                                       | 0.01    | .991    | .0000
Sample size × Factor loadings                                       | 0.02    | .891    | .0000
Mediator score type × Number of items per factor × Effect size      | 0.99    | .411    | .0000
Mediator score type × Number of items per factor × Sample size      | 1.94    | .144    | .0000
Mediator score type × Number of items per factor × Factor loadings  | 33.72   | < .001* | .0000
Mediator score type × Effect size × Sample size                     | 0.01    | 1.00    | .0000
Mediator score type × Effect size × Factor loadings                 | 0.62    | .652    | .0000
Mediator score type × Sample size × Factor loadings                 | 0.19    | .825    | .0000
Number of items per factor × Effect size × Sample size              | 0.01    | .991    | .0000
Number of items per factor × Effect size × Factor loadings          | 0.01    | .991    | .0000
Number of items per factor × Sample size × Factor loadings          | 0.01    | .922    | .0000
Effect size × Sample size × Factor loadings                         | 0.02    | .978    | .0000


Table 8. Squared error of indirect effect meta-model

Effect                                                              | F       | p       | ηG²
--------------------------------------------------------------------|---------|---------|-------
Mediator score type                                                 | 220.704 | < .001* | .0020
Number of items per factor                                          | 1.215   | .0270   | .0000
Effect size                                                         | 0.146   | .864    | .0000
Sample size                                                         | 328.27  | < .001* | .0460
Factor loadings                                                     | 2.38    | .123    | .0003
Mediator score type × Number of items per factor                    | 12.01   | < .001* | .0000
Mediator score type × Effect size                                   | 4.30    | < .01*  | .0001
Mediator score type × Sample size                                   | 95.27   | < .001* | .0007
Mediator score type × Factor loadings                               | 9.86    | < .001* | .0001
Number of items per factor × Effect size                            | 0.16    | .852    | .0000
Number of items per factor × Sample size                            | 0.10    | .749    | .0000
Number of items per factor × Factor loadings                        | 8.84    | < .01*  | .0001
Effect size × Sample size                                           | 0.03    | .980    | .0000
Effect size × Factor loadings                                       | 0.043   | .958    | .0000
Sample size × Factor loadings                                       | 9.86    | < .001* | .0001
Mediator score type × Number of items per factor × Effect size      | 1.00    | .405    | .0000
Mediator score type × Number of items per factor × Sample size      | 12.64   | < .001* | .0000
Mediator score type × Number of items per factor × Factor loadings  | 0.81    | .447    | .0000
Mediator score type × Effect size × Sample size                     | 0.01    | 1.00    | .0000
Mediator score type × Effect size × Factor loadings                 | 0.60    | .668    | .0000
Mediator score type × Sample size × Factor loadings                 | 18.68   | < .001* | .0001
Number of items per factor × Effect size × Sample size              | 0.02    | .977    | .0000
Number of items per factor × Effect size × Factor loadings          | 0.06    | .943    | .0000
Number of items per factor × Sample size × Factor loadings          | 0.06    | .806    | .0000
Effect size × Sample size × Factor loadings                         | 0.02    | .982    | .0000


Table 9. Coverage of confidence interval of indirect effect meta-model

Effect                                                                                | Log odds | p       | Odds ratio
---------------------------------------------------------------------------------------|----------|---------|-----------
Mediator score type (regression)                                                       | 0.69     | .080    | 2.00
Mediator score type (Bartlett)                                                         | 0.90     | < .05   | 2.46
Number of items per factor                                                             | -1.54    | < .001* | 0.21
Effect size (γ11 = .20, β21 = .45)                                                     | 0.25     | .647    | 1.29
Effect size (γ11 = .45, β21 = .20)                                                     | -0.29    | .613    | 0.76
Sample size                                                                            | -0.68    | .190    | 0.51
Factor loadings                                                                        | 0.45     | .469    | 1.56
Mediator score type (regression) × Number of items per factor                          | 1.47     | < .01*  | 4.35
Mediator score type (Bartlett) × Number of items per factor                            | 0.89     | < .05*  | 2.43
Mediator score type (regression) × Effect size (γ11 = .20, β21 = .45)                  | -1.31    | < .01*  | 0.27
Mediator score type (regression) × Effect size (γ11 = .45, β21 = .20)                  | -0.44    | .354    | 0.64
Mediator score type (Bartlett) × Effect size (γ11 = .20, β21 = .45)                    | -1.43    | < .01*  | 0.24
Mediator score type (Bartlett) × Effect size (γ11 = .45, β21 = .20)                    | -0.41    | .399    | 0.66
Mediator score type (regression) × Sample size                                         | -0.58    | .177    | 0.56
Mediator score type (Bartlett) × Sample size                                           | -0.58    | .180    | 0.56
Mediator score type (regression) × Factor loadings                                     | -0.90    | .057    | 0.41
Mediator score type (Bartlett) × Factor loadings                                       | -0.75    | .113    | 0.47
Number of items per factor × Sample size                                               | 1.07     | < .01*  | 2.92
Number of items per factor × Factor loadings                                           | 1.05     | < .05*  | 2.86
Number of items per factor × Effect size (γ11 = .20, β21 = .45)                        | 0.18     | .670    | 1.20
Number of items per factor × Effect size (γ11 = .45, β21 = .20)                        | 0.55     | .188    | 1.72
Sample size × Factor loadings                                                          | 0.18     | .805    | 1.20
Sample size × Effect size (γ11 = .20, β21 = .45)                                       | -0.45    | .505    | 0.64
Sample size × Effect size (γ11 = .45, β21 = .20)                                       | 0.27     | .689    | 1.31
Factor loadings × Effect size (γ11 = .20, β21 = .45)                                   | 0.13     | .871    | 0.14
Factor loadings × Effect size (γ11 = .45, β21 = .20)                                   | -0.24    | .755    | 0.79
Mediator score type (regression) × Number of items per factor × Sample size            | -1.05    | < .01*  | 0.35
Mediator score type (Bartlett) × Number of items per factor × Sample size              | -0.85    | < .05*  | 0.43
Mediator score type (regression) × Number of items per factor × Factor loadings        | -0.24    | .542    | 0.78
Mediator score type (Bartlett) × Number of items per factor × Factor loadings          | -0.33    | .403    | 0.72
Mediator score type (regression) × Effect size (γ11 = .20, β21 = .45) × Number of items per factor | -0.17 | .719 | 0.84
Mediator score type (regression) × Effect size (γ11 = .45, β21 = .20) × Number of items per factor | -0.82 | .074 | 0.44
Mediator score type (Bartlett) × Effect size (γ11 = .20, β21 = .45) × Number of items per factor   | 0.40  | .393 | 0.67
Mediator score type (Bartlett) × Effect size (γ11 = .45, β21 = .20) × Number of items per factor   | -0.19 | .687 | 0.83
Mediator score type (regression) × Sample size × Factor loadings                       | 0.39     | .325    | 1.48
Mediator score type (Bartlett) × Sample size × Factor loadings                         | 0.43     | .278    | 1.54
Mediator score type (regression) × Effect size (γ11 = .20, β21 = .45) × Sample size    | 1.68     | < .01*  | 5.37
Mediator score type (regression) × Effect size (γ11 = .45, β21 = .20) × Sample size    | 0.82     | .083    | 2.27
Mediator score type (Bartlett) × Effect size (γ11 = .20, β21 = .45) × Sample size      | 1.50     | < .01*  | 4.48
Mediator score type (Bartlett) × Effect size (γ11 = .45, β21 = .20) × Sample size      | 0.50     | .296    | 1.64
Mediator score type (regression) × Effect size (γ11 = .20, β21 = .45) × Factor loadings | 0.31    | .520    | 1.36
Mediator score type (regression) × Effect size (γ11 = .45, β21 = .20) × Factor loadings | 0.45    | .350    | 1.57
Mediator score type (Bartlett) × Effect size (γ11 = .20, β21 = .45) × Factor loadings  | 0.23     | .625    | 1.26
Mediator score type (Bartlett) × Effect size (γ11 = .45, β21 = .20) × Factor loadings  | 0.48     | .321    | 1.62

Note. Although the model tested all three-way interactions, only the three-way interactions involving score type are presented.
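Because coverage is binary at the replication level (the 95% confidence interval either contains the population indirect effect or it does not), the Table 9 meta-model is on the logistic scale, and the odds ratios are the exponentiated log odds. A minimal sketch under the same assumed column names as above follows.

# Logistic coverage meta-model; `covered` is an assumed 0/1 indicator that a
# replication's 95% CI contained the true indirect effect. The ^3 expansion
# yields all main effects and two- and three-way interactions.
cov_model <- glm(
  covered ~ (score_type + items_per_factor + effect_size +
             sample_size + loadings)^3,
  family = binomial, data = sim_results
)
exp(coef(cov_model))  # log odds -> odds ratios, as reported in Table 9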


REFERENCES

Bakeman, R. (2005). Recommended effect size statistics for repeated measures designs. Behavior Research Methods, 37(3), 379–384. https://doi.org/10.3758/BF03192707

Baron, R. M., & Kenny, D. A. (1986). The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173–1182. https://doi.org/10.1037/0022-3514.51.6.1173

Bartholomew, D. J., Deary, I. J., & Lawn, M. (2009). The origin of factor scores: Spearman, Thomson and Bartlett. British Journal of Mathematical and Statistical Psychology, 62(3), 569–582. https://doi.org/10.1348/000711008X365676

Bartlett, M. S. (1937). The statistical conception of mental factors. British Journal of Psychology, 28(1), 97–104.

Bauer, D. J., & Curran, P. J. (2004). The integration of continuous and discrete latent variable models: Potential problems and promising opportunities. Psychological Methods, 9(1), 3–29.

Bauer, D. J., & Curran, P. J. (2015). The discrepancy between measurement and modeling in longitudinal data analysis. In J. R. Harring, L. M. Stapleton, & S. N. Beretvas (Eds.), Advances in multilevel modeling for educational research (pp. 3–38). Information Age Publishing.

Bauer, D. J., Howard, A. L., Baldasaro, R. E., Curran, P. J., Hussong, A. M., Chassin, L., & Zucker, R. A. (2013). A trifactor model for integrating ratings across multiple informants. Psychological Methods, 18(4), 475–493. https://doi.org/10.1037/a0032475

Beauducel, A., Harms, C., & Hilger, N. (2016). Reliability estimates for three factor score estimators. International Journal of Statistics and Probability, 5(6), 94. https://doi.org/10.5539/ijsp.v5n6p94

Beiter, R., Nash, R., McCrady, M., Rhoades, D., Linscomb, M., Clarahan, M., & Sammut, S. (2015). The prevalence and correlates of depression, anxiety, and stress in a sample of college students. Journal of Affective Disorders, 173, 90–96.

Bentler, P. M., & Yuan, K.-H. (1997). Optimal conditionally unbiased equivariant factor score estimators. In M. Berkane (Ed.), Latent variable modeling and applications to causality (Lecture Notes in Statistics, Vol. 120). New York, NY: Springer. https://doi.org/10.1007/978-1-4612-1842-5_14

Bollen, K. A. (1987). Total, direct, and indirect effects in structural equation models. Sociological Methodology, 17, 37–69.

Bollen, K. A. (1989). Structural equations with latent variables. New York: Wiley.


Bollen, K. A. (2002). Latent variables in psychology and the social sciences. Annual Review of Psychology, 53, 605–634.

Bollen, K. A., & Curran, P. J. (2006). Latent curve models: A structural equation perspective. New York: Wiley.

Brewster, M. E., Moradi, B., DeBlaere, C., & Velez, B. L. (2013). Navigating the borderlands: The roles of minority stressors, bicultural self-efficacy, and cognitive flexibility in the mental health of bisexual individuals. Journal of Counseling Psychology, 60(4), 543–556. https://doi.org/10.1037/a0033224

Brown, T. A. (2006). Confirmatory factor analysis for applied research. New York: Guilford Press.

Cheung, G. W., & Lau, R. S. (2008). Testing mediation and suppression effects of latent variables: Bootstrapping with structural equation models. Organizational Research Methods, 11(2), 296–325.

Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.

Corwin, L. A., Runyon, C. R., Ghanem, E., Sandy, M., Clark, G., Palmer, G. C., … Dolan, E. L. (2018). Effects of discovery, iteration, and collaboration in laboratory courses on undergraduates’ research career intentions fully mediated by student ownership. CBE—Life Sciences Education, 17(2), 1–11. https://doi.org/10.1187/cbe.17-07-0141

Cox, B. J., Enns, M. W., & Taylor, S. (2001). The effect of rumination as a mediator of elevated anxiety sensitivity in major depression. Cognitive Therapy and Research, 25(5), 525–534. https://doi.org/10.1023/A:1005580518671

Croon, M. (2002). Using predicted latent scores in general latent structure models. In G. A. Marcoulides & I. Moustaki (Eds.), Latent variable and latent structure models (pp. 195–224). Mahwah, NJ: Lawrence Erlbaum Associates.

Curran, P. J., Georgeson, A. R., Bauer, D. J., & Hussong, A. M. (2021). Psychometric models for scoring multiple reporter assessments: Applications to integrative data analysis in prevention science and beyond. International Journal of Behavioral Development, 45(1), 40–50. https://doi.org/10.1177/0165025419896620

Curran, P. J., McGinley, J. S., Bauer, D. J., Hussong, A. M., Burns, A., Chassin, L., Sher, K., & Zucker, R. (2014). A moderated nonlinear factor model for the development of commensurate measures in integrative data analysis. Multivariate Behavioral Research, 49(3), 214–231.


Damian, R. I., & Spengler, M. (2020). Negligible effects of birth order on selection into scientific and artistic careers, creativity, and status attainment. European Journal of Personality. Advance online publication. https://doi.org/10.1177/0890207020969010

Davison, A. C., & Hinkley, D. V. (1997). Bootstrap methods and their application. Cambridge, UK: Cambridge University Press.

Devlieger, I., Mayer, A., & Rosseel, Y. (2016). Hypothesis testing using factor score regression. Educational and Psychological Measurement, 76(5), 741–770. https://doi.org/10.1177/0013164415607618

Devlieger, I., & Rosseel, Y. (2017). Factor score path analysis. Methodology, 13(Supplement 1), 31–38. https://doi.org/10.1027/1614-2241/a000130

Ebesutani, C., Chorpita, B. F., Higa-McMillan, C. K., Nakamura, B. J., Regan, J., & Lynch, R. E. (2011). A psychometric analysis of the Revised Child Anxiety and Depression Scales—Parent Version in a school sample. Journal of Abnormal Child Psychology, 39, 173–185.

Efron, B., & Tibshirani, R. (1993). An introduction to the bootstrap. New York, NY: Chapman & Hall.

Falk, C. F., & Biesanz, J. C. (2015). Inference and interval estimation methods for indirect effects with latent variable models. Structural Equation Modeling, 22(1), 24–38. https://doi.org/10.1080/10705511.2014.935266

Hastings, C., Jr., Mosteller, F., Tukey, J. W., & Winsor, C. P. (1947). Low moments for small samples: A comparative study of order statistics. Annals of Mathematical Statistics, 18(3), 413–426.

Hoyle, R. H. (2012). Model specification in structural equation modeling. In R. H. Hoyle (Ed.), Handbook of structural equation modeling (pp. 126–144). New York: The Guilford Press.

Jöreskog, K. G., Sörbom, D., & Magidson, J. (1979). Advances in factor analysis and structural equation models. Cambridge, MA: Abt Books.

Jung, K., Lee, J., Gupta, V., & Cho, G. (2019). Comparison of bootstrap confidence interval methods for GSCA using a Monte Carlo simulation. Frontiers in Psychology, 10, 2215. https://doi.org/10.3389/fpsyg.2019.02215

Kassambara, A. (2021). rstatix: Pipe-friendly framework for basic statistical tests (R package version 0.7.0). https://CRAN.R-project.org/package=rstatix


Kenny, D. A., & Milan, S. (2012). Identification. In R. H. Hoyle (Ed.), Handbook of structural equation modeling (pp. 145–163). New York: The Guilford Press.

Kessler, R. C., Green, J. G., Gruber, M. J., Sampson, N. A., Bromet, E., Cuitan, M., Furukawa, T. A., Gureje, O., Hinkov, H., Hu, C.-Y., Lara, C., Lee, S., Mneimneh, Z., Myer, L., Oakley-Browne, M., Posada-Villa, J., Sagar, R., Viana, M. C., & Zaslavsky, A. M. (2010). Screening for serious mental illness in the general population with the K6 screening scale: Results from the WHO World Mental Health (WMH) survey initiative. International Journal of Methods in Psychiatric Research, 19, 4–22. https://doi.org/10.1002/mpr.310

Kroenke, K., Baye, F., & Lourens, S. G. (2019). Comparative validity and responsiveness of PHQ-ADS and other composite anxiety-depression measures. Journal of Affective Disorders, 246, 437–443.

Ledgerwood, A., & Shrout, P. E. (2011). The tradeoff between accuracy and precision in latent variable models of mediation processes. Journal of Personality and Social Psychology, 101(6), 1174–1188. https://doi.org/10.1037/a0024776

Lei, P.-W., & Wu, Q. (2012). Estimation in structural equation modeling. In R. H. Hoyle (Ed.), Handbook of structural equation modeling (pp. 164–180). New York: The Guilford Press.

Lu, I. R. R., Kwan, E., Thomas, D. R., & Cedzynski, M. (2011). Two new methods for estimating structural equation models: An illustration and a comparison with two established methods. International Journal of Research in Marketing, 28(3), 258–268. https://doi.org/10.1016/j.ijresmar.2011.03.006

Lu, I. R. R., & Thomas, D. R. (2008). Avoiding and correcting bias in score-based latent variable regression with discrete manifest items. Structural Equation Modeling, 15(3), 462–490. https://doi.org/10.1080/10705510802154323

MacKinnon, D. P. (2008). Introduction to statistical mediation analysis. New York: Taylor & Francis Group/Lawrence Erlbaum Associates.

MacKinnon, D. P., Lockwood, C. M., Hoffman, J. M., West, S. G., & Sheets, V. (2002). A comparison of methods to test mediation and other intervening variable effects. Psychological Methods, 7(1), 83–104. https://doi.org/10.1037/1082-989X.7.1.83

McDonald, R. P. (1981). Constrained least squares estimators of oblique common factors. Psychometrika, 46(3), 337–341.

McDonald, R. P., & Burr, E. J. (1967). A comparison of four methods of constructing factor scores. Psychometrika, 32(4), 381–401. https://doi.org/10.1007/BF02289653

McNeish, D., & Wolf, M. G. (2020). Thinking twice about sum scores. Behavior Research Methods, 52, 2287–2305.

Olejnik, S., & Algina, J. (2003). Generalized eta and omega squared statistics: Measures of effect size for some common research designs. Psychological Methods, 8(4), 434–447. https://doi.org/10.1037/1082-989X.8.4.434

Preacher, K. J., & Hayes, A. F. (2008). Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models. Behavior Research Methods, 40(3), 879–891. https://doi.org/10.3758/BRM.40.3.879

Raykov, T., & Marcoulides, G. A. (2011). Introduction to psychometric theory. New York: Routledge/Taylor & Francis Group.

Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2), 1–36. https://www.jstatsoft.org/v48/i02/

Schumacker, R., & Lomax, R. (1996). A beginner’s guide to structural equation modeling. Mahwah, NJ: Erlbaum.

Skrondal, A., & Laake, P. (2001). Regression among factor scores. Psychometrika, 66(4), 563–575. https://doi.org/10.1007/bf02296196

Sobel, M. E. (1986). Direct and indirect effects in linear structural equation models. Sociological Methods & Research, 16(1), 155–176.

Tabachnick, B. G., & Fidell, L. S. (2013). Using multivariate statistics (6th ed.). Boston, MA: Pearson.

Thissen, D., & Orlando, M. (2001). Item response theory for items scored in two categories. In H. Wainer & D. Thissen (Eds.), Test scoring. Mahwah, NJ: Lawrence Erlbaum Associates.

Thissen, D., & Wainer, H. (2001). An overview of test scoring. In H. Wainer & D. Thissen (Eds.), Test scoring (pp. 1–19). Mahwah, NJ: Lawrence Erlbaum Associates.

Thomson, G. H. (1935a). Measuring general intelligence by tests which break the g-hierarchy. Nature, 135, 71.

Thomson, G. H. (1935b). Definition and measurement of general intelligence. Nature, 135, 509.

Thomson, G. H. (1938). Methods of estimating mental factors. Nature, 141, 246.

Thomson, G. H. (1939). The factorial analysis of human ability. London: University of London Press.

Thurstone, L. L. (1935). The vectors of mind (pp. 226–231). Chicago: University of Chicago Press.

Tucker, L. R. (1971). Relations of factor score estimates to their use. Psychometrika, 36(4), 427–436.

Tunvirachaisakul, C., Gould, R. L., Coulson, M. C., Ward, E. V., Reynolds, G., Gathercole, R. L., Grocott, H., Supasitthumrong, T., Tunvirachaisakul, A., Kimona, K., & Howard, R. J. (2018). Predictors of treatment outcome in depression in later life: A systematic review and meta-analysis. Journal of Affective Disorders, 227, 164–182. https://doi.org/10.1016/j.jad.2017.10.008

Valluzzi, J. L., Larson, S. L., & Miller, G. E. (2003). Indications and limitations of structural equation modeling in complex surveys: Implications for an application in the Medical Expenditure Panel Survey (MEPS). In 2003 JSM Proceedings of the American Statistical Association: Section on Survey Research Methods (pp. 4345–4352). San Francisco, CA.

Wainer, H. (1976). Estimating coefficients in linear models: It don’t make no nevermind. Psychological Bulletin, 83(2), 213–217. https://doi.org/10.1037/0033-2909.83.2.213

Walss-Bass, C., Suchting, R., Olvera, R. L., & Williamson, D. E. (2018). Inflammatory markers as predictors of depression and anxiety in adolescents: Statistical model building with component-wise gradient boosting. Journal of Affective Disorders, 234, 276–281. https://doi.org/10.1016/j.jad.2018.03.006

West, S. G., Taylor, A. B., & Wu, W. (2012). Model fit and model selection in structural equation modeling. In R. H. Hoyle (Ed.), Handbook of structural equation modeling (pp. 209–231). New York: The Guilford Press.

Wickham, H. (2016). ggplot2: Elegant graphics for data analysis. New York: Springer-Verlag. ISBN 978-3-319-24277-4. https://ggplot2.tidyverse.org

Widaman, K. F., & Thompson, J. S. (2003). On specifying the null model for incremental fit indices in structural equation modeling. Psychological Methods, 8(1), 16–37. https://doi.org/10.1037/1082-989X.8.1.16

Wright, D. B., & Herrington, J. A. (2011). Problematic standard errors and confidence intervals for skewness and kurtosis. Behavior Research Methods, 43, 8–17. https://doi.org/10.3758/s13428-010-0044-x