This is the published version: Sridharan, Bhavani; Deng, Hepu; Kirk, Joyce; and Corbitt, Brian 2010, Structural equation modeling for evaluating the user perceptions of e-learning effectiveness in higher education, in ECIS 2010: Proceedings of the 18th European Conference on Information Systems, [The Conference], [Pretoria, South Africa], pp. 1-13. Available from Deakin Research Online: http://hdl.handle.net/10536/DRO/DU:30064198. Reproduced with the kind permission of the copyright owner. Copyright 2010, European Conference on Information Systems.



Association for Information Systems
AIS Electronic Library (AISeL)

ECIS 2010 Proceedings, European Conference on Information Systems (ECIS)

1-1-2010

Structural Equation Modeling for Evaluating the User Perceptions of E-Learning Effectiveness in Higher Education

Bhavani Sridharan, RMIT University, [email protected]

Hepu Deng, RMIT University, [email protected]

Joyce Kirk, RMIT University, [email protected]

Brian Corbitt, RMIT University, [email protected]

Follow this and additional works at: http://aisel.aisnet.org/ecis2010

This material is brought to you by the European Conference on Information Systems (ECIS) at AIS Electronic Library (AISeL). It has been accepted for inclusion in ECIS 2010 Proceedings by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact [email protected].

Recommended Citation
Sridharan, Bhavani; Deng, Hepu; Kirk, Joyce; and Corbitt, Brian, "Structural Equation Modeling for Evaluating the User Perceptions of E-Learning Effectiveness in Higher Education" (2010). ECIS 2010 Proceedings. Paper 59. http://aisel.aisnet.org/ecis2010/59


STRUCTURAL EQUATION MODELLING FOR EVALUATING THE USER PERCEPTIONS OF E-LEARNING EFFECTIVENESS IN HIGHER EDUCATION

Journal: 18th European Conference on Information Systems

Manuscript ID: ECIS2010-0462.R1

Submission Type: Research Paper

Keyword: E-learning, Structural modeling, Knowledge management, Learner-centered design

18th European Conference on Information Systems


STRUCTURAL EQUATION MODELLING FOR EVALUATING THE USER PERCEPTIONS OF E-LEARNING EFFECTIVENESS IN HIGHER EDUCATION

Bhavani Sridharan, RMIT University, GPO Box 2476V, Victoria, 3000, Australia, [email protected]

Hepu Deng, RMIT University, GPO Box 2476V, Victoria, 3000, Australia, [email protected]

Joyce Kirk, RMIT University, GPO Box 2476V, Victoria, 3000, Australia, [email protected]

Brian Corbitt, RMIT University, GPO Box 2476V, Victoria, 3000, Australia, [email protected]

Abstract

A crucial prerequisite for sustainable e-learning is an understanding of learners' preferences for various pedagogical strategies, technologies, and the management of learning resources. This paper presents an empirical study that tests the theoretical pedagogies, technologies and management (PTM) model against the preferences of learners and the perceived impact of these dimensions on the effectiveness of e-learning. The study uses structural equation modelling (SEM) to identify the critical dimensions in the PTM model for augmenting the effectiveness of e-learning. This leads to the development of a PTM model with path coefficients showing weak to strong relationships, ranging from 0.15 to 0.42, at acceptable significance levels. The results support the hypothesis that the management, technology, resources and metadata ontology dimensions affect the effectiveness of e-learning both directly and indirectly by enhancing the management effectiveness of learning resources. However, the results do not support a positive influence of pedagogical strategy per se on e-learning effectiveness. The implications of this study indicate the criticality of the effective management of learning resources for enhancing e-learning effectiveness.

Key words: E-learning, empirical study, structural equation modelling, determinants of e-learning effectiveness


1 INTRODUCTION

Electronic learning or e-learning is "the delivery of educational content via electronic media" (Tastle, White & Shackleton, 2005). The term covers a range of definitions, from fully web-dependent and mixed-mode to partially web-supplemented teaching and learning. In this study, 'e-learning' is defined as an overarching structure incorporating any form of web-based technology supporting pedagogical strategies or the management of learning resources to enhance the delivery of courses online.

The theoretical underpinning of this study is based on the literature on three critical dimensions for sustainable e-learning: pedagogical strategies; technologies and learning resources supporting pedagogies; and the management of learning resources. First, there is abundant literature supporting the use of multiple pedagogical strategies (Conole 2008) for enhancing the effectiveness of learning based on the student-centred learning philosophy (Dewey 1938; Vygotsky 1978). Second, equally important yet overlooked issues are the technologies and learning resources supporting pedagogical strategies (Alavi, et al. 2001) in e-learning. Third, an emerging focus has recently been placed on the significance of the management of learning resources for improving the effectiveness of e-learning (Yang, et al. 2006). Evaluating the critical factors entwined between these dimensions is vital for enhancing the effectiveness of e-learning.

The importance of using student-centred learning strategies to enhance the effectiveness of learning based on the constructivist learning paradigm is well recognized. For instance, evidence from the literature suggests that incorporating strategies such as active learning (Conole 2008), collaborative learning (Marks, et al. 2005), adaptive learning (Brusilovsky 2004), explorative learning (Or-Bach 2005) and concept mapping techniques (Chmielewski, et al. 1998) positively influences the effectiveness of e-learning. Furthermore, appropriate technologies and learning resources supporting pedagogies are equally critical. There is extensive literature on technologies supporting various pedagogies, such as computer-supported collaborative learning, adaptive learning technologies, active learning technologies and technologies supporting concept map creation (Conole 2008). Prevailing literature on the technology acceptance model (TAM) and its variations has explored the critical determinants of technology adoption and user behaviour (Venkatesh, et al. 2003). In addition, learning resources supporting pedagogies, and the learning resources generated from various pedagogies, require attention to the effective management of learning resources (Yang, et al. 2006). To cope with this management aspect of learning resources, metadata ontologies (Gasevic, et al. 2006) and supporting management activities (Nonaka, et al. 2003) are essential. As a result, blending pedagogical strategies, technologies and learning resources embracing pedagogies, and the management of learning resources, is acknowledged as enhancing the effectiveness of e-learning (Sridharan, et al. 2008).

Existing models for evaluating the complex interaction between the dimensions of pedagogies, technologies, learning resources and management factors influencing e-learning effectiveness, however, have not yet been widely tested and developed (Hoffman 2005). In addition, learners' criteria for evaluating the effectiveness of e-learning are generally not well understood. An effective method for evaluating the relationships between these critical dimensions is to draw on the preferences and perceptions of learners (Ehlers 2004) in order to ascertain, explain, envisage and exploit the existing potential for creating sustainable e-learning.

This study examines learners' preferences for pedagogies, technologies, learning resources and management factors, and their perceived impact on the effectiveness of e-learning. The remainder of the article is organized as follows. Section 2 identifies the dimensions based on the literature and the relationships representing the various hypotheses. The research question and the research methodology are presented in Section 3. The SEM results are analysed in Section 4. Finally, a conclusion with the limitations of the study and future research directions is presented in Section 5.


2 LITERATURE REVIEW AND HYPOTHESES

Existing research evaluating the critical dimensions for augmenting the effectiveness of e-learning has been established under three critical dimensions, namely the use of various pedagogical strategies (Conole 2008); diverse technologies (Ras, et al. 2009) and learning resources (Huang, et al. 2006; Tzeng, et al. 2007) supporting pedagogies; and the management of learning resources (Yang, et al. 2006). For instance, a collection of student-centred pedagogical strategies such as explorative strategies (Or-Bach 2005), active learning strategies (Conole 2008), collaborative learning strategies (Marks, et al. 2005), adaptive learning strategies (Brusilovsky 2004) and visual learning strategies using concept maps (Chmielewski, et al. 1998) has been recognized as enhancing the effectiveness of e-learning. To implement these strategies effectively, it is instrumental to evaluate the pedagogical preferences of learners and their perceived impact on e-learning effectiveness. To achieve this, the following hypotheses are proposed.

H1: Preferred pedagogical strategies positively influence e-learning effectiveness.

H4: Preferred pedagogical strategies positively influence management of learning resources.

In an e-learning scenario, pedagogical strategies can be realised only by incorporating associated technologies to support various pedagogies. Facilitating conditions have been identified as one of the direct determinants of the use of a new technology in the unified theory of acceptance and use of technology (UTAUT) model (Venkatesh, et al. 2003). However, to identify the facilitating conditions in e-learning, exploring users' preferences for various learning technologies and their perceived impact on e-learning effectiveness is an important dimension. In this regard, several studies have proposed various information and communication technologies (ICT) (Alavi, et al. 2001; Serva, et al. 2004) supporting diverse learning strategies for enhancing the e-learning environment. Information retrieval technologies, active learning technologies (Alavi and Leidner, 2001), computer-supported collaborative learning (CSCL) technologies (Chou, et al. 2009), adaptive learning technologies (Brusilovsky 2004) and concept map technologies have been recommended. In addition, these technologies lead to the generation of vast learning resources, providing impetus for the management of learning resources. Despite the identified technologies supporting pedagogies, there has been little research evaluating the technology preferences of learners and their perceived impact on e-learning effectiveness and management factors. Therefore, the following hypotheses are proposed.

H2a: Preferred technologies supporting various pedagogies positively influence e-learning effectiveness.

H5a: Preferred technologies supporting various pedagogies positively influence the management of learning resources.

Besides pedagogies and technologies, it is equally critical to provide multiple learning resources (Huang, et al. 2006; Tzeng, et al. 2007) embracing pedagogies and technologies. Proposed learning resources enhancing e-learning effectiveness include authenticated, relevant external resources (Drago, 2002), interactive multimedia resources (Hannafin, et al. 1997), adaptive learning resources based on the individual learner's level (May, et al. 2003; Brusilovsky 2004) and diagram-based learning resources (Chang, et al. 2001). The other aspect of embracing these resources is an evaluation of learners' preferred learning resources and their perceived impact on e-learning effectiveness and management factors. Thus the following hypotheses are proposed.

H2b: Preferred learning resources supporting various pedagogical strategies positively influence e-learning effectiveness.

H5b: Preferred learning resources supporting various pedagogical strategies positively influence the management of learning resources.

The use of multiple pedagogies, technologies and learning resources results in the generation of an enormous volume of learning resources, leading to an information overload problem. Numerous studies have identified the importance of the effective management of learning resources for enhancing e-learning effectiveness (Yang, et al. 2006). The key indicators within the management aspect


include capture, elicit, organise, retrieve, authenticate and reuse (Nonaka, et al. 2003). In addition, the significance of learning object metadata (LOM) in enhancing management activities through metadata description has been recognized. An array of indicators describing the characteristics of learning resources has been identified from various existing standards such as Dublin Core, IEEE Learning Object Metadata and IMS Learning Resource Meta-data. Furthermore, ontology-based elements (Gasevic, et al. 2006) describing the content, context and structure of learning resources have been identified. However, maintaining both learning resources and metadata ontologies is a huge task. Realising the sustainable management of learning resources necessitates an evaluation of learners' preferred management factors and metadata ontologies and their perceived impact on e-learning effectiveness. Thus the following hypotheses are proposed:

H3: Preferred elements of metadata ontology positively influence e-learning effectiveness.

H6: Preferred elements of metadata ontology positively influence the management of learning resources.

H7: Preferred management factors positively influence e-learning effectiveness.

To understand how these critical dimensions affect e-learning effectiveness and management effectiveness, indicator variables from the learning community's perspective are crucial. The impact indicators for this study were derived from both the literature and interviews with stakeholders in e-learning. Identified e-learning impact indicators include learning outcomes (Tatana, et al. 2002), enjoyment (Walker, et al. 2005), satisfaction (Drennan, et al. 2005), stimulation and critical thinking skills (Hay, et al. 2004), among others. Impact indicators identified for management effectiveness include the availability, accessibility (Drago, et al. 2002; Drennan, et al. 2005), quality (Peltier, et al. 2007), relevancy (Drago, et al. 2002) and reusability of learning resources. Evaluating the influence of the critical dimensions on these impact factors is critical, yet it has not been given sufficient attention in the literature. Thus, the following hypotheses are proposed.

H8: Perceived management factors positively influence the management effectiveness of learning resources.

H9: Perceived management effectiveness positively influences e-learning effectiveness.

Based on the initial findings, the research to date provides some knowledge of the influence of pedagogical strategies, technologies and learning resources supporting pedagogies, and the management of learning resources, on e-learning effectiveness, as shown in the theoretical model in Figure 1. Nevertheless, evaluations of the complex interaction between these three dimensions have not yet been widely tested and developed (Hoffman 2005). In addition, there is a need for further empirical study evaluating the critical success factors intertwined between these dimensions based on the preferences and perceptions of learners. As a result, this study attempts to build on the existing literature by evaluating learners' preferences for these dimensions and their perceptions of the impact on the effectiveness of e-learning.

Figure 1. Theoretical PTM model specification


3 RESEARCH QUESTIONS AND METHODOLOGY

3.1 Research question

This study aims to evaluate the critical dimensions of e-learning effectiveness and to understand the relationships entwined between these dimensions from the users' perceptions. Towards fulfilling this objective, the main research question for this research is: 'How can a model be developed and evaluated to identify the critical dimensions of e-learning effectiveness from the perceptions of learners?' To answer this question, this study uses SEM to test the relationships proposed in the theoretical model.

3.2 Survey and data collection

The development of the survey instrument following the theoretical model specification entailed a four-stage approach: a review of the relevant literature, interviews with major stakeholders, questionnaire development and pilot testing of the questionnaire. The four exogenous latent constructs for the survey instrument were pedagogy, technology, learning resources and metadata ontologies. The three endogenous latent constructs were management of learning resources, management effectiveness and e-learning effectiveness. Table 1 summarises the constructs, descriptions, sample items and sources of the items, besides the demographic section.

Dimension | Description | Sample Items | No. of Items | Literature

Section B
Pedagogy | Preferences for various learning and teaching strategies | Facilities to learn and teach by discussion with peers. Facilities to learn by doing. | 6 | Conole et al. 2008; Serva et al. 2004; Brusilovsky 2004
Technology | Preferences for various technologies supporting e-learning strategies | Technologies to share and learn from peers. Technologies supporting interactive learning. | 9 | Alavi et al. 2001; Bongey et al. 2005; Chou et al. 2009
Learning Resources | Preferences for various learning resources supporting e-learning | Multi-media resources. Resources from discussion forums. | 8 | Huang et al. 2006; Tzeng et al. 2007; Drago 2002

Section C
Management | Preferences for various management factors supporting learning resources | Access to quality learning resources. Presentation of learning resources. | 8 | May et al. 2003; Yang et al. 2006; Demidova et al. 2005
Metadata | Preferences for metadata elements | Keywords. Pre-requisite, co-requisite resources. | 16 | Gasevic et al. 2006; Jovanovic et al. 2006

Section D
Pedagogy impact | Perceived impact on e-learning effectiveness | Learning outcome. Satisfaction. Critical thinking skills. | 7 | Tatana et al. 2002; Walker et al. 2005
Management impact | Perceived impact on management effectiveness | Accessibility. Adaptability. Reusability. | 8 | Drago et al. 2002; Peltier et al. 2007; Drennan et al. 2005

Table 1. PTM Survey constructs, sample items and sources

Using the survey instrument, data were gathered from 210 respondents at RMIT University via an online survey. The demographic details of the respondents, in terms of specialization and course level, are shown in Table 2. The original questionnaire comprised 62 items measuring four exogenous dimensions and three endogenous dimensions. A seven-point Likert-type scale was used, where 1 = least preferred and 7 = most preferred. No data were discarded, as various validation procedures were incorporated, and therefore the final data analysis was based on 210 samples.


Courses | No. | %
Undergraduate | 131 | 63
Postgraduate | 70 | 33
Others | 9 | 4
Total | 210 | 100

Specialization Area | No. | %
Computer Science | 135 | 65
Economics | 36 | 17
Humanities | 7 | 3
Science | 4 | 2
Others | 28 | 13
Total | 210 | 100

Table 2. Course level and specialization area of respondents

3.3 Structural equation modelling

SEM is a technique for testing hypothesized relationships among variables by simultaneously estimating a series of separate, yet interdependent, multiple regressions. The use of SEM is considered appropriate for this study because of its great potential for extending theory development (Gefen, et al. 2000) and its capability of simultaneously assessing multiple and interrelated dependence relationships. Furthermore, this study incorporates latent variables representing unobserved concepts, which SEM makes possible through its ability to include latent variables while accounting for measurement error in the estimation process (Hair, et al. 1998).

This study uses the two-step approach to SEM, namely a measurement model followed by a structural model (Hair, et al. 1998). The measurement model is estimated first, followed by the estimation of the structural model. The measurement model involves conducting a confirmatory factor analysis (CFA) to assess the contribution of each indicator variable and to measure the adequacy of the measurement model. The first step in the CFA is model specification. The second step is an iterative model modification process that develops a more parsimonious set of items to represent a construct through refinement and retesting. The third step is to estimate the parameters of the specified model. The

overall model fitness is evaluated by several goodness-of-fit measures to assess the extent to which the data support the conceptual model. The goodness-of-fit (GOF) measures used in this study include the likelihood ratio chi-square (χ2), the ratio of χ2 to degrees of freedom (χ2/df), the GOF index (GFI), the adjusted GOF index (AGFI), the root mean square error of approximation (RMSEA) and the Tucker-Lewis index (TLI). All of these measures except the TLI are absolute GOF measures; the TLI compares the proposed model to the null model. The guidelines for acceptable values for these measures are discussed below.

A non-significant χ2 (p > 0.05) is considered to indicate a good fit for the χ2 GOF measure. However, a model with a significant χ2 is not necessarily a poor fit. As a result, the ratio of χ2 to degrees of freedom (χ2/df) is proposed as an additional measure of GOF; a value smaller than 3 is recommended for accepting the model as a good fit (Chin, et al. 1995). The GFI was developed to overcome the limitations of the sample-size-dependent χ2 measure (Joreskog, et al. 1993); a GFI value higher than 0.9 is recommended as a guideline for a good fit. The AGFI extends the GFI by adjusting for the ratio of the degrees of freedom of the proposed model to the degrees of freedom of the null model; an AGFI value greater than 0.9 is an indicator of good fit (Segars, et al. 1993). The RMSEA measures the mean discrepancy between the population estimates from the model and the observed sample values; RMSEA < 0.1 indicates good model fit (Browne, et al. 1993; Hair, et al. 1998). The TLI, an incremental fit measure, indicates a good fit with a value of 0.9 or more (Hair, et al. 1998). Based on these guidelines, problematic items that caused unacceptable model fit were excluded to develop a more parsimonious model with a limited number of items.
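As a small worked check of two of these measures, the χ2/df ratio and the RMSEA can be computed directly from a model's χ2, degrees of freedom and sample size. The sketch below uses the standard point-estimate formula RMSEA = sqrt(max(χ2 − df, 0) / (df·(N − 1))) together with the paper's full measurement model values (χ2 = 113.75, df = 97, N = 210) and reproduces the reported figures.

```python
import math

def chi2_df_ratio(chi2: float, df: int) -> float:
    """Ratio of chi-square to degrees of freedom; a value below 3 suggests good fit."""
    return chi2 / df

def rmsea(chi2: float, df: int, n: int) -> float:
    """Point estimate of the RMSEA; a value below 0.1 suggests good fit."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Full measurement model values from Table 3: chi2 = 113.75, df = 97, N = 210.
ratio = chi2_df_ratio(113.75, 97)  # ≈ 1.17, matching the reported value
fit = rmsea(113.75, 97, 210)       # ≈ 0.03, matching the reported value
print(round(ratio, 2), round(fit, 2))
```

That the recomputed values agree with the table's 1.17 and 0.03 is a useful consistency check on the reported fit statistics.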


Three types of reliability were tested in this study: internal consistency reliability, item reliability and construct reliability (Fornell, et al. 1981; Hair, et al. 1998). Internal consistency reliability was tested using Cronbach's alpha coefficients calculated in SPSS for Windows 17.0. The alpha coefficient measures the extent to which the multiple indicators for a latent variable belong together; for unidimensional scales, a Cronbach's alpha of 0.6 or more is considered acceptable (Nunnally 1967). Item reliability indicates "the amount of variance in an item due to underlying construct rather than to error" (Chau 1997, p. 324). This is assessed using the squared multiple correlation (SMC) value, that is, the square of the standardized factor loading. An item is considered reliable if its standardized loading is greater than 0.7 (equivalent to an SMC of 0.50) (Chin, et al. 1995); however, a value of 0.5 and above is still acceptable in e-learning (Johnson, et al. 2001). Construct reliability was tested using the composite reliability measure, which assesses the extent to which the items in a construct measure the latent concept. A commonly accepted threshold for composite reliability is 0.7 or more, although values below 0.7 have been considered acceptable (Hair et al., 1998).
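The two reliability statistics named above follow directly from textbook formulas: Cronbach's alpha from raw item scores, and composite reliability from standardized factor loadings. The sketch below implements both; the toy item scores and loadings are invented for illustration and are not from the paper's survey.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score lists (one inner list per item).

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each error variance is 1 - loading^2 in the standardized solution."""
    s = sum(loadings) ** 2
    return s / (s + sum(1 - l ** 2 for l in loadings))

# Illustrative data (three items, five respondents; not from the paper):
items = [[5, 6, 4, 7, 5], [6, 6, 5, 7, 4], [5, 7, 4, 6, 5]]
print(round(cronbach_alpha(items), 2))                    # 0.87, above the 0.6 threshold
print(round(composite_reliability([0.8, 0.75, 0.7]), 2))  # 0.79, above the 0.7 threshold
```

Both values clear the acceptance thresholds cited in the text (0.6 for alpha, 0.7 for composite reliability).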

Validity measures the extent to which a set of indicators accurately represents a construct (Hair et al., 1998). Two validity measures were tested: convergent validity and discriminant validity. Convergent validity measures the extent to which the items truly represent the intended latent construct, and is assessed by factor loadings and composite reliability (Hair et al., 1998). Since a standardized factor loading of 0.5 and above is considered acceptable in learning environments (Johnson, et al. 2001), a cut-off value of 0.6 and above was used in this study; for the composite reliability results, a threshold value of 0.5 was set for testing convergent validity. Discriminant validity measures the extent to which conceptually similar constructs are distinct. It is examined by comparing the correlations between constructs with the square root of the average variance extracted (AVE), where the AVE represents the overall amount of variance in the indicators accounted for by the latent construct. For satisfactory discriminant validity, the square root of the AVE should be greater than the correlations between the constructs (Bhattacherjee, et al. 2004; Wixom, et al. 2005).
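This Fornell-Larcker-style check can be sketched directly from the definitions above: the AVE is the mean of the squared standardized loadings, and discriminant validity holds when the square root of each construct's AVE exceeds its correlations with the other constructs. The loadings and correlation values below are invented for illustration.

```python
import math

def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def discriminant_ok(loadings_a, loadings_b, corr_ab):
    """Fornell-Larcker criterion for a pair of constructs: sqrt(AVE) of each
    construct must exceed the inter-construct correlation."""
    return (math.sqrt(ave(loadings_a)) > abs(corr_ab)
            and math.sqrt(ave(loadings_b)) > abs(corr_ab))

# Illustrative values (not from the paper):
a = [0.8, 0.75, 0.7]  # standardized loadings for construct A
b = [0.85, 0.72]      # standardized loadings for construct B
print(discriminant_ok(a, b, 0.55))  # sqrt(AVE) ≈ 0.75 and 0.79, both > 0.55
```

Note that a very high inter-construct correlation fails this test, which is exactly why the paper merged the technology and learning resources constructs at the discriminant validity stage.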

In contrast to the measurement models, the structural model contains primarily latent exogenous and endogenous variables (unobserved constructs), the paths or direct effects (theoretical relationships) between them, and the disturbance terms for the unmeasured variables in the model. The path coefficient indicates the strength and sign of each path. SEM provides more accurate estimates of causal relationships because measurement error is incorporated in the measurement models. The structural model was evaluated using AMOS 4.0 with maximum likelihood estimation.

The criteria used to test the structural model are the overall GOF, for explaining the variance in the dependent variables, and the significance of the model's path coefficients. The same set of GOF measures and guidelines used for testing the measurement models is used for checking the overall fitness of the structural model. The model's capacity to explain the variation in a dependent variable is measured by the squared multiple correlation (SMC) value for each structural equation (path) in the model. The significance of each path coefficient is assessed using its standard error and t-value. In addition to the statistical significance of the path coefficients, the strength of a relationship determines whether it is weak, moderate or strong. Following Cohen's (1988) rules of thumb, a correlation below 0.2 is considered weak in this research, a correlation between 0.2 and 0.5 is defined as moderate, and a correlation greater than 0.5 is considered strong.
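The strength bands just described amount to a simple threshold rule, sketched below; the function name and band labels are our own shorthand for the paper's Cohen-style cut-offs.

```python
def path_strength(coefficient: float) -> str:
    """Classify a standardized path coefficient using the Cohen-style cut-offs
    adopted in the paper: below 0.2 weak, 0.2 to 0.5 moderate, above 0.5 strong."""
    c = abs(coefficient)
    if c < 0.2:
        return "weak"
    if c <= 0.5:
        return "moderate"
    return "strong"

# The paper reports path coefficients ranging from 0.15 to 0.42:
print(path_strength(0.15))  # weak
print(path_strength(0.42))  # moderate
```

Applied to the reported range of 0.15 to 0.42, the rule labels the weakest path weak and the strongest moderate, consistent with the abstract's "weak to strong" characterisation spanning the boundary.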

4 DATA ANALYSIS

Based on the results of the survey of 210 samples, the theoretical research model was tested by applying SEM for data analysis using AMOS 4.0 software. The sample size of 210 is considered adequate for applying SEM in this research. An iterative measurement model was constructed, followed by a structural model, to test both the proposed model and the hypotheses. This resulted in the deletion of 46 items and the retention of 16 items at various stages of model development. This also resulted


in the combination of two exogenous constructs, namely technology and resources, owing to the high correlation between these two constructs at the discriminant analysis stage.

4.1 Measurement models

The normality assumption was not violated, with skewness and kurtosis statistics in an acceptable range (values from 0.10 to 2.29). Therefore, the maximum likelihood method of estimation was chosen for the SEM analysis. Table 3 presents the GOF indices for both the initial measurement models and the final measurement models for all constructs. The last two rows give the GOF results for the full measurement model and the recommended values for acceptable GOF. The overall GOF measures for some of the initial models did not meet the acceptable criteria, as shown in Table 3. These models were revised based on an assessment of the factor loadings and the suggestions from modification indices. This resulted in the deletion of 5 items at the reliability analysis stage, 32 items at the convergent validity stage and 9 items at the discriminant validity stage. It also required combining technology and learning resources into one construct owing to a lack of discriminant validity. The final GOF results for both the individual measurement models and the full measurement model are within the acceptable range, with a non-significant χ2 (p > 0.05), GFI, AGFI and TLI values greater than 0.9, and an RMSEA value below 0.10.

Construct                              Items   χ2       df    P      χ2/df   GFI    AGFI   RMSEA   TLI
Pedagogy - Initial                     3       5.118    2     0.08   2.56    0.98   0.95   0.09    0.93
Pedagogy - Final                       2       1.24     1     0.26   1.24    0.99   0.98   0.03    0.99
Technology - Initial                   7       38.83    14    0.00   2.77    0.95   0.90   0.09    0.86
Learning Resources - Initial           8       46.49    20    0.00   2.32    0.94   0.90   0.08    0.91
Technology and Resources - Final       3       1.63     1     0.20   1.63    0.99   0.97   0.05    0.93
Metadata Ontology - Initial            16      700.22   104   0.00   6.73    0.71   0.62   0.17    0.61
Metadata Ontology - Final              2       0.004    1     0.95   0.004   1.00   1.00   0.00    1.00
Management - Initial                   8       41.46    20    0.00   2.07    0.95   0.92   0.07    0.95
Management - Final                     2       0.46     1     0.83   0.05    0.99   0.99   0.00    1.00
E-learning Effectiveness - Initial     7       128.04   14    0.00   9.15    0.84   0.68   0.20    0.70
E-learning Effectiveness - Final       2       0.33     1     0.57   0.33    0.99   0.99   0.00    1.00
Management Effectiveness - Initial     8       68.85    20    0.00   3.44    0.92   0.86   0.11    0.93
Management Effectiveness - Final       5       7.71     5     0.17   1.54    0.99   0.96   0.05    0.99
Full Measurement Model                 16      113.75   97    0.12   1.17    0.94   0.91   0.03    0.99
Recommended Value                                            ≥.05   ≤3.0    ≥0.9   ≥0.9   ≤0.1    ≥0.9

Table 3. Goodness-of-fit results for the measurement models
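As a rough illustration of how the recommended values in the last row of Table 3 are applied, the following sketch checks a set of reported indices against those thresholds (the dictionary layout and function names are assumptions for illustration, not from the paper):

```python
# Recommended GOF criteria from Table 3: P >= .05, chi2/df <= 3.0,
# GFI/AGFI/TLI >= 0.9, RMSEA <= 0.1.
RECOMMENDED = {
    "p":       lambda v: v >= 0.05,
    "chi2_df": lambda v: v <= 3.0,
    "GFI":     lambda v: v >= 0.9,
    "AGFI":    lambda v: v >= 0.9,
    "RMSEA":   lambda v: v <= 0.1,
    "TLI":     lambda v: v >= 0.9,
}

def acceptable_fit(indices: dict) -> dict:
    """Return a pass/fail flag for each GOF index that was reported."""
    return {name: check(indices[name])
            for name, check in RECOMMENDED.items() if name in indices}

# Full measurement model, final row of Table 3: all criteria are met.
full_model = {"p": 0.12, "chi2_df": 1.17, "GFI": 0.94,
              "AGFI": 0.91, "RMSEA": 0.03, "TLI": 0.99}
print(acceptable_fit(full_model))  # every index maps to True
```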

4.2 Reliability and validity

To verify the reliability of the latent variables in the model, internal consistency reliability, item reliability and composite reliability measures were calculated. Five items were dropped at this stage as they did not meet the item reliability criterion of 0.5. Table 4 shows the Cronbach alpha coefficients and the composite reliability results for the final model. The alpha coefficients have acceptable values ranging from 0.61 to 0.94, with the lowest value for the latent variable pedagogy and the highest for metadata ontology. The mean values of the items ranged from 5.01 (co-requisite) to 6.07 (consistent presentation). Item reliability (IR), assessed through the standardized factor loadings (FL), showed loadings ranging from 0.66 to 0.95, as shown in Table 4. The composite reliability estimates range from 0.30 (for pedagogy) to 0.90 (for management and metadata ontology), indicating acceptable reliability for all latent variables except pedagogy. Despite the low composite reliability value for pedagogy, its inclusion did not compromise the good overall model fit, so it was retained in the model. However, dropping the latent variable pedagogy when testing alternative structural models will be considered in follow-up research. The convergent validity results, assessed from the factor loadings and composite reliabilities, indicate an acceptable range of factor loadings for all items, with composite reliabilities that are good for management and management impact, reasonable for technology, and mediocre for pedagogy.

Construct (Alpha / CR)                    Indicator                  Variable   Mean   FL    IR    CR (t)   P
Pedagogy (0.61 / 0.30)                    Explorative Strategy       P3         5.26   .66   .44   11.60    ***
                                          Adaptive Strategy          P4         5.25   .66   .44   11.60    ***
Technology and Resources (0.81 / 0.65)    Concept Map Resources      LR8        5.16   .67   .45   13.12    ***
                                          Concept Map Technology     T8         4.88   .66   .44   13.12    ***
                                          Push Technology            T9         5.16   .73   .53   13.12    ***
Management (0.72 / 0.67)                  Consistent Presentation    M4         6.07   .75   .56   14.16    ***
                                          Search                     M7         6.05   .75   .56   14.16    ***
Metadata Ontology (0.94 / 0.90)           Pre-requisite              MO10       5.11   .95   .89   19.26    ***
                                          Co-requisite               MO11       5.01   .95   .89   19.26    ***
E-learning Impact (0.78 / 0.90)           Stimulating                ELE5       5.42   .80   .64   10.20    ***
                                          Critical Thinking Skills   ELE6       5.48   .80   .64   10.20    ***
Management Impact (0.91 / 0.67)           Accessibility              ME2        5.55   .75   .56   12.37    ***
                                          Quality                    ME3        5.48   .87   .76   15.53    ***
                                          Relevance                  ME4        5.49   .88   .78   15.89    ***
                                          Course Specific            ME5        5.60   .80   .64   13.60    ***
                                          Save Time                  ME7        5.47   .81   .66   14.00    ***

Table 4. Reliability and validity results of final measurement models
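The composite reliability and AVE figures of the kind reported in Table 4 are conventionally computed from the standardized loadings using Fornell and Larcker's (1981) formulas. A minimal sketch (illustrative only; the paper's estimates come from the fitted Amos model, so small differences from the tabled values are expected):

```python
# Composite reliability: CR = (sum FL)^2 / ((sum FL)^2 + sum(1 - FL^2)).
# Average variance extracted: AVE = sum(FL^2) / number of indicators.
def composite_reliability(loadings):
    s = sum(loadings)
    error = sum(1 - fl ** 2 for fl in loadings)  # indicator error variances
    return s ** 2 / (s ** 2 + error)

def ave(loadings):
    return sum(fl ** 2 for fl in loadings) / len(loadings)

# Metadata ontology indicators MO10 and MO11 from Table 4 (FL = .95 each).
loadings = [0.95, 0.95]
print(round(composite_reliability(loadings), 2))  # 0.95
print(round(ave(loadings) ** 0.5, 2))             # 0.95, square root of AVE
```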

For satisfactory discriminant validity, the square root of the average variance extracted (AVE) for each construct should be greater than the correlations between that construct and the other constructs (Fornell and Larcker 1981). Table 5 shows acceptable discriminant validity between each pair of constructs, with all AVE square roots greater than the corresponding inter-construct correlations. For example, metadata ontology showed the highest discriminant validity among the constructs: its AVE square root was 0.90, while its correlations with the other constructs ranged from 0.26 to 0.55.

Constructs                   Pedagogy   Tech. and   Metadata   Management   E-learning      Management
                                        Resources   Ontology                Effectiveness   Effectiveness
Pedagogy                     0.54
Tech. and Resources          0.51       0.90
Metadata Ontology            0.26       0.44        0.90
Management                   0.38       0.64        0.43       0.70
E-learning Effectiveness     0.34       0.59        0.55       0.60         0.71
Management Effectiveness     0.38       0.32        0.28       0.38         0.55            0.72

Table 5. AVE square roots (diagonal) and inter-construct correlations
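The Fornell-Larcker comparison described above can be checked mechanically over Table 5's lower-triangular matrix. A sketch with the table's values transcribed (the code structure itself is illustrative):

```python
# Lower-triangular matrix from Table 5, row order: Pedagogy, Tech. and
# Resources, Metadata Ontology, Management, E-learning Effectiveness,
# Management Effectiveness. Diagonal entries are AVE square roots.
lower = [
    [0.54],
    [0.51, 0.90],
    [0.26, 0.44, 0.90],
    [0.38, 0.64, 0.43, 0.70],
    [0.34, 0.59, 0.55, 0.60, 0.71],
    [0.38, 0.32, 0.28, 0.38, 0.55, 0.72],
]

def discriminant_valid(lower):
    """True when every diagonal entry (sqrt of AVE) exceeds all
    off-diagonal correlations in its row and column."""
    n = len(lower)
    # Mirror the lower triangle into a full symmetric matrix.
    full = [[lower[max(i, j)][min(i, j)] for j in range(n)] for i in range(n)]
    return all(full[i][i] > full[i][j]
               for i in range(n) for j in range(n) if i != j)

print(discriminant_valid(lower))  # True: all diagonals exceed correlations
```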

4.3 The structural model

The hypothesised model contains six latent constructs, as shown in Figure 2, as opposed to the original seven constructs of the theoretical PTM model in Figure 1. The iterative process of testing the convergent and discriminant validity of the model suggested combining the items of technology and learning resources because of the high inter-correlation between these two constructs. As a result, the revised model contains technology and resources as a single combined construct. This reduced the number of hypotheses from 11 to 9: hypotheses H2a and H2b are coupled and denoted by H2, and hypotheses H5a and H5b are coupled and denoted by H5. The hypothesised model, with the path coefficient and the explanatory power (R2) for each dependent construct, is displayed in Figure 2. Insignificant paths are shown as dotted lines in the figure. Except for two paths, all paths are statistically significant at p < 0.05 or better.

Figure 2. The hypothesised model results

The results indicate strong support for H8, H5 and H3, with path coefficients of 0.42, 0.41 and 0.33 respectively (p < 0.001). They show moderate support for H7, H6 and H2, with path coefficients of 0.29, 0.25 (p < 0.01) and 0.22 (p < 0.05) respectively. The results show weak support for H9, with a path coefficient of 0.15, though significant at the 0.05 level. However, the results indicate that H1 and H4 should be rejected, implying that pedagogy factors have an insignificant effect on both e-learning effectiveness and management factors. In terms of explanatory power, the model accounts for 50% of the variance in e-learning effectiveness, 41% of the variance in management factors and 17% of the variance in management effectiveness. To summarize, the results supported all hypotheses except H1 and H4.

5 CONCLUSION AND LIMITATIONS

In this study, SEM was used to test the hypothesised model for evaluating learners' preferences for, and perceptions of, the critical factors influencing e-learning effectiveness. The hypothesised model proposed various positive relationships between the exogenous and endogenous variables. The research found reasonably strong support for three hypotheses, medium to weak support for four, and no support for two of the nine hypothesised relationships, with statistically significant path values for the supported hypotheses. These findings indicate that the critical factors influencing e-learning effectiveness as perceived by learners are the management of learning resources, acting both directly and indirectly through enhanced management effectiveness (accessibility, adaptability and reusability); technology and resources supporting the management factors; and supporting metadata ontologies. The results also clearly indicate that pedagogical strategies per se do not influence e-learning effectiveness, leading to the rejection of H1 and H4.

There are several limitations in this study. The generalizability of the findings to e-learning is limited for three reasons. Firstly, this study considered the preferences and perceptions of learners; other stakeholders, such as teachers and educational developers, were not taken into account. Secondly, it does not include other soft factors, such as social, environmental and psychological factors, which may influence e-learning effectiveness. Thirdly, the sample is confined to a small segment of the RMIT University and OUA (Open Universities Australia) e-learning population.

Future research in this area includes independently evaluating each of the potential determinants through controlled experiments to isolate the impact of individual determinants. Regrouping the items by individual strategy (with the associated technologies and learning resources) to identify the influence of each strategy on e-learning effectiveness is also critical for effective policy decisions. Finally, future studies will incorporate other factors influencing e-learning effectiveness, such as tutor competency, learner characteristics and other soft factors.

References

Alavi, M. and Leidner, D. E. (2001). Research commentary: Technology mediated learning - a call for greater depth and breadth of research. Information Systems Research, 12(1), 1-10.

Arbaugh, J. B. (2000). Virtual classroom versus physical classroom: An exploratory study of class

discussion patterns and student learning in an asynchronous internet-based MBA course.

Journal of Management Education, 24(2), 213-233.

Bhattacherjee, A. and Premkumar, G. (2004). Understanding changes in belief and attitudes towards

information technology usage: A theoretical model and longitudinal test. MIS quarterly,

28(2), 229-254.

Browne, M. W. and Cudeck, R. (1993). Alternative ways of assessing model fit. Newbury Park: Sage

Publications.

Brusilovsky, P. (2004). Adaptive e-learning systems: Knowledge tree: A distributed architecture for

adaptive e-learning. In Proceedings of the 13th International World Wide Web Conference on

Alternate Track Papers and Posters, New York, USA.

Chang, K. E. Sung, Y. T. and Chen, S. F. (2001). Learning through computer-based concept mapping

with scaffolding aid. Journal of Computer Assisted Learning, 17(1), 21-33.

Chau, P. (1997). Re-examining a model for evaluating information centre success using a structural equation modeling approach. Decision Sciences, 28, 309-334.

Chin, W. W. and Todd, P. A. (1995). On the use, usefulness, and ease of use of structural equation modeling in MIS research: A note of caution. MIS Quarterly, 19(2), 237-247.

Chmielewski, T. L. and Dansereau, D. F. (1998). Enhancing the recall of text: Knowledge mapping training promotes implicit transfer. Journal of Educational Psychology, 90, 407-413.

Chou, S.-W. and Min, H.-T. (2009). The impact of media on collaborative learning in virtual settings:

The perspective of social construction. Computers & Education, 52(2), 417-431.

Cohen, J. (1988). Statistical power analysis for the behavioural sciences. Hillsdale, NJ: Lawrence Erlbaum Associates.

Conole, G. (2008). Using Compendium as a tool to support the design of learning activities. In Knowledge cartography (pp. 199-221).

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340.

Demidova, E. Ternier, S. Olmedilla, D. Duval, E. Dicerto, M. Stefanov, K. et al. (2005). Integration of heterogeneous information sources into a knowledge resource management system for lifelong learning. In Proceedings of the TENCompetence Workshop, Manchester, United Kingdom.

Dewey, J. (1938). Experience and education. New York: MacMillan.

Drago, W. Peltier, J. W. and Sorensen, D. (2002). Course content or instructor: Which is more

important in online teaching. Management Research News, 25(6-7), 69-83.

Drennan, J. Kennedy, J. and Pisarski, A. (2005). Factors affecting student attitudes toward flexible online learning in management education. The Journal of Educational Research, 98(6), 331-338.

Ehlers, U. D. (2004). Quality in e-learning from a learner's perspective. In Proceedings of the 3rd

Eden Research Workshop, Oldenburg, Germany.

Fornell, C. and Larcker, D. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39-50.

Gasevic, D. and Hatala, M. (2006). Ontology mapping to improve resource search. British Journal of

Educational Technology, 37(3), 375-389.

Gefen, D. Straub, D. W. and Boudreau, M. (2000). Structural equation modelling and regression:

Guidelines for research practice. Communications of the Association for Information Systems

(AIS), 4(7).


Hair, J. F. Anderson, R. E. Tatham, R. L. and Black, W. C. (1998). Multivariate data analysis (5th ed.). Englewood Cliffs, NJ: Prentice-Hall.

Hannafin, M. J. Hill, J. R. and Land, S. M. (1997). Student centred learning and interactive

multimedia: Status, issues and implication. Contemporary Education, 68(2), 94-99.

Hay, A. Peltier, J. W. and Drago, W. A. (2004). Reflective learning and online management education: A comparison of traditional and online MBA students. Strategic Change, 13, 169-176.

Hoffman, J. (2005). How to design for the live online classroom: Creating great interactive and collaborative training using web conferencing, from http://www.brandon-hall.com/publications/lol/lol.shtml

Huang, W. and Mille, A. (2006). ConKMeL: A contextual knowledge management framework to support multimedia e-learning. Multimedia Tools and Applications, 30(2), 205-219.

Johnson, B. and Stevens, J. J. (2001). Confirmatory factor analysis of the school level environment questionnaire (SLEQ). International Journal of Learning Environments Research, 4(3), 325-344.

Joreskog, K. G. and Sorbom, D. (1993). LISREL 8: Structural equation modeling with the SIMPLIS command language. Hillsdale, NJ: Lawrence Erlbaum Associates.

Marks, R. B. Sibley, S. D. and Arbaugh, J. B. (2005). A structural equation model of predictors for

effective online learning. Journal of Management Education, 29(4), 531-563.

May, G. L. and Short, D. (2003). Gardening in cyberspace: A metaphor to enhance online teaching

and learning. Journal of Management Education, 27(6), 673-693.

Nonaka, I. and Toyama, R. (2003). The knowledge-creating theory revisited: Knowledge creation as a synthesizing process. Knowledge Management Research and Practice, 1(1), 2-10.

Nunnally, J. C. (1967). Psychometric theory. New York: McGraw-Hill.

Or-Bach, R. (2005). Educational benefits of metadata creation by students. The SIGCSE Bulletin,

37(4), 93-97.

Peltier, J. W. Schibrowsky, J. A. and Drago, W. (2007). The interdependence of the factors

influencing the perceived quality of the online learning experience: A causal model. Journal

of Marketing Education, 29(2), 140-153.

Ras, E. and Rech, J. (2009). Using wikis to support the net generation in improving knowledge

acquisition in capstone projects. Journal of Systems and Software, In Press, Accepted

Manuscript.

Segars, A. H. and Grover, V. (1993). Re-examining perceived ease of use and usefulness: A

confirmatory factor analysis. MIS quarterly, 17(4), 517-525.

Serva, M. A. and M. A. Fuller (2004). Aligning What We Do and What We Measure in Business

Schools: Incorporating Active Learning and Effective Media Use in the Assessment of

Instruction, Journal of Management Education, (28)1, 19-38. Sridharan, B. Deng, H. and Corbitt, B. (2008). Evaluating intertwined critical success factors for

sustainable elearning. In Proceedings of the 19th Australasian Conference on Information

Systems, Christchurch, New Zealand.

Tatana, O. and Robert, A. W. (2002). The effectiveness of web-based instruction: An initial inquiry.

Tzeng, G.-H. Chiang, C.-H. and Li, C.-W. (2007). Evaluating intertwined effects in e-learning

programs: A novel hybrid MCDM model based on factor analysis and DEMATEL. Expert

Systems with Applications, 32(4), 1028-1044.

Venkatesh, V. Morris, M. G. Davis, G. B. and Davis, F. D. (2003). User acceptance of information

technology: Toward a unified view. MIS Quarterly, 27(3), 425-478.

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes.

Cambridge: Harvard University Press.

Walker, S. and Fraser, B. (2005). Development and validation of an instrument for assessing distance learning environments in higher education: The distance education learning environments survey (DELES). International Journal of Learning Environments Research, 8(3), 289-308.

Wixom, B. H. and Todd, P. A. (2005). A theoretical integration of user satisfaction and technology acceptance. Information Systems Research, 16(1), 85-102.


Yang, C.-Z., et al. (2006). RDF/XTM Ontology Construction Based on a Topic Maps-Driven

Framework, Digital Libraries: Achievements, Challenges and Opportunities, Berlin /

Heidelberg: Springer, 523-526.
