Undergraduates’ learning experience and learning
process: quantitative evidence from the East
Beverley J. Webster · Wincy S. C. Chan · Michael T. Prosser · David A. Watkins
Published online: 4 February 2009. © Springer Science+Business Media B.V. 2009
Abstract This article examines the construct validity of the Course Experience Ques-
tionnaire (CEQ) in Hong Kong and investigates the similarities and differences in the
process of learning among students in different disciplinary studies. It is based on a survey
of 1,536 undergraduate students in two disciplines, humanities and sciences, and of
principally Chinese ethnicity. Findings from exploratory and confirmatory factor analyses
support the scale structure of the four subscales of a modified version of the CEQ (good teaching, clear goals and standards, appropriate workload, and appropriate assessment) in a non-Western context and could provide a basis for cross-cultural research and international
benchmarking. While there was variation across subgroups, there was a general pattern of
relationships between the perceptions of learning environment and learning strategies
shown by structural modeling. This information could be used to inform the design of
discipline-specific programs in the new curriculum.
Keywords Course experience · Learning strategy · Undergraduates · Hong Kong Chinese
Introduction
There is an increasing number of student surveys of perceptions of university teaching
and learning environments and experiences. Surveys of this kind are now commonplace
in countries such as Australia and the UK. They are utilized for accountability, for learning improvement, or sometimes both. In such contexts, the Course
Experience Questionnaire (CEQ; Ramsden 1991; Wilson et al. 1997) has been widely
used. For example, all graduates in Australia are asked to complete the CEQ as part of a
graduate destination questionnaire. The Australian Government has used and is still using
B. J. Webster · W. S. C. Chan (&) · M. T. Prosser
Centre for the Advancement of University Teaching, University of Hong Kong, Pokfulam, Hong Kong
e-mail: [email protected]
D. A. Watkins
Faculty of Education, University of Hong Kong, Pokfulam, Hong Kong
High Educ (2009) 58:375–386
DOI 10.1007/s10734-009-9200-6
such data to assess the performance and ascertain the needs of its universities. Most
Australian universities have also analyzed and reported their own institution’s CEQ
scores for internal purposes, often conducting surveys of their own undergraduates to
inform curriculum and staff development efforts. The CEQ evolved from theory and continued research on student learning in higher education over the past few decades, and it is believed that the information obtained from the survey can be useful for professional
development. Survey results can highlight issues to address in relation to the student
learning experience. Such issues are known to relate to the promotion of the deeper learning approaches in students that are necessary for the better learning outcomes desired at tertiary level.
Universities in Hong Kong are now being asked to demonstrate the quality of the
learning outcomes of their students for system wide accountability and improvement
purposes (University Grants Committee 2005). The CEQ would seem to be a potentially
useful instrument upon which to base evidence for such policy and practice. However, as
yet there is little evidence at the institution level of the reliability and validity of the CEQ
in a non-Western context such as Hong Kong. There is also little evidence from anywhere
justifying the construct validity and the stability of the CEQ factors for students of different
academic years or disciplines assumed in such surveys (see Ginns et al. 2007). The purpose
of this paper is to provide such evidence for a Hong Kong university.
The Course Experience Questionnaire
The CEQ was designed from within the well-known student learning perspective. The present form of the CEQ originated in the qualitative work of Marton and Saljo (1976) and was elaborated quantitatively by others such as Entwistle and Ramsden (1983). From this
perspective, university students’ approaches to study are contingent upon both their prior
experiences of teaching and learning and their perceptions of their current teaching and
learning environment. Students have been shown to adopt either a surface approach to
study, focusing on short-term reproduction, or a deep approach, focusing on longer-term understanding. Their perceptions of the quality of teaching, the clarity of goals and
standards, whether the workload is so high they cannot understand it all, and whether
their assessments test reproductive learning rather than understanding have been shown
to relate to these approaches to learning (Biggs and Tang 2007; Prosser and Trigwell 1999).
There is a substantial body of literature confirming the factor structure of the CEQ
within teaching and learning contexts in the West (Byrne and Flood 2003; Lizzio et al.
2002; Ramsden 1991; Richardson 1994, 2005a, b; Sadlo 1997; Trigwell and Prosser
1991b; Wilson et al. 1997). The construct validity of the CEQ was demonstrated by
Richardson (2006) by the fact that the scales used in that study collectively defined a
single higher-order factor that could be interpreted as a measure of perceived academic
quality. Within those contexts the CEQ has been used for a range of different purposes
including benchmarking, use as a performance indicator, and summative funding-related and formative purposes. For example, it is used by Australian universities to benchmark
their students’ learning experiences against counterparts locally, and more recently,
internationally (Ginns et al. 2007). More recently, the Australian Government has been
using the results as one part of a basket of performance indicators in its performance
based funding model, allocating substantial funding to universities performing well in
terms of these indicators. Finally, it is used by individual universities to formatively
evaluate and improve their undergraduate programs (Barrie et al. 2005) and because
there is evidence showing that students are best placed to evaluate many aspects of
teaching, and their own ratings are valid, multidimensional and reliable (Marsh 1987;
Wachtel 1998), course experience can be considered to be quite strongly related to
qualities of the actual study context.
A non-Western context
Leading researchers have long warned about the dangers of assuming that theories
developed and research conducted about affective and cognitive processes in one cul-
ture are appropriate for another (Boekaerts 2003; Markus and Kitayama 1991).
However, the general principles of the research and theorizing about student approaches
to learning as measured by the Study Process Questionnaire (SPQ; Biggs 1987) and the
Approaches to Studying Inventory (Entwistle and Ramsden 1983) have been shown to
be valid for Hong Kong Chinese students (Biggs 1992; Kember and Gow 1990). A cross-cultural meta-analysis that included data from four samples of Hong Kong secondary and tertiary students showed that, similar to the Western studies reported above,
surface learning approaches were consistently related to learning environments where
the students perceived the workload and assessment to be inappropriate. On the other
hand, deep level approaches were associated with environments where the teaching was
seen as good and the teachers supportive (Watkins 2001). A study with an earlier
version of the CEQ did provide evidence of its reliability for Hong Kong university students (Ho 1998). More recently, in a cross-cultural study using a revised SPQ,
factorial invariance was determined between samples of students in universities in
Australia and Hong Kong (Leung et al. 2008). The researchers concluded that more
research was needed to determine whether the identified configural invariance results
were applicable across fields of study in addition to examining the relations between
approaches to learning and perceptions of the learning environment using structural
models.
The CEQ has recently been used at a university in Hong Kong in an investigation of the
validity of adopting the CEQ as one of the key performance indicators and using the data to
benchmark with other universities internationally. The intention was to use the evidence obtained from the survey to support and monitor the 4-year undergraduate curriculum
which would be implemented in the next few years.
Aims of study
The aims of this study were to provide evidence, for the CEQ when used with Hong Kong Chinese undergraduates, of:
(1) The goodness of fit and reliability of the data to the hypothesized scale structures;
(2) The construct validity in terms of relationships between perceptions of course
experience and learning strategies; and
(3) The baseline structure (Byrne et al. 1989) of these relationships by discipline area and
year of study.
Methodology
Sample
The original sample consisted of 1,988 undergraduate students enrolled in either year 1 or year 3 of study in the academic year 2006–2007 at a university in Hong Kong. The sample
came from all 10 faculties and represented approximately 33% of the population. The data included only students who were Chinese and who spoke Chinese as their first language.
Respondents enrolled in the Faculty of Architecture were subsequently excluded in view of
the suspected anomalies in the data from this Faculty (further details are provided below). The final sample thus consisted of 1,536 Chinese undergraduate students with a
mean age of 20.7 years (SD = 1.58) and of whom 848 were female and 688 were male.
The sample was subsequently divided into two broad disciplines of study: the Humanities
(n = 688; including students enrolled in the Faculties of Arts, Business and Economics,
Education, Law, and Social Sciences) and the Sciences (n = 848; including respondents
from the Faculties of Dentistry, Engineering, Medicine, and Science). Please note that the
similarity in these numbers was purely coincidental.
Data collection and instruments
The survey was available in both online and paper versions. Invitations were sent via University email accounts, and visits were made to lectures, libraries, and examination halls at the end of the academic year. Participation was voluntary. Ethics approval was obtained prior to
commencement of the study. Seventeen CEQ items, corresponding to the Good Teaching, Clear Goals and Standards, Appropriate Assessment, and Appropriate Workload scales,
were adapted from the University of Sydney Student Course Experience Questionnaire
(SCEQ; University of Sydney, Institute for Teaching and Learning 2005). The Sydney
SCEQ was more suitable for this study as it was shortened, modified, and validated for
current undergraduate students (Ginns et al. 2007). The students responded to each of
these 17 items by indicating their agreement or disagreement with a particular statement
along a 5-point scale (1 = strongly disagree to 5 = strongly agree). Prior to the main data collection, the SCEQ items were piloted among undergraduate students who
came from different disciplines. These items were slightly modified to be applicable to the
Hong Kong context; for example, a ‘degree curriculum’ was used instead of a course or
degree course. Items of the current SCEQ version are listed in Table 1. Responses to items
with negative wordings (marked ‘**’ in Table 1) were coded in reverse before calculating
the scale scores. Fourteen learning strategy items from the SPQ (Biggs 1987) were also
used in the study to assess students’ learning strategies as either deep or surface. Students
responded to these 14 items on a 5-point scale to statements related to how they went
about their study (1 = this is never true for me to 5 = this is always true for me).
Additional data providing background information on the participants were also collected.
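The reverse-coding step for the negatively worded items can be sketched in a few lines; this is a minimal illustration of reversal on a 5-point Likert scale, not the authors' actual scoring code:

```python
def reverse_code(response: int, scale_min: int = 1, scale_max: int = 5) -> int:
    """Reverse a Likert response: on a 1-5 scale, 5 -> 1, 4 -> 2, 3 -> 3."""
    return scale_max + scale_min - response

# A 'strongly agree' (5) on "The workload is too heavy**" contributes 1
# to the Appropriate Workload scale after reversal.
print(reverse_code(5))  # 1
```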
Analysis
Exploratory factor analysis using SPSS 15.0 (Chicago, IL) was initially conducted to test
the structure of the 17 SCEQ items. This included analysis involving 10 Faculties by
discipline area (humanities and sciences) and year of study (year 1 and year 3). As a result
Table 1 Factor loadings of 17 SCEQ items by principal component analysis

Good teaching
  1. The teachers normally give me helpful feedback on my progress: 0.669, 0.174, 0.146, -0.073
  2. The teachers of the degree curriculum motivate me to do my best work: 0.694, 0.211, 0.140, -0.112
  3. The staff make a real effort to understand difficulties I may be having with my work: 0.683, 0.103, 0.128
  4. My lecturers are extremely good at explaining things: 0.727, 0.093, 0.077
  5. The teachers work hard to make their subjects interesting: 0.737, 0.052
  6. The staff put a lot of time into commenting on my work: 0.660, -0.054, -0.125, 0.103

Clear goals and standards
  1. I have usually had a clear idea of where I am going and what is expected of me in this degree curriculum: 0.305, 0.592, -0.054
  2. It is always easy to know the standard of work expected: 0.315, 0.593, -0.228, 0.142
  3. The staff made it clear right from the start what they expected from students: 0.509, 0.345, -0.121
  4. It has often been hard to discover what is expected of me in this degree curriculum**: -0.132, 0.747, 0.312

Appropriate assessment
  1. The staff seem more interested in testing what I have memorised than what I have understood**: 0.766
  2. Too many staff ask me questions just about facts**: 0.664
  3. To do well in this degree all you really need is a good memory**: 0.100, 0.667, 0.170

Appropriate workload
  1. There is a lot of pressure on me as a student in this degree curriculum**: 0.795
  2. The workload is too heavy**: 0.083, 0.807
  3. I am generally given enough time to understand the things I have to learn: 0.410, 0.167, -0.164, 0.460
  4. The volume of work necessary to complete this degree curriculum means it cannot all be thoroughly comprehended**: -0.149, 0.099, 0.185, 0.451

Eigenvalues: 3.58, 1.80, 1.78, 1.52
% Variance (total 50.98): 21.07, 10.58, 10.42, 8.91

** Reversed items
Notes: (i) In the source table, figures in bold indicated loadings on a priori factors, and figures in italics indicated cross-loadings of 0.3 or higher. (ii) Loadings are listed in the order reported; where fewer than four values appear, the remaining cells were blank (suppressed) in the source.
of this initial analysis, cases from the Faculty of Architecture were dropped due to the
anomalies in data potentially explained by the two distinct programs, one humanities-
related and the other science-related, offered by the Faculty. As the survey did not ask
the students to indicate the program they were enrolled in, it was not possible to separate
these students into disciplinary groups, and thus they were excluded from subsequent analyses.
Confirmatory factor analysis was conducted using LISREL 8.8 (Joreskog and Sorbom
1996). The whole sample was randomly split into two groups. The first sample (n = 751)
was used for the confirmatory analysis process, in which re-specification and estimation were conducted to achieve the best overall model fit of the scale structures of both the
SCEQ and the SPQ. The second sample (n = 785) was used for the validation of the model
as recommended by Anderson and Gerbing (1988). The use of confirmatory factor analysis
for final reporting of results was appropriate in this study, as a priori measurement models were tested and factor structures in different homogeneous subgroups of the sample were examined (Watkins 1989). A weighted least squares (WLS) approach was used to estimate
the goodness-of-fit indices between the data and the specified models based on an
asymptotic covariance matrix. WLS provides asymptotically unbiased parameter estimates with ordinal observed variables in large samples (Boomsma and Hoogland
2001; Joreskog 1994). Multivariate non-normality of the variables was checked by treating
the variables as continuous and by looking at the distributions of the ordinal data.
According to Joreskog and Sorbom (1996), the minimum sample size for WLS is (k + 1)(k + 2)/2, where k is the number of indicators in the model (Flora and Curran 2004). The largest number of indicators used in a single model was 17 (the 17-item SCEQ), which means the minimum sample size required was 171, smaller than the size of any subgroup tested in the study. Assessment of model fit included the conventional χ² statistic, which is preferably not significant; the root mean square error of approximation (RMSEA; Steiger 1990), for which values below 0.05 indicate good fit and values as high as 0.08 indicate moderate fit; and the comparative fit index (CFI; Bentler 1990), non-normed fit index (NNFI; Bollen 1989), and adjusted goodness of fit index (AGFI; Joreskog and Sorbom 1982), for which values > 0.90 indicate good fit.
RMSEA, expressed per degree of freedom, compares fit by estimating the discrepancy between the tested and hypothesized models based on the non-centrality parameter. The AGFI takes into account both the sample size and the number of parameters in the estimation of model fit. The NNFI compares the fit of the two models, and the CFI compares the non-central χ² to that of the null model.
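The sample-size rule and the RMSEA point estimate can be made concrete with a short sketch. The RMSEA formula below is the standard non-centrality-based point estimate, and the check against Table 2's deep strategy row assumes (as an illustration) that the calibration subsample of n = 751 was the one used:

```python
import math

def wls_min_sample_size(k: int) -> int:
    """Minimum N for WLS estimation: (k + 1)(k + 2) / 2 for k indicators."""
    return (k + 1) * (k + 2) // 2

def rmsea(chi2: float, df: int, n: int) -> float:
    """RMSEA point estimate from the non-centrality parameter (chi2 - df)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

print(wls_min_sample_size(17))          # 171, as reported for the 17-item SCEQ
print(round(rmsea(15.78, 12, 751), 3))  # close to Table 2's 0.020 for deep strategy
```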
After the confirmatory and validation analyses of six one-factor congeneric models (four SCEQ and two SPQ models), a composite score was calculated for each of the six
scales based on factor score regression weights produced in the LISREL output estimates
using a non-unit weighted score which reflected the actual contribution each item made to
the scales (Rowe et al. 1995). A measure of composite reliability (r c) was estimated for
each of the six scales using WLS regression estimates and error variance estimates from
the LISREL output (Rowe 2006; Tarkkonen and Vehkalahti 2005). Taking into account the
unidimensionality of the SCEQ and SPQ scales, constructing composites based on a priori questionnaire construction was an appropriate approach to minimize unwanted sources of variance in arriving at the model solution (Little et al. 2002).
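The composite construction can be sketched as follows; the loadings and error variances are made-up illustrative values standing in for the LISREL estimates, while the reliability formula is the standard one for a one-factor congeneric model:

```python
import numpy as np

def composite_score(items: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Non-unit-weighted composite: each item contributes according to its
    factor score regression weight (normalized to sum to 1)."""
    return items @ (weights / weights.sum())

def composite_reliability(loadings: np.ndarray, errors: np.ndarray) -> float:
    """r_c = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = loadings.sum()
    return s * s / (s * s + errors.sum())

lam = np.array([0.8, 0.7, 0.6])   # illustrative standardized loadings
theta = 1 - lam ** 2              # implied standardized error variances
print(round(composite_reliability(lam, theta), 3))  # 0.745
```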
Structural equation modeling was then used to test the baseline structure of relationships
between perceptions of course experience and learning strategies and the potential differences by
both discipline area and year of study. These baseline structures tested on the four sub-
groups were then compared.
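As a rough illustration of this final step, the paths from the four perception composites to a strategy composite within one subgroup can be approximated with ordinary least squares on simulated data. This is a stand-in for the WLS structural estimation in LISREL; the sample size, path values, and variables here are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400                                   # roughly a subgroup's size
X = rng.normal(size=(n, 4))               # good teaching, clear goals,
                                          # appropriate assessment, workload
true_paths = np.array([0.4, 0.3, 0.0, 0.1])
deep = X @ true_paths + rng.normal(scale=0.5, size=n)  # deep strategy composite

X1 = np.column_stack([np.ones(n), X])     # add intercept
coef, *_ = np.linalg.lstsq(X1, deep, rcond=None)
print(np.round(coef[1:], 2))              # recovered path estimates
```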
Results
Exploratory factor analysis
A principal component analysis of the 17-item SCEQ using Varimax rotation produced a four-factor solution based on the eigenvalue > 1 criterion, which accounted for 51.0% of the variance. When the factor loading cut-off was set at 0.3, items related to clear
goals and standards cross-loaded with items related to good teaching, and one item from the appropriate workload scale also loaded on the good teaching scale (see Table 1). With
minor variations, this structure was similar for each Faculty by year of study. For some of
these analyses, the small sample size could explain minor deviations from the identified
structure.
The scale structure of the 14 SPQ items was also explored and indicated results similar to those previously identified in Chinese undergraduate students. The results show a clear
two-scale structure in terms of factor loading; however, the percentage of variance
explained by these two factors was low (35.3%). The surface strategy items loaded
together on one factor with factor loadings > 0.4. Deep strategy items loaded on another
factor while one of these items (While I am studying, I think of real life situations to which
the material that I am learning would be useful.) cross loaded with surface strategy items.
The variance explained was 20.7% for the first factor (eigenvalue = 2.89) and 14.6% for
the second factor (eigenvalue = 2.05). As with the SCEQ scale structures, the SPQ scale
structures were similar by faculty and year of study with minor variations.
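The exploratory step can be illustrated with simulated data: principal components are extracted from the item correlation matrix and retained by the eigenvalue > 1 criterion, as was done for the SCEQ items. The loadings, noise level, and sample size below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))                  # two underlying factors
loadings = np.array([[0.8, 0.0], [0.7, 0.1],
                     [0.0, 0.8], [0.1, 0.7]])       # 4 items, 2 factors
items = latent @ loadings.T + 0.5 * rng.normal(size=(500, 4))

corr = np.corrcoef(items, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]            # descending order
n_retained = int((eigvals > 1).sum())               # Kaiser criterion
pct_variance = 100 * eigvals[:n_retained].sum() / eigvals.sum()
print(n_retained, round(pct_variance, 1))
```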
Goodness-of-fit of measurement models
Good fit estimates were identified for the four one-factor congeneric SCEQ measurement
models, good teaching, clear goals and standards, appropriate workload, and appropriate
assessment, and the two one-factor congeneric SPQ models, deep strategy and surface
strategy (see Table 2). Significant covariances between pairs of independent variables in the models were specified (Byrne 1998), and the number of observed parameters was reflected in the degrees of freedom. The χ² values for each scale in both
the SCEQ and the SPQ were small and not significant (P > 0.05); the RMSEA values were < 0.05; and the NNFI, the CFI, and the AGFI estimates were all close to 1.00, indicating a good fit for each of these six scales. The composite reliability (r_c) estimates indicated good reliability for most scales. The exceptions were surface learning strategies (r_c = 0.541) and clear goals and standards (r_c = 0.575).
Table 2 Fitted one-factor congeneric models for SCEQ and SPQ: goodness-of-fit summary and composite reliabilities

Composite variable         χ²      df   P     RMSEA  NNFI  CFI   AGFI  r_c
Deep strategy              15.78   12   0.20  0.020  0.99  0.99  0.99  0.757
Surface strategy           16.46   10   0.09  0.029  0.95  0.97  0.99  0.541
Good teaching              13.62    7   0.06  0.036  0.98  0.99  0.99  0.837
Clear goals & standards     5.43    2   0.07  0.048  0.91  0.97  0.99  0.575
Appropriate assessment      1.04    1   0.31  0.007  1.00  1.00  1.00  0.794
Appropriate workload        1.68    1   0.20  0.001  0.99  1.00  0.99  0.620
The 17 SCEQ items and 14 SPQ items were subsequently tested in two measurement
models. The four-factor SCEQ model (χ² = 605.95; df = 113; P < 0.05; RMSEA = 0.076; NNFI = 0.75; CFI = 0.80; AGFI = 0.95) and the two-factor SPQ model (χ² = 477.54; df = 76; P < 0.05; RMSEA = 0.084; NNFI = 0.61; CFI = 0.68; AGFI = 0.94)
did not, unsurprisingly, fit the data as well as the single congeneric models.
Structural models of the effects of perceptions of course experience on deep and surface learning strategies
Structural models were produced to examine the overall model fit and relative contribution
of each of the four SCEQ scales to learning strategies on four subgroups of the sample:
Humanities year 1 (H1), Humanities year 3 (H3), Sciences year 1 (S1), and Sciences year 3
(S3). The goodness-of-fit indices indicated that all models were a good fit to the data (see
Fig. 1). χ² estimates were small and not significant; the RMSEA values were < 0.05; and the NNFI, the CFI, and the AGFI were all estimated close to or equal to 1.00.
The results showed that in all subgroups except year 3 Humanities, perceptions of good teaching and of clear goals and standards were associated with deep learning strategies, and all of these paths were significant at the 95% level. For all models except year 3 Humanities, perception of the appropriateness of assessment affected
Fig. 1 Structural models of the effects of course experiences on learning strategies for four groups of
students
surface learning strategies negatively. In all four models the perception that the workload
was inappropriate was associated with surface learning while for year 1 Sciences it also
affected deep learning. For all subgroups but year 1 Humanities, the perception of good teaching was also related to surface learning strategies.
Discussion
Indicators of the quality of teaching and learning in higher education are constantly sought
after as governments, employers, and the public concentrate on measuring accountability
and demand quality outcomes. One such indicator of a quality outcome is students' adoption of deeper learning strategies, since it is well known that this leads to a better
understanding of the curriculum and a better overall learning experience (Biggs 1993).
Knowing the contribution that student perceptions of their learning environment can make
to learning strategies is seen as important in improving learning outcomes. This discussion
is structured around three highlights from the results of this study. First, there is the
confirmation of the scale structures of 17 SCEQ items and 14 SPQ items and the identified
anomalies. Second is the construct validity of the relationships between perceptions of
course experience and learning strategies. Lastly, there is the stability of baseline structures
across discipline area and year of study.
The results provide support for the scale structure with this Chinese undergraduate
sample regardless of discipline area (Humanities and Sciences) or year of study (year 1 and
year 3). Although the initial factor analysis showed that the 17 course experience items
formed a four-scale structure, there were several items from the good teaching scale that cross-loaded on the clear goals and standards scale. In previous studies, items from the
good teaching scale loaded on two scales (Kreber 2003) or loaded with appropriate
assessment items (Wilson et al. 1997). In this study, although there were cross-loadings, the highest loadings were on the good teaching scale. Subsequent to this analysis, the estimates
from the confirmatory factor analysis showed good fit to the four one-factor congeneric measurement models for the SCEQ. All χ² estimates were small and not significant; the RMSEA values were < 0.05; and the NNFI, the CFI, and the AGFI estimates were all
close to 1.00. The composite reliabilities were between 0.575 and 0.837. These results
indicate that the scale structure of the 17 SCEQ items held to some extent for undergraduate Hong Kong Chinese students. The SPQ has previously been validated in this population (Biggs 1992; Kember and Leung 1998; Watkins 2001), and the fit estimates
reported in this paper were similar to these previous studies. It is noted that the two-factor
structure was not a simple structure nor was the reliability estimate for the surface strategy
scale very high (r c = 0.541). The four-factor congeneric measurement model of the SCEQ
did not fit the data as well. However, it has been argued that it is unrealistic to expect well-fitting hypothesized models such as these to yield a non-significant χ²/df (Byrne 2001), and these estimates are similar to those found by Diseth et al. (2006).
Evidence of the construct validity of the SCEQ was obtained by achieving good fit
estimates from an examination of the relationships between the SCEQ scales and the SPQ scales on structural models by discipline area (Humanities and Sciences) and year of study
(year 1 and year 3).
An investigation of the contribution of student perceptions of learning environment to
learning strategies in subgroups of students established evidence of stable and well-fitting hypothesized models for all year 1 students and year 3 Science students. That is, students
who perceived the teaching as being good and the goals and standards to be clear in the
degree curriculum were also those students who adopted deeper learning strategies. Among
such students, those who perceived the workload and assessment as appropriate were less
likely to adopt surface learning strategies. Although the path estimates for the year 3
Humanities group for all of these were in the same direction, they were small and not
significant, with one exception: year 3 Humanities students who perceived the workload as appropriate were also less likely to adopt surface learning strategies. An interesting finding
in these data was that for all Science students and year 3 Humanities students, those who
perceived the teaching as good were also more likely to adopt surface learning strategies.
This relationship could be interpreted as Chinese teachers giving only factual information, which would lead to rote learning. This speculation is consistent with the reputed tendency of Chinese teachers toward spoon-feeding their students, or that of Chinese students to prefer being spoon-fed (Kember 2000). Nonetheless, the relationship between good
teaching and both deep and surface learning strategies could be better explained by the
nature of understanding from the perspective of Chinese learners. Previous in-depth
qualitative study on the relationship between memorization and understanding suggests that for Chinese learners, memorizing the information as a first step could enhance subsequent deep understanding of the content (Kember and Gow 1991). While the same relationship
was not evident in year 1 Humanities students, it is speculated that the first-year curriculum of Humanities subjects emphasizes a broad spectrum of general knowledge and demands less content-specific knowledge, such as terminology and professional skills, that to a certain extent requires memorization. Another interesting finding that emerges from this
study is that for year 3 Science students the perception of inappropriate workload was
associated with both surface and deep learning strategies. For Hong Kong students, science
assignments from senior high school onward emphasize critical thinking and the application of theories to real-life practice. A perceived workload so heavy that students cannot possibly get through it could induce rote learning (Trigwell and Prosser 1991a), as it seems the only way to cope with the perceived overload is to memorize. However, the
actual assignment tasks might also stimulate deep understanding of the content especially
for senior year Science students.
With the evidence on construct validity and stable baseline structures among different
subgroups of Hong Kong Chinese undergraduate students, the SCEQ could be a reliable
instrument for evaluating the effectiveness of higher education in Hong Kong in terms of teaching quality, the clarity of goals and standards, and the appropriateness of assessment and workload. At the same time, the relationships between perceptions of course experience and learning strategies varied among subgroups of students, and these differences could inform the specific needs of degree courses in the design of the new curriculum. Adopting the SCEQ in Hong Kong universities could also provide a basis for
cross cultural research and international benchmarking purposes in the future.
References
Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103(3), 411–423. doi:10.1037/0033-2909.103.3.411.
Barrie, S. C., Ginns, P., & Prosser, M. (2005). Early impact and outcomes of an institutionally aligned,
student focused learning perspective on teaching quality assurance. Assessment & Evaluation in
Higher Education, 30(6), 641–656. doi:10.1080/02602930500260761.
Bentler, P. M. (1990). Comparative fit indexes in structural models. Psychological Bulletin, 107(2), 238–246. doi:10.1037/0033-2909.107.2.238.
384 High Educ (2009) 58:375–386
Biggs, J. B. (1987). Student approaches to learning and studying. Hawthorn, VC: Australian Council for
Educational Research.
Biggs, J. B. (1992). Why and how do Hong Kong students learn? Using the learning and study process
questionnaires (education paper No. 14). Hong Kong: The University of Hong Kong, Faculty of
Education.
Biggs, J. B. (1993). What do inventories of students' learning processes really measure? A theoretical review and clarification. The British Journal of Educational Psychology, 63(1), 3–19.
Biggs, J. B., & Tang, C. (2007). Teaching for quality learning at university (3rd ed.). Maidenhead, Berk-
shire: McGraw-Hill Education.
Boekaerts, M. (2003). How do students from different cultures motivate themselves for academic learning?
In F. Salili & R. Hoosain (Eds.), Teaching, learning, and motivation in a multicultural context (pp. 13–
32). Greenwich, CT: Information Age Publishing Inc.
Bollen, K. A. (1989). A new incremental fit index for general structural equation models. Sociological Methods & Research, 17(3), 303–316. doi:10.1177/0049124189017003004.
Boomsma, A., & Hoogland, J. J. (2001). The robustness of LISREL modeling revisited. In R. Cudeck, S. du
Toit, & D. Sorbom (Eds.), Structural equation modeling: Present and future (pp. 1–25). Lincolnwood,
IL: Scientific Software International.
Byrne, B. M. (1998). Structural equation modeling with LISREL, PRELIS, and SIMPLIS: Basic concepts, applications, and programming. Mahwah, NJ: Lawrence Erlbaum Associates.
Byrne, B. M. (2001). Structural equation modeling with AMOS, EQS, and LISREL: Comparative
approaches to testing for the factorial validity of a measuring instrument. International Journal of
Testing, 1(1), 55–86. doi:10.1207/S15327574IJT0101_4.
Byrne, B. M., & Flood, B. (2003). Assessing the teaching quality of accounting programs: An evaluation of
the course experience questionnaire. Assessment & Evaluation in Higher Education, 28(2), 135–145.
doi:10.1080/02602930301668.
Byrne, B. M., Shavelson, R. J., & Muthen, B. (1989). Testing for the equivalence of factor covariance and
mean structures: The issue of partial measurement invariance. Psychological Bulletin, 105(3), 456–
466. doi:10.1037/0033-2909.105.3.456.
Diseth, A., Pallesen, S., Hovland, A., & Larsen, S. (2006). Course experience, approaches to learning and academic achievement. Education & Training, 48(2/3), 156–169. doi:10.1108/00400910610651782.
Entwistle, N. J., & Ramsden, P. (1983). Understanding student learning. London and Canberra: Croom Helm.
Flora, D. B., & Curran, P. J. (2004). An empirical evaluation of alternative methods of estimation for
confirmatory factor analysis with ordinal data. Psychological Methods, 9(4), 466–491. doi:
10.1037/1082-989X.9.4.466.
Ginns, P., Prosser, M., & Barrie, S. (2007). Students’ perceptions of teaching quality in higher education:
The perspective of currently enrolled students. Studies in Higher Education, 32(5), 603–615. doi:
10.1080/03075070701573773.
Ho, A. S. P. (1998). Changing teachers’ conceptions of teaching as an approach to enhancing teaching and
learning in tertiary education. Unpublished doctoral thesis, The University of Hong Kong, Hong Kong.
Joreskog, K. G. (1994). Structural equation modeling with ordinal variables. In T. W. Anderson, K. T. Fang, & I. Olkin (Eds.), Proceedings of the international symposium on multivariate analysis and its applications. Multivariate analysis and its applications (pp. 297–310). Hayward, CA: Institution of Mathematical Statistics.
Joreskog, K. G., & Sorbom, D. (1982). Recent developments in structural equation modeling. Journal of
Marketing Research, 19(4), 404–416. doi:10.2307/3151714.
Joreskog, K. G., & Sorbom, D. (1996). LISREL 8: User’s reference guide. Lincolnwood, IL: Scientific
Software International, Inc.
Kember, D. (2000). Misconceptions about the learning approaches, motivation and study practices of Asian
students. Higher Education, 40(1), 99–121. doi:10.1023/A:1004036826490.
Kember, D., & Gow, L. (1990). Cultural specificity of approaches to study. The British Journal of Edu-
cational Psychology, 60(3), 356–363.
Kember, D., & Gow, L. (1991). A challenge to the anecdotal stereotype of the Asian student. Studies in Higher Education, 16(2), 117–128. doi:10.1080/03075079112331382934.
Kember, D., & Leung, D. Y. P. (1998). The dimensionality of approaches to learning: An investigation with confirmatory factor analysis on the structure of the SPQ and LPQ. The British Journal of Educational Psychology, 68(3), 395–407.
Kreber, C. (2003). The relationship between students’ course perception and their approaches to studying in
undergraduate science courses: A Canadian experience. Higher Education Research & Development,
22(1), 57–75. doi:10.1080/0729436032000058623.
Leung, D., Ginns, P., & Kember, D. (2008). Examining the cultural specificity of approaches to learning in
universities in Hong Kong and Sydney. Journal of Cross-Cultural Psychology, 39(3), 251–266. doi:
10.1177/0022022107313905.
Little, T. D., Cunningham, W. A., Shahar, G., & Widaman, K. F. (2002). To parcel or not to parcel: Exploring the question, weighing the merits. Structural Equation Modeling, 9(2), 151–173. doi:10.1207/S15328007SEM0902_1.
Lizzio, A., Wilson, K., & Simons, R. (2002). University students' perceptions of the learning environment and academic outcomes: Implications for theory and practice. Studies in Higher Education, 27(1), 27–52. doi:10.1080/03075070120099359.
Markus, H. R., & Kitayama, S. (1991). Culture and the self: Implications for cognition, emotion, and
motivation. Psychological Review, 98(2), 224–253. doi:10.1037/0033-295X.98.2.224.
Marsh, H. W. (1987). Students’ evaluations of university teaching: Research findings, methodological
issues, and directions for future research. International Journal of Educational Research, 11(3),
253–388. doi:10.1016/0883-0355(87)90001-2.
Marton, F., & Saljo, R. (1976). On qualitative differences in learning: I-Outcome and process. The British Journal of Educational Psychology, 46(1), 4–11.
Prosser, M., & Trigwell, K. (1999). Understanding learning and teaching: The experience in higher education. Philadelphia, PA: Open University Press.
Ramsden, P. (1991). A performance indicator of teaching quality in higher education: The course experience questionnaire. Studies in Higher Education, 16(2), 129–150. doi:10.1080/03075079112331382944.
Richardson, J. T. E. (1994). Gender differences in mental rotation. Perceptual and Motor Skills, 78(2),
435–448.
Richardson, J. T. E. (2005a). Instruments for obtaining student feedback: A review of the literature.
Assessment & Evaluation in Higher Education, 30(4), 387–415. doi:10.1080/02602930500099193.
Richardson, J. T. E. (2005b). Students’ approaches to learning and teachers’ approaches to teaching in
higher education. Educational Psychology, 25(6), 673. doi:10.1080/01443410500344720.
Richardson, J. T. E. (2006). Investigating the relationship between variations in students' perceptions of their academic environment and variations in study behaviour in distance education. The British Journal of Educational Psychology, 76(4), 867–893. doi:10.1348/000709905X69690.
Rowe, K. J. (2006). Practical multilevel analysis with MLwiN & LISREL: An integrated course (5th ed., rev.). 22nd ACSPRI summer program in social research methods and research technology, Australian National University, 23–27 January 2006. Camberwell, VIC: Australian Council for Educational Research.
Rowe, K. J., Hill, P. W., & Holmes-Smith, P. (1995). Methodological issues in educational performance and
school effectiveness research: A discussion with worked example. Australian Journal of Education,
39(3), 217–248.
Sadlo, G. (1997). Problem-based learning enhances the educational experiences of occupational therapy
students. Education for Health, 10(1), 101–114.
Steiger, J. H. (1990). Structural model evaluation and modification: An interval estimation approach.
Multivariate Behavioral Research, 25(2), 173–180. doi:10.1207/s15327906mbr2502_4.
Tarkkonen, L., & Vehkalahti, K. (2005). Measurement errors in multivariate measurement scales. Journal of
Multivariate Analysis, 96 (1), 172–189. doi:10.1016/j.jmva.2004.09.007.
Trigwell, K., & Prosser, M. (1991a). Improving the quality of student learning: The influence of learning context and student approaches to learning on learning outcomes. Higher Education, 22(3), 251–266. doi:10.1007/BF00132290.
Trigwell, K., & Prosser, M. (1991b). Relating approaches to study and quality of learning outcomes at the
course level. The British Journal of Educational Psychology, 61(3), 265–275.
University Grants Committee. (2005). Education quality work: The Hong Kong experience. Hong Kong:
The Hong Kong Polytechnic University, Educational Development Centre.
University of Sydney, Institute for Teaching and Learning. (2005). 2005 Student course experience questionnaire (SCEQ). Retrieved December 4, 2006, from http://www.itl.usyd.edu.au/sceq2005.
Watchel, H. (1998). Student evaluation of college teaching effectiveness: A brief review. Assessment &
Evaluation in Higher Education, 23(2), 191–211. doi:10.1080/0260293980230207.
Watkins, D. (1989). The role of confirmatory factor analysis in cross-cultural research. International Journal of Psychology, 24(1), 685–701. doi:10.1080/00207598908246806.
Watkins, D. (2001). Correlates of approaches to learning: A cross-cultural meta-analysis. In R. Sternberg & L. Zhang (Eds.), Perspectives on thinking, learning, and cognitive styles (pp. 165–195). Mahwah, NJ: Lawrence Erlbaum Associates.
Wilson, K. L., Lizzio, A., & Ramsden, P. (1997). The development, validation and application of the course experience questionnaire. Studies in Higher Education, 22(1), 33–53. doi:10.1080/03075079712331381121.