

Quest, 2007, 59, 124-142
© 2007 American Academy of Kinesiology and Physical Education

The author (AAKPE Fellow #450) is with the Dept of Kinesiology and Community Health, University of Illinois at Urbana-Champaign, Urbana, IL 61801. E-mail: [email protected]

*Kinesiology is used in its broad meaning, i.e., human movement studies, including all subdisciplines, athletic training, exercise science, physical education, sports, etc.

Assessing Kinesiology Students' Learning in Higher Education

Weimo Zhu

Student learning in higher education is traditionally assessed and compared using institutional statistics (e.g., graduation rate, students' entrance examination scores, and percent of students with jobs or plans to enter graduate or professional schools after graduation). This practice is no longer adequate to meet the needs of workforce preparation for the 21st century's knowledge-based, global economy. There is a call for significant improvement in assessing student learning in higher education. The purpose of this paper is to provide a review of assessing kinesiology student learning in higher education. After explaining why there is strong interest in assessment reform, a new four-dimension student learning construct is introduced. Conventional assessment practice in kinesiology and its limitations are then reviewed under this new student learning construct. The new assessment reform movement and attempts at reform in the field of kinesiology are then reviewed. Finally, major areas in assessing student learning in kinesiology that should be changed are addressed, and future directions in improving assessment practices in kinesiology are outlined.

Interest in assessing students' learning in higher education is as "old" as higher education itself. The topics of assessment in higher education have been well covered in the educational literature (see, e.g., Gaither, Nedwek, & Neal, 1994; Heywood, 2000; Messick, 1999; Pellegrino, Chudowsky, & Glaser, 2001) and various other resources. Table 1 summarizes a few examples of websites that cover assessment in higher education. The past two decades have been marked by ever-increasing interest in the assessment of student learning, and colleges and universities, both in the U.S. and abroad, have often been asked to examine the competence of their graduates and demonstrate that their educational programs are accomplishing their purposes (Palomba & Banta, 2001). Why is there such a strong interest in assessing students' learning in higher education? Is there anything wrong with, or missing from, our current practice? What is the overall trend in assessing students' learning in higher education? How are we, the field of kinesiology,* doing in assessing student learning compared to other disciplines? What is the future direction in assessing student learning? The purpose of this paper, part of the program "Kinesiology: Defining the Academic Core of Our Discipline" at the 2006 Annual Meeting of the American Academy of Kinesiology and Physical Education (AAKPE), is to address these questions. After providing a brief review of the reasons for reexamining assessment in higher education, the relationship between assessment and accountability will be reviewed. A description of the construct of student learning in higher education will then be presented, since to fully understand any kind of assessment, the construct of the thing/subject to be assessed has to be understood first. Based on the construct identified, current assessment practice in kinesiology and its limitations will be reviewed. New assessment practice and its impact on the field of kinesiology will also be described. Finally, future research needs, directions, and recommendations will be provided.


Table 1 Website Examples on Assessment in Higher Education

Student Assessment in Higher Education (ahe.cqu.edu.au/)
This site aims to be a resource to assist both researchers and practitioners in the field of student assessment in higher education. It covers all aspects related to assessment of student learning; the validity of multiple-choice quizzes; the value of closed-book versus open-book examinations; the use of examinations as opposed to other forms of assessment; group and self-assessment; and a whole host of other related topics. It also provides links to online articles, books, journals, and other relevant information.

Internet Resources for Higher Education Outcomes Assessment (www2.acs.ncsu.edu/UPA/assmt/resource.htm)
Most available Internet resources for outcome assessment in higher education are provided in this site. Many of the pages on this list have links to other resources and to each other. Other people's lists of links were used instead of connecting to all the resources directly from this site.

Learner Outcomes Resources (webclass.lakeland.cc.il.us/assessment/resources.htm)
Evaluation forms, surveys, rubrics, etc. presented by Lake Land College faculty are provided for use. You can access information about assessment books, periodicals, and reports available at the college. In addition, this site supplies links to online assessment resources at other colleges and sites.

Outcomes Assessment Resources on the Web (www.tamu.edu/marshome/assess/HTMLfiles/oabooks.html)
In this site, the lists of resources were compiled from an Internet search using search engines (Excite, Yahoo!, and HotBot) for the keywords: assessment, student outcomes, and institutional effectiveness. Additional links were found by searching various university sites using the above keywords. Eight categories of resources are provided in this site, such as university assessment pages, general resources, agencies, institutes and organizations, assessment instruments and techniques, assessment papers and reports, commercial resources on assessment, benchmarking, and software.



New Interest and Motivation in Assessing Student Learning in Higher Education

Several reasons led to the current strong interest in assessing student learning in higher education. The first, and perhaps the most important, is that we now live in a different world, characterized by a knowledge-driven and global economy. According to the U.S. Department of Labor (2006), 90% of the fastest-growing jobs are knowledge-related and require some postsecondary credentials and skills. By 2014, according to the Department of Labor's projection, there will be about 4 million new job openings in health care, education, and computer and mathematical sciences combined (Hecker, 2005). The economy now also runs on global competition. Following the relocation of many traditional blue-collar jobs to developing countries, it is reported that even more white-collar jobs have also moved abroad: about 830,000 U.S. service-sector jobs—ranging from telemarketers and accountants to software engineers and chief technology officers—will have moved abroad by the end of 2005 (The Associated Press, 2004). Preparing a new generation of workers who can meet the challenges of the new world has therefore become a national imperative.

The second reason for the new interest is that, with changes in the economy, national leaders and the general public started to wonder whether our postsecondary education can train qualified professionals for the new world. Very recently, the U.S. Secretary of Education, Margaret Spellings, appointed a Commission on the Future of Higher Education to review the status of U.S. higher education and make recommendations for improvement. While U.S. higher education remains the best in the world, according to the pre-publication copy of the commission report (U.S. Department of Education, 2006), it needs to improve in dramatic ways. The following are a few quotes from the report:

• We may still have more than our share of the world's best universities. But a lot of other countries have followed our lead, and they are now educating more of their citizens to more advanced levels than we are. Worse, they are passing us by at a time when education is more important to our collective prosperity than ever. (Page vii)

• Employers report repeatedly that many new graduates they hire are not prepared to work, lacking the critical thinking, writing and problem-solving skills needed in today's workplaces. (Page 3)


• There is inadequate transparency and accountability for measuring institutional performance, which is more and more necessary to maintaining public trust in higher education. (Page 13)

• Despite increased attention to student learning results by colleges and universities and accreditation agencies, parents and students have no solid evidence, comparable across institutions, of how much students learn in colleges or whether they learn more at one college than another. (Page 13)

In addition, due to the constraints of state budgets, many public universities receive constant demands from their state governments to provide evidence of their performance and contribution to the local economy.

The third reason is directly related to the role of assessment in establishing accountability. Interestingly, assessment has always been at the center of education reform over the past five decades (Linn, 2000). This is directly related to assessment's characteristics and its impact on accountability: (a) assessment is relatively inexpensive when compared to the cost of other educational changes (e.g., increasing instructional time or reducing class size); (b) assessment can be externally mandated (e.g., by school districts or states); (c) changes related to assessment can be quickly made and implemented; and lastly, (d) the results of assessments are visible, and testing scores can be improved easily in the beginning of a program (Linn, 2000; Linn, Graue, & Sanders, 1990).

Finally, the fourth reason is that, due to advances in computers, information technologies, and cognitive science research, there is a strong call for rethinking the scientific principles and philosophical assumptions that underlie current approaches to teaching, student learning, and educational assessment. Sponsored by the National Science Foundation, the Committee on the Foundations of Assessment conducted a study examining the current status of educational assessment and made recommendations for future changes (Pellegrino, Chudowsky, & Glaser, 2001). It concluded that assessment practice, especially at the classroom level, has often not taken full advantage of progress in the sciences of thinking and learning. The committee calls for redesigning and implementing assessments based on new scientific foundations.

Construct of Student Learning in Higher Education

To fully understand the assessment issues in higher education and develop a better assessment practice, the construct of student learning in higher education must be understood. For a long time, the focus of higher education has been on teaching students the knowledge and skills of a specific discipline, which has proved not to be adequate for today's knowledge-based and global economy. Efforts, therefore, have been made to explore other aspects of student learning in higher education. In their landmark call for higher education, "From Teaching to Learning: A New Paradigm for Undergraduate Education," Barr and Tagg (1995) wrote:

A paradigm shift is taking hold in American higher education. In its briefest form, the paradigm that has governed our colleges is this: A college is an institution that exists to provide instruction. Subtly but profoundly we are shifting to a new paradigm: A college is an institution that exists to produce learning. This shift changes everything. It is both needed and wanted.



Very recently, ETS has released a timely report on postsecondary assessment and learning outcomes (Dwyer, Millett, & Payne, 2006). Based on the growing consensus among educators and business leaders that student learning in higher education is multifaceted and therefore needs to be assessed using different tools for the major learning dimensions, the report proposed a four-dimension construct for student learning (Figure 1):

1. Workplace readiness and general educational skills (General). A set of basic minimum skills and abilities that enables learners to be successful in the workforce or proceed to a higher level of academic or professional performance. These skills include: (a) verbal reasoning; (b) quantitative reasoning (e.g., basic mathematical concepts/calculations, statistics, and algebra); (c) critical thinking and problem solving; and (d) verbal and written communication skills.

2. Content knowledge/discipline-specific knowledge and skills (Domain). A set of knowledge and skills in which one must have competence in order to become a member of a specific profession. Level and range of competency often depend on the degree level of the training, which could range from professional certification to an advanced (e.g., doctoral) degree.

3. "Soft skills" (Noncognitive skills) (Soft). While knowledge and skills in the above dimensions are essential, the nature of work in the knowledge economy requires that one be able to work in a team, do creative problem solving, and be able to work and communicate with a diverse set of colleagues and clients. As such, a set of so-called soft skills is being quickly recognized as important traits for one's success in a job and/or a profession. A few examples of these traits are creativity, teamwork, and persistence.

Figure 1—A four-dimension construct for student learning.



4. Student engagement (Engagement). An index of the nature and extent of the student's active participation in the learning process, as well as an indicator of motivation and habits that carry over into current and future settings. Based on Pace (1984) and Astin's earlier works (Astin, 1993), it is believed that students learn by being involved. Student engagement has been well assessed at the institutional level and reported by the National Survey of Student Engagement (NSSE). NSSE is designed to obtain, on an annual basis, data on how undergraduate students spend their time and what they gain from attending college. Since its first pilot study in 1999 in 70 schools, more and more colleges and universities have decided to participate in the survey. In the 2006 survey, 557 colleges and universities participated (see http://nsse.iub.edu/index.cfm for more information).
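To make the construct concrete for readers who build assessment tools, the short sketch below represents a student's profile along the four dimensions as a simple data structure, with Dimensions 1-3 treated as outcome scores and Dimension 4 kept separate for monitoring. It is a minimal illustration of the construct only; the class name, the 0-100 score scale, and the skill labels are my own assumptions, not part of the Dwyer et al. (2006) report.

    from dataclasses import dataclass, field
    from typing import Dict

    # Minimal sketch of the four-dimension construct; the score scale and
    # all field/skill names are illustrative assumptions.
    @dataclass
    class LearningProfile:
        general: Dict[str, float] = field(default_factory=dict)     # Dimension 1
        domain: Dict[str, float] = field(default_factory=dict)      # Dimension 2
        soft: Dict[str, float] = field(default_factory=dict)        # Dimension 3
        engagement: Dict[str, float] = field(default_factory=dict)  # Dimension 4: monitored, not an outcome

        def outcome_summary(self) -> Dict[str, float]:
            # Average only Dimensions 1-3, since Dwyer et al. treat
            # engagement as context to monitor rather than an outcome.
            def mean(d: Dict[str, float]) -> float:
                return sum(d.values()) / len(d) if d else float("nan")
            return {"general": mean(self.general),
                    "domain": mean(self.domain),
                    "soft": mean(self.soft)}

    profile = LearningProfile(general={"critical_thinking": 78.0},
                              domain={"exercise_physiology": 85.0},
                              soft={"teamwork": 90.0},
                              engagement={"hours_on_task_per_week": 12.0})
    print(profile.outcome_summary())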

According to Dwyer et al. (2006), Dimensions 1–3 are the major ones in the learning construct and should be used as the outcome measures of student learning. Dimension 4, Engagement, is important to student success and should be monitored, but it is not a student learning outcome. Kinesiology professionals support this four-dimension construct. Using a small convenience sample (n = 10), a group of experienced administrators in the field, who often make decisions/recommendations on hiring new employees, was surveyed. In the survey, they were asked to list the top five things they look for under Dimensions 1–3. Listed in Table 2 are three representative responses. For Dimension 1 (general knowledge and skills), critical thinking, problem-solving skills, and communication skills were listed as most important; for Dimension 2 (domain knowledge and skills), knowledge in the discipline and its application, including technology use, are believed to be important; and finally, for Dimension 3 (soft traits and skills), diversity and teamwork were given top ranking.

Conventional Assessment Practice and Quality Control in Kinesiology

With a clearly defined construct, assessment practice in kinesiology can be examined. Like other academic disciplines, most assessment practice in kinesiology is currently framed under the traditional "teaching paradigm" and is centered mainly at the degree, subdiscipline, curriculum, and individual course levels. The quality of assessment is controlled at the corresponding levels:

Degree-Centered

Although there is large variability across institutions, student learning in kinesiology is centered mainly at the degree level. All institutions and corresponding departments have specific requirements (e.g., course work, total credits needed, GPA, internship, thesis/dissertation, etc.) for all degrees in kinesiology (e.g., baccalaureate, master's, and doctoral). When a student receives a degree at a specific level, it is assumed that the student already has adequate knowledge and skills for jobs and tasks at that degree level. Although this is clearly not true in reality, and many students with a degree are not qualified to teach or serve, little research has been conducted on the overall readiness of students with a kinesiology degree.


Table 2 Selected Responses on Top Five Important Things in Dimensions 1-3

Order | Subject 1 | Subject 2 | Subject 3

Workforce readiness and general education skills
1 | Responsibility and reliability | Communication skills | Critical thinking and problem solving
2 | Critical thinking and problem solving | Problem solving | Speaks clearly and listens attentively
3 | Communication skills (oral, written, techno) | Verbal reasoning | Reads with understanding and writes correctly
4 | Willingness to share skills with the community | Quantitative reasoning | Recognizes the value of lifelong learning
5 | Energy level, passion for education/students | Self-starter—initiates and completes projects |
Other | Balances personal, academic, and career responsibilities

Domain-specific knowledge
1 | Content knowledge | Effective planning skills | Academic foundation—proficient in academic subject
2 | Ability to relate knowledge to students | Implements lessons effectively | Information technology application
3 | Ability to relate to students | Knowledge of curriculum and subject matter | Ethics and legal responsibilities
4 | Ability to manage students and content | Promotes appropriate standards for classroom behavior | Safety and health standards—infection control, safety hazards, emergency procedures and protocols, proper use of equipment
5 | Actual use of content in the teacher's own life | Uses current curriculum and instructional practices | Classroom management skills—ability to impart knowledge to student or client
Other | Respects confidentiality and maintains ethical boundaries

Soft skills/Traits
1 | Openness to diversity | Effective interpersonal relationships | Leadership
2 | Ability to communicate in several ways | Sensitivity in relating to students | Teamwork
3 | Willingness to work with others and compromise | Promotes positive self-concepts for all students | Understands cultural, social, and ethnic diversity and appreciates other cultures and languages
4 | Willingness to take on extra work | Communicates effectively with students | Understands policy and procedures in a variety of settings
5 | A positive attitude toward adversity | Uses appropriate motivational strategies | Flexible, accepts direction and criticism, and maintains control
Other | Sense of humor



Subdiscipline-Centered

Unlike other disciplines, kinesiology has a multidisciplinary nature. That is, many subdisciplines are umbrellaed under the term kinesiology, e.g., physical education (PE) or sport pedagogy, exercise science, and athletic training. Integrated with a degree requirement, a subdiscipline usually has its own specific requirements for student learning. While one subdiscipline may require its students to be skilled in laboratory/clinical settings (e.g., fitness assessment by an exercise science major), others may require extensive field practice and an internship (e.g., teaching practice in PE pedagogy and clinical practice in athletic training). Many subdisciplines, in fact, have developed specific standards for student learning, although most requirements target undergraduate students. Standards in exercise science (National Association for Sport and Physical Education [NASPE], 1995), PE (NASPE, 1995), and adapted PE (NCPERID, 2006) are just three examples.

Curriculum-Centered

Corresponding to the degree- and subdiscipline-centered nature, a curriculum is usually developed for a degree and subdiscipline, which includes specific information on the knowledge and skills in which a student should be competent, the content covered, the number of hours and credits required, GPA, and corresponding internship and field work. The design of the curriculum could be determined by a department, according to the standards of a subdiscipline and the requirements of an accrediting governing body from the subdiscipline or state agencies. There is also large variation in curricula across institutions.

Individual Course-Centered

Finally, student learning is also centered in the specific courses included in the curriculum. Course content, the textbook used, learning activities, and how learning is evaluated, however, are often determined by the instructor of a specific course. As a result, huge variation exists, not only across institutions/departments, but also within an institution/department.

Quality Control of Assessments

The quality of assessments in kinesiology, therefore, is also controlled at the corresponding levels above. More often, it is integrated across these levels. For example, students who plan to teach PE in a specific state must obtain a degree from an accredited program whose curriculum has been approved by an accrediting agency (e.g., NCATE); see Templin (2007) elsewhere in this issue for a discussion of accreditation in kinesiology. Besides taking courses, maintaining a satisfactory GPA, and completing the student teaching required by the curriculum, students also need to obtain state teaching certification. With such a "multigate" system, the readiness of a student for a teaching job is usually well understood.


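The "multigate" idea described above can be pictured as a sequence of checks that a candidate must pass in order. The sketch below is a hypothetical illustration only; the gate names, GPA threshold, and record fields are invented rather than drawn from any actual certification standard.

    # Hypothetical "multigate" readiness check for a prospective PE teacher.
    # Gate names, the GPA threshold, and the record fields are invented.
    GATES = [
        ("degree from accredited program", lambda s: s["degree_accredited"]),
        ("satisfactory GPA",               lambda s: s["gpa"] >= 2.75),
        ("student teaching completed",     lambda s: s["student_teaching_done"]),
        ("state teaching certification",   lambda s: s["state_certified"]),
    ]

    def ready_to_teach(student: dict) -> bool:
        # Readiness is established only after every gate is passed in order.
        for name, passed in GATES:
            if not passed(student):
                print(f"Stopped at gate: {name}")
                return False
        return True

    candidate = {"degree_accredited": True, "gpa": 3.1,
                 "student_teaching_done": True, "state_certified": False}
    print(ready_to_teach(candidate))  # Stopped at gate: state teaching certification -> False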

Limitations in the Current Assessment Practice

While the above multilevel and multigate assessment practice has been around in the field for a long time and has trained hundreds of thousands of qualified professionals, several limitations exist, based on the new construct of student learning by Dwyer et al. (2006). They include missing construct components in assessment, lack of consistency in quality control, lack of connection between grades and learning, and lack of a multilevel and multifaceted system that can assess and track student learning and engagement.

Missing Construct Components

By comparing the current assessment practice with the new four-dimension student learning construct (Dwyer et al., 2006), it is clear that the focus of the current practice in kinesiology is mainly on Dimension 2 (domain knowledge and skills), although some efforts have been made to integrate Dimension 1 (general knowledge and skills) into current teaching and assessment practice. In contrast, Dimensions 3 (soft traits and skills) and 4 (engagement) have been basically ignored in practice, although a few isolated attempts have been made to assess these nontraditional traits/features. Low activity and effort in trying to assess Dimensions 3 and 4 is expected, since the value and importance of these traits and abilities were recognized only recently. A national effort (i.e., the National Survey of Student Engagement [NSSE]; http://nsse.iub.edu) to assess student engagement only started in 1999, and a similar effort to assess soft skills (e.g., the Collegiate Learning Assessment [CLA] project; www.aacu.org) did not start until 2002.

Inconsistency in Quality Control

As mentioned earlier, large variability has been observed at all levels of assessment in kinesiology. Knowledge and skills learned in one 4-year degree program may dramatically differ from institution to institution. Except for subdisciplines with national accreditation programs, learning in one subdisciplinary program may differ from another. This is also true at the course level: An "A" in one course may mean a completely different thing than an "A" in another course, even when both courses are taught in the same department.

Lack of Connection

In the current assessment system, grades are often the only measures at the student level. Meanwhile, because there are so many factors (e.g., course content, instructor, subdiscipline difference, etc.) that may affect the grading of a course, the grade itself often cannot be directly related to student learning. As a result, as pointed out by the Commission on the Future of Higher Education report described earlier, we cannot answer the most critical and basic questions about student performance and learning at colleges.



Lack of Integration and Systematic Efforts

Finally, there is a lack of a convenient system that can integrate the assessment information collected at the different levels. Although many institutions today already employ some sort of computer database system to collect an individual course's information (e.g., grades on assignments, examinations, etc.), the information is rarely integrated so that a more complete picture of student learning can be described.

Reforms in Assessment

Along with the strong interest in assessment over the past two decades and recent efforts in identifying a new construct of student learning, efforts have been made to invent new kinds of assessments, covering both the foundation and the format of assessment. Brief summaries of some key developments follow in this section; more detailed information can be found in the references provided.

New Assessment Foundation

One of the most significant facets of assessment reform is the reexamination of the foundation of assessment. Research on the human mind in the latter part of the 20th century generated considerably greater understanding of how people learn. Evidence from a variety of disciplines, from cognitive psychology, developmental psychology, computer science, anthropology, linguistics, and kinesiology to neuroscience, has greatly advanced our knowledge of how people learn and, therefore, has helped set new foundations for better ways to teach and assess learning (Bransford, Brown, & Cocking, 1999). Integrated with progress in modern statistical methods, the application of modern psychological models has also constituted the foundation of new measurement and testing theories (Frederiksen, Mislevy, & Bejar, 1993; Nichols, Chipman, & Brennan, 1995).

Information Technology and the New Face of Assessment

Few would argue that computers, the Internet, and new information technologies have not changed our lives, including our ways of teaching and the means by which students learn. Today, technology is widely used to support teaching and learning, including helping teachers themselves to learn, design new curricula, and connect the classroom with the community, as well as providing students feedback, reflection, and revision (Bransford et al., 1999). This is also true for assessment practice. With the assistance of computers, testing practice can be much more efficient. For example, a testee's ability can be accurately determined using a computerized adaptive test with fewer than six items, instead of the typical 16- to 75-item paper-and-pencil test (Gershon & Bergstrom, 2006). More importantly, real-life simulations can be created using the computer, and problem-solving ability can be assessed. Finally, delivery of tests is much easier than ever before. Each year, hundreds of thousands of certification tests (e.g., Microsoft's computer certifications) are delivered around the world via the Internet.


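To show why so few adaptive items can suffice, the sketch below implements one common textbook approach: a Rasch-model item bank, selection of the item whose difficulty is closest to the current ability estimate (which maximizes Fisher information under the Rasch model), and a crude stepwise ability update. The item bank, update rule, and simulated examinee are all invented for illustration; operational CAT systems such as those Gershon and Bergstrom (2006) describe use more sophisticated estimation.

    import math
    import random

    def p_correct(theta: float, b: float) -> float:
        # Rasch probability of a correct response at ability theta, item difficulty b.
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    def run_cat(bank, true_theta: float, max_items: int = 6) -> float:
        theta, unused = 0.0, list(bank)
        for step in range(1, max_items + 1):
            # Under the Rasch model, item information p(1-p) peaks when the
            # difficulty b is closest to the current ability estimate.
            b = min(unused, key=lambda d: abs(d - theta))
            unused.remove(b)
            correct = random.random() < p_correct(true_theta, b)  # simulated answer
            # Crude stepwise update: move the estimate toward the evidence,
            # with a step size that shrinks as more items are answered.
            theta += (1.0 if correct else -1.0) / step
        return theta

    bank = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
    print(f"estimated ability after 6 items: {run_cat(bank, true_theta=0.8):.2f}")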

Learner-Centered Assessment

Since research has shown that effective instruction begins with what learners bring to the setting (Bransford et al., 1999), there is a call for creating learner-centered environments to help students make connections between their previous knowledge and their current academic tasks (DeBoer, 2002). Learner-centered assessment is the assessment used in learner-centered environments (Huba & Freed, 2000; Ma & Zhou, 2000). Studies have found that one can hold students to a higher performance standard when learner-centered assessment is employed (Pedersen & Liu, 2003; Shepard, 2000).

Standards-Based Assessment

A standards-based assessment assesses learner achievement in relation to set standards. The roots of standards-based assessment are in the minimum-competency testing of the 1970s and early 1980s, in which the focus of assessment was on the lower end of the achievement distribution as a reasonable requirement for high school graduation (Linn, 2000). Then, creating standards, more specifically content standards, was central to the Clinton Administration's education initiative explicated in the Goals 2000: Educate America Act. Several national assessment programs (e.g., the National Assessment of Educational Progress [NAEP]) use standards to show what students know and can do. Standards are written statements that describe exactly what a student has to know and be able to do in order to be awarded a unit or achievement standard. A student's achievement is benchmarked against, or compared to, an expected level, rather than to other students' achievements (norm-based assessment) (see Lerner, 1998, and Willingham & Cole, 1997 for more details).
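The contrast between benchmarking against a set standard and comparing with other students can be stated in a few lines of code; the cut score and cohort data below are invented solely to show that the same raw score yields two different kinds of statements.

    from bisect import bisect_left

    def standards_based(score: float, cut_score: float) -> str:
        # Judged against a fixed, written standard.
        return "meets standard" if score >= cut_score else "does not meet standard"

    def norm_based(score: float, cohort: list) -> float:
        # Judged relative to other students: percentile rank in the cohort.
        ranked = sorted(cohort)
        return 100.0 * bisect_left(ranked, score) / len(ranked)

    cohort = [48, 55, 61, 64, 66, 70, 73, 77, 81, 90]    # invented class scores
    print(standards_based(70, cut_score=75))             # "does not meet standard"
    print(f"{norm_based(70, cohort):.0f}th percentile")  # "50th percentile"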

Evidence-Centered Assessment Design (ECD)

In assessment practice, how to design an effective assessment and/or study to collect evidence has often been overlooked. Developed by Mislevy (1994) and his colleagues at ETS, ECD is based on advances in evidentiary reasoning (Schum, 1994) and statistical modeling (Gelman et al., 1995). ECD includes six models: student, evidence, task, assembly, presentation, and delivery, with the first three acting as the core. The student model focuses on what knowledge, skills, and/or other attributes should be assessed; the evidence model focuses on what behaviors or performances should reveal those constructs and what the connections are; and finally, the task model focuses on what tasks or situations should elicit those behaviors. ECD rests on three premises: (a) an assessment must be built around important knowledge/attributes in the domain of interest and an understanding of how that knowledge/attribute is acquired and put to use; (b) the chain of reasoning from what test-takers say and do in assessments to inferences about what they know, can do, or should do next must be based on the principles of evidentiary reasoning; and (c) purpose must be the driving force behind design decisions, which reflect constraints, resources, and conditions of use (Mislevy et al., 2003). A number of ECD applications have been reported (see, e.g., Behrens et al., 2004; Hansen et al., 2005).


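Because the three core ECD models are essentially a typed design artifact (the constructs, the observable evidence for them, and the tasks that elicit that evidence), they can be sketched as linked records. The sketch below is a loose paraphrase for illustration; the class and field names are my own, not ETS's actual object model, and the kinesiology example is hypothetical.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class StudentModel:
        constructs: List[str]   # knowledge/skills/attributes to be assessed

    @dataclass
    class EvidenceModel:
        observables: List[str]  # behaviors/performances that reveal the constructs
        scoring_rule: str       # how observations connect back to the constructs

    @dataclass
    class TaskModel:
        situations: List[str]   # tasks/situations that should elicit those behaviors

    @dataclass
    class AssessmentDesign:
        student: StudentModel
        evidence: EvidenceModel
        task: TaskModel

    # Hypothetical kinesiology instance of the three core models.
    design = AssessmentDesign(
        student=StudentModel(constructs=["qualitative movement analysis"]),
        evidence=EvidenceModel(
            observables=["identifies technique errors in a video clip"],
            scoring_rule="0-4 rubric score mapped to construct level"),
        task=TaskModel(situations=["video-based error-detection task"]))
    print(design.student.constructs, "->", design.task.situations)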

Authentic/Performance Assessment

Developed by Grant Wiggins (1989), authentic assessment, in which students are asked to perform real-world tasks, measures the acquisition of knowledge and skills from a holistic standpoint (Lund, 1997). Students must demonstrate a thoughtful understanding of the problem or task, thus indicating mastery of a concept and skill as they apply and use their knowledge and skills. From the teacher's perspective, teaching such tasks guarantees that we are concentrating on worthwhile skills and strategies (Wiggins, 1989). Students are learning and practicing how to apply important knowledge and skills for authentic purposes. They do not simply recall information or use a single, isolated skill (e.g., dribbling or shooting a basketball); rather, they apply what they know to tasks or problem solving. Performance assessment is a relatively new term that is commonly used in place of, or along with, authentic assessment. The promises, problems, and challenges of performance assessment, which requires students to demonstrate their knowledge, skills, and strategies by creating a response or a product, have been well described in the literature (Kane & Mitchell, 1996).

Reform Efforts Made in Kinesiology

As in other disciplines, efforts and progress have been, or are being, made to reexamine assessment practice in the field of kinesiology, according to an informal Internet search conducted for this paper. These efforts and progress can be summarized into two major areas: (a) exploring a new assessment construct of student learning in higher education, and (b) examining new assessment formats and integrating them with teaching. Unlike other disciplines, kinesiology was not, in the past, treated as an independent field in the traditional institutional evaluation systems, such as the National Research Council (NRC) and U.S. News and World Report rankings. With AAKPE's leadership and efforts, significant progress has been made in the ranking of doctoral programs. Although institute/program ranking belongs to the "traditional" way of assessing student learning, a brief description of this exciting progress is included in this section.

Explore New Construct of Student Learning

Professionals in kinesiology have realized that domain-specific knowledge and skills alone are not adequate to assure students' readiness for today's workforce. Efforts therefore have been made to include new dimensions in training students. As an example, Figure 2 shows a conceptual student learning model used by the Department of Kinesiology and Health Promotion at the University of Wisconsin-Oshkosh. In this model, a K-12 teacher is trained as a skillful practitioner, reflective professional, lifelong learner, and change agent. Issues of content, curriculum, pedagogy, learning, diversity, and culture that comprise a caring intellectual are all addressed in the model (see www.uwosh.edu/phys_ed/programs/prek12/conceptualmodel.php for more information). Although this is a different model from the ETS four-dimension one, both models share key elements.



New Assessment Formats and Application

Progress has also been made in developing new performance assessments and integrating them into teaching. Table 3 illustrates an example of specific expectations of student learning in "discovering knowledge," as well as the means (e.g., taking a research methods class, participating in research projects, becoming familiar with the process of an institutional review board, and hands-on independent research projects) to meet the expectation. Table 4 illustrates another, similar example with clearly defined learning outcomes, assessment tools to use, how to use the assessment results, and how to make changes based on the assessment results.

Figure 2—A conceptual student learning model.


Kinesiology Doctoral Program Ranking

While institution ranking has a long and rich history, kinesiology (or, earlier, physical education) has not been included as an independent discipline in some popular institutional ranking systems (see www.library.uiuc.edu/edx/rankgrad.htm for some examples of ranking systems). Spearheaded by the leadership of AAKPE, an effort has been made since 1995 to have kinesiology recognized as an independent field in NRC's doctoral program ranking system. Along with this effort, a review and evaluation of kinesiology doctoral programs (2000–2004) was conducted (Thomas & Reeve, 2006). With persistence, NRC finally approved AAKPE's application this year (2006) and agreed to include kinesiology as an independent discipline in its future doctoral program evaluation. There is no question that the impact of kinesiology's inclusion in this nationally respected ranking system will be profound.

Needs and Future Directions

Although the efforts and progress noted above are positive, they are just small, isolated changes compared to the conventional assessments in our daily practice. To make significant changes, some major efforts have to be made. First, a research interest group with a focus on assessing kinesiology student learning in higher education should be formed so that better assessments can be designed and implemented, and evidence of the new assessments can be collected. So far, except for a few early works (e.g., McGee, 1989), no measurement specialists in kinesiology have focused their research on assessing student learning in higher education.

Table 3 A Performance Assessment Example

Discovering Knowledge

We attempt to introduce our students to the research and scholarship process, including teaching such cognitive skills as theory testing, synthesizing literature, asking significant research questions, designing studies, and analyzing data.

Goal Assessment

a. All graduate students must complete the Division's research methods course and at least one course in advanced statistics to introduce them to the research process. These courses assess our students' ability to understand and to conduct credible research.

b. Participating actively in research, whether it is planning and implementing the thesis study or writing papers, is the central experience of our graduate program. Research proposals are developed by our students. This takes place under the supervision of each student's graduate faculty advisor and committee members.

c. A further component for students completing a thesis is formal application and approval for human subjects and animal research. Success in this step is assessed by approval of the research project by the Institutional Review Board.

d. First-hand experience in the attempt to discover new knowledge is created by way of an independent research project under the guidance of a graduate faculty member. This experience is assessed by successful completion of a project report.

This document was prepared for the Division of Kinesiology and Health, University of Wyoming. Retrieved August 30, 2006, from http://uwacadweb.uwyo.edu/kandh/forms/Assessmentgrad_5-15-06.pdf. Reprinted with permission.


Table 4 Example of Performance Assessment with Clearly Defined Instructions

Expected Outcome
Describe the procedures, designs, methods, and analytical techniques appropriate to the sport and exercise field.

Assessment Tool
To assess this student learning outcome, Kinesiology majors seeking an M.S. degree complete research methods and statistics courses and must successfully complete a culminating experience (thesis, independent project, or comprehensive examination). In the research methods course, students complete an online NIH course on Human Participant Protections Education for Research and receive a completion certificate. Students are also evaluated on their ability to conduct electronic literature searches, and in the statistics course, students are assessed on their ability (via written exams) to perform statistical analyses using a statistical package program (SPSS).

Use of Assessment Results
Course content evaluations (peer and student) are used to document students' perceptions of achievement of learning outcomes. Results of the written comprehensive exam are shared with program faculty to identify learner outcome weaknesses and direct future course instruction.

Changes Made Based on Assessment Results
Prior evaluations of student learning outcomes have led to the development of three culminating experiences (thesis, independent project, comprehensive exam), which gives students a choice of the summative evaluation required for successful completion of the M.S. degree in Kinesiology.

This document was prepared for the Kinesiology Graduate Program, which is part of the Department of Kinesiology, Health Promotion, and Recreation at the University of North Texas. Retrieved August 30, 2006, from http://web2.unt.edu/ira/sacsr/static/docs/pdf/1-3.3.1/ira-acadpdfs/Kinesiology--M.S.pdf. Reprinted with permission.


Second, theories and models that can lead to better practices in assessing student learning should be developed and empirically evaluated. Little work has been done in this area so far. Figure 3 illustrates a comprehensive model proposed by Joughin and Macdonald (2006) for assessment in higher education. There are four levels in this model: module at Level 1, course/program at Level 2, institution at Level 3, and external context at Level 4. Level 1 is where assessment takes place, Level 2 supports this practice, Level 3 represents the institutional context, and Level 4 denotes the overall context of the institution, including government policy and the expectations of external bodies. Although the model itself has yet to be empirically examined, it shows good promise, since it provides a good direction as to how assessment should be developed, implemented, and quality controlled at the institutional level.


Third, a convenient data collection system has to be developed. Without a system that can be integrated into daily student learning and instructor teaching, and that can easily collect data directly from students and instructors, assessment of student learning in higher education can only be one-shot and short-term and, therefore, have limited meaning and impact. I believe that a digital portfolio-based database system could address the needs of such a system. A digital portfolio is a web-based assessment and presentation application that allows users to demonstrate their capabilities and achievements according to some predetermined criteria or standards (Airasian & Abrams, 2000; Niguidula, 1997). The system could be centered on a student assessment portfolio database (see Figure 4). At the student level, inputs could be selected demographic information (age, gender, etc.), assignments completed, work samples, presentation samples, etc., and outputs could be selected demographic information, a resume, selected work/presentation samples, and other information that potential employers may be interested in. At the course and instructor level, inputs could be grades for assignments and examinations, feedback to students, class statistics, etc., and outputs could be class statistics, samples of student work, etc. Finally, at the department and institution levels, inputs could be system management commands, and outputs could include related statistics from students and instructors, courses, departments, colleges, and universities. Developing a software application on this scale, however, is not easy. Fortunately, some functions of this system (e.g., entering grades and computing class statistics) are already included in most course management software systems at many universities. Thus, one way to implement the proposed system is to modify existing systems with a focus on improving the assessment of student learning. Another way is to develop a new, universal system that can serve/assess student learning in higher education across institutions. In fact, ETS has already called for a national initiative to develop such a system (Dwyer et al., 2006).

Figure 3—A model of an assessment system in higher education.


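One way to picture the proposed portfolio database is as a few linked tables whose records flow in at the student and course levels and aggregate upward for departments and institutions. The sketch below uses SQLite from Python's standard library; every table and column name is a hypothetical stand-in for the elements of Figure 4, not a specification.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE student  (student_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE course   (course_id INTEGER PRIMARY KEY, dept TEXT, title TEXT);
    CREATE TABLE artifact (artifact_id INTEGER PRIMARY KEY,  -- work samples, assignments, presentations
                           student_id INTEGER REFERENCES student(student_id),
                           course_id  INTEGER REFERENCES course(course_id),
                           kind TEXT, grade REAL);
    """)
    conn.execute("INSERT INTO student VALUES (1, 'A. Student')")
    conn.execute("INSERT INTO course VALUES (10, 'KIN', 'Measurement Theory')")
    conn.execute("INSERT INTO artifact VALUES (1, 1, 10, 'assignment', 92.5)")

    # Department-level output: class statistics aggregated from course records.
    for dept, avg in conn.execute("""SELECT c.dept, AVG(a.grade)
                                     FROM artifact a JOIN course c USING (course_id)
                                     GROUP BY c.dept"""):
        print(dept, round(avg, 1))  # -> KIN 92.5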

Finally, strong leadership is needed to make changes happen. Considering its success in establishing the doctoral program evaluation/ranking system for kinesiology, AAKPE could be the best organization to fill this leadership role. For example, working with other major organizations and subdisciplines in the field, as well as related accreditation governing bodies, standards based on the new student learning construct can be established, outcome measures of student learning at different levels can be identified, and guidelines on how to assess student learning in kinesiology in higher education can be developed. With all these changes, kinesiology in higher education should better serve society and the profession.

Conclusion

Although a few positive efforts have been made in improving the assessment of student learning in higher education, most assessment practices in kinesiology are still framed under conventional practices, which provide only limited evidence of student learning. To meet the needs of the 21st century's new economy, a new construct of student learning must be adopted, and new assessment practices based on the latest information technologies, as well as convenient implementation systems, must be developed and then employed. AAKPE should play a key leadership role to promote and facilitate these changes.

Figure 4—A model of a student assessment portfolio database/system.



Acknowledgments

I want to thank Dr. T. Gilmour Reeve from Texas Tech University and Heidi Krahling, Youngsik Park, and Marco S. Boscolo from the Kinesmetrics Laboratory, University of Illinois at Urbana-Champaign, for their support and assistance in preparing this paper.

References

Airasian, P.W., & Abrams, L.M. (2000). The theory and practice of portfolio and performance assessment. Journal of Teacher Education, 51, 398-402.

The Associated Press (2004). Study: Offshoring of U.S. jobs accelerating. Retrieved October 20, 2006, from http://www.msnbc.msn.com/id/5003753/

Astin, A.W. (1993). What matters in college? San Francisco: Jossey-Bass.

Barr, R.B., & Tagg, J. (1995). From teaching to learning—A new paradigm for undergraduate education. Change, 27, 13-25.

Behrens, J.T., Mislevy, R.J., Bauer, M., Williamson, D.M., & Levy, R. (2004). Introduction to evidence centered design and lessons learned from its application in a global E-learning program. The International Journal of Testing, 4, 295-301.

Bransford, J.D., Brown, A.L., & Cocking, R.R. (Eds.). (1999). How people learn: Brain, mind, experience, and school. Washington: National Academies Press.

DeBoer, G.E. (2002). Student-centered teaching in a standards-based world: Finding a sensible balance. Science & Education, 11, 405-417.

Dwyer, C.A., Millett, C.M., & Payne, D.G. (2006). A culture of evidence: Postsecondary assessment and learning outcomes. Princeton, NJ: ETS.

Frederiksen, N., Mislevy, R.J., & Bejar, I.I. (1993). Test theory for a new generation of tests. Hillsdale, NJ: Lawrence Erlbaum.

Gaither, G., Nedwek, B.P., & Neal, J.E. (1994). Measuring up: The promises and pitfalls of performance indicators in higher education (ASHE-ERIC Higher Education Report No. 5). Washington: The George Washington University, Graduate School of Education and Human Development.

Gelman, A., Carlin, J.B., Stern, H.S., & Rubin, D.B. (1995). Bayesian data analysis. London: Chapman & Hall.

Gershon, R.C., & Bergstrom, B.A. (2006). Computerized adaptive testing. In T.M. Wood & W. Zhu (Eds.), Measurement theory and practice in kinesiology (pp. 127-143). Champaign, IL: Human Kinetics.

Hansen, E.G., Mislevy, R.J., Steinberg, L.S., Lee, M.J., & Forer, D.C. (2005). Accessibility of tests within a validity framework. System: An International Journal of Educational Technology and Applied Linguistics, 33, 107-133.

Hecker, D. (2005, November). Employment outlook 2004-14: Occupational employment projections to 2014. Monthly Labor Review, 70-101.

Heywood, J. (2000). Assessment in higher education: Student learning, teaching, programmes and institutions. Philadelphia: Jessica Kingsley Publishers.

Huba, M.E., & Freed, J.E. (2000). Learner-centered assessment on college campuses. Needham Heights, MA: Allyn and Bacon.

Joughin, G., & Macdonald, R. (2006). A model of assessment in higher education institutions. Retrieved August 30, 2006, from http://www.heacademy.ac.uk/embedded_object.asp?id=22023&filename=Joughin_and_Mac

Kane, M.B., & Mitchell, R. (Eds.). (1996). Implementing performance assessment: Promises, problems, and challenges. Mahwah, NJ: Lawrence Erlbaum.

Lerner, L.S. (1998). State science standards: An appraisal of science standards in 36 states. Washington: Thomas B. Fordham Foundation.

Linn, R.L. (2000). Assessment and accountability. Educational Researcher, 29, 4-16.

Linn, R.L., Graue, M.E., & Sanders, N.M. (1990). Comparing state and district results to national norms: The validity of the claims that "everyone is above average." Educational Measurement: Issues and Practice, 9, 5-14.

Lund, J. (1997). Authentic assessment: Its development and applications. Journal of Physical Education, Recreation & Dance, 68, 25-28, 40.

Ma, J., & Zhou, D. (2000, May). Fuzzy set approach to the assessment of student-centered learning. IEEE Transactions on Education, 43, 237-241.

McGee, R. (1989). Program evaluation. In M.J. Safrit & T.M. Wood (Eds.), Measurement concepts in physical education and exercise science (pp. 323-344). Champaign, IL: Human Kinetics.

Messick, S.J. (Ed.). (1999). Assessment in higher education: Issues of access, quality, student development, and public policy. Mahwah, NJ: Lawrence Erlbaum.

Mislevy, R.J. (1994). Evidence and inference in educational assessment. Psychometrika, 59, 439-483.

Mislevy, R.J., Almond, R.G., & Lukas, J.F. (2003). A brief introduction to evidence-centered design. Retrieved August 30, 2006, from http://www.education.umd.edu/EDMS/mislevy/papers/BriefIntroECD.pdf

National Association for Sport and Physical Education (NASPE). (1995). Basic standards for the professional preparation in exercise science. Reston, VA: NASPE.

NASPE. (1995). National standards for beginning physical education teachers. Reston, VA: NASPE.

National Consortium for Physical Education and Recreation for Individuals with Disabilities (NCPERID; Kelly, Project Director). (2006). Adapted physical education national standards (2nd ed.). Champaign, IL: Human Kinetics.

Nichols, P.D., Chipman, S.F., & Brennan, R.L. (Eds.). (1995). Cognitively diagnostic assessment. Hillsdale, NJ: Lawrence Erlbaum.

Niguidula, D. (1997, November). Picturing performance with digital portfolios. Educational Leadership, 26-29.

Pace, C.R. (1984). Student effort: A new key to assessing quality (Report No. 1). Los Angeles: University of California, Higher Education Research Institute.

Palomba, C.A., & Banta, T.W. (Eds.). (2001). Assessing student competence in accredited disciplines: Pioneering approaches to assessment in higher education. Sterling, VA: Stylus Publishing, LLC.

Pedersen, S., & Liu, M. (2003). Teachers' beliefs about issues in the implementation of a student-centered learning environment. Educational Technology Research & Development, 51, 57-76.

Pellegrino, J.W., Chudowsky, N., & Glaser, R. (2001). Knowing what students know: The science and design of educational assessment. Washington: National Academies Press.

Schum, D.A. (1994). The evidential foundation of probabilistic reasoning. New York: Wiley.

Shepard, L. (2000). The role of assessment in a learning culture. Educational Researcher, 29, 4-14.

Templin, T.J. (2007). Accreditation in kinesiology: The process, criticism and controversy, and the future. Quest, 58, 130-140.

Thomas, J.R., & Reeve, T.G. (2006). A review and evaluation of doctoral programs 2000-2004 by the American Academy of Kinesiology and Physical Education. Quest, 58, 176-196.

U.S. Department of Education (2006). A test of leadership: Charting the future of U.S. higher education. Washington: Author.

U.S. Department of Labor (2006). Internal analysis by the U.S. Department of Labor. Washington: Author.

Wiggins, G. (1989). A true test: Toward more authentic and equitable assessment. Phi Delta Kappan, 69, 703-713.

Willingham, W.W., & Cole, N.S. (1997). Gender and fair assessment. Mahwah, NJ: Lawrence Erlbaum.
