A Simple Way to Score Administrative Quality

Distance Education Report | Volume 15, Number 4 | February 15, 2011 | A Magna Publication

in this issue
A Simple Way to Score Administrative Quality . . . Cover
Monthly Metric: Who's marketing their program on Google? . . . 3
Administration: Online format saves academic program . . . 4
Faculty Development: From the people who brought you Quality Matters . . . 5
In the News: For-profits gunning for the GAO . . . 6

By Jennifer Patterson Lorenzetti

Kaye Shelton, dean of online education at Dallas Baptist University, has been involved in online education since 1998. In that time, she has seen the increase in public demand for accountability in higher education and online education's response in demonstrating its level of quality.

Demonstrating quality has been foremost in her mind as she looks at the online programs at Dallas Baptist University, where 35-40 percent of the institution's enrollment takes at least one online course. "My focus was quality; we have to have quality to be competitive," she says. However, in 1998 there were no best practices for determining quality, and for many years no one could offer a checklist that online programs could use to demonstrate the quality of their programs and plan for improvement.

This changed when Shelton undertook the creation of just such an instrument, one that every online program, regardless of the size, focus, or governance of the institution, could use to measure quality. Starting with the IHEP study Quality on the Line: Benchmarks for Success in Internet-Based Distance Education and 43 administrators of online education programs from a variety of institution types, Shelton undertook a six-round Delphi study to come up with a comprehensive list of measures of quality in online education.

The panel was drawn from a suggested list of 76 contributed by the Sloan Consortium as experts respected in the field. Of the 43 who joined the study, 86 percent had nine or more years' experience in online education. It took six rounds of consensus-building over 180 days to flesh out the original 24 benchmarks for quality and come to agreement on the final list.

The only compensation these experts received was a $25 Amazon gift card.

With experts from various institution types participating in the Delphi study, Shelton was confident that the criteria that emerged were equally applicable to institutions of all sizes and types.

Using the instrument

The final recommendations are in usable form, but Shelton is in the process of making them even more accessible by writing a handbook for their use. This handbook will include indicators, supporting studies, and best practices supporting each dimension of the instrument; institutions will be able to work through the instrument as a self-study, adding documentation that supports their self-analysis.





Although the instrument generates a final score that indicates degree of adherence to the measures of quality, the process of working through the analysis is the most valuable part, and institutions should not expect to achieve a perfect score.

For example, Shelton herself has won two course design awards for her work and has a 92 percent student course-completion rate, but "when I went to self-assess, I could not get a perfect score," she says. As one place where she falls short of perfection, in Social and Student Engagement the first metric reads, "Students should be provided a way to interact with other students in an online community." Shelton admits that she has ways for students to interact at the course level, but she has not developed any at the program level. Finding places for improvement such as this one gives her goals to add to her program's strategic plan.

For institutions wanting to use this instrument, Shelton advises plunging right in. "Go to the web site and read the instrument and start working through the items," she says. "Make notes of why you gave yourself a [certain] score."

Proving ourselves

Although all institution types have faced increasing pressure to prove the quality of their programs, online education has drawn the most attention of late. "Accreditors want to know this," says Shelton. "As online educators, we have had to prove ourselves two to three times as much."

Using an instrument like this is a great way to get ahead of the curve and start amassing evidence before the accreditors or the public demand it. By working through this instrument as a self-assessment, online programs can see where and why they are succeeding and make plans for further improvement. And, when the time comes to demonstrate quality to others, the evidence will already be there.


Sample scorecard

Scoring (based on all scorecards being used; to see all scorecards, visit www.magnapubs.com/files/newsletters/der/scorecard.pdf):

210 = perfect score
189-209 (90-99%) = exemplary (little improvement is needed)
168-188 (80-89%) = acceptable (some improvement is recommended)
147-167 (70-79%) = marginal (significant improvement is needed in multiple areas)
126-146 (60-69%) = inadequate (many areas of improvement are needed throughout the program)
125 and below (59% and below) = unacceptable

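Because the bands are simple threshold arithmetic, a program tallying its own self-assessment can automate the lookup. Here is a minimal Python sketch using the cutoffs above; the rate_program helper is hypothetical, for illustration only, and is not part of the published scorecard.

    # Hypothetical helper: maps a total scorecard score (0-210) to the
    # quality band defined in the scoring key above. Illustrative only;
    # not part of the published scorecard.
    def rate_program(total: int) -> str:
        if not 0 <= total <= 210:
            raise ValueError("score must be between 0 and 210")
        if total == 210:
            return "perfect score"
        if total >= 189:
            return "exemplary (little improvement is needed)"
        if total >= 168:
            return "acceptable (some improvement is recommended)"
        if total >= 147:
            return "marginal (significant improvement is needed in multiple areas)"
        if total >= 126:
            return "inadequate (many areas of improvement are needed throughout the program)"
        return "unacceptable"

    print(rate_program(193))  # -> exemplary (little improvement is needed)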