

Quality Approaches in Higher Education

Vol. 1, No. 2

Driving Success

Page 2: Quality Approaches in Higher Education, Vol. 1, No. 2rube.asq.org/edu/2010/08/best-practices/quality...compared to easier non-STEM courses. As chair of the ASQ Education Division,

ONLY @ www.asq.org/edu/index.html

table of contents

2 Guest Editorial: Changing STEM Education

Cindy P. Veenstra

3 Integrating Writing Into the Freshman Engineering Curriculum

Dan Budny, Beth Newborg, and Michael W. Ford, Jr.

11 Metrics and Methodology for Assessing Engineering Instruction

C. Judson King, Susan A. Ambrose, Raoul A. Arreola, Karan Watson, Richard M. Taber, Norman L. Fortenberry, and Elizabeth T. Cady

16 Transforming Teaching Evaluation to Quality Indices

Mohamed E. M. El-Sayed and Kathleen Burke

24 Monitoring Student Attrition to Enhance Quality

Mahsood Shah and Chenicheri Sid Nair

William Tony Publisher

[email protected]

Deborah Hopen Editor

[email protected]

Cindy Veenstra Associate Editor

[email protected]

Fernando Padró Associate Editor

[email protected]

Mike Schraeder Associate Editor

[email protected]

Christine Robinson Editorial Assistant

[email protected]

Cathy Schnackenberg Production Administrator

[email protected]

Janet Jacobsen Copy Editor

[email protected]

Laura Franceschi, Sandy Wyss Layout/Design

Quality Approaches in Higher Education is a peer-reviewed publication that is published by ASQ’s Education Division, 600 N. Plankinton Ave., Milwaukee, WI 53203 USA. Copyright ©2010 American Society for Quality. Publication of any article should not be deemed as an endorsement by ASQ.

For reprint information or permission to quote from the contents of Quality Approaches in Higher Education, contact the editor at [email protected]; please include your address, daytime telephone number, and e-mail address.

Questions about this publication should be directed to ASQ’s Education Division, Cindy Veenstra, [email protected].

www.asq.org/edu/index.html 1



Guest Editorial

Changing STEM Education


At first, the game of golf may seem far removed from the subjects of science, technology, engineering, and mathematics (STEM) retention and continuous improvement in education, but consider that world-class golfer Phil Mickelson credits part of his success to his knowledge of math and science.1 In a recent Wall Street Journal interview, while discussing the probabilities of making a putt, Mickelson explained, “I’ve always used math and science in my career. It helps me know what I need to focus on.”

What is special to me about Mickelson is that he founded the Mickelson ExxonMobil Teachers Academy, which has trained more than 2,500 third-, fourth-, and fifth-grade teachers to improve their math and science instructional skills. Mickelson understands the need for better K-12 STEM education and believes that this effort will, in turn, lead to students who are both excited about and prepared for studying math and science in high school and college.

There is a national priority in the United States to prepare our youth for the future by providing more education related to the knowledge industry and STEM topics; this is also an international concern. People such as Mickelson are making a difference in exciting our youth about science and engineering studies. Additionally, many universities now sponsor K-12 outreach programs. Despite all the conversations and efforts on this topic, however, progress has been slow. The American Society for Engineering Education reports that the number of bachelor’s degrees awarded in engineering has remained essentially flat over the past five years.2 We need to do much more because students are not staying engaged as STEM majors in college.

Significant research is underway on the subject of STEM recruitment and college retention. We need to translate that research into effective programs to interest more students in STEM careers and improve retention of students with STEM majors. At the university level, this will take more effort both inside and outside the classroom. Furthermore, we need to change the way science and engineering courses are taught so that they become preferred subjects when compared to easier non-STEM courses.

As chair of the ASQ Education Division, I welcome you to Quality Approaches in Higher Education. We are very proud of the innovative, outside-the-box thinking in this issue’s articles on student retention, teaching strategies, and effectiveness. Just as golfers continuously improve their games, I hope this edition provides valuable information and insights that lead to further discussions and action at your college or university, enhancing its continuous improvement efforts.

References

1. William McGurn, “Phil Mickelson’s Science Projects,” The Wall Street Journal, July 20, 2010, p. A15.

2. American Society for Engineering Education, “Bachelor’s Degrees Flat,” Connections, July 2010.

Cindy P. Veenstra, Ph.D., is chair of ASQ’s Education Division and associate editor of Quality Approaches in Higher Education. She is an ASQ Fellow and conducts research on STEM student retention. Contact her at [email protected] for more information.

Cindy P. Veenstra



Integrating Writing Into the Freshman Engineering Curriculum

Dan Budny, Beth Newborg, and Michael W. Ford, Jr.

Students must have excellent research and writing competencies to succeed in engineering careers. This case study describes a successful approach for linking those soft skills with the engineering curriculum.

Abstract

This paper describes the English/Freshman Engineering Writing Program (E/FEWP) at the Swanson School of Engineering, University of Pittsburgh. This ongoing collaboration among engineering and English department faculty and engineering librarians has made university-level research and writing an essential, integral part of the engineering experience. By describing the writing projects and the concepts that created them, we hope to provide useful tools to assist other educators in developing and/or enhancing the use of writing within their engineering curricula and classrooms.

Introduction

ABET accreditation requirements emphasize the importance of “soft” skills in planning and achieving excellence in engineering education. In addition to “hard” knowledge, engineers need to experience and understand “communication, teamwork, and the ability to recognize and resolve ethical dilemmas.”1 These skills are powerful when combined with awareness skills involving “understanding the impact of global and social factors, knowledge of contemporary issues, and the ability to do lifelong learning.”1 What is the most effective way of incorporating these skills into an engineering curriculum already crowded with necessary science, math, and disciplinary courses?

Our best solution resides in collaboration among engineering and English faculty, librarians, and advisors, and in integrating projects from various engineering courses. Creating assignments that require students to write about what they are learning has allowed us to address the majority of the ABET soft skills while giving particular and intensive attention to information literacy and communication skills.



From day one of their first engineering class, students are introduced to the role that information literacy and writing can play in responsible thinking and thoughtful action. These research and writing skills, and the critical thinking and problem-solving skills they engender, are undertaken within an engineering curriculum. Through the E/FEWP, freshman engineering students are afforded a writing and critical thinking experience equivalent to the University of Pittsburgh’s core three-credit freshman writing course, ENGCMP 0200 Seminar in Composition. While this course is required of most freshmen, the E/FEWP builds it into an already existing freshman engineering problem-solving course. The university makes full use of the opportunities provided by teaching writing directly alongside engineering coursework and within the context of the university’s extensive Freshman Engineering Program advising and mentoring activities.

Freshman Engineering Program

The freshman program has an academic and an advising component, and the mission of both is creating a first-year experience that promotes students’ continued pursuit of an engineering degree.

Academic Concerns

The engineering department modified its program and created an integrated freshman curriculum2,3 to promote a comprehensive learning environment that includes significant attention to student communication skills. The environment also employs this attention as a means to amplify students’ consciousness of the academic and personal choices they make. Two main engineering courses are part of this curriculum: ENGR0011 and ENGR0012. The former is a required three-credit programming course with the overall goals of teaching basic analytical, programming, design, graphical, problem-solving, teamwork, and communication skills. ENGR0012 is a second-semester core course that completes the computer programming portion of the integrated curriculum package. This course focuses on the following curricular goals: teach students a general-purpose programming language, promote and encourage good programming practices, and illustrate the role computers play in solving real-world engineering problems.

While both courses originally covered many basic programming and problem-solving skills, they did not provide enough opportunities for written and oral presentation assignments or effective advising.

Advising Concerns

The first-year student advising objective is to assist each student in making a smooth transition from high school to college, to aid these students in identifying their major, and to facilitate strong retention. The mentoring program within the curriculum aspires to actively involve students in every aspect of the undergraduate experience, including advising, personal decision making, academic achievement, and integrity.4,5,6,7,8 To accomplish this, all freshman engineering students are required to enroll in the advising course, ENGR0081, which explains the university policies and procedures. In the past, this course involved more passive learning as students attended lectures on college-related matters and various engineering departments.

By incorporating writing into the integrated curriculum, the university created a new version of ENGR0081, which included small mentoring groups, each supervised by a mentor and centered around a social or cultural activity (such as board games, baseball, or a dance in Pittsburgh).9

Library Concerns

The Bevier Engineering Library is one of 14 units in the university library system at the University of Pittsburgh. One of the library’s goals is to present library research as a necessary skill set for successful engineers. The American Library Association defines information literacy as the ability to “recognize when information is needed and have the ability to locate, evaluate, and use effectively the needed information.”10 This includes developing search strategies, selecting appropriate literary research tools, critically evaluating the materials identified, and properly documenting sources used. Problems that future engineers will face may require knowledge and understanding from several fields outside of their areas of expertise. Graduates should have the ability to teach themselves new concepts and apply information to new and unfamiliar situations.11 A major concern is how to introduce library skills during the freshman year.



Fall Writing Assignments: Who Am I and Why Am I Here

Writing assignments throughout the academic year reflect the E/FEWP goals of increasing students’ awareness of themselves, the process of their education, and what “engineering” is and means.

In recent years, a number of writing techniques have evolved that make use of various writing-to-learn strategies within the domains of engineering, mathematics, and the sciences.12,13,14,15,16,17,18,19,20 The use of writing in introductory classes for engineers may be an effective vehicle to help students enhance their critical thinking and problem-solving skills. Writing can also assist students in identifying and confronting personal misconceptions.21,22,23,24 To reach these aims, the goals of ENGR0011 and ENGR0081 were modified to provide a more personalized experience for each student. Engineering faculty and librarians and English department faculty felt that by having the students take an active role in researching their profession and writing a research paper,21 the various academic and advising concerns regarding communication skill development could be addressed. Figure 1 shows the connection between the seminar course and the engineering problem-solving course.

These writing assignments allow the faculty to have one standard assignment for all students while producing different outcomes for all. The assignment instructions are the same for all students, but with each student researching his/her particular set of interests and strengths, all 450 students perform independent research.

Assignment One: Presenting Myself

To gain a sense of the students’ backgrounds, interests, and accomplishments, the mentors in ENGR0081 need information on each student. The first writing assignment is a letter of recommendation that the student prepares about himself/herself for an imaginary engineering scholarship. The E/FEWP staff provides a detailed assignment that explains the “letter of recommendation” scenario and expectations.

Writing the letter provides the students with the first step in meeting the composing goals of the standard first-semester general writing course, ENGCMP 0200, which focuses on “thoughtfully crafted essays that position the writer’s ideas among other views” and on “writ[ing] with precision, nuance, and awareness of textual conventions.”25 Students have the opportunity to introduce and describe themselves in the “voice” of someone who knows them and contextualizes them in particular ways. This allows students to consider what they might like to achieve in their freshman year of study and beyond, while allowing the university to merge advising into the curriculum. Through this assignment students have an opportunity to see that it’s useful for their instructors to know them on an individual basis. The letter also provides instructors and advisors with another glimpse into the students’ lives.

Figure 1: Connection Between ENGR0011 and ENGR0081

ENGR0011 Engineering Analysis (three-credit required course):
• Analytical problem-solving assignments using computers
• Writing assignments exploring engineering
• PowerPoint presentation
• Large lecture section

ENGR0081 Freshman Seminar (zero-credit required course):
• Mentor component
• Small class size, 10-15 students
• Informal setting

Connections: the ENGR0011 lecture provides information for the writing assignments, presentations are made in the small ENGR0081 groups, and the mentors help students with their research.



Assignment Two: Challenges and Issues in Engineering

The critical importance of information literacy in engineering education is first addressed in assignment two. With this assignment, librarians help students select appropriate sources, construct efficient searches, interpret search results, and obtain identified sources. To complete this assignment, students examine the National Academy of Engineering’s “14 Grand Challenges.”26 This introduces the idea of social responsibility and asks that they take a position on the role and responsibility of engineers in delineating and addressing social and environmental issues and problems. Students encounter the impact engineering can have on society, and they begin to practice making responsible arguments about significant, real-world matters. Students choose a challenge and delineate a particular topic, complete research using articles from a minimum of four trade and popular publications, and present their position on the importance of this topic to engineering, themselves, and society.

Students also begin practicing responsible research for academic and professional writing, as librarians introduce students to the concepts of appropriate, university-level literature research. The librarians provide information and examples that demonstrate the quality and authenticity risks of relying on web search engines or wikis. Librarians explain the basic research steps: taking information from a nebulous form, filtering it through a database, obtaining a list of citations that match criteria, and determining how to achieve full-text access.

Through the librarians’ classroom instruction and accompanying support materials, students learn that “library research” does not solely encompass finding books about a subject. Rather, it is about identifying, locating, and using various types of publications including trade and scholarly journals, books, technical reports, conference proceedings, and dissertations, along with subject-specific databases such as Inspec and Compendex.
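The basic research steps the librarians describe (formulate criteria, filter them through a database, obtain a list of matching citations, determine full-text access) can be sketched in miniature. This is only an illustrative toy: the records, field names, and titles below are invented for the example and stand in for a real database such as Inspec or Compendex.

```python
# Toy stand-in for a citation database; all record data here is invented.
records = [
    {"title": "Grand Challenges Report", "type": "report",
     "peer_reviewed": False, "full_text": True},
    {"title": "Writing to Learn in Engineering", "type": "journal",
     "peer_reviewed": True, "full_text": True},
    {"title": "Ethics Codes in Practice", "type": "journal",
     "peer_reviewed": True, "full_text": False},
]

def search(db, **criteria):
    """Return the citations whose fields match every given criterion."""
    return [r for r in db
            if all(r.get(k) == v for k, v in criteria.items())]

# Filter the database and obtain the list of matching citations.
hits = search(records, type="journal", peer_reviewed=True)

# Determine how to achieve full-text access for each hit.
for r in hits:
    status = "full text available" if r["full_text"] else "request via library"
    print(r["title"], "-", status)
```

The point of the sketch is the workflow, not the code: refining the `criteria` narrows the citation list, and full-text access is a separate step that may require the library even after a citation is found.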

Assignment Three: Engineering and Me

In this assignment, students research and analyze the field of engineering in which they intend to major. Students must show how the actual educational requirements, jobs, atmosphere, and salary ranges of a particular field are a good “match” with their interests, goals, and particular abilities. Instruction addresses locating verified, accurate, and authoritative career information, and includes demonstrations of using the online version of the Occupational Outlook Handbook.27 This assignment also introduces professional society websites, including organizations such as the National Society of Professional Engineers and the Junior Engineering Technical Society. The library session also provides students with an overview of the online library catalog, with particular emphasis on finding books, including the career guides and handbooks, which can serve as valuable sources of information on engineering careers.

Assignment Four: Engineering Challenges, Ethics, and Education

To complete assignment four, students rewrite their challenge/topic in terms of engineering ethics and engineering education. The goals of this assignment are introducing the concept of peer-reviewed publications and the resources for identifying them, as well as teaching more advanced techniques for conducting searches. Students need to search within the context of engineering ethics; thus, this assignment revisits the professional societies that are most frequently the sources of such codes.

Drawing on research into engineering codes of ethics and further research into the challenge/topic itself, students now articulate the relationship of the topic to particular codes and tenets of engineering ethics.

Assignment Five: Student Summary Presentation

Students’ end-of-the-semester presentations revisit, summarize, and reinforce the integral relationships between what engineering “is” (current challenges and trends, what kinds of work and compensation can be expected in particular fields, how engineering impacts society, and what an engineer’s responsibilities are) and how a particular engineering degree fits within the students’ interests and goals.

In this assignment students create and share a PowerPoint™ presentation and a poster presentation in their small ENGR0081 seminar. This provides an initial experience in presenting their scholarship before a group. Instructors for the ENGR0011 course introduce the required software for the project, including word processing and presentation software. Students are also required to create a personal website. All of the content from the writing assignments is modified and posted on their websites. During this assignment students experience the processes and potential impacts of communicating in three formats: traditional papers, public-speaking presentations with PowerPoint slides and posters, and a website.

Spring Writing Assignments: The Freshman Conference

The ENGR0012 writing assignment is preparing a formal written paper for publication and presentation at a conference scheduled at the end of the term. The theme of student papers must relate to topics covered in their physics, chemistry, calculus, or engineering classes. Thus, students relate their papers to their chosen field of engineering with a focus on the design, development, and/or function of a device; applications and public policy issues; or applications and social issues. The goals are for students to understand and engage in best research, writing, and communication practices; introduce them to the kinds of professional practices inherent to a professional situation such as a conference; and continue to help students select the best field of engineering for their interests and goals. In addition, students work in pairs. This is a practical necessity for conference scheduling, and it provides students with another intensive teamwork experience.

Paper Process

The first step is preparing an abstract based on the conference call for papers. This provides the incentive to choose a topic early and focus on the research aspects. The students then compose and submit an annotated bibliography as well as an extensive outline. When it became apparent that many students had never prepared a formal paper, the E/FEWP instructors began providing detailed materials at each step to guide students through the writing process.28,29

Multiple Reviews and Revisions

All students’ submissions undergo several formal reviews as the E/FEWP writing instructors review (and, if necessary, revise) each assignment for optimal content and correct formatting. In addition, the engineers and peer mentors review the technical content of each paper.

The conference usually consists of 30 sessions with six papers presented per session. Thirty alumni volunteers from the Pittsburgh area, together with approximately 30 peer mentors, serve as co-chairs for each session and as technical reviewers for the papers in their sessions. At the start of the paper draft production, the chairs meet with the students to review the abstracts, bibliographies, and outlines and to provide direction to the students. After the first draft is submitted, a second meeting is held to discuss final modifications. The mentors also meet with the students bi-weekly to chart their progress.

A peer review process is also included; the usefulness of this approach has been widely documented. In summary, this process produces independent reviews from an English faculty member, an alumnus, a peer mentor, and two fellow students.

Students utilize the reviewers’ comments to prepare a final paper. The revision process also introduces the concept of sustainability: during the revision phase, a new requirement is added that calls for a one-page discussion of the impact the paper topic has on sustainability. This “change order” accomplishes the task of having every student consider sustainability as it relates to his/her field.

The Conference

The conference awards best papers for each session and best overall conference papers.29 The format is much more formal than the fall presentation. During the fall semester, the presentations are five minutes long and are given in the comfort of the small-group mentor sections. There is no formal dress code, and the only people in the room are the other 10-12 students in the class. In contrast, the spring conference is a formal presentation with more than 50 people in the room, including faculty, students, and their parents. The fall presentations were specifically added to the curriculum to provide a safe, small-group, public-speaking opportunity prior to the spring presentation to the larger group.

Observations

As the purpose of the project was to focus on the academic, advising, and library concerns, we use these areas as our assessment guidelines. The main academic concern was students’ writing abilities. In the past 10 years, approximately 4,500 students have completed this integrated writing curriculum. The grades for the first writing assignment in the fall semester have consistently averaged in the B- to C+ range. The final conference paper grades have always increased to an average grade of A-. As a result, the English department now waives the Seminar in Composition course for engineering students because the writing component in the freshman year is meeting or exceeding the goals of a standard first-semester writing course, thus addressing the main academic concern.

Grades, however, are not the only measure of success. ABET criterion three posits a number of required soft skills: an understanding of professional and ethical responsibility; an ability to communicate effectively; the broad education necessary to understand the impact of engineering solutions in a societal context; a recognition of the need for, and an ability to engage in, life-long learning; and a knowledge of contemporary issues.30 The various writing assignments and presentation formats allow us to address every one of these criteria in one course, an impossibility in a traditional freshman curriculum. Criterion six suggests programs should have interactions with industrial and professional practitioners. The student conference papers include an evaluation by practicing engineers. During the last two years, the local chapter of the American Society of Civil Engineers has reviewed every civil engineering paper and given an award for the best paper at the conference.

By including the library staff in the entire process, we also addressed all of their concerns. The library staff’s assessment of the students’ ability to utilize the library facilities has demonstrated that students are learning the procedures needed to perform independent research. This skill will be very useful to the students throughout their college experience and beyond as they pursue life-long learning opportunities.

The final goal of the project was helping each student successfully select his or her major early in the college experience. During summer orientation, students are asked what major they will pursue. Fifty percent are undecided and 50% believe they know their chosen field. In March, students must select a major to register for classes. The writing assignments provide the needed information to help the undecided 50% select a major. Data also show that over half of the other students changed their minds between August and March; thus, the writing impacted more than 75% of the freshman class in deciding their final major. One reason for this success is that the students now have writing assignments directly related to the content of the lectures in the ENGR0081 seminar course. We find that a once very passive learning environment now actively engages the students.
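As a back-of-the-envelope check, the figures quoted above combine as follows; this is only a sketch, using the approximate percentages reported in the text.

```python
# Approximate figures reported in the text (fractions of the freshman class).
undecided = 0.50      # undecided about a major at summer orientation
decided = 0.50        # believed they knew their chosen field
changed_minds = 0.50  # over half of the decided group switched by March

# Students the writing assignments influenced: everyone who was undecided,
# plus the decided students who changed their minds.
impacted = undecided + decided * changed_minds
print(f"{impacted:.0%} of the freshman class")  # prints "75% of the freshman class"
```

Since "over half" of the decided group changed their minds, the true figure is somewhat above this 75% lower bound, matching the "more than 75%" stated in the text.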

This active, multi-faceted learning environment would not be possible without ongoing collaboration among faculty from various academic areas. From its inception, the E/FEWP required administrative support from various university areas and hands-on intellectual and pragmatic partnership among faculty from the Freshman Engineering Program, the English department, and the University of Pittsburgh Library System. The dean of Arts and Sciences, the dean of the Swanson School of Engineering, and the chair of the English department saw the benefits of supporting a program that would maintain rigorous standards for university-level writing while neither requiring the addition of 25 or more individual Seminar in Composition classes nor inserting another three-credit course into an already crowded freshman engineering curriculum.

The deans and the English department chair also recognized the less immediately practical, but equally significant, potential achievements of a program that would enact the best of writing-across-the-curriculum theory. The E/FEWP would allow students to further their knowledge about engineering (and about themselves in relation to the field and associated majors), while developing their communication and information literacy skills. Ideally—and the E/FEWP strives to meet and maintain these ideals—no academic area would “suffer” from combining research, writing, engineering, and advising. Students, faculty, advisors, the Schools of Engineering and Arts & Sciences, the English department, and the university library system would all benefit from the combination of literacies, skill sets, and areas of expertise necessary for a successful, integrated program.

Too often, writing-across-the-curriculum ideas that look promising on paper do not maintain momentum in practice because the “home” faculty, while valuing research and communication skills in theory, ultimately have difficulty providing the necessary time for best writing instruction, processes, and practices. English faculty involved in writing-across-the-curriculum initiatives are often frustrated by the lack of time and support home faculty can or will give to writing instruction and experience. In the case of the E/FEWP, from the start engineering faculty demonstrated their investment in this cross-curricular initiative by working closely with English faculty to alter curricula to allow for optimal results. Early in the process, English, engineering, and library faculty established a practice of meeting regularly to work on scheduling and on academic and intellectual possibilities. When students see engineering and English faculty together in their classrooms, they observe firsthand how each academic area supports the other and how each faculty member has equal interest and authority.

A program such as the E/FEWP need not be exclusive to collaboration between engineering and English. For such a program to work within or between schools, what is needed is for administration, faculty, and advisors to recognize that they will benefit, along with each school, discipline, or area. Yes, budgetary and scheduling exigencies can be usefully addressed by combining courses. Yes, students can benefit tremendously by seeing hard skills, soft skills, life skills, quantitative skills, and communication skills as part of a successful academic and professional whole. Both faculty and advisors involved have much to gain, professionally and intellectually, from such collaboration. Working with colleagues from different disciplines and areas promotes valuable new perspectives on one’s own discipline and leads to professional insights and opportunities that simply would not exist without experiencing, first-hand, the methods and knowledge bases of other fields.

Summary and Conclusions

Programs such as the Swanson School of Engineering’s E/FEWP contribute significantly to the soft skills that promote whole-picture, whole-engineer vision and action. Through collaboration, both faculty and students were enriched by tapping into the multiple intelligences resident within the university as a whole. By integrating research and writing into an engineering curriculum and classroom, students also observe and gain first-hand practice in how research and writing are essential components of being responsible, well-educated, capable students and engineers. Particularly during the freshman year, students can find that their coursework seems distant from the “real world” work they imagine they will be doing closer to or after graduation. Including research and writing on engineering majors, jobs, trends, and issues in the students’ first-year experience can allow them to feel they are seeing and understanding the “real world” of engineering, which can be difficult to represent in a freshman physics or calculus course. To quote Sara Sabol, a Swanson School of Engineering undergraduate: “The kinds of papers you write in your freshman year give you an opportunity to explore so many topics and issues. Doing research and writing the papers allows you to understand what all ‘engineering’ can mean. It makes you more secure about the choices you are making.”

References

1. L. Shuman, M. Besterfield-Sacre, and J. McGourty, “The ABET ‘Professional Skills’—Can They Be Taught? Can They Be Assessed?” Journal of Engineering Education, Vol. 94, No. 1, 2005, pp. 41-55.

2. N. Al-Holou, N. M. Bilgutay, C. Corleto, J. T. Demel, R. Felder, K. Friar, J. E. Froyd, M. Hoit, J. Morgan, and D. L. Wells, “First-Year Integrated Curricula: Design Alternatives and Examples,” Journal of Engineering Education, Vol. 88, No. 4, Oct. 1999, pp. 435-448.

3. N. A. Pendergrass, R. E. Kowalczyk, J. P. Dowd, R. N. Laoulache, W. Nelles, J. A. Golen, E. Fowler, “Improving First-Year Engineering Education,” Proceedings of the 1999 Frontiers in Education Conference, San Juan, P.R., Nov. 1999.

4. Dan D. Budny, “The Freshman Seminar Assisting the Freshman Engineering Student’s Transition From High School to College,” American Society for Engineering Education 2001 Annual Conference, Albuquerque, NM, June 2001, pp. 1-7.

5. E. L. Boyer, College: The Undergraduate Experience in America, Harper & Row Publishers, 1987.

6. A. W. Chickering and Associates, The Modern American College, Jossey-Bass Publishers, 1981.

7. L. C. Coburn and M. L. Treeger, Letting Go: A Parents’ Guide to Today’s College Experience, Alder & Alder, 2003.

8. C. A. Tomlinson, The Differentiated Classroom: Responding to the Needs of All Learners, Association for Supervision and Curriculum Development, 1999.

9. D. D. Budny and C. Paul, “Integrating Peer Mentoring Into the Freshman Curriculum,” Proceedings 2004 IEEE/ASEE Frontiers in Education Conference, Oct. 2004, Savannah, GA.

10. American Library Association, Presidential Committee on Information Literacy, Final Report, American Library Association, 1989. http://www.ala.org/ala/mgrps/divs/acrl/publications/whitepapers/presidential.cfm.

11. Honora Nerz and Suzanne Weiner, “Information Competencies: A Strategic Approach,” 2001 ASEE Annual Conference Proceedings, http://www.asee.org/conference/search/00510_2001.pdf.

12. P. Connolly and T. Vilardi, Writing to Learn in Mathematics and Science, Teachers College Press, 1989.


13. Joan Countryman, Writing to Learn Mathematics: Strategies That Work, Heinemann Educational Books, Inc., 1992.

14. T. L. Hein, “Using Student Writing as a Research and Learning Tool,” AAPT Announcer, Vol. 27, No. 4, p. 29.

15. T. L. Hein, “Writing: An Effective Learning Tool for Non-Science Majors,” AAPT Announcer, Vol. 29, No. 2, p. 114.

16. W. L. Kirkland, “Teaching Biology Through Creative Writing,” Journal of College Science Teaching, Vol. 26, No. 4, pp. 277-279.

17. W. J. Mullin, “Writing in Physics,” The Physics Teacher, Vol. 27, No. 5, pp. 342-347.

18. R. E. Rice, “‘Scientific Writing’—A Course to Improve the Writing of Science Students,” Journal of College Science Teaching, Vol. 27, No. 4, pp. 267-272.

19. J. E. Sharp, B. M. Olds, R. L. Miller, and M. Dyrud, “Four Effective Writing Strategies for Engineering Classes,” Journal of Engineering Education, Vol. 88, No. 1, pp 53-57.

20. K. Walker, “Integrating Writing Instruction Into Engineering Courses: A Writing Center Model,” Journal of Engineering Education, Vol. 89, No. 3, pp. 369-374.

21. T. L. Hein, “Using Writing to Confront Student Misconceptions in Physics,” European Journal of Physics, 1999, No. 20, pp.137-141.

22. T. L. Hein, “Writing in Physics: A Valuable Tool for Other Disciplines,” The Teaching Professor, Vol. 1, No. 10, pp. 2-3.

23. S. Tobias, They’re Not Dumb, They’re Different: Stalking the Second Tier, Research Corporation, 1990.

24. S. Tobias, in Paul Connolly and Teresa Vilardi (Eds.), Writing to Learn Mathematics and Science, Teachers College Press, 1989.

25. University of Pittsburgh Introductory Seminars and Workshops in Composition website, http://www.english.pitt.edu/composition/seminars.html.

26. National Academy of Engineering’s “14 Grand Challenges,” http://www.engineeringchallenges.org/.

27. Occupational Outlook Handbook (OOH), United States Department of Labor, Bureau of Labor Statistics, 2010-11 Edition, http://www.bls.gov/oco/.

28. University of Pittsburgh ENGR0012 Course website, http://www.engr.pitt.edu/freshman/academic/eng12/index.html.

29. University of Pittsburgh ENGR0012 Conference Information website, http://www.engr.pitt.edu/freshman/FreshmanConference.html.

30. ABET requirements, http://abet.org/Linked%20Documents-UPDATE/Criteria%20and%20PP/E001%2009-10%20EAC%20Criteria%2012-01-08.pdf.

Michael W. Ford, Jr.
Michael W. Ford, Jr. currently works in public services at the Bevier Engineering Library at the University of Pittsburgh, and is an adjunct instructor at the university’s School of Information Sciences, teaching science and technology resources for the graduate program in library science. He has held positions in the science libraries for more than 15 years, including involvement with programs in engineering, biology, physics, and neuroscience. His research interests include information literacy and the impact of digital media on library services. Contact Ford by e-mail at [email protected].

Dan Budny
Dan Budny holds a joint appointment as associate professor in the School of Civil Engineering and the director of the Freshman Engineering Program at the University of Pittsburgh, Swanson School of Engineering. His research area is in the development of programs that assist entering students by providing counseling and cooperative learning environments for students in their first and second semester Freshman Engineering Program courses. Budny is very active in the American Society for Engineering Education (ASEE) within the Freshmen Programs and the Educational Research and Methods Divisions, and is on the ASEE/IEEE Frontiers in Education Conference Board and the ASEE board of directors. Budny also conducts numerous teaching workshops both at the university and other locations. Contact him at [email protected].

Beth Newborg
Beth Bateman Newborg holds a master’s degree in literature and cultural studies from the University of Pittsburgh. She is the director of the University of Pittsburgh’s English/Freshman Engineering Writing Program and the outreach coordinator for the university’s writing center. Newborg is also a published poet. She may be reached via e-mail at [email protected].


C. Judson King, Susan A. Ambrose, Raoul A. Arreola, Karan Watson, Richard M. Taber, Norman L. Fortenberry, and Elizabeth T. Cady

Multidimensional metrics drawn from different constituencies are needed to measure the instructional effectiveness of engineering faculty members and improve their scholarly teaching.

Metrics and Methodology for Assessing Engineering Instruction

One of the challenges in balancing how faculty allocate their time among the three canonical tasks of research, teaching, and service is that the only well-established assessment methods are in research—even though those methods are imperfect.

In recent years, increased attention has been devoted to the assessment of teaching. For example, the Association of American Medical Colleges (AAMC) issued a 2007 report that offered a conceptual framework (quantity, quality, and engagement) and specific categories of educator activity (such as teaching, curriculum, advising and mentoring, educational leadership, and learner assessment).1

We can define the assessment of the educator activities specified in the AAMC report as an assessment of teaching effectiveness. Based on their review of practices, Schaefer and Utschig2 indicated that the context of such assessments is provided by the answers to the following questions:

• What is the organizational context of the assessment?

• Is participation in the assessment voluntary or mandatory?

• Whose instructional performance is being assessed?

• What professional development activities support or operate in parallel with the assessment?


The sine qua non—meaning, “without which there is nothing”—of academic research evaluation is peer review. Since the early 1990s, faculty at U.S. institutions have been exploring various models for implementing peer review of teaching, seeking to maximize the familiar and collaborative nature of peer review while addressing the political and methodological challenges of applying peer review to teaching.3 Specific interest in peer review of teaching in engineering disciplines is implied in calls for greater recognition and reward of instructional innovation within a 1995 report4 by the Board on Engineering Education of the National Research Council (NRC) and is explicitly indicated by a 2003 NRC report.5

In 2007, with support from the National Science Foundation, the National Academy of Engineering convened a committee of engineering educators, leaders in faculty professional development, and experts in teaching assessment. They were charged with organizing a fact-finding workshop and preparing a succinct consensus report that addressed the development and implementation of a system to measure the instructional effectiveness of engineering faculty members.

The charge to the committee was to identify and assess options for evaluating scholarly teaching, which includes a variety of actions and knowledge related to faculty members’ content expertise, instructional design skills, delivery skills, understanding of outcomes assessment, and course management skills. The intent of this project was to provide a concise description of a process to develop and institute a valid and acceptable means of measuring teaching effectiveness. This, in turn, would foster greater acceptance and rewards for faculty efforts to improve their performance in the teaching role that forms part of their faculty responsibilities. Although the focus was in the area of engineering, the concepts and approaches are applicable to all fields of higher education.

The study process included a fact-finding workshop that brought together 25 experts in the areas of engineering education, institutional administration, and teaching and learning assessment. Three commissioned papers were presented relating to research in assessing instructional effectiveness, currently available metrics, and what constitutes effective teaching.

Drawing on the commissioned papers, workshop discussions, and additional background research, the committee (article authors C. Judson King, Susan A. Ambrose, Raoul A. Arreola, and Karan Watson), with support of NAE professional staff, prepared a report6 that addressed the following topics:

• Background, framing, and concepts.

• Governing principles of good metrics.

• The committee’s key assumptions in approaching the task.

• Attributes that should be measured and sources of data.

• Methods to measure and compute teaching performance.

Some Stipulations

The committee reached the following stipulations and recommendations for action by institutional leaders and external stakeholders of the engineering educational system:

• Faculty instructional enrichment programs on campus often have high enrollments and are sometimes oversubscribed (relative to the resources available to faculty development programs). The optional nature of such programs and their limited resources, however, lead to low and uneven overall participation.

• The development of a thoughtfully designed and agreed upon method of evaluating teaching effectiveness—based on research of effective teaching and learning—would provide administrators and faculty members the ability to use quantitative metrics in the promotion and tenure process.

• Quantitative and broad metrics would provide faculty members with an incentive to invest time and effort to enhance their instructional skills.

• All faculty and administrators should have significant input into the design of an evaluation/assessment system and should provide feedback based upon the results stemming from the evaluation system that is developed.

• The assumptions, principles, and expected outcomes of assessing teaching effectiveness should be explicit (and repeated frequently) to those subject to the evaluations, as well as to those who will conduct the evaluations.

• Information gathered for tenure and promotion evaluations will likely overlap with information gathered for professional development. These two functions, however, should remain separate so that identifying weaknesses for professional development efforts (collecting formative assessment data) is not seen as having potentially negative impacts on tenure and promotion evaluation (summative assessment data). This is a necessary safeguard that maintains faculty members’ confidence that sincere efforts to improve their teaching through honest evaluations of strengths and weaknesses will not result in downgraded tenure and promotion evaluations.

Recommendations

Our recommendations are that institutions, engineering deans, and department heads should:

• Use multidimensional metrics that draw upon different constituencies to evaluate the content, organization, and delivery of course material, and the assessment of student learning. Examples of possible metrics are shown in Table 1.

Table 1: Example of Using Metrics for Evaluating Teaching Effectiveness
(Teaching component of the overall faculty evaluation: minimum 20%, maximum 60%)

Sources of measurement data by performance component; source weights: Students (25%), Peers (45%), Dept. Chair/Supervisor (20%), Self (10%).

Content expertise
• Peers: Review of education, scholarship, professional society activities, assessment of currency in field
• Self: Portfolio: evidence of ongoing proficiency in the field

Instructional design
• Students: Student rating form
• Peers: Peer review of course materials (syllabus, readings, experiments, examinations, handouts, etc.)
• Self: Portfolio: evidence of a strategy for designing instructional experiences

Instructional delivery
• Students: Student rating form
• Peers: Peer review of course materials (to include items previously listed) combined with peer assessment of classroom presentation skills
• Self: Portfolio: evidence of strategies and methods of delivery and communication

Instructional assessment
• Students: Student rating form
• Peers: Peer review of course materials (syllabus, readings, experiments, examinations, handouts, etc.)
• Dept. Chair/Supervisor: Compliance with policies and procedures concerning testing and grading
• Self: Portfolio: evidence of techniques, strategies, and methods of assessing student learning and providing meaningful feedback

Course management
• Dept. Chair/Supervisor: Timely ordering of lab supplies, submission of grades, drop-add slips, etc.
• Self: Portfolio: specification of conditions affecting management of the teaching environment

Reprinted with permission from “Developing Metrics for Assessing Engineering Instruction: What Gets Measured is What Gets Improved,” 2009, by the National Academy of Sciences, courtesy of the National Academies Press, Washington, D.C.
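The weighting scheme in Table 1 lends itself to a simple composite calculation. The sketch below is illustrative only: the source weights and the 20%–60% bounds come from Table 1, but the 1-to-5 scale, the sample ratings, and the function names are hypothetical, not prescribed by the committee's report.

```python
# Illustrative composite teaching score using the Table 1 source weights.
# Ratings are hypothetical values on a 1-5 scale; the report itself does
# not prescribe this exact arithmetic.

SOURCE_WEIGHTS = {"students": 0.25, "peers": 0.45, "chair": 0.20, "self": 0.10}

def teaching_score(source_ratings):
    """Combine per-source ratings into one weighted teaching score."""
    return sum(SOURCE_WEIGHTS[src] * rating for src, rating in source_ratings.items())

def teaching_contribution(score, teaching_weight):
    """Teaching's contribution to the overall faculty evaluation.

    Table 1 bounds the teaching component at 20% minimum, 60% maximum.
    """
    if not 0.20 <= teaching_weight <= 0.60:
        raise ValueError("teaching weight must be between 20% and 60%")
    return score * teaching_weight

ratings = {"students": 4.2, "peers": 3.8, "chair": 4.5, "self": 4.0}
score = teaching_score(ratings)
print(round(score, 2))                               # 4.06
print(round(teaching_contribution(score, 0.40), 3))  # 1.624
```

Such a computed score only prioritizes discussion; as the stipulations above note, the formative and summative uses of the underlying data should remain separate.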

• Take the lead in gaining widespread acceptance of metrics for evaluating teaching effectiveness in engineering. Their links to faculty and institutional administrators give them the authority to engage in meaningful dialogue in the college of engineering and throughout the larger institution.

• Seek to develop an appropriate number of evaluators who have the knowledge, skills, and experience to provide rigorous, meaningful assessments of instructional effectiveness (in much the same way that those institutions seek to ensure the development of the skills and knowledge required for excellent disciplinary research).

• Seek out and take advantage of external resources, such as associations, societies, or programs focused on teaching excellence—for example, Carnegie Academy for the Scholarship of Teaching and Learning, the U.K.’s Higher Education Academy, and Professional and Organizational Development Network—as well as on-campus teaching and learning resource centers and organizations focused on engineering education. This includes the International Society for Engineering Education and the Foundation Engineering Education Coalition’s website devoted to Active/Cooperative Learning: Best Practices in Engineering Education.7

Leaders of the engineering profession (including the National Academy of Engineering, American Society for Engineering Education, ABET Inc., American Association of Engineering Societies, the Engineering Deans’ Council, and the various engineering disciplinary societies) should:

• Continue to promote programs and provide support for individuals and institutions pursuing efforts to accelerate the development and implementation of metrics for evaluating instructional effectiveness.

• Create and nurture models of metrics for evaluating instructional effectiveness. Although each institution will have particular needs and demands, nationally known examples of well-informed, well-supported, and carefully developed instructional evaluation programs will benefit the entire field.

Susan Ambrose
Susan Ambrose is associate provost for education, director of the Eberly Center for Teaching Excellence, and teaching professor in the history department at Carnegie Mellon University in Pittsburgh, PA.

C. Judson King
C. Judson King is an emeritus provost and senior vice president of academic affairs and emeritus professor of chemical engineering for the University of California System.

Karan L. Watson
Karan L. Watson is dean of faculties and associate provost at Texas A&M University in College Station, TX. She is a regents professor in the electrical engineering department.

Raoul A. Arreola
Raoul A. Arreola is on the faculty of The University of Tennessee Health Science Center.

ABET engineering accreditation criterion six requires that engineering faculty “must be of sufficient number and must have the competencies to cover all of the curricular areas of the program” and explicitly indicates that among the factors to be examined in judging faculty competence is “teaching effectiveness.”8 This seems to imply the accreditation regime already provides an incentive for engineering programs to adapt the scheme presented in the committee’s report.

Note: This article originally appeared in the ASQ Higher Education Brief, August 2009 and is used with permission.

References

1. Association of American Medical Colleges, “Advancing Educators and Education: Defining the Components and Evidence of Educational Scholarship,” Faculty Vitae, 2007.

2. Dirk Schaefer and Tristan Utschig, “A Review of Professional Qualification, Development, and Recognition of Faculty Teaching in Higher Education Around the World,” Proceedings of the American Society for Engineering Education Annual Conference and Exposition, Pittsburgh, June 22-25, 2008.

3. Patricia Hutchings, “The Peer Review of Teaching: Progress, Issues, and Prospects,” Innovative Higher Education, Vol. 20, No. 4, 1996, pp. 221-234.

4. Board on Engineering Education, National Research Council, Engineering Education: Designing an Adaptive System, National Academies Press, 1995.

5. National Research Council, Evaluating and Improving Undergraduate Teaching in Science, Mathematics, Engineering, and Technology, National Academies Press, 2003.

6. National Academy of Engineering, Developing Metrics for Assessing Engineering Instruction: What Gets Measured is What Gets Improved, National Academies Press, 2009, www.nap.edu/catalog.php?record_id=12636.

7. Arizona State University, “Active/Cooperative Learning, Best Practices in Engineering Education,” http://clte.asu.edu/active/main.htm.

8. ABET Inc., “Criteria for Accreditation of Engineering Programs: Effective for Evaluations During the 2009-2010 Accreditation Cycle,” www.abet.org/Linked%20Documents-UPDATE/Criteria%20and%20PP/E001%2009-10%20EAC%20Criteria%2012-01-08.pdf.

Elizabeth T. Cady
Elizabeth T. Cady is an associate program officer at the Center for the Advancement of Scholarship on Engineering Education (CASEE) at the National Academy of Engineering.

Norman L. Fortenberry
Norman L. Fortenberry is the founding director of the Center for the Advancement of Scholarship on Engineering Education (CASEE) at the National Academy of Engineering. He can be reached by e-mail at [email protected].

Richard Taber
Richard Taber was a program officer with responsibility for the standing committee on engineering education at the National Academy of Engineering in Washington, D.C.


Mohamed E. M. El-Sayed and Kathleen Burke

Quality indices can be used to improve the process of interpreting and acting on routine student evaluations, demonstrating how traditional quality tools can be applied to higher education.

Transforming Teaching Evaluation to Quality Indices

Abstract

Several studies have examined how to embed quality improvement methodologies in education. These efforts, however, focused on using the quality improvement tools developed for general use in industry instead of developing more appropriate tools for educational practices. In this article, we focus on achieving the goals of quality methodologies by incorporating their key principles into traditional educational tools. This approach can transform routine student course evaluations into a quality improvement tool through the establishment of quality indices. We demonstrate the development and application of the quality indices by presenting two examples, which are calculated using student evaluation of teaching surveys from two different instructors’ courses.

Introduction

Global competition has fueled the drive for quality improvement in all areas of life, including higher education. Several research efforts have focused on developing tools and processes for quality improvement in higher education. For example, to improve the quality of teaching and learning, the National Academy of Engineering in 2007 organized a workshop to address the development and implementation of a system to measure the instructional effectiveness of engineering faculty members.1 Other efforts concentrate on embedding quality improvement methods in education using quality function deployment (QFD). In one case, the authors outlined the considerations needed for developing an assessment tool and process using QFD.2 In another research effort, an experiment focused on treating students as customers and applying QFD to study the effect on teaching quality.3 A separate QFD study involved several educational institutes in India.4 In this study, education was considered as a service that needs to adopt the techniques of other industries in measuring service quality and customer satisfaction. The institutes were studied in terms of how well they meet the needs of their local industrial customers, and the results clearly demonstrated a lack of customer satisfaction. Another research effort centered on student retention, with results prompting mandatory use of student evaluation of teaching (SET) surveys at Japanese universities.5

This article presents a different approach for developing a quality improvement tool for higher education. It is built on being “learner-centered”: getting continuous, meaningful feedback from the learners and reflecting on that feedback as a key to improvement.6 The main concept is to focus on achieving quality improvement goals by transforming SETs, a traditionally accepted tool embedded in higher education. We make use of the goals established in methodologies already utilized for quality improvement in industry and apply the concepts to SETs to generate meaningful data that faculty can use for quality improvement.

Quality Improvement Methodologies

The Kano Model7 and QFD8 are two of the most prominent methodologies used for quality improvement today. The Kano Model identifies three types of customer requirements, as shown in Figure 1. These requirements are defined as follows:

• Must-be: These are requirements that the customer may not explicitly demand but will be extremely dissatisfied about if they are not met.

• One-dimensional: The customer not only demands these requirements, but satisfaction is also proportional to the level of achievement.

• Attractive: The customer does not usually expect these requirements, but they lead to excitement and more than proportional satisfaction.

In general, this model identifies the following main issues:

• Some attributes are more important than others in the eyes of the customer.

• The expected level of satisfaction varies for different attributes.

• Surveying customers to obtain importance and satisfaction levels is the key to any quality improvement.
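The asymmetry among the three Kano requirement types can be made concrete with a toy model. The curve shapes below are schematic illustrations of the categories described above, not calibrated formulas from the Kano literature.

```python
# Toy sketch of how satisfaction responds to requirement fulfillment
# (0.0 = not fulfilled, 1.0 = fully fulfilled) under the three Kano types.
# Satisfaction runs from -1.0 (extremely dissatisfied) to +1.0 (delighted).
# The specific formulas are illustrative, chosen only to show the asymmetry.

def satisfaction(kind, fulfillment):
    if kind == "must-be":
        # Severe dissatisfaction when unmet; full fulfillment earns only neutrality.
        return min(0.0, -1.0 + 2.0 * fulfillment)
    if kind == "one-dimensional":
        # Satisfaction rises in proportion to fulfillment.
        return -1.0 + 2.0 * fulfillment
    if kind == "attractive":
        # No penalty when absent; delight grows faster than proportionally.
        return fulfillment ** 2
    raise ValueError(f"unknown Kano type: {kind!r}")

for kind in ("must-be", "one-dimensional", "attractive"):
    print(kind, satisfaction(kind, 0.0), satisfaction(kind, 1.0))
```

Note how only the one-dimensional type rewards fulfillment symmetrically; must-be requirements can only hurt, and attractive requirements can only help.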

The QFD concept, introduced in Japan, is based on building a sequence of operations to translate the voice of the customer into the final product or service.10,11,12 The traditional QFD four-phase approach uses a series of matrices that guide the development activities.8 Each phase has a matrix consisting of a vertical column of “WHATS” and a horizontal row of “HOWS.” This QFD model includes the following key components:

• Customer requirements: These are the “WHATS,” the customers’ wishes, expectations, and requirements for the product or service.

• Customer importance ratings: These are the customers’ numerical ratings of the “WHATS” in terms of their importance. A rating of one to five is often used, where five represents the most important and one the least important. These ratings are used as multiplicative factors to create indices, which help prioritize action.

• Customer competitive evaluations: A comparison is made between the assessed product or service and similar competitive products or services. The customer gives a rating of one to five where five is the best and one is the worst.

Based on these methodologies, it is clear that understanding customers’ priorities and their levels of expectation and satisfaction is the key to quality improvement.
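The multiplicative use of importance ratings described above can be sketched as a simple prioritization index. The attribute names, the sample ratings, and the particular formula (importance times the gap to a target rating) are hypothetical illustrations, not a calculation prescribed by the QFD literature.

```python
# Sketch of a QFD-style prioritization index: each attribute's customer
# importance rating (1-5) multiplies the gap between a target satisfaction
# level and the current competitive rating (1-5). A larger index means a
# higher improvement priority. All names and numbers are illustrative.

TARGET = 5  # aim for best-in-class satisfaction

def priority_index(importance, current_rating, target=TARGET):
    return importance * (target - current_rating)

attributes = [
    # (attribute, customer importance 1-5, current competitive rating 1-5)
    ("clear explanations", 5, 3),
    ("timely feedback", 4, 2),
    ("course organization", 3, 4),
]

ranked = sorted(
    ((name, priority_index(imp, cur)) for name, imp, cur in attributes),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, index in ranked:
    print(f"{name}: {index}")
```

In this hypothetical example, “timely feedback” ranks first despite its lower importance rating, because its satisfaction gap is largest; this is exactly the kind of prioritization the quality indices in this article aim to bring to SET data.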

Figure 1: Kano’s Model of Customer Satisfaction9
[Figure: plots customer satisfaction (from dissatisfied to satisfied) against requirement fulfillment (from not fulfilled to fulfilled) for the three requirement types: must-be requirements (implied, self-evident, not expressed, obvious), one-dimensional requirements (articulated, specified, measurable, technical), and attractive requirements (not expressed, customer tailored, cause delight).]

Student Evaluation of Teaching Surveys

Performing course evaluations to obtain students’ feedback is one of the most routine practices in higher education.13 The goal of SETs is capturing students’ perceptions of instructors’ performance based on students’ opinion.14,15,16 Several tools, varying from written forms to electronic surveys, are used for collecting the evaluation data.16 The collected data from any of the course evaluation tools is used to evaluate the effect of the instructors’ teaching practice on students’ learning.17,18,19 For some institutions, the evaluation results are also used to evaluate faculty member performance for merit raises, tenure, and promotion.20 Several efforts address the process of building and operating a comprehensive faculty evaluation system and improving faculty performance both with faculty peer reviews and with SETs.1,21,22

Although “measurement of the quality of teaching activities is becoming increasingly important since universities are rewarding faculty performance in terms of promotion, awards, and bonuses,”23 most published research efforts question the effectiveness of using SETs to assess quality and teaching performance. Some researchers have made the following statements:

• “Student evaluation of instruction in college and university courses has been a routine and mandatory part of undergraduate and graduate education for some time. A major shortcoming of the process is that it often relies exclusively on the opinions or qualitative judgments of students rather than the learning or transfer of knowledge that takes place in the classroom.”18

• “The use of student evaluations of teaching (SETs) to assess teaching effectiveness remains controversial. Without clear guidelines regarding how to best document effective teaching, faculty members may wonder how to convincingly demonstrate teaching effectiveness in preparation for promotion and tenure review.”20

• “There has been considerable debate as to whether course evaluations are valid measures of teaching quality, or whether students instead reward tutors who give them high grades and assign low levels of work.”24

Although it is not clear if improving course evaluation scores can be directly tied to improvement in teaching quality, it is clear that SET evaluation data are routinely used as an indicator for rewarding and retaining faculty with the hope that they will serve as a motivating factor for improving teaching quality. In other words, just conducting SETs does not directly connect to the desired results of continuous quality improvement. Moreover, the current reward system is based on the scores of some discrete evaluations for different classes, instead of the long-term commitment of the faculty member to quality and continuous improvement.

Most SET tools in their current form are limited in enabling deliberate and sustained quality improvement for numerous reasons. First, most SET tools assume that all surveyed attributes are equally important. As a result of this assumption, there is usually a lack of identified priorities for addressing the surveyed attributes. Second, most SET tools assume that all those surveyed have the same demands and express their level of satisfaction by marking the top score for each surveyed attribute. Furthermore, in most institutions the same evaluation form is used; thus, the evaluation tool does not allow for variations based on students, course content, and instructor needs. Finally, the SETs are based on students’ feedback (the perceived voice of the customer) at the end of the course. The course improvement effort, if any, will be directed to a different group of students the next time the course is taught.

Methodology for Transforming Student Evaluations

To help facilitate quality improvement, we need to transform traditional SETs into a tool producing data that can provide valuable feedback to the instructor. Our tool is based upon the two previously discussed quality improvement methodologies, the Kano Model and QFD. In accordance with these methodologies, it’s vital to obtain the student’s perception of the product or service from three independent elements. First, determine what the student expects of this product or service, or, in other words, the student’s level of expectation. Second, establish how satisfied the student currently feels about the product or service, or the level of satisfaction. Finally, find out how important the surveyed attribute is to the student, or the level of importance.

We transform the traditional SETs by surveying students using Likert scales for the importance, the expected level of satisfaction, and the current level of satisfaction for each attribute on the traditional SETs. These data are then compiled to



Table 1: Sample of Student Assessment of Teaching Quality Surveys (SATQs)

Please rate the following statements based on the following scale: 1 = Not important/Not satisfied; 3 = Neutral; 5 = Very important/Very satisfied.

For each of the 18 statements, students answer three questions: How important is this characteristic to you? What is your expected level of satisfaction? What is your current level of satisfaction?

• My instructor seems well prepared for class.
• There is sufficient time in class for questions and discussions.
• My instructor explains difficult material clearly.
• I would enjoy taking another course from this instructor.
• My instructor makes good use of examples and illustrations.
• Exams stress important points of the lectures/text.
• This course builds understanding of concepts and principles.
• The assigned reading is well integrated into this course.
• My instructor is actively helpful when students have problems.
• Complexity and length of course assignments are reasonable.
• My instructor is readily available for consultation.
• Class lectures contain information not covered in the textbook.
• When I have a question or comment I know it will be respected.
• I like the way the instructor conducts this course.
• This course has clearly stated objectives.
• My instructor motivates me to do my best work.
• Lecture information is highly relevant to course objectives.
• My instructor has stimulated my thinking.



produce a quality index (QI) for each attribute based on the average student response to the three scales.25 The QI is defined as:

QI = relative importance x (current level of satisfaction - expected level of satisfaction)

In the QI formula, the difference between the current level of satisfaction and the expected level of satisfaction is called the satisfaction gap. A negative satisfaction gap will result when the level of expectation is higher than the level of satisfaction. Multiplying the negative satisfaction gap by the relative importance results in the weighted quality gap, which, in this case, is a negative QI indicating an unsatisfied student. On the other hand, a weighted positive satisfaction gap results in a positive QI indicating a delighted student, since the level of satisfaction exceeds the level of expectation. A zero QI indicates a satisfied student. Since we use a five-point Likert scale to capture the students’ perceptions, the QI for the least satisfied student could equal -20, whereas the QI for the most satisfied student could equal 20.25
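As a minimal sketch of the QI formula above (the function name is ours, not from the article), the QI for one student's three Likert responses on a single attribute:

```python
def quality_index(importance, current, expected):
    """Importance-weighted satisfaction gap for one attribute.

    All three inputs are 1-5 Likert responses from one student:
    the level of importance, the current level of satisfaction,
    and the expected level of satisfaction.
    """
    gap = current - expected  # satisfaction gap: negative when expectation exceeds satisfaction
    return importance * gap   # weighted quality gap, i.e., the QI

print(quality_index(5, 3, 4))  # → -5 (unsatisfied: expectation exceeds satisfaction)
print(quality_index(5, 5, 3))  # → 10 (delighted: satisfaction exceeds expectation)
print(quality_index(4, 3, 3))  # → 0  (satisfied: gap is zero)
```

The sign conveys direction (dissatisfaction vs. delight) while the importance weighting sets the priority, which is what makes a single QI per attribute usable for ranking improvement efforts.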

Examples of Student Assessment of Teaching Quality Surveys

In a department at a small public liberal arts institution, the end-of-semester SETs consist of 18 items that are uniform across all departmental courses. To transform these evaluations into information that faculty can use to improve performance, three questions were used for each characteristic. The first asked the students how important the characteristic was to them. Next, they were asked what their expected level of satisfaction was for each of these items. Finally, they were questioned about their current level of satisfaction with each of the items.

The transformed survey instrument is displayed in Table 1. Two different instructors in two different major courses utilized the transformed survey instrument after the midpoint in their courses. In course one, 22 students were surveyed, and in course two, 51 students completed the survey.

The average QIs for each characteristic were compiled for both courses and are displayed in Figures 2 and 3, respectively. The average QIs were

Figure 2: Course One—Compiled Student Assessment of Teaching Quality Indices

[Bar chart of the average QI for each of the 18 survey attributes (n = 22), sorted from -2.72 to 1.71. Worked example shown on the figure for “My instructor explains difficult material clearly”: importance 4.85, expected level of satisfaction 4.59, current level of satisfaction 4.02; Quality Index = 4.85*(4.02-4.59) = -2.72.]



sorted from smallest to largest for ease of interpretation. The sign and magnitude of the average QI for each attribute clearly define the strengths and areas of improvement for the instructor of each course. The magnitude of each positive QI indicates the level of satisfaction for the specific attribute. The magnitude of each negative QI indicates the level of dissatisfaction for the specific attribute.

An example of how the average QI was calculated using the “My instructor explains difficult material clearly” attribute is shown on each figure. Note that an individual QI value cannot be determined if one of its three constituent values is missing, so it is important to calculate individual student QI values first and then average them into the course QI when there are any missing responses. If there are no missing responses, the averaged constituent values may be used to determine the average course QI, as was done for this example. Interestingly, the students in both classes have similar levels of importance and expectations for this characteristic. Their current level of satisfaction, however, is lower in course one than in course two, yielding a lower QI for the instructor in course one. This is an area for improvement for that instructor. By giving special attention to students’ comments for areas where their satisfaction levels are lower than expected, the instructor can gain insights on why and how to close those gaps and continuously improve the quality of his/her course.
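The missing-response rule described above (compute each student's QI first, then average) can be sketched as follows; the function name and data layout are illustrative assumptions, not from the article:

```python
def course_qi(responses):
    """Average QI over students for one attribute.

    `responses` is a list of (importance, current, expected) tuples,
    one per student; None marks a missing answer. A student's QI is
    computed only when all three values are present, and the course
    QI is the mean of those individual QIs, per the rule in the text.
    """
    qis = [imp * (cur - exp)
           for imp, cur, exp in responses
           if None not in (imp, cur, exp)]
    return sum(qis) / len(qis) if qis else None

# Third student skipped one of the three questions, so only the
# first two students contribute: mean of -5 and -4.
responses = [(5, 4, 5), (4, 3, 4), (5, None, 5)]
print(course_qi(responses))  # → -4.5
```

When every student answers all three questions, averaging the constituent values first and then applying the formula gives the same result, which is why the figures' worked examples use averaged responses.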

Conclusion

To achieve the quality improvement objectives set by the Kano Model and QFD, two of the most prominent quality improvement methodologies, it is clear that students’ priorities, levels of expectation, and levels of satisfaction are key measures. Guided by these quality improvement methodologies, the QI approach can transform routine SETs into student assessments of teaching quality. The transformation process is possible by surveying students to obtain their perception of the importance, expected level of satisfaction, and current level of satisfaction for each attribute. The survey data can then be used to calculate a QI for each attribute. Having one measure that provides

Figure 3: Course Two—Compiled Student Assessment of Teaching Quality Indices

[Bar chart of the average QI for each of the 18 survey attributes (n = 51), sorted from -0.75 to 0.89. Worked example shown on the figure for “My instructor explains difficult material clearly”: importance 4.87, expected level of satisfaction 4.58, current level of satisfaction 4.42; Quality Index = 4.87*(4.42-4.58) = -0.75.]



the magnitude and priority of improvement for each attribute displayed graphically can simplify the assessment process and provide guidance for quality improvement efforts.

Although the QI shares many of the disadvantages of SETs, it does provide several advantages over the traditional student questionnaires used for teaching evaluation:

• The data obtained through surveying for the current level of satisfaction represents what would be surveyed in traditional teaching evaluations or questionnaires; however, traditional teaching evaluations do not include what the students expect, which is needed to quantify the magnitude of the gap between what was expected and what is delivered.

• The level of expectation can be used by itself to discern the students’ average expectation for each attribute. However, using the level of expectation to calculate the satisfaction gap is important for determining the magnitude of the needed improvement effort.

• Surveying for the level of importance reflects the relative importance as perceived by the students, not the faculty or administration. Although this measure can be used by itself to discern the relative importance of each surveyed attribute, using it to calculate the QI provides a single measure for each attribute and simplifies the process of prioritizing the different attributes for improvement.

• Using the levels of expectation and importance together can lead to a better understanding of the students’ perception of a specific attribute. For example, an attribute with high expectation and high importance could be a “must be” attribute on the Kano Model because the satisfaction gap is likely negative. An attribute with moderate or neutral expectation and high importance might be considered “one-dimensional,” because the satisfaction gap could be either positive or negative. An attribute with low expectation and high importance would be “attractive,” because the satisfaction gap will most likely be positive. Finally, low or moderate importance reflects student indifference, no matter what the satisfaction gap is, and such attributes should receive a low priority for improvement efforts.
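The Kano-style heuristic in the bullets above can be sketched as a simple classifier. The function name and the numeric thresholds (4.0 for "high," 2.5 for "low" on the 1-5 scales) are illustrative assumptions, not values from the article:

```python
def kano_category(expectation, importance, high=4.0, low=2.5):
    """Rough Kano classification of a surveyed attribute from its
    average expectation and importance (both on 1-5 scales).

    Thresholds are illustrative: `high` and above counts as "high,"
    below `low` as "low," anything between as "moderate/neutral."
    """
    if importance < high:
        return "indifferent"      # low or moderate importance: low priority
    if expectation >= high:
        return "must-be"          # high expectation, high importance
    if expectation < low:
        return "attractive"       # low expectation, high importance
    return "one-dimensional"      # moderate expectation, high importance

print(kano_category(4.6, 4.8))  # → must-be
print(kano_category(2.0, 4.5))  # → attractive
print(kano_category(3.0, 4.5))  # → one-dimensional
print(kano_category(4.5, 2.0))  # → indifferent
```

In practice the thresholds would be tuned per instrument; the point is only that the two surveyed levels jointly determine the Kano category and hence the improvement priority.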

Ultimately, obtaining the QIs at different intervals during a specific course or over a sequence of courses will provide instructors with information about areas that students perceive as needing improvement. Instructors can ask students and peers for feedback on these gaps and get more insights on ways to improve those attributes. In this way, QIs become much more useful to improvement efforts than traditional SETs.

References

1. National Academy of Engineering, “Developing Metrics for Assessing Engineering Instruction: What Gets Measured is What Gets Improved,” The National Academies Press, 2009.

2. P. Brackin and G. M. Rogers, “Assessment and Quality Improvement Process in Engineering and Engineering Education,” 29th ASEE/IEEE Frontiers in Education Conference, San Juan, Puerto Rico, Nov. 1999, session 11a1, pp. 21-25.

3. C. Chang and C. Chou, “Application of Quality Function Deployment in Improving Teaching Quality: A Case Study,” Journal of Education and Psychology, Vol. 2, No. 1, pp. 49-65.

4. S. Sahney, D. Kumar, and S. Banwet, “Enhancing Quality in Education: Application of Quality Function Deployment—An Industry Perspective,” Journal of Work Study, Vol. 52, No. 6, pp. 297-309.

5. P. Burden, “The Use of Ethos Indicators in Tertiary Education in Japan,” Journal of Assessment & Evaluation in Higher Education, Vol. 3, No. 3, pp. 315-327.

6. M. Svinicki and W. McKeachie, McKeachie’s Teaching Tips: Strategies, Research, and Theory for College and University Teachers, Wadsworth, 2010.

7. N. Kano, N. Seraku, F. Takahashi, and S. Tsuji, “Attractive Quality and Must-be Quality,” The Journal of the Japanese Society for Quality Control, Vol. 14, No. 2, pp. 39-48.

8. Lou Cohen, Quality Function Deployment, Addison-Wesley, 1995.

9. C. Berger, R. Blauth, D. Boger, C. Bolster, G. Burchill, W. DuMouchel, F. Pouliot, R. Richter, A. Rubinoff, D. Shen, M. Timko, and D. Walden, “Kano’s Methods for Understanding Customer-Defined Quality,” Center for Quality Management Journal, Vol. 2, No. 4, pp. 3-36.

10. M. Xie, T. N. Goh, and H. Wang, “A Study of Sensitivity of Customer Voice in QFD Analysis,” International Journal of Industrial Engineering, Vol. 5, No. 4, pp. 301-307.

11. A. Martins and E.M. Aspinwall, “Quality Function Deployment: An Empirical Study in the UK,” Total Quality Management, Vol. 12, No. 5, pp. 575-588.



12. E. Sauerwein, F. Bailom, K. Matzler, and H. Hinterhuber, “The Kano Model: How to Delight Your Customers,” International Working Seminar on Production Economics, Innsbruck Igls, Austria, Feb. 1996, pp. 313-327.

13. James S. Pounder, “Transformational Classroom Leadership: A Novel Approach to Evaluating Classroom Performance,” Journal of Assessment & Evaluation in Higher Education, Vol. 33, No. 3, pp. 233-243.

14. P. Ginns, M. Prosser, and S. Barrie, “Students’ Perceptions of Teaching Quality in Higher Education: The Perspective of Currently Enrolled Students,” Journal of Studies in Higher Education, Vol. 32, No. 5, pp. 603-615.

15. J. Campbell and W. C. Bozeman, “The Value of Student Ratings: Perceptions of Students, Teachers, and Administrators,” Community College Journal of Research and Practice, Vol. 32, No. 1, pp.13-24.

16. A. Ardalan, R. Ardalan, S. Coppage, and W. Crouch, “A Comparison of Student Feedback Obtained Through Paper-Based and Web-Based Surveys of Faculty Teaching,” British Journal of Educational Technology, Vol. 3, No. 6, pp. 1085-1101.

17. C. Kim, E. Damewood, and N. Hodge, “Professor Attitude: Its Effect on Teaching Evaluations,” Journal of Management Education, Vol. 24, No. 4, pp. 458-473.

18. G. Mohanty, J. Gretes, C. Flowers, B. Algozzine, and F. Spooner, “Multi-Method Evaluation of Instruction in Engineering Classes,” Journal of Personnel Evaluation in Education, Vol. 18, No. 2, pp. 139-151.

19. K. Rahman, “Learning From Your Business Lectures: Using Stepwise Regression to Understand Course Evaluation Data,” Journal of American Academy of Business, Vol. 19, No. 2, pp. 272-279.

20. K. Stark-Wroblewski, R. F. Ahlering, and F. M. Brill, “Toward a More Comprehensive Approach to Evaluating Teaching Effectiveness: Supplementing Student Evaluations of Teaching With Pre-Post Learning Measures,” Journal of Assessment & Evaluation in Higher Education, Vol. 32, No. 4, pp. 403-415.

21. R. A. Arreola, Developing a Comprehensive Faculty Evaluation System: A Guide to Designing, Building, and Operating Large Scale Faculty Evaluation Systems, Anker Publishing, 2006.

22. P. Seldin, Teaching Portfolio: A Practical Guide to Improved Performance and Promotion/Tenure Decisions, John Wiley, 2007.

23. L. N. Wood and A. Harding, “Can You Show You Are a Good Lecturer,” International Journal of Mathematical Education in Science and Technology, Vol. 38, No. 7, pp. 939-947.

24. R. Remedios and D. A. Lieberman, “I Liked Your Course Because You Taught Me Well: The Influence of Grades, Workload, Expectations, and Goals on Students’ Evaluations of Teaching,” British Educational Research Journal, Vol. 3, No. 1, pp. 91-115.

25. M. El-Sayed, K. Burke, C. Leise, and J. El-Sayed, “Assessing Service Quality for Continuous Improvement in Higher Education,” International Journal of Process Education, Vol. 2, No. 1, pp. 75-82.

Mohamed El-Sayed

Mohamed El-Sayed, Ph.D., P.E., is a professor of mechanical engineering at Kettering University. He is the director of the Hybrid Vehicles Integration Laboratory. He received his Ph.D. from Wayne State University, and his research interests include theoretical and applied mechanics, design optimization, quality, reliability, durability, and alternative energy. He also served over 20 years in the automotive industry. He is the lead for SAE quality tools Lean and Six Sigma activity. El-Sayed is an Accreditation Board for Engineering and Technology (ABET) Institute for the Development of Excellence in Assessment Leadership (IDEAL) Scholar. He can be contacted at [email protected].

Kathleen Burke

Kathleen Burke is an associate professor in the department of economics at the State University of New York at Cortland. She received her Ph.D. in economics from SUNY Stony Brook. Her research interests include salary and gender inequity issues in higher education as well as research in the quality of teaching and learning. She can be contacted at [email protected].


Monitoring Student Attrition to Enhance Quality

Mahsood Shah and Chenicheri Sid Nair

Findings from surveys of withdrawing students led this university to understand the key university and personal factors affecting retention and to implement new programs to generate substantial improvement.

Abstract

Student retention is one key performance indicator used in universities to track learning and teaching performance. Research indicates that student retention and success in higher education improves an individual’s chance of securing employment opportunities, achieving career goals, and contributing to society and the economic development of the nation. Some consider retention as a moral purpose of a university to improve student success—particularly for the most disadvantaged student groups.

This article outlines survey findings from a large public university in Australia with first-year undergraduate students who enrolled and later withdrew. The survey was conducted as a result of consistently high student attrition rates compared to other Australian universities. The university took the initiative to determine the reasons for student withdrawal and implemented a university-wide project to improve first-year student retention.

Introduction

Retention is a key performance indicator commonly used in higher education. The implications of poor student retention affect individuals, universities, governments, and broader society. From the students’ perspective, withdrawing without completing their studies can mean disadvantages in employment and career opportunities.

For universities, the effects are primarily financial. For example, for each international student in Australia failing to return after the first year of a three-year undergraduate program, the university loses approximately $24,000.1 Governments in many countries such as Australia, the United Kingdom, and the United States set participation targets for students in higher education—particularly students from various disadvantaged groups. High attrition rates can negatively affect governments in terms of developing the nation’s human capital and knowledge workers, building intellectual capacity, contributing to societal well-being, and




having skillful manpower to meet the changing needs of the industry and profession.2

In this article we define attrition as the number of students who have left their studies at the university in a given period. In Australia, the challenges of current skills shortages could multiply if high percentages of students withdrew from university education.

In general, government initiatives to improve retention are based on the recognition that high completion rates from higher education play significant roles in strengthening a nation’s economy and building intellectual capital. For example, a Mayo et al. study of U.S. colleges and high schools suggests that the government is concerned enough about retention rates for first- and second-year students that it is studying ways to use federal money to reward successful retention programs.3 Heinmann found students who withdrew from college had more adjustment difficulties, less development of creative potential, and problems related to social integration.4

Why Retention Matters

Australia needs to address student withdrawal in a timely manner for a number of reasons, including the consistent decline in public funding by the national government as well as the introduction of performance-driven funding based on various teaching quality indicators (e.g., retention rates). Performance-based funding is based on the Learning and Teaching Performance Fund (LTPF), which is extra funding provided by the national government.5 Australia is not alone in the move toward performance-based funding. The United Kingdom introduced a higher-education framework in November 2009 mandating that institutions unable to meet strategic needs can expect to see their funding reduced to provide resources for those who can.6

Student retention is one of the criteria for student success commonly recognized as relating to academic achievement.7 Improving retention in a performance-driven funding environment and improving the access, participation, and success of students from various equity groups is critical for university funding from governments, which are focusing on improving the access and participation of higher education for disadvantaged groups. These groups include students from low socio-economic backgrounds, those who are the first in their families to participate in university education, and those from non-English-speaking backgrounds.

Although governments are linking rewards based on outcome measures (e.g., retention), there is a lack of emphasis on funding the means, or the resourcing and infrastructure, to support various academic services for achieving high retention rates. Elite universities across the world have enjoyed high retention rates, possibly due to high admissions criteria, a well-developed and mature student support system, and learning infrastructure.

A study by Grebennikov and Skaines of a large multi-campus university in Australia with a large proportion of students from various disadvantaged groups suggests that inequity exists in funding universities that successfully retain students from various equity groups.7 These authors suggest that universities that are committed to providing access and participation for various disadvantaged groups face a dilemma: how to support these students who may be unprepared for university education. Australian media sources also report that government policies to reward universities on learning and teaching performance so far have placed greater focus on outcomes rather than inputs or means to achieve positive outcomes.8

Other factors influencing attrition include the rise of private higher education. This is especially true in Australia where fierce competition and increased choices allow students to move to other education providers that meet their changing needs.

Internationally, U.K. universities traditionally have enjoyed relatively high undergraduate retention and completion rates. Strategies such as high entry criteria, an established tutorial framework, and full student funding are contributing factors.9 A study by the Organization for Economic Cooperation and Development (OECD) ranked Japan, Denmark, the United Kingdom, Russia, and Germany as the top five countries for completion rates; Australia ranked tenth out of 27 countries.10

Retention efforts in Australia and other developed countries are gaining support as the student population becomes increasingly diverse and government focuses on increased participation, retention, and completion as well as funding to reward universities for high retention rates. For example, in Australia, a recent government policy



was designed to increase participation and completion rates for 25-to-34 year olds with bachelor’s degrees or above to 40% by 2025.11

Retention research in the last decade indicates that the retention issue covers both university and personal life factors.9,12 The issues include:

• Courses not capturing students’ interest.

• Approaches to learning and teaching that are too traditional.

• Students inadequately prepared for their studies.

• Underdeveloped study skills.

• A need for closer relationships with tutors to gain a sense of belonging.

• Students having many other pressures while studying.

• Incompatibility between students and their courses.

• Lack of commitment.

• Financial hardships.

• Poor academic progress.

Studies in Australia and the United States reinforce the personal and university-related issues as student retention factors.13,14,15,16,17 These studies, along with the findings from James, Krause, and Jennings, reinforce many factors outlined in the earlier studies while highlighting factors such as students needing a break from studies, changing career goals, students experiencing stress and anxiety about their studies, students disliking their studies, and universities not being what students expected.18

The case study described in this article involved a large, multi-campus Australian public university with a student population of more than 40,000.

This university faces a unique challenge in that a large proportion of incoming students come from diverse backgrounds, including those who are the first in their families to attend university and students from low socio-economic backgrounds who have low entrance scores and may be unprepared for higher education.

This university experienced one of the highest first-year attrition rates (25% in 2004) compared to other metropolitan universities. Based on research associating both personal and university factors with attrition, the university implemented an attrition survey in 2004. The survey aimed to determine the reasons for student withdrawal and to improve first-year retention.

The university implemented a university-wide retention project in 2005 as a direct result of the survey findings and the outcomes of other student surveys. The survey was repeated in 2008 to learn whether the reasons for student withdrawal were consistent with the 2004 results. Between 2005 and 2008, the university implemented a number of initiatives based on the feedback, aimed at improving student retention and satisfaction. By 2008, the university had seen improvement in first-year retention (75% in 2004, 77% in 2005, 80% in 2006, and 81% in 2008). During the same period, the university also improved student satisfaction as measured by the national Course Experience Questionnaire (CEQ) and internal student satisfaction surveys.

Methodology

The attrition survey was conducted in two separate years to ascertain possible issues relating to student dropout after the first year. The record of all withdrawing students was obtained from the Student Administration Office, and a database was created with basic demographic details. A telephone survey gathered feedback from all first-year undergraduate students who withdrew; only one follow-up call was made to non-respondents after the initial contact attempt. In 2004, the response rate was 45% (n = 496), and in 2008 the rate was 47% (n = 494). The response sample was representative of the profile of all withdrawing students in categories including gender, local/international students, and the various faculties.

Table 1: Profile of Withdrawing Students 2008

Category | Total Number of Withdrawing Students | Percent of Withdrawing Students | Percent of Students in This Category Who Responded
Gender
  Female | 906 | 59.6% | 33.8%
  Male | 614 | 40.3% | 30.7%
Origin
  Local students | 1,444 | 95.0% | 33.7%
  International students (onshore/offshore) | 76 | 5.0% | 9.2%

Table 1 shows the 2008 survey profile and response rates, a profile similar to the earlier survey. The internally developed survey included 32 items, enabling students to rate the relative importance of a range of factors that might have resulted in their withdrawal. Respondents rated each item on a three-point scale, with “1” as very important, “2” as moderately important, and “3” as not important at all. The importance rating presented in Table 2 is based on the percentage of students who rated the category “very important.”
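The importance rating described above reduces to a simple proportion. As a minimal sketch, assuming ratings are coded 1, 2, and 3 as on the survey (the response data below is hypothetical, not taken from the actual study):

```python
from collections import Counter

def importance_rating(responses):
    """Percent of respondents who rated an item '1' (very important) on the
    survey's three-point scale (1 = very, 2 = moderately, 3 = not important)."""
    counts = Counter(responses)
    return 100.0 * counts[1] / len(responses)

# Hypothetical ratings for a single survey item:
item = [1, 2, 1, 3, 1, 2, 2, 1, 3, 1]
print(importance_rating(item))  # 50.0
```

Each percentage in Table 2 is this statistic computed over that year's respondents for one survey item.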

Findings and Discussion

The top 10 student-reported reasons for withdrawing fell broadly into two categories: university and personal factors. The university has limited control over the personal factors (e.g., employment commitments, family pressure, and financial difficulty). Seven of the top reasons related directly to the university and three to students' personal issues. The results of the 2004 and 2008 surveys consistently show that the two leading factors are employment commitments and the course not being what students expected.

Table 2: Top 10 Reasons for First-Year Student Withdrawal

Year 1 (2004): Participants = 496 | Response rate = 45% | Attrition rate = 25%
Year 2 (2008): Participants = 494 | Response rate = 47% | Attrition rate = 19%

Rank | 2004 Factor | Percent | 2008 Factor | Percent
1 | Employment commitments | 32.3% | Employment commitments | 28.9%
2 | The course wasn't what students expected | 27.4% | The course wasn't what students expected | 28.2%
3 | Family pressures | 26.2% | You felt isolated* | 26.2%
4 | Financial difficulties | 25.1% | Family pressures | 26.1%
5 | The timetable made it difficult to attend classes | 24.3% | Staff did not give enough feedback or individual help* | 26.0%
6 | Staff did not give enough feedback or individual help* | 17.3% | Financial difficulties | 24.2%
7 | The teaching and learning methods were unmotivating | 17.2% | The timetable made it difficult to attend classes | 23.1%
8 | Staff were difficult to access | 17.0% | The teaching and learning methods were unmotivating | 18.7%
9 | You felt isolated* | 19.2% | Staff were difficult to access | 19.1%
10 | You had difficulties with enrollment | 16.5% | The expectations about what you had to do in assessment were unclear | 16.6%

*When the change in results between year one and year two exceeded 5%, the university considered the change significant from a practical perspective. A survey-to-survey change of approximately 5% can be detected by a statistical test of hypothesis given a sample size of roughly 500, percentages in the neighborhood of 20%, and a 95% level of confidence.

In analyzing the data, the two factors with significant variation between the two surveys were the insufficiency of staff feedback or individual help and unclear assessment expectations. These factors also were highlighted as areas needing improvement in other university surveys.19 The university's retention project specified six areas for improvement, as follows:

• Quality of student orientation.

• Accuracy and speed of enrollment, including online enrollment and fees invoicing.

• Provision of contact for students to resolve their administrative problems promptly.

• First-year student engagement in learning (easy access to information-technology resources, use of online learning, group projects, peer mentors).

• Clarity in what is expected from students, especially regarding assessment.

• Promotion and communication of support services and facilities.
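The footnote to Table 2 states that a survey-to-survey change of roughly 5% is detectable given samples of about 500, percentages near 20%, and 95% confidence. The sketch below, which is not taken from the original study's analysis, checks that claim with a minimum-detectable-difference calculation and applies an unpooled two-proportion z-test to the isolation factor (19.2% of 496 respondents in 2004 vs. 26.2% of 494 in 2008):

```python
from math import sqrt

# Minimum detectable difference at 95% confidence for two samples of 500
# with proportions near 20% (unpooled, back-of-envelope):
mde = 1.96 * sqrt(2 * 0.20 * 0.80 / 500)
print(round(mde, 3))  # 0.05 -> about 5 percentage points, matching the footnote

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference between two independent proportions,
    using the unpooled standard error."""
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p2 - p1) / se

# Isolation factor from Table 2: 19.2% (n=496, 2004) vs. 26.2% (n=494, 2008).
z = two_proportion_z(0.192, 496, 0.262, 494)
print(round(z, 2))  # 2.64 -> exceeds 1.96, significant at the 95% level
```

A pooled-variance z-test or a chi-square test on the underlying counts would give essentially the same conclusion for samples of this size.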

Table 3 shows the alignment between the top 10 reasons in the 2004 attrition survey and the retention initiatives implemented in 2005.

Most of the retention project strategies deployed were in line with research on how to improve student engagement and retention in higher education.17,20 These studies suggest focusing on accurate and consistent information about the course; course design that includes practice-oriented learning methods; quality teachers; accessible and responsive staff; efficient and responsive administrative and student support systems; relevant, consistent, and integrated assessment; clear management of expectations; and social and campus-life activities. In addition, Krause et al. recommend using a number of student engagement indicators that play a part in retention, including time spent on the campus, satisfaction with the subject of study, contacts with the teacher, student contribution to class questions and discussions, time spent working with peers outside the class, and use of online and discussion groups.21

Table 3: Alignment Between Reasons for Student Withdrawal and the Retention Project

Reason for Student Withdrawal (2004) | Alignment With the Retention Project Initiatives | Other University Initiatives as a Result of Student Survey Findings
The course wasn't what students expected | Clarity on what is expected from students, especially regarding assessment | Improved academic advice
The timetable made it difficult to attend classes | First-year student engagement in learning | Greater use of the online learning portal to provide students with a different mode of learning
Staff did not give enough feedback or individual help | Clarity on what is expected from students, especially regarding assessment, and provision of contact for students to resolve their administrative problems promptly | Improved student assessment across the university in areas such as assessment clarity, assessment methods, and timely and constructive feedback
The teaching and learning methods were unmotivating | First-year student engagement in learning | Embedded practicums and work-based learning in curricula
Staff were difficult to access | Provision of contact for students to resolve their administrative problems promptly | Review of student centers on each campus
You felt isolated | Quality of student orientation and first-year student engagement in learning | Peer mentoring and social events on the campus
You had difficulties with enrollment | Accuracy and speed of enrollment, including online enrollment and fees invoicing | Introduction of online enrollment

The trend data on first-year retention shows a gradual increase of approximately 6 percentage points since the first actions were initiated, from 75% in 2004 to 81% in 2008. This increase is important for a large multi-campus university with dispersed students and resources, and is noteworthy given the university's history of providing access and participation for students from underserved groups.

We contend that this increase resulted because the university took ownership of these issues and initiated a process in which actions were implemented as a result of student feedback. While retention increased, two key areas identified in Table 2 still were contributing to student withdrawal: staff not giving enough feedback or individual help and feelings of isolation. This suggests that the interventions addressed the needs of only a limited number of students. Also, some measures, such as improving assessment practices, may take time to implement fully across all campuses in a large university; their full effect is likely to appear in coming years.

Conclusion

It can be argued that retention is a moral purpose of a university. Universities and other education providers need to play an important role in the retention of students and their success in education, employment, and contribution to society. By listening to the voices of current and withdrawing students, the university is able to identify areas of good practice and areas needing improvement. Then it can prioritize actions to improve student retention and engagement in learning. Students' judgment of quality education is not based solely on what happens in a traditional classroom; it involves the total student experience, including course design, quality of teachers, relevant support services, learning infrastructure, information technology, enabled learning, and campus life. The need to address retention is documented in research; for example, Williford and Schaller report that the probability of success for students who are retained beyond the first year increases in each subsequent year.22

The government's focus on improving access and participation of students from disadvantaged groups and linking performance-based funding to retention and student access measures is inevitable in the current higher-education landscape. Economic sustainability and performance-based funding will depend increasingly on universities' continuous improvement efforts on outcome measures such as retention, progression, completions, and student satisfaction.

The case reported in this article describes a number of factors that affect retention. A systematic approach is needed to increase retention rates, including the following:

• Listening to the student voice on reasons for withdrawal.

• Embedding research findings in action plans on what engages students in productive learning that improves student retention and student engagement in learning.

• Engaging faculties and administrative units in monitoring retention patterns.

• Monitoring satisfaction among first-year students.

References

1. M. Shah, L. Grebennikov, and H. Singh, “Does Retention Matter? Improving Student Retention: A UWS Case Study,” Australian Association of Institutional Research Conference, Sydney, Australia, Nov. 2007.

2. G. Scott, M. Shah, L. Grebennikov, and H. Singh, “Improving Student Retention: A University of Western Sydney Case Study,” Journal of Institutional Research, 2008, pp. 9-28.

3. D. T. Mayo, M. M. Helms, and H. M. Codjoe, “Reasons to Remain in College: A Comparison of High School and College Students,” The International Journal of Education Management, 2004, pp. 360-367.

4. Allen Heinmann, “Similarities of Student’s Experience and Accuracy of Faculty and Staff Perceptions: Issues for Student Retention,” Annual Convention of the American Psychological Association, Toronto, Canada, Aug. 1984.

5. Commonwealth of Australia, “Review of Australian Higher Education: Backing Australia’s Future, policy paper,” http://www.backingaustraliasfuture.gov.au/policy.htm.

6. Department of Innovation and Business Skills, “Higher Ambition: The Future of Universities in a Knowledge Economy,” http://www.bis.gov.uk/policies/higher-ambitions.

7. L. Grebennikov and I. Skaines, “University of Western Sydney Students at Risk: Profile and Opportunities for Change,” Journal of Institutional Research, Nov. 2009, pp. 58-70.


8. The Australian, “Go8 Warning on Equity,” March 2010.

9. Jennifer Rowley, “Retention: Rhetoric or Realistic Agendas for the Future of Higher Education,” The International Journal of Educational Management, June 2003, pp. 248-253.

10. Organisation for Economic Cooperation and Development, “Education at a Glance: OECD Indicators,” http://www.oecd.org/document/9/0,3343,en_2649_39263238_41266761_1_1_1_1,00.html.

11. Commonwealth of Australia, “Transforming Australia’s Higher Education Systems,” http://www.deewr.gov.au/HigherEducation/Pages/TransformingAustraliasHESystem.aspx.

12. Mantz Yorke, Leaving Early: Undergraduate Non-Completion in Higher Education, Falmer Press, 1997, pp. 1-168.

13. Romero Jaloma, “First-Year Student Experiences in Community Colleges: Crossing Borders, Making Connections, and Developing Perceptions About Learning,” Annual Conference of the American Educational Research Association, San Francisco, CA, April 1995, pp. 1-20.

14. Laura Rendon, “Facilitating Retention and Transfer for First-Generation Students in Community Colleges,” New Mexico Institute Rural Community College Initiative, Espanola, NM, March 1995.

15. B. W. Pidcock, J. L. Fischer, and J. Munsch, “Family, Personality, and Social-Risk Factors Affecting the Retention Rate of First-Year Hispanic and Anglo College Students,” Adolescence, 2001, pp. 803-818.

16. D. L. Sydow and R. H. Sandel, “Making Student Retention an Institutional Priority,” Community College Journal of Research and Practice, 1998, pp. 635-644.

17. M. Long, F. Ferrier, and M. Heagney, “Stay, Play, or Give Away? Students Continuing, Changing, or Leaving University Study in First Year,” http://www.dest.gov.au/NR/rdonlyres/678FF919-3AD5-46C7-9F57-739841698A85/14398/final.pdf.

18. R. James, K. Krause, and C. Jennings, “The First Year Experience in Australian Universities: Findings From 1994-2009,” http://www.cshe.unimelb.edu.au/research/FYE_Report_1994_to_2009.pdf.

19. M. Shah and C. S. Nair, “Translating Student Voice Into Action: A Case Study at Two Australian Universities,” Australian Universities Quality Forum, Perth, Australia, July 2006, pp. 139-143.

20. Geoff Scott, “Accessing the Student Voice: Using CEQuery to Identify What Retains Students and Promotes Engagement in Productive Learning in Australian Higher Education,” http://www.dest.gov.au/sectors/higher_education/publications_resources/profiles/access_student_voice.htm.

21. K. Krause, R. Hartley, R. James, and C. Mclnnis, “The First Year Experience in Australian Universities: Findings From a Decade of National Studies,” http://www.dest.gov.au/NR/rdonlyres/1B0F1A03-E7BC-4BE4-B45C 735F95BC67CB/5885/FYEFinalReportforWebsiteMay06.pdf.

22. A. M. Williford and J. Y. Schaller, “All Retention All the Time: How Institutional Research Can Synthesize Information and Influence Retention Practices,” 45th Annual Forum of the Association for Institutional Research, San Diego, CA, June 2005.

Mahsood Shah Mahsood Shah is manager of quality and improvement at the University of Canberra, Australia. His key responsibilities include: strategic and operational planning, coordination of reviews across the university, leadership in quality and improvement, and coordination of surveys and stakeholder feedback. He is a Ph.D. candidate and his studies focus on the effectiveness of external quality audit of universities in Australia. Shah previously held roles in quality and planning at Think: Education Group, University of Western Sydney, and University of Queensland in Australia. Contact him via e-mail at [email protected].

Chenicheri Sid Nair Chenicheri Sid Nair is professor of higher education development at the Centre for the Advancement of Teaching and Learning (CATL), University of Western Australia. He was interim director and quality advisor (evaluations and research) at the Centre for Higher Education Quality (CHEQ) at Monash University, Australia. His research work lies in the areas of quality in the Australian higher education system, classroom and school environments, and the implementation of improvements from stakeholder feedback. He has extensive lecturing experience in the applied sciences in Canada, Singapore, and Australia. He is an international consultant in quality and evaluations in higher education. Currently, Nair is associate editor for the International Journal of Quality Assurance in Engineering and Technology Education. E-mail him at [email protected].


Call for Articles
Quality Approaches in Higher Education

The American Society for Quality’s Education Division has launched a new bi-annual, online, peer-reviewed journal called Quality Approaches in Higher Education. The editorial review team actively encourages authors to submit papers for upcoming issues.

The purpose of this publication is to engage the higher education community and the ASQ Education Division membership in a discussion on topics related to improving quality in higher education and identifying best practices in higher education and to expand the literature specific to quality in higher education topics. Quality Approaches in Higher Education welcomes faculty from two- and four-year institutions, including engineering colleges, business schools, and schools of education, to consider submitting articles for review.

The following types of articles fit the purview of Quality Approaches in Higher Education:

• Case studies on how to improve quality in a college or university.

• Conceptual articles discussing theories, models, and/or best practices related to quality in colleges and universities.

• Research articles reporting on survey findings such as a national survey on students’ attitudes toward confidence, success in college, social networking, student engagement, access and affordability, etc.

• Case studies or conceptual articles providing institutional perspective on process development and maintenance methodology at colleges or universities.

• Case studies or conceptual articles addressing issues such as the role of faculty and administrators in quality systems.

• Case studies, research studies, or conceptual articles focusing on accreditation issues.

• Case studies demonstrating best practices using the Baldrige Education Criteria for Performance Excellence, including experience and recommendations for successful implementation.

• Case studies, research studies, or conceptual articles on scholarship of teaching, enhancing student learning, learning outcomes assessment, student retention, best practices for using technology in the college classroom, etc.

In particular, we are looking for articles on the following topics: using assessments for continuous improvement and accreditation, showing how use of the Baldrige framework can increase student success, increasing engagement and quality of learning through lecture capture and other technologies, dealing with rising costs without jeopardizing learning, sponsoring programs for helping graduates gain employment, and merging research with practice (action inquiry).

Articles generally should contain between 2,500 and 3,000 words and can include up to four charts, tables, diagrams, illustrations, or photos of high resolution. For details, please check the “Author Guidelines” at http://www.asq.org/edu/2009/09/best-practices/author-guidelines.pdf.

Please send your submissions to Deborah Hopen, the editor, at [email protected].


Author Guidelines
Quality Approaches in Higher Education

Quality Approaches in Higher Education is peer reviewed and published online by the Education Division of the American Society for Quality (ASQ). The purpose of this publication is to engage the higher education community and the ASQ Education Division membership in a discussion of topics related to improving quality and identifying best practices in higher education and to expand the literature specific to quality in higher education topics. We will consider articles that have not been published previously and currently are not under consideration for publication elsewhere.

General Information

Articles in Quality Approaches in Higher Education generally should contain between 2,500 and 3,000 words and can include up to four charts, tables, diagrams, or other illustrations. Photos also are welcome, but they must be high resolution and in the format described later in the “Submission Format” section.

The following types of articles fit the purview of Quality Approaches in Higher Education:

• Case studies on how to improve quality in a college or university.

• Conceptual articles discussing theories, models, and/or best practices related to quality in colleges and universities.

• Research articles reporting on survey findings such as a national survey on students’ attitudes toward confidence, success in college, social networking, student engagement, access and affordability, etc.

• Case studies or conceptual articles providing institutional perspectives on process development and maintenance methodology at colleges or universities.

• Case studies or conceptual articles addressing issues such as the role of faculty and administrators in quality systems.

• Case studies, research studies, or conceptual articles focusing on accreditation issues.

• Case studies demonstrating best practices using the Baldrige Education Criteria for Performance Excellence, including experience and recommendations for successful implementation.

• Case studies, research studies, or conceptual articles on scholarship of teaching, enhancing student learning, learning outcomes, learning outcomes assessment, student retention, best practices for using technology in the college classroom, etc.

In particular, we are looking for articles on the following topics: using assessments for continuous improvement and accreditation, showing how use of the Baldrige framework can increase student success, increasing engagement and quality of learning through lecture capture and other technologies, dealing with rising costs without jeopardizing learning, new programs for helping graduates gain employment, and merging research with practice (action inquiry).

Manuscript Review Process

We log all article submissions into a database and delete all references to you. These “blinded” versions then go to the editorial review team for comments and recommendations. The review process takes approximately two months, during which time the reviewers advise the editor regarding the manuscript's suitability for the audience and/or make suggestions for improving the manuscript. Reviewers consider the following attributes:

1. Contribution to knowledge: Does the article present innovative or original ideas, concepts, or results that make a significant contribution to knowledge in the field of quality in higher education?

2. Significance to practitioners: Do the reported results have practical significance? Are they presented clearly in a fashion that will be understood and meaningful to the readers?

3. Conceptual rigor: Is the conceptual basis of the article (literature review, logical reasoning, hypothesis development, etc.) adequate?

4. Methodological rigor: Is the research methodology (research design, analytical or statistical methods, survey methodology, etc.) appropriate and applied correctly?

5. Conclusions and recommendations: Are the conclusions and recommendations for further research insightful, logical, and consistent with the research results?

6. Readability and clarity: Is the article well organized and presented in a clear and readable fashion?

7. Figures and tables: Are the figures and/or tables used appropriately to enhance the ability of the article to summarize information and to communicate methods, results, and conclusions?

8. Organization and style: Is the content of the article logically organized? Are technical materials (survey scales, extensive calculations, etc.) placed appropriately? Is the title representative of the article’s content?

9. Attributions: Are the sources cited properly? Are attributions indicated properly in the reference list?

You should use these attributes as a checklist when reviewing your manuscript prior to submission; this will improve its likelihood of acceptance.

There are three possible outcomes of the review process:

• Accept with standard editorial revisions. In this case, the content of the article is accepted without requiring any changes by you. As always, however, we reserve the right to edit the article for style.

• Accept with author revisions. An article in this category is suitable for publication but first requires changes by you, such as editing it to fit our length requirements. We provide specific feedback from our reviewers to guide the revision process. We also assign a tentative publication date, assuming you will submit the revised article by the deadline.

• Decline to publish. Occasionally articles are submitted that do not fit our editorial scope. In these situations, we may provide you with suggestions for modifying the article to make it more appropriate to our publication, but we do not assign a tentative publication date.

Please note that after articles are edited for publication, we return them to you to approve the technical content. A response may be required within 48 hours or the article may be held over for a subsequent issue.

Articles that appear to be advertising or don’t fit the general topics addressed by Quality Approaches in Higher Education will be rejected without receiving peer reviews.

Helpful Hints

1. Articles should emphasize application and implications.

• Use the early paragraphs to summarize the significance of the research.

• Make the opening interesting; use the opening and/or background to answer the “so what?” question.

• Spell out the practical implications for those involved in higher education.


2. Detailed technical description of the research methods is important, but not necessarily of interest to everyone.

3. Throughout the article, keep sentence structure and word choice clear and direct. For example, references should not distract from readability. As much as possible, limit references to one or two per key idea, using only the most recent or most widely accepted reference.

4. Avoid acronyms and jargon that are industry- or organization-specific. Try not to use variable names and other abbreviations that are specific to the research. Restrict the use of acronyms to those that most readers recognize. When acronyms are used, spell them out the first time they are used and indicate the acronym in parentheses.

5. Our reviewers and readers usually view articles that include reference to your proprietary products or methods as advertising. Although we encourage you to share personally developed theories and application approaches, we ask that you refrain from using our publication as a marketing tool. Please take great care when including information of this nature in your article.

6. If the article cites cost savings, cost avoidance, or cost-benefit ratios, or provides the results of statistical evaluations, include an explanation of the method of calculation, along with any underlying assumptions and/or analysis considerations.

7. When submitting an article that includes survey data, include the complete survey instrument. We may make the entire survey available online.

8. Our staff does not have the means to compile references or verify usage permissions; therefore, it is important for you to provide all that information with your article, including written letters of authorization when appropriate. Plagiarism is a rapidly growing crime, particularly due to the use of information from the Internet. Please help yourself, and us, to maintain professional integrity by investing the time necessary to verify your sources and to obtain and document all necessary permissions. Information on our requirements for documenting references, along with specific examples, is included at the end of these guidelines.

Submission Format

1. We accept only electronic submissions in Microsoft® Word® format. Send electronic copies of articles to [email protected]. Also please include an abstract of 150 words or less for each article. Include all of your contact information in a cover letter or your e-mail message.

2. Tables should be included at the end of the article and must be in Microsoft Word. Each table must be referenced in the article and labeled, such as “Table 1: Graduation Rate by Major.” Do not embed tables in .jpg, .tif, .gif, or other similar formats in your article.

3. Drawings and other illustrations should be sent in separate Microsoft® PowerPoint® or Microsoft Word files; each item should be included in a separate file. All drawings and other illustrations must be referenced in the article and must be labeled, such as “Figure 1: Pareto Analysis of Student Participation in Department Activities.” Please do not use other software to generate your drawings or illustrations. Also, please do not embed drawings or illustrations in .jpg, .tif, .gif, or other similar formats in your article.

4. We can use photos if they enhance the article’s content. If you choose to submit a photo with your article, it must be a high-resolution .jpg or .tif (at least 300 dpi and at least 4” by 6” in size). We cannot enlarge photos and maintain the required resolution. Photos should be sent in separate files and referenced in the article. Photos should be accompanied by a complete caption, including a left-to-right listing of people appearing in the photo, when applicable. Do not include any text with the photo file.

5. Also submit a separate high-resolution electronic photo (at least 300 dpi) for each author. Author photos should be at least 1” by 2”. Author photos should have a plain background, and the author should be facing toward the camera.

6. Please include a 75- to 100-word biography for each author, mentioning the place of employment, as well as including a telephone number, Web site, and/or e-mail address. If you have published books within the past five years, we encourage you to include the names of one or two books. We do not have space to mention articles, speech titles, etc.

Copyright Transfer

Prior to publication, you must sign a form affirming your work is original and is not an infringement of an existing copyright. Additionally, we ask you to transfer copyright to ASQ. The copyright transfer allows you to reproduce your article in specific ways, provided you request permission from ASQ and credit the copyright to ASQ. The transfer also allows ASQ to reproduce the work in other publications, on its Web site, etc.

If you use materials from other works in your articles (other than standard references), you must obtain written permission from the copyright owner (usually the publisher) to reprint each item of borrowed material. This includes any illustrations, tables, or substantial extracts (direct quotations) outside the realm of fair use. Submit these permission letters with the article. Articles cannot be published until copies of all permission letters are received.

For example, suppose an article includes a PDSA illustration from a book. The permission statement would read: Figure 1 is from Nancy R. Tague’s The Quality Toolbox, 2nd ed., ASQ Quality Press, 2005, page 391. This permission statement would appear in the caption just below the PDSA figure.

References

One of the most common errors we’ve observed with submitted articles is improper referencing. Two problems occur most frequently: information included without proper attribution in the references and formatting that does not meet our style requirements. The information in this section is intended to ensure your references adhere to our standards.

Quality Approaches in Higher Education uses its own reference style. All references should be consecutively numbered in the body of the text, using superscripts, and a matching number, also using superscripts, should appear in the references section at the end of the article. Do not include periods with the numbers or spaces preceding or following the numbers. If multiple references are associated with a particular citation, list each separately (do not show a range). For example, “The engineering department modified its program and created an integrated freshman curriculum2,3 to promote a comprehensive learning environment that includes significant attention to student communication skills.” Use a comma to separate the numbers, but do not include a space after the comma. Please do not use Microsoft Word endnotes or footnotes; also, please do not include citations in the body of the text, such as is used for APA style.


Examples

TYPE: Book, one author:
Jim Collins, Good to Great, Harper Collins, 2001, pp. 27-28.

Nancy R. Tague, The Quality Toolbox, 2nd ed., ASQ Quality Press, 2005.

TYPE: Book, two or more authors:
T.M. Kubiak and Donald W. Benbow, The Certified Six Sigma Black Belt Handbook, 2nd ed., ASQ Quality Press, 2009, pp. 69-70.

Sheri D. Sheppard, Kelly Macatangay, Anne Colby, and William M. Sullivan, Educating Engineers: Designing for the Future of the Field, Jossey-Bass, 2008.

TYPE: Magazine/journal article, one author:
Thomas Stewart, “Growth as a Process,” Harvard Business Review, June 2006, p. 62.

John Dew, “Quality Issues in Higher Education,” The Journal for Quality and Participation, April 2009, pp. 4-9.

TYPE: Magazine/journal article, two or more authors:
Mark C. Lee and John F. Newcomb, “Applying the Kano Methodology to Meet Customer Requirements: NASA’s Microgravity Science Program,” Quality Management Journal, April 1997, pp. 95-106.

Barbara K. Iverson, Ernest T. Pascarella, and Patrick T. Terenzini, “Informal Faculty-Student Contact and Commuter College Freshmen,” Research in Higher Education, June 1984, pp. 123-136.

TYPE: Magazine/journal article, no month or year given, only volume and number:
B. Radin, “The Government Performance and Results Act (GPRA): Hydra-Headed Monster or Flexible Management Tool?” Public Administration Review, Vol. 58, No. 4, pp. 307-316.

J.W. Alstete, “Why Accreditation Matters,” ASHE-ERIC Higher Education Report, Vol. 30, No. 4.

TYPE: Web site articles:
Joanne Petrini, “Social Responsibility and Sustainable Development—Interdependent Concepts,” www.ecolog.com/gartnerlee.news/3_article.htm.

Peter Ewell, “Accreditation and Student Learning Outcomes: A Proposed Point of Departure,” CHEA Occasional Paper, September 2001, http://www.chea.org/pdf/EwellSLO_Sept2001.pdf.

American Society for Quality, “No Boundaries: ASQ’s Future of Quality Study,” www.asq.org/quality-progress/2008/10/global-quality/futures-study.pdf, 2008.

TYPE: Conference proceedings:
Tito Conti, “Quality and Value: Convergence of Quality Management and Systems Thinking,” ASQ World Conference on Quality and Improvement Proceedings, Seattle, WA, May 2005, pp. 149-161.

Michal Stickel, “Impact of Lecturing with the Tablet PC on Students of Different Learning Styles,” 39th ASEE/IEEE Frontiers in Education Conference Proceedings, San Antonio, TX, October 2009, M2G-1-6.

Tips

• We use commas to separate segments of the reference information, not periods.

• Authors’ names always appear with the first name followed by the last name.

• The names of books, magazines, newsletters, and journals are italicized.


• Double quotation marks are used around the names of magazine, newsletter, and journal articles and conference proceedings’ titles. Punctuation marks fall inside the quotation marks in almost every case.

• It’s not necessary to include the city with the publisher’s name.

• When inserting the reference numbers in the body of the text and in front of the reference information in the list at the end of the article, use the “superscript” function in Microsoft Word.

• Do not use a period after the reference number or a space before or after it, as shown below:

Correct: Text in body of the article1

Correct: 1Reference information

Incorrect: Text in body of the article1.

Incorrect: 1.Reference information

Incorrect: Text in body of the article 1

Incorrect: 1. Reference information

Summary

Thank you for considering having your article published in Quality Approaches in Higher Education. We look forward to reviewing your manuscript. Please feel free to contact our editor, Deborah Hopen, at [email protected] if you have any additional questions.


Call for Reviewers
Quality Approaches in Higher Education

Can you think critically about what you read?

Are you able to express yourself clearly and concisely?

Do you have expertise in quality approaches for higher education?

Are you willing to volunteer your time to help improve your profession?

If you can answer “Yes” to each of these questions, then Quality Approaches in Higher Education invites you to become a member of its Review Board. As a reviewer, you will be expected to maintain a standard of high quality for articles published in this journal and help build its reputation for excellence.

To become a reviewer, please complete the application on the next page and send it with a copy of your curriculum vitae to Deborah Hopen at [email protected]. Your application will then be reviewed by the editorial team, and you will be notified in approximately 60 days if you have been accepted as a reviewer. Following acceptance to the Review Board, you will become part of the pool of reviewers available to evaluate articles submitted for publication. The frequency of your review assignments will depend on the number of articles submitted and the number of reviewers with the expertise needed to critically evaluate each article.

Once assigned to review an article, you will be e-mailed that article, which will have been “blinded” to remove information about the author(s) to assure your impartiality. Along with the article, you will be sent detailed review instructions and the review form itself. As you critically read the assigned article, your primary focus will be on the article’s content, not its style-related issues such as grammar, punctuation, and formatting. The editorial team is charged with assuring that all style-related issues are resolved in accordance with ASQ’s modified-AP style guide prior to publication. Your task is to provide ratings and detailed comments in nine content-related categories, plus an overall rating that reflects your recommendation for article disposition. You will be given approximately three weeks to return your completed review form for each article.

Article disposition will be determined by the editorial team based on input from the reviewers. In cases where a revision is recommended, detailed instructions will be provided to the author(s) using reviewer comments. Revised articles will be evaluated by the editorial team for compliance with the improvement recommendations, and one or more of the original reviewers may be asked for input as part of this process.

We look forward to receiving your application to become a reviewer.

The Editorial Team

Quality Approaches in Higher Education

Deborah Hopen, Editor


Call for Reviewers: Quality Approaches in Higher Education

NAME

ADDRESS

E-MAIL ADDRESS

PHONE NUMBER

EMPLOYER

JOB TITLE

DEPARTMENT

AREAS OF EXPERTISE
Select all that apply

Education

❐ Scholarship of Teaching

❐ Best Practices

❐ Enhancing Student Learning

❐ Technology in the Classroom

❐ Accreditation

Research Methodology

❐ Survey Design, Analysis, and Interpretation

❐ Research Statistics

❐ Organizational Research

Other (please specify)

Quality Assurance in Higher Education

❐ Baldrige Education Criteria for Performance Excellence

❐ Quality Improvement

❐ Quality Systems and Processes

❐ Theories, Models, and Best Quality Practices

❐ Measurement Systems

❐ Measuring/Assessing Learning and/or Learner Outcomes

❐ Measuring/Assessing Teaching

❐ Measuring Improvement

❐ Measuring/Assessing Institutional Performance

Applicant’s Curriculum Vitae must be included with this application. E-mail to Deborah Hopen at [email protected].

PERSPECTIVE
Select all that apply

❐ Faculty

❐ Administration

❐ Student Services

❐ Consultant to Higher Education

❐ Student

❐ Undergraduate Education

❐ Graduate Education

❐ International

❐ Multi-site Institution