
2005 IEEE International Professional Communication Conference Proceedings

0-7803-9028-8/05/$20.00 © 2005 IEEE.

The Engineering-Technical Writing Connection: A Rubric for Effective Communication

Shelley Thomas
Weber State University
[email protected]

Abstract

This article presents a rubric for evaluating student writing in a technical writing service course. Because technical writing service courses serve engineering students as well as students from a variety of majors, this rubric can help instructors start a dialogue about writing among many disciplines.

Keywords: assessment, engineering, technical writing service course, student writing

Introduction

The technical writing service course is designed to introduce students (who are generally not English or technical writing majors) to forms of writing that vary significantly from the “freshman composition essay.” This is not to say that traditional first-year composition does not have much in common with technical communication, nor do I want to imply that first-year composition ignores the rhetorical concerns presented in technical writing service courses. Rather, the point here is that first-year composition, with its emphasis on the traditional essay format, often seems an utterly different form of writing from what the technical writing service course asks students to produce. While a technical writing service course cannot be all things to all programs, it can improve its assessment of student writing by moving away from the portfolio method [1] to a method that addresses documents by genre and, more importantly, is supported by an underlying “minimum grading standards” rubric (see Appendix 1).

The typical genres addressed in technical writing service courses ([2], [3], [4], [5], and [6] among others) include job-search materials, technical descriptions and instructions, proposals, progress reports, and a longer, sustained report such as a feasibility study or formal recommendation report. Each of these document genres requires that the students display different skills, and these skills build upon one another as the students progress through the course.

The rubric I present seeks to provide a pedagogical basis for evaluating student writing. By presenting this rubric (perhaps labeled as “grading criteria”) to students at the beginning of the course, instructors make students aware of their level of expectations. Furthermore, instructors can use this guideline to make formative comments as they evaluate student papers in draft form and refer to it in summative comments as well. When evaluation becomes part of the students’ writing processes, students incorporate the known expectations and, in the long run, improve their documents.

The impetus for this rubric came from two sources: the first, an engineering professor who indicated that engineering students find the technical writing service course “redundant, inferior, or contradictory” to the writing requirements in his course; the second, the chair of an engineering program who wanted assistance assessing student writing in preparation for upcoming ABET accreditation. With these two communications coming within a semester of each other, my colleagues and I wanted to demonstrate areas of technical communication that complement student writing in engineering and simultaneously to explain the demographics of the technical writing service course.

Since the adoption of ABET’s Engineering Criteria 2000 (EC 2000), engineering programs, along with technical writing service courses, have moved to outcomes-based assessment practices in an effort to evaluate students’ “ability to communicate effectively” (EC 2000, Criterion 3g) [7]. ABET’s commitment to “sustain the change sparked nearly a decade ago” can be seen through its evaluation of the EC 2000 criteria. “Sustaining the Change” provides a status report on EC 2000 and marks the improvement in ABET accreditation [8]. One area of progress the report identifies is “increased faculty attention to student learning as a part of improving program quality” [8]. While the report does not specify the nature of the improved faculty involvement, it is safe to say that improved faculty/student relationships are an encouraging sign. Another “progress point” the report identifies is “measurable changes in engineering school culture toward continuous quality improvement and employer satisfaction with engineering graduates” [8]. This statement clearly demonstrates the level of industry involvement in shaping the assessment and curriculum of engineering programs.

Preparing students for work environments outside the university drives outcomes-based assessment. The move to outcomes-based assessment has often taken place with industry goals in mind ([9], [8]), with various pedagogical theories in mind ([10], [11], [12], [13]), and with technical writing service course goals in mind ([1], [14]), with each set of criteria developed in isolation. While these goals and theories often overlap and complement each other, failures in communication among the various disciplines served by technical writing courses lead to opportunities to raise the profile of technical writing within both academic and professional settings. This rubric attempts to connect the technical writing service course to the variety of students who take it.

Student Demographics

Students taking technical writing service courses usually come from a range of disciplines (engineering, business, both the social and hard sciences, and humanities). A survey of technical writing instructors, used to target the audience for Mike Markel’s Technical Communication, reported that the majority of their students came from engineering (66%), business (62%), computer science (51%), and the hard sciences (51%); this composite made up over 25% of the students in a “typical” technical writing service course [15]. In addition, I surveyed technical writing service courses at a large, land-grant university that offers an extensive engineering program and discovered over the course of four semesters that the average percentage of students from the engineering programs was 45.5%; the average percentage of students from business-related programs was 17.5%; the average percentage of students from agricultural sciences was 20.75%, with the remaining 16.25% of students coming from human environmental sciences and humanities. The data from this single university confirms the demographic research for Markel’s textbook and indicates that although the majority of our course demographic is engineering students, they cannot be our only focus.

Developing the Rubric

This broad audience base for technical writing service courses demonstrates a need for the course to assess student writing in a way that encompasses the multiple audiences and functions within various genres. To address the immediate concerns of the engineering faculty, I researched existing evaluation criteria at the national, university, and program levels to develop a rubric that effectively represents the student demographic. This rubric draws upon previously published sources, specifically Pappas and Hendricks’ “Holistic Grading in Science and Engineering” [16], an engineering, discipline-specific “Rubric for Evaluating Student Performance on Unit Operations Laboratory Written Reports” [17], and the “Grading Criteria” for a technical writing service course [18]. These sources have survived repeated tests and have proven successful in evaluating student writing. Furthermore, holistic assessment allows for evaluation “that is relevant to the workplace for which we are preparing our students, as well as efficient for faculty and fair to students” ([16], p. 403).

Holistic scoring evaluates student writing based on set standards, “rather than considering technical content and writing style (including grammar and mechanics) as separate entities” ([16], p. 403).

Presented the same year as Pappas and Hendricks’ research, Linda Driskill’s persuasive article, “Linking Industry Best Practices and EC3(g) Assessment in Engineering Communication,” argues against holistic scoring, stating that the writing elements (based on [19], pp. 298-299) the raters use to evaluate writing “have no specified relationship to audience or effectiveness” [20]. Pappas and Hendricks address this issue by applying workplace standards to their holistic grading guidelines; therefore, their guidelines “mandate clear and concise communications that meet the needs of a particular situation, rather than adhere to rules and conventions of academic English” ([16], p. 403). This mandate specifies situational awareness in student writing. Without an awareness of the rhetorical situation, the student writing fails to communicate its message to the intended audience.

The rubric I propose contains specific categories for audience, content, document design, graphics, and writing style. These categories help evaluators distinguish discrete characteristics of student writing. In addition, by using sources both inside and outside the engineering community, it provides guidelines that more accurately reflect the demographics of the technical writing service course and places more emphasis on rhetorical context than on grammar and mechanics.

Rubric Description

Because the rubric is based on a five-point scale, it closely reflects the standard A-F grading scale widely used in US universities. By following a familiar grading structure, instructors will be able to quickly adapt this assessment rubric for their courses. I began constructing the outcomes assessment rubric by examining descriptions of the highest-rated document (the “level 5”).

Audience, Content, Writing Style

Examining each source’s description of a “level 5” document, I noticed certain elements addressed in each. Each source provides an overall impression to assess the document as a whole (see Appendix 1), then describes a “level 5” document as one that demonstrates a sophisticated knowledge of audience, includes well-developed content, and uses a precise writing style. Clearly, because of the consistency with which the source documents describe these elements, audience, content, and writing style are key elements in evaluating a document’s effectiveness. But, in order for a document to be useful and reader-friendly, we must examine two more considerations: document design and graphics.

Document Design and Graphics

Two of the sources I used to construct the assessment rubric address a writer’s use of document design and graphics. The “Rubric for Evaluating Student Performance on Unit Operations Laboratory Written Reports” [17] describes successful document design by stating that it “uses multiple formatting and tool features (insert file, table, symbols, footnotes, headers), professional look, proud to show externally” [17]. This source emphasizes appropriate use of tools and appearance, but it does not describe the need for the document to be user-friendly (descriptive headings) and accessible (table of contents, page numbers). Pappas and Hendricks address the successful use of graphics, stating that “graphics [should be] highly informative, appropriately placed, clearly and uniformly designed and easy to interpret” [16]; however, they fail to emphasize that graphics need to be referenced within the text as well.

I argue that a highest-rated document should excel both in how the writer designs the document and in how the writer uses graphics within it. A well-designed document should exhibit a visual design that is accessible and appealing; headings should be uniform and consistently placed (according to hierarchy); type size and font should remain consistent (according to heading level and body text); white space should be used purposefully; and the document should have a clear navigational structure (table of contents, page numbers, sections, headings).

In addition, for a document to be rated at a “level 5,” graphics should be highly informative, appropriately placed, clearly and uniformly designed, and easy to interpret; the writer must successfully integrate graphics with the text by using clear references both accompanying the graphic and within the body text, citing source material as necessary. In order to avoid confusing the reader, the writer should also explain each graphic as it pertains to the appropriate section.

Integrated Elements

With each of these elements defined (general impression, audience, content, document design, graphics, and writing style), instructors may describe for students the components of a superior document and demystify “effective communication.” It is important to note the order in which these elements appear on the rubric. The “general impressions” category offers the reviewer a place to record her overall assessment of the document, and then allows her to examine each element systematically. By recording general impressions immediately after reading a document, the reviewer stays within the holistic framework. Then, she may re-examine the document to find specific reasons it receives a specific score. This rubric advocates a hierarchy of successful communication, which is why a description of “audience” comes well before a description of “writing style.” Because effective communication rests with the reader, a document that “meets and exceeds all standards” [18] and “contains excellent work overall” [16] will communicate its information to its audience without interference from the writer.

After integrating the elements of a highest-rated document (a “level 5”), developing the next consecutive assessment levels was considerably less complicated. The “level 4” document is also well written and produced, but it contains some minor errors in content, format, and writing style that are easily correctable. Audience considerations remain significant in the “level 4,” and the writers demonstrate a clear sense of their readers; however, lapses in content, document design, graphics, and writing style detract from the document’s overall effectiveness, though not in a way that impedes communication.

The “level 3” document occurs in the middle of the assessment scale; a document in this range exhibits some problems with regard to audience. The document generally fulfills its rhetorical purpose, but the writer may include tangents that are not relevant to the communication’s objective. Weak content may leave significant questions unanswered or may fail to address a significant source [18]. The document design and navigation are understandable for a reader, but the document contains some obvious inconsistencies or overlooked design details. The document may need to be redesigned to make the information accessible to its audience and to make it more appealing [18]. In addition, a “level 3” document contains graphics that do not clearly support the text or the topic. Finally, a “level 3” document exhibits a writing style with lapses in clarity that force the reader to interpret the writer’s meaning [16].

“Level 2” and “level 1” documents exhibit more severe problems with audience (clearly not reader-centered or showing no concept of the rhetorical situation) and content (significant omissions or completely off topic). In documents that fall into this assessment range, it is not surprising that the writer neglects document design and the effective use of graphics almost completely; design elements may appear casual or may be missing entirely, and graphics may be poorly designed, ornamental (e.g., clip art), or absent altogether. Furthermore, the writing in “level 2” and “level 1” documents demonstrates poor organization (general document structure, sections, and paragraphs), frequent errors (mechanical and punctuation) that interfere with reader comprehension, and serious stylistic flaws (grammar, spelling, and word choice) that render the document ineffective for its intended audience.

This rubric clearly identifies elements that, when combined, create an effective, substantive, and useful document.
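Although the paper treats the rubric as a qualitative grading guide, instructors who keep electronic grade records may find it convenient to encode the levels and categories. The sketch below is purely illustrative: the category names come from the rubric, but the Python representation, the RubricScore class, and the averaging used to produce a single overall number are my own assumptions rather than part of the published rubric.

from dataclasses import dataclass
from statistics import mean

# Categories in the order the rubric presents them (general impression first).
CATEGORIES = ["general", "audience", "content",
              "document_design", "graphics", "writing_style"]

@dataclass
class RubricScore:
    # One rater's 1-5 rating for each rubric category on a single document.
    ratings: dict

    def overall(self) -> float:
        # Assumed aggregation: the mean of the six category ratings.
        return mean(self.ratings[c] for c in CATEGORIES)

# Example: one rater's assessment of a hypothetical mid-range report.
score = RubricScore(ratings={"general": 3, "audience": 3, "content": 2,
                             "document_design": 3, "graphics": 2, "writing_style": 3})
print(round(score.overall(), 3))  # 2.667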

Testing the Rubric

Since this rubric was initially developed at the request of the engineering program to address instructional concerns and to help the program prepare for ABET accreditation, it was tested using a random sample (selected by the engineering department chair) of ten student reports from the senior-level design course. The assessment team consisted of technical writing graduate students (two Ph.D. candidates and one M.A. candidate) and me. (Julia Boyd, Lyn Gattis, and Parul Rai comprised the assessment team. Without their dedication to this project, this study would not have been possible.)

The first session allowed the team to familiarize itself with the rubric and included a briefing on holistic evaluation methods (see [19]). We always met as a team and read the reports together (each reader had a photocopy of the same report). Once we had read, evaluated, and scored a report, we would announce our scores and discuss the report’s elements. As these were rather lengthy documents, we limited ourselves to three or four reports per session; we needed to remain fresh and objective.

The results of these evaluation sessions revealed a median score of 2.8125, with a range of 2 to 4.357. The median score indicates that half of these writing samples exhibited problems with audience, content, document design, graphics, and writing style. Common rater comments for reports in the 2 to 3 range included: “No attempt to provide context for this information”; “Items and concepts are introduced, but they are not explained in sufficient detail”; “Stacked headings; sections need an overview”; “Graphics need explanation and citation”; “Many proofreading, grammatical, and mechanical errors; the reader must work to understand the text.”
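The paper reports these figures without showing the underlying calculation. As a minimal sketch, assuming each report received a single overall score (for example, the mean of the raters’ holistic ratings), the median and range could be computed as follows; the scores listed are hypothetical and are not the study’s data.

from statistics import median

# Hypothetical overall scores, one per report (illustrative only).
report_scores = [2.0, 2.25, 2.5, 2.75, 2.875,
                 3.0, 3.25, 3.5, 4.0, 4.357]

med = median(report_scores)                     # middle value of the sorted scores
low, high = min(report_scores), max(report_scores)
print(f"Median score: {med}")                   # 2.9375 for this made-up sample
print(f"Range: {low} to {high}")                # 2.0 to 4.357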

In contrast, the reports that fell within the 3 to 4+ range exhibited a better grasp of audience (“Discussion is reader-centered, uses overviews”), content appropriate to the situation (“Writers, for the most part, present the information logically and explain coherently the rationale behind their choices”), effective document design (“The document’s navigation system makes finding information fairly easy”), well-integrated graphics (“Graphics help explain the material and many times the writers interpret the graphics for the reader”), and a clear writing style (“Overall the report contains minor grammatical and mechanical errors that do not interfere with reader comprehension”). While it is expected that documents at each end of the rating scale would exhibit drastically different characteristics of effective communication, several of the reports across all levels of the rating scale exhibited many similar features.

At the last rating session, I asked the group members to offer trends they observed in the student writing. The assessment team observed several characteristics common across most of the reports. Some of the characteristics (both negative and positive) include the following:

• Little context for the information
• Back-to-back (stacked) headings
• Omitted conclusions
• Clear structure and organization
• Strong detail

These common characteristics demonstrate the students’ desire to present as much information as they have (as evidenced by the “strong detail” comments) and their ability to organize the information clearly (perhaps modeling their documents on research or on available sample documents). In order to make the information relevant to readers, students need to provide contextual information appropriate to the rhetorical situation, to use a clear navigation system (headings and forecast statements) to direct readers to important information, and to develop persuasive conclusions for each document they write.

Conclusion

Students often perceive evaluation of written communication as subjective. While there is an element of subjectivity in assessing student writing, certain aspects of effective communication can be categorized and evaluated objectively. The guidelines I argue for in this paper can help engineering students, as well as students from a variety of disciplines, identify qualities of effective communication and incorporate those qualities into their documents. Instructors, too, can use these guidelines when reviewing drafts to identify strengths and weaknesses in student writing. This rubric opened a dialogue between an engineering program and a technical writing service course, a dialogue that has led to a better understanding of each program’s needs and expectations. Technical writing service courses need to maintain this dialogue to keep pace with trends in industry and with trends in the programs they serve.

References

[1] N. W. Coppola, “Setting the Discourse Community: Tasks and Assessment for the New Technical Communication Service Course,” Technical Communication Quarterly, vol. 8, no. 3, pp. 249-267, 1999.

[2] P. V. Anderson, Technical Communication: A Reader-Centered Approach. 5th Ed. Boston, MA: Thomson/Heinle, 2003.


[3] K. W. Houp, T. E. Pearsall, E. Tebeaux, and S. Dragga, Reporting Technical Information. 10th Ed. New York: Oxford, 2002.

[4] J. M. Lannon, Technical Communication. 9th Ed. New York: Longman, 2003.

[5] M. M. Lay, B. J. Wahlstrom, C. D. Rude, C. L. Selfe, and J. Selzer, Technical Communication. 2nd Ed. Boston: McGraw-Hill, 2000.

[6] M. Markel, Technical Communication. 7th Ed. Boston: Bedford, 2004.

[7] Accreditation Board for Engineering and Technology. 2000. [Online]. Available: http://www.abet.org.

[8] “Sustaining the Change,” Accreditation Board for Engineering and Technology. 2004. [Online]. Available: http://www.abet.org/images/Misc/Sustaining%20the%20Change.pdf.

[9] D. Winsor, “Learning to Do Knowledge Work in Systems of Distributed Cognition,” Journal of Business and Technical Communication, vol. 15, no. 1, pp. 5-29, 2001.

[10] P. Miller, J. Bausser, and A. Fentiman, “Responding to Technical Writing in an Introductory Engineering Class: The Role of Genre and Discipline,” Technical Communication Quarterly, vol. 7, no. 4, pp. 443-461, 1998.

[11] J. M. Williams, “Transformations in Technical Communication Pedagogy: Engineering, Writing, and the ABET Engineering Criteria 2000,” Technical Communication Quarterly, vol. 10, no. 2, pp. 149-167, 2001.

[12] S. Smith, “What is ‘Good’ Technical Communication? A Comparison of the Standards of Writing and Engineering Instructors,” Technical Communication Quarterly, vol. 12, no. 1, pp. 7-24, 2003.

[13] G. W. Brinkman and T. M. van der Geest, “Assessment of Communication Competencies in Engineering Design Programs,” Technical Communication Quarterly, vol. 12, no. 1, pp. 67-81, 2003.

[14] K. Cargile Cook, “Layered Literacies: A Theoretical Frame for Technical Communication Pedagogy,” Technical Communication Quarterly, vol. 11, no. 1, pp. 5-29, 2002.

[15] Bedford/St. Martin’s, “Majors in Survey Course-Based on Questionnaire Responses from 73 Instructors,” Unpublished raw data, 2004. (I am grateful to Sara Eaton Gaunt for providing me this information.)

[16] E. C. Pappas and R. W. Hendricks, “Holistic Grading in Science and Engineering,” Journal of Engineering Education, pp. 403-408, 2000.

[17] R. Reinhart, “Rubric for Evaluating Student Performance on Unit Operations Laboratory Written Reports: OSU School of Chemical Engineering,” Unpublished document, 2002.

[18] “ENGL 3323 2002-2003 Intermediate Technical Writing Course Packet,” pp. 11-12, 2003.

[19] E. M. White, Teaching and Assessing Writing, San Francisco: Jossey-Bass, 1994.

[20] L. Driskill, “Linking Industry Best Practices and EC3(g) Assessment in Engineering Communication,” American Society for Engineering Education Conference, June 2000. [Online]. Available: http://www.ruf.rice.edu/~engicomm/public/Driskill.html

About the Author

Shelley Thomas is an assistant professor at Weber State University in Ogden, UT. She has taught technical writing service courses since 1994.


Appendix 1. Assessment Rubric

Level 5
General: Meets and exceeds all standards [18]; completely accomplishes the goals of the assignment [17] -- Excellent work overall [16]
Audience: Conveys superior understanding of audience [18]
Content: Detailed content; well-supported conclusions; the report is easy to read and highly organized; it exhibits a clear sense of unity and purpose (adapted from [16])
Document Design: Accessible and appealing visual design [18]
Graphics: Highly informative graphics, appropriately placed, clearly and uniformly designed, easy to interpret [18]
Writing Style: Grammatical errors are rare and the writing style is clear, concise, and confident (adapted from [18])

Level 4
General: Presents content clearly and displays a firm grasp of the technical material, but without the sharp focus and perspective of a “5” paper [16]
Audience: The document is well-written and produced, and it exhibits a solid understanding of audience, purpose, and situation [18]
Content: Technical material is presented logically with perhaps a few minor lapses in clarity and transition, but the document is still well organized, thoughtfully conceived, and avoids generalizations on the topic [16]
Document Design: Minor flaws in format (easily correctable) [18]
Graphics: Graphics are informative, uniform, intelligible, and support the content of the report [16]
Writing Style: No major grammatical errors; some minor grammatical errors, but none that disrupt the style and easy reading of the report [16]

Level 3
General: A competent document that meets the standards adequately but may contain several flaws in concept development, details, structure, grammar, design, or accuracy [18]
Audience: Some irrelevant tangents distract the reader [17]
Content: Treatment of the topic may be general and lack supporting detail [16]; may require further development (adapted from [18])
Document Design: May need to be redesigned so that the information is more accessible and appealing (adapted from [18])
Graphics: Graphics may not clearly support objectives as in a “5” or “4” report or may be ornamental [16]
Writing Style: Some major grammatical errors or frequent and annoying minor grammatical errors [16]; writing style may be uneven; reading may be slow or confusing at times [16]

Level 2
General: A marginally acceptable document that forces the reader to do too much work to understand or read the document because of serious problems in the document [18]
Audience: Not reader-centered; the audience for this report is not clear (adapted from [18])
Content: Little or no perspective or detail on topic except sweeping generalizations derived from others’ work [16]
Document Design: Inconsistent document design and layout -- difficult for user to navigate and locate information (adapted from [16] and [18])
Graphics: Poorly designed graphics, absent, ornamental, or offer no support to the content of the report [16]
Writing Style: Poor writing style, frequent major and minor grammatical errors that interfere with reading; poor organization (adapted from [17] and [18])

Level 1
General: An unacceptable document that does not address the assignment [18]
Audience: Reflects lack of understanding of topic and of audience (adapted from [17] and [18])
Content: May be completely off topic or lack identifiable focus [16]
Document Design: Lacks usable document design and layout (adapted from [16] and [18])
Graphics: Graphics may be absent, poorly designed, irrelevant, or unintelligible [16]
Writing Style: Excessive errors in structure, organization, grammar, or mechanics that seriously interfere with reading the document (adapted from [16] and [17])
