
Web-based adaptive self-assessment in Higher Education

Catherine Marinagi. Department of Logistics, Technological Educational Institute of Chalkis, Thiva, GR-32200, Greece

This paper presents an introduction to student assessment used with the aim of supporting and improving learning. It presents computer-based assessment, adaptive assessment and Web-based assessment methodologies. The development of new digital technologies has facilitated the implementation of many Web-based assessment tools. Of primary concern is Web-based self-assessment, which allows students to practice and measure their competence level. It has been widely used in open and distance education, but it can also improve learning in traditional classes. Adaptive features are usually included in Web-based self-assessment tools in order to adjust the assessment process to individual needs. The use of post-test feedback can facilitate learning self-management and improve student performance. W-PARES is a Web-based platform for automated adaptive student assessment that has been successfully used by Greek students in Higher Education. The paper briefly presents the architectural design and the four subsystems of W-PARES, each of which is an independent Web application.

Keywords: adaptive assessment; student self-assessment; Web-based assessment; e-assessment

1. Introduction

Nowadays, the increasing availability of the Internet has generated great interest in the educational community in developing Web-based assessment and self-assessment tools, not only to support distance learning but also to improve learning in traditional classes. Self-assessment that provides feedback facilitates the active involvement of students in the learning process and helps students gradually improve their knowledge of each subject. Adaptive self-assessment enables tailoring tests to a student's preferences and performance level. A Web-based self-assessment tool enables students to access self-assessment tests using a browser installed on their home computers. This paper starts by introducing the reader to assessment issues, on the premise that assessment is used as a tool for learning. The benefits of computerized assessment, adaptive assessment and Web-based assessment are discussed. Of significant importance are Web-based adaptive self-assessment systems, which are used to support learning and thus improve students' performance. In the following, the architecture and functionality of W-PARES [1] are described. W-PARES is a Web-based platform for automated adaptive student assessment, which has been successfully used as a Web-based adaptive self-assessment tool by first-year students during three academic years at the Department of Logistics of the Higher Technological Educational Institute of Chalkis, Greece. The remainder of the paper is organized as follows. Section 2 presents a brief literature review on assessment issues. Section 3 briefly describes the architectural design of the W-PARES platform. Section 4 describes the four W-PARES subsystems. Finally, Section 5 presents the concluding remarks.

2. Background

2.1 Assessment for Learning

In the past, assessment was a means to determine to what extent students had reached the learning objectives. Testing was traditionally used to qualify a student's current state of competence. However, testing was usually separated from instruction and occurred at the end point of learning rather than within the regular flow of learning. Today, the role of assessment in education has changed. Assessment is now regarded more as a tool for learning than as a tool for grading. The new assessment culture proposes the integration of assessment with instruction, in order to align learning and instruction more closely with assessment [2]. Willis [3] argues that "Assessment for Learning differs from Assessment of learning not only in its timing and its purpose but also in the ownership of the learning where the student voice is heard in judging quality". The benefits of Assessment for Learning practice include increased student motivation and autonomy in monitoring and planning their own learning process. Recently, Carless, Joughin and Mok [4] introduced the term 'learning-oriented assessment' to describe an approach which seeks to bring to the foreground those aspects of assessment that encourage or support students' learning. Learning-oriented assessment is based on three principles: assessment tasks, active student participation and 'feedback as feed-forward'. In particular, the first principle proposes engaging students in processes that treat assessment tasks as learning tasks. The second principle proposes students' participation in evaluating their own and their peers' work. Finally, the third principle proposes the use of feedback to improve learning. The distinction between two kinds of assessment, summative and formative, is worthy of increased attention. Summative assessments are given periodically to determine a student's level at a particular point in time, such as at the end of a chapter or at the end of a semester.
A summative assessment includes a grade against an expected standard. Formative assessment informs teachers and students about a student's knowledge and skills during the course of a unit of study, when adjustments to teaching and learning can still be made. For further understanding of formative and summative assessment, the reader could study the report by Wiliam and Black [5]. The use of formative assessment in instruction has been related to the pedagogical purpose of Assessment for Learning. However, much established formative assessment practice tends to be too close to summative assessment [6]. When used properly, formative assessment has the greatest effect on the weakest learners, even greater than the effect of prior learning [6]. In the following, the impact of formative assessment and feedback on learning is discussed again for the case where computers and the Internet are used to deliver tests.

2.2 Computerized assessment

The Joint Information Systems Committee (JISC) [7] differentiates the terms Computer-based assessment (CBA) [8] and Computer-assisted assessment (CAA) [9, 10], even though they are often used as synonyms. CBA refers to assessments delivered and marked by computer, while CAA refers to practice that relies in part on computers. CBA and CAA use computerized testing for assessing students' knowledge level, thus reducing the scoring workload. More specifically, instead of receiving a test with a fixed set of items, i.e. questions, students receive a set of items randomly selected from an Item Bank. To ensure that a different test is provided for each student, the Item Bank should contain an adequate number of items. The benefits over traditional Paper-and-Pencil Tests include unbiased, accurate and immediate scoring, discouraging plagiarism and attracting students' interest [11]. Individual differences in students' computer skills may affect the equivalence of CBA and traditional Paper-and-Pencil tests [12]. However, students' familiarity with computer technology has increased dramatically. Moreover, old-fashioned written tests are inappropriate for students with learning difficulties, students from foreign countries, or students with disabilities. New trends include the design of personalized learning environments for the inclusion of all students in the mainstream class [13]. An implemented personalized exam platform for students with special needs is presented in [14]. This platform enables the customization of hardware and software modules to the special needs of each student. For example, a student with reading problems can use acoustic material. Students with dyslexia can use customized software to minimize the typing procedure, while a student with intellectual disabilities can use video content to increase comprehension [14].
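The random item selection described above can be illustrated with a short sketch; the function name and item-bank layout are illustrative assumptions, not taken from any of the cited systems:

```python
import random

def assemble_test(item_bank, num_items, seed=None):
    """Randomly draw a fixed-size test from an item bank.

    Each student gets an independent draw, so with a sufficiently
    large bank two students are unlikely to receive the same test.
    """
    if num_items > len(item_bank):
        raise ValueError("item bank too small for the requested test length")
    rng = random.Random(seed)
    return rng.sample(item_bank, num_items)

# Toy bank of ten items; two students each get an independent 5-item draw.
bank = [f"item-{i}" for i in range(10)]
test_a = assemble_test(bank, 5, seed=1)
test_b = assemble_test(bank, 5, seed=2)
```

The per-student seed stands in for whatever source of randomness a real system would use; the point is only that each draw is a distinct sample without repeated items.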

2.3 Adaptive assessment

An interesting assessment methodology is often referred to as Computer Adaptive Assessment or Computer Adaptive Testing (CAT) [15]. CAT uses computers to administer adaptive tests, i.e. tests that are adapted to the knowledge level of individual students. Test items are administered one at a time, such that the selection of an item depends on the student's responses to previous items. In particular, the adaptive testing engine evaluates each response and determines the appropriate level of difficulty for the next question. The next test item is randomly selected from a subset of the Item Bank that includes items at the determined difficulty level. In this way, the selection process eliminates questions that are too easy or too difficult. The items in the Item Bank are required to be organized by level of difficulty, so that a single Bank can be used to generate tests of different but controlled difficulty. CAT retains the advantages of CBA, but it offers additional benefits that make CAT a very efficient and effective means of assessment. These benefits include: accurate and reliable estimation of student competence, reduction of testing time, and avoidance of easy/difficult items that may cause boredom or stress. It is recognized that CAT may facilitate the integration of assessment with learning. CAT has been used as a diagnostic module in personalized e-learning environments such as KOD [16], INSPIRE [17] and PEL-IRT [18].
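A minimal sketch of the adaptive loop described above follows; the simple step-up/step-down difficulty rule and the data layout are illustrative assumptions, not the item-selection algorithm of any cited CAT system (which typically relies on item response theory):

```python
import random

def run_adaptive_test(bank_by_level, answer_fn, num_items, start_level=1):
    """Administer items one at a time, stepping difficulty up after a
    correct response and down after an incorrect one.

    bank_by_level: dict mapping difficulty level (0 = easiest) to a
    list of (item, correct_answer) pairs.
    answer_fn: callable simulating the student's answer to an item.
    """
    rng = random.Random(0)
    level = start_level
    max_level = max(bank_by_level)
    history = []
    for _ in range(num_items):
        item, correct = rng.choice(bank_by_level[level])
        is_correct = answer_fn(item) == correct
        history.append((item, level, is_correct))
        # Move toward the student's apparent ability, clamped to the bank.
        level = min(level + 1, max_level) if is_correct else max(level - 1, 0)
    return history

# A student who always answers correctly climbs to the hardest level.
bank = {lvl: [(f"q{lvl}-{i}", "a") for i in range(3)] for lvl in range(4)}
trace = run_adaptive_test(bank, lambda item: "a", num_items=5, start_level=1)
levels = [lvl for _, lvl, _ in trace]  # 1, 2, 3, 3, 3
```

This captures the two properties the text emphasizes: the next item depends on the previous response, and items far from the student's level are never presented.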

2.4 Web-based assessment and self-assessment

E-assessment is defined by the Joint Information Systems Committee (JISC) [7] as "The end-to-end electronic assessment processes where Internet Communication Technology (ICT) is used for the presentation of assessment activities and the recording of responses. This includes the end-to-end assessment process from the perspective of learners, tutors, learning establishments, awarding bodies and regulators, and the general public". Gómez et al. [19] investigated how technologies can support learning-oriented assessment, introducing a new perspective on Carless, Joughin and Mok's learning-oriented assessment approach [4], called "e-learning oriented e-assessment". However, faculty members often resist adopting e-assessment. Interesting studies [20-22] have documented different operational and cultural barriers to the adoption of e-assessment systems by faculty members in higher education. Operational barriers include limited functionality and lack of equipment, resources and training. Cultural barriers include fear of e-assessment failure, fear of losing academic freedom and reluctance to spend time on activities that are not valued by the academy. Improved security of Internet protocols has led to the implementation of many Web-based assessment tools. Among the commercial tools are Questionmark Perception [23], CATSim [24], FastTEST Pro 2.0 [24] and FastTEST Web [24]. Many academic tools have also been implemented, such as SIETTE [25], AulaWeb [26], CosyQTI [27] and GAM-WATA [28]. Many of the implemented tools employ CAT techniques to tailor tests to the current state of an individual student. An adaptive assessment tool may be used as an independent assessment tool or be incorporated into an adaptive e-learning environment. For example, TAPLI [29], an adaptive Web-based learning environment for Linear Programming, incorporates SIETTE [25], a Web-based assessment tool that implements CAT. Another example is the TRIADS software [30], which was used to devise a formative assessment in a Web-based virtual learning environment (VLE) applied to the MSc programme in Road Management and Engineering. ELSA is another adaptive hypermedia system, which includes a module that generates CAT tests [31]. Other e-learning environments include an integrated diagnostic module to determine when the learner has mastered a concept. For example, a Question & Test Toolkit is included in the KOD prototype [16]. Wang [32] reports that researchers [28, 30, 33, 34] have begun to investigate the role of Web-based assessment in e-learning environments. The results show the positive influence of Web-based assessment on e-learning effectiveness. In particular, Wang [32] states that "in general, if an e-Learning environment includes a Web-based assessment, the learning effectiveness of learners in the e-Learning environment will be improved. Moreover, if a Web-based assessment incorporates a variety of strategies and provides learners with a greater amount of feedback, it can more effectively facilitate learning". Moreover, the role of feedback in e-learning effectiveness is of significant importance. The study reported by Cassady and Gridley [35] indicates a small benefit from using online formative practice tests prior to graded course exams. These practice tests were used to prepare students, providing immediate post-test feedback. Of particular interest is Web-based self-assessment aimed at student improvement.
Note that the employment of self-assessment for student improvement has been pursued lately, but not conclusively [26]. Furthermore, Guzmán et al. [36] reported encouraging results regarding student improvement through self-assessment, albeit without feedback. Their Web-based 'Self-Assessment' tool is integrated into an e-learning environment. Marinagi et al. [37] and Kaburlasos et al. [38] give evidence that allowing students to take self-assessment tests with feedback before scheduled tests increases student throughput, i.e. the percentage of students who pass the final written exam. Peat and Franklin [33] adopted 'Adobe Macromedia Authorware' to develop a Self-Assessment Module (SAM) and integrated it into their e-learning environment, called the Virtual Learning Environment. SAM was available to the first-year biology class at the University of Sydney. They also found that self-assessment could help improve students' learning effectiveness. In [32], Wang presents a Web-based dynamic assessment system, GPAM-WATA, which can be used for self-assessment. When a student gives an incorrect answer, the system provides progressively more specific feedback, in order to give the student more opportunities to find the correct answer step by step. GPAM-WATA improves e-learning effectiveness compared with normal Web-based tests.
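The idea of progressively more specific feedback can be sketched as follows; the hint policy and data layout here are illustrative assumptions, not GPAM-WATA's actual mechanism:

```python
def progressive_feedback(hints, check, answers):
    """Reveal one more specific hint after each incorrect attempt,
    stopping as soon as the student answers correctly.

    hints: ordered from least to most specific.
    check: predicate that returns True for a correct answer.
    answers: the student's successive attempts.
    """
    transcript = []
    for attempt, answer in enumerate(answers):
        if check(answer):
            transcript.append((answer, "correct", None))
            break
        # Later attempts unlock later (more specific) hints.
        hint = hints[min(attempt, len(hints) - 1)]
        transcript.append((answer, "incorrect", hint))
    return transcript

# Hypothetical question whose correct answer is 32.
hints = ["Re-read the definition.",
         "The answer is a power of two.",
         "It is between 30 and 35."]
log = progressive_feedback(hints, lambda a: a == 32, [16, 64, 32])
```

Each wrong attempt narrows the search space for the student, which is the "step by step" opportunity the text describes.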

3. W-PARES architectural design

W-PARES is a Web-based software tool. It is based on PARES, a 2-tier client/server platform for adaptive student assessment [37, 38], which has been successfully used for the formative assessment and self-assessment of students of the Department of Industrial Informatics of the Higher Technological Educational Institute of Kavala, Greece, during three academic years (2003-2006). W-PARES has also been successfully used as a Web-based adaptive self-assessment tool during three academic years (2007-2010) at the Department of Logistics of the Higher Technological Educational Institute of Chalkis, Greece. The theoretical basis of the adaptive assessment method implemented by the PARES system has been reported in detail in [38]. The technical details of the W-PARES architecture, as well as the motivation for moving from a 2-tier to a 3-tier architecture, are documented in [1]. Figure 1 illustrates the 3-tier architecture of the W-PARES platform. Four subsystems have been developed, which are independent Web applications installed on the BEA WebLogic Application Server [39]. The subsystems are accessed by three types of users: administrators, course instructors and students. Through a classical Web browser (Presentation tier), users communicate with the corresponding Web application on the Application Server, where dynamic Web content is created using JSP/Java (Application tier). The system database is located on the DBMS Oracle Database Server 10g Enterprise Edition (Data tier). The database contains three repositories: i) the Users Bank, which keeps records of users, courses and semesters, ii) the Item Bank, which stores items in a hierarchical structure, and iii) the Test Data Bank, which stores test data.

609©FORMATEX 2011

Education in a technological world: communicating current and emerging research and technological efforts A. Méndez-Vilas (Ed.)_______________________________________________________________________________________


[Figure 1: three-tier architecture. Administrator, instructor and student client PCs run Web browsers (Presentation tier) and connect over the Internet to the Administrator, Instructor, Student Assessment and Student Self-Assessment subsystems on the BEA WebLogic Application Server (Application tier), which accesses the Users Bank, Item Bank and Test Data Bank in the Oracle Database (Data tier).]

Fig 1. Architecture of W-PARES.

4. W-PARES subsystems

4.1 Administrator subsystem

Administrators use Web browsers to access the Administrator subsystem. In particular, this subsystem enables the creation of new user accounts (for instructors and students), in order to grant 'access privileges' to instructors and students. The administrator also uses the subsystem to modify user accounts, to manage courses and semesters, and to assign semesters and courses to students and instructors. The administrator also uses this subsystem to import/export lists of users from/to a file. The user interface of the Administrator subsystem includes Activity buttons to facilitate the administrator's work.

4.2 Instructor subsystem

Course instructors use Web browsers to access the Instructor subsystem. Instructors connect to the system and choose a course. Then instructors can organize or update the Item Bank, compose tests, publish tests on students' screens and trigger automatic grading. The Instructor subsystem contains a user-friendly authoring environment that enables interaction with instructors. Figure 2 depicts a screen displayed during item updating. The screen is divided into two parts. On the left, there is a snapshot of the tree structure of the Item Bank built for the 'Introduction to Computer Science' course of the first semester. On the right, a multiple-choice Question with its possible Answers is depicted. The author can insert another possible Answer or modify existing Answers. For each Answer, a Comment is supplied, which gives feedback to students at the end of self-assessment tests. The Comment provides documentation on the particular answer. It may be a short paragraph giving basic information, and/or links to URL addresses containing educational material. An image can also be inserted to accompany the text of a question or the text of an answer.
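The item structure just described, i.e. a question with several answers, a per-answer feedback comment and an optional image, can be sketched as follows; the field names are illustrative, not W-PARES's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Answer:
    text: str
    correct: bool = False
    # Post-test feedback: a short explanation and/or links to material.
    comment: str = ""

@dataclass
class Item:
    question: str
    answers: list = field(default_factory=list)
    image: str = ""  # optional accompanying image file

# Hypothetical multiple-choice item for an introductory CS course.
item = Item(
    question="Which component executes program instructions?",
    answers=[
        Answer("CPU", correct=True,
               comment="Correct: the CPU fetches and executes instructions."),
        Answer("RAM",
               comment="RAM stores data and instructions; see the chapter on memory."),
    ],
)
correct_texts = [a.text for a in item.answers if a.correct]
```

Attaching the comment to the answer rather than to the item is what lets the system show feedback specific to the distractor the student actually chose.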


For each unit of items, the instructor assigns, by judgment, an initial difficulty-level value. This value is re-estimated later, using the responses of a group of students who take tests in order to initialize the parameter values. In this way, the item difficulty values are estimated according to students' knowledge and abilities.
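This calibration step can be sketched as follows; the blending rule below is an illustrative assumption (the actual W-PARES update method is documented in [38]):

```python
def reestimate_difficulty(initial, responses, weight=0.5):
    """Blend the instructor's initial difficulty judgment with the
    observed proportion of incorrect answers from a calibration group.

    responses: list of booleans (True = correct answer).
    weight: how much the observed data overrides the initial judgment.
    """
    if not responses:
        return initial
    observed = sum(1 for r in responses if not r) / len(responses)
    return (1 - weight) * initial + weight * observed

# Instructor guessed 0.3; 8 of 10 students answered incorrectly,
# so the blended estimate moves upward: 0.5*0.3 + 0.5*0.8 = 0.55.
updated = reestimate_difficulty(0.3, [False] * 8 + [True] * 2)
```

The point is only that the initial judgment is a prior that the group's responses gradually correct, so difficulty ends up reflecting the actual student population.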

Fig 2. Instructor subsystem: Updating an Item.

The course instructors use the authoring environment to compose tests. The instructor defines the parameters of a new test. On the left side of the screen, the Item Bank snapshot appears. Instructors choose from the Item Bank, by clicking the corresponding icons, the chapters and the units of each chapter that will be examined in the particular test. On the right side of the screen, test parameters are depicted, such as the name of the test, a description, the scoring range, an estimate of the time needed to complete the test, a choice of negative evaluation and a choice of adaptive or non-adaptive testing. Instructors can also administer self-assessment tests. For each subject, an instructor may store many composed tests, but authorizes only one of them to appear on students' screens before students sit for a test. At test time, the instructor publishes the authorized tests on the students' machines. When all students have submitted their answers, the course instructor activates a button to trigger automatic scoring of the answers. Data gathered during testing are recorded in the Test Data Bank. The instructor prints the evaluation report, which includes the list of students who participated in the particular test and their corresponding scores.

4.3 Student Assessment subsystem

Students use Web browsers to access the Student Assessment subsystem. Formative or summative tests, composed by the instructor off-line, are published to the students just before the beginning of a scheduled test. Students connect to the system and take scheduled tests. When the questions appear on screen, a timer starts counting down. As soon as a student finishes a test, he/she is expected to submit his/her answers by activating the 'Submit' button. If time expires before the student deliberately submits, the student's answers are submitted automatically. After submission, students can print-preview or print the test report. Using the Student Assessment subsystem, students can take either adaptive or non-adaptive tests. This is a parameter determined by the course instructor during test composition. In non-adaptive tests, items are randomly chosen from the chapters or units that the course instructor has predetermined. The random item selection techniques guarantee that all students will receive different tests, but of similar difficulty. In the case of an adaptive test, each student initially responds to a basic set of items in order to estimate the student's profile. Then the test is transformed into an adaptive test. The next item that appears on screen is adjusted to the student's profile. When the student responds, the student's profile is re-estimated, and the procedure is repeated until the termination criteria are met. The adaptive assessment techniques applied in PARES are also applied in W-PARES. Figure 3 depicts part of a test as it appears on the student's screen. For each question, there is a set of possible answers. The screen also shows information such as the grading weight of each question, the test duration and the remaining time until automatic submission.
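The guarantee that non-adaptive tests are different per student yet of similar difficulty can be read as stratified sampling over difficulty levels; the sketch below is one illustrative way to achieve it, not necessarily the technique used by W-PARES (which is described in [37, 38]):

```python
import random

def stratified_test(items_by_difficulty, per_level, seed=None):
    """Draw the same number of items from each difficulty level, so
    different students receive different items but tests of comparable
    overall difficulty.
    """
    rng = random.Random(seed)
    test = []
    for level in sorted(items_by_difficulty):
        test.extend(rng.sample(items_by_difficulty[level], per_level))
    return test

# Toy bank with easy (e), medium (m) and hard (h) items.
bank = {1: ["e1", "e2", "e3"], 2: ["m1", "m2", "m3"], 3: ["h1", "h2", "h3"]}
test_a = stratified_test(bank, per_level=2, seed=7)
test_b = stratified_test(bank, per_level=2, seed=8)
```

Because every test draws the same quota from each stratum, the total difficulty is held roughly constant even though the specific items differ between students.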


Fig 3. Student subsystem: Test appearance on screen.

4.4 Student Self-Assessment subsystem

Students use Web browsers to access the Student Self-Assessment subsystem. Students connect to the system and take self-assessment tests, choosing among those that the instructor has already constructed. Until now, students have not been permitted to choose their preferred Units of questions. Self-assessment tests are always adaptive. Each question is adaptively selected from the predetermined set of Units, according to the student's response to the previous question. Figure 4 depicts a screen that appears on a student's computer after the student has submitted his/her response to a self-assessment question. The correct answer and the student's answer to the question are displayed, as well as the comment associated with the student's answer. The comment provides feedback on right/wrong answers. Students can also access related content and educational resources through appropriate links. Therefore, feedback encourages better performance and serves the aim of learning.

Fig 4. Student self-assessment subsystem: Providing feedback immediately after student’s response.

5. Conclusions

In this paper, the author focused on Web-based adaptive assessment and self-assessment in Higher Education. A brief literature review covering basic assessment issues was presented. It is recognised that assessment can play a crucial role in promoting productive student learning. The aim of effective assessment is not grading students; grading is about recording and management rather than learning. Assessment for learning occurs throughout the learning process, and no grades are given to students. Recently, the rapid development of new digital technologies has resulted in the development of Web-based tools for student assessment. Moreover, adaptive assessment is a methodology that can be incorporated into a Web-based assessment tool in order to provide tests tailored to the student's knowledge level. Web-based adaptive self-assessment is a useful tool not only for distance learning but also for supporting student learning in traditional classes. This paper also presented W-PARES, a Web-based platform for automated adaptive student assessment. W-PARES includes four subsystems, each of which is an independent Web application. Administrators, instructors and students access the corresponding subsystems using a Web browser. Students may access two different subsystems: the first enables participation in formative or summative assessment, and the second enables participation in self-assessment. More qualitative and quantitative analysis of the effectiveness of W-PARES in learning is necessary. For the near future, the presentation of a statistical analysis of experimental data on students' performance is planned. During the semester, students used the W-PARES tool to take adaptive self-assessment tests, in which immediate feedback was provided for each answer. This as-yet-unpublished analysis is consistent with previous analyses of experimental data obtained from the application of the PARES client/server platform [37, 38]. These results also agree with other researchers' results [32, 33, 35], which conclude that feedback provided during formative assessment or during self-assessment improves a student's competence level.
Up to now, questionnaires have been given to students in order to evaluate the self-assessment testing process. These data will be analyzed, and new data will be gathered during the next academic year. Concerning the implementation of W-PARES, new Web 2.0 technologies will be adopted to improve its functionality and user interface.

Acknowledgements

Professor V. Kaburlasos is gratefully acknowledged for his scientific contribution. V. Tsoukalas is also acknowledged for his technical support.

References

[1] Marinagi CC, Tsoukalas VT, Kaburlasos VG. Modifying a client/server architecture to a Web-based architecture for adaptive assessment. In: Proceedings of the 20th Conference of the Greek Company of Operational Research, Vol B, Section 8: Electronic Education and Operational Research. 2008:873-884.

[2] Segers M, Dochy F, Cascallar E. The era of assessment engineering: Changing perspectives on teaching and learning and the role of new modes of assessment. In: Segers M, Dochy F, Cascallar E, eds. Optimising new modes of assessment: In search of qualities and standards. Dordrecht: Kluwer Academic Publishers; 2003:1–12.

[3] Willis J. Assessment for Learning - Why the Theory Needs the Practice. International Journal of Pedagogies and Learning. 2007; 3(2): 52-59.

[4] Carless D, Joughin CG, Mok MMC. Learning-oriented assessment: principles and practice. Assessment & Evaluation in Higher Education. 2006;31(4):395-398.

[5] Wiliam D, Black P. Meanings and consequences: A basis for distinguishing formative and summative functions of assessment? British Educational Research Journal. 1996;22:537–548.

[6] Black P, Wiliam D. Assessment and Classroom Learning. Assessment in Education. 1998;5(1):7-74.

[7] Joint Information Systems Committee (JISC). Effective Practice with e-Assessment. An overview of technologies, policies and practice in further and higher education. 2007. Available at: http://www.jisc.ac.uk/media/documents/themes/elearning/effpraceassess.pdf. Accessed July 2011.

[8] Thelwall M. Computer-based assessment: a versatile educational tool. Computers & Education. 2000;34:37-49.

[9] Brown S, Race P, Bull J, eds. Computer-assisted assessment in higher education. London, UK: Kogan Page; 1999.

[10] Sclater N, Howie K. User requirements of the “ultimate” online assessment engine. Computers & Education. 2003;40:285-306.

[11] Russell M, Haney W. Bridging the gap between testing and technology in schools: Four policy options. Education Policy Analysis Archives. 2000;8(19). Available at: http://epaa.asu.edu/epaa/v8n19.html

[12] McDonald A. The impact of individual difference on the equivalence of computer-based and paper-and-pencil educational assessments. Computers & Education. 2002;39:299-312.

[13] Skourlas C, Belsis P, Sarinopoulou F, Tsolakidis T, Vassis D, Marinagi C. A Wireless Distributed Framework for Supporting Assistive Learning Environments. In: Proceedings of the Workshop on Assistive Healthcare & Educational Technologies for special target groups. Corfu, Greece: ACM Press; 2009.

[14] Vassis D, Belsis P, Skourlas C. A Wireless Framework for Secure Execution of Exams Related to Students with Special Needs. In: Proceedings of the 5th International Scientific Conference for International Synergy in Energy, Environment, Tourism and contribution of Information Technology in Science, Economy, Society and Education. 2010.

[15] Lilley M, Barker T, Britton C. The development and evaluation of a software prototype for computer-adaptive testing. Computers & Education. 2004;43:109-123.

[16] Sampson D, Karagiannidis C, Cardinali F. An Architecture for Web-based e-Learning Promoting Re-usable Adaptive Educational e-Content. Educational Technology & Society, Special Issue on Innovations in Learning Technologies. 2002;5(4). ISSN 1436-4522.


[17] Papanikolaou KA, Grigoriadou M, Kornilakis H, Magoulas GD. INSPIRE: an INtelligent System for Personalized Instruction in a Remote Environment. In: Reich S, Tzagarakis M, De Bra PME, eds. Hypermedia: Openness, Structural Awareness, and Adaptivity. Lecture Notes in Computer Science 2266. Berlin, Heidelberg: Springer-Verlag; 2002:215-225.

[18] Chen CM, Lee HM, Chen YH. Personalized e-learning system using Item Response Theory. Computers & Education. 2004;44(3):237-255.

[19] Gómez RG (Coord.), Ibarra Sáiz MS, Dodero JM, Gómez Ruiz MA, Gallego Noche B, Cabeza Sánchez D, Quesada Serra V, Martínez del Val Á. Developing the e-Learning-oriented e-Assessment. In: Méndez-Vilas A, Solano Martín A, Mesa González JA, Mesa González J, eds. Research, Reflections and Innovations in Integrating ICT in Education. Badajoz, Spain: Formatex Research Center; 2009:515-519.

[20] Bull J, Conole G, Davis H, White S, Danson M, Sclater N. Rethinking assessment through learning technologies. Available at: http://www.ascilite.org.au/conferences/auckland02/proceedings/papers/056.pdf. Accessed July 10, 2011.

[21] McCann AL. Factors affecting the adoption of an e-assessment system. Assessment & Evaluation in Higher Education. 2009;35(7):799-818.

[22] Warburton B. Quick win or slow burn? Modelling UK HE CAA uptake. Assessment & Evaluation in Higher Education. 2009;34(3):257-272.

[23] Questionmark. Available at: http://www.questionmark.com. Accessed July 2011.

[24] Assessment Systems Corporation. CATSim, FastTEST Pro 2.0 and FastTEST Web. Available at: http://www.assess.com/xcart/home.php?cat=3. Accessed July 10, 2011.

[25] Conejo R, Guzmán E, Millán E, Trella M, Pérez-de-la-Cruz JL, Ríos A. SIETTE: A Web-Based Tool for Adaptive Teaching. International Journal of Artificial Intelligence in Education. 2004;14:29-61.

[26] Pérez MS, Herrero P, Sánchez FM. Are Web self-assessment tools useful for training? IEEE Transactions on Education. 2005;48(4):757-763.

[27] Lazarinis F, Retalis S. Analyze Me: Open Learner Model in an Adaptive Web Testing System. International Journal of Artificial Intelligence in Education. 2007;17:255-271.

[28] Wang T. Web-based quiz-game-like formative assessment: Development and evaluation. Computers & Education. 2008;51:1247-1263.

[29] Millán E, García-Hervás E, Guzmán De los Riscos E, Rueda Á, Pérez de la Cruz JL. TAPLI: An Adaptive Web-Based Learning Environment for Linear Programming. In: Conejo R et al., eds. Current Topics in Artificial Intelligence. LNAI 3040. Berlin, Heidelberg: Springer; 2004:676-685.

[30] Burrow M, Evdorides H, Hallam B, Freer-Hewish R. Developing formative assessments for postgraduate students in engineering. European Journal of Engineering Education. 2005;30:255-263.

[31] López-Cuadrado J, Armendariz AJ, Pérez TA. A supporting tool for the adaptive assessment of an e-learning system. In: Méndez Vilas A, Gonzalez Pereira B, Mesa González J, Mesa González JA, eds. Recent research developments in learning technologies, Vol. 1. Cáceres, Spain: Formatex Research Center; 2005:295-299.

[32] Wang T. Web-based dynamic assessment: Taking assessment as teaching and learning strategy for improving students’ e-Learning effectiveness. Computers & Education. 2010;54:1157-1166.

[33] Peat M, Franklin S. Supporting student learning: The use of computer-based formative assessment modules. British Journal of Educational Technology. 2002;33(5):515-523.

[34] Gardner L, Sheridan D, White D. A web-based learning and assessment system to support flexible education. Journal of Computer Assisted Learning. 2002;18:125-136.

[35] Cassady JC, Gridley BE. The effects of online formative and summative assessment on test anxiety and performance. Journal of Technology, Learning, and Assessment. 2005;4(1). Available at: http://www.jtla.org. Accessed July 10, 2011.

[36] Guzmán E, Conejo R, Pérez-de-la-Cruz JL. Improving student performance using self-assessment tests. IEEE Intelligent Systems. 2007;22(4):46-52.

[37] Marinagi CC, Kaburlasos VG, Tsoukalas VT. An Architecture for an Adaptive Assessment Tool. In: Proceedings of the 37th ASEE/IEEE Frontiers in Education Conference, Session T3D. Milwaukee, WI; 2007:11-16.

[38] Kaburlasos VG, Marinagi CC, Tsoukalas VT. Personalized Multi-Student Improvement Based on Bayesian Cybernetics. Computers & Education. 2008;51(4):1430-1449.

[39] BEA WebLogic. Available at: http://www.bea.com. Accessed July 10, 2011.

© FORMATEX 2011
