Crowdsourcing for Educational Assessment in e-Learning

M.Tech Project Report

Submitted in partial fulfillment of the requirements for the degree of
Master of Technology

by

Raj Agrawal
Roll No: 11305R004

Under the guidance of
Prof. Deepak B. Phatak

24th June, 2014

Department of Computer Science and Engineering
Indian Institute of Technology, Bombay



Dissertation Approval Certificate

Indian Institute of Technology, Bombay

The dissertation titled "Crowdsourcing for Educational Assessment in e-Learning", submitted by Raj Agrawal (Roll No. 11305R004), is approved for the degree of Master of Technology in Computer Science and Engineering.

Prof. Deepak B. Phatak, Dept. of CSE, IIT Bombay (Supervisor)
Prof. Ganesh Ramakrishnan, Dept. of CSE, IIT Bombay (Internal Examiner)
Mr. Avinash Awate (External Examiner)
Prof. Uday N. Gaitonde, Dept. of ME, IIT Bombay (Chairperson)

Date: 24th June, 2014
Place: IIT Bombay, Mumbai

Declaration

I declare that this written submission presents my ideas in my own words and, where others' ideas or words have been included, I have appropriately cited and referenced the original sources. I declare that I have properly and accurately acknowledged all sources used in the production of this thesis.

I also declare that I have adhered to all principles of academic honesty and integrity and have not misrepresented or fabricated or falsified any idea/data/fact/source in my submission. I understand that any violation of the above will be cause for disciplinary action by the Institute and can also evoke penal action from the sources which have thus not been properly cited or from whom proper permission has not been taken when needed.

Date: 24th June, 2014
Place: Indian Institute of Technology Bombay

Raj Agrawal
(11305R004)

Acknowledgements

I would like to express my deepest gratitude to my guide, Prof. D. B. Phatak, who has always made things simple to understand. Without his deep insight into this domain and the valuable time he devoted to this project, it would not have been possible for me to move ahead properly. He has been my primary source of motivation and advice during my project. I would also like to thank Mr. Naveen Bansal for his help during the entire duration of my project and for reviewing my reports. Finally, I would like to thank all my friends who have been alongside me during the project and helped me stay motivated.

Abstract

MOOCs (Massive Open Online Courses), such as those offered on Coursera and Stanford's recent online platforms, have revolutionized learning through online education. Assessing the participants in these courses is a big challenge, as the courses have a huge number of participants. Traditional assessment and evaluation approaches are not suitable, since manual evaluation with such massive participation is a highly tedious task. In this report, we present an approach to solving this problem using crowdsourcing. We also present the problems that arise when peer evaluation is used at large scale, discuss the problem of fairness in peer evaluation in more detail, and suggest a methodology to address it. Our approach classifies participants on the basis of their knowledge levels; evaluation tasks are then distributed on the basis of these knowledge classes.

We have developed a peer evaluation system which focuses on the allocation of evaluation tasks among participants. The system exploits the knowledge-level information of the participants for this allocation.

Contents

1 Introduction
  1.1 Need of Crowdsourcing in Evaluation
  1.2 Peer to Peer Evaluation System

2 Literature Survey
  2.1 Crowdsourcing
    2.1.1 Applications
    2.1.2 Performance
  2.2 Need of Peer Evaluation System for Educational Evaluation
  2.3 Peer to Peer Evaluation System
    2.3.1 Review of Existing Systems
    2.3.2 Peer Assessment Procedures for Large Classes
    2.3.3 Challenges

3 Proposed System for Peer Evaluation in e-Learning
  3.1 Problems in Existing Systems
  3.2 Proposed Solution
    3.2.1 Measuring Knowledge Level of the Participants
    3.2.2 Task Distribution
    3.2.3 Allocation Strategies of Projects for Evaluation Based on Knowledge Level
    3.2.4 Calculating Final Score
  3.3 Summary

4 Software Implementation
  4.1 Software Requirements
    4.1.1 Participant's Requirements
    4.1.2 Instructor's Requirements
    4.1.3 Other Requirements
  4.2 System Design
    4.2.1 Use Case Diagram
    4.2.2 Database Design
    4.2.3 User Interfaces

5 Conclusion and Future Work
  5.1 Conclusion
  5.2 Future Work

Appendix A
  A.1 Installation Steps

List of Figures

2.1 A Taxonomy of Crowdsourcing
3.1 Proposed System Workflow
4.1 Use Case Diagram
4.2 Database Design
4.3 Registration Page
4.4 Participant Course Details
4.5 Participants' Submit Assignment Interface
4.6 Attempt Quiz
4.7 Evaluate Assignment
4.8 Submit Assignment Marks
4.9 Course Details
4.10 Add Question
4.11 Create Quiz for a Course
4.12 Create Assignment

Chapter 1

Introduction

Interest in online education is surging, driven by the success of MOOCs on platforms such as Coursera and by Stanford's recent online courses. For many students, e-learning is the most convenient way to pursue a degree in higher education. Many of these students are attracted to a flexible, self-paced method of education for attaining their degree. Moreover, in asynchronous e-learning classes, students are free to log on and complete work at any time they wish. These students are therefore more likely, and more motivated, to enroll in an e-learning class.

MOOC courses have a huge number of participants. These participants (students) have to submit assignments, projects, etc. as part of their course activity for their assessment. Evaluating these submissions at such a large scale is a big challenge in these courses.

1.1 Need of Crowdsourcing in Evaluation

Many tasks that are trivial for humans continue to challenge even the most sophisticated computer programs. Prior to the introduction of the concept of crowdsourcing, these problems were solved by assigning tasks to employees in a company. Such tasks can now be done by outsourcing them to a large network of people (the crowd). In the field of education, instead of minimizing the cost of accomplishing a task, the objective is to maximize the expected competence of students across a range of topics while holding constant the time spent on videos and worked problems [1].

Crowdsourcing is a distributed problem-solving and business production model. It is defined as "an idea of outsourcing a task to a large group of people in the form of an open call, that is traditionally performed by an employee" [2].

Since MOOC courses have a huge number of participants, typically in the range of lakhs (hundreds of thousands), traditional approaches to assessment and evaluation, in which the course instructor, TAs, or a small group of people are responsible for evaluation, are not feasible. Automated online evaluation systems are also not suitable in such a scenario: they can easily evaluate MCQs, but for other patterns of assignments involving long answers or textual solutions they do not work, as one would need to rely heavily on NLP and machine learning algorithms. So we can make use of the concept of crowdsourcing for the evaluation of assignments, projects, etc. in these courses.

1.2 Peer to Peer Evaluation System

Peer evaluation is the evaluation of work by one or more people of similar competence to the creator of the work (peers). Peer evaluation utilizes the independence and anonymity of the evaluators in order to discourage cronyism (i.e., favoritism shown to relatives and friends) and obtain an unbiased evaluation. Peer evaluation helps to maintain and enhance the quality of a work by detecting its errors and weaknesses.

In the field of education, a peer evaluation system is a form of self-regulatory body in which each student plays the dual role of student and teacher. Here every student evaluates the work of their peers.

Chapter 2

Literature Survey

2.1 Crowdsourcing

Many tasks that are trivial for humans, such as audio transcription and translation, continue to challenge even the most sophisticated computer programs; these tasks cannot be fully computerized. Prior to the introduction of the concept of crowdsourcing, traditional approaches to solving these types of problems focused on assigning the tasks to employees in a company. However, this increases the company's production cost.

Crowdsourcing is a distributed problem-solving and business production model. According to Jeff Howe, "crowdsourcing is an idea of outsourcing a task to a large group of people in the form of an open call, that is traditionally performed by an employee".

Crowdsourcing users can be classified into two categories:

• Requester: These are the persons who want their work to be done by the crowd. They can also be called employers.

• Worker: A worker is a person who completes work posted by the requester.

The literature on crowdsourcing can be categorized into applications and performance.

2.1.1 Applications

Crowdsourcing applications can be broadly classified into four categories [3]; a taxonomy is shown in Figure 2.1:

• Voting Systems

• Information Sharing Systems

• Games

• Creative Systems

Figure 2.1: A Taxonomy of Crowdsourcing

2.1.2 Performance

Many crowdsourcing applications have been developed in recent years. Since in crowdsourcing the work is done by a very diverse group of people, performance is one of the major factors in such systems. The performance of a crowdsourcing system can be discussed under the following categories:

• User Participation

• Quality Management

• Cheating Detection

2.2 Need of Peer Evaluation System for Educational Evaluation

Traditionally, there have been three main motivations for a crowdsourcing worker to participate in a crowdsourced task: financial incentives, e.g., Amazon Mechanical Turk; entertainment, e.g., the ESP Game and FoldIt; or participation in a community, e.g., Wikipedia and Stack Overflow [1].

None of the motivations specified above can gather a crowd for educational evaluation in MOOC courses. So we can make use of the concept of a peer evaluation system for this task, where the participants who enroll for the course act as the crowd.

2.3 Peer to Peer Evaluation System

Peer evaluation is the evaluation of work by one or more people of similar competence to the creator of the work (peers). Peer evaluation utilizes the independence and anonymity of the evaluators in order to discourage cronyism (i.e., favoritism shown to relatives and friends) and obtain an unbiased evaluation. It also helps to maintain and enhance the quality of a work by detecting its errors and weaknesses.

Peer assessment requires students to use their knowledge and skills to review, clarify, and correct others' work. These tasks are cognitively demanding and, as they actively engage students with new knowledge, have the potential to reinforce and deepen the understanding of the student assessor. This benefit is particularly apparent if students are involved in evaluating multiple assessment tasks, as they will be repeatedly exposed to the material presented in a variety of formats.

2.3.1 Review of Existing Systems

Peer Evaluation Systems for Project Based Learning

Project based learning is increasingly popular at engineering universities, both to heighten the attractiveness of engineering degrees and to address professional skills such as teamwork and oral and written communication. In project based learning, students work in groups. One of the main challenges in such a scenario is how to determine the contribution of an individual student to the group; one way is to use peer and self evaluation for this purpose. Some of the systems built to facilitate such assessments are discussed below.

Other Peer Evaluation Systems

Delft:PeEv and SPARK are two other peer evaluation systems, developed by Delft University of Technology and the University of Technology, Sydney, respectively. These systems were developed for evaluating a student's contribution to group tasks. In these systems, the students in a project group evaluate all members of their group, including themselves, based on a fixed set of criteria. The system then calculates the average of the marks given by the group to a student as that student's score [4].

2.3.2 Peer Assessment Procedures for Large Classes

Most of the literature on peer assessment focuses either on the evaluation of individual contributions to group assignments or on the validity and reliability of such forms of assessment. Few studies have been concerned with evaluating the experience from the viewpoint of the students themselves [5]. Furthermore, almost all studies to date involve single tutorials or small classes taught by staff who are committed to the procedure. As classes in MOOC courses are very large, the development of a viable procedure for the use of peer assessment in large classes is important.

A study was conducted at Queensland University of Technology to develop procedures for peer assessment in large classes (though not as large as in MOOCs). The study was conducted in three phases over a period of two years, with both students and staff participating. One assessment task was selected by the coordinator to be included in the study; the assessment itself contributed a proportion of each student's total marks, ranging from 15% to 30%.

2.3.3 Challenges

• Diversity in intelligence and domain knowledge among participants creates an unavoidable problem of ensuring fairness in evaluation. Fairness here refers to the ability of participants to evaluate their peers' assignments. This problem becomes more severe as the number of participants increases.

• Many students feel that their peers are either easy or hard markers, and that their score does not reflect the effort they have put into the course [5].

• Some feel that peer assessment is a time-consuming process and that evaluation should be the responsibility of the instructor. However, it has been shown that a 'reasonable' number of marks (typically 10-15% of the total) can be allocated to student performance in the peer evaluation process, as it may boost engagement and commitment to the task [5].

• Need for anonymous and unbiased evaluation: the basis for giving feedback on or reviewing someone's work should be the objective quality of that work, not the attributes of the author or reviewer or any personal issues. On many occasions the author of the work does not agree with the feedback given by the reviewer. There can be many biasing factors while reviewing someone's work, such as:

  – Nationality

  – Gender

Since little work has been done to handle the above-mentioned issues for peer evaluation systems at massive class strengths, in the next chapter we propose the design of a system that can be developed for this purpose to ensure fair evaluation. Our system mainly focuses on the issue of diversity in intelligence among participants.

Chapter 3

Proposed System for Peer Evaluation in e-Learning

In Chapter 2, various existing systems for crowd-based peer evaluation were introduced. Most of these systems focus on the evaluation of individual contributions to group work and do not possess the capabilities to handle peer evaluation in large classes. We have also studied the challenges that arise in developing a procedure for peer assessment in large classes; Section 2.3.3 presents a brief introduction to the major challenges encountered while developing a large crowd-based evaluation system. There is a need to address these issues in order to construct an evaluation system which is fair to all the participants. We identified that the distribution of assignments among peers poses the major challenge in evaluation. An ideal peer evaluation system should be fair, in order to avoid discrepancies and wrong perceptions in the participants' minds. A fair peer evaluation system will encourage participants to take part in evaluation and increase their trust in the system.

In the next section, we address this issue in more detail and propose a possible solution.

3.1 Problems in Existing Systems

Diversity in intelligence and domain knowledge among participants creates an unavoidable problem of ensuring fairness in evaluation. Fairness here refers to the ability of participants to evaluate their peers' assignments. This problem becomes more severe as the number of participants increases.

This is one of the major problems in a peer evaluation system. The problem of ensuring fairness in peer evaluation may be resolved by exploiting the knowledge levels of the participants.

3.2 Proposed Solution

The problem of ensuring fairness in a peer evaluation system can be resolved by exploiting the knowledge levels of the participants. There are many ways in which the knowledge level of the participants can be measured; these approaches are discussed later in this section. After measuring the knowledge levels of the participants, they can be classified into different classes. Based on this classification, the task of evaluation can be allocated to the participants. The overall workflow of the system is depicted in Figure 3.1.

Figure 3.1: Proposed System Workflow

3.2.1 Measuring Knowledge Level of the Participants

The knowledge levels of the participants can be measured in several ways; a small sketch of one such classification follows the list below.

• When participants interact with the system, they attempt quizzes and exercises while learning the course material. This interaction history can be used to estimate the knowledge level of the participants. This approach assumes that all the data related to the performance of the participants is prerecorded and available for every participant in the course. However, this may not be possible in all scenarios, e.g., when a quiz has not been attempted by all the participants.

• Knowledge levels of the students can also be measured by giving them a quiz based on the task assigned to them for peer evaluation. This approach is intuitive, but the system then always depends on an external entity who designs the quiz, and the fairness is highly dependent on the quality of the quiz.

• As the above approaches have their own limitations, a combination of them can be used. This strategy assumes an expert is available, who can choose either of the above strategies. If the expert finds that sufficient performance information about the participants in the course is available, this information is used to measure their knowledge levels. Otherwise, knowledge levels are measured by giving the participants a quiz based on the task assigned to them for evaluation.
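To make the quiz-based strategy concrete, here is a minimal sketch (not from the report; the function name and the equal-size quantile split are our assumptions) of classifying participants into n knowledge classes by ranking their quiz scores:

    def classify_by_quiz_score(scores, n_classes):
        """scores: dict mapping participant id -> quiz score.
        Returns a dict mapping participant id -> class index,
        where class 1 holds the highest scorers."""
        ranked = sorted(scores, key=scores.get, reverse=True)
        size = -(-len(ranked) // n_classes)  # ceiling division
        return {pid: (rank // size) + 1 for rank, pid in enumerate(ranked)}

    # Six participants split into three classes of two each.
    quiz = {"p1": 9, "p2": 7, "p3": 8, "p4": 4, "p5": 6, "p6": 2}
    print(classify_by_quiz_score(quiz, 3))
    # -> {'p1': 1, 'p3': 1, 'p2': 2, 'p5': 2, 'p4': 3, 'p6': 3}

Any monotone binning of scores would work here; the equal-size split simply guarantees that no class is empty.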

3.2.2 Task Distribution

In MOOCs, many of the quizzes and assignments are online and consist of MCQ-type questions. These patterns of exams do not require a peer evaluation system, as automated evaluation can be done fairly. However, there are instances of assignments/projects with long-answer-type solutions, whose evaluation is not feasible by any automatic system. For such patterns of assignments/projects, human intervention is required, and in such scenarios our peer evaluation system is to be used.

Distributing the tasks (projects/assignments) for evaluation in a way that ensures fairness is the major problem in peer evaluation. We have studied some approaches to address this problem. In our work, we consider task distribution to be the significantly larger problem that needs to be solved, and we propose different approaches that need to be tested for suitability in a real environment.

Continuing with the idea of measuring knowledge levels, there are many approaches that can efficiently measure the knowledge levels of the participants; one such approach was discussed in the previous section, and we have tried to overcome the limitations associated with it. When the number of participants is large, the diversity among students in knowledge level is also large. To ensure fairness in such a diversified crowd of participants, we need to follow a specific approach. One approach in this direction is to divide the participants into classes on the basis of knowledge level. Suppose we have n classes, where each class Ci represents a particular knowledge level. If we divide the students into this set of classes, then assigning tasks for evaluation becomes easier. There are many such approaches, each with its own limitations. One important thing to note here is that the classes are ordered by knowledge level, e.g.,

    KnowledgeLevel(C1) > KnowledgeLevel(C2)

Hence, the participants who belong to class C1 possess higher knowledge than the participants of class C2.

3.2.3 Allocation Strategies of Projects for Evaluation Based on Knowledge Level

Considering the knowledge-level classes of all the participants, the projects can be distributed by one of the following approaches (a code sketch of the final solution appears at the end of this subsection):

• The project of any participant Pi of class Ck can only be allocated to a participant Pj of the same class Ck. The motivation behind this approach is the fact that participants belonging to the same class have the same knowledge level and hence may be able to better understand the assignments of their peers in that class.

  Limitation: The problem that may arise with this strategy is that participants tend to give higher marks, as an evaluator at the same knowledge level may not be able to find the flaws in the assignment. Also, with this strategy the students might not gain any significant learning advantage, as the knowledge levels of both peers are the same.

• Allocate the project of a participant belonging to class Ci to a participant of Ci+1. The motivation is that participants may gain a learning advantage, as they will evaluate the assignments of peers with a higher knowledge level than their own.

  Limitation: This approach may fail, as it does not solve our basic problem of fairness in evaluation: the evaluators might not have sufficient knowledge to do the evaluation.

• Allocate the project of a participant belonging to class Ci to a participant of Ci−1. This approach may ensure fairness in the evaluation, as the evaluators may have enough knowledge to evaluate the assignments of the peers whose work they evaluate.

  Limitation: This approach does not solve the problem of learning. As the students evaluating the assignments have a higher knowledge level than the peers whose assignments they evaluate, they may not gain any learning advantage from the process of evaluation.

It should be noted that each strategy has its own benefits as well as drawbacks, so none of them can be used alone. However, a combination of these strategies can solve the problem.

Solution: Allocate the project of participant Pi of class Ck for evaluation to exactly one participant from each class C1, C2, ..., Cn.

It has been found that students with a higher GPA tend to give lower marks to themselves and their peers than students with a lower GPA, benchmarked against the instructor's marks on the same assignment [6]. In our strategy, the project of a participant is evaluated by multiple participants, each with a different knowledge level. This results in more fairness, since participants of all the classes contribute to the evaluation, and the tendency reported in [6] can be normalized out.
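The following is a minimal sketch of this combined allocation (our own illustration, not the report's actual implementation; the function name, the round-robin rotation, and the self-evaluation shift are assumptions). Every project receives exactly one evaluator from each knowledge class, and no participant marks their own submission:

    import random
    from collections import defaultdict

    def allocate_evaluations(classes, seed=0):
        """classes: dict mapping class index -> list of participant ids
        (each class needs at least two members for the self-evaluation
        shift to work). Every participant submits one project; the result
        maps each project owner to one evaluator per class."""
        rng = random.Random(seed)
        owners = [p for members in classes.values() for p in members]
        allocation = defaultdict(list)
        for members in classes.values():
            roster = members[:]
            rng.shuffle(roster)  # anonymize who marks whom
            for i, owner in enumerate(owners):
                evaluator = roster[i % len(roster)]
                if evaluator == owner:  # never evaluate your own project
                    evaluator = roster[(i + 1) % len(roster)]
                allocation[owner].append(evaluator)
        return allocation

    # Three classes of two participants each: every project gets
    # three evaluators, one drawn from each class.
    classes = {1: ["a1", "a2"], 2: ["b1", "b2"], 3: ["c1", "c2"]}
    for owner, evaluators in allocate_evaluations(classes).items():
        print(owner, "->", evaluators)

The round-robin rotation also keeps the evaluation load roughly equal within each class.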

3.2.4 Calculating Final Score

Normalization of scores from each class

The score given by a participant of class Ci is normalized by multiplying it by the normalizing factor of that class. The normalizing factor of class Cj is the ratio of the average marks allotted by participants of all the classes to the average marks allotted by the participants of class Cj:

    normalizing_factor(Cj) = ( (1/n) * Σ_{i=1..n} Si ) / Sj

where Si is the average marks given by the participants of class Ci, and n is the number of classes representing the different knowledge levels.

    normalized_score(participant_i) = score(participant_i) * normalizing_factor(Ci)

where score(participant_i) is the score given by a participant of class Ci.

The final score of a particular project is calculated from the normalized scores it receives from every class: the average of these normalized scores, over all the participants who evaluated the project, is taken as the final score of the project.

    Score(project pj) = ( Σ_{i=1..n} normalized_score(participant_ij) ) / n

where n is the number of participants evaluating the project (one from each class), and normalized_score(participant_ij) is the normalized score given by the participant of class Ci to project pj.
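As a quick check of these formulas, here is a small sketch (our own illustration, not code from the report) that computes the normalizing factors and final scores when each project is marked by one evaluator per class:

    def final_scores(raw, n_classes):
        """raw: dict mapping project id -> {class index: score given by
        that class's evaluator}. Returns dict: project id -> final score."""
        # S[j]: average marks handed out by class Cj across all projects.
        S = {j: sum(scores[j] for scores in raw.values()) / len(raw)
             for j in range(1, n_classes + 1)}
        overall = sum(S.values()) / n_classes
        factor = {j: overall / S[j] for j in S}  # normalizing_factor(Cj)
        return {project: sum(scores[j] * factor[j] for j in scores) / n_classes
                for project, scores in raw.items()}

    # Class 1 marks strictly (average 5), class 2 leniently (average 8);
    # normalization pulls both onto a common scale before averaging.
    raw = {"proj_x": {1: 6, 2: 9}, "proj_y": {1: 4, 2: 7}}
    print(final_scores(raw, 2))  # proj_x ~ 7.556, proj_y ~ 5.444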

3.3 Summary

In this chapter we presented the problem of fairness in peer evaluation. We proposed classifying the participants into different classes based on their knowledge levels. On the basis of this classification, we can distribute the task of evaluation among the participants so as to ensure fairness. The final score of an evaluation is calculated by taking the average of the normalized scores given by the participants.

Chapter 4

Software Implementation

All the details related to the implementation of the system are presented in this chapter. The following sections cover the software requirements, the E-R diagram (database design), and the implementation details of the system.

4.1 Software Requirements

There are two types of users that interact with the system, viz. participants and the instructor. Each type of user has a different set of requirements.

    4.1.1 Participant’s requirements

    Listed below are requirements associated with the Participants:

    • Participants should be able to register themselves in a new course

    • Participants should be able to submit any pending assignment

    • Participant should be able to appear in any quiz available

    • Participant should have an interface to download their peer’s submis-sion for evaluation

    • There should be an interface where participant can submit the evalu-ation results that they have done.

    4.1.2 Instructor’s Requirements

    Listed below are requirements associated with the Instructor:

    • Instructor should be able to create a new assignment for a course

    15

  • – He should be able to add new evaluation criteria to the system,for assignment evaluation

    • Instructor should be able to add questions to the pool

    – He must have an interface to add/update/delete questions

    • Instructor should be able to create quiz with available questions for aspecific assignment

    • Instructor should be able to add a new course to the system

4.1.3 Other Requirements

• Both types of users should be able to authenticate themselves to the system before performing their normal actions

• The system should be able to automatically allocate assignment evaluation tasks among the participants

• The system should be able to automatically calculate the normalized scores of the participants once assignment evaluation is completed

4.2 System Design

4.2.1 Use Case Diagram

There are two types of actors that interact with the system. Fig. 4.1 represents the use case diagram of the system. The functionalities of both actors are captured and implemented in the database design presented in the next subsection.

Figure 4.1: Use Case Diagram

4.2.2 Database Design

Fig. 4.2 represents the database design of the system.

Figure 4.2: Database Design

The entities in the system are explained below; a sketch of a few of these relations as Django models follows the list.

• The Users relation records the information of all the users (participants and instructors) in the system, as well as their roles, i.e., whether each user is an instructor or a participant

• The CourseDetails relation stores all the courses available in the system

• The AssignmentDetails relation stores the information of all the assignments in the system, including their start and end dates for submission as well as evaluation


• The QuizDetails relation stores the information of all quizzes, their start and end dates, and their mapping to assignments, i.e., the assignment for which a particular quiz has been created

• The CourseRegistrationDetails relation stores the course enrollment information of each participant

• The Role relation stores the different types of user roles available

• The QuestionDetails relation stores all the questions available in the system, along with their courses

• The EvaluationCriteria relation stores all the criteria on which assignments can be evaluated

• The AssignmentEvaluationCriteria relation stores the mapping between evaluation criteria and assignments, indicating the criteria on which a given assignment is to be evaluated

• The EvaluationDetails relation stores the allocation of assignment evaluations among peers

• The QuizQuestions relation stores the mapping of questions to quizzes, indicating which questions are included in a quiz

• The ParticipantsMarks relation stores the marks given to participants in an assignment for each evaluation criterion on which the assignment is evaluated


• The NormalizedScore relation stores the final normalized score given to each participant for an assignment after the evaluation

• The ParticipantsToQuiz relation stores the marks of the participants in quizzes

• The StudentAssignments relation stores participants' assignment submission details

• The Category relation stores the knowledge-level classes into which the participants are grouped

• The UserCategory relation stores the knowledge-level classification of each participant with respect to an assignment
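As referenced above, here is a rough sketch of how a few of these relations might look as Django models (field names and types are our assumptions for illustration; the report's actual schema is the one shown in Fig. 4.2):

    from django.db import models

    class CourseDetails(models.Model):
        name = models.CharField(max_length=100)

    class AssignmentDetails(models.Model):
        course = models.ForeignKey(CourseDetails)
        submission_start = models.DateTimeField()
        submission_end = models.DateTimeField()
        evaluation_end = models.DateTimeField()

    class Category(models.Model):
        level = models.PositiveIntegerField()  # 1 = highest knowledge class

    class UserCategory(models.Model):
        # Knowledge-level classification of a participant w.r.t. an assignment.
        user = models.ForeignKey('auth.User')
        assignment = models.ForeignKey(AssignmentDetails)
        category = models.ForeignKey(Category)

    class EvaluationDetails(models.Model):
        # Which peer evaluates which submitter's work for an assignment.
        assignment = models.ForeignKey(AssignmentDetails)
        evaluator = models.ForeignKey('auth.User', related_name='evaluations')
        submitter = models.ForeignKey('auth.User', related_name='evaluated_by')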

4.2.3 User Interfaces

There are three categories of interfaces, namely the participant side, the instructor side, and the interfaces common to both users.

4.2.3.1 Participant's Side Interfaces

• Fig. 4.3 represents the participant's registration page. This page provides participants a platform to register themselves in a new course. Participants have to enter their username and password and choose a course for registration.

Figure 4.3: Registration Page

• Fig. 4.4 represents the participant's course details page. Once participants log in to the system, the course details page is available to them. In this interface they can perform actions such as evaluating assignments, attempting a quiz, or submitting their assignment for evaluation in a specific course. In addition, participants can also see an upcoming-events chart which displays all upcoming events such as assignment submission or evaluation deadlines, quiz deadlines, etc.

• Fig. 4.5 represents the Submit Assignment page. Using this interface, participants can upload their assignment for evaluation in a specific course. This page is accessible only if the assignment submission deadline has not passed.

• Fig. 4.6 represents the Attempt Quiz page. This page presents a quiz in the form of multiple choice questions. Participants can submit the quiz by clicking the "submit" button at the bottom of the page.

Figure 4.4: Participant Course Details

Figure 4.5: Participants' Submit Assignment Interface

Figure 4.6: Attempt Quiz

• Fig. 4.7 represents the Evaluate Assignment page. This page gives a participant two options: either download a peer's submission for evaluation, or submit the evaluation result after completing the evaluation.

• Fig. 4.8 represents the Submit Marks page. On this page a participant can submit the marks after evaluating an assignment. Marks are submitted for each criterion on which the assignment is to be evaluated. The participant also has the option to add comments on the peer's submission.

    4.2.3.2 Instructor’s side interfaces

    • Fig 4.9 represents Course Details page for instructor. Once the in-structor login to the system, course detail page is available to him/her.Instructor can select the course in this page and has various options toproceed e.g. Create Quiz, Create Assignment for the selected courseetc.

    • Fig 4.10 represents Add Questions page. Instructor can add questionsto the question pool using this interface for the selected course. Ques-tions are multiple choice questions and a question can have multiplecorrect answer.

    • Fig 4.11 represents Create Quiz page. Instructor can create quiz fora assignment using this interface. The list of questions is available forthe selected course and instructor checks the question which he wants

    21

Figure 4.7: Evaluate Assignment

Figure 4.8: Submit Assignment Marks

Figure 4.9: Course Details

Figure 4.10: Add Question

Figure 4.11: Create Quiz for a Course

• Fig. 4.12 represents the Create Assignment page. The instructor can create a new assignment for the selected course using this interface.

  – The instructor decides the start and end dates for assignment submission and evaluation

  – The instructor has to select the evaluation criteria on which the assignment is to be evaluated

  – The instructor can also add new evaluation criteria to the system, if they are not already available in the database

Figure 4.12: Create Assignment

There is a background job that runs at a fixed interval of time and automatically allocates the evaluation tasks among the participants once the quiz deadline has passed. The instructor has the option to force immediate allocation of the evaluation tasks for the selected course if he does not want to wait for the job to run.

4.2.3.3 Other Interfaces/System Features

This section describes the interfaces which are common to both types of users.

Login: This interface is used by the users to authenticate themselves to the system before proceeding.

There is a background job that runs at a fixed interval of time and allocates the assignment evaluation tasks among participants. This job continually monitors the quiz deadline, and as soon as the deadline ends, it allocates the evaluation tasks to the participants on the basis of their performance in the quiz. Another background job, which also runs at a fixed interval of time, calculates the normalized scores of the participants once the evaluation deadline for the assignment has passed.
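A rough sketch of one pass of these periodic jobs is shown below (our illustration only; the helper names and dict fields are assumptions, and the real system would read deadlines from the database):

    from datetime import datetime, timedelta

    def allocate_evaluations_for(assignment):
        # Placeholder: classify participants by quiz score, then allocate
        # one evaluator per knowledge class (see Sections 3.2.1-3.2.3).
        print("allocating evaluations for", assignment["name"])

    def compute_normalized_scores(assignment):
        # Placeholder: apply the normalizing factors of Section 3.2.4.
        print("normalizing scores for", assignment["name"])

    def run_periodic_jobs(assignments, now=None):
        """One pass of the cron-driven job: each step fires once its
        deadline has passed and it has not already run."""
        now = now or datetime.now()
        for a in assignments:
            if now >= a["quiz_deadline"] and not a["allocated"]:
                allocate_evaluations_for(a)
                a["allocated"] = True
            if now >= a["evaluation_deadline"] and not a["normalized"]:
                compute_normalized_scores(a)
                a["normalized"] = True

    # One assignment whose quiz deadline has already passed.
    demo = [{"name": "A1",
             "quiz_deadline": datetime.now() - timedelta(hours=1),
             "evaluation_deadline": datetime.now() + timedelta(days=7),
             "allocated": False, "normalized": False}]
    run_periodic_jobs(demo)  # -> allocating evaluations for A1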


Chapter 5

Conclusion and Future Work

5.1 Conclusion

In this report, we have presented the issues that arise in evaluation in MOOC courses, discussed a possible solution using the concept of crowdsourcing, and explained the need for a peer evaluation system in these courses. We reviewed the existing peer to peer evaluation systems and the problems related to them, and found that fairness in evaluation is the major problem. We have developed a peer evaluation system which focuses on the allocation of assignment evaluation tasks among participants. The system exploits the knowledge-level information of the peers for this task.

5.2 Future Work

There are many possible enhancements to the current version of this system. Currently, we rely on quiz results to measure the knowledge levels of the participants; other methods could be implemented, for example analyzing participants' previous performance data. Also, this system has been tested with only a limited number of users, and experiments with a large number of participants are needed to identify and correct any remaining issues. The system also needs to be integrated with the edX platform.

Appendix A

The entire system is developed in the Django web development framework, with MySQL as the database server. This section gives detailed instructions for setting up the project.

A.1 Installation Steps

1. Check whether Python is installed on your system. If not, install it using the following command:

   sudo apt-get install python

2. Install Django version 1.6 if it is not already installed on your system. To check your Django version, use the following command:

   python -c "import django; print(django.get_version())"

   To install Django on your machine, follow the instructions at:
   https://docs.djangoproject.com/en/dev/topics/install/#installing-official-release

3. Next, install the MySQL database server and the MySQL database adapter for Django, if not already installed, using the following commands:

   sudo apt-get install mysql-server
   sudo apt-get install python-mysqldb

4. Now that the installation is complete, create a new Django project using the following command (replace <project_name> with a name of your choice):

   django-admin.py startproject <project_name>

5. Change the following attributes of the DATABASES variable in "settings.py" (a sketch of the resulting configuration is shown below):

   • USER: the username for your MySQL server database

   • PASSWORD: the password for your MySQL server database

   • HOST: the IP of the machine where your database server resides ("localhost" if the server is on the local machine)

   If you want to use any other database server, follow the instructions at the following URL to configure your Django project accordingly:
   http://www.djangobook.com/en/2.0/chapter05.html
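   A typical MySQL configuration would then look like the following (the database name, user, and password shown are placeholders to replace with your own values):

       DATABASES = {
           'default': {
               'ENGINE': 'django.db.backends.mysql',  # MySQL backend
               'NAME': 'peer_evaluation',             # your database name
               'USER': 'dbuser',
               'PASSWORD': 'secret',
               'HOST': 'localhost',                   # or the DB server's IP
               'PORT': '3306',
           }
       }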

6. Next, unzip the downloaded source file and copy the three directories in the "Peer Evaluation" directory, namely templates, participants, and static, to your project directory.

7. Add "participants" to the INSTALLED_APPS variable in the settings.py file.

8. Next, go to your project directory and run the following command to start the server and run the project:

   python manage.py runserver 0.0.0.0:8000

9. Finally, open the following URL in your browser to run the project; you will be redirected to the login page:

   http://<server-ip>:8000/

Bibliography

[1] D. Weld, E. Adar, L. Chilton, R. Hoffmann, E. Horvitz, M. Koch, J. Landay, C. Lin, and M. Mausam, "Personalized online education: a crowdsourcing challenge," AAAI Workshops, 2012.

[2] J. Howe, "The rise of crowdsourcing." http://www.wired.com/wired/archive/14.06/crowds.html. [Online; accessed 15-Sep-2013].

[3] M.-C. Yuen, I. King, and K.-S. Leung, "A survey of crowdsourcing systems," in Privacy, Security, Risk and Trust (PASSAT), 2011 IEEE Third International Conference on and 2011 IEEE Third International Conference on Social Computing (SocialCom), pp. 766-773, 2011.

[4] M. van den Bogaard and G. Saunders-Smits, "Peer & self evaluations as means to improve the assessment of project based learning," in Frontiers in Education Conference - Global Engineering: Knowledge Without Borders, Opportunities Without Passports, 2007. FIE '07. 37th Annual, pp. S1G-12-S1G-18, 2007.

[5] R. Ballantyne, K. Hughes, and A. Mylonas, "Developing procedures for implementing peer assessment in large classes using an action research process," Assessment & Evaluation in Higher Education, vol. 27, no. 5, pp. 427-441, 2002.

[6] C. Gopinath, "Alternatives to instructor assessment of class participation," Journal of Education for Business, vol. 75, no. 1, pp. 10-14, 1999.

[7] W. Mason and D. J. Watts, "Financial incentives and the 'performance of crowds'," in Proceedings of the ACM SIGKDD Workshop on Human Computation, HCOMP '09, (New York, NY, USA), pp. 77-85, ACM, 2009.

[8] L. von Ahn and L. Dabbish, "Labeling images with a computer game," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '04, (New York, NY, USA), pp. 319-326, ACM, 2004.

[9] K. Ali, D. Al-Yaseen, A. Ejaz, T. Javed, and H. Hassanein, "CrowdITS: Crowdsourcing in intelligent transportation systems," in Wireless Communications and Networking Conference (WCNC), 2012 IEEE, pp. 3307-3311, 2012.

[10] N. Maisonneuve, M. Stevens, M. E. Niessen, P. Hanappe, and L. Steels, "Citizen noise pollution monitoring," in DG.O '09: Proceedings of the 10th Annual International Conference on Digital Government Research, pp. 96-103, Digital Government Society of North America / ACM Press, 2009.

[11] L. von Ahn, "Games with a purpose," Computer, vol. 39, no. 6, pp. 92-94, 2006.
