ABSTRACT: Monitoring students’ work in the classroom has been recognized as one of the key factors for successful teaching, since only a good real-time assessment enables the teacher to give proper and timely feedback. However, it is not an easy task to systematically supervise what students do in the classroom, and it may consume a considerable amount of the teacher’s resources. This problem gets worse with a large number of students. We work in a scenario in which computer technology is used in classrooms. In particular, students use electronic worksheets on their computer devices instead of traditional paper-and-pen materials. We explore the possibilities of assessing students during classroom seatwork with electronic worksheets by automatically analyzing the structure of the documents and the changes over time while students work on them. The developed system uses CiCv2, an architecture enabling students and teachers to work with electronic worksheets. The proposed querying system allows for monitoring changes in the contents of the students’ documents. An experiment is described, showing that the system is able to give the teacher valuable information. This information is intended to assess the students’ performance and provide them with proper feedback.
Keywords:
Monitoring students’ work; automatic assessment; improving classroom teaching; architectures for educational technology systems.
1. Introduction
For many theories of learning and instruction, be these behaviorism, constructivism or cognitivism, feedback is an essential part of the learning model, absolutely necessary to successfully achieve learning (Bangert-Drowns, Kulik, Kulik & Morgan, 1991). The systematic monitoring of the students’ work should be a key success factor, since teachers will be better prepared to give meaningful and timely feedback when they are aware of the students’ current learning state (Grüntgens & Melzer, 2004). Monitoring students during in-classroom work may involve teachers moving around the classroom, being aware of how well (or poorly) students are progressing with their assignments, and working with students one-to-one as needed. These activities can be quite time-consuming and sometimes difficult to perform even in classes with a small number of students; only teachers with vast experience or applying a very good methodology would be able to do this monitoring in a systematic way. According to Gutierrez-Santos & Mavrikis (2008), due to the need to attend to all students individually, teachers find it difficult to accomplish their role as facilitators in a classroom, and these authors recommend the development of tools to support them in this task. Some authors have developed systems intended to monitor students’ work, mainly for the case of distance learning supported by a Learning Management System (LMS), like the one described by Boase-Jenelik & Reynoldson (2008). LMSs are suitable for tracking students’ activity since most of them provide at least low-level logging which registers all students’ actions. These log files can afterwards be automatically analyzed to extract high-level information regarding students’ progress (Zhang, Almeroth, Knight, Bulger & Mayer, 2007; Ben-Naim, 2008). However, there is little literature reporting the monitoring of in-classroom students’ work to support assessment, although the required technology is already available (Pearce-Lazard, Poulovassilis & Geraniu, 2010).
Although there are several contributions from Artificial Intelligence on automatically monitoring the individual student’s work, the goal of many of those systems has been to build a student model in order to diagnose and give proper feedback to the student (Mavrikis, Gutierrez-Santos, Pearce-Lazard, Poulovassilis & Magoulas, 2010). Student models often target a very specific learning domain and are difficult to develop. In the past, computer-based learning material has been developed in the form of “electronic worksheets” in order to implement in-class learning activities for the students (Hasebrook & Maurer, 2004; Ciravegna, Dingli, Petrelli & Wilks, 2002; Pinkwart, 2005). These materials have been called “Active Documents” (Verdejo, Barros, Read & Rodriguez-Artacho, 2002). Active documents provide the students with a rich environment for interaction, and they also allow collaborative work by making use of available networks. In most cases, an XML Document Object Model has been used as a way to manipulate these documents and to store them on permanent storage devices. If students work on these electronic documents by modifying their contents, then it is possible to do an automatic - and hence systematic - analysis of their work. For example, the analysis can be used to find out how the students are advancing in the
completion of the tasks described in the active document, whether or not they are filling the document with the right answers, and so forth. In this paper, we aim to open a new perspective on monitoring students in the classroom. Instead of trying to partially or totally substitute the teacher with a system, we attempt to use technology to enhance the teacher’s work by providing her with the right information to plan and execute activities while she is working with the students in the classroom. We present a system which allows a teacher to flexibly monitor students working either individually or collaboratively by analyzing their electronic documents in real time. The approach of this work is to develop a tool which allows a teacher to query the current state of students’ documents and extract relevant information in order to assess the students’ work.
2. Previous work
McTighe & O’Connor (2005) classify classroom assessments into three categories, each serving a
different purpose. Summative assessments summarize what students have learned at the
conclusion of an instructional segment. Diagnostic assessments typically precede instruction.
Teachers use them to check students’ prior knowledge and skill levels, identify student
misconceptions, profile learners’ interests, and reveal learning-style preferences. Diagnostic
assessments also provide information to assist teacher planning and guide differentiated
instruction.
Finally, formative assessments occur concurrently with the instructional process. These ongoing
assessments provide specific feedback to teachers and students for the purpose of guiding teaching
to improve learning. Formative assessment methods may be formal or informal, such as non-graded quizzes, oral questioning, teacher observations, draft work, think-alouds, student-constructed concept maps, learning logs, and portfolio reviews. The work presented in this paper
contributes to formative assessment.
As discussed above, monitoring students’ work is an important issue in pedagogical practice.
Accordingly, many researchers have studied this activity and its effect on the learning process.
First and foremost, it has been said that assessment is a valuable practice to promote learning
(Black & Wiliam, 2004). The relationships between classroom work evaluation and student
outcomes involving learning strategies, motivation and achievement have been studied by Crooks (1988), who concluded that classroom evaluation has a powerful direct and indirect impact on learning
outcomes. This impact can be either positive or negative. Therefore, assessment has to be carefully
planned.
A literature review on software designed to support human teachers in monitoring students’ work has shown that most computer-based systems have focused on monitoring students’ work during e-learning. Mazza & Dimitrova (2004) discuss two objectives for e-learning assessment: the
recognition of problems among students in e-learning (e.g., they are not reading the materials, they
are spending too much time in discussions), as well as the evaluation of e-learning to improve the
quality of courses (recognizing those course materials that are not being used). WebCT was used as the LMS in this research, and a new tool enabling result analysis and visualization was added.
Rahkila & Karjalainen (1999) used log file analysis (time and number of accesses) to evaluate
students’ activity during e-learning. Although the log files used were not those generated by the
web-server, the authors considered the possibility of using only the server log files when analyzing
student activity.
Web log analysis was used by Sheard, Ceddia, Hurst & Tuovinen (2003) with the purpose of assessing the effectiveness of course usage, defined in terms of the intensiveness, manner and usefulness of use. The authors assert that three sources can be combined to obtain an overall insight into
students’ online learning process – Web logs, student demographics, and survey results. Statistical
methods were used for the data analysis.
Many researchers have proposed the use of data mining for the evaluation of e-learning. Log files
are also used as data sources in (Lei, Pahl & Donnellan, 2003) and (Zaïane & Luo, 2001). A data
warehouse is typically used for management information analysis of various kinds, as well as for
Web data analysis.
Although we acknowledge that much research has been conducted in the sphere of Artificial Intelligence to assess students’ work, we consider those contributions to have a different approach from ours. Our work aims to provide the teacher with relevant information so that she can decide for herself how to give feedback, also taking into account “environmental” information such as time, students’ preferences, other available learning/teaching resources, and so forth. We share the vision presented by Pearce-Lazard et al. (2010) in the sense of providing a tool that facilitates the teacher’s monitoring task; however, we do not concentrate on a specific domain. Our approach allows more flexible monitoring and a wider range of application scenarios.
3. Query-based Assessment for monitoring students’ work
The principal goal of this work is to test the hypothesis that it is possible to develop a system in which a teacher can flexibly monitor the work of students while they work on electronic worksheets. Suitable worksheets are those which can be mapped to an XML
Document Object Model (DOM) representation. Our proposal is a system that allows a teacher to
“send” query agents through the network; these agents analyze the current state of students’
documents and deliver information back to the teacher; she can then use this information to assess
how the students are performing. In order to illustrate this we will use a very basic example: let us
consider the scenario of a mathematics class where students have to individually solve a series of
exercises related to the subject being taught, e.g., arithmetic multiplication. The exercises are
distributed as an electronic worksheet consisting of three sections of ascending difficulty.
For instance, the first section may contain multiplications of positive numbers of at most 2 digits
each. The second section introduces exercises with multiplication of negative numbers and the
third section introduces multiplication with many digits. The structure of these worksheets clearly delimits each section, and each section contains several exercises. Each drill exercise is structured as a question and an answer, and the answer part is to be modified by the students during their work. In
this case, the teacher could make use of the following information:
1. Students’ progress: the teacher wants to know how many exercises each student has
answered up to now, so she can query how many answer parts have been modified. This
can be presented as a total number, as a percentage or as a table specifying which
exercises have been modified (Fig. 1).
2. Correct answers: in this case, we need to extract the contents of the answer for each
modified exercise, and compare it to the corresponding entry in a table containing the
right answers. The results of the comparisons can be presented as a total number, a
percentage relative to total number of exercises or number of modified exercises, or as a
table indicating correct or incorrect solving of each exercise (Fig. 2).
Figure 1: The documents on top correspond to the XML representation of the electronic worksheet the students are modifying in order to answer the questions by entering one option (A,B,C,D). The agents explore the information and return the relevant data to fill the table at the teacher’s computer about which questions have been answered by each student.
3. Correct answers aggregated by section: the previous information can be presented
broken down by section.
4. Differences among students’ progress: we can apply query 2 for each student and
present the numerical results in a table in descending order.
5. Solving order: using query 1 we can also determine whether the students are solving the
exercises in the presented order or in another sequence.
6. Student’s work pace: we can apply the previous queries at various times and present the
differences in the resulting information. This will show the progress during that interval
(Fig. 3).
Figure 2: The figure shows the agents (arrows) sent to gather information on the students’ documents. On the way back this data is compared with a document containing the correct answers. This information is used to fill the table at the teacher’s computer (bottom).
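As an illustration of queries 1 and 2, the following sketch counts the modified answer parts in a worksheet and compares them with a table of correct answers. The XML structure and tag names used here are hypothetical, chosen only for illustration; the actual system works on FreeStyler documents.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML representation of a multiplication worksheet.
WORKSHEET = """
<worksheet>
  <section name="1">
    <exercise id="1.1"><question>12 x 34</question><answer>408</answer></exercise>
    <exercise id="1.2"><question>25 x 11</question><answer></answer></exercise>
  </section>
  <section name="2">
    <exercise id="2.1"><question>-7 x 6</question><answer>-42</answer></exercise>
  </section>
</worksheet>
"""

# Table of correct answers kept on the teacher's side.
CORRECT = {"1.1": "408", "1.2": "275", "2.1": "-42"}

def progress_and_correctness(xml_text, correct):
    """Queries 1 and 2: list modified (non-empty) answers and right answers."""
    root = ET.fromstring(xml_text)
    answered, right = [], []
    for exercise in root.iter("exercise"):
        ex_id = exercise.get("id")
        answer = (exercise.findtext("answer") or "").strip()
        if answer:                       # the answer part was modified
            answered.append(ex_id)
            if answer == correct.get(ex_id):
                right.append(ex_id)
    return answered, right

answered, right = progress_and_correctness(WORKSHEET, CORRECT)
print(answered)  # exercises modified so far
print(right)     # exercises solved correctly
```

From these two lists, the aggregated presentations mentioned above (totals, percentages, per-section tables) follow directly.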
This simple list shows that we may consider two types of queries: those gathering basic information (like the first two queries above), and those aggregating results from basic queries (like the rest). The example describes a scenario for young children, while the system we developed for this work was tested in a scenario for high school or university students. This hints at the suitability of this methodology for a wide range of learning scenarios.
Figure 3: The figure shows the agents (arrows) sent in three different moments (T1, T2, and T3) to the same student in order to gather information about how the student is advancing in his work.
4. System Description and Architecture
In order to develop a system which allows a teacher to monitor the students’ work in the classroom
we need in the first place a software and hardware architecture enabling the communication
between students’ and teacher’s workplaces. This architecture should also allow sending agents
from teacher to students and capturing their findings with the needed information. Instead of
conceiving a new one, we base the present work on a previous architecture called CiCv2, which
stands for Computer-integrated Classroom version 2 (Baloian et al., 2008). Both teacher and
students use computers in face-to-face sessions in this framework, allowing them to interact at
various levels. The teacher can present teaching material, typically using a large interactive
display, distribute assignments, exchange individual or group messages and share documents
created on-the-fly or retrieved from an archive. Assignments can include constructive or creative
tasks on the part of the students. A central repository allows users to authenticate and to access
files as well as to interact with other users’ applications. Additionally, it serves as a directory for
currently active sessions in order for students to connect to the session managed by their teacher.
The central repository keeps a log of the interactions between the teacher’s application and each
student’s application.
Secondly, we need software implementing electronic worksheets, allowing teachers to prepare the material students have to work on. As we said, an important requirement for this work is that the worksheets used in this scenario should be easily mapped to DOM documents. The reason for this requirement is that the results obtained in this research should be easily transferable to any scenario using documents with this characteristic, since XML is a de facto standard for describing electronic documents.
In this case we again opted to use a product of our previous work called “FreeStyler” (Hoppe & Gassner, 2002). It implements a series of visual languages for modeling in a variety of subject
domains such as Petri Nets and UML diagrams for computer science or system dynamics for
physics, biology or economics (Fig. 5). On an abstract level, FreeStyler can be seen as a graph
editor, which allows the inclusion of various “palettes” defining a group of specific nodes and arcs
with particular functionalities in order to model a certain system. New palettes are implemented by
extending basic nodes and edges, programming the new necessary functionalities and providing
them with a convenient shape.
For this work, teacher and students used a modified version of the FreeStyler tool customized to each role, in order to access and work with a central repository. Through this repository, they
can access the documents, share them with the class, and send them to individual users or groups.
In a CiCv2 supported face-to-face lecture, the teacher establishes a session and the students sign
in. From that point on, the FreeStyler application customized for the teacher acts as a server for
that learning session, providing some control functionalities on the students’ activities such as
allowing or disallowing interactions among students and assigning activities. Within a learning
session, documents can also be exchanged and shared locally, without access to the central
repository. By sharing a document we mean working simultaneously on the same document,
propagating the changes to the participants as they occur. For this purpose, a shared page
manager was included in the FreeStyler customized for the teacher, in order to synchronize the
shared pages. Whenever a user shares a page within a session, it becomes available for all users
who joined the session. The shared page manager maintains a copy of the whole page, as well as a
log of individual changes as they occur. Figure 4 shows a simplified diagram of the CiCv2
architecture.
Figure 4: CiCv2 Basic architecture
The FreeStyler modeling tool can be used as a whiteboard application to present material and
solutions to proposed problems, as well as modeling in several learning domains. The learning
domains are enabled by the usage of plug-ins which define specific visual modeling languages
(Pinkwart, 2005) supporting one or more domains. FreeStyler manages content organized in pages and thus supports a notebook-style usage pattern as well as page-based presentations. These pages
can be added, copied and erased. The Students’ FreeStyler (a FreeStyler version customized for
students) allows them to interact with the repository by sending and receiving files. Within a
session, the student’s application can also interact with the teacher and other students by
exchanging messages in a chat, exchanging documents to be opened independently or sharing
documents. In the latter case, they can modify a common document in real-time. The documents
can be saved locally, saved in the central repository, or shared among participants in the classroom
session. The Teacher’s FreeStyler (a FreeStyler version customized for the teacher) has the same
functionalities as the student’s application. However, it has additional functionality allowing the
teacher to control some of the activities the students can perform, and to send assignments. A
fundamental addition to the teacher’s version is the querying system, which is at the core of the
work being reported in this paper. The querying system allows the teacher to monitor the students’
performance while working on assignments proposed by her using FreeStyler documents.
Figure 5: Screenshot of FreeStyler. On the right-hand side we see the palette; in this example, it corresponds to the System Dynamics plug-in, which allows the modeling and simulation of dynamical systems. At the center we see an already constructed model (graph).
5. A Support System for Query-based Assessment
As stated above, the core of the work presented in this paper is a plug-in for the FreeStyler modeling tool that allows the teacher to monitor the classroom situation without cluttering the interface with external modules. This plug-in allows the teacher to access relevant information during the sessions, receiving it from several sources. We call this plug-in the Querying System, since it is based on the visual composition of querying elements, as will be shown below. Like other plug-ins in FreeStyler, a model consists of a graph. Nodes represent atomic queries, which can be combined in a graph to form more elaborate queries. The inspiration for this graph comes from the “pipe” metaphor first introduced by the Unix operating system: the result of applying a program to a data set is a new data set, which becomes the input to the next program specified by the “pipe”.
The teacher can access meaningful information sources available in the CiCv2 scenario by using
the querying capability. A query is an object containing the specifications for gathering, composing
and presenting certain information which is currently distributed in various files across the system.
In order to process a query object and generate the answer, the system extracts information from
files and log information from the central repository, the logs of the shared pages manager, the
locally stored documents, and from the documents and logs of all student applications participating
in the session.
The queries to be used in a certain learning session are generally prepared in advance, along with the design of the learning activities and materials. They can be specific to the activities of that particular session, or they can be general-purpose queries useful in any session. Either way, the query definitions are readily available, which minimizes the teacher’s involvement in technical details during the classroom session. However, it is also possible for the teacher to adjust specific parameters of a query to achieve the desired results, as will be shown below.
When the teacher has selected a query object, she asks for its execution by pressing a button, and
the result will either appear beside the graphic representation of the query or generate some
changes to the currently active document, such as adding new pages with results. It is also possible
to program queries to be executed periodically or at a specific time, having access to updated
results without any further interaction.
6. Query implementation
A core set of basic query objects, which we call Basic Queries, was developed during the
implementation of our system. These contain the specifications for retrieving information which is
frequently needed during the monitoring of students’ work. These basic queries are the nodes
contained in the palette implemented by the query plug-in. They can be directly used as they are
by just dragging them from the palette to the working area or they can be combined to create new
composite queries. The query composition is as follows: the output of one query object is
connected as input to another query object by graphically drawing a directed arc between the
nodes corresponding to those queries. The system checks the correctness of the composition by
checking that the structure of the output data of the predecessor node matches the required input
data structure of the subsequent node.
As an example, it is possible to obtain the difference between two documents by using the FileQuery twice, obtaining each file and connecting both to a DiffQuery. In order to hide complexity, these three queries can be encapsulated into a ComplexQuery, so the end user sees
only one simple query which performs as expected. As a result, the teacher’s interface shows just
one complete query hiding all complexity and delivering a result when needed.
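A minimal sketch of this composition mechanism follows, using the FileQuery/DiffQuery/ComplexQuery example above. The base class, the method names, and the data structures are our own illustration under assumed semantics, not the actual plug-in API.

```python
# Hypothetical sketch of query composition with type-checked connections.
class Query:
    in_type = None      # structure required on the input; None = no input
    out_type = None     # structure produced on the output

    def __init__(self):
        self.inputs = []

    def connect(self, predecessor):
        """Draw a directed arc: the predecessor's output feeds this query."""
        if predecessor.out_type != self.in_type:
            raise TypeError("output/input data structures do not match")
        self.inputs.append(predecessor)

    def execute(self, context):
        raise NotImplementedError

class FileQuery(Query):
    out_type = "document"
    def __init__(self, name):
        super().__init__()
        self.name = name
    def execute(self, context):
        return context[self.name]        # fetch a document from the context

class DiffQuery(Query):
    in_type = "document"
    out_type = "document"
    def execute(self, context):
        a, b = (q.execute(context) for q in self.inputs)
        return {k: b[k] for k in b if a.get(k) != b[k]}   # changed entries

class ComplexQuery(Query):
    """Encapsulates a sub-graph so the teacher sees one simple query."""
    def __init__(self, last):
        super().__init__()
        self.last = last
        self.out_type = last.out_type
    def execute(self, context):
        return self.last.execute(context)

# Difference between a saved and a current version of a worksheet:
diff = DiffQuery()
diff.connect(FileQuery("saved"))
diff.connect(FileQuery("current"))
query = ComplexQuery(diff)
ctx = {"saved": {"ex1": "", "ex2": "42"}, "current": {"ex1": "7", "ex2": "42"}}
print(query.execute(ctx))  # only the entries that changed
```

The type check in `connect` plays the role of the system's correctness check on the arc between two query nodes.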
A context is defined for each query; it describes and gives access to the relevant documents the query needs in order to execute. When a query needs to process remotely located documents (e.g., documents
that are located on a student’s computer) the querying engine sends the corresponding agent to the
remote location. There, the context is set accordingly, so that the remotely executing agent has
access to the local resources and it sends the results back to the original location.
7. Internal data structure and Document Object Model
The modeling tool maintains an internal data structure storing the model. It has the ability to
translate that model to an XML Document Object Model (DOM). The DOM can be serialized as a
file. This translation works both ways, allowing restoration of the internal data structure from the
file.
A query can make use of previously stored documents in a normal setting, e.g. accessing the
central repository, and documents that are currently being edited. A document being edited may
have a corresponding version stored either locally or on the repository. However, that version
might be out of date and will not contain any changes made since it was saved. As a result, a query
will need to access both models, or translate one to the other.
We have chosen to use the DOM as the basis to execute the queries. This decision is based on three
arguments: the efficiency of translating from the internal model to DOM, the existence of
standards like XQuery (Boag, Chamberlin, Fernández, Florescu, Robie & Siméon, 2007) and the
maturity of the serialization/de-serialization process for XML DOM. Note that the information has to be serialized at the remote locations to be appropriately transferred to the teacher’s application.
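The two-way translation and the serialization round trip described above can be sketched as follows. The internal model shown here is a hypothetical simplification of a FreeStyler graph, not the real data structure.

```python
import xml.etree.ElementTree as ET

# Hypothetical internal model of a FreeStyler-like graph: nodes and edges.
model = {"nodes": [{"id": "n1", "type": "stock"}, {"id": "n2", "type": "flow"}],
         "edges": [{"src": "n1", "dst": "n2"}]}

def model_to_dom(model):
    """Translate the internal data structure to an XML DOM."""
    root = ET.Element("graph")
    for n in model["nodes"]:
        ET.SubElement(root, "node", id=n["id"], type=n["type"])
    for e in model["edges"]:
        ET.SubElement(root, "edge", src=e["src"], dst=e["dst"])
    return root

def dom_to_model(root):
    """Translate back: restore the internal data structure from the DOM."""
    return {"nodes": [{"id": n.get("id"), "type": n.get("type")}
                      for n in root.findall("node")],
            "edges": [{"src": e.get("src"), "dst": e.get("dst")}
                      for e in root.findall("edge")]}

# Serialization: at a remote location the DOM is turned into bytes for
# transfer to the teacher's application, then parsed back on arrival.
wire = ET.tostring(model_to_dom(model))
restored = dom_to_model(ET.fromstring(wire))
assert restored == model   # the translation works both ways
```

Since the queries operate on the DOM, this round trip is what allows a query result computed on a student's machine to reach the teacher's application intact.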
8. Basic Queries
The basic query objects are the simplest building blocks that allow arbitrarily complex queries to
be built and processed by combining them. Some queries do not have any input, and only generate
output, such as a constant query always returning a fixed result and a “current document query”
which always returns the current document as defined by the context where it is being executed.
Other queries are terminal queries and do not provide any output, such as “Save Query”, which simply saves its input into a file whose name is also fetched from its input, or “Object Creation Query”, which creates a visual object to be added as a new element in the modeling tool. The
other queries have inputs that are processed to create a single output, such as XQuery, which
executes a particular XQuery on its input, generating a single output.
Some basic queries are used to execute a particular query in a different context. As the teacher
initiates the queries, the current context would always be the teacher application. In order to
execute some query remotely on a student’s application or at the repository, the basic queries
RepositoryQuery and StudentQuery were defined. These are the objects implementing the query
agents, since both take some query, send it to a remote location and trigger its execution in that
remote context, receiving the result back at the original context. In the case of the StudentQuery,
the remote location can be several students, either a list of predefined students, or all the students
in the current session. This will trigger the execution of several agents, and the results will be structured so as to contain the result of the same query executed at each of the locations.
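The fan-out behavior of the StudentQuery can be sketched as follows. Function names and the per-student document structure are illustrative assumptions; the sketch only shows how one inner query is executed in every student's context and the results collected together.

```python
# The inner query counts answered exercises in one student's document; the
# StudentQuery-like fan-out runs it in each student's context (hypothetical
# structures, for illustration only).
def count_answered(document):
    """Inner query: how many answer parts are non-empty in this document."""
    return sum(1 for answer in document["answers"] if answer.strip())

def student_query(inner_query, student_contexts):
    """Execute the same query in each remote context, collect the results."""
    return {student: inner_query(ctx) for student, ctx in student_contexts.items()}

session = {
    "ana": {"answers": ["408", "", "-42"]},
    "ben": {"answers": ["", "", "275"]},
}
print(student_query(count_answered, session))  # one result per student
```

In the real system each entry of the result would come back over the network from the agent executing in that student's application.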
9. Execution of queries
Several queries can be combined by connecting the output of one to be the input of the next,
using the “pipe” metaphor mentioned above.
If we want to extract the information for the “differences among students’ progress” example,
then it is necessary to use a composite query containing a linked list of sub-queries. The first step
inside this composite query is to execute the query corresponding to “correct answers” in the
context of each student using a StudentQuery. The result is the information for each of the students
within the session in a DOM. This information is afterwards transformed by the next sub-query
into a table. A last query should take this table and show it in the application user interface.
The “last query” is automatically identified and triggered when the teacher executes the composite
query. This last query in turn triggers the previous query or queries, until the query which requests
the single data from each student (StudentQuery) is reached. At that point, a query in each one of
the student environments is triggered and the resulting information is returned to the
StudentQuery. From then on, the information is processed pipeline-wise in the reverse order.
The execution for the general case can be seen in Figure 6, where the control flow advances from left to right. At any point, the control flow may continue its execution remotely, as in the StudentQuery, sending the query and the remaining queries to a remote location. The query is rebuilt at the remote location, where it has access to the local context and continues its control flow. This process continues through the input queries, activating all of their respective input queries, until the leaf queries are reached; these have no external input, and they begin the data flow of results in the opposite direction.
Then, the information flows in the opposite direction (right to left in Figure 6), with results being processed at every step, until reaching the “last query”. If the query was executed remotely, the result is sent back over the network, finishing the remote execution and continuing with the local processing. The “last query” receives the final result; it is generally one of the so-called “terminal queries”, which present or save the result in a way useful for the teacher.
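This demand-driven execution can be sketched as follows. The node structure is a simplification of our own: triggering the last query pulls on its inputs recursively until the leaves are reached, and only then do results flow back, which is visible in the recorded trace.

```python
# Hypothetical demand-driven execution: triggering the last query recursively
# triggers its input queries; results flow back in the opposite direction.
class Node:
    def __init__(self, name, func, inputs=()):
        self.name, self.func, self.inputs = name, func, list(inputs)

    def execute(self, trace):
        trace.append("enter " + self.name)             # control flow, forward
        results = [q.execute(trace) for q in self.inputs]
        trace.append("result " + self.name)            # data flow, backward
        return self.func(*results)

# Leaf queries play the role of a StudentQuery delivering one student's score.
scores = {"ana": 7, "ben": 5}
leaves = [Node(s, (lambda v=v: v)) for s, v in scores.items()]
# Aggregation and presentation steps, connected pipeline-wise.
table = Node("table", lambda *vs: sorted(vs, reverse=True), leaves)
last = Node("show", lambda t: t, [table])

trace = []
result = last.execute(trace)
print(result)      # scores in descending order, as in query 4 above
print(trace[:3])   # control reaches the leaves before any data flows back
```

The trace begins with the last query and ends with its result, mirroring the left-to-right control flow and right-to-left data flow of Figure 6.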
Figure 6: Control (from left to right) and data flow (from right to left)
10. Trials and Observations
Our goal is to validate the proposed system in practice. For that purpose, we have developed a
series of three sessions involving students working while the teacher uses the proposed system in
order to perform the monitoring of the activities. We intend to show in a qualitative way that the
system is effective in providing useful information to the teacher in a timely manner.
We propose to validate or reject the following hypotheses after the sessions:
Hypothesis 1: The system provides information to the teacher which helps her to track
students’ pace during classroom activities.
Hypothesis 2: The system provides information to the teacher which helps her to assess
the correctness of specific activities.
Hypothesis 3: The system provides information which helps the teacher to identify which
students pursue one out of several possible solutions.
Hypothesis 4: The teacher can detect problems early, based on the complexity of the
models being developed by the students.
Hypothesis 5: The teacher can identify which students are working collaboratively within
the group.
We developed three sessions lasting 90 minutes each, in order to test the monitoring activities
stated above. Each experiment is situated in a specific context or scenario, where students are
asked to perform some activities and the teacher uses the querying system to monitor the session.
The group of students for all experiments is the same, consisting of 16 university undergraduate
and 2 graduate students, aged between 22 and 26 years old. All students were taking the course on
distributed computing, and none had previous knowledge of either the CiCv2 environment or the
particular presented problems. As we intended to use the tool in as real an environment as possible,
we used it in learning sessions with the same constraints a teacher would normally encounter. This
included very short introductions to both the CiCv2 environment and each modeling plug-in, no
more than 10 minutes in each experiment, which was enough to get the sessions started.
10.1. Session 1
In this session, the students were asked to use a collaboration framework for the Java
programming language, called Matchmaker (Tewissen, Baloian, Hoppe & Reimberg, 2000). After
an introduction, the students were handed a document detailing the activities, one activity on
each page, with some aspects of the activities being optional. The exercise on page 1 asked the
students to create a collaborative session on a server by using some programming methods
previously discussed in class. Optionally, the students could verify whether the session was
successfully created, and the teacher could see which students had completed the optional part
(“create + See” instead of “createOnly”). On page 2, the exercise asked the students to connect to
the server, fetch all existing sessions, and print them out. On page 3, the students had to connect
to a particular session; here, students often forget to first check whether that session exists.
On page 4, students were expected to make modifications in a session, and on page 5, they
were asked to fetch data from a session and print it out.
This session was set up to check the students’ state while they work on their tasks. The state
is available as a set of indicators by which the teacher may identify partial progress in specific sub-
tasks as well as total progress, for each individual student as well as for the entire group (Collazos,
Guerrero, Pino & Ochoa, 2002). For this purpose, information needs to be gathered from diverse
sources within the system and then aggregated into a specific output. Figure 7 presents a query
available to the teacher (a) and the output generated by that query (b).
In this scenario, the teacher can perform a query searching for a particular solution in order to
automatically identify students who have successfully solved a problem. This can also be applied
for identifying students making common errors or omissions.
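Such a “solution pattern” query can be sketched as follows. This is a minimal illustration only: the actual CiCv2 documents follow the DOM standard and are queried through the system’s own querying mechanism, while the element names, method-call names and document layout below are invented for the example.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified worksheet snapshots per student; in CiCv2 the real
# documents are DOM-structured, and these tag/attribute names are invented.
DOCS = {
    "pedro": "<worksheet><page n='1'><call name='createSession'/>"
             "<call name='getSessionList'/></page></worksheet>",
    "juan":  "<worksheet><page n='1'><call name='createSession'/></page></worksheet>",
}

def students_matching(docs, page, required_calls):
    """Return the students whose document contains, on the given page,
    all of the required method calls (a crude 'solution pattern' query)."""
    matches = []
    for student, xml in docs.items():
        root = ET.fromstring(xml)
        calls = {c.get("name")
                 for p in root.findall("page") if p.get("n") == str(page)
                 for c in p.findall("call")}
        if set(required_calls) <= calls:
            matches.append(student)
    return sorted(matches)

# Who completed the optional verification step ("create + See")?
print(students_matching(DOCS, 1, ["createSession", "getSessionList"]))  # ['pedro']
```

The same predicate, pointed at a known wrong pattern instead of a correct one, would flag students making a common error or omission.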
Figure 7: Determining students’ pace and specific solutions
10.2. Results of Session 1
Figure 7 shows the result for each exercise and each student in a matrix depicting the level of
progress. It starts with a “none” value, changing as the student completes subsequent programming
steps. The teacher can see that students “pedro” and “juan” only did the “join” part of the exercise
on page 3, without fetching the list first, leaving that activity incomplete. Similar partial results
exist for the exercises on pages 4 and 5.
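A progress matrix of this kind can be approximated by mapping each student’s completed steps onto coarse levels; the step names and level labels below are illustrative, not the actual CiCv2 vocabulary.

```python
# Hypothetical expected steps for the exercise on page 3 (names invented).
EXPECTED_PAGE3 = ["getSessionList", "joinSession"]

def progress_level(done, expected):
    """Map a student's completed steps to a coarse progress value."""
    if not any(step in done for step in expected):
        return "none"
    if all(step in done for step in expected):
        return "complete"
    return "partial"

def progress_matrix(students, expected):
    """One row per student: their progress level on a given exercise."""
    return {name: progress_level(done, expected) for name, done in students.items()}

students = {
    "pedro": {"joinSession"},                       # joined without fetching the list
    "juan":  {"joinSession"},
    "jorge": {"getSessionList", "joinSession"},
}
print(progress_matrix(students, EXPECTED_PAGE3))
# {'pedro': 'partial', 'juan': 'partial', 'jorge': 'complete'}
```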
The teacher can identify which students are currently working on the system, and which ones are
likely to need assistance, by looking at activity levels. She thus becomes aware of the pace of both
individual students and the whole group. Students progressing either slower or faster than expected
will be quickly noticed by the teacher. She can take a closer look, either by directly approaching
the student, or by using a new query to inspect the student’s work. When a student is advancing
faster than his peers, the teacher might want to provide additional challenges, ideas or
assignments in order to keep stimulating him. She may also show the good work to other members
of the class in order to discuss the solution and its alternatives. Conversely, when a student has
problems solving the proposed exercise, the teacher can take adequate measures to overcome
these specific problems.
In Figure 7(b), the teacher can see that, at various stages, students have fulfilled none, part, or all
of an exercise. The columns show each student’s state on the exercise of a specific page, allowing
the teacher to easily compare the students’ activities. For example, the student with username
“jorge” has not finished the exercises on pages 3 to 5, but he achieved more than anyone else on
the exercise on page 1; in fact, jorge did more than was asked in that exercise, while pedro
finished all exercises except the one on page 3.
Figure 8: Evolution of a single student’s progress over time
Figure 8 shows the evolution of a single student’s progress at different points in time. We can
clearly see how the results evolve with the student’s work. In this case, student “pedro” first
solved the exercise on page 1, but only partially. Afterwards, he worked on the exercise on
page 2, completing the first part and then the second one. The results are presented by the system
automatically, re-evaluating the query at the time interval set by the teacher (2
minutes in this case). The hints given by the teacher to the students were relevant, and they helped
to guide the class in a natural way. In particular, the teacher could identify early at least
five students who were not making any progress at all, despite the fact that they seemed to be very
busy working on the exercises. Only two of these students approached the teacher for help. Upon a
closer look, it was clear that a technical problem was preventing the CiCv2 plug-in from working
properly, so the students had no way to verify that their exercise worked. Replacing the laptops of
some of these students, and having the others work in groups at properly functioning computers,
solved the problem, not only for the two students who proactively asked for help, but for
all five of them. Based on this session, we can accept hypothesis 1, which states that the
information provided by the system helps the teacher to track the students’ work, and hypothesis 2,
which states that the provided information helps the teacher to assess the correctness of the
solutions provided by the students.
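The periodic re-evaluation used in this session can be sketched as a plain polling loop; `monitor`, its arguments and the toy query are illustrative names of ours, and the interval is set to zero here so the example runs instantly (the session used 2 minutes).

```python
import time

def monitor(query, get_snapshot, interval_s, rounds):
    """Re-run `query` on fresh document snapshots at a fixed interval,
    mirroring the teacher-set refresh period of the querying system."""
    results = []
    for i in range(rounds):
        results.append(query(get_snapshot()))
        if i < rounds - 1:
            time.sleep(interval_s)   # 120 s in the session; 0 for this sketch
    return results

# Toy example: successive snapshots of how many steps 'pedro' has completed.
snapshots = iter([1, 2, 3])
history = monitor(lambda n: "complete" if n >= 3 else "partial",
                  lambda: next(snapshots), interval_s=0, rounds=3)
print(history)  # ['partial', 'partial', 'complete']
```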
10.3. Session 2
Let us consider a new scenario. We intend to show how the querying system is capable of
identifying a student choosing one of several possible ways of solving a problem. The students
were asked to model two stochastic processes simulating lottery games, in order to determine
which one had the highest probability of winning. The students would have to create a rather
complex model if they followed the game descriptions literally. However, they could identify a
simplification that radically reduced the complexity of the model without affecting the results. The
teacher had access to a query that identified which of the paths each student seemed to have
chosen. With this information, the teacher could start discussions in which students evaluate their
peers’ solutions and learn several ways to solve the problem.
A second aspect considered in this scenario is determining whether the student groups are doing
real collaborative work or whether each student is advancing on his/her own. For this purpose, the
teacher has access to data on each group of students, identifying active and passive members. We
can see the query that provides the statistics (Figure 9a) and the results within one specific group
(Figure 9b).
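A group-activity statistic of this kind can be approximated by counting each member’s share of the logged actions; the log format and the cutoff value below are assumptions made for the sketch, not parameters of the actual system.

```python
from collections import Counter

def activity_shares(action_log):
    """Fraction of the group's actions contributed by each member."""
    counts = Counter(action_log)
    total = sum(counts.values())
    return {student: n / total for student, n in counts.items()}

def passive_members(action_log, threshold=0.15):
    """Members whose share of actions falls below `threshold`
    (an arbitrary cutoff chosen for this illustration)."""
    return sorted(s for s, share in activity_shares(action_log).items()
                  if share < threshold)

# Toy action log for one group: each entry is the acting student.
log = ["ana"] * 10 + ["luis"] * 8 + ["sofia"] * 2   # 20 actions in total
print(passive_members(log))  # ['sofia']  (2/20 = 0.10 < 0.15)
```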
Figure 9: Student activity within a group
10.4. Results of Session 2
The data gathered in our experiment did not show major changes in the distribution of activities
among the participants as the session progressed. Generally, it was possible to identify one or two
students in a group whose participation was slightly above the level of the other members.
Observing these figures, the teacher could take a closer look at the groups showing a large gap in
the members’ participation: perhaps those students were collaborating through direct face-to-face
communication, or they were blocked by a difficult problem.
It was also possible to use queries to discover common student errors, like using the wrong
element for modeling a particular stochastic scenario. As expected, several users made a quite
common mistake, leading to wrong results. When the teacher detected this kind of error, she
determined the best way to handle the situation: for example, asking students with differing
results to start a discussion, and either letting them find out why the results differed and which
result was correct, or taking a more direct approach and telling them the correct solution. The
results obtained in this session support the acceptance of hypothesis 3, which states that the system
allows the teacher to identify which students pursue one out of several possible solutions, and
hypothesis 5, which states that the system delivers information about which students are working
collaboratively.
10.5. Session 3
The aim of session 3 was to find out whether it is possible to assess how elaborate the answer the
students are developing is. This may serve as an indicator of the correctness of the solution the
students are working on: for example, when the model a certain student is developing is much
more complex than the one they are supposed to construct as a result of the given task, the
student is, with high probability, working on a wrong answer. For complex models, it is
increasingly difficult to characterize a “right” or “wrong” solution, and the teacher has to use
generic information to choose where to look for problems or right answers. The queries used in
this scenario are generally applicable to any situation.
As we see in Figure 10, the teacher sent a query to retrieve information about how many elements
(nodes, arcs, strokes, etc.) each student’s model currently has. It is also possible to separate the
numbers of nodes and arcs according to their type, which may provide even more information
about the graph the student is developing.
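A complexity query of this kind can be sketched by counting the elements of each student’s model document; the XML tags and type names below are invented stand-ins for the actual FreeStyler format.

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Hypothetical model fragment; tag and type names are illustrative only.
MODEL = """<model>
  <node type="place"/><node type="place"/><node type="transition"/>
  <arc/><arc/><arc/><stroke/>
</model>"""

def complexity(xml):
    """Count model elements overall and nodes per type, in the spirit of
    the complexity query of Figure 10."""
    root = ET.fromstring(xml)
    by_tag = Counter(child.tag for child in root)
    by_node_type = Counter(n.get("type") for n in root.findall("node"))
    return by_tag, by_node_type

tags, node_types = complexity(MODEL)
print(dict(tags))        # {'node': 3, 'arc': 3, 'stroke': 1}
print(dict(node_types))  # {'place': 2, 'transition': 1}
```

Comparing these counts against the teacher’s reference solution is what flags a model as suspiciously over-complex.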
Figure 10: Determining model complexity
10.6. Results of Session 3
During the session, it was possible to identify cases in which students were working on models
that appeared extremely complex compared to the model solution available to the teacher. In all
these cases, the students used over twice as many nodes and edges as expected. A close look
revealed they were creating a new model on the same page as the first one. Other students who
decided to start a new model because the first one was unsatisfactory instead created the new
model on another page, or deleted the previous model before starting over. These three approaches
produced different results. A teacher needs to be able to interpret these results, verify that the real
cause effectively matches the possible interpretations (e.g. by approaching a student or using a
query to view the student’s model), and possibly take remedial actions. We found the information
provided to the teacher was helpful, thus accepting hypothesis 4.
It is not always necessary for each teacher to make her own interpretation of the possible causes
behind the results of a query. The courseware should include the reasoning and possible
interpretations of the results. A teacher should expect the courseware to include the queries that
are specific to a given classroom situation or activity. Furthermore, the courseware should include
suggestions on how to react based on the results. For example, in this case it should provide a
range of values, and relations among values, that have been found to be typical in settings where
no problems arise. Additionally, the courseware should indicate values and relations among values
found in situations where certain problems occurred, such as students not knowing how to use the
modeling tools, lacking familiarity with the theory behind the model, or having already solved the
proposed problems. Using these hints, a teacher may interpret the results accordingly, verify or
disambiguate the interpretation, and decide what action to take.
In the examples shown in this chapter, the teacher did not need to change the queries beyond
modifying at most one parameter of a specific query. Thus, we cannot yet evaluate the usability of
the querying system at a deeper level.
11. Conclusions

The experiments described in the previous chapter show it is in fact possible to automatically
extract valuable information from the worksheets the students are using. This information helps to
assess their work and give meaningful feedback at the right moment at least in three different
ways:
Tracking the students in order to monitor their progress on the work, as seen in session 1. This
gives the teacher the chance to assist students who might be working at a slower pace than the
rest of the class, or to identify advanced students in order to give them positive feedback and/or
provide them with additional problems to solve.
Identifying students who might have reached a correct solution in order to show it to the rest
of the class, whether the solution is a classic one or an unexpected one, as in session 2. This
procedure can also be useful to identify cases where students are developing solutions that are
too complex compared with the “classical” one the teacher might have in mind. In these cases,
the teacher can assist the students, guiding them to find simpler solutions.
Monitoring the students’ level of activity during individual or collaborative learning sessions,
as explained in session 3 (Figure 10). A low level of activity can indicate that the students are
not well prepared to solve the proposed task, or that the task itself is ill designed. It can also
indicate that the switching between different learning activities (e.g. from individual problem
solving to collaborative work) might not be well designed (Baloian, Pino & Hoppe, 2008).
The last point of the previous list indicates this tool can also be used to validate or discard a set of
planned learning activities as “best practices”. In fact, a teacher can test whether a certain set of
learning activities the students should follow in a predetermined order can be considered as a “best
practice” candidate by monitoring the students’ work and checking if they advance as expected.
It may be noticed that the approach does not require extensive training for the teachers using it.
Adequate training would include motivation, a description of the system, an explanation of each
prepared query with some examples, and practical exercises; this would take around five to six
hours. Teachers do not need special previous knowledge of computer technology beyond the basic
skills required to operate software such as typical office applications. Teachers are not expected to
author complicated, powerful queries themselves; rather, they will use already prepared ones,
reused from previous lectures or newly constructed with the help of an expert. The development
of query libraries, to be shared by a teacher community, would also be welcome.
A very important feature of this approach is its generality. In fact, the developed query mechanism
can be applied to various computer-supported learning environments. The applicable environments
must contain documents, worked on by students, that are structured following the DOM standard
and have a corresponding XML representation. This requirement comes from the fact that the
query system only compares XML documents and does not use their semantics. Therefore, this
tool can be applied to monitor students’ work in any other discipline, not just the one mentioned
here.
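As a minimal illustration of such a semantics-free comparison, the sketch below flattens two XML snapshots into structural signatures and reports whether they differ between polling rounds; the helper names and the decision to ignore text content are simplifications of ours, not the actual CiCv2 algorithm.

```python
import xml.etree.ElementTree as ET

def signature(elem, path=""):
    """Flatten an XML tree into a list of (path, attributes) entries;
    purely structural, with no knowledge of the document's semantics."""
    here = path + "/" + elem.tag
    entries = [(here, tuple(sorted(elem.attrib.items())))]
    for child in elem:
        entries.extend(signature(child, here))
    return entries

def changed(old_xml, new_xml):
    """True if the two snapshots differ structurally (elements added,
    removed, or re-attributed between snapshots)."""
    return (sorted(signature(ET.fromstring(old_xml)))
            != sorted(signature(ET.fromstring(new_xml))))

a = "<doc><answer q='1'>A</answer></doc>"
b = "<doc><answer q='1'>A</answer><answer q='2'>C</answer></doc>"
print(changed(a, a))  # False
print(changed(a, b))  # True
```

Because the comparison never interprets tag names, the same mechanism works for worksheets from any discipline, as argued above.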
We used a modeling tool as an example to test our approach, and showed that automatic
assessment by querying the documents the students are working on is very powerful. This is
indeed a complex scenario; when more highly structured tasks are given to the students, the
querying becomes much easier in many cases. For example, the worksheet might consist of
reading a text and answering multiple-choice questions about its content, or of a list of
mathematical exercises with unique answers, as in the case of basic arithmetic operations.
Finally, we would like to point out that this approach applies not only to the monitoring of
documents being worked on by students; it can also be applied to other scenarios. For example, it
can be used to flexibly track the evolution of the documents in a workflow system in order to find
out whether certain processes are advancing at the required pace.
We plan to use and test this system for monitoring students’ work with other types of documents
than those generated by FreeStyler. We have not yet studied the usability of the system; the
experiments done so far were designed to test the hypotheses about the ability of the system to
deliver relevant information for monitoring students’ work. Testing the system with more teachers
will also allow us to explore other, perhaps more “usable” ways to present the information to
teachers, by filtering and/or aggregating query results and displaying them in various ways.
The approach presented here uses computer technology to support human assessment of students’
work in classroom learning scenarios in order to give immediate feedback. This should improve
the quality of the lecturing as seen from the students’ point of view. However, this is not the only
way to provide feedback; indeed, it has been suggested that delayed feedback might be better in
some cases, letting the students themselves reflect on their learning process (Hattie & Timperley,
2007).
Computer technology has been systematically used inside the classroom for some time
(Abrahamson, Hartline, Fabert, Robson & Knapp, 1990), (Dufresne, Gerace, Leonard, Mestre &
Wenk, 1996), (Roschelle, Penuel & Abrahamson, 2004). Nevertheless, its potential to assess in-
classroom students’ work has not been explored as much as in the case of e-learning. This work is
a contribution in that direction.
References

Abrahamson, A.L., Hartline, F.F., Fabert, M.G., Robson, M.J. and Knapp, R.J. (1990) “An electronic classroom enabling self-paced interactive learning.” European Patent Number 90 304 587.0.
Baloian, N., Pino, J.A., Hoppe, H.U. (2008) “Dealing with the Students' Attention Problem in Computer Supported Face-to-Face Lecturing” Educational Technology & Society. Vol. 11, No. 2 192-205, 2008
Bangert-Drowns, R., Kulik, C., Kulik, J. and Morgan, M. (1991) “The Instructional Effect of Feedback in Test-Like Events”. Review of Educational Research, Vol. 61, No. 2, 213-238 (1991)
Ben-Naim, D., Marcus, N. and Bain, M. (2008) “Visualization and Analysis of Student Interactions in an Adaptive Exploratory Learning Environment”. In: Proceedings of the 1st Int. Workshop in Intelligent Support for Exploratory Environments on EC-TEL’08, Maastricht, The Netherlands.
Retrieved on February 2011 from http://ftp.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-381/
Black, P. and Wiliam, D. (2004) “The formative purpose: Assessment must first promote learning,” Yearbook of the National Society for the Study of Education, vol.103, no.2, pp.20–50.
Boag, S., Chamberlin, D., Fernández, M.F., Florescu, D., Robie, J. and Siméon, J. (2007) “XQuery 1.0: An XML Query Language”. Retrieved from http://www.w3.org/TR/xquery/, 23 Jan. 2007. Last accessed: 31st March 2009.
Brophy, J.E. and Good, T.L. (1985) “Teacher Behavior and Student Achievement”. Handbook of Research on Teaching (Third Ed.), Macmillan Publishing Co.
Ciravegna, F., Dingli, A., Petrelli, D. and Wilks, Y. (2002) “Timely and non-intrusive active document annotation via adaptive information extraction,” Workshop on Semantic Authoring, Annotation and Knowledge Management, 2002.
Collazos, C., Guerrero, L., Pino, J., and Ochoa, S. (2002) “Evaluating collaborative learning processes”. Lecture Notes in Computer Science, vol.2440, pp.203–221.
Cotton, K. (1988) “Monitoring student learning in the classroom,” School Improvement Research Series Close-Up #4, 1988. Northwest Regional Educational Laboratory, U.S. Department of Education.
Crooks, T. (1988) “The impact of classroom evaluation practices on students,” Review of Educational research, vol.58, no.4, pp.438–481.
Dufresne, R.J., Gerace, W.J. Leonard, W.J., Mestre, J.P. and Wenk, L. (1996) “Classtalk: A classroom communication system for active learning,” Journal of Computing in Higher Education, vol.7, no.2, pp.3–47.
Grüntgens, W. and Melzer, R. (2004) Diagnostische Kompetenz von Lehrkräften im Primar- und Sekundarbereich. Retrieved on September 2008 from http://pz.bildung-rp.de/pn/pb1_04/diagnostischekompetenz.html
Gutierrez-Santos, S. and Mavrikis, M. (2008) Intelligent Support for Exploratory Environments: Where are We and Where Do We Want to Go Now?. In: Proceedings of the 1st Int. Workshop in Intelligent Support for Exploratory Environments on EC-TEL’08, Maastricht, The Netherlands. Retrieved on February 2011 from http://ftp.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-381/
Hasebrook, J.P. and Maurer, H.A. (2004) “Learning Support Systems for Organizational Learning”, World Scientific Publishing Company.
Hattie, J., and Timperley, H. (2007). “The power of feedback”. Review of Educational Research, 77, 81–112.
Hoppe, H. U. and Gassner, K. (2002). Integrating Collaborative Concept Mapping Tools with Group Memory and Retrieval Functions. In G. Stahl (Ed.), Computer support for collaborative learning: foundations for a CSCL community. Proceedings of CSCL 2002 (pp. 716- 725). Hills-dale, New Jersey, USA: Lawrence Erlbaum Associates, Inc.
Lei, X., Pahl, C. and Donnellan, D. (2003) “An evaluation technique for content interaction in web-based teaching and learning environments,” Proc. of the 3rd IEEE Intl. Conf. on Advanced Learning Technologies (ICALT’03), pp.294–295.
Mazza, R. and Dimitrova, V. (2004) “Visualising student tracking data to support instructors in web-based distance education,” Proceedings of the 13th Intl. World Wide Web Conference, USA, pp.154–161, ACM.
Mavrikis, M., Gutierrez-Santos, S., Pearce-Lazard, D., Poulovassilis, A. and Magoulas, G. (2010) “Learner modelling in microworlds: conceptual model and architecture in MiGen”. Technical Report BBKCS-10-04, Birkbeck College, University of London. Available at http://www.dcs.bbk.ac.uk/research/techreps/2010/.
McTighe, J. and O’Connor, K. (2005) “Seven practices for effective learning,” Educational Leadership, vol.63, no.3, pp.10–17.
Pearce-Lazard, D., Poulovassilis, A. and Geraniu, E. (2010) “The Design of Teacher Assistance Tools in an Exploratory Learning Environment for Mathematics Generalisation”. Lecture Notes in Computer Science, Volume 6383/2010, 260-275.
Pinkwart, N. (2005) “Collaborative Modeling in Graph Based Environments”, Ph.D. thesis, Universität Duisburg-Essen, Germany.
Popham, W.J. (2007) “Classroom Assessment: What Teachers Need to Know”, Allyn & Bacon, MA, USA.
Rahkila, M. and Karjalainen, M. (1999) “Evaluation of learning in computer based education using log systems,” Proc. of 29th ASEE/IEEE Frontiers in Education Conf., San Juan, Puerto Rico, pp.16–21.
Roschelle, J., Penuel, W.R. and Abrahamson, L. (2004) “The networked classroom,” Educational Leadership, vol.61, no.5, pp.50–54.
Sheard, J., Ceddia, J., Hurst and Tuovinen, J. (2003) “Inferring student learning behaviour from website interactions: A usage analysis,” Education and Information Technologies, vol.8, no.3, pp.245–266.
Scheuer, O. (2008) “Aufbereitung und Analyse von Benutzeraktionen in intelligenten Lernsystemen,” Master’s thesis, Universität des Saarlandes, Germany (in German).
Tewissen, F., Baloian, N., Hoppe, U. and Reimberg, E. (2000): “MatchMaker: Synchronizing Objects in Replicated Software-Architectures”. Proceedings of the 6th Collaboration Researchers’ International Workshop on Groupware (CRIWG), Madeira, Portugal. IEEE Computer Society Press. pp. 60-67.
Verdejo, M.F., Barros, B. , Read, T. and Rodriguez-Artacho, M. (2002) “A system for the specification and development of an environment for distributed cscl scenarios,”. Lecture Notes in Computer Science 2363, pp. 139–148.
Zaïane O. R. and Luo J., (2001). “Towards evaluating learners' behaviour in a web-based distance learning environment”. Proceedings of IEEE International Conference on Advanced Learning Technologies. Madison, USA, pp. 357-360.
Zhang, H., Almeroth, K., Knight, A. , Bulger, M. and Mayer, R. (2007) “Moodog: Tracking students’ online learning activities,” World Conference on Educational Multimedia, Hypermedia & Telecommunications (ED MEDIA), Vancouver, Canada, June 2007, Retrieved online on October 30, 2008 from http://imj.gatech.edu/papers/EDMEDIA-ZHANG-07.pdf.gz