Resilient Course and Instructor Scheduling
in the Mathematics Department
at the United States Naval Academy
Stephen J. Ward1, Joseph Foraker2, and Nelson A. Uhan2
1 Naval Postgraduate School, Monterey, CA, USA. [email protected]
2 Mathematics Department, United States Naval Academy, Annapolis, MD, USA. {foraker,uhan}@usna.edu
September 2017
Abstract
In this work, we study the problem of scheduling courses and instructors in the Mathematics Department at the United States Naval Academy (USNA) in a resilient manner. Every semester, the department needs to schedule around 70 instructors and 150–180 course sections into 30 class periods and 30 rooms. We formulate a stochastic integer linear program that schedules these courses, instructors, and rooms. In addition to maximizing instructor preferences and room stability, this stochastic integer linear program minimizes the expected number of changes required in the schedule if a disruption were to occur, given a subjective probability distribution over a finite set of possible disruption scenarios. We run our model on a number of instances derived from actual data from the past three years, and investigate the effect of emphasizing different parts of the objective function on the running time and resulting schedules.
1 Introduction
Colleges and universities continually face the problem of constructing a schedule for courses,
instructors, and students that respects various constraints and objectives such as room availability,
curriculum conflicts, and the preferences of students and faculty. Such university timetabling
problems have been widely studied for the past several decades, beginning as early as 1969 (Thornley
1969). Due to the size of many academic departments and the number of potentially conflicting
objectives and constraints that must be considered, university timetabling is a task ideally suited
for operations research techniques.
Creating schedules for courses and instructors at the United States Naval Academy (USNA), the
undergraduate college of the United States Navy, has some interesting challenges. One such challenge
is the uncertainty of available manpower: an instructor may or may not be able to teach in the
upcoming semester. This happens often with USNA’s military officer instructors, whose start and
end dates at USNA are sometimes uncertain for a variety of reasons, such as extended deployments
and sudden reassignments (e.g., individual augmentation). The availability of instructors, both
civilian and military, is also affected by events such as long-term illnesses and family crises.
At USNA, the course and instructor schedule for the next semester is published near the end
of the previous semester. Students (i.e., midshipmen) register for their courses around the same
time. Unfortunately, disruptions to the published schedule, such as the sudden loss of an instructor,
can occur between registration and the start of the next semester. However, in most cases, a
course cannot simply be canceled if the instructor is no longer available to teach. Through an
act of Congress, the academic program at USNA is 47 months (8 semesters) of study, and no
more (United States Naval Academy 2016). This fixed length program requires USNA to put the
highest priority on ensuring students get their required classes. This involves, among many things,
adjusting the course and instructor schedule. When a disruption occurs, the schedule must change
to guarantee that students can take the courses they need to meet their graduation requirements
on time. Generally speaking, course offerings are handled at the department level. This means
that even a minor disruption can cause widespread changes to an existing schedule, creating a
trickle-down effect that requires significant effort across multiple departments to address. As a
result, having a timetable that is resilient – one that requires a minimum number of changes in the
face of disruption – is an important consideration.
In this work, we study the problem of scheduling the courses and instructors in the Mathematics
Department at USNA in a resilient manner. Every semester, the department needs to schedule
around 70 instructors and 150-180 course sections into 30 class periods and 30 rooms. We formulate
a stochastic integer linear program that schedules these courses, instructors, and rooms. In addition
to maximizing instructor preferences and room stability, this stochastic integer linear program
minimizes the expected number of changes required in the schedule if a disruption were to occur,
given a subjective probability distribution over a finite set of possible disruption scenarios. We
run our model on a number of instances derived from actual data from the past three years, and
investigate the effect of emphasizing different parts of the objective function on the running time
and resulting schedules.
2 Literature Review
2.1 Problem Applicability
Traditionally, university departments – including the Mathematics Department at USNA – scheduled
their courses and instructors by hand over the course of several hours or days. Sometimes a schedule
from previous years could be reused with minor modifications; however, changes happening from
year to year often required a complete reworking of the schedule. With schedules created by hand, it
was also difficult to justify the solution to those instructors who did not receive preferable schedules.
Saltzman (2009) asserts that integer programming significantly reduces the time spent
constructing a good, feasible schedule that is on par with, or even preferred to, a
schedule created by hand.
While it remains difficult to find a provably optimal solution to many university timetabling
problems, a near-optimal solution can usually be found in a matter of hours (Burke et al. 2008).
The goal of these relatively quick, near-optimal solutions is to provide decision makers with
several feasible solutions that the stakeholders involved can deem good (Benli
and Botsalı 2004). The scheduling process often begins well in advance due to university planning
requirements (Waterer 1995). Disruptions can occur between the time a schedule is published and
the beginning of a semester, and so it would be ideal to have a first schedule that is resistant to
potential changes.
2.2 Basic Structure
Most university timetabling models include the same basic elements. Most objective functions
center on satisfying the preferences of individuals as much as possible and often
contain soft constraints for various desirable but not vital schedule properties. The objective
of the model described in this paper incorporates these ideas, as well as others that create a resilient
course and instructor schedule.
A typical university timetabling model includes constraints that ensure fundamental restrictions:
for example, at any given time, only one section of a course can be assigned to a particular room.
More complicated constraints commonly used include those that ensure instructors teach classes in
consecutive or non-consecutive time slots, depending on the structure and rules of the university
(Martin 2004). These “consecutivity” constraints are also implemented in the model detailed in this
paper.
The idea of “coherence” – setting a maximum number of days on which instructors teach a course
at a given university – is also a fairly common set of constraints within these timetabling models
(Kumar 2014). This is an important set of constraints in our model in order to give instructors time
to work on their research and other administrative tasks.
Another commonality between our formulation and other university timetabling models is
the implementation of “curriculum compactness” which strives to constrain lectures of the same
curriculum to be taught in consecutive meeting times (Cacchiani et al. 2013). Our model takes a
slightly different interpretation of this where an instructor’s lectures are constrained to be taught in
consecutive meeting times (Bettinelli et al. 2015). Another difficult, but common and beneficial set
of constraints consistently implemented in more recent formulations is referred to as “room stability”
(Lach and Lubbecke 2008). In our context, room stability translates to the idea that instructors
desire to teach in the same room throughout their day. As in our model, these room stability
constraints are typically soft constraints that can be weighted appropriately in the objective function based
on how much instructors at the institution value this idea (Lach and Lubbecke 2012).
2.3 Problem Difficulty
As previously mentioned, the topic of university timetabling has been widely studied at various
universities. It continues to be studied because of both its difficulty and its applicability.
One of the main reasons university timetabling is difficult is the formulation of constraints that
satisfy the specific rules of the university (Bakır and Aksop 2008). Perhaps the most difficult set
of constraints common across various models is the idea of consecutive periods (Daskalaki et al.
2004). This set of constraints requires multi-period courses to be taught in consecutive periods
and has been known to make the problem NP-hard (Daskalaki et al. 2004). Furthermore, the
implementation of consecutive meeting times for instructors who teach multiple sections of the same
course drastically increases the size of the model and the amount of time it takes for solvers to find
solutions (Daskalaki and Birbas 2005).
Due to the complexity and difficulty of this timetabling problem, many researchers have turned
to using heuristics instead of integer programming to find acceptable solutions (Sampson et al. 1995).
Sørensen et al. (2013) examined an integer linear program for a high school timetabling problem
with over 100 datasets and found that there existed large optimality gaps even after two hours of
running the solver. In fact, it is consistently observed that integer programs reach solutions much
more slowly than heuristics for this type of scheduling problem (Sørensen and Stidsen 2014).
2.4 Disruptions and Scheduling Persistence
Our model follows from a similar idea in Phillips et al. (2014), where the objective seeks to minimize
the number of changes required to an existing schedule as a result of some disruption to the data. The
difference is that we focus on how to overcome the inevitable changes that occur while attempting to
schedule courses and instructors, while Phillips et al. (2014) purposely perturbs an existing solution
to create infeasibility in the hopes of finding a “better” solution. In Phillips et al. (2016), the authors
briefly allude to the idea of perturbing the solution to discover the changes needed to an existing
schedule for disruptions that could occur to the original data set. The disruptions that Phillips et al.
(2016) study are room losses. On the other hand, the most common disruption to happen at USNA
within the Mathematics Department is the sudden loss of an instructor who was scheduled to teach
multiple courses in the upcoming semester. While the idea of minimizing the number of changes
required to an existing solution is present in both Phillips et al. (2014) and Phillips et al. (2016),
they do not attempt to take preventive measures in forming the original schedule to minimize these
changes.
The closest work we were able to find describing the ideas unique to this paper is the work on
persistence in Brown et al. (1997). A schedule is called persistent if it stays the same across
potential scenarios. Brown et al. (1997) detail how they integrated the idea of persistence into
various case studies involving scheduling. These models would seek to minimize the number of
differences between scenarios with small input changes between them. This is similar to the goal
of our model which seeks to minimize the number of potential differences from a baseline scenario
to each disruption scenario. The major difference between the models in Brown et al. (1997) and
our formulation is that the schedule constructed in our model is based upon minimizing potential
changes to a baseline schedule given the likelihood each disruption occurs, whereas Brown et al.
(1997) seeks to minimize disruptions between scenarios and does not necessarily focus on a baseline
schedule as the center of the model.
3 Problem description and mathematical programming formulation
In this section, we describe the data, decision variables, constraints, and objective function of our
stochastic integer program that finds a resilient schedule for the courses and instructors in the
Mathematics Department at USNA.
3.1 Data
At USNA, classes are held 5 days per week, Monday through Friday, abbreviated here as M, T, W,
R, F. Each day, there are 6 periods – 4 periods in the morning, 2 periods in the afternoon – for a
total of 30 periods per week:
M1, …, M6, T1, …, T6, W1, …, W6, R1, …, R6, F1, …, F6.
A meeting time is a set of periods. For example, the meeting time MWF1 consists of periods M1,
W1, and F1.
A meeting time cannot contain both morning and afternoon periods. We denote the set of
periods and possible meeting times as
P = set of periods,
M = set of meeting times,
P_m = set of periods in meeting time m, for m ∈ M.
For example, P_MWF1 = {M1, W1, F1}. Furthermore, we denote the set of morning periods and the
set of afternoon periods as

P_AM = {M1, M2, M3, M4, T1, T2, T3, T4, …, F1, F2, F3, F4},
P_PM = {M5, M6, T5, T6, …, F5, F6},
and the set of morning meeting times and the set of afternoon meeting times as

M_AM = {m ∈ M : P_m ⊆ P_AM},
M_PM = {m ∈ M : P_m ⊆ P_PM}.
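To make this notation concrete, here is a minimal Python sketch of the period sets P, P_AM, and P_PM and the membership test behind M_AM; the encoding of periods as (day, index) pairs and all names are our own, not from the paper.

```python
# Our own encoding of the paper's period sets, for illustration only.
DAYS = ["M", "T", "W", "R", "F"]

def build_periods():
    """All 30 periods: 6 per day, indices 1-4 morning, 5-6 afternoon."""
    return [(d, k) for d in DAYS for k in range(1, 7)]

P = build_periods()
P_AM = [(d, k) for (d, k) in P if k <= 4]   # 20 morning periods
P_PM = [(d, k) for (d, k) in P if k >= 5]   # 10 afternoon periods

# A meeting time is a set of periods, e.g. MWF1 = {M1, W1, F1}.
MWF1 = {("M", 1), ("W", 1), ("F", 1)}

def is_am(meeting_time):
    """A meeting time lies in M_AM iff all of its periods are morning periods."""
    return meeting_time <= set(P_AM)
```

In this representation, M_AM and M_PM are simply the meeting times for which `is_am` (or the analogous afternoon test) returns true.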
There is a set S of scenarios that consists of a baseline scenario s0 and several disruption
scenarios. Each scenario represents one possible realization of the offered courses, active instructors,
and available rooms for the next semester. The baseline scenario represents the planned courses,
instructors, and rooms; students will register based on a schedule feasible for the baseline scenario.
The disruption scenarios represent contingency plans that adjust the baseline scenario plan in
the face of a disruption, such as a loss of an instructor or room. Each scenario s ∈ S has an
associated probability qs of being realized, which in practice, would be subjectively determined by
the department chair.
Due to the small class sizes at USNA, many courses are taught in multiple sections. Every
semester, the department chair needs to assign each of these sections to an instructor and a meeting
time. We define
I_s = set of instructors, for s ∈ S,
C_s = set of courses, for s ∈ S.
Based on instructor qualifications and the needs of the department, the chair decides how many
sections of each course that each instructor must teach in each scenario:
C_{s,i} = set of courses assigned to instructor i in scenario s, for s ∈ S, i ∈ I_s,
t_{s,i,c} = number of sections of course c assigned to instructor i in scenario s, for s ∈ S, i ∈ I_s, c ∈ C_{s,i},
t_{s,i} = Σ_{c ∈ C_{s,i}} t_{s,i,c} = total number of sections assigned to instructor i in scenario s, for s ∈ S, i ∈ I_s.
Instructors often have preferences on when they teach, frequently due to non-teaching duties such
as service obligations, extra instruction sessions, and child care. We capture these preferences
numerically as
p_{s,i,m} = instructor i's preference for meeting time m in scenario s, for s ∈ S, i ∈ I_s, m ∈ M,

where a higher value of p_{s,i,m} indicates that instructor i has a greater preference for teaching during
meeting time m in scenario s.
In addition, each section needs to be assigned a room for each period in its assigned meeting
time. We define
R_s = set of rooms available in scenario s, for s ∈ S.
Ideally, each section would be assigned the same room for all periods in its meeting time. Due to
the limited number of rooms available to the department, however, this is often not possible. On
the other hand, sometimes it is necessary to assign the same room to multiple periods: for example,
the meeting time MF1R12 consists of a double-period class during periods R1 and R2, and so the
same room must be assigned to those two periods. To capture these requirements, we partition the
set of periods in each meeting time into period groups:
P*_m = set of period groups for meeting time m, for m ∈ M,

where P*_m is a partition of P_m, and each period group G ∈ P*_m must be assigned to the same room.
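As a sketch of the period-group idea, the following Python snippet encodes one plausible partition P*_m for the meeting time MF1R12 (with R1 and R2 forming the double-period group, as described above) and checks the partition property. The representation and names are our own.

```python
# Hypothetical encoding of period groups for MF1R12: the class meets M1, F1,
# and the double period R1-R2, so R1 and R2 must share a room (one group).
MF1R12 = {"M1", "F1", "R1", "R2"}
period_groups = [{"M1"}, {"F1"}, {"R1", "R2"}]

def is_partition(groups, periods):
    """True iff groups are nonempty, pairwise disjoint, and cover all periods."""
    seen = set()
    for g in groups:
        if not g or g & seen:
            return False
        seen |= g
    return seen == periods
```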
Using preregistration data, the chair determines how many sections of each course are needed
and the student capacity of each course. In addition, each course can only be taught using certain
meeting times, based on the structure (lectures vs. labs) and the number of credits of the course.
We denote these as:
n_{s,c} = number of sections needed for course c in scenario s, for s ∈ S, c ∈ C_s,
e_{s,c} = capacity of one section of course c in scenario s, for s ∈ S, c ∈ C_s,
M_{s,c} = set of meeting times allowed for course c in scenario s, for s ∈ S, c ∈ C_s.
To ensure that senior students can take the courses they need to graduate on time, the chair also
uses the preregistration data to determine which courses must be conflict-free, or in other words,
courses that must have sections offered during non-overlapping periods:
K_s ⊆ C_s × C_s = set of conflict-free course pairs in scenario s, for s ∈ S.
Each room has a capacity, and therefore the sections of each course can only be assigned to
certain rooms. In addition, some rooms are not available in certain periods or scenarios. We denote
these quantities as:
f_r = capacity of room r, for r ∈ R,
R_{s,c} ⊆ {r ∈ R_s : e_{s,c} ≤ f_r} = set of rooms allowable for course c in scenario s, for s ∈ S, c ∈ C_s,
P_{s,r} = set of periods during which room r is not available in scenario s, for s ∈ S, r ∈ R_s.
Most instructors want to teach consecutive periods whenever possible. To capture this, we
introduce the notion of distance between meeting times, which roughly measures how many periods
apart two meeting times are. For example, the distance between MWF3 and MWRF4 is 1, the
distance between TR8 (which consists of periods T1, T2, R1, and R2) and TR9 (which consists
of periods T3, T4, R3, and R4) is also 1, and the distance between MWF3 and TR9 is +∞. In
addition, to ensure that most instructors have at least one day a week to devote to research, class
preparation, and service duties, we set the distance between two meeting times that together result
in no free days as +∞. We denote these distances as
d_{m,m′} = distance between meeting times m and m′, for m, m′ ∈ M.
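The paper does not spell out the full distance metric, so the following Python sketch is our reconstruction under stated assumptions: distance is the smallest period gap on a day shared by both meeting times, and +∞ when the two share no day or together leave no free weekday. This reproduces the three examples above, but the authors' actual metric may differ in other cases.

```python
import itertools

DAYS = ["M", "T", "W", "R", "F"]
INF = float("inf")

# Meeting times encoded as day -> set of period indices (1-6); our encoding.
MWF3  = {"M": {3}, "W": {3}, "F": {3}}
MWRF4 = {"M": {4}, "W": {4}, "R": {4}, "F": {4}}
TR8   = {"T": {1, 2}, "R": {1, 2}}
TR9   = {"T": {3, 4}, "R": {3, 4}}

def distance(m1, m2):
    """Rough gap in periods between two meeting times; +inf if teaching both
    would leave no free weekday, or if they share no day at all."""
    if set(m1) | set(m2) == set(DAYS):
        return INF                      # together they use all five days
    common = set(m1) & set(m2)
    if not common:
        return INF
    return min(abs(p - q)
               for d in common
               for p, q in itertools.product(m1[d], m2[d]))
```

Under this reconstruction, distance(MWF3, MWRF4) = 1, distance(TR8, TR9) = 1, and distance(MWF3, TR9) = +∞, matching the text.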
3.2 Decision variables
We define the following decision variables:
x_{s,i,c,m} = 1 if in scenario s, instructor i is assigned to course c and meeting time m, and 0 otherwise,
    for s ∈ S, i ∈ I_s, c ∈ C_{s,i}, m ∈ M_{s,c},

y_{s,i,c,m,p,r} = 1 if in scenario s, instructor i is assigned to course c and meeting time m, with period p in room r, and 0 otherwise,
    for s ∈ S, i ∈ I_s, c ∈ C_{s,i}, m ∈ M_{s,c}, p ∈ P_m, r ∈ R_{s,c},

y*_{s,i,c,m,p,r} = 1 if y_{s,i,c,m,p,r} ≠ y_{s0,i,c,m,p,r} in scenario s, and 0 otherwise,
    for s ∈ S, i ∈ I_s, c ∈ C_{s,i}, m ∈ M_{s,c}, p ∈ P_m, r ∈ R_{s,c},

u_{s,i,c,m} = 1 if Σ_{p ∈ P_m} Σ_{r ∈ R_{s,c}} y*_{s,i,c,m,p,r} ≥ 1, and 0 otherwise,
    for s ∈ S, i ∈ I_s, c ∈ C_{s,i}, m ∈ M_{s,c},

z^t_{s,i,r} = 1 if instructor i is assigned to room r during period block t in scenario s, and 0 otherwise,
    for s ∈ S, i ∈ I_s, r ∈ R_s, t ∈ {AM, PM},

z_{s,i,r} = 1 if instructor i is assigned to room r in scenario s, and 0 otherwise,
    for s ∈ S, i ∈ I_s, r ∈ R_s.
Note that u_{s,i,c,m} = 1 if: (i) instructor i teaches course c during meeting time m in scenario s,
but not in the baseline scenario s0; (ii) instructor i teaches course c during meeting time m in the
baseline scenario s0, but not in scenario s; or (iii) instructor i teaches course c during meeting
time m in both scenarios s and s0, but not in the same room.
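As an illustration of this change-counting logic, the following self-contained Python sketch, with made-up instructors, courses, and rooms, counts the (instructor, course, meeting time) triples whose assignment differs between a baseline schedule and a disruption schedule, mirroring the role of the u variables in cases (i)–(iii).

```python
# Illustrative sketch (our own encoding, not the paper's code): a schedule maps
# (instructor, course, meeting_time, period) -> room.
baseline = {
    ("i1", "SM121", "MWF1", "M1"): "CH320",
    ("i1", "SM121", "MWF1", "W1"): "CH320",
    ("i1", "SM121", "MWF1", "F1"): "CH320",
    ("i2", "SM122", "TR9", "T3"): "CH348",
}
scenario = {
    ("i1", "SM121", "MWF1", "M1"): "CH320",
    ("i1", "SM121", "MWF1", "W1"): "CH351",   # room change: case (iii)
    ("i1", "SM121", "MWF1", "F1"): "CH320",
    ("i3", "SM122", "TR9", "T3"): "CH348",    # i2 dropped (ii), i3 added (i)
}

def count_changes(base, scen):
    """Number of (instructor, course, meeting time) triples that differ in any
    period/room assignment between the two schedules (the u variables)."""
    changed = set()
    for key in set(base) | set(scen):
        if base.get(key) != scen.get(key):
            changed.add(key[:3])       # (instructor, course, meeting time)
    return len(changed)
```

Here three triples change: instructor i1's section moves rooms for one period, and the SM122 section changes hands from i2 to i3, which counts once for each instructor.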
3.3 Constraints
We begin by describing some basic constraints that result in a physically feasible assignment of
instructors to courses and meeting times. For instance, each instructor must be assigned to the
right number of sections of each course in every scenario:
Σ_{m ∈ M_{s,c}} x_{s,i,c,m} = t_{s,i,c}   for s ∈ S, i ∈ I_s, c ∈ C_{s,i}.
Also, each instructor can only teach at most one course per period in every scenario:
Σ_{c ∈ C_{s,i}} Σ_{m ∈ M_{s,c} : p ∈ P_m} x_{s,i,c,m} ≤ 1   for s ∈ S, i ∈ I_s, p ∈ P.
Next, we describe some constraints that are particular to the scheduling of courses and instructors
in the Mathematics Department at USNA. First, we ensure that each instructor teaches consecutive
periods according to the distance metric d defined above in every scenario:
Σ_{c ∈ C_{s,i} : m ∈ M_{s,c}} x_{s,i,c,m} + Σ_{c′ ∈ C_{s,i} : m′ ∈ M_{s,c′}} x_{s,i,c′,m′} ≤ 1   for s ∈ S, i ∈ I_s, m, m′ ∈ M : d_{m,m′} ≥ t_{s,i},

Σ_{c ∈ C_{s,i} : m ∈ M_{s,c}} x_{s,i,c,m} ≤ Σ_{c′ ∈ C_{s,i}} Σ_{m′ ∈ M_{s,c′} : d_{m,m′} < t_{s,i}} x_{s,i,c′,m′}   for s ∈ S, i ∈ I_s, m ∈ M : t_{s,i} ≥ 2.
Moreover, for the sake of convenience, we ensure that if an instructor teaches exactly two sections
of the same course in a given scenario, these sections should be in consecutive periods, either both
in the morning or both in the afternoon, but not one before lunch and one after lunch:
x_{s,i,c,m} ≤ Σ_{m′ ∈ M_{s,c} ∩ M_AM : d_{m,m′} = 1} x_{s,i,c,m′}   for s ∈ S, i ∈ I_s, c ∈ C_{s,i} : t_{s,i,c} = 2, m ∈ M_{s,c} ∩ M_AM,

x_{s,i,c,m} ≤ Σ_{m′ ∈ M_{s,c} ∩ M_PM : d_{m,m′} = 1} x_{s,i,c,m′}   for s ∈ S, i ∈ I_s, c ∈ C_{s,i} : t_{s,i,c} = 2, m ∈ M_{s,c} ∩ M_PM.
We also ensure that if the instructor teaches three or more sections of the same course in a given
scenario, these sections should be in consecutive periods:
x_{s,i,c,m} ≤ Σ_{m′ ∈ M_{s,c} : d_{m,m′} = 1} x_{s,i,c,m′}   for s ∈ S, i ∈ I_s, c ∈ C_{s,i} : t_{s,i,c} ≥ 3, m ∈ M_{s,c}.
In addition, we constrain our assignments so that for each pair of conflict-free courses, there is at
least one section from one course that does not meet at the same time as any of the sections of the
other course:
Σ_{i ∈ I_s : c ∈ C_{s,i}} Σ_{m ∈ M_{s,c} : p ∈ P_m} x_{s,i,c,m} + Σ_{i ∈ I_s : c′ ∈ C_{s,i}} Σ_{m ∈ M_{s,c′} : p ∈ P_m} x_{s,i,c′,m} ≤ n_{s,c} + n_{s,c′} − 1
for s ∈ S, (c, c′) ∈ K_s, p ∈ P.
We now describe constraints that assign rooms to the sections. Naturally, we ensure that each
section must be assigned one room for each of its scheduled periods in every scenario:
Σ_{r ∈ R_{s,c}} y_{s,i,c,m,p,r} = x_{s,i,c,m}   for s ∈ S, i ∈ I_s, c ∈ C_{s,i}, m ∈ M_{s,c}, p ∈ P_m.
We also ensure that each room can only be assigned at most one section per period in every scenario:
Σ_{i ∈ I_s} Σ_{c ∈ C_{s,i} : r ∈ R_{s,c}} Σ_{m ∈ M_{s,c} : p ∈ P_m} y_{s,i,c,m,p,r} ≤ 1   for s ∈ S, r ∈ R_s, p ∈ P.
We ensure that each period group in a meeting time must be assigned to the same room in every
scenario:
y_{s,i,c,m,p,r} = y_{s,i,c,m,p′,r}   for s ∈ S, i ∈ I_s, c ∈ C_{s,i}, m ∈ M_{s,c}, r ∈ R_{s,c}, G ∈ P*_m, p, p′ ∈ G : p ≠ p′.
In addition, we ensure that sections cannot be assigned to rooms when they are not available in
every scenario:
y_{s,i,c,m,p,r} = 0   for s ∈ S, i ∈ I_s, c ∈ C_{s,i}, m ∈ M_{s,c}, r ∈ R_{s,c}, p ∈ P_m ∩ P_{s,r}.
We also include constraints that ensure we properly track the rooms assigned to instructors – in
the morning, in the afternoon, and all day for every scenario:
Σ_{c ∈ C_{s,i} : r ∈ R_{s,c}} Σ_{m ∈ M_{s,c} : p ∈ P_m} y_{s,i,c,m,p,r} ≤ z^AM_{s,i,r}   for s ∈ S, i ∈ I_s, p ∈ P_AM, r ∈ R_s,   (1)

Σ_{c ∈ C_{s,i} : r ∈ R_{s,c}} Σ_{m ∈ M_{s,c} : p ∈ P_m} y_{s,i,c,m,p,r} ≤ z^PM_{s,i,r}   for s ∈ S, i ∈ I_s, p ∈ P_PM, r ∈ R_s,   (2)

Σ_{c ∈ C_{s,i} : r ∈ R_{s,c}} Σ_{m ∈ M_{s,c} : p ∈ P_m} y_{s,i,c,m,p,r} ≤ z_{s,i,r}   for s ∈ S, i ∈ I_s, p ∈ P, r ∈ R_s.   (3)
Finally, we account for every change from the baseline scenario to each of the disruption scenarios:
y*_{s,i,c,m,p,r} ≥ y_{s0,i,c,m,p,r} − y_{s,i,c,m,p,r}   for s ∈ S, i ∈ I_s, c ∈ C_{s,i}, m ∈ M_{s,c}, p ∈ P_m, r ∈ R_{s,c},

y*_{s,i,c,m,p,r} ≥ y_{s,i,c,m,p,r} − y_{s0,i,c,m,p,r}   for s ∈ S, i ∈ I_s, c ∈ C_{s,i}, m ∈ M_{s,c}, p ∈ P_m, r ∈ R_{s,c},

u_{s,i,c,m} ≥ y*_{s,i,c,m,p,r}   for s ∈ S, i ∈ I_s, c ∈ C_{s,i}, m ∈ M_{s,c}, p ∈ P_m, r ∈ R_{s,c}.
As alluded to earlier when defining the decision variables u_{s,i,c,m}, we define a change between the
baseline scenario and a particular disruption scenario as one of the following: (i) an instructor
teaches a particular course during a specific meeting time in the disruption scenario, but not in the
baseline scenario, (ii) an instructor teaches a particular course during a specific meeting time in
the baseline scenario, but not in the disruption scenario, or (iii) an instructor teaches a particular
course during a specific meeting time in both the baseline scenario and disruption scenario, but not
in the same room.
3.4 Objective function
First, we describe the various performance measures that constitute our objective function. The
total instructor preference score in scenario s ∈ S is
Preferences_s = Σ_{i ∈ I_s} Σ_{c ∈ C_{s,i}} Σ_{m ∈ M_{s,c}} p_{s,i,m} x_{s,i,c,m}.
For each scenario s ∈ S, the total number of rooms assigned to instructors in the morning, afternoon,
and overall are
Rooms^AM_s = Σ_{i ∈ I_s} Σ_{r ∈ R_s} z^AM_{s,i,r},   Rooms^PM_s = Σ_{i ∈ I_s} Σ_{r ∈ R_s} z^PM_{s,i,r},   Rooms_s = Σ_{i ∈ I_s} Σ_{r ∈ R_s} z_{s,i,r}.
The number of changes between the baseline scenario and a disruption scenario s ∈ S \ {s0} is
Differences_s = Σ_{i ∈ I_{s0}} Σ_{c ∈ C_{s,i}} Σ_{m ∈ M_{s,c}} u_{s,i,c,m}.
We optimize a weighted sum of four objectives based on these performance measures:
(a) Maximize the total instructor preference score in the baseline scenario while minimizing the
number of rooms assigned to instructors in the baseline scenario. Minimizing the number of
rooms assigned to instructors is a way to achieve room stability, by attempting to have each
instructor teach all of his or her sections in the same room.
(b) Minimize the expected number of changes between the baseline scenario and the disruption
scenarios.
(c) Minimize the expected number of rooms assigned to instructors in the disruption scenarios.
As in (a), the goal of this part of the objective is to achieve room stability in the disruption
scenarios.
(d) Maximize the expected total instructor preference score in the disruption scenarios.
Let w_a, w_b, w_c, and w_d be the weights for objectives (a), (b), (c), and (d), respectively. The
objective of our stochastic integer program is

maximize  (w_a/A) [π_{s0} Preferences_{s0} − (λ^AM_{s0} Rooms^AM_{s0} + λ^PM_{s0} Rooms^PM_{s0} + λ_{s0} Rooms_{s0})]
          − (w_b/B) Σ_{s ∈ S∖{s0}} q_s Differences_s
          − (w_c/C) Σ_{s ∈ S∖{s0}} q_s (λ^AM_s Rooms^AM_s + λ^PM_s Rooms^PM_s + λ_s Rooms_s)
          + (w_d/D) Σ_{s ∈ S∖{s0}} q_s π_s Preferences_s,

where π_s, λ^AM_s, λ^PM_s, and λ_s for s ∈ S are user-specified weights, and A, B, C, and D are scaling
constants chosen so that the ranges of objectives (a), (b), (c), and (d) are equal.
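A small Python sketch of evaluating this weighted objective from precomputed per-scenario performance measures. The function name, dictionary keys, and all numbers are illustrative, not the paper's, and for simplicity it uses a single preference weight π for all scenarios, whereas the formulation allows a per-scenario π_s.

```python
def objective(metrics, q, w, scale, lam, pi, s0="s0"):
    """Weighted objective from precomputed per-scenario measures.
    metrics[s]: dict with 'pref', 'rooms_am', 'rooms_pm', 'rooms',
    plus 'diff' for disruption scenarios; q[s]: scenario probability."""
    def room_term(s):
        m = metrics[s]
        return (lam["AM"] * m["rooms_am"] + lam["PM"] * m["rooms_pm"]
                + lam["all"] * m["rooms"])

    # Objective (a): baseline preferences minus baseline room usage.
    val = (w["a"] / scale["A"]) * (pi * metrics[s0]["pref"] - room_term(s0))
    for s in metrics:
        if s == s0:
            continue
        val -= (w["b"] / scale["B"]) * q[s] * metrics[s]["diff"]       # (b)
        val -= (w["c"] / scale["C"]) * q[s] * room_term(s)             # (c)
        val += (w["d"] / scale["D"]) * q[s] * pi * metrics[s]["pref"]  # (d)
    return val
```

In the full model the measures themselves are linear functions of the decision variables, so this weighted sum is what the solver maximizes directly.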
3.5 Strengthening the formulation
In order to strengthen the formulation above in the hopes of faster running times, we add several
families of valid inequalities. The definitions of z_{s,i,r}, z^AM_{s,i,r}, and z^PM_{s,i,r} imply the inequalities below:

z^AM_{s,i,r} ≤ z_{s,i,r}   for s ∈ S, i ∈ I_s, r ∈ R_s,
z^PM_{s,i,r} ≤ z_{s,i,r}   for s ∈ S, i ∈ I_s, r ∈ R_s,
z^AM_{s,i,r} + z^PM_{s,i,r} ≥ z_{s,i,r}   for s ∈ S, i ∈ I_s, r ∈ R_s.
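These implications can be sanity-checked by brute force: since z equals 1 exactly when the instructor uses the room in the morning or the afternoon, z = max(z^AM, z^PM), and every 0/1 triple consistent with that definition satisfies all three inequalities. A short Python check, our own and purely for illustration:

```python
from itertools import product

# Triples (z_AM, z_PM, z) consistent with the definitions: z = max(z_AM, z_PM).
consistent = [(a, p, max(a, p)) for a, p in product((0, 1), repeat=2)]

def inequalities_hold(z_am, z_pm, z):
    """The three valid inequalities from the text."""
    return z_am <= z and z_pm <= z and z_am + z_pm >= z
```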
In addition, the definitions of z_{s,i,r}, z^AM_{s,i,r}, z^PM_{s,i,r}, and y_{s,i,c,m,p,r} imply the following family of inequalities:

z_{s,i,r} ≤ Σ_{c ∈ C_{s,i} : r ∈ R_{s,c}} Σ_{m ∈ M_{s,c}} Σ_{p ∈ P_m} y_{s,i,c,m,p,r}   for s ∈ S, i ∈ I_s, r ∈ ⋃_{c ∈ C_{s,i}} R_{s,c},

z^AM_{s,i,r} ≤ Σ_{c ∈ C_{s,i} : r ∈ R_{s,c}} Σ_{m ∈ M_{s,c} ∩ M_AM} Σ_{p ∈ P_m} y_{s,i,c,m,p,r}   for s ∈ S, i ∈ I_s, r ∈ ⋃_{c ∈ C_{s,i}} R_{s,c},

z^PM_{s,i,r} ≤ Σ_{c ∈ C_{s,i} : r ∈ R_{s,c}} Σ_{m ∈ M_{s,c} ∩ M_PM} Σ_{p ∈ P_m} y_{s,i,c,m,p,r}   for s ∈ S, i ∈ I_s, r ∈ ⋃_{c ∈ C_{s,i}} R_{s,c}.
Finally, by adding constraints (1) over all periods p in P_AM, and recognizing that each instructor
can teach at most

min{ Σ_{c ∈ C_{s,i}} t_{s,i,c} max_{m ∈ M_{s,c}} |P_m|, 20 }

periods total in the morning, the following family of inequalities is valid:

Σ_{c ∈ C_{s,i} : r ∈ R_{s,c}} Σ_{m ∈ M_{s,c}} Σ_{p ∈ P_m ∩ P_AM} y_{s,i,c,m,p,r} ≤ min{ Σ_{c ∈ C_{s,i}} t_{s,i,c} max_{m ∈ M_{s,c}} |P_m|, 20 } z^AM_{s,i,r}
for s ∈ S, i ∈ I_s, r ∈ ⋃_{c ∈ C_{s,i}} R_{s,c}.
Similarly, starting from constraints (2) and (3), the following inequalities are also valid:

Σ_{c ∈ C_{s,i} : r ∈ R_{s,c}} Σ_{m ∈ M_{s,c}} Σ_{p ∈ P_m ∩ P_PM} y_{s,i,c,m,p,r} ≤ min{ Σ_{c ∈ C_{s,i}} t_{s,i,c} max_{m ∈ M_{s,c}} |P_m|, 10 } z^PM_{s,i,r}
for s ∈ S, i ∈ I_s, r ∈ ⋃_{c ∈ C_{s,i}} R_{s,c},

Σ_{c ∈ C_{s,i} : r ∈ R_{s,c}} Σ_{m ∈ M_{s,c}} Σ_{p ∈ P_m} y_{s,i,c,m,p,r} ≤ min{ Σ_{c ∈ C_{s,i}} t_{s,i,c} max_{m ∈ M_{s,c}} |P_m|, 30 } z_{s,i,r}
for s ∈ S, i ∈ I_s, r ∈ ⋃_{c ∈ C_{s,i}} R_{s,c}.

Table 1. Test instance characteristics (baseline scenario)

Instance           Instructors  Courses  Sections  Rooms  Constraints  Variables
Fall AYE 2015           68         36       168      30    1,154,878    573,258
Spring AYE 2015         66         39       159      30    1,036,803    508,632
Fall AYE 2016           74         39       189      29    1,196,054    586,242
Spring AYE 2016         68         40       160      29    1,284,108    648,452
Fall AYE 2017           69         33       184      29    1,269,928    641,370
Spring AYE 2017         62         37       144      30      923,652    449,167
Average Fall            70         36       180      29    1,206,953    600,290
Average Spring          65         39       154      30    1,081,521    535,417
4 Experiment Setup
In order to test the functionality and performance of our model, we used available data from the six
most recent semesters at USNA: Fall AYE (Academic Year Ending in) 2015, Spring AYE 2015, Fall
AYE 2016, Spring AYE 2016, Fall AYE 2017, and Spring AYE 2017. Each test instance consists of
a baseline data set and two potential disruption scenario data sets. Each data set consists of a set
of instructors, courses, meeting times, and rooms for that particular scenario as well as other data
as described in Section 3.1 of this paper. When possible, we created disruption scenarios based on
actual information available at the time of the instance: for example, a disruption scenario might be
based on the fact that an instructor was a likely candidate for a change in duty station and would
not be available to teach in the upcoming semester. As seen in Table 1, these problems
are rather large.
To illustrate how the disruption scenarios were formed, we describe the Fall AYE 2017 instance
in detail. We describe the rest of the instances in Appendix A. In the first disruption scenario of
this test instance, an instructor (Instructor 9) is added to the faculty at the last minute to teach
one section each of two courses (SM121 and SM122) which previously had 30 and 15 sections in the
baseline data set respectively. The department chair decides to take one section of SM121 from one
instructor (Instructor 2) and one section of SM122 from another instructor (Instructor 71) so that
they are now teaching a total of two sections each instead of three. The number of total sections
of each course thus remains the same. In the second disruption scenario of this test instance, an
instructor (Instructor 94) who was supposed to teach three sections of a course (SM121) with 30
sections must suddenly depart USNA. The department chair decides to give another instructor
(Instructor 64) an additional section of this course so he is now teaching a total of three sections
instead of two. Furthermore, the department chair decreases the overall number of sections from 30
down to 28, slightly increasing the number of students per section of this course.
To solve these instances, we used Gurobi 7.0.1 with its Python API. These instances were solved
on a computer with an 8-core 2.6 GHz Intel Xeon E5-2560 v2 CPU and 64 GB RAM, running
Windows Server 2008. For each of the test instances, we created a warm start solution by running
the model for the baseline and disruption scenarios individually and combining them. We allowed a
total of three hours to find a warm start solution: one hour each for the baseline and disruption
scenarios. We stopped the solver after 24 hours for each test instance. Therefore, the results below
are based on three hours of finding warm start solutions and then 24 hours of solving the full model
for a total of 27 hours of CPU time.
For the user-specified weights π_s, λ^AM_s, λ^PM_s, and λ_s for s ∈ S in the objective function, we used
the following values for every instance:

π_s = 2,   λ^AM_s = 2,   λ^PM_s = 2,   λ_s = 1   for s ∈ S.
These weights were chosen based on our previous experience with finding desirable schedules for a
single (i.e. the baseline) scenario.
5 Results
We began testing the model by running it in the extreme cases of the objective function, setting the
weight of one of the parts (a) through (d) to 1 and the rest to 0. For these extreme cases, we assumed that
each scenario was equally likely: i.e., q_{s0} = q_{s1} = q_{s2} = 1/3. Due to the nature of USNA's
Table 2. Performance – Baseline only

Instance           Gap after 24 hours   Time of best solution (s)   Time when gap < 5% (s)
Fall AYE 2015            1.72%                   1519                       127
Spring AYE 2015          0.56%                     42                        91
Fall AYE 2016            6.37%                  26008                   > 86400
Spring AYE 2016          0.00%                  18064                        96
Fall AYE 2017            3.74%                  37663                     16584
Spring AYE 2017          0.00%                     28                        56
Average Fall             3.95%                  21730                   > 34370
Average Spring           0.19%                   6045                        81
academic curriculum, the Fall semesters are often more difficult to schedule because of the higher
number of sections, as seen in Table 1. In particular, a large number of Calculus I and II sections
must be offered in the Fall. These courses also have more restrictions on meeting times, due to their
length (4 credit hours), and on rooms, due to their larger section sizes. In light of this disparity
between the Fall and Spring semesters, it is useful to examine the results of these tests for the Fall
and Spring semesters separately.
5.1 Baseline Only
We first tested the objective with only the performance measures related to the baseline scenario:
i.e., w_a = 1, w_b = 0, w_c = 0, and w_d = 0. As seen in Table 2, the solver was able to find a “good”
solution (i.e., a solution less than 5% away from optimal) in under five minutes in four of the six
instances. In fact, for two of the Spring semester instances, provably optimal solutions were found.
The satisfaction of each instructor is a value between 0 and 1 representing the proportion of
their preferences that were met. Table 3 shows the average satisfaction of instructors in the baseline
and disruption scenarios. In the first three instances, instructor preference data were not collected,
so there are no meaningful values to display. Table 4 reports the percentage of instructors with multiple
rooms in the morning, afternoon, and overall; these percentages are reported as averages over the
three scenarios. This table also shows the proportion of instructors who can expect their schedule
to change from the baseline scenario to either of the disruption scenarios. Under this “baseline
only” objective, that proportion is 100% for every instance. This behavior is expected, since nothing
in the model drives the solution to limit these schedule changes. A model built without regard for
potential disruptions may therefore require a complete rescheduling after even simple
Table 3. Satisfaction metrics – Baseline only

Instance            Baseline average     Scenario 1 average   Scenario 2 average
                    satisfaction of      satisfaction of      satisfaction of
                    each instructor      each instructor      each instructor
Fall AYE 2015       N/A                  N/A                  N/A
Spring AYE 2015     N/A                  N/A                  N/A
Fall AYE 2016       N/A                  N/A                  N/A
Spring AYE 2016     0.997                0.867                0.895
Fall AYE 2017       0.936                0.946                0.944
Spring AYE 2017     0.979                0.966                0.971
Average Fall        0.936                0.946                0.944
Average Spring      0.988                0.916                0.933
Table 4. Room and disruption metrics – Baseline only

Instance            Instructors with   Instructors with   Instructors with   Instructors who
                    more than          more than          more than          could experience
                    1 AM room          1 PM room          1 room             some disruption
Fall AYE 2015       0.67%              0.00%              3.47%              100.00%
Spring AYE 2015     0.00%              0.00%              0.00%              100.00%
Fall AYE 2016       15.42%             5.18%              18.21%             100.00%
Spring AYE 2016     43.74%             37.57%             54.23%             100.00%
Fall AYE 2017       11.88%             7.73%              21.74%             100.00%
Spring AYE 2017     0.00%              0.00%              0.00%              100.00%
Average Fall        9.32%              4.30%              14.47%             100.00%
Average Spring      14.58%             12.52%             18.08%             100.00%
disruptions.
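As an illustration of how the satisfaction and disruption metrics above can be computed from a solved schedule, consider the following sketch. The schedule encoding here is a hypothetical simplification of ours, not the paper's data format.

```python
# Hypothetical sketch of the satisfaction and disruption metrics. A
# schedule maps each instructor to a set of (period, room) meetings.

def satisfaction(preferred_periods, assigned_meetings):
    """Fraction of an instructor's assigned meetings in a preferred period."""
    if not assigned_meetings:
        return 1.0
    met = sum(1 for period, _room in assigned_meetings
              if period in preferred_periods)
    return met / len(assigned_meetings)

def disruption_fraction(baseline, scenario):
    """Fraction of instructors whose meetings differ between two schedules."""
    instructors = set(baseline) | set(scenario)
    changed = sum(1 for i in instructors
                  if baseline.get(i, set()) != scenario.get(i, set()))
    return changed / len(instructors)

base = {"I1": {("p1", "CH348")}, "I2": {("p2", "CH320")}}
scen = {"I1": {("p1", "CH348")}, "I2": {("p3", "CH320")}}
# Only instructor I2's schedule changes between the baseline and the
# scenario, so half of the instructors would experience some disruption.
```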
5.2 Differences Only
The second objective we tested was to only minimize the number of changes between the baseline
scenario and each disruption scenario: i.e., w_a = 0, w_b = 1, w_c = 0, and w_d = 0. Table 5 shows the
performance results for this objective. On average, the optimality gap for the Spring semesters is
smaller than that for the Fall semesters; however, no instance closed the gap to less than 5% within
24 hours.
Tables 6 and 7 show the satisfaction, room, and disruption metrics under this “differences
only” objective. On average, the number of instructors with multiple rooms was lower in the Fall
semesters than in the Spring. The solutions performed as expected under this weighting: in all but
one instance, fewer than 6% of instructors would expect a schedule change if one of the potential
disruption scenarios were to occur, compared to 100% in all instances under the “baseline only”
objective (Table 4).
Table 5. Performance – Differences only

Instance            Gap after     Time of best     Time when gap
                    24 hours      solution (s)     less than 5% (s)
Fall AYE 2015       66.67%        36750            > 86400
Spring AYE 2015     50.00%        261              > 86400
Fall AYE 2016       99.99%        10820            > 86400
Spring AYE 2016     47.92%        282              > 86400
Fall AYE 2017       56.23%        78120            > 86400
Spring AYE 2017     64.25%        168              > 86400
Average Fall        74.29%        41896            > 86400
Average Spring      54.06%        237              > 86400
Table 6. Satisfaction metrics – Differences only

Instance            Baseline average     Scenario 1 average   Scenario 2 average
                    satisfaction of      satisfaction of      satisfaction of
                    each instructor      each instructor      each instructor
Fall AYE 2015       N/A                  N/A                  N/A
Spring AYE 2015     N/A                  N/A                  N/A
Fall AYE 2016       N/A                  N/A                  N/A
Spring AYE 2016     0.859                0.857                0.860
Fall AYE 2017       0.695                0.698                0.696
Spring AYE 2017     0.574                0.587                0.572
Average Fall        0.695                0.698                0.696
Average Spring      0.717                0.722                0.716
Table 7. Room and disruption metrics – Differences only

Instance            Instructors with   Instructors with   Instructors with   Instructors who
                    more than          more than          more than          could experience
                    1 AM room          1 PM room          1 room             some disruption
Fall AYE 2015       62.13%             52.05%             75.75%             4.41%
Spring AYE 2015     67.44%             48.29%             87.75%             1.52%
Fall AYE 2016       14.01%             3.12%              17.75%             98.65%
Spring AYE 2016     74.86%             52.05%             85.15%             5.88%
Fall AYE 2017       68.43%             69.04%             94.20%             4.35%
Spring AYE 2017     68.55%             47.41%             87.50%             4.84%
Average Fall        48.19%             41.40%             62.56%             35.80%
Average Spring      70.29%             49.25%             86.80%             4.08%
Table 8. Performance – Rooms in disruption scenarios only

Instance            Gap after     Time of best     Time when gap
                    24 hours      solution (s)     less than 5% (s)
Fall AYE 2015       8.67%         37196            > 86400
Spring AYE 2015     1.94%         43               86400
Fall AYE 2016       11.59%        77998            > 86400
Spring AYE 2016     3.09%         73797            24278
Fall AYE 2017       22.07%        27878            > 86400
Spring AYE 2017     0.00%         1742             1394
Average Fall        14.11%        47691            > 86400
Average Spring      1.68%         25194            37358
Table 9. Satisfaction metrics – Rooms in disruption scenarios only

Instance            Baseline average     Scenario 1 average   Scenario 2 average
                    satisfaction of      satisfaction of      satisfaction of
                    each instructor      each instructor      each instructor
Fall AYE 2015       N/A                  N/A                  N/A
Spring AYE 2015     N/A                  N/A                  N/A
Fall AYE 2016       N/A                  N/A                  N/A
Spring AYE 2016     0.856                0.831                0.841
Fall AYE 2017       0.935                0.937                0.922
Spring AYE 2017     0.748                0.611                0.574
Average Fall        0.935                0.937                0.922
Average Spring      0.802                0.721                0.708
5.3 Rooms in Disruption Scenarios Only
The third objective we tested was to only minimize the expected number of rooms assigned to the
instructors in the disruption scenarios: i.e., w_a = 0, w_b = 0, w_c = 1, and w_d = 0. Table 8 shows
that for this objective, the time for solutions to have an optimality gap less than 5% was on average
10.4 hours for Spring semester instances, and more than 24 hours for Fall semester instances. For
this objective, we were able to find a provably optimal solution for Spring AYE 2017 in the allotted
time. This is not entirely surprising, since as seen in Table 1, this instance is much smaller than the
other five.
Comparing the results in Table 10 with those for the “differences only” objective in Table 7, we
see that the percentages of instructors with more than one room are much lower; in the Fall the
overall percentage has been reduced by more than a factor of three, and in the Spring the overall
percentage has been reduced by more than a factor of five. However, once again it is apparent that
when the part of the objective that penalizes schedule changes is ignored, 100% of instructors can
expect a schedule change if one of the disruption scenarios were to occur.
Table 10. Room and disruption metrics – Rooms in disruption scenarios only

Instance            Instructors with   Instructors with   Instructors with   Instructors who
                    more than          more than          more than          could experience
                    1 AM room          1 PM room          1 room             some disruption
Fall AYE 2015       0.00%              0.00%              2.48%              100.00%
Spring AYE 2015     0.00%              0.00%              0.00%              100.00%
Fall AYE 2016       29.41%             23.26%             37.49%             100.00%
Spring AYE 2016     24.60%             17.65%             28.93%             100.00%
Fall AYE 2017       11.49%             12.58%             20.78%             100.00%
Spring AYE 2017     18.67%             16.13%             22.58%             100.00%
Average Fall        13.63%             11.95%             20.25%             100.00%
Average Spring      14.42%             11.26%             17.17%             100.00%
Table 11. Performance – Preferences in disruption scenarios only

Instance            Gap after     Time of best     Time when gap
                    24 hours      solution (s)     less than 5% (s)
Fall AYE 2015       0.00%         52               89
Spring AYE 2015     0.00%         41               65
Fall AYE 2016       0.00%         45               82
Spring AYE 2016     0.00%         123              104
Fall AYE 2017       0.00%         787              108
Spring AYE 2017     0.00%         38               38
Average Fall        0.00%         295              93
Average Spring      0.00%         67               69
5.4 Preferences in Disruption Scenarios Only
The fourth objective we tested was to maximize each instructor’s preferences in the disruption
scenarios: i.e., w_a = 0, w_b = 0, w_c = 0, and w_d = 1. As Table 11 shows, the solver was able to solve
every instance to provable optimality, and every instance reached an optimality gap under 5% in
less than two minutes, with an average time of 1.35 minutes across all instances, regardless of semester.
As expected, the average instructor satisfaction values are higher in the disruption scenarios than
in the baseline scenario, as seen in Table 12. As with the results for the “baseline only” and
“rooms in disruption scenarios only” objectives (Sections 5.1 and 5.3, respectively), Table
13 shows that when we ignore the number of changes from the baseline schedule to the disruption
scenario schedules in the objective, 100% of instructors can expect to have a schedule change if one
of those disruptions were to occur.
Table 12. Satisfaction metrics – Preferences in disruption scenarios only

Instance            Baseline average     Scenario 1 average   Scenario 2 average
                    satisfaction of      satisfaction of      satisfaction of
                    each instructor      each instructor      each instructor
Fall AYE 2015       N/A                  N/A                  N/A
Spring AYE 2015     N/A                  N/A                  N/A
Fall AYE 2016       N/A                  N/A                  N/A
Spring AYE 2016     0.859                0.996                0.996
Fall AYE 2017       0.768                0.946                0.950
Spring AYE 2017     0.979                0.968                0.973
Average Fall        0.768                0.946                0.950
Average Spring      0.919                0.982                0.984
Table 13. Room and disruption metrics – Preferences in disruption scenarios only

Instance            Instructors with   Instructors with   Instructors with   Instructors who
                    more than          more than          more than          could experience
                    1 AM room          1 PM room          1 room             some disruption
Fall AYE 2015       0.00%              0.00%              2.48%              100.00%
Spring AYE 2015     0.00%              0.00%              0.00%              100.00%
Fall AYE 2016       13.29%             1.08%              16.36%             100.00%
Spring AYE 2016     64.75%             50.05%             85.61%             100.00%
Fall AYE 2017       68.27%             60.61%             92.73%             100.00%
Spring AYE 2017     30.20%             13.13%             38.25%             100.00%
Average Fall        27.19%             20.56%             37.19%             100.00%
Average Spring      31.65%             21.06%             41.29%             100.00%
5.5 Difficulty and Importance of Minimizing Disruptions
These tests on the extreme cases of the objective function have shown that the model is
computationally tractable under the “baseline only,” “rooms in disruption scenarios only,” and
“preferences in disruption scenarios only” objectives on their own. Under the “differences only”
objective, however, the solver took significantly longer to make progress on the optimality gap,
and for no instance was it able to converge to a “good” solution. Moreover, when the changes
between the baseline scenario and disruption scenarios were not taken into account in the
objective, the obtained solutions had 100% of the instructors experiencing a change to their schedule if
a disruption were to occur. In effect, an essentially completely new schedule would be used to
accommodate the new scenario, as opposed to a solution that disrupts a minimal number of instructors.
5.6 Equal Weights, Scenarios Equally Likely
Our next set of tests weighted the four parts of the overall objective function equally: i.e., w_a = 1,
w_b = 1, w_c = 1, and w_d = 1. Furthermore, we continued to assign equal probabilities to each
Table 14. Performance – Equal weights, scenarios equally likely

Instance            Gap after     Time of best     Time when gap
                    24 hours      solution (s)     less than 5% (s)
Fall AYE 2015       5.63%         86400            > 86400
Spring AYE 2015     1.06%         86400            561
Fall AYE 2016       25.06%        86400            > 86400
Spring AYE 2016     1.34%         86400            754
Fall AYE 2017       5.98%         30939            > 86400
Spring AYE 2017     0.16%         84764            620
Average Fall        12.22%        67913            > 86400
Average Spring      0.86%         85855            645
Table 15. Instructor satisfaction – Equal weights, scenarios equally likely

Instance            Baseline average     Scenario 1 average   Scenario 2 average
                    satisfaction of      satisfaction of      satisfaction of
                    each instructor      each instructor      each instructor
Fall AYE 2015       N/A                  N/A                  N/A
Spring AYE 2015     N/A                  N/A                  N/A
Fall AYE 2016       N/A                  N/A                  N/A
Spring AYE 2016     0.998                0.996                0.996
Fall AYE 2017       0.942                0.945                0.946
Spring AYE 2017     0.980                0.968                0.973
Average Fall        0.942                0.945                0.946
Average Spring      0.989                0.982                0.984
scenario: i.e., q_0 = q_1 = q_2 = 1/3. Table 14 shows that the time to reach an optimality
gap of less than 5% exceeds 24 hours for the Fall semester instances. However, when compared
to the results for the “differences only” objective described in Section 5.2 (Table 5), this weighting
of the objective produces better results in much less time. This suggests that attempting
to maximize instructor preferences and minimize the number of rooms per instructor in every
scenario actually aids the solver in finding solutions with minimal changes between the baseline and
disruption scenarios.
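A sketch of how such a four-part weighted objective can be assembled follows. The term names and sign conventions here are illustrative assumptions of ours, not the paper's exact formulation: parts (a) and (d) are maximized, parts (b) and (c) minimized, and the scenario terms are weighted by the probabilities q_s.

```python
# Illustrative assembly of a four-part weighted objective over scenarios.
# Term names and sign conventions are assumptions, not the paper's model.

def overall_objective(w, q, metrics):
    part_a = metrics["baseline_quality"]                        # part (a)
    part_b = sum(q[s] * metrics["changes"][s] for s in q)       # expected changes
    part_c = sum(q[s] * metrics["rooms"][s] for s in q)         # expected rooms
    part_d = sum(q[s] * metrics["satisfaction"][s] for s in q)  # expected satisfaction
    return (w["a"] * part_a - w["b"] * part_b
            - w["c"] * part_c + w["d"] * part_d)

# Equal weights, two disruption scenarios considered equally likely.
w = {"a": 1, "b": 1, "c": 1, "d": 1}
q = {1: 0.5, 2: 0.5}
metrics = {"baseline_quality": 0.9,
           "changes": {1: 4, 2: 6},
           "rooms": {1: 1.2, 2: 1.4},
           "satisfaction": {1: 0.95, 2: 0.93}}
value = overall_objective(w, q, metrics)
```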
Table 15 summarizes the performance of this set of test runs with regard to instructor preferences.
Compared to the corresponding results for the “baseline only” and “preferences in disruption scenarios
only” objectives (Tables 3 and 12, respectively), these values are only marginally worse (with the
exception of the baseline average satisfaction for the Fall semester instances), even though the solver
was simultaneously optimizing the other portions of the overall objective function.
Table 16 summarizes the performance of these test runs with regard to room usage and the
number of changes between the baseline and disruption scenarios. Compared to the room usage
metrics for the “rooms in disruption scenarios only” objective (Table 10), the room usage metrics
Table 16. Room and disruption metrics – Equal weights, scenarios equally likely

Instance            Instructors with   Instructors with   Instructors with   Instructors who
                    more than          more than          more than          could experience
                    1 AM room          1 PM room          1 room             some disruption
Fall AYE 2015       62.09%             42.80%             70.31%             20.59%
Spring AYE 2015     19.86%             3.45%              29.08%             1.52%
Fall AYE 2016       14.07%             3.26%              15.00%             97.30%
Spring AYE 2016     24.43%             7.50%              31.20%             7.35%
Fall AYE 2017       35.31%             39.05%             65.71%             30.43%
Spring AYE 2017     0.62%              0.00%              2.18%              4.84%
Average Fall        37.16%             28.37%             50.34%             49.44%
Average Spring      14.97%             3.65%              20.82%             4.57%
from this set of test runs are largely worse, seemingly the price paid for lowering the percentages
of instructors who would experience a change if one of the disruption scenarios were to occur.
In every test instance except Fall AYE 2016, the percentage of instructors who could be forced
to change their schedule improved significantly. Recall from Table 1 that Fall AYE 2016 has the
largest number of instructors and sections to schedule. It is possible that, given more time, solving
this instance would also have yielded a schedule with a lower percentage of instructors who could
experience some disruption.
5.7 Equal Weights, Baseline Scenario Likely
We further tested the model’s performance by continuing to weight the four parts of the overall
objective function equally (i.e., w_a = 1, w_b = 1, w_c = 1, and w_d = 1), but changing the probabilities
of the scenarios. In this set of tests, we assumed the baseline scenario is twice as likely to occur as
the two disruption scenarios combined: i.e., q_0 = 2/3, q_1 = 1/6, q_2 = 1/6. The performance
of the model under these conditions is displayed in Table 17. When compared to the performance
observed for the “equal weights, scenarios equally likely” objective described in Section 5.6 (Table
14), the optimality gaps are slightly worse on the whole, but the differences are not significant. We
note that the optimality gap after 24 hours for Fall AYE 2016 is in fact smaller, but the difference
is slight.
Table 18 shows the average satisfaction of instructors in each scenario for this objective. Comparing
these values to the results for the “equal weights, scenarios equally likely” objective found in
Table 15, the average satisfaction values are identical to the second decimal place in all instances.
In addition, the satisfaction values from this set of tests are close to the “maximum” possible values
Table 17. Performance – Equal weights, baseline scenario likely

Instance            Gap after     Time of best     Time when gap
                    24 hours      solution (s)     less than 5% (s)
Fall AYE 2015       5.81%         86400            > 86400
Spring AYE 2015     2.62%         86400            1229
Fall AYE 2016       24.50%        42476            > 86400
Spring AYE 2016     7.48%         677              > 86400
Fall AYE 2017       18.72%        1215             > 86400
Spring AYE 2017     0.82%         86400            677
Average Fall        16.34%        43364            > 86400
Average Spring      3.64%         57826            > 29435
Table 18. Instructor satisfaction – Equal weights, baseline scenario likely

Instance            Baseline average     Scenario 1 average   Scenario 2 average
                    satisfaction of      satisfaction of      satisfaction of
                    each instructor      each instructor      each instructor
Fall AYE 2015       N/A                  N/A                  N/A
Spring AYE 2015     N/A                  N/A                  N/A
Fall AYE 2016       N/A                  N/A                  N/A
Spring AYE 2016     0.995                0.996                0.996
Fall AYE 2017       0.942                0.943                0.943
Spring AYE 2017     0.980                0.968                0.973
Average Fall        0.942                0.943                0.943
Average Spring      0.988                0.982                0.984
found when optimizing only the instructors’ preferences in the baseline scenario (Table 3) and only
the instructors’ preferences in the disruption scenarios (Table 12).
The percentages of instructors with multiple rooms as well as the percentages of instructors who
can expect a schedule change in a disruption scenario are displayed in Table 19. Compared with the
“equal weights, scenarios equally likely” objective (Table 16), the room metrics are neither uniformly
better nor worse. On the other hand, the percentages of instructors who can expect a schedule
change in a disruption scenario generally do not compare favorably against those for the “equal
weights, scenarios equally likely” objective. These results might indicate that giving more credence
to the baseline scenario could result in a higher percentage of instructors experiencing a schedule
change if a disruption scenario occurs.
5.8 Equal Weights, Disruption Scenarios Likely
We continued to test the model’s performance by again equally weighting the four parts of
the overall objective function (i.e., w_a = 1, w_b = 1, w_c = 1, and w_d = 1), but varying the
probabilities of the scenarios. In this set of tests, we assumed each disruption scenario was twice as
Table 19. Room and disruption metrics – Equal weights, baseline scenario likely

Instance            Instructors with   Instructors with   Instructors with   Instructors who
                    more than          more than          more than          could experience
                    1 AM room          1 PM room          1 room             some disruption
Fall AYE 2015       58.84%             20.00%             69.31%             16.18%
Spring AYE 2015     39.58%             6.67%              41.84%             1.52%
Fall AYE 2016       14.66%             1.01%              15.46%             100.00%
Spring AYE 2016     50.89%             37.56%             62.39%             48.53%
Fall AYE 2017       35.14%             15.32%             52.60%             95.65%
Spring AYE 2017     11.17%             0.00%              15.76%             4.84%
Average Fall        36.21%             12.11%             45.79%             70.61%
Average Spring      33.88%             14.74%             40.00%             18.29%
Table 20. Performance – Equal weights, disruption scenarios likely

Instance            Gap after     Time of best     Time when gap
                    24 hours      solution (s)     less than 5% (s)
Fall AYE 2015       4.08%         4352             4352
Spring AYE 2015     1.03%         8109             7584
Fall AYE 2016       25.39%        26221            > 86400
Spring AYE 2016     1.09%         86400            3906
Fall AYE 2017       7.95%         14142            > 86400
Spring AYE 2017     0.28%         77565            862
Average Fall        12.47%        14905            > 59051
Average Spring      0.80%         57358            4117
likely to occur as the baseline scenario: i.e., q_0 = 1/5, q_1 = 2/5, q_2 = 2/5. We found from these
tests that the optimality gaps and times to reach a “good” solution (Table 20) were similar to
those for the “equal weights, scenarios equally likely” objective tested in Section 5.6 (Table 14)
and largely better than those for the “equal weights, baseline scenario likely” objective tested in
Section 5.7 (Table 17). These results seem to indicate that varying the probabilities assigned to
each potential scenario could have a profound effect on the time it takes to close the optimality gap
and find solutions.
Furthermore, Table 21 shows that the average satisfaction values of instructors in the Fall and Spring
semesters are extremely close to those for the “equal weights, scenarios equally likely” and “equal
weights, baseline scenario likely” objectives (Tables 15 and 18, respectively). These results suggest
that schedules with close to the maximum possible instructor satisfaction can be found quickly.
The results on room usage are mixed. Looking at the average values in Table 22, the results
are similar to, but slightly worse across the board than, those for the “equal weights, scenarios
equally likely” objective (Table 16). On the other hand, these results are sometimes better and
sometimes worse than those for the “equal weights, baseline scenario likely” objective (Table 19).
Table 21. Instructor satisfaction – Equal weights, disruption scenarios likely

Instance            Baseline average     Scenario 1 average   Scenario 2 average
                    satisfaction of      satisfaction of      satisfaction of
                    each instructor      each instructor      each instructor
Fall AYE 2015       N/A                  N/A                  N/A
Spring AYE 2015     N/A                  N/A                  N/A
Fall AYE 2016       N/A                  N/A                  N/A
Spring AYE 2016     0.998                0.996                0.996
Fall AYE 2017       0.943                0.946                0.950
Spring AYE 2017     0.980                0.968                0.973
Average Fall        0.943                0.946                0.950
Average Spring      0.989                0.982                0.984
Table 22. Room and disruption metrics – Equal weights, disruption scenarios likely

Instance            Instructors with   Instructors with   Instructors with   Instructors who
                    more than          more than          more than          could experience
                    1 AM room          1 PM room          1 room             some disruption
Fall AYE 2015       58.82%             57.93%             79.20%             11.76%
Spring AYE 2015     21.32%             0.00%              28.57%             1.52%
Fall AYE 2016       14.81%             4.23%              15.93%             100.00%
Spring AYE 2016     20.36%             11.69%             30.20%             5.88%
Fall AYE 2017       47.58%             51.88%             75.84%             52.17%
Spring AYE 2017     3.73%              0.00%              5.44%              4.84%
Average Fall        40.40%             38.01%             56.99%             54.65%
Average Spring      15.13%             3.90%              21.40%             4.08%
The percentages of instructors who can expect a schedule change if a disruption scenario
occurs are the same or better for this objective than for the “equal weights, baseline scenario
likely” objective, as seen by comparing the values in Tables 22 and 19. On the other hand, these
percentages are sometimes better and sometimes worse than those for the “equal weights, scenarios
equally likely” objective, as seen by comparing the values in Tables 22 and 16.
5.9 Importance and Difficulty of Timeliness
While Sections 5.6, 5.7, and 5.8 have shown that “good” feasible schedules resilient to known
potential disruptive events can be constructed by weighting all parts of the objective function equally,
they have also made clear that, in general, obtaining these schedules takes a long time. It would
be ideal if the decision maker – in this case, the department chair – could quickly obtain solutions
for instances with more than two disruption scenarios.
However, each additional disruption scenario adds a large number of rows and columns to
the integer linear program described in Section 3. For example, the Fall AYE 2016 test instance
has 163,151 rows and 120,009 columns with only the baseline scenario, but with each additional
disruption scenario, the model for this test instance increases by approximately 500,000 rows and
230,000 columns. As previously discussed in Sections 5.2 and 5.5, minimizing the number of
changes between the baseline and disruption scenarios appears to be the most difficult portion of
the objective. Furthermore, as demonstrated in Sections 5.6, 5.7, and 5.8, optimizing this part of
the objective function also seems to worsen the solution quality of the other three portions of the
objective function.
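The growth quoted above is roughly linear in the number of disruption scenarios; for Fall AYE 2016, it can be sketched with simple arithmetic (the per-scenario figures are the approximations stated above):

```python
# Back-of-the-envelope model size for the Fall AYE 2016 instance, using
# the approximate per-scenario growth quoted above.
BASE_ROWS, BASE_COLS = 163_151, 120_009                   # baseline only
ROWS_PER_SCENARIO, COLS_PER_SCENARIO = 500_000, 230_000   # approximate

def model_size(num_disruption_scenarios):
    rows = BASE_ROWS + ROWS_PER_SCENARIO * num_disruption_scenarios
    cols = BASE_COLS + COLS_PER_SCENARIO * num_disruption_scenarios
    return rows, cols

# With the two disruption scenarios used in our tests:
rows, cols = model_size(2)  # roughly 1,163,151 rows and 580,009 columns
```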
6 Future Work
As discussed in Section 5.9, the biggest issue with our stochastic integer program is that it is difficult
to solve in a reasonable amount of time. Our ultimate goal is to enable decision makers to efficiently
schedule instructors while planning for several likely potential disruptions. The ability to quickly
compute several viable options for consideration gives a decision maker control in a stressful
situation. Therefore, a key direction for future research is to find ways to reach “good” solutions
more quickly. One possible approach would be to reformulate the model in a way that enables
solvers to take advantage of any special structure in the problem. Another possible approach could
be to investigate decomposition strategies in order to reduce the size of the model that the solver
must work with.
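As one concrete illustration of the decomposition idea (a heuristic sketch of ours, not a method from the paper), each disruption scenario could be treated as a small repair problem against a fixed baseline schedule, changing as few assignments as possible:

```python
# Illustrative repair heuristic: remove a departed instructor and hand as
# many of their meetings as possible to designated takers, one each,
# leaving every other instructor's assignments untouched.

def repair(baseline, departed, takers):
    """Return a new schedule with the departed instructor removed and
    their meetings reassigned to the takers where possible."""
    schedule = {i: set(mtgs) for i, mtgs in baseline.items() if i != departed}
    freed = sorted(baseline.get(departed, set()))
    for taker, meeting in zip(takers, freed):
        schedule.setdefault(taker, set()).add(meeting)
    return schedule

baseline = {"I16": {("p1", "CH348"), ("p2", "CH348")},
            "I77": {("p3", "CH320")}}
# Instructor 16 departs; Instructor 77 absorbs one freed section, and the
# remaining freed section is dropped, mirroring the section-count
# reductions in the disruption scenarios of Appendix A.
scenario = repair(baseline, "I16", ["I77"])
```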
The ideas in this paper about finding resilient schedules in the face of potential disruptions
extend naturally beyond university timetabling into the military and civilian business sectors. It
would be ideal for ships, submarines, aircraft, and soldiers to deploy on schedules that are resilient
to several known potential disruptions. On the civilian side, many businesses would benefit from
the ability to minimize the costs of rescheduling the transportation of materials, goods, and services
after a disruption forces major changes.
References
M. A. Bakır, C. Aksop. 2008. A 0-1 integer programming approach to a university timetabling
problem. Hacettepe Journal of Mathematics and Statistics 37(1) 41–55.
O. S. Benli, A. R. Botsalı. 2004. An optimization-based decision support system for a university
timetabling problem: An integrated constraint and binary integer programming approach.
Computers and Industrial Engineering pp. 1–29.
A. Bettinelli, V. Cacchiani, R. Roberti, P. Toth. 2015. An overview of curriculum-based course
timetabling. TOP 23(2) 313–349.
G. G. Brown, R. F. Dell, R. K. Wood. 1997. Optimization and persistence. Interfaces 27(5) 15–37.
E. K. Burke, J. Mareček, A. J. Parkes, H. Rudová. 2008. Penalising patterns in timetables:
Novel integer programming formulations. In Operations Research Proceedings 2007, pp. 409–414.
Springer.
V. Cacchiani, A. Caprara, R. Roberti, P. Toth. 2013. A new lower bound for curriculum-based
course timetabling. Computers & Operations Research 40(10) 2466–2477.
S. Daskalaki, T. Birbas. 2005. Efficient solutions for a university timetabling problem through
integer programming. European Journal of Operational Research 160(1) 106–120.
S. Daskalaki, T. Birbas, E. Housos. 2004. An integer programming formulation for a case study in
university timetabling. European Journal of Operational Research 153(1) 117–135.
R. Kumar. 2014. Modeling a department course scheduling problem using integer programming: A
spreadsheet-based approach. Journal of Management Information and Decision Sciences 17(2)
41.
G. Lach, M. E. Lübbecke. 2008. Curriculum based course timetabling: Optimal solutions to the
Udine benchmark instances. Technical Report 9, TU Berlin, Institut für Mathematik.
G. Lach, M. E. Lübbecke. 2012. Curriculum based course timetabling: New solutions to Udine
benchmark instances. Annals of Operations Research 194(1) 255–272.
C. H. Martin. 2004. Ohio University’s College of Business uses integer programming to schedule
classes. Interfaces 34(6) 460–465.
A. E. Phillips, C. G. Walker, M. Ehrgott, D. M. Ryan. 2014. Integer programming for minimal
perturbation problems in university course timetabling. Annals of Operations Research pp. 1–22.
A. E. Phillips, C. G. Walker, M. Ehrgott, D. M. Ryan. 2016. Integer programming for minimal
perturbation problems in university course timetabling. Annals of Operations Research.
R. Saltzman. 2009. An optimization model for scheduling classes in a business school department.
California Journal of Operations Management 7(1) 84–92.
S. E. Sampson, J. R. Freeland, E. N. Weiss. 1995. Class scheduling to maximize participant
satisfaction. Interfaces 25(3) 30–41.
M. Sørensen, T. R. Stidsen. 2014. Hybridizing integer programming and metaheuristics for solving
high school timetabling. In 10th International Conference on the Practice and Theory of Automated
Timetabling, pp. 557–560.
M. Sørensen, T. R. Stidsen, M. B. Herold, D. Pisinger. 2013. Timetabling at High Schools. Ph.D.
thesis, Technical University of Denmark, Department of Informatics and Mathematical Modeling.
V. Thornley. 1969. University timetabling. A quantitative study of the interaction between course
structure and resource levels. Ph.D. thesis, University of Lancaster.
United States Naval Academy. 2016. Academic Dean and Provost Notice 5420.1: Periodic Program
Review/Visiting Committee Additional Information.
H. Waterer. 1995. A zero-one integer programming model for room assignment at the University of
Auckland. In Proceedings of the 1995 ORSNZ Conference.
A Description of Test Instances
A.1 Fall AYE 2015
In the first disruption scenario of this test instance, an instructor (Instructor 16) who was teaching
two sections of a course (SM122) with ten sections must suddenly depart USNA. The department
chair decides to give another instructor (Instructor 77) an additional section of this course so she is
now teaching a total of three sections instead of two. Furthermore, the department chair decreases
the overall number of sections from ten down to nine, slightly increasing the number of students per
section of this course.
In the second disruption scenario of this test instance, an instructor (Instructor 21) who was
teaching three sections of a course (SM223) with 20 sections must suddenly depart USNA. The
department chair decides to give two other instructors (Instructors 19 and 50) an additional section
of this course, so they are now each teaching a total of three sections instead of two. Furthermore, the
department chair decreases the overall number of sections from 20 down to 19, slightly increasing
the number of students per section of this course.
A.2 Spring AYE 2015
In the first disruption scenario of this test instance, an instructor (Instructor 81) who was teaching
three sections of a course (SM122) with 29 sections must suddenly depart USNA. The department
chair decides to give another instructor (Instructor 78) an additional section of this course so he is
now teaching a total of three sections instead of two. Furthermore, the department chair decreases
the overall number of sections from 29 down to 27, slightly increasing the number of students per
section of this course.
In the second disruption scenario of this test instance, an instructor (Instructor 80) who was
teaching three sections of a course (SM122) with 29 sections must suddenly depart USNA. The
department chair decides to give another instructor (Instructor 78) an additional section of this
course so he is now teaching a total of three sections instead of two. Furthermore, the department
chair decreases the overall number of sections from 29 down to 27, slightly increasing the number of
students per section of this course.
A.3 Fall AYE 2016
In the first disruption scenario of this test instance, an instructor (Instructor 35) who was teaching
one section of a course (SM261) with nine sections must suddenly depart USNA. The department
chair decides to give another instructor (Instructor 59) an additional section of this course so he
is now teaching a total of two sections instead of one. The number of total sections of this course
remains the same.
In the second disruption scenario of this test instance, an instructor (Instructor 22) who was
teaching three sections of a course (SM121) with 28 sections must suddenly depart USNA. The
department chair decides to give another instructor (Instructor 72) an additional section of this
course so he is now teaching a total of three sections instead of two. Furthermore, the department
chair decreases the overall number of sections from 28 down to 26, slightly increasing the number of
students per section of this course.
A.4 Spring AYE 2016
In the first disruption scenario of this test instance, an instructor (Instructor 54) who was teaching
two sections of a course (SM221) with ten sections must suddenly depart USNA. The department
chair decides to give two other instructors (Instructors 53 and 60) an additional section of this
course, so they are now each teaching a total of three sections instead of two. Consequently, the
number of sections for this course remains the same.
In the second disruption scenario of this test instance, an instructor (Instructor 7) who was
teaching two sections of a course (SA305) with six sections and one section of another course
(SM212P) with nine sections must suddenly depart USNA. The department chair decides to give
another instructor (Instructor 27) an additional section of the first course (SA305) and another
instructor (Instructor 8) an additional section of the second course (SM212P), so they are now each
teaching a total of three sections instead of two. Furthermore, the department chair decreases
the overall number of sections for SA305 from six down to five, slightly increasing the number of
students per section of this course.
A.5 Spring AYE 2017
In the first disruption scenario of this test instance, an instructor (Instructor 77) who was teaching
two sections of a course (SM212P) with seven sections must suddenly depart USNA. The department
chair decides to give two other instructors (Instructors 6 and 35) an additional section of this course,
so Instructor 6 is now teaching a total of three sections instead of two and Instructor 35 is now
teaching a total of two sections instead of one. Consequently, the number of sections for this course
remains the same.
In the second disruption scenario of this test instance, an instructor (Instructor 92) who was
teaching three sections of a course (SM221) with 17 sections is no longer able to teach. The
department chair decides to give another instructor (Instructor 65) an additional section of this
course, so she is now teaching a total of three sections instead of two. Furthermore, the department
chair decreases the overall number of sections from 17 down to 15, slightly increasing the number of
students per section of this course.