Outline
• Brief description of the GTAAP system
• Review ERA Algorithm
• Adaptations/Changes from Basic ERA Implementation – Optimization
• Demo/Results
• Future Research and Conclusions
ERA: Environment, Rules, Agents [Liu et al., AIJ 02]
• Environment is an n × a board
• Each variable is an agent
• Each position on the board is a value of the variable's domain
• Each agent moves within its own row of the board
• Each position records the number of violations caused by the positions currently occupied by the other agents
• Agents try to occupy positions where no constraints are broken ("zero positions")
• Agents move according to reactive rules
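The environment described above can be sketched in a few lines. This is a minimal illustration, not the ERA authors' code: the binary-constraint encoding, function name, and example data are all assumptions made for clarity.

```python
# Hypothetical sketch of the ERA environment: an n-by-a board where row i
# belongs to agent (variable) i and each column is a domain value. The
# violation value of a position counts constraints broken with the other
# agents' current positions. The conflict encoding is illustrative only.

def violation(agent, value, positions, conflicts):
    """Number of constraints violated if `agent` takes `value`,
    given every other agent's current position."""
    count = 0
    for other, other_value in enumerate(positions):
        if other == agent:
            continue
        if (agent, value, other, other_value) in conflicts or \
           (other, other_value, agent, value) in conflicts:
            count += 1
    return count

# Tiny example: 3 agents, domain {0, 1}; agents 0 and 1 may not share a value.
conflicts = {(0, 0, 1, 0), (0, 1, 1, 1)}
positions = [0, 0, 1]                          # current value of each agent
print(violation(0, 0, positions, conflicts))   # 1: clashes with agent 1
print(violation(0, 1, positions, conflicts))   # 0: a "zero position"
```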
Reactive rules [Liu et al., AIJ 02]
• Reactive rules:
– Least-move: choose a position with the minimum violation value
– Better-move: choose a position with a smaller violation value than the current one
– Random-move: choose a position at random
• Combinations of these basic rules form different behaviors.
Behavior   Reactive rules
LR         least-move with probability 1 - p, random-move with probability p
BR         better-move with probability 1 - p, random-move with probability p
BLR        first try better-move; if it fails, apply LR
rBLR       first apply better-move r times; if all fail, apply LR
FrBLR      apply rBLR in the first r iterations, then apply LR
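The three basic rules and the rBLR behavior from the table can be sketched as follows. This is an illustrative reading of the rules, not the paper's implementation; the function names and the `violations` list interface are assumptions.

```python
import random

# Hypothetical sketch of the basic reactive rules and the rBLR behavior.
# `violations` holds the current violation value of every position in the
# agent's row; `current` is the agent's current position index.

def least_move(violations):
    """Pick a position with the minimum violation value (ties at random)."""
    best = min(violations)
    return random.choice([i for i, v in enumerate(violations) if v == best])

def better_move(violations, current):
    """Pick one random position; succeed only if its violation value is
    smaller than the current one. Returns None on failure."""
    cand = random.randrange(len(violations))
    return cand if violations[cand] < violations[current] else None

def random_move(violations):
    return random.randrange(len(violations))

def rBLR(violations, current, r=3, p=0.02):
    """Try better-move r times; if all fail, apply LR:
    least-move with probability 1 - p, random-move with probability p."""
    for _ in range(r):
        cand = better_move(violations, current)
        if cand is not None:
            return cand
    return random_move(violations) if random.random() < p else least_move(violations)
```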
Big picture
• Agents do not communicate but share a common context
• Agents keep kicking each other out of their comfortable positions until everyone is happy
• Characterization: [Hui Zou, 2003]
– Amazingly effective in solving very tight but solvable instances
– Unstable in over-constrained cases
• Agents keep kicking each other out (livelock)
• Livelocks may be exploited to identify bottlenecks
Outline
• Brief description of the GTAAP system
• Review ERA Algorithm
• Adaptations/Changes from Basic ERA Implementation – Optimization
• Demo/Results
• Future Research and Conclusions
Implementation Details
• Use of the rBLR as the main behavior (random-move as a supporting behavior)
• Random-move occurs about 2% of the time (p = 0.02); the remaining 98% of moves come from rBLR's better-move/least-move rules
• r in rBLR is set to 3
• Termination: 150 time steps
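The loop described by these parameters can be sketched as below. This is a minimal reconstruction under stated assumptions, not the GTAAP code: `violation_of(agent, value, positions)` is an assumed callback, and all names are ours.

```python
import random

# Hypothetical sketch of the outer loop with the parameters on this slide:
# rBLR with r = 3, random-move probability p = 0.02, and a cap of 150
# time steps. `violation_of(agent, value, positions)` is an assumed
# interface returning the violation count of one position.

def era_solve(n_agents, n_values, violation_of, r=3, p=0.02, max_steps=150):
    positions = [random.randrange(n_values) for _ in range(n_agents)]
    for _ in range(max_steps):
        # Terminate early if every agent already occupies a zero position.
        if all(violation_of(a, positions[a], positions) == 0
               for a in range(n_agents)):
            return positions
        for agent in range(n_agents):
            row = [violation_of(agent, v, positions) for v in range(n_values)]
            cur = positions[agent]
            for _ in range(r):                    # better-move, up to r tries
                cand = random.randrange(n_values)
                if row[cand] < row[cur]:
                    positions[agent] = cand
                    break
            else:                                 # all r tries failed: apply LR
                if random.random() < p:
                    positions[agent] = random.randrange(n_values)  # random-move
                else:
                    positions[agent] = row.index(min(row))         # least-move
    return positions  # best effort after max_steps time steps
```

For example, with two agents that must take different values, the loop converges to a conflict-free assignment well within the 150-step cap.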
Additions Made to the Basic Algorithm
• Optimization: maximize the assigned GTA's preference for each course
• Each agent considers a new position a better move if:
– it has fewer constraint violations than the old one, or
– it has the same number of constraint violations, but its GTA has a higher preference ranking for this course than the current position's GTA
• In practice this markedly improved results, by forcing more movement and steering the search toward better values
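The modified better-move test above amounts to a lexicographic comparison. The sketch below is illustrative; the function name and the preference lookup are assumptions, not the GTAAP code.

```python
# Hypothetical sketch of the modified better-move test: a candidate
# position is "better" if it breaks fewer constraints, or breaks the
# same number while its GTA ranks this course higher.

def is_better(cand_violations, cur_violations, cand_pref, cur_pref):
    """Lexicographic comparison: fewer violations first,
    then higher GTA preference for this course."""
    if cand_violations < cur_violations:
        return True
    return cand_violations == cur_violations and cand_pref > cur_pref

print(is_better(2, 3, 0.1, 0.9))  # True: fewer violations always win
print(is_better(3, 3, 0.9, 0.1))  # True: tie broken by preference
print(is_better(3, 3, 0.1, 0.9))  # False
```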
Outline
• Brief description of the GTAAP system
• Review ERA Algorithm
• Adaptations/Changes from Basic ERA Implementation – Optimization
• Demo/Results
• Future Research and Conclusions
Results
• Fall 2007

                                      ERA Algorithm   Actual Assignment
  Quality/Utility Percentage               0.67              0.66
  Course Loads Remaining Unassigned        0.00              3.48
Results
• Spring 2007

                                      ERA Algorithm   Actual Assignment
  Quality/Utility Percentage               0.65              0.62
  Course Loads Remaining Unassigned        0.33              4.16
Results
• Fall 2004

                                      ERA Algorithm   Actual Assignment
  Quality/Utility Percentage               0.57              0.58
  Course Loads Remaining Unassigned        0.75              2.00
Future Research
• Testing on upcoming semesters to see how well the approach aids the real-world assignment process
• Allowing low-priority courses to remain unassigned
• Investigating other local search techniques (genetic algorithms, etc.)
• Creating hybrids of local search methods
• Investigating ways to mimic the human process (greedy, yet still making a few "back changes")
Conclusions
• Testing suggests this approach will be a strong aid in the assignment process
• Its results are approximately equal to or better than the human-generated solutions, though this still needs confirmation in real-world use
• The approach is well suited to situations where a decent solution is needed in a relatively short amount of time