ATHABASCA UNIVERSITY
CAN TEST DRIVEN DEVELOPMENT IMPROVE POKER ROBOT
PERFORMANCE?
BY
EDWARD SAN PEDRO
An essay submitted in partial fulfillment
of the requirements for the degree of
MASTER OF SCIENCE in INFORMATION SYSTEMS
Athabasca, Alberta
March, 2008
© Edward San Pedro, 2008
ABSTRACT
Is it possible to create a poker playing robot that can beat a professional human
player? Researchers are working to show that the answer is yes. The game of poker
is extremely popular and it has caught the attention of the computer science community.
We believe that some practical approaches to software development can potentially
improve the performance of poker robots. Poker robot development can benefit from the
application of Test Driven Development and performance testing.
This essay investigates the current research that is available for poker robot
development and Test Driven Development. To help support the potential benefits of
combining testing with poker robot development, a simple poker robot is modified with
Test Driven Development and performance testing techniques and tools. This
programming exercise helps expand the research results by applying some of the theory to
a hands-on situation. The results of this exercise provide evidence to support the use of
Test Driven Development in improving poker robot performance. Poker robot developers
may find Test Driven Development to be a very useful approach to designing their
software. However, Test Driven Development is only one of many potential tools that can
be used.
ACKNOWLEDGMENTS
I would like to thank Mai for her support. She also provided me with the
inspiration to create a unique poker robot name. I would also like to thank my essay
supervisor Dr. Dunwei Wen for his patience and guidance. A final thank you to the
School of Computing and Information Systems for their support and influence.
Table of Contents
Chapter I ............................................................................................................................... 1
Introduction ..................................................................................................................... 1
Chapter II ............................................................................................................................. 3
Poker and Artificial Intelligence ..................................................................................... 3
Artificial Intelligence and Poker Overview ................................................................ 3
Test-driven Development ................................................................................................ 6
Test-Driven Development Research Overview .......................................................... 6
Applying TDD to Poker robot development ................................................................... 8
Poker Robot Design Using Test-Driven Development and Performance Testing ... 10
Chapter III .......................................................................................................................... 11
Poker Robots and Test-Driven Development ............................................................... 11
Current State of Poker Robot Development ............................................................. 11
Hold'em Poker .......................................................................................................... 16
Current State of Test-Driven Development .............................................................. 19
Poker robot development and TDD .......................................................................... 21
Chapter IV .......................................................................................................................... 22
Research Study Results ................................................................................................. 22
Applying TDD and Performance Testing to Poker Robot Development ................. 22
Poker Robot Development with TDD ..................................................................... 24
TDD and Unit Testing .............................................................................................. 30
Performance Testing ................................................................................................. 37
Poker Robot and Test-Driven Development Discussion .......................................... 44
Chapter V ........................................................................................................................... 51
Research Implications ................................................................................................... 51
Research Results Limitations ................................................................................... 51
Rationale for the research ......................................................................................... 52
Chapter VI .......................................................................................................................... 57
Conclusions ................................................................................................................... 57
References .......................................................................................................................... 60
Appendix A – Program documentation ............................................................................. 62
Running Project tests ..................................................................................................... 62
Appendix B - Poker Hands ................................................................................................ 65
CHAPTER I
INTRODUCTION
Poker is an extremely popular game. Many people are drawn to this card game.
Many poker tournaments are televised and some successful professional players are
famous. Poker has also captured the interest of the computer science and artificial
intelligence community [1]. Poker playing programs, called poker robots, have been
written. The game of poker presents challenging problems in the quest to create a
robot that can play its hands strongly. Research has been conducted with regard to
poker robots playing at a competitive level. Is it possible to improve on existing poker
robot designs and algorithms?
Test Driven Development (TDD) is a software development approach that focuses
on testing [2, 3]. This approach to software development is being used in university
course work. TDD is also being used by software professionals. There are additional
development tools that support TDD and performance testing. Since poker involves
decision making that should be done within a reasonable amount of time, it would be
interesting to investigate how useful TDD and performance testing would be in poker
robot development. This essay analyzes this subject and provides useful results that
support the use of TDD and performance testing in poker robot development.
Research in the fields of poker, artificial intelligence, TDD, and performance
testing was conducted to identify potential connections between the subject matter. In
order to illustrate the potential use of TDD and performance testing in poker robot
development, an existing poker robot was modified. The results of this research show that
TDD and performance testing are very useful practices to use when developing poker
robots.
The format of the essay is as follows. Chapter II provides an introduction to poker
robot development and artificial intelligence, TDD and performance testing. In chapter
III, this essay digs deeper to look at the existing research in poker robot development and
artificial intelligence, TDD, and performance testing. This chapter also provides more
insight into the rules of poker and characteristics of strong players. This chapter discusses
the potential benefits and challenges of applying TDD and performance testing to poker
robot development, which leads into Chapter IV, which provides a discussion on research
results that include an exercise in applying TDD and performance testing to a simple
poker robot design. Chapter V looks at the implications of the research results from
chapter IV.
CHAPTER II
POKER AND ARTIFICIAL INTELLIGENCE
Artificial Intelligence and Poker Overview
Poker popularity is currently at a high level. Poker shows are frequently found on
television, and poker playing is a global phenomenon, with world-class players
found all over the world. With high poker interest and the growth of the Internet, on-line
poker playing continues to grow [1]. Many on-line poker players play at varying stakes
from penny antes to high limit games. On-line poker rooms reach people that do not
normally have access to casinos.
Poker has also caught the interest of the computer science community. The nature
of poker makes it an interesting subject for artificial intelligence (AI) research [4]. A
poker game provides an interesting environment for AI since each player conceals the
hand that he or she holds. The University of Alberta (U of A) believes that their poker
research shows that they are close to developing a computer program that can compete at
a world class level [5]. Applying AI to the poker game attracts so much interest
that a poker competition called the World Series of Poker Robots was held in Las Vegas
three years ago [1].
The University of Alberta Computer Poker Research (Poker Research) group
states that its poker AI is "the strongest in the world [4]". This research group's web-site
provides many references to its publications and other useful links related to poker. This
group focused its research on poker because the game "is an interesting test-bed for
artificial intelligence research [6]." Poker provides challenging artificial intelligence
research because it involves imperfect information, "where multiple competing agents
must deal with risk management, agent modeling, unreliable information and deception,
much like decision-making applications in the real world [6]."
Chess and checkers are examples of perfect information games because the game
board holds all the information required, which is visible to both participants. Computer
AI research in perfect information games, such as chess, often relies on brute-force searches.
Researchers discovered that a strong chess playing computer program must be able to
calculate moves quickly. One U of A paper [7] reports that IBM's Deep Blue computer
calculates 250 million chess positions per second. Many card games provide examples of
imperfect information games, such as bridge and poker. Poker is an imperfect
information game because each player in the game should not know the cards that other
players hold. A brute-force search is unsuccessful in imperfect information games
because "it is often impractical to search the game trees that result from all possible
instantiations of the missing information [7]." The Poker Research group decided to use
the Texas Hold'em (Hold'em) poker variant for AI research.
Progress in poker AI research may provide benefits to other situations with
imperfect information. There are many situations where complete information is not
available, such as forecasting product demand, or managing company purchasing by
guessing the strategies used by business rivals [8]. This research may also provide
benefits for the general public. One example is the use of AI to provide investing
guidance. This type of software may help out people that normally would not have the
opportunity to participate in investing. Other interesting imperfect information situations
include political issues, conflict mediation, legal issues, and project management.
As computers provide increasing processing power, poker AI algorithms will be
able to conduct more calculations for poker playing purposes. This ability to make
additional computations may alter the current poker AI algorithms. Researchers may
need to look at adjusting the algorithms as more processing power is made available. In
contrast, researchers may also try to investigate shortcuts in their poker AI algorithms,
which the U of A Poker Research group has already started [5]. Research into efficient
algorithms may discover computations that are unnecessary or provide no practical
benefit; these can then be removed or altered.
Poker AI research should also be examined for other real world applications. If
the poker AI algorithms can be used to benefit other fields of study, then researchers may
have another source of effective techniques for implementing these algorithms in other
imperfect information games. If artificial intelligence research involving software can be
improved through the use of performance testing, then TDD should be reviewed by
researchers.
TEST-DRIVEN DEVELOPMENT
Test-Driven Development Research Overview
Test-Driven Development (TDD) refers to the software development approach
where the programmers write tests before writing production code. With TDD, the same
programmer writes the test and production code. The tests serve the purpose of driving
the design and implementation of the development tasks. TDD is performed in small
iterations. First, the programmer runs a test to show that it fails. Second, the programmer
writes just enough production code to make the test pass. Third, the test is executed again
to show success. Finally, the programmer examines the code to identify any possible
improvements to the production code, such as removing code duplication. This process is
repeated for every cycle. Each time another test is added, all existing tests are executed.
TDD also serves regression testing purposes, which is a valuable part of maintaining
software quality and reducing risks with change management [9, 10].
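As a minimal sketch of one such cycle, consider adding a method that detects a paired hole-card holding. The class and method names here are invented for illustration, and plain Java checks stand in for a testing framework such as JUnit:

```java
// Sketch of one TDD iteration. The test in main() was written first and
// failed against an empty implementation (step 1); isPair() below is the
// minimal production code that makes the test pass (steps 2 and 3);
// step 4 would be a review of the code for possible improvements.
public class HoleCards {
    private final int rank1; // card ranks 2..14, where 14 represents an ace
    private final int rank2;

    public HoleCards(int rank1, int rank2) {
        this.rank1 = rank1;
        this.rank2 = rank2;
    }

    public boolean isPair() {
        return rank1 == rank2;
    }

    static void check(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }

    public static void main(String[] args) {
        check(new HoleCards(14, 14).isPair(), "pocket aces should be a pair");
        check(!new HoleCards(14, 13).isPair(), "ace-king should not be a pair");
        System.out.println("All tests pass");
    }
}
```

In practice these checks would be JUnit test methods, so that the entire suite can be re-run automatically each time a new test is added.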
Programmers can use software tools and development environments to help with
the TDD process. These tools make it easier for programmers to create and execute tests.
Tools that promote TDD and testing practices can help programmers focus on creating
high quality software solutions. TDD can help to reduce bugs, which can take a
lot of time to track down and fix. A suite of tests that can be executed easily can reduce
the risk of spending too much time debugging [11].
Extreme programming (XP) is a software development approach that aims to
follow strict programming practices. A large part of following XP requires the software
team to develop unit tests and continuously integrate newly developed code into the
overall software project. For example, some traditional software approaches may focus
on breaking down the overall project, developing the smaller parts and integrating them
as a final step. With XP, the overall project is also broken down, but integration takes
place right after a piece of code is complete. In XP, integration is a continuous process,
which helps to ensure the entire project works fully. An important part of XP that
contributes to the success of continuous integration is unit testing.
In XP, a software team designs unit tests to test individual methods and classes.
For example, if a software program is written in Java, the unit tests are also written in
Java [2]. Unit tests should be written for any complex code and any nontrivial
implementation of a method. Ideally, a unit test should be written for anything that might
break. In XP, unit tests must always pass. This requirement helps to ensure that
continuous integration does not break any functionality that currently works. The
discussion now moves on to applying these practices to poker robot development.
APPLYING TDD TO POKER ROBOT DEVELOPMENT
TDD provides many useful techniques that can help with the progress of poker
robot research. This essay will now discuss the application of TDD to poker robot
development. The TDD approach focuses on incrementally adding functionality to
software. In the case of poker robots, TDD will force developers to create test cases that
demonstrate how the poker robot will function. These test cases help to reinforce initial
design decisions, or challenge assumptions. As the poker robot code base grows, the
number of test cases increases. These test cases provide important feedback whenever
changes are made to the source code. If a test case suddenly starts to fail when a change is
made, the developer can review the change to figure out how it caused the test to fail.
This approach to poker robot development can help to increase the quality of the source
code [12].
TDD can also act as an important documentation tool. Test cases verify the
functionality of the source code. Developers read the test case code and they gain an
understanding of the functionality and the design under test. Although test cases may not
completely meet the documentation needs of a poker robot product, these tests evolve
with the design of the source code. In the event that existing documentation may not be
up to date, the test cases can be used to help developers gain an understanding of a poker
robot developed with TDD [13].
TDD may also provide benefits from the perspective of poker robot research [14].
The regression testing and incremental design implementation may make it easier to
transfer knowledge to other researchers. Different researchers may be involved in the
development of a poker robot. In the university setting, development may occur at
different times. The test cases and the source code provide insight into the design of the
poker robot. The test case code shows the practical application of the theory behind poker
robot development. TDD may help researchers improve the application of theory to poker
robot programming. If the TDD approach includes performance testing, developers have
ways to quantify acceptable poker robot performance. A poker robot that can play more
hands in a given time frame will generate more data as well. Researchers can use this data
to determine the playing strength of the poker robot. This quick turnaround in feedback
can lead to fast implementation of changes. Speeding up the poker robot development can
lead to greater progress in this area of research. The benefits of applying TDD and
performance testing to the area of poker robots may also be useful in other areas of
computer science and artificial intelligence [15, 16].
The research for this essay consists of two stages. First, supporting literature was
found for the two main research topics of poker robot development and TDD. The TDD
research includes performance based testing. Second, a simple poker robot was designed
to evaluate the testing tools available for poker robot software development. The poker
robot development testing results provide useful information on how performance testing
may benefit future research in artificial intelligence. The program described in this essay
helps to illustrate the TDD and performance test process as it may apply to poker robot
research.
Poker Robot Design Using Test-Driven Development and Performance Testing
A poker robot program is available with the Poker Academy software product
[17]. The program is written in the Java programming language. The JUnit testing
framework is used to run the tests created for this program [18]. JUnitPerf performance
tests are available for this program [19]. The poker robot program was modified so that
JUnit and JUnitPerf tests could be created. These tests serve the purpose of verifying that
the program behaves as expected. The Eclipse development environment software was
used to write the software associated with this poker robot [20]. Eclipse provides testing
tools that allow programmers to execute JUnit and JUnitPerf tests.
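The essence of a JUnitPerf-style performance test is a time budget attached to an existing test. The sketch below imitates that idea in plain Java; `evaluateHand` is a hypothetical stand-in for whatever poker robot operation is being timed, and the one-second budget is an arbitrary choice for illustration:

```java
public class TimedTestSketch {
    // Stand-in for the operation under test; a real poker robot would
    // evaluate a hand or choose a betting action here.
    public static long evaluateHand() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += i;
        return sum;
    }

    // Fails (throws) if the operation exceeds its time budget, which is
    // the essence of what a JUnitPerf timed test checks.
    public static void assertCompletesWithin(long budgetMillis, Runnable op) {
        long start = System.nanoTime();
        op.run();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        if (elapsedMillis > budgetMillis) {
            throw new AssertionError(
                "took " + elapsedMillis + " ms, budget " + budgetMillis + " ms");
        }
    }

    public static void main(String[] args) {
        assertCompletesWithin(1000, TimedTestSketch::evaluateHand);
        System.out.println("performance test passed");
    }
}
```

The real JUnitPerf library achieves this by wrapping an existing JUnit test in a decorator, so functional tests can be reused as performance tests without modification.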
CHAPTER III
POKER ROBOTS AND TEST-DRIVEN DEVELOPMENT
Current State of Poker Robot Development
Poker robot development poses a number of different challenges. First, the
development team must have a good understanding of the rules of the game. Many poker
robot researchers have a strong interest in the game. Some researchers grew up playing
the game with their families. Other people have experience playing poker in casinos or
card rooms. The Internet also provides many people with a convenient option if they
would like to play cards. Second, poker robot developers must understand the skills that
are required to be a good player. A good poker player must play with a positive
expectation. Successful poker playing can be considered both an art and a science. Players
must be able to assess the strengths and weaknesses of their opponents. These players
must also understand the mathematical aspect of the game. Third, meaningful results from
poker robot research can only be produced if a poker robot is programmed to make
decisions quickly. Poker robot decisions on poker hands should take no more than a few
seconds [21].
What is required for a poker bot to become a strong poker player? One, a strong
poker player knows how to bet appropriately. Two, a strong player can determine the
playing styles of their opponents, which is known as opponent modeling, and use this information to
exploit weaknesses. Three, a strong poker player must be disciplined to play winning
poker. If a player allows emotion to affect their play, then they may play poorly.
Successful poker playing involves a number of different strategies, such as deception,
opponent modeling, and dealing with uncertainty [22].
The U of A Poker Research group uses mathematics and a random number
generator to determine betting strategies for its poker bot. The Poker Research group uses
game theory to create a near optimal betting strategy for its poker bot [5]. The Poker
Research group implements opponent modeling by using statistical methods. Data on
how opponents play are saved and used in subsequent poker playing. The Poker Research
group admits that opponent modeling is "a limiting factor to the overall play [23]." Poker
bots can play poker without emotion becoming a factor. This quality of a strong poker
player was not a concern to the Poker Research group.
The poker AI research conducted by the U of A provides the necessary
information to make a prediction on what the future holds for poker bots. The U of A
poker AI research suggests that poker bots without opponent modelling can already defeat
strong human players and be competitive against world-class opposition [5]. One version
of a poker bot developed by the U of A played against a world-class human player. The
human player acknowledged that the poker bot was a challenging opponent [5]. This
statement provides an opinion that poker bots may be able to play winning poker
against world-class opposition in the future.
Poker programs are showing improvements in playing the game, which includes
ways to handle different forms of uncertainty [21]. Other poker players have different
opinions. Professional poker player and Card Player magazine columnist Barry
Tanenbaum states: "In a mathematical sense a computer can assess its chances. But it
can't see someone twitch." Poker expert David Sklansky also believes that a program can
theoretically play the game at a reasonably high level, but he believes that a computer will
never be able to beat him. Author Alan Schoonmaker counters by stating that
computers can be programmed to play optimally at all times. Schoonmaker concludes that
computers will eventually beat the poker equivalent of a chess grandmaster [8].
Strong poker players believe that trends can be recognized from observing an
opponent's betting pattern. These trends are easier to observe where there are only 2 or 3
players in a game. For larger games, more general trends are observed, since it can be
difficult to keep track of 8 or 9 opponents. This information leads to the belief that a
poker bot can be developed to play strong poker at a high level. However, due to the luck
aspect of the game, it may be difficult to conclude that a poker bot will be able to play at a
world-class level without extensive data of play against world-class opposition.
The U of A based poker AI research that evolved into the Poker Academy product
may only need a little adjustment to their poker playing algorithms to play poker at a
world-class level. The only people that really know the strength of play by the poker AI
are the programmers involved with Poker Academy development. However, the general
public may find out about the progress of the Poker Academy AI in the near future.
Poker Academy boasts of endorsements from professional poker players [17]. Having
purchased the Poker Academy product, the author of this report believes that the AI in the
software is already very strong and that this version of the AI is probably not the
optimized version. The optimum poker bot AI may already be capable of soundly
defeating strong opposition.
Since brute-force methods are not useful in poker, opponent modeling algorithms
are required. If these algorithms have already reached their limitations, then it may be
difficult to create a poker bot that plays at a world-class level. This discovery may be a
possibility, but seems highly unlikely due to the results of the research from the U of A
Poker Research group.
To provide strong evidence to support the playing strength of poker bots, these
robots need to be matched up against many different world-class players. The winning
percentage of the poker bots will show whether the poker AI algorithms can compete at a
world-class level. The poker bots must play many hands against world-class opposition
for the results to represent valid trends. One of U of A's poker bots played 7000 hands
against a world-class player [5]. Although this sample did not provide statistically
conclusive results, the world class player showed winning results against the poker bot.
However, the poker bot provided the world class player with some challenging play. The
U of A research team believes that a strong player should be able to consistently beat the
poker robot. The U of A sample size shows that poker bots should be put into play as
much as possible. Having poker bots play more hands means more meaningful data
points for examination.
The poker research conducted by the University of Alberta provides high level
aspects to consider when designing a poker robot. First, a poker playing strategy should
be implemented. This betting strategy implements the decision making requirements for a
given poker game. The University of Alberta focused on the poker game of Hold'em.
Second, an opponent modeling module should be implemented. Opponent modeling
provides the poker robot with information on how its opponents play. This information
can be exploited to increase potential profitability [24].
Poker robot playing strategies can be divided into four main categories. First, a
heuristic based approach can be used. In this approach, the poker playing is based on a set
of rules. Second, a simulation based approach is possible. In this approach, a poker robot
would run simulations for each possible action. Then the poker robot would be
programmed to select one of these actions, based on profitability. The third category
involves game theory. Using game theory is an effective strategy for poker because it can
be used to achieve the best case results against a worst case adversary. A game theory
strategy can also be known by an opponent, but the opponent cannot gain an advantage or
exploit this knowledge. Finally, the last category is a heuristic search based approach to
poker playing. This approach attempts to plan a playing strategy by performing backward
induction. Backward induction is used to consider the various end states for a given poker
hand. Each potential action is given a value according to the corresponding end states.
The given values provide the poker robot with an appropriate action to take [24].
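A toy example of the first, heuristic-based category can make the idea concrete. The thresholds below are invented purely for illustration; a real robot would derive hand strength and pot odds from actual game calculations:

```java
// A toy rule-based (heuristic) betting strategy. The decision is driven
// by a small set of fixed rules rather than simulation or game theory.
public class HeuristicStrategy {
    public enum Action { FOLD, CALL, RAISE }

    // handStrength: estimated probability (0.0-1.0) of holding the best hand.
    // potOdds: cost of calling divided by the resulting pot size.
    public static Action decide(double handStrength, double potOdds) {
        if (handStrength >= 0.8) return Action.RAISE;    // very strong: bet for value
        if (handStrength >= potOdds) return Action.CALL; // profitable to continue
        return Action.FOLD;                              // not worth the price
    }

    public static void main(String[] args) {
        System.out.println(decide(0.9, 0.2)); // RAISE
        System.out.println(decide(0.3, 0.2)); // CALL
        System.out.println(decide(0.1, 0.2)); // FOLD
    }
}
```

A rule set like this is easy to unit test, which is one reason the heuristic category pairs naturally with TDD: each rule can be pinned down by a test before it is written.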
Opponent modeling can be implemented in different ways. First, a strategy model
attempts to learn an opponent's behaviour in poker playing. A strategy model maps an
opponent's decision making to a probability distribution over the opponent's available
play options. Second, an observation model is used where the playing decisions are based
on the information available to the decision maker at each point in the poker game. An
observation model is created by making attempts to measure the probability of observing
hypothetical game actions [24].
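A simple way to realize the strategy-model idea above is to count an opponent's observed actions and normalize the counts into a probability distribution. The class below is an illustrative sketch, not taken from any published implementation:

```java
import java.util.EnumMap;
import java.util.Map;

// A minimal frequency-count strategy model: it maps an opponent's
// observed decisions to a probability distribution over the available
// play options, as described in the text.
public class StrategyModel {
    public enum Action { FOLD, CALL, RAISE }

    private final Map<Action, Integer> counts = new EnumMap<>(Action.class);
    private int total = 0;

    // Record one observed opponent decision.
    public void observe(Action action) {
        counts.merge(action, 1, Integer::sum);
        total++;
    }

    // Estimated probability that the opponent takes the given action.
    public double probability(Action action) {
        if (total == 0) return 1.0 / Action.values().length; // uniform prior
        return counts.getOrDefault(action, 0) / (double) total;
    }

    public static void main(String[] args) {
        StrategyModel model = new StrategyModel();
        model.observe(Action.CALL);
        model.observe(Action.CALL);
        model.observe(Action.RAISE);
        model.observe(Action.FOLD);
        System.out.println(model.probability(Action.CALL)); // 0.5
    }
}
```

A real opponent model would condition these counts on context, such as the betting round and the prior action, rather than pooling all decisions together.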
Hold'em Poker
Poker is a card game, usually played for money. Many different variants of poker
exist, but one variant that receives a lot of attention is Hold'em poker. Hold'em poker is
played to decide the winner of the main event of the World Series of Poker. Hold'em
poker is played by dealing each player two hole cards face down. After the
two hole cards are dealt, there is a betting round. Then, three community cards are dealt
face up. These three cards are known as the flop. After the flop is dealt, there is another
betting round. Then, another community card is dealt face up, which is known as the turn.
Now, another betting round is conducted. After this betting round is complete, a fifth and
final community card is dealt face up, which is known as the river. Then, a final betting
round is conducted. Once the final betting round is complete, any remaining players show
their cards to determine the winner. The winner holds the best five-card poker hand. The
winner then takes all the money accumulated from the betting rounds, also known as the
pot. Hand rankings are available in the appendix. These rankings provide an idea of the
odds associated with the different poker hands [8].
Hold'em poker, like any other poker variant, involves a mixture of skill and luck.
Poker theory is based in mathematics. Any hold'em poker player has the ability to
calculate their chances of forming a poker hand by analyzing the cards that they hold, the
community cards that are dealt face up, and the number of cards that are unknown. The
unknown cards consist of the cards dealt to other players and the cards that remain in the
deck. With all of this card information, a player can calculate their chances of forming a
hand using probability theory. However, mathematics is only one aspect of poker. To
become successful, a poker player must also be able to determine how their opponents
play the game. Winning poker players are strong opponent modellers. Opponent
modelling provides great challenges to the AI research community. Poker resources on
the Internet show that there is interest in developing strong poker playing AI software.
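The kind of calculation described above can be sketched with a standard worked example: a player on a flush draw after the flop has seen five cards (two hole cards plus the flop), leaving 47 unknown cards, nine of which complete the flush. The method names below are illustrative:

```java
// Worked example of the probability calculation described in the text,
// using a flush draw after the flop: 9 outs among 47 unseen cards.
public class DrawOdds {
    // Probability of hitting one of `outs` cards on the next card dealt,
    // given `unknown` unseen cards.
    public static double nextCard(int outs, int unknown) {
        return (double) outs / unknown;
    }

    // Probability of hitting by the river: 1 - P(miss turn) * P(miss river).
    public static double byRiver(int outs, int unknown) {
        double missTurn = (double) (unknown - outs) / unknown;
        double missRiver = (double) (unknown - 1 - outs) / (unknown - 1);
        return 1.0 - missTurn * missRiver;
    }

    public static void main(String[] args) {
        System.out.printf("Turn: %.1f%%%n", 100 * nextCard(9, 47));    // ~19.1%
        System.out.printf("By river: %.1f%%%n", 100 * byRiver(9, 47)); // ~35.0%
    }
}
```

Calculations of this kind are exact and deterministic, which makes them another natural target for unit tests in a poker robot code base.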
A poker robot that is developed to play Hold'em will be based on the betting
rounds during a single poker hand. First, a pre-flop decision model should be
implemented. Pre-flop decisions are based on the two hole cards that are dealt to a
player. Second, a decision model should be implemented for the betting rounds after the
flop, or post-flop. The post-flop decisions involve actions immediately after the flop, after
the turn (fourth community card), and the river (fifth and final community card). These
decisions complete the betting action requirements in Hold'em. A poker robot does not
require opponent modeling to play poker. Opponent modeling allows a poker robot to
play a stronger game, which may result in positive expectation results.
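The street-by-street structure described above can be sketched as one decision model per betting round. The names and the always-passive choices below are placeholders, intended only to show how each round's model could be developed and tested independently:

```java
import java.util.EnumMap;
import java.util.Map;

// One decision model per betting round, following the pre-flop/post-flop
// split described in the text. The toy models here always make the same
// passive choice, purely to show the structure.
public class DecisionModels {
    public enum Street { PRE_FLOP, FLOP, TURN, RIVER }
    public enum Action { FOLD, CHECK, CALL, BET, RAISE }

    private static final Map<Street, Action> MODELS = new EnumMap<>(Street.class);
    static {
        MODELS.put(Street.PRE_FLOP, Action.CALL); // toy pre-flop model
        MODELS.put(Street.FLOP, Action.CHECK);    // toy post-flop models
        MODELS.put(Street.TURN, Action.CHECK);
        MODELS.put(Street.RIVER, Action.CHECK);
    }

    // Dispatch to the model for the current betting round.
    public static Action decideFor(Street street) {
        return MODELS.get(street);
    }

    public static void main(String[] args) {
        System.out.println(decideFor(Street.PRE_FLOP)); // CALL
        System.out.println(decideFor(Street.RIVER));    // CHECK
    }
}
```

Splitting the robot along betting rounds in this way means each model is a small unit with a clear interface, which fits the incremental, test-first style that TDD promotes.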
In a regular casino poker game, decisions are usually made within one
minute. In an on-line poker game, decisions must be made within a time limit that is
usually under one minute. Poker robots that play against other poker robots have the
flexibility to take any amount of time to make decisions. However, a poker robot that can
make faster decisions provides researchers with more results of its play in a shorter
period of time.
Current State of Test-Driven Development
Software development with TDD is researched and studied in universities and
colleges, and many software projects are managed by professionals whose source code
practices vary depending on the people involved. Practical application
of TDD is also supported by software tools that are available for many different
programming languages. Test-driven development has been gaining popularity over the
past few years. Some of the TDD techniques that are described in articles have been
employed by many experienced software developers over the last twenty years [18].
Some software engineering research focuses on the overall benefits of TDD.
Effective use of TDD can lead to a meaningful decrease in defects. A study at North
Carolina State University found that code developed using TDD had 40% fewer
defects than a baseline product that was developed using a more traditional
approach. Traditional methods of software development include requirements gathering,
coding and debugging, and automated or manual testing. As mentioned earlier, the main
difference between TDD and traditional software development is that TDD involves test
writing before production source code is written. A software development team at IBM
incorporated TDD practices into one of their projects. The project started with a UML
diagram design process. From the resulting UML diagrams, the development team wrote
code using TDD development. The IBM development team was relatively inexperienced,
but they had strong leadership and support to help with adopting the TDD process [10].
TDD also promotes increased test writing and coverage, which can also improve
productivity. A study showed that the measured quality of a program increased with the
number of programmer tests written [11]. Since TDD promotes iterative software
development, programmers can also program at a rapid pace. Programmers can also
become agile by effectively handling changes to requirements. The use of automated and
regression test runs promotes a sense of confidence in making changes to a software
project. The test runs provide the required verification to avoid disruption in a software
project [13].
Some experienced software developers believe that TDD promotes a professional
approach to software development. When TDD is combined with software project
management, the software development tasks can be analyzed as they relate to the tests.
The source code can be analyzed to determine the number of tests and the code coverage
for production code. These metrics can help the development team identify areas in the
code base that require test coverage. TDD promotes a proactive approach to the software
verification process [12].
A number of tools exist to aid in the adoption of TDD. These software
development tools provide programmers with a test framework that makes test writing
easier. For the Java programming language, the JUnit test framework can be used to write
tests [18]. These tests are structured so that they can be executed in an automated fashion.
The Eclipse integrated development environment (IDE) supports the JUnit testing
framework [20]. Eclipse is a powerful and popular Java development tool. A performance
testing tool is also available for Java. JUnitPerf is a performance testing framework that
gives Java programmers the ability to measure the performance of JUnit tests. The
combination of Eclipse, JUnit, and JUnitPerf provides the necessary tools to create
performance tests for a poker robot.
Poker robot development and TDD
Using TDD with poker robot development can help researchers focus on
measuring and improving performance. First, tests can be created to verify that a poker
robot makes the expected decisions when playing a specific hand. Second, once tests are
created for a poker robot, these tests can be analyzed for performance. These performance
tests can be used to measure the time it takes for a poker robot to make decisions. It can
also measure other modules of a poker robot. These performance test results can be
combined to provide a more complete picture of the time it takes for a poker robot to
complete a decision in a game.
CHAPTER IV
RESEARCH STUDY RESULTS
Applying TDD and Performance Testing to Poker Robot Development
The literature reviewed concerning poker artificial intelligence research shows
that there may be benefits that extend past the poker game. Since poker is a game with
imperfect information, it could be useful in other real life situations. One potential benefit
is in the investing field. Investing research involves a number of different variables. The
artificial intelligence in poker may provide useful models that can be used in investing
(reference). The opponent modeling component in poker robots can also be used in other
applications.
The Poker Academy poker software [17] provides an application programming
interface for creating poker robots. Poker Academy also provides the Java source code for
a basic poker robot, SimpleBot.java. SimpleBot will not be assessed for its quality of
poker playing. It will be used as the basis for JUnit and JUnitPerf test case writing. The
main focus will be on the potential benefits of creating JUnitPerf performance tests for
SimpleBot. SimpleBot was used as a starting point to create another poker robot, MaiBot.
The robots were written in the Java programming language. These poker robots can be
used with the Poker Academy software. Poker Academy allows the user to plug-in poker
robots to the software. The poker robots then play hands with other robots that are already
available in Poker Academy. Alternatively, human players can play against poker robots
in Poker Academy. MaiBot will be written using TDD. MaiBot is compatible with Poker
Academy version 2 [17].
SimpleBot is programmed to play the fixed limit Hold'em variant of poker. The
SimpleBot source code was written by an employee of BioTools, the company that
created and sells Poker Academy. The decisions that SimpleBot makes can be divided
into two main sections: the first relates to pre-flop action and the second to post-flop
action. SimpleBot is distributed
with the Java API, Meerkat, that supports development for Poker Academy. The Meerkat
API provides all the necessary details to plug a poker robot into Poker Academy to play
poker. SimpleBot consists of one file and class, SimpleBot.java. This class contains all of
the required decision making logic for playing poker. SimpleBot contains a method,
preFlopAction(), that determines SimpleBot's decision before the flop. SimpleBot's
method, postFlopAction(), determines the decisions to make on the three betting rounds:
after the flop, the turn, and the river.
The main goal behind MaiBot development is to illustrate the use of TDD and
performance testing with poker robot development. The Poker Academy software
provides the necessary environment for poker playing robots. Along with the Meerkat
API, this software features different poker robots that can serve as opponents to human
players or to custom robots, such as MaiBot. Games between robots can be automated,
so hands can be played between robots without any human intervention.
Poker Robot Development with TDD
The MaiBot decision making process is exactly the same as SimpleBot's. For the
purposes of this essay, nothing was changed with respect to SimpleBot's decision making.
MaiBot differs from SimpleBot in a number of ways. Using the TDD approach, the
SimpleBot design was broken into different classes. The SimpleBot functionality was
analyzed and JUnit test cases were created for MaiBot. Following the TDD approach,
MaiBot source code was written to make the JUnit tests pass. The incremental TDD
cycles of writing a test, writing code, and verifying successful test runs were followed
throughout the MaiBot development process. JUnit tests were created for the decision
making classes within the MaiBot design. At this point, the MaiBot source code was
analyzed for performance testing. This process involved the use of JUnitPerf to create
tests that measured the execution time of the JUnit tests that were created earlier. At the
end of this exercise, the MaiBot source code provides the same type of functionality that the
original SimpleBot exhibits. However, MaiBot includes a number of tests that can be
executed easily. MaiBot also features a few performance tests that show that some of its
methods execute under a given time threshold.
Development on MaiBot started by breaking down SimpleBot into smaller classes.
Since SimpleBot already provided a fully functioning poker robot that can be used with
Poker Academy, the focus was to improve the design. The poker play decision making in
SimpleBot was analyzed. SimpleBot contains logic to decide on pre-flop and post-flop
actions. To begin development on MaiBot, two test classes were created,
PreFlopModelTest and PostFlopModelTest. These test classes contain the JUnit tests that
are used to test these two MaiBot classes: PreFlopModel and PostFlopModel. These two
classes contain the SimpleBot methods that handle the appropriate decision making
actions. This separation was made to keep the MaiBot design more intuitive and easier to
follow than SimpleBot's. A third test class, HoldEmActionTest, was created to test the
MaiBot class, HoldEmAction. HoldEmAction consumes the two decision making classes.
The JUnit tests were written and executed using the Eclipse development software.
Figures 1 to 3 show the MaiBot project with the three test classes and 38 tests included
in a successful JUnit test run.
Figure 1: Eclipse IDE – Unit Test Run, HoldEmActionTest.java
Figure 2: PreFlopModelTest.java.
The following table summarizes the tests created for MaiBot.
Test Class and Tests                     Description

PreFlopModelTest/HoldEmActionTest
  testPreFlopAces                        Verify raise with this hand.
  testPreFlopKings                       Verify raise with this hand.
  testPreFlopQueens                      Verify raise with this hand.
  testPreFlopJacks                       Verify raise with this hand.
  testPreFlopTens                        Verify raise with this hand.
  testPreFlopNines                       Verify call with this hand.
  testPreFlopActionAceKingSpades         Verify raise with this hand.
  testPreFlopActionQueenTenHearts        Verify raise with this hand.
  testPreFlopActionQueenTenOff           Verify call with this hand.
Figure 3: PostFlopModelTest.java.
Test Class and Tests                     Description

  testPreFlopAction87Diamonds            Verify call with this hand.
  testPreFlopAction23Clubs               Verify call with this hand.
  testPreFlopActionAce2Suited            Verify raise with this hand.
  testPreFlopActionAce8Suited            Verify call with this hand.
  testPreFlopRandomFold                  Verify threshold for random fold.
  testPreFlopRandomCall                  Verify threshold for random call.
  testPreFlopRandomCheck                 Verify threshold for random check.
  testPreFlopAction                      Verify threshold for random check.

PostFlopModelTest
  testPostFlopActionAces                 Verify raise post flop.
  testPostFlopActionKings                Verify raise post flop.
  testPostFlopActionQueens               Verify raise post flop.
  testPostFlopActionJacks                Verify raise post flop.
All of the tests listed above run with zero failures. The tests created in the
HoldEmActionTest class serve as integration tests to verify that the PreFlopModel class
was integrated successfully with the HoldEmAction class.
A “Poker” project was created in Eclipse. The following files were included in this
project:
HoldEmAction.java
HoldEmActionTest.java
MaiBot.java
PostFlopModel.java
PostFlopModelTest.java
PreFlopModel.java
PreFlopModelTest.java
In addition to the Java source files, some references were added, meerkat-api.jar
and junit.jar. The meerkat-api.jar file provides the API that is required to integrate
MaiBot with the Poker Academy program. The junit.jar file provides the necessary API
for the test classes. The PreFlopModelTest class tests the pre-flop decisions that are
expected from the PreFlopModel class. A number of different hole card combinations are
provided in separate tests. The PostFlopModelTest class tests the post-flop decisions that
are expected from the PostFlopModel class. Like the pre-flop decisions, a number of
different hole card combinations are provided in separate tests.
TDD and Unit Testing
One main goal behind MaiBot development was to write the source code
according to the principles of TDD. TDD was applied to MaiBot development in a step
by step manner. This is how TDD was interpreted for the purposes of MaiBot
development:
1. Functionality would be provided by the SimpleBot source code.
2. Unit tests would be written first, before writing MaiBot source code.
3. MaiBot source code would be written to make the tests pass.
4. The MaiBot source code was then reviewed to remove code duplication and to
break down large methods into multiple smaller methods that are more
manageable (refactoring).
These steps were followed incrementally until all unit tests were written. The existing
SimpleBot source code that is provided by Poker Academy was redesigned by creating
tests first.
The JUnit tests that were written for MaiBot were created to migrate the
functionality over from SimpleBot. These are the steps that were followed:
1. The functionality in SimpleBot was analyzed and broken down into parts that can
be verified with a unit test.
2. A JUnit unit test was written.
3. Source code was written for MaiBot to make the unit test from step 2 pass.
These steps were repeated for each of the unit tests that were created for MaiBot. The unit
tests were provided for the PreFlopModel and PostFlopModel classes first. These classes
contain the decision making logic for MaiBot. For the PreFlopModel class, the unit tests
provide a summary of the decisions that would be made before the flop. For example, the
strongest possible pre-flop hand, a pair of aces, would be raised and a unit test verifies
that MaiBot makes this decision. The method used to return these actions is
preFlopAction. Here is the pseudo-code for the preFlopAction method (see the MaiBot
source code for full implementation details; used with permission of BioTools, Inc., 2008):
public int preFlopAction(Card cardOne, Card cardTwo) {
    if card 1 rank equals card 2 rank, and the rank is ten or higher
        or is two, put in a raise;
        with any other pair, call
    if card 1 and card 2 are both greater than ten, raise when suited;
        with unsuited cards, just call
    if card 1 and card 2 are suited and the difference between card 1
        and card 2 is 1, just call
    if card 1 and card 2 are suited and one is an ace and the other is
        a two, then raise
    if one card is an ace and both cards are suited, then call
    any other hand should be folded
}
The preFlopAction method takes the two cards dealt to the player and applies rules to
identify the appropriate action. These rules were taken from the SimpleBot source code.
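As an illustration only, these rules can be realized in self-contained Java along the following lines. The action constants and the integer rank representation are assumptions for this sketch (the actual MaiBot uses the Meerkat Card and Holdem types), and "greater than ten" is interpreted as ten or higher, which is consistent with the Queen-Ten tests summarized in the table above:

```java
// Illustrative, self-contained sketch of the pre-flop rules quoted above.
// FOLD/CALL/RAISE values and integer ranks (2..14, ace = 14) are
// assumptions for this sketch, not the Meerkat API's actual constants.
public class PreFlopRules {
    static final int FOLD = 0, CALL = 1, RAISE = 2;

    static int preFlopAction(int rank1, int rank2, boolean suited) {
        if (rank1 == rank2) {                          // pairs
            // the "or is two" clause follows the quoted pseudo-code
            return (rank1 >= 10 || rank1 == 2) ? RAISE : CALL;
        }
        if (rank1 >= 10 && rank2 >= 10) {              // two big cards
            return suited ? RAISE : CALL;
        }
        if (suited && Math.abs(rank1 - rank2) == 1) {  // suited connectors
            return CALL;
        }
        boolean hasAce = (rank1 == 14 || rank2 == 14);
        if (suited && hasAce && (rank1 == 2 || rank2 == 2)) {
            return RAISE;                              // suited ace-deuce
        }
        if (suited && hasAce) {
            return CALL;                               // other suited aces
        }
        return FOLD;                                   // everything else
    }

    public static void main(String[] args) {
        System.out.println(preFlopAction(14, 14, false)); // pair of aces: prints 2
    }
}
```

Running the sketch against the hands in the table above (a pair of aces raised, a pair of nines called, Queen-Ten suited raised and offsuit called) reproduces the expected actions.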
Here is an example of a unit test from the PreFlopModelTest class that verifies the
action performed when a pair of aces is held pre-flop:
public void testPreFlopActionAces() {
    assertEquals(
        Holdem.RAISE,
        testPFM.preFlopAction(
            new Card(Card.ACE, Card.CLUBS),
            new Card(Card.ACE, Card.DIAMONDS)));
}
This test checks if MaiBot's pre-flop decision is to raise if it holds the ace of clubs and ace
of diamonds.
Additional tests were created to verify the action taken when holding other cards
pre-flop. Here is another test:
public void testPreFlopActionNines() {
    assertEquals(
        Holdem.CALL,
        testPFM.preFlopAction(
            new Card(Card.NINE, Card.CLUBS),
            new Card(Card.NINE, Card.DIAMONDS)));
}
This test checks to see if the action returned is a call when the nine of clubs and the nine
of diamonds are held pre-flop.
The second set of tests for the PreFlopModel class was a result of a refactoring.
Some of the logic implemented in the PreFlopModel class was separated into a smaller
method, preFlopRandom. The preFlopRandom method in the PreFlopModel class
provides some logic to randomize the action taken on pre-flop hands that do not fit into
the implemented rules. Here is the preFlopRandom source code:
public int preFlopRandom(
        double amountToCall,
        double smallBetSize,
        double callPercentage) {
    final double CALL_PERCENT = 0.05;
    // don't fold if there are no bets to call
    if (amountToCall == 0) {
        return Holdem.CHECK;
    // play anything 5% of the time
    } else if (amountToCall <= smallBetSize) {
        if (callPercentage < CALL_PERCENT)
            return Holdem.CALL;
    }
    return Holdem.FOLD;
}
This method returns an action that depends on the amount to call, bet size, and a random
value to determine the percentage of hands to play. The tests written for the
preFlopRandom method checked the actual actions returned against expected actions.
Here is an example of one of the tests:
public void testPreFlopRandomFold() {
    assertEquals(
        Holdem.FOLD,
        testPFM.preFlopRandom(5, 0, 0.05));
}
This test checks that the action returned is a fold, according to the parameters provided
to the preFlopRandom method.
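The other two branches of preFlopRandom can be exercised in the same pattern. The following self-contained sketch reproduces the method quoted above (with placeholder action constants instead of the actual Meerkat Holdem values) and checks its check, call, and fold branches; the specific argument values are illustrative reconstructions, not the verbatim MaiBot tests:

```java
// Self-contained sketch of the three preFlopRandom branches. The method
// body mirrors the source quoted above; FOLD/CALL/CHECK values are
// placeholders, not the actual Meerkat Holdem constants.
public class PreFlopRandomBranches {
    static final int FOLD = 0, CALL = 1, CHECK = 2;

    static int preFlopRandom(double amountToCall, double smallBetSize,
                             double callPercentage) {
        final double CALL_PERCENT = 0.05;
        if (amountToCall == 0) {
            return CHECK;                 // no bet to call: never fold
        } else if (amountToCall <= smallBetSize) {
            if (callPercentage < CALL_PERCENT)
                return CALL;              // small bet, under 5% threshold
        }
        return FOLD;
    }

    public static void main(String[] args) {
        System.out.println(preFlopRandom(0, 0, 0.5) == CHECK);   // prints true
        System.out.println(preFlopRandom(5, 10, 0.01) == CALL);  // prints true
        System.out.println(preFlopRandom(5, 0, 0.05) == FOLD);   // prints true
    }
}
```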
With the PostFlopModel class, the JUnit tests were created to verify that the
action returned by the getPostFlopAction method was a valid action for the game. The
main goal with the PostFlopModel class was to illustrate the potential use of TDD. The
methods in this class were not analyzed fully. Here is an example of a test:
public void testPostFlopActionJacks() {
    assertEquals(
        true,
        testPFM.getPostFlopAction(
            new Card(Card.JACK, Card.CLUBS),
            new Card(Card.JACK, Card.DIAMONDS)) > 0);
}
As mentioned earlier, HoldEmAction integrates the functionality of the
PreFlopModel and PostFlopModel class. The HoldEmActionTest class provides JUnit
tests that verify that the PreFlopModel and PostFlopModel classes were integrated
successfully. The JUnit tests implemented in this class are similar to the tests
implemented for the PreFlopModel class, which reside in the PreFlopModelTest class.
The PostFlopModel functionality was not tested at this time.
Keeping all tests passing is an important part of the TDD approach to
software development. Each test was created incrementally. TDD development on
MaiBot strongly supported iterative development. As each test was created, the
corresponding class was edited to make the test pass. The JUnit framework and Eclipse
software made it very easy to run all of the tests at once. This feedback verified that any
new functionality did not break existing functionality. The tests also serve as
documentation for MaiBot. The test names provide enough detail for a developer familiar
with poker to understand each test's intent. Poker robot development is enhanced with TDD. For example, the
PreFlopModelTest class verifies the actions that MaiBot takes when holding a certain
Hold'Em hand. Strong hands are always played aggressively. Medium hands are always
played. Mediocre hands are fed through a method that randomly determines the action.
Performance Testing
With the MaiBot design completed, along with a set of JUnit tests available, the
focus shifted to performance testing. The three JUnit test classes HoldEmActionTest,
PreFlopModelTest, and PostFlopModelTest were used to create the performance tests.
The performance tests were written in separate test classes. Here is a list of the classes:
HoldEmActionPerformanceTests
PostFlopModelPerformanceTests
PreFlopModelPerformanceTests
The performance tests were created using the JUnitPerf framework [19]. These tests
measured the amount of time that was required to execute various tests. The time taken to
run each test was compared to a maximum allowed time specified in the timed test. If the
performance tests ran under the allowed time, then the test run was considered a pass.
Otherwise, the test run was considered a failure.
The MaiBot poker robot consists of four classes: MaiBot, HoldEmAction,
PreFlopModel, and PostFlopModel. This poker robot also includes three test
classes that demonstrate MaiBot functionality through JUnit tests: HoldEmActionTest,
PreFlopModelTest, and PostFlopModelTest. Finally, three performance test
classes measure the execution time of some of the JUnit tests through JUnitPerf testing.
The MaiBot design presents some interesting points when compared to the original
SimpleBot source code. First, the MaiBot design makes the stages of poker decision
making explicit. The SimpleBot design consists of only one class, which does not
provide much detail by way of modeling.
Here is an example performance test from the PreFlopModelPerformanceTests
class:
public static Test suite() {
    long maxElapsedTime = 1000;
    Test testCase = new PreFlopModelTest("testPreFlopActionKings");
    Test timedTest = new TimedTest(testCase, maxElapsedTime);
    return timedTest;
}
Figure 4: Eclipse IDE - Performance Test Run
This test runs the “testPreFlopActionKings” unit test from the PreFlopModelTest class
and measures the time it takes to run this test. This time is compared to the
“maxElapsedTime” value which represents 1000 milliseconds, or 1 second. If the test
runs under 1 second, then the test will pass in JUnit. Otherwise, JUnit will display a
failure.
Here is the test from the PreFlopModelPerformanceTests class:
public class PreFlopModelPerformanceTests {
    public static Test suite() {
        long maxElapsedTime = 1000;
        Test testCase =
            new PreFlopModelTest("testPreFlopActionKings");
        Test timedTest =
            new TimedTest(testCase, maxElapsedTime);
        return timedTest;
    }

    public static void main(String[] args) {
        junit.textui.TestRunner.run(suite());
    }
}
This test runs the testPreFlopActionKings JUnit test from the PreFlopModelTest class
and checks if it runs under one second.
Here is the test from the PostFlopModelPerformanceTests class:
public class PostFlopModelPerformanceTests {
    public static Test suite() {
        long maxElapsedTime = 1000;
        Test testCase =
            new PostFlopModelTest("testPostFlopActionAces");
        Test timedTest =
            new TimedTest(testCase, maxElapsedTime);
        return timedTest;
    }

    public static void main(String[] args) {
        junit.textui.TestRunner.run(suite());
    }
}
This test runs the testPostFlopActionAces JUnit test from the PostFlopModelTest class
and checks if it runs under one second.
Finally, the MaiBot program was set up to play poker within the Poker Academy
software. Figure 5 shows MaiBot (player Mai) in action against its original version
SimpleBot (player Simple). The MaiBot source code was extracted into a Java archive
(JAR) file. The Meerkat API provides instructions on how to install a poker robot into the
Poker Academy environment [17]. Poker Academy was configured to allow the MaiBot
program to play against one other poker robot, SimpleBot.
SimpleBot was considered to be a working poker robot for Poker Academy. The
same assumption was made with MaiBot. The poker game between the two poker robots
ran for about an hour and the two poker robots were still playing against each other. The
hold'em poker game limits were set to $2 and $4 stakes. SimpleBot started the game with
Figure 5: MaiBot installed and playing in Poker Academy.
$900.50 and MaiBot started with $945. During the hour of play, almost 800 hands were
played between the two poker robots. MaiBot showed a win rate of -0.13 small bets per
hand. The SimpleBot and MaiBot decision making algorithms are the same. The MaiBot
win rate may be a result of poker variance, especially since only a small sample of hands
were played. The play between SimpleBot and MaiBot did not encounter any software
errors or crashes. MaiBot demonstrated stability within the Poker Academy environment.
The JUnitPerf tests created for MaiBot were executed in the Eclipse IDE. The
testPreFlopActionQueenTenHearts test in the HoldEmActionPerformanceTests class took
1-5 milliseconds to fully execute. The testPreFlopActionKings test in the
PreFlopModelPerformanceTests class also completely executed in the range of 1-5
milliseconds. The testPostFlopActionAces test in the PostFlopModelPerformanceTests
class took 146 to 151 milliseconds to execute. The performance tests were executed all at
once in the Eclipse IDE. The test results show that the pre-flop actions that were tested
execute very quickly. This observation is consistent with the rule based pre-flop decision
making. The post-flop decision making took considerably longer. The additional
computations in post-flop decisions are reflected in the execution time. These results
quantify the performance of MaiBot.
All of the JUnit tests in the three classes, HoldEmActionTest, PostFlopModelTest,
and PreFlopModelTest, successfully executed. The 38 passing tests provide visual
verification that the MaiBot source code runs as expected. MaiBot provides the same
functionality as SimpleBot, with the added benefits of tests to verify functionality and
measure performance. Although the MaiBot performance results are expected to be
similar to SimpleBot performance, the tests provide MaiBot with the necessary
foundation for future attempts to improve its execution time.
Poker Robot and Test-Driven Development Discussion
The development results for the poker robot MaiBot present a number of
important results. First, TDD leads to source code that is developed with a “design for test”
philosophy. Second, the test classes play an important role in verifying the functionality
of the source code. Finally, TDD also lends itself well to performance testing.
The MaiBot test classes provide a regression test suite that verifies the
functionality of the classes and methods. The design of MaiBot presents a basic
representation of the decision making required by a Hold'Em poker player. A poker
player has a decision to make at the pre-flop stage. After the community cards are dealt,
the player then has decisions to make post-flop. These two main decision making stages
are modeled in the MaiBot design. The names of the classes, such as PreFlopModel and
PostFlopModel, show how MaiBot works. The MaiBot design takes advantage of object
oriented programming by using classes that are loosely coupled. Breaking the design into
smaller classes made the testing strategy easier to execute. This divide and conquer
approach made it easier to focus on developing and testing smaller, more manageable
pieces of the poker robot.
The test classes provide an important purpose in verifying MaiBot functionality.
The use of the Eclipse development software made it very easy to run all the tests after
changes were made. The short cycles used to write tests, change or add code to MaiBot,
and verify that the tests pass, provided useful feedback. This feedback showed previously
tested code still behaved as expected after changes were made. This approach to
programming can provide an important confidence builder to making changes and
additions to the MaiBot source code base.
The test classes provide an important start for performance test design and
creation. The use of the JUnitPerf test framework provides an important foundation to
measure the time it takes the tested MaiBot methods to execute. Since JUnitPerf makes
use of the JUnit framework used for MaiBot testing, performance testing became a logical
extension of the TDD approach. These performance tests provide basic results that show
how long it takes for MaiBot methods to execute.
MaiBot's design illustrates one advantage of using TDD. MaiBot displays some
features of object oriented design, such as composition of classes. The design for test
approach to MaiBot also provides benefits for possible future development. The existing
tests provide a foundation for other researchers that may be interested in expanding the
MaiBot source code. A class diagram for SimpleBot is provided in figure 6. As
mentioned earlier, the SimpleBot design consists of only one class. SimpleBot consists of
member variables and methods. The member variables deal with different aspects of
Hold'Em poker. In a poker hand dealt to SimpleBot, the state is saved in card objects. The
game information is also supported as part of Poker Academy integration. For example,
the hand evaluator object determines the next decision in a poker hand. SimpleBot takes
in game state information from the Poker Academy application so that it makes decisions
that follow the rules in Hold'Em. SimpleBot was not developed using TDD practices.
SimpleBot does not include any written tests.
Figure 6: SimpleBot UML Diagram.
Figure 7 shows a class diagram for MaiBot. This diagram shows the different
classes created in the MaiBot design.
Figure 8 shows a class diagram for the tests that were created to test MaiBot.
MaiBot provides the same type of functionality as SimpleBot, but the design is composed
of more classes. The test classes serve the purpose of verifying the results of calling
methods in the MaiBot classes. The TDD approach helped to create confidence in
describing the decision making in MaiBot. Pre-flop poker decisions made by MaiBot are
Figure 7: MaiBot class diagram.
summarized and verified in the PreFlopModelTest class. The MaiBot decision making
can be modified by first adding a new test or modifying an existing one, and then changing
the PreFlopModel class to make the test pass. The entire set of tests for MaiBot also serves
as a regression suite. The tests provide effective feedback to implement changes to the
code base. The MaiBot classes show the different decision making stages for a poker
player. Like SimpleBot, the game information data is provided to MaiBot, so that it can
follow the rules of Hold'Em poker. The PreFlopModel class handles the pre-flop decision
making. The PostFlopModel class handles the post-flop decision making. The
HoldEmAction class integrates the pre-flop and post-flop decision making. Finally,
MaiBot integrates the game information handling with the decision making functionality.
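The composition described above can be sketched as follows. The class names match the essay's design, but the method names and bodies here are illustrative stand-ins, not the actual MaiBot logic:

```java
// Sketch of the MaiBot composition: HoldEmAction delegates to the two
// stage-specific decision models. decide() and its return values are
// stand-ins, not the actual MaiBot methods.
class PreFlopModel {
    int decide() { return 2; } // stand-in action for pre-flop play
}

class PostFlopModel {
    int decide() { return 1; } // stand-in action for post-flop play
}

public class HoldEmAction {
    private final PreFlopModel preFlop = new PreFlopModel();
    private final PostFlopModel postFlop = new PostFlopModel();

    // Route the decision to the model for the current betting stage.
    public int action(boolean beforeFlop) {
        return beforeFlop ? preFlop.decide() : postFlop.decide();
    }

    public static void main(String[] args) {
        HoldEmAction a = new HoldEmAction();
        System.out.println(a.action(true));  // pre-flop model: prints 2
        System.out.println(a.action(false)); // post-flop model: prints 1
    }
}
```

Because each model sits behind its own class, each can be tested and performance-measured in isolation, which is what made the divide and conquer testing strategy straightforward.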
The MaiBot classes present a practical model for a poker player. The decision
making is represented, which forms the main actions in a poker game. Although MaiBot
and SimpleBot contain the same functionality, MaiBot presents the methods and classes
in a model that can be explained in a straightforward manner.
Figure 8: Class diagram for MaiBot tests.
CHAPTER V
RESEARCH IMPLICATIONS
Research Results Limitations
There are a number of limitations with the results and experience obtained from
MaiBot. First, MaiBot is based on an existing, simple poker robot design, which made
design changes easier to implement. Second, the testing results are dependent on the
hardware and software environment used for development. Third, the test classes do not
provide full code coverage for the MaiBot design.
The MaiBot design was based on an existing poker robot. Therefore, portions of
the MaiBot design were not developed completely in a TDD fashion. Although many of
the MaiBot tests were generated after the fact, the MaiBot design still displayed some of
the qualities shared with TDD based software projects. This poker robot provided basic
decision making for Hold'Em poker playing. MaiBot's poker playing effectiveness was
not evaluated. The main objective for MaiBot's design is to illustrate the effects of taking
a TDD approach to poker robot development and to show how performance tests can be
implemented. The MaiBot design shows how TDD can be used to create a design that
takes advantage of object oriented principles.
MaiBot development was conducted on a specific computer and software
environment, which needs to be considered for further research into this topic. The use of
a computer with fewer hardware resources may produce unexpected performance test
results. Software compatibility should also be considered to successfully compile and
execute the MaiBot source code. The predictability of the MaiBot performance test results
depends on careful consideration of the development environment used in this essay.
The results for MaiBot testing do not include test code coverage. Source code
testing completeness is beyond the scope of this essay. Test code coverage may be a topic
of interest in further research. The MaiBot test cases discussed in this essay were created
with performance testing in mind. If TDD and performance testing are used in further
research, then test code coverage assessment may provide additional value. Specifically,
the tests for the PostFlopModel class may be expanded if the methods are understood in
greater detail.
Rationale for the research
The results provided in this essay raise additional items that should be considered
in poker robot and general artificial intelligence development. First, TDD and
performance testing can go hand in hand, when considering testing strategies for poker
robot development. Second, performance testing can be used to help set poker robot
performance baselines, which can lead to improved performance. Third, performance
testing can benefit artificial intelligence software projects in general.
MaiBot development provides positive results with regard to TDD and
performance testing. The source code developed presents a design that lends itself well to
testing. The systematic process of TDD development facilitates performance test case
creation. TDD and performance testing provide a method to isolate certain parts of the
MaiBot source code. The performance tests provide the information necessary to evaluate
the time it takes to perform a JUnit test. While the JUnit tests verify the functionality of
the MaiBot source code, the performance tests verify the time it takes to execute it. The
use of TDD and performance testing can help poker robot researchers and developers
improve their robot performance.
The performance tests created for the MaiBot source code provide a performance
baseline. The time thresholds provided in the performance test cases demonstrate the
current performance of the MaiBot program. If changes are made to the program, the
performance tests provide the necessary feedback to evaluate the impact on execution
time. In poker, decisions should be made within a reasonable amount of time. Poker robot
researchers can define parameters that meet their defined time thresholds for decision
making. These time thresholds can then be programmed into performance test cases, as
was done for MaiBot. The MaiBot results show the positive impact that test cases have in
evaluating poker robot performance.
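The idea of programming a time threshold into a test case can be sketched without the JUnitPerf library itself. The following minimal, standard-library-only Java sketch mimics what a performance test does: run a routine, measure its elapsed time, and compare it against a chosen threshold. The class and method names (PerfThresholdSketch, decide, withinThreshold) are illustrative assumptions, not part of the MaiBot source code.

```java
// Minimal sketch of a performance-threshold check, assuming a
// stand-in decision routine. Names here are hypothetical.
public class PerfThresholdSketch {

    // Stand-in for a poker decision routine under test.
    static int decide() {
        int sum = 0;
        for (int i = 0; i < 100_000; i++) {
            sum += i;  // simulated computational work
        }
        return sum;
    }

    // Returns true if decide() completes within maxMillis milliseconds.
    static boolean withinThreshold(long maxMillis) {
        long start = System.nanoTime();
        decide();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        return elapsedMillis <= maxMillis;
    }

    public static void main(String[] args) {
        System.out.println(withinThreshold(1000));  // expected: true
    }
}
```

JUnitPerf provides this behavior through its TimedTest class, which wraps an existing JUnit test and fails it when a maximum elapsed time is exceeded, so the threshold becomes part of the automated test suite rather than a manual check.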
The MaiBot JUnit and JUnitPerf tests were developed and executed with the
following hardware and software:
Laptop: Acer Aspire 5570
Processor: Intel(R) Core(TM) Duo CPU T2450 @ 2.00GHz
Memory: 1.00 GB
Operating System: Windows Vista(TM) Home Premium (32-bit)
Java Development: eclipse Ganymede (version 3.4)
Java Runtime Environment: Version 1.6.0
JUnit Version 4.3.1
JUnitPerf Version 1.9
The results obtained in this essay may be applied to other software based artificial
intelligence projects. Performance testing can be very useful in projects where there are
important execution time goals. Well-defined testing strategies using TDD and
performance testing can show researchers if they can meet their performance goals, or if
additional changes are necessary. With performance tests, researchers have meaningful
tools and data so that they can measure the impact of program changes. MaiBot
development demonstrated a practical application of software programming techniques.
TDD appears to be a combination of existing software development concepts. The
application of TDD provides a useful set of tests that serve as a tool that evolves with the
design of the poker robot. Future developments in the field of poker robots can be
improved with the use of TDD. Software algorithms can be developed with testing in
mind. This testing helps with poker robot verification. TDD may form a valuable part of a
poker robot developer's tool set.
There are some interesting opportunities for further research in poker robot
development, TDD, and performance testing. Researchers may investigate the benefits of
using TDD to improve poker robot design and algorithms. Combining the research on
poker robot development and TDD may provide the computer science community with an
organized method of examining performance. Future progress in these fields may benefit
from the teaching of TDD in undergraduate computer science classes. The introduction of
TDD into course work has provided instructors with an effective learning tool. This
exposure to
TDD may encourage future poker robot developers to apply TDD and performance
testing to their work.
Further development is also possible with MaiBot. First, the algorithms
implemented may be improved. With a foundation of tests to build on, changes to the
source code may not be too difficult. Second, the set of tests may be expanded to provide
greater test coverage and understanding of the implementation. Since this essay provided
a basic look at the application of TDD and performance testing to poker robot
development, additional tests can be written for the MaiBot source code. Finally, the
results obtained with MaiBot may complement the progress made in other research areas.
The potential uses of TDD and performance testing are not limited to poker robot
development.
CHAPTER VI
CONCLUSIONS
The experience from MaiBot development provides important results. The tests
that were created for MaiBot provide additional feedback on, and verification of, the
methods under test.
The TDD development process resulted in functionality verification and the application of
object-oriented software design. The MaiBot test classes provided 38 tests that run and
verify methods that make poker playing decisions. The MaiBot performance tests
displayed the quick pre-flop decision making and the more computationally intensive
post-flop decision making. MaiBot was successfully integrated with the Poker Academy
software and successfully played a number of Hold'Em poker hands with another robot,
SimpleBot.
The tests form a valuable tool to provide effective feedback on MaiBot
functionality. The use of tests to verify poker robot decision making was an important
part of MaiBot development. The tests provided documentation on the types of decisions
that MaiBot made with specific hands. The performance tests measured the time it took
MaiBot to make certain poker hand decisions. This set of tests provides the
foundation for further improvements to the algorithms and performance of MaiBot.
Although MaiBot is specifically designed to play poker, the TDD development approach
can be successfully applied to other fields involving software development.
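As a small illustration of the test-first style described above, the following self-contained Java sketch pairs a simple decision rule with checks that state the expected decision before running it. The rule and names (preFlopDecision, the rank threshold of 10) are hypothetical and deliberately much simpler than MaiBot's actual logic.

```java
// Hedged sketch of test-first verification of a poker decision.
// The decision rule below is an illustrative assumption, not
// MaiBot's implementation.
public class DecisionSketch {

    // Hypothetical rule: raise with a high pair (tens or better),
    // otherwise call. Ranks run 2..14, with 14 representing an ace.
    static String preFlopDecision(int card1Rank, int card2Rank) {
        if (card1Rank == card2Rank && card1Rank >= 10) {
            return "RAISE";
        }
        return "CALL";
    }

    public static void main(String[] args) {
        // Test-first style: the expected decision is stated up front,
        // then the method is exercised to confirm it.
        System.out.println(preFlopDecision(14, 14)); // pair of aces: RAISE
        System.out.println(preFlopDecision(7, 2));   // weak hand: CALL
    }
}
```

In a real TDD cycle these checks would be written as JUnit test methods before the decision logic exists, and the logic would then be implemented until the tests pass.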
The results of MaiBot development provide a practical application of the theory
behind poker robots, TDD, and performance testing. Poker robot researchers may find
TDD to be a useful development design tool. Performance testing may also be very useful
due to the time sensitive nature of poker playing. The benefits in using TDD and
performance testing are evident in the MaiBot design. The MaiBot tests can reduce the
number of defects and the resulting source code exhibits object oriented principles.
MaiBot was developed using eclipse. The eclipse software is an excellent tool for TDD
development with the Java programming language.
There are some limitations with the MaiBot results. The source code was based on
an existing poker robot that did not include any tests. The functionality was not altered.
The main objectives with MaiBot development were to create tests to verify functionality
and measure test performance. The MaiBot source code was not modified to improve
performance. Further research can be conducted to evaluate the benefits of using MaiBot
and its set of tests to improve poker playing performance. The set of tests included with
MaiBot do not provide full test code coverage. Further code coverage may provide
additional benefits to reduce potential defects.
In conclusion, TDD development can be applied successfully to the field of poker
robots. The ability to create a set of tests for a poker robot provides important verification
results. Performance testing provides additional feedback to measure execution time. The
use of performance testing can help researchers improve their software.
REFERENCES
[1] L. Greenemeier, "Poker-Playing Robots Battle For $100,000 Pot," InformationWeek, [On-line document], 2005, [cited 2008 Jan 15], Available HTTP: http://www.informationweek.com/showArticle.jhtml?articleID=165701734
[2] E. M. Burke and B. M. Coyner, Java Extreme Programming Cookbook, 1st ed., Sebastopol: O'Reilly, 2003.
[3] R. Mugridge, Test driven development and the scientific method, Proceedings of the Agile Development Conference, pp. 47–52, 2003.
[4] University of Alberta Computer Poker Research Group, 2007, [cited 2007 Nov 3], Available HTTP: http://www.cs.ualberta.ca/~games/poker/
[5] D. Billings, N. Burch, A. Davidson, R. Holte, J. Schaeffer, T. Schauenberg, and D. Szafron, "Approximating Game-Theoretic Optimal Strategies for Full-scale Poker,"[On-line document], 2003, [cited 2007 Nov 3], Available HTTP: http://www.cs.ualberta.ca/~games/poker/
[6] D. Billings, D. Papp, J. Schaeffer, and D. Szafron, “Opponent Modeling in Poker,” [On-line document], 1998, [cited 2007 Nov 3], Available HTTP: http://www.cs.ualberta.ca/~games/poker/
[7] D. Billings, D. Papp, L. Pena, J. Schaeffer, and D. Szafron, "Using Selective-Sampling Simulations in Poker," 1999, [cited 2007 Nov 29], Available HTTP: http://www.cs.ualberta.ca/~games/poker/
[8] C. Wilson, "Raise you 50...," New Scientist, vol. 180, issue 2426-2428, 2003.
[9] L. Williams, E. M. Maximilien and M. Vouk, Test-driven development as a defect-reduction practice, 14th International Symposium on Software Reliability Engineering, pp. 34-45, 2003. ISSRE 2003.
[10] H. Erdogmus, M. Morisio and M. Torchiano, On the effectiveness of the test-first approach to programming, IEEE Transactions on Software Engineering, vol. 31 (3):226-237, 2005.
[11] G. Saurer, J. Schiefer, and A. Schatten, Testing complex business process solutions, The First International Conference on Availability, Reliability and Security, (20-22):8, 2006.
[12] L. Crispin, Driving Software Quality: How Test-Driven Development Impacts Software Quality, IEEE Software, vol. 23 (6):pp. 70-71, 2006.
[13] T. Karamat and A.N. Jamil, Reducing Test Cost and Improving Documentation In TDD (Test Driven Development), Seventh ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing, (19-20):73-76, 2006. SNPD 2006.
[14] R.C. Martin, Professionalism and Test-Driven Development, IEEE Software, vol. 24 (3):32-36, 2007.
[15] J. Miller and M. Smith, A TDD approach to introducing students to embedded programming, Proceedings of the 12th annual SIGCSE conference on Innovation and technology in computer science education, Dundee, Scotland, pp. 33-37, 2007.
[16] T. Briggs and C.D. Girard, Tools and techniques for test-driven learning in CS1, Journal of Computing Sciences in Colleges, vol.22(3):37-43, 2007.
[17] "Poker Academy," [On-line document], 2005 [cited 2007 Nov 29], Available HTTP: http://www.poker-academy.com
[18] JUnit, Testing Resources for Extreme Programming, 2008; [cited 2008 Feb 26], Available HTTP: http://www.junit.org/index.htm
[19] JUnitPerf, 2006, [cited 2008 Feb 26], Available HTTP: http://clarkware.com/software/JUnitPerf.html
[20] eclipse, An open development platform, 2008, [cited 2008 Feb 26], Available HTTP: http://www.eclipse.org/
[21] D. Billings, Computer poker overview, University of Alberta, 1995.
[22] D. Billings, Algorithms and Assessment in Computer Poker, PhD Thesis, University of Alberta, 2006.
[23] D. Billings, A. Davidson, J. Schaeffer, and D. Szafron, "Improved Opponent Modeling in Poker," 2000, [cited 2005 June 29], Available HTTP: http://www.cs.ualberta.ca/~games/poker/
[24] T.C. Schauenberg, Opponent modeling and counter-strategy for poker, Master's thesis, 2006.
APPENDIX A – PROGRAM DOCUMENTATION
RUNNING PROJECT TESTS
Two items are required to execute the JUnit and JUnitPerf tests for the MaiBot program:
1. Eclipse IDE software for Java.
2. MaiBot project files.
First, the Eclipse software should be downloaded from the Eclipse web-site
(http://www.eclipse.org). The software is distributed in .zip format, so it must be
extracted to the local PC hard drive before use. Once the software is extracted, the
Eclipse program can be used. This completes the Eclipse software installation
instructions.
Now the MaiBot project source code can be imported into the Eclipse
environment. From the Eclipse menu, select File > Import.
From the Import screen, the General section should be expanded and the
Existing Projects into Workspace option should be selected. Press the Next button
(see Figure). In the Select root directory section, select the directory for the MaiBot
source code. Poker should be displayed in the Projects section. Press the Finish button
to import the Poker project. The project should now be available in the Eclipse
environment (see Figure 4).
All of the tests can now be executed in the Eclipse environment. First, verify that
the Poker project is selected in Eclipse. Then all of the tests (JUnit and JUnitPerf) can be
executed by selecting Run > Run As > JUnit Test. The results should then be shown in
the JUnit section of the Eclipse environment. When the Console section is selected, the
JUnitPerf test execution times are shown.

Figure 9: Import select screen.
Figure 10: Import project screen.
APPENDIX B - POKER HANDS
(Taken from "Raise you 50" by Claire Wilson, New Scientist, 2003, Vol. 180, Issue 2426-2428)
The hand highest on the list wins. The higher the hand on the list, the less likely it will be dealt.
Royal flush — A five-card sequence of the same suit comprising 10, jack, queen, king and ace Odds: 1 in 649,740
Straight flush — Any five-card sequence in the same suit. For example, 2, 3, 4, 5 and 6 of diamonds. If two players have a straight flush, the hand ending in the highest card wins Odds: 1 in 64,974
Four of a kind — Any four cards that have the same value. For example, four kings plus some other card Odds: 1 in 3911.
Full house — Three of a kind plus a pair. For example, three aces and two 2s. If two players have a full house, the hand that has the highest three of a kind wins Odds: 1 in 586.1
Flush — All five cards are from the same suit. For example, 2, 4, 7, jack and king of clubs. If two players have a flush, the player with the highest card wins Odds: 1 in 273.1
Straight — Any five cards in consecutive order, but not necessarily from the same suit. A straight cannot continue past the ace. If two players have a straight, the player with the highest ending card wins Odds: 1 in 131.8
Three of a kind — Any three cards of the same value. For example, three 10s. If two players have three of a kind, the highest set of three cards wins Odds: 1 in 34.8
Two pairs — Two pairs each consisting of two cards of the same value. For example, two aces and two 8s. If two players have two pairs, the player with the highest pair wins Odds: 1 in 13.11
Pair — Two cards of the same value. For example, two 9s. If two players have a pair, the highest pair wins. Odds: 1 in 2
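The odds at the top of this list can be derived by counting hands. As a small worked example, the following Java sketch reproduces the royal flush figure quoted above: there are 4 royal flushes (one per suit) among C(52, 5) = 2,598,960 possible five-card hands, giving odds of 1 in 649,740.

```java
// Worked example: derive the royal flush odds listed above by
// counting five-card hands.
public class PokerOddsSketch {

    // n choose k, computed iteratively in long arithmetic.
    // Each intermediate product is divisible by i, so the division
    // at every step is exact.
    static long choose(int n, int k) {
        long result = 1;
        for (int i = 1; i <= k; i++) {
            result = result * (n - i + 1) / i;
        }
        return result;
    }

    public static void main(String[] args) {
        long hands = choose(52, 5);     // 2,598,960 possible hands
        System.out.println(hands / 4);  // prints 649740 (one royal flush per suit)
    }
}
```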