
1

CO3301 - Games Development 2, Week 19

Probability Trees + Decision Trees (Learning Trees)

Gareth Bellaby

2

Probability Trees

A probability tree is a tree which has probabilities associated with each branch.

3

Probability Trees

4

Probability Trees

Probabilities are propagated down the tree. A probability that follows on from another is multiplied by it, i.e. the probability of reaching a node is the product of the probabilities along the path to it.

At each level of the tree the probabilities must sum to 1.

0.125 + 0.125 + 0.25 + 0.5 = 1
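
As a minimal sketch of this propagation (the node layout and names are my own, not from the lecture), the probability of reaching any node is the product of the branch probabilities on the path to it:

# Minimal sketch of probability propagation; the Node layout is an
# assumption for illustration, not code from the lecture.
class Node:
    def __init__(self, prob, children=None):
        self.prob = prob                # probability of the branch into this node
        self.children = children or []  # child nodes further down the tree

def path_probability(path):
    """Multiply the branch probabilities along a path of nodes."""
    result = 1.0
    for node in path:
        result *= node.prob
    return result

# Two fair coin flips: P(heads then heads) = 0.5 * 0.5 = 0.25
second = Node(0.5)
first = Node(0.5, [second])
print(path_probability([first, second]))  # 0.25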

5

Decision Trees

• A decision tree is a way of representing knowledge.

• A decision tree is a way of using inputs to predict future outputs.

• Decision trees are a good way of expressing decisions in computer games, not just for AI but for general gameplay.

• A decision tree is a classification method.

• Decision trees learn from examples using induction.

6

7

Decision Trees

Each internal node is a test.

Each leaf node is a classification.

The intention is that an example of unknown type can be classified by traversing the tree.

Decision trees can be created in real time extremely efficiently. This means that they are a practical option for machine learning for games.
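
A minimal sketch of such a tree and the traversal (the node layout and the example tree are assumptions for illustration):

# Hypothetical decision-tree node: internal nodes test an attribute,
# leaf nodes carry a classification.
class DTNode:
    def __init__(self, attribute=None, branches=None, label=None):
        self.attribute = attribute      # attribute tested at this node
        self.branches = branches or {}  # attribute value -> child node
        self.label = label              # classification if this is a leaf

def classify(node, example):
    """Traverse the tree, following the branch that matches each test."""
    while node.label is None:
        node = node.branches[example[node.attribute]]
    return node.label

# Example: classify a target by town type, then defence strength.
tree = DTNode("town", {
    "friendly": DTNode(label="do not attack"),
    "enemy": DTNode("defence", {
        "weak":   DTNode(label="attack"),
        "strong": DTNode(label="do not attack"),
    }),
})
print(classify(tree, {"town": "enemy", "defence": "weak"}))  # attack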

8

Decision Trees

• A branch can carry the response "unknown", so decision trees can deal with uncertainty.

• Decision trees cannot use continuous ranges directly because of the large number of branching alternatives, e.g. floating-point numbers. Instead you must provide a set of "buckets", each with a specified range (see the sketch after this list). We'll see an example of this below with Black & White, which talks about "none", "medium" and "maximum".

• Tests followed in sequence down the tree can be considered to be logical AND.

• The branching can be considered to be logical OR.
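
A sketch of the bucketing idea mentioned above (the thresholds and labels are invented for illustration): a continuous value is mapped onto a small, fixed set of labels before it reaches the tree.

# Hypothetical bucketing of a continuous value into a fixed set of labels.
def defence_bucket(value):
    """Map a continuous defence rating in [0, 1] onto three buckets."""
    if value < 0.33:
        return "weak"
    elif value < 0.66:
        return "medium"
    return "strong"

print(defence_bucket(0.10))  # weak
print(defence_bucket(0.50))  # medium
print(defence_bucket(0.90))  # strong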

9

Black & White

What he ate       Feedback ("How nice it tasted")
A big rock        -1.0
A small rock      -0.5
A small rock      -0.4
A tree            -0.2
A cow             +0.6

The values for repeated items are averaged.

Taken from Evans, R., (2002), "Varieties of Learning".
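
A sketch of the averaging step on the table above (the helper and the data encoding are mine; note the two "small rock" entries averaging to -0.45):

from collections import defaultdict

# Average the feedback per object; repeated observations are merged.
def average_feedback(observations):
    totals = defaultdict(lambda: [0.0, 0])
    for item, value in observations:
        totals[item][0] += value
        totals[item][1] += 1
    return {item: total / count for item, (total, count) in totals.items()}

meals = [("a big rock", -1.0), ("a small rock", -0.5),
         ("a small rock", -0.4), ("a tree", -0.2), ("a cow", +0.6)]
print(average_feedback(meals))
# {'a big rock': -1.0, 'a small rock': -0.45, 'a tree': -0.2, 'a cow': 0.6}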

10

Black & White

What the creature attacked                    Feedback from player
Friendly town, weak defence, tribe Celtic     -1.0
Enemy town, weak defence, tribe Celtic        +0.4
Friendly town, strong defence, tribe Norse    -1.0
Enemy town, strong defence, tribe Norse       -0.2
Friendly town, medium defence, tribe Greek    -1.0
Enemy town, medium defence, tribe Greek       +0.2
Enemy town, strong defence, tribe Greek       -0.4
Enemy town, medium defence, tribe Aztec        0.0
Friendly town, weak defence, tribe Aztec      -1.0

11

Black & White

Taken from Evans, R., (2002), "Varieties of Learning".

12

Black & White

• Some of the criteria are discarded because they turn out to be irrelevant, e.g. the information about the tribe.

• The decision tree is created in real time. Each time the creature receives new input from the player, the tree is rebuilt.

• Each new input will change the values.

• The rebuilding can be significant: information that was previously jettisoned as irrelevant could become relevant again.

13

Black & White

• Evans' article provides more detail about how decision trees were used in Black & White.

• One important point to add is his observation that, in order to iterate through all of the attributes of an object efficiently, it is necessary to define the objects by their attributes (see the sketch below).
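
A sketch of what attribute-defined objects might look like (the representation is an assumption, not Evans' code): if an object is a dictionary of attributes, learning code can enumerate its attributes generically instead of hard-coding field access.

# Hypothetical attribute-based object: the object is nothing more than
# a dictionary of named attributes.
town = {
    "owner":   "enemy",
    "defence": "weak",
    "tribe":   "Celtic",
}

# Generic iteration over every attribute, with no hard-coded fields.
for attribute, value in town.items():
    print(attribute, "=", value)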

14

ID3

• The ID3 algorithm was presented by Quinlan, 1986.

• Uses an iterative method.

• From the training examples a random subset is selected.

• A tree is built from this subset and tested against the full set of training examples.

• If all of the examples are classified successfully then end.

• Otherwise, add some of the misclassified examples to the subset and repeat the process (see the sketch below).
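
A sketch of that outer loop (build_tree and classify are the hypothetical helpers sketched elsewhere in these notes; the parameter names are mine):

import random

def id3_windowing(examples, attributes, window_size, grow_by):
    """Iterative 'window' scheme: train on a subset, test on everything,
    and grow the window with misclassified examples until none remain."""
    window = random.sample(examples, window_size)
    while True:
        tree = build_tree(window, attributes)
        wrong = [e for e in examples if classify(tree, e) != e["label"]]
        if not wrong:                  # every example classified correctly
            return tree
        window += wrong[:grow_by]      # add failures to the window and repeat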

15

ID3

• Start with a root node. Assign to the root node the best attribute.

• A branch is then generated for each value of the attribute.

• A node is created at the end of each branch.

• Each training example is assigned to one of these new nodes.

• If no examples are assigned then the node and branch can be removed.

• Each node is then treated as a new root and the process repeated, as sketched below.
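
Those steps map onto a short recursive sketch (DTNode and classify as sketched earlier; best_attribute is the information-gain choice covered on the next slides):

def build_tree(examples, attributes):
    """Recursive ID3-style construction, as outlined above."""
    labels = [e["label"] for e in examples]
    if len(set(labels)) == 1 or not attributes:
        # Leaf: unanimous label, or majority label when attributes run out.
        return DTNode(label=max(set(labels), key=labels.count))
    best = best_attribute(examples, attributes)      # most informative test
    remaining = [a for a in attributes if a != best]
    branches = {}
    # Only values that actually occur get a branch, so empty branches
    # (no examples assigned) are never created in the first place.
    for value in {e[best] for e in examples}:
        subset = [e for e in examples if e[best] == value]
        branches[value] = build_tree(subset, remaining)
    return DTNode(attribute=best, branches=branches)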

16

ID3

• It should be apparent that different trees can be constructed.

• It is desirable to derive the smallest tree since this will be the most efficient one.

• The topmost choices need to be the most informative.

• The aim is the greatest information gain at each split.

• Information theory provides a mathematical measurement of the information content of a message. It was presented by Shannon in 1948.

17

Information Theory

• Shannon defines the amount of information in a message as a function of the probability of occurrence of each possible message.
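
In ID3's terms this becomes the standard entropy and information-gain definitions (standard formulas, not reproduced from the slide):

H(S) = -\sum_{i} p_i \log_2 p_i

\mathrm{Gain}(S, A) = H(S) - \sum_{v \in \mathrm{values}(A)} \frac{|S_v|}{|S|} \, H(S_v)

where p_i is the proportion of examples in S with classification i, and S_v is the subset of S in which attribute A takes value v. A direct implementation, completing the best_attribute helper used in the earlier sketch:

import math
from collections import Counter

def entropy(examples):
    """H(S): entropy of the label distribution over the examples."""
    counts = Counter(e["label"] for e in examples)
    total = len(examples)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def gain(examples, attribute):
    """Gain(S, A): entropy reduction from splitting on the attribute."""
    remainder = 0.0
    for value in {e[attribute] for e in examples}:
        subset = [e for e in examples if e[attribute] == value]
        remainder += len(subset) / len(examples) * entropy(subset)
    return entropy(examples) - remainder

def best_attribute(examples, attributes):
    """Pick the attribute with the greatest information gain."""
    return max(attributes, key=lambda a: gain(examples, a))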

18

ID3

• ID3 was extended by Quinlan to provide probabilistic classification using Bayesian statistics.

19

Sources & Further reading

DeSylva, C., (2005), "Optimizing a Decision Tree Query Algorithm for Multithreaded Architectures", Game Programming Gems 5, Charles River Media: Hingham, Mass, USA.

Evans, R., (2002), "Varieties of Learning", AI Game Programming Wisdom, Charles River Media: Hingham, Mass, USA.

Fu, D., & Houlette, R., (2003), "Constructing a Decision Tree Based on Past Experience", AI Game Programming Wisdom 2, Charles River Media: Hingham, Mass, USA.

20

Sources & Further reading

Manslow, J., (2006), "Practical Algorithms for In-Game Learning", AI Game Programming Wisdom 3, Charles River Media: Hingham, Mass, USA.

Quinlan, J. R., (1986), "Induction of decision trees", Machine Learning, 1: 81-106.

Shannon, C., (1948), "A mathematical theory of communication", Bell System Technical Journal, 27: 379-423, 623-656.