Game Programming 07 - Procedural Content Generation

Game Programming: Procedural Content Generation

Nick Prühs, Denis Vaz Alves

Objectives

• To get an overview of the different fields of procedural content

generation

• To learn how to procedurally generate content of different types

• To understand game design implications of using generated content

in games

2 / 90

Procedural Content Generation

“Procedural content generation (PCG) refers to creating game content

automatically, through algorithmic means.”

- Togelius, Yannakakis, Stanley, Browne

“PCG should ensure that from a few parameters, a large number of

possible types of content can be generated.”

- Doull

3 / 90

Categories of PCG

• Online Level Generation

• Offline Level Generation

• Fixed Seed Level Generation

• Game Entity Instancing

• User-mediated Content

• Dynamic Systems

• Procedural Puzzles & Plot Generation

4 / 90

Categories of PCG

5 / 90

Online Level Generation in Diablo 2

Categories of PCG

6 / 90

Fixed Seed Level Generation in Elite

Categories of PCG

7 / 90

Game Entity Instancing in Spore

Categories of PCG

8 / 90

Dynamic Systems in Left 4 Dead

Opportunities of PCG

• high diversity of the resulting assets

• faster than any human designer could ever be

• significantly reduces production costs

• allows for a mixed-initiative approach to level design

• content automatically implemented in the engine

• can save vital system resources

• players can influence the parameters of the game world

• possibility of automatically analyzing player behavior

9 / 90

Opportunities of PCG

10 / 90

Procedurally Generated Item in Diablo 2

Opportunities of PCG

11 / 90

Map Editor of Age of Empires 2

Opportunities of PCG

12 / 90

Game Parameters of Master of Orion 2

Challenges of PCG

• Satisfying a high number of constraints (e.g. full connectivity)

▪ Finding these constraints and tweaking unintuitive parameters of

the system can degenerate into trial and error

• Producing aesthetically pleasing results

▪ Levels can become too similar to each other

• Maximizing the expressive range (variety of results)

▪ Can decrease co-op multiplayer playability

• May require spending too much time on inventing a sophisticated

level generator

13 / 90

Challenges of PCG

14 / 90

Generation Mistakes in Infinite Mario (Picture by Mawhorter, Mateas)

“Random” Numbers

• Computers are deterministic – thus, producing “random” numbers

seems to be conceptually impossible.

• Pseudo-random numbers in computers are generated by applying a

fixed rule to a given number, generating the next number in the

sequence.

• The first number of the sequence is often called the seed of the

random number generator.

• The length of the sequence before repeating numbers is called its

period.

15 / 90

“Good” “Random” Numbers

“What is random enough for one application may not be random

enough for another.”

For games:

• Don’t use generators with a period less than 2^64!

• Don’t use built-in language generators!

• Don’t use overengineered generators!

16 / 90

A Good Pseudo-Random Number Generator

• Should combine at least two unrelated generation methods, in order

to mitigate the flaws of each.

• Should be an object instead of a static class, as it maintains an

internal state (the current number in the sequence).

17 / 90

64-bit XOR Shift Method

Max. Period: 2^64 - 1

Operations: 3x XOR, 3x shift

Algorithm:

1. x = x ^ (x >> A1)

2. x = x ^ (x << A2)

3. x = x ^ (x >> A3)

with A1 = 21, A2 = 35, A3 = 4 as a full-period triple.

18 / 90

MLCG Modulo 2^64

Max. Period: 2^62 (suffers from serial correlation)

Operations: 1x multiplication, 1x modulo

Algorithm:

x = (x * A) mod 2^64

with A = 2685821657736338717 as the recommended multiplier.

19 / 90

Combined Generator

Period: ≈ 1.8 × 10^19

Algorithm:

1. x = x ^ (x >> 21)

2. x = x ^ (x << 35)

3. x = x ^ (x >> 4)

4. x = (x * 2685821657736338717) mod 2^64

20 / 90
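
A minimal C++ sketch of this combined generator, following the variant from Numerical Recipes (cited in the references) in which the XOR-shift steps update the internal state and the multiplication is applied only to the returned value. Class and method names are illustrative:

#include <cstdint>
#include <iostream>

class CombinedRng {
public:
    explicit CombinedRng(std::uint64_t seed) : x(seed != 0 ? seed : 1) {}  // state must not be zero

    std::uint64_t Next() {
        x ^= x >> 21;   // 1. XOR shift
        x ^= x << 35;   // 2. XOR shift
        x ^= x >> 4;    // 3. XOR shift
        return x * 2685821657736338717ULL;  // 4. MLCG step; mod 2^64 is implicit
    }                                       //    in unsigned 64-bit overflow

    double NextDouble() {  // uniform double in [0, 1)
        return (Next() >> 11) * (1.0 / 9007199254740992.0);  // 9007199254740992 = 2^53
    }

private:
    std::uint64_t x;  // internal state: the current number in the sequence
};

int main() {
    CombinedRng rng(123456789);
    for (int i = 0; i < 3; ++i)
        std::cout << rng.NextDouble() << "\n";
}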

Level Generation Algorithms

• Context-free grammars

• Reinforcement learning

• Genetic algorithms

• Chunk-based approach

21 / 90

Context-Free Grammar

• Approach for generating levels for platform games

• Originally presented by Ince in 1999

▪ Later improved on by Compton and Mateas in 2006

• Generation model is represented as a context-free grammar

▪ Level as start symbol

▪ Most basic units out of which the levels are constructed as

terminal symbols

▪ Sequences of these units as nonterminal symbols

▪ Connections between these sequences as productions

22 / 90
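
To make this concrete, a tiny made-up grammar in this spirit (these productions are illustrative, not Ince's original rules) could look like this:

Level → Section
Section → Unit | Unit Section
Unit → flat | gap | platform | enemy

Starting from Level and repeatedly applying productions yields sequences of basic units such as "flat gap platform enemy flat", which can then be translated into level geometry.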

Reinforcement Learning

• Context-free grammar approach requires programmers to find the

generational rules, which can be extremely difficult

• In 2009, Laskov interpreted level generation as a maintenance task

▪ Level building agent performs certain actions

▪ Goal is to generate a level that satisfies user-specified

parameters at all times

▪ Enforced by a reward function

o Punishes fail-states like unplayable levels

o Rewards branches and game elements like enemies or treasures

23 / 90

Genetic Algorithms

• Sorenson and Pasquier were among the first to propose an algorithm that is able to generate platform game levels and 2D adventure game levels alike

• Feasible-infeasible two-population algorithm

▪ Considers levels which do not yet satisfy all given constraints as the infeasible population

o Evolved towards minimising the number of violated constraints

▪ Considers all others as the feasible population

o Subject to a fitness function that rewards levels based on the criteria specified by the level designers

• Generating Roguelike levels on an average machine took “less than an hour”...

24 / 90

Chunk-based Approach

• Used in Infinite Mario Bros and Torchlight

• Creates a game level by assembling pre-authored level chunks

• Requires applying some post-processing algorithms afterwards

25 / 90

Chunk-based Approach

• Very intuitive for designers

• Applicable for both 2D and 3D

• Easy to increase the variety of different levels just by extending the

chunk library

26 / 90

Definition (Game Element)

27 / 90

A game element is any domain-specific game object a player can

interact with (e.g. enemies, items, levers).

Definition (Chunk)

28 / 90

A chunk is the most basic building block of a level. It contains

information about its extents, its position and rotation as well as about

where to align it to the existing level and where to add game elements.

Definition (Chunk Library)

29 / 90

A chunk library holds a set of chunk templates and is needed by the

level generator to have a specific repertoire of chunks that may be used

during the generation process.

Definition (Context)

30 / 90

Every chunk contains at least one context describing the relative

position at which it may be aligned to other chunks.

Definition (Anchor)

31 / 90

Every chunk may contain one or more tagged anchors describing the

relative position at which game elements can be added.

Definition (Level)

32 / 90

A level is a bounded space containing a limited number of level chunks.

33 / 90

Definition (Level)

Procedurally Generated Game Level

34 / 90

Level Generation Framework

UML Class Diagram of the ByChance Framework

35 / 90
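
Since the diagram itself is not reproduced here, the following is a rough sketch of how the definitions above could be mapped to types. All type and field names are illustrative and do not claim to match the actual ByChance API:

#include <string>
#include <vector>

struct Vector3 { float x, y, z; };

// Context: relative position at which a chunk may be aligned to other chunks.
struct Context {
    Vector3 relativePosition;
    bool blocked = false;
};

// Anchor: tagged relative position at which game elements can be added.
struct Anchor {
    Vector3 relativePosition;
    std::string tag;          // e.g. "Enemy", "Item", "Lever"
};

// Chunk template: extents, contexts and anchors; stored in the chunk library.
struct ChunkTemplate {
    Vector3 extents;
    std::vector<Context> contexts;   // at least one
    std::vector<Anchor> anchors;     // zero or more
    float weight = 1.0f;             // see chunk selection below
};

// Chunk library: the repertoire of chunk templates available to the generator.
struct ChunkLibrary {
    std::vector<ChunkTemplate> templates;
};

// Placed chunk: a template instance with position and rotation in the level.
struct Chunk {
    const ChunkTemplate* templ;
    Vector3 position;
    Vector3 rotation;
};

// Level: a bounded space containing a limited number of level chunks.
struct Level {
    Vector3 bounds;
    std::vector<Chunk> chunks;
};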

Main Routine

Precondition: C is a non-empty chunk library. L is a bounded level

which contains only chunks of C, and none of these chunks overlap or

exceed the level bounds.

36 / 90

Main Routine

1. While L contains at least one non-blocked context:

   1. Select a non-blocked context for expanding L.

   2. Find all compatible chunk candidates as follows. For each chunk in C, and for each of its contexts:

      1. If the contexts are compatible and the new chunk wouldn’t overlap or exceed the level bounds, add it to the list of candidates.

      2. Else, if the chunk is allowed to be rotated and hasn’t already been rotated by 360 degrees in every direction, rotate it and try again.

      3. Else, reject the chunk.

   3. If the candidate list is empty, block the selected context and continue.

   4. Else, select a compatible chunk from the candidate list and add it to L, aligning the selected contexts.

2. Perform post-processing.

37 / 90
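
As a rough illustration of the control flow only (not the actual ByChance implementation), here is a deliberately simplified one-dimensional version in C++: chunks are plain segments, contexts are the open ends of the level growing left or right, and rotation and overlap tests are omitted because segments growing outward from the start chunk cannot overlap.

#include <iostream>
#include <random>
#include <vector>

struct ChunkTemplate {
    int length;        // extent of the chunk along the level axis
    const char* name;  // purely for logging
};

struct OpenContext {
    int position;   // absolute position in the level
    int direction;  // +1 grows to the right, -1 grows to the left
    bool blocked;
};

int main() {
    const int levelMin = 0, levelMax = 40;  // level bounds
    std::vector<ChunkTemplate> library = {{4, "corridor"}, {6, "room"}, {9, "hall"}};
    std::mt19937_64 rng(42);

    // Start chunk in the middle of the level, with one open context on each side.
    int start = 20;
    std::vector<OpenContext> contexts = {{start + 4, +1, false}, {start, -1, false}};
    std::cout << "corridor @ " << start << "\n";

    // 1. While the level contains at least one non-blocked context:
    while (true) {
        OpenContext* ctx = nullptr;
        for (auto& c : contexts)
            if (!c.blocked) { ctx = &c; break; }  // 1.1 select a non-blocked context
        if (!ctx) break;

        // 1.2 Collect all chunk candidates that would stay inside the level bounds.
        std::vector<const ChunkTemplate*> candidates;
        for (const auto& chunk : library) {
            int newEnd = ctx->position + ctx->direction * chunk.length;
            if (newEnd >= levelMin && newEnd <= levelMax)
                candidates.push_back(&chunk);
        }

        // 1.3 If nothing fits, block the selected context and continue.
        if (candidates.empty()) { ctx->blocked = true; continue; }

        // 1.4 Select a candidate, add it, and move the open context to its far end.
        std::uniform_int_distribution<std::size_t> pick(0, candidates.size() - 1);
        const ChunkTemplate* chosen = candidates[pick(rng)];
        int placedAt = ctx->direction > 0 ? ctx->position : ctx->position - chosen->length;
        std::cout << chosen->name << " @ " << placedAt << "\n";
        ctx->position += ctx->direction * chosen->length;
    }

    // 2. Post-processing would run here (e.g. filling dead ends).
}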

Main Routine

Postcondition: All contexts of L are blocked. No chunk exceeds the

level bounds, and no two different level chunks overlap.

38 / 90

Chunk Selection

• Three attributes influence the probability of picking a particular

chunk:

▪ Weight

▪ Quantity

▪ Tags

• Tuning these three attributes is key to producing enjoyable levels.

39 / 90
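
A minimal sketch of how the weight attribute could drive the pick, assuming the quantity limits and tag filters have already been applied while building the candidate list. Names and weights are made up:

#include <cstddef>
#include <iostream>
#include <random>
#include <vector>

struct Candidate {
    const char* name;
    double weight;  // higher weight means the chunk is picked more often
};

// Picks an index with probability proportional to the candidate weights.
std::size_t PickWeighted(const std::vector<Candidate>& candidates, std::mt19937_64& rng) {
    std::vector<double> weights;
    for (const auto& c : candidates) weights.push_back(c.weight);
    std::discrete_distribution<std::size_t> dist(weights.begin(), weights.end());
    return dist(rng);
}

int main() {
    std::mt19937_64 rng(std::random_device{}());
    std::vector<Candidate> candidates = {{"corridor", 5.0}, {"room", 3.0}, {"boss hall", 1.0}};
    for (int i = 0; i < 5; ++i)
        std::cout << candidates[PickWeighted(candidates, rng)].name << "\n";
}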

Post-Processing

Cluster of floors.

Cluster of rooms.

40 / 90

Post-Processing

Dead ends.

41 / 90

Post-Processing

Connected contexts.

42 / 90

3D Chunks

43 / 90

3D Chunks

44 / 90

Level Layouts

Special level layouts. (Picture by Compton, Mateas)

45 / 90

Editor Support

ByChance Chunk Template Editor in Unity

46 / 90

Editor Support

ByChance Scene View in Unity

47 / 90

Editor Support

ByChance Game View in Unity

Perlin Noise

• Originally created by Ken Perlin in 1983

• Mapping from ℝⁿ to [-1; 1]

• Can be used to assign a greyscale value to each pixel of a bitmap

• Bitmap can be used as heightmap for 3D terrain

48 / 90

Perlin Noise

• Perlin Noise is coherent

▪ For any two points A, B, the value of the noise function changes

smoothly as you move from A to B.

49 / 90

Non-coherent noise (left) vs. coherent noise (right). (Picture by Matt Zucker)

Creating Perlin Noise

• A well-known approach is to create non-coherent noise and smooth

(blur) it

• The original approach by Perlin is different, mathematically well-defined

and more efficient

50 / 90

Creating Perlin Noise

Wanted:

noise: ℝⁿ → [-1; 1]

51 / 90

Creating Perlin Noise

Wanted:

noise: ℝ² → [-1; 1]

52 / 90

Creating Perlin Noise

On a bounded space of size Size × Size,

Size > 0,

impose a grid of size GridSize × GridSize,

Size ≥ GridSize > 0.

53 / 90

Creating Perlin Noise

• Grid points are defined for each whole number.

• Any number with a fractional part (e.g. 3.14) lies between grid points.

54 / 90

Picture by Matt Zucker.

Creating Perlin Noise

Step 0: Assign a pseudorandom gradient of length 1 to each grid point.

55 / 90

Picture by Matt Zucker.

Creating Perlin Noise

Note: The gradient of each grid point must not change after it has been

computed once (i.e. don’t compute new random gradients every time

noise(x, y) is evaluated).

56 / 90

Picture by Matt Zucker.

Creating Perlin Noise

Step 1: Find the grid points surrounding (x, y). In ℝ², we have 4 of

them, which we will call (x0,y0), (x0, y1), (x1, y0), and (x1, y1).

57 / 90

Picture by Matt Zucker.

Creating Perlin Noise

Step 2: Find the vectors going from each grid point to (x, y).

58 / 90

Picture by Matt Zucker.

Creating Perlin Noise

Step 3: Compute the influence of each gradient by performing a dot

product of the gradient and the vector going from its associated grid

point to (x, y).

59 / 90

s = g(x0, y0) · ((x, y) - (x0, y0))
t = g(x1, y0) · ((x, y) - (x1, y0))
u = g(x0, y1) · ((x, y) - (x0, y1))
v = g(x1, y1) · ((x, y) - (x1, y1))

Creating Perlin Noise

Step 4: Ease the position of the point, exaggerating its proximity to

zero or one.

For inputs that are sort of close to zero, output a number really close to

zero. For inputs close to one, output a number really close to one.

60 / 90

f(p) = 3p² - 2p³

(Picture by Matt Zucker)

Creating Perlin Noise

Step 4: Ease the position of the point, exaggerating its proximity to

zero or one.

For inputs that are sort of close to zero, output a number really close to

zero. For inputs close to one, output a number really close to one.

61 / 90

Sx = 3(x - x0)² - 2(x - x0)³
Sy = 3(y - y0)² - 2(y - y0)³

Creating Perlin Noise

Step 5: Linearly interpolate between the influences of the gradients.

62 / 90

a = s + Sx(t - s)
b = u + Sx(v - u)

noise(x, y) = a + Sy(b - a)

Creating Perlin Noise

In order to use noise as a greyscale value, you might want to transform it

to a more useful interval.

Full source code is available at https://github.com/npruehs/perlin-noise.

63 / 90

noise(x, y) = a + Sy(b - a) ∈ [-1; 1]

transformedNoise(x, y) = (noise(x, y) + 1) / 2 ∈ [0; 1]
greyscale(x, y) = transformedNoise(x, y) * 255 ∈ [0; 255]
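
Putting steps 0 to 5 together, a compact C++ sketch of the 2D case. The gradients are precomputed per grid point into a simple lookup table rather than using Perlin's hashing scheme; the full original source is available at the repository linked above:

#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

const int GridSize = 16;
const double Pi = 3.14159265358979323846;

struct Vec2 { double x, y; };

// Step 0: one fixed pseudo-random unit gradient per grid point.
std::vector<Vec2> MakeGradients(unsigned seed) {
    std::mt19937_64 rng(seed);
    std::uniform_real_distribution<double> angle(0.0, 2.0 * Pi);
    std::vector<Vec2> g((GridSize + 1) * (GridSize + 1));
    for (auto& v : g) { double a = angle(rng); v = {std::cos(a), std::sin(a)}; }
    return g;
}

double Noise(double x, double y, const std::vector<Vec2>& g) {
    // Step 1: the four surrounding grid points.
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    int x1 = x0 + 1, y1 = y0 + 1;

    auto grad = [&](int gx, int gy) { return g[gy * (GridSize + 1) + gx]; };

    // Steps 2 + 3: dot products of gradients with the vectors towards (x, y).
    auto influence = [&](int gx, int gy) {
        Vec2 v = grad(gx, gy);
        return v.x * (x - gx) + v.y * (y - gy);
    };
    double s = influence(x0, y0), t = influence(x1, y0);
    double u = influence(x0, y1), v = influence(x1, y1);

    // Step 4: ease curve f(p) = 3p^2 - 2p^3.
    auto ease = [](double p) { return 3 * p * p - 2 * p * p * p; };
    double sx = ease(x - x0), sy = ease(y - y0);

    // Step 5: interpolate the influences.
    double a = s + sx * (t - s);
    double b = u + sx * (v - u);
    return a + sy * (b - a);                 // lies in [-1; 1]
}

int main() {
    auto gradients = MakeGradients(1337);
    double n = Noise(3.7, 8.2, gradients);
    int grey = (int)((n + 1.0) / 2.0 * 255.0);   // transform to [0; 255]
    std::printf("noise = %f, greyscale = %d\n", n, grey);
}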

Markov Chains

• Named after Andrey Markov

• State space with random transitions

• Usually memory-less

▪ Next state only depends on current state

▪ Thus the name

• Usually doesn’t terminate

▪ There is always a next state

• Generally impossible to predict the state at a given point in the future

▪ Statistical properties can be predicted (and are more interesting in most cases)

64 / 90

Markov Chains

• Example: Drunkard’s Walk

▪ One-dimensional state space

▪ Position may change by +1 or -1 with equal probability

▪ Two possible transitions from each state

▪ Transition probability only depends on the current state

65 / 90
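
A tiny simulation sketch of the drunkard's walk; the step count and seed are arbitrary:

#include <cstdio>
#include <random>

int main() {
    std::mt19937_64 rng(std::random_device{}());
    std::bernoulli_distribution coin(0.5);

    int position = 0;
    for (int step = 0; step < 20; ++step) {
        position += coin(rng) ? +1 : -1;   // two possible transitions per state
        std::printf("step %d: position %d\n", step + 1, position);
    }
}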

Markov Chains

• Example: Creature Diet

▪ Creature eats only grapes, cheese, or lettuce

▪ Eats exactly once a day

▪ If it ate cheese today, tomorrow it will eat lettuce or grapes with equal probability.

▪ If it ate grapes today, tomorrow it will eat grapes with probability 1/10, cheese with probability 4/10 and lettuce with probability 5/10.

▪ If it ate lettuce today, tomorrow it will eat grapes with probability 4/10 or cheese with probability 6/10.

67 / 90

Markov Chains

• Example: Creature Diet

▪ Can be modeled with a Markov chain since its choice tomorrow

depends solely on what it ate today

o not what it ate yesterday

o not what it ate any other time in the past

▪ Statistical property that could be calculated is the expected

percentage, over a long period, of the days on which the

creature will eat grapes

68 / 90
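
The diet chain can be written as a transition matrix and sampled to estimate the long-run fraction of grape days mentioned above. A small sketch (the number of simulated days is arbitrary):

#include <cstdio>
#include <random>

int main() {
    enum { Grapes = 0, Cheese = 1, Lettuce = 2 };

    // transition[today][tomorrow]; each row sums to 1.
    const double transition[3][3] = {
        {0.1, 0.4, 0.5},   // after grapes
        {0.5, 0.0, 0.5},   // after cheese
        {0.4, 0.6, 0.0},   // after lettuce
    };

    std::mt19937_64 rng(7);
    std::uniform_real_distribution<double> uniform(0.0, 1.0);

    int state = Cheese;
    long long grapeDays = 0;
    const long long days = 1000000;

    for (long long day = 0; day < days; ++day) {
        if (state == Grapes) ++grapeDays;

        // Sample tomorrow's state from the row of today's state.
        double r = uniform(rng), acc = 0.0;
        int next = 2;  // fallback guards against floating-point rounding
        for (int s = 0; s < 3; ++s) {
            acc += transition[state][s];
            if (r < acc) { next = s; break; }
        }
        state = next;
    }

    std::printf("fraction of grape days: %.3f\n", (double)grapeDays / days);
}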

Markov Chain – Definition

A Markov chain is a sequence of random variables X1, X2, X3, ... with

the Markov property, namely that, given the present state, the future

and past states are independent:

Pr(X_1 = x_1, ..., X_n = x_n) > 0 ⇒

Pr(X_{n+1} = x | X_1 = x_1, X_2 = x_2, ..., X_n = x_n) = Pr(X_{n+1} = x | X_n = x_n)

69 / 90

Markov Chains – Description

• Directed graph

▪ Edges are labeled by the probabilities of going from one state at

time n to the other states at time n+1

• Transition matrix

70 / 90

Stationary Markov Chains

Stationary Markov chains are processes where the probability of a

transition is independent of n.

∀n ∈ ℕ: Pr(X_{n+1} = x | X_n = y) = Pr(X_n = x | X_{n-1} = y)

71 / 90

Higher Order Markov Chains

A Markov chain of order m ∈ ℕ is a process where the future state

depends on the past m states.

∀n ∈ ℕ, n > m:

Pr(X_{n+1} = x | X_n = x_n, X_{n-1} = x_{n-1}, ..., X_1 = x_1) =

Pr(X_{n+1} = x | X_n = x_n, X_{n-1} = x_{n-1}, ..., X_{n-m} = x_{n-m})

72 / 90

73 / 90

Random Name Generation

Given an input set of feasible existing names, a new random name can be generated using an m-order Markov chain as follows:

1. Pick any existing name, with equal probability.

2. Take the first m letters of that name.

3. Find all existing names containing these m letters.

4. In all of these existing names, check the following letter. (Consider end-of-word as letter here.) Count the occurrences of the same following letters.

5. Pick the next letter of the generated name with probability of the distribution in existing names.

6. If the next letter is not end-of-word, start over from step 3, always considering the last m letters of the current name.

74 / 90
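
A C++ sketch of these six steps for m = 2. The name list is a small stand-in for a real input set, and a length cap is added so that generation always terminates:

#include <iostream>
#include <map>
#include <random>
#include <string>
#include <vector>

int main() {
    const std::size_t m = 2;   // Markov chain order
    const char End = '$';      // stands in for the end-of-word "letter"
    std::vector<std::string> names = {
        "AMELIA", "OLIVIA", "LILY", "ALICE", "ELISABETH", "CAROLINE",
        "MADELINE", "EVANGELINE", "JOSEPHINE", "QUINN", "HANNAH", "ANNA"};

    std::mt19937_64 rng(std::random_device{}());

    // Steps 1 + 2: pick an existing name and take its first m letters.
    std::uniform_int_distribution<std::size_t> pickName(0, names.size() - 1);
    std::string result = names[pickName(rng)].substr(0, m);

    while (result.size() < 12) {  // length cap keeps the sketch from looping forever
        std::string current = result.substr(result.size() - m);  // last m letters

        // Steps 3 + 4: count the letters that follow "current" in all names.
        std::map<char, int> counts;
        for (const auto& name : names)
            for (std::size_t pos = name.find(current); pos != std::string::npos;
                 pos = name.find(current, pos + 1)) {
                std::size_t nextPos = pos + m;
                counts[nextPos < name.size() ? name[nextPos] : End]++;
            }
        if (counts.empty()) break;  // "current" occurs nowhere: stop here

        // Step 5: pick the next letter according to the observed distribution.
        std::vector<char> letters;
        std::vector<int> weights;
        for (const auto& entry : counts) {
            letters.push_back(entry.first);
            weights.push_back(entry.second);
        }
        std::discrete_distribution<std::size_t> pickLetter(weights.begin(), weights.end());
        char next = letters[pickLetter(rng)];

        // Step 6: stop at end-of-word, otherwise append and continue with the last m letters.
        if (next == End) break;
        result += next;
    }

    std::cout << result << "\n";
}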

Random Name Generation

Example (m = 2)

1. Pick any existing name, with equal probability.

LILY

75 / 90

Random Name Generation

Example (m = 2, name = “LI”, current = “LI”)

2. Take the first m letters of that name.

LI

76 / 90

Random Name Generation

Example (m = 2, name = “LI”, current = “LI”)

3. Find all existing names containing these m letters.

AMELIA, OLIVIA, LILY, ALICE, ELISABETH, LILAH, JULIET,

CAROLINE, EVANGELINE, MADELINE, NATALIE, ROSALIE, LILLIAN,

ELISE, ADELINE, DELILAH, ELIANA, FELICITY, JULIA

77 / 90

Random Name Generation

Example (m = 2, name = “LI”, current = “LI”)

4. In all of these existing names, check the following letter. (Consider

end-of-word as letter here.) Count the occurrences of the same

following letters.

78 / 90

Next Letter Occurrences Probability

A 3 16 %

V 1 5 %

L 4 21 %

C 2 11 %

S 2 11 %

E 3 16 %

N 4 21 %

Random Name Generation

Example (m = 2, name = “LIN”, current = “LI”)

5. Pick the next letter of the generated name with probability of the

distribution in existing names.

79 / 90

Next Letter Occurrences Probability

A 3 16 %

V 1 5 %

L 4 21 %

C 2 11 %

S 2 11 %

E 3 16 %

N 4 21 %

Random Name Generation

Example (m = 2, name = “LIN”, current = “IN”)

2. Take the last m letters of the generated name.

IN

80 / 90

Random Name Generation

Example (m = 2, name = “LIN”, current = “IN”)

3. Find all existing names containing these m letters.

CAROLINE, EVANGELINE, MADELINE, JOSEPHINE, ADELINE,

QUINN

81 / 90

Random Name Generation

Example (m = 2, name = “LIN”, current = “IN”)

4. In all of these existing names, check the following letter. (Consider

end-of-word as letter here.) Count the occurrences of the same

following letters.

82 / 90

Next Letter Occurrences Probability

E 5 83 %

N 1 17 %

Random Name Generation

Example (m = 2, name = “LINN”, current = “IN”)

5. Pick the next letter of the generated name with probability of the

distribution in existing names.

83 / 90

Next Letter Occurrences Probability

E 5 83 %

N 1 17 %

Random Name Generation

Example (m = 2, name = “LINN”, current = “NN”)

2. Take the last m letters of the generated name.

NN

84 / 90

Random Name Generation

Example (m = 2, name = “LINN”, current = “NN”)

3. Find all existing names containing these m letters.

ARIANNA, HANNAH, ANNA, SAVANNAH, ANNABELLE, QUINN,

SIENNA

85 / 90

Random Name Generation

Example (m = 2, name = “LINN”, current = “NN”)

4. In all of these existing names, check the following letter. (Consider

end-of-word as letter here.) Count the occurrences of the same

following letters.

86 / 90

Next Letter Occurrences Probability

A 6 86 %

End-of-word 1 14 %

Random Name Generation

Example (m = 2, name = “LINN”, current = “NN”)

5. Pick the next letter of the generated name with probability of the

distribution in existing names.

87 / 90

Next Letter Occurrences Probability

A 6 86 %

End-of-word 1 14 %

Random Name Generation

Example (m = 2)

Generated Name:

LINN

88 / 90

References

• Togelius, Yannakakis, Stanley, Browne. Search-based procedural content generation. In

EvoApplications Workshop, volume 6024 of LNCS, pages 141-150, 2010.

• Andrew Doull. The death of the level designer: Procedural content generation in games.

http://roguelikedeveloper.blogspot.de/2008/01/death-of-level-designer-procedural.html, January

2008.

• Peter Mawhorter and Michael Mateas. Procedural level generation using occupancy-regulated

extension. In CIG, pages 351-358, 2010.

• Press, Teukolsky, Vetterling, Flannery. Numerical Recipes 3rd Edition: The Art of Scientific

Computing. Cambridge University Press, New York, NY, USA, 2007.

• Ince. Automatic Dynamic Content Generation for Computer Games. PhD thesis, University of

Sheffield, 1999.

• Compton, Mateas. Procedural level design for platform games. In Proceedings of the 2nd Artificial

Intelligence and Interactive Digital Entertainment Conference (AIIDE), pages 109-111, Marina del

Rey, California, June 2006.

References

• Laskov. Level generation system for platform games based on a reinforcement learning approach.

Master's thesis, The University of Edinburgh, School of Informatics, 2009.

• Sorenson, Pasquier. Towards a generic framework for automated video game level creation. In

EvoApplications (1), pages 131-140, 2010.

• Prühs, Vaz Alves. Towards a Generic Framework for Procedural Generation of Game Levels.

Master’s Thesis, Hamburg University of Applied Sciences, 2011.

• Perlin. Noise and Turbulence. http://www.mrl.nyu.edu/~perlin/doc/oscar.html, 1997.

• Perlin. Making Noise. http://www.noisemachine.com/talk1/index.html, December 9, 1999.

• Zucker. The Perlin noise math FAQ. http://webstaff.itn.liu.se/~stegu/TNM022-2005/perlinnoiselinks/perlin-noise-math-faq.html#toc-algorithm, February 2001.

• Wikipedia. Markov chain. http://en.wikipedia.org/wiki/Markov_chain, September 6, 2014.

• Silicon Commander Games. Markov Name Generator.

http://www.siliconcommandergames.com/MarkovNameGenerator.htm, May 2016.

Thank you!

http://www.npruehs.de

https://github.com/npruehs

@npruehs

nick.pruehs@daedalic.com

5 Minute Review Session

• Which categories of procedural content generation do you know?

• Name a few opportunities of PCG!

• Name a few challenges of using generated content!

• In a few words, explain the chunk-based level generation approach!

• What is coherent noise, and what can it be used for?

• What are Markov chains, and what can they be used for?
