
Introduction to Computational Modeling of Social Systems

Prof. Lars-Erik CedermanCenter for Comparative and International Studies (CIS)

Seilergraben 49, Room G.2, [email protected] Weidmann, CIS Room E.3, [email protected]

http://www.icr.ethz.ch/teaching/compmodels

Lecture, December 14, 2004

RePast Tutorial II

2

Today’s agenda

• IPD: Experimental dimensions
• EvolIPD model
• Random numbers
• How to build a model (2)
• Scheduling
• Homework C

3

Three crucial questions:

1. Variation: What are the actors’ characteristics?

2. Interaction: Who interacts with whom, when and where?

3. Selection: Which agents or strategies are retained, and which are destroyed?

(see Axelrod and Cohen 1999, Harnessing Complexity)

4

Experimental dimensions

• 2 strategy spaces: B, C
• 6 interaction processes: RWR, 2DK, FRN, FRNE, 2DS, Tag
• 3 adaptive processes: Imit, BMGA, 1FGA

5

“Soup-like” topology: RWR

[Diagram: a "soup" of players with strategies ALLC, ALLC, TFT, ALLD, ATFT, TFT, ALLD connected by random links.]

In each time period, a player interacts with four other random players.

6

2D-Grid Topology: 2DK

[Diagram: players with strategies TFT, ALLD, ALLC, ATFT arranged on a grid.]

The players are arranged on a fixed torus and interact with four neighbors in the von Neumann neighborhood.

7

Fixed Random Network: FRN

[Diagram: players with strategies ALLC, TFT, ALLD, ATFT connected by directed random links.]

The players have four random neighbors in a fixed random network. The relations do not have to be symmetric.
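A fixed random network of this kind can be sketched in plain Java. This is a hypothetical helper, not part of the tutorial code: each player draws four outgoing neighbor links once at setup, so the relation need not be symmetric.

```java
import java.util.*;

// Sketch: build a fixed random network in which every player has
// exactly four outgoing neighbor links; links need not be mutual.
public class FixedRandomNetwork {
    public static Map<Integer, List<Integer>> build(int numPlayers, long seed) {
        Random rng = new Random(seed);
        Map<Integer, List<Integer>> neighbors = new HashMap<>();
        for (int p = 0; p < numPlayers; p++) {
            Set<Integer> picked = new LinkedHashSet<>();
            while (picked.size() < 4) {
                int other = rng.nextInt(numPlayers);
                if (other != p) picked.add(other); // no self-links, no duplicates
            }
            neighbors.put(p, new ArrayList<>(picked));
        }
        return neighbors;
    }
}
```

Because the links are drawn once and stored, the neighborhood stays fixed for the whole run, unlike the RWR process, where partners are redrawn every period.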

8

Adaptation through imitation

[Diagram: at time t, a focal player observes its neighbors' strategies (ALLC, ALLC, TFT, ALLD, ATFT) and adopts a strategy by imitation.]

9

Adaptation with BMGA
Comparison error (prob. 0.1)

[Diagram: genetic adaptation in a fixed spatial neighborhood with neighbor payoffs 2.8, 6.0, 0.8, and 9.0; a comparison error causes a payoff to be read as 2.2.]

10

BMGA continued
Copy error (prob. 0.04 per “bit”)

[Diagram: the same fixed spatial neighborhood with payoffs 2.8, 6.0, 0.8, and 9.0; under genetic adaptation a copy error flips a strategy bit: p=0; q=0 => p=1; q=0.]
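The per-bit copy error can be sketched as follows. This is an illustrative helper, not the actual BMGA code; the method name and the boolean gene representation are assumptions.

```java
import java.util.Random;

// Sketch: apply an independent copy error to each strategy "bit"
// with a given probability, flipping that bit (e.g. p=0 becomes p=1).
public class CopyError {
    public static boolean[] copyWithError(boolean[] genes, double prob, Random rng) {
        boolean[] child = genes.clone();
        for (int i = 0; i < child.length; i++) {
            if (rng.nextDouble() < prob) {
                child[i] = !child[i]; // copy error: flip this bit
            }
        }
        return child;
    }
}
```

With prob = 0.04, each bit of the copied strategy is independently flipped with 4% probability, which is how a (p=0, q=0) strategy can occasionally be copied as (p=1, q=0).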

11

Tutorial Sequence

December 7: SimpleIPD (strategy space)
Today: EvolIPD (RWR)
December 21: GraphIPD (charts and GUI), GridIPD (2DK)
January 11: ExperIPD (batch runs and parameter sweeps)

12

EvolIPD: flowchart

[Flowchart: setup() leads to buildModel(); each step() runs resetPlayers(), interactions(), adaptation(), and reportResults(); interactions() invokes play(), remember(), and addPayoff() on both players of a pair.]

13

Markovian vs. asynchronous adaptation

[Diagram: Markovian adaptation updates all agents simultaneously, moving the whole population from t-1 to t; asynchronous adaptation updates one randomly chosen agent at a time.]

14

Going sequential

private void stepMarkovian() {
    // We carry out four sub-activities:

    // Reset the agents' statistics
    // Loop through the entire agent list
    for (int i = 0; i < numPlayers; i++) {
        // Pick the agent
        final Player aPlayer = (Player) agentList.get(i);
        resetPlayer(aPlayer);
    }

    // Let them interact with their neighbors
    for (int i = 0; i < numPlayers; i++) {
        final Player aPlayer = (Player) agentList.get(i);
        interactions(aPlayer);
    }

    // FIRST STAGE OF DOUBLE BUFFERING!
    // Let all agents calculate their adapted type first
    for (int i = 0; i < numPlayers; i++) {
        final Player aPlayer = (Player) agentList.get(i);
        adaptation(aPlayer);
    }

    // SECOND STAGE OF DOUBLE BUFFERING!
    // Second, once they know their new strategy,
    // let them update to the new type
    for (int i = 0; i < numPlayers; i++) {
        final Player aPlayer = (Player) agentList.get(i);
        updating(aPlayer);
    }

    reportResults(); // Report some statistics
}

private void stepAsynchronous() {
    // We carry out four sub-activities:
    for (int i = 0; i < numPlayers; i++) {
        // Pick an agent at random
        final Player aPlayer = (Player) agentList.get(
            this.getNextIntFromTo(0, numPlayers - 1));

        // Reset the agent's statistics
        resetPlayer(aPlayer);

        // Let it interact with its neighbors
        interactions(aPlayer);

        // Let it adapt
        adaptation(aPlayer);

        // Let it update its new type
        updating(aPlayer);
    }

    reportResults(); // Report some statistics
}

15

How to work with random numbers

• RePast provides a full-fledged random number generator: uchicago.src.sim.util.Random

• Encapsulates the Colt library random number distributions: http://hoschek.home.cern.ch/hoschek/colt/

• Each distribution uses the same random number stream, to ease the repeatability of a simulation

• Every distribution uses the MersenneTwister pseudo-random number generator
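The repeatability point can be illustrated with a small sketch. It uses java.util.Random as a stand-in for the Colt MersenneTwister, since the principle is the same: an identical seed yields an identical draw sequence.

```java
import java.util.Random;

// Sketch: a generator seeded with the same value always produces
// the same sequence of draws, which is what makes a seeded
// simulation run exactly repeatable.
public class SeedDemo {
    public static int[] draws(long seed, int n) {
        Random rng = new Random(seed);
        int[] out = new int[n];
        for (int i = 0; i < n; i++) {
            out[i] = rng.nextInt(100); // draw from [0, 100)
        }
        return out;
    }
}
```

This is also why routing every distribution through one shared stream matters: a second, independently consumed stream would change the draw order and break exact repeatability between runs.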

16

Pseudo-random numbers

• Computers normally cannot generate real random numbers

• “Random number generators should not be chosen at random” - Knuth (1986)

• A simple example (Cliff RNG):

x0 = 0.1
xn+1 = |100 ln(xn) mod 1|

x1 = 0.25850929940455103

x2 = 0.28236111950289455

x3 = 0.4568461655760814

x4 = 0.3408562751932891

x5 = 0.6294370918024157

x6 = 0.29293640856857195

x7 = 0.7799729122847907

x8 = 0.849608774153694

x9 = 0.29793011540822434

x10 = 0.08963320319223556

x11 = 0.2029456303939412

...
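The recurrence above can be reproduced directly; a minimal sketch (the class name is illustrative):

```java
// Sketch: the Cliff RNG recurrence x_{n+1} = |(100 * ln(x_n)) mod 1|,
// iterated from the seed x0 = 0.1 to reproduce the sequence above.
public class CliffRng {
    public static double next(double x) {
        return Math.abs((100.0 * Math.log(x)) % 1.0);
    }

    public static void main(String[] args) {
        double x = 0.1; // x0
        for (int n = 1; n <= 5; n++) {
            x = next(x);
            System.out.println("x" + n + " = " + x);
        }
    }
}
```

Despite being fully deterministic, the iterates look irregular, which is exactly what "pseudo-random" means; the histogram on this slide shows that the values are nonetheless far from uniformly distributed.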

[Histogram: frequency distribution of Cliff RNG output over bins 0.1 to 1.0; y-axis "Frequency", 0 to 140.]

17

“True” random numbers

• New service offered by the University of Geneva and the company id Quantique: http://www.randomnumber.info/

• Not (yet) integrated into RePast

18

Simple random number distributions

• Initialization:

Random.setSeed(seed);

Random.createUniform();

Random.createNormal(0.0, 1.0);

• Usage:

int i = Random.uniform.nextIntFromTo(0, 10);

double v1 = Random.normal.nextDouble();

double v2 = Random.normal.nextDouble(0.5, 0.3);

(In Random.createNormal(0.0, 1.0) and Random.normal.nextDouble(0.5, 0.3), the arguments are the mean and the standard deviation. The initialization calls are executed automatically by SimpleModel.)

19

Available distributions

• Beta
• Binomial
• Chi-square
• Empirical (user-defined probability distribution function)
• Gamma
• Hyperbolic
• Logarithmic
• Normal (or Gaussian)
• Pareto
• Poisson
• Uniform
• …

[Plots: example Beta and Normal density curves.]

20

Custom random number generation

• May be required if two independent random number streams are desirable

• Bypass RePast’s Random and use the Colt library directly:

import cern.jet.random.*;
import cern.jet.random.engine.MersenneTwister;

public class TwoStreamsModel extends SimModel {
    Normal normal;
    Uniform uniform;

    public void buildModel() {
        super.buildModel();
        MersenneTwister generator1 = new MersenneTwister(123); // seeds
        MersenneTwister generator2 = new MersenneTwister(321);
        uniform = new Uniform(generator1);
        normal = new Normal(0.0, 1.0, generator2);
    }

    public void step() {
        int i = uniform.nextIntFromTo(0, 10);
        double value = normal.nextDouble();
    }
}

21

How to build a model (2)

• If more flexibility is desired, one can extend SimModelImpl instead of SimpleModel

• Differences to SimpleModel:
  – No buildModel(), step(), ... methods
  – No agentList, schedule, params, ... fields
  – Most importantly: no default scheduling

• Required methods:
  public void setup()
  public String[] getInitParam()
  public void begin()
  public Schedule getSchedule()
  public String getName()

22

SimModelImpl

import uchicago.src.sim.engine.Schedule;
import uchicago.src.sim.engine.SimInit;
import uchicago.src.sim.engine.SimModelImpl;

public class MyModelImpl extends SimModelImpl {
    public static final int TFT = 1;
    public static final int ALLD = 3;

    private int a1Strategy = TFT;
    private int a2Strategy = ALLD;

    private Schedule schedule;
    private ArrayList agentList;

    public void setup() {
        a1Strategy = TFT;
        a2Strategy = ALLD;
        schedule = new Schedule();
        agentList = new ArrayList();
    }

    public String[] getInitParam() {
        return new String[]{"A1Strategy"};
    }

23

SimModelImpl (cont.)

public String getName() {
    return "Example Model";
}

public void begin() {
    Agent a1 = new Agent(a1Strategy);
    Agent a2 = new Agent(a2Strategy);

    agentList.add(a1);
    agentList.add(a2);

    schedule.scheduleActionBeginning(1, this, "step"); // introspection
}

public void step() {
    for (Iterator iterator = agentList.iterator(); iterator.hasNext();) {
        Agent agent = (Agent) iterator.next();
        agent.play();
    }
}

24

SimModelImpl (cont.)

public String[] getInitParam() {
    return new String[]{"A1Strategy"};
}

public int getA1Strategy() {
    return a1Strategy;
}

public void setA1Strategy(int strategy) {
    this.a1Strategy = strategy;
}

public static void main(String[] args) {
    SimInit init = new SimInit();
    SimModelImpl model = new MyModelImpl();
    init.loadModel(model, null, false);
}

25

How to use a schedule

• The Schedule object is responsible for all state changes within a RePast simulation

schedule.scheduleActionBeginning(1, new DoIt());

schedule.scheduleActionBeginning(1, new DoSomething());

schedule.scheduleActionAtInterval(3, new ReDo());

tick 1: DoIt, DoSomething

tick 2: DoSomething, DoIt

tick 3: ReDo, DoSomething, DoIt

tick 4: DoSomething, DoIt

tick 5: DoIt, DoSomething

tick 6: DoSomething, ReDo, DoIt
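The tick pattern above can be mimicked with a toy loop. This is not RePast's Schedule, just an illustration of when each action type fires; note that RePast may vary the order of actions within a tick.

```java
import java.util.*;

// Toy sketch (not RePast's Schedule): "DoIt" and "DoSomething" are
// scheduled beginning at tick 1 and fire every tick; "ReDo" is
// scheduled at interval 3 and fires on ticks 3, 6, ...
public class TickSketch {
    public static List<String> run(int ticks) {
        List<String> log = new ArrayList<>();
        for (int t = 1; t <= ticks; t++) {
            log.add(t + ":DoIt");
            log.add(t + ":DoSomething");
            if (t % 3 == 0) {
                log.add(t + ":ReDo"); // interval-3 action
            }
        }
        return log;
    }
}
```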

26

Different types of actions

• Inner class:

class MyAction extends BasicAction {
    public void execute() {
        doSomething();
    }
}
schedule.scheduleActionAt(100, new MyAction());

• Anonymous inner class:

schedule.scheduleActionAt(100, new BasicAction() {
    public void execute() {
        doSomething();
    }
});

• Introspection:

schedule.scheduleActionAt(100, this, "doSomething");

27

Schedule in SimpleModel

public void buildSchedule() {
    if (autoStep)
        schedule.scheduleActionBeginning(startAt, this, "runAutoStep");
    else
        schedule.scheduleActionBeginning(startAt, this, "run");
    schedule.scheduleActionAtEnd(this, "atEnd");
    schedule.scheduleActionAtPause(this, "atPause");
    schedule.scheduleActionAt(stoppingTime, this, "stop", Schedule.LAST);
}

public void runAutoStep() {
    preStep();
    autoStep();
    postStep();
}

public void run() {
    preStep();
    step();
    postStep();
}

private void autoStep() {
    if (shuffle)
        SimUtilities.shuffle(agentList);

    int size = agentList.size();
    for (int i = 0; i < size; i++) {
        Stepable agent = (Stepable) agentList.get(i);
        agent.step();
    }
}

28

Scheduling actions on lists

• An action can be scheduled to be executed on every element of a list:

public class Agent {
    public void step() { // step() in Agent
    }
}

schedule.scheduleActionBeginning(1, agentList, "step");

• is equivalent to:

public void step() { // step() in SimpleModel
    for (Iterator it = agentList.iterator(); it.hasNext();) {
        Agent agent = (Agent) it.next();
        agent.step();
    }
}

schedule.scheduleActionBeginning(1, model, "step");

29

Different types of scheduling

• scheduleActionAt(double at, …): executes at the specified clock tick

• scheduleActionBeginning(double begin, …): executes starting at the specified clock tick and every tick thereafter

• scheduleActionAtInterval(double in, …): executes at the specified interval

• scheduleActionAtEnd(…): executes at the end of the simulation run

• scheduleActionAtPause(…): executes when a pause in the simulation occurs

30

Homework C

Modify the EvolIPD program by introducing a selection mechanism that eliminates inefficient players. The current adaptation() method should thus be modified such that the user can switch between the old adaptation routine, which relies on strategic learning, and the new “Darwinian” selection mechanism. The selection mechanism should remove the 10% least successful players from the agentList after each round of interaction. To keep the population size constant, the same number of players should be “born” with strategies drawn randomly from the 90% remaining players. Note that because it generates a population-level process, the actual selection mechanism belongs inside the Model class rather than in Player. Does this change make any difference in terms of the output?