
Complex Adaptive Systems (CAS)

• Terminology introduced by members of the Santa Fe Institute. It is a subdiscipline of “complexity science”.

• CAS are complex systems composed of similar elements (agents) that are also adaptive: they change through time in response to conditions.

• Agents are linked into locally interacting networks.

• As we saw in metapopulation models (and perhaps all models), CAS models examine emergent properties of complex systems.

Generalized Adaptive Agent

[Diagram] The generalized adaptive agent and its environment:

• Inputs: information about “the world” (global information) and information about the behavior of other agents, mostly “friends” (local network neighborhood information); agents also share information with other agents.

• Information processing: an internal model to explore the consequences of future actions.

• Output: take action; the action affects “the world” and “friends”.

• Utility or goal: added to or subtracted from by “the world”, based on the agent’s decisions.

[Diagram] System-level view: the outside world (external change) exchanges positive and negative feedbacks (information, payoffs) with the agent network (small-world networks); the emergent group behavior is a locally stable complex equilibrium.
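The components of the generalized adaptive agent can be sketched as a minimal class. This is an illustrative skeleton only; all names (`perceive`, `internal_model`, `predict`, `receive_payoff`) and the stub prediction rule are assumptions, not part of any specific CAS framework.

```python
from dataclasses import dataclass, field

@dataclass
class AdaptiveAgent:
    """Skeleton of a generalized adaptive agent: it senses global and local
    (network-neighbor) information, runs an internal model over candidate
    actions, acts, and its utility is adjusted by feedback from the world."""
    goal: float                                   # target utility level
    utility: float = 0.0
    friends: list = field(default_factory=list)   # local network neighborhood

    def perceive(self, world_state, friend_actions):
        # combine global information with local neighborhood information
        return {"world": world_state, "friends": friend_actions}

    def predict(self, info, action):
        # stub internal model: prefer actions that match the world state
        return -abs(action - info["world"])

    def internal_model(self, info, candidate_actions):
        # explore consequences of future actions; pick the best predicted one
        return max(candidate_actions, key=lambda a: self.predict(info, a))

    def act(self, world_state, friend_actions, candidate_actions):
        info = self.perceive(world_state, friend_actions)
        return self.internal_model(info, candidate_actions)

    def receive_payoff(self, payoff):
        # "the world" adds to or subtracts from the agent's utility
        self.utility += payoff
```

In a full simulation, many such agents would act each time step, share information with their `friends`, and receive payoffs that feed back into their internal models.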

The “eight-fold path” in CAS model construction (according to Miller and Page):

Right View: what kind of information flows from the world to the agent and between agents? Is it filtered? Is it manipulated by other agents?

Right Intention: what are the agents’ goals, and do they vary among agents?

Right Speech: what information is communicated by agents, and to whom?

Right Action: what actions can agents take, and when are they taken?

Right Livelihood: what payoffs (utility) do agents receive based on their decisions and external conditions? How do payoffs affect agents (successful agents multiply, unsuccessful agents are replaced, agents accrue different amounts of wealth)?

Right Effort: how do agents make their decisions? What rules or strategies do they follow? Do they learn, i.e., alter strategies based on experience?

Right Mindfulness: how smart is an agent? How sophisticated is the internal model?

Right Concentration: in what aspects should the model be precise and detailed, and where should it be simple and general?
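As a concrete illustration of the checklist, the eight questions can be answered for a hypothetical bar-attendance model (the specific answers below are assumptions made for illustration, not Miller and Page’s own specification):

```python
# Illustrative answers to the eight design questions for a hypothetical
# bar-attendance model; every entry is an assumption for demonstration.
model_spec = {
    "Right View":          "agents see past global attendance and friends' choices",
    "Right Intention":     "maximize utility from attending an uncrowded bar",
    "Right Speech":        "agents share their attendance decisions with friends",
    "Right Action":        "attend or stay home, decided once per day",
    "Right Livelihood":    "positive payoff if the bar is uncrowded, negative if crowded",
    "Right Effort":        "reinforcement learning on the probability of attending",
    "Right Mindfulness":   "a single scalar probability: a very simple internal model",
    "Right Concentration": "precise about payoffs and learning, abstract elsewhere",
}
```

Walking through such a table before coding forces every design decision to be made explicitly.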

Purpose of adaptive agent models:

• To simulate actual collective decision making by heterogeneous, autonomous agents (life, technology, institutions, organizations, governments).

• To ask how such systems adapt to changing external conditions (a model of co-evolution).

• To examine relationships between network structure and the emergent properties of such systems (search for general principles in structure-behavior relationships).

• To design agent networks better able to respond adequately to changing external conditions (process optimization).

Example 1: Gaia or Daisyworld – a paradigm of the resilience of complex adaptive systems

• Daisyworld is populated by white and black daisies (= agents). Both types have the same optimal growth temperature, but they differ in local temperature because of their color (albedo).

• White daisies spread under high solar input, because they stay cooler, black daisies spread under low solar input, because they stay warmer (= differential payoffs reflected in agent proportions).

• The adaptive change in daisy abundance keeps the planetary temperature buffered against changes in solar input (= emergent property).

• The Gaia Hypothesis: Life is self-sustaining by virtue of allowing adaptive structural change.

Watson and Lovelock 1983
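The buffering mechanism can be sketched numerically. The following is a simplified version of the Watson and Lovelock (1983) model: the albedos, the 917 W/m² flux baseline, the growth curve peaked at 295.5 K, and the Euler time-stepping are standard Daisyworld ingredients, but the exact constants and update scheme here are illustrative rather than a faithful reproduction of the published model.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def growth(T):
    """Daisy growth rate, peaked at the optimal temperature 295.5 K."""
    return max(1.0 - 0.003265 * (295.5 - T) ** 2, 0.0)

def daisyworld(luminosity, steps=800):
    """Return (white_cover, black_cover, planet_temp_K) after `steps` updates."""
    S = 917.0 * luminosity            # incoming solar flux
    aw, ab, ag = 0.75, 0.25, 0.50     # albedos: white daisies, black, bare ground
    white, black = 0.01, 0.01         # initial cover fractions
    death, q, dt = 0.3, 20.0, 0.1     # death rate, heat transfer, Euler step
    for _ in range(steps):
        bare = max(1.0 - white - black, 0.0)
        albedo = white * aw + black * ab + bare * ag
        Te = (S * (1.0 - albedo) / SIGMA) ** 0.25   # planetary temperature
        Tw = q * (albedo - aw) + Te                 # white daisies run cooler
        Tb = q * (albedo - ab) + Te                 # black daisies run warmer
        white += white * (bare * growth(Tw) - death) * dt
        black += black * (bare * growth(Tb) - death) * dt
        white = min(max(white, 0.001), 1.0)         # keep a seed population
        black = min(max(black, 0.001), 1.0)
    return white, black, Te
```

Running the sketch at low and high luminosity shows the differential payoffs at work: black daisies dominate when the sun is dim, white daisies when it is bright, and the planetary temperature varies far less across luminosities than it would on a lifeless planet.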

Example 1: Gaia or Daisyworld – a paradigm of the resilience of complex adaptive systems

Problem: It is not a general outcome that adaptive change is stabilizing and self-serving.

Question: Can one generalize what system processes favor an emergent behavior that is self-sustaining?

Observation: In natural evolution, we expect to see adaptive systems more often than mal-adaptive systems, because the latter have shorter life spans.

Example 2: The Bar Attendance Model (BAM)– an example how the dynamics of complex adaptive systems can go horribly wrong

• Customers get utility only when the bar is not too crowded (and optionally not too empty).

• Customers decide every day whether they will attend (based on expected utility).

• Information available to agents: attendance in previous days (global information) and the decisions that “friends” make (contagion effect).

• Agents “learn” from previous experience, gradually optimizing their attendance strategy.

Example 2: The Bar Attendance Model (BAM)– an example how the dynamics of complex adaptive systems can go horribly wrong

BAM outcomes:

1. When agents use only global information and there is no lower limit to utility, bar attendance converges on the maximum acceptable level (optimal behavior).

2. When there is a lower limit to utility, the system can lock into suboptimal states, e.g. all agents refuse to go to the bar.

3. Contagion effects interact with the dynamics of change, e.g. can accelerate the rate of change.

4. Small shocks to the system tend to have no effects, but shocks exceeding a certain threshold can lead to rapid loss of optimal collective behavior and long return times (equivalent to an excitable system such as a neural membrane).

Heyman et al. 2004

A model of panic (e.g. escape behavior, bank rush).
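The learning dynamic behind outcome 1 can be sketched with a minimal reinforcement rule: each agent holds a probability of attending, raised after a rewarding visit and lowered after a crowded one. This is an illustrative sketch, not the model of Heyman et al. (2004); all parameter values and the update rule are assumptions.

```python
import random

def bar_sim(n_agents=100, capacity=60, days=300, lr=0.05, seed=1):
    """Each agent holds an attendance probability, reinforced by payoff:
    attending an uncrowded bar raises it, a crowded bar lowers it."""
    random.seed(seed)
    p = [random.random() for _ in range(n_agents)]   # attendance probabilities
    history = []
    for _ in range(days):
        going = [random.random() < p[i] for i in range(n_agents)]
        attendance = sum(going)
        crowded = attendance > capacity
        for i in range(n_agents):
            if going[i]:
                # reward attending an uncrowded bar, punish a crowded one
                delta = -lr if crowded else lr
                p[i] = min(max(p[i] + delta, 0.0), 1.0)
        history.append(attendance)
    return history
```

With only this global feedback, attendance self-organizes around the capacity: when the bar is uncrowded, attendees become more eager; when it is crowded, they pull back, so the collective settles near the maximum acceptable level.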

Example 2: The Bar Attendance Model (BAM)– an example how the dynamics of complex adaptive systems can go horribly wrong

Heyman et al. 2004

[Figure: simulated bar attendance through time. During the disturbance period (shock), agents receive negative utility. Panels compare agents using learning only with agents using learning and imitation.]

Example 3: Ranchers as Adaptive Agents – an example of adaptive agents in a complex external world.

Here the “external world” is intrinsically unstable, variable and unpredictable:

Janssen et al. 2000


Model highlights:

• The theoretically optimal solution is not a single fixed strategy; it responds to fluctuations in weather, market prices and range conditions, and assumes perfect information and foresight.

• Agent behavior is measured against this theoretical optimum.

• Unsuccessful agents disappear (they go bankrupt and are replaced by new agents).

• Agents have different goals; they can adjust strategies based on experience, and their decisions change ecological conditions and their income.

• Regulators (new concept): policies in place to conserve, stabilize welfare, or maintain a free market.

• Regulators may also evolve.
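The bankruptcy-and-replacement dynamic can be sketched as a simple evolutionary loop. This is not the Janssen et al. (2000) model; the payoff function, the mutation rule, and all constants below are illustrative assumptions chosen to show selection on strategies.

```python
import random

def evolve_strategies(n=50, periods=200, seed=0):
    """Each rancher holds a stocking rate in [0, 1]; payoff depends on how
    well the rate matches that period's random 'weather'. Bankrupt agents
    are replaced by mutated copies of the wealthiest agent."""
    random.seed(seed)
    agents = [{"rate": random.random(), "wealth": 10.0} for _ in range(n)]
    for _ in range(periods):
        weather = random.random()              # good years favor high stocking
        for a in agents:
            # payoff in [-1, 1], maximized when rate matches the weather
            a["wealth"] += 1.0 - 2.0 * abs(a["rate"] - weather)
        for i, a in enumerate(agents):
            if a["wealth"] <= 0:               # bankruptcy: agent is replaced
                parent = max(agents, key=lambda x: x["wealth"])
                new_rate = min(max(parent["rate"] + random.gauss(0, 0.05), 0.0), 1.0)
                agents[i] = {"rate": new_rate, "wealth": 10.0}
    return agents
```

Because the weather is uniform on [0, 1], intermediate stocking rates have the best expected payoff; extreme strategies go bankrupt and are replaced, so the population drifts toward moderate rates — selection acting on strategies rather than on genes.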

Example 3: Ranchers as Adaptive Agents – an example of adaptive agents in a complex external world.

Summary:

Adaptive Agent Models have a wide range of applications, from computer design to natural evolution to sociology (they are a favorite paradigm of complexity science).

Common element: a network of interactive, autonomous agents that make decisions based on information and a personal goal.

Agents often learn but not necessarily (see Daisyworld).

Feedbacks from the collectively experienced level restructure agent behaviors (this makes the system “adaptive”).

Of interest in the analysis of these models is

• how emergent behavior or collective system states are generated (both adaptive and mal-adaptive);

• how agents and agent networks evolve;

• how this evolution can be regulated.