My Agent-Based Model goes global

Another week gone sees the emergence of a pattern in my research: loads of great ideas, loads of terrible software design decisions, and loads of late nights trying to implement the former despite the latter.

This week’s Agent-Based fun has seen the model transplanted onto an image of the world (c/o Wikipedia Commons, where c/o here means ‘courtesy of’ rather than the more common ‘care of’. Wanna fight about it?) which I doctored slightly to look like this:

World Map
A doctored version of Wikipedia’s plain world map

The world is then given some annual rainfall information, which decides how much wealth an agent accumulates from each piece of land they own (see previous blog posts: the rainfall/wealth function has an optimum, and too much or too little is bad).

Agents then grow, fight, earn wealth and develop “technology” as before, only this time the results look more hilarious because the world they’re playing their little game on looks a bit like our own beloved world. As you can see from the time-lapse image below, the world starts with many little tribes and eventually is dominated by three major powers: lime green (perhaps being Brazil) which rules all of the Americas, sky blue (perhaps the Democratic Republic of the Congo) reigning supreme in Africa and Eurasia, and pink (clearly China) which is hanging on bravely to south-east Asia and parts of the subcontinent. So, my model predicts Brazil, China and the Congo to be the next world superpowers. A little calibration required? Well, perhaps…

Time-lapse imagery showing Brazil, China and the Democratic Republic of the Congo taking over the world

A slightly-less dumb Agent Based Model

It’s been a frantic few days here at CASA in London. After several post-eleven o’clock ends to the working day, I’ve finally managed to get a slightly more interesting version of my agent-based model up and running.

This time the agents live in a world of varying fitness (I’ve described this fitness as “rainfall”, the reasons for which will become clear in about four months!) and, as before, will grow if they can. Each square of land occupied brings wealth every round, but fitter squares bring more wealth. Technology is then developed in a fashion stochastically related to wealth.
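To make the wealth-and-technology step concrete, here’s a minimal sketch of it in Java. The method names, the saturating chance formula and the constants are all my own inventions for illustration, not the model’s actual code:

```java
import java.util.Random;

// Illustrative sketch only: accrueWealth, developTechnology and the
// saturating-probability form are assumptions, not the real model's API.
public class WealthSketch {
    static final Random rng = new Random(42);

    // Each occupied square yields wealth according to its fitness,
    // so fitter squares bring more income per round.
    static double accrueWealth(double[] occupiedSquareFitness) {
        double income = 0;
        for (double f : occupiedSquareFitness) income += f;
        return income;
    }

    // Technology develops stochastically: the wealthier the agent,
    // the more likely an advance this round (assumed saturating form).
    static double developTechnology(double technology, double wealth) {
        double chance = wealth / (wealth + 100.0);
        if (rng.nextDouble() < chance) technology += 1.0;
        return technology;
    }

    public static void main(String[] args) {
        double wealth = accrueWealth(new double[] {0.5, 0.8, 0.3});
        double tech = developTechnology(0.0, wealth);
        System.out.println("wealth=" + wealth + " tech=" + tech);
    }
}
```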

If an agent meets another agent, it can attack only if its wealth is greater and its technology is greater. If the attack succeeds, the agent ‘wins’ its enemy’s square but, if it loses, the costs to the aggressor are high. Losing is far more likely than winning, but winning brings a new square and, with it, new wealth each turn.
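The combat rule could be sketched like this; the 25% win chance and the flat loss cost are numbers I’ve made up to show the shape of the rule, not the model’s real parameters:

```java
import java.util.Random;

// Hedged sketch of the combat rule: WIN_CHANCE and LOSS_COST are
// illustrative assumptions.
public class CombatSketch {
    static final double WIN_CHANCE = 0.25;  // losing is far more likely
    static final double LOSS_COST  = 10.0;  // assumed cost to the aggressor

    // An agent may attack only if BOTH its wealth and its technology
    // exceed its neighbour's.
    static boolean canAttack(double myWealth, double myTech,
                             double theirWealth, double theirTech) {
        return myWealth > theirWealth && myTech > theirTech;
    }

    // Returns the aggressor's wealth after the fight; a win would also
    // transfer one square, which is handled elsewhere in the model.
    static double resolveAttack(double aggressorWealth, Random rng) {
        if (rng.nextDouble() < WIN_CHANCE) return aggressorWealth;
        return Math.max(0, aggressorWealth - LOSS_COST);
    }

    public static void main(String[] args) {
        System.out.println(canAttack(50, 3, 40, 2)); // richer AND more advanced
        System.out.println(canAttack(50, 1, 40, 2)); // richer but less advanced
    }
}
```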

This dynamic leads to near-equilibrium amongst agents: the wealthier will go on the attack, and will mostly lose. They will keep attacking and (mostly) losing until their wealth is so far depleted that they are no longer ‘allowed’ to attack their neighbours. Wealths amongst the surviving few agents therefore tend to converge and an unstable equilibrium is reached.

Here’s the outcome of a typical run of the model, with lighter areas representing more rainfall, and darker areas less.

ABM 2.0
Each agent occupies a peak, and cannot be driven off it by its neighbours.

The actual fitness function has a peak somewhere in the middle of the possible rainfall levels, meaning that more rainfall is better only up to a certain point, after which it becomes worse. A long and fun discussion was had with CASA colleagues about the shape of this fitness function, the white-board results of which look like this; it’s been a stimulating week!

rainfall function
Graphs showing the shape of the rainfall fitness function.
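For anyone curious what such a peaked fitness function might look like in code, here’s one plausible shape. The Gaussian form and the OPTIMUM/WIDTH constants are my assumptions; the whiteboard discussion above considered several candidates:

```java
// A sketch of one possible peaked fitness function; the Gaussian shape
// and the constants are assumptions, not the model's actual curve.
public class FitnessSketch {
    static final double OPTIMUM = 0.5; // assumed best rainfall (normalised 0..1)
    static final double WIDTH   = 0.2;

    // Fitness peaks at OPTIMUM and falls away on either side,
    // so both too little and too much rainfall are bad.
    static double fitness(double rainfall) {
        double d = (rainfall - OPTIMUM) / WIDTH;
        return Math.exp(-0.5 * d * d);
    }

    public static void main(String[] args) {
        System.out.println(fitness(0.5)); // 1.0 at the peak
        System.out.println(fitness(0.1)); // low: too dry
        System.out.println(fitness(0.9)); // low: too wet
    }
}
```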

You never can tell

In my first experiences of making an ABM, I’ve been surprised by how slowly a seemingly simple Java program can run. This led me to look at how to profile a running Java application to see where it’s spending most of its time.

Oracle’s frankly-amazing Java VisualVM allows you to do just that and the results, in my case, were highly unexpected.

Despite running an ABM with, as I thought, fairly high levels of structural complexity (i.e. agents all have their own worldview, including little imaginary copies of all the other agents they know about, which get updated as and when an independent and omniscient ‘GamesMaster’ allows), the program turned out to be spending the vast, vast majority of its time working out which spaces are neighbours to which others. This is clearly absurd and now I’m off to tweak the algorithm. A faster model should result…
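One plausible tweak (my guess at a fix, not necessarily the one I’ll end up with) is to compute each square’s neighbours once, up front, and just look them up every tick instead of re-deriving them:

```java
import java.util.ArrayList;
import java.util.List;

// Precompute the neighbour list of every grid square once, so the hot
// loop becomes a cheap array lookup. Illustrative sketch only.
public class NeighbourCache {
    final int width, height;
    final int[][] neighbours; // neighbours[index] = indices of adjacent squares

    NeighbourCache(int width, int height) {
        this.width = width;
        this.height = height;
        this.neighbours = new int[width * height][];
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                // 4-connected (von Neumann) neighbourhood, clipped at the edges
                List<Integer> ns = new ArrayList<>();
                if (x > 0)          ns.add(index(x - 1, y));
                if (x < width - 1)  ns.add(index(x + 1, y));
                if (y > 0)          ns.add(index(x, y - 1));
                if (y < height - 1) ns.add(index(x, y + 1));
                neighbours[index(x, y)] = ns.stream().mapToInt(Integer::intValue).toArray();
            }
        }
    }

    int index(int x, int y) { return y * width + x; }

    public static void main(String[] args) {
        NeighbourCache grid = new NeighbourCache(3, 3);
        // the centre of a 3x3 grid has four neighbours; a corner has two
        System.out.println(grid.neighbours[grid.index(1, 1)].length);
        System.out.println(grid.neighbours[grid.index(0, 0)].length);
    }
}
```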

First ever profiler output
Profiler output showing huge overheads in checking neighbouring spaces

Take it as thread

Just a short missive this time, to mention that I’ve sped up my “world’s dumbest Agent-Based Model” by a factor of around fifty by having the calculation of the model and the display of the model (using Processing) happen in different threads. It’s dead easy to do in Java and the benefits are mind-blowing.

I can now simulate 250 agents in a world 500×500 spaces big in about 5 seconds. That’s pretty good, I reckon.

Here’s how it works:

public class Game implements Runnable {
  public void run() {
    do { /* run the game */ } while (true);
  }
}

and meanwhile, elsewhere, Processing does this:

Game game = new Game();

void setup() {
  new Thread(game).start();
}

void draw() {
  // draw the game whenever Processing
  // actually gets round to it!
}

It’s embarrassingly easy with Java and, as I’ve said, the benefits are astounding.

The world’s dumbest Agent-Based Model

So, the focus has shifted a little bit here at CASA in London. I’m trying to design an agent-based modelling system that is as generic as possible in its construction, using Java for the Object model and Processing for the visualisation. The new generic framework works by giving each agent their own complete little view of the world, controlling who knows what and when they get to discover stuff. A static class called GamesMaster acts as god, responding to requests from Agents to learn about new parts of the world.

It’s an interesting exercise and I’ve already used the framework to come up with the simplest imaginable Agent-Based Model (ABM) where Agents are randomly thrown into a world of square spaces (like a chessboard) and are randomly selected by the GamesMaster to be allowed to ‘Grow’. This growth simply looks at the Agent’s bordering spaces and, if they are unoccupied, expands the Agent to fill them. The results are “interesting” (not in the scientific sense I don’t think, just in a purely “Oh right” sense).
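The ‘Grow’ step could be sketched like this; the grid representation, the agent ids and the method name are illustrative stand-ins, not the framework’s real API:

```java
// Sketch of the growth rule: an agent expands into any unoccupied
// squares bordering one of its own. Names and types are assumptions.
public class GrowthSketch {
    static final int EMPTY = 0;

    // Expand agent `id` into the empty squares bordering (x, y).
    static void grow(int[][] world, int x, int y, int id) {
        int[][] offsets = {{-1, 0}, {1, 0}, {0, -1}, {0, 1}};
        for (int[] o : offsets) {
            int nx = x + o[0], ny = y + o[1];
            if (nx >= 0 && nx < world.length && ny >= 0 && ny < world[0].length
                    && world[nx][ny] == EMPTY) {
                world[nx][ny] = id; // claim the empty bordering square
            }
        }
    }

    public static void main(String[] args) {
        int[][] world = new int[3][3];
        world[1][1] = 7;      // one agent in the middle of the board
        grow(world, 1, 1, 7); // it spreads into all four neighbours
        System.out.println(world[0][1] + " " + world[1][0]);
    }
}
```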

Simple ABM
randomly-coloured Agents growing on a grid