Herd Immunity

I’ve built a model to illustrate the concept of herd immunity. It shows why we must not leave hard-to-reach parts of the population unvaccinated.

Herd immunity, also called population immunity, is the indirect protection a population gains once a sufficient proportion of it has been vaccinated (or is otherwise immune). With more vaccinations, we move towards this herd immunity threshold. (We’re not there yet, even though some say we are.)
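As a rough yardstick (a standard textbook simplification, not a number taken from the model below), the simplest epidemic models put the herd immunity threshold at

```latex
p_c = 1 - \frac{1}{R_0}
```

so a virus with a basic reproduction number of, say, R0 = 3 needs roughly two thirds of the population to be immune before outbreaks stop growing on their own.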

Here’s my model. Imagine a population in a country. People are either susceptible (they have neither been vaccinated nor had the virus), coloured green; or vaccinated, coloured blue. And there’s one person (near the bottom left, in purple) who is infectious.

Now, in my model, they will infect anyone who is susceptible (green) within that little circle surrounding them. And those will infect people surrounding *them*.
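For anyone who wants to experiment, here is a minimal Python/NumPy sketch of this kind of model – my own illustration of the mechanism described above, not the code behind the figures, with the population size, radius, and vaccination fraction chosen arbitrarily. Agents sit at random positions on a unit square; in each generation, any susceptible agent within the radius of an infected agent becomes infected, while vaccinated agents are protected.

```python
import numpy as np

rng = np.random.default_rng(42)

N = 2000            # population size on a unit square
RADIUS = 0.03       # infection radius
VACC_FRAC = 0.3     # fraction vaccinated (try values either side of the threshold)
GENERATIONS = 25

# States: 0 = susceptible (green), 1 = vaccinated (blue), 2 = infected (purple)
pos = rng.random((N, 2))
state = np.zeros(N, dtype=int)
state[rng.choice(N, int(VACC_FRAC * N), replace=False)] = 1
state[rng.choice(np.flatnonzero(state == 0))] = 2   # one initial infectious person

for generation in range(GENERATIONS):
    infected_pos = pos[state == 2]
    susceptible = np.flatnonzero(state == 0)
    if len(susceptible) == 0:
        break
    # Distance from every susceptible agent to every infected agent
    d = np.linalg.norm(pos[susceptible, None, :] - infected_pos[None, :, :], axis=2)
    newly = susceptible[(d < RADIUS).any(axis=1)]
    if len(newly) == 0:
        break                   # the outbreak has died out
    state[newly] = 2            # agents stay infected, so this is the cumulative outbreak

print(f"{np.count_nonzero(state == 2)} of {N} infected with {VACC_FRAC:.0%} vaccinated")
```

Running it with VACC_FRAC well below the threshold should leave almost everyone infected; well above it, the outbreak should fizzle out after a few generations.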

After a few generations, more and more are infected.

Eventually, almost all (although not all – some are fortunate) are infected. That’s bad. And it’s because we haven’t reached the herd immunity threshold.

So – what happens when we reach the herd immunity threshold (or get close to it)? We have many more vaccinated (they’re blue). Let’s see what happens when the infectious person (this time in the middle) infects others.

Well, in this case, there’s a local infection, but the infection can’t be sustained (that’s good). Herd immunity.

But.

What happens when those vaccines are not spread out equally across the country?

Let’s vaccinate *the same number of people* just in the bottom half of our population.

There we are – lots of vaccinated people in blue. That infection (right middle) doesn’t stand a chance.

Now let’s see what happens when that infectious person is in the top half among the unvaccinated population (remember, we have the same number of vaccinated people in the population as a whole).

Well, that infection spreads…

… and spreads …

… and spreads …

until huge numbers of people are infected *even though we have, overall, reached the herd immunity threshold*.

And that is why we need to vaccinate evenly, not leaving pockets where infection can spread.
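For completeness, the ‘pockets’ experiment in the same sketch (again my own illustration, not the code behind the figures) is just a different initial condition: the same number of vaccinated agents, but all placed in the bottom half, with the infection seeded in the unvaccinated top half.

```python
# Same overall coverage as before, but every vaccinated agent is in the bottom half
state = np.zeros(N, dtype=int)
bottom_half = np.flatnonzero(pos[:, 1] < 0.5)
n_vacc = min(int(VACC_FRAC * N), len(bottom_half))
state[rng.choice(bottom_half, n_vacc, replace=False)] = 1

# Seed the infection in the unprotected top half, then re-run the generation
# loop above: the outbreak now spreads through the unvaccinated pocket even
# though overall coverage is unchanged.
top_susceptible = np.flatnonzero((state == 0) & (pos[:, 1] >= 0.5))
state[rng.choice(top_susceptible)] = 2
```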


Agent-Based Strategizing: New Book Published by Cambridge University Press

My new book, Agent-Based Strategizing, has been published by Cambridge University Press. It is available to download for free until 31 July 2019 at the link below. The book is an overview of how agent-based modelling has been (and can be) used in strategic management.

https://www.cambridge.org/core/elements/agentbased-strategizing/4AD9D0D7416DE46AEB7F1A5478772ACF

Abstract: Strategic management is a system of continual disequilibrium, with firms in a continual struggle for competitive advantage and relative fitness. Models that are dynamic in nature are required if we are to really understand the complex notion of sustainable competitive advantage. New tools are required to tackle challenges of how firms should compete in environments characterized by both exogenous shocks and intense endogenous competition. Agent-based modelling of firms’ strategies offers an alternative analytical approach, where individual firms or component parts of a firm are modelled, each with their own strategy. Where traditional models can assume homogeneity of actors, agent-based models simulate each firm individually. This allows experimentation with strategic moves, which is particularly important where reactions to strategic moves are non-trivial. This Element introduces agent-based models and their use within management, reviews the influential NK suite of models, and offers an agenda for the development of agent-based models in strategic management.

Spatial Transmission Models: A Taxonomy and Framework

Risk Analysis

This paper, published in the journal Risk Analysis, sets out a review of the different methods used for modelling the spread of an idea, disease, etc. over space.

ABSTRACT

Within risk analysis and more broadly, the decision behind the choice of which modelling technique to use to study the spread of disease, epidemics, fires, technology, rumors, or more generally spatial dynamics, is not well documented.

While individual models are well defined and the modeling techniques are well understood by practitioners, there is little deliberate choice made as to the type of model to be used, with modelers using techniques that are well accepted in the field, sometimes with little thought as to whether alternative modelling techniques could or should be used.

In this paper, we divide modelling techniques for spatial transmission into four main categories: population-level models, where a macro-level estimate of the infected population is required; cellular models, where the transmission takes place between connected domains, but is restricted to a fixed topology of neighboring cells; network models, where host-to-host transmission routes are modelled, either as planar spatial graphs or where short cuts can take place as in social networks; and finally agent-based models which model the local transmission between agents, either as host-to-host geographical contacts, or by modelling the movement of the disease vector, with dynamic movement of hosts and vectors possible, on a Euclidean space or a more complex space deformed by the existence of information about the topology of the landscape using GIS techniques. We summarize these techniques by introducing a taxonomy classifying these modeling approaches.

Finally, we present a framework for choosing the most appropriate spatial modelling method, highlighting the links between seemingly disparate methodologies, bearing in mind that the choice of technique rests with the subject expert.

PhD Studentship in Modelling Dynamic Responses to Dynamic Threats at Loughborough University [applications now closed]

I am co-supervising the following PhD project – the application link and further details can be found here: http://www.lboro.ac.uk/study/postgraduate/research-degrees/funded/modelling-dynamic-responses/ .  The closing date is 14 December 2017.  Please get in touch if you would like to discuss this opportunity.

One of the most challenging issues for policy makers dealing with bio-security threats is their dynamic nature: diseases may spread quickly among vulnerable populations, with deadly consequences, and pandemics may cause many casualties.

Finding the appropriate response to threats is a major challenge. Whilst models exist for understanding the dynamics of the threats themselves, responses can be largely ad hoc or ‘firefighting’. The aim of this research is to produce robust responses to dynamic threats.

The research will build up as follows, from low to high complexity: static responses to static threats; static responses to dynamic threats; dynamic responses to static threats; and dynamic responses to dynamic threats.

We will use a variety of methods to define the best response: cellular automata, network analysis, spatial modelling, agent-based modelling, and the generation of dynamic fitness landscapes.

This PhD studentship is most suitable for candidates with a background in a quantitative discipline such as management science, operations research, engineering, physics and other natural sciences.

Eight Mile and the Emergence of Segregation

Eight Mile, epitomized by Eminem in the film of the same name, is a street in Detroit that marks the boundary between the majority white northern suburbs and the majority black neighborhoods closer to the inner city.

But what causes this segregation in the first place?

Hypothesis 1: The Central Planner

Zoning Map, 1930s, showing HOLC zoning, source: http://www.urbanoasis.org/projects/holc-fha/digital-holc-maps/

In Detroit’s case, as with many cities across the USA, it was, in part, due to the zoning of the city by the federal Home Owners’ Loan Corporation, which graded areas of the city by perceived lending risk, meaning that banks were indirectly encouraged to develop the outer suburbs while not offering mortgages on inner-city properties.  This led to wealthier, generally white, residents moving to the suburbs.

Indeed, physical barriers, such as the Detroit Wall, also known as the Eight Mile Wall, were built to separate majority black and majority white neighborhoods.

Detroit Today

The legacy of these zones lives on today, as seen in the map below from the 2010 US Census.  The dividing line between the green (black) areas and the blue (white) areas is Eight Mile Road.

DotMap http://demographics.virginia.edu/DotMap/ based on 2010 US Census

So, segregation exists, and is caused by a central actor. But is there an alternative explanation?

Alternative Hypothesis: Emergence

In 1971, Thomas Schelling set out to model the phenomenon, not by assuming a central planner, but by modelling the interactions of individuals.

Thomas Schelling’s model was this.  Assume individuals are placed in a grid, similar to being located on a chess board.  Allow individuals who are in a local minority to move.  In the example below, the blue circle is in a minority (with 5 out of its 6 neighbors being a different color), and according to the rules of the model, is unhappy.  It could decide to move to the vacant square to the north-west, but it would still be in a local minority (with 4 out of 6 neighbors being a different color) and would remain unhappy.  So instead, it chooses the space to the south-west where 3 out of its 6 neighbors are of the same color, and not being in a minority, it settles there.

Agent Movement © Duncan Robertson after Thomas Schelling (1971)

Schelling, perhaps without knowing it, introduced agent-based modelling: instead of modelling the system as a whole, we model individual agents, and macro-level properties – in this case segregation – emerge from micro-level (local) interactions.

We can see the effect of micro-level interactions causing macro-level segregation in the model below (developed by Duncan Robertson after Wilensky after Schelling). Each individual, or agent, decides whether they are unhappy or happy; if they are unhappy, they search until they find a vacant location where they will become happy.  This continues until all individuals attain happiness.
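Here is a minimal Python sketch of this kind of model – my own illustration after Schelling (and Wilensky’s NetLogo version), not their code, using a square grid with eight neighbours rather than the six shown in the figures, and an arbitrary happiness threshold of one half.

```python
import random

SIZE = 30            # the grid is SIZE x SIZE, wrapping at the edges
EMPTY_FRAC = 0.1     # fraction of cells left empty
THRESHOLD = 0.5      # minimum fraction of like-coloured neighbours to be happy
COLOURS = [1, 2]     # two groups; add a third value for a three-class model

random.seed(1)
grid = {(x, y): (random.choice(COLOURS) if random.random() > EMPTY_FRAC else 0)
        for x in range(SIZE) for y in range(SIZE)}

def neighbours(x, y):
    """Colours of the eight surrounding cells (0 means empty)."""
    return [grid[(x + dx) % SIZE, (y + dy) % SIZE]
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def would_be_happy(colour, x, y):
    """Happy if at least THRESHOLD of occupied neighbouring cells share our colour."""
    occupied = [c for c in neighbours(x, y) if c != 0]
    return not occupied or occupied.count(colour) / len(occupied) >= THRESHOLD

for step in range(500):
    movers = [cell for cell in grid
              if grid[cell] != 0 and not would_be_happy(grid[cell], *cell)]
    if not movers:
        break                    # everyone is happy: segregated clusters have emerged
    empties = [cell for cell in grid if grid[cell] == 0]
    random.shuffle(empties)
    for cell in movers:
        colour = grid[cell]
        # search for a vacant cell where this agent would be happy
        target = next((e for e in empties if would_be_happy(colour, *e)), None)
        if target is not None:
            grid[target], grid[cell] = colour, 0
            empties.remove(target)
            empties.append(cell)
```

Even with a modest threshold like this – agents content as long as they are not in a local minority – the grid typically ends up far more segregated than any individual agent ‘wanted’, which is Schelling’s point.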

Three Class Segregation Model © Duncan Robertson after Wilensky after Schelling

So, perhaps segregation is not imposed, but is down to us.  Or maybe, in reality, it’s a little bit of both.

Please do get in touch if you would like to discuss building or working with agent-based models.

Stacking Shelves the Amazon way

Amazon FC are playing in the Euros (the UEFA football championship). Or at least that’s what could be inferred from my name badge.

In fact, Amazon FC is one of Amazon’s fulfilment centres, located in the Polish city of Poznań, which hosted the 28th European Conference on Operational Research.

The fulfilment centre is huge – with a million separate items stocked, and up to a million items being processed every day – and is just one of a network of existing sites around Europe and around the world, the locations of which are themselves optimized to minimize cost.

One of the many interesting facts about the tour was the way that Amazon store stock on their shelves.  Like any other business, they want to minimize fixed costs.  One way they can do this is to maximize the density of items stored on their shelves.  Unlike in my Mini factory visit (which will be the subject of a later post), Amazon does not run a just-in-time stock system.  They are happy(ish) to hold stock on the basis that their customer will have it quickly and will not have to wait for it to be backordered.  Amazon was founded on the basis of being able to supply items that only a few people will want – the so-called long tail – itself the subject of a book available on, where else, Amazon.  The upshot is that if you plot the logarithm of an item’s popularity rank on one axis against the logarithm of the number sold on the other, you get a nice straight line (a Zipf distribution, for those interested).  So, Amazon will hold on to some items for years on the basis that someone, somewhere, sometime, will want to buy them.
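To make the long-tail point concrete – a quick illustration of my own with simulated numbers, not Amazon data – you can generate Zipf-like sales and check that log sales falls roughly in a straight line against log rank:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated sales for 10,000 items: the item ranked r sells roughly in
# proportion to 1/r, i.e. a Zipf-like long tail
n_items = 10_000
ranks = np.arange(1, n_items + 1)
sales = rng.poisson(1_000_000 / ranks)

# A straight line in log-log space: the fitted slope should come out close to -1
sold = sales > 0
slope, intercept = np.polyfit(np.log(ranks[sold]), np.log(sales[sold]), 1)
print(f"log(sales) ≈ {intercept:.1f} + {slope:.2f} * log(rank)")
```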

So, Amazon needs to store these millions of items.  While most things are controlled and optimized by computer, they leave it down to human intuition as to where to store items – albeit guided by optimization algorithms.

Instead of giving a specific destination for each stock item (basically a very large grid reference), they give their employees a general area in which to store items.  The idea behind this is that when you first fill a location with stock, the space-packing density will be high, as items fit next to each other.  But as items are removed and sold, spaces are created, meaning that you are effectively paying to store air.  So Amazon allows its employees to use these spaces to store more items – even if they are unrelated to each other.  Of course, the system keeps track of stock locations, but by doing things this way, the efficiency of the operation is improved, and less space is required for storage.
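As a hypothetical sketch of the bookkeeping this implies – my own illustration, not Amazon’s system – each bin can hold a mix of unrelated items, and two indexes (items-to-bins and bins-to-items) are enough to stow into any free space and still find everything again:

```python
from collections import defaultdict

class Stow:
    """Minimal 'random stow' inventory index: any item can go into any bin."""

    def __init__(self):
        self.bins = defaultdict(list)       # bin id -> items currently in that bin
        self.locations = defaultdict(set)   # item id -> bins holding at least one unit

    def stow(self, item_id, bin_id):
        """Record an item placed in whichever bin happened to have free space."""
        self.bins[bin_id].append(item_id)
        self.locations[item_id].add(bin_id)

    def pick(self, item_id):
        """Remove one unit of an item and return the bin it came from."""
        bin_id = next(iter(self.locations[item_id]))
        self.bins[bin_id].remove(item_id)
        if item_id not in self.bins[bin_id]:
            self.locations[item_id].discard(bin_id)
        return bin_id

# Unrelated items can share a bin, keeping the shelves densely packed
warehouse = Stow()
warehouse.stow("kettle", "A-12-3")
warehouse.stow("novel", "A-12-3")
print(warehouse.pick("novel"))   # -> A-12-3
```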

There is a similar example of adaptive organization used by Southwest Airlines in my article The Complexity of the Corporation.

The Complexity of the Corporation

In my paper, The Complexity of the Corporation, I introduce complexity science applied to management, discussing complex adaptive systems, emergence, co-evolution, and power laws.

“We discuss the notion of complexity as applied to firms and corporations. We introduce the background to complex adaptive systems, and discuss whether this presents an appropriate model or metaphor to be used within management science. We consider whether a corporation should be thought of as a complex system, and conclude that a firm within an industry can be defined as a complex system within a complex system. Whether we can say that the use of complexity research will fundamentally improve firm performance will depend on the effect on success derived from its application.”

Agent-Based Models to Manage the Complex

Agent-Based Models to Manage the Complex, a book chapter in Managing Organizational Complexity: Philosophy, Theory and Application: Volume 1 (ISCE Book Series – Managing the Complex), is an introduction to the use of agent-based models in management.  It demonstrates the use of models in Repast, an agent-based modeling toolkit, and links this to complexity science concepts of emergent systems.