My new book, Agent-Based Strategizing, has been published by Cambridge University Press. It is available to download for free until 31 July 2019 at the link below. The book is an overview of how agent-based modelling has been (and can be) used in strategic management.
Abstract: Strategic management is a system of continual disequilibrium, with firms in a continual struggle for competitive advantage and relative fitness. Models that are dynamic in nature are required if we are to really understand the complex notion of sustainable competitive advantage. New tools are required to tackle challenges of how firms should compete in environments characterized by both exogenous shocks and intense endogenous competition. Agent-based modelling of firms’ strategies offers an alternative analytical approach, where individual firms or the component parts of a firm are modelled, each with their own strategy. Where traditional models can assume homogeneity of actors, agent-based models simulate each firm individually. This allows experimentation with strategic moves, which is particularly important where reactions to strategic moves are non-trivial. This Element introduces agent-based models and their use within management, reviews the influential NK suite of models, and offers an agenda for the development of agent-based models in strategic management.
This paper, published in the journal Risk Analysis, reviews the different methods used for modelling the spread of an idea, disease, or similar phenomenon over space.
Within risk analysis and more broadly, the choice of modelling technique used to study the spread of disease, epidemics, fires, technology, rumors, or, more generally, spatial dynamics is not well documented.
While individual models are well defined and the modelling techniques are well understood by practitioners, the type of model to use is rarely a deliberate choice: modelers tend to adopt techniques that are well accepted in their field, sometimes with little thought as to whether alternative modelling techniques could or should be used.
In this paper, we divide modelling techniques for spatial transmission into four main categories. Population-level models give a macro-level estimate of the infected population. Cellular models capture transmission between connected domains, restricted to a fixed topology of neighboring cells. Network models represent host-to-host transmission routes, either as planar spatial graphs or with short cuts, as in social networks. Finally, agent-based models capture local transmission between agents, either as host-to-host geographical contacts or by modelling the movement of the disease vector; hosts and vectors can move dynamically, on a Euclidean space or on a more complex space deformed by information about the topology of the landscape, using GIS techniques. We summarize these techniques by introducing a taxonomy classifying these modelling approaches.
Finally, we present a framework for choosing the most appropriate spatial modelling method, highlighting the links between seemingly disparate methodologies, bearing in mind that the choice of technique rests with the subject expert.
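To make the "cellular" category concrete, here is a minimal sketch of transmission over a fixed topology of neighbouring cells. All parameters (grid size, infection probability, the von Neumann neighbourhood) are illustrative choices for this post, not values from the paper:

```python
import random

random.seed(42)

# Sketch of a cellular spatial-transmission model: infection can only pass
# between adjacent cells of a fixed grid. Parameters are illustrative.
N = 20          # grid is N x N cells
P_SPREAD = 0.3  # probability an infected cell infects each neighbour per step

grid = [[0] * N for _ in range(N)]  # 0 = susceptible, 1 = infected
grid[N // 2][N // 2] = 1            # seed one infected cell in the centre

def step(grid):
    """One synchronous update: each infected cell may infect its neighbours."""
    new = [row[:] for row in grid]
    for i in range(N):
        for j in range(N):
            if grid[i][j] == 1:
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < N and 0 <= nj < N and grid[ni][nj] == 0:
                        if random.random() < P_SPREAD:
                            new[ni][nj] = 1
    return new

for _ in range(10):
    grid = step(grid)

infected = sum(cell for row in grid for cell in row)
print(f"Infected cells after 10 steps: {infected}")
```

The defining restriction of this model class is visible in the neighbour loop: transmission is only possible along the fixed cell topology, in contrast to network models (arbitrary graphs) or agent-based models (moving hosts).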
One of the most challenging issues for policy makers dealing with bio-security threats is their dynamic nature: diseases may spread quickly and lethally among vulnerable populations, and pandemics may cause many casualties.
Finding the appropriate response to threats is a major challenge. Whilst models exist for understanding the dynamics of the threats themselves, responses are often largely ad hoc, or ‘firefighting’. The aim of this research is to produce robust responses to dynamic threats.
The research will build up as follows, from low to high complexity: static responses to static threats; static responses to dynamic threats; dynamic responses to static threats; and dynamic responses to dynamic threats.
We will use a variety of methods to define the best response: cellular automata, network analysis, spatial modelling, agent-based modelling, and the generation of dynamic fitness landscapes.
This PhD studentship is most suitable for candidates with a background in a quantitative discipline such as management science, operations research, engineering, physics and other natural sciences.
Eight Mile, epitomized by Eminem in the film of the same name, is a street in Detroit that marks the boundary between the majority white northern suburbs and the majority black neighborhoods closer to the inner city.
But what causes this segregation in the first place?
Hypothesis 1: The Central Planner
In Detroit’s case, as with many cities across the USA, it was due, in part, to the federal Home Owners’ Loan Corporation, which zoned the city into areas of risk, meaning that banks were indirectly encouraged to develop the outer suburbs while declining to offer mortgages on inner-city properties. This led to wealthier, generally white, residents moving to the suburbs.
Indeed, physical barriers, such as the Detroit Wall, also known as the Eight Mile Wall, were built to separate majority black and majority white neighborhoods.
The legacy of these zones lives on today, as seen in the map below from the 2010 US Census. The dividing line between the green (black) areas and the blue (white) areas is Eight Mile Road.
So, segregation exists, and is caused by a central actor. But is there an alternative explanation?
Alternative Hypothesis: Emergence
In 1971, Thomas Schelling set out to model the phenomenon, not by assuming a central planner, but by modelling the interactions of individuals.
Thomas Schelling’s model was this. Assume individuals are placed in a grid, similar to being located on a chess board. Allow individuals who are in a local minority to move. In the example below, the blue circle is in a minority (with 5 out of its 6 neighbors being a different color), and according to the rules of the model, is unhappy. It could decide to move to the vacant square to the north-west, but it would still be in a local minority (with 4 out of 6 neighbors being a different color) and would remain unhappy. So instead, it chooses the space to the south-west where 3 out of its 6 neighbors are of the same color, and not being in a minority, it settles there.
Schelling, perhaps without knowing it, introduced agent-based modelling, where, instead of modelling the system as a whole, the modelling of individual agents enables us to see the emergence of macro-level properties, in this case segregation, via the modelling of micro-level (local) interactions.
We can see the effect of micro-level interactions causing macro-level segregation in the model below (developed by Duncan Robertson after Wilensky after Schelling). Each individual, or agent, decides whether they are unhappy or happy; if they are unhappy, they search until they find a vacant location where they will become happy. This continues until all individuals attain happiness.
So, perhaps segregation is not imposed, but is down to us. Or maybe, in reality, it’s a little bit of both.
Please do get in touch if you would like to discuss building or working with agent-based models.
Amazon FC are playing in the Euros (the UEFA football championship). Or at least that’s what could be inferred from my name badge.
In fact, Amazon FC is one of the Amazon fulfilment centres, located in the Polish town of Poznań, location of the 28th European Conference on Operational Research.
The fulfilment centre is huge – with a million separate items stocked, and up to a million items being processed every day – and is just one of a network of existing sites around Europe and around the world, the locations of which are themselves optimized to minimize cost.
One of the many interesting facts from the tour was the way that Amazon store stock on their shelves. Like any other business, they want to minimize fixed costs. One way they can do this is to maximize the density of items stored on their shelves. Unlike in my Mini factory visit (which will be the subject of a later post), Amazon does not run a just-in-time stock system. They are happy(ish) to hold stock on the basis that their customers will have items quickly and will not have to wait for them to be backordered. Amazon was founded on the basis of being able to supply items that only a few people will want – the so-called long tail – itself the subject of a book available on, where else, Amazon. The upshot is that if you plot the logarithm of each item’s popularity rank on one axis against the logarithm of the number sold on the other, the points fall on a nice straight line (a Zipf distribution, for those interested). So, Amazon will hold on to some items for years on the basis that someone, somewhere, sometime, will want to buy them.
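That straight-line claim is easy to check numerically. Under a Zipf law, the item at popularity rank r sells in proportion to 1/r, so on log-log axes the points have slope −1. The sales figures below are made up for illustration, not Amazon’s:

```python
import math

# Hypothetical long-tail sales following a Zipf law:
# the item at popularity rank r sells C / r units.
C = 1_000_000
ranks = [1, 10, 100, 1000, 10000]
sales = [C / r for r in ranks]

# On log-log axes the points fall on a straight line: each tenfold drop
# in popularity rank means a tenfold drop in units sold.
for r, s in zip(ranks, sales):
    print(f"rank {r:>6}: log10(rank) = {math.log10(r):.0f}, "
          f"log10(sales) = {math.log10(s):.0f}")

# The slope between any two points confirms the power law.
slope = (math.log10(sales[1]) - math.log10(sales[0])) / \
        (math.log10(ranks[1]) - math.log10(ranks[0]))
print(f"slope = {slope:.1f}")  # -1.0 for a pure Zipf distribution
```

The practical consequence is the one in the text: the tail is long but never quite reaches zero, so even the ten-thousandth-ranked item still sells a trickle of units, and holding it in stock can pay off eventually.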
So, Amazon needs to store these millions of items. While most things are controlled and optimized by computer, the decision of where to store each item is left to human intuition – albeit guided by optimization algorithms.
Instead of giving a specific destination for each stock item (basically a very large grid reference), they give their employees a general area in which to store items. The idea is that when you first fill a location with stock, the space-packing density will be high, as items fit next to each other. But as items are removed and sold, gaps appear, meaning that you are effectively paying to store air. So Amazon allows its employees to use these gaps to store more items – even if they are unrelated to each other. Of course, the system keeps track of stock locations, but doing things this way improves the efficiency of the operation and reduces the space required for storage.
In my paper, The Complexity of the Corporation, I introduce complexity science applied to management, discussing complex adaptive systems, emergence, co-evolution, and power laws.
“We discuss the notion of complexity as applied to firms and corporations. We introduce the background to complex adaptive systems, and discuss whether this presents an appropriate model or metaphor to be used within management science. We consider whether a corporation should be thought of as a complex system, and conclude that a firm within an industry can be defined as a complex system within a complex system. Whether we can say that the use of complexity research will fundamentally improve firm performance will depend on the effect on success derived from its application.”