Mapping Brexit II

An updated map of Monday’s Indicative Votes, now including last week’s votes for the Withdrawal Agreement. More analysis is available here:
http://www.duncanrobertson.com/2019/03/28/mapping-brexit/

This network map shows each MP who voted for each of five propositions: Parliamentary Sovereignty, Confirmatory Public Vote, Customs Union, Common Market 2.0, or the Withdrawal Agreement. The large dots show the number of MPs who voted for each proposition. The map suggests that Parliamentary Sovereignty and the Confirmatory Public Vote are unlikely to be part of any consensus (except perhaps in combination with Common Market 2.0 and/or the Customs Union), whereas a consensus between the Withdrawal Agreement and Common Market 2.0 and/or the Customs Union may be a possible way to form a Parliamentary majority.

Note the colours are indicative only, and that these votes were whipped by either the Labour Party or the Conservative Party (for instance, Cabinet ministers were instructed to vote only for the Withdrawal Agreement, so the blue dots to the right of the Withdrawal Agreement dot are likely to include the Cabinet).

Spatial Transmission Models: A Taxonomy and Framework

Risk Analysis

This paper, published in the journal Risk Analysis, reviews the different methods used to model the spread of a disease, an idea, or other phenomena over space.

ABSTRACT

Within risk analysis and more broadly, the decision behind the choice of which modelling technique to use to study the spread of disease, epidemics, fires, technology, rumors, or more generally spatial dynamics, is not well documented.

While individual models are well defined and the modeling techniques are well understood by practitioners, there is little deliberate choice made as to the type of model to be used, with modelers using techniques that are well accepted in the field, sometimes with little thought as to whether alternative modelling techniques could or should be used.

In this paper, we divide modelling techniques for spatial transmission into four main categories: population-level models, where a macro-level estimate of the infected population is required; cellular models, where the transmission takes place between connected domains, but is restricted to a fixed topology of neighboring cells; network models, where host-to-host transmission routes are modelled, either as planar spatial graphs or where short cuts can take place as in social networks; and finally agent-based models which model the local transmission between agents, either as host-to-host geographical contacts, or by modelling the movement of the disease vector, with dynamic movement of hosts and vectors possible, on a Euclidean space or a more complex space deformed by the existence of information about the topology of the landscape using GIS techniques. We summarize these techniques by introducing a taxonomy classifying these modeling approaches.

Finally, we present a framework for choosing the most appropriate spatial modelling method, highlighting the links between seemingly disparate methodologies, bearing in mind that the choice of technique rests with the subject expert.

PhD Studentship in Modelling Dynamic Responses to Dynamic Threats at Loughborough University [applications now closed]

I am co-supervising the following PhD project – the application link and further details can be found here: http://www.lboro.ac.uk/study/postgraduate/research-degrees/funded/modelling-dynamic-responses/. The closing date is 14 December 2017. Please get in touch if you would like to discuss this opportunity.

One of the most challenging issues for policy makers dealing with bio-security threats is their dynamic nature: diseases may spread quickly, with deadly consequences, among vulnerable populations, and pandemics may cause many casualties.

Finding the appropriate response to threats is a major challenge. Whilst models exist for understanding the dynamics of the threats themselves, responses can be largely ad hoc or ‘firefighting’. The aim of this research is to produce robust responses to dynamic threats.

The research will build up as follows, from low to high complexity: static responses to static threats; static responses to dynamic threats; dynamic responses to static threats; and dynamic responses to dynamic threats.

We will use a variety of methods to define the best response: cellular automata, network analysis, spatial modelling, agent-based modelling, and the generation of dynamic fitness landscapes.

This PhD studentship is most suitable for candidates with a background in a quantitative discipline such as management science, operations research, engineering, physics and other natural sciences.

Why The Mayor of Houston Was Right to Not Evacuate the City

Right now, Houston is going through one of the most severe storms ever to hit the USA.  The main conversation on today’s news was whether the Mayor (who has authority to do such things) should have evacuated the City prior to the arrival of Hurricane Harvey.

For a start, NOAA did not forecast a direct hit on the City.  But it was forecast that potentially devastating rains were on the way.

Houston has been here before, of course: in 2005 the then Mayor did order that the city be evacuated, and around 100 people died as a result of the gridlock and heat.

But let’s think about what an uncontrolled evacuation of Houston would mean.

This is the map of the Houston highway system.

And here it is on Google Maps.

 

While there is, of course, a Houston evacuation plan, assuming you want to avoid the Gulf of Mexico the main routes are to the north and west: I-69 to the north-east, I-45 to the north, US Route 290 to the north-west, and I-10 to the west.

Now let’s consider the capacity of these roads.  Road capacity in the US is set out in the Transportation Research Board’s Highway Capacity Manual.  While there is a whole science devoted to freeway flow measurement, you need to take into account not only the capacity of the road (the number of cars), but also their speed.  Combining these gives us a flow rate, i.e. the number of cars that will pass a point in a particular length of time.  We can look at the academic literature to see what this is.  Dixit and Wolshon (2014) have a nice study of maximum evacuation flow rates.  Their Table 2 shows the empirical data, but it’s around 1,000 vehicles per hour per lane.

Assume the Houston metro evacuation routes to the north and west amount to around 4 routes of 4 lanes each.  Allow a factor of 1.5 for contraflows, and you have around 25 lanes.  So that’s 25 x 1,000 = 25,000 vehicles per hour.  And let’s assume an occupancy of 4 passengers per vehicle (i.e. most would evacuate by car).  So that’s 100,000 passengers per hour.

The problem with Houston is that it’s the USA’s fourth-largest city.  And that means it’s big.  Greater Houston has a population of 6.5 million.  So that means 6.5 million / 100,000 = 65 hours.  Non-stop, day and night.  Without accidents.  A very bold move for a hurricane that was not forecast to hit the city directly.
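As a sanity check, here is the same back-of-the-envelope calculation as a short Python sketch. The lane count, flow rate, and occupancy are the rough assumptions above, not measured values.

# Back-of-the-envelope evacuation time for Greater Houston.
# All figures are the rough assumptions from the text above, not measured values.

flow_per_lane = 1_000        # vehicles per hour per lane (approx., from Dixit and Wolshon, 2014)
lanes = 25                   # ~4 routes x 4 lanes each, with a factor of 1.5 for contraflows
occupancy = 4                # passengers per vehicle
population = 6_500_000       # Greater Houston

vehicles_per_hour = lanes * flow_per_lane             # 25,000 vehicles per hour
passengers_per_hour = vehicles_per_hour * occupancy   # 100,000 passengers per hour
evacuation_hours = population / passengers_per_hour   # 65 hours

print(f"{passengers_per_hour:,} passengers per hour -> {evacuation_hours:.0f} hours to evacuate")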

And tropical storm watches are only typically issued 48 hours before winds are due to strike.

By not evacuating, the city keeps its resources in Houston, rather than dispersing them across the locations of incidents caused by evacuating traffic.

The real test however comes in the days and months ahead, where the process of rescue, recovery, and rebuilding is critical.

References

Dixit, V. and Wolshon, B. (2014) ‘Evacuation Traffic Dynamics’, Transportation Research Part C, 49, pp. 114–125.

 

Two Funded PhD Studentships in Agent-Based Modelling at Loughborough University School of Business and Economics [applications now closed]

I am looking for high-quality, numerate candidates to fill these exciting PhD studentships with me as a (co-)supervisor at Loughborough’s School of Business and Economics.  Please note that this post has been updated with new links (below).

The first is modelling dynamic responses to dynamic threats; the second is using analytics in traditional industries.  Please see the links below for further details and how to apply.

Modelling Dynamic Responses to Dynamic Threats (with Professor Gilberto Montibeller)

One of the most challenging issues for policy makers dealing with bio-security threats is their dynamic nature: diseases may spread quickly, with deadly consequences, among vulnerable populations, and pandemics may cause many casualties.

Finding the appropriate response to threats is a major challenge.  Whilst models exist for understanding the dynamics of the threats themselves, responses can be largely ad hoc or ‘firefighting’.  The aim of this research is to produce robust responses to dynamic threats.

The research will build up as follows, from low to high complexity: static responses to static threats; static responses to dynamic threats; dynamic responses to static threats; and dynamic responses to dynamic threats.

We will use a variety of methods to define the best response: cellular automata, network analysis, spatial modelling, agent-based modelling, and the generation of dynamic fitness landscapes.

This PhD studentship is most suitable for candidates with a background in a quantitative discipline such as management science, operations research, engineering, physics and other natural sciences.

Business Analytics for Public Services and Regulated Industries: New Techniques for Analytics-Driven Decision Making in Traditional Industries (with Dr Maria Neiswand and Professor David Saal)

The rise of business analytics has created enormous opportunities within the private sector, but these benefits have yet to be fully realized in public services and regulated industries such as energy, water, and transportation networks. At the same time, governments are mandating the collection of data by installing smart metering devices. This creates a need for innovative ways of thinking in industries that are still largely based on traditional economic assumptions about optimization and behaviour.

As an example, the energy sector is characterised by strongly defined market structures with incumbents and an overriding need for energy network security, which not only prevents the rapid adoption of technical change but also translates into regulatory outcomes, such as price caps.

This exciting PhD opportunity will integrate theoretical and empirical approaches and spans two strengths of Loughborough’s School of Business and Economics: microeconomics, particularly the rigorous analysis of the determinants of productivity and performance (including cost modelling), and management science (including simulation and network analysis).

We are therefore seeking a student with a quantitative background (whether in economics, management science, engineering, physics, or other natural sciences). A willingness to learn new techniques such as cost modelling, performance measurement, agent-based modelling, and network analysis is desirable.

 

Agent-Based Models for Simulating Human Behavior: IFORS Conference 2017

This presentation, which includes joint work with Alberto Franco, was given at the IFORS (International Federation of Operational Research Societies) conference in Quebec City, QC, Canada.  We present two different agent-based models for simulating human behavior.

We use the example of group decision making.

The first model uses a cognitive fitness landscape to model the quality of a decision, where participants compare their decision with that of their nearest neighbor.  The decision is therefore based on an external comparison.

The second model uses an internal comparison of a decision with the next best alternative.  The model is based on the psychological concept of hidden profiles, where participants can only reach the best decision by sharing information with the group.

The Conservative Manifesto: Care Fees as a Percentage of Initial Wealth

The Conservative Party have announced their manifesto for the 2017 General Election.  Included in this (on pages 64-65) is the following proposal:

We will introduce a single capital floor, set at £100,000, more than four times the current means test threshold. This will ensure that, no matter how large the cost of care turns out to be, people will always retain at least £100,000 of their savings and assets, including value in the family home.

A quick calculation of the effective ‘Tax’ (Care Fees as a Percentage of Initial Wealth) shows the following distribution of Tax Rates.  On the x-axis is initial wealth (the value of your house plus any savings), and on the y-axis is the Tax Rate.  The Conservative Party have since augmented this plan with a proposed cap (consultation to come).

Values used for Care Fees: £20,000, £40,000, £60,000, £80,000, £100,000.  Values used for Initial Wealth: £0, £100,000, …, £1,000,000.  The trend continues downwards after this figure.
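For anyone wanting to reproduce the chart, here is a minimal Python sketch of the calculation, assuming (as the manifesto wording implies) that care fees are only payable down to the £100,000 floor; the fee and wealth values are those listed above.

# Effective 'Tax' = care fees actually paid, as a fraction of initial wealth,
# assuming fees are only payable down to a capital floor of £100,000.

FLOOR = 100_000

def effective_tax_rate(initial_wealth, care_fees):
    if initial_wealth == 0:
        return 0.0
    payable = min(care_fees, max(initial_wealth - FLOOR, 0))
    return payable / initial_wealth

care_fee_values = [20_000, 40_000, 60_000, 80_000, 100_000]
wealth_values = range(0, 1_000_001, 100_000)

for fees in care_fee_values:
    rates = ", ".join(f"{effective_tax_rate(w, fees):.0%}" for w in wealth_values)
    print(f"Care fees £{fees:,}: {rates}")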

As many have pointed out, this affects individuals with initial wealth just over £100,000 proportionately far more than those with higher initial wealth.  More detail on policy options for funding social care can be found in the Dilnot Commission report, and a summary of their proposals is shown below:

 

What does the ℮ mark mean on packaging?

The ‘℮’ symbol, or ‘e-mark’, is a mark you will see on packaging such as tins or packets in Europe.  Millions of us see it every day, but what does it actually mean?

The raison d’être of the e-mark comes from the problem of selling goods to the public.  We would all like to think that we are getting what we pay for, but does that mean we should always get what we pay for?

Well, if you use the ℮-mark, then no.  And yes if you don’t.  So you use the ℮-mark.  By doing so, some of us are short-changed, but, on average, we shouldn’t be.

The e-mark was introduced in 1976 by the legislation known by the snappy title of ‘Council Directive 76/211/EEC of 20 January 1976 on the approximation of the laws of the Member States relating to the making-up by weight or by volume of certain prepackaged products’.

This sets out a nominal value for a product.  It means that, on average, we should not receive less than the value stated before the e-mark.  But we would be really annoyed if we received, say, nothing, and someone else received twice the nominal amount.  So the concept of tolerable negative error was introduced at the same time, to set out the minimum legal amount that each packet, tin, or container should contain.  The idea is that only a few containers can weigh less than the declared value minus the tolerable negative error (and none can fall short by more than twice the tolerable negative error… that would be, well, intolerable).

In packets from 5 grams to 10 kilogrammes, the tolerable negative error varies from 9% (quite a lot) to 1.5% (not such a lot), the rationale being that it is easier to measure larger values with greater accuracy.
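To make the three rules concrete, here is a rough Python sketch of a batch check. The tolerable negative error depends on the nominal quantity (the 9% to 1.5% range above; see the Regulations for the exact banded table), and the limit on how many packages count as ‘a few’ is an illustrative assumption rather than a figure from the post.

# Sketch of the checks implied by the e-mark regime for a batch of packages.
# tne = tolerable negative error for the nominal quantity (roughly 9% of nominal
# for small packs down to 1.5% for large ones). The 2.5% allowance for 'a few'
# shortfalls is an illustrative assumption, not a figure quoted above.

def batch_passes(contents, nominal, tne, max_shortfall_fraction=0.025):
    # Rule 1: on average, the batch must contain at least the nominal quantity.
    if sum(contents) / len(contents) < nominal:
        return False
    # Rule 2: only a few packages may fall below nominal minus the tolerable negative error.
    below_first_limit = sum(1 for c in contents if c < nominal - tne)
    if below_first_limit > max_shortfall_fraction * len(contents):
        return False
    # Rule 3: no package may fall short by more than twice the tolerable negative error.
    if any(c < nominal - 2 * tne for c in contents):
        return False
    return True

# Example: a 400 g nominal pack, taking a 3% tolerable negative error (12 g) for illustration.
print(batch_passes([401, 398, 405, 396, 400.5], nominal=400, tne=12))   # True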

Excruciating detail can be found in The Weights and Measures (Packaged Goods) Regulations 2006.  It is interesting to note that the HTML version of the Regulations contains illegible formulae:

I leave it as an exercise for the lawyer to determine whether this would be a valid defence in criminal proceedings.

Eight Mile and the Emergence of Segregation

Eight Mile, made famous by Eminem in the film of the same name, is a street in Detroit that marks the boundary between the majority-white northern suburbs and the majority-black neighborhoods closer to the inner city.

But what causes this segregation in the first place?

Hypothesis 1: The Central Planner

Zoning Map, 1930s, showing HOLC zoning, source: http://www.urbanoasis.org/projects/holc-fha/digital-holc-maps/

In Detroit’s case, as with many cities across the USA, it was, in part, due to the zoning of the city by the federal Home Owners’ Loan Corporation, which graded neighborhoods into areas of lending risk, meaning that banks were indirectly encouraged to develop the outer suburbs while not offering mortgages on inner-city properties.  This led to wealthier, generally white, residents moving to the suburbs.

Indeed, physical barriers, such as the Detroit Wall, also known as the Eight Mile Wall, were built to separate majority-black and majority-white neighborhoods.

Detroit Today

The legacy of these zones lives on today, as seen in the map below from the 2010 US Census.  The dividing line between the green (black) areas and the blue (white) areas is Eight Mile Road.

DotMap http://demographics.virginia.edu/DotMap/ based on 2010 US Census

 

 

So, segregation exists, and is caused by a central actor. But is there an alternative explanation?

Alternative Hypothesis: Emergence

In 1971, Thomas Schelling set out to model the phenomenon, not by assuming a central planner, but by modelling the interactions of individuals.

Thomas Schelling’s model was this.  Assume individuals are placed on a grid, similar to being located on a chess board.  Allow individuals who are in a local minority to move.  In the example below, the blue circle is in a minority (with 5 out of its 6 neighbors being a different color) and, according to the rules of the model, is unhappy.  It could decide to move to the vacant square to the north-west, but it would still be in a local minority (with 4 out of 6 neighbors being a different color) and would remain unhappy.  So instead, it chooses the space to the south-west, where 3 out of its 6 neighbors are of the same color, and, not being in a minority, it settles there.

Agent Movement © Duncan Robertson after Thomas Schelling (1971)

Schelling, perhaps without knowing it, introduced agent-based modelling, where, instead of modelling the system as a whole, we model individual agents, allowing us to see macro-level properties, in this case segregation, emerge from micro-level (local) interactions.

We can see the effect of micro-level interactions causing macro-level segregation in the model below (developed by Duncan Robertson after Wilensky after Schelling). Each individual, or agent, decides whether they are unhappy or happy; if they are unhappy, they search until they find a vacant location where they will become happy.  This continues until all individuals attain happiness.
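For those who want to experiment, here is a minimal Python sketch of this kind of model: a simplified two-colour variant rather than the three-class model pictured, with an illustrative grid size, density, and similarity threshold.

import random

# Minimal two-colour Schelling-style model: unhappy agents (those in a local minority)
# move to an empty cell where they would be happy; repeat until everyone is happy.

SIZE, DENSITY, THRESHOLD = 20, 0.8, 0.5   # grid size, occupancy, minimum fraction of like neighbours

def neighbours(grid, x, y):
    cells = [grid[(x + dx) % SIZE][(y + dy) % SIZE]
             for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    return [c for c in cells if c is not None]    # ignore empty cells

def happy(grid, x, y):
    colour, nbrs = grid[x][y], neighbours(grid, x, y)
    if colour is None or not nbrs:
        return True
    return sum(1 for n in nbrs if n == colour) / len(nbrs) >= THRESHOLD

grid = [[random.choice(["red", "green"]) if random.random() < DENSITY else None
         for _ in range(SIZE)] for _ in range(SIZE)]

for step in range(100_000):
    unhappy = [(x, y) for x in range(SIZE) for y in range(SIZE)
               if grid[x][y] is not None and not happy(grid, x, y)]
    if not unhappy:
        break                                      # everyone is happy: segregation has emerged
    x, y = random.choice(unhappy)
    empties = [(i, j) for i in range(SIZE) for j in range(SIZE) if grid[i][j] is None]
    random.shuffle(empties)
    for i, j in empties:                           # try empty cells until one makes the agent happy
        grid[i][j], grid[x][y] = grid[x][y], None
        if happy(grid, i, j):
            break
        grid[x][y], grid[i][j] = grid[i][j], None  # undo the move and keep searching

print(f"Finished after {step} moves")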

Three Class Segregation Model © Duncan Robertson after Wilensky after Schelling

So, perhaps segregation is not imposed, but is down to us.  Or maybe, in reality, it’s a little bit of both.

Please do get in touch if you would like to discuss building or working with agent-based models.