This network map shows each MP who voted for each of 5 propositions: Parliamentary Sovereignty, Confirmatory Public Vote, Customs Union, Common Market 2.0, or the Withdrawal Agreement. The large dots show the number of MPs who voted for each proposition. It shows that Parliamentary Sovereignty and the Confirmatory Public Vote are unlikely to be part of any consensus (except with Common Market 2.0 and/or the Customs Union), whereas a consensus between the Withdrawal Agreement and Common Market 2.0 and/or the Customs Union may be a possible way to form a Parliamentary majority.
Note the colours are indicative only, and that these votes were whipped by either the Labour Party or the Conservative Party (for instance, Cabinet ministers were instructed to vote only for the Withdrawal Agreement, so the blue dots to the right of the Withdrawal Agreement dot are likely to include the Cabinet).
Wednesday’s indicative votes in the House of Commons produced no definitive answer on the way forward. By using social network analysis showing the size of each voting bloc, together with ‘Hamming distances’ (ironically, usually used for error correction), we can map how close MPs are to each other, giving an indication of how a coalition could be formed if each bloc of MPs flipped a vote in order to form a Parliamentary consensus.
Brexit is currently turning out to be a failed experiment in direct democracy, something I pointed out nearly three years ago.
However, with the House of Commons opening up its data, it does allow us a rare insight into the workings of Parliament and the MPs who represent us.
One interesting data source released by the UK Parliament shows the voting record of every MP for every ‘division’ (vote). One particularly interesting vote was the one held on Wednesday 27 March, when MPs were able to cast their votes on 8 motions:
By making a so-called bipartite network, we can map individual MPs to the motions for which they voted yes. This results in the map of MPs shown below.
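As a minimal sketch of that bipartite structure (the MP names and votes below are made up for illustration, not the real division data), the two node sets – MPs and motions – are linked by an edge for each ‘aye’:

```python
# One node set for MPs, one for motions; an edge means that MP voted 'aye'.
# MP names and their votes are hypothetical.
ayes = {
    "MP A": {"Customs Union", "Common Market 2.0"},
    "MP B": {"Confirmatory Public Vote"},
    "MP C": {"Customs Union"},
}
edges = [(mp, motion) for mp, motions in ayes.items() for motion in motions]

# Reading the network the other way: who supported a given motion?
supporters = sorted(mp for mp, motion in edges if motion == "Customs Union")
print(supporters)  # ['MP A', 'MP C']
```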
While this is interesting, it doesn’t really show the distance between MPs’ voting intentions.
We can redraw the map by using the distance between MPs according to the votes they cast. We can do this by constructing a binary string of their votes. For simplicity, we count only the ‘aye’ (yes) votes, and ignore abstentions and noes.
For instance, if an MP voted yes, no, no, yes, no, yes, yes, no, they would be given a string of 10010110, whereas if another MP voted no, no, yes, no, yes, yes, no, yes, they would be given a string of 00101101. So, what is the ‘distance’ between 10010110 and 00101101? For this, we use the Hamming distance – count the number of locations where there is a difference. In this case, the Hamming distance between the MPs is 6.
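In code, the Hamming distance is a one-liner; a sketch using the example strings above:

```python
def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length strings differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

print(hamming("10010110", "00101101"))  # 6
```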
By constructing a graph of Hamming distances of 1, we can construct neighbours of individual groups of MPs. This is shown in the graph below.
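A sketch of that construction (the MPs and vote strings here are hypothetical): pair up any two MPs whose vote strings differ in exactly one position.

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Hypothetical MPs and vote strings, for illustration only.
mps = {"MP A": "10010110", "MP B": "10010100", "MP C": "00101101"}

names = list(mps)
edges = [
    (m1, m2)
    for i, m1 in enumerate(names)
    for m2 in names[i + 1:]
    if hamming(mps[m1], mps[m2]) == 1   # neighbours: one flipped vote apart
]
print(edges)  # [('MP A', 'MP B')]
```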
I have listed the votes in the following order:
1. Common Market 2.0
2. Confirmatory Public Vote
3. Contingent Preferential Arrangement
4. Customs Union
5. EFTA and EEA
6. Labour’s Alternative Plan
7. No Deal
8. Revocation to Avoid No Deal
However, this isn’t very useful, as it doesn’t show the type of MP that voted for each of these. So we can relabel the nodes with a representative MP from that bloc.
From this, you can work out the number of intermediate MPs needed to get to any other MP. What is quite interesting is that every MP was just one vote away from another – no one is isolated. Which, in some small way, gives us hope.
We can then weight the edges to show the possible coalition that could be made if these blocs were to join. And here it is:
The size of each circle represents the number of MPs that voted the same way as the representative MP named on the circle, and the thickness of the links shows how many MPs would join together if one vote were flipped.
If the linked blocs join up, you can see how there could be a path to a Parliamentary majority – for the blocs to join, it would mean switching one vote from ‘aye’ to ‘no’ or vice versa.
For completeness, the list of MPs and their associated binary strings is linked here. You can find the MPs that are part of each bloc by searching for the MP name in the label on the network graph. The Hamming distance between each and every MP is available on request. I leave it to the reader to construct an affinity matrix – or what I would currently describe as a ‘Matrix of Hate’ – for each MP pair.
I am very pleased to have been invited to join the Peer Review College for the UKRI (UK Research and Innovation) Future Leaders Fellowships.
UK Research and Innovation (UKRI) ‘is the national funding agency investing in science and research in the UK. Operating across the whole of the UK with a combined budget of more than £6 billion, UKRI brings together the 7 Research Councils, Innovate UK and Research England’.
‘The UK Research and Innovation Future Leaders Fellowships (FLF) will grow the strong supply of talented individuals needed to ensure that UK research and innovation continues to be world class.’
This paper, published in the journal Risk Analysis, sets out a review of the different methods used for modelling the spread of an idea, a disease, etc. over space.
Within risk analysis and more broadly, the decision behind the choice of which modelling technique to use to study the spread of disease, epidemics, fires, technology, rumors, or more generally spatial dynamics, is not well documented.
While individual models are well defined and the modeling techniques are well understood by practitioners, there is little deliberate choice made as to the type of model to be used, with modelers using techniques that are well accepted in the field, sometimes with little thought as to whether alternative modelling techniques could or should be used.
In this paper, we divide modelling techniques for spatial transmission into four main categories: population-level models, where a macro-level estimate of the infected population is required; cellular models, where the transmission takes place between connected domains, but is restricted to a fixed topology of neighboring cells; network models, where host-to-host transmission routes are modelled, either as planar spatial graphs or where short cuts can take place as in social networks; and finally agent-based models, which model the local transmission between agents, either as host-to-host geographical contacts, or by modelling the movement of the disease vector, with dynamic movement of hosts and vectors possible, on a Euclidean space or a more complex space deformed by the existence of information about the topology of the landscape using GIS techniques. We summarize these techniques by introducing a taxonomy classifying these modeling approaches.
Finally, we present a framework for choosing the most appropriate spatial modelling method, highlighting the links between seemingly disparate methodologies, bearing in mind that the choice of technique rests with the subject expert.
One of the issues with strategic management (and business management more generally) is that the folklore of academic writing is passed down the generations, from professor to student, without a critical reading of the original works – or without their being read at all.
Business text books excerpt the salient points from academic articles, and can miss the nuances of the text.
At Oxford, students are required to read 10-20 articles per week which they synthesize into a tutorial essay.
Some of us do not have the luxury of being able to study for a degree full time, and may not have access to the original articles – although sites such as Google Scholar and ResearchGate are starting to break down the barriers to access to academic works.
I am starting a series of YouTube videos to cover a relatively broad area – general management, strategic management, and other bits and pieces that I find interesting.
I will upload these to my YouTube channel. I hope you find them interesting. Please do leave comments on the video’s page on YouTube, and consider subscribing to my channel for a (hopefully) regular dose of academic articles to keep you thinking about management a little more critically.
One of the most challenging issues for policy makers dealing with bio-security threats is their dynamic nature: diseases may spread quickly and lethally among vulnerable populations, and pandemics may cause many casualties.
Finding the appropriate response to threats is a major challenge. Whilst models exist for understanding the dynamics of the threats themselves, responses can be largely ad hoc or ‘firefighting’. The aim of this research is to produce robust responses to dynamic threats.
The research will build up as follows, from low to high complexity: static responses to static threats; static responses to dynamic threats; dynamic responses to static threats; and dynamic responses to dynamic threats.
We will use a variety of methods to define the best response: cellular automata, network analysis, spatial modelling, agent-based modelling, and the generation of dynamic fitness landscapes.
This PhD studentship is most suitable for candidates with a background in a quantitative discipline such as management science, operations research, engineering, physics and other natural sciences.
Right now, Houston is going through one of the most severe storms ever to hit the USA. The main conversation on today’s news was whether the Mayor (who has authority to do such things) should have evacuated the City prior to the arrival of Hurricane Harvey.
For a start, NOAA did not forecast a direct hit on the City. But it was forecast that potentially devastating rains were on the way.
Houston has been here before, of course, in 2005 when the then Mayor did order that the city be evacuated. And around 100 died, as a result of the gridlock and heat.
But let’s think about what an uncontrolled evacuation of Houston would mean.
While there is, of course, a Houston evacuation plan, assuming you want to avoid the Gulf of Mexico, the main routes are via the north and west: I-69 to the north-east, I-45 to the north, US Route 290 to the north-west, and I-10 to the west.
Now let’s consider the capacity of these roads. The capacity of roads in the US is given by the Department of Transportation’s Highway Capacity Manual. While there is a whole science devoted to calculating freeway flow measurements, you need to take into account not only the capacity of the road (the number of cars), but also their speed. Combining these gives us a flow rate, i.e. the number of cars that will pass a point in a given length of time. We can look at the academic literature to see what this is. Dixit and Wolshon (2014) have a nice study where they looked at maximum evacuation flow rates. Their Table 2 shows the empirical data, but it’s around 1,000 vehicles per hour per lane. Assume the Houston metro area’s evacuation routes to the north and west total around 4 routes of 4 lanes each. Apply a factor of 1.5 for contraflows, and you have around 25 lanes. So that’s 25 x 1,000 = 25,000 vehicles per hour. And let’s assume an occupancy of 4 people per vehicle (i.e. most would evacuate by car). So that’s 100,000 people per hour.
The problem with Houston is that it’s the USA’s fourth-largest city. And that means it’s big: Greater Houston has a population of 6.5 million. So that means 6,500,000 / 100,000 = 65 hours. Non-stop, day and night. Without accidents. A very bold move for a hurricane that was not forecast to hit the city directly.
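The arithmetic is easy to check; a sketch using the rough assumptions above (every figure here is an estimate, not a measured value):

```python
flow_per_lane = 1_000      # vehicles/hour/lane, after Dixit and Wolshon (2014)
lanes = 25                 # ~4 routes x 4 lanes, x1.5 for contraflow, rounded
occupants = 4              # assumed people per vehicle
population = 6_500_000     # Greater Houston

people_per_hour = lanes * flow_per_lane * occupants  # 100,000
hours = population / people_per_hour
print(hours)  # 65.0
```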