The Conservative Manifesto: Care Fees as a Percentage of Initial Wealth

The Conservative Party have announced their manifesto for the 2017 General Election.  Included in this (on pages 64-65) is the following proposal:

We will introduce a single capital floor, set at £100,000, more than four times the current means test threshold. This will ensure that, no matter how large the cost of care turns out to be, people will always retain at least £100,000 of their savings and assets, including value in the family home.

A quick calculation of the effective ‘Tax’ (Care Fees as a Percentage of Initial Wealth) shows the following distribution of Tax Rates.  On the x axis is initial wealth (the value of your house plus any savings), and on the y axis is the Tax Rate.  The Conservative Party have since augmented this plan with a proposed cap (consultation to come).

Values used for Care Fees: £20,000, £40,000, £60,000, £80,000, £100,000.  Values used for Initial Wealth: £0, £100,000, …, £1,000,000.  The trend continues downwards after this figure.
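For the curious, here is a minimal sketch of the calculation behind the chart, under my reading of the proposal: care fees are payable only down to the £100,000 floor, so the effective rate is the capped fee divided by initial wealth.

```python
# A minimal sketch of the effective 'Tax' calculation, assuming fees are
# payable only on wealth above the proposed £100,000 floor.
FLOOR = 100_000

def effective_tax_rate(initial_wealth, care_fees):
    """Care fees paid as a fraction of initial wealth."""
    payable = min(care_fees, max(0, initial_wealth - FLOOR))
    return payable / initial_wealth if initial_wealth else 0.0

# Reproduce the grid of values used above.
for wealth in range(0, 1_000_001, 100_000):
    rates = [effective_tax_rate(wealth, fees)
             for fees in (20_000, 40_000, 60_000, 80_000, 100_000)]
    print(f"£{wealth:,}: " + ", ".join(f"{r:.1%}" for r in rates))
```

Note the spike just above the floor: at £200,000 of initial wealth, £100,000 of care fees gives an effective rate of 50%, falling to 10% at £1,000,000.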

As many have pointed out, this affects individuals with initial wealth just over £100,000 proportionately far more than those with higher initial wealth.  More detail on policy options for funding social care can be found in the Dilnot Commission report, and a summary of their proposals is shown below:


What does the ℮ mark mean on packaging?

The ‘℮’ symbol, or ‘e-mark’, is a symbol you will see on packaging such as tins or packets in Europe.  Millions of us see this symbol every day, but what does it actually mean?

The raison d’être for the e-mark comes from the problem of selling goods to the public.  We would all like to think that we are getting what we pay for, but does that mean we should always get what we pay for?

Well, if you use the ℮-mark, then no.  And yes if you don’t.  So you use the ℮-mark.  By doing so, some of us are short-changed, but, on average, we shouldn’t be.

The e-mark was introduced in 1976 by the legislation known by the snappy title of ‘Council Directive 76/211/EEC of 20 January 1976 on the approximation of the laws of the Member States relating to the making-up by weight or by volume of certain prepackaged products’.

This sets out a nominal value for a product.  This means that, on average, we should not receive less than the value stated before the e-mark.  But we would be really annoyed if we received, say, nothing, and someone else received twice the nominal amount.  So the concept of the tolerable negative error was introduced at the same time, to set out the minimum legal amount that each packet or tin or container should contain.  The idea is that only a few containers may weigh less than the declared value minus the tolerable negative error (and none may fall short by more than twice the tolerable negative error… that would be, well, intolerable).

In packets from 5 grams to 10 kilogrammes, the tolerable negative error varies from 9% (quite a lot) to 1.5% (not such a lot), the rationale being that it is easier to measure larger values with greater accuracy.
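To make those rules concrete, here is a rough Python sketch of the three checks. The TNE bands follow my reading of the directive’s table, and the 2.5% tolerated fraction is an illustrative assumption (the real sampling plans are set out in the Regulations), so treat this as a sketch rather than a compliance tool.

```python
def tolerable_negative_error(nominal_g):
    """TNE in grams for a nominal quantity in grams (5 g to 10 kg),
    following my reading of the table in Directive 76/211/EEC."""
    if not 5 <= nominal_g <= 10_000:
        raise ValueError("outside the 5 g to 10 kg range")
    if nominal_g <= 50:    return nominal_g * 0.09   # 9% (quite a lot)
    if nominal_g <= 100:   return 4.5
    if nominal_g <= 200:   return nominal_g * 0.045
    if nominal_g <= 300:   return 9.0
    if nominal_g <= 500:   return nominal_g * 0.03
    if nominal_g <= 1000:  return 15.0
    return nominal_g * 0.015                         # 1.5% (not such a lot)

def batch_ok(weights, nominal_g, tolerated_fraction=0.025):
    """The three checks: the average must not fall below the nominal value,
    only a small fraction may fall below nominal - TNE, and none may fall
    below nominal - 2 * TNE (the tolerated_fraction is my assumption)."""
    tne = tolerable_negative_error(nominal_g)
    short = sum(w < nominal_g - tne for w in weights)
    return (sum(weights) / len(weights) >= nominal_g
            and short / len(weights) <= tolerated_fraction
            and all(w >= nominal_g - 2 * tne for w in weights))
```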

Excruciating detail can be found in The Weights and Measures (Packaged Goods) Regulations 2006.  It is interesting to note that the HTML version of the Regulations contains illegible formulae:

I leave it as an exercise for the lawyer to determine whether this would be a valid defence in criminal proceedings.

Eight Mile and the Emergence of Segregation

Eight Mile, epitomized by Eminem in the film 8 Mile, is a street in Detroit that marks the boundary between the majority-white northern suburbs and the majority-black neighborhoods closer to the inner city.

But what causes this segregation in the first place?

Hypothesis 1: The Central Planner

Zoning Map, 1930s, showing HOLC zoning, source: http://www.urbanoasis.org/projects/holc-fha/digital-holc-maps/

In Detroit’s case, as with many cities across the USA, it was, in part, due to the zoning of the city by the federal Home Owners’ Loan Corporation, which graded the city into areas of risk, meaning that banks were indirectly encouraged to develop the outer suburbs while not offering mortgages on inner-city properties.  This led to wealthier, generally white, residents moving to the suburbs.

Indeed, physical barriers, such as the Detroit Wall, also known as the Eight Mile Wall, were built to separate majority-black and majority-white neighborhoods.

Detroit Today

The legacy of these zones lives on today, as seen in the map below from the 2010 US Census.  The dividing line between the green (black) areas and the blue (white) areas is Eight Mile Road.

DotMap http://demographics.virginia.edu/DotMap/ based on 2010 US Census


So, segregation exists, and is caused by a central actor. But is there an alternative explanation?

Alternative Hypothesis: Emergence

In 1971, Thomas Schelling set out to model the phenomenon, not by assuming a central planner, but by modelling the interactions of individuals.

Thomas Schelling’s model was this.  Assume individuals are placed on a grid, similar to being located on a chess board.  Allow individuals who are in a local minority to move.  In the example below, the blue circle is in a minority (with 5 out of its 6 neighbors being a different color) and, according to the rules of the model, is unhappy.  It could decide to move to the vacant square to the north-west, but it would still be in a local minority (with 4 out of 6 neighbors being a different color) and would remain unhappy.  So instead, it chooses the space to the south-west, where 3 out of its 6 neighbors are of the same color; no longer in a minority, it settles there.

Agent Movement © Duncan Robertson after Thomas Schelling (1971)

Schelling, perhaps without knowing it, introduced agent-based modelling, where, instead of modelling the system as a whole, the modelling of individual agents enables us to see the emergence of macro-level properties, in this case segregation, via the modelling of micro-level (local) interactions.

We can see the effect of micro-level interactions causing macro-level segregation in the model below (developed by Duncan Robertson after Wilensky after Schelling). Each individual, or agent, decides whether they are unhappy or happy; if they are unhappy, they search until they find a vacant location where they will become happy.  This continues until all individuals attain happiness.

Three Class Segregation Model © Duncan Robertson after Wilensky after Schelling
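For those who want to play with the mechanism, here is a toy Python version of this update rule (after Schelling and Wilensky, but not a line-for-line reproduction of either; the grid size, the one-third threshold, and the wrap-around edges are my own choices):

```python
import random

# Toy Schelling-style segregation model: unhappy agents search for a
# vacant cell where they would no longer be in a local minority.
random.seed(42)
SIZE, THRESHOLD = 20, 1 / 3
grid = [[random.choice(["red", "blue", None]) for _ in range(SIZE)]
        for _ in range(SIZE)]

def happy_at(colour, x, y):
    """Would an agent of this colour be happy (not in a minority) at (x, y)?"""
    occupied = [grid[(x + dx) % SIZE][(y + dy) % SIZE]
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)
                and grid[(x + dx) % SIZE][(y + dy) % SIZE] is not None]
    return not occupied or sum(n == colour for n in occupied) / len(occupied) >= THRESHOLD

for sweep in range(100):                 # repeat until everyone settles
    movers = [(x, y) for x in range(SIZE) for y in range(SIZE)
              if grid[x][y] is not None and not happy_at(grid[x][y], x, y)]
    if not movers:
        break                            # all individuals have attained happiness
    for x, y in movers:
        colour = grid[x][y]
        options = [(vx, vy) for vx in range(SIZE) for vy in range(SIZE)
                   if grid[vx][vy] is None and happy_at(colour, vx, vy)]
        if options:                      # move to a vacant cell where happy
            vx, vy = random.choice(options)
            grid[vx][vy], grid[x][y] = colour, None
```

Even with a modest threshold (here, agents tolerate being in a two-thirds minority), the grid separates into single-colour clusters: macro-level segregation emerging from micro-level preferences.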

So, perhaps segregation is not imposed, but is down to us.  Or maybe, in reality, it’s a little bit of both.

Please do get in touch if you would like to discuss building or working with agent-based models.


The Unintended Consequences of Unintended Consequences

In 2003, as part of a project to document coastal erosion in California, the following photo was posted on the californiacoastline.org website.  It is a picture of a beach, some cliffs, lots of trees, and a rather nice house complete with swimming pool, that turns out to be the home of one Barbra Streisand.

Copyright © 2002-2015 Kenneth & Gabrielle Adelman, California Coastal Records Project, www.californiacoastline.org

For those not in the know, Barbra Streisand is, in the words of her lawyer, a ‘renowned singer, actress, movie director, composer, and producer’.

Now, it turns out that Barbra Streisand values her privacy.  To be specific, she puts a value of at least $10,000,000 on it.  When the claim was filed, six people had downloaded the image.  However, once Barbra Streisand had issued a claim in the LA courts (in her own name) confirming that she lived in the nice house with the swimming pool, court reporters started twitching their notebooks.  And so it came to pass that, once the lawsuit was publicised, everyone wanted to know what Barbra Streisand’s house looked like.  Nearly half a million people viewed the image in the first month.

The Streisand Effect, as it has been dubbed, is an example of unintended consequences.  By attempting to do one thing (suppress an image), you achieve the exact opposite.

The addendum to the story is that Barbra Streisand had a resurgence in popularity, culminating in the song ‘Barbra Streisand’ by the popular beat combo Duck Sauce, downloaded nearly 100 million times.

Maybe this was, after all, a masterplan to take advantage of the unintended consequences of unintended consequences.

Or maybe not.


Simulating (Human) Behavior Session at IFORS Conference in Quebec

I am organizing a session at IFORS on Simulating (Human) Behavior as part of a Behavioural Operational Research stream. It would be good to see other agent-based models presented.

To submit a paper, please do so here: http://ifors2017.ca/submit-abstract and use invitation code dfa90f58

Review of ‘Agent-Based Modeling and Simulation’, OR Essentials

Book review to be published in The Journal of Artificial Societies and Social Simulation (JASSS)

Taylor, Simon J. E. (Ed.) (2014) Agent-Based Modeling and Simulation, OR Society and Palgrave Macmillan: Basingstoke


Agent-Based Modeling and Simulation is the first in the Operational Research Society’s OR Essentials series.  OR Essentials brings together multidisciplinary research from the management, decision, and computer sciences.  This edition within the series is edited by Simon Taylor, who is co-founder of the Journal of Simulation, also published by the OR Society.

The edited book is divided into 14 chapters, and the bulk of its contents covers the application of agent-based modelling (ABM) to specific problem domains.  The book adds to this by introducing agent-based modelling as a technique and setting it in context with other simulation approaches.  A very helpful chapter by Macal and North of Argonne National Laboratory offers a tutorial on what agent-based modelling is, focusing on the autonomy and interconnectedness of agents, and showing how agent-based models should be built.  The book ends with thoughtful chapters on a testing framework for ABM (Gürcan, Dikenelli, Bernon) and a comparison with discrete-event simulation by Brailsford, elegantly closing the package opened by Taylor’s introduction, which compares ABM with system dynamics and discrete-event simulation within the context of modelling and simulation more generally.

The academic rigour of the book is confirmed by each chapter being a reprint of an article published in the Journal of Simulation.  The book brings together several excellent examples of agent-based modelling, together with a very clear understanding of how ABM fits in with more traditional simulation techniques such as DES (Discrete Event Simulation) and SD (System Dynamics) – both Taylor and Brailsford show how and when ABM should be used.  Macal and North offer a very useful tutorial for understanding the building blocks of an ABM simulation, while Heath and Hill show ABM’s evolution from cellular automata and complexity science through complex adaptive systems.

Domain specific chapters cover applications in the management of hospital-acquired infection (Meng, Davies, Hardy, and Hawkey); product diffusion of a novel biomass fuel (Günther, Stummer, Wakolbinger, Wildpaner); urban evacuation (Chen, Zhan); people management (Siebers, Aickelin, Celia, Clegg); pharmaceutical supply chains (Jetly, Rossetti, Handfield); workflow scheduling (Merdan, Moser, Sunindyo, Biffl, Vrba); credit risk (Jonsson); and historical infantry tactics (Rubio-Campillo, Cela, Cardona).

Agent-Based Modeling and Simulation offers a very useful collection of applications of ABM, and showcases how ABM can be successfully incorporated into mainstream, published research.  The contributions to the book are diverse and come from internationally regarded scholars.  It is also useful to see the diverse ways in which agent-based modelling research is presented: some papers show code, some show running models, and some describe results without showing the model or code.

The glue that binds the book is methodological.  Seeing how ABM has been used in diverse application areas is important, given the trans-disciplinary nature of the approach.  The book is an excellent introduction to agent-based modelling within a wide range of business and operations applications, and should be read by scholars and practitioners alike.

Building the Multiplex: An Agent-Based Model of Formal and Informal Network Relations

This presentation, given at the EURO 2016 conference in Poznań, Poland, and at the GDN conference in Bellingham, WA, USA (joint work with Leroy White of Warwick Business School), shows how combining formal and informal organizational networks enables decisions to flow more freely around organizations, but at a cost, leading to an optimal size of informal organizational networks.  If organizations can control these networks, there are implications for optimal information flows in companies.

What proportion of an airline ticket is made up of the cost of the aeroplane?

Aircraft aren’t cheap.  Neither are airline tickets.  But how much of that airline ticket is made up of the cost of the aeroplane?

If we assume a relatively efficient modern airliner, say a 777, a 30-year lifetime, 3,500 hours per year, and an average speed of 500 mph, that produces a total distance of 52,500,000 miles.  Which is quite a lot.

If you were to knock on Boeing’s door, they could sell you one for $320 million.  Volume discounts are, I am told, available.

So, assuming straight-line depreciation, along with many, many other assumptions, that’s $6.10 per mile.  At 350 or so passengers, that is around

2 cents per mile

Or, for a 3000 mile (transcontinental or transoceanic) flight, a total of sixty dollars.  Which is perhaps more than I expected.
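For transparency, the arithmetic in one place (every input is one of the rough assumptions stated above):

```python
# Back-of-the-envelope cost of the aircraft per ticket; all inputs are the
# rough assumptions from the text (777 price, lifetime, utilisation).
purchase_price = 320e6                        # dollars, one 777
lifetime_miles = 30 * 3500 * 500              # years * hours/year * mph = 52.5m miles
cost_per_mile = purchase_price / lifetime_miles
cost_per_passenger_mile = cost_per_mile / 350
print(f"${cost_per_mile:.2f} per mile")                                   # ~$6.10
print(f"{100 * cost_per_passenger_mile:.1f} cents per passenger-mile")    # ~1.7, call it 2
print(f"${3000 * cost_per_passenger_mile:.0f} for a 3,000-mile flight")   # ~$52, or $60 if you round to 2 cents first
```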

Stacking Shelves the Amazon way

Amazon FC are playing in the Euros (the UEFA football championship).  Or at least that’s what could be inferred from my name badge.

In fact, Amazon FC is one of the Amazon fulfilment centres, this one located in the Polish city of Poznań, host of the 28th European Conference on Operational Research.

The fulfilment centre is huge – with a million separate items stocked, and up to a million items being processed every day – and is just one of a network of existing sites around Europe and around the world, the locations of which are themselves optimized to minimize cost.

One of the many interesting facts from the tour was the way that Amazon store stock on their shelves.  Like any other business, they want to minimize fixed costs, and one way to do this is to maximize the density of items stored on their shelves.  Unlike in my Mini factory visit (which will be the subject of a later post), Amazon does not run a just-in-time stock system.  They are happy(ish) to hold stock, on the basis that their customers can have items quickly and will not have to wait for a backorder.  Amazon was founded on the basis of being able to supply items that only a few people will want – the so-called long tail – itself the subject of a book available on, where else, Amazon.  The upshot is that if you rank items from most to least popular and plot the logarithm of rank against the logarithm of the number sold, you get a nice straight line (a Zipf distribution, for those interested).  So, Amazon will hold on to some items for years on the basis that someone, somewhere, sometime, will want to buy them.
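A quick illustration of that claim, with idealised data rather than Amazon’s:

```python
import numpy as np

# Idealised Zipf sales data: the item at rank r sells in proportion to 1/r.
ranks = np.arange(1, 1001)               # items, most to least popular
sales = 1_000_000 / ranks
# On log-log axes the relationship is a straight line with gradient -1;
# note that both axes need the logarithm, not just one.
slope, intercept = np.polyfit(np.log(ranks), np.log(sales), 1)
print(round(slope, 2))                   # -1.0
```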


So, Amazon needs to store these millions of items.  While most things are controlled and optimized by computer, the decision of where to store items is left to human intuition – albeit guided by optimization algorithms.


Instead of giving a specific destination for each stock item (basically a very large grid reference), they give their employees a general area in which to store items.  The idea behind this is that when you first fill a location with stock, the space-packing density will be high, as items fit next to each other.  But as items are removed and sold, spaces will be created, meaning that you are effectively paying to store air.  So Amazon allows its employees to use these spaces to store more items – even if they are unrelated to each other.  Of course, the system keeps track of stock locations, but by doing things this way, the efficiency of the operation is improved, and less space is required for storage.
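A toy simulation of the pooling effect behind this (the numbers are my own illustrative assumptions, nothing to do with Amazon’s actual operation): with dedicated locations, a product’s deliveries are turned away once its own slots are full, even while gaps sit empty elsewhere; with shared stow, those gaps get used.

```python
import random

random.seed(0)
SKUS, SLOTS_PER_SKU, STEPS = 100, 10, 100_000
CAPACITY = SKUS * SLOTS_PER_SKU          # 1,000 shelf slots in total

def turned_away(shared):
    """Count deliveries rejected under a dedicated or a shared stow policy."""
    stock = [0] * SKUS                   # units on the shelf, per product
    used = rejected = 0
    for _ in range(STEPS):
        sku = random.randrange(SKUS)
        if random.random() < 0.5:        # a delivery of one unit arrives
            full = used >= CAPACITY if shared else stock[sku] >= SLOTS_PER_SKU
            if full:
                rejected += 1
            else:
                stock[sku] += 1
                used += 1
        elif stock[sku]:                 # a sale removes one unit
            stock[sku] -= 1
            used -= 1
    return rejected

print(turned_away(shared=False), turned_away(shared=True))
```

The shared policy only hits capacity when the whole shelf is full, so it turns away far fewer deliveries – the same reason stowing unrelated items into whatever gaps exist stores more stock in less space.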

There is a similar example of adaptive organization used by Southwest airlines in my article The Complexity of the Corporation.

Agent-Based Modeling and Behavioral Operational Research

Behavioral Operational Research: Theory, Methodology and Practice (Martin Kunc, Jonathan Malpass, Leroy White, Eds.) was published by Palgrave Macmillan in September 2016.  My chapter on Agent-Based Modeling and Behavioral Operational Research shows the great potential of using agent-based simulation within BOR, showing how example models can be applied to the field.  More details of the chapter can be found on the Palgrave Macmillan site here (DOI:10.1057/978-1-137-53551-1_7).

Please click here for the chapter in pdf format.

Updated September 2016 with full text.