Algorithmic optimisation and opposition to social distancing measures.

I wonder if some of the people arguing that the lockdowns to reduce the spread of coronavirus should be lifted are being misled by an instinct that’s generally sensible but doesn’t apply here, for interesting reasons.

To a spherical-cow level of approximation, we have two problems – people dying of coronavirus, and people losing their livelihoods because of the countermeasures – the severity of which is determined by a single parameter: how severe the countermeasures are. The more steps you take to stop coronavirus spreading, the less severe the medical consequences will be, but the more severe the economic consequences will become.

The goal, obviously, is to choose a level of severity that minimises the sum of the two problems.

In general, in this sort of problem – minimising the sum of an increasing and a decreasing function of a single parameter – there’s a very useful heuristic, which is “balance the two sides”. That won’t necessarily give you the lowest total cost, but it will always get you within a factor of two of it, because even if making the lockdown just slightly more/less severe than the point where the two costs balance were to completely eliminate one of them, the other could only increase, and so you’re still left with at least half the problem.
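To make that concrete, here’s a small numerical sketch of the heuristic. The two cost curves are invented purely for illustration – nothing about them comes from real data:

```python
import numpy as np

# Hypothetical cost curves, both functions of a single "lockdown severity"
# parameter x in [0, 1]. The formulas are made up purely for illustration.
def economic_cost(x):
    return 10 * x**2          # increasing in severity

def medical_cost(x):
    return 5 * (1 - x)**3     # decreasing in severity

x = np.linspace(0, 1, 10_001)
total = economic_cost(x) + medical_cost(x)

# "Balance the two sides": the x where the two costs are (nearly) equal.
i_balance = np.argmin(np.abs(economic_cost(x) - medical_cost(x)))
# The true optimum: the x with the smallest total cost.
i_optimum = np.argmin(total)

print(f"balance point x = {x[i_balance]:.3f}, total cost = {total[i_balance]:.3f}")
print(f"optimal point x = {x[i_optimum]:.3f}, total cost = {total[i_optimum]:.3f}")
# The balanced total can never be more than twice the optimal total.
assert total[i_balance] <= 2 * total[i_optimum]
```

With these particular made-up curves the balance point and the optimum land almost on top of each other; the factor of two is a worst-case guarantee, not the typical gap.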

Now, obviously, I don’t think that most people are actually thinking in terms of algorithmic optimisation here. But I do think that a lot of people have the instinct “if we’re trying to trade off evils A and B, and B is much worse than A, then that means we should probably accept more A in order to get less B”. And most of the time that instinct is absolutely correct. So why isn’t it here?

Well, first of all, the converse of the rule “balanced solutions are always close to optimal” is not true: optimal solutions are not always close to balanced. In particular, if the derivative of one of the two functions is high, an optimal solution may be radically unbalanced.
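Here’s a toy example of that, again with made-up cost functions – the decreasing cost falls off a cliff, so the optimum sits at a point where the two costs are nowhere near equal:

```python
# Toy example of a radically unbalanced optimum. The cost functions are
# invented for illustration; the decreasing one is deliberately very steep.
def increasing_cost(x):
    return x

def decreasing_cost(x):
    return 100 * (1 - x) ** 20

xs = [i / 1000 for i in range(1001)]
best = min(xs, key=lambda x: increasing_cost(x) + decreasing_cost(x))
print(best, increasing_cost(best), decreasing_cost(best))
# The optimum lands near x = 0.33, where the increasing cost is roughly ten
# times the decreasing one -- nothing like a balanced split.
```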

In this case, as I understand it, in fixed conditions coronavirus will either spread or decline* roughly exponentially. If n people have it today, then about rn people will have it tomorrow, and r²n people the next day, for some number r. This exponential growth or shrinkage will continue until either a) most of the population has been infected and we achieve herd immunity, b) coronavirus dies out, or c) we tighten or relax our social distancing rules and change the value of r.

Let us say that there are going to be 200 days more of this, and then a vaccine will be discovered and coronavirus will magically go away. So, barring changes to social distancing, we’re going to tend towards r²⁰⁰n. If r = 1.035 then that will be about 1000n; if r = 0.966 then that will be about n/1000.
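The arithmetic is easy to check – 200 days and the two example values of r are just the figures assumed above:

```python
# Quick check of the figures above: 200 days of compounding at the two
# illustrative daily growth factors.
days = 200
print(1.035 ** days)  # ~973   -- roughly 1000 times the starting caseload
print(0.966 ** days)  # ~0.001 -- roughly a thousandth of the starting caseload
```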

So if r is even slightly greater than 1 then we’ll grow to herd-immunity levels. How much greater than 1 doesn’t make much difference – if r²⁰⁰n is a million times the population of the country, that doesn’t mean we’ll all get coronavirus a million times, it just means the exponential approximation will break down sooner and herd immunity will be reached faster (but probably with more deaths along the way).
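A crude simulation of that saturation effect – the model (daily growth damped by the shrinking pool of people not yet infected), the population size and the starting caseload are all made up for illustration:

```python
# Toy model: daily new cases grow by a factor r, damped by the fraction of
# the population not yet infected. All numbers here are illustrative.
POPULATION = 60_000_000

def ever_infected(r, n0=1_000, days=200):
    daily, ever = n0, n0
    for _ in range(days):
        # Growth stalls once enough of the population has been infected.
        daily *= max(0.0, r * (1 - ever / POPULATION))
        ever += daily
    return ever

for r in (1.05, 1.2, 1.5):
    share = ever_infected(r) / POPULATION
    print(f"r = {r}: about {share:.0%} of the population infected in 200 days")
# Each r > 1 saturates at a sizeable fraction of the population rather than
# at r**200 * n0; a larger r just gets there faster and overshoots further.
```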

In the other direction, if r is less than 1, the total number of people infected is the sum of a shrinking geometric series – n + rn + r²n + … – which comes to roughly n/(1-r).
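A quick check of that approximation, using the illustrative r = 0.966 from above and an arbitrary starting caseload:

```python
# Total infections under steady exponential decline: the daily counts form a
# geometric series n, r*n, r**2*n, ..., whose 200-day sum is close to n/(1 - r).
n, r, days = 1000, 0.966, 200
print(sum(n * r**t for t in range(days)))  # ~29,383
print(n / (1 - r))                         # ~29,412 -- closed-form approximation
```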

So (under spherical-cow conditions, which in particular don’t include people from other regimes entering the country), the number of deaths and illnesses as a function of the severity of lockdown is roughly hyperbolic in r on one side of the critical value, and roughly constant on the other side of it.

The optimal sum of costs will be somewhere very close to that critical value, but slightly on the more-severely-locked-down side of it, in the region where the gradient is very steep; the extra cost of deviating from it in the direction of too tight a lockdown is non-trivial (we suffer even more economic damage than necessary), but the cost of deviating in the direction of too loose a lockdown (coronavirus has 200 days of spreading slowly-but-exponentially instead of 200 days of declining slowly-but-exponentially) is massive.

The balance point is somewhere very close to both the critical r=1 point and the optimal point, with very similar measures, a tiny bit less economic pain, and many more deaths.

This is probably a bit clearer with a graph (although note that I drew this in a hurry in MS Paint, and there are a number of features missing – most importantly, the medical cost should actually go up a bit on the left rather than staying flat, because if too many people get ill at once we can’t care for all of them and more will die than if the same number got ill over a longer period of time; and the economic and total costs should probably tend to infinity on the right):

[Figure: “Coronavirus impact” – hand-drawn sketch of the medical, economic and total cost curves against severity of lockdown]
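For anyone who prefers code to MS Paint, here’s a rough matplotlib sketch of the same qualitative picture. Every constant and curve shape in it is stylised to match the description above rather than fitted to anything; in particular, the mapping from severity to r is invented.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stylised curves only: the shapes follow the description in the text,
# none of the numbers are real. Severity s runs from 0 (no measures) to 1
# (maximal lockdown), and r is assumed to fall linearly with s, crossing 1
# at s = 0.5.
s = np.linspace(0.0, 1.0, 500)
r = 1.07 - 0.14 * s

# Medical cost: capped (herd immunity, everyone gets it) while r >= 1,
# roughly hyperbolic in (1 - r) once r drops below 1.
medical = np.minimum(10.0, 0.005 / np.maximum(1.0 - r, 1e-9))
# Economic cost: grows with the severity of the lockdown.
economic = 8.0 * s**2
total = medical + economic

plt.plot(s, medical, label="medical cost")
plt.plot(s, economic, label="economic cost")
plt.plot(s, total, label="total cost")
plt.axvline(0.5, linestyle="--", color="grey", label="critical point (r = 1)")
plt.xlabel("severity of lockdown")
plt.ylabel("cost (arbitrary units)")
plt.legend()
plt.show()
```

The minimum of the total cost sits just to the right of the dashed line, with a cliff on its left and a gentle slope on its right – which is the whole argument in one picture.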

Of course, cows aren’t actually spherical. There are many different forms of social distancing that can be turned on or off, not just a single parameter; people from abroad will add extra infections that will become relevant if coronavirus becomes rare here; and, critically, we can intersperse periods of tighter and less tight control, letting the number of infected people grow and then forcing it down again.

But hopefully this gives some insight into why the instinct that the lockdown is too severe is a natural one, and correct in a lot of situations, but not in this one. At the moment, I think it’s pretty clear that the cure is worse than the malady, but not as bad as the malady would get if untreated. And a significantly less effective treatment would accomplish very little.
