I think that we should make a bigger deal of the ability to predict the future in politics.
Feedback is important to learning
One of the most valuable things when learning is regular personal feedback – answering questions, and getting some of them marked as right and others marked as wrong and being told the right answers.
This is why teachers spend so much time marking work – it’s not just to monitor the pupils’ progress, it’s because getting that marked work back is actively valuable. If you can explain “here is where you went wrong, and why”, that’s even better, but even just feedback on which bits you’ve understood correctly and which you haven’t is really valuable.
It’s also one reason (of several) why one-to-few supervisions in universities are so valuable.
More controversially, I will also claim in passing (although it’s not central to my argument, and possibly deserves its own blog post) that this is one of the reasons why it’s more reasonable for academics in those – mostly STEM – areas where being shown to be definitively wrong is a realistic possibility to demand deference within their area of expertise than it is for those working in fields or subfields that don’t provide that hard check on their workings to do so. But I digress…
Politics is partly about learning
Politics is partly about value judgements, but a lot of it is also about understanding the consequences of actions. As I argued in Two riders to Hume’s law, a lot of things that appear to be moral issues actually boil down to disagreements about fact. One of the most important issues a government has to decide on is “will raising/cutting taxes make the economy grow/shrink enough to justify having to cut/raise spending or borrowing?” And that decision is going to be informed by the moral judgement of how much relative value you place on unemployment and wages compared to education and healthcare, but its also going to be massively informed by how much you think your tax cut/rise will make the economy grow/shrink, which is a purely factual question. Ditto for policies in a host of other areas.
Lots of different politicians and academics and pundits authoritatively put forward theories purporting to answer this sort of question, and advocate policies based on them. But not merely is it really hard for a lay person to know which is right, it’s also really hard for an expert to know which is right; we know this for a fact because lots of experts disagree, and they can’t all be right.
So clearly, a lot of people have learned politics wrong. And they don’t know it, because they’re not getting their answers marked.
Who to trust?
Another problem is that, like any hard subject, most people aren’t going to be able to form informed first-hand opinions on economic and political issues. I no more trust the average plumber, surgeon or mathematician-cum-blogger to understand fiscal policy than I trust the average politician to fix my boiler, remove my appendix or do my job.
In general, the correct response in this situation is to listen to specialist experts. But in politics, there are lots of people presenting themselves as specialist experts, and making mutually contradictory claims.
So, for our benefit as well as for theirs, it would be really useful if there were some way of distinguishing people whose understanding of the objective, factual, if-we-do-this-then-that-will-happen bits of politics is good.
That still won’t help with the genuine value judgement bits – a lot of social issues like abortion and gay marriage genuinely are “ought” rather than “is”, and expert specialists are barely better equipped to deal with them than plumbers, surgeons and me. But it would still be a useful start.
How to score?
But this raises the obvious question: who will bell the cat? If there is genuine disagreement about questions of fact, who is to hand out the marks?
Essentially, questions of fact can be divided into two categories:
- Undisputed questions. These are useless for distinguishing between people, because everyone will give the same answer.
- Disputed questions. Marking people on these will tell you about whether or not they agree with the person awarding marks, but that just pushes the problem back a step without resolving it.
But predictions about the future get around this unfortunate dichotomy: they give you questions which are disputed at the time they are answered, but can become undisputed by the time they are marked.
This makes them the perfect tool for identifying not merely who agrees with whom, but who actually understands what is going on.
A modest proposal
I think that every, say, six months, the Office for National Statistics should produce a list of, say, 20 questions about political issues in the short to medium-term future. Each should be phrased in such a way that the answer will be completely unambiguous – “what will the Circumlocution Office’s October report on Widget Production give as the number of widgets produced this year?” rather than “how many widgets will be produced this year?” – and each should have a date on which submissions of answers will close.
Some of these things could be quantitative, like widget production. Others could be assigning probabilities to events – “how likely is President Imadehimup to win reelection in Examplia?”, and these could be scored by Bayes factor, or some function thereof.
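To make the probability-scoring idea concrete, here is a minimal sketch (with made-up forecasters and numbers, not part of any real scheme) of the logarithmic score, under which the difference between two forecasters’ totals is the log of the Bayes factor between them:

```python
import math

def log_score(p: float, happened: bool) -> float:
    """Log of the probability the forecaster assigned to what
    actually happened. Closer to zero is better."""
    return math.log(p if happened else 1 - p)

# Two hypothetical forecasters answering the same three questions:
# probabilities assigned to each event occurring.
alice = [0.9, 0.8, 0.3]
bob = [0.6, 0.5, 0.5]
outcomes = [True, True, False]

alice_total = sum(log_score(p, o) for p, o in zip(alice, outcomes))
bob_total = sum(log_score(p, o) for p, o in zip(bob, outcomes))

# The gap in total log score is the log Bayes factor between the
# two forecasters' implied models of the world.
log_bayes_factor = alice_total - bob_total
print(f"Alice: {alice_total:.3f}, Bob: {bob_total:.3f}")
```

A nice property of this rule is that it is “proper”: a forecaster maximises their expected score by reporting the probability they actually believe, so there is no incentive to shade answers towards false confidence or false modesty.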
These should be posted on the internet, and anyone should be permitted to register answers, and change them up until the closing date. After the closing date, all answers should be publicised.
As the answers become known, each question should be marked, and there should be a website on which it is easy to track every user’s track record.
Any method you use to arrive at your answers is absolutely fair game – I imagine that if this took off, lots of people would publish their answers and reasoning, and “I cribbed from Bob, because I trust him” is absolutely a legitimate way of answering – creating more dependence on the opinions of people who demonstrate themselves to know what they are doing is a feature, not a bug.
Submitting answers to these questions should be a legal requirement for MPs and anyone standing for parliament, and anyone trying to make a career as a political commentator without registering their predictions should be roundly mocked and not taken seriously.
I imagine this might expose some interesting inconsistencies. When the government comes out with a new Crime Reduction Act, and all the government MPs declare that it will be a wonderful success and crime will fall and all the opposition MPs declare that it will be a dreadful failure and crime will rise, if one side is just engaging in dishonest rhetoric then either the predictions on crime rate they register will give them away, or they will pay a price in prediction score.
And while I certainly wouldn’t promise to vote for the people who demonstrate themselves to best understand the factual parts of politics, because of the above-mentioned value-judgement parts, my confidence in the rightness of my side and my openness to opposing arguments would certainly be bolstered or undermined significantly if it were shown that people who shared my views were significantly better or worse than our opponents at predicting the future.
Digression: the least worst form of epistocracy
Um… I mean, “let me digress, and talk about what I think the least worst form of epistocracy would be”. Government by digression is probably not a good approach!
“Epistocracy”, from the Greek for “government by the knowledgeable”, is the word for giving people who are found by some measure to better understand politics more votes.
One of the other things you might do with this proposal, as well as simply publicising it and letting people look at prediction scores and make up their own minds about who to trust, is use it to implement something along those lines – anyone who registers predictions and gets a good track record gets extra votes.
I think it’s a terrible idea, because I believe that a), on average, better off people will understand politics better than worse off people, b) giving better off people more votes than worse off people will lead to politicians adopting policies that advance the interests of better off people at the expense of those of worse off people, and c) politicians should generally adopt policies that advance the interests of worse off people at the expense of better off people.
But one of the other common objections to epistocracy is “who gets to decide who counts as more knowledgeable?”, and I think that this proposal answers that very well. It is less prone to “knowledgeable = agrees with me” effects than anything else; the fact that it’s open-book and you can just crib someone else’s answers means that it is less biased in favour of those with good trivia quiz or exam skills than most other similar things; and the kind of knowledge it measures seems like the kind I’d most want to see overrepresented among voters.
Of course, this is a useless observation – a good solution to one of two fatal problems with an idea is only valuable if you can solve the other one, too, and as I’ve said above I think that the tendency to lead to government serving the rich over the poor is a fatal problem with epistocracy. But it’s still neat…
And, obviously, none of this is in any way an objection to the use I’m actually proposing for these registered predictions – making them publicly available, and using them to inform democracy rather than dictate it.
People are already doing (something like) this
One other thing I should mention is that some people actually do this.
Scott Alexander of Slate Star Codex, the blogger I most admire and try to emulate, registers a list of predictions for the coming year every January (his most recent ones are here), and scores the ones he made last January. He’s not doing it quite the way I’m proposing – he doesn’t have anyone else to compete against, so he’s picking some possible events, assigning probabilities to them, and then checking that about 50% of the things he assigned 50% to, 80% of the things he assigned 80% to, and so on, happened.
For comparing multiple people registering predictions about the same events, I think there are approaches that enable better comparisons, but I still think this is a really good idea, and his calibration seems to be pretty good.
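The calibration check described above can be sketched in a few lines. The predictions here are invented for illustration; the point is just the bucketing logic – group predictions by the probability assigned, then compare each bucket’s assigned probability with the observed frequency:

```python
from collections import defaultdict

# Hypothetical year of predictions: (assigned probability, did it happen?)
predictions = [
    (0.5, True), (0.5, False), (0.5, True), (0.5, False),
    (0.8, True), (0.8, True), (0.8, True), (0.8, False), (0.8, True),
    (0.95, True), (0.95, True),
]

# Group outcomes by the probability that was assigned to them.
buckets = defaultdict(list)
for p, happened in predictions:
    buckets[p].append(happened)

# A well-calibrated forecaster's observed frequency in each bucket
# should be close to the probability they assigned.
calibration = {
    p: sum(outcomes) / len(outcomes) for p, outcomes in buckets.items()
}
for p in sorted(calibration):
    print(f"assigned {p:.0%}: {calibration[p]:.0%} happened")
```

One caveat with this kind of self-check is that it only measures calibration, not discrimination: someone who assigns 50% to everything and is right half the time scores perfectly on it, which is why head-to-head scoring rules are better for comparing forecasters.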
I, by contrast, have a pretty poor record on predictions (see, for example, No, it’s not a coup, but it’s still regrettable., where I naively assumed that the Supreme Court probably wouldn’t declare the recent prorogation unlawful, even though it didn’t actually break any laws). So arguably, the lesson is that you shouldn’t trust me, and should ignore this whole post. Oh well.