Refugees in the Central African Republic. Oil prices. The role of polonium in the death of Yasser Arafat. Elections in Guinea. The extent of Arctic sea ice. The regime of Bashar Al-Assad.
What do all these issues have in common? Philip Tetlock, of course.
No, Tetlock is not some kind of deep state operative. Rather, he’s an academic who is best known for creating the Good Judgment Project. This was a tournament in which ordinary people competed with each other (and with the intelligence community) to forecast the future, assigning probabilities to events occurring. The questions covered tricky geopolitical topics like the ones mentioned above, e.g. ‘Will the Assad regime fall in the next three months?’, or ‘On 15 September 2014, will the Arctic sea ice extent be less than that of 15 September 2013?’
Tetlock was the man for the job, having previously run a long-term study showing that the prediction accuracy of pundits and political ‘experts’ was extremely poor.
He and his team identified the top 2 percent among the members of the public in this tournament. They were often highly educated and intelligent, but Tetlock points out that they were not total geniuses. Instead, they were flexible thinkers, always interested in learning and improving their prediction scores. This top 2 percent were dubbed ‘superforecasters’, and they earned the additional accolade of beating the intelligence community in each year the tournament was run.
So how did the superforecasters do it? That’s what Tetlock’s book, Superforecasting, tries to tell us.
Superforecasting isn’t a paint-by-numbers method but superforecasters often tackle questions in a roughly similar way—one that any of us can follow: ‘Unpack the question into components. Distinguish as sharply as you can between the known and unknown and leave no assumptions unscrutinized. Adopt the outside view and put the problem into a comparative perspective that downplays its uniqueness and treats it as a special case of a wider class of phenomena. Then adopt the inside view that plays up the uniqueness of the problem. Also explore the similarities and differences between your views and those of others—and pay special attention to prediction markets and other methods of extracting wisdom from crowds. Synthesize all these different views into a single vision as acute as that of a dragonfly. Finally, express your judgment as precisely as you can, using a finely grained scale of probability.’
Leaning on Isaiah Berlin, Tetlock compares superforecasters to foxes, who know ‘many things’, and conventional experts to hedgehogs, who know ‘one big thing’. For conventional experts/pundits, every issue can be boiled down to the excesses of the left, American imperialism, Chinese communism, or some other caricatured worldview. The superforecasters, by contrast, employ the dragonfly-like perspective described above, synthesising information and perspectives to come up with a more accurate prediction.
What can we learn from the superforecasters?
Superforecasting requires a mixture of extreme commitments and balance. You must have an extreme commitment to accuracy, and not allow ideology or social pressures to sway your predictions (a trap that pundits often fall into). But you must balance competing ideas and axioms in order to do that well.
For example, you must balance the ‘inside’ and ‘outside’ view. Let’s say you are given the names, ages, occupations, and location of an American family, and then asked to predict whether they have a pet. It’s tempting to immediately construct a story for yourself about this family to predict their pet ownership, but it would be better to take the outside view first and google the stats on American pet ownership (62 percent of families own a pet apparently). Once you’ve done that you should take the ‘inside view’, and adjust your prediction based on the information about this particular family. Comparative perspective, then specifics.
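The outside-then-inside procedure can be sketched in a few lines of Python. This is a toy model, not anything from the book: the 62 percent base rate is the only number taken from the text, and the ‘inside view’ adjustments are invented log-odds shifts.

```python
import math

def forecast(base_rate, adjustments=()):
    """Start from the population base rate (the outside view), then
    apply inside-view adjustments as shifts in log-odds, which keeps
    the final probability inside (0, 1)."""
    log_odds = math.log(base_rate / (1 - base_rate))
    for shift in adjustments:
        log_odds += shift
    return 1 / (1 + math.exp(-log_odds))

# Outside view only: just the base rate.
print(round(forecast(0.62), 2))  # 0.62
# Inside view: hypothetical nudges for, say, a rural family (+0.4 log-odds)
# and young children in the house (+0.3 log-odds).
print(round(forecast(0.62, adjustments=(0.4, 0.3)), 2))  # 0.77
```

Working in log-odds rather than adding to the probability directly means a large adjustment can never push the forecast past certainty.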
You likewise must chart the path between overreaction and underreaction. Tetlock’s tournament let you update your forecasts as often as you liked before a question closed, which made incorporating new information important. But of course, you could adjust too much or too little in response to it. Extreme precision paid off:
‘[Ordinary forecasters] tended to stick to the tens, meaning they might say something was 30% likely, or 40%, but not 35%, much less 37%. Superforecasters were much more granular. Fully one-third of their forecasts used the single percentage point scale, meaning they would think carefully and decide that the chance of something happening was, say, 3% rather than 4%. Like the Treasury aide taught to think in fine-grained probabilities by his boss, Robert Rubin, superforecasters try to be so precise that they sometimes debate differences that most of us see as inconsequential—whether the correct probability is 5% or 1%, or whether it is a fraction of 1% close enough to zero to justify rounding down.’
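Granularity isn’t mere fussiness: if your probabilities are genuinely well calibrated, rounding them to the nearest ten costs you accuracy. Here is a toy illustration using the Brier score, the measure Tetlock’s tournament used to grade forecasts (the trial numbers are invented, not from the book):

```python
# Brier score: mean squared error between forecast and outcome.
# Lower is better; always saying 50% scores 0.25.
def brier(p, outcomes):
    return sum((p - o) ** 2 for o in outcomes) / len(outcomes)

# 100 hypothetical trials of an event that truly happens 3% of the time.
outcomes = [1] * 3 + [0] * 97

print(round(brier(0.03, outcomes), 4))  # fine-grained 3% forecast: 0.0291
print(round(brier(0.10, outcomes), 4))  # same forecast rounded to 10%: 0.034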
Returning to the extreme qualities: hard work and dedication are crucial attributes of superforecasters, because it’s only through these that you’ll improve, or do well enough to maintain your status (only the top 2 percent of forecasters were ‘supers’, and they were re-ranked each year, so you could easily drop out). This could include regularly updating your predictions, or being willing to examine new or different evidence. Couple that with a growth mindset and improvement is much more likely.
The other means of improving is practice. You need to ‘keep score’ and see whether you were accurate or not — an 80/20 forecast that goes the way of the 20 percent isn’t necessarily wrong, but over time you’d hope that the 80 percent option happens 80 percent of the time! Keeping track like this helps you to overcome time lags — it can take a while to see whether you were right. Dietrich Dörner noted this in a similar context in his book The Logic of Failure.
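That ‘the 80 percent option should happen 80 percent of the time’ check is easy to automate. A minimal calibration-log sketch, with an invented forecasting history for illustration:

```python
# Group past forecasts into probability buckets and compare each bucket's
# average stated probability with how often the event actually happened.
def calibration(records, width=0.1):
    buckets = {}
    for p, happened in records:
        # The small epsilon guards against floating-point edge cases
        # like 0.7 / 0.1 evaluating to 6.999...
        key = min(int(p / width + 1e-9), int(round(1 / width)) - 1)
        buckets.setdefault(key, []).append((p, happened))
    return {
        key: (round(sum(p for p, _ in grp) / len(grp), 2),   # mean forecast
              round(sum(h for _, h in grp) / len(grp), 2))   # observed rate
        for key, grp in sorted(buckets.items())
    }

# Invented history: (stated probability, 1 if the event happened).
history = [(0.8, 1), (0.8, 1), (0.8, 0), (0.8, 1), (0.8, 1),
           (0.2, 0), (0.2, 0), (0.2, 1), (0.2, 0), (0.2, 0)]

for bucket, (mean_p, rate) in calibration(history).items():
    print(f"forecasts ~{bucket * 10}%: said {mean_p:.0%}, happened {rate:.0%}")
```

A well-calibrated log shows the two columns tracking each other; a persistent gap in one bucket tells you exactly where you are over- or under-confident.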
Precision in language was also important. The questions themselves had specific quantitative elements, such as a date by which the real answer would be clear (the oil price on a given future day, say), so you could really see whether you were right. And you mustn’t fall into the trap of substituting a question you find easy for the hard one actually asked, e.g. ‘Do I like Israel?’ instead of the real question, ‘Will tests reveal an abnormal amount of polonium in Yasser Arafat’s body?’
You also need to overcome a faulty and biased memory if you are to really understand your past thinking. Tetlock describes one superforecaster who wrote increasingly detailed explanations of his positions when he made a forecast, so that he could do a proper postmortem later. Again, Dörner’s book suggested adopting similar tactics. And Byrne Hobart recently noted the same thing in the realm of finance and investing:
‘Financial markets have another benefit for students of bubbles and irrationality: They encourage people to write down what they’re thinking and why. An investment memorandum is a valuable document, especially when it turns out to be wrong; since it’s written to justify a particular investment at some specific price, it includes plenty of assumptions about exactly what kind of future will make the investment a profitable one. So a good investment memo is a genre of science fiction that’s meant to come true — not out of the question, since robots, online forums, cryptocurrency, nuclear weapons, space travel, and many other technologies were discussed in sci-fi well before they were actually developed.’
Computers have made measuring and testing much easier than they used to be, and there is a wealth of information at the fingertips of anyone with a smartphone and an internet connection. Like Dörner, Tetlock predicts that human-computer symbiosis, in the manner of freestyle chess, can sometimes beat humans and computers working individually. These trends undermine any organisation with a hierarchy based purely on experience or traditional notions of competence.
Indeed, the book had some lessons for planning. There is a challenge for any leader who wants to integrate superforecasting techniques into their thinking:
‘How can leaders be confident, and inspire confidence, if they see nothing as certain? How can they be decisive and avoid “analysis paralysis” if their thinking is so slow, complex, and self-critical? How can they act with relentless determination if they readily adjust their thinking in light of new information or even conclude they were wrong? And underlying superforecasting is a spirit of humility—a sense that the complexity of reality is staggering, our ability to comprehend limited, and mistakes inevitable.’
Tetlock answers himself by drawing on German Field Marshal Helmuth von Moltke, the source of the maxim we now render as ‘no plan survives contact with the enemy.’ Moltke put the underlying idea this way: ‘It is impossible to lay down binding rules because two cases never will be exactly the same.’ Factoring in uncertainty is good leadership. But how?
When you come into contact with the enemy, or the real world, do your best to achieve your goals regardless of what happens to the plan. This is also something that can be delegated to those under your command: it’s goals we care about, not blindly following specific orders. If specific orders get in the way, you can disobey them.
Balancing this is a healthy bias towards action, without which von Moltke’s men would have been a rabble. Decisions are often taken in the heat of battle, and could be abrupt and simple. Even if that sacrifices some accuracy, it allows action to be taken: ‘An imperfect decision made in time was better than one made too late’, writes Tetlock.
All of this was tied together into a concept known as Auftragstaktik, or mission-type tactics. This allows flexibility in the field:
‘…the captain can devise a plan for capturing the town that takes into account the circumstances he encounters, not those headquarters expects him to encounter. And he can improvise. If he comes across a bridge on another road that HQ thought had been destroyed but wasn’t, he will realize it could be used to move enemy reinforcements. So he should destroy it. No need to ask HQ. Act now.’
In more complex situations, ‘principle stacks’ are ideal: identify your priorities or goals, then rank them in order of importance. That way you and your subordinates have a framework for handling tricky decisions, without imposing straitjackets on anyone. Not that the stack will solve everything for you:
‘The art of leadership consists of the timely recognition of circumstances and of the moment when a new decision is required.’
That’s a great line, so it’s rather unfortunate that it’s from the Wehrmacht’s manual. But Tetlock deliberately examined the Wehrmacht to show that we can learn from even the most abhorrent of opponents.
Peter Thiel, by the way, believes startups should be run more like dictatorships than democracies. I don’t really know how to reconcile this with Auftragstaktik; perhaps once you’ve transitioned from ‘founding moment’ to bureaucracy, you have to find some way to make bureaucracy function as well as it can, even if that’s worse than a good dictatorship might manage.
Tetlock and his team also put together a short tutorial (summarised in the book’s appendix) which taught readers the principles of better forecasting. When they gave it to tournament participants, they found it improved performance by 10 percent. Like all experiments of this kind, I’d like to see it replicated before I draw any firm conclusions, but the principles seem pretty smart.
Tetlock closes out by trying to reconcile his work with that of Nassim Taleb, who is all about the ‘black swan’ events that cannot be forecast. In a near-poetic turn, the book’s appendix includes a perfect example of the limits of forecasting:
‘Of course, triage judgment calls get harder as we come closer to home. How much justifiable confidence can we place in March 2015 on who will win the 2016 [American presidential] election? The short answer is not a lot but still a lot more than we can for the election in 2028. We can at least narrow the 2016 field to a small set of plausible contenders, which is a lot better than the vast set of unknown (Eisenhower-ish) possibilities lurking in 2028.’
If you want me to write more things like this, you should subscribe to my weekly email, which includes links to what I write and a few extra interesting things.
Comments welcome below.