Illustration by Zeyd Anwar

If you’ve ever tried to think about the future, or read an article and contemplated the implications of its analysis, then forecasting concerns you. Superforecasting is about how to think about the future with accuracy, rather than with a wild-west guesstimate approach. Most importantly, the authors, Tetlock and Gardner, emphasise the value of counter-intuitive insight: predictions and analysis that come from the most unlikely of sources, such as ordinary people of slightly above-average intelligence who are avid readers of good-quality news, rather than heralded “experts”. Rather than accepting experts’ words as the truth, they argue we would all be collectively better informed if we adopted a critical, ever-revising, unsentimental and data-driven outlook on all views.

One reason inaccurate forecasters have become so mainstream is the public perception of those who deal in marginal probability versus those who deal in audacious certainty. Unfortunately, as Tetlock notes, forecasters are not “punished” for sensationalist forecasts, nor do they face any consequences for them. A lack of certainty is considered less worthy of respect, which is a core problem with the analysis and the speakers that receive the most media attention. Indeed, the authors dot the chapters with examples of experts getting it wrong. In medicine, foreign policy and economic policy, advisers and academics can attribute their mistaken forecasts to several failings: imprecise expression of their degree of doubt or certainty; insufficient updating and revision of forecasts as new information arrives; under- and over-adjustment caused by incorrect weighting of new pieces of information; and, most damningly, sentimental attachment to ideology.

To guard against these failings, they prescribe Bayes’ theorem: one’s prior belief (based on all information to date), multiplied by the diagnostic value of the new information, yields the new belief.
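That prescription can be made concrete in the odds form of Bayes’ theorem, where the “diagnostic value” of new information is its likelihood ratio. The sketch below is illustrative only; the numbers are invented, not taken from the book.

```python
# A minimal sketch of the update rule, in odds form: posterior odds equal
# prior odds multiplied by the likelihood ratio of the new evidence.
# All numbers below are invented for illustration.

def update_belief(prior_prob, p_evidence_if_true, p_evidence_if_false):
    """Return the updated probability after seeing one piece of evidence."""
    prior_odds = prior_prob / (1 - prior_prob)
    likelihood_ratio = p_evidence_if_true / p_evidence_if_false
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Start at 30% belief; the new information is twice as likely to appear
# if the claim is true than if it is false.
print(round(update_belief(0.30, 0.8, 0.4), 3))  # 0.462
```

Note that evidence equally likely under both hypotheses (a likelihood ratio of one) leaves the belief unchanged, which matches the authors’ point about weighting information by how diagnostic it actually is.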

Importantly, there are consequences for poor forecasting. The USA’s intelligence assessment of Iraq’s WMDs (or lack thereof), and the subsequent invasion, turned on experts’ interpretation of probability and risk: granular differences in the degrees of a forecast can have real life-and-death consequences. Notably, Tetlock and Gardner do not contest the conclusion drawn from the evidence on Saddam Hussein. Rather, they argue that accurate forecasting would have produced a lower level of certainty, and that could have made all the difference when seeking congressional authorisation for the use of force. The most important take-away: there is no such thing as certainty.

In distinguishing the roles of luck and skill, the authors point to ‘regression to the mean’ as the best tool. In activities dominated by chance, performance regresses to the mean faster than in activities dominated by skill, where there may be little regression at all.
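A small simulation makes the authors’ point visible. This is my own illustrative sketch, not an experiment from the book: each “forecaster” has a fixed skill, an observed score mixes skill with luck, and we watch how the top decile of one round performs in the next.

```python
import random

random.seed(0)

# Each "forecaster" has a fixed skill; an observed score mixes skill with luck.
# When luck dominates, the top scorers of round one fall back toward the group
# average in round two: the regression to the mean the authors describe.

def simulate(luck_weight, n=10_000):
    skills = [random.gauss(0, 1) for _ in range(n)]
    round1 = [s + luck_weight * random.gauss(0, 1) for s in skills]
    round2 = [s + luck_weight * random.gauss(0, 1) for s in skills]
    # Average round-two score of the top 10% performers from round one.
    top = sorted(range(n), key=lambda i: round1[i], reverse=True)[: n // 10]
    return sum(round2[i] for i in top) / len(top)

print(simulate(luck_weight=0.2))  # skill-dominated: top scorers barely regress
print(simulate(luck_weight=3.0))  # luck-dominated: scores collapse to the mean
```

When luck carries little weight, the round-one leaders stay near the top; when luck dominates, their round-two average falls most of the way back to zero, the group mean.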

Tetlock and Gardner also valuably reference and engage with other academics throughout their work, in order to build on their arguments. For those who have read Kahneman’s Thinking, Fast and Slow, Tetlock and Gardner build on the importance of the ‘System 2’ cognitive function and how to fine-tune it for increased clarity and accuracy. Likewise, the authors point to the Italian-American physicist Enrico Fermi, who advocated breaking a big question down into the smaller questions it presupposes. By forecasting and answering each of the smaller questions, an answer to the broader one can be synthesised — accurately! Crucially, all forecasts must start with the default position: the outside view, which considers averages and quantitative base rates, before moving to the inside view of case-specific facts. To obtain the ‘wisdom of the crowd’, all contrasting and corroborating views must be aggregated and synthesised into one, so the stronger the team is, the stronger the forecast will be.
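The aggregation step can be sketched in a few lines. The plain mean of the team’s probabilities is the baseline; the “extremizing” transform below, which pushes the pooled odds away from 50/50, follows the Good Judgment research Tetlock draws on, but the exponent used here is an illustrative assumption rather than a value given in the book.

```python
# A sketch of pooling independent probability forecasts into one.
# The team's numbers and the extremizing exponent are invented for
# illustration; only the general technique comes from the literature.

def aggregate(probs, extremize=1.0):
    mean = sum(probs) / len(probs)
    odds = (mean / (1 - mean)) ** extremize
    return odds / (1 + odds)

team = [0.6, 0.7, 0.65, 0.8, 0.55]
print(round(aggregate(team), 3))       # the plain average of the team
print(round(aggregate(team, 2.5), 3))  # extremized: pushed away from 0.5
```

The intuition for extremizing is that each forecaster only sees part of the evidence, so a confident-leaning average from many independent people warrants more confidence than the raw mean suggests.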

On belief, Tetlock and Gardner are less nuanced. They somewhat dichotomise beliefs and ideology against rational, objective clarity and accurate foresight. Rather than finding “meaning” in events or circumstances, they reject determinism and embrace the probabilistic mindset: several outcomes were possible, and the event that occurred is just one of many that could have. To echo Kurt Vonnegut, it is precisely the attitude of “Why me? Why not me?” Ultimately, they do convincingly illustrate how beliefs can hinder self-critical thought: when beliefs are emotionally invested in and treasured, they are protected rather than rationally examined.

While dealing with critical challenges to their work, Tetlock and Gardner ultimately concede that superforecasting is limited to roughly five to ten years of foresight and cannot predict highly improbable, consequential events. Arguably, there is a debate over how grey these “black swans” really are, and whether improbable events always have some form of precedent, which works in Tetlock and Gardner’s favour. On one level, one cannot help but finish Superforecasting with a newly acquired mass of insight, history lessons and a structured framework for viewing the world.

But on another level, one also finishes the book wondering about the elephant in the room: the automation of superforecasting. If, according to Tetlock and Gardner, most of the equation relies on data, it seems futile to dedicate a book to superforecasting without considering the potential of algorithmic superforecasters. Granted, they mention prediction markets and touch on the power harnessed by Big Data, but they still shy away from discussing how important human thinking really is to the art itself, and whether it can survive. Against the backdrop of machine and deep learning, AI forecasting (already used in judicial contexts to predict the outcomes of court trials) is very likely the future of security, politics and economics.

Despite this unfortunate omission, Superforecasting stands as an important blueprint for analysing, predicting and preparing for our near future.