I left Google in May 2008 to work (briefly, it turned out) in the woodshop at Anderson Ranch, in the mountains at the top of the Roaring Fork valley in Colorado. Shop life was totalizing—most days, we were up at 6.30am and fell into bed after 1am. Internet access in the mountains was patchy and unreliable. I didn’t have a chance to pay close attention as first the US, then the global, financial system went to the edge of collapse. But as the summer progressed, the scale of the unfolding and the few Google shares I had left meant that I wanted to keep up with what the markets were doing. So I took my phone out with me whenever I knew I would be close to a strong enough cellular data signal.
This was usually after we closed the shop. On my way home around midnight after locking up, I’d lean the bicycle against a big oak next to the path and walk up to the flat rock on the little hill between the ranch and the apartment where, on a clear night, the Internet would trickle across the mountains from the neighboring town of Aspen.
At the time, it was hard to interpret what was happening. My (and nearly everyone else’s) existing sensemaking systems were essentially useless. The scale of the businesses that were crumbling—multi-billion-dollar banks that the previous year had been in great shape—was nearly impossible to comprehend. So was the speed and unpredictability with which confidence in the financial system disappeared. Through all this, mostly on a little cellphone screen, I read about the exotic financial instruments that had precipitated this meltdown—and about how even the people who had invented them were only now beginning to understand how they actually worked.
2008’s events can be interpreted as an illustration of a fundamental problem with how we think about action and causation. We routinely mistake true uncertainty for risk. To understand why these are so often mixed up requires starting from a situation of certainty—of complete known-ness. I have a book coming out soon from which I’ve borrowed the examples that follow.
World 1: Suspend disbelief and imagine that you are a US-based manufacturer of a carbonated drink. You are sure that demand for your product will grow steadily at 4% a year. You own your manufacturing and distribution facilities, have long-term price contracts for all the materials you use to make your products, and have no desire to introduce new products. Current regulations also mean that you can be sure no competitors will enter the market. In this world, your business decisions—such as how much and when to invest in equipment, when to hire production staff and how many of them to hire—are straightforward and easy to analyze and plan for.
In World 1, you are completely certain about the future world-state. Your expectations (4% annual growth, availability of production resources, demand, prices of ingredients, etc) are accurate because there are no alternative future world-states. Nothing about this world’s future is unknown, which makes it easy to choose how to act. Expecting such certainty in any even slightly complex situation is absurd. In nearly every real-world instance, the future is actually unknown:
World 2: The world of complete certainty described above remains except for one detail. There is now a chance that bad weather affects harvests in the countries where your sugar producers are located. If this happens, even with long-term price contracts, there is a chance that some of your sugar suppliers will be unable to deliver as much sugar as you will need. Specifically, your infallible analysts have put the chance of a 1000-ton sugar shortfall at 20% within the next two years. Such a shortage would disrupt your production, reducing profit in that period by $2 million. Given current sugar prices of $220/ton, you protect yourself by buying 1000 extra tons of sugar for $220,000 and renting additional warehouse space to store it for two years at $24,000. The risk here is the 20% chance of losing $2 million in profit over the next two years (an expected loss of $400,000), which exceeds the cost of buying and storing the emergency sugar stockpile ($244,000). Deciding to stockpile some sugar is clearly sensible.
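The stockpile decision is just an expected-value comparison. A minimal sketch of the arithmetic, using only the figures from the scenario above:

```python
# Expected-value comparison for the World 2 sugar-stockpile decision.
# All figures come from the scenario described above.

shortfall_probability = 0.20   # 20% chance of a 1000-ton shortfall in two years
profit_loss = 2_000_000        # profit lost in that period if the shortfall occurs

sugar_price_per_ton = 220      # current sugar price, $/ton
tons_stockpiled = 1_000        # emergency stockpile size
storage_cost = 24_000          # two years of extra warehouse rental

# Expected loss if unprotected, vs. the certain cost of protection.
expected_loss = shortfall_probability * profit_loss                     # $400,000
stockpile_cost = sugar_price_per_ton * tons_stockpiled + storage_cost   # $244,000

# Stockpiling is rational whenever mitigation costs less than the expected loss.
should_stockpile = stockpile_cost < expected_loss
print(expected_loss, stockpile_cost, should_stockpile)  # 400000.0 244000 True
```

Formal rationality in World 2 amounts to exactly this: the mitigating action is justified whenever its cost is lower than the expected loss it prevents.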
World 2 is more “realistic.” There are two possible future world-states—one where sugar supply is interrupted (20% chance within the next two years), the other where it is not (80% chance). This kind of probabilistic knowledge allows formal rationality, where actions can be impersonally and quantitatively calculated. This is why there is a clearly sensible risk-mitigating action (to stockpile sugar as long as the cost of doing so is lower than the potential profit loss from bad weather). Approaching the world as if it is unknown in a way that permits this kind of calculative mitigating action is to have a risk mindset.
A more precise way to put it is: a risk mindset is one which assumes that future world-states and their respective probabilities are known.
Frank Knight explained (back in 1921) that knowledge of probability can be a matter of general principle, statistical observation, or estimation. Probability from general principles is deductive, such as the probability of any given outcome from throwing a fair, six-sided die (this is also called a priori probability). Probability inferred from statistical observation is not deductive. An example is inferring the probability of future defects by observing the historical average defect rate on a production line, even though past performance is not indicative of future results. And estimations of probability are purely a matter of subjective judgment. This can be (and often is) nearly free of supporting information, as the Ellsberg paradox illustrates.
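The difference between the first two kinds of probability knowledge can be made concrete. The sketch below contrasts the a priori probability of rolling a 3 on a fair die (deduced from the structure of the situation, exactly 1/6) with a statistical estimate of the same probability inferred from simulated observations; the simulation is illustrative, not from Knight:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# A priori probability: deduced from general principles, no data needed.
# For a fair six-sided die, P(roll == 3) is exactly 1/6.
a_priori = 1 / 6

# Statistical probability: inferred from observed frequencies.
# Here we "observe" 10,000 rolls and estimate the same probability empirically.
rolls = [random.randint(1, 6) for _ in range(10_000)]
statistical = rolls.count(3) / len(rolls)

# The empirical estimate approximates the deductive value, but it carries
# no guarantee: past frequencies are not the future.
print(a_priori, statistical)
```

Only the first number is known by deduction; the second is an inference from history that happens, in a stable world, to land nearby.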
A priori probability is the only form of probabilistic unknown-ness that seems truly compatible with the requirements of formal rationality. Neither statistical nor estimated probability is actually compatible with a risk mindset, but both are often treated as legitimate foundations for formal-rational decision-making anyway, as if they were a priori probabilities. The reality is that scenarios in which decisions can be made purely on the basis of a priori probabilities are vanishingly rare. In other words, World 2 is not very realistic at all.
But risk mindsets seem to be the default now. Maybe this is because an unknown future is less discomforting when accompanied by a belief that the unknown-ness can be mitigated or planned away. And maybe it is because of deeper-seated cognitive biases, such as the ones Tversky and Kahneman became famous for.
Whether or not we want to admit it to ourselves, risk—World 2 unknown-ness—is not the only type of unknown-ness there is, and most of what we label risk is actually something else entirely. For now, I can offer three scenarios that each illustrate a distinct non-risk form of unknown-ness.
1: Unknown-ness from emergent interactions of calculable unknowns. Through the late 1990s and early 2000s, banks created a slew of products which took the form of complex financial instruments that redistributed risk—among the most important ones were mortgage-backed securities and collateralized debt obligations, which were (and remain) complex and difficult to understand. As individual products in isolation, their behavior may have been comprehensible and predictable. But they were not isolated products—instead, they interacted with each other and with the rest of the global financial system. These interactions were often unpredictable, and created unpredictable knock-on effects which rippled through the system.
Though the products themselves were meant to spread calculable unknown-ness (risk) around, their complexity, profusion, and interactions created two different, but related, forms of unknown-ness: 1) not knowing how, and with what consequences, these risk-redistributing products interacted, and 2) not knowing where risk actually lay, who was exposed to it, and how much. This destroyed some banks which realized too late that their risk exposure was greater than they had prepared for (Bear Stearns and Lehman Brothers).
As confidence and liquidity within the financial system evaporated in 2008, stabilizing actions by policymakers failed to have their anticipated effects because of the unpredictable interactions of the component parts of the global financial system. Surgically precise interventions became impossible. The result has been a decade of extreme money supply expansion in nearly every major economy, with corresponding distortion of asset prices and foreign exchange rates.
2: Unknown-ness from insufficiently precise knowledge about calculable unknowns. In 2016, the UK held a referendum to decide whether or not to remain part of the EU. Before the referendum, what eventually turned out to be the most accurate polls and analysis put the likelihood of a vote to leave the EU at between 49% and 51%, with a margin of error that included the crucial 50% threshold. In the final count, 51.9% voted to leave: the expectation from polling and analysis was surprisingly accurate, but not accurate enough. The Trump-Clinton election in the US was very similar in this respect.
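To make the margin-of-error point concrete, here is a rough sketch of the conventional 95% margin of error for a polled proportion. The sample size is a hypothetical round number, not a figure from any actual Brexit poll:

```python
import math

def margin_of_error(p: float, n: int) -> float:
    """Approximate 95% margin of error for a polled proportion p with sample size n."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

p_leave = 0.51   # a poll putting Leave at 51%
n = 1_000        # hypothetical sample size

moe = margin_of_error(p_leave, n)            # roughly +/- 3 percentage points
interval = (p_leave - moe, p_leave + moe)    # roughly (0.48, 0.54)

# The 50% threshold falls inside the interval, so this poll cannot
# distinguish "Leave wins" from "Remain wins".
print(round(moe, 3), interval[0] < 0.50 < interval[1])
```

With a thousand respondents, a poll reading 51% cannot separate the two outcomes: the entire question hangs inside the error bars.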
It’s hard to conceptualize what it means to act in a formally rational way in such situations. Even when possible outcomes are binary (as was the case in the Brexit referendum and the 2016 US Presidential election), the implications and consequences of the two outcomes are so profusely complex and emergent that the actions driven by a risk mindset seem almost inevitably to fail when the probability of each outcome is not known precisely enough.
3: Unknown-ness from entirely unanticipated outcomes. Even 5 years ago, it would have been hard to convince most people that a social media platform might meaningfully influence an election outcome, let alone the idea that a foreign power might use a social media platform to do so. As subsequent events have shown, planning for the future in a formal-rational way is difficult if the range of possible futures includes entirely unanticipated ones.
Entirely unanticipated outcomes are more common than we let ourselves think. All innovations slightly deform the world—their function is to simultaneously invalidate part of how things currently work while creating new ways for things to work. In a world with innovation, entirely unanticipated outcomes are inevitable.
I’ve intentionally used the word “uncertainty” as little as possible because it’s too easy to slip into assuming that the opposite of certainty is uncertainty—and so to imagine that all situations that are not certain are equivalent. This is functionally equivalent to applying the risk mindset to all situations where the future is unknown, even if the future is unknown in different ways.
The big question of the moment is: What should we do when we’re exposed to other forms of unknown-ness? The three non-risk varieties of unknown-ness above are only the tip of the iceberg. It isn’t clear how we should act in these scenarios. But non-risk forms of unknown-ness will increasingly be the rule rather than the exception. Thinking about unknown-ness forces us to attend to the different varieties of unknown-ness that exist—and consider new ways to plan for and respond to them.