Have you looked at Frank Knight's work? Nick Bloom on Knightian uncertainty: 'One can consider two types of uncertainty: risk and ambiguity. This distinction was made by Professor Frank Knight of the University of Chicago in his 1921 book "Risk, Uncertainty and Profit". He identified risk as occurring when the range of potential outcomes, and the likelihood of each, is well known – for example, a flipped coin has an equal chance of coming up heads or tails. One could answer a question like “How many times would you expect a coin to come up heads if it is tossed 100 times?” with both a best guess (50) and a “confidence interval” (for example, in this case there is about a 95 percent chance that a fair coin will come up heads between 40 and 60 times). In contrast, what has become known as “Knightian uncertainty” or “ambiguity” arises when the distribution of outcomes is unknown, such as when the question is very broad or when it refers to a rare or novel event. At the outset of the COVID-19 pandemic, for example, there was a tremendous spike in Knightian uncertainty. Because this was a novel coronavirus, and pandemics are not frequent events, it was very difficult to assess likely impacts or predict the number of deaths with a high degree of confidence.'
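(the quoted 40-60 interval is easy to check directly, precisely because risk in Knight's sense means the outcome distribution is fully known. a quick sketch in plain Python — the function name is mine, not from the quote:)

```python
from math import comb

def binomial_pmf(k: int, n: int = 100, p: float = 0.5) -> float:
    """Exact probability of exactly k heads in n fair-coin flips."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Probability that heads lands in the 40-60 band Bloom quotes.
# Under risk, this number is knowable in advance, to any precision.
p_40_to_60 = sum(binomial_pmf(k) for k in range(40, 61))
print(f"P(40 <= heads <= 60) = {p_40_to_60:.4f}")  # roughly 0.96
```

(so "about a 95 percent chance" slightly understates it — the exact figure is a bit over 96% — but the point stands: for risk, the interval itself is computable. for Knightian uncertainty there is no distribution to sum over at all.)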
see https://share.note.sx/2hpx9ohv#CQ5T0Bv0ZGkLtxCuBTINFhv3RPRCeXQ9xVC6hsB5MEE
ah interesting. i have #views on thinking clearly about risk and true (or Knightian) uncertainty, and especially on the ways to gauge uncertainty.
wrote a short series of essays on this:
1. https://vaughntan.org/how-to-think-more-clearly-about-risk
2. https://vaughntan.org/the-consequences-of-mindset-mismatch
3. https://vaughntan.org/the-insidiousness-of-the-formal-risk-mindset
yes! knight's framing of the distinction between uncertainty (unquantifiable unknowns) and risk (quantifiable unknowns) was one of the conceptual foundations for my first book: https://uncertaintymindset.org/
what do you think of ellsberg's work on the savage axioms?
i am not sure that "ambiguity" and "Knightian uncertainty" are completely coterminous. ambiguity (to me) indicates that multiple interpretations of the same observation are simultaneously possible. this creates unquantifiable unknowns about the observation, for sure. but there are other types of unquantifiable unknowns: about potential actions, potential outcomes, causation, and valuation. for clarity, i call these different types of true uncertainty "not-knowings," and have been working on unpacking those concepts for the last few years: https://vaughntan.org/notknowing. if you have a chance to look at that, i'd love your thoughts on it.
but i do agree completely that there was a lot of Knightian uncertainty at the outset of the covid-19 pandemic — unfortunately, much of the public decisionmaking at the time was using approaches that are only appropriate for risky situations (cost-benefit analyses, expected value analyses). in fact, one of the examples i bring up most often when talking about this is the WHO's decision to recommend against travel stoppages in feb 2020 based on "rigorous cost-benefit analyses" (i interpret that as incorrectly applying risk methods for decisionmaking in a highly uncertain situation).
the examples appear to reflect a single archetype: sudden unexpected phase change in a system. are there others you can point at?
also related - when you say "types of unknowns" are you being binary - risk or uncertainty - or do you have a finer-grained taxonomy? i assume that if you diagnose your context correctly, you should be able to figure out what _particular_ type of unknown might exist and prepare for it better. just avoiding the situations you discuss does not seem doable to me. for example, as a bank, you are probably happy to be "too big to fail" and would not want to give up the advantages it brings, even if it screws up your early-warning system.
in the last issue, i gave a few examples of poorly observable consequences of misdiagnosing uncertainty as risk.
these are all examples of unexpected discontinuous change because that's the mode of the three mechanisms highlighted (they modify systems to absorb input until they reach capacity, then the systems change).
an alternative mode might be continuous response: evolving an approach as its consequences emerge, rather than waiting for a discontinuous break. as an example of this, i especially like the case of robert irwin, the california light and space artist, whose practice changed multiple times over the course of his career (documented in lawrence weschler's _seeing is forgetting the name of the thing one sees_). that book is so good, and worth a read.
to your other question: the different "types of unknowns" i'm referring to are different types of (non-risk) uncertainty. the terminology is unwieldy in order to be unambiguous.
there are four types of not-knowing: about possible actions, about possible outcomes, about causation between actions and outcomes, and about the subjective value of outcomes. each type has a different source, affects decisionmaking differently, and needs different management/response tools. you're right that it isn't feasible to just avoid these types of not-knowing.
unpacked in detail here: https://vaughntan.org/notknowing
(but i'm also writing a short summary of these four types of not-knowing which i hope to put up next week, so stay tuned)
if i were that too-big-to-fail bank, i would want to be soaking up those advantages _and_ preparing for the phase change (instead of being caught out by it)...
yes. i didn't connect those 4 types of not-knowing to this post. i am not sure if this is what i meant, but i can see this taxonomy is still being baked.
book put on wishlist, thanks.
lmk what you think of it
Boris, it is then.
a boris never hurts and usually helps :-)
And it's a much gentler approach to surfacing the shortcomings of tunnel vision on narrow goals.
absolutely. the way the boris process is designed is built around recognising that talking about tradeoffs is uncomfortable and difficult. so teams must simultaneously be a) forced to talk about tradeoffs together and b) put in an environment which supports difficult conversations.