Grab bag (Week 14/2025)
Big-time uncertainty, the efficiency trap, doing better interviews, extractive tech, the semantic apocalypse, output indistinguishability.
Hello friends,
What a week. If the uncertainty (not risk) of the modern, hyperconnected and interdependent world was not undeniably evident before, it sure should be now. Do you need a good way of thinking about unknowns and distinguishing risks (predictable, quantifiable, optimisable, comforting) from uncertainties (capricious, petulant, potentially deranged, terrifying)?
This is as good a time as any to remind you of the book I wrote on the virtues of the uncertainty mindset, and my little primer on how to think more clearly about risk.
Recent writing
At a UNDP event last week, I spoke with civil servants from across a country’s government about their 2050 strategic plan for public institutions. The point I wanted to make was that governments often forget that planning is not the same as strategy, and that public institutions become brittle and fragile when they’re designed only for efficiency through rigorous planning. We need public institutions which don’t fall apart when the world suddenly shifts in unpredictable ways. That requires strategy (not planning) and a commitment to keeping non-wasteful slack around.
👉 Don’t fall into the public sector efficiency trap.
Interviews are one of the most powerful research tools for understanding uncertain, fast-changing, or complex situations. Unfortunately, their potential usually goes unrealised because research teams treat interviews like casual conversations rather than a rigorous research method, leading to vague questions, poor respondent selection, and bland results. When done right, interviews can generate deep strategic insight. That means being methodical across all four stages of the interview lifecycle: conceptualisation, planning, execution, and analysis — each of which requires different skills.
👉 Why you’re doing interviews wrong (and how to fix it).
Seen in the wild
I made IDK on the side of a mountain in the Haute-Loire during the lockdowns. It’s a deck of cards with action prompts designed to gradually and enjoyably inject a bit of self-driven uncertainty into daily life. (It’s also one of a long series of experiments in making uncertainty easier to deal with.) When you use IDK often, uncertainty slowly becomes less uncomfortable to experience and easier to manage, and you get better at being in uncertain situations, learning from them, and doing new things.
Re-learning how to be productively uncomfortable is closely connected to being ready when your job, your career, or your industry changes unpredictably.
Which is why I brought an IDK along when I filmed a short interview for a documentary about skills and the future of work in an increasingly uncertain world. It premiered in Paris this week and will be available online soon; in the meantime, you can watch the trailer here:
Elsewhere
I dislike disruption theory. Why can’t we have evolutionary, instead of revolutionary, change? But here’s a more cogent essay by Joan Westenberg about the disruption narrative and its connection to extractive ways of building technology.
👉 The Great Tech Heist — How “disruption” became a euphemism for theft.
I’ve been writing and thinking a lot about meaningmaking (= the act of making a subjective decision about value), and this week, a few articles floated up which address the meaningmaking problem of AI from unusual angles.
Vauhini Vara has a depressing article about the success of Inkitt, a publisher which retains an author to write books just long enough for its AI systems to learn an etiolated version of the author’s style. Inkitt then produces machine-written books and sells them under the author’s name — books which are reportedly lame and hollow but nonetheless sell well.
👉 The AI romance factory.
Erik Hoel wrote about how everyone was suddenly using ChatGPT to Ghiblify their photos. On the one hand, cool. On the other hand, how does the deluge of imitation affect the original? (Hint: not well.)
👉 The semantic apocalypse.
I think we barely value meaningmaking because we’ve forgotten how to recognise the act of meaningmaking as something distinct from the output that results from it. If all we can see is the output of a human or a machine — say, a block of text structured as a poem — and those outputs are indistinguishable from each other, we seem to leap automatically to the wrong conclusion: that the machine is now at (or close to) the level of the human.
Output indistinguishability is the bane of our existence in thinking about AI, and maybe this springs from a misunderstanding of what Alan Turing was writing about when he described “the imitation game” (now better known as the Turing test). If you read it yourself, you too might come to a different conclusion.
👉 Computing machinery and intelligence.
The act of meaningmaking (not the process of meaningmaking, which is impenetrable even to the actor and inherently variable) is a product too. If we say that the main criterion for evaluating a machine’s output is whether it can be mistaken for something a human has produced, this reduces the product of human work to only visible artifacts and ignores the more important [acts of making subjective decisions about value] that result in those artifacts.
Conventional technology discourse seems to understand work product only in this thin, superficial, artifact-driven sense. There are very few disciplines (social practice/conceptual art and jurisprudence are the two most prominent) in which subjective value judgments are prized as a work product and are part of the training and formation. The less we prize meaningmaking work, the less we do it, and the less able we are to even recognise it.
See you next week,
VT