Multiples (wk 1/2026)
Judgment from the ground up; labour-intensive and industrial foods; prompting as teaching; maintenance and replacement; Boring Tiny Tools in India; Chinese peptides; shoveling snow.
Hello friends,
Happy new year. Recent events compel me to remind you that idk (the world’s first and only tool for learning how to be productively uncomfortable) is once more available in Europe and the US.
I’ve been in various virtual and meatspace places in the last two weeks:
Watching a New Year’s Eve coulibiac being constructed: A coulibiac is a Russian imperial dish of fish cooked inside a pastry shell and insulated by a layer of herbed rice, boiled eggs, and precooked vegetables (on similar principles to a beef Wellington). It takes much time and planning. Not all laborious foods are imperial, but laborious foods are much more common where labour is both cheap and abundant—hence the many Peranakan dishes requiring large quantities of fine knifework, and all tiny-format stuffed vegetables and pastas.
In a big box grocery store observing “baby carrots” being restocked: Industrial processing can’t afford to be discriminating at the margin when handling highly variable products. Agricultural products vary too much for processing lines to trim frugally—slowing down enough to peel every carrot only as much as necessary would make industrial carrot economics unviable. So baby carrots are pieces of big carrots lathed down into tiny cylinders, sized so that nearly every piece has all its shape and surface variation cut away. This lowest-common-denominatorisation is the price we pay for standardising outputs when inputs aren’t standardised.
Lurking on a WhatsApp chat of aggressive first-adopters of LLMs: Prompting an LLM well (i.e., to make the LLM generate output conforming to the prompter’s intentions) forces the prompter to make implicit reasoning explicit. This is similar to what happens when teaching a human how to do something (e.g., cooking an omelet or suturing a wound), but with a crucial difference: with an LLM you can’t rely on the learner’s discretion and judgment. When you teach a human, you can hope that they will exercise judgment under ambiguity, but LLMs can’t (yet) be relied on to exercise subjective judgment at all (what I’ve called the meaningmaking lens on AI). So, with an LLM, prompters should aim to articulate every step of a reasoning process (what I’ve called a reasoning scaffold), but even aggressive first-adopters don’t seem to be using this frame to think about what they’re doing. A rough sketch of what such a scaffold might look like follows these notes.
Driving by a town dump when a truck filled with cheap toys pulls up: A replacement-oriented society defaults to designing objects to be lower-cost because they are intended to be disposed of and replaced instead of being repaired. A maintenance-oriented society defaults to designing objects that cost more but are easy to take apart so that parts can be cleaned or replaced when they break (we sometimes call these repairable systems). Maintenance-orientation was the default when our machines weren’t as good as they are today; the stuff made by maintenance-oriented societies was nice in ways we can’t describe but can usually sense.
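For concreteness, here is a minimal sketch of the difference between an implicit prompt and a scaffolded one. The refund-review task, the checklist, and the call_llm helper are all hypothetical stand-ins, not anyone’s actual workflow; the point is only that the scaffolded version spells out each step a prompter might otherwise expect a human learner to infer on their own.

```python
# A minimal sketch of a "reasoning scaffold" prompt.
# `call_llm` is a hypothetical placeholder for whatever client you use;
# the interesting part is the structure of the prompt, not the API.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call."""
    raise NotImplementedError

# Implicit prompt: relies on the model's "judgment" to decide what matters.
implicit_prompt = "Review this refund request and tell me whether to approve it."

# Scaffolded prompt: every step of the reasoning is made explicit,
# because the model can't be relied on to exercise subjective judgment.
scaffolded_prompt = """You are reviewing a customer refund request.
Work through these steps in order and show your work for each:
1. Restate the customer's claim in one sentence.
2. Check whether the purchase date is within the 30-day refund window.
3. Check whether the stated reason matches an allowed refund category
   (defective item, wrong item shipped, duplicate charge).
4. If any check fails, recommend REJECT and name the failed check.
5. If all checks pass, recommend APPROVE and summarise the passing checks.
Do not skip steps. Do not add criteria that are not listed above.
"""

# Usage, once call_llm is wired to a real client:
# print(call_llm(scaffolded_prompt + "\n\nRequest: ..."))
```

The scaffolded version is doing what a good teacher does for a novice: it externalises the checks that an experienced human would make silently, because the model cannot be trusted to supply them itself.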
Writing
Judgment from the ground up: Critical thinking is foundational for making decisions that require subjective judgment. People learn how to do subjective decisionmaking through practice. Unfortunately, organisations increasingly reserve subjective decisionmaking for senior members, so junior members get neither practice nor support in learning how to do it; this also means that they don’t get practice in the kinds of critical thinking needed for good subjective decisionmaking. This leads to succession crises, inefficient bottlenecking in decisionmaking, and poor decisions made by senior members who are far removed from the operational realities that junior members are exposed to. Organisations must therefore redesign themselves to give junior members opportunities and tools for learning to think critically about subjective decisions: to build this capacity from the ground up, they must create low-stakes settings where juniors make real decisions with real but limited consequences, practising the same skills that senior people use for high-stakes choices. (3 Jan, 2026)
Elsewhere
“‘The biggest misconception is that small business owners in developing countries will be the last to adopt AI. I’m seeing the opposite,’ said Dutt. ‘They are adopting it first because for them, even a small improvement in efficiency makes a big difference.’” (They’re building what I call Boring Tiny Tools.)
See you soon,
VT




