Public value and public strategy; what I've been up to; electric anxiety; Rupertness in polyhedra, rewritable pads, tape clips, zip ties; trees, cats, dogs.
A few weeks ago, I rented an all-electric vehicle to drive up from Marseille into the Massif Central — north on the A7, then west through the Monts d’Ardèche, up onto the Mézenc plateau, and then into the Livradois-Forez. After leaving the A7 with its profusion of charging stations, I entered the mountainous plateau in the centre of France and an entirely different operational regime.
The Ravine of Preynas in the Ardèche; 30 October, 2025.
This volcanic plateau is in the middle of a band of low-population départements running from the northeast to the southwest — the empty diagonal. The plateau has few inhabitants and sparse charging infrastructure, barely any of it high-speed.
Renault’s predictive range analytics proved unreliable when confronted with these steep mountain roads, repeated elevation gain and loss, and sharp temperature changes. The range estimates changed so dramatically that I might as well have been planning the route around an infrastructure of phantom charging stations. Under these conditions, optimisation is a mug’s game. I was glad, as a believer in excess capacity in times of uncertainty, to have planned the route around much more range buffer than the charge-planning apps suggested.
EVs change the logic of long-range trip planning. With a combustion engine, you can be cavalier about fuel and optimise routes for speed or distance — it’s hard to run out because gas stations are everywhere and refueling takes five minutes. But with an EV, charging locations and times become a central consideration. Charge now at this slow 22kW charger and add an hour to the trip, or drive to the fast charger 80km (and a 45-minute detour) away that might be occupied or broken? Having a lot of excess capacity is crucial.
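The tradeoff above can be sketched with back-of-envelope arithmetic. All the numbers here (energy needed, charge rates, detour distance and speed, charging efficiency) are illustrative assumptions, not measurements from the trip:

```python
# Illustrative sketch of the slow-vs-fast charger tradeoff.
# Every number below is assumed for illustration only.

def charge_time_hours(energy_kwh: float, charger_kw: float, efficiency: float = 0.9) -> float:
    """Hours needed to add a given amount of energy, allowing for charging losses."""
    return energy_kwh / (charger_kw * efficiency)

def stop_cost_hours(detour_km: float, speed_kmh: float,
                    energy_kwh: float, charger_kw: float) -> float:
    """Total time cost of a charging stop: detour driving plus charging."""
    return detour_km / speed_kmh + charge_time_hours(energy_kwh, charger_kw)

# Suppose we need to add ~40 kWh.
# Option A: slow 22 kW charger directly on the route (no detour).
slow = stop_cost_hours(detour_km=0, speed_kmh=70, energy_kwh=40, charger_kw=22)
# Option B: 150 kW fast charger, 80 km round-trip detour on mountain roads.
fast = stop_cost_hours(detour_km=80, speed_kmh=70, energy_kwh=40, charger_kw=150)

print(f"slow charger on route: {slow:.1f} h; fast charger with detour: {fast:.1f} h")
# With these numbers the detour still wins — but a broken or occupied fast
# charger flips the calculation, which is why range buffer matters.
```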
The UX of the charging ecosystem is also still astonishingly bad. The charging network apps know where charging points are and how fast they charge, but are terrible for optimising a stop around anything else you might want to do while the car charges — like stopping in a hardware store or finding a swimmable river. The handoff between apps and in-car navigation systems is terrible; planning a route and actually driving it requires juggling several different apps and systems, none of which talk to each other properly.
Here’s an underexploited business opportunity: A decent rural restaurant with a few moderately fast chargers (22kW, which are more likely to be feasible with existing rural electrical supply infrastructure) would clean up. Not fast food or fancy food, and it has to be far from the autoroute — but make it a real sit-down place, a classic routier or bistrot with a formule, where spending 45 minutes to an hour would be a pleasurable obligation.
Charging station mediocre-restaurant cat outside Montélimar; 30 October, 2025.
Public value and public strategy. Why borrowing private sector strategic thinking and values undermines public strategy — and a practical tool for using tradeoffs to develop robust public strategy that serves diverse stakeholders over long time horizons. 13 November, 2025.
Now-ish. It’s been over 3 months since I updated the now page. At the end of July, I went back to Singapore for a Centre for Strategic Futures Fellows visit to think about overlooked strategic issues. Adelaide in August, where I installed a temporary exhibition on experiencing not-knowing, and ran strategy workshops for Adelaide University leadership and SA Water. Singapore in September for a top secret futures project (embargoed until early 2026); then Tunis to run a tradeoffs-focused strategy workshop for UNDP Tunisia. Japan in October for business development, to see my old supervisor, and to walk part of the Kumano Kodo; then Berkeley to present my research on reasoning scaffolds. Auvergne in November to retrieve tools from the barn at the old house, then London to run a masterclass on how to think about AI deployment in traditional industries. 15 November, 2025.
It feels as if (and it is in fact true that) all of my October was spent somewhere else. I was in Japan, first Tokyo, Kyoto, Osaka, then a string of small villages in Wakayama while walking a short section of the Kumano Kodo, then Berkeley to demo a reasoning scaffold at the closing event for the Future of Life Foundation’s AI for Human Reasoning fellowship.
I came up with the term reasoning scaffold because scaffolds are the visible structures of human subjective reasoning. More accurately, they are structures that have been made visible. Reasoning scaffolds are vital pieces of epistemic infrastructure, and we don’t know much about them at all. The two pieces of my writing I’ll link to this issue are both about reasoning scaffolds: inadvertently building one and realising its value, and outlining a research programme about them.
Over San Francisco Bay; 13 October, 2025.
But first, walking in forests.
The Kumano Kodo is an ancient network of forested pilgrimage routes between old Shinto and Buddhist shrines and temples—it was a major site of practice for syncretic nature worship blending Shinto, esoteric Buddhism, and animism, and focused on enlightenment through mountain asceticism.
These days, the ancient way is no longer especially arduous. It is cleared, abundantly signed, well-maintained, and in many places even paved. There are stairs in some of the steep parts, and an active tourist economy along the way. For a fee, a Kumano Kodo luggage service will take your impedimenta from town to town, leaving you lightweight on the trail.
Some days on the Kumano Kodo on the edge of a typhoon; 9-11 October, 2025.
In the Auvergne during the lockdowns, I went on long walks nearly every day — sometimes 4-5 hours on- and off-trail winding up, down, and around the volcanic mountains of the massif, deserted except for the distant sounds of hounds and hunters.
Conifer forests smell and look different from deciduous forests, and the underbrush is open enough to see through. The mountain forests through which the Kumano Kodo runs are cedar and cypress; those of the Auvergne are plantations of pines and Douglas fir.
Reasoning scaffolds: An infrastructure for human subjective reasoning. Every important decision requires subjective reasoning about objective facts—deciding what matters and why. Yet we have almost no explicit frameworks for this work, which is a source of civilisational fragility in an age of AI and accelerating political and social fragmentation. Reasoning scaffolds fill this gap: they’re explicit structures that make subjective reasoning visible and learnable. When university students tested a scaffold, they used it to generate dramatically better arguments in 75% less time. But we still lack a robust theoretical foundation for reasoning scaffolds—understanding what scaffolds share across domains, how to annotate the artifacts of subjective reasoning to show their structure, and how to make this machine-legible for human and AI training. A research programme addressing these questions would create infrastructure both for better human subjective reasoning and for AI systems that support human judgment rather than displace it. 22 October, 2025.
Outside Tsubo-yu, Yunomine Onsen (see below); 11 October, 2025.
Yunomine Onsen: “Water quality: Effective against when drunk: Digestive disorders, diabetes, gout, women’s diseases, etc.; Effective against by bathing: rheumatism, nervous disorders, skin diseases, diabetes … commonly called Japan’s oldest spa … It is also said that the water changes color seven times a day.”
Prototypes and the value of theory; Socratic mirrors, reasoning scaffolds, and AI tools; in-country patterns; closures, modern instrumentals, natural ice, pretexts.
Many months ago, a bunch of us decided to do a short walk together and chose a section of the Kumano Kodo to do it in. Over the intervening year, other things happened so that I came to Japan early for meetings to ready the ground for doing more work in Southeast Asia now that I’m based full-time in Singapore. But also to opportunistically hang out for a few days with some other old friends who happen to also be here.
Here are some of the things I noticed after ten years away:
Row 1, L-R: A weird, bulbous way of building an interlocking stone wall; characteristically unlavish but careful and fine construction on a garage post; one of an endless series of umbrella holders outside houses and places of business; the flying fairybear logo of the Tokyo neighbourhood police posts.
Row 2, L-R: Outside the Kamiyama-cho sorting station of Yamato, one of Japan’s superb logistics companies; a loop pedestrian overhead crossing connecting all four corners of a busy intersection; the omnipresence of functional netting; the ubiquity of absurdly precise cast concrete construction.
Row 3, L-R: Concertina gates for driveways; facades covered in sheets of small tiles; 3D metal mesh doormats; construction sites with aggressive noise mitigation and measurement.
I’ve been prototyping an AI tool that uses Socratic mirroring to build a reasoning scaffold that helps users develop stronger arguments. Testing a fully functioning prototype across universities, corporations, startups, and government shows that the method generalises: Users doing real work with real stakes find it useful enough to want their institutions to provide it. Theoretical innovation in meaningmaking translates directly into practically useful meaningmaking tools.
If you’re interested in testing the tool and/or learning more about it as it develops, sign up here. The tool is fully functional, so testers can benefit by refining an actual argument they want to make (e.g., for a paper in a class, a policy paper for potential implementation, a startup business plan, or a strategy proposal for management).
An AI tool for learning critical thinking while using AI tools; Tunisia and structured elicitation; various perspectives on how to use AI; developments in zippertech.
tl;dr: I’ve been prototyping an AI tool for helping people develop critical thinking skills while using AI tools. The key insight is that blank chat prompt boxes are terrible for learning—instead, we need reasoning scaffolds and Socratic mirrors that support human meaningmaking rather than pretending to do it for users. I built seven quick prototypes to evolve and test the technical stack, interaction logic, and content. You can read about and sign up to test the tool here.
Last week, I was in Tunis for 3 days working with a development organisation’s portfolio leadership team on their 2026 strategy. I used a mechanism for structuring idea elicitation and refinement that I’ve been developing over the last year, and it worked much better than I could have hoped for. While in Tunis—because of my poor scheduling—I delivered my module on a new way to teach public sector strategy for Protocol School (alongside a bunch of superb faculty fellows from all over), then delivered the same module again at the Lee Kuan Yew School of Public Policy the day I got back to Singapore. I’ll write more about that when the dust settles.
This week is about an adjacent project: An effort to create a prototype AI tool for helping people learn to think critically while using AI tools, funded by the Future of Life Foundation’s programme on AI for human reasoning. I was simultaneously prototyping three distinct layers: The delivery layer (code and webapp), the mechanism layer (human-machine interaction logic), and the content layer. The Tunis strategy work used the same mechanism, with a different delivery experience (slides and pen-and-paper vs. webapp) and different content.
The blank chat prompt box is no good
The simple but powerful idea behind this tool is my thesis that the unparameterised blank chat prompt box—which has become the default mode for interacting with LLMs—is actually terrible for helping people learn to use these tools effectively. The blank prompt assumes users already know how to structure their thinking, frame problems, and evaluate responses. But if the goal is developing critical thinking skills, that assumption is often unjustified.
A much better approach is to scaffold the interaction. Put explicit context around the kind of input you’re hoping the user will provide, then be structured about how you process that input and what you reflect back.
If we build AI tools with this design philosophy, they can provide users with a Socratic mirror that supports meaningmaking. These AI tools will not pretend or appear to do meaningmaking for the user. Instead, they’ll provide reasoning scaffolds that explicitly support the user in developing their own capacity for meaningmaking. Reasoning scaffolds work by reflecting the user’s thought processes back at them for consideration and refinement, rather than generating meaning on the user’s behalf.
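As a sketch of this design philosophy: rather than a blank box, the tool puts explicit context around each user input and instructs the model to mirror rather than answer. The function names, fields, and prompt wording below are my hypothetical illustration of the pattern, not the actual prototype's implementation:

```python
# Minimal sketch of a scaffolded interaction with a Socratic-mirror
# instruction. All names and wording here are hypothetical illustrations
# of the pattern, not the author's actual prototype.

from dataclasses import dataclass

@dataclass
class ScaffoldStep:
    question: str    # explicit context framing what input is expected
    user_input: str  # what the user actually wrote in response

def build_mirror_prompt(step: ScaffoldStep) -> str:
    """Assemble a prompt that asks the model to reflect the user's
    thinking back for evaluation, not to generate meaning for them."""
    return (
        "You are a Socratic mirror. Do not answer, extend, or improve the "
        "user's argument. Restate it in your own words, then surface any "
        "assumptions or contradictions for the user to evaluate.\n\n"
        f"The user was asked: {step.question}\n"
        f"The user wrote: {step.user_input}"
    )

prompt = build_mirror_prompt(ScaffoldStep(
    question="What claim are you making, and who needs to be convinced?",
    user_input="Remote work is better because people like it.",
))
print(prompt)
```

The structure does the pedagogical work: the question constrains what kind of input arrives, and the mirror instruction constrains what kind of output goes back.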
Multilayered iteration
I vibecoded an extremely hacky series of prototypes over 3-4 days, iterating simultaneously across these three layers: delivery (the technical implementation), mechanism (how content gets structured and delivered), and content (what actually gets delivered).
The first two prototypes were to validate whether the basic interaction logic worked. Could I take a pen-and-paper mechanism and translate it into something that didn’t require me to be physically present? I also wanted to see if automating the Socratic mirroring I normally provide manually would work. Having a human interlocutor function as the mirror takes a lot of time, and this kind of mechanical processing is something machines are, in theory, very good at.
Versions 3-4 implemented LLM responses as Socratic mirrors. This required figuring out how to prompt based on user input so that the LLM would return something like a mirroring response users could evaluate. In testing, I established that these responses stimulated users to actively consider whether the mirrored responses were consistent with their own understanding and intent, and whether they wanted to revise them for clarity.
Versions 5-7 focused on distribution, moving to web (vs local) delivery, making API integration more robust, and adding logging. Version 7, which I’m using for scaled-up testing, is where both the mechanism and user experience seem to work. Today, I tested it with a small group of college students in Singapore. The short summary is that even in this prototype phase with several UI glitches, the tool works much better—and much, much faster—than I’d expected. I’ll report on the first wave of testing soon.
A principle for AI tool design
The broader insight from this exercise is about building AI tools that enhance rather than replace human capacity for meaningmaking. Many of the AI tools I see today seem designed to either “do the thinking for users” (this includes many so-called “agentic” tools) or provide so little structure (by way of the empty, context-free text entry field) that users don’t learn effective patterns for interacting with machines while preserving human-ness.
The alternative I propose is to build tools that explicitly scaffold human meaningmaking processes. These tools should be designed to elicit meaningmaking by users, and to surface contradictions, highlight assumptions, and reflect thinking patterns back for user evaluation. This requires being very clear about what humans do that machines cannot and what machines can do better than humans, and designing the human-machine interaction accordingly.
I wrote up some background on this first project to develop an AI tool for scaffolding human meaningmaking in the context of AI tool use (meta, I know).
Students now have access to LLMs that can write essays, but seem to be losing the capacity to think critically. I solve this problem by reconsidering the interaction logic between human users and the AI tools they use. I’ve developed an AI tool that inverts the usual logic of the empty, unconstrained chat box — the goal is to help users learn to think critically and do the meaningmaking work that only humans can do. Initial tests of the mechanism show users going from vague statements to sharp arguments in under two hours. This tool represents a scalable approach to critical thinking education and an alternative to current AI tools that make students passive consumers of machine-generated content.
If you’re interested in testing the tool and/or learning more about the course as it develops, please sign up here. The tool is fully functional, so testers can benefit by refining an actual argument they want to make (e.g., for a paper in a class, a policy paper for potential implementation, a startup business plan, or a strategy proposal for management).
How to use AI without becoming stupid: “The Vaughn Tan Rule goes like this: Do NOT outsource your subjective value judgments to an AI, unless you have a good reason to, in which case make sure the reason is explicitly stated.”
The right way to use AI tools: “… he started using ChatGPT to draft emails in French. It felt like a net positive — enabling better communication with his French friends — until he started to feel his brain ‘get a little rusty.’ He found himself grasping for the right words to text a friend.”
Connection innovations: AiryString achieves a ~26% reduction in weight and a ~23% increase in flexibility by removing the tape from the zipper (compared to a standard #5 VISLON YKK zipper).
A breakthrough in cases; taking the public sector seriously; the practical value of thinking about meaningmaking; dissolving vs. solving, quant knowability breaks down, materiality.
I got back to Singapore from Adelaide at the end of August, but last week was totally consumed by a Very Cool non-aligned project about the near future which brought to Singapore many people I’d only previously known as names on the Internet.
I forced everyone to drink low-intervention wine and eat unfancy foods, sometimes without air-conditioning. Some of them grudgingly admitted later to being slightly inebriated the day after but not actually hungover. The whole thing is embargoed for another 60 days, but you can be sure I’ll write about it here when I can.
Interesting people in interesting places; 2 September, 2025.
Meanwhile, two current projects have dovetailed and reinforced each other in the last 3 months. I wrote about one of them a few weeks ago and the other one this week.
The idea for better cases came out of a casual conversation at Edge Esmeralda with Venkatesh Rao. But the approach that allows me to generate parametric cases came out of the research and prototyping I’m doing for the FLF work and a conversation with Owen Cotton-Barratt during which he suggested eating my own dogfood. That led me to try applying the FLF tooling to a thinking and writing problem I was working on: Cases for teaching public sector strategy.
Hand-printed and -dyed cloth at a batik talk; 5 September, 2025.
Learning the affordances of this new tool has taken many months of experimentation, and it now depends on a remarkably arcane prompt structure on the back end. But, after two rounds of beta testing with real, live public servants working in state and national governments and public utilities, I’m feeling good about trialling it beyond just friends and family.
So I’ll present a 90-minute module from this public strategy course on September 17, 0900-1030 Singapore time (GMT +8), as part of Protocol School. This session is not open to the public but we can accommodate a few guests — email me if you’re interested.
A few days later, I’ll teach the same module with a few modifications and a re-customised parametric case as part of a course on foresight and strategy at the Lee Kuan Yew School of Public Policy in Singapore.
After over a decade teaching strategy in private and public sector settings, I’ve developed a new public sector strategy course that flips the conventional wisdom. Instead of borrowing failing private sector concepts, my approach recognises that public sector organisations — with their complex stakeholder environments, wicked problems, and indefinite time horizons — require fundamentally different ways of thinking about and doing strategy, ways which should in turn inform how private sector strategy is done. The course also teaches using parametric cases that can be fully customised for particular teaching contexts and specific content, making strategy education more relevant and engaging for busy public servants worldwide.
Dissolving (vs solving) the problem: “The important thing is there’s something you do to a problem that’s better than solving it, and that’s ‘dissolving’ it. How in the world do you ‘dissolve’ a problem? By redesigning a system that has it so that the problem no longer exists.” (I was reminded of Ackoff’s work by David MacIver’s latest newsletter issue.)
The breakdown of quantitative knowability: “Quant hedge fund managers are experiencing one of the most prolonged droughts in recent memory. The bigger concern: They don’t know why … ‘Everyone says it's different this time — different because of duration,’ said a hedge fund consultant who works with large quant funds. ‘This has been a long, slow bleed across the complex.’”
Beginning with material: “With any fabric, no matter how great the raw material may be, if it’s simply knitted or woven as-is, it won’t become the kind of textile we’re aiming for. For us, good raw materials are just the starting point. We almost never use them without further refinement.”
See you next week, VT
Schlepping stuff from Adelaide to Singapore; 31 August, 2025.