tl;dr: AI systems seductively appear to be autonomous meaning-making entities, but they are actually tools that can’t make meaning on their own (for now). Understanding this seductive mirage reveals an important but hard-to-see trap in thinking about AI strategy.
Meaning-making is the crux for thinking about AI.
I’ve been thinking and writing about meaning-making, which is the act of making subjective decisions about the value of things.
Meaning-making is the defining characteristic of human-ness.1 If meaning-making is a uniquely human capability, then it marks the boundary between what only humans can do and what machines can take on, and that makes it key to understanding how we should build effective (= aligned and safe) strategies for investing in, regulating, and working with AI.
As I pointed out previously, AI systems can’t yet make subjective decisions about value on their own — only humans can. And when we recognize that only humans can do meaning-making for now, we’re able to see more clearly what AI systems can do better than humans.
This issue is about an important but hard-to-see obstacle to doing effective AI strategy: the seductive mirage of AI systems, which is that they seem to be good at doing meaning-making when they actually can’t do it at all.
AI systems can’t be truly autonomous.
Making subjective decisions about value is how we figure out what is and isn’t worth doing. Meaning-making is therefore how we choose what to do, and it is what makes us autonomous and self-governing. An entity can only be autonomous and self-governing if it can make its own subjective choices about what actions to take: an autonomous entity must have meaning-making capacity. Anything else can only aspire to be a tool used by a meaning-making entity.
Though AI systems are capable of producing output that looks like what meaning-making humans can produce (Turing’s imitation game), the appearance of [making outputs that are hard to distinguish from those resulting from subjective choices] is not the same thing as [actually making subjective choices that result in outputs].2
Here’s an instructive illustration. Below are two images.
The one on the right is the output of a process of meaning-making which is now thought to have been one of the wellsprings of an entire movement in modern art — “Black Square” (1913/1915) by Kazimir Malevich. The one on the left is a black square that does not have the same meaning-making behind it as the one on the right — but you would not be able to tell them apart just by looking at them side by side.
I’ll refrain from going into more detail about this because I’ve already unpacked here and here the logic of why output indistinguishability is a poor way to think about evaluating AI systems. Because machines can’t make meaning at all but humans can, all the meaning-making work in AI is being done by humans (individually or in groups) somewhere along the line.
If this seems like an Implausibly Big Claim, have a look at the table below, especially the parts highlighted in yellow. (The table is from one of my earlier essays on AI’s meaning-making problem.)
So, for now, we must think of AI systems as tools to be used by humans, not autonomous entities with the ability to decide on their own what actions are worthwhile. Another way to put it: We can have AI agents that act on directions from humans, but we should not have agentic AI systems without human involvement in deciding what actions to take.3
Tools and possibilities for action.
Tools work by giving us possibilities for action. A door handle is a tool that makes it possible for the user to open a door. Action-possibilities that are useable and easily perceived become the ones users most frequently reach for.4 But some tools don’t provide the action-possibilities that they seem to provide.
Seductive mirages.
When a tool’s action-possibilities are easily perceived/used but aren’t real, the tool’s users are encouraged to use the tool in those ways even when they don’t work as intended. Fake but easy-to-perceive/use action-possibilities are seductive mirages.
A benign example of a seductive mirage tool is a convincing-looking fake door. It’s benign because the worst that happens is that the user quickly realizes that the door is fake and can adjust behavior accordingly by looking for another entrance to the building.
Other tools with seductive mirages are less benign and more insidious because they create inaccurate user-beliefs whose inaccuracies are not easily or quickly detectable, and which change user- or system-behavior in damaging ways.
Just a few examples:
Antimicrobials. Powerful antimicrobials (like antibiotics) can make users believe that the problem of microbial disease has been solved with a low-cost and easy-to-use chemical compound. This changes user behavior in ways that increase the prevalence of disease-causing microbes (e.g., using antibiotics in livestock feed as an alternative to careful herd management and feeding) or create bacteria that circumvent antibiotic treatment (e.g., overuse of antibiotics leading to antibiotic-resistant bacteria). This both reduces the efficacy of antimicrobials and increases the scale/intensity of the problems of disease-causing microbes.
Tactical equipment (firearms, personal armor, weapons, etc). Tactical equipment can make users believe that they have special ability/prowess that they actually don’t have. This changes user behavior to make them more likely to initiate or escalate violent crime (e.g., gun violence in the US). This can reduce the user’s safety and the general safety of the broader community — this phenomenon is particularly well-documented in the context of firearm ownership and access. The seductive mirages of tactical equipment are closely connected to mall ninjas and the aesthetics of tacticool.
Risk management tools (cost-benefit analyses, expected value calculations, etc). Risk management tools frequently make their users believe that they have analyzed and mitigated all unknowns. This changes user behavior so that users implement inappropriate management strategies for uncertainty (e.g., when the WHO advised against Covid-19 travel/trade restrictions in February 2020 based on a cost-benefit analysis), or so that they conflate risk and uncertainty in developing strategy (what I’ve previously written about as overloading “risk” and appropriating “uncertainty”; this conflation is sketched in code after these examples).
These are all seductive mirages. Mirages that are seductive are especially problematic because they entice users to believe in them and act as if they were real.
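To make the risk/uncertainty conflation concrete, here is a minimal, hypothetical sketch (the scenario and numbers are invented for illustration, not drawn from the WHO example) of how an expected-value tool bakes in the assumption that every outcome and its probability are already known:

```python
# Hypothetical sketch: an expected-value calculation assumes a complete list
# of outcomes with known probabilities ("risk"). The numbers below are
# invented guesses, but the tool returns an equally tidy figure either way.

def expected_value(outcomes):
    """Sum of probability * payoff over an assumed-complete list of outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

# A policy decision modelled as "risk": the tool demands point probabilities,
# so the analyst supplies guesses, and the guesses come out looking like facts.
travel_restrictions = [
    (0.95, -100),   # guessed probability and cost of trade disruption
    (0.05, +20),    # guessed probability and benefit of containment
]

print(expected_value(travel_restrictions))  # -94.0: a confident-looking number

# Under genuine uncertainty, the outcome list itself is unknown (e.g. a novel
# pathogen's behaviour), so the calculation cannot flag what it has left out.
```

The point is not that the arithmetic is wrong; it is that the tool produces the same tidy number whether the probabilities are measured frequencies or pure guesses, and that is exactly the mirage.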
AI’s seductive mirage.
In thinking of AI systems as tools, the seductive mirage is that AI systems are (or can soon be) autonomous, self-governing systems that can make meaning like humans do. This is a mirage because only humans can make meaning (for now), and it is seductive because so much (money, reputation, etc) rides on believing that AI systems already can (or soon will) do anything humans can.
This seductive mirage is insidious and malignant because it is hidden in the gap between the real possibilities of AI systems, the easily perceived/used possibilities of AI systems, and our inadequate understanding of why meaning-making is important and where meaning-making happens in AI systems.
Here’s the logic:
AI systems produce outputs that resemble what meaning-making humans can produce, and …
Lack of clarity about meaning-making leads us to incorrectly believe that meaning-making outputs are the same as meaning-making activity, so …
AI systems appear to have affordances that are coterminous with human capabilities. Meaning-making is the most important of these affordances, all of which are easily perceived and used but …
AI systems can’t make meaning on their own (yet) — so meaning-making is a fake AI system affordance that is nonetheless easily perceived/used.
The seductive mirage of AI is that AI systems make us believe they are closer to matching the full range of human capabilities than they actually are.5 (Which also plays into the valuation narrative of AI technology companies working on AGI.)
The mirage is all the more seductive because we don’t pay serious enough attention to what meaning-making is and how deeply it is intertwined with the actions we take.
Why is this important? Because meaning-making is required whenever a task requires “discretion” or a “judgment call.” So meaning-making work is done by an underwriter deciding whether to insure a building project using a new construction method, an entrepreneur choosing what product to focus her startup on, an investment committee structuring an investment-for-equity deal with a startup, a panel of judges ruling on the interpretation of law in a crucial case — and by people doing a huge number of other tasks both vital and trivial.
The trap of AI’s seductive mirage.
So, the trap is this: when we don’t recognize that meaning-making is foundational to work and woven into nearly every task, and AI systems present the seductive mirage of producing outputs indistinguishable from human ones, it becomes too easy to give this meaning-making work away to AI systems, because it is bundled up with all the other work they’re much better at doing, like data management, data analysis, and rule-following.
So the result of the seductive mirage of AI meaning-making is that it becomes too tempting to design or use AI systems for work that requires meaning-making while ignoring or omitting the humans who used to do that meaning-making. In other words, we end up outsourcing meaning-making to machines without understanding what meaning-making is, why it is important, and that machines can’t do it at all.
At best this way of thinking about product development and management is suboptimal (e.g., garbage results when prompting name-your-LLM). At worst, it can be disastrous (e.g. automatically flagging potential welfare overpayments and escalating them into debts, causing widespread trauma and suicides among welfare recipients). The phrase, “sleepwalking into a bad situation,” comes to mind. If humans decide that we’re going to surrender subjective decisions about value to non-humans, we should at least do this fully aware that we’re doing it.
To design work that takes best advantage of the respective capabilities of humans and AI systems, we must examine work carefully so that we can unbundle it: separate the meaning-making parts from the other parts that can increasingly be done better by machines. And to do this, we have to recognize that meaning-making — subjective decision-making about value — is essential for understanding the future of work and for understanding how to build good tools for that future.
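As an entirely hypothetical illustration of what this unbundling might look like in practice, here is a minimal sketch built around the underwriting example from earlier. The function names (including summarise_with_llm) are invented placeholders, not a real API; the only point is where the judgment call sits:

```python
# Hypothetical sketch of "unbundling" a task: the machine-friendly parts
# (summarisation, rule-checking) are separated from the meaning-making part
# (the subjective judgment call), which stays with a human.

from dataclasses import dataclass

@dataclass
class UnderwritingCase:
    project_id: str
    documents: list[str]

def summarise_with_llm(documents: list[str]) -> str:
    """Placeholder for whatever model call does the analysis and summarisation."""
    return "Summary of: " + ", ".join(documents)

def check_hard_rules(case: UnderwritingCase) -> list[str]:
    """Deterministic rule-following the machine is good at (stubbed here)."""
    return []  # e.g. missing paperwork, exposure limits exceeded

def decide(case: UnderwritingCase) -> str:
    # Machine work: assemble the analysis and rule checks for the human.
    summary = summarise_with_llm(case.documents)
    violations = check_hard_rules(case)

    # Meaning-making work: the subjective call about whether a novel
    # construction method is worth insuring is made by a person, explicitly.
    print(summary)
    print("Rule violations:", violations or "none")
    return input("Underwriter decision (insure / decline / refer): ").strip()
```

The design choice being illustrated is only that the judgment call is an explicit, separate step assigned to a person, rather than being silently absorbed into whatever the model returns.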
Next, a new philosophy of product management.
Next issue, I’ll wrap up this sequence on meaning-making and AI with a modest proposal for a new philosophy of product management centered on understanding meaning-making and unbundling meaning-making work (leave it to humans) from all the other stuff machines are better at. This is what we need to build good products in a time when machines and tools can’t do the meaning-making work that humans do but are nonetheless magically sophisticated at mimicking human outputs — the present.
Meanwhile, you could check out my conversation with Charley Johnson about meaning-making in AI. We talk about what meaning-making is, why it is important, how it is misunderstood, and what a new philosophy of product management that engages deeply with meaning-making could look like.
Until next time,
VT
Meaning-making is currently a human-only capability. Even if future AI systems have meaning-making capability, it isn’t clear that such capability will be understandable/verifiable by humans (also see footnote 2).
Meaning-making as the distinguishing characteristic of human-ness is not a phenomenon susceptible to falsifiable hypotheses.
The word “agent” here creates the possibility for confusion: It’s nice to have AI systems that act as [agents directed by humans], but AI agents cannot be thought of as [entities having agency to act autonomously].
These action-possibilities are “affordances.” The term has slightly different definitions depending on what discipline uses it. In psychology, affordances are the real action-possibilities an object provides to its user. I use “real” here in the sense that if the user chooses to use the action-possibility, the action can actually be accomplished. A door handle only has the “real” action-possibility of opening a door if it is connected to the latch mechanism that secures the door. If not, the action-possibility is just a mirage. In design research, affordances are the action-possibilities that a user can perceive about an object and actually use. In this slightly different definition — emphasizing the perceptibility and useability of the affordance — a working door handle that doesn’t look like a handle or which cannot be reached by the user wouldn’t be considered to have the action-possibility of opening the door.
Meaning-making is the only thing that humans must do, but it isn’t the only thing that humans currently do better than machines. At the moment, humans are also better at manipulating objects in unpredictable physical environments (e.g. sweeping narrow alleyways full of occasional obstacles like bicycles, delivery vans, and itinerant animals), but it would be nice if machines could be built to do that too.
Thought you might enjoy this post I just came across on how AI (LLMs in this case) is akin to a psychic con: https://softwarecrisis.dev/letters/llmentalist/