For Noë, the answer is that thinking is about “resistance,” the internally intentioned acts that prevent us from being completely dominated by external conditions — and intention is uniquely human. He concludes that “Our values are always problematic. We are not merely word-generators. We are makers of meaning. We can’t help doing this; no computer can do this.” (The emphasis is mine.)
I agree. For the last few years I’ve been puzzling out what it means to make meaning and why it separates humans from machines. For me, meaningmaking is what we do in the presence of uncertainty about what things are worth — more precisely, not-knowing about relative value.
There are in fact at least four conceptually distinct kinds of meaningmaking, each with different practical implications for how we act.
I grow ever more convinced that meaningmaking is a crucial but nearly entirely overlooked lens when thinking about AI policy, how AI research should be oriented, what kinds of products to build, how the present and the future of work should be designed, and what it means to be human. I’m glad that a philosopher with Noë’s reach has injected this idea into the noisy discourse on AI and I really hope it sticks.
So I’ve put together the 6 essays on meaningmaking I’ve written over the last 2 years — if you find them insightful or merely thought-provoking, please share them widely. There are more to come, but these are some of the working components of a broader exploration of not-knowings: understanding the different types of true uncertainty we face — none of which are what we colloquially call “risk” — and figuring out how we can respond to them generatively. I thought this would be a book, but maybe another format (or multiple formats) makes more sense.
So … what should I be writing about and in what forms should it find expression? I’d love your comments and suggestions.
tl;dr: AI investment is missing a huge opportunity in the form of industry-specific AI apps designed to integrate with legacy business processes in medium-sized firms in traditional industries. This investment should go into industry-specific incubators that bridge the capability and capital gaps preventing startups from building real-world AI solutions with immediate impact.
So much time and money is flowing into AI today, but it’s nearly all focused on building foundational technology: massive models or compute infrastructure. The promise driving this investment is that AI will transform the real world: AI will make healthcare (or logistics, or law enforcement, or copywriting, or whatever) cheaper or better (or faster, or in more languages, or available to more people, or whatever).
This transformation will take much more than just good models and huge compute. These will have to be translated into actual applications that real-world people and organisations can use. Applications that fit into existing business processes in existing industries. Without an ecosystem of small tech companies doing this translation work, AI’s promise of transformation will go unfulfilled.
So this is the problem: Most startups are trying to become the next unicorn, chasing enormous markets that promise outsized returns. Too few startups are focused on building translational AI applications that make a tangible, immediate impact on the existing processes of medium-sized firms. This is a major market opportunity that almost no one is systematically exploiting.
I’ll unpack some of the reasoning below.
Why AI apps for traditional industries?
Because traditional industries are an enormous market in any country. Traditional industries — like construction, logistics, and insurance — form the backbone of every national economy. They don’t have the razzmatazz of digital and other new industries, but they nonetheless produce the majority (always more than 70%) of GDP. These industries predate widespread modern digitalisation, so they usually operate using legacy processes that could be transformed by AI applications.
The kinds of applications I have in mind are not general services that multiple industries can use, like project management or team communication tools (e.g., Asana or Slack). Instead, I’m thinking of applications that work only in one particular industry, addressing the issues and processes specific to that industry.
An example from the construction industry: An application for construction site operations that processes drone, GPS, and mobile device data to automate building progress updates and material input/output management. Medium-sized construction firms using this application improve project management efficiency by up to 20%. These savings come from things like increasing the accuracy of just-in-time materials delivery to construction sites. Cumulatively, this reduces costs from project delays and increases overall project profitability. (BTW, this is an application built by a company called Unearth, which was acquired by Procore in 2023.)
Why medium-sized firms as clients?
Medium-sized firms in traditional industries are the sweet spot for startups building industry-specific AI applications. They’re large enough to see significant benefits from business process improvements using new technology, and large enough to pay for lightly customised products.
Why not target small or big firms? Big firms don’t need these applications; they’re large enough to staff in-house tools teams (or hire consultants) to build applications that are heavily customised to their own firm-specific processes. At the other end of the spectrum, small firms in traditional industries can’t afford lightly customised applications; they often lack the resources to buy them and the internal capacity to use them effectively.
The capability and capital gaps.
Medium-sized businesses in traditional industries are high-value targets for startups, but few startups are going after them. Why? Two reasons: a capability gap and a capital gap.
The capability gap.
It’s hard to build AI products that integrate into legacy business processes. This takes a different kind of product management approach, one that focuses on building products for small actually-existing markets, not large potential ones.
The problem is that most startups use a product management approach that is implicitly optimized for venture capital (VC) funding. The VC approach designs products to have enormous total addressable markets (TAMs). Enormous TAMs necessitate general utility with speculative business models and execution plans. Those lead to high failure rates in pursuit of unicorn status.
This VC approach to product management doesn’t work for startups trying to build AI applications for medium-sized firms in traditional industries.
The TAM for such an AI application is necessarily much smaller because it’s built to address the particular needs of a specific industry. This makes it valuable for firms in the industry, but less valuable and harder to sell to firms in other industries. (Though the TAM is small, finding product-market fit is easier, getting to positive cashflow is faster, and the small TAM naturally deters competition.)
Addressing industry-specific needs means deeply understanding and integrating with existing business processes. Doing this requires a different approach to product management, centred on an awareness of what I call meaningmaking.
Unfortunately, a meaningmaking-based approach to product management is not yet systematised or commonplace — this is the capability gap which needs to be filled.
The capital gap.
Beyond the capability gap, startups building AI applications for medium-sized firms often fall between the cracks when it comes to funding. They’re a poor fit for VC funding because their TAMs are too small (see above). They’re also a poor fit for private equity, because they’re not yet cashflow positive.
These startups would become cashflow positive more quickly if they had access not just to capital, but to industry expertise and connections: industry insiders who know which business processes most need AI support, and the decision-makers who control procurement budgets.
The capital gap I see is for finance, industry-specific expertise, and access to procurement.
Filling both gaps with industry-specific incubators.
The answer to both the capability and capital gaps is an incubator focused on building AI applications for medium-sized firms in traditional industries. Such an incubator would do two things:
Develop product management capabilities: Train startups in a product management approach optimized for small but high-value markets. This means focusing on building products that integrate seamlessly into existing business processes, prioritizing cashflow-positive business models, and understanding the specific needs of traditional industries (like construction, logistics, insurance, and brick-and-mortar retail).
Provide structured access to industry leadership: Give startups access to industry operations leaders who can help them identify high-potential processes for automation and efficiency improvements, as well as purchasing decision-makers who can shorten sales cycles.
The tech Mittelstand: A new vision for AI startups
The rewards are huge if we can successfully build this ecosystem. What will result is a technology Mittelstand: The Mittelstand (“the middle estate”) is the large population of SMEs in Germany, known for their innovativeness, long-term stability, prosperity, and contribution to Germany’s industrial success. Instead of only trying to build tech unicorns, we should be trying to create conditions that enable a large number of startups to emerge and mature quickly into small-headcount, stable-cashflow technology companies, providing a highly diverse range of industry-specific AI applications to medium-sized firms in traditional industries.
Medium-sized firms are under-optimized, not because the technology doesn’t exist, but because the right kinds of startups don’t exist in sufficient numbers. By addressing the capability and capital gaps, we can unlock a large, underserved market — one that could increase efficiency and profitability for thousands of businesses, ultimately strengthening the broader economy.
Why now?
AI has the potential to transform how businesses operate, but the current focus is skewed toward the development of underlying technology. AI’s missing middle is the application of that technology in ways that drive immediate value for real businesses. Medium-sized firms — too large for off-the-shelf solutions, too small for custom-built AI from large tech providers — are the ideal target. And the tech startups that will serve these mid-size firms will necessarily be small and scrappy, and there will have to be many of them.
So we need to rethink both how AI products are developed and how they are funded. By focusing on specificity, cashflow, and real-world application, we can create a profusion of AI companies that are individually small but collectively crucial to the future of work and the economy.
tl;dr: AI applications fail when they’re not designed to leave meaningmaking — subjective decision-making about relative value — to humans. To build successful AI applications to replace existing business processes, product management must understand what meaningmaking is, unbundle meaningmaking work from non-meaningmaking work, and reflect this unbundling in how products are conceptualized.
Application failure.
I went to Chicago in May to run a workshop on improving the speed and quality of new product development. A few weeks before I got there, I pinged some of the natives to see if they would be free to get together. One of my friends there was testing a new AI application: A scheduler tool that would replace the human work needed to arrange a mutually convenient time and place to meet. (AI application = a deployment of AI technology to solve a problem, which could be an app in the sense of a mobile or desktop app, but doesn’t have to be.)
The AI scheduler tool worked like this: We added the scheduler tool (as an email address) to our email conversation. The tool responded and queried both of us separately about our preferences for times/dates and connected to our calendars so it could schedule the meeting directly.
The AI scheduler failed spectacularly. We both sent a few rounds of emails to the tool about our slot preferences and connected our calendars. As near as I can tell, the scheduler tool seriously misinterpreted the meaning and importance of several of my calendar entries. It eventually scheduled a meeting for a day when neither of us would be in Chicago.
The seductive sell here is that the LLMs and other AI systems driving the tool purport to allow it to understand our email conversation, the meeting context, and our existing calendar entries so it can converge on an optimal slot — like a good human assistant would be able to do. The sell is seductive because we really want AI applications that can do everything a human does, and AI seems able to do this because it can produce outputs that look like human outputs. But AI applications actually cannot do the meaningmaking work that humans do all the time.
The failure mode for this AI scheduler tool highlights the importance, the difficulty, and the invisibility of meaningmaking in the work that AI applications are supposed to replace. It also highlights the universal absence of consideration for meaningmaking in the design and conceptualization of AI applications.
Right now, there’s a lot of investment in building AI systems that do better sensemaking, which is interpreting the meaning of text (or images or sounds or whatever). The type of sensemaking that’s getting attention is the type that lets an AI system identify a thing. This is the type of sensemaking that allows an image-generating AI system to produce an image that a human would recognize as being “jars” or “a painting in the style of Morandi of assorted earthenware jars on a tabletop.”
Meaningmaking is a specific type of sensemaking that is not about identification. Meaningmaking is the act of making subjective decisions about the relative value of things. Thing-identification is important, but meaningmaking comes after and goes well beyond identification.
Meaningmaking can be trivial and quotidian, as when I decide that I prefer grapefruit to peaches when doing the household fruit order. Meaningmaking can also be extremely material, as when a CEO decides to commit her company’s public reputation and a lot of money and time to developing a controversial and highly uncertain new product category.
The main characteristic of meaningmaking is that it involves decisions that have no absolute or objective verifiability. In consequence, some humans do meaningmaking more mindfully and/or better than others. To be an artist, a judge, a poet, an activist, or a technology innovator depends on meaningmaking — to be successful at any of those things requires mindful and sophisticated meaningmaking. But choosing to take any action requires meaningmaking, so all humans do meaningmaking work all the time.
Businesses rely on meaningmaking work all the time.
In an earlier essay, I gave some examples of humans doing meaningmaking work as part of business-critical processes: “An underwriter deciding whether to insure a building project using a new construction method, an entrepreneur choosing what product to focus her startup on, an investment committee structuring an investment-for-equity deal with a startup, a panel of judges ruling on the interpretation of law in a crucial case.”
These all depend on the trained humans involved in the process (the underwriter, the entrepreneur, the IC, the judges) making inherently subjective decisions about the relative value of a thing. There are no a priori objectively correct valuations — whether the thing is the potential liability from an untested construction method, or the potential upside of a startup’s idea. These are judgment calls, and the judgment calls always represent meaningmaking work.
But most everyday business processes also rely on meaningmaking work. These processes are not so obviously business-critical but are nonetheless vitally important. Some examples of humans doing seemingly trivial meaningmaking work include: A customer service representative using their discretion to give an unhappy customer a full refund even though the product is working as expected, a production line worker deciding that an assembled part should be rebuilt even though it technically passes the inspection criteria, or an executive assistant deciding to reschedule a long-standing internal meeting so the CEO can take a last-minute call with a potential investor.
This is vital meaningmaking work that businesses do “without even thinking about it.” Humans do a lot of meaningmaking work as part of existing business processes, and this meaningmaking work is important but currently mostly invisible.
To build AI applications that can replace existing business processes, we need to understand clearly where and what kind of meaningmaking happens in those processes, then build that understanding into the products we make to replace those processes.
Product that comprehends meaningmaking.
The AI scheduler tool we tried to use in Chicago failed because meaningmaking wasn’t an explicit part of thinking about what the product is and how it is meant to work. The product was built to be an AI system that could replace everything a human assistant does when scheduling meetings.
The AI scheduler would have looked very different and probably worked much better if the product thinking began instead from investigating what meaningmaking work humans do to schedule a meeting (i.e., decide on the relative value of different slots and existing obligations in a calendar), and then imagining what an AI system can do better than humans to make it easier/faster for humans to do the meaningmaking work to schedule a meeting. (Here’s how to think about what AI systems can do better than humans.)
The latter approach throws up possibilities for features and UX that trying to build a simplistic human-replacer does not. For instance, a feature I would love in a scheduler application is automated tagging with human validation of calendar entries that conform to user-defined quality criteria. Examples of these criteria might be: “This meeting was the right length to deal with the agenda,” or “There was enough time between this meeting and the previous meeting for comfortable preparation/travel.”
The application leaves the meaningmaking work of deciding on the “right”-ness of meeting duration and the “enough”-ness of pre-meeting duration to the human user, because both are subjective measures which only a human can decide. This feature would enable the scheduler application to (for example) gradually become more adept at scanning emails for agenda information and scanning calendars, so that it can propose to the human user a range of slots of the “right” duration which also provide “enough” pre-meeting time for travel/prep. This reimagined scheduler application does what humans are relatively bad at doing (rule-following and analyzing lots of data) but explicitly leaves all the meaningmaking work to the human user.
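To make this concrete, here’s a minimal sketch of how such a feature could keep the meaningmaking with the user. It’s in Python, with entirely hypothetical names and criteria (a sketch of the idea, not a description of any real scheduler product): the human authors the subjective criteria, and the machine only filters and proposes.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Slot:
    """A candidate meeting slot found by scanning calendars and emails."""
    start: datetime
    duration: timedelta
    prep_gap: timedelta  # free time between this slot and the previous obligation
    matched_criteria: list[str] = field(default_factory=list)

# Meaningmaking stays with the human: the user decides what a "right" duration
# and "enough" prep time mean, and encodes those subjective judgments here.
user_criteria = {
    "right_duration": lambda s, agenda: s.duration >= timedelta(minutes=15 * len(agenda)),
    "enough_prep": lambda s, agenda: s.prep_gap >= timedelta(minutes=30),
}

def propose_slots(candidates, agenda, criteria):
    """Non-meaningmaking work machines are good at: applying the user's
    criteria as rules over lots of calendar data, then proposing (never booking)."""
    proposals = []
    for slot in candidates:
        slot.matched_criteria = [name for name, ok in criteria.items() if ok(slot, agenda)]
        if slot.matched_criteria:
            proposals.append(slot)
    return proposals  # the human validates and picks; the tool never decides

# Example: two candidate slots for a two-item agenda.
agenda = ["review numbers", "plan Chicago workshop"]
candidates = [
    Slot(datetime(2025, 5, 6, 9, 0), timedelta(minutes=45), timedelta(minutes=60)),
    Slot(datetime(2025, 5, 6, 13, 0), timedelta(minutes=20), timedelta(minutes=5)),
]
for s in propose_slots(candidates, agenda, user_criteria):
    print(s.start, s.matched_criteria)
```

The “right”-ness and “enough”-ness live in user_criteria, authored and revisable by the human; the tool does only the rule-following and data-scanning that machines are relatively good at.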
This latter approach to building products is product management that comprehends meaningmaking.
Product management that comprehends meaningmaking.
Product management is systematic, mindful, institutionalized thinking about what products to build and how to build them. I got my exposure to product management in Google’s Product Team, but there are many flavors of product management. What connects all these flavors is that each of them is a framework for thinking about who a product is being built for, what features the product should have, how the product will get built, how it will be resourced, what the business model will look like, etc. But none of these flavors of product management comprehends meaningmaking — they are not sensitized to meaningmaking, nor do they include meaningmaking in how they think about what products to build and how to build them.
This is a problem and an opportunity for product management.
It’s a problem because products that don’t comprehend meaningmaking won’t actually work if meaningmaking is an important part of the work they do. The AI scheduler tool is an example of such a failure mode. This is also why the promised potential of AI systems for making business processes more efficient and effective remains unfulfilled. (Worse, it is why AI applications can make business processes less effective, e.g., when applied to hiring and recruitment.)
But it’s more interesting and productive to think about how meaningmaking is a product management opportunity.
Opportunity in the great unbundling.
Meaningmaking is an opportunity for product management because this approach leads to new ways of thinking about what products can be and how humans interact with them — and especially because no one seems to be doing it systematically yet.
The scope of opportunity is especially enormous when thinking about products that are AI applications intended to replace existing business processes. Nearly every process in every existing business which involves humans bundles meaningmaking work with non-meaningmaking work. And, as Sarah Tavel points out, the way to be successful in selling products to existing businesses is to sell work, not software — by which she means that an AI application is more likely to be sellable if it actually does work that businesses need done. This requires understanding what work is actually being done in the business process the AI application is designed to replace.
Until just a few years ago, we had no good reason to spend any effort understanding work through the lens of meaningmaking. Now, things are different. AI is a plausible candidate for a general-purpose set of technologies that seem able to do everything humans can do, but that actually can’t do the meaningmaking work humans do all the time, often without even realizing they are doing it.
Suddenly, separating what humans must do from what machines can do better is vital. If humans must do meaningmaking work and machines cannot do meaningmaking work at all, then the key insight is that work must be unbundled into meaningmaking work and non-meaningmaking work. Those who don’t recognize that this unbundling is essential for building good AI applications have been fooled by what I call AI’s seductive mirage.
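As a toy illustration of the unbundling, here’s a minimal Python sketch (hypothetical task names and labels, borrowing the underwriting example from earlier): every task in a process is explicitly classified, and the subjective value calls are routed to humans rather than machines.

```python
from dataclasses import dataclass
from enum import Enum, auto

class WorkKind(Enum):
    MEANINGMAKING = auto()      # subjective decisions about relative value
    NON_MEANINGMAKING = auto()  # rule-following, data management, analysis

@dataclass(frozen=True)
class Task:
    name: str
    kind: WorkKind

def route(task: Task) -> str:
    """The unbundling in one rule: machines never get the value judgments."""
    return "human" if task.kind is WorkKind.MEANINGMAKING else "ai_application"

# An underwriting process, unbundled (illustrative labels only):
process = [
    Task("extract fields from the application documents", WorkKind.NON_MEANINGMAKING),
    Task("check the submission against policy rules", WorkKind.NON_MEANINGMAKING),
    Task("decide whether the untested construction method is a risk worth insuring",
         WorkKind.MEANINGMAKING),
]

for task in process:
    print(f"{task.name} -> {route(task)}")
```

The routing rule is trivial; the real work is the careful examination of an existing business process needed to label each task correctly in the first place.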
Unbundling meaningmaking work from non-meaningmaking work is a fundamentally new approach to thinking about what work is, and what work could be. Value creation will come from opening up new avenues for thinking about what AI applications could look like, what kinds of functionality could be useful, how users could interact with AI applications, and what kinds of business models could make sense.
Paying attention to meaningmaking when building AI applications is also a better way to think about alignment in AI deployments. Value is an inherently subjective thing, so the best way to deploy AI in alignment with human values is to build AI applications that are intentionally designed to let their human users make all the subjective value decisions (i.e., do all the meaningmaking work), while the applications support those users by doing the non-meaningmaking work that machines are better at.
tl;dr: AI systems appear, seductively, to be autonomous meaning-making entities, but they are actually tools which can’t make meaning on their own (for now). Understanding this seductive mirage reveals an important but hard-to-see trap in thinking about AI strategy.
Meaning-making is the crux for thinking about AI.
I’ve been thinking and writing about meaning-making, which is the act of making subjective decisions about the value of things.
Meaning-making is the defining characteristic of human-ness.1 If meaning-making is a uniquely human capability, it follows that meaning-making is key to understanding how we should build effective (= aligned and safe) strategies for investing in, regulating, and working with AI.
This issue is about an important but hard-to-see obstacle to doing effective AI strategy: the seductive mirage of AI systems, which is that they seem to be good at doing meaning-making when they actually can’t do it at all.
AI systems can’t be truly autonomous.
Making subjective decisions about value is how we figure out what is worth doing or not. Meaning-making is how we choose what to do and become autonomous and self-governing. Entities can only be autonomous and self-governing if they can make subjective choices of action on their own. An autonomous entity must have meaning-making capacity. Anything else can only aspire to be a tool to be used by a meaning-making entity.
Though AI systems are capable of producing output that looks like what meaning-making humans can produce (Turing’s imitation game), the appearance of [making outputs that are hard to distinguish from those resulting from subjective choices] is not the same thing as [actually making subjective choices that result in outputs].2
Here’s an instructive illustration. Below are two images.
The one on the right is the output of a process of meaning-making which is now thought to have been one of the wellsprings of an entire movement in modern art — “Black Square” (1913/1915) by Kazimir Malevich. The one on the left is a black square that does not have the same meaning-making behind it as the one on the right — but you would not be able to tell them apart just by looking at them side by side.
If this seems like an Implausibly Big Claim, have a look at the table below, especially the parts highlighted in yellow. (The table is from one of my earlier essays on AI’s meaning-making problem.)
So, for now, we must think of AI systems as tools to be used by humans, not autonomous entities with the ability to decide on their own what actions are worthwhile. Another way to put it: We can have AI agents that act on directions from humans, but we should not have agentic AI systems without human involvement in deciding what actions to take.3
Tools and possibilities for action.
Tools work by giving us possibilities for action. A door handle is a tool which makes it possible for the user to take the action of opening a door. Useable and easily perceived action-possibilities become the ones which users most frequently try to use the tool for.4 But some tools don’t provide the action-possibilities that they seem to provide.
Seductive mirages.
When a tool’s action-possibilities are easily perceived/used but aren’t real, the tool’s users are encouraged to use the tool in those ways even when they don’t work as intended. Fake but easy-to-perceive/use action-possibilities are seductive mirages.
A benign example of a seductive mirage tool is a convincing-looking fake door. It’s benign because the worst that happens is that the user quickly realizes that the door is fake and can adjust behavior accordingly by looking for another entrance to the building.
Other tools with seductive mirages are less benign and more insidious because they create inaccurate user-beliefs whose inaccuracies are not easily or quickly detectable, and which change user- or system-behavior in damaging ways.
Just a few examples:
Antimicrobials. Powerful antimicrobials (like antibiotics) can make users believe that the problem of microbial disease has been solved with a low-cost and easy-to-use chemical compound. This changes user behavior in ways that increase the prevalence of disease-causing microbes (e.g., using antibiotics in livestock feed as an alternative to careful herd management and feeding) or create bacteria that circumvent antibiotic treatment (e.g., overuse of antibiotics leading to antibiotic-resistant bacteria). This both reduces the efficacy of antimicrobials and increases the scale/intensity of the problems of disease-causing microbes.
Tactical equipment (firearms, personal armor, weapons, etc). Tactical equipment can make users believe that they have special ability/prowess that they actually don’t have. This changes user behavior to make them more likely to initiate or escalate violent crime (e.g., gun violence in the US). This can reduce the user’s safety and the general safety of the broader community — this phenomenon is particularly well-documented in the context of firearm ownership and access. The seductive mirages of tactical equipment are closely connected to mall ninjas and the aesthetics of tacticool.
These are all seductive mirages. Mirages that are seductive are especially problematic because they entice users to believe and act as if they are real.
AI’s seductive mirage.
In thinking of AI systems as tools, the seductive mirage is that AI systems are (or can soon be) autonomous, self-governing systems that can make meaning like humans do. This is a mirage because only humans can make meaning (for now), and it is seductive because so much (money, reputation, etc) rides on believing that AI systems already can (or soon will) do anything humans can.
This seductive mirage is insidious and malignant because it is hidden in the gap between the real possibilities of AI systems, the easily perceived/used possibilities of AI systems, and our inadequate understanding of why meaning-making is important and where meaning-making happens in AI systems.
Here’s the logic:
AI systems produce outputs that resemble what meaning-making humans can produce, and …
Lack of clarity about meaning-making leads us to incorrectly believe that meaning-making outputs are the same as meaning-making activity, so …
AI systems appear to have affordances that are coterminous with human capabilities. Meaning-making is the most important of these affordances, all of which are easily perceived and used but …
AI systems can’t make meaning on their own (yet) — so meaning-making is a fake AI system affordance that is nonetheless easily perceived/used.
The seductive mirage of AI is that AI systems make us believe that they are closer to matching the full range of human capabilities than they actually are.5 (Which also plays into the valuation narrative of AI technology companies working on AGI.)
The mirage is more seductive because we don’t pay serious enough attention to what meaning-making is and how deeply intertwined it is with the actions we take.
Why is this important? Because meaning-making is required whenever a task requires “discretion” or a “judgment call.” So meaning-making work is done by an underwriter deciding whether to insure a building project using a new construction method, an entrepreneur choosing what product to focus her startup on, an investment committee structuring an investment-for-equity deal with a startup, a panel of judges ruling on the interpretation of law in a crucial case — and a huge number of other tasks both vital and trivial.
The trap of AI’s seductive mirage.
So, the trap is this: When we don’t recognize that meaning-making is foundational to work and woven into nearly every task, and AI systems present the seductive mirage of producing outputs that are indistinguishable from human outputs, it becomes too easy to give this meaning-making work away to AI systems. This happens because the meaning-making work is bundled up with all the other work that AI systems are much better at doing, like data management, data analysis, and rule-following.
So the result of the seductive mirage of AI meaning-making is that it becomes too tempting to design or use AI systems for work which requires meaning-making and ignore/omit the humans who used to do the meaning-making. In other words, outsourcing meaning-making to machines without understanding what meaning-making is, why it is important, and that machines can’t do it at all.
To design work that takes best advantage of the respective capabilities of humans and AI systems, we must examine work carefully so that we can unbundle it: separate the meaning-making parts from the other parts that can increasingly be done better by machines. And to do this, we have to recognize that meaning-making — subjective decisionmaking about value — is essential for understanding the future of work and for understanding how to build good tools for that future.
Next, a new philosophy of product management
Next issue, I’ll wrap up this sequence on meaning-making and AI with a modest proposal for a new philosophy of product management centered on understanding meaning-making and unbundling meaning-making work (leave it to humans) from all the other stuff machines are better at. This is what we need to build good products in a time when machines and tools can’t do the meaning-making work that humans do but are nonetheless magically sophisticated at mimicking human outputs — the present.
Meanwhile, you could check out my conversation with Charley Johnson about meaning-making in AI. We talk about what meaning-making is, why it is important, how it is misunderstood, and what a new philosophy of product management that engages deeply with meaning-making could look like.
Meaning-making is currently a human-only capability. Even if future AI systems have meaning-making capability, it isn’t clear that such capability will be understandable/verifiable by humans (also see footnote 2).
The word “agent” here creates the possibility for confusion: It’s nice to have AI systems that act as [agents directed by humans], but AI agents cannot be thought of as [entities having agency to act autonomously].
These action-possibilities are “affordances.” The term has slightly different definitions depending on what discipline uses it. In psychology, affordances are the real action-possibilities an object provides to its user. I use “real” here in the sense that if the user chooses to use the action-possibility, the action can actually be accomplished. A door handle only has the “real” action-possibility of opening a door if it is connected to the latch mechanism that secures the door. If not, the action-possibility is just a mirage. In design research, affordances are the action-possibilities that a user can perceive about an object and actually use. In this slightly different definition — emphasizing the perceptibility and useability of the affordance — a working door handle that doesn’t look like a handle or which cannot be reached by the user wouldn’t be considered to have the action-possibility of opening the door.
Meaning-making is the only thing that humans must do, but it isn’t the only thing that humans currently do better than machines. At the moment, humans are also better at manipulating objects in unpredictable physical environments (e.g. sweeping narrow alleyways full of occasional obstacles like bicycles, delivery vans, and itinerant animals), but it would be nice if machines could be built to do that too.
Now, here’s the context: I’ve been thinking a lot about tradeoffs in a practical-strategic way.
Many months ago, I reported that I’d accidentally fallen into a DIY rabbithole. I have still not emerged from it. For some weeks, in pockets of time snatched from doing Other Stuff, I’ve been gradually designing and fabricating hardware and components for 250ft² of built-in shelving, and fitting the shelves into two alcoves.
This process has been what Donald Schön calls a reflective conversation with the situation: “In a good process of design, [the] conversation with the situation is reflective. In answer to the situation’s back-talk, the designer reflects-in-action on the construction of the problem, the strategies of action, or the model of the phenomena, which have been implicit in his moves.”1
One way to fit built-ins into an irregular space is to construct a perfectly regular, rectilinear shelving unit and install it with panels that conceal where the dimensions of the space differ from the dimensions of the unit. These panels hide (sort of) the fact that the unit “works” because it is designed to be insulated from the irregularities of the space it inhabits. This is both a way to build shelves and a metaphor for a way of dealing with a messy reality.
But this is not the only way.
For #reasons2, I prefer constructions that engage directly with their irregular environments. These types of constructions seldom achieve a high level of finish. But their particular virtue is that they are more likely to respond deeply to the realities of the spaces they occupy. (The latter is a characteristic of things worth keeping, and a common feature of structures made by “non-pedigreed” architects.)
In other words, there’s a tradeoff that usually needs to be made: Finish quality on the one hand, and responsiveness to reality on the other. Many people would choose finish quality over responsiveness to reality … and some would then feel subliminally discomfited for years by the wrongness of the perfectly rectilinear shelving unit in the slightly canted and off-square alcove by the entranceway.
For me, the subjectively desirable configuration of tradeoffs is to try and build shelving that is responsive to reality even at the expense of finish quality. This is both a way to build shelves and a metaphor for a way of dealing with a messy reality.
Preferring an unconventional configuration of tradeoffs can produce better — more enjoyable, more responsive, more novel — outcomes. And articulating acceptable and unacceptable configurations of tradeoffs can lead to more freedom to act.