4 Comments

You're definitely getting at something important here – the need for AI product design to recognize that AIs are (at least for now) better suited to some tasks than others, and that good design means refactoring the workflow so the AI does the things it's good at and doesn't attempt the rest.

But I'm not sure it's entirely hopeless for AI systems to ever tackle the things you're describing as "meaningmaking"?

First of all, some of the tasks you're describing as subjective are, I think, actually objective in principle; they're just too difficult to evaluate rigorously. For instance:

> These all depend on the trained humans involved in the process (the underwriter, the entrepreneur, the IC, the judges) making inherently subjective decisions about the relative value of a thing. There are no *a priori* objectively correct valuations — whether the thing is the potential liability from an untested construction method, or the potential upside of a startup’s idea. These are judgment calls, and the judgment calls always represent meaningmaking work.

Evaluating liability or startup upside is *very difficult*, but I'm not sure it's *subjective*? If I think a startup is worth $50M and you think it is worth $80M, is that because we have different subjective tastes and values, or because one of us did a better job of predicting future outcomes than the other?

(Weather forecasting is an example of something very difficult and fuzzy that deep-learning models are becoming quite good at.)

And even for truly subjective choices, it seems like LLMs ought to be able to do a pretty good job of modeling human preferences; there is plenty of relevant material in their training data. I don't have a link handy, but there have been studies showing that, when done correctly, it's possible to replace political surveys ("what do you think of this policy proposal?") with LLM queries and get results that closely match human responses.
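
As a rough illustration of what that kind of LLM-as-survey-respondent setup might look like (the `ask_llm` function and the persona details here are hypothetical placeholders, not taken from any particular study):

```python
# Hypothetical sketch: simulating survey responses by prompting an LLM with
# different respondent personas and tallying the answers. ask_llm() stands in
# for whatever model API you happen to use.
from collections import Counter

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM API of your choice."""
    raise NotImplementedError

def simulate_survey(policy_proposal: str, personas: list[str]) -> Counter:
    tally = Counter()
    for persona in personas:
        prompt = (
            f"You are {persona}. What do you think of this policy proposal: "
            f"{policy_proposal}? Answer with one word: support, oppose, or unsure."
        )
        answer = ask_llm(prompt).strip().lower()
        tally[answer if answer in {"support", "oppose", "unsure"} else "unsure"] += 1
    return tally

# simulate_survey("a $50/ton carbon tax",
#                 ["a 35-year-old teacher in Ohio", "a retired farmer in Iowa"])
```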

So perhaps, when models make poor choices, it's actually because they're lacking context that is important to the specific situation? If so, then part of the product design challenge will be figuring out how to get that context and present it to the LLMs at the right times.
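
To make that concrete, here's a toy sketch of assembling situation-specific context before asking the model for a judgment call, borrowing the underwriting example from the quoted passage; the field names and values are invented for illustration:

```python
# Hypothetical sketch: gather the situation-specific context first, then put
# it in front of the model instead of asking the question "cold".
def build_prompt(question: str, context: dict[str, str]) -> str:
    context_block = "\n".join(f"- {key}: {value}" for key, value in context.items())
    return (
        "Use the following situation-specific context when answering.\n"
        f"{context_block}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt(
    question="Is this construction method an acceptable liability risk?",
    context={
        "project": "12-storey mass-timber office tower",
        "prior incidents": "two moisture-related failures in comparable builds",
        "insurer appetite": "conservative on untested methods this quarter",
    },
)
# `prompt` would then be sent to whatever model the product uses.
```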

Aug 24 · Liked by Vaughn Tan

Wonderful series - I’m really finding this to be a helpful lens for thinking about the challenge of getting real value from the advances in AI. However, the “four types of meaningmaking” feels a bit off to me. I was expecting you to anchor the types of meaningmaking to the different types of not-knowing, with something like the following:

- Actions: Which actions are worth doing/acceptable in a given situation?

- Outcomes: Which outcomes (and the interconnected outcomes) are desirable/tolerable for the stakeholders of interest?

- Causes: Who is responsible for the outcomes resulting from the environment and a given set of actions?

- Values: What are the acceptable trade-offs between different proxies of value?

This framing then gives AI (and tools in general) a clear goal - reduce not-knowing in one or more of these areas so that it is easier for humans to answer the questions above in the course of their lives.

author

i often end up in unexpected places, usually unintentionally 🥲

to your point though: this series on meaningmaking actually focuses only on the 4th type of not-knowing (not-knowing about relative values). the four types of meaningmaking are conceptually distinct from each other, but all are connected by the absence of objective "truth" about the relative value of things.

i do think the questions you proposed are useful ways to see how not-knowing about value inflects the other three types of not-knowing (about actions, about outcomes, and about causation). i wrote a bit last year about how the different types of not-knowing propagate (https://vaughntan.org/the-fountain) and are affected by time (https://vaughntan.org/the-fog-of-time). i'm not sure that machines in general (and AI systems particularly) can deal with true uncertainty; for sure they can't do it yet. so it seems hard to make the goal for machines to reduce not-knowing directly.

my 2c is that a better goal for AI (and tools generally) is not to try to reduce not-knowing but rather to pick up all the uninteresting programmatic work that a) has to be done to help humans reduce not-knowing, and b) must still be done after the not-knowing is resolved. in a prev issue i tried to outline what we should be using AI for: https://uncertaintymindset.substack.com/p/where-ai-wins

that conceptual frame is pretty alien though — would love your thoughts on where it does and doesn't resonate.


Intriguing. I think we dramatically underestimate the number of "meaningmaking" tasks that even relatively low-skill jobs demand. This is why automation is hard even though it's easy to demo - it's easy for AI to imitate meaningmaking but impossible for it to explain it.
