Discussion about this post

Steve Newman:

You're definitely getting at something important here: the need for AI product design to recognize that AIs are (at least for now) better suited to some tasks than others, and that good design calls for refactoring the workflow so the AI does the things it's good at and isn't asked to do the rest.

But I don't know that it's entirely hopeless for AI systems to ever tackle the things you're describing as "meaningmaking".

First of all, some of the tasks you're calling subjective are, I think, actually objective in principle; they're just too difficult to evaluate rigorously. For instance:

> These all depend on the trained humans involved in the process (the underwriter, the entrepreneur, the IC, the judges) making inherently subjective decisions about the relative value of a thing. There are no *a priori* objectively correct valuations — whether the thing is the potential liability from an untested construction method, or the potential upside of a startup’s idea. These are judgment calls, and the judgment calls always represent meaningmaking work.

Evaluating liability or startup upside is *very difficult* but I'm not sure it's *subjective*? If I think a startup is worth $50M and you think it is worth $80M, is that because we have different subjective tastes and values, or because one of us did a better job of predicting future outcomes than the other?

(Weather forecasting is an example of something very difficult and fuzzy that deep-learning models are becoming quite good at.)

And even for truly subjective choices, it seems like LLMs ought to be able to do a pretty good job of modeling human preferences; there is plenty of relevant material in their training data. I don't have a link handy, but there have been studies showing that when done correctly, it's possible to replace political surveys ("what do you think of this policy proposal") with LLM queries and get results that closely match human responses.
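Roughly, those setups condition the model on a respondent persona, ask the survey question, and aggregate the answers. A minimal sketch in Python, just to make the idea concrete (the personas, question, and model name are placeholder assumptions, not taken from any particular study):

```python
# Sketch: simulate survey respondents by conditioning an LLM on personas.
# Assumes the `openai` Python package (v1+); model name and personas are
# illustrative placeholders.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONAS = [
    "a 34-year-old suburban parent who votes in most elections",
    "a 68-year-old retired machinist from a small town",
    "a 25-year-old urban renter working in tech",
]

QUESTION = (
    "What do you think of this policy proposal: a tax credit for "
    "first-time homebuyers? Answer with one word: support, oppose, or unsure."
)

def simulated_response(persona: str) -> str:
    """Ask the model to answer the survey question as the given persona."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"You are {persona}. Answer as that person would."},
            {"role": "user", "content": QUESTION},
        ],
    )
    return reply.choices[0].message.content.strip().lower()

# Tally the simulated responses; the studies compare tallies like this
# (over a representative persona distribution) against human polling.
print(Counter(simulated_response(p) for p in PERSONAS))
```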

So perhaps, when models make poor choices, it's actually because they're lacking context that is important to the specific situation? If so, then part of the product design challenge will be figuring out how to get that context and present it to the LLMs at the right times.

Roger:

Wonderful series - I'm really finding this to be a helpful lens for thinking about the challenge of getting real value from the advances in AI. However, the “four types of meaningmaking” feels a bit off to me. I was expecting you to anchor the types of meaningmaking to the different types of not-knowing, with something like the following:

- Actions: Which actions are worth doing/acceptable in a given situation?

- Outcomes: Which outcomes (and the interconnected outcomes) are desirable/tolerable for the stakeholders of interest?

- Causes: Who is responsible for the outcomes resulting from the environment and a given set of actions?

- Values: What are the acceptable trade-offs between different proxies of value?

This framing then gives AI (and tools in general) a clear goal: reduce not-knowing in one or more of these areas, making it easier for humans to answer the questions above in the course of their lives.
