Hello friends,
I’m just back from a week in Seoul. This issue is about how the trip was all bits and pieces for good reasons — and how that fragmentation unexpectedly brought some of my thinking about AI problems into sharper focus. But first:
Tomorrow (16 Nov): Join me for a conversation about why having the right mindset is crucial for relating well to not-knowing. 8-10PM CET and open to all. More information and tickets.
My justification for going to Seoul was to give a talk about how uncertainty creates opportunities for innovation. I focused on one of my current preoccupations: structuring more effective innovation work through design principles that encourage progressive experimentation, limit pointless experiments, incentivise emergence, and encourage effective scaling. I’ll write it up soon as the probably-final part of a series about why/how to use uncertainty as a tool for coming up with new stuff. (The first three parts are linked in this footnote.1)
My motivation for the trip, on the other hand, was that I’d never been to Seoul and East Asian urbanisation is my minor obsession.
Too many Twosome Places
Late on my first night in town, I went for a walk around the convention center. It is densely furnished with the very small bar-diners that appeal to office workers looking for fried food and beer after a hard day at the spreadsheet face. I eventually stopped at a tiny seafood-in-a-tank spot for a grilled rockfish preceded by a very good, slightly fizzy cabbage white kimchi. (One of the many nice things about eating in Korea is not having to decide on appetisers because meals usually come with an array of tiny snacks, some of which are made from vegetable trim like radish or turnip tops.)
I was forced to resort to Old Skool city wayfinding after dinner because Google Maps wasn’t working at all. Casting around for a landmark, I saw — at the end of a tiny alleyway with the same name as the nearby major road — the well-lit signage of a coffee-and-pastry establishment called A Twosome Place2 that I’d passed on the way to dinner. But when I got there, I was not where I thought I would be. Looking around, I saw another Twosome Place a few blocks away. That turned out not to be the right one either.
Much, much later, when I finally got back, I discovered that navigating by chain coffee shop is not a good idea in Seoul, city of 18,000 cafes. Many city blocks have several chain and/or independent coffee shops, and they are all comfortably full of people talking to each other while drinking iced Americanos.
Illegibility as feature
My post-rockfish navigation troubles foreshadowed Seoul’s general illegibility. It is the most illegible big city I’ve ever visited.
Even in the touristic and commercial parts of Seoul, a lot of signage seems to be only in Hangul (including in the transit network). The addressing system uses major roads and their associated minor roads instead of locating addresses along uniquely named roads (as in most of the west) or within parcels and subparcels (as in Japan). Google Maps is useless (walking directions are generally unavailable, map layers and points of interest are inconsistent and unreliable, and transit routing is inaccurate). The local alternatives (Naver Map and KakaoMap) take an inconvenient amount of effort to learn, and key parts of their respective UIs are not translated into English.
I stayed three days south of the river, right by the convention center (and an eight-lane dual carriageway in the process of being widened even further). Then, freed from conference obligations, I moved to the north side.
The contrast between the districts south of the Han River and the districts north of it is sharp. Every bus or train I took in the south was stuffed full of people even off-peak. Every major road I saw in south Seoul was dense with sluggishly moving cars. Everything seems at least 30 minutes away from everything else no matter how you try to get there. The south is Seoul’s new commercial and technology centre, and where much of the premium new retail is located. People who live in the south seem much richer, and real estate there is much more expensive. (Gangnam means “south of the river.”)
Until 60 years ago, this whole region south of the river was low-value agricultural land outside the old city walls on the north side. It was only absorbed into the city proper in 1963. Immediately thereafter, a forceful and comprehensive set of public policies favoured rapid development in the south: housing development, public subsidies, education relocation programs, and business support, often paired with policies suppressing equivalent development in north Seoul. The parts of the north side I saw were lower-rise, slower-moving, less crowded, less glitzy but also less worn-down; yet the north side was the imperial capital for half a millennium.
Today, north Seoul feels distinctly different from south Seoul, an inverted analogue of how London’s Square Mile feels different from South London. The upshot is that Seoul feels immediately like a distinct place, not a generic, fungible big city. It is a city that stimulates sensemaking. Its illegibility is uncomfortable, and discomfort forces engagement.
A city lens
Well in advance of arriving in Seoul, I sought #intel Far and Wide. Almost nothing came back other than a handful of map pins from people who either had never been or hadn’t been in decades. Anyway, I didn’t want only a list of places to eat and buy stuff. Eventually I was able to articulate what I was asking for: I wanted subjective (= opinionated) lenses through which I could make sense of an unfamiliar city.
These lenses form in anyone who has gotten to know a city well. Getting to know a city means making some sense of what was formerly illegible, so that the city develops a layer of personal meaning (indications include: places appreciated for idiosyncratic reasons, pet peeves, extremely specific itineraries, etc.) that overrides the meaning carried in newspaper articles and social media.3
But even people who know a city well rarely offer their lens when asked about it — a list of pins is much easier to give. (A small handful of wonderful people did come through at the very last minute, after I had figured out what to ask for.)
A language for meaning
The difficulty of articulating the ask for lenses and the rarity with which lenses are offered are two symptoms of the same fundamental problem: we lack a commonplace language for thinking and talking about subjective meaning in general (and about cities in particular).
I don’t mean that no one communicates the personal meaning of places (writing about cities and countries is an old and evergreen genre). What I do mean is that we seem to take the personal meaning of place for granted in everyday life, and it rarely occurs to us to communicate our own personal meaning of place to others unless specifically asked to. This is both the cause and the effect of there not being a good everyday, non-technical way to ask or talk about subjective meaning, even though it suffuses quite literally every aspect of life.
Confronting Seoul’s illegibility made this problem snap into sharper focus for me and connected it to a totally different project.
AI’s meaning problem
I’ve been thinking a lot about foundational questions in AI recently. (I’m helping to convene a conference about this next month.) AI’s connection to human meaning-making has been lurking at the back of my mind since I wrote earlier this year about how the ability to make meaning is what separates humans from machines for now.
Subjective meaning is clearly problematic for nonhuman systems like AIs, but not in an obvious way. I haven’t wrapped my head around this fully yet, but right now the problems seem to arise from the nexus of four observations:
Routine discussion about meaning and sensemaking mostly happens in particular academic disciplines (philosophy, sociology, anthropology, etc.), cultural criticism, and art of various kinds.
Very recently, we’ve built machines (LLMs) that borrow the meaning embedded in the cultural artifacts humans generate as part of everyday life (writing, images, video, audio, code, all of it) and re-present it to both professional and everyday users.
Few of the academic and industry mathematicians, computer scientists, and engineers building these machines seem to be thinking about subjective meaning in AI …
Because current machine learning approaches are applications of mathematics, and there doesn’t seem to be any good way to formalise subjective meaning mathematically.
The end result: neither the builders nor the users of AI systems have a commonplace language for thinking and talking about subjective meaning, so it isn’t thought about or talked about much in the context of AI.
This is problematic for three reasons:
Subjective meaning is what gives rise to both biases and systems of ethics: both are rooted in non-objective preferences for some things over others.
The unpredictability of subjective meaning creates true and currently unquantifiable uncertainty in human cultural artifacts, not just quantifiable risk. This means true uncertainty exists both in AI training data and in how users interact with AI systems (prompts, for example). More generally, what gets called “uncertainty” in the AI/ML context usually seems to be some form of risk.
If subjective meaning-making is what machines cannot yet do (a strong claim, which I unpack at length in this essay), then meaning-making must be considered when evaluating whether an AI system should replace a human on a given task.
The foundation problem problem
Not having a language for talking about subjective meaning makes it hard to think about meaning and meaning-making. In turn, that makes it hard (maybe even impossible) to answer at least three very important — some might say foundational — questions about AI:
How can AI systems be designed to avoid bias (or at least make bias explicit) and be ethical?
How can AI systems be designed to deal with both risk and non-risk types of not-knowing, in both training data and user interaction?
How should we choose what tasks to apply AI systems to?
Does this mean that making sense of subjective meaning is one of the most foundational problems of AI research? I don’t know, but I will for sure be writing more about the intersection of not-knowing, AI, and meaning.
See you next time,
VT
A sort-of series on why/how to use uncertainty as a tool for coming up with new stuff: 1: Generative uncertainty; 2: Designing good experiments; 3: Framing problems well.
I had an embryonic version of this idea of personal city-meaning in a post contrasting Real Reasons for being in Marseille vs Hype (and about living in both cities and the middle of nowhere).