This issue is about a visit to Seoul last week, and how it connected some previously fragmented thoughts about illegibility, meaning-making, and problems in AI. (ikr.)
> there doesn’t seem to be any good way to formalise subjective meaning mathematically
Stephen Wolfram seems to be onto something by making observers formally part of a theory of the universe: https://writings.stephenwolfram.com/2021/11/the-concept-of-the-ruliad/
agree that there doesn't seem to be any good way to do this ... yet. i have no way to know whether it is because it is impossible or because not enough people have systematically tried to do so.
(still have to wrap my head around the ruliad!)
It seems like the less universal something is, the more pointless it is to try to formalize it. On the other hand, the less universal something is, the more significant it can become to us as something unique.
This aligns well with Erich Fromm’s Having and Being modes.
Really enjoyed this. There's a best case scenario I think about often where the need to assert our human-ness in the face of AI becomes a forcing function to confront things like meaning head on.
that is definitely one of the best-case scenarios.
the widespread uncritical attempts to find use-cases for AI deployment seem to point away from this best-case scenario though?
Yes and no? The uncritical attempts are exactly what make the forcing function so compelling, right? Technics wants to totalize every aspect of humanity, and AI is granting it power to encroach on the final holdouts: things like consciousness, creativity, agency. How will humans respond? I agree with you in that I don't see how the current structures of AI innovation could possibly account for this. We would need an entirely new mythos of technology.
The whacky scenario is where everyone gets more irrational, illegible, and mystical as a means of differentiating humanity from the machines. Imagine poets creating new languages that are illegible to model training. This would be a bottom-up reactionary movement, much as Romanticism was a reaction to the Enlightenment. If we have no hope of reasserting our humanity around things like meaning, then maybe this isn't such a bad outcome!