Discussion about this post

Francesco D'Isa

I’ve read your essay and find myself agreeing with much of its anatomy of “neck vs necklace,” yet still questioning the conclusion you draw from it. You’re right that great art needs a thick, particular context, and your Guernica-in-the-woods thought-experiment makes the point vividly. But the essay treats generative systems as if they stand alone, when in fact every non-trivial AI artwork already arrives encased in multiple layers of human meaning: prompt, curation, framing, display, reception.

You also grant, late in the piece, that “someone will eventually create an AI masterpiece,” yet most of the argument reads as though this were impossible in principle. Photography offers a useful timeline check: in 1841, barely two years after Daguerre’s first public demonstration, it would have been premature to pronounce the absence of photographic masterpieces; Stieglitz’s The Steerage and Atget’s Paris were still half a century away. Declaring a deficit now, when generative tools have been publicly available for scarcely three years, risks repeating the same snap verdict critics made about early “mechanical” cameras.

Finally, the essay often equates “AI art” with raw, unedited machine output, then measures that output against canonical paintings. Yet the most serious practitioners work exactly where you say meaning is made: they choose or even shoot the training data, design prompts as scores, iterate, crop, print, mount, title and tell stories around the images. In other words, they restore the very specificity you claim is missing.

So I share your intuition that the first undisputed AI masterpiece will arise when an artist harnesses the medium’s native properties, such as stochasticity, scale, and recursion, rather than using it for mimicry. I just doubt we’ll recognise that moment if we decide in advance that no such work can yet exist. The history of new media suggests we’re still far too close to the invention to make that call.

Michael E. Zimmerman

Whether or not there are any AI masterpieces, the important point you are making was the basis for philosopher Bert Dreyfus's objection to Minsky's representational approach to designing computer intelligence. Drawing on Heidegger's concept of being-in-the-world, Dreyfus argued that humans have a vast taken-for-granted understanding of the world, a background that cannot be "programmed" into the computer. Every human (well, the vast majority) knows that people have necks in the first place, and that necks happen to be useful for hanging necklaces. This tacit background understanding of things arises because things MATTER TO HUMAN BEINGS in ways that they do not to computers. Something is at stake for us. Heidegger even said that the very Being of the human is caring. We care about ourselves, Others, the future, and so on. Designing a robot to learn how to navigate in a complex setting (with layers of tacitly understood handy/necessary features) is a step in the right direction, because the robot would have to sense its surroundings (feel, see, hear, etc.), but the robot's Being is not care. Robots can be taught to SEEM to care, but what would it take to modify a robot to such an extent that it would care about itself, its future, other robots, people, and so on, in ways somehow analogous to our own? Such robots would have to become human variants. Is that likely?
