15 Comments

I wonder if it has something to do with how art and writing are monetized. A lot of writing doesn't have to sell itself piece by piece the way art does. People write for their job (and earn a salary) or freelance with pretty decent confidence that their pieces will get bought by publications.

Artists, on the other hand, put tens or hundreds of hours into creating a piece on spec, hoping it will sell. Even if it's stock art, there's no guarantee of any return at all on a specific piece of work.

I wonder if corporate graphic artists feel differently about generative AI than independent artists do, i.e., whether they're more likely to view it as a productivity enhancer.


Alberto, you write, "in most cases it reflects a deeper conflict with the engine that moves the world forward".

As you've seen (in perhaps too many of my comments), I consider it debatable that AI, or even the knowledge explosion as a whole, is moving the world forward. As a simple example, giving a ten-year-old the keys to the car would not be moving the family forward. The ten-year-old might experience it as forward movement, but then, he's ten.

You write, "I don’t feel threatened by AI."

If you don't feel threatened by the current versions of AI like ChatGPT, OK, I can get that. If you don't feel threatened as a writer by AI as a technology, perhaps we need to hear more from you on that? To me, it seems that AI in its current state, and where AI is likely to go, are two very different things.


Yup, this rings true.

I've been having very similar experiences within my circle of friends. As someone who writes both for work and pleasure, I can relate to your explanations.

I think another dimension to this is the "speed" of consumption. Written passages inevitably take longer to read and process. Images are digested near-instantly. So on some level, the sheer scope and volume of harm done by AI (as perceived by artists) simply feels larger with generative art models.

We're visual creatures, so we can absorb and react to dozens of AI images in the same span of time it'll take us to read a single ChatGPT article.

Looking at countless Facebook groups, Twitter accounts, and Instagram profiles dedicated to sharing AI art (and their popularity), it's easy to see why the threat and impact feels so much more immediate in this space.

EDIT: After letting this marinate a bit, I'm wondering whether yet another factor is exclusivity.

Everyone knows how to write and everyone does so on a daily basis. Granted, not everyone is a great author or storyteller, but the underlying skill feels more mundane and less special.

With visual arts, the barrier to entry is higher. Most of us wouldn't consider ourselves artists. So watching this exclusivity barrier get shattered by an AI model that gives literally anyone the ability to generate endless great images at the speed of thought must really trigger a more visceral reaction.


Seems to me one vulnerable "art" profession is designing clothing and accessories. I foresee a world in which, when you want to buy a pair of shoes or a hat or even a backpack, you log on to a service and submit a prompt. You look at the pictures and enter another prompt. (You might very well want to pay for the services of an expert here.) When you see something you like, the design gets sent to a 3-D printer and is delivered. I don't see why this isn't going to take over the whole game.


• Producing a top-notch image takes much more time, effort and, dare I say, training than banging out an essay, regardless of quality. And in that same light, the value add of great vs. good imagery is overblown. You're right, most casual consumers of images just don't care. I use AI-Gen'd images on my blog - they generally suck, but get the point across. Gud'nuff for me and the crew I hang with.

• I write code for a living and frankly, I'd love for some bot to come along and take my job. Please, I hate this sector I'm in, take it - I beg you.

• Maybe it's just a matter of degree. Right /now/ AI-Gen'd writing remains sub-par. But next year? Certainly by 2025 we'll have direct, exemplary products from AI to shake the boots of writers everywhere.

• Here's a field I'd like to see overtaken: Talking Head News Reporting. Sorry, Rachel Maddow, you've been replaced by a quirky, snarky, fun-to-watch anchor called AI-ngie.


To me, it seems that how artists are threatened by AI is simple: AI, for the most part, replaces commissions, as well as stock imagery. There are some situations where AI cannot replace an artist – for example, I cannot adequately use AI to replicate the tardisquirrel Kvantumo that I am working on – but for anything where one wants work similar to another work, which is quite a lot of the art market, AI can be pretty good at replacing it.

For authors, on the other hand, what part of writing does AI replace, exactly? Perhaps it will in the future, but AI certainly isn't replacing fact-checking. And for generating original works, someone still needs to input the original prompt that generates something interesting, which can often be enough of a challenge that it could be a job in itself. AI could probably do *great* at making regurgitated SEO-type articles, but does anyone actually aspire to do that? And it can edit, but that's only one part of writing, and honestly, in my own experience, I have to ask it for so many changes to match my own style that editing with it isn't really any faster than just writing the piece myself.

I guess the issues with editing and fact-checking could get better, but still, if I'm reading news, or opinion pieces, or advice – anything that shares original ideas – generative AI trained on existing content does not replace that at all, since it's only regurgitating existing information, not generating anything particularly new or novel. At most, it can help write it more efficiently. But there still needs to be someone generating those original ideas in the first place.

IMO, if you work for someone else, AI is a problem, but if you are publishing your own content, from your own original ideas, it is not, because AI or not, someone still needs to be there to input those ideas in the first place. If you are making original content, AI just makes the process of getting it to a finished product more efficient (assuming it works well).


AI looks perfectly Disney.


AI that helps you proofread and make your writing grammatically correct seems helpful enough. Writers enjoy integrating AI into their workflow - it just makes things a lot faster.


Another reason seems to be that writing gets at truth, and truth requires authority and an assessor. Whatever AI writes, it has to be read and assessed. Is it persuasive, meaningful, profound, insightful, etc.? An AI can replace writing where this is not a factor, but it cannot replace writing where the reader forms a relationship with the writer as an authority on something, an explainer of experience, a giver of testimony, etc.

Visual arts also have this interpretive element, where reality is being filtered and interpreted by an individual and you're trying to understand what the individual is telling you--but illustration doesn't necessarily have it. It does sometimes but not others--e.g., if drawing a hot dog for a package of hot dogs, it's simply representing or replicating something. Some writing will be like this--e.g., the description on the hot dog package could be written by an AI. So AI will replace some writing *jobs* but will not replace writing where the reader is seeking to enlarge their understanding via communication from another person.

At the moment, AI is simply a very sophisticated pastiche of words that normally go together. Nobody is 'saying' anything. Visual AI amounts to something like a collage, and a collage can be visually interesting, even an accidental collage. Nature can be visually interesting as well. Anything can be visually interesting and beautiful whether it is put together intentionally or not--the roots of trees, for example. Writing and speaking are not quite like this: they are much more intrinsically tied to the intentions of the speaker/writer. We will be encouraged by people promoting AI to think of it as having intentions, but it would be silly to think of AI this way, and it's an open question whether AI will ever be that way.


In this case the answer is frankly easy. As an engineer who knows the ins and outs of the AI world and a writer of fiction novels, I can affirm that the gap in quality between the two fields is overwhelming. The generated texts lack depth, and the coherence of the characters or arguments in long texts is very poor. Except for sparks of originality in short sentences, AI is very far from undertaking a fictional text with the same quality it currently achieves with images. And for technical or scientific texts, you always have the doubt that something has been made up.

Personally, I think that sooner or later it will come, but right now text generation cannot compete with image generation. And I suppose that video generation will inherit some of the problems of text generation.


It’s actually pretty weird that the artists

who are not exclusively digital artists are worried by any of this. Do they think that next year ChatBots are going to be put into humanoid and dexterous robots who will live in studios painting, drawing or sculpting 24 hours a day in all possible styles? Most artists are not digital artists, most art isn’t digital art. People who work with their hands are safe.

Writing, coding and even music are mostly digital these days, and yet writers, coders and musicians are sanguine or enthusiastic about AI, so far.


This is an excellent overview of a curious problem that reaches back to echo C. P. Snow's hypothesis of two broadly complementary, but opposed and often conflicting, sociocultural currents: science and the humanities. The echo actually is of more ancient origin and seems embedded in human cognitive construction: a distinction between two older, complementary oppositions: "qualia" and "quanta".

On the research server "Academia" is an early draft of a work, still in progress, in which I attempt to show that the qualitative aspects of thinking are more ancient than the later discursive writing and explicit text, and that the relationships may be understood by probing how the mind (as cognitive process) functions amid constant change in the present moment. "Becoming" thus operates in process-relational mode to construe an "objective" world of "Being", of 3D space seemingly penetrated by linear time (change). Much of this work presumes some familiarity with the triadic semiotics rediscovered by Charles Sanders Peirce (pronounced as "purse") in the 19th century but radically enlarged and extended by Thomas Sebeok and many others, especially the late philosopher John Deely.

The paper may be downloaded for personal use only, but be aware that the next iteration radically reverses the "parity" of relationships representing interactions among the four cognitive functions of thinking, feeling, sensing and intuiting, which in turn rearranges correspondences among their derivative relationships and processes. It is an extension of my lifelong research into the cognitive process-relational roots of the creative Fine Arts, as well as the more "objectively" prosaic utilities of graphic art generally.

The early draft may be accessed in PDF format at https://www.academia.edu/92969706/The_Ontic_Dance_on_qualia_informing_quanta

I claim and retain original copyright on the paper itself as well as much of the research material that binds it all together.

Two websites contain material supporting, respectively, the quantitatively explicit, as in industrial or architectural graphics, and the qualitative analogical forms of the Beaux Arts (Fine Arts).

Websites:

https://www.spotops.net/

http://www.manifestorders.com/overview.html

Thanks for the excellent article!

Best regards,

(Joseph) Howard Jone

Professor in Fine Arts (Retired)

The University of New Orleans.

ORCID: 0000-0002-5828-0936

February 7, 2023


Speaking of sensitivity to errors in writing…

I believe you mean “attribution” not “retribution.” And, “loses me” not “losses me.”

Oddly enough, these are the types of errors that a natural language response generator is unlikely to make. To a human writer, the code is close enough to "correct" to generate the correct meaning in the mind. Often a reader won't even perceive the typo if the mind generates the correct meaning element in response to the (approximate) visual code. It also speaks to the different forms of editing and review. Because the code "retribution" triggered a nonsensical meaning in the context of the rest of the sentence, my mind rejected it and caught the error.

How might something like ChatGPT be used to flag such errors? Could the algorithm assess the likelihood of the sentence (given the context of whatever it was trained on) and flag unlikely constructions? That seems like a pretty straightforward advancement over simple dictionary lookup tables: not "is this word in the dictionary" but "how likely is this sentence to be meaningful and coherent with the rest of the text?" I'm not sure how grammar recommendations (e.g., in Word or Grammarly) work; perhaps they are based on a similar idea. Something like that could also be used across disparate languages, effectively translating from an academic language to a native language for evaluation of meaning and review. So, there is a general consensus that ChatGPT's text output needs review and editing by a human, but how about the other way around? Can ChatGPT be used to edit text generated by a human?
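A minimal sketch of that likelihood idea, assuming the Hugging Face transformers library and an off-the-shelf masked language model (the model name, threshold, and example sentence below are illustrative choices, not anything the comment specifies): mask each word in turn, ask the model how probable the original word is in that slot, and flag words whose in-context probability falls below a threshold.

# Sketch: flag in-context "unlikely" words (e.g. "retribution" where "attribution"
# was meant) by scoring each word with a masked language model.
# Assumes the Hugging Face `transformers` package; model and threshold are arbitrary choices.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def flag_unlikely_words(sentence: str, threshold: float = 1e-4):
    """Return (word, score) pairs the model considers improbable in context."""
    words = sentence.split()
    flagged = []
    for i, word in enumerate(words):
        target = word.strip(".,;:!?\"'")  # targets must be bare vocabulary tokens
        masked = " ".join(words[:i] + [fill_mask.tokenizer.mask_token] + words[i + 1:])
        # Ask the model how probable the *original* word is at the masked position.
        # Caveat: words that are not single vocabulary tokens get truncated to their first sub-token.
        score = fill_mask(masked, targets=[target])[0]["score"]
        if score < threshold:
            flagged.append((word, score))
    return flagged

# "retribution" should score far lower in this context than "attribution" would.
print(flag_unlikely_words("Proper retribution of sources is essential in journalism."))

Grammarly-style checkers may or may not work this way, but a likelihood score along these lines is one plausible way to catch "real word, wrong context" typos that a dictionary lookup would miss.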


Because us writers are all using it to proofread our documents, compile research, and rewrite notes into blog posts and tweet threads 😂
