26 Comments

Good one Alberto, I like your level-headed take between the extremes. For my less level-headed take: we don't have to choose between blaming bad users or the companies; we can blame them both.

In defense of current AI, it could be that our blame-game instincts, mine included, are seriously out of whack. Just a few days ago I wrote an article related to yours titled "Exploring The Strange Phenomena Of Outrage".

https://www.tannytalk.com/p/exploring-the-strange-phenomena-of

In that article the question was essentially: who is to blame for tobacco deaths, smokers or the tobacco companies? I acknowledge that each of us is responsible for our own choices, and then I come down hard on the tobacco industry.

Let's establish some context for our concerns about AI.

Did you know that the tobacco companies kill almost as many Americans EVERY YEAR as were killed in all the wars Americans fought in over the last century? The CDC puts the yearly death toll at around 480,000.

Seeing that is making me wonder why I hang out on AI blogs wringing my hands about chatbots. Have chatbots killed a single person yet?

It's interesting how we choose what to get all worked up about. I don't claim to know how that works, but it does seem that a cool-headed logical analysis is not a big part of the process.

Great article and I'm really liking these takes. There was a book way back in 2001 or thereabouts called "Mac OS: The Missing Manual", and maybe AI needs something like that (paging O'Reilly....). The feedback I sent after using Bing Chat was that there should be a fun intro video before Chat access is granted, with someone like Hank Green explaining in regular-person terms what deep learning is and how the model works.

author

That would definitely help--more so when what's intuitive and what's adequate are at odds.

Mar 11, 2023 · Liked by Alberto Romero

I appreciate your perspective, Alberto. I think the "manual" ask for LLMs is a little unreasonable given the inherent flexibility and openness of what these applications and models can do. However, I think there is something to be explored in the way of "templates". Other modern applications with open and flexible systems have deployed templates to guide users into more locked-in and defined use cases. When it works, both users and the companies who own the applications win (Miro, Zapier, Canva, etc). It's a way to encourage more "coloring inside the lines" and provide a shorter path to value for consumers without completely locking down the openness of the system (which, to me, is part of the beauty of generative AI).

This protection/guard-rails concept is a whole other story on the developer tools side for folks connecting and building their own applications on top of OpenAI, Stability, etc. Who is held accountable there? The foundation model providers? The application layer? The user? Hard to say right now.

author

Agreed. I don't mean manual as in "you can only do this," but as in "this is how this works and this is what you can expect if you do this or that." ChatGPT is largely inoffensive, but even the engineers who built it acknowledged that they didn't expect how much issues like bias or jailbreaking would increase with scale. If companies keep developing these models and releasing them into the wild, the "largely inoffensive" bit might begin to be less true. It's in those cases that the idea of a manual is paramount--then, of course, users can do whatever they want (within legality) and they'd be at fault for whatever they do.

I much enjoyed the article. Regulation should enforce clarity: openness about when the system resorts to fabricated material to maintain the semblance of a coherent whole, and openness about if and when humans take over in giving responses. Users should have such facts up front. Throwing mud at the wall to see what sticks comes at an expense to society. Clearly the systems have a lot to offer if channelled to clear use-cases.

author

"Throwing mud at the wall to see what sticks comes at an expense to society. Clearly the systems have a lot to offer if channelled to clear use-cases." Agree with both things!

Mar 11, 2023 · Liked by Alberto Romero

Alberto, you wrote a fascinating article, thank you. I see the logic behind your argument; however, I wonder about the actual harm in question. It would be interesting to have you describe actual harm and also link the harm to a chatbot (causality). There has been a lot of hand-wringing since ChatGPT went live in November and articles galore about its potential for theoretical good and bad, with supporters on both sides making interesting arguments, like yours.

Have we seen actual harm? Have we been able to conclude that the cause of the harm was an LLM? With all of the content on the web, again, both good and bad, accessible to all of us and indexed (since at least 1991) and accessible by search engines right from the beginning (remember AltaVista and chat rooms?), where is the historic harm and who is responsible? I think that we see some causality in the context of radicalization of youth via YouTube and the use of other social media, but there we focus on the people who are posting using the tool, not the tool itself, although this too is a moving target (s. 230 SCOTUS, anyone?). Look forward to more of your writing.

Cheers

author

I read a super interesting article on a concept called "criti-hype" (also "hype-o-crit," which is more ingenious) that touches on this very idea of exaggerating things like harm produced by technologies as another form of hype.

I may write about it and it sure will help me frame my essays much more carefully when touching on these kinds of topics. I don't agree with everything but it's worth a read: https://link.medium.com/k6GcxDzD4xb

author

Thanks Terry! I think you're totally right. A response to your comment deserves to be an article compiling instances of real harm. For now, taking a peek at the r/Replika subreddit may be a good start. Also, just logged on to Twitter and saw this: https://twitter.com/tristanharris/status/1634299911872348160

I don't claim generative AI is, in absolute terms, causing more harm than most other things, not even close. Cars (and so many other things) are much more dangerous in that sense, as I said in the article.

This wasn't so much a fear-mongering piece (e.g., "oh no, LMs are going to destroy the world") as an attempt to isolate the features that make LMs special and that make the current conditions under which companies and users build/use them unique. Even if there was no harm, it makes no sense to say, e.g., "hey, there are no real instances of harm so let's not set any regulation for these products."

"Harm" served me as a means to explore the anomalous situation that surrounds generative AI systems, their design, creation, and their use. Anyway thanks for your comment. Very needed clarification on my part.

Great article Alberto! I think that LLMs are different and will likely have more of a real-life impact than some of the scenarios discussed by Vinsel. I wonder what he thinks about the potential of this new application of AI.

To continue with your automotive metaphors, so far the only harm caused by generative AIs comes from drivers who knowingly, and insistently, drove several times towards the precipice and then provided screenshots showing that it is possible to have an accident.

author

Well, that's not really generative AI. And if it were, it wouldn't be the only instance, although it depends on what we qualify as "harm," which may not be the same for everyone. A walk through the r/Replika subreddit can give anyone a good glimpse.

I also think that we shouldn't interpret the amount of talk about "generative AI harm" as reflecting the magnitude of the harm; it's more about its novelty and unpredictability.

While I agree with the general direction of this post (it would be important for AI developers to publish a manual with limitations and such), the car example completely misses the point.

Carmakers have never put products on the market that are 100% safe and tested with full knowledge of their limitations.

Just consider that until the '90s it was not compulsory to wear seat belts, or the fact that the vast majority of crash tests are done with dummies that reproduce only the male body, making airbags and seat belts noticeably less safe for women.

More recently, carmakers have put on the market vehicles with features such as GPS images on the windshield (which have been shown to distract the driver and impede the view), or steering wheels in weird shapes chosen for aesthetic reasons that then turned out to make the vehicle less safe. All this without even mentioning the chaos of autonomous vehicles and exploding batteries.

Cars are, and have been in the past, put on the market with some testing and some regulation, but among all the possible examples of how to make a product safe, I think this is the worst one.

The purpose of ChatGPT is to create words when prompted.

1. ChatGPT should come with a warning that it might create offensive speech, and if the user doesn’t want to risk being offended, they shouldn’t prompt it.

2. If the user decides to publish said words, they are the user’s words and all relevant social and legal constraints on speech apply.

Problem solved?

Thanks for engaging, and I will not persevere. Love your writing, but I’m passionate about this one. To me it’s a free-speech issue, whether human or machine.

I conceded that #1, “intrapersonal” AI-human interaction, is novel and worth a lot of thought to define what counts as malfunction. AI is a product. “AI made me do it” needs deeper investigation and regulation. It might be a defense, and laws are needed to define AI makers’ responsibility and liability.

For #2, “interpersonal” human-human interaction, AI is a tool of the human publishing its content. Humans still create orders of magnitude more words than AI (expect this to change in the near future), and we somehow manage. The person publishing AI’s content is responsible and liable for what they publish. “AI wrote it” should not be a defense.

author

"The person publishing AI’s content is responsible and liable for what they publish. “AI wrote it” should not be a defense." I 100% agree with this. Even if ChatGPT writs something problematic, the person can decide to not do anything else with it.

I wonder whether “AI made me do it” would be much of a defense today for individual misconduct. I think that the determination of the individual's responsibility would turn on the reasonableness of their own decision making, as is the case with most findings of fault today. The question is whether, in all of the circumstances, a reasonable person would have acted in such a way. “The devil made me do it” is not much of a defense unless the devil came at you with a raised knife and you shot him. Now it would be a much different story if AI was able to manipulate our society so as to alter our belief systems and fundamental notions of right and wrong. This dystopian extreme is perhaps worth a moment of contemplation along with a few beers and some close friends🙂.

author

Well, no. And this article *is* (at least partly) the answer to why not. (I didn't mention anything related to #1, though.)

#1 the company is liable. Malfunctioning product.

#2 the user is liable if the speech breaks some law. Unless they invoke #1. LOL.

author

How can you define a product as "malfunctioning" when there's no "well-functioning" definition? How can a company be liable (e.g., if the user falls in love with the chatbot and the company decides to shut down the application) if there's no regulation?

This is the whole point of the article, to explore these questions.

To quote Einstein, “If you can't explain it simply, you don't understand it well enough.” ChatGPT can harm in two ways:

1. Individual manipulation or influence OF the user

2. Mass manipulation BY the user

#1 is novel and intriguing because there has never before been a machine that can do so.

#2, which is the focus of this article, is not. People have used words to manipulate other people since the beginning of time, and consequently we have plenty of laws and social customs to address those harms. How those words are generated, at what scale, and how easily and effectively, is irrelevant.

In my opinion.

author

About 1, I've written elsewhere. About 2 ("How those words are generated, at what scale, and how easily and effectively, is irrelevant"), agree to disagree.

About 3, i.e., how can we decide who's to blame, users or companies (because it's not always the same and it's not trivial), you have this essay.

I was thinking more of AI somehow manipulating someone (as with the New York Times author) or providing incorrect but convincing information (as in, the law says it’s OK to walk into a store and take something if you don’t have money, as long as you just leave a paper with your name on it saying you’ll pay them back in a month).

Deleted · Mar 11, 2023 · Liked by Alberto Romero
Comment deleted
author

Thank you Luis! "Unscrupulous rivals," like China, you mean? Well, that was one of the arguments for the atomic bomb (Nazi Germany). It's also an argument against open source AI. It's not an easy question at all, worth exploring in-depth in my opinion.

Removed · Mar 11, 2023 · Liked by Alberto Romero
Comment removed
author

Agreed.
