Alberto, you wrote a fascinating article, thank you. I see the logic behind your argument; however, I wonder about the actual harm in question. It would be interesting to have you describe actual harm and also link that harm to a chatbot (causality). There has been a lot of hand-wringing since ChatGPT went live in November, and articles galore about its theoretical potential for good and bad, with supporters on both sides making interesting arguments, like yours.
Have we seen actual harm? Have we been able to conclude that the cause of that harm was an LLM? With all of the content on the web, again, both good and bad, accessible to all of us, indexed (since at least 1991), and reachable through search engines right from the beginning (remember Alta Vista and chat rooms?), where is the historic harm and who is responsible? I think we see some causality in the context of the radicalization of youth via YouTube and other social media, but there we focus on the people posting with the tool, not the tool itself, although this too is a moving target (s. 230 at SCOTUS, anyone?). Looking forward to more of your writing.
Cheers
I read a super interesting article on a concept called "criti-hype" (also "hype-o-crit," which is even more ingenious) that touches on this very idea: exaggerating things like the harms produced by technologies is just another form of hype.
I may write about it, and it will certainly help me frame my essays more carefully when touching on these kinds of topics. I don't agree with everything in it, but it's worth a read: https://link.medium.com/k6GcxDzD4xb
Thanks Terry! I think you're totally right. A response to your comment deserves to be an article compiling instances of real harm. For now, taking a peek at the r/Replika subreddit may be a good start. Also, I just logged on to Twitter and saw this: https://twitter.com/tristanharris/status/1634299911872348160
I don't claim generative AI is, in absolute terms, causing more harm than most other things, not even close. Cars (and so many other things) are much more dangerous in that sense, as I said in the article.
This wasn't so much a fear-mongering piece (e.g., "oh no, LMs are going to destroy the world") as an attempt to isolate the features that make LMs special and that make the current conditions under which companies and users build and use them unique. Even if there were no harm, it makes no sense to say, e.g., "hey, there are no real instances of harm, so let's not set any regulation for these products."
"Harm" served me as a means to explore the anomalous situation that surrounds generative AI systems, their design, creation, and their use. Anyway thanks for your comment. Very needed clarification on my part.
Great article, Alberto! I think that LLMs are different and will likely have a more real-life impact than some of the scenarios discussed by Vinsel. I wonder what he thinks about the potential of this new application of AI.