What you're discovering is that, um, human beings tend to be kinda stupid, and that's what feeds the hype monsters on every subject. If we ourselves are not to be kinda stupid too, we need to ask how much power we want vast populations of kinda stupid human beings to have access to.
What seems to be missing from so much AI commentary is a true engineering approach, where we consider ALL factors involved in a technology. That might look like this...
Let's think of human beings as the "governing mechanism" inside the AI machine. If we were good engineers and understood that this governing mechanism is inherently limited in ability, then we would design the AI machine to take that reality into account. We wouldn't be pushing the AI field forward as fast as possible and dreaming of the singularity and all that; we would instead be saying things like...
AI is useful up to a certain point, and we should understand where that limit is, and design AI to not exceed that limit.
A simple example: a car maker is not going to design a car that can go 500 mph, because few humans could keep a car on the road at that speed. Car makers are good engineers who understand that drivers have limits. I'm not seeing that level of engineering understanding in the AI field.
"AI is useful up to a certain point, and we should understand where that limit is, and design AI to not exceed that limit." Nailed it, Phil!
Off topic: watching the movie A.I. again, for about the tenth time. Given that this film was made over 20 years ago, it's pretty impressive.
https://en.wikipedia.org/wiki/A.I._Artificial_Intelligence
Haha I must admit that I’ve considered hopping on the grift with an offering of like a one-off paid zoom workshop for authors on “How to speed up your author workflow using AI.”
For some, less tech-savvy authors, I think this might actually be a useful workshop.
Hey Charlotte! Actually, I don't think what you propose sounds anything like the AI influencers I'm talking about. If you've taken the time to learn how the tools work and you use them yourself, then I think it's fine.
That's why I said that I'm not criticizing these practices individually (there may be some that are truly worth it) but the general trend of AI influencing (scammers are obviously criticizable).
Definitely. And there is so much repackaging and charging for what is actually free.
"Influencers" exist in any area with money and excitement.
From the low end of "Work at home making $86 an hour" ads to the high end of "investment bank/med school interview tips." They exist because of informational asymmetry: people selling bad advice that catches eyeballs can survive and thrive just by prying away a small portion of paying customers.
There's no way to eliminate them, they can only be suppressed.
They'll be suppressed when AI companies become household names and credible brands in the AI information space go from "AI expert/researcher" (trivially fakable credentials) to "worked at OpenAI/DeepMind/Stability/some not-yet-famous but eventually world-famous AI company." People will filter out the noise from people without credible brands.
You'll also have real celebrities from AI companies. Ten years ago, Elon Musk was still a nobody to the general public, despite having co-founded PayPal and being a billionaire. Now he is an automatically credible voice for the space and EV industries, with incredible reach. Sam Altman is still a nobody to the general public, but in 10 years he will be widely known. Then, instead of reading random hype articles, people will actually flock to a Joe Rogan interview with Sam Altman.
The problem with AI right now is that little real cash is being made. For the media and the layman, unless they can see you are a multi-millionaire from AI, they have no reason to listen to you, and they shouldn't be expected to understand and differentiate the technical aspects of AI models. Once real money is made, they'll have a signal of whom to listen to and trust.
As always, thought-provoking! Some thoughts:
"They exist because of informational asymmetry," very well put. Agreed.
"People will filter out the noise from people without credible brands." I hope you're right, but I don't think it will happen as you say. Those companies are already well known (Google and Meta have been doing AI for much longer than OpenAI and DeepMind), and people working there have blogs and write their thoughts publicly. It doesn't matter; the fast-food kind of content reaches more eyeballs for other reasons (it's emotionally appealing, uses simpler language, favors rhetoric over dialectic, etc.).
"You'll also have real celebrities from AI companies." People are quite bad at ascribing authority to those who should have it. And Elon Musk is actually a great example of this: people shouldn't be listening to him about self-driving tech, but they do.
"For the media and layman, unless they can see you are a multi-millionaire from AI, they have no reason to listen to you." Again, that's what *should* happen, but it's not what I observe. People listen to and believe "nobodies" as long as they talk about shiny things (of course, this is a simplification).
The last three paragraphs took the words right out of my mouth. As the founder of a newly launched tech newsletter based in Kerala, India, the effort to make people, especially those in media, understand what my model is often drives me nuts. Thanks for writing this.
Thanks for reading Hari, and good luck with your newsletter!!
The misinformation and disinformation are so distracting to the public. Thanks for your thoughts.
True, distracting and attracting!
Sad but true, Alberto. The worst part is that the quest to unmask the AI influencers is doomed, because influencers know how to get, well, influence, and as you say, this includes giving easy, even superficial, content to readers.
Agreed Ramon!
Am I right in thinking you are using the term "AI influencer" to mean "a human influencer who talks about AI a lot"?
For me, the term means a virtual influencer, someone who doesn't really exist but was created by AI. This is also the definition I see online the most. So your article had me a bit confused there!
Completely agree, and the shift toward artificial influencers has been happening since about 2007:
https://open.substack.com/pub/johnmayosmith/p/rise-of-the-artificial-influencer
Such influencers are becoming more and more popular, and so is this type of advertising. I recently found this post and recommend reading it: https://gamerseo.com/blog/ai-influencer-know-everything-about-virtual-influencers-marketing/
I appreciate the core sentiment and aim of the article. However, I don't think it's quite accurate to say that ChatGPT is mainly getting attention because people are now aware of the tool. I tested my exam questions with the former version of ChatGPT (and other, similar tools) in September; it failed all of them. With the launch of the new version, I tried again, and it suddenly passed about 20% of them. That's a huge leap that goes far beyond mere awareness. ChatGPT is more accessible, but it's also simply substantially better.
You’re my go-to source for new information in the AI space, thanks and keep it up!
I really appreciate the way you develop thoughtful analyses. Your texts are very educational for a beginner like me. Good night from Brazil
Great article, as usual!
"If generative AI is tangibly useful, then the hype is less problematic, right? Well, no. Web3 was easily dismissed as scam-ish. AI influencers, in contrast, talk about something real—AI hype seems to be justified. But it isn’t: They add an embellishment layer on top of the truth to beautify and hyperbolize their discourse. "
My experience with this is the exact reverse, really. I've been vocally anti-crypto since 2018. The problem with having conversations about crypto/web3 is the extremely wide gap between what people believe it is and what it actually is. The fact that literally everything about crypto/web3 is a scam makes the conversation really hard. People have a tendency to think that there must be at least 'something' there when there's a technology they don't understand. The fact that the only 'something' there is scamming makes the conversation hard and painful, and people are more likely to get into a defensive mode. And thus, I'm way less likely to engage with it as well.
I met my in-laws over Christmas for the first time, and the conversation came around to bitcoin and the fact that they have a small amount of money in it. There's no way to make that conversation not awkward if I told them my actual beliefs about bitcoin, so I just didn't say anything at all.
On the other hand, in conversations where people bring up ChatGPT in hype-y and over-the-top ways, it's way easier to steer toward the realistic, non-hype ways AI will be impactful, without starting a confrontational talk. Like, someone recently told me that ChatGPT will change education super soon and it'll all be crazy and we'll all be using AI all the time and stuff. It's way easier to not directly respond to those over-the-top statements and instead talk about all the cool ways Ethan Mollick describes in his article (https://oneusefulthing.substack.com/p/all-my-classes-suddenly-became-ai) on how he approaches classes with AI.