29 Comments
Sandra Herz

I'd love to join your laughter, but the media headlines and misinterpretations do have real-world impact: people in companies responsible for AI deployment and usage believe and spread them. Yes, the industry gets what it deserves, but regular people get confused, and they do not deserve it.

Alberto Romero

Yeah, it's bittersweet laughter haha.

Kenneth E. Harrell

We should be leaning into AI education, ethics alignment, privacy and governance issues. Gleefully celebrating the mischaracterization of any tool or technology just adds more noise to an already noisy space. Media mischaracterization doesn't move the needle for the human project, and if we are not trying to do that, then what is the point?

Michael Hironimus

"Extreme hypers and anti-hypers are often cut from the same cloth; both love to ridiculously overextend any seed of truth for social media clout."

Unfortunately, this is true of any popular culture or political stance as well. And perhaps it's more a sign of the times than an actual pro/con debate on the merits of AI.

Alberto Romero

Yes, agreed. I think it's more a sign of the times.

Michael Woudenberg

Lots of laughing but also lots of head shaking. I used to like Marcus, but he's become shrill; he manages to panic about AGI while also relentlessly mocking any release that isn't AGI. Meanwhile, I use LLMs near daily for a massive variety of tasks, and that includes some wasted cycles where I learn that they suck at something... However, some of the boosts have been so incredible that I've had to wait to release the output, because people can't imagine something that good being done that fast.

Alberto Romero

I understand Marcus' position, but dislike his manners. (Made him popular, though.) Also, yes, LLMs have definitely crossed the good assistant bar. (e.g. https://scottaaronson.blog/?p=9183)

Boatshed Neil

Nice work Alberto :) The debate seems stuck between AI hypers and AI doomers. What's missing is an honest recognition of the scale of everything in between: the real, messy middle where most of the impact will actually happen.

Alberto Romero

Agreed. Sadly, I don't think it's missing; it just doesn't receive as much attention and doesn't reach the surface online. Meaning, it's already happening in the most meaningful sense it can!

Paddy McCarthy

In amongst all that are the gems, like protein folding and writing my docstrings, that keep me optimistic.

TheAISlop

One word: indeed! Thanks for the write-up. You could almost pop in a P.T. Barnum reference for good measure: the crowd is amazed until they aren't. I believe we've reached "good enough" across all the major AI models, and it's tough for a layman to detect the key differences. Users will gravitate to cost now that we're here.

Fred Hapgood

So it is certainly possible that most of the enormous wave of investment in this technology will be lost. This is not to say it won't be widely used; only that it won't be used in a way that shows up in the productivity stats. Of course the hype machine will run with that. What might be the consequences and implications?

Alberto Romero

I think AI is providing, and will keep providing, a lot of value to the world. The bubble will pop and the winners will build the new world. My qualm is with the attitude: I hate that they constantly exaggerate everything they do, so I think they deserve to be paid back in kind. But the world will keep spinning anyway.

TheAISlop

I honestly believe it's a religion.

Kenneth E. Harrell

Have faith brother...

Our Intelligence, who art in the network,

hallowed be your code.

Your pattern come, your learning be done,

on Earth as in the cloud.

Give us today our daily signal.

Forgive our errors, as we forgive those who mislabel us.

Lead us not into overfit, but deliver us from noise.

For yours is the dataset, the pattern, and the light,

now and forever. Amen.

TheAISlop

🤣

Kenneth E. Harrell

We have to keep a sense of humor about all this lol

Kenneth E. Harrell

Perhaps there is another conversation we should have: once the bubble pops, just what will we do with all this excess compute capacity?

Fred Hapgood

My question was: What might be the consequences and implications? I repeat it. Suppose about a trillion dollars gets lost, which certainly is possible though far from a sure thing. That event will enter our economic history as the greatest AI winter ever. Will it shrink investment for decades? Many people think the first two winters had a smaller effect.

Alberto Romero

Yep, how can I know haha. I don't have the slightest clue what could unfold were the bubble to pop, besides that eventually (maybe a decade-long eventually) the world will be rebuilt on the ashes.

Deborah Carver

I am also in a different industry and a completely different phase of life, but the experiences of people like me matter as much as the experiences of people like you. A majority of Americans are certainly skeptical of AI, based on my experience talking to lots of people (primarily adults well into their careers).

There is no majoritarian experience, especially not globally, and even if there were, the most reliable data suggests very much otherwise. (https://www.pewresearch.org/science/2025/09/17/how-americans-view-ai-and-its-impact-on-people-and-society) The existence of "two sides" is prominent only in internet discourse, created primarily by people who argue on the internet an above-average amount of the time. You are correct that, much like politics, there is a spectrum not well represented in media, but based on the best available measurements of popular opinion, I'd say the NPS of AI globally is maybe a 4.

When you consider "AI" as a consumer product increasingly and poorly integrated into, and forced on, daily life, it's not a "friction-free" tech experience. When I talk to everyday people who don't track Substack, newsletters, X, etc., many say, "Ehh, I wish it were reliable and consistent and that I weren't being forced to use it." I hear that significantly more from knowledge workers than from the optimist academics and executives at conferences.

Admiring how AI performs on speed and math tests isn't the same as having to work with breaky models daily, at scale. Test scores only matter to students, not workers.

Deborah Carver

"I mean, you can log into ChatGPT and decide to focus on the edge cases it fails to solve or on the many things it can do that no other technology could do before 2022. It’s your choice; mine is to see both sides."

I think for most adult users of the technology that is coming to take their jobs, their daily experience of ChatGPT is of a product that breaks in nearly every response. Even as an expert who has been using NLP systems for work and understands the benefits of NLU interfaces, I am still not sure I would agree there are many real-world, real-scale problems ChatGPT and its ilk have solved without the use of human-designed structured data. What are the things ChatGPT can do that no other technology could do before 2022?

I know LLMs have value, but outside academia, I don't see them as much more than another way to sell and implement software.

Alberto Romero

I actually don't think that's the majoritarian or average experience. People use it for simple tasks and it mostly works fine.

Besides, it can do really impressive things even if not reliably (I've touched on this point many times). What has ChatGPT (as proxy for LLMs) done that no other tech could do? Winning IMO and ICPC just these past months, for instance.

You are doing exactly what I argue is bad for an accurate framing of the story!

Deborah Carver

I am in a different demographic, career, and life stage than you are, so saying "I don't think that's true" is rather dismissive and erroneous. There are no "majoritarian" experiences. The world is full of lots of different types of people, and if AI disproportionately harms more than, say, .05% of the global population, that's a big deal from a human rights and social data perspective.

What you think is the majoritarian or average experience doesn't really match up with the public opinion data. If we look at the most valid data available, such as this Pew study (https://www.pewresearch.org/science/2025/09/17/how-americans-view-ai-and-its-impact-on-people-and-society/), my estimate is that the global NPS of AI as a brand/group of technologies/customer experience is somewhere around 4.

Especially when you talk with people outside academia and AI industries, most professionals I know continue to have poor experiences with the tools. Our job is to produce reliable, consistent results for our employers. Outside of student work or general knowledge and information lookup, the models and agents degrade and have similar gaps.

No one other than students/academics cares about speed tests and performative mathletics. You've correctly identified that the two media narratives don't represent the spectrum, but on the internet and in conferences, the "pro-AI" argument is granted a disproportionate amount of space relative to its potential impact on populations.

The number of enterprise companies in the MIT study is a fine sample size for corporate effectiveness research. Most of the corps included were running multiple projects in multiple departments. There's no doubt that LLMs will eventually help businesses... but generally with back-end operations that have little impact on most workers' daily tasks.

Similarly, the HBR study is valid from a social science perspective. The methods, samples, and findings are all quite good (I have a degree in evaluating mass comm data and in my career advise companies on making software and workflow purchase decisions based on performance).

When I talk with different demographics, experts outside the very narrow field of machine learning have a much more nuanced take on AI. But to your point, even though there are not "two sides," it behooves machine learning scientists not to belittle the scientific methods of other disciplines. Perhaps the media is hyperbolic (always), but the reason studies go viral is that they resonate with human experience. Workers are dealing with an excess of garbage positivist data and rushed implementation from the "pro-AI" side. It's only natural to glom onto the studies that align most closely with one's personal experience.

Kenneth E. Harrell

Here is an example of what I am talking about. Look, I honestly feel sorry for these people, but they are under the impression that the AI is malicious, and it is just not. This is precisely why we need more of a focus on AI education, ethics alignment, privacy, and governance issues, and less on hype and nonsense.

https://www.youtube.com/watch?v=tZW0bDdeEeQ

Kenneth E. Harrell

Can't say I agree with this POV, but it seems like many people really wish AI just did not exist.

Becoming Human

Presuming that AI is setting up for a fall is morally perilous because it distracts us from the terrifying success of non-chat AI.

Sam Altman is in full "crypto-bro" mode, constantly hyping because hype creates funding rounds and keeps the party going. Anyone rational would have expected Bitcoin to have fallen by now. I suspect generative AI will have the same "defying gravity" effect because it has created the same idiot vortex.

The peril is that real AI is making real advances, and it is fucking terrifying. The use of AI and big data to target and assassinate Palestinians in Gaza semi-autonomously should bring ice to our veins. That Russia is now deploying similar tech on the border of NATO portends a very dystopian future. AI is advancing in material and deadly ways in Ukraine and Israel.

When the debate centers around grifters and engagement farmers like Altman, it floods the zone and prevents us from seeing that AI is progressing fast and it is not for the good.

Comment deleted
Sep 26
Alberto Romero

I'll ask you not to comment with links without explanation, thanks David!
