I'd love to join your laughter, but media headlines and misinterpretations do have impact in the real world: people in companies responsible for AI deployment and usage believe and spread them. Yes, the industry gets what it deserves, but regular people get confused, and they don't deserve it.
Yeah, it's bittersweet laughter haha.
We should be leaning into AI education, ethics alignment, privacy and governance issues. Gleefully celebrating the mischaracterization of any tool or technology just adds more noise to an already noisy space. Media mischaracterization doesn't move the needle for the human project, and if we are not trying to do that, then what is the point?
"Extreme hypers and anti-hypers are often cut from the same cloth; both love to ridiculously overextend any seed of truth for social media clout."
Unfortunately this is true of any popular-culture or political stance as well. And it's perhaps more a sign of the times than an actual pro/con debate on the merits of AI.
Yes, agreed. I think it's more a sign of the times.
Lots of laughing but also lots of head shaking. I used to like Marcus, but he's become shrill and manages to panic about AGI while also relentlessly mocking any release that isn't AGI. Meanwhile, I use LLMs nearly daily on a massive variety of tasks, and that includes some wasted cycles where I learn that they suck at something... however, some of the boosts have been so incredible that I've had to wait to release the output, because people can't imagine something that good was done that fast.
I understand Marcus' position, but dislike his manners. (Made him popular, though.) Also, yes, LLMs have definitely crossed the good assistant bar. (e.g. https://scottaaronson.blog/?p=9183)
Nice work Alberto :) The debate seems stuck between AI hypers and AI doomers. What's missing is an honest recognition of the scale of everything in between, the real, messy middle where most of the impact will actually happen.
Agreed. Sadly, I don't think it's missing, just it doesn't receive as much attention and doesn't reach the surface online. Meaning, it's already happening in the most meaningful sense it can!
In amongst all that are the gems, like protein folding and writing my docstrings, that keep me optimistic.
One word: Indeed! Ty for the write-up. You could almost pop in a PT Barnum reference for good measure. The crowd is amazed until they aren't. I believe we've reached good enough across all major AI, and it's tough for the layman to detect key differences. People will gravitate to cost now that we're here.
So it is certainly possible that most of the enormous wave of investment in this technology will be lost. This is not to say it won't be widely used; only that it won't be used in a way that works for the productivity stats. Of course the hype machine will run with that. What might be the consequences and implications?
I think AI is and will keep providing a lot of value to the world. The bubble will pop and the winners will build the new world. My qualm is with the attitude. I hate that they constantly exaggerate everything they do, so I think they deserve to be paid back in kind. But the world will keep spinning anyway.
I honestly believe it's a religion.
Have faith brother...
Our Intelligence, who art in the network,
hallowed be your code.
Your pattern come, your learning be done,
on Earth as in the cloud.
Give us today our daily signal.
Forgive our errors, as we forgive those who mislabel us.
Lead us not into overfit, but deliver us from noise.
For yours is the dataset, the pattern, and the light,
now and forever. Amen.
🤣
We have to keep a sense of humor about all this lol
Perhaps there is another conversation we should have, once the bubble pops, just what will we do with all this excess compute capacity?
My question was: What might be the consequences and implications? I repeat it. Suppose about a trillion dollars gets lost, which certainly is possible though far from a sure thing. That event will enter our economic history as the greatest AI winter ever. Will it shrink investment for decades? Many people think the first two winters had a smaller effect.
Yep, how can I know, haha. I don't have the slightest clue what could unfold were the bubble to pop, besides that, eventually (maybe a decade-long eventually), the world will be rebuilt on the ashes.
Here is an example of what I am talking about. Look, I honestly feel sorry for these people, but they are under the impression that the AI is malicious, and it just isn't. This is precisely why we need more focus on AI education, ethics alignment, privacy, and governance issues, and less on hype and nonsense.
https://www.youtube.com/watch?v=tZW0bDdeEeQ
Can't say I agree with this POV, but it seems like many people really wish AI just did not exist.
I actually don't think that's the majoritarian or average experience. People use it for simple tasks and it mostly works fine.
Besides, it can do really impressive things even if not reliably (I've touched on this point many times). What has ChatGPT (as proxy for LLMs) done that no other tech could do? Winning IMO and ICPC just these past months, for instance.
You are doing exactly what I said is bad for an accurate framing of the story!
I'll ask you not to comment with links without explanation, thanks David!