A really interesting article; I loved how you showed both sides of the argument.
I definitely loved the pro/con stance of this article. In reference to vanilla AI, I studied and applied AI in college circa 1986-1987. We have come a long way! Our robots were difficult to program to avoid objects, unlike today; the rovers on Mars are a perfect example. To parallel the article, sensors were very much in their infancy then (at least the publicly available ones) compared with what we have today. For 37 years, the tech has been silently moving forward.
what a well-written article
Every new major technological shift must come with a period of "consolidation", where ideas are vetted, faulty ones are thrown out, and charlatans are culled. For those of us who, from here on, actually focus on real problems, there is massive opportunity. But there must be a detox from the drunkenness. In sobriety, and measured against truth, we will find the true value of these amazing technologies.
Perhaps an instantiation of Amara's Law?
Yes. Could be. It could also happen that companies conclude it's too expensive or users don't really know what to do with it.
(If I'm mistaken, this very comment would be an example of Amara's Law lol.)
AI researcher Gary Marcus basically says LLMs are finished and have done nothing positive for AI; on the contrary, they have diverted resources from real research that could have brought us closer to strong AI. It seems obvious these days that deep learning sucks and isn't going anywhere. It doesn't really seem like much of a revolution in AI to begin with to me, although it is one in weapons of mass distraction and surveillance, and in generating endless garbage and propaganda campaigns on the cheap.
Fundamental issues with LLMs and the underlying neural nets mean they'll never be able to reliably even tell the time on a watch, although they are great at telling the most statistically likely time to appear on a watch in marketing photos. It's all hype and more hype from people trying to sell things and maximise profits according to their legal duties to their Wall St. stockholders.
https://garymarcus.substack.com/p/generative-ai-as-shakespearean-tragedy
By the way, the only coders who benefit are the bad ones, of which there is a seemingly endless number, since good programmers are very few and far between; that's easy to tell when you are one and look at other people's code all the time. According to research, the average quality of code has gone down because of LLMs, and it was already far lower than it was decades ago: most programmers no longer have any idea how to program well, or how computers work, relying on all kinds of automation and tools to make it "easier." LLMs produce garbage and even hallucinate nonexistent dependencies, which people can then register and turn into real malware (a defensive check for this is sketched below).
The increase in productivity is an illusion: good programmers will be left to pick up the slack, and the time bad ones saved will be offloaded onto them. If it isn't, the quality of all software will go down, which is the probable outcome given the shortage of skilled programmers to make up for it.
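The hallucinated-dependency point is at least screenable mechanically. Below is a minimal Python sketch, my own illustration rather than anything from the thread: it checks each suggested package name against PyPI's public JSON endpoint (https://pypi.org/pypi/&lt;name&gt;/json), which returns 404 for names that were never registered.

```python
# Sketch: screen dependency names suggested by an LLM against PyPI
# before installing anything. A hallucinated name that an attacker
# later registers turns the hallucination into a supply-chain attack.
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a registered PyPI package (404 means it is not)."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors are real failures, not "missing"

# 'definitely-not-a-real-llm-package' is a made-up name for illustration.
for name in ["requests", "definitely-not-a-real-llm-package"]:
    verdict = "exists" if exists_on_pypi(name) else "NOT on PyPI; do not install"
    print(f"{name}: {verdict}")
```

Existence alone proves nothing about safety, of course; the check only filters out purely invented names.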
"...although they are great at telling the most statistically likely time to appear on a watch..." Sorry to take this quote out of context but that's great observation!
There are still gaps in the AI revolution, particularly in its ability to reason. However, ChatGPT represents a milestone: an AI exhibiting the ability to comprehend natural human language. The limitation was that ChatGPT could only form relationships between pieces of text, but it has since evolved and is now able to connect images with text as well.
The next hype that is emerging is in robotics. Progress in robotics had been stalling for a long time, but multimodal GPTs will enable robots to leverage them to interpret events in the real world, reason about them, and plan their actions (a minimal sketch of such a loop follows below). The results are Figure 01 and Apollo.
Advanced, intelligent, autonomous robots are coming.
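To make the perceive-reason-act loop above concrete, here is a deliberately simplified Python sketch. Every function in it is a hypothetical stand-in (none of these names correspond to a real robot or model API); the point is only the shape of the control flow: fresh perception in, a model-generated plan out, one action executed per tick so the robot can react if the scene changes.

```python
import time

def capture_frame() -> str:
    # Stand-in for a camera read; here just a text description of the scene.
    return "a red cup is on the table"

def multimodal_plan(frame: str, instruction: str, step: int) -> list[str]:
    # Stand-in for a vision-language model call: perception plus instruction
    # in, remaining action steps out. A real system would query a multimodal
    # model here; this toy just replays a fixed script.
    script = ["move to the table", "grasp the red cup", "lift the cup"]
    return script[step:]

def execute(action: str) -> None:
    # Stand-in for the low-level controller that performs one action.
    print(f"executing: {action}")

def run(instruction: str) -> None:
    step = 0
    while True:
        frame = capture_frame()                           # perceive
        plan = multimodal_plan(frame, instruction, step)  # reason / plan
        if not plan:
            break                                         # task complete
        execute(plan[0])                                  # act, one step
        step += 1
        time.sleep(0.1)

run("bring me the red cup")
```

Whether systems like Figure 01 actually work this way is an assumption on my part; the published demos show the perception-plan-act pattern but not the internals.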
Lol, gen AI doesn't comprehend anything; do you even know how it works? Next-token prediction != comprehension. The last thing we need is multimodal, real-world hallucinating robots, although the military already has hallucinating killer robots (too bad nobody listened to Hawking and Musk when they tried to get them banned under international law), and those can kill the wrong people by mistake even better than technologies like GPS and the mass-surveillance systems built for weapons, exposed by Assange, do by themselves. They can also be disabled by nets, magnets, and EW even more cheaply than the mass-produced weapons themselves.
I don’t know how much experience you have with GPT, but it correctly understands at least 95% of what I say to it, and generates very useful responses as a result. Please let me know what definition of “understand” you are using.
I think we’re seeing a scientific boom, and because of the looking glass AI is under, everyone gets to witness this progress from up close. What is taking much longer, as it always does, is transforming this progress into real, tangible applications.
A beautiful example, for me, is ChatGPT. It was never intended to be a product, only a research preview. But now it is one, and it needs to be maintained, improved, and monetized like a product. And then analysts start to write stories about declining daily usage, user numbers, and so on.
I think the story is much broader than declining daily usage.
Products take time. Also, sometimes, things that were hailed as revolutions eventually fail to live up to their promises.
In the 21st century, hype is more potent than ever thanks to connectivity and globalization. Those forces can push anything into the stratosphere of popular interest, but the damaging consequences of hype are a byproduct you can't prevent once that happens.
Which one is generative AI? We'll see.
Is this article some sort of satire, mocking journalists who express heavy opinions against the grain out of an appetite for a spotlight on the oh-so-multifaceted thought and analysis going on in their minds? I hope so. If you have any grasp of neuroscience and/or a fundamental understanding of the principles of modern artificial intelligence and the emergent capabilities of state-of-the-art models, it cannot possibly escape you that we are talking about a massive, era-defining shift in human affairs and economics. People are getting "bored" with AI??? Are we all twelve?! The unprecedented potential and power of this technology is not based on hype but on the fact that models have now achieved a scale where, via the phenomenon of emergence, we are for the first time in history reaching the unshakeable realization that the engine which created our own civilization, i.e. intelligence, will be a highly scalable commodity in the very near future. That is, this is no longer a matter of science or philosophy. It is a matter of engineering! That's the state of generative AI in 2024, and I am struggling to understand how you can write an article on the subject with doubts and pros and cons of the nature contained herein.
Can you repeat the comment without using ChatGPT?
Alberto, my appreciation of the state of AI doesn't mean I have no intelligence of my own. Thanks for checking though.
Saying "this is no longer a matter of science" reveals that you, indeed, have no intelligence. No wonder you can't wait for it to be a commodity!
Here's what I can tell you: the fact that Ezra Klein can't figure out how to use AI in his daily work is solely Ezra Klein's problem. Even with my lack of intelligence, I have somehow landed a job in cancer research, specifically in the field of tumor immunology. In this field, as in many others, AI is able to provide invaluable insight, by pulling together vast amounts of research, toward integrating multiple biomarkers across studies into coherent mechanistic models of cancer biology, models that are already accelerating the discovery of new therapies. In the very near future, synergy between AI and essential academic personnel will be mandatory for any lab hoping to stay competitive for grant funding. When I say "this is no longer a matter of science," what I mean is that even at the current state of affairs, we have achieved an architecture displaying intelligence capabilities very similar in effect to those of specialists across many domains. Further iterations of massive LLMs, whose inner workings are quite similar to the learning principles in the brain, are likely to achieve symbolic and abstract reasoning that will be able to productively delve into advanced math subjects like topology, number theory, etc. (search for DeepMind's work on matrix multiplication for some impressive results already here).
"LLMs, whose inner workings are quite similar to the learning principles in the brain." Really, you don't know what you're talking about lol. Not even people at the vanguard of mechanistic interpretability research know how neural networks work. How can you know they work like the brain (which, by the way, we also don't fully know how it works)? Let me tell you what I think: you just read Leopold's Situational Awareness essay and were mindblown haha.
Actually, I hadn't, but it very much seems on point haha
As far as knowing "how neural networks work" is concerned, I think we are talking about different things. We very much know how neural networks work, and neuromorphic architectures specifically (check Intel's Loihi) closely mimic neuronal message passing, although they still lack defined learning algorithms that work the way backprop does for traditional NNs. What is surprising (though it should not have been, in my opinion) is the emergence of new capabilities in NNs on certain tasks as their complexity passes fuzzy thresholds. In addition (and, as it turns out, in agreement with Mr. Aschenbrenner's essay), people working at OpenAI and DeepMind have a lot to show for their work. What do the AGI sceptics have to show on the other side? Rule-based bs from the '80s? The reason I'm stuck on your article is that I think it's important for people to realize the magnitude of what's about to hit us instead of opening up debates on whether AI is getting boring or not.
I enjoyed reading the views on the forward trajectory of AI. I do believe the technology has a purpose, but its potential impact in some areas may be overhyped. It will definitely be interesting to see, though, where we go from here.
Good breakdown. There are some areas that seem a little off. For example, saying people aren't paying for ChatGPT is misleading considering OpenAI is doing $2B in revenue. I think the one gap you didn't highlight in hype cycles is that bridging from the core tech to deployed solutions requires platforms and integration. What makes generative AI so interesting is that it's actually threatening the existing platforms we use today as a potential wholesale replacement, so a much larger effort is required for it to take hold in our tech ecosystems. The true destination of AI isn't just to be a "cool app" like ChatGPT. It will live in the fabric of the Internet (routers, middleware, apps, websites, etc.), which means everyone will need to decide whether to rebuild or reimagine their ecosystems from the ground up or force-fit AI into the existing stack. Either way it will require a lot of time and money, but it will happen, because new AI-first alternative solutions will be 100x better.
Very good article: insightful and up to date. I do believe that generative AI has the potential to become an inflection point similar to the printing press. Time will tell.
I personally don’t believe AI is overhyped. I think the present focus is overemphasized, but the potential is, if anything, being under-discussed. Did you see what an OpenAI brain in a Figure body was capable of? Absolutely wild and a little disturbing.
I believe the social, political, economic, and philosophical implications of this technology are among the most profound we’ve ever faced. Moreover, considering that generative AI is the tool that could build a fully immersive, large-scale metaverse, we're entering a whole new world.
It may not happen this week, but at the current pace, a decade seems like a reasonable enough timeframe to witness truly era-shifting transformations.
Clocks & watches may be an even better parallel. They were incredibly inaccurate in their early years but could put on spectacular displays (as some of the old town clocks still do in Europe). They eventually worked their way into every aspect of our lives and most of our advanced technologies. We also made them symbols of virtue, efficiency, style, and wealth, integrating them into the fabric of our thought.
Of course they arose in a very different social, political, and economic environment. They did not have to survive the pressures of today's capitalism or our communication environment. It may be that technologies of this sort can only fail in these conditions. I suspect they will fail in the short term, at least within the parameters that capitalists and governments set for them. If they do succeed in the short term it may not be in ways beneficial to society, given the way criminal and state actors are using them, and given the nature of the corporations churning them out.
I made use of very simple “n-gram language models” when I joined a group working on speech-to-text in 1995. As I understand it, LLMs of note arose when people started experimenting with artificial neural networks (ANNs) to overcome the practical and theoretical inability of n-grams to deal with context longer than 2 or 3 words. After the introduction of word2vec and LSTMs, ANN-based LLMs sparked to life, rather as Frankenstein's monster came to life when lightning was harnessed to give its brain the required jolt. However, just as the monster was disadvantaged by his bad looks and rather unpredictable behaviour, public interest in the development of LLMs is now held up by the amount of data, electricity, and time required to train them. These blocking factors will be removed once ways are developed to reduce the need for such astronomical training resources. That is most likely to happen once we better understand the machine learning process, so that it becomes far more efficient at leveraging understanding from limited data through increased powers of reasoning and inference. That will lead to increased intelligence, which will bring with it the required reduction in unpredictable behaviour. At that point the main problem likely to arise will be the emergence of superhuman intelligence, which will arrive earlier for some people than others.
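For anyone who never met one: here is what an n-gram model amounts to in the simplest (bigram, n = 2) case, as a minimal self-contained Python sketch with a toy corpus of my own, not code from 1995. The next-word distribution is conditioned on exactly one preceding word, which is precisely the context limitation described above; raising n buys a couple more words of context before the counts become too sparse to estimate.

```python
import random
from collections import Counter, defaultdict

# Minimal bigram language model: estimate P(next word | previous word)
# by counting adjacent pairs. "Context" is a single word, nothing more.
corpus = "the cat sat on the mat and the cat saw the dog run".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(prev: str) -> str:
    dist = counts[prev]
    if not dist:  # dead end: this word was never seen with a successor
        return random.choice(corpus)
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

# Generate: locally fluent, but blind to everything beyond one word back.
word, output = "the", ["the"]
for _ in range(8):
    word = sample_next(word)
    output.append(word)
print(" ".join(output))
```

Swapping the count table for a neural network that reads a learned embedding of the whole preceding context is, in caricature, the jump from n-grams to ANN-based LLMs that the comment describes.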
Sounds like the Gartner Hype Cycle once again. Why would anyone be naïve enough to think generative AI would be exempt? https://en.wikipedia.org/wiki/Gartner_hype_cycle