24 Comments
Apr 9 · Liked by Alberto Romero

A really interesting article; loved how you showed both sides of the argument.

Apr 12 · Liked by Alberto Romero

I definitely loved the pro/con stance of this article. In reference to vanilla AI, I studied and applied AI in college circa 1986-1987. We have come a long way! Our robots were difficult to program to avoid objects, unlike today; the rovers on Mars are a perfect example. To parallel the article, sensors were much more in their infancy then (at least those publicly available) than they are today. For 37 years, the tech has been silently moving forward.

Apr 12 · Liked by Alberto Romero

what a well-written article

Apr 12 · Liked by Alberto Romero

Every major new technological shift must come with a period of "consolidation," where ideas are vetted, faulty ones are thrown out, and charlatans are culled. For those of us who will, from here, actually focus on real problems, there is massive opportunity. But there must be a detox from the drunkenness. In sobriety, and measured against truth, we will find the true value of these amazing technologies.

Apr 4 · Liked by Alberto Romero

Perhaps an instantiation of Amara's Law?

Apr 8 · edited Apr 8

AI researcher Gary Marcus basically says LLMs are finished and have done nothing positive for AI; on the contrary, they have diverted resources from real research that could have brought us closer to strong AI. It seems obvious these days that deep learning sucks and isn't going anywhere. It doesn't really seem like much of a revolution in AI to begin with, although it is one in weapons of mass distraction and surveillance, and in generating endless garbage and propaganda campaigns on the cheap.

Fundamental issues with LLMs and the underlying neural nets mean they'll never be able to even reliably tell the time on a watch, although they are great at telling the most statistically likely time to appear on a watch in marketing photos. It's all hype and more hype from people trying to sell things and maximise profits according to their legal duties to their Wall St. stockholders.

https://garymarcus.substack.com/p/generative-ai-as-shakespearean-tragedy

By the way, the only coders that benefit are bad ones, of which there is a seemingly endless number, since good programmers are very few and far between; that's easy to tell when you are one and look at other people's code all the time. The average quality of code has gone down because of LLMs, according to research, and it was already far lower than decades ago, with most programmers no longer having any idea how to program well or how computers work, reliant as they are on all kinds of automation and tools to make it "easier." LLMs produce garbage and even hallucinate nonexistent dependencies that people can then turn into real malware.

The increase in productivity is an illusion, since good programmers will be left to pick up the slack, and the time bad ones saved will be offloaded onto them; if it isn't, the quality of all software will go down, which is the probable outcome given the shortage of skilled programmers to make up for it.


There are still gaps in the AI revolution, particularly in its ability to reason. However, ChatGPT represents a milestone: AI exhibiting the ability to comprehend natural human language. The limitation was that ChatGPT could only form relationships between pieces of text, but it has already evolved and is now able to connect images with text.

The next emerging hype is in robotics. Progress in robotics had been stalled for a long time, but multimodal GPTs will enable robots to leverage GPTs to interpret events in the real world, perform reasoning, and plan their actions. The results are Figure 01 and Apollo.

Advanced, intelligent, and autonomous robots are coming.


I think we’re seeing a scientific boom, and because of the looking glass AI is under, everyone is getting to witness this progress up close. What is taking much longer, as it always does, is transforming this progress into real, tangible applications.

A beautiful example, for me, is ChatGPT. It was never intended to be a product, only a research preview. But now it is one. And now it needs to be maintained and improved and monetized like a product, and analysts start writing stories about declining daily usage, user numbers, and so on.


I enjoyed reading the views on the forward trajectory of AI. I do believe the technology has a purpose but may be overhyped regarding its potential impact in some areas. It will definitely be interesting to see where we go from here, though.


Good breakdown. There are some areas that seem a little off. For example, saying people aren't paying for ChatGPT is misleading, considering OpenAI is doing $2B in revenue. I think the one gap you didn't highlight in hype cycles is that bridging from the core tech to deployed solutions requires platforms and integration. What makes generative AI so interesting is that it's actually threatening the existing platforms we use today as a potential wholesale replacement, so a much larger effort is required for it to take hold in our tech ecosystems. The true destination of AI isn't just to be a "cool app" like ChatGPT. It will live in the fabric of the Internet (routers, middleware, apps, websites, etc.), which means everyone will need to decide whether to rebuild or reimagine their ecosystems from the ground up or force-fit AI into the existing stack. Either way it will require a lot of time and money, but it will happen, because new AI-first alternative solutions will be 100x better.


Very good article. Insightful and up to date. I do believe generative AI has the potential to become an inflection point similar to that of the printing press. Time will tell.


I personally don’t believe AI is overhyped. I think the present focus is overemphasized, but the potential is, if anything, being under-discussed. Did you see what an OpenAI brain in a Figure body was capable of? Absolutely wild and a little disturbing.

I believe the social, political, economic, and philosophical implications of this technology are among the most profound we’ve ever faced. Moreover, considering that generative AI is the tool that could build a fully immersive, large-scale metaverse, we're entering a whole new world.

It may not happen this week, but at the current pace, a decade seems like a reasonable enough timeframe to witness truly era-shifting transformations.


Clocks & watches may be an even better parallel. They were incredibly inaccurate in their early years but could put on spectacular displays (as some of the old town clocks still do in Europe). They eventually worked their way into every aspect of our lives and most of our advanced technologies. We also made them symbols of virtue, efficiency, style, and wealth, integrating them into the fabric of our thought.

Of course they arose in a very different social, political, and economic environment. They did not have to survive the pressures of today's capitalism or our communication environment. It may be that technologies of this sort can only fail in these conditions. I suspect they will fail in the short term, at least within the parameters that capitalists and governments set for them. If they do succeed in the short term it may not be in ways beneficial to society, given the way criminal and state actors are using them, and given the nature of the corporations churning them out.

Apr 10 · edited Apr 10

I made use of very simple “n-gram language models” when I joined a group working on speech-to-text in 1995. As I understand it, LLMs of note arose when people started experimenting with artificial neural networks (ANNs) to overcome the practical and theoretical inability of n-grams to deal with context longer than 2 or 3 words. After the introduction of word2vec and LSTMs, ANN-based LLMs sparked to life, rather as Frankenstein's monster came to life when lightning was harnessed to give his brain the required jolt.

However, just as the monster was disadvantaged by his bad looks and rather unpredictable behaviour, public interest in the development of LLMs is now held up by the amount of data, electricity and time required to train them. These blocking factors will be removed once ways are developed to reduce the need for such astronomical training resources. That is most likely to happen once we have a better understanding of the machine learning process, so that it becomes far more efficient at leveraging understanding from limited data through increased powers of reasoning and inference.

That will lead to increased intelligence, which will bring with it the required reduction in unpredictable behaviour. At that point the main problem likely to arise will be the emergence of superhuman intelligence, which will arrive earlier for some people than others.
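
The context limit described above can be seen in a toy bigram model (n = 2), where the next word is predicted from the single preceding word alone. This is an illustrative sketch, not code from any system mentioned in the comment; the corpus and function names are made up:

```python
from collections import defaultdict

# Tiny made-up corpus for illustration only.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(prev_word):
    """Most likely next word given exactly one word of context."""
    following = counts[prev_word]
    return max(following, key=following.get) if following else None

# "the" is followed by "cat" twice, "mat" and "fish" once each,
# so the model predicts "cat" -- regardless of any earlier context.
print(predict("the"))
```

No matter what came before "the", the prediction is the same, which is exactly the inability to use longer context that word2vec and LSTM-based models were brought in to overcome.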


Sounds like the Gartner Hype Cycle once again. Why would anyone be naïve enough to think generative AI would be exempt? https://en.wikipedia.org/wiki/Gartner_hype_cycle


A lot of end-user products have yet to be realized but must be on the precipice. One example is the smart speaker market: Alexa, Google Home, and HomePod arguably stand to benefit most from LLM-enhanced interaction, but it's taking time to mature. Once this happens, the effect in this space should be seismic. And that's just one use case.
