16 Comments

History has a funny way of repeating itself. Back in the '80s, expert systems (a.k.a. applied AI) were all the rage, and they were oversold and overhyped as well. Eventually the industry collapsed and we dropped into an AI winter where everyone was skeptical of what AI could actually do. Sadly, I think we're heading toward the same conclusion for GenAI. It's easy to think that GenAI is the only AI, and that is far from the truth...

I agree with your last sentence. The one before that... You may be surprised by part 4 of this series!

Be careful Alberto, you're getting close to the Gary Marcus bear den! In all seriousness though, I think one of the problems with adoption has been the speed of things. It seems as though every time you finally understand the applications and workflows, someone else has come out with a brand new product that changes what preceded it. For the casual AI user this becomes incredibly frustrating and time-consuming.

Haha, I agreed with him more in the past. Less now. The later parts are bullish in a way he'd never accept! About the pace of change, I disagree. Most users aren't aware of any change whatsoever because they only know ChatGPT and, in some cases, Meta AI (which doesn't work quite like ChatGPT) and Google Gemini. The main problem as I see it is the lack of a clear value proposition. What is ChatGPT useful for? You have to figure it out. However, the reward can be "this is a f*cking bargain!" or "this f*cking sucks!" and everything in between.

I like your down-to-earth approach, and all the useful comments here. As a non-native English speaker working in online marketing, I've been searching for useful use cases. One really useful one for me is translating text on the fly. This has saved me many hours a week so far. GenAI is also helpful in a 'co-editor' role that helps me write clearer articles. But IMHO a human will have to do the first initiating thinking and give orders to the AI. In that case AI is like an intern or a freelancer at best: good at completing predefined tasks. But for my productivity it doesn't matter much whether I outsource basic tasks to an intern, a freelancer, or AI. I have to give instructions, connect the dots, and check the delivered output. AI is, in my current opinion, just another tool in the toolbox, like algorithms, APIs, and more. And the time it saves me will be taken up by other things from my to-do list, with lower priority, thus making less impact in terms of productivity.

Thank you Tjeerd. As a fellow non-native English speaker, I agree!

Remarkable piece! I have been thinking about a lot of these things myself, but you've articulated them really well. For a technology that's supposed to be super revolutionary, actual scalable business applications are limited or nonexistent. Waiting for the other parts.

“AI evangelists never intended for the technology to be as revolutionary as they claimed” — what makes you think they weren’t believers in their own story?

Ilya Sutskever was leading chants at OpenAI to "feel the AGI". Don't you think there was this brief period in SF where they actually all believed scale would lead us straight to AGI? I'm personally a big believer that if something can be explained by ignorance rather than malice, it usually should be. This wasn't 4D chess. This was magical thinking.

I'm not saying nobody believes (in a recent response to Dario's essay I said I think he believes). I say Big Tech specifically--the ones with real power because they own the services we use and the computing power to build AI--doesn't believe it. Nevertheless, OpenAI can't chant "feel the AGI" and have us buy it if at the first opportunity they use AGI to renegotiate the terms of the contract with Microsoft: https://www.nytimes.com/2024/10/17/technology/microsoft-openai-partnership-deal.html.

(For the record, my current view is: Ilya, Dario and Demis believe more than the rest, including Altman and certainly Big Tech CEOs.)

The hype around generative AI is outpacing reality. This pattern mirrors past tech booms, like the dotcom and blockchain eras, where a few technologies succeed, but most fail. Many people lose money, yet the cycle persists due to the allure of quick wealth. Those claiming GenAI will become AGI should remember Carl Sagan's words: "Extraordinary claims require extraordinary evidence." Without such evidence, the focus should shift from hype to substantiation. However, high costs for new models often necessitate continued hype to secure funding. Despite recognizing its value, I find the product's cost unjustifiable based on my experience over the past 18 months.

Real-life applications demand 99.99% certainty, something statistical models alone can't provide. If a light switch worked only 90% of the time, we'd avoid using it if alternatives existed. We currently have other technologies that give us better reliability in most scenarios. I believe the present models won't achieve what the hype suggests. We need different or hybrid models that combine statistical and rule-based or other approaches to reach the next phase.

After reflecting on this a bit, I feel AI will gain ground in every business segment moving forward. If it weren't that useful, people wouldn't be signing up for it and using it on an ongoing basis. What we'll observe may be similar to the 1980s, when Walmart took over much of retail in the US. Because of its new supply chain management strategy, it drove down the cost of products, which made retail unsustainable for small family-owned stores. Consequently, the cost of labor decreased substantially. Now retail is controlled by only a handful of oligopolies; only scarce family-owned boutiques trying to carve out a niche market remain. Something analogous may happen to the world labor market now that AI has been introduced. The segment of workers able to use AI effectively will differentiate themselves from their peers. For example, one person might be involved in extracting information or trends from a database. With an LLM interfaced to an AI agent, which is in turn interfaced to the database, that person might find more insights, quicker and more completely, than another who does not understand the new technology. Revenue and market opportunities should follow for the company. One employee becomes much more valuable to the company and the other, less so.
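The LLM-agent-database setup described above can be sketched minimally. This is an illustrative toy, not any particular product: the `llm_to_sql` function is a hypothetical stand-in for a model call (here a hard-coded lookup so the example runs on its own), and the table and question are invented for the demo.

```python
# Minimal sketch of the "LLM -> agent -> database" loop described above.
# llm_to_sql is a stand-in: a real system would prompt a model to translate
# the question into SQL; here it's a canned lookup so the example is runnable.
import sqlite3

def llm_to_sql(question: str) -> str:
    # Hypothetical translation step; a real agent would call an LLM here.
    canned = {
        "top product by revenue":
            "SELECT product, SUM(amount) AS revenue FROM sales "
            "GROUP BY product ORDER BY revenue DESC LIMIT 1",
    }
    return canned[question]

def ask(conn: sqlite3.Connection, question: str):
    # The "agent" step: turn the question into SQL, run it, return rows.
    return conn.execute(llm_to_sql(question)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (product TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("widgets", 120.0), ("gadgets", 80.0), ("widgets", 40.0)])

print(ask(conn, "top product by revenue"))  # [('widgets', 160.0)]
```

The point of the sketch is the division of labor: the human still frames the question and checks the answer; the model only handles the translation step in the middle.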

What if the main use of Generative AI turns out not to be for productivity? We seem fixated on that, I guess because that is what our business leaders crave. What if we are all looking in the wrong direction?

Interesting. What alternative are you considering?

I do not know that I have anything clearly in mind. The most obvious one, often thought of as the opposite of productivity, might be pleasure. Or maybe some of the stories floating around about using it for animal communication will work out, and we will all be Doctor Dolittles. Maybe it is going to make its mark mostly as a consumer toy, or maybe it will be in areas we cannot figure out yet. Remember that Alexander Graham Bell thought of the telephone as a potential way to get more music into households, much as consumer radio later became. Honestly I expect its main use will end up being for mass manipulation and surveillance like most of the other digital technologies that have emerged in recent years. I don't think it is going away, don't think it will lead to AGI, but don't see it as a positive force overall. It will take a lot of hard work for it to become an overall positive force, and I don't think the makers of these tools will be the ones to do that work.

Pleasure is my bet as well. Productivity will be a part but not the biggest one. It's also as you say - we can't fathom what future generations will do with these things!
