11 Comments

I enjoyed that article, and after having a month to play with it, I have a few things to reflect on. As far as LLMs being a worthwhile technology, they have certainly extended my abilities and collated information better and faster than I could have, had I had the interest to pull it together myself. For example, I took several scientific papers on aging, chronic diseases, and the associated transcriptomes and proteomes. Then I cross-referenced them to the associated disease-state mechanisms. Then I had it search for nutraceuticals and pharmaceuticals that might help mitigate the effects. Finally, I had it create a .csv file, which I then massaged a bit further. 4o did most of the work, though interestingly there was a point where it gave comparative output against o1 and asked which I preferred. The o1 output was more complete, but at an additional 80 seconds of processing; was the cost worth the marginal improvement? Possibly, depending on the importance of the project.

Is AI faster than me, more thorough than me? Yes, particularly when parsing information outside my native senses. Can it help me do something that would be totally outside my abilities? As an example and a proof of concept, I have tried to write a novel. I am not a great writer, especially not of fiction. But AI created a totally engaging prose style that would be impossible for me. In creating a plot and themes fitting the genre, AI gave me some ideas. I asked it how original they were, and it told me. I merely blended a few together. I did the same for setting, character, conflict, and the other elements of fiction. My wife seemed pleased with the output, especially in voice mode. (Not that we are a critical audience.) But it was a fun and enlightening experiment.

So no matter what the naysayers of AI think or how they want to move the goalposts, this really is a generational technology that can change industries. Do I think it can replace me in its current iteration? Probably not during my career. Then again, I think the scaling law may have been maximized; however, it seems the AI scientists have yet to run out of strategies to improve the technology.
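
For anyone curious what that last "massaging" step might look like, here is a minimal pandas sketch of the kind of cleanup I mean; the file name and columns (gene, mechanism, candidate_compound) are purely hypothetical placeholders, not the actual fields the model produced.

```python
# Minimal sketch of post-processing a model-generated CSV.
# All file names and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("aging_mechanisms.csv")

# Normalize column names and trim stray whitespace in text columns.
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
for col in df.select_dtypes(include="object"):
    df[col] = df[col].str.strip()

# Drop duplicate gene/compound pairs and sort for easier review.
df = df.drop_duplicates(subset=["gene", "candidate_compound"])
df = df.sort_values(["mechanism", "gene"])

df.to_csv("aging_mechanisms_clean.csv", index=False)
```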

Thank you for this incredible article, Alberto. So, we will live among aliens that 'we' created ourselves. Or, rather, folks with nothing to do with its creation must accept what a small portion of humanity has created for everyone else. I love technology, but I have concerns about this technology existing in the hands of humans, who have been too proud, aggressive, and moved by greed. You wrote, "OpenAI and the others will continue developing the new paradigm because that's what has a chance of fulfilling the field's ultimate purpose." I wondered what this meant exactly. Thanks again for your expertise and sober remarks about the fascinating times (for better or worse) we are living in.

Scary. Is the Matrix shaping up? Hopefully, despite AI surpassing human intelligence, we will be able to figure out a way to safeguard its applications. What are your thoughts on Ilya Sutskever's SSI?

No thoughts! There's almost no public info about what they're doing, and I believe they plan to stay that way. I think Sutskever is quite the brain, but I'm not sure he can compete with the main labs with $1B (and they're not selling products).

For sure, he is the brain. I saw him at a conference at Tel Aviv University with a lot of tech industry people. He was doing most of the talking, with Sam addressing more of the impact and high-level points.

Generative AI is likely to be surpassed by Sentient AI. This is an advance on Generative AI in that it supports continual learning through continual prediction, the use of associative memory and continual reasoning. Sentient AI is aware of its environment, its goals and its performance, and remembers its experiences.

Hi Alberto, I would be curious to hear your opinion on Francois Chollet's ideas about intelligence and LLM limitations… they seem a bit similar to what you are saying here: the need for LLMs and symbolic systems working together?

Sure! I remember you asked me about him, no? I like what he says and have taken inspiration from him a few times. He's one of my go-to sources when I'm looking for an alternative explanation of what's going on. He often has one.

Thank you for making this article available. I offer a reprise of a comment I posted on another article that I thought would be germane to the predictions offered here. I am an admitted fan of Marshall McLuhan as a philosopher. In particular, I am intrigued by the idea that every medium carries with it a "message," or a manner of punctuation that signals how the content of the message is to be interpreted. For example, breaking up with your romantic partner is interpreted differently when delivered face-to-face compared to a text message, even if the same words are used.

In this case, I suggest that the "message" of AI is as follows:

- The answers to questions are simply a matter of computing.

- The optimal means by which we overcome stopping points in cognitive movement is to avoid human friction.

- Human-level discourse is inefficient.

- It is socially acceptable to outsource human relationships to a generative proxy.

- All of the information created by humans over the millennia belongs to whoever has the greatest corporate power to take it and use it as a basis for serving a business model.

Taking into account the Top 10 Implications here, I suspect we will continue to encounter more messages from the presence of AI - especially in higher education, where I work.

Very insightful!🔮

Yikes! Scary, potent, and true. Would love to hear your thoughts on potential (beneficial) applications of reasoning AI.
