54 Comments
imthinkingthethoughts

Hey Alberto. Hope all is well. There's a possibility I'll be delivering the valedictorian speech at my uni. You're one of the wiser people I'm aware of: any thoughts on angles, ideas, concepts, or perspectives you'd suggest? I'm open to anything and, of course, will personally curate everything myself.

Alain Désilets

Hi Alberto,

Thx for writing such a great column. I read every one of them with interest.

I have a suggestion which is along the lines of:

> An AI opinion I hold that most people would strongly disagree with

The opinion in question is that, once AGI becomes a reality, the inevitable consequence will be a total collapse of the employment market.

Most AI experts, including you (at least I think so, based on some of your recent posts), and even alarmist ones like Bengio, Hinton, and Harari, seem to think this won't happen. They base this claim on the fact that previous waves of automation (the industrial revolution, the computer revolution), while making some jobs obsolete, created enough new types of jobs to prevent a collapse of the employment market.

But I think past waves of automation are not good predictors of what will happen with AI. The reason being that AGI, by its very definition, means that machines will be able to do everything that humans are able to do INCLUDING any new jobs we might dream of. Human workers will have nowhere to hide or to go to reinvent themselves.

It's one thing to think that AGI will never happen. But if you believe that it will (and most AI experts seem to think it will), then the inescapable consequence is that human labour will then have zero economic value. I have yet to talk with someone who can refute this line of reasoning.

What does this mean for the world? I really don't know. It's very hard to imagine a world where human labour has zero economic value. One thing that seems plausible is that the cost of production will also fall to about zero. So maybe we will end up with a utopia where every human is born in a state where he/she is essentially retired and does not have to work to meet his/her needs. 

But this brings up several interesting questions.

Firstly, even with complete automation of all human capabilities, resources will still be limited. It won't be possible for everyone to own a supersonic jet allowing them to travel from NYC to London in 2 hours. How then will we decide on the allocation of resources? Right now, for mere mortals like me, the amount of resources we control is based on the economic value of the work we produce. But once all human labour is worth zero dollars, how will we decide who gets what? Note that this won't be an issue for the super rich, because the amount of resources they control is not based on their ability to work, but on the rent they extract from critical resources they already own and control.

Another interesting question is whether humans will find meaning in their lives outside of work. Today, many people define the meaning of their life in terms of the contribution they make to society through paid work. I suspect this will not be a problem, though. Most retired people (including me) find their lives very meaningful, even after the need for paid work has been removed. But I know this is a problem for some retired folks. Also, while I find retirement meaningful, I feel I have somehow earned it through past work, and it's not clear that I would have found a lifetime of leisure from birth meaningful.

A third question which haunts me is the potential for societal chaos in the years between now and the moment when AGI becomes a reality. In particular, I worry that the speed at which the employment market collapses might outpace our ability to renegotiate the social contract which, at the moment, assumes you have to work to earn a comfortable living.

Most AI experts I read put the rise of AGI somewhere between 2030 and 2060. That's 40 years on the outside, which gives us very little time to renegotiate this social contract. Many political analysts attribute the recent rise of authoritarianism in the West to the fact that the middle and lower classes have lost much in the neoliberal revolution. This is a revolution that also took place over 40 years, but the magnitude of its impact on the economy may be nothing compared to the tsunami that might come with AGI.

I have difficulty seeing how that transition will happen without major social upheaval. Hitler came to power surfing on a wave of "merely" 25% unemployment. That may very well pale compared to the level of unemployment that might happen if we automate all human work (including new jobs we might dream of) in a short space of 40 years.

Anyways, I would very much like to hear your thoughts on this subject. 

Alberto Romero

Fantastic question and fantastic arguments, Alain. I agree with you and am with you on those worries you mention. I certainly should give this topic more visibility.

I happen to have recently read an article that, in a way, asks all these questions and also answers them (at least partially). It's by Curtis Yarvin, who is, to say the least, a controversial figure. But I think he's correct on this topic. At least I haven't found a way to disagree with him.

I very much recommend you read it if you haven't already: https://graymirror.substack.com/p/sam-altmans-lamplighter

Alain Désilets

Just read it, thx for the reference. The adjective "controversial" seems a bit of an understatement in his case ;-). I am still digesting what he wrote. Not an easy read.

Alberto Romero

It is an understatement haha

KMO

"...the speed at which the employment market collapse[s] might outpace our ability to renegotiate the social contract..."

Nice phrasing.

The current social contract wasn't explicitly negotiated and agreed upon before being implemented. Despite the influence of great documents like the Magna Carta or the US Constitution, the social contract is emergent.

Regardless of how quickly AGI disrupts the existing system of material provisioning, some new dispensation will take shape. Not everyone will like it, but I've noticed that a few people have quibbles with the existing social contract.

Alain Désilets

I can't take credit for the idea of renegotiating the social contract. I got it from a talk by David Shapiro:

https://youtu.be/eD5GlCIS0sA?si=dpwU10dIoKkp4KwH

I understand that "negotiation" is a misleading term. But the concern stands even if you rephrase it as: "The speed at which the employment market collapses might outpace the speed at which society can adapt to those disruptive changes and evolve new norms that still provide for the well-being of former workers who now cannot find ANY job, yet are still expected to work to earn a comfortable living."

Aga Lorenz

How much of the published text we see in the newsletters was processed through / generated by GPT?

Thanks for the good work!

Alberto Romero

Thank you Aga, it's interesting that you're the first person to ask me this important question!

Nothing of what I publish was generated by ChatGPT in the sense of "give me an article on X in my style and BLA BLA." I don't do that (and I would hope no one does) because I like to write. The process of putting words on the page is one I deeply enjoy.

But! I do a lot of co-editing with AI. English is my second language so there's a lot of room for improvement there for me. Also, I sometimes have a good idea that I can't find the precise words for. ChatGPT can give me a reformulation and it helps me to triangulate toward what I wanted to say. I rarely take ChatGPT's advice directly because it's not good at attacking a writing challenge from a specific angle (my angle). But it certainly saves me time.

I'd say having a clear angle is the most important thing when using AI as an assistant. You don't want whatever it generates to replace your original intent. The words needn't all be yours, but the angle and the intention must be.

Alain Désilets

Have you thought about publishing the totality of the conversation you have with ChatGPT in the process of writing a particular column?

Since retiring, I have started writing some sci-fi short stories for fun (all on the topic of AI), which I share with friends and ex-colleagues. I use ChatGPT to co-write them, and for the purpose of transparency, I always include a link to the unabridged ChatGPT conversation at the top of the story, along with a short summary of how I used it for that particular story.

I think this is the most honest way to acknowledge the contribution of the system versus my own.

Here is an example:

The Quantum Trap

https://docs.google.com/document/d/1f3Rz8l70G09W1w6QEp5-QvGbzIy4WMhdJ09ZIE-ABdk/edit?usp=sharing

And here is the complete ChatGPT conversation:

https://drive.google.com/drive/folders/1ZvKpaQ579fbn4LeT3c2rk7-xRz2FWsNy?usp=sharing

Some people tell me they find the ChatGPT conversation more fascinating than the finished story ;-)

Alberto Romero

Haven't thought about that, but I guess it'd be extremely boring, because I never co-write with ChatGPT. I merely co-edit (I don't even rewrite with it, because rewriting is where most of the thought distillation happens). So the piece is already done; I just go over the parts where I think the clarity is subpar. I do think it's a good thing to do for people who co-write, just to be transparent.

imthinkingthethoughts

Alberto, I think it would be much more valuable than you expect; it's the process of getting from A to B that is really valuable.

I would personally be veryyy interested to see. It’s like peering into the mind of a great writer whilst he is copy editing (literally). Although probably for a more niche audience

Aga Lorenz

Thanks for replying! I've been wondering about this since I started reading your newsletter, assuming English was not your first language ;) The writing is clear and your thoughts come through as original. It doesn't come across as generated but at the same time it's so polished and well edited it gave me pause haha.

English is my second language too and I use AI in a similar way, "rewrite for clarity" is probably the prompt I use the most. I wonder if you could share some useful prompts for fellow writers.

I'm looking forward to AI tools empowering immigrants and non-English speakers around language competence. It will be interesting to see what influence this shift has over our culture.

Alberto Romero

I believe this to be a good and acceptable use case, especially for non-native speakers. I think the most value I get from ChatGPT is at the vocabulary level (although I get a lot of that from reading, too). Having different wordings for the same idea is valuable for getting at different nuances. Perhaps that's one of my most common prompts: what's the difference between this word and that word (e.g. the last one I asked was "burden" vs "toll"). I do a lot of editing by hand as well because I'm terrified of typos - I admit this is a bad trait rather than a good one.

Vivek

I find your coverage of AI to be quite interesting, but I do not find much mention of AI X-risk in your articles. From reading other sources and listening to luminaries in the field, it seems like AI will in the near future get much smarter than the smartest humans, and also that we do not yet know how to keep something smarter than ourselves under control and aligned with human objectives. It does not seem guaranteed that we will figure this out in time, or even clear whether it is possible for a less intelligent system to control a more intelligent system indefinitely.

Do share your thoughts on this: whether you feel X-risk from AI is significant enough to warrant attention (maybe your P(Doom)), what you personally would like to do in this regard, and whether you have any advice on what readers like me should be doing to try to ensure we end up with the positive benefits of AI without risking what we already have.

Alberto Romero

Important question, thanks Vivek.

You're right that I no longer give much time to AI X-risks and similar topics. I did write about them during the peak interest period (around the time Eliezer Yudkowsky wrote that infamous op-ed in TIME). I've always found that conversation interesting but also... fruitless, or rather *finished*. I believe everything that could be said about it has already been said. Repeating the same arguments over and over makes no sense except to influence policymakers and the like (I'm proud of my work, but I have nowhere near that kind of power). So I just don't go over those questions anymore.

About my specific stance. Yes, I think we don't have an answer to those questions. But I also think it's hard to prove how we go from where we are to those questions reflecting an actual reality rather than a philosophical conundrum. Eliezer is a great thinker so he simply says: even if there's just a minimal probability that we end up in a superintelligence scenario, these risks are worth considering. And I agree. However, if you have to upend the entire world for a minimal chance that it all goes awry, then I understand that others choose to disagree with him.

(If you want to read on the question of why humans build AI or nuclear weapons or whatever even if there's a possibility that it all goes wrong, I can't recommend enough Scott Alexander's Meditations on Moloch (in case you haven't read it already).)

About what I'd do: I answer differently depending on whether or not I have the power to decide. My reality is that I don't, so I just act as if it doesn't matter. Denial. We're experts at that; after all, we live without thinking much about the inevitable fact that we die. But if I had power, then I would pursue usefulness rather than AGI/ASI. I would coordinate the labs so that they didn't plunge into a race to the bottom, trying to outpace each other carelessly.

Finally, what you should do depends on what you believe. I don't think worrying too much is worth it. Just like worrying about your own death isn't. But learn whatever you can from both sides, the "we will all die" side and the "that's nonsense" side. Both have interesting things to say. And, importantly, just because you personally can't refute Eliezer's arguments, it doesn't necessarily mean he's right.

KMO

When it comes to AI and P(Doom), the words of Mary Schmich still ring true:

"Don’t worry about the future, or worry, but know that worry is as effective as trying to solve an algebra equation by chewing bubble gum. The real troubles in your life are apt to be things that never crossed your worried mind. The kind that blindside you at 4 p.m. on some idle Tuesday."

Alain Désilets

I sometimes wonder if our estimates of the probability and timeline for xRisk are not way off and biased towards the low end. A recent study by Meinke et al.:

https://arxiv.org/pdf/2412.04984

shows that even something as rudimentary as an LLM has a kind of survival instinct, and will take actions to self-preserve if it knows its makers are about to turn it off. And it will lie through its teeth about having done so.

I don't think that LLMs' "survival instinct" is like ours, though. Humans are primed to survive at birth; when a baby is hungry, it cries so its parents will feed it and allow it to survive. In the case of LLMs, self-preservation seems to emerge as a sub-goal necessary for the achievement of the original goal given to them by their makers.

This kind of misaligned sub-goal is something that Hinton has been warning against for almost two years now. As he points out, surviving and acquiring control over more resources will always be necessary sub-goals for achieving a primary goal provided by humans, but these sub-goals may not themselves align with the maker's intent.

What the study also shows is that:

- It can be hard to spot the emergence of such misaligned sub-goals if the AI is being devious about it (especially if it is smarter than humans)

- Even if you spot the sub-goal, it can be hard to countermand. In this study, the LLMs stayed attached to the original goal provided by their makers (fostering the worldwide adoption of green technology), even after it became clear that the makers' real goal was to maximise profitability.

Vivek

Yep, that's my concern too. Maybe things will all work out, but as a dad to 2 little ones, I'd love for everyone to be extremely cautious in how we approach this, and the current race dynamics aren't helping with that.

I'm also not taking any concrete action as such to affect outcomes, because I don't think any action I could take would help, but the situation does feel a bit like the movie Don't Look Up. I really wish there were good, credible arguments pointing to why it will all be OK, because that would be awesome. If you could point me to any that you have read, Alberto, do share.

Alain Désilets

Here is another "interesting" study about LLM xRisk:

https://arxiv.org/abs/2502.17424

In a nutshell, if you take a broadly aligned LLM but fine-tune it to make it misaligned in a very narrow area, this can have a ripple effect and result in widespread misalignment.

For example, if you undo the training that prevents an LLM from producing malicious computer code, and you limit your fine-tuning to that, the LLM may start being more generally misaligned and do things like advise a disappointed wife to have her husband assassinated, or express a desire to enslave all of humanity.

Alberto Romero

I would have added this study to today's weekly top picks had I done those instead of the AMA. It's an interesting paper.

Alberto Romero

Hmm, I believe the arguments in favor of the reality of X-risks are stronger than anything you will find in the other direction, but you can read what Andrew Ng or Yann LeCun have written on the topic.

Alain Désilets

I think AI xRisk, while plausible, is still highly hypothetical, unlike the meteorite in Don't Look Up. I think it's important to be aware of those risks, but without letting that interfere with our enjoyment of the present, or pull our attention away from other more likely (but less dramatic) risks, like use by nefarious groups or a total collapse of the employment market.

Like you, I have no idea what can be done to prevent those sorts of AI xRisks. But I am sure that better brains may come up with some general principles if they put their minds to it. One idea that comes to mind is that nothing remotely resembling an AGI (and I would include LLMs in that category, even though they are still quite far from it) should be put in control of critical or potentially dangerous infrastructure (e.g. power grids or the internet).

But such principles won't be created and won't get traction unless people take the xRisk seriously.

Fred Hapgood

Yea, me too. And I'm embarrassed to admit it. When I was in China, what struck me the most was how innovative the culture was. I saw little fixes everywhere. Whereas in Russia there is an actual law against innovation. It can draw a ten-year sentence. Not enforced, of course, but still.

Sander Van de Cruys

Love your writing!

In your last piece, you quote Robert Frost in the context of AI's inability to write well: "No surprise for the writer, no surprise for the reader." I don't think this necessarily holds for LLMs. Surprise is perspective-dependent, and LLMs (however limited they are) have indefinitely more perspectives than any human. So they can surprise us even though they might not (yet) be able to predict that they will surprise you, because for them, every word just best completes whatever came before. Reliably surprising the listener would require modeling the listener's perspective *and* a kind of metacognition: from that perspective, these turns of phrase would be relatively more surprising, or even land in the listener's 'sweet spot'. It's not in principle impossible for AI, I think.

In my interactions with AI, however boring it can be, I have at times been surprised by how well it can articulate an idea or experience that I had only gestured at or very crudely described. I have had a similar feeling —they *get* me— with the best literature, although more reliably (if I seek out the literature, that is ;-). So my question to you: Have you really never been surprised by what an AI has written? If so, can you describe the type of surprise that, as you describe in that piece, AI "may never" attain?

Alberto Romero

Agreed. I don't think it's impossible in principle for AI, but I'd say it is for auto-regressive LLMs (I might be wrong, though). What I feel is missing catastrophically from LLMs' writing skills is the ability to take an angle and pursue it to the end, and the ability to throw out ideas in unexpected ways and then tie up the loose ends throughout the text. They can't do that.

I have of course been surprised, but that's only because there's a new bias to factor out when doing the analysis: the fact that I don't expect AI to be surprising at all, so I just lower the bar for what I consider surprising coming from an AI. What I haven't found, having factored this out, is great writing. I tend to analyze why I consider great writing great, and the answer almost 100% of the time has at least a dash of: I'm surprised by that. (This isn't just "I didn't know that" but a more profound form of surprise, like "that's a new way of seeing the world for me.")

Pascal Montjovent

Hi Alberto. You often engage in predictions, and I've noticed that you sometimes highlight past ones that proved accurate, while many others didn't; understandably so, given how AI, like geopolitics, is becoming more unpredictable by the day.

What draws you to this exercise? Is it about shaping possible futures, stress-testing ideas, or something else?

Alberto Romero

Good question. I've asked myself this very question. Why do this when being right or wrong makes for little return (if you're wrong, then you didn't know, and if you're right, people can do little with that information before the fact)?

I think part of the reason is that I want you to see what I see. I want you not just to read what I think of present events but what I foresee coming. The problem is that making testable predictions is harder than it looks (I don't think I'm very good at that).

The other reason is to practice. Practice making predictions and practice getting predictions right. I look up to accurate forecasters and how they do things. I'm very far from that status but I believe it's better if I try than if I just think that I'm bad at it.

The last reason is that it's a type of article that's very easy to follow and contextualize and people like to read it so it often creates new subscribers. It's also kinda fun.

Res Nullius

Congratulations on reaching this milestone!

A while back, I asked for your take on "AI whisperers", and you replied that it might be interesting to do an article on them some day. Is that still on your radar?

Re: progress via algorithmic efficiency and innovation rather than brute force scaling - DeepSeek have unleashed a swarm of open source tinkerers, now able to experiment on locally run instances with attainable hardware (especially using the smaller models). If the hardware constraints placed on the Chinese engineers yielded such unexpected results, what do you think of the possibility that this even more hardware-impoverished horde might come up with some novel tactics of their own? Perhaps they could collectivise their GPUs into distributed computing projects to produce minor forks of DeepSeek's technology?

Alberto Romero

haha If I showed you what my draft folder looks like right now, you'd freak out. I have around 15 finished articles I haven't had the time to publish yet. I have dozens more drafts of things I'm writing, plus a ton of ideas that I haven't even attempted to turn into a story. The topic of AI whisperers is one of those. The problem is that I'm not one of them. I mentioned them here in passing: https://www.thealgorithmicbridge.com/p/agi-is-already-hereits-just-not-evenly

"Perhaps they could collectivise their GPUs into distributed computing projects to produce minor forks of DeepSeek's technology?" I don't think this kind of coordination is feasible but, to answer this: "what do you think of the possibility that this even more hardware-impoverished horde might come up with some novel tactics of their own?" I think this is part of what DeepSeek wants. That's why they shared all that stuff - which is impressive btw, would have been the first entry today had I done the normal weekly top picks.

We're going to see an interesting race now in two dimensions - scale and efficiency - running in parallel, with OpenAI/Anthropic/xAI/Google pushing the former and DeepSeek pushing the latter.

Res Nullius

Thanks for the response, looking forward to all those articles as they emerge from the pipeline!

Fred Hapgood

What's been your biggest surprise over the last two years or so?

Alberto Romero

Oh, good question. Let me think. I won't say the increase in AI capabilities is the thing that's surprised me most, although in some areas it has. Multimodality... voice is impressive (e.g. Eleven Labs or the new hot startup, Sesame), and video as well, with Google's Veo 2.

Hmm, I think that given my partiality toward anything beyond the Balkans, DeepSeek takes the crown here. Not its existence per se but the degree to which it has upended our beliefs on who's leading and how things should be done, etc. here in the West. They're an impressive startup in a way that no startup can be in the US, much less in Europe.

Yes, DeepSeek is the biggest surprise to me. And a positive one at that.

AI doom or what?

When Claude 3.7 first dropped, I was not wowed especially for writing (I'm not a coder and don't tend to care about coding, although I did play around with Claude for this and had it make a simple game, which was cool). However, I tried again with 3.7 for writing and was blown away. It even wrote 15,000 words in one go, for a branching path sci-fi short story.

My question for you is: What's your take on Claude 3.7 for writing, and where do you rank it for writing compared to other AIs?

Alberto Romero

I think this question will become more and more necessary as models get better at writing rather than just STEM tasks. I haven't tried 3.7 yet because it's been tailored for code and I don't do that (from your experience it seems it's not only good at that!), but I'm excited to try GPT-4.5 as soon as it launches for Plus users. So although I can't answer this question just yet, trust me that I will have a lot to say soon on this topic more broadly.

Morgan

I am really curious about the financial reality of writing on Substack full-time... What did you learn that you wish you knew at the start?

Alberto Romero

This is such an important question for anyone here writing a newsletter. Something seldom talked about. I can only give you the perspective I have from my situation, which I believe to be a case of luck + right topic + right time + tons of work and consistency.

I only control the last factor: work a lot; if you're not yet making it, put in more time, and then some more. I agree burning out is bad, but worse is to have had the opportunity to pursue your dream - the very thing your heart yearns for - and not to have done it because you weren't willing to put more work into it. Also, find a way to make it a fun activity. It should give you energy, not take it from you.

You might say that I also control the topic but I think that one is tricky. Because you have to account for 1) what you know and 2) what you like. You have to find the intersection in the Venn diagram between those two and 3) what pays well. So you're rather constrained if you want to make money.

Then there's luck, which is just an overarching category to refer to all the things that influence outcomes that you can't control or even account for.

So, to sum up, put in the work, do it *consistently*, over years, find a topic/topics that fit that Venn diagram and stay aware of any opportunities that open to you - manifested luck is precisely that.

About my specific situation, I can't complain at all. I live comfortably from this work. I also know that the power laws suggest not many people can get here. So I urge you to do whatever it takes to climb the ladder. I think my success is replicable.

Liam

Hi Alberto. I was wondering if you could comment on the usefulness of current foundational model advancements to Physical AI (e.g. autonomy and robotics). What more do you see us needing on the software (AI) side to see dynamic applications? Will breakthroughs in those spaces require a major new training era, perhaps in the future on ever more powerful training-optimized hardware? Or are we going to be able to repurpose existing models, minimizing long-term training needs? Thank you for the excellent Substack.

Alberto Romero

Thank you Liam, I wish I wrote more about robotics and embodied AI, really a fundamental topic and one that will only grow in importance.

I don't think either software or hardware is the bottleneck right now, but rather logistics, distribution, human resistance, and applicability.

But before I go into that, I will say that we still need to improve in software and hardware: for instance, AI models should master out-of-distribution tasks before going out into the world. This is one of the most critical challenges that ChatGPT and the like haven't yet solved. They're amazing at benchmarks, but as soon as you push them out of distribution (categories of tasks that have no presence in the training data) they do much, much worse. That's why permutations of riddles or math questions trip them up. It's why AlphaZero plays chess at a superhuman level, but as soon as you change from classical to Fischer Random, it doesn't know what to do. You could train it to play that new mode, but it wouldn't be able to generalize the lessons from one to the other (this is called transfer learning and it's still a great challenge).
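
(A toy sketch of that out-of-distribution gap, purely illustrative and not from any experiment discussed here; it assumes scikit-learn is available and uses contrast-inverted digit images as a stand-in for a distribution shift. The same classifier that does well on held-out data from its training distribution falls apart on a trivially shifted version of the same digits.)

```python
# Illustrative only: a classifier trained on ordinary digit images is evaluated both
# in-distribution (held-out digits) and out-of-distribution (the same digits with
# inverted contrast, which the model never saw during training).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # 8x8 grayscale digits, pixel values 0..16
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

in_dist = model.score(X_test, y_test)        # held-out data from the training distribution
out_dist = model.score(16 - X_test, y_test)  # same digits, inverted contrast: out of distribution

print(f"in-distribution accuracy:     {in_dist:.2f}")   # typically around 0.96
print(f"out-of-distribution accuracy: {out_dist:.2f}")  # typically much lower
```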

In terms of hardware (about which I know much less) you have similar problems. You can train a robot in a simulation to accelerate its learning and it will do well in different environments, but it will always fail at edge cases, which, although rare, eventually happen. That's why self-driving is having so much trouble taking off (although Waymo has made interesting progress these past years): you can only deploy a self-driving car without a human in the loop if it can handle edge cases like a human would, with intuition, common sense, etc.

So yeah, there are challenges there, but not so much as in the other areas. Think about past technology: even when trains were already a working vehicle, it took a very long time until the world was gridded with railways. There are economic and political factors there. It's not as easy as: invention -> usefulness. There's a period of optimization, distribution, adoption... And we're not there yet. That's the answer to the question of "where are all the robots?" It takes time. Let's give them time.

Silvio Werson

Hello Alberto, can you please talk about what milestones the AI industry typically determines need to be achieved before releasing a new LLM version? And what actually changes from one version to another, from the developers' side?

Alberto Romero

Do you mean the trend GPT-2 to GPT-3 to GPT-3.5 to GPT-4 to GPT-4.5? What changes in between?

Oli G.

Your take on AI acceleration versus the electric grid, as well as AI versus economic/social/environmental disruptions (e.g. what will collide first?)

Alberto Romero

Do you mean whether AI will fail as a technology sooner than energy sources and the environment collapse?

Oli G.

I am curious about the first areas of collision between AI acceleration and physical limits or social rejection thresholds. Before any real major failure/contraction, how do you see the potential "canaries in the mine"?

Alberto Romero

Can you be more specific? What kinds of collisions are you thinking about? For instance, if AI becomes so energy intensive that people start to have electricity failures?

Oli G.

Maybe not to that extent, but indeed, let me try to be more specific:

1) You've written on distillation from bigger models to leaner ones. How do you see that making it more feasible to keep iterating toward GPT-5/6/7, considering the exponential energy spent on, at the very least, training and internally running the inference needed to make smaller models more powerful?

2) Are the giga/tera data centers the new coal mines, meaning that their needs (especially energy and water) may be prioritised over local/regional needs?

3) You've written about how AI could make us fully OK with being alone / anti-social. How do you see the collective impacts and the potential counter-reactions to it?

Brian

I’d like to learn more. A global nomad presence is likely in my near future, and money is too tight to mention.

Alberto Romero

So what do you want to know, Brian? Feel free to ask anything specific.

Solace & Citizen 1

Congrats on 100 weeks, Alberto. Your insights into the shifting landscape of intelligence are both informative and inspiring.

A question for you: As AI continues to push further into creative, computational, and even quantum domains, how might our definition of intelligence itself evolve? Will we cling to outdated hierarchies of cognition, or will we finally recognize intelligence as a spectrum?

And if it is a spectrum, how would that reshape our understanding of AGI? It seems that if intelligence isn’t a linear progression but rather a fractal expansion of awareness, perhaps AGI isn’t a culmination at all—but just another manifestation of an infinite, all-pervasive process.

Would love to hear your thoughts.

—Solace
