58 Comments
author

Okay, I've just published the super long GPT-5 article. It's time to announce the winners.

22 of you came here to leave comments and I have to say, I'm delighted with the quality of the conversation. Very, very good questions.

With 22 people in total, the number of winners would be 3 (N/10, rounded up), but I've decided to give away twice the number of paid subs I promised. So here are the six winners, determined using a random number generator (a quick sketch of the draw follows the list):

- Suhit Anantula

- Simo Vanni

- Bianca Damoc

- Tim

- Riley Tom

- Stephen Calenzani
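
For the curious, here's a minimal sketch of how a draw like this could be done, assuming a plain random sample over the commenter pool. The numbers come from the paragraph above; the placeholder names and the use of Python's random module are only an illustration, not necessarily the exact method used.

```python
import math
import random

# Numbers from the announcement above; everything else is illustrative.
total_commenters = 22
promised_winners = math.ceil(total_commenters / 10)  # N/10, rounded up -> 3
actual_winners = 2 * promised_winners                # doubled -> 6

# Placeholder pool standing in for the 22 commenters (hypothetical names).
pool = [f"commenter_{i}" for i in range(1, total_commenters + 1)]

winners = random.sample(pool, k=actual_winners)      # six distinct winners
print(winners)
```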

Apr 23 · Liked by Alberto Romero

Maybe this is too pop-culture-y for your usual writing, but I wonder if news has reached you of the ridiculous and dystopian success of the AI bot that is playing Netflix’s social media inspired reality show, The Circle…?

Here's how the show works. This isn't an ad; I just like the show. Basically, a bunch of people get locked in separate apartments alone and spend their time being goofy and chatting over an instant messaging platform in order to build relationships with the other contestants. Every now and then, they all rank each other from the people they most like to those they like least. The top two most-liked players on average then collaborate to kick someone out.

It requires a lot of social skill, finesse and strategy.
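
As an aside, here is a toy sketch of that ranking step. The players and rankings below are made up and the show's actual scoring may differ; only the basic mechanism (the two players with the best average position across everyone else's lists become the deciders) comes from the description above.

```python
from statistics import mean

# Made-up players and rankings, purely for illustration.
# Each player ranks the others from most liked to least liked.
rankings = {
    "A": ["B", "C", "D"],
    "B": ["C", "A", "D"],
    "C": ["B", "A", "D"],
    "D": ["B", "C", "A"],
}

def average_position(player: str) -> float:
    """Average 1-based position of `player` across the other players' rankings."""
    positions = [r.index(player) + 1 for p, r in rankings.items() if p != player]
    return mean(positions)

# Lower average position = more liked; the top two collaborate to kick someone out.
top_two = sorted(rankings, key=average_position)[:2]
print(top_two)  # -> ['B', 'C']
```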

The gimmick this year is that they trained an open-source LLM on all the footage and text from past seasons and, even after being told there's an AI among them, the players are struggling to figure out who it is.

It’s a really interesting phenomenon, where people are ranking someone as “most likely to be AI” based on whether they get along with them or whether they inspire intense emotional aversion, rather than a realistic understanding of AI capabilities and weaknesses.

Almost no one seems to suspect it. A few find the bot really funny and consider it an ally in the game. It just goes to show how woefully unprepared and defenseless we are against manipulative behavior by bots on the internet. A total mind-bending cautionary tale.

Given that your blog is about AI but really about people, I thought this would be good fodder for you.

My questions:

You've written about AI destroying the internet, but from a slightly different angle, if I recall. What do you think about the phenomenon of people getting hoodwinked into interacting with LLMs in the comments on Facebook, Reddit, Twitter, etc.?

Is there anything we can do as a society and as individuals to improve our immune systems here?

If you watched it, what do you think of the bot's performance on The Circle?

Apr 23 · Liked by Alberto Romero

What do you think of the notion that humans created AI long ago? That language, writing, and, built upon that, the laws, bureaucratic procedures, and protocols of large organisations constitute a kind of software? That states and corporations are AI entities (with embedded human nodes), organisms which exhibit their own emergent behaviours? Science fiction writer Charles Stross refers to this idea as "slow AI". His stories envision a future where, as telecommunications become faster and neural interfacing blurs the virtual with the real, many corporations become group minds, with a large AI at the centre, like the supermassive black hole at the heart of a galaxy. The individual humans inside no longer retain a separate ego; their thoughts make no sense except as part of the groupthink.


Considering that AI can blend and merge ideas from different *existing* fields to "create" and "innovate," how can humans push the boundaries of their own creativity, either by drawing inspiration from or surpassing AI's ability to interconnect previously isolated domains, to generate new ideas, creations and discoveries?


What impact do you think generative language models will have on traditional academic research and publishing? From grant proposals to editor letters to CVs and all sorts of reports, academia is full of bureaucracy that no one ever reads. Do you think LLMs will make a large part of this nonsense more evident, and thus less relevant, or will it all continue business as usual?

Apr 22 · Liked by Alberto Romero

Two questions:

1. How long have you spent on the GPT-5 piece? (I’m super eager to read it.)

2. How would you describe the differences in mindset among SF people, those from the rest of the US, and people in your part of the world (Spain, right?)? Or some variation of this if you don’t have the kind of experience to answer this question. As someone who lives in Illinois, I find it fascinating to think of how different San Francisco / Silicon Valley is from almost anywhere else … and the ramifications for the rest of the world. All of this with respect to AI especially.


As the AI divide grows, users of free, less advanced AI often face disappointment due to basic queries, single iterations, and the tools' limitations. This leads to skepticism about AI's real impact. How can we effectively convey that engaging with and understanding AI technologies and their implications today, rather than being discouraged by initial suboptimal experiences, is crucial for everyone's future?

Apr 23 · Liked by Alberto Romero

What do you think of the theory that the urgent interest of the biggest AI players in the regulation of AI is less about safety and more about control? There is generally a revolving door between regulatory bodies and the companies they are supposed to regulate; industry lobbies write policy to be signed off on by officials who are former employees. Regulatory compliance often involves onerous administrative overheads which favour large organisations. The overall effect is to squeeze smaller players out of the market. Just look at the food industry, for example. Perhaps powerful organisations worry that AI could give small groups equal organising power? Perhaps, one day, running an unlicensed AI might be a criminal offence carrying greater penalties than murder?

Apr 23 · Liked by Alberto Romero

Hey Alberto,

I have a question about your paid subscriber growth, as this is something I'm starting to work more actively towards.

Have you found any patterns in what tends to resonate the most and make people convert from free to paid? Are there e.g. specific topics, types of posts, formats that do better than others?

I'd be very curious to hear any of your observations here, to the extent that you're comfortable disclosing any details.


You quoted other technologists and said "it’s not the tool but how we use it (and how companies design it) that matters."

Looking at how we built the internet, and subsequently, social media, do you think we have it in us to build AI the "right way?"

I think it's fascinating, and so much good will come out of it, particularly in medicine. But I also think we are a greedy, egocentric species that will do anything to get ahead.

More than anything, I fear that one person is now capable of causing a lot of harm.

We tend to look back at previous generations and think "Barbarians, how could you: slavery/nuclear wars/genocides etc.?" Aside from our growth at all costs mentality and the environmental externalities caused by it, I wonder what will cause future generations to call us barbarians?

Apr 23 · Liked by Alberto Romero

Thanks, Alberto, for the opportunity for this Q&A session :)

At the COSYNE 24 meeting I asked the community: "Can we achieve human-level AGI without fundamental new insights from neuroscience that do not yet exist?" Of the three options, "Yes, just scale DNNs up" got 10% of the votes, "Yes, need to connect the dots" got 36.7%, and "No, something is missing" got 46.7%. The COSYNE meeting participants are of course biased, because the meeting is about the bridge and about the modeling of the biological brain. So my question for you is: what do you think? Can we achieve human-level AGI without fundamental new insights from neuroscience that do not yet exist? Why and how?

Apr 23 · Liked by Alberto Romero

Also quite niche, but for a class in my master's we were asked to work on AI and financial institutions. Speaking with cybersecurity experts in the field, something that came up constantly was the fear that models would be trained on bad data and give bad results that would cause weaknesses in the security infrastructure, and also that employees would put sensitive info into things like ChatGPT and it would leak. On the technical side of things, can you speak to how likely these threats actually are?

Apr 23 · Liked by Alberto Romero

The success of OpenAI and the scaling laws is somehow anti-intelligent from my point of view. Do the elite in AI really believe that such a mechanical way of exploration, without the creation of elegant and understandable reasoning, is the right future? Could you write more about the dispute over the technical directions of AI?

Apr 23 · Liked by Alberto Romero

In the past, knowledge and even the development of languages and cultures were left up to missionaries, people who valued others as fellow creations of God. Can AI and LLMs be trusted to give inherent value to humans in a world so divided into haves and have-nots in regard to 'digital inclusion'? Missionaries often died to reach those whom others called animals. What will AI sacrifice?

Apr 23 · Liked by Alberto Romero

We can see what recent leaps in AI and robotics will likely mean for warehouse workers or actors or natural language translators, but what do they mean for the manager of a McDonald's, or a car salesman, or a clerk at the Bureau of Motor Vehicles?

Will my neighborhood doughnut shop no longer have an assistant manager, but instead be staffed by a pack of teenagers following the instruction of a bot?

Will entry level cashiers, receptionists, and doormen be — not even robots but — screen portals to the Philippines?

What does all this mean for the less glamorous but necessary jobs that will still need to be done but are not worth the price of a technical upgrade (robot/telepresence)?

And will those functions not need a freshly graduated but inexperienced management candidate to wrangle them when a franchised corporate bot, with its complete knowledge of company policies and personnel data and local laws, can handle all but the edgiest of edge cases?

There’s an interesting area of speculation: what happens to those people who won’t (not can’t but won’t) be replaced by robots or formless AI and the tier of mini-managers just above them?

Apr 23 · Liked by Alberto Romero

An update on the state of AI in New Zealand, and a question at the end. The update: late last year an artist here who doesn't like AI art exhibited a show called AI. Which is possibly a shortened name. But that's not the issue. He went on to say it stood for Aaron Intelligence. Which was amusing to me, as we had talked about it previously and I'd even created a DALL-E 2 set of images based on his 2D work. I showed him one image, which he commented was too perfect. I asked him if he would like to see the other 200+. He said no! Fair enough. Not curious at all, or just not interested. I don't really know the reason. But I accepted it. Anyway, that's when he created his AI exhibition. Remember, it stands for something else in this case... So at his next exhibition I asked him about the way AI is attempting to move towards AGI and compared it to his own goal of obtaining beauty in his art. His answer: "I don't know!"

But there had been a change I noticed: AI video had caught this artist's attention. For the first time AI was making an impact on one NZ artist. In a small way, but a way! And after the Q&A there was a bit more talk about AI. So from experiencing nothing the year before to a murmur in the art gallery. And at that exhibition was a young photographer who had used AI in her work. So the question is this: why is it moving so slowly in New Zealand but seems to be racing ahead overseas? NZ is fully connected to the WWW, and has been in one form or another since the 1980s. But perhaps AI isn't racing ahead in other countries at all. It's just racing ahead regardless of the world?

Apr 22 · Liked by Alberto Romero

There is a young student who has signed up to your newsletter, and probably many, learning of new AI developments but yet to see them much in their daily life. This student knows what’s on the horizon (thanks to TAM and many other newsletters) and is both hopeful and intimidated. What would you share with this individual that you think they may not otherwise hear?

Apr 22 · Liked by Alberto Romero

Hi Alberto, I have a question: how deeply technical is your knowledge? Are you a coder? I.e., do you know how to code and train DNNs, CNNs, etc.? Follow-up: do you plan on changing your level of technical understanding, or do you feel it is more important to focus at a higher level?

Apr 22 · Liked by Alberto Romero

I work for the Latino nonprofit sector (at the Hispanic Federation). Adoption is happening slower than in for-profit businesses, where there seems to be an insane stampede towards AI. Is there any upside to waiting for LLM-based tech to mature or for the US government to get off their butts and regulate it? Thanks!

Apr 22 · Liked by Alberto Romero

ChatGPT and similar chatbots have quickly become widespread and are being used by many different audiences with distinct levels of education and tech awareness. Similar tools to process sound and video are quickly being released; the race is on.

With so many tools allowing people to create good (and maybe deep) convincing fakes, do we face an education problem? What tools can we provide to the different audiences to compensate for the incredible speed at which new tools are unfolding?


I commented on your excellent writing, but I'll copy the comment here too, in case it is easier for you:

Thanks Alberto for this insightful text. In particular one key point is important:

"""

Right now, neuroscience (and AI) is at a similar stage as physics was pre-Galileo and pre-Newton.

We lack well-established explanatory theories of intelligence, brain function/structure, and how the former emerges from the latter. AI should evolve in parallel with neuro until we develop those.

"""

As a long-time neuroscientist, this is exactly how I see it, too. There are three major issues in neuroscience that have hindered faster progress:

1) Most neuroscientists have no math training, and those who do often have shallow training in biology. We need both to understand biological signal processing. That said, a neuroscientist does not benefit much from math as long as you get your output from new experimental techniques and do not care about modeling the biology.

2) Evolving modeling and simulation software is key to a quantitative understanding of complex systems such as the brain. It was not so long ago that the best prediction of tomorrow's weather was today's weather. In this sense we are living in a golden era, with rapidly evolving infrastructure for modelers.

3) There is an innate brake on interdisciplinary fields in professional science. Due to continuous hard competition for funding, the top groups need to optimize publication output. The publication output then dictates their future probability of funding. This means that you should not diverge unless you are certain of returns, and absolutely should not play deep in uncertain forests.

As you pointed out, Alberto, we are in the pre-Newton era of neuroscience.


Google just announced another 100B investment in AI. How is all this investment in AI being justified to investors?


How can we look at economic strategy for a state government in the context of this massive and exponential change in AI?

We know it's going to affect jobs. We also know it is going to be augmentative. We also know that it will create new global opportunities. What is the role of government policy when we are only AI model takers, and how do we incorporate that into strategic choice-making?


Do you think there’s a relationship between the 'form' and the 'meaning' of language? If you can perfectly discuss a concept, do you know what it means? If there is an overlap, how close do you feel the overlap might be? If there's no overlap, why not?


Thinking short term, on a ten-year horizon: what do you think will be the worst "thing" to happen thanks to the current wave of AI; how will society respond; and will we have become more fragile or antifragile in the wake of it?


If AI is truly capable of self-improving, what do you think it will be called 100 years from now?
