58 Comments
author

Okay, I've just published the super long GPT-5 article. It's time to announce the winners.

22 of you came here to leave comments and I have to say, I'm delighted with the quality of the conversation. Very, very good questions.

22 people total would make 3 the number of winners (N/10, rounding up), but I've decided to give away twice the number of paid subs I promised. So here are the six winners (determined using a random number generator):

- Suhit Anantula

- Simo Vanni

- Bianca Damoc

- Tim

- Riley Tom

- Stephen Calenzani

Expand full comment
Apr 23Liked by Alberto Romero

Maybe this is too pop-culture-y for your usual writing, but I wonder if news has reached you of the ridiculous and dystopian success of the AI bot that is playing Netflix’s social media inspired reality show, The Circle…?

Here’s how the show works. This isn’t a ad, I just like the show. Basically, a bunch of people get locked in separate apartments alone and spend their time being goofy and chatting over an instant messaging platform in order to build relationships with the other contestants. Every now and then, they all rank each other as “people who I most like to those I like least.” The top two most liked players on average then collaborate to kick someone out.

It requires a lot of social skill, finesse and strategy.

The gimmick this year is that they trained an open source LLM on all the footage and text from past seasons and—even being told there’s an AI among them—the players are struggling to figure out who it is.

It’s a really interesting phenomenon, where people are ranking someone as “most likely to be AI” based on whether they get along with them or whether they inspire intense emotional aversion, rather than a realistic understanding of AI capabilities and weaknesses.

Almost no one seems to suspect. A few find the bot really funny and consider it an ally in the game. It just goes to show how woefully unprepared and defenseless we are against manipulative behavior by bots on the internet. A total mind bending cautionary tale.

Given that your blog is about AI but really about people, I thought this would be good fodder for you.

My questions:

You’ve written about AI destroying the internet, but from a slightly different angle, if I recall. What do you think about the phenomenon of people getting hoodwinked into interacting with LLMs in Facebook, Reddit, Twitter, etc comments?

Is there anything we can do as a society and as individuals to improve our immune systems here?

If you watched it, what do you think of the Bot’s performance on the Circle?

Expand full comment
author

Interesting! Had never heard of The Circle before. The addition of the AI bot and the mechanics of the game reminded me of Meta's CICERO, the AI that plays at the human expert level in the game of diplomacy, which also requires strategizing, reasoning, and communication skills. In a way, it's a mix of that and the Turing Test. Interesting stuff will check it out--thanks for the info Geoffe!

About your questions: People interacting with bots unknowingly online is happening all the time. The extreme scenario of this is what's called Dead Internet Theory (https://en.wikipedia.org/wiki/Dead_Internet_theory), which goes on to say that you're actually the only real person left on the internet, that everyone else you interact with are bots. Scary and dystopic. I don't believe we will ever get there but I can assess that in some corners of the internet (e.g. facebook comments) the ratio bots/humans may be above 1.

The best way to fight this individually is to "leave" the big mainstream platforms (facebook, reddit, twitter) and go back to individual creators/blogs/places that you trust to be reliable and good quality. If you don't want to leave, you need a way to curate your feeds yourself--I do that on Twitter, for instance. Collectively, I'm not sure, the era of the big internet is dying.

Expand full comment

I'm also curious to hear your thoughts on this.

I've seen one or two episodes of the Circle, didn't know they've added an AI element to it.

It's remarkable how an AI bot can hold its ground in a game that's all about strategic communication and it underscores a bigger point—our increasing interaction with AI in spaces meant for human connection, like social media, will reshape the way we co-exist.

My emotional human side wants to completely log off and go outside and touch grass.

My intelectual curious side is intrigued by the prospect of us having to navigate a world where human and machine interactions are indistinguishable.

Expand full comment
Apr 23Liked by Alberto Romero

What do you think of the notion that humans have long ago created AI? That language, writing and, built upon that, the laws, bureaucratic procedures and protocols of large organisations constitute a kind of software? That states and corporations are AI entities (with embedded human nodes), organisms which exhibit their own emergent behaviours? Science fiction writer Charles Stross refers to this idea as "slow AI". His stories envision a future where, as telecommunications become faster and neural interfacing blurs the virtual with the real, many corporations become group minds, with a large AI at the centre, like the supermassive black hole at the heart of a galaxy. The individual humans inside no longer retain a separate ego, their thoughts make no sense except as part of the groupthink.

Expand full comment
author

Given how broad the term "AI" is nowadays I say: yes, why not. Can large organizations be portrayed as a kind of AI? Yeah, in a way it's true. But if we take the narrower definition of AI which refers to computer programs that display intelligence via automation of tasks, then not so much. I agree however that there's a sort of intelligence in corporations or collectives that can't be described in terms of the sum of its parts--more is different sometimes (https://www.science.org/doi/10.1126/science.177.4047.393).

The fictional aspects of Stross' work is interesting but humans without ego?--sounds impossible!

Expand full comment

Considering that AI can blend and merge ideas from different *existing* fields to "create" and "innovate," how can humans push the boundaries of their own creativity, either by drawing inspiration from or surpassing AI's ability to interconnect previously isolated domains, to generate new ideas, creations and discoveries?

Expand full comment
author

This problem comes down to the limitations of imitation learning, which is what GPT-4 and its ilk do. To go beyond that they need to take inspiration themselves from the likes of AlphaGo and AlphaZero, systems that have a component of reinforcement learning (e.g. search + self-play + trial and error, etc.). The problem is that doing RL in action spaces as constrained as Chess and Go games it's trivial compared to doing this in the infinite-degrees-of-freedom place that is the real world (note that the number of combinations in chess and Go are unimaginably large and still extremely tiny compared to the amount of combinations in which the atoms in the universe could be arranged). This is to say, humans will go beyond themselves when AIs learn to go beyond humans (that happened in Chess and Go a few years ago). (Also, that last sentence is worthy of a headline!)

Expand full comment

What impact do you think generative language models will have in the traditional academic research and publishin world? From grant proposals, to editor letters, to CVs and all sorts of reports, academia is full of bureaucracy that no one ever reads. Do you think LLMs will make a large part of this nonsense more evident and thus less relevant or will it all continue business as usual?

Expand full comment
author
Apr 23·edited Apr 23Author

Erik Hoel explored this topic recently for the NYT (specifically academic research): https://www.nytimes.com/2024/03/29/opinion/ai-internet-x-youtube.html

I don't know enough about the area to give you an informed opinion but in general terms, talking about what I see as "liminal spaces of work," i.e. the work you have to do to go from meaningful place to meaningful place, these are the areas where AI can be applied more seamlessly and will be applied more heavily. The examples you give, "grant proposals, to editor letters, to CVs and all sorts of reports" are perfect examples of liminal spaces of work--work no one wants to do or read or analyze or learn about but that needs doing because of how those fields/areas/sectors are structured (I'm not sure all kinds of bureaucracy meet this description but surely a lot of it does).

Expand full comment
Apr 22Liked by Alberto Romero

Two questions:

1. How long have you spent on the GPT-5 piece? (I’m super eager to read it.)

2. How would you describe the differences in mindset among SF people, those from the rest of the US, and people in your part of the world (Spain, right?)? Or some variation of this if you don’t have the kind of experience to answer this question. As someone who lives in Illinois, I find it fascinating to think of how different San Francisco / Silicon Valley is from almost anywhere else … and the ramifications for the rest of the world. All of this with respect to AI especially.

Expand full comment
author

1. 1-2 months if I count all the research and drafting previous to writing and editing. Many things I've included in the piece I've taken from my general background so that's hard to account for!

2. Super interesting question! I don't know much about other areas of the US except from people I know living there out of the AI/tech bubble. From what I can tell, it's more or less the same as here in Spain. What I see here that helps me put into perspective what happens in SF/Bay Area is that people just go on with their lives, it's like none of this matters at all except at most for a few weeks/months due to novelty. Does that mean AI doesn't matter? No. Just like you and me, they have their smartphones in their pockets which means tech eventually reaches them, they're just not that curious a priori about the tech/scientific vanguard of the world--they're reactive people, like most people. I believe this to be the case everywhere except at the centers of such advances, SF being the main one on AI, but there are others in other areas that produce the same indifference across the globe. I also believe these "thinking/doing centers" attract people more inclined to care about this stuff. Evidence for that is SF demographics, as far as I know, are incredibly diverse and cosmopolitan.

Expand full comment

As the AI divide grows, users of free, less advanced AI often face disappointment due to basic queries, single iterations, and the tools' limitations. This leads to skepticism about AI's real impact. How can we effectively convey that engaging with and understanding AI technologies and their implications today, rather than being discouraged by initial suboptimal experiences, is crucial for everyone's future?

Expand full comment
author

Extremely important question Pascal, something I've been wondering for a while. I believe the answer to your question as you posed it is "we can't." However, I don't think this means most people will never realize the potential of AI due to their discouragement. What I believe will happen to most people is that they will become *passive* instead of *active* users of AI tools. They won't go to ChatGPT and explore it to realize a GPT-4-based ChatGPT may unlock new capabilities they can't even dream of. No, that won't happen in most cases because the value proposition isn't clear as a user. What will happen is that these AI models/systems will be integrated into products most people already use (that's actually Zuck's bet with open-sourcing Llama 3) and then, passively driven into the new reality of AI, they will realize just how much they were missing out.

Expand full comment
Apr 23Liked by Alberto Romero

What do you think of the theory that the urgent interest of the biggest AI players in the regulation of AI is less about safety and more about control? There is generally a revolving door between regulatory bodies and the companies they are supposed to regulate, industry lobbies write policy to be signed off on by officials who are former employees. Regulatory compliance often involves onerous administrative overheads which favour large organisations. The overall effect is to squeeze smaller players out of the market. Just look at the food industry, for example. Perhaps powerful organisations worry that AI could give small groups equal organising power? Perhaps, one day, running an unlicensed AI might be a criminal offence carrying greater penalties than murder?

Expand full comment
author

I believe Google Microsoft Meta have incentives to urge regulation for control instead of safety. Interestingly, it was OpenAI the most vocal on this and given that their main competitors are larger--Google and Meta--it's hard to argue it's a control thing (at least exclusively). Are they worried about open source and that's why they favor regulation? Could be but given that Meta is the main OS player, again, it's unclear how that benefits OpenAI more than Meta. I believe OpenAI in particular is a weird company and should be seen from that lens. Altman believes much more in his own intellect and persuasion abilities than following any given manual on how to lobby politicians. Regarding Google Microsoft and Meta the analysis feels easier to me.

Expand full comment
Apr 23Liked by Alberto Romero

Hey Alberto,

I have a question about your paid subscriber growth, as this is something I'm starting to work more actively towards.

Have you found any patterns in what tends to resonate the most and make people convert from free to paid? Are there e.g. specific topics, types of posts, formats that do better than others?

I'd be very curious to hear any of your observations here, to the extent that you're comfortable disclosing any details.

Expand full comment
author
Apr 23·edited Apr 23Author

Hmm, hard question. Paid growth is a direct function of free growth. Any given article will convert more subs to paid if I had a recent influx of free subs, regardless of everything else. About which topics/formats work, I'd say those that promise to provide a very specific and immediate value to the reader work best to convert than those that have a more vague value proposition in the sense of being merely interesting, e.g. "Here's all you need to know about GPT-4" is better than "Why I don't use AI to write." A rule of thumb is to paywall articles on which you're thinking on what to give the reader and make free those you wrote for yourself.

Expand full comment

That makes a lot of sense, thank you for taking the time to respond!

Expand full comment

You quoted other technologists and said "it’s not the tool but how we use it (and how companies design it) that matters."

Looking at how we built the internet, and subsequently, social media, do you think we have it in us to build AI the "right way?"

I think it's fascinating, and so much good will come out of it, particularly in medicine. But I also think we are a greedy, egocentric species that will do anything to get ahead.

More than anything, I fear that one person is now capable of causing a lot of harm.

We tend to look back at previous generations and think "Barbarians, how could you: slavery/nuclear wars/genocides etc.?" Aside from our growth at all costs mentality and the environmental externalities caused by it, I wonder what will cause future generations to call us barbarians?

Expand full comment
author

This goes back to the problem of incentives. There's no "right way" because each of us who participate in this either as builders or users has our own motivations. Those motivations are always in conflict in one way or another, to some degree. There will always be a bunch of people who will put their own benefit above the well-being of the collective. The amount of damage they can do is what I think matters. Can they destroy the digital commons with generative AI? I'm not sure their power is that great but they're doing a lot of damage already. Will this ever change? No, not anytime soon. That's what humans are. What's a miracle is that amidst this eternal conflict of interest, we managed to build a civilization. Perhaps that's what should keep us optimistic.

Expand full comment
Apr 23Liked by Alberto Romero

Thanks Alberto for the opportunity for this QA session :)

In COSYNE 24 meeting I asked community "Can we achieve human-level AGI without yet non-existing fundamental new insight from neuroscience?" From the three options "Yes, just scale DNNs up" got 10% of wotes, "Yes, need to connect the dots" got 36.7% and "No something is missing" got 46.7%. The COSYNE meeting participants are of course biased because the meeting is about the bridge and about the modeling of biological brain. So my question for you is, what do you think: Can we achieve human-level AGI without yet non-existing fundamental new insight from neuroscience? Why and how?

Expand full comment
author

Very interesting question Simo. I wrote this a while ago: https://www.thealgorithmicbridge.com/p/why-ai-is-doomed-without-neuroscience I think you may guess what side I lean toward! However, some things have changed in how I think about this question. I believe that AI needs to take further inspiration from the human brain but I don't think a perfect understanding of what's going on inside our brains is necessary to creating an artificial one. Any new insight from neuroscience should inform, at least at a theoretical level, what we do in AI, though. (If I knew the answer to the "how" question I wouldn't be here!)

Expand full comment
Apr 23Liked by Alberto Romero

Also quite niche, but for a class in my master's we were asked to work on AI and financial institutions. Speaking with cybersecurity experts in the field, something that came up constantly was this fear that models would be trained on bad data and give bad results that would cause weaknesses in the security infrastructure, and also that employees would put sensitive info into things like chatGPT and it would leak. On the technical side of things, can you speak to how likely these threats actually are?

Expand full comment
author

Very likely. Here's a paper from Anthropic on this: https://www.anthropic.com/news/sleeper-agents-training-deceptive-llms-that-persist-through-safety-training. Worth a read, it's the exact security flaw you're talking about. I don't see LLMs being used in any high-stakes category where cybersecurity is a real concern any time soon (sleeper agents is just one of many different problems, like jailbreaking, prompt injection, and adversarial attacks, feel free to look those up as well).

Expand full comment
Apr 23Liked by Alberto Romero

The success of OpenAI and Scaling Law is somehow anti-intelligent from my point of view. Do the elite in AI really believe such mechanical way of exploration without elegant and understandable reasoning creation is the right future? Could you write more about the dispute about the technical directions of AI?

Expand full comment
author
Apr 23·edited Apr 23Author

That's the old GOFAI vs connectionist debate that separates those who believe AI needs manually encoded symbol manipulation mechanisms from those who believe training neural networks with learning and search algorithms should be enough to yield intelligence (in this second camp, some believe the importance of symbols is exaggerated whereas others believe it isn't, but learning algorithms can eventually make AI learn to manipulate symbols well without explicit instructions or previous knowledge). The thing about the lack of an elegant ingenuity in modern AI is exactly why Richard Sutton's lesson is a bitter one: http://www.incompleteideas.net/IncIdeas/BitterLesson.html

I wrote about this here: https://www.thealgorithmicbridge.com/p/gpt-4-the-bitterer-lesson

Do I believe human ingenuity is unnecessary? Not at all. Do I believe the Bitter Lesson is false? No, I believe there's an important insight in there, even if it's not a law of nature that always applies. The interesting insight is that brute force methods can surpass human ingenuity in a way that humbles us and our ability to understand things. That's not bad.

Expand full comment
Apr 23Liked by Alberto Romero

In the past, knowledge and even the development of languages and cultures was left up to missionaries, people who valued others as fellow creations of God. Can AI and LLMs be trusted to give inherent value to humans in a world so divided of haves and have-nots in regards to 'digital inclusion'? Missionaries often died to reach those others called animals. What will AI sacrifice?

Expand full comment
author

Hey Brian, I'm not sure I'm following. What do you mean with what will AI sacrifice? Can you give me more context?

Expand full comment

Since AI isn't 'alive' and can't give up any such sacrifice, are we then giving over to AI things it can't truly value or comprehend? Maybe no one is saying it will or is even trying to, but I listen to the World Economic Forum on their discussions of 'Digital Inclusion',(Mainly the 2.5 billion people who don't have access) and no one is addressing what we are replacing AI with. Big business and the private sector are said to be leading the way with AI in regards to other nations. I see the large gap in what it will require for AI to take over to be troubling.

Expand full comment
Apr 23Liked by Alberto Romero

We can see what recent leaps in AI and robotics will likely mean for warehouse workers or actors or natural language translators, but what does it mean to the manager of a McDonalds or a car salesman or clerk at the Bureau of of Motor Vehicles?

Will my neighborhood doughnut shop no longer have an assistant manager, but instead be staffed by a pack of teenagers following the instruction of a bot?

Will entry level cashiers, receptionists, and doormen be — not even robots but — screen portals to the Philippines?

What does this all mean to the less glamorous but still necessary jobs that will still need to be done but are not worth the price of a technical upgrade (robot/telepresence)?

And will those functions not need a freshly graduated but inexperienced management candidate to wrangle them when a franchised corporate bot, with its complete knowledge of company policies and personnel data and local laws, can handle all but the edgiest of edge cases?

There’s an interesting area of speculation: what happens to those people who won’t (not can’t but won’t) be replaced by robots or formless AI and the tier of mini-managers just above them?

Expand full comment
author

I believe there are two parts to this answer. The first is that the cost of technology goes down over time so much that's unbelievable. Just think about internet connection, TVs and smartphones. The same is happening with robots and AIs. Second, I believe broad automation can only happen satisfactorily if it goes together with the implementation of universal basic income.

Expand full comment
Apr 23Liked by Alberto Romero

An update on the state of AI in New Zealand, and a question at the end. The update: Late last year an artist here who doesn't like AI art exhibited a show called AI. Which is possibly a shortened name. But that's not the issue. He went on to say it was Aaron Intelligence. Which was amusing to me as we had talked about it previously and I'd even created a dalle2 set of images based on his 2d work. I showed him one image which he commented was too perfect. I asked him if would like to see the other 200+ He said no! Fair enough. Not curious at all or just not interested. I don't really know the reason. But I accepted it. Anyway that's when he created his AI exhibition. Remember it stand for something else in this case... so at his next exhibition I asked him about the way AI is attempting to move towards AGI and compared it to his own goal of obtaining beauty in his art. His answer. "I don't know!"

But there had been a change I noticed. AI video had caught this artists attention. For the first time AI was making an impact on one NZ artist. In a small way but a way! And the after the Q&A there was talk about AI a bit more. So from experiencing nothing the year before to a murmur in the art gallery. And at that exhibition was a photographer who was young and had used AI in her work. So the question is this. Why is it moving so slowly in New Zealand, but seems to be racing ahead overseas. NZ is fully connected to the WWW. And has been since the 1980s. In one form or another. But perhaps AI is racing ahead not in other countries at all. It's just racing ahead regardless of the world?

Expand full comment
author

Interesting. I think this isn't at all an isolated case--"The future is here it's just not evenly distributed." The key here is the "seems to be racing ahead overseas," which is just not true. However, there are places where news about new tech spreads better. I don't think it's a problem with NZ but with artists rejecting AI a priori. You nailed it with this question: "It's just racing ahead regardless of the world?"

Expand full comment
Apr 22Liked by Alberto Romero

There is a young student who has signed up to your newsletter, and probably many, learning of new AI developments but yet to see them much in their daily life. This student knows what’s on the horizon (thanks to TAM and many other newsletters) and is both hopeful and intimidated. What would you share with this individual that you think they may not otherwise hear?

Expand full comment
author

Everyone, literally, writing/talking about AI has incentives that aren't aligned with your education or wellbeing. You must learn to separate sources by understanding how their incentives align with yours. Someone who grabs your attention with powerful emotional hooks is likely to not be reliable. Someone you trust from before this AI wave is probably still reliable after it. Hard to give you a quick heuristic/rule of thumb, it's a skill to develop and possibly more important than learning about AI itself.

Expand full comment
Apr 22Liked by Alberto Romero

Hi Alberto, I have a question: how deeply technical is your knowledge? Are you a coder? I.e., do you know how to code and train DNNs, CNNs, etc.? Follow-up: do you plan on changing your level of technnical understanding, or do you feel is it more important to focus at a higher level?

Expand full comment
author

Great question. I have basic coding knowledge and practical skills but I wouldn't be able to program from scratch a CNN or a transformer right now. I have experience with Tensorflow (before PyTorch became the default LM library) but that's 6-7 years ago. The second question is something I've been pondering for some time and I actually just found a course on coding transformers from scratch. The newsletter is taking up most of my time, but I'd like to further develop my coding skills (if you can call them that lol). As far as theoretical understanding is concerned, I developed a deep intuition of ML and DL very early on and that has stayed with me, which helps me better grasp why these AI thingies do what they do. I believe, however, that for most people what matters is the high-level, bird's-eye view, trends-and-tendencies understanding of AI. How it affects us collectively and individually. Technical understanding isn't needed for that for the most part.

Expand full comment

I agree, this is such a strange and interesting new field where there are pretty gritty technical details, but one does not need to know them in order to interact with or even create your own AI. I'm a "normal" coder, (i.e., "Software Engineer") but my job has been entirely unaffected by AI so far, and until the legal questions get much more resolved, my job won't change. (I.e., if you use AI to help you code, who can sue you for copyright infringement/GPL, etc.?)

Expand full comment
Apr 22Liked by Alberto Romero

I work for the Latino nonprofit sector (at the Hispanic Federation). Adoption is happening slower than in for-profit businesses, where there seems to be an insane stampede towards AI. Is there any upside to waiting for LLM-based tech to mature or for the US government to get off their butts and regulate it? Thanks!

Expand full comment
author

I'm not so sure that adoption is fast in enterprises. They're generally wary of the hype and overselling the tech and are having a hard time seeing the benefits as well as solving the flaws (e.g. reliability, safety, security, privacy, etc.). In some places adoption is ongoing but it's mostly experimentation and low-stakes categories. The cost-benefit analysis of adopting vs waiting is super hard to do in a general way, it depends on the specifics of your situation (for-profit vs non-profit is just one dimension). For some people (mostly coders) it's a no-brainer to use GPT-4 or Copilot. I don't know a single coder who hasn't at least tried the tools enough to decide whether they fit their style or not. In other areas, it varies more. Writers, like me, are still mostly unsure how they can leverage generative AI (many reject the idea a priori). I know how but I'm not willing to because it just doesn't make sense to me. If AI fits your use case, go for it, it's a bargain (the harsh competition is driving margins to zero, which benefits users until a few leaders monopolize the sector and prices surge again). If it doesn't make sense, the haste to adopt it just to not miss out may end up being a grave mistake. I can't say more without details!

Expand full comment
Apr 22Liked by Alberto Romero

ChatGPT and similar chatbots have quickly become widespread and are being used by a lot of different audiences, with distinct levels of education/tech awareness. Similar tools to process sound and video are quickly being released, the race is on.

With so much tools allowing to create good (and maybe deep) convincing fakes, do we face an education problem? What tools can we provide to the different audiences to compensate for the incredible speed new tools are unfolding?

Expand full comment
author

I understand you mean education as in "tech literacy", right? I agree this is an issue, one that has been present every time a tech innovation enters the scene. How many people saw the value of the internet at first? How many people prepared for or leveraged its existence? etc.

We face a tech literacy problem with AI like we've faced with every new technology in the past. Only the curious, inquisitive, pioneering kind learns to see and use the tech before it radically takes over how we do things (not all tech does this, though). I believe curiosity is a character trait, hard to develop if it's not there in the first place.

To use your example, the amount of public resources on the deepfake problem available for free online is staggering. You can literally read everything there is to read about it just by putting in the time and effort yet most people couldn't even define what a deepfake is. I believe the problem you're talking about is much deeper than AI, it's a problem of life's natural tendency to minimize energy (combined with the problem of finite time to do things). How should we spend our time and energy? Learning is costly and takes effort; going online to develop a better intuition for AI or other new tech is very costly; even understanding that technology has the power to redefine our daily lives that we believe to be constant, unchangeable, and sacred, takes effort. The best we can do is put the educational resources--like this newsletter--out there (and watch out for covert incentives because people writing about hot topics like AI are often guided by motivations that have nothing to do with audience wellbeing or education).

I know this isn't the answer you would've hoped for but I'm a believer that we should first see the world for what it is before going into what it should/ought to be. If it's of any consolation, in the short term I don't expect AI to have any further negative influence on the tech illiteracy problem. It's already very bad--hard to see how it could be worse.

Expand full comment

Yes, I do mean Tech Literacy.

Internet adoption was slow, started from a very small and qualified group of ppl (I was also there from the start), and ended up being general spread with appearance of modern "smart"phones. Compare that to the growing of chatGPT in three months just does not make sense. It is a totally new reality.

My concern does not go to "the curious, inquisitive, pioneering kind" and are willing to put on effort and support the cost of learning a new technology, or at least to be aware of its existence, general usage and limitations ("AI" hallucinations for instance) - the tech savvy ones, but to the general public that use - and believe - in these tools, without putting to much thought on it (lack of qualifications, no time, ...).

There is a discussion to be done about the social impact of these tools. Everyone seems to be focused on copyright for now.

One can think about how much impact and power phishing still has why is that (I find myself educating close ones all the time in what not to believe).

Expand full comment

I commented you excellent writing, but I copy the comment here, too in case it is easier for you:

Thanks Alberto for this insightful text. In particular one key point is important:

"""

Right now, neuroscience (and AI) is at a similar stage as physics was pre-Galileo and pre-Newton.

We lack well-established explanatory theories of intelligence, brain function/structure, and how the former emerges from the latter. AI should evolve in parallel with neuro until we develop those.

"""

As a long-term neuroscientist, this is exactly how I see it, too. There are three major issues in neuroscience which have hindered faster progress:

1) Most neuroscientist have no math training, and those who have have often shallow training in biology. We need both to understand biological signal processing. This said, a neuroscientist does not benefit much from math as long as you get your output from new experimental techniques, and do not care about modeling the biology.

2) Evolving modeling and simulation software is a key towards quantitative understanding of complex systems, such as the brain. It is not so far back the best prediction of weather tomorrow was weather today. In this sense we are living in a golden era with rapidly evolving infrastructure for modelers.

3) There is an innate brake for interdisciplinary fields in professional science. Due to continuous hard competition of funding, the top groups need to optimize publication output. The publication output then dictates their future probability of funding. This means that you should not diverge unless you are certain of returns and absolutely not play deep in uncertain forests.

As you Alberto pointed out, we are in pre-Newton era in neuroscience.

Expand full comment

Google just announced another 100B investment in AI. How is all this investment in AI being justified to investors?

Expand full comment

What's the way we can look at economic strategy for a state government in the context of this missive and exponential change in AI.

We know it's going to effect jobs. We also know it is going to be augmentative. We also know that it will create new global opportunities. What is the role of a government policy when we are only AI model takers and then incorporate that for strategic choice making.

Expand full comment
author

I don't understand your question, can you frame it in more specific terms please?

Expand full comment

Imagine I am responsible for economic policy in a state - say a small state in Italy? What should be the policy approach that enables us to make decisions that is beneficial to the citizens and businesses? We cannot control AI. We are AI takers in some way.

How should we start thinking about public policy making in this context?

Expand full comment

Do you think there’s a relationship between the 'form' and the 'meaning' of language? If you can perfectly discuss a concept, do you know what it means? If there is an overlap, how close do you feel the overlap might be? If there's no overlap, why not?

Expand full comment
author

Isn't this Searle's Chinese room experiment? I thought the conclusion is that they're separate things, no?

Expand full comment
Apr 24Liked by Alberto Romero

Searle's Chinese room thought experiment was a philosophical musing that was controversial at best in its day. I believe linguists and cognitive scientists would agree that the strong form of Searle's idea is dead, and now the questions about the relationship between form and meaning are much more interesting.

I know you're not going to read a bunch of links supporting that conclusion. But if you don't think the Chinese room idea is dead, why not?

Expand full comment
author

Oh, no, on the contrary. Send me those links. I love updating my loosely held priors.

Expand full comment

Thinking short term, on a ten-year horizon: what do you think will be the worst "thing" to happen thanks to the current wave of AI; how will society respond; and will we have become more fragile or antifragile in the wake of it?

Expand full comment
author

Ten-year horizon short term!! My short-term is April to September 2024 lol. It's impossible to make 10-year predictions and expect them to be of any utility. But, well, let's see: Worst thing that's theoretically possible to happen is that AI becomes more intelligent than us and we're at its mercy. Worst thing that I believe may happen is a seamless integration of unreliable AI technology across sectors, including but not limited to medical analyses, military operations, high-stakes decision making, political campaigning, education, and content generation.

Expand full comment

If AI is truly capable of self improving, what do you think will it be called 100 years from now?

Expand full comment
author

AI isn't capable of self-improving. Trying to make a 100-year prediction is futile. It's even hard to make a 5-year prediction right now regarding AI.

Expand full comment

"Mummy"

Expand full comment