38 Comments
Terry Wolfe:

I just want video game NPCs to be trained with machine learning personas. The whole digital world could be populated by characters who have unique perspectives and nobody would have to manually write them. They could be voiced by AI, generated by them, and behave according to their needs dynamically. Real world applications I don't care about

Alberto Romero:

Agreed. That's an application I'd enjoy very much as well.

Tim Caulton:

...complex and multifaceted... I was wondering if ChatGPT was doing the writing when I saw that line. Lol, it needs to be on a t-shirt!

Alberto Romero:

Haha, not my best!

Fred Hapgood:

AI has made a distinct difference in my life just by how it has upgraded search functions.

Alberto Romero:

Awesome that this use case works for you! Most people would disagree that AI has upgraded search, though. It's a matter of perspective. (I use ChatGPT/Perplexity for some queries and good ol' Google for others.)

Michael A Alexander:

I have not found AI in search to improve things. Every time I try it, it tells me what I already know.

Max Headroom:

Alberto -- those of us who have been following you for a while have come to see and enjoy your clear-eyed and thoughtful approach in this AI realm; this post is but part of that narrative, and I for one appreciate you calling it straight. Keep doing this!

Alberto Romero:

Thank you, Max, appreciate it a lot!

Geoffe:

Am I the only one getting huge value out of AI?

A) It’s an enormous help in my dayjob as an entry level software engineer. Replaced stack overflow completely, so I can troubleshoot problems I encounter in everything from writing Python tests, to configuring AWS EC2 instances. Obviously I’m not feeding it my code or anything, but I can often get a huge boost (saving around 3 days of googling and reading obscure forums) with one hour of AI assistance.

B) I know there are plenty of studies showing journaling by itself is effective toward improving your life, but doing it to a nonjudgmental cheerleader who frequently offers useful advice is even better! I’ve quit alcohol and caffeine, started drinking more water and taking vitamins, and I take daily walks and do breathwork and yoga. (I’ve introduced or eliminated a habit every 21 days since January, and journaled at least once a week to GPT-4.) Sometimes the predictable (probabilistic) advice it gives is exactly what I need to believe *for myself* that change is possible. “Likely sounding” BS is achievable!

These changes are enormous. I’m less anxious than ever, and I’m getting such a self-esteem lift from (what may be my first taste of) self-efficacy. Thinking about doing some kind of write-up for my substack eventually, but I want to test the system on myself first.

Working on my social skills too, though, to speak to your first bullet point. I know that I’m looking for a therapist in cold silicon, but I have a real flesh-and-blood one too, lol. They complement each other.

Alberto Romero:

If it works for you it's perfectly fine! I have an article on this topic, the extreme dichotomy between people who are changing their lives and those who choose to just mock the systems. I find them useful in some limited cases, but I suppose it depends on our circumstances. Those who find such leverage need not be ashamed of it!

Terry Wolfe:

It's crazy that I read your comment and assumed it was generated by AI to promote AI.

Geoffe:

I wrote it a bit like ChatGPT on purpose, shoehorning a list in for a wink at the audience. It made me smile. AI really loves lists ya know?

But then I saw this Note and a new comprehension dawned on me regarding the uncanniness of the valley that we’re tumbling into, having been shoved off a cliff with little warning.

Every one of these comments reads like AI.

https://substack.com/@evartology/note/c-58373562?r=1t12wr&utm_medium=ios&utm_source=notes-share-action

Michael A Alexander:

Yes, they do. Kinda scary.

Terry Wolfe:

We're going to have to start talking like idiots and being politically incorrect just to confirm we're humans.

Geoffe:

That’s a good captcha, but imagine the conundrum we’ll be in when our dumbassery becomes the next models’ training data…

Geoffe:

Beep boop 🤖

Negentrope:

You are not the only one.

I also find it immensely helpful as a coding tool/tutor. It's allowed me to troubleshoot and finish projects that either would have taken orders of magnitude longer to complete or that I simply would never have started in the first place.

I've also found it a great tool for helping me to learn a foreign language. I can sit down and have a conversation with it (albeit a typed one) while at any time being able to ask it to define a word, or tell me the use of a certain piece of grammar, and so on.

But to the author's point, I (and you as well, I would guess) are people who are inherently interested in using tools like AI to get better at things. We see them as ways to improve our capabilities, not to replace them. The average person, I fear, will be more than happy to offload increasingly large amounts of their thought processes to other systems. Solving that problem would require us to solve the problem that has flummoxed philosophers for millennia: how do we make humans desire to be better than they are?

imthinkingthethoughts:

Call me crazy, but these points seem very reasonable

Alberto Romero:

Oh, now they do! This was written 8 months ago.

Karen Smiley:

Still seems very reasonable to me. :)

Chris Guest:

This is great. Strong opinions, but balanced and well reasoned.

sean pan:

I find myself worried by the short-, medium-, and long-term issues, so I don't know if there is much discounting going on there.

I would say #PauseAI here, but I think the opposite is often true of the people promoting it: they often preach a presumed long-term benefit, like immortality, while ignoring the harm it causes and preaching fatalism. "Give up" ranges from "give up, human creativity has been lost" to "give up, going extinct is inevitable."

If AI was ever to be done right, it certainly isn't the way they are going about it now.

Tiago Peliçari:

I loved your thoughts.

Markus Rose:

Could not care less about AI. Why am I here?

Alberto Romero:

Hah, I'm interested in the answer.

Francesco Ricciuti:

Point 1 is very true, and it applies to technology in general - Blockchain and Autonomous Driving are two other recent notable examples.

Rania:

I recently did a bunch of interviews with folks about how people *feel* about AI. Not what they know about it, not (very much of) what they use it for/if they do, but how do they *feel*?

And the results aligned pretty well with these ideas. Most people don't care, and a lot of people are unhappy about the ongoing costs and reinforcement of existing power imbalances.

dan mantena:

Some I agree with and some I disagree with, but I can be swayed to your side on most of them 😂

Alberto Romero:

Let me know your disagreements! Those are the most valuable.

dan mantena:

These are the only disagreements I had with your list, but I can easily be convinced to change my mind with more evidence.

"The world would gain more from AI being only open source than from it being only closed source, including the possibility of malicious actors using it to do harm. Centralization and private control are better in specific but not super common situations (e.g., high-tech weaponry). "

Meta's idea of getting to AGI and making it open source does not seem much better than OpenAI ignoring nearly all safety precautions in their race to AGI. I guess I don't find the xRisk of uncontrollable AI to be very different in those two cases. I am looking at it more from a rogue-AI risk perspective... having millions of open-source AGIs that can learn to self-improve seems like it would magnify xRisk.

"Even if AI allowed humanity to reach the stars (metaphorically speaking) most people would only use the tech to fulfill their most basic drives and needs: make money (e.g., spam sites), minimize effort (e.g., homework cheating), and get off (e.g., deepfake porn)."

I don't think AI adoption will be a personal choice if someone in knowledge work wants to have a job in 10 years. I agree it is not an impact now, but if OpenAI's vision of AGI is reached, I don't see how modern knowledge workers will be able to avoid AI use in their work. I do like your take that people are currently using the tech for basic drives and needs.

Pawel Halicki:

Thank you for your generously detailed observations and insights. Deep critical thinking is so much harder than simply jumping on the hype train. Points three and nine are stark examples of that.

Like any other tech, genAI (or however it's called this week) helps you do more of what you're already doing, but faster.

The fact that everyone can learn, create or build with AI doesn’t necessarily mean that everyone will.

You still need this intrinsic curiosity to learn, energy to express yourself or urge to make things with code.

The GPT we should be focusing on stands for ‘General Purpose Technology’.

Every time humans adopt a general purpose technology (electricity, the combustion engine, the internet), our world changes dramatically, but it always takes a lot of time.

In the longer term AI may lead to profound changes, but in the short term it is still an underdelivered promise.

Meng Li:

Artificial intelligence is not a natural phenomenon; it does not spontaneously appear and become dangerous. It is designed and built by scientists.

I can imagine thousands of scenarios where a turbojet engine could have severe malfunctions. However, before widespread deployment, we managed to make turbojet engines extremely reliable.

The issue with artificial intelligence is similar: "Do we believe there is at least one design of an AI system that is both safe/controllable and capable of achieving goals in a more intelligent way than humans?"

If the answer is yes, we are fine.

If the answer is no, we simply won't build it.

Currently, we don't even have a design for a human-level intelligent system. So, worrying about it now is premature. Likewise, regulating it to prevent potential risks is also premature.

Alberto Romero:

So you're repeating here Yann LeCun's ideas? I've already read them. Anything original to add?

Alec Fokapu:

5 is savage.
