Let’s not forget Meta’s recent screw-up in the kids+AI arena:
https://www.wsj.com/tech/ai/meta-ai-chatbots-sex-a25311bf?st=5N9kEa&reflink=article_copyURL_share
What safeguards does OpenAI have against this for ChatGPT and how are they tested? What stops a kid from engaging in explicit conversations with these apps?
And just… why? Why TF are we doing this?
Jeez…. This is a whole new dimension of stranger danger ⚠️
Just seeing how much AI has changed the world in the past few months, it’s hard to imagine that our kids won’t be interacting with it in some way, sooner than we might think.
We definitely need some sandboxes/safeguards around children’s and teens’ use.
How do you even do that effectively, though? In the Meta story and the OpenAI sycophancy story, both companies had safeguards that proved inadequate. I have to imagine that problem just gets harder the smarter AI gets.
These are immense risks. In the health tech space, you don’t release a feature if you can’t adequately mitigate patient safety risk. I don’t see how this should be any different.
This is a great analogy. And you’re right, I think there’s so much excitement about AI that we think of it as our trusted buddy, but it could totally be up to its own agenda. I’m thinking now about a lot of people who are super excited about AI as a teacher — I think some folks in Austin may even be starting a school with AI-driven learning.
Thanks for this post, Alberto, and I appreciate the visceral reaction; too many people are already numb. We've seen what social media (and specifically, the drive to maximize engagement) has done to the younger generation. We're about to turbocharge that with AI. Age limits aren't a panacea but they're a necessary starting point. Kudos to Australia for being the first with a law setting a minimum age for social media (the age is 16, not high enough but it's a start). Unfettered access to AI models at 12, powered by companies focused on out-competing everyone else so they can be the first $10T company...yikes! Sure, AI has benefits, but in its current form the potential for harm with kids, especially because it's uncontrollable and unpredictable, is not worth the risk. I don't want the government controlling my life. But if the incentives aren't aligned with our interests (as individuals and as a society), then we need a few guardrails unless and until we can subdue the misaligned incentives.
What's going to happen when biological intelligence is finally added to AI? Will thinking, feeling computational entities be all over the media, with all the questionable attributes of real people and lots of computational power behind them?
Not so long ago I read that the Secretary of Education is talking about putting AI in kindergarten to replace teachers. (And she called it "A1", which is shocking.)
https://www.usatoday.com/story/news/politics/2025/04/12/linda-mcmahon-a1-instead-of-ai/83059797007/
Hmm...I think this comes down to whether you view AI as fundamentally negative or positive for children. Sure, there are many risks and concerns, as you pointed out in the post, but there are also HUGE potential benefits - personalized education being the most obvious.
You could apply many of your same arguments to "the internet" as a whole, but are children under the age of 13 prevented from accessing the internet? No!
Personally, I would rather give my kid access to ChatGPT than to YouTube or Roblox.
I agree, but I bet ChatGPT will eventually be worse than YouTube or Roblox (I know OpenAI rolled back the latest update, but I explain precisely why that's not an admission of defeat): https://www.thealgorithmicbridge.com/p/chatgpts-excessive-sycophancy-has
Also: the internet *should have* an age limit
Jesus fucking Christ, did you not see this coming? Did you not see that this was *inevitable*? AI, like virtually every mass tech commodity, is entirely about smoothing over the edges of our existence and leveling the texture of human effort *for everyone*, and there’s no way to exempt tweens, pre-teens, and little kids from this progress. What percentage of kids do you think have access to touchscreens before they turn 13? Why on earth would you think that percentage would be any different regarding their access to AI?
AI is here to erase humanity, and I don’t mean by turning us into paperclips, but by turning us into pure vessels of consumption. When I told my 11th graders that AI would eventually be able to instantly generate a whole film at Disney-level quality based on any prompt they could dream up—with no human in the loop—I thought I was proposing a dystopian scenario: the near-unanimous response was “man, that would be cool!” They have been fed gratification as quickly as Silicon Valley can whip it up, and their gluttony has become insatiable.
The pie-eyed, slavering reception *adults* have given to LLMs & video generation & Ghiblification of family photos & the associated fascination with leaderboards and benchmarks and p(doom) and e-acc and all the rest of the industry bullshit is no different. There’s no more water in the well—we’re lapping up pure poison, & the quicker we get the wee ones hooked on this shit, the quicker we can get this whole fucking shitshow over with.
I had hope (and still do) that it will be a net positive. I also had doubts (and still do) that it will happen at all.
Google Family Link is easy to use and the controls are granular: you can control app use down to the minute. Force them to ask to install apps, give them a start and end time and a number of hours per day. With my 14yo, the screen locks at 9pm and opens again at 7am; no arguments, no sulking, no bargaining, and no Gemini. Though he does have Perplexity, as it's a better Google Search, for 30 mins a day. Available on iOS and Android. (Not affiliated, etc.)
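This isn't the Family Link API (I don't think it exposes one for these controls); just a rough Python sketch of the kind of policy I mean, nightly downtime plus a per-app daily quota, with made-up names for illustration:

```python
# Hypothetical sketch, not Google Family Link code: the policy described above,
# expressed as data plus a simple allow/deny check.
from datetime import datetime, time

DOWNTIME_START = time(21, 0)               # screen locks at 9pm
DOWNTIME_END = time(7, 0)                  # unlocks at 7am
DAILY_APP_LIMITS_MIN = {"Perplexity": 30}  # per-app minutes allowed per day

def device_unlocked(now: datetime) -> bool:
    """True when the current time falls outside the nightly downtime window."""
    t = now.time()
    # Downtime spans midnight, so the device is usable only between 07:00 and 21:00.
    return DOWNTIME_END <= t < DOWNTIME_START

def app_allowed(app: str, minutes_used_today: int, now: datetime) -> bool:
    """An app is usable only when the device is unlocked and its daily quota remains."""
    if not device_unlocked(now):
        return False
    limit = DAILY_APP_LIMITS_MIN.get(app)
    return limit is None or minutes_used_today < limit

print(app_allowed("Perplexity", 10, datetime(2025, 5, 1, 8, 30)))  # True
print(app_allowed("Perplexity", 30, datetime(2025, 5, 1, 8, 30)))  # False: quota spent
print(app_allowed("Perplexity", 0, datetime(2025, 5, 1, 22, 0)))   # False: downtime
```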
When they all do this, and the safety measures soften, and the kids get addicted, we will come crying.
They did change the interface, but the controls remain, and they stay during phone upgrades, plus I get warnings by email, etc. I gave him the benefit of no GPS until he started to harass me (aged 10) to install Pokémon GO, which needs GPS, which he cannot turn off, so now his mother knows where he is at all times. Which he regrets.
I wouldn't get too comfortable on the basis of specific interface designs. Google knows how to bypass regulation in its pursuit of money. It has done it before and will do it again. But again: this is more about taking a moral stance. Let's not get kids addicted to AI as well.
It does help if you're a geek and you know what you are doing, but he complains because no other kid has such restrictions as he does. He has WhatsApp and Signal and that's it: no other social media, no YouTube, Chrome, or Firefox. He has Vivaldi as it has translation and ad blocking, and he gets 30 minutes a day.
I look up to parents like you. I bet it's hard when your kid brings up that he's the only one without access to all that stuff. Yet he will surely be thankful. Good job.