One night in early spring, Natallia Tarrien notices a strange tightness in her jaw. She's 28, pregnant with her first child, and trying not to overreact. But something feels off. She does what people do in 2025: she opens her phone.
But she doesn't call her doctor or search WebMD. She logs into ChatGPT.
When she describes the symptom, the response is unexpected: “Have you checked your blood pressure?” She finds a monitor, straps it on, and gasps at the numbers. Dangerously high. She tells the AI, and it doesn't hesitate: “Call an ambulance. Now.” She does. Hours later, she's being treated for preeclampsia, a potentially fatal pregnancy complication.
Had you gone to sleep that night, the doctor tells her, you wouldn't have woken up.
Across the country, a man named Cooper watches his dog, Sassy, decline. She's a border collie mix, young and usually energetic. The vet has diagnosed her with a tick-borne illness and prescribed antibiotics, but days pass and she's still weak, her gums pale, her energy gone.
One evening, desperate, Cooper copies her blood test results and symptoms into GPT-4. It mentions a condition he’d never heard of before: immune-mediated hemolytic anemia. The next morning, he brings it up to a second vet, who confirms it. Sassy is treated immediately and pulls through.
The first vet missed the diagnosis entirely, but ChatGPT didn’t.
Today, as I'm writing this, Flavio Adamo—aka “hexagon bouncing ball” guy—sits in a hospital bed with an undisclosed affliction (his followers have ventured he's suffering from testicular torsion, but I don't enter into such knotty matters).
He does the usual: feels a pain growing, asks ChatGPT, and receives a dramatic instruction. “Go to the hospital. NOW.” The doctor says the usual: 30 minutes later and you’d have lost an organ. “AI literally saved me,” Flavio tweets, ready to make another hexagon bouncing ball demo.
Neither Flavio nor Cooper nor Natallia set out to trust a machine with the most valuable assets we have: health and life. Almost by chance, they did, and saved themselves, and the world—just look how cute Sassy is—so much pain.
But this story isn't about trust, is it? They checked with the doctors and the vet anyway. They didn't blindly trust ChatGPT. (And they did well; AI remains unreliable. You won’t see the diagnoses ChatGPT got wrong in the news; only, maybe, in the obituaries.)
This story is about imagination.
They allowed themselves to imagine.
They imagined that a verbose chatbot could help with their seemingly fleeting unease, which in Natallia’s case was actually life-threatening. They imagined that it might find something they couldn’t (something a Google search wasn’t quick or transparent enough to surface). They imagined that even the dumbest query is chatgptable (pronounced /chat-GEE-PEE-TEE-uh-bull/, unless shortened to gptable, which is also valid).
So this isn’t really a story about saving lives with ChatGPT but about keeping alive our imagination of what’s possible with the tools at our disposal.
It’s funny when people have it backward: They insist that AI is stripping us of our imagination when it’s they who have stopped bothering to imagine. They still pattern-match ChatGPT to “school essay” or “trip planning,” when—just like Google in the early days, only better—everything is at their fingertips.
You can save your life with ChatGPT, but you can do much more. Here's a quick list of things it can do for you: