The Golden Rule of AI Is Really Simple
When thinking about how to approach AI, common sense is your best friend
Earlier this week, Substack’s flagship publication, The Substack Post, reached out to me and other “Top Substackers”—truly an honor—to share some words of wisdom with new graduates.
They shared it in “Advice for the Class of 2025.” The post opens with a paragraph that I think perfectly summarizes why this topic is so important:
Graduation has always marked a transition from one phase of life to another, but rarely have new graduates stepped into a world where everything else seems to be in transition, too. The economy, media, politics, art, technology—there’s a lot of uncertainty about what the future holds.
Fresh graduates enter the world with twice the usual uncertainty, so they need twice the usual help. Having spent time thinking about the unprecedented challenge they face, I was glad to offer something to ease the landing: encouragement, reassurance, and, most of all, calm. I naturally chose AI as the subject of my advice:
AI is a tool, and like any other, it should follow the golden rule: All tools must enhance, never erode, your most important one—your mind. Be curious about AI, but also examine how it shapes your habits and your thinking patterns. Stick to that rule and you'll have nothing to fear.
I find this the only sensible stance toward AI. I want to elaborate on why.
Imagine you're living in 2100. Today is not part of your life, but part of a history book you're reading. How do you feel about the AI transition that took place in the distant 2020s? From your vantage point, what kind of advice seems helpful?
That's how I approached the question: by projecting myself through time so that I could divest my advice of the burden of the present and all the little contingencies that prevent us from seeing things with clarity. We are too emotional when it comes to now—our reactions are too visceral and our identities too attached to our opinions.
Kids graduating today have no use for that kind of short-sightedness and self-righteousness. They have no use for either my unwarranted fear or my unproven optimism. So I gave them common sense.
AI is a tool, so I think of it as any other (it may or may not become a reliable agentic technology, but that wouldn’t change my approach to the question much): you don’t let it override your better discernment, replace your decision-making, or degrade the master tool at your disposal, which is your brain.
That’s the golden rule. And it’s so obvious when you think about the 2020s from an imagined 2100 that the only conclusion I can draw from witnessing, over and over again, the heated debates on the topic is that people haven’t bothered to do this thought experiment.
Is AI different from the technologies that came before it? Yeah, sure. In many ways.
Is it different in that we shouldn't let it erode our brains? No. Not at all.
Is it different in that curiosity and open-mindedness are good? No. I am sorry, no.
People want a manual, but what they desperately need is common sense and courage.
Each of us has to figure out the how of this question—how much time do I spend learning to use AI vs. learning physics vs. using ChatGPT to learn physics vs. letting ChatGPT do physics problems for me—but not the what.
The what is damn clear if you follow the golden rule: never sacrifice your long-term mental capacity for some negligible short-term value (e.g., cheating on an irrelevant exam) and never renounce your long-term skillset for some petulant short-term virtue signaling (e.g., not even trying to learn about AI).
That covers most situations. But if you find yourself in an ambiguous spot and don’t know how to decide, let me clarify what the golden rule looks like in the general case: you must do your best to abide by the social contract, which entrusts you with a place among us if, and only if, you agree to be valuable to yourself and to society.
And in practice: Kiddo, did you cheat on the exam using AI? Ok, whatever. I guess that’s the new status quo. Take your high grade and join the workforce from a better entry point. You’ll make it up to us, and your peers, later (perhaps by fighting an outdated school system so that others don’t even want to cheat in the first place). But just so you know, in case you plan to continue down that path without adhering to the rule (which means examining your habits and thinking patterns): mindlessly abusing AI means you break the contract. You betray us. And, most importantly, yourself.
Biology gifted you this precious inheritance, and you keep throwing it away? That’s not how you honor the golden rule or the social contract. Use AI at your discretion—it can be immensely useful—but make sure you don’t sacrifice the value you were otherwise meant to contribute.
You’re able to participate in this collaborative experiment we call civilization if AI is just an extension of you, but not if you are just an extension of AI.
+1 on everything here.
I’m not sure there has ever been a technology that has made it this easy to make progress without thinking. The human mind is prone to taking the easiest approach, the path of least resistance, and it has never been this easy.
You mention being in 2100 and looking back at today. It made me reflect from today’s perspective: what would 2100 look like? What good things would have been accomplished? What new types of mistakes would have been made? How much thinking would have been offloaded, and with what side effects? And how should we design to mitigate that, if at all?
Tidy article and congrats on the feature!
Insightful article. I look forward to working through it again.
It reiterates to me how crucial it is for us to continue shaping and pushing a human-centric dialogue to counterbalance the pace and scale of AI uptake and daily integration.
Would be curious how corporate interests answer this question of the Golden Rule (actual interests, not polished sound bites).
I'm not sure they want us to maintain this common sense or for us to have such a clear command of when and how to use AI.