GPT-5: Everything You Need to Know
An in-depth analysis of the most anticipated next-generation AI model
This super long article—part review, part exploration—is about GPT-5. But it is about much more. It’s about what we can expect from next-gen AI models. It’s about the exciting new features that are appearing on the horizon (like reasoning and agents). It’s about GPT-5 the technology and GPT-5 the product. It’s about the business pressures OpenAI faces from its competition and the technical constraints its engineers are up against. It’s about all those things—that’s why it’s 14,000 words long.
You’re now wondering why you should spend the next hour reading this mini-book-sized post when you’ve already heard the leaks and rumors about GPT-5. Here’s the answer: Scattered info is useless without context; the big picture becomes clear only when you have it all in one place. This is it.
Before we start, here’s some quick background on OpenAI’s success streak and why the immense anticipation around GPT-5 puts the company under pressure. Four years ago, in 2020, GPT-3 shocked the tech industry. Companies like Google, Meta, and Microsoft hurried to challenge OpenAI’s lead. They did (e.g. LaMDA, OPT, MT-NLG), but only a couple of years later. By early 2023, after the success of ChatGPT (which showered OpenAI in attention), they were ready to release GPT-4. Again, companies rushed after OpenAI. One year later, Google has Gemini 1.5, Anthropic has Claude 3, and Meta has Llama 3. OpenAI is about to announce GPT-5, but how far behind are its competitors now?
The gap is closing and the race is at an impasse again, so everyone—customers, investors, competitors, and analysts—is looking at OpenAI, holding their breath to see whether it can pull off, for a third time, a leap that pushes it one year into the future. That’s the implicit promise of GPT-5: OpenAI’s hope to remain influential in the battle with the most powerful tech companies in history. Imagine the disappointment for the AI world if expectations aren’t met (which insiders like Bill Gates believe may happen).
That’s the vibrant and expectant environment in which GPT-5 is brewing. One wrong step and everyone will jump down OpenAI’s throat. But if GPT-5 exceeds expectations, it’ll become a key piece in the AI puzzle for the next few years, not just for OpenAI and its rather green business model but also for the people paying for it—investors and users. If that happens, Gemini 1.5, Claude 3, and Llama 3 will fall back into discursive obscurity and OpenAI will breathe easy once again.
For the sake of clarity, the article is divided into three parts.
First, I’ve written some meta stuff about GPT-5: whether other companies will have an answer to GPT-5, doubts about the numbering (i.e. GPT-4.5 vs GPT-5), and something I’ve called “the GPT brand trap.” You can skip this part if you just want to know about GPT-5 itself.
Second, I’ve compiled a list of info, data points, predictions, leaks, hints, and other evidence revealing details about GPT-5. This section focuses on quotes from sources (with my interpretation and analysis where they’re ambiguous) to answer two questions: when is GPT-5 coming and how good will it be?
Third, I’ve explored—by following breadcrumbs—what we can expect from GPT-5 in the areas we still know nothing about officially (not even leaks): the scaling laws (data, compute, model size) and algorithmic breakthroughs (reasoning, agents, multimodality, etc.). This is all informed speculation, which makes it the juiciest part.
Here’s the exact outline in case you want to skim:
Part 1: Some meta about GPT-5
Part 2: Everything we know about GPT-5
Part 3: Everything we don’t know about GPT-5
In closing
Part 1: Some meta about GPT-5
The GPT-5 class of models
Between March 2023 and January 2024, when you talked about state-of-the-art AI intelligence or ability across disciplines, you were talking about GPT-4. There was nothing else to compare it to. OpenAI’s model was in a league of its own.
That’s changed since February. Google Gemini (1.0 Ultra and 1.5 Pro) and Anthropic Claude 3 Opus are GPT-4-class models (the upcoming Meta Llama 3 405B, still training at the time of writing, is also GPT-4-class). Long-overdue contenders for that sought-after title, but here at last. Strengths and weaknesses vary depending on how you use them, but all three are in the same ballpark performance-wise.
This new reality—and the seemingly unanimous opinion among early adopters that Claude 3 Opus, in particular, is better than GPT-4 (after the recent GPT-4 turbo upgrade, perhaps not anymore), or that Llama 3 405B evals are already looking strong for intermediate checkpoints—has cast doubt on OpenAI’s leadership.
But we shouldn’t forget there’s a one-year gap between OpenAI and the rest; GPT-4 is an old model by AI-pace-of-progress standards. Admittedly, the newest GPT-4 turbo version isn’t old at all (released on April 9th). It’s hard to argue, however, that the modest iterative improvements that separate GPT-4 versions are comparable to an entirely new state-of-the-art model from Google, Anthropic, or Meta. GPT-4’s skeleton is 1.5 years old; that’s what counts when comparing it against Gemini, Claude, and Llama, which surely leverage the most recent research at deeper levels (e.g. architectural changes) than GPT-4 can possibly adopt just by updating its fine-tuning.
The interesting question is this: Has OpenAI maintained its edge from the shadows while building GPT-5? Or have its competitors finally closed the gap?
One possibility is that Google, Anthropic, and Meta have given us everything they’ve got: Gemini 1.0/1.5, Claude 3, and Llama 3 are the best they can do for now. I don’t think this is the case for any of them (I’ll skip Meta’s case here because they’re in a rather unique situation that should be analyzed separately).1 Let’s start with Google.
Google announced Gemini 1.5 a week after releasing Gemini Advanced (with the 1.0 Ultra backend). They have only given us a glimpse of what Gemini 1.5 is capable of; they announced the intermediate version, 1.5 Pro, which is already GPT-4-class, but I don’t think that’s the best they have. I believe Gemini 1.5 Ultra is ready. If they haven’t launched it yet, it’s because they’ve learned a lesson OpenAI has been exploiting since the early days: timing the releases well is fundamental for success. The generative AI race is just too broadly broadcast to ignore that part.
Knowing there’s a big gap between 1.0 Pro and 1.0 Ultra, it’s reasonable to assume Gemini 1.5 Ultra will be significantly better than 1.5 Pro (Google has yet to improve the naming part, though). But how good will Gemini 1.5 Ultra be? GPT-5-level, even? We don’t know, but given 1.5 Pro’s eval scores, it’s possible.
The takeaway is that Gemini 1.0 being GPT-4-level isn’t accidental—neither the consequence of having hit a wall nor a sign of Google’s limitations—but rather a deliberate plan to tell the world that they, too, can create this kind of AI (remember that the team that builds the models is not the team in charge of the marketing, which Google so often fumbles).
Anthropic’s case isn’t so clear to me because they’re more press-shy than Google and OpenAI, but I have no reason to exclude them, given that Claude 3’s performance sits so subtly above GPT-4’s that it’s hard to believe it’s a coincidence. Another key point in favor of Anthropic is that it was founded in 2021. How much time does a world-class AI startup need before it can compete at the highest level? Partnerships, infrastructure, hardware, training runs, etc. all take time, and Anthropic was still settling in when OpenAI began training GPT-4. Claude 3 is Anthropic’s first real effort, so I won’t be surprised if Claude 4 comes sooner than expected and matches anything OpenAI may achieve with GPT-5.
The pattern I see is clear. For each new state-of-the-art generation of models (first GPT-3 level, then GPT-4 level, next GPT-5 level) the gap between the leader and the rest shrinks. The reason is evident: The top AI companies have learned how to build this technology reliably. Building best-in-class large language models (LLMs) is a solved problem. It’s not OpenAI’s secret anymore. They had an edge at the start because they figured out stuff others hadn’t yet, but those others have caught up.
Even if companies are good at keeping trade secrets from spies and leakers, tech and innovation eventually converge on what’s possible and affordable to do. The GPT-5 class of models may have some degree of heterogeneity (just as happens with the GPT-4 class) but the direction they’re all going is the same.
If I am correct, this takes relevance away from GPT-5 itself—which is why I think this 14,000-word analysis should be read as more than just a preview of GPT-5—and shifts it onto the whole class of models. That’s a good thing.
GPT-5 or GPT-4.5?
There were rumors in early March that GPT-4.5 had been leaked (the announcement, not the weights). Search engines cached the news before OpenAI removed it. The page said the “knowledge cut-off” (up to what point in time the model knows about the state of the world) was June 2024. This means the hypothetical GPT-4.5 would train until June and then go through the months-long process of safety testing, guardrailing, and red-teaming, delaying release until the end of the year.
If this were true, does this mean GPT-5 isn’t coming this year? Possibly, but not necessarily. The thing we need to remember is that these names—GPT-4, GPT-4.5, GPT-5 (or something else entirely)—are placeholders for some level of ability OpenAI considers sufficiently high to deserve a given release number. OpenAI is always improving its models, exploring new research avenues, doing training runs with different levels of compute, and evaluating model checkpoints. Building a new model isn’t a trivial, straightforward process; it requires tons of trial and error, tweaking details, and “YOLO runs” that may yield unexpectedly good results.
After all the experimenting, when they feel ready, they do the big training run. Once it reaches the “that’s good enough” performance point, they release it under the most appropriate name. If they called GPT-4.5 GPT-5, or vice versa, we wouldn’t notice. This step-by-step, checkpointed process also explains why Gemini 1.0/1.5 and Claude 3 can be so slightly above GPT-4 without that meaning LLMs have hit a wall.
This implies that all the sources I’ll quote below talking about a “GPT-5 release” may actually be talking, without realizing it, about GPT-4.5 or some novel kind of thing with a different name. Perhaps the GPT-4.5 leak that puts the knowledge cut-off at June 2024 will become GPT-5 after a few more improvements (perhaps they aimed for a GPT-4.5 level, couldn’t quite get there, and had to discard the release). These decisions change on the go depending on internal results and the moves of competitors (perhaps OpenAI didn’t expect Claude 3 to be the public’s preferred model in March and decided to scrap the GPT-4.5 release for that reason).
Here’s one strong reason to think there won’t be a GPT-4.5 release: it makes no sense to do .5 releases when the competition is so close and the scrutiny so intense (even if Sam Altman says he wants to double down on iterative deployment to avoid shocking the world, give us time to adapt, and so on).
People will unconsciously treat every new big release as “the next model,” whatever the number, and will test it against their expectations. If users feel it’s not good enough, they will question why OpenAI didn’t wait for the .0 release. If they feel it’s very good, then OpenAI will wonder whether they should’ve named it .0 instead, because now they’ll have to make an even bigger jump to deliver an acceptable .0 model. Not everything should bend to what customers want, but generative AI is now more an industry than a scientific field. OpenAI should go straight for the GPT-5 model and make it good.
There are exceptions, though. OpenAI released a GPT-3.5 model, but if you think about it, it was a low-key change (later overshadowed by ChatGPT). They didn’t make a fuss out of that one as they did for GPT-3 and GPT-4, or even DALL-E and Sora. Another example is Google’s Gemini 1.5, announced a week after Gemini 1.0 Ultra. Google wanted to double down on its victory over GPT-4 with two consecutive releases above OpenAI’s best model. It failed—Gemini 1.0 Ultra wasn’t better than GPT-4 (people expected more, not a misleading demo) and Gemini 1.5 was pushed aside by Sora, which OpenAI released a few hours later (Google still has a lot to learn from OpenAI’s marketing tactics).2 Anyway, OpenAI needs a good reason to do a GPT-4.5 release.
The GPT brand trap
The last thing I want to mention in this section is the GPT brand trap: contrary to the other companies, OpenAI has associated its products heavily with the GPT acronym, which is now both a technical term (as it was originally) and a brand with a kind of prestige and power that’s hard to give up. A GPT, Generative Pre-trained Transformer, is a very specific type of neural network architecture that may or may not survive new research breakthroughs. Can a GPT escape the “autoregressive trap”? Can you imbue reasoning into a GPT or upgrade it into an agent? It’s unclear.
My question is: will OpenAI keep calling its models GPTs to maintain the powerful brand most people associate with AI, or will they stay rigorous and switch to something else (Q* or whatever) once better approaches exhaust the technical meaning? If OpenAI sticks to the invaluable acronym (as the trademark registrations suggest), wouldn’t they be sabotaging their future by anchoring it in the past? OpenAI risks letting people falsely believe they’re interacting with yet another chatbot when they may actually have a powerful agent in their hands. Just a thought.
Part 2: Everything we know about GPT-5
When will OpenAI release GPT-5?
On March 18th, Lex Fridman interviewed Sam Altman. One of the details he revealed was about GPT-5’s release date. Fridman asked, “So, when is GPT-5 coming out, again?” to which Altman responded, “I don’t know; that’s the honest answer.”
I believe in his honesty only to the degree that his ambiguous “I don’t know” admits different interpretations. I think he knows exactly what he wants OpenAI to do, but the inherent uncertainty of life allows him the semantic space to say that, honestly, he doesn’t know. To the extent that Altman knows what there is to know, he may not be saying more because, first, they’re still deciding whether to release an intermediate GPT-4.5; second, they’re measuring the distance to competitors; and third, he doesn’t want to reveal the exact date so as not to give competitors the option to overshadow the release somehow, as they do to Google all the time.
He then hesitated to answer whether GPT-5 is coming out this year at all, but added: “We will release an amazing new model this year; I don’t know what we’ll call it.” I think this vagueness is resolved by my arguments above in the “GPT-5 or GPT-4.5?” section: the names are placeholders. Altman also said they have “a lot of other important things to release first” (some things he could be referring to: a public Sora and Voice Engine, a standalone web/work AI agent, a better ChatGPT UI/UX, a search engine, a Q* reasoning/math model). So building GPT-5 is a priority; releasing it isn’t.
Altman also said OpenAI has missed the mark before on its intention “not to have shock updates to the world” (e.g. the first GPT-4 version). This can shed light on the reasons for his ambiguity about GPT-5’s release date. He added: “Maybe we should think about releasing GPT-5 in a different way.” We could dismiss this as a hand-waving comment, but I think it helps explain Altman’s hesitancy to say something like “I know when we’ll release GPT-5 but I won’t tell you,” which would be fair and understandable.
It may even explain the notable improvement in math reasoning of the latest GPT-4 turbo release (April 9th): perhaps the way they’re releasing GPT-5 differently, so as not to shock the world, is by testing its parts (e.g. new math/reasoning fine-tuning for GPT-4) in the wild before bringing them together into a cohesive whole for a much more powerful base model. That would be equal parts shrewd and consistent with Altman’s words.
Let’s hear other sources. On March 19th, the day after the Fridman-Altman interview, Business Insider published a news article entitled “OpenAI is expected to release a 'materially better' GPT-5 for its chatbot mid-year, sources say,” which squarely contradicts what Altman said the day before. How can a non-OpenAI source know the date if Altman doesn’t? How can GPT-5 be coming out mid-year if OpenAI still has so many things to release first? The information doesn’t add up. Here’s what Business Insider wrote:
The generative AI company helmed by Sam Altman is on track to put out GPT-5 sometime mid-year, likely during summer, according to two people familiar with the company [identities confirmed by Business Insider]. … OpenAI is still training GPT-5, one of the people familiar said. After training is complete, it will be safety tested internally and further “red teamed”…
So GPT-5 was still training on March 19th (the only data point from the article that’s a fact rather than a prediction). Let’s take the generous estimate and say it’s finished training already (April 2024) and OpenAI is already doing safety tests and red-teaming. How long will that take before they’re ready to deploy? Let’s take the generous estimate again and say “the same as GPT-4” (GPT-5 being presumably more complex, as we’ll see in the next sections, makes this a safe lower bound). GPT-4 finished training in August 2022 and OpenAI announced it in March 2023. That’s seven months of safety layering. But remember that Microsoft’s Bing Chat already had GPT-4 under the hood, and Bing Chat was announced in early February 2023. So half a year it is.
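For concreteness, here’s that back-of-the-envelope timeline as a tiny script. The dates are this article’s assumptions (rounded to plausible days), not confirmed facts:

```python
from datetime import date

# Assumption: GPT-5's safety phase lasts about as long as GPT-4's,
# measured from end of training to first public deployment (Bing Chat).
gpt4_training_done = date(2022, 8, 1)  # GPT-4 finished training (August 2022)
gpt4_deployed = date(2023, 2, 7)       # Bing Chat announced with GPT-4 inside

safety_phase = gpt4_deployed - gpt4_training_done  # ~6 months
gpt5_training_done = date(2024, 4, 1)              # generous assumption
earliest_release = gpt5_training_done + safety_phase

print(f"Safety phase: ~{safety_phase.days / 30:.0f} months")
print(f"Earliest plausible GPT-5 release: {earliest_release:%B %Y}")
# -> Safety phase: ~6 months
# -> Earliest plausible GPT-5 release: October 2024
```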
All in all, the most generous estimates put GPT-5’s release half a year away from now, pushing the date not to Summer 2024 (June seems to be a hot date for AI releases) but to October 2024—in the best case! That’s one month before the elections. Surely OpenAI isn’t that reckless given the precedents of AI-powered political propaganda.
Could the “GPT-5 going out sometime mid-year” be a mistake by Business Insider and actually refer to GPT-4.5 (or to nothing at all)? I already said I don’t think OpenAI will replace the GPT-5 announcement with a 4.5 one, but they may add this release as an intermediate, low-key milestone while making it clear GPT-5 is coming soon (fighting Google and Anthropic before they release something else is a good reason to put out a 4.5 version—as long as the GPT-5 model is on the way a few months later).
This view reconciles all the info we’ve analyzed so far: Altman’s “I don’t know when GPT-5 is coming out” and his “we have a lot of other important things to release first.” It’s also in line with the doubling down on iterative deployment and the threat that a “shocking” new model would pose to the elections. Speaking of the elections, the other candidate for the GPT-5 release date is around DevDay in November (my favored prediction). Last year, OpenAI held its first developer conference on November 6th, which this year is the day after the elections.
Given all this info (including the incoherent parts, which make sense once we understand that “GPT-5” is an arbitrary name and that non-OpenAI sources may confuse the names of coming releases), my bet is this: GPT-4.5 (possibly something else that also works as a sneak preview of GPT-5) is coming in Summer, and GPT-5 after the elections. OpenAI will release something new in the coming months but it won’t be the biggest release Altman says is coming this year. (Recent events suggest an even earlier surprise is still possible.)3
How good will GPT-5 be?
This is the question everyone’s waiting for. Let me say upfront that I don’t have privileged information. That doesn’t mean you won’t get anything from this section. Its value is twofold: first, it’s a compilation of sources you may have missed; second, it’s an analysis and interpretation of the info, which can shed some further light on what we can expect. (In the “algorithmic breakthroughs” section I’ve gone much more in-depth on what GPT-5 may integrate from cutting-edge research. There’s no official info yet on that, just clues and breadcrumbs and my self-confidence that I can follow them reasonably well.)
Over the months, Altman has hinted at his confidence in GPT-5’s improvement over existing AIs. In January, during the World Economic Forum at Davos, Altman spoke privately to the Korean outlet Maeil Business Newspaper, among other news outlets, and said this (translated with Google): “GPT2 was very bad. GPT3 was pretty bad. GPT4 was pretty bad. But GPT5 will be good.” A month ago he told Fridman that GPT-4 “kinda sucks” and that GPT-5 will be “smarter,” not just in one category but across the board.
People close to OpenAI have also spoken in vague terms. Richard He, via Howie Xu, said: “Most GPT-4 limitations will get fixed in GPT-5,” and an undisclosed source told Business Insider that “[GPT-5] is really good, like materially better.” All this information is fine but also trivial, vague, or even unreliable (can we trust Business Insider’s sources at this point?).
However, there’s one thing Altman told Fridman that I believe is the most important data point we have about GPT-5’s intelligence. Here’s what he said: “I expect that the delta between 5 and 4 will be the same as between 4 and 3.” This claim carries substantially more signal than the others. If it still sounds cryptic, it’s because it says nothing about GPT-5’s absolute intelligence, only about its relative intelligence, which may be trickier to analyze. In particular: GPT-3 → GPT-4 = GPT-4 → GPT-5.
To interpret this “equation” (admittedly still ambiguous) we need the technical means to unpack it as well as a good knowledge of GPT-3 and GPT-4. That’s what I’ve gathered for this section (also, unless some big leak happens, this is the best we’ll get from Altman). The only assumption I need is that Altman knows what he’s talking about—he understands what those deltas imply—and that he already knows the ballpark of GPT-5’s intelligence, even if the model isn’t finished yet (just like Zuck knows Llama 3 405B’s checkpoint performance). From that, I’ve come up with three interpretations (for the sake of clarity, I’ve used only the model numbers, without the “GPT”):
The first reading is that the 4→5 and 3→4 deltas refer to comparable jumps across benchmark evaluations, which means that 5 will be broadly smarter than 4 just as 4 was broadly smarter than 3 (this one starts off tricky because it’s common knowledge that evals are broken, but let’s set that aside). That’s surely an outcome people would be happy with, knowing that as models get better, climbing the benchmarks becomes much harder. So hard, actually, that I wonder if it’s even possible. Not because AI can’t become that intelligent but because such intelligence would make our human measuring sticks too short, i.e. benchmarks would be too easy for GPT-5.
The benchmark comparison OpenAI published for GPT-4 pits 4 against 3.5 (3 would score lower). In some areas, 4 doesn’t improve much, but in others it’s so much better that it already risks making the scores meaningless for being too high. Even if we accepted that 5 wouldn’t get better at literally everything, in those areas where it did, it’d surpass the limits of what the benchmarks can measure. This makes it impossible for 5 to achieve a delta over 4 the size of 3→4. At least if we use these benchmarks.
If we assume Altman is considering harder benchmarks (e.g. SWE-bench or ARC), where both GPT-3’s and GPT-4’s performances are poor (GPT-4 on SWE-bench, GPT-3 on ARC, GPT-4 on ARC), then having GPT-5 show a similar delta would be underwhelming. If you take exams made for humans instead (e.g. SAT, Bar, APs), you can’t trust that GPT-5’s training data hasn’t been contaminated.
The second interpretation is that the delta refers to the non-linear, “exponential” scaling laws (increases in size, data, compute) rather than linear increases in performance. This implies that 5 continues the curves delineated before it by 2, 3, and 4, whatever that yields performance-wise. For instance, if 3 has 175B parameters and 4 has 1.8T, then 5 will have around 18 trillion. But parameter count is just one factor in the scaling approach, so the delta may include everything else: how much computing power they use, how much training data they feed the model, etc. (I explore GPT-5’s relationship with the scaling laws in more depth in the next section.)
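Here’s that extrapolation spelled out. Keep in mind the GPT-4 figure is a rumor, and the GPT-5 number that falls out of it is pure back-of-the-envelope speculation:

```python
# Naive "same multiplicative delta" extrapolation of parameter counts.
gpt3_params = 175e9   # GPT-3: 175B parameters (published)
gpt4_params = 1.8e12  # GPT-4: ~1.8T parameters (rumored, never confirmed)

ratio = gpt4_params / gpt3_params  # ~10x growth per generation
gpt5_params = gpt4_params * ratio  # extrapolate one more generation

print(f"Generation-over-generation ratio: {ratio:.1f}x")
print(f"Speculative GPT-5 size: {gpt5_params / 1e12:.1f}T parameters")
# -> Generation-over-generation ratio: 10.3x
# -> Speculative GPT-5 size: 18.5T parameters
```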
This is a safer claim for Altman to make (OpenAI controls these variables) and a more sensible one (emergent capabilities require new benchmarks for which previous data is non-existent, making the 3→4 vs 4→5 comparison impossible). However, Altman says he expects that delta, which suggests he doesn’t know for sure; and these are variables (e.g. how many FLOPs it took to train GPT-5) he would know.
The third possibility is that Altman’s delta refers to user perception, i.e. users will perceive 5 to be better than 4 to the same degree that they perceived 4 to be better than 3 (ask heavy users and you’ll learn the answer is “a damn lot”). This is a bold claim because Altman can’t possibly know what we’ll think, but he may be talking from experience; that’s what he felt from initial evaluations and he’s simply sharing his anecdotal impression.
If this interpretation is correct, then we can conclude GPT-5 will be impressive, provided it truly feels that way to the people most used to playing with its previous versions, who are also the people with the highest expectations and for whom the novelty of the tech has faded the most. If I had to bet on which interpretation is most correct and I were feeling generous, I’d go for this one.
If I’m not feeling generous, there’s a fourth interpretation: Altman is just hyping his company’s next product. OpenAI has delivered in the past, but the aggressive marketing tactics have always been there (e.g. releasing Sora hours after Google released Gemini 1.5). We can default to this one to be safe, but I believe there’s some truth to the other three, especially the third.
How OpenAI’s goals shape GPT-5
Before we go further into speculation territory, let me share what I believe to be the right framing to understand what GPT-5 can and can’t be, i.e. how to tell informed speculation from delusion. This serves as a general lens on the entirety of OpenAI’s approach to AI. I’ll make it concrete with GPT-5 because that’s our topic today.
OpenAI’s stated goal is AGI, which is so vague as to be irrelevant to serious analysis. Besides AGI, OpenAI has two “unofficial goals” (instrumental goals, if you will), more concrete and immediate, that are the true bottlenecks moving forward (in a technical sense; product-wise there are other considerations, like “make something people want”). These two are augmenting capabilities and reducing costs. Whatever we hypothesize about GPT-5 must obey the need to balance the two.
OpenAI can always augment capabilities mindlessly (as long as its researchers and engineers know how) but that could rack up unacceptable costs on the Azure cloud, which would strain the Microsoft partnership (already not as exclusive as it used to be). OpenAI can’t afford to become a cash drain. DeepMind was Google’s money pit early on, but the excuse was “in the name of science.” OpenAI is focused on business and products, so they have to bring in some juicy profits.
They can always decrease costs (in different ways, e.g. custom hardware, squeezing inference times, sparsity, optimizing infrastructure, and techniques like quantization) but doing it blindly would hinder capabilities (in spring 2023 they had to drop a project codenamed “Arrakis” that aimed to make ChatGPT more efficient through sparsity because it wasn’t performing well). It’s better to spend more money than to lose the trust of customers—or worse, investors.
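To make the cost lever tangible, here’s a toy sketch of one of those techniques, weight quantization: store the weights in 8 bits instead of 32 and accept a small precision loss. This illustrates the general idea only; it’s not how OpenAI (or any production system) actually does it:

```python
import numpy as np

# Symmetric int8 quantization of a weight matrix: 4x less memory
# at the cost of a small reconstruction error. Real systems use
# finer-grained scales and smarter schemes; the trade-off is the same.
rng = np.random.default_rng(0)
w = rng.normal(size=(4096, 4096)).astype(np.float32)  # stand-in fp32 weights

scale = np.abs(w).max() / 127.0                 # map the largest |w| to 127
w_int8 = np.round(w / scale).astype(np.int8)    # 8-bit representation
w_restored = w_int8.astype(np.float32) * scale  # dequantize for use

print(f"Memory: {w.nbytes / 2**20:.0f} MiB -> {w_int8.nbytes / 2**20:.0f} MiB")
print(f"Mean absolute error: {np.abs(w - w_restored).mean():.5f}")
# -> Memory: 64 MiB -> 16 MiB
```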
So anyway, with these two opposing requirements—capabilities and costs—at the top of OpenAI’s hierarchy of priorities (just below the ever-nebulous AGI), we can narrow down what to expect from GPT-5 even if we lack official information: we know they care about both factors. The balance tilts further against OpenAI if we add the external circumstances limiting their options: a GPU shortage (not as extreme as in mid-2023 but still present), an internet data shortage, a data center shortage, and a desperate search for new algorithms.
There’s a final factor that directly influences GPT-5 and pushes OpenAI to make the most capable model they can: their special spot in the industry. OpenAI is the highest-profile AI startup, in the lead economically and technically, and we hold our breath every time they release something. All eyes are on them—competitors, users, investors, analysts, journalists, even governments—so they have to go big. GPT-5 has to exceed expectations and shift the paradigm. Despite what Altman said about iterative deployment and not shocking the world, in a way they have to shock the world. Even if just a little.
So despite costs and some external constraints—compute, data, algorithms, elections, social repercussions—limiting how far they can go, the insatiable hunger for augmented capabilities and the need to shock the world just a little will push them to go as far as they can. Let’s see how far that might be.