
By the time the U.S. government had spent $2 billion on the original Manhattan Project in the 1940s (roughly $30 billion in today’s dollars), it had created not just a bomb but a new global order. Now, calls for a similar crash effort to build artificial general intelligence (AGI) are raising serious ethical and strategic alarms. “A Manhattan Project for AGI assumes we can control something smarter than all of us combined,” former Google CEO Eric Schmidt told reporters in March 2025. “History should make us more cautious” (Business Insider).
Schmidt isn’t alone. Dan Hendrycks, Director of the Center for AI Safety, and Scale AI CEO Alexandr Wang have joined his call for restraint. But behind their warnings lies an even more chilling possibility: that rushing to build AGI could prompt a new form of mutually assured destruction—only this time, with algorithms, not warheads.
The Technonuclear Parallel
At first glance, comparing AGI to nuclear weapons may sound like tech-world hyperbole. But there’s substance behind the metaphor. AGI refers to AI that can independently perform any intellectual task a human can—and potentially surpass us. The leap from today’s narrow AI (think: ChatGPT, image generators) to AGI represents not just a technical jump, but a civilizational shift.
What’s at stake? AGI could accelerate scientific research, cure diseases, even automate governance. But in the wrong hands, or built too fast, AGI introduces catastrophic risks: autonomous weapons, large-scale manipulation and deception, and rogue behavior no one can rein in.
Hendrycks and fellow researchers outlined these dangers in a sobering 2023 paper on catastrophic AI risks (arXiv; Utah CS). Their conclusion: unless international cooperation and robust regulation become the norm, we risk sending the AI race into a dangerous tailspin.
From MAD to MAIM: A Dangerous Deterrence Doctrine
Here’s where the ethics intersect with policy in a disturbing new way. Analysts are now exploring the idea of “Mutual Assured AI Malfunction” (MAIM), a twist on the Cold War doctrine of Mutually Assured Destruction. Under MAIM, nations might build or deploy systems that, if triggered, actively disable an adversary’s AGI development, whether by sabotage, cyberattack, or aggressive deactivation protocols (Fortune).
It sounds like science fiction, but the logic is chillingly real. If U.S. intelligence agencies suspect a rival is close to launching uncontrolled AGI, would disabling their project preemptively be justified? And how would they respond if the roles were reversed?
“A security doctrine built around MAIM will not make anyone safer,” warns Hendrycks. “It heightens the chance of miscalculation, just like with nuclear weapons—only now with less transparency and far fewer treaties.”
‘Move Slow and Fix Things’
The tech world often lionizes speed. “Move fast and break things” became Silicon Valley’s unofficial motto during the social media surge. But when it comes to AGI, experts are urging the opposite: move slowly and think collaboratively.
Companies like OpenAI, DeepMind, and Anthropic are already in a quiet race to reach AGI first. Training large AI models demands vast compute power, often available only to state-funded labs or tech giants with deep pockets. Interest in government involvement has surged as a hedge against foreign dominance or corporate misuse (Time).
But Schmidt worries that a U.S.-initiated AGI Manhattan Project would send the wrong international signal: that rapid militarized AGI is inevitable. “That’s how you get an AI arms race,” he warned earlier this year (TechCrunch).
What’s the Alternative?
Rather than pouring billions into a secretive AGI crash program, experts are calling for:
- Transparency about frontier AI capabilities
- International cooperation rather than secretive rivalry
- Enforceable safety standards with real oversight
Groups like the Center for AI Safety have outlined steps nations can take—from mandatory AI capability evaluations to joint international safety labs (Safe.ai).
Meanwhile, AI developers must reckon with the ethical consequences of their breakthroughs. AGI isn’t just “the next big thing.” It’s potentially the last big thing a civilization invents without safeguards. Countries that invest in deliberate, well-governed AI will not only enhance their national security but also shape how AI is used across the planet.
As journalist Andrew Revkin put it in a recent analysis: “The goal shouldn’t be to ‘win’ AGI—it should be to survive and share it.”
The Bottom Line
An AI Manhattan Project might sound like patriotic ambition. But it risks escalating tensions and undermining the very safety it seeks to ensure. Deterrence logic that arguably worked for nuclear weapons won’t map neatly onto intelligence systems that evolve at exponential speed.
This isn’t just about who gets there first. It’s about whether we all get there in one piece.
So before America throws billions at AGI supremacy, it’s worth asking: What’s the real price of winning this race?
And who sets the rules once we cross the finish line?
Conclusion
What if the real danger isn’t that we lose the race to AGI—but that we win it on the wrong terms? In chasing dominance through speed, secrecy, or state-funded supremacy, we risk creating a future shaped less by wisdom and more by fear. The original Manhattan Project built a weapon; a modern AI version could unleash systems we don’t fully understand on a world that’s not prepared to govern them.
Maybe the most powerful thing America can do isn’t to go faster—but to lead differently: with openness, restraint, and shared responsibility. As technology begins to outpace not just our laws but our collective foresight, the question isn’t just whether we can build AGI—but whether we can build the kind of world that can live with it.