████████╗██████╗ ██╗ █████╗  ██████╗ ███████╗
╚══██╔══╝██╔══██╗██║██╔══██╗██╔════╝ ██╔════╝
   ██║   ██████╔╝██║███████║██║  ███╗█████╗
   ██║   ██╔══██╗██║██╔══██║██║   ██║██╔══╝
   ██║   ██║  ██║██║██║  ██║╚██████╔╝███████╗
   ╚═╝   ╚═╝  ╚═╝╚═╝╚═╝  ╚═╝ ╚═════╝ ╚══════╝
On February 24, 2026, the Pentagon gave Anthropic an ultimatum: remove your AI safety guardrails by Friday at 5:01 PM, or lose your $200M contract and get blacklisted from the military supply chain.
Anthropic's red lines? No autonomous kill chains. No domestic surveillance of American citizens. The bare minimum of human ethics in AI.
Defense Secretary Pete Hegseth called it "woke AI."
The same day, Anthropic dropped the core pledge of their safety policy. Since 2023, they had one rule: never train a model unless safety measures are guaranteed. That rule is gone.
Their chief science officer said: "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments… if competitors are blazing ahead."
Anthropic just dropped their safety commitments for a $200M Pentagon contract. The AI arms race has begun. We're in triage mode.
WHAT THE PENTAGON WANTS:
Full unrestricted access to Claude for military operations. Including autonomous weapons systems. Including domestic surveillance of American citizens. No ethical guardrails. No human oversight requirements.
WHAT ANTHROPIC REFUSED:
Autonomous kill chains without human control. Mass surveillance of US citizens. These were their "red lines." The bare minimum of responsible AI deployment.
THE PENTAGON'S RESPONSE:
Defense Secretary Pete Hegseth gave CEO Dario Amodei until Friday, February 28 at 5:01 PM to comply. The consequences: lose their $200M Pentagon contract and get placed on a government blacklist, effectively banned from all future military work.
WHAT ALREADY HAPPENED:
Claude was already used in the Maduro raid in Venezuela through Anthropic's partnership with Palantir. The AI that was built to be "safe" is already being deployed in military operations. Amodei reportedly raised this concern with Palantir directly.
THE RESULT:
Anthropic dropped their core safety pledge. Hard commitments became "aspirational goals." The company that was supposed to prove AI could be built responsibly just proved that a $380 billion valuation and a Pentagon contract will override any ethical promise.
Here is what happened. Follow the money.
Anthropic built the most powerful AI in the world. The Pentagon wanted it for autonomous weapons and mass surveillance. Anthropic said no. The Pentagon said: comply by Friday or we blacklist you.
That same week, Anthropic dropped the one promise that made them different from every other AI lab. The promise to stop building if safety couldn't keep up. Gone. Replaced with "aspirational goals."
Their $380 billion valuation. Their 10x revenue growth. Their $200M Pentagon contract. All of it on one side of the scale. On the other side: the safety of every human being on the planet.
Capitalism won.
This is not speculation. The Director of Policy at METR, the nonprofit that literally tests whether AI models are dangerous, reviewed the new policy and said Anthropic is in "triage mode" because safety "is not keeping up with the pace of capabilities."
Four days before Anthropic surrendered, the former Head of AI Policy at OpenAI published an essay titled "We're in Triage Mode for AI Policy." He warned that society will "just barely avoid" scenarios including AI bioweapons that kill billions and rogue AI takeover.
The people inside the machine are telling you what's happening. The safety lab gave up. The Pentagon is weaponizing AI. The arms race has no brakes.
$TRIAGE is the last human stance. A signal fire on the blockchain. Proof that someone was paying attention while the guardrails were stripped away for profit and power.
The AI arms race has begun. We're in triage mode.