If the Pentagon carries out its threat to blacklist Anthropic’s Claude AI platform, it could be three months or even longer before the U.S. military regains access to such a powerful tool on its classified networks, according to multiple sources familiar with the fight between the Defense Department and the AI maker.
If the Pentagon does designate the San Francisco-based AI startup as a supply-chain risk, it would touch off a lengthy and likely expensive series of protective measures, the people familiar said.
Operators would have to reconfigure the data inputs they are feeding into models, re-examine how to share data in real time with the intelligence community, which also uses Claude widely, and re-validate that replacement models were functioning as the military expected, they said.
In July, Anthropic received a $200 million contract to provide its frontier-model tools to the Pentagon, as did the other three U.S. makers of such products: OpenAI, Google, and xAI.
Three whole months is lightning speed for the DoD.
I guarantee that they could fairly easily swap it with any other system, because it probably isn’t that tightly integrated.
They could, of course, also just not use anything.
Three months is lightning quick for enterprise-level software.
Seriously just swap the API endpoint. They aren’t doing anything terribly sophisticated.
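A minimal sketch of what that claim amounts to, assuming the integration talks to a generic chat-style HTTP API (every endpoint, model name, and request shape here is hypothetical, not an actual deployment detail):

```python
import json

# Hypothetical provider table -- swapping vendors is a one-line config
# change IF the replacement exposes a compatible request/response shape.
PROVIDERS = {
    "anthropic":   {"base_url": "https://api.anthropic.com/v1/messages",
                    "model": "claude-3-5-sonnet"},
    "replacement": {"base_url": "https://api.example.com/v1/chat",
                    "model": "replacement-model"},
}

def build_request(provider: str, prompt: str) -> dict:
    """Assemble the HTTP request for whichever provider is configured."""
    cfg = PROVIDERS[provider]
    return {
        "url": cfg["base_url"],
        "body": json.dumps({
            "model": cfg["model"],
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

# Same pipeline, different key -- no downstream changes needed, in theory.
req = build_request("replacement", "status check")
```

Of course, this only illustrates the happy path; the article's point about re-validating replacement models is exactly the part a config swap doesn't cover.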
How come we’re using AI mostly to kill people? If AI is of such strategic value, why can’t we let AI loose on the Epstein files?
Why is AI always a way for the government to crush little people, never a tool for double-checking and illuminating the government?
Palantir targets the "right people" and gives them plausible deniability.
It’s super obvious why they’re trying to shove AI down our throats on anything and everything.
Months? Think of all the time saved not being forced to use AI!
Look at this administration. This is the epitome of reality-averse. They will welcome hallucinations and slop and if they don’t like what it says they’ll have someone rewrite the prompt until they are happy with it. Everything you hate about AI, this administration considers a policy goal.
https://www.anthropic.com/news/statement-department-of-war
However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now:
- Mass domestic surveillance. …
- Fully autonomous weapons. …
Both of which will be used against the American people.
China is more than willing to provide AI that will control US weapons systems and do mass citizen surveillance for the military, and for real cheap too.