The company is refusing to bow to the Pentagon’s demands.
Earlier this week, Secretary of Defense Pete Hegseth sat down with Dario Amodei, the CEO of the leading AI firm Anthropic, for a conversation about ethics. The Pentagon had been using the company’s flagship product, Claude, for months as part of a $200 million contract—the AI had even reportedly played a role in the January mission to capture Venezuelan President Nicolás Maduro—but Hegseth wasn’t satisfied. There were certain things Claude just wouldn’t do.
That’s because Anthropic had instilled in it certain restrictions. The Pentagon’s version of Claude could not be used to facilitate the mass surveillance of Americans, nor could it be used in fully autonomous weaponry—situations where computers, rather than humans, make the final decision about whom to kill. According to a source familiar with this week’s meeting, Hegseth made clear that if Anthropic did not eliminate those two guardrails by Friday afternoon, two things could happen: The Department of Defense could use the Defense Production Act, a Cold War–era law, to essentially commandeer a more permissive iteration of the AI, or it could label Anthropic a “supply-chain risk,” meaning that anyone doing business with the U.S. military would be forbidden from associating with the company. (This penalty is typically reserved for foreign firms such as China’s Huawei and ZTE.)