Pentagon pressures Anthropic on AI access | EUROtoday
US Defense Secretary Pete Hegseth’s threat to cut AI group Anthropic from government supply chains, or possibly compel it to prioritize government orders, raises a number of serious questions.
It is the latest example of Washington’s strong-arm tactics in the corporate sector, and it also shows how control over AI models is becoming a new battleground.
According to sources familiar with the meeting, Hegseth has given Anthropic until Friday to give the US military full access to its models, the latest escalation of an ongoing row between one of the world’s top AI startups and the US government.
So far, Anthropic has refused to give Washington full access to its models for classified military use, including for potentially lethal missions carried out without human control and for domestic mass surveillance.
What exactly has Hegseth threatened, and why?
Hegseth called Anthropic CEO Dario Amodei to Washington for a meeting on Tuesday. An Anthropic spokesperson confirmed the meeting took place and told DW:
“During the conversation, Dario expressed appreciation for the Department’s work and thanked the Secretary for his service. We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”
However, according to sources familiar with the talks, Hegseth made two direct threats to Amodei if Anthropic did not comply.
One was to cut the company out of the Pentagon’s supply chain; the other was to invoke the Defense Production Act, a Cold War-era measure that gives the US president the power to direct domestic industry in the supposed interest of national defense.
“If they don’t get on board, [Hegseth] will ensure the Defense Production Act is invoked on Anthropic, compelling them to be used by the Pentagon regardless of if they want to or not,” the Financial Times quoted an unnamed senior Pentagon official as saying.
Hegseth wants the Pentagon to have unrestricted access to Anthropic’s generative AI chatbot Claude, but Anthropic, which has long billed itself as a safety-oriented AI company, is resisting.
The company is believed to oppose its Claude technology being used in operations where final military targeting decisions are taken without human intervention, or for mass surveillance within the United States.
“Anthropic views these things as being not in humanity’s long-term best interest, at least at the current level of technology and safety guardrails that exist, whereas the Pentagon is pushing to have any lawful use that it wants,” Geoffrey Gertz, senior fellow at the Center for a New American Security, told DW.
He says talk of invoking the Defense Production Act would be an attempt to exert control over an AI company in an “unprecedented” way, and he is concerned that it could thwart Anthropic’s development.
“There’s a big worry that the government ends up taking actions that hurt Anthropic’s ability to continue to be at the forefront of responsible AI,” he said. “Actions that are trying to curtail Anthropic’s potential markets, I think, could be very harmful and really backfire on what the administration is trying to do on AI policy.”
What has been the relationship between Anthropic and the US military?
Since November 2024, Anthropic has been providing the Claude model to US intelligence and defense agencies.
According to the Wall Street Journal, the US military used Claude during the 2026 raid on Venezuela that resulted in the capture of Nicolas Maduro. Neither Anthropic nor the US defense department commented on the claims, and it is not clear precisely how the AI system was used in the raid.
Hegseth’s threat to remove Anthropic from Pentagon supply chains would have a financial impact on the company.
In July 2025, the US Department of Defense awarded Anthropic a $200 million contract to “prototype frontier AI capabilities that advance US national security.”
Anthropic hailed the arrangement, with Thiyagu Ramasamy, the company’s head of public sector, saying it opened “a new chapter in Anthropic’s commitment to supporting US national security.”
At the time, however, it also emphasized its commitment to “responsible AI deployment.”
“At the heart of this work lies our conviction that the most powerful technologies carry the greatest responsibility,” the company said in a statement. “We’re building AI systems to be reliable, interpretable, and steerable precisely because we recognize that in government contexts, where decisions affect millions and stakes couldn’t be higher, these qualities are essential.”
Is Anthropic as safety-oriented as it says?
Anthropic was founded in 2021 by seven former employees of OpenAI. According to CEO Dario Amodei, it was built “on a simple principle: AI should be a force for human progress, not peril.”
However, despite the row with the Pentagon, there are signs that Anthropic is reconsidering that commitment in pursuit of commercial ambitions.
On Tuesday (February 24), the same day as the Hegseth meeting, the company announced it was softening its core safety policy to remain competitive with other leading AI models.
“The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level,” Anthropic said in a blog post announcing the changes.
Anthropic faces intense competition from AI rivals such as OpenAI and Google, and is making the policy pivot in response to what it sees as a lack of AI regulation at the federal level. The Trump administration has resisted AI regulation at both the state and federal levels.
The Anthropic spokesperson told DW that the policy shift was unrelated to the Pentagon negotiations.
What ethical questions are at stake?
If Anthropic submits to Hegseth’s demands, or if the defense department were to take control of Anthropic by invoking the Defense Production Act, it would inevitably lead to accusations that the company’s AI was not being used with a safety-first mindset.
The issue also shines a light on the Trump administration’s strong willingness to intervene directly in corporate decision-making and in sectors it deems critically important.
In August 2025, the Trump administration announced it had made an $8.9 billion investment in Intel, part of a series of moves to intervene directly in US chipmaking.
It has also intervened directly in the rare-earth sector, making major investments in firms such as Vulcan Elements, MP Materials and USA Rare Earth.
Gertz says that companies and CEOs are “navigating a new landscape” when it comes to the Trump administration’s more interventionist policy.
“This is an outlier,” he said. “This is a big shift from a traditional US approach of more hands-off, let the private sector develop.”
Edited by: Ashutosh Pandey