Is the Pentagon overreaching on AI access? | EUROtoday


US Defense Secretary Pete Hegseth's threat to cut AI company Anthropic from government supply chains, or possibly compel it to prioritize government orders, raises a number of serious questions.

It is the latest example of Washington's strong-arm tactics in the corporate sector, and it also shows how control over AI models is becoming a new battleground.

Hegseth has reportedly given Anthropic until Friday to grant the US military full access to its applications, the latest escalation of an ongoing row between one of the world's top AI startups and the US government.

So far, Anthropic has refused to give Washington full access to its models for classified military use, including for potentially lethal missions carried out without human control and for domestic mass surveillance.

USA Palm Beach 2026 | US Secretary of War Pete Hegseth at a press conference with Trump
US Defense Secretary Pete Hegseth has reportedly issued an ultimatum to Anthropic. Image: Joe Raedle/Getty Images

Sources close to Anthropic say the company has no intention of easing its usage restrictions despite Hegseth's ultimatum, according to Reuters.

What exactly has Hegseth threatened, and why?

Hegseth called Anthropic CEO Dario Amodei to Washington for a meeting on Tuesday. According to media reports quoting people familiar with the talks, Hegseth made two direct threats to Amodei if Anthropic did not comply.

One was to cut the company out of the Pentagon's supply chain; the other was to invoke the Defense Production Act, a Cold War-era measure that gives the US president the power to control domestic industry in the supposed interest of national defense.

"If they don't get on board, [Hegseth] will ensure the Defense Production Act is invoked on Anthropic, compelling them to be used by the Pentagon regardless of if they want to or not," the Financial Times quoted an unnamed senior Pentagon official as saying.

Hegseth wants the Pentagon to have unrestricted access to Anthropic's generative AI chatbot Claude, but Anthropic, which has long billed itself as a safety-oriented AI company, is resisting.

The company is believed to oppose its Claude technology being used in operations where final military targeting decisions are made without human intervention, or for mass surveillance within the United States.

What has been the relationship between Anthropic and the US military?

Since November 2024, Anthropic has been providing the Claude model to US intelligence and defense agencies.

According to the Wall Street Journal, the US military used Claude during the 2026 raid on Venezuela that resulted in the capture of Nicolas Maduro. Neither Anthropic nor the US Defense Department commented on the claims, and it is not clear exactly how the AI system was used in the raid.

USA New York 2026 | Venezuelan President Nicolas Maduro is taken away by DEA agents in New York
Anthropic's Claude model was used in the capture of Nicolas Maduro, according to a WSJ report. Image: X account of Rapid Response 47/AFP

Hegseth's threat to remove Anthropic from Pentagon supply chains would have a serious financial impact on the company.

In July 2025, the US Defense Department awarded Anthropic a $200 million contract to "prototype frontier AI capabilities that advance US national security."

Anthropic hailed the arrangement, with Thiyagu Ramasamy, the company's head of public sector, saying it opened "a new chapter in Anthropic's commitment to supporting US national security."

However, at the time, it also emphasized its commitment to "responsible AI deployment."

"At the heart of this work lies our conviction that the most powerful technologies carry the greatest responsibility," it said in a statement. "We're building AI systems to be reliable, interpretable, and steerable precisely because we recognize that in government contexts, where decisions affect millions and stakes couldn't be higher, these qualities are essential."

Is Anthropic as safety-oriented as it says?

Anthropic was founded in 2021 by seven former employees of OpenAI. According to CEO Dario Amodei, it was built "on a simple principle: AI should be a force for human progress, not peril."

However, beyond the row with the Pentagon, there are signs that Anthropic is reconsidering that commitment in pursuit of commercial ambitions.

Dario Amodei, CEO and co-founder of Anthropic
Anthropic CEO Dario Amodei has touted the company's safety-first credentials. Image: Markus Schreiber/AP Photo/dpa/picture alliance

On Tuesday (February 24), the same day as the Hegseth meeting, the company announced it was softening its core safety policy in order to remain competitive with other leading AI models.

"The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level," Anthropic said in a blog post announcing the changes.

Anthropic faces intense competition from AI rivals such as OpenAI and Google, and it is making the policy pivot because of what it sees as a lack of AI regulation at the federal level.

The Trump administration has resisted AI regulation at both the state and federal levels. A spokesperson for Anthropic told the Wall Street Journal that the policy shift was unrelated to the Pentagon negotiations.

What ethical questions are at stake?

If Anthropic submits to Hegseth's demands, or if the Defense Department were to take control of Anthropic by invoking the Defense Production Act, it would inevitably lead to accusations that the company's AI was not being used with a safety-first mindset.

The Claude LLM application seen on a monitor
Claude is Anthropic's main language model. Image: Andrej Sokolow/dpa/picture alliance

The issue also shines a light on the Trump administration's willingness to intervene directly in corporate decision-making and in sectors it deems critically important.

In August 2025, the Trump administration announced it had made an $8.9 billion investment in Intel, part of a series of moves to intervene directly in US chipmaking.

It has also intervened directly in the rare-earth sector, making major investments in companies such as Vulcan Elements, MP Materials and USA Rare Earth.

Edited by: Ashutosh Pandey
