AI firm Anthropic is defending itself in court against the Trump administration | EUROtoday


The Pentagon logo can be seen behind the podium in the Pentagon press room

As of: March 9, 2026, 10:21 p.m.

The AI firm Anthropic did not want to release its systems without restriction for military use, so the Pentagon punished it and classified it as a security risk. The company is now defending itself against this in court.

With the lawsuit, Anthropic seeks to challenge the Trump administration’s classification of it as a “supply chain risk”. The start-up had previously banned the use of its AI for mass surveillance and for weapon systems that decide autonomously on killings.

Due to its classification as a so-called supply chain risk to national security, the company has been largely excluded from government contracts. It is extremely unusual for a US company to be declared a “supply chain risk to national security”.

US President Donald Trump ordered all US federal agencies to stop using Anthropic technology. Until then, Anthropic had been the only AI company whose software was also approved for classified use by the US military.

AI should be implemented safely and responsibly

Anthropic has now turned to the federal courts in California and the District of Columbia Court of Appeals to overturn the classification and to stop federal authorities from enforcing the order.

In the complaint, the company’s lawyers write that Anthropic was founded with the aim of developing AI technologies that benefit humanity and are at the same time safe and responsible. By classifying Anthropic as a supply chain risk, they argue, the government is taking revenge for the fact that Anthropic stands by these values.

There had been a dispute between Anthropic and the Pentagon over the possible military use of AI technology for mass surveillance and for weapon systems that decide autonomously on killings. The company refused to permit this use.

OpenAI set to replace Anthropic

The Pentagon then canceled a $200 million contract for the use of Anthropic’s AI models. Instead, competitor OpenAI is now stepping in: after the dispute with Anthropic, the ChatGPT developer entered into an agreement with the Pentagon.

OpenAI boss Sam Altman agreed to the ministry’s conditions, but later asserted that there would be technical hurdles preventing its use for mass surveillance in the USA. OpenAI’s head of robotics and hardware, Caitlin Kalinowski, subsequently announced her resignation on Saturday.

Prospects for litigation unclear

The lawsuit’s prospects of success are open. Among other things, Anthropic argues that by classifying it as a “supply chain risk,” the government is penalizing the constitutionally protected right to freedom of expression. The government, for its part, cites national security interests.

According to information from the Washington Post, the US military continues to use Anthropic technologies, including to identify and assess possible attack targets in Iran.

With information from Reinhard Spiegelhauer, ARD Los Angeles

https://www.tagesschau.de/wirtschaft/anthropic-klage-us-regierung-100.html