Trump Orders U.S. Agencies To Stop Using Anthropic As OpenAI Strikes Deal With Pentagon | EUROtoday
WASHINGTON (AP) — The Trump administration on Friday ordered all U.S. agencies to stop using Anthropic’s artificial intelligence technology and imposed other major penalties, escalating an unusually public clash between the government and the company over AI safety.
President Donald Trump, Defense Secretary Pete Hegseth and other officials took to social media to chastise Anthropic for failing to allow the military unrestricted use of its AI technology by a Friday deadline, accusing it of endangering national security after CEO Dario Amodei refused to back down over concerns the company’s products could be used in ways that would violate its safeguards.
“We don’t need it, we don’t want it, and will not do business with them again!” Trump said on social media.
Hegseth also deemed the company a “supply chain risk,” a designation usually stamped on foreign adversaries that could derail the company’s critical partnerships with other businesses.
In a statement issued Friday evening, Anthropic said it would challenge what it called an unprecedented and legally unsound action “never before publicly applied to an American company.”
Anthropic had said it sought narrow assurances from the Pentagon that its AI chatbot Claude would not be used for mass surveillance of Americans or in fully autonomous weapons. The Pentagon said it was not interested in such uses and would only deploy the technology in lawful ways, but it also insisted on access without any limitations.
“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” the company said. “We will challenge any supply chain risk designation in court.”
The government’s effort to assert dominance over the internal decision-making of the company comes amid a wider clash over AI’s role in national security and concerns about how increasingly capable machines could be used in high-stakes situations involving lethal force, sensitive information or government surveillance.
OpenAI strikes deal with Pentagon hours after Anthropic was punished
Hours after its competitor was punished, OpenAI CEO Sam Altman announced Friday night that his company had struck a deal with the Pentagon to provide its AI to classified military networks, potentially filling a gap created by Anthropic’s ouster.
But Altman said that the same red lines that were the sticking point in Anthropic’s dispute with the Pentagon are now enshrined in OpenAI’s new partnership.
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote, adding that the Defense Department “agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
Altman also said he hopes the Pentagon will “offer these same terms to all AI companies” as a way to “de-escalate away from legal and governmental actions and toward reasonable agreements.”

Trump and others lash out at Anthropic
Trump said Anthropic made a mistake in trying to strong-arm the Pentagon. He wrote on Truth Social that most agencies must immediately stop using Anthropic’s AI but gave the Pentagon a six-month period to phase out the technology that is already embedded in military platforms.
“The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars!” he wrote in all caps.
Months of private talks exploded into public debate this week and hit a stalemate when Amodei said his company “cannot in good conscience accede” to the demands.
Anthropic can afford to lose the contract. But the government’s actions posed broader risks at the peak of the company’s meteoric rise from a little-known computer science research lab in San Francisco to one of the world’s most valuable startups.
The president’s decision was preceded by hours of top Trump appointees from the Pentagon and the State Department taking to social media to criticize Anthropic, but their complaints posed contradictions.
Top Pentagon spokesman Sean Parnell said Anthropic’s unwillingness to go along with the military’s demands was “jeopardizing critical military operations and potentially putting our warfighters at risk.” Hegseth said the Pentagon “must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.”

Trump’s social media post said the company “better get their act together, and be helpful” during the phase-out period or there would be “major civil and criminal consequences to follow.”
However, Hegseth’s decision to designate Anthropic a supply chain risk relies on an administrative tool designed for companies owned by U.S. adversaries, intended to prevent them from selling products that are harmful to American interests.
Virginia Sen. Mark Warner, the top Democrat on the Senate Intelligence Committee, noted that this dynamic, “combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations.”
Dispute shakes up Silicon Valley
The dispute stunned AI developers in Silicon Valley, where venture capitalists, prominent AI scientists and a large number of workers from Anthropic’s top rivals — OpenAI and Google — voiced support for Amodei’s stand in open letters and other forums.
The moves could benefit OpenAI’s ChatGPT as well as Elon Musk’s competing chatbot, Grok, which the Pentagon also plans to give access to classified military networks. It could serve as a warning to Google, which has a still-evolving contract to supply its AI tools to the military.
Musk sided with Trump’s administration, saying on his social media platform X that “Anthropic hates Western Civilization.” Altman took a different approach, expressing solidarity with Anthropic’s safeguards and opposing the government’s “threatening” approach while also working to secure OpenAI’s deal with the Pentagon. It marked the latest twist in OpenAI’s longtime and sometimes acrimonious rivalry with Anthropic, which was founded by a group of ex-OpenAI leaders in 2021.
Retired Air Force Gen. Jack Shanahan, a former leader of the Pentagon’s AI initiatives, wrote on social media this week that the government “painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end.”
Shanahan said Claude is already being widely used across the government, including in classified settings, and Anthropic’s red lines were “reasonable.” He said the AI large language models that power chatbots like Claude, Grok and ChatGPT are also “not ready for prime time in national security settings,” particularly not for fully autonomous weapons.
Anthropic is “not trying to play cute here,” he wrote on LinkedIn. “You won’t find a system with wider & deeper reach across the military.”
O’Brien reported from Providence, Rhode Island.