OpenAI and Google Employees File Amicus Brief in Support of Anthropic Against the US Government | EUROtoday
More than 30 employees of OpenAI and Google, along with Google DeepMind chief scientist Jeff Dean, filed an amicus brief on Monday in support of Anthropic in its legal battle against the US government.
“If allowed to proceed, this effort to punish one of the leading US AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond,” the employees wrote.
The brief was filed just hours after Anthropic sued the Department of Defense and other federal agencies over the Pentagon’s decision to designate the company a “supply-chain risk.” The sanction, which severely limits Anthropic’s ability to work with military contractors, went into effect after Anthropic’s negotiations with the Pentagon fell apart. The AI startup is seeking a temporary restraining order so it can continue working with military partners while the lawsuit proceeds, and the brief specifically supports that motion.
Signatories of the brief include Google DeepMind researchers Zhengdong Wang, Alexander Matt Turner, and Noah Siegel, as well as OpenAI researchers Gabriel Wu, Pamela Mishkin, and Roman Novak, among others. Amicus briefs are legal filings submitted by parties that are not directly involved in a court case but have expertise relevant to it. The employees signed in a personal capacity and do not represent the views of their companies, according to the brief.
OpenAI and Google did not immediately respond to WIRED’s request for comment.
The amicus brief says that the Pentagon’s decision to blacklist Anthropic “introduces an unpredictability in [their] industry that undermines American innovation and competitiveness” and “chills professional debate on the benefits and risks of frontier AI systems.” It notes that the Pentagon could have simply dropped Anthropic’s contract if it no longer wished to be bound by its terms.
The brief also argues that the red lines Anthropic says it requested, including that its AI not be used for mass domestic surveillance or the development of autonomous lethal weapons, are legitimate concerns that require sufficient guardrails. “In the absence of public law, the contractual and technological requirements that AI developers impose on the use of their systems represent a vital safeguard against their catastrophic misuse,” the brief says.
Several other AI leaders have also publicly questioned the Pentagon’s decision to label Anthropic a supply-chain risk. OpenAI CEO Sam Altman said in a social media post that “enforcing the SCR [supply-chain risk] designation on Anthropic would be very bad for our industry and our country.” He added that “this is a very bad decision from the DoW and I hope they reverse it.” As Anthropic’s relationship with the Pentagon soured, OpenAI quickly signed its own contract with the US military, a move some criticized as opportunistic.
https://www.wired.com/story/openai-deepmind-employees-file-amicus-brief-anthropic-dod-lawsuit/