OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters
OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as the death or serious injury of 100 or more people, or at least $1 billion in property damage.
The effort appears to mark a shift in OpenAI’s legislative strategy. Until now, OpenAI has largely played defense, opposing bills that would have made AI labs liable for their technology’s harms. Several AI policy experts tell WIRED that SB 3444, which could set a new standard for the industry, is a more extreme measure than bills OpenAI has supported in the past.
The bill would shield frontier AI developers from liability for “critical harms” caused by their frontier models, so long as they did not intentionally or recklessly cause such an incident and have published safety, security, and transparency reports on their website. It defines a frontier model as any AI model trained at a computational cost of more than $100 million, a threshold that would likely apply to America’s largest AI labs, like OpenAI, Google, xAI, Anthropic, and Meta.
“We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois,” said OpenAI spokesperson Jamie Radice in an emailed statement. “They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.”
Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense and leads to one of these extreme outcomes, that would also count as a critical harm. If an AI model were to commit any of these actions under SB 3444, the AI lab behind the model would not be held liable, so long as the harm wasn’t intentional and the lab had published its reports.
Federal and state legislatures in the US have yet to pass any laws specifically determining whether AI model developers, like OpenAI, could be liable for these kinds of harm caused by their technology. But as AI labs continue to release more powerful AI models that raise novel safety and cybersecurity challenges, such as Anthropic’s Claude Mythos, these questions feel increasingly pressing.
In her testimony supporting SB 3444, Caitlin Niedermeyer, a member of OpenAI’s Global Affairs team, also argued in favor of a federal framework for AI regulation. Niedermeyer struck a message consistent with the Trump administration’s crackdown on state AI safety laws, claiming it is important to avoid “a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety.” This also aligns with the broader view in Silicon Valley in recent years, which has often argued that it is paramount for AI regulations not to hamper America’s position in the global AI race. While SB 3444 is itself a state-level safety law, Niedermeyer argued that such laws can be effective if they “reinforce a path toward harmonization with federal systems.”
“At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation,” Niedermeyer said.
Scott Wisor, policy director for the Secure AI Project, tells WIRED he believes the bill has a slim chance of passing, given Illinois’ reputation for aggressively regulating technology. “We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There’s no reason existing AI companies should be facing reduced liability,” Wisor says.
https://www.wired.com/story/openai-backs-bill-exempt-ai-firms-model-harm-lawsuits/