Anthropic has come out against proposed Illinois legislation backed by OpenAI that would protect AI labs from legal liability if their models are used to cause large-scale harm, such as mass casualties or more than $1 billion in property damage.
The fight over the bill, SB 3444, is drawing new battle lines between Anthropic and OpenAI over how AI technologies should be regulated. While AI policy experts say the legislation has only a remote chance of becoming law, it has nonetheless exposed political divisions between two leading US AI labs that could become increasingly important as the rival companies ramp up their lobbying activity across the country.
Behind the scenes, Anthropic has been lobbying state senator Bill Cunningham, SB 3444’s sponsor, and other Illinois lawmakers to either make major changes to the bill or kill it as it stands, according to people familiar with the matter. In an email to WIRED, an Anthropic spokesperson confirmed the company’s opposition to SB 3444 and said it has held promising conversations with Cunningham about using the bill as a starting point for future AI legislation.
“We are opposed to this bill. Good transparency legislation needs to ensure public safety and accountability for the companies developing this powerful technology, not provide a get-out-of-jail-free card against all liability,” Cesar Fernandez, Anthropic’s head of US state and local government relations, said in a statement. “We know that Senator Cunningham cares deeply about AI safety, and we look forward to working with him on changes that would instead pair transparency with real accountability for mitigating the most serious harms frontier AI systems could cause.”
Representatives for Cunningham did not respond to a request for comment. A spokesperson for Illinois governor JB Pritzker sent the following statement: “While the Governor’s Office will monitor and review the many AI bills moving through the General Assembly, governor Pritzker does not believe big tech companies should ever be given a full shield that evades responsibilities they should have to protect the public interest.”
The crux of OpenAI and Anthropic’s disagreement over SB 3444 comes down to who should be liable in the event of an AI-enabled disaster—a nightmare potential scenario that US lawmakers have only recently begun to confront. If SB 3444 were passed, an AI lab would not be responsible if a bad actor used its AI model to, for example, create a bioweapon that kills hundreds of people, so long as the lab drafted its own safety framework and published it on its website.
OpenAI has argued that SB 3444 reduces the risk of serious harm from frontier AI systems while “still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois.”
The ChatGPT maker says it has worked with states like New York and California to create what it calls a “harmonized” approach to regulating AI. “In the absence of federal action, we will continue to work with states—including Illinois—to work toward a consistent safety framework,” OpenAI spokesperson Liz Bourgeois said in a statement. “We hope these state laws will inform a national framework that will help ensure the US continues to lead.”
Anthropic, on the other hand, is arguing that companies developing frontier AI models should be held at least partially responsible if their technology is used for widespread societal harm.
Some experts say the bill would dismantle existing regulations meant to deter companies from behaving badly. “Liability already exists under common law and provides a strong incentive for AI companies to take reasonable steps to prevent foreseeable risks from their AI systems,” says Thomas Woodside, cofounder and senior policy adviser at the Secure AI Project, a nonprofit that has helped develop and advocate for AI safety laws in California and New York. “SB 3444 would take the extreme step of practically eliminating liability for extreme harms. But it is a bad idea to weaken liability, which in most states is the most significant form of legal accountability for AI companies that is already in place.”