Futurism.com / Artificial Intelligence
Aug 22, 3:07 PM EDT, by Frank Landymore
Responsible Party
Tech Companies Furious at New Law That Would Hold Them Accountable When Their AI Does Bad Stuff
Uh oh!
Image: Getty / Futurism. The California bill, SB 1047, has been criticized by companies like OpenAI for potentially stifling the nascent AI industry.
With Great Power...
California is on the verge of passing a bill that would impose sweeping regulations on the AI industry, after it was approved by the state's Assembly Appropriations Committee on Thursday.
The bill, SB 1047, proposes a number of safety requirements for AI developers to prevent "severe harm," and includes provisions that could hold them accountable for the output of their AI models.
Now OpenAI, which has advocated for AI regulation in the past, is joining other tech companies, as well as some politicians, in decrying the bill, arguing that it would hurt innovation in the industry, Bloomberg reports.
"The AI revolution is only just beginning, and California's unique status as the global leader in AI is fueling the state's economic dynamism," Jason Kwon, chief strategy officer at OpenAI, wrote in a letter to state Senator Scott Wiener, who introduced the bill, as quoted by
Bloomberg. "SB 1047 would threaten that growth, slow the pace of innovation, and lead California's world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere."
Playing It Safe
In more specific terms, the bill would give the California attorney general the power to seek an injunction against tech companies that put out unsafe AI models, according to Platformer's analysis. If successfully sued, these companies could face civil penalties, though not criminal ones.
To be compliant, businesses would need to carry out mandated safety testing for any AI models that either cost more than $100 million to develop or are trained using more than 10^26 floating-point operations of computing power. AI developers would also need to build their AI models with a "kill switch" that could be used to shut them down in an emergency.
In addition to in-house testing, developers would be required to hire third-party auditors to assess their safety practices, per Reuters. The bill would also extend legal protections to whistleblowers who speak out about unsafe AI practices.
Tech Troublemakers
As Platformer observes, the bill raises an age-old question: should blame fall on the person using the tech, or on the company that made it? With regard to social media, the law generally says that websites can't be held accountable for what their users post, thanks to Section 230 of the Communications Decency Act.
AI companies hope that this status quo applies to them, too. Because AI models frequently hallucinate and are easily tricked into ignoring their guardrails, the prospect of being held accountable for their chaotic outputs could be a major headache.
OpenAI and others argue that such regulatory actions are premature and could hamper development of the tech in the state. And true, it may be that AIs are still in their infancy and have a long way to go before they're capable enough to turn on us à la Skynet, but it would be remiss to downplay more mundane dangers, like the tech's capacity to spread misinformation or to help carry out hacks.
As it stands, the bill awaits a vote in the state's full Assembly, and must be passed by the end of the month before it can be sent to Governor Gavin Newsom for approval.