California Governor Gavin Newsom has vetoed a first-of-its-kind state bill that would have enacted some of the nation’s most sweeping artificial intelligence regulations.
The measure, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would have required safety measures from companies that spend more than $100 million to train AI models. It aimed to prevent catastrophic harms caused by AI, such as mass casualty events, and included a “kill switch” requirement to fully shut down a malfunctioning model.
California is home to some of the biggest players in AI, including OpenAI, Anthropic, Google (GOOG), and Meta (META). However, in his veto message Sunday afternoon, Newsom said SB 1047 was “well-intentioned” but that it “does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions, so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
Along with the veto, Newsom announced that he was working with leading experts – including “the godmother of AI” Fei-Fei Li – to put in place guardrails around the deployment of GenAI. He also ordered state agencies to expand their assessment of the risks associated with this technology.
Technology regulation has been a hot-button issue in Silicon Valley and beyond. OpenAI, Google, and Meta publicly opposed the bill. Anthropic, which is backed by Amazon (AMZN), cautiously endorsed it after suggesting changes to its original version.
Despite big tech’s reluctance, more than 100 current and former employees of Google, Meta, OpenAI, and Anthropic called on Newsom to sign the legislation earlier this month, expressing concerns that “the most powerful AI models may soon pose severe risks.”
More than 125 Hollywood actors, directors and entertainment leaders also urged Newsom to sign the bill, writing in a letter: “We fully believe in the dazzling potential of AI to be used for beneficial purposes. But we must also be realistic about the risks.”
SB 1047 was intended to walk the fine line between encouraging innovation in a rapidly changing industry and ensuring the technology is used responsibly.
Newsom discussed his concerns about SB 1047 with Salesforce (CRM) CEO Marc Benioff at the annual Dreamforce conference earlier this month. “The impact of signing wrong bills over the course of a few years could have a profound impact,” Newsom said, referring to the state’s competitiveness.
“This is a space where we dominate, and I want to maintain our dominance. I want to maintain our innovation. I want to maintain our ecosystem. I want to continue to lead. At the same time, you feel a deep sense of responsibility to address some of the more extreme concerns that many of us have, I think, including even the biggest and most ardent proponents of this technology.”
Supporters of the legislation include billionaire tech CEO Elon Musk, who owns the major AI model company xAI, alongside the so-called “Godfathers of AI,” Yoshua Bengio and Geoffrey Hinton.
“The capabilities of AI models are increasing at a very rapid pace, as is the amount of money being invested to reach the level of artificial general intelligence and beyond,” Bengio told Yahoo Finance. “The pursuit of this declared objective by a handful of large companies, racing against each other without appropriate safeguards, poses major risks for our societies and our democracies.”
The bill’s lead author, California state Sen. Scott Wiener, said it was a reasonable framework for an underregulated technology. Wiener stressed the need for strong federal law that would establish nationwide guardrails for all developers.
However, Wiener is not hopeful that a national AI safety bill will become a reality in the near future, calling Congress “completely paralyzed when it comes to technology policy” during a press conference last month.
“Let me be clear – I agree with the author – we cannot afford to wait for a major catastrophe to occur before taking action to protect the public,” Newsom wrote. “However, I do not agree that to keep the public safe, we must settle for a solution that is not informed by an empirical analysis of AI systems and capabilities. Any framework for effectively regulating AI needs to keep pace with the technology itself.”
The bill stood to reshape the future of AI, and many tech leaders spoke out.
“There are risks associated with AI, less to do with the models themselves than with what the models are allowed to do in the real world if left unsupervised,” Affirm (AFRM) CEO Max Levchin told Yahoo Finance at the Goldman Sachs Communacopia + Technology Conference.
“So I’m not downplaying or dismissing the need for controls, governance models, oversight and thoughtful rule-making. I just wouldn’t want to ‘stop everything’ to cite another AI disaster.”
Although the bill passed the State Assembly 48-16 (seven Democrats voted no) and the Senate 30-9 (one Democrat voted no) in August, it encountered some political opposition from California Democrats.
Critics of SB 1047 include eight California House members – Ro Khanna, Zoe Lofgren, Anna G. Eshoo, Scott Peters, Tony Cárdenas, Ami Bera, Nanette Diaz Barragán, and Lou Correa – as well as longtime Newsom ally and former House Speaker Nancy Pelosi.
Over the past month, Newsom has signed 17 AI-related bills aimed at combating deceptive election content, protecting actors’ and artists’ digital likenesses, and regulating sexually explicit content created by AI, among other measures.
The new set of laws will require developers and social media companies to curb the irresponsible use of their platforms to spread misleading content.
While those laws address the immediate dangers of AI, SB 1047 contemplated some of the more extreme risks posed by advanced models.
While speaking at the United Nations General Assembly on Tuesday, President Joe Biden called on world leaders to set AI standards that protect human life.
“This is just the tip of the iceberg of what we need to do to manage this new technology,” President Biden said. “In the years ahead, there may be no greater test of our leadership than how we deal with AI.”
Yasmin Khorram is a senior reporter at Yahoo Finance. Follow Yasmin on Twitter/X @YasminKhorram and on LinkedIn. Send newsworthy tips to Yasmin: yasmin.khorram@yahooinc.com