On Sunday, September 29, 2024, California Governor Gavin Newsom vetoed California Senate Bill 1047 (SB 1047), which would have established novel safety regulations for large artificial intelligence (AI) models. Known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, SB 1047 would have required developers of covered models, defined to include only large, high-cost and power-intensive AI models, to, among other things:
- Implement cybersecurity protections to prevent unauthorized access or misuse of the model;
- Implement shutdown capabilities for the model in the event its use posed a risk of disruption to critical infrastructure;
- Implement safety protocols to, among other things, prevent unreasonable risk that the model could cause or enable “critical harm,” and provide for testing of the risks posed by the model;
- Reevaluate the model’s safety procedures, capabilities and safeguards, and consider industry best practices in fulfilling their obligations under the Act;
- Engage a third-party auditor to perform an annual compliance audit;
- Report safety incidents to, and file annual compliance statements with, the State Attorney General, who is charged with enforcement authority over, and authorized to bring civil enforcement actions for violations of, the Act; and
- Establish and follow whistleblower protections.
The bill faced broad opposition from the tech industry, which warned that its requirements were too stringent and could drive companies out of California. In vetoing SB 1047, the Governor’s office noted that the legislation fell short of “providing a flexible, comprehensive solution” to the potential risks of AI. In particular, the Governor noted that, while SB 1047 was well-intentioned, it failed to take into account “whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data.” Signaling continued support for AI regulation, however, the Governor’s office announced a series of new initiatives, advised by appointed industry experts, to develop “workable guardrails” for deploying AI models in a manner that manages the attendant risks.
Key Takeaways
In the absence of decisive federal action, states have been leading the charge in regulating the development, deployment, and use of AI models. California’s SB 1047 was groundbreaking in its scope and its focus on “safety.” We should expect states to continue legislating in this area, adopting similar concepts around reasonable care standards for developers and the steps developers must take to limit the potential for harm.
Also, while the tech industry may be applauding the Governor’s veto, the finance and banking industry should take careful note of Newsom’s statement that SB 1047 did not consider a model’s role in critical decision-making or its use of sensitive data. Although SB 1047 targeted large-scale AI models developed by larger tech companies, subsequent legislation will likely focus instead on the intended use of the model, including whether it is involved in consequential decision-making. Newsom’s statements suggest that forthcoming proposals from his office and task force may be more likely to focus on the use case of the model (large or small) and could cover models currently used by financial institutions, such as models used for determining credit risk, marketing, or otherwise informing decisions about offering financial products or services.
Focusing on the use of the model would be similar to the approach taken by Colorado, which earlier this year became the first U.S. state to enact comprehensive AI regulations with its passage of Colorado SB 24-205. The law, known as the “Colorado AI Act,” imposes extensive risk management requirements, aimed at reducing algorithmic discrimination, on the developers and deployers of “high-risk artificial intelligence systems,” defined to include any AI model that is a substantial factor in making a “consequential decision.” However, despite signing the legislation, Governor Jared Polis released a statement expressing his “reservations” about the law and encouraged the Colorado legislature to continue to refine it before its effective date in February 2026.
Considered together, the California and Colorado legislative initiatives, and the accompanying statements from Governors Newsom and Polis, reflect both growing legislative interest in building a regulatory framework to manage the risks of AI and continuing uncertainty about the appropriate scope and scale of such a framework.
We will continue to follow developments in this area.