Update: California Governor Gavin Newsom vetoed SB-1047 on September 29, 2024, citing concerns that the bill's compute- and cost-based thresholds captured only the largest, most expensive models while leaving smaller, potentially hazardous models unaddressed.
On July 2, 2024, the California State Assembly Judiciary Committee passed SB-1047 (Safe and Secure Innovation for Frontier Artificial Intelligence Models Act), following its passage by the Senate on May 21, 2024. The bill aims to curtail the risk of novel threats to public safety and security posed by AI models more powerful than any in the current artificial intelligence (AI) landscape, and it follows in the footsteps of Colorado’s SB24-205 (Consumer Protections in Interactions with Artificial Intelligence), widely considered the first comprehensive AI legislation in the United States.
Even if SB-1047 does not pass into law in California, it could nonetheless provide a model for other consumer-conscious legislatures contemplating regulation. Let’s examine the specifics:
Scope
Unlike the Colorado bill, which regulates AI systems based on business sector and application, this bill covers AI models based solely on the scale of their training. Rather than applying to AI developers broadly, it specifically targets developers (i.e., companies) training or providing “covered models” or “covered model derivatives” in California.
Covered Models
The bill, as amended on July 3, defines a “covered model” as an AI model “trained using a quantity of computing power greater than 10^26 integer or floating-point operations (FLOPs), the cost of which exceeds one hundred million dollars (US$100,000,000).” This threshold mirrors the one set by President Biden in his Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, under which all models trained using a quantity of computing power greater than 10^26 FLOPs are subject to reporting requirements administered by the Secretary of Commerce.
These two figures currently represent a similar threshold, as US$100,000,000 is the approximate cost of training a model with 10^26 FLOPs. Training compute, measured in FLOPs, is a rough proxy for a model’s scale and sophistication: the most capable AI models, especially those built on deep learning, require enormous quantities of computing power to train.
At the time of this article, no existing model falls within the scope of the proposed bill’s computing-power threshold. The largest known training run to date is Gemini Ultra’s, estimated at 5 × 10^25 FLOPs, or half the bill’s threshold. However, as AI rapidly progresses, we can expect models to exceed the 10^26 FLOPs threshold within a year.
Most of the provisions in the bill extend to “covered model derivatives” as well, which the bill, as amended, defines as any of the following: an unmodified copy of a covered model; a copy of a covered model that has been subjected to post-training modifications other than fine-tuning; or a copy of a covered model that has been fine-tuned using less than 3 × 10^25 FLOPs of computing power.
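To make these definitions concrete, the sketch below expresses the two tests in Python. It is a simplification for intuition only, not a legal test: reading the quoted definition as requiring both the compute and cost conditions is our interpretation of the amended language, and the example cost figure is hypothetical.

```python
# Illustrative sketch of the "covered model" and "covered model derivative"
# definitions described above (bill as amended July 3). Not legal advice.

COVERED_FLOPS = 1e26                # 10^26 integer or floating-point operations
COVERED_COST_USD = 100_000_000      # US$100,000,000 training cost
DERIVATIVE_FINETUNE_FLOPS = 3e25    # fine-tuning ceiling for derivatives

def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    # Our reading: both the compute and the cost thresholds must be exceeded.
    return training_flops > COVERED_FLOPS and training_cost_usd > COVERED_COST_USD

def is_covered_derivative(base_is_covered: bool, finetune_flops: float = 0.0) -> bool:
    # Copies of a covered model (unmodified, modified post-training, or
    # fine-tuned with less than 3 x 10^25 FLOPs) remain covered derivatives.
    return base_is_covered and finetune_flops < DERIVATIVE_FINETUNE_FLOPS

# Gemini Ultra's estimated run (5 x 10^25 FLOPs; cost figure hypothetical)
# falls below the compute threshold, so it would not be covered.
print(is_covered_model(5e25, 150_000_000))   # False
```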
Developer Restrictions
As amended, the bill prohibits developers from using a covered model commercially or publicly if there is an unreasonable risk that the model can cause or enable a critical harm. Likewise, making a covered model or a covered model derivative available for commercial or public use is prohibited if such a risk is present. However, the bill makes no mention of private or not-for-profit use, so it is not clear what impact this restriction will have on industry, particularly companies using in-house models.
Developer Responsibilities
SB-1047 requires developers of covered models to implement various measures to ensure safety, which include the following:
Before Training
- Implement administrative, technical, and physical cybersecurity protections
- Implement the capability to promptly enact a full shutdown
- Implement, and follow, a separate written safety and security protocol that provides reasonable assurance the developer will not produce a covered model or covered model derivative posing an unreasonable risk of critical harm, and provide the Frontier Model Division with an updated copy
- Conduct annual safety and security reviews
Before Commercial or Public Use
- Perform a risk assessment and implement reasonable safeguards to prevent critical harm
Additional Responsibilities
- Annual certification of compliance from a third-party auditor
- Report safety incidents affecting a covered model and any covered model derivatives within the control of the developer within 72 hours
- Implement reasonable safeguards and requirements to prevent a third party from using the model, or creating a derivative model, to cause critical harm
Computing Cluster Responsibilities
Operators of computing clusters must also implement various measures if and when a customer utilizes computing resources sufficient to train a covered model:
- Obtain basic administrative information
- Assess whether the customer intends to utilize the computing cluster to deploy a covered model
- Implement the capability to promptly enact a full shutdown of any resources used to train or operate customer models
- Keep records of customer’s IP addresses used for access and the date and time of each access
Additional Requirements
The bill further requires developers with a commercially available covered model and operators of a computing cluster to have a transparent, uniform, publicly available price schedule and to not engage in unlawful discrimination or noncompetitive activity in determining price or access.
Enforcement
Under the bill, the Attorney General has discretion to bring a civil action in which the court may award injunctive or declaratory relief (including, but not limited to, orders to modify the model, implement a full shutdown, or delete the covered model), damages, attorneys’ fees and costs, or any other relief the court deems appropriate. The bill also includes provisions that prevent developers from escaping liability through contract or corporate structure.
To protect whistleblowers, the bill also gives the Labor Commissioner discretion to enforce its whistleblower provisions, violations of which would constitute violations of the Labor Code.
The bill would also create the Frontier Model Division and give the agency discretion under the statute to:
- Review certification reports and publicly release summarized findings
- Advise the Attorney General on potential violations
- Engage in rulemaking on open-source AI and on preventing unreasonable risks posed by covered models
- Establish accreditation for best practices
- Publish safety reports
- Issue guidance on the categories of AI safety events likely to constitute a state of emergency and advise the California Governor in such an event
The Frontier Model Division would serve under the direct supervision of the Board of Frontier Models, a five-person panel also proposed by the bill.
Impacts to Business
As amended, the key figure to pay attention to is the US$100,000,000 training-cost threshold. This dollar figure is fixed in the statute, whereas the threshold for computational power could be adjusted by the Frontier Model Division. Furthermore, we can expect the amount of computational power that US$100,000,000 can buy to rise over time as innovation reduces the cost of training models.
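A back-of-the-envelope calculation illustrates how the two thresholds interact; the cost-per-FLOP figures below are hypothetical, chosen only to show the trend, not estimates of actual training costs.

```python
# Hypothetical illustration: as the cost per FLOP of training falls,
# US$100,000,000 buys progressively more compute, so the fixed dollar
# threshold increasingly determines coverage before the 10^26 FLOPs
# threshold does. The per-FLOP prices here are assumed, not measured.

COMPUTE_THRESHOLD = 1e26   # FLOPs
COST_THRESHOLD = 1e8       # US$100,000,000

for cost_per_flop in (2e-18, 1e-18, 5e-19):  # assumed US$ per FLOP
    flops_bought = COST_THRESHOLD / cost_per_flop
    status = "above" if flops_bought > COMPUTE_THRESHOLD else "at or below"
    print(f"${cost_per_flop:.0e}/FLOP: US$100M buys {flops_bought:.0e} FLOPs "
          f"({status} the 10^26 threshold)")
```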
Should the bill pass into law, it could affect business in a number of ways. Developers of frontier models may find the bill onerous: those leading the generative AI sector can expect extensive oversight that could substantially slow the process of bringing a new model to market. Furthermore, regulated developers looking to license their models to third-party developers will need to take precautions to avoid liability by ensuring that those third parties cannot retrain the model in a way that gives it hazardous capabilities. The bill’s fate could also influence whether these developers relocate their operations to a less regulated state; even if they do, however, they will remain subject to regulation under President Biden’s Executive Order.
In contrast, many developers will not be affected by the bill. As previously mentioned, even today’s most advanced models do not meet the regulatory threshold, and a wide range of solutions can be developed with existing technology well below it. For example, numerous models cost less than US$100,000 to train, including independently developed deep learning models for computer vision and fine-tuned neural networks for facial recognition. As developers continue to explore the many applications of neural networks, they will likely find many that are profitable at a sufficiently low computational cost. However, companies and developers should also keep in mind the rapid pace at which AI models are progressing and evaluate use cases for the most computationally intensive models as they become economically viable.
Developers concerned about regulation have numerous options for developing quality models without exceeding the cost and computing-power thresholds. For example, the bill does not account for investment in training data quality; by improving it, developers approaching the thresholds could obtain significant gains in accuracy without crossing them. In addition, fine-tuning a preexisting model could be a viable solution for many companies looking to implement an AI solution within their business.
Both developers and deployers should continue to monitor pending legislation in states across the nation; more than four hundred AI-related bills are currently active nationwide. Even if SB-1047 does not pass into law in California, similar legislation could pass in other states.
Special thanks to Adam Lundquist, a summer associate in Foley’s Boston office, for his contributions to this article.