California Governor Gavin Newsom recently faced a wave of AI-related legislation, with 38 bills reaching his desk. Despite rejecting the much-debated SB-1047, Governor Newsom signed more than a dozen other AI-focused bills into law throughout September. These laws address a wide range of AI-related concerns, from risks to public safety and critical infrastructure to the rise of deepfake pornography and AI-generated clones of deceased Hollywood actors.
“Home to the majority of the world’s leading AI companies, California is working to harness these transformative technologies to help address pressing challenges while studying the risks they present,” said Governor Newsom’s office in a press release.
Of the AI-related bills signed into law, 18 stand out as some of the most comprehensive in the United States to date. Here’s an overview of the major themes covered by these new laws.
AI-Related Bills Signed by Governor Newsom
Transparency in AI Training Data
One of the most significant laws is AB-2013, which introduces transparency requirements for generative AI providers. Set to take effect in 2026, the law will require AI companies to disclose information about the datasets used to train their models. Specifically, it mandates that these companies reveal the sources of their data, explain how the data is used, indicate the number of data points, disclose whether copyrighted or licensed data is included, and clarify the time period during which the data was collected. This increased transparency is designed to provide more accountability for AI systems, particularly those that rely on large-scale data for training.
An article addressing FAQs about this legislation is available here.
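AB-2013 does not prescribe any particular format for these disclosures. Purely as a hypothetical illustration, the sketch below shows how a provider might organize the disclosure items listed above into a machine-readable record; the field names and values are assumptions for the example, not requirements of the law.

```python
# Hypothetical sketch only: AB-2013 does not prescribe a machine-readable format.
# Field names below are illustrative, mapped to the disclosure items the law lists.
from dataclasses import dataclass
from typing import List


@dataclass
class TrainingDataDisclosure:
    """Illustrative record of the items AB-2013 requires providers to disclose."""
    dataset_name: str
    sources: List[str]                 # where the data was obtained
    intended_use: str                  # how the dataset is used in training
    approximate_data_points: int       # number of data points in the dataset
    contains_copyrighted_data: bool    # whether copyrighted material is included
    contains_licensed_data: bool       # whether licensed material is included
    collection_period: str             # time period during which data was collected


# Example entry a provider might publish alongside a model card (values invented)
disclosure = TrainingDataDisclosure(
    dataset_name="example-web-corpus",
    sources=["publicly available web pages", "licensed news archives"],
    intended_use="pretraining a general-purpose language model",
    approximate_data_points=1_200_000_000,
    contains_copyrighted_data=True,
    contains_licensed_data=True,
    collection_period="2019-2023",
)
print(disclosure)
```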
AI Risk Management
Another key piece of legislation signed into law is SB-896, which mandates that California’s Office of Emergency Services (CalOES) conduct risk analyses regarding generative AI’s potential dangers. The law requires collaboration with frontier AI companies like OpenAI and Anthropic to assess the risks that AI could pose to critical state infrastructure and evaluate potential threats that could lead to mass casualty events. This proactive approach aims to mitigate the unpredictable consequences that advanced AI systems could have on public safety.
AI in Health Care
Several bills focus on the use of AI in health care settings. AB-3030 requires health care providers to disclose when they use generative AI to communicate with patients, particularly when the messages contain clinical information. Meanwhile, SB-1120 establishes limits on how health care providers and insurers can automate their services, ensuring that licensed physicians oversee the use of AI tools in these environments. These laws are intended to protect patient rights and ensure that AI is used appropriately in clinical contexts.
Privacy and AI
Expanding (once again) California’s privacy framework, AB-1008 extends the state’s existing privacy laws to cover generative AI systems. This means that if an AI system exposes personal information—such as names, addresses, or biometric data—businesses will be subject to restrictions on how they can use and profit from that data. The goal is to ensure that AI systems adhere to the same privacy protections that govern other forms of data processing and use.
Watermarking AI-Generated Content
SB-942, another significant bill signed into law, requires widely used generative AI systems to disclose that the content they create is AI-generated. This will be done through “provenance data” embedded in the content’s metadata. For instance, images created by OpenAI’s DALL-E will need a tag in their metadata indicating they were generated by AI. Although some AI companies already voluntarily add these watermarks, the law formalizes the requirement, helping the public identify AI-generated content more easily.
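To make the general idea of provenance metadata concrete, the minimal sketch below tags a PNG image as AI-generated by writing text chunks into its metadata with the Pillow library. This is only an illustration of the concept; it is not the disclosure mechanism SB-942 prescribes, and the metadata keys used here are assumptions, not part of any standard.

```python
# Minimal sketch of embedding provenance data in image metadata (illustration only).
# This is NOT the mechanism SB-942 prescribes; it simply shows how an AI-generated
# tag could be stored in a PNG's metadata. Requires Pillow (pip install Pillow).
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def tag_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Copy a PNG and embed simple AI-provenance text chunks in its metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")   # illustrative key, not standardized
    metadata.add_text("generator", generator)   # e.g., the generating system's name
    image.save(dst_path, pnginfo=metadata)


# Reading the tags back out:
# Image.open(dst_path).text  ->  {'ai_generated': 'true', 'generator': '...'}
```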
Legal Definition of AI
Another critical law, AB-2885, establishes a formal definition of artificial intelligence within California law. According to the bill, AI is defined as “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.” This uniform definition is intended to create a clearer legal framework for regulating AI technologies in the state.
AI Education Initiatives
Governor Newsom also signed laws that address the role of AI in education. AB-2876 requires the California State Board of Education to consider AI literacy when developing curriculum frameworks for subjects like math, science, and history. This initiative seeks to prepare students for a future where AI plays an increasingly important role by teaching them how AI works, its limitations, and the ethical considerations involved in using the technology. Additionally, SB-1288 requires California superintendents to form working groups to explore how AI is being used in public education and identify potential opportunities and challenges.
Robocalls Using AI
To address the issue of deceptive AI-generated robocalls, Governor Newsom signed AB-2905 into law. The bill requires robocalls to disclose when they use AI-generated voices, aiming to prevent confusion like the incident earlier in 2024 where voters in New Hampshire were misled by a deepfake robocall mimicking President Joe Biden’s voice. This law is part of a broader effort to curb the misuse of AI in political and commercial contexts.
Combatting Deepfake Pornography
Deepfake pornography has emerged as a troubling issue, and Governor Newsom signed several bills aimed at tackling this problem. AB-1831 expands existing child pornography laws to include content generated by AI systems. SB-926 makes it illegal to blackmail individuals using AI-generated nude images that resemble them, while SB-981 requires social media platforms to establish reporting mechanisms for users to flag deepfake nudes. Platforms must temporarily block such content while it is under investigation and remove it permanently if confirmed as a deepfake.
Election Deepfakes
Governor Newsom also signed a series of laws aimed at preventing AI-generated deepfakes from influencing elections. AB-2655 mandates that large online platforms like Facebook and X (formerly Twitter) remove or label election-related AI deepfakes and create channels for reporting such content. Candidates and elected officials can seek legal relief if platforms fail to comply with the law. AB-2839 addresses the actions of social media users who post or repost AI deepfakes that could mislead voters, holding them accountable for spreading false information. Additionally, AB-2355 requires political advertisements created using AI to include clear disclosures, ensuring transparency in political campaigns.
AI and the Entertainment Industry
Two additional laws, strongly supported by the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), establish new standards for the entertainment and media industry in relation to AI. AB-2602 requires studios to obtain permission from actors before creating AI-generated replicas of their voice or likeness. AB-1836 extends similar protections to deceased performers, requiring studios to secure consent from the performers’ estates before creating digital replicas. These laws are designed to protect the rights of actors and their estates in the face of growing AI capabilities that can digitally recreate performers.
SB-1047 Veto
While most of the AI-related bills won approval, Governor Newsom vetoed SB-1047, a bill that sought to regulate large AI systems. In his veto letter, the governor explained that the bill was too narrowly focused on large AI models, which he believed could “give the public a false sense of security.” He further emphasized the need for a more flexible regulatory approach that addresses both large and small AI systems, as smaller models can also pose significant risks.
In a conversation with Salesforce CEO Marc Benioff at the 2024 Dreamforce conference, the governor discussed the importance of distinguishing between demonstrable and hypothetical risks in AI. He acknowledged that while it is impossible to address every potential issue with AI, the state’s regulatory efforts will focus on solving the most pressing challenges.
Governor Newsom also hinted at his broader approach to regulating AI. “There’s one bill that is sort of outsized in terms of public discourse and consciousness; it’s this SB-1047,” Newsom said. “What are the demonstrable risks in AI and what are the hypothetical risks? I can’t solve for everything. What can we solve for? And so that’s the approach we’re taking across the spectrum on this.”
Impact on Businesses
California’s recent flurry of AI legislation reflects the state’s proactive approach to addressing both the opportunities and dangers posed by artificial intelligence. From privacy and education to health care and election integrity, these new laws represent some of the most comprehensive AI regulations in the United States.
Companies doing business in California or interacting with California residents need to take action now to ensure compliance with these laws. Companies not doing business in California should likewise take note and begin evaluating compliance measures aligned with the general themes of these laws, as other states are sure to follow.