On March 18, 2025, the Joint California Policy Working Group on AI Frontier Models released a draft report outlining principles for AI regulation in California. This effort follows Gov. Gavin Newsom’s 2024 veto of SB 1047 (Wiener), the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which sought to impose stricter regulatory requirements on advanced AI models.
Gov. Newsom opposed SB 1047’s prescriptive approach, arguing that it could stifle innovation and impose compliance burdens without clear risk mitigation benefits. In response, the administration commissioned this report to guide AI policymaking with a balanced, evidence-based approach rather than immediate regulatory mandates.
Key Takeaways from the Draft Report
The draft report signals the Newsom administration’s evolving stance on AI governance, prioritizing measured interventions, industry engagement and risk-based policymaking over strict regulatory mandates. While the report establishes guiding principles rather than proposing immediate regulatory action or imposing new compliance obligations, it suggests the following directions for legislative and regulatory action in the state.
1. Balanced, Evidence-Based AI Regulation
- AI policies should be grounded in empirical research rather than theoretical risks.
- The state should avoid premature, overly restrictive regulations and instead focus on monitoring risks over time.
2. Transparency, Accountability and Third-Party Oversight
- The report encourages greater AI transparency but does not propose legally binding disclosure requirements at this stage.
- It supports third-party audits and whistleblower protections as tools for accountability.
3. Encouragement of Voluntary Compliance and Industry-Led Governance
- AI companies are expected to proactively implement safety measures rather than wait for government-imposed regulations.
- Policies should align with industry best practices, leveraging private-sector expertise.
4. Monitoring and Adverse Event Reporting (Without Immediate Mandates)
- Companies should prepare for potential future reporting requirements, but there are no immediate mandates in the report.
- The focus is on developing internal risk assessment protocols in anticipation of evolving expectations.
5. Early Design Decisions Matter
- AI risks should be addressed at the development stage, not just post-deployment.
- Future policies may prioritize oversight of AI model training and deployment strategies rather than enforcement mechanisms.
What the Report Does Not Include
Notably, the report does not:
- Require licensing or registration of AI models.
- Introduce strict liability rules for AI companies.
- Mandate immediate transparency or disclosure requirements for AI developers.
- Outline specific legislative or regulatory proposals.
Instead, this report serves as an advisory document that could inform future policymaking in California.
The Legislative Reality: This Will Not Prevent Regulation
While this report reflects Gov. Newsom’s stance on AI regulation and was commissioned at his direction, it does not mean that California legislators will stop pushing for stricter AI oversight. In fact, the California State Legislature operates independently from the governor’s office and has already demonstrated strong interest in advancing AI regulation—regardless of Newsom’s preference for a more flexible, industry-aligned approach.
For example, the veto of SB 1047 did not deter lawmakers from pursuing AI legislation. The bill had significant legislative support, and it is likely that new AI-related proposals will continue to emerge and advance in 2025, many of which could revive key provisions from SB 1047—including licensing requirements, liability measures and stricter compliance mandates.
This divergence in approach between the governor and the Legislature is not unusual. While the administration may advocate for voluntary compliance and incentive-driven policies, many lawmakers—particularly those focused on worker protections, bias, misinformation and corporate accountability—will continue to push for more aggressive AI regulations. If these disagreements prove strong enough, the Legislature may even attempt to override the governor’s stance by advancing more restrictive bills that impose direct obligations on AI developers and businesses.
Beyond legislative efforts, public concern and political momentum for AI regulation are intensifying. Issues such as job displacement, algorithmic bias and misinformation have gained bipartisan attention, creating pressure on legislators to act—even if the governor prefers a more measured approach. As AI-related concerns continue to dominate public discourse, the likelihood of new regulations only increases.
Moreover, if the Legislature fails to enact AI laws, it is entirely possible that California voters could see AI-related ballot initiatives in 2026. The state’s history of ballot-driven policymaking suggests that advocacy groups, labor organizations or consumer protection groups may bypass the legislative process and push AI regulations directly to the voters. This occurred most recently with the passage of Proposition 24 in 2020, which enhanced consumer data privacy protections and established the California Privacy Protection Agency—which now has sweeping regulatory authority regarding data privacy in the state.
Finally, major localities like San Francisco and Los Angeles may attempt to enact their own AI regulations, creating a complex and fragmented compliance landscape for businesses operating in California.
In short, while this report outlines the governor’s vision for AI governance, it does not preclude the possibility of stricter legislative action. Businesses should remain vigilant, actively engage in the policymaking process and be prepared for a dynamic regulatory environment in the coming years.
Business Implications and Recommendations
Companies using or deploying AI in California should not take a wait-and-see approach—the momentum for AI regulation is building, and legislative or regulatory action is increasingly likely. To stay ahead of potential requirements and avoid reactive compliance, businesses should proactively engage in shaping AI governance rather than waiting for mandates to be imposed.
1. Engage in the Policy Process
2. Develop Internal Transparency and Risk Monitoring Measures
- Even though mandatory disclosure rules are not in place, companies should consider tracking AI risks internally.
- Voluntary compliance efforts may help shape future regulatory frameworks.
3. Monitor Legislative and Local AI Regulations
- Be prepared for future AI bills that could introduce new compliance requirements.
- Track potential local-level AI rules in key jurisdictions.
4. Align AI Strategies with Industry-Led Best Practices
- Businesses that adopt proactive AI safety measures will be better positioned if regulations emerge later.
- Engage with coalitions, researchers and industry groups to influence policy direction.
Call to Action: Submit Feedback by April 8, 2025
The public comment period for this draft report is open until April 8, 2025. This is a critical opportunity for AI developers, tech companies and other industry stakeholders to:
- Shape the final report’s recommendations before its June 2025 release.
- Influence California’s long-term AI governance framework.
- Ensure future policies are industry-informed rather than overly prescriptive.
We strongly encourage companies, industry groups and AI stakeholders to submit feedback and engage in discussions shaping AI policy.
Next Steps and Support
Our team is actively monitoring legislative and regulatory developments related to AI governance in California. We are available to assist with:
- Drafting and submitting comments to the AI Policy Working Group.
- Engaging with key legislators and policymakers to advocate for balanced AI governance.
- Developing strategic plans to prepare for potential future AI regulations.
This document is intended to provide you with general information regarding California's AI policy. The contents of this document are not intended to provide specific legal advice. If you have any questions about the contents of this document or if you need legal advice as to an issue, please contact the attorneys listed or your regular Brownstein Hyatt Farber Schreck, LLP attorney. This communication may be considered advertising in some jurisdictions. The information in this article is accurate as of the publication date. Because the law in this area is changing rapidly, and insights are not automatically updated, continued accuracy cannot be guaranteed.