Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
| California State Legislature | |
|---|---|
| Full name | Safe and Secure Innovation for Frontier Artificial Intelligence Models Act |
| Introduced | February 7, 2024 |
| Assembly voted | August 28, 2024 (48–16) |
| Senate voted | August 29, 2024 (30–9) |
| Sponsor(s) | Scott Wiener |
| Governor | Gavin Newsom |
| Bill | SB 1047 |
| Website | Bill Text |
| Status | Not passed (vetoed by Governor on September 29, 2024) |
The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, was a failed[1] 2024 California bill intended to "mitigate the risk of catastrophic harms from AI models so advanced that they are not yet known to exist".[2] Specifically, the bill would have applied to models which cost more than $100 million to train and were trained using a quantity of computing power greater than 10²⁶ integer or floating-point operations.[3] SB 1047 would have applied to all AI companies doing business in California—the location of the company would not matter.[4] The bill would have created protections for whistleblowers[5] and required developers to perform risk assessments of their models prior to release, under the supervision of the Government Operations Agency. It would also have established CalCompute, a University of California public cloud computing cluster for startups, researchers and community groups.
Background
The rapid increase in the capabilities of AI systems in the 2020s, including the release of ChatGPT in November 2022, caused some researchers and members of the public to become concerned about existential risks associated with increasingly powerful AI systems.[6][7] For example, hundreds of tech executives and AI researchers signed a statement in May 2023 calling for mitigating the risk of extinction from AI to be a "global priority" alongside risks such as "pandemics and nuclear war."[8] However, the plausibility of this threat is still widely debated.[9]
Strong regulation of AI has been criticized for purportedly enabling regulatory capture by large AI companies like OpenAI, a phenomenon in which regulation advances the interests of larger companies at the expense of smaller competitors and the public in general.[7] Other advocates of AI regulation aim to prevent bias and privacy violations rather than existential risks.[7] For example, some experts who view existential concerns as overblown and unrealistic regard them as a distraction from near-term harms of AI such as discriminatory automated decision-making.[10]
In the face of existential concerns, technology companies have made voluntary commitments to conduct safety testing, for example at the AI Safety Summit and AI Seoul Summit.[11][12]
In 2023, not long before the bill was proposed, Governor Newsom of California and President Biden issued executive orders on artificial intelligence.[13][14][15] State Senator Wiener said SB 1047 draws heavily on the Biden executive order, and is motivated by the absence of unified federal legislation on AI safety.[16] Historically, California has passed regulation on several tech issues itself, including consumer privacy and net neutrality, in the absence of action by Congress.[17][18]
History
Proposal and voting
The bill was originally drafted by Dan Hendrycks, co-founder of the Center for AI Safety, who has previously argued that evolutionary pressures on AI could lead to "a pathway towards being supplanted as the Earth's dominant species."[19][20] The center issued a statement in May 2023 co-signed by Elon Musk and hundreds of other business leaders stating that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."[21]
State Senator Wiener first proposed AI legislation for California through an intent bill called SB 294, the Safety in Artificial Intelligence Act, in September 2023.[22][23][24] On February 7, 2024, Wiener introduced SB 1047.[25][26]
On May 21, SB 1047 passed the Senate 32–1.[27][28] Wiener significantly amended the bill on August 15, 2024, in response to industry advice.[29] The amendments added clarifications and removed the creation of a "Frontier Model Division" and the penalty of perjury.[30][31]
On August 28, the bill passed the State Assembly 48–16. Because of the amendments, the Senate voted on the bill again, passing it 30–9.[32][33]
Veto by governor
On September 29, Governor Gavin Newsom vetoed the bill.[34] The deadline for California lawmakers to override Newsom's veto (November 30, 2024) has since passed.[1]
Newsom cited concerns over the bill's regulatory framework targeting only large AI models based on their computational size, while not taking into account whether the models are deployed in high-risk environments.[35][36] Newsom emphasized that this approach could create a false sense of security, overlooking smaller models that might present equally significant risks.[35][37] He acknowledged the need for AI safety protocols[35][38] but stressed the importance of adaptability in regulation as AI technology continues to evolve rapidly.[35][39]
Governor Newsom also committed to working with technology experts, federal partners, and research institutions, including the Carnegie Endowment for International Peace, led by former California Supreme Court Justice Mariano-Florentino Cuéllar; and Stanford University's Human-Centered AI (HAI) Institute, led by Dr. Fei-Fei Li. He announced plans to collaborate with these entities to advance responsible AI development, aiming to protect the public while fostering innovation.[35][40]
Provisions
SB 1047 would have covered AI models with training compute over 10²⁶ integer or floating-point operations and a training cost of over $100 million.[3][41] If a covered model were fine-tuned at a cost of more than $10 million, the resulting model would also have been covered.[31]
Developers of covered models and derivatives would have been required to submit a certification, subject to auditing, before training models. The certification would have shown mitigation of "reasonable" risk of "critical harms" from the covered model and its derivatives, including post-training modifications. Safeguards to reduce risk included the ability to shut down the model,[5] which was variously described as a "kill switch"[42] and a "circuit breaker".[43] Whistleblower provisions would have protected employees who report safety problems and incidents.[5]
The bill would have defined critical harms with respect to four categories:[2][44]
- Creation or use of a chemical, biological, radiological, or nuclear weapon[45]
- Cyberattacks on critical infrastructure causing mass casualties or at least $500 million of damage
- Autonomous crimes causing mass casualties or at least $500 million of damage
- Other harms of comparable severity
Additionally, SB 1047 would have created a public cloud computing cluster called CalCompute, associated with the University of California, to support startups, researchers, and community groups that lack large-scale computing resources.[30]
Compliance and supervision
SB 1047 would have required developers, beginning January 1, 2026, to annually retain a third-party auditor to perform an independent audit of compliance with the requirements of the bill.[30] The Government Operations Agency would have reviewed the results of safety tests and incidents and issued guidance, standards, and best practices.[30] The bill would have created a Board of Frontier Models, composed of nine members, to supervise the application of the bill by the Government Operations Agency.[30]
Reception
Subjects of debate
Proponents of the bill described its provisions as simple and narrowly focused, with Sen. Scott Wiener describing it as a "light-touch, basic safety bill".[46] Critics disputed this, describing the bill's language as vague and criticizing it as consolidating power in the largest AI companies at the expense of smaller ones.[46] Proponents, in turn, argued that the bill would apply only to models trained using more than 10²⁶ FLOPS and costing over $100 million, or fine-tuned at a cost of more than $10 million, and that the threshold could be raised if needed.[47]
The penalty of perjury was also a subject of debate, and was eventually removed through an amendment. The scope of the "kill switch" requirement was also reduced, following concerns from open-source developers. The use of the term "reasonable assurance" in the bill was also controversial, and it was eventually amended to "reasonable care". Critics then argued that "reasonable care" imposed an excessive burden by requiring confidence that models could not be used to cause catastrophic harm; proponents claimed that the standard did not require certainty and that it already applied to AI developers under existing law.[47]
Support and opposition
Individual supporters of the bill included Turing Award recipients Yoshua Bengio[48] and Geoffrey Hinton,[49] Elon Musk,[50] Bill de Blasio,[51] Kevin Esvelt,[52] Dan Hendrycks,[53] Vitalik Buterin,[54] OpenAI whistleblowers Daniel Kokotajlo[45] and William Saunders,[55] Lawrence Lessig,[56] Sneha Revanur,[57] Stuart Russell,[56] Jan Leike,[58] actors Mark Ruffalo, Sean Astin, and Rosie Perez,[59] Scott Aaronson,[60] and Max Tegmark.[61] Over 120 Hollywood celebrities, including Mark Hamill, Jane Fonda, and J. J. Abrams, also signed a statement in support of the bill.[62] Max Tegmark likened the bill's focus on holding companies responsible for the harms caused by their models to the FDA requiring clinical trials before a company can release a drug to the market.[61]
Organizations sponsoring the bill included the Center for AI Safety, Economic Security California and Encode Justice.[63] The labor union SAG-AFTRA and two women's groups, the National Organization for Women and Fund Her, sent support letters to Governor Newsom.[64] The Los Angeles Times editorial board also wrote in support of the bill.[65]
Individual opponents of the bill included Andrew Ng, Fei-Fei Li,[66] Russell Wald,[67] Ion Stoica, Jeremy Howard, Turing Award recipient Yann LeCun, and U.S. Congressmembers Nancy Pelosi, Zoe Lofgren, Anna Eshoo, Ro Khanna, Scott Peters, Tony Cárdenas, Ami Bera, Nanette Barragán and Lou Correa.[7][68][69] Andrew Ng called for more targeted regulatory approaches, such as the targeting of deepfake pornography, the watermarking of generated materials, and investment in red teaming and other security measures.[70]
Researchers at the University of California and Caltech also wrote open letters in opposition.[68]
Industry
The bill was opposed by industry trade associations including the California Chamber of Commerce, the Chamber of Progress,[a] the Computer & Communications Industry Association[b] and TechNet.[c][3] Companies including Meta[74] and OpenAI[75] opposed or raised concerns about the bill, while Google,[74] Microsoft and Anthropic[61] proposed substantial amendments.[4] Anthropic later announced its support for an amended version of the bill, while noting that some aspects still seemed concerning or ambiguous to it.[76] Several startup founders and venture capital organizations opposed the bill, including Y Combinator,[77][78] Andreessen Horowitz,[79][80][81] Context Fund[82][83] and Alliance for the Future.[84]
After the bill was amended, Anthropic CEO Dario Amodei wrote that "the new SB 1047 is substantially improved, to the point where we believe its benefits likely outweigh its costs. However, we are not certain of this, and there are still some aspects of the bill which seem concerning or ambiguous to us."[85] xAI CEO Elon Musk also supported the bill.[86] On September 9, 2024, at least 113 current and former employees of AI companies OpenAI, Google DeepMind, Anthropic, Meta, and xAI signed a letter to Governor Newsom in support of SB 1047.[87][88]
Open source developers
Critics expressed concerns about the liability the bill would have imposed on open-source developers who use or improve existing freely available models. Yann LeCun, the Chief AI Officer of Meta, suggested the bill would kill open-source AI models.[70] There were concerns in the open-source community that, due to the threat of legal liability, companies like Meta might choose not to make models (for example, Llama) freely available.[89][90] The AI Alliance, among other open-source organizations, wrote in opposition to the bill.[68] In contrast, Creative Commons co-founder Lawrence Lessig wrote that SB 1047 would make open-source AI models safer and more popular with developers, since both harm and liability for that harm would be less likely.[43]
Public opinion polls
The Artificial Intelligence Policy Institute, a pro-regulation AI think tank,[91][92] ran three polls of California respondents on whether they supported or opposed SB 1047.[93][94][95][96][97][98] The third poll asked the question "Some policy makers are proposing a law in California, Senate Bill 1047, which would require that companies that develop advanced AI conduct safety tests and create liability for AI model developers if their models cause catastrophic harm and they did not take appropriate precautions."[99] The options were "Support", "Oppose", and "Not Sure".[93][94] Their poll results were 53.8–64.2% support in July,[93][94] 60.1–69.9% support in early August,[95][96] and 65.8–74.2% support in late August.[97][98]
The California Chamber of Commerce, by contrast, conducted its own poll, which showed 28% of respondents supporting the bill, 46% opposed, and 26% neutral. The framing of the question has, however, been described as "badly biased".[92] The summary of the bill in their question was "Lawmakers in Sacramento have proposed a new state law—SB 1047—that would create a new California state regulatory agency to determine how AI models can be developed. This new law would require small startup companies to potentially pay tens of millions of dollars in fines if they don’t implement orders from state bureaucrats. Some say burdensome regulations like SB 1047 would potentially lead companies to move out of state or out of the country, taking investment and jobs away from California."[100]
A YouGov poll commissioned by the Economic Security Project, which co-sponsored the bill, found that 78% of registered voters across the United States supported SB 1047, and 80% thought that Governor Newsom should sign the bill.[101] Their question was "The California legislature passed a bill recently to regulate artificial intelligence, or AI, and since so many AI companies are based there, it could have national impacts. The bill would require California companies developing the next generation of most powerful AI systems to test for safety risks before releasing them. If testing shows that the AI system could be used to cause catastrophic harm to society, such as disrupting the financial system, shutting down the power grid, or creating biological weapons, the company must add reasonable safeguards to prevent these risks. If the company fails to test or adopt reasonable safeguards, they could be held accountable by the Attorney General of California."[101]
A David Binder Research poll commissioned by the Center for AI Safety, a group focused on mitigating societal-scale risk and a sponsor of the bill, found that 77% of Californians support a proposal to require companies to test AI models for safety risks, and 86% consider it an important priority for California to develop AI safety regulations.[102][103][104][105] Their question was "The proposal would require California companies developing the next generation of most powerful AI systems to test for safety risks before releasing them. If testing shows that the AI system could be used to cause catastrophic harm to society, such as disrupting the financial system, shutting down the power grid or creating biological weapons, the company must add reasonable safeguards to prevent these risks. If the company fails to test or adopt reasonable safeguards, they could be held accountable by the Attorney General of California."[102]
See also
- Artificial general intelligence
- Regulation of AI in the United States
- Regulation of artificial intelligence
Notes
References
- ^ a b "Bill History - SB-1047 Safe and Secure Innovation for Frontier Artificial Intelligence Models Act". Government of California. Retrieved 2025-03-29.
- ^ a b Bauer-Kahan, Rebecca. "ASSEMBLY COMMITTEE ON PRIVACY AND CONSUMER PROTECTION" (PDF). California Assembly. State of California. Retrieved 1 August 2024.
- ^ a b c Daniels, Owen J. (2024-06-17). "California AI bill becomes a lightning rod—for safety advocates and developers alike". Bulletin of the Atomic Scientists.
- ^ a b Rana, Preetika (2024-08-07). "AI Companies Fight to Stop California Safety Rules". The Wall Street Journal. Retrieved 2024-08-08.
- ^ a b c Thibodeau, Patrick (2024-06-06). "Catastrophic AI risks highlight need for whistleblower laws". TechTarget. Retrieved 2024-08-06.
- ^ Henshall, Will (2023-09-07). "Yoshua Bengio". TIME.
- ^ a b c d Goldman, Sharon. "It's AI's "Sharks vs. Jets"—welcome to the fight over California's AI safety bill". Fortune. Retrieved 2024-07-29.
- ^ Gregg, Aaron; Lima-Strong, Cristiano; Vynck, Gerrit De (2023-05-31). "AI poses 'risk of extinction' on par with nukes, tech leaders say". Washington Post. ISSN 0190-8286. Retrieved 2024-07-03.
- ^ De Vynck, Gerrit (20 May 2023). "The debate over whether AI will destroy us is dividing Silicon Valley". The Washington Post.
- ^ "Artificial intelligence could lead to extinction, experts warn". BBC News. 2023-05-30. Retrieved 2024-08-30.
- ^ Milmo, Dan (2023-11-03). "Tech firms to allow vetting of AI tools, as Musk warns all human jobs threatened". The Guardian. Retrieved 2024-08-12.
- ^ Browne, Ryan (2024-05-21). "Tech giants pledge AI safety commitments — including a 'kill switch' if they can't mitigate risks". CNBC. Retrieved 2024-08-12.
- ^ "Governor Newsom Signs Executive Order to Prepare California for the Progress of Artificial Intelligence". Governor Gavin Newsom. 2023-09-06.
- ^ "President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence". White House. 2023-10-30.
- ^ Riquelmy, Alan (2024-02-08). "California lawmaker aims to put up guardrails for AI development". Courthouse News Service. Retrieved 2024-08-04.
- ^ Myrow, Rachael (2024-02-16). "California Lawmakers Take On AI Regulation With a Host of Bills". KQED.
- ^ Piper, Kelsey (2024-08-29). "We spoke with the architect behind the notorious AI safety bill". Vox. Retrieved 2024-09-04.
- ^ Lovely, Garrison (2024-08-15). "California's AI Safety Bill Is a Mask-Off Moment for the Industry". The Nation. Retrieved 2024-09-04.
- ^ Edwards, Benj (2024-07-29). "From sci-fi to state law: California's plan to prevent AI catastrophe". Ars Technica. Retrieved 2024-08-30.
- ^ Hendrycks, Dan (2023-05-31). "The Darwinian Argument for Worrying About AI". Time. Retrieved 2024-10-01.
- ^ "Prominent AI leaders warn of 'risk of extinction' from new technology". Los Angeles Times. 2023-05-31. Retrieved 2024-08-30.
- ^ Perrigo, Billy (2023-09-13). "California Bill Proposes Regulating AI at State Level". TIME. Retrieved 2024-08-12.
- ^ David, Emilia (2023-09-14). "California lawmaker proposes regulation of AI models". The Verge. Retrieved 2024-08-12.
- ^ "Senator Wiener Introduces Safety Framework in Artificial Intelligence Legislation". Senator Scott Wiener. 2023-09-13. Retrieved 2024-08-12.
- ^ "California's SB-1047: Understanding the Safe and Secure Innovation for Frontier Artificial Intelligence Act". DLA Piper. Retrieved 2024-08-30.
- ^ Edwards, Benj (2024-07-29). "From sci-fi to state law: California's plan to prevent AI catastrophe". Ars Technica. Retrieved 2024-08-30.
- ^ Hendrycks, Dan (2024-08-27). "California's Draft AI Law Would Protect More than Just People". TIME. Retrieved 2024-08-30.
- ^ "California SB1047 | 2023-2024 | Regular Session". LegiScan. Retrieved 2024-08-30.
- ^ Zeff, Maxwell (2024-08-15). "California weakens bill to prevent AI disasters before final vote, taking advice from Anthropic". TechCrunch. Retrieved 2024-08-23.
- ^ a b c d e Calvin, Nathan (August 15, 2024). "SB 1047 August 15 Author Amendments Overview". safesecureai.org. Retrieved 2024-08-16.
- ^ a b "Senator Wiener's Groundbreaking Artificial Intelligence Bill Advances To The Assembly Floor With Amendments Responding To Industry Engagement". Senator Scott Wiener. 2024-08-16. Retrieved 2024-08-17.
- ^ Wolverton, Troy (2024-08-29). "Legislature passes Wiener's controversial AI safety bill". San Francisco Examiner. Retrieved 2024-08-30.
- ^ "Bill Votes - SB-1047 Safe and Secure Innovation for Frontier Artificial Intelligence Models Act". leginfo.legislature.ca.gov. Retrieved 2024-09-04.
- ^ Lee, Wendy (2024-09-29). "Gov. Gavin Newsom vetoes AI safety bill opposed by Silicon Valley". Los Angeles Times. Retrieved 2024-09-29.
- ^ a b c d e "Gavin Newsom SB 1047 Veto Letter" (PDF). Office of the Governor.
- ^ "Adaptability is critical as we race to regulate a technology still in its infancy. This will require a delicate balance. While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology"
- ^ "By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good."
- ^ "Let me be clear - I agree with the author - we cannot afford to wait for a major catastrophe to occur before taking action to protect the public. California will not abandon its responsibility. Safety protocols must be adopted. Proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable."
- ^ "Adaptability is critical as we race to regulate a technology still in its infancy. This will require a delicate balance. While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology"
- ^ Newsom, Gavin (2024-09-29). "Governor Newsom announces new initiatives to advance safe and responsible AI, protect Californians". Governor Gavin Newsom. Retrieved 2024-09-29.
- ^ "07/01/24 - Assembly Judiciary Bill Analysis". California Legislative Information.
- ^ Financial Times (2024-06-07). "Outcry from big AI firms over California AI "kill switch" bill". Ars Technica. Retrieved 2024-08-30.
- ^ a b Lessig, Lawrence (2024-08-30). "Big Tech Is Very Afraid of a Very Modest AI Safety Bill". The Nation. Retrieved 2024-09-03.
- ^ "Analysis of the 7/3 Revision of SB 1047". Context Fund.
- ^ a b Johnson, Khari (2024-08-12). "Why Silicon Valley is trying so hard to kill this AI bill in California". CalMatters. Retrieved 2024-08-12.
- ^ a b Goldman, Sharon. "It's AI's "Sharks vs. Jets"—welcome to the fight over California's AI safety bill". Fortune. Retrieved 2024-07-29.
- ^ a b "Misrepresentations of California's AI safety bill". Brookings. Retrieved 2024-10-02.
- ^ Bengio, Yoshua. "Yoshua Bengio: California's AI safety bill will protect consumers and innovation". Fortune. Retrieved 2024-08-17.
- ^ Kokalitcheva, Kia (2024-06-26). "California's AI safety squeeze". Axios.
- ^ Coldewey, Devin (2024-08-26). "Elon Musk unexpectedly offers support for California's AI bill". TechCrunch. Retrieved 2024-08-27.
- ^ de Blasio, Bill (2024-08-24). "X post". X.
- ^ Riquelmy, Alan (2024-08-14). "California AI regulation bill heads to must-pass hearing". Courthouse News Service. Retrieved 2024-08-15.
- ^ Metz, Cade (2024-08-14). "California A.I. Bill Causes Alarm in Silicon Valley". New York Times. Retrieved 2024-08-22.
- ^ Jamal, Nynu V. (2024-08-27). "California's Bold AI Safety Bill: Buterin, Musk Endorse, OpenAI Wary". Coin Edition. Retrieved 2024-08-27.
- ^ Zeff, Maxwell (2024-08-23). "'Disappointed but not surprised': Former employees speak on OpenAI's opposition to SB 1047". TechCrunch. Retrieved 2024-08-23.
- ^ a b Pillay, Tharin (2024-08-07). "Renowned Experts Pen Support for California's Landmark AI Safety Bill". TIME. Retrieved 2024-08-08.
- ^ "Assembly Standing Committee on Privacy and Consumer Protection". CalMatters. Retrieved 2024-08-08.
- ^ "Dozens of AI workers buck their employers, sign letter in support of Wiener AI bill". The San Francisco Standard. 2024-09-09. Retrieved 2024-09-10.
- ^ Korte, Lara; Gardiner, Dustin (2024-09-17). "Act natural". Politico.
- ^ "Call to Lead". Call to Lead. Retrieved 2024-09-10.
- ^ a b c Samuel, Sigal (2024-08-05). "It's practically impossible to run a big AI company ethically". Vox. Retrieved 2024-08-06.
- ^ Lee, Wendy (2024-09-24). "Mark Hamill, Jane Fonda, J.J. Abrams urge Gov. Newsom to sign AI safety bill". LA Times. Retrieved 2024-09-24.
- ^ Korte, Lara (2024-02-12). "A brewing battle over AI". Politico.
- ^ Lovely, Garrison (2024-09-12). "Actors union and women's groups push Gavin Newsom to sign AI safety bill". The Verge. Retrieved 2024-09-12.
- ^ The Times Editorial Board (2024-08-22). "Editorial: Why California should lead on AI regulation". Los Angeles Times. Retrieved 2024-08-23.
- ^ Li, Fei-Fei. "'The Godmother of AI' says California's well-intended AI bill will harm the U.S. ecosystem". Fortune. Retrieved 2024-08-08.
- ^ "Groundswell of Opposition to CA's AI Bill as it Nears Vote". Pirate Wires. August 13, 2024. Retrieved 2024-11-12.
- ^ a b c "SB 1047 Impacts Analysis". Context Fund.
- ^ "Assembly Judiciary Committee 2024-07-02". California State Assembly.
- ^ a b Edwards, Benj (2024-07-29). "From sci-fi to state law: California's plan to prevent AI catastrophe". Ars Technica. Retrieved 2024-07-30.
- ^ "Corporate Partners". Chamber of Progress. 16 May 2022.
- ^ "Members". Computer & Communications Industry Association.
- ^ "Members". TechNet.
- ^ a b Korte, Lara (2024-06-26). "Big Tech and the little guy". Politico.
- ^ Zeff, Maxwell (2024-08-21). "OpenAI's opposition to California's AI bill 'makes no sense,' says state senator". TechCrunch. Retrieved 2024-08-23.
- ^ Waters, John K. (2024-08-26). "Anthropic Announces Cautious Support for New California AI Regulation Legislation -". Campus Technology. Retrieved 2024-11-12.
- ^ "Little Tech Brings a Big Flex to Sacramento". Politico. 21 June 2024.
- ^ "Proposed California law seeks to protect public from AI catastrophes". The Mercury News. 25 July 2024.
- ^ "California's Senate Bill 1047 - What You Need to Know". Andreessen Horowitz.
- ^ Midha, Anjney (25 July 2024). "California's AI Bill Undermines the Sector's Achievements". Financial Times.
- ^ "Senate Bill 1047 will crush AI innovation in California". Orange County Register. 10 July 2024.
- ^ "AI Startups Push to Limit or Kill California Public Safety Bill". Bloomberg Law.
- ^ "The Batch: Issue 257". Deeplearning.ai. 10 July 2024.
- ^ "The AI Safety Fog of War". Politico. 2024-05-02.
- ^ "Anthropic says California AI bill's benefits likely outweigh costs". Reuters. 2024-08-23. Retrieved 2024-08-23.
- ^ Coldewey, Devin (2024-08-26). "Elon Musk unexpectedly offers support for California's AI bill". TechCrunch. Retrieved 2024-08-27.
- ^ "Dozens of AI workers buck their employers, sign letter in support of Wiener AI bill". The San Francisco Standard. 2024-09-09. Retrieved 2024-09-10.
- ^ "Call to Lead". Call to Lead. Retrieved 2024-09-10.
- ^ Piper, Kelsey (2024-07-19). "Inside the fight over California's new AI bill". Vox. Retrieved 2024-07-29.
- ^ Piper, Kelsey (2024-06-14). "The AI bill that has Big Tech panicked". Vox. Retrieved 2024-07-29.
- ^ Robertson, Derek (2024-05-06). "Exclusive poll: Americans favor AI data regulation". Politico. Retrieved 2024-08-18.
- ^ a b Lovely, Garrison (2024-08-28). "Tech Industry Uses Push Poll to Stop California AI Bill". The American Prospect. Retrieved 2024-11-12.
- ^ a b c Bordelon, Brendan. "What Kamala Harris means for tech". POLITICO Pro. (subscription required)
- ^ a b c "New Poll: California Voters, Including Tech Workers, Strongly Support AI Regulation Bill SB1047". Artificial Intelligence Policy Institute. 22 July 2024.
- ^ a b Sullivan, Mark (2024-08-08). "Elon Musk's Grok chatbot spewed election disinformation". Fast Company. Retrieved 2024-08-13.
- ^ a b "Poll: Californians Support Strong Version of SB1047, Disagree With Anthropic's Proposed Changes". Artificial Intelligence Policy Institute. 7 August 2024. Retrieved 2024-08-13.
- ^ a b Gardiner, Dustin (2024-08-28). "Newsom's scaled-back surrogate role". Politico.
- ^ a b "Poll: 7 in 10 Californians Support SB1047, Will Blame Governor Newsom for AI-Enabled Catastrophe if He Vetoes". Artificial Intelligence Policy Institute. 16 August 2024. Retrieved 2024-08-28.
- ^ a b "New YouGov National Poll Shows 80% Support for SB 1047 and AI Safety". Economic Security Project Action. Retrieved 2024-09-19.
- ^ a b "California Likely Voter Survey: Public Opinion Research Summary". David Binder Research.
- ^ Lee, Wendy (2024-06-19). "California lawmakers are trying to regulate AI before it's too late. Here's how". Los Angeles Times.
- ^ Piper, Kelsey (2024-07-19). "Inside the fight over California's new AI bill". Vox. Retrieved 2024-07-22.
- ^ "Senator Wiener's Landmark AI Bill Passes Assembly". Office of Senator Wiener. 29 August 2024.
External links
- Bill tracker CalMatters
- Supporting website Economic Security California Action, Center for AI Safety Action Fund, and Encode Justice
- Opposing website Andreessen Horowitz