California State Senator Scott Wiener on Wednesday introduced new amendments to his latest bill, SB 53, which would require the world's largest AI companies to publish safety and security protocols and issue reports when safety incidents occur.
If signed into law, California would become the first state to impose meaningful transparency requirements on leading AI developers, likely including OpenAI, Google, Anthropic, and xAI.
Senator Wiener's previous AI bill, SB 1047, included similar requirements for AI model developers to publish safety reports. However, Silicon Valley fought fiercely against that bill, and it was ultimately vetoed by Governor Gavin Newsom. California's governor then called on a group of AI leaders, including the leading Stanford researcher and World Labs co-founder Fei-Fei Li, to form a policy group and set goals for the state's AI safety efforts.
California's AI policy group recently published its final recommendations, citing the need for "requirements on industry to publish information about their systems" in order to establish a "robust and transparent evidence environment." Senator Wiener's office said in a press release that SB 53's amendments were heavily influenced by this report.
"The bill continues to be a work in progress, and I look forward to working with all stakeholders in the coming weeks to refine this proposal into the most scientific and fair law it can be," Senator Wiener said in the release.
SB 53 aims to strike a balance that Governor Newsom claimed SB 1047 failed to achieve: creating meaningful transparency requirements for the largest AI developers without stifling California's fast-growing AI industry.
"These are concerns that my organization and others have been talking about for a while," said Nathan Calvin, VP of State Affairs for the nonprofit AI safety group Encode, in an interview with TechCrunch. "Having companies explain to the public and government what measures they're taking to address these risks feels like a bare minimum, reasonable step to take."
The bill also creates whistleblower protections for employees of AI labs who believe their company's technology could contribute to the death or injury of more than 100 people, or more than $1 billion in damages.
In addition, the bill aims to create CalCompute, a public cloud computing cluster to support startups and researchers developing large-scale AI.
Unlike SB 1047, Senator Wiener's new bill does not hold AI model developers liable for the harms caused by their AI models. SB 53 is also designed not to place a burden on startups and researchers that fine-tune AI models from leading AI developers, or that use open source models.
With the new amendments, SB 53 now heads to the California State Assembly Committee on Privacy and Consumer Protection for approval. Should it pass there, the bill will also need to clear several other legislative bodies before reaching Governor Newsom's desk.
On the other side of the U.S., New York Governor Kathy Hochul is now considering a similar AI safety bill, the RAISE Act, which would also require large AI developers to publish safety and security reports.
The fate of state AI laws like the RAISE Act and SB 53 was briefly in jeopardy as federal lawmakers considered a 10-year moratorium on state AI regulation, an attempt to limit the "patchwork" of AI laws that companies would have to navigate. However, that proposal failed in a 99-1 Senate vote earlier in July.
"Ensuring AI is developed safely should not be controversial — it should be foundational," said Geoff Ralston, the former president of Y Combinator, in a statement to TechCrunch. "Congress should be leading, demanding transparency and accountability from the companies building frontier models."
To date, lawmakers have failed to get AI companies on board with state-mandated transparency requirements. Anthropic has broadly endorsed the need for increased transparency into AI companies, and even expressed modest optimism about the recommendations from California's AI policy group. But companies such as OpenAI, Google, and Meta have been more resistant to these efforts.
Leading AI developers typically publish safety reports for their AI models, but they have been less consistent in recent months. Google, for example, decided not to publish a safety report for its most advanced AI model ever released, Gemini 2.5 Pro, until months after it was made available. OpenAI also decided not to publish a safety report for its GPT-4.1 model; later, a third-party study came out suggesting that the model may be less aligned than previous AI models.
SB 53 represents a toned-down version of previous AI safety bills, but it could still force AI companies to publish more information than they do today. For now, they will be watching closely as Senator Wiener once again tests those boundaries.