California Governor Gavin Newsom has vetoed a major AI safety bill aimed at regulating powerful AI models before deployment. The bill, introduced by State Sen. Scott Wiener, sought to impose safety vetting requirements on large AI systems to prevent potential harms, such as their use in creating bioweapons, but it faced heavy opposition from tech giants such as OpenAI and Google, as well as from venture capital firms. Newsom argued that the legislation was too broad, applying “stringent standards” to both basic and complex AI systems without considering their specific applications or risk levels.
The veto is seen as a win for Silicon Valley, which has long warned that overly restrictive regulations could hinder the state’s competitive edge in AI development. Newsom, who has close ties to the tech industry, emphasized that a more nuanced approach is needed to regulate AI responsibly and pledged to work with experts such as Stanford professor Fei-Fei Li to craft future legislation. While the vetoed bill could have effectively set a national standard for AI governance, Newsom instead signed a narrower measure directing California’s emergency response agency to study AI risks.
Proponents argued that the bill represented California’s best opportunity to lead on responsible tech regulation, particularly given the rapid development and widespread influence of AI. “This veto is a missed opportunity for California to once again lead on innovative tech regulation,” said Sen. Wiener. Critics of the bill, including influential California Democrats such as former House Speaker Nancy Pelosi, Rep. Ro Khanna, and San Francisco Mayor London Breed, believed it could stifle innovation and harm the state’s economy.
The bill’s defeat reflects deep divisions within Silicon Valley over AI regulation. Some leading AI researchers, along with Elon Musk, supported the measure as a way to mitigate risks, while others, including small startups, saw its requirements as overly burdensome.