A federal bill is stirring up debate by proposing a ten-year halt on state efforts to regulate artificial intelligence. Championed by Sen. Ted Cruz (R-TX) and backed by a group of Republicans, the proposal is set to be bundled into a major GOP legislative package before the July 4 deadline.
Supporters like OpenAI’s Sam Altman, Anduril’s Palmer Luckey, and a16z’s Marc Andreessen argue that a patchwork of state regulations could slow U.S. innovation at a moment when competing with China is more critical than ever. On the flip side, Democrats, some Republicans, and consumer-protection groups warn that blocking state rules could leave citizens exposed to AI harms, from deepfakes to biased decisions in employment and housing.
Nicknamed the “AI moratorium,” the proposal would be inserted into a budget reconciliation bill, often referred to as the “Big Beautiful Bill.” If passed, it would bar states from enforcing any law regulating AI models, AI systems, or automated decision processes, effectively nullifying existing measures such as California’s AB 2013, which requires transparency about the data used to train generative AI, and Tennessee’s ELVIS Act, which protects artists from unauthorised AI cloning of their voice and likeness.
Public Citizen has identified a range of state laws that might be affected, including those in Alabama and California that criminalise deceptive AI-generated election content. Pending legislation is also at risk: New York’s RAISE Act, which would require major AI labs to publish detailed safety disclosures, could be pre-empted before it ever takes effect.
To satisfy reconciliation rules, which require provisions to have a budgetary effect, Cruz linked states’ compliance with the moratorium to their eligibility for vital broadband funds. The condition was initially attached to the $42 billion Broadband Equity, Access, and Deployment (BEAD) programme; a later revision instead tied it to a new $500 million pot of BEAD money. Critics, including Sen. Maria Cantwell (D-WA), argue that this forces states to choose between expanding broadband access and protecting consumers from AI hazards.
The moratorium survived an early procedural review, but its future remains in limbo as Senate debate heats up. With amendments under rapid negotiation, including proposals to strip the moratorium from the bill entirely, attention is fixed on how lawmakers balance economic ambition against public safety. OpenAI’s Chris Lehane recently dismissed the patchwork of state laws as increasingly unworkable, emphasising the need for a unified national approach. While Lehane and Altman both agree that some regulation is necessary to manage AI’s long-term risks, they caution that disjointed state-by-state rules could make it harder to deliver consistent services.
Critics counter that the move isn’t really about fostering innovation but about dodging accountability. Nathan Calvin of the nonprofit Encode expressed frustration, noting that without state-level rules there is little leverage left to negotiate with powerful AI companies. Even Sen. Josh Hawley (R-MO) and Sen. Marsha Blackburn (R-TN) have criticised the proposal for undermining states’ rights to safeguard their residents. And a recent Pew Research survey found that many Americans favour stronger AI regulation and doubt that either government or industry will rein the technology in on its own.
Anyone who has struggled to keep pace with fast-moving tech policy will recognise the complexity and urgency of this debate. However the Senate resolves it, the outcome could have far-reaching implications for AI governance in America.