AI Regulation Pause Sparks Wide Opposition Across Political Spectrum

July 10, 2025

The U.S. Senate is abuzz over a provision tucked into the so‑called ‘Big Beautiful Bill’. The measure would bar states from regulating AI for 10 years—a move drawing criticism from unexpected quarters, including figures like Marjorie Taylor Greene and groups such as the NAACP.

States have long been the front‑line regulators for emerging tech, crafting laws to curb issues like deepfake pornography and unauthorised voice mimicry. Yet tech giants argue that such a patchwork approach stifles innovation, particularly as competition heats up with rivals like China.

The Trump administration once pushed for looser rules to spur AI growth, but a recent Pew Research Center survey shows many Americans worry more about AI’s risks than its benefits. Experts such as Larry Norden at the Brennan Center stress that preventing states from setting their own guidelines is truly unprecedented.

Introduced by Senate Commerce Committee Chair Ted Cruz, the proposal would tie state access to federal AI investments to a 10‑year pause on local regulations. Although it bypasses the need for a 60‑vote majority, the measure has faced pushback from several Republican senators and all Senate Democrats.

Critics like Marjorie Taylor Greene and Marsha Blackburn have spoken out. Greene said she had not been aware of the provision when she voted for the bill, while Blackburn emphasised that states must be able to protect their citizens from potential harms.

Opposition is strong among state lawmakers and attorneys general—260 legislators and 40 attorneys general argue that the moratorium could wipe out 149 existing state laws designed to safeguard the public. The Brennan Center’s analysis underscores how disruptive the change could be.

Meanwhile, tech firms including Google and Microsoft are on board, claiming a unified set of regulations would better position the U.S. in the global tech race. The Chamber of Commerce shares this view, warning that a fragmented set of rules could create serious complications.

Civil rights organisations such as the ACLU and NAACP have raised alarms that a decade‑long freeze on state regulations might weaken protections for civil rights and privacy. The National Center on Sexual Exploitation warns that AI could also be used to facilitate harms such as child exploitation.

As these debates unfold, the future of AI regulation in the U.S. remains uncertain, and the outcome will determine whether states keep their role as front‑line regulators of the technology.
