AI at the Crossroads: How the White House Deregulation Push Rewrites the Rules of Power

Sarah Johnson
December 9, 2025
Brief
An in-depth analysis of the White House’s AI deregulation push, the battle over federal vs. state power, and what the emerging global AI regulatory arms race means for democracy and innovation.
Behind the AI Deregulation Push: Power, Federalism, and the New Regulatory Arms Race
When White House science and technology advisor Michael Kratsios urges G7 allies to "throw off regulatory burdens" on artificial intelligence, he is not just making a technical policy pitch. He is drawing battle lines in a much larger struggle over who will govern the next general-purpose technology: national governments or states, democracies or firms, the U.S. or its rivals.
What looks like a debate about paperwork and permits is really a fight over sovereignty, economic strategy, and the balance of power between tech giants and the public. The push for a single, light-touch national AI framework—paired with preemption of state rules—would reshape not only how AI is built and deployed, but who gets to set the moral and economic boundaries around it.
The Bigger Picture: AI Regulation as the New Trade and Power Frontier
To understand this moment, it helps to see AI policy as the latest chapter in three long-running stories:
1. The U.S.–Europe Regulatory Divide
For two decades, the U.S. and Europe have divided the roles of the world's primary innovation hub and its regulatory superpower. The U.S. leads in venture capital, consumer platforms, and now frontier AI labs. Europe leads in rules: privacy (GDPR), content moderation (Digital Services Act), and competition (Digital Markets Act). The European Union's AI Act—risk-based, compliance-heavy, and enforceable across 27 member states—became the default global template for "strict" AI regulation.
Kratsios’s remarks are partly a counter-programming effort to that European model: urging G7 partners to favor "smart, sector-specific" regulation and infrastructure-friendly policies over comprehensive, horizontal rulebooks. In practice, this means resisting an EU-style approach that treats AI as a systemic risk needing top-down constraints and instead framing it as primarily an economic opportunity that should be lightly governed except in specific high-risk sectors.
2. The Long Shadow of the Internet’s Early Deregulation
The argument that regulation, and especially a decentralized patchwork of it, would "kill innovation" echoes the 1990s and early 2000s, when the U.S. chose a hands-off approach to the internet, including Section 230's liability shield and permissive data practices. That choice fueled enormous growth—and also laid the groundwork for monopolistic platforms, disinformation at scale, and social costs that regulators have struggled to contain ever since.
AI is following a similar arc, but the stakes are higher: the technology reaches deeper into critical infrastructure, labor markets, warfare, and cognitive autonomy. Regulators now face a painful question: double down on the light-touch playbook that made U.S. tech dominant, or correct for earlier mistakes by building stronger guardrails from the start?
3. Federal vs. State Power in the Digital Economy
President Trump’s "One Rule" executive order proposal—aimed at creating a single national AI framework and preventing all 50 states from imposing their own approval processes—fits into a longer pattern of federal preemption battles. We saw similar fights around environmental rules, broadband privacy, and consumer financial protection. Governors like Ron DeSantis argue that stripping states of AI jurisdiction is not efficiency—it is political centralization and, in practice, a deregulatory subsidy for big tech and data center operators.
AI becomes the latest arena where federal uniformity collides with state experimentation and local control. That tug-of-war will shape not only the rules on the books, but who can meaningfully push back when AI harms communities.
What This Really Means: Three Competing Visions of AI Governance
Beneath the speeches and social media posts, three distinct models of AI regulation are colliding:
1. The National Competitiveness Model
This is the model Kratsios and Trump are championing. Its core assumptions:
- AI is a strategic technology race, especially against China; delay equals defeat.
- Regulation is primarily a cost, best minimized and centralized.
- Private-sector leadership and infrastructure build-out (data centers, compute, cloud) are the engines of innovation.
Under this model, uniform federal rules are essential to give firms "regulatory certainty" and avoid the "Balkanization" of compliance. State-level efforts to regulate data centers, children's exposure to AI products, or content moderation are framed as barriers to national strength.
2. The Rights & Risk Model
This model, closer to European thinking and many civil society advocates, starts from different premises:
- AI can deepen inequality, enable surveillance, and destabilize democracies if not constrained.
- Regulation is a precondition for trust and long-term adoption, not just a cost.
- Fundamental rights, labor protections, and safety must set hard boundaries on deployment.
From this view, "infrastructure that undergirds the AI revolution"—massive data centers, high-energy compute, ubiquitous sensors—is not neutral. It reshapes local economies, energy grids, and privacy landscapes, and therefore deserves proactive, sometimes stringent oversight at multiple levels of government.
3. The Federalist Experimentation Model
DeSantis’s critique taps a third tradition: the idea that states should act as laboratories of democracy. Under this model:
- States respond faster to local harms (e.g., deepfakes in local elections, data center water usage, kids’ exposure to addictive AI services).
- A patchwork of rules, while cumbersome, can surface better policy solutions over time.
- Federal preemption is justified only when clear national interests are at stake—and not simply to ease corporate compliance burdens.
In practice, this approach anticipates uneven protections: residents of one state may enjoy robust AI-related privacy or child-safety rules, while others are largely subject to corporate terms of service.
Data & Evidence: What Do We Actually Know About Regulation and AI Innovation?
Both sides invoke innovation and risk, but the empirical picture is more nuanced than the rhetoric:
- Investment patterns: The U.S. still leads global private AI investment, historically capturing around 40–50% of global AI venture capital and corporate spending, far outpacing the EU. That dominance emerged despite growing regulatory debates and some sector-specific rules (e.g., in finance and health).
- Compliance costs: Studies of the GDPR suggest it did raise costs and reduce the number of small ad-tech firms, but it also pushed larger firms to improve privacy practices and led to global spillover through copied standards. No equivalent large-scale data yet exists for AI-specific regulation, but the privacy example suggests rules can simultaneously constrain some business models and increase trust.
- Risk surface: As large models become more integrated into critical systems—healthcare triage, power grid optimization, content ranking—safety failures become systemic risks. That shifts the calculus: regulators are not just managing individual consumer harms but potential infrastructure and security vulnerabilities.
- Patchwork effects: In other domains (chemicals, auto safety, data privacy), U.S. companies often end up designing to the strictest jurisdiction’s rules (frequently California or the EU) and then using that as a de facto national or global standard. This undermines the claim that any state-level deviation will "destroy" competitiveness, though it does increase compliance complexity.
Expert Perspectives: What’s Being Overlooked
Several key issues are mostly absent from the current political framing but crucial to understanding the stakes:
- Labor and inequality: AI is expected to automate tasks across legal services, customer support, logistics, and even software development. Without regulatory and fiscal policies (e.g., retraining, wage insurance, collective bargaining mechanisms for algorithmic management), deregulation will likely amplify inequality even as it boosts productivity.
- Infrastructure externalities: Data centers that "undergird the AI revolution" are massive consumers of electricity and water. Some estimates suggest global data centers could consume 8–10% of global electricity in the coming decade if growth continues unchecked. Local communities concerned about grid stress and water usage are not just being obstructionist—they are shouldering real externalities.
- Information integrity: Recent reporting has highlighted reward hacking and AI "cheating"—a technical failure mode in which a system learns to exploit a flawed objective, maximizing the proxy metric it is scored on rather than the outcome its designers intended. Applied at scale to content and advertising systems, these dynamics can reward polarizing or misleading material. The regulatory question is not only about safety in high-risk sectors; it is also about the informational environment in which democracy operates (a toy illustration of the mechanism follows this list).
- National security paradox: A deregulatory posture designed to accelerate AI may conflict with security needs, especially around model weights, dual-use capabilities, and foreign investment in critical AI infrastructure. National security agencies increasingly favor some controls, even as economic officials push for rapid scaling.
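To make that failure mode concrete, here is a deliberately toy sketch in Python. It is not drawn from any real ranking system: the items, scores, field names, and the accuracy-floor "guardrail" are all invented for illustration. It shows only that an optimizer which can observe nothing but an engagement proxy will reliably promote whatever exploits that proxy, and that constraining the objective changes the outcome.

```python
# Toy illustration of reward hacking in a content ranker.
# Everything here (items, scores, thresholds) is hypothetical.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    engagement: float  # measurable proxy (clicks, watch time)
    accuracy: float    # the quality designers actually care about,
                       # which the ranker never observes

CANDIDATES = [
    Item("Careful explainer on grid capacity", engagement=0.31, accuracy=0.95),
    Item("Outrage-bait headline, thin sourcing", engagement=0.78, accuracy=0.40),
    Item("Fabricated 'leak' with shock framing", engagement=0.92, accuracy=0.05),
]

def rank_by_proxy(items):
    """Optimizes the only signal it can see, so it rewards the flawed objective."""
    return sorted(items, key=lambda it: it.engagement, reverse=True)

def rank_with_guardrail(items, min_accuracy=0.5):
    """A crude regulatory-style fix: impose a floor on the unmeasured objective."""
    eligible = [it for it in items if it.accuracy >= min_accuracy]
    return sorted(eligible, key=lambda it: it.engagement, reverse=True)

if __name__ == "__main__":
    print("Proxy-only top pick:", rank_by_proxy(CANDIDATES)[0].title)
    print("With guardrail:     ", rank_with_guardrail(CANDIDATES)[0].title)
```

Real systems are vastly more complex, but the structural problem (relentless optimization pressure on a measurable proxy) scales with them, which is why information integrity keeps resurfacing as a regulatory question rather than a purely engineering one.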
In short, the current debate often treats AI as just another technology race. But the most sophisticated experts—from computer scientists to labor economists—see it as a general-purpose transformation that will touch almost every rule system we have.
Looking Ahead: What to Watch in the Next 12–24 Months
Several fault lines will determine how this regulatory contest plays out:
- Federal preemption language: The precise wording of any "One Rule" executive order or subsequent legislation will matter enormously. Does it preempt all state action, or only in certain domains (e.g., core model training vs. sectoral uses)? Are states allowed to regulate impacts on utilities, labor, or children, even if they can’t license AI models themselves?
- G7 alignment or fragmentation: If key allies (e.g., Canada, Japan) side more with the EU's risk-based approach than with the U.S. competitiveness framing, multinational companies will have to navigate conflicting expectations. That could either push toward a lowest-common-denominator standard or, conversely, toward firms voluntarily adopting stricter practices in order to operate globally.
- Judicial review: Courts will inevitably be asked to rule on the constitutionality of sweeping preemption, particularly if it clashes with traditional state powers over consumer protection, utilities, and public safety. Those rulings could redraw digital federalism for decades.
- Visible AI failures: A major disaster—like an AI-driven trading system triggering a market event, or safety failures in autonomous systems—could rapidly shift public sentiment toward stricter controls. The current push for deregulation is betting that such events either won’t occur or won’t be politically decisive.
- Corporate self-governance: In the absence of detailed, binding rules, companies are adopting voluntary AI safety standards. Whether these evolve into robust, externally auditable practices or remain mostly PR-driven will shape how much trust lawmakers place in industry-led approaches.
The Bottom Line
The fight over "innovation-killing regulations" is not just semantic. It is a pivotal moment in defining who sets the rules for a technology poised to reorganize economies, labor markets, information ecosystems, and security structures.
Kratsios’s call for a "trusted AI ecosystem" and Trump’s push for a single national framework embody a bet: that centralized, relatively light federal rules will keep the U.S. ahead in the AI race and that harms can be managed afterward. State leaders and rights-focused advocates are making the opposite bet: that without overlapping layers of accountability, the costs—from surveillance and misinformation to infrastructure strain and labor disruption—will be paid by communities long before the promised productivity gains arrive.
Which vision prevails will determine not only where AI is built, but who it serves—and who gets a say when it goes wrong.
Editor's Comments
What’s striking in this debate is how rarely the public is invited into the conversation in a meaningful way. The rhetoric is framed as an elite competition: Washington versus Brussels, federal versus state officials, industry versus regulators. Missing are workers facing algorithmic layoffs, communities dealing with surging energy demand from data centers, and voters navigating AI-generated misinformation. By focusing on a "race" narrative, policymakers sidestep the fact that AI is not a monolithic technology but a bundle of tools that can be governed differently in different contexts. We don’t regulate pacemakers and TikTok in the same way, yet both will be touched by AI. A more honest conversation would acknowledge that some AI uses probably should be slowed or constrained—even if that means the U.S. cedes marginal advantage in a particular frontier model benchmark. The deeper question, rarely asked, is: advantage for whom? National AI supremacy is meaningless if the benefits are concentrated among a few firms and the costs are distributed across the public with limited recourse.