DX Today | No-Hype Podcast & News About AI & DX

Federal Preemption and the White House AI Framework (Mar 29, 2026)


A single-topic deep dive on the White House’s March 2026 National Policy Framework for Artificial Intelligence — and the most consequential idea inside it: federal preemption of state AI laws.

Rick Spair and Sarah unpack what “preempting a patchwork” could mean for developers, liability for third-party misuse, innovation sandboxes, access to federal datasets, and the fault lines around copyright and free speech.

Key sources:
- White House (PDF): National Policy Framework for Artificial Intelligence — Legislative Recommendations (March 2026): https://www.whitehouse.gov/wp-content/uploads/2026/03/03.20.26-National-Policy-Framework-for-Artificial-Intelligence-Legislative-Recommendations.pdf
- White House release (Mar 20, 2026): https://www.whitehouse.gov/releases/2026/03/president-donald-j-trump-unveils-national-ai-legislative-framework/
- K&L Gates analysis (Mar 24, 2026): https://www.klgates.com/White-House-Releases-National-AI-Policy-Framework-3-24-2026
- Latham & Watkins analysis (Mar 26, 2026): https://www.lw.com/en/insights/trump-administration-takes-major-steps-toward-comprehensive-federal-ai-regulation
- White House EO (Dec 11, 2025): https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/
SPEAKER_00

Welcome to the DX Today Podcast, your weekly deep dive into the AI ecosystem. I'm Rick Spair, and joining me as always is Sarah. Hey Rick, good to be here. Today I want to do something a little different. Not a product launch, not a lab breakthrough. I want to talk about a policy move that could quietly reshape how AI gets built and deployed in the U.S.

SPEAKER_01

You mean the White House National Policy Framework for Artificial Intelligence, those legislative recommendations that dropped in March?

SPEAKER_00

Exactly. And the headline idea, at least the one that lit up my This Affects Everyone radar, is federal preemption. The White House is basically saying we can't have 50 different state AI regimes.

SPEAKER_01

Right. The framework literally argues for a minimally burdensome national standard, and it explicitly calls on Congress to preempt state AI laws that impose undue burdens instead of having 50 discordant ones.

SPEAKER_00

Let's treat that as the core question for the episode. If Washington tries to preempt state AI laws, what changes in practice for developers, users, and the public?

SPEAKER_01

And also, what does AI development mean legally? Because the framework draws a bright line. States shouldn't be allowed to regulate AI development at all.

SPEAKER_00

Okay, before we go deep, set the table. What exactly was released?

SPEAKER_01

Two main things. First, the White House published a legislative recommendations document titled National Policy Framework for Artificial Intelligence. Second, they posted an announcement framing it as a national AI legislative framework they want Congress to turn into a bill.

SPEAKER_00

So it's not a law yet.

SPEAKER_01

Not at all. It's guidance. But it's a pretty explicit agenda. Seven sections that range from kids' protections to free speech to workforce development. And then the big structural piece, federal preemption of state AI laws.

SPEAKER_00

Let's do the timeline quickly.

SPEAKER_01

The policy thread goes back at least to a December 2025 executive order that pushed for a minimally burdensome national policy framework, and even set up an AI litigation task force inside DOJ to challenge certain state AI laws.

SPEAKER_00

So March 2026 is like the next chess move. Here's what Congress should pass.

SPEAKER_01

Exactly.

SPEAKER_00

If I'm a listener and I hear federal preemption, my brain goes, wait, does that mean my state can't pass AI rules anymore?

SPEAKER_01

In broad strokes, that's what the White House is aiming for, but with carve-outs. The recommendations say preemption should still respect federalism. So states keep their traditional police powers to enforce generally applicable laws like child protection, fraud, consumer protection.

SPEAKER_00

So they're not trying to stop states from prosecuting fraud.

SPEAKER_01

Right. And they also point to state authority over zoning and land use for AI infrastructure and a state's own use of AI.

SPEAKER_00

But they do want to stop states from doing what exactly?

SPEAKER_01

Three things stand out. One, states should not regulate AI development itself. Two, states shouldn't impose special restrictions on activity just because AI is involved, if it would be lawful without AI. Three, states should not penalize AI developers for third-party unlawful conduct involving their models.

SPEAKER_00

That third one is spicy. That's basically saying if someone misuses a model, the developer shouldn't be the one held liable, at least not via state laws.

SPEAKER_01

Yep, it's a big stance in the ongoing who's responsible for harms debate.

SPEAKER_00

Okay, let's zoom into the preemption argument itself. What's the best argument for it?

SPEAKER_01

The strongest pro-preemption argument is operational. If you're shipping AI nationwide, a patchwork of laws creates compliance uncertainty and cost, especially for startups. It can also slow deployment because you end up building to the strictest state, or you geofence features.

SPEAKER_00

So it's like privacy law in the U.S. California moves, then everyone scrambles.

SPEAKER_01

Exactly. And some analyses explicitly make that analogy.

SPEAKER_00

What's the best argument against sweeping preemption?

SPEAKER_01

States have historically acted as policy laboratories. If you preempt too broadly, you might freeze experimentation on safety, transparency, or consumer protections. And it could weaken accountability if federal rules are slow or captured.

SPEAKER_00

I want to pause on a subtlety. The White House says states should not regulate AI development, but it says states can enforce laws of general applicability against AI developers and users. That sounds contradictory.

SPEAKER_01

It's not fully resolved. General applicability means the law isn't targeting AI as a category. For example, general fraud statutes apply whether the fraud was done with AI or a phone call. But a state law that says any model over X parameters must do Y is squarely regulating development.

SPEAKER_00

So it's a line between AI-specific regulation and regular law applied to AI.

SPEAKER_01

Right. But in real life, that line blurs. Imagine a consumer protection law that is written in general terms but clearly motivated by AI harms. Courts will have to decide whether it's effectively regulating AI development.

SPEAKER_00

So the fight becomes definitional. What counts as AI development, what counts as undue burden, what counts as general applicability.

SPEAKER_01

Exactly. The framework uses broad language. Implementing legislation would need more precision.

SPEAKER_00

Let's connect this to the developer experience. This is DX Today, after all. If this becomes law, what changes for people building models and apps?

SPEAKER_01

Three layers. First, compliance strategy. Instead of tracking 50 state AI statutes, you track one federal standard, plus sector regulators.

SPEAKER_00

That sounds simpler.

SPEAKER_01

It can be, but it depends on what the federal standard requires. The framework also recommends no new federal rulemaking body for AI. Instead, it pushes regulation through existing agencies with subject matter expertise, plus industry-led standards.

SPEAKER_00

So you could still have complexity, just centralized?

SPEAKER_01

Exactly. Second layer, liability posture. If developers are protected from state-level liability for third-party misuse, that might reduce the incentive to over-restrict capability at the model layer.

SPEAKER_00

And third?

SPEAKER_01

Third is ecosystem speed. If compliance uncertainty drops, investors may view shipping AI as less risky. That could accelerate rollout of agentic systems.

SPEAKER_00

Let's talk about that liability point with something concrete right now. What are the ways developers get pulled into lawsuits?

SPEAKER_01

It ranges from product liability theories to negligence, to claims under state consumer protection statutes, to sector laws. And you also see fights about whether platforms are responsible for user-generated content, especially if an AI system creates something harmful.

SPEAKER_00

Which brings us to a big political cousin, Section 230.

SPEAKER_01

Yep, notably, the White House framework itself doesn't go into Section 230 in detail, but it intersects with the broader debate. If you remove shields, liability expands. If you preempt state liability expansions, you might shrink it.

SPEAKER_00

And there's also this parallel legislative universe. The analyses compare the White House framework with Senator Blackburn's Trump America AI Act draft.

SPEAKER_01

Right. That draft, at least as described by legal analysis, goes much further on liability. It talks about a duty of care for AI chatbot developers, enforcement via the FTC, significant penalties in some areas, and even a repeal of Section 230.

SPEAKER_00

So the same week we get a White House approach that says don't over-regulate, preempt burdensome state rules, we also see a draft that could substantially increase liability and compliance requirements.

SPEAKER_01

Exactly. That's why this is a consequential moment. There's momentum toward federal involvement, but not agreement on what that involvement looks like.

SPEAKER_00

Let's unpack the White House's view of what regulation should focus on. The document has seven big buckets. Pick a few that matter most.

SPEAKER_01

First, kids and parents. It recommends things like commercially reasonable, privacy-protective age assurance. Parental attestation is given as an example. Also, platforms accessed by minors should implement features that reduce sexual exploitation and self-harm risk.

SPEAKER_00

That's going to be controversial just on the age assurance part.

SPEAKER_01

Definitely. The second bucket I'd call infrastructure and communities. There's this theme that data centers and energy should not raise residential electricity bills. There's a ratepayer protection pledge concept, and they call for streamlining permitting and letting data centers generate power on site.

SPEAKER_00

That's an AI policy framework that sounds half like an energy policy framework.

SPEAKER_01

AI is infrastructure now. Models don't run in some abstract cloud. They run in physical places, drawing enormous power.

SPEAKER_00

Third bucket, intellectual property.

SPEAKER_01

This one is fascinating because the White House takes a position. It says the administration believes training AI on copyrighted material does not violate copyright laws, but it acknowledges arguments to the contrary and supports courts deciding the issue.

SPEAKER_00

That's unusually explicit.

SPEAKER_01

It is. And it's a very different posture than the Blackburn draft described by legal analysis, which would declare that unauthorized copying for training is not fair use.

SPEAKER_00

So creators see one proposal as pro-training, the other as strongly protective.

SPEAKER_01

Exactly. The policy choice here affects the economics of model training, the availability of training corpora, and the future of licensing markets.

SPEAKER_00

Fourth bucket, free speech.

SPEAKER_01

The framework says Congress should prevent the federal government from coercing AI providers to ban, compel, or alter content based on partisan or ideological agendas.

SPEAKER_00

I can already hear the debate. Some people will say this is anti-censorship, others will say it's a way to pressure platforms into minimal moderation.

SPEAKER_01

And there's a technical twist. If the government can't coerce, what counts as coercion? Direct orders are obvious. But what about procurement leverage? What about investigations? Courts have wrestled with this for social media. AI adds new surface area.

SPEAKER_00

Also, the framework's language is about government coercion, not about private companies deciding to filter content.

SPEAKER_01

Right. It's specifically targeting government action.

SPEAKER_00

Fifth bucket, innovation and dominance.

SPEAKER_01

That's where you get the regulatory sandboxes idea. Let people test AI applications in controlled settings with lighter rules. Plus, making federal data sets accessible in AI-ready formats so industry and academia can train models.

SPEAKER_00

That's huge if it's real. Federal data is a gold mine.

SPEAKER_01

Yes, but it also triggers questions: privacy, security, and whether the government is effectively subsidizing certain players.

SPEAKER_00

Okay, now let's bring it back to preemption as the central theme. Suppose Congress passes a law preempting state AI laws that impose undue burdens. What happens the next day?

SPEAKER_01

The immediate effect is litigation. Companies will challenge state laws and argue they're preempted. States will argue their laws fit the carve-outs, consumer protection, child safety, fraud.

SPEAKER_00

So we get years of court fights?

SPEAKER_01

Likely. And because AI changes fast, the litigation could lag behind reality.

SPEAKER_00

Here's a devil's advocate thought. Maybe that's the point. If you slow down enforcement through jurisdictional fights, you keep the innovation engine running.

SPEAKER_01

That might be a cynical interpretation, but it's plausible.

SPEAKER_00

Another angle, the framework says states shouldn't penalize AI developers for third-party unlawful conduct. If a state tries to hold a model developer responsible for, say, scam scripts generated by a model, the developer could point to federal preemption.

SPEAKER_01

Yes, but there's still federal law and there's still common law tort claims. Preemption could narrow some state statutes but not eliminate all liability.

SPEAKER_00

So developers might feel safer but not immune.

SPEAKER_01

Exactly. And the implementation details matter a lot. The federal statute could include its own liability framework.

SPEAKER_00

Let's talk about the no new AI regulator idea. I've got mixed feelings. On one hand, new agencies can become bureaucratic sinkholes. On the other, existing agencies weren't built for model-level issues.

SPEAKER_01

That's the trade-off. Existing agencies understand their domains. FDA understands medical devices. FTC understands consumer deception. SEC understands disclosures. But frontier model issues cut across all of that.

SPEAKER_00

Like evaluation and red teaming standards.

SPEAKER_01

Exactly. If you rely on industry-led standards, you might get speed and expertise, but also risk that standards are weak or inconsistent.

SPEAKER_00

And that loops back to state laws. If federal rules are light, some people will want states to fill the gap.

SPEAKER_01

Which is why preemption is controversial. It doesn't just simplify compliance, it also decides who's allowed to regulate.

SPEAKER_00

Let's give listeners a mental model. Preemption is like setting the default layer of governance.

SPEAKER_01

Yes. In software terms, it's like moving from 50 forks to one main branch. But the question is who maintains the main branch, how often it updates, and whether it accepts patches.

SPEAKER_00

Okay, I want to ask about AI development and AI deployment. The framework is very firm about development being interstate, but many harms happen at deployment.

SPEAKER_01

That's true. Deployment is where you see discrimination, privacy breaches, unsafe recommendations, and misleading outputs impacting real people.

SPEAKER_00

So could states still regulate deployment via sector-specific rules?

SPEAKER_01

Possibly if the rules are framed as general consumer protection or as regulation of a specific industry within state authority. But again, courts will interpret the scope.

SPEAKER_00

There's another part of the White House announcement that's worth mentioning. It repeatedly says the federal government is uniquely positioned to set a consistent policy to win the global AI race.

SPEAKER_01

Yes, the competitiveness framing is explicit. Avoid patchwork laws that undermine innovation and leadership.

SPEAKER_00

That global framing, does it resonate with engineers?

SPEAKER_01

In practice, yes. Teams ask, can we ship this in the U.S. without hitting five different compliance walls? If the answer is no, deployment moves to countries with clearer rules. That's the competitiveness argument.

SPEAKER_00

But clear can also mean strict and consistent?

SPEAKER_01

Exactly. It's not necessarily light. The EU is strict in places, but consistent. The U.S. could choose strict consistency too.

SPEAKER_00

Let's do one more devil's advocate. What if federal preemption actually increases regulation?

SPEAKER_01

It can. If Congress passes a comprehensive law, you could end up with a single but heavy set of obligations. And because it's federal, enforcement resources and penalties might be bigger.

SPEAKER_00

So developers who cheer preemption because they hate state rules might be surprised.

SPEAKER_01

Yes, they might trade 50 smaller constraints for one larger constraint.

SPEAKER_00

We should also touch on the political dimension. This framework is explicitly tied to a specific administration's worldview: innovation, anti-censorship framing, energy build-out, and preemption.

SPEAKER_01

Right. Which means the long-term stability depends on elections and congressional compromise.

SPEAKER_00

Okay, bring this home for our audience. If I'm building an AI product in 2026, what should I watch for over the next few months?

SPEAKER_01

Watch for three things. One, whether Congress picks up the White House framework and what parts survive negotiation, especially preemption language. Two, competing legislative proposals like the Blackburn draft that may pull in a different direction. More liability, stronger copyright protections, different enforcement tools. Three, lawsuits and enforcement trends. Even before legislation, the December 2025 executive order set a posture of challenging state laws.

SPEAKER_00

And I'd add a fourth: internal product design choices. If you think liability might shift, you might adjust guardrails, logging, and safety tooling, not because you're forced, but because you want resilience.

SPEAKER_01

Exactly. Regardless of who wins the preemption fight, products that can demonstrate reasonable safeguards will have an advantage.

SPEAKER_00

Last question, Sarah, what's your personal take? Do you think federal preemption is good for the AI ecosystem?

SPEAKER_01

I think consistency is good, but consistency is not the same as preemption at all costs. If the federal standard is too vague or too friendly to incumbents, you lose the benefits. The ideal is one clear set of rules, updated frequently, with real accountability, and with carve-outs that let states respond to novel harms.

SPEAKER_00

I'm with you. The interesting part is that AI is evolving fast enough that governance has to be iterative, not one and done.

SPEAKER_01

Which is hard for legislation.

SPEAKER_00

That's why this March 2026 framework matters. It's a signal that Washington wants to be in the driver's seat.

SPEAKER_01

And developers should pay attention because these choices shape what you can build and what you'll be responsible for two years from now.

SPEAKER_00

That's all for today's episode of the DX Today podcast. Thanks for listening, and we'll see you next time.