DX Today | No-Hype Podcast & News About AI & DX
The DX Today Podcast: Real Insights About AI and Digital Transformation
Tired of AI hype and transformation snake oil? This isn't another sales pitch disguised as expertise. Join a 30+ year tech veteran and Chief AI Officer who's built $1.2 billion in real solutions—and has the battle scars to prove it.
No vendor agenda. No sponsored content. Just unfiltered insights about what actually works in AI and digital transformation, what spectacularly fails, and why most "expert" advice misses the mark.
If you're looking for honest perspectives from someone who's been in the trenches since before "digital transformation" was a buzzword, you've found your show. Real problems, real solutions, real talk.
For executives, practitioners, and anyone who wants the truth about technology without the sales pitch.
Federal Preemption and the White House AI Framework (Mar 29, 2026)
SPEAKER_00: Welcome to the DX Today Podcast, your weekly deep dive into the AI ecosystem. I'm Rick Spare, and joining me as always is Sarah.
SPEAKER_01: Hey Rick, good to be here.
SPEAKER_00: Today I want to do something a little different. Not a product launch, not a lab breakthrough. I want to talk about a policy move that could quietly reshape how AI gets built and deployed in the U.S.
SPEAKER_01: You mean the White House National Policy Framework for Artificial Intelligence, the legislative recommendations that dropped in March.
SPEAKER_00: Exactly. And the headline idea, at least the one that lit up my "this affects everyone" radar, is federal preemption. The White House is basically saying we can't have 50 different state AI regimes.
SPEAKER_01: Right. The framework argues for a minimally burdensome national standard, and it explicitly calls on Congress to preempt state AI laws that impose undue burdens instead of letting 50 discordant regimes stand.
SPEAKER_00: Let's treat that as the core question for the episode: if Washington tries to preempt state AI laws, what changes in practice for developers, users, and the public?
SPEAKER_01: And also, what does "AI development" mean legally? Because the framework draws a bright line: states shouldn't be allowed to regulate AI development at all.
SPEAKER_00: Okay, before we go deep, set the table. What exactly was released?
SPEAKER_01: Two main things. First, the White House published a legislative recommendations document titled National Policy Framework for Artificial Intelligence. Second, it posted an announcement framing the document as a national AI legislative framework it wants Congress to turn into a bill.
SPEAKER_00: So it's not a law yet.
SPEAKER_01: Not at all. It's guidance, but it's a pretty explicit agenda: seven sections ranging from kids' protections to free speech to workforce development, plus the big structural piece, federal preemption of state AI laws.
SPEAKER_00: Let's do the timeline quickly.
SPEAKER_01: The policy thread goes back at least to a December 2025 executive order that pushed for a minimally burdensome national policy framework and even set up an AI litigation task force inside the DOJ to challenge certain state AI laws.
SPEAKER_00: So March 2026 is the next chess move: here's what Congress should pass.
SPEAKER_01: Exactly.
SPEAKER_00: If I'm a listener and I hear "federal preemption," my brain goes: wait, does that mean my state can't pass AI rules anymore?
SPEAKER_01: In broad strokes, that's what the White House is aiming for, but with carve-outs. The recommendations say preemption should still respect federalism, so states keep their traditional police powers to enforce generally applicable laws covering things like child protection, fraud, and consumer protection.
SPEAKER_00: So they're not trying to stop states from prosecuting fraud.
SPEAKER_01: Right. And they also point to state authority over zoning and land use for AI infrastructure, and over a state's own use of AI.
SPEAKER_00: But what exactly do they want to stop states from doing?
SPEAKER_01: Three things stand out. One, states should not regulate AI development itself. Two, states shouldn't impose special restrictions on activity just because AI is involved, if it would be lawful without AI. Three, states should not penalize AI developers for third-party unlawful conduct involving their models.
SPEAKER_00: That third one is spicy. It's basically saying that if someone misuses a model, the developer shouldn't be the one held liable, at least not via state laws.
SPEAKER_01: Yep. It's a big stance in the ongoing who's-responsible-for-harms debate.
SPEAKER_00: Okay, let's zoom into the preemption argument itself. What's the best argument for it?
SPEAKER_01: The strongest pro-preemption argument is operational. If you're shipping AI nationwide, a patchwork of laws creates compliance uncertainty and cost, especially for startups. It can also slow deployment, because you end up building to the strictest state or geofencing features.
SPEAKER_00: So it's like privacy law in the U.S. California moves, then everyone scrambles.
SPEAKER_01: Exactly. And some analyses explicitly make that analogy.
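As a concrete illustration of the geofencing point above, here's a minimal sketch of what per-state feature gating can look like in application code; the feature names, state codes, and blocklists are entirely hypothetical, not a real compliance map.

```python
# Hypothetical per-state feature gating: one way "building to the strictest
# state" or geofencing a feature shows up in application code.
FEATURE_RULES = {
    # Illustrative only: not actual state law.
    "ai_face_match": {"blocked_states": {"IL", "TX"}},
    "ai_summary": {"blocked_states": set()},
}

def feature_enabled(feature: str, state: str) -> bool:
    """Return whether a feature can ship to a user in the given state."""
    rules = FEATURE_RULES.get(feature)
    if rules is None:
        return False  # unknown features default to off
    return state not in rules["blocked_states"]
```

Every new state statute means another entry in a table like this, which is the compliance overhead the patchwork argument is pointing at.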
SPEAKER_00: What's the best argument against sweeping preemption?
SPEAKER_01: States have historically acted as policy laboratories. If you preempt too broadly, you might freeze experimentation on safety, transparency, or consumer protections. And it could weaken accountability if federal rules are slow or captured.
SPEAKER_00: I want to pause on a subtlety. The White House says states should not regulate AI development, but it also says states can enforce laws of general applicability against AI developers and users. That sounds contradictory.
SPEAKER_01: It's not fully resolved. "General applicability" means the law isn't targeting AI as a category. For example, general fraud statutes apply whether the fraud was done with AI or a phone call. But a state law that says any model over X parameters must do Y is squarely regulating development.
SPEAKER_00: So it's a line between AI-specific regulation and regular law applied to AI.
SPEAKER_01: Right. But in real life, that line blurs. Imagine a consumer protection law that is written in general terms but clearly motivated by AI harms. Courts will have to decide whether it's effectively regulating AI development.
SPEAKER_00: So the fight becomes definitional: what counts as AI development, what counts as an undue burden, what counts as general applicability.
SPEAKER_01: Exactly. The framework uses broad language. Implementing legislation would need more precision.
SPEAKER_00: Let's connect this to the developer experience; it is DX Today, after all. If this becomes law, what changes for people building models and apps?
SPEAKER_01: Three layers. First, compliance strategy. Instead of tracking 50 state AI statutes, you track one federal standard, plus sector regulators.
SPEAKER_00: That sounds simpler.
SPEAKER_01: It can be, but it depends on what the federal standard requires. The framework also recommends no new federal rulemaking body for AI. Instead, it pushes regulation through existing agencies with subject-matter expertise, plus industry-led standards.
SPEAKER_00: So you could still have complexity, just centralized?
SPEAKER_01: Exactly. Second layer, liability posture. If developers are protected from state-level liability for third-party misuse, that might reduce the incentive to over-restrict capability at the model layer.
SPEAKER_00: And third?
SPEAKER_01: Third is ecosystem speed. If compliance uncertainty drops, investors may view shipping AI as less risky. That could accelerate the rollout of agentic systems.
SPEAKER_00: Let's make that liability point concrete. Right now, what are the ways developers get pulled into lawsuits?
SPEAKER_01: It ranges from product liability theories to negligence, to claims under state consumer protection statutes, to sector laws. And you also see fights about whether platforms are responsible for user-generated content, especially if an AI system creates something harmful.
SPEAKER_00: Which brings us to a big political cousin, Section 230.
SPEAKER_01: Yep. Notably, the White House framework itself doesn't go into Section 230 in detail, but it intersects with the broader debate. If you remove shields, liability expands. If you preempt state liability expansions, you might shrink it.
SPEAKER_00: And there's also this parallel legislative universe. Analyses compare the White House framework with Senator Blackburn's Trump America AI Act draft.
SPEAKER_01: Right. That draft, at least as described in legal analysis, goes much further on liability. It talks about a duty of care for AI chatbot developers, enforcement via the FTC, significant penalties in some areas, and even a repeal of Section 230.
SPEAKER_00: So in the same week, we get a White House approach that says don't over-regulate and preempt burdensome state rules, and we also see a draft that could substantially increase liability and compliance requirements.
SPEAKER_01: Exactly. That's why this is a consequential moment. There's momentum toward federal involvement, but no agreement on what that involvement looks like.
SPEAKER_00: Let's unpack the White House's view of what regulation should focus on. The document has seven big buckets. Pick a few that matter most.
SPEAKER_01: First, kids and parents. It recommends things like commercially reasonable, privacy-protective age assurance; parental attestation is given as an example. It also says platforms accessed by minors should implement features that reduce sexual exploitation and self-harm risks.
SPEAKER_00: That's going to be controversial just on the age assurance part.
SPEAKER_01: Definitely. The second bucket I'd call infrastructure and communities. There's a theme that data centers and energy should not raise residential electricity bills. There's a ratepayer protection pledge concept, and they call for streamlining permitting and letting data centers generate power on site.
SPEAKER_00: That's an AI policy framework that sounds half like an energy policy framework.
SPEAKER_01: AI is infrastructure now. Models don't run in some abstract cloud. They run in physical places, drawing enormous power.
SPEAKER_00: Third bucket, intellectual property.
SPEAKER_01: This one is fascinating because the White House takes a position. It says the administration believes training AI on copyrighted material does not violate copyright laws, but it acknowledges arguments to the contrary and supports courts deciding the issue.
SPEAKER_00: That's unusually explicit.
SPEAKER_01: It is. And it's a very different posture than the Blackburn draft described by legal analysis, which would declare that unauthorized copying for training is not fair use.
SPEAKER_00: So creators see one proposal as pro-training, the other as strongly protective.
SPEAKER_01: Exactly. The policy choice here affects the economics of model training, the availability of training corpora, and the future of licensing markets.
SPEAKER_00: Fourth bucket, free speech.
SPEAKER_01: The framework says Congress should prevent the federal government from coercing AI providers to ban, compel, or alter content based on partisan or ideological agendas.
SPEAKER_00: I can already hear the debate. Some people will say this is anti-censorship; others will say it's a way to pressure platforms into minimal moderation.
SPEAKER_01: And there's a technical twist. If the government can't coerce, what counts as coercion? Direct orders are obvious. But what about procurement leverage? What about investigations? Courts have wrestled with this for social media. AI adds new surface area.
SPEAKER_00: Also, the framework's language is about government coercion, not about private companies deciding to filter content.
SPEAKER_01: Right. It's specifically targeting government action.
SPEAKER_00: Fifth bucket, innovation and dominance.
SPEAKER_01: That's where you get the regulatory sandboxes idea: let people test AI applications in controlled settings with lighter rules. Plus, making federal datasets accessible in AI-ready formats so industry and academia can train models.
SPEAKER_00: That's huge if it's real. Federal data is a gold mine.
SPEAKER_01: Yes, but it also triggers questions: privacy, security, and whether the government is effectively subsidizing certain players.
SPEAKER_00: Okay, now let's bring it back to preemption as the central theme. Suppose Congress passes a law preempting state AI laws that impose undue burdens. What happens the next day?
SPEAKER_01: The immediate effect is litigation. Companies will challenge state laws and argue they're preempted. States will argue their laws fit the carve-outs: consumer protection, child safety, fraud.
SPEAKER_00: So we get years of court fights?
SPEAKER_01: Likely. And because AI changes fast, the litigation could lag behind reality.
SPEAKER_00: Here's a devil's advocate thought. Maybe that's the point. If you slow down enforcement through jurisdictional fights, you keep the innovation engine running.
SPEAKER_01: That might be a cynical interpretation, but it's plausible.
SPEAKER_00: Another angle: the framework says states shouldn't penalize AI developers for third-party unlawful conduct. If a state tries to hold a model developer responsible for, say, scam scripts generated by a model, the developer could point to federal preemption.
SPEAKER_01: Yes, but there's still federal law, and there are still common law tort claims. Preemption could narrow some state statutes but not eliminate all liability.
SPEAKER_00: So developers might feel safer but not immune.
SPEAKER_01: Exactly. And the implementation details matter a lot. The federal statute could include its own liability framework.
SPEAKER_00: Let's talk about the "no new AI regulator" idea. I've got mixed feelings. On one hand, new agencies can become bureaucratic sinkholes. On the other, existing agencies weren't built for model-level issues.
SPEAKER_01: That's the trade-off. Existing agencies understand their domains. The FDA understands medical devices. The FTC understands consumer deception. The SEC understands disclosures. But frontier model issues cut across all of that.
SPEAKER_00: Like evaluation and red-teaming standards.
SPEAKER_01: Exactly. If you rely on industry-led standards, you might get speed and expertise, but you also risk standards that are weak or inconsistent.
SPEAKER_00: And that loops back to state laws. If federal rules are light, some people will want states to fill the gap.
SPEAKER_01: Which is why preemption is controversial. It doesn't just simplify compliance; it also decides who's allowed to regulate.
SPEAKER_00: Let's give listeners a mental model. Preemption is like setting the default layer of governance.
SPEAKER_01: Yes. In software terms, it's like moving from 50 forks to one main branch. But the question is who maintains the main branch, how often it updates, and whether it accepts patches.
SPEAKER_00: Okay, I want to ask about AI development versus AI deployment. The framework is very firm about development being interstate, but many harms happen at deployment.
SPEAKER_01: That's true. Deployment is where you see discrimination, privacy breaches, unsafe recommendations, and misleading outputs impacting real people.
SPEAKER_00: So could states still regulate deployment via sector-specific rules?
SPEAKER_01: Possibly, if the rules are framed as general consumer protection or as regulation of a specific industry within state authority. But again, courts will interpret the scope.
SPEAKER_00: There's another part of the White House announcement that's worth mentioning. It repeatedly says the federal government is uniquely positioned to set a consistent policy to win the global AI race.
SPEAKER_01: Yes, the competitiveness framing is explicit: avoid patchwork laws that undermine innovation and leadership.
SPEAKER_00: That global framing, does it resonate with engineers?
SPEAKER_01: In practice, yes. Teams ask: can we ship this in the U.S. without hitting five different compliance walls? If the answer is no, deployment moves to countries with clearer rules. That's the competitiveness argument.
SPEAKER_00: But clear can also mean strict and consistent?
SPEAKER_01: Exactly. It's not necessarily light. The EU is strict in places, but consistent. The U.S. could choose strict consistency too.
SPEAKER_00: Let's do one more devil's advocate. What if federal preemption actually increases regulation?
SPEAKER_01: It can. If Congress passes a comprehensive law, you could end up with a single but heavy set of obligations. And because it's federal, enforcement resources and penalties might be bigger.
SPEAKER_00: So developers who cheer preemption because they hate state rules might be surprised.
SPEAKER_01: Yes, they might trade 50 smaller constraints for one larger constraint.
SPEAKER_00: We should also touch on the political dimension. This framework is explicitly tied to a specific administration's worldview: innovation, anti-censorship framing, energy build-out, and preemption.
SPEAKER_01: Right. Which means its long-term stability depends on elections and congressional compromise.
SPEAKER_00: Okay, bring this home for our audience. If I'm building an AI product in 2026, what should I watch for over the next few months?
SPEAKER_01: Watch for three things. One, whether Congress picks up the White House framework and what parts survive negotiation, especially the preemption language. Two, competing legislative proposals like the Blackburn draft that may pull in a different direction: more liability, stronger copyright protections, different enforcement tools. Three, lawsuits and enforcement trends. Even before legislation, the December 2025 executive order set a posture of challenging state laws.
SPEAKER_00: And I'd add a fourth: internal product design choices. If you think liability might shift, you might adjust guardrails, logging, and safety tooling, not because you're forced to, but because you want resilience.
SPEAKER_01: Exactly. Regardless of who wins the preemption fight, products that can demonstrate reasonable safeguards will have an advantage.
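To make "demonstrable safeguards" concrete, here's a minimal sketch of a request handler that runs a guardrail check and writes a structured audit log; the blocklist, function names, and log fields are all hypothetical stand-ins for a real safety classifier and audit pipeline.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical guardrail: a toy blocklist standing in for a real safety
# classifier. Everything here is illustrative, not a production design.
BLOCKED_TOPICS = {"scam script", "malware"}

logger = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

def guardrail_check(prompt: str) -> bool:
    """Return True if the prompt passes the (toy) safety check."""
    lowered = prompt.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def handle_request(user_id: str, prompt: str) -> dict:
    """Run the guardrail, log a structured audit record, return the decision."""
    allowed = guardrail_check(prompt)
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "prompt_chars": len(prompt),  # log size, not content, for privacy
        "allowed": allowed,
    }
    logger.info(json.dumps(record))  # audit trail: evidence of safeguards
    return record
```

The point isn't this particular check; it's that a timestamped record of what was screened and why is the kind of artifact that lets a product demonstrate reasonable safeguards after the fact.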
SPEAKER_00: Last question, Sarah. What's your personal take? Do you think federal preemption is good for the AI ecosystem?
SPEAKER_01: I think consistency is good, but consistency is not the same as preemption at all costs. If the federal standard is too vague or too friendly to incumbents, you lose the benefits. The ideal is one clear set of rules, updated frequently, with real accountability, and with carve-outs that let states respond to novel harms.
SPEAKER_00: I'm with you. The interesting part is that AI is evolving fast enough that governance has to be iterative, not one-and-done.
SPEAKER_01: Which is hard for legislation.
SPEAKER_00: That's why this March 2026 framework matters. It's a signal that Washington wants to be in the driver's seat.
SPEAKER_01: And developers should pay attention, because these choices shape what you can build and what you'll be responsible for two years from now.
SPEAKER_00: That's all for today's episode of the DX Today podcast. Thanks for listening, and we'll see you next time.