White House Releases National AI Policy Framework
Overview
On 20 March 2026, the White House released its National Policy Framework for Artificial Intelligence (the Framework), together with companion legislative recommendations, marking the Administration’s next major step following President Donald Trump’s December 2025 executive order limiting state authority to regulate artificial intelligence (AI). Taken together, the Framework and legislative recommendations are intended to translate the December Executive Order’s calls for a unified, minimally burdensome national AI policy into legislative guidance for Congress. With the Framework, the US Government continues to move away from a highly prescriptive approach to regulating AI toward a more balanced, innovation-friendly approach, setting up a clear alternative to the approaches of other major players, such as the European Union and China.
As with the December Executive Order, the Administration's central premise is that US leadership in AI depends on uniform national rules. The White House cautions that a fragmented patchwork of state AI laws would undermine innovation, increase compliance costs for companies operating across state lines, and weaken the United States' ability to compete in the global AI race. This unified approach reflects an appreciation of the costs of fragmented regulation in technology areas such as data privacy, where a state-by-state approach currently prevails. To that end, the Framework and legislative recommendations outline seven thematic policy areas the Administration believes should anchor future federal AI legislation—balancing innovation, competitiveness, and national security with targeted safeguards for children, creators, consumers, and communities. The Administration has indicated that it intends to work with Congress in the coming months to advance legislation consistent with these principles.
Protecting Children and Empowering Parents
The Framework places significant emphasis on safeguarding minors from AI-related risks while empowering parents and guardians. The Administration urges Congress to require commercially reasonable, privacy-protective age-assurance mechanisms (such as parental attestation) for AI services likely to be accessed by minors; to mandate features that reduce risks of sexual exploitation and self-harm; and to affirm that existing child-privacy laws, including the Children's Online Privacy Protection Act, apply to AI systems. At the same time, the Framework cautions against ambiguous content standards or open-ended liability regimes that could create compliance uncertainty or constitutional concerns, as well as against legislation that would preempt states from enforcing their own generally applicable child-protection laws.
Safeguarding and Strengthening American Communities
Consistent with the Administration’s Ratepayer Protection Pledge, the Framework seeks to ensure that AI-driven growth benefits communities while mitigating downstream harms. The legislative recommendations call for protections to prevent residential ratepayers from bearing increased electricity costs associated with data center expansion, streamlined federal permitting to support AI-related infrastructure (including on-site and behind-the-meter generation), enhanced tools to combat AI-enabled scams and impersonation fraud, attention to national security risks, and measures to support small businesses' adoption of AI technologies.
Intellectual Property and Creators
The Framework emphasizes protecting creators’ works and identities while preserving innovation. The legislative recommendations highlight potential voluntary licensing or collective-rights mechanisms, as well as protections for digital replicas such as voice and likeness that track the bipartisan Nurture Originals, Foster Art, and Keep Entertainment Safe Act (NO FAKES Act) (S.1367). They also reflect a deliberate decision to defer to the courts on unsettled questions of copyright law—including whether and when AI training constitutes fair use—rather than urging Congress to legislate a definitive answer at this stage.
Preventing Censorship and Protecting Free Speech
Echoing concerns raised in the December Executive Order regarding compelled speech and government overreach, the Framework stresses that AI should not be used by government actors to suppress lawful expression. The Administration calls on Congress to prevent government coercion of platforms and AI providers and to provide redress where censorship-related harms stem from government action, while avoiding regulation of private content-moderation decisions.
Enabling Innovation and American AI Dominance
To advance US leadership in AI, the Framework and legislative recommendations favor innovation-enabling guardrails, including regulatory sandboxes, improved access to federal datasets, reliance on existing sector-specific regulators rather than creating a stand-alone AI agency, and support for industry-led standards and best practices.
Workforce and Education
The Framework calls for integrating AI training into existing education and workforce programs, studying AI-driven labor-market impacts, and supporting land-grant institutions and other educational entities in developing AI-related skills, rather than creating new, stand-alone federal workforce programs.
Federal Preemption
The Administration describes federal preemption as a central pillar of any effective national AI policy, arguing that a unified federal framework is necessary to support innovation and sustain US competitiveness in the global AI race. The Framework cautions that a fragmented landscape of AI laws would create compliance uncertainty, raise costs for companies operating across state lines, and undermine national economic and security objectives.
To avoid those outcomes, the Framework and legislative recommendations call on Congress to establish a minimally burdensome national standard by preempting state AI laws that impose inconsistent or undue burdens, while respecting core principles of federalism. The Framework draws a distinction between state laws that interfere with inherently interstate AI development and areas where states retain traditional authority, such as enforcing generally applicable consumer- and child-protection laws, regulating zoning and land use for AI infrastructure, and governing a state’s own use of AI through procurement or public services. The White House’s efforts could also materially impact state and local regulatory efforts around AI employment and workforce development. While the Framework does not explicitly propose preemption of AI workforce regulations, efforts to preempt establishment of algorithmic bias standards could have broader workforce implications. Our team will continue monitoring the impact of any preemptive action on employment law.
At the same time, the Framework emphasizes that states should not regulate AI development itself, impose heightened restrictions on otherwise lawful activity simply because AI tools are involved, or penalize developers under divergent state regimes for downstream or third-party uses of AI systems. Taken together, these principles are intended to promote national uniformity while preserving core state functions.
Congressional Proposals
On 18 March, Sen. Marsha Blackburn (R-TN) released an updated discussion draft of her TRUMP AMERICA AI Act, underscoring her effort to position herself as the lead architect of a comprehensive Senate-side AI legislative package. The updated draft closely tracks the Framework, particularly in its emphasis on national uniformity and federal preemption of state AI laws that regulate inherently interstate AI development or impose conflicting compliance obligations.
At the same time, Sen. Blackburn’s draft incorporates elements of two bipartisan measures—the Kids Online Safety Act (S.1748) and the NO FAKES Act (S.1367/H.R. 2794)—and includes provisions addressing online harms to minors; protections against unauthorized use of name, image, and likeness; third-party audits related to discrimination and bias; and energy- and infrastructure-related impacts of AI deployment.
The Senate dynamics are further shaped by questions of committee jurisdiction and leadership. Senate Commerce Chairman Ted Cruz (R-TX) has been closely associated with prior efforts to advance state-law preemption in the AI context, and it remains unclear whether he will support the Blackburn approach, particularly given past disagreements over preemption strategy. In parallel, lawmakers in the House, including Rep. Jay Obernolte (R-CA), have been developing their own federal AI proposals, suggesting that multiple legislative pathways remain in play.
Key Takeaways
As Congress evaluates legislation informed by the Framework and legislative recommendations, stakeholder engagement is likely to play an important role in shaping how these seven thematic areas are reflected in statutory language. Thoughtful engagement can help policymakers better understand real-world operational impacts, calibrate the scope of federal preemption, and assess how proposed requirements would interact with existing legal and regulatory regimes. For companies that have been tracking developments since the December Executive Order, the release of the Framework underscores a clear shift from executive action to an active phase of legislative negotiation and deal-making. With the Framework, the US Government has moved further away from a highly prescriptive approach to regulating AI toward a more balanced, innovation-friendly approach.
Our team is actively monitoring this space and advising clients on how these changes may impact their operations. If your company wants to stay ahead of the curve or play an active role in shaping the national AI framework, we encourage you to engage with our group. We can help you assess risk, develop compliance strategies, and position your organization to participate in policy discussions that will define the future of AI regulation.
This publication/newsletter is for informational purposes and does not contain or convey legal advice. The information herein should not be used or relied upon in regard to any particular facts or circumstances without first consulting a lawyer. Any views expressed herein are those of the author(s) and not necessarily those of the law firm's clients.