On December 11, 2025, President Trump signed an Executive Order (“EO”) titled “Ensuring a National Policy Framework for Artificial Intelligence.” The EO announces a plan to develop comprehensive federal AI regulation to supplant the growing patchwork of state-level AI laws and to use various federal tools to pressure states not to pass or enforce state AI laws in conflict with the Trump Administration’s policies.
Building on the Trump Administration's AI Action Plan and related EOs released in July, which promoted deregulation, the new EO articulates a vision for a new federal AI policy framework that ensures America “wins the AI race” and that is “minimally burdensome” while still ensuring “children are protected, censorship is prevented, copyrights are protected, and communities are safeguarded.”
While the EO does not automatically preempt state laws, it directs the federal government to actively challenge state regulations perceived as overly restrictive to AI development through strategic litigation, federal funding conditioned on policy alignment, and a potential expansion of federal agency powers.
We anticipate the Administration’s attempt to implement this EO will be subject to significant litigation challenges, particularly by the Attorneys General of states at the forefront of AI regulation, including California, Colorado, and New York. Notwithstanding the stated purpose of the EO to reduce the regulatory burden on companies related to AI, it may ultimately introduce more uncertainty about the status of AI regulation at the state level.
Formation of an AI Litigation Task Force
The EO directs the Attorney General to establish an AI Litigation Task Force within 30 days to contest state AI laws that conflict with the "minimally burdensome national policy framework for AI" articulated by the EO. The Task Force will challenge these state regulations on grounds including interference with interstate commerce, federal preemption, and potential violations of the First Amendment.
The EO also directs the Secretary of Commerce to publish an evaluation of state AI laws within 90 days (approximately March 11, 2026), identifying "onerous" regulations that conflict with federal policy. This review will prioritize laws that require companies to alter "truthful outputs" or otherwise unconstitutionally compel AI providers to make certain disclosures, with findings referred to the AI Litigation Task Force for potential action.
Restrictions on State Funding
To further constrain state lawmakers, the EO leverages the federal "power of the purse" by conditioning financial assistance on alignment with the Administration's commitment to light-touch AI regulation. The directive instructs federal agencies to review discretionary grant programs and establish new eligibility requirements that disadvantage or disqualify states maintaining AI laws deemed "onerous" or inconsistent with federal deregulatory goals.
Expansion of FCC and FTC Mandates
The EO also issues separate mandates to the Federal Communications Commission (“FCC”) and Federal Trade Commission (“FTC”) in a further attempt to preempt state AI laws. The EO directs the FCC to initiate a proceeding within 90 days to determine whether to adopt a federal reporting and disclosure standard for AI models aimed at preempting conflicting state disclosure requirements. This rulemaking process will likely include a public comment period, providing a critical window for stakeholders to shape the administrative record regarding the scope of preemption and the technical feasibility of new federal standards. Similarly, the EO directs the FTC to issue within 90 days a policy statement clarifying that state laws that require AI models to include specific ideological viewpoints or alter "truthful outputs" may be viewed as compelling companies to engage in unlawful deceptive acts or practices.
This directive creates a unique legal tension: conduct mandated by state law (e.g., bias or safety mitigations) may violate federal consumer protection law (e.g., constitute deception under Section 5 of the FTC Act). For instance, the FTC might contend that state laws compelling diversity in AI model outputs force developers to distort the model’s logic, misleading consumers who expect unaltered results. This action would echo the FTC’s letter earlier this year warning that certain compliance with European regulations could violate US laws.
Legislative Recommendation for State Preemption (with Carve-outs)
Finally, the EO calls for the Administration to prepare a legislative recommendation to establish a uniform federal policy framework for AI. Notably, the EO carves out specific areas of state law that should not be subject to federal preemption under this framework, including those related to child safety, AI compute and data center infrastructure (other than permitting reform), and state government procurement and use of AI.
Key Considerations for Companies
The EO announces the Administration’s intent to roll back many state laws and, ultimately, to replace them with new federal standards. But the EO’s actual impact will depend on how it is implemented and on what challenges states bring against its enforcement.
In the meantime, companies should keep in mind the following:
- State AI laws remain intact. State AI statutes remain enforceable. While the EO signals that many of these laws will face legal and other challenges, it does not automatically invalidate them. Many states that have passed AI laws are already engaged in litigation with the Trump Administration and are likely to continue to challenge the Administration’s attempts to undermine their legislative authority.
- Standards for bias in AI models are on a collision course. The EO prioritizes bringing legal challenges against state AI laws that ban “algorithmic discrimination” or that require companies to “embed ideological bias.” But given the current state of civil litigation in this space, it is unlikely the Administration’s litigation of state AI laws will culminate in the creation of a single standard. Instead, a patchwork of standards will likely persist, with companies caught between state enforcement actions or civil litigation related to racial or gender bias on the one hand, and potential federal enforcement regarding ideological bias on the other.
- Federal policies will continue to evolve. The forthcoming policies from the FCC and FTC could articulate new standards for how the federal government expects companies to manage issues of transparency and ideological bias. These policies may seek to force companies to change existing content moderation policies and positions.
