Assessing Throughlines in the Trump Administration’s AI Regulatory Approach
WilmerHale Privacy and Cybersecurity Law Blog

Securing American leadership on artificial intelligence (AI) is a top priority for the Trump Administration. Although changes brought by the Trump Administration are certain to result in regulatory changes across the federal government with respect to AI, there are also critical throughlines that businesses need to be aware of as they implement their AI strategies. This article is a guide to the major initiatives and trendlines we have observed so far.

Rescission of President Biden’s Omnibus AI Executive Order

In one of his first acts as President on January 20, Trump rescinded President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence from October 30, 2023 (October Order). The October Order contained a wide range of directives pertaining to AI safety and security, privacy, consumer protection, and competition.

A few days later, on January 23, President Trump issued his own AI Executive Order on Removing Barriers to American Leadership in Artificial Intelligence (Order). In it, he established that his Administration’s AI Policy would be to “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”

Trump accordingly directed the Assistant to the President for Science and Technology (Michael Kratsios), his new Special Advisor for AI and Crypto (David Sacks), and the Assistant to the President for National Security Affairs (Mike Waltz), in coordination with other agency heads, to do two things consistent with this policy:

  • First, he instructed them to develop an AI Action Plan within 180 days, laying out plans to sustain and enhance America’s AI dominance. Consistent with this directive, the Office of Science and Technology Policy—in coordination with the National Science Foundation—issued a request for information in the Federal Register on February 6, 2025, soliciting feedback by March 15, 2025, on “any AI policy topic” to inform development of this AI Action Plan. Nearly 9,000 comments were received.
  • Second, Trump directed them to review “all policies, directives, regulations, orders, and other actions” taken pursuant to the rescinded October Order that are inconsistent with the Order’s Policy and to immediately “suspend, revise, or rescind such actions,” leveraging all “available exemptions” where necessary as an interim measure.

That same day, President Trump reestablished the President’s Council of Advisors on Science and Technology (PCAST Council)—a longstanding advisory council that started in the George W. Bush Administration—to “spearhead American innovation and competitiveness in critical and emerging technologies.” Co-chaired by the Assistant to the President for Science and Technology and the White House AI and Crypto Czar, the PCAST Council will include up to 24 members from industry, academia, and the government to “champion bold investments in innovation, the elimination of bureaucratic barriers, and actions to help the United States remain the world’s premier hub for scientific and technological breakthroughs.”

At first blush, President Trump’s early actions might suggest that the new Administration is prepared to take a fundamentally new approach on AI—one that deemphasizes regulations pertaining to AI safety in service of accelerated AI innovation. Indeed, speaking at the Artificial Intelligence Action Summit in Paris, France, on February 11, 2025, Vice President JD Vance began by saying that he was there not “to talk about AI safety,” but rather to talk about “AI opportunity.” Although significant questions remain about what the precise regulatory implications of Trump’s recent Order will be, there may be more continuity than some expect.

National Security

Certain significant directives that arose from the October Order remain in effect. Most obviously, President Trump has not yet rescinded the first-ever National Security Memorandum on Artificial Intelligence (NSM), issued in October 2024, whose development was first directed by the October Order. That NSM, which also includes a classified Annex, directed federal agencies to move expeditiously “to harness cutting-edge AI technologies” to advance the U.S. government’s national security mission. Specifically, it directed that the Federal Acquisition Regulatory Council consider certain regulatory changes to accelerate the acquisition and procurement process for AI and to simplify processes so that “companies without experienced contracting teams” may meaningfully compete. In policy responses submitted as part of his confirmation hearing process, Secretary of Defense Pete Hegseth emphasized the need to accelerate acquisition processes for AI-related products. And on March 6, 2025, he issued a directive to senior Pentagon leadership to reframe its acquisition process from a “hardware-centric to a software-centric approach.” In their comments on the AI Action Plan, many companies reinforced the need to modernize government procurement, urging the swift adoption of AI and cloud solutions.

Separately, we continue to monitor changes in export controls and broader enforcement priorities as they pertain to AI. As described in a separate WilmerHale Client Alert, we are monitoring the implementation of the Department of Commerce’s new “Framework for Artificial Intelligence Diffusion” (Diffusion Rule), which came into effect on January 13, 2025, and imposes (i) new export controls on certain closed-weight AI model exports and (ii) a series of rules designed to restrict the international diffusion of large “clusters” of semiconductors necessary to train AI models. Critical players across the AI stack have had mixed reactions to the Diffusion Rule, with some citing disproportionate burdens on cloud service providers and others citing global competition concerns for U.S. hardware. Commerce has indicated that it will not enforce the Diffusion Rule (promulgated as an Interim Final Rule) until May 15, 2025, and it is soliciting feedback through an open comment period until then—after which we expect that there could be significant changes.

Energy Infrastructure

Meanwhile, Trump has sustained major Biden initiatives on AI related to new energy infrastructure, which appears likely to be an area of heightened focus for the Trump Administration.

Senior Trump officials commenting on AI have repeatedly raised the need to ensure AI infrastructure, including data centers, is built in the United States and, relatedly, to ensure AI companies can satisfy the vast electrical and computational needs required to power their systems.

On his second day in office, President Trump, alongside several CEOs, announced a new joint venture, Stargate, with the stated goal of investing $500 billion to build “the physical and virtual infrastructure to power the next generation of advancements in AI,” including the construction of data centers, by 2029. And as Vice President Vance said in Paris, “AI cannot take off unless the world builds the energy infrastructure to support it.” Similarly, at the American Dynamism Summit, Vance noted that “if you want to lead in artificial intelligence, you have got to be leading in energy production.” While the principal focus of Vance’s remarks was on reducing energy prices, he also talked about the need to make it easier for American AI companies to build in the United States.

It’s therefore notable, though perhaps not surprising given these energy goals, that the Trump Administration has so far not taken steps to rescind Biden’s late-breaking Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure (Infrastructure Order). Issued on January 14, 2025, the Infrastructure Order directed that both the Secretary of Defense and the Secretary of Energy identify a minimum of three federal sites that might be suitable for lease to private sector companies for the construction and operation of frontier AI data centers and the construction of new clean energy facilities needed to power those data centers. The Infrastructure Order further directs that the heads of agencies with permitting authorities work to expedite the processing of permits and approvals required for the construction and operation of AI infrastructure on federal land.
While implementation of the Infrastructure Order appears to be behind schedule—notably neither the Department of Defense nor the Department of Energy has publicly listed any sites for construction of AI infrastructure as required by February 28—we anticipate that there could be related initiatives in this space in the not-too-distant future.

Standards Development

One of the most consequential initiatives from the October Order with respect to AI safety was its invocation of the Defense Production Act to facilitate the Commerce Department’s collection of certain information from private companies “developing or demonstrating an intent to develop potential dual-use foundation models,” including information pertaining to their (i) ongoing or planned activities related to training dual-use models, (ii) ownership and possession of model weights, and (iii) results of any dual-use foundation models’ performance in relevant AI red-team testing consistent with new NIST standards developed pursuant to the Order. While the Department of Commerce issued a Proposed Rule to establish regular reporting requirements for companies consistent with this directive, we have not yet seen any signs that the Trump Administration will seek to finalize the Rule. That said, certain AI companies have suggested in their RFI comment letters that they do not want the Commerce Department out of the safety business. Noting that “AI systems will increasingly embody significant national security implications in the coming years,” some have called for the preservation of the AI Safety Institute—effectively created by the October Order within NIST and later memorialized in the NSM—to continue third-party testing for AI systems and to support ongoing development of safety benchmarks.

Major AI players that submitted feedback on the AI Action Plan have so far publicly called on the Trump Administration to sustain the federal government’s role in the AI regulatory space rather than let state governments develop a myriad of new requirements that increase compliance costs and result in a “fragmented regulatory environment”—a common frustration for businesses seeking to navigate state privacy and cyber requirements. Others have suggested reimagining the role of the AI Safety Institute to serve as an efficient “front door” to the government for companies seeking to navigate the national security and economic implications of this new technology.

The Trump Administration may decide that at least some centralized regulatory functions will make it easier for companies to innovate by creating a baseline expectation around safety and testing standards.
