The transition from President Joe Biden to President Donald Trump in January 2025 has created a state of uncertainty affecting several AI-related executive orders (EOs) and policies. Executive Order 14110, which had structured federal AI governance under Biden, has been rescinded, and Executive Order 14179 introduces a new approach, leaving many legal and regulatory questions unanswered.
For lawyers advising clients on AI regulation, compliance, and policy, it is crucial to understand which initiatives remain intact, which Executive Orders on AI are in question, and how new directives reshape the AI legal landscape. This article explores these executive orders by number and examines their uncertain status and potential implications.
Executive Order 14110 (Revoked) – The Biden AI Governance Framework
Issued: October 30, 2023
Revoked: January 20, 2025
Biden’s Executive Order 14110 laid out a comprehensive framework for AI governance, emphasizing safety, fairness, national security, and workforce impact. The order created:
- Chief AI Officers (CAIOs) in federal agencies
- An interagency AI Council to oversee AI-related policies
- AI risk management and safety standards, largely in alignment with National Institute of Standards and Technology (NIST) guidelines
- Mandatory reporting requirements for AI developers of advanced models
- Hiring reforms to attract AI talent into government service
With the revocation of this order, the status of these roles, councils, and mandates is unclear. The Trump administration has not explicitly eliminated these positions, but it has ordered a review of AI-related agency actions, meaning some may be scrapped or significantly altered.
Legal Implications:
Lawyers advising clients on compliance with federal AI policies should be cautious, as earlier obligations may be repealed or replaced.
Government contractors working on AI-related projects should watch whether funding and regulatory obligations shift under new Executive Orders on AI.
AI risk management frameworks based on the Biden order need to be reevaluated in light of potential deregulatory moves.
Executive Order 14179 – Removing Barriers to AI Leadership
Issued: January 23, 2025
Trump’s executive order on AI focuses on deregulation and innovation rather than oversight and safety. The order emphasizes:
- Eliminating regulatory barriers to AI development
- Reviewing and rescinding AI-related mandates created under Biden
- Encouraging private sector leadership in AI research and development
- Aligning AI policy with economic growth and national security interests
Yet, the order lacks specific implementation details, leaving key uncertainties:
- Will government AI safety mandates be rolled back?
- Biden’s order required AI companies to submit safety reports for high-risk models. If that requirement is fully rescinded, liability and compliance frameworks will be affected.
- What happens to AI hiring initiatives?
- The earlier administration launched government AI hiring reforms to attract top talent. Will these programs remain, or will Trump’s push for private-sector-led AI development phase them out?
- Will federal AI governance structures stay?
- Bodies such as the National AI Advisory Committee and AI Safety Institutes were created to give expert guidance. If Trump shifts AI governance to the private sector, the role of these entities may diminish.
Legal Implications:
AI companies gain regulatory flexibility but will need to adjust compliance strategies if reporting requirements are repealed.
The deregulatory approach may create state-versus-federal conflicts as states like California move to enforce their own AI laws in the absence of federal oversight.
National security-related AI policies remain ambiguous, raising concerns about export controls and federal procurement policies.
Other Executive Orders on AI in Limbo
The AI Bill of Rights
The AI Bill of Rights was a guiding document under the Biden administration focused on algorithmic fairness, transparency, and discrimination protections. It was not an executive order, but its future under Trump is uncertain, as the administration’s policies emphasize growth over governance. If it is dismantled, corporate AI policies on bias mitigation and human rights compliance may need reevaluation.
The Stargate Project
The administration has introduced the Stargate Project, a $500 billion AI infrastructure initiative announced alongside its Executive Orders on AI and funded by private firms such as OpenAI, SoftBank, and Oracle. The goal is to solidify U.S. leadership in AI infrastructure, but details on governance, security, and oversight remain unclear.
- Will government contracts favor politically aligned AI firms?
- Will foreign AI investment be restricted under the initiative?
- Will companies benefiting from Stargate investments be subject to additional federal oversight?
Legal Implications:
AI firms seeking federal support should track policy shifts to ensure compliance with new funding and procurement requirements. Keeping up to date on Executive Orders on AI is crucial.
National security lawyers should assess how Stargate’s public-private model aligns with existing AI export control policies.
What’s Next? Legal Uncertainty and AI Regulation
The uncertainty surrounding AI governance means that lawyers advising tech firms, policymakers, and regulators must stay vigilant. Here’s what to watch:
- New federal guidance: Expect further clarifications as agencies review AI-related policies and mandates.
- State regulatory action: If the federal government scales back AI oversight, states like California and New York may take the lead, increasing compliance complexity.
- International AI governance: With the U.S. shifting course, will the EU and other allies pressure American firms to follow stricter AI regulations abroad?
- Congressional action: Some lawmakers may push for legislation to replace revoked executive orders, setting up a battle over AI governance.
Conclusion
The Trump administration’s approach to AI marks a significant shift from regulation to deregulation, emphasizing economic and national security interests over algorithmic fairness and risk mitigation. As federal agencies reassess their AI-related mandates, legal professionals should help clients stay adaptable to shifting policies.
For now, the best strategy is to watch legal developments closely and prepare for continued regulatory uncertainty as AI governance evolves.