A status check on the state of artificial intelligence regulation in the U.S.

Takeaways

  • Executive Order 14365 has so far not brought any clarity or consistency to U.S. artificial intelligence (AI) regulations.
  • No U.S. state AI laws have been challenged or overturned (yet).
  • Businesses should be careful not to overcorrect in response to early federal pressure on AI regulations. Even if state AI laws are overturned, organizations would still have consumer protection, anti-bias, and anti-discrimination obligations that state regulators and Attorneys General will enforce in the AI context through other mechanisms.

OpenAI released GPT-4 in March 2023. By then, ChatGPT was already the fastest-growing consumer software application in history. Artificial intelligence had already been deployed across many businesses, but the rapid proliferation of large language models into everyday life transformed AI regulation from a niche concern into a top priority of legislative sessions nationwide. The result was a scramble to regulate and then understand (in that order) the new technology: over-drafted measures, followed by industry pushback, followed by a contraction driven by the chilling effect that reactionary regulation can have on business investment.[1]

That brings us to early 2026. March 11, 2026, was the first deadline set by President Trump’s Executive Order 14365,[2] “Ensuring a National Policy Framework for Artificial Intelligence,” a major coordinated action across three federal agencies to centralize AI policy and override state-level regulations (like those in California and Colorado).

So, what has happened? Is there still an AI state patchwork? Do organizations have clearly defined obligations and practical guidance about how to safely invest in and deploy AI within their workflows?

In sum: not very much yet, most definitely, and no.

We’ll start with what hasn’t happened (as of the date of this review):

  • There has not been any significant federal legislation or regulation of AI. Senator Marsha Blackburn’s proposed TRUMP AMERICA AI Act, a backronym for the torturously named “The Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act,” was released in the Senate on March 18, 2026, but it remains just that: a proposal.
  • The U.S. Department of Commerce was supposed to identify “onerous” state AI laws, specifically targeting those requiring AI to alter “truthful” outputs or imposing burdensome disclosure requirements.[3] This hasn’t happened.
  • The Federal Trade Commission (FTC) was supposed to issue a policy statement clarifying the circumstances under which state laws that mandate “alterations to the truthful output of AI models” are preempted by federal law prohibiting unfair and deceptive acts or practices.[4] This hasn’t happened either.
  • The Secretary of Commerce was to issue a policy notice stating when states with restrictive AI laws may be deemed ineligible for certain Broadband Equity Access and Deployment (BEAD) Program funds.[5] This also has not happened.
  • The FCC Chair was directed to initiate a proceeding within 90 days to determine whether to adopt a Federal reporting and disclosure standard for AI models that would preempt conflicting state laws. This has not happened because the Secretary of Commerce has not identified “onerous” state laws yet.
  • The Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology are tasked with proposing legislation that would broadly preempt state AI laws, with exceptions for child safety, data center infrastructure, and state government procurement.[6] This hasn’t happened.

About the only thing that has happened is that Attorney General Pam Bondi announced the launch of the Department of Justice’s (DOJ) AI Litigation Task Force, which is tasked with challenging state AI laws inconsistent with federal policy.[7] However, this Task Force has not yet filed any challenge, presumably also waiting for the “target” list from the Secretary of Commerce.

Delays at the federal government are not newsworthy events in themselves, and because EO 14365 by itself does not (and could not) overrule any existing state AI laws, nothing has changed for businesses developing or deploying AI tools and systems. However, the difficulty the agencies are plainly having in interpreting and implementing EO 14365 previews the difficulties private organizations will face.

The first issue is the ideological focus of EO 14365 and its agency directives.

EO 14365 specifically calls out the Colorado AI Act (CAIA)’s ban on “algorithmic discrimination” as “forc[ing] AI models to produce false results in order to avoid a ‘differential treatment or impact’ on protected groups.”[8] Most state AI regulation (including the CAIA) boils down to some flavor of: “if you are going to use AI to make important decisions about a person, you had better make sure it does so properly.” These laws require organizations to treat AI like the employees it is invariably replacing: the AI must be sufficiently credentialed, properly trained, subject to oversight, periodically reviewed, and must not violate the law in how it conducts business.

Human beings have the ability to identify and account for bias and discriminatory tendencies in their thought processes, the data they ingest, and their actions. AI tools do not. We don’t even understand how many AI tools arrive at their output.[9]

Take an AI tool used by a mortgage lender to streamline high-volume loan applications. The model is trained on ten years of historical lending data and is designed to predict the likelihood of default. It correlates an applicant’s attributes, such as ZIP code, property value, spending habits, and higher education, with the historical default rates in its training data.

While the data is facially neutral, the model identifies patterns that act as proxies for protected classes: ZIP codes that were systematically denied historical investment going back decades, graduates from certain historically underfunded or minority-serving institutions correlating with lower historical loan approval rates, and spending at grocery stores predominantly located in minority-heavy areas. The AI assigns the applicant a higher risk score than a nearly identical applicant from a different neighborhood.

Under EO 14365’s framing, the higher risk rating is the “true” result, with “true” defined solely by the patterns in the AI’s training data, and forcing the AI to account for that embedded bias would produce a “false” one.
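The proxy mechanics described above can be reduced to a toy sketch. Everything here is hypothetical: the weights, features, and applicants are invented for illustration and are not drawn from any real underwriting model.

```python
# Toy "risk score" with hypothetical weights standing in for the correlations
# a model might learn from ten years of historical lending data. The ZIP-level
# default rate reflects decades of neighborhood disinvestment, not anything
# about the individual applicant, so it operates as a proxy for protected class.

def risk_score(applicant):
    return (
        0.6 * applicant["debt_to_income"]
        + 0.3 * applicant["zip_default_rate"]        # the proxy feature
        + 0.1 * (1 - applicant["years_employed"] / 20)
    )

a = {"debt_to_income": 0.30, "years_employed": 8, "zip_default_rate": 0.02}
b = dict(a, zip_default_rate=0.18)  # identical applicant, historically redlined ZIP

print(risk_score(a) < risk_score(b))  # True: the only difference is the ZIP proxy
```

The facially neutral input does all the work: two applicants with identical conduct receive different scores purely because of where they live.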

If the CAIA is successfully challenged, where does that leave an organization seeking to use AI in such a use case? In the same place it was before. Discriminatory impact is discrimination.[10] The Consumer Financial Protection Bureau and Department of Justice have clarified that “the AI made the decision” is not a valid legal defense.[11] These obligations are deeply rooted in U.S. equal protection principles, are popular with consumers, and are likely to outlast the current administration and its ideological persuasions. A challenge to the CAIA (or the employment-focused AI regulations in Illinois and Texas, or transparency and “frontier model” reporting requirements in California and New York) would not be a reprieve; it would be a red herring.
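As a concrete illustration of how “discriminatory impact” gets measured in practice, regulators often start from the EEOC’s four-fifths rule (29 C.F.R. § 1607.4(D)): if one group’s selection rate falls below 80% of the most-favored group’s rate, the tool is flagged for adverse impact. The applicant counts below are hypothetical.

```python
# Sketch of the EEOC "four-fifths" (80%) rule of thumb, a common first screen
# for disparate impact in automated selection tools. All counts are invented.

def selection_rate(selected, applicants):
    return selected / applicants

def adverse_impact_ratio(group_rate, reference_rate):
    # Ratio of a group's selection rate to the most-favored group's rate;
    # a ratio below 0.8 is generally treated as evidence of adverse impact.
    return group_rate / reference_rate

rate_a = selection_rate(selected=90, applicants=150)  # 0.60, reference group
rate_b = selection_rate(selected=40, applicants=100)  # 0.40, protected group

ratio = adverse_impact_ratio(rate_b, rate_a)
print(f"{ratio:.2f}")  # 0.67, below the 0.80 threshold, so the tool gets flagged
```

That arithmetic does not depend on any state AI statute; it applies whether the decision-maker is a loan officer, a spreadsheet, or a frontier model.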

Still, the pro-business pushback has undoubtedly influenced the outcome of more recent artificial intelligence regulatory efforts. The NY Responsible AI Safety and Education (RAISE) Act, which comes into effect March 19, for example, has sharply pivoted away from the individualized consumer protections of the CAIA and state profiling and automated decision-making technology (ADMT) regulations to essentially say “please don’t invent Skynet.” This, and its sister law in California (the Transparency in Frontier Artificial Intelligence Act (TFAIA)),[12] are laser-focused on major AI developers, not everyday businesses. Further, the forthcoming Oklahoma comprehensive privacy law[13] has not adopted the stricter profiling obligations of the California Consumer Privacy Act or the Minnesota Consumer Data Privacy Act.[14]

The second issue is preemption.

The DOJ’s anticipated lawsuits will likely argue that certain state AI laws are unconstitutional under the Dormant Commerce Clause or preempted by existing federal law. But preemption normally works by displacing state-level protections with federal ones already in place (as the Health Insurance Portability and Accountability Act of 1996 and the Children’s Online Privacy Protection Rule do); there is no equivalent federal AI framework here.

There is a concern that the EO strips away nascent rules designed to prevent AI-driven harms (for instance, bias in hiring or lending, or unsafe AI applications) without putting any equivalent federal standards in place. State governors, such as Illinois’s J.B. Pritzker, blasted EO 14365[15] as “unlawful” and “a blatant federal overreach,” signaling that they won’t abandon state AI safeguards. Florida Governor Ron DeSantis responded[16] that “an executive order doesn’t/can’t preempt state legislative action,” underscoring that only Congress can lawfully override state laws. State insurance regulators expressed[17] “deep concern” that the sweeping order could disrupt well-established consumer protections in insurance markets. They note that insurance is heavily governed by state law, and the EO’s broad preemption efforts might introduce legal uncertainty that impedes routine functions like underwriting and claims—potentially delaying business decisions and deterring innovation due to unclear rules.

Dormant Commerce Clause challenges would be complicated. Current AI laws do not facially discriminate against out-of-state businesses. But AI models and the Internet don’t observe state lines. If California passes a strict AI law, a company essentially has to change how its AI operates for the entire country, because it’s too difficult to geofence a foundational AI model to just one state.

In sum, challenges to state AI regulations will be hard fought, endlessly appealed, and take years to work out in the courts. But organizations are under pressure to adopt AI now, not in two years. And they cannot invest the millions of dollars into such technology if they don’t know what the rules are.

EO 14365 is part quick-fix deregulation, part ideological platforming, which makes its future impact very difficult to forecast. In the near term, the practical effect on companies is limited: no state laws are immediately overturned, and organizations should continue complying with applicable state AI requirements. And even if those laws are eventually overturned, organizations would still have consumer protection, anti-bias, and anti-discrimination obligations, which state regulators and Attorneys General will enforce in the AI context through other mechanisms.


[1] For example, SB 1047 (Safe and Secure Innovation for Frontier Artificial Intelligence Models Act), vetoed by California Governor Gavin Newsom in September 2024, and HB 2094 / SB 487 (AI Consumer Protections), vetoed by Virginia Governor Glenn Youngkin in March 2025.

[2] Exec. Order No. 14,365, 90 Fed. Reg. 58499 (Dec. 16, 2025), available at https://www.federalregister.gov/documents/2025/12/16/2025-23092/ensuring-a-national-policy-framework-for-artificial-intelligence.

[3] Executive Order 14365, §4.

[4] Id. at §7.

[5] Id. at §5.

[6] Id. at §8.

[7] Id. at §3; Memorandum from the Attorney General, Artificial Intelligence Litigation Task Force, Dep’t of Just. (Jan. 9, 2026), available at: https://www.justice.gov/ag/media/1422986/dl?inline.

[8] Executive Order 14365, §1.

[9] Hence the unattributable aphorism: “If you understand how it works, it isn’t AI.” See Dario Amodei, The Urgency of Interpretability, (April 2025), available at https://www.darioamodei.com/post/the-urgency-of-interpretability.

[10] See, e.g., Title VII of the Civil Rights Act of 1964, 42 U.S.C. § 2000e‑2(k).

[11] Consumer Financial Protection Bureau, CFPB and Federal Partners Confirm Automated Systems and Advanced Technology Not an Excuse for Lawbreaking Behavior, (Apr 25, 2023), available at https://www.consumerfinance.gov/about-us/newsroom/cfpb-federal-partners-confirm-automated-systems-advanced-technology-not-an-excuse-for-lawbreaking-behavior/.

[12] Applying to models trained with >10^26 FLOPs. No doubt, many legal and compliance experts will use AI to understand what on earth “models trained with >10^26 FLOPs” means in order to evaluate the applicability of the RAISE Act and the TFAIA, and then those analyses will be ingested by these same models, creating the ultimate recursive feedback loop that will create a black hole and consume us all.

[13] See Oklahoma Senate Bill 546, available at https://www.oklegislature.gov/cf_pdf/2025-26%20FLOOR%20AMENDMENTS/House/SB546%20FA1%20WESTJO-MJ.PDF.

[14] Neither of which use the phrase “artificial intelligence” because nobody can agree on what this means, but absolutely apply to a covered business’s deployment of AI systems and tools.

[15] Capital News Illinois, Illinois leaders ‘won’t back down’ following Trump’s order limiting AI regulation, Maggie Dougherty, (December 16, 2025), available at https://capitolnewsillinois.com/news/illinois-leaders-wont-back-down-following-trumps-order-limiting-ai-regulation/.

[16] Ron DeSantis, X.com, (December 8, 2025), available at https://x.com/RonDeSantis/status/1998101450442895531.

[17] National Association of Insurance Commissioners, Statement from the National Association of Insurance Commissioners (NAIC) on AI Executive Order, (December 16, 2025), available at https://content.naic.org/article/statement-national-association-insurance-commissioners-naic-ai-executive-order.

Matthew T. Hays

Matt Hays is a go-to advisor on data-sensitive projects, agreements, services, and investigations. He has worked extensively with clients as they wrangle with the explosion of innovative and complicated artificial intelligence-driven technologies, including the deployment of generative AI tools developed internally or sourced from a technology provider. With a background in engineering and patent law, Matt has a unique ability to quickly understand and assess new and complicated technologies and advise on the legal risk to your business. Working with clients in the insurance, health, tech, and financial services industries, Matt brings a holistic approach to compliance projects that ends with a solution, not just an identified problem.

Jennifer Dickey

Jennifer Dickey is an associate in Dykema’s Chicago office. She focuses her practice on advising organizations on privacy compliance and regulatory requirements. Jennifer helps businesses navigate complex data privacy frameworks and implement proactive strategies to mitigate risk and ensure operational alignment with evolving laws.