Trump executive order seeks to gut state AI laws

  • What's at stake: States' rights to set consumer protections.
  • Expert quote: Preempting state laws before having a federal standard in place is "dangerous and irresponsible," warns Scott Kosnoff.
  • Forward look: Expect legal battles as states and consumer advocacy groups push back on this order.

President Donald Trump signed an executive order Thursday night that calls for a national artificial intelligence framework that would preempt state laws on AI. The stated goal is to remove barriers to U.S. AI leadership.
"To win, United States AI companies must be free to innovate without cumbersome regulation," the executive order says.


Several states have passed laws that attempt to protect consumers from the potential harms of errant AI. California's AI laws require transparency in AI models, disclosure when a consumer is interacting with a chatbot and safeguards for election integrity, among other things. Colorado's AI Act seeks to prevent algorithmic discrimination that affects consumers' ability to obtain loans, jobs and housing. Tennessee and New Jersey have laws that address deepfakes and impersonation. Utah's AI law regulates AI disclosures and liability.

Supporters of the new executive order, Ensuring a National Policy Framework for Artificial Intelligence, say it will promote AI innovation by providing one set of rules everyone can live by and free tech companies to focus on developing competitive new products.

Detractors have a number of objections, including that the executive order will hamper innovation, because AI creators and users will have to wait around for a federal law to be written and enacted. Some are also concerned that White House AI Czar David Sacks and a small number of powerful tech and business leaders will be able to bend this pending federal law to their will. And many believe this executive order sets a dangerous precedent of denying states' rights to set their own laws.

What's in the EO

The executive order directs Sacks, the White House's special advisor for AI and crypto, and Michael Kratsios, the assistant to the president for science and technology, to jointly prepare a legislative recommendation establishing a uniform federal policy framework for AI that preempts state AI laws.

It also calls for a new AI litigation task force charged with challenging state AI laws in court.

State laws create a "patchwork of 50 different regulatory regimes that makes compliance more challenging, particularly for start-ups," the order states.

The order claims that state laws "are increasingly responsible for requiring entities to embed ideological bias within models. For example, a new Colorado law banning 'algorithmic discrimination' may even force AI models to produce false results in order to avoid a 'differential treatment or impact' on protected groups." 

Reactions

Industry experts shared a range of concerns about the executive order.

For Scott Kosnoff, a partner at the law firm Faegre Drinker Biddle & Reath, the overarching worry is about safety. 

"I'm a fan of AI, and I think that it's an important technology that we ought to be investing in," Kosnoff told American Banker. "But like all new forms of technology, it's got upside, and it has downsides, and I think those downsides need to be addressed."

Though he is sympathetic to the notion of a single national standard for regulating AI, the problem is that no federal standard exists, he said. 

"Until Congress is able to come up with a sensible, middle-of-the-road framework for regulating AI, the states have no choice but to fill the gap," Kosnoff said. "Preempting state law first, in the absence of any federal standard, is dangerous and irresponsible."

Kosnoff is concerned about the accuracy, validity and reliability of models.

"AI can be amazingly powerful, and it can be right much of the time, but it's not always right, and sometimes when it's wrong, it's breathtakingly wrong," Kosnoff said. 

He also worries about algorithmic discrimination. "The administration hates that term, and wants to remove it from existence, but it's real, and it's documented." 

Privacy is also a concern "because AI doesn't exist in a vacuum – it needs oceans and oceans of data points, and some of it might well be sensitive, and you want to make sure that you've got the rights to collect all this data that you want to collect, and if you do, you need to then ask, 'Do we have the right to use it in the way that we want to use it?' And if the answer to that question is yes, then you've got to take appropriate steps to safeguard the sensitive information so that it doesn't fall into the hands of wrongdoers."

Transparency, or making people aware an answer is coming from an AI bot, and explainability, or being able to explain how an AI model made a decision, such as whether or not to approve a loan, are also important, he said.

"The more powerful and complex the model, the harder it is to identify with any sort of confidence how it got from A to B," Kosnoff said.

Other critics are more concerned about violating states' rights.

"The efforts at prohibiting state regulation of AI have repeatedly failed in Congress because regardless of the form, it is an unpopular, undemocratic and dangerous idea," said Ben Winters, director of AI and privacy at the Consumer Federation of America, in a LinkedIn post. "This Executive Order is a last-ditch effort to prop up Big Tech and try to save them from having to comply with commonsense rules when fair methods have consistently failed. It's unworkable — striving only to punish and intimidate state lawmakers that want to protect their constituents and give yet another gift to the tech industry. This effort and every future attempt at moratoria should be soundly rejected."

The order is "the most aggressive assertion of federal power over emerging technology we've seen in decades," Majo Castro, managing attorney at CastroLand Legal, wrote in a blog post.

"Supporters are celebrating a victory for 'innovation,'" Castro wrote. "Critics are preparing for constitutional fights. But for those of us who work with fast-moving tech companies and the regulators who oversee them, the picture is way more complicated. … With this new order, states are effectively told: 'Stand down. Washington will handle it.' Except … Washington rarely moves at the speed of technological risk, regulations have always lagged behind technology, and this isn't going to change that." 

Kareem Saleh, founder and CEO of FairPlay, maker of software that tests the outcomes of AI models, especially lending models, objected to the order's claim that banning discrimination forces AI models to produce false results.

"Efforts to measure AI bias are often mischaracterized as attempts to embed ideology," Saleh told American Banker. "In reality, bias estimation is a decision-support tool for high-stakes choices in both business and government. Take a lending model trained on historical data from an era of documented discrimination: If it reproduces those patterns, is it producing a true result or is it accurately encoding past injustice? Bias testing forces us to ask that question rather than treating history as ground truth."

Laws like Colorado's "don't ban differential outcomes; they ask firms to demonstrate that their AI systems make decisions for defensible reasons — an approach that should strengthen trust, adoption, and long-term competitiveness," Saleh said. "To win the AI race, we need AI that works for everyone, and we can't fix what we refuse to measure."

Supporters of the executive order applauded the effort to create a federal AI framework.

Adam Thierer, resident senior fellow at conservative think tank R Street Institute, argued that state AI regulations have been excessive.

"Compounding state AI mandates threaten to create a confusing and costly regulatory situation for this strategically important sector and could unduly burden smaller innovators in particular," Thierer wrote in a blog post. "Preemption of such laws has been a hot topic over the past year, but Congress has not yet been able to formulate a national policy framework that would address this growing patchwork and offer a more harmonized approach to American AI policy. The new EO represents an attempt by the Trump administration to do what it can in the short term to discourage state and local regulatory overreach and safeguard American AI innovation and leadership going forward." 

Wedbush analysts noted that the executive order will help U.S. technology firms like OpenAI, Google, Microsoft, Anthropic and Meta, which have been lobbying to limit AI regulations.

"With trillions of dollars of investments coming into building out AI infrastructure and other technologies, this was a strategic move to streamline the regulatory process and focus on American dominance in this industry with more foreign adversaries accelerating efforts to emerge as a winner in the AI Revolution," the Wedbush analysts wrote. "It is still very early days in the AI Revolution, and we believe that more organizations are expected to head down the AI roadmap through strategic deployments over time, but this executive order takes away more questions around future AI buildouts and removes a major overhang moving forward."
