Trump Revokes AI Risk Executive Order: What the Experts Say


On Monday, President Donald Trump revoked former President Joe Biden’s executive order imposing various safeguards on AI development.

Biden signed the original order to reduce AI’s risks to consumers, workers, and national security. It required AI developers to share safety test results with the U.S. government in accordance with the Defense Production Act.

Executive Order 14110, from 2023, covered “the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” Biden implemented the order after officials were unable to pass legislation that imposed restrictions on artificial intelligence development, and the administration saw it as crucial to mitigating the substantial risks of AI. This included addressing potential chemical, nuclear, and cybersecurity risks that could endanger national security.

However, this has now been repealed. The GOP made its position clear in the 2024 Republican Party platform, stating: “We will repeal Joe Biden’s dangerous Executive Order that hinders AI innovation and imposes radical Leftwing ideas on the development of this technology. In its place, Republicans support AI development rooted in free speech and human flourishing.”

We speak to AI experts about what came to be one of Trump’s first acts — is this good for the advancement of AI, and does it come with risks attached?

Key Takeaways

  • President Donald Trump has repealed former President Joe Biden’s executive order concerning AI risks in one of his first acts since taking office.
  • The decision paves the way for AI companies to develop products with little to no oversight.
  • Experts tell Techopedia they are concerned about the implications it will have on AI safety and data integrity.
  • We weigh up the pros and cons of the move and the potential consequences.

Industry Reaction: Is Repealing the AI Executive Order a Good or Bad Thing?

Techopedia asked leaders across AI and software development for their verdict.


Kenny Johnston, Chief Product Officer of Instabug, which supports 25,000 mobile development teams, told Techopedia:

“The repeal of the 2023 executive order on AI safety ushers in a dramatic shift regarding how the U.S. approaches regulation affecting the future of AI.

“For the technology sector in general, this is simultaneously an opportunity and a challenge. Developers, including mobile app developers, are instrumental in implementing AI as they upgrade and transform the ways in which users interact with devices.

“The challenge is that in the absence of structured safety testing requirements, greater responsibility falls on the industry and development teams to ensure that AI systems are deployed responsibly.

“Repealing the executive order underscores the importance of tech leaders to volunteer to proactively address safety and security concerns, boost consumer confidence, and use AI ethically.

“Collaboration by the tech community keeps the progress in AI going and will continue, together with safeguarding against any perceived risks.”

Mike Capone, CEO of data integration and AI platform Qlik, told Techopedia:

“The decision to revoke Biden’s executive order on AI risks reflects the ongoing tension between innovation and regulation in rapidly evolving fields like artificial intelligence.

“The removal of the executive order shouldn’t be seen as a green light for careless AI adoption, but rather as an opportunity for the private sector to step up.

“The path forward lies in innovation paired with intentionality—embedding responsibility into the DNA of AI development from the start.

“While regulatory oversight can provide important guardrails, history shows that government policy often lags behind the pace of innovation. This dynamic makes it incumbent upon the private sector to lead in ensuring AI development and deployment is responsible, ethical, and safe.

“We firmly believe businesses should not wait for regulatory mandates to act responsibly. Responsible AI practices — underpinned by high-quality, well-governed data — aren’t just about risk mitigation.

“They are a strategic imperative that enhances trust, reduces exposure to operational and reputational risks, and drives better outcomes for customers and shareholders alike.

“We’ve seen firsthand that companies who prioritize robust data foundations and transparency in their AI systems gain a competitive edge while also safeguarding public trust.

“In a world where AI capabilities are advancing at an unprecedented rate, businesses must balance speed with accountability.

“The stakes are too high to treat this as an afterthought.”

Jonas Jacobi, CEO and co-founder of ValidMind, the AI model risk management platform, told Techopedia:

“The rollback of the 2023 AI executive order shifts focus away from safeguards designed to guide the responsible use of AI within government.

“This raises concerns about how AI might be used or misused in critical areas such as public services, regulatory enforcement, and national security.

“It also serves as a possible bellwether for the trajectory of AI regulation — or lack of oversight — in the coming years, underscoring the need for vigilance and advocacy to ensure transparency, accountability, and responsible principles remain central to how our government leverages AI technologies.”

Could U.S. States Set Up Their Own AI Legislation?

A consideration often overlooked in the initial marveling over generative AI systems was that these systems are not completely objective and unbiased. Biases and blind spots can be baked into the algorithms and data underpinning them.

This is not necessarily due to nefarious activity by the AI vendors — it can stem from unconscious biases the developers themselves are unable to see. Loosening oversight could exacerbate this as the powers authorities have to intervene are watered down.

One option could be for states within the U.S. that are less sympathetic to this repositioning of AI policy to propose their own laws. In theory, this would be possible if the laws did not contravene existing federal laws. In practice, this may be difficult, and it’s unclear how much of an impact state-level laws will have on national and international companies.

Furthermore, legislation that has not been primed in anticipation of Trump repealing the executive order will likely take a substantial amount of time to write, not to mention the journey it goes on before becoming law. This could afford AI companies the time to make changes and benefit from the lax regulations while the legislative battle is being fought.

It is also important to mention that this would signify the U.S. and the EU drifting further apart. The EU has walked the path of careful oversight, ensuring no stone is left unturned to prevent AI from being used in ways that could harm its citizens.

With the U.S. taking the approach of deregulation, less collaboration between the powers seems inevitable, which may create more problems further down the line.

What Are the Pros & Cons of Trump’s AI Repeal?

Zooming out, there are several potential pros and cons of the decision.

As the Republican Party mentioned in its statement, innovation could be stimulated, with companies no longer concerned with meeting specific regulations. It could also allow resources currently spent on regulatory compliance to be redirected into research and development, leading to further breakthroughs.

Geopolitically, it could also enable the U.S. to maintain its lead over China, particularly as tech exports to China are expected to be restricted more, not less, under Trump. This could enable the U.S. to stifle Chinese AI innovation, which has been highlighted as a national security issue for some time.

Many enterprise leaders will also see Trump’s decision as a boost for their businesses. Increasing the use of AI offers them a huge opportunity for greater efficiencies: tasks such as writing code and building software can be completed with fewer human resources devoted to them.

On the other hand, the potential downsides are significant as AI has become a major part of many areas of business and civic life.

Over the last few months, we have seen a flurry of legal issues surface around how AI companies develop their products and the intellectual property they have potentially infringed upon in the name of progress.

With fewer legal safeguards, companies may be emboldened to take more aggressive action in contested areas like website crawling and copyright infringement without fear of legal repercussions.

While it’s difficult to predict the future, leaving any industry unchecked and relying on those entities with a vested interest to regulate themselves doesn’t sound like a recipe for responsible AI progress.

We may see an increase in technological breakthroughs, but we might also see more sophisticated deepfakes and disinformation campaigns. Some will see this as a price worth paying, while others will be concerned about the impact this will have on an increasingly partisan discourse.

Job security is also a hot-button topic that is hard to analyze. Are we too concerned about it or not concerned enough? It seems like a dereliction of duty for governments worldwide to stand by as unemployment increases due to AI, but without a crystal ball, it is unclear how the tech job market will evolve in the years to come.

The hope is it will create as many jobs as it takes, if not more. However, progress may not be linear, and there could be significant upheaval while the transition is worked out.

The Bottom Line

Over the last few years, we have all been adjusting to the brave new world of AI, often struggling to predict how things will look. It now appears those consequences may come sooner rather than later, as what some saw as stabilizing forces, and others as unnecessary restrictions, have been cleared away.

We will now see how the market forces guide its development, which could either produce an exciting age of innovation or one of significant tumult that upends many elements of society.

Duncan Proctor
Senior Editor

Duncan joined Techopedia as Senior EU Editor in July 2024. He has previously worked for the Telegraph Media Group and a number of B2B technology publications within the Future PLC portfolio.
