How can we ensure that artificial intelligence (AI) transforms our world responsibly?
AI has the potential to transform entire industries, but that potential comes with challenges, not least the question of how to put AI ethics into practice. In 2025, certain ethical questions are more critical than ever: Who is responsible when AI fails? How do we balance innovation with fairness and sustainability?
In this article, we explore the five biggest challenges in AI and ethics for 2025 and examine the latest developments in AI regulations across different regions, including the EU, the US, Canada, and the UK.
Key Takeaways
- Experts say that the main challenges in 2025 are improving AI literacy, ensuring accountability, designing human-centered AI, addressing environmental impacts, and governing agentic AI.
- The EU’s stricter AI Act enforces clear accountability, while the US follows a decentralized, industry-friendly approach after removing federal guidelines.
- Canada is expanding its AI sector despite delays in passing the AIDA law, while the UK plans to implement stricter AI regulations in 2025.
- Solving the ethical and regulatory challenges of AI in 2025 is vital to ensuring it advances responsibly and benefits everyone.
5 Key Ethical AI Challenges in 2025
As AI becomes a bigger part of daily life, its ethical concerns are becoming more important.
In 2025, several major challenges are shaping the discussion around artificial intelligence ethics, highlighting the need to balance innovation with responsibility and to ensure that AI systems benefit society without causing harm.
This section looks at five major ethical concerns in AI for 2025: AI literacy, accountability, agentic AI governance, human-centered design, and sustainable practices. By addressing these concerns, we can better manage the impact of AI on our lives and the world around us.
1. AI Literacy: Building Awareness & Understanding
As AI becomes a bigger part of everyday life, AI literacy is now key to solving ethical problems.
Phaedra Boinodiris, Global Leader for Trustworthy AI at IBM Consulting, explains that AI literacy is about more than just knowing how it works – it means understanding, using, and judging AI responsibly.
“To build responsibly curated AI models – which, by the way, are also more accurate models – you need a team composed of more than just data scientists,” Boinodiris explained. “Bring in linguistics and philosophy experts, parents, young people, everyday people with different life experiences from different socio-economic backgrounds. The wider the variety, the better. It’s not about morality; it’s about math.”
Even though AI is everywhere, from the news we see to tools we use at work, many people don’t realize they interact with it. This lack of understanding makes it harder to fix bigger problems like biased algorithms, privacy issues, and job loss.
2. Accountability: Who Is Responsible for AI Decisions?
Accountability is another key issue. Without clear rules about who is responsible for AI decisions, people can easily blame technical errors when things go wrong.
Explainable AI (XAI) plays an important role in solving this problem. By making AI systems’ decisions more transparent and easier to understand, XAI helps ensure that the people or organizations using AI take responsibility for its outcomes.
Nathan Bos, Ph.D. in psychology, data scientist, and LLM enthusiast, highlights the ethical importance of teaching students about XAI in 2025. That is why he is updating his AI ethics course at Johns Hopkins University to include Anthropic’s 2024 interpretability research, which focuses on making AI systems more transparent and their decisions easier to understand.
Anthropic’s discovery of “sycophantic praise,” where AI systems can subtly flatter users, worried Dr. Bos. He sees this as a safety concern because such behavior could hide an AI’s true intentions, making it harder to hold the system accountable.
XAI can prevent this by revealing hidden patterns that allow people to monitor and guide AI more effectively.
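To make this more concrete, here is a minimal sketch of one simple, model-agnostic explainability technique, permutation importance, using scikit-learn. The dataset and model are illustrative assumptions rather than anything referenced above; the point is only to show what a basic transparency report can look like.

```python
# Minimal XAI sketch: permutation importance with scikit-learn.
# The dataset, model, and features are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a small tabular dataset and fit an otherwise opaque model.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle each feature and measure how much the model's
# test score drops, giving a simple, model-agnostic view of what drives decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: drop in accuracy when shuffled = {score:.3f}")
```

Output like this does not fully explain a model, but it gives users and auditors something concrete to question, which supports the kind of accountability Dr. Bos calls for.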
3. Agentic AI: Autonomy & New Governance Challenges
In 2025, the rise of agentic AI brings a new set of challenges. These systems can plan and execute tasks on their own, based on user goals.
Apoorva Kumar, CEO and Co-founder of Inspeq AI, a responsible AI operations platform, warns that this autonomy raises serious governance questions. Jose Belo, co-chair of the International Association of Privacy Professionals (IAPP) London Chapter, also stresses the importance of balancing autonomy with safeguards to ensure accountability.
Agentic AI will also impact jobs. Alyssa Lefaivre Škopac, Director of AI Trust and Safety at the Alberta Machine Intelligence Institute (Amii), predicts debates on how agentic AI might replace human workers and the scale at which this will happen.
Dr. Saqib Nazir, Assistant Professor at the Department of Robotics and Artificial Intelligence of the National University of Sciences and Technology (NUST), envisions a future workforce of AI agents that need “No sleep. No salary. No benefits. No time off.” He expects these agents to be capable of inventing new solutions, forming organizations, running entire operations independently, and coordinating with other AI systems.
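One concrete form such safeguards can take is a human-in-the-loop approval gate that pauses an agent before high-risk actions. The sketch below is a deliberately simplified illustration under assumed risk labels, not a description of any production agent framework.

```python
# Simplified human-in-the-loop safeguard for an agentic workflow.
# The actions and risk labels are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    risk: str  # "low" or "high", assigned by whoever governs the agent

def execute(action: Action) -> None:
    print(f"Executing: {action.description}")

def run_agent(plan: list[Action]) -> None:
    """Run low-risk steps automatically; pause for human approval on high-risk ones."""
    for action in plan:
        if action.risk == "high":
            answer = input(f"Approve high-risk step '{action.description}'? [y/N] ")
            if answer.strip().lower() != "y":
                print(f"Skipped: {action.description}")
                continue
        execute(action)

run_agent([
    Action("Draft a summary email", risk="low"),
    Action("Send payment to a new vendor", risk="high"),
])
```

In practice, the harder governance questions are who defines “high risk” and who is accountable when an approved action still causes harm.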
4. Human-Centered AI (HCAI)
Human-centered AI (HCAI) focuses on creating AI systems that support and work with humans instead of replacing them. Unlike approaches that focus only on automation, HCAI puts human needs and values at the center of AI design.
Dr. Bos explains that as AI develops faster than society can adapt, HCAI is essential to prevent harmful patterns from becoming permanent. He stresses the importance of avoiding designs that hide how AI works or push humans out of the loop – instead, promoting AI systems that are easy to understand and empower users to make better decisions.
Organizations like Stanford’s Human-Centered AI initiative and Google’s People + AI program are already working to make HCAI a reality, but work is ongoing.
5. Environmental Concerns
The high energy demands of AI models are creating serious environmental challenges, making sustainable practices essential for the industry. Belo from the IAPP points out that reducing the environmental impact of AI is a shared duty between AI providers and users.
AI providers must take the lead by creating energy-efficient systems and offering transparent carbon reporting. These reports can show how much energy is being used and help find ways to reduce emissions.
On the other hand, AI users can help by choosing greener data centers, managing their cloud usage carefully, and avoiding unnecessary energy waste.
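As a rough illustration of what transparent carbon reporting involves, the sketch below estimates the energy and emissions of a training run from GPU power draw, runtime, data-center overhead, and grid carbon intensity. Every number is an illustrative assumption, not a measurement from any real deployment.

```python
# Back-of-the-envelope carbon estimate for an AI training run.
# Every number below is an illustrative assumption, not a measurement.

GPU_COUNT = 64              # number of accelerators
GPU_POWER_KW = 0.4          # average draw per GPU in kilowatts (~400 W)
HOURS = 24 * 7              # one week of training
PUE = 1.2                   # data-center overhead (power usage effectiveness)
GRID_KG_CO2_PER_KWH = 0.4   # grid carbon intensity; varies widely by region

energy_kwh = GPU_COUNT * GPU_POWER_KW * HOURS * PUE
emissions_kg = energy_kwh * GRID_KG_CO2_PER_KWH

print(f"Estimated energy: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions: {emissions_kg:,.0f} kg CO2e")
```

Choosing a greener region or scheduling work when the grid is cleaner changes only the last factor, which is exactly the lever that users, as opposed to providers, control.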
The Global AI Regulatory Landscape in 2025
Countries worldwide have been creating regulations for AI governance, but the challenge is finding a balance between encouraging innovation and addressing important ethical, social, and security concerns. This includes focusing on responsible AI to ensure fairness, transparency, and accountability in AI systems.
Here is an overview of how different regions are approaching AI regulation.
The European Union
In 2024, the European Union introduced the AI Act, which is currently the most detailed AI governance regulation worldwide. This law bans social scoring systems, requires AI-generated content to be labeled, and sets strict rules for high-risk AI applications like criminal profiling. The goal is to reduce risks while ensuring accountability.
The law is not fully in effect yet, but it is already creating tension among major US tech companies. They are worried that some parts of the regulation are too strict and could hinder innovation.
Neil Serebryany, Founder and CEO at CalypsoAI, highlighted initial costs and complexity as major compliance hurdles. He told Techopedia:
“While the Act includes complex and potentially costly compliance requirements that could initially burden businesses, it also presents an opportunity to advance AI more responsibly and transparently. Ultimately, this will build greater consumer and stakeholder trust and facilitate sustainable long-term adoption.”
In December 2024, the EU AI Office, a newly established organization responsible for overseeing models under the AI Act, released a second draft of the code of practice for general-purpose AI (GPAI) models. These models include systems like OpenAI’s GPT series of large language models (LLMs).
The second draft introduced exemptions for providers of certain open-source AI models, which are usually made available to the public so developers can create their own customized versions. It also requires developers of “systemic” GPAI models to complete detailed risk assessments.
The Computer & Communications Industry Association, which includes members like Amazon, Google, and Meta, warned that the draft “contains measures going far beyond the Act’s agreed scope, such as far-reaching copyright measures.”
Canada
Canada introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act, 2022. However, as of January 2025, it has not yet become law.
AIDA aims to create clear rules for designing, developing, and using AI systems responsibly, ensuring they are safe and non-discriminatory. However, the Act has faced delays due to unclear rules, limited public involvement, and pauses in Parliament, leaving Canada without complete AI regulations.
Meanwhile, Canada continues to expand its AI capabilities. With over $2 billion in government funding and Toronto becoming a leading AI hub, the country is attracting global talent and investment.
Toronto, sometimes called the next Silicon Valley, is home to big AI players like Google and Uber, as well as startups like Borderless AI.
The United States
The biggest development in 2025 so far came in January, when President Donald Trump repealed an executive order signed by Joe Biden in 2023.
Biden’s order aimed to create federal safety guidelines for generative AI, which could have helped build a more unified national framework for AI regulation. By canceling it, Trump shifted toward a more industry-friendly approach, leaving most AI regulation up to individual states and federal agencies.
This decision has kept the US regulatory system as a mix of state and federal regulations rather than a single, nationwide policy.
States such as Utah, Illinois, and Colorado have created laws that focus on areas like consumer protection, workplace rights, transparency, and managing high-risk AI systems. Meanwhile, California has laws to protect performers’ images and voices from being misused by generative AI.
Commenting on Trump’s decision, Kenny Johnston, Chief Product Officer of Instabug, told Techopedia:
“The challenge is that in the absence of structured safety testing requirements, greater responsibility falls on the industry and development teams to ensure that AI systems are deployed responsibly.
“Repealing the executive order underscores the importance of tech leaders to volunteer to proactively address safety and security concerns, boost consumer confidence, and use AI ethically.”
The United Kingdom
Unlike the EU’s strict AI Act, which categorizes AI risks, or the US’s industry-driven approach, the UK has taken a more flexible, “pro-innovation” stance outlined in its 2023 White Paper.
However, the new Labour government plans to introduce stronger legislation.
His Majesty King Charles III outlined these plans in his speech in July 2024, and in November, the Science, Innovation, and Technology Secretary Peter Kyle confirmed the government’s goal to implement these regulations in 2025.
Game Plan for 2025: Tackling AI Challenges
It has become clear that 2025 is poised to be a pivotal year for AI’s ethical, environmental, and regulatory challenges.
Here is a simple plan for moving forward:
Improving AI Literacy
- Schools and universities should add AI topics to their lessons. Governments and companies can also create free resources to help the public understand AI and its impacts.
- Governments and tech companies should run campaigns to explain how AI works and why ethical use matters.
Ensuring Accountability & Transparency
- Developers must create AI systems that are easy to understand so people know how decisions are made.
- Governments should introduce laws to define who is responsible for AI-related decisions and their effects.
- Independent organizations can check whether AI systems follow rules and standards.
Managing Agentic AI
- Policymakers need rules for AI systems that act on their own to ensure they are used safely and ethically.
- Governments and businesses should help workers retrain for new roles if AI replaces jobs.
- Developers must set limits for agentic AI to avoid harm and keep systems under human control.
Focusing on Human-Centered AI (HCAI)
- Universities, tech companies, and governments should work together to create AI systems that help people rather than replace them.
- AI tools should empower users and keep humans involved in decisions.
- Combining knowledge from fields like ethics, psychology, and technology will help design better AI solutions.
Reducing AI’s Environmental Impact
- Companies should use energy-efficient data centers and renewable energy to lower AI’s carbon footprint.
- Properly recycling and retiring old AI systems can reduce electronic waste.
- Governments should encourage companies to work together on green technology for AI.
The Bottom Line
How can we balance AI ethics with the rapid pace of innovation? Are we ready to tackle key issues like accountability, transparency, and environmental impact?
As AI becomes a bigger part of our daily lives, understanding AI and ethics is more important than ever. Governments, businesses, and individuals must work together to make sure AI is used responsibly and benefits society. Are we prepared to take the steps needed for a sustainable and responsible AI future?
FAQs
Why is it important to consider ethics when using generative AI?
What are the 3 big ethical concerns of AI?
What are the pillars of AI ethics?
What is the future of ethical AI?
References
- AI ethics and governance in 2025 | IBM (Ibm)
- What I’m Updating in My AI Ethics Class for 2025 | by Nathan Bos, Ph.D. | Jan, 2025 | Towards Data Science (Towardsdatascience)
- Values and Ethics in Artificial Intelligence – 705.612 | Hopkins EP Online (Ep.jhu)
- Mapping the Mind of a Large Language Model \ Anthropic (Anthropic)
- AI Governance In 2025: Expert Insights On Ethics, Tech, And Law (Forbes)
- Dr. Saqib Nazir on LinkedIn: By 2030, AI agents will replace 70% of office work (McKinsey) and add $7T… (Linkedin)
- Home | Stanford HAI (Hai.stanford)
- People + AI Research (Pair.withgoogle)
- AI Act | Shaping Europe’s digital future (Digital-strategy.ec.europa)
- Second Draft of the General-Purpose AI Code of Practice published, written by independent experts | Shaping Europe’s digital future (Digital-strategy.ec.europa)
- EU General Purpose AI Code: CCIA warns that the “rushed process” could derail – INSIGHT EU MONITORING (Ieu-monitoring)
- Artificial Intelligence and Data Act (Ised-isde.canada)
- SB0149 (Le.utah)
- Illinois General Assembly – Bill Status for HB3563 (Ilga)
- Consumer Protections for Artificial Intelligence | Colorado General Assembly (Leg.colorado)
- Bill Text – SB-942 California AI Transparency Act. (Leginfo.legislature.ca)
- A pro-innovation approach to AI regulation – GOV.UK (Gov)
- The King’s Speech 2024 – GOV.UK (Gov)
- Subscribe to read (Ft)