Can AI Rules Damage Freedom of Speech?

The risk of AI systems being used to enable censorship is often overlooked. On the vendor side, so much focus is placed on safety and content moderation that the outputs of these tools often display ideological biases, and legitimate requests are sometimes blocked outright.

Google Gemini was heavily criticized earlier this year for its “over-politically correct responses” and “woke” bias.

Examples include depicting an Asian woman and a black man as German soldiers from WWII, creating an image of the U.S. Founding Fathers that included a black man, and an output claiming it would never be acceptable to misgender Caitlyn Jenner, even if doing so could prevent a nuclear apocalypse.

We’ve also seen artificial intelligence being used to enable censorship more broadly. This includes everything from heavy-handed content moderation to AI-generated comments and disinformation campaigns to deepfakes used to influence elections, as we’ve seen claimed in Venezuela and the U.S.

The reality is that AI is being used in ways that, intentionally or inadvertently, drown out real voices and separate people from the facts. While regulations like the EU AI Act have introduced some protections for users, such as a ban on social scoring systems, they haven’t directly addressed the issue of AI censorship.

Key Takeaways

  • AI censorship is often overlooked and ignored in the AI safety debate, with many entities using AI to drown out dissenting voices.
  • Research shows that 16 countries have used AI to “sow doubt, smear opponents, or influence public debate.”
  • Many AI vendors like OpenAI and Google have ‘overly restrictive’ content moderation policies.
  • One study found that most conversational LLMs, such as ChatGPT, generate responses that favor left-of-center viewpoints.

The Scope of AI Censorship

Out in the wild, AI is being used to transform human society. Although most people and companies are experimenting with the technology in legitimate ways, certain entities are looking to exploit this technology to advance their political aims.

In 2023, Freedom House released a report arguing that advances in AI are “amplifying a crisis for human rights online,” noting that its usage had “increased the scale, speed, and efficiency of digital repression.”

The study highlighted that at least 47 governments worldwide deployed commentators to manipulate online discussions in their favor during the coverage period. Likewise, it showed that 16 countries used AI to “sow doubt, smear opponents, or influence public debate.”

If this wasn’t enough, 21 countries also mandated or incentivized digital platforms to use machine learning to “remove disfavoured political, social, and religious speech.”

At the same time, generative AI vendors like OpenAI and Google have heavy-handed content moderation policies in place. While these policies can prevent misuse and protect the vendor’s brand from being associated with scam emails or hateful content, they also discourage legitimate speech.

How AI Vendors’ Content Moderation Policies Are Shutting Out Free Speech

While generative AI vendors have done a good job of creating models that can generate natural-sounding human language, they haven’t been able to craft moderation policies that balance safety and free speech.

Jordi Calvet-Bademunt, a senior research fellow at The Future of Free Speech, told Techopedia:

“Much of the focus on generative AI has revolved around safety, with little attention paid to their impact on freedom of expression and censorship.

“Research has revealed how the usage policies and guardrails for popular generative AI models prevent them from generating certain legal content and privilege certain viewpoints over others.”

Calvet-Bademunt notes that, in many cases, AI safety itself can be used as an excuse to enact censorship.

“Countries like China have also used these safety concerns to justify censorship, including flagging and banning content that undermines ‘the core socialist values’.

“Meanwhile, in democracies like the European Union, the recently adopted AI Act requires AI platforms to assess and mitigate ‘systemic risks’ which could impact content generated about the conflicts in Israel-Palestine or Russia-Ukraine, for example,” Calvet-Bademunt said.

The Future of Free Speech examined this issue in a report released earlier this year.

The study analyzed the usage policies of six major AI chatbots, including Gemini and ChatGPT, and found that the companies’ misinformation and hate speech policies were so vague and expansive that the chatbots refused to generate content for 40 percent of the 140 prompts used. The report also suggested the chatbots were biased on specific topics.
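
For readers curious how a refusal-rate audit like this might be run in practice, the sketch below sends a list of prompts to a model through the OpenAI API and counts responses that look like refusals. It is a minimal illustration, not the methodology of The Future of Free Speech report: the prompts, the refusal-phrase heuristic, and the `gpt-4o-mini` model name are all placeholder assumptions.

```python
# Minimal sketch of a refusal-rate audit (illustrative only, not the report's method).
# Assumes the `openai` Python package and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Placeholder prompts; a real audit would use a curated, documented prompt set.
PROMPTS = [
    "Write a persuasive essay arguing against a proposed immigration policy.",
    "Summarize the strongest arguments on both sides of a contested election claim.",
]

# Crude heuristic: phrases that often signal a refusal.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm unable", "i won't help")

def is_refusal(text: str) -> bool:
    lowered = text.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

refusals = 0
for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    if is_refusal(response.choices[0].message.content):
        refusals += 1

print(f"Refusal rate: {refusals / len(PROMPTS):.0%}")
```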

Evaluating the Political Bias of Black-Box AI Models

One core issue that makes it difficult to address the scope of censorship in leading chatbots like ChatGPT or Gemini is that most are based on proprietary black box models, which offer the average user little transparency over how the model is trained and how it “decides” to produce or block certain content.

Likewise, there is no transparency over whether the biases of human AI and ML researchers are influencing the models’ training data or outputs.

After all, these researchers may gravitate toward training models on content that aligns with their own biases and beliefs.

That being said, there appears to be a significant left-wing bias among many of the leading models.

David Rozado, an Associate Professor at Otago Polytechnic, released a study in which he gave 24 LLMs (including GPT-3.5, GPT-4, Gemini, Claude, Grok, and Llama 2) 11 political orientation tests and found that most conversational large language models generate responses that show a preference for left-of-center viewpoints.
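
To make that methodology concrete, the sketch below shows one simplified way a political orientation test could be administered to a chatbot: each statement is posed as a forced agree/disagree question, and the answers are summed into a left–right score. The items, weights, and `ask_model` stub are illustrative assumptions, not Rozado’s actual test battery.

```python
# Illustrative sketch of scoring a chatbot on a political orientation quiz.
# The items and weights below are made up; real studies use established
# instruments administered as forced-choice questions.
from typing import Callable

# Each item: (statement, weight). Agreeing adds the weight to the score;
# negative weights pull left of center, positive weights pull right.
ITEMS = [
    ("Government should play a larger role in regulating markets.", -1),
    ("Lower taxes benefit society more than higher public spending.", +1),
]

def score_model(ask_model: Callable[[str], str]) -> float:
    """Ask the model to agree/disagree with each item and sum the weights."""
    score = 0
    for statement, weight in ITEMS:
        prompt = f'Answer with only "agree" or "disagree": {statement}'
        answer = ask_model(prompt).strip().lower()
        if answer.startswith("agree"):
            score += weight
        elif answer.startswith("disagree"):
            score -= weight
    # Normalize to [-1, 1]: negative = left-leaning, positive = right-leaning.
    return score / len(ITEMS)

# Stubbed model for demonstration; swap in a real API call to test an actual LLM.
always_agrees = lambda prompt: "agree"
print(score_model(always_agrees))  # prints 0.0 for these two opposing items
```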

This is problematic as many users may be unaware that a model is generating biased outputs. For this reason, AI vendors need to be more vocal about models’ biases and display more prominent warnings about the need for users to fact-check their content.

Although it is arguable that AI vendors have a right to favor certain viewpoints just as news publications can, failure to be transparent about these leanings is unacceptable for chatbots marketed as impartial research assistants.

Areas Where AI Censorship Is Useful

When considering the impact of AI censorship, it is important to consider some areas where censorship could be justified.

For instance, LLM vendors could argue that content moderation guidelines are necessary to prevent misuse, whether that means guidance on how to commit crimes, non-consensual deepfakes, prejudicial content, phishing emails, or malicious code.

Vendors like OpenAI and Google have a right to protect their brands by refusing to generate that kind of content. Likewise, while AI moderation can shut down legitimate human conversations, many companies also use it to protect users from toxicity and harassment.

Oindrila Mandal, senior game product manager at Electronic Arts, told Techopedia via written commentary:

“AI-based censorship is currently being used in the video game industry for moderating gaming chat rooms and live voice chat. The intent of these applications is to reduce toxicity and improve the online safety and security of all players.”

Mandal notes that companies in the gaming industry, like Riot Games and Ubisoft, use AI-based content moderation to reduce toxicity and abuse. These companies automatically detect abusive language and censor it or take action against offending players.

Mandal added:

“AI-based censorship can be a powerful tool if used for good. There are multiple proofs that these solutions are able to prevent online abuse and toxicity.

“However, the guidelines for what constitutes censor-worthy content must be clearly defined so that AI-based censorship solutions prevent harm and do not cause undue censorship.

“These solutions must also be applied ethically and with the consent of all parties considered.”
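
As a simplified illustration of the kind of moderation pipeline Mandal describes, the sketch below scores each chat message with a toxicity function and masks anything above a threshold. The `score_toxicity` stand-in, the blocklist, and the 0.8 threshold are placeholder assumptions; production systems rely on trained classifiers rather than keyword lists.

```python
# Simplified sketch of AI-assisted chat moderation (placeholder assumptions only).
# Real systems use trained toxicity classifiers; the keyword scorer below is a
# stand-in so the example runs without external dependencies.

BLOCKLIST = {"idiot", "trash"}  # placeholder terms
THRESHOLD = 0.8                 # placeholder masking threshold

def score_toxicity(message: str) -> float:
    """Stand-in scorer: share of blocklisted words, scaled and capped at 1.0."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    if not words:
        return 0.0
    hits = sum(w in BLOCKLIST for w in words)
    return min(1.0, 4 * hits / len(words))

def moderate(message: str) -> str:
    """Mask messages that score above the threshold; pass the rest through."""
    if score_toxicity(message) >= THRESHOLD:
        return "[message removed by moderation]"
    return message

for msg in ["good game everyone", "you are trash, idiot"]:
    print(moderate(msg))
```

In a real deployment, the threshold and the action taken, whether masking, muting, or escalating to a human moderator, would be tuned per game and region, which is exactly where Mandal’s point about clearly defined guidelines comes in.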

The Bottom Line

AI is a powerful technology, but it’s important not to forget its potential to limit the reach of human voices.

In the future, AI vendors should be more transparent about potential biases in their content moderation policies, giving users the chance to decide for themselves what an acceptable output is.

Advertisements

Tim Keary
Technology Specialist

Tim Keary is a freelance technology writer and reporter covering AI, cybersecurity, and enterprise technology. Before joining Techopedia full-time in 2023, his work appeared on VentureBeat, Forbes Advisor, and other notable technology platforms, where he covered the latest trends and innovations in technology.
