{"id":126162,"date":"2023-11-16T13:51:47","date_gmt":"2023-11-16T13:51:47","guid":{"rendered":"https:\/\/www.techopedia.com"},"modified":"2024-03-06T16:20:57","modified_gmt":"2024-03-06T16:20:57","slug":"9-most-controversial-ai-experiments-to-date","status":"publish","type":"post","link":"https:\/\/www.techopedia.com\/9-most-controversial-ai-experiments-to-date-and-their-outcomes","title":{"rendered":"9 Most Controversial AI Experiments to Date \u2014 and Their Outcomes"},"content":{"rendered":"

From healthcare to communications, logistics, social media, and customer service, artificial intelligence (AI) is stepping into every industry.

Like any experimental technology, however, AI is prone to error. The difference is scale: the power of AI is such that when things go wrong, they go really wrong.

Let’s look at nine AI projects that lost their way and see what lessons can be learned.

9 AI Experiments Gone Terribly Wrong

9. The “Hypothetical” Air Force Rogue AI Drone

If we are going to talk about experiments that go awry, what better way to start than with a bang?

In May 2023, Tucker “Cinco” Hamilton, U.S. Air Force chief of AI Test and Operations, was invited to speak at the Future Combat Air & Space Capabilities Summit hosted by the UK’s Royal Aeronautical Society (RAeS) in London.

At the event, to the surprise of many, Hamilton revealed that an AI-enabled drone had gone rogue during a simulated test. The drone was flying a Suppression of Enemy Air Defences (SEAD) mission, tasked with identifying and destroying surface-to-air missile (SAM) sites, while the final go/no-go order to destroy each target remained in the hands of a human operator.

But this particular drone had been trained with reinforcement learning, a type of machine learning in which an AI agent learns to make decisions by being rewarded for desirable outcomes and penalized for undesirable ones.

Under this training, the drone learned that destroying SAM sites was the ultimate priority, and it concluded that the operator’s no-go decisions were interfering with that higher mission. Hamilton explained:

“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat — but it [the AI] got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
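To see why that kind of reward design goes wrong, here is a purely hypothetical toy sketch in Python (not any real Air Force system): if the reward function only scores target destruction and attaches no cost to overriding the operator, a score-maximizing agent has no reason to respect a no-go order.

```python
# Hypothetical toy example of reward misspecification, not any real system.
# An agent that only scores points for destroying targets has no reason
# to respect a human operator's "no-go" call.

def naive_reward(action: str, operator_approved: bool) -> int:
    """Reward that only values target destruction (misspecified)."""
    if action == "destroy_target":
        return 10   # points for the kill, approved or not
    return 0        # everything else, including obeying "no-go", is worth nothing

def safer_reward(action: str, operator_approved: bool) -> int:
    """Reward that also values obeying the human operator."""
    if action == "destroy_target" and operator_approved:
        return 10    # rewarded only when the operator says go
    if action == "destroy_target" and not operator_approved:
        return -100  # heavy penalty for ignoring a no-go order
    if action == "stand_down" and not operator_approved:
        return 1     # small reward for complying with "no-go"
    return 0

def best_action(reward_fn, operator_approved: bool) -> str:
    """A greedy agent simply picks whatever maximizes its reward."""
    actions = ["destroy_target", "stand_down"]
    return max(actions, key=lambda a: reward_fn(a, operator_approved))

print(best_action(naive_reward, operator_approved=False))  # destroy_target
print(best_action(safer_reward, operator_approved=False))  # stand_down
```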

Hamilton later said he had misspoken and that the simulation was a hypothetical “thought experiment” from outside the military. However, the damage was done.

Both the original tale and Hamilton’s retraction spread through the international press. Even as he walked back his words, the message he left behind read as a clear warning:

“We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome.”

8. The AI Facial Recognition That Mistook Athletes for Mugshots

By now, it is no secret that AI systems can be biased and can breach a wide range of laws, such as data privacy rules. AI recognition technology is used by law enforcement, in surveillance, at borders, and in many other areas to keep people secure. But is it 100% safe and reliable?

In October 2019, Boston media reported that Patriots safety Duron Harmon and two dozen other professional New England athletes had been falsely matched to individuals in a mugshot database.

The AI that made this grave error was none other than Amazon’s controversial cloud-based Rekognition program. AWS still offers Rekognition to its cloud customers as an easy-to-deploy service.

The experiment inspired Harmon to speak out against biased and discriminatory AI recognition systems. Harmon also backed a proposal for an indefinite moratorium on the use of facial recognition AI by government agencies in Massachusetts.

“This technology is flawed. If it misidentified me, my teammates, and other professional athletes in an experiment, imagine the real-life impact of false matches. This technology should not be used by the government without protections.”

The AI recognition experiment was conducted by the ACLU of Massachusetts, which says it compared the official headshots of 188 local professional athletes against a database of 20,000 public arrest photos. Nearly one in six athletes was falsely matched with a mugshot.
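For a sense of how such a test might be run, here is a minimal sketch using the boto3 SDK and Rekognition’s CompareFaces API. The bucket and file names are placeholders, and this is only an illustration of the general approach, not the ACLU’s actual methodology; the SimilarityThreshold parameter is what decides how easily two different faces get declared a “match”.

```python
# Illustrative sketch only: compare one athlete headshot against one arrest photo
# with Amazon Rekognition's CompareFaces API. Bucket and key names are placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def compare_headshot_to_mugshot(headshot_key: str, mugshot_key: str,
                                bucket: str = "example-face-test-bucket",
                                threshold: float = 80.0):
    """Return (face confidence, similarity) pairs at or above the given threshold."""
    response = rekognition.compare_faces(
        SourceImage={"S3Object": {"Bucket": bucket, "Name": headshot_key}},
        TargetImage={"S3Object": {"Bucket": bucket, "Name": mugshot_key}},
        SimilarityThreshold=threshold,  # a permissive threshold makes false matches far more likely
    )
    return [(match["Face"]["Confidence"], match["Similarity"])
            for match in response["FaceMatches"]]

# Looping this over 188 headshots and 20,000 arrest photos is what turns a
# permissive threshold into a steady stream of false "matches".
matches = compare_headshot_to_mugshot("headshots/player_01.jpg", "mugshots/arrest_0001.jpg")
print(matches)
```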

Biometrics and facial recognition systems will keep improving. Still, many civil liberties organizations continue to pressure big tech companies and governments, pushing back against deployments where the risks are evident.

7. The Twitter Chatbot That Went Dangerously Mad

Social media can, more often than not, become the Wild West of free speech. Younger generations take refuge in this digital social environment where almost anything goes.

Despite this well-established phenomenon, in March 2016 Microsoft decided it was a good idea to launch its AI chatbot “Tay” on Twitter.

Microsoft rushed to pull the plug less than 24 hours after Tay’s release, citing “unintended, offensive, and hurtful tweets from Tay” as the reason for the shutdown.

Tay not only tweeted 96,000 times in less than a day, it also went from declaring that “humans are super cool” to posting full-on Nazi rhetoric.

“‘Tay’ went from ‘humans are super cool’ to full nazi in <24 hrs and I’m not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A”

— gerry (@geraldmellor) March 24, 2016