Five AI Failures That Left Companies Red-Faced

KEY TAKEAWAYS

In the early days of artificial intelligence, failures and mishaps can be embarrassing, disastrous, and sometimes even hilarious. They can also leave a company red-faced and even cause its valuation to plummet.

Artificial intelligence (AI) is poised to change our world forever as one of the most disruptive technological revolutions of this century.

However, much like with every other human invention, mistakes, mishaps, and unplanned accidents sometimes happen.

While some of these are minor issues that may derail a project for a while or halt it in its early stages of development, others have far more dire consequences.

A bad AI failure can leave a brand red-faced and hurt its reputation – although sometimes this happens in quite a comical way.

Let’s explore a few of these embarrassing, disastrous, and sometimes hilarious AI fails of the last few years:

Receiving the Silent Treatment From Your AI Assistant

A few years ago, back in 2018, AI assistants were still quite a novelty, and they were generating a very profitable new market.


Unsurprisingly, many players jumped on the bandwagon, and LG was one of them. The company launched Cloi, a small talking robot designed to run a smart home through voice commands.

However, she apparently took a dislike to her presenter during her public debut, humbling LG’s US marketing chief David VanderWaal.

After a while, the tiny, cute AI started repeatedly ignoring commands, giving the silent treatment (courtesy of YouTube) to an embarrassed and frustrated VanderWaal.

Maybe Cloi was signaling it was time to take a relationship break.

The (Not So) Tiny Difference Between “Bald” And “Ball”

In October 2020, when the Covid-19 pandemic often meant avoiding the use of human operators, a Scottish soccer club resorted to an automated camera to record a match.

The automated camera worked well for a while, smoothly recording the match between Inverness Caledonian Thistle and Ayr United at the Caledonian Stadium.

However, as the game went on, the camera started mistaking the shiny, bald head of a linesman for the ball itself.

In a hilarious turn of events, it kept depriving viewers of the real action by focusing on the poor man’s head.

We all look forward to a future where soccer clubs enforce a rule mandating hats and wigs for all linesmen and players.

When Facial Recognition Doesn’t Recognize You – At All

According to our most recent research, facial recognition is far from a reliable technology, and its failures can have devastating consequences on people’s lives – even leading to jail time.

However, sometimes these mishaps are particularly embarrassing for the developers of these tools, especially when AI misunderstandings result in unpredictable blunders.

One such case involved the Chinese government itself. In many cities, a method employed to stop people from crossing streets unlawfully is to publicly shame jaywalkers.

Their faces are captured by street cameras and then featured on large displays, together with legal consequences.

In 2018, one such camera captured the face of Dong Mingzhu, a billionaire in charge of China’s largest air-conditioner manufacturer, from an ad on the side of a passing bus. The camera reacted to her face and shamed her even though she wasn’t actually there.

Needless to say, the one shamed the most was the Chinese government. But to keep things fair and balanced, they weren’t the only ones who had to face their own dose of … facial recognition-based embarrassment (pun intended).

That same year, Amazon’s Rekognition surveillance technology incorrectly matched the faces of 28 members of Congress to mugshots of people arrested for crimes.

Maybe AI took those claiming that all politicians are criminals a bit too literally…

Why AI Should Never Replace Your Doctor’s Advice

Another government mortified by a faulty AI was the British government in 2020.

With the Coronavirus pandemic in full swing, the UK health authorities launched CIBot, an AI-powered virtual assistant meant to provide people with useful information about the COVID-19 virus.

The idea was to help the public by providing vital guidance, but the tool didn’t stop at scraping official sources and went a bit too far.

In the end, the bot provided inaccurate information about the severity and transmission modes of the virus and recommended treatments, including inhaling steam. At least we can count ourselves lucky it didn’t end up recommending bleach as a therapy.

When Generative AI Starts Making Stuff Up

Many say that generative AI models are like children, taking their first steps into the world of true self-conscious intelligence.

There are instances, like this one, where that claim sounds exceptionally true. When kids are asked a question they know absolutely nothing about, it’s not uncommon for them to make things up on the spot, either to look good or to make full use of the very limited knowledge of the world they have.

A few months ago, Google’s AI chatbot Bard seemingly made the same mistake, much to the dismay of its own creators. And it did that in its very first demo, too.

When asked the question, “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?” the chatbot provided a bulleted answer that included the claim that the telescope “took the very first pictures of a planet outside of our own solar system.”

Long story short, it didn’t, and some astronomers aptly noted that the first image of an exoplanet was taken nearly 20 years earlier, in 2004.

This “child’s mistake” would not be too bad, except it caused Google’s shares to plummet, losing $100 billion in market value in just one day.

The Bottom Line

While these AI fumbles may not be as terrible as those times when AI went rogue, they can still be the source of significant embarrassment for their companies and developers.

Still, we can’t deny how enjoyable it can sometimes be to watch the absurdities created by inexperienced or faulty generative AI.

The fun doesn’t end there – consider when Google Photos turned a man’s head into a mountain, or when it depicted the majesty of salmon swimming in a river.

As humans, we learn more from failure than success – we can only hope these mistakes can help AI improve at an even quicker pace.


Claudio Buttice
Data Analyst

Dr. Claudio Butticè, Pharm.D., is a former Pharmacy Director who worked for several large public hospitals in Southern Italy, as well as for the humanitarian NGO Emergency. He is now an accomplished book author who has written on topics such as medicine, technology, world poverty, human rights, and science for publishers such as SAGE Publishing, Bloomsbury Publishing, and Mission Bell Media. His latest books are "Universal Health Care" (2019) and "What You Need to Know about Headaches" (2022).A data analyst and freelance journalist as well, many of his articles have been published in magazines such as Cracked, The Elephant, Digital…
