Artificial intelligence (AI) has garnered a reputation for being perfect. It gathers up all available data, analyzes it in the blink of an eye, and arrives at the correct solution every time. Those with actual experience, however, know the truth: AI is wrong far more often than it is right. In fact, it has an extraordinarily high failure rate for a technology that is supposedly going to take over the world.
For those enterprises at the forefront of the AI revolution, failure is part of the learning curve – or, at least, it should be if they hope to see an eventual return on their investment. Only by learning from its mistakes can AI be retrained to avoid them in future iterations.
And even though this may take many cycles, the prize at the end is a more streamlined, less costly process.
Flawed Artificial Intelligence
How badly does AI perform? According to Ronald Schmelzer, principal analyst at AI research firm Cognilytica, the failure rate of AI projects is around 80%. Not all of this is AI’s fault, however. In many cases, flaws in design and methodology are to blame, meaning that the humans doing the training are not fully up to speed on the intricacies of AI development.
In most cases, this is due to an entrenched misunderstanding of how AI changes the development process from the ground up. In short, AI projects are not the same as traditional app development projects, and they should not be treated as such. While applications are built around functionality, AI projects are built around data. This is a crucial difference: an AI system must first gain insight from the available data before it can determine a course of action, rather than waiting for the right data to arrive so it can execute a pre-defined, pre-coded function.
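To make the contrast concrete, consider a minimal sketch in Python – the churn scenario, feature names, and threshold here are purely hypothetical and not drawn from any particular project. A traditional application encodes its decision rule before any data arrives; an AI project has to learn its rule from whatever data is available.

```python
# Illustrative sketch only: contrasts a pre-coded rule with a data-driven model.
# The churn example, features, and threshold are hypothetical.
from sklearn.linear_model import LogisticRegression

# Traditional app development: the logic is fixed before any data arrives.
def flag_at_risk_customer(months_inactive: int) -> bool:
    return months_inactive > 3  # pre-defined, pre-coded rule

# AI development: the "logic" is whatever pattern the available data supports.
def train_churn_model(X, y):
    model = LogisticRegression()
    model.fit(X, y)   # insight must be extracted from the data first...
    return model      # ...before the system can decide anything

# Toy data: (months_inactive, support_tickets) and whether the customer churned
X = [[1, 0], [2, 1], [5, 3], [7, 2], [0, 0], [6, 4]]
y = [0, 0, 1, 1, 0, 1]
model = train_churn_model(X, y)

print(flag_at_risk_customer(5))    # answer from the hand-written rule
print(model.predict([[5, 3]]))     # answer learned from the data
```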
Even without this basic misunderstanding, AI models still fail to meet their objectives. In these cases, say Sasanka Chanda of the Indian Institute of Management and Debarg Banerjee of California's MachinAnumus Consulting, the failure can usually be traced to one of two causes somewhere in the training process (illustrated in the sketch after this list):
- Commission – something was done that should not have been done;
- Omission – something that should have been done was not done.
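To see the difference in practice, here is a hypothetical data-preparation snippet – the dataset and column names are invented for illustration only. The commission error is a step that should never have been applied; the omission error is a necessary step that was never applied at all.

```python
# Hypothetical illustration of commission vs. omission errors during training prep.
import pandas as pd

raw = pd.DataFrame({
    "age":    [34, 51, None, 29],
    "income": [42_000, 88_000, 61_000, 37_000],
    "label":  [0, 1, 1, 0],
})

# Error of commission: doing something that should not be done --
# the label leaks into the feature set, so the model simply memorizes the answer.
features_bad = raw[["age", "income", "label"]]

# Error of omission: failing to do something that should be done --
# missing ages are never imputed or dropped, so training later fails
# or silently skews the model.
features_incomplete = raw[["age", "income"]]

# Corrected preparation: no leaked label, missing values handled explicitly.
features_ok = raw[["age", "income"]].fillna(raw["age"].median())
labels = raw["label"]
print(features_ok)
```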
So, What Is the Solution?
Identifying and correcting these deficiencies is the first step toward eventual success. In most cases, they reside in the inputs that make up the various representations of data (sensors, manual inputs, etc.), in the processing logic, or in a set of available actions that is inadequate for the desired task.
Eventually, experience with AI will reach a level of sophistication at which standards for design, development, and deployment shorten the learning curve for both man and machine, leading to a far greater rate of success for all forms of AI.
Too Eager to Please
On a more esoteric level, AI suffers from a number of ingrained flaws that often produce incorrect or nonsensical results. For one, says PC Magazine’s K. Thor Jensen, most models are too eager to please – that is, they produce a response to a query simply because you asked for one. To Jensen, this is like a dog trying to make its master happy by dropping a dead raccoon on the porch.
Another issue is that while most models have access to massive amounts of data, much of it is out of date. Fresh data is often wrong or, at the very least, lacks context, but older data tends to lose relevance, and updating AI models to properly ingest and weigh new data is a long and difficult process.
So the older an AI model is, the less reliable it becomes.
Also problematic is the well-known issue of bias, which arises when a model is not exposed to properly vetted data. Some models have even shown that they can deliberately lie, especially if they are not equipped with a feedback mechanism to correct their misinterpretations.
All of this makes AI untrustworthy and unaccountable for its actions, which means the enterprise might want to hold off on giving it any real responsibility until it has proven itself to be more reliable.
Poor Expectations
The impression of AI as an omniscient, omnipotent entity also leads to a problem of a different sort, say S Mo Jones-Jang of Boston College’s Department of Communication and Yong Jin Park of Howard University’s School of Communication. The more AI behaves in ways that are inconsistent with human expectations, the greater the frustration and the mistrust in its ability to effectively solve problems.
Ultimately, this leads to a new understanding of AI’s fallibility, which results in a more accurate assessment of its strengths and weaknesses. But the longer it takes to achieve this equilibrium, the longer organizations will struggle to produce an effective intelligent environment.
Part of this problem lies in the preconceived notions that people have of machinery in general. Cars go where and when we tell them to go, and irons get hot when we turn them on. As such, first encounters with AI tend to evoke this expectation of AI as just another machine. In reality, however, AI has far greater cognitive ability than mere machinery, which allows it to interpret data in a wide variety of ways and, ultimately, to make decisions that may not conform to human analysis.
In this light, training humans to understand AI is just as important as training AI to understand humans.
The Bottom Line
AI is most definitely going to change the way we live and work and will most likely be the difference between success and failure in the emerging digital economy. But just like human intelligence, it, too, is flawed.
The sooner we acknowledge this fact, the sooner we can put AI to productive use – and hopefully avoid the negative consequences that are producing so much angst at the moment.