Why Does Explainable AI Matter Anyway?

KEY TAKEAWAYS

Demands for greater nuance and transparency in the field of AI and machine learning have opened the way for the next wave: explainable AI.

As artificial intelligence (AI) becomes more and more pervasive in our daily lives, making vital decisions about our healthcare, loan repayments, employment, parole, security, entertainment and more, it is important for us to know just how AI makes those decisions.

The demand for transparency in AI has risen after a number of critical failures came to light. Autonomous car accidents, unfair parole denials, gender bias in job recruitment, racial discrimination in image recognition and deeming heavily polluted air safe to breathe have all drawn criticism of the technology. At the same time, the leading biologically-inspired AI approach, deep learning, is known for its opaque "black box" nature, which is hard for humans to interpret.

This opacity arises primarily because the intelligence emerges through complex interactions among millions of computational units, which are not easy to trace. The gap between the demand for transparent AI and the supply of black box AI has given birth to a new AI discipline known as explainable AI (XAI).

XAI is a branch of AI that promises to make predictions explainable without compromising performance. This article sheds light on the importance of XAI and outlines the key types of XAI and the explanations they deliver. (Read also: Why is DARPA researching "explainable AI"?)

Before discussing the advantages of XAI, it helps to understand the kinds of XAI being used and the types of explanations it delivers.

Inherently Explainable AI vs Post-Hoc XAI

XAI is mainly categorized into the following two types:


Inherently Explainable

This type of XAI model is intrinsically self-explanatory. Examples include decision trees and Bayesian classifiers.

The typical disadvantages of this type of AI are its weaker performance and its inability to scale to large, real-world AI problems such as image classification, natural language processing and speech recognition. (Read also: AI's Got Some Explaining to Do.)
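To make the idea concrete, here is a minimal sketch of an inherently explainable model: a shallow scikit-learn decision tree trained on the classic Iris dataset (an illustrative choice, not drawn from this article). The learned rules can be printed directly as human-readable if/else statements, so no separate explainer is needed:

```python
# A minimal sketch of an inherently explainable model: a shallow decision
# tree whose learned rules are directly readable by a human.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# export_text renders the tree as plain if/else rules: the model
# itself is the explanation.
print(export_text(tree, feature_names=data.feature_names))
```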

Post-hoc XAI

This type of XAI, also known as model-agnostic XAI, consists of models that are used to explain the underlying black box AI. Examples include Local Interpretable Model-Agnostic Explanations (LIME) and other perturbation-based XAI models.

The typical disadvantage of this type of XAI is that the explanation may not be faithful to the underlying AI, since it is produced by a different model. The advantage is that there is no need to compromise the AI's performance for explainability.
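As a rough illustration, the sketch below applies LIME to a random forest; the dataset and the choice of black box model are assumptions made for this example, not part of the article. LIME perturbs the input and fits a simple local surrogate around a single prediction:

```python
# A hedged sketch of post-hoc, model-agnostic explanation with LIME
# (pip install lime scikit-learn). The "black box" here is a random forest.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction by fitting a simple surrogate around it. Because
# the surrogate is a different model, its explanation may not be perfectly
# faithful to the black box (the trade-off noted above).
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())  # feature contributions for this single decision
```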

Types of Explainable AI Explanations

  • Local explanation: A local explanation explains a particular decision of an AI model, for example, the logic behind why the AI denied a particular loan application. Typically, it describes the importance of each input feature in making the prediction.
  • Global explanation: A global explanation is a generic account of an AI's decision-making process, typically in terms of which attributes the AI uses and how they combine to deliver a decision. (Local and global explanations are both illustrated in the sketch after this list.)
  • Contrastive explanation: A contrastive explanation describes differences between instances, for example, how two images differ from each other.
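The following sketch contrasts local and global explanations using the SHAP library; the regression model and dataset are illustrative assumptions, not examples from the article:

```python
# A hedged sketch of local vs. global explanations with SHAP
# (pip install shap scikit-learn). TreeExplainer is specific to tree
# ensembles such as this random forest regressor.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(random_state=0).fit(data.data, data.target)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data)  # shape: (n_samples, n_features)

# Local explanation: per-feature attributions for one prediction.
print("local:", dict(zip(data.feature_names, shap_values[0].round(2))))

# Global explanation: mean absolute attribution per feature over the data.
print("global:", dict(zip(data.feature_names, np.abs(shap_values).mean(axis=0).round(2))))
```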

Advantages of XAI

Safeguarding Against Biases

AI learns from data to make predictions. The biases of real-world data and deficiencies in data collection can prejudice AI in many different ways. Typical examples include sampling bias, such as using only daylight videos to train autonomous cars, and association bias, such as the sexist linking of women to the nursing profession or overgeneralized links like seagulls to beaches. Because XAI aims to expose the attributes and decision process behind a prediction, it helps to identify these biases; a simple check along these lines is sketched below. (Read also: Fairness in Machine Learning: Eliminating Data Bias)
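As a hedged illustration (the synthetic data and feature names here are invented for the example), permutation importance, one basic explanation technique, can reveal that a model leans on a sensitive attribute:

```python
# A hedged sketch: use permutation importance to check whether a model
# relies on a sensitive attribute. The data is synthetic and illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)       # hypothetical sensitive attribute
experience = rng.normal(5, 2, n)     # legitimate feature
# A biased label that partly depends on gender:
y = ((experience + 2 * gender + rng.normal(0, 1, n)) > 6).astype(int)
X = np.column_stack([gender, experience])

model = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, imp in zip(["gender", "experience"], result.importances_mean):
    print(f"{name}: {imp:.3f}")  # a large 'gender' score flags possible bias
```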

Making AI Trustworthy

Many argue that the opacity of AI is a major obstacle to its widespread acceptance (especially in safety-critical tasks such as autonomous driving and medical diagnostics) simply because end-users do not fully trust its decisions. In the medical domain, for example, clinicians are reluctant to trust black box AI, even when it achieves better accuracy, because they do not understand how it works. XAI strives to make AI trustworthy by making it transparent and explaining its outcomes.

Complying with Right to Explanation Legislation

Under the regulation of algorithms, AI can be legally bound to give users an explanation of its decisions, especially when those decisions significantly affect them legally or financially. For example, if AI denies a person's loan application, the applicant may ask for an explanation, which could be: "The main factor for rejection is your bankruptcy last year, which makes you more likely to default."

Some examples of right to explanation regulations are the General Data Protection Regulation in the EU, the US Code of Federal Regulations and the Digital Republic Act of France. XAI, therefore, enables AI to comply with the legal requirement to provide an explanation.

Improving System Design

XAI can enable engineers to probe why AI has acted in a particular manner and make improvements. For example, AI can make correct decisions for the wrong reasons (similar to a phenomenon known in psychology as the Clever Hans effect), which must be remedied. This is particularly crucial in safety-critical tasks, where it is necessary to analyze when and how AI can malfunction, even if the error is minor. The explanations provided for this purpose can take different forms as required by users; they might, for example, cover both the training data and the AI algorithm. One common probe, a gradient-based saliency map, is sketched below.
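A minimal PyTorch sketch of such a probe follows; model and image are hypothetical placeholders for any trained image classifier and a normalized input tensor:

```python
# A hedged sketch of gradient-based saliency: which pixels most influenced
# the prediction? `model` and `image` (a 3xHxW tensor) are placeholders.
import torch

def saliency_map(model, image):
    model.eval()
    x = image.clone().unsqueeze(0).requires_grad_(True)
    scores = model(x)
    top_class = scores.argmax(dim=1).item()
    # Gradient of the winning class score with respect to the input pixels.
    scores[0, top_class].backward()
    # Per-pixel importance: max absolute gradient across color channels.
    return x.grad.abs().squeeze(0).max(dim=0).values

# If the bright regions of the map sit on a watermark or the background
# rather than the object itself, the model may be "right for the wrong
# reasons" (a Clever Hans case) and needs fixing.
```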

Making AI Robust Against Adversarial Attacks

The vulnerability of black box AI to adversarial attacks is a matter of concern for many. An "adversarial attack" deceives AI into making an incorrect decision using carefully designed inputs, called adversarial samples. Enabling AI to counter such attacks is important, since a successful attack could be destructive for AI-based security, medical and military systems.

XAI can help make AI robust against adversarial attacks: an adversarial input tends to make the model produce anomalous explanations for its decisions, which can reveal the attack.
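To illustrate the idea (this is not a production defense), the sketch below crafts a one-step FGSM adversarial sample and compares the model's explanation before and after. It reuses the hypothetical saliency_map helper from the earlier sketch, and model, image and label remain placeholders:

```python
# A hedged sketch: generate an FGSM adversarial sample, then measure how
# much the explanation shifts. An anomalously large shift can flag an
# attack. Assumes the saliency_map() helper defined earlier is in scope.
import torch
import torch.nn.functional as F

def fgsm(model, image, label, eps=0.03):
    x = image.clone().unsqueeze(0).requires_grad_(True)
    loss = F.cross_entropy(model(x), torch.tensor([label]))
    loss.backward()
    # One-step attack: nudge every pixel in the direction that raises the loss.
    return (x + eps * x.grad.sign()).squeeze(0).detach()

adv = fgsm(model, image, label)
shift = (saliency_map(model, adv) - saliency_map(model, image)).abs().mean()
print("explanation shift:", shift.item())  # large values suggest an attack
```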

Insightful Data Analysis

AI is widely used for data analysis across organizations and business sectors. XAI-centric data analysis can provide a deeper understanding of the insights produced, helping analysts trace the underlying relationships and identify causes and effects, which saves substantial time and supports informed decisions.

For example, while black box AI might precisely predict a drop in stock sales for the upcoming month, XAI can indicate why sales would drop. Likewise, while typical AI can recommend giving a potential customer a freebie to further business interests, XAI can show your business partners why it would be an effective strategy. (Read also: How Explainable AI Changes the Game in Commercial Insurance.)

Making Scientific Discoveries

AI is already used to make scientific discoveries in various fields such as molecular design, protein structure prediction, chemical synthesis planning and macromolecular target identification. Bridging the gap between AI and other scientific communities is a prerequisite to making such discoveries.

XAI can help cross-domain scientists better understand the AI and enable them to hone their knowledge and beliefs based on how the AI reaches its results.

Dr. Tehseen Zia
Tenured Associate Professor

Dr. Tehseen Zia holds a doctorate and has more than 10 years of post-doctoral research experience in artificial intelligence (AI). He is a tenured associate professor who leads AI research at Comsats University Islamabad and is a co-principal investigator at the National Center of Artificial Intelligence, Pakistan. In the past, he has worked as a research consultant on the European Union-funded AI project Dream4cars.
