Explainable AI: Building Trust with a Magic Black Box

Explainable artificial intelligence (XAI) is transforming the way organizations adopt and implement AI, both for small, use-case-based projects and for at-scale, organization-wide transformations. As AI/ML and data technologies advance, they are increasingly being integrated into decision-making processes as key drivers of value across every sector and industry.

Despite potential benefits, however, many organizations remain concerned about the transparency and accountability of AI systems. This is especially relevant for IT decision-makers tasked with building and managing AI systems that are trustworthy, reliable, and adherent to ethical standards.

This article explores the importance of trust in AI from different perspectives. We share some examples and use cases of XAI, and offer tips organizations can follow to leverage the potential of explainable AI. 

Whether you are a seasoned expert in the field or just starting to explore the world of AI, this article offers insights into XAI, and how it can help you to make better business decisions.

The significance of trust in AI

AI is no longer just theoretical speculation — it is rapidly becoming a practical and widely applied technology that impacts individuals and society on a grand scale. AI systems are already being used in healthcare to diagnose diseases, in finance to make investment decisions, and in criminal justice to determine sentencing. In these and other cases, the stakes are high, making it essential that decisions made by AI are accurate, fair, and transparent; i.e., trustworthy.

Without trust, organizations may be reluctant to use AI systems. Lack of trust could limit AI’s potential benefits, as well as slow the pace of AI/ML adoption and innovation. In fact, about two-thirds of CEOs see building trust as a “top three” priority for their company, while only one-third trust the insights and recommendations derived from AI and analytics.

Lack of trust comes from a variety of places, but you can think of it like this:

AI systems are designed and built by humans, and they reflect the biases and limitations of their creators. The inner workings of AI systems are often inscrutable, making it difficult to understand why they arrive at certain results. Until AI systems can operate free of human bias and provide transparent and verifiable results, they cannot be fully trusted.

Explainable AI can provide the required transparency and verifiability, while enabling human observers to account for their own biases. By providing clear and interpretable information about AI’s decisions, XAI helps humans confirm that AI systems are reasonable in what they recommend and do not perpetuate existing social and economic inequalities.

Note that AI systems lack the human ability to understand context and make ethical judgments, which could lead to harmful or unintended consequences. Thus, at this point, human controls enabled by XAI are a must-have for any responsible business, organization, or entity.

Trust in AI: Engineering perspective

For engineers, it is paramount to understand how to build trust both in and into ML models. That trust rests on two pillars: model evaluation and verification against production data.

During the model evaluation phase, engineers should check metrics such as accuracy and precision to ensure that the model is performing well. Bear in mind that the selection of metrics is not always obvious and can vary depending on business requirements and the problem at hand.
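
As a rough illustration of this step, here is a minimal evaluation sketch using scikit-learn on synthetic data; the dataset, model, and metric choices are assumptions made for the example, not a prescribed setup.

```python
# Minimal evaluation sketch (illustrative data and model, not a real pipeline).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced data stands in for the real training set.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8, 0.2], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
y_pred = model.predict(X_test)

# On an imbalanced problem, accuracy alone can be misleading,
# so precision, recall, and F1 are reported alongside it.
print(f"accuracy:  {accuracy_score(y_test, y_pred):.3f}")
print(f"precision: {precision_score(y_test, y_pred):.3f}")
print(f"recall:    {recall_score(y_test, y_pred):.3f}")
print(f"f1:        {f1_score(y_test, y_pred):.3f}")
```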

While training metrics are important, they are not enough on their own to ensure trust in a model. Engineers should also be able to verify the model’s results on production data, as the model may perform differently on new data. Model performance issues can be surfaced with anomaly detection algorithms; however, such algorithms usually output only abstract numbers without any coherent explanation, which makes them difficult for business stakeholders to interpret.
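
As a rough sketch of what such monitoring can look like, the example below scores a simulated production batch against reference data with scikit-learn’s IsolationForest; the data, model, and drift are invented for illustration. Note that the output is exactly the kind of abstract number described above.

```python
# Illustrative drift check: an anomaly detector scores a production batch
# against reference data. All numbers here are simulated.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=(5000, 5))   # e.g. feature stats at training time
production = rng.normal(loc=0.6, scale=1.3, size=(500, 5))   # a drifted production batch

detector = IsolationForest(random_state=0).fit(reference)
flags = detector.predict(production)             # -1 = anomalous, 1 = normal
scores = detector.decision_function(production)  # lower = more anomalous

# The detector says *that* something looks off, not *why* --
# which is exactly the gap XAI methods are meant to fill.
print(f"share of batch flagged as anomalous: {(flags == -1).mean():.1%}")
print(f"mean anomaly score: {scores.mean():.3f}")
```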

This issue can be addressed with explainable AI methods. Black-box explainability algorithms, such as SHAP, can provide insight into the model’s decision-making process, yet their outputs are often too complex for business users to understand.
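
For illustration, a minimal SHAP sketch might look like the following; it assumes the shap package is installed and uses a toy regression model rather than any particular production system.

```python
# Illustrative SHAP usage on a toy regression model (requires the `shap` package).
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=1000, n_features=10, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])  # one contribution per feature per prediction

# Global summary: which features push predictions up or down across the sample.
shap.summary_plot(shap_values, X[:100])
```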

Simpler models, such as decision trees, can be used instead to build trust in the model’s results. These models are easily interpretable and can present the model’s decision-making process as a logical, illustrative flow.
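
As an illustration, the sketch below trains a shallow decision tree and prints it as readable if/else rules with scikit-learn’s export_text; the dataset and depth limit are illustrative choices.

```python
# Illustrative interpretable model: a shallow tree printed as if/else rules.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# export_text renders the tree as nested rules that a non-technical
# stakeholder can follow step by step.
print(export_text(tree, feature_names=list(data.feature_names)))
```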

Trust in AI: Business perspective

For business stakeholders, trust in ML models comes down to their ability to quickly and easily evaluate the results of the model’s work without having to dive into the technical specifics of the system.

Business users need to be able to understand the model’s results in order to trust them; however, this can be challenging when those results are either too abstract or hard to access.

There are several major methods for explaining AI/ML results to business stakeholders:

  1. Data visualization. Presenting data in a visually appealing and easy-to-understand format, such as charts, graphs, and dashboards, can help business users, especially those in BI and analytics departments, quickly grasp the key insights and results generated by AI (see the charting sketch after this list).
  2. Narrative explanation. Providing a concise explanation of the model’s findings in layman’s terms can help stakeholders understand how results were obtained and why they are important. Any explanation should be supported by actual data, not theoretical speculation.
  3. Interactive demos. Providing interactive demonstrations of AI systems in action can help stakeholders to understand AI’s capabilities and limitations, and provide a more immersive and memorable experience.
  4. Performance metrics. Highlighting KPIs, such as accuracy, recall, precision, and F1 scores (and combining them with business metrics), can help stakeholders evaluate the model’s performance and see the value it brings to the organization.
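
As a simple illustration of the first method, the sketch below turns a handful of evaluation metrics into a stakeholder-facing chart with matplotlib; the metric values are placeholders rather than real results.

```python
# Illustrative stakeholder-facing chart; the metric values are placeholders.
import matplotlib.pyplot as plt

metrics = {"Accuracy": 0.91, "Precision": 0.84, "Recall": 0.78, "F1": 0.81}

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(list(metrics.keys()), list(metrics.values()))
ax.set_xlim(0, 1)
ax.set_xlabel("Score")
ax.set_title("Model quality at a glance")
for i, value in enumerate(metrics.values()):
    ax.text(value + 0.01, i, f"{value:.2f}", va="center")
plt.tight_layout()
plt.show()
```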

Regardless of the method chosen, it is important to tailor the explanation to the audience and their level of technical understanding, and to present results in a clear and concise manner that highlights their significance and potential impact on the business.

Explainable AI use cases

To illustrate the importance of building trust in ML models, let’s examine a few use cases of these techniques in action.

  1. ChatGPT. This language model with an advanced conversational UX appeals to both end users and stakeholders. It lets users build trust by requesting explanations, additional details, and the specific reasons an answer was derived. With incremental outputs and the ability to specify the level of detail, ChatGPT works well as an XAI engine, yet its conversational UX is not simple to customize and implement, especially for non-interactive use cases.
  2. Music recommendation model. It can be difficult to explain the importance of metrics such as Precision@K and Coverage to non-technical stakeholders, who care more about relevance: a classic rock fan, for example, does not want to hear a track geared to children. Evaluation metrics are a hard sell when results are based on complex algorithms that generate playlists for users. In this case, an interactive demo with a deep dive into the model’s outputs can help stakeholders understand the model’s results and its reasoning, and ultimately build trust in the model.
  3. Metrics estimation model. Another example is a product metric calculation system, where the model’s results are used to make important decisions about marketing strategies and budget allocation. The system returns a specific number with a confidence score that can be hard for stakeholders to understand and trust. In this case, providing more context and helping stakeholders recreate the calculations can help to build trust in the model’s results and improve the decision-making process.

The road to explainable AI: From trust to action

In the realm of AI and data, establishing trust is paramount. To ensure that AI systems and the machine learning models that power them are employed in an ethical, fair, and transparent manner, it is imperative for engineers and business stakeholders alike to understand the mechanisms for building trust in AI.

From an engineering perspective, building trust in AI can be achieved through consistent monitoring of KPIs during the evaluation phase and rigorous testing of ML models against production data. Meanwhile, business users can nurture trust in AI through interactive demos and deep dives into the model’s outputs, thereby fostering a better understanding and acceptance of the model’s results.

Given the significance of trust in the deployment of AI systems and ML models, organizations must consider user experience when developing and implementing AI solutions. This may require the provision of additional context from the training data, as well as XAI methods, including conversational user interfaces like ChatGPT, to help users understand and make sense of the model’s outputs.

The development of explainable AI is critical to the continued growth and adoption of AI. XAI has the potential to mitigate risks associated with biased algorithms and increase transparency and accountability in data-driven decision-making processes. Organizations must prioritize investments in explainable AI to not only build trust with customers, but also to ensure that AI systems align with their values and ethical principles. The future of AI depends on it.

Bulat Lutfullin
ML Product Lead at Provectus

Bulat Lutfullin is an AI/ML Product Lead at Provectus. With a primary focus on ML Infrastructure and MLOps, Bulat helps organizations in various industries to deliver high-performing AI and ML products into production environments and support ML operations on a large scale. As a Product Lead, Bulat is responsible for Crystal Engine, an ML-driven conversion and churn prediction engine.