
Crucial Concepts in AI: Transparency and Explainability

Lori MacVittie
Published July 16, 2024

Just about every survey seeking to understand industry concerns with respect to AI tosses everything under the header of “security.” From concerns over sensitive data leakage to hallucinations and bias, from exploitation via prompt injection to transparency and explainability, it seems everything is the responsibility of security when it comes to AI.

Every one of these concerns is valid and important, but they are all very different and most of them are not the responsibility of security.

Today we’re going to dive into transparency and explainability, both of which are crucial concepts to understand and put into practice when using AI within your business. That’s not just because they establish trust in the system and its outcomes; both also support troubleshooting and debugging, especially during development.

Transparency and Explainability

Transparency and explainability are critical concepts in general but especially applicable to AI given that most practitioners—even within IT—are unfamiliar with how these systems work. Both concepts are often discussed in the context of ethical AI, responsible AI, and AI governance. Though they are closely related, they have distinct meanings and serve different purposes in understanding and governing AI systems.

Transparency focuses on providing general information about the AI system to a broad audience, including stakeholders and the public. Explainability is more specific: it seeks to explain individual decisions or outcomes to the users, developers, and stakeholders who need to understand the system’s behavior.

Transparency is focused on promoting trust in the system, while explainability is concerned with establishing trust in specific outputs. To accomplish this, transparency and explainability focus on different elements.

Transparency: Cite your sources

Transparency in AI refers to the degree to which information about an AI system's design, operation, and decision-making processes is open, accessible, and understandable to stakeholders. It emphasizes clear communication and visibility into how AI systems work, allowing stakeholders to understand various aspects of the system.

Key elements of AI transparency include:

  • Design and Development: Transparency involves sharing information about the design, architecture, and training processes of AI systems, including the types of data used and the algorithms and models implemented. This is similar to financial services disclosures, in which providers explain what data and weights go into determining your eligibility for a mortgage or your FICO score from a credit reporting agency.
  • Data and Inputs: Transparency involves being clear about the sources and types of data used to train and operate the AI system. It also encompasses disclosing any data preprocessing, transformation, or augmentation applied to the input data. This type of information is similar to data collection statements, where businesses tell you what data they collect and store, and with whom they might share it.
  • Governance and Accountability: Providing information about who is responsible for the AI system's development, deployment, and governance. This helps stakeholders understand the accountability structure. (A sketch of what such a disclosure might look like follows this list.)
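
To make this concrete, here is a minimal sketch of what a machine-readable transparency disclosure (similar in spirit to a “model card”) might look like. The system name, fields, and values are hypothetical and exist only to illustrate the kinds of design, data, and governance information described above; this is not a formal standard.

    import json

    # A minimal, illustrative "transparency disclosure" for a hypothetical AI
    # system. Field names and values are examples, not a formal standard; the
    # point is that design, data, and governance details are recorded in a
    # form any stakeholder can read.
    transparency_disclosure = {
        "system": {
            "name": "loan-eligibility-assistant",   # hypothetical system
            "version": "1.2.0",
            "model_family": "gradient-boosted decision trees",
            "intended_use": "Pre-screen mortgage applications for human review.",
        },
        "data_and_inputs": {
            "training_sources": ["internal loan history, 2015-2023"],
            "preprocessing": ["normalized income", "imputed missing employment length"],
            "excluded_features": ["race", "religion", "marital status"],
        },
        "governance": {
            "owner": "Consumer Lending Analytics team",
            "accountable_executive": "VP, Risk",
            "review_cadence": "quarterly",
        },
    }

    if __name__ == "__main__":
        # Publish the disclosure in a human- and machine-readable form.
        print(json.dumps(transparency_disclosure, indent=2))

Publishing this kind of document alongside the system gives stakeholders the “what went into it and who owns it” view that transparency is after, without requiring them to read the model itself.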

Explainability: Show your work

Explainability in AI refers to the ability to provide understandable reasons or justifications for a system’s decisions, outputs, or behavior. It emphasizes explaining why a particular decision was made, with the goal of making AI outcomes understandable to users and stakeholders.

Key elements of AI explainability include:

  • Decision Justification: Explainability involves detailing the factors and logic that led to a specific decision or output. It answers the questions: "Why did the AI make this decision?" and "What influenced this outcome?" This is akin to doing a proof in geometry; you need to rely on axioms such as betweenness, congruence, and parallel lines to explain your output. In other words, if the AI decides that 2+2=5, it must demonstrate a valid reason for that decision, such as relying on an alternate mathematical system or using the equation as a literary device. (A sketch of this kind of justification follows this list.)
  • Model Interpretability: Explainability requires making AI models interpretable so that stakeholders can understand the underlying mechanics of how decisions are made. For example, not everyone understands calculus, so an explanation in the form of a complex equation is not sufficient. There’s a fair bit of difference between the way a generative adversarial network (GAN) and a convolutional neural network (CNN) work, so disclosing which architectural approach is in use is an important part of interpretability.
  • Human Comprehensibility: The explanation must be in a format that is easily understood by humans, including non-experts. This requires presenting complex AI operations in a simple, clear manner. You can’t present the explanation in hexadecimal or as raw code; you’ll need something readable by all stakeholders, including legal, compliance, and engineering teams.
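
As a concrete illustration of decision justification and human comprehensibility, here is a minimal sketch built around a deliberately simple, hand-weighted scoring model. The features, weights, and threshold are hypothetical; production systems typically need dedicated interpretability techniques (feature attribution, surrogate models, and so on), but the goal is the same: every output ships with a readable “why.”

    # A minimal sketch of per-decision explainability. The model, features,
    # weights, and threshold are hypothetical; the point is that each output
    # is accompanied by a plain-language justification.

    WEIGHTS = {                      # human-auditable weights (illustrative)
        "income_to_debt_ratio": 2.0,
        "years_employed": 0.5,
        "missed_payments": -1.5,
    }
    THRESHOLD = 3.0                  # illustrative approval threshold


    def explain_decision(applicant: dict) -> str:
        """Score an applicant and return a human-readable justification."""
        contributions = {
            feature: WEIGHTS[feature] * applicant[feature] for feature in WEIGHTS
        }
        score = sum(contributions.values())
        decision = "approved" if score >= THRESHOLD else "declined"

        # Rank factors by how strongly they pushed the decision either way.
        ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
        reasons = ", ".join(f"{name} contributed {value:+.1f}" for name, value in ranked)

        return (
            f"Application {decision} (score {score:.1f} vs. threshold {THRESHOLD}). "
            f"Factors, strongest first: {reasons}."
        )


    if __name__ == "__main__":
        print(explain_decision(
            {"income_to_debt_ratio": 1.8, "years_employed": 4, "missed_payments": 1}
        ))

The output reads something like “Application approved (score 4.1 vs. threshold 3.0). Factors, strongest first: income_to_debt_ratio contributed +3.6, …”, which is the level at which non-expert stakeholders can judge whether the system’s reasoning is acceptable.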

Building Trust in AI

Every new technology requires time to establish trust. Fifteen years ago, no one trusted auto-scaling for critical applications, yet today it’s expected as a foundational capability. Automation of any kind, whether it’s solving complex mathematical problems, driving your car, or paying your bills, takes time for users to trust. Transparency about the process and explanations of how the system works can go a long way toward closing the gap between introduction and adoption.

Transparency provides a broad view of the AI system's workings, while explainability delves into the reasons behind specific decisions or outcomes. Both are critical if AI is to succeed and businesses are to realize its benefits: better customer service, improved productivity, and faster decision making.

And neither is the purview of security.