Just about every survey seeking to understand industry concerns about AI tosses everything under the heading of “security.” From sensitive data leakage to hallucinations and bias, from exploitation via prompt injection to transparency and explainability, it seems everything is the responsibility of security when it comes to AI.
Every one of these concerns is valid and important, but they are very different from one another, and most are not the responsibility of security.
Today we’re going to dive into transparency and explainability, both of which are crucial concepts to understand and put into practice when using AI within your business. That’s not just because they establish trust in the system and its outcomes; both also support troubleshooting and debugging, especially during development.
Transparency and explainability are critical concepts in general but especially applicable to AI given that most practitioners—even within IT—are unfamiliar with how these systems work. Both concepts are often discussed in the context of ethical AI, responsible AI, and AI governance. Though they are closely related, they have distinct meanings and serve different purposes in understanding and governing AI systems.
Transparency focuses on providing general information about the AI system to a broad audience, including stakeholders and the public. Explainability is more specific: it seeks to clarify individual decisions or outcomes for the users, developers, and stakeholders who need to understand the system's behavior.
Transparency is focused on promoting trust in the system as a whole, while explainability is concerned with establishing trust in specific outputs. To accomplish these goals, each focuses on different elements.
Transparency in AI refers to the degree to which information about an AI system's design, operation, and decision-making processes is open, accessible, and understandable to stakeholders. It emphasizes clear communication and visibility into how AI systems work, allowing stakeholders to understand various aspects of the system.
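To make this concrete, here is a minimal sketch of what system-level transparency can look like in practice: a model-card-style record that documents a system's data, intended use, and known limitations, published alongside the system itself. The field names, the loan-approval scenario, and all values are illustrative assumptions, not a formal standard or any vendor's schema.

```python
import json

# Illustrative "model card" style record; field names and values are
# hypothetical examples, not a formal standard.
model_card = {
    "model_name": "loan-approval-classifier",  # hypothetical system
    "version": "1.2.0",
    "intended_use": "Pre-screening of consumer loan applications",
    "out_of_scope_uses": ["Commercial lending", "Credit line increases"],
    "training_data": {
        "source": "Internal loan applications (illustrative)",
        "known_gaps": ["Limited coverage of applicants with thin credit files"],
    },
    "evaluation_summary": {
        "last_reviewed": "2024-06-01",        # placeholder date
        "fairness_review_completed": True,
    },
    "limitations": ["Performance degrades on incomplete applications"],
    "human_oversight": "All automated denials are reviewed by a loan officer",
}

# Publishing a record like this alongside the system gives stakeholders
# visibility into how it was built and where it should (and should not) be used.
print(json.dumps(model_card, indent=2))
```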
Key elements of AI transparency include:
Explainability in AI refers to the ability to provide understandable reasons or justifications for the system's decisions, outputs, or behavior. It emphasizes explaining why a particular decision was made, focusing on making AI outcomes understandable to users and stakeholders.
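As a concrete illustration, the sketch below walks the decision path of a small decision tree for a hypothetical loan-approval model, turning a single prediction into a plain-language justification. The model, feature names, and data are invented purely for illustration; real systems would use richer data and more sophisticated explanation techniques.

```python
from sklearn.tree import DecisionTreeClassifier
import numpy as np

# Hypothetical features and training data, invented for illustration.
feature_names = ["income", "debt_ratio", "years_of_credit_history"]
X = np.array([
    [55_000, 0.42, 3],
    [82_000, 0.18, 9],
    [23_000, 0.61, 1],
    [67_000, 0.35, 6],
])
y = np.array([0, 1, 0, 1])  # 1 = approved, 0 = denied

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

applicant = np.array([[48_000, 0.50, 2]])
decision = model.predict(applicant)[0]

# Walk the path the tree took for this single applicant so the outcome
# can be justified in plain language rather than just reported.
path = model.decision_path(applicant)
feature = model.tree_.feature
threshold = model.tree_.threshold

print("Decision:", "approved" if decision == 1 else "denied")
for node_id in path.indices[:-1]:  # the final node is a leaf with no test
    name = feature_names[feature[node_id]]
    value = applicant[0, feature[node_id]]
    comparison = "<=" if value <= threshold[node_id] else ">"
    print(f"  because {name} = {value} {comparison} {threshold[node_id]:.2f}")
```

The output pairs the decision with the specific conditions that produced it, which is the essence of per-decision explainability: a user or reviewer can see not just what the system decided, but why.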
Key elements of AI explainability include:
Every new technology requires time to establish trust. Fifteen years ago, no one trusted auto-scaling for critical applications, yet today it’s expected as a foundational capability. Automation of any kind, whether it’s solving complex mathematical problems, driving your car, or paying your bills, takes time for users to trust. Transparency about the process and explanations of how the system works can go a long way toward closing the gap between introduction and adoption.
Transparency provides a broad view of the AI system's workings, while explainability delves into the reasons behind specific decisions or outcomes. Both are critical for AI to succeed and for business to realize its benefits: better customer service, improved productivity, and faster decision making.
And neither is the purview of security.