Happy 2022! Before this year gets as busy as the last one, I’d like to take the time to share my insights on a topic I worked on last year: Explainable AI and its connection to UX. This post was triggered by a 5-day research clinic organized by the Alexander von Humboldt Institute for Internet and Society (HIIG), where I had the pleasure of joining a group of experts in technology, law and design in September 2021.
For anyone who wants to dive deeper into the academic and technological foundations, check out the resources section at the end of this article. For my personal take on this topic, here’s a broad definition I will stick with for now:
As an Information Architect at heart, I must say I’m quite excited about this topic, so let’s dive right in.
Why do we need to understand AI anyway?
Weak AI is all around us. Look at this primer of 29 interactions for AI by Chris Noessel and you get a good grasp of the building blocks of how AI (mostly machine learning) is part of today’s digital products and services (a small code sketch follows the list):
- AI can cluster things into groups
- other people with similar interests, pictures of people based on similar features, etc.
- AI can classify and categorize things
- this image shows a spider plant, this mail is spam, etc.
- AI can predict outcomes
- your package will arrive tomorrow, this is the predicted risk of doing that, etc.
- AI can generate and optimize things
- machine-generated media (deep fakes), generative design, etc.
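To make these building blocks a bit more tangible, here is a minimal code sketch of the first two patterns (clustering and classification) using scikit-learn. The data is made up, and the example is my own illustration rather than something from the primer.

```python
# Toy illustration of two AI building blocks with scikit-learn (invented data).
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# "AI can cluster things into groups": group users by two interest scores
interests = [[1, 9], [2, 8], [9, 1], [8, 2], [9, 2]]
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(interests)
print(clusters)  # e.g. [0 0 1 1 1]: two groups of users with similar interests

# "AI can classify and categorize things": flag mails as spam (1) or not (0)
# based on two toy features: number of links and number of exclamation marks
mails = [[0, 0], [1, 0], [7, 5], [9, 8]]
labels = [0, 0, 1, 1]
spam_model = LogisticRegression().fit(mails, labels)
print(spam_model.predict([[8, 6]]))  # most likely classified as spam
```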
And then of course there are conversational interactions with chatbots and intelligent assistants like Alexa and Siri. So let’s start with some examples you might have encountered yourself as a user.
Basic Explanations
In a lot of cases, users don’t need detailed explanations — they just want to get a quick result (without bias — which is a topic of its own). At least I don’t want to hear my robot vacuum’s reasons for not finding its way — just get it done. But even in everyday interactions with apps and services, some background information can be important to manage user expectations and increase trust in the product.
Here is an example from Google’s People + AI Guidebook on how to communicate the system’s limitations:
Another example from polytopal.ai’s Lingua Franca is a fictional real estate search, where unexpected estimations are explained (“higher price due to recent renovations”):
Advanced Explanations
Understanding and trust become more important when AI supports critical decisions. An explanation might be legally required, and other parties (e.g. authorities) might be involved. Here is an example from a fictional credit approval process where the decision is supported by AI. IBM’s AI Explainability 360 toolkit shows an approach to explaining the factors that contributed to a loan denial (I doubt this specific explanation really helps — something to be tested with end users):
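AI Explainability 360 is an open-source Python toolkit. Rather than reproducing its actual API here, the following toy sketch hand-rolls the basic idea behind such a local explanation: check how the predicted approval probability would change if each input feature were replaced by a more typical value. All data, feature names and the model are invented for illustration.

```python
# A sketch of a local, per-decision explanation for a fictional loan denial.
# Not the AI Explainability 360 API; a hand-rolled illustration of the idea of
# attributing one decision to individual input features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "credit_history_years", "open_loans"]

# Fictional historical applications; label 1 = loan approved
X = rng.normal(size=(500, 4))
y = (X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

applicant = np.array([[-1.0, 1.2, 0.1, 0.3]])    # a denied application
baseline = model.predict_proba(applicant)[0, 1]  # probability of approval

print(f"approval probability: {baseline:.2f}")
for i, name in enumerate(feature_names):
    neutralized = applicant.copy()
    neutralized[0, i] = X[:, i].mean()           # replace with a "typical" value
    delta = model.predict_proba(neutralized)[0, 1] - baseline
    print(f"{name}: approval probability would change by {delta:+.2f}")
```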
Especially in the medical context, decisions are critical and can literally be about life and death. We are also talking about expert users in high-stress situations (e.g. radiologists). This prototype from the paper Question-Driven Design Process for Explainable AI User Experiences shows how different kinds of explanations can be designed as a dashboard (e.g. “factors that contribute to the risk of admission”):
Social Explanations
Currently available XAI techniques that open the black box are still limited, and there is a risk of misleading explanations (e.g. users relying too much on raw numbers). But even if an AI could explain itself perfectly, it might not be enough to make people use these systems. What becomes more important: social context.
The work of Upol Ehsan and his colleagues on human-centered explainable AI (HCXAI) is really interesting because it stresses the importance of understanding how other humans (e.g. your colleagues) work with the AI, not only understanding the system itself. That means answering questions like:
- Are others working successfully with the AI system?
- What happened after others followed the AI-driven suggestion?
- Where did others experience system limitations?
One of the ideas is that by integrating social markers as shown in the screenshot above, users can develop trust in the AI’s suggestions.
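As a thought experiment, here is a rough sketch of what such social markers could look like as a data structure, loosely inspired by the who/what/when/why framing from the Expanding Explainability paper. The fields, names and scenario are my own assumptions, not taken from the paper or any real product.

```python
# A sketch of "social markers" attached to an AI suggestion (invented schema).
from dataclasses import dataclass, field
from typing import List

@dataclass
class SocialMarker:
    who: str   # the colleague who acted on the suggestion
    what: str  # e.g. "accepted", "overrode"
    when: str  # timestamp or relative time
    why: str   # their stated reason

@dataclass
class AISuggestion:
    text: str
    confidence: float
    history: List[SocialMarker] = field(default_factory=list)

suggestion = AISuggestion(
    text="Quote price tier B for this client",
    confidence=0.81,
    history=[
        SocialMarker("R. Mehta", "overrode", "2 weeks ago",
                     "Client is in a regulated market, tier B not applicable"),
        SocialMarker("J. Park", "accepted", "3 days ago",
                     "Matched similar past deals"),
    ],
)

# A UI could render suggestion.history next to the AI output, so users see how
# colleagues handled similar suggestions, not just the model's reasoning.
for marker in suggestion.history:
    print(f"{marker.who} {marker.what} ({marker.when}): {marker.why}")
```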
How to design for understanding?
Now that we have looked at different kinds of explanations, the question is: how can we approach the design of AI-driven systems where people understand enough to reach their goals? XAI is still an emerging field, but there are already good approaches out there to get going.
Start with Questions
As with all human-centered projects, the first question to ask is who needs to use the system and in which context. A lot of XAI techniques are focused on helping developers interpret what the AI is doing, which makes sense when you want to optimize an algorithm, but not so much if you aren’t a developer. Experts like doctors or authorities might need detailed explanations to dive deep into the data sources and biases of a machine learning model. On the other end of the spectrum are casual users, who might only need an explanation if something goes wrong and won’t bother diving into details anyway. Google provides a good worksheet where you can map how much explanation might be needed for a certain user group:
The next step is identifying the specific questions users need to get answered. This might be the basic "Why did the system decide like this?", but also "What needs to happen to get to another decision?" (e.g. not getting your loan denied). UX methods like Wizard of Oz experiments can help here to pin down the most pressing questions.
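The second question is essentially a counterfactual. To make it concrete, here is a naive sketch that searches for the smallest single-feature change that would flip a fictional loan model from denied to approved. Real counterfactual methods are more sophisticated (they also check whether the suggested change is plausible), and all data here is invented.

```python
# Naive counterfactual sketch: "What needs to happen to get to another decision?"
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "credit_history_years"]

# Fictional past applications; label 1 = approved
X = rng.normal(size=(300, 3))
y = (X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)
model = LogisticRegression().fit(X, y)

applicant = np.array([[-0.4, 0.9, 0.2]])  # currently denied

def single_feature_counterfactual(model, row, names, step=0.05, max_steps=100):
    """Return the first single-feature change found that flips the decision."""
    for i, name in enumerate(names):
        for direction in (1, -1):
            candidate = row.copy()
            for _ in range(max_steps):
                candidate[0, i] += direction * step
                if model.predict(candidate)[0] == 1:
                    return (f"If {name} changed from {row[0, i]:.2f} to "
                            f"{candidate[0, i]:.2f}, the model would approve.")
    return "No single-feature change found in the search range."

print(model.predict(applicant))  # most likely [0], i.e. denied
print(single_feature_counterfactual(model, applicant, feature_names))
```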
Map Questions with XAI Techniques
Once you are clear about your users’ questions, there are different XAI techniques available that could deliver the results you need (as mentioned before, explaining the AI itself might not be the only solution). Here is an example mapping created by Vera Liao and her team:
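Since the original mapping is an image, here is a heavily condensed sketch of the idea as a simple lookup table. The question types and technique families are my own paraphrase of common XAI categories, not a reproduction of the mapping by Vera Liao and her team.

```python
# Condensed sketch of a question-to-technique mapping (my own paraphrase).
QUESTION_TO_TECHNIQUES = {
    "How does the system work overall?": [
        "global feature importance", "surrogate decision trees / rule lists"],
    "Why did I get this decision?": [
        "local feature attribution (e.g. SHAP, LIME)", "example-based explanations"],
    "Why not a different decision?": [
        "contrastive explanations"],
    "What needs to change to get a different decision?": [
        "counterfactual explanations"],
    "What if I change this input?": [
        "interactive what-if / sensitivity analysis"],
    "How reliable is this output?": [
        "confidence scores", "performance and uncertainty reporting"],
    "What data was this trained on?": [
        "model cards / datasheets", "data documentation"],
}

def candidate_techniques(question: str) -> list[str]:
    """Look up technique families for a user question type (exact match only)."""
    return QUESTION_TO_TECHNIQUES.get(question, ["needs user research first"])

print(candidate_techniques("Why did I get this decision?"))
```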
In general, there are different types of XAI techniques: global explanations that describe how the overall system works, and local explanations that describe how one specific decision was made. How these explanations are communicated to the end user varies widely, ranging from simple text output to visual heatmaps and interactive dashboards where users can play around with different factors. The SAP Fiori Design Guidelines for Explainable AI provide a good overview of how to use the UX pattern of progressive disclosure, and the UXAI toolkit provides a great visualization of different explanation types:
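To illustrate the global side of that distinction in code (the loan sketch above was a local explanation for one decision), here is a minimal example that computes permutation importance over a whole invented dataset, which summarizes which features matter for the model overall:

```python
# Global explanation sketch: permutation importance on a toy loan model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "credit_history_years", "open_loans"]
X = rng.normal(size=(500, 4))
y = (X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

# How much does shuffling each feature hurt accuracy across the whole dataset?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```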
Test with Users and Iterate
Having worked on AI-driven software as a UX professional, I found it really interesting to dive into the academic background of XAI and align it with my industry experience. It’s good to see how many smart people are working on this topic. And still, with AI diffusing into our everyday lives, there is a lot of hard user research and design work to be done to build digital products that make users think just the right amount to reach their goals.
Resources
Introduction to XAI
- Introduction to eXplainable AI (XAI) [Slides]
- A Visual Introduction to Explainable AI for Designers [Website]
- How does Explainable AI work? With Tim Schrills [Video]
- Upol Ehsan on Human-Centered Explainable AI and Social Transparency [Podcast]
- Towards Human-Centered Explainable AI: the journey so far [Article]
Papers on Human-centered XAI
- Expanding Explainability: Towards Social Transparency in AI systems
- Explainability Pitfalls: Beyond Dark Patterns in Explainable AI
- Question-Driven Design Process for Explainable AI User Experiences
- Human-Centered Explainable AI (XAI): From Algorithms to User Experiences
XAI Guidelines & Toolkits
- Google People + AI Guidebook: Explainability + Trust
- SAP Fiori Design Guidelines: Explainable AI
- Microsoft HAX Design Library
- UXAI Toolkit
- Lingua Franca — A Design Language for Human-Centered AI: Transparency
- IBM: AI Explainability 360 – Demo
People to follow
Intro Photo by Andy Kelly on Unsplash