Explaining machine learning models to the business

by Patrick Hall

Mar 19, 2020

How to create summaries of machine learning system decisions that business decision makers can understand.


Explainable machine learning is a sub-discipline of artificial intelligence (AI) and machine learning that attempts to summarize how machine learning systems make decisions. Summarizing how machine learning systems make decisions can be helpful for a lot of reasons, like finding data-driven insights, uncovering problems in machine learning systems, facilitating regulatory compliance, and enabling users to appeal — or operators to override — inevitable wrong decisions.

Of course, all that sounds great, but explainable machine learning is not yet a perfect science. The reality is that there are two major issues with explainable machine learning to keep in mind:

  1. Some “black-box” machine learning systems are probably just too complex to be accurately summarized.
  2. Even for machine learning systems that are designed to be interpretable, sometimes the way summary information is presented is still too complicated for business people. (Figure 1 provides an example of machine learning explanations for data scientists.)

Figure 1: Explanations created by H2O Driverless AI. These explanations are probably better suited for data scientists than for business users.

For issue 1, I’m going to assume that you want to use one of the many accurate and interpretable “glass-box” machine learning models available today, like monotonic gradient boosting machines in the open source frameworks h2o-3, LightGBM, and XGBoost.1 This article focuses on issue 2 and on helping you communicate explainable machine learning results clearly to business decision-makers.
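If you want to see what that looks like in practice, here is a minimal sketch of fitting a monotonically constrained gradient boosting machine with XGBoost. The synthetic data, column names, and constraint directions are purely illustrative; in a real project the constraints should come from domain knowledge about your own features.

```python
import numpy as np
import pandas as pd
import xgboost as xgb

# Illustrative stand-in for real credit card data
rng = np.random.default_rng(42)
n = 1_000
X = pd.DataFrame({
    "pay_status_aug": rng.integers(-1, 3, n),       # hypothetical: months past due in August
    "pay_status_jul": rng.integers(-1, 3, n),       # hypothetical: months past due in July
    "bill_amt_aug": rng.normal(50_000, 20_000, n),  # hypothetical: August bill amount
})
# Simulated target: missed September payment
y = (X["pay_status_aug"] + X["pay_status_jul"] + rng.normal(0, 1, n) > 2).astype(int)

# Constrain predicted risk to only increase with past-due status (+1); leave bill amount free (0)
model = xgb.XGBClassifier(
    n_estimators=200,
    max_depth=3,
    learning_rate=0.05,
    monotone_constraints="(1,1,0)",  # one entry per column, in column order
)
model.fit(X, y)
```

The constraint tells the booster that, all else being equal, a worse repayment status can only push the predicted risk up, which is exactly the kind of behavior a business partner can sanity-check.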

This article is divided into two main parts. The first part addresses explainable machine learning summaries for a machine learning system and an entire dataset (i.e. “global” explanations). The second part of the article discusses summaries for machine learning system decisions about specific people in a dataset (i.e. “local” explanations). Also, I’ll be using a straightforward example problem about predicting credit card payments to present concrete examples.

General summaries

Two good ways, among many other options, to summarize a machine learning system for a group of customers, represented by an entire dataset, are variable importance charts and surrogate decision trees. Now, because I want business people to care about and understand my results, I’m going to call those two things a “main drivers chart” and a “decision flowchart,” respectively.

Main decision drivers

The main drivers chart provides a visual summary and ranking of which factors are most important to a machine learning system’s decisions, in general. It’s a high-level summary and a decent place to start communicating about how a machine learning system works.

In the example problem, I’m trying to predict missed credit card payments in September, given payment statuses, payment amounts, and bill amounts from the previous six months. What Figure 2 tells me is that, for the machine learning system I’ve constructed, the previous month’s repayment status is by far the most important factor for most customers in my dataset. I can also see that July and June repayment statuses are the next most important factors.


Figure 2: Main drivers of model decisions about missing credit card payments in September for an entire dataset of credit card customers.

How did I make this chart? It’s just a slightly modified version of a traditional variable importance chart. To make sure the displayed information is as accurate as possible, I chose an interpretable model and matched it with a reliable variable importance calculation.2

Once I know my results are mathematically solid, I think about the presentation. In this case, I first removed all numbers from the chart. While numeric variable importance values might be meaningful to data scientists, most business people don’t have time to care about numbers that aren’t related to their business. I also replaced raw variable names with directly meaningful data labels, because no business person actually wants to think about my database schema.
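As a rough illustration of those two presentation tweaks, here is a minimal sketch that builds a main drivers chart from mean absolute SHAP values (one reasonable importance measure), swaps raw column names for plain-language labels, and hides the numeric axis. It assumes the `model` and `X` objects from the earlier sketch; the label mapping is hypothetical.

```python
import matplotlib.pyplot as plt
import numpy as np
import shap

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)          # one row per customer, one column per feature
importance = np.abs(shap_values).mean(axis=0)   # mean |SHAP| as a global importance measure

# Replace database column names with labels a business partner will recognize
labels = {
    "pay_status_aug": "August repayment status",
    "pay_status_jul": "July repayment status",
    "bill_amt_aug": "August bill amount",
}

order = np.argsort(importance)
fig, ax = plt.subplots()
ax.barh([labels[c] for c in X.columns[order]], importance[order])
ax.set_xticks([])                               # drop raw numbers; the ranking is the message
ax.set_title("Main drivers of predicted missed September payments")
plt.tight_layout()
plt.show()
```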

Once I’ve summarized my system in an understandable chart, I can go to my business partners and ask really important questions like: Is my system putting too much emphasis on August repayment status? Or, does it make sense to weigh April payment amount more than August payment amount? In my experience, accounting for those kinds of domain knowledge insights in my machine learning systems leads to the best technical and business outcomes.

Decision flowchart

The decision flowchart shows how predictive factors work together to drive decisions in my machine learning system. Figure 3 boils that entire system down to one flowchart!


Figure 3: A flowchart showing roughly how a complex model makes decisions about missing credit card payments in September for an entire dataset of credit card customers.

How did I summarize an entire machine learning system into a flowchart? I used an old data mining trick known as a surrogate model.3 Surrogate models are simple models of complex models. In this case, my simple model is a decision tree, or a data-derived flowchart, and the complex model is my machine learning system: the tree is trained on the system’s input factors and the decisions it makes. So the decision flowchart is simple machine learning on top of more complex machine learning.

Unfortunately, this trick is not guaranteed to work every time. Sometimes machine learning systems are just too sophisticated to be accurately represented by a simple model. So a key consideration for data scientists when creating a chart like Figure 3 is: How accurate and stable is my decision tree surrogate model? On the business side, if a data scientist shows you a chart like Figure 3, you should challenge them to prove that it is an accurate and stable representation of the machine learning system.

If you want to make a decision flowchart, remember to try to limit the complexity of the underlying machine learning system, keep your flowchart to a depth of, say, three to five decisions (Figure 3 uses a depth of three), and use human-readable data formats rather than your favorite label encoder.
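For the mechanics, here is a minimal sketch of a surrogate decision tree, again assuming the `model` and `X` objects from the earlier sketches: fit a shallow tree to the complex model’s risk scores, check how faithfully and stably it reproduces them, and only then plot it as a flowchart. The depth and the R² fidelity check are illustrative choices, not hard rules.

```python
import matplotlib.pyplot as plt
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor, plot_tree

p_hat = model.predict_proba(X)[:, 1]             # the complex model's predicted risk

surrogate = DecisionTreeRegressor(max_depth=3)   # keep the flowchart shallow enough to read
surrogate.fit(X, p_hat)

# Fidelity check: a trustworthy surrogate should have a high mean R^2 against the
# complex model's predictions and low variance across folds
fidelity = cross_val_score(surrogate, X, p_hat, scoring="r2", cv=5)
print(f"surrogate fidelity R^2: {fidelity.mean():.2f} +/- {fidelity.std():.2f}")

plot_tree(surrogate, feature_names=list(X.columns), filled=True, rounded=True)
plt.show()
```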

Specific summaries

If you work in financial services, you probably already know that each individual machine learning system decision for each customer sometimes has to be explained or summarized. For the rest of the data science world, explaining individual machine learning system decisions might not be a regulatory requirement, but I would argue it’s a best practice. And regulations are likely on the way. Why not get ready?4

No one wants to be told “computer says no,” especially when the computer is wrong. So, consumer-level explanations are important for data scientists, who might want to override or debug bad machine learning behavior, and for consumers, who deserve to be able to appeal wrong decisions that affect them negatively.

I’m going to focus on two types of explanations that summarize machine learning system decisions for specific people: Shapley values (like in Figure 2) and counterfactual explanations. Since data science jargon isn’t helpful in this context, I’m going to call these two approaches main decision drivers (again) and “counter-examples.” Also, keep in mind there are many other options for creating consumer-specific explanations.

Main decision drivers

Shapley values can be used to summarize a machine learning system for an entire dataset (Figure 2) or at the individual decision level (Figure 4). When you use the right underlying machine learning and Shapley algorithm, these individual summaries can be highly accurate.2

Where I think most data scientists go wrong with Shapley values is in explaining them to business partners. My advice is don’t ever use equations, and probably don’t use charts or tables either. Just write out the elegant Shapley value interpretation in plain English (or whatever language you prefer). To see this approach in action, check out Figure 4. It displays the three most important drivers behind a decision about a customer whom my machine learning system considers at above-average risk of missing their September payment.

Figure 4: Top three drivers of model decisions about missing a payment in September for one specific customer.
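To make that concrete, here is a minimal sketch of turning one customer’s local Shapley values into plain-English sentences rather than a chart. It reuses the `explainer`, `X`, and `labels` objects from the earlier sketches, and the wording is illustrative, not a template for regulatory adverse action notices.

```python
import numpy as np

customer = X.iloc[[0]]                          # one specific customer
contrib = explainer.shap_values(customer)[0]    # local Shapley values for that customer
top3 = np.argsort(np.abs(contrib))[::-1][:3]

print("The predicted risk of missing the September payment is driven mainly by:")
for rank, i in enumerate(top3, start=1):
    direction = "raises" if contrib[i] > 0 else "lowers"
    print(f"  {rank}. {labels[X.columns[i]]} (current value: {customer.iloc[0, i]}), "
          f"which {direction} this customer's risk.")
```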

Counter-examples

Counter-examples explain what a customer could do differently to receive a different outcome from a machine learning system. You can sometimes use software libraries5 to create counter-examples, or you can build your own by trial and error: change the inputs to a machine learning system and observe the change in the system’s outputs. It turns out that for the high-risk customer portrayed in Figure 5, if they had made their recent payments on time, instead of being late, my machine learning system would put them at a much lower risk for missing their upcoming September payment.

Figure 5: A counter-example about missing a payment in September for one specific customer.
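In the same spirit, here is a minimal sketch of the trial-and-error version of a counter-example: take one customer, pretend their recent payments were made on time, and re-score them with the model from the earlier sketches. The column names and the on-time encoding (-1) are hypothetical.

```python
customer = X.iloc[[0]].copy()
original_risk = model.predict_proba(customer)[0, 1]

what_if = customer.copy()
what_if[["pay_status_aug", "pay_status_jul"]] = -1   # hypothetical code for "paid on time"
new_risk = model.predict_proba(what_if)[0, 1]

print(f"Predicted risk of a missed September payment: {original_risk:.0%} -> {new_risk:.0%} "
      "if the two most recent payments had been made on time.")
```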

Once you can see the logic and data points behind how a machine learning system makes a given decision, it becomes much easier for data scientists to catch and fix bad data or wrong decisions. It also becomes much easier for customers interacting with machine learning systems to catch and appeal the same kinds of wrong data or decisions.

These kinds of explanations are also potentially useful for compliance with regulations like the Equal Credit Opportunity Act (ECOA) and the Fair Credit Reporting Act (FCRA) in the United States, and the General Data Protection Regulation (GDPR) in the European Union. 

Responsible machine learning

The myriad risks of depending on a system you don’t understand are major barriers to the adoption of AI and machine learning in the business world. When you can break down those barriers, that’s a big step forward. Hopefully you’ll find the techniques I present here useful for just that, but do be careful. Aside from the accuracy and communication concerns I’ve already brought up, there are some security and privacy concerns with explainable ML too.6 

Also, explainability is just one part of mitigating machine learning risks.7 A machine learning system could be totally transparent and still discriminate against certain groups of people, or could be both transparent and wildly unstable when deployed to make decisions using real-world data. For these reasons and more, it’s always a good idea to consider privacy, security, and discrimination risks, and not just GPUs and Python code, when white-boarding a machine learning system.

All of this leads me into the responsible practice of machine learning, but that’s food for thought for my next article. Suffice it to say, communicating machine learning system outcomes clearly is incumbent on all data scientists in today’s data-driven world, and explaining AI and machine learning to business decision makers is becoming more of a possibility with the right approach and the right technology.

Patrick Hall is a senior director for data science products at H2O.ai, where he focuses mainly on model interpretability. Patrick is also currently an adjunct professor in the Department of Decision Sciences at George Washington University, where he teaches graduate classes in data mining and machine learning. Prior to joining H2O.ai, Patrick held global customer-facing and R&D roles at SAS Institute.

1. Other great options for interpretable models include elastic net regression, explainable neural networks (XNN), GA2M, and certifiably optimal rule lists (CORELS).

2. I recommend monotonic gradient boosting machines plus TreeSHAP to generate accurate summaries.

3. Decision tree surrogates go back to at least 1996, but alternative methods have been put forward in recent years as well.

4. Governments of at least Canada, Germany, the Netherlands, Singapore, the United Kingdom, and the United States have proposed or enacted ML-specific regulatory guidance.  

5. Like cleverhans or foolbox.

6. Risks of model extraction and inversion attacks and membership inference attacks are all generally exacerbated by presenting explanations along with predictions to consumers of ML-based decisions.

7. For a more thorough technical discussion of responsible machine learning see: “A Responsible Machine Learning Workflow.”