Problems with Shapley-value-based explanations as feature importance measures

by I. E. Kumar, S. Venkatasubramanian, C. Scheidegger, S. Friedler

Published in: Proceedings of the 37th International Conference on Machine Learning, ICML 2020

Problem setting

In many practical machine learning pipelines, feature attribution methods such as Shapley values are used to decide whether features contribute to model accuracy for a given prediction task in a "relevant" way. The paper argues that the explanatory power widely attributed to Shapley-value-based feature importance methods does not hold up, and that these values are not well suited to answer the questions data scientists usually have in mind.

Shapley-value setting in a nutshell:

N players, a value function v: 2^{[N]} \rightarrow \mathbb{R} with v(\emptyset) = 0 that quantifies how much collective payoff a set of players can gain by cooperating, and marginal contributions \Delta_v(i,S) := v(S \cup \{i\}) - v(S) for S \subseteq [N] \setminus \{i\}.

The Shapley-value of player i is defined by

\phi_v(i) := \frac{1}{N!} \sum_{S \subseteq [N] \setminus \{i\}} |S|! \, (N-|S|-1)! \, \Delta_v(i,S)

\phi_v satisfies the following axioms, which together with efficiency (\sum_{i \in [N]} \phi_v(i) = v([N])) characterize it uniquely:

  • (Symmetry) If \Delta_v(i,S) = \Delta_v(j,S) for all S \subseteq [N] \setminus \{i,j\}, then \phi_v(i) = \phi_v(j).
  • (Dummy) If \Delta_v(i,S) = 0 for all S \subseteq [N] \setminus \{i\}, then \phi_v(i) = 0.
  • (Additivity) \phi_{v+w}(i) = \phi_v(i) + \phi_w(i) for any two value functions v and w.
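
To make the definition concrete, here is a minimal Python sketch (my own illustration, not taken from the paper) that computes exact Shapley values by enumerating all coalitions. The toy three-player value function at the end is hypothetical; it is chosen so that two players are redundant copies of each other, which already foreshadows the redundancy problem discussed below.

```python
from itertools import combinations
from math import factorial

def shapley_values(n, v):
    """Exact Shapley values for an n-player game.

    v maps a set of players (subset of {0, ..., n-1}) to a real number,
    with v(empty set) == 0.
    """
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for size in range(len(others) + 1):
            for S in combinations(others, size):
                S = set(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += weight * (v(S | {i}) - v(S))
    return phi

# Hypothetical toy game: players 0 and 1 are redundant copies, and value
# is created only when one of them cooperates with player 2.
def v(S):
    return 1.0 if (0 in S or 1 in S) and 2 in S else 0.0

print(shapley_values(3, v))  # ~[0.167, 0.167, 0.667]: the two copies split the credit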

In the machine learning setting we have a model f(x_1, \ldots, x_d) whose features 1, \ldots, d act as the players, and \phi_v(i) is interpreted as the influence of feature i on the prediction. The list below gives, for the most common methods, the local value function v_{f,x}(S) they define and the quantity \hat{v}_{f,x}(S) they actually estimate:

  • Conditional, TreeSHAP: v_{f,x}(S) = E_{X_{\overline{S}}|X_S}[f(x_S, X_{\overline{S}})]; \hat{v}_{f,x}(S) = E_{X_{\overline{S}}|X_S}[f(x_S, X_{\overline{S}})]
  • Conditional, KernelSHAP and Shapley sampling: v_{f,x}(S) = E_{X_{\overline{S}}|X_S}[f(x_S, X_{\overline{S}})]; \hat{v}_{f,x}(S) = E_{\mathcal{D}}[f(x_S, X_{\overline{S}})]
  • Interventional, QII, FAE and interventional TreeSHAP: v_{f,x}(S) = E_{\mathcal{D}}[f(x_S, X_{\overline{S}})]; \hat{v}_{f,x}(S) = E_{\mathcal{D}}[f(x_S, X_{\overline{S}})]

Here \mathcal{D} refers to product distributions over the features in \overline{S}, i.e. distributions from the set \{P : P = \prod_{j \in \overline{S}} \pi_j P\}, where \pi_j P denotes the marginal of feature j under P; in other words, the features outside the coalition are sampled independently from their marginals, ignoring any dependence on x_S.
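
The interventional value function can be estimated by plain Monte-Carlo sampling. The sketch below (assuming NumPy; f, x, S and background are hypothetical names of my own, not from the paper) fixes the features in S to the values of the instance x and draws each feature outside S independently from its empirical marginal in a background data set, which is exactly the product-of-marginals distribution \mathcal{D} described above.

```python
import numpy as np

def interventional_value(f, x, S, background, rng, n_samples=1000):
    """Monte-Carlo estimate of E_D[f(x_S, X_outside_S)].

    Features in S are fixed to the values in x; every feature outside S is
    drawn independently from its empirical marginal in `background`.
    """
    d = x.shape[0]
    samples = np.tile(x, (n_samples, 1))            # start from copies of x
    for j in (set(range(d)) - set(S)):
        idx = rng.integers(0, background.shape[0], size=n_samples)
        samples[:, j] = background[idx, j]          # each column resampled on its own
    return f(samples).mean()

# Hypothetical usage with a toy non-additive model
rng = np.random.default_rng(0)
background = rng.normal(size=(500, 3))
f = lambda X: X[:, 0] + 2 * X[:, 1] * X[:, 2]
x = np.array([1.0, 1.0, 1.0])
print(interventional_value(f, x, S={0, 1}, background=background, rng=rng))  # ~1.0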

Problems with Shapley-values

The main problems with Shapley values are the following.

Problems with conditional distributions:

  • Redundant features can lead to misleading Shapley values: a redundant feature can receive a higher value than its non-redundant counterpart, and a feature the model never uses can still get credit. Redundant features therefore have to be identified and eliminated before the Shapley values are computed, so one of the very tasks Shapley values are supposed to help with has to be done manually or by other means (a worked toy example follows this list).
  • In general, these Shapley values can take a long time to compute, since conditional expectations have to be estimated for exponentially many coalitions.
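
A worked toy example (my own, not from the paper) of the redundancy problem under the conditional value function: feature 1 is an exact copy of feature 0 and the model reads only feature 0, yet the unused copy receives exactly the same Shapley value.

```python
# Two binary features, uniform over {0, 1}, with X1 an exact copy of X0.
# The model reads only feature 0: f(x) = x0.  Explicand: x = (1, 1).
# Conditional value function: v(S) = E[f(X) | X_S = x_S].
v = {frozenset(): 0.5,           # E[X0] = 0.5
     frozenset({0}): 1.0,        # E[f | X0 = 1] = 1
     frozenset({1}): 1.0,        # E[f | X1 = 1] = E[X0 | X1 = 1] = 1  (the copy!)
     frozenset({0, 1}): 1.0}

# With n = 2 players, both orderings get weight 1/2 in the Shapley formula.
phi0 = 0.5 * (v[frozenset({0})] - v[frozenset()]) + 0.5 * (v[frozenset({0, 1})] - v[frozenset({1})])
phi1 = 0.5 * (v[frozenset({1})] - v[frozenset()]) + 0.5 * (v[frozenset({0, 1})] - v[frozenset({0})])
print(phi0, phi1)  # 0.25 0.25 -- the feature the model never uses gets equal credit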

Problems with interventional distributions:

  • The model has to be evaluated on out-of-distribution samples: feature values of the instance being explained are combined with independently drawn values for the remaining features, which can produce inputs that never occur in the data. As a result, parts of the feature space that are irrelevant for the task at hand are evaluated, which can again lead to misleading Shapley values (see the sketch below).
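
The following sketch (my own illustration, assuming NumPy) shows the out-of-distribution effect: when two features always agree in the data, the interventional splices combine a fixed value for one of them with independently resampled values for the other, producing inputs that essentially never occur in the data.

```python
import numpy as np

# Hypothetical data in which feature 1 is an exact copy of feature 0.
rng = np.random.default_rng(1)
x0 = rng.normal(size=1000)
data = np.column_stack([x0, x0, rng.normal(size=1000)])

# Interventional splice for the coalition S = {0}: keep the explicand's value
# for feature 0, resample features 1 and 2 independently from their marginals.
x = np.array([3.0, 3.0, 0.0])
idx1 = rng.integers(0, len(data), size=1000)
idx2 = rng.integers(0, len(data), size=1000)
spliced = np.column_stack([np.full(1000, x[0]), data[idx1, 1], data[idx2, 2]])

# In the data, features 0 and 1 always agree; in the spliced inputs they
# essentially never do, so the model is queried far off the data distribution.
print(np.mean(np.isclose(data[:, 0], data[:, 1])))        # 1.0
print(np.mean(np.isclose(spliced[:, 0], spliced[:, 1])))  # ~0.0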

General problems:

  • Shapley values are not as model agnostic as is often claimed: they yield a meaningful, directly interpretable output essentially only for additive models, and the resulting attributions can carry little or no meaning for non-additive models.

Human-centric issues

The paper also addresses human-centric issues of Shapley values. The authors first point out that one of the major findings of the social sciences is that humans explain phenomena to each other with contrastive statements: they explain the cause of an event relative to some other event that did not occur (a counterfactual). The authors argue that Shapley values may fail to deliver such contrastive explanations. For example, they implicitly answer the question "Why f(x) rather than E[f(X)]?", but the reference value E[f(X)] is generally not attained by any actual instance; and even if one accepts the individual marginal contributions, it is not clear why averaging them is a good way to summarize the information.

Shapley values also provide no guidance on which actions to take in order to change an outcome to a desired one.

Furthermore, there is no standard procedure for converting Shapley values into a statement about a model's behavior. Even when data scientists have a clear mental model of what Shapley values deliver (according to the authors, most do not), they tend to misuse them for their own purposes due to confirmation bias, and interpretability is not always helpful in task-specific settings.

Conclusion

Shapley values might help to qualitatively inform investigations that eventually answer the questions data scientists have in mind when applying them, but it is not clear that they provide direct answers to those questions. The authors conclude that the Shapley-value framework is ill-suited as a general solution to the problem of quantifying feature importance. Instead, they recommend more focused and specific approaches (designed with human accessibility in mind) that can deliver more direct answers to the questions data scientists actually have.

Relevance: relevant for the ML community and for all pipelines that make use of feature importance methods

Impact: medium

Level of Difficulty: easy
