# Local Interpretable Model-agnostic Explanations (LIME) {#LIME}
```{r, echo=FALSE, warning=FALSE}
source("code_snippets/ema_init.R")
```
## Introduction {#LIMEIntroduction}
Break-down (BD) plots and Shapley values, introduced in Chapters \@ref(breakDown) and \@ref(shapley), respectively, are most suitable for models with a small or moderate number of explanatory variables.
None of those approaches is well-suited for models with a very large number of explanatory variables, because they usually determine non-zero attributions for all variables in the model. However, in domains like, for instance, genomics or image recognition, models with hundreds of thousands, or even millions, of explanatory (input) variables are not uncommon. In such cases, sparse explanations with a small number of variables offer a useful alternative. The most popular example of such sparse explainers is the Local Interpretable Model-agnostic Explanations (LIME) method and its modifications.
The LIME method was originally proposed by @lime. The key idea behind it is to locally approximate a black-box model by a simpler glass-box model, which is easier to interpret. In this chapter, we describe this approach.
## Intuition {#LIMEIntuition}
The intuition behind the LIME method is explained in Figure \@ref(fig:limeIntroduction). We want to understand the factors that influence a complex black-box model around a single instance of interest (black cross). The coloured areas presented in Figure \@ref(fig:limeIntroduction) correspond to decision regions for a binary classifier, i.e., they pertain to a prediction of a value of a binary dependent variable. The axes represent the values of two continuous explanatory variables. The coloured areas indicate combinations of values of the two variables for which the model classifies the observation to one of the two classes. To understand the local behavior of the complex model around the point of interest, we generate an artificial data set, to which we fit a glass-box model. The dots in Figure \@ref(fig:limeIntroduction) represent the generated artificial data; the size of the dots corresponds to proximity to the instance of interest. We can fit a simpler glass-box model to the artificial data so that it will locally approximate the predictions of the black-box model. In Figure \@ref(fig:limeIntroduction), a simple linear model (indicated by the dashed line) is used to construct the local approximation. The simpler model serves as a "local explainer" for the more complex model.
We may select different classes of glass-box models. The most typical choices are regularized linear models like LASSO regression [@Tibshirani94regressionshrinkage] or decision trees [@party2006]. Both lead to sparse models that are easier to understand. The important point is to limit the complexity of the models, so that they are easier to explain.
(ref:limeIntroductionDesc) The idea behind the LIME approximation with a local glass-box model. The coloured areas correspond to decision regions for a complex binary classification model. The black cross indicates the instance (observation) of interest. Dots correspond to artificial data around the instance of interest. The dashed line represents a simple linear model fitted to the artificial data. The simple model "explains" local behavior of the black-box model around the instance of interest.
```{r limeIntroduction, echo=FALSE, fig.cap='(ref:limeIntroductionDesc)', out.width = '70%', fig.align='center'}
knitr::include_graphics("figure/lime_introduction.png")
```
## Method {#LIMEMethod}
We want to find a model that locally approximates a black-box model $f()$ around the instance of interest $\underline{x}_*$. Consider class $\mathcal{G}$ of simple, interpretable models like, for instance, linear models or decision trees. To find the required approximation, we minimize a "loss function":
$$
\hat g = \arg \min_{g \in \mathcal{G}} L\{f, g, \nu(\underline{x}_*)\} + \Omega (g),
$$
where model $g()$ belongs to class $\mathcal{G}$, $\nu(\underline{x}_*)$ defines a neighborhood of $\underline{x}_*$ in which approximation is sought, $L()$ is a function measuring the discrepancy between models $f()$ and $g()$ in the neighborhood $\nu(\underline{x}_*)$, and $\Omega(g)$ is a penalty for the complexity of model $g()$. The penalty is used to favour simpler models from class $\mathcal{G}$. In applications, this criterion is very often simplified by limiting class $\mathcal{G}$ to models with the same complexity, i.e., with the same number of coefficients. In such a situation, $\Omega(g)$ is the same for each model $g()$, so it can be omitted in optimization.
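For example, in the original proposal of @lime, the discrepancy is measured by a similarity-weighted squared difference between the predictions of the two models, evaluated for points $\underline{z}$ sampled from the neighborhood $\nu(\underline{x}_*)$:

$$
L\{f, g, \nu(\underline{x}_*)\} = \sum_{\underline{z} \in \nu(\underline{x}_*)} \pi_{\underline{x}_*}(\underline{z}) \left\{f(\underline{z}) - g(\underline{z}')\right\}^2,
$$

where $\pi_{\underline{x}_*}(\underline{z})$ is a kernel that quantifies the similarity between $\underline{z}$ and $\underline{x}_*$, and $\underline{z}'$ denotes the representation of $\underline{z}$ in the lower-dimensional space discussed below.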
Note that models $f()$ and $g()$ may operate on different data spaces. The black-box model (function) $f(\underline{x}):\mathcal X \rightarrow \mathcal R$ is defined on a large, $p$-dimensional space $\mathcal X$ corresponding to the $p$ explanatory variables used in the model. The glass-box model (function) $g(\underline{x}):\tilde{ \mathcal X} \rightarrow \mathcal R$ is defined on a $q$-dimensional space $\tilde{ \mathcal X}$ with $q \ll p$, often called the "space for interpretable representation." We will present some examples of $\tilde{ \mathcal X}$ in the next section. For now we will just assume that some function $h()$ transforms $\mathcal X$ into $\tilde{ \mathcal X}$.
If we limit class $\mathcal{G}$ to linear models with a limited number, say $K$, of non-zero coefficients, then the following algorithm may be used to find an interpretable glass-box model $g()$ that includes $K$ most important, interpretable, explanatory variables:
```
Input: x* - observation to be explained
Input: N - sample size for the glass-box model
Input: K - complexity, the number of variables for the glass-box model
Input: similarity - a distance function in the original data space

1. Let x' = h(x*) be the version of x* in the lower-dimensional space
2. for i in 1...N {
3.    z'[i] <- sample_around(x')
4.    y'[i] <- f(z'[i])              # black-box prediction for z'[i]
5.    w'[i] <- similarity(x', z'[i]) # weight reflecting proximity to x'
6. }
7. return K-LASSO(y', z', w')
```
In Step 7, ``K-LASSO(y', z', w')`` stands for a weighted LASSO linear regression that selects $K$ variables; it is fitted to the sampled points ``z'`` and the corresponding black-box predictions ``y'``, with weights ``w'``.
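A minimal sketch of this algorithm in R, for binary interpretable features, is given below. It uses the `glmnet` package for the weighted LASSO step; the function `f` is assumed to return the black-box prediction for a point expressed in the interpretable space (i.e., $f$ composed with the inverse of $h()$), and all other names follow the pseudo-code above.

```{r, eval=FALSE}
library("glmnet")

lime_sketch <- function(f, x_prime, N = 1000, K = 3, kernel_width = 0.75) {
  q <- length(x_prime)
  # Steps 2-6: sample perturbations of x', compute black-box predictions,
  # and weight each perturbation by its similarity to x'.
  Z <- matrix(x_prime, nrow = N, ncol = q, byrow = TRUE)
  for (i in seq_len(N)) {
    flip <- sample(q, sample(q, 1))       # switch a random subset of features
    Z[i, flip] <- 1 - Z[i, flip]
  }
  y <- apply(Z, 1, f)                     # predictions for the sampled points
  d <- sqrt(rowSums(sweep(Z, 2, x_prime)^2))
  w <- exp(-d^2 / kernel_width^2)         # Gaussian similarity kernel
  # Step 7: K-LASSO -- the largest model on the LASSO path with at most
  # K non-zero coefficients.
  fit <- glmnet(Z, y, weights = w, alpha = 1)
  coef(fit, s = fit$lambda[max(which(fit$df <= K))])
}
```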
Practical implementation of this idea involves three important steps, which are discussed in the subsequent subsections.
### Interpretable data representation {#LIMErepr}
As it has been mentioned, the black-box model $f()$ and the glass-box model $g()$ operate on different data spaces. For example, let us consider a VGG16 neural network [@Simonyan15] trained on the ImageNet data [@ImageNet]. The model uses an image of size 224 $\times$ 224 pixels as input and predicts to which of 1000 potential categories the image belongs. The original space $\mathcal X$ is of dimension 3 $\times$ 224 $\times$ 224 (three single-color channels (*red, green, blue*) for each of the 224 $\times$ 224 pixels), i.e., the input space is 150,528-dimensional. Explaining predictions in such a high-dimensional space is difficult. Instead, from the perspective of a single instance of interest, the image can be transformed into superpixels, which are treated as binary features that can be turned on or off. Figure \@ref(fig:duckHorse06) (right-hand-side panel) presents an example of 100 superpixels created for an ambiguous picture. Thus, in this case, the black-box model $f()$ operates on space $\mathcal X=\mathcal{R}^{150528}$, while the glass-box model $g()$ applies to space $\tilde{ \mathcal X} = \{0,1\}^{100}$.
It is worth noting that superpixels, based on image segmentation, are a frequent choice for image data. For text data, groups of words are frequently used as interpretable variables. For tabular data, continuous variables are often discretized to obtain interpretable categorical data. In the case of categorical variables, combinations of categories are often used. We will present examples in the next section.
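For tabular data, the transformation $h()$ can be as simple as a set of binary indicators. The sketch below is a hypothetical helper (the cut-offs mirror those used in the Titanic example later in this chapter) that maps a single observation into a three-dimensional binary interpretable space.

```{r, eval=FALSE}
# A hypothetical transformation h() for tabular data: every original variable
# is replaced by a single binary feature. The cut-offs mirror those used in
# the Titanic example discussed later in this chapter.
to_interpretable <- function(obs) {
  c(age    = as.numeric(obs$age <= 15.36),
    class  = as.numeric(obs$class %in% c("1st", "2nd", "deck crew")),
    gender = as.numeric(obs$gender == "female"))
}
```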
(ref:duckHorse06Desc) The left-hand-side panel shows an ambiguous picture, half-horse and half-duck. The right-hand-side panel shows 100 superpixels identified for this figure. Source [Twitter](https://twitter.com/finmaddison/status/352128550704398338).
```{r duckHorse06, echo=FALSE, fig.cap='(ref:duckHorse06Desc)', out.width = '100%', fig.align='center'}
knitr::include_graphics("figure/duck_horse_06.png")
```
### Sampling around the instance of interest {#LIMEsample}
To develop a local-approximation glass-box model, we need new data points in the low-dimensional interpretable data space around the instance of interest. One could consider sampling the data points from the original dataset. However, there may not be enough points to sample from, because the data in high-dimensional datasets are usually very sparse and data points are "far" from each other. Thus, we need new, artificial data points. For this reason, the data for the development of the glass-box model is often created by using perturbations of the instance of interest.
For binary variables in the low-dimensional space, the common choice is to switch (from 0 to 1 or from 1 to 0) the value of a randomly-selected number of variables describing the instance of interest.
For continuous variables, various proposals have been formulated in different papers. For example, @imlRPackage and @molnar2019 suggest adding Gaussian noise to continuous variables. @limePackage propose to discretize continuous variables by using quantiles and then perturb the discretized versions of the variables. @localModelPackage discretize continuous variables based on segmentation of local ceteris-paribus profiles (for more information about the profiles, see Chapter \@ref(ceterisParibus)).
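A minimal sketch of the two types of perturbations mentioned above (switching binary features and adding Gaussian noise) may look as follows; both helpers are hypothetical and operate on a single numeric vector.

```{r, eval=FALSE}
# (a) Binary interpretable features: switch the values of a randomly-selected
#     subset of components (0 -> 1 or 1 -> 0).
perturb_binary <- function(x_prime) {
  flip <- sample(length(x_prime), sample(length(x_prime), 1))
  x_prime[flip] <- 1 - x_prime[flip]
  x_prime
}
# (b) Continuous variables: add Gaussian noise (the approach suggested,
#     for instance, by the iml package).
perturb_gaussian <- function(x, sd = 0.1) {
  x + rnorm(length(x), mean = 0, sd = sd)
}
```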
In the example of the duck-horse image in Figure \@ref(fig:duckHorse06), the perturbations of the image could be created by randomly excluding some of the superpixels. An illustration of this process is shown in Figure \@ref(fig:duckHorseProcess).
```{r duckHorseProcess, echo=FALSE, fig.cap="The original image (left-hand-side panel) is transformed into a lower-dimensional data space by defining 100 superpixels (middle panel). The artificial data are created by using subsets of superpixels (right-hand-side panel).", out.width = '100%', fig.align='center'}
knitr::include_graphics("figure/duck_horse_process.png")
```
### Fitting the glass-box model {#LIMEglas}
Once the artificial data around the instance of interest have been created, we may attempt to fit an interpretable glass-box model $g()$ from class $\mathcal{G}$.
The most common choices for class $\mathcal{G}$ are generalized linear models. To get sparse models, i.e., models with a limited number of variables, LASSO (least absolute shrinkage and selection operator) [@Tibshirani94regressionshrinkage] or similar regularization techniques are used. For instance, the algorithm presented in Section \@ref(LIMEMethod) uses the K-LASSO method, which selects $K$ non-zero coefficients. An alternative choice is CART (classification-and-regression trees) [@CARTtree].
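If class $\mathcal{G}$ consists of decision trees rather than linear models, the same artificial data can be used. The sketch below assumes that a data frame `artificial_z` of perturbed observations, their black-box predictions `y_hat`, and similarity weights `w` have already been generated as in the algorithm of Section \@ref(LIMEMethod); it fits a shallow CART tree with the `rpart` package, with the maximum tree depth playing the role of the complexity penalty $\Omega(g)$.

```{r, eval=FALSE}
# A sketch of a tree-based glass-box model; artificial_z, y_hat, and w are
# assumed to have been created beforehand (they are not defined here).
library("rpart")
glass_box_tree <- rpart(y_hat ~ .,
                        data = data.frame(artificial_z, y_hat = y_hat),
                        weights = w,
                        control = rpart.control(maxdepth = 2))
```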
For the example of the duck-horse image in Figure \@ref(fig:duckHorse06), the VGG16 network provides 1000 probabilities that the image belongs to one of the 1000 classes used for training the network. It appears that the two most likely classes for the image are *'standard poodle'* (probability of 0.18) and *'goose'* (probability of 0.15). Figure \@ref(fig:duckHorse04) presents LIME explanations for these two predictions. The explanations were obtained with the K-LASSO method which selected $K=15$ superpixels that were the most influential from a model-prediction point of view. For each of the selected two classes, the $K$ superpixels with non-zero coefficients are highlighted. It is interesting to observe that the superpixel which contains the beak is influential for the *'goose'* prediction, while superpixels linked with the white colour are influential for the *'standard poodle'* prediction. At least for the former, the influential feature of the plot does correspond to the intended content of the image. Thus, the results of the explanation increase confidence in the model's predictions.
```{r duckHorse04, echo=FALSE, fig.cap="LIME for two predictions ('standard poodle' and 'goose') obtained by the VGG16 network with ImageNet weights for the half-duck, half-horse image.", out.width = '100%', fig.align='center'}
knitr::include_graphics("figure/duck_horse_04.png")
```
## Example: Titanic data {#LIMEExample}
Most examples of the LIME method are related to text or image data. In this section, we present an example of binary classification for tabular data, to facilitate comparisons between the methods introduced in different chapters.
Let us consider the random forest model `titanic_rf` (see Section \@ref(model-titanic-rf)) and passenger Johnny D (see Section \@ref(predictions-titanic)) as the instance of interest for the Titanic data.
First, we have to define an interpretable data space. One option would be to gather similar variables into larger constructs corresponding to some concepts. For example, the *class* and *fare* variables could be combined into "wealth", *age* and *gender* into "demography", and so on. In this example, however, we have a relatively small number of variables, so we use a simpler data representation in the form of a binary vector. Toward this aim, each variable is dichotomized into two levels. For example, *age* is transformed into a binary variable with categories "$\leq$ 15.36" and ">15.36", *class* is transformed into a binary variable with categories "1st/2nd/deck crew" and "other", and so on. Once the lower-dimension data space is defined, the LIME algorithm is applied to this space. First, the data for Johnny D are transformed accordingly. Subsequently, we generate a new artificial dataset that is used for the K-LASSO approximation of the random forest model. In particular, the K-LASSO method with $K=3$ is used to identify the three most influential (binary) variables that provide an explanation of the prediction for Johnny D. The three variables are: *age*, *gender*, and *class*. This result agrees with the conclusions drawn in the previous chapters. Figure \@ref(fig:LIMEexample01) shows the coefficients estimated for the K-LASSO model.
(ref:LIMEexample01Desc) LIME method for the prediction for Johnny D for the random forest model `titanic_rf` and the Titanic data. Presented values are the coefficients of the K-LASSO model fitted locally to the predictions from the original model.
```{r LIMEexample01, warning=FALSE, message=FALSE, echo=FALSE, fig.cap='(ref:LIMEexample01Desc)', out.width = '80%', fig.width=7, fig.height=4, fig.align='center'}
knitr::include_graphics("figure/LIMEexample01.png")
```
<!---
The interpretable features can be defined in a many different ways. One idea would to be use quartiles for the feature of interest. Another idea is to use Ceteris Paribus profiles (see Chapter \@ref(ceterisParibus) and change-point method [@picard_1985] to find a instance specific discretization.
Different implementations of LIME differ in the way how the interpretable feature space is created.
--->
## Pros and cons {#LIMEProsCons}
As mentioned by @lime, the LIME method
- is *model-agnostic*, as it does not impose any assumptions about the black-box model structure;
- offers an *interpretable representation*, because the original data space is transformed (for instance, by replacing individual pixels by superpixels for image data) into a more interpretable, lower-dimension space;
- provides *local fidelity*, i.e., the explanations are locally well-fitted to the black-box model.
The method has been widely adopted in text and image analysis, partly due to the interpretable data representation. In those applications, the explanations are delivered in the form of fragments of an image or text, and users can easily find a justification for them. The underlying intuition for the method is easy to understand: a simpler model is used to approximate a more complex one. By using a simpler model, with a smaller number of interpretable explanatory variables, predictions are easier to explain. The LIME method can be applied to complex, high-dimensional models.
There are several important limitations, however. For instance, as mentioned in Section \@ref(LIMEsample), various proposals have been made for finding interpretable representations of continuous and categorical explanatory variables in the case of tabular data. The issue has not been settled. As a result, different implementations of LIME use different variable-transformation methods and, consequently, can lead to different results.
Another important point is that, because the glass-box model is selected to approximate the black-box model, and not the data themselves, the method does not control the quality of the local fit of the glass-box model to the data. Thus, the latter model may be misleading.
Finally, in high-dimensional data, data points are sparse, and defining a "local neighborhood" of the instance of interest may not be straightforward. The importance of the neighborhood selection is discussed, for example, by @LIMESHAPstability. Sometimes even slight changes in the neighborhood strongly affect the obtained explanations.
To summarize, the most useful applications of LIME are limited to high-dimensional data for which one can define a low-dimensional interpretable data representation, as in image analysis, text analysis, or genomics.
## Code snippets for R {#LIMERcode}
LIME and its variants are implemented in various R and Python packages. For example, `lime` [@limePackage] started as a port of the LIME Python library [@lime], while `localModel` [@localModelPackage] and `iml` [@imlRPackage] are separate packages that implement a version of this method entirely in R.
Different implementations of LIME offer different algorithms for extraction of interpretable features, different methods for sampling, and different methods of weighting. For instance, regarding transformation of continuous variables into interpretable features, `lime` performs global discretization using quartiles, `localModel` performs local discretization using ceteris-paribus profiles (for more information about the profiles, see Chapter \@ref(ceterisParibus)), while `iml` works directly on continuous variables. Due to these differences, the packages yield different results (explanations).
Also, `lime`, `localModel`, and `iml` use different functions to implement the LIME method. Thus, we will use the `predict_surrogate()` function from the `DALEXtra` package. The function offers a uniform interface to the functions from the three packages.
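For instance, once the explainer object (`titanic_rf_exp`) and the data frame for the instance of interest (`johnny_d`) are available (both are constructed below), the three implementations can be accessed with essentially the same call; a sketch of the uniform interface is given here, and the implementation-specific arguments are discussed in the following subsections.

```{r, eval=FALSE}
# The same call, with only the `type` argument (and implementation-specific
# options) changing between the three packages.
predict_surrogate(explainer = titanic_rf_exp, new_observation = johnny_d,
                  type = "lime", n_features = 3, n_permutations = 1000)
predict_surrogate(explainer = titanic_rf_exp, new_observation = johnny_d,
                  type = "localModel", size = 1000, seed = 1)
predict_surrogate(explainer = titanic_rf_exp, new_observation = johnny_d,
                  type = "iml", k = 3)
```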
In what follows, for illustration purposes, we use the `titanic_rf` random forest model for the Titanic data developed in Section \@ref(model-titanic-rf). Recall that it was developed to predict the probability of surviving the sinking of the Titanic. Instance-level explanations are calculated for Johnny D, an 8-year-old passenger who travelled in the first class. We first retrieve the `titanic_rf` model-object and the data frame for Johnny D via the `archivist` hooks, as listed in Section \@ref(ListOfModelsTitanic). We also retrieve the version of the `titanic` data with imputed missing values.
```{r, warning=FALSE, message=FALSE, eval=FALSE}
titanic_imputed <- archivist::aread("pbiecek/models/27e5c")
titanic_rf <- archivist::aread("pbiecek/models/4e0fc")
johnny_d <- archivist::aread("pbiecek/models/e3596")
```
```
class gender age sibsp parch fare embarked
1 1st male 8 0 0 72 Southampton
```
Then we construct the explainer for the model by using the function `explain()` from the `DALEX` package (see Section \@ref(ExplainersTitanicRCode)). We also load the `randomForest` package, as the model was fitted by using function `randomForest()` from this package (see Section \@ref(model-titanic-rf)) and it is important to have the corresponding `predict()` function available.
```{r, warning=FALSE, message=FALSE, echo = FALSE, eval = TRUE}
library("randomForest")
library("DALEX")
titanic_rf_exp <- DALEX::explain(model = titanic_rf,
                                 data = titanic_imputed[, -9],
                                 y = titanic_imputed$survived == "yes",
                                 label = "Random Forest",
                                 verbose = FALSE)
```
```{r, warning=FALSE, message=FALSE, echo = TRUE, eval = FALSE}
library("randomForest")
library("DALEX")
titanic_rf_exp <- DALEX::explain(model = titanic_rf,
                                 data = titanic_imputed[, -9],
                                 y = titanic_imputed$survived == "yes",
                                 label = "Random Forest")
```
### The `lime` package {#LIMERcodelime}
The key functions in the `lime` package are `lime()`, which creates an explainer, and `explain()`, which evaluates explanations for new observations. As mentioned earlier, we will apply the `predict_surrogate()` function from the `DALEXtra` package to access these functions via an interface that is consistent with the approach used in the previous chapters.
The `predict_surrogate()` function requires two arguments: `explainer`, which specifies the name of the explainer-object created with the help of function `explain()` from the `DALEX` package, and `new_observation`, which specifies the name of the data frame for the instance for which prediction is of interest. An additional, important argument is `type`, which indicates the package with the desired implementation of the LIME method: either `"localModel"` (default), `"lime"`, or `"iml"`. In case of the `lime`-package implementation, we can specify two additional arguments: `n_features`, to indicate the maximum number ($K$) of explanatory variables to be selected by the K-LASSO method, and `n_permutations`, to specify the number of artificial data points to be sampled for the local-model approximation.
In the code below, we apply the `predict_surrogate()` function to the explainer-object for the random forest model `titanic_rf` and data for Johnny D. Additionally, we specify that the K-LASSO method should select no more than `n_features=3` explanatory variables based on a fit to `n_permutations=1000` sampled data points. Note that we use the `set.seed()` function to ensure repeatability of the sampling.
```{r, warning=FALSE, message=FALSE, eval=TRUE}
set.seed(1)
library("DALEXtra")
library("lime")
model_type.dalex_explainer <- DALEXtra::model_type.dalex_explainer
predict_model.dalex_explainer <- DALEXtra::predict_model.dalex_explainer
lime_johnny <- predict_surrogate(explainer = titanic_rf_exp,
                                 new_observation = johnny_d,
                                 n_features = 3,
                                 n_permutations = 1000,
                                 type = "lime")
```
<!-- Subsequently, we create an explainer, i.e., an object with all elements needed for calculation of explanations. This can be done by using the `lime()` function with the data frame used for model fitting and the model object as arguments `x` and `model`, respectively.
```{r, warning=FALSE, message=FALSE, eval=FALSE}
lime_rf <- lime(x = titanic_imputed[,colnames(johnny_d)], model = titanic_rf)
```
Finally, we generate an explanation. Toward this aim, we use the `lime::explain()` function. Note that it is worthwhile to indicate we use the `explain()` function from the `lime` package, because there is a similarly-named function in the `DALEX` package. The main arguments are `x`, which specifies the data frame for the instance of interest, and `explainer`, which indicates the name of the explainer object. In the code below, we additionally apply the `labels = "yes"` argument to specifiy the value of the dependent binary variable for which predictions are of interest. -->
The contents of the resulting object can be printed out in the form of a data frame with 11 variables.
```{r, warning=FALSE, message=FALSE, eval=TRUE}
(as.data.frame(lime_johnny))
```
<!-- Currently, function `lime::explain()` does not include any argument that would allow fixing the settings of the random-permutation algorithm to obtain a repeateable execution. Hence, in the output below, we present an exemplary set of results. Note that, to get all the details, it is advisable to print the object, obtained from the apllication of the `lime::explain()` function, as a data frame.
```{r, warning=FALSE, message=FALSE, eval=FALSE}
print(as.data.frame(lime_expl))
# model_type case label label_prob model_r2 model_intercept model_prediction feature
#1 classification 1 yes 0.422 0.6522297 0.5481286 0.4887004 gender
#2 classification 1 yes 0.422 0.6522297 0.5481286 0.4887004 age
#3 classification 1 yes 0.422 0.6522297 0.5481286 0.4887004 class
#4 classification 1 yes 0.422 0.6522297 0.5481286 0.4887004 fare
# feature_value feature_weight feature_desc data prediction
#1 2 -0.39647493 gender = male 1, 2, 8, 0, 0, 72, 4 0.578, 0.422
#2 8 0.14277999 age <= 22 1, 2, 8, 0, 0, 72, 4 0.578, 0.422
#3 1 0.15175337 class = 1st 1, 2, 8, 0, 0, 72, 4 0.578, 0.422
#4 72 0.04251332 21.00 < fare 1, 2, 8, 0, 0, 72, 4 0.578, 0.422
```
-->
The output includes column `case` that provides indices of observations for which the explanations are calculated. In our case there is only one index equal to 1, because we asked for an explanation for only one observation, Johnny D. The `feature` column indicates which explanatory variables were given non-zero coefficients by the K-LASSO method. The `feature_value` column provides information about the values of the original explanatory variables for the observations for which the explanations are calculated. The `feature_desc` column, in turn, indicates how the original explanatory variable was transformed. Note that the applied implementation of the LIME method dichotomizes continuous variables by using quartiles. Hence, for instance, *age* for Johnny D was transformed into a binary variable `age <= 22`.
Column `feature_weight` provides the estimated coefficients for the variables selected by the K-LASSO method for the explanation. The `model_intercept` column provides the value of the intercept. Thus, the linear combination of the transformed explanatory variables used in the glass-box model approximating the random forest model around the instance of interest, Johnny D, is given by the following equation (see Section \@ref(LIMEglas)):
$$
\hat p_{lime} = 0.55411 - 0.40381 \cdot 1_{male} + 0.16366 \cdot 1_{age \leq 22} + 0.16452 \cdot 1_{class = 1st} = 0.47848,
$$
where $1_A$ denotes the indicator variable for condition $A$. Note that the computed value corresponds to the number given in the column `model_prediction` in the printed output.
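The value can be verified by plugging in the coefficients directly (for Johnny D all three indicator variables are equal to 1):

```{r, eval=FALSE}
0.55411 - 0.40381 + 0.16366 + 0.16452
# [1] 0.47848
```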
<!-- Consequently, the predicted survival probability for the glass-box model is equal to
$$
p = e^{0.489}/(1+e^{0.489}) = 0.620.
$$
For the random forest model `titanic_rf`, the predicted probability was equal to 0.422, as seen in the `prediction` column (see also Section \@ref(predictions-titanic)). [TOMASZ: A SURPRISINGLY LARGE DIFFERENCE. CHECK?]-->
By applying the `plot()` function to the object containing the explanation, we obtain a graphical presentation of the results.
```{r , echo=TRUE, eval = FALSE}
plot(lime_johnny)
```
The resulting plot is shown in Figure \@ref(fig:limeExplLIMETitanic). The length of the bar indicates the magnitude (absolute value), while the color indicates the sign (red for negative, blue for positive) of the estimated coefficient.
(ref:limeExplLIMETitanicDesc) Illustration of the LIME-method results for the prediction for Johnny D for the random forest model `titanic_rf` and the Titanic data, generated by the `lime` package.
```{r limeExplLIMETitanic, echo=FALSE, fig.cap='(ref:limeExplLIMETitanicDesc)', out.width = '80%', fig.width=6, fig.height=3.5, fig.align='center'}
plot(lime_johnny)
```
### The `localModel` package {#LIMERcodelocMod}
The key function of the `localModel` package is the `individual_surrogate_model()` function that fits the local glass-box model. The function is applied to the explainer-object obtained with the help of the `DALEX::explain()` function (see Section \@ref(ExplainersTitanicRCode)). As mentioned earlier, we will apply the `predict_surrogate()` function from the `DALEXtra` package to access the functions via an interface that is consistent with the approach used in the previous chapters. To choose the `localModel`-implementation of LIME, we set argument `type="localModel"` (see Section \@ref(LIMERcodelime)). In that case, the method accepts, apart from the required arguments `explainer` and `new_observation`, two additional arguments: `size`, which specifies the number of artificial data points to be sampled for the local-model approximation, and `seed`, which sets the seed for the random-number generation, allowing for a repeatable execution.
In the code below, we apply the `predict_surrogate()` function to the explainer-object for the random forest model `titanic_rf` and data for Johnny D. Additionally, we specify that 1000 data points are to be sampled and we set the random-number-generation seed.
```{r, warning=FALSE, message=FALSE, eval=TRUE}
library("localModel")
locMod_johnny <- predict_surrogate(explainer = titanic_rf_exp,
                                   new_observation = johnny_d,
                                   size = 1000,
                                   seed = 1,
                                   type = "localModel")
```
The resulting object is a data frame with seven variables (columns). For brevity, we only print out the first three variables.
```{r, warning=FALSE, message=FALSE, eval=TRUE}
locMod_johnny[,1:3]
```
<!----
# estimated variable dev_ratio response
#1 0.23479837 (Model mean) 0.6521442
#2 0.14483341 (Intercept) 0.6521442
#3 0.08081853 class = 1st, 2nd, deck crew 0.6521442
#4 0.00000000 gender = female, NA, NA 0.6521442
#5 0.23282293 age <= 15.36 0.6521442
#6 0.02338929 fare > 31.05 0.6521442
---->
The printed output includes column `estimated` that contains the estimated coefficients of the LASSO regression model, which is used to approximate the predictions from the random forest model. Column `variable` provides the information about the corresponding variables, which are transformations of `original_variable`. Note that the version of LIME, implemented in the `localModel` package, dichotomizes continuous variables by using ceteris-paribus profiles (for more information about the profiles, see Chapter \@ref(ceterisParibus)). The profile for variable *age* for Johnny D can be obtained by using function `plot_interpretable_feature()`, as shown below.
```{r, eval=FALSE}
plot_interpretable_feature(locMod_johnny, "age")
```
The resulting plot is presented in Figure \@ref(fig:LIMEexample02). The profile indicates that the largest drop in the predicted probability of survival is observed when the value of *age* increases beyond about 15 years. Hence, in the output of the `predict_surrogate()` function, we see a binary variable `age <= 15.36`, as Johnny D was 8 years old.
(ref:LIMEexample02Desc) Discretization of the *age* variable for Johnny D based on the ceteris-paribus profile. The optimal change-point is around 15 years of age.
```{r LIMEexample02, warning=FALSE, message=FALSE, echo=FALSE, fig.cap='(ref:LIMEexample02Desc)', out.width = '70%', fig.width=7, fig.height=5, fig.align='center'}
plot_interpretable_feature(locMod_johnny, "age") +
  ggtitle("Interpretable representation for age", "") +
  xlab("age") + ylab("predicted survival probability") + theme_ema
```
By applying the generic `plot()` function to the object containing the LIME-method results, we obtain a graphical representation of the results.
```{r, eval=FALSE}
plot(locMod_johnny)
```
The resulting plot is shown in Figure \@ref(fig:limeExplLocalModelTitanic). The lengths of the bars indicate the magnitude (absolute value) of the estimated coefficients of the LASSO logistic regression model. The bars are placed relative to the value of the mean prediction, 0.235.
(ref:limeExplLocalModelTitanicDesc) Illustration of the LIME-method results for the prediction for Johnny D for the random forest model `titanic_rf` and the Titanic data, generated by the `localModel` package.
```{r limeExplLocalModelTitanic, echo=FALSE, eval = TRUE, fig.cap='(ref:limeExplLocalModelTitanicDesc)', out.width = '80%', fig.align='center', fig.width=7, fig.height=3}
plot(locMod_johnny) +
facet_null() +
  ggtitle("localModel explanations for Johnny D", "") + theme_drwhy_vertical() + theme_ema
```
<!---
(ref:limeExplLocalModelTitanicDesc1) Illustration of the LIME-method results for the prediction for `johny_d` for the random forest model `titanic_rf` and the Titanic data, generated by the `localModel` package.
```{r limeExplLocalModelTitanic1, echo=FALSE, fig.cap='(ref:limeExplLocalModelTitanicDesc1)', out.width = '60%', fig.align='center'}
knitr::include_graphics("figure/lime_expl_localModel_titanic.png")
```
--->
### The `iml` package {#LIMERcodeiml}
The key functions of the `iml` package are `Predictor$new()`, which creates an explainer, and `LocalModel$new()`, which develops the local glass-box model. The main arguments of the `Predictor$new()` function are `model`, which specifies the model-object, and `data`, the data frame used for fitting the model. As mentioned earlier, we will apply the `predict_surrogate()` function from the `DALEXtra` package to access the functions via an interface that is consistent with the approach used in the previous chapters. To choose the `iml`-implementation of LIME, we set argument `type="iml"` (see Section \@ref(LIMERcodelime)). In that case, the method accepts, apart from the required arguments `explainer` and `new_observation`, an additional argument `k` that specifies the number of explanatory variables included in the local-approximation model.
```{r, warning=FALSE, message=FALSE}
library("DALEXtra")
library("iml")
iml_johnny <- predict_surrogate(explainer = titanic_rf_exp,
                                new_observation = johnny_d,
                                k = 3,
                                type = "iml")
```
The resulting object includes a data frame `results` with seven variables that provide the results of the LASSO logistic regression model used to approximate the predictions of the random forest model. For brevity, we print out only selected variables.
```{r, warning=FALSE, message=FALSE}
iml_johnny$results[,c(1:5,7)]
```
The printed output includes column `beta` that provides the estimated coefficients of the local-approximation model. Note that two sets of three coefficients (six in total) are given, corresponding to the prediction of the probability of death (column `.class` assuming value `no`, corresponding to the value `"no"` of the `survived` dependent variable) and survival (`.class` assuming value `yes`). Column `x.recoded` contains the information about the value of the corresponding transformed (interpretable) variable. The value of the original explanatory variable is given in column `x.original`, with column `feature` providing the information about the corresponding variable. Note that the implemented version of LIME does not transform continuous variables. Categorical variables are dichotomized, with the resulting binary variable assuming the value of 1 for the category observed for the instance of interest and 0 for other categories.
The `effect` column provides the product of the estimated coefficient (from column `beta`) and the value of the interpretable covariate (from column `x.recoded`) of the model approximating the random forest model.
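This relationship can be verified directly on the object created above:

```{r, eval=FALSE}
# effect should equal the product of beta and the recoded variable value.
with(iml_johnny$results, all.equal(effect, beta * x.recoded))
```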
<!---
Interestingly, unlike in the case of the results obtained for the `lime` and `localModel` packages, it appears that *age* is not included in the list of important explanatory variables. The ceteris-paribus profile for *age* and Johny D, presented in Figure \@ref(fig:LIMEexample02), indicates that, for boys younger than 15-years of age, the predicted probability of survival does not change very much with age. Hence, given that *age* was used as a continuous variable in the model, it does not appear as an important variable.
#Interpretation method: LocalModel
#
#Analysed predictor:
#Prediction task: unknown
#
#Analysed data:
#Sampling from data.frame with 2207 rows and 7 columns.
#
#Head of results:
# beta x.recoded effect x.original feature
#1 -0.158368701 1 -0.1583687 1st class=1st
#2 1.739826204 1 1.7398262 male gender=male
#3 0.018515945 0 0.0000000 0 sibsp
#4 -0.001484918 72 -0.1069141 72 fare
#5 0.131819869 1 0.1318199 Southampton embarked=Southampton
#6 0.158368701 1 0.1583687 1st class=1st
--->
By applying the generic `plot()` function to the object containing the LIME-method results, we obtain a graphical representation of the results.
```{r , echo=TRUE, eval=FALSE}
plot(iml_johnny)
```
The resulting plot is shown in Figure \@ref(fig:limeExplIMLTitanic). It shows values of the two sets of three coefficients for both types of predictions (probability of death and survival).
(ref:limeExplIMLTitanicDesc) Illustration of the LIME-method results for the prediction for Johnny D for the random forest model `titanic_rf` and the Titanic data, generated by the `iml` package.
```{r limeExplIMLTitanic, echo=TRUE, fig.cap='(ref:limeExplIMLTitanicDesc)', out.width = '80%', fig.align='center', fig.width=7, fig.height=3}
plot(iml_johnny)
```
It is worth noting that *age*, *gender*, and *class* are correlated. For instance, crew members are only adults and mainly men. This is probably the reason why the three packages implementing the LIME method generate slightly different explanations for the model prediction for Johnny D.
## Code snippets for Python {#LIMEPythoncode}
In this section, we use the `lime` library for Python, which is probably the most popular implementation of the LIME method [@lime]. The `lime` library requires categorical variables to be encoded in a numerical format, which calls for some additional preprocessing of the data. Therefore, below we show how to use this method in Python step by step.
For illustration purposes, we use the random forest model for the Titanic data. Instance-level explanations are calculated for Henry, a 47-year-old passenger who travelled in the first class.
In the first step we read the Titanic data and encode categorical variables. In this case, we use the simplest encoding for *gender*, *class* and *embarked*, i.e., the label-encoding.
```{python, eval = FALSE}
import dalex as dx
titanic = dx.datasets.load_titanic()
X = titanic.drop(columns='survived')
y = titanic.survived
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
X['gender'] = le.fit_transform(X['gender'])
X['class'] = le.fit_transform(X['class'])
X['embarked'] = le.fit_transform(X['embarked'])
```
In the next step we train a random forest model.
```{python, eval = FALSE}
from sklearn.ensemble import RandomForestClassifier as rfc
titanic_fr = rfc()
titanic_fr.fit(X, y)
```
Now we define the observation for which the model's prediction will be explained. We write Henry's data into a `pandas.Series` object.
```{python, eval = FALSE}
import pandas as pd
henry = pd.Series([1, 47.0, 0, 1, 25.0, 0, 0],
                  index=['gender', 'age', 'class', 'embarked',
                         'fare', 'sibsp', 'parch'])
```
The `lime` library explains models that operate on images, text, or tabular data. In the latter case, we use the `LimeTabularExplainer` module.
```{python, eval = FALSE}
from lime.lime_tabular import LimeTabularExplainer
explainer = LimeTabularExplainer(X,
                                 feature_names=X.columns,
                                 class_names=['died', 'survived'],
                                 discretize_continuous=False,
                                 verbose=True)
```
The result is an explainer that can be used to interpret the model around specific observations.
In the following example we explain the behaviour of the model for Henry.
The `explain_instance()` method finds a local approximation with an interpretable linear model.
The result can be presented graphically with the `show_in_notebook()` method.
```{python, eval = FALSE}
lime = explainer.explain_instance(henry, titanic_fr.predict_proba)
lime.show_in_notebook(show_table=True)
```
The resulting plot is shown in Figure \@ref(fig:limePython1).
(ref:limePython1Desc) A plot of LIME model values for the random forest model and passenger Henry for the Titanic data.
```{r limePython1, echo=FALSE, out.width = '100%', fig.align='center', fig.cap='(ref:limePython1Desc)'}
knitr::include_graphics("figure/lime_python_1.png")
```
<!--
To calculate LIME surrogate we use the `predict_surrogate()` method which is a wrapper to the `lime` library. The first argument indicates the data observation for which the values are to be calculated. All other arguments are passed to the `explain` function in the `lime` library. As a result we get an explanation.
```{python, eval = FALSE}
import pandas as pd
henry = pd.DataFrame({'gender' : ['male'],
'age' : [47],
'class' : ['1st'],
'embarked': ['Cherbourg'],
'fare' : [25],
'sibsp' : [0],
'parch' : [0]},
index = ['Henry'])
import dalex as dx
titanic_rf_exp = dx.Explainer(titanic_rf, X, y,
label = "Titanic RF Pipeline")
print(titanic.survived)
lime_henry = titanic_rf_exp.predict_surrogate(henry,
feature_names=['age', 'fare', 'parch', 'sibsp',
'gender', 'class', 'embarked'],
class_names=['0','1'],
discretize_continuous=True,
num_features=4,
top_labels=1)
lime_john
```
TODO: show the LIME output
To visualize the obtained values, we simply call the `show_in_notebook()` method from `lime` library.
```{python, eval = FALSE}
bd_henry.show_in_notebook()
```
The resulting plot is shown in Figure \@ref(fig:limePython2).
(ref:limePython2Desc) A plot of LIME model values for the random forest model and passenger Henry for the Titanic data.
```{r limePython2, echo=FALSE, out.width = '100%', fig.align='center', fig.cap='(ref:limePython2Desc)'}
knitr::include_graphics("figure/shap_python_1.png")
```
-->