
On Model Explainability: From LIME, SHAP, to Explainable Boosting


Model explainability has gained increasing attention among machine learning practitioners, especially with the popularization of deep learning frameworks, which encourages the use of ever more complicated models in pursuit of accuracy. In reality, however, the model with the highest accuracy may not be the one that gets deployed: trust is an important factor affecting the adoption of complicated models. In this notebook we give a brief introduction to several popular methods for model explainability, with a focus on hands-on examples demonstrating how to actually explain a model under a variety of use cases.
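
As a taste of what the hands-on sections cover, below is a minimal sketch using the shap, lime, interpret, and scikit-learn packages. The toy dataset, the random forest model, and all variable names here are illustrative rather than taken from the notebook itself.

import shap
from lime.lime_tabular import LimeTabularExplainer
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy data and a black-box model to be explained.
data = load_breast_cancer(as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# SHAP: additive feature attributions based on Shapley values.
shap_values = shap.TreeExplainer(model).shap_values(X_test)

# LIME: a local surrogate model explaining a single prediction.
lime_explainer = LimeTabularExplainer(
    X_train.values, feature_names=list(X_train.columns), mode="classification"
)
lime_exp = lime_explainer.explain_instance(
    X_test.values[0], model.predict_proba, num_features=5
)

# Explainable Boosting Machine: a glassbox model that is interpretable by design.
ebm = ExplainableBoostingClassifier().fit(X_train, y_train)
ebm_global = ebm.explain_global()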

Dependencies

To install all the Python packages used in the notebook, run:

pip install -r requirements.txt

Optional Dependency

To export static visualizations from interpret, we will need orca:

npm install -g electron@1.8.4 orca

npm is required. For other installation methods, please refer to the official documentation of orca.
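
For reference, here is a hedged sketch of the kind of static export that orca enables, assuming an interpret explanation object whose visualize() method returns a plotly figure; the dataset and output file name are purely illustrative.

import plotly.io as pio
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

# Fit a small EBM just to obtain a plotly figure to export.
data = load_breast_cancer(as_frame=True)
ebm = ExplainableBoostingClassifier().fit(data.data, data.target)
fig = ebm.explain_global().visualize(0)  # plotly figure for the first feature

# write_image relies on the orca executable in older plotly releases
# (newer releases use kaleido instead).
pio.write_image(fig, "ebm_feature_0.png")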

In the notebook we embed native plotly output in the html, so orca is not actually required. Extra effort is made at the Rmd level to make sure plotly.js is included only once in the notebook, to keep the file size manageable.

Reproducibility

Pull the code base:

git clone git@github.com:everdark/k9.git
cd k9/notebooks/ml/model_explain

Though most of the coding examples are written in Python, the html notebook itself is rendered with R using R Markdown. To install the required R packages, run:

Rscript install_packages.R

Then to render the html output:

PYTHON_PATH=$(which python) Rscript render.R