04 - Model Interpretability - 01-03-2019

Interpretability and explainable machine-learning models.

Introduction

As our models get more and more complex and non-linear, it becomes difficult to clearly understand how they function and the reasons that led to a specific result. Many attempts have been made in the field so far: visualization of CNN neurons and measuring feature importance with LIME, SHAP, and TCAV, to name a few. These methods shed light on feature importance for a specific prediction/inference and can help in understanding whether the model is biased.

Reading material and preparation

What exactly do we mean by model interpretability?

Jenn Wortman Vaughan from Microsoft examines various aspects of interpretability from a user perspective:

Viktoria Krakovna from DeepMind discusses the importance of interpretability in reinforcement learning:

An extensive book on ML interpretability (if you wish to dig deeper):

https://christophm.github.io/interpretable-ml-book/

Analyzing feature-importance using a 'shadow' linear model:

The Intriguing Properties of Model Explanations:

Lightning talk:

LIME

SHAP

TCAV

Manifold

Various ML model architectures:

CNN dissection and visualization:

GAN dissection:

https://gandissect.csail.mit.edu/

Seq2Seq:

NLP - Word & Sentence Embedding:

Autonomous Vehicles:

https://kimjinkyu.files.wordpress.com/2017/12/nips_2017.pdf

Meeting Notes

Model interpretability depends on the target audience and on the context. A doctor will need a different explanation than a judge for the same prediction. It may also be personal: an explanation that satisfies one person may not be enough for another. For data scientists and developers, it may be a tool to adjust the model, i.e. for debugging. Therefore, when speaking about model interpretability or reasoning, there should be a clear distinction between explanations made for the model creator (debugging a model) and explanations made for the end user.

A concern was raised: when dissecting a neural network, are we focusing too much on the neurons that lend themselves to explanation while ignoring neurons that do not have a clear purpose (which are the majority of the neurons in the network)?

"Thinking fast and slow" from Kahnmann was brought as a possible explanation for neural network, where "intuition" was given as an explanation for a neural net. But this can not be satisfactory explanation for a model. Not morally nor legally. ML should be explainable, and not be held as an intuition, as its acceptance by humans in a mixed world, depends on understanding its "motives".

Humans make decisions first and only afterwards work out the reasons that led to those decisions. In a way, LIME acts similarly by supplying explanations for predictions after they have been made. Clients are mostly interested in these local explanations (as opposed to global explanations of the whole model), usually after an unexpected prediction has been made.
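
Below is a minimal sketch of this post-hoc, local flavor of explanation, assuming the `lime` and `scikit-learn` packages (the dataset and model are arbitrary choices for illustration, not part of the discussion):

```python
# Sketch: LIME explains a single prediction *after* the black-box model has made it.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target

# The black-box model: predictions come first, explanations afterwards.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# A local explanation for one (possibly unexpected) prediction.
instance = X[0]
explanation = explainer.explain_instance(instance, model.predict_proba, num_features=5)
print(explanation.as_list())  # [(feature condition, local weight), ...]
```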

The validity, variance, and quality of the training data are also important factors in the quality of a model's explanation.

The current usage of LIME gives an initial intuition about the model's behavior, but this intuition may be wrong. The linear surrogate model created by the current LIME implementation has unexplained inner assumptions regarding the distance (delta or epsilon) around the local point, and changes to this distance lead to different explanations. In addition, it is sensitive to noise, depends on a 'correct' feature subset selection, and does not take into consideration the importance of combinations of features and their mutual contribution. Combinations of features are accounted for in SHAP, as well as in the second version of LIME, anchor-LIME.
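
The kernel-width issue can be seen directly by varying that distance parameter and comparing the resulting explanations. This is a hedged sketch that reuses the hypothetical `model`, `X`, and `data` from the previous example:

```python
# Sketch: the same instance, explained with different kernel widths.
from lime.lime_tabular import LimeTabularExplainer

instance = X[0]
for kernel_width in (0.5, 1.0, 3.0, None):  # None = LIME's default heuristic
    explainer = LimeTabularExplainer(
        X,
        feature_names=list(data.feature_names),
        class_names=list(data.target_names),
        mode="classification",
        kernel_width=kernel_width,
        random_state=0,
    )
    explanation = explainer.explain_instance(instance, model.predict_proba, num_features=5)
    print(f"kernel_width={kernel_width}: {explanation.as_list()}")
# The top features and their weights can shift between kernel widths,
# which is exactly the instability discussed above.
```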

Both LIME and SHAP are in fact additional models built to explain the original model. A question arose: can we trust the model that explains the model? And if not, how deep should we go in nesting explanation models?
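
For concreteness, this is what the "model that explains the model" looks like with SHAP, again as a sketch assuming the `shap` package and the same hypothetical `model` and `X` as above:

```python
# Sketch: SHAP is itself another model layered on top of the original one.
import numpy as np
import shap

# TreeExplainer approximates Shapley values for tree ensembles, taking
# combinations of features into account rather than one feature at a time.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])  # per-feature contributions per prediction

# Depending on the shap version this is a list (one array per class) or a single
# array with a class dimension; either way, it is the output of a second model
# whose own assumptions we are asked to trust.
print(np.shape(shap_values))
```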

To obtain a decent and satisfying model explanation, a human should be in the loop from an early stage, i.e. users should be presented with the prediction explanations and perhaps even asked to rate them. Currently, people place more trust in companies and brands, and that trust carries over to how much they trust the models.

One of the disadvantages of end-to-end models is that they lack an explanation and reasoning for their predictions. But would a combination of simple models still be explainable?

The topic of liability was briefly discussed as well: AI makers and users share legal responsibility for an event (such as an autonomous car accident). Given that morality is culture-dependent, how should an AI be trained? This also implies that explainability may differ by location and culture.