Repository containing experiments for evaluating the TrustyAI Explainability Toolkit.
Notebooks to reproduce available experiments:
- Original LIME implementation (discrete and continuous feature settings), impact-score evaluation: https://colab.research.google.com/drive/1jLe-tdtE7uGQ0KIKMG4PWJjDgPPDn5W0#scrollTo=KjIRWtglSX0C (a minimal usage sketch follows below)
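
As a rough illustration of the kind of tabular LIME explanation evaluated in the linked notebook, the sketch below uses the `lime` package on a placeholder dataset and model; the actual data, model, and impact-score computation live in the Colab notebook and are not reproduced here.

```python
# Minimal LIME tabular-explanation sketch (illustrative only; dataset and
# model are placeholders, not the setup used in the experiments).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Continuous-feature setting; passing `categorical_features=[...]` (column
# indices) would switch to the discrete-feature setting instead.
explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # (feature description, weight) pairs
```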
Install the `nam` library from https://github.com/AmrMKayid/nam:

```bash
git clone https://github.com/AmrMKayid/nam
cd nam
pip install .
```
`cd` to the `experiments` directory and run the training script:

```bash
python train_nam_pytorch_fico.py
```
Inside `saved_models/NAM_FICO` you will find NAM checkpoints that can be used for inference.
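
As a rough sketch of how such a checkpoint might be loaded for inference, the snippet below assumes a standard PyTorch checkpoint file; the file name shown is hypothetical, and the exact NAM model class and constructor arguments come from the `nam` library, so they may differ from what is outlined here.

```python
# Hedged sketch: load a saved NAM checkpoint for inference.
# The file name and checkpoint layout are assumptions, not the library's
# documented format.
import torch

ckpt_path = "saved_models/NAM_FICO/checkpoint.pt"  # hypothetical file name
ckpt = torch.load(ckpt_path, map_location="cpu")

if isinstance(ckpt, torch.nn.Module):
    # The whole model object was serialized; use it directly.
    model = ckpt
else:
    # Otherwise the file likely holds a state dict (possibly nested under a
    # key such as "state_dict"); rebuild the NAM model with the nam library
    # and load the weights into it.
    state_dict = ckpt["state_dict"] if isinstance(ckpt, dict) and "state_dict" in ckpt else ckpt
    print(sorted(state_dict.keys())[:10])  # inspect parameter names
    # model = <NAM model constructed via the nam library>
    # model.load_state_dict(state_dict)

# With a model in hand, inference on a batch of FICO feature rows would look like:
# model.eval()
# with torch.no_grad():
#     preds = model(torch.tensor(x_batch, dtype=torch.float32))
```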