License: https://github.com/solegalli/machine-learning-interpretability/blob/master/LICENSE
Sponsorship: https://www.trainindata.com/

Machine Learning Interpretability - Code Repository

Code repository for the online course Machine Learning Interpretability

Course launch: 30th November 2023

Actively maintained.

Table of Contents

  1. Machine Learning Interpretability

    1. Interpretability in the context of Machine Learning
    2. Local vs Global Interpretability
    3. Intrinsically explainable models
    4. Post-hoc explainability methods
    5. Challenges to interpretability
    6. How to make models more explainable
  2. Intrinsically Explainable Models

    1. Linear and Logistic Regression
    2. Decision trees
    3. Random forests
    4. Gradient boosting machines
    5. Global and local interpretation
  3. Post-hoc methods - Global explainability

    1. Permutation Feature Importance
    2. Partial dependence plots
    3. Accumulated local effects
  4. Post-hoc methods - Local explainability

    1. LIME
    2. SHAP
    3. Individual conditional expectation
  5. Featuring the following Python interpretability libraries (see the short example sketches after this list)

    1. Scikit-learn
    2. treeinterpreter
    3. Eli5
    4. Dalex
    5. Alibi
    6. pdpbox
    7. Lime
    8. Shap
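
To give a flavour of the post-hoc global methods listed above, here is a minimal sketch of permutation feature importance using scikit-learn; the dataset and model are illustrative choices only, not taken from the course notebooks.

```python
# Minimal sketch: permutation feature importance with scikit-learn.
# Dataset and model are illustrative choices, not from the course notebooks.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test-set performance;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for name, mean, std in zip(X_test.columns, result.importances_mean, result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```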

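And for the local post-hoc methods, a minimal SHAP sketch under the same caveat (illustrative data and model, not the course material):

```python
# Minimal sketch: local explanations with SHAP's TreeExplainer.
# Dataset and model are illustrative choices, not from the course notebooks.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree-based models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one row of per-feature contributions per observation

# For a single observation, the per-feature contributions plus the base value
# add up (approximately) to the model's prediction for that row.
print(dict(zip(X.columns, shap_values[0])))
print("base value:", explainer.expected_value)
print("prediction:", model.predict(X.iloc[[0]])[0])
```
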
Links

  • Train in Data: https://www.trainindata.com/