Model interpretability and understanding for PyTorch
XAI - An eXplainability toolbox for machine learning
Leave One Feature Out Importance
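The leave-one-feature-out idea can be sketched without any particular library: refit the model with each feature removed and record how much the held-out error worsens. Below is a minimal NumPy sketch using a least-squares model on synthetic data; all names and data are illustrative and this is not the API of any listed package.

```python
import numpy as np

# Synthetic data: feature 0 matters a lot, feature 1 a little, feature 2 is noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

def holdout_mse(X_tr, y_tr, X_te, y_te):
    # Fit ordinary least squares and score on the held-out split.
    w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return float(np.mean((X_te @ w - y_te) ** 2))

X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]
base = holdout_mse(X_tr, y_tr, X_te, y_te)

# LOFO importance: increase in held-out error when a feature is dropped.
importances = []
for i in range(X.shape[1]):
    keep = [j for j in range(X.shape[1]) if j != i]
    importances.append(holdout_mse(X_tr[:, keep], y_tr, X_te[:, keep], y_te) - base)

print(importances)  # feature 0 should dominate
```

A larger increase in error means the dropped feature carried more unique information; near-zero (or slightly negative) values flag redundant or noisy features.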
Feature selector based on a self-selected algorithm, loss function, and validation method
ProphitBet is a machine learning soccer bet prediction application. It analyzes the form of teams, computes match statistics, and predicts match outcomes using advanced machine learning (ML) methods. The supported algorithms in this application are Neural Networks, Random Forests & Ensemble Models.
This package can be used for dominance analysis or Shapley Value Regression to find the relative importance of predictors on a given dataset. It can also be used for key driver analysis or marginal resource allocation models.
In this project, I apply various predictive maintenance techniques to accurately predict the impending failure of an aircraft turbofan engine.
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Beta Machine Learning Toolkit
A Julia package for interpretable machine learning with stochastic Shapley values
An R package for computing asymmetric Shapley values to assess causality in any trained machine learning model
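The Shapley value underlying the two packages above can be illustrated exactly on a tiny cooperative game: average each player's marginal contribution over all orderings. This pure-Python toy (the value function and player names are illustrative) is exact and exponential; the packages above estimate the same quantity stochastically for real models.

```python
import math
from itertools import permutations

def shapley(players, value):
    # Average each player's marginal contribution over every ordering.
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition = coalition | {p}
            phi[p] += value(coalition) - before
    n_orders = math.factorial(len(players))
    return {p: total / n_orders for p, total in phi.items()}

# Toy value function: fixed per-player contributions plus a synergy bonus
# when "a" and "b" are both in the coalition (all names are illustrative).
contrib = {"a": 3.0, "b": 1.0, "c": 0.0}
def v(S):
    bonus = 2.0 if {"a", "b"} <= S else 0.0
    return sum(contrib[p] for p in S) + bonus

phi = shapley(["a", "b", "c"], v)
print(phi)  # the synergy bonus splits equally between a and b
```

By symmetry the 2.0 synergy bonus is shared equally between "a" and "b", and the values sum to the grand coalition's worth (efficiency).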
Adding feature_importances_ property to sklearn.cluster.KMeans class
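KMeans has no native `feature_importances_`, so any definition is a heuristic. One plausible choice, sketched below without scikit-learn, scores each feature by how strongly it separates the cluster centers (per-feature variance of the centroids, normalized to sum to 1). The linked repo may use a different definition; this is only an illustrative sketch.

```python
import numpy as np

def kmeans_feature_importances(centers):
    # Per-feature spread of the cluster centers: features along which the
    # centroids differ most are treated as most important to the clustering.
    spread = centers.var(axis=0)
    return spread / spread.sum()

# Hypothetical fitted centroids for 3 clusters over 2 features.
centers = np.array([[0.0, 5.0],
                    [0.1, -5.0],
                    [-0.1, 0.0]])
imps = kmeans_feature_importances(centers)
print(imps)  # the second feature separates the clusters far more
```

A limitation of this heuristic is that it ignores within-cluster variance; a between/within variance ratio (ANOVA-style) is a common refinement.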
Awesome papers on Feature Selection
Routines and data structures for using isarn-sketches idiomatically in Apache Spark
Variance-based Feature Importance in Neural Networks
Official repository of the paper "Interpretable Anomaly Detection with DIFFI: Depth-based Isolation Forest Feature Importance", M. Carletti, M. Terzi, G. A. Susto.
Using / reproducing DAC from the paper "Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees"
CancelOut is a special layer for deep neural networks that can help identify a subset of relevant input features for streaming or static data.
This repository contains the implementation of SimplEx, a method to explain the latent representations of black-box models with the help of a corpus of examples. For more details, please read our NeurIPS 2021 paper: 'Explaining Latent Representations with a Corpus of Examples'.