
Model-Agnostic Explanation Library (MAX Lib)

This repository contains a highly modular, model-agnostic framework for explanation generation. You will also find an introduction to the library here, which may help you explain your ML predictions. The introduction walks you through the essential components of the library, presents examples of how to extend it, and ultimately guides you through generating a first explanation with the help of this library.

Developers are encouraged to consult the Python documentation for more detailed information on the API.

Currently implemented explanation methods:

Table of Contents

Installation

Prerequisites

Installation requires the package manager Poetry and Python 3.8.

Package Installation

Non-local Installation

Activate your desired Python environment and install MAXi via:

  • Pip
pip install git+https://github.com/dailab/MAXi-XAI-lib.git
  • Poetry
poetry add git+https://github.com/dailab/MAXi-XAI-lib.git

Local Installation

Alternatively, if you intend to modify MAXi-lib locally, these steps are necessary:

  1. Navigate to the desired destination directory.
  2. Clone the repository:
git clone https://github.com/dailab/MAXi-XAI-lib.git
  3. Now, either install the package in a dedicated Python environment:
cd MAXi-XAI-lib
poetry install
  4. Or install the package within another Poetry environment:
cd *LOCATION_OF_YOUR_PACKAGE*
poetry add --editable *PATH_TO_MAX_LIB*

Installation Note

If you use Poetry as the Python package manager within your project, it is recommended to constrain the Python version to:

python = ">=3.8,<3.12"

Otherwise, Poetry may run into dependency-resolution issues.

Components

This section covers the main components of this library. Each component plays an integral part in explanation generation. An overview of the components is displayed below:

Class Diagram

For every entity, there exists a base class that must be inherited from. This makes the implementation of custom components simpler, as the API is described in detail in those abstract classes. Furthermore, the API is automatically enforced by Python.
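The enforcement mentioned above is standard Python `abc` behavior, sketched below with hypothetical names (`BaseExplanationModel`, `get_loss` are illustrative, not MAXi's actual API):

```python
from abc import ABC, abstractmethod


# Hypothetical base class in the style of the library's abstract classes.
class BaseExplanationModel(ABC):
    @abstractmethod
    def get_loss(self, data):
        """Return the loss value for the given input."""


# A complete subclass implements every abstract method.
class MyLoss(BaseExplanationModel):
    def get_loss(self, data):
        return sum(data)


# An incomplete subclass is still definable ...
class Incomplete(BaseExplanationModel):
    pass


print(MyLoss().get_loss([1, 2, 3]))  # 6

# ... but Python enforces the API at instantiation time.
try:
    Incomplete()
except TypeError as err:
    print("abstract method not implemented:", err)
```

Because the check happens on instantiation rather than on call, a missing method surfaces immediately instead of deep inside an optimization loop.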

Loss Package (Explanation Models)

API

  • Modules in this subpackage implement the target function of the optimization, since the explanation is formulated as an optimization problem
  • Measures the cost of a candidate explanation
  • Formalizes the model-agnostic explanation method
  • Incorporates the external inference model, which is called for its predictions

Currently implemented: see the loss package documentation
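The bullet points above can be sketched as a minimal loss class. All names here are illustrative assumptions, not MAXi's real classes: the loss stores an external inference function, calls it for its predictions, and measures the cost of a candidate input.

```python
import numpy as np


# Illustrative sketch of an explanation model (loss): the cost is the
# squared distance of the target class's probability from 1.0.
class L2TargetLoss:
    def __init__(self, inference_fn, target_class):
        self.inference_fn = inference_fn  # external model, called for predictions
        self.target_class = target_class

    def __call__(self, x):
        probs = self.inference_fn(x)
        return (1.0 - probs[self.target_class]) ** 2


# Stand-in inference model returning class probabilities via a softmax.
def dummy_model(x):
    logits = np.array([x.sum(), -x.sum()])
    e = np.exp(logits - logits.max())
    return e / e.sum()


loss = L2TargetLoss(dummy_model, target_class=0)
print(loss(np.array([0.5, 0.5])))
```

An optimizer minimizing this loss would push the input toward regions where the model assigns high probability to the target class, which is the core idea of formulating explanation as optimization.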

Inference Package

  • Inference methods must return an array of classification probabilities so that the loss function can incorporate the prediction into its calculations
  • If that is not the case for your model, consider implementing a custom Inference Quantizer
  • This subpackage consists of modules that help you transform an arbitrary prediction result into the required format
  • You can disregard this package when no pre-inference processing is required and the prediction is already in a method-conforming format

Inference Quantizer

API

  • The BaseQuantizer translates the produced prediction into an explanation-method-compatible format
  • The inference result must follow the typical classification format: each entry of the array indicates the likelihood of a class being present in the image
  • The input can be any type of model prediction, e.g. a segmentation mask
  • A valid output would be, e.g., array([0.1, 2.3, -4.1])
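As a concrete illustration of the segmentation-mask case mentioned above, here is a hypothetical quantizer (the class name and call convention are assumptions; MAXi's BaseQuantizer API may differ). It converts a mask of class indices into a per-class score vector:

```python
import numpy as np


# Hypothetical quantizer: maps an (H x W) mask of class indices to a
# per-class score vector, the classification-style format a loss expects.
class SegmentationQuantizer:
    def __init__(self, n_classes):
        self.n_classes = n_classes

    def __call__(self, mask):
        # The fraction of pixels assigned to each class serves as its
        # "likeliness of being present in the image".
        counts = np.bincount(mask.ravel(), minlength=self.n_classes)
        return counts / mask.size


mask = np.array([[0, 1],
                 [1, 1]])
quantizer = SegmentationQuantizer(n_classes=3)
print(quantizer(mask))  # [0.25 0.75 0.  ]
```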

Inference Wrapper

API

  • This class offers the option to implement a preprocessing (pre-inference) procedure
  • Furthermore, it stores the external inference method and can subsequently apply the Inference Quantizer
  • Given an image, the procedures are called in the following order: preprocessing -> inference -> quantizer
  • Each operation is optional; when omitted, the identity function is used
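The call order and identity-function defaults described above can be sketched in a few lines (class and argument names are illustrative, not MAXi's exact API):

```python
# Sketch of the wrapper idea: each stage defaults to the identity
# function and is applied in the order preprocessing -> inference -> quantizer.
class InferenceWrapper:
    def __init__(self, inference_fn, preprocess=None, quantizer=None):
        identity = lambda x: x
        self.preprocess = preprocess or identity
        self.inference_fn = inference_fn
        self.quantizer = quantizer or identity

    def __call__(self, image):
        return self.quantizer(self.inference_fn(self.preprocess(image)))


# Toy pipeline: add 1 to each value, then double it; no quantizer needed,
# so the identity is used for that stage.
wrapped = InferenceWrapper(
    inference_fn=lambda x: [v * 2 for v in x],
    preprocess=lambda x: [v + 1 for v in x],
)
print(wrapped([1, 2]))  # [4, 6]
```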

Class Diagram

Computation Components Package

This package contains the engines that solve the optimization problem; in other words, it contains the optimizer algorithms and gradient calculation methods. As with the loss functions, we provide a base class for both the optimizers and the gradient methods to make extending the library easy.

Optimizer

API

Currently implemented:

  • Adaptive Exponentiated Gradient
  • Adaptive Optimistic Exponentiated Gradient
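For orientation, the textbook exponentiated-gradient update that the adaptive variants above build on looks as follows (this is the standard EG step on the probability simplex, not MAXi's implementation; the learning rate `eta` is an assumed parameter):

```python
import numpy as np


# Standard exponentiated-gradient step: multiplicative update followed by
# re-normalization, which keeps the iterate on the probability simplex.
def eg_step(w, grad, eta=0.1):
    w_new = w * np.exp(-eta * grad)
    return w_new / w_new.sum()


w = np.array([0.5, 0.5])
grad = np.array([1.0, -1.0])
w = eg_step(w, grad)
print(w)  # weight shifts toward the coordinate with the negative gradient
```

Adaptive variants typically adjust `eta` per step from observed gradients; see the API documentation for the exact schemes used here.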

Gradient (Calculation Method)

API

Currently implemented:

  • Gradient Estimation
  • TensorFlow-based Gradient Calculation
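Gradient estimation is useful when the inference model is a black box with no analytic gradients. A generic central-difference scheme of this kind can be sketched as follows (this is a standard technique, not necessarily the estimator MAXi implements):

```python
import numpy as np


# Central-difference gradient estimation: perturb each coordinate by +/- eps
# and approximate the partial derivative from the two loss evaluations.
def estimate_gradient(f, x, eps=1e-5):
    grad = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        step = np.zeros_like(x, dtype=float)
        step[i] = eps
        grad[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return grad


f = lambda x: (x ** 2).sum()
print(estimate_gradient(f, np.array([1.0, -2.0])))  # close to [2., -4.]
```

Note the cost: two loss evaluations per input dimension per step, which is why analytic backends such as the TensorFlow-based method are preferable when available.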

Explanation Generator

API

The Explanation Generator interconnects all of the above-mentioned components and thereby enables the optimization procedure with respect to the defined explanation model.
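Conceptually, the generator runs a loop that wires the loss, the gradient method, and the optimizer together. The sketch below illustrates that loop with toy components; the function names and signatures are assumptions, so consult the API documentation for MAXi's actual ExplanationGenerator interface.

```python
import numpy as np


# Conceptual generator loop: repeatedly compute the gradient of the loss
# at the current candidate and apply one optimizer update.
def generate_explanation(loss_fn, grad_fn, optimizer_step, x0, n_iter=50):
    x = x0.copy()
    for _ in range(n_iter):
        g = grad_fn(loss_fn, x)    # gradient of the loss at x
        x = optimizer_step(x, g)   # one optimizer update
    return x


# Toy components: quadratic loss with minimum at 3, its analytic gradient,
# and plain gradient descent as the optimizer.
loss = lambda x: ((x - 3.0) ** 2).sum()
grad = lambda f, x: 2 * (x - 3.0)
step = lambda x, g: x - 0.1 * g

explanation = generate_explanation(loss, grad, step, np.zeros(2))
print(explanation)  # converges toward [3., 3.]
```

In the real library, the loss would wrap your inference model (via the Inference Wrapper), and the optimizer and gradient method would come from the computation components package.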

API Documentation

Class Diagram

Tutorials

General Tutorials

Computation Component Tutorials

Inference Component Tutorials

Explanation Model Tutorials

This open source project is funded by the German Federal Ministry for Economic Affairs and Climate Action as part of the EMPAIA project under Grant Agreement No. 01MK20002C.
