SeqExplainer is a Python package for interpreting sequence-to-function machine learning models. Most of the core functionality is for post-hoc analysis of a trained model. SeqExplainer currently supports:
- Filter interpretation (see the first sketch after this list)
- Attribution analysis
- Sequence evolution
- In silico experiments with a trained model, also known as global importance analysis (GIA; see the second sketch below)
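As a rough illustration of filter interpretation, the sketch below rescales a first-layer convolutional filter from a toy PyTorch model into a PWM-like matrix and renders it as a sequence logo with logomaker (one of the dependencies listed below). The model is a hypothetical stand-in, not SeqExplainer's own API.

```python
# A minimal filter-interpretation sketch, assuming a toy PyTorch CNN
# over one-hot encoded DNA (channels ordered A, C, G, T).
import torch
import torch.nn as nn
import pandas as pd
import logomaker

# Hypothetical toy first layer: 8 filters of width 11 over 4-channel input.
conv = nn.Conv1d(in_channels=4, out_channels=8, kernel_size=11)

# Take one filter's weights (4 x 11) and rescale into a PWM-like matrix:
# shift each position so the minimum is zero, then normalize columns to 1.
weights = conv.weight.detach()[0]                           # shape: (4, 11)
pwm = weights - weights.min(dim=0, keepdim=True).values     # shift >= 0
pwm = pwm / pwm.sum(dim=0, keepdim=True)                    # columns sum to 1

# logomaker expects a positions-by-letters DataFrame.
df = pd.DataFrame(pwm.T.numpy(), columns=list("ACGT"))
logomaker.Logo(df)
```

In practice one would visualize filters from a trained model (or activation-weighted subsequences rather than raw weights); the rescaling above is only one simple convention for mapping weights to logo heights.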
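Likewise, a GIA-style in silico experiment can be sketched by embedding a motif into random background sequences and measuring the average shift in the model's prediction. Again, the model and helper below are hypothetical stand-ins, not SeqExplainer functions.

```python
# A minimal GIA-style sketch, assuming a hypothetical toy model that maps
# one-hot DNA of shape (batch, 4, length) to a scalar prediction.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(4, 8, kernel_size=11), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 1),
)
model.eval()

def one_hot_random(n, length):
    """Sample n random one-hot background sequences of the given length."""
    idx = torch.randint(0, 4, (n, length))
    return torch.nn.functional.one_hot(idx, 4).transpose(1, 2).float()

backgrounds = one_hot_random(256, 100)

# Embed a fixed motif (an 8-bp poly-A, purely as a stand-in) at position 50.
motif = torch.nn.functional.one_hot(torch.zeros(8, dtype=torch.long), 4)
with_motif = backgrounds.clone()
with_motif[:, :, 50:58] = motif.T.float()

# Global importance: mean prediction shift caused by embedding the motif.
with torch.no_grad():
    delta = (model(with_motif) - model(backgrounds)).mean()
print(f"Mean effect of embedding the motif: {delta.item():.4f}")
```

Averaging over many random backgrounds is what distinguishes this global measurement from a single-sequence attribution map.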
The main dependencies of SeqExplainer are:
- python
- torch
- captum
- numpy
- matplotlib
- logomaker
- sklearn
- shap
The contributing guidelines below were adapted from https://github.com/pachterlab/kallisto.
All contributions, including bug reports, documentation improvements, and enhancement suggestions, are welcome. Everyone in the community is expected to abide by our code of conduct.
As we work towards a stable v1.0.0 release, we typically develop on feature branches. These are merged into `dev` once sufficiently tested. `dev` is the latest stable development branch, while `main` is used only for official releases and is considered to be stable. If you submit a pull request, please make sure to request a merge into `dev` and NOT `main`.