Releases: ekrell/geoscience-attribution-benchmarks
Update to the EDS submission results: added results figures for all XAI methods
Completed 'globalcov' benchmark to generate results for Environmental Data Science submission
Generates a suite of benchmarks to analyze the influence of strongly correlated input features on XAI-based attributions.
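As a rough sketch of the idea only (not the repository's actual scripts or config format; the function and parameter names below are hypothetical), the suite can be thought of as a set of synthetic datasets drawn from covariance matrices of increasing correlation strength:

```python
# Hypothetical sketch: build covariance matrices with increasing spatial
# correlation strength and draw a synthetic dataset from each one.
import numpy as np

def make_covariance(n_features, length_scale, grid_width):
    # Squared-exponential covariance over a 2-D grid of cells (flattened index).
    idx = np.arange(n_features)
    xy = np.stack([idx % grid_width, idx // grid_width], axis=1)
    d2 = ((xy[:, None, :] - xy[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

rng = np.random.default_rng(0)
n_features, grid_width, n_samples = 64, 8, 1000

# One synthetic dataset per correlation strength -> a "suite" of benchmarks
for length_scale in [0.5, 1.0, 2.0, 4.0]:
    cov = make_covariance(n_features, length_scale, grid_width)
    X = rng.multivariate_normal(np.zeros(n_features), cov, size=n_samples)
    # ...define a known attribution function, train models, and run XAI methods...
```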
unicov benchmark: analyze variance of XAI methods
This release includes the benchmark unicov, a set of scripts and config files for generating a suite of XAI benchmarks. The purpose is to quantitatively demonstrate that XAI outputs can vary across replicated model training runs: the only change is the initial seed of the NN model weights, yet the XAI outputs can differ substantially. We show that increasing the strength of correlation among input features creates many potential learned functions that achieve high performance. Because the model has so many options for what to learn from the data, each trained model can yield very different explanations.
This release corresponds to the results shown at AGU 2023 and AMS 2024. The poster is available here.
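A minimal sketch of that experimental design is shown below. It is not the actual unicov scripts: scikit-learn's MLP and permutation importance stand in for the repository's NN models and XAI methods, and the synthetic data here is illustrative only.

```python
# Hypothetical illustration of the variance experiment: identical data and
# architecture, only the weight-initialization seed changes across runs.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=2000)  # known synthetic function

attributions = []
for seed in range(5):
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                         random_state=seed)  # only the init seed differs
    model.fit(X, y)
    # Permutation importance stands in for any XAI attribution method
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    attributions.append(result.importances_mean)

# Spread of attributions across replicate trainings of the same model
print(np.std(np.stack(attributions), axis=0))
```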
Implementation of Mamalakis et al. (2022)
This release includes the benchmark sstanom, a synthetic XAI benchmark where the covariance matrix used to generate synthetic samples is calculated from real SST anomaly data. This is an implementation of the original XAI benchmarks proposed by Mamalakis et al. (2022). See benchmarks/sstanom/README.md for more information.
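For illustration, a simplified sketch of that construction follows. The placeholder `sst_anom` array and the simple piecewise-linear response are assumptions standing in for the actual SST anomaly data and the exact functional form used in the benchmark.

```python
# Illustrative sketch of the Mamalakis et al. (2022)-style construction:
# estimate a covariance matrix from real SST anomaly fields, draw synthetic
# samples from it, and define a synthetic response whose per-feature
# contributions (the ground-truth attributions) are known exactly by design.
import numpy as np

# sst_anom: (n_times, n_gridcells) array of SST anomaly fields
# (placeholder random data here; the benchmark uses real SST anomaly data)
sst_anom = np.random.default_rng(0).normal(size=(500, 100))

cov = np.cov(sst_anom, rowvar=False)               # spatial covariance
rng = np.random.default_rng(1)
X = rng.multivariate_normal(np.zeros(cov.shape[0]), cov, size=10_000)

# Simplified piecewise-linear local functions of each grid cell, summed,
# so each feature's contribution to each sample is known exactly.
slopes_lo = rng.normal(size=cov.shape[0])
slopes_hi = rng.normal(size=cov.shape[0])
contrib = np.where(X < 0, slopes_lo * X, slopes_hi * X)  # ground-truth attributions
y = contrib.sum(axis=1)
```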