
Releases: ekrell/geoscience-attribution-benchmarks

Update to the EDS submission results: added results figures for all XAI methods

20 Jul 18:33
1ed3aa9

Supplemental figures for the submission are available at experiments/eds_2024_results_summary.zip.

Completed 'globalcov' benchmark to generate results for Environmental Data Science submission

17 Jul 16:59

This release generates a suite of benchmarks for analyzing the influence of strongly correlated input features on XAI-based attributions.
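
For intuition, here is a minimal sketch of the kind of setup involved: drawing synthetic samples whose features share a tunable correlation strength and defining a linear target with known per-feature attributions. The function name and parameters below are illustrative assumptions, not the actual globalcov pipeline.

```python
import numpy as np

def make_correlated_samples(n_samples, n_features, rho, seed=0):
    """Draw samples from N(0, C), where C has 1s on the diagonal and
    `rho` everywhere else (equicorrelated features)."""
    rng = np.random.default_rng(seed)
    cov = np.full((n_features, n_features), rho)
    np.fill_diagonal(cov, 1.0)
    return rng.multivariate_normal(np.zeros(n_features), cov, size=n_samples)

# Known linear target: the exact attribution of feature i for a sample x
# is w[i] * x[i], which a faithful XAI method should recover.
rng = np.random.default_rng(1)
X = make_correlated_samples(n_samples=5000, n_features=10, rho=0.9)
w = rng.normal(size=10)
y = X @ w
true_attributions = X * w   # per-sample, per-feature ground truth
```

Varying `rho` controls how strongly the inputs co-vary, which is the property the benchmark suite probes.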

unicov benchmark: analyze variance of XAI methods

23 Feb 18:52

This release includes the benchmark unicov, a set of scripts and config files for generating a suite of XAI benchmarks. The purpose is to quantitatively demonstrate that XAI outputs can vary across replicated model training runs: the only change is the initial seed of the NN model weights, yet the XAI outputs can differ substantially. We show that increasing the strength of correlation among input features admits many candidate learned functions that achieve high performance. Because the model has so many ways to fit the data, each trained replicate can yield very different explanations.
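
As a rough illustration (not the unicov scripts themselves), the sketch below trains several identical networks that differ only in the random seed and compares a simple central-difference saliency between replicates. The saliency function is a stand-in for the XAI methods used in the benchmark, and the data and model are placeholders.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]        # arbitrary nonlinear target

def saliency(model, x, eps=1e-3):
    """Central-difference gradient of the model output w.r.t. each input
    feature; a simple stand-in for the XAI methods in the benchmark."""
    grads = np.zeros_like(x)
    for i in range(x.size):
        hi, lo = x.copy(), x.copy()
        hi[i] += eps
        lo[i] -= eps
        grads[i] = (model.predict(hi[None])[0] - model.predict(lo[None])[0]) / (2 * eps)
    return grads

x_query = X[0]
attributions = []
for seed in range(5):                           # only the seed changes
    net = MLPRegressor(hidden_layer_sizes=(32, 32), random_state=seed,
                       max_iter=2000).fit(X, y)
    attributions.append(saliency(net, x_query))

# Rank agreement between consecutive replicates; low values indicate
# unstable explanations even when predictive skill is similar.
for a, b in zip(attributions, attributions[1:]):
    print(spearmanr(a, b).correlation)
```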

This release corresponds to the results shown at AGU 2023 and AMS 2024. The poster is available here.

Implementation of Mamalakis et al. (2022)

15 Feb 19:12

This release includes the benchmark sstanom, a synthetic XAI benchmark where the covariance matrix used to generate synthetic samples is calculated from real sea surface temperature (SST) anomaly data. This is an implementation of the original XAI benchmarks proposed by Mamalakis et al. (2022).

See benchmarks/sstanom/README.md for more information.
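
A minimal sketch of the general idea, assuming a simple linear synthetic target rather than the piecewise-linear functions used by Mamalakis et al. (2022): estimate a spatial covariance matrix from gridded anomaly fields, draw synthetic samples from it, and define a target whose exact per-gridpoint attributions are known by construction. The random array standing in for the SST anomaly fields is purely illustrative.

```python
import numpy as np

# Stand-in for real SST anomaly fields, shape (n_times, n_gridpoints);
# in the benchmark these come from the SST anomaly dataset.
fields = np.random.default_rng(0).normal(size=(500, 100))

cov = np.cov(fields, rowvar=False)              # spatial covariance matrix
rng = np.random.default_rng(1)
X = rng.multivariate_normal(np.zeros(cov.shape[0]), cov, size=10000)

# Synthetic target with known attributions: y = sum_i w_i * x_i, so the
# exact attribution of grid point i for a sample is w[i] * x[i].
w = rng.normal(size=cov.shape[0])
y = X @ w
ground_truth = X * w
```

Because the ground-truth attributions are known exactly, any XAI method applied to a model trained on (X, y) can be scored against them.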