pyMDMA - Multimodal Data Metrics for Auditing real and synthetic datasets


Data auditing is essential for ensuring the reliability of machine learning models by maintaining the integrity of the datasets upon which these models rely. As synthetic data use increases to address data scarcity and privacy concerns, there is a growing demand for a robust auditing framework.

Existing repositories often lack comprehensive coverage across various modalities or validation types. This work introduces a dedicated library for data auditing, presenting a comprehensive suite of metrics designed for evaluating synthetic data. Additionally, it extends its focus to the quality assessment of input data, whether synthetic or real, across time series, tabular, and image modalities.

This library aims to serve as a unified and accessible resource for researchers, practitioners, and developers, enabling them to assess the quality and utility of their datasets. This initiative encourages collaborative contributions by open-sourcing the associated code, fostering a community-driven approach to advancing data auditing practices. This work is intended for publication in an open-source journal to facilitate widespread dissemination, adoption, and impact tracking within the scientific and technical community.

Prerequisites

You will need:

  • python (see pyproject.toml for the supported versions)
  • anaconda or a similar environment manager (recommended)
  • Git
  • Make and poetry (developers)
  • environment variables loaded from a .env file (developers; see the example below)
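
One way to load the variables from a .env file into your shell before running developer commands (a minimal sketch; your setup may differ) is:

set -a; source .env; set +a   # export every variable defined in .env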

1. Installing

At the moment, only installation from source is available.

You should install the package in a virtual environment to avoid conflicts with other packages. Please consult the official Python documentation on virtual environments.
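
If you are not using conda, a minimal sketch using the standard library venv module is:

python -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate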

1.1 Installing from source

If you want to clone the repository, you can do so with the following commands:

git clone --recursive https://github.com/fraunhoferportugal/pymdma.git
cd pymdma

(Recommended) Install and activate the conda environment for Python version and package management:

conda env create -f environment.yml
conda activate da_metrics

This repository supports three data modalities: image, tabular, and time_series. If you wish to test only one data modality, you can install only the required dependencies. Before running any commands, make sure you have the latest versions of pip and setuptools installed.
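
For example, you can upgrade both in the active environment with:

pip install --upgrade pip setuptools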

After this, you can install the package with the following command:

pip install . # install base from source

Depending on the data modality you want to use, you may need to install additional dependencies. The following commands will install the dependencies for each modality:

pip install .[image] # image dependencies
pip install .[tabular] # tabular dependencies
pip install .[time_series] # time series dependencies

Note: The previous commands install the components from the base of the repository. If you are in another directory, you should replace . with the path to the repository's base.
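
If you need more than one modality, pip's standard extras syntax lets you combine the extras listed above in a single install:

pip install .[image,time_series] # image and time series dependencies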

For a minimal installation, you can install the package without CUDA support by forcing pip to resolve torch from the PyTorch CPU wheel index (for example, with pip's --index-url or --extra-index-url options).
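
A minimal sketch of one common approach (install the CPU-only torch build first, then the package; the URL below is the public PyTorch CPU wheel index, and exact behaviour depends on the torch version constraints in pyproject.toml):

pip install torch --index-url https://download.pytorch.org/whl/cpu
pip install . # then install the package on top of the CPU build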

2. Execution Examples

The package provides a CLI for automatically evaluating datasets stored in folders. You can also import the metrics for a specific modality and use them in your code.

Before running any commands, make sure the package is correctly installed.

2.1. CLI Execution

To evaluate a dataset, you can use the CLI interface. The following command will list the available commands:

pymdma --help # list available commands

The following is an example of evaluating a synthetic dataset against a reference dataset:

pymdma --modality image \
    --validation_type synth \
    --reference_type dataset \
    --evaluation_level dataset \
    --reference_data data/test/image/synthesis_val/reference \
    --target_data data/test/image/synthesis_val/dataset \
    --batch_size 3 \
    --metric_group feature \
    --output_dir reports/image_metrics/

This will evaluate the synthetic dataset in data/test/image/synthesis_val/dataset against the reference dataset in data/test/image/synthesis_val/reference. The evaluation is done at the dataset level, and the report is saved in the reports/image_metrics/ folder in JSON format. Only feature metrics are computed for this evaluation.
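
Once the run finishes, you can load the generated report for further processing. A minimal sketch, assuming the report is written as one or more JSON files in the output directory (the exact filenames depend on the run):

import json
from pathlib import Path

report_dir = Path("reports/image_metrics")
for report_file in sorted(report_dir.glob("*.json")):
    with report_file.open() as fh:
        report = json.load(fh)  # parse the report into a Python object
    print(report_file.name, report)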

2.2. Importing Modality Metrics

You can also import the metrics for a specific modality and use them in your code. The following example shows how to import an image metric and use it to evaluate input images in terms of sharpness. Note that this metric only returns a sharpness value for each image (i.e., the instance-level value); the dataset-level value is None.

from pymdma.image.measures.input_val import Tenengrad
import numpy as np

images = np.random.rand(10, 224, 224, 3)  # 10 random RGB images of size 224x224

tenengrad = Tenengrad()  # sharpness metric
sharpness = tenengrad.compute(images)  # compute on RGB images

# get the instance level value (dataset level is None)
_dataset_level, instance_level = sharpness.value
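
For instance, to inspect the per-image scores (a small sketch, assuming instance_level holds one sharpness value per input image):

for idx, score in enumerate(instance_level):
    print(f"image {idx}: sharpness = {score:.4f}")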

For evaluating synthetic datasets, you also have access to the synthetic metrics. The following example shows the steps necessary to process and evaluate a synthetic dataset in terms of the feature metrics. We load one of the available feature extractors, extract the features from the images and then compute the precision and recall metrics for the synthetic dataset in relation to the reference dataset.

from pathlib import Path

from pymdma.image.models.features import ExtractorFactory

test_images_ref = Path("./data/test/image/synthesis_val/reference")  # real images
test_images_synth = Path("./data/test/image/synthesis_val/dataset")  # synthetic images

# Get image filenames
images_ref = list(test_images_ref.glob("*.jpg"))
images_synth = list(test_images_synth.glob("*.jpg"))

# Extract features from images
extractor = ExtractorFactory.model_from_name(name="vit_b_32")
ref_features = extractor.extract_features_from_files(images_ref)
synth_features = extractor.extract_features_from_files(images_synth)

Now you can calculate the Improved Precision and Recall of the synthetic dataset in relation to the reference dataset.

from pymdma.image.measures.synthesis_val import ImprovedPrecision, ImprovedRecall

ip = ImprovedPrecision() # Improved Precision metric
ir = ImprovedRecall() # Improved Recall metric

# Compute the metrics
ip_result = ip.compute(ref_features, synth_features)
ir_result = ir.compute(ref_features, synth_features)

# Get the dataset and instance level values
precision_dataset, precision_instance = ip_result.value
recall_dataset, recall_instance = ir_result.value

# Print the results
print(f"Precision: {precision_dataset:.2f} | Recall: {recall_dataset:.2f}")
print(f"Precision: {precision_instance} | Recall: {recall_instance}")

You can find more execution examples in the notebooks folder.

Documentation

Full documentation is available here: docs/.

Contributing

Contributions of any kind are welcome. Please read CONTRIBUTING.md for details and the process for submitting pull requests to us.

If you change the code in any way, please follow the developer guidelines in DEVELOPER.md.

Changelog

See the Changelog for more information.

Security

Thank you for helping to improve the security of the project. Please see the Security Policy for more information.

License

This project is licensed under the terms of the LGPL-3.0 license. See LICENSE for more details.

Citation

If you publish work that uses pyMDMA, please cite pyMDMA as follows:

@misc{pymdma,
  title = {{pyMDMA}: Multimodal Data Metrics for Auditing real and synthetic datasets},
  url = {https://github.com/fraunhoferportugal/pymdma},
  author = {Fraunhofer AICOS},
  license = {LGPL-3.0},
  year = {2024},
}

Acknowledgments

This work was funded by the AISym4Med project, number 101095387, supported by the European Health and Digital Executive Agency (HADEA), granting authority under the powers delegated by the European Commission. More information on this project can be found here.

This work was supported by European funds through the Recovery and Resilience Plan, project "Center for Responsible AI", project number C645008882-00000055. Learn more about this project here.