Merge pull request #14 from arnab39/dev
0.1.1
arnab39 authored Mar 15, 2024
2 parents cceb6e3 + 3a705c3 commit c34a50a
Showing 36 changed files with 496 additions and 305 deletions.
4 changes: 2 additions & 2 deletions AUTHORS.md
@@ -2,6 +2,6 @@

* Arnab Mondal [[email protected]](mailto:[email protected])
* [Siba Smarak Panigrahi](https://sibasmarak.github.io/) [[email protected]](mailto:[email protected])
-* [Danielle Benesch](https://github.com/danibene) [daniellerbenesch+git@gmail.com](mailto:daniellerbenesch+git@gmail.com)
+* [Danielle Benesch](https://github.com/danibene) [[email protected]](mailto:[email protected])
* [Jikael Gagnon](https://github.com/jikaelgagnon) [[email protected]](mailto:[email protected])
-* [Sékou-Oumar Kaba](https://oumarkaba.github.io)[mailto:[email protected]]
+* [Sékou-Oumar Kaba](https://oumarkaba.github.io) [[email protected]](mailto:[email protected])
6 changes: 6 additions & 0 deletions CHANGELOG.md
@@ -15,6 +15,12 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### Removed

+## [0.1.1] - 2024-03-15
+
+### Changed
+- Operating system classifier in `setup.cfg`.
+- Replaced `escnn` dependency with `e2cnn`.
+
## [0.1.0] - 2024-03-14

### Added
84 changes: 45 additions & 39 deletions README.md
@@ -9,17 +9,23 @@
</h3>
<br>

-# Equivariant adaptation with canonicalization
+# About
+EquiAdapt is a [PyTorch](https://pytorch.org) package that provides a flexible and efficient way to make *any* neural network architecture (including large foundation models) equivariant, instead of redesigning and training it from scratch. This is done by learning to canonicalize transformed inputs before feeding them to the prediction model. You can play with this concept in the provided [tutorial](tutorials/images/instance_segmentation_group_equivariant_canonicalization.ipynb) on equivariant adaptation of the Segment-Anything Model (SAM, [Kirillov et al., 2023](https://arxiv.org/abs/2304.02643)) with images from the Microsoft COCO dataset ([Lin et al., 2014](https://arxiv.org/abs/1405.0312)) for instance segmentation.
+
+To learn more about this from a blog, check out: [How to make your foundation model equivariant](https://mila.quebec/en/article/how-to-make-your-foundation-model-equivariant/)
+
+## Equivariant adaptation with canonicalization
![Equivariant adaptation of any prediction network](https://raw.githubusercontent.com/arnab39/equiadapt/main/utils/equiadapt_cat.jpeg "Equivariant adaptation of any prediction network")

-![Equivariant adaptation of Segment-Anything Network](https://raw.githubusercontent.com/arnab39/equiadapt/main/utils/equiadapt_sam.gif "Equivariant adaptation of any prediction network")
+Read more about this [here](https://proceedings.mlr.press/v202/kaba23a.html).

-EquiAdapt is a [PyTorch](https://pytorch.org) package that provides a flexible and efficient way to make *any* neural network architecture (including large foundation models) equivariant, instead of redesigning and training from scratch. This is done by learning to canonicalize transformed inputs, before feeding them to the prediction model.
+## Prior regularized canonicalization
+![Equivariant adaptation of the Segment-Anything Model](https://raw.githubusercontent.com/arnab39/equiadapt/main/utils/equiadapt_sam.gif "Equivariant adaptation of the Segment-Anything Model")

-You can play with this concept in the provided [tutorial](tutorials/images/instance_segmentation_group_equivariant_canonicalization.ipynb) for equivariant adaptation of the Segment-Anything Model (SAM, [Kirillov et. al, 2023](https://arxiv.org/abs/2304.02643)) and images from Microsoft COCO ([Lin et. al, 2014](https://arxiv.org/abs/1405.0312)) dataset for instance segmentation.
+Read more about this [here](https://proceedings.neurips.cc/paper_files/paper/2023/hash/9d5856318032ef3630cb580f4e24f823-Abstract-Conference.html).

-# Easy to integrate :rocket:
+# How to use?
+## Easy to integrate :rocket:

Equiadapt enables users to obtain equivariant versions of existing neural networks with a few lines of code changes:
```diff
...
optimizer.step()
```
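
Since the middle of that diff is collapsed above, here is a minimal, self-contained PyTorch sketch of the underlying idea, with no equiadapt API assumed: canonicalization maps every transformed copy of an input to one canonical pose, so an ordinary network applied afterwards is invariant by construction. The four-fold rotation group and the tiny scoring network are illustrative choices, and the hard `argmax` stands in for the differentiable (straight-through) selection that is actually learned.

```python
import torch
import torch.nn as nn


def canonicalize(x: torch.Tensor, score_net: nn.Module) -> torch.Tensor:
    """Rotate each square image in x (N, C, H, W) into a canonical orientation."""
    # The four 90-degree rotations of the batch: a list of (N, C, H, W) tensors.
    rotated = [torch.rot90(x, k, dims=(-2, -1)) for k in range(4)]
    # One scalar score per rotated copy and per sample: shape (N, 4).
    scores = torch.stack([score_net(r).mean(dim=(1, 2, 3)) for r in rotated], dim=1)
    best = scores.argmax(dim=1)  # index of the "canonical" rotation per sample
    # Rotate each sample into its canonical frame.
    return torch.stack(
        [torch.rot90(xi, int(k), dims=(-2, -1)) for xi, k in zip(x, best)]
    )


# Any plain network applied after canonicalize() is invariant to 90-degree
# rotations of its input (up to ties in the scores):
score_net = nn.Conv2d(3, 1, kernel_size=3, padding=1)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

x = torch.randn(2, 3, 32, 32)
y1 = model(canonicalize(x, score_net))
y2 = model(canonicalize(torch.rot90(x, 1, dims=(-2, -1)), score_net))
print(torch.allclose(y1, y2))  # True: both see the same canonical input
```

equiadapt goes further by also inverting the predicted transformation on the model's output, which upgrades this invariance to equivariance for structured outputs such as segmentation masks.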

-# Details on using `equiadapt` library
+## Details on using `equiadapt` library

1. Create a `canonicalization network` (or use one of our provided networks; for images, see `equiadapt/images/canonicalization_networks/`).

@@ -98,8 +104,20 @@
```
...
loss = canonicalizer.add_prior_regularizer(loss)
loss.backward()
```
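
Putting the numbered steps together (most of them are collapsed in this view), a training step looks roughly like the sketch below. `StandInCanonicalizer` is a hypothetical no-op stand-in exposing the interface this README describes; only `add_prior_regularizer` is taken verbatim from the snippet above, and the real, learnable canonicalizers ship with the library (for images, under `equiadapt/images/`).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StandInCanonicalizer(nn.Module):
    """Hypothetical no-op stand-in for a learned equiadapt canonicalizer."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # A real canonicalizer returns the input mapped to its canonical form.
        return x

    def add_prior_regularizer(self, loss: torch.Tensor) -> torch.Tensor:
        # A real one adds a term nudging the canonicalizer toward the identity
        # on untransformed training data (the "prior regularization" above).
        return loss


canonicalizer = StandInCanonicalizer()                  # steps 1-2
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 3, 32, 32)                           # dummy batch
y = torch.randint(0, 10, (8,))

optimizer.zero_grad()
x = canonicalizer(x)                                    # step 3: canonicalize input
loss = F.cross_entropy(model(x), y)
loss = canonicalizer.add_prior_regularizer(loss)        # step 4: prior regularization
loss.backward()
optimizer.step()
```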

-# Setup instructions
-### Setup Conda environment
+# Installation
+
+## Using PyPI
+You can install the latest [release](https://github.com/arnab39/equiadapt/releases) using:
+
+```pip install equiadapt```
+
+## Manual installation
+
+You can also install it manually from the repository with:
+
+```pip install git+https://github.com/arnab39/equiadapt```
+
+## Setup Conda environment for examples

To create a conda environment with the necessary packages:

@@ -119,34 +137,19 @@ Note that this might not be a complete list of dependencies. If you encounter an

# Running equiadapt using example code

-We provide example code to run equiadapt in different data domains and tasks to achieve equivariance. You can also find a [tutorial](tutorials/images/classification_group_equivariant_canonicalization.ipynb) on how to use equiadapt with minimalistic changes to your own code (for image classification).
-
-Before you jump to the instructions for each of them please follow the setup hydra instructions to create a `.env` file with the paths to store all the data, wandb logs and checkpoints.
-
-<table style="border:1px solid white; border-collapse: collapse;">
-  <tr>
-    <th style="border:1px solid white;" rowspan="2"><div align="center">Image</div></th>
-    <td style="border:1px solid white;">Classification</td>
-    <td style="border:1px solid white;"><a href="examples/images/classification/README.md">here</a></td>
-  </tr>
-  <tr>
-    <td style="border:1px solid white;">Segmentation</td>
-    <td style="border:1px solid white;"><a href="examples/images/segmentation/README.md">here</a></td>
-  </tr>
-  <tr>
-    <th style="border:1px solid white;" rowspan="2"><div align="center">Point Cloud</div></th>
-    <td style="border:1px solid white;">Classification</td>
-    <td style="border:1px solid white;"><a href="examples/pointcloud/classification/README.md">here</a></td>
-  </tr>
-  <tr>
-    <td style="border:1px solid white;">Part Segmentation</td>
-    <td style="border:1px solid white;"><a href="examples/pointcloud/part_segmentation/README.md">here</a></td>
-  </tr>
-</table>
-
-### Setup Hydra
-- Create a `.env` file in the root of the project with the following content:
+We provide [examples](examples) of running equiadapt on different data domains and tasks to achieve equivariance:
+
+- Image:
+  - Classification: [Link](examples/images/classification/README.md)
+  - Segmentation: [Link](examples/images/segmentation/README.md)
+- Point Cloud:
+  - Classification: [Link](examples/pointcloud/classification/README.md)
+  - Part Segmentation: [Link](examples/pointcloud/part_segmentation/README.md)
+- N-body Dynamics: [Link](examples/nbody/README.md)
+
+Our examples use `hydra` to configure hyperparameters. Follow the hydra setup instructions below to create a `.env` file with the paths where data, wandb logs, and checkpoints will be stored.
+
+Create a `.env` file in the root of the project with the following content:
```
export HYDRA_JOBS="/path/to/your/hydra/jobs/directory"
export WANDB_DIR="/path/to/your/wandb/jobs/directory"
...
export CHECKPOINT_PATH="/path/to/your/checkpoint/directory"
```

+You can also find [tutorials](tutorials) on how to use equiadapt with minimal changes to your own code.

-# Related papers
+# Related papers and citations

+For more insight into this library, refer to our original paper on the idea, [Equivariance with Learned Canonicalization Functions (ICML 2023)](https://proceedings.mlr.press/v202/kaba23a.html), and to our paper on extending it to make any existing large pre-trained model equivariant, [Equivariant Adaptation of Large Pretrained Models (NeurIPS 2023)](https://proceedings.neurips.cc/paper_files/paper/2023/hash/9d5856318032ef3630cb580f4e24f823-Abstract-Conference.html).

-To learn more about this from a blog, check out: [How to make your foundation model equivariant](https://mila.quebec/en/article/how-to-make-your-foundation-model-equivariant/)

-# Citation
If you find this library or the associated papers useful, please cite the following papers:
```
@inproceedings{kaba2023equivariance,
...
```
2 changes: 1 addition & 1 deletion docs/index.md
@@ -1,6 +1,6 @@
# equiadapt

-Library that provides metrics to asses representation quality
+Library to make any existing neural network architecture equivariant

## Contents

20 changes: 20 additions & 0 deletions equiadapt/common/basecanonicalization.py
@@ -1,3 +1,23 @@
"""
This module defines a base class for canonicalization and its subclasses for different types of canonicalization methods.
Canonicalization is a process that transforms the input data into a canonical (standard) form.
This can be cheap alternative to building equivariant models as it can be used to transform the input data into a canonical form and then use a standard model to make predictions.
Canonicalizarion allows you to use any existing arcitecture (even pre-trained ones) for your task without having to worry about equivariance.
The module contains the following classes:
- `BaseCanonicalization`: This is an abstract base class that defines the interface for all canonicalization methods.
- `IdentityCanonicalization`: This class represents an identity canonicalization method, which is a no-op; it doesn't change the input data.
- `DiscreteGroupCanonicalization`: This class represents a discrete group canonicalization method, which transforms the input data into a canonical form using a discrete group.
- `ContinuousGroupCanonicalization`: This class represents a continuous group canonicalization method, which transforms the input data into a canonical form using a continuous group.
Each class has methods to perform the canonicalization, invert it, and calculate the prior regularization loss and identity metric.
"""

from typing import Any, Dict, List, Optional, Tuple, Union

import torch
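
For orientation, the sketch below paraphrases the class hierarchy this docstring describes. The method names are assumptions inferred from the docstring's wording ("perform the canonicalization, invert it, and calculate the prior regularization loss"), not the module's verified signatures.

```python
import torch
import torch.nn as nn


class SketchBaseCanonicalization(nn.Module):
    """Paraphrase of the described interface; method names are assumptions."""

    def __init__(self, canonicalization_network: nn.Module) -> None:
        super().__init__()
        self.canonicalization_network = canonicalization_network

    def canonicalize(self, x: torch.Tensor) -> torch.Tensor:
        # Map x to its canonical form (group-specific in the subclasses).
        raise NotImplementedError

    def invert_canonicalization(self, out: torch.Tensor) -> torch.Tensor:
        # Undo the canonicalizing transform on the model's output.
        raise NotImplementedError

    def get_prior_regularization_loss(self) -> torch.Tensor:
        # Penalty keeping the canonicalizer close to the identity on
        # untransformed data.
        raise NotImplementedError


class SketchIdentityCanonicalization(SketchBaseCanonicalization):
    """No-op variant: canonicalize and invert both return their input."""

    def canonicalize(self, x: torch.Tensor) -> torch.Tensor:
        return x

    def invert_canonicalization(self, out: torch.Tensor) -> torch.Tensor:
        return out

    def get_prior_regularization_loss(self) -> torch.Tensor:
        return torch.zeros(1)
```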
