refactor: Use old timm pretrained weights for resnets
* add: Old weights for resnet18 and wideresnet in anomaly models' configs

* build: Bump version 1.4.0 -> 1.4.1

* docs: Update changelog

* docs: Fix changelog

* add: Update weights in generic anomaly experiments

* add: breaking_changes file

* fix: Better text in breaking changes file

* docs: Specify torchvision weights also in docs
AlessandroPolidori authored Jan 12, 2024
1 parent a1fd7d8 commit 7b6c9b8
Showing 11 changed files with 37 additions and 12 deletions.
12 changes: 12 additions & 0 deletions BREAKING_CHANGES.md
@@ -0,0 +1,12 @@
# Breaking Changes
All breaking changes will be documented in this file.

### [1.4.0]

#### Changed

- In Quadra 1.4.0 we upgraded timm to version 0.9.12, which may change the default pretrained weights resolved for timm backbones compared to previous versions. To keep using the previous weights for resnet18 and wide_resnet50_2, which are the default backbones for Quadra anomaly and classification fine-tuning tasks, we have appended ".tv_in1k" to the model_name inside the Quadra configuration files.

Although the timm upgrade has very likely changed the default weights for other backbones as well, we have reinstated the old weights only for these two (internal tests showed better performance with the old weights, especially for classification fine-tuning).

If you are updating Quadra to a version >= 1.4.0 and want to keep your results consistent, it is recommended to verify whether your timm backbone is still using the same weights.
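
For example, the following minimal sketch uses plain timm calls (not Quadra's API) to list the weight tags a backbone name can resolve to and to pin the torchvision weights explicitly:

```python
import timm

# With timm >= 0.9, a bare name such as "resnet18" resolves to a default
# pretrained weight tag that may differ from the torchvision weights used
# by older timm releases. Listing the tags shows what is available:
print(timm.list_pretrained("resnet18*"))
# e.g. ['resnet18.a1_in1k', ..., 'resnet18.tv_in1k']

# Pin the torchvision ImageNet-1k weights explicitly, as the Quadra
# configuration files now do:
model = timm.create_model("resnet18.tv_in1k", pretrained=True)

# The resolved weight source can be inspected on the created model:
print(model.pretrained_cfg)
```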
10 changes: 10 additions & 0 deletions CHANGELOG.md
@@ -2,6 +2,16 @@
# Changelog
All notable changes to this project will be documented in this file.

### [1.4.1]

#### Changed

- Change the weights of resnet18 and wide_resnet50_2 back to the old (torchvision) ones in anomaly model configs

#### Updated

- Update anomalib to [v0.7.0+obx.1.2.10] (adds default padim n_features for the resnets' old weights)

### [1.4.0]

#### Added
7 changes: 5 additions & 2 deletions docs/tutorials/examples/anomaly_detection.md
@@ -169,7 +169,7 @@ dataset:
model:
input_size: [224, 224]
-backbone: resnet18
+backbone: resnet18.tv_in1k
layers:
- layer1
- layer2
@@ -190,7 +190,10 @@ metrics:
manual_pixel: null
```
What we are mostly interested in is the `model` section. In this section we can specify the backbone of the model
-(mainly resnet18 and wide_resnet50_2), which layers are used for feature extraction and the number of features used for dimensionality reduction (there are some default values for resnet18 and wide_resnet50_2).
+(mainly resnet18.tv_in1k and wide_resnet50_2.tv_in1k), which layers are used for feature extraction, and the number of features used for dimensionality reduction (there are some default values for resnet18 and wide_resnet50_2).
+```
+Notice: ".tv_in1k" is a suffix for a timm backbone's model_name that selects the torchvision pretrained weights.
+```
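
As an illustrative sketch of what the backbone and layer selection corresponds to (plain timm usage, not Quadra's internal implementation; the layer indices below are an assumption for a resnet18-style backbone):

```python
import timm
import torch

# Create the backbone with torchvision pretrained weights and expose the
# intermediate stages commonly used as features by anomaly models.
backbone = timm.create_model(
    "resnet18.tv_in1k",
    pretrained=True,
    features_only=True,
    out_indices=(1, 2, 3),  # roughly layer1, layer2 and layer3
)

# Each element is a spatial feature map from one of the selected layers.
features = backbone(torch.randn(1, 3, 224, 224))
for feat in features:
    print(feat.shape)  # e.g. torch.Size([1, 64, 56, 56]) for layer1 at 224x224
```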
Generally we compute an adaptive threshold based on the validation data, but it is possible to specify a manual threshold for both image and pixel level, as we may want a different tradeoff between false positives and false negatives. The specified threshold must be the unnormalized one.
6 changes: 3 additions & 3 deletions pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"

[project]
name = "quadra"
version = "1.4.0"
version = "1.4.1"
description = "Deep Learning experiment orchestration library"
authors = [
{ name = "Alessandro Polidori", email = "[email protected]" },
@@ -67,7 +67,7 @@ dependencies = [
"timm==0.9.12",
# Currently the only version of smp supporting timm 0.9.12 is the following
"segmentation-models-pytorch@git+https://github.com/qubvel/segmentation_models.pytorch@7b381f899ed472a477a89d381689caf535b5d0a6",
"anomalib@git+https://github.com/orobix/[email protected]+obx.1.2.9",
"anomalib@git+https://github.com/orobix/[email protected]+obx.1.2.10",
"xxhash==3.2.*",
"torchinfo==1.8.*",
]
@@ -123,7 +123,7 @@ repository = "https://github.com/orobix/quadra"

# Adapted from https://realpython.com/pypi-publish-python-package/#version-your-package
[tool.bumpver]
current_version = "1.4.0"
current_version = "1.4.1"
version_pattern = "MAJOR.MINOR.PATCH"
commit_message = "build: Bump version {old_version} -> {new_version}"
commit = true
2 changes: 1 addition & 1 deletion quadra/__init__.py
@@ -1,4 +1,4 @@
__version__ = "1.4.0"
__version__ = "1.4.1"


def get_version():
2 changes: 1 addition & 1 deletion quadra/configs/experiment/generic/mvtec/anomaly/padim.yaml
@@ -6,7 +6,7 @@ defaults:
model:
model:
input_size: [224, 224]
-backbone: resnet18
+backbone: resnet18.tv_in1k

datamodule:
num_workers: 12
@@ -6,7 +6,7 @@ defaults:
model:
model:
input_size: [224, 224]
-backbone: resnet18
+backbone: resnet18.tv_in1k

datamodule:
num_workers: 12
2 changes: 1 addition & 1 deletion quadra/configs/model/anomalib/cfa.yaml
@@ -3,7 +3,7 @@ dataset:

model:
input_size: [224, 224]
-backbone: resnet18
+backbone: resnet18.tv_in1k
gamma_c: 1
gamma_d: 1
num_nearest_neighbors: 3
2 changes: 1 addition & 1 deletion quadra/configs/model/anomalib/cflow.yaml
@@ -3,7 +3,7 @@ dataset:

model:
name: cflow
-backbone: resnet18
+backbone: resnet18.tv_in1k
input_size: [256, 256]
layers:
- layer2
2 changes: 1 addition & 1 deletion quadra/configs/model/anomalib/dfm.yaml
@@ -2,7 +2,7 @@ dataset:
task: classification

model:
-backbone: wide_resnet50_2
+backbone: wide_resnet50_2.tv_in1k
pca_level: 0.97
score_type: fre # nll: for Gaussian modeling, fre: pca feature reconstruction error
threshold:
2 changes: 1 addition & 1 deletion quadra/configs/model/anomalib/fastflow.yaml
@@ -11,7 +11,7 @@ dataset:
model:
name: fastflow
input_size: [224, 224]
-backbone: resnet18 # options: [resnet18, wide_resnet50_2, cait_m48_448, deit_base_distilled_patch16_384]
+backbone: resnet18.tv_in1k # options: [resnet18, wide_resnet50_2, cait_m48_448, deit_base_distilled_patch16_384]
pre_trained: true
flow_steps: 8 # options: [8, 8, 20, 20] - for each supported backbone
hidden_ratio: 1.0 # options: [1.0, 1.0, 0.16, 0.16] - for each supported backbone