fix: Update anomalib version (#130)
* docs: Update anomalib and quadra version, update changelog

* refactor: Suppress heavy prints from auto_convert_mixed_precision function
lorenzomammana authored Oct 25, 2024
1 parent 83f342c commit bc88342
Showing 5 changed files with 19 additions and 8 deletions.
6 changes: 6 additions & 0 deletions CHANGELOG.md
@@ -2,6 +2,12 @@
 # Changelog
 All notable changes to this project will be documented in this file.
 
+### [2.2.5]
+
+#### Updated
+
+- Update anomalib to v0.7.0.dev143 to fix a bug introduced in the previous version that caused the training to fail if the dataset size was smaller than the batch size.
+
 ### [2.2.4]
 
 #### Updated
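For context on the new changelog entry: when a dataset holds fewer samples than the batch size and the last partial batch is dropped, the data loader yields zero batches, so a training loop sees no data. The following is a generic PyTorch sketch of that failure mode, an illustration only, not anomalib's actual code path:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(5, 3))  # 5 samples, fewer than the batch size
loader = DataLoader(dataset, batch_size=8, drop_last=True)

print(len(loader))   # 0 -- the only (partial) batch is dropped
print(list(loader))  # [] -- an epoch over this loader trains on nothing
```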
8 changes: 4 additions & 4 deletions poetry.lock

Some generated files are not rendered by default.

4 changes: 2 additions & 2 deletions pyproject.toml
@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "quadra"
-version = "2.2.4"
+version = "2.2.5"
 description = "Deep Learning experiment orchestration library"
 authors = [
     "Federico Belotti <[email protected]>",
@@ -72,7 +72,7 @@ h5py = "~3.8"
 timm = "0.9.12"
 
 segmentation_models_pytorch-orobix = "0.3.3.dev1"
-anomalib-orobix = "0.7.0.dev142"
+anomalib-orobix = "0.7.0.dev143"
 xxhash = "~3.2"
 torchinfo = "~1.8"
 typing_extensions = { version = "4.11.0", python = "<3.10" }
2 changes: 1 addition & 1 deletion quadra/__init__.py
@@ -1,4 +1,4 @@
-__version__ = "2.2.4"
+__version__ = "2.2.5"
 
 
 def get_version():
7 changes: 6 additions & 1 deletion quadra/utils/export.py
@@ -1,5 +1,6 @@
 from __future__ import annotations
 
+import contextlib
 import os
 from collections.abc import Sequence
 from typing import Any, Literal, TypeVar, cast
@@ -377,7 +378,11 @@ def _safe_export_half_precision_onnx(
     model_fp32 = onnx.load(export_model_path)
     test_data = {input_names[i]: inp[i].float().cpu().numpy() for i in range(len(inp))}
     log.warning("Attempting to convert model in mixed precision, this may take a while...")
-    model_fp16 = auto_convert_mixed_precision(model_fp32, test_data, rtol=0.01, atol=0.001, keep_io_types=False)
+    with open(os.devnull, "w") as f, contextlib.redirect_stdout(f):
+        # This function prints a lot of information that is not useful for the user
+        model_fp16 = auto_convert_mixed_precision(
+            model_fp32, test_data, rtol=0.01, atol=0.001, keep_io_types=False
+        )
     onnx.save(model_fp16, export_model_path)
 
     onnx_model = onnx.load(export_model_path)
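The suppression pattern above in isolation, as a minimal self-contained sketch: `noisy_conversion` here is a hypothetical stand-in for onnxconverter-common's `auto_convert_mixed_precision`, which writes verbose per-node progress to stdout.

```python
import contextlib
import os


def noisy_conversion() -> str:
    # Hypothetical stand-in for auto_convert_mixed_precision: it prints
    # verbose progress messages while doing its work.
    print("testing 200 nodes, this may take a while...")
    return "model_fp16"


# Route stdout to the null device only for the duration of the call;
# normal printing resumes as soon as the with-block exits.
with open(os.devnull, "w") as devnull, contextlib.redirect_stdout(devnull):
    model_fp16 = noisy_conversion()

print(model_fp16)  # "model_fp16" -- the verbose output was discarded
```

Note that `contextlib.redirect_stdout` only reroutes Python-level writes to `sys.stdout`; output written by C extensions directly to file descriptor 1 would bypass it, which is evidently sufficient for the converter's Python-level progress messages here.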
