
Issue when running segmentation #8

Open
Eddymorphling opened this issue Feb 26, 2024 · 32 comments

@Eddymorphling

Eddymorphling commented Feb 26, 2024

Hi @hermancollin
As per our previous discussion, I am testing nn-axondeepseg. The setup went well; I only had to install one pip package manually (wcwidth). Now, when I run the CLI segmentation command, I come across the following error:


2024-02-26 10:14:02.352 | INFO     | __main__:main:71 - A single model was found: models/model_seg_unmyelinated_tem. It will be used by default.
Traceback (most recent call last):
  File "/home/ivm/nn-axondeepseg/nn_axondeepseg.py", line 123, in <module>
    main()
  File "/home/ivm/nn-axondeepseg/nn_axondeepseg.py", line 87, in main
    predictor = nnUNetPredictor(
TypeError: nnUNetPredictor.__init__() got an unexpected keyword argument 'perform_everything_on_gpu'

Any advice on how to fix this? Some other info: my fresh conda env runs Python 3.10, input files are in .png format, and CUDA/PyTorch sees the GPU in my conda environment.
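A version-agnostic workaround is to detect at runtime which keyword the installed predictor accepts. This is a sketch, assuming the argument was renamed (reportedly to `perform_everything_on_device`) in releases after 2.2; the `old_init`/`new_init` stand-ins below are hypothetical stubs, not the real `nnUNetPredictor.__init__`:

```python
import inspect

def gpu_kwarg(init) -> str:
    """Return the GPU-placement keyword this predictor version accepts.

    nnunetv2 2.2 accepts `perform_everything_on_gpu`; the TypeError above
    suggests later releases renamed it, so pick whichever one exists in
    the signature.
    """
    params = inspect.signature(init).parameters
    if "perform_everything_on_gpu" in params:
        return "perform_everything_on_gpu"
    return "perform_everything_on_device"

# Hypothetical stand-ins for the two __init__ signatures:
def old_init(self, tile_step_size=0.5, perform_everything_on_gpu=True): ...
def new_init(self, tile_step_size=0.5, perform_everything_on_device=True): ...

print(gpu_kwarg(old_init))  # perform_everything_on_gpu
print(gpu_kwarg(new_init))  # perform_everything_on_device
```

The detected name can then be passed as `**{gpu_kwarg(nnUNetPredictor.__init__): True}` so the same script runs against either release.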

@Eddymorphling
Author

Eddymorphling commented Feb 26, 2024

OK, I made some progress:

  1. I had to install nnunetv2==2.2 to fix the above error. I believe the requirements.txt file installs nnunetv2 v2.3.1 by default.
  2. I also had to modify the requirements.txt file to add some missing packages. Here is what worked for me:
nnunetv2>=2.2
torch
opencv-python
loguru
types-python-dateutil
webencodings
ipython
matplotlib-inline
nbconvert
ipywidgets
entrypoints
prompt-toolkit
pygments
anyio
nbformat
websocket-client
ipython-genutils
tomli
wcwidth

@hermancollin
Member

@Eddymorphling thank you for trying this.

I don't understand why you had to install nnunet manually, because it is specified in the requirements.txt file. Also, v2.3.1 should be fine (we want 2.2 or higher).

What was the complete command you tried that caused the error in your first message?

@hermancollin
Member

Also, the additional dependencies look odd to me. Since you said it worked with v2.2, maybe new dependencies were added in v2.3. Thank you for reporting this; I'll try to reproduce.

@Eddymorphling
Author

Hi @hermancollin, I think it defaults to installing v2.3 with the requirements file, which leads to the error I mentioned above when running the CLI command. Something in v2.3 might be different and might need different dependencies. Reverting to v2.2 let me run everything smoothly. Happy to test v2.3 if you have an update on it.

Regarding the unmyelinated model that this repo provides: do you know at what pixel resolution the model was trained? Also, do you have an updated version of this model that performs better? I remember you mentioned something along these lines in our earlier discussion. Thank you.

@hermancollin
Member

@Eddymorphling sorry for the delayed response. I am trying to wrap up a manuscript for next week. I'll have more time to help you after - hope this is not too urgent on your side.

> Regarding the unmyelinated model that this repo provides: do you know at what pixel resolution the model was trained? Also, do you have an updated version of this model that performs better? I remember you mentioned something along these lines in our earlier discussion. Thank you.

The model currently uploaded (on axondeepseg/model_seg_unmyelinated_tem) was trained on data from the SickKids Foundation. The exact pixel size is 0.00481 um/px. I don't know how close this is to your data.
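If your data was acquired at a different resolution, the mismatch relative to the model's training pixel size is easy to quantify. A quick sketch (the 0.00481 um/px value comes from above; the helper name and the resampling advice are my own, not part of the repo):

```python
def rescale_factor(image_px_size_um: float,
                   model_px_size_um: float = 0.00481) -> float:
    """Ratio between an image's pixel size and the model's training
    pixel size (0.00481 um/px). A value far from 1.0 suggests the image
    should probably be resampled before segmentation."""
    return image_px_size_um / model_px_size_um

print(round(rescale_factor(0.00962), 2))  # 2.0: pixels twice as large as the training data's
```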

Yes, there is a better model. The one that was uploaded on October 12, 2023 is one of 5 models trained on this data. For more information, see axondeepseg/model_seg_unmyelinated_tem#1. I'm going to upload the rest this afternoon so that you can try it and hopefully give us some feedback.

There is an additional model that I am working on with data from Stanford. I expect this one will work even better, but I can't upload it for now because it is still a WIP.

@hermancollin
Member

@Eddymorphling I uploaded the full model. Keep us updated!
https://github.com/axondeepseg/model_seg_unmyelinated_tem/releases/tag/v1.1.0

@Eddymorphling
Author

Thank you! You are the best!

@Eddymorphling
Author

Eddymorphling commented Mar 1, 2024

Hey @hermancollin, the downloads went well, but I get an error when running it with the CLI. Does nn_axondeepseg.py need to be updated to include the folds in the new model? Here is the error:


2024-02-29 16:25:33.563 | INFO     | __main__:main:71 - A single model was found: models/model_seg_unmyelinated_sickkids_tem_best. It will be used by default.
2024-02-29 16:25:33.564 | INFO     | __main__:main:91 - Running inference on device: cpu
Traceback (most recent call last):
  File "/home/ivm/nn-axondeepseg/nn_axondeepseg.py", line 123, in <module>
    main()
  File "/home/ivm/nn-axondeepseg/nn_axondeepseg.py", line 93, in main
    predictor.initialize_from_trained_model_folder(path_model, use_folds=None)
  File "/home/ivm/conda/envs/nn-axondeepseg/lib/python3.10/site-packages/nnunetv2/inference/predict_from_raw_data.py", line 96, in initialize_from_trained_model_folder
    configuration_manager = plans_manager.get_configuration(configuration_name)
UnboundLocalError: local variable 'configuration_name' referenced before assignment
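For reference, with `use_folds=None` nnU-Net auto-detects folds by scanning for `fold_*` sub-folders inside the model directory. A minimal sketch of that detection (my own approximation, not the actual nnunetv2 code) can help check whether the unzipped model has the folder layout the predictor expects:

```python
from pathlib import Path

def detect_folds(model_dir: str) -> tuple[int, ...]:
    """List fold indices found as fold_* sub-folders of an nnU-Net model
    directory. Mimics the use_folds=None auto-detection as a sanity
    check; the real logic lives in nnunetv2."""
    folds = []
    for p in Path(model_dir).glob("fold_*"):
        suffix = p.name.removeprefix("fold_")
        if p.is_dir() and suffix.isdigit():
            folds.append(int(suffix))
    return tuple(sorted(folds))
```

An empty result usually means the zip was extracted with an extra nesting level, so the predictor is looking in the wrong folder.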

@Eddymorphling
Author

@hermancollin Sorry to bother you again, could you please help me with the above issue? Thank you.

@mathieuboudreau
Member

Hi @Eddymorphling - Armand is working towards a paper deadline in the next few days (I can't recall exactly when), so it's likely he'll only be able to revisit this early next week. I'll try to take a look ASAP to see if I can reproduce your error and maybe get an idea of how it can be resolved on your end.

@Eddymorphling
Author

Thank you for reaching out! That would be helpful.

@mathieuboudreau
Member

@Eddymorphling I just ran the install today and it segmented fine; I think you may have downloaded the repo before Armand pinned nnunetv2 to version 2.2: 1c369ff

Can you verify this? Run `pip freeze` and check the version of nnunetv2; if it's not 2.2, run `pip install nnunetv2==2.2` and try the segmentation again.

@mathieuboudreau
Member

Scrolling up, I see that you already got version 2.2 installed: #8 (comment). Sorry for missing that!

Could you please still run `pip freeze` and post the output here? Here's mine:


(nnunet_venv) mathieuboudreau@Mathieus-MacBook-Pro nn-axondeepseg % pip freeze
acvl_utils==0.2
batchgenerators==0.25
certifi==2024.2.2
charset-normalizer==3.3.2
connected-components-3d==3.12.4
contourpy==1.2.0
cycler==0.12.1
dicom2nifti==2.4.10
dynamic_network_architectures==0.3.1
filelock==3.13.1
fonttools==4.49.0
fsspec==2024.2.0
future==1.0.0
graphviz==0.20.1
idna==3.6
imagecodecs==2024.1.1
imageio==2.34.0
Jinja2==3.1.3
joblib==1.3.2
kiwisolver==1.4.5
lazy_loader==0.3
linecache2==1.0.0
loguru==0.7.2
MarkupSafe==2.1.5
matplotlib==3.8.3
mpmath==1.3.0
networkx==3.2.1
nibabel==5.2.1
nnunetv2==2.2
numpy==1.26.4
opencv-python==4.9.0.80
packaging==23.2
pandas==2.2.1
pillow==10.2.0
pydicom==2.4.4
pyparsing==3.1.2
python-dateutil==2.9.0.post0
python-gdcm==3.0.23
pytz==2024.1
PyYAML==6.0.1
requests==2.31.0
scikit-image==0.22.0
scikit-learn==1.4.1.post1
scipy==1.12.0
seaborn==0.13.2
setuptools==69.1.1
SimpleITK==2.3.1
six==1.16.0
sympy==1.12
threadpoolctl==3.3.0
tifffile==2024.2.12
torch==2.2.1
tqdm==4.66.2
traceback2==1.4.0
typing_extensions==4.10.0
tzdata==2024.1
unittest2==1.1.0
urllib3==2.2.1
wheel==0.42.0
yacs==0.1.8
(nnunet_venv) mathieuboudreau@Mathieus-MacBook-Pro nn-axondeepseg % 


@mathieuboudreau
Member

Here's the full log of my successful test using a fresh install of this repo, also using this image in an input folder:

I downloaded the UM model.


(base) mathieuboudreau@Mathieus-MacBook-Pro github % git clone https://github.com/axondeepseg/nn-axondeepseg.git
Cloning into 'nn-axondeepseg'...
remote: Enumerating objects: 59, done.
remote: Counting objects: 100% (59/59), done.
remote: Compressing objects: 100% (46/46), done.
remote: Total 59 (delta 22), reused 33 (delta 8), pack-reused 0
Receiving objects: 100% (59/59), 13.37 KiB | 1.67 MiB/s, done.
Resolving deltas: 100% (22/22), done.
(base) mathieuboudreau@Mathieus-MacBook-Pro github % cd nn-axondeepseg 
(base) mathieuboudreau@Mathieus-MacBook-Pro nn-axondeepseg % conda create -n nnunet_venv python
Retrieving notices: ...working... done
Collecting package metadata (current_repodata.json): done
Solving environment: done


==> WARNING: A newer version of conda exists. <==
  current version: 23.3.1
  latest version: 24.1.2

Please update conda by running

    $ conda update -n base -c defaults conda

Or to minimize the number of packages updated during conda update use

     conda install conda=24.1.2



## Package Plan ##

  environment location: /Users/mathieuboudreau/opt/anaconda3/envs/nnunet_venv

  added / updated specs:
    - python


The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    libexpat-2.6.1             |       h73e2aa4_0          68 KB  conda-forge
    python-3.12.2              |h9f0c242_0_cpython        13.9 MB  conda-forge
    setuptools-69.1.1          |     pyhd8ed1ab_0         459 KB  conda-forge
    ------------------------------------------------------------
                                           Total:        14.4 MB

The following NEW packages will be INSTALLED:

  bzip2              conda-forge/osx-64::bzip2-1.0.8-h10d778d_5 
  ca-certificates    conda-forge/osx-64::ca-certificates-2024.2.2-h8857fd0_0 
  libexpat           conda-forge/osx-64::libexpat-2.6.1-h73e2aa4_0 
  libffi             conda-forge/osx-64::libffi-3.4.2-h0d85af4_5 
  libsqlite          conda-forge/osx-64::libsqlite-3.45.1-h92b6c6a_0 
  libzlib            conda-forge/osx-64::libzlib-1.2.13-h8a1eda9_5 
  ncurses            conda-forge/osx-64::ncurses-6.4-h93d8f39_2 
  openssl            conda-forge/osx-64::openssl-3.2.1-hd75f5a5_0 
  pip                conda-forge/noarch::pip-24.0-pyhd8ed1ab_0 
  python             conda-forge/osx-64::python-3.12.2-h9f0c242_0_cpython 
  readline           conda-forge/osx-64::readline-8.2-h9e318b2_1 
  setuptools         conda-forge/noarch::setuptools-69.1.1-pyhd8ed1ab_0 
  tk                 conda-forge/osx-64::tk-8.6.13-h1abcd95_1 
  tzdata             conda-forge/noarch::tzdata-2024a-h0c530f3_0 
  wheel              conda-forge/noarch::wheel-0.42.0-pyhd8ed1ab_0 
  xz                 conda-forge/osx-64::xz-5.2.6-h775f41a_0 


Proceed ([y]/n)? y


Downloading and Extracting Packages
                                                                                
Preparing transaction: done                                                     
Verifying transaction: done                                                     
Executing transaction: done
#
# To activate this environment, use
#
#     $ conda activate nnunet_venv
#
# To deactivate an active environment, use
#
#     $ conda deactivate

(base) mathieuboudreau@Mathieus-MacBook-Pro nn-axondeepseg % conda activate nnunet_venv
(nnunet_venv) mathieuboudreau@Mathieus-MacBook-Pro nn-axondeepseg % conda activate nnunet_venv
(nnunet_venv) mathieuboudreau@Mathieus-MacBook-Pro nn-axondeepseg % pip install -r requirements.txt 
Collecting nnunetv2==2.2 (from -r requirements.txt (line 1))
  Using cached nnunetv2-2.2.tar.gz (178 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... done
  Preparing metadata (pyproject.toml) ... done
Collecting torch (from -r requirements.txt (line 2))
  Downloading torch-2.2.1-cp312-none-macosx_10_9_x86_64.whl.metadata (25 kB)
Collecting opencv-python (from -r requirements.txt (line 3))
  Downloading opencv_python-4.9.0.80-cp37-abi3-macosx_10_16_x86_64.whl.metadata (20 kB)
Collecting loguru (from -r requirements.txt (line 4))
  Using cached loguru-0.7.2-py3-none-any.whl.metadata (23 kB)
Collecting acvl-utils>=0.2 (from nnunetv2==2.2->-r requirements.txt (line 1))
  Using cached acvl_utils-0.2.tar.gz (18 kB)
  Preparing metadata (setup.py) ... done
Collecting dynamic-network-architectures>=0.2 (from nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading dynamic_network_architectures-0.3.1.tar.gz (20 kB)
  Preparing metadata (setup.py) ... done
Collecting tqdm (from nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading tqdm-4.66.2-py3-none-any.whl.metadata (57 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 57.6/57.6 kB 983.7 kB/s eta 0:00:00
Collecting dicom2nifti (from nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading dicom2nifti-2.4.10-py3-none-any.whl.metadata (1.3 kB)
Collecting scipy (from nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading scipy-1.12.0-cp312-cp312-macosx_10_9_x86_64.whl.metadata (60 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 60.4/60.4 kB 836.3 kB/s eta 0:00:00
Collecting batchgenerators>=0.25 (from nnunetv2==2.2->-r requirements.txt (line 1))
  Using cached batchgenerators-0.25.tar.gz (61 kB)
  Preparing metadata (setup.py) ... done
Collecting numpy (from nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading numpy-1.26.4-cp312-cp312-macosx_10_9_x86_64.whl.metadata (61 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.1/61.1 kB 540.3 kB/s eta 0:00:00
Collecting scikit-learn (from nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading scikit_learn-1.4.1.post1-cp312-cp312-macosx_10_9_x86_64.whl.metadata (11 kB)
Collecting scikit-image>=0.19.3 (from nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading scikit_image-0.22.0-cp312-cp312-macosx_10_9_x86_64.whl.metadata (13 kB)
Collecting SimpleITK>=2.2.1 (from nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading SimpleITK-2.3.1-cp312-cp312-macosx_10_9_x86_64.whl.metadata (7.9 kB)
Collecting pandas (from nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading pandas-2.2.1-cp312-cp312-macosx_10_9_x86_64.whl.metadata (19 kB)
Collecting graphviz (from nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading graphviz-0.20.1-py3-none-any.whl.metadata (12 kB)
Collecting tifffile (from nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading tifffile-2024.2.12-py3-none-any.whl.metadata (31 kB)
Collecting requests (from nnunetv2==2.2->-r requirements.txt (line 1))
  Using cached requests-2.31.0-py3-none-any.whl.metadata (4.6 kB)
Collecting nibabel (from nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading nibabel-5.2.1-py3-none-any.whl.metadata (8.8 kB)
Collecting matplotlib (from nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading matplotlib-3.8.3-cp312-cp312-macosx_10_12_x86_64.whl.metadata (5.8 kB)
Collecting seaborn (from nnunetv2==2.2->-r requirements.txt (line 1))
  Using cached seaborn-0.13.2-py3-none-any.whl.metadata (5.4 kB)
Collecting imagecodecs (from nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading imagecodecs-2024.1.1-cp312-cp312-macosx_10_14_x86_64.whl.metadata (19 kB)
Collecting yacs (from nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading yacs-0.1.8-py3-none-any.whl.metadata (639 bytes)
Collecting filelock (from torch->-r requirements.txt (line 2))
  Using cached filelock-3.13.1-py3-none-any.whl.metadata (2.8 kB)
Collecting typing-extensions>=4.8.0 (from torch->-r requirements.txt (line 2))
  Downloading typing_extensions-4.10.0-py3-none-any.whl.metadata (3.0 kB)
Collecting sympy (from torch->-r requirements.txt (line 2))
  Downloading sympy-1.12-py3-none-any.whl.metadata (12 kB)
Collecting networkx (from torch->-r requirements.txt (line 2))
  Using cached networkx-3.2.1-py3-none-any.whl.metadata (5.2 kB)
Collecting jinja2 (from torch->-r requirements.txt (line 2))
  Using cached Jinja2-3.1.3-py3-none-any.whl.metadata (3.3 kB)
Collecting fsspec (from torch->-r requirements.txt (line 2))
  Downloading fsspec-2024.2.0-py3-none-any.whl.metadata (6.8 kB)
Collecting connected-components-3d (from acvl-utils>=0.2->nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading connected_components_3d-3.12.4-cp312-cp312-macosx_10_9_x86_64.whl.metadata (29 kB)
Collecting pillow>=7.1.2 (from batchgenerators>=0.25->nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading pillow-10.2.0-cp312-cp312-macosx_10_10_x86_64.whl.metadata (9.7 kB)
Collecting future (from batchgenerators>=0.25->nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading future-1.0.0-py3-none-any.whl.metadata (4.0 kB)
Collecting unittest2 (from batchgenerators>=0.25->nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading unittest2-1.1.0-py2.py3-none-any.whl.metadata (15 kB)
Collecting threadpoolctl (from batchgenerators>=0.25->nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading threadpoolctl-3.3.0-py3-none-any.whl.metadata (13 kB)
Collecting imageio>=2.27 (from scikit-image>=0.19.3->nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading imageio-2.34.0-py3-none-any.whl.metadata (4.9 kB)
Collecting packaging>=21 (from scikit-image>=0.19.3->nnunetv2==2.2->-r requirements.txt (line 1))
  Using cached packaging-23.2-py3-none-any.whl.metadata (3.2 kB)
Collecting lazy_loader>=0.3 (from scikit-image>=0.19.3->nnunetv2==2.2->-r requirements.txt (line 1))
  Using cached lazy_loader-0.3-py3-none-any.whl.metadata (4.3 kB)
Collecting pydicom>=2.2.0 (from dicom2nifti->nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading pydicom-2.4.4-py3-none-any.whl.metadata (7.8 kB)
Collecting python-gdcm (from dicom2nifti->nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading python_gdcm-3.0.23-cp312-cp312-macosx_10_9_x86_64.whl.metadata (3.6 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch->-r requirements.txt (line 2))
  Downloading MarkupSafe-2.1.5-cp312-cp312-macosx_10_9_x86_64.whl.metadata (3.0 kB)
Collecting contourpy>=1.0.1 (from matplotlib->nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading contourpy-1.2.0-cp312-cp312-macosx_10_9_x86_64.whl.metadata (5.8 kB)
Collecting cycler>=0.10 (from matplotlib->nnunetv2==2.2->-r requirements.txt (line 1))
  Using cached cycler-0.12.1-py3-none-any.whl.metadata (3.8 kB)
Collecting fonttools>=4.22.0 (from matplotlib->nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading fonttools-4.49.0-cp312-cp312-macosx_10_9_x86_64.whl.metadata (159 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 159.1/159.1 kB 435.1 kB/s eta 0:00:00
Collecting kiwisolver>=1.3.1 (from matplotlib->nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading kiwisolver-1.4.5-cp312-cp312-macosx_10_9_x86_64.whl.metadata (6.4 kB)
Collecting pyparsing>=2.3.1 (from matplotlib->nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading pyparsing-3.1.2-py3-none-any.whl.metadata (5.1 kB)
Collecting python-dateutil>=2.7 (from matplotlib->nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB)
Collecting pytz>=2020.1 (from pandas->nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading pytz-2024.1-py2.py3-none-any.whl.metadata (22 kB)
Collecting tzdata>=2022.7 (from pandas->nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading tzdata-2024.1-py2.py3-none-any.whl.metadata (1.4 kB)
Collecting charset-normalizer<4,>=2 (from requests->nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading charset_normalizer-3.3.2-cp312-cp312-macosx_10_9_x86_64.whl.metadata (33 kB)
Collecting idna<4,>=2.5 (from requests->nnunetv2==2.2->-r requirements.txt (line 1))
  Using cached idna-3.6-py3-none-any.whl.metadata (9.9 kB)
Collecting urllib3<3,>=1.21.1 (from requests->nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading urllib3-2.2.1-py3-none-any.whl.metadata (6.4 kB)
Collecting certifi>=2017.4.17 (from requests->nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading certifi-2024.2.2-py3-none-any.whl.metadata (2.2 kB)
Collecting joblib>=1.2.0 (from scikit-learn->nnunetv2==2.2->-r requirements.txt (line 1))
  Using cached joblib-1.3.2-py3-none-any.whl.metadata (5.4 kB)
Collecting mpmath>=0.19 (from sympy->torch->-r requirements.txt (line 2))
  Downloading mpmath-1.3.0-py3-none-any.whl.metadata (8.6 kB)
Collecting PyYAML (from yacs->nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading PyYAML-6.0.1-cp312-cp312-macosx_10_9_x86_64.whl.metadata (2.1 kB)
Collecting six>=1.5 (from python-dateutil>=2.7->matplotlib->nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading six-1.16.0-py2.py3-none-any.whl.metadata (1.8 kB)
Collecting argparse (from unittest2->batchgenerators>=0.25->nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading argparse-1.4.0-py2.py3-none-any.whl.metadata (2.8 kB)
Collecting traceback2 (from unittest2->batchgenerators>=0.25->nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading traceback2-1.4.0-py2.py3-none-any.whl.metadata (1.5 kB)
Collecting linecache2 (from traceback2->unittest2->batchgenerators>=0.25->nnunetv2==2.2->-r requirements.txt (line 1))
  Downloading linecache2-1.0.0-py2.py3-none-any.whl.metadata (1000 bytes)
Downloading torch-2.2.1-cp312-none-macosx_10_9_x86_64.whl (150.8 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 150.8/150.8 MB 441.4 kB/s eta 0:00:00
Downloading opencv_python-4.9.0.80-cp37-abi3-macosx_10_16_x86_64.whl (55.7 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 55.7/55.7 MB 462.5 kB/s eta 0:00:00
Using cached loguru-0.7.2-py3-none-any.whl (62 kB)
Downloading numpy-1.26.4-cp312-cp312-macosx_10_9_x86_64.whl (20.3 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 20.3/20.3 MB 423.8 kB/s eta 0:00:00
Downloading scikit_image-0.22.0-cp312-cp312-macosx_10_9_x86_64.whl (14.0 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.0/14.0 MB 454.6 kB/s eta 0:00:00
Using cached networkx-3.2.1-py3-none-any.whl (1.6 MB)
Downloading scipy-1.12.0-cp312-cp312-macosx_10_9_x86_64.whl (38.9 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 38.9/38.9 MB 514.4 kB/s eta 0:00:00
Downloading SimpleITK-2.3.1-cp312-cp312-macosx_10_9_x86_64.whl (44.9 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 44.9/44.9 MB 433.2 kB/s eta 0:00:00
Downloading tifffile-2024.2.12-py3-none-any.whl (224 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 224.5/224.5 kB 352.7 kB/s eta 0:00:00
Downloading typing_extensions-4.10.0-py3-none-any.whl (33 kB)
Downloading dicom2nifti-2.4.10-py3-none-any.whl (43 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 43.6/43.6 kB 357.7 kB/s eta 0:00:00
Using cached filelock-3.13.1-py3-none-any.whl (11 kB)
Downloading fsspec-2024.2.0-py3-none-any.whl (170 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 170.9/170.9 kB 334.0 kB/s eta 0:00:00
Using cached graphviz-0.20.1-py3-none-any.whl (47 kB)
Downloading imagecodecs-2024.1.1-cp312-cp312-macosx_10_14_x86_64.whl (15.3 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 15.3/15.3 MB 446.9 kB/s eta 0:00:00
Using cached Jinja2-3.1.3-py3-none-any.whl (133 kB)
Downloading matplotlib-3.8.3-cp312-cp312-macosx_10_12_x86_64.whl (7.6 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.6/7.6 MB 477.8 kB/s eta 0:00:00
Downloading nibabel-5.2.1-py3-none-any.whl (3.3 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.3/3.3 MB 536.3 kB/s eta 0:00:00
Downloading pandas-2.2.1-cp312-cp312-macosx_10_9_x86_64.whl (12.5 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 12.5/12.5 MB 449.1 kB/s eta 0:00:00
Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Downloading scikit_learn-1.4.1.post1-cp312-cp312-macosx_10_9_x86_64.whl (11.6 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.6/11.6 MB 381.9 kB/s eta 0:00:00
Using cached seaborn-0.13.2-py3-none-any.whl (294 kB)
Using cached sympy-1.12-py3-none-any.whl (5.7 MB)
Downloading tqdm-4.66.2-py3-none-any.whl (78 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 78.3/78.3 kB 778.3 kB/s eta 0:00:00
Using cached yacs-0.1.8-py3-none-any.whl (14 kB)
Downloading certifi-2024.2.2-py3-none-any.whl (163 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 163.8/163.8 kB 829.3 kB/s eta 0:00:00
Downloading charset_normalizer-3.3.2-cp312-cp312-macosx_10_9_x86_64.whl (122 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 122.2/122.2 kB 870.5 kB/s eta 0:00:00
Downloading contourpy-1.2.0-cp312-cp312-macosx_10_9_x86_64.whl (259 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 259.3/259.3 kB 935.5 kB/s eta 0:00:00
Using cached cycler-0.12.1-py3-none-any.whl (8.3 kB)
Downloading fonttools-4.49.0-cp312-cp312-macosx_10_9_x86_64.whl (2.3 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.3/2.3 MB 542.5 kB/s eta 0:00:00
Using cached idna-3.6-py3-none-any.whl (61 kB)
Downloading imageio-2.34.0-py3-none-any.whl (313 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 313.4/313.4 kB 414.8 kB/s eta 0:00:00
Using cached joblib-1.3.2-py3-none-any.whl (302 kB)
Downloading kiwisolver-1.4.5-cp312-cp312-macosx_10_9_x86_64.whl (67 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 67.3/67.3 kB 578.8 kB/s eta 0:00:00
Using cached lazy_loader-0.3-py3-none-any.whl (9.1 kB)
Downloading MarkupSafe-2.1.5-cp312-cp312-macosx_10_9_x86_64.whl (14 kB)
Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)
Using cached packaging-23.2-py3-none-any.whl (53 kB)
Downloading pillow-10.2.0-cp312-cp312-macosx_10_10_x86_64.whl (3.5 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.5/3.5 MB 431.0 kB/s eta 0:00:00
Downloading pydicom-2.4.4-py3-none-any.whl (1.8 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.8/1.8 MB 597.2 kB/s eta 0:00:00
Downloading pyparsing-3.1.2-py3-none-any.whl (103 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 103.2/103.2 kB 623.4 kB/s eta 0:00:00
Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 229.9/229.9 kB 738.8 kB/s eta 0:00:00
Downloading pytz-2024.1-py2.py3-none-any.whl (505 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 505.5/505.5 kB 823.7 kB/s eta 0:00:00
Downloading threadpoolctl-3.3.0-py3-none-any.whl (17 kB)
Downloading tzdata-2024.1-py2.py3-none-any.whl (345 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 345.4/345.4 kB 683.3 kB/s eta 0:00:00
Downloading urllib3-2.2.1-py3-none-any.whl (121 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 121.1/121.1 kB 519.5 kB/s eta 0:00:00
Downloading connected_components_3d-3.12.4-cp312-cp312-macosx_10_9_x86_64.whl (536 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 537.0/537.0 kB 487.5 kB/s eta 0:00:00
Downloading future-1.0.0-py3-none-any.whl (491 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 491.3/491.3 kB 686.7 kB/s eta 0:00:00
Downloading python_gdcm-3.0.23-cp312-cp312-macosx_10_9_x86_64.whl (11.9 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.9/11.9 MB 411.8 kB/s eta 0:00:00
Downloading PyYAML-6.0.1-cp312-cp312-macosx_10_9_x86_64.whl (178 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 178.7/178.7 kB 561.3 kB/s eta 0:00:00
Using cached unittest2-1.1.0-py2.py3-none-any.whl (96 kB)
Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Using cached argparse-1.4.0-py2.py3-none-any.whl (23 kB)
Using cached traceback2-1.4.0-py2.py3-none-any.whl (16 kB)
Using cached linecache2-1.0.0-py2.py3-none-any.whl (12 kB)
Building wheels for collected packages: nnunetv2, acvl-utils, batchgenerators, dynamic-network-architectures
  Building wheel for nnunetv2 (pyproject.toml) ... done
  Created wheel for nnunetv2: filename=nnunetv2-2.2-py3-none-any.whl size=235949 sha256=f01a4252295d078cb654167689e79071fd554177500b1e9ceaf5cfda95182082
  Stored in directory: /Users/mathieuboudreau/Library/Caches/pip/wheels/0e/74/6e/320330d2b1601fc5fa16be66e994a3e9e471e873e5f29eb008
  Building wheel for acvl-utils (setup.py) ... done
  Created wheel for acvl-utils: filename=acvl_utils-0.2-py3-none-any.whl size=22439 sha256=136b416784c3ffa02906d269115585752ff5d81147e0eed15d409a2b0aa9b1dd
  Stored in directory: /Users/mathieuboudreau/Library/Caches/pip/wheels/87/c8/9f/6e73ec10911d241dea19f412647cf105b8c77b324905f662d3
  Building wheel for batchgenerators (setup.py) ... done
  Created wheel for batchgenerators: filename=batchgenerators-0.25-py3-none-any.whl size=89007 sha256=aad5d8b44724b41383d03ef4eeeb3f5ae26b467e6ae811f44707afec08146120
  Stored in directory: /Users/mathieuboudreau/Library/Caches/pip/wheels/4b/b7/02/e761fe122a03209076c6ea7cf7f75d5b63a3a37b39081beaf6
  Building wheel for dynamic-network-architectures (setup.py) ... done
  Created wheel for dynamic-network-architectures: filename=dynamic_network_architectures-0.3.1-py3-none-any.whl size=30050 sha256=db6250bd25de932f18e57a0823224c22c1073132da1e28c66cdbcc1d37b6050e
  Stored in directory: /Users/mathieuboudreau/Library/Caches/pip/wheels/e8/37/e3/40ba582bf18f88191f8700af0ca8bc0a1bff7f8c57b0ebb8df
Successfully built nnunetv2 acvl-utils batchgenerators dynamic-network-architectures
Installing collected packages: SimpleITK, pytz, mpmath, linecache2, argparse, urllib3, tzdata, typing-extensions, traceback2, tqdm, threadpoolctl, sympy, six, PyYAML, python-gdcm, pyparsing, pydicom, pillow, packaging, numpy, networkx, MarkupSafe, loguru, lazy_loader, kiwisolver, joblib, idna, graphviz, future, fsspec, fonttools, filelock, cycler, charset-normalizer, certifi, yacs, unittest2, tifffile, scipy, requests, python-dateutil, opencv-python, nibabel, jinja2, imageio, imagecodecs, contourpy, connected-components-3d, torch, scikit-learn, scikit-image, pandas, matplotlib, dicom2nifti, seaborn, dynamic-network-architectures, batchgenerators, acvl-utils, nnunetv2
Successfully installed MarkupSafe-2.1.5 PyYAML-6.0.1 SimpleITK-2.3.1 acvl-utils-0.2 argparse-1.4.0 batchgenerators-0.25 certifi-2024.2.2 charset-normalizer-3.3.2 connected-components-3d-3.12.4 contourpy-1.2.0 cycler-0.12.1 dicom2nifti-2.4.10 dynamic-network-architectures-0.3.1 filelock-3.13.1 fonttools-4.49.0 fsspec-2024.2.0 future-1.0.0 graphviz-0.20.1 idna-3.6 imagecodecs-2024.1.1 imageio-2.34.0 jinja2-3.1.3 joblib-1.3.2 kiwisolver-1.4.5 lazy_loader-0.3 linecache2-1.0.0 loguru-0.7.2 matplotlib-3.8.3 mpmath-1.3.0 networkx-3.2.1 nibabel-5.2.1 nnunetv2-2.2 numpy-1.26.4 opencv-python-4.9.0.80 packaging-23.2 pandas-2.2.1 pillow-10.2.0 pydicom-2.4.4 pyparsing-3.1.2 python-dateutil-2.9.0.post0 python-gdcm-3.0.23 pytz-2024.1 requests-2.31.0 scikit-image-0.22.0 scikit-learn-1.4.1.post1 scipy-1.12.0 seaborn-0.13.2 six-1.16.0 sympy-1.12 threadpoolctl-3.3.0 tifffile-2024.2.12 torch-2.2.1 tqdm-4.66.2 traceback2-1.4.0 typing-extensions-4.10.0 tzdata-2024.1 unittest2-1.1.0 urllib3-2.2.1 yacs-0.1.8
(nnunet_venv) mathieuboudreau@Mathieus-MacBook-Pro nn-axondeepseg % python download_models.py
/Users/mathieuboudreau/neuropoly/github/nn-axondeepseg/download_models.py:9: DeprecationWarning: 'cgi' is deprecated and slated for removal in Python 3.13
  import cgi
No model found. Please select a model to download.
[0] - model_seg_unmyelinated_tem:
	 Unmyelinated axon segmentation (1-class)
[1] - model_seg_rabbit_axon-myelin_bf:
	 Axon and myelin segmentation on Toluidine Blue stained BF images (rabbit)
Please select a model ID (from 0 to 1): 0
model_seg_unmyelinated_tem selected. Downloading from https://github.com/axondeepseg/model_seg_unmyelinated_tem/releases/download/v1.0.0/model_seg_unmyelinated_tem.zip
Trying URL: https://github.com/axondeepseg/model_seg_unmyelinated_tem/releases/download/v1.0.0/model_seg_unmyelinated_tem.zip
Downloading: 100%|#################################################| 251M/251M [21:24<00:00, 195kB/s]Unzip...
--> Folder created: /Users/mathieuboudreau/neuropoly/github/nn-axondeepseg/model_seg_unmyelinated_tem
Model successfully downloaded.
(nnunet_venv) mathieuboudreau@Mathieus-MacBook-Pro nn-axondeepseg % python nn_axondeepseg.py --seg-type UM --path-out output-folder --path-dataset input

2024-03-07 10:51:36.343 | INFO     | __main__:main:71 - A single model was found: models/model_seg_unmyelinated_tem. It will be used by default.
perform_everything_on_gpu=True is only supported for cuda devices! Setting this to False
2024-03-07 10:51:36.349 | INFO     | __main__:main:91 - Running inference on device: cpu
use_folds is None, attempting to auto detect available folds
found the following folds: [3]
2024-03-07 10:51:56.547 | INFO     | __main__:main:94 - Model loaded successfully.
2024-03-07 10:51:56.549 | INFO     | __main__:main:101 - Creating temporary input directory.
There are 1 cases in the source folder
I am process 0 out of 1 (max process ID is 0, we start counting with 0!)
There are 1 cases that I would like to predict

Predicting image:
perform_everything_on_gpu: False
100%|██████████████████████████████████████████████████████████████| 130/130 [10:02<00:00,  4.64s/it]
Prediction done, transferring to CPU if needed
sending off prediction to background worker for resampling and export
done with image
2024-03-07 11:02:20.756 | INFO     | __main__:main:112 - Rescaling predictions to 8-bit range.
1it [00:00,  9.83it/s]
2024-03-07 11:02:20.866 | INFO     | __main__:main:115 - Deleting temporary directory
(nnunet_venv)

Here's the output segmentation:

image

@mathieuboudreau

I'm going to test with the latest model, https://github.com/axondeepseg/model_seg_unmyelinated_tem/releases/tag/v1.1.0, in case that one wasn't automatically downloaded by the CLI, brb

@Eddymorphling

Eddymorphling commented Mar 7, 2024

@mathieuboudreau Thanks for testing this!

I was able to have it segment images using the UM model (v1.0) without any issues. But, it does not work well with the latest model, (Sickkids foundation model, v1.1.0). All I did was download the model manually, unzip it and assign the path to the model in the CLI using --path-model . That is when I end up with the error above. I am also running on nnunet==2.2 currently. Here is my pip freeze just in case:


acvl-utils==0.2
batchgenerators==0.25
blessed==1.20.0
certifi==2024.2.2
charset-normalizer==3.3.2
connected-components-3d==3.12.4
contourpy==1.2.0
cycler==0.12.1
dicom2nifti==2.4.10
dynamic-network-architectures==0.3.1
filelock==3.13.1
fonttools==4.49.0
fsspec==2024.2.0
future==1.0.0
graphviz==0.20.1
idna==3.6
imagecodecs==2024.1.1
imageio==2.34.0
Jinja2==3.1.3
joblib==1.3.2
kiwisolver==1.4.5
lazy_loader==0.3
linecache2==1.0.0
loguru==0.7.2
MarkupSafe==2.1.5
matplotlib==3.8.3
mpmath==1.3.0
networkx==3.2.1
nibabel==5.2.1
nnunetv2==2.2
numpy==1.26.4
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-ml-py==12.535.133
nvidia-nccl-cu12==2.19.3
nvidia-nvjitlink-cu12==12.3.101
nvidia-nvtx-cu12==12.1.105
opencv-python==4.9.0.80
packaging==23.2
pandas==2.2.1
pillow==10.2.0
pydicom==2.4.4
pyparsing==3.1.1
python-dateutil==2.8.2
python-gdcm==3.0.23
pytz==2024.1
PyYAML==6.0.1
requests==2.31.0
scikit-image==0.22.0
scikit-learn==1.4.1.post1
scipy==1.12.0
seaborn==0.13.2
SimpleITK==2.3.1
six==1.16.0
sympy==1.12
threadpoolctl==3.3.0
tifffile==2024.2.12
torch==2.2.1
tqdm==4.66.2
traceback2==1.4.0
triton==2.2.0
typing_extensions==4.10.0
tzdata==2024.1
unittest2==1.1.0
urllib3==2.2.1
wcwidth==0.2.13
yacs==0.1.8
 

@mathieuboudreau

@Eddymorphling I found the issue, and a temporary fix. For a more permanent solution, I'd rather wait for @hermancollin.

The problem stems from the fact that in the "folds" directory of the Sickkids foundation model, v1.1.0, the checkpoint filenames are called "checkpoint_best.pth". However, because in our nnunet call,

predictor.initialize_from_trained_model_folder(path_model, use_folds=None)

we don't define a value for the argument "checkpoint_name" (i.e. we don't pass checkpoint_name='checkpoint_best.pth'), nnunet uses the default:

https://github.com/MIC-DKFZ/nnUNet/blob/83dad35e8f68cd834a28ec012955e8df9722eca6/nnunetv2/inference/predict_from_raw_data.py#L68C46-L68C61

which is checkpoint_name=checkpoint_final.pth, but that file isn't in the folds folder for this model, which snowballs and later results in an error.

So a quick fix that worked for me was to rename the checkpoint files in each folds folder to "checkpoint_final.pth", and that resolved the issue for me.
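If you want to script that rename instead of doing it by hand, here's a minimal sketch (the helper name and the model path are hypothetical; adjust the path to wherever you unzipped the release). Alternatively, passing `checkpoint_name='checkpoint_best.pth'` to `initialize_from_trained_model_folder` should avoid the rename entirely.

```python
from pathlib import Path

def promote_best_checkpoints(model_dir):
    """Rename checkpoint_best.pth to checkpoint_final.pth in each fold_*
    folder, so nnU-Net's default checkpoint_name finds them."""
    renamed = []
    for fold in Path(model_dir).glob("fold_*"):
        best = fold / "checkpoint_best.pth"
        final = fold / "checkpoint_final.pth"
        if best.exists() and not final.exists():
            best.rename(final)
            renamed.append(final)
    return renamed

# e.g. promote_best_checkpoints("models/model_seg_unmyelinated_tem")
```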

Screenshot 2024-03-07 at 2 57 59 PM

Let me know if it works for you!

@Eddymorphling

Ah I missed that vital piece of info. It works now, thank you! This has been helpful.

v1.1 of the unmyelinated model performs much better than v1.0. Do you have some info (like sample type, TEM/SEM etc.) on the training images used for the SickKids model? Armand already shared the info on the scaling of the input images. From what I understand there is also a "Stanford" model that is WIP and performs even better? Thank you for all your efforts on this!

@hermancollin

Hi @Eddymorphling. Happy to hear you were able to make it work. Thanks @mathieuboudreau for looking into this - I'll make a PR to catch this error in the future. In more recent scripts, we have a CLI argument so the user can choose between best or final checkpoints (although I only released the best checkpoints for the SickKids model, to halve the release filesize).

v1.1 of the unmyelinated model performs much better than v1.0. Do you have some info (like sample type, TEM/SEM etc.) on the training images used for the SickKids model? Armand already had shared the info on the scaling of the input images.

The modality for the SickKids model is TEM. The team it was initially developed for studies myelination in mouse models. They had multiple samples per genotype per timepoint, which I think the training data partially covered. What about your images? I know they are TEM as well.

From what I understand there is also a "Stanford" model that in WIP which performs even better? Thank you for all your efforts on this!

Yes, this one is still a WIP. It is also being trained on TEM images but their images look quite different and have a very high resolution. It might perform better on your data, but your mileage may vary.

I would be interested in knowing more about your project. From what I gathered, you are interested in segmenting myelinated + unmyelinated/remyelinated axons as well. If you were willing to collaborate maybe we could help you get better performance by training or fine-tuning the models.

@hermancollin

(please note I fixed the problem with best checkpoints + updated the download script for TEM unmyelinated v1.1 in f33f43b)

@Eddymorphling

Eddymorphling commented Apr 17, 2024

Hi @hermancollin Sorry to go back to this. I had to recreate my conda env recently, so I had to reinstall nn-axondeepseg. I set up everything as described on the main page but end up with the same error as before. I can confirm that git clone pulled the latest version of all files, including the fix from f33f43b


/home/ivm/.local/lib/python3.9/site-packages/jupyter_client/__init__.py:23: UserWarning: Could not import submodules
  warnings.warn("Could not import submodules")
2024-04-17 10:08:31.531 | INFO     | __main__:main:73 - A single model was found: models/model_seg_unmyelinated_sickkids_tem_best. It will be used by default.
2024-04-17 10:08:31.536 | INFO     | __main__:main:93 - Running inference on device: cuda:0
Traceback (most recent call last):
  File "/home/ivm/nn-axondeepseg/nn_axondeepseg.py", line 130, in <module>
    main()
  File "/home/ivm/nn-axondeepseg/nn_axondeepseg.py", line 96, in main
    predictor.initialize_from_trained_model_folder(
  File "/home/ivm/conda/envs/nn-axondeepseg_miniforge/lib/python3.9/site-packages/nnunetv2/inference/predict_from_raw_data.py", line 96, in initialize_from_trained_model_folder
    configuration_manager = plans_manager.get_configuration(configuration_name)
UnboundLocalError: local variable 'configuration_name' referenced before assignment

I tried running using this CLI - python nn_axondeepseg.py --seg-type UM --path-out $output_folder --path-dataset $input_folder --use-gpu

EDIT: Here is some additional output from the logs

use_folds is None, attempting to auto detect available folds
found the following folds: []

@hermancollin

Hi @Eddymorphling. It seems the script cannot find the model checkpoints. Can you find the model_seg_unmyelinated_sickkids_tem_best folder and tell me what is inside?

@Eddymorphling

@hermancollin Thanks for reaching out! Here is a screenshot of the folder

image

@hermancollin

@Eddymorphling ahhh I think I see the problem. Does it work if you add the --use-best option? e.g.

python nn_axondeepseg.py --seg-type UM --path-out [...] --path-dataset [...] --use-gpu --use-best

That's an important detail. Thank you for reporting this problem. I'll try to make the script more automated but for now this argument is required if you only have the checkpoint_best.pth models. Without it, nnunet looks for models named checkpoint_final.pth.

@Eddymorphling

Eddymorphling commented Apr 18, 2024

Ah great, that did it! Thanks again.
Just another small question, predictions are saved in RGB format currently. What should I tweak in nn-axondeepseg.py to save it as a simple binarized 8bit file?

@hermancollin

@Eddymorphling are they? I'm surprised. On my side, the model predicts grayscale masks, so I'm not sure why you get this behavior. In any case, this is the function you would need to modify:

def rescale_predictions(outpath, segtype):
    predictions = Path(outpath).glob('*.png')
    rescaling_factor = 255
    if segtype == 'AM':
        rescaling_factor = 127
    for pred in tqdm(predictions):
        img = cv2.imread(str(pred))
        cv2.imwrite(str(pred), img*rescaling_factor)

Change L53 to

img = cv2.imread(str(pred), cv2.IMREAD_GRAYSCALE)

I reckon this will be enough.
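To see why that multiplication yields the full 8-bit range, here's a minimal, dependency-free sketch of the same rescaling logic (`rescale_mask` is a hypothetical helper; nnU-Net writes class indices, 0/1 for unmyelinated and 0/1/2 for axon-myelin):

```python
def rescale_mask(values, segtype):
    """Spread nnU-Net class indices over the 8-bit range.

    UM masks contain {0, 1}    -> {0, 255}
    AM masks contain {0, 1, 2} -> {0, 127, 254}
    """
    factor = 127 if segtype == 'AM' else 255
    return [v * factor for v in values]

print(rescale_mask([0, 1], 'UM'))     # [0, 255]
print(rescale_mask([0, 1, 2], 'AM'))  # [0, 127, 254]
```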

@hermancollin

@Eddymorphling actually you were right. This is now fixed on the latest version.

@Eddymorphling

@hermancollin Hi! Thank you for working on this. I updated my scripts and everything works like a charm now.

I also saw the new Stanford model uploaded and tested it on my images for segmenting unmyelinated axons; it works really nicely. May I ask what the pixel scaling was for the original training images used to generate the Stanford TEM model? Just want to make sure that my images are rescaled to match the training dataset.

@hermancollin

Hi @Eddymorphling! It has been a while! Since our last exchange, the axondeepseg software was updated and now supports these models (so this current nn-axondeepseg repository is no longer up to date). I would highly suggest you download the latest version of AxonDeepSeg and try the models there. You will still be able to use the Stanford model, but it will be more stable and you will additionally be able to run morphometrics on the unmyelinated axon masks.

As for the pixel size for the Stanford model, it is 4.93 nm/px isotropic.
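If it helps, the matching resize can be computed directly from your acquisition's pixel size. A hedged sketch (`resized_shape` is a hypothetical helper; 4.93 nm/px is the Stanford training resolution quoted above):

```python
STANFORD_PX_SIZE_NM = 4.93  # training resolution of the Stanford TEM model

def resized_shape(shape, px_size_nm, target_px_size_nm=STANFORD_PX_SIZE_NM):
    """New (height, width) so an image matches the target pixel size.
    Images with coarser pixels than the target get upsampled."""
    factor = px_size_nm / target_px_size_nm
    return tuple(round(dim * factor) for dim in shape)

# An acquisition at 9.86 nm/px needs a 2x upsample to reach 4.93 nm/px:
print(resized_shape((1000, 1200), 9.86))  # (2000, 2400)
```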

@Eddymorphling

Eddymorphling commented Sep 26, 2024

Thanks @hermancollin. I did upgrade to the latest version of ADS that includes the new generalist model. TBH it did not work out too well for me when segmenting myelinated axons. In the screenshot below, image 2 was segmented using the old TEM model with the parameter -s 0.10, and image 3 with the new generalist model. I prefer the old TEM model, but I purged my old ADS env (ADS v4.1) and am not sure how to install it again. I think having the pixel-scaling in the CLI makes a big difference in inference for my datasets, and I understand this is no longer needed with the new generalist model.

image

@hermancollin

@Eddymorphling thank you for testing the new version and giving your feedback. The sad part is that I'm pretty sure the generalist model would be competitive with an appropriate rescaling, but this can no longer be achieved with a single -s argument.

We discussed internally adding back an option to resize the images like before. Maybe we should increase the priority for this feature.

@Eddymorphling

Thank you for the quick feedback and help @hermancollin. I am not sure how the -s argument works internally with both models, to be honest. Will this feature be implemented soon? Is it possible to roll back to the older version as a temporary solution? If yes, what would be the git address to do a git clone?
