Issue when running segmentation #8
Comments
Ok, made some progress:
@Eddymorphling thank you for trying this. I don't understand why you had to install nnunet manually, because it is specified in the requirements.txt file. Also, v2.3.1 should be fine (we want higher than 2.2). What was the complete command you tried that caused the error in your first message?
Also, the additional dependencies are weird to me. If you said it worked with v2.2, maybe there were new dependencies added in v2.3. Thank you for reporting this, I'll try to reproduce.
Hi @hermancollin I think it defaults to installing v2.3 with the requirements file, which leads to the error I mentioned above when running the CLI command. Something in v2.3 might be different and might need different dependencies. Reverting back to v2.2 helped me run everything smoothly. Happy to test v2.3 if you have an update on it. Regarding the unmyelinated model that this repo provides: do you know what pixel resolution the model was trained at? Also, do you have an updated version of this model which performs better? I remember you had mentioned something along these lines in our earlier discussion. Thank you.
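For anyone hitting the same resolver behavior: a version pin along these lines in the requirements file (exact file layout in the repo is an assumption; `nnunetv2` is the PyPI package name) keeps pip from installing v2.3:

```
nnunetv2==2.2
```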
@Eddymorphling sorry for the delayed response. I am trying to wrap up a manuscript for next week. I'll have more time to help you after - hope this is not too urgent on your side.
The model currently uploaded (on axondeepseg/model_seg_unmyelinated_tem) was trained on data from the SickKids Foundation. The exact pixel size is 0.00481 um/px. I don't know how close this is to your data. Yes, there is a better model. The one that was uploaded on October 12, 2023 is one of 5 models trained on this data. For more information, see axondeepseg/model_seg_unmyelinated_tem#1. I'm going to upload the rest this afternoon so that you can try it and hopefully give us some feedback. There is an additional model that I am working on with data from Stanford. I expect this one will work even better, but I can't upload it for now because it is still a WIP.
@Eddymorphling I uploaded the full model. Keep us updated!
Thank you! You are the best! |
Hey @hermancollin, downloads went well, but I get an error when running it with the CLI. Does the
@hermancollin Sorry to bother you again, could you please help me with the above issue? Thank you.
Hi @Eddymorphling - Armand is working towards a paper deadline that's due in the next few days (I can't recall), so it's likely he'll only be able to revisit this early next week. I'll try and take a look at it ASAP to see if I can reproduce your error and maybe get an idea of how it can be resolved on your end. |
Thank you for reaching out! That would be helpful. |
@Eddymorphling I just ran the install today and it segmented fine; I think maybe you downloaded the repo before Armand pinned nnunet to version 2.2 (1c369ff). Can you verify this? Do a
Scrolling up, I see that you should have already gotten version 2.2 installed: #8 (comment), sorry for missing that! Could you please still do a
I'm going to test with the latest model, https://github.com/axondeepseg/model_seg_unmyelinated_tem/releases/tag/v1.1.0, in case that one wasn't automatically downloaded by the CLI, brb
@mathieuboudreau Thanks for testing this! I was able to have it segment images using the UM model (v1.0) without any issues. But it does not work well with the latest model (SickKids foundation model, v1.1.0). All I did was download the model manually, unzip it, and assign the path to the model in the CLI using
@Eddymorphling I found the issue, and a temporary fix. For a more permanent solution, I'd rather wait for @hermancollin. The problem stems from the fact that in the "folds" directory of the SickKids foundation model v1.1.0, the checkpoint filenames are "checkpoint_best.pth". However, in our nnunet call (nn-axondeepseg/nn_axondeepseg.py, line 93 at 1c369ff), we don't define a value for the argument "checkpoint_name", which defaults to "checkpoint_final.pth". So a quick fix that worked for me was to rename the checkpoint files in each fold's folder to "checkpoint_final.pth", and that resolved the issue for me. Let me know if it works for you!
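A minimal sketch of that quick fix, assuming the unzipped release follows nnUNet's usual per-fold layout (the `demo_model` directory and the two-fold setup here are fabricated purely for illustration; point the loop at your actual model path):

```shell
# The v1.1.0 release ships "checkpoint_best.pth" in each fold folder, but
# nnUNet's predictor looks for "checkpoint_final.pth" by default.
# demo_model stands in for the real unzipped model path (an assumption).
MODEL_DIR="demo_model"
mkdir -p "$MODEL_DIR/fold_0" "$MODEL_DIR/fold_1"
touch "$MODEL_DIR/fold_0/checkpoint_best.pth" "$MODEL_DIR/fold_1/checkpoint_best.pth"

# Rename the checkpoint in every fold so the default lookup succeeds
for fold in "$MODEL_DIR"/fold_*; do
    if [ -f "$fold/checkpoint_best.pth" ]; then
        mv "$fold/checkpoint_best.pth" "$fold/checkpoint_final.pth"
    fi
done
```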
Ah, I missed that vital piece of info. It works now, thank you! This has been helpful. v1.1 of the unmyelinated model performs much better than v1.0. Do you have some info (like sample type, TEM/SEM, etc.) on the training images used for the SickKids model? Armand had already shared the info on the scaling of the input images. From what I understand, there is also a "Stanford" model that is a WIP which performs even better? Thank you for all your efforts on this!
Hi @Eddymorphling. Happy to hear you were able to make it work. Thanks @mathieuboudreau for looking into this - I'll make a PR to catch this error in the future. In more recent scripts, we have a CLI argument so the user can choose between best or final checkpoints (although I only released the best checkpoints for the SickKids model, to halve the release filesize).
The modality for the SickKids model is TEM. The team it was initially developed for studies myelination in mouse models. They had multiple samples per genotype per timepoint, which I think the training data partially covered. What about your images? I know they are TEM as well.
Yes, this one is still a WIP. It is also being trained on TEM images, but their images look quite different and have a very high resolution. It might perform better on your data, but your mileage may vary. I would be interested in knowing more about your project. From what I gathered, you are interested in segmenting myelinated + unmyelinated/remyelinated axons as well. If you were willing to collaborate, maybe we could help you get better performance by training or fine-tuning the models.
(please note I fixed the problem with best checkpoints + updated the download script for TEM unmyelinated v1.1 in f33f43b) |
Hi @hermancollin Sorry to go back to this. I had to recreate my conda env recently, so I had to reinstall nn-axondeepseg. I set up everything as mentioned on the main page but end up with the same error as before. I can confirm that git clone has pulled the latest version of all files, as in the PR mentioned in f33f43b.
I tried running using this CLI command. EDIT: Here is some additional output from the logs:
Hi @Eddymorphling. It seems the script cannot find the model checkpoints. Can you find the
@hermancollin Thanks for reaching out! Here is a screenshot of the folder
@Eddymorphling ahhh I think I see the problem. Does it work if you add the
That's an important detail. Thank you for reporting this problem. I'll try to make the script more automated, but for now this argument is required if you only have the
Ah great, that did it! Thanks again. |
@Eddymorphling are they? I'm surprised. On my side, the model predicts grayscale masks, so I'm not sure why you get this behavior. In any case, this is the function you would need to modify: nn-axondeepseg/nn_axondeepseg.py, lines 46 to 54 at f33f43b.
Change L53 to img = cv2.imread(str(pred), cv2.IMREAD_GRAYSCALE). I reckon this will be enough.
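As an aside, the reason that one-line change is safe can be sketched without OpenCV: for a mask whose three channels are identical (as a binary segmentation mask saved as RGB would be), reading it as grayscale simply collapses the channel axis. The function and array names below are illustrative, not part of the repo:

```python
import numpy as np

def collapse_mask(mask: np.ndarray) -> np.ndarray:
    """Mimic the effect of cv2.IMREAD_GRAYSCALE on a mask whose
    RGB channels are identical: keep a single 2D channel."""
    if mask.ndim == 3:
        return mask[..., 0]  # all channels equal, so any one will do
    return mask

# Illustrative binary mask replicated across 3 channels
binary = (np.arange(16).reshape(4, 4) % 2 * 255).astype(np.uint8)
rgb_mask = np.stack([binary] * 3, axis=-1)   # shape (4, 4, 3)
gray = collapse_mask(rgb_mask)               # shape (4, 4)
```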
@Eddymorphling actually you were right. This is now fixed on the latest version. |
@hermancollin Hi! Thank you for working on this. I updated my scripts and everything works like a charm now. I also saw the new Stanford model uploaded and tested it on my images for segmenting unmyelinated axons, and it works really nicely. May I ask what the pixel scaling was for the original images of the training data used to generate the Stanford TEM model? Just want to make sure that my images are rescaled to match the training dataset.
Hi @Eddymorphling! It has been a while! Since our last exchange, the axondeepseg software was updated and now supports these models (so this current repository is no longer needed). As for the pixel size for the Stanford model, it is 4.93 nm/px isotropic.
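To match that training resolution, a rescaling factor can be derived from the ratio of pixel sizes. Here is a dependency-free sketch using nearest-neighbor resampling; the function name and target value default are illustrative (a real pipeline would use a proper resampling library with interpolation):

```python
import numpy as np

def rescale_to_pixel_size(img: np.ndarray, px_in_nm: float,
                          px_target_nm: float = 4.93) -> np.ndarray:
    """Resample img so its pixel size matches px_target_nm.
    A factor > 1 means the image is upsampled."""
    factor = px_in_nm / px_target_nm
    h, w = img.shape[:2]
    new_h, new_w = round(h * factor), round(w * factor)
    # Nearest-neighbor index maps for rows and columns
    rows = np.minimum((np.arange(new_h) / factor).astype(int), h - 1)
    cols = np.minimum((np.arange(new_w) / factor).astype(int), w - 1)
    return img[np.ix_(rows, cols)]

# An image acquired at 9.86 nm/px is upsampled 2x to reach 4.93 nm/px
img = np.zeros((100, 120), dtype=np.uint8)
resized = rescale_to_pixel_size(img, px_in_nm=9.86)
```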
Thanks @hermancollin. I did upgrade to the latest version of ADS that includes the new generalist model. TBH it did not work out so well for me when segmenting myelinated axons. In the screenshot below, image 2 was segmented using the old TEM model with the parameter
@Eddymorphling thank you for testing the new version and giving your feedback. The sad part is that I'm pretty sure the generalist model would be competitive with an appropriate rescaling, but this can no longer be achieved with a single argument. We discussed internally adding back an option to resize the images like before. Maybe we should increase the priority for this feature.
Thank you for the quick feedback and help, @hermancollin. I am not sure how
Hi @hermancollin
As per our previous discussion, I am testing nn-axondeepseg. The setup went well; I just had to also install a pip package manually (wcwidth). Now when I run the CLI segmentation command, I come across the following error. Any advice on how to fix this? Some other info: my fresh conda env runs on Python 3.10, input files are in .png format, and cuda/pytorch sees the GPU in my conda environment.