Update pybio (bioimageio) submodules #164
Conversation
treating an absolute path on win leads to interpreting the drive letter as uri scheme...
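The clash can be reproduced with the standard library: `urllib.parse` treats everything before the first colon as a scheme, so an absolute Windows path like `C:\data\weights.zip` parses with scheme `c`. A minimal sketch of a guard against this (the helper name `resolve_source` is hypothetical, not from the tiktorch codebase):

```python
from urllib.parse import urlparse

def resolve_source(src: str) -> str:
    """Classify src as a local path or a URI (hypothetical helper)."""
    scheme = urlparse(src).scheme
    # A Windows drive letter ("C:", "D:", ...) parses as a one-letter
    # scheme, so only schemes longer than one character count as URIs.
    return "uri" if len(scheme) > 1 else "path"
```

One common convention is exactly this length check, since all registered URI schemes (http, https, file, ...) are longer than a single character.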
.. and refactor package name pybio -> bioimageio. Problem: the torchscript weight format seems to become invalid due to this refactor. Do any bioimageio models currently on the website already use torchscript, or can we just go ahead, recreate these test weights, and ignore this problem otherwise? @constantinpape , I think you created the torchscript weights, what's your take?
out of curiosity - do you know why this happens?
no, but the same happens with pickle. So maybe this is related
ahh @FynnBe, you have good instincts. I have checked the old weights, and those reference the model python file in …
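The pickle connection is direct: pickle (which torchscript archives also use internally) stores the dotted module path of the classes it serializes, not their source code, so renaming the `pybio` package to `bioimageio` invalidates references inside old archives. A minimal stdlib illustration (the `Dummy` class is a stand-in for a real model class):

```python
import pickle

class Dummy:
    """Stand-in for a model class; pickle records where it was defined."""
    pass

payload = pickle.dumps(Dummy())
# The serialized bytes embed the defining module's dotted path and the
# class name as plain strings; renaming the package breaks these
# references when the archive is loaded again.
assert Dummy.__module__.encode() in payload
assert b"Dummy" in payload
```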
good. The refactor really shouldn't break anything else |
@k-dominik any idea why the CI doesn't run here? |
b52e09b to 61a2285
lgtm in general. The test failure seems to be related to the torchscript archive. Not sure how this was generated, but given that its size is almost twice what it was before, something seems odd.
to me it looks like the …
@constantinpape , @FynnBe how was this weight file generated? Would it be possible to regenerate it? (By the way, I found it interesting that this information could not be inferred from the …)
Yes indeed, looks like something with the torchscript weights is wrong.
The whole model was copied from here https://github.com/bioimage-io/pytorch-bioimage-io/blob/master/specs/models/unet2d_nuclei_broad/UNet2DNucleiBroad.model.yaml#L63-L76 and the torchscript weights are just generated from the normal pytorch model.
Yes, that's indeed a bit unfortunate. The reason is that we had a formalized description of the training procedure in the first iteration of the bioimage format. However, it turned out that this was too complex, so we now rely on self-documentation of the training procedure and on providing the training script or linking to the relevant notebook. Unfortunately, for the model at hand this got lost over time because it was copied around so much ...
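For regenerating the weights, a minimal sketch of how a torchscript archive is typically produced from an eager pytorch model via tracing; `TinyNet` and the output filename are stand-ins, not the actual unet2d_nuclei_broad model:

```python
import torch

class TinyNet(torch.nn.Module):
    """Stand-in for the real UNet2DNucleiBroad model."""
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinyNet().eval()
# Tracing with a dummy input records the graph into a torchscript
# archive that can be loaded without the original python class.
traced = torch.jit.trace(model, torch.zeros(1, 1, 16, 16))
traced.save("tinynet_traced.pt")  # hypothetical output filename
```

Regenerating the archive this way from the current package layout would sidestep the stale module references entirely, since the traced graph is self-contained.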
thank you for the update @constantinpape !
let's re-add the weights here and tackle the single-model change in a separate PR. I've opened an issue (#165) about using a single model for the test case.
I think the torchscript weights were generated with this script: …
lgtm, the only thing I really do not like is that this PR adds even more pip dependencies.
Ideally, we would build the whole tiktorch-server package without pip. I have started the process of making some of the packages available via conda-forge (conda-forge/staged-recipes#15250)
@k-dominik should we continue this PR a bit more to switch to conda packages and further update the bioimageio dependencies?
oh I would prefer to merge this now. With conda packages you never know :)
.. and remove unused cache_path from eval_model_zip