Is your feature request related to a problem? Please describe.
Cellpose natively supports 3D segmentation; we have so far used it in "pseudo 3D mode", where you segment each z-slice separately, and then stitch together the masks to aggregate transcripts across the z stack. From the cellpose docs:
In those instances, you may want to turn off 3D segmentation (do_3D=False) and run instead with stitch_threshold>0. Cellpose will create ROIs in 2D on each XY slice and then stitch them across slices if the IoU between the mask on the current slice and the next slice is greater than or equal to the stitch_threshold.
Is there any way to do this through Sopa? We would probably have to load the entire z-stack of images, as the model needs access to all of them. In our case, we have MERSCOPE data, and we can visually see that cells shift a little as we move across the z-stack, so it seems important to segment each z-slice separately, rather than just using the center slice and ignoring the z-coordinate of the transcripts.
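For reference, the stitching behaviour quoted from the cellpose docs can be sketched in plain NumPy. This is a simplified illustration of the idea (relabel each mask in slice z+1 to the slice-z mask with the highest IoU when that IoU meets the threshold, otherwise start a new label), not the actual cellpose implementation; the function name and signature are made up for this sketch.

```python
import numpy as np

def stitch_masks(slices, stitch_threshold=0.5):
    """Stitch per-slice 2D label masks into a consistent 3D labeling.

    For each labeled region in slice z, find the region in slice z-1
    with the highest IoU. If that IoU >= stitch_threshold, reuse the
    previous label; otherwise assign a fresh one. A simplified sketch
    of cellpose's stitch_threshold behaviour, not its real code.
    """
    stitched = [slices[0].copy()]
    next_label = int(slices[0].max()) + 1
    for z in range(1, len(slices)):
        prev, cur = stitched[-1], slices[z]
        out = np.zeros_like(cur)
        for lab in np.unique(cur):
            if lab == 0:  # background
                continue
            mask = cur == lab
            best_iou, best_lab = 0.0, 0
            # only labels that actually overlap this mask can match
            for plab in np.unique(prev[mask]):
                if plab == 0:
                    continue
                pmask = prev == plab
                inter = np.logical_and(mask, pmask).sum()
                union = np.logical_or(mask, pmask).sum()
                iou = inter / union
                if iou > best_iou:
                    best_iou, best_lab = iou, plab
            if best_iou >= stitch_threshold:
                out[mask] = best_lab  # same cell continues across slices
            else:
                out[mask] = next_label  # new cell appears in this slice
                next_label += 1
        stitched.append(out)
    return np.stack(stitched)
```

With real cellpose output, the per-slice masks would come from running the model with do_3D=False on each XY slice, then passing them through a stitcher like this to aggregate transcripts per cell across z.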
Hello @Marius1311, we currently don't support 3D cellpose segmentation, but this is definitely something I can work on!
Baysor might already work in 3D, but I'm not sure, I need to test it.
I'll try to work on this, but I have to say that my next month is really busy, so I can't start working on this before late October. I hope this sounds reasonable.