enhance + bugfix of images and labels elements #127
Conversation
I think it's a good idea to keep coords. But one important thing: we need to understand/decide what the relationship with transformations is. Scales and translations play along well, but when rotations are involved things get dirty. I would start with this:
We could extend to this (not urgent):
Some considerations; I just found an implication of this on this PR.
Also, we could decide to avoid saving the coordinates directly and instead convert them to NGFF "dataset transformations" (the translation + scale pair present at each level of the multiscale). The NGFF specs say to apply the new transformations after applying the dataset transformations, so this would be equivalent to applying the NGFF transformations to the xarray coordinates. In both cases (saving the coordinates directly or converting them to the NGFF dataset transformations), we can only support coordinates that are equivalent to a scale and a translation. Non-linear coordinate displacements (like 0, 1, 2, 3, 10) would break the interplay with the NGFF transformations.
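As a sketch of that constraint (plain numpy; the helper name is hypothetical, not spatialdata API): coordinates can be converted to an NGFF scale + translation pair only when they are uniformly spaced, so anything else should be rejected.

```python
import numpy as np

def coords_to_scale_translation(coords):
    """Derive per-axis (scale, translation) pairs from coordinate arrays.

    Hypothetical helper: only coordinates that are an affine function of the
    pixel index (uniform spacing) can be expressed as an NGFF
    scale + translation pair.
    """
    scale, translation = {}, {}
    for ax, c in coords.items():
        c = np.asarray(c, dtype=float)
        steps = np.diff(c)
        if not np.allclose(steps, steps[0]):
            # e.g. [0, 1, 2, 3, 10] would break the interplay with NGFF transformations
            raise ValueError(f"coordinates along {ax!r} are not uniformly spaced")
        scale[ax] = float(steps[0])
        translation[ax] = float(c[0])
    return scale, translation

# a pixel grid scaled by (2.0, 0.5) and translated by (10, 5)
print(coords_to_scale_translation(
    {"y": 10 + 2.0 * np.arange(4), "x": 5 + 0.5 * np.arange(6)}
))  # ({'y': 2.0, 'x': 0.5}, {'y': 10.0, 'x': 5.0})
```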
what do you need to flip them for?
If by adjust you mean adjusting IO, then for sure we'll have to adapt.
that's a good point. ideas could be:
Answering the first two points:
The tiles were mapped to the global space in the wrong order. I flipped the y axis of each tile. Another (better) way would have been to adjust the mapping to the global space instead. We can do this later (I wrote a TODO in the code).
IO is not necessary; we need to make the processing methods (and the transformations, since the processing methods can be called both on the intrinsic space and on any transformed space) aware of the coordinates, since we can't work with pure pixel coordinates anymore, only with xarray coordinates.
Regarding the third part, currently the first option is what is implemented. Referring to the screenshot in this discussion, I am basically assuming that the transformation corresponds to the one shown there. We could save the coordinates in that slot by deriving the NGFF transformation that produces the corresponding xarray coordinates. The ones for the next multiscales can be derived from the first (or can also be computed from xarray). I save all the multiscale transformations to the file, but when I read I actually load just the top one and re-derive the others: as you pointed out, it should be the same up to some numerical-precision errors. So the interpretation of the xarray coordinates is that they describe the intrinsic coordinate system (we would never use the pixel space anymore), and the new transformation classes would always operate on the xarray coordinates.
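The re-derivation step can be sketched like this (hypothetical helper, plain numpy): since all levels of a multiscale cover the same physical extent, each level's scale factors follow from the top-level transformation and the shape ratio, matching the saved ones up to numerical precision.

```python
import numpy as np

def derive_multiscale_scales(top_scale, top_shape, shapes):
    """Re-derive the scale factors of every multiscale level from the
    top-level one (hypothetical helper): a downsampled level spans the same
    extent with fewer pixels, so its scale grows by the downsampling factor."""
    top_scale = np.asarray(top_scale, dtype=float)
    top_shape = np.asarray(top_shape, dtype=float)
    return [top_scale * (top_shape / np.asarray(s, dtype=float)) for s in shapes]

# full resolution 1024x1024 at scale (0.5, 0.5), plus two downsampled levels
levels = derive_multiscale_scales(
    (0.5, 0.5), (1024, 1024), [(1024, 1024), (512, 512), (256, 256)]
)
print(levels)  # [array([0.5, 0.5]), array([1., 1.]), array([2., 2.])]
```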
👍
I checked: actually, flipping the images is correct, because the points are aligned with the flipped images. If I change the global positioning of the fov, the points become wrong.
codecov in this repo is cursed.....
@LucaMarconato this check is now catching bugs here: spatialdata/spatialdata/_core/_transform_elements.py, lines 172 to 187 in 238fb8c.
I find reading those lines quite difficult; in particular there are things like factors = shape0 / shape and factors - min(factors) that I can't tell whether they are a bug or a leftover. Do you mind taking a look? I fixed 1 but there are still 3 failing.
Meanwhile I'll go on and work on IO for channels with omero metadata.
@scverse/spatialdata please refer to the header comment for the description of this PR
Thanks, @giovp! The code changes look good to me. I am not sure about the other test, but the failing bounding box query tests suggest to me that something has changed in the indexing behavior. The bounding boxes are taken using image.sel (see here). Somehow, the shape of the returned image seems off. Happy to pair on this if you'd like!
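The indexing subtlety can be illustrated in plain xarray (the shapes and coordinates below are made up, not those of the failing test): .sel slices by coordinate labels and is inclusive on both endpoints, so the returned shape counts matching coordinate values, unlike a positional .isel slice.

```python
import numpy as np
import xarray as xr

# an image with explicit coordinates: y runs 10..48 in steps of 2, x is 0..29
image = xr.DataArray(
    np.zeros((20, 30)),
    dims=("y", "x"),
    coords={"y": 10 + 2 * np.arange(20), "x": np.arange(30)},
)

# .sel slices by coordinate *labels*, inclusively: y in [14, 20] matches the
# four values 14, 16, 18, 20; x in [5, 9] matches five values
box = image.sel(y=slice(14, 20), x=slice(5, 9))
print(box.shape)  # (4, 5)

# a positional .isel with the same numbers is half-open instead
print(image.isel(x=slice(5, 9)).sizes["x"])  # 4
```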
I am not sure which one I prefer, but I would start with the first approach, making the transformations agnostic to that. I'll change the behavior.
yeah, I think when I tried that, only the transformations didn't work; IO and models shouldn't rely on that. If so, I could quickly push a fix.
Yeah, the code was very weird, I removed it altogether. Now I do the transformation for each element of the multiscale and then I assemble the
Codecov Report

Additional details and impacted files:

@@            Coverage Diff             @@
##              main     #127      +/-  ##
==========================================
+ Coverage    86.75%   86.82%   +0.06%
==========================================
  Files           23       23
  Lines         3277     3377     +100
==========================================
+ Hits          2843     2932      +89
- Misses         434      445      +11
==========================================
@giovp I reviewed the code and fixed the tests. There is only one comment that could require changes (the one about the name of the datatree nodes). Or we can also just merge and open an issue about that.
@LucaMarconato I'll merge this; note that the default name for the dataarray in the datatree is not what we decided. As I think the accessor would be really nice to have, I will open an issue in spatial-image.
This PR does the following:

Add support for coordinates in (multiscale)spatial-image. Without coordinates, it doesn't make much sense to use an xarray.DataTree, because many methods do not work across scales. The implementation is the following: the coordinates are set on the (multiscale)spatial-image from the parser (spatialdata/spatialdata/_core/core_utils.py, lines 307 to 310 in f65151e; similarly for spatial-image). One important thing to take into account is that the coordinates always refer to the implicit (pixel) coordinate systems. They are nonetheless useful as they unlock the power of xarray operations on data(trees).
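A small xarray sketch of that last point (hypothetical shapes and coordinates): because every scale carries coordinates expressed in the same implicit pixel space of the full-resolution level, a single coordinate-based query is meaningful on every level, which is what makes cross-scale operations on a DataTree workable.

```python
import numpy as np
import xarray as xr

# two levels of a hypothetical multiscale; the downsampled level's coordinates
# sit at the centers of 2x2 pixel blocks of the full-resolution grid
scale0 = xr.DataArray(np.zeros((8, 8)), dims=("y", "x"),
                      coords={"y": np.arange(8), "x": np.arange(8)})
scale1 = xr.DataArray(np.zeros((4, 4)), dims=("y", "x"),
                      coords={"y": 0.5 + 2 * np.arange(4),
                              "x": 0.5 + 2 * np.arange(4)})

# the same label-based query works on both levels without pixel arithmetic
region0 = scale0.sel(y=slice(2, 6), x=slice(2, 6))
region1 = scale1.sel(y=slice(2, 6), x=slice(2, 6))
print(region0.shape, region1.shape)  # (5, 5) (2, 2)
```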
Add support for 3D images. This was commented out due to an issue with the spatial-image class, which was addressed in spatial-image/spatial-image#16. The class is also supported in the multiscale case.

Enhance schema for raster elements. In particular, it now performs correct validation of scales and re-implements the validation for "transform" being present in the right place in attrs.

It closes the issues listed here:
- SpatialImage is created from a xarray.DataArray #59
- MultiscaleSpatialImage #115

Open questions
One thing I'd like to address here, or potentially in a separate PR, is the use of name as an attribute in (multiscale)spatial-image. It's not a specific attribute of spatial-image but of the xarray DataArray or DataTree. For instance, in the case of a datatree it's used to access the DataArray in the desired node.

Right now, we are not very consistent about this throughout the repo. In particular, the transformations, I think, expect that name="image", yet in many other parts of the repo that is not the case. Furthermore, if the user passes the name, the name is not saved (yet it is used in creating the spatial-image object). It's also unclear how this interplays with the key of the image element in spatialdata.

I don't have any preference on how to handle this, but we should be consistent, so I suggest either of these two implementations:
- use "image" (default) across the repo (and so we can avoid saving/reading it).
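To illustrate why the first option helps (plain xarray, not the spatial-image API): the name given to a DataArray becomes its variable key once it lives in a Dataset, and hence in each DataTree node, so a single agreed default such as "image" makes lookups predictable.

```python
import numpy as np
import xarray as xr

# the DataArray's name becomes the key under which it is stored
image = xr.DataArray(np.zeros((2, 3)), dims=("y", "x"), name="image")
ds = image.to_dataset()

print("image" in ds)      # True: accessible under the agreed default name
print(ds["image"].shape)  # (2, 3)

# with a user-supplied name, the same lookup would fail
other = xr.DataArray(np.zeros((2, 3)), dims=("y", "x"), name="raster").to_dataset()
print("image" in other)   # False
```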