
Possible problem with reading images triggered from Las X stage navigator #19

Open
DirkRemmers opened this issue Mar 8, 2021 · 15 comments


@DirkRemmers

Hi Nick,

First of all, thank you for all the work you have done on this awesome package - I love it! I use it every day for my image analysis, and it works exactly as it should. However, now that I am trying to automate my image acquisition, I have run into a major issue. I don't know whether the mistake originates in the Las X software or in readlif, but I thought it was worth reporting here.

For the automation of the image acquisition, I want to use the Las X stage navigator, which can automatically control the movement of the stage. The imaging protocol is barely different from the one I normally use; there is really only one difference: to use the stage navigator, I have to trigger the image acquisition from the stage navigator.

In the Las X software, the image is perfectly fine, and I have no problems visualizing it. However, when I load the image using readlif, something strange happens: it appears that the 2 channels I currently use fuse together, which makes this acquisition approach unusable for me.

I added 2 images to show the differences:
Image 1: a normal image (made without the stage navigator)
Screenshot from 2021-03-08 17-05-05

Image 2: an image made using the stage navigator
Screenshot from 2021-03-08 17-05-35

Using a very nice 3D viewer called napari, you can see that there is some organization in the way the channels are fused: the nuclei (small) appear to sit on top of the fat droplets (larger).
Screenshot from 2021-03-08 17-10-35

I would really appreciate it if you could take a look at this. Whenever you are able to, I'll send the example .lif file to you via Twitter again, just like last time!

Cheers,
Dirk

The script I used:

from PIL import Image
from readlif.reader import LifFile
import numpy as np
import napari
%gui qt5

# Define lif file location
RawData = LifFile('The_Lif_Example.lif')

# get a list of all images and a list of numbers to iterate over.
lif_image_list = [i for i in RawData.get_iter_image()]
Number_of_lif_images = len(lif_image_list)
Lif_Image_Number_List = list(range(0,Number_of_lif_images))

for i in Lif_Image_Number_List:
    RawData_Image = RawData.get_image(i)
    Image_Name = RawData_Image.name
    print(Image_Name)
    
    # Explore the image dimensions
    frame_list   = [i for i in RawData_Image.get_iter_t(c=0, z=0, m=0)]
    z_list       = [i for i in RawData_Image.get_iter_z(t=0, c=0, m=0)]
    channel_list = [i for i in RawData_Image.get_iter_c(t=0, z=0, m=0)]
    mosaictile_list = [i for i in RawData_Image.get_iter_m(t=0, z=0, c=0)]
    # As can be observed, these are the same for both the normal image and the image acquired with the stage navigator.
    #print(frame_list)
    #print(z_list)
    #print(channel_list)
    #print(mosaictile_list)
    #print()
    
    # get additional information:
    Info = RawData_Image.info
    # As can be observed, this also looks the same for both images
    #print(Info)
    #print()
    
    # Now let's work with the images:
    # Thresholding for every z-stack:
    Z_Number_List = list(range(0, len(z_list)))
    
    for Z_stackNr in Z_Number_List:
        # Collect the two channels of interest. (assumes that channel 0 == nucleus signal and channel 1 == fat signal)
        Nuc_Image_V1 = RawData_Image.get_frame(z=Z_stackNr, t=0, c=0, m=0)
        Fat_Image_V1 = RawData_Image.get_frame(z=Z_stackNr, t=0, c=1, m=0) 

        # Transform the images into workable numpy arrays (the images are already 8 bit so they don't need correction for this)  
        Nuc_img_array = np.uint8(np.array(Nuc_Image_V1))
        Fat_img_array = np.uint8(np.array(Fat_Image_V1))
        
        Nuc_Image = Image.fromarray(Nuc_img_array, 'L')
        Fat_Image = Image.fromarray(Fat_img_array, 'L')
        
        Nuc_Image = np.uint8(Nuc_Image)
        Fat_Image = np.uint8(Fat_Image)
        
        # Make a 3d numpy array of the image and collect in a variable.
        if Z_stackNr == 0:
            Nuc_Raw_Image_Array_3D = Nuc_Image
            Fat_Raw_Image_Array_3D = Fat_Image
        else:
            Nuc_Raw_Image_Array_3D = np.dstack((Nuc_Raw_Image_Array_3D, Nuc_Image))
            Fat_Raw_Image_Array_3D = np.dstack((Fat_Raw_Image_Array_3D, Fat_Image))
            
    # Now that we have made a 3D numpy array, let's visualize what we have:
    Dimension_Scale = abs(np.asarray(RawData_Image.info['scale'][0:3]))
    scale_1 = ((Dimension_Scale[0]/Dimension_Scale[0]),(Dimension_Scale[0]/Dimension_Scale[1]),(Dimension_Scale[0]/Dimension_Scale[2]))
    
    if i == 0:
        with napari.gui_qt():
            x = napari.Viewer(ndisplay=3)
            x.add_image(Nuc_Raw_Image_Array_3D, scale=scale_1, colormap="blue", blending='additive')
            x.add_image(Fat_Raw_Image_Array_3D, scale=scale_1, colormap="green", blending='additive')
    else:
        with napari.gui_qt():
            y = napari.Viewer(ndisplay=3)
            y.add_image(Nuc_Raw_Image_Array_3D, scale=scale_1, colormap="blue", blending='additive')
            y.add_image(Fat_Raw_Image_Array_3D, scale=scale_1, colormap="green", blending='additive')

@nimne
Contributor

nimne commented Mar 14, 2021

Hi Dirk,

Thanks for the detailed report! I've been meaning to learn napari for some other projects anyway, it looks extremely useful.

From your pictures, I can't quite tell what's going on - but it's possible that the data is being read in an incorrect order, which might be related to #14. If you send an example, I can take a look.

-Nick

@DirkRemmers
Author

Hi Nick,

Thank you for the response and the effort, it is highly appreciated.
I sent you an example .lif file containing the previously shown images.
If you want to include this file in the further development of the readlif package, you are free to do so.

Cheers,
Dirk

@DirkRemmers
Author

DirkRemmers commented Mar 24, 2021

Hi Nick,

I have been reading up on #14 over the last couple of days and had a go at fixing this particular issue. I've found that the data organization in the 2 images is indeed different. In the "normal" image, the data (the xy planes) is organized in the following way:

channel 0 (Z-stack 0) --> channel 0 (Z-stack 1) --> ... --> channel 0 (Z-stack n) --> channel 1 (Z-stack 0) --> channel 1 (Z-stack 1) --> ... --> channel 1 (Z-stack n).

In the image acquired by the navigator, the data was organized like this instead:
Z-stack 0 (channel 0) --> Z-stack 0 (channel 1) --> ... --> Z-stack n (channel 0) --> Z-stack n (channel 1)

Therefore, in the navigator image, the main separator is not the channel but the z-stack. Since I would like to start analyzing images acquired with the stage navigator as soon as possible, I made a local ReadLif_navigator package with a slightly changed get_frame function. I will use this for these specific navigator images until a fix is included in a new release.

This navigator package is changed slightly compared to the current ReadLif package in the following manner:
In the "get_frame" function, I changed this:

z_offset = self.channels
z_requested = z_offset * z
c_requested = c

into this:

z_offset = self.channels  # not used anymore
z_requested = z

c_offset = self.nz
c_requested = c * c_offset

This adapted method takes the previously discussed change in data organization into account and solves the problem.
Since #14 reaches beyond these 2 dimensions of channels and z-stacks, I don't think this is a major contribution to a solution there, but I still wanted to share it because it shows that not all images have the same data organization. This organization therefore first needs to be detected, and (at least) the get_frame function needs to be made flexible enough to handle the different layouts (see the sketch below).
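To make the difference concrete, here is a simplified sketch (not the actual readlif code, and ignoring the t and m dimensions) of how the plane offset could be computed for each layout, using the same z and c terms as the snippets above:

# Simplified sketch, not readlif's actual implementation: combine the z and c
# terms from the two snippets above into a single plane offset (in units of
# xy planes). nz = number of z-planes, nc = number of channels.
def plane_offset(z, c, nz, nc, navigator=False):
    if navigator:
        # navigator layout: z_requested = z, c_requested = c * self.nz
        return z + c * nz
    # original assumption: z_requested = self.channels * z, c_requested = c
    return nc * z + c

# e.g. with nz = 26 and nc = 2:
# plane_offset(z=1, c=1, nz=26, nc=2)                 -> 3
# plane_offset(z=1, c=1, nz=26, nc=2, navigator=True) -> 27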

If I can help by supplying more data or something else please feel free to contact me.

Thanks for all the effort so far already.

Cheers,
Dirk

@nimne
Contributor

nimne commented Sep 30, 2021

This bug has really eluded me for a while! I think I finally have some sense of what is different between these two captures - but I'm not sure how to fix this just yet. The nature of the issue suggests a bigger problem with some assumptions about the organizational structure of lif files in the first place.

In the 'normal capture' the relevant sections in the XML are (added line breaks for clarity, 'z' is dim 3):

<ChannelDescription DataType="0" ChannelTag="0" Resolution="8" NameOfMeasuredQuantity=""
Min="0.000000e+00" Max="2.550000e+02" Unit="" LUTName="Green"
IsLUTInverted="0" BytesInc="1048576" BitInc="0">
...
<DimensionDescription DimID="3" NumberOfElements="26" Origin="1.462691e-03"
Length="9.997637e-05" Unit="m" BitInc="0" BytesInc="2097152"/>

'navigator capture'

<ChannelDescription DataType="0" ChannelTag="0" Resolution="8" NameOfMeasuredQuantity=""
Min="0.000000e+00" Max="2.550000e+02" Unit="" LUTName="Green"
IsLUTInverted="0" BytesInc="32505856" BitInc="0">
...
<DimensionDescription DimID="3" NumberOfElements="31" Origin="0.000000e+00"
Length="1.199716e-04" Unit="m" BitInc="0" BytesInc="1048576"/>

In the normal capture, BytesInc in Z is greater than BytesInc in the channel. This is reversed in the navigator capture. I'll test a quick if/then fix for now, but I don't think that is the best solution.
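For reference, here is a minimal stand-alone sketch of that comparison (not the code in readlif itself; the XML structure is simplified, and in the real file these elements sit under Data/Image/ImageDescription). The values are the ones from the navigator capture above:

import xml.etree.ElementTree as ET

# Simplified stand-in for the metadata quoted above.
xml_snippet = """
<ImageDescription>
  <Channels>
    <ChannelDescription ChannelTag="0" Resolution="8" BytesInc="32505856"/>
  </Channels>
  <Dimensions>
    <DimensionDescription DimID="3" NumberOfElements="31" BytesInc="1048576"/>
  </Dimensions>
</ImageDescription>
"""

root = ET.fromstring(xml_snippet)
bytes_inc_channel = int(root.find("./Channels/ChannelDescription").attrib["BytesInc"])
bytes_inc_z = int(root.find("./Dimensions/DimensionDescription").attrib["BytesInc"])  # DimID 3 is z

# Normal capture: z stride > channel stride; navigator capture: the reverse.
channel_larger_than_z = bytes_inc_channel > bytes_inc_z
print(channel_larger_than_z)  # True for the navigator values above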

@nimne
Contributor

nimne commented Sep 30, 2021

New branch with a fix: https://github.com/nimne/readlif/tree/channel_order_navigator

As I'm working on this, I suspect that this fix will solve nearly all of the common use cases. I would be curious if additional dimensions would cause an issue. Would it be possible to make a mosaic scan in the stage navigator and see if this fix still works?
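If it helps, something rough like the following is all I would need to sanity-check the mosaic file (the file name is hypothetical, and it only uses readlif calls already shown in this thread): it reads the middle z-plane for every tile/channel pair and prints the mean intensity, which makes swapped channels or tiles easy to spot.

import numpy as np
from readlif.reader import LifFile

lif = LifFile('navigator_mosaic_example.lif')  # hypothetical test file name
img = lif.get_image(0)

# Count the available z-planes, channels and mosaic tiles.
n_z = len([p for p in img.get_iter_z(t=0, c=0, m=0)])
n_c = len([p for p in img.get_iter_c(t=0, z=0, m=0)])
n_m = len([p for p in img.get_iter_m(t=0, z=0, c=0)])

for m in range(n_m):
    for c in range(n_c):
        plane = np.asarray(img.get_frame(z=n_z // 2, t=0, c=c, m=m))
        print(f"tile {m}, channel {c}: mean intensity {plane.mean():.1f}")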

@DirkRemmers
Author

Yes, that is something I can do. I will make a small example image next week and test it out. I'll also link the file in this discussion so you can test it as well.
Thanks!

@DirkRemmers
Author

DirkRemmers commented Oct 5, 2021

Hey I'm back with the test file.
On my side, all seems to be working perfectly, no confusion of channels or mosaic positions.

You can find the file here: https://we.tl/t-3ipKOus6RU
As usual, let me know if you need anything else / if the link expires!

When do you think this fix can be merged with the master branch?

Thank you for the effort, much appreciated!

@nimne
Contributor

nimne commented Oct 8, 2021

This fix is already in the 0.6.4 release! The fix initially caused an unfortunate bug in 0.6.3, so I really appreciate the extra test files.

Closing this for now as fixed.

@nimne nimne closed this as completed Oct 8, 2021
@DirkRemmers
Author

The 0.6.4 update is working great for my ongoing projects already, so thank you!

I just started exploring another project that requires the analysis of 4 channels, and it led me to a problem similar to the one in this issue: the channels and z-stacks get mixed up. This was the first time I had 4-channel images to check the package on, which is why I only found this out after the release of 0.6.4 - sorry for the inconvenience!

I think the recent changes to the readlif package broke something that previously worked.
To confirm this, I tested a backed-up version of the readlif package that I had modified (see the description in #31) to work with the new LasX version, and I indeed got the desired channel images. To be clear, I don't think the issue is compatibility with LasX, but rather something involved in finding the channel offsets.

I added that custom readlif version and an example image here, but let me know if you need something else!

A quick script to visualize what I'm describing:

import numpy as np
import napari

# choose which readlif version to test
from readlif.reader import LifFile  # using v0.6.4
# from readlif_fix.reader import LifFile  # using the custom readlif version

# things for you to customize
img_nr = 0  # change number if you want another image
path = 'path/to/The_Example.lif'  # change to the path of your .lif file
raw_data = LifFile(path)

# load the image
raw_image = raw_data.get_image(img_nr)
z_nr_list = list(range(0,len([i for i in raw_image.get_iter_z()])))
c_nr_list = list(range(0,len([i for i in raw_image.get_iter_c()])))
m_nr_list = list(range(0,len([i for i in raw_image.get_iter_m()])))

# create a 3D numpy array per channel
channel_list = []
for c_nr in c_nr_list:
    for z_nr in z_nr_list:
        if z_nr == 0:
            layers = np.asarray(raw_image.get_frame(z = z_nr, t = 0, c = c_nr, m = 0))
        else:
            layers = np.dstack((layers, np.asarray(raw_image.get_frame(z = z_nr, t = 0, c = c_nr, m = 0))))
    channel_list.append(layers)

# view the image
color_list = ['blue', 'green', 'red', 'gray']
preview = napari.Viewer()
for i, col in zip(channel_list, color_list):
    preview.add_image(i, scale=(1, 1, 14), colormap=col, blending='additive')
preview.dims.ndisplay = 3
preview.dims.order = (2,1,0)
napari.run()

As always, thank you a lot in advance. The work is really appreciated!

@nimne
Contributor

nimne commented Oct 15, 2021

Thanks for helping me check these updates and for the example! I'll take a look.

@DirkRemmers
Author

Hi Nick,

I was wondering whether it would help if I provided a couple more files to track down the cause of this issue. Please let me know what you need and I'll make some example files for you to use.

Kind regards,
Dirk

@nimne
Contributor

nimne commented Feb 24, 2022

Hi Dirk,

Thanks for the nudge - work has been a little overwhelming lately, but I think I will be able to carve out time to fix this soon. I think I have everything that I need to fix this already!

@DirkRemmers
Author

Hi Nick,

Thanks for the update, the work is much appreciated!

@DirkRemmers
Author

Hi Nick, how are you doing?

I was wondering if you could maybe share an update on this?

Cheers,
Dirk

@jboulanger

Hello,

I have the same issue here. The channel (C) and slice (Z) offsets are swapped. I can work around it by manually setting channel_as_second_dim to False, for example:

img_0 = f.get_image(0)
img_0.channel_as_second_dim = False

The error is here:

                if 3 in dims_dict.keys() and len(dims) > 2:
                    channels = item.findall("./Data/Image/ImageDescription/"
                                            "Channels/ChannelDescription")
                    channel_max = sum([int(c.attrib["BytesInc"]) for c in channels])

                    bytes_inc_channel = channel_max
                    cytes_inc_z = int(dims[2].attrib["BytesInc"])

                    channel_as_second_dim = bytes_inc_channel > cytes_inc_z

                else:
                    channel_as_second_dim = False

The byte increment for all channels together is not simply the sum of the individual increments. That happens to work with two channels, but not in general. One way to fix it would be to estimate the size occupied by all the channels as a multiple of the size of the first offset.

                    if len(channel_bytes) > 1:
                        bytes_inc_channel = len(channel_bytes) * channel_bytes[1]
                    else:
                        bytes_inc_channel = channel_bytes[0]

Unless there is a better way to fix it.
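To make the arithmetic concrete, here is a toy illustration with made-up numbers (assuming, as the fix above implies, that the per-channel BytesInc values are cumulative offsets): four interleaved 8-bit 1024x1024 channels, so the channel offsets are 0, 1, 2 and 3 planes and the z stride is 4 planes.

# Toy illustration with hypothetical values, not taken from a real file.
plane_bytes = 1048576                                  # one 1024x1024 8-bit plane
channel_bytes = [0, 1 * plane_bytes, 2 * plane_bytes, 3 * plane_bytes]
bytes_inc_z = 4 * plane_bytes                          # one full set of channels per z step

summed = sum(channel_bytes)                            # 6 planes
guessed = len(channel_bytes) * channel_bytes[1]        # 4 planes

print(summed > bytes_inc_z)    # True  -> wrongly sets channel_as_second_dim
print(guessed > bytes_inc_z)   # False -> matches the interleaved layout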
