Add support for ultrasound data #8172
Hi @KumoLiu, thanks for the proposal. I think this task lacks definition right now, as post-processed ultrasound data in DICOM format is not that special. If we are trying to work with raw ultrasound data (e.g. from a phased array or curvilinear array), then it is a very different story. Maybe we need some ultrasound experts to suggest what the community may need. cc @ericspod @aylward @Nic-Ma Thanks!
Hi @mingxin-zheng I think we do need some specialist opinions here for sure! I can imagine needing support for loading/saving new data formats, but also transforms for working specifically with US data. These would cover common processing operations, but also generating realistic noise or other signal degradation specific to US for data augmentation.
Hi @ericspod, it's a good point to bring up noise augmentation! As far as I know there are at least two special kinds: speckle noise and depth-dependent attenuation from imaging depth. On the other hand, what specific data formats have you seen with ultrasound systems besides DICOM?
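To make those two noise types concrete, here is a minimal numpy sketch of both augmentations. The function names, the multiplicative speckle model, and the exponential attenuation model are illustrative assumptions, not an existing MONAI API:

```python
import numpy as np

def add_speckle(image, sigma=0.1, rng=None):
    """Multiplicative speckle model: out = image * (1 + sigma * N(0, 1))."""
    rng = np.random.default_rng(rng)
    noise = rng.normal(0.0, sigma, size=image.shape)
    return image * (1.0 + noise)

def depth_attenuation(image, alpha=0.5):
    """Exponential depth-dependent attenuation along axis 0 (depth)."""
    depth = np.linspace(0.0, 1.0, image.shape[0])[:, None]  # normalised depth
    return image * np.exp(-alpha * depth)
```

Real US speckle is better modeled with Rayleigh statistics on the envelope, but a multiplicative Gaussian is a common cheap stand-in for augmentation.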
I'm not well versed in common US practices, so I can't really comment on what formats there would be. We will discuss with colleagues involved in US research and see what requirements they come up with.
Coming from an ultrasound pre-clinical imaging group: we use PlusToolkit (main source repo: https://github.com/PlusToolkit/PlusLib) along with 3D Slicer (main source repo: https://github.com/Slicer/Slicer) to collect ultrasound data from various ultrasound hardware and save it to MetaIO and NRRD formats. In this case we are using PlusToolkit as an open-source solution for ultrasound streaming and recording. PlusToolkit's PlusServer is the primary tool, but there are some other ultrasound processing tools (see http://perk-software.cs.queensu.ca/plus/doc/nightly/user/Tools.html). The NA-MIC project weeks have also had ultrasound AI projects, such as https://projectweek.na-mic.org/PW39_2023_Montreal/Projects/LiveTrackedUltrasoundProcessingWithPytorch/, which will lead you down links to things such as https://github.com/SlicerIGT/aigt/tree/master/UltrasoundSegmentation.
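As a toy illustration of the raw-encoded NRRD layout that tools like PlusToolkit and Slicer write, here is a minimal reader sketch. It is not a replacement for MONAI's `LoadImage`, pynrrd, or ITK: it only handles single-file, raw-encoded headers with numpy-style type names, and the function name is hypothetical:

```python
import numpy as np

def read_nrrd_raw(path):
    """Toy reader for a single-file, raw-encoded NRRD (a sketch only).

    Assumes numpy-style type names (e.g. 'uint8', 'int16') in the header;
    real pipelines should use MONAI's LoadImage, pynrrd, or ITK instead.
    """
    with open(path, "rb") as f:
        if not f.readline().startswith(b"NRRD"):
            raise ValueError("not a NRRD file")
        header = {}
        while True:
            raw_line = f.readline()
            if raw_line in (b"", b"\n"):  # header ends at a blank line
                break
            line = raw_line.decode("ascii").strip()
            if line.startswith("#"):
                continue
            key, _, value = line.partition(":")
            header[key.strip()] = value.strip()
        if header.get("encoding") != "raw":
            raise NotImplementedError("only raw encoding handled here")
        sizes = [int(s) for s in header["sizes"].split()]
        data = np.frombuffer(f.read(), dtype=np.dtype(header["type"]))
        # NRRD lists the fastest-varying axis first; numpy wants it last
        return data.reshape(sizes[::-1]), header
```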
Thanks @jamesobutler for the great input! Today I met @bcjiang, who also works in this domain. I wonder if you can share some of your experience too.
Hi @mingxin-zheng, we typically obtain the raw ultrasound channel data from Verasonics machines. They have a MATLAB interface, and the channel data is saved as a multi-dimensional array whose dimensions are 2D frame index, A-line index, and signal samples. Each signal sample is a 14-bit signed value (-8192 to 8191). We can save these data to any file type and load them into Python.
Thanks @bcjiang! The raw data is interesting. Does the .mat file also include metadata to convert the raw data to a DICOM-level image (e.g. probe aperture information for scan conversion), or is that a separate piece of data?
Hi @mingxin-zheng, yes, there is also a header structure array in the MATLAB workspace that we can export; it is created by a self-defined setup script before running the Verasonics data acquisition. The header should contain all the probe information, and we can choose which parts to export and specify the export file format.
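Once such exported channel data lands in a numpy array, two typical first steps are normalising the 14-bit samples (the -8192..8191 range quoted above) and envelope detection. The sketch below uses a standard FFT-based Hilbert transform; the function names are hypothetical, not a vendor or MONAI API:

```python
import numpy as np

def normalize_rf(channel_data):
    """Scale 14-bit signed RF samples (-8192..8191) to float32 in [-1, 1)."""
    return np.asarray(channel_data, dtype=np.float32) / 8192.0

def envelope(rf, axis=-1):
    """Analytic-signal envelope via an FFT-based Hilbert transform (numpy only)."""
    n = rf.shape[axis]
    spectrum = np.fft.fft(rf, axis=axis)
    # Weights that zero negative frequencies and double positive ones
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    shape = [1] * rf.ndim
    shape[axis] = n
    analytic = np.fft.ifft(spectrum * h.reshape(shape), axis=axis)
    return np.abs(analytic)
```

Log compression of the envelope would then give a B-mode-like image; that step is omitted here.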
Thanks @bcjiang for the explanation! Now I'm imagining whether we need to implement such a conversion for ultrasound data; this could be one feature for MONAI. [Illustration omitted: B is the raw RF data, and A is what we see after scan conversion and interpolation.]
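A nearest-neighbour sketch of that scan-conversion step, mapping a (depth x angle) grid of A-line samples onto a Cartesian sector image. The geometry (normalised depth, symmetric field of view, apex at the top-centre) and all names are simplifying assumptions; real systems use probe-specific geometry and interpolation:

```python
import numpy as np

def scan_convert(lines, fov_deg=60.0, out_size=128):
    """Nearest-neighbour scan conversion for a phased/curvilinear sweep.

    `lines` is (n_depth, n_angles): sample index along the beam x beam angle.
    Returns a square Cartesian image; pixels outside the sector stay 0.
    """
    n_depth, n_angles = lines.shape
    half = np.deg2rad(fov_deg) / 2.0
    # Cartesian grid: x lateral in [-sin(half), sin(half)], z depth in (0, 1]
    x = np.linspace(-np.sin(half), np.sin(half), out_size)
    z = np.linspace(1e-6, 1.0, out_size)
    X, Z = np.meshgrid(x, z)               # rows index depth, cols lateral
    r = np.sqrt(X**2 + Z**2)               # radial distance (normalised)
    theta = np.arctan2(X, Z)               # beam angle from the probe axis
    valid = (r <= 1.0) & (np.abs(theta) <= half)
    ri = np.clip((r * (n_depth - 1)).round().astype(int), 0, n_depth - 1)
    ti = np.clip(((theta + half) / (2 * half) * (n_angles - 1)).round().astype(int),
                 0, n_angles - 1)
    out = np.zeros((out_size, out_size), dtype=lines.dtype)
    out[valid] = lines[ri[valid], ti[valid]]
    return out
```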
Is MONAI considering tasks that convert RF data to scan-converted images? Or taking scan-converted ultrasound images from hardware devices and then processing them for various labeling tasks? Oftentimes, ultrasound hardware provides both the scan-converted images and the raw RF data.
I think this is still open to discussion, @jamesobutler. It is still a research topic how much RF data can provide beyond scan-converted images. I personally think MONAI could add features to accelerate such explorations, but which features are needed remains undefined at the moment.
Ok. Yes, ultrasound scan conversion can differ depending on the type of probe: linear array, curved array, phased array, matrix array, single-element, and annular array probes can all produce different-looking scan-converted images. We now use a high-frequency linear array for most of our work, with techniques such as spatial compounding of multiple angles to form a single processed image, or multiple focal zones that acquire multiple frames which are processed into a single image. When trying to get good input images for MONAI models, we are often managing how imaging depth affects attenuation, how TGC affects signal intensity along that depth, shadowing caused by interfaces such as bone or air, and speckle adding varying intensity values across the image.
I vote for supporting both RF and B-mode (including B-mode video). Regarding special processing for ultrasound: in addition to creating B-mode from RF (transducer lines), it is often useful to pre-process the data from B-mode images back into transducer lines (not RF, but the reconstructed signal along a transducer line) when a convex probe is used. There are a few reasons for this:
1. Most B-mode images from convex probes are nearly 50% blank / irrelevant.
2. Image information/detail near a convex transducer is spatially denser than the information deeper into the patient (e.g., near the bottom of the image).
3. The impact of signal attenuation (e.g., shadowing) follows transducer lines, whereas in B-mode it spreads out to the side depending on where it falls in the image.
All of these mean that when rectangular convolutional kernels are used, it is often "better" to apply them in transducer-line space rather than in B-mode images: transducer-line space has no blank space, has more uniform information density, and attenuation/artifacts have a vertical (rectangular, not angular) impact. I used these transforms for several projects with DARPA and DoD, with transducer-line space greatly reducing the amount of training data needed compared to doing things in B-mode space. I've also done a fair amount of Quantitative Ultrasound (QUS) work in which the RF is processed directly instead of being reconstructed into B-mode images. QUS is going to be the future of ultrasound, IMO :)
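The "B-mode back into transducer lines" pre-processing described above is essentially inverse scan conversion. A minimal nearest-neighbour sketch, assuming the probe apex sits at the top-centre of the sector image and depth is normalised to the bottom row (names and geometry are illustrative assumptions):

```python
import numpy as np

def to_transducer_lines(bmode, fov_deg=60.0, n_depth=128, n_angles=128):
    """Resample a sector B-mode image onto a (depth x angle) grid.

    For each (range r, beam angle theta) sample, look up the Cartesian
    pixel at (z, x) = (r*cos(theta), r*sin(theta)); nearest-neighbour only.
    """
    h, w = bmode.shape
    half = np.deg2rad(fov_deg) / 2.0
    r = np.linspace(1e-6, 1.0, n_depth)[:, None]          # normalised range
    theta = np.linspace(-half, half, n_angles)[None, :]   # beam angle
    z = r * np.cos(theta)                                 # axial position
    x = r * np.sin(theta)                                 # lateral position
    rows = np.clip((z * (h - 1)).round().astype(int), 0, h - 1)
    cols = np.clip(((x / np.sin(half) + 1) / 2 * (w - 1)).round().astype(int),
                   0, w - 1)
    return bmode[rows, cols]
```

The output grid has no blank corners and uniform sampling along each beam, which is exactly the property that makes rectangular kernels behave better in this space.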
I totally agree with @aylward's comment regarding processing from B-mode images back into transducer lines, because that's exactly what I heard a couple of years ago. Moreover, those works simulated (1) TGC gain-level changes and (2) imaging-depth changes for curvilinear transducers as image augmentation. I recall that those augmentations made the DL algorithms more robust to transducer settings at real-time deployment.
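A sketch of the TGC-gain augmentation mentioned here: perturb intensity with a random piecewise-linear gain curve (in dB) over depth. The knot count, gain range, and function name are arbitrary illustrative choices, not a published recipe:

```python
import numpy as np

def random_tgc(image, n_knots=5, max_gain_db=6.0, rng=None):
    """Random TGC perturbation: piecewise-linear gain in dB along depth (axis 0)."""
    rng = np.random.default_rng(rng)
    knots_db = rng.uniform(-max_gain_db, max_gain_db, size=n_knots)
    depth = np.linspace(0.0, 1.0, image.shape[0])
    gain_db = np.interp(depth, np.linspace(0.0, 1.0, n_knots), knots_db)
    gain = 10.0 ** (gain_db / 20.0)      # dB -> linear amplitude
    return image * gain[:, None]
```

Imaging-depth augmentation could be layered on top by cropping/resampling rows before applying the gain, but that is left out of this sketch.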
To add a reference to implementations: conversion between polar-coordinate and rectangular images can also be accomplished using https://github.com/KitwareMedical/ITKUltrasound/blob/master/include/itkCurvilinearArraySpecialCoordinatesImage.h and
In the weekly MONAI meeting, tracked-ultrasound use cases were mentioned, which may relate to the point-transform features developed in MONAI 1.4.
@jamesobutler and others. We're thinking of creating a MONAI working group on ultrasound. Please send me an email if you're interested. saylward at nvidia dot com |
Unlike other imaging modalities like MRI or CT, ultrasound images have distinct characteristics, including high noise levels, unique texture patterns, and variable contrast depending on the specific tissue being examined. Processing ultrasound data requires specialized algorithms, such as speckle reduction and boundary enhancement, which differ significantly from standard image processing techniques.
To integrate ultrasound data into our existing pipelines, we need to add flexible support options. These might include handling raw ultrasound data, incorporating specialized preprocessing steps, or enabling data streaming. Supporting raw data could open possibilities for real-time analysis, while collaboration with platforms like Holoscan could further enhance our capabilities in handling and processing ultrasound imaging data effectively.