
Add support for ultrasound data #8172

Open · KumoLiu opened this issue Oct 23, 2024 · 19 comments

@KumoLiu (Contributor) commented Oct 23, 2024

Unlike other imaging modalities like MRI or CT, ultrasound images have distinct characteristics, including high noise levels, unique texture patterns, and variable contrast depending on the specific tissue being examined. Processing ultrasound data requires specialized algorithms, such as speckle reduction and boundary enhancement, which differ significantly from standard image processing techniques.

To integrate ultrasound data into our existing pipelines, we need to add flexible support options. These might include handling raw ultrasound data, incorporating specialized preprocessing steps, or enabling data streaming. Supporting raw data could open possibilities for real-time analysis, while collaboration with platforms like Holoscan could further enhance our capabilities in handling and processing ultrasound imaging data effectively.

@mingxin-zheng (Contributor) commented:

Hi @KumoLiu, thanks for the proposal. I think this task lacks some definition right now, as post-processed ultrasound data in DICOM format is not that special. If we are trying to work with raw ultrasound data (e.g. phased-array or curvilinear-array data), then it is a very different story.

Maybe we need some ultrasound experts to suggest what the community may need. cc @ericspod @aylward @Nic-Ma Thanks!

@ericspod (Member) commented:

Hi @mingxin-zheng, I think we do need some specialist opinions here for sure! I can imagine needing support for loading/saving new data formats, but also transforms specifically for working with US data. These would cover common processing operations, as well as generating realistic noise or other US-specific signal degradation for data augmentation.
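
For illustration, a minimal sketch of such an augmentation transform following MONAI's `RandomizableTransform` pattern might look like this (`RandSpeckleNoise` is a hypothetical name, and the multiplicative Rayleigh field is only a rough approximation of real speckle):

```python
import numpy as np
from monai.transforms import RandomizableTransform


class RandSpeckleNoise(RandomizableTransform):
    """Hypothetical transform: multiplicative speckle via a Rayleigh field.

    A sketch only; real speckle statistics depend on the imaging system,
    and a physics-based simulator may be preferable for serious work.
    """

    def __init__(self, prob: float = 0.5, scale: float = 0.15) -> None:
        super().__init__(prob)
        self.scale = scale  # controls speckle strength

    def __call__(self, img: np.ndarray) -> np.ndarray:
        self.randomize(None)  # draw whether to apply, using self.R
        if not self._do_transform:
            return img
        # Rayleigh-distributed multiplicative field, recentered around 1.0
        field = self.R.rayleigh(scale=self.scale, size=img.shape)
        return img * (1.0 + field - field.mean())
```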

@mingxin-zheng (Contributor) commented:

Hi @ericspod, it's a good point to bring up noise augmentation! As far as I know there are at least two US-specific kinds: speckle noise and depth-dependent noise.
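
For the depth-dependent kind, a minimal sketch could look like the hypothetical helper below (it assumes depth runs along axis 0; a realistic model would be more involved):

```python
import numpy as np

def add_depth_noise(img: np.ndarray, max_sigma: float = 0.1, rng=None) -> np.ndarray:
    """Sketch: additive Gaussian noise whose std grows linearly with depth.

    Assumes axis 0 is the axial (depth) direction; a realistic model would
    tie the noise level to attenuation, TGC, and transducer frequency.
    """
    rng = rng or np.random.default_rng()
    sigma = np.linspace(0.0, max_sigma, img.shape[0])
    sigma = sigma.reshape(-1, *([1] * (img.ndim - 1)))  # broadcast over lateral axes
    return img + rng.normal(0.0, 1.0, img.shape) * sigma
```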

On the other hand, what specific data formats have you seen with ultrasound systems besides DICOM?

@ericspod (Member) commented:

I'm not well versed in common US practices, so I can't really comment on what formats there would be. We will discuss with colleagues involved in US research and see what requirements they come up with.

@jamesobutler (Contributor) commented Oct 30, 2024

Coming from an ultrasound pre-clinical imaging group, we use PlusToolkit (main source repo: https://github.com/PlusToolkit/PlusLib) along with 3D Slicer (main source repo: https://github.com/Slicer/Slicer) to collect ultrasound data from various ultrasound hardware and save it to MetaIO and NRRD formats. In this case we are using PlusToolkit as an open-source solution for ultrasound streaming and recording. PlusToolkit's PlusServer is the primary tool, but there are some other ultrasound processing tools (see http://perk-software.cs.queensu.ca/plus/doc/nightly/user/Tools.html).

The NA-MIC project weeks have also had ultrasound AI projects such as https://projectweek.na-mic.org/PW39_2023_Montreal/Projects/LiveTrackedUltrasoundProcessingWithPytorch/, which links to resources such as https://github.com/SlicerIGT/aigt/tree/master/UltrasoundSegmentation.

@mingxin-zheng (Contributor) commented:

Thanks @jamesobutler for the great input!

Today I met @bcjiang, who also works in this domain. I wonder if you can share some of your experience too.

@bcjiang commented Oct 30, 2024

Hi @mingxin-zheng, we typically obtain the ultrasound raw channel data from Verasonics machines. They have a MATLAB interface, and the channel data is saved as a multi-dimensional array indexed by 2D frame, A-line, and signal sample. Each sample is a 14-bit signed value (-8192 to 8191). We can save these data to any file type and load them into Python.
I think prior DL work leveraging raw-channel US data mostly aimed at beamforming/image-quality improvement, so data could be acquired easily by continuously scanning different subjects. I didn't see much augmentation applied in those works, but I'm also interested in learning about augmentation methods for raw data in classification/segmentation scenarios.
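
A minimal loading sketch could look like the following. The variable name "RcvData" and the (frame, A-line, sample) layout are assumptions about the export script, not a fixed convention:

```python
import numpy as np
from scipy.io import loadmat

# Sketch: load exported channel data. MATLAB v7.3 files need h5py
# instead of scipy.io.loadmat; adjust key names to your own export.
mat = loadmat("channel_data.mat")
rcv = np.asarray(mat["RcvData"], dtype=np.float32)

# Samples are 14-bit signed (-8192..8191); normalize to [-1, 1] for training.
rcv /= 8192.0
print(rcv.shape)  # e.g. (n_frames, n_lines, n_samples)
```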

@mingxin-zheng (Contributor) commented Oct 31, 2024

Thanks @bcjiang! The raw data is interesting. Does the .mat file also include metadata to convert the raw data to a DICOM-level image (e.g. probe aperture information for scan conversion), or is that a separate piece of data?

@bcjiang commented Oct 31, 2024

Hi @mingxin-zheng, yes, there is also a header structure array in the MATLAB workspace that we can export; it is created by a self-defined setup script before running the Verasonics data acquisition. The header should contain all probe information, and we can choose which parts to export and specify the export file format.

@mingxin-zheng (Contributor) commented:

Thanks @bcjiang for the explanation!

Now I'm wondering whether we need to implement such a conversion for ultrasound data; here is one thing MONAI's MetaTensor could improve:
Typically, the MetaTensor class contains an affine matrix that represents the spatial transform applied to a 2D/3D image. But ultrasound's scan conversion cannot be represented by a single affine operation: each data line has its own spatial transform (e.g. its steering angle) when it is interpolated onto an orthogonal raster.

Here is an illustration of this process; note that B is the raw RF data and A is what we see after scan conversion and interpolation.

[Figure: back-scan conversion. (a) Scheme of the original sector B-mode image; (b) the same image after back-scan conversion.]
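
For concreteness, here is a minimal scan-conversion sketch using SciPy. All geometry parameters are illustrative; the point is that each output pixel needs its own nonlinear (line, sample) lookup rather than one shared affine:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def scan_convert(lines: np.ndarray, angles: np.ndarray, r0: float, dr: float,
                 out_shape=(512, 512), pixel_size: float = 0.2) -> np.ndarray:
    """Sketch: sector scan conversion of (n_lines, n_samples) data to a raster.

    `angles` holds each line's steering angle in radians (sorted ascending);
    r0 and dr are the range offset and sample spacing in mm. The source
    coordinate of every output pixel depends nonlinearly on (x, z), which is
    exactly why a single affine matrix cannot express this mapping.
    """
    n_lines, n_samples = lines.shape
    h, w = out_shape
    # Cartesian grid in mm, apex at (x=0, z=0), z pointing into the body
    x = (np.arange(w) - w / 2) * pixel_size
    z = np.arange(h) * pixel_size
    xx, zz = np.meshgrid(x, z)
    r = np.hypot(xx, zz)        # radial distance of each pixel
    theta = np.arctan2(xx, zz)  # angle from the probe axis
    # Fractional (line, sample) indices for each output pixel
    line_idx = np.interp(theta, angles, np.arange(n_lines), left=np.nan, right=np.nan)
    samp_idx = (r - r0) / dr
    outside = np.isnan(line_idx)
    out = map_coordinates(lines, [np.nan_to_num(line_idx), samp_idx], order=1, cval=0.0)
    out[outside] = 0.0
    return out
```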

@jamesobutler (Contributor) commented:

Is MONAI considering tasks associated with converting RF to scan-converted images? Or taking scan-converted images from ultrasound hardware and then processing them for various labeling tasks? Ultrasound hardware often provides both the scan-converted images and the raw RF data.

@mingxin-zheng (Contributor) commented:

I think this is still open for discussion, @jamesobutler. How much RF data can provide beyond scan-converted images is still a research topic. I personally think MONAI can consider features to accelerate such explorations, but which features are needed remains undefined at the moment.

@jamesobutler (Contributor) commented Nov 1, 2024

Ok. Yes, ultrasound scan conversion can differ depending on the type of probe: linear array, curved array, phased array, matrix array, single-element, or annular array probes all produce different-looking scan-converted images. We now use a high-frequency linear array for most of our work, with techniques such as spatial compounding of multiple angles to form a single processed image, or multiple focal zones that acquire multiple frames and process them into a single image.

With ultrasound, when trying to get good input images for MONAI models, we are often managing how imaging depth affects attenuation, how TGC affects signal intensity along that depth, shadowing caused by interfaces such as bone or air, and speckle adding varying intensity values across the image.

@aylward (Collaborator) commented Nov 1, 2024

I vote for supporting both RF and B-mode (including B-mode video).

Regarding special processing for ultrasound: in addition to creating B-mode from RF (transducer lines), it is often useful to pre-process the data from B-mode images back into transducer lines (not RF, but the reconstructed signal along a transducer line) when a convex probe is used. There are a few reasons for this:

1. Most B-mode images from convex probes are nearly 50% blank/irrelevant.
2. Image information/detail near a convex transducer is spatially denser than the information density deeper into the patient (e.g., near the bottom of the image).
3. The impact of signal attenuation (e.g., shadowing) follows transducer lines, whereas in B-mode it spreads out to the side depending on where it falls in the image.

All of these features mean that when rectangular convolutional kernels are used, it is often "better" to apply them in transducer-line space rather than in B-mode images: transducer-line space has no blank space, has more uniform information density, and attenuation/artifacts have a vertical (rectangular, not angular) impact.
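
As a rough illustration of this B-mode-to-transducer-line resampling, a sketch might look like the following. The apex position and angular span here are assumptions that would normally come from probe geometry or DICOM metadata:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_transducer_lines(bmode: np.ndarray, apex_rc, span_deg: float = 60.0,
                        n_lines: int = 128, n_samples: int = 512) -> np.ndarray:
    """Sketch: resample a sector B-mode image onto an (n_lines, n_samples) fan.

    apex_rc is the (row, col) pixel position of the virtual apex; span_deg is
    the total angular span of the sector.
    """
    max_r = bmode.shape[0] - apex_rc[0]  # deepest sample still inside the image
    thetas = np.deg2rad(np.linspace(-span_deg / 2, span_deg / 2, n_lines))
    radii = np.linspace(0.0, max_r, n_samples)
    tt, rr = np.meshgrid(thetas, radii, indexing="ij")  # (n_lines, n_samples)
    rows = apex_rc[0] + rr * np.cos(tt)
    cols = apex_rc[1] + rr * np.sin(tt)
    return map_coordinates(bmode, [rows, cols], order=1, cval=0.0)
```

Rectangular kernels applied on the resulting (line, sample) grid then act along physically meaningful axial/lateral directions.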

I used these transforms for several projects with DARPA and DoD; using transducer-line space greatly reduced the amount of training data needed compared to working in B-mode space.

I've also done a fair amount of Quantitative US (QUS) work, in which the RF is processed directly instead of being reconstructed into B-mode images. QUS is going to be the future of ultrasound, IMO :)

@mingxin-zheng (Contributor) commented:

I totally agree with @aylward's comment regarding processing from B-mode images back into transducer lines, because that's exactly what I heard a couple of years ago. Moreover, those works simulated (1) TGC gain-level changes and (2) imaging-depth changes for curvilinear transducers as image augmentation. I remember those augmentations made the DL algorithm more robust to transducer settings in real-time deployment.
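
One way such a TGC augmentation could be sketched (a hypothetical helper, assuming depth runs along axis 0):

```python
import numpy as np

def rand_tgc_gain(img: np.ndarray, n_knots: int = 8, max_db: float = 6.0,
                  rng=None) -> np.ndarray:
    """Sketch: simulate a TGC setting change with a random smooth gain curve.

    Random per-knot gains (in dB) are interpolated over depth and applied
    multiplicatively, mimicking a user moving the TGC sliders on the scanner.
    """
    rng = rng or np.random.default_rng()
    depth = img.shape[0]
    knots_db = rng.uniform(-max_db, max_db, n_knots)
    gain_db = np.interp(np.arange(depth), np.linspace(0, depth - 1, n_knots), knots_db)
    gain = 10.0 ** (gain_db / 20.0)  # dB to linear amplitude gain
    return img * gain.reshape(-1, *([1] * (img.ndim - 1)))
```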

@mingxin-zheng (Contributor) commented:

Adding a reference to an implementation:
SlicerIGT/aigt#50

@dzenanz (Contributor) commented Nov 4, 2024

Conversion between polar-coordinate and rectangular images can also be accomplished using https://github.com/KitwareMedical/ITKUltrasound/blob/master/include/itkCurvilinearArraySpecialCoordinatesImage.h together with itk::ResampleImageFilter (unit test).

On Nov 7, 2024, @KumoLiu changed the title from "Add support for streaming and ultrasound data" to "Add support for ultrasound data".
@mingxin-zheng (Contributor) commented:

In the weekly MONAI meeting, tracked-ultrasound usage was mentioned, which may relate to the point transform feature developed in MONAI 1.4.
In this setting, MONAI transforms can be used to compute the position of the ultrasound image in 3D space. This typically requires (1) tracker positions from an external device, e.g. optical reflective markers, and (2) the relative position between the tracker and the ultrasound probe. With point transforms, both the image and the tracker positions can be processed by a single MONAI Compose.
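
For concreteness, the transform chain for a tracked probe might be composed as below. This is a pure NumPy sketch with illustrative matrix names; the resulting 4x4 matrix is the kind of input a point transform could consume:

```python
import numpy as np

# Sketch of the tracked-ultrasound transform chain. All matrices are 4x4
# homogeneous transforms, and all names/values here are illustrative:
#   image_to_world = tracker_to_world @ probe_to_tracker @ image_to_probe
image_to_probe = np.eye(4)    # fixed probe calibration (from a calibration step)
probe_to_tracker = np.eye(4)  # rigid mount of the marker on the probe
tracker_to_world = np.eye(4)  # streamed per frame by the tracking system

image_to_world = tracker_to_world @ probe_to_tracker @ image_to_probe

# Map an image-plane point (u, v in mm, on the w = 0 plane) into world space.
point_img = np.array([10.0, 20.0, 0.0, 1.0])
point_world = image_to_world @ point_img
```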

@aylward (Collaborator) commented Nov 17, 2024

@jamesobutler and others. We're thinking of creating a MONAI working group on ultrasound. Please send me an email if you're interested. saylward at nvidia dot com
