
How much RAM is needed for evaluation on the LiTS challenge? #7

Open
hongson23 opened this issue Apr 30, 2019 · 8 comments

@hongson23

Hello @jackyko1991
Thank you for your code. I trained and evaluated on the dataset (data_sphere) you provided and it works perfectly, thanks for that :)
I have trained vnet-tensorflow on the LiTS challenge data, https://drive.google.com/drive/u/0/folders/0B0vscETPGI1-Q1h1WFdEM2FHSUE. Training works well and I have some checkpoint files, but when I run evaluate.py on an arbitrary file such as volume1.nii, which is about 40 MB, it takes > 32 GB of RAM (host memory, not GPU memory) and then the machine freezes :(. My PC has 32 GB of RAM and a 1080Ti GPU; is that enough to evaluate LiTS data?

@SachidanandAlle

I have the same issue. It looks like evaluate.py (from the master branch) gets stuck for a long time and doesn't complete execution.

@jackyko1991
Owner

Please provide the TF version and OS that you are running on. You could also share your preprocessing pipeline so we can check whether the variables are suitable for liver training.

The code runs fine on most of my data, yet I have discovered some hidden bugs in both training and evaluation.

One possible reason the CPU memory usage is so large is that the resampled spacing is too small relative to the size of the human body.

The default preprocessing pipeline:

trainTransforms = [
    NiftiDataset.StatisticalNormalization(2.5),
    NiftiDataset.Resample((0.45,0.45,0.45)),
    NiftiDataset.Padding((FLAGS.patch_size, FLAGS.patch_size, FLAGS.patch_layer)),
    NiftiDataset.RandomCrop((FLAGS.patch_size, FLAGS.patch_size, FLAGS.patch_layer),FLAGS.drop_ratio,FLAGS.min_pixel),
    NiftiDataset.RandomNoise()
]

This resamples the image to 0.45 mm isotropic spacing. If you apply this directly to the abdominal region, the voxel count can reach roughly 700×700×400. Please note that the image data and softmax output are stored as float32, which can be huge in size (~700 MB per image object).
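
A rough, hypothetical illustration of this arithmetic (the input size and spacing below are made up, not taken from the repository): resampling a typical abdominal CT to 0.45 mm isotropic spacing inflates the voxel count dramatically.

import numpy as np

# Hypothetical abdominal CT: 512 x 512 x 200 voxels at 0.9 x 0.9 x 2.5 mm spacing.
orig_size = np.array([512, 512, 200])
orig_spacing = np.array([0.9, 0.9, 2.5])    # mm
new_spacing = np.array([0.45, 0.45, 0.45])  # isotropic resampling target

# Resampling preserves the physical extent, so the voxel count grows by the spacing ratio per axis.
new_size = np.ceil(orig_size * orig_spacing / new_spacing).astype(int)
voxels = new_size.prod()

print(new_size)                # ~[1024 1024 1112]
print(voxels * 4 / 1e9, "GB")  # float32 -> roughly 4.7 GB for a single volume copy

Several float32 copies (the resampled image, the padded patch batches, the softmax output) can therefore exhaust 32 GB of host RAM on their own.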

@SachidanandAlle

Here are the steps I tried.

  1. Downloaded the Spleen data from http://medicaldecathlon.com/ (Google Drive: https://drive.google.com/drive/folders/1HqEgzS8BV2c7xYNrZdEAnrHk7osJJ--2)
  2. Replaced image.nii.gz with Task09_Spleen/imagesTr/spleen02.nii.gz and the corresponding label
  3. Ran train.py (with default values) for 10 epochs (it takes around 3-5 minutes with GPU)
  4. Ran evaluate.py (with default values) against the same image.nii.gz

The process gets stuck at:
https://github.com/jackyko1991/vnet-tensorflow/blob/master/evaluate.py#L171
batches = prepare_batch(image_np,ijk_patch_indices)

@hongson23
Author

hongson23 commented Apr 30, 2019

Hello,
I have TF 1.13.1 running on Ubuntu 16.04.2 and I am using the default preprocessing pipeline, the same as the latest on master:
transforms = [
# NiftiDataset.Normalization(),
NiftiDataset.StatisticalNormalization(2.5),
NiftiDataset.Resample(0.75),
NiftiDataset.Padding((FLAGS.patch_size, FLAGS.patch_size, FLAGS.patch_layer))
]

@jackyko1991
Owner

Please try changing the resample spacing to 2 instead of 0.75 first.

I am currently unable to download large files from Google Drive until this coming Thursday. I will update the necessary code in the master branch to fit your dataset.
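
As a minimal sketch based on the pipeline posted above (2 mm is simply the value suggested here, not a tuned setting), the evaluation transforms would become:

transforms = [
    # NiftiDataset.Normalization(),
    NiftiDataset.StatisticalNormalization(2.5),
    NiftiDataset.Resample(2.0),  # coarser isotropic spacing: (2/0.75)^3 ≈ 19x fewer voxels
    NiftiDataset.Padding((FLAGS.patch_size, FLAGS.patch_size, FLAGS.patch_layer))
]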

@hongson23
Author

On master the resample value is 0.25, and I changed it to 0.75 so that I can predict, but if the value is 2 the program throws an AssertionError.
I will retrain with all of the LiTS challenge data and test your solution. Thank you very much :)

@hongson23
Author

Hello @jackyko1991
I trained your solution with:

  • 80 cases in the LiTS database
  • 200 epochs
  • the re-sample value changed from 0.25 to 0.75.

I can predict the liver segmentation, but it is not good. There are some zones that do not belong to the liver, as shown in my captures below.
Would you give some advice about that? Thank you.

(attached screenshots: slice1, slice2)

@jackyko1991
Owner

@hongson23 sorry for the late reply.

This happens when non-liver areas are not frequently included by the random crop. To reduce the number of false positives, you may need to increase the proportion of non-liver tissue step by step, i.e.
gradually increase drop_ratio and reduce min_pixel in the following line:

NiftiDataset.RandomCrop((FLAGS.patch_size, FLAGS.patch_size, FLAGS.patch_layer),FLAGS.drop_ratio,FLAGS.min_pixel),
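
For illustration only, and assuming drop_ratio and min_pixel are exposed as command-line flags of train.py (as the FLAGS.* usage suggests), a stepwise schedule across successive training runs could look like the following; the numbers are hypothetical, not tuned values:

# Hypothetical stepwise schedule: each run raises drop_ratio and lowers min_pixel
# so that non-liver tissue is sampled more often, per the advice above.
# python train.py --drop_ratio 0.1 --min_pixel 30
# python train.py --drop_ratio 0.3 --min_pixel 20
# python train.py --drop_ratio 0.5 --min_pixel 10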

I have come up with a more automatic way of region selection in the multi-image branch, named ConfidenceCrop2. The function hasn't been merged into the master branch yet.
