Running FAST segmentation Exception: Not enough classes detected to init KMeans #76
Comments
I’ve seen something similar happen when the input T1 has been preprocessed with intensity normalization. Is this the case in your data? E.g., does the white matter have an intensity of 110?
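For context, FreeSurfer-style intensity normalization scales white matter to sit near 110. A rough way to check is sketched below; this is a hedged illustration, not part of the pipeline — the image path is assumed from the thread, the 105–115 window is an arbitrary tolerance, and `looks_normalized` is a hypothetical helper.

```shell
# Hedged sketch: FreeSurfer-style normalization pins white matter near
# 110. If a high percentile of the T1 intensities sits in that window,
# the input has likely been preprocessed. Path, tolerance, and helper
# name are assumptions, not taken from the HCP pipeline itself.
t1=/output/sub-NDARINV0LEM88KP/T1w/T1w_acpc.nii.gz  # assumed path

looks_normalized() {
  # $1: an upper-percentile intensity, e.g. from `fslstats <img> -P 98`
  awk -v p="$1" 'BEGIN { exit !(p >= 105 && p <= 115) }'
}

# Requires FSL on PATH:
# p98=$(fslstats "$t1" -P 98)
# looks_normalized "$p98" && echo "T1 may be intensity-normalized"
```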
On Feb 22, 2022, at 11:05 AM, Soichi Hayashi wrote:
Hello. We are running the HCP pipeline container docker://bids/hcppipelines:v4.1.3-1. When we feed it T1w and T2w input files, we run into the following error message:
```
Fri Feb 18 18:17:36 EST 2022:T2wToT1wReg.sh: START: T2w2T1Reg
Running FAST segmentation
Exception: Not enough classes detected to init KMeans
Image Exception : #63 :: No image files match: /output/sub-NDARINV0LEM88KP/T2w/T2wToT1wReg/T2w2T1w_fast_pve_2
terminate called after throwing an instance of 'std::runtime_error'
what(): No image files match: /output/sub-NDARINV0LEM88KP/T2w/T2wToT1wReg/T2w2T1w_fast_pve_2
/usr/local/fsl/bin/epi_reg: line 320: 13055 Aborted $FSLDIR/bin/fslmaths ${vout}_fast_pve_2 -thr 0.5 -bin ${vout}_fast_wmseg
Image Exception : #63 :: No image files match: /output/sub-NDARINV0LEM88KP/T2w/T2wToT1wReg/T2w2T1w_fast_wmseg
terminate called after throwing an instance of 'std::runtime_error'
what(): No image files match: /output/sub-NDARINV0LEM88KP/T2w/T2wToT1wReg/T2w2T1w_fast_wmseg
/usr/local/fsl/bin/epi_reg: line 329: 13086 Aborted $FSLDIR/bin/fslmaths ${vout}_fast_wmseg -edge -bin -mas ${vout}_fast_wmseg ${vout}_fast_wmedge
```
I think this means that FAST segmentation is failing to generate the output file T2w2T1w_fast_pve_2 because something is not right with the input files. Is that correct? If so, do you have any suggestions on how we should troubleshoot this problem?
We are running the container with the following options:
https://github.com/brainlife/app-hcp-pipeline/blob/v4.1.3/main

```shell
singularity exec -e \
  -B `pwd`/bids:/bids \
  -B `pwd`/output:/output \
  docker://bids/hcppipelines:v4.1.3-1 \
  ./run.py /bids /output participant \
    --n_cpus 8 \
    --stages $stage \
    --license_key "$FREESURFER_LICENSE" \
    --participant_label $sub \
    --processing_mode $processing_mode \
    $skipbidsvalidation
```
We are running a slightly modified version of run.py for our jobs; here is the current content: https://github.com/brainlife/app-hcp-pipeline/blob/v4.1.3/run.py
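One way to narrow down the missing-file errors in the quoted log before re-running the whole pipeline is to confirm that the registration files are present and non-empty. A minimal sketch: the directory and file names are copied from the log above, while `check_inputs` is a hypothetical helper, not part of the HCP pipeline.

```shell
# Hedged sketch: report registration files that are missing or empty.
# The directory layout mirrors the error log; check_inputs is a
# hypothetical helper, not a pipeline function.
reg_dir=/output/sub-NDARINV0LEM88KP/T2w/T2wToT1wReg

check_inputs() {
  dir=$1; shift
  missing=0
  for f in "$@"; do
    if [ ! -s "$dir/$f" ]; then
      echo "missing or empty: $f"
      missing=$((missing + 1))
    fi
  done
  return "$missing"
}

# Example (file names from the log above):
# check_inputs "$reg_dir" T1w_acpc_brain.nii.gz T2w2T1w_fast_pve_2.nii.gz
```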
Indeed, that doesn't look like a preprocessed image. The data does not look very good, though; I would expect to see some grey matter near your cursor. It could be a quality issue with this participant. Are you consistently getting this error on other ABCD subjects? You should have a brain-extracted T1 in /output/sub-NDARINV0LEM88KP/T2w/T2wToT1wReg/. Does that look okay?
Roeland Hancock
On Tue, Feb 22, 2022 at 11:20 AM Soichi Hayashi wrote:
We are seeing this error on ABCD subject NDARINV0LEM88KP. I believe the T1w is not preprocessed. What do you mean by having an intensity of 110? As far as I can tell, different voxels have different intensities.
[image: image]
<https://user-images.githubusercontent.com/923896/155173994-715b2796-18fd-48f0-9b2c-00ae807b7e91.png>
No, the T2wToT1wReg/T1w_acpc_brain.nii.gz is all blank. Here is the entire log. I don't see anything ominous, but maybe something went wrong with bet?
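If the brain-extracted image really is all zeros, one option is to detect the blank image and re-run bet with different parameters. The sketch below is a hedged illustration, not the pipeline's own logic: the output path comes from this thread, the bet input path and the `-f`/`-R` choices are assumptions to experiment with, and `is_blank` is a hypothetical helper.

```shell
# Hedged sketch: detect an all-zero brain image and retry bet.
# `fslstats <img> -R` prints "min max"; a max of 0 means bet removed
# everything. Lowering -f (fractional intensity threshold) keeps more
# tissue, and -R enables bet's robust centre estimation. The parameter
# values and the input file name are assumptions, not pipeline defaults.
brain=/output/sub-NDARINV0LEM88KP/T2w/T2wToT1wReg/T1w_acpc_brain.nii.gz

is_blank() {
  # $1 = min, $2 = max, as printed by `fslstats <img> -R`
  awk -v max="$2" 'BEGIN { exit !(max == 0) }'
}

# Requires FSL on PATH:
# if is_blank $(fslstats "$brain" -R); then
#   bet T1w_acpc.nii.gz T1w_acpc_brain.nii.gz -f 0.3 -R  # assumed input
# fi
```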
Has anyone tried https://github.com/DCAN-Labs/abcd-hcp-pipeline? It looks like there's a version of the HCP pipeline for ABCD.
On Thu, Feb 24, 2022 at 9:58 AM Giulia Bertò wrote:
Hi, I ran into the same issue. This happens with processing_mode='auto', both with the Docker images v4.1.3-1 and v4.3.0-3. My guess is that the script T2wToT1wReg.sh fails at line 76 because it's looking for the file T2w2T1w.mat, but there is only the file T2w2T1w_init.mat in the T2wToT1wReg folder. Could that be the cause?
@rhancockn The DCAN-Labs abcd-hcp-pipeline is actually quite flexible and handles data from most studies. You can read more about the pipeline's behavior here: https://collection3165.readthedocs.io/en/stable/pipeline/ They're also quite responsive to GitHub issues posted there.
Hi @rhancockn, thanks for following up on this. We are looking into the DCAN abcd-hcp app. Regarding this App, it looks like an update to the Docker container would be necessary to make the App functional. We were wondering whether it would be helpful for us to edit the Docker container — would you be interested in that? Franco