About how multimodal upstream tasks migrate to unimodal downstream tasks #18
Comments
Hi, many thanks for your attention to our work! It encourages me a lot!
I will release the MRI implementation in an upcoming version soon. Stay tuned!
Thank you for your reply! Looking forward to your work on MRI!
Hi! I was thinking about the pre-training data format recently. There is a question I would like to discuss with you.
Good question. The third way is not feasible. Personally, I prefer the second way, since it is more flexible, more extendable, and easier to implement.
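For reference, here is a minimal PyTorch-style sketch of the second way (four-channel pre-training, with a missing downstream modality replaced by an all-black channel). The modality names, shapes, and helper function are illustrative assumptions, not the repository's actual code.

```python
import torch

# Illustrative modality order; an assumption, not taken from the repository.
MODALITIES = ["t1", "t1ce", "t2", "flair"]

def build_four_channel_input(volumes: dict, shape=(128, 128, 128)) -> torch.Tensor:
    """Stack the available modalities in a fixed order and zero-fill the missing ones."""
    channels = []
    for name in MODALITIES:
        if name in volumes:
            channels.append(volumes[name])        # (D, H, W) volume for this modality
        else:
            channels.append(torch.zeros(shape))   # all-black placeholder for a missing modality
    return torch.stack(channels, dim=0)           # (4, D, H, W), same layout as pre-training

# Downstream example with only T1 and FLAIR available:
x = build_four_channel_input({
    "t1": torch.rand(128, 128, 128),
    "flair": torch.rand(128, 128, 128),
})
print(x.shape)  # torch.Size([4, 128, 128, 128])
```

This keeps the input layout identical to pre-training, so the full pre-trained encoder can be loaded without changing any channel-dependent layers.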
Thank you so much for your continued patience in responding! What you have said makes a lot of sense.
Yes, the information will be duplicated (to some extent), resulting in some redundancy. But you can consider it a kind of data augmentation.
Thank you for your reply~(●'◡'●)
You are welcome! If you have any further questions or updates on this, feel free to contact me!
Dear researchers, our work is now available at Large-Scale-Medical, if you are still interested in this topic. Thank you very much for your attention to our work; it encourages me a lot!
Hello! I am very inspired by your work. Following it, I have some questions about pre-training on MRI data.
I want to use brain tumor MRI containing four modalities for pre-training. What should I do if the downstream task uses only a single modality, or some modalities are missing?
I have a few ideas:
1. Pre-training treats the four-modality MRI of the same patient as four independent inputs (a small sketch of this data format follows below).
2. Pre-training uses four-channel inputs, and when a modality is missing in the downstream task, it is replaced with an all-black input.
3. The downstream task loads only a portion of the pre-trained parameters, and the remaining layers that involve the channels are retrained.
I wonder which strategy is more reasonable? Or do you have a better way to deal with it? Looking forward to your reply.
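As a hypothetical sketch of the first idea above: each of a patient's four modalities becomes an independent single-channel sample, so N patients yield 4N inputs and a single-modality downstream task can reuse the one-channel stem unchanged. The function and shapes below are illustrative, not code from this repository.

```python
import torch

# Each modality of one patient becomes its own (1, D, H, W) sample.
def expand_to_single_channel_samples(patient_volumes: dict) -> list:
    """Turn one patient's {modality: (D, H, W) tensor} dict into single-channel samples."""
    return [vol.unsqueeze(0) for vol in patient_volumes.values()]

samples = expand_to_single_channel_samples({
    "t1": torch.rand(96, 96, 96),
    "t2": torch.rand(96, 96, 96),
    "t1ce": torch.rand(96, 96, 96),
    "flair": torch.rand(96, 96, 96),
})
print(len(samples), samples[0].shape)  # 4 torch.Size([1, 96, 96, 96])
```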