
need help understanding how CHIEF works #21

Open
tranh215980 opened this issue Sep 12, 2024 · 8 comments

Comments

@tranh215980

Dear authors,

I have read your paper but still do not have a deep understanding of how the CHIEF model is pretrained. My confusion is about the pretraining method, and I would like to discuss it.

  • I would like to visualize the distribution of hospitals, cancer types, and labels before pretraining my own CHIEF. Could the authors provide a CSV listing which slides were used in CHIEF pretraining?

  • After much reading, am I correct that the CHIEF slide model is not pretrained with cross-modal SSL but with the weakly supervised setup from SCL-WC (https://openreview.net/forum?id=1fKJLRTUdo)? I first believed CHIEF used a CLIP loss to align with organ text labels; now I believe CHIEF uses SCL-WC, published by your group at NeurIPS in December 2022. SCL-WC's method description is similar to the supplement's positive-negative modeling with memory banks, but the nomenclature differs. I have many questions. Am I correct that CHIEF is an upgraded version of SCL-WC with more data and additional multi-task learning? Is there any difference from the previous SCL-WC, which also uses CTransPath features? What types of labels were used to train SCL-WC here with the 60,530 WSIs from TCGA, PAIP, PANDA, GTEx, and others? What is the overlap between the task labels used in CHIEF pretraining and the downstream tasks? (I sketch my current understanding of the memory-bank step below, after these bullets.)

  • In the code, the text embedding is fused with the last-layer embedding during downstream modeling. Is text used anywhere else? The methods section says: “Through the pretraining process, the CHIEF model learned to associate visual features with corresponding text descriptions, thereby identifying their semantic relevance across organs”. Can CHIEF create heatmaps from a text query prompt?

  • I cannot find citations to SCL-WC or SupContrast in the main paper. SCL-WC is cited in the supplement, but if CHIEF really uses SCL-WC, this should be made clear to help others. The code references SupContrast as an inspiration, but I cannot find where it is used in the paper.

  • Can the CHIEF pretraining code be released? Yesterday I tried to find the SCL-WC code but saw that the repository is empty. I submitted an issue (will codes be made public Xiyue-Wang/SCL-WC#4), and today I can no longer access the code to track that issue.
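To make my question concrete, here is a minimal sketch of what I currently understand the SCL-WC-style slide-level step to be: class-wise memory banks of slide embeddings plus a SupContrast-style loss. All names, shapes, and default values here are my own guesses, not the authors' code; please correct me if this is wrong.

```python
# My own sketch of the positive/negative memory-bank idea, not the authors' code.
import torch
import torch.nn.functional as F

class SlideMemoryBank:
    """Keeps the most recent slide embeddings for each class (e.g., cancerous / non-cancerous)."""
    def __init__(self, dim=256, size=512, num_classes=2):
        self.banks = [torch.zeros(0, dim) for _ in range(num_classes)]
        self.size = size

    def update(self, emb, label):
        bank = torch.cat([self.banks[label], emb.detach()], dim=0)
        self.banks[label] = bank[-self.size:]  # drop the oldest entries

def slide_contrastive_loss(emb, label, memory, temperature=0.07):
    """SupContrast-style loss: pull the slide embedding toward same-class bank entries."""
    emb = F.normalize(emb, dim=-1)                      # (1, dim) current slide embedding
    pos = F.normalize(memory.banks[label], dim=-1)      # same-class slides
    neg = F.normalize(memory.banks[1 - label], dim=-1)  # other-class slides
    if len(pos) == 0 or len(neg) == 0:
        return emb.new_zeros(())
    logits = torch.cat([emb @ pos.T, emb @ neg.T], dim=1) / temperature
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    return -log_prob[:, :pos.shape[0]].mean()           # average log-probability of positives
```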

Thank you for your help. I am still confused about the CHIEF method, but I am interested in customizing it for my data.

@tranh215980
Author

Another question:

  • I am getting started on IDH mutation prediction and am wondering how well vanilla SCL-WC would compare when trained on only the task slides, without the extra pretraining slides. Would the previous SCL-WC model not also be a strong comparison to CHIEF?

@Dadatata-JZ
Collaborator

Hi Tran,

IDH prediction is definitely an exciting task to explore! It holds significant clinical importance for gliomas, so I'm glad to hear you're working on it. You should be able to download CHIEF's code and weights via our Docker image. As for the comparisons, whether the SCL-WC model or others might outperform CHIEF is something we'll only know after experimentation.

I also want to clarify that our goal was to showcase a range of analyses using CHIEF to demonstrate the potential of pathology foundation models (as others in the field have done). It's not necessary for us to claim that CHIEF is the best at everything. A joint effort across the field, including you and us, will be essential to keep pushing this forward!

@tranh215980
Author

Dear @Dadatata-JZ ,

Thank you for the response. I am still reading the code and the paper and will reach out with more questions, but I am still not sure where the code for pretraining my own CHIEF is. Does CHIEF use SCL-WC, and if so, where is the SCL-WC code?

@Dadatata-JZ
Collaborator

Hi Tran,

You can find all the relevant details of our model architecture implementation in the "Methods" section, with references listed at the end for further reading.

Thanks.

@Eshay14

Eshay14 commented Sep 16, 2024

Hi Dadatata-JZ, unfortunately, I still don't understand much about your training architecture from several of your responses. Could you clarify the relationship between SCL-WC and your approach? tranh215980 seems to have inquired about this several times, but the responses have been largely vague. Could you please give an explicit answer to the following question from Tranh?

"After much reading is my correct that CHIEF slide model is not pretrained using cross-modal SSL but WSL setup in SCL-WC (https://openreview.net/forum?id=1fKJLRTUdo)? I first believed CHEIF use CLIP loss to align with organ text labels. Now I beleive CHIEF use SCL-WC from your group published in NeurIPS in December 2022. SCL-WC has similar method description to supplement on positive-negative modeling with memory banks but different nomenclature. I have many questions. Is my correct that CHIEF is upgraded version of SCL-WC with more data with also multi-task learning? Is there difference between previous SLC-WC that also uses CTransPath features? What type labels used in training SCL-WC here with 60,530 WSIs coming from TCGA, PAIP, PANDA, GTEX, others? What is overlap of task labels in CHIEF pretraining and downstream tasks?"

Thanks for your assistance in helping us parse and fully understand your experimental process.

@Dadatata-JZ
Collaborator

@tranh215980 @Eshay14
Hi both, I think the authors have released SCL-WC models. Please take a quick look first; we don’t want to keep you waiting.

All your great inquiries about CHIEF's architecture have been noted, and we'll get back to you on your other detailed questions as soon as possible.

@Xiyue-Wang
Collaborator

@tranh215980 @Eshay14

Hi both, thank you for your questions.

In CHIEF, the weakly supervised pretraining model employs the SCL-WC architecture for image encoding and extends it with an additional text branch (i.e., a CLIP text encoder). During pretraining, text features (anatomical sites, e.g., lung, breast, brain) and image features are fused to facilitate slide-level classification. We cited this in the method details within the supplementary file. Please kindly review it.
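As a rough illustration only (module names, feature dimensions, and the concatenation-based fusion here are assumptions, not the released CHIEF code), such a text-image fusion for slide-level classification might look like this:

```python
# Illustrative sketch, not the released CHIEF implementation: a slide-level image
# embedding (e.g., aggregated from CTransPath patch features by the WSI branch)
# is fused with a text embedding of the anatomical site (e.g., a CLIP text
# encoding of "lung"), then classified as cancerous vs. non-cancerous.
import torch
import torch.nn as nn

class TextImageFusionHead(nn.Module):
    def __init__(self, img_dim=768, txt_dim=512, hidden=256):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden)  # project the slide embedding
        self.txt_proj = nn.Linear(txt_dim, hidden)  # project the site text embedding
        self.classifier = nn.Linear(hidden * 2, 2)  # positive vs. negative slide

    def forward(self, slide_emb, site_text_emb):
        fused = torch.cat([self.img_proj(slide_emb), self.txt_proj(site_text_emb)], dim=-1)
        return self.classifier(fused)

head = TextImageFusionHead()
logits = head(torch.randn(1, 768), torch.randn(1, 512))  # dummy inputs for a shape check
```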

For the weakly supervised pretraining model, we performed slide-level positive (cancerous) versus negative (non-cancerous) classification. The data details can be found in Supplementary Table 13. For the cancer detection task, there is no overlap; we simply apply the pretrained CHIEF model to obtain the results. For tasks related to TCGA data, some overlap does exist, but these are distinct tasks with different labels. Additionally, we conducted external validation to verify generalizability.

@tranh215980
Author

Dear authors,

"We cited this in the method details within the supplementary file. Please kindly review it."

I find this sentence dismissive and disrespectful. I have "kindly reviewed" this paper and its supplement many times, and I raise simple issues like most posters do. I have tried to find answers in other issues, I still do not know what is going on, and it has been a week. I am also disappointed that I received only short answers until @Eshay14 asked. I have asked everything honestly, but since it is hard to get a straight answer, I will now ask directly:

  1. @Xiyue-Wang why was the SCL-WC code never updated in the two years since the NeurIPS paper was published? Why was the repository hidden until @Eshay14 asked?

  2. Here I copy and paste the "CHIEF pretraining details" paragraph (I also sketch this training schedule in code at the end of this comment):

We pretrained CHIEF with 60,530 WSIs from 14 cohorts that were split into 90% training data and 10% validation data. We split the training data at the patient level and ensured that samples from different anatomic sites were represented in the training and validation sets proportionally. In the training phase, the memory banks in the WSI-level contrastive learning module were constructed separately for different cancer types. In the validation phase, we calculated the AUROC, sensitivity, specificity and other validation set performance metrics for each anatomic site individually. We optimized the model hyperparameters to maximize the average AUROC across sites. The weakly supervised learning adopted a batch size of 1 WSI and a maximum epoch number of 50. We used the Adam optimizer [56] with an initial learning rate of 3.0 × 10−4. We used the cosine annealing method [57] to determine the learning rate schedule. We exploited the early stop strategy to mitigate overfitting, which terminated network training when the validation AUROC no longer increased in ten consecutive epochs. CHIEF was pretrained using eight NVIDIA V100 32-GB GPUs.

@Dadatata-JZ @Xiyue-Wang nowhere in this paragraph, or in the rest of the main paper, is SCL-WC or SupContrast mentioned or cited. The only way I found that CHIEF is SCL-WC is on page 9, through a hidden citation: "To overcome this limitation, we constructed a WSI discrimination branch [18] that augmented discriminative diagnostic features by comparing feature representations across WSIs", where citation 18 is SCL-WC. This describes only a small part of SCL-WC. Nowhere in previous issues do you say that you are using SCL-WC. When you say "CHIEF combines with text encoding" and "weakly-supervised contrastive learning is used", and also in Figure 1, posters assume that CLIP is used, not SCL-WC. After understanding SCL-WC, I now see that the pretraining details are split across three sections of the methods. More details are included in the supplement, but it is not clear whether they describe "pretraining", "finetuning", or "architecture". Many things are still unclear. In #18, the researchers who made CLAM, TOAD, and UNI, also from Harvard, also did not understand that CHIEF uses SCL-WC. When you say "You can find all the relevant details of our model architecture implementation in the "Methods" section", this is not true. Do you agree?

  3. How is it acceptable not to mention SCL-WC by name when it is the core of the "weakly-supervised pretraining" in CHIEF? In school and at work this would be considered plagiarism.

  4. If CHIEF uses the SupContrast code, how is it acceptable not to cite SupContrast in the paper? @HobbitLong

  5. How can you say your task is "tumor origin" with the same clinical relevance as TOAD when only primary tumors are used? If only primary tumors are used, is the task not site prediction rather than origin prediction? There has not been a straight answer.

@Eshay14 I agree that many answers in #23 are vague. I have more questions to ask but will keep this short and on topic. I invite everyone to a fair and transparent discussion of CHIEF.
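For concreteness, here is how I read the training schedule in the paragraph I quoted above. The argument names below are placeholders of mine, not the released code; only the hyperparameter values come from the quoted paragraph.

```python
# Placeholder sketch of the quoted schedule (Adam, lr 3e-4, cosine annealing over
# 50 epochs, batch size of one WSI, early stop after 10 epochs without
# validation-AUROC improvement). `model`, `train_loader`, and `evaluate_auroc`
# are stand-ins, not the authors' code.
import torch

def pretrain(model, train_loader, evaluate_auroc, max_epochs=50, patience=10):
    optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=max_epochs)
    best_auroc, bad_epochs = 0.0, 0
    for epoch in range(max_epochs):
        model.train()
        for bag, label in train_loader:            # one WSI (a bag of patch features) per step
            optimizer.zero_grad()
            loss = model.compute_loss(bag, label)  # stand-in for the weakly supervised loss
            loss.backward()
            optimizer.step()
        scheduler.step()

        auroc = evaluate_auroc(model)              # mean validation AUROC across anatomic sites
        if auroc > best_auroc:
            best_auroc, bad_epochs = auroc, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:             # early stopping
                break
    return model
```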
