Evaluation of Impartial model performance within Monai: #17

Open · gunjan-sh opened this issue Jan 27, 2023 · 2 comments
gunjan-sh commented Jan 27, 2023

Impartial is a semi-supervised deep learning method that performs whole-cell segmentation using a minimal number of scribbles from an expert pathologist.
There are two ways to evaluate the performance of the Impartial models during and after training:

  1. Human-expert-in-the-loop feedback, where the user reviews the model's inference results, identifies erroneous cells, provides additional scribbles, and starts model training again. (This is how Impartial currently works with the human in the loop.)

  2. To measure the model's performance quantitatively: pathologists often have "fully annotated" images alongside the unlabeled data, and these fully labelled test images can be used to track model performance. We therefore wish to incorporate standard evaluation metrics into our Impartial-Monailabel framework.

    Currently, monai supports the "image" (upload_image) and "label" (save_label) APIs; we use them to submit the "image" and the "scribbles" respectively. Essentially, we use the "label" attribute to submit the "scribbles".

    • Is there a way to submit a "ground truth" label? Or could another attribute like "scribble" be added? Or could we use the "tag" parameter of the save_label API?
      For example:
          "image2": {
                "image": {
                  "ext": ".tif",
                  "info": {
                    "name": "image2.tif"
                  }
                },
                "labels": {
                  "gt": {
                    "ext": ".zip",
                    "info": {
                      "name": "image2_gt.zip"
                    }
                  },
                  "scribble": {
                    "ext": ".zip",
                    "info": {
                      "name": "image2_scribble.zip"
                    }
                  }
         }
    • Add API like "save_ground_truth()"
    • Add API like Evaluate(image, ground_truth) or evaluate() ?
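
For illustration, here is a rough sketch of what such an evaluate() could compute internally. Only DiceMetric is existing Monai API; the evaluate() wrapper and the toy one-hot masks are hypothetical stand-ins for Impartial predictions and ground truth:

    import torch
    from monai.metrics import DiceMetric

    def evaluate(pred_mask: torch.Tensor, gt_mask: torch.Tensor) -> float:
        """Hypothetical evaluate(): mean Dice between a predicted and a
        ground-truth segmentation, both one-hot shaped (B, C, H, W)."""
        metric = DiceMetric(include_background=False, reduction="mean")
        metric(y_pred=pred_mask, y=gt_mask)
        return metric.aggregate().item()

    # Toy usage: random one-hot masks stand in for real segmentations.
    labels = torch.randint(0, 2, (1, 1, 64, 64))
    pred = torch.cat([labels == 0, labels == 1], dim=1).float()
    gt = torch.cat([labels == 0, labels == 1], dim=1).float()
    print(evaluate(pred, gt))  # 1.0 here, since pred == gt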
SachidanandAlle commented Jan 27, 2023

You can use tag to save any ad hoc kind of mask/scribbles etc. against a given image.
The corresponding save_label REST APIs already exist and take a tag param.
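
For example, a minimal sketch of saving a ground-truth mask under a "gt" tag via the REST API, assuming a MONAI Label server at localhost:8000 (the exact path and parameter names should be checked against the server's /docs page):

    import requests

    # Assumes image "image2" is already in the datastore; the "gt" tag
    # name is our own choice, not a reserved value.
    with open("image2_gt.zip", "rb") as f:
        resp = requests.put(
            "http://localhost:8000/datastore/label",
            params={"image": "image2", "tag": "gt"},
            files={"label": f},
        )
    resp.raise_for_status()
    print(resp.json())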

SachidanandAlle commented
For running evaluation, we can have a specific eval REST API so that you can run evaluation only (from your training workflow) and collect the metrics etc. against the stored ground-truth labels.
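
Purely as a sketch, a call to such a (not-yet-existing) eval endpoint from a training workflow might look like the following; the /eval path, its parameters, and the returned fields are all hypothetical:

    import requests

    # Hypothetical future endpoint: evaluate model "impartial" on
    # "image2" against the label stored under the "gt" tag.
    resp = requests.post(
        "http://localhost:8000/eval/impartial",
        params={"image": "image2", "label_tag": "gt"},
    )
    print(resp.json())  # e.g. {"dice": ...} (illustrative only)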
