Impartial is a semi-supervised deep learning method for whole-cell segmentation that requires only a minimal number of scribbles from an expert pathologist.
There are two ways to evaluate the performance of Impartial models during and after training:
1. Human-expert-in-the-loop feedback: the user reviews the model's inference results, identifies erroneously segmented cells, provides additional scribbles, and starts model training again. (This is how Impartial currently works with the human in the loop.)
2. Quantitative evaluation: pathologists often have "fully annotated" images alongside their unlabeled data, and these fully labelled test images can be used to track model performance. Hence, we wish to incorporate typical evaluation metrics into our Impartial-Monailabel framework; see the sketch after this list.
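For instance, a minimal sketch of the kind of metric computation we have in mind, using MONAI's DiceMetric (Dice here stands in for whatever metrics the framework would expose; the shapes and random masks below are purely illustrative):

```python
import torch
from monai.metrics import DiceMetric

# Compare a predicted mask against a fully annotated ground-truth mask.
# DiceMetric expects one-hot tensors shaped (batch, classes, H, W).
dice = DiceMetric(include_background=False, reduction="mean")

def one_hot(mask: torch.Tensor, num_classes: int) -> torch.Tensor:
    """(H, W) integer mask -> (1, C, H, W) one-hot float tensor."""
    oh = torch.nn.functional.one_hot(mask.long(), num_classes)  # (H, W, C)
    return oh.permute(2, 0, 1).unsqueeze(0).float()             # (1, C, H, W)

# Placeholder masks; in practice these would come from model inference and
# the pathologist's fully annotated test image.
pred = one_hot(torch.randint(0, 2, (256, 256)), num_classes=2)
gt = one_hot(torch.randint(0, 2, (256, 256)), num_classes=2)

dice(y_pred=pred, y=gt)
print("mean Dice:", dice.aggregate().item())
dice.reset()
```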
Currently, MONAI Label supports "image" (upload_image) and "label" (save_label) APIs, which we use to submit the image and the scribbles respectively. Essentially, we use the "label" attribute to submit the scribbles.
We were hoping there was a way to submit a "ground truth" label as well. Alternatively, could another attribute like "scribble" be added, or could we use the "tag" parameter of the save_label API?
For example, here is a sketch of how we imagine submitting both scribbles and a ground-truth mask through the existing save_label endpoint, distinguished only by tag (the server address, image id, file names, and tag values are illustrative, and the endpoint shape follows the MONAI Label datastore REST API as we understand it):
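```python
import requests

SERVER = "http://127.0.0.1:8000"  # assumed MONAI Label server address
IMAGE_ID = "tissue_001"           # image previously submitted via upload_image

def save_label(image_id: str, label_path: str, tag: str) -> dict:
    """PUT a label file to the datastore, keyed by image id and tag."""
    with open(label_path, "rb") as f:
        r = requests.put(
            f"{SERVER}/datastore/label",
            params={"image": image_id, "tag": tag},
            files={"label": f},
        )
    r.raise_for_status()
    return r.json()

# Scribbles and ground truth would live side by side, separated only by tag.
save_label(IMAGE_ID, "tissue_001_scribbles.nii.gz", tag="scribbles")
save_label(IMAGE_ID, "tissue_001_ground_truth.nii.gz", tag="ground_truth")
```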
You can use tag to save any ad-hoc kind of mask/scribbles etc. against a given image; the corresponding save_label REST APIs already exist and take a tag param.

For running evaluation, we can have a specific eval REST API so that you can run evaluation only (from your training workflow) and collect the metrics etc.
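A hypothetical shape for such an eval endpoint (no such API exists yet; the route, parameters, response fields, and datastore helpers below are all illustrative stubs), sketched with FastAPI since MONAI Label is built on it:

```python
from typing import Dict, List

import numpy as np
from fastapi import FastAPI

app = FastAPI()

# --- Stubs standing in for the real datastore / inference plumbing ---
def list_images_with_tag(tag: str) -> List[str]:
    return ["tissue_001"]                        # assumed: query datastore by tag

def load_label(image_id: str, tag: str) -> np.ndarray:
    return np.zeros((256, 256), dtype=np.uint8)  # assumed: fetch stored mask

def run_inference(model: str, image_id: str) -> np.ndarray:
    return np.zeros((256, 256), dtype=np.uint8)  # assumed: model prediction

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred > 0, gt > 0).sum()
    denom = (pred > 0).sum() + (gt > 0).sum()
    return float(2 * inter / denom) if denom else 1.0

# Hypothetical endpoint: evaluate a model against every label stored under
# the ground-truth tag and return the metrics to the training workflow.
@app.post("/eval")
def run_eval(model: str, gt_tag: str = "ground_truth") -> Dict:
    scores = {
        image_id: dice(run_inference(model, image_id), load_label(image_id, gt_tag))
        for image_id in list_images_with_tag(gt_tag)
    }
    return {
        "model": model,
        "tag": gt_tag,
        "mean_dice": sum(scores.values()) / len(scores),
        "per_image": scores,
    }
```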