Model validation improvement #3

Open · 3 tasks
fsantini opened this issue Feb 22, 2021 · 0 comments

@fsantini (Contributor):
Datasets for validation should be in the same format as the "Upload data" format generated by Dafne, i.e., an npz file containing the data, the masks, and the resolution. This makes it easy to generate validation datasets, because it can be done directly from Dafne.

The relevant code from the client that generates the upload data format is the following:

    out_data = {'data': dataset, 'resolution': resolution, 'comment': comment}
    for mask_name, mask in allMasks.items():
        out_data[f'mask_{mask_name}'] = mask

which can be loaded with np.load(filename, allow_pickle=False) (allow_pickle=False for security, so no pickled objects are executed on load).
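
A minimal round-trip sketch of this format, assuming numpy; the names dataset, resolution, comment, allMasks, and validation_set.npz are illustrative, not fixed by Dafne:

    import numpy as np

    # Save in the "Upload data" format: data, resolution, and one entry per mask.
    out_data = {'data': dataset, 'resolution': resolution, 'comment': comment}
    for mask_name, mask in allMasks.items():
        out_data[f'mask_{mask_name}'] = mask
    np.savez_compressed('validation_set.npz', **out_data)

    # Load it back; allow_pickle=False refuses pickled objects for security.
    loaded = np.load('validation_set.npz', allow_pickle=False)
    data = loaded['data']
    resolution = loaded['resolution']
    masks = {key[len('mask_'):]: loaded[key]
             for key in loaded.files if key.startswith('mask_')}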
Additionally, the following should be taken care of:

  • The Dice score is only calculated if the map contains more than a defined number of voxels (the Dice score is very unstable with very few voxels)
  • Only the slices where the maps are defined are used for the validation. A dataset might contain only a few segmented slices, so we shouldn't assume that a slice where all the masks are zero contains no features. Maybe we check that enough masks are defined? (In the incremental learning, we only perform the learning if more than half the ROIs are defined)
  • Merge _L and _R ROIs for the validation (this code is already there, if I remember correctly). A combined sketch of these three rules follows this list.
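
A rough sketch of how the three rules could fit together. The threshold MIN_VOXELS, the helper names, and the per-slice application of the more-than-half-the-ROIs rule are assumptions for illustration, not the existing Dafne code; masks are assumed to be 3D boolean arrays of shape (rows, cols, slices), and predictions are assumed to contain the same ROI names as the ground truth:

    import numpy as np

    MIN_VOXELS = 100  # assumed threshold; Dice on tiny masks is unstable


    def dice_score(pred, truth):
        """Dice coefficient between two binary masks."""
        pred = pred.astype(bool)
        truth = truth.astype(bool)
        denom = pred.sum() + truth.sum()
        if denom == 0:
            return np.nan
        return 2.0 * np.logical_and(pred, truth).sum() / denom


    def merge_left_right(masks):
        """Merge 'roi_L'/'roi_R' pairs into a single 'roi' mask."""
        merged = {}
        for name, mask in masks.items():
            base = name[:-2] if name.endswith(('_L', '_R')) else name
            merged[base] = np.logical_or(merged.get(base, False), mask)
        return merged


    def validate(ground_truth, predictions):
        """Per-ROI mean Dice over slices with enough defined masks."""
        truth = merge_left_right(ground_truth)
        preds = merge_left_right(predictions)
        n_slices = next(iter(truth.values())).shape[2]
        scores = {}
        for sl in range(n_slices):
            # Use a slice only if more than half the ROIs are segmented there,
            # mirroring the incremental-learning rule mentioned above.
            defined = [name for name, m in truth.items() if m[:, :, sl].any()]
            if len(defined) <= len(truth) / 2:
                continue
            for name in defined:
                t = truth[name][:, :, sl]
                if t.sum() < MIN_VOXELS:
                    continue  # skip Dice on very small masks
                scores.setdefault(name, []).append(
                    dice_score(preds[name][:, :, sl], t))
        return {name: float(np.mean(vals)) for name, vals in scores.items()}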