[Merge after #200]: Subframe segmentation #201
Conversation
Some comments here and there.
Can you add a test where the segmentation extractor does not have one of the traces? There is an argument in generate_dummy_segmentation_extractor to control that. The sliced segmentation extractor's traces_dict should not have more keys than the parent's.
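A minimal sketch of the invariant behind that suggested test, using plain dicts as a stand-in for the extractors' traces_dict (in the real test the fixtures would come from generate_dummy_segmentation_extractor; the dict values here are illustrative):

```python
# Invariant: a frame-sliced extractor's traces_dict must not gain keys
# its parent lacks. Plain dicts stand in for real extractors here.
parent_traces = {"raw": [[0.1, 0.2]], "dff": [[0.3, 0.4]]}  # e.g. no 'deconvolved' trace

# A frame-sliced copy should carry over only the traces the parent has.
sliced_traces = {name: trace for name, trace in parent_traces.items()}

assert set(sliced_traces) <= set(parent_traces)  # no extra keys
assert "deconvolved" not in sliced_traces
```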
Done
Codecov Report
@@            Coverage Diff             @@
##           master     #201      +/-   ##
==========================================
+ Coverage   72.73%   73.28%   +0.55%
==========================================
  Files          33       33
  Lines        2226     2276      +50
==========================================
+ Hits         1619     1668      +49
- Misses        607      608       +1
Looks good to me; one minor suggestion if you want to add it.
class TestMissingTraceFrameSlicesegmentation(BaseTestFrameSlicesegmentation):
I had not thought of this pattern; it is neat.
It multiplies the total number of tests run, but I'm going for maximum safety here.
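A small sketch of the subclassing pattern being discussed: every test method on the base class is re-collected and re-run for each subclass that swaps in a different fixture, which is why the total test count multiplies. Class and fixture names here are illustrative, not the PR's actual classes:

```python
import unittest

# Base class defines the tests; subclasses only change the fixture.
class BaseSliceTest(unittest.TestCase):
    traces = {"raw": [1, 2, 3]}  # illustrative fixture

    def test_has_raw(self):
        self.assertIn("raw", self.traces)

class MissingTraceSliceTest(BaseSliceTest):
    # Same test methods, exercised against a variant fixture.
    traces = {"raw": [1, 2, 3], "dff": [4, 5, 6]}

loader = unittest.TestLoader()
suite = unittest.TestSuite([
    loader.loadTestsFromTestCase(BaseSliceTest),
    loader.loadTestsFromTestCase(MissingTraceSliceTest),
])
result = unittest.TextTestRunner(verbosity=0).run(suite)
# The one base test runs once per class, so result.testsRun == 2.
```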
class BaseTestFrameSlicesegmentation(TestCase):
    @classmethod
    def setUpClass(cls):
        cls.toy_segmentation_example = generate_dummy_segmentation_extractor(num_frames=15, num_rows=5, num_columns=4)
I see that you have some tests for the rejected and accepted lists. Up to you, but generate_dummy_segmentation_extractor has a rejected_list argument to generate that:
roiextractors/src/roiextractors/testing.py
Lines 54 to 65 in b71f17d
def generate_dummy_segmentation_extractor(
    num_rois: int = 10,
    num_frames: int = 30,
    num_rows: int = 25,
    num_columns: int = 25,
    sampling_frequency: float = 30.0,
    has_summary_images: bool = True,
    has_raw_signal: bool = True,
    has_dff_signal: bool = True,
    has_deconvolved_signal: bool = True,
    has_neuropil_signal: bool = True,
    rejected_list: Optional[list] = None,
So you can test the non-default case when rejected_list != [].
Yeah, I could. But I have actually yet to see any format in practice that bothers to save ROI info that has not been accepted (so far, every single ROI written as series data or image mask is 'accepted', so 'rejected' is always empty). Maybe something to think about and revisit in the future.
Relies on trace orientation fix/standardization in #200.
Introduces subframe selection capabilities to SegmentationExtractors as well as tests for this.
Necessary to introduce frame stubbing into the SegmentationDataInterface on NeuroConv.
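The core idea of subframe selection can be sketched as a thin view that forwards a restricted frame range into the parent's traces. This is only an illustration of the concept, with hypothetical names, not the FrameSlice implementation from this PR:

```python
# Minimal sketch of a frame-slice view over a parent extractor's traces.
# Names (FrameSliceView, parent_traces) are illustrative only.
class FrameSliceView:
    def __init__(self, parent_traces, start_frame, end_frame):
        self.parent_traces = parent_traces
        self.start_frame = start_frame
        self.end_frame = end_frame

    def get_num_frames(self):
        return self.end_frame - self.start_frame

    def get_traces(self):
        # Each trace is sliced to the selected frame range.
        return {name: trace[self.start_frame:self.end_frame]
                for name, trace in self.parent_traces.items()}

parent = {"raw": list(range(15))}   # 15 frames, like the toy example above
view = FrameSliceView(parent, start_frame=5, end_frame=10)
print(view.get_num_frames())        # 5
print(view.get_traces()["raw"])     # [5, 6, 7, 8, 9]
```

Stubbing then only needs to construct such a view with a small end_frame instead of copying data.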