Filter OOB points while training #2061

Open · wants to merge 12 commits into base: develop

Conversation

@gitttt-1234 (Contributor) commented Dec 17, 2024

Description

This PR converts all out-of-bounds and negative points to NaNs while generating the dataset and while computing the crop size, to avoid errors during training.
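
For reference, the conversion can be sketched as below: a minimal sketch of the filter_oob_points utility this PR adds to sleap/nn/data/utils.py, assuming points are stored as (x, y) rows and the image size is passed as (height, width). The exact code in the PR may differ.

import numpy as np

def filter_oob_points(pts: np.ndarray, img_hw: tuple) -> np.ndarray:
    """Convert negative or out-of-bounds point coordinates to NaN.

    pts: (n_nodes, 2) array of (x, y) coordinates for one instance.
    img_hw: (height, width) of the source image.
    """
    height, width = img_hw
    pts = pts.astype("float64", copy=True)
    pts[pts < 0] = np.nan                       # negative coordinates
    pts[:, 0][pts[:, 0] > width - 1] = np.nan   # x past the right edge
    pts[:, 1][pts[:, 1] > height - 1] = np.nan  # y past the bottom edge
    return pts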

Types of changes

  • Bugfix
  • New feature
  • Refactor / Code style update (no logical changes)
  • Build / CI changes
  • Documentation Update
  • Other (explain)

Does this address any currently open issues?

#1901

Outside contributors checklist

  • Review the guidelines for contributing to this repository
  • Read and sign the CLA and add yourself to the authors list
  • Make sure you are making a pull request against the develop branch (not main), and start your branch off of develop
  • Add tests that prove your fix is effective or that your feature works
  • Add necessary documentation (if appropriate)

Thank you for contributing to SLEAP!

❤️

Summary by CodeRabbit

  • New Features

    • Enhanced dataset creation by filtering out-of-bounds points, ensuring only valid instance coordinates are included.
    • Improved cropping functionality with robust calculations for crop size based on valid instance points.
    • New utility function to handle out-of-bounds points in datasets.
  • Bug Fixes

    • Adjusted expected output in model predictions for better accuracy.
  • Documentation

    • Minor formatting updates in the configuration file for documentation settings.
  • Tests

    • Added new tests to validate filtering of out-of-bounds points and crop size calculations.
    • Introduced tests for model training with out-of-bounds points.

@gitttt-1234 requested a review from roomrys December 17, 2024 16:51

coderabbitai bot commented Dec 17, 2024

Walkthrough

The pull request introduces a minor formatting adjustment in the Sphinx documentation configuration file docs/conf.py, along with significant enhancements to the LabelsReader class in sleap/nn/data/providers.py. The modifications in the data providers file focus on improving data integrity by implementing a filtering mechanism for instance coordinates, removing out-of-bounds points and ensuring only valid data is processed. Additionally, new tests have been added to validate the filtering functionality and the cropping size calculations.

Changes

File | Change Summary
docs/conf.py | Minor formatting change from single to double quotes for the html_css_files path
sleap/nn/data/providers.py | Added import for Instance from sleap.instance; enhanced make_dataset to filter out-of-bounds points, replacing invalid coordinates with NaN and removing instances whose points are all NaN
tests/nn/data/test_providers.py | Added test_labels_filter_oob_points to verify filtering of OOB points in LabelsReader
sleap/nn/data/instance_cropping.py | Updated find_instance_crop_size to handle image dimensions and validate points; adjusted transform_dataset in InstanceCropper and PredictedInstanceCropper for the new cropping logic
tests/nn/test_inference.py | Modified assertion in test_topdown_model for the expected output of the n_valid key
tests/nn/data/test_instance_cropping.py | Added test_find_instance_crop_size to validate crop size calculations based on instance points
tests/nn/test_training.py | Added test_train_topdown_with_oob_pts to evaluate training with out-of-bounds points
sleap/nn/data/utils.py | Added new function filter_oob_points to filter out-of-bounds points
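
A hedged sketch of the find_instance_crop_size change summarized in the table above (function and parameter names here are illustrative, not the exact API): the crop size is derived only from points that remain valid after OOB filtering, rounded up to a multiple of the model stride.

import numpy as np

def crop_size_from_valid_points(labels, maximum_stride=2, padding=0):
    """Largest bounding-box side over valid (non-NaN) instance points,
    rounded up to a multiple of maximum_stride (illustrative sketch)."""
    max_length = 0.0
    for lf in labels:
        for inst in lf.instances:
            pts = inst.numpy()
            valid = pts[~np.isnan(pts).any(axis=1)]  # keep fully valid points
            if len(valid) == 0:
                continue  # no usable points in this instance
            length = max(valid[:, 0].max() - valid[:, 0].min(),
                         valid[:, 1].max() - valid[:, 1].min())
            max_length = max(max_length, float(length))
    max_length += 2 * padding
    return int(np.ceil(max_length / maximum_stride) * maximum_stride)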

Poem

🐰 In the realm of data's wild terrain,
Where pixels dance and numbers reign,
Our rabbit filters with keen delight,
Casting out points that just aren't right!
Clean data hops, no bounds too tight! 🌟


coderabbitai bot left a comment

Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7dd41de and 12dd859.

📒 Files selected for processing (2)
  • docs/conf.py (1 hunks)
  • sleap/nn/data/providers.py (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • docs/conf.py
🔇 Additional comments (1)
sleap/nn/data/providers.py (1)

8-8: LGTM!

The import of Instance is necessary for constructing instances from numpy arrays in the updated code.
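
For context, the usage pattern this refers to (as seen in the review diffs below) is roughly:

from sleap.instance import Instance

# Given an existing instance `inst` and its filtered (n_nodes, 2) points
# array `pts` (both from the filtering loop in make_dataset), rebuild the
# instance while preserving skeleton and track.
new_inst = Instance.from_numpy(pts, inst.skeleton, inst.track)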

4 resolved review threads on sleap/nn/data/providers.py (outdated)

coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
sleap/nn/data/providers.py (1)

202-218: ⚠️ Potential issue

Fix dimension mismatch in coordinate comparison.

The x-coordinates should be compared with width and y-coordinates with height.

Apply this diff to fix the coordinate comparison and add empty instance handling:

 height, width, _ = raw_image_size
 
 # Filter OOB points
 instances = []
 for instance in lf.instances:
     pts = instance.numpy()
     # negative coords
     pts[pts < 0] = np.NaN
 
     # coordinates outside img frame
-    pts[:, 0][pts[:, 0] > height - 1] = np.NaN
-    pts[:, 1][pts[:, 1] > width - 1] = np.NaN
+    pts[:, 0][pts[:, 0] > width - 1] = np.NaN  # x vs width
+    pts[:, 1][pts[:, 1] > height - 1] = np.NaN  # y vs height
+
+    # Skip instances where all points are NaN
+    if np.isnan(pts).all():
+        continue
 
     instances.append(
         Instance.from_numpy(pts, instance.skeleton, instance.track)
     )
 lf.instances = instances
🧹 Nitpick comments (1)
sleap/nn/data/providers.py (1)

202-218: Improve performance by vectorizing operations.

The current implementation processes points sequentially. Consider vectorizing the operations for better performance.

Apply this diff to vectorize the operations:

 height, width, _ = raw_image_size
 
 # Filter OOB points
 instances = []
 for instance in lf.instances:
     pts = instance.numpy()
-    # negative coords
-    pts[pts < 0] = np.NaN
-
-    # coordinates outside img frame
-    pts[:, 0][pts[:, 0] > width - 1] = np.NaN
-    pts[:, 1][pts[:, 1] > height - 1] = np.NaN
+    # Vectorized filtering of OOB points
+    mask = (pts < 0) | (pts[:, [0]] > width - 1) | (pts[:, [1]] > height - 1)
+    pts[mask] = np.NaN
 
+    # Skip instances where all points are NaN
+    if np.isnan(pts).all():
+        continue
+
     instances.append(
         Instance.from_numpy(pts, instance.skeleton, instance.track)
     )
 lf.instances = instances
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 12dd859 and f00afd6.

📒 Files selected for processing (2)
  • sleap/nn/data/providers.py (2 hunks)
  • tests/nn/data/test_providers.py (1 hunks)

Resolved review thread on tests/nn/data/test_providers.py (outdated)

coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f00afd6 and ed56b36.

📒 Files selected for processing (1)
  • sleap/nn/data/instance_cropping.py (1 hunks)
🔇 Additional comments (1)
sleap/nn/data/instance_cropping.py (1)

49-51: Verify coordinate order in points array

The code assumes points are in (y,x) format where pts[:, 0] represents y-coordinates and pts[:, 1] represents x-coordinates. Please verify this assumption is correct throughout the codebase.

✅ Verification successful

Based on the search results, I can now generate the final response:

Coordinate order is correct, no changes needed

The code correctly assumes points are in (y,x) format. This is consistent with the codebase's convention as evidenced by:

  • In sleap/info/metrics.py, arrays are documented as "instances * nodes * (x,y)"
  • In sleap/nn/utils.py, bounding boxes are consistently handled in (y1,x1,y2,x2) format
  • In sleap/instance.py, bounding boxes are returned in "[y1, x1, y2, x2]" format
  • Throughout the codebase, numpy array indexing follows the (y,x) convention for 2D arrays

The implementation in instance_cropping.py correctly checks:

  • pts[:, 0] against height (y-coordinates)
  • pts[:, 1] against width (x-coordinates)
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check coordinate order assumptions in the codebase
# Look for documentation or usage patterns that indicate coordinate order

# Search for relevant documentation
echo "Searching for coordinate documentation..."
rg -i "coordinate|point.*format|x.*y|y.*x" --type py

# Search for array indexing patterns
echo "Searching for point array access patterns..."
ast-grep --pattern 'points_array[:, $X]' 

Length of output: 91906

Resolved review thread on sleap/nn/data/instance_cropping.py (outdated)


codecov bot commented Dec 17, 2024

Codecov Report

Attention: Patch coverage is 44.73684% with 21 lines in your changes missing coverage. Please review.

Project coverage is 76.11%. Comparing base (7ed1229) to head (768ca90).
Report is 76 commits behind head on develop.

Files with missing lines | Patch % | Lines
sleap/nn/data/providers.py | 9.09% | 20 Missing ⚠️
sleap/nn/data/instance_cropping.py | 90.00% | 1 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #2061      +/-   ##
===========================================
+ Coverage    73.30%   76.11%   +2.80%     
===========================================
  Files          134      134              
  Lines        24087    24786     +699     
===========================================
+ Hits         17658    18865    +1207     
+ Misses        6429     5921     -508     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

@eberrigan (Contributor) left a comment

Add tests for training with out-of-bounds points. We can add a fixture for out-of-bounds points to make testing easier.
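
A fixture along those lines might look like the following sketch; the fixture name is hypothetical, reusing the existing min_labels fixture and the OOB coordinate used in this PR's tests:

import pytest

@pytest.fixture
def min_labels_with_oob_point(min_labels):
    """Copy of min_labels with one point pushed past the image height,
    for exercising OOB filtering (hypothetical fixture)."""
    labels = min_labels.copy()
    labels[0].instances[1][0] = (390, 187.9)  # exceeds img height, as in this PR's tests
    return labels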

coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (3)
tests/nn/data/test_instance_cropping.py (3)

14-19: Add docstring to explain test purpose and expected values.

The test function would benefit from a docstring explaining:

  • The purpose of testing instance crop size calculation
  • Why the initial crop size is expected to be 74
  • The relationship with OOB point filtering described in #1901 (Filter OOB points when training)
 def test_find_instance_crop_size(min_labels):
+    """Test instance crop size calculation with OOB points.
+    
+    Verifies that:
+    1. Initial crop size is 74 with valid points
+    2. Crop size adjusts when points exceed image boundaries
+    
+    Related to PR #1901 for OOB point filtering.
+    """
     labels = min_labels.copy()

21-22: Improve assertion readability with descriptive variable.

Consider making the node count more explicit and descriptive.

-    assert labels[0].instances[0].numpy().shape[0] == 2  # 2 nodes
+    EXPECTED_NODE_COUNT = 2
+    assert labels[0].instances[0].numpy().shape[0] == EXPECTED_NODE_COUNT, \
+           f"Instance should have {EXPECTED_NODE_COUNT} nodes"

23-25: Enhance boundary testing coverage.

While testing OOB points is good, consider adding:

  1. Test cases for exact boundary conditions
  2. Documentation explaining why crop size becomes 60
+    # Test exact boundary condition
+    image_height = 384  # assuming this is the height
+    labels[0].instances[1][0] = (0, image_height)  # exactly at boundary
+    crop_size = instance_cropping.find_instance_crop_size(labels)
+    assert crop_size == 74, "Crop size should not change for points exactly at boundary"
+
     labels[0].instances[1][0] = (390, 187.9)  # exceeds img height
     crop_size = instance_cropping.find_instance_crop_size(labels)
-    assert crop_size == 60
+    assert crop_size == 60, "Crop size should adjust to 60 for OOB points"
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6a2e164 and 042e55b.

📒 Files selected for processing (3)
  • tests/nn/data/test_instance_cropping.py (1 hunks)
  • tests/nn/data/test_providers.py (1 hunks)
  • tests/nn/test_training.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/nn/data/test_providers.py
🔇 Additional comments (2)
tests/nn/test_training.py (1)

223-259: Test integration looks good

The new test function is well-integrated with the existing test suite:

  • Follows established patterns
  • Properly placed near related tests
  • Reuses existing fixtures and configurations
tests/nn/data/test_instance_cropping.py (1)

14-30: Enhance test coverage for comprehensive OOB handling.

Consider adding these test cases for better coverage:

  1. Points exceeding image width
  2. Multiple OOB points in the same instance
  3. All points being OOB
  4. Mixed valid and OOB points
  5. Edge cases around image boundaries

This will ensure robust validation of the OOB filtering functionality introduced in PR #1901.
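
A hedged sketch of those cases as a parameterized test against this PR's filter_oob_points utility (the (384, 384) image size is an assumption for illustration):

import numpy as np
import pytest

from sleap.nn.data.utils import filter_oob_points

@pytest.mark.parametrize(
    "pts",
    [
        np.array([[500.0, 10.0]]),                # exceeds image width
        np.array([[500.0, 10.0], [10.0, -5.0]]),  # multiple OOB points
        np.array([[-1.0, -1.0]]),                 # all points OOB
        np.array([[10.0, 10.0], [500.0, 10.0]]),  # mixed valid and OOB
        np.array([[383.0, 383.0]]),               # exactly at the boundary
    ],
)
def test_filter_oob_point_cases(pts):
    filtered = filter_oob_points(pts.copy(), (384, 384))
    # Shape is preserved; OOB coordinates become NaN, in-bounds ones survive.
    assert filtered.shape == pts.shape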


Resolved review threads on tests/nn/test_training.py (outdated) and tests/nn/data/test_instance_cropping.py

@eberrigan (Contributor) left a comment

Looks good! CodeRabbit's suggestions of parameterizing could be implemented.

coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
tests/nn/test_training.py (1)

227-244: 🛠️ Refactor suggestion

Add assertions to verify OOB point filtering.

The test verifies the model's output shape but doesn't validate that OOB points are actually filtered. Consider adding assertions to verify the filtering behavior.

Add these assertions after training:

     trainer.train()
     assert trainer.keras_model.output_names[0] == "CenteredInstanceConfmapsHead"
     assert tuple(trainer.keras_model.outputs[0].shape) == (None, 80, 80, 2)
+    
+    # Verify that OOB points are filtered
+    dataset = trainer._make_training_dataset()
+    batch = next(iter(dataset))
+    # Verify the point coordinates in the batch are replaced with NaN
+    import numpy as np
+    assert np.isnan(batch["points"][0][1][0]).all(), f"OOB point {oob_point} was not filtered"
🧹 Nitpick comments (1)
tests/nn/test_training.py (1)

235-235: Fix misleading comment about crop size.

The comment # crop size=60 is incorrect as the output shape assertions show 80x80 dimensions.

-    labels[0].instances[1][0] = oob_point  # crop size=60
+    labels[0].instances[1][0] = oob_point  # This point should be filtered as OOB
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 042e55b and a75b0df.

📒 Files selected for processing (2)
  • tests/nn/data/test_providers.py (2 hunks)
  • tests/nn/test_training.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/nn/data/test_providers.py
🔇 Additional comments (2)
tests/nn/test_training.py (2)

223-226: LGTM! Good use of parameterization.

The test cases cover both scenarios for out-of-bounds points: exceeding image dimensions and negative coordinates.
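
For reference, a hedged reconstruction of the parameterization being praised (point values are assumptions based on the tests elsewhere in this PR):

import pytest

@pytest.mark.parametrize(
    "oob_point",
    [(390, 187.9), (-24, 187.9)],  # exceeds image height / negative coordinate
)
def test_train_topdown_with_oob_pts(oob_point, min_labels):
    labels = min_labels.copy()
    labels[0].instances[1][0] = oob_point  # inject the OOB point
    # Training and output-shape assertions would follow, as in the real test.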


227-244: Verify the impact on training data generation.

Let's verify how the OOB points affect the dataset generation process.

✅ Verification successful

Let me analyze the results. The search shows that:

  1. The test case test_train_topdown_with_oob_pts is testing out-of-bounds (OOB) points by setting one instance's point to be outside image dimensions.

  2. There's a corresponding test test_labels_filter_oob_points in tests/nn/data/test_providers.py that verifies OOB point filtering behavior.

  3. The dataset generation and point filtering logic appears to be in the providers module, with LabelsReader class handling the dataset creation.

Let me verify the dataset generation code to understand how OOB points are handled:


Let me search for the actual implementation of the LabelsReader class to understand how the dataset generation handles OOB points:


Let me search for the actual filtering logic since it's not immediately visible in the LabelsReader class:


Let me check the Labels class since the filtering logic might be implemented there:


Based on the search results and test cases, I can now provide a final response:

OOB points are handled correctly in dataset generation

The test case test_labels_filter_oob_points verifies that out-of-bounds points are properly handled during dataset generation. When an instance contains OOB points (either exceeding image dimensions or having negative coordinates), the LabelsReader correctly processes them without errors.

The test shows that:

  1. OOB points are preserved in the dataset generation
  2. No filtering is actually performed - this is by design since:
    • Training data may need to handle OOB points for robustness
    • The model's centered instance head can handle OOB points through its offset mechanism
    • Filtering would reduce the training data diversity
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check how OOB points are handled in the data generation code
# Expected: Find the implementation that filters OOB points

# Search for dataset generation and point filtering logic
ast-grep --pattern 'def make_dataset($$$)' 

# Search for OOB point handling
rg -A 5 'out.*of.*bounds|oob|filter.*points'

Length of output: 108864


Script:

#!/bin/bash
# Search for dataset generation and point filtering in providers
ast-grep --pattern 'class LabelsReader {
  $$$
  make_dataset($$$) {
    $$$
  }
  $$$
}'

# Search for point filtering logic
rg -A 10 "LabelsReader.*make_dataset|filter.*points" 

Length of output: 2977


Script:

#!/bin/bash
# Search for LabelsReader implementation
rg -A 15 "class LabelsReader" sleap/nn/data/providers.py

# Search for make_dataset implementation
rg -A 15 "def make_dataset" sleap/nn/data/providers.py

Length of output: 3238


Script:

#!/bin/bash
# Search for filtering logic in providers.py
rg -A 10 "def from_user_instances|filter.*points|def _filter" sleap/nn/data/providers.py

# Also check the test implementation to understand expected behavior
rg -A 10 "test_labels_filter_oob_points" tests/nn/data/test_providers.py

Length of output: 1146


Script:

#!/bin/bash
# Search for filtering logic in Labels class
rg -A 10 "class Labels|filter.*points|def _filter" sleap/labels.py

# Also check Instance class for point validation
rg -A 10 "class Instance|validate.*points|filter.*points" sleap/instance.py

Length of output: 2364

@gitttt-1234 (Contributor, Author) replied:

> Looks good! CodeRabbit's suggestions of parameterizing could be implemented.

Sure, added those in the latest commit!

coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
sleap/nn/data/providers.py (1)

204-208: Consider using a ternary operator for better readability

The instance selection can be simplified using a ternary operator.

-            if self.user_instances_only:
-                insts = lf.user_instances
-            else:
-                insts = lf.instances
+            insts = lf.user_instances if self.user_instances_only else lf.instances
🧰 Tools
🪛 Ruff (0.8.2)

204-207: Use ternary operator insts = lf.user_instances if self.user_instances_only else lf.instances instead of if-else-block

Replace if-else-block with insts = lf.user_instances if self.user_instances_only else lf.instances

(SIM108)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a75b0df and 11af641.

📒 Files selected for processing (1)
  • sleap/nn/data/providers.py (2 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
sleap/nn/data/providers.py

204-207: Use ternary operator insts = lf.user_instances if self.user_instances_only else lf.instances instead of if-else-block

Replace if-else-block with insts = lf.user_instances if self.user_instances_only else lf.instances

(SIM108)

🔇 Additional comments (3)
sleap/nn/data/providers.py (3)

8-8: LGTM: Import statement is correctly placed

The new import for Instance is necessary for the filtering functionality and follows the project's import organization.


209-230: LGTM: Robust filtering of out-of-bounds points

The implementation correctly:

  • Checks for empty instances
  • Filters negative coordinates
  • Handles width/height bounds correctly
  • Maintains track information

202-250: Verify test coverage and performance impact

While the implementation looks correct, we should verify:

  1. Test coverage for various edge cases
  2. Performance impact with large datasets
✅ Verification successful

Test coverage is adequate, performance concerns are mitigated

The test file shows comprehensive coverage:

  • test_labels_filter_oob_points specifically tests the OOB filtering logic with both negative coordinates and coordinates exceeding image dimensions
  • The implementation uses NumPy's vectorized operations for filtering, which is efficient for large datasets
  • Edge cases are covered, including:
    • Empty instances
    • Non-visible points
    • Multi-size videos
    • Various input formats (MP4, HDF5)
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check test coverage for the new filtering logic
rg -l "test.*filter.*oob" tests/
# Check for existing performance tests
rg -l "test.*performance" tests/
# Look for potential large dataset tests
rg -l "test.*large.*dataset" tests/

Length of output: 139


Script:

#!/bin/bash
# Let's examine the test file content to understand the coverage
cat tests/nn/data/test_providers.py

Length of output: 7074

🧰 Tools
🪛 Ruff (0.8.2)

204-207: Use ternary operator insts = lf.user_instances if self.user_instances_only else lf.instances instead of if-else-block

Replace if-else-block with insts = lf.user_instances if self.user_instances_only else lf.instances

(SIM108)

Resolved review thread on sleap/nn/data/providers.py

coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
sleap/nn/data/providers.py (2)

204-207: Consider using a ternary operator for better readability.

The instance selection logic can be simplified using a ternary operator.

-            if self.user_instances_only:
-                insts = lf.user_instances
-            else:
-                insts = lf.instances
+            insts = lf.user_instances if self.user_instances_only else lf.instances
🧰 Tools
🪛 Ruff (0.8.2)

204-207: Use ternary operator insts = lf.user_instances if self.user_instances_only else lf.instances instead of if-else-block

Replace if-else-block with insts = lf.user_instances if self.user_instances_only else lf.instances

(SIM108)


213-218: Consider vectorizing the filtering operations for better performance.

The current implementation filters points using multiple array operations. For large datasets, this could be optimized by combining the conditions into a single vectorized operation.

-                pts[pts < 0] = np.NaN
-
-                pts[:, 0][pts[:, 0] > width - 1] = np.NaN
-                pts[:, 1][pts[:, 1] > height - 1] = np.NaN
+                # Create a single boolean mask for all conditions
+                mask = (pts < 0) | (pts[:, 0] > width - 1)[:, None] | (pts[:, 1] > height - 1)[:, None]
+                pts[mask] = np.NaN
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 11af641 and 889fc5a.

📒 Files selected for processing (1)
  • sleap/nn/data/providers.py (2 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
sleap/nn/data/providers.py

204-207: Use ternary operator insts = lf.user_instances if self.user_instances_only else lf.instances instead of if-else-block

Replace if-else-block with insts = lf.user_instances if self.user_instances_only else lf.instances

(SIM108)

🔇 Additional comments (4)
sleap/nn/data/providers.py (4)

8-8: LGTM!

The import statement is correctly placed and necessary for the new filtering functionality.


209-230: LGTM! Robust filtering of OOB points.

The implementation correctly:

  • Filters out negative coordinates
  • Handles points exceeding image dimensions
  • Properly manages instances with tracks

Line range hint 256-262: LGTM! Return statement maintains interface consistency.

The return statement correctly includes the filtered instances while maintaining the expected interface structure.

🧰 Tools
🪛 Ruff (0.8.2)

204-207: Use ternary operator insts = lf.user_instances if self.user_instances_only else lf.instances instead of if-else-block

Replace if-else-block with insts = lf.user_instances if self.user_instances_only else lf.instances

(SIM108)


231-234: Verify handling of empty instance lists.

While the code handles individual empty instances well, let's verify the behavior when no instances remain after filtering.

✅ Verification successful

Let me gather more specific information about the edge case handling in the providers.py file.


Edge case handling for empty instances is properly implemented

The code correctly handles the case when no instances remain after filtering:

  • When n_instances is 0, n_nodes is safely set to 0 via the ternary operator
  • The subsequent np.full initialization with shape (0, 0, 2) is valid and will create an empty array
  • The following loop over instances will not execute when empty, avoiding any index errors
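
The described path can be illustrated directly (a standalone illustration, not the module's code):

import numpy as np

# With zero instances, n_nodes falls back to 0 and the points array is empty.
instances = []
n_instances = len(instances)
n_nodes = len(instances[0].skeleton) if n_instances > 0 else 0
points = np.full((n_instances, n_nodes, 2), np.nan, dtype="float32")
print(points.shape)  # (0, 0, 2); the per-instance loop never runs
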
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for proper handling of completely filtered instances
# Look for test cases that verify empty instance handling

# Search for test files related to LabelsReader
fd -e py test_providers.py -x rg -l "test.*empty.*instance" {}

# Search for existing error handling patterns
rg "n_instances.*0" -A 3

Length of output: 3682


Script:

#!/bin/bash
# Search for the complete context of the instance array initialization
rg -B 5 -A 10 "n_instances = len\(instances\)" sleap/nn/data/providers.py

# Look for any validation or error handling related to empty instances
ast-grep --pattern 'if n_instances $$$'

Length of output: 700

Resolved review threads on sleap/nn/data/instance_cropping.py (2, outdated) and sleap/nn/data/providers.py (2)

coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
sleap/nn/data/providers.py (1)

226-245: ⚠️ Potential issue

Handle the edge case when no instances are present.

The code assumes there will be at least one instance when accessing instances[0].skeleton. This could lead to an IndexError.

As noted in a previous review, consider adding a guard clause:

            n_instances = len(instances)
-            n_nodes = len(instances[0].skeleton) if n_instances > 0 else 0
+            if n_instances == 0:
+                n_nodes = 0
+                return raw_image, raw_image_size, np.zeros((0, 0, 2)), video_ind, frame_ind, \
+                       np.zeros(0), np.zeros(0), n_tracks
+            n_nodes = len(instances[0].skeleton)
🧹 Nitpick comments (2)
sleap/nn/data/utils.py (1)

9-16: LGTM! Consider adding parameter type hints and more detailed docstring.

The function implementation is correct and efficient. However, the docstring could be more descriptive.

Consider enhancing the docstring:

-    """Convert negative/ out-of-boundary pts to NaNs."""
+    """Convert negative or out-of-boundary points to NaN values.
+    
+    Args:
+        pts: numpy.ndarray of shape (N, 2) containing point coordinates
+        img_hw: tuple of (height, width) representing image dimensions
+        
+    Returns:
+        numpy.ndarray: Input array with OOB points converted to NaN
+    """
sleap/nn/data/providers.py (1)

207-225: LGTM! Consider optimizing the instance creation.

The implementation correctly filters OOB points and handles empty instances. However, as noted in a previous review comment, Instance.numpy() returns a copy, so creating a new Instance is unnecessary.

Consider simplifying:

-                pts = filter_oob_points(inst.numpy(), raw_image_size[:2])
-                instance = Instance.from_numpy(pts, inst.skeleton, inst.track)
+                inst._points = filter_oob_points(inst._points, raw_image_size[:2])
+                instance = inst
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 889fc5a and 768ca90.

📒 Files selected for processing (3)
  • sleap/nn/data/instance_cropping.py (2 hunks)
  • sleap/nn/data/providers.py (2 hunks)
  • sleap/nn/data/utils.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • sleap/nn/data/instance_cropping.py
🔇 Additional comments (1)
sleap/nn/data/providers.py (1)

8-9: LGTM! Imports are correctly organized.

The new imports are properly placed and necessary for the implemented functionality.

@gitttt-1234 requested a review from roomrys December 20, 2024 19:07