Test all label rendering options #207

Draft · wants to merge 3 commits into main

Conversation

@aeisenbarth (Contributor) commented Jan 10, 2024

In this pull request, I added systematic test cases for most options of render_labels.

Test matrix

I ran into a couple of issues with options for which I could not find any tests. I don't think these are particularly rare edge cases: they can easily be triggered with valid options applied to a different dataset, or with a combination of valid options that was assumed to just work. For example, existing tests cover a SpatialData object with one labels image that is rendered, but not a SpatialData object with multiple labels images of which only one is rendered (which seems like a very common case).

Ideally, we would have a test matrix:

(supported cases of input datasets) × (supported combinations of options)

For each input dataset or option, we should not only test the expected "average" case or a value in the middle of a range, but also the other equivalence classes that the documentation allows. For example, if an option can take the value 0, it is worth knowing whether the code still behaves correctly with it.

In the attached tests, I have cases for the following (a parametrization sketch follows the list):

  • labels images:
    • empty labels (all zeros, only background)
    • one label
    • multiple consecutive labels [1, 2]
    • labels with skipped values [1, 3]
    • labels without background (no zeros)
  • a table with various column types and numbers of category values, for coloring by value
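For illustration, the label-image cases above could be built as small in-memory arrays and fed through a parametrized test. This is only a sketch of the idea, not the code in the attached tests; the names, shapes, and the trivial assertion are made up, and wrapping the arrays into a SpatialData object is omitted:

```python
import numpy as np
import pytest


def _two_block_labels(left: int, right: int) -> np.ndarray:
    """Two adjacent label blocks with the given values, padded with background (0)."""
    blocks = np.repeat(np.array([[left, right]], dtype=np.uint8), 6, axis=0).repeat(3, axis=1)
    return np.pad(blocks, 2)


# One array per equivalence class from the list above.
LABEL_IMAGES = {
    "empty": np.zeros((10, 10), dtype=np.uint8),               # only background
    "one_label": np.pad(np.full((6, 6), 1, dtype=np.uint8), 2),
    "consecutive_labels": _two_block_labels(1, 2),              # labels [1, 2]
    "skipped_labels": _two_block_labels(1, 3),                  # labels [1, 3]
    "no_background": np.full((10, 10), 1, dtype=np.uint8),      # no zeros at all
}


@pytest.mark.parametrize("labels_key", sorted(LABEL_IMAGES))
def test_render_labels_smoke(labels_key: str) -> None:
    labels = LABEL_IMAGES[labels_key]
    # Wrap `labels` in a SpatialData object and call render_labels on it;
    # the construction depends on the spatialdata API and is left out here.
    assert labels.shape == (10, 10)
```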

In the case of SpatialData, the matrix should have more dimensions:
(element type) × (single-scale/multiscale) × (in-memory/backed) …
These are already tested individually, but in combination they have bugs.
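Stacking pytest.mark.parametrize decorators would give the cross product of such dimensions. A hypothetical sketch (the parameter names are illustrative and the per-combination construction is not shown):

```python
import pytest


@pytest.mark.parametrize("multiscale", [False, True], ids=["single_scale", "multiscale"])
@pytest.mark.parametrize("backed", [False, True], ids=["in_memory", "backed"])
@pytest.mark.parametrize("element_type", ["labels", "images", "shapes", "points"])
def test_render_element_combinations(element_type: str, backed: bool, multiscale: bool) -> None:
    # pytest runs every combination of the three dimensions (4 × 2 × 2 = 16 cases).
    # Building the SpatialData object for each combination is omitted; it could
    # reuse the per-element fixtures that already exist for the individual tests.
    ...
```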

Draft

This is still a draft. The following still needs to be done:

  • Assertions: currently my tests only check that no exception is raised. Possibly use PlotTester (a sketch of an alternative image-comparison assertion follows this list).
  • Visually inspect every plot and compare it to what a user would expect (most plots are currently empty).
  • Remove unsupported test cases: I added test cases for options that I, as a user, would consider sensible. They may need to be adjusted so that rendering works as designed.
    • Not officially supported? → Mention in documentation, maybe add input validation
    • Options not provided by the user? → Add missing options to the test case, provide better default values in spatialdata-plot, or make these options required.
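If PlotTester turns out not to fit, one possible shape for the assertion is matplotlib's built-in image comparison. This is only a sketch; the reference-image directory, file naming, and tolerance are assumptions:

```python
from pathlib import Path

import matplotlib.pyplot as plt
from matplotlib.testing.compare import compare_images

EXPECTED_DIR = Path("tests/_images")  # hypothetical location of reference images


def assert_plot_matches_reference(fig: plt.Figure, name: str, tol: float = 10.0) -> None:
    """Save the figure and compare it pixel-wise against a stored reference image."""
    actual = Path(f"{name}.png")
    fig.savefig(actual, dpi=80)
    # compare_images returns None when the images agree within the RMS tolerance,
    # otherwise a message describing the difference.
    result = compare_images(str(EXPECTED_DIR / f"{name}.png"), str(actual), tol=tol)
    assert result is None, result
```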
