Evaluate segmentation in infer neurons task #8221

Open · wants to merge 18 commits into master

Conversation


@cdfhalle cdfhalle commented Nov 20, 2024

  • Adds the functionality to evaluate a neuron segmentation using the AI modal.
  • In the neuron segmentation tab of the modal, the user (a scalable minds employee) can now additionally open the evaluation settings form and enter parameters for the evaluate-split-merger task in Voxelytics. Whenever the evaluation settings are used, the evaluation runs after the inference, and the error annotation is uploaded to the resulting dataset.

Steps to test:

TODOs:

  • ...

Issues:


(Please delete unneeded items, merge only when none are left open)


coderabbitai bot commented Nov 20, 2024

📝 Walkthrough

The pull request introduces enhancements to the job submission process within the application. Key modifications include updates to the JobController methods to accept additional parameters for neuron inference and TIFF export jobs. The routing configuration for the inferNeurons endpoint has been adjusted to accommodate these new parameters. Furthermore, the frontend JavaScript files have been updated to reflect these changes, allowing for more detailed job configurations. A new component for evaluation settings has been added to improve user interaction when starting AI jobs.

Changes

  • app/controllers/JobController.scala: Updated the runInferNeuronsJob and runExportTiffJob methods to include new parameters for enhanced job configuration.
  • conf/webknossos.latest.routes: Modified the POST /jobs/run/inferNeurons route to accept additional parameters for neuron inference jobs.
  • frontend/javascripts/admin/api/jobs.ts: Updated the startNeuronInferralJob function to include new parameters for evaluation settings; improved the job sorting logic in the getJobs function.
  • frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx: Added the CollapsibleSplitMergerEvaluationSettings component for configuring evaluation settings; updated StartJobForm and NeuronSegmentationForm to incorporate the new evaluation parameters and improve error handling during job initiation.
  • CHANGELOG.unreleased.md: Updated to reflect the new feature for split-merger evaluation during neuron inference, improved error handling, and several bug fixes.
  • app/models/job/Job.scala: Added a new private method argAsBooleanOpt to handle Boolean arguments in job commands; modified the resultLink method to incorporate the evaluation logic.
  • conf/application.conf: Updated the configuration to enable job processing and Voxelytics features.
  • package.json: Updated the antd dependency version from 5.17.4 to 5.18.
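
For illustration, a minimal usage sketch of the updated frontend API with the split-merger evaluation enabled (the parameter order follows the startNeuronInferralJob call discussed later in this PR; the surrounding variables and form field names are illustrative assumptions, not the exact source):

  const bbox = computeArrayFromBoundingBox(selectedBoundingBox.boundingBox);
  const evalSettings = form.getFieldValue("splitMergerEvaluationSettings");
  // With the evaluation enabled, the ground-truth annotation id and the
  // evaluation parameters are passed along with the usual inference arguments.
  const job = await startNeuronInferralJob(
    dataset.id,
    colorLayer.name,
    bbox,
    newDatasetName,
    true, // doSplitMergerEvaluation
    annotationId,
    evalSettings.useSparseTracing,
    evalSettings.maxEdgeLength,
    evalSettings.sparseTubeThresholdInNm,
    evalSettings.minimumMergerPathLengthInNm,
  );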

Possibly related PRs

Suggested labels

enhancement

Suggested reviewers

  • dieknolle3333

Poem

In the meadow where jobs take flight,
New settings bloom, oh what a sight!
With parameters fresh, our jobs now gleam,
Evaluation's sweet, like a rabbit's dream.
Hop along, let the changes flow,
In the world of jobs, we’ll steal the show! 🐇✨




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Outside diff range and nitpick comments (2)
frontend/javascripts/admin/api/jobs.ts (1)

196-201: Consider improving parameter types and adding documentation.

While the new parameters align well with the evaluation functionality:

  1. Consider using numeric types instead of strings for measurement parameters:
    • evalMaxEdgeLength
    • evalSparseTubeThresholdNm
    • evalMinMergerPathLengthNm
  2. Consider adding JSDoc comments to document the purpose and expected values of each parameter.
+/**
+ * Starts a neuron inference job with optional evaluation.
+ * @param organizationId - Organization identifier
+ * @param datasetName - Name of the dataset
+ * @param layerName - Name of the layer
+ * @param bbox - Bounding box coordinates
+ * @param newDatasetName - Name for the new dataset
+ * @param doEvaluation - Whether to perform evaluation
+ * @param annotationId - ID of the annotation for evaluation
+ * @param useSparseTracing - Whether to use sparse tracing
+ * @param evalMaxEdgeLength - Maximum edge length in nanometers
+ * @param evalSparseTubeThresholdNm - Sparse tube threshold in nanometers
+ * @param evalMinMergerPathLengthNm - Minimum merger path length in nanometers
+ */
 export function startNeuronInferralJob(
   // ... existing parameters ...
   doEvaluation: boolean,
   annotationId?: string,
   useSparseTracing?: boolean,
-  evalMaxEdgeLength?: string,
-  evalSparseTubeThresholdNm?: string,
-  evalMinMergerPathLengthNm?: string,
+  evalMaxEdgeLength?: number,
+  evalSparseTubeThresholdNm?: number,
+  evalMinMergerPathLengthNm?: number,
conf/webknossos.latest.routes (1)

267-267: Consider refactoring to use a request body.

The endpoint has many parameters which makes it harder to maintain and use. Consider refactoring to accept a JSON request body instead of query parameters. This would improve readability and make it easier to add new parameters in the future.

-POST          /jobs/run/inferNeurons/:organizationId/:datasetName                               controllers.JobController.runInferNeuronsJob(organizationId: String, datasetName: String, layerName: String, bbox: String, newDatasetName: String, doEvaluation: Boolean, annotationId: Option[String],evalUseSparseTracing: Option[Boolean],evalMaxEdgeLength: Option[String],evalSparseTubeThresholdNm: Option[String],evalMinMergerPathLengthNm: Option[String])
+POST          /jobs/run/inferNeurons/:organizationId/:datasetName                               controllers.JobController.runInferNeuronsJob(organizationId: String, datasetName: String)

And create a case class for the request body:

case class InferNeuronsRequest(
  layerName: String,
  bbox: String,
  newDatasetName: String,
  doEvaluation: Boolean,
  annotationId: Option[String],
  evalUseSparseTracing: Option[Boolean],
  evalMaxEdgeLength: Option[Double],
  evalSparseTubeThresholdNm: Option[Double],
  evalMinMergerPathLengthNm: Option[Double]
)
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 3c7700c and 0fa0cfd.

📒 Files selected for processing (4)
  • app/controllers/JobController.scala (2 hunks)
  • conf/webknossos.latest.routes (1 hunks)
  • frontend/javascripts/admin/api/jobs.ts (1 hunks)
  • frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx (7 hunks)
🔇 Additional comments (5)
app/controllers/JobController.scala (2)

229-235: 🛠️ Refactor suggestion

Consider refactoring evaluation parameters into a case class

The method signature is becoming quite long with 11 parameters. Consider creating a dedicated case class for evaluation parameters to improve maintainability and readability.

case class NeuronEvaluationParams(
  doEvaluation: Boolean,
  annotationId: Option[String],
  useSparseTracing: Option[Boolean],
  maxEdgeLength: Option[Double],  // Changed from String to Double
  sparseTubeThresholdNm: Option[Double],
  minMergerPathLengthNm: Option[Double]
)

Also, consider adding parameter validation for the numerical values to ensure they are within acceptable ranges.


257-262: 🛠️ Refactor suggestion

Add parameter validation and improve JSON structure

The command arguments should include validation and proper structuring of evaluation parameters:

  1. Consider validating and transforming string parameters to numerical values before adding to JSON
  2. Group evaluation parameters in a nested object for better organization
- "do_evaluation" -> doEvaluation,
- "annotation_id" -> annotationId,
- "eval_use_sparse_tracing" -> evalUseSparseTracing,
- "eval_max_edge_length" -> evalMaxEdgeLength,
- "eval_sparse_tube_threshold_nm" -> evalSparseTubeThresholdNm,
- "eval_min_merger_path_length_nm" -> evalMinMergerPathLengthNm,
+ "evaluation" -> {
+   if (doEvaluation) {
+     Json.obj(
+       "enabled" -> true,
+       "annotation_id" -> annotationId,
+       "parameters" -> Json.obj(
+         "use_sparse_tracing" -> evalUseSparseTracing,
+         "max_edge_length" -> evalMaxEdgeLength.map(_.toDouble),
+         "sparse_tube_threshold_nm" -> evalSparseTubeThresholdNm.map(_.toDouble),
+         "min_merger_path_length_nm" -> evalMinMergerPathLengthNm.map(_.toDouble)
+       )
+     )
+   } else {
+     Json.obj("enabled" -> false)
+   }
+ }

This change:

  • Groups evaluation parameters logically
  • Adds basic validation for numerical values
  • Improves readability of the command structure

Please ensure that these changes align with the voxelytics service's expected payload structure.

conf/webknossos.latest.routes (1)

267-267: LGTM! Consider reviewing parameter types.

The endpoint changes correctly implement the evaluation settings functionality. However, verify if evalMaxEdgeLength, evalSparseTubeThresholdNm, and evalMinMergerPathLengthNm should be numeric types instead of String.

frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx (2)

104-104: Addition of jobSpecificInputFields prop looks good

The addition of the optional jobSpecificInputFields prop to StartJobFormProps enhances the flexibility of the StartJobForm component by allowing custom input fields to be injected. This implementation is appropriate and follows good practices.
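
For orientation, the prop addition amounts to roughly the following in the props type (a sketch only, not the exact source of StartJobFormProps):

  type StartJobFormProps = {
    // ...existing props...
    // Optional extra form fields that a specific job form (e.g. the neuron
    // segmentation form) can inject into the shared StartJobForm.
    jobSpecificInputFields?: React.ReactNode;
  };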


797-799: Passing CollapsibleEvaluationSettings as jobSpecificInputFields

Integrating the CollapsibleEvaluationSettings component through the jobSpecificInputFields prop enhances the modularity of the NeuronSegmentationForm. This approach maintains the flexibility of the StartJobForm component and adheres to good React practices.

Comment on lines 207 to 215
doEvaluation: doEvaluation.toString(),
});
if (doEvaluation) {
urlParams.append("annotationId", `${annotationId}`);
urlParams.append("evalUseSparseTracing", `${useSparseTracing}`);
urlParams.append("evalMaxEdgeLength", `${evalMaxEdgeLength}`);
urlParams.append("evalSparseTubeThresholdNm", `${evalSparseTubeThresholdNm}`);
urlParams.append("evalMinMergerPathLengthNm", `${evalMinMergerPathLengthNm}`);
}

@coderabbitai coderabbitai bot Nov 20, 2024


🛠️ Refactor suggestion

Add parameter validation for evaluation settings.

The code should validate optional parameters before appending them to prevent potential undefined values from being sent to the API.

   if (doEvaluation) {
+    if (!annotationId) {
+      throw new Error("annotationId is required when doEvaluation is true");
+    }
     urlParams.append("annotationId", `${annotationId}`);
-    urlParams.append("evalUseSparseTracing", `${useSparseTracing}`);
-    urlParams.append("evalMaxEdgeLength", `${evalMaxEdgeLength}`);
-    urlParams.append("evalSparseTubeThresholdNm", `${evalSparseTubeThresholdNm}`);
-    urlParams.append("evalMinMergerPathLengthNm", `${evalMinMergerPathLengthNm}`);
+    if (useSparseTracing != null) {
+      urlParams.append("evalUseSparseTracing", `${useSparseTracing}`);
+    }
+    if (evalMaxEdgeLength != null) {
+      urlParams.append("evalMaxEdgeLength", `${evalMaxEdgeLength}`);
+    }
+    if (evalSparseTubeThresholdNm != null) {
+      urlParams.append("evalSparseTubeThresholdNm", `${evalSparseTubeThresholdNm}`);
+    }
+    if (evalMinMergerPathLengthNm != null) {
+      urlParams.append("evalMinMergerPathLengthNm", `${evalMinMergerPathLengthNm}`);
+    }
   }


I think it's ok to do this suggestion, but this isn't a must imo.

The backend should at least realize some params are missing (see my comment above).


@MichaelBuessemeyer, understood. If the backend handles missing parameters, additional validation on the frontend is indeed optional.


✏️ Learnings added
Learnt from: MichaelBuessemeyer
PR: scalableminds/webknossos#8221
File: frontend/javascripts/admin/api/jobs.ts:207-215
Timestamp: 2024-11-25T14:52:20.691Z
Learning: In the webknossos codebase, it's acceptable to rely on the backend to handle missing parameters, so additional parameter validation in the frontend is optional.


>
<InputNumber style={{ width: "100%" }} placeholder="800" />
</Form.Item>
<Form.Item name="useAnnotation" initialValue={true} />

@coderabbitai coderabbitai bot Nov 20, 2024


⚠️ Potential issue

Empty Form.Item without a child component

The Form.Item at line 584 does not contain any input or form control as its child. Form.Item is intended to wrap form controls like inputs, checkboxes, or selectors. An empty Form.Item may lead to unexpected behavior and should be avoided. Please ensure that you include the appropriate form control within this Form.Item.

Apply this diff to address the issue:

-                <Form.Item name="useAnnotation" initialValue={true} />
+                {/* Remove this empty Form.Item or add a child component if needed */}

Committable suggestion skipped: line range outside the PR's diff.


Yup, the AI is correct here. But it seems you simply want the form data to always have useAnnotation=true in case an evaluation is done, correct? In that case, I'd suggest making the form item hidden:

Suggested change
<Form.Item name="useAnnotation" initialValue={true} />
<Form.Item name="useAnnotation" initialValue={true} hidden />


@MichaelBuessemeyer, thank you for the clarification. Yes, making the Form.Item hidden with initialValue={true} is a good solution to ensure that useAnnotation is always set to true when evaluation is done, without rendering an empty form item. I've noted this for future reviews.


✏️ Learnings added
Learnt from: MichaelBuessemeyer
PR: scalableminds/webknossos#8221
File: frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx:584-584
Timestamp: 2024-11-25T14:52:23.511Z
Learning: When evaluation is done, ensure that `useAnnotation` is always set to `true` in the form data by using a hidden `Form.Item` with `initialValue={true}`.


Comment on lines 530 to 593
function CollapsibleEvaluationSettings({
isActive = false,
setActive,
}: { isActive: boolean; setActive: (active: boolean) => void }) {
return (
<Collapse
style={{ marginBottom: 8 }}
onChange={() => setActive(!isActive)}
expandIcon={() => <Checkbox checked={isActive} />}
items={[
{
key: "evaluation",
label: "Evaluation Settings",
children: (
<Row>
<Col style={{ width: "100%" }}>
<Form.Item
label="Use sparse ground truth tracing"
name={["evaluationSettings", "useSparseTracing"]}
valuePropName="checked"
initialValue={false}
tooltip="The evaluation mode can either be `dense`
in case all processes in the volume are annotated in the ground-truth.
If not, use the `sparse` mode."
>
<Checkbox style={{ width: "100%" }} />
</Form.Item>
<Form.Item
label="Max edge length in nm"
name={["evaluationSettings", "maxEdgeLength"]}
tooltip="Ground truth tracings can be densified so that
nodes are at most max_edge_length nm apart.
However, this can also introduce wrong nodes in curved processes."
>
<InputNumber style={{ width: "100%" }} placeholder="None" />
</Form.Item>
<Form.Item
label="Sparse tube threshold in nm"
name={["evaluationSettings", "sparseTubeThresholdInNm"]}
tooltip="Tube threshold for sparse evaluation,
determining if a process is too far from the ground-truth."
>
<InputNumber style={{ width: "100%" }} placeholder="1000" />
</Form.Item>
<Form.Item
label="Sparse minimum merger path length in nm"
name={["evaluationSettings", "minimumMergerPathLengthInNm"]}
tooltip="Minimum ground truth path length of a merger component
to be counted as a relevant merger (for sparse evaluation).
Note, the path length to neighboring nodes of a component is included for this comparison. This optimistic path length
estimation makes sure no relevant mergers are ignored."
>
<InputNumber style={{ width: "100%" }} placeholder="800" />
</Form.Item>
<Form.Item name="useAnnotation" initialValue={true} />
</Col>
</Row>
),
},
]}
activeKey={isActive ? "evaluation" : []}
/>
);
}

@coderabbitai coderabbitai bot Nov 20, 2024


⚠️ Potential issue

Incorrect usage of tooltip prop in Form.Item components

The Form.Item component from Ant Design does not support a tooltip prop directly. To display a tooltip for the label, you should wrap the label content with the Tooltip component. This ensures that the tooltip behaves as expected and follows the Ant Design specifications.

Apply this diff to fix the issue:

-                <Form.Item
-                  label="Use sparse ground truth tracing"
-                  name={["evaluationSettings", "useSparseTracing"]}
-                  valuePropName="checked"
-                  initialValue={false}
-                  tooltip="The evaluation mode can either be `dense`
-      in case all processes in the volume are annotated in the ground-truth.
-      If not, use the `sparse` mode."
-                >
+                <Form.Item
+                  label={
+                    <span>
+                      Use sparse ground truth tracing&nbsp;
+                      <Tooltip title="The evaluation mode can either be `dense` in case all processes in the volume are annotated in the ground-truth. If not, use the `sparse` mode.">
+                        <InfoCircleOutlined />
+                      </Tooltip>
+                    </span>
+                  }
+                  name={["evaluationSettings", "useSparseTracing"]}
+                  valuePropName="checked"
+                  initialValue={false}
+                >
                   <Checkbox style={{ width: "100%" }} />
                 </Form.Item>

Apply similar changes to the other Form.Item components with tooltips.

Committable suggestion skipped: line range outside the PR's diff.


Sorry, but you are wrong here AI :)
The docs state that there is a tooltip prop


I'd say to also ignore that @cdfhalle


Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!


Comment on lines 535 to 591
<Collapse
style={{ marginBottom: 8 }}
onChange={() => setActive(!isActive)}
expandIcon={() => <Checkbox checked={isActive} />}
items={[
{
key: "evaluation",
label: "Evaluation Settings",
children: (
<Row>
<Col style={{ width: "100%" }}>
<Form.Item
label="Use sparse ground truth tracing"
name={["evaluationSettings", "useSparseTracing"]}
valuePropName="checked"
initialValue={false}
tooltip="The evaluation mode can either be `dense`
in case all processes in the volume are annotated in the ground-truth.
If not, use the `sparse` mode."
>
<Checkbox style={{ width: "100%" }} />
</Form.Item>
<Form.Item
label="Max edge length in nm"
name={["evaluationSettings", "maxEdgeLength"]}
tooltip="Ground truth tracings can be densified so that
nodes are at most max_edge_length nm apart.
However, this can also introduce wrong nodes in curved processes."
>
<InputNumber style={{ width: "100%" }} placeholder="None" />
</Form.Item>
<Form.Item
label="Sparse tube threshold in nm"
name={["evaluationSettings", "sparseTubeThresholdInNm"]}
tooltip="Tube threshold for sparse evaluation,
determining if a process is too far from the ground-truth."
>
<InputNumber style={{ width: "100%" }} placeholder="1000" />
</Form.Item>
<Form.Item
label="Sparse minimum merger path length in nm"
name={["evaluationSettings", "minimumMergerPathLengthInNm"]}
tooltip="Minimum ground truth path length of a merger component
to be counted as a relevant merger (for sparse evaluation).
Note, the path length to neighboring nodes of a component is included for this comparison. This optimistic path length
estimation makes sure no relevant mergers are ignored."
>
<InputNumber style={{ width: "100%" }} placeholder="800" />
</Form.Item>
<Form.Item name="useAnnotation" initialValue={true} />
</Col>
</Row>
),
},
]}
activeKey={isActive ? "evaluation" : []}
/>

@coderabbitai coderabbitai bot Nov 20, 2024


🛠️ Refactor suggestion

Potential issue with Collapse component's onChange handler

The onChange handler for the Collapse component receives the active key(s) as arguments. Currently, the code toggles isActive using !isActive without considering these arguments, which might lead to synchronization issues between the Collapse state and the isActive state. It is recommended to use the provided key parameter to update the isActive state accurately.

Apply this diff to improve the implementation:

        <Collapse
          style={{ marginBottom: 8 }}
-         onChange={() => setActive(!isActive)}
+         onChange={(key) => setActive(key.includes("evaluation"))}
          expandIcon={() => <Checkbox checked={isActive} />}
          items={[


I don't see the issue here.

I'd ignore that @cdfhalle


…com:scalableminds/webknossos into evaluate-segmentation-in-infer-neurons-task

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (1)
CHANGELOG.unreleased.md (1)

Line range hint 31-31: Document API changes in Breaking Changes section

The PR includes significant API changes that should be documented in the Breaking Changes section:

 ### Breaking Changes
+- The `JobController.runInferNeuronsJob` method signature has been updated to include additional parameters for evaluation settings.
+- The routing configuration for the `inferNeurons` endpoint has been modified to support new evaluation parameters.
+- Frontend API changes: The `startNeuronInferralJob` function now requires additional parameters for evaluation configuration.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 0fa0cfd and 785ddff.

📒 Files selected for processing (1)
  • CHANGELOG.unreleased.md (1 hunks)

CHANGELOG.unreleased.md (outdated review comment, resolved)

@MichaelBuessemeyer MichaelBuessemeyer left a comment


Great work, thanks a lot for your PR @cdfhalle! I especially like how you integrated the new UI 👍

Testing went well 💪 but I would like to have a few minor points addressed. The main points are:

  1. Please rename almost all variables with evaluation in their name to make clear that the evaluation is a split-merger evaluation, e.g. doEvaluation -> doSplitMergerEvaluation. This makes it easier to understand which evaluation is meant, as there might be other evaluations for other jobs or even for the same job in the future.
  2. Please ensure that, in case an evaluation should be done, the backend validates that the required parameters are actually sent by the client and are not empty options (None).
  • The rest is minor stuff I'd say :)

@@ -226,7 +226,13 @@ class JobController @Inject()(
datasetName: String,
layerName: String,
bbox: String,
newDatasetName: String): Action[AnyContent] =
newDatasetName: String,
doEvaluation: Boolean,


I wouldn't know what kind of evaluation would be performed in case doEvaluation is set to true (as I do not work with Voxelytics regularly). Therefore, I would prefer to make the name more explicit:

Suggested change
doEvaluation: Boolean,
doSplitMergerEvaluation: Boolean,

@@ -248,6 +254,12 @@ class JobController @Inject()(
"new_dataset_name" -> newDatasetName,
"layer_name" -> layerName,
"bbox" -> bbox,
"do_evaluation" -> doEvaluation,


same renaming here

Suggested change
"do_evaluation" -> doEvaluation,
"do_split_merger_evaluation" -> doSplitMergerEvaluation,

@@ -264,7 +264,7 @@ POST /jobs/run/computeMeshFile/:organizationId/:datasetName
POST /jobs/run/computeSegmentIndexFile/:organizationId/:datasetName controllers.JobController.runComputeSegmentIndexFileJob(organizationId: String, datasetName: String, layerName: String)
POST /jobs/run/exportTiff/:organizationId/:datasetName controllers.JobController.runExportTiffJob(organizationId: String, datasetName: String, bbox: String, additionalCoordinates: Option[String], layerName: Option[String], mag: Option[String], annotationLayerName: Option[String], annotationId: Option[String], asOmeTiff: Boolean)
POST /jobs/run/inferNuclei/:organizationId/:datasetName controllers.JobController.runInferNucleiJob(organizationId: String, datasetName: String, layerName: String, newDatasetName: String)
POST /jobs/run/inferNeurons/:organizationId/:datasetName controllers.JobController.runInferNeuronsJob(organizationId: String, datasetName: String, layerName: String, bbox: String, newDatasetName: String)
POST /jobs/run/inferNeurons/:organizationId/:datasetName controllers.JobController.runInferNeuronsJob(organizationId: String, datasetName: String, layerName: String, bbox: String, newDatasetName: String, doEvaluation: Boolean, annotationId: Option[String],evalUseSparseTracing: Option[Boolean],evalMaxEdgeLength: Option[String],evalSparseTubeThresholdNm: Option[String],evalMinMergerPathLengthNm: Option[String])


The variable renaming from above, plus adding spaces between the param definitions:

Suggested change
POST /jobs/run/inferNeurons/:organizationId/:datasetName controllers.JobController.runInferNeuronsJob(organizationId: String, datasetName: String, layerName: String, bbox: String, newDatasetName: String, doEvaluation: Boolean, annotationId: Option[String],evalUseSparseTracing: Option[Boolean],evalMaxEdgeLength: Option[String],evalSparseTubeThresholdNm: Option[String],evalMinMergerPathLengthNm: Option[String])
POST /jobs/run/inferNeurons/:organizationId/:datasetName controllers.JobController.runInferNeuronsJob(organizationId: String, datasetName: String, layerName: String, bbox: String, newDatasetName: String, doSplitMergerEvaluation: Boolean, annotationId: Option[String], evalUseSparseTracing: Option[Boolean], evalMaxEdgeLength: Option[String], evalSparseTubeThresholdNm: Option[String], evalMinMergerPathLengthNm: Option[String])

@@ -193,12 +193,26 @@ export function startNeuronInferralJob(
layerName: string,
bbox: Vector6,
newDatasetName: string,
doEvaluation: boolean,


Suggested change
doEvaluation: boolean,
doSplitMergerEvaluation: boolean,

): Promise<APIJob> {
const urlParams = new URLSearchParams({
layerName,
bbox: bbox.join(","),
newDatasetName,
doEvaluation: doEvaluation.toString(),


Suggested change
doEvaluation: doEvaluation.toString(),
doSplitMergerEvaluation: doSplitMergerEvaluation.toString(),

And so on 🙈



CHANGELOG.unreleased.md (outdated review comment, resolved)
Comment on lines 844 to 846
jobSpecificInputFields={
<CollapsibleEvaluationSettings isActive={useEvaluation} setActive={setUseEvaluation} />
}


I am just guessing here: the split-merger evaluation requires an annotation to work, doesn't it? And more specifically, a skeleton annotation is required, isn't it?

If that's so, please only set the jobSpecificInputFields to CollapsibleEvaluationSettings if the user currently has an open annotation with a skeleton annotation.
You can retrieve whether there is a skeleton tracing via:

  const hasSkeletonAnnotation = useSelector((state: OxalisState) => state.tracing.skeleton != null);
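
Put together, the suggested guard would look roughly like this (using the selector above and the component names from the snippet this comment refers to; an illustrative sketch, not the final code):

  const hasSkeletonAnnotation = useSelector(
    (state: OxalisState) => state.tracing.skeleton != null,
  );
  // ...and in the returned StartJobForm, only offer the evaluation settings when
  // the open annotation actually contains a skeleton to evaluate against:
  jobSpecificInputFields={
    hasSkeletonAnnotation && (
      <CollapsibleEvaluationSettings isActive={useEvaluation} setActive={setUseEvaluation} />
    )
  }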

children: (
<Row>
<Col style={{ width: "100%" }}>
<Form.Item


This form item with the checkbox looks a little quirky in vertical mode. Could you please make it horizontal?

<Form.Item
  ...
  layout="horizontal"
  />


@cdfhalle cdfhalle Nov 26, 2024


It seems like the layout parameter only exists for Form but not for Form.Item. Do you have any ideas on how to work around that?



Oh, I am sorry, I misread since which version this property is available. I think I read 4.18, but it is 5.18. I updated antd a little to support this prop; please pull again and run yarn install. This should update the version of antd so that the form items support the layout prop.
Maybe your IDE needs a restart after the update because the TS background process did not register the upgrade and still thinks the code is wrong. At least that's what happened in my case.



And you can find the docs about the layout prop here: https://ant.design/components/form#formitem. In the version column it clearly says 5.18 (not 4.18)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (2)
app/models/job/Job.scala (2)

55-55: LGTM! Consider adding unit tests.

The new helper method follows the existing pattern and safely extracts boolean values from commandArgs.

Consider adding unit tests to verify the behavior with:

  • Valid boolean values
  • Missing keys
  • Invalid value types

Line range hint 55-68: Consider event-driven approach for job state transitions.

The current implementation handles job results synchronously. As the system grows with features like evaluation, consider:

  1. Implementing an event-driven architecture for job state transitions
  2. Using a message queue for handling evaluation results
  3. Adding metrics/monitoring for evaluation jobs

This would improve:

  • Scalability of job processing
  • Reliability through message persistence
  • Observability of the evaluation feature
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 785ddff and 67a55ff.

📒 Files selected for processing (2)
  • CHANGELOG.unreleased.md (1 hunks)
  • app/models/job/Job.scala (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • CHANGELOG.unreleased.md
🔇 Additional comments (1)
app/models/job/Job.scala (1)

67-68: Verify returnValue format and add documentation.

The new case for infer_neurons with evaluation enabled directly returns the returnValue. Consider:

  1. Adding validation for the expected format of returnValue
  2. Documenting the expected format in code comments
  3. Creating a type-safe wrapper for result links

Let's verify the format of returnValue in existing jobs:

Consider this improvement for type safety:

// Create a sealed trait hierarchy for result links
sealed trait ResultLink
case class DatasetLink(datasetName: String) extends ResultLink
case class AnnotationLink(annotationId: String) extends ResultLink
case class ExportLink(jobId: String) extends ResultLink

// Update resultLink to return ResultLink instead of String
def resultLink(organizationId: String): Option[ResultLink] = ...

…com:scalableminds/webknossos into evaluate-segmentation-in-infer-neurons-task

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (6)
frontend/javascripts/admin/api/jobs.ts (1)

211-211: Update error message terminology.

The error message uses old terminology "doEvaluation" instead of "doSplitMergerEvaluation".

-      throw new Error("annotationId is required when doEvaluation is true");
+      throw new Error("annotationId is required when doSplitMergerEvaluation is true");
app/controllers/JobController.scala (1)

Line range hint 229-262: Consider extracting evaluation settings into a reusable component

The addition of evaluation settings suggests a need for a more structured approach to job parameters. Consider:

  1. Creating a dedicated package for evaluation-related functionality
  2. Implementing a generic job parameters framework that can be extended for different job types
  3. Adding validation rules as part of the parameter definitions

This would improve maintainability and make it easier to add similar functionality to other job types in the future.

Example structure:

package jobs.parameters

trait JobParameters {
  def validate: Fox[Unit]
}

case class NeuronInferenceParameters(
  organizationName: String,
  datasetName: String,
  // ... other fields ...
  evaluationSettings: Option[NeuronEvaluationSettings]
) extends JobParameters {
  def validate: Fox[Unit] = for {
    _ <- evaluationSettings.map(_.validate).getOrElse(Fox.successful(()))
    // ... other validations ...
  } yield ()
}
frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx (4)

530-530: Fix the component name typo

The component name has a repeated "splitMerger" which should be removed.

-function CollapsibleSplitMergersplitMergerEvaluationSettings({
+function CollapsibleSplitMergerEvaluationSettings({

546-556: Consider using horizontal layout for checkboxes

The checkbox form items look quirky in vertical mode. Consider adjusting the layout to improve the visual appearance.

 <Form.Item
   label="Use sparse ground truth tracing"
   name={["splitMergerEvaluationSettings", "useSparseTracing"]}
   valuePropName="checked"
   initialValue={true}
   tooltip="The evaluation mode can either be `dense`
     in case all processes in the volume are annotated in the ground-truth.
     If not, use the `sparse` mode."
+  style={{ display: 'flex', flexDirection: 'row', alignItems: 'center' }}
+  labelCol={{ flex: '0 1 auto' }}
+  wrapperCol={{ flex: '1 0 auto' }}
 >
   <Checkbox style={{ width: "100%" }} />
 </Form.Item>

848-855: Add user feedback for missing skeleton annotation

When hasSkeletonAnnotation is false, the evaluation settings are silently hidden. Consider adding a tooltip or message to inform users why the evaluation settings are not available.

       jobSpecificInputFields={
-        hasSkeletonAnnotation && (
+        hasSkeletonAnnotation ? (
           <CollapsibleSplitMergerEvaluationSettings
             isActive={doSplitMergerEvaluation}
             setActive={setDoSplitMergerEvaluation}
           />
+        ) : (
+          <Alert
+            message="Evaluation settings require a skeleton annotation. Please create one to enable these settings."
+            type="info"
+            showIcon
+          />
         )
       }

802-806: Improve error handling specificity

The error handling could be more specific about which validation failed.

         if (
           !selectedBoundingBox ||
           (doSplitMergerEvaluation && splitMergerEvaluationSettings == null)
         ) {
+          const errorMessage = !selectedBoundingBox
+            ? "No bounding box selected"
+            : "Evaluation settings are required when evaluation is enabled";
+          Toast.error(errorMessage);
           return;
         }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 67a55ff and 870b589.

📒 Files selected for processing (5)
  • app/controllers/JobController.scala (2 hunks)
  • conf/application.conf (1 hunks)
  • conf/webknossos.latest.routes (1 hunks)
  • frontend/javascripts/admin/api/jobs.ts (1 hunks)
  • frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx (7 hunks)
🧰 Additional context used
📓 Learnings (1)
frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx (1)
Learnt from: MichaelBuessemeyer
PR: scalableminds/webknossos#8221
File: frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx:584-584
Timestamp: 2024-11-25T14:52:23.692Z
Learning: When evaluation is done, ensure that `useAnnotation` is always set to `true` in the form data by using a hidden `Form.Item` with `initialValue={true}`.
🔇 Additional comments (5)
conf/application.conf (1)

154-155: LGTM! Remember to activate jobs in the database.

The configuration changes align with the PR objectives by enabling job processing and voxelytics integration. However, please note:

  1. As mentioned in the comment above, jobs also need to be activated in the database using yarn enable-jobs for local development.
  2. Since these settings enable external processing capabilities, ensure that proper security measures (authentication, authorization, input validation) are in place in the job processing and voxelytics integration code.

Let's verify the job processing setup:

✅ Verification successful

Security measures are properly implemented for job processing

The verification shows that proper security measures are in place:

  1. Organization-level authorization check:

    • Verifies that the requesting user belongs to the same organization (request.identity._organization == organization._id)
    • Returns FORBIDDEN (403) status if unauthorized
  2. Input validation:

    • Dataset existence check with proper error handling
    • Dataset name validation via assertValidDatasetName
    • Layer name validation via assertValidLayerNameLax
  3. Route security:

    • Jobs are only accessible via POST requests
    • All sensitive parameters are properly typed and required
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for job processing security measures

# Check for authentication/authorization in job routes
rg -A 5 "inferNeurons|runInferNeuronsJob"

# Check for input validation in job controller
ast-grep --pattern 'def runInferNeuronsJob($$$) {
  $$$
}'

Length of output: 4452

frontend/javascripts/admin/api/jobs.ts (2)

196-201: LGTM! Function signature changes look good.

The new parameters are well-typed and properly align with the PR objectives for supporting neuron segmentation evaluation.


207-226: LGTM! URL parameter handling looks good.

The implementation:

  • Correctly handles required and optional parameters
  • Follows the codebase's pattern for URL parameter construction
  • Maintains consistency with other job submission functions
frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx (1)

104-104: LGTM: Well-structured prop updates

The addition of jobSpecificInputFields prop is well-implemented, allowing for flexible form customization.

Also applies to: 599-607

conf/webknossos.latest.routes (1)

267-267: LGTM! Route definition follows best practices.

The new parameters for evaluation settings are well-structured and maintain backward compatibility. The route follows RESTful conventions and includes appropriate organization-level access control.

Let's verify the controller implementation handles these parameters correctly:

✅ Verification successful

Parameter validation is properly implemented

The implementation in JobController shows appropriate validation:

  • Organization and dataset existence checks
  • Authorization checks for organization access
  • Dataset name validation via assertValidDatasetName
  • Layer name validation via assertValidLayerNameLax
  • Bounding box validation for non-superusers via assertBoundingBoxLimits
  • All evaluation parameters are properly included in the job command arguments
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check if JobController properly validates the new parameters

# Search for the controller method implementation
ast-grep --pattern 'def runInferNeuronsJob(organizationId: String, datasetName: String, $$$) = {
  $$$
}'

# Check for parameter validation
rg -A 10 "runInferNeuronsJob.*doSplitMergerEvaluation"

Length of output: 3082


Script:

#!/bin/bash
# Let's search for the JobController implementation to verify parameter validation
rg -A 20 "def runInferNeuronsJob" app/

# Also check for any validation utilities that might be used
ast-grep --pattern 'def validate$_($$$) = {
  $$$
}'

# Look for any parameter processing in the job creation
rg -A 10 "InferNeuronsParameters|InferNeuronsJob" app/

Length of output: 3159


Script:

#!/bin/bash
# Let's check more of the implementation to see parameter validation and job creation
rg -A 40 "def runInferNeuronsJob" app/

# Also look for any model/parameter classes that might contain validation logic
rg -A 10 "case class.*InferNeurons" app/

Length of output: 3959


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Outside diff range and nitpick comments (3)
app/models/job/Job.scala (1)

70-71: Consider adding validation for evaluation result links

While the implementation handles the basic flow, consider these improvements for robustness:

  1. The returnValue is directly returned without validation of the annotation link format
  2. The error case (when returnValue is None) isn't specifically handled for evaluation

Consider adding validation:

 case JobCommand.infer_neurons if this.argAsBooleanOpt("do_evaluation").getOrElse(false) =>
-  returnValue.map { resultAnnotationLink => resultAnnotationLink}
+  returnValue.flatMap { resultAnnotationLink => 
+    // Validate annotation link format
+    if (resultAnnotationLink.matches("^/annotations/.*|^/datasets/.*/annotations/.*")) Some(resultAnnotationLink)
+    else None
+  }
frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx (2)

546-556: Consider improving form layout

The form items could benefit from a more structured layout. Consider using Form.Item's labelCol and wrapperCol props for better alignment and spacing.

 <Form.Item
   label="Use sparse ground truth tracing"
   name={["splitMergerEvaluationSettings", "useSparseTracing"]}
   valuePropName="checked"
   initialValue={true}
   tooltip="The evaluation mode can either be `dense`
in case all processes in the volume are annotated in the ground-truth.
If not, use the `sparse` mode."
+  labelCol={{ span: 8 }}
+  wrapperCol={{ span: 16 }}
 >

793-826: Simplify jobApiCall implementation

The current implementation has duplicated code for starting the neuron inferral job. Consider refactoring to reduce duplication and improve maintainability.

 jobApiCall={async (
   { newDatasetName, selectedLayer: colorLayer, selectedBoundingBox, annotationId },
   form: FormInstance<any>,
 ) => {
   const splitMergerEvaluationSettings = form.getFieldValue("splitMergerEvaluationSettings");
   if (
     !selectedBoundingBox ||
     (doSplitMergerEvaluation && splitMergerEvaluationSettings == null)
   ) {
     return;
   }

   const bbox = computeArrayFromBoundingBox(selectedBoundingBox.boundingBox);
-  if (!doSplitMergerEvaluation) {
-    return startNeuronInferralJob(
-      dataset.id,
-      colorLayer.name,
-      bbox,
-      newDatasetName,
-      doSplitMergerEvaluation,
-    );
-  }
-  return startNeuronInferralJob(
-    dataset.id,
-    colorLayer.name,
-    bbox,
-    newDatasetName,
-    doSplitMergerEvaluation,
-    annotationId,
-    splitMergerEvaluationSettings.useSparseTracing,
-    splitMergerEvaluationSettings.maxEdgeLength,
-    splitMergerEvaluationSettings.sparseTubeThresholdInNm,
-    splitMergerEvaluationSettings.minimumMergerPathLengthInNm,
-  );
+  const evaluationSettings = doSplitMergerEvaluation
+    ? {
+        annotationId,
+        ...splitMergerEvaluationSettings,
+      }
+    : {};
+
+  return startNeuronInferralJob(
+    dataset.id,
+    colorLayer.name,
+    bbox,
+    newDatasetName,
+    doSplitMergerEvaluation,
+    ...Object.values(evaluationSettings),
+  );
 }}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 870b589 and 581ed00.

📒 Files selected for processing (7)
  • CHANGELOG.unreleased.md (1 hunks)
  • app/controllers/JobController.scala (2 hunks)
  • app/models/job/Job.scala (2 hunks)
  • conf/application.conf (1 hunks)
  • conf/webknossos.latest.routes (1 hunks)
  • frontend/javascripts/admin/api/jobs.ts (1 hunks)
  • frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx (7 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • conf/application.conf
  • conf/webknossos.latest.routes
  • frontend/javascripts/admin/api/jobs.ts
🧰 Additional context used
📓 Learnings (2)
CHANGELOG.unreleased.md (1)
Learnt from: MichaelBuessemeyer
PR: scalableminds/webknossos#8221
File: CHANGELOG.unreleased.md:14-14
Timestamp: 2024-11-25T14:52:23.201Z
Learning: For the `webknossos` project, changelog entries in `CHANGELOG.unreleased.md` should be concise and avoid unnecessary detailed explanations.
frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx (1)
Learnt from: MichaelBuessemeyer
PR: scalableminds/webknossos#8221
File: frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx:584-584
Timestamp: 2024-11-25T14:52:23.692Z
Learning: When evaluation is done, ensure that `useAnnotation` is always set to `true` in the form data by using a hidden `Form.Item` with `initialValue={true}`.
🔇 Additional comments (8)
CHANGELOG.unreleased.md (1)

14-14: LGTM! The changelog entry is concise and accurate.

The entry follows the Keep a Changelog format, is concise, and accurately reflects the new feature for evaluating segmentation in neuron inference tasks.

app/models/job/Job.scala (2)

57-57: LGTM! Clean implementation of boolean argument extraction

The new argAsBooleanOpt method follows the same safe pattern as the existing argAsStringOpt, properly handling both missing keys and type conversion failures.


70-71: Verify integration with UI evaluation settings

The code handles evaluation results, but let's verify the integration with the UI components and error handling.

✅ Verification successful

Let me gather more information about the UI integration and error handling.


Let me try one more search to find any evaluation-related code and error handling patterns with different search terms.


Based on the search results, I can now generate the final response:

Evaluation integration appears to be properly handled

The code shows proper integration with evaluation settings:

  • Evaluation flag is correctly handled in Job.scala with a default fallback to false
  • The JobController shows evaluation-related parameters like do_split_merger_evaluation being passed
  • Error handling is implemented through the standard Scala error handling patterns
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for UI components and error handling related to evaluation

# Look for UI components handling evaluation settings
echo "Checking for evaluation settings UI components..."
rg -l "EvaluationSettings|evaluation.*form" --type ts --type tsx

# Check for error handling patterns
echo "Checking for error handling patterns..."
rg "do_evaluation.*false|catch.*evaluation" --type scala

Length of output: 475


Script:

#!/bin/bash
# Search for UI components with broader patterns
echo "Checking UI components..."
rg -l "evaluation|inference.*settings" --type js --type jsx --type ts

# Look for evaluation-related code in Scala
echo "Checking evaluation implementation..."
rg -B 2 -A 2 "do_evaluation" --type scala

# Search for error handling patterns in Job.scala
echo "Checking error handling in Job.scala..."
rg -B 3 -A 3 "catch|error|Exception" app/models/job/Job.scala

Length of output: 964


Script:

#!/bin/bash
# Search for evaluation-related code with broader patterns
echo "Searching for evaluation patterns..."
rg -B 3 -A 3 "evaluation|infer.*neurons" --type scala

# Look for frontend files related to inference
echo "Searching for frontend inference files..."
fd -e js -e ts -e jsx -e tsx . | rg -l "infer|neuron"

# Check for error handling in the job model
echo "Checking Job model error handling..."
ast-grep --pattern 'try {
  $$$
} catch ($$$) {
  $$$
}'

Length of output: 100695

app/controllers/JobController.scala (1)

Line range hint 226-259: Verify integration with frontend components

Ensure that the frontend components are updated to handle these new evaluation parameters correctly.

✅ Verification successful

Frontend integration is properly implemented

The frontend components are correctly handling all the new evaluation parameters:

  • The API client in frontend/javascripts/admin/api/jobs.ts correctly sends all parameters including doSplitMergerEvaluation, evalUseSparseTracing, evalMaxEdgeLength, etc.
  • A dedicated form component CollapsibleSplitMergersplitMergerEvaluationSettings in frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx handles all evaluation settings with proper form fields and tooltips
  • The job modal correctly manages the evaluation state and passes all parameters to the API call
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for frontend components that need to be updated

# Check for API calls to inferNeurons
rg -A 5 "inferNeurons.*doSplitMergerEvaluation" 

# Check for related frontend components
rg -A 5 "NeuronSegmentation.*evaluation"

Length of output: 1885


Script:

#!/bin/bash
# Let's search for frontend components and API calls using different patterns

# Check for frontend components related to infer neurons
rg -A 10 "inferNeurons" --type ts --type tsx

# Check for components handling split merger evaluation
rg -A 10 "splitMergerEvaluation|SplitMergerEvaluation" --type ts --type tsx

# Check for API calls in the frontend
rg -A 5 "POST.*inferNeurons" --type ts --type tsx

# Check for form components that might handle these parameters
rg -A 10 "evalUseSparseTracing|evalMaxEdgeLength|evalSparseTubeThresholdNm" --type ts --type tsx

Length of output: 385


Script:

#!/bin/bash
# Let's try without specifying file types since tsx is not recognized

# Check for frontend components related to infer neurons
rg -A 10 "inferNeurons" "frontend/"

# Check for components handling split merger evaluation
rg -A 10 "splitMergerEvaluation|SplitMergerEvaluation" "frontend/"

# Check for API calls in the frontend
rg -A 5 "POST.*inferNeurons" "frontend/"

# Look for form components with these parameters
rg -A 10 "evalUseSparseTracing|evalMaxEdgeLength|evalSparseTubeThresholdNm" "frontend/"

# Also check for any JavaScript/TypeScript files with these parameters
fd -e js -e jsx -e ts -e tsx . frontend/ -x rg -l "doSplitMergerEvaluation|evalUseSparseTracing|evalMaxEdgeLength"

Length of output: 17511

frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx (4)

18-20: LGTM!

The new imports from antd are correctly added and are used in the new evaluation settings component.


584-584: LGTM!

The hidden Form.Item for useAnnotation is correctly implemented as per the learnings.


599-607: LGTM!

The StartJobForm component has been correctly updated to support job-specific input fields, making it more flexible and reusable.

Also applies to: 730-730


841-848: LGTM!

The conditional rendering of evaluation settings based on hasSkeletonAnnotation is correctly implemented.

Comment on lines +226 to +232
newDatasetName: String,
doSplitMergerEvaluation: Boolean,
annotationId: Option[String],
evalUseSparseTracing: Option[Boolean],
evalMaxEdgeLength: Option[String],
evalSparseTubeThresholdNm: Option[String],
evalMinMergerPathLengthNm: Option[String]): Action[AnyContent] =

🛠️ Refactor suggestion

Improve parameter type safety and validation

The method signature could be improved in several ways:

  1. Use strongly typed parameters instead of strings for numeric values
  2. Add validation to ensure all evaluation parameters are provided when doSplitMergerEvaluation is true

Consider creating a case class for evaluation settings:

case class NeuronEvaluationSettings(
  useSparseTracing: Boolean,
  maxEdgeLength: Double,
  sparseTubeThresholdNm: Double,
  minMergerPathLengthNm: Double
)

object NeuronEvaluationSettings {
  implicit val format: Format[NeuronEvaluationSettings] = Json.format[NeuronEvaluationSettings]
}

Then update the method signature:

def runInferNeuronsJob(
  datasetId: String,
  layerName: String,
  bbox: String,
  newDatasetName: String,
  doSplitMergerEvaluation: Boolean,
  annotationId: Option[String],
  evaluationSettings: Option[NeuronEvaluationSettings]
): Action[AnyContent]

Comment on lines +254 to +259
"do_split_merger_evaluation" -> doSplitMergerEvaluation,
"annotation_id" -> annotationId,
"eval_use_sparse_tracing" -> evalUseSparseTracing,
"eval_max_edge_length" -> evalMaxEdgeLength,
"eval_sparse_tube_threshold_nm" -> evalSparseTubeThresholdNm,
"eval_min_merger_path_length_nm" -> evalMinMergerPathLengthNm,

⚠️ Potential issue

Add validation for evaluation parameters

The command arguments need validation to ensure all required evaluation parameters are provided when doSplitMergerEvaluation is true.

Add parameter validation using Fox.runIf:

for {
  // ... existing validations ...
  _ <- Fox.runIf(doSplitMergerEvaluation) {
    for {
      annotationIdValidated <- annotationId.toFox ?~> "job.inferNeurons.evaluation.missingAnnotationId"
      _ <- evalUseSparseTracing.toFox ?~> "job.inferNeurons.evaluation.missingUseSparseTracing"
      maxEdgeLength <- evalMaxEdgeLength.toFox.flatMap(v => 
        Fox.fromTry(Try(v.toDouble))) ?~> "job.inferNeurons.evaluation.invalidMaxEdgeLength"
      sparseTubeThreshold <- evalSparseTubeThresholdNm.toFox.flatMap(v => 
        Fox.fromTry(Try(v.toDouble))) ?~> "job.inferNeurons.evaluation.invalidSparseTubeThreshold"
      minMergerPathLength <- evalMinMergerPathLengthNm.toFox.flatMap(v => 
        Fox.fromTry(Try(v.toDouble))) ?~> "job.inferNeurons.evaluation.invalidMinMergerPathLength"
    } yield ()
  }
  command = JobCommand.infer_neurons
  // ... rest of the method

Then update the command arguments to use the validated values:

commandArgs = Json.obj(
  // ... existing fields ...
  "do_split_merger_evaluation" -> doSplitMergerEvaluation,
  "evaluation_settings" -> (if (doSplitMergerEvaluation) {
    Some(Json.obj(
      "annotation_id" -> annotationId.get,
      "use_sparse_tracing" -> evalUseSparseTracing.get,
      "max_edge_length" -> maxEdgeLength,
      "sparse_tube_threshold_nm" -> sparseTubeThreshold,
      "min_merger_path_length_nm" -> minMergerPathLength
    ))
  } else None)
)

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (2)
frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx (2)

546-557: Consider improving form layout

The form items could benefit from a more structured layout. Consider using Ant Design's Grid system with proper spacing and alignment.

 <Form.Item
-  layout="horizontal"
   label="Use sparse ground truth tracing"
   name={["splitMergerEvaluationSettings", "useSparseTracing"]}
   valuePropName="checked"
   initialValue={true}
   tooltip="The evaluation mode can either be `dense`
     in case all processes in the volume are annotated in the ground-truth.
     If not, use the `sparse` mode."
 >
-  <Checkbox style={{ width: "100%" }} />
+  <Row>
+    <Col span={24}>
+      <Checkbox />
+    </Col>
+  </Row>
 </Form.Item>

794-827: Improve error handling and user feedback

The error handling for missing evaluation settings could be more informative to users.

 if (
   !selectedBoundingBox ||
   (doSplitMergerEvaluation && splitMergerEvaluationSettings == null)
 ) {
+  if (doSplitMergerEvaluation && splitMergerEvaluationSettings == null) {
+    Toast.error('Evaluation settings are required when evaluation is enabled.');
+  }
   return;
 }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 581ed00 and 2e2d915.

⛔ Files ignored due to path filters (1)
  • yarn.lock is excluded by !**/yarn.lock, !**/*.lock
📒 Files selected for processing (2)
  • frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx (7 hunks)
  • package.json (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • package.json
🧰 Additional context used
📓 Learnings (1)
frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx (1)
Learnt from: MichaelBuessemeyer
PR: scalableminds/webknossos#8221
File: frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx:584-584
Timestamp: 2024-11-25T14:52:23.692Z
Learning: When evaluation is done, ensure that `useAnnotation` is always set to `true` in the form data by using a hidden `Form.Item` with `initialValue={true}`.
🔇 Additional comments (3)
frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx (3)

584-584: Hidden form item implementation looks good

The hidden form item for useAnnotation is correctly implemented with initialValue={true} as suggested in the learnings.
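
For illustration, a minimal sketch of that pattern; the useAnnotation field name comes from the discussion above, while the standalone wrapper component is made up for this example.

import { Checkbox, Form } from "antd";

// Sketch only: a hidden Form.Item that keeps useAnnotation pinned to true in the form data.
export function HiddenUseAnnotationItem() {
  return (
    // Must be rendered inside the surrounding <Form>; "hidden" hides the control
    // while the initialValue of true is still collected with the form values.
    <Form.Item name="useAnnotation" initialValue={true} hidden>
      <Checkbox />
    </Form.Item>
  );
}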


842-849: Implementation of conditional evaluation settings looks good

The evaluation settings are correctly rendered only when a skeleton annotation is present, and the state management for toggling evaluation is well implemented.


530-533: ⚠️ Potential issue

Fix component name typo

The component name has a duplicated "splitMerger" prefix which should be removed.

-function CollapsibleSplitMergersplitMergerEvaluationSettings({
+function CollapsibleSplitMergerEvaluationSettings({

Likely invalid or redundant comment.

@MichaelBuessemeyer MichaelBuessemeyer left a comment


Thanks for applying the feedback 🙏

  • There is still one open TODO from my last review
  • I identified that the typing of the split merger settings was incorrect and added some comments on how to improve it
  • And the job is currently crashing because the param organization_name was not renamed to organization_id. Pulling and merging with master in the vx repo should fix the error

@@ -248,6 +254,12 @@ class JobController @Inject()(
"new_dataset_name" -> newDatasetName,
"layer_name" -> layerName,

This is still missing 🙈

@@ -54,6 +54,7 @@ case class Job(
def datasetId: Option[String] = argAsStringOpt("dataset_id")

private def argAsStringOpt(key: String) = (commandArgs \ key).toOption.flatMap(_.asOpt[String])
private def argAsBooleanOpt(key: String) = (commandArgs \ key).toOption.flatMap(_.asOpt[Boolean])

This looks unformatted. To format the backend code you can use yarn format-backend, and to lint the backend code you can use yarn lint-backend.

Comment on lines +154 to +155
jobsEnabled = true
voxelyticsEnabled = true

This needs to be undone before merging

children: (
<Row>
<Col style={{ width: "100%" }}>
<Form.Item

Oh, I am sorry, I misread which version this property has been available since. I think I read 4.18, but it is 5.18. I updated antd a little to support this prop; please pull again and run yarn install. This should update the version of antd so that the form items support the layout prop.
Maybe your IDE needs a restart after the update because the TS background process did not register the upgrade and still thinks the code is wrong. At least that's what happened in my case.

children: (
<Row>
<Col style={{ width: "100%" }}>
<Form.Item

And you can find the docs about the layout prop here: https://ant.design/components/form#formitem. In the version column it clearly says 5.18 (not 4.18)
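
For reference, a minimal sketch of the per-item layout prop available from antd 5.18 on; label and field names are copied from the evaluation form in this PR, but the standalone component exists only for illustration.

import { Checkbox, Form } from "antd";

// Sketch: with antd >= 5.18 a single Form.Item can opt into a horizontal layout
// even when the surrounding Form uses a vertical layout.
export function UseSparseTracingItem() {
  return (
    <Form.Item
      layout="horizontal"
      label="Use sparse ground truth tracing"
      name={["splitMergerEvaluationSettings", "useSparseTracing"]}
      valuePropName="checked"
      initialValue={true}
    >
      <Checkbox />
    </Form.Item>
  );
}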

frontend/javascripts/admin/api/jobs.ts (outdated comment thread, resolved)
nodes are at most max_edge_length nm apart.
However, this can also introduce wrong nodes in curved processes."
>
<InputNumber style={{ width: "100%" }} placeholder="None" />

Here you use an InputNumber, which leads to maxEdgeLength being of type number :) -> hence the suggestion to improve the typing to avoid potential errors in the future
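
To make the point concrete, a small sketch of how an InputNumber-backed field ends up as a number in the form values; the type and field names mirror those discussed in this review (written as required fields here for brevity), and the standalone component is only illustrative.

import { Form, InputNumber } from "antd";

export type SplitMergerEvaluationSettings = {
  useSparseTracing: boolean;
  maxEdgeLength: number;
  sparseTubeThresholdInNm: number;
  minimumMergerPathLengthInNm: number;
};

// Sketch: the InputNumber control stores its value as a number, so the field
// read back from the form is number-typed rather than a string.
export function MaxEdgeLengthItem() {
  return (
    <Form.Item label="Max edge length" name={["splitMergerEvaluationSettings", "maxEdgeLength"]}>
      <InputNumber style={{ width: "100%" }} placeholder="None" />
    </Form.Item>
  );
}

// Reading the values back then matches the runtime shape, e.g.:
// const settings = form.getFieldValue("splitMergerEvaluationSettings") as SplitMergerEvaluationSettings;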

{ newDatasetName, selectedLayer: colorLayer, selectedBoundingBox, annotationId },
form: FormInstance<any>,
) => {
const splitMergerEvaluationSettings = form.getFieldValue("splitMergerEvaluationSettings");

This should fix the typing of splitMergerEvaluationSettings from any to the desired type to improve the type checking

Suggested change
const splitMergerEvaluationSettings = form.getFieldValue("splitMergerEvaluationSettings");
const splitMergerEvaluationSettings = form.getFieldValue("splitMergerEvaluationSettings") as SplitMergerEvaluationSettings;

cdfhalle and others added 3 commits November 29, 2024 13:50
…com:scalableminds/webknossos into evaluate-segmentation-in-infer-neurons-task
@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (2)
app/controllers/JobController.scala (1)

226-232: Consider grouping related parameters

The method signature is getting quite long with the addition of evaluation-related parameters. Consider grouping them into a case class for better maintainability.

-def runInferNeuronsJob(datasetId: String,
-                       layerName: String,
-                       bbox: String,
-                       newDatasetName: String,
-                       doSplitMergerEvaluation: Boolean,
-                       annotationId: Option[String],
-                       evalUseSparseTracing: Option[Boolean],
-                       evalMaxEdgeLength: Option[String],
-                       evalSparseTubeThresholdNm: Option[String],
-                       evalMinMergerPathLengthNm: Option[String]): Action[AnyContent] =
+case class SplitMergerEvaluationParams(
+  annotationId: Option[String],
+  useSparseTracing: Option[Boolean],
+  maxEdgeLength: Option[String],
+  sparseTubeThresholdNm: Option[String],
+  minMergerPathLengthNm: Option[String]
+)
+
+def runInferNeuronsJob(
+  datasetId: String,
+  layerName: String,
+  bbox: String,
+  newDatasetName: String,
+  doSplitMergerEvaluation: Boolean,
+  evaluationParams: Option[SplitMergerEvaluationParams]
+): Action[AnyContent] =
frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx (1)

790-792: Add loading state for skeleton annotation check

Consider adding a loading state while checking for skeleton annotation to prevent UI flicker.

+  const [isCheckingAnnotation, setIsCheckingAnnotation] = React.useState(true);
   const hasSkeletonAnnotation = useSelector((state: OxalisState) => state.tracing.skeleton != null);
+  React.useEffect(() => {
+    setIsCheckingAnnotation(false);
+  }, [hasSkeletonAnnotation]);
   const dispatch = useDispatch();
   const [doSplitMergerEvaluation, setDoSplitMergerEvaluation] = React.useState(false);
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 9b713b7 and 6055346.

📒 Files selected for processing (4)
  • CHANGELOG.unreleased.md (1 hunks)
  • app/controllers/JobController.scala (3 hunks)
  • app/models/job/Job.scala (2 hunks)
  • frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx (7 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • app/models/job/Job.scala
  • CHANGELOG.unreleased.md
🧰 Additional context used
📓 Learnings (1)
frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx (1)
Learnt from: MichaelBuessemeyer
PR: scalableminds/webknossos#8221
File: frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx:584-584
Timestamp: 2024-11-25T14:52:23.692Z
Learning: When evaluation is done, ensure that `useAnnotation` is always set to `true` in the form data by using a hidden `Form.Item` with `initialValue={true}`.
🔇 Additional comments (3)
app/controllers/JobController.scala (2)

256-261: 🛠️ Refactor suggestion

Improve type safety for numeric parameters

The evaluation parameters that represent numeric values are passed as strings. Consider parsing them to appropriate numeric types before adding to command arguments.

-            "eval_max_edge_length" -> evalMaxEdgeLength,
-            "eval_sparse_tube_threshold_nm" -> evalSparseTubeThresholdNm,
-            "eval_min_merger_path_length_nm" -> evalMinMergerPathLengthNm,
+            "eval_max_edge_length" -> evalMaxEdgeLength.map(_.toDouble),
+            "eval_sparse_tube_threshold_nm" -> evalSparseTubeThresholdNm.map(_.toDouble),
+            "eval_min_merger_path_length_nm" -> evalMinMergerPathLengthNm.map(_.toDouble),

Likely invalid or redundant comment.


247-247: ⚠️ Potential issue

Strengthen validation for evaluation parameters

The current validation only checks for annotationId when evaluation is enabled. Consider validating all required evaluation parameters.

-          annotation_id_parsed <- Fox.runOptional(doSplitMergerEvaluation)(annotationId.toFox)
+          _ <- Fox.runIf(doSplitMergerEvaluation) {
+            for {
+              _ <- annotationId.toFox ?~> "job.inferNeurons.evaluation.missingAnnotationId"
+              _ <- evalUseSparseTracing.toFox ?~> "job.inferNeurons.evaluation.missingUseSparseTracing"
+              _ <- evalMaxEdgeLength.toFox ?~> "job.inferNeurons.evaluation.missingMaxEdgeLength"
+              _ <- evalSparseTubeThresholdNm.toFox ?~> "job.inferNeurons.evaluation.missingSparseTubeThreshold"
+              _ <- evalMinMergerPathLengthNm.toFox ?~> "job.inferNeurons.evaluation.missingMinMergerPathLength"
+            } yield ()
+          }

Likely invalid or redundant comment.

frontend/javascripts/oxalis/view/action-bar/starting_job_modals.tsx (1)

553-564: ⚠️ Potential issue

Add validation rules for form fields

The form fields lack validation rules to ensure required values are provided when evaluation is enabled.

 <Form.Item
   layout="horizontal"
   label="Use sparse ground truth tracing"
   name={["splitMergerEvaluationSettings", "useSparseTracing"]}
   valuePropName="checked"
   initialValue={true}
+  rules={[
+    {
+      required: true,
+      message: "Please specify if sparse ground truth tracing should be used",
+    },
+  ]}
   tooltip="The evaluation mode can either be `dense`
     in case all processes in the volume are annotated in the ground-truth.
     If not, use the `sparse` mode."
 >

Likely invalid or redundant comment.

Comment on lines +530 to +535
type SplitMergerEvaluationSettings = {
useSparseTracing?: boolean;
maxEdgeLength?: number;
sparseTubeThresholdInNm?: number;
minimumMergerPathLengthInNm?: number;
};

⚠️ Potential issue

Make evaluation settings fields required

The current type definition makes all fields optional, which could lead to runtime errors. Since these fields are required when evaluation is enabled, consider making them non-optional.

 type SplitMergerEvaluationSettings = {
-  useSparseTracing?: boolean;
-  maxEdgeLength?: number;
-  sparseTubeThresholdInNm?: number;
-  minimumMergerPathLengthInNm?: number;
+  useSparseTracing: boolean;
+  maxEdgeLength: number;
+  sparseTubeThresholdInNm: number;
+  minimumMergerPathLengthInNm: number;
 };
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
type SplitMergerEvaluationSettings = {
useSparseTracing?: boolean;
maxEdgeLength?: number;
sparseTubeThresholdInNm?: number;
minimumMergerPathLengthInNm?: number;
};
type SplitMergerEvaluationSettings = {
useSparseTracing: boolean;
maxEdgeLength: number;
sparseTubeThresholdInNm: number;
minimumMergerPathLengthInNm: number;
};

Comment on lines +805 to +811
const splitMergerEvaluationSettings = form.getFieldValue(
"splitMergerEvaluationSettings",
) as SplitMergerEvaluationSettings;
if (
!selectedBoundingBox ||
(doSplitMergerEvaluation && splitMergerEvaluationSettings == null)
) {

⚠️ Potential issue

Improve evaluation settings validation

The current validation only checks if settings exist. Consider adding more specific validation for individual fields.

-        if (
-          !selectedBoundingBox ||
-          (doSplitMergerEvaluation && splitMergerEvaluationSettings == null)
-        ) {
+        if (!selectedBoundingBox) {
+          return;
+        }
+        
+        if (doSplitMergerEvaluation) {
+          if (!splitMergerEvaluationSettings) {
+            Toast.error("Evaluation settings are required when evaluation is enabled");
+            return;
+          }
+          
+          const { maxEdgeLength, sparseTubeThresholdInNm, minimumMergerPathLengthInNm } = splitMergerEvaluationSettings;
+          if (!maxEdgeLength || !sparseTubeThresholdInNm || !minimumMergerPathLengthInNm) {
+            Toast.error("All evaluation settings fields must be provided");
+            return;
+          }
+        }

Committable suggestion skipped: line range outside the PR's diff.
