fix links in paper
katjaq committed Dec 8, 2023
1 parent ed3fd7b commit 94dd2e1
Showing 1 changed file (paper.md) with 3 additions and 3 deletions.
@@ -26,12 +26,12 @@ bibliography: paper.bib
---

# Summary
-Brain extraction and segmentation are the first steps for most neuroimaging analyses. Automatic methods work well in adult human brains, but produce unreliable results in non-human data, due to muscle tissue, skull, and luminosity gradients. Thresholdmann (https://neuroanatomy.github.io/thresholdmann) is an open source Web tool for the interactive application of space-varying thresholds to Nifti volumes. No download or installation is required and all processing is done on the user’s computer. Nifti volumes are dragged and dropped onto the Web app and become available for visual exploration in a stereotaxic viewer. A space-varying threshold is then created by setting control points, each with its own local threshold. Each point can be repositioned or removed, and each local threshold can be adjusted in real time using sliders or by entering values numerically. The threshold direction can be switched to allow segmentation of the structure of interest in different imaging modalities, such as T1- and T2-weighted contrasts. The opacity of the mask and the brightness and contrast of the MRI image can be adjusted via sliders. A 3D model of the thresholded mask can be computed to inspect the result in an interactive 3D rendering. Finally, the thresholded mask, the space-varying threshold and the list of control points can be saved for later use in scripted workflows that can reproduce the thresholded volume from the original data.
+Brain extraction and segmentation are the first steps for most neuroimaging analyses. Automatic methods work well in adult human brains, but produce unreliable results in non-human data, due to muscle tissue, skull, and luminosity gradients. Thresholdmann ([https://neuroanatomy.github.io/thresholdmann](https://neuroanatomy.github.io/thresholdmann)) is an open source Web tool for the interactive application of space-varying thresholds to Nifti volumes. No download or installation is required and all processing is done on the user’s computer. Nifti volumes are dragged and dropped onto the Web app and become available for visual exploration in a stereotaxic viewer. A space-varying threshold is then created by setting control points, each with its own local threshold. Each point can be repositioned or removed, and each local threshold can be adjusted in real time using sliders or by entering values numerically. The threshold direction can be switched to allow segmentation of the structure of interest in different imaging modalities, such as T1- and T2-weighted contrasts. The opacity of the mask and the brightness and contrast of the MRI image can be adjusted via sliders. A 3D model of the thresholded mask can be computed to inspect the result in an interactive 3D rendering. Finally, the thresholded mask, the space-varying threshold and the list of control points can be saved for later use in scripted workflows that can reproduce the thresholded volume from the original data.

# Statement of need
Brain extraction and segmentation are required for most analyses of neuroimaging data. Obtaining appropriate masks can be particularly difficult in non-human brain imaging, as standard automatic tools struggle with the surrounding muscle tissue, skull, and strong luminosity gradients. A simple interactive threshold is intuitive and fast to apply, and can often provide a rather good initial guess. However, because of luminosity gradients, the threshold that works for one brain region is likely to fail in another.

-Thresholdmann complements the variety of existing brain segmentation tools, providing an easy interface to manually control the segmentation on a local scale across different brain imaging modalities and image contrast gradients. The masks produced by Thresholdmann can serve as a starting point for more detailed manual editing using tools such as BrainBox (https://brainbox.pasteur.fr) [@BrainBox] or ITK-SNAP (http://itksnap.org) [@itkSNAP]. This interactive approach is especially valuable for non-human brain imaging data, where automatic approaches often require extensive manual adjustment anyway [@PrimeDE; @PrimeRE; @PrimeDE2]. We have used Thresholdmann successfully to create initial brain masks for a variety of vertebrate brains – including many non-human primate datasets [@34primates; @cerebella] – as well as developmental data. Small Web tools, such as Thresholdmann or Reorient (https://neuroanatomy.github.io/reorient) [@Reorient], focused on solving a single problem, can become helpful additions to the methodological toolbox of neuroimagers.
+Thresholdmann complements the variety of existing brain segmentation tools, providing an easy interface to manually control the segmentation on a local scale across different brain imaging modalities and image contrast gradients. The masks produced by Thresholdmann can serve as a starting point for more detailed manual editing using tools such as BrainBox ([https://brainbox.pasteur.fr](https://brainbox.pasteur.fr)) [@BrainBox] or ITK-SNAP ([http://itksnap.org](http://itksnap.org)) [@itkSNAP]. This interactive approach is especially valuable for non-human brain imaging data, where automatic approaches often require extensive manual adjustment anyway [@PrimeDE; @PrimeRE; @PrimeDE2]. We have used Thresholdmann successfully to create initial brain masks for a variety of vertebrate brains – including many non-human primate datasets [@34primates; @cerebella] – as well as developmental data. Small Web tools, such as Thresholdmann or Reorient ([https://neuroanatomy.github.io/reorient](https://neuroanatomy.github.io/reorient)) [@Reorient], focused on solving a single problem, can become helpful additions to the methodological toolbox of neuroimagers.

# Methods
The spatially-varying threshold is computed from a set of control points. Each control point has a position $`x_i`$ and a threshold value $`v_i`$ that can be adjusted interactively. At each point $`x`$ of the volume, the local threshold is computed as a weighted function of the control points. The weight associated with the i-th control point at position $`x`$ is given by:
@@ -43,7 +43,7 @@ $$W \left( x \right) = \sum_i^n w_i \left( x \right) .$$

Additionally, we add a “background” threshold value, which keeps the action of the control points localised. The background value is independent of distance and is given a small constant weight. In this manner, positions far from all control points receive the background threshold value.
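
To make the computation concrete, the following is a minimal JavaScript sketch of a space-varying threshold built from control points plus a constant-weight background term. It is an illustration rather than Thresholdmann’s actual implementation: the exact weight kernel $`w_i`$ is defined in a part of the file not shown in this diff, so an inverse-squared-distance weight is assumed here, and all names and values are hypothetical.

```javascript
// Illustrative only: assumes inverse-squared-distance weights for w_i(x);
// the real kernel used by Thresholdmann is not shown in this excerpt.
function localThreshold(x, controlPoints, backgroundValue, backgroundWeight) {
  // Start with the constant-weight background contribution.
  let weightedSum = backgroundWeight * backgroundValue;
  let totalWeight = backgroundWeight;
  for (const {pos, value} of controlPoints) {
    // Squared Euclidean distance between voxel x and the control point.
    const d2 = pos.reduce((sum, p, k) => sum + (p - x[k]) ** 2, 0);
    const w = 1 / (d2 + Number.EPSILON); // assumed weight w_i(x)
    weightedSum += w * value;            // contribution of the local threshold v_i
    totalWeight += w;                    // accumulates W(x), the sum of all weights
  }
  // Normalise by W(x); far from all control points the background term
  // dominates and the background threshold value is returned.
  return weightedSum / totalWeight;
}

// Hypothetical usage: two control points with different local thresholds.
const points = [
  {pos: [20, 30, 25], value: 90},
  {pos: [60, 40, 25], value: 140}
];
console.log(localThreshold([25, 32, 25], points, 120, 1e-6));
```

Voxels whose intensity exceeds (or falls below, depending on the chosen threshold direction) this local value are then included in the mask.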

-Thresholdmann was coded in JavaScript and runs from a GitHub page. Code style was verified using ESLint (https://eslint.org). Unit tests and end-to-end tests were implemented using Mocha (https://mochajs.org) and Puppeteer (https://pptr.dev). Modifications in the code are continuously tested using CircleCI (https://circleci.com).
+Thresholdmann was coded in JavaScript and runs from a GitHub page. Code style was verified using ESLint ([https://eslint.org](https://eslint.org)). Unit tests and end-to-end tests were implemented using Mocha ([https://mochajs.org](https://mochajs.org)) and Puppeteer ([https://pptr.dev](https://pptr.dev)). Modifications in the code are continuously tested using CircleCI ([https://circleci.com](https://circleci.com)).
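
As an illustration of this setup, an end-to-end test might look like the sketch below; the test name, body and assertion are hypothetical and are not taken from Thresholdmann’s actual test suite.

```javascript
// Hypothetical Mocha + Puppeteer end-to-end test; the assertion is illustrative.
const assert = require('assert');
const puppeteer = require('puppeteer');

describe('Thresholdmann web app', function () {
  this.timeout(30000); // allow time for the headless browser to start
  let browser;
  let page;

  before(async () => {
    browser = await puppeteer.launch();
    page = await browser.newPage();
  });

  after(async () => {
    await browser.close();
  });

  it('loads the app page', async () => {
    await page.goto('https://neuroanatomy.github.io/thresholdmann');
    const title = await page.title();
    assert.ok(/threshold/i.test(title)); // assumed: the page title mentions the tool
  });
});
```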

# Figures
![<b>Thresholdmann interface and 3D viewer.</b> The <b>left panel</b> allows the user to select different control point actions, set the thresholding direction, switch between the threshold mask and value views, open the 3D viewer, load or save the work, adjust the opacity of the mask and the brightness and contrast of the MRI, and view information about the imaging data. The <b>central panel</b> is a stereotaxic viewer that allows the user to move through the slices using the slider at the bottom, to switch between the three stereotaxic planes, and to interactively set, move or remove control points. Users can view the threshold mask or the corresponding threshold value space. The <b>right panel</b> shows, for each control point, the stereotaxic coordinates, an adjustment slider to change the local threshold, and the threshold value. The selected control point is highlighted in the panel and the viewer jumps to the corresponding slice. The <b>interactive 3D viewer</b> opens in a separate browser window.\label{fig:thresholdmann1}](https://raw.githubusercontent.com/neuroanatomy/thresholdmann/master/img/thresholdmann_fig1.png)
