
ITISFoundation/jupyterlab-AxonDeepSeg-osparc


AxonDeepSeg within JupyterLab on o²S²PARC

This repository packages AxonDeepSeg as part of a JupyterLab Service on o²S²PARC. It also adds support for PyTorch with GPU/CUDA.

AxonDeepSeg is an open-source deep-learning tool for automatically segmenting axons and myelin sheaths from microscopy images. It performs 3-class semantic segmentation (axon, myelin, and background) using a convolutional neural network. AxonDeepSeg was developed at the NeuroPoly Lab, Polytechnique Montreal, University of Montreal, Canada. See the GitHub repository and documentation for more information.
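To illustrate what a 3-class segmentation output represents (this is a toy sketch, not the AxonDeepSeg API; the class indices used here are assumptions), one can compute per-class pixel fractions from a label map:

```python
from collections import Counter

# Hypothetical 3-class label map (0 = background, 1 = myelin, 2 = axon).
# Real AxonDeepSeg outputs are full image-sized masks; this is a 4x4 toy example.
label_map = [
    [0, 0, 1, 1],
    [0, 1, 2, 1],
    [1, 2, 2, 1],
    [1, 1, 1, 0],
]

# Flatten the map and count pixels per class.
counts = Counter(px for row in label_map for px in row)
total = sum(counts.values())
fractions = {cls: counts[cls] / total for cls in (0, 1, 2)}
print(fractions)  # → {0: 0.25, 1: 0.5625, 2: 0.1875}
```

Metrics like these (e.g. myelin fraction) are the kind of morphometric information typically derived from such segmentations.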

The base of the Service was built with the cookiecutter-osparc-ui-module, using supervisord, xtigervnc, noVNC, and Openbox.

Citation

If you use this work in your research, please cite:

Zaimi, A., Wabartha, M., Herman, V., Antonsanti, P.-L., Perone, C. S., & Cohen-Adad, J. (2018). AxonDeepSeg: automatic axon and myelin segmentation from microscopy data using convolutional neural networks. Scientific Reports, 8(1), 3816.

How to develop this o²S²PARC Service

Usage

Build the module:

$ make build

To run locally, then visit http://127.0.0.1:8888:

make run-local

To publish to a local throw-away registry:

make publish-local

Versioning

The Service version is updated with the `make version-*` targets.

CI/CD Integration

A template CI configuration file is provided in .github/workflows/check-image.yml; it checks that the image builds. When the workflow runs successfully for a new version on the main branch, the new version is automatically detected and published to the internal registry (see also "Deployment on o²S²PARC" in this README).
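The real check-image.yml in the repository is authoritative, but a minimal GitHub Actions workflow of this kind might look like the following sketch (job names, triggers, and action versions are assumptions):

```yaml
# Hypothetical sketch of .github/workflows/check-image.yml;
# refer to the actual file in the repository.
name: check-image
on:
  push:
    branches: [main]
  pull_request:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the Service image
        run: make build
```

The key point is simply that `make build` (the same target used locally) is exercised on every push, so a broken image is caught before publication.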

Deployment on o²S²PARC

The required CI is already packaged. To have the image built and pushed to the internal registry, you must add this Service to the oSparc/docker-publisher-osparc-services repository.

How to test the Application

Run locally and visit http://127.0.0.1:8888:

make run-local

Or publish it to a local o²S²PARC deployment:

make publish-local

Perform a test segmentation as shown in the video tutorial. As input, you can use the image from this repository (from validation/inputs/input_1) or other example images provided by AxonDeepSeg. The input folder will be mounted at tmp/inputs.

Changelog

[1.0.0] - 2023-12-22

First version
