
This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →


Version 2 Roadmap #153

Closed
schlegelp opened this issue Aug 14, 2024 · 7 comments
Labels
enhancement New feature or request

Comments

schlegelp (Collaborator) commented Aug 14, 2024

This issue is to suggest & discuss (read: spitball) potential breaking changes that might warrant a new major version.

Plotting

Over the past months I have experimented more with the pygfx library and I'm pretty much set on replacing the current vispy-based 3d viewer with octarine. This may well happen already in the next minor version (see #138) but I wonder if a general refactoring of the plotting interface is in order. In particular:

  • Do we want to add pygfx as a 2d plotting backend (see also fastplotlib)? It's much faster and, unlike matplotlib, renders with correct perspective.
  • In general, I think the plotting should be refactored into classes under the hood instead of the current convoluted functions.
  • Further to the above: I would love to play around trying a "Grammar of Graphics"-like interface e.g. via method chaining.
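The chaining idea could be prototyped independently of any rendering backend. Below is a purely hypothetical sketch (PlotSpec, color_by, etc. are invented names, not existing navis API) of a deferred, chainable plot specification:

```python
# Hypothetical sketch of a method-chaining plot interface.
# All class and method names are illustrative, not part of navis today.

class PlotSpec:
    """Accumulates plot settings via chaining; rendering is deferred."""

    def __init__(self, neurons):
        self.neurons = list(neurons)
        self.settings = {}

    def color_by(self, prop):
        self.settings["color_by"] = prop
        return self  # returning self is what enables chaining

    def backend(self, name):
        self.settings["backend"] = name
        return self

    def show(self):
        # A real implementation would dispatch to matplotlib/pygfx here;
        # this sketch just returns the accumulated spec.
        return {"n_neurons": len(self.neurons), **self.settings}


result = PlotSpec(["n1", "n2"]).color_by("compartment").backend("pygfx").show()
print(result)
# {'n_neurons': 2, 'color_by': 'compartment', 'backend': 'pygfx'}
```

Because each method returns `self` and rendering happens only in `show()`, settings can be composed in any order, which is the core of a Grammar-of-Graphics-style interface.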

Additional backends

Currently, navis can utilize multiple CPUs (via pathos) but it doesn't scale beyond a single machine. It'd be nice if there was an easy way to divvy up e.g. a big NBLAST across multiple nodes. @aschampion has already made a start in #111 for a Dask backend and the implementation is generic so that it would be easy to tack on more.
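One way to keep such backends easy to "tack on" is a small registry keyed by name, where every backend serves the same map-style interface. A toy sketch (all names invented; a Dask or pathos backend would implement the same `map` method):

```python
# Toy sketch of a pluggable compute backend registry.
# Names are illustrative, not the actual implementation in #111.

BACKENDS = {}

def register_backend(name):
    """Class decorator that records a backend under a string name."""
    def deco(cls):
        BACKENDS[name] = cls
        return cls
    return deco

@register_backend("serial")
class SerialBackend:
    """Fallback backend: processes chunks one after another."""
    def map(self, fn, chunks):
        return [fn(c) for c in chunks]

def run(fn, chunks, backend="serial"):
    """Dispatch a map-style job to the named backend."""
    return BACKENDS[backend]().map(fn, chunks)

# e.g. an NBLAST-like all-by-all score matrix split into row chunks
scores = run(lambda rows: [r * 2 for r in rows], [[1, 2], [3]])
print(scores)  # [[2, 4], [6]]
```

A hypothetical `DaskBackend` would then only need to implement `map` with `dask.bag` or futures, and user code would switch backends by name alone.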

Neuron Classes

Two things that have been on my mind with regards to neurons:

  1. As discussed with @bdpedigo in Berlin, can we generate a "unified" neuron object that combines multiple representations (skeleton, mesh, image)? @ceesem has done something similar in MeshParty.

  2. The current neuron names (TreeNeuron, MeshNeuron, VoxelNeuron) are... ugly

It would be nice if there was only a single Neuron class that could be either a single type or multiple. Something along these lines:

>>> swc = pd.read_table('neuron.swc')  # load some SWC table
>>> n = navis.Neuron(swc)  # automatically interprets that this is a skeleton
>>> # Alternatively: n = navis.Neuron.from_swc('neuron.swc') or n = navis.read_swc('neuron.swc')
>>> n
<navis.Neuron(id=12345, type=skeleton, nodes=4320)>
>>> n.type
("skeleton", )

Combining multiple representations could then look like this:

>>> n = navis.Neuron.from_swc('neuron.swc')
>>> n.add_mesh(mesh)
>>> # Alternatively: n = navis.Neuron(skeleton=swc, mesh=mesh)
>>> n 
<navis.Neuron(id=12345, type=(skeleton, mesh), nodes=4320, faces=20432)>
>>> n.type
("skeleton", "mesh")

Individual representations can be accessed like this:

>>> n.skeleton  # this can also be generated on-the-fly from the mesh
<navis.Skeleton(id=12345, nodes=4320)>
>>> n.mesh
<navis.Mesh(id=12345, faces=20432)>
>>> navis.plot3d(n)  # plots the "first" representation (what that is tbd)
>>> navis.plot3d(n.skeleton)  # explicitly plot the skeleton

Under the hood, navis already combines representations when needed. For example, navis.prune_twigs works on MeshNeurons by (a) skeletonizing the mesh, (b) pruning the skeleton and (c) removing the faces corresponding to the removed skeleton nodes from the mesh. The big question then is: does making this explicit actually simplify things for the user? I imagine that most users will work with either skeletons or meshes and don't need to know how things happen under the hood.
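As a toy illustration of the skeletonize-prune-remap pattern described above (the data structures here are made up for the example, not navis internals):

```python
# Toy illustration of mapping a skeleton pruning operation back to a mesh.
# Each skeleton node "owns" some mesh faces via a node -> faces mapping.
node_to_faces = {0: [0, 1], 1: [2], 2: [3, 4], 3: [5]}
all_faces = {0, 1, 2, 3, 4, 5}

# (a) + (b): suppose pruning the skeleton removed nodes 2 and 3 (a twig)
removed_nodes = {2, 3}

# (c): drop the mesh faces that correspond to the removed skeleton nodes
removed_faces = {f for n in removed_nodes for f in node_to_faces[n]}
kept_faces = all_faces - removed_faces

print(sorted(kept_faces))  # [0, 1, 2]
```

The hard part, as noted below in the comments, is defining such node-to-representation mappings for every operation, not just pruning.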

I will add to the above list as I think of more stuff.

Perhaps some of the above can be tackled in a longer hackathon?

@schlegelp schlegelp added the enhancement New feature or request label Aug 14, 2024
@schlegelp schlegelp pinned this issue Aug 14, 2024
bdpedigo (Contributor):

big +1!

This kind of neuron.mesh / neuron.skeleton access is exactly what I had in mind.

Some challenges I could see:

  • defining what happens to all representations when you operate on one of them. Your prune_twigs is a perfect example where you've figured out how to define that operation mapping, but I wonder whether it generalizes easily to all cases/operations
  • same as the above, but whether to map operations from one representation to another eagerly or in some kind of lazy manner
  • as you mention, how to hide these backend implementation details when the user doesn't care about the underlying representation, while still allowing customizability for people who do
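The lazy-mapping point could be prototyped with simple cache invalidation: derive a representation on first access and throw the cached copy away whenever the primary representation changes. A hypothetical sketch (invented names, not navis code):

```python
# Sketch of "lazy" cross-representation syncing: the skeleton is derived
# from the mesh on demand and invalidated when the mesh is edited.

class Neuron:
    def __init__(self, mesh):
        self._mesh = mesh
        self._skeleton = None  # derived lazily, never eagerly

    @property
    def skeleton(self):
        if self._skeleton is None:
            # stand-in for a real skeletonization step
            self._skeleton = f"skeleton-of-{len(self._mesh)}-faces"
        return self._skeleton

    def edit_mesh(self, new_mesh):
        self._mesh = new_mesh
        self._skeleton = None  # invalidate the cached skeleton


n = Neuron(mesh=[1, 2, 3])
print(n.skeleton)  # skeleton-of-3-faces
n.edit_mesh([1, 2, 3, 4])
print(n.skeleton)  # skeleton-of-4-faces  (re-derived after the edit)
```

Eager mapping would instead recompute the skeleton inside `edit_mesh`; the trade-off is paying the cost up front versus on first access.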

bdpedigo (Contributor):

re: visualization, I've been using https://github.com/pyvista/pyvista a lot lately. I wonder what the pros/cons are relative to pygfx. One thing I appreciate about pyvista is that it seems to have a big user base, which helps when googling for solutions.

bdpedigo (Contributor):

re: representations:

  • non-tree spatial graphs: for example the level 2 graph used in CAVE, which isn't a DAG but also isn't exactly a mesh as we normally think of it. However, meshparty actually just treats this spatial graph as a mesh in the code, so it might be easy to support this representation too
  • condensed trees / segment graphs: taking the skeleton and converting it so that each node is an entire segment (branch to branch or branch to tip) of the skeleton. I've used these for visualizations, and they can be helpful for some computations too
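Condensing a skeleton into a segment graph can be sketched with plain dicts: collapse every linear run between breakpoints (root, branch points, tips) into one segment. Toy code for illustration, not meshparty or navis API:

```python
# Condense a rooted skeleton tree into segments between breakpoints.
# The tree here is made up: node -> parent (None marks the root).
parents = {0: None, 1: 0, 2: 1, 3: 1, 4: 2, 5: 3}

children = {}
for node, par in parents.items():
    children.setdefault(par, []).append(node)

# Breakpoints: branch points (>1 child) and, implicitly, root and tips.
branch_points = {n for n, cs in children.items()
                 if n is not None and len(cs) > 1}
tips = [n for n in parents if n not in children]

segments = []
for start in tips + sorted(branch_points):
    seg = [start]
    node = parents[start]
    # walk rootward until we hit a branch point or run past the root
    while node is not None and node not in branch_points:
        seg.append(node)
        node = parents[node]
    if node is not None:
        seg.append(node)
    segments.append(seg[::-1])  # orient each segment root -> tip

print(segments)  # [[1, 2, 4], [1, 3, 5], [0, 1]]
```

Each segment then becomes a single node/edge in the condensed graph, with breakpoint nodes shared between adjacent segments.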

bdpedigo (Contributor):

But yes, would be a big fan of a morphology representations hackathon! Perhaps as a way to get started, it could be worth writing down a kind of desired API, and working backwards from there in terms of how to implement things?

ceesem commented Aug 16, 2024

Unsurprisingly, Ben and I are on the same page here with our goals.

A few things that I have come to believe are useful generalizations/distinctions for such a project:

  • Having a generic category of "SpatialGraph" is useful. That is to say, a non-skeleton structure with vertices in space and edges between them. This could be the mesh, the level 2 graph, or any other wireframe of a neuron. The key feature is that it's a single connected component and that any annotation you might want can be associated with a vertex of this graph, but it's not necessarily tree-like.
  • It is useful to distinguish point-like annotations (e.g. synapses, mitochondria), which can have an any-number-to-one relationship with vertices, from labels, which are necessarily in 1:1 correspondence with vertices. Labels could be anything from neuronal compartment to radius to SegCLR embeddings.
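The annotation/label distinction can be sketched in plain Python: labels are validated to match the vertex count, while annotations are free-form records that merely reference vertex indices. All names are invented for illustration:

```python
# Labels vs. point-like annotations, as described above.
n_vertices = 5

# Labels: exactly one value per vertex (here, a compartment label)
compartment = ["axon", "axon", "dendrite", "dendrite", "soma"]
assert len(compartment) == n_vertices  # 1:1 correspondence enforced

# Annotations: any number of records, each tied to a vertex index;
# several annotations may point at the same vertex.
synapses = [
    {"vertex": 1, "partner": 777},
    {"vertex": 1, "partner": 888},  # two synapses on one vertex is fine
    {"vertex": 4, "partner": 999},
]

# Joining them is a simple index lookup: label at each synapse's vertex
print([compartment[s["vertex"]] for s in synapses])
# ['axon', 'axon', 'soma']
```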

ceesem commented Aug 16, 2024

In fact, to not just totally repeat what @bdpedigo said while I was already typing, a key distinction between the mesh and a spatial graph is that the mesh is built of faces and is typically not a single connected component. We always have to remove disconnected faces and graft in connecting edges in order to build a "nice" single component graph out of it, which already makes it not a true triangle mesh.

bdpedigo (Contributor):

> Having a generic category of "SpatialGraph" is useful.

Exactly. A TreeNeuron/skeleton could then inherit from SpatialGraph but add extra methods specific to trees (e.g. a notion of tips and a root).
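A minimal sketch of that inheritance (invented names, not the current navis classes):

```python
# SpatialGraph base class with a tree-specific Skeleton subclass.

class SpatialGraph:
    """Vertices in space plus edges; not necessarily tree-like."""

    def __init__(self, vertices, edges):
        self.vertices = vertices  # e.g. list of (x, y, z) tuples
        self.edges = edges        # list of (u, v) vertex-index pairs

    def degree(self, v):
        """Number of edges touching vertex v."""
        return sum(v in e for e in self.edges)


class Skeleton(SpatialGraph):
    """Tree-shaped spatial graph with a root, so tips are well-defined."""

    def __init__(self, vertices, edges, root):
        super().__init__(vertices, edges)
        self.root = root

    @property
    def tips(self):
        # leaves: degree-1 vertices that are not the root
        return [i for i in range(len(self.vertices))
                if self.degree(i) == 1 and i != self.root]


s = Skeleton(vertices=[(0, 0, 0), (1, 0, 0), (2, 0, 0), (2, 1, 0)],
             edges=[(0, 1), (1, 2), (1, 3)], root=0)
print(s.tips)  # [2, 3]
```

A level 2 graph or mesh wireframe would instantiate `SpatialGraph` directly, since tips and root are only meaningful for trees.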

@navis-org navis-org locked and limited conversation to collaborators Aug 17, 2024
@schlegelp schlegelp converted this issue into discussion #156 Aug 17, 2024
@schlegelp schlegelp unpinned this issue Dec 4, 2024
