
Supervised ID docs #1878

Open

talmo opened this issue Jul 23, 2024 · 0 comments

Labels: documentation (This issue exists because the documentation is outdated.)

talmo (Collaborator) commented Jul 23, 2024

We should add a guide about using supervised ID models to the docs. This comes up from time to time and it'd be great to have a link to point users to.

Relevant content from past user interactions

If you have a set of unique markers or visual appearance features, the easiest way to get started with those in SLEAP is to use one of the supervised ID models. These are in the dropdown menu when you select a model type for training. The only requirement is that you set a track on every user instance that you've labeled. You'll want to name the tracks by their identity class, for example, "male", "female", or "male_with_ear_clip": something identifiable that you can use to assign animals to classes. After you've assigned all your labeled instances to their corresponding identity classes, the supervised ID models will predict class probabilities and assign predicted instances to those classes based on their appearance. These models can be a bit finicky to train since you're optimizing for two very different objectives, but give it a go if this approach works for you and let us know if you have any issues.
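As a quick sanity check on that requirement, something like this sketch can count how many user instances still lack a track. It uses SLEAP's Python API; the file path is a placeholder and the attribute names are to the best of my recollection, so double-check them against your SLEAP version:

```python
import sleap

# Load your labeling project (path is a placeholder).
labels = sleap.load_file("labels.v001.slp")

# Find user-labeled instances that still lack a track assignment.
# Supervised ID models only train on user instances that have a track.
missing = [
    (lf.frame_idx, inst)
    for lf in labels.user_labeled_frames
    for inst in lf.user_instances
    if inst.track is None
]
print(f"{len(missing)} user instances still need a track assigned.")
```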


The idea is that, for each of your labeled frames, you would assign each animal to a track based on its "identity class" in the GUI. For example, if you had two mice with different colored fur, you could create two tracks, "dark fur" and "light fur", and go back through your labels and assign each instance to one of those two. Then we can train an ID model that works just like your current ones, but that also assigns a track to each animal based on learned visual patterning. This gets around a lot of issues with tracking, especially if you have animals with very distinct appearances. It essentially eliminates the need to proofread at all!

For your use case, you'll want to assign each of the distinct tags to one of these classes. You could just call them A, B, etc., or name them based on the pattern or location.

If you'd like to give it a go, we can start off by adding a track to all of the instances in your labeling project. You can do that in the GUI as follows (a scripted alternative is sketched after these steps):

  1. Go to a frame with labeled animals.

  2. Click on one of the animals so there's a box around it denoting that it's selected. It should say "Track: none" below it.

  3. Go to Tracks menu → Set Instance Track... → New Track

  4. From the Instances panel on the right, you should see the new track next to the instance in the table. You can rename the track by double-clicking it and typing in a name like "dark fur" or "light fur".

  5. Repeat this for every animal in the frame.

  6. For the remaining frames, assign each labeled animal to a track by clicking on it to select it and pressing the associated shortcut (Ctrl + 1 through Ctrl + 9). If you just hold down Ctrl, it'll show you which shortcuts are assigned to which tracks.

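If you have many labeled frames, a scripted first pass can help, though the actual identity assignment still needs your judgment about which animal is which. A minimal sketch, assuming a toy left-to-right rule (only sensible if the animals never swap sides), placeholder file paths, and API attribute names you should double-check against your SLEAP version:

```python
import numpy as np
import sleap
from sleap.instance import Track

# Load the labeling project (path is a placeholder).
labels = sleap.load_file("labels.v001.slp")

# Create one track per identity class; the names are just examples.
dark = Track(spawned_on=0, name="dark fur")
light = Track(spawned_on=0, name="light fur")
labels.tracks.extend([dark, light])

# Toy rule: assign by left-to-right position in each frame. In practice
# you'd usually assign identities in the GUI as described above.
for lf in labels.user_labeled_frames:
    insts = sorted(
        lf.user_instances,
        key=lambda inst: np.nanmean(inst.points_array, axis=0)[0],
    )
    for inst, track in zip(insts, [dark, light]):
        inst.track = track

labels.save("labels.with_tracks.slp")
```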

Once you've done that, we can train one of the new model types. Just shoot me your current model config and I'll send you back a config for the corresponding ID model that you can train using sleap-train.

(Note: This is now available from the SLEAP GUI.)

If you're comfortable with editing the model configuration JSON files yourself, all you need to do is specify the new model head type (e.g., "multi_class_topdown") with similar parameters to what you had previously for the non-ID model (e.g., "centered_instance"):

[Screenshot: training config JSON with the "multi_class_topdown" head and its classification loss_weight]

The one thing you may want to tune is the loss_weight for the classification head. In this screenshot I have it set to 0.001, but you may want to decrease it to 0.0001 if your poses are looking worse, or increase it to 0.01 if the IDs aren't working.
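If you'd rather script the JSON edit, here's a minimal sketch using Python's built-in json module. The key layout follows the training_config.json structure, but the exact sub-keys of the class_vectors head are assumptions, so compare against a config exported from the GUI:

```python
import json

# Load an existing non-ID config (path is a placeholder).
with open("centered_instance_config.json") as f:
    cfg = json.load(f)

heads = cfg["model"]["heads"]

# Reuse the old head's pose parameters and add a classification branch.
heads["multi_class_topdown"] = {
    "confmaps": heads["centered_instance"],  # keep the same pose parameters
    "class_vectors": {
        # Class names must match your track names exactly (examples here).
        "classes": ["dark fur", "light fur"],
        "loss_weight": 0.001,  # starting point; tune as described above
    },
}
heads["centered_instance"] = None  # only one head type should be set

with open("multi_class_topdown_config.json", "w") as f:
    json.dump(cfg, f, indent=4)
```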

You can also reference a couple of examples of these config files for bottom-up and top-down model types.

Once the model(s) are trained, you can use them as normal with sleap-track; you don't have to specify any of the tracking parameters, since identities come from the model itself.
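Equivalently, from the Python API (a sketch; the model and video paths are placeholders):

```python
import sleap

# Load the trained centroid + ID top-down models as a single predictor
# (paths are placeholders for your trained model folders).
predictor = sleap.load_model([
    "models/centroid_model",
    "models/multi_class_topdown_model",
])

# Run inference; tracks are assigned by the classification head,
# so no tracker configuration is needed.
video = sleap.load_video("session1.mp4")
predictions = predictor.predict(video)
predictions.save("session1.predictions.slp")
```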


A few more questions: I noticed that you actually have 5 possible models, including top-down-id and bottom-up-id models. Can you explain the difference from the non-ID models? I cannot find that info in your documentation.

Yep, there's a general description in this part of the SLEAP paper. It's not in the docs because we haven't gotten around to it, and because it's not the best-behaved model type: it can be tricky to train well across different settings since tuning the loss weights between pose and identity is finicky.

The only requirement for those models is that you have your user-labeled instances assigned to tracks with consistent names (e.g., "male", "neck_dye", or "torso_dye"). After assigning an instance to a track manually (Ctrl + 1-9, or from the Tracks menu), you can click over to the Instances tab and double-click on its track name to edit it. For ID models, only user instances with tracks assigned to them will be used for training.
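To catch inconsistent names (stray typos or case differences), a quick tally along these lines can help (the path is a placeholder; double-check attribute names against your SLEAP version):

```python
from collections import Counter

import sleap

labels = sleap.load_file("labels.v001.slp")

# Tally the track names actually used by user-labeled instances;
# only these instances contribute to ID model training.
counts = Counter(
    inst.track.name
    for lf in labels.user_labeled_frames
    for inst in lf.user_instances
    if inst.track is not None
)
print(counts)  # e.g. Counter({'male': 120, 'female': 118, 'Male': 2})
```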

Then you can try the starter configurations and, if identities or poses are underperforming, tune the loss weights at the bottom right of the configuration window.

talmo added the documentation label on Jul 23, 2024
eberrigan self-assigned this on Dec 20, 2024