k2SSL: A Faster and Better Framework for Self-Supervised Speech Representation Learning #1745
base: master
Conversation
yfyeung commented on Sep 7, 2024 (edited)
- Libri-Light data processing script
- Libri-Light Zipformer multi-node-multi-GPU pre-train recipe (see the launch sketch after this list)
- LibriSpeech Zipformer BPE-level pruned RNN-T fine-tune recipe
- LibriSpeech Zipformer letter-level CTC fine-tune recipe
- Release all resources and results for Zipformer Base
- Release all resources and results for Zipformer Large
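For readers unfamiliar with how a multi-node-multi-GPU pre-train run is usually wired up, here is a minimal sketch using plain PyTorch DDP. It is not the recipe's actual code; the model stand-in and all names below are illustrative, and the real entry points are `pretrain.py` / `pretrain.sh` in this PR.

```python
# Minimal DDP skeleton (sketch only, not the recipe itself).
# torchrun sets RANK, LOCAL_RANK, WORLD_SIZE, MASTER_ADDR, and MASTER_PORT
# for every spawned process.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # One process per GPU; NCCL backend for GPU collectives.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Stand-in for the Zipformer encoder used in the recipe.
    model = torch.nn.Linear(80, 512).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    # ... build the SSL data module, optimizer, and training loop here ...

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Such a script would typically be launched on each node with something like `torchrun --nnodes=<N> --nproc_per_node=8 --node_rank=<rank> --master_addr=<addr> --master_port=29500 pretrain.py`; the recipe's own `pretrain.sh` defines the actual command line.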
Part of the resources has been released: Zipformer Base pre-trained with cross-entropy loss (checkpoints, logs, and scripts). With these resources, I believe anyone with 8 V100 32GB GPUs can easily reproduce our experiments.
@yfyeung Thank you very much for the PR and for sharing the model weights. Do you also plan to release a paper?
Maybe, but I am currently not sure whether it is better suited to a technical report or a regular research paper.