Distillation of acoustic models. In collaboration with the Danish Foundational Models project. This repository implements the student-teacher distillation paradigm and uses it to distill Wav2Vec2 models into smaller Wav2Vec2 models. The fundamental idea follows that of DistilBERT.
All parameters for training, models, and datasets are set in `src/config.py`.
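Since the approach follows DistilBERT, the core training signal is a soft-label distillation loss: the student is trained to match the teacher's temperature-softened output distribution. A minimal sketch of that loss in plain Python is shown below; the names (`TEMPERATURE`, `distillation_loss`) are illustrative and not the repository's actual API.

```python
# Illustrative sketch of a DistilBERT-style distillation loss:
# KL divergence between temperature-softened teacher and student
# distributions, scaled by T^2. Not the repo's actual implementation.
import math

TEMPERATURE = 2.0  # softens both distributions; hypothetical default


def softmax(logits, temperature):
    """Numerically stable temperature-scaled softmax over a list of logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]


def distillation_loss(student_logits, teacher_logits, temperature=TEMPERATURE):
    """KL(teacher || student) on softened distributions, scaled by T^2
    as in the DistilBERT recipe."""
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return kl * temperature ** 2


# A student that exactly matches the teacher incurs zero loss.
print(distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # → 0.0
```

In the full recipe this soft-label term is typically combined with the student's ordinary task loss; the temperature and the weighting between the two terms are the main knobs exposed through the training configuration.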
Developers:
- Anders Jess Pedersen ([email protected])