LoRA config #16

Open
VafaKnm opened this issue Jul 20, 2024 · 0 comments

Comments

@VafaKnm

VafaKnm commented Jul 20, 2024

Hi! First, I want to thank you for sharing your valuable experience.
I have a large dataset of over 6,000 hours of labeled audio, containing over 5 million rows, and I want to fine-tune Whisper large v3 with it. Given the size of the data, I am worried that the current configuration is not suitable for fine-tuning. I think it would be better to double the rank (64 instead of 32) and alpha (128 instead of 64) to increase the number of trainable parameters.
What do you think about this idea? Is this a correct conclusion or not?
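For reference, a minimal sketch of what the proposed change might look like with PEFT, assuming a standard `LoraConfig` applied to Whisper's attention projections (the `target_modules`, dropout, and other values here are illustrative assumptions, not the repository's actual config):

```python
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

# Proposed values from this issue: rank doubled to 64, alpha doubled to 128.
lora_config = LoraConfig(
    r=64,                                  # proposed rank (was 32)
    lora_alpha=128,                        # proposed alpha (was 64)
    target_modules=["q_proj", "v_proj"],   # assumption: commonly-targeted Whisper attention projections
    lora_dropout=0.05,                     # assumption
    bias="none",
)

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # compare trainable parameter count against the r=32 config
```

Note that keeping the alpha-to-rank ratio at 2 (as above) leaves the effective LoRA scaling `alpha / r` unchanged, so the main effect of the change is the larger adapter capacity rather than a different update scale.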
