Hi! First, I want to thank you for sharing your valuable experience.
I have a large dataset of over 6,000 hours of labeled audio, containing more than 5 million rows, and I want to fine-tune Whisper large-v3 with it. Given the size of the data, I am worried that this configuration is not suitable for fine-tuning. I think it would be better to double the rank (64 instead of 32) and alpha (128 instead of 64) to increase the number of trainable parameters.
What do you think about this idea? Is this conclusion correct?
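To make the trade-off concrete, here is a rough back-of-envelope sketch of how doubling the rank affects the trainable parameter count. The layer counts and target modules below are assumptions (encoder self-attention, decoder self- and cross-attention, LoRA on `q_proj` and `v_proj` only, `d_model = 1280` for Whisper large-v3); check them against your actual `LoraConfig` and model before relying on the numbers.

```python
# Back-of-envelope LoRA parameter count for Whisper large-v3.
# Assumptions (hypothetical -- verify against your actual config):
#   d_model = 1280; 32 encoder self-attn, 32 decoder self-attn,
#   and 32 decoder cross-attn blocks; LoRA on q_proj and v_proj only.

D_MODEL = 1280
N_ATTN_BLOCKS = 32 + 32 + 32   # encoder self, decoder self, decoder cross
TARGET_MATRICES_PER_BLOCK = 2  # q_proj and v_proj

def lora_trainable_params(rank: int) -> int:
    # Each adapted d x d matrix gains two low-rank factors,
    # A (rank x d) and B (d x rank): rank * (d_in + d_out) params.
    per_matrix = rank * (D_MODEL + D_MODEL)
    return N_ATTN_BLOCKS * TARGET_MATRICES_PER_BLOCK * per_matrix

for r in (32, 64):
    print(f"rank={r}: ~{lora_trainable_params(r) / 1e6:.1f}M trainable params")
```

Under these assumptions, doubling the rank exactly doubles the trainable parameters, while keeping `alpha = 2 * rank` leaves the effective scaling factor `alpha / rank` unchanged, so the update magnitude per rank stays the same.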