Add Orca if it ever releases #47
Interesting find. And more evidence of the growing importance of synthetic instruction-tuning data.
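For anyone curious, the paper's "explanation tuning" recipe boils down to something like the rough sketch below: take plain instructions (the paper samples from FLAN-v2), pair each with a system message that asks the teacher for a detailed, step-by-step answer, and keep the teacher's response as the training target. Everything here is illustrative, not Microsoft's actual pipeline (which, as noted elsewhere in this thread, was never released); the system messages are paraphrased from the paper and `call_teacher` is a hypothetical placeholder for a real teacher-model API (the paper used ChatGPT and GPT-4).

```python
import json
import random
from typing import Callable

# Paraphrased examples of the system messages the paper samples from
# to elicit explanation-rich responses from the teacher model.
SYSTEM_MESSAGES = [
    "You are an AI assistant. Provide a detailed answer so the user "
    "doesn't need to search outside to understand the answer.",
    "You are an AI assistant that helps people find information. "
    "Answer as if you were explaining to a five-year-old.",
    "You are an AI assistant. Think step by step and justify your answer.",
]

def make_record(instruction: str,
                call_teacher: Callable[[str, str], str]) -> dict:
    """Build one (system, user, assistant) training triple by asking the
    teacher model to answer `instruction` under a sampled system message."""
    system = random.choice(SYSTEM_MESSAGES)
    return {
        "system": system,
        "user": instruction,
        "assistant": call_teacher(system, instruction),  # e.g. a GPT-4 call
    }

if __name__ == "__main__":
    # Stub teacher for demonstration only; swap in a real model call.
    fake_teacher = lambda system, user: "Step 1: ... Step 2: ..."
    print(json.dumps(make_record("Why is the sky blue?", fake_teacher)))
```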
Looks like there may be some version of it here: https://huggingface.co/yhyhy3/med-orca-instruct-33b-GPTQ
I think most of the "Orca" models on Hugging Face are projects that used an approach similar to the one described in the Microsoft paper. AFAIK they are not actual Orca releases.
Nah. There is a new preprint that says
But nothing is open-sourced; this is a Llama 2 finetune where only the instruction-tuned (or, as they call it, explanation-tuned) model weights are made available, and none of the instruction/explanation datasets or the source code is released. Thanks, Meta, for thoroughly diluting the term "open source", and thanks, Microsoft, for further contributing to it.
Whitepaper: https://arxiv.org/pdf/2306.02707.pdf
Will be released here: https://aka.ms/orca-lm
Summary: https://www.youtube.com/watch?v=Dt_UNg7Mchg