Replies: 2 comments 2 replies
- For instance, which benchmark dataset is employed for the evaluation? How is instruction-based fine-tuning configured for this evaluation?
- Hey - the leaderboard comes from an intern project running Amazon CCEval data on certain models. The original data/code can be found at https://github.com/TabbyML/tabby/pull/642/files for StarCoder-1B / StarCoder-3B vs WizardCoder-1B / WizardCoder-3B. An overview of the method can be found in this blog post: https://tabby.tabbyml.com/blog/2023/11/23/coding-llm-leaderboard
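  For anyone trying to get a feel for what such an evaluation computes: CCEval-style code-completion benchmarks typically compare each model's next-line completion against the reference line. Below is a minimal sketch of that kind of line-level scoring (exact match plus edit similarity). The file name `predictions.jsonl` and the field names `prediction` / `reference` are assumptions for illustration only, not the actual schema used in the PR or blog post.

  ```python
  # Hypothetical CCEval-style scoring sketch; not the code from the linked PR.
  import json
  from difflib import SequenceMatcher

  def first_line(text: str) -> str:
      """Reduce a completion to its first non-empty line."""
      for line in text.splitlines():
          if line.strip():
              return line.strip()
      return ""

  def score(path: str = "predictions.jsonl") -> dict:
      exact, similarity, total = 0, 0.0, 0
      with open(path) as f:
          for row in map(json.loads, f):
              pred = first_line(row["prediction"])  # model output (assumed field name)
              ref = first_line(row["reference"])    # ground-truth line (assumed field name)
              exact += int(pred == ref)
              similarity += SequenceMatcher(None, pred, ref).ratio()
              total += 1
      return {
          "exact_match": exact / total,            # fraction of exact next-line matches
          "edit_similarity": similarity / total,   # mean character-level similarity
      }

  if __name__ == "__main__":
      print(score())
  ```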
- Checking https://leaderboard.tabbyml.com, I found there isn't any specification or documentation for this leaderboard. Is there a relevant repository, paper, report, or blog post?