Fine-tuned GPT model for TVM instructions #392
Comments
I have already come up with a draft solution for optimally fine-tuning GPT-3.5-turbo for this task. The process is multi-step and should, in theory, lead to good results. The training itself is estimated to require approximately $300-500 in API usage. The remainder of the suggested reward compensates for my time and reflects my experience with both Large Language Models (LLMs) and TVM.
If it goes well and the results are satisfactory, I will be happy to work on more fine-tuning tasks for other parts of TON.
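The multi-step process described above would presumably start by converting TVM instruction documentation into training data. As a minimal sketch, assuming OpenAI's documented chat-format JSONL for fine-tuning (one JSON object per line with `system`/`user`/`assistant` messages); the instruction entry shown is a simplified placeholder, not the real specification:

```python
import json

# Placeholder instruction data for illustration only; a real run would
# load the full list of 700+ documented TVM instructions.
instructions = [
    {
        "mnemonic": "ADD",
        "doc": "Pops two integers from the stack and pushes their sum.",
    },
]

def to_training_example(entry):
    """Wrap one instruction description as a system/user/assistant triple,
    the message shape the OpenAI fine-tuning endpoint expects."""
    return {
        "messages": [
            {"role": "system", "content": "You are an expert on TVM instructions."},
            {"role": "user", "content": f"What does the {entry['mnemonic']} instruction do?"},
            {"role": "assistant", "content": entry["doc"]},
        ]
    }

# Write one JSON object per line (JSONL), as required for training files.
with open("tvm_finetune.jsonl", "w") as f:
    for entry in instructions:
        f.write(json.dumps(to_training_example(entry)) + "\n")
```

The resulting file would then be uploaded and referenced when creating a fine-tuning job; the filename and prompt wording here are illustrative choices, not part of the original proposal.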
Please use this instead: https://openai.com/blog/introducing-gpts
"GPTs" and "Assistants" weren't really made for such tasks. They perform well only on general tasks that don't require any additional knowledge. Fine-tuning, on the other hand, lets you make an LLM learn something new.
This is a critical step toward simplifying development and enjoying a better Developer Experience (DX). Thanks, @Gusarich, for initiating this effort. Beyond this, I suggest we scale up rewards in general. Nowadays we consider how much time someone needs to accomplish a task; this is wrong. We should consider how many years it takes for a person to reach the point where they can accomplish such tasks, and also how much impact the endeavor has on overall community growth. I suggest $3000 for this task.
It's a valuable tool, but currently, it hasn't resonated with the community as I hoped. We're also aware of a similar tool being developed at the moment. Consequently, we're closing this Bounty for now, but there's a possibility it might be revisited in the future. Thank you for your contribution to the community!
Summary
Deploy an LLM that fully understands TVM instructions to assist developers with various tasks.
Context
There are more than 700 distinct TVM instructions, and remembering all of them in detail is practically impossible, even for those experienced in low-level programming on TON. This is where LLMs (Large Language Models) can be immensely helpful. They understand the entire context of your question, as well as the comprehensive list of instructions, and can assist developers in a conversational manner.
Such models could provide multifaceted support, including:
References
Estimate suggested reward
$2000