
Fine-tuning AI models #3

Open
widal001 opened this issue Dec 20, 2024 · 0 comments
Labels
research AI topic or tool to research

Comments

@widal001 (Collaborator)

Topic

Base models (e.g. GPT-4o, Gemini) often provide strong performance out of the box, and their responses can be improved further with Retrieval-Augmented Generation (RAG). Fine-tuning a model (adjusting its weights with additional custom data), however, can provide an additional performance boost and make the model more sensitive to a particular use case.

Questions

  • What is the performance difference between:
    • Base models
    • Base models + RAG
    • Fine-tuned models
    • Fine-tuned models + RAG
  • How much data is needed to meaningfully tune a model?
  • What's the process for tuning a model? How much technical expertise does it require?
  • What's the cost associated with tuning a model?
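As a starting point for the process question above, a minimal sketch of preparing fine-tuning data is shown below. It uses the chat-style JSONL format accepted by several hosted fine-tuning APIs (e.g. OpenAI's); the example records, system prompt, and file name are hypothetical placeholders, not part of any real dataset.

```python
import json

# Hypothetical training examples in the {"messages": [...]} chat format.
# Each record pairs a user question with the desired assistant answer.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You answer questions about grant programs."},
            {"role": "user", "content": "What is the application deadline?"},
            {"role": "assistant", "content": "Deadlines vary by program; check the program listing."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You answer questions about grant programs."},
            {"role": "user", "content": "Who is eligible to apply?"},
            {"role": "assistant", "content": "Eligibility is listed in each program's notice of funding."},
        ]
    },
]

def write_jsonl(records, path):
    """Write one JSON object per line -- the layout fine-tuning endpoints typically expect."""
    with open(path, "w", encoding="utf-8") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")

write_jsonl(examples, "train.jsonl")
print(f"wrote {len(examples)} training examples")
```

Real fine-tuning jobs usually call for hundreds to thousands of such examples; the upload and training steps themselves are provider-specific and are part of what this research topic should pin down.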

Relevant resources

No response

@widal001 widal001 added the research AI topic or tool to research label Dec 20, 2024