Use big LLM to better align source and enhance source corpora #622
More ideas:
- How would we prompt the LLM? What would we tell it, and what would we put in the context window?
- Should we fine-tune a 7 GB model each time, fine-tune a 70 GB model (using Apollo) once on the 2 H100s, or run inference off of a 400 GB model? Which would give the best results?
- Could we use reinforcement learning over a large set of translations and their back-translations?
- Is there a way to do a "spike" without LLMs?
For each type, use the following amounts of training data. This test would show us the upper limit (type 3) for this concept: both whether it helps with partial NTs (crossbow) and whether it helps throughout the whole Bible. We need to find a set of Bibles with back-translations to be used as references for these experiments.
Recommendation: get at least 5 Bibles, do types 1-3 (no LLM), and use that data to direct and prioritize future work.
@woodwardmw - this may be interesting to you as well. I don't know if you want to test it out.
Yeah, very interesting. I like the idea of training on the back translation and then creating extra "back translation" to use as a source for inference, as long as it can be generated without going too far from the actual Bible text. My feeling is that the way forward in general is to keep the current NLLB (or MADLAD) model as the main translation engine, and to focus on LLM pre- and post-processing to improve results.
So, this is a crazy idea. LLMs are very good at producing English text, rewording things, and understanding context. What if we gave an LLM a source (such as the ESV) and a back-translation and said, "make more of the back-translation using the ESV as a source"? It could add explications, supply different contexts, and imitate phrase reordering. Moreover, we could also add Bible reference material to the context, and it should be able to give the source better target context, mirroring what the existing back-translations have, both scripturally and culturally.
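A minimal sketch of what that prompt construction could look like, assuming a generic few-shot setup; the example verse pairs, the reference-material argument, and the overall wording are hypothetical placeholders, not an agreed design:

```python
# Hypothetical sketch: build a few-shot prompt asking an LLM to produce more
# "back-translation style" source text from an ESV verse. The example pairs
# below are illustrative placeholders, not real back-translation data.

FEW_SHOT_PAIRS = [
    # (ESV source, existing back-translation of the target text)
    ("In the beginning, God created the heavens and the earth.",
     "Long ago, at the very first, God made the sky and the ground."),
    ("And God said, \"Let there be light,\" and there was light.",
     "Then God spoke, saying, \"Light must appear,\" and light appeared."),
]

def build_prompt(esv_verse: str, reference_notes: str = "") -> str:
    """Assemble a prompt that asks the LLM to imitate the style of the
    existing back-translations (explication, phrase reordering, cultural
    framing) without drifting from the meaning of the ESV source."""
    examples = "\n\n".join(
        f"ESV: {src}\nBack-translation style: {bt}" for src, bt in FEW_SHOT_PAIRS
    )
    context = (
        f"\nRelevant reference material:\n{reference_notes}\n" if reference_notes else ""
    )
    return (
        "You are helping prepare training data for a Bible translation model.\n"
        "Rewrite the ESV verse below in the same style as the back-translation "
        "examples: keep the meaning, but imitate their explication, phrase "
        "ordering, and cultural framing.\n\n"
        f"{examples}\n{context}\n"
        f"ESV: {esv_verse}\nBack-translation style:"
    )

print(build_prompt("The LORD is my shepherd; I shall not want."))
```

The text the LLM returns would be the "target-aligned source" described below, which could then be reviewed before it is used for inference.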
We could take this newly generated "target-aligned source" and then (optionally) give it to the translators and let them correct it to be more accurate to what it should say. After that optional step, we could feed it to an NLLB model that has been trained only on back-translation and target data, and it should then spit out pretty close target data.
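For that last step, here is a minimal sketch of running the generated target-aligned source through an NLLB checkpoint with Hugging Face transformers; the fine-tuned model path and the target language code are placeholders, and the checkpoint is assumed to have been fine-tuned only on back-translation/target pairs as described above:

```python
# Hypothetical sketch: translate generated "target-aligned source" lines with
# an NLLB model fine-tuned on back-translation -> target pairs.
# MODEL_PATH and TARGET_LANG are placeholders, not real project values.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_PATH = "path/to/nllb-finetuned-on-backtranslations"  # placeholder
TARGET_LANG = "xyz_Latn"  # placeholder NLLB language code for the target

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_PATH)

def translate(lines):
    """Translate a batch of target-aligned source lines into the target language."""
    inputs = tokenizer(lines, return_tensors="pt", padding=True, truncation=True)
    generated = model.generate(
        **inputs,
        forced_bos_token_id=tokenizer.convert_tokens_to_ids(TARGET_LANG),
        max_length=256,
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)

print(translate(["Long ago, at the very first, God made the sky and the ground."]))
```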
@ddaspit - what do you think?