Hi @dirkgr! Here is a feature that would be very desirable for decontamination, but I'm not sure how difficult it would be to implement in BFF:
The essential part of the feature would be to remove n-grams instead of whole paragraphs. This is the approach used by GPT-3 and TNLG and, aside from MinHash-based decontamination, it's the only other decontamination approach that has actually been used on LLMs in the literature.
Here is the exact specification from TNLG: "We use n-grams to remove texts that occur in the downstream tasks from the training datasets. When we find an n-gram match between a task document and a training document, we split the training document into two pieces by removing the n-gram along with 200 characters from both of its sides. We also remove any split training document with fewer than 200 characters, or training documents which were split more than 10 times."
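For concreteness, here is a rough sketch of how I imagine the splitting step could look (plain Python; `contaminated_spans` is a hypothetical helper that would return character offsets of n-grams also found in the eval set, e.g. via a Bloom filter lookup; the 200-character margin and 10-split limit are taken from the TNLG description above):

```python
MARGIN = 200       # characters removed on each side of a matched n-gram
MIN_PIECE = 200    # resulting pieces shorter than this are dropped
MAX_SPLITS = 10    # documents split more than this many times are dropped entirely

def decontaminate(text, contaminated_spans):
    """Split `text` around contaminated n-grams, TNLG-style.

    `contaminated_spans(text)` is assumed to yield (start, end) character
    offsets of n-grams in `text` that also occur in the eval set.
    Overlapping matches are handled by simply advancing past the cut region.
    """
    pieces = []
    cursor = 0
    n_splits = 0
    for start, end in sorted(contaminated_spans(text)):
        cut_start = max(cursor, start - MARGIN)
        cut_end = min(len(text), end + MARGIN)
        if cut_start > cursor:
            pieces.append(text[cursor:cut_start])
        cursor = max(cursor, cut_end)
        n_splits += 1
    pieces.append(text[cursor:])

    if n_splits > MAX_SPLITS:
        return []  # document split too many times: drop it entirely
    return [p for p in pieces if len(p) >= MIN_PIECE]
```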
I wonder whether BFF's Bloom filter would be necessary to store all the n-grams from a combined eval set of several million documents? If not, I should probably consider building this into the WIMBD tool instead.
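Back-of-the-envelope sketch of the filter size (my assumptions, not measured: a few million eval documents at roughly 100 n-grams each, and the standard Bloom filter sizing formula m = -n ln p / (ln 2)^2 bits for n items at false positive rate p):

```python
import math

def bloom_filter_bytes(n_items, false_positive_rate):
    """Optimal Bloom filter size in bytes: m = -n * ln(p) / (ln 2)^2 bits."""
    bits = -n_items * math.log(false_positive_rate) / (math.log(2) ** 2)
    return bits / 8

# Assumed numbers: ~5 million eval documents, ~100 n-grams each.
n_ngrams = 5_000_000 * 100
for p in (1e-3, 1e-6):
    gb = bloom_filter_bytes(n_ngrams, p) / 1e9
    print(f"{n_ngrams:,} n-grams at fp rate {p:g}: ~{gb:.1f} GB")
```

Under those assumptions the filter stays in the low single-digit GB range, which suggests holding the eval-set n-grams in a Bloom filter should be comfortably feasible.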
Thanks for any thoughts you have on this!