
[Pythia on Pile-Dedup] Training for ~1.5 epochs: how to identify the repeated sequences (i.e., the additional .5 epoch)? #144

Open
pietrolesci opened this issue Jan 9, 2024 · 3 comments

Comments

@pietrolesci

Hi there,

The deduplicated dataset has fewer sequences, so to keep the token count consistent with the non-deduplicated version, the models are trained for ~1.5 epochs (as discussed in the README). Between epochs, are the data reshuffled, or does the dataloader simply start again from the beginning in the same order? If the latter, is there a way to know exactly which checkpoint is the first to see the same data twice? Put differently, is there a way to know which sequences the model sees in the additional ~half epoch?

Thanks a lot in advance for your help!

cc @haileyschoelkopf

@jeffreygwang

Hey! I had similar questions a while back for a paper in which we used the Pythia suite. To the best of my understanding, the models are trained for ~1.5 epochs, and roughly the first half of the data (in the same order) is seen twice. The Pythia paper reports how many total tokens each model sees and how many it sees in the first pass; based on those numbers, I use the step98000 checkpoint as my full "single-pass" checkpoint. I believe the checkpoints after that start "seeing double."
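
For anyone wanting to sanity-check this, here is a rough back-of-the-envelope sketch, assuming the batch size (1024) and sequence length (2048) reported in the Pythia paper, and taking ~207B tokens as the approximate size of the deduplicated Pile; treat the result as an estimate, not ground truth:

```python
# Rough estimate of the step at which the first pass over the
# deduplicated Pile ends. Batch size and sequence length are the
# values reported in the Pythia paper; the deduplicated token count
# (~207B) is an approximation.

tokens_per_step = 1024 * 2048   # sequences per batch * tokens per sequence
dedup_tokens = 207e9            # approximate size of the deduplicated Pile

first_pass_steps = dedup_tokens / tokens_per_step
print(f"first pass ends around step {first_pass_steps:,.0f}")
# -> first pass ends around step 98,705, so step98000 is the last
#    checkpoint on the 1000-step grid that lies inside the first epoch
```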

@pietrolesci
Author

Thanks a lot for your answer @jeffreygwang, this seems reasonable to me too!

@pietrolesci
Author

> Between epochs, are the data reshuffled, or does the dataloader simply start again from the beginning in the same order?

The answer seems to be that the dataloader does NOT simply restart from the beginning. Instead, the concatenation happens at the document level, i.e., before the "packing" step that chunks the token stream into fixed-length sequences. As a result, the same tokens can appear at different positions within a sequence on the second pass.
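
To make that concrete, here is a toy sketch (hypothetical token IDs and a made-up context length, not the actual Pythia preprocessing) showing why document-level concatenation before packing shifts where repeated tokens land:

```python
# Toy illustration: pack a document stream into fixed-length sequences,
# with part of the stream repeated at the document level. Because chunk
# boundaries are drawn over the concatenated token stream, a repeated
# document's tokens generally land at different offsets the second time.

SEQ_LEN = 8  # hypothetical context length for the example

docs = [[1, 2, 3], [4, 5, 6, 7, 8], [9, 10]]  # toy tokenized documents

def pack(token_stream, seq_len):
    """Chunk a flat token stream into fixed-length sequences."""
    return [token_stream[i:i + seq_len] for i in range(0, len(token_stream), seq_len)]

# Concatenate all documents, then append the first document again
# (standing in for the extra ~0.5 epoch appended at the document level).
stream = [t for d in docs for t in d] + docs[0]

for seq in pack(stream, SEQ_LEN):
    print(seq)
# [1, 2, 3, 4, 5, 6, 7, 8]
# [9, 10, 1, 2, 3]
# Document [1, 2, 3] starts at position 0 in the first pass but at
# position 2 in the second, so its repeated tokens occupy new positions.
```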
