
[WIP]: Fix resume issues with combined streaming dataset in dataloader #362

Draft · wants to merge 37 commits into main

Conversation

@bhimrazy (Collaborator) commented Sep 3, 2024

Before submitting
  • Was this discussed/agreed via a GitHub issue? (no need for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure to update the docs?
  • Did you write any new necessary tests?

How does this PR impact the user?

Currently, resuming a combined streaming dataset with the streaming dataloader fails: saving and restoring checkpoints does not work as expected. This PR addresses the root cause of the error, enabling the dataloader to resume successfully from a checkpoint and making training workflows smoother and more reliable.
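For context, the save/resume pattern at stake looks roughly like the sketch below. The `state_dict`/`load_state_dict` method names mirror the litdata `StreamingDataLoader` API, but `ToyResumableLoader` is a hypothetical stand-in that only simulates the behavior:

```python
# Hypothetical sketch of the checkpoint save/resume pattern this PR targets.
# ToyResumableLoader is a toy stand-in, not litdata code.

class ToyResumableLoader:
    """Minimal stand-in for a resumable streaming dataloader."""

    def __init__(self, data):
        self.data = list(data)
        self.num_samples_yielded = 0

    def __iter__(self):
        # Resume from the recorded position instead of index 0.
        for i in range(self.num_samples_yielded, len(self.data)):
            self.num_samples_yielded = i + 1
            yield self.data[i]

    def state_dict(self):
        return {"num_samples_yielded": self.num_samples_yielded}

    def load_state_dict(self, state):
        self.num_samples_yielded = state["num_samples_yielded"]


loader = ToyResumableLoader(range(6))
it = iter(loader)
first = [next(it), next(it)]      # consume two samples
ckpt = loader.state_dict()        # "save a checkpoint"

restored = ToyResumableLoader(range(6))
restored.load_state_dict(ckpt)    # "resume training" from the checkpoint
rest = list(restored)
print(first, rest)                # [0, 1] [2, 3, 4, 5]
```

The bug being fixed is exactly a break in this contract: after `load_state_dict`, the real combined dataloader did not continue from where it left off.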

What does this PR do?

Fixes #331.

  • Fixed IndexError when loading dataloader state before any iteration.
  • Enabled resuming dataloader states for combined datasets (non-weighted) [in progress].
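The first bullet's failure mode suggests a simple guard: a state captured before any iteration should restore as a no-op rather than index into still-empty structures. A hypothetical sketch (`restore_position` and its arguments are illustrative names, not litdata API):

```python
# Hypothetical sketch (not litdata code): restoring a state captured
# before any iteration should be a no-op rather than indexing into
# structures that are still empty.

def restore_position(worker_intervals, state):
    """Return the interval to resume from, or None for a fresh start."""
    if state.get("num_samples_yielded", 0) == 0:
        # Nothing was consumed yet: start from the beginning instead of
        # raising IndexError on an empty interval list.
        return None
    chunk_index = state["chunk_index"]
    if chunk_index >= len(worker_intervals):
        raise IndexError(f"chunk_index {chunk_index} out of range")
    return worker_intervals[chunk_index]


print(restore_position([], {"num_samples_yielded": 0}))  # None
print(restore_position([(0, 50)], {"num_samples_yielded": 3, "chunk_index": 0}))  # (0, 50)
```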

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

@bhimrazy bhimrazy self-assigned this Sep 3, 2024
@bhimrazy bhimrazy marked this pull request as draft September 3, 2024 19:11
@bhimrazy bhimrazy changed the title [WIP] : Fix inconsistent dataloader states with combined dataset [WIP] : Fix inconsistent dataloader states with combined streaming dataset Sep 3, 2024
codecov bot commented Sep 5, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 78%. Comparing base (92df8af) to head (242a13c).

Additional details and impacted files
@@         Coverage Diff         @@
##           main   #362   +/-   ##
===================================
  Coverage    78%    78%           
===================================
  Files        34     34           
  Lines      5016   5020    +4     
===================================
+ Hits       3929   3934    +5     
+ Misses     1087   1086    -1     

@bhimrazy bhimrazy changed the title [WIP] : Fix inconsistent dataloader states with combined streaming dataset [WIP] : Fix resume issues with combined streaming dataset in dataloader Sep 5, 2024
@bhimrazy (Collaborator, Author) commented Sep 9, 2024

Combined dataset (no weights): resuming after a fully completed epoch now works, but resuming from a partial epoch still fails (looking into it further).

I separated the tests because state was somehow accumulating from the previous test, leading to odd numbers of samples yielded:

tests/streaming/test_combined.py:974: AssertionError
----------------------------- Captured stdout call -----------------------------
{'dataset': {'0': {'num_samples_yielded': 3, 'num_workers': 4, 'batch_size': 4, 'current_epoch': 1, 'input_dir_path': '/tmp/pytest-of-runner/pytest-0/test_combined_dataset_dataload0/dataset_0', 'input_dir_url': None, 'item_loader': None, 'drop_last': False, 'seed': 42, 'world_size': 1, 'shuffle': True, 'subsampled_files': ['chunk-0-0.bin'], 'region_of_interest': [(0, 50)]}, '1': {'num_samples_yielded': 1, 'num_workers': 4, 'batch_size': 4, 'current_epoch': 1, 'input_dir_path': '/tmp/pytest-of-runner/pytest-0/test_combined_dataset_dataload0/dataset_1', 'input_dir_url': None, 'item_loader': None, 'drop_last': False, 'seed': 42, 'world_size': 1, 'shuffle': True, 'subsampled_files': ['chunk-0-0.bin'], 'region_of_interest': [(0, 50)]}}, 'current_epoch': 1, 'latest_worker_idx': 2, 'num_samples_yielded': {0: [15, 25], 1: [16, 20], 2: [16, 20], 3: [16, 16]}}
=========================== short test summary info ============================
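When debugging counts like the ones in the dump above, it helps to aggregate the `num_samples_yielded` mapping (worker index → per-dataset counts) into per-dataset totals. A small hypothetical helper, using the values copied from the captured state:

```python
# Per-worker counts copied from the captured state dump above.
num_samples_yielded = {0: [15, 25], 1: [16, 20], 2: [16, 20], 3: [16, 16]}

# Sum across workers, per dataset (hypothetical helper, not litdata API).
def totals_per_dataset(counts):
    per_dataset = [0] * len(next(iter(counts.values())))
    for worker_counts in counts.values():
        for i, n in enumerate(worker_counts):
            per_dataset[i] += n
    return per_dataset

print(totals_per_dataset(num_samples_yielded))  # [63, 81]
```

Totals like these make it obvious when state has leaked between tests: the sums exceed what the datasets could actually have yielded.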
@deependujha (Collaborator)

hi @bhimrazy
What's the current update?

@bhimrazy (Collaborator, Author)

> hi @bhimrazy What's the current update?

Hi @deependujha

I'm still facing an IndexError when loading states from the last partial epoch. It usually happens only when the recorded number of samples exceeds the number actually available.

E       IndexError: Caught IndexError in DataLoader worker process 0.
E       Original Traceback (most recent call last):
E         File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 253, in _worker_loop
E           fetcher = _DatasetKind.create_fetcher(dataset_kind, dataset, auto_collation, collate_fn, drop_last)
E         File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 80, in create_fetcher
E           return _utils.fetch._IterableDatasetFetcher(dataset, auto_collation, collate_fn, drop_last)
E         File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 22, in __init__
E           self.dataset_iter = iter(dataset)
E         File "/home/runner/work/litdata/litdata/src/litdata/streaming/combined.py", line 160, in __iter__
E           self._iterator = _CombinedDatasetIterator(
E         File "/home/runner/work/litdata/litdata/src/litdata/streaming/combined.py", line 208, in __init__
E           self._dataset_iters = [iter(dataset) for dataset in datasets]
E         File "/home/runner/work/litdata/litdata/src/litdata/streaming/combined.py", line 208, in <listcomp>
E           self._dataset_iters = [iter(dataset) for dataset in datasets]
E         File "/home/runner/work/litdata/litdata/src/litdata/streaming/dataset.py", line 240, in __iter__
E           self._resume(workers_chunks, workers_intervals)
E         File "/home/runner/work/litdata/litdata/src/litdata/streaming/dataset.py", line 312, in _resume
E           interval = self.worker_intervals[self.chunk_index]
E       IndexError: list index out of range
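The traceback bottoms out at `interval = self.worker_intervals[self.chunk_index]`, i.e. a restored `chunk_index` that outruns the interval list. A standalone reproduction of that failure mode (names mirror the traceback, but this is a sketch, not litdata code):

```python
# Minimal reproduction of the failure mode: a stale chunk index from a
# partial-epoch state pointing past the intervals this worker owns.
worker_intervals = [(0, 16), (16, 32)]  # intervals assigned to this worker
restored_chunk_index = 2                # stale index restored from state

try:
    interval = worker_intervals[restored_chunk_index]
except IndexError:
    # The real resume logic must clamp or recompute the index instead
    # of letting this propagate out of the worker process.
    interval = None
print(interval)  # None
```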

Initially, I hit a separate error in the state-dict test, where the number of samples exceeded the actual count. The states seemed to be accumulating incorrectly between tests, so I split the tests apart, and the states then came out correct.

I haven't had much time lately, but I plan to continue working on it from this weekend.

gitguardian bot commented Sep 19, 2024

⚠️ GitGuardian has uncovered 2 secrets following the scan of your pull request.

Please consider investigating the findings and remediating the incidents. Failure to do so may lead to compromising the associated services or software components.

Since your pull request originates from a forked repository, GitGuardian is not able to associate the secrets uncovered with secret incidents on your GitGuardian dashboard.
Skipping this check run and merging your pull request will create secret incidents on your GitGuardian dashboard.

🔎 Detected hardcoded secrets in your pull request
GitGuardian id | Status | Secret | Commit | Filename
5685611 | Triggered | Generic High Entropy Secret | 3762b11 | tests/streaming/test_resolver.py
5685611 | Triggered | Generic High Entropy Secret | 3762b11 | tests/streaming/test_resolver.py

@bhimrazy (Collaborator, Author)

Getting close to it:

The test case seems to fail with an IndexError when the number of workers is greater than 2 and the iteration is stopped close to the midpoint of the dataloader length.
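Boundary-dependent failures like this are usually hunted by sweeping worker counts and stop points. A hypothetical harness sketch (the toy ignores workers except for sizing; a real test would vary `num_workers` on the dataloader itself):

```python
# Hypothetical harness sketch: stop a toy resumable iteration at each
# index around the midpoint for several worker counts, resume, and
# verify that no sample is skipped or repeated.

def run_with_resume(n_items, stop_at):
    """Consume items, 'checkpoint' at stop_at, then resume to the end."""
    consumed, resume_from = [], 0
    for i in range(n_items):               # first session
        if i == stop_at:
            resume_from = i                # save the position and stop
            break
        consumed.append(i)
    for i in range(resume_from, n_items):  # resumed session
        consumed.append(i)
    return consumed

for num_workers in (2, 3, 4):
    length = num_workers * 8               # toy dataloader length
    mid = length // 2
    for stop in (mid - 1, mid, mid + 1):
        assert run_with_resume(length, stop) == list(range(length))
print("all boundary cases resumed cleanly")
```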

(screenshot of the failing test run attached)

Successfully merging this pull request may close these issues.

Bug: Inconsistent Behavior with StreamingDataloader loading states (specific to CombinedStreamingDataset)