Generate batches lazily #111
Comments
I made a similar implementation in a commit yesterday on a forked branch. It works well for my use case at the moment and seems to pass the tests. I moved the …
Nice, want to open a PR for these changes? I can run the asv benchmarks, which are currently run locally rather than as part of the CI.
Yes, I just opened a PR for this.
@maxrjones - do you think we should close this issue or is there more you'd like to do here?
#112 fixed the main point of this issue, so closing now. Related tasks (e.g., improving benchmarks) can be addressed separately.
Is your feature request related to a problem?
This relates to several existing issues in the xbatcher repository, including #30, #37, and #109. As mentioned in #109 (comment), iterating over the entire dataset to build the batches can be prohibitively slow and memory intensive for large datasets.
Describe the solution you'd like
Based on past discussions with @jhamman as well as TJ’s comment, we should consider a solution in which the batch generator lazily constructs batches without iterating through the entire dataset/dataarray. One key design decision that we should reconsider is whether the `_batches` dictionary needs to store the sliced Dataset/DataArrays. Alternatively, as TJ and Joe have mentioned, the `_batches` attribute could store the indices associated with each batch. In this case, the Dataset/DataArray would be constructed when `__getitem__()` or `__iter__()` is called. In the simplest case, I expect the indices could be stored as a dict of dimension, slice pairs that can be unpacked and passed to `.isel()` (e.g., `xbatcher/xbatcher/generators.py`, line 42 at 4d8e2c8).
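As a rough illustration of this index-storing approach (a minimal sketch, not xbatcher's actual API — the class name, constructor signature, and slicing strategy are hypothetical), the generator could precompute only `{dim: slice}` dicts and defer all slicing to `__getitem__()`:

```python
from itertools import product

import numpy as np
import xarray as xr


class LazyBatchGenerator:
    """Hypothetical sketch: store {dim: slice} index dicts up front and
    construct each batch with .isel() only on access."""

    def __init__(self, ds, input_dims):
        self.ds = ds
        # Non-overlapping slices along each input dimension; no data is
        # sliced at construction time.
        per_dim = [
            [(dim, slice(start, start + size))
             for start in range(0, ds.sizes[dim] - size + 1, size)]
            for dim, size in input_dims.items()
        ]
        self._batches = [dict(combo) for combo in product(*per_dim)]

    def __len__(self):
        return len(self._batches)

    def __getitem__(self, idx):
        # The Dataset/DataArray is built lazily, when the batch is requested.
        return self.ds.isel(**self._batches[idx])

    def __iter__(self):
        for idx in range(len(self)):
            yield self[idx]


# Usage: slicing happens per batch, not when the generator is created.
ds = xr.Dataset({"foo": (("time", "x"), np.random.rand(100, 64))})
gen = LazyBatchGenerator(ds, input_dims={"time": 10, "x": 16})
first_batch = gen[0]  # same as ds.isel(time=slice(0, 10), x=slice(0, 16))
```

Storing only the index dicts keeps memory usage proportional to the number of batches rather than to the data itself, which is the main point of the lazy design discussed above.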
One complicating factor is that the indices associated with a batch depend on `concat_input_dims`, such that two schemas seem required for defining the coordinates/dimensions associated with a batch. Similar to the suggestion in #93 (comment), I wonder if batch generation for concatenated input dimensions should be separate from the core `BatchGenerator` class.
Describe alternatives you've considered
Optimize the code under the assumption that all batches will be generated eagerly when creating a `BatchGenerator` object, for example by stacking non-input dims before iterating over `batch_dims` and `input_dims` (sketched below). This would still not work well for large datasets.
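For concreteness, a rough sketch of what that eager stacking might look like, using xarray's built-in `stack()` (the variable names and the single-dimension loop are illustrative assumptions, not a proposed implementation):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"foo": (("time", "lat", "lon"), np.random.rand(20, 8, 8))})
input_dims = {"time": 5}

# Collapse all non-input dims into one "sample" dim so the eager loop
# only iterates along input_dims instead of every dim combination.
non_input_dims = [d for d in ds.dims if d not in input_dims]
stacked = ds.stack(sample=non_input_dims)

# Eagerly materialize every batch up front -- fast for small data, but
# memory-intensive for large datasets, as noted above.
batches = [
    stacked.isel(time=slice(start, start + input_dims["time"]))
    for start in range(0, ds.sizes["time"], input_dims["time"])
]
```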
Additional context
This relates to #30 because I expect that the proposal to define a batch based on the indexes will be simpler if samples are sliced and combined into batches, rather than the current behavior of slicing samples after slicing the dataset into batches.