Currently ADRIA runs are stored in a Zarr data store, chunking data on a per-scenario basis.
This means potentially $n$ chunk files are created, where $n$ is the number of scenarios.
This can be a very large number.
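For concreteness, here is a minimal sketch of this layout using Zarr.jl. The dimension names and sizes (`n_timesteps`, `n_sites`, `n_scenarios`) and the store path are illustrative assumptions, not ADRIA's actual result schema:

```julia
using Zarr

# Assumed dimensions, purely illustrative
n_timesteps, n_sites, n_scenarios = 75, 216, 4096

# Current approach: one chunk per scenario. Each chunk spans all
# timesteps/sites for a single scenario, so the store holds
# n_scenarios chunk files.
z = zcreate(Float64, n_timesteps, n_sites, n_scenarios;
            path = "outputs_by_scenario.zarr",
            chunks = (n_timesteps, n_sites, 1))

# Writing or reading one scenario touches exactly one chunk file...
z[:, :, 1] = rand(n_timesteps, n_sites)

# ...but reading a single timestep across all scenarios touches
# all n_scenarios files.
one_step = z[1, :, :]
```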
Alternatively, we could chunk by time step. The number of time steps is fairly consistent across runs, so this would only require creating $t$ files, where $t$ is the number of time steps.
The downside is that extracting data for a single scenario would require opening/closing $t$ files instead of one.
The upside is that extracting data for multiple scenarios becomes more predictable, requiring $t$ files to be opened/closed rather than potentially thousands.
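The same sketch with the chunking flipped to the time axis (same illustrative dimensions as above):

```julia
using Zarr

n_timesteps, n_sites, n_scenarios = 75, 216, 4096  # assumed sizes

# Alternative: one chunk per time step. The store holds n_timesteps
# chunk files, however many scenarios are run.
z = zcreate(Float64, n_timesteps, n_sites, n_scenarios;
            path = "outputs_by_timestep.zarr",
            chunks = (1, n_sites, n_scenarios))

# Extracting a single scenario now touches all n_timesteps files...
one_scenario = z[:, :, 1]

# ...but a read across all scenarios also touches at most
# n_timesteps files, rather than potentially thousands.
one_step = z[1, :, :]
```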
Another downside of chunking by time step is that each file's size can grow indefinitely as more scenarios are added (unless you don't think that's a problem). Could we chunk by a fixed number of scenarios?
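A sketch of that suggestion, with a hypothetical batch size:

```julia
using Zarr

n_timesteps, n_sites, n_scenarios = 75, 216, 4096  # assumed sizes
scenario_batch = 64                                 # hypothetical batch size

# Compromise: chunk the scenario axis in fixed-size batches. The file
# count is capped at ceil(n_scenarios / scenario_batch), and each
# chunk's size stays bounded regardless of how many scenarios are run.
z = zcreate(Float64, n_timesteps, n_sites, n_scenarios;
            path = "outputs_batched.zarr",
            chunks = (n_timesteps, n_sites, scenario_batch))

# A single-scenario read opens exactly one chunk file.
one_scenario = z[:, :, 1]
```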