Add option to store data as a different data type #2

Open
friedrichknuth opened this issue Mar 9, 2024 · 1 comment
friedrichknuth commented Mar 9, 2024

Currently, data are stored as float64 by default, which is excessive precision for most analyses. Other data types should be made optional to reduce the size of the Zarr stack on disk.
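As a rough sketch of the potential savings (in-memory, uncompressed; the actual on-disk Zarr size also depends on the compressor), the per-element cost of a hypothetical 10 × 1000 × 1000 stack at a few candidate dtypes:

```python
import numpy as np

# Rough per-dtype cost of a 10 x 1000 x 1000 stack, ignoring compression;
# on-disk Zarr size will differ once the compressor runs.
n_elements = 10 * 1000 * 1000
for dtype in ('float64', 'float32', 'int32', 'int16'):
    mb = n_elements * np.dtype(dtype).itemsize / 1e6
    print(f'{dtype}: {mb:.0f} MB')
# float64: 80 MB, float32: 40 MB, int32: 40 MB, int16: 20 MB
```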


friedrichknuth commented Mar 16, 2024

A few notes on things to explore:

Omit writing chunks that contain only nodata values

ds.to_zarr('test.zarr', write_empty_chunks=False)
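The bookkeeping behind `write_empty_chunks=False` can be sketched with plain NumPy (the array shape and chunk size here are illustrative, not taken from the dataset above): a chunk whose values all equal the fill value never needs to be written to disk.

```python
import numpy as np

# Sketch: count chunks that are entirely fill value (NaN here) and could
# therefore be skipped on write. Shape/chunking chosen for illustration.
data = np.full((10, 100, 100), np.nan)
data[:, 50:, 50:] = 1.0  # only one spatial quadrant holds real data

skippable = 0
for i in range(0, 100, 50):
    for j in range(0, 100, 50):
        block = data[:, i:i + 50, j:j + 50]
        if np.isnan(block).all():
            skippable += 1
print(skippable, 'of 4 chunks are empty')  # 3 of 4 chunks are empty
```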

Use the smallest data type that preserves the relevant decimal precision for storage on disk

# float comparison (distinct store paths, since to_zarr refuses to
# overwrite an existing store by default)
ds.astype('float64').to_zarr('test_float64.zarr')
ds.astype('float32').to_zarr('test_float32.zarr')

# int comparison: pack values into integers, recording the scale factor
# and fill value so mask_and_scale can restore floats on read
ds['band1'] = ds['band1'] * 1000
ds = ds.fillna(-9999)
ds['band1'].attrs['scale_factor'] = 0.001
ds['band1'].attrs['_FillValue'] = -9999

ds.astype('int64').to_zarr('test_int64.zarr')
ds1 = xr.open_dataset('test_int64.zarr', mask_and_scale=True, engine='zarr')

ds.astype('int32').to_zarr('test_int32.zarr')
ds2 = xr.open_dataset('test_int32.zarr', mask_and_scale=True, engine='zarr')
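The scale_factor/_FillValue round trip that `mask_and_scale` performs can be sketched in plain NumPy (the sample values below are made up for illustration):

```python
import numpy as np

# Sketch of the scale_factor / _FillValue round trip: floats are packed
# into integers on write and unpacked on read.
scale_factor, fill_value = 0.001, -9999
values = np.array([1234.5678, np.nan, 2999.9991])

# pack: replace NaN with the fill value, quantize to 3 decimal places
packed = np.where(np.isnan(values), fill_value,
                  np.round(values / scale_factor)).astype('int32')

# unpack: restore NaN and rescale
unpacked = np.where(packed == fill_value, np.nan, packed * scale_factor)
print(packed)    # e.g. 1234568, -9999, 2999999 (integers on disk)
print(unpacked)  # values recovered to ~3 decimal places, NaN restored
```

Note the precision floor this implies: any sub-millidegree (or sub-millimeter, depending on units) variation is discarded at pack time, which is exactly the trade being made against float64.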

Example synthetic dataset

import numpy as np
import xarray as xr
import pandas as pd
import dask
import rioxarray  # registers the .rio accessor used below
import zarr
pd.set_option("display.precision", 10)

# create temporal coordinates (annual, September year-end)
dates = pd.date_range(start='1990-09-30', freq='A-SEP', periods=10)

# create spatial coordinates
x = np.linspace(-121.08, -121.04, 1000)
y = np.linspace(48.35, 48.37, 1000)

# create dask data array with a fixed seed for reproducibility
rs = dask.array.random.RandomState(42)
data = rs.uniform(1000, 3000,
                  size=(10, 1000, 1000),
                  chunks=(-1, 100, 100))
# add a nodata region
data[:, 0:500, 0:500] = np.nan

# create xarray dataset
ds = xr.Dataset(
    {
        'band1': xr.DataArray(
            data=data,
            dims=['time', 'y', 'x'],
            coords={'time': dates, 'y': y, 'x': x},
        ),
    },
)
ds.rio.write_crs('epsg:4326', inplace=True)  # add crs information

Helper function and shell command to examine the dataset and disk usage

def print_info(ds, message):
    """Print shape, dtype, in-memory size, and sample values for band1."""
    print(message)
    table = [str(ds['band1'].data.shape),
             str(ds['band1'].dtype),
             ds['band1'].data.nbytes / 1e6,
             ds['band1'].data[0, 500, 500].compute(),  # valid-data region
             ds['band1'].data[0, 0, 0].compute(),      # nodata region
             ]
    df = pd.DataFrame(table,
                      index=['shape', 'dtype', 'size MB', 'example value', 'example nodata'],
                      columns=[''])
    print(df)

!du -sh ./*/  # IPython shell magic: on-disk size of each Zarr store
