RecordType and DataSet support in C. #37

Open
rainwoodman opened this issue Jan 30, 2020 · 0 comments

These changes will allow us to use bigfile for journaling (e.g. recording blackhole details per step).

  1. RecordType = [ ( column name, dtype ) ] (a rough C sketch follows this list)

    big_record_set(rt, void* record_buf, icol, const void * value)
    big_record_get(rt, const void* record_buf, icol, void * value)

  2. BigDataset(RecordType, File).
    big_dataset_read(bd, offset, size, buf), big_dataset_write(bd, offset, size, buf)
    Use BigArray to view buf with the correct strides, and big_block_read / big_block_write to do the reading/writing on each block in the data set.
    big_dataset_read_mpi, big_dataset_write_mpi:
    same as above; may need to plumb some flags through.
    big_dataset_grow(bd, size)
    big_dataset_grow_mpi(bd, size)
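A rough C sketch of what item 1 could look like. Everything below (the struct layout, field names, and the packed-record assumption) is an assumption for illustration, not existing bigfile API:

    #include <stddef.h>
    #include <string.h>

    /* One column of a record: name, dtype string, element size, and its
     * byte offset inside a packed record. All hypothetical. */
    typedef struct {
        const char * name;    /* block/column name, e.g. "BlackholeMass" */
        const char * dtype;   /* bigfile dtype string, e.g. "f8" */
        size_t elsize;        /* bytes per element of this column */
        size_t offset;        /* byte offset of the column inside one record */
    } BigRecordField;

    typedef struct {
        BigRecordField * fields;
        int nfield;
        size_t itemsize;      /* total bytes of one packed record */
    } BigRecordType;

    /* Copy one column value into a packed record buffer. */
    static void
    big_record_set(const BigRecordType * rt, void * record_buf,
                   int icol, const void * value)
    {
        memcpy((char *) record_buf + rt->fields[icol].offset,
               value, rt->fields[icol].elsize);
    }

    /* Copy one column value out of a packed record buffer. */
    static void
    big_record_get(const BigRecordType * rt, const void * record_buf,
                   int icol, void * value)
    {
        memcpy(value, (const char *) record_buf + rt->fields[icol].offset,
               rt->fields[icol].elsize);
    }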

Concern 1. Each read/write may need to open and close 2 x Ncolumn physical files.
Is that fast enough for a PM step? Probably fine.
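For reference, a sketch of how big_dataset_read could be layered on the per-block calls, which is where the 2 x Ncolumn opens/closes come from. The BigDataset struct is made up here, the existing bigfile calls (big_file_open_block, big_block_seek, big_block_read, big_block_close, big_array_init) are paraphrased from memory and may differ in detail, and columns are assumed scalar for brevity:

    /* Hypothetical dataset handle: a record type plus the file it lives in. */
    typedef struct {
        BigRecordType * rt;
        BigFile * bf;
    } BigDataset;

    /* Read `size` records starting at `offset` into a packed record buffer.
     * One open/seek/read/close per column; the BigArray stride equals the
     * record itemsize so column icol of consecutive records gets filled. */
    static int
    big_dataset_read(BigDataset * bd, ptrdiff_t offset, size_t size, void * buf)
    {
        int icol;
        for(icol = 0; icol < bd->rt->nfield; icol ++) {
            BigRecordField * f = &bd->rt->fields[icol];
            BigBlock block = {0};
            BigBlockPtr ptr = {0};
            BigArray array = {0};
            size_t dims[1] = { size };
            ptrdiff_t strides[1] = { (ptrdiff_t) bd->rt->itemsize };

            if(0 != big_file_open_block(bd->bf, &block, f->name)) return -1;
            big_array_init(&array, (char *) buf + f->offset, f->dtype,
                           1, dims, strides);
            if(0 != big_block_seek(&block, &ptr, offset)
            || 0 != big_block_read(&block, &ptr, &array)) {
                big_block_close(&block);
                return -1;
            }
            big_block_close(&block);
        }
        return 0;
    }

big_dataset_write would be the mirror image using big_block_write, and the grow functions would extend the per-column blocks before writing.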

Concern 2. What if some blocks in the record type exist and some don't? Perhaps we need to create and grow them on the fly.
