expose read_buffer_size in parts #15

Open: rizo wants to merge 1 commit into master from rizo/expose_buffer_size_in_read

Conversation


@rizo commented Jun 19, 2024

When dealing with even slightly large files (>1 MB), the reader loop becomes unnecessarily slow. Increasing the buffer size from the default of 10 to something larger substantially speeds up processing.

I understand the non-blocking API can be used to provide a custom buffer size, but I think it should be possible to do this for the blocking version too when the expected inputs safely fit in memory.
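
For illustration only, here is a minimal sketch of the kind of blocking read loop this is about; the function name, the loop body, and the caller-facing `?read_buffer_size` argument are assumptions for the example, not this library's actual code:

```ocaml
(* Illustrative sketch, not the library's API: reading a >1MB file
   10 bytes at a time takes ~100,000 loop iterations, while a 64KB
   buffer needs only a handful. Exposing [?read_buffer_size] lets
   callers pick a size that fits their inputs. *)
let read_all ?(read_buffer_size = 10) ic =
  let buf = Buffer.create 4096 in
  let chunk = Bytes.create read_buffer_size in
  let rec loop () =
    let n = input ic chunk 0 read_buffer_size in
    if n > 0 then begin
      Buffer.add_subbytes buf chunk 0 n;
      loop ()
    end
  in
  loop ();
  Buffer.contents buf

(* Callers whose inputs safely fit in memory could then pass a larger
   buffer, e.g. [read_all ~read_buffer_size:65536 ic]. *)
```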

Also, is there any reason why the default read_buffer_size values are different for parts and reader? Why not just have one default?

@rizo force-pushed the rizo/expose_buffer_size_in_read branch from 7c34bf7 to 345ce19 on June 19, 2024 at 14:07
@rizo force-pushed the rizo/expose_buffer_size_in_read branch from 345ce19 to 4d820ad on June 19, 2024 at 14:14
@rizo changed the title from "expose read_buffer_size in read" to "expose read_buffer_size in parts" on Jun 19, 2024
@rizo (author) commented Jun 19, 2024

I kept the default value at 10, but I'd be happy to make it just ?read_buffer_size so there's only one default.
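
As a rough sketch of the "one default" idea (hypothetical names and bodies, not the actual interface), both entry points would take the same optional argument and fall back to a single shared constant:

```ocaml
(* Hypothetical sketch only: a single shared default used by both
   entry points, instead of separate defaults for [reader] and [parts]. *)
let default_read_buffer_size = 10

let reader ?(read_buffer_size = default_read_buffer_size) () =
  (* Placeholder body: just allocate a buffer of the requested size. *)
  Bytes.create read_buffer_size

let parts ?(read_buffer_size = default_read_buffer_size) () =
  (* Forward the same optional argument, so both share one default. *)
  reader ~read_buffer_size ()
```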
