Small optimizations for large GWAS type data #148

Merged: 4 commits into master from optim on Nov 30, 2024
Conversation

cmdcolin (Contributor) commented:
Instead of running TextDecoder on every line (which was needed to preserve file offsets for Unicode data), we first detect whether any non-ASCII characters are present (charCode > 127) and only fall back to the per-line TextDecoder routine when they are.
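A minimal sketch of the idea (the function names and surrounding structure are illustrative, not the library's actual internals): if no byte in the chunk exceeds 127 it is pure ASCII, byte offsets and character offsets coincide, and a single whole-buffer decode is enough; otherwise each line is decoded separately so offsets stay correct for multi-byte characters.

```ts
const decoder = new TextDecoder('utf8')

// Hypothetical helper: scan the raw bytes for anything outside 7-bit ASCII.
function hasNonAscii(buf: Uint8Array): boolean {
  for (let i = 0; i < buf.length; i++) {
    if (buf[i] > 127) {
      return true
    }
  }
  return false
}

// Hypothetical helper: choose the fast whole-buffer path or the slower
// per-line decode depending on whether non-ASCII bytes were found.
function decodeLines(buf: Uint8Array): string[] {
  if (!hasNonAscii(buf)) {
    // Fast path: decode once and split on newlines.
    return decoder.decode(buf).split('\n')
  }
  // Slow path: decode line by line so byte offsets can be tracked per line.
  const lines: string[] = []
  let start = 0
  for (let i = 0; i < buf.length; i++) {
    if (buf[i] === 10 /* '\n' */) {
      lines.push(decoder.decode(buf.subarray(start, i)))
      start = i + 1
    }
  }
  if (start < buf.length) {
    lines.push(decoder.decode(buf.subarray(start)))
  }
  return lines
}
```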

This PR also removes getLinesForRefId, as it does not seem to be needed in the public API and it duplicated the entirety of the getLines logic.

It removes the yieldTimeout as well.

cmdcolin merged commit 6687a1f into master on Nov 30, 2024
1 check failed
cmdcolin deleted the optim branch on November 30, 2024 at 22:30