Add microbenchmarks for computing a chunk group hash #22

Closed
rklaehn opened this issue Jul 29, 2023 · 2 comments

rklaehn commented Jul 29, 2023

It looks like computing a chunk group hash is faster in abao than in bao-tree. To fix this, we should add a microbenchmark comparing the two, and then obviously fix it.

See n0-computer/iroh#1288.
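
For reference, a minimal criterion benchmark could look something like the sketch below. It assumes a chunk group of 16 chunks (16 KiB) and uses the public `blake3::guts` API to build the group hash from per-chunk chaining values; the names `chunk_group_hash`, `CHUNK_GROUP_LOG`, and the bench IDs are made up for illustration and are not the actual bao-tree or abao code paths.

```rust
// Assumed dev-dependencies: blake3 = "1", criterion = "0.5" (e.g. benches/chunk_group.rs).
use blake3::guts::{parent_cv, ChunkState, CHUNK_LEN};
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Hypothetical chunk group size: 2^4 = 16 chunks = 16 KiB.
const CHUNK_GROUP_LOG: usize = 4;
const GROUP_LEN: usize = CHUNK_LEN << CHUNK_GROUP_LOG;

/// Hash one non-root chunk group: hash each 1 KiB chunk with its chunk
/// counter, then combine the chaining values pairwise up the subtree.
fn chunk_group_hash(data: &[u8], start_chunk: u64) -> blake3::Hash {
    let mut cvs: Vec<blake3::Hash> = data
        .chunks(CHUNK_LEN)
        .enumerate()
        .map(|(i, chunk)| {
            let mut state = ChunkState::new(start_chunk + i as u64);
            state.update(chunk);
            state.finalize(false)
        })
        .collect();
    // An aligned group of 2^n chunks is a complete subtree, so pairwise
    // reduction (always an even count here) yields the subtree chaining value.
    while cvs.len() > 1 {
        cvs = cvs
            .chunks(2)
            .map(|pair| parent_cv(&pair[0], &pair[1], false))
            .collect();
    }
    cvs[0]
}

fn bench_chunk_group_hash(c: &mut Criterion) {
    let data = vec![0xABu8; GROUP_LEN];
    c.bench_function("chunk_group_hash_guts", |b| {
        b.iter(|| chunk_group_hash(black_box(&data), 0))
    });
    // Baseline: hashing the same 16 KiB as a standalone blob.
    c.bench_function("blake3_hash_16k", |b| {
        b.iter(|| blake3::hash(black_box(&data)))
    });
}

criterion_group!(benches, bench_chunk_group_hash);
criterion_main!(benches);
```

Wiring abao's and bao-tree's actual group hashing routines into the same harness (instead of the guts-based sketch) should make any difference show up directly in the criterion report.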

divagant-martian changed the title from "Add microbenchnarks for computing a chunk group hash" to "Add microbenchmarks for computing a chunk group hash" on Jul 29, 2023
rklaehn commented Jul 29, 2023

It looks like this was a false hope. The binary where the outboard computation is super fast is already using bao-tree, at least judging from the symbol table:

```
abao/bao_bin on  master [$!?] is 📦 v0.12.1 via 🦀 v1.71.0
❯ nm /Users/rklaehn/bin/iroh | grep bao_tree | wc -l
     221

abao/bao_bin on  master [$!?] is 📦 v0.12.1 via 🦀 v1.71.0
❯ nm /Users/rklaehn/bin/iroh | grep abao | wc -l
       0
```

rklaehn commented Aug 23, 2023

The question of the fastest way to compute a chunk group hash is solved in BLAKE3-team/BLAKE3#329 / the iroh-blake3 crate, and by whatever @oconnor663 comes up with as a long-term solution.

rklaehn closed this as completed on Aug 23, 2023