
Consider using double_and_compress_batch in encrypt_to_bytes. #1

Open
hdevalence opened this issue Jul 11, 2020 · 3 comments

Comments

@hdevalence

I looked briefly at the code and noticed that the `encrypt_to_bytes` methods do a scalar multiplication and then a point compression for each input item. Although most of this work is the scalar multiplication, it's possible to amortize work across multiple point compressions using the `double_and_compress_batch` method. The only wrinkle is that rather than encoding `Q_i`, this method encodes `2*Q_i`. However, because `Q_i` is computed as `k * P_i`, it's still possible to get the same result as before: multiply `k` by `(1/2) mod l` (a constant value) to get `k'`, and compute `Q_i' <- k' * P_i`. Since `Q_i = 2*Q_i'`, batch-double-and-encoding the `Q_i'` gives `enc(Q_1), ..., enc(Q_n)`. This costs only one scalar-scalar multiplication per batch, and I believe it will save time whenever the batch is larger than one element.

The batching is not parallelizable, but this can still be applied in the parallel case by breaking a larger batch into chunks, and applying the optimization within each chunk.

Also, I think the code could be slightly streamlined if the methods took `impl Iterator<Item = ...>` rather than slices. For instance, if `encrypt_to_bytes` took an iterator, a caller wouldn't need a separate `hash_encrypt_to_bytes`; they could do

```rust
// plaintexts is an iterator of byte slices
encrypt_to_bytes(plaintexts.map(RistrettoPoint::hash_from_bytes::<Sha512>))
```

and the hash-to-curve calculations would be inlined into the right place.

@shubho
Contributor

shubho commented Jul 11, 2020

Thanks for your helpful comments :) we will get to this soon - we still have a ways to go in eking out all the performance...

@hdevalence
Author

Of course, no rush -- I just thought I'd point it out in case it was helpful. Hope you had a good experience using the library!

@shubho
Contributor

shubho commented Jul 11, 2020

The library was the reason we moved to Rust :) we love it!!
