Consider performance improvements #50
We at Digital Asset took your work and made a bunch of performance improvements to the Decode module. The code is at https://github.com/digital-asset/daml/blob/master/nix/third-party/proto3-wire/src/Proto3/Wire/Decode.hs and the approximate list of changes I've been able to find is:

- Add INLINE pragmas to many functions.
- Switch from checked to unchecked bit shifts.
- Use HashMap rather than IntMap.
- Avoid a copy during decoding.

Are you interested in taking changes that optimise the performance?
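For a sense of what that kind of change looks like in practice, here is a minimal varint-decoding sketch. It is not the Digital Asset code and the names are invented; it just combines an INLINE pragma with unchecked shifts from Data.Bits, two of the items listed above.

```haskell
{-# LANGUAGE BangPatterns #-}
module VarintSketch (decodeVarint) where

import           Data.Bits       (testBit, unsafeShiftL, (.&.), (.|.))
import qualified Data.ByteString as B
import           Data.Word       (Word64)

-- | Decode a base-128 varint from the front of the input, returning the
-- value and the remaining bytes, or Nothing if the input ends mid-varint.
decodeVarint :: B.ByteString -> Maybe (Word64, B.ByteString)
decodeVarint = go 0 0
  where
    go :: Int -> Word64 -> B.ByteString -> Maybe (Word64, B.ByteString)
    go !shift !acc bs = case B.uncons bs of
      Nothing -> Nothing
      Just (w, rest)
        | testBit w 7 -> go (shift + 7) acc' rest   -- continuation bit set
        | otherwise   -> Just (acc', rest)          -- final byte
        where
          -- unsafeShiftL skips the bounds check that shiftL performs;
          -- the caller promises the distance stays in range
          acc' = acc .|. (fromIntegral (w .&. 0x7f) `unsafeShiftL` shift)
{-# INLINE decodeVarint #-}
```

A real decoder also has to bound the number of continuation bytes so the shift distance cannot run past 63; the sketch omits that for brevity.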
@ndmitchell yes!
Absolutely! Note that I think we have some divergence, because we've made performance improvements too; cf. #46 for the changes. So it would require thinking a bit to figure out which changes in your branch can port over and make a difference in the code as it currently stands.
Did you write any benchmarks? Our benchmarks were in the context of our larger project, so they probably don't reproduce easily.
Same for us, unfortunately.
FWIW, I ran the encoding benchmark I had, and your version is 20% faster than ours. Yay! If you can't see anything in our patch set that looks valuable, I'll close this ticket. I didn't really measure decoding performance; that used to be the bottleneck for us, and where we spent most of our energy optimising. Now it's a code path we don't use that often.
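For anyone who wants to reproduce a comparison like this outside a larger project, a minimal criterion harness along the following lines should do. `encodeMessage` and `sampleMessage` are invented stand-ins, not proto3-wire names; swap in the real encoder and payload.

```haskell
module Main (main) where

import           Criterion.Main       (bench, defaultMain, nf)
import qualified Data.ByteString.Lazy as BL

-- Toy encoder standing in for the real proto3-wire-based encoder under test.
encodeMessage :: (Int, Bool, BL.ByteString) -> BL.ByteString
encodeMessage (n, flag, payload) =
  BL.concat [BL.pack [fromIntegral n, if flag then 1 else 0], payload]

-- A fixed sample payload so successive runs are comparable.
sampleMessage :: (Int, Bool, BL.ByteString)
sampleMessage = (42, True, BL.replicate 1024 0)

main :: IO ()
main = defaultMain
  [ bench "encode/sample" $ nf encodeMessage sampleMessage ]
```

Compile with -O2 so the numbers reflect what an optimised build would see.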
Good to know that all that performance work paid off! (It was rather exhausting.) Both encoding and decoding are still pretty costly to us given the volume of data we work with, but we didn't have any great ideas for further improvements. If your team has any more bright ideas, please chime in! As for the current patch set, the one thing we didn't really do or consider at all was the INLINE annotations. If you have benchmarks that show they bring an improvement on the current codebase, that would be great.
At some point in the past the INLINE pragmas helped; I can't say much about now. I also note we switched to unchecked shifts. Not sure if GHC automatically optimises around that? We also used HashMap rather than IntMap, although that may have been because the rest of our stuff was using it. There is also a copy during decoding that we avoid.
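On the HashMap-versus-IntMap point, the choice presumably shows up wherever decoded fields are grouped by field number. A rough sketch of the two options, with invented types, might look like this; which one wins depends on the workload, so it is worth benchmarking both.

```haskell
module FieldIndexSketch (indexWithHashMap, indexWithIntMap) where

import qualified Data.HashMap.Strict as HM
import qualified Data.IntMap.Strict  as IM

type FieldNumber = Int

-- Placeholder for a decoded-but-not-yet-parsed wire value.
data RawField = RawField deriving Show

-- Group decoded fields by field number using a HashMap keyed on Int.
indexWithHashMap :: [(FieldNumber, RawField)] -> HM.HashMap FieldNumber [RawField]
indexWithHashMap = HM.fromListWith (++) . map (\(k, v) -> (k, [v]))

-- The same grouping with an IntMap, which uses the Int key directly.
indexWithIntMap :: [(FieldNumber, RawField)] -> IM.IntMap [RawField]
indexWithIntMap = IM.fromListWith (++) . map (\(k, v) -> (k, [v]))
```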
Maybe @j6carey can speak to that. Also not sure if the unchecked shifts are an improvement or not -- we might get equivalent results by simply having the relevant functions specialized to a single dictionary. Both are certainly worth considering...
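If "specialized to a single dictionary" means asking GHC to drop the type-class dictionary on the hot path, the usual tools are SPECIALIZE and INLINABLE pragmas. A sketch with an invented function, not proto3-wire API:

```haskell
module SpecialiseSketch (accumulate) where

import Data.Bits (Bits, shiftL, (.|.))
import Data.List (foldl')
import Data.Word (Word64)

-- A made-up polymorphic hot-path function: every call pays for a
-- Bits/Num dictionary unless GHC specialises it away.
accumulate :: (Bits a, Num a) => [a] -> a
accumulate = foldl' (\acc w -> (acc `shiftL` 7) .|. w) 0

-- Ask GHC for a dictionary-free copy at the type we actually use, and keep
-- the unfolding available so call sites in other modules can specialise too.
{-# SPECIALIZE accumulate :: [Word64] -> Word64 #-}
{-# INLINABLE accumulate #-}
```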
@gbaz, @ndmitchell, do you mean the copy during decoding? I have not studied decoding very much. I think checked shifts whose shift distance is a compile-time constant get optimized to the same thing as unchecked shifts.
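To make that distinction concrete, here is a small sketch of the four combinations (constant versus runtime shift distance, checked versus unchecked), reflecting the expectation described above; whether the constant-distance cases really compile to the same code is something to confirm by inspecting Core or the benchmark numbers.

```haskell
module ShiftSketch where

import Data.Bits (shiftL, unsafeShiftL)
import Data.Word (Word64)

-- With a literal distance the bounds check in shiftL should be discharged
-- at compile time, so these two ought to produce the same code.
checkedByConstant, uncheckedByConstant :: Word64 -> Word64
checkedByConstant   x = x `shiftL` 7
uncheckedByConstant x = x `unsafeShiftL` 7

-- With a runtime distance, shiftL keeps its range check, while unsafeShiftL
-- omits it and leaves the result undefined for distances outside 0..63.
checkedByVariable, uncheckedByVariable :: Word64 -> Int -> Word64
checkedByVariable   x n = x `shiftL` n
uncheckedByVariable x n = x `unsafeShiftL` n
```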
@j6carey yes, the copy during decoding. We omitted that in our fork.