Benchmark #13
Benchmarking data point. I generated a vanity address for a 7-character prefix (all uppercase alpha, no numbers). It took 88 hours on a 96-core machine, so I suppose that is 8,448 'core hours' of computation. That is significantly higher than the "3 hours" estimate in the README chart (which also doesn't specify the computing power assumed for those numbers). YMMV |
Thank you very much for doing this! I believe I borrowed this diagram from some website when I first created this repo. Not only does it not specify the computing power assumed for those numbers, I am not even 100% sure it's based on the same address format. |
Going to play around with this some this week and derive a chart based on actual observed data. |
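As a theoretical baseline to compare observed data against, here is a sketch (my own, not code from the repo): assuming candidate addresses are uniformly random over the 32-symbol base32 alphabet, each additional prefix character multiplies the expected number of keys to try by 32.

```rust
/// Expected number of random keys to try before matching a
/// `prefix_len`-character prefix, assuming each base32 character
/// is uniform over 32 possibilities (a simplifying assumption;
/// real strkey addresses constrain the first characters).
fn expected_tries(prefix_len: u32) -> u64 {
    32u64.pow(prefix_len)
}

fn main() {
    for k in 1..=7 {
        println!("{} chars: ~{} keys on average", k, expected_tries(k));
    }
}
```

This is why the observed jump from 5 to 7 characters is so dramatic: two extra characters is a factor of 1,024 in expected work.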
One other thing that surprised me. The resulting address, based on choosing a prefix (let's call it "MYWORDS"), was "GCMYWORDS..." and I didn't expect the leading "C" to be in there. Any thoughts? I didn't see anything in the "strkey" format to account for it. Thanks |
Ah, I see what you mean. Yes, it is a bit weird; however, it is a side effect of the way Stellar addresses are generated. Specifically, it is the result of adding a version byte: const VERSION_BYTE_ACCOUNT_ID: u8 = 6 << 3; Once this is base32 encoded, it turns out the first two characters are limited to `G` followed by one of `A`, `B`, `C`, or `D`.
If you create a few accounts here in the Stellar Laboratory you can see what I mean. |
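Here is a quick sketch (my own, not the project's code) of why the version byte pins down the first two characters. Base32 consumes 5 bits per character: the first character is the top 5 bits of the version byte, and the second character is its remaining 3 bits plus the top 2 bits of the key's first byte, so only 2 bits of the key reach it.

```rust
// Standard RFC 4648 base32 alphabet, as used by strkey.
const ALPHABET: &[u8; 32] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";
const VERSION_BYTE_ACCOUNT_ID: u8 = 6 << 3; // 48 = 0b0011_0000

/// First two base32 characters of a strkey whose payload starts with
/// the account-ID version byte followed by `first_key_byte`.
fn first_two_chars(first_key_byte: u8) -> (char, char) {
    // Char 1: top 5 bits of the version byte = 0b00110 = 6 -> 'G'.
    let c1 = ALPHABET[(VERSION_BYTE_ACCOUNT_ID >> 3) as usize] as char;
    // Char 2: low 3 bits of the version byte (all zero) plus the top
    // 2 bits of the key, so the index is 0..=3 -> 'A', 'B', 'C', 'D'.
    let idx = ((VERSION_BYTE_ACCOUNT_ID & 0b111) << 2) | (first_key_byte >> 6);
    (c1, ALPHABET[idx as usize] as char)
}

fn main() {
    for b in [0x00u8, 0x55, 0xAA, 0xFF] {
        let (c1, c2) = first_two_chars(b);
        println!("{}{}", c1, c2); // GA, GB, GC, GD
    }
}
```

So a "GC..." address simply means the key's first two bits were `10`; a chosen prefix can only start at the third character.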
The code is actually being benchmarked now with criterion. I can only get access to 32 cores on DigitalOcean, so I probably won't test super long prefixes/postfixes. |
Updated with first go at benchmarking. |
If useful, and if you provide some instructions in the README for running the benchmark, I'll be happy to run it on a larger machine. And perhaps up to six letters in a run? I think I can get up to 128 cores on AWS. FYI, 5-letter prefixes on that machine took about 3-5 minutes each across about 5 runs. I didn't test six letters; I jumped right to seven as shown in my original post. |
Sweet!
That would be super helpful! Give me a second, I'll add a readme & make the benchmark configurable. |
@grempe ok, I believe this should suffice: https://github.com/robertDurst/stellar-vanity-address-generator#how-can-i-benchmark Please note the parameters in the configuration. |
OK, I ran the benchmark. I had to make some modifications to allow it to run in a reasonable time (a 96-core machine is not cheap!). I hope this is useful. Modifications: I changed the number of benchmark iterations to

Server Info:
Installation Steps:
Benchmark Run STDOUT:
I uploaded the results here: https://svag-benchmark.s3.amazonaws.com/svag-target.tar.gz As previously reported, I think a six-character run probably takes the better part of a day. And my previous report of 7 characters taking about 3 days should be consistent with these results, as it was on the same number of cores (96). I think an 8-character run would start to get very expensive. Please download the results for your own use, as I'll delete them after a little while. Cheers. |
Another thought: should the number of CPUs consumed by the benchmark be less than the total CPUs on the system? Otherwise every CPU gets pegged at 100%, which leaves no room for the system and the benchmark runner itself to do work. |
Sweet, thanks a lot for doing this - downloaded the plots.
This is a good point. I am going to play around with this this weekend. |
I'll be honest, I'm not quite 100% sure how to test the difference that saving a CPU for the system makes to the benchmark. That being said, this makes sense, so I added it. I also added your results to the README. Thanks again for that! I don't have access to that many CPUs 😅 |
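The "leave one core for the OS" idea can be sketched like this (a hypothetical policy using only the standard library, not necessarily what the crate actually ships):

```rust
use std::thread;

/// Size the worker pool to one less than the available logical CPUs,
/// leaving headroom for the OS and the benchmark runner, while never
/// dropping below one worker on a single-core machine.
fn worker_count(total_cpus: usize) -> usize {
    total_cpus.saturating_sub(1).max(1)
}

fn main() {
    // available_parallelism() is in std since Rust 1.59.
    let total = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
    println!("spawning {} workers on {} logical CPUs", worker_count(total), total);
}
```

On the 96-core benchmark machine this would spawn 95 workers; whether the reserved core measurably changes throughput is exactly what's hard to test, as noted above.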
The results posted are good, but they are for pretty short strings, 5 characters max. The jump to 7 characters, you'll remember from my earlier post, ran for 88 hours. It's kind of a flaw in the benchmark approach, as a bench run for 6 or 7 characters might take weeks and cost thousands of dollars; it will probably never be run. 8 characters may take much, much longer. Months? Years? Some text around this in the README might make sense, to set expectations. |
Good point. Will update this. |
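A rough back-of-the-envelope sketch for the README, anchored to the one observed data point above (7-character prefix: ~88 hours on 96 cores) and assuming each extra base32 character multiplies expected search time by 32:

```rust
/// Very rough extrapolation, not a measurement: scales the observed
/// 88-hour, 96-core run for a 7-character prefix by 32x per extra
/// character. Actual times vary with luck and hardware.
fn estimated_hours(prefix_len: u32) -> f64 {
    let observed_hours = 88.0; // observed: 7 chars on 96 cores
    observed_hours * 32f64.powi(prefix_len as i32 - 7)
}

fn main() {
    for k in 6..=8 {
        let h = estimated_hours(k);
        println!("{} chars: ~{:.0} hours (~{:.1} days)", k, h, h / 24.0);
    }
}
```

Under these assumptions an 8-character run lands around 2,800 hours (roughly four months) on the same 96-core machine, which matches the "months" guess above.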
As we move to actually making this thing run a little faster (#7 and #8), it might be interesting to have some benchmark data.