Performance considerations for async task pool sizes? #344
-
I remember reading somewhere that it's recommended not to spawn an unbounded number of async tasks, but to use a fixed pool size. I'm designing a rule-based async scanner that tests input values against a potentially large number of rules, each containing its own test logic. I'm debating whether I should use a fixed-size pool of async tasks fed by a queue of rule/value pairs to evaluate, or allocate an async task per rule (and maybe allow configuring the concurrency of individual rules).
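For reference, the fixed-pool design described above can be sketched in plain Ruby threads (the same shape works with async tasks): N workers drain a shared queue of rule/value pairs. The rule names and pool size here are hypothetical, not from any real scanner.

```ruby
# Minimal sketch of the fixed-pool design: a shared queue of [rule, value]
# pairs drained by a fixed number of workers. All names are illustrative.
POOL_SIZE = 4

jobs = Queue.new
pairs = [["port_open?", "10.0.0.1"], ["tls_valid?", "example.com"], ["http_ok?", "example.com"]]
pairs.each { |pair| jobs << pair }
jobs.close # once closed and empty, pop returns nil and workers exit

results = Queue.new

workers = POOL_SIZE.times.map do
  Thread.new do
    while (pair = jobs.pop)
      rule, value = pair
      # Stand-in for the rule's real test logic (network call, regex, etc.).
      results << [rule, value, :ok]
    end
  end
end

workers.each(&:join)
puts results.size # prints 3
```

The queue decouples producers from the pool, so adding rules never increases concurrency, only queue depth.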
-
There is no easy answer to this, as it's impossible to predict future workloads, but one way to balance the number of tasks against the available hardware resources is https://github.com/socketry/async/blob/main/lib/async/idler.rb
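The linked `Async::Idler` throttles task creation based on scheduler load; its exact API is in the linked file. As a self-contained illustration of the same idea, the stdlib sketch below admits new work only while the number of in-flight tasks stays under a cap derived from the hardware. The 2x multiplier and `process_pair` helper are assumptions for the example, not part of the library.

```ruby
require "etc"

# Admission control sketch: block the producer while too many tasks are
# in flight, sizing the cap from the number of available processors.
MAX_IN_FLIGHT = Etc.nprocessors * 2

gate = Mutex.new
cond = ConditionVariable.new
in_flight = 0

def process_pair(rule, value)
  # Stand-in for the real per-rule test logic.
  [rule, value, :ok]
end

pairs = Array.new(10) { |i| ["rule_#{i}", "value_#{i}"] }
threads = []

pairs.each do |rule, value|
  gate.synchronize do
    # Wait here instead of spawning unboundedly, like Idler's load target.
    cond.wait(gate) while in_flight >= MAX_IN_FLIGHT
    in_flight += 1
  end
  threads << Thread.new do
    process_pair(rule, value)
  ensure
    gate.synchronize { in_flight -= 1; cond.signal }
  end
end

threads.each(&:join)
puts threads.size # prints 10
```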
-
One of the scenarios I'm also worried about is receiving a large batch of values that certain rules take a long time to process (perhaps connecting to the remote port is slow, or the HTTP server is heavily rate limited), causing the async task pool to get "clogged up". The user would then have to Ctrl+C the command and restart it with a larger pool size, repeating until they found one large enough to "unclog" the queue.
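One common mitigation for this clogging scenario is a per-evaluation timeout, so a stalled connect or rate-limited server frees the worker instead of occupying it indefinitely. With socketry/async you would reach for the task's timeout support rather than stdlib `Timeout`, which interacts poorly with fibers; the stdlib version below is only a self-contained illustration, and the budget and `:slow_rule` stand-in are assumptions.

```ruby
require "timeout"

# Bound each rule evaluation so a hung network call cannot clog a worker.
RULE_TIMEOUT = 0.05 # seconds; a real scanner would use a larger budget

def evaluate(rule, value)
  Timeout.timeout(RULE_TIMEOUT) do
    sleep(1) if rule == :slow_rule # stand-in for a stalled remote connection
    :ok
  end
rescue Timeout::Error
  :timed_out # recorded as a result; the worker moves on to the next pair
end

puts evaluate(:fast_rule, "example.com") # prints ok
puts evaluate(:slow_rule, "example.com") # prints timed_out
```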
-
Yes, using a fixed-size pool is a bad idea, and using work stealing is a good idea; e.g. use async-job, which handles these concerns.
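async-job is a full job-queue library; as a self-contained illustration of the work-stealing idea it is recommended for here, in the sketch below each worker owns its own deque and, when that deque runs dry, steals a pair from the fullest one, so one slow queue cannot leave other workers idle. All names are illustrative and none of this is async-job's API.

```ruby
# Work-stealing sketch: per-worker deques plus stealing from the fullest
# deque when a worker's own deque is empty.
WORKERS = 3

deques = Array.new(WORKERS) { [] }
lock = Mutex.new
30.times { |i| deques[i % WORKERS] << ["rule_#{i}", "value_#{i}"] }
# Make the load uneven so stealing actually happens:
lock.synchronize { deques[0].concat(deques[1]); deques[1].clear }

results = Queue.new

threads = WORKERS.times.map do |id|
  Thread.new do
    loop do
      pair = lock.synchronize do
        deques[id].shift ||          # take from the front of our own deque
          deques.max_by(&:size).pop  # otherwise steal from the back of the fullest
      end
      break unless pair              # every deque is empty; we are done
      results << pair                # stand-in for evaluating rule against value
    end
  end
end

threads.each(&:join)
puts results.size # prints 30
```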