Are there plans to allow req_throttle() with multi_req_perform()? #224
Comments
It's not obvious how to implement it, since we currently just send all the requests to […]. (There's also some weird tension between requesting in parallel to speed up your requests and throttling to slow them down.)
Thanks, Hadley. Some of the endpoints have high rate limits (e.g. 200/s), and I find I'm not hitting the limit with synchronous requests, although maybe there's some other reason for that. Perhaps I'll see if I can do something myself with the help of your hint (and your useful advice in Advanced R).
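For reference, the synchronous workflow described above looks something like the following sketch; the URL and ids are placeholders, not a real API:

```r
library(httr2)
library(purrr)

# Placeholder ids and endpoint for illustration only.
ids <- 1:10
reqs <- map(ids, \(id) {
  request("https://api.example.com/things") |>
    req_url_path_append(id) |>
    req_throttle(rate = 200)  # at most 200 requests per second
})

# Because the requests are performed one at a time, req_throttle()
# is honoured between performs.
resps <- map(reqs, req_perform)
```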
Just to add, rate limiting on parallel requests would be super useful for us. We're using the Azure AI API and can currently make around 80 RPM for one model and 480 RPM for another (and these limits will be going up shortly). In both cases responses can take quite a long time (up to a minute), so we want to max out the RPM. It would be great to be able to set different limits when sending lists of prompts to different models. It seems like it would be fairly easy to add some code to do this here: line 9 in 66a6fd0.
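Until something along those lines lands in httr2 itself, a per-model limit can be approximated on the caller's side. The sketch below is a workaround, not httr2 API: the helper name perform_batched() and the fixed one-minute window are assumptions made up for illustration. It sends requests in batches no larger than the per-minute limit and waits out the remainder of each window:

```r
library(httr2)

# A rough client-side rate limiter around multi_req_perform(): a sketch only.
perform_batched <- function(reqs, rpm) {
  # Split the requests into batches of at most `rpm` requests each.
  batches <- split(reqs, ceiling(seq_along(reqs) / rpm))
  resps <- list()
  for (batch in batches) {
    start <- Sys.time()
    resps <- c(resps, multi_req_perform(batch))
    elapsed <- as.numeric(Sys.time() - start, units = "secs")
    # If a full batch finished inside the window, wait out the rest of it.
    if (length(batch) == rpm && elapsed < 60) {
      Sys.sleep(60 - elapsed)
    }
  }
  resps
}

# e.g. the two per-model limits mentioned above:
# resps_a <- perform_batched(reqs_model_a, rpm = 80)
# resps_b <- perform_batched(reqs_model_b, rpm = 480)
```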
You can call […]

This doesn't account for […]
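For context, the combination in question looks like this: req_throttle() attaches throttle settings to each request, but, as the docs note (and as discussed below), multi_req_perform() does not enforce them while performing in parallel. A sketch with placeholder URLs:

```r
library(httr2)

# Placeholder requests; the URL and path are illustrative.
reqs <- lapply(1:5, function(i) {
  request("https://api.example.com/endpoint") |>
    req_url_path_append(i) |>
    req_throttle(rate = 10)  # throttle settings attached to each request...
})

# ...but, per the limitation noted in the docs, the throttle is not
# enforced while the requests run in parallel:
resps <- multi_req_perform(reqs)
```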
You note this as a limitation in the docs. I'm currently writing a new wrapper for Wikipedia's APIs, and they have many endpoints that allow asynchronous requests but are strictly rate-limited. I'm rather new to all this, so I imagine that if it were simple to throttle asynchronous requests you would already have done it...
(Thanks for httr2. It's made it very easy for me to get started with this project!)