[Idea] Sleep until rate limit updates and try again #1644
base: master
Conversation
rescue GitHub::TooManyRequests
  retries ||= 0
  if retries < 1
    retries +=1
Layout/SpaceAroundOperators: Surrounding space missing for operator +=.
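In other words, the flagged line only needs the surrounding space added:

  retries += 1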
Maybe we could move the sleep / retry logic into the with_error_handling definition and enable it by passing an argument?
This would cover all our handled API requests so I'd be more in favor of this.

Yeah that seems reasonable to me 👍
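A rough sketch of what that could look like, assuming with_error_handling already wraps our API calls and that we can find out how many seconds remain until the rate limit resets (retry_on_rate_limit and rate_limit_resets_in are illustrative names, not existing code):

# Sketch only; the argument name and the reset-time helper are assumptions.
def with_error_handling(retry_on_rate_limit: false)
  yield
rescue GitHub::TooManyRequests
  retries ||= 0
  raise unless retry_on_rate_limit && retries < 1

  retries += 1                # allow a single cool-down-and-retry
  sleep(rate_limit_resets_in) # hypothetical helper: seconds until the limit resets
  retry                       # re-runs the method body, i.e. the yielded API call
end

Call sites that want the behavior could then opt in with with_error_handling(retry_on_rate_limit: true) { ... }, while everything else keeps raising immediately.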
This issue has been automatically marked as stale because it has not had recent activity. Thank you for your contributions.
While working on updating the ES indexes, I'm slightly worried about exhausting some rate limits.
This is just an idea for how we could rescue from hitting our rate limits, hold off for a cool-down period, and then resume work.
If something like this would work, I think it might be worth making this retry behavior opt-in rather than the default, similar to how we can choose to skip the API cache.
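For concreteness, a sketch of that shape at a single call site, matching the rescue in the diff above (push_to_elasticsearch and rate_limit_resets_in are hypothetical names; the real code would need a way to know when the limit actually resets):

def update_index(repository)
  push_to_elasticsearch(repository) # hypothetical rate-limited API call
rescue GitHub::TooManyRequests
  retries ||= 0
  raise unless retries < 1          # give up after one cool-down attempt

  retries += 1
  sleep(rate_limit_resets_in)       # hypothetical: seconds until the rate limit resets
  retry
end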
cc/ @d12 @tarebyte for your opinions on the approach.