Optimize Kebechet runs in a deployment #873
Comments
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /lifecycle stale
/remove-lifecycle stale
Some Kebechet features rely on the content of the webhook (e.g. whether a PR was merged). If we drop webhooks we may drop functionality.
This looks more like a job for a workqueue with a limited number of concurrent consumers.
Do we actually have hard data on that, i.e., is it a problem in practice? /sig devsecops
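Not an existing Thoth component, just a minimal sketch of the workqueue idea above, assuming a hypothetical `handle_webhook` coroutine and a `MAX_CONSUMERS` cap: incoming webhooks are queued and at most `MAX_CONSUMERS` of them are processed concurrently, so a burst of events cannot flood the namespace.

```python
import asyncio

MAX_CONSUMERS = 3  # hypothetical cap on concurrent Kebechet runs


async def handle_webhook(payload: dict) -> None:
    """Stand-in for scheduling/running Kebechet for one webhook."""
    await asyncio.sleep(1)  # simulate work
    print(f"handled webhook for {payload['repo']}")


async def consumer(queue: "asyncio.Queue[dict]") -> None:
    while True:
        payload = await queue.get()
        try:
            await handle_webhook(payload)
        finally:
            queue.task_done()


async def main() -> None:
    queue: "asyncio.Queue[dict]" = asyncio.Queue()
    workers = [asyncio.create_task(consumer(queue)) for _ in range(MAX_CONSUMERS)]

    # A burst of webhooks only fills the queue; consumption stays bounded.
    for i in range(10):
        await queue.put({"repo": f"org/repo-{i}"})

    await queue.join()  # wait until every queued webhook has been handled
    for worker in workers:
        worker.cancel()


if __name__ == "__main__":
    asyncio.run(main())
```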
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /lifecycle rotten
/remove-lifecycle rotten
/remove-lifecycle frozen
for consistency (as this does not happen automatically):
Issues needing reporter input close after 60d. If there is new input, reopen with
Is your feature request related to a problem? Please describe.
It looks like we are not effectively utilizing cluster resources for Kebechet. Kebechet is run on each webhook received, which might easily flood the whole namespace, especially with active repositories. Let's have a way to limit the number of Kebechet pods for a single repository in a deployment.
Describe the solution you'd like
One of the solutions would be to use messaging, if Kafka provides a feature that would limit the number of specific messages (that is probably not possible based on our last tech talk discussion, CC @KPostOffice).
Another way to limit the number of Kebechet runs for a single repository once a webhook is sent to user-api is to create a new database record stored in Postgres (associated with the GitHub URL) that keeps `null` or a timestamp of when user-api last scheduled Kebechet:

1. Once a webhook is received, user-api checks the record for the given repository.
2. Check whether the stored value is not `null` and the timestamp is less than the specified number of minutes old (a new configuration entry in user-api):
   a. if yes, the webhook handling is ignored (Kebechet is not run)
   b. if no, continue to step 3 (user-api schedules Kebechet and stores the current timestamp)
3. On the Kebechet side, once Kebechet is started it marks the given timestamp as `null` for the repo it handles and starts handling the repository with Kebechet managers.

This way we will ignore any webhooks coming to the system while Kebechet messages for repositories are already queued, and we know Kebechet will handle those repositories in its next run. Kebechet clears the entry to `null` on startup.
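Not part of the original issue, but a minimal sketch of the check described above, assuming the record is keyed by the GitHub URL. The in-memory dict stands in for the Postgres table, and all names (`should_schedule_kebechet`, `KEBECHET_THROTTLE_MINUTES`, `on_webhook`, `on_kebechet_start`) are hypothetical, not existing user-api or Kebechet functions.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical configuration entry in user-api: how long a scheduled Kebechet
# run suppresses further webhook-triggered runs for the same repository.
KEBECHET_THROTTLE_MINUTES = 30


def should_schedule_kebechet(
    last_scheduled: Optional[datetime], now: Optional[datetime] = None
) -> bool:
    """Decide whether user-api should schedule Kebechet for a repository.

    ``last_scheduled`` is the value of the proposed Postgres record:
    ``None`` (no run queued) or the timestamp of the last scheduling
    done by user-api.
    """
    if last_scheduled is None:
        return True  # no Kebechet run queued for this repo

    now = now or datetime.now(timezone.utc)
    # A recent timestamp means a run is already queued - ignore the webhook.
    # An old timestamp (lost message, Kebechet failing to clear the entry)
    # is treated as expired so the system recovers on its own.
    return now - last_scheduled >= timedelta(minutes=KEBECHET_THROTTLE_MINUTES)


def on_webhook(repo_url: str, store: dict) -> bool:
    """user-api side: return True if Kebechet was scheduled for the webhook."""
    if not should_schedule_kebechet(store.get(repo_url)):
        return False  # ignored: a run for this repository is already queued

    store[repo_url] = datetime.now(timezone.utc)  # remember when we scheduled
    # ... produce the message / schedule the Kebechet run here ...
    return True


def on_kebechet_start(repo_url: str, store: dict) -> None:
    """Kebechet side: clear the entry back to ``None`` on startup."""
    store[repo_url] = None
```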
Describe alternatives you've considered
Keep the solution as is, but it is not optimal with respect to the resources allocated.
Additional context
The timestamp was chosen to avoid manually adjusting the database if there are issues (e.g. issues with Kafka). If we lose messages or Kebechet fails to clear the database entry to `null`, we will still be able to handle requests after the specified time configured in user-api.
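For illustration only, reusing the hypothetical `should_schedule_kebechet` helper from the sketch above: an entry that was never cleared is simply treated as expired once it is older than the configured threshold, so no manual database intervention is needed.

```python
from datetime import datetime, timedelta, timezone

stale = datetime.now(timezone.utc) - timedelta(hours=2)    # entry never cleared
fresh = datetime.now(timezone.utc) - timedelta(minutes=5)  # run recently queued

assert should_schedule_kebechet(stale) is True   # expired: schedule again
assert should_schedule_kebechet(fresh) is False  # still queued: ignore webhook
assert should_schedule_kebechet(None) is True    # cleared entry: schedule
```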