strategy.drainTimeout not working as intended? #346
Ah, I should have mentioned: the problem we are facing is that the pods are still in a terminating state when the underlying node is terminated. We are trying to configure the RollingUpgrade to wait so that the pods can gracefully terminate as part of the drain.
@jess-belliveau If I understand correctly, you are trying to delay the node termination.
@shreyas-badiger, thanks for the response. If you look at my spec at the start, I have actually set a postDrain.waitSeconds:
I hadn't even realised this field doesn't appear to work either; I'm not seeing a 300-second pause anywhere.
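For reference, a spec along these lines would combine both fields. Only strategy.drainTimeout and postDrain.waitSeconds are taken from this thread; the other field names and values are illustrative, so check them against the keikoproj/upgrade-manager CRD documentation:

```yaml
apiVersion: upgrademgr.keikoproj.io/v1alpha1
kind: RollingUpgrade
metadata:
  name: my-rolling-upgrade        # illustrative name
spec:
  asgName: my-asg                 # illustrative ASG
  strategy:
    drainTimeout: 1000            # expected: wait up to 1000s for the drain
  postDrain:
    waitSeconds: 300              # expected: pause 300s after the drain
```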
@jess-belliveau I think the relevant implementation is in upgrade-manager/controllers/script_runner.go (line 109 in 79b38c0).
Thanks @shreyas-badiger. I might be able to loop back in the future and see what contributions I can make. For the time being, we are having promising results with:
the only caveat being that we have had to add some binaries to the
Is this a BUG REPORT or FEATURE REQUEST?:
BUG REPORT
What happened:
I am setting strategy.drainTimeout to 1000 seconds, but I see the node terminated immediately after the node drain is issued.
What you expected to happen:
I expect upgrade-manager to wait 1000 seconds after the drain is issued before terminating the instance.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Am I interpreting the spec correctly?
Environment:
Other debugging information (if applicable):