feat(controller): retry strategy support on daemon containers, fixes #13705 #13738
Open
MenD32 wants to merge 57 commits into argoproj:main from MenD32:feat/daemon-retry-strategy
Conversation
Signed-off-by: MenD32 <[email protected]>
MenD32 force-pushed the feat/daemon-retry-strategy branch from bce39fd to 2e1c501 on October 11, 2024 22:37
MenD32 changed the title from "Feat/daemon retry strategy" to "Feat: retry strategy support on daemon containers" on Oct 12, 2024
MenD32 changed the title from "Feat: retry strategy support on daemon containers" to "feat: retry strategy support on daemon containers" on Oct 12, 2024
MenD32 force-pushed the feat/daemon-retry-strategy branch from fd95b21 to 6f4efc7 on October 13, 2024 14:38
MenD32 force-pushed the feat/daemon-retry-strategy branch from 4eb8b59 to fa12577 on October 14, 2024 19:44
I added an E2E test and I couldn't get it to work on the CI (even though the behavior of the controller was as if it were built on main instead of on this branch). Edit: never mind, I got it to work.
MenD32 force-pushed the feat/daemon-retry-strategy branch from cb60281 to 80ffbb5 on November 2, 2024 14:31
MenD32 force-pushed the feat/daemon-retry-strategy branch from ca07755 to 2695fdf on December 8, 2024 13:37
MenD32 force-pushed the feat/daemon-retry-strategy branch from fc780f7 to ec06bdb on December 8, 2024 13:41
Addresses #13705 and #2963.
Motivation
Add retryStrategy support to daemon container templates.
Example: our use case
We have a workflow with a model running as a daemon container, and subsequent steps that query the model in order to test its performance. When the workflow runs for a long time, it can be preempted by the cloud provider due to GPU node preemption, causing the workflow to fail.
With this feature, I could survive a cloud preemption by retrying the daemon container, given the correct cluster autoscaler configuration.
When considering how to solve this problem with existing Argo Workflows tooling, I came to the following conclusions:
- retryStrategy on daemon containers currently doesn't work as expected: daemon containers cause the whole workflow to fail even if they have a retry strategy.
- Using the k8s resource template is problematic for these reasons:
I see this feature as vital for anyone using daemon containers for testing or benchmarking, since problems like these are quite common under those circumstances.
Modifications
- Execution functions now consider "succeeded" daemoned nodes as failed: when a daemoned container completes execution it is considered failed, and if it has a retry strategy it will be retried.
- The IP change in the retried node cascades down to future executions.
Verification
- E2E tests.
- For manual verification, I added examples to test and observe the behavior locally; I simulated daemon failures by deleting the daemon pod manually.
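A local reproduction workflow along these lines could look like the sketch below. This is a hypothetical config (names, images, and the retry limit are illustrative, not the PR's actual example files): a daemon template carrying a retryStrategy, whose IP is consumed by a follow-up step. Deleting the daemon's pod mid-run should then exercise the retry path.

```yaml
# Hypothetical example, not the PR's bundled example file.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: daemon-retry-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: server
            template: http-server
        - - name: client
            template: http-client
            arguments:
              parameters:
                - name: ip
                  value: "{{steps.server.ip}}"
    # Daemon template with a retry strategy: the feature under test.
    - name: http-server
      daemon: true
      retryStrategy:
        limit: "3"
      container:
        image: nginx:1.25
        ports:
          - containerPort: 80
    - name: http-client
      inputs:
        parameters:
          - name: ip
      container:
        image: curlimages/curl:8.5.0
        args: ["-s", "http://{{inputs.parameters.ip}}"]
```

With this running, `kubectl delete pod` on the daemon's pod simulates the preemption scenario from the motivation section.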