[backend] kfp-kubernetes sets incorrect pod metadata in pipelines with multiple tasks #11077
The problem seems to be in the backend/src/v2/compiler/argocompiler/container.go:addContainerExecutorTemplate function (see: https://github.com/kubeflow/pipelines/blob/2.2.0/backend/src/v2/compiler/argocompiler/container.go#L197). This function is supposed to add the task's pod metadata to the template it creates (see lines 279-285). However, that logic actually runs only on the first invocation of the function, because of the caching that takes place in lines 199-204 and 286:
For the very first task (which, due to SDK logic, happens to be the one that comes first in the alphabetical order of component names), a template is created, the task's metadata is added to it, and then the template is cached. Every subsequent task reuses the cached template as-is, so the later tasks' metadata is never applied.

I wonder if the minimal solution to this problem would be as simple as including the task/component name in the cache key.

I understand that such a solution (if it worked) would have the following two consequences:
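To make the failure mode concrete, here is a minimal sketch of the caching pattern described above, translated to Python. All names here are illustrative, not the actual Go code from container.go: it shows how caching the executor template under a key that ignores the task name makes every later task inherit the first task's metadata, and how widening the key fixes it.

```python
def make_compiler():
    """Buggy variant: the template cache is keyed only by the template name."""
    templates = {}  # cache shared across all tasks

    def add_container_executor_template(task_name, pod_metadata):
        key = "system-container-executor"  # buggy: same key for every task
        if key in templates:
            return templates[key]  # later tasks reuse the first template
        template = {"name": key, "metadata": dict(pod_metadata)}
        templates[key] = template
        return template

    return add_container_executor_template


def make_fixed_compiler():
    """Fixed variant: the task/component name is part of the cache key."""
    templates = {}

    def add_container_executor_template(task_name, pod_metadata):
        key = f"system-container-executor-{task_name}"  # per-task key
        if key in templates:
            return templates[key]
        template = {"name": key, "metadata": dict(pod_metadata)}
        templates[key] = template
        return template

    return add_container_executor_template


buggy = make_compiler()
buggy("a", {"label": "a"})
t_b = buggy("b", {"label": "b"})
# Task "b" silently got task "a"'s metadata:
assert t_b["metadata"] == {"label": "a"}

fixed = make_fixed_compiler()
assert fixed("a", {"label": "a"})["metadata"] == {"label": "a"}
assert fixed("b", {"label": "b"})["metadata"] == {"label": "b"}
```

The trade-off of the per-task key, as noted above, is that a separate executor template is emitted for every task instead of one shared template.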
Any workaround for this? We are facing the same issue!
I could not find any. And frankly, the current code doesn't offer any hope that a workaround exists, unless you are OK with all tasks in your pipeline having the same label(s). In that case, simply call add_pod_label on the task whose component name comes first alphabetically.
(cherry picked from commit fb7e6b2)
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Commenting as this issue is still relevant.
Environment
Steps to reproduce
Here is a minimal example pipeline.
This is the generated pipeline and platform spec, which looks correct.
However, both pods for the two pipeline tasks get the label and annotation a. I did some more testing, and it seems that for all tasks, the platform spec of the task that is alphabetically first is taken. If you rename the first task to c_op, both pods get the label / annotation b. This might be because the first platform spec in the YAML is always taken.

Expected result

The pod metadata is set correctly for all tasks. In the above example, task a should have label / annotation a, and task b should have label / annotation b.

Materials and Reference
The pull request that introduced the feature to set labels / annotations: #10393
I am not too familiar with the codebase, but it might have something to do with not picking the correct KubernetesExecutorConfig.

Impacted by this bug? Give it a 👍.