fix: create groups from legacy inventory even with limit flag set #11506
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: Adrian-St. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Welcome @Adrian-St!
Hi @Adrian-St. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/ok-to-test
@Adrian-St Please make sure your CLA and CI checks pass.
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
add_host:
  name: "{{ item }}"
  groups: 'kube_node'
with_items: "{{ groups['kube-node'] | default([]) }}"
This loops over the whole cluster, which adds up on big clusters.
You could probably do something like this instead:
key: "{{ ('kube-node' in group_names) | ternary('', 'not_') }}kube_node"
In fact, this would probably let us loop over the groups and use only one task.
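A minimal sketch of how that single group_by task might look (the task name, the loop list, and the throwaway not_* groups are assumptions, not code from this PR):

- name: Map legacy group names onto the new group names
  group_by:
    # Hosts in the legacy group land in the new group; everyone else
    # lands in an unused not_* group, so no conditional is needed.
    key: "{{ (item.old in group_names) | ternary('', 'not_') }}{{ item.new }}"
  loop:
    - { old: 'kube-master', new: 'kube_control_plane' }
    - { old: 'kube-node', new: 'kube_node' }

Because group_by evaluates the key once per host, a single task covers every host and every legacy group without iterating over the whole cluster inventory per host.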
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Superseded by #11577
/close
@VannTen: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
What type of PR is this?
What this PR does / why we need it:
When using the legacy inventory with the kube-master and kube-node groups, the boilerplate.yml playbook is supposed to add them to the kube_control_plane and kube_node groups respectively.
The problem is that if you follow the docs and run the scale.yml playbook with the limit flag set to the new worker node (mentioned here), the playbook won't add the nodes from the kube-master group to the kube_control_plane group, because the limit doesn't match any of those hosts.
Consequently, the task "Stop if kube_control_plane group is empty" in roles/kubernetes/preinstall/tasks/0040-verify-settings.yml will fail.
The fix uses the add_host module instead, which runs on all hosts and will always add the hosts from the legacy groups to the new groups, even if the limit flag is set.
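A minimal sketch of the add_host-based mapping described above, with one task per legacy group (the kube-master task mirrors the kube-node hunk shown in the diff; exact task names and placement are assumptions, not the verbatim diff from this PR):

- name: Add hosts from the legacy kube-master group to kube_control_plane
  add_host:
    name: "{{ item }}"
    groups: 'kube_control_plane'
  with_items: "{{ groups['kube-master'] | default([]) }}"

- name: Add hosts from the legacy kube-node group to kube_node
  add_host:
    name: "{{ item }}"
    groups: 'kube_node'
  with_items: "{{ groups['kube-node'] | default([]) }}"

Because add_host edits the in-memory inventory, the legacy hosts are added to the new groups even when they fall outside the --limit pattern.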
Which issue(s) this PR fixes:
I haven't created an issue for this bug since the information in the bug template isn't relevant and I couldn't get vagrant to work easily, but let me know if it's needed and I can create one.
This bug should happen on any Kubespray version with the updated group names, and the fix worked in our CI environment, where we're using the legacy group names.
Special notes for your reviewer:
Does this PR introduce a user-facing change?: