
Proposal for managing Managed ETCD in k3s provider #97

Closed
Conversation

mogliang
Collaborator

@mogliang mogliang commented Apr 2, 2024

Initially, we discussed possible solutions for managing etcd, and the conclusion was to create an etcd proxy pod and then reuse kubeadm code to manage etcd.
#75

Recently, we discussed with k3s guys.
k3s-io/k3s#9818
k3s-io/k3s#9841
They mentioned there is a k3s embedded controller living inside the k3s process that manages the etcd lifecycle, and it also exposes interfaces that allow us to interact with it. I think this may be a better direction for us. Here I drafted the design doc.

Please help review and comment~

@richardcase
Collaborator

There is an open PR for the RKE2 provider to manage the ETCD membership, which is relevant to this: rancher/cluster-api-provider-rke2#265

@mogliang
Collaborator Author

mogliang commented Apr 9, 2024

There is an open PR for the RKE2 provider to manage the ETCD membership, which is relevant to this: rancher-sandbox/cluster-api-provider-rke2#265

Thanks Richard. Although RKE2 inherits from K3s, they host etcd in different ways. RKE2 is more like upstream k8s, hosting etcd in a static pod, and the PR you mentioned manages etcd the same way the kubeadm control plane provider does.

K3s, on the other hand, embeds etcd in the k3s host process, and k3s itself has controllers to manage the embedded etcd. The folks from k3s also suggested not operating on etcd directly. So I'm proposing we leverage the k3s etcd controller to manage etcd.

I've created a branch locally to implement the etcd management by following this doc, and it's working fine. @richardcase should we combine the implementation code in this PR as well? Or put it in a separate PR?

qliang added 3 commits April 10, 2024 05:50
Signed-off-by: qliang <[email protected]>
Signed-off-by: qliang <[email protected]>
@richardcase
Collaborator

I've created a branch locally to implement the etcd management by following this doc, and it's working fine. @richardcase should we combine the implementation code in this PR as well? Or put it in a separate PR?

Thanks @mogliang. I'd keep this PR for the doc and have a separate PR for the implementation. I will make sure I review the proposal today.

And great that you have it working 🎉

@nasusoba
Contributor

I did some more investigation to find out whether the etcd proxy could be replaced by the k3s etcd controller.

Case 1: Monitor ETCD state for Scale & Remediation preflight checks.

For kubeadm CAPI, there are 2 health checks for monitoring etcd health per etcd node:

  • Check 1: Is the list of member IDs reported by this etcd member the same as that reported by all other members?

    • Does k3s CAPI need this check?
      • Yes, we also need to ensure quorum remains before removing a node.
    • Does the k3s etcd controller support this check?
      • No, this condition check is not supported by the k3s etcd controller; we need to modify k3s code.
  • Check 2: Does an etcd member have any alarm raised?

    • Does k3s CAPI need this check?
      • Yes, we also need to check the alarms to see if etcd is healthy.
    • Does the k3s etcd controller support this check?
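For illustration, the member-list agreement check (Check 1) boils down to comparing the member ID sets reported by each member. Here is a minimal, logic-only sketch in Go; the function names are hypothetical, and a real implementation would gather each list via the etcd client API against every member's endpoint:

```go
package main

import (
	"fmt"
	"sort"
)

// normalize returns a sorted copy of the IDs so sets compare order-independently.
func normalize(ids []uint64) []uint64 {
	out := append([]uint64(nil), ids...)
	sort.Slice(out, func(i, j int) bool { return out[i] < out[j] })
	return out
}

// memberListsAgree reports whether every etcd member sees the same set of
// member IDs. Each inner slice is the member list as reported by one member.
func memberListsAgree(reported [][]uint64) bool {
	if len(reported) == 0 {
		return true
	}
	canon := normalize(reported[0])
	for _, list := range reported[1:] {
		other := normalize(list)
		if len(other) != len(canon) {
			return false
		}
		for i := range canon {
			if canon[i] != other[i] {
				return false
			}
		}
	}
	return true
}

func main() {
	healthy := [][]uint64{{1, 2, 3}, {3, 2, 1}, {2, 1, 3}}
	partitioned := [][]uint64{{1, 2, 3}, {1, 2}}
	fmt.Println(memberListsAgree(healthy))     // true
	fmt.Println(memberListsAgree(partitioned)) // false
}
```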
Case 2: Remove ETCD member before removing a controlplane node

We need this annotation; otherwise, scaling down from 2 nodes to 1 node fails (#96).
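The quorum arithmetic behind this is worth spelling out: etcd needs a majority of registered members to be healthy, so if a node is deleted while its etcd member is still registered, the member count stays the same but one member becomes unreachable. A small sketch (helper names are hypothetical) showing why the 2-to-1 scale-down loses quorum unless the member is removed from etcd first:

```go
package main

import "fmt"

// quorum returns the minimum number of healthy members etcd needs to make
// progress, i.e. a strict majority of the registered member count.
func quorum(members int) int { return members/2 + 1 }

// survivesNodeDeletion reports whether the cluster keeps quorum if one node
// is deleted while its etcd member is still registered: the registered
// member count is unchanged, but only members-1 remain reachable.
func survivesNodeDeletion(members int) bool {
	return members-1 >= quorum(members)
}

func main() {
	// With 2 members, quorum is 2, so deleting a node without first
	// removing its etcd member leaves 1 reachable member: quorum is lost.
	fmt.Println(quorum(2), survivesNodeDeletion(2)) // 2 false
	// With 3 members, quorum is 2, so one unreachable member is tolerated.
	fmt.Println(quorum(3), survivesNodeDeletion(3)) // 2 true
}
```

Removing the etcd member first shrinks the registered set to 1, whose quorum is 1, so the remaining node stays healthy.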

Case 3: Reconcile ETCD members on controlplane CR reconcile loop.

For kubeadm CAPI, it iterates over all etcd members and finds members that do not have a corresponding node. Any such member is removed from the etcd member list.

We also need to reconcile etcd members to prevent losing quorum when deleting a node. But this is not supported by the k3s etcd controller, and we would need to modify k3s code.
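This reconcile step is essentially a set difference between etcd member names and Node names. A logic-only sketch (the function name is hypothetical; the real loop would list members via the etcd client and Nodes via the Kubernetes client):

```go
package main

import "fmt"

// orphanedMembers returns the etcd member names that have no corresponding
// Kubernetes Node; these are the members the reconcile loop would remove.
func orphanedMembers(members, nodes []string) []string {
	nodeSet := make(map[string]struct{}, len(nodes))
	for _, n := range nodes {
		nodeSet[n] = struct{}{}
	}
	var orphans []string
	for _, m := range members {
		if _, ok := nodeSet[m]; !ok {
			orphans = append(orphans, m)
		}
	}
	return orphans
}

func main() {
	members := []string{"cp-0", "cp-1", "cp-2"}
	nodes := []string{"cp-0", "cp-2"} // cp-1's node is gone
	fmt.Println(orphanedMembers(members, nodes)) // [cp-1]
}
```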

Conclusion

If we want to remove the etcd proxy and rely on the k3s etcd controller, we need to implement Case 1 Check 1 and Case 3 in k3s code. We need more discussion on whether that change is warranted. For now, we could simply implement Case 2 to fix #96.

@mogliang
Collaborator Author

Thanks @nasusoba for the detailed investigation. So, let's keep the etcd proxy approach for implementing the etcd feature.

We may also need to work with k3s to close the gap; leveraging the k3s etcd controller is the better approach in the long run.
