
Failed to proxy to member clusters with Karmada deployed by karmada-operator #5571

Closed
chaosi-zju opened this issue Sep 20, 2024 · 4 comments · Fixed by #5572
Assignees
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@chaosi-zju
Member

chaosi-zju commented Sep 20, 2024

What happened:

I have a Karmada control plane installed by karmada-operator, and it has joined a member cluster (member1). When I execute karmadactl get with --operation-scope members, it fails with the following error message:

$ karmadactl --karmada-context karmada-apiserver get deploy --operation-scope members                                                                                                     
error: cluster(member1) is inaccessible, please check authorization or network

What you expected to happen:

The result should be like this:

$ karmadactl --karmada-context karmada-apiserver get deploy --operation-scope members
NAME    CLUSTER   READY   UP-TO-DATE   AVAILABLE   AGE   ADOPTION
nginx   member1   1/1     1            1           10m   Y
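
For context, karmadactl reaches member clusters through the clusters/proxy endpoint of the Karmada aggregated API. A rough equivalent of the failing request as a raw call (a sketch, assuming the member cluster is named member1):

$ kubectl --context karmada-apiserver get --raw \
    /apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy/apis/apps/v1/deployments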

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Compared with other installation methods, the operator misses two RBAC configurations:

cluster-proxy-admin
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: "2024-09-19T08:22:24Z"
  labels:
    karmada.io/system: "true"
  name: cluster-proxy-admin
  resourceVersion: "282"
  uid: 1561fe60-eec6-405d-a981-0a9ca417c09d
rules:
- apiGroups:
  - cluster.karmada.io
  resources:
  - clusters/proxy
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: "2024-09-19T08:22:24Z"
  labels:
    karmada.io/system: "true"
  name: cluster-proxy-admin
  resourceVersion: "283"
  uid: ddebc2b0-2ead-4fca-bf8e-40d6634b5d8f
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-proxy-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:admin

When these two RBAC configurations are applied to Karmada, the issue is gone.
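
As a manual workaround, the two manifests above can be applied directly to the Karmada control plane (a sketch, assuming they are saved as cluster-proxy-admin.yaml and the kubeconfig has a karmada-apiserver context):

$ kubectl --context karmada-apiserver apply -f cluster-proxy-admin.yaml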

Environment:

  • Karmada version:
  • kubectl-karmada or karmadactl version (the result of kubectl-karmada version or karmadactl version):
  • Others:
@chaosi-zju chaosi-zju added the kind/bug Categorizes issue or PR as related to a bug. label Sep 20, 2024
@chaosi-zju
Member Author

/assign chaosi-zju

cc @zhzhuang-zju, please help confirm that this is indeed a problem.

@zhzhuang-zju
Contributor

Compared with other installation methods, the operator misses two RBAC configurations:

Without these two RBAC configurations, the user system:admin does not have permission to access the clusters/proxy resource in the cluster.karmada.io API group. As a result, the kubeconfig used by karmadactl cannot access member clusters.
I think this is an omission during the installation of the Karmada instance by the Karmada operator. Do you have any ideas to resolve this?
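
The missing permission can also be confirmed from the client side with a quick check (a sketch; the karmada-apiserver context name is an assumption):

$ kubectl --context karmada-apiserver auth can-i get clusters.cluster.karmada.io --subresource=proxy
# prints "no" until the ClusterRole/ClusterRoleBinding above are applied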

@chaosi-zju
Member Author

Yes, I raised PR #5572 to resolve it.

@RainbowMango
Member

/retitle Failed to proxy to member clusters with Karmada deployed by karmada-operator

@karmada-bot karmada-bot changed the title Command karmadactl --operation-scope members Fails in Operator-Installed Karmada Failed to proxy to member clusters with Karmada deployed by karmada-operator Sep 24, 2024