Add Services (ServiceTemplates) to ManagedCluster to deploy on target cluster #362
Conversation
internal/sveltos/clusterprofile.go
}

// DeleteClusterProfile issues delete on ClusterProfile object.
func DeleteClusterProfile(ctx context.Context, cl client.Client, namespace string, name string) error {
Can we create some generic "remover"? Currently the only difference between DeleteHelmRelease and DeleteClusterProfile is the client.Object passed to the Delete method; the method parameters and logic are the same.
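For reference, a minimal sketch of what such a generic remover could look like (the helper name deleteObject and the IgnoreNotFound handling are assumptions, not code from this PR):

```go
import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"
)

// deleteObject issues a Delete for any client.Object. Callers pass an empty
// typed object (e.g. a ClusterProfile or HelmRelease) plus its namespace/name,
// so DeleteClusterProfile and DeleteHelmRelease become thin wrappers around it.
func deleteObject(ctx context.Context, cl client.Client, obj client.Object, namespace, name string) error {
	obj.SetNamespace(namespace)
	obj.SetName(name)
	// Treating "already gone" as success is an assumption; drop IgnoreNotFound
	// if the existing helpers are expected to surface NotFound errors.
	return client.IgnoreNotFound(cl.Delete(ctx, obj))
}
```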
err = sveltos.DeleteProfile(ctx, r.Client, managedCluster.Namespace, managedCluster.Name)
if err != nil {
	return ctrl.Result{}, err
}
@Kshatrix Concerning your suggestion to remove the Sveltos finalizer: I think it is better not to remove it, because otherwise the objects being cleaned up in the reconcileDeleteCommon() func may be left hanging in the hmc-system namespace on the management cluster after the Profile object has been deleted.
Ack. Then we should wait for Profile removal before we remove finalizers from the ManagedCluster object (so we don't leave hanging resources behind unnoticed).
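A rough sketch of that ordering, assuming the Profile shares the ManagedCluster's namespace and name as in the snippet above (the Profile type alias, requeue interval, and finalizer name are illustrative assumptions, not the PR's actual identifiers):

```go
// Sketch only: keep the ManagedCluster finalizer until the Sveltos Profile is
// confirmed gone, so nothing it owns is left behind unnoticed.
if err := sveltos.DeleteProfile(ctx, r.Client, managedCluster.Namespace, managedCluster.Name); err != nil {
	return ctrl.Result{}, err
}

profile := &sveltosv1alpha1.Profile{} // hypothetical alias for the Sveltos Profile API type
err := r.Client.Get(ctx, client.ObjectKey{Namespace: managedCluster.Namespace, Name: managedCluster.Name}, profile)
switch {
case err == nil:
	// Profile still exists (it may be terminating); requeue instead of removing the finalizer.
	return ctrl.Result{RequeueAfter: 10 * time.Second}, nil
case !apierrors.IsNotFound(err):
	return ctrl.Result{}, err
}

// Profile is gone; it is now safe to drop the finalizer from the ManagedCluster.
controllerutil.RemoveFinalizer(managedCluster, hmc.ManagedClusterFinalizer) // finalizer name is an assumption
return ctrl.Result{}, r.Update(ctx, managedCluster)
```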
The CI failure is an auth-related one:
done
Description
This PR:
- Adds services to the ManagedCluster object, where each service corresponds to a ServiceTemplate.
- Deploys the services onto the target cluster by creating a ClusterProfile object (a rough sketch of this flow is shown below).
- Does not update the status on the ManagedCluster object. That will be done in a follow-up PR while working on Update Status on ManagedCluster based on Services Reconciliation #361.
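As a rough illustration of that flow (not the PR's actual code; the type and field names below are simplified, hypothetical stand-ins for the real HMC and Sveltos APIs), each enabled service entry ends up as one Helm chart entry on the generated ClusterProfile:

```go
// Simplified, hypothetical types standing in for the real HMC/Sveltos APIs.
type Service struct {
	Template  string // name of the ServiceTemplate to deploy
	Namespace string // namespace to install the release into on the target cluster
	Install   bool   // mirrors the install flag toggled in the testing section below
}

type HelmChart struct {
	ChartName        string
	ReleaseNamespace string
}

// servicesToHelmCharts sketches how the ManagedCluster's services could be
// translated into the helmCharts list of the generated ClusterProfile.
// Services with install=false are omitted, which is why flipping that flag
// removes the chart from the ClusterProfile/ClusterSummary objects.
func servicesToHelmCharts(services []Service) []HelmChart {
	charts := make([]HelmChart, 0, len(services))
	for _, svc := range services {
		if !svc.Install {
			continue
		}
		charts = append(charts, HelmChart{
			ChartName:        svc.Template, // real code would resolve the chart from the ServiceTemplate
			ReleaseNamespace: svc.Namespace,
		})
	}
	return charts
}
```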
Testing
- Ran make dev-apply && make dev-creds-apply and waited for everything to be running.
- Ran make dev-mcluster-apply and waited for everything to be running.

Provisioning
On Management Cluster
We can see the ClusterProfile object was created with kyverno and ingress-nginx services:

We can see the associated ClusterSummary object was also created and reports that the services have been "Provisioned" onto the target cluster:

On Target Cluster
We can see both kyverno and ingress-nginx running on the target cluster:

Setting install=false for ingress-nginx
By setting install=false on the ManagedCluster object, the ingress-nginx service was removed from the ClusterProfile -> ClusterSummary objects:

We can see ingress-nginx was uninstalled from the target cluster.

Making services list empty
We see that the ClusterSummary object does not show any helmCharts list:

As expected, we can see that both ingress-nginx and kyverno have been uninstalled from the target cluster:

➜ ~ kubectl get pod -A
NAMESPACE        NAME                                             READY   STATUS    RESTARTS   AGE
kube-system      aws-cloud-controller-manager-fjfg2               1/1     Running   0          15m
kube-system      calico-kube-controllers-695f6448bd-fckbc         1/1     Running   0          16m
kube-system      calico-node-7tv5t                                1/1     Running   0          15m
kube-system      calico-node-wkxvg                                1/1     Running   0          13m
kube-system      coredns-6997b8f8bd-f966x                         1/1     Running   0          13m
kube-system      coredns-6997b8f8bd-ht4qs                         1/1     Running   0          13m
kube-system      ebs-csi-controller-5c9db44f4f-5cs6w              5/5     Running   0          15m
kube-system      ebs-csi-controller-5c9db44f4f-6twcq              5/5     Running   0          15m
kube-system      ebs-csi-node-ctcfp                               3/3     Running   0          15m
kube-system      ebs-csi-node-mh8w2                               3/3     Running   0          13m
kube-system      kube-proxy-gsw28                                 1/1     Running   0          15m
kube-system      kube-proxy-wkz7d                                 1/1     Running   0          13m
kube-system      metrics-server-7cc78958fc-n6jrp                  1/1     Running   0          16m
projectsveltos   sveltos-agent-manager-67d6ffbd86-5vx9z           1/1     Running   0          15m
Re-enabling both services again
We see that the ClusterSummary object again shows the list of helmCharts:

Both ingress-nginx and kyverno have again been installed on the target cluster:

➜ ~ kubectl get pod -A
NAMESPACE        NAME                                             READY   STATUS    RESTARTS   AGE
ingress-nginx    ingress-nginx-controller-5bfc858768-dmt84        1/1     Running   0          55s
kube-system      aws-cloud-controller-manager-fjfg2               1/1     Running   0          18m
kube-system      calico-kube-controllers-695f6448bd-fckbc         1/1     Running   0          19m
kube-system      calico-node-7tv5t                                1/1     Running   0          18m
kube-system      calico-node-wkxvg                                1/1     Running   0          17m
kube-system      coredns-6997b8f8bd-f966x                         1/1     Running   0          16m
kube-system      coredns-6997b8f8bd-ht4qs                         1/1     Running   0          16m
kube-system      ebs-csi-controller-5c9db44f4f-5cs6w              5/5     Running   0          19m
kube-system      ebs-csi-controller-5c9db44f4f-6twcq              5/5     Running   0          19m
kube-system      ebs-csi-node-ctcfp                               3/3     Running   0          18m
kube-system      ebs-csi-node-mh8w2                               3/3     Running   0          17m
kube-system      kube-proxy-gsw28                                 1/1     Running   0          18m
kube-system      kube-proxy-wkz7d                                 1/1     Running   0          17m
kube-system      metrics-server-7cc78958fc-n6jrp                  1/1     Running   0          19m
kyverno          kyverno-admission-controller-776987899-qw8g6     1/1     Running   0          67s
kyverno          kyverno-background-controller-86b9f95c96-8nmt5   1/1     Running   0          67s
kyverno          kyverno-cleanup-controller-7bbfc97569-zg86g      1/1     Running   0          67s
kyverno          kyverno-reports-controller-665ccb5b65-jg4xb      1/1     Running   0          67s
projectsveltos   sveltos-agent-manager-67d6ffbd86-5vx9z           1/1     Running   0          19m
Finally deleting the ManagedCluster object
Wait for a while for the delete to finish . . .

We can see that the associated ClusterProfile and ClusterSummary objects have also been deleted from the management cluster: