Replies: 2 comments 3 replies
-
It isn't clear that what you are describing fits within the design of Flux or Kubernetes. Can you clarify?

Flux reconciles declarative artifacts. The intended design is that the declarative artifact is a static definition of intent, and reconciling it against the cluster should have no impact, because the definition of the intent does not change after the intent is actuated by the controllers/operators running on the cluster. This is aligned with https://opengitops.dev, which defines GitOps as a continuous reconciliation process.

Can you be more specific about the application, what you mean by re-installation, and what impact of reconciling you wish to avoid? There are many cases where reconciling an entire resource is undesirable (for example, a PersistentVolume has a section in its spec indicating whether the resource is already bound, and details about what binds it to the cluster, a given node, etc.). Those cases are generally handled in Flux using a field manager, so they can be reconciled continuously without creating any impact.

I'm suggesting there is an X-Y problem here, and we need to know more about your use case in order to help you solve it. You don't want Flux to skip reconciling, because then you won't be doing GitOps: you will lose the ability to correct drift or apply new changes. Flux doesn't distinguish between "old changes" and "new changes"; there is no way for Flux to detect that your configuration is new and reconcile it piecewise only when something has changed. The design is continuous reconciliation.

There is an accommodation in Kustomize Controller's
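As an illustration of the per-resource control mentioned above, kustomize-controller supports an annotation that excludes an individual object from reconciliation. This is a sketch (the resource name and spec are hypothetical, and you should verify the annotation against the Flux version you run); as the reply notes, it is an escape hatch rather than a recommended GitOps workflow:

```yaml
# Hypothetical Deployment excluded from Flux reconciliation.
# While the annotation is set, kustomize-controller skips this
# object when applying the Kustomization; removing the annotation
# restores normal drift correction.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app        # hypothetical name
  namespace: default
  annotations:
    kustomize.toolkit.fluxcd.io/reconcile: disabled
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: nginx:1.25   # hypothetical image
```

Note that any drift in this object will no longer be corrected while the annotation is present, which is exactly the loss of GitOps guarantees described above.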
-
Hi,

Step 2) The next objective is to deploy or upgrade the same nginx pod using the same Helm chart and manifests (deployment.yaml and service.yaml), but via a helm-release.yaml and a gitrepository.yaml. We are getting the error below:

```
$ kubectl get helmrelease
NAME   AGE   READY   STATUS
```

Step 3) We then tried an upgrade by bumping the version in Chart.yaml. The upgrade was picked up with version 0.1.1, but we got the following error:

```
$ kubectl get helmrelease
NAME   AGE   READY   STATUS
```

So we wish to know whether this scenario is possible or doable in Flux CD, i.e. can an already-running Kubernetes resource (a pod or deployment) be migrated to Flux CD without any downtime? The expectation is that the older pods, which are already running, should not be restarted if we use exactly the same manifests and charts with Flux CD. We would appreciate a response to this query.
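For reference, the pair of objects Step 2 describes would look roughly like the sketch below. All names, URLs, paths, and intervals are assumptions, not taken from the post, and the API versions depend on the Flux release in use:

```yaml
# Sketch of a GitRepository source plus a HelmRelease that
# installs a chart from it; adjust apiVersions to your Flux version.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: nginx-repo
  namespace: flux-system
spec:
  interval: 1m
  url: https://example.com/org/nginx-chart.git   # hypothetical URL
  ref:
    branch: main
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: nginx-pod
  namespace: default
spec:
  interval: 5m
  chart:
    spec:
      chart: ./charts/nginx        # hypothetical path inside the repo
      sourceRef:
        kind: GitRepository
        name: nginx-repo
        namespace: flux-system
```

With a setup along these lines, bumping `version` in the chart's Chart.yaml (as in Step 3) is what triggers helm-controller to perform an upgrade.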
-
I want to onboard my existing application to Flux CD so that it is maintained through Flux CD in future. The problem I found is that when I add a Kustomization for the existing application, Flux re-installs the application during reconciliation. Is there any way to skip the Flux reconciliation when onboarding an existing application?
Br,
Tanmoy
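For onboarding, rather than skipping reconciliation, a common approach is to point a Kustomization at manifests that are identical to what is already running, so that the server-side apply is a no-op and existing pods are not restarted. A sketch (the name and path are hypothetical; field names follow kustomize-controller's Kustomization API):

```yaml
# Kustomization that adopts an existing app conservatively:
# prune is disabled so nothing is deleted while onboarding.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: existing-app          # hypothetical name
  namespace: flux-system
spec:
  interval: 10m
  path: ./apps/existing-app   # hypothetical path in the Git source
  prune: false
  sourceRef:
    kind: GitRepository
    name: flux-system
```

If Flux "re-installs" the application, that usually indicates the manifests in Git differ from the live objects (for example in labels, namespaces, or generated names); diffing the rendered manifests against the cluster before enabling the Kustomization helps find those differences.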