
Is the Repo per app scenario possible with gitops-connector? #73

Open
cyberjpb1 opened this issue Jul 21, 2024 · 6 comments

Comments

@cyberjpb1

Is the Repo per app scenario possible with gitops-connector?
Currently I have a repo for each application which also contains the manifests.

If the answer is yes, how should I configure it?
gitRepositoryType: AZDO
ciCdOrchestratorType: AZDO
gitOpsOperatorType: FLUX
azdoGitOpsRepoName:
azdoOrgUrl:
azdoPrRepoName:
gitOpsAppURL:
orchestratorPAT:

@eedorenko
Collaborator

You need to run multiple instances of gitops-connector on your clusters, one instance per application/repo.
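The one-instance-per-repo setup could be sketched as a Helm values file per application, with one release installed per file. The repo names, org URL, and release names below are hypothetical placeholders, not values from this thread:

```yaml
# values-app1.yaml -- hypothetical per-app values for one gitops-connector release
# (keys mirror those in the question above; fill in your own org/repo/PAT)
gitRepositoryType: AZDO
ciCdOrchestratorType: AZDO
gitOpsOperatorType: FLUX
azdoGitOpsRepoName: app1                                  # hypothetical repo name
azdoOrgUrl: https://dev.azure.com/<org>                   # placeholder
azdoPrRepoName: app1
gitOpsAppURL: https://dev.azure.com/<org>/<project>/_git/app1
orchestratorPAT: <PAT>

# Install one release per application, e.g.:
#   helm install gitops-connector-app1 <chart> -f values-app1.yaml -n flux-system
#   helm install gitops-connector-app2 <chart> -f values-app2.yaml -n flux-system
```

Each release then subscribes to notifications for its own application and pushes status back to that application's repo.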

@markphillips100

@eedorenko would a k8s operator approach be feasible here? To clarify, I mean one installation of gitops-connector (operator + custom Connection CRD) that then handles multiple subscribe-to-notification/publish-to-specific-repo Connection resources.

@markphillips100

@eedorenko I've added support for multiple configs in a single instance via a CRD, if you are interested. It still supports the original env-based config via the Helm values setup, albeit with a breaking change due to the values restructuring. The switch between modes of operation is determined by setting singleInstance: null or by supplying values. It's all explained in the helm chart readme.

Whilst it works for ArgoCD notifications as-is, I haven't tested with Flux as my environment isn't set up for it. It shouldn't be a great deal of work, and I can explain more if this goes further.

My fork is here, and let me know if you want a PR opened.

@cyberjpb1
Author

cyberjpb1 commented Oct 9, 2024

@markphillips100 Very interesting, I will try your approach by doing a test with Flux. I will get back to you as soon as possible.

@markphillips100

@cyberjpb1 For the flux_gitops_operator to filter supported messages and indicate its support for the required config name in phase_data, the is_supported_message function needs fleshing out here. See the argo_gitops_operator change for how I implemented it in that use case.

I imagine that in the Flux v2 use case we would need to use the Alert's eventMetadata to convey the required config name in the phase_data, although, being unfamiliar with Flux, I don't know how this ends up structured in the phase_data, so I'm not sure what to look for.

@markphillips100

@cyberjpb1 I checked a previous PR you opened for insight into the eventMetadata and that gave me enough info to create the is_supported_operator. I've created a new flux-multi-config-support branch with this change.

So in theory, the following should suffice:

  1. add a gitops_connector_config_name: "<name of config>" to the Alert's eventMetadata,
  2. set singleInstance: null in values.yaml,
  3. apply a gitopsconfig manifest to the cluster where gitops-connector is running, ensuring its name matches the one used in step 1.
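The steps above might look like the following. The Alert uses the standard Flux notification API (assuming a Flux version recent enough to support spec.eventMetadata); the gitopsconfig manifest's group/version and fields are a guess at the fork's CRD shape, so check the fork's helm chart readme for the actual schema:

```yaml
# Step 1: Flux Alert carrying the config name in eventMetadata
apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Alert
metadata:
  name: gitops-connector
  namespace: flux-system
spec:
  providerRef:
    name: gitops-connector
  eventSources:
    - kind: Kustomization
      name: app1                                   # hypothetical
  eventMetadata:
    gitops_connector_config_name: "app1-config"    # must match step 3
---
# Step 2: in the chart's values.yaml, disable single-instance mode:
#   singleInstance: null
---
# Step 3: hypothetical gitopsconfig resource -- group/version and kind
# are illustrative, not confirmed from the fork
apiVersion: gitops-connector.io/v1
kind: GitOpsConfig
metadata:
  name: app1-config    # same name as in eventMetadata above
```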

NOTE: The helm chart creates a service account, role, and role binding so the connector can watch and update the gitopsconfig resource. The operator also automatically patches (hence the updating) a finalizer into the resource so that, when it is deleted, a proper cleanup occurs before the manifest is removed from the cluster.
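A minimal sketch of the RBAC described above, assuming the hypothetical CRD group from earlier (the actual rules live in the fork's chart):

```yaml
# Hypothetical Role letting the connector watch and patch gitopsconfig
# resources -- resource/group names are illustrative
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitops-connector
  namespace: flux-system
rules:
  - apiGroups: ["gitops-connector.io"]   # hypothetical CRD group
    resources: ["gitopsconfigs"]
    verbs: ["get", "list", "watch", "patch", "update"]
```

The patch/update verbs are what allow the operator to add the finalizer mentioned in the note.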
