Convert agent to a CustomResource #245
Comments
We could split the above into two separate tasks. First creating just a
Don't forget we also want to be able to push agent changes out to people's clusters :)
Example manifest file for
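A rough sketch of what such a custom resource might look like; the API group, kind, and spec fields below are assumptions for illustration, not an agreed schema:

```yaml
# Illustrative only: group, kind, and field names are assumptions,
# not the actual schema being proposed in this issue.
apiVersion: weave.works/v1alpha1
kind: WeaveCloud
metadata:
  name: weave-cloud
  namespace: weave
spec:
  # Secret holding the Weave Cloud instance token.
  tokenSecretRef:
    name: weave-cloud-token
  # Per-agent settings the operator would reconcile into Deployments.
  agents:
    flux:
      enabled: true
      gitURL: git@github.com:example/config.git
    prometheus:
      enabled: true
    scope:
      enabled: true
```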
There are some suggestions from Brice in another issue: #145
I would add: the reasoning behind the alerts as custom resources is here: https://docs.google.com/document/d/1t_Ai_tXh7HlJxaxULuyev6KQZ4eWBt7twUw59sTYh1I/edit?usp=sharing
I think managing alerting config via CRDs is a great idea, but we should not plan to include this right now; it's something that can be added to the code later, as a separate piece of work.
Thanks for the example manifest @lilic! As a kludge, can we also include something like an "append-args" option to allow people to inject extra flags into an agent's command?
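Sketching that kludge against the hypothetical spec above (the `additionalArgs` field name is an assumption, not an agreed part of any schema):

```yaml
spec:
  agents:
    flux:
      enabled: true
      # Hypothetical escape hatch: flags appended verbatim to the
      # agent's command line.
      additionalArgs:
        - --git-poll-interval=1m
        - --registry-poll-interval=5m
```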
We talked with @lilic and @squaremo at lunch about flux and the CRD integration. For the first CRD version we will not include anything flux-related except the
What is the timing of "at some point"? If this will ship first, we still need to support some flux args so that we have an automatic migration path for people who have configured a git repo, and so we continue not to trample on people's running config when they're trying to set it up for the first time. We cannot ship a version which breaks everyone's flux config, or makes it impossible to set up flux with a git repo.
Unknown, but it doesn't matter too much, because there will necessarily be an indefinite period during which fluxd supports both the command line arguments (if supplied) and the configmap (when mounted).
For the time being, launcher will still be doing what it does today:
Yes, agreed. Maybe in the future we can convert it to do something more complex, like actually creating each individual resource, watching for any changes on those, and reconciling. But for now that is too much to change all at once.
Perhaps this is a dim question, but is there a reason to prefer defining a custom resource over just supplying a config map?
We do have cases where we send data to two WC instances (weave cloud, socks-shop); I was envisioning two CRs for those clusters.
We're the only people that do that; two agents, each with its own config, would cover that case I think.
To really answer that question we'd need to collect the +/- of the two :) e.g.
The idea of operators is that they manage arbitrary numbers of resources; e.g., you get one set of agents per weave cloud custom resource. This is where we probably want to end up, but I don't think all our agents are capable of behaving correctly in this scenario (flux isn't, quite), so it's generality we can't use at present. What will the operator do in response to multiple weave cloud resources?
Yep, earlier validation is a point in favour of using a custom resource. It'd be trickier trying to validate a ConfigMap on creation.
I don't know if this is a point in favour of a CRD -- it's mechanically simpler to watch a file than to watch the Kubernetes API.
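On the validation point: a CRD can carry an OpenAPI v3 schema that the API server enforces when a resource is created or updated. A minimal sketch, reusing the hypothetical `WeaveCloud` kind from the example above:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # Must be <plural>.<group>
  name: weaveclouds.weave.works
spec:
  group: weave.works
  version: v1alpha1
  scope: Namespaced
  names:
    plural: weaveclouds
    singular: weavecloud
    kind: WeaveCloud
  validation:
    openAPIV3Schema:
      properties:
        spec:
          required:
            - tokenSecretRef
          properties:
            tokenSecretRef:
              type: object
              required:
                - name
              properties:
                name:
                  type: string
```

With this in place, creating a `WeaveCloud` resource that is missing `spec.tokenSecretRef` would be rejected by the API server up front, rather than failing later inside the agent.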
@squaremo can you explain how the ConfigMap flow would work? e.g. how would we generate things, how would users deploy, update, etc. For example, for CRDs the user just needs one deployment manifest to deploy the "operator" and one manifest to specify configurations, which they can update later on, and the operator would pick up the updates. Like @dlespiau mentioned above, we get things like validation but also versioning of CRDs, meaning we know how to handle different versions of deployed agent operators.
Pretty much the same thing you'd do with a custom resource, except it's in the filesystem rather than something you get through the Kubernetes API. Have I misunderstood something fundamental about what you want to do with the CRD?
And with a config file, one deployment and one configmap.
I don't think there's anything inherent to versioning of CRDs that makes it better than versioning a config file -- presumably you still have to write code to interpret old versions, for instance. But I haven't tried it, so I'm ignorant of any library support for versioning. I can think of another +/- point:
I don't think using a custom resource is a bad design; it's more forward-looking than using a config file, in its favour. I was curious why you hadn't just gone for a config file though, so I wanted to see whether that was an adequate alternative.
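For comparison, the config-file alternative discussed above might look roughly like this: the same settings as the CR sketch, but held in a ConfigMap the agent reads from a mounted volume (contents assumed, mirroring the earlier example), with nothing validating it on creation:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: weave-cloud-agent-config
  namespace: weave
data:
  # Opaque to the API server: typos or missing fields only surface
  # when the agent parses the file at runtime.
  config.yaml: |
    tokenSecretRef:
      name: weave-cloud-token
    agents:
      flux:
        enabled: true
      prometheus:
        enabled: true
```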
Forgot to mention: IIRC (and nothing has changed since the last time I tried this), watching the mounted config map file for changes is actually not that simple, as it's a symlink. It would be easier to just watch the ConfigMap resource with client-go, but then there's no real difference from watching a CRD resource.
@lilic watching ConfigMaps or Secrets is easy, check this: https://github.com/stefanprodan/k8s-podinfo/blob/master/pkg/fscache/fscache.go#L59 The only downside is the delay: it takes at least 2 minutes for the kubelet to update the symlink, while the CRD event is almost instantaneous.
By converting the agent part of the launcher to be deployed as a `CustomResource`, we can pass arguments and options via the `CustomResourceDefinition` and with that also solve #145.

Besides the configuration, we would also solve the problem of just applying the manifest file every X amount of time; instead we would watch and react to events, e.g. when a `Secret` gets deleted we would update the resources, similar to how we already do for cloudwatch in the agents.

We could still keep the bootstrap part of the launcher and in it generate the `CRD` manifest file needed to deploy the agent `CR` (`weave-cloud-operator` :) ). So the install part would be the same as now, if we want to, e.g. via the curl command, and the helm option could remain as well. But the `weave-cloud-operator` could also be installed and configured with a standalone manifest file in the user's cluster.

https://github.com/operator-framework might be useful IMO in this case to generate the things needed to convert the launcher/agent into an "operator".
cc @bricef @marccarre @dlespiau @leth
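A rough sketch of the standalone install path described above (image name, labels, and RBAC wiring are all assumptions): the user-facing manifest could be little more than a Deployment of the operator plus one `WeaveCloud` resource like the example earlier in the thread:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: weave-cloud-operator
  namespace: weave
spec:
  replicas: 1
  selector:
    matchLabels:
      app: weave-cloud-operator
  template:
    metadata:
      labels:
        app: weave-cloud-operator
    spec:
      serviceAccountName: weave-cloud-operator
      containers:
        - name: operator
          # Hypothetical image: watches WeaveCloud resources and
          # reconciles the agent Deployments from them.
          image: weaveworks/weave-cloud-operator:latest
```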