How would you write deploy for distributed / HA clusters - and factorize them #1017
julienfr112
started this conversation in General
Replies: 1 comment
-
@julienfr112 this is absolutely possible; I’ve used pyinfra to deploy many ES (and Riak/K8s/Galera) clusters. For factorising the code, pyinfra has a higher-level concept than operations, called deploys, which is suitable for exactly this kind of thing: https://docs.pyinfra.com/en/2.x/api/deploys.html. I’m currently working on pushing v3 over the line, and part of that is some real-world examples. I’m going to add setting up an ES cluster as one of those!
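For illustration, here is a minimal sketch of what such a reusable deploy could look like, assuming pyinfra 2.x. The function name, template path, and all cluster parameters are hypothetical, not from the pyinfra docs:

```python
# elasticsearch_deploy.py -- a hypothetical reusable, parameterised deploy.
from pyinfra.api import deploy
from pyinfra.operations import apt, files, systemd


@deploy("Deploy Elasticsearch node")
def deploy_elasticsearch(cluster_name, unicast_hosts):
    # Install the Elasticsearch package on this host.
    apt.packages(
        name="Install Elasticsearch",
        packages=["elasticsearch"],
    )

    # Extra keyword arguments are passed through as template variables,
    # so every node renders a config pointing at the rest of the cluster.
    files.template(
        name="Render elasticsearch.yml",
        src="templates/elasticsearch.yml.j2",  # hypothetical template file
        dest="/etc/elasticsearch/elasticsearch.yml",
        cluster_name=cluster_name,
        unicast_hosts=unicast_hosts,
    )

    # Make sure the service is enabled and running.
    systemd.service(
        name="Enable & start Elasticsearch",
        service="elasticsearch",
        running=True,
        enabled=True,
    )
```

A test deploy file and a prod deploy file can then call the same function with different parameters, e.g. `deploy_elasticsearch(cluster_name="es-test", unicast_hosts=["10.0.0.1", "10.0.0.2"])`, which is one way to share code between a test cluster and a prod cluster. (This is a deploy-file fragment: the operations inside only execute when run under the pyinfra CLI against an inventory, so it is not runnable standalone.)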
-
I read the docs quite extensively, but did not find examples or guidance on how to write deploy files for clusters of multiple machines that share one service and must talk to each other (e.g. an Elasticsearch cluster, ZooKeeper, Spark, Hadoop, ...).
Also, if we want to factorise some code (e.g. have a test cluster and a prod cluster using the same deploy code), is it possible to do this in an operation (e.g. an Elasticsearch operation), or is that a bad idea, since an operation seems far less complex than a whole cluster and runs against a single host at a time?
Or maybe pyinfra is not the right tool for this, or this is a long way off on the roadmap?