Expose k8s topology labels to Solr PODs #556
Hey Jan, did you see this ancient PR #90? It's basically hacking the node labels in as sysprops. But yes, you need node permissions, which is very unfortunate. I really wish Kubernetes had solved this via the Downward API years ago. Given SIP-18, what do you think about a plugin that allows loading "sysProp"-like key-values from external sources, like Kubernetes? Getting away from hacky initContainers is the best path for us. You can see how we add new kube-friendly features to Solr, but make those features available in earlier versions through the operator doing some hacky stuff. And eventually the minimum Solr version will let us get rid of those hacks.

So I can definitely see a hacky initContainer doing the work for us here, but eventually it would be great to just use the KubePropertiesPlugin (just an example), and not have to have a complex setup process. Eventually maybe kubernetes/kubernetes#40610 will make it in and this gets even easier for us.
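For context, the node permissions being lamented here amount to cluster-scoped RBAC along these lines. This is a minimal sketch, not something the operator ships today; the ServiceAccount and object names are illustrative:

```yaml
# Minimal RBAC sketch: lets a pod's ServiceAccount read node objects
# (and thus their topology labels). All names are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-label-reader
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: solr-node-label-reader
subjects:
  - kind: ServiceAccount
    name: solr          # example SA used by the Solr pods
    namespace: default
roleRef:
  kind: ClusterRole
  name: node-label-reader
  apiGroup: rbac.authorization.k8s.io
```

Granting every Solr pod read access to cluster-scoped node objects is exactly the kind of privilege creep that makes this approach unfortunate.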
Thanks for the link, I had not found it. While a KubePropertiesPlugin is nice long term for several things, I think it makes sense to start with the more targeted feature of propagating zone info as a sysprop. Regarding the configuration of placement plugins: it would make sense if /clusterprops.json could also be read from / owned by a ConfigMap; then it would be simple to provide such config without touching ZK.
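Purely for illustration, this is the kind of payload /clusterprops.json would carry to enable the Affinity placement plugin. The class name is Solr's real `AffinityPlacementFactory`, but the exact shape and supported keys vary by Solr version, so treat this as a sketch:

```json
{
  "placement-plugin": {
    "class": "org.apache.solr.cluster.placement.plugins.AffinityPlacementFactory",
    "minimalFreeDiskGB": 10,
    "prioritizedFreeDiskGB": 50
  }
}
```

With the `availability_zone` sysprop in place on each node, this plugin can then spread replicas across zones. Owning this file via a ConfigMap would let users change it with a `kubectl apply` instead of poking ZK directly.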
Cycling back to this, I had a look at #90 to see if it is something I could incorporate into my running clusters with plain Helm templating. I'm sure I could extract each part from the Go code into scripts and initContainer YAML to make it work until it is a feature of the operator. Do you foresee a variant of #90 ever being merged?
It could be, though I absolutely hate that it is necessary. Let's not do it for v0.8.0, because we just need to fix TLS and get it out. But let's target it for the next version. I'll ping you, so that we can work on it together.
I have most of the moving parts incorporated in a private Helm chart, except for some access-permission trouble with the SA token; almost there. Please ping me after the 0.8 release and I'm happy to discuss and review/test the feature. If at the same time we could make it much simpler to configure the Affinity (or other) placement plugin, it would be a nice combo.
So, for information, here are the fragments I ended up with in my setup to do this manually: https://gist.github.com/janhoy/dfc24bb128bd5a44a114a10d266b9e56 I've just cut out the relevant parts from the private Helm chart mentioned above.
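For anyone following along, the general shape of this hack is roughly the following. This is a simplified sketch of the approach, not the gist's exact fragments; it assumes the RBAC shown earlier and an image that contains kubectl:

```yaml
# Sketch: an initContainer resolves the node's zone label via the API server
# and writes it to a shared volume, from which the main Solr container can
# pick it up at startup. Image and paths are illustrative.
initContainers:
  - name: fetch-zone
    image: bitnami/kubectl:latest   # any image with kubectl works
    env:
      - name: NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName   # the node NAME is exposed via the Downward API; its labels are not
    command: ["sh", "-c"]
    args:
      - >-
        kubectl get node "$NODE_NAME"
        -o jsonpath='{.metadata.labels.topology\.kubernetes\.io/zone}'
        > /zone-info/zone
    volumeMounts:
      - name: zone-info
        mountPath: /zone-info
volumes:
  - name: zone-info
    emptyDir: {}
```

The Solr container can then prepend something like `-Davailability_zone=$(cat /zone-info/zone)` to `SOLR_OPTS` in its start command.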
We support topologySpreadConstraints in #350, so Solr PODs are spread across AZs. But for Solr to be able to place replicas across AZs with the Affinity placement plugin, Solr needs access to the node label `topology.kubernetes.io/zone` as a Java sysprop `availability_zone`.

This is non-trivial, as node labels are not exposed to PODs, and you'd need special permissions to fetch them from the ApiServer. Instead of each user going through hoops to set up init containers to pull in this info, let's have the operator handle it and inject `topology.kubernetes.io/zone` as a system property into Solr.

Elastic has done a similar thing, referenced in https://the-asf.slack.com/archives/C01JR8WE1M5/p1669108868939589?thread_ts=1668633426.926589&cid=C01JR8WE1M5
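For reference, the #350 spreading support boils down to a pod-spec stanza along these lines (the label selector values are illustrative, not the operator's actual labels):

```yaml
# Spread Solr pods evenly across availability zones.
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: solr   # illustrative selector for the StatefulSet's pods
```

That handles where the pods land; this issue is about additionally telling Solr which zone each pod landed in, so replica placement can use the same topology.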