Description:
Currently it is not possible to deploy a Spark cluster on selected Kubernetes nodes; the Spark pods end up on whatever nodes the scheduler happens to pick.
It would be nice if the spark-operator had a feature to choose the Kubernetes nodes on which the Spark cluster is scheduled. I imagine this would involve adding a new parameter to the Spark cluster config and having the operator map this value, for example, to the nodeSelector of the Spark pod manifests.
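For illustration only, a minimal sketch of what that mapping might look like with the fabric8 Kubernetes client (which the operator uses, see the comment below). The `NodeSelectorMapper` class, the `applyNodeSelector` method, and the idea of a `nodeSelector` map in the cluster config are hypothetical names, not existing operator code:

```java
import java.util.Map;

import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodBuilder;

// Illustrative sketch: copy a nodeSelector map taken from a hypothetical new
// field in the spark cluster config onto a generated pod, so the scheduler
// only places the pod on nodes carrying those labels.
public class NodeSelectorMapper {

    public static Pod applyNodeSelector(Pod pod, Map<String, String> nodeSelector) {
        if (nodeSelector == null || nodeSelector.isEmpty()) {
            return pod; // field not set in the cluster config, keep the pod as generated
        }
        return new PodBuilder(pod)
                .editOrNewSpec()
                    .withNodeSelector(nodeSelector) // e.g. {"disktype": "ssd"}
                .endSpec()
                .build();
    }
}
```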
Hello Mikołaj, would you be interested in contributing this feature? It could be a nice first issue to tackle as a new contributor, and it shouldn't be too difficult, as you mentioned :) A good start would be adding a pod(Anti)Affinity field to the spark cluster, probably for both master and worker, or just one global argument covering both. The field needs to be added to the JSON schema, from which the Java classes are generated; then comes the mapping from the generated Java object to the fabric8 client "fluent" API.
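For anyone picking this up, a rough sketch of that last mapping step with the fabric8 fluent API. The class and method names (`AffinityMapper`, `applyAffinity`, `exampleNodeAffinity`) are made up for illustration, and in the real change the `Affinity` value would come from the class generated from the spark cluster JSON schema rather than being built inline:

```java
import io.fabric8.kubernetes.api.model.Affinity;
import io.fabric8.kubernetes.api.model.AffinityBuilder;
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodBuilder;

// Illustrative sketch: attach an Affinity to a generated master/worker pod
// and show what building one with the fabric8 fluent API looks like.
public class AffinityMapper {

    public static Pod applyAffinity(Pod pod, Affinity affinity) {
        if (affinity == null) {
            return pod; // field not set in the cluster config, leave the pod untouched
        }
        return new PodBuilder(pod)
                .editOrNewSpec()
                    .withAffinity(affinity) // node affinity and/or pod (anti-)affinity rules
                .endSpec()
                .build();
    }

    // Example of the fluent API itself: require nodes labeled disktype=ssd.
    public static Affinity exampleNodeAffinity() {
        return new AffinityBuilder()
                .withNewNodeAffinity()
                    .withNewRequiredDuringSchedulingIgnoredDuringExecution()
                        .addNewNodeSelectorTerm()
                            .addNewMatchExpression()
                                .withKey("disktype")
                                .withOperator("In")
                                .withValues("ssd")
                            .endMatchExpression()
                        .endNodeSelectorTerm()
                    .endRequiredDuringSchedulingIgnoredDuringExecution()
                .endNodeAffinity()
                .build();
    }
}
```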