Add dashboard capacity-by-nodetype #112
base: master
Conversation
hey @bszeti it looks like there is likely some overlap between these dashboards and the one we have in https://github.com/redhat-cop/openshift-toolkit/tree/master/capacity-dashboard. Could you take a look at this, and maybe suggest some enhancements so that we don't end up with competing solutions in the same repo?
Yes, and I've talked to Raffaele about this. The "capacity" dashboards require a special "nodegroup" label on nodes and show resource utilization when we have different groups of compute nodes in the cluster. The "capacity-by-nodetype" dashboards don't need special labels; they work with the default "role.kubernetes.io/..." label, so they can be deployed on any OpenShift v3.11 cluster. So the two dashboards show similar metrics, but were built to solve different problems.
@bszeti ok, I would say that these dashboards should probably live in the /capacity-dashboard/ space, rather than the custom-dashboards space, which is really about deploying a customizable Grafana, not about the dashboards.
@etsauer ok, at the moment the "capacity" dashboards are duplicated in the repo: they are under the "capacity-dashboard" directory and also in the yaml files under "custom-dashboards/.openshift/dashboards/capacity". The purpose of "custom-dashboards" is to be able to run openshift-applier and bring up a custom Grafana with all our custom dashboards, so it's easy to try; this is what I tried to follow in the first round. Do you want me to completely remove the "capacity-by-nodetype" dashboards from there and not make them part of the openshift-applier deployment?
@bszeti rather than have the dashboards in two places, I would just point applier to the dashboard in
@etsauer I'm taking a look at this now. I see two use cases:
Ideally the ConfigMap yaml file should somehow refer to the json files, but I don't know if it's possible. Is there a way to do something like this with the applier:
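One shape that might work is a hedged sketch like the following — the file names, template name, and the use of the `indent` filter are my assumptions, not something verified against Applier:

```
# capacity-by-nodetype.cm.yml.j2 — hypothetical Applier Jinja2 template
apiVersion: v1
kind: ConfigMap
metadata:
  name: capacity-by-nodetype
data:
  capacity-by-nodetype.json: |-
    {% filter indent(4) %}{% include "capacity-by-nodetype.json" %}{% endfilter %}
```

The `indent` filter re-indents the included JSON so it stays inside the YAML block scalar; by default it leaves the first line untouched, which lines up with the literal indentation already in front of the tag.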
If this is not possible, then I don't know how to support both scenarios (manual import and applier) without duplicating the json definitions.
@bszeti So, I haven't tried this, but Applier supports Jinja2 templates, and Jinja2 has an include function. See if you can make that work. If you can, it would be really nice to somehow document that approach, as this is a common sticking point with managing configmaps. https://jinja.palletsprojects.com/en/2.10.x/templates/#include
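For reference, the include tag from the linked docs looks like this (the file path here is an assumption); note that Jinja2 resolves the path against the environment's loader search path, not relative to the including template, which matters for where the JSON files can live:

```
{# the path is looked up from the loader's root, e.g. the role's templates dir #}
{% include "dashboards/capacity/namespaces-in-cluster.json" %}
```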
I'm trying to use jinja like this: but I keep getting an error from applier at the `TASK [openshift-applier : Process Jinja template]` step. Any ideas? How is the template lookup done in this task?
It seems that the {% include [file] %} only works if the file is under the role, in this case "galaxy/openshift-applier/roles/openshift-applier", which is obviously not good for us. (Also, "../../../../namespaces-in-cluster.json" didn't work.) Because of this I don't know how to arrange the files better than in the original commit. Another related problem is that the included file is also processed as a template, so any "{{" causes issues. This latter problem may be handled by adding {% raw %}...{% endraw %} in the included file... For the record we tried:
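A hedged sketch of the {% raw %} workaround mentioned above: wrapping the dashboard JSON stops Jinja2 from trying to evaluate Grafana's own "{{ }}" templating expressions (the snippet content is illustrative, not taken from the actual dashboards):

```
{% raw %}
{
  "title": "capacity-by-nodetype",
  "panels": [
    { "legendFormat": "{{node}}" }
  ]
}
{% endraw %}
```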
@etsauer Any suggestions?
Custom dashboards about resource utilization and node capacity (grouped by node role).