Not able to view GPU utilization metrics in OpenShift dashboard #1002

Open
umeshvw opened this issue Sep 20, 2024 · 7 comments


umeshvw commented Sep 20, 2024

Environment:

OpenShift version: 4.16.10
NVIDIA GPU Operator version: 24.6.1

Hello Team,

We are facing the following issues:

Issue 1:

In the Administrator perspective, we are not able to view a few important metrics in the NVIDIA DCGM Exporter dashboard, such as:

1. GPU utilization
2. GPU Framebuffer Mem Used
3. Tensor Core Utilization

We are able to view some metrics, such as GPU temperature, but the metrics above are the most important for us.

Issue 2: In the Developer perspective

We are not able to see any metrics in the NVIDIA DCGM Exporter dashboard. We can see a few metrics in the Administrator perspective, but none in the Developer perspective. Is there also a way to monitor GPU utilization per namespace, so that application teams can monitor GPU utilization in their own namespaces themselves?

Issue 3: In the Compute > GPUs section, we are not able to see any real-time utilization data; the GPU utilization metric always shows 0%.

I am attaching screenshots for all the issues.


umeshvw commented Sep 20, 2024

(Screenshots attached for the three issues above.)


umeshvw commented Sep 30, 2024

Hello Team, any update on the above issue?


umeshvw commented Oct 9, 2024

Hello NVIDIA Team,

Can someone please help with the above?

arpitsharma-vw commented:

Hi @shivamerla
I hope you are doing well. Can you please help here or let us know if someone from your team can help with this issue?
Many thanks

shivamerla (Contributor) commented:

@arpitsharma-vw was it working before, and are you seeing this as a regression? Is the gpu-operator/dcgm-exporter configured with custom metrics or the default ones? I don't think there is any RBAC issue with the operator/dcgm-exporter itself, as the exporter uses the pod resources API, which provides metrics for all Pods using GPUs across all namespaces. Can you double-check the RBAC setup in the developer mode for scraping any metrics in general? @cdesiniotis @tariq1890 can help debug further.
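On the per-namespace question from Issue 2: assuming the exporter's pod attribution labels reach Prometheus, GPU utilization can be broken down by namespace with a query along these lines (a sketch only; the exact label name, namespace or exported_namespace, depends on the ServiceMonitor relabeling in the cluster):

# Average GPU utilization per tenant namespace; the label may be
# "namespace" or "exported_namespace" depending on relabeling.
avg by (exported_namespace) (DCGM_FI_DEV_GPU_UTIL)

# Same idea using the profiling counter from a custom CSV (0-1 ratio):
avg by (exported_namespace) (DCGM_FI_PROF_GR_ENGINE_ACTIVE) * 100

Application teams can then run such queries against the metrics exposed for their own namespace.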


umeshvw commented Oct 21, 2024

@shivamerla I think it is a custom one, as per the below.

$ oc get daemonset nvidia-dcgm-exporter -o yaml |grep -i etc
value: /etc/dcgm-exporter/dcgm-metrics.csv

And below is the file:

sh-5.1# cat /etc/dcgm-exporter/dcgm-metrics.csv
DCGM_FI_PROF_GR_ENGINE_ACTIVE, gauge, gpu utilization.
DCGM_FI_DEV_MEM_COPY_UTIL, gauge, mem utilization.
DCGM_FI_DEV_ENC_UTIL, gauge, enc utilization.
DCGM_FI_DEV_DEC_UTIL, gauge, dec utilization.
DCGM_FI_DEV_POWER_USAGE, gauge, power usage.
DCGM_FI_DEV_POWER_MGMT_LIMIT_MAX, gauge, power mgmt limit.
DCGM_FI_DEV_GPU_TEMP, gauge, gpu temp.
DCGM_FI_DEV_SM_CLOCK, gauge, sm clock.
DCGM_FI_DEV_MAX_SM_CLOCK, gauge, max sm clock.
DCGM_FI_DEV_MEM_CLOCK, gauge, mem clock.
DCGM_FI_DEV_MAX_MEM_CLOCK, gauge, max mem clock.
sh-5.1#

It looks like the DCGM_FI_DEV_GPU_UTIL metric is not included in the above file, although it is present in default-counters.csv.
Please let us know if you need further details.
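For reference, adding the missing counters would mean appending lines like these to the CSV, in the same three-column format (field, type, description) as the existing entries; the choice of the FB-used and tensor-activity fields is an assumption based on the dashboard panels listed in Issue 1:

DCGM_FI_DEV_GPU_UTIL, gauge, GPU utilization.
DCGM_FI_DEV_FB_USED, gauge, framebuffer memory used (MiB).
DCGM_FI_PROF_PIPE_TENSOR_ACTIVE, gauge, ratio of cycles the tensor pipe is active.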


umeshvw commented Oct 21, 2024

@shivamerla We are able to see the metrics after adding the below counters to the ConfigMap (console-plugin-nvidia-gpu):

DCGM_FI_DEV_GPU_UTIL
DCGM_FI_DEV_FB_USED
DCGM_FI_PROF_PIPE_TENSOR_ACTIVE
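A rough sketch of how the change can be applied, assuming the operator runs in the nvidia-gpu-operator namespace (adjust if yours differs) and that the exporter Pods need a restart to pick up the re-mounted CSV:

# Append the counters to the dcgm-metrics.csv key in the ConfigMap
oc edit configmap console-plugin-nvidia-gpu -n nvidia-gpu-operator

# Restart the exporter Pods so they re-read the mounted file
oc rollout restart daemonset/nvidia-dcgm-exporter -n nvidia-gpu-operator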
