Replies: 2 comments 5 replies
-
The most likely culprit here is that the model may have some operations that DirectML doesn't support yet, and therefore needs to frequently fall back to the CPU. The RX 5700 XT should definitely be faster than the CPU. It would help if you could direct us to the model you've been using so we can try to repro the issue!
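If it helps narrow things down, device-placement logging will show which ops land on the CPU. A minimal sketch, assuming the TF 2.x-style eager API (on the TF 1.15-based tensorflow-directml fork, the equivalent is `log_device_placement=True` in the session config):

```python
import tensorflow as tf

# Print the device each op is placed on; ops that DirectML doesn't
# support will show up with a CPU device string instead of a GPU one.
tf.debugging.set_log_device_placement(True)

a = tf.random.uniform((1024, 1024))
b = tf.random.uniform((1024, 1024))
c = tf.matmul(a, b)  # placement for this op is logged to stderr
```

Any frequently executed op that logs a CPU placement is a likely source of the slowdown, since its inputs and outputs get copied between GPU and system memory on every call.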
-
Here is a printout of the logs when running with the GPU, versus when disabling the GPU and running with the CPU:
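For reference, this is roughly how I disabled the GPU for the CPU run (using the stock TensorFlow visibility API; the device type string may be reported as 'GPU' or 'DML' depending on the tensorflow-directml build, so treat that detail as an assumption):

```python
import tensorflow as tf

# Hide all GPU devices from TensorFlow before any ops are created,
# forcing everything onto the CPU for a like-for-like comparison.
tf.config.set_visible_devices([], 'GPU')

print(tf.config.get_visible_devices())  # should now list only CPU devices
```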
-
Good day,
I am running TensorFlow code through WSL (Ubuntu). I have an AMD Radeon card and wish to utilise TensorFlow-DirectML to train a model on the GPU. I have used Nvidia/CUDA on pure Ubuntu in the past, but have no experience running TensorFlow with AMD cards, let alone through WSL.
I have configured everything as detailed here, and my GPU is now available to TensorFlow. The code runs, but poorly.
There is a significant performance downgrade when going from CPU to GPU.
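For what it's worth, this is how I confirmed the GPU is visible (a quick check; on older builds the call lives under `tf.config.experimental.list_physical_devices`):

```python
import tensorflow as tf

# A DirectML build should list the Radeon card alongside the CPU here.
print(tf.config.list_physical_devices())
```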
For context, the hardware specs are:
Additionally, it seems to require significantly more RAM for the same amount of input data being trained on (it has blue-screened twice so far during testing, though I can get it to run on smaller datasets).
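One mitigation I'm experimenting with is asking TensorFlow to allocate GPU memory on demand rather than reserving it all up front; whether the DirectML backend honours this setting is an assumption on my part:

```python
import tensorflow as tf

# Enable on-demand memory growth for each visible GPU. Without this,
# TensorFlow may reserve most of the device memory at startup.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
```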
I would expect GPU performance to be at least on par with the CPU, though I don't know how reasonable that assumption is with DirectML and the hardware specs above.
I can see in Task Manager that only about 5% of the GPU is utilized when running a fairly large model/dataset. This reporting appears to be accurate (it shows WSL usage, not just Windows), since GPU memory usage climbs while the TensorFlow model is being set up, yet utilization stays low.
My best guess for this discrepancy is that the GPU is being shared between Windows and WSL, with Windows reserving most of it. I don't know how likely that is, because I wouldn't expect rendering to keep 95% of the card on standby; then again, other Windows apps could be using the GPU engine.
My instinct is to unplug my monitor's DisplayPort cable from the GPU and switch to the onboard graphics instead, on the theory that Windows would then stop using the discrete card, leaving it fully available to WSL.
For some reason, my onboard HDMI port is not working, which I can probably fix by updating all the drivers.
But before I go through that, I figured I should first ask the developers and the community: what is the best practice for getting WSL/TensorFlow to fully utilise the system GPU? Is it possible?
Does anyone have experience with running TensorFlow ... on WSL ... with an AMD Radeon card ... and optimizing it?
Any insight will be appreciated! Thank you.
Further details: