@nashid You haven't installed CUDA, which is required to run TensorFlow ops on the GPU and use its compute cores. However, CUDA is proprietary to Nvidia (AMD GPUs use OpenCL instead). So in a nutshell: you need CUDA to run TensorFlow on a GPU, and to run CUDA you need an Nvidia GPU. If you are not willing to buy one, you can always use Google Colab, which provides free GPU resources.

Since TF cannot find your GPU, it automatically falls back to the CPU, which takes far longer to train on. Hence the high CPU usage and low GPU usage.
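For reference, here is a minimal sketch (assuming TensorFlow 2.x) to check which devices TensorFlow can actually see. On a machine without CUDA and an Nvidia GPU, the GPU list comes back empty and everything silently runs on the CPU:

```python
import tensorflow as tf

# List the physical devices TensorFlow has registered.
# With no CUDA-capable (Nvidia) GPU available, the GPU list is empty
# and all ops fall back to the CPU.
gpus = tf.config.list_physical_devices('GPU')
cpus = tf.config.list_physical_devices('CPU')

print(f"GPUs visible to TensorFlow: {gpus}")
print(f"CPUs visible to TensorFlow: {cpus}")

if not gpus:
    print("No GPU found -- training will run on CPU only.")
```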
I am running the training with the following command:

I have one GPU (a Radeon RX 580). When running the experiment, I see the CPUs are fully utilized while GPU usage remains insignificant (<5%).

I saw this in the log:

Can anyone provide any pointers on why the GPU usage remains low?