
GPU memory #66

Open
acsignal opened this issue Feb 17, 2020 · 1 comment

Comments

@acsignal

I am using the Python 3.6 version of your code (branch 1.0) with CUDA 10, and everything seems to work until step 3 (training). I am using an RTX 2080 Ti with an 11 GB frame buffer, so I was under the impression it would have enough memory to train the model, but I get this error:
RuntimeError: CUDA out of memory. Tried to allocate 392.00 MiB (GPU 0; 11.00 GiB total capacity; 7.75 GiB already allocated; 36.24 MiB free; 661.04 MiB cached)
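For what it's worth, the numbers in that message can be pulled apart to see where the memory actually went. This is a quick pure-Python sketch (the regex and helper are my own, not part of PyTorch):

```python
import re

# The OOM message reported above, verbatim.
OOM_MSG = ("CUDA out of memory. Tried to allocate 392.00 MiB "
           "(GPU 0; 11.00 GiB total capacity; 7.75 GiB already allocated; "
           "36.24 MiB free; 661.04 MiB cached)")

def parse_oom(msg):
    """Extract the sizes (in MiB) that PyTorch reports in its OOM message."""
    sizes = {}
    # Each figure appears as "<number> <unit> <label>", e.g. "7.75 GiB already allocated".
    for value, unit, label in re.findall(r"([\d.]+) (MiB|GiB) ([a-z ]+)", msg):
        mib = float(value) * (1024 if unit == "GiB" else 1)
        sizes[label] = mib
    return sizes

sizes = parse_oom(OOM_MSG)
accounted = sizes["already allocated"] + sizes["free"] + sizes["cached"]
# allocated + free + cached is roughly 8.4 GiB, well short of the 11 GiB
# total: the remainder is held by the driver, the display, and other processes,
# so the usable budget is smaller than the card's nominal capacity.
print(sizes)
print(f"accounted for: {accounted:.0f} MiB of {sizes['total capacity']:.0f} MiB")
```

The takeaway is that the allocator is not simply "losing" 3 GiB to cache: most of the card is genuinely occupied by model state and activations, which is why shrinking the per-step footprint (batch size, input resolution) is usually the effective lever.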

Is there any way for me to clear the cached memory to make room for the allocation, or is that a bad idea?
Also, if it's the case that I simply don't have enough memory to run the training, when will the lightweight (mono) model be released for Python 3.6?
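On the first question: PyTorch does expose `torch.cuda.empty_cache()`, which returns cached blocks to the driver, but it cannot free memory still referenced by live tensors, so it rarely fixes a genuine OOM on its own; reducing the batch size usually does. A hypothetical retry helper (the `step` function and sizes here are placeholders for illustration, not part of this repo) might look like:

```python
def run_with_oom_backoff(step, batch_size, min_batch_size=1):
    """Call step(batch_size); on a CUDA OOM RuntimeError, halve the batch and retry.

    In a real PyTorch loop you would also call torch.cuda.empty_cache()
    after catching the error, so the cached blocks become reusable.
    """
    while batch_size >= min_batch_size:
        try:
            return step(batch_size), batch_size
        except RuntimeError as err:
            if "out of memory" not in str(err):
                raise  # unrelated error: don't swallow it
            batch_size //= 2
    raise RuntimeError("training does not fit in GPU memory at any batch size")

# Example with a fake step that only "fits" at batch size <= 4:
def fake_step(bs):
    if bs > 4:
        raise RuntimeError("CUDA out of memory. Tried to allocate ...")
    return f"trained with batch size {bs}"

print(run_with_oom_backoff(fake_step, 16))  # → ('trained with batch size 4', 4)
```

If halving the batch hurts convergence, gradient accumulation (several small forward/backward passes per optimizer step) keeps the effective batch size while lowering peak memory.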

@Hexuanfang

Hi, I'm using the same version and hardware and hit the same issue. How did you fix it in the end?
