
ResourceExhaustedError: OOM #94

Open
Timos-K opened this issue Mar 31, 2020 · 1 comment
Comments

Timos-K commented Mar 31, 2020

I always get a ResourceExhaustedError: OOM whenever I run this code. I'm unable to use any batch size greater than 256. Can you point out which parts are the most memory intensive?


ResourceExhaustedError: OOM when allocating tensor with shape[510,510,510,510] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
	 [[{{node loss_5/merged_layer_neg_loss/batch_all_triplet_loss/ToFloat_1}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
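
For scale: a float32 tensor of shape [510, 510, 510, 510] would need roughly 270 GB (510⁴ × 4 bytes), so the allocation fails regardless of the machine. The memory-heavy part of a batch-all triplet loss is the broadcast that materializes every (anchor, positive, negative) combination at once, so memory grows at least cubically with batch size. The sketch below shows that pattern in a TensorFlow 2 style; the names are illustrative, not this repository's exact code:

```python
import tensorflow as tf

def batch_all_triplet_loss_sketch(embeddings, labels, margin=0.5):
    """Illustrative batch-all triplet loss showing where memory grows.

    embeddings: [B, D] float32, labels: [B] int32.
    """
    # Pairwise squared Euclidean distances: [B, B] -- cheap.
    dot = tf.matmul(embeddings, embeddings, transpose_b=True)
    sq_norms = tf.linalg.diag_part(dot)
    dists = tf.maximum(sq_norms[:, None] - 2.0 * dot + sq_norms[None, :], 0.0)

    # Broadcast to every (anchor, positive, negative) triplet: [B, B, B].
    # This is the expensive step: at B = 512 a single float32 tensor of
    # this shape is already ~0.5 GB, and several such intermediates
    # (the loss, the mask, the cast) are alive at the same time.
    triplet_loss = dists[:, :, None] - dists[:, None, :] + margin

    # Validity mask, also [B, B, B]: the positive shares the anchor's
    # label (and is not the anchor itself), the negative has a different
    # label. The cast below is the kind of ToFloat op named in the trace.
    same = tf.equal(labels[:, None], labels[None, :])        # [B, B]
    not_self = tf.logical_not(tf.cast(tf.eye(tf.shape(labels)[0]), tf.bool))
    valid = tf.logical_and(
        tf.logical_and(same, not_self)[:, :, None],          # anchor-positive
        tf.logical_not(same)[:, None, :])                    # anchor-negative
    mask = tf.cast(valid, tf.float32)

    triplet_loss = tf.maximum(mask * triplet_loss, 0.0)
    num_active = tf.reduce_sum(tf.cast(triplet_loss > 1e-16, tf.float32))
    return tf.reduce_sum(triplet_loss) / (num_active + 1e-16)
```

If the batch-all variant keeps running out of memory, the usual workaround is the batch-hard formulation: it reduces over the [B, B] distance matrix (hardest positive and hardest negative per anchor) and never builds a 3-D tensor, or simply a smaller batch size.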
Pandoro (Member) commented Mar 31, 2020 via email
