Out-of-memory error #27

Open
mustaszewski opened this issue Aug 17, 2019 · 6 comments

@mustaszewski

Dear Mikel,
first of all, congratulations on this great piece of work, and thank you for sharing it with the community.

I experienced out-of-memory errors when mapping the pre-trained fastText embeddings trained on Wikipedia (https://fasttext.cc/docs/en/pretrained-vectors.html). For the EN-DE language pair, the embeddings are quite large, with 300 dimensions and vocabulary sizes of approximately 2.2M to 2.5M words.

Out-of-memory errors occurred in both the supervised and unsupervised modes, and both with and without --cuda.
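For scale, simply holding one of these embedding matrices in memory is already around 3 GB, which lines up with the ~3.02 GB tcmalloc allocations in the traces below (a rough estimate, assuming float32 storage):

# Rough footprint of one fastText Wikipedia embedding matrix in float32
vocab = 2_500_000  # approx. 2.5M words, per the .vec header
dim = 300
print(vocab * dim * 4 / 1e9)  # ~3.0 GB, close to the 3023249408-byte allocations below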

In the supervised mode (using the EN-DE training dictionary from your 2017 ACL paper), the following error occurred:
Call: python3 vecmap/map_embeddings.py --cuda --supervised TRAIN_DICT EMB_SRC EMB_TRG EMB_SRC_MAPPED EMB_TRG_MAPPED --log log.txt --verbose

Output:

tcmalloc: large alloc 3023249408 bytes == 0x2bd6000 @  0x7fc625b3a1e7 0x7fc6235c3ca1 0x7fc623628778 0x7fc623628d47 0x7fc6236c3038 0x4f8925 0x4fa6c0 0x4f6128 0x4f7d60 0x4f876d 0x4fa6c0 0x4f6128 0x4f7d60 0x4f876d 0x4f98c7 0x4f6128 0x4f9023 0x6415b2 0x64166a 0x643730 0x62b26e 0x4b4cb0 0x7fc625737b97 0x5bdf6a
tcmalloc: large alloc 3023249408 bytes == 0xbb06e000 @  0x7fc625b3a1e7 0x7fc6235c3ca1 0x7fc623628778 0x7fc623628d47 0x7fc6236c3038 0x4f8925 0x4fa6c0 0x4f6128 0x4f7d60 0x4f876d 0x4fa6c0 0x4f6128 0x4f7d60 0x4f876d 0x4f98c7 0x4f6128 0x4f9023 0x6415b2 0x64166a 0x643730 0x62b26e 0x4b4cb0 0x7fc625737b97 0x5bdf6a
WARNING: OOV dictionary entry (time - fragestunde)
[... approx. 600 more warnings omitted ...]
WARNING: OOV dictionary entry (constitutional - verfassungsmäßigen)

Traceback (most recent call last):
  File "vecmap/map_embeddings.py", line 422, in <module>
    main()
  File "vecmap/map_embeddings.py", line 251, in main
    simfwd = xp.empty((args.batch_size, trg_size), dtype=dtype)
  File "/usr/local/lib/python3.6/dist-packages/cupy/creation/basic.py", line 20, in empty
    return cupy.ndarray(shape, dtype, order=order)
  File "cupy/core/core.pyx", line 152, in cupy.core.core.ndarray.__init__
  File "cupy/cuda/memory.pyx", line 517, in cupy.cuda.memory.alloc
  File "cupy/cuda/memory.pyx", line 1076, in cupy.cuda.memory.MemoryPool.malloc
  File "cupy/cuda/memory.pyx", line 1097, in cupy.cuda.memory.MemoryPool.malloc
  File "cupy/cuda/memory.pyx", line 925, in cupy.cuda.memory.SingleDeviceMemoryPool.malloc
  File "cupy/cuda/memory.pyx", line 940, in cupy.cuda.memory.SingleDeviceMemoryPool._malloc
  File "cupy/cuda/memory.pyx", line 695, in cupy.cuda.memory._try_malloc
cupy.cuda.memory.OutOfMemoryError: out of memory to allocate 10077480448 bytes (total 22170457600 bytes)
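The allocation that fails here is the simfwd buffer, whose shape, per the traceback above, is (args.batch_size, trg_size), so it grows linearly with the target vocabulary. A rough check with illustrative numbers (the batch size below is an assumption chosen to match the reported figure, not a documented default):

# Size of a (batch_size x trg_vocab) float32 similarity block
batch_size = 1000       # assumed; see the --batch_size parameter of map_embeddings.py
trg_vocab = 2_500_000   # approx. 2.5M target words
print(batch_size * trg_vocab * 4 / 1e9)  # ~10 GB, in line with the 10077480448-byte failure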

In the unsupervised mode (python3 vecmap/map_embeddings.py --cuda --unsupervised EMB_SRC EMB_TRG EMB_SRC_MAPPED EMB_TRG_MAPPED --log log.txt --verbose), the error was:

tcmalloc: large alloc 3023249408 bytes == 0x3098000 @  0x7f263a57a1e7 0x7f2638003ca1 0x7f2638068778 0x7f2638068d47 0x7f2638103038 0x4f8925 0x4fa6c0 0x4f6128 0x4f7d60 0x4f876d 0x4fa6c0 0x4f6128 0x4f7d60 0x4f876d 0x4f98c7 0x4f6128 0x4f9023 0x6415b2 0x64166a 0x643730 0x62b26e 0x4b4cb0 0x7f263a177b97 0x5bdf6a
tcmalloc: large alloc 3023249408 bytes == 0xbb530000 @  0x7f263a57a1e7 0x7f2638003ca1 0x7f2638068778 0x7f2638068d47 0x7f2638103038 0x4f8925 0x4fa6c0 0x4f6128 0x4f7d60 0x4f876d 0x4fa6c0 0x4f6128 0x4f7d60 0x4f876d 0x4f98c7 0x4f6128 0x4f9023 0x6415b2 0x64166a 0x643730 0x62b26e 0x4b4cb0 0x7f263a177b97 0x5bdf6a
Traceback (most recent call last):
  File "/vecmap/map_embeddings.py", line 422, in <module>
    main()
  File "/vecmap/map_embeddings.py", line 349, in main
    dropout(simfwd[:j-i], 1 - keep_prob).argmax(axis=1, out=trg_indices_forward[i:j])
  File "/vecmap/map_embeddings.py", line 32, in dropout
    mask = xp.random.rand(*m.shape) >= p
  File "/usr/local/lib/python3.6/dist-packages/cupy/random/sample.py", line 46, in rand
    return random_sample(size=size, dtype=dtype)
  File "/usr/local/lib/python3.6/dist-packages/cupy/random/sample.py", line 158, in random_sample
    return rs.random_sample(size=size, dtype=dtype)
  File "/usr/local/lib/python3.6/dist-packages/cupy/random/generator.py", line 382, in random_sample
    out = self._random_sample_raw(size, dtype)
  File "/usr/local/lib/python3.6/dist-packages/cupy/random/generator.py", line 366, in _random_sample_raw
    out = cupy.empty(size, dtype=dtype)
  File "/usr/local/lib/python3.6/dist-packages/cupy/creation/basic.py", line 20, in empty
    return cupy.ndarray(shape, dtype, order=order)
  File "cupy/core/core.pyx", line 152, in cupy.core.core.ndarray.__init__
  File "cupy/cuda/memory.pyx", line 517, in cupy.cuda.memory.alloc
  File "cupy/cuda/memory.pyx", line 1076, in cupy.cuda.memory.MemoryPool.malloc
  File "cupy/cuda/memory.pyx", line 1097, in cupy.cuda.memory.MemoryPool.malloc
  File "cupy/cuda/memory.pyx", line 925, in cupy.cuda.memory.SingleDeviceMemoryPool.malloc
  File "cupy/cuda/memory.pyx", line 940, in cupy.cuda.memory.SingleDeviceMemoryPool._malloc
  File "cupy/cuda/memory.pyx", line 695, in cupy.cuda.memory._try_malloc
cupy.cuda.memory.OutOfMemoryError: out of memory to allocate 1600000000 bytes (total 16726299136 bytes)

I was running vecmap on Google Colab with 12.75 GB RAM and with GPU hardware acceleration activated.

Some more background: out-of-memory errors occurred even when the target embedding file was much smaller, with a vocabulary of approx. 0.2M words. On the other hand, when both the source and target embeddings had vocabularies of around 0.2M words, the mapping worked perfectly fine, in both supervised and unsupervised mode.

What is the recommended way to deal with such memory issues? Should I limit the vocabulary size of the embedding files themselves, set the --batch_size parameter, or set the --vocabulary_cutoff parameter? By the way, when --vocabulary_cutoff is set to n, does vecmap draw a random sample of size n from the original vocabulary, or does it limit the vocabulary to the n most frequent entries?
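For anyone looking for a workaround, here is a minimal sketch of pre-truncating the embedding files to the top-n entries before mapping. It assumes the standard word2vec text format (a "count dim" header line followed by one word and its vector per line) and that the rows are sorted by frequency, as the fastText Wikipedia vectors are:

# truncate_vec.py - keep only the first n (most frequent) entries of a .vec file
import sys

def truncate_vec(src_path, dst_path, n):
    with open(src_path, encoding='utf-8', errors='surrogateescape') as fin, \
         open(dst_path, 'w', encoding='utf-8', errors='surrogateescape') as fout:
        count, dim = fin.readline().split()
        n = min(n, int(count))
        fout.write('{} {}\n'.format(n, dim))  # rewrite the header with the new count
        for _ in range(n):
            fout.write(fin.readline())

if __name__ == '__main__':
    # usage: python3 truncate_vec.py wiki.en.vec wiki.en.200k.vec 200000
    truncate_vec(sys.argv[1], sys.argv[2], int(sys.argv[3]))

Alternatively, a smaller --batch_size shrinks the (batch_size, trg_size) buffers seen in the tracebacks, and --vocabulary_cutoff shrinks everything that scales with vocabulary size, without touching the files on disk.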

@davidlenz

Experiencing the same issue on a Titan V

Traceback (most recent call last):
  File "map_embeddings.py", line 441, in <module>
    main()
  File "map_embeddings.py", line 262, in main
    zw = xp.empty_like(z)
  File "C:\Users\Dlenz\Anaconda3\envs\py36\lib\site-packages\cupy\creation\basic.py", line 86, in empty_like
    return cupy.ndarray(shape, dtype, memptr, strides, order)
  File "cupy\core\core.pyx", line 134, in cupy.core.core.ndarray.__init__
  File "cupy\cuda\memory.pyx", line 518, in cupy.cuda.memory.alloc
  File "cupy\cuda\memory.pyx", line 1085, in cupy.cuda.memory.MemoryPool.malloc
  File "cupy\cuda\memory.pyx", line 1106, in cupy.cuda.memory.MemoryPool.malloc
  File "cupy\cuda\memory.pyx", line 934, in cupy.cuda.memory.SingleDeviceMemoryPool.malloc
  File "cupy\cuda\memory.pyx", line 949, in cupy.cuda.memory.SingleDeviceMemoryPool._malloc
  File "cupy\cuda\memory.pyx", line 697, in cupy.cuda.memory._try_malloc
cupy.cuda.memory.OutOfMemoryError: out of memory to allocate 2400000000 bytes (total 12000000000 bytes)

@melezele

Hi mustaszewski,
When I try to map embeddings for two different languages in word2vec format using vecmap/map_embeddings.py on Windows 10 with Python 3, I am not able to generate the mapping. What are the steps to follow?
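For reference, following the commands earlier in this thread, I am running something like this (placeholder paths):

python3 vecmap/map_embeddings.py --unsupervised SRC_EMB.txt TRG_EMB.txt SRC_MAPPED.txt TRG_MAPPED.txt --verbose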

@oskrmiguel

> What is the recommended way to deal with such memory issues? [...]

Hey, were you able to solve your problem? I am having the same issue.

@nikitas-theo

Also having the same problem.

@giosal

giosal commented Jun 4, 2021

Same issue here, both with CUDA and without.
Running on Windows Subsystem for Linux 2 on Windows 10 with CUDA support, Python 3.8.

@passermyh

Have you solved it? I don't know how to deal with it.
[image attached]
