en_core_web_trf not working in celery #8694
-
How to reproduce the behaviour
The following code works well with the "en_core_web_lg" model, but when I use "en_core_web_trf" I don't get any result back when I run the code in Celery. I have no problem running the same code successfully in a shell or a notebook.
Your Environment
Replies: 7 comments 1 reply
-
Sorry you're having trouble, but without some kind of error message it's hard to say what's wrong. I'm sure there's some way you can get error output in Celery; can you share it with us? As a basic guess, maybe the environment Celery is using doesn't have the model (or its dependencies) installed. I would also recommend upgrading spaCy: even if you don't upgrade to 3.1, the most recent 3.0 release has some bugfixes. Also, please do not post screenshots of logs; they are hard to read and unhelpful, and I am not sure what information that log is supposed to convey. Copy/paste it as text.
-
I suspect it's the same underlying issue as in #4667. Try adding this before you load the model (#4667 (comment)):

import torch
torch.set_num_threads(1)

If it works with this change, then the problem is related to torch and threading, probably related to libgomp. Look at the linked pytorch issue in the link above for possible solutions.

There have been some other related reports that look more like memory leaks, and there it looks like the solution is to set a constrained number of threads, see #8554.
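For context, here is a minimal sketch of how the workaround could be wired into a Celery worker module; the module name, broker URL, and task are placeholder assumptions, not code from this thread:

```python
# Hypothetical tasks.py for a Celery worker (module name, broker URL, and task are assumptions).
import torch
import spacy
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")  # assumed broker URL

# Constrain torch to a single thread *before* the transformer pipeline is loaded.
torch.set_num_threads(1)
nlp = spacy.load("en_core_web_trf")

@app.task
def extract_entities(text):
    doc = nlp(text)
    return [(ent.text, ent.label_) for ent in doc.ents]
```

The important detail is that torch.set_num_threads(1) runs at import time in the worker process, before spacy.load initializes the transformer.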
-
@adrianeboyd Thanks, the solution that you've provided solved my issue. I also found that
-
Glad to hear it's working!
-
@srsani Have you tried multiprocessing with the GPU-based spaCy transformer model during inference? I am loading the model on the GPU like this:
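Roughly along the following lines; this is a sketch of the assumed setup, since the exact snippet isn't reproduced in this thread:

```python
# Assumed GPU setup (illustrative only, not the snippet from this post).
import spacy

spacy.require_gpu()                  # raises an error if no GPU is available
nlp = spacy.load("en_core_web_trf")

doc = nlp("Apple is looking at buying U.K. startup for $1 billion.")
print([(ent.text, ent.label_) for ent in doc.ents])
```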
-
@adrianeboyd @srsani
Error on Celery:
Celery arguments:
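(The actual arguments aren't captured above; purely as an illustration of a thread-constrained worker in line with the advice earlier in the thread, an invocation might look like `celery -A tasks worker --pool=solo --concurrency=1 --loglevel=info`.)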