Deadlock if synchronous subprocess fills pipe #82
Comments
I can't think of a better solution than to advise you to use async. I don't really want to start spawning threads behind your back to handle this kind of thing (and don't know of another way to generally fix this!).
For my case, I am actually not using the outputs, so it might be good to allow ignoring stdout/stderr. Actually, did I misunderstand something? It seems like I can read from stdout before I join. At least it works on Linux.
I might be able to add an option to ignore stdout/stderr aye. You should be able to read before join iirc, it's just that if you don't have enough data to read it could block forever.
I thought it should return EOF?
If a subprocess outputs a large amount of data, it deadlocks both the parent and the subprocess due to pipe blocking.
Notice that the subprocess `dd` writes 65K of data to stdout, which is greater than Linux's default pipe buffer size, thus blocking the child. However, since `subprocess_read_stdout` must be used after joining, the parent process cannot progress either, as it can neither drain the pipe nor wait for the child to finish.