Something weird between Tensorflow-offline-wpe and numpy-offline-wpe #33
This difference sounds too large. Nevertheless, there are some reasons why numpy is/can be faster:
Can you provide a complete toy example? Small hint:
As an additional note: be careful about where you call WPE (which builds the graph) and where you call session.run(...). The graph should be built outside any loop.
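The advice above can be sketched in plain Python, without TensorFlow. Here `build_graph` and the returned `run` function are stand-ins for graph construction and `session.run(...)`; none of these names are part of the tf_wpe API.

```python
# Plain-Python analogy of the "build the graph once" advice.
# build_graph() stands in for constructing the TF computation graph
# (expensive); calling the returned function stands in for session.run()
# (cheap per utterance). All names here are illustrative.

build_calls = 0

def build_graph():
    """Expensive one-time setup, analogous to TF graph construction."""
    global build_calls
    build_calls += 1
    return lambda signal: [2 * x for x in signal]  # stand-in for the WPE op

# Build once, outside the loop; only "run" once per audio file.
run = build_graph()
outputs = [run([i, i + 1]) for i in range(100)]

# Calling build_graph() inside the loop instead would multiply the setup
# cost by the number of files, which matches the reported slowdown.
print(build_calls)  # 1
```

If the graph is instead rebuilt per file, TensorFlow also keeps adding nodes to the default graph, so each iteration gets slower than the last.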
Thanks for your reply. My code is something like the following. I think maybe I should create TFRecords, or I have to restart the session for each single audio file:
Best
Thanks for your explanations and details,
Best
In addition, I looked into the source code of tf_wpe.
Unlike the numpy implementation, tf_wpe runs the WPE process for each frequency point separately, looping over all the frequency bins, and I think that is the reason why it performs slower. So if I really want to make the WPE process run faster (for a 3.5 s audio file, I expect about 100 ms of processing time), do I need to re-implement the numpy version in C, or could I also modify tf_wpe.py so that some matrix computations run on the GPU and the sequential computations run on the CPU?
Best
This can happen. The problem is that TensorFlow performs some graph optimizations in which a conjugate may be dropped. So the result can be correct or incorrect.
For WPE this shouldn't be a problem. For the numpy version we also have a "batched" implementation (
Looking at your example code, it would also be possible to simply use the numpy WPE code.
Hi, thanks for your work on nara_wpe; I learned quite a lot from your implementation and your paper.
I tried to integrate the TensorFlow offline WPE with my ASR system.
However, the time spent on a 3.5 s audio file is ~7 s for tf-offline-wpe, while the numpy version only takes ~200 ms.
I ran tf-offline-wpe on a GPU. What I have done is simply run the WPE dereverberation processing under one tf.Session for all the audio files,
so my code is something like
But it takes more time than the numpy version, which confuses me a lot. I expected the TF nara_wpe version to be faster than the numpy one.
Best