1. CuDNN-RNN sorting requirement:
To call cuDNN, we need to sort the sequences by length. This breaks the order of the sentence, so we have to reorder the hidden states after the call. This could be slow, so we may want an alternative. Any ideas @denizyuret? I guess we could pad the sequences from the right so they all have the same length and process them at once, set the embeddings of the pads to 0, and expect the RNN to learn to ignore those zeros (see the sketch below).
Morse.jl/src/models.jl
Line 76 in 2dfe277
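A minimal sketch of that padding idea in plain Julia. All names here (PAD, pad_batch, the sizes) are hypothetical, and the final reshape assumes the (features, batch, time) layout that Knet's cuDNN-backed RNN takes:

```julia
# Hypothetical sketch: right-pad a batch to a common length and give the pad
# token an all-zero embedding, so the RNN sees zeros at padded positions.
const PAD = 1  # hypothetical pad-token id

function pad_batch(seqs::Vector{Vector{Int}})
    B, T = length(seqs), maximum(length, seqs)
    batch = fill(PAD, B, T)              # B×T token ids, pads on the right
    for (i, s) in enumerate(seqs)
        batch[i, 1:length(s)] = s
    end
    return batch
end

E, V = 8, 100                            # embedding and vocab sizes (made up)
embed = randn(Float32, E, V)
embed[:, PAD] .= 0                       # pad embedding is all zeros

batch = pad_batch([[3, 5, 7], [2, 9], [4, 6, 8, 10]])
B, T = size(batch)
x = reshape(embed[:, vec(batch)], E, B, T)   # (X, B, T) RNN input, no sorting needed
```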
2. Minibatching at decoding time:
1- We can't process all words at once because of the output encoding. Should we stick to the non-output-encoding version of the model for a faster public model? @denizyuret
2- I believe we should be able to eliminate the for loop over the characters of a word (for t=1:timeSteps). Is there any obstacle to that? I guess this is only helpful in training (see the sketch below).
Morse.jl/src/models.jl
Lines 284 to 306 in 2dfe277
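A rough sketch of that loop elimination, assuming Knet's RNN layer (the sizes are made up, and this stands in for the Morse.jl encoder rather than reproducing it):

```julia
using Knet  # assumes Knet's cuDNN-backed RNN

E, H, B, T = 8, 16, 4, 10
rnn = RNN(E, H)                   # input size E, hidden size H
x = randn(Float32, E, B, T)       # embedded characters for one batch

# Current pattern: one call per character, carrying hidden state across steps.
# for t in 1:T
#     y = rnn(x[:, :, t])
# end

y = rnn(x)                        # single call over all T steps, returns (H, B, T)
```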
Two alternatives:
1. Just use padding and do not sort.
2. I am working on a new cuDNN interface which includes a builtin "padded array" option: https://github.com/denizyuret/CUDA.jl/tree/dy/cudnn/lib/cudnn -- it is a PR under evaluation now and will probably be released with Julia 1.6.
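For reference, the sort-and-restore bookkeeping that option 1 avoids looks roughly like this (hypothetical sketch; rnn and x are placeholders):

```julia
lengths = [3, 5, 2, 4]               # per-sequence lengths in the batch
perm = sortperm(lengths; rev=true)   # cuDNN wants longest-first
restore = invperm(perm)              # permutation that undoes the sort
# h = rnn(x[:, perm, :])             # run the RNN on the sorted batch
# h = h[:, restore, :]               # reorder hidden states back to sentence order
```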
*2. Minibatching at decoding time:*
1- We can't process all words at once because of the output encoding. Should we stick to the non-output-encoding version of the model for a faster public model? @denizyuret
This is during training, right? Transformers solve this using masking, but I guess something similar may not be possible in RNNs. How much do we lose without output encoding? We can certainly make this an option.
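If the padded route is taken, a masked loss is the usual way to keep pads from affecting training. A minimal sketch, with losses and mask standing in for per-token losses and a real-token indicator:

```julia
B, T = 4, 7
losses = rand(Float32, B, T)              # hypothetical per-token losses
mask = rand(Bool, B, T)                   # true where a real (non-pad) token sits
loss = sum(losses .* mask) / sum(mask)    # pads contribute nothing to the mean
```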