https://github.com/k2-fsa/k2

Recently played around with the above: really fast construction of FSAs on the GPU. Given that a lot of our models rely on monotonicity assumptions, having access to lattice construction with autograd would give us some cool novel architectures for our set of tasks.
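To make the "lattices with autograd" point concrete, here's a minimal sketch of backpropagating through an FSA's arc scores with k2. The toy acceptor is made up for illustration, and this assumes a recent k2 release, so exact API details may differ:

```python
import k2

# Toy acceptor in k2's text format: "src dst label score",
# label -1 for arcs into the final state, final state on the last line.
s = '''
0 1 1 0.1
0 1 2 0.2
1 2 -1 0.0
2
'''
fsa = k2.Fsa.from_str(s)
fsa.requires_grad_(True)              # arc scores become trainable
fsa_vec = k2.create_fsa_vec([fsa])

# Total log-probability over all paths (log semiring); differentiable.
tot = fsa_vec.get_tot_scores(use_double_scores=True, log_semiring=True)
tot.sum().backward()
print(fsa.scores.grad)                # gradient w.r.t. each arc score
```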
Some extensions we could build:
- Push the monotonic transducer and edit-action transducers to use trainable lattices.
- Allow integration of our models with n-gram language models. This can improve training by ruling out illegal constructions not seen in the training data (see the intersection sketch after this list).
- Help with building CTC/Transducer models for NLP tasks (see the loss sketch below).
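On the n-gram point, the pruning would presumably just be FSA intersection. Here's a minimal sketch with toy FSAs standing in for a real lattice and a real G; in practice G would be exported from OpenGRM/Pynini in OpenFst text format and loaded with `k2.Fsa.from_openfst`:

```python
import k2

# Toy lattice over symbols {1,2,3}; a real one would come from a model.
lattice = k2.Fsa.from_str('''
0 1 1 0.5
0 1 2 0.3
1 2 3 0.1
1 2 2 0.4
2 3 -1 0.0
3
''')

# Toy stand-in for an n-gram G: only licenses the sequences (1 3) and (2 3).
G = k2.Fsa.from_str('''
0 1 1 0.0
0 1 2 0.0
1 2 3 0.0
2 3 -1 0.0
3
''')

lattice = k2.arc_sort(lattice)
G = k2.arc_sort(G)                    # intersection wants arc-sorted inputs
pruned = k2.connect(k2.intersect(lattice, G))  # drop unlicensed paths
print(pruned.num_arcs)
```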
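And for the CTC point, a rough sketch of what a lattice-based CTC loss looks like in k2. The shapes, target sequence, and `output_beam` are toy values, not a recipe:

```python
import torch
import k2

N, T, C = 1, 10, 5                    # batch, frames, classes (0 = blank)
logits = torch.randn(N, T, C, requires_grad=True)
log_probs = logits.log_softmax(-1)

targets = [[1, 3, 2]]                 # one toy label sequence
ctc_graphs = k2.ctc_graph(targets)    # FsaVec of CTC supervision graphs

# One row per supervision: [fsa_index, start_frame, num_frames].
supervision_segments = torch.tensor([[0, 0, T]], dtype=torch.int32)
dense = k2.DenseFsaVec(log_probs, supervision_segments)

lattice = k2.intersect_dense(ctc_graphs, dense, output_beam=10.0)
tot = lattice.get_tot_scores(use_double_scores=True, log_semiring=True)
loss = -tot.sum()                     # CTC negative log-likelihood
loss.backward()                       # gradients flow back to logits
```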
N.B. From a self-serving angle: a little birdie told me this wasn't getting much love these days, so I want to keep it around the OpenGRM/Pynini/Thrax family to keep it alive. (Implementing GPU-based FSA algorithms would be a nifty master's project for CUNY peeps.)