default embedding #472
If we do not provide an embedding like word2vec, how does the model know how to represent the words? Does it use one-hot encoding by default, or n-grams, CBOW, or skip-grams?

Comments
No. If you do not provide pretrained embeddings, the framework creates a trainable variable and initializes it with a standard initialization scheme. When you train the model on your data, this variable is updated as well.
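For readers wondering what "create a trainable variable" means in practice, here is a minimal sketch in TensorFlow (not this repository's exact code; `vocab_size`, `embedding_size`, and the initializer range are placeholder choices): the embedding matrix starts from random values and is updated by backpropagation like any other weight.

```python
import tensorflow as tf

# Placeholder sizes; the real values come from the vocabulary and config.
vocab_size, embedding_size = 10000, 256

# A randomly initialized, trainable embedding matrix: one row per word id.
embedding = tf.Variable(
    tf.random.uniform([vocab_size, embedding_size], -0.1, 0.1),
    name="embedding",
)

# Looking up word ids simply selects the corresponding rows of the matrix.
word_ids = tf.constant([[3, 17, 42]])        # a batch of token ids
word_vectors = tf.nn.embedding_lookup(embedding, word_ids)
print(word_vectors.shape)                    # (1, 3, 256)
```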
@luozhouyang I understand that if we do not provide a pre-trained embedding, it uses this framework's default embedding implementation. However, I would like to know what algorithm is used to build that embedding.
Word embeddings here are actually just a 2-D tensor, with one row per vocabulary word and one column per embedding dimension.
@luozhouyang I understand this. But which algorithm does it use (word2vec, GloVe, ...)?
No special algorithm is used. Not word2vec, not GloVe, just a learnable 2-D matrix.
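As an illustration of that point, a hedged sketch with a Keras `Embedding` layer (an assumed setup, not code from this project): when no pretrained weights are supplied, the layer starts from its default random initializer and is learned jointly with the rest of the model; no word2vec or GloVe step is involved.

```python
import tensorflow as tf

# The Embedding layer below is just a learnable (vocab_size x embedding_dim)
# matrix with a random default initializer; sizes here are placeholder values.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=128),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(token_ids, labels, ...)  # gradients update the embedding rows too
```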