pure language model #7
Hello, inspired by openai/finetune-transformer-lm, I am now trying to build a language model based on your code, and a question came up during the implementation.

Why don't you add the loss function through the `compile` API? I am not quite sure what the effect of `add_loss` is compared with passing the loss to `compile`.
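For reference, here is a minimal sketch of the two Keras patterns in question, with placeholder names and sizes (`LMLoss`, `VOCAB`, `D_MODEL`; the embedding/dense stack only stands in for the Transformer encoder). In the first variant the loss function is handed to `compile` and the target tokens are passed as `y` to `fit`; in the second the loss is computed inside the graph by a small custom layer and registered with `add_loss`, the targets become an extra model input, and `compile` receives no loss at all.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

VOCAB, D_MODEL, MAX_LEN = 1000, 64, 32

# Variant 1: the loss goes through compile(); targets are passed as y in fit().
tokens = keras.Input(shape=(MAX_LEN,), dtype="int32")
h = layers.Embedding(VOCAB, D_MODEL)(tokens)      # stand-in for the encoder
logits = layers.Dense(VOCAB)(h)
m1 = keras.Model(tokens, logits)
m1.compile("adam",
           loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# Variant 2: the loss is built inside the graph and registered with add_loss();
# the targets become a second model input and fit() is called with y=None.
class LMLoss(layers.Layer):
    """Computes token-level cross-entropy and registers it via add_loss."""
    def call(self, targets, logits):
        loss = tf.reduce_mean(
            keras.losses.sparse_categorical_crossentropy(
                targets, logits, from_logits=True))
        self.add_loss(loss)
        return logits

tokens2 = keras.Input(shape=(MAX_LEN,), dtype="int32")
targets = keras.Input(shape=(MAX_LEN,), dtype="int32")
h2 = layers.Embedding(VOCAB, D_MODEL)(tokens2)
logits2 = LMLoss()(targets, layers.Dense(VOCAB)(h2))
m2 = keras.Model([tokens2, targets], logits2)
m2.compile("adam")   # no loss here: add_loss has already supplied it
```

The `add_loss` route is often preferred when the loss needs quantities that are easiest to compute inside the graph, such as custom padding masks or per-position weights; as written, both variants minimize the same cross-entropy.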
By the way, I made a language-model encoder based on your Encoder, but I added `GetSubMask` as you did in the Decoder. Then I would like to add a CRF layer after the encoder (for sequence labelling, whereas OpenAI's model is for text classification), and finally train the model on the language-model loss plus the CRF loss. Do you have any implementation suggestions? In particular, any ideas for verifying the correctness of the code? I saw your example data with pinyin and Chinese; are you Chinese?
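Here is a minimal sketch of one possible wiring, under explicit assumptions: the masked Transformer encoder is replaced by a unidirectional LSTM stand-in (also causal, so the next-token objective stays well defined), the CRF term comes from `tensorflow_addons.text.crf_log_likelihood` (keras-contrib's `CRF` layer would be an alternative), padding masks are omitted, and all names and sizes (`LMWithCRF`, `VOCAB`, `NUM_TAGS`) are placeholders rather than anything from this repository. The point is only how the two losses can be summed and registered with `add_loss` so that a single backward pass trains both heads.

```python
import tensorflow as tf
import tensorflow_addons as tfa  # assumed available for crf_log_likelihood

VOCAB, NUM_TAGS, D_MODEL = 1000, 10, 64

class LMWithCRF(tf.keras.Model):
    """Shared causal encoder with two heads: next-token LM and CRF tagging."""

    def __init__(self):
        super().__init__()
        self.embed = tf.keras.layers.Embedding(VOCAB, D_MODEL)
        # Stand-in for the masked Transformer encoder: a unidirectional LSTM
        # is also causal, so the next-token objective remains valid.
        self.encoder = tf.keras.layers.LSTM(D_MODEL, return_sequences=True)
        self.lm_head = tf.keras.layers.Dense(VOCAB)        # next-token logits
        self.emission = tf.keras.layers.Dense(NUM_TAGS)    # CRF emission scores
        self.transitions = self.add_weight(
            name="crf_transitions", shape=(NUM_TAGS, NUM_TAGS))

    def call(self, inputs, training=False):
        tokens, tags, lengths = inputs
        h = self.encoder(self.embed(tokens))
        lm_logits = self.lm_head(h)
        emissions = self.emission(h)

        # Language-model loss: predict token t+1 from positions <= t.
        # (Padding positions are not masked here, for brevity.)
        lm_loss = tf.reduce_mean(
            tf.keras.losses.sparse_categorical_crossentropy(
                tokens[:, 1:], lm_logits[:, :-1], from_logits=True))

        # CRF negative log-likelihood of the gold tag sequence.
        log_lik, _ = tfa.text.crf_log_likelihood(
            emissions, tags, lengths, self.transitions)
        crf_loss = -tf.reduce_mean(log_lik)

        # Joint objective: LM loss + CRF loss, registered via add_loss.
        self.add_loss(lm_loss + crf_loss)
        return lm_logits, emissions


# Smoke test on random data: one forward/backward pass.
tokens = tf.random.uniform((4, 12), maxval=VOCAB, dtype=tf.int32)
tags = tf.random.uniform((4, 12), maxval=NUM_TAGS, dtype=tf.int32)
lengths = tf.fill((4,), 12)

model = LMWithCRF()
with tf.GradientTape() as tape:
    model((tokens, tags, lengths), training=True)
    loss = tf.add_n(model.losses)
grads = tape.gradient(loss, model.trainable_variables)
tf.keras.optimizers.Adam().apply_gradients(zip(grads, model.trainable_variables))
```

For checking correctness, a common sanity test is to overfit a handful of sentences: the language-model loss should drop close to zero and the CRF head should reproduce the gold tags on that tiny set; a tagging-only baseline (CRF loss alone) also gives a reference point for whether the added language-model loss helps.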
My current implementation is