```
[dynet] random seed: 615808844
[dynet] allocating memory: 512MB
[dynet] memory allocation done.
0...
done reading training actions file
#sents: 2
#tokens: 46
#types: 36
#POStags: 13
#actions: 42
#preds: 35
0...
done reading training actions file
#sents: 2
#tokens: 29
#types: 36
#POStags: 19
#actions: 43
#action types: 8
#preds: 36
Start loading preds
#lemmas with associated PR acts = 25
creating dictionary of pretrain embeddings
finish creating
UNK index in tok_dict_pretrain: 0
UNK index in tok_dict_all: 0
0...
done loading raw sent
#sents: 2
0...
done loading raw sent
#sents: 2
Rand word embedding size: 36
Pretrained word embedding size: 55
loading pretrained word embeddings
finish loading
loading pretrained model
```
```
Traceback (most recent call last):
  File "train.py", line 227, in <module>
    parser.load_model(args.model)
  File "/home/chenfeng/parser/AMR-cartus/model/stack_lstm.py", line 109, in load_model
    self.pc.populate(filename)
  File "_dynet.pyx", line 1461, in _dynet.ParameterCollection.populate
  File "_dynet.pyx", line 1516, in _dynet.ParameterCollection.populate_from_textfile
RuntimeError: Dimensions of parameter /_24 looked up from file ({8557,200}) do not match parameters to be populated ({44,200})
```
Asking for your help. Thanks a lot!
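For anyone hitting a similar `populate` error: DyNet refuses to load a parameter whose saved dimensions differ from the dimensions of the freshly built model, which typically means the current vocabulary/action set does not match the one the checkpoint was trained with. Below is a minimal, purely illustrative sketch of that kind of shape check (a hypothetical helper, not DyNet's actual API; the shapes are the ones from the error above):

```python
def check_shapes(saved, expected):
    """Return (name, saved_shape, expected_shape) for every parameter
    whose shape in the checkpoint differs from the freshly built model.
    Illustrative helper only, not part of DyNet."""
    mismatches = []
    for name, shape in expected.items():
        if name in saved and saved[name] != shape:
            mismatches.append((name, saved[name], shape))
    return mismatches

# Shapes taken from the error message above; the parameter name /_24 is
# DyNet's anonymous numbering, so which table it is depends on build order.
saved_shapes = {"/_24": (8557, 200)}
expected_shapes = {"/_24": (44, 200)}

for name, got, want in check_shapes(saved_shapes, expected_shapes):
    print(f"{name}: checkpoint has {got}, model expects {want}")
```

If a mismatch is reported, the fix is usually to rebuild the model with exactly the same training files (and therefore the same vocabulary and action counts) that produced the checkpoint, rather than to edit the checkpoint.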
I see. That should be a bug. I am afraid I am not able to fix it now since I am working toward a deadline. Hopefully, I can fix these bugs and release a new pre-trained model in late December.
Sorry for the inconvenience caused.