Hi @schmidek,

Thank you a lot for your contribution! We tried to train your project on the KnowledgeNet dataset (this version: https://github.com/schmidek/dygiepp/tree/multitask/dygie) and ran into the following issue: training does not reach the expected high scores, and a "NaN or Inf" warning is raised repeatedly during training.

The setup is: we run the command `./scripts/train/train_kn.sh gpu_id`, and the training log looks like this:
2020-11-17 11:10:03,030 - INFO - allennlp.training.trainer - Training
0%| | 0/1074 [00:00<?, ?it/s]2020-11-17 11:10:04,066 - WARNING - allennlp.training.util - Metrics with names beginning with "_" will not be logged to the tqdm progress bar.
rel_precision: 0.0000, rel_recall: 0.0000, rel_f1: 0.0000, rel_span_recall: 0.0057, loss: 3.1743 ||: 9%|9 | 99/1074 [01:05<09:49, 1.65it/s]2020-11-17 11:11:10,009 - WARNING - root - NaN or Inf found in input tensor.
2020-11-17 11:11:10,009 - WARNING - root - NaN or Inf found in input tensor.
2020-11-17 11:11:10,026 - WARNING - root - NaN or Inf found in input tensor.
2020-11-17 11:11:10,026 - WARNING - root - NaN or Inf found in input tensor.
2020-11-17 11:11:10,029 - WARNING - root - NaN or Inf found in input tensor.
2020-11-17 11:11:10,029 - WARNING - root - NaN or Inf found in input tensor.
2020-11-17 11:11:10,038 - WARNING - root - NaN or Inf found in input tensor.
2020-11-17 11:11:10,038 - WARNING - root - NaN or Inf found in input tensor.
2020-11-17 11:11:10,041 - WARNING - root - NaN or Inf found in input tensor.
2020-11-17 11:11:10,041 - WARNING - root - NaN or Inf found in input tensor.
rel_precision: 0.0000, rel_recall: 0.0007, rel_f1: 0.0000, rel_span_recall: 0.0060, loss: 2.1448 ||: 19%|#8 | 199/1074 [02:09<08:33, 1.70it/s]2020-11-17 11:12:13,805 - WARNING - root - NaN or Inf found in input tensor.
2020-11-17 11:12:13,806 - WARNING - root - NaN or Inf found in input tensor.
2020-11-17 11:12:13,818 - WARNING - root - NaN or Inf found in input tensor.
2020-11-17 11:12:13,818 - WARNING - root - NaN or Inf found in input tensor.
2020-11-17 11:12:13,820 - WARNING - root - NaN or Inf found in input tensor.
2020-11-17 11:12:13,820 - WARNING - root - NaN or Inf found in input tensor.
2020-11-17 11:12:13,827 - WARNING - root - NaN or Inf found in input tensor.
2020-11-17 11:12:13,827 - WARNING - root - NaN or Inf found in input tensor.
2020-11-17 11:12:13,829 - WARNING - root - NaN or Inf found in input tensor.
2020-11-17 11:12:13,830 - WARNING - root - NaN or Inf found in input tensor.
rel_precision: 0.0000, rel_recall: 0.0005, rel_f1: 0.0000, rel_span_recall: 0.0054, loss: 1.7052 ||: 26%|##5 | 276/1074 [02:58<08:36, 1.54it/s]
...
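As a side note: to check whether the non-finite values show up in the model itself (loss, parameters, or gradients) rather than only in the tensors that get logged, a check along the lines of the sketch below could be dropped into the training loop. This is just an illustration on our side; `model`, `loss`, and `step` are placeholders, not names from DyGIE++ or the training script.

```python
# Sketch only: report non-finite losses, parameters, or gradients at a given step.
# "model", "loss", and "step" are placeholders for whatever the training loop exposes;
# this is not code from DyGIE++ or AllenNLP.
import torch

def report_nonfinite(model: torch.nn.Module, loss: torch.Tensor, step: int) -> None:
    if not torch.isfinite(loss).all():
        print(f"step {step}: non-finite loss {loss.item()}")
    for name, param in model.named_parameters():
        if not torch.isfinite(param).all():
            print(f"step {step}: non-finite values in parameter {name}")
        if param.grad is not None and not torch.isfinite(param.grad).all():
            print(f"step {step}: non-finite values in gradient of {name}")
```

If a check like this stays silent while the warnings keep appearing, the NaN/Inf values are presumably only in quantities handed to the TensorBoard logger (for example an undefined metric while no relations are predicted yet), not in the optimization itself.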
This NaN warning appears a couple of times in every epoch, and the evaluation between epochs does not improve either. Our guess is that the problem lies in the training config, since the NaN warnings might be caused by the older AllenNLP version and its TensorBoard logging (allenai/allennlp#3116); the sketch below illustrates what we mean.
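For what it is worth, the exact message "NaN or Inf found in input tensor." looks like the warning that tensorboardX emits on the root logger (which would match the "WARNING - root" lines above) when it is asked to write a value containing NaN or Inf. Under that assumption, a minimal sketch that reproduces the message:

```python
# Minimal sketch, assuming the warning originates in tensorboardX's NaN check:
# logging a NaN scalar should print "NaN or Inf found in input tensor." on the root logger.
from tensorboardX import SummaryWriter

writer = SummaryWriter("/tmp/nan_warning_repro")  # throwaway log dir, name is arbitrary
writer.add_scalar("rel_f1", float("nan"), global_step=0)
writer.close()
```

If that is where the warning comes from, it would also explain why the warnings arrive in bursts around every 100th batch (batch 99, 199, ... in the log above), which would line up with a periodic logging interval rather than with particular training examples, and why the running loss in the progress bar stays finite.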
Did you use the same config as in the branch linked above when training DyGIE++ for the KnowledgeNet leaderboard? Could you please help us? Did we miss anything in the preprocessing that is needed for successful training?
Evaluation after the first epoch
Evaluation after epoch 22 (early-stopping)