(CHAML) dch@gpuadmin-SYS-7048GR-TR:~/source/CHAML$ sh run.sh
06/10 11:21:02 - INFO - method is meta. HARD_TASK: True, HARD_USER: True, CURRICULUM: True, PACING_FUNCTION: ssp, PER_TEST_LOG: 2500, PATIENCE: 2
06/10 11:21:02 - INFO - curriculum is: [0, 1, 2, 3, 4, 5, 6, 7]
06/10 11:21:02 - INFO - Got config from config/config-chaml.json
{'update_lr': 0.001, 'meta_lr': 0.001, 'update_step': 1, 'update_step_test': 1, 'task_batch_size': 4, 'train_qry_batch_size': 512, 'max_train_steps': 100000, 'few_num': 512, 'num_poi_types': 230, 'num_time': 25, 'embed_dim': 50, 'poiid_dim': 50, 'mlp_hidden': 300, 'local_fix_var': 1, 'global_fix_var': 1, 'sample_batch_size': 1024, 'test_task_batch_size': 1, 'num_epoch': 10, 'with_cont_feat': True}
<model.meta.Meta object at 0x7fbc1dded7d8>
06/10 11:21:02 - INFO - Total trainable tensors: 225653
06/10 11:21:06 - INFO - Loaded all the data pickles!
/home/dch/source/CHAML/utils/metadataset.py:57: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
  final_pos_samples.append(np.array([user_id, hist, pos_candi, label]))
/home/dch/source/CHAML/utils/metadataset.py:58: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
  final_neg_samples.append(np.array([user_id, hist, neg_candi, 0]))
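These two VisibleDeprecationWarnings are not what crashes the run, and they are easy to silence: as the warning text itself says, pass dtype=object when building an ndarray from ragged sequences. A minimal sketch (the row values are illustrative, not taken from the dataset):

```python
import numpy as np

# A row shaped like [user_id, hist, pos_candi, label], where hist and
# pos_candi are sequences of unequal length. Passing dtype=object builds
# a 1-D object array instead of triggering the deprecation warning.
row = np.array([42, [1, 2, 3], [7], 1], dtype=object)
```

On newer NumPy releases the same call without dtype=object is a hard error rather than a warning, so the fix is worth applying even though the run currently continues past this point.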
/home/common/anaconda3/envs/dch-CHAML/lib/python3.6/site-packages/paddle/fluid/dygraph/math_op_patch.py:165: RuntimeWarning: divide by zero encountered in double_scalars
  return _scalar_elementwise_op_(var, 1.0 / value, 0.0)
Traceback (most recent call last):
  File "main.py", line 374, in <module>
    main_meta(meta_path, root_path, id_emb_path)
  File "main.py", line 283, in main_meta
    'stage1', CURRICULUM, HARD_TASK, batch_id=batch_id)
  File "main.py", line 222, in one_meta_training_step
    poiid_embs=poiid_embs, cont_feat_scalers=cont_feat_scalers)
  File "/home/common/anaconda3/envs/dch-CHAML/lib/python3.6/site-packages/paddle/fluid/dygraph/layers.py", line 891, in __call__
    outputs = self.forward(*inputs, **kwargs)
  File "/home/dch/source/CHAML/model/meta.py", line 76, in forward
    vars=None, scaler=scaler)
  File "/home/common/anaconda3/envs/dch-CHAML/lib/python3.6/site-packages/paddle/fluid/dygraph/layers.py", line 891, in __call__
    outputs = self.forward(*inputs, **kwargs)
  File "/home/dch/source/CHAML/model/learner.py", line 133, in forward
    hist_embed, mask)
  File "/home/dch/source/CHAML/model/learner.py", line 73, in attention
    score = paddle.where(mask==1, wall, score)
  File "/home/common/anaconda3/envs/dch-CHAML/lib/python3.6/site-packages/paddle/fluid/dygraph/math_op_patch.py", line 238, in __impl__
    return math_op(self, other_var, 'axis', axis)
RuntimeError: (NotFound) Operator equal does not have kernel for data_type[bool]:data_layout[ANY_LAYOUT]:place[CPUPlace]:library_type[PLAIN].
  [Hint: Expected kernel_iter != kernels.end(), but received kernel_iter == kernels.end().] (at /paddle/paddle/fluid/imperative/prepared_operator.cc:127)
  [operator < equal > error]
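Note that the crash itself is distinct from the divide-by-zero warning: the traceback ends at score = paddle.where(mask==1, wall, score), and the error says Paddle's equal operator has no bool kernel on CPUPlace in this version. A plausible workaround (an assumption, not verified against this repo) is to avoid comparing a bool tensor, either by casting the mask first, e.g. mask.astype('int64') == 1, or by passing the bool mask to paddle.where directly. The masking logic being attempted, sketched with NumPy as a stand-in:

```python
import numpy as np

# Stand-in for the attention masking in model/learner.py: positions where
# mask == 1 are overwritten with a large negative "wall" so they vanish
# after softmax. Using an integer mask sidesteps the bool `equal` kernel.
mask = np.array([[1, 0], [0, 1]])
score = np.zeros((2, 2))
wall = np.full((2, 2), -1e9)
masked = np.where(mask == 1, wall, score)
```

The same three-argument where(condition, x, y) pattern applies in Paddle; only the dtype of the condition needs adjusting for this Paddle build.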
After strictly following the CHAML README, the final step, running run.sh, fails with a "divide by zero" error.
The detailed error log is shown above.
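As for the "divide by zero" itself: the RuntimeWarning comes from a scalar division 1.0 / value inside Paddle's math_op_patch, which suggests a zero-valued divisor, plausibly one of the cont_feat_scalers. A minimal guard sketch; the name safe_scale and the epsilon fallback are hypothetical, not part of CHAML:

```python
# Hypothetical guard against a zero-valued feature scaler; `safe_scale`
# and the 1e-12 floor are illustrative assumptions, not CHAML code.
def safe_scale(value, eps=1e-12):
    """Return a divisor that is never exactly zero."""
    return value if abs(value) > eps else eps
```

Checking the continuous-feature scalers for zeros before training would confirm whether this warning and the crash share a root cause or are unrelated.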