-
Hi, not really sure what's going on here. Does the crash happen if you train it on a single GPU? I have worked with multi-GPU settings in Spektral in the past, but it's been a while, so there may be bugs that need to be fixed.
-
Hello guys,
Is it possible to use Spektral on a multi-GPU system?
I am working on a GNN for node-level prediction using TensorFlow and Spektral. I am using a lot of data, which I can't fit into the memory of a single GPU device (one big graph of about 20 GB).
Does anybody have tips on how I should proceed? My first idea was to use distributed training with TensorFlow, for more memory and more speed. But when I use tf.distribute.MirroredStrategy() I get an error like this:
Below is my example model:
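As a general note on the memory problem described in the question: a graph that does not fit on one device is often trained on sampled neighborhoods around batches of seed nodes (the GraphSAGE-style approach) instead of the full adjacency. Below is a minimal NumPy-only sketch of extracting such a node-induced subgraph; the function name, the toy path graph, and all sizes are illustrative assumptions, not code from this thread or from Spektral:

```python
import numpy as np

def sample_subgraph(adj, features, labels, seed_nodes, hops=1):
    """Return the node-induced subgraph around `seed_nodes`,
    expanded by `hops` rounds of neighbor lookups.

    Illustrative helper (not part of Spektral): `adj` is a dense
    (n, n) adjacency matrix, `features` is (n, f), `labels` is (n,).
    """
    nodes = set(seed_nodes)
    for _ in range(hops):
        frontier = set()
        for n in nodes:
            # Indices of the neighbors of node n in the adjacency row.
            frontier.update(np.nonzero(adj[n])[0].tolist())
        nodes |= frontier
    idx = np.array(sorted(nodes))
    # Slice out the induced adjacency and the matching feature/label rows.
    sub_adj = adj[np.ix_(idx, idx)]
    return sub_adj, features[idx], labels[idx], idx

# Toy graph: 6 nodes in a path 0-1-2-3-4-5.
n = 6
adj = np.zeros((n, n), dtype=np.float32)
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
x = np.arange(n, dtype=np.float32).reshape(n, 1)
y = np.arange(n)

sub_adj, sub_x, sub_y, idx = sample_subgraph(adj, x, y, seed_nodes=[2], hops=1)
# 1 hop around node 2 keeps nodes 1, 2, 3.
```

Each sampled subgraph fits comfortably on one GPU and can be fed to the model as an ordinary batch; this sidesteps both the memory limit and the MirroredStrategy issue, at the cost of training on local neighborhoods rather than the full 20 GB graph at once.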