Improve all_to_one error message (pytorch#2019)
Summary:
Pull Request resolved: pytorch#2019

As titled

Reviewed By: jianyuh

Differential Revision: D49296564

fbshipit-source-id: 442c13567cb7aa8de8c208c2ee1fb2ae550a8969
sryap authored and facebook-github-bot committed Sep 15, 2023
1 parent a7d2be5 commit aa48aaa
Showing 1 changed file with 6 additions and 1 deletion.
7 changes: 6 additions & 1 deletion fbgemm_gpu/src/merge_pooled_embeddings_gpu.cpp
@@ -149,7 +149,12 @@ void all_to_one(
       });
 
   auto target_device_index = target_device.index();
-  TORCH_CHECK(target_device_index < num_gpus && target_device_index >= 0);
+  TORCH_CHECK(
+      target_device_index != -1,
+      "target_device.index() is -1. Please pass target_device with device "
+      "index, e.g., torch.device(\"cuda:0\")");
+
+  TORCH_CHECK(target_device_index < num_gpus);
 
   std::vector<TwoHopTransferContainer> two_hop_transfers;
   two_hop_transfers.reserve(input_tensors.size());
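For context, the two checks added here behave like the following standalone sketch. This is a hypothetical helper for illustration only: it uses plain C++ exceptions in place of PyTorch's `TORCH_CHECK` macro, and the function name `check_target_device_index` is not part of the real code.

```cpp
#include <stdexcept>
#include <string>

// Hypothetical standalone version of the validation added in this commit.
// The real code in all_to_one uses TORCH_CHECK from PyTorch's c10 library.
void check_target_device_index(int target_device_index, int num_gpus) {
  // A torch device constructed without an explicit index (e.g.
  // torch.device("cuda") rather than torch.device("cuda:0")) reports
  // index() == -1, which the old combined check rejected with no message.
  if (target_device_index == -1) {
    throw std::invalid_argument(
        "target_device.index() is -1. Please pass target_device with device "
        "index, e.g., torch.device(\"cuda:0\")");
  }
  // The index must also refer to a visible GPU.
  if (target_device_index >= num_gpus) {
    throw std::invalid_argument("target device index out of range");
  }
}
```

Splitting the old `target_device_index < num_gpus && target_device_index >= 0` check in two is what lets the `-1` case carry its own actionable error message.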
