Tuple as device_type input to support Heterogeneous Sharding of tables across different device_types #2600
base: main
Conversation
This pull request was exported from Phabricator. Differential Revision: D65933148
The branch was force-pushed several times, from 2e3aa39 to 35d3c3a, 0bf4f59, 5d1d79b, 290bd30, and finally b2599be; each push re-exported the same commit ("… across different device_types", pytorch#2600) from Phabricator.
Summary: As we plan to support heterogeneous sharding across different device types (e.g. CUDA / CPU), we will pass a device type per shard: device_type_from_sharding_info becomes a tuple in which each index holds the device_type for the corresponding shard.
Differential Revision: D65933148
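A minimal sketch of the idea described in the summary (the field name device_type_from_sharding_info comes from the PR text; the helper function and the example shard placement are hypothetical illustrations, not TorchRec's actual API):

```python
from typing import Tuple

# Previously, a single device type string applied to every shard of a table,
# e.g. device_type_from_sharding_info = "cuda".
#
# With heterogeneous sharding, each shard can live on a different device type,
# so the same field carries one entry per shard: index i maps to shard i.
device_type_from_sharding_info: Tuple[str, ...] = ("cuda", "cuda", "cpu")


def device_for_shard(shard_idx: int, device_types: Tuple[str, ...]) -> str:
    """Look up the device type for a given shard (hypothetical helper)."""
    return device_types[shard_idx]


# Shards 0 and 1 are placed on GPU, shard 2 on CPU.
assert device_for_shard(0, device_type_from_sharding_info) == "cuda"
assert device_for_shard(2, device_type_from_sharding_info) == "cpu"
```

Representing the placement as a tuple keeps the per-shard device lookup a simple index operation while remaining backward compatible in spirit: a homogeneous table is just a tuple whose entries are all the same device type.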