{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":715900328,"defaultBranch":"main","name":"pytorch","ownerLogin":"amitaga","currentUserCanPush":false,"isFork":true,"isEmpty":false,"createdAt":"2023-11-08T03:53:05.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/16528882?v=4","public":true,"private":false,"isOrgOwned":false},"refInfo":{"name":"","listCacheKey":"v0:1723137826.0","currentOid":""},"activityList":{"items":[{"before":"6f6fae28ebdfc1357397b931c9feb1123c27502c","after":"5b75500f9d725a5945a9165b6e4a3c6ab7e43cf1","ref":"refs/heads/export-D60969898","pushedAt":"2024-08-08T19:17:34.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"amitaga","name":"Amit Agarwal","path":"/amitaga","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/16528882?s=80&v=4"},"commit":{"message":"Fix fbcode AOTI GPU lowering for ARM64 hosts (#133017)\n\nSummary:\nPull Request resolved: https://github.com/pytorch/pytorch/pull/133017\n\nFix fbcode AOTI GPU lowering for ARM64 hosts\n\nReviewed By: hl475\n\nDifferential Revision: D60969898","shortMessageHtmlLink":"Fix fbcode AOTI GPU lowering for ARM64 hosts (pytorch#133017)"}},{"before":"f898d0cb8e92173a546e783dadbdc8abc6568ca3","after":"6f6fae28ebdfc1357397b931c9feb1123c27502c","ref":"refs/heads/export-D60969898","pushedAt":"2024-08-08T17:30:42.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"amitaga","name":"Amit Agarwal","path":"/amitaga","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/16528882?s=80&v=4"},"commit":{"message":"Fix fbcode AOTI GPU lowering for ARM64 hosts (#133017)\n\nSummary:\nPull Request resolved: https://github.com/pytorch/pytorch/pull/133017\n\nFix fbcode AOTI GPU lowering for ARM64 hosts\n\nReviewed By: hl475\n\nDifferential Revision: D60969898","shortMessageHtmlLink":"Fix fbcode AOTI GPU lowering for ARM64 hosts 
(pytorch#133017)"}},{"before":null,"after":"f898d0cb8e92173a546e783dadbdc8abc6568ca3","ref":"refs/heads/export-D60969898","pushedAt":"2024-08-08T17:23:46.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"amitaga","name":"Amit Agarwal","path":"/amitaga","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/16528882?s=80&v=4"},"commit":{"message":"Fix fbcode AOTI GPU lowering for ARM64 hosts\n\nSummary: Fix fbcode AOTI GPU lowering for ARM64 hosts\n\nReviewed By: hl475\n\nDifferential Revision: D60969898","shortMessageHtmlLink":"Fix fbcode AOTI GPU lowering for ARM64 hosts"}},{"before":"9bda1e874c4048596966ab7e7b8998c4379e09f8","after":"78b84655659d84d3a7877009be4f39d651d31d44","ref":"refs/heads/export-D51094799","pushedAt":"2023-11-08T17:23:41.000Z","pushType":"push","commitsCount":2,"pusher":{"login":"amitaga","name":"Amit Agarwal","path":"/amitaga","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/16528882?s=80&v=4"},"commit":{"message":"[Distributed] Limit world_size to 8 for FSDP Unit tests (#103412)\n\nThere are few unit tests in FSDP that can support upto 8 GPUs.\nIn this case, for example test_fsdp_uneven has an input size of [8,3]. For each process/rank we pass the data as input[self.rank] as below. So when we use 16 GPUs for our tests, these tests throw an index/key error. So basically to avoid such corner cases, I would like to add this change to use 8GPUs if there are more than 8 GPUs. 
This is applicable to both ROCm and CUDA builds as well.\n\nhttps://github.com/pytorch/pytorch/blob/main/test/distributed/fsdp/test_fsdp_uneven.py#L44\nhttps://github.com/pytorch/pytorch/blob/main/test/distributed/fsdp/test_fsdp_uneven.py#L55\n\nPull Request resolved: https://github.com/pytorch/pytorch/pull/103412\nApproved by: https://github.com/jithunnair-amd, https://github.com/pruthvistony, https://github.com/malfet","shortMessageHtmlLink":"[Distributed] Limit world_size to 8 for FSDP Unit tests (pytorch#103412)"}},{"before":"096065e25e676508421a5c5c8f45587f1b51e428","after":"9bda1e874c4048596966ab7e7b8998c4379e09f8","ref":"refs/heads/export-D51094799","pushedAt":"2023-11-08T17:17:40.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"amitaga","name":"Amit Agarwal","path":"/amitaga","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/16528882?s=80&v=4"},"commit":{"message":"Reland \"[aot inductor] Move constant loading logic from Container to Model\" (#112197)\n\nTrying again, hopefully with 100% fewer merge conflicts\n\nOriginal diff: D50582959\nRevert diff: D50657400\n\nDifferential Revision: [D50710815](https://our.internmc.facebook.com/intern/diff/D50710815/)\n\nPull Request resolved: https://github.com/pytorch/pytorch/pull/112197\nApproved by: https://github.com/desertfire, https://github.com/chenyang78","shortMessageHtmlLink":"Reland \"[aot inductor] Move constant loading logic from Container to …"}},{"before":"ed9838419bb663109a94c8065eb93a8ddda84ae5","after":"096065e25e676508421a5c5c8f45587f1b51e428","ref":"refs/heads/export-D51094799","pushedAt":"2023-11-08T16:39:24.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"amitaga","name":"Amit 
Agarwal","path":"/amitaga","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/16528882?s=80&v=4"}},{"before":"f3ac963bfdcc5d2a6cc181eb85cc4ba460508ef1","after":"ed9838419bb663109a94c8065eb93a8ddda84ae5","ref":"refs/heads/export-D51094799","pushedAt":"2023-11-08T15:52:25.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"amitaga","name":"Amit Agarwal","path":"/amitaga","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/16528882?s=80&v=4"}},{"before":"7b4bf655f3f796c7c5c98ac9d8b827a9da38af79","after":"f3ac963bfdcc5d2a6cc181eb85cc4ba460508ef1","ref":"refs/heads/export-D51094799","pushedAt":"2023-11-08T15:51:50.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"amitaga","name":"Amit Agarwal","path":"/amitaga","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/16528882?s=80&v=4"}},{"before":"034407a1ff50fc4c66f6894eee656fe39b79ddc9","after":"7b4bf655f3f796c7c5c98ac9d8b827a9da38af79","ref":"refs/heads/export-D51094799","pushedAt":"2023-11-08T15:28:47.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"amitaga","name":"Amit Agarwal","path":"/amitaga","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/16528882?s=80&v=4"}},{"before":"dbee30a7030e00e508148cdaf6fc260e6aa63e75","after":"034407a1ff50fc4c66f6894eee656fe39b79ddc9","ref":"refs/heads/export-D51094799","pushedAt":"2023-11-08T14:07:19.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"amitaga","name":"Amit Agarwal","path":"/amitaga","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/16528882?s=80&v=4"}},{"before":null,"after":"dbee30a7030e00e508148cdaf6fc260e6aa63e75","ref":"refs/heads/export-D51094799","pushedAt":"2023-11-08T03:57:10.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"amitaga","name":"Amit Agarwal","path":"/amitaga","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/16528882?s=80&v=4"},"commit":{"message":"Replace impl_abstract version of fbgemm::merge_pooled_embedding 
with the older explicit meta implementation\n\nSummary:\nX-link: https://github.com/pytorch/FBGEMM/pull/2122\n\nThe new impl_abstract version of fbgemm::merge_pooled_embedding breaks the Ads model torch.export. Adding back the older explciit meta implementaiton to unblock\n\nTest Plan: buck2 run mode/dev-nosan //deeplearning/fbgemm/fbgemm_gpu:merge_pooled_embeddings_test -- -r test_merge_pooled_embeddings_meta\n\nReviewed By: IvanKobzarev\n\nDifferential Revision: D51094799\n\nfbshipit-source-id: 5efde2a701f66c09fc41fa388f0af2c9513c59f0","shortMessageHtmlLink":"Replace impl_abstract version of fbgemm::merge_pooled_embedding with …"}}],"hasNextPage":false,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"startCursor":"Y3Vyc29yOnYyOpK7MjAyNC0wOC0wOFQxOToxNzozNC4wMDAwMDBazwAAAASVmVkX","endCursor":"Y3Vyc29yOnYyOpK7MjAyMy0xMS0wOFQwMzo1NzoxMC4wMDAwMDBazwAAAAOp9vV1"}},"title":"Activity · amitaga/pytorch"}
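The world-size cap described in pull request #103412 can be sketched as a small helper. This is a minimal illustration of the idea, not code from the PR; the function name capped_world_size is hypothetical, and the real tests obtain the device count from torch.cuda.device_count():

```python
def capped_world_size(available_gpus: int, max_gpus: int = 8) -> int:
    """Return the world size to use for FSDP unit tests.

    Hypothetical helper illustrating the cap from pytorch#103412:
    tests such as test_fsdp_uneven build an input of shape [8, 3] and
    index it as input[self.rank], so any rank >= 8 would be out of
    range. Capping the world size at 8 avoids that corner case on
    hosts with more GPUs (ROCm or CUDA alike).
    """
    return min(available_gpus, max_gpus)


# On a 16-GPU host the tests would run with 8 ranks; smaller hosts
# keep using every available device.
print(capped_world_size(16))  # 8
print(capped_world_size(4))   # 4
```

Because each rank only ever reads input[self.rank], capping the rank count (rather than resizing the test input) keeps the test data identical across hardware configurations.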