TensorRT conversion of UNet model #190

Open · littleGiant-28 opened this issue May 29, 2024 · 0 comments

littleGiant-28 commented May 29, 2024

I am trying to convert both the HD- and DC-trained Vton UNet models to TensorRT to explore possible performance improvements. I was able to convert them to ONNX first, but when I tried to verify the outputs on CPU, RAM usage climbed as high as 50 GB.

The tensor outputs from PyTorch inference and ONNX inference seem more or less the same, with the following difference reported by np.testing.assert_allclose():


Mismatched elements: 28172 / 98304 (28.7%)
Max absolute difference: 0.001953
Max relative difference: 38.72

 x: array([[[[ 3.0884e-01,  1.2903e-01, -3.4229e-01, ...,  5.1221e-01,
          -2.5537e-01,  5.0244e-01],
         [-3.2568e-01,  5.4248e-01,  6.1426e-01, ..., -6.7383e-02,...
 y: array([[[[ 3.0884e-01,  1.2842e-01, -3.4204e-01, ...,  5.1270e-01,
          -2.5610e-01,  5.0244e-01],
         [-3.2544e-01,  5.4297e-01,  6.1426e-01, ..., -6.6467e-02,...
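For reference, the verification was along these lines (a minimal sketch, not the exact script; `unet` and `example_inputs` are hypothetical names for the loaded PyTorch model and a dict of its sample inputs, and the `.sample` output attribute follows the diffusers UNet convention):

```python
import numpy as np
import onnxruntime as ort
import torch

# Run the exported graph on CPU (this is where RAM peaked at ~50 GB).
sess = ort.InferenceSession("unet.onnx", providers=["CPUExecutionProvider"])
onnx_out = sess.run(None, {k: v.numpy() for k, v in example_inputs.items()})[0]

# Reference output from the original PyTorch model.
with torch.no_grad():
    torch_out = unet(**example_inputs).sample.numpy()

# Loose tolerances; an fp32 export typically drifts only in the last bits.
np.testing.assert_allclose(torch_out, onnx_out, rtol=1e-3, atol=1e-3)
```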

On top of this high memory usage, I cannot convert the model to TensorRT: the main culprit is one tensor exceeding TensorRT's tensor size limit. I would assume this tensor is related to `spatial_attn_inputs`.
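For context, the offending attention-score tensor reported in the logs below has shape [2, 8, 24576, 24576], which puts it roughly 4.5x over TensorRT's per-tensor element limit:

```python
# Element count of the reported tensor vs. TensorRT's 2^31 - 1 limit.
volume = 2 * 8 * 24576 * 24576  # 9,663,676,416 elements
limit = 2**31 - 1               # 2,147,483,647
print(volume / limit)           # ~4.5
```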

Sharing the TensorRT conversion error logs here:

[05/29/2024-18:18:28] [E] Error[4]: [graphShapeAnalyzer.cpp::processCheck::587] Error Code 4: Internal Error ((Unnamed Layer* 123) [Matrix Multiply]_output: tensor volume exceeds (2^31)-1, dimensions are [2,8,24576,24576])
[05/29/2024-18:18:28] [E] [TRT] parsers/onnx/ModelImporter.cpp:773: While parsing node number 77 [MatMul -> "/down_blocks.0/attentions.0/transformer_blocks.0/attn1/MatMul_output_0"]:
[05/29/2024-18:18:28] [E] [TRT] parsers/onnx/ModelImporter.cpp:774: --- Begin node ---
[05/29/2024-18:18:28] [E] [TRT] parsers/onnx/ModelImporter.cpp:775: input: "/down_blocks.0/attentions.0/transformer_blocks.0/attn1/Mul_output_0"
input: "/down_blocks.0/attentions.0/transformer_blocks.0/attn1/Mul_1_output_0"
output: "/down_blocks.0/attentions.0/transformer_blocks.0/attn1/MatMul_output_0"
name: "/down_blocks.0/attentions.0/transformer_blocks.0/attn1/MatMul"
op_type: "MatMul"

[05/29/2024-18:18:28] [E] [TRT] parsers/onnx/ModelImporter.cpp:776: --- End node ---
[05/29/2024-18:18:28] [E] [TRT] parsers/onnx/ModelImporter.cpp:778: ERROR: parsers/onnx/ModelImporter.cpp:180 In function parseGraph:
[6] Invalid Node - /down_blocks.0/attentions.0/transformer_blocks.0/attn1/MatMul
[graphShapeAnalyzer.cpp::processCheck::587] Error Code 4: Internal Error ((Unnamed Layer* 123) [Matrix Multiply]_output: tensor volume exceeds (2^31)-1, dimensions are [2,8,24576,24576])

Do you think it would be possible to convert this to TensorRT, or are there any graph-level optimizations we could apply to make it possible? For now, PyTorch 2.0's inference does seem superior.
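As one possible starting point for graph-level work (a sketch based on my own assumptions, not a known fix), the exported graph could first be scanned for every intermediate tensor over TensorRT's element limit, so that all offending attention MatMuls are located before attempting any surgery. This assumes the export used static input shapes so ONNX shape inference can resolve the dimensions; "unet.onnx" is a placeholder path:

```python
import onnx
from onnx import shape_inference

LIMIT = 2**31 - 1  # TensorRT's per-tensor element limit

# Shape inference annotates intermediate tensors with concrete shapes.
model = shape_inference.infer_shapes(onnx.load("unet.onnx"))

for vi in model.graph.value_info:
    dims = [d.dim_value for d in vi.type.tensor_type.shape.dim]
    if dims and all(d > 0 for d in dims):  # skip dynamic/unknown dims
        volume = 1
        for d in dims:
            volume *= d
        if volume > LIMIT:
            print(f"{vi.name}: {dims} -> {volume:,} elements")
```

From there, options such as exporting at a lower resolution (shrinking the 24576-token sequence) or restructuring the attention so the full score matrix is never materialized might be worth evaluating.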
