
Failed to build engine from output/decoder_iter.onnx #550

Open
yygg678 opened this issue Jun 7, 2020 · 5 comments
Labels
bug Something isn't working

Comments

@yygg678

yygg678 commented Jun 7, 2020

Pytorch/SpeechSynthesis/Tacotron2/notebooks/trtis
Describe the bug
I followed all steps of the README.md in Pytorch/SpeechSynthesis/Tacotron2/notebooks/trtis and used the pretrained models (Tacotron2: tacotron2_1032590_6000_amp, Waveglow: waveglow_1076430_14000_amp).
During the step python trt/export_onnx2trt.py --encoder output/encoder.onnx --decoder output/decoder_iter.onnx --postnet output/postnet.onnx --waveglow output/waveglow.onnx -o output/ --fp16, some warnings and errors occur:
####################################
[TensorRT] WARNING: onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[TensorRT] ERROR: (Unnamed Layer* 170) [Slice]: slice size must be positive, size = [0,0,0]
[TensorRT] ERROR: (Unnamed Layer* 171) [Slice]: slice size must be positive, size = [0,0,0]
[TensorRT] WARNING: onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[TensorRT] WARNING: onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[TensorRT] WARNING: Calling isShapeTensor before the entire network is constructed may result in an inaccurate result.
[TensorRT] WARNING: Calling isShapeTensor before the entire network is constructed may result in an inaccurate result.
[TensorRT] WARNING: Half2 support requested on hardware without native FP16 support, performance will be negatively affected.
Building Decoder ...
[TensorRT] WARNING: onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
(the warning above is repeated 33 times)
[TensorRT] WARNING: Calling isShapeTensor before the entire network is constructed may result in an inaccurate result.
(the warning above is repeated 8 times)
[TensorRT] ERROR: Layer: Where_24's output can not be used as shape tensor.
[TensorRT] ERROR: Network validation failed.
Failed to build engine from output/decoder_iter.onnx

######################
However, the earlier step python exports/export_tacotron2_onnx.py --tacotron2 ./checkpoints/nvidia_tacotron2pyt_fp16_20190427 -o output/ --fp16 produced only warnings:
##########################

exports/export_tacotron2_onnx.py:62: TracerWarning: Converting a tensor to a NumPy array might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
input_lengths_cpu = input_lengths_cpu.cpu().numpy() # TODO
exports/export_tacotron2_onnx.py:64: UserWarning: pack_padded_sequence has been called with a Python list of sequence lengths. The tracer cannot track the data flow of Python values, and it will treat them as constants, likely rendering the trace incorrect for any other combination of lengths.
x, input_lengths_cpu, batch_first=True)
/home/yangyg/anaconda3/lib/python3.5/site-packages/torch/onnx/symbolic_opset9.py:1577: UserWarning: Exporting a model to ONNX with a batch_size other than 1, with a variable length with LSTM can cause an error when running the ONNX model with a different batch size. Make sure to save the model with a batch size of 1, or define the initial states (h0/c0) as inputs of the model.
"or define the initial states (h0/c0) as inputs of the model. ")
Running Tacotron2 Encoder
Running Tacotron2 Decoder
Stopping after 165 decoder steps
Running Tacotron2 PostNet
#######################

What should I do?
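(For readability, the raw TensorRT log above was trimmed by collapsing consecutive duplicate lines. A minimal stdlib-only sketch of that collapsing step, a hypothetical helper and not part of the repo:)

```python
from itertools import groupby


def collapse_log(lines):
    """Collapse consecutive duplicate log lines into 'line (xN)' entries."""
    out = []
    for line, run in groupby(lines):
        n = sum(1 for _ in run)  # length of this run of identical lines
        out.append(line if n == 1 else f"{line} (x{n})")
    return out


log = [
    "[TensorRT] WARNING: ... INT64 ... Attempting to cast down to INT32.",
    "[TensorRT] WARNING: ... INT64 ... Attempting to cast down to INT32.",
    "[TensorRT] ERROR: Network validation failed.",
]
print(collapse_log(log))
```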

@yygg678 yygg678 added the bug Something isn't working label Jun 7, 2020
@ghost ghost self-assigned this Jun 10, 2020
@wyp19960713

@yyggithub @grzegorzkarchnv, hello!
I met the same error when I was converting Tacotron2 to TRT. Have you solved this problem?

@yygg678
Author

yygg678 commented Aug 27, 2020

> @yyggithub @grzegorzkarchnv, hello!
> I met the same error when I was converting Tacotron2 to TRT. Have you solved this problem?

Hi, I have solved it. You should check your CUDA version; mine is 10.0.

@wyp19960713

> @yyggithub @grzegorzkarchnv, hello!
> I met the same error when I was converting Tacotron2 to TRT. Have you solved this problem?
>
> Hi, I have solved it. You should check your CUDA version; mine is 10.0.

My CUDA version is 10.0.130 and my cuDNN version is 7.6.4, but I still have this error.
Did you build the Docker image with bash scripts/docker/build.sh and run the script run_latency_tests_trt.sh to convert to TRT?
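(Since this hinges on comparing versions like 10.0.130 against 10.0, note that version strings should be compared numerically, not lexically. A minimal stdlib-only sketch; `version_tuple` is a hypothetical helper, not part of the repo:)

```python
def version_tuple(v: str) -> tuple:
    """Parse a dotted version string into a tuple of ints for numeric comparison.

    Comparing the raw strings is wrong, e.g. "10.0" < "9.0" lexically.
    """
    return tuple(int(part) for part in v.split("."))


# 10.0.130 is still a 10.0.x release, i.e. older than 10.2
assert version_tuple("10.0.130") < version_tuple("10.2")
assert version_tuple("10.0") < version_tuple("9.0") is False or True  # strings would get this wrong
```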

@ghost

ghost commented Oct 23, 2020

hi @wyp19960713 if you're still facing this issue, please update your local repo - we have recently updated Tacotron2.

@charlie-codes-choi

I have this exact issue too, using the most recent repo with CUDA 10.2, cuDNN 7.6.5, and TRT 7.0.0.11.
