Description
PyTorch 1.3.0
CUDA 10.1
cuDNN 7.6.3
TensorRT 6.0.1.5
Ubuntu 18.04
The .pt model was successfully converted to ONNX. The following error occurred when converting the ONNX model to TensorRT:
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:604] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:81] The total number of bytes read was 635755888
Completed parsing of ONNX file
Assertion failed: tensors.count(input_name)
[TensorRT] ERROR: Network must have at least one output
Traceback (most recent call last):
File "/data/internet/Repository/export_model_torch2trt.py", line 129, in
onnx2trt("/data/internet/Repository/model/model_1022.onnx", "/data/internet/Repository/model/model_1022.trt", True)
File "/data/internet/Repository/export_model_torch2trt.py", line 58, in onnx2trt
f.write(engine.serialize())
AttributeError: 'NoneType' object has no attribute 'serialize'
How can I solve it?
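For reference, below is a minimal sketch of what an error-checking onnx2trt helper could look like (the function name and model paths follow the traceback above; everything else is an assumption, written against the TensorRT 6/7 Python API). Checking the return values of parser.parse() and build_cuda_engine() surfaces the actual parser errors instead of ending in the NoneType AttributeError:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def onnx2trt(onnx_path, trt_path, fp16=False):
    # EXPLICIT_BATCH is required for ONNX models on TRT >= 6
    explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(explicit_batch) as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 1 << 30
        if fp16:
            builder.fp16_mode = True
        with open(onnx_path, "rb") as f:
            # parse() returns False on failure; print the parser errors
            # rather than continuing with an incomplete network.
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        engine = builder.build_cuda_engine(network)
        if engine is None:
            print("Engine build failed")
            return None
        with open(trt_path, "wb") as f:
            f.write(engine.serialize())
        return engine
```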
Activity
CasiaFan commented on Nov 5, 2019
Use netron to visualize your ONNX model. Some ops may introduce unexpected branches into your network when converting the PyTorch model to ONNX format. Try onnx-simplifier to simplify your model structure first.
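For example, a short sketch of running onnx-simplifier on the exported model (the input path follows the traceback in the issue; the output filename is made up):

```python
import onnx
from onnxsim import simplify

# Load the exported ONNX model and run the simplifier over it.
model = onnx.load("/data/internet/Repository/model/model_1022.onnx")
model_simp, check = simplify(model)
assert check, "Simplified ONNX model could not be validated"

# Save the simplified model, then feed this file to the TRT parser instead.
onnx.save(model_simp, "/data/internet/Repository/model/model_1022_simplified.onnx")
```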
kevinch-nv commented on Nov 10, 2020
Looking at your error log, the parsing failed at this assertion:
Assertion failed: tensors.count(input_name)
which should be fixed in later TRT versions (>= 7.0). Can you try parsing your model with the latest TRT version?
kevinch-nv commented on Dec 1, 2020
Closing due to inactivity - if you are still having issues with the latest version of onnx-tensorrt, feel free to open a new issue.