Is there a tool that can inspect a DLA standalone loadable file? I want to get information about its inner nodes/layers.
I tried to load the DLA loadable with trtexec, but it fails as follows:
[09/04/2024-13:51:41] [I] Engine loaded in 0.0123033 sec.
[09/04/2024-13:51:46] [I] [TRT] Loaded engine size: 0 MiB
[09/04/2024-13:51:47] [E] Error[9]: Cannot deserialize serialized engine built with EngineCapability::kDLA_STANDALONE, use cuDLA APIs instead.
[09/04/2024-13:51:47] [E] Error[4]: [runtime.cpp::deserializeCudaEngine::65] Error Code 4: Internal Error (Engine deserialization failed.)
[09/04/2024-13:51:47] [E] Engine deserialization failed
[09/04/2024-13:51:47] [E] Got invalid engine!
[09/04/2024-13:51:47] [E] Inference set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8510] # trtexec --verbose --loadEngine=dla_fusion.dla
I know I can load the file using https://github.com/NVIDIA-AI-IOT/cuDLA-samples/blob/main/src/cudla_context_standalone.cpp#L137, but I want more information about the DLA model than that.
With the cuDLA API (https://docs.nvidia.com/cuda/archive/11.7.1/cudla-api/index.html#structcudlaTask_1f61768f306610f76fe18ca2ef0bafd78) I can only get input/output tensor info (shape, type).
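For reference, here is a minimal sketch of what the cuDLA API does expose: loading the standalone loadable and querying its I/O tensor descriptors via `cudlaModuleGetAttributes`. This is untested and assumes a Jetson with DLA hardware and the cuDLA runtime available; the file path and error handling are illustrative only. As far as I know, the loadable format itself is opaque to cuDLA, so this still surfaces only I/O metadata, not inner layers.

```cpp
#include <cudla.h>

#include <cstdint>
#include <cstdio>
#include <vector>

// Sketch: load a DLA standalone loadable (built with
// EngineCapability::kDLA_STANDALONE) and print the metadata the cuDLA
// runtime exposes. Note this is only I/O tensor descriptors -- cuDLA
// does NOT expose the inner node/layer graph of the loadable.
int main(int argc, char** argv) {
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s <loadable.dla>\n", argv[0]);
        return 1;
    }

    // Read the whole loadable into memory.
    FILE* f = std::fopen(argv[1], "rb");
    if (!f) { std::perror("fopen"); return 1; }
    std::fseek(f, 0, SEEK_END);
    long fileSize = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);
    std::vector<uint8_t> blob(static_cast<size_t>(fileSize));
    std::fread(blob.data(), 1, blob.size(), f);
    std::fclose(f);

    // CUDLA_STANDALONE mode, matching how the loadable was built.
    cudlaDevHandle dev = nullptr;
    if (cudlaCreateDevice(0, &dev, CUDLA_STANDALONE) != cudlaSuccess) {
        std::fprintf(stderr, "cudlaCreateDevice failed\n");
        return 1;
    }

    cudlaModule mod = nullptr;
    if (cudlaModuleLoadFromMemory(dev, blob.data(), blob.size(), &mod, 0)
        != cudlaSuccess) {
        std::fprintf(stderr, "cudlaModuleLoadFromMemory failed\n");
        cudlaDestroyDevice(dev);
        return 1;
    }

    // Query tensor counts, then the descriptor arrays.
    cudlaModuleAttribute attr;
    cudlaModuleGetAttributes(mod, CUDLA_NUM_INPUT_TENSORS, &attr);
    uint32_t numIn = attr.numInputTensors;
    cudlaModuleGetAttributes(mod, CUDLA_NUM_OUTPUT_TENSORS, &attr);
    uint32_t numOut = attr.numOutputTensors;

    std::vector<cudlaModuleTensorDescriptor> inDesc(numIn), outDesc(numOut);
    attr.inputTensorDesc = inDesc.data();
    cudlaModuleGetAttributes(mod, CUDLA_INPUT_TENSOR_DESCRIPTORS, &attr);
    attr.outputTensorDesc = outDesc.data();
    cudlaModuleGetAttributes(mod, CUDLA_OUTPUT_TENSOR_DESCRIPTORS, &attr);

    for (uint32_t i = 0; i < numIn; ++i)
        std::printf("input  %u: %s size=%llu\n", i, inDesc[i].name,
                    (unsigned long long)inDesc[i].size);
    for (uint32_t i = 0; i < numOut; ++i)
        std::printf("output %u: %s size=%llu\n", i, outDesc[i].name,
                    (unsigned long long)outDesc[i].size);

    cudlaModuleUnload(mod, 0);
    cudlaDestroyDevice(dev);
    return 0;
}
```

To see per-layer info you would have to inspect the network before it is compiled into a loadable (e.g. build it as a regular TensorRT DLA engine and use the engine inspector), since once it is a standalone loadable only the above metadata is queryable.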
https://developer.nvidia.com/blog/deploying-yolov5-on-nvidia-jetson-orin-with-cudla-quantization-aware-training-to-inference/
https://zhuanlan.zhihu.com/p/656172954
The same issue is reported in NVIDIA/TensorRT#4106.
@lynettez can you give some advice?
@dusty-nv @rbonghi @sujitbiswas @wkelongws
Has the issue been solved?
No, @11061995.