bash data/model/build_dla_standalone_loadable.sh error #34

Open
mayulin0206 opened this issue Apr 22, 2024 · 7 comments

Assignees: lynettez
Labels: question (Further information is requested), triaged

Comments

@mayulin0206

In buildDLAStandalone mode, I ran the command below and hit the following problem:

# Build INT8 and FP16 loadable from ONNX in this project
bash data/model/build_dla_standalone_loadable.sh

[04/22/2024-19:51:27] [E] Error[3]: [builderConfig.cpp::setFlag::65] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builderConfig.cpp::setFlag::65, condition: builderFlag != BuilderFlag::kPREFER_PRECISION_CONSTRAINTS || !flags[BuilderFlag::kOBEY_PRECISION_CONSTRAINTS]. kPREFER_PRECISION_CONSTRAINTS cannot be set if kOBEY_PRECISION_CONSTRAINTS is set.
)
[04/22/2024-19:51:27] [E] Error[2]: [nvmRegionOptimizer.cpp::forceToUseNvmIO::175] Error Code 2: Internal Error (Assertion std::all_of(a->consumers.begin(), a->consumers.end(), [](Node* n) { return isDLA(n->backend); }) failed. )
[04/22/2024-19:51:27] [E] Error[2]: [builder.cpp::buildSerializedNetwork::636] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
[04/22/2024-19:51:27] [E] Engine could not be created from network
[04/22/2024-19:51:27] [E] Building engine failed
[04/22/2024-19:51:27] [E] Failed to create engine from model or file.
[04/22/2024-19:51:27] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8401] # /usr/src/tensorrt/bin/trtexec --minShapes=images:1x3x672x672 --maxShapes=images:1x3x672x672 --optShapes=images:1x3x672x672 --shapes=images:1x3x672x672 --onnx=data/model/yolov5_trimmed_qat.onnx --useDLACore=0 --buildDLAStandalone --saveEngine=data/loadable/yolov5.int8.int8hwc4in.fp16chw16out.standalone.bin --inputIOFormats=int8:dla_hwc4 --outputIOFormats=fp16:chw16 --int8 --fp16 --calib=data/model/qat2ptq.cache --precisionConstraints=obey --layerPrecisions=/model.24/m.0/Conv:fp16,/model.24/m.1/Conv:fp16,/model.24/m.2/Conv:fp16
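
For reference, the first error is TensorRT's parameter check rejecting a mutually exclusive pair of builder flags. A minimal sketch (illustrative only, not code from this repo) that triggers the same Error Code 3 through the TensorRT C++ API:

```cpp
#include <NvInfer.h>
#include <iostream>

// Minimal logger required by createInferBuilder.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kERROR)
            std::cerr << msg << std::endl;
    }
};

int main()
{
    Logger logger;
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(logger);
    nvinfer1::IBuilderConfig* config = builder->createBuilderConfig();

    config->setFlag(nvinfer1::BuilderFlag::kOBEY_PRECISION_CONSTRAINTS);
    // This second call trips the same check as in the log above:
    // kPREFER_PRECISION_CONSTRAINTS cannot be set once
    // kOBEY_PRECISION_CONSTRAINTS is set.
    config->setFlag(nvinfer1::BuilderFlag::kPREFER_PRECISION_CONSTRAINTS);

    delete config;
    delete builder;
    return 0;
}
```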

@ichbing1

Experiencing the exact same issue.
Applied the manual patch to TensorRT 8.4.1.5 on JetPack 5.0.2.

@mayulin0206
Author

> Experiencing the exact same issue. Applied the manual patch to TensorRT 8.4.1.5 on JetPack 5.0.2.

Do you mean the steps below?
cp data/trtexec-dla-standalone-trtv8.5.patch /usr/src/tensorrt/
cd /usr/src/tensorrt/
git apply trtexec-dla-standalone-trtv8.5.patch
cd samples/trtexec
sudo make

I have run the commands above, but the problem still exists.

@ichbing1

> I have run the commands above, but the problem still exists.

Sorry, I was reporting that I have the same problem, even after applying the trtexec patch.

@mayulin0206
Author

> I have run the commands above, but the problem still exists.

> Sorry, I was reporting that I have the same problem, even after applying the trtexec patch.

I also have a few more questions about DLA, all under DLA INT8 mode:

  1. Is the default tensor format for computation kDLA_HWC4?
  2. Since the tensor format for computation on my GPU is kLINEAR, is a format conversion necessary under DLA INT8 mode?
  3. If the default tensor format for computation under DLA INT8 mode is kDLA_HWC4, and some layers in the model fall back to the GPU, will there be an automatic format conversion for those layers, i.e. will it automatically convert to kLINEAR?

@Kafka3

Kafka3 commented May 28, 2024

Check the model/ directory and make sure you have the ONNX files.

@lynettez
Collaborator

lynettez commented Sep 2, 2024

Sorry for the late reply. I checked the source code of trtexec in the 8.4 branch; you may delete this line and recompile trtexec.

The error says, "kPREFER_PRECISION_CONSTRAINTS cannot be set if kOBEY_PRECISION_CONSTRAINTS is set."
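
The intent of the change is sketched below (not the exact trtexec source; the helper name is hypothetical): only set kPREFER_PRECISION_CONSTRAINTS when kOBEY_PRECISION_CONSTRAINTS was not requested, since the two flags are mutually exclusive.

```cpp
#include <NvInfer.h>

// Hypothetical helper sketching the fix: kOBEY and kPREFER are
// mutually exclusive, so only set kPREFER when kOBEY is absent.
void setPreferConstraintsIfAllowed(nvinfer1::IBuilderConfig& config)
{
    if (!config.getFlag(nvinfer1::BuilderFlag::kOBEY_PRECISION_CONSTRAINTS))
    {
        config.setFlag(nvinfer1::BuilderFlag::kPREFER_PRECISION_CONSTRAINTS);
    }
}
```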

lynettez self-assigned this Sep 2, 2024
lynettez added question (Further information is requested) and triaged labels Sep 2, 2024
@lynettez
Collaborator

lynettez commented Sep 2, 2024

> Is the default tensor format for computation kDLA_HWC4?

https://github.com/NVIDIA-AI-IOT/cuDLA-samples/blob/main/README.md#notes lists all DLA-supported formats. It is recommended to use kDLA_HWC4 if the first layer is a convolution layer in INT8 mode.

> Since the tensor format for computation on my GPU is kLINEAR, is a format conversion necessary under DLA INT8 mode?

If you prefer not to perform a format conversion, you can set the IOFormats to kDLA_LINEAR, but this may result in some performance loss.

> If the default tensor format for computation under DLA INT8 mode is kDLA_HWC4, and some layers in the model fall back to the GPU, will there be an automatic format conversion for those layers, i.e. will it automatically convert to kLINEAR?

In this project, GPU fallback is not allowed; the cuDLA API is only used to execute a DLA loadable, which won't include any GPU fallback layers.
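
If you build through the TensorRT C++ API rather than trtexec, the same I/O choice looks roughly like the sketch below (mirroring the --inputIOFormats=int8:dla_hwc4 and --outputIOFormats=fp16:chw16 flags above; the single-input/single-output assumption is mine):

```cpp
#include <NvInfer.h>
#include <cstdint>

// Sketch: pin the network I/O to DLA-friendly formats so no reformat
// layer is inserted at the boundaries. Assumes one input, one output.
void setDlaIOFormats(nvinfer1::INetworkDefinition& network)
{
    nvinfer1::ITensor* input = network.getInput(0);
    input->setType(nvinfer1::DataType::kINT8);
    input->setAllowedFormats(
        1U << static_cast<uint32_t>(nvinfer1::TensorFormat::kDLA_HWC4));

    nvinfer1::ITensor* output = network.getOutput(0);
    output->setType(nvinfer1::DataType::kHALF);
    output->setAllowedFormats(
        1U << static_cast<uint32_t>(nvinfer1::TensorFormat::kCHW16));
}
```

To skip the conversion entirely you would use nvinfer1::TensorFormat::kDLA_LINEAR instead, at the performance cost noted above.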
