Latency and FPS measurement #3
Conversation
I am running the sample video with the .264 extension and getting this error. Please help me resolve this issue.
sudo ./deepstream-pose-estimation-app sample_qHD.h264 sohail22
Now playing: sample_qHD.h264
0:00:01.348653041 7957 0x563249c1e070 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/home/convsys/deepstream_sdk_v5.0.1_x86_64/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation-master/resnet18.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [FullDims Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x224x224 min: 1x3x224x224 opt: 1x3x224x224 Max: 1x3x224x224
1 OUTPUT kFLOAT part_affinity_fields 56x56x42 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT heatmap 56x56x18 min: 0 opt: 0 Max: 0
3 OUTPUT kFLOAT maxpool_heatmap 56x56x18 min: 0 opt: 0 Max: 0
0:00:01.348730282 7957 0x563249c1e070 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /home/convsys/deepstream_sdk_v5.0.1_x86_64/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation-master/resnet18.engine
0:00:01.351122025 7957 0x563249c1e070 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:deepstream_pose_estimation_config.txt sucessfully
Running...
libnvosd (603):(ERROR) : Out of bound radius
0:00:03.876006191 7957 0x563249c60a30 WARN nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:03.876013208 7957 0x563249c60a30 WARN nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop: error: streaming stopped, reason error (-5)
ERROR from element nv-onscreendisplay: Unable to draw circles
Error details: gstnvdsosd.c(558): gst_nvds_osd_transform_ip (): /GstPipeline:deepstream-tensorrt-openpose-pipeline/GstNvDsOsd:nv-onscreendisplay
Returned, stopping playback
Deleting pipeline |
https://forums.developer.nvidia.com/t/nvosd-circle-crash/157368
|
Thanks for the reply.
Can you please suggest a solution for this?
|
Hello, have you solved this problem? I hit the same error. Can you give me some advice? Thanks! |
Yes, I have resolved that problem. I just performed the 3rd step of the guide and then remade the app.
|
Do you mean the "Download the TRTPose model, convert it to ONNX using this export utility, and set its location in the DeepStream configuration file" step? |
*Step 3: Replace the OSD library in the DeepStream install directory.*
Perform this step and remake the app; this will resolve the problem.
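For anyone who wants a stopgap before swapping in the patched OSD library: the crash above ("Out of bound radius" / "Unable to draw circles") is triggered by a circle whose centre or radius puts part of it outside the frame. Below is a minimal sketch of clamping each circle before it reaches nvdsosd, assuming the NvOSD_CircleParams fields (xc, yc, radius) from the DeepStream 5.x headers; the clamp_circle helper is an illustrative name, not part of the upstream app.

```c
/* Sketch only: keep a circle fully inside the frame before it is handed to
 * nvdsosd.  Assumes the NvOSD_CircleParams fields (xc, yc, radius) from the
 * DeepStream 5.x headers; clamp_circle is an illustrative helper, not part
 * of the upstream app. */
#include <glib.h>
#include "nvdsmeta.h"   /* NvDsDisplayMeta / NvOSD_CircleParams */

static void
clamp_circle (NvOSD_CircleParams *c, guint frame_width, guint frame_height)
{
  /* Keep the centre inside the frame first (avoids unsigned wrap-around). */
  if (c->xc >= frame_width)  c->xc = frame_width  - 1;
  if (c->yc >= frame_height) c->yc = frame_height - 1;

  /* Largest radius that still fits around (xc, yc) within the frame. */
  guint max_r = MIN (MIN (c->xc, frame_width  - 1 - c->xc),
                     MIN (c->yc, frame_height - 1 - c->yc));
  if (c->radius > max_r)
    c->radius = max_r;
}
```

Calling something like this on every circle in the display meta before it is attached keeps the drawing inside the frame; replacing the OSD library as described above makes the workaround unnecessary.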
|
Here is a link with the detailed steps:
https://developer.nvidia.com/blog/creating-a-human-pose-estimation-application-with-deepstream-sdk/
|
Thanks. Now I get this output:
0:00:00.813945366 24705 0x5599042de4f0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation/human_pose.onnx_b1_gpu0_fp16.engine
0:00:00.813998830 24705 0x5599042de4f0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation/human_pose.onnx_b1_gpu0_fp16.engine
How can I fix it? Thanks. |
Hi,
I've added latency and FPS calculation to the code, in case anyone is interested. Nothing fancy; I just added the same code sections from the sample apps, without any switch to turn it on or off.
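For anyone who wants to see roughly what that looks like without checking out the branch, here is a minimal sketch modelled on the probes in the DeepStream reference apps. It assumes the latency helpers declared in nvds_latency_meta.h, which only report when the NVDS_ENABLE_LATENCY_MEASUREMENT=1 environment variable is set, as in the sample apps; the perf_probe function and the MAX_BATCH_SIZE constant are illustrative names, not the code in this PR.

```c
/* Sketch only: FPS and per-source latency probe, modelled on the DeepStream
 * sample apps.  Attach it to the sink pad of the nvdsosd element (or any
 * element near the end of the pipeline).  Names are illustrative and not
 * taken from this PR. */
#include <gst/gst.h>
#include <glib.h>
#include "nvds_latency_meta.h"   /* nvds_measure_buffer_latency() */

#define MAX_BATCH_SIZE 32        /* illustrative upper bound on sources per batch */

static guint64 frame_count = 0;
static gint64 fps_start = 0;

static GstPadProbeReturn
perf_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);

  /* FPS: count buffers and print the rate roughly once per second. */
  if (fps_start == 0)
    fps_start = g_get_monotonic_time ();
  frame_count++;
  gint64 now = g_get_monotonic_time ();
  gdouble elapsed = (now - fps_start) / (gdouble) G_USEC_PER_SEC;
  if (elapsed >= 1.0) {
    g_print ("FPS: %.2f\n", frame_count / elapsed);
    frame_count = 0;
    fps_start = now;
  }

  /* Latency: only reported when NVDS_ENABLE_LATENCY_MEASUREMENT=1 is set. */
  if (nvds_enable_latency_measurement) {
    NvDsFrameLatencyInfo latency_info[MAX_BATCH_SIZE];
    guint n = nvds_measure_buffer_latency (buf, latency_info);
    for (guint i = 0; i < n; i++)
      g_print ("source %u frame %u latency %.2f ms\n",
               latency_info[i].source_id,
               latency_info[i].frame_num,
               latency_info[i].latency);
  }
  return GST_PAD_PROBE_OK;
}

/* Wire it up where the pipeline is built, for example:
 *   GstPad *sink_pad = gst_element_get_static_pad (nvosd, "sink");
 *   gst_pad_add_probe (sink_pad, GST_PAD_PROBE_TYPE_BUFFER, perf_probe, NULL, NULL);
 *   gst_object_unref (sink_pad);
 */
```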