This repository has been archived by the owner on Nov 28, 2022. It is now read-only.
Using gpu-rest-engine, I created a service that could detect, classify, or detect and then classify. The first step is to run detection with SSD using a single class (other than background), then take crops from the bounding boxes and run them through a GoogLeNet classifier.
Would I be able to do the same with TRT Inference Server, or would I need to use https://github.com/NVIDIA/tensorrt-laboratory?
Thanks! The gpu-rest-engine has worked very well, but I want to upgrade to TRT 6 and am having issues.
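For anyone reading along, the two-stage flow described above can be sketched roughly like this. `detect_ssd` and `classify_googlenet` are hypothetical stand-ins for the actual model calls (in the real service these would be inference requests against the SSD and GoogLeNet networks); only the cropping and chaining logic is the point here:

```python
import numpy as np

def detect_ssd(image):
    # Hypothetical stand-in for the SSD detector. Returns a list of
    # (xmin, ymin, xmax, ymax, score) boxes in pixel coordinates for
    # the single foreground class.
    h, w = image.shape[:2]
    return [(w // 4, h // 4, 3 * w // 4, 3 * h // 4, 0.9)]

def classify_googlenet(crop):
    # Hypothetical stand-in for the GoogLeNet classifier. Returns a
    # (label, confidence) pair for one cropped region.
    return ("example_label", 0.8)

def detect_then_classify(image, score_threshold=0.5):
    """Two-stage pipeline: detect with SSD, crop each box above the
    threshold, then classify each crop."""
    results = []
    for xmin, ymin, xmax, ymax, score in detect_ssd(image):
        if score < score_threshold:
            continue
        crop = image[ymin:ymax, xmin:xmax]  # crop the detected region
        label, conf = classify_googlenet(crop)
        results.append({
            "box": (xmin, ymin, xmax, ymax),
            "det_score": score,
            "label": label,
            "cls_score": conf,
        })
    return results

image = np.zeros((480, 640, 3), dtype=np.uint8)
print(detect_then_classify(image))
```

With Triton/TRT Inference Server, this kind of model chaining (run detection, post-process on the client, then send crops back for classification) typically has to be orchestrated by the client or an intermediate service, since each inference request targets a single model.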