Deployer
The Deployer tab can be used to run inference on images using various frameworks (TensorFlow, Darknet, Keras and Caffe are currently supported). Below are instructions for using the Deployer:
To run inference you need input images, which can be captured from any of the sources listed below:
- Video
- Camera (webcam, USB camera, etc. directly connected to the system)
- Stream
- ROS
- ICE
- JdeRobot Recorder Logs
Note: To access the Deployer tool, you need to run the DatasetEvaluationApp, the graphical user interface for evaluating deep learning models.
To run it, navigate to the DatasetEvaluationApp/build directory and simply type ./DatasetEvaluationApp -c /path/to/config.yml
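For reference, a minimal config file might look like the sketch below. The key names shown (dataset, weights, network config, names and inferences paths) are illustrative assumptions only, not the authoritative schema; check the sample config shipped with your DetectionSuite checkout for the exact names your build expects.

```yaml
# Hypothetical config.yml sketch -- key names are assumptions for
# illustration, adapt them to the sample config distributed with
# your DetectionSuite installation.
datasetPath: /opt/datasets/                # where input datasets live
evaluationsPath: /opt/datasets/eval        # where evaluation results are written
weightsPath: /opt/datasets/weights         # trained model weights (.pb, .weights, .h5, .caffemodel)
netCfgPath: /opt/datasets/cfg              # network configuration files
namesPath: /opt/datasets/names             # class-name files
inferencesPath: /opt/datasets/inferences   # output of Deployer inference runs
```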
To use a video, select Video from the Deployer Input Imp list and then choose the corresponding video file in the Deployer tab.
To use a camera, simply select Camera from the Deployer Input Imp list and you are good to go; a connected camera will be selected automatically.
Currently, DetectionSuite supports ROS (Robot Operating System) and ICE (Internet Communication Engine) for reading streams. Both are optional dependencies and are required only if you plan to use one of them. After selecting Stream from the Deployer Input Imp list you can choose between the following:
- To use ROS, select ROS under Inferencer Implementation Params and enter the corresponding params. Alternatively, if you have a YAML file containing the params, you can select that instead (a hedged sketch of such a file appears at the end of this section).
- Similarly, for ICE, select ICE and enter the corresponding params or a YAML file as you prefer.
Note: If you find either of the above options disabled, it probably isn't installed or DetectionSuite didn't find it.
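For the YAML-based stream params mentioned above, a file along these lines could be used for a ROS image topic. The field names and values here are purely illustrative assumptions; DetectionSuite's actual param names may differ, so treat the GUI fields or a sample file from the repository as the authoritative reference.

```yaml
# Hypothetical ROS stream params sketch -- field names are assumptions
# for illustration only, not DetectionSuite's documented schema.
Server: ROS                        # stream backend: ROS or ICE
Topic: /usb_cam/image_raw          # image topic to subscribe to
Format: RGB8                       # expected pixel format
Name: deployer_input_node          # node name used by the stream reader
```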