
Deployer


The Deployer tab can be used to run inferences on images using various frameworks (TensorFlow, Darknet, Keras and Caffe are currently supported). Below are instructions for using Deployer:

Input Images

In order to run inferences, you need input images, which can be captured from any of the sources listed below:

  • Video
  • Camera (webcam, USB camera, etc. directly connected to the system)
  • Stream
    • ROS
    • ICE
  • JdeRobot Recorder Logs

Note: To access the Deployer tool, you need to run the DatasetEvaluationApp, which is the graphical user interface for evaluating deep learning models. To run it, navigate to DatasetEvaluationApp/build and simply type ./DatasetEvaluationApp -c /path/to/config.yml
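For reference, the global config.yml might look like the sketch below. The key names and paths are assumptions for illustration only; check the sample configuration file shipped with the repository for the exact schema your version expects.

    # Hypothetical config.yml sketch -- key names are assumptions,
    # consult the sample config shipped with DetectionSuite for the real schema.
    datasetPath:     /opt/datasets                 # where input datasets live
    evaluationsPath: /opt/datasets/evaluations     # where evaluation results are written
    weightsPath:     /opt/datasets/weights         # trained model weights
    netCfgPath:      /opt/datasets/cfg             # network configuration files
    namesPath:       /opt/datasets/names           # class-name files
    inferencesPath:  /opt/datasets/inferences      # where Deployer writes inference output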

Video

To use a video, select Video from the Deployer Input Imp list and then choose the corresponding video file in the Deployer tab.

Camera

To use a camera, simply select Camera from the Deployer Input Imp list and you are good to go; a connected camera will be selected automatically.

Stream

Currently, DetectionSuite supports ROS (Robot Operating System) and ICE (Internet Communication Engine) for reading streams. Both are optional dependencies and are required only if you plan to use one of them. After selecting Stream from the Deployer Input Imp list, you can choose between the following:

  • ROS

    To use ROS, select ROS under Inferencer Implementation Params and enter the corresponding params. Alternatively, if you have a YAML file containing the params, you can select that instead (see the sketch after this list).

  • ICE

    Similarly, for ICE just select it and enter the corresponding params or a YAML file, as you prefer.
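For either stream type, the params can also be supplied as a small YAML file instead of being typed into the dialog. A minimal sketch for a ROS source is shown below; the key names and topic are hypothetical placeholders, so adapt them to whatever fields your installation's param dialog actually asks for (typically the topic to subscribe to for ROS, or a proxy/endpoint string for ICE).

    # Hypothetical params file for a ROS image stream -- key names are
    # illustrative only; match them to the fields requested by the dialog.
    Topic: /usb_cam/image_raw     # ROS topic publishing the camera images
    # For ICE, the equivalent would be an endpoint/proxy string, e.g.:
    # Proxy: "cameraA:default -h localhost -p 9999"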

Note: If either of the above options is disabled, it probably isn't installed or DetectionSuite didn't find it.
