- Contains the C++ code for deploying the full pipeline on a robot: it takes a camera device as input and produces pixel-wise predictions as output (which depend on the problem).
- Install Bazel: link
- TensorFlow C++ as a shared library, installed via CMake (External Install)
- If you want the TensorRT backend for higher performance, you need to install it: TensorRT Download
IMPORTANT: TensorFlow and TensorRT don't need each other in C++ mode. This matters when deploying on an embedded device, such as the Jetson, because TensorFlow is slow and cumbersome to build there, and it is not necessary, since the TensorRT backend runs much faster. Therefore: THE CMAKE SCRIPTS FLAG WHICH BACKENDS ARE AVAILABLE FOR EACH NODE, AND IF ONE IS NOT INSTALLED, IT IS NOT USED.
- We use catkin tools for building the library and the apps (both ROS and standalone):
$ sudo pip install catkin_tools trollius
- Boost and yaml (if you have ROS Kinetic installed, this is not necessary):
$ sudo apt install libboost-all-dev libyaml-cpp-dev
- OpenCV 3: Link (if you have ROS Kinetic installed, this is not necessary)
- If you want to use the ROS node, you need to install ROS. We tested on Kinetic, and the CI docker runs on Kinetic as well :)
Refer to the model-freezing section of the Python training readme before starting to work:
$ cd train_py/
$ ./cnn_freeze.py -p /tmp/path/to/pretrained -l /tmp/path/to/log
If you only want to deploy a model without the dependencies of the python section, you can try the docker container we provide in the main readme.
We use catkin tools. These example apps let you check your frozen models and your TensorFlow install:
$ cd bonnet/deploy_cpp
$ catkin init
$ catkin build bonnet_standalone
The "cnn_use_pb" app takes a frozen protobuf and images and predicts the output masks.
$ ./build/bonnet_standalone/cnn_use_pb -p /tmp/path/to/pretrained -i /path/to/image -l /tmp/path/to/log/ -b trt/tf
- "cnn_use_pb" uses the frozen tensorflow model from disk and calculates the mask for each image. Finally, it saves all predictions to the log path.
- The image path can be a single image, or a list of images. For example, for all images in a folder, do "-i /path/*"
- The "-b" flag specifies the backend (tf is tensorflow, trt is TensorRT)
- The "--verbose" option outputs all TensorFlow debug messages, measures inference time, and plots the results on screen.
- For more information run with the "-h" argument
The "cnn_video_pb" app takes a frozen protobuf and a video and predicts the output masks.
$ ./build/bonnet_standalone/cnn_video_pb -p /tmp/path/to/pretrained -v /path/to/video -l /tmp/path/to/log/ -b trt/tf
- "cnn_video_pb" uses the frozen tensorflow model from disk and calculates the mask for each frame. Finally, it saves all predictions to the log path.
- The "-v" flag specifies the path to the input video file.
- The "-b" flag specifies the backend (tf is tensorflow, trt is TensorRT)
- The "--verbose" option outputs all TensorFlow debug messages, measures inference time and profiling information, and plots the results on screen.
- For more information run with the "-h" argument
The "session" app starts a TensorFlow session and reports whether it was successful (as a test).
$ ./build/bonnet_standalone/session
- "session" is mainly for checking that the TensorFlow install went well and that the GPU is visible, as it outputs its success and the visible devices. If a GPU was found, you should see something like this:
$ ./build/bonnet_standalone/session
2017-12-18 13:12:40.776389: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
name: GeForce 940MX major: 5 minor: 0 memoryClockRate(GHz): 1.2415
pciBusID: 0000:02:00.0
totalMemory: 1.96GiB freeMemory: 782.06MiB
2017-12-18 13:12:40.776407: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce 940MX, pci bus id: 0000:02:00.0, compute capability: 5.0)
Session successfully created.
The node reads from the image topic configured in the config file and publishes both the cross entropy mask (values from 0 to num_classes-1) and the color mask to the topics configured there as well.
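As a purely hypothetical sketch of what those config entries could look like (the key names and topic names below are invented for illustration; check the config file shipped with the repo for the real ones):

```yaml
# Hypothetical config sketch -- key and topic names are illustrative only.
image_topic: "/camera/image_raw"        # input: images the node subscribes to
mask_topic: "/bonnet/mask"              # output: cross entropy mask, 0..num_classes-1
color_mask_topic: "/bonnet/color_mask"  # output: color-coded mask for visualization
```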
To build, just put this project in your catkin workspace (or make a symlink to it) and build using catkin tools:
$ ln -s /path/to/bonnet/git/repo ~/catkin_ws/src/bonnet
$ cd ~/catkin_ws
$ catkin build
$ source devel/setup.bash
$ roslaunch bonnet_run bonnet_run.launch