This repository contains the benchmarks for the paper *Characterizing the Deployment of Deep Neural Networks on Commercial Edge Devices*. We are updating the repo to include EdgeTPU, TensorRT, and Movidius implementations as well.
You can find the official paper at https://ramyadhadidi.github.io/files/iiswc19-edge.pdf.
This work was done at HPArch@GaTech.
| Model | PyTorch | TensorFlow | DarkNet | Caffe |
|---|---|---|---|---|
| ResNet-18 | ✔️ | ✔️ | - | - |
| ResNet-50 | ✔️ | ✔️ | ✔️ | ✔️ |
| ResNet-101 | ✔️ | ✔️ | ✔️ | ✔️ |
| Xception | ✔️ | ✔️ | - | ✔️ |
| MobileNet-v2 | ✔️ | ✔️ | - | ✔️ |
| Inception-v4 | ✔️ | ✔️ | - | ✔️ |
| AlexNet | ✔️ | ✔️ | ✔️ | ✔️ |
| VGG-11 (224x224) | ✔️ | - | - | - |
| VGG-11 (32x32) | ✔️ | - | - | - |
| VGG-16 | ✔️ | ✔️ | ✔️ | ✔️ |
| VGG-19 | ✔️ | ✔️ | - | ✔️ |
| CifarNet (32x32) | ✔️ | - | - | - |
| SSD MobileNet-v1 | ✔️ | - | - | - |
| YOLOv3 | ✔️ | - | ✔️ | - |
| Tiny YOLO | ✔️ | ✔️ | ✔️ | - |
| C3D | ✔️ | - | - | - |
For platform-specific frameworks, it is hard to create our own models from scratch, so we use the models each vendor provides. We share links to the vendors' model documentation.
| TfLite | TensorRT | Movidius | EdgeTPU |
|---|---|---|---|
- Python >= 3.5
- CUDA 10.0
- Python packages (versions we use):

  ```
  numpy==1.16.4
  # PyTorch
  torch==1.1.0
  torchvision==0.2.2
  # TensorFlow
  tensorflow==1.13.1
  Keras==2.2.4
  ```
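The exact versions matter for reproducing the paper's numbers. As a sanity check, a small helper like the one below (our own illustration, not part of the repo) can compare installed versions against these pins:

```python
def version_tuple(v):
    """Convert a dotted version string such as '1.16.4' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def meets_pin(installed, pinned):
    """Return True if the installed version is at least the pinned version."""
    return version_tuple(installed) >= version_tuple(pinned)

# Example: check numpy against the pin above (assumes numpy is importable).
# import numpy
# print(meets_pin(numpy.__version__, "1.16.4"))
```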
We follow this tutorial to compile the PyTorch library from source on Raspberry Pi.
We use the default JetPack image to set up both of our dev boards (Nvidia TX2 and Nvidia Nano). Nvidia provides its pre-built PyTorch wheel here, with detailed instructions on how to install PyTorch on Nvidia dev boards.
We use the pre-built wheel from here for the TensorFlow library on Raspberry Pi.
As with PyTorch, Nvidia provides detailed instructions here on how to install TensorFlow.
We compile the DarkNet framework from source. You can refer to the website for more compilation details.

For DarkNet GPU support, we change the Makefile flags as shown below:

```
GPU=1
ARCH=-gencode arch=compute_62,code=[sm_62,compute_62]
```
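The `ARCH` value above targets the TX2, whose GPU has compute capability 6.2. The gencode flags must match the target GPU; for example, on a Jetson Nano (compute capability 5.3) the corresponding line would presumably be:

```make
ARCH=-gencode arch=compute_53,code=[sm_53,compute_53]
```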
We compile the Caffe framework from source following this tutorial. In order to compile pycaffe, we change the `PYTHON_LIB` and `PYTHON_INCLUDE` flags in the Makefile accordingly.
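For reference, these flags live in Caffe's `Makefile.config`; for a Python 3.5 setup they would look roughly like the fragment below (the paths are examples and depend on the system):

```make
# Example Makefile.config entries for pycaffe with Python 3.5
PYTHON_INCLUDE := /usr/include/python3.5m \
                  /usr/lib/python3/dist-packages/numpy/core/include
PYTHON_LIB := /usr/lib
```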
```shell
cd pytorch
python execute.py --model [model name] --iteration [number of iterations] --cpu [use CPU if set]
```
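The flags map onto a standard `argparse` interface; below is a minimal sketch of how such a driver script might parse them (flag names taken from the usage line above, the rest is our assumption, not the repo's actual code):

```python
import argparse

def parse_args(argv=None):
    """Parse the benchmark driver's command-line flags."""
    parser = argparse.ArgumentParser(description="Run a DNN inference benchmark.")
    parser.add_argument("--model", required=True, help="model name, e.g. resnet50")
    parser.add_argument("--iteration", type=int, default=100,
                        help="number of inference iterations")
    parser.add_argument("--cpu", action="store_true",
                        help="run on CPU instead of GPU")
    return parser.parse_args(argv)

args = parse_args(["--model", "resnet50", "--iteration", "10", "--cpu"])
print(args.model, args.iteration, args.cpu)  # resnet50 10 True
```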
```shell
cd tensorflow
# GPU
NVIDIA_VISIBLE_DEVICES=0 python execute.py --model [model name] --iteration [number of iterations]
# CPU
NVIDIA_VISIBLE_DEVICES= python execute.py --model [model name] --iteration [number of iterations]
```
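Setting `NVIDIA_VISIBLE_DEVICES` to an empty string hides all GPUs, which is how the CPU run above works. A small illustrative helper (ours, not from the repo) shows how such a value can be interpreted:

```python
import os

def visible_gpus(env=None):
    """Return the list of GPU ids exposed via NVIDIA_VISIBLE_DEVICES.

    An unset variable is represented here as None (all GPUs visible);
    an empty string means no GPUs are visible.
    """
    env = os.environ if env is None else env
    if "NVIDIA_VISIBLE_DEVICES" not in env:
        return None
    value = env["NVIDIA_VISIBLE_DEVICES"]
    return [d for d in value.split(",") if d]

print(visible_gpus({"NVIDIA_VISIBLE_DEVICES": "0"}))  # ['0']
print(visible_gpus({"NVIDIA_VISIBLE_DEVICES": ""}))   # []
```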
We use the pre-existing model configurations in the DarkNet code base to execute models.
```shell
./darknet classifier predict [base label data] [model config] [model weights] [inference data]
```

You can look up more details here.
Models in the Caffe framework are defined in `prototxt` files.
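As an illustration, a minimal `prototxt` layer definition looks like this (a generic example, not one of the paper's model files):

```
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 64
    kernel_size: 7
    stride: 2
  }
}
```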
```shell
python execute.py --model [model name] --iteration [number of iterations] --cpu [use CPU if set]
```
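Across all frameworks, each run produces per-iteration latencies; a small helper like the one below (our own sketch, not the repo's reporting code) can aggregate them into commonly reported statistics:

```python
def summarize_latencies(times_ms):
    """Compute mean, median, and worst-case latency from per-iteration timings (ms)."""
    times = sorted(times_ms)
    n = len(times)
    mid = n // 2
    median = times[mid] if n % 2 else (times[mid - 1] + times[mid]) / 2
    return {
        "mean_ms": sum(times) / n,
        "median_ms": median,
        "max_ms": times[-1],
    }

print(summarize_latencies([10.0, 12.0, 11.0, 50.0]))
```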