Simple tool to profile ONNX inference with the ONNX Runtime C++ API.
To build and install the tool with mamba on Linux and macOS:

```sh
mamba create -n onnxcppbenchmark compilers cli11 onnxruntime cmake ninja pkg-config
mamba activate onnxcppbenchmark
git clone https://github.com/ami-iit/onnx-cpp-benchmark
cd onnx-cpp-benchmark
mkdir build
cd build
cmake -GNinja -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX ..
ninja install
```
On Windows:

```bat
mamba create -n onnxcppbenchmark compilers cli11 onnxruntime cmake ninja pkg-config
mamba activate onnxcppbenchmark
git clone https://github.com/ami-iit/onnx-cpp-benchmark
cd onnx-cpp-benchmark
mkdir build
cd build
cmake -GNinja -DCMAKE_INSTALL_PREFIX=%CONDA_PREFIX%\Library ..
ninja install
```
Download a simple `.onnx` file and run the benchmark on it:

```sh
curl -L https://huggingface.co/ami-iit/mann/resolve/3a6fa8fe38d39deae540e4aca06063e9f2b53380/ergocubSN000_26j_49e.onnx -o ergocubSN000_26j_49e.onnx

# Use default options
onnx-cpp-benchmark ergocubSN000_26j_49e.onnx

# Specify custom options
onnx-cpp-benchmark ergocubSN000_26j_49e.onnx --iterations 100 --batch_size 5 --backend onnxruntimecpu
```
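Under the hood, a benchmark run like the one above amounts to loading the model into an `Ort::Session` and timing repeated `Run()` calls. The sketch below illustrates that pattern with the ONNX Runtime C++ API; it is not the tool's actual implementation, and the input/output names and the tensor shape are placeholders that must be adapted to the model.

```cpp
#include <onnxruntime_cxx_api.h>

#include <chrono>
#include <iostream>
#include <vector>

int main()
{
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "onnx-cpp-benchmark-sketch");
    Ort::SessionOptions options;
    Ort::Session session(env, "ergocubSN000_26j_49e.onnx", options);

    // Placeholder input: the name and shape here are hypothetical and
    // must match the actual input of the model being profiled.
    std::vector<int64_t> shape{1, 10};
    std::vector<float> data(10, 0.0f);
    Ort::MemoryInfo memInfo = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value input = Ort::Value::CreateTensor<float>(
        memInfo, data.data(), data.size(), shape.data(), shape.size());

    const char* inputNames[] = {"input"};
    const char* outputNames[] = {"output"};

    // Time repeated inference calls and report the average latency.
    const int iterations = 100;
    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
    {
        auto outputs = session.Run(Ort::RunOptions{nullptr},
                                   inputNames, &input, 1,
                                   outputNames, 1);
    }
    const auto end = std::chrono::steady_clock::now();
    const std::chrono::duration<double, std::milli> elapsed = end - start;
    std::cout << "Average inference time: "
              << elapsed.count() / iterations << " ms" << std::endl;
    return 0;
}
```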
Currently supported backends:

- `onnxruntimecpu`: ONNX Runtime with CPU
- `onnxruntimecuda`: ONNX Runtime with CUDA
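Conceptually, each backend name selects an ONNX Runtime execution provider when the session is created. The sketch below shows one plausible mapping; it is illustrative rather than the tool's actual code, though `AppendExecutionProvider_CUDA` is the genuine ONNX Runtime C++ call for enabling the CUDA provider.

```cpp
#include <onnxruntime_cxx_api.h>

#include <string>

// Hypothetical helper: build session options for a given backend name.
Ort::SessionOptions makeSessionOptions(const std::string& backend)
{
    Ort::SessionOptions options;
    if (backend == "onnxruntimecuda")
    {
        // Enable the CUDA execution provider on GPU 0 (requires an
        // ONNX Runtime build with CUDA support).
        OrtCUDAProviderOptions cudaOptions{};
        options.AppendExecutionProvider_CUDA(cudaOptions);
    }
    // "onnxruntimecpu" needs no extra provider: CPU is the default.
    return options;
}
```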
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.