YOLO Model Benchmark on onnxruntime-web

This is a YOLO model benchmark powered by onnxruntime-web.

Supports WebGPU and WASM (CPU) backends.

Test YOLO model inference time in the browser.

Inference times are shown in real time in a chart, along with the average time.
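Both backends are selected through the standard onnxruntime-web execution provider option. A minimal sketch of how a single timed run could look (model path, input shape, and function name are illustrative, not the exact code in App.jsx):

import * as ort from "onnxruntime-web";

// Create a session on the chosen backend ("webgpu" or "wasm") and time one run.
// WebGPU requires a browser with WebGPU enabled; otherwise fall back to "wasm".
async function benchmarkOnce(modelUrl, backend) {
  const session = await ort.InferenceSession.create(modelUrl, {
    executionProviders: [backend],
  });

  // Dummy 1x3x640x640 input; the real benchmark feeds preprocessed image data.
  const input = new ort.Tensor(
    "float32",
    new Float32Array(1 * 3 * 640 * 640),
    [1, 3, 640, 640]
  );

  const start = performance.now();
  await session.run({ [session.inputNames[0]]: input });
  return performance.now() - start; // inference time in ms, plotted in the chart
}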

Models and Performance

Model       Test Size   Params
YOLOv10-N   640         2.3M
YOLOv10-S   640         7.2M
YOLOv9-T    640         2.0M
YOLOv9-S    640         7.1M
GELAN-S2    640
YOLOv8-N    640         3.2M
YOLOv8-S    640         11.2M

Setup

git clone https://github.com/nomi30701/yolo-model-benchmark-onnxruntime-web.git
cd yolo-model-benchmark-onnxruntime-web
npm install # install dependencies

Scripts

npm run dev # start dev server 

Use another YOLO model

  1. Convert your YOLO model to ONNX format. See yolov9_2_onnx.ipynb for details; YOLOv10 models follow the same steps.
  2. Copy your YOLO model to the ./public/models folder.
  3. Add an <option> element in App.jsx and set value="YOUR_FILE_NAME", or press the "Add model" button (a sketch of how this value maps to the model file follows this list).
    ...
    <option value="YOUR_FILE_NAME">CUSTOM-MODEL</option>
    <option value="yolov10n">yolov10n-2.3M</option>
    <option value="yolov10s">yolov10s-7.2M</option>
    ...
  4. Select your model on the page.
  5. DONE!👍
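The value of each <option> is expected to match the ONNX file name (without the .onnx extension) placed in ./public/models. A minimal sketch of how the selected value could be resolved to a model URL (the helper below is hypothetical; see App.jsx for the actual wiring):

import * as ort from "onnxruntime-web";

// Hypothetical helper: map the selected <option> value to a file under
// ./public/models and create an onnxruntime-web session for it.
async function loadSelectedModel(selectedValue, backend = "wasm") {
  const modelUrl = `/models/${selectedValue}.onnx`; // e.g. /models/YOUR_FILE_NAME.onnx
  return ort.InferenceSession.create(modelUrl, {
    executionProviders: [backend],
  });
}

// Usage: const session = await loadSelectedModel("yolov10n", "webgpu");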