This is a YOLO model benchmark, powered by onnxruntime-web.
It supports the WebGPU and WASM (CPU) backends.
It measures YOLO model inference time in the browser.
Inference times are shown in real time on a chart, along with the average time.
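For reference, a single timed inference with onnxruntime-web looks roughly like the sketch below. The model path, the input name (`images`), and the 1×3×640×640 input shape are illustrative assumptions; the benchmark's actual logic lives in `App.jsx`. Repeating such runs gives the per-inference times that are plotted and averaged.

```javascript
// Minimal timing sketch with onnxruntime-web (not the app's actual code).
// Assumes the model is served from ./public/models and its input is named "images".
import * as ort from "onnxruntime-web"; // some versions need "onnxruntime-web/webgpu" for WebGPU

async function timeInference(device = "webgpu") {
  const session = await ort.InferenceSession.create("./models/yolov10n.onnx", {
    executionProviders: [device], // "webgpu" or "wasm" (CPU)
  });

  // Dummy input matching the 640x640 test size in the table below.
  const input = new ort.Tensor(
    "float32",
    new Float32Array(1 * 3 * 640 * 640),
    [1, 3, 640, 640]
  );

  const start = performance.now();
  await session.run({ images: input });
  return performance.now() - start; // inference time in ms
}
```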
| Model | Test Size | Params |
| --- | --- | --- |
| YOLOv10-N | 640 | 2.3M |
| YOLOv10-S | 640 | 7.2M |
| YOLOv9-T | 640 | 2.0M |
| YOLOv9-S | 640 | 7.1M |
| GELAN-S2 | 640 | |
| YOLOv8-N | 640 | 3.2M |
| YOLOv8-S | 640 | 11.2M |
```bash
git clone https://github.com/nomi30701/yolo-model-benchmark-onnxruntime-web.git
cd yolo-model-benchmark-onnxruntime-web
npm install # install dependencies
npm run dev # start dev server
```
- Convert your YOLO model to ONNX format. Read more in `yolov9_2_onnx.ipynb`; YOLOv10 models are converted the same way.
- Copy your YOLO model to the `./public/models` folder.
- Add an `<option>` HTML element in `App.jsx` and change `value="YOUR_FILE_NAME"`, or press the "Add model" button (see the sketch after this list for how the selected value might map to a model file).

  ```html
  ...
  <option value="YOUR_FILE_NAME">CUSTOM-MODEL</option>
  <option value="yolov10n">yolov10n-2.3M</option>
  <option value="yolov10s">yolov10s-7.2M</option>
  ...
  ```

- Select your model on the page.
- DONE! 👍
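As an illustration only (the exact wiring in `App.jsx` may differ), the selected `<option>` value could be mapped to a model file as sketched below; `loadSelectedModel` and the `device` parameter are hypothetical names introduced here.

```javascript
// Hypothetical sketch: create a session for the model matching the selected <option> value.
// Assumes the ONNX files sit in ./public/models (served as the static root) and that
// the option's value equals the file name without the .onnx extension.
import * as ort from "onnxruntime-web";

async function loadSelectedModel(selectElement, device = "webgpu") {
  const modelName = selectElement.value;         // e.g. "YOUR_FILE_NAME" or "yolov10n"
  const modelUrl = `./models/${modelName}.onnx`; // resolved against ./public
  return ort.InferenceSession.create(modelUrl, {
    executionProviders: [device],                // "webgpu" or "wasm"
  });
}
```

For example, `loadSelectedModel(document.querySelector("select"), "wasm")` would load the chosen model on the CPU backend.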