
v0.2.0

felixdittrich92 released this 13 May 10:40

What's Changed

  • Added 8-Bit quantized models
  • Added a Dockerfile and CI for CPU/GPU usage

8-Bit quantized models

8-bit quantized variants of all models were added (except the FAST models, which are already reparameterized).

from onnxtr.models import ocr_predictor, detection_predictor, recognition_predictor

predictor = ocr_predictor(det_arch="db_resnet50", reco_arch="crnn_vgg16_bn", load_in_8_bit=True)

det_predictor = detection_predictor("db_resnet50", load_in_8_bit=True)
reco_predictor = recognition_predictor("parseq", load_in_8_bit=True)
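For context, the idea behind the 8-bit variants can be sketched in plain NumPy: symmetric per-tensor quantization maps float32 weights to int8 with a single scale factor, shrinking storage 4x at the cost of a bounded rounding error. This is a minimal illustration of the general technique, not OnnxTR's actual quantization pipeline (which operates on the ONNX graphs).

```python
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor 8-bit quantization: floats -> int8 via one scale."""
    scale = np.abs(weights).max() / 127.0  # largest magnitude maps to +/-127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from int8 values and the scale."""
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)

# int8 storage is exactly 4x smaller than float32
assert q.nbytes * 4 == w.nbytes
# round-trip error is bounded by half a quantization step
assert np.abs(dequantize(q, scale) - w).max() <= scale / 2 + 1e-6
```

The smaller weights and int8 arithmetic are what drive the faster per-page times in the benchmarks below.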
  • CPU benchmarks:
| Library | FUNSD (199 pages) | CORD (900 pages) |
| --- | --- | --- |
| docTR (CPU) - v0.8.1 | ~1.29s / page | ~0.60s / page |
| OnnxTR (CPU) - v0.1.2 | ~0.57s / page | ~0.25s / page |
| OnnxTR (CPU) 8-bit - v0.1.2 | ~0.38s / page | ~0.14s / page |
| EasyOCR (CPU) - v1.7.1 | ~1.96s / page | ~1.75s / page |
| PyTesseract (CPU) - v0.3.10 | ~0.50s / page | ~0.52s / page |
| Surya (line) (CPU) - v0.4.4 | ~48.76s / page | ~35.49s / page |
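As a quick sanity check on the table, the per-page times imply the 8-bit OnnxTR models run roughly 3-4x faster than docTR on CPU:

```python
# Seconds per page, taken from the CPU benchmark table above
doctr = {"FUNSD": 1.29, "CORD": 0.60}
onnxtr_8bit = {"FUNSD": 0.38, "CORD": 0.14}

# Speedup of the 8-bit OnnxTR models relative to docTR
speedup = {k: round(doctr[k] / onnxtr_8bit[k], 1) for k in doctr}
print(speedup)  # {'FUNSD': 3.4, 'CORD': 4.3}
```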