DUE Benchmark (@due-benchmark)

A benchmark consisting of both existing and reformulated datasets, designed to measure the end-to-end capabilities of document understanding systems in real-world scenarios.

Pinned

  1. baselines (Public)

    Code for the baselines from the NeurIPS 2021 paper "DUE: End-to-End Document Understanding Benchmark."

    Python · 36 stars · 4 forks

  2. du-schema (Public)

    A JSON Schema format for storing dataset details, processed document contents, and document annotations in the document understanding domain (an illustrative sketch follows this list).

    14 stars · 2 forks

  3. evaluator (Public)

    An evaluator covering all of the metrics required by the tasks within the DUE Benchmark (a from-scratch sketch of one such metric, ANLS, follows the repository list below).

    Python · 7 stars
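To make the schema's role concrete, here is a minimal Python sketch that validates a toy document record against a hand-written JSON Schema. Every field name below ("name", "split", "annotations", and so on) is an illustrative assumption, not the actual du-schema definition, which lives in the du-schema repository itself.

```python
# A minimal, hypothetical sketch of validating a document-understanding
# record against a JSON Schema, in the spirit of du-schema. The field
# names here are illustrative assumptions, NOT the real du-schema.
import json
from jsonschema import validate  # pip install jsonschema

# Hand-written toy schema (a stand-in for the real du-schema files).
DOCUMENT_SCHEMA = {
    "type": "object",
    "required": ["name", "split", "annotations"],
    "properties": {
        "name": {"type": "string"},                 # document identifier
        "split": {"enum": ["train", "dev", "test"]},
        "annotations": {                            # key/value task targets
            "type": "array",
            "items": {
                "type": "object",
                "required": ["key", "values"],
                "properties": {
                    "key": {"type": "string"},
                    "values": {"type": "array", "items": {"type": "string"}},
                },
            },
        },
    },
}

# A toy record that should conform to the schema above.
record = {
    "name": "invoice_0001",
    "split": "train",
    "annotations": [{"key": "total_amount", "values": ["$1,250.00"]}],
}

validate(instance=record, schema=DOCUMENT_SCHEMA)  # raises on mismatch
print(json.dumps(record, indent=2))
```

Keeping annotations as an explicit, schema-checked key/values list is what lets one format carry datasets, processed contents, and annotations across otherwise heterogeneous document understanding tasks.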

Repositories

Showing 3 of 3 repositories
  • du-schema (Public): 14 stars · 2 forks · 1 open issue · 3 open pull requests · Updated Nov 5, 2024
  • baselines (Public): Python · MIT license · 36 stars · 4 forks · 10 open issues · 0 open pull requests · Updated Mar 2, 2023
  • evaluator (Public): Python · 7 stars · 0 forks · 0 open issues · 1 open pull request · Updated Apr 5, 2022
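As an illustration of the metric family such an evaluator covers, here is a from-scratch Python sketch of ANLS (Average Normalized Levenshtein Similarity), a standard metric in document VQA benchmarks. This is an independent sketch of the published metric, not the evaluator's own API; the exact metric set and interfaces the evaluator implements should be taken from its repository.

```python
# A from-scratch sketch of ANLS (Average Normalized Levenshtein
# Similarity), a standard document-VQA metric. This is NOT the
# evaluator's API, only an illustration of the metric family.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def anls(predictions: list[str], references: list[list[str]],
         threshold: float = 0.5) -> float:
    """Average, over questions, of the best normalized similarity
    against any reference answer; similarities below `threshold`
    score 0, as in the published ANLS definition."""
    total = 0.0
    for pred, refs in zip(predictions, references):
        best = 0.0
        for ref in refs:
            denom = max(len(pred), len(ref)) or 1
            sim = 1.0 - levenshtein(pred.lower(), ref.lower()) / denom
            best = max(best, sim)
        total += best if best >= threshold else 0.0
    return total / len(predictions) if predictions else 0.0

print(anls(["$1,250.00"], [["$1,250.00", "1250"]]))  # -> 1.0
```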

People

This organization has no public members.

Top languages

Python
