---
license: mit
task_categories:
  - table-question-answering
  - image-to-text
tags:
  - table-extraction
  - benchmark
  - fintabnet
  - document-ai
  - docld
pretty_name: DocLD FinTabNet Benchmark
size_categories:
  - n<1K
---

# DocLD FinTabNet Benchmark Results

Benchmark results for DocLD table extraction on the FinTabNet dataset.

## Results Summary

| Metric | Value |
|---|---|
| Mean Accuracy | 82.1% |
| Median | 83.2% |
| P25 / P75 | 73.3% / 97.4% |
| Min / Max | 22.7% / 100.0% |
| Scored Samples | 500 |
| Total Samples | 500 |

## Methodology

- **Dataset:** FinTabNet_OTSL — 500 samples from the test split
- **Extraction:** DocLD vision-based table extraction
- **Scoring:** Needleman-Wunsch hierarchical alignment (the same metric used by RD-TableBench)
- **Output:** HTML tables with `rowspan`/`colspan` for merged cells
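To illustrate the scoring step, here is a minimal sketch of Needleman-Wunsch global alignment over sequences of table cells. This shows the core alignment idea only; the actual RD-TableBench scorer applies it hierarchically (aligning rows first, then cells within aligned rows), and the exact match/gap weights here are illustrative assumptions, not the benchmark's settings.

```python
def needleman_wunsch(a, b, match=1.0, mismatch=0.0, gap=-0.5):
    """Global alignment score between two sequences of cell strings.

    match/mismatch/gap values are placeholders, not the scorer's weights.
    """
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap          # prediction cell with no gold partner
    for j in range(1, m + 1):
        dp[0][j] = j * gap          # gold cell missing from the prediction
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,   # align the two cells
                           dp[i - 1][j] + gap,       # skip a predicted cell
                           dp[i][j - 1] + gap)       # skip a gold cell
    return dp[n][m]

def row_accuracy(pred_row, gold_row):
    """Normalize the alignment score to [0, 1] by the gold row length."""
    if not gold_row:
        return 1.0 if not pred_row else 0.0
    return max(needleman_wunsch(pred_row, gold_row), 0.0) / len(gold_row)
```

A perfectly extracted row scores 1.0; missing or spurious cells pay the gap penalty, so partial extractions degrade smoothly rather than failing outright.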

## Comparison

| Provider | FinTabNet Accuracy |
|---|---|
| DocLD | 82.1% |
| GTE (IBM) | ~78% |
| TATR (Microsoft) | ~65% |

## Files

- `results.json` — Full benchmark results with per-sample scores
- `predictions/` — HTML predictions for each sample
- `charts/` — Visualization PNGs
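The summary statistics above can be reproduced from the per-sample scores. A minimal sketch using the standard library (the record layout inside `results.json` is an assumption; adapt the field names to the actual file):

```python
import json
import statistics

def summarize(scores):
    """Compute the figures reported in the Results Summary table."""
    scores = sorted(scores)
    q = statistics.quantiles(scores, n=4)  # [P25, median, P75]
    return {
        "mean": statistics.mean(scores),
        "median": statistics.median(scores),
        "p25": q[0],
        "p75": q[2],
        "min": scores[0],
        "max": scores[-1],
        "n": len(scores),
    }

# Hypothetical usage — assumes results.json holds a list of
# {"sample_id": ..., "score": ...} records; the real schema may differ:
# with open("results.json") as f:
#     scores = [r["score"] for r in json.load(f)]
# print(summarize(scores))
```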

## Links