---
license: mit
datasets:
  - bsmock/pubtables-1m
tags:
  - table structure recognition
  - table extraction
---

# Model Card for TATR-v1.1-Pub

This repo contains the model weights for TATR (Table Transformer) v1.1 trained on the PubTables-1M dataset, using the training details described in the paper "Aligning benchmark datasets for table structure recognition".

These model weights are intended to be used with the Microsoft implementation of Table Transformer (TATR).

This model (v1.1) was trained with additional image cropping compared to v1.0 and works best on tightly cropped table images (roughly 5 pixels of padding or less around the table). It was also trained for more epochs and, as a result, outperforms the original model on PubTables-1M.

Evaluation metrics in the paper were computed with the PubTables-1M v1.1 dataset, which tightly crops the table images in the test and validation splits. Table images in PubTables-1M v1.0, on the other hand, have ~30 pixels of padding in all three splits (train, test, and val).
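As an illustration of the cropping requirement, below is a minimal sketch of one way to crop a page image down to a table region with about 5 pixels of padding before inference. The function name, the source of the bounding box, and the default padding value are illustrative assumptions, not part of this repo or the TATR codebase.

```python
from PIL import Image


def crop_table(image_path, table_bbox, padding=5):
    """Crop a page image to a table region with a small amount of padding.

    table_bbox is an (xmin, ymin, xmax, ymax) box in pixel coordinates,
    e.g. produced by a separate table detection step. The 5-pixel default
    reflects the tight cropping this model was trained on.
    """
    image = Image.open(image_path).convert("RGB")
    xmin, ymin, xmax, ymax = table_bbox
    # Expand the box by `padding` pixels on each side, clamped to the image bounds.
    box = (
        max(0, xmin - padding),
        max(0, ymin - padding),
        min(image.width, xmax + padding),
        min(image.height, ymax + padding),
    )
    return image.crop(box)
```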

Model weights that can be loaded into the Hugging Face implementation of TATR are coming soon.
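For reference, the sketch below shows how the Hugging Face implementation of TATR is typically used, based on the existing microsoft/table-transformer-structure-recognition checkpoint (the v1.0 weights). It is a hedged example only: this repo's v1.1 weights are not yet published in that format, and the input file name and score threshold are illustrative.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

# Illustrative checkpoint: the v1.0 structure-recognition weights on the Hub.
# Swap in this repo's v1.1 weights once a converted checkpoint is available.
checkpoint = "microsoft/table-transformer-structure-recognition"

image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = TableTransformerForObjectDetection.from_pretrained(checkpoint)

image = Image.open("tightly_cropped_table.png").convert("RGB")  # hypothetical input
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw outputs into labeled structure detections (rows, columns, spanning cells, ...).
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(
    outputs, threshold=0.6, target_sizes=target_sizes
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```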

## Model Details

### Model Description

- Developed by: Brandon Smock and Rohith Pesala, while at Microsoft
- License: MIT
- Finetuned from model: DETR ResNet-18

## Model Sources

Please see the following for more details: