---
license: mit
datasets:
- bsmock/pubtables-1m
tags:
- table structure recognition
- table extraction
---

# Model Card for TATR-v1.1-Pub

This repo contains the model weights for TATR (Table Transformer) v1.1 trained on the PubTables-1M dataset, using the training details in the paper: ["Aligning benchmark datasets for table structure recognition"](https://arxiv.org/abs/2303.00716).

These model weights are intended to be used with [the Microsoft implementation of Table Transformer (TATR)](https://github.com/microsoft/table-transformer).
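
If you just want to verify that the checkpoint downloads and deserializes correctly before wiring it into that repo's training or inference scripts, a minimal sketch like the one below works. Note that the repo id and checkpoint filename here are assumptions, not confirmed names; check this repo's "Files and versions" tab for the actual file.

```python
import torch
from huggingface_hub import hf_hub_download

# Repo id and filename are assumptions -- confirm the actual checkpoint name
# in the "Files and versions" tab of this repo.
checkpoint_path = hf_hub_download(
    repo_id="bsmock/TATR-v1.1-Pub",
    filename="TATR-v1.1-Pub-msft.pth",
)

# The checkpoint is a PyTorch file produced by the training code in
# microsoft/table-transformer; inspect its contents before passing the path
# to that repo's scripts.
checkpoint = torch.load(checkpoint_path, map_location="cpu")
print(type(checkpoint))
```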

This model (v1.1) was trained with additional image cropping compared to [v1.0](https://huggingface.co/bsmock/tatr-pubtables1m-v1.0) and works best on tightly cropped table images (5 pixels of padding or less around the table).
It was also trained for more epochs and, as a result, outperforms the original model on PubTables-1M.

Evaluation metrics in the paper were computed with the PubTables-1M v1.1 dataset, which tightly crops the table images in the test and validation splits.
Table images in PubTables-1M v1.0, on the other hand, have ~30 pixels of padding in all three splits (train, test, and val).
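
If your table images come with looser padding (for example, crops produced by an upstream table detection step), a small preprocessing pass can bring them close to the tight crops this checkpoint expects. The sketch below is illustrative only and is not part of the official pipeline; the bounding box is assumed to come from some upstream detector.

```python
from PIL import Image

def crop_table(image_path, table_bbox, padding=5):
    """Crop a page image to a table region with roughly `padding` pixels of slack.

    `table_bbox` is an assumed (xmin, ymin, xmax, ymax) box from an upstream
    table detector; the default of 5 pixels matches the tight crops this
    checkpoint was trained on.
    """
    image = Image.open(image_path).convert("RGB")
    xmin, ymin, xmax, ymax = table_bbox
    return image.crop((
        max(0, xmin - padding),
        max(0, ymin - padding),
        min(image.width, xmax + padding),
        min(image.height, ymax + padding),
    ))
```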

Model weights that can be loaded into the Hugging Face implementation of TATR are coming soon.

## Model Details

### Model Description

- **Developed by:** Brandon Smock and Rohith Pesala, while at Microsoft
- **License:** MIT
- **Finetuned from model:** DETR ResNet-18

### Model Sources

Please see the following for more details:

- **Repository:** [https://github.com/microsoft/table-transformer](https://github.com/microsoft/table-transformer)
- **Paper:** ["Aligning benchmark datasets for table structure recognition"](https://arxiv.org/abs/2303.00716)