---
language: en
tags:
- tapex
license: mit
---


# TAPEX (base-sized model) 

TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining).

## Model description

TAPEX (**Ta**ble **P**re-training via **Ex**ecution) is a conceptually simple and empirically powerful pre-training approach to empower existing models with *table reasoning* skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries.

TAPEX is based on the BART architecture, a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder.

This model is the `tapex-base` model fine-tuned on the [Tabfact](https://huggingface.co/datasets/tab_fact) dataset.
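
As an illustration of this setup, the sketch below (assuming the checkpoint can be downloaded from the Hugging Face Hub) loads the fine-tuned model and prints its BART encoder/decoder components together with the label mapping used for TabFact:

```python
# Minimal sketch: inspect the fine-tuned checkpoint (assumes Hub access).
from transformers import BartForSequenceClassification

model = BartForSequenceClassification.from_pretrained("microsoft/tapex-base-finetuned-tabfact")

print(model.config.model_type)             # "bart"
print(type(model.model.encoder).__name__)  # bidirectional (BERT-like) encoder
print(type(model.model.decoder).__name__)  # autoregressive (GPT-like) decoder
print(model.config.id2label)               # class index -> label mapping for TabFact
```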

## Intended Uses

You can use the model for table fact verification, i.e. checking whether a statement is entailed or refuted by the contents of a given table.

### How to Use

Here is how to use this model in the Transformers library:

```python
from transformers import TapexTokenizer, BartForSequenceClassification
import pandas as pd

tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base-finetuned-tabfact")
model = BartForSequenceClassification.from_pretrained("microsoft/tapex-base-finetuned-tabfact")

data = {
    "year": [1896, 1900, 1904, 2004, 2008, 2012],
    "city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)

# tapex accepts uncased input since it is pre-trained on the uncased corpus
query = "beijing hosts the olympic games in 2012"
encoding = tokenizer(table=table, query=query, return_tensors="pt")

outputs = model(**encoding)
output_id = int(outputs.logits[0].argmax(dim=0))
print(model.config.id2label[output_id])
# Refused
```
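
Beyond a single argmax prediction, you can also look at the class probabilities, or score several statements against the same table in one pass. The snippet below is a sketch that reuses the objects from the example above; the batched call assumes the tokenizer accepts matching lists of tables and queries (here the table is simply repeated for each statement):

```python
import torch

# Class probabilities for the single example above.
probs = torch.softmax(outputs.logits, dim=-1)[0]
print({model.config.id2label[i]: round(float(p), 3) for i, p in enumerate(probs)})

# Sketch: batch several statements against the same table
# (the table is repeated so that tables and queries line up one-to-one).
queries = [
    "beijing hosts the olympic games in 2012",
    "london hosts the olympic games in 2012",
]
batch = tokenizer(table=[table] * len(queries), query=queries, padding=True, return_tensors="pt")
predictions = model(**batch).logits.argmax(dim=-1)
print([model.config.id2label[int(p)] for p in predictions])
```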

### How to Eval

The evaluation script can be found [here](https://github.com/SivilTaram/transformers/tree/add_tapex_bis/examples/research_projects/tapex).

### BibTeX entry and citation info

```bibtex
@inproceedings{liu2022tapex,
    title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor},
    author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou},
    booktitle={International Conference on Learning Representations},
    year={2022},
    url={https://openreview.net/forum?id=O50443AsCP}
}
```