---
license: mit
---


# TAPEX (base-sized model) 

TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://openreview.net/forum?id=O50443AsCP) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining).

## Model description

TAPEX (**Ta**ble **P**re-training via **Ex**ecution) is a conceptually simple and empirically powerful pre-training approach to empower existing models with *table reasoning* skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries.

TAPEX is based on the BART architecture: a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder.
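
Since the BART encoder consumes a flat token sequence rather than a structured table, the `TapexTokenizer` linearizes the table together with the query before encoding. The snippet below is a small sketch for inspecting that flattened input; it reuses the `microsoft/tapex-base` checkpoint shown in the usage example further down.

```python
from transformers import TapexTokenizer
import pandas as pd

tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base")

table = pd.DataFrame.from_dict({
    "year": [1896, 1900],
    "city": ["athens", "paris"],
})

# The tokenizer flattens the table and prepends the query,
# producing a single sequence for the BART encoder.
encoding = tokenizer(table=table, query="select city where year = 1900")
print(tokenizer.decode(encoding["input_ids"]))
```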

## Intended Uses

You can use the raw model for simulating neural SQL execution, i.e., employing TAPEX to execute a SQL query on a given table. However, the model is mostly meant to be fine-tuned on a supervised dataset. Currently, TAPEX can be fine-tuned to tackle table question answering and table fact verification tasks. See the [model hub](https://huggingface.co/models?search=tapex) for fine-tuned versions on a task that interests you.

### How to Use

Here is how to use this model in transformers:

```python
from transformers import TapexTokenizer, BartForConditionalGeneration
import pandas as pd

tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base")
model = BartForConditionalGeneration.from_pretrained("microsoft/tapex-base")

data = {
    "year": [1896, 1900, 1904, 2004, 2008, 2012],
    "city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)

# TAPEX accepts uncased input since it is pre-trained on an uncased corpus
query = "select year where city = beijing"
encoding = tokenizer(table=table, query=query, return_tensors="pt")

outputs = model.generate(**encoding)

print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# ['2008']
```
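
Fine-tuned checkpoints are used in the same way, except that the query is a natural-language question rather than a SQL statement. A minimal sketch, assuming the fine-tuned table question answering checkpoint `microsoft/tapex-base-finetuned-wtq` from the model hub:

```python
from transformers import TapexTokenizer, BartForConditionalGeneration
import pandas as pd

# Checkpoint fine-tuned on WikiTableQuestions (table question answering)
tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base-finetuned-wtq")
model = BartForConditionalGeneration.from_pretrained("microsoft/tapex-base-finetuned-wtq")

table = pd.DataFrame.from_dict({
    "year": [1896, 1900, 1904, 2004, 2008, 2012],
    "city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
})

# Natural-language question instead of a SQL query (still uncased)
query = "in which year did beijing host the olympic games?"
encoding = tokenizer(table=table, query=query, return_tensors="pt")

outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```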

### How to Fine-tune

Please find the fine-tuning script [here](https://github.com/SivilTaram/transformers/tree/add_tapex_bis/examples/research_projects/tapex).
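
For a rough idea of what a single fine-tuning step for table question answering could look like, here is a minimal sketch. It assumes that passing `answer` to the tokenizer also returns `labels` for the seq2seq loss; refer to the script linked above for the actual data loading, training loop, and evaluation.

```python
import pandas as pd
import torch
from transformers import TapexTokenizer, BartForConditionalGeneration

tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base")
model = BartForConditionalGeneration.from_pretrained("microsoft/tapex-base")

# One toy training example: (table, question, answer)
table = pd.DataFrame.from_dict({
    "year": [1896, 1900, 1904, 2004, 2008, 2012],
    "city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
})
question = "in which year did beijing host the olympic games?"
answer = "2008"

# Assumption: supplying `answer` makes the tokenizer return `labels`
# alongside `input_ids`, as in the TAPEX fine-tuning examples.
encoding = tokenizer(table=table, query=question, answer=answer, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()

outputs = model(**encoding)  # BART computes the cross-entropy loss from `labels`
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```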

### BibTeX entry and citation info

```bibtex
@inproceedings{
    liu2022tapex,
    title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor},
    author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou},
    booktitle={International Conference on Learning Representations},
    year={2022},
    url={https://openreview.net/forum?id=O50443AsCP}
}
```