---
language:
- en
tags:
- table-question-answering
license: apache-2.0
datasets:
- sqa
metrics:
- bleu

---

# TABLE QUESTION ANSWERING

## TAPAS model
TAPAS learns an inner representation of the English language used in tables and their associated texts. This representation can then be used to extract features for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the table's contents.

## Model description
- It is a BERT-based model specifically designed (and pre-trained) for answering questions about tabular data.
- TAPAS uses relative position embeddings and has 7 token types that encode tabular structure.
- It is pre-trained with the masked language modeling (MLM) objective on a large dataset comprising millions of tables from English Wikipedia and their corresponding texts.
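
As a rough illustration of the structural encoding, the sketch below uses `TapasTokenizer` from the `transformers` library to encode a small table together with a question. The checkpoint name `google/tapas-base-finetuned-sqa` is only a placeholder assumption, not necessarily the checkpoint in this repository. The returned `token_type_ids` carry one dimension per token type (segment, column, row, previous answer, column rank, inverse column rank, numeric relation).

```python
# Minimal sketch: encoding a table + question with TapasTokenizer.
# The checkpoint name is a placeholder assumption.
import pandas as pd
from transformers import TapasTokenizer

tokenizer = TapasTokenizer.from_pretrained("google/tapas-base-finetuned-sqa")

# TAPAS expects the table as a pandas DataFrame of string cells.
table = pd.DataFrame(
    {"City": ["Paris", "London"], "Population": ["2141000", "8982000"]}
)

encoding = tokenizer(
    table=table,
    queries=["Which city has the larger population?"],
    padding="max_length",
    truncation=True,
    return_tensors="pt",
)

# Shape (batch, sequence_length, 7): one dimension per token type.
print(encoding["token_type_ids"].shape)
```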

The model has been fine-tuned on several datasets:

1. SQA (Sequential Question Answering, by Microsoft)
2. WTQ (WikiTableQuestions, by Stanford University)
3. WikiSQL (by Salesforce)
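
A minimal inference sketch for the SQA-style sequential setting is shown below, assuming a checkpoint compatible with the `table-question-answering` pipeline; the `google/tapas-base-finetuned-sqa` identifier is again only a placeholder.

```python
# Sketch: sequential table question answering with the transformers pipeline.
# The model identifier is an assumption; substitute this repository's checkpoint.
from transformers import pipeline

tqa = pipeline("table-question-answering", model="google/tapas-base-finetuned-sqa")

# The pipeline accepts the table as a dict of columns with string cells.
table = {
    "Actor": ["Brad Pitt", "Leonardo DiCaprio", "George Clooney"],
    "Number of movies": ["87", "53", "69"],
}

# SQA-style: later questions may refer back to earlier ones.
questions = [
    "How many movies has George Clooney played in?",
    "And Brad Pitt?",
]

answers = tqa(table=table, query=questions, sequential=True)
for question, answer in zip(questions, answers):
    print(question, "->", answer["answer"])
```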

## Limitations

The model cannot deal with large inputs: like other BERT-based models, TAPAS accepts only a limited input sequence length, so large tables (or long question histories) must be truncated or pruned before encoding and may therefore not be represented faithfully.
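
One possible mitigation, sketched here under the assumption that the checkpoint ships a standard `TapasTokenizer`, is to let the tokenizer drop trailing table rows until the encoding fits the maximum sequence length.

```python
# Sketch: dropping rows that do not fit the model's maximum sequence length.
# "google/tapas-base-finetuned-sqa" is a placeholder checkpoint name.
import pandas as pd
from transformers import TapasTokenizer

tokenizer = TapasTokenizer.from_pretrained("google/tapas-base-finetuned-sqa")

# A deliberately oversized single-column table.
big_table = pd.DataFrame({"Value": [str(i) for i in range(1000)]})

encoding = tokenizer(
    table=big_table,
    queries=["What is the first value?"],
    truncation="drop_rows_to_fit",  # discard trailing rows so the encoding fits
    padding="max_length",
    return_tensors="pt",
)

print(encoding["input_ids"].shape)  # capped at the tokenizer's maximum length
```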