zzh12138 committed on
Commit a404c92
• 1 Parent(s): 3095355

Update README.md

Files changed (1)
  1. README.md +83 -0
README.md CHANGED
@@ -1,3 +1,86 @@
  ---
  license: apache-2.0
+ task_categories:
+ - table-question-answering
+ language:
+ - en
+ size_categories:
+ - n<1K
  ---
+
+ This repository contains the CRT-QA dataset, which includes question-answer pairs that require complex reasoning over tabular data. 🚀
+
+ ## About the Dataset and Paper
+
+ - **Title:** CRT-QA: A Dataset of Complex Reasoning Question Answering over Tabular Data
+ - **Conference:** EMNLP 2023
+ - **Authors:** [Zhehao Zhang](https://zzh-sjtu.github.io/zhehaozhang.github.io/), Xitao Li, Yan Gao, Jian-Guang Lou 👩‍💼👨‍💼
+ - **Affiliation:** Dartmouth College, Xi'an Jiaotong University, Microsoft Research Asia 🏢
+
+ ### Data Format
+
+ The data is stored in a JSON file keyed by the `.csv` table file that each datapoint refers to. Each datapoint contains the following fields (a loading sketch follows the field descriptions below):
+
+ ```
+ Question name, Title, step1, step2, step3, step4, Answer, Directness, Composition Type
+ ```
+
+ - `Question name`: The text of the question
+ - `Title`: The title of the table that the question refers to
+ - `step1` to `step4`: The steps describing the reasoning process and operations used to answer the question; each step contains:
+   - `type`: `Operation` or `Reasoning`
+   - `name`: The name of the specific operation or reasoning type
+   - `detail`: Additional details about the step
+ - `Answer`: The answer text
+ - `Directness`: Whether the question is `Explicit` or `Implicit`
+ - `Composition Type`: `Bridging`, `Intersection`, or `Comparison`
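
To illustrate this layout, here is a minimal loading sketch. The file name `CRT-QA.json` and the assumption that each table key maps to a list of question entries are hypothetical, so adjust them to the actual annotation file shipped in this repository.

```python
import json

# Hypothetical file name; point this at the annotation file in the repository.
with open("CRT-QA.json", encoding="utf-8") as f:
    data = json.load(f)  # assumed: dict keyed by .csv table file names

# Inspect the first table and its first annotated question.
table_name, entries = next(iter(data.items()))
entry = entries[0]  # assumed: each table maps to a list of question entries

print("Table:", table_name)
print("Question:", entry["Question name"])
print("Answer:", entry["Answer"])
print("Directness:", entry["Directness"])
print("Composition Type:", entry["Composition Type"])

# step1-step4 hold the annotated reasoning path; later steps may be empty.
for key in ("step1", "step2", "step3", "step4"):
    step = entry.get(key)
    if step:
        print(key, "->", step.get("type"), "/", step.get("name"))
```
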
+ ### Reasoning and Operations
+
+ The reasoning and operations referenced in the `step` fields come from a defined taxonomy; a tallying sketch follows the lists below:
+
+ **Operations:**
+ - Indexing
+ - Filtering
+ - Grouping
+ - Sorting
+
+ **Reasoning:**
+ - Grounding
+ - Auto-categorization
+ - Temporal Reasoning
+ - Geographical/Spatial Reasoning
+ - Aggregating
+ - Arithmetic
+ - Reasoning with Quantifiers
+ - Other Commonsense Reasoning
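
As a quick, hypothetical usage example of this taxonomy (same assumed JSON layout as the sketch above), the snippet below tallies how often each operation or reasoning label appears in the `step1`-`step4` annotations.

```python
import json
from collections import Counter

# Hypothetical file name; point this at the annotation file in the repository.
with open("CRT-QA.json", encoding="utf-8") as f:
    data = json.load(f)

# Tally (type, name) pairs across all annotated reasoning steps.
counts = Counter()
for entries in data.values():  # assumed: one list of question entries per table
    for entry in entries:
        for key in ("step1", "step2", "step3", "step4"):
            step = entry.get(key)
            if step and step.get("name"):
                counts[(step.get("type"), step["name"])] += 1

for (step_type, name), n in counts.most_common():
    print(f"{str(step_type):>10} | {name:<32} {n}")
```
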
+ ### Contact 📧
+
+ For inquiries or updates about this repository, please contact [zhehao.zhang.gr@dartmouth.edu](mailto:zhehao.zhang.gr@dartmouth.edu). 📬
+
+ ### Citation
+
+ If you use this dataset in your research, please cite the following paper:
+
+ ```
+ @inproceedings{zhang-etal-2023-crt,
+     title = "{CRT}-{QA}: A Dataset of Complex Reasoning Question Answering over Tabular Data",
+     author = "Zhang, Zhehao and
+       Li, Xitao and
+       Gao, Yan and
+       Lou, Jian-Guang",
+     editor = "Bouamor, Houda and
+       Pino, Juan and
+       Bali, Kalika",
+     booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
+     month = dec,
+     year = "2023",
+     address = "Singapore",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2023.emnlp-main.132",
+     doi = "10.18653/v1/2023.emnlp-main.132",
+     pages = "2131--2153",
+     abstract = "Large language models (LLMs) show powerful reasoning abilities on various text-based tasks. However, their reasoning capability on structured data such as tables has not been systematically explored. In this work, we first establish a comprehensive taxonomy of reasoning and operation types for tabular data analysis. Then, we construct a complex reasoning QA dataset over tabular data, named CRT-QA dataset (Complex Reasoning QA over Tabular data), with the following unique features: (1) it is the first Table QA dataset with multi-step operation and informal reasoning; (2) it contains fine-grained annotations on questions{'} directness, composition types of sub-questions, and human reasoning paths which can be used to conduct a thorough investigation on LLMs{'} reasoning ability; (3) it contains a collection of unanswerable and indeterminate questions that commonly arise in real-world situations. We further introduce an efficient and effective tool-augmented method, named ARC (Auto-exemplar-guided Reasoning with Code), to use external tools such as Pandas to solve table reasoning tasks without handcrafted demonstrations. The experiment results show that CRT-QA presents a strong challenge for baseline methods and ARC achieves the best result.",
+ }
+ ```