The Quantitative 101 dataset is the combination of one generated dataset (CND) and three benchmark datasets: Numeracy-600K [1], EQUATE [2], and NumGLUE Task 3 [3]. The tasks in Quantitative 101 include Comparing Numbers (ComNum), Quantitative Prediction (QP), Quantitative Natural Language Inference (QNLI), and Quantitative Question Answering (QQA).

This document describes how the dataset for each task is split.


(1) Task: ComNum

There are two JSON files in the ComNum folder. "CND-OOR.json" is used for testing the out-of-range (OOR) phenomenon. "CND-IR.json" (in-range) can be separated into training and test sets with the following code:

import pandas as pd

# Load the in-range data (adjust the path to match your copy)
new_df = pd.read_json("CND-IR.json")

# Hold out 80% for training; fixing random_state makes the split reproducible
train_size = 0.8
train_dataset = new_df.sample(frac=train_size, random_state=200)
test_dataset = new_df.drop(train_dataset.index).reset_index(drop=True)
train_dataset = train_dataset.reset_index(drop=True)
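
The sizes of the resulting sets can be checked quickly:

print(len(train_dataset), len(test_dataset))  # roughly an 80/20 split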


(2) Task: QP

In the QP folder, we have already separated Numeracy-600K into training, development, and test sets. Note that the original Numeracy-600K release [1] did not provide such splits.
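
For example, assuming the splits are stored as JSON files named "QP-train.json", "QP-dev.json", and "QP-test.json" (hypothetical names; use the actual file names in the QP folder), they can be loaded with pandas:

import pandas as pd

# Hypothetical file names; replace with the actual files in the QP folder
train_df = pd.read_json("QP/QP-train.json")
dev_df = pd.read_json("QP/QP-dev.json")
test_df = pd.read_json("QP/QP-test.json")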


(3) Task: QNLI

EQUATE has five subsets collected from different sources: RTE-QUANT, AWP-NLI, NEWSNLI, REDDITNLI, and Stress Test.

For Stress Test, which contains 7,500 instances, we follow the splitting method in NumGLUE Task 7 to separate it into training, development, and test sets. All sets are in the "QNLI-Stress Test" folder. 

Because each of the other subsets contains fewer than 1,000 instances, we perform 10-fold cross-validation in the experiments. Please use the following code to create the folds:

from sklearn.model_selection import KFold

# shuffle=True is required when random_state is set
kf = KFold(n_splits=10, shuffle=True, random_state=200)
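
A minimal sketch of iterating over the folds, continuing from the snippet above and assuming a subset has been loaded into a pandas DataFrame (the file name "RTE-QUANT.json" below is hypothetical):

import pandas as pd

subset_df = pd.read_json("RTE-QUANT.json")  # hypothetical file name

# kf is the KFold object defined above
for train_idx, test_idx in kf.split(subset_df):
    train_fold = subset_df.iloc[train_idx].reset_index(drop=True)
    test_fold = subset_df.iloc[test_idx].reset_index(drop=True)
    # train on train_fold and evaluate on test_fold here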


(4) Task: QQA

We follow [3] to separate the dataset into training, development, and test sets. 



References:

[1] Chen, Chung-Chi, et al. "Numeracy-600K: Learning numeracy for detecting exaggerated information in market comments." Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019.
[2] Ravichander, Abhilasha, et al. "EQUATE: A Benchmark Evaluation Framework for Quantitative Reasoning in Natural Language Inference." Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL). 2019.
[3] Mishra, Swaroop, et al. "NumGLUE: A Suite of Fundamental yet Challenging Mathematical Reasoning Tasks." Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. 2022.