## Overview

This dataset is a collection of NLI benchmarks constructed as described in the paper
[An Empirical Study on Model-agnostic Debiasing Strategies for Robust Natural Language Inference](https://aclanthology.org/2020.conll-1.48/),
published at CoNLL 2020. The original data is available in the authors' [GitHub repo](https://github.com/tyliupku/nli-debiasing-datasets).


## Dataset curation
No specific curation was performed for this dataset: the label encoding follows exactly what is reported in the authors' paper.
Also, from the paper:

> _all the following datasets are collected based on the public available resources proposed by their authors, thus the experimental results in this paper are comparable to the numbers reported in the original papers and the other papers that use these datasets_

Most of the included datasets follow the standard 3-class NLI label encoding `{"entailment": 0, "neutral": 1, "contradiction": 2}`.
However, the following datasets use their own binary label mappings (a decoding sketch follows the list):

- `IS-SD`: `{"non-entailment": 0, "entailment": 1}`
- `LI-TS`: `{"non-contradiction": 0, "contradiction": 1}`
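
As a quick check of these encodings, the `ClassLabel` feature attached to each split decodes integer ids back to label strings. A minimal sketch, assuming the dataset is available on the Hub as `pietrolesci/robust_nli` (split names use underscores, as in the creation code below):

```python
from datasets import load_dataset

# One 3-class split and one binary split
pi_cd = load_dataset("pietrolesci/robust_nli", split="PI_CD")
is_sd = load_dataset("pietrolesci/robust_nli", split="IS_SD")

# ClassLabel.int2str maps integer ids back to label names
print(pi_cd.features["label"].int2str(2))  # contradiction
print(is_sd.features["label"].int2str(1))  # entailment
```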

## Dataset structure
This benchmark includes 10 adversarial datasets. To provide more insight into how these datasets
attack models, the authors categorized them according to the bias(es) they test and renamed them
accordingly (see Section 2 of the paper for details).
A mapping to the original dataset names is provided below.

|    | Name   | Original Name          | Original Paper | Original Curation |
|---:|:-------|:-----------------------|:---------------|:------------------|
|  0 | PI-CD  | SNLI-Hard              | [Gururangan et al. (2018)](https://aclanthology.org/N18-2017/) | SNLI test set instances that cannot be correctly classified by a neural classifier (fastText) trained on the hypothesis sentences alone. |
|  1 | PI-SP  | MNLI-Hard              | [Liu et al. (2020)](https://aclanthology.org/2020.lrec-1.846/) | MNLI-mismatched dev set instances that cannot be correctly classified by surface patterns that are highly correlated with the labels. |
|  2 | IS-SD  | HANS                   | [McCoy et al. (2019)](https://aclanthology.org/P19-1334/) | Dataset that tests the lexical overlap, subsequence, and constituent heuristics between the premise and hypothesis sentences. |
|  3 | IS-CS  | SoSwap-AddAMod         | [Nie et al. (2019)](https://dl.acm.org/doi/abs/10.1609/aaai.v33i01.33016867) | Pairs of sentences whose logical relations cannot be extracted from lexical information alone. Premises are taken from the SNLI dev set and modified. The original paper assigns a Lexically Misleading Score (LMS) to each instance; only the subset with LMS > 0.7 is included here. |
|  4 | LI-LI  | Stress tests (antonym) | [Naik et al. (2018)](https://aclanthology.org/C18-1198/) and [Glockner et al. (2018)](https://aclanthology.org/P18-2103/) | Merge of the 'antonym' category in Naik et al. (2018) (from the MNLI matched and mismatched dev sets) and Glockner et al. (2018) (from the SNLI training set). |
|  5 | LI-TS  | Created by the authors | Created by the authors | Swap the two sentences of the MNLI-mismatched dev set. If the gold label is 'contradiction', the label of the swapped instance stays 'contradiction'; otherwise it becomes 'non-contradiction' (see the sketch below). |
|  6 | ST-WO  | Word overlap           | [Naik et al. (2018)](https://aclanthology.org/C18-1198/) | 'Word overlap' category in Naik et al. (2018). |
|  7 | ST-NE  | Negation               | [Naik et al. (2018)](https://aclanthology.org/C18-1198/) | 'Negation' category in Naik et al. (2018). |
|  8 | ST-LM  | Length mismatch        | [Naik et al. (2018)](https://aclanthology.org/C18-1198/) | 'Length mismatch' category in Naik et al. (2018). |
|  9 | ST-SE  | Spelling errors        | [Naik et al. (2018)](https://aclanthology.org/C18-1198/) | 'Spelling errors' category in Naik et al. (2018). |
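
For illustration, the LI-TS construction described in the table amounts to the transformation sketched below (a hypothetical re-implementation of the description, not the authors' original code):

```python
def swap_for_li_ts(premise: str, hypothesis: str, gold_label: str) -> dict:
    # Hypothetical helper: 'contradiction' is symmetric under swapping the two
    # sentences, so it is kept; 'entailment' and 'neutral' are not, so they
    # collapse to 'non-contradiction'.
    new_label = "contradiction" if gold_label == "contradiction" else "non-contradiction"
    return {"premise": hypothesis, "hypothesis": premise, "label": new_label}
```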

## Code to create the dataset

```python
import ast

import pandas as pd
from datasets import Dataset, ClassLabel, Value, Features, DatasetDict


Tri_dataset = ["IS_CS", "LI_LI", "PI_CD", "PI_SP", "ST_LM", "ST_NE", "ST_SE", "ST_WO"]
Ent_bin_dataset = ["IS_SD"]
Con_bin_dataset = ["LI_TS"]


# read data: one dict literal per line
with open("<path to file>/robust_nli.txt", encoding="utf-8", mode="r") as fl:
    f = fl.read().strip().split("\n")
f = [ast.literal_eval(i) for i in f]  # literal_eval is safer than eval for parsing dict literals
df = pd.DataFrame.from_dict(f)

# rename to map common names
df = df.rename(columns={"prem": "premise", "hypo": "hypothesis"})

# reorder columns
df = df.loc[:, ["idx", "split", "premise", "hypothesis", "label"]]

# create split-specific features
Tri_features = Features(
    {
        "idx": Value(dtype="int64"),
        "premise": Value(dtype="string"),
        "hypothesis": Value(dtype="string"),
        "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
    }
)

Ent_features = Features(
    {
        "idx": Value(dtype="int64"),
        "premise": Value(dtype="string"),
        "hypothesis": Value(dtype="string"),
        "label": ClassLabel(num_classes=2, names=["non-entailment", "entailment"]),
    }
)

Con_features = Features(
    {
        "idx": Value(dtype="int64"),
        "premise": Value(dtype="string"),
        "hypothesis": Value(dtype="string"),
        "label": ClassLabel(num_classes=2, names=["non-contradiction", "contradiction"]),
    }
)

# convert to datasets, encoding labels with the split-specific features
dataset_splits = {}

for split in df["split"].unique():
    print(split)
    df_split = df.loc[df["split"] == split].copy()

    if split in Tri_dataset:
        df_split["label"] = df_split["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
        ds = Dataset.from_pandas(df_split, features=Tri_features)

    elif split in Ent_bin_dataset:
        df_split["label"] = df_split["label"].map({"non-entailment": 0, "entailment": 1})
        ds = Dataset.from_pandas(df_split, features=Ent_features)

    elif split in Con_bin_dataset:
        df_split["label"] = df_split["label"].map({"non-contradiction": 0, "contradiction": 1})
        ds = Dataset.from_pandas(df_split, features=Con_features)

    else:
        print("ERROR:", split)
        continue  # do not reuse a stale `ds` for unknown splits

    dataset_splits[split] = ds

datasets = DatasetDict(dataset_splits)
datasets.push_to_hub("pietrolesci/robust_nli", token="<your token>")


# check overlap between splits
from itertools import combinations

for i, j in combinations(datasets.keys(), 2):
    print(
        f"{i} - {j}: ",
        pd.merge(
            datasets[i].to_pandas(),
            datasets[j].to_pandas(),
            on=["premise", "hypothesis", "label"],
            how="inner",
        ).shape[0],
    )
#> PI_SP - ST_LM: 0
#> PI_SP - ST_NE: 0
#> PI_SP - IS_CS: 0
#> PI_SP - LI_TS: 1
#> PI_SP - LI_LI: 0
#> PI_SP - ST_SE: 0
#> PI_SP - PI_CD: 0
#> PI_SP - IS_SD: 0
#> PI_SP - ST_WO: 0
#> ST_LM - ST_NE: 0
#> ST_LM - IS_CS: 0
#> ST_LM - LI_TS: 0
#> ST_LM - LI_LI: 0
#> ST_LM - ST_SE: 0
#> ST_LM - PI_CD: 0
#> ST_LM - IS_SD: 0
#> ST_LM - ST_WO: 0
#> ST_NE - IS_CS: 0
#> ST_NE - LI_TS: 0
#> ST_NE - LI_LI: 0
#> ST_NE - ST_SE: 0
#> ST_NE - PI_CD: 0
#> ST_NE - IS_SD: 0
#> ST_NE - ST_WO: 0
#> IS_CS - LI_TS: 0
#> IS_CS - LI_LI: 0
#> IS_CS - ST_SE: 0
#> IS_CS - PI_CD: 0
#> IS_CS - IS_SD: 0
#> IS_CS - ST_WO: 0
#> LI_TS - LI_LI: 0
#> LI_TS - ST_SE: 0
#> LI_TS - PI_CD: 0
#> LI_TS - IS_SD: 0
#> LI_TS - ST_WO: 0
#> LI_LI - ST_SE: 0
#> LI_LI - PI_CD: 0
#> LI_LI - IS_SD: 0
#> LI_LI - ST_WO: 0
#> ST_SE - PI_CD: 0
#> ST_SE - IS_SD: 0
#> ST_SE - ST_WO: 0
#> PI_CD - IS_SD: 0
#> PI_CD - ST_WO: 0
#> IS_SD - ST_WO: 0
```
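
Once pushed, the splits can be loaded back from the Hub. A minimal usage sketch (repo id and split names as above):

```python
from datasets import load_dataset

# Load all ten adversarial splits as a DatasetDict keyed by split name
datasets = load_dataset("pietrolesci/robust_nli")
for name, ds in datasets.items():
    print(name, ds.num_rows)
```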