## Overview
Original dataset available [here](https://wellecks.github.io/dialogue_nli/).


## Dataset curation
The original `label` column is renamed `original_label`. The original classes are renamed as follows

```
{"positive": "entailment", "negative": "contradiction", "neutral": "neutral"}
```

and encoded with the following mapping

```
{"entailment": 0, "neutral": 1, "contradiction": 2}
```

and stored in the newly created column `label`.
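As a quick reference for the resulting encoding, here is a minimal sketch using the same `ClassLabel` feature that is defined in the creation script below (names and values mirror the mappings above):

```python
from datasets import ClassLabel

# the same ClassLabel feature used for the new `label` column below
label_feature = ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"])
print(label_feature.str2int("contradiction"))  # 2
print(label_feature.int2str(0))                # "entailment"
```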


The following splits, with their corresponding columns, are present in the original files

```
train             {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
dev               {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
test              {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
verified_test     {'dtype', 'annotation3', 'sentence1', 'sentence2', 'annotation1', 'annotation2', 'original_label', 'label', 'triple2', 'triple1'}
extra_test        {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
extra_dev         {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
extra_train       {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
valid_havenot     {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
valid_attributes  {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
valid_likedislike {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
```
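For reference, the listing above can be reproduced from the per-split DataFrames built in the creation script below (`datasets` is the dict of pandas DataFrames defined there; this is only an illustrative sketch):

```python
# `datasets` maps split name -> pandas DataFrame (built in the script below)
for name, df in datasets.items():
    print(name, set(df.columns))
```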

Note that I only keep the common columns, which means that I drop "annotation{1, 2, 3}" from `verified_test`.
Note that some splits contain the same instances, as found by matching on "original_label", "sentence1", "sentence2" (see the overlap check at the end of the script below).
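Given these overlaps, downstream users may want to drop, for example, `train` rows that also occur in `test`. This is only a hedged sketch using the same matching keys; `train_df` and `test_df` are hypothetical pandas DataFrames obtained via `Dataset.to_pandas()`:

```python
import pandas as pd

keys = ["original_label", "sentence1", "sentence2"]

# hypothetical DataFrames for the train/test splits (e.g. ds["train"].to_pandas())
overlap = test_df[keys].drop_duplicates()

# anti-join: keep only train rows whose key combination does not appear in test
train_dedup = (
    train_df.merge(overlap, on=keys, how="left", indicator=True)
    .query("_merge == 'left_only'")
    .drop(columns="_merge")
)
```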


## Code to create dataset
```python
import json
from pathlib import Path

import pandas as pd
from datasets import ClassLabel, Dataset, DatasetDict, Features, Sequence, Value


# load data
ds = {}
for path in Path(".").rglob("<path to folder>/*.jsonl"):
    print(path, flush=True)

    with path.open("r") as fl:
        data = fl.read()
    try:
        d = json.loads(data)
    except json.JSONDecodeError as error:
        print(error)

    df = pd.DataFrame(d)

    # encode labels
    df["original_label"] = df["label"]
    df["label"] = df["label"].map({"positive": "entailment", "negative": "contradiction", "neutral": "neutral"})
    df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})

    ds[path.name.split(".")[0]] = df


# prettify names of data splits
datasets = {
    k.replace("dialogue_nli_", "").replace("uu_", "").lower(): v
    for k, v in ds.items()
}
datasets.keys()
#> dict_keys(['train', 'dev', 'test', 'verified_test', 'extra_test', 'extra_dev', 'extra_train', 'valid_havenot', 'valid_attributes', 'valid_likedislike'])


# cast to datasets using only common columns
features = Features({
    "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
    "sentence1": Value(dtype="string", id=None),
    "sentence2": Value(dtype="string", id=None),
    "triple1": Sequence(feature=Value(dtype="string", id=None), length=3),
    "triple2": Sequence(feature=Value(dtype="string", id=None), length=3),
    "dtype": Value(dtype="string", id=None),
    "id": Value(dtype="string", id=None),
    "original_label": Value(dtype="string", id=None),
})

ds = {}
for name, df in datasets.items():
    # `verified_test` has no `id` column, so add an empty one to keep the schema uniform
    if "id" not in df.columns:
        df["id"] = ""
    # keeping only the common columns drops annotation1-3 from `verified_test`
    ds[name] = Dataset.from_pandas(df.loc[:, list(features.keys())], features=features)

ds = DatasetDict(ds)
ds.push_to_hub("dialogue_nli", token="<token>")


# check overlap between splits
from itertools import combinations

for i, j in combinations(ds.keys(), 2):
    print(
        f"{i} - {j}: ",
        pd.merge(
            ds[i].to_pandas(),
            ds[j].to_pandas(),
            on=["original_label", "sentence1", "sentence2"],
            how="inner",
        ).shape[0],
    )
#> train - dev: 58
#> train - test: 98
#> train - verified_test: 90
#> train - extra_test: 0
#> train - extra_dev: 0
#> train - extra_train: 0
#> train - valid_havenot: 0
#> train - valid_attributes: 0
#> train - valid_likedislike: 0
#> dev - test: 19
#> dev - verified_test: 19
#> dev - extra_test: 0
#> dev - extra_dev: 75
#> dev - extra_train: 75
#> dev - valid_havenot: 75
#> dev - valid_attributes: 75
#> dev - valid_likedislike: 75
#> test - verified_test: 12524
#> test - extra_test: 34
#> test - extra_dev: 0
#> test - extra_train: 0
#> test - valid_havenot: 0
#> test - valid_attributes: 0
#> test - valid_likedislike: 0
#> verified_test - extra_test: 29
#> verified_test - extra_dev: 0
#> verified_test - extra_train: 0
#> verified_test - valid_havenot: 0
#> verified_test - valid_attributes: 0
#> verified_test - valid_likedislike: 0
#> extra_test - extra_dev: 0
#> extra_test - extra_train: 0
#> extra_test - valid_havenot: 0
#> extra_test - valid_attributes: 0
#> extra_test - valid_likedislike: 0
#> extra_dev - extra_train: 250946
#> extra_dev - valid_havenot: 250946
#> extra_dev - valid_attributes: 250946
#> extra_dev - valid_likedislike: 250946
#> extra_train - valid_havenot: 250946
#> extra_train - valid_attributes: 250946
#> extra_train - valid_likedislike: 250946
#> valid_havenot - valid_attributes: 250946
#> valid_havenot - valid_likedislike: 250946
#> valid_attributes - valid_likedislike: 250946
```
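Once pushed, the dataset can be loaded straight from the Hub. A minimal usage sketch; the repo id `pietrolesci/dialogue_nli` is an assumption inferred from the `push_to_hub("dialogue_nli", ...)` call above:

```python
from datasets import load_dataset

# assumed repo id, inferred from the push_to_hub call above
dnli = load_dataset("pietrolesci/dialogue_nli")
print(dnli)                                   # DatasetDict with the splits listed above
print(dnli["train"].features["label"].names)  # ['entailment', 'neutral', 'contradiction']
```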