## Overview

The original dataset page is available [here](https://abhilasharavichander.github.io/NLI_StressTest/) and the data can be downloaded [here](https://drive.google.com/open?id=1faGA5pHdu5Co8rFhnXn-6jbBYC2R1dhw).


## Dataset curation

A new column `label` is added, encoding the gold labels with the following mapping

```
{"entailment": 0, "neutral": 1, "contradiction": 2}
```

and the columns containing parse information are dropped, as they are not well formatted.

Also, the name of the file from which each instance comes is recorded in the `dtype` column.
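As a quick reference, the mapping above can be inverted to decode the integer labels back into strings; a minimal sketch in plain Python:

```python
# label mapping used for the `label` column
label2id = {"entailment": 0, "neutral": 1, "contradiction": 2}

# inverse mapping, for decoding predictions back to strings
id2label = {v: k for k, v in label2id.items()}

print(label2id["entailment"])  # 0
print(id2label[2])             # contradiction
```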


## Code to create the dataset

```python
import pandas as pd
from datasets import Dataset, ClassLabel, Value, Features, DatasetDict
import json
from pathlib import Path


# load data
ds = {}
path = Path("<path to folder>")
for i in path.rglob("*.jsonl"):
    print(i)
    name = i.parent.name.lower()  # subfolder name, e.g. "negation"
    dtype = i.name.lower()        # source file name
    
    # read data
    with i.open("r") as fl:
        df = pd.DataFrame([json.loads(line) for line in fl])
    
    # select columns
    df = df.loc[:, ["sentence1", "sentence2", "gold_label"]]
    
    # add file name as column
    df["dtype"] = dtype
    
    # encode labels
    df["label"] = df["gold_label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
    ds[name] = df
        
# cast to dataset
features = Features(
    {
        "sentence1": Value(dtype="string"),
        "sentence2": Value(dtype="string"),
        "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
        "dtype": Value(dtype="string"),
        "gold_label": Value(dtype="string"),
    }
)
ds = DatasetDict({k: Dataset.from_pandas(v, features=features) for k, v in ds.items()})
ds.push_to_hub("pietrolesci/stress_tests_nli", token="<token>")


# check overlap between splits
from itertools import combinations
for i, j in combinations(ds.keys(), 2):
    print(
        f"{i} - {j}: ",
        pd.merge(
            ds[i].to_pandas(), 
            ds[j].to_pandas(), 
            on=["sentence1", "sentence2", "label"], 
            how="inner",
        ).shape[0],
    )
#> numerical_reasoning - negation:  0
#> numerical_reasoning - length_mismatch:  0
#> numerical_reasoning - spelling_error:  0
#> numerical_reasoning - word_overlap:  0
#> numerical_reasoning - antonym:  0
#> negation - length_mismatch:  0
#> negation - spelling_error:  0
#> negation - word_overlap:  0
#> negation - antonym:  0
#> length_mismatch - spelling_error:  0
#> length_mismatch - word_overlap:  0
#> length_mismatch - antonym:  0
#> spelling_error - word_overlap:  0
#> spelling_error - antonym:  0
#> word_overlap - antonym:  0
```
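The overlap check above relies on the fact that an inner merge on the key columns returns exactly the rows shared by the two frames. A minimal sanity check with made-up sentences (assuming only pandas):

```python
import pandas as pd

# two toy "splits" sharing exactly one (sentence1, sentence2, label) triple
a = pd.DataFrame({
    "sentence1": ["A dog runs.", "A cat sleeps."],
    "sentence2": ["An animal runs.", "A cat rests."],
    "label": [0, 1],
})
b = pd.DataFrame({
    "sentence1": ["A dog runs.", "Birds fly."],
    "sentence2": ["An animal runs.", "Animals fly."],
    "label": [0, 0],
})

# inner merge on the key columns counts the rows present in both frames
overlap = pd.merge(a, b, on=["sentence1", "sentence2", "label"], how="inner").shape[0]
print(overlap)  # 1
```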