Overview
The original dataset (scitail) is available on the HuggingFace Hub.
Dataset curation
This is the same as the snli_format configuration of the SciTail dataset available on the HuggingFace Hub (i.e., same data, same splits, etc.).
The only differences are the following:
- selecting only the columns ["sentence1", "sentence2", "gold_label"]
- renaming columns with the mapping {"sentence1": "premise", "sentence2": "hypothesis", "gold_label": "label"}
- encoding labels with the mapping {"entailment": 0, "neutral": 1, "contradiction": 2} (see the round-trip sketch below)
Note that there are 10 overlapping instances between the train and test splits (as found by an inner merge on the columns "label", "premise", and "hypothesis").
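For reference, the label encoding round-trips through the ClassLabel feature used in the code below; a minimal sketch:

from datasets import ClassLabel

# the same ClassLabel feature used when casting the dataset below
label = ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"])

label.str2int("entailment")  #> 0
label.int2str(1)             #> 'neutral'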
Code to create the dataset
import pandas as pd
from datasets import Features, Value, ClassLabel, Dataset, DatasetDict, load_dataset

# load the original dataset from the Hub
dd = load_dataset("scitail", "snli_format")

ds = {}
for name, df_ in dd.items():
    df = df_.to_pandas()

    # select only the relevant columns
    df = df[["sentence1", "sentence2", "gold_label"]]

    # rename columns
    df = df.rename(columns={"sentence1": "premise", "sentence2": "hypothesis", "gold_label": "label"})

    # encode labels
    df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})

    # cast to Dataset with an explicit ClassLabel feature
    features = Features({
        "premise": Value(dtype="string", id=None),
        "hypothesis": Value(dtype="string", id=None),
        "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
    })
    ds[name] = Dataset.from_pandas(df, features=features)

dataset = DatasetDict(ds)
dataset.push_to_hub("scitail", token="<token>")

# check overlap between splits (inner merge on all three columns)
from itertools import combinations

for i, j in combinations(dataset.keys(), 2):
    print(
        f"{i} - {j}: ",
        pd.merge(
            dataset[i].to_pandas(),
            dataset[j].to_pandas(),
            on=["label", "premise", "hypothesis"],
            how="inner",
        ).shape[0],
    )
#> train - test: 10
#> train - validation: 0
#> test - validation: 0
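Once pushed, the dataset can be loaded back and the label names recovered directly from its features. A minimal sketch, assuming the dataset was pushed to your own namespace ("<user>/scitail" is a placeholder):

from datasets import load_dataset

# "<user>/scitail" stands in for the repository created by push_to_hub above
dataset = load_dataset("<user>/scitail")

# the ClassLabel feature keeps the mapping between integers and label names
print(dataset["train"].features["label"].names)
#> ['entailment', 'neutral', 'contradiction']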