---
language:
- en
multilinguality:
- monolingual
dataset_info:
- config_name: pair
features:
- name: anchor
dtype: string
- name: positive
dtype: string
splits:
- name: train
num_bytes: 131218590
num_examples: 942069
- name: dev
num_bytes: 2876871
num_examples: 19657
- name: test
num_bytes: 2984879
num_examples: 19656
download_size: 72084162
dataset_size: 137080340
- config_name: pair-class
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 138755142
num_examples: 942069
- name: dev
num_bytes: 3034127
num_examples: 19657
- name: test
num_bytes: 3142127
num_examples: 19656
download_size: 72651651
dataset_size: 144931396
- config_name: pair-score
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: float64
- name: sentence_1
dtype: string
- name: sentence_2
dtype: string
splits:
- name: train
num_bytes: 269973732
num_examples: 942069
- name: dev
num_bytes: 5910998
num_examples: 19657
- name: test
num_bytes: 6127006
num_examples: 19656
download_size: 144725363
dataset_size: 282011736
- config_name: triplet
features:
- name: anchor
dtype: string
- name: positive
dtype: string
- name: negative
dtype: string
splits:
- name: train
num_bytes: 197631954
num_examples: 1115700
- name: dev
num_bytes: 2545182
num_examples: 13168
- name: test
num_bytes: 2682532
num_examples: 13218
download_size: 65778763
dataset_size: 202859668
configs:
- config_name: pair
data_files:
- split: train
path: pair/train-*
- split: dev
path: pair/dev-*
- split: test
path: pair/test-*
- config_name: pair-class
data_files:
- split: train
path: pair-class/train-*
- split: dev
path: pair-class/dev-*
- split: test
path: pair-class/test-*
- config_name: pair-score
data_files:
- split: train
path: pair-score/train-*
- split: dev
path: pair-score/dev-*
- split: test
path: pair-score/test-*
- config_name: triplet
data_files:
- split: train
path: triplet/train-*
- split: dev
path: triplet/dev-*
- split: test
path: triplet/test-*
task_categories:
- feature-extraction
- sentence-similarity
pretty_name: AllNLI
size_categories:
- 1M<n<10M
---
# Dataset Card for AllNLI
This dataset is a concatenation of the [SNLI](https://huggingface.co/datasets/stanfordnlp/snli) and [MultiNLI](https://huggingface.co/datasets/nyu-mll/multi_nli) datasets.
Despite originally being intended for Natural Language Inference (NLI), this dataset can also be used for training or fine-tuning an embedding model for semantic textual similarity.
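Each subset below is published as a separate configuration. As a quick start, the following minimal sketch loads one of them with the Hugging Face `datasets` library; the repo id `sentence-transformers/all-nli` is an assumption and should be replaced with the actual Hub id of this dataset.
```python
from datasets import load_dataset

# NOTE: the repo id below is an assumption; replace it with the actual
# Hub id under which this dataset is published.
pair_class = load_dataset("sentence-transformers/all-nli", "pair-class", split="train")
print(pair_class[0])
# {'premise': 'A person on a horse jumps over a broken down airplane.',
#  'hypothesis': 'A person is training his horse for a competition.', 'label': 1}
```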
## Dataset Subsets
### `pair-class` subset
* Columns: "premise", "hypothesis", "label"
* Column types: `str`, `str`, `class` with {"0": "entailment", "1": "neutral", "2": "contradiction"} (see the label-decoding sketch after this list)
* Examples:
```python
{'premise': 'A person on a horse jumps over a broken down airplane.', 'hypothesis': 'A person is training his horse for a competition.', 'label': 1}
```
* Collection strategy: Reading the premise, hypothesis and integer label from the SNLI & MultiNLI datasets.
* Deduplicated: Yes
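The "label" column is a `ClassLabel` feature, so integer labels can be decoded back to the class names listed above. A minimal sketch, again assuming the hypothetical repo id `sentence-transformers/all-nli`:
```python
from datasets import load_dataset

# Hypothetical repo id; adjust to the actual Hub id of this dataset.
pair_class = load_dataset("sentence-transformers/all-nli", "pair-class", split="dev")

# Decode the integer label via the ClassLabel feature of the "label" column.
label_feature = pair_class.features["label"]
example = pair_class[0]
print(example["label"], "->", label_feature.int2str(example["label"]))
# e.g. 1 -> neutral
```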
### `pair-score` subset
* Columns: "premise", "hypothesis", "label"
* Column types: `str`, `str`, `float`
* Examples:
```python
{'premise': 'A person on a horse jumps over a broken down airplane.', 'hypothesis': 'A person is training his horse for a competition.', 'label': 0.5}
```
* Collection strategy: Taking the `pair-class` subset and remapping "entailment", "neutral" and "contradiction" to 1.0, 0.5 and 0.0, respectively (a sketch of this remapping follows the list).
* Deduplicated: Yes
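A minimal sketch of that remapping, assuming the `pair-class` configuration is available under the hypothetical repo id `sentence-transformers/all-nli`:
```python
from datasets import load_dataset

# Hypothetical repo id; the remapping mirrors the strategy described above.
pair_class = load_dataset("sentence-transformers/all-nli", "pair-class", split="train")

# entailment (0) -> 1.0, neutral (1) -> 0.5, contradiction (2) -> 0.0
score_map = {0: 1.0, 1: 0.5, 2: 0.0}

# Replace the integer class label with a float score: the original "label"
# column is removed and the float-valued "label" from the map output is kept.
pair_score = pair_class.map(
    lambda row: {"label": score_map[row["label"]]},
    remove_columns=["label"],
)
print(pair_score[0])
```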
### `pair` subset
* Columns: "anchor", "positive"
* Column types: `str`, `str`
* Examples:
```python
{'anchor': 'A person on a horse jumps over a broken down airplane.', 'positive': 'A person is outdoors, on a horse.'}
```
* Collection strategy: Reading the SNLI & MultiNLI datasets and considering the "premise" as the "anchor" and the "hypothesis" as the "positive" if the label is "entailment". The reverse ("hypothesis" as "anchor" and "premise" as "positive") is not included (a sketch of this pairing follows the list).
* Deduplicated: Yes
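A minimal sketch of that pairing logic, applied to rows in the `pair-class` format; the helper name and the inline rows are illustrative only:
```python
def build_pairs(nli_rows):
    """Yield (anchor, positive) pairs from NLI rows, keeping only entailments."""
    seen = set()
    for row in nli_rows:
        if row["label"] != 0:  # 0 = entailment
            continue
        pair = (row["premise"], row["hypothesis"])
        if pair in seen:  # keep the subset deduplicated
            continue
        seen.add(pair)
        yield {"anchor": row["premise"], "positive": row["hypothesis"]}

rows = [
    {"premise": "A person on a horse jumps over a broken down airplane.",
     "hypothesis": "A person is outdoors, on a horse.", "label": 0},
    {"premise": "A person on a horse jumps over a broken down airplane.",
     "hypothesis": "A person is training his horse for a competition.", "label": 1},
]
print(list(build_pairs(rows)))
# [{'anchor': 'A person on a horse jumps over a broken down airplane.',
#   'positive': 'A person is outdoors, on a horse.'}]
```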
### `triplet` subset
* Columns: "anchor", "positive", "negative"
* Column types: `str`, `str`, `str`
* Examples:
```python
{'anchor': 'A person on a horse jumps over a broken down airplane.', 'positive': 'A person is outdoors, on a horse.', 'negative': 'A person is at a diner, ordering an omelette.'}
```
* Collection strategy: Reading the SNLI & MultiNLI datasets and, for each "premise", making a list of entailing and contradictory hypotheses using the dataset labels, then considering all possible triplets out of these entailing and contradictory lists. The reverse (the entailing "hypothesis" as the "anchor" and the "premise" as the "positive") is also included (a sketch of this construction follows the list).
* Deduplicated: Yes
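A minimal sketch of that triplet construction (without the reverse direction or deduplication), again over rows in the `pair-class` format; the helper name and inline rows are illustrative:
```python
from collections import defaultdict
from itertools import product

def build_triplets(nli_rows):
    """Yield (anchor, positive, negative) triplets for each premise."""
    entailing, contradicting = defaultdict(list), defaultdict(list)
    for row in nli_rows:
        if row["label"] == 0:    # entailment
            entailing[row["premise"]].append(row["hypothesis"])
        elif row["label"] == 2:  # contradiction
            contradicting[row["premise"]].append(row["hypothesis"])
    # All combinations of an entailing and a contradictory hypothesis per premise.
    for premise, positives in entailing.items():
        for pos, neg in product(positives, contradicting[premise]):
            yield {"anchor": premise, "positive": pos, "negative": neg}

rows = [
    {"premise": "A person on a horse jumps over a broken down airplane.",
     "hypothesis": "A person is outdoors, on a horse.", "label": 0},
    {"premise": "A person on a horse jumps over a broken down airplane.",
     "hypothesis": "A person is at a diner, ordering an omelette.", "label": 2},
]
print(next(build_triplets(rows)))
# {'anchor': 'A person on a horse jumps over a broken down airplane.',
#  'positive': 'A person is outdoors, on a horse.',
#  'negative': 'A person is at a diner, ordering an omelette.'}
```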