---
language:
- en
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
task_categories:
- feature-extraction
- sentence-similarity
pretty_name: AllNLI
dataset_info:
- config_name: pair
  features:
  - name: anchor
    dtype: string
  - name: positive
    dtype: string
  splits:
  - name: train
    num_bytes: 43012118
    num_examples: 314315
  - name: dev
    num_bytes: 992955
    num_examples: 6808
  - name: test
    num_bytes: 1042254
    num_examples: 6831
  download_size: 27501136
  dataset_size: 45047327
- config_name: pair-class
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 138755142
    num_examples: 942069
  - name: dev
    num_bytes: 3034127
    num_examples: 19657
  - name: test
    num_bytes: 3142127
    num_examples: 19656
  download_size: 72651651
  dataset_size: 144931396
- config_name: pair-score
  features:
  - name: sentence_1
    dtype: string
  - name: sentence_2
    dtype: string
  - name: label
    dtype: float64
  splits:
  - name: train
    num_bytes: 138755142
    num_examples: 942069
  - name: dev
    num_bytes: 3034127
    num_examples: 19657
  - name: test
    num_bytes: 3142127
    num_examples: 19656
  download_size: 72656605
  dataset_size: 144931396
- config_name: triplet
  features:
  - name: anchor
    dtype: string
  - name: positive
    dtype: string
  - name: negative
    dtype: string
  splits:
  - name: train
    num_bytes: 98815977
    num_examples: 557850
  - name: dev
    num_bytes: 1272591
    num_examples: 6584
  - name: test
    num_bytes: 1341266
    num_examples: 6609
  download_size: 39988980
  dataset_size: 101429834
configs:
- config_name: pair
  data_files:
  - split: train
    path: pair/train-*
  - split: dev
    path: pair/dev-*
  - split: test
    path: pair/test-*
- config_name: pair-class
  data_files:
  - split: train
    path: pair-class/train-*
  - split: dev
    path: pair-class/dev-*
  - split: test
    path: pair-class/test-*
- config_name: pair-score
  data_files:
  - split: train
    path: pair-score/train-*
  - split: dev
    path: pair-score/dev-*
  - split: test
    path: pair-score/test-*
- config_name: triplet
  data_files:
  - split: train
    path: triplet/train-*
  - split: dev
    path: triplet/dev-*
  - split: test
    path: triplet/test-*
---

# Dataset Card for AllNLI

This dataset is a concatenation of the [SNLI](https://huggingface.co/datasets/stanfordnlp/snli) and [MultiNLI](https://huggingface.co/datasets/nyu-mll/multi_nli) datasets.
Although originally intended for Natural Language Inference (NLI), this dataset can also be used to train or finetune an embedding model for semantic textual similarity.
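
Each subset can be loaded by passing its config name to the `datasets` library. A minimal loading sketch (the repository ID below is an assumption; replace it with this dataset's actual path on the Hugging Face Hub if it differs):

```python
from datasets import load_dataset

# NOTE: the repository ID is an assumption of this sketch; replace it with
# this dataset's actual path on the Hugging Face Hub if it differs.
triplets = load_dataset("sentence-transformers/all-nli", "triplet", split="train")
print(triplets[0])
# e.g. {'anchor': '...', 'positive': '...', 'negative': '...'}
```

The `triplet` subset, for instance, pairs naturally with in-batch negative losses such as `MultipleNegativesRankingLoss` from Sentence Transformers, while `pair-score` suits score-based losses such as `CoSENTLoss`.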

## Dataset Subsets

### `pair-class` subset

* Columns: "premise", "hypothesis", "label"
* Column types: `str`, `str`, `class` with `{"0": "entailment", "1": "neutral", "2": "contradiction"}`
* Examples:
    ```python
    {
      'premise': 'A person on a horse jumps over a broken down airplane.',
      'hypothesis': 'A person is training his horse for a competition.',
      'label': 1,
    }
    ```
* Collection strategy: Reading the premise, hypothesis, and integer label from the SNLI & MultiNLI datasets (sketched below).
* Deduplicated: Yes
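
The collection strategy above can be approximated with the `datasets` library. A rough sketch (deduplication omitted; not necessarily the exact script used to build this subset):

```python
from datasets import concatenate_datasets, load_dataset

# Rough sketch of the described strategy: concatenate SNLI and MultiNLI,
# keeping only premise/hypothesis/label. Deduplication is omitted here.
snli = load_dataset("stanfordnlp/snli", split="train")
mnli = load_dataset("nyu-mll/multi_nli", split="train")

# Keep only the shared columns, then drop examples without a gold label (-1).
mnli = mnli.select_columns(["premise", "hypothesis", "label"])
pair_class = concatenate_datasets([snli, mnli]).filter(lambda ex: ex["label"] != -1)
print(pair_class[0])
```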

### `pair-score` subset

* Columns: "sentence_1", "sentence_2", "label"
* Column types: `str`, `str`, `float`
* Examples:
    ```python
    {
      'sentence_1': 'A person on a horse jumps over a broken down airplane.',
      'sentence_2': 'A person is training his horse for a competition.',
      'label': 1.0,
    }
    ```
* Collection strategy: Taking the `pair-class` subset and remapping "entailment", "neutral" and "contradiction" to 1.0, 0.5 and 0.0, respectively (sketched below).
* Deduplicated: Yes
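
A rough sketch of this remapping, starting from the `pair-class` subset (the repository ID and the intermediate `nli_label` column name are assumptions of the sketch):

```python
from datasets import load_dataset

# Assumed repository ID; replace with this dataset's actual Hub path if it differs.
pair_class = load_dataset("sentence-transformers/all-nli", "pair-class", split="train")

# entailment (0) -> 1.0, neutral (1) -> 0.5, contradiction (2) -> 0.0
score_map = {0: 1.0, 1: 0.5, 2: 0.0}
pair_score = (
    pair_class
    .rename_columns({"premise": "sentence_1", "hypothesis": "sentence_2", "label": "nli_label"})
    .map(lambda ex: {"label": score_map[ex["nli_label"]]}, remove_columns=["nli_label"])
)
print(pair_score[0])
```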

### `pair` subset

* Columns: "anchor", "positive"
* Column types: `str`, `str`
* Examples:
    ```python
    {
      'anchor': 'A person on a horse jumps over a broken down airplane.',
      'positive': 'A person is training his horse for a competition.',
    }
    ```
* Collection strategy: Reading the SNLI & MultiNLI datasets and considering the "premise" as the "anchor" and the "hypothesis" as the "positive" if the label is "entailment" (sketched below). The reverse (the "hypothesis" as "anchor" and the "premise" as "positive") is not included.
* Deduplicated: Yes
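
A rough sketch of this strategy, built from the `pair-class` subset rather than from SNLI & MultiNLI directly (repository ID assumed, deduplication omitted):

```python
from datasets import load_dataset

# Assumed repository ID; replace with this dataset's actual Hub path if it differs.
pair_class = load_dataset("sentence-transformers/all-nli", "pair-class", split="train")
pair = (
    pair_class
    .filter(lambda ex: ex["label"] == 0)  # 0 == "entailment"
    .remove_columns("label")
    .rename_columns({"premise": "anchor", "hypothesis": "positive"})
)
print(pair[0])
```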

### `triplet` subset

* Columns: "anchor", "positive", "negative"
* Column types: `str`, `str`, `str`
* Examples:
    ```python
    {
      'anchor': 'A person on a horse jumps over a broken down airplane.',
      'positive': 'A person is outdoors, on a horse.',
      'negative': 'A person is at a diner, ordering an omelette.',
    }
    ```
* Collection strategy: Reading the SNLI & MultiNLI datasets and, for each "premise", building a list of entailing and a list of contradicting hypotheses from the dataset labels. Every combination of one entailing and one contradicting hypothesis then yields a triplet with the "premise" as the "anchor" (sketched below). The reverse (a "hypothesis" as "anchor" and the "premise" as "positive") is not included.
* Deduplicated: Yes
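
A rough sketch of this triplet construction, again starting from the `pair-class` subset (repository ID assumed, deduplication omitted):

```python
import itertools
from collections import defaultdict

from datasets import load_dataset

# Assumed repository ID; replace with this dataset's actual Hub path if it differs.
pair_class = load_dataset("sentence-transformers/all-nli", "pair-class", split="train")

# For each premise, collect the entailing (label 0) and contradicting (label 2) hypotheses.
entailing, contradicting = defaultdict(list), defaultdict(list)
for example in pair_class:
    if example["label"] == 0:
        entailing[example["premise"]].append(example["hypothesis"])
    elif example["label"] == 2:
        contradicting[example["premise"]].append(example["hypothesis"])

# Every combination of one entailing and one contradicting hypothesis yields a triplet.
triplets = [
    {"anchor": premise, "positive": positive, "negative": negative}
    for premise, positives in entailing.items()
    for positive, negative in itertools.product(positives, contradicting[premise])
]
print(triplets[0])
```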