---
language:
- en
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
task_categories:
- feature-extraction
- sentence-similarity
pretty_name: NLI for SimCSE
tags:
- sentence-transformers
dataset_info:
- config_name: triplet
  features:
  - name: anchor
    dtype: string
  - name: positive
    dtype: string
  - name: negative
    dtype: string
  splits:
  - name: train
    num_bytes: 51033641
    num_examples: 274951
  download_size: 33517191
  dataset_size: 51033641
- config_name: triplet-7
  features:
  - name: anchor
    dtype: string
  - name: positive
    dtype: string
  - name: negative_1
    dtype: string
  - name: negative_2
    dtype: string
  - name: negative_3
    dtype: string
  - name: negative_4
    dtype: string
  - name: negative_5
    dtype: string
  - name: negative_6
    dtype: string
  - name: negative_7
    dtype: string
  splits:
  - name: train
    num_bytes: 129065964
    num_examples: 273540
  download_size: 87886620
  dataset_size: 129065964
- config_name: triplet-all
  features:
  - name: anchor
    dtype: string
  - name: positive
    dtype: string
  - name: negative
    dtype: string
  splits:
  - name: train
    num_bytes: 357145333
    num_examples: 1925996
  download_size: 94616052
  dataset_size: 357145333
configs:
- config_name: triplet
  data_files:
  - split: train
    path: triplet/train-*
- config_name: triplet-7
  data_files:
  - split: train
    path: triplet-7/train-*
- config_name: triplet-all
  data_files:
  - split: train
    path: triplet-all/train-*
---

# Dataset Card for NLI for SimCSE

This is a reformatting of the NLI for SimCSE Dataset used to train the [BGE-M3 model](https://huggingface.co/BAAI/bge-m3). See the full BGE-M3 dataset in [Shitao/bge-m3-data](https://huggingface.co/datasets/Shitao/bge-m3-data).
Despite being labeled as Natural Language Inference (NLI), this dataset can be used for training/finetuning an embedding model for semantic textual similarity.
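
Each subset can be loaded with the `datasets` library by passing the configuration name (`triplet`, `triplet-7`, or `triplet-all`). The repository id below is a placeholder; substitute the actual path of this dataset on the Hugging Face Hub:

```python
from datasets import load_dataset

# Placeholder repository id; replace with this dataset's actual Hub path.
dataset = load_dataset("your-username/nli-for-simcse", "triplet", split="train")
print(dataset[0])
# {'anchor': '...', 'positive': '...', 'negative': '...'}
```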

## Dataset Subsets

### `triplet` subset

* Columns: "anchor", "positive", "negative"
* Column types: `str`, `str`, `str`
* Examples:
    ```python
    {
      'anchor': 'One of our number will carry out your instructions minutely.',
      'positive': 'A member of my team will execute your orders with immense precision.',
      'negative': 'We have no one free at the moment so you have to take action yourself.'
    }
    ```
* Collection strategy: Reading the jsonl file in the `en_NLI_data` directory in [Shitao/bge-m3-data](https://huggingface.co/datasets/Shitao/bge-m3-data) and taking only the first negative; see the sketch below.
* Deduplicated: No
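
A minimal sketch of this collection strategy, assuming the source jsonl follows the `query`/`pos`/`neg` layout used by the bge-m3-data files (the file name below is hypothetical):

```python
import json

triplets = []
with open("en_NLI_data/nli.jsonl") as f:  # hypothetical file name
    for line in f:
        row = json.loads(line)
        triplets.append({
            "anchor": row["query"],
            "positive": row["pos"][0],
            "negative": row["neg"][0],  # keep only the first negative
        })
```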

### `triplet-7` subset

* Columns: "anchor", "positive", "negative_1", "negative_2", "negative_3", "negative_4", "negative_5", "negative_6", "negative_7"
* Column types: `str`, `str`, `str`, `str`, `str`, `str`, `str`, `str`, `str`
* Examples:
    ```python
    {
      'anchor': 'One of our number will carry out your instructions minutely.',
      'positive': 'A member of my team will execute your orders with immense precision.',
      'negative_1': 'We have no one free at the moment so you have to take action yourself.',
      'negative_2': 'A poodle is running through the grass.',
      'negative_3': 'Investment and planning are growing industries in Jamaica.',
      'negative_4': 'A bearded man is rocking out on an acoustic guitar',
      'negative_5': 'The people are sunbathing on the beach.',
      'negative_6': 'A construction worker installs a door.',
      'negative_7': 'A crowd has gathered because of a dangerous situation.'
    }
    ```
* Collection strategy: Reading the jsonl file in the `en_NLI_data` directory in [Shitao/bge-m3-data](https://huggingface.co/datasets/Shitao/bge-m3-data) and taking all samples that have exactly 7 negatives (by far the majority); see the sketch below.
* Deduplicated: No
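
A minimal sketch of this collection strategy under the same assumed `query`/`pos`/`neg` source layout (file name hypothetical):

```python
import json

rows = []
with open("en_NLI_data/nli.jsonl") as f:  # hypothetical file name
    for line in f:
        row = json.loads(line)
        if len(row["neg"]) != 7:  # keep only samples with exactly 7 negatives
            continue
        sample = {"anchor": row["query"], "positive": row["pos"][0]}
        for i, neg in enumerate(row["neg"], start=1):
            sample[f"negative_{i}"] = neg
        rows.append(sample)
```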

### `triplet-all` subset

* Columns: "anchor", "positive", "negative"
* Column types: `str`, `str`, `str`
* Examples:
    ```python
    {
      'anchor': 'One of our number will carry out your instructions minutely.',
      'positive': 'A member of my team will execute your orders with immense precision.',
      'negative': 'We have no one free at the moment so you have to take action yourself.'
    }
    ```
* Collection strategy: Reading the jsonl file in the `en_NLI_data` directory in [Shitao/bge-m3-data](https://huggingface.co/datasets/Shitao/bge-m3-data) and creating a separate sample for each of the negatives; see the sketch below.
* Deduplicated: No
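
A minimal sketch of this collection strategy under the same assumed `query`/`pos`/`neg` source layout (file name hypothetical), emitting one sample per negative:

```python
import json

rows = []
with open("en_NLI_data/nli.jsonl") as f:  # hypothetical file name
    for line in f:
        row = json.loads(line)
        for neg in row["neg"]:  # one (anchor, positive, negative) sample per negative
            rows.append({
                "anchor": row["query"],
                "positive": row["pos"][0],
                "negative": neg,
            })
```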