---
language:
- en
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
task_categories:
- feature-extraction
- sentence-similarity
pretty_name: Wikipedia Sections
tags:
- sentence-transformers
dataset_info:
- config_name: pair
  features:
  - name: anchor
    dtype: string
  - name: positive
    dtype: string
  splits:
  - name: train
    num_bytes: 490913561
    num_examples: 1779417
  - name: validation
    num_bytes: 60891304
    num_examples: 220400
  - name: test
    num_bytes: 61385426
    num_examples: 222957
  download_size: 295222520
  dataset_size: 613190291
- config_name: triplet
  features:
  - name: anchor
    dtype: string
  - name: positive
    dtype: string
  - name: negative
    dtype: string
  splits:
  - name: train
    num_bytes: 733058519
    num_examples: 1779417
  - name: validation
    num_bytes: 90881953
    num_examples: 220400
  - name: test
    num_bytes: 91705993
    num_examples: 222957
  download_size: 500545462
  dataset_size: 915646465
configs:
- config_name: pair
  data_files:
  - split: train
    path: pair/train-*
  - split: validation
    path: pair/validation-*
  - split: test
    path: pair/test-*
- config_name: triplet
  data_files:
  - split: train
    path: triplet/train-*
  - split: validation
    path: triplet/validation-*
  - split: test
    path: triplet/test-*
---

# Dataset Card for Wikipedia Sections

This dataset contains pairs and triplets that can be used to train and finetune Sentence Transformer embedding models. The dataset originates from [Dor et al.](https://aclanthology.org/P18-2009.pdf) and was downloaded from [this download link](https://sbert.net/datasets/wikipedia-sections-triplets.zip).
Notably, the "anchor" column contains sentences from Wikipedia, whereas the "positive" column contains other sentences from the same section. The "negative" column contains sentences from other sections.
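A row in each subset is simply a mapping from column name to sentence. The sketch below illustrates the two schemas; the sentences are invented placeholders, not actual dataset entries:

```python
# Hypothetical rows illustrating each subset's schema; the sentences are
# invented placeholders, not actual entries from the dataset.
pair_row = {
    "anchor": "The city was founded in the 12th century.",
    "positive": "It grew rapidly after the railway arrived.",
}
# The triplet subset adds a "negative" drawn from a different section.
triplet_row = {
    **pair_row,
    "negative": "The local cuisine features many seafood dishes.",
}

print(sorted(triplet_row))  # → ['anchor', 'negative', 'positive']
```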

## Dataset Subsets

### `pair` subset

* Columns: "anchor", "positive"
* Column types: `str`, `str`
* Example (illustrative placeholder, not an actual dataset row):
    ```python
    {
        "anchor": "A sentence from one Wikipedia section.",
        "positive": "Another sentence from the same section.",
    }
    ```
* Collection strategy: Reading the Wikipedia Sections dataset from https://sbert.net.
* Deduplicated: Yes

### `triplet` subset

* Columns: "anchor", "positive", "negative"
* Column types: `str`, `str`, `str`
* Example (illustrative placeholder, not an actual dataset row):
    ```python
    {
        "anchor": "A sentence from one Wikipedia section.",
        "positive": "Another sentence from the same section.",
        "negative": "A sentence from a different section.",
    }
    ```
* Collection strategy: Reading the Wikipedia Sections dataset from https://sbert.net.
* Deduplicated: Yes