---
language: en
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 26076989556
    num_examples: 33536113
  download_size: 17380043798
  dataset_size: 26076989556
---
# Dataset Card for "wikipedia20220301en-bookcorpusopen-chunked-shuffled"


```
num_examples: 33.5 million
download_size: 17.4 GB
dataset_size: 26.1 GB
```
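
The snippet below is one way to stream the data. The namespace in the repo id is a placeholder, since this card does not state the account the dataset is hosted under.

```python
from datasets import load_dataset

# "<namespace>" is a placeholder; replace it with the account that hosts this dataset.
dataset = load_dataset(
    "<namespace>/wikipedia20220301en-bookcorpusopen-chunked-shuffled",
    split="train",
    streaming=True,  # avoids downloading the full ~17 GB archive up front
)

for example in dataset.take(3):
    print(example["text"][:80])
```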

This dataset combines [wikipedia20220301.en](https://huggingface.co/datasets/wikipedia) and [bookcorpusopen](https://huggingface.co/datasets/bookcorpusopen),
and splits the data into chunks of ~820 characters
(so that each item is at least ~128 tokens for a typical tokenizer).
The items have already been shuffled,
so there is no need to call `dataset.shuffle`,
which makes iteration slower.
The chunking logic splits only on spaces, so chunks are typically slightly larger than 820 characters.
The text has been normalized to lower case, with accents and non-English characters removed.
Items with fewer than 200 or more than 1,000 characters have been removed.
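
The exact preprocessing script is not included here; the sketch below illustrates the chunking, normalization, and length filtering described above. The function names and the use of `unidecode` for accent stripping are assumptions.

```python
import re
from unidecode import unidecode  # assumed accent-stripping approach

def normalize(text: str) -> str:
    # Lower-case, transliterate accents, then drop remaining non-English characters.
    text = unidecode(text).lower()
    return re.sub(r"[^a-z0-9\s.,;:'\"!?()-]", "", text)

def chunk(text: str, target_chars: int = 820) -> list[str]:
    # Greedily pack whole words until a chunk reaches ~820 characters,
    # splitting only on spaces, so chunks come out slightly larger than the target.
    chunks, current, length = [], [], 0
    for word in text.split(" "):
        current.append(word)
        length += len(word) + 1
        if length >= target_chars:
            chunks.append(" ".join(current))
            current, length = [], 0
    if current:
        chunks.append(" ".join(current))
    # Keep only items in the 200-1,000 character range.
    return [c for c in chunks if 200 <= len(c) <= 1000]
```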

This dataset has been processed for convenience, at the expense of losing some percentage of tokens to truncation
(assuming training minibatches are truncated to 128 tokens).
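
As a rough illustration of that truncation (the tokenizer choice here is an assumption, not something stated by this card):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed tokenizer

text = "an ~820-character chunk taken from the dataset ..."  # placeholder item
full = tokenizer(text)["input_ids"]
truncated = tokenizer(text, truncation=True, max_length=128)["input_ids"]

# Tokens beyond position 128 are dropped, which is the source of the
# lost percentage of tokens mentioned above.
print(f"{len(full)} tokens -> {len(truncated)} after truncation")
```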