---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 26076989556
    num_examples: 33536113
  download_size: 15221565467
  dataset_size: 26076989556
---
# Dataset Card for "chunked-wikipedia20220301en-bookcorpusopen"

```
num_examples: 33.5 million
download_size: 15.3 GB
dataset_size: 26.1 GB
```

This dataset combines [wikipedia20220301.en](https://huggingface.co/datasets/wikipedia) and [bookcorpusopen](https://huggingface.co/datasets/bookcorpusopen),
and splits the data into smaller chunks of ~820 characters
(so that each item contains at least ~128 tokens for a typical tokenizer).
The chunking logic splits only on spaces, so chunks are likely to be slightly larger than 820 characters.
The text has been normalized to lower case, with accents and non-English characters removed.
Items with fewer than 200 characters or more than 1000 characters have been removed.
The data has not been shuffled: you can either use `dataset.shuffle(...)`,
or download the pre-shuffled version [here](https://huggingface.co/datasets/sradc/chunked-shuffled-wikipedia20220301en-bookcorpusopen),
which will be faster to iterate over.
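The processing described above can be sketched roughly as follows. This is a hypothetical reconstruction, not the exact script used to build the dataset; the function names are illustrative, and only the constants (~820-character chunks, 200–1000 character filter) come from the description above:

```python
import unicodedata

CHUNK_SIZE = 820  # target chunk size in characters
MIN_CHARS, MAX_CHARS = 200, 1000  # length filter bounds

def normalize(text: str) -> str:
    """Lower-case and strip accents / non-ASCII characters."""
    text = unicodedata.normalize("NFKD", text)
    text = text.encode("ascii", "ignore").decode("ascii")
    return text.lower()

def chunk_on_spaces(text: str, size: int = CHUNK_SIZE) -> list[str]:
    """Greedily pack words into chunks of at least `size` characters.
    Splitting only on spaces means chunks may slightly exceed `size`."""
    chunks, current, length = [], [], 0
    for word in text.split(" "):
        current.append(word)
        length += len(word) + 1  # +1 for the joining space
        if length >= size:
            chunks.append(" ".join(current))
            current, length = [], 0
    if current:
        chunks.append(" ".join(current))
    return chunks

def process(text: str) -> list[str]:
    """Normalize, chunk, and drop chunks outside the length bounds."""
    return [
        c for c in chunk_on_spaces(normalize(text))
        if MIN_CHARS <= len(c) <= MAX_CHARS
    ]
```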

This dataset has been processed for convenience, at the cost of losing some fraction of the tokens to truncation
(assuming the training minibatches are truncated to 128 tokens).