Create README.md #1
by gsgoncalves - opened

README.md ADDED
@@ -0,0 +1,26 @@
---
license: unknown
task_categories:
- fill-mask
- text-generation
language:
- en
pretty_name: RoBERTa Pretrain Dataset
---
# Dataset Card for RoBERTa Pretrain

### Dataset Summary

This is the concatenation of the datasets used to pretrain RoBERTa.
The dataset is not shuffled and contains raw text. It is packaged for convenience.

It is essentially the same as:
```
from datasets import load_dataset, concatenate_datasets

bookcorpus = load_dataset("bookcorpus", split="train")
openweb = load_dataset("openwebtext", split="train")
cc_news = load_dataset("cc_news", split="train")
# Keep only the raw text column so all four datasets share the same schema.
cc_news = cc_news.remove_columns([col for col in cc_news.column_names if col != "text"])
# split="train" already returns a Dataset, so it can be concatenated directly.
cc_stories = load_dataset("spacemanidol/cc-stories", split="train")

dataset = concatenate_datasets([bookcorpus, openweb, cc_news, cc_stories])
```
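
Since the corpus is stored unshuffled, you will typically want to shuffle it before pretraining. A minimal usage sketch, assuming the hypothetical repo id `user/roberta-pretrain` (substitute this dataset's actual repository id):

```
from datasets import load_dataset

# "user/roberta-pretrain" is a hypothetical repo id for illustration only;
# replace it with this dataset's actual repository id.
dataset = load_dataset("user/roberta-pretrain", split="train")

# The corpus is stored in source order, so shuffle before pretraining.
dataset = dataset.shuffle(seed=42)
print(dataset[0]["text"][:200])
```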