---
pretty_name: C4
annotations_creators:
- no-annotation
language_creators:
- found
languages:
- en
licenses:
- odc-by-1-0
multilinguality:
- multilingual
size_categories:
- 100M<n<1B
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: c4
---

# Dataset Card for C4

## Table of Contents

- [Dataset Card for C4](#dataset-card-for-c4)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://huggingface.co/datasets/allenai/c4
- **Paper:** https://arxiv.org/abs/1910.10683

### Dataset Summary

A colossal, cleaned version of Common Crawl's web crawl corpus, based on the Common Crawl dataset: https://commoncrawl.org.

This is the version prepared by AllenAI, hosted at https://huggingface.co/datasets/allenai/c4.

It comes in four variants:

- `en`: 305GB in JSON format
- `en.noblocklist`: 380GB in JSON format
- `en.noclean`: 2.3TB in JSON format
- `realnewslike`: 15GB in JSON format

The `en.noblocklist` variant is exactly the same as the `en` variant, except we turned off the so-called "badwords filter", which removes all documents that contain words from the lists at https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words.

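
For a quick look at the data, a variant can be streamed with the 🤗 Datasets library, which avoids downloading the full 305GB of `en` up front. This is a minimal sketch, assuming the `datasets` package is installed:

```python
# Minimal sketch, assuming `pip install datasets`: stream the `en` variant
# instead of downloading all 305GB before inspecting records.
from datasets import load_dataset

en = load_dataset("allenai/c4", "en", split="train", streaming=True)

# Each record is a dict with `url`, `text`, and `timestamp` keys.
print(next(iter(en)))
```
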
### Supported Tasks and Leaderboards

C4 is mainly intended for pretraining language models and word representations.

### Languages

The dataset is in English.

## Dataset Structure

### Data Instances

An example from the `en` config is:

```
{
  'url': 'https://klyq.com/beginners-bbq-class-taking-place-in-missoula/',
  'text': 'Beginners BBQ Class Taking Place in Missoula!\nDo you want to get better at making delicious BBQ? You will have the opportunity, put this on your calendar now. Thursday, September 22nd join World Class BBQ Champion, Tony Balay from Lonestar Smoke Rangers. He will be teaching a beginner level class for everyone who wants to get better with their culinary skills.\nHe will teach you everything you need to know to compete in a KCBS BBQ competition, including techniques, recipes, timelines, meat selection and trimming, plus smoker and fire information.\nThe cost to be in the class is $35 per person, and for spectators it is free. Included in the cost will be either a t-shirt or apron and you will be tasting samples of each meat that is prepared.',
  'timestamp': '2019-04-25T12:57:54Z'
}
```

### Data Fields

Each data instance has the following fields:

- `url`: URL of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp as a string

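
All three fields are plain strings. As an illustrative sketch (not part of the dataset tooling), the `timestamp` can be parsed with the standard library, since it follows the `YYYY-MM-DDTHH:MM:SSZ` pattern seen in the example above:

```python
# Illustrative sketch: parse the `timestamp` string of a record like the
# example instance above; the record contents here are abbreviated.
from datetime import datetime

record = {
    "url": "https://klyq.com/beginners-bbq-class-taking-place-in-missoula/",
    "text": "Beginners BBQ Class Taking Place in Missoula! ...",
    "timestamp": "2019-04-25T12:57:54Z",
}

parsed = datetime.strptime(record["timestamp"], "%Y-%m-%dT%H:%M:%SZ")
print(parsed.year)  # 2019
```
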
### Data Splits

| name           |     train | validation |
|----------------|----------:|-----------:|
| en             | 364868892 |     364608 |
| en.noblocklist | 393391519 |     393226 |
| en.noclean     |         ? |          ? |
| realnewslike   |  13799838 |      13863 |

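
Individual splits can be requested by name. As a sketch (again assuming the `datasets` package), the `realnewslike` variant is small enough at roughly 15GB to load without streaming:

```python
# Sketch: load only the validation split of the comparatively small
# `realnewslike` variant; the `en` variants are far larger (see table above).
from datasets import load_dataset

val = load_dataset("allenai/c4", "realnewslike", split="validation")
print(len(val))  # 13863 examples, matching the table
```
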
119
+
120
+ ### Curation Rationale
121
+
122
+ [More Information Needed]
123
+
124
+ ### Source Data
125
+
126
+ #### Initial Data Collection and Normalization
127
+
128
+ C4 dataset is a collection of about 750GB of English-language text sourced from the public Common Crawl web scrape. It includes heuristics to extract only natural language (as opposed to boilerplate and other gibberish) in addition to extensive deduplication. You can find the code that has been used to build this dataset in [c4.py](https://github.com/tensorflow/datasets/blob/5952d3d60d60e1727786fa7a9a23d24bb463d4d6/tensorflow_datasets/text/c4.py) by Tensorflow Datasets.
129
+
130
+ The dataset was explicitly designed to be English only: any page that was not given a probability of at least 99% of being English by [langdetect](https://github.com/Mimino666/langdetect) was discarded.
131
+
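
The linked `c4.py` is the authoritative pipeline; purely as an illustration of the language filter just described (and not the code that built the dataset), the 99% threshold could be expressed as:

```python
# Illustration only, NOT the actual TFDS pipeline: keep a page when langdetect
# assigns English a probability of at least 0.99 (assumes `pip install langdetect`).
from langdetect import detect_langs
from langdetect.lang_detect_exception import LangDetectException

def is_english(text: str, threshold: float = 0.99) -> bool:
    try:
        langs = detect_langs(text)
    except LangDetectException:  # raised on empty or feature-less input
        return False
    return any(l.lang == "en" and l.prob >= threshold for l in langs)
```
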
#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

AllenAI is releasing this dataset under the terms of ODC-BY. By using it, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.

### Citation Information

```
@article{2019t5,
    author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
    title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
    journal = {arXiv e-prints},
    year = {2019},
    archivePrefix = {arXiv},
    eprint = {1910.10683},
}
```

### Contributions

Thanks to [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.