Dan Fu committed on
Commit
0a8b6b7
1 Parent(s): 1561e44

Model card

Files changed (1)
  1. README.md +127 -0
README.md CHANGED
---
license: apache-2.0
task_categories:
- text-generation
language:
- en
pretty_name: Red Pajama 1T Sample
---
# Dataset Card for Red Pajama 1T Sample

### Dataset Summary

RedPajama is a clean-room, fully open-source implementation of the LLaMa dataset.
This Hugging Face repo contains a 1B-token sample of the RedPajama dataset.
The full dataset has the following token counts and is available for download LINK:

| Dataset       | Token Count  |
|---------------|--------------|
| Commoncrawl   | 878 Billion  |
| C4            | 175 Billion  |
| GitHub        | 59 Billion   |
| Books         | 26 Billion   |
| ArXiv         | 28 Billion   |
| Wikipedia     | 24 Billion   |
| StackExchange | 20 Billion   |
| Total         | 1.2 Trillion |

A full set of scripts to recreate the dataset from scratch can be found LINK.

### Languages

Primarily English, though the Wikipedia slice contains multiple languages.

## Dataset Structure

The dataset structure is as follows:

```
{
    "text": ...,
    "meta": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...}
}
```

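For illustration, here is a minimal sketch of reading the sample with the `datasets` library. The repo id `togethercomputer/RedPajama-Data-1T-Sample` is an assumption about where this sample is published, and `meta` may be stored either as a dict or as a JSON string depending on the release.

```python
# Minimal sketch (not part of the card): stream the sample and peek at a few records.
# The repo id below is an assumption; adjust it to the actual repository.
import itertools
import json
from datasets import load_dataset

ds = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", split="train", streaming=True)

for example in itertools.islice(ds, 3):
    meta = example["meta"]
    if isinstance(meta, str):  # some releases store meta as a JSON string
        meta = json.loads(meta)
    print(meta.get("source"), len(example["text"]))
```
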
## Dataset Creation

This dataset was created to follow the [LLaMa paper](https://arxiv.org/abs/2302.13971) as closely as possible, in order to reproduce its data recipe.

### Source Data

#### Commoncrawl

We download five dumps from Commoncrawl and run them through the official [`cc_net` pipeline](https://github.com/facebookresearch/cc_net).
We then deduplicate at the paragraph level and filter out low-quality text using a linear classifier trained to
distinguish paragraphs that look like Wikipedia references from random Commoncrawl samples.

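As an illustration of that quality-filtering step (not the actual RedPajama code, which may use a different linear model), the sketch below trains a simple linear classifier to separate Wikipedia-reference-style paragraphs from random Commoncrawl paragraphs; `wiki_ref_paragraphs` and `random_cc_paragraphs` are hypothetical placeholders for the training data.

```python
# Illustrative sketch of the quality filter: a linear classifier scoring paragraphs
# as "Wikipedia-reference-like" vs. "random Commoncrawl". Placeholder training data.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

wiki_ref_paragraphs = ["..."]   # paragraphs cited as references on Wikipedia (placeholder)
random_cc_paragraphs = ["..."]  # random Commoncrawl paragraphs (placeholder)

X = wiki_ref_paragraphs + random_cc_paragraphs
y = [1] * len(wiki_ref_paragraphs) + [0] * len(random_cc_paragraphs)

clf = make_pipeline(HashingVectorizer(n_features=2**18, alternate_sign=False),
                    LogisticRegression(max_iter=1000))
clf.fit(X, y)

def keep(paragraph: str, threshold: float = 0.5) -> bool:
    """Keep a paragraph if it scores closer to Wikipedia-reference text than to random crawl text."""
    return clf.predict_proba([paragraph])[0, 1] >= threshold
```
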
#### C4

C4 is downloaded from Hugging Face. The only preprocessing step is to bring the data into our own format.

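As a rough sketch of that reformatting step (not the actual conversion script), each C4 record's `text`, `url`, and `timestamp` fields can be mapped into the `{"text", "meta"}` layout shown above.

```python
# Illustrative sketch: map a C4 record into the {"text", "meta"} layout used by this dataset.
# Field names follow the allenai/c4 "en" split; the exact conversion script is not shown in the card.
def c4_to_redpajama(record: dict) -> dict:
    return {
        "text": record["text"],
        "meta": {
            "url": record.get("url"),
            "timestamp": record.get("timestamp"),
            "source": "c4",
            "language": "en",
        },
    }
```
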
#### GitHub

The raw GitHub data is downloaded from Google BigQuery. We deduplicate at the file level, filter out low-quality
files, and keep only projects distributed under the MIT, BSD, or Apache license.

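One plausible reading of the file-level deduplication step (the actual pipeline may normalize content or use fuzzier matching) is to hash each file's contents and keep only the first occurrence of each hash, as in the sketch below.

```python
# Illustrative sketch: exact file-level deduplication by content hash.
# Only drops byte-identical files; the real pipeline may be more aggressive.
import hashlib
from typing import Iterable, Iterator

def dedup_files(files: Iterable[dict]) -> Iterator[dict]:
    """Yield files whose content hash has not been seen before. Each file is {"path": ..., "content": ...}."""
    seen: set[str] = set()
    for f in files:
        digest = hashlib.sha256(f["content"].encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            yield f
```
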
#### Wikipedia

We use the Wikipedia dataset available on Hugging Face, which is based on the Wikipedia dump from 2023-03-20 and contains
text in 20 different languages. The dataset comes preprocessed, with hyperlinks, comments, and other
formatting boilerplate already removed.

#### Gutenberg and Books3

The PG19 subset of Project Gutenberg and the Books3 dataset are downloaded from Hugging Face. After downloading, we use
SimHash to remove near-duplicates.

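For intuition, here is a compact SimHash sketch (an illustration, not the pipeline's actual implementation): each document gets a 64-bit fingerprint built from hashed tokens, and two documents whose fingerprints differ in only a few bits are treated as near-duplicates.

```python
# Illustrative 64-bit SimHash: near-duplicate documents get fingerprints with a small Hamming distance.
import hashlib

def simhash(text: str, bits: int = 64) -> int:
    counts = [0] * bits
    for token in text.lower().split():
        h = int.from_bytes(hashlib.md5(token.encode("utf-8")).digest()[:8], "big")
        for i in range(bits):
            counts[i] += 1 if (h >> i) & 1 else -1
    fingerprint = 0
    for i, c in enumerate(counts):
        if c > 0:
            fingerprint |= 1 << i
    return fingerprint

def is_near_duplicate(a: str, b: str, max_hamming: int = 3) -> bool:
    return bin(simhash(a) ^ simhash(b)).count("1") <= max_hamming
```
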
#### ArXiv

ArXiv data is downloaded from Amazon S3 via the `arxiv` requester-pays bucket. We keep only LaTeX source files and
remove preambles, comments, macros, and bibliographies.

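A rough, regex-based sketch of that LaTeX cleaning (only an approximation of the real preprocessing) is shown below: drop everything before `\begin{document}`, strip `%` comments, and remove the bibliography environment.

```python
# Illustrative LaTeX cleanup: strip the preamble, % comments, and the bibliography.
# An approximation; real .tex files have edge cases (verbatim blocks, escaped \%, custom macros).
import re

def clean_latex(source: str) -> str:
    # Drop the preamble: keep only what follows \begin{document}, if present.
    if "\\begin{document}" in source:
        source = source.split("\\begin{document}", 1)[1]
    # Remove comments: a % not preceded by a backslash, through end of line.
    source = re.sub(r"(?<!\\)%.*", "", source)
    # Remove the bibliography environment.
    source = re.sub(r"\\begin\{thebibliography\}.*?\\end\{thebibliography\}", "", source, flags=re.DOTALL)
    return source
```
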
#### Stackexchange

The Stack Exchange split of the dataset is downloaded from the
[Internet Archive](https://archive.org/download/stackexchange). We keep only the posts from the 28 largest sites,
remove HTML tags, group the posts into question-answer pairs, and order answers by their score.

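To illustrate the grouping step (a sketch against the Internet Archive dump layout, not the actual script): each site's `Posts.xml` marks questions with `PostTypeId="1"` and answers with `PostTypeId="2"` plus a `ParentId`, so pairing amounts to collecting answers under their question and sorting them by `Score`.

```python
# Illustrative sketch: group Stack Exchange posts into question-answer pairs, answers sorted by score.
# Assumes the Posts.xml layout of the Internet Archive dumps (PostTypeId 1 = question, 2 = answer).
import xml.etree.ElementTree as ET
from collections import defaultdict

def question_answer_pairs(posts_xml_path: str):
    questions, answers = {}, defaultdict(list)
    for _, row in ET.iterparse(posts_xml_path, events=("end",)):
        if row.tag != "row":
            continue
        attrs = row.attrib
        if attrs.get("PostTypeId") == "1":
            questions[attrs["Id"]] = attrs.get("Body", "")
        elif attrs.get("PostTypeId") == "2":
            answers[attrs.get("ParentId")].append((int(attrs.get("Score", "0")), attrs.get("Body", "")))
        row.clear()  # keep memory bounded on large dumps
    for qid, question in questions.items():
        ranked = [body for _, body in sorted(answers.get(qid, []), reverse=True)]
        yield {"question": question, "answers": ranked}
```
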
<!--
### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
-->