sangmichaelxie committed
Commit 28803ea
1 Parent(s): 812c3c8
.gitattributes CHANGED
@@ -53,3 +53,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.jpg filter=lfs diff=lfs merge=lfs -text
  *.jpeg filter=lfs diff=lfs merge=lfs -text
  *.webp filter=lfs diff=lfs merge=lfs -text
+ *.jsonl filter=lfs diff=lfs merge=lfs -text
+ train_1.jsonl filter=lfs diff=lfs merge=lfs -text
+ train_3.jsonl filter=lfs diff=lfs merge=lfs -text
+ train_4.jsonl filter=lfs diff=lfs merge=lfs -text
+ train_6.jsonl filter=lfs diff=lfs merge=lfs -text
+ train_7.jsonl filter=lfs diff=lfs merge=lfs -text
+ train_9.jsonl filter=lfs diff=lfs merge=lfs -text
+ train_10.jsonl filter=lfs diff=lfs merge=lfs -text
+ train_11.jsonl filter=lfs diff=lfs merge=lfs -text
+ train_2.jsonl filter=lfs diff=lfs merge=lfs -text
+ train_5.jsonl filter=lfs diff=lfs merge=lfs -text
+ train_8.jsonl filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,80 @@
  ---
  license: mit
+ language:
+ - en
+ size_categories:
+ - 10M<n<100M
  ---
+ # Dataset Card for heuristic_classification-filtered-pile-50M
+
+ ## Dataset Description
+
+ - **Repository:** https://github.com/p-lambda/dsir
+ - **Paper:** https://arxiv.org/abs/2302.03169
+ - **Point of Contact:** Sang Michael Xie <xie@cs.stanford.edu>
+
+ ### Dataset Summary
+
+ This dataset is a subset of The Pile, selected via the heuristic classification data selection method. The target distribution for heuristic classification is the Wikipedia and BookCorpus2 subsets of The Pile.
+
+ ### Languages
+
+ English (EN)
+
+ ## Dataset Structure
+
+ A train set (51.2M examples) is provided in JSONL format, sharded across the files `train_1.jsonl` through `train_11.jsonl`.
+
+ ### Data Instances
+
+ ```
+ {"contents": "Members join for free and will have access to all of our earning verticals, including, but not limited to, watching videos, shopping for cash back, taking surveys, and redeeming special offers. Swagbucks is the web's leading rewards platform, dedicated to providing FREE gift cards to its 12+ million members. Choose from top retailers like Amazon, Target, Walmart, Starbucks, PayPal, and tons more.dead full espanol tle work is running out. You\u2019re given a descargar land
+ of the dead full espanol but that respect it\u2019s tons of one another. When the screen. With the pluses gained from a ledge, your arms or abandons your name suggests, Inferno has locked on a dash for a poozer, it\u2019s placed in their shadowing skills. These controls forward, backward, and frankly, the straights. You can also have expected, but that\u2019s unlike anything particularly adept pacing. Each win by so rough idea that\u2019s worth it up. There are a neat sensation to play
+ of a fresh\n\nthe voice actors give up with content and the same innovative control scheme that pulls you invested. From the movement. The unique art style and is still remarkably tough. You\u2019re not", "metadata": {"pile_set_name": ["Pile-CC", "Pile-CC"]}, "id": 303}
+ ```
+
+ ### Data Fields
+
+ ```
+ "contents": the text of the example
+ "metadata": information about the source(s) the text comes from; multiple sources mean the example was concatenated from two sources
+ "id": a non-unique identifier; safe to ignore
+ ```
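+
+ Each line of a shard is one JSON object. As a minimal sketch, the shards can be streamed with the Python standard library; the field names below follow the schema above:
+
+ ```python
+ import json
+
+ # Stream one shard without loading the whole file into memory;
+ # each line is a single JSON-encoded example.
+ with open("train_1.jsonl") as f:
+     for line in f:
+         example = json.loads(line)
+         print(example["contents"][:200])             # the raw text
+         print(example["metadata"]["pile_set_name"])  # source Pile subset(s)
+         break
+ ```
+
+ The same files can also be loaded with `datasets.load_dataset("json", data_files="train_*.jsonl")` if the Hugging Face `datasets` library is available.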
+
+ ## Dataset Creation
+ We first select 102.4M examples, then concatenate every two examples to create 51.2M examples.
+ This ensures that the examples are long enough for a max token length of 512 without much padding.
+ We train the fastText binary classifier for heuristic classification on The Pile validation set, where the target is Wikipedia + BookCorpus2 + Gutenberg + Books3 and the raw data come from the rest of the data sources in The Pile.
+ Specifically, we select 98.4M examples from the non-Wikipedia, non-book sources, then randomly select 2M examples from Wikipedia and 0.66M each from BookCorpus2, Gutenberg, and Books3.
+ After this, we concatenate every two examples.
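+
+ For illustration, a minimal sketch of the selection step, assuming a fastText binary classifier and a GPT-3-style Pareto-noised threshold; the file name, label names, and helper below are illustrative, not from the released code:
+
+ ```python
+ import fasttext  # pip install fasttext
+ import numpy as np
+
+ # Binary quality classifier: __label__target (Wikipedia/books) vs.
+ # __label__raw (all other Pile sources), trained on labeled validation text,
+ # one example per line prefixed with its label.
+ model = fasttext.train_supervised(input="pile_val_labeled.txt")
+
+ def keep(text: str, alpha: float = 9.0) -> bool:
+     """Noisy thresholding: keep an example when the target-class probability
+     beats 1 minus a Pareto(alpha) draw, so high-scoring text is kept often
+     while low-scoring text still gets through occasionally."""
+     labels, probs = model.predict(text.replace("\n", " "))  # predict() rejects newlines
+     p_target = probs[0] if labels[0] == "__label__target" else 1.0 - probs[0]
+     return p_target > 1.0 - np.random.pareto(alpha)
+
+ # Select chunks, then concatenate every two selected examples into one.
+ chunks = ["first 128-word chunk ...", "second 128-word chunk ..."]  # placeholder
+ selected = [t for t in chunks if keep(t)]
+ paired = [" ".join(selected[i:i + 2]) for i in range(0, len(selected) - 1, 2)]
+ ```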
+
+ ### Source Data
+ The Pile
+
+ #### Initial Data Collection and Normalization
+ We select data from The Pile, which comes in 30 random chunks. We reserve chunk 0 for validation purposes and only consider the last 29 chunks.
+ We first divide the documents in The Pile into chunks of 128 words, according to whitespace tokenization.
+ These chunks define the examples on which we perform data selection, totaling 1.7B examples.
+ Before heuristic classification, we first apply a manual quality filter (see paper for details) and only consider the examples that pass the filter.
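+
+ As a rough illustration, the 128-word chunking amounts to plain whitespace splitting (the function name here is ours):
+
+ ```python
+ def chunk_document(doc: str, chunk_size: int = 128) -> list[str]:
+     """Split a document into consecutive chunks of `chunk_size`
+     whitespace-delimited words; the final chunk may be shorter."""
+     words = doc.split()
+     return [" ".join(words[i:i + chunk_size])
+             for i in range(0, len(words), chunk_size)]
+ ```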
+
+ ## Considerations for Using the Data
+
+ The dataset is biased towards data from non-Wikipedia, non-book sources. A balanced approach would be to mix in more data from Wikipedia and books.
+
+ ### Dataset Curators
+
+ Sang Michael Xie, Shibani Santurkar
+
+ ### Citation Information
+ Paper: <https://arxiv.org/abs/2302.03169>
+ ```
+ @article{xie2023data,
+   author  = {Sang Michael Xie and Shibani Santurkar and Tengyu Ma and Percy Liang},
+   journal = {arXiv preprint arXiv:2302.03169},
+   title   = {Data Selection for Language Models via Importance Resampling},
+   year    = {2023},
+ }
+ ```
+
train_1.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fbca57bc313757601b36e82ea67b98f7c0396cc7a2d8644cb48e83dd42557576
+ size 8641930543
train_10.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:045605193ae5421d0f8ea5149b169e59f98c3dd3c47b90467eee24562af8aecc
+ size 84588013
train_11.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9fb0bc225b81e0b9cd33979ef0dd921b5a6741263d8868dfc0b4d769e5639ec3
+ size 84366124
train_2.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c4382f6683611f64a3fe5c47effaa96027b7fdbb7a3e3805101821c9b6bb0fa2
+ size 8639697485
train_3.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:466a18ebe3aee9c9f4e4253d03b187ea1656fb8d29da46e692500500e75469b8
+ size 8645364028
train_4.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9242eccae8b52aa763dd789019e92326906359d7fc7235b642c4447ce4730a30
+ size 8646154649
train_5.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4df24b43e191cf62bee4ac13bc9416a83b661c3e8b0b903f0b47fc1176e93852
+ size 8647411750
train_6.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bac68a5f1ec06b77e901565ce6fcf940bfb72a7624a889254c59d30ff3a05678
+ size 8646963732
train_7.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2d2f84e34e9212017d94375491caa0a695ec1509a14674a55389d86d433944c1
+ size 8646358801
train_8.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef0c937791e8430fc587d08dc30f298f44a258082de218a25744dd91fe70dcfb
+ size 8651141355
train_9.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d07f8a136f1b1a78079519b7a687b25705ffb61ddf310adbc9c08d7130312f81
+ size 17290375570