kernelmachine committed
Commit 2102ac9
1 Parent(s): f26634f
Files changed (2):
  1. README.md +161 -0
  2. open_license_corpus.py +110 -0
README.md CHANGED
---
license: apache-2.0
task_categories:
- text-generation
language:
- en
pretty_name: pubtext
size_categories:
- 100B<n<1T
---
# PubText

Welcome to the Open License Corpus (OLC), a 228B-token corpus for training permissively licensed language models.

**Disclaimer**: OLC should not be considered a universally safe-to-use dataset. We encourage users of OLC to consult a legal professional on the suitability of each data source for their application.
## Dataset Description

- **Repository:** [Silo LM repository](https://github.com/kernelmachine/silo-lm)
- **Paper:** [Silo LM paper](https://github.com/kernelmachine/silo-lm)
- **Point of Contact:** [Suchin Gururangan](mailto:sg01@cs.washington.edu)
### Dataset Summary

OLC draws from the following sources, totaling roughly 228B tokens:

| Domain | Sources | Specific License | # BPE Tokens (in billions; GPT-NeoX tokenizer) |
|----------------|------------------------------------|------------------|------------------|
| Legal | Case Law, Pile of Law (PD subset) | Public Domain | 27.1 |
| Legal | Pile of Law (CC BY-SA subset) | CC BY-SA | 0.07 |
| Code | Github (permissive) | MIT/BSD/Apache | 58.9 |
| Conversational | HackerNews, Ubuntu IRC | MIT/Apache | 5.9 |
| Conversational | Stack Overflow, Stack Exchange | CC BY-SA | 21.3 |
| Math | Deepmind Math, AMPS | Apache | 3.5 |
| Science | ArXiv abstracts, S2ORC (PD subset) | Public Domain | 1.2 |
| Science | S2ORC (CC BY-SA subset) | CC BY-SA | 70.3 |
| Books | Gutenberg | Public Domain | 2.9 |
| News | Public domain news | Public Domain | 0.2 |
| News | Wikinews | CC BY-SA | 0.01 |
| Encyclopedic | Wikipedia | CC BY-SA | 37.0 |
### Supported Tasks and Leaderboards

- `text-generation`: The dataset can be used to train a language model for text generation. Language model performance is evaluated with perplexity, as sketched below.
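Perplexity here is the standard exponentiated average per-token negative log-likelihood; a minimal sketch (this helper is illustrative and not part of the released evaluation code):

```python
import math
from typing import Sequence


def perplexity(token_nlls: Sequence[float]) -> float:
    """Perplexity = exp(mean negative log-likelihood per token, in nats)."""
    return math.exp(sum(token_nlls) / len(token_nlls))


# Example: three tokens with NLLs of 2.0, 3.0, and 4.0 nats -> exp(3.0) ≈ 20.09.
print(perplexity([2.0, 3.0, 4.0]))
```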
### Languages

OLC is primarily an English-language dataset, but it also contains some data in other languages (primarily in the Wikipedia subset, which draws on the [RedPajama](https://github.com/togethercomputer/RedPajama-Data) data collection).
## Dataset Structure

The dataset uses a standard text-only structure, separated into the subsets that we include in the paper. To use a subset of sources, specify each one individually by its config name, like so:

```python
import datasets

pd_law = datasets.load_dataset('kernelmachine/open-license-corpus', 'pd_law', streaming=True)
sw_github = datasets.load_dataset('kernelmachine/open-license-corpus', 'sw_github', streaming=True)
```
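Each subset loads as a streaming dataset of plain-text records; for example (a minimal sketch, with an arbitrary choice of subset):

```python
from itertools import islice

import datasets

# Stream one subset and peek at a few documents without downloading every shard.
pd_law = datasets.load_dataset(
    "kernelmachine/open-license-corpus", "pd_law", split="train", streaming=True
)
for example in islice(pd_law, 3):
    print(example["text"][:200])
```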
### Data Instances and Fields

Each instance is a text-only record, e.g. `{"text": "this is a document"}`. We do not add any other fields to documents.
### Data Splits

We only include the training data in this repository.

For validation, the paper uses the Pile validation data, which we decontaminate OLC against using a deduplication script (see below).

The Pile validation data that we use in the paper can be found [here]().
## Dataset Creation

### License Taxonomy

We group data sources into the following license categories:

* **Public Domain (PD):** public domain text has no restrictions.
* **Permissively licensed software (SW):** including MIT, Apache, and BSD software.
* **Attribution licenses (BY):** such as Creative Commons Attribution (CC-BY), which are free to use as long as "credit is given to the creator."
* **All other data:** anything not in one of the above three categories is assumed to be non-permissive. This includes any text that is explicitly protected by copyright or under non-commercial licenses (e.g., CC-NC), any software without clear MIT, BSD, or Apache licenses, and any generic web-crawled data where the license or copyright information may be unclear.
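As a rough illustration of this taxonomy, sources can be bucketed by license identifier (a sketch with hypothetical identifiers, not the exact metadata used to build OLC):

```python
# Hypothetical mapping from normalized license identifiers to the coarse OLC categories;
# the actual corpus relies on each source's own license metadata.
PD_LICENSES = {"public-domain", "cc0-1.0"}
SW_LICENSES = {"mit", "apache-2.0", "bsd-2-clause", "bsd-3-clause"}
BY_LICENSES = {"cc-by-4.0", "cc-by-sa-4.0"}


def license_category(license_id: str) -> str:
    """Bucket a license identifier into PD, SW, BY, or non-permissive."""
    license_id = license_id.strip().lower()
    if license_id in PD_LICENSES:
        return "PD"
    if license_id in SW_LICENSES:
        return "SW"
    if license_id in BY_LICENSES:
        return "BY"
    # Copyrighted, non-commercial, or unclear licensing is treated as non-permissive.
    return "non-permissive"
```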
### Building OLC

Based on this taxonomy of licenses, we build OLC, a 228B-token corpus of PD, SW, and BY data. OLC consists of 17 manually selected sources of primarily English text that are under permissive licenses.

The text generally falls into eight different domains:
* **Legal:** We curate legal text from the Pile of Law, an amalgamation of 31 different sources of text related to civil court cases, patents, and other legal and governmental works, either licensed as public domain or CC-BY. We also gather public domain text from the Case Law Access Project, which covers over 6.5 million decisions published by state and federal courts throughout U.S. history.

* **Code:** We use the Github subset of the RedPajama dataset, which contains code from Github repositories with three permissive software licenses: MIT, Apache, and BSD.

* **Conversation:** We source conversational text under permissive software licenses from the HackerNews (MIT license) and the Ubuntu IRC (Apache license) subsets of the Pile. We also use the Stackexchange subset of the RedPajama dataset and a Stackoverflow corpus from Kaggle, both under the CC-BY-SA license.

* **Math:** We source mathematical text from the Deepmind Mathematics and the AMPS datasets, both of which are under the Apache license.

* **Science:** We source scientific text from ArXiv abstracts that are in the public domain. We also collect full-text articles from the Semantic Scholar Research Corpus (S2ORC), either licensed as public domain or CC-BY.

* **Books:** We source books from the Gutenberg corpus, which are copyright-expired books that are in the public domain.

* **News:** We collect public domain news text from the English subset of the MOT corpus. We also collect text from Wikinews, which is under CC BY-SA.

* **Encyclopedic:** Finally, we include a large set of Wikipedia articles from the subset included in RedPajama. We follow RedPajama in using Wikipedia snapshots from 20 languages even though the model primarily focuses on English.
#### Initial Data Collection and Normalization

We deduplicate text using a document-level filter that considers $n$-gram overlap. We first deduplicate within each domain to remove redundant documents from similar sources (e.g. Case Law and the Pile of Law), and then perform deduplication against the validation and test datasets of the Pile to avoid test leakage.
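A minimal sketch of such a document-level $n$-gram overlap filter (the $n$-gram size and overlap threshold below are illustrative, not the exact settings used for OLC):

```python
from typing import Iterable, List, Set


def ngrams(text: str, n: int = 8) -> Set[str]:
    """Return the set of word-level n-grams in a document."""
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}


def deduplicate(documents: Iterable[str], n: int = 8, threshold: float = 0.5) -> List[str]:
    """Keep a document only if its n-gram overlap with previously kept documents is below the threshold."""
    seen: Set[str] = set()
    kept: List[str] = []
    for doc in documents:
        grams = ngrams(doc, n)
        if not grams:
            continue
        overlap = len(grams & seen) / len(grams)
        if overlap < threshold:
            kept.append(doc)
            seen.update(grams)
    return kept
```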
We do not perform any additional quality filtering, though some subsets (e.g. Github and Wikipedia) are already quality filtered by the original data curators of those subsets.
#### Who are the source language producers?

The source language producers vary by domain; the Legal subset primarily contains governmental documents, while the Github subset contains code repositories written by the public. We refer to each data source for further information.

### Annotations

The dataset does not contain any additional annotations.

#### Annotation process

[N/A]

#### Who are the annotators?

[N/A]

### Personal and Sensitive Information

We do not perform additional filtering to remove personally identifiable information, so it is possible that certain subsets still pose privacy risks despite being permissively licensed.
## Considerations for Using the Data

Please see the disclaimer above. The license associated with a document may be time- and country-dependent. Moreover, other legal constraints may prohibit the use of a data source despite a permissive data license. We encourage users of OLC to consult a legal professional on the suitability of each data source for their application.

### Social Impact of Dataset

OLC is the first multidomain, permissively licensed corpus, which can enable language models that align better with data-use regulations such as the fair-use doctrine in the United States and the GDPR in the European Union.

### Discussion of Biases and Limitations

While OLC mitigates copyright and privacy risks, it may exacerbate certain fairness issues, like toxicity towards marginalized groups and racial biases, especially due to the prevalence of older copyright-expired books in the training data.

In addition, OLC relies on explicit metadata to identify licenses, which may lead to underestimates of the amount and diversity of permissively licensed text actually available on the web.

### Dataset Curators

OLC was curated by the authors of SILO language models.

### Licensing Information

We release this corpus under the Apache 2.0 license.

### Citation Information
open_license_corpus.py ADDED
import datasets
import os
from datasets import load_dataset
import gzip
import json

logger = datasets.logging.get_logger(__name__)


CITATION = """
"""

DESCRIPTION = """
The Open License Corpus
"""

OLC_SUBSET_NAMES = [
    "ccby_law",
    "ccby_s2orc",
    "ccby_stackexchange",
    "ccby_stackoverflow",
    "ccby_wikinews",
    "ccby_wikipedia",
    "pd_arxiv_abstracts",
    "pd_books",
    "pd_law",
    "pd_news",
    "pd_s2orc",
    "sw_amps_math",
    "sw_dm_math",
    "sw_github",
    "sw_hackernews",
    "sw_ubuntu_irc"
]

URL = "https://huggingface.co/datasets/kernelmachine/open-license-corpus/"


# Number of gzipped JSONL shards per subset and split.
N_SHARDS_PER_SPLIT = {
    "ccby_s2orc": {"train": 5000},
    "ccby_law": {"train": 50},
    "ccby_stackexchange": {"train": 1500},
    "ccby_stackoverflow": {"train": 750},
    "ccby_wikinews": {"train": 42},
    "ccby_wikipedia": {"train": 3000},
    "pd_arxiv_abstracts": {"train": 1},
    "pd_books": {"train": 150},
    "pd_law": {"train": 2000},
    "pd_news": {"train": 10},
    "pd_s2orc": {"train": 30},
    "sw_amps_math": {"train": 2},
    "sw_dm_math": {"train": 239},
    "sw_github": {"train": 2500},
    "sw_hackernews": {"train": 16},
    "sw_ubuntu_irc": {"train": 27}
}

#DATA_URL = 'https://huggingface.co/datasets/kernelmachine/open-license-corpus/blob/main/data/{name}/{split}-{index:05d}-of-{n_shards:05d}.jsonl.gz'
# URL template for a single gzipped JSONL shard on the Hugging Face Hub ('resolve' serves the raw file, unlike 'blob' above).
DATA_URL = 'https://huggingface.co/datasets/kernelmachine/open-license-corpus/resolve/main/data/{name}/{split}-{index:05d}-of-{n_shards:05d}.jsonl.gz'


class OpenLicenseCorpusConfig(datasets.BuilderConfig):
    def __init__(self, features, citation, **kwargs):
        super().__init__(**kwargs)

class OpenLicenseCorpus(datasets.GeneratorBasedBuilder):

    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name=name)
        for name in OLC_SUBSET_NAMES
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=DESCRIPTION,
            features=datasets.Features(
                {
                    "text": datasets.Value("string"),
                }
            ),
            supervised_keys=None,
            homepage=URL,
            citation=CITATION,
        )

    def _split_generators(self, dl_manager):
        data_urls = {}
        for split in ["train"]:
            n_shards = N_SHARDS_PER_SPLIT[self.config.name][split] - 1
            data_urls[split] = [
                DATA_URL.format(name=self.config.name, split=split, index=index, n_shards=n_shards)
                for index in range(n_shards)
            ]

        train_downloaded_files = dl_manager.download(data_urls["train"])

        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepaths": train_downloaded_files})]

    def _generate_examples(self, filepaths):
        """This function returns the examples in the raw (text) form by iterating on all the files."""
        id_ = 0
        for filepath in filepaths:
            logger.info("generating examples from = %s", filepath)
            with gzip.open(open(filepath, "rb"), "rt", encoding="utf-8") as f:
                for line in f:
                    if line:
                        example = json.loads(line)
                        yield id_, example
                        id_ += 1
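For a quick local test of this script, a minimal usage sketch (the subset choice is arbitrary; `pd_news` is used here only because it has few shards, since building a subset downloads all of its shards):

```python
import datasets

# Build the small "pd_news" subset with the local loading script.
ds = datasets.load_dataset("./open_license_corpus.py", "pd_news", split="train")
print(ds[0]["text"][:200])
```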