---
license: apache-2.0
task_categories:
  - text-generation
language:
  - en
pretty_name: pubtext
size_categories:
  - 100B<n<1T
---

# PubText

Welcome to the Open License Corpus (OLC), a 228B-token corpus for training permissively licensed language models.

Disclaimer: OLC should not be considered a universally safe-to-use dataset. We encourage users of OLC to consult a legal professional on the suitability of each data source for their application.

## Dataset Description

### Dataset Summary

| Domain | Sources | Specific License | # BPE Tokens (billions; GPT-NeoX tokenizer) |
|---|---|---|---|
| Legal | Case Law, Pile of Law (PD subset) | Public Domain | 27.1 |
| Legal | Pile of Law (CC BY-SA subset) | CC BY-SA | 0.07 |
| Code | Github (permissive) | MIT/BSD/Apache | 58.9 |
| Conversational | HackerNews, Ubuntu IRC | MIT/Apache | 5.9 |
| Conversational | Stack Overflow, Stack Exchange | CC BY-SA | 21.3 |
| Math | Deepmind Math, AMPS | Apache | 3.5 |
| Science | ArXiv abstracts, S2ORC (PD subset) | Public Domain | 1.2 |
| Science | S2ORC (CC BY-SA subset) | CC BY-SA | 70.3 |
| Books | Gutenberg | Public Domain | 2.9 |
| News | Public domain news | Public Domain | 0.2 |
| News | Wikinews | CC BY-SA | 0.01 |
| Encyclopedic | Wikipedia | CC BY-SA | 37.0 |

### Supported Tasks and Leaderboards

- `text-generation`: The dataset can be used to train a language model for text generation. Language model performance is evaluated with perplexity.
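
A minimal sketch of a perplexity evaluation, assuming a causal language model loaded through Hugging Face `transformers` (the model name below is only a placeholder, not a model trained on OLC):

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model; substitute a model trained on OLC.
model_name = "EleutherAI/pythia-160m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

text = "this is a document"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels makes the model return the mean token-level
    # cross-entropy loss; perplexity is its exponential.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print(f"perplexity: {math.exp(loss.item()):.2f}")
```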

### Languages

OLC is primarily an English-language dataset, but it also contains some data in other languages (mostly in the Wikipedia subset, which draws on the RedPajama data collection).

## Dataset Structure

The dataset uses a standard text-only structure and is separated into the subsets that we include in the paper. Each subset can be loaded individually:

```python
from datasets import load_dataset

dataset = load_dataset('kernelmachine/open-license-corpus', 'pd_law', streaming=True)['train']
```

To use a collection of sources, specify each one individually and interleave them, like so:

```python
from datasets import interleave_datasets, load_dataset

d1 = load_dataset('kernelmachine/open-license-corpus', 'pd_law', streaming=True)['train']
d2 = load_dataset('kernelmachine/open-license-corpus', 'sw_github', streaming=True)['train']
d1_d2 = interleave_datasets([d1, d2], probabilities=[0.8, 0.2], seed=42)
```
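
Streaming datasets are consumed lazily, so you can peek at the interleaved mixture without downloading the full corpus; continuing from the snippet above:

```python
# Print the first few documents from the interleaved stream.
for i, example in enumerate(d1_d2):
    print(example['text'][:80])
    if i == 2:
        break
```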

### Data Instances and Fields

The dataset uses a standard text-only structure, e.g. `{"text": "this is a document"}`. We do not add any other fields to documents.
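
For example, inspecting the first record of a streamed subset:

```python
from datasets import load_dataset

stream = load_dataset('kernelmachine/open-license-corpus', 'pd_law', streaming=True)['train']
record = next(iter(stream))
print(list(record.keys()))  # ['text']
```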

### Data Splits

We only include the training data in this repository.

For validation, the paper uses the Pile validation data, against which we decontaminate OLC using a deduplication script (see below).

The Pile validation data that we use in the paper can be found here.

## Dataset Creation

### License Taxonomy

- Public Domain (PD): public domain text carries no restrictions.
- Permissively licensed software (SW): software under permissive licenses, including MIT, Apache, and BSD.
- Attribution licenses (BY): licenses such as Creative Commons Attribution (CC-BY), which permit use as long as "credit is given to the creator."
- All other data: anything not in one of the above three categories is assumed to be non-permissive. This includes any text explicitly protected by copyright, licenses that are non-commercial (e.g., CC-NC), any software without a clear MIT, BSD, or Apache license, and any generic web-crawled data where the license or copyright information is unclear.
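
As a toy illustration of this taxonomy (not the actual curation script used to build OLC), documents could be bucketed by their license metadata; the license strings and the `license_bucket` helper below are hypothetical:

```python
# Hypothetical license buckets; real-world license metadata is messier.
PD_LICENSES = {"public-domain", "cc0-1.0"}
SW_LICENSES = {"mit", "apache-2.0", "bsd-2-clause", "bsd-3-clause"}
BY_LICENSES = {"cc-by-4.0", "cc-by-sa-4.0"}

def license_bucket(license_str):
    """Map a raw license string to PD, SW, BY, or non-permissive."""
    key = (license_str or "").strip().lower()
    if key in PD_LICENSES:
        return "PD"
    if key in SW_LICENSES:
        return "SW"
    if key in BY_LICENSES:
        return "BY"
    # Copyrighted, NC-licensed, or unknown data is excluded.
    return "non-permissive"

print(license_bucket("MIT"))           # SW
print(license_bucket("cc-by-nc-4.0"))  # non-permissive
```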

### Building OLC

Based on this taxonomy of licenses, we build OLC, a 228B-token corpus of PD, SW, and BY data. OLC consists of 17 manually selected sources of primarily English text that are under permissive licenses.

The text generally falls into eight different domains:

- Legal: We curate legal text from the Pile of Law, an amalgamation of 31 different sources of text related to civil court cases, patents, and other legal and governmental works, either licensed as public domain or CC-BY. We also gather public domain text from the Case Law Access Project, which covers over 6.5 million decisions published by state and federal courts throughout U.S. history.

- Code: We use the Github subset of the RedPajama dataset, which contains code from Github repositories under three permissive software licenses: MIT, Apache, and BSD.

- Conversation: We source conversational text under permissive software licenses from the HackerNews (MIT license) and Ubuntu IRC (Apache license) subsets of the Pile. We also use the Stack Exchange subset of the RedPajama dataset and a Stack Overflow corpus from Kaggle, both under the CC BY-SA license.

- Math: We source mathematical text from the Deepmind Mathematics and AMPS datasets, both of which are under the Apache license.

- Science: We source scientific text from ArXiv abstracts that are in the public domain. We also collect full-text articles from the Semantic Scholar Research Corpus (S2ORC), either licensed as public domain or CC-BY.

- Books: We source books from the Gutenberg corpus, which consists of copyright-expired books in the public domain.

- News: We collect public domain news text from the English subset of the MOT corpus. We also collect text from Wikinews, which is under CC BY-SA.

- Encyclopedic: Finally, we include a large set of Wikipedia articles from the subset included in RedPajama. We follow RedPajama in using Wikipedia snapshots from 20 languages, even though the model primarily focuses on English.

### Initial Data Collection and Normalization

We deduplicate text using a document-level filter that considers n-gram overlap. We first deduplicate within each domain to remove redundant documents from similar sources (e.g., Case Law and the Pile of Law), and then perform deduplication against the validation and test datasets of the Pile to avoid test leakage.
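
A minimal sketch of the idea behind document-level n-gram overlap filtering; the value of n, the threshold, and the helper names here are illustrative, not the paper's settings:

```python
def ngrams(text, n=3):
    """Return the set of word n-grams in a document."""
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_duplicate(doc, seen, threshold=0.8, n=3):
    """Flag a document if most of its n-grams were already seen."""
    grams = ngrams(doc, n)
    if not grams:
        return False
    overlap = len(grams & seen) / len(grams)
    if overlap >= threshold:
        return True
    seen |= grams  # register this document's n-grams
    return False

seen = set()
docs = [
    "the quick brown fox jumps over the lazy dog",
    "the quick brown fox jumps over the lazy dog",  # exact duplicate
    "an entirely different document about case law",
]
kept = [d for d in docs if not is_duplicate(d, seen)]
print(len(kept))  # 2
```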

We do not perform any additional quality filtering, though some subsets (e.g. Github and Wikipedia) are already quality filtered by the original data curators of those subsets.

#### Who are the source language producers?

The source language producers vary by domain: the Legal subset primarily contains governmental documents, while the Github subset contains code repositories written by the public. We refer readers to each data source for further information.

### Annotations

The dataset does not contain any additional annotations.

#### Annotation process

[N/A]

#### Who are the annotators?

[N/A]

### Personal and Sensitive Information

We do not perform additional filtering to remove personally identifiable information, so it is possible that certain subsets still pose privacy risks despite being permissively licensed.

## Considerations for Using the Data

Please see the disclaimer above. The license associated with a document may be time- and country-dependent. Moreover, other legal constraints may prohibit the use of a data source despite a permissive data license. We encourage users of OLC to consult a legal professional on the suitability of each data source for their application.

### Social Impact of Dataset

OLC is the first multi-domain, permissively licensed corpus, which can enable language models that align better with data-use regulations such as the fair-use doctrine in the United States and the GDPR in the European Union.

### Discussion of Biases and Limitations

While OLC mitigates copyright and privacy risks, it may exacerbate certain fairness issues, like toxicity towards marginalized groups and racial biases, especially due to the prevalence of older copyright-expired books in the training data.

In addition, OLC relies on explicit metadata to identify licenses, which may lead to underestimates of the amount and diversity of permissively licensed text actually available on the web.

## Dataset Curators

OLC was curated by the authors of SILO language models.

## Licensing Information

We release this corpus under the Apache 2.0 license.

## Citation Information