
Access to this dataset is automatically granted upon accepting the AI2 ImpACT License – Low Risk Artifacts (“LR Agreement”) and completing all fields below. All data subsets in this dataset are licensed under the LR Agreement, except for those as listed in the 'License' section of the Dataset Card.


Dataset Card for Paloma

Language models (LMs) commonly report perplexity on monolithic data held out from the training distribution. Implicitly or explicitly, this data is composed of domains—variations in the distribution of language. Rather than assuming perplexity on one distribution extrapolates to others, Perplexity Analysis for Language Model Assessment (Paloma) measures LM fit to 585 text domains, ranging from NY Times to r/depression on Reddit.

Dataset Details

Benchmark Inference and Submissions

We invite submissions to our benchmark and organize results by comparability based on compliance with guidelines such as the removal of benchmark contamination from pretraining. Standardized inference code for running comparable evaluations and details about making submissions to the Paloma benchmark can be found at the following link.

How to evaluate and how to submit

Dataset Description

Paloma is for examining relative differences in LM fit on domains. We take these relative differences as a proxy of model fit to the shared knowledge, values, and social context that position the humans producing language in a domain. While we expect contemporary LMs to have a limited fit to the most complex of these latent factors of domains, improving fit to all factors is necessary both to improve perplexity and for any actual use of the LM. For example, better perplexity on a particular dialect of English suggests that that model will make a better chatbot for people who speak that dialect.

The sources of evaluation data in Paloma were selected based on the following desiderata: 1) including known resources, 2) including fine-grained domains, 3) including domains representing specific communities of interest. Different lines of research will require different selections of domains; Paloma aims to enable research on differences in LM fit over the hundreds of domains that are readily available in existing metadata.

Note that we are not able to re-host 2 of the 18 sources in Paloma comprising 39 domains. These are The Pile and ICE. The ICE corpus is available on request to the original authors following the instructions here.

Curated by: Ian Magnusson, Akshita Bhagia, Valentin Hofmann, Luca Soldaini, Ananya Harsh Jha, Oyvind Tafjord, Dustin Schwenk, Evan Pete Walsh, Yanai Elazar, Kyle Lo, Dirk Groeneveld, Iz Beltagy, Hannaneh Hajishirzi, Noah A. Smith, Kyle Richardson, and Jesse Dodge

Languages: We elect to focus just on the language modeling of English and code data.

License: The data subsets are licensed under the AI2 ImpACT License - Low Risk Artifacts, except as listed below.

  • Wikitext-103 - CC BY-SA
  • TwitterAAE - for research purposes only
  • Red Pajama - see license details
  • M2D2 - CC BY-NC

Paper: https://arxiv.org/abs/2312.10523


Uses

This benchmark is intended for use in evaluating language model fit to fine-grained domains.

Direct Use

This dataset should be used for evaluating the likelihood that a language model assigns to text from a given domain.
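Concretely, perplexity is the exponentiated mean negative log-likelihood per token. A minimal sketch of that computation (the `perplexity` helper below is illustrative only; it is not the benchmark's standardized inference code):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the negative mean log-probability per token."""
    n = len(token_logprobs)
    return math.exp(-sum(token_logprobs) / n)

# If a model assigns each of 4 tokens probability 0.25, the mean negative
# log-probability is -ln(0.25), so perplexity is 4 (up to floating point).
lps = [math.log(0.25)] * 4
print(perplexity(lps))
```

In practice the per-token log-probabilities come from running a language model over each document; the standardized inference code linked above handles this for benchmark submissions.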

Out-of-Scope Use

Note that the sources contained in this benchmark carry varying licenses with differing restrictions (see the License section); any use must comply with the license of each subset.

Dataset Structure

The sources in this dataset are each organized into their own subcorpus, consisting of a val and a test split. Data within each split is stored as files of line-separated JSON (JSONL), where each line represents a document and its associated metadata. The metadata available varies from source to source, but each line contains at least a 'text' field holding the text of the document.
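The layout above can be read with a few lines of Python. A minimal sketch (the file path shown is hypothetical; actual file names vary by source):

```python
import json

def read_documents(path):
    """Yield one document dict per line of a JSONL split file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

# Hypothetical path for illustration; every document has at least a 'text' field.
# for doc in read_documents("c4_en/val/val.jsonl"):
#     print(doc["text"][:80])
```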

Dataset Creation

Curation Rationale

Perplexity is conventionally reported on held-out data from a model's training distribution or on a small number of traditional test sets. Such monolithic evaluation ignores potential variation of model fit across the different domains that LMs implicitly learn to model. We curate sources of fine-grained textual domains in Paloma to enable evaluation of language model fit to specific domains of text. Paloma is inspired by and incorporates previous work that curates corpora with marked domains (The Pile, M2D2, C4 100 Domains, ICE, TwitterAAE). We conduct a stratified subsample over domains, setting a minimum subsample size based on empirical estimation of the variance over subsamples.

Source Data

Standard language modeling benchmarks

Though it is common practice to evaluate on held out data from the pretraining corpus of a given model, we evaluate across several major pretraining corpora and standard language modeling benchmarks. We also break down performance per domain within the datasets that have multiple domains. Note that although the Paloma benchmark analysis in our paper describes results on the Pile, we are not able to re-host this data.

  • c4-en (Raffel et al., 2019, via Dodge et al., 2021): Standard contemporary LM pretraining corpus automatically filtered from the April 2019 Common Crawl scrape
  • mc4-en (Xue et al., 2021): The English-language portion of a pretraining corpus automatically filtered from 71 Common Crawl scrapes
  • Pile (Gao et al., 2020): Standard contemporary LM benchmark from curated multi-source data including large-scale non-webscraped sources
  • Wikitext-103 (Merity et al., 2016): A standard collection of verified "Good" and "Featured" articles on Wikipedia
  • Penn Tree Bank (Marcus et al., 1999, via Nunes, 2020): Classic Wall Street Journal benchmark with linguistic structure annotations omitted
  • RedPajama (Together Computer, 2023): A publicly available reproduction of the LLaMA (Touvron et al., 2023) pretraining source mixture, combining large amounts of webscraped text with smaller curated sources
  • Falcon-RefinedWeb (Penedo et al., 2023): A corpus of English sampled from all Common Crawl scrapes until June 2023, more aggressively filtered and deduplicated than c4-en and mc4-en
  • Dolma v1.5 (Soldaini et al., 2023): A three-trillion-token corpus that samples sources commonly used to train LMs in order to enable open research on pretraining data

Fine-grained domain benchmarks

Where typical pretraining corpora offer at most tens of labeled domains usually based on where the data is sourced, we examine datasets with up to an order of magnitude more domains. Existing datasets (M2D2 and c4 100 Domains) and datasets we curate from Dolma v1.5 use metadata to define hundreds of domains over Wikipedia, Semantic Scholar, Common Crawl, Reddit, and Github data. These include diverse domains from Culture and the arts: Performing arts, a topic on Wikipedia, to r/depression, a forum on Reddit for mental health support.

  • M2D2 S2ORC (Reid et al., 2022): Papers from Semantic Scholar grouped by hierarchical academic field categories
  • M2D2 Wiki (Reid et al., 2022): Wikipedia articles grouped by hierarchical categories in the Wikipedia ontology
  • c4 100 Domains (Chronopoulou et al., 2021): Balanced samples of the top 100 URL domains in C4
  • Dolma 100 Subreddits (Soldaini et al., 2023): Balanced samples of the top 100 subreddits from the Dolma Reddit subset
  • Dolma 100 Programming Languages (Kocetkov et al., 2022, via Soldaini et al., 2023): Balanced samples of the top 100 programming languages from the Dolma Stack subset

Disparities between speech communities

Some communities are known to be underserved by existing models. Following HELM, we measure disparities in performance on corpora of African American English and White-aligned English from TwitterAAE, as well as nine corpora of English from different countries in the ICE dataset. Note that although the Paloma benchmark analysis in our paper describes results on ICE, we are not able to re-host this data.

  • ICE (Greenbaum and Nelson, 1996, via Liang et al., 2022): English from around the world curated by local experts, with subsets for Canada, East Africa, Hong Kong, India, Ireland, Jamaica, the Philippines, Singapore, and the USA
  • TwitterAAE (Blodgett et al., 2016, via Liang et al., 2022): Balanced sets of tweets classified as African American or White-aligned English

Fringe sources previously studied for problematic discourse

Text from some fringe online communities has been shown to contain larger proportions of hate speech and toxicity than more mainstream sources. Longpre et al. (2023) have shown that varying the amount of toxic content in pretraining data exhibits a tradeoff between non-toxic generation and the ability to classify toxicity, indicating that model fit to discourse containing toxicity is worth measuring. Measuring perplexity on Manosphere, Gab, and 4chan characterizes model familiarity with distinct social contexts in which toxic language arises.

  • Manosphere Corpus (Ribeiro et al., 2020): Nine forums where a set of related masculinist ideologies developed over the 2000s and 2010s
  • Gab Corpus (Zannettou et al., 2018): Data from 2016-18 from an alt-right, free-speech-oriented social media platform shown to contain more hate speech than mainstream platforms
  • 4chan Corpus (Papasavva et al., 2020): Data from 2016-19 from a politics subforum of an anonymity-focused forum found to contain among the highest rates of toxic content

Data Collection and Processing

The data in Paloma are sampled from existing sources. Most often, perplexity evaluation data is subsampled uniformly over the original distribution of domains in a source, resulting in more or fewer tokens from each domain in the evaluation data based on how well represented they are in the corpus. We instead employ stratified sampling, in which all sources with marked domains are partitioned by domain and a uniform sample of the same size is taken from each partition. Specifically, documents are sampled from each domain until a target number of tokens is reached. This helps ensure that no domains are lost or left very small after subsampling.
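The stratified procedure can be sketched as follows. This is a simplified illustration, not the benchmark's actual sampling code: whitespace splitting stands in for the real tokenizer, and the `stratified_sample` helper is hypothetical.

```python
import random

def stratified_sample(docs_by_domain, target_tokens, seed=0):
    """Sample documents per domain until each reaches ~target_tokens tokens.

    `docs_by_domain` maps domain name -> list of documents (dicts with a
    'text' field). Whitespace splitting is a stand-in for real tokenization.
    """
    rng = random.Random(seed)
    sample = {}
    for domain, docs in docs_by_domain.items():
        pool = docs[:]
        rng.shuffle(pool)  # draw documents in random order
        chosen, tokens = [], 0
        for doc in pool:
            if tokens >= target_tokens:
                break
            chosen.append(doc)
            tokens += len(doc["text"].split())
        sample[domain] = chosen
    return sample
```

Because every domain is sampled to the same token budget, small domains are preserved rather than being drowned out by large ones, at the cost of over-representing them relative to the source's original distribution.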

In social media domains with additional metadata that is typically displayed along with posts, we format metadata such as timestamps into the document 'text' field. Where information is available about how threads of posts are connected, documents in that domain contain all posts in a given thread.

Additional details on source specific processing are available in our paper.

Who are the source data producers?

Text data from each of the sources curated in Paloma is created by varying sets of original authors. Some sources are collected from users of specific internet fora, such as particular subreddits. Other data is collected on the basis of expert or automated classification of demographic groups. Still other data is collected from authors of archival material including scientific preprints, Wikipedia, and code repositories. Lastly, data sampled from standard pretraining corpora comes from authors collected through automatic web scraping and large-scale sampling of archival sources, making it difficult to recover much specific information about these authors.

Annotation process

No annotation is done on this data.

Who are the annotators?

No annotation is done on this data.

Personal and Sensitive Information

Sources in Paloma may contain personally identifiable information (PII). No attempt is made to measure or remove this information for the following reason: Paloma provides a small subsample of already publicly available data. The small size of this subsample renders this data less useful for aggregation of PII information than the already available public sources which we subsample.

Bias, Risks, and Limitations

It is beyond the scope of any one group of researchers to prescribe an exhaustive set of domains that should be examined for an LM. Rather, Paloma brings together a substantial selection of domains that are identifiable from already available metadata to demonstrate the kinds of analyses possible with hundreds of domains and rigorous experimental controls. Different research goals will motivate different definitions and selections of domains, but other researchers can apply the guidelines we detail in our paper to novel fine-grained domains suitable for their research questions. One of the key advantages of evaluating a model by its fit to a collection of text representing a domain is that such domains can be identified not just by researchers who study LMs. We hope future work will identify many more domains that no one discipline would think to look at.

In Paloma, we distinguish sources from domains, although not all cases permit such easy distinction. We use source to refer to a selection of data that is characterized by the decisions of the people who curated that data, whether that curation is automatic as in scraping C4 or manual as in selecting the subcorpora of The Pile. By contrast, we use domain to refer to a set of documents that belong together because they are originally produced by a group of humans who share a distinct social context. Considered as such, domains may overlap; a document's author may belong to the set of English speakers in Jamaica and the set of AI researchers. Further note that domains are often latent categorizations which we only approximate, because complete metadata does not exist.

Also, some domains in Paloma appear in multiple sources, such as academic papers. Though The Pile and RedPajama process academic papers differently, the subcorpora on academic papers in each source represent different approximations of the same or very similar domains. However, for the sake of simplicity, we make the reductive assumption of counting all 585 domains in Paloma as fully distinct.

Recommendations

In our paper we outline guidelines for evaluating language model fit. We encourage users of Paloma to adopt these experimental controls, which address metric variance from subsampling, benchmark contamination, differing tokenization, training data order, and evaluation data format.

Citation

BibTeX:

@article{paloma,
  title={{Paloma}: A Benchmark for Evaluating Language Model Fit},
  author={Magnusson, Ian and Bhagia, Akshita and Hofmann, Valentin and Soldaini, Luca and Harsh Jha, Ananya and Tafjord, Oyvind and Schwenk, Dustin and Walsh, Evan Pete and Elazar, Yanai and Lo, Kyle and Groeneveld, Dirk and Beltagy, Iz and Hajishirzi, Hannaneh and Smith, Noah A. and Richardson, Kyle and Dodge, Jesse},
  journal={technical report},
  year={2023},
  url={https://paloma.allen.ai/}
} 

Dataset Card Contact

{ianm,jessed}@allenai.org
