datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card |
---|---|---|---|---|---|---|---|---|
mteb/sts13-sts | mteb | "2022-09-27T19:12:02Z" | 19,416 | 1 | [
"language:en",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2022-04-20T10:47:41Z" | ---
language:
- en
--- |
orionweller/reddit_mds_incremental | orionweller | "2024-07-23T17:17:42Z" | 19,345 | 0 | [
"region:us"
] | null | "2024-06-24T14:44:04Z" | ---
dataset_info:
features: []
splits:
- name: creation
num_bytes: 0
num_examples: 0
download_size: 324
dataset_size: 0
configs:
- config_name: default
data_files:
- split: creation
path: data/creation-*
---
|
ptb-text-only/ptb_text_only | ptb-text-only | "2024-01-18T11:13:39Z" | 18,974 | 15 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:other",
"size_categories:10K<n<100K",
"region:us"
] | [
"text-generation",
"fill-mask"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- other
license_details: LDC User Agreement for Non-Members
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: Penn Treebank
dataset_info:
features:
- name: sentence
dtype: string
config_name: penn_treebank
splits:
- name: train
num_bytes: 5143706
num_examples: 42068
- name: test
num_bytes: 453710
num_examples: 3761
- name: validation
num_bytes: 403156
num_examples: 3370
download_size: 5951345
dataset_size: 6000572
---
# Dataset Card for Penn Treebank
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://catalog.ldc.upenn.edu/LDC99T42
- **Repository:** https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.train.txt, https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.valid.txt, https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.test.txt
- **Paper:** https://www.aclweb.org/anthology/J93-2004.pdf
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This is the Penn Treebank Project: Release 2 CDROM, featuring a million words of 1989 Wall Street Journal material.
Rare words in this version have already been replaced with the `<unk>` token, and numbers with the `<N>` token.
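The split can be loaded directly with the `datasets` library; a minimal sketch (the `penn_treebank` config name and the `sentence` field come from the metadata above):
```python
from datasets import load_dataset

# Load the training split of the penn_treebank config
ptb = load_dataset("ptb-text-only/ptb_text_only", "penn_treebank", split="train")
print(ptb[0]["sentence"])  # one WSJ sentence with <unk>/<N> placeholders
```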
### Supported Tasks and Leaderboards
Language Modelling
### Languages
The text in the dataset is in American English.
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The dataset is provided for research purposes only. Please check the dataset license for additional information.
### Citation Information
@article{marcus-etal-1993-building,
title = "Building a Large Annotated Corpus of {E}nglish: The {P}enn {T}reebank",
author = "Marcus, Mitchell P. and
Santorini, Beatrice and
Marcinkiewicz, Mary Ann",
journal = "Computational Linguistics",
volume = "19",
number = "2",
year = "1993",
url = "https://www.aclweb.org/anthology/J93-2004",
pages = "313--330",
}
### Contributions
Thanks to [@harshalmittal4](https://github.com/harshalmittal4) for adding this dataset. |
fsicoli/common_voice_16_0 | fsicoli | "2023-12-22T19:58:33Z" | 18,901 | 2 | [
"task_categories:automatic-speech-recognition",
"language:ab",
"language:af",
"language:am",
"language:ar",
"language:as",
"language:ast",
"language:az",
"language:ba",
"language:bas",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:ca",
"language:ckb",
"language:cnh",
"language:cs",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:dv",
"language:dyu",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:gl",
"language:gn",
"language:ha",
"language:he",
"language:hi",
"language:hsb",
"language:hu",
"language:ia",
"language:id",
"language:ig",
"language:is",
"language:it",
"language:ja",
"language:ka",
"language:kab",
"language:kk",
"language:kmr",
"language:ko",
"language:ky",
"language:lg",
"language:lo",
"language:lt",
"language:lv",
"language:mdf",
"language:mhr",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:mrj",
"language:mt",
"language:myv",
"language:nl",
"language:oc",
"language:or",
"language:pl",
"language:ps",
"language:pt",
"language:quy",
"language:ro",
"language:ru",
"language:rw",
"language:sah",
"language:sat",
"language:sc",
"language:sk",
"language:skr",
"language:sl",
"language:sq",
"language:sr",
"language:sw",
"language:ta",
"language:th",
"language:ti",
"language:tig",
"language:tk",
"language:tok",
"language:tr",
"language:tt",
"language:tw",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:vot",
"language:yue",
"language:zgh",
"language:zh",
"language:yo",
"license:cc0-1.0",
"size_categories:100B<n<1T",
"region:us",
"mozilla",
"foundation"
] | [
"automatic-speech-recognition"
] | "2023-12-19T17:26:21Z" | ---
license: cc0-1.0
language:
- ab
- af
- am
- ar
- as
- ast
- az
- ba
- bas
- be
- bg
- bn
- br
- ca
- ckb
- cnh
- cs
- cv
- cy
- da
- de
- dv
- dyu
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- gl
- gn
- ha
- he
- hi
- hsb
- hu
- ia
- id
- ig
- is
- it
- ja
- ka
- kab
- kk
- kmr
- ko
- ky
- lg
- lo
- lt
- lv
- mdf
- mhr
- mk
- ml
- mn
- mr
- mrj
- mt
- myv
- nl
- oc
- or
- pl
- ps
- pt
- quy
- ro
- ru
- rw
- sah
- sat
- sc
- sk
- skr
- sl
- sq
- sr
- sw
- ta
- th
- ti
- tig
- tk
- tok
- tr
- tt
- tw
- ug
- uk
- ur
- uz
- vi
- vot
- yue
- zgh
- zh
- yo
task_categories:
- automatic-speech-recognition
pretty_name: Common Voice Corpus 16.0
size_categories:
- 100B<n<1T
tags:
- mozilla
- foundation
---
# Dataset Card for Common Voice Corpus 16.0
<!-- Provide a quick summary of the dataset. -->
This dataset is an unofficial version of the Mozilla Common Voice Corpus 16.0. It was downloaded and converted from the project's website, https://commonvoice.mozilla.org/.
## Languages
```
Abkhaz, Albanian, Amharic, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dioula, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hill Mari, Hindi, Hungarian, Icelandic, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Korean, Kurmanji Kurdish, Kyrgyz, Lao, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Occitan, Odia, Pashto, Persian, Polish, Portuguese, Punjabi, Quechua Chanka, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Saraiki, Sardinian, Serbian, Slovak, Slovenian, Upper Sorbian, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamazight, Tamil, Tatar, Thai, Tigre, Tigrinya, Toki Pona, Turkish, Turkmen, Twi, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh, Yoruba
```
## How to use
The `datasets` library allows you to load and pre-process the dataset in pure Python, at scale. The dataset can be downloaded and prepared on your local drive in a single call to the `load_dataset` function.
For example, to download the Portuguese config, simply specify the corresponding language config name (i.e., "pt" for Portuguese):
```python
from datasets import load_dataset
cv_16 = load_dataset("fsicoli/common_voice_16_0", "pt", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a streaming=True argument to the load_dataset function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
cv_16 = load_dataset("fsicoli/common_voice_16_0", "pt", split="train", streaming=True)
print(next(iter(cv_16)))
```
Bonus: create a PyTorch dataloader directly with your own datasets (local/streamed).
### Local
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
cv_16 = load_dataset("fsicoli/common_voice_16_0", "pt", split="train")
batch_sampler = BatchSampler(RandomSampler(cv_16), batch_size=32, drop_last=False)
dataloader = DataLoader(cv_16, batch_sampler=batch_sampler)
```
### Streaming
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
cv_16 = load_dataset("fsicoli/common_voice_16_0", "pt", split="train", streaming=True)
dataloader = DataLoader(cv_16, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to hf.co/blog/audio-datasets.
### Dataset Structure
#### Data Instances
A typical data point comprises the path to the audio file and its sentence. Additional fields include accent, age, client_id, up_votes, down_votes, gender, locale and segment.
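To see the fields of a concrete instance, one can stream a single example (a sketch, assuming the field names listed above; streaming avoids downloading the full corpus):
```python
from datasets import load_dataset

# Fetch one Portuguese example without downloading the whole corpus
sample = next(iter(load_dataset("fsicoli/common_voice_16_0", "pt", split="train", streaming=True)))
print(sorted(sample.keys()))  # expected to include: accent, age, audio, client_id, sentence, ...
```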
### Licensing Information
Public Domain (CC0 1.0)
### Citation Information
```bibtex
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
---
|
lmsys/lmsys-chat-1m | lmsys | "2024-07-27T09:28:42Z" | 18,875 | 608 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2309.11998",
"region:us"
] | [
"conversational"
] | "2023-09-20T06:33:44Z" | ---
size_categories:
- 1M<n<10M
task_categories:
- conversational
extra_gated_prompt: You agree to the [LMSYS-Chat-1M Dataset License Agreement](https://huggingface.co/datasets/lmsys/lmsys-chat-1m#lmsys-chat-1m-dataset-license-agreement).
extra_gated_fields:
Name: text
Email: text
Affiliation: text
Country: text
extra_gated_button_content: I agree to the terms and conditions of the LMSYS-Chat-1M
Dataset License Agreement.
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: conversation_id
dtype: string
- name: model
dtype: string
- name: conversation
list:
- name: content
dtype: string
- name: role
dtype: string
- name: turn
dtype: int64
- name: language
dtype: string
- name: openai_moderation
list:
- name: categories
struct:
- name: harassment
dtype: bool
- name: harassment/threatening
dtype: bool
- name: hate
dtype: bool
- name: hate/threatening
dtype: bool
- name: self-harm
dtype: bool
- name: self-harm/instructions
dtype: bool
- name: self-harm/intent
dtype: bool
- name: sexual
dtype: bool
- name: sexual/minors
dtype: bool
- name: violence
dtype: bool
- name: violence/graphic
dtype: bool
- name: category_scores
struct:
- name: harassment
dtype: float64
- name: harassment/threatening
dtype: float64
- name: hate
dtype: float64
- name: hate/threatening
dtype: float64
- name: self-harm
dtype: float64
- name: self-harm/instructions
dtype: float64
- name: self-harm/intent
dtype: float64
- name: sexual
dtype: float64
- name: sexual/minors
dtype: float64
- name: violence
dtype: float64
- name: violence/graphic
dtype: float64
- name: flagged
dtype: bool
- name: redacted
dtype: bool
splits:
- name: train
num_bytes: 2626438904
num_examples: 1000000
download_size: 1488850250
dataset_size: 2626438904
---
## LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset
This dataset contains one million real-world conversations with 25 state-of-the-art LLMs.
It was collected from 210K unique IP addresses in the wild on the [Vicuna demo and Chatbot Arena website](https://chat.lmsys.org/) from April to August 2023.
Each sample includes a conversation ID, model name, conversation text in OpenAI API JSON format, detected language tag, and OpenAI moderation API tag.
User consent is obtained through the "Terms of use" section on the data collection website.
To ensure the safe release of data, we have made our best efforts to remove all conversations that contain personally identifiable information (PII).
In addition, we have included the OpenAI moderation API output for each message.
However, we have chosen to keep unsafe conversations so that researchers can study the safety-related questions associated with LLM usage in real-world scenarios as well as the OpenAI moderation process.
We did not run decontamination on this dataset, so it may contain test questions from popular benchmarks.
For more details, please refer to the paper: https://arxiv.org/abs/2309.11998
**Basic Statistics**
| Key | Value |
| --- | --- |
| # Conversations | 1,000,000 |
| # Models | 25 |
| # Users | 210,479 |
| # Languages | 154 |
| Avg. # Turns per Sample | 2.0 |
| Avg. # Tokens per Prompt | 69.5 |
| Avg. # Tokens per Response | 214.5 |
**PII Redaction**
We partnered with the [OpaquePrompts](https://opaqueprompts.opaque.co/) team to redact person names in this dataset to protect user privacy.
Names like "Mary" and "James" in a conversation will appear as "NAME_1" and "NAME_2". For example:
```json
Raw: [ { "content": "Write me a bio. My Name is Mary I am a student who is currently a beginner free lancer. I worked with James in the past ..." }]
Redacted: [ { "content": "Write me a bio. My Name is NAME_1 I am a student who is currently a beginner free lancer. I worked with NAME_2 in the past ..." }]
```
Each conversation includes a "redacted" field to indicate if it has been redacted.
This process may impact data quality and occasionally lead to incorrect redactions.
We are working on improving the redaction quality and will release improved versions in the future.
If you want to access the raw conversation data, please fill out [the form](https://docs.google.com/forms/d/1PZw67e19l0W3oCiQOjzSyZvXfOemhg6LCY0XzVmOUx0/edit) with details about your intended use cases.
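The `redacted` field and the per-message moderation output can be used together to subset the data; a minimal sketch based on the schema above (the dataset is gated, so the license agreement must be accepted first):
```python
from datasets import load_dataset

ds = load_dataset("lmsys/lmsys-chat-1m", split="train")

# Keep redacted conversations in which no message was flagged by the moderation API
filtered = ds.filter(
    lambda ex: ex["redacted"] and not any(m["flagged"] for m in ex["openai_moderation"])
)
print(filtered[0]["model"], filtered[0]["conversation"][0]["content"][:80])
```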
## Uniqueness and Potential Usage
This dataset features large-scale real-world conversations with LLMs.
We believe it will help the AI research community answer important questions around topics like:
- Characteristics and distributions of real-world user prompts
- AI safety and content moderation
- Training instruction-following models
- Improving and evaluating LLM evaluation methods
- Model selection and request dispatching algorithms
For more details, please refer to the paper: https://arxiv.org/abs/2309.11998
## LMSYS-Chat-1M Dataset License Agreement
This Agreement contains the terms and conditions that govern your access and use of the LMSYS-Chat-1M Dataset (as defined above). You may not use the LMSYS-Chat-1M Dataset if you do not accept this Agreement. By clicking to accept, accessing the LMSYS-Chat-1M Dataset, or both, you hereby agree to the terms of the Agreement. If you are agreeing to be bound by the Agreement on behalf of your employer or another entity, you represent and warrant that you have full legal authority to bind your employer or such entity to this Agreement. If you do not have the requisite authority, you may not accept the Agreement or access the LMSYS-Chat-1M Dataset on behalf of your employer or another entity.
- Safety and Moderation: **This dataset contains unsafe conversations that may be perceived as offensive or unsettling.** User should apply appropriate filters and safety measures before utilizing this dataset for training dialogue agents.
- Non-Endorsement: The views and opinions depicted in this dataset **do not reflect** the perspectives of the researchers or affiliated institutions engaged in the data collection process.
- Legal Compliance: You are mandated to use it in adherence with all pertinent laws and regulations.
- Model Specific Terms: When leveraging direct outputs of a specific model, users must adhere to its corresponding terms of use.
- Non-Identification: You **must not** attempt to identify the identities of individuals or infer any sensitive personal data encompassed in this dataset.
- Prohibited Transfers: You should not distribute, copy, disclose, assign, sublicense, embed, host, or otherwise transfer the dataset to any third party.
- Right to Request Deletion: At any time, we may require you to delete all copies of the conversation dataset (in whole or in part) in your possession and control. You will promptly comply with any and all such requests. Upon our request, you shall provide us with written confirmation of your compliance with such requirement.
- Termination: We may, at any time, for any reason or for no reason, terminate this Agreement, effective immediately upon notice to you. Upon termination, the license granted to you hereunder will immediately terminate, and you will immediately stop using the LMSYS-Chat-1M Dataset and destroy all copies of the LMSYS-Chat-1M Dataset and related materials in your possession or control.
- Limitation of Liability: IN NO EVENT WILL WE BE LIABLE FOR ANY CONSEQUENTIAL, INCIDENTAL, EXEMPLARY, PUNITIVE, SPECIAL, OR INDIRECT DAMAGES (INCLUDING DAMAGES FOR LOSS OF PROFITS, BUSINESS INTERRUPTION, OR LOSS OF INFORMATION) ARISING OUT OF OR RELATING TO THIS AGREEMENT OR ITS SUBJECT MATTER, EVEN IF WE HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Subject to your compliance with the terms and conditions of this Agreement, we grant to you, a limited, non-exclusive, non-transferable, non-sublicensable license to use the LMSYS-Chat-1M Dataset, including the conversation data and annotations, to research, develop, and improve software, algorithms, machine learning models, techniques, and technologies for both research and commercial purposes.
## Citation
```
@misc{zheng2023lmsyschat1m,
title={LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset},
author={Lianmin Zheng and Wei-Lin Chiang and Ying Sheng and Tianle Li and Siyuan Zhuang and Zhanghao Wu and Yonghao Zhuang and Zhuohan Li and Zi Lin and Eric. P Xing and Joseph E. Gonzalez and Ion Stoica and Hao Zhang},
year={2023},
eprint={2309.11998},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
EleutherAI/lambada_openai | EleutherAI | "2022-12-16T19:53:23Z" | 18,871 | 40 | [
"task_ids:language-modeling",
"language_creators:machine-generated",
"multilinguality:translation",
"source_datasets:lambada",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:it",
"license:mit",
"size_categories:10K<n<100K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2022-12-16T16:35:07Z" | ---
pretty_name: LAMBADA OpenAI
language_creators:
- machine-generated
license: mit
multilinguality:
- translation
task_ids:
- language-modeling
source_datasets:
- lambada
size_categories:
- 1K<n<10K
language:
- de
- en
- es
- fr
- it
dataset_info:
- config_name: default
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1709449
num_examples: 5153
download_size: 1819752
dataset_size: 1709449
- config_name: de
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1904576
num_examples: 5153
download_size: 1985231
dataset_size: 1904576
- config_name: en
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1709449
num_examples: 5153
download_size: 1819752
dataset_size: 1709449
- config_name: es
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1821735
num_examples: 5153
download_size: 1902349
dataset_size: 1821735
- config_name: fr
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1948795
num_examples: 5153
download_size: 2028703
dataset_size: 1948795
- config_name: it
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1813420
num_examples: 5153
download_size: 1894613
dataset_size: 1813420
---
## Dataset Description
- **Repository:** [openai/gpt2](https://github.com/openai/gpt-2)
- **Paper:** Radford et al. [Language Models are Unsupervised Multitask Learners](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf)
### Dataset Summary
This dataset comprises the LAMBADA test split as pre-processed by OpenAI (see relevant discussions [here](https://github.com/openai/gpt-2/issues/131#issuecomment-497136199) and [here](https://github.com/huggingface/transformers/issues/491)). It also contains machine-translated versions of the split in German, Spanish, French, and Italian.
LAMBADA is used to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative texts sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole text, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse.
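Each language config exposes a single `text` field in its `test` split (per the metadata above), so a given language can be loaded directly; a minimal sketch:
```python
from datasets import load_dataset

# Load the German machine-translated test split
lambada_de = load_dataset("EleutherAI/lambada_openai", "de", split="test")
print(lambada_de[0]["text"][-80:])  # the target word is the final word of each passage
```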
### Languages
English, German, Spanish, French, and Italian.
### Source Data
For non-English languages, the data splits were produced by Google Translate. See the [`translation_script.py`](translation_script.py) for more details.
## Additional Information
### Hash Checksums
For data integrity checks we leave the following checksums for the files in this dataset:
| File Name | Checksum (SHA-256) |
|--------------------------------------------------------------------------|------------------------------------------------------------------|
| lambada_test_de.jsonl | 51c6c1795894c46e88e4c104b5667f488efe79081fb34d746b82b8caa663865e |
| [openai/lambada_test.jsonl](https://openaipublic.blob.core.windows.net/gpt-2/data/lambada_test.jsonl) | 4aa8d02cd17c719165fc8a7887fddd641f43fcafa4b1c806ca8abc31fabdb226 |
| lambada_test_en.jsonl | 4aa8d02cd17c719165fc8a7887fddd641f43fcafa4b1c806ca8abc31fabdb226 |
| lambada_test_es.jsonl | ffd760026c647fb43c67ce1bc56fd527937304b348712dce33190ea6caba6f9c |
| lambada_test_fr.jsonl | 941ec6a73dba7dc91c860bf493eb66a527cd430148827a4753a4535a046bf362 |
| lambada_test_it.jsonl | 86654237716702ab74f42855ae5a78455c1b0e50054a4593fb9c6fcf7fad0850 |
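The checksums can be verified locally after download; a short sketch (the path is wherever the file was saved):
```python
import hashlib

def sha256sum(path: str) -> str:
    """Stream a file through SHA-256 in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

print(sha256sum("lambada_test_en.jsonl"))  # should match the table above
```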
### Licensing
License: [Modified MIT](https://github.com/openai/gpt-2/blob/master/LICENSE)
### Citation
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
```bibtex
@misc{paperno2016lambada,
author={Paperno, Denis and Kruszewski, Germán and Lazaridou, Angeliki and Pham, Quan Ngoc and Bernardi, Raffaella and Pezzelle, Sandro and Baroni, Marco and Boleda, Gemma and Fernández, Raquel},
title={The LAMBADA dataset},
DOI={10.5281/zenodo.2630551},
publisher={Zenodo},
year={2016},
month={Aug}
}
```
### Contributions
Thanks to Sid Black ([@sdtblck](https://github.com/sdtblck)) for translating the `lambada_openai` dataset into the non-English languages.
Thanks to Jonathan Tow ([@jon-tow](https://github.com/jon-tow)) for adding this dataset.
|
roneneldan/TinyStories | roneneldan | "2024-08-12T13:27:26Z" | 18,817 | 569 | [
"task_categories:text-generation",
"language:en",
"license:cdla-sharing-1.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2305.07759",
"region:us"
] | [
"text-generation"
] | "2023-05-12T19:04:09Z" | ---
license: cdla-sharing-1.0
task_categories:
- text-generation
language:
- en
---
Dataset containing synthetically generated (by GPT-3.5 and GPT-4) short stories that only use a small vocabulary.
Described in the following paper: https://arxiv.org/abs/2305.07759.
The models referred to in the paper were trained on TinyStories-train.txt (the file tinystories-valid.txt can be used for validation loss). These models can be found on Hugging Face, at roneneldan/TinyStories-1M/3M/8M/28M/33M/1Layer-21M.
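The dataset itself can be loaded with the `datasets` library; a minimal sketch (the record layout is not documented on this card, so the first example is printed whole):
```python
from datasets import load_dataset

# Stream one story record to inspect its fields without a full download
stories = load_dataset("roneneldan/TinyStories", split="train", streaming=True)
print(next(iter(stories)))
```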
Additional resources:
- tinystories_all_data.tar.gz - contains a superset of the stories together with metadata and the prompt that was used to create each story.
- TinyStoriesV2-GPT4-train.txt - a new version of the dataset based on generations by GPT-4 only (the original dataset also has generations by GPT-3.5, which are of lesser quality). It contains all the GPT-4-generated examples in TinyStories.txt as a subset (but is significantly larger).
- Evaluation_prompts.yaml - list of prompts used to evaluate our models (see paper) |
edbeeching/gia-dataset-tokenized-2024-2 | edbeeching | "2023-09-15T11:03:29Z" | 18,665 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-09-15T08:07:15Z" | ---
dataset_info:
- config_name: atari-alien
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: loss_mask
sequence: bool
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_ids
sequence: int32
- name: input_types
sequence: int64
- name: local_positions
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2427492496
num_examples: 1836
download_size: 197411801
dataset_size: 2427492496
- config_name: atari-amidar
features:
- name: loss_mask
sequence: bool
- name: local_positions
sequence: int64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_ids
sequence: int32
- name: input_types
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 23292403388
num_examples: 17641
- name: test
num_bytes: 2157941388
num_examples: 1637
download_size: 1619960876
dataset_size: 25450344776
- config_name: atari-assault
features:
- name: loss_mask
sequence: bool
- name: local_positions
sequence: int64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_ids
sequence: int32
- name: input_types
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 23077576568
num_examples: 17434
- name: test
num_bytes: 1898092400
num_examples: 1436
download_size: 760479036
dataset_size: 24975668968
- config_name: atari-asterix
features:
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: loss_mask
sequence: bool
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 25094377660
num_examples: 19161
download_size: 943683526
dataset_size: 25094377660
- config_name: atari-asteroids
features:
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: loss_mask
sequence: bool
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 22677165856
num_examples: 17112
download_size: 807221186
dataset_size: 22677165856
- config_name: atari-atlantis
features:
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: loss_mask
sequence: bool
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 22825149408
num_examples: 17240
download_size: 745609354
dataset_size: 22825149408
- config_name: atari-bankheist
features:
- name: input_types
sequence: int64
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: input_ids
sequence: int32
- name: loss_mask
sequence: bool
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 23741888116
num_examples: 18043
- name: test
num_bytes: 2701097304
num_examples: 2050
download_size: 2847993069
dataset_size: 26442985420
- config_name: atari-battlezone
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: input_types
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2683381416
num_examples: 2030
download_size: 162167846
dataset_size: 2683381416
- config_name: atari-berzerk
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: loss_mask
sequence: bool
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2683232284
num_examples: 2025
download_size: 98071291
dataset_size: 2683232284
- config_name: atari-bowling
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: loss_mask
sequence: bool
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2638612892
num_examples: 2001
download_size: 57099861
dataset_size: 2638612892
- config_name: atari-boxing
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: loss_mask
sequence: bool
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2925635312
num_examples: 2252
download_size: 154591181
dataset_size: 2925635312
- config_name: atari-breakout
features:
- name: loss_mask
sequence: bool
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 21372025124
num_examples: 16135
- name: test
num_bytes: 2843462328
num_examples: 2146
download_size: 740521401
dataset_size: 24215487452
- config_name: atari-centipede
features:
- name: loss_mask
sequence: bool
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 24525541956
num_examples: 18727
- name: test
num_bytes: 2743854332
num_examples: 2097
download_size: 886355860
dataset_size: 27269396288
- config_name: atari-choppercommand
features:
- name: loss_mask
sequence: bool
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 21916144968
num_examples: 16598
- name: test
num_bytes: 3130204472
num_examples: 2370
download_size: 1120222280
dataset_size: 25046349440
- config_name: atari-crazyclimber
features:
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: local_positions
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2452295076
num_examples: 1855
download_size: 147409815
dataset_size: 2452295076
- config_name: atari-defender
features:
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: local_positions
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2667101644
num_examples: 2013
download_size: 76162534
dataset_size: 2667101644
- config_name: atari-demonattack
features:
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: local_positions
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2655965584
num_examples: 2004
download_size: 71540075
dataset_size: 2655965584
- config_name: atari-doubledunk
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: local_positions
sequence: int64
- name: input_ids
sequence: int32
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2654251456
num_examples: 2032
download_size: 140407266
dataset_size: 2654251456
- config_name: atari-fishingderby
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: local_positions
sequence: int64
- name: input_ids
sequence: int32
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2865449308
num_examples: 2177
download_size: 236590614
dataset_size: 2865449308
- config_name: atari-freeway
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: local_positions
sequence: int64
- name: input_ids
sequence: int32
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2646386200
num_examples: 2002
download_size: 182728240
dataset_size: 2646386200
- config_name: atari-frostbite
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: loss_mask
sequence: bool
- name: input_ids
sequence: int32
- name: input_types
sequence: int64
- name: local_positions
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 23145553316
num_examples: 17551
- name: test
num_bytes: 2683086716
num_examples: 2033
download_size: 1661407235
dataset_size: 25828640032
- config_name: atari-gravitar
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: loss_mask
sequence: bool
- name: input_ids
sequence: int32
- name: input_types
sequence: int64
- name: local_positions
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 26186279752
num_examples: 20126
- name: test
num_bytes: 2990268724
num_examples: 2299
download_size: 939142901
dataset_size: 29176548476
- config_name: atari-hero
features:
- name: input_ids
sequence: int32
- name: loss_mask
sequence: bool
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2756503068
num_examples: 2089
download_size: 131026317
dataset_size: 2756503068
- config_name: atari-icehockey
features:
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2538945980
num_examples: 1921
download_size: 89405392
dataset_size: 2538945980
- config_name: atari-jamesbond
features:
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 4473778328
num_examples: 3378
download_size: 224917482
dataset_size: 4473778328
- config_name: atari-kangaroo
features:
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2993217516
num_examples: 2285
download_size: 140119408
dataset_size: 2993217516
- config_name: atari-mspacman
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: input_types
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2479651844
num_examples: 1879
download_size: 217259145
dataset_size: 2479651844
- config_name: atari-namethisgame
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: input_types
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 3006648420
num_examples: 2271
download_size: 158870157
dataset_size: 3006648420
- config_name: atari-phoenix
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: input_types
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2655773200
num_examples: 2004
download_size: 79861580
dataset_size: 2655773200
- config_name: atari-qbert
features:
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: local_positions
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2547887868
num_examples: 1929
download_size: 174392419
dataset_size: 2547887868
- config_name: atari-riverraid
features:
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: local_positions
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2555182372
num_examples: 1943
download_size: 174672084
dataset_size: 2555182372
- config_name: atari-roadrunner
features:
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: local_positions
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2521407028
num_examples: 1915
download_size: 125390334
dataset_size: 2521407028
- config_name: atari-robotank
features:
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 22475017052
num_examples: 16985
- name: test
num_bytes: 2229677068
num_examples: 1685
download_size: 1298755118
dataset_size: 24704694120
- config_name: atari-seaquest
features:
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 23841045496
num_examples: 18114
- name: test
num_bytes: 2738008960
num_examples: 2080
download_size: 910338340
dataset_size: 26579054456
- config_name: atari-skiing
features:
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 26305597476
num_examples: 20359
- name: test
num_bytes: 2941523916
num_examples: 2277
download_size: 1797518108
dataset_size: 29247121392
- config_name: atari-solaris
features:
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: loss_mask
sequence: bool
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2273188716
num_examples: 1717
download_size: 126936781
dataset_size: 2273188716
- config_name: atari-spaceinvaders
features:
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: loss_mask
sequence: bool
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 4137369016
num_examples: 3122
download_size: 146426375
dataset_size: 4137369016
- config_name: atari-stargunner
features:
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: loss_mask
sequence: bool
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2565341980
num_examples: 1937
download_size: 72577790
dataset_size: 2565341980
- config_name: atari-surround
features:
- name: loss_mask
sequence: bool
- name: local_positions
sequence: int64
- name: input_types
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_ids
sequence: int32
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 22468793380
num_examples: 17023
- name: test
num_bytes: 2933488488
num_examples: 2222
download_size: 904796125
dataset_size: 25402281868
- config_name: atari-tennis
features:
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: loss_mask
sequence: bool
- name: input_types
sequence: int64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2484015692
num_examples: 1877
download_size: 95167453
dataset_size: 2484015692
- config_name: atari-timepilot
features:
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: loss_mask
sequence: bool
- name: input_types
sequence: int64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2558172240
num_examples: 1932
download_size: 86471773
dataset_size: 2558172240
- config_name: atari-tutankham
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: local_positions
sequence: int64
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: input_ids
sequence: int32
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 3517105220
num_examples: 2655
download_size: 144491974
dataset_size: 3517105220
- config_name: atari-videopinball
features:
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 22581644248
num_examples: 17042
- name: test
num_bytes: 856644644
num_examples: 647
download_size: 1483962740
dataset_size: 23438288892
- config_name: atari-wizardofwor
features:
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 22744043928
num_examples: 17218
- name: test
num_bytes: 2648734220
num_examples: 2005
download_size: 1739703310
dataset_size: 25392778148
- config_name: atari-yarsrevenge
features:
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: local_positions
sequence: int64
- name: input_ids
sequence: int32
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 22080700236
num_examples: 16669
- name: test
num_bytes: 2579104820
num_examples: 1947
download_size: 3451148232
dataset_size: 24659805056
- config_name: atari-zaxxon
features:
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: local_positions
sequence: int64
- name: input_ids
sequence: int32
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 22058040148
num_examples: 16667
- name: test
num_bytes: 2768806832
num_examples: 2092
download_size: 1229966010
dataset_size: 24826846980
configs:
- config_name: atari-alien
data_files:
- split: test
path: atari-alien/test-*
- config_name: atari-amidar
data_files:
- split: train
path: atari-amidar/train-*
- split: test
path: atari-amidar/test-*
- config_name: atari-assault
data_files:
- split: train
path: atari-assault/train-*
- split: test
path: atari-assault/test-*
- config_name: atari-asterix
data_files:
- split: train
path: atari-asterix/train-*
- config_name: atari-asteroids
data_files:
- split: train
path: atari-asteroids/train-*
- config_name: atari-atlantis
data_files:
- split: train
path: atari-atlantis/train-*
- config_name: atari-bankheist
data_files:
- split: train
path: atari-bankheist/train-*
- split: test
path: atari-bankheist/test-*
- config_name: atari-battlezone
data_files:
- split: test
path: atari-battlezone/test-*
- config_name: atari-berzerk
data_files:
- split: test
path: atari-berzerk/test-*
- config_name: atari-bowling
data_files:
- split: test
path: atari-bowling/test-*
- config_name: atari-boxing
data_files:
- split: test
path: atari-boxing/test-*
- config_name: atari-breakout
data_files:
- split: train
path: atari-breakout/train-*
- split: test
path: atari-breakout/test-*
- config_name: atari-centipede
data_files:
- split: train
path: atari-centipede/train-*
- split: test
path: atari-centipede/test-*
- config_name: atari-choppercommand
data_files:
- split: train
path: atari-choppercommand/train-*
- split: test
path: atari-choppercommand/test-*
- config_name: atari-crazyclimber
data_files:
- split: test
path: atari-crazyclimber/test-*
- config_name: atari-defender
data_files:
- split: test
path: atari-defender/test-*
- config_name: atari-demonattack
data_files:
- split: test
path: atari-demonattack/test-*
- config_name: atari-doubledunk
data_files:
- split: test
path: atari-doubledunk/test-*
- config_name: atari-fishingderby
data_files:
- split: test
path: atari-fishingderby/test-*
- config_name: atari-freeway
data_files:
- split: test
path: atari-freeway/test-*
- config_name: atari-frostbite
data_files:
- split: train
path: atari-frostbite/train-*
- split: test
path: atari-frostbite/test-*
- config_name: atari-gravitar
data_files:
- split: train
path: atari-gravitar/train-*
- split: test
path: atari-gravitar/test-*
- config_name: atari-hero
data_files:
- split: test
path: atari-hero/test-*
- config_name: atari-icehockey
data_files:
- split: test
path: atari-icehockey/test-*
- config_name: atari-jamesbond
data_files:
- split: test
path: atari-jamesbond/test-*
- config_name: atari-kangaroo
data_files:
- split: test
path: atari-kangaroo/test-*
- config_name: atari-mspacman
data_files:
- split: test
path: atari-mspacman/test-*
- config_name: atari-namethisgame
data_files:
- split: test
path: atari-namethisgame/test-*
- config_name: atari-phoenix
data_files:
- split: test
path: atari-phoenix/test-*
- config_name: atari-qbert
data_files:
- split: test
path: atari-qbert/test-*
- config_name: atari-riverraid
data_files:
- split: test
path: atari-riverraid/test-*
- config_name: atari-roadrunner
data_files:
- split: test
path: atari-roadrunner/test-*
- config_name: atari-robotank
data_files:
- split: train
path: atari-robotank/train-*
- split: test
path: atari-robotank/test-*
- config_name: atari-seaquest
data_files:
- split: train
path: atari-seaquest/train-*
- split: test
path: atari-seaquest/test-*
- config_name: atari-skiing
data_files:
- split: train
path: atari-skiing/train-*
- split: test
path: atari-skiing/test-*
- config_name: atari-solaris
data_files:
- split: test
path: atari-solaris/test-*
- config_name: atari-spaceinvaders
data_files:
- split: test
path: atari-spaceinvaders/test-*
- config_name: atari-stargunner
data_files:
- split: test
path: atari-stargunner/test-*
- config_name: atari-surround
data_files:
- split: train
path: atari-surround/train-*
- split: test
path: atari-surround/test-*
- config_name: atari-tennis
data_files:
- split: test
path: atari-tennis/test-*
- config_name: atari-timepilot
data_files:
- split: test
path: atari-timepilot/test-*
- config_name: atari-tutankham
data_files:
- split: test
path: atari-tutankham/test-*
- config_name: atari-videopinball
data_files:
- split: train
path: atari-videopinball/train-*
- split: test
path: atari-videopinball/test-*
- config_name: atari-wizardofwor
data_files:
- split: train
path: atari-wizardofwor/train-*
- split: test
path: atari-wizardofwor/test-*
- config_name: atari-yarsrevenge
data_files:
- split: train
path: atari-yarsrevenge/train-*
- split: test
path: atari-yarsrevenge/test-*
- config_name: atari-zaxxon
data_files:
- split: train
path: atari-zaxxon/train-*
- split: test
path: atari-zaxxon/test-*
---
# Dataset Card for "gia-dataset-tokenized-2024-2"
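Individual game configs can be loaded by name; a minimal sketch using the `atari-alien` test split and the `input_ids` feature declared above:
```python
from datasets import load_dataset

gia = load_dataset("edbeeching/gia-dataset-tokenized-2024-2", "atari-alien", split="test")
print(gia[0]["input_ids"][:16])  # first token ids of one tokenized episode chunk
```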
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mteb/banking77 | mteb | "2022-09-27T19:15:02Z" | 18,598 | 2 | [
"language:en",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2022-05-17T12:14:06Z" | ---
language:
- en
--- |
japanese-asr/whisper_transcriptions.reazon_speech_all.wer_10.0 | japanese-asr | "2024-09-14T08:07:20Z" | 18,550 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:audio",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-09-12T10:10:25Z" | ---
dataset_info:
- config_name: subset_0
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2818142513.305568
num_examples: 28889
download_size: 2800520280
dataset_size: 2818142513.305568
- config_name: subset_1
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2799511567.444425
num_examples: 28682
download_size: 2780562913
dataset_size: 2799511567.444425
- config_name: subset_10
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2773645799.2051067
num_examples: 28577
download_size: 2754819384
dataset_size: 2773645799.2051067
- config_name: subset_100
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2823667735.949709
num_examples: 28862
download_size: 2804915439
dataset_size: 2823667735.949709
- config_name: subset_101
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2822919280.439764
num_examples: 28835
download_size: 2804088323
dataset_size: 2822919280.439764
- config_name: subset_102
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2795097881.8536515
num_examples: 28508
download_size: 2776469064
dataset_size: 2795097881.8536515
- config_name: subset_103
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2783873451.3888097
num_examples: 28679
download_size: 2766104053
dataset_size: 2783873451.3888097
- config_name: subset_104
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2799952901.400077
num_examples: 28652
download_size: 2780931206
dataset_size: 2799952901.400077
- config_name: subset_106
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2778964509.1314907
num_examples: 28567
download_size: 2759949894
dataset_size: 2778964509.1314907
- config_name: subset_107
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2787125034.557918
num_examples: 28580
download_size: 2769452093
dataset_size: 2787125034.557918
- config_name: subset_108
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2814279051.9927425
num_examples: 28652
download_size: 2796370033
dataset_size: 2814279051.9927425
- config_name: subset_109
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2816024520.8196087
num_examples: 29042
download_size: 2797596405
dataset_size: 2816024520.8196087
- config_name: subset_11
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2800581037.46141
num_examples: 28791
download_size: 2781641545
dataset_size: 2800581037.46141
- config_name: subset_110
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2790446466.411009
num_examples: 28552
download_size: 2772475392
dataset_size: 2790446466.411009
- config_name: subset_111
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2812169196.5268145
num_examples: 28708
download_size: 2793478866
dataset_size: 2812169196.5268145
- config_name: subset_112
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2804595595.9556875
num_examples: 28651
download_size: 2786327739
dataset_size: 2804595595.9556875
- config_name: subset_113
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2805975728.9103675
num_examples: 28766
download_size: 2787720425
dataset_size: 2805975728.9103675
- config_name: subset_114
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2797860237.725501
num_examples: 28827
download_size: 2779653682
dataset_size: 2797860237.725501
- config_name: subset_115
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2739272180.979408
num_examples: 28259
download_size: 2721929304
dataset_size: 2739272180.979408
- config_name: subset_116
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2779736580.1971507
num_examples: 28556
download_size: 2762379109
dataset_size: 2779736580.1971507
- config_name: subset_117
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2797506148.8945417
num_examples: 28604
download_size: 2778868105
dataset_size: 2797506148.8945417
- config_name: subset_118
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2783904956.032034
num_examples: 28642
download_size: 2763926725
dataset_size: 2783904956.032034
- config_name: subset_119
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2812589447.5049787
num_examples: 28812
download_size: 2793873688
dataset_size: 2812589447.5049787
- config_name: subset_12
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2825248477.966587
num_examples: 28815
download_size: 2807113313
dataset_size: 2825248477.966587
- config_name: subset_120
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2821333753.731539
num_examples: 28969
download_size: 2802380600
dataset_size: 2821333753.731539
- config_name: subset_121
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 249560.0
num_examples: 2
download_size: 235481
dataset_size: 249560.0
- config_name: subset_122
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2809672016.1315928
num_examples: 28713
download_size: 2791628470
dataset_size: 2809672016.1315928
- config_name: subset_123
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2801433873.4663367
num_examples: 28687
download_size: 2783847031
dataset_size: 2801433873.4663367
- config_name: subset_124
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2800273323.8614535
num_examples: 28625
download_size: 2782792834
dataset_size: 2800273323.8614535
- config_name: subset_125
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2785145671.3161798
num_examples: 28708
download_size: 2766215060
dataset_size: 2785145671.3161798
- config_name: subset_126
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2827062956.8621526
num_examples: 28873
download_size: 2809772690
dataset_size: 2827062956.8621526
- config_name: subset_127
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2785998714.7980447
num_examples: 28628
download_size: 2767842407
dataset_size: 2785998714.7980447
- config_name: subset_128
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2778494148.3436117
num_examples: 28468
download_size: 2760024576
dataset_size: 2778494148.3436117
- config_name: subset_129
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2785249182.7556114
num_examples: 28640
download_size: 2767321584
dataset_size: 2785249182.7556114
- config_name: subset_13
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2814335377.437177
num_examples: 28857
download_size: 2795498468
dataset_size: 2814335377.437177
- config_name: subset_130
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2820493878.62124
num_examples: 28749
download_size: 2801256097
dataset_size: 2820493878.62124
- config_name: subset_131
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2789424940.0485053
num_examples: 28689
download_size: 2771741230
dataset_size: 2789424940.0485053
- config_name: subset_132
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2797705473.379378
num_examples: 28686
download_size: 2778770701
dataset_size: 2797705473.379378
- config_name: subset_133
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2762481787.3078547
num_examples: 28449
download_size: 2744207354
dataset_size: 2762481787.3078547
- config_name: subset_134
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2805702328.605684
num_examples: 28629
download_size: 2786656188
dataset_size: 2805702328.605684
- config_name: subset_135
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2767613031.278768
num_examples: 28434
download_size: 2749157502
dataset_size: 2767613031.278768
- config_name: subset_136
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2794304005.5498323
num_examples: 28646
download_size: 2775836269
dataset_size: 2794304005.5498323
- config_name: subset_137
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 225268.0
num_examples: 2
download_size: 229449
dataset_size: 225268.0
- config_name: subset_138
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2798697055.0787215
num_examples: 28521
download_size: 2779139557
dataset_size: 2798697055.0787215
- config_name: subset_139
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2785616533.8569775
num_examples: 28490
download_size: 2766907186
dataset_size: 2785616533.8569775
- config_name: subset_14
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2803351214.8154387
num_examples: 28893
download_size: 2785325268
dataset_size: 2803351214.8154387
- config_name: subset_140
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2774121107.940745
num_examples: 28610
download_size: 2755406471
dataset_size: 2774121107.940745
- config_name: subset_141
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2800358027.1605697
num_examples: 28644
download_size: 2782724968
dataset_size: 2800358027.1605697
- config_name: subset_142
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2784927581.4414034
num_examples: 28490
download_size: 2766788896
dataset_size: 2784927581.4414034
- config_name: subset_143
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2831756346.2576833
num_examples: 28916
download_size: 2812769625
dataset_size: 2831756346.2576833
- config_name: subset_144
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2776615331.539408
num_examples: 28420
download_size: 2759573769
dataset_size: 2776615331.539408
- config_name: subset_145
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2782918468.2718096
num_examples: 28716
download_size: 2765070234
dataset_size: 2782918468.2718096
- config_name: subset_146
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2792187903.241099
num_examples: 28494
download_size: 2773192425
dataset_size: 2792187903.241099
- config_name: subset_147
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2811533901.9975023
num_examples: 28816
download_size: 2792604562
dataset_size: 2811533901.9975023
- config_name: subset_148
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2800592186.433023
num_examples: 28550
download_size: 2782113533
dataset_size: 2800592186.433023
- config_name: subset_149
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2800445511.623495
num_examples: 28644
download_size: 2781545028
dataset_size: 2800445511.623495
- config_name: subset_15
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2794072915.4565005
num_examples: 28634
download_size: 2775431262
dataset_size: 2794072915.4565005
- config_name: subset_150
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2781380031.341523
num_examples: 28559
download_size: 2762187934
dataset_size: 2781380031.341523
- config_name: subset_151
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2808538686.5330114
num_examples: 28568
download_size: 2789929521
dataset_size: 2808538686.5330114
- config_name: subset_152
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2824230757.1340747
num_examples: 28945
download_size: 2806033635
dataset_size: 2824230757.1340747
- config_name: subset_153
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 276434.0
num_examples: 3
download_size: 280461
dataset_size: 276434.0
- config_name: subset_154
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2785165346.3024044
num_examples: 28562
download_size: 2767182053
dataset_size: 2785165346.3024044
- config_name: subset_155
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2798155948.670726
num_examples: 28894
download_size: 2779685885
dataset_size: 2798155948.670726
- config_name: subset_156
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2792642048.6167397
num_examples: 28486
download_size: 2774292011
dataset_size: 2792642048.6167397
- config_name: subset_157
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2804216902.7177806
num_examples: 28519
download_size: 2783949934
dataset_size: 2804216902.7177806
- config_name: subset_158
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2792837653.459533
num_examples: 28572
download_size: 2774504497
dataset_size: 2792837653.459533
- config_name: subset_159
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2797966344.3232465
num_examples: 28676
download_size: 2780150686
dataset_size: 2797966344.3232465
- config_name: subset_16
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2786919777.657823
num_examples: 28712
download_size: 2768691548
dataset_size: 2786919777.657823
- config_name: subset_160
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2825379050.301059
num_examples: 28794
download_size: 2807121140
dataset_size: 2825379050.301059
- config_name: subset_161
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2817528339.304878
num_examples: 28687
download_size: 2798781366
dataset_size: 2817528339.304878
- config_name: subset_162
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2812552745.6736894
num_examples: 28560
download_size: 2793334228
dataset_size: 2812552745.6736894
- config_name: subset_163
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2816287889.3383718
num_examples: 28834
download_size: 2798030354
dataset_size: 2816287889.3383718
- config_name: subset_164
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2789227239.2794743
num_examples: 28610
download_size: 2770415332
dataset_size: 2789227239.2794743
- config_name: subset_165
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2793923591.2273154
num_examples: 28636
download_size: 2776240642
dataset_size: 2793923591.2273154
- config_name: subset_166
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2818060949.109051
num_examples: 28788
download_size: 2799747051
dataset_size: 2818060949.109051
- config_name: subset_167
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2814121890.69677
num_examples: 28692
download_size: 2796446295
dataset_size: 2814121890.69677
- config_name: subset_168
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2800208772.061901
num_examples: 28488
download_size: 2782580061
dataset_size: 2800208772.061901
- config_name: subset_169
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 217994.0
num_examples: 4
download_size: 221965
dataset_size: 217994.0
- config_name: subset_17
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2814316212.610686
num_examples: 28741
download_size: 2796750545
dataset_size: 2814316212.610686
- config_name: subset_170
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2820094066.841153
num_examples: 28779
download_size: 2800607241
dataset_size: 2820094066.841153
- config_name: subset_171
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2785665141.9956546
num_examples: 28530
download_size: 2767637865
dataset_size: 2785665141.9956546
- config_name: subset_172
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2769980491.858864
num_examples: 28458
download_size: 2751659712
dataset_size: 2769980491.858864
- config_name: subset_173
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2763010947.6328197
num_examples: 28620
download_size: 2744336918
dataset_size: 2763010947.6328197
- config_name: subset_174
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2830554405.824075
num_examples: 28872
download_size: 2812117551
dataset_size: 2830554405.824075
- config_name: subset_175
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2832068908.9352627
num_examples: 28879
download_size: 2813870110
dataset_size: 2832068908.9352627
- config_name: subset_176
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2812790604.2079477
num_examples: 28727
download_size: 2794467649
dataset_size: 2812790604.2079477
- config_name: subset_177
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2822687561.982137
num_examples: 28646
download_size: 2802424720
dataset_size: 2822687561.982137
- config_name: subset_178
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2723861675.781945
num_examples: 27769
download_size: 2705544958
dataset_size: 2723861675.781945
- config_name: subset_179
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2707230519.9358606
num_examples: 27745
download_size: 2688247107
dataset_size: 2707230519.9358606
- config_name: subset_18
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2769177323.4802456
num_examples: 28518
download_size: 2751255267
dataset_size: 2769177323.4802456
- config_name: subset_180
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2704505244.5037017
num_examples: 27682
download_size: 2687007507
dataset_size: 2704505244.5037017
- config_name: subset_181
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2678582404.0369406
num_examples: 27606
download_size: 2662138288
dataset_size: 2678582404.0369406
- config_name: subset_182
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2700058693.469923
num_examples: 27589
download_size: 2682649362
dataset_size: 2700058693.469923
- config_name: subset_183
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2709187257.1116276
num_examples: 27952
download_size: 2691450963
dataset_size: 2709187257.1116276
- config_name: subset_184
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2712375793.047578
num_examples: 27675
download_size: 2695531244
dataset_size: 2712375793.047578
- config_name: subset_185
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 212384.0
num_examples: 3
download_size: 216512
dataset_size: 212384.0
- config_name: subset_186
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2694085750.3083773
num_examples: 27501
download_size: 2675198693
dataset_size: 2694085750.3083773
- config_name: subset_187
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2689712571.5467024
num_examples: 27679
download_size: 2671947510
dataset_size: 2689712571.5467024
- config_name: subset_188
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2688779661.3497453
num_examples: 27657
download_size: 2671112860
dataset_size: 2688779661.3497453
- config_name: subset_189
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2682130033.957094
num_examples: 27484
download_size: 2663419986
dataset_size: 2682130033.957094
- config_name: subset_19
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2791403043.673298
num_examples: 28540
download_size: 2771865713
dataset_size: 2791403043.673298
- config_name: subset_190
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2691646469.2862816
num_examples: 27687
download_size: 2674041591
dataset_size: 2691646469.2862816
- config_name: subset_191
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2693914403.614718
num_examples: 27796
download_size: 2677023344
dataset_size: 2693914403.614718
- config_name: subset_192
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2688093677.457328
num_examples: 27626
download_size: 2671031483
dataset_size: 2688093677.457328
- config_name: subset_193
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2696036138.6995645
num_examples: 27770
download_size: 2678540139
dataset_size: 2696036138.6995645
- config_name: subset_194
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2757230696.4430003
num_examples: 28149
download_size: 2739224260
dataset_size: 2757230696.4430003
- config_name: subset_195
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2736586631.5251746
num_examples: 28039
download_size: 2719462428
dataset_size: 2736586631.5251746
- config_name: subset_196
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2738540991.5301905
num_examples: 28092
download_size: 2720008976
dataset_size: 2738540991.5301905
- config_name: subset_197
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2714878378.720868
num_examples: 27641
download_size: 2697482203
dataset_size: 2714878378.720868
- config_name: subset_198
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2731357842.3113136
num_examples: 28072
download_size: 2713991065
dataset_size: 2731357842.3113136
- config_name: subset_199
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2743087635.392255
num_examples: 27987
download_size: 2724295995
dataset_size: 2743087635.392255
- config_name: subset_2
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2795196339.0488434
num_examples: 28616
download_size: 2777214128
dataset_size: 2795196339.0488434
- config_name: subset_20
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2800054662.6664577
num_examples: 28675
download_size: 2781268377
dataset_size: 2800054662.6664577
- config_name: subset_200
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2758948923.576728
num_examples: 28160
download_size: 2739212888
dataset_size: 2758948923.576728
- config_name: subset_201
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 162290.0
num_examples: 2
download_size: 161971
dataset_size: 162290.0
- config_name: subset_202
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2720100556.5474815
num_examples: 27884
download_size: 2702140717
dataset_size: 2720100556.5474815
- config_name: subset_203
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2729425853.4000144
num_examples: 27904
download_size: 2711407392
dataset_size: 2729425853.4000144
- config_name: subset_204
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2744980939.5706124
num_examples: 27995
download_size: 2725881105
dataset_size: 2744980939.5706124
- config_name: subset_205
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2737257736.0440264
num_examples: 28058
download_size: 2719701227
dataset_size: 2737257736.0440264
- config_name: subset_206
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2732281650.330662
num_examples: 28064
download_size: 2714690257
dataset_size: 2732281650.330662
- config_name: subset_207
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2762722488.1407614
num_examples: 28073
download_size: 2744341932
dataset_size: 2762722488.1407614
- config_name: subset_208
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2718820938.9872274
num_examples: 27936
download_size: 2699777750
dataset_size: 2718820938.9872274
- config_name: subset_209
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2722826630.070277
num_examples: 27909
download_size: 2704124087
dataset_size: 2722826630.070277
- config_name: subset_21
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 138782.0
num_examples: 2
download_size: 143201
dataset_size: 138782.0
- config_name: subset_210
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2742890444.0380435
num_examples: 28037
download_size: 2724926786
dataset_size: 2742890444.0380435
- config_name: subset_211
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2761711376.3164835
num_examples: 28198
download_size: 2742180590
dataset_size: 2761711376.3164835
- config_name: subset_212
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2781348716.856394
num_examples: 28214
download_size: 2764051091
dataset_size: 2781348716.856394
- config_name: subset_213
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2763693383.6427736
num_examples: 28151
download_size: 2744390773
dataset_size: 2763693383.6427736
- config_name: subset_214
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2761755214.339869
num_examples: 28194
download_size: 2743776098
dataset_size: 2761755214.339869
- config_name: subset_215
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2759189555.350568
num_examples: 28070
download_size: 2740071382
dataset_size: 2759189555.350568
- config_name: subset_216
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2769742628.4568706
num_examples: 28334
download_size: 2751137783
dataset_size: 2769742628.4568706
- config_name: subset_217
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 319444.0
num_examples: 4
download_size: 323114
dataset_size: 319444.0
- config_name: subset_218
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2755741435.3078856
num_examples: 28226
download_size: 2738613717
dataset_size: 2755741435.3078856
- config_name: subset_219
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2762179841.133409
num_examples: 28234
download_size: 2743344110
dataset_size: 2762179841.133409
- config_name: subset_22
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2806103595.564734
num_examples: 28782
download_size: 2788033435
dataset_size: 2806103595.564734
- config_name: subset_220
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2754288136.496802
num_examples: 28134
download_size: 2736379621
dataset_size: 2754288136.496802
- config_name: subset_221
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2741464680.2412095
num_examples: 28033
download_size: 2723454231
dataset_size: 2741464680.2412095
- config_name: subset_222
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2753590278.523113
num_examples: 28157
download_size: 2735025516
dataset_size: 2753590278.523113
- config_name: subset_223
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2748717700.715429
num_examples: 28116
download_size: 2731327087
dataset_size: 2748717700.715429
- config_name: subset_53
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2735353533.5480657
num_examples: 27987
download_size: 2716912680
dataset_size: 2735353533.5480657
- config_name: subset_105
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2733052589.800014
num_examples: 28115
download_size: 2714548742
dataset_size: 2733052589.800014
- config_name: subset_23
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2797517872.304545
num_examples: 28681
download_size: 2777962255
dataset_size: 2797517872.304545
- config_name: subset_24
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2804954839.2742553
num_examples: 28787
download_size: 2785217057
dataset_size: 2804954839.2742553
- config_name: subset_25
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2838437303.116215
num_examples: 29068
download_size: 2819743616
dataset_size: 2838437303.116215
- config_name: subset_26
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2779624582.3252745
num_examples: 28617
download_size: 2762421930
dataset_size: 2779624582.3252745
- config_name: subset_27
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2795968541.5509977
num_examples: 28705
download_size: 2777770622
dataset_size: 2795968541.5509977
- config_name: subset_28
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2807821570.5501804
num_examples: 28803
download_size: 2789130607
dataset_size: 2807821570.5501804
- config_name: subset_29
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2805087349.573647
num_examples: 28707
download_size: 2787110185
dataset_size: 2805087349.573647
- config_name: subset_3
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2803718279.358577
num_examples: 28631
download_size: 2785684543
dataset_size: 2803718279.358577
- config_name: subset_30
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2780128274.436427
num_examples: 28598
download_size: 2762235499
dataset_size: 2780128274.436427
- config_name: subset_31
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2780483430.417629
num_examples: 28585
download_size: 2762959143
dataset_size: 2780483430.417629
- config_name: subset_32
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2767355446.6499033
num_examples: 28430
download_size: 2749006351
dataset_size: 2767355446.6499033
- config_name: subset_33
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2780475149.9833417
num_examples: 28567
download_size: 2762547437
dataset_size: 2780475149.9833417
- config_name: subset_34
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2804888597.534103
num_examples: 28575
download_size: 2786602482
dataset_size: 2804888597.534103
- config_name: subset_35
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2818760898.3284674
num_examples: 28753
download_size: 2799376754
dataset_size: 2818760898.3284674
- config_name: subset_36
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2794880529.5281224
num_examples: 28597
download_size: 2777080127
dataset_size: 2794880529.5281224
- config_name: subset_37
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 59312.0
num_examples: 1
download_size: 64000
dataset_size: 59312.0
- config_name: subset_38
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2795937403.9297214
num_examples: 28564
download_size: 2777668182
dataset_size: 2795937403.9297214
- config_name: subset_39
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2792531655.567247
num_examples: 28483
download_size: 2774019455
dataset_size: 2792531655.567247
- config_name: subset_4
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2782886996.0128064
num_examples: 28647
download_size: 2764491768
dataset_size: 2782886996.0128064
- config_name: subset_40
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2792036932.243057
num_examples: 28544
download_size: 2773102366
dataset_size: 2792036932.243057
- config_name: subset_41
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2812065083.9231496
num_examples: 28650
download_size: 2793799040
dataset_size: 2812065083.9231496
- config_name: subset_42
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2752763678.4381948
num_examples: 28332
download_size: 2735519167
dataset_size: 2752763678.4381948
- config_name: subset_43
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2762546530.322603
num_examples: 28476
download_size: 2744440429
dataset_size: 2762546530.322603
- config_name: subset_44
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2791243997.6303687
num_examples: 28652
download_size: 2771659224
dataset_size: 2791243997.6303687
- config_name: subset_45
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2757371392.7096844
num_examples: 28520
download_size: 2737651215
dataset_size: 2757371392.7096844
- config_name: subset_46
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2811457752.5828547
num_examples: 28896
download_size: 2793241019
dataset_size: 2811457752.5828547
- config_name: subset_47
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2812686145.4360433
num_examples: 28780
download_size: 2792637332
dataset_size: 2812686145.4360433
- config_name: subset_48
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2751611802.964417
num_examples: 28261
download_size: 2732815902
dataset_size: 2751611802.964417
- config_name: subset_49
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2783834112.2924604
num_examples: 28496
download_size: 2766016907
dataset_size: 2783834112.2924604
- config_name: subset_5
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 112202.0
num_examples: 2
download_size: 116668
dataset_size: 112202.0
- config_name: subset_50
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2784024978.553762
num_examples: 28554
download_size: 2766617738
dataset_size: 2784024978.553762
- config_name: subset_51
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2783800597.1516314
num_examples: 28644
download_size: 2766413575
dataset_size: 2783800597.1516314
- config_name: subset_52
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2810730447.033942
num_examples: 28627
download_size: 2792215878
dataset_size: 2810730447.033942
- config_name: subset_54
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2810730453.483979
num_examples: 28575
download_size: 2791484855
dataset_size: 2810730453.483979
- config_name: subset_55
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2808617741.0427275
num_examples: 28472
download_size: 2790298480
dataset_size: 2808617741.0427275
- config_name: subset_56
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2789909755.717434
num_examples: 28575
download_size: 2770281917
dataset_size: 2789909755.717434
- config_name: subset_57
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2827127275.362234
num_examples: 28706
download_size: 2808692019
dataset_size: 2827127275.362234
- config_name: subset_58
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2812753118.1925855
num_examples: 28664
download_size: 2794305800
dataset_size: 2812753118.1925855
- config_name: subset_59
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2811321136.3985333
num_examples: 28628
download_size: 2793693466
dataset_size: 2811321136.3985333
- config_name: subset_6
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2816351939.20813
num_examples: 28870
download_size: 2798196428
dataset_size: 2816351939.20813
- config_name: subset_60
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2817025272.2751913
num_examples: 28771
download_size: 2797953850
dataset_size: 2817025272.2751913
- config_name: subset_61
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2774567860.137455
num_examples: 28397
download_size: 2756829689
dataset_size: 2774567860.137455
- config_name: subset_62
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2773071616.3512044
num_examples: 28512
download_size: 2754218061
dataset_size: 2773071616.3512044
- config_name: subset_63
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2812455985.0603585
num_examples: 28736
download_size: 2793507800
dataset_size: 2812455985.0603585
- config_name: subset_64
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2786242526.438978
num_examples: 28571
download_size: 2769229216
dataset_size: 2786242526.438978
- config_name: subset_65
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2797793944.7942257
num_examples: 28563
download_size: 2780303114
dataset_size: 2797793944.7942257
- config_name: subset_66
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2808401265.991285
num_examples: 28673
download_size: 2790209795
dataset_size: 2808401265.991285
- config_name: subset_67
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2763689194.5764027
num_examples: 28370
download_size: 2746463899
dataset_size: 2763689194.5764027
- config_name: subset_68
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2774350392.459005
num_examples: 28407
download_size: 2756482114
dataset_size: 2774350392.459005
- config_name: subset_69
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2793707828.414052
num_examples: 28641
download_size: 2775860316
dataset_size: 2793707828.414052
- config_name: subset_7
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2799737452.4636416
num_examples: 28663
download_size: 2781433327
dataset_size: 2799737452.4636416
- config_name: subset_70
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2765766231.44524
num_examples: 28482
download_size: 2748565977
dataset_size: 2765766231.44524
- config_name: subset_71
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2821581276.8039374
num_examples: 28738
download_size: 2801829822
dataset_size: 2821581276.8039374
- config_name: subset_72
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2811731873.827922
num_examples: 28663
download_size: 2793147692
dataset_size: 2811731873.827922
- config_name: subset_73
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2802439533.470241
num_examples: 28782
download_size: 2783651579
dataset_size: 2802439533.470241
- config_name: subset_74
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2754559729.3649096
num_examples: 28406
download_size: 2735429544
dataset_size: 2754559729.3649096
- config_name: subset_75
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2781756173.875478
num_examples: 28514
download_size: 2763956432
dataset_size: 2781756173.875478
- config_name: subset_76
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2795231220.3489594
num_examples: 28575
download_size: 2776548294
dataset_size: 2795231220.3489594
- config_name: subset_77
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2784668222.3674617
num_examples: 28464
download_size: 2767141356
dataset_size: 2784668222.3674617
- config_name: subset_78
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2794840057.0813184
num_examples: 28482
download_size: 2776676139
dataset_size: 2794840057.0813184
- config_name: subset_79
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2811028803.6070156
num_examples: 28610
download_size: 2792051198
dataset_size: 2811028803.6070156
- config_name: subset_8
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2785312572.495565
num_examples: 28629
download_size: 2767864276
dataset_size: 2785312572.495565
- config_name: subset_80
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2809223557.4052043
num_examples: 28499
download_size: 2790073012
dataset_size: 2809223557.4052043
- config_name: subset_81
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2787174032.112253
num_examples: 28597
download_size: 2769660460
dataset_size: 2787174032.112253
- config_name: subset_82
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2802257169.577882
num_examples: 28636
download_size: 2784402012
dataset_size: 2802257169.577882
- config_name: subset_83
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2747191626.276433
num_examples: 28254
download_size: 2728896596
dataset_size: 2747191626.276433
- config_name: subset_84
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 331318.0
num_examples: 3
download_size: 335351
dataset_size: 331318.0
- config_name: subset_85
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2789891797.078033
num_examples: 28542
download_size: 2771146811
dataset_size: 2789891797.078033
- config_name: subset_86
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2786906341.031647
num_examples: 28302
download_size: 2768842693
dataset_size: 2786906341.031647
- config_name: subset_87
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2792768880.3627176
num_examples: 28689
download_size: 2774522581
dataset_size: 2792768880.3627176
- config_name: subset_88
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2774773338.338046
num_examples: 28494
download_size: 2756543553
dataset_size: 2774773338.338046
- config_name: subset_89
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2807782071.327155
num_examples: 28745
download_size: 2787672342
dataset_size: 2807782071.327155
- config_name: subset_9
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2792795451.8417115
num_examples: 28598
download_size: 2773665820
dataset_size: 2792795451.8417115
- config_name: subset_90
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2804627510.568851
num_examples: 28587
download_size: 2785136665
dataset_size: 2804627510.568851
- config_name: subset_91
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2788608785.1023026
num_examples: 28435
download_size: 2770102726
dataset_size: 2788608785.1023026
- config_name: subset_92
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2808747227.581872
num_examples: 28496
download_size: 2790889061
dataset_size: 2808747227.581872
- config_name: subset_93
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2787507872.210434
num_examples: 28498
download_size: 2769670034
dataset_size: 2787507872.210434
- config_name: subset_94
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2786917593.789573
num_examples: 28571
download_size: 2768248625
dataset_size: 2786917593.789573
- config_name: subset_95
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2794036011.292627
num_examples: 28633
download_size: 2775289213
dataset_size: 2794036011.292627
- config_name: subset_96
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2789035936.312362
num_examples: 28567
download_size: 2771028846
dataset_size: 2789035936.312362
- config_name: subset_97
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2782470478.927699
num_examples: 28484
download_size: 2764032608
dataset_size: 2782470478.927699
- config_name: subset_98
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2835848791.3682213
num_examples: 28994
download_size: 2816225758
dataset_size: 2835848791.3682213
- config_name: subset_99
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2801426273.988891
num_examples: 28617
download_size: 2783036698
dataset_size: 2801426273.988891
configs:
- config_name: subset_0
data_files:
- split: train
path: subset_0/train-*
- config_name: subset_1
data_files:
- split: train
path: subset_1/train-*
- config_name: subset_10
data_files:
- split: train
path: subset_10/train-*
- config_name: subset_100
data_files:
- split: train
path: subset_100/train-*
- config_name: subset_101
data_files:
- split: train
path: subset_101/train-*
- config_name: subset_102
data_files:
- split: train
path: subset_102/train-*
- config_name: subset_103
data_files:
- split: train
path: subset_103/train-*
- config_name: subset_104
data_files:
- split: train
path: subset_104/train-*
- config_name: subset_106
data_files:
- split: train
path: subset_106/train-*
- config_name: subset_107
data_files:
- split: train
path: subset_107/train-*
- config_name: subset_108
data_files:
- split: train
path: subset_108/train-*
- config_name: subset_109
data_files:
- split: train
path: subset_109/train-*
- config_name: subset_11
data_files:
- split: train
path: subset_11/train-*
- config_name: subset_110
data_files:
- split: train
path: subset_110/train-*
- config_name: subset_111
data_files:
- split: train
path: subset_111/train-*
- config_name: subset_112
data_files:
- split: train
path: subset_112/train-*
- config_name: subset_113
data_files:
- split: train
path: subset_113/train-*
- config_name: subset_114
data_files:
- split: train
path: subset_114/train-*
- config_name: subset_115
data_files:
- split: train
path: subset_115/train-*
- config_name: subset_116
data_files:
- split: train
path: subset_116/train-*
- config_name: subset_117
data_files:
- split: train
path: subset_117/train-*
- config_name: subset_118
data_files:
- split: train
path: subset_118/train-*
- config_name: subset_119
data_files:
- split: train
path: subset_119/train-*
- config_name: subset_12
data_files:
- split: train
path: subset_12/train-*
- config_name: subset_120
data_files:
- split: train
path: subset_120/train-*
- config_name: subset_121
data_files:
- split: train
path: subset_121/train-*
- config_name: subset_122
data_files:
- split: train
path: subset_122/train-*
- config_name: subset_123
data_files:
- split: train
path: subset_123/train-*
- config_name: subset_124
data_files:
- split: train
path: subset_124/train-*
- config_name: subset_125
data_files:
- split: train
path: subset_125/train-*
- config_name: subset_126
data_files:
- split: train
path: subset_126/train-*
- config_name: subset_127
data_files:
- split: train
path: subset_127/train-*
- config_name: subset_128
data_files:
- split: train
path: subset_128/train-*
- config_name: subset_129
data_files:
- split: train
path: subset_129/train-*
- config_name: subset_13
data_files:
- split: train
path: subset_13/train-*
- config_name: subset_130
data_files:
- split: train
path: subset_130/train-*
- config_name: subset_131
data_files:
- split: train
path: subset_131/train-*
- config_name: subset_132
data_files:
- split: train
path: subset_132/train-*
- config_name: subset_133
data_files:
- split: train
path: subset_133/train-*
- config_name: subset_134
data_files:
- split: train
path: subset_134/train-*
- config_name: subset_135
data_files:
- split: train
path: subset_135/train-*
- config_name: subset_136
data_files:
- split: train
path: subset_136/train-*
- config_name: subset_137
data_files:
- split: train
path: subset_137/train-*
- config_name: subset_138
data_files:
- split: train
path: subset_138/train-*
- config_name: subset_139
data_files:
- split: train
path: subset_139/train-*
- config_name: subset_14
data_files:
- split: train
path: subset_14/train-*
- config_name: subset_140
data_files:
- split: train
path: subset_140/train-*
- config_name: subset_141
data_files:
- split: train
path: subset_141/train-*
- config_name: subset_142
data_files:
- split: train
path: subset_142/train-*
- config_name: subset_143
data_files:
- split: train
path: subset_143/train-*
- config_name: subset_144
data_files:
- split: train
path: subset_144/train-*
- config_name: subset_145
data_files:
- split: train
path: subset_145/train-*
- config_name: subset_146
data_files:
- split: train
path: subset_146/train-*
- config_name: subset_147
data_files:
- split: train
path: subset_147/train-*
- config_name: subset_148
data_files:
- split: train
path: subset_148/train-*
- config_name: subset_149
data_files:
- split: train
path: subset_149/train-*
- config_name: subset_15
data_files:
- split: train
path: subset_15/train-*
- config_name: subset_150
data_files:
- split: train
path: subset_150/train-*
- config_name: subset_151
data_files:
- split: train
path: subset_151/train-*
- config_name: subset_152
data_files:
- split: train
path: subset_152/train-*
- config_name: subset_153
data_files:
- split: train
path: subset_153/train-*
- config_name: subset_154
data_files:
- split: train
path: subset_154/train-*
- config_name: subset_155
data_files:
- split: train
path: subset_155/train-*
- config_name: subset_156
data_files:
- split: train
path: subset_156/train-*
- config_name: subset_157
data_files:
- split: train
path: subset_157/train-*
- config_name: subset_158
data_files:
- split: train
path: subset_158/train-*
- config_name: subset_159
data_files:
- split: train
path: subset_159/train-*
- config_name: subset_16
data_files:
- split: train
path: subset_16/train-*
- config_name: subset_160
data_files:
- split: train
path: subset_160/train-*
- config_name: subset_161
data_files:
- split: train
path: subset_161/train-*
- config_name: subset_162
data_files:
- split: train
path: subset_162/train-*
- config_name: subset_163
data_files:
- split: train
path: subset_163/train-*
- config_name: subset_164
data_files:
- split: train
path: subset_164/train-*
- config_name: subset_165
data_files:
- split: train
path: subset_165/train-*
- config_name: subset_166
data_files:
- split: train
path: subset_166/train-*
- config_name: subset_167
data_files:
- split: train
path: subset_167/train-*
- config_name: subset_168
data_files:
- split: train
path: subset_168/train-*
- config_name: subset_169
data_files:
- split: train
path: subset_169/train-*
- config_name: subset_17
data_files:
- split: train
path: subset_17/train-*
- config_name: subset_170
data_files:
- split: train
path: subset_170/train-*
- config_name: subset_171
data_files:
- split: train
path: subset_171/train-*
- config_name: subset_172
data_files:
- split: train
path: subset_172/train-*
- config_name: subset_173
data_files:
- split: train
path: subset_173/train-*
- config_name: subset_174
data_files:
- split: train
path: subset_174/train-*
- config_name: subset_175
data_files:
- split: train
path: subset_175/train-*
- config_name: subset_176
data_files:
- split: train
path: subset_176/train-*
- config_name: subset_177
data_files:
- split: train
path: subset_177/train-*
- config_name: subset_178
data_files:
- split: train
path: subset_178/train-*
- config_name: subset_179
data_files:
- split: train
path: subset_179/train-*
- config_name: subset_18
data_files:
- split: train
path: subset_18/train-*
- config_name: subset_180
data_files:
- split: train
path: subset_180/train-*
- config_name: subset_181
data_files:
- split: train
path: subset_181/train-*
- config_name: subset_182
data_files:
- split: train
path: subset_182/train-*
- config_name: subset_183
data_files:
- split: train
path: subset_183/train-*
- config_name: subset_184
data_files:
- split: train
path: subset_184/train-*
- config_name: subset_185
data_files:
- split: train
path: subset_185/train-*
- config_name: subset_186
data_files:
- split: train
path: subset_186/train-*
- config_name: subset_187
data_files:
- split: train
path: subset_187/train-*
- config_name: subset_188
data_files:
- split: train
path: subset_188/train-*
- config_name: subset_189
data_files:
- split: train
path: subset_189/train-*
- config_name: subset_19
data_files:
- split: train
path: subset_19/train-*
- config_name: subset_190
data_files:
- split: train
path: subset_190/train-*
- config_name: subset_191
data_files:
- split: train
path: subset_191/train-*
- config_name: subset_192
data_files:
- split: train
path: subset_192/train-*
- config_name: subset_193
data_files:
- split: train
path: subset_193/train-*
- config_name: subset_194
data_files:
- split: train
path: subset_194/train-*
- config_name: subset_195
data_files:
- split: train
path: subset_195/train-*
- config_name: subset_196
data_files:
- split: train
path: subset_196/train-*
- config_name: subset_197
data_files:
- split: train
path: subset_197/train-*
- config_name: subset_198
data_files:
- split: train
path: subset_198/train-*
- config_name: subset_199
data_files:
- split: train
path: subset_199/train-*
- config_name: subset_2
data_files:
- split: train
path: subset_2/train-*
- config_name: subset_20
data_files:
- split: train
path: subset_20/train-*
- config_name: subset_200
data_files:
- split: train
path: subset_200/train-*
- config_name: subset_201
data_files:
- split: train
path: subset_201/train-*
- config_name: subset_202
data_files:
- split: train
path: subset_202/train-*
- config_name: subset_203
data_files:
- split: train
path: subset_203/train-*
- config_name: subset_204
data_files:
- split: train
path: subset_204/train-*
- config_name: subset_205
data_files:
- split: train
path: subset_205/train-*
- config_name: subset_206
data_files:
- split: train
path: subset_206/train-*
- config_name: subset_207
data_files:
- split: train
path: subset_207/train-*
- config_name: subset_208
data_files:
- split: train
path: subset_208/train-*
- config_name: subset_209
data_files:
- split: train
path: subset_209/train-*
- config_name: subset_21
data_files:
- split: train
path: subset_21/train-*
- config_name: subset_210
data_files:
- split: train
path: subset_210/train-*
- config_name: subset_211
data_files:
- split: train
path: subset_211/train-*
- config_name: subset_212
data_files:
- split: train
path: subset_212/train-*
- config_name: subset_213
data_files:
- split: train
path: subset_213/train-*
- config_name: subset_214
data_files:
- split: train
path: subset_214/train-*
- config_name: subset_215
data_files:
- split: train
path: subset_215/train-*
- config_name: subset_216
data_files:
- split: train
path: subset_216/train-*
- config_name: subset_217
data_files:
- split: train
path: subset_217/train-*
- config_name: subset_218
data_files:
- split: train
path: subset_218/train-*
- config_name: subset_219
data_files:
- split: train
path: subset_219/train-*
- config_name: subset_22
data_files:
- split: train
path: subset_22/train-*
- config_name: subset_220
data_files:
- split: train
path: subset_220/train-*
- config_name: subset_221
data_files:
- split: train
path: subset_221/train-*
- config_name: subset_222
data_files:
- split: train
path: subset_222/train-*
- config_name: subset_223
data_files:
- split: train
path: subset_223/train-*
- config_name: subset_53
data_files:
- split: train
path: subset_224/train-*
- config_name: subset_105
data_files:
- split: train
path: subset_225/train-*
- config_name: subset_23
data_files:
- split: train
path: subset_23/train-*
- config_name: subset_24
data_files:
- split: train
path: subset_24/train-*
- config_name: subset_25
data_files:
- split: train
path: subset_25/train-*
- config_name: subset_26
data_files:
- split: train
path: subset_26/train-*
- config_name: subset_27
data_files:
- split: train
path: subset_27/train-*
- config_name: subset_28
data_files:
- split: train
path: subset_28/train-*
- config_name: subset_29
data_files:
- split: train
path: subset_29/train-*
- config_name: subset_3
data_files:
- split: train
path: subset_3/train-*
- config_name: subset_30
data_files:
- split: train
path: subset_30/train-*
- config_name: subset_31
data_files:
- split: train
path: subset_31/train-*
- config_name: subset_32
data_files:
- split: train
path: subset_32/train-*
- config_name: subset_33
data_files:
- split: train
path: subset_33/train-*
- config_name: subset_34
data_files:
- split: train
path: subset_34/train-*
- config_name: subset_35
data_files:
- split: train
path: subset_35/train-*
- config_name: subset_36
data_files:
- split: train
path: subset_36/train-*
- config_name: subset_37
data_files:
- split: train
path: subset_37/train-*
- config_name: subset_38
data_files:
- split: train
path: subset_38/train-*
- config_name: subset_39
data_files:
- split: train
path: subset_39/train-*
- config_name: subset_4
data_files:
- split: train
path: subset_4/train-*
- config_name: subset_40
data_files:
- split: train
path: subset_40/train-*
- config_name: subset_41
data_files:
- split: train
path: subset_41/train-*
- config_name: subset_42
data_files:
- split: train
path: subset_42/train-*
- config_name: subset_43
data_files:
- split: train
path: subset_43/train-*
- config_name: subset_44
data_files:
- split: train
path: subset_44/train-*
- config_name: subset_45
data_files:
- split: train
path: subset_45/train-*
- config_name: subset_46
data_files:
- split: train
path: subset_46/train-*
- config_name: subset_47
data_files:
- split: train
path: subset_47/train-*
- config_name: subset_48
data_files:
- split: train
path: subset_48/train-*
- config_name: subset_49
data_files:
- split: train
path: subset_49/train-*
- config_name: subset_5
data_files:
- split: train
path: subset_5/train-*
- config_name: subset_50
data_files:
- split: train
path: subset_50/train-*
- config_name: subset_51
data_files:
- split: train
path: subset_51/train-*
- config_name: subset_52
data_files:
- split: train
path: subset_52/train-*
- config_name: subset_54
data_files:
- split: train
path: subset_54/train-*
- config_name: subset_55
data_files:
- split: train
path: subset_55/train-*
- config_name: subset_56
data_files:
- split: train
path: subset_56/train-*
- config_name: subset_57
data_files:
- split: train
path: subset_57/train-*
- config_name: subset_58
data_files:
- split: train
path: subset_58/train-*
- config_name: subset_59
data_files:
- split: train
path: subset_59/train-*
- config_name: subset_6
data_files:
- split: train
path: subset_6/train-*
- config_name: subset_60
data_files:
- split: train
path: subset_60/train-*
- config_name: subset_61
data_files:
- split: train
path: subset_61/train-*
- config_name: subset_62
data_files:
- split: train
path: subset_62/train-*
- config_name: subset_63
data_files:
- split: train
path: subset_63/train-*
- config_name: subset_64
data_files:
- split: train
path: subset_64/train-*
- config_name: subset_65
data_files:
- split: train
path: subset_65/train-*
- config_name: subset_66
data_files:
- split: train
path: subset_66/train-*
- config_name: subset_67
data_files:
- split: train
path: subset_67/train-*
- config_name: subset_68
data_files:
- split: train
path: subset_68/train-*
- config_name: subset_69
data_files:
- split: train
path: subset_69/train-*
- config_name: subset_7
data_files:
- split: train
path: subset_7/train-*
- config_name: subset_70
data_files:
- split: train
path: subset_70/train-*
- config_name: subset_71
data_files:
- split: train
path: subset_71/train-*
- config_name: subset_72
data_files:
- split: train
path: subset_72/train-*
- config_name: subset_73
data_files:
- split: train
path: subset_73/train-*
- config_name: subset_74
data_files:
- split: train
path: subset_74/train-*
- config_name: subset_75
data_files:
- split: train
path: subset_75/train-*
- config_name: subset_76
data_files:
- split: train
path: subset_76/train-*
- config_name: subset_77
data_files:
- split: train
path: subset_77/train-*
- config_name: subset_78
data_files:
- split: train
path: subset_78/train-*
- config_name: subset_79
data_files:
- split: train
path: subset_79/train-*
- config_name: subset_8
data_files:
- split: train
path: subset_8/train-*
- config_name: subset_80
data_files:
- split: train
path: subset_80/train-*
- config_name: subset_81
data_files:
- split: train
path: subset_81/train-*
- config_name: subset_82
data_files:
- split: train
path: subset_82/train-*
- config_name: subset_83
data_files:
- split: train
path: subset_83/train-*
- config_name: subset_84
data_files:
- split: train
path: subset_84/train-*
- config_name: subset_85
data_files:
- split: train
path: subset_85/train-*
- config_name: subset_86
data_files:
- split: train
path: subset_86/train-*
- config_name: subset_87
data_files:
- split: train
path: subset_87/train-*
- config_name: subset_88
data_files:
- split: train
path: subset_88/train-*
- config_name: subset_89
data_files:
- split: train
path: subset_89/train-*
- config_name: subset_9
data_files:
- split: train
path: subset_9/train-*
- config_name: subset_90
data_files:
- split: train
path: subset_90/train-*
- config_name: subset_91
data_files:
- split: train
path: subset_91/train-*
- config_name: subset_92
data_files:
- split: train
path: subset_92/train-*
- config_name: subset_93
data_files:
- split: train
path: subset_93/train-*
- config_name: subset_94
data_files:
- split: train
path: subset_94/train-*
- config_name: subset_95
data_files:
- split: train
path: subset_95/train-*
- config_name: subset_96
data_files:
- split: train
path: subset_96/train-*
- config_name: subset_97
data_files:
- split: train
path: subset_97/train-*
- config_name: subset_98
data_files:
- split: train
path: subset_98/train-*
- config_name: subset_99
data_files:
- split: train
path: subset_99/train-*
---
|
macrocosm-os/code-parrot-github-code | macrocosm-os | "2024-10-30T13:40:00Z" | 18,336 | 3 | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"language:code",
"license:other",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation"
] | "2024-10-28T19:26:22Z" | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- other
multilinguality:
- multilingual
pretty_name: github-code
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
---
# GitHub Code Dataset
## Dataset Description
The GitHub Code dataset consists of 115M code files from GitHub in 32 programming languages with 60 extensions totaling 1TB of data. The dataset was created from the public GitHub dataset on Google BigQuery.
### How to use it
The GitHub Code dataset is very large, so for most use cases it is recommended to use the streaming API of `datasets`. You can load and iterate through the dataset with the following two lines of code:
```python
from datasets import load_dataset
ds = load_dataset("codeparrot/github-code", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{
'code': "import mod189 from './mod189';\nvar value=mod189+1;\nexport default value;\n",
'repo_name': 'MirekSz/webpack-es6-ts',
'path': 'app/mods/mod190.js',
'language': 'JavaScript',
'license': 'isc',
'size': 73
}
```
Besides the code, repo name, and path, the programming language, license, and size of each file are also part of the dataset. You can also filter the dataset for any subset of the 30 included languages (see the full list below). Just pass the languages as a list. E.g. if your dream is to build a Codex model for Dockerfiles, use the following configuration:
```python
ds = load_dataset("codeparrot/github-code", streaming=True, split="train", languages=["Dockerfile"])
print(next(iter(ds))["code"])
#OUTPUT:
"""\
FROM rockyluke/ubuntu:precise
ENV DEBIAN_FRONTEND="noninteractive" \
TZ="Europe/Amsterdam"
...
"""
```
We also have access to the license of each file's origin repo, so we can filter for licenses in the same way we filtered for languages:
```python
ds = load_dataset("codeparrot/github-code", streaming=True, split="train", licenses=["mit", "isc"])
licenses = []
for element in iter(ds).take(10_000):
licenses.append(element["license"])
print(Counter(licenses))
#OUTPUT:
Counter({'mit': 9896, 'isc': 104})
```
Naturally, you can also download the full dataset. Note that this will download ~300GB of compressed text data, and the uncompressed dataset will take up ~1TB of storage:
```python
ds = load_dataset("codeparrot/github-code", split="train")
```
## Data Structure
### Data Instances
```python
{
'code': "import mod189 from './mod189';\nvar value=mod189+1;\nexport default value;\n",
'repo_name': 'MirekSz/webpack-es6-ts',
'path': 'app/mods/mod190.js',
'language': 'JavaScript',
'license': 'isc',
'size': 73
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|code|string|content of source file|
|repo_name|string|name of the GitHub repository|
|path|string|path of file in GitHub repository|
|language|string|programming language as inferred by extension|
|license|string|license of GitHub repository|
|size|int|size of source file in bytes|
### Data Splits
The dataset only contains a train split.
## Languages
The dataset contains 30 programming languages with over 60 extensions:
```python
{
"Assembly": [".asm"],
"Batchfile": [".bat", ".cmd"],
"C": [".c", ".h"],
"C#": [".cs"],
"C++": [".cpp", ".hpp", ".c++", ".h++", ".cc", ".hh", ".C", ".H"],
"CMake": [".cmake"],
"CSS": [".css"],
"Dockerfile": [".dockerfile", "Dockerfile"],
"FORTRAN": ['.f90', '.f', '.f03', '.f08', '.f77', '.f95', '.for', '.fpp'],
"GO": [".go"],
"Haskell": [".hs"],
"HTML":[".html"],
"Java": [".java"],
"JavaScript": [".js"],
"Julia": [".jl"],
"Lua": [".lua"],
"Makefile": ["Makefile"],
"Markdown": [".md", ".markdown"],
"PHP": [".php", ".php3", ".php4", ".php5", ".phps", ".phpt"],
"Perl": [".pl", ".pm", ".pod", ".perl"],
"PowerShell": ['.ps1', '.psd1', '.psm1'],
"Python": [".py"],
"Ruby": [".rb"],
"Rust": [".rs"],
"SQL": [".sql"],
"Scala": [".scala"],
"Shell": [".sh", ".bash", ".command", ".zsh"],
"TypeScript": [".ts", ".tsx"],
"TeX": [".tex"],
"Visual Basic": [".vb"]
}
```
## Licenses
Each example is also annotated with the license of the associated repository. There are in total 15 licenses:
```python
[
'mit',
'apache-2.0',
'gpl-3.0',
'gpl-2.0',
'bsd-3-clause',
'agpl-3.0',
'lgpl-3.0',
'lgpl-2.1',
'bsd-2-clause',
'cc0-1.0',
'epl-1.0',
'mpl-2.0',
'unlicense',
'isc',
'artistic-2.0'
]
```
## Dataset Statistics
The dataset contains 115M files and the sum of all the source code file sizes is 873 GB (note that the size of the dataset is larger due to the extra fields). A breakdown per language is given in the plot and table below:
![dataset-statistics](https://huggingface.co/datasets/codeparrot/github-code/resolve/main/github-code-stats-alpha.png)
| | Language |File Count| Size (GB)|
|---:|:-------------|---------:|-------:|
| 0 | Java | 19548190 | 107.70 |
| 1 | C | 14143113 | 183.83 |
| 2 | JavaScript | 11839883 | 87.82 |
| 3 | HTML | 11178557 | 118.12 |
| 4 | PHP | 11177610 | 61.41 |
| 5 | Markdown | 8464626 | 23.09 |
| 6 | C++ | 7380520 | 87.73 |
| 7 | Python | 7226626 | 52.03 |
| 8 | C# | 6811652 | 36.83 |
| 9 | Ruby | 4473331 | 10.95 |
| 10 | GO | 2265436 | 19.28 |
| 11 | TypeScript | 1940406 | 24.59 |
| 12 | CSS | 1734406 | 22.67 |
| 13 | Shell | 1385648 | 3.01 |
| 14 | Scala | 835755 | 3.87 |
| 15 | Makefile | 679430 | 2.92 |
| 16 | SQL | 656671 | 5.67 |
| 17 | Lua | 578554 | 2.81 |
| 18 | Perl | 497949 | 4.70 |
| 19 | Dockerfile | 366505 | 0.71 |
| 20 | Haskell | 340623 | 1.85 |
| 21 | Rust | 322431 | 2.68 |
| 22 | TeX | 251015 | 2.15 |
| 23 | Batchfile | 236945 | 0.70 |
| 24 | CMake | 175282 | 0.54 |
| 25 | Visual Basic | 155652 | 1.91 |
| 26 | FORTRAN | 142038 | 1.62 |
| 27 | PowerShell | 136846 | 0.69 |
| 28 | Assembly | 82905 | 0.78 |
| 29 | Julia | 58317 | 0.29 |
## Dataset Creation
The dataset was created in two steps:
1. Files with the extensions given in the list above were retrieved from the GitHub dataset on BigQuery (full query [here](https://huggingface.co/datasets/codeparrot/github-code/blob/main/query.sql)). The query was executed on _Mar 16, 2022, 6:23:39 PM UTC+1_.
2. Files with lines longer than 1000 characters, as well as duplicates (exact duplicates ignoring whitespace), were dropped (full preprocessing script [here](https://huggingface.co/datasets/codeparrot/github-code/blob/main/github_preprocessing.py)); a toy sketch of these filters is given below.
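A toy sketch of these two filters, assuming simple definitions of "long line" and "exact duplicate ignoring whitespace" (the linked preprocessing script is authoritative; this is not the authors' code):
```python
import hashlib

def has_long_line(code: str, max_len: int = 1000) -> bool:
    # Drop files containing any line longer than 1000 characters.
    return any(len(line) > max_len for line in code.splitlines())

def whitespace_insensitive_key(code: str) -> str:
    # Hash the file with all whitespace removed, so files that differ
    # only in whitespace collide and count as exact duplicates.
    return hashlib.md5("".join(code.split()).encode("utf-8")).hexdigest()

seen = set()

def keep(code: str) -> bool:
    if has_long_line(code):
        return False
    key = whitespace_insensitive_key(code)
    if key in seen:
        return False
    seen.add(key)
    return True
```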
## Considerations for Using the Data
The dataset consists of source code from a wide range of repositories. As such, it can potentially include harmful or biased code, as well as sensitive information like passwords or usernames.
## Releases
You can load any older version of the dataset with the `revision` argument:
```Python
ds = load_dataset("codeparrot/github-code", revision="v1.0")
```
### v1.0
- Initial release of dataset
- The query was executed on _Feb 14, 2022, 12:03:16 PM UTC+1_
### v1.1
- Fix missing Scala/TypeScript
- Fix deduplication issue with inconsistent Python `hash`
- The query was executed on _Mar 16, 2022, 6:23:39 PM UTC+1_
|
laion/strategic_game_maze | laion | "2023-10-20T04:13:19Z" | 18,249 | 10 | [
"license:cc-by-4.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-10-15T02:44:07Z" | ---
license: cc-by-4.0
---
NOTICE: some of the mazes are mistakenly labeled with both the length and width columns set to 40; they are actually 30.
# maze
This dataset contains 350,000 mazes, representing over 39.29 billion moves.
Each maze is a 30x30 ASCII representation, with solutions derived using BFS (breadth-first search).
It has two columns:
- 'Maze': representation of the maze as a list of strings; shape is 30*30
  - visual example
 <img src="https://cdn-uploads.huggingface.co/production/uploads/644b983f0fbe4830f192c4f5/BGplH40fK5wQzpofPocMK.png" alt="drawing" width="200"/>
- 'Path': the solution from start point to end point as a list of strings; each item represents a position in the maze.
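A minimal loading sketch, assuming the standard `datasets` loader and the two column names above:
```python
from datasets import load_dataset

# Assumption: the default config exposes 'Maze' and 'Path' as lists of strings.
ds = load_dataset("laion/strategic_game_maze", split="train", streaming=True)
example = next(iter(ds))
print("\n".join(example["Maze"]))  # 30x30 ASCII maze
print(example["Path"][:5])         # first few positions of the BFS solution
```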
|
ruslanmv/ai-medical-chatbot | ruslanmv | "2024-03-23T20:45:11Z" | 18,225 | 177 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-16T12:10:13Z" | ---
configs:
- config_name: default
data_files:
- path: dialogues.*
split: train
dataset_info:
dataset_size: 141665910
download_size: 141665910
features:
- dtype: string
name: Description
- dtype: string
name: Patient
- dtype: string
name: Doctor
splits:
- name: train
num_bytes: 141665910
num_examples: 256916
---
# AI Medical Chatbot Dataset
This is an experimental dataset designed to run a medical chatbot.
It contains at least 250k dialogues between a Patient and a Doctor.
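A minimal loading sketch, assuming the `Description`/`Patient`/`Doctor` columns listed in the metadata above:
```python
from datasets import load_dataset

ds = load_dataset("ruslanmv/ai-medical-chatbot", split="train")
row = ds[0]
print(row["Description"])
print("Patient:", row["Patient"][:200])  # truncate long turns for display
print("Doctor:", row["Doctor"][:200])
```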
[![](future.jpg)](https://huggingface.co/spaces/ruslanmv/AI-Medical-Chatbot)
## Playground ChatBot
[ruslanmv/AI-Medical-Chatbot](https://huggingface.co/spaces/ruslanmv/AI-Medical-Chatbot)
For further information, visit the project here:
[https://github.com/ruslanmv/ai-medical-chatbot](https://github.com/ruslanmv/ai-medical-chatbot) |
mlfoundations/MINT-1T-PDF-CC-2023-50 | mlfoundations | "2024-09-19T21:06:23Z" | 18,193 | 3 | [
"task_categories:image-to-text",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"arxiv:2406.11271",
"region:us",
"multimodal"
] | [
"image-to-text",
"text-generation"
] | "2024-07-12T05:42:22Z" | ---
license: cc-by-4.0
task_categories:
- image-to-text
- text-generation
language:
- en
tags:
- multimodal
pretty_name: MINT-1T
size_categories:
- 100B<n<1T
---
<h1 align="center">
🍃 MINT-1T:<br>Scaling Open-Source Multimodal Data by 10x:<br> A Multimodal Dataset with One Trillion Tokens
</h1>
🍃 MINT-1T is an open-source **M**ultimodal **INT**erleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. 🍃 MINT-1T is designed to facilitate research in multimodal pretraining. 🍃 MINT-1T is created by a team from the University of Washington in collaboration with Salesforce Research, other academic institutions including Stanford University, University of Texas at Austin, and University of California Berkeley.
You are currently viewing a subset of the PDF portion of 🍃 MINT-1T associated with CommonCrawl dump `CC-2023-50`. For other PDF, HTML, and ArXiv subsets, refer to the [🍃 MINT-1T collection](https://huggingface.co/collections/mlfoundations/mint-1t-6690216ca4d0df7e518dde1c).
![Examples](interleaved-example-twitter.png)
## Updates
### 9/19/24
We have removed roughly 10% of the PDF samples as there was a mismatch between the frames in the TIFF images and the document metadata.
### 8/8/24
We have become aware that the image hashes in the PDF subset of MINT-1T do not match the images in the documents. We want to emphasize that the images for each document are correct, and only the image hashes in the documents' metadata are mislabeled.
## Dataset Details
### Dataset Sources
- **Repository**: https://github.com/mlfoundations/MINT-1T
- **Paper:** https://arxiv.org/abs/2406.11271
- **Blog:** https://blog.salesforceairesearch.com/mint-1t/
## Uses
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
🍃 MINT-1T is designed to facilitate research in multimodal pretraining. The dataset can be used for training multimodal models that can reason about interleaved text and image sequences such as [Idefics2](https://huggingface.co/HuggingFaceM4/idefics2-8b), [XGen-MM](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-r-v1), and [Chameleon](https://huggingface.co/facebook/chameleon-30b).
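As a minimal sketch of how one might inspect a sample, assuming this repo's webdataset shards are streamable with the standard `datasets` loader (field names are not guaranteed):
```python
from datasets import load_dataset

# Assumption: the default config of this repo streams with `datasets`.
ds = load_dataset("mlfoundations/MINT-1T-PDF-CC-2023-50", split="train", streaming=True)
sample = next(iter(ds))
print(sample.keys())  # inspect the available fields of one document
```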
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
🍃 MINT-1T was built to make research into large multimodal models more accessible. Using the dataset to train models that ingest or generate personally identifying information (such as images of people's faces and other sensitive content), as well as using it for military applications, are inappropriate use cases of 🍃 MINT-1T.
## Dataset Creation
### Curation Rationale
🍃 MINT-1T was created to address a significant gap in the open-source domain by providing a large-scale multimodal interleaved dataset for pre-training large multimodal models. This dataset aims to be a valuable resource for the research community, facilitating open science in multimodal pretraining.
### Source Data
The dataset is a comprehensive collection of multimodal documents from various sources:
- HTML documents: Filtered from CommonCrawl WARC dumps spanning from 2017 to 2024
- PDF documents: Extracted from CommonCrawl WAT dumps covering 2023 to 2024
- ArXiv documents: A subset of papers from the ArXiv repository
In total, 🍃 MINT-1T contains 1056.8 million documents, broken down as follows:
- 1029.4 million HTML documents
- 24.0 million PDF documents
- 0.6 million ArXiv documents
#### Data Collection and Processing
The data collection and processing involved several steps:
1. Document Extraction:
- HTML documents were parsed from CommonCrawl WARC files
- PDF documents were extracted from CommonCrawl WAT files
- ArXiv papers were directly sourced from ArXiv S3 buckets
2. Filtering Process:
- Applied text quality filters to ensure content relevance and readability
- Removed duplicate content at both paragraph and document levels
- Filtered out undesirable content based on predefined criteria
- Verified image availability and quality for HTML documents
- Limited PDF size to 50MB and 50 pages to manage dataset size and quality
3. Image Processing:
- Used NSFW image detection to remove pornographic or otherwise undesirable images
- Removed images smaller than 150 pixels or larger than 20,000 pixels
- Adjusted aspect ratio thresholds for HTML (2:1) and PDF (3:1) to preserve scientific figures
4. Text Processing:
- Used fasttext for language identification, focusing on English content
   - Masked personally identifiable information such as email addresses and IP addresses (a toy masking sketch is given below)
- Applied paragraph and document-level deduplication using Bloom filters
5. PDF Specific Processing:
- Used PyMuPDF for parsing PDFs and extracting reading order
- Clustered text blocks based on columns and ordered from top left to bottom right
6. ArXiv Specific Processing:
- Used TexSoup to parse LaTeX source code and interleave images with text
- Cleaned up LaTeX code by removing imports, bibliography, tables, and citation tags
Various open-source tools were utilized in this process, including fasttext, [PyMuPDF](https://github.com/pymupdf/PyMuPDF), [DCLM](https://www.datacomp.ai/dclm/), and [bff](https://github.com/revbucket/bff) for deduplication and content filtering.
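As a toy illustration of the masking step mentioned above (the actual masking rules are not published in this card; these regexes and placeholder tokens are assumptions):
```python
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def mask_pii(text: str) -> str:
    # Replace e-mail addresses and IPv4 addresses with placeholder tokens.
    text = EMAIL_RE.sub("<EMAIL>", text)
    return IPV4_RE.sub("<IP>", text)

print(mask_pii("reach me at jane.doe@example.com from 192.168.0.1"))
# -> reach me at <EMAIL> from <IP>
```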
#### Personal and Sensitive Information
Despite sourcing from public web data, significant efforts were made to minimize the inclusion of personal and sensitive information:
- Email addresses and IP addresses were masked to protect privacy
- An NSFW image classifier was used to remove inappropriate visual content
- URLs containing substrings associated with undesirable or sensitive content were filtered out
However, users should be aware that as the data originates from the public web, it may still contain some sensitive or personal information. The dataset creators acknowledge this limitation and advise users to exercise caution and potentially apply additional filtering based on their specific use cases.
## Bias, Risks, and Limitations
Several potential biases, risks, and limitations have been identified:
1. Data Bias: As the dataset is sourced from web crawls, it may inherit biases present in online content.
2. Content Risks: Despite extensive filtering, there's a possibility that some offensive, insensitive, or inappropriate content may remain in the dataset.
3. Image Availability: The dataset relies on external image URLs, which may become unavailable over time due to link rot, potentially affecting the dataset's long-term usability.
4. PDF Parsing Limitations: The current method for extracting reading order from PDFs may not always accurately capture the intended flow, especially for documents with complex layouts.
5. Potential Legal and Ethical Concerns: While efforts were made to respect robots.txt files and remove sensitive information, there may still be content that individuals did not explicitly consent to include.
### Recommendations
Given these considerations, the following recommendations are provided:
1. Additional Filtering: Users are strongly encouraged to apply additional filtering based on their specific use case and ethical considerations.
2. Inappropriate Use Cases: The dataset is not recommended for applications involving the processing or generation of personally identifying information, nor for military applications.
3. Legal Compliance: Users should independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
4. Bias Awareness: Researchers and developers should be cognizant of potential biases in the dataset and consider their impact on model training and outputs.
## License
We release 🍃 MINT-1T under a CC-BY-4.0 license, designating it primarily as a research artifact. While the dataset is freely available, users are responsible for ensuring its legal use in commercial settings. Users must independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
## Citation
```
@article{awadalla2024mint1t,
title={MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens},
author={Anas Awadalla and Le Xue and Oscar Lo and Manli Shu and Hannah Lee and Etash Kumar Guha and Matt Jordan and Sheng Shen and Mohamed Awadalla and Silvio Savarese and Caiming Xiong and Ran Xu and Yejin Choi and Ludwig Schmidt},
year={2024}
}
``` |
regent-project/regent-subset-of-jat-dataset-tokenized | regent-project | "2024-10-02T05:12:09Z" | 18,075 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-10-01T22:46:53Z" | ---
dataset_info:
- config_name: atari-alien_newdata
features:
- name: distances
sequence: float32
splits:
- name: train
num_bytes: 1905456
num_examples: 22684
download_size: 2088245
dataset_size: 1905456
- config_name: atari-amidar_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32810168
num_examples: 100031
download_size: 11019541
dataset_size: 32810168
- config_name: atari-amidar_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23046343776
num_examples: 3142
download_size: 256637379
dataset_size: 23046343776
- config_name: atari-assault_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32806232
num_examples: 100019
download_size: 14121737
dataset_size: 32806232
- config_name: atari-assault_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22972994496
num_examples: 3132
download_size: 186535975
dataset_size: 22972994496
- config_name: atari-asterix_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32806560
num_examples: 100020
download_size: 11902934
dataset_size: 32806560
- config_name: atari-asterix_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23332405968
num_examples: 3181
download_size: 188517858
dataset_size: 23332405968
- config_name: atari-asteroids_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22936319856
num_examples: 3127
download_size: 202442660
dataset_size: 22936319856
- config_name: atari-atlantis_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32801640
num_examples: 100005
download_size: 13128838
dataset_size: 32801640
- config_name: atari-atlantis_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22943654784
num_examples: 3128
download_size: 206794180
dataset_size: 22943654784
- config_name: atari-bankheist_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32806888
num_examples: 100021
download_size: 13754178
dataset_size: 32806888
- config_name: atari-bankheist_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23149032768
num_examples: 3156
download_size: 307236770
dataset_size: 23149032768
- config_name: atari-battlezone_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800984
num_examples: 100003
download_size: 15918969
dataset_size: 32800984
- config_name: atari-battlezone_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23002334208
num_examples: 3136
download_size: 247618279
dataset_size: 23002334208
- config_name: atari-beamrider_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32806232
num_examples: 100019
download_size: 16063964
dataset_size: 32806232
- config_name: atari-beamrider_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22965659568
num_examples: 3131
download_size: 224067669
dataset_size: 22965659568
- config_name: atari-berzerk_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32803936
num_examples: 100012
download_size: 11678744
dataset_size: 32803936
- config_name: atari-berzerk_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22936319856
num_examples: 3127
download_size: 204431627
dataset_size: 22936319856
- config_name: atari-bowling_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32801968
num_examples: 100006
download_size: 7354865
dataset_size: 32801968
- config_name: atari-bowling_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23090353344
num_examples: 3148
download_size: 165124017
dataset_size: 23090353344
- config_name: atari-boxing_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32802296
num_examples: 100007
download_size: 11950572
dataset_size: 32802296
- config_name: atari-boxing_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23669812656
num_examples: 3227
download_size: 296234619
dataset_size: 23669812656
- config_name: atari-breakout_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32804592
num_examples: 100014
download_size: 4911820
dataset_size: 32804592
- config_name: atari-breakout_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22943654784
num_examples: 3128
download_size: 150562919
dataset_size: 22943654784
- config_name: atari-centipede_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32805904
num_examples: 100018
download_size: 11285739
dataset_size: 32805904
- config_name: atari-centipede_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23295731328
num_examples: 3176
download_size: 185406529
dataset_size: 23295731328
- config_name: atari-choppercommand_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32809840
num_examples: 100030
download_size: 14259234
dataset_size: 32809840
- config_name: atari-choppercommand_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23061013632
num_examples: 3144
download_size: 225019380
dataset_size: 23061013632
- config_name: atari-crazyclimber_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32804592
num_examples: 100014
download_size: 12305828
dataset_size: 32804592
- config_name: atari-crazyclimber_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22987664352
num_examples: 3134
download_size: 227557018
dataset_size: 22987664352
- config_name: atari-defender_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32807872
num_examples: 100024
download_size: 10537157
dataset_size: 32807872
- config_name: atari-defender_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22936319856
num_examples: 3127
download_size: 172063588
dataset_size: 22936319856
- config_name: atari-demonattack_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32807872
num_examples: 100024
download_size: 15551680
dataset_size: 32807872
- config_name: atari-demonattack_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22936319856
num_examples: 3127
download_size: 181049894
dataset_size: 22936319856
- config_name: atari-doubledunk_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32801968
num_examples: 100006
download_size: 11428550
dataset_size: 32801968
- config_name: atari-doubledunk_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23288396400
num_examples: 3175
download_size: 251707705
dataset_size: 23288396400
- config_name: atari-enduro_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32802296
num_examples: 100007
download_size: 12848229
dataset_size: 32802296
- config_name: atari-fishingderby_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13500648
dataset_size: 32800000
- config_name: atari-fishingderby_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23141697840
num_examples: 3155
download_size: 321501382
dataset_size: 23141697840
- config_name: atari-freeway_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32810168
num_examples: 100031
download_size: 13676872
dataset_size: 32810168
- config_name: atari-freeway_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22965659568
num_examples: 3131
download_size: 280231420
dataset_size: 22965659568
- config_name: atari-frostbite_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32806560
num_examples: 100020
download_size: 11934917
dataset_size: 32806560
- config_name: atari-frostbite_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23075683488
num_examples: 3146
download_size: 278638735
dataset_size: 23075683488
- config_name: atari-gopher_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32809512
num_examples: 100029
download_size: 14334636
dataset_size: 32809512
- config_name: atari-gopher_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22943654784
num_examples: 3128
download_size: 196526681
dataset_size: 22943654784
- config_name: atari-gravitar_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32805248
num_examples: 100016
download_size: 11576279
dataset_size: 32805248
- config_name: atari-gravitar_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23486439456
num_examples: 3202
download_size: 199543758
dataset_size: 23486439456
- config_name: atari-hero_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800984
num_examples: 100003
download_size: 12568260
dataset_size: 32800984
- config_name: atari-hero_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23061013632
num_examples: 3144
download_size: 231552624
dataset_size: 23061013632
- config_name: atari-icehockey_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800984
num_examples: 100003
download_size: 12259737
dataset_size: 32800984
- config_name: atari-icehockey_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23017004064
num_examples: 3138
download_size: 195362912
dataset_size: 23017004064
- config_name: atari-jamesbond_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32810168
num_examples: 100031
download_size: 15590631
dataset_size: 32810168
- config_name: atari-jamesbond_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22965659568
num_examples: 3131
download_size: 239495464
dataset_size: 22965659568
- config_name: atari-kangaroo_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32807872
num_examples: 100024
download_size: 12657496
dataset_size: 32807872
- config_name: atari-kangaroo_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23178372480
num_examples: 3160
download_size: 242035098
dataset_size: 23178372480
- config_name: atari-krull_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32808528
num_examples: 100026
download_size: 13793008
dataset_size: 32808528
- config_name: atari-krull_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23193042336
num_examples: 3162
download_size: 429983939
dataset_size: 23193042336
- config_name: atari-kungfumaster_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32806232
num_examples: 100019
download_size: 14058554
dataset_size: 32806232
- config_name: atari-kungfumaster_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23053678704
num_examples: 3143
download_size: 298664084
dataset_size: 23053678704
- config_name: atari-montezumarevenge_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32805904
num_examples: 100018
download_size: 12767695
dataset_size: 32805904
- config_name: atari-montezumarevenge_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23237051904
num_examples: 3168
download_size: 304131065
dataset_size: 23237051904
- config_name: atari-mspacman_newdata
features:
- name: distances
sequence: float32
splits:
- name: train
num_bytes: 1219680
num_examples: 14520
download_size: 1069909
dataset_size: 1219680
- config_name: atari-namethisgame_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800984
num_examples: 100003
download_size: 15146115
dataset_size: 32800984
- config_name: atari-namethisgame_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22965659568
num_examples: 3131
download_size: 257925381
dataset_size: 22965659568
- config_name: atari-phoenix_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32808856
num_examples: 100027
download_size: 14775061
dataset_size: 32808856
- config_name: atari-phoenix_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22936319856
num_examples: 3127
download_size: 189670978
dataset_size: 22936319856
- config_name: atari-pitfall_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32807872
num_examples: 100024
download_size: 2022905
dataset_size: 32807872
- config_name: atari-pitfall_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22965659568
num_examples: 3131
download_size: 123924337
dataset_size: 22965659568
- config_name: atari-pong_newdata
features:
- name: distances
sequence: float32
splits:
- name: train
num_bytes: 697452
num_examples: 8303
download_size: 486008
dataset_size: 697452
- config_name: atari-privateeye_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32806232
num_examples: 100019
download_size: 15683786
dataset_size: 32806232
- config_name: atari-privateeye_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23163702624
num_examples: 3158
download_size: 307264839
dataset_size: 23163702624
- config_name: atari-qbert_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32805576
num_examples: 100017
download_size: 11451463
dataset_size: 32805576
- config_name: atari-qbert_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23002334208
num_examples: 3136
download_size: 285593415
dataset_size: 23002334208
- config_name: atari-riverraid_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32806888
num_examples: 100021
download_size: 14223896
dataset_size: 32806888
- config_name: atari-riverraid_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23156367696
num_examples: 3157
download_size: 288584693
dataset_size: 23156367696
- config_name: atari-roadrunner_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32809512
num_examples: 100029
download_size: 13280570
dataset_size: 32809512
- config_name: atari-roadrunner_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23105023200
num_examples: 3150
download_size: 224904364
dataset_size: 23105023200
- config_name: atari-robotank_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32809512
num_examples: 100029
download_size: 13460396
dataset_size: 32809512
- config_name: atari-robotank_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22980329424
num_examples: 3133
download_size: 229314767
dataset_size: 22980329424
- config_name: atari-seaquest_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32808528
num_examples: 100026
download_size: 14198049
dataset_size: 32808528
- config_name: atari-seaquest_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23017004064
num_examples: 3138
download_size: 213657303
dataset_size: 23017004064
- config_name: atari-skiing_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32808856
num_examples: 100027
download_size: 12884548
dataset_size: 32808856
- config_name: atari-skiing_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23992549488
num_examples: 3271
download_size: 265395007
dataset_size: 23992549488
- config_name: atari-solaris_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32803936
num_examples: 100012
download_size: 10476310
dataset_size: 32803936
- config_name: atari-solaris_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22950989712
num_examples: 3129
download_size: 230256082
dataset_size: 22950989712
- config_name: atari-spaceinvaders_newdata
features:
- name: distances
sequence: float32
splits:
- name: train
num_bytes: 2686992
num_examples: 31988
download_size: 2636150
dataset_size: 2686992
- config_name: atari-stargunner_newdata
features:
- name: distances
sequence: float32
splits:
- name: train
num_bytes: 2684556
num_examples: 31959
download_size: 3498569
dataset_size: 2684556
- config_name: atari-surround_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32809840
num_examples: 100030
download_size: 11413509
dataset_size: 32809840
- config_name: atari-surround_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23053678704
num_examples: 3143
download_size: 180554622
dataset_size: 23053678704
- config_name: atari-tennis_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32802952
num_examples: 100009
download_size: 5720988
dataset_size: 32802952
- config_name: atari-tennis_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22950989712
num_examples: 3129
download_size: 151319180
dataset_size: 22950989712
- config_name: atari-timepilot_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32809184
num_examples: 100028
download_size: 14178589
dataset_size: 32809184
- config_name: atari-timepilot_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22972994496
num_examples: 3132
download_size: 196752738
dataset_size: 22972994496
- config_name: atari-tutankham_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 1848643
dataset_size: 32800000
- config_name: atari-tutankham_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22936319856
num_examples: 3127
download_size: 109029316
dataset_size: 22936319856
- config_name: atari-upndown_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32808528
num_examples: 100026
download_size: 15582164
dataset_size: 32808528
- config_name: atari-upndown_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22936319856
num_examples: 3127
download_size: 482802952
dataset_size: 22936319856
- config_name: atari-venture_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 11405983
dataset_size: 32800000
- config_name: atari-venture_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23090353344
num_examples: 3148
download_size: 217148669
dataset_size: 23090353344
- config_name: atari-videopinball_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32810168
num_examples: 100031
download_size: 9499589
dataset_size: 32810168
- config_name: atari-videopinball_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22958324640
num_examples: 3130
download_size: 272326339
dataset_size: 22958324640
- config_name: atari-wizardofwor_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32806560
num_examples: 100020
download_size: 12104199
dataset_size: 32806560
- config_name: atari-wizardofwor_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23017004064
num_examples: 3138
download_size: 253042146
dataset_size: 23017004064
- config_name: atari-yarsrevenge_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32804264
num_examples: 100013
download_size: 10677319
dataset_size: 32804264
- config_name: atari-yarsrevenge_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22950989712
num_examples: 3129
download_size: 429404778
dataset_size: 22950989712
- config_name: atari-zaxxon_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32805576
num_examples: 100017
download_size: 15293047
dataset_size: 32805576
- config_name: atari-zaxxon_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22980329424
num_examples: 3133
download_size: 237964832
dataset_size: 22980329424
- config_name: babyai-action-obj-door_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32828208
num_examples: 100086
download_size: 6351769
dataset_size: 32828208
- config_name: babyai-action-obj-door_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 3610820800
num_examples: 16400
download_size: 20957976
dataset_size: 3610820800
- config_name: babyai-blocked-unlock-pickup_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32818696
num_examples: 100057
download_size: 6014080
dataset_size: 32818696
- config_name: babyai-blocked-unlock-pickup_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 642902240
num_examples: 2920
download_size: 3985069
dataset_size: 642902240
- config_name: babyai-boss-level-no-unlock_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 33067976
num_examples: 100817
download_size: 7646179
dataset_size: 33067976
- config_name: babyai-boss-level-no-unlock_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 506395600
num_examples: 2300
download_size: 5341693
dataset_size: 506395600
- config_name: babyai-boss-level_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32803936
num_examples: 100012
download_size: 7644357
dataset_size: 32803936
- config_name: babyai-boss-level_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 467425156
num_examples: 2123
download_size: 5119669
dataset_size: 467425156
- config_name: babyai-find-obj-s5_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32830504
num_examples: 100093
download_size: 6001715
dataset_size: 32830504
- config_name: babyai-find-obj-s5_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 735374480
num_examples: 3340
download_size: 4382030
dataset_size: 735374480
- config_name: babyai-go-to-door_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32805576
num_examples: 100017
download_size: 5127764
dataset_size: 32805576
- config_name: babyai-go-to-door_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 4231705840
num_examples: 19220
download_size: 22688247
dataset_size: 4231705840
- config_name: babyai-go-to-imp-unlock_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 33836152
num_examples: 103159
download_size: 7368269
dataset_size: 33836152
- config_name: babyai-go-to-imp-unlock_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 179220008
num_examples: 814
download_size: 3291631
dataset_size: 179220008
- config_name: babyai-go-to-local_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32815416
num_examples: 100047
download_size: 6587732
dataset_size: 32815416
- config_name: babyai-go-to-local_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 4372615920
num_examples: 19860
download_size: 25582717
dataset_size: 4372615920
- config_name: babyai-go-to-obj-door_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32824600
num_examples: 100075
download_size: 6616557
dataset_size: 32824600
- config_name: babyai-go-to-obj-door_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 3910254720
num_examples: 17760
download_size: 23384284
dataset_size: 3910254720
- config_name: babyai-go-to-obj_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32818040
num_examples: 100055
download_size: 4901201
dataset_size: 32818040
- config_name: babyai-go-to-obj_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 4447474400
num_examples: 20200
download_size: 24576544
dataset_size: 4447474400
- config_name: babyai-go-to-red-ball-grey_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32812464
num_examples: 100038
download_size: 6490190
dataset_size: 32812464
- config_name: babyai-go-to-red-ball-grey_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 3734117120
num_examples: 16960
download_size: 18354879
dataset_size: 3734117120
- config_name: babyai-go-to-red-ball-no-dists_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32825256
num_examples: 100077
download_size: 4153141
dataset_size: 32825256
- config_name: babyai-go-to-red-ball-no-dists_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 4443070960
num_examples: 20180
download_size: 20210338
dataset_size: 4443070960
- config_name: babyai-go-to-red-ball_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32813120
num_examples: 100040
download_size: 6415108
dataset_size: 32813120
- config_name: babyai-go-to-red-ball_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 4359405600
num_examples: 19800
download_size: 21065736
dataset_size: 4359405600
- config_name: babyai-go-to-red-blue-ball_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32820992
num_examples: 100064
download_size: 6442448
dataset_size: 32820992
- config_name: babyai-go-to-red-blue-ball_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 3729713680
num_examples: 16940
download_size: 18512506
dataset_size: 3729713680
- config_name: babyai-go-to-seq_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 33061088
num_examples: 100796
download_size: 7409942
dataset_size: 33061088
- config_name: babyai-go-to-seq_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 427133680
num_examples: 1940
download_size: 4522477
dataset_size: 427133680
- config_name: babyai-go-to_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 33100120
num_examples: 100915
download_size: 6499380
dataset_size: 33100120
- config_name: babyai-go-to_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 405116480
num_examples: 1840
download_size: 4386063
dataset_size: 405116480
- config_name: babyai-key-corridor_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32812136
num_examples: 100037
download_size: 5495432
dataset_size: 32812136
- config_name: babyai-key-corridor_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 198154800
num_examples: 900
download_size: 2450613
dataset_size: 198154800
- config_name: babyai-mini-boss-level_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32861664
num_examples: 100188
download_size: 8146530
dataset_size: 32861664
- config_name: babyai-mini-boss-level_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 1828968804
num_examples: 8307
download_size: 10435667
dataset_size: 1828968804
- config_name: babyai-move-two-across-s8n9_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32819680
num_examples: 100060
download_size: 6974780
dataset_size: 32819680
- config_name: babyai-move-two-across-s8n9_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 542944152
num_examples: 2466
download_size: 6570582
dataset_size: 542944152
- config_name: babyai-one-room-s8_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32810168
num_examples: 100031
download_size: 4984774
dataset_size: 32810168
- config_name: babyai-one-room-s8_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 3742924000
num_examples: 17000
download_size: 17173321
dataset_size: 3742924000
- config_name: babyai-open-door_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32817056
num_examples: 100052
download_size: 5205819
dataset_size: 32817056
- config_name: babyai-open-door_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 3038373600
num_examples: 13800
download_size: 17501487
dataset_size: 3038373600
- config_name: babyai-open-doors-order-n4_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32838376
num_examples: 100117
download_size: 6133031
dataset_size: 32838376
- config_name: babyai-open-doors-order-n4_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 1836234480
num_examples: 8340
download_size: 11032382
dataset_size: 1836234480
- config_name: babyai-open-red-door_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32823616
num_examples: 100072
download_size: 1484381
dataset_size: 32823616
- config_name: babyai-open-red-door_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 4667646400
num_examples: 21200
download_size: 16451040
dataset_size: 4667646400
- config_name: babyai-open-two-doors_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32854120
num_examples: 100165
download_size: 2596672
dataset_size: 32854120
- config_name: babyai-open-two-doors_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 1620465920
num_examples: 7360
download_size: 9539342
dataset_size: 1620465920
- config_name: babyai-open_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 33025664
num_examples: 100688
download_size: 5759900
dataset_size: 33025664
- config_name: babyai-open_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 581254080
num_examples: 2640
download_size: 5191396
dataset_size: 581254080
- config_name: babyai-pickup-above_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32801968
num_examples: 100006
download_size: 5403204
dataset_size: 32801968
- config_name: babyai-pickup-above_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 748584800
num_examples: 3400
download_size: 5541685
dataset_size: 748584800
- config_name: babyai-pickup-dist_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32802296
num_examples: 100007
download_size: 6291115
dataset_size: 32802296
- config_name: babyai-pickup-dist_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 4108409520
num_examples: 18660
download_size: 22832605
dataset_size: 4108409520
- config_name: babyai-pickup-loc_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32828536
num_examples: 100087
download_size: 8150075
dataset_size: 32828536
- config_name: babyai-pickup-loc_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 3484221900
num_examples: 15825
download_size: 21470853
dataset_size: 3484221900
- config_name: babyai-pickup_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32968264
num_examples: 100513
download_size: 6487579
dataset_size: 32968264
- config_name: babyai-pickup_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 374292400
num_examples: 1700
download_size: 4188562
dataset_size: 374292400
- config_name: babyai-put-next-local_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32846904
num_examples: 100143
download_size: 8568082
dataset_size: 32846904
- config_name: babyai-put-next-local_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 1831831040
num_examples: 8320
download_size: 13012534
dataset_size: 1831831040
- config_name: babyai-put-next_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32900040
num_examples: 100305
download_size: 8673285
dataset_size: 32900040
- config_name: babyai-put-next_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 1259383840
num_examples: 5720
download_size: 9667394
dataset_size: 1259383840
- config_name: babyai-synth-loc_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32908240
num_examples: 100330
download_size: 7667920
dataset_size: 32908240
- config_name: babyai-synth-loc_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 537219680
num_examples: 2440
download_size: 5545442
dataset_size: 537219680
- config_name: babyai-synth-seq_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 33054528
num_examples: 100776
download_size: 7755136
dataset_size: 33054528
- config_name: babyai-synth-seq_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 568043760
num_examples: 2580
download_size: 5763605
dataset_size: 568043760
- config_name: babyai-synth_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32867896
num_examples: 100207
download_size: 7353038
dataset_size: 32867896
- config_name: babyai-synth_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 409519920
num_examples: 1860
download_size: 4378472
dataset_size: 409519920
- config_name: babyai-unblock-pickup_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32953176
num_examples: 100467
download_size: 6630782
dataset_size: 32953176
- config_name: babyai-unblock-pickup_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 378916012
num_examples: 1721
download_size: 4242269
dataset_size: 378916012
- config_name: babyai-unlock-local_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32812464
num_examples: 100038
download_size: 5630652
dataset_size: 32812464
- config_name: babyai-unlock-local_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 1567624640
num_examples: 7120
download_size: 8268704
dataset_size: 1567624640
- config_name: babyai-unlock-pickup_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32897088
num_examples: 100296
download_size: 4544845
dataset_size: 32897088
- config_name: babyai-unlock-pickup_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 1127280640
num_examples: 5120
download_size: 6990282
dataset_size: 1127280640
- config_name: babyai-unlock-to-unlock_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32960064
num_examples: 100488
download_size: 5942465
dataset_size: 32960064
- config_name: babyai-unlock-to-unlock_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 510799040
num_examples: 2320
download_size: 3665802
dataset_size: 510799040
- config_name: babyai-unlock_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 33094872
num_examples: 100899
download_size: 6456229
dataset_size: 33094872
- config_name: babyai-unlock_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 287764804
num_examples: 1307
download_size: 4020028
dataset_size: 287764804
- config_name: metaworld-assembly_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 1370386
dataset_size: 32800000
- config_name: metaworld-assembly_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 2494940
dataset_size: 47116000
- config_name: metaworld-basketball_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13190732
dataset_size: 32800000
- config_name: metaworld-basketball_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9208389
dataset_size: 47116000
- config_name: metaworld-bin-picking_newdata
features:
- name: distances
sequence: float32
splits:
- name: train
num_bytes: 840000
num_examples: 10000
download_size: 952363
dataset_size: 840000
- config_name: metaworld-box-close_newdata
features:
- name: distances
sequence: float32
splits:
- name: train
num_bytes: 840000
num_examples: 10000
download_size: 1058011
dataset_size: 840000
- config_name: metaworld-button-press-topdown-wall_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12506477
dataset_size: 32800000
- config_name: metaworld-button-press-topdown-wall_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 6795055
dataset_size: 47116000
- config_name: metaworld-button-press-topdown_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12383341
dataset_size: 32800000
- config_name: metaworld-button-press-topdown_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 6647074
dataset_size: 47116000
- config_name: metaworld-button-press-wall_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 11884670
dataset_size: 32800000
- config_name: metaworld-button-press-wall_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 6388048
dataset_size: 47116000
- config_name: metaworld-button-press_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12504036
dataset_size: 32800000
- config_name: metaworld-button-press_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 6079174
dataset_size: 47116000
- config_name: metaworld-coffee-button_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 11302073
dataset_size: 32800000
- config_name: metaworld-coffee-button_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 6402919
dataset_size: 47116000
- config_name: metaworld-coffee-pull_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13291438
dataset_size: 32800000
- config_name: metaworld-coffee-pull_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9165455
dataset_size: 47116000
- config_name: metaworld-coffee-push_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13347747
dataset_size: 32800000
- config_name: metaworld-coffee-push_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9819758
dataset_size: 47116000
- config_name: metaworld-dial-turn_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 11453279
dataset_size: 32800000
- config_name: metaworld-dial-turn_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 5840306
dataset_size: 47116000
- config_name: metaworld-disassemble_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 8574754
dataset_size: 32800000
- config_name: metaworld-disassemble_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 4082529
dataset_size: 47116000
- config_name: metaworld-door-close_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13743650
dataset_size: 32800000
- config_name: metaworld-door-close_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 8698806
dataset_size: 47116000
- config_name: metaworld-door-lock_newdata
features:
- name: distances
sequence: float32
splits:
- name: train
num_bytes: 840000
num_examples: 10000
download_size: 776743
dataset_size: 840000
- config_name: metaworld-door-open_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13781189
dataset_size: 32800000
- config_name: metaworld-door-open_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 7983276
dataset_size: 47116000
- config_name: metaworld-door-unlock_newdata
features:
- name: distances
sequence: float32
splits:
- name: train
num_bytes: 840000
num_examples: 10000
download_size: 829555
dataset_size: 840000
- config_name: metaworld-drawer-close_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13903693
dataset_size: 32800000
- config_name: metaworld-drawer-close_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 5764071
dataset_size: 47116000
- config_name: metaworld-drawer-open_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12036502
dataset_size: 32800000
- config_name: metaworld-drawer-open_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 5484434
dataset_size: 47116000
- config_name: metaworld-faucet-close_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 14148656
dataset_size: 32800000
- config_name: metaworld-faucet-close_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 5086095
dataset_size: 47116000
- config_name: metaworld-faucet-open_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 14300852
dataset_size: 32800000
- config_name: metaworld-faucet-open_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 5497182
dataset_size: 47116000
- config_name: metaworld-hammer_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13491757
dataset_size: 32800000
- config_name: metaworld-hammer_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 10062439
dataset_size: 47116000
- config_name: metaworld-handle-press-side_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12555014
dataset_size: 32800000
- config_name: metaworld-handle-press-side_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 5880675
dataset_size: 47116000
- config_name: metaworld-handle-press_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13473313
dataset_size: 32800000
- config_name: metaworld-handle-press_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 5879237
dataset_size: 47116000
- config_name: metaworld-handle-pull-side_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13576934
dataset_size: 32800000
- config_name: metaworld-handle-pull-side_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 6737064
dataset_size: 47116000
- config_name: metaworld-handle-pull_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12046278
dataset_size: 32800000
- config_name: metaworld-handle-pull_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 6896646
dataset_size: 47116000
- config_name: metaworld-lever-pull_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12827517
dataset_size: 32800000
- config_name: metaworld-lever-pull_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9568802
dataset_size: 47116000
- config_name: metaworld-peg-insert-side_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13057268
dataset_size: 32800000
- config_name: metaworld-peg-insert-side_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 8714100
dataset_size: 47116000
- config_name: metaworld-peg-unplug-side_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13163866
dataset_size: 32800000
- config_name: metaworld-peg-unplug-side_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9726674
dataset_size: 47116000
- config_name: metaworld-pick-out-of-hole_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 1376243
dataset_size: 32800000
- config_name: metaworld-pick-out-of-hole_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 1419339
dataset_size: 47116000
- config_name: metaworld-pick-place-wall_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13636756
dataset_size: 32800000
- config_name: metaworld-pick-place-wall_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9760537
dataset_size: 47116000
- config_name: metaworld-pick-place_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13638935
dataset_size: 32800000
- config_name: metaworld-pick-place_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 10013159
dataset_size: 47116000
- config_name: metaworld-plate-slide-back-side_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 1365777
dataset_size: 32800000
- config_name: metaworld-plate-slide-back-side_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 1936719
dataset_size: 47116000
- config_name: metaworld-plate-slide-back_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 1372778
dataset_size: 32800000
- config_name: metaworld-plate-slide-back_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 2568887
dataset_size: 47116000
- config_name: metaworld-plate-slide-side_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 9706526
dataset_size: 32800000
- config_name: metaworld-plate-slide-side_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 6041762
dataset_size: 47116000
- config_name: metaworld-plate-slide_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 9787720
dataset_size: 32800000
- config_name: metaworld-plate-slide_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 6512808
dataset_size: 47116000
- config_name: metaworld-push-back_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 14075602
dataset_size: 32800000
- config_name: metaworld-push-back_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 7550247
dataset_size: 47116000
- config_name: metaworld-push-wall_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13592428
dataset_size: 32800000
- config_name: metaworld-push-wall_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 8970793
dataset_size: 47116000
- config_name: metaworld-push_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13341527
dataset_size: 32800000
- config_name: metaworld-push_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9427900
dataset_size: 47116000
- config_name: metaworld-reach-wall_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12733205
dataset_size: 32800000
- config_name: metaworld-reach-wall_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9731627
dataset_size: 47116000
- config_name: metaworld-reach_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12106144
dataset_size: 32800000
- config_name: metaworld-reach_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9563337
dataset_size: 47116000
- config_name: metaworld-shelf-place_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13046597
dataset_size: 32800000
- config_name: metaworld-shelf-place_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 8068065
dataset_size: 47116000
- config_name: metaworld-soccer_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 11954933
dataset_size: 32800000
- config_name: metaworld-soccer_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9009300
dataset_size: 47116000
- config_name: metaworld-stick-pull_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13346574
dataset_size: 32800000
- config_name: metaworld-stick-pull_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9654361
dataset_size: 47116000
- config_name: metaworld-stick-push_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13868467
dataset_size: 32800000
- config_name: metaworld-stick-push_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9420722
dataset_size: 47116000
- config_name: metaworld-sweep-into_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13471306
dataset_size: 32800000
- config_name: metaworld-sweep-into_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 7656262
dataset_size: 47116000
- config_name: metaworld-sweep_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13966344
dataset_size: 32800000
- config_name: metaworld-sweep_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9333916
dataset_size: 47116000
- config_name: metaworld-window-close_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12562521
dataset_size: 32800000
- config_name: metaworld-window-close_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 5405410
dataset_size: 47116000
- config_name: metaworld-window-open_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12270843
dataset_size: 32800000
- config_name: metaworld-window-open_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 5455606
dataset_size: 47116000
- config_name: mujoco-ant_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32847232
num_examples: 100144
download_size: 16107573
dataset_size: 32847232
- config_name: mujoco-ant_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 15608524
num_examples: 401
download_size: 16185601
dataset_size: 15608524
- config_name: mujoco-doublependulum_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32805248
num_examples: 100016
download_size: 16102270
dataset_size: 32805248
- config_name: mujoco-doublependulum_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 6164172
num_examples: 401
download_size: 4960978
dataset_size: 6164172
- config_name: mujoco-halfcheetah_newdata
features:
- name: distances
sequence: float32
splits:
- name: train
num_bytes: 8400000
num_examples: 100000
download_size: 11373374
dataset_size: 8400000
- config_name: mujoco-hopper_newdata
features:
- name: distances
sequence: float32
splits:
- name: train
num_bytes: 3834768
num_examples: 45652
download_size: 5110310
dataset_size: 3834768
- config_name: mujoco-humanoid_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32808200
num_examples: 100025
download_size: 16122991
dataset_size: 32808200
- config_name: mujoco-humanoid_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 168289140
num_examples: 415
download_size: 116298243
dataset_size: 168289140
- config_name: mujoco-pendulum_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32806888
num_examples: 100021
download_size: 15694433
dataset_size: 32806888
- config_name: mujoco-pendulum_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 4060980
num_examples: 495
download_size: 3083276
dataset_size: 4060980
- config_name: mujoco-pusher_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13887459
dataset_size: 32800000
- config_name: mujoco-pusher_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 33804000
num_examples: 1000
download_size: 13463910
dataset_size: 33804000
- config_name: mujoco-reacher_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12795397
dataset_size: 32800000
- config_name: mujoco-reacher_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 32792000
num_examples: 2000
download_size: 7687471
dataset_size: 32792000
- config_name: mujoco-standup_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 16032984
dataset_size: 32800000
- config_name: mujoco-standup_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 162206400
num_examples: 400
download_size: 117589700
dataset_size: 162206400
- config_name: mujoco-swimmer_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 15858902
dataset_size: 32800000
- config_name: mujoco-swimmer_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 5329600
num_examples: 400
download_size: 5733100
dataset_size: 5329600
- config_name: mujoco-walker_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32807872
num_examples: 100024
download_size: 15920611
dataset_size: 32807872
- config_name: mujoco-walker_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 10840852
num_examples: 407
download_size: 11101553
dataset_size: 10840852
configs:
- config_name: atari-alien_newdata
data_files:
- split: train
path: atari-alien_newdata/train-*
- config_name: atari-amidar_newdata
data_files:
- split: train
path: atari-amidar_newdata/train-*
- config_name: atari-amidar_subset
data_files:
- split: train
path: atari-amidar_subset/train-*
- config_name: atari-assault_newdata
data_files:
- split: train
path: atari-assault_newdata/train-*
- config_name: atari-assault_subset
data_files:
- split: train
path: atari-assault_subset/train-*
- config_name: atari-asterix_newdata
data_files:
- split: train
path: atari-asterix_newdata/train-*
- config_name: atari-asterix_subset
data_files:
- split: train
path: atari-asterix_subset/train-*
- config_name: atari-asteroids_subset
data_files:
- split: train
path: atari-asteroids_subset/train-*
- config_name: atari-atlantis_newdata
data_files:
- split: train
path: atari-atlantis_newdata/train-*
- config_name: atari-atlantis_subset
data_files:
- split: train
path: atari-atlantis_subset/train-*
- config_name: atari-bankheist_newdata
data_files:
- split: train
path: atari-bankheist_newdata/train-*
- config_name: atari-bankheist_subset
data_files:
- split: train
path: atari-bankheist_subset/train-*
- config_name: atari-battlezone_newdata
data_files:
- split: train
path: atari-battlezone_newdata/train-*
- config_name: atari-battlezone_subset
data_files:
- split: train
path: atari-battlezone_subset/train-*
- config_name: atari-beamrider_newdata
data_files:
- split: train
path: atari-beamrider_newdata/train-*
- config_name: atari-beamrider_subset
data_files:
- split: train
path: atari-beamrider_subset/train-*
- config_name: atari-berzerk_newdata
data_files:
- split: train
path: atari-berzerk_newdata/train-*
- config_name: atari-berzerk_subset
data_files:
- split: train
path: atari-berzerk_subset/train-*
- config_name: atari-bowling_newdata
data_files:
- split: train
path: atari-bowling_newdata/train-*
- config_name: atari-bowling_subset
data_files:
- split: train
path: atari-bowling_subset/train-*
- config_name: atari-boxing_newdata
data_files:
- split: train
path: atari-boxing_newdata/train-*
- config_name: atari-boxing_subset
data_files:
- split: train
path: atari-boxing_subset/train-*
- config_name: atari-breakout_newdata
data_files:
- split: train
path: atari-breakout_newdata/train-*
- config_name: atari-breakout_subset
data_files:
- split: train
path: atari-breakout_subset/train-*
- config_name: atari-centipede_newdata
data_files:
- split: train
path: atari-centipede_newdata/train-*
- config_name: atari-centipede_subset
data_files:
- split: train
path: atari-centipede_subset/train-*
- config_name: atari-choppercommand_newdata
data_files:
- split: train
path: atari-choppercommand_newdata/train-*
- config_name: atari-choppercommand_subset
data_files:
- split: train
path: atari-choppercommand_subset/train-*
- config_name: atari-crazyclimber_newdata
data_files:
- split: train
path: atari-crazyclimber_newdata/train-*
- config_name: atari-crazyclimber_subset
data_files:
- split: train
path: atari-crazyclimber_subset/train-*
- config_name: atari-defender_newdata
data_files:
- split: train
path: atari-defender_newdata/train-*
- config_name: atari-defender_subset
data_files:
- split: train
path: atari-defender_subset/train-*
- config_name: atari-demonattack_newdata
data_files:
- split: train
path: atari-demonattack_newdata/train-*
- config_name: atari-demonattack_subset
data_files:
- split: train
path: atari-demonattack_subset/train-*
- config_name: atari-doubledunk_newdata
data_files:
- split: train
path: atari-doubledunk_newdata/train-*
- config_name: atari-doubledunk_subset
data_files:
- split: train
path: atari-doubledunk_subset/train-*
- config_name: atari-enduro_newdata
data_files:
- split: train
path: atari-enduro_newdata/train-*
- config_name: atari-fishingderby_newdata
data_files:
- split: train
path: atari-fishingderby_newdata/train-*
- config_name: atari-fishingderby_subset
data_files:
- split: train
path: atari-fishingderby_subset/train-*
- config_name: atari-freeway_newdata
data_files:
- split: train
path: atari-freeway_newdata/train-*
- config_name: atari-freeway_subset
data_files:
- split: train
path: atari-freeway_subset/train-*
- config_name: atari-frostbite_newdata
data_files:
- split: train
path: atari-frostbite_newdata/train-*
- config_name: atari-frostbite_subset
data_files:
- split: train
path: atari-frostbite_subset/train-*
- config_name: atari-gopher_newdata
data_files:
- split: train
path: atari-gopher_newdata/train-*
- config_name: atari-gopher_subset
data_files:
- split: train
path: atari-gopher_subset/train-*
- config_name: atari-gravitar_newdata
data_files:
- split: train
path: atari-gravitar_newdata/train-*
- config_name: atari-gravitar_subset
data_files:
- split: train
path: atari-gravitar_subset/train-*
- config_name: atari-hero_newdata
data_files:
- split: train
path: atari-hero_newdata/train-*
- config_name: atari-hero_subset
data_files:
- split: train
path: atari-hero_subset/train-*
- config_name: atari-icehockey_newdata
data_files:
- split: train
path: atari-icehockey_newdata/train-*
- config_name: atari-icehockey_subset
data_files:
- split: train
path: atari-icehockey_subset/train-*
- config_name: atari-jamesbond_newdata
data_files:
- split: train
path: atari-jamesbond_newdata/train-*
- config_name: atari-jamesbond_subset
data_files:
- split: train
path: atari-jamesbond_subset/train-*
- config_name: atari-kangaroo_newdata
data_files:
- split: train
path: atari-kangaroo_newdata/train-*
- config_name: atari-kangaroo_subset
data_files:
- split: train
path: atari-kangaroo_subset/train-*
- config_name: atari-krull_newdata
data_files:
- split: train
path: atari-krull_newdata/train-*
- config_name: atari-krull_subset
data_files:
- split: train
path: atari-krull_subset/train-*
- config_name: atari-kungfumaster_newdata
data_files:
- split: train
path: atari-kungfumaster_newdata/train-*
- config_name: atari-kungfumaster_subset
data_files:
- split: train
path: atari-kungfumaster_subset/train-*
- config_name: atari-montezumarevenge_newdata
data_files:
- split: train
path: atari-montezumarevenge_newdata/train-*
- config_name: atari-montezumarevenge_subset
data_files:
- split: train
path: atari-montezumarevenge_subset/train-*
- config_name: atari-mspacman_newdata
data_files:
- split: train
path: atari-mspacman_newdata/train-*
- config_name: atari-namethisgame_newdata
data_files:
- split: train
path: atari-namethisgame_newdata/train-*
- config_name: atari-namethisgame_subset
data_files:
- split: train
path: atari-namethisgame_subset/train-*
- config_name: atari-phoenix_newdata
data_files:
- split: train
path: atari-phoenix_newdata/train-*
- config_name: atari-phoenix_subset
data_files:
- split: train
path: atari-phoenix_subset/train-*
- config_name: atari-pitfall_newdata
data_files:
- split: train
path: atari-pitfall_newdata/train-*
- config_name: atari-pitfall_subset
data_files:
- split: train
path: atari-pitfall_subset/train-*
- config_name: atari-pong_newdata
data_files:
- split: train
path: atari-pong_newdata/train-*
- config_name: atari-privateeye_newdata
data_files:
- split: train
path: atari-privateeye_newdata/train-*
- config_name: atari-privateeye_subset
data_files:
- split: train
path: atari-privateeye_subset/train-*
- config_name: atari-qbert_newdata
data_files:
- split: train
path: atari-qbert_newdata/train-*
- config_name: atari-qbert_subset
data_files:
- split: train
path: atari-qbert_subset/train-*
- config_name: atari-riverraid_newdata
data_files:
- split: train
path: atari-riverraid_newdata/train-*
- config_name: atari-riverraid_subset
data_files:
- split: train
path: atari-riverraid_subset/train-*
- config_name: atari-roadrunner_newdata
data_files:
- split: train
path: atari-roadrunner_newdata/train-*
- config_name: atari-roadrunner_subset
data_files:
- split: train
path: atari-roadrunner_subset/train-*
- config_name: atari-robotank_newdata
data_files:
- split: train
path: atari-robotank_newdata/train-*
- config_name: atari-robotank_subset
data_files:
- split: train
path: atari-robotank_subset/train-*
- config_name: atari-seaquest_newdata
data_files:
- split: train
path: atari-seaquest_newdata/train-*
- config_name: atari-seaquest_subset
data_files:
- split: train
path: atari-seaquest_subset/train-*
- config_name: atari-skiing_newdata
data_files:
- split: train
path: atari-skiing_newdata/train-*
- config_name: atari-skiing_subset
data_files:
- split: train
path: atari-skiing_subset/train-*
- config_name: atari-solaris_newdata
data_files:
- split: train
path: atari-solaris_newdata/train-*
- config_name: atari-solaris_subset
data_files:
- split: train
path: atari-solaris_subset/train-*
- config_name: atari-spaceinvaders_newdata
data_files:
- split: train
path: atari-spaceinvaders_newdata/train-*
- config_name: atari-stargunner_newdata
data_files:
- split: train
path: atari-stargunner_newdata/train-*
- config_name: atari-surround_newdata
data_files:
- split: train
path: atari-surround_newdata/train-*
- config_name: atari-surround_subset
data_files:
- split: train
path: atari-surround_subset/train-*
- config_name: atari-tennis_newdata
data_files:
- split: train
path: atari-tennis_newdata/train-*
- config_name: atari-tennis_subset
data_files:
- split: train
path: atari-tennis_subset/train-*
- config_name: atari-timepilot_newdata
data_files:
- split: train
path: atari-timepilot_newdata/train-*
- config_name: atari-timepilot_subset
data_files:
- split: train
path: atari-timepilot_subset/train-*
- config_name: atari-tutankham_newdata
data_files:
- split: train
path: atari-tutankham_newdata/train-*
- config_name: atari-tutankham_subset
data_files:
- split: train
path: atari-tutankham_subset/train-*
- config_name: atari-upndown_newdata
data_files:
- split: train
path: atari-upndown_newdata/train-*
- config_name: atari-upndown_subset
data_files:
- split: train
path: atari-upndown_subset/train-*
- config_name: atari-venture_newdata
data_files:
- split: train
path: atari-venture_newdata/train-*
- config_name: atari-venture_subset
data_files:
- split: train
path: atari-venture_subset/train-*
- config_name: atari-videopinball_newdata
data_files:
- split: train
path: atari-videopinball_newdata/train-*
- config_name: atari-videopinball_subset
data_files:
- split: train
path: atari-videopinball_subset/train-*
- config_name: atari-wizardofwor_newdata
data_files:
- split: train
path: atari-wizardofwor_newdata/train-*
- config_name: atari-wizardofwor_subset
data_files:
- split: train
path: atari-wizardofwor_subset/train-*
- config_name: atari-yarsrevenge_newdata
data_files:
- split: train
path: atari-yarsrevenge_newdata/train-*
- config_name: atari-yarsrevenge_subset
data_files:
- split: train
path: atari-yarsrevenge_subset/train-*
- config_name: atari-zaxxon_newdata
data_files:
- split: train
path: atari-zaxxon_newdata/train-*
- config_name: atari-zaxxon_subset
data_files:
- split: train
path: atari-zaxxon_subset/train-*
- config_name: babyai-action-obj-door_newdata
data_files:
- split: train
path: babyai-action-obj-door_newdata/train-*
- config_name: babyai-action-obj-door_subset
data_files:
- split: train
path: babyai-action-obj-door_subset/train-*
- config_name: babyai-blocked-unlock-pickup_newdata
data_files:
- split: train
path: babyai-blocked-unlock-pickup_newdata/train-*
- config_name: babyai-blocked-unlock-pickup_subset
data_files:
- split: train
path: babyai-blocked-unlock-pickup_subset/train-*
- config_name: babyai-boss-level-no-unlock_newdata
data_files:
- split: train
path: babyai-boss-level-no-unlock_newdata/train-*
- config_name: babyai-boss-level-no-unlock_subset
data_files:
- split: train
path: babyai-boss-level-no-unlock_subset/train-*
- config_name: babyai-boss-level_newdata
data_files:
- split: train
path: babyai-boss-level_newdata/train-*
- config_name: babyai-boss-level_subset
data_files:
- split: train
path: babyai-boss-level_subset/train-*
- config_name: babyai-find-obj-s5_newdata
data_files:
- split: train
path: babyai-find-obj-s5_newdata/train-*
- config_name: babyai-find-obj-s5_subset
data_files:
- split: train
path: babyai-find-obj-s5_subset/train-*
- config_name: babyai-go-to-door_newdata
data_files:
- split: train
path: babyai-go-to-door_newdata/train-*
- config_name: babyai-go-to-door_subset
data_files:
- split: train
path: babyai-go-to-door_subset/train-*
- config_name: babyai-go-to-imp-unlock_newdata
data_files:
- split: train
path: babyai-go-to-imp-unlock_newdata/train-*
- config_name: babyai-go-to-imp-unlock_subset
data_files:
- split: train
path: babyai-go-to-imp-unlock_subset/train-*
- config_name: babyai-go-to-local_newdata
data_files:
- split: train
path: babyai-go-to-local_newdata/train-*
- config_name: babyai-go-to-local_subset
data_files:
- split: train
path: babyai-go-to-local_subset/train-*
- config_name: babyai-go-to-obj-door_newdata
data_files:
- split: train
path: babyai-go-to-obj-door_newdata/train-*
- config_name: babyai-go-to-obj-door_subset
data_files:
- split: train
path: babyai-go-to-obj-door_subset/train-*
- config_name: babyai-go-to-obj_newdata
data_files:
- split: train
path: babyai-go-to-obj_newdata/train-*
- config_name: babyai-go-to-obj_subset
data_files:
- split: train
path: babyai-go-to-obj_subset/train-*
- config_name: babyai-go-to-red-ball-grey_newdata
data_files:
- split: train
path: babyai-go-to-red-ball-grey_newdata/train-*
- config_name: babyai-go-to-red-ball-grey_subset
data_files:
- split: train
path: babyai-go-to-red-ball-grey_subset/train-*
- config_name: babyai-go-to-red-ball-no-dists_newdata
data_files:
- split: train
path: babyai-go-to-red-ball-no-dists_newdata/train-*
- config_name: babyai-go-to-red-ball-no-dists_subset
data_files:
- split: train
path: babyai-go-to-red-ball-no-dists_subset/train-*
- config_name: babyai-go-to-red-ball_newdata
data_files:
- split: train
path: babyai-go-to-red-ball_newdata/train-*
- config_name: babyai-go-to-red-ball_subset
data_files:
- split: train
path: babyai-go-to-red-ball_subset/train-*
- config_name: babyai-go-to-red-blue-ball_newdata
data_files:
- split: train
path: babyai-go-to-red-blue-ball_newdata/train-*
- config_name: babyai-go-to-red-blue-ball_subset
data_files:
- split: train
path: babyai-go-to-red-blue-ball_subset/train-*
- config_name: babyai-go-to-seq_newdata
data_files:
- split: train
path: babyai-go-to-seq_newdata/train-*
- config_name: babyai-go-to-seq_subset
data_files:
- split: train
path: babyai-go-to-seq_subset/train-*
- config_name: babyai-go-to_newdata
data_files:
- split: train
path: babyai-go-to_newdata/train-*
- config_name: babyai-go-to_subset
data_files:
- split: train
path: babyai-go-to_subset/train-*
- config_name: babyai-key-corridor_newdata
data_files:
- split: train
path: babyai-key-corridor_newdata/train-*
- config_name: babyai-key-corridor_subset
data_files:
- split: train
path: babyai-key-corridor_subset/train-*
- config_name: babyai-mini-boss-level_newdata
data_files:
- split: train
path: babyai-mini-boss-level_newdata/train-*
- config_name: babyai-mini-boss-level_subset
data_files:
- split: train
path: babyai-mini-boss-level_subset/train-*
- config_name: babyai-move-two-across-s8n9_newdata
data_files:
- split: train
path: babyai-move-two-across-s8n9_newdata/train-*
- config_name: babyai-move-two-across-s8n9_subset
data_files:
- split: train
path: babyai-move-two-across-s8n9_subset/train-*
- config_name: babyai-one-room-s8_newdata
data_files:
- split: train
path: babyai-one-room-s8_newdata/train-*
- config_name: babyai-one-room-s8_subset
data_files:
- split: train
path: babyai-one-room-s8_subset/train-*
- config_name: babyai-open-door_newdata
data_files:
- split: train
path: babyai-open-door_newdata/train-*
- config_name: babyai-open-door_subset
data_files:
- split: train
path: babyai-open-door_subset/train-*
- config_name: babyai-open-doors-order-n4_newdata
data_files:
- split: train
path: babyai-open-doors-order-n4_newdata/train-*
- config_name: babyai-open-doors-order-n4_subset
data_files:
- split: train
path: babyai-open-doors-order-n4_subset/train-*
- config_name: babyai-open-red-door_newdata
data_files:
- split: train
path: babyai-open-red-door_newdata/train-*
- config_name: babyai-open-red-door_subset
data_files:
- split: train
path: babyai-open-red-door_subset/train-*
- config_name: babyai-open-two-doors_newdata
data_files:
- split: train
path: babyai-open-two-doors_newdata/train-*
- config_name: babyai-open-two-doors_subset
data_files:
- split: train
path: babyai-open-two-doors_subset/train-*
- config_name: babyai-open_newdata
data_files:
- split: train
path: babyai-open_newdata/train-*
- config_name: babyai-open_subset
data_files:
- split: train
path: babyai-open_subset/train-*
- config_name: babyai-pickup-above_newdata
data_files:
- split: train
path: babyai-pickup-above_newdata/train-*
- config_name: babyai-pickup-above_subset
data_files:
- split: train
path: babyai-pickup-above_subset/train-*
- config_name: babyai-pickup-dist_newdata
data_files:
- split: train
path: babyai-pickup-dist_newdata/train-*
- config_name: babyai-pickup-dist_subset
data_files:
- split: train
path: babyai-pickup-dist_subset/train-*
- config_name: babyai-pickup-loc_newdata
data_files:
- split: train
path: babyai-pickup-loc_newdata/train-*
- config_name: babyai-pickup-loc_subset
data_files:
- split: train
path: babyai-pickup-loc_subset/train-*
- config_name: babyai-pickup_newdata
data_files:
- split: train
path: babyai-pickup_newdata/train-*
- config_name: babyai-pickup_subset
data_files:
- split: train
path: babyai-pickup_subset/train-*
- config_name: babyai-put-next-local_newdata
data_files:
- split: train
path: babyai-put-next-local_newdata/train-*
- config_name: babyai-put-next-local_subset
data_files:
- split: train
path: babyai-put-next-local_subset/train-*
- config_name: babyai-put-next_newdata
data_files:
- split: train
path: babyai-put-next_newdata/train-*
- config_name: babyai-put-next_subset
data_files:
- split: train
path: babyai-put-next_subset/train-*
- config_name: babyai-synth-loc_newdata
data_files:
- split: train
path: babyai-synth-loc_newdata/train-*
- config_name: babyai-synth-loc_subset
data_files:
- split: train
path: babyai-synth-loc_subset/train-*
- config_name: babyai-synth-seq_newdata
data_files:
- split: train
path: babyai-synth-seq_newdata/train-*
- config_name: babyai-synth-seq_subset
data_files:
- split: train
path: babyai-synth-seq_subset/train-*
- config_name: babyai-synth_newdata
data_files:
- split: train
path: babyai-synth_newdata/train-*
- config_name: babyai-synth_subset
data_files:
- split: train
path: babyai-synth_subset/train-*
- config_name: babyai-unblock-pickup_newdata
data_files:
- split: train
path: babyai-unblock-pickup_newdata/train-*
- config_name: babyai-unblock-pickup_subset
data_files:
- split: train
path: babyai-unblock-pickup_subset/train-*
- config_name: babyai-unlock-local_newdata
data_files:
- split: train
path: babyai-unlock-local_newdata/train-*
- config_name: babyai-unlock-local_subset
data_files:
- split: train
path: babyai-unlock-local_subset/train-*
- config_name: babyai-unlock-pickup_newdata
data_files:
- split: train
path: babyai-unlock-pickup_newdata/train-*
- config_name: babyai-unlock-pickup_subset
data_files:
- split: train
path: babyai-unlock-pickup_subset/train-*
- config_name: babyai-unlock-to-unlock_newdata
data_files:
- split: train
path: babyai-unlock-to-unlock_newdata/train-*
- config_name: babyai-unlock-to-unlock_subset
data_files:
- split: train
path: babyai-unlock-to-unlock_subset/train-*
- config_name: babyai-unlock_newdata
data_files:
- split: train
path: babyai-unlock_newdata/train-*
- config_name: babyai-unlock_subset
data_files:
- split: train
path: babyai-unlock_subset/train-*
- config_name: metaworld-assembly_newdata
data_files:
- split: train
path: metaworld-assembly_newdata/train-*
- config_name: metaworld-assembly_subset
data_files:
- split: train
path: metaworld-assembly_subset/train-*
- config_name: metaworld-basketball_newdata
data_files:
- split: train
path: metaworld-basketball_newdata/train-*
- config_name: metaworld-basketball_subset
data_files:
- split: train
path: metaworld-basketball_subset/train-*
- config_name: metaworld-bin-picking_newdata
data_files:
- split: train
path: metaworld-bin-picking_newdata/train-*
- config_name: metaworld-box-close_newdata
data_files:
- split: train
path: metaworld-box-close_newdata/train-*
- config_name: metaworld-button-press-topdown-wall_newdata
data_files:
- split: train
path: metaworld-button-press-topdown-wall_newdata/train-*
- config_name: metaworld-button-press-topdown-wall_subset
data_files:
- split: train
path: metaworld-button-press-topdown-wall_subset/train-*
- config_name: metaworld-button-press-topdown_newdata
data_files:
- split: train
path: metaworld-button-press-topdown_newdata/train-*
- config_name: metaworld-button-press-topdown_subset
data_files:
- split: train
path: metaworld-button-press-topdown_subset/train-*
- config_name: metaworld-button-press-wall_newdata
data_files:
- split: train
path: metaworld-button-press-wall_newdata/train-*
- config_name: metaworld-button-press-wall_subset
data_files:
- split: train
path: metaworld-button-press-wall_subset/train-*
- config_name: metaworld-button-press_newdata
data_files:
- split: train
path: metaworld-button-press_newdata/train-*
- config_name: metaworld-button-press_subset
data_files:
- split: train
path: metaworld-button-press_subset/train-*
- config_name: metaworld-coffee-button_newdata
data_files:
- split: train
path: metaworld-coffee-button_newdata/train-*
- config_name: metaworld-coffee-button_subset
data_files:
- split: train
path: metaworld-coffee-button_subset/train-*
- config_name: metaworld-coffee-pull_newdata
data_files:
- split: train
path: metaworld-coffee-pull_newdata/train-*
- config_name: metaworld-coffee-pull_subset
data_files:
- split: train
path: metaworld-coffee-pull_subset/train-*
- config_name: metaworld-coffee-push_newdata
data_files:
- split: train
path: metaworld-coffee-push_newdata/train-*
- config_name: metaworld-coffee-push_subset
data_files:
- split: train
path: metaworld-coffee-push_subset/train-*
- config_name: metaworld-dial-turn_newdata
data_files:
- split: train
path: metaworld-dial-turn_newdata/train-*
- config_name: metaworld-dial-turn_subset
data_files:
- split: train
path: metaworld-dial-turn_subset/train-*
- config_name: metaworld-disassemble_newdata
data_files:
- split: train
path: metaworld-disassemble_newdata/train-*
- config_name: metaworld-disassemble_subset
data_files:
- split: train
path: metaworld-disassemble_subset/train-*
- config_name: metaworld-door-close_newdata
data_files:
- split: train
path: metaworld-door-close_newdata/train-*
- config_name: metaworld-door-close_subset
data_files:
- split: train
path: metaworld-door-close_subset/train-*
- config_name: metaworld-door-lock_newdata
data_files:
- split: train
path: metaworld-door-lock_newdata/train-*
- config_name: metaworld-door-open_newdata
data_files:
- split: train
path: metaworld-door-open_newdata/train-*
- config_name: metaworld-door-open_subset
data_files:
- split: train
path: metaworld-door-open_subset/train-*
- config_name: metaworld-door-unlock_newdata
data_files:
- split: train
path: metaworld-door-unlock_newdata/train-*
- config_name: metaworld-drawer-close_newdata
data_files:
- split: train
path: metaworld-drawer-close_newdata/train-*
- config_name: metaworld-drawer-close_subset
data_files:
- split: train
path: metaworld-drawer-close_subset/train-*
- config_name: metaworld-drawer-open_newdata
data_files:
- split: train
path: metaworld-drawer-open_newdata/train-*
- config_name: metaworld-drawer-open_subset
data_files:
- split: train
path: metaworld-drawer-open_subset/train-*
- config_name: metaworld-faucet-close_newdata
data_files:
- split: train
path: metaworld-faucet-close_newdata/train-*
- config_name: metaworld-faucet-close_subset
data_files:
- split: train
path: metaworld-faucet-close_subset/train-*
- config_name: metaworld-faucet-open_newdata
data_files:
- split: train
path: metaworld-faucet-open_newdata/train-*
- config_name: metaworld-faucet-open_subset
data_files:
- split: train
path: metaworld-faucet-open_subset/train-*
- config_name: metaworld-hammer_newdata
data_files:
- split: train
path: metaworld-hammer_newdata/train-*
- config_name: metaworld-hammer_subset
data_files:
- split: train
path: metaworld-hammer_subset/train-*
- config_name: metaworld-handle-press-side_newdata
data_files:
- split: train
path: metaworld-handle-press-side_newdata/train-*
- config_name: metaworld-handle-press-side_subset
data_files:
- split: train
path: metaworld-handle-press-side_subset/train-*
- config_name: metaworld-handle-press_newdata
data_files:
- split: train
path: metaworld-handle-press_newdata/train-*
- config_name: metaworld-handle-press_subset
data_files:
- split: train
path: metaworld-handle-press_subset/train-*
- config_name: metaworld-handle-pull-side_newdata
data_files:
- split: train
path: metaworld-handle-pull-side_newdata/train-*
- config_name: metaworld-handle-pull-side_subset
data_files:
- split: train
path: metaworld-handle-pull-side_subset/train-*
- config_name: metaworld-handle-pull_newdata
data_files:
- split: train
path: metaworld-handle-pull_newdata/train-*
- config_name: metaworld-handle-pull_subset
data_files:
- split: train
path: metaworld-handle-pull_subset/train-*
- config_name: metaworld-lever-pull_newdata
data_files:
- split: train
path: metaworld-lever-pull_newdata/train-*
- config_name: metaworld-lever-pull_subset
data_files:
- split: train
path: metaworld-lever-pull_subset/train-*
- config_name: metaworld-peg-insert-side_newdata
data_files:
- split: train
path: metaworld-peg-insert-side_newdata/train-*
- config_name: metaworld-peg-insert-side_subset
data_files:
- split: train
path: metaworld-peg-insert-side_subset/train-*
- config_name: metaworld-peg-unplug-side_newdata
data_files:
- split: train
path: metaworld-peg-unplug-side_newdata/train-*
- config_name: metaworld-peg-unplug-side_subset
data_files:
- split: train
path: metaworld-peg-unplug-side_subset/train-*
- config_name: metaworld-pick-out-of-hole_newdata
data_files:
- split: train
path: metaworld-pick-out-of-hole_newdata/train-*
- config_name: metaworld-pick-out-of-hole_subset
data_files:
- split: train
path: metaworld-pick-out-of-hole_subset/train-*
- config_name: metaworld-pick-place-wall_newdata
data_files:
- split: train
path: metaworld-pick-place-wall_newdata/train-*
- config_name: metaworld-pick-place-wall_subset
data_files:
- split: train
path: metaworld-pick-place-wall_subset/train-*
- config_name: metaworld-pick-place_newdata
data_files:
- split: train
path: metaworld-pick-place_newdata/train-*
- config_name: metaworld-pick-place_subset
data_files:
- split: train
path: metaworld-pick-place_subset/train-*
- config_name: metaworld-plate-slide-back-side_newdata
data_files:
- split: train
path: metaworld-plate-slide-back-side_newdata/train-*
- config_name: metaworld-plate-slide-back-side_subset
data_files:
- split: train
path: metaworld-plate-slide-back-side_subset/train-*
- config_name: metaworld-plate-slide-back_newdata
data_files:
- split: train
path: metaworld-plate-slide-back_newdata/train-*
- config_name: metaworld-plate-slide-back_subset
data_files:
- split: train
path: metaworld-plate-slide-back_subset/train-*
- config_name: metaworld-plate-slide-side_newdata
data_files:
- split: train
path: metaworld-plate-slide-side_newdata/train-*
- config_name: metaworld-plate-slide-side_subset
data_files:
- split: train
path: metaworld-plate-slide-side_subset/train-*
- config_name: metaworld-plate-slide_newdata
data_files:
- split: train
path: metaworld-plate-slide_newdata/train-*
- config_name: metaworld-plate-slide_subset
data_files:
- split: train
path: metaworld-plate-slide_subset/train-*
- config_name: metaworld-push-back_newdata
data_files:
- split: train
path: metaworld-push-back_newdata/train-*
- config_name: metaworld-push-back_subset
data_files:
- split: train
path: metaworld-push-back_subset/train-*
- config_name: metaworld-push-wall_newdata
data_files:
- split: train
path: metaworld-push-wall_newdata/train-*
- config_name: metaworld-push-wall_subset
data_files:
- split: train
path: metaworld-push-wall_subset/train-*
- config_name: metaworld-push_newdata
data_files:
- split: train
path: metaworld-push_newdata/train-*
- config_name: metaworld-push_subset
data_files:
- split: train
path: metaworld-push_subset/train-*
- config_name: metaworld-reach-wall_newdata
data_files:
- split: train
path: metaworld-reach-wall_newdata/train-*
- config_name: metaworld-reach-wall_subset
data_files:
- split: train
path: metaworld-reach-wall_subset/train-*
- config_name: metaworld-reach_newdata
data_files:
- split: train
path: metaworld-reach_newdata/train-*
- config_name: metaworld-reach_subset
data_files:
- split: train
path: metaworld-reach_subset/train-*
- config_name: metaworld-shelf-place_newdata
data_files:
- split: train
path: metaworld-shelf-place_newdata/train-*
- config_name: metaworld-shelf-place_subset
data_files:
- split: train
path: metaworld-shelf-place_subset/train-*
- config_name: metaworld-soccer_newdata
data_files:
- split: train
path: metaworld-soccer_newdata/train-*
- config_name: metaworld-soccer_subset
data_files:
- split: train
path: metaworld-soccer_subset/train-*
- config_name: metaworld-stick-pull_newdata
data_files:
- split: train
path: metaworld-stick-pull_newdata/train-*
- config_name: metaworld-stick-pull_subset
data_files:
- split: train
path: metaworld-stick-pull_subset/train-*
- config_name: metaworld-stick-push_newdata
data_files:
- split: train
path: metaworld-stick-push_newdata/train-*
- config_name: metaworld-stick-push_subset
data_files:
- split: train
path: metaworld-stick-push_subset/train-*
- config_name: metaworld-sweep-into_newdata
data_files:
- split: train
path: metaworld-sweep-into_newdata/train-*
- config_name: metaworld-sweep-into_subset
data_files:
- split: train
path: metaworld-sweep-into_subset/train-*
- config_name: metaworld-sweep_newdata
data_files:
- split: train
path: metaworld-sweep_newdata/train-*
- config_name: metaworld-sweep_subset
data_files:
- split: train
path: metaworld-sweep_subset/train-*
- config_name: metaworld-window-close_newdata
data_files:
- split: train
path: metaworld-window-close_newdata/train-*
- config_name: metaworld-window-close_subset
data_files:
- split: train
path: metaworld-window-close_subset/train-*
- config_name: metaworld-window-open_newdata
data_files:
- split: train
path: metaworld-window-open_newdata/train-*
- config_name: metaworld-window-open_subset
data_files:
- split: train
path: metaworld-window-open_subset/train-*
- config_name: mujoco-ant_newdata
data_files:
- split: train
path: mujoco-ant_newdata/train-*
- config_name: mujoco-ant_subset
data_files:
- split: train
path: mujoco-ant_subset/train-*
- config_name: mujoco-doublependulum_newdata
data_files:
- split: train
path: mujoco-doublependulum_newdata/train-*
- config_name: mujoco-doublependulum_subset
data_files:
- split: train
path: mujoco-doublependulum_subset/train-*
- config_name: mujoco-halfcheetah_newdata
data_files:
- split: train
path: mujoco-halfcheetah_newdata/train-*
- config_name: mujoco-hopper_newdata
data_files:
- split: train
path: mujoco-hopper_newdata/train-*
- config_name: mujoco-humanoid_newdata
data_files:
- split: train
path: mujoco-humanoid_newdata/train-*
- config_name: mujoco-humanoid_subset
data_files:
- split: train
path: mujoco-humanoid_subset/train-*
- config_name: mujoco-pendulum_newdata
data_files:
- split: train
path: mujoco-pendulum_newdata/train-*
- config_name: mujoco-pendulum_subset
data_files:
- split: train
path: mujoco-pendulum_subset/train-*
- config_name: mujoco-pusher_newdata
data_files:
- split: train
path: mujoco-pusher_newdata/train-*
- config_name: mujoco-pusher_subset
data_files:
- split: train
path: mujoco-pusher_subset/train-*
- config_name: mujoco-reacher_newdata
data_files:
- split: train
path: mujoco-reacher_newdata/train-*
- config_name: mujoco-reacher_subset
data_files:
- split: train
path: mujoco-reacher_subset/train-*
- config_name: mujoco-standup_newdata
data_files:
- split: train
path: mujoco-standup_newdata/train-*
- config_name: mujoco-standup_subset
data_files:
- split: train
path: mujoco-standup_subset/train-*
- config_name: mujoco-swimmer_newdata
data_files:
- split: train
path: mujoco-swimmer_newdata/train-*
- config_name: mujoco-swimmer_subset
data_files:
- split: train
path: mujoco-swimmer_subset/train-*
- config_name: mujoco-walker_newdata
data_files:
- split: train
path: mujoco-walker_newdata/train-*
- config_name: mujoco-walker_subset
data_files:
- split: train
path: mujoco-walker_subset/train-*
---
|
IGNF/PASTIS-HD | IGNF | "2024-10-04T13:39:24Z" | 18,027 | 7 | [
"task_categories:image-classification",
"task_categories:image-segmentation",
"license:etalab-2.0",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"arxiv:2107.07933",
"arxiv:2112.07558",
"arxiv:2404.08351",
"region:us",
"remote sensing",
"Agricultural"
] | [
"image-classification",
"image-segmentation"
] | "2024-04-02T14:58:15Z" | ---
license: etalab-2.0
task_categories:
- image-classification
- image-segmentation
tags:
- remote sensing
- Agricultural
size_categories:
- 1K<n<10K
---
# 🌱 PASTIS-HD 🌿 Panoptic Agricultural Satellite TIme Series: optical time series, radar time series and very high resolution images
[PASTIS](https://github.com/VSainteuf/pastis-benchmark) is a benchmark dataset for panoptic and semantic segmentation of agricultural parcels from satellite time series.
It contains 2,433 patches within the French metropolitan territory with panoptic annotations (instance index + semantic label for each pixel).
Each patch is a Sentinel-2 multispectral image time series of variable length.
This dataset was extended in 2021 with aligned Sentinel-1 radar observations for all 2,433 patches.
For each patch, it contains approximately 70 Sentinel-1 observations in ascending orbit and 70 in descending orbit. Each Sentinel-1 observation is assembled into a 3-channel image: vertical polarization (VV), horizontal polarization (VH), and the ratio of vertical over horizontal polarization (VV/VH). This extension is named PASTIS-R.
We extend PASTIS with aligned very high resolution satellite images from the SPOT 6-7 constellation for all 2,433 patches, in addition to the Sentinel-1 and Sentinel-2 time series.
The images are resampled to 1m resolution and converted to 8 bits.
This enhancement significantly improves the dataset's spatial content, providing more granular information for agricultural parcel segmentation.
**PASTIS-HD** can be used to evaluate multi-modal fusion methods (with optical time series, radar time series and VHR images) for parcel-based classification, semantic segmentation, and panoptic segmentation.
## Dataset in numbers
🛰️ Sentinel 2 | 🛰️ Sentinel 1 | 🛰️ **SPOT 6-7 VHR** | 🗻 Annotations
:-------------------------------------------- | :-------------------------------------------------- | :------------------------------| :------------------------------
➡️ 2,433 time series | ➡️ 2 × 2,433 time series | ➡️ **2,433 images** | 124,422 individual parcels
➡️ 10m / pixel | ➡️ 10m / pixel | ➡️ **1.5m / pixel** | covers ~4,000 km²
➡️ 128x128 pixels / images | ➡️ 128x128 pixels / images | ➡️ **1280x1280 pixels / images** | over 2B pixels
➡️ 38-61 acquisitions / series | ➡️ ~ 70 acquisitions / series | ➡️ **One observation** | 18 crop types
➡️ 10 spectral bands |➡️ 2 spectral bands | ➡️ **3 spectral bands** |
⚠️ The **SPOT data are natively 1.5m resolution**, but we over-sampled them to 1m to align them pixel-perfectly with the Sentinel data.
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6582b7dd75754a803e484487/sxmnCAGs0p2u_PALLsqyN.jpeg)
## Data loading
A PyTorch dataset class from [the OmniSat repository](https://github.com/gastruc/OmniSat/blob/main/src/data/Pastis.py) can be readily used to load data for training models on PASTIS-HD.
The time series contained in PASTIS have variable lengths.
The Sentinel-1 and Sentinel-2 time series are stored as numpy arrays. The SPOT images are in TIFF format.
The annotations are stored as numpy arrays as well.
⚠️ The S2 and S1 folders contain more than 2,433 files, unlike the labels folder: some patches are not labelled and are not used for training.
The relevant information can be found in the metadata.geojson file (with 2,433 entries), which is used as an index by the dataloader.
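For orientation, here is a rough loading sketch — this is not the official loader, and the exact file patterns, the `ID_PATCH` property, and the annotation/SPOT folder names below are assumptions; prefer the OmniSat dataset class linked above:
```
import json

import numpy as np
from tifffile import imread  # third-party TIFF reader for the SPOT images

# metadata.geojson indexes the 2,433 labelled patches.
with open("PASTIS-HD/metadata.geojson") as f:
    meta = json.load(f)
patch_id = meta["features"][0]["properties"]["ID_PATCH"]  # assumed property name

s2 = np.load(f"PASTIS-HD/DATA_S2/S2_{patch_id}.npy")     # (T, 10, 128, 128), T varies
s1a = np.load(f"PASTIS-HD/DATA_S1A/S1A_{patch_id}.npy")  # ascending Sentinel-1 series
s1d = np.load(f"PASTIS-HD/DATA_S1D/S1D_{patch_id}.npy")  # descending Sentinel-1 series
labels = np.load(f"PASTIS-HD/ANNOTATIONS/TARGET_{patch_id}.npy")  # assumed names
spot = imread(f"PASTIS-HD/DATA_SPOT/SPOT_{patch_id}.tif")         # assumed names
```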
### Remark about the folder names
⚠️ The **DATA_S1A** folder contains the Sentinel-1 **ascending** images, whereas the **DATA_S1D** folder contains the Sentinel-1 **descending** images.
## Ground Truth Annotations
The agricultural parcels are grouped into 18 different crop classes as shown in the table below. The background class corresponds to non-agricultural land, and the void label to parcels that are mostly outside their patch.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6582b7dd75754a803e484487/aHQB0uq4cqBX-7hkCkpFn.png)
Additional information about the dataset can be found in the documentation/pastis-documentation.pdf document.
## Credits
- The Sentinel imagery used in PASTIS was retrieved from [THEIA](www.theia.land.fr):
"Value-added data processed by the CNES for the Theia www.theia.land.fr data cluster using Copernicus data.
The treatments use algorithms developed by Theia’s Scientific Expertise Centres. "
- The annotations used in PASTIS stem from the French [land parcel identification system](https://www.data.gouv.fr/en/datasets/registre-parcellaire-graphique-rpg-contours-des-parcelles-et-ilots-culturaux-et-leur-groupe-de-cultures-majoritaire/) produced
by IGN.
- The SPOT images are open data thanks to the Dataterra Dinamis initiative, under the ["Couverture France DINAMIS"](https://dinamis.data-terra.org/opendata/) program.
## References
If you use PASTIS please cite the [related paper](https://arxiv.org/abs/2107.07933):
```
@article{garnot2021panoptic,
title={Panoptic Segmentation of Satellite Image Time Series
with Convolutional Temporal Attention Networks},
author={Sainte Fare Garnot, Vivien and Landrieu, Loic},
journal={ICCV},
year={2021}
}
```
For the PASTIS-R optical-radar fusion dataset, please also cite [this paper](https://arxiv.org/abs/2112.07558v1):
```
@article{garnot2021mmfusion,
title = {Multi-modal temporal attention models for crop mapping from satellite time series},
journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
year = {2022},
doi = {https://doi.org/10.1016/j.isprsjprs.2022.03.012},
author = {Vivien {Sainte Fare Garnot} and Loic Landrieu and Nesrine Chehata},
}
```
For the PASTIS-HD with the 3 modalities optical-radar time series plus VHR images dataset, please also cite [this paper](https://arxiv.org/abs/2404.08351):
```
@article{astruc2024omnisat,
title={Omni{S}at: {S}elf-Supervised Modality Fusion for {E}arth Observation},
author={Astruc, Guillaume and Gonthier, Nicolas and Mallet, Clement and Landrieu, Loic},
journal={ECCV},
year={2024}
}
``` |
mteb/sickr-sts | mteb | "2022-09-27T19:13:22Z" | 17,750 | 4 | [
"language:en",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2022-04-19T14:28:03Z" | ---
language:
- en
--- |
Jay-Rajput/DIS_IPL_Preds | Jay-Rajput | "2024-05-27T06:26:15Z" | 17,717 | 0 | [
"region:us"
] | null | "2024-04-06T09:18:15Z" | ---
configs:
- config_name: predictions
data_files: predictions/*.json
license: apache-2.0
---
|
cfilt/IITB-IndicMonoDoc | cfilt | "2024-04-16T11:02:11Z" | 17,676 | 3 | [
"task_categories:text-generation",
"language:hi",
"language:mr",
"language:gu",
"language:sa",
"language:ta",
"language:te",
"language:ml",
"language:ne",
"language:as",
"language:bn",
"language:ks",
"language:or",
"language:pa",
"language:ur",
"language:sd",
"language:kn",
"license:cc-by-4.0",
"size_categories:10B<n<100B",
"arxiv:2403.13638",
"region:us",
"language-modeling",
"llm",
"clm"
] | [
"text-generation"
] | "2024-03-20T13:40:03Z" | ---
license: cc-by-4.0
task_categories:
- text-generation
language:
- hi
- mr
- gu
- sa
- ta
- te
- ml
- ne
- as
- bn
- ks
- or
- pa
- ur
- sd
- kn
size_categories:
- 10B<n<100B
tags:
- language-modeling
- llm
- clm
viewer: false
---
IITB Document-level Monolingual Corpora for Indian languages.
22 scheduled languages of India + English
(1) Assamese, (2) Bengali, (3) Gujarati, (4) Hindi, (5) Kannada, (6) Kashmiri, (7) Konkani, (8) Malayalam, (9) Manipuri, (10) Marathi, (11) Nepali, (12) Oriya, (13) Punjabi, (14) Sanskrit, (15) Sindhi, (16) Tamil, (17) Telugu, (18) Urdu (19) Bodo, (20) Santhali, (21) Maithili and (22) Dogri.
| Language | Total (#Mil Tokens) |
|:---------:|:--------------------:|
| bn | 5258.47 |
| en | 11986.53 |
| gu | 887.18 |
| hi | 11268.33 |
| kn | 567.16 |
| ml | 845.32 |
| mr | 1066.76 |
| ne | 1542.39 |
| pa | 449.61 |
| ta | 2171.92 |
| te | 767.18 |
| ur | 2391.79 |
| as | 57.64 |
| brx | 2.25 |
| doi | 0.37 |
| gom | 2.91 |
| kas | 1.27 |
| mai | 1.51 |
| mni | 0.99 |
| or | 81.96 |
| sa | 80.09 |
| sat | 3.05 |
| sd | 83.81 |
| Total | 39518.51 |
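The dataset viewer is disabled, so one simple way to fetch the raw corpus files is to snapshot the repository with `huggingface_hub` (a sketch; this card does not document the file layout inside the repo, so inspect the downloaded directory):
```
from huggingface_hub import snapshot_download

# Download all files of the dataset repo into the local HF cache and
# return the path of the snapshot directory.
local_dir = snapshot_download(repo_id="cfilt/IITB-IndicMonoDoc", repo_type="dataset")
print(local_dir)
```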
To cite this dataset:
```
@misc{doshi2024worry,
title={Do Not Worry if You Do Not Have Data: Building Pretrained Language Models Using Translationese},
author={Meet Doshi and Raj Dabre and Pushpak Bhattacharyya},
year={2024},
eprint={2403.13638},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Zyphra/Zyda | Zyphra | "2024-06-19T01:06:43Z" | 17,672 | 69 | [
"task_categories:text-generation",
"language:en",
"license:odc-by",
"size_categories:1B<n<10B",
"modality:text",
"arxiv:2405.16712",
"arxiv:2101.00027",
"arxiv:2406.01981",
"doi:10.57967/hf/2394",
"region:us"
] | [
"text-generation"
] | "2024-05-04T18:56:59Z" | ---
dataset_info:
config_name: default
splits:
- name: train
num_examples: 1594197267
license: odc-by
pretty_name: Zyda
task_categories:
- text-generation
language:
- en
size_categories:
- n>1T
configs:
- config_name: default
data_files:
- split: train
path: data/*/*/*
- config_name: zyda_no_starcoder
data_files:
- split: train
path: data/zyda_no_starcoder/*/*
- config_name: zyda_arxiv_only
data_files:
- split: train
path: data/zyda_no_starcoder/zyda_arxiv/*
- config_name: zyda_c4-en_only
data_files:
- split: train
path: data/zyda_no_starcoder/c4_en/*
- config_name: zyda_peS2o_only
data_files:
- split: train
path: data/zyda_no_starcoder/zyda_peS2o/*
- config_name: zyda_pile-uncopyrighted_only
data_files:
- split: train
path: data/zyda_no_starcoder/zyda_pile-uncopyrighted/*
- config_name: zyda_refinedweb_only
data_files:
- split: train
path: data/zyda_no_starcoder/zyda_refinedweb/*
- config_name: zyda_slimpajama_only
data_files:
- split: train
path: data/zyda_no_starcoder/zyda_slimpajama/*
- config_name: zyda_starcoder_only
data_files:
- split: train
path: data/zyda_starcoder/*/*
---
# Zyda
<!-- Provide a quick summary of the dataset. -->
Zyda is a 1.3T language modeling dataset created by collecting open and high quality datasets, combining them, and performing a uniform filtering and deduplication step. We find that Zyda performs extremely well in ablations and, thanks to our meticulous post-processing pipeline, is at least comparable to, and potentially better than, the best openly available datasets. We think the best use of Zyda is either as a standalone dataset for language model training up to the 1T scale, or in combination with Fineweb or Dolma for multi-trillion-token training.
An early version of Zyda was used as the primary dataset for phase 1 pretraining of [Zamba](https://arxiv.org/abs/2405.16712), a model which performs strongly on a per-token basis, testifying to the strength of Zyda as a pretraining dataset.
Models trained on Zyda significantly outperform identical models from the Pythia suite trained for 300B tokens on the [Pile](https://arxiv.org/abs/2101.00027).
Zyda also outperforms Dolma, RefinedWeb, and Fineweb in comparisons of 1.4B models trained on 50B tokens of each dataset.
According to our evaluations, the non-starcoder variant of Zyda is the most performant per-token open dataset available on language tasks, while the starcoder variant ties with Fineweb.
<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/VdrCqypZtTpjEs7bH1k9s.png" width="650" alt="Zyda performance across steps.">
</center>
These results are aggregate scores of classic language modeling evaluations (PIQA, WinoGrande, OpenBookQA, ARC-Easy, ARC-Challenge) across time for a 1.4B model trained on 50B tokens of each dataset.
## How to download
Full dataset:
```
import datasets
ds = datasets.load_dataset("Zyphra/Zyda", split="train")
```
Full dataset without StarCoder:
```
import datasets
ds = datasets.load_dataset("Zyphra/Zyda", name="zyda_no_starcoder", split="train")
```
To download an individual component, pass its name via the `name` argument of `load_dataset()` — see the example after this list:
- zyda_arxiv_only
- zyda_c4-en_only
- zyda_peS2o_only
- zyda_pile-uncopyrighted_only
- zyda_refinedweb_only
- zyda_slimpajama_only
- zyda_starcoder_only
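For instance, streaming avoids downloading a full component up front:
```
import datasets

# Stream the arXiv component only; any name from the list above works.
ds = datasets.load_dataset(
    "Zyphra/Zyda", name="zyda_arxiv_only", split="train", streaming=True
)
print(next(iter(ds))["text"][:200])
```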
## Breakdown by component
| Component | Download size (parquet, GBs) | Documents (millions) | gpt-neox tokens (billions) |
| --- | --- | --- | --- |
| zyda_refinedweb_only | 1,712.4 | 920.5 | 564.8 |
| zyda_c4-en_only | 366.7 | 254.5 | 117.5 |
| zyda_slimpajama_only | 594.7 | 142.3 | 242.3 |
| zyda_pile-uncopyrighted_only | 189.4 | 64.9 | 82.9 |
| zyda_peS2o_only | 133.7 | 35.7 | 53.4 |
| zyda_arxiv_only | 8.3 | 0.3 | 4.7 |
| zyda_starcoder_only | 299.5 | 176.1 | 231.3 |
| Total | 3,304.7 | 1,594.2 | 1,296.7 |
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** Zyphra
- **Language(s) (NLP):** Primarily English
- **License:** Open Data Commons License
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
Dataset fields:
- `text`: contains actual text for training
- `source`: component the text is coming from
- `filtering_features`: precomputed values of different features that were used for filtering (converted to json string)
- `source_other`: metadata from the source dataset (converted to json string; see the decoding sketch below)
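Since `filtering_features` and `source_other` arrive as JSON strings, decode them when you need the structured values (a minimal sketch):
```
import json

import datasets

ds = datasets.load_dataset("Zyphra/Zyda", name="zyda_arxiv_only", split="train", streaming=True)
example = next(iter(ds))
filtering_features = json.loads(example["filtering_features"])
source_metadata = json.loads(example["source_other"])
```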
### Source Data
Zyda was drawn from seven component open datasets which are well-regarded in the community. These are:
Pile Uncopyrighted: https://huggingface.co/datasets/monology/pile-uncopyrighted
C4-en: https://huggingface.co/datasets/allenai/c4
peS2o: https://huggingface.co/datasets/allenai/peS2o
RefinedWeb: https://huggingface.co/datasets/tiiuae/falcon-refinedweb
SlimPajama: https://huggingface.co/datasets/cerebras/SlimPajama-627B
arxiv_s2orc_parsed: https://huggingface.co/datasets/ArtifactAI/arxiv_s2orc_parsed
StarCoder: https://huggingface.co/datasets/bigcode/starcoderdata
<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/eCJWG3ZoA4fVk8bZZBHaG.png" width="650" alt="Composition of Zyda">
</center>
<!-- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/eCJWG3ZoA4fVk8bZZBHaG.png) -->
<!-- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/dQV8zNTNCx1xMMT-iupY6.png) -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
Zyda was created using a two-stage post-processing pipeline consisting of *filtering* and *deduplication*.
For the filtering stage, we utilized a set of hand-crafted and tuned filters derived from a number of sources such as C4, RedPajama, and Gopher, in addition to our own filters.
For the deduplication stage, we used minhash approximate deduplication: we deduplicated on 13-grams, used a minhash signature size of 128, and filtered out documents above a Jaccard similarity of 0.4.
For full details on our data processing, see the [Zyda technical report](https://arxiv.org/abs/2406.01981) and our [dataset processing code](https://github.com/Zyphra/Zyda_processing).
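For intuition, here is a minimal sketch of this style of minhash deduplication using the `datasketch` library. It illustrates the technique under the stated parameters (word-level 13-grams assumed); the actual Zyda pipeline lives in the processing code linked above:
```
from datasketch import MinHash, MinHashLSH

def minhash_signature(text: str, n: int = 13, num_perm: int = 128) -> MinHash:
    # Hash word-level 13-gram shingles into a minhash signature.
    words = text.split()
    shingles = {" ".join(words[i : i + n]) for i in range(max(len(words) - n + 1, 1))}
    m = MinHash(num_perm=num_perm)
    for s in shingles:
        m.update(s.encode("utf-8"))
    return m

corpus = ["example document one ...", "example document two ..."]

# LSH index that retrieves previously kept documents whose estimated
# Jaccard similarity exceeds 0.4.
lsh = MinHashLSH(threshold=0.4, num_perm=128)
kept = []
for i, doc in enumerate(corpus):
    sig = minhash_signature(doc)
    if not lsh.query(sig):  # no near-duplicate kept so far
        lsh.insert(str(i), sig)
        kept.append(doc)
```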
#### Personal and Sensitive Information
As a language modelling dataset, it likely contains PII which has not been filtered out of the component datasets and which may have been missed by our own filters.
## Bias, Risks, and Limitations
As a dataset composed of open web scrapes, it likely contains biased and toxic content.
## Licensing Information
We are releasing this dataset under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this dataset, you are also bound by any license agreements and terms of use of the original data sources.
## Citation
If you use our dataset to train a model, please cite us at:
```
@misc{tokpanov2024zyda,
title={Zyda: A 1.3T Dataset for Open Language Modeling},
author={Yury Tokpanov and Beren Millidge and Paolo Glorioso and Jonathan Pilault and Adam Ibrahim and James Whittington and Quentin Anthony},
year={2024},
eprint={2406.01981},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Yelp/yelp_review_full | Yelp | "2024-01-04T17:14:53Z" | 17,520 | 99 | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:other",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1509.01626",
"region:us"
] | [
"text-classification"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: YelpReviewFull
license_details: yelp-licence
dataset_info:
config_name: yelp_review_full
features:
- name: label
dtype:
class_label:
names:
'0': 1 star
'1': 2 star
'2': 3 stars
'3': 4 stars
'4': 5 stars
- name: text
dtype: string
splits:
- name: train
num_bytes: 483811554
num_examples: 650000
- name: test
num_bytes: 37271188
num_examples: 50000
download_size: 322952369
dataset_size: 521082742
configs:
- config_name: yelp_review_full
data_files:
- split: train
path: yelp_review_full/train-*
- split: test
path: yelp_review_full/test-*
default: true
train-eval-index:
- config: yelp_review_full
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for YelpReviewFull
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Yelp](https://www.yelp.com/dataset)
- **Repository:** [Crepe](https://github.com/zhangxiangxiao/Crepe)
- **Paper:** [Character-level Convolutional Networks for Text Classification](https://arxiv.org/abs/1509.01626)
- **Point of Contact:** [Xiang Zhang](mailto:xiang.zhang@nyu.edu)
### Dataset Summary
The Yelp reviews dataset consists of reviews from Yelp.
It is extracted from the Yelp Dataset Challenge 2015 data.
### Supported Tasks and Leaderboards
- `text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment.
### Languages
The reviews were mainly written in English.
## Dataset Structure
### Data Instances
A typical data point comprises a text and the corresponding label.
An example from the YelpReviewFull test set looks as follows:
```
{
'label': 0,
'text': 'I got \'new\' tires from them and within two weeks got a flat. I took my car to a local mechanic to see if i could get the hole patched, but they said the reason I had a flat was because the previous patch had blown - WAIT, WHAT? I just got the tire and never needed to have it patched? This was supposed to be a new tire. \\nI took the tire over to Flynn\'s and they told me that someone punctured my tire, then tried to patch it. So there are resentful tire slashers? I find that very unlikely. After arguing with the guy and telling him that his logic was far fetched he said he\'d give me a new tire \\"this time\\". \\nI will never go back to Flynn\'s b/c of the way this guy treated me and the simple fact that they gave me a used tire!'
}
```
### Data Fields
- 'text': The review texts are escaped using double quotes ("), and any internal double quote is escaped by 2 double quotes (""). New lines are escaped by a backslash followed by an "n" character, that is "\n". A small unescaping sketch follows this list.
- 'label': Corresponds to the score associated with the review (between 1 and 5).
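Note that this escaping concerns the raw CSV release; `load_dataset` returns already-decoded text. A minimal sketch of undoing it by hand, assuming a raw field value as described above:
```
def unescape_review(raw: str) -> str:
    # Strip the surrounding double quotes, then undo the escaping:
    # "" -> " and \n -> a real newline.
    if raw.startswith('"') and raw.endswith('"'):
        raw = raw[1:-1]
    return raw.replace('""', '"').replace("\\n", "\n")

print(unescape_review('"He said ""great food"".\\nFive stars!"'))
```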
### Data Splits
The Yelp reviews full star dataset is constructed by randomly taking 130,000 training samples and 10,000 testing samples for each review star from 1 to 5.
In total there are 650,000 training samples and 50,000 testing samples.
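For example, with `datasets`:
```
from datasets import load_dataset

ds = load_dataset("Yelp/yelp_review_full")
print(ds["train"].num_rows, ds["test"].num_rows)  # 650000 50000
print(ds["train"].features["label"].names)        # ['1 star', ..., '5 stars']
```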
## Dataset Creation
### Curation Rationale
The Yelp reviews full star dataset is constructed by Xiang Zhang (xiang.zhang@nyu.edu) from the Yelp Dataset Challenge 2015. It is first used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
You can check the official [yelp-dataset-agreement](https://s3-media3.fl.yelpcdn.com/assets/srv0/engineering_pages/bea5c1e92bf3/assets/vendor/yelp-dataset-agreement.pdf).
### Citation Information
Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
### Contributions
Thanks to [@hfawaz](https://github.com/hfawaz) for adding this dataset. |
Matthijs/cmu-arctic-xvectors | Matthijs | "2023-02-07T14:04:48Z" | 17,479 | 38 | [
"task_categories:text-to-speech",
"task_categories:audio-to-audio",
"license:mit",
"size_categories:1K<n<10K",
"modality:text",
"modality:timeseries",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text-to-speech",
"audio-to-audio"
] | "2023-02-07T12:39:22Z" | ---
pretty_name: CMU ARCTIC X-Vectors
task_categories:
- text-to-speech
- audio-to-audio
license: mit
---
# Speaker embeddings extracted from CMU ARCTIC
There is one `.npy` file for each utterance in the dataset, 7931 files in total. The speaker embeddings are 512-element X-vectors.
The [CMU ARCTIC](http://www.festvox.org/cmu_arctic/) dataset divides the utterances among the following speakers:
- bdl (US male)
- slt (US female)
- jmk (Canadian male)
- awb (Scottish male)
- rms (US male)
- clb (US female)
- ksp (Indian male)
The X-vectors were extracted using [this script](https://huggingface.co/mechanicalsea/speecht5-vc/blob/main/manifest/utils/prep_cmu_arctic_spkemb.py), which uses the `speechbrain/spkrec-xvect-voxceleb` model.
Usage:
```python
import torch
from datasets import load_dataset
embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = embeddings_dataset[7306]["xvector"]
speaker_embeddings = torch.tensor(speaker_embeddings).unsqueeze(0)
```
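As a follow-up, a hedged sketch of feeding one of these X-vectors to SpeechT5 text-to-speech in `transformers` (this assumes the `microsoft/speecht5_tts` and `microsoft/speecht5_hifigan` checkpoints, which are separate from this dataset):
```python
import torch
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hello, my dog is cute.", return_tensors="pt")
# Returns a 1-D waveform tensor at 16 kHz.
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```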
|
hpprc/emb | hpprc | "2024-09-13T01:51:47Z" | 17,305 | 10 | [
"language:ja",
"license:other",
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2409.07737",
"region:us"
] | null | "2024-04-15T14:12:27Z" | ---
language:
- ja
license: other
dataset_info:
- config_name: auto-wiki-nli-triplet
features:
- name: anc
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: neg.orig
sequence: string
splits:
- name: train
num_bytes: 533673945
num_examples: 198895
download_size: 362814978
dataset_size: 533673945
- config_name: auto-wiki-qa-collection
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5215705706
num_examples: 8215817
download_size: 3385038265
dataset_size: 5215705706
- config_name: auto-wiki-qa-dataset
features:
- name: passage_id
dtype: int64
- name: anc
dtype: string
- name: answers
sequence: string
- name: pos_ids.original
sequence: int64
- name: neg_ids.original
sequence: 'null'
- name: pos_ids.me5-large
sequence: int64
- name: pos_sims.me5-large
sequence: float64
- name: pos_ids.bm25
sequence: int64
- name: pos_sims.bm25
sequence: float64
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 30767957804
num_examples: 2377503
download_size: 21875194075
dataset_size: 30767957804
- config_name: auto-wiki-qa-nemotron-collection
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 4202532852
num_examples: 6354725
download_size: 2709124196
dataset_size: 4202532852
- config_name: auto-wiki-qa-nemotron-dataset
features:
- name: passage_id
dtype: int64
- name: anc
dtype: string
- name: answers
sequence: string
- name: pos_ids.original
sequence: int64
- name: neg_ids.original
sequence: 'null'
- name: pos_ids.me5-large
sequence: int64
- name: pos_sims.me5-large
sequence: float64
- name: pos_ids.bm25
sequence: int64
- name: pos_sims.bm25
sequence: float64
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 2034181294
num_examples: 156089
download_size: 1449231482
dataset_size: 2034181294
- config_name: baobab-wiki-retrieval-collection
features:
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3771123469
num_examples: 5140862
download_size: 2463376300
dataset_size: 3771123469
- config_name: baobab-wiki-retrieval-dataset
features:
- name: anc
dtype: string
- name: pos_1st
dtype: string
- name: neg_1st.original
dtype: 'null'
- name: neg_1st.me5-large
dtype: string
- name: sim_1st.me5-large
dtype: float64
- name: neg_1st.bm25
dtype: string
- name: sim_1st.bm25
dtype: float64
- name: pos_ids
sequence: int64
- name: neg_ids.original
sequence: 'null'
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 7837529
num_examples: 838
download_size: 5661379
dataset_size: 7837529
- config_name: jagovfaqs-collection
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 13918890
num_examples: 22794
download_size: 5874592
dataset_size: 13918890
- config_name: jagovfaqs-dataset
features:
- name: anc
dtype: string
- name: pos_1st
dtype: string
- name: neg_1st.original
dtype: 'null'
- name: neg_1st.me5-large
dtype: string
- name: sim_1st.me5-large
dtype: float64
- name: neg_1st.bm25
dtype: string
- name: sim_1st.bm25
dtype: float64
- name: pos_ids
sequence: int64
- name: neg_ids.original
sequence: 'null'
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 205284001
num_examples: 22794
download_size: 93115345
dataset_size: 205284001
- config_name: janli-triplet
features:
- name: anc
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: neg.orig
sequence: string
splits:
- name: train
num_bytes: 14075833
num_examples: 13496
download_size: 3088881
dataset_size: 14075833
- config_name: jaquad-collection
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 4213318372
num_examples: 6364369
download_size: 2716125410
dataset_size: 4213318372
- config_name: jaquad-dataset
features:
- name: anc
dtype: string
- name: answers
sequence: string
- name: pos_ids.original
sequence: int64
- name: neg_ids.original
sequence: 'null'
- name: pos_ids.me5-large
sequence: int64
- name: pos_sims.me5-large
sequence: float64
- name: pos_ids.bm25
sequence: int64
- name: pos_sims.bm25
sequence: float64
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 410758435
num_examples: 31748
download_size: 267846825
dataset_size: 410758435
- config_name: jcommonsenseqa-dataset
features:
- name: anc
dtype: string
- name: answers
sequence: string
- name: pos_ids.original
sequence: 'null'
- name: neg_ids.original
sequence: 'null'
splits:
- name: train
num_bytes: 673948
num_examples: 8939
download_size: 381605
dataset_size: 673948
- config_name: jqara-collection
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 4267669475
num_examples: 6433384
download_size: 2751666583
dataset_size: 4267669475
- config_name: jqara-dataset
features:
- name: anc
dtype: string
- name: answers
sequence: string
- name: pos_ids.original
sequence: int64
- name: neg_ids.original
sequence: int64
- name: pos_ids.me5-large
sequence: int64
- name: pos_sims.me5-large
sequence: float64
- name: pos_ids.bm25
sequence: int64
- name: pos_sims.bm25
sequence: float64
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 29789340
num_examples: 2235
download_size: 22310036
dataset_size: 29789340
- config_name: jsnli-triplet
features:
- name: anc
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: neg.orig
sequence: string
splits:
- name: train
num_bytes: 170593490
num_examples: 144190
download_size: 88629828
dataset_size: 170593490
- config_name: jsquad-collection
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 4210493031
num_examples: 6369790
download_size: 2714126867
dataset_size: 4210493031
- config_name: jsquad-dataset
features:
- name: passage_id
dtype: int64
- name: anc
dtype: string
- name: answers
sequence: string
- name: pos_ids.original
sequence: int64
- name: neg_ids.original
sequence: 'null'
- name: pos_ids.me5-large
sequence: int64
- name: pos_sims.me5-large
sequence: float64
- name: pos_ids.bm25
sequence: int64
- name: pos_sims.bm25
sequence: float64
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 812736672
num_examples: 62859
download_size: 514718047
dataset_size: 812736672
- config_name: miracl-collection
features:
- name: passage_id
dtype: int64
- name: docid
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3026160577.0
num_examples: 6953614
download_size: 1682864613
dataset_size: 3026160577.0
- config_name: miracl-dataset
features:
- name: anc
dtype: string
- name: pos_1st
dtype: string
- name: neg_1st.original
dtype: string
- name: neg_1st.me5-large
dtype: string
- name: sim_1st.me5-large
dtype: float64
- name: neg_1st.bm25
dtype: string
- name: sim_1st.bm25
dtype: float64
- name: pos_ids
sequence: int64
- name: neg_ids.original
sequence: int64
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 32393484
num_examples: 3477
download_size: 23431039
dataset_size: 32393484
- config_name: mkqa-dataset
features:
- name: anc
dtype: string
- name: answers
sequence: string
- name: pos_ids.original
sequence: 'null'
- name: neg_ids.original
sequence: 'null'
- name: pos_ids.me5-large
sequence: int64
- name: pos_sims.me5-large
sequence: float64
- name: pos_ids.bm25
sequence: int64
- name: pos_sims.bm25
sequence: float64
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 129900532
num_examples: 10000
download_size: 88793974
dataset_size: 129900532
- config_name: mkqa-triplet
features:
- name: idx
dtype: string
- name: anc
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
splits:
- name: train
num_bytes: 7640649
num_examples: 10000
download_size: 4121496
dataset_size: 7640649
- config_name: mmarco-collection
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3814117634
num_examples: 8829813
download_size: 2217976936
dataset_size: 3814117634
- config_name: mmarco-dataset
features:
- name: anc
dtype: string
- name: pos_1st
dtype: string
- name: neg_1st.original
dtype: string
- name: neg_1st.me5-large
dtype: string
- name: sim_1st.me5-large
dtype: float64
- name: neg_1st.bm25
dtype: string
- name: sim_1st.bm25
dtype: float64
- name: pos_ids
sequence: int64
- name: neg_ids.original
sequence: int64
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 3548801103
num_examples: 391060
download_size: 2624355417
dataset_size: 3548801103
- config_name: mr-tydi-collection
features:
- name: passage_id
dtype: int64
- name: docid
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3061941618
num_examples: 7000027
download_size: 1702050239
dataset_size: 3061941618
- config_name: mr-tydi-dataset
features:
- name: anc
dtype: string
- name: pos_1st
dtype: string
- name: neg_1st.original
dtype: string
- name: neg_1st.me5-large
dtype: string
- name: sim_1st.me5-large
dtype: float64
- name: neg_1st.bm25
dtype: string
- name: sim_1st.bm25
dtype: float64
- name: pos_ids
sequence: int64
- name: neg_ids.original
sequence: int64
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 35660240
num_examples: 3697
download_size: 25702000
dataset_size: 35660240
- config_name: niilc-qa-dataset
features:
- name: id
dtype: string
- name: anc
dtype: string
- name: answers
sequence: string
splits:
- name: dev
num_bytes: 94339
num_examples: 795
- name: test
num_bytes: 24706
num_examples: 198
download_size: 69487
dataset_size: 119045
- config_name: nu-mnli-triplet
features:
- name: anc
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: neg.orig
sequence: string
splits:
- name: train
num_bytes: 145358014
num_examples: 77785
download_size: 90397670
dataset_size: 145358014
- config_name: nu-snli-triplet
features:
- name: anc
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: neg.orig
sequence: string
splits:
- name: train
num_bytes: 133786645
num_examples: 109154
download_size: 68979487
dataset_size: 133786645
- config_name: paws-x-triplet
features:
- name: anc
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: neg.orig
sequence: string
splits:
- name: train
num_bytes: 124053741
num_examples: 49401
download_size: 75965630
dataset_size: 124053741
- config_name: qa-collection
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 4202542828.0
num_examples: 6354742
download_size: 2284295643
dataset_size: 4202542828.0
- config_name: quiz-no-mori-dataset
features:
- name: anc
dtype: string
- name: answers
sequence: string
- name: pos_ids.original
sequence: 'null'
- name: neg_ids.original
sequence: 'null'
- name: pos_ids.me5-large
sequence: int64
- name: pos_sims.me5-large
sequence: float64
- name: pos_ids.bm25
sequence: int64
- name: pos_sims.bm25
sequence: float64
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 340206118
num_examples: 25991
download_size: 241017142
dataset_size: 340206118
- config_name: quiz-works-dataset
features:
- name: anc
dtype: string
- name: answers
sequence: string
- name: pos_ids.original
sequence: 'null'
- name: neg_ids.original
sequence: 'null'
- name: pos_ids.me5-large
sequence: int64
- name: pos_sims.me5-large
sequence: float64
- name: pos_ids.bm25
sequence: int64
- name: pos_sims.bm25
sequence: float64
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 248971793
num_examples: 19073
download_size: 176241965
dataset_size: 248971793
- config_name: snow-triplet
features:
- name: anc
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
splits:
- name: train
num_bytes: 63640356
num_examples: 62758
download_size: 35752257
dataset_size: 63640356
configs:
- config_name: auto-wiki-nli-triplet
data_files:
- split: train
path: auto-wiki-nli-triplet/train-*
- config_name: auto-wiki-qa-collection
data_files:
- split: train
path: auto-wiki-qa-collection/train-*
- config_name: auto-wiki-qa-dataset
data_files:
- split: train
path: auto-wiki-qa-dataset/train-*
- config_name: auto-wiki-qa-nemotron-collection
data_files:
- split: train
path: auto-wiki-qa-nemotron-collection/train-*
- config_name: auto-wiki-qa-nemotron-dataset
data_files:
- split: train
path: auto-wiki-qa-nemotron-dataset/train-*
- config_name: baobab-wiki-retrieval-collection
data_files:
- split: train
path: baobab-wiki-retrieval-collection/train-*
- config_name: baobab-wiki-retrieval-dataset
data_files:
- split: train
path: baobab-wiki-retrieval-dataset/train-*
- config_name: jagovfaqs-collection
data_files:
- split: train
path: jagovfaqs-collection/train-*
- config_name: jagovfaqs-dataset
data_files:
- split: train
path: jagovfaqs-dataset/train-*
- config_name: janli-triplet
data_files:
- split: train
path: janli-triplet/train-*
- config_name: jaquad-collection
data_files:
- split: train
path: jaquad-collection/train-*
- config_name: jaquad-dataset
data_files:
- split: train
path: jaquad-dataset/train-*
- config_name: jcommonsenseqa-dataset
data_files:
- split: train
path: jcommonsenseqa-dataset/train-*
- config_name: jqara-collection
data_files:
- split: train
path: jqara-collection/train-*
- config_name: jqara-dataset
data_files:
- split: train
path: jqara-dataset/train-*
- config_name: jsnli-triplet
data_files:
- split: train
path: jsnli-triplet/train-*
- config_name: jsquad-collection
data_files:
- split: train
path: jsquad-collection/train-*
- config_name: jsquad-dataset
data_files:
- split: train
path: jsquad-dataset/train-*
- config_name: miracl-collection
data_files:
- split: train
path: miracl-collection/train-*
- config_name: miracl-dataset
data_files:
- split: train
path: miracl-dataset/train-*
- config_name: mkqa-dataset
data_files:
- split: train
path: mkqa-dataset/train-*
- config_name: mkqa-triplet
data_files:
- split: train
path: mkqa-triplet/train-*
- config_name: mmarco-collection
data_files:
- split: train
path: mmarco-collection/train-*
- config_name: mmarco-dataset
data_files:
- split: train
path: mmarco-dataset/train-*
- config_name: mr-tydi-collection
data_files:
- split: train
path: mr-tydi-collection/train-*
- config_name: mr-tydi-dataset
data_files:
- split: train
path: mr-tydi-dataset/train-*
- config_name: niilc-qa-dataset
data_files:
- split: dev
path: niilc-qa-dataset/dev-*
- split: test
path: niilc-qa-dataset/test-*
- config_name: nu-mnli-triplet
data_files:
- split: train
path: nu-mnli-triplet/train-*
- config_name: nu-snli-triplet
data_files:
- split: train
path: nu-snli-triplet/train-*
- config_name: paws-x-triplet
data_files:
- split: train
path: paws-x-triplet/train-*
- config_name: qa-collection
data_files:
- split: train
path: qa-collection/train-*
- config_name: quiz-no-mori-dataset
data_files:
- split: train
path: quiz-no-mori-dataset/train-*
- config_name: quiz-works-dataset
data_files:
- split: train
path: quiz-works-dataset/train-*
- config_name: snow-triplet
data_files:
- split: train
path: snow-triplet/train-*
---
still WIP
## Dataset Description
- **Paper:** https://arxiv.org/abs/2409.07737
- **Point of Contact:** [Hayato Tsukagoshi](mailto:tsukagoshi.hayato.r2@s.mail.nagoya-u.ac.jp)
## Information
|Name|Type|License (basis)|
|-|-|-|
|MMARCO|Retrieval|[Apache 2.0 (?)](https://huggingface.co/datasets/unicamp-dl/mmarco)|
|Mr. TyDi|Retrieval|[Apache 2.0](https://huggingface.co/datasets/castorini/mr-tydi)|
|MIRACL|Retrieval|[Apache 2.0](https://huggingface.co/datasets/miracl/miracl)|
|JaGovFaqs|QA|[CC-BY-4.0](https://huggingface.co/datasets/matsuxr/JaGovFaqs-22k)|
|Auto Wiki QA|QA & Retrieval|[CC-BY-SA-4.0](https://huggingface.co/datasets/cl-nagoya/auto-wiki-qa)|
|Auto Wiki QA Nemotron|QA & Retrieval|[CC-BY-SA-4.0](https://huggingface.co/datasets/hpprc/auto-wiki-qa-nemotron)|
|JCommonsenseQA|QA|[CC-BY-SA-4.0](https://github.com/yahoojapan/JGLUE)|
|JSQuAD|QA & Retrieval|[CC-BY-SA-4.0](https://github.com/yahoojapan/JGLUE)|
|Japanese Wikipedia Human Retrieval|QA & Retrieval|[Apache 2.0](https://huggingface.co/datasets/baobab-trees/wikipedia-human-retrieval-ja)|
|JQaRA (dev, unused)|QA|[CC-BY-SA-4.0](https://huggingface.co/datasets/hotchpotch/JQaRA#:~:text=%E3%81%B0%E5%B9%B8%E3%81%84%E3%81%A7%E3%81%99%E3%80%82-,%E3%83%A9%E3%82%A4%E3%82%BB%E3%83%B3%E3%82%B9,%E3%81%A7%E3%81%82%E3%82%8B%20CC%20BY%2DSA%204.0%20%E3%81%BE%E3%81%9F%E3%81%AF%20GFDL%E3%81%A8%E3%81%97%E3%81%BE%E3%81%99%E3%80%82,-%E8%AC%9D%E8%BE%9E)|
|JaQuAD|QA & Retrieval|[CC-BY-SA-3.0](https://huggingface.co/datasets/SkelterLabsInc/JaQuAD)|
|JSNLI|NLI|[CC-BY-SA-4.0](https://huggingface.co/datasets/shunk031/jsnli)|
|Auto Wiki NLI|NLI|[CC-BY-SA-4.0](https://huggingface.co/datasets/hpprc/auto-wiki-nli-reward)|
|NU-SNLI|NLI|[CC-BY-SA-4.0](https://huggingface.co/datasets/cl-nagoya/nu-snli)|
|NU-MNLI|NLI|[CC-BY-SA-3.0, MIT, Others](https://huggingface.co/datasets/cl-nagoya/nu-mnli)|
|PAWS-X|Paraphrase|[Free (reuse permitted)](https://github.com/google-research-datasets/paws?tab=License-1-ov-file#readme)|
|SNOW|Paraphrase|[CC-BY-3.0](https://huggingface.co/datasets/SNOW-NLP/snow_simplified_japanese_corpus)|
|MKQA|QA|[CC-BY-3.0](https://huggingface.co/datasets/apple/mkqa)|
|Quiz Works|QA|[Free (reuse permitted)](https://quiz-works.com/about)|
|Quiz No Mori|QA|[Free (reuse permitted)](https://quiz-schedule.info/quiz_no_mori/quizforestsecond.html)|
|NIILC QA|QA|[CC-BY-SA](https://mynlp.is.s.u-tokyo.ac.jp/niilc-qa/)| |
mlfoundations/dclm-pool-1b-5x | mlfoundations | "2024-06-22T05:50:04Z" | 17,225 | 1 | [
"license:cc-by-4.0",
"region:us"
] | null | "2024-06-12T04:26:45Z" | ---
license: cc-by-4.0
--- |
Kaichengalex/YFCC15M | Kaichengalex | "2024-10-22T14:28:44Z" | 17,140 | 3 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:image",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2406.06973",
"region:us"
] | null | "2024-09-26T03:38:58Z" | ---
dataset_info:
features:
- name: images
dtype: image
- name: texts
sequence: float32
splits:
- name: train
num_bytes: 748710703
num_examples: 10000
download_size: 746368611
dataset_size: 748710703
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
## YFCC15M Recaption Dataset
This YFCC15M dataset was filtered by [DeCLIP](https://github.com/Sense-GVT/DeCLIP) and recaptioned using the diverse description generation framework proposed in [RWKV-CLIP](https://github.com/deepglint/RWKV-CLIP).
The text is a list of text tokens with a length of 77, encoded using the CLIP tokenizer. You can use `from clip.simple_tokenizer import SimpleTokenizer as _Tokenizer` to decode it back into the original text.
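For reference, here is a minimal decoding sketch. It assumes the tokenizer from the [openai/CLIP](https://github.com/openai/CLIP) repository and that sequences are zero-padded with start/end markers, as in CLIP's `tokenize` helper; adjust if the sequences are stored differently.
```python
from clip.simple_tokenizer import SimpleTokenizer as _Tokenizer

_tokenizer = _Tokenizer()

def decode_caption(tokens):
    # Drop zero padding and the <|startoftext|>/<|endoftext|> markers
    # (assumed packing -- adjust if the sequences are stored differently).
    sot = _tokenizer.encoder["<|startoftext|>"]
    eot = _tokenizer.encoder["<|endoftext|>"]
    ids = [int(t) for t in tokens if int(t) not in (0, sot, eot)]
    return _tokenizer.decode(ids)
```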
## Using Dataset
You can easily download and use the YFCC15M dataset with Hugging Face's datasets library.
```
from datasets import load_dataset
dataset = load_dataset("Kaichengalex/YFCC15M")
```
## References
If you find this dataset useful, please use the following BibTeX entry for citation.
```
@misc{gu2024rwkvclip,
title={RWKV-CLIP: A Robust Vision-Language Representation Learner},
author={Tiancheng Gu and Kaicheng Yang and Xiang An and Ziyong Feng and Dongnan Liu and Weidong Cai and Jiankang Deng},
year={2024},
eprint={2406.06973},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
gsdf/EasyNegative | gsdf | "2023-02-12T14:39:30Z" | 17,064 | 1,132 | [
"license:other",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2023-02-01T10:58:06Z" | ---
license: other
---
# Negative Embedding
This is a Negative Embedding trained with Counterfeit. Place it in the "\stable-diffusion-webui\embeddings" folder.
It can be used with other models, but its effectiveness is not guaranteed.
# Counterfeit-V2.0.safetensors
![sample1](https://huggingface.co/datasets/gsdf/EasyNegative/resolve/main/sample01.png)
# AbyssOrangeMix2_sfw.safetensors
![sample2](https://huggingface.co/datasets/gsdf/EasyNegative/resolve/main/sample02.png)
# anything-v4.0-pruned.safetensors
![sample3](https://huggingface.co/datasets/gsdf/EasyNegative/resolve/main/sample03.png) |
dair-ai/emotion | dair-ai | "2024-08-08T06:10:47Z" | 17,013 | 307 | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:other",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"emotion-classification"
] | [
"text-classification"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
paperswithcode_id: emotion
pretty_name: Emotion
tags:
- emotion-classification
dataset_info:
- config_name: split
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': sadness
'1': joy
'2': love
'3': anger
'4': fear
'5': surprise
splits:
- name: train
num_bytes: 1741533
num_examples: 16000
- name: validation
num_bytes: 214695
num_examples: 2000
- name: test
num_bytes: 217173
num_examples: 2000
download_size: 1287193
dataset_size: 2173401
- config_name: unsplit
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': sadness
'1': joy
'2': love
'3': anger
'4': fear
'5': surprise
splits:
- name: train
num_bytes: 45444017
num_examples: 416809
download_size: 26888538
dataset_size: 45444017
configs:
- config_name: split
data_files:
- split: train
path: split/train-*
- split: validation
path: split/validation-*
- split: test
path: split/test-*
default: true
- config_name: unsplit
data_files:
- split: train
path: unsplit/train-*
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for "emotion"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/dair-ai/emotion_dataset](https://github.com/dair-ai/emotion_dataset)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 16.13 MB
- **Size of the generated dataset:** 47.62 MB
- **Total amount of disk used:** 63.75 MB
### Dataset Summary
Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
An example looks as follows.
```
{
"text": "im feeling quite sad and sorry for myself but ill snap out of it soon",
"label": 0
}
```
### Data Fields
The data fields are:
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `sadness` (0), `joy` (1), `love` (2), `anger` (3), `fear` (4), `surprise` (5).
### Data Splits
The dataset has 2 configurations:
- split: with a total of 20_000 examples split into train, validation and test
- unsplit: with a total of 416_809 examples in a single train split
| name | train | validation | test |
|---------|-------:|-----------:|-----:|
| split | 16000 | 2000 | 2000 |
| unsplit | 416809 | n/a | n/a |
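A minimal usage sketch (assuming the Hugging Face `datasets` library; `split` is the default configuration):
```python
from datasets import load_dataset

# Default "split" config: train / validation / test
ds = load_dataset("dair-ai/emotion", "split")

# The ClassLabel feature maps integer labels back to emotion names
int2str = ds["train"].features["label"].int2str
sample = ds["train"][0]
print(sample["text"], "->", int2str(sample["label"]))

# "unsplit" config: a single train split with all 416,809 examples
unsplit = load_dataset("dair-ai/emotion", "unsplit", split="train")
```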
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset should be used for educational and research purposes only.
### Citation Information
If you use this dataset, please cite:
```
@inproceedings{saravia-etal-2018-carer,
title = "{CARER}: Contextualized Affect Representations for Emotion Recognition",
author = "Saravia, Elvis and
Liu, Hsien-Chi Toby and
Huang, Yen-Hao and
Wu, Junlin and
Chen, Yi-Shin",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D18-1404",
doi = "10.18653/v1/D18-1404",
pages = "3687--3697",
abstract = "Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.",
}
```
### Contributions
Thanks to [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun) for adding this dataset.
|
DL3DV/DL3DV-ALL-480P | DL3DV | "2024-09-02T09:32:50Z" | 16,883 | 2 | [
"size_categories:100B<n<1T",
"region:us",
"3D Vision",
"NeRF",
"3D Gaussian",
"Dataset",
"Novel View Synthesis",
"Text to 3D",
"Image to 3D"
] | null | "2024-03-04T14:55:16Z" | ---
tags:
- 3D Vision
- NeRF
- 3D Gaussian
- Dataset
- Novel View Synthesis
- Text to 3D
- Image to 3D
pretty_name: Dl3DV-Dataset
size_categories:
- 100B<n<1T
---
# DL3DV-Dataset
This repo contains all the 480P frames with camera poses from the DL3DV-10K dataset. We are working hard to review the entire dataset to remove sensitive information. Thank you for your patience.
# Download
If you have enough space, you can use git to download the dataset from Hugging Face; see this [link](https://huggingface.co/docs/hub/en/datasets-downloading). The [480P](https://huggingface.co/datasets/DL3DV/DL3DV-ALL-480P)/[960P](https://huggingface.co/datasets/DL3DV/DL3DV-ALL-960P) versions should satisfy most needs.
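For example, a full clone might look like this (a sketch; it assumes `git-lfs` is installed and that you have been granted access to the repository):
```Bash
# Clone the full 480P repository (large download!)
git lfs install
git clone https://huggingface.co/datasets/DL3DV/DL3DV-ALL-480P
```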
If you do not have enough space, we also provide a [download script](https://github.com/DL3DV-10K/Dataset/blob/main/scripts/download.py) to download a subset. Usage:
```Bash
usage: download.py [-h] --odir ODIR --subset {1K,2K,3K,4K,5K,6K,7K,8K,9K,10K} --resolution {4K,2K,960P,480P} --file_type {images+poses,video,colmap_cache} [--hash HASH]
[--clean_cache]
optional arguments:
-h, --help show this help message and exit
--odir ODIR output directory
--subset {1K,2K,3K,4K,5K,6K,7K,8K,9K,10K}
The subset of the benchmark to download
--resolution {4K,2K,960P,480P}
The resolution to download
--file_type {images+poses,video,colmap_cache}
The file type to download
--hash HASH If set subset=hash, this is the hash code of the scene to download
--clean_cache If set, will clean the huggingface cache to save space
```
Here are some examples:
```Bash
# Make sure you have applied for the access.
# Use this to download the download.py script
wget https://raw.githubusercontent.com/DL3DV-10K/Dataset/main/scripts/download.py
# Download 480P resolution images and poses, 0~1K subset, output to DL3DV-10K directory
python download.py --odir DL3DV-10K --subset 1K --resolution 480P --file_type images+poses --clean_cache
# Download 480P resolution images and poses, 1K~2K subset, output to DL3DV-10K directory
python download.py --odir DL3DV-10K --subset 2K --resolution 480P --file_type images+poses --clean_cache
```
You can also download a specific scene with its hash. The scene-hash pair visualization can be found [here](https://htmlpreview.github.io/?https://github.com/DL3DV-10K/Dataset/blob/main/visualize/index.html).
```Bash
# Download 480P resolution images and poses, 1K~2K subset, output to DL3DV-10K directory
python download.py --odir DL3DV-10K --subset 2K --resolution 480P --file_type images+poses --hash e2cedefea8a0ed2d0ffbd5bdc08acbe7e1f85c96f72f7b790e9dfe1c98963047 --clean_cache
```
# News
- [x] DL3DV-1K, 2K, 3K, 4K
- [ ] DL3DV-5K ~ 10K |
applied-ai-018/pretraining_v1-omega_books | applied-ai-018 | "2024-08-05T19:01:31Z" | 16,723 | 0 | [
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-31T08:53:54Z" | ---
dataset_info:
config_name: CC-MAIN-2013-20
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
splits:
- name: train
num_bytes: 235476901236
num_examples: 51901183
download_size: 138494178972
dataset_size: 235476901236
configs:
- config_name: CC-MAIN-2013-20
data_files:
- split: train
path: CC-MAIN-2013-20/train-*
---
|
bigcode/the-stack-v2-train-smol-ids | bigcode | "2024-04-23T16:03:46Z" | 16,634 | 26 | [
"task_categories:text-generation",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"language:code",
"license:other",
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2402.19173",
"arxiv:2107.03374",
"arxiv:2207.14157",
"region:us"
] | [
"text-generation"
] | "2024-02-27T11:49:09Z" | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- other
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids: []
pretty_name: The-Stack-v2
extra_gated_prompt: "## Terms of Use for The Stack v2\n\nThe Stack v2 dataset is a\
\ collection of source code in over 600 programming languages. We ask that you read\
\ and acknowledge the following points before using the dataset:\n1. Downloading\
\ the dataset in bulk requires an agreement with SoftwareHeritage and INRIA. Contact\
\ [datasets@softwareheritage.org](mailto:datasets@softwareheritage.org?subject=TheStackV2%20request%20for%20dataset%20access%20information)\
\ for more information.\n2. If you are using the dataset to train models you must\
\ adhere to the SoftwareHeritage [principles for language model training](https://www.softwareheritage.org/2023/10/19/swh-statement-on-llm-for-code/).\n\
3. The Stack v2 is a collection of source code from repositories with various licenses.\
\ Any use of all or part of the code gathered in The Stack v2 must abide by the\
\ terms of the original licenses, including attribution clauses when relevant. We\
\ facilitate this by providing provenance information for each data point.\n4. The\
\ Stack v2 is regularly updated to enact validated data removal requests. By clicking\
\ on \"Access repository\", you agree to update your own version of The Stack v2\
\ to the most recent usable version.\n\nBy clicking on \"Access repository\" below,\
\ you accept that your contact information (email address and username) can be shared\
\ with the dataset maintainers as well.\n "
extra_gated_fields:
Email: text
I have read the License and agree with its terms: checkbox
dataset_info:
features:
- name: repo_name
dtype: string
- name: repo_url
dtype: string
- name: snapshot_id
dtype: string
- name: revision_id
dtype: string
- name: directory_id
dtype: string
- name: branch_name
dtype: string
- name: visit_date
dtype: timestamp[ns]
- name: revision_date
dtype: timestamp[ns]
- name: committer_date
dtype: timestamp[ns]
- name: github_id
dtype: int64
- name: star_events_count
dtype: int64
- name: fork_events_count
dtype: int64
- name: gha_license_id
dtype: string
- name: gha_created_at
dtype: timestamp[ns]
- name: gha_updated_at
dtype: timestamp[ns]
- name: gha_pushed_at
dtype: timestamp[ns]
- name: gha_language
dtype: string
- name: files
list:
- name: blob_id
dtype: string
- name: path
dtype: string
- name: content_id
dtype: string
- name: language
dtype: string
- name: length_bytes
dtype: int64
- name: detected_licenses
sequence: string
- name: license_type
dtype: string
- name: src_encoding
dtype: string
- name: is_vendor
dtype: bool
- name: is_generated
dtype: bool
- name: alphanum_fraction
dtype: float32
- name: alpha_fraction
dtype: float32
- name: num_lines
dtype: int32
- name: avg_line_length
dtype: float32
- name: max_line_length
dtype: int32
- name: num_files
dtype: int64
splits:
- name: train
num_bytes: 93623832913.11467
num_examples: 40138809
download_size: 59322439587
dataset_size: 93623832913.11467
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# The Stack v2
<center>
<img src="https://huggingface.co/datasets/bigcode/admin_private/resolve/main/thestackv2_banner.png" alt="Stackv2" width="900" height="600">
</center>
## Dataset Description
- **Homepage:** https://www.bigcode-project.org/
- **Repository:** https://github.com/bigcode-project
- **Paper:** [Link](https://huggingface.co/papers/2402.19173)
- **Point of Contact:** contact@bigcode-project.org
The dataset consists of 4 versions:
- [`bigcode/the-stack-v2`](https://huggingface.co/datasets/bigcode/the-stack-v2): the full "The Stack v2" dataset
- [`bigcode/the-stack-v2-dedup`](https://huggingface.co/datasets/bigcode/the-stack-v2-dedup): based on the `bigcode/the-stack-v2` but further near-deduplicated
- [`bigcode/the-stack-v2-train-full-ids`](https://huggingface.co/datasets/bigcode/the-stack-v2-train-full-ids): based on the `bigcode/the-stack-v2-dedup` dataset but further filtered with heuristics and spanning 600+ programming languages. The data is grouped into repositories.
- [`bigcode/the-stack-v2-train-smol-ids`](https://huggingface.co/datasets/bigcode/the-stack-v2-train-smol-ids): based on the `bigcode/the-stack-v2-dedup` dataset but further filtered with heuristics and spanning 17 programming languages. The data is grouped into repositories. **<-- you are here**
**These datasets only contain the SWHIDs needed to download the code files, not the content of the files themselves. See the examples below for how to download the content. We are working on making the training datasets available in the coming weeks.**
The Stack v2 is significantly larger than v1:
||The Stack v1|The Stack v2|
|-|-|-|
| full | 6.4TB | 67.5TB |
| dedup | 2.9TB | 32.1TB |
| train (full) | ~200B tokens | ~900B tokens |
### Changelog
|Release|Description|
|-|-|
| v2.1.0 | Removed repositories that opted out before 2024-04-09. Removed unreachable/private repositories (according to SWH) |
| v2.0.1 | Version bump without modifications to the dataset. StarCoder2 was trained on this version |
| v2.0 | Initial release of the Stack v2 |
### Dataset Summary
The Stack v2 contains over 3B files in 600+ programming and markup languages. The dataset was created as part of the [BigCode Project](https://www.bigcode-project.org/), an open scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs). The Stack serves as a pre-training dataset for Code LLMs, i.e., code-generating AI systems which enable the synthesis of programs from natural language descriptions as well as from other code snippets.
This dataset is derived from the Software Heritage archive, the largest public archive of software source code and accompanying development history. Software Heritage is an open, non-profit initiative to collect, preserve, and share the source code of all publicly available software, launched by Inria, in partnership with UNESCO. We acknowledge Software Heritage for providing access to this invaluable resource. For more details, visit the [Software Heritage website](https://www.softwareheritage.org).
### Languages
The `smol` dataset contains 39 languages.
```
Ant Build System, AsciiDoc, C, C#, C++, CMake, Dockerfile, Go, Go Module, Gradle, Groovy, HTML, INI, Java, Java Properties, JavaScript, JSON, JSON with Comments, Kotlin, Lua, M4Sugar, Makefile, Markdown, Maven POM, PHP, Python, R, RDoc, reStructuredText, RMarkdown, Ruby, Rust, Shell, SQL, Swift, Text, TOML, TypeScript, YAML
```
### How to use it
```python
from datasets import load_dataset
# full dataset (file IDs only)
ds = load_dataset("bigcode/the-stack-v2-train-smol-ids", split="train")
# dataset streaming (will only download the data as needed)
ds = load_dataset("bigcode/the-stack-v2-train-smol-ids", streaming=True, split="train")
for sample in iter(ds):
print(sample)
```
#### Downloading the file contents
The file contents are stored in the Software Heritage S3 bucket to ensure data compliance. Downloading data in bulk requires an agreement with SoftwareHeritage and INRIA as stated in the dataset agreement.
Make sure to configure your environment with your [AWS credentials](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/configure/index.html#examples).
```bash
pip install smart_open[s3]
```
```python
import os
import boto3
from smart_open import open
from datasets import load_dataset
session = boto3.Session(
aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"])
s3 = session.client("s3")
def download_contents(files):
for file in files:
s3_url = f"s3://softwareheritage/content/{file['blob_id']}"
with open(s3_url, "rb", compression=".gz", transport_params={"client": s3}) as fin:
file["content"] = fin.read().decode(file["src_encoding"])
return {"files": files}
ds = load_dataset("bigcode/the-stack-v2-train-smol-ids", split="train", streaming=True)
ds = ds.map(lambda row: download_contents(row["files"]))
for row in ds:
for file in row["files"]:
print(file["content"])
break
```
## Dataset Structure
### Data Fields
* `blob_id` (`string`): Software Heritage (SWH) ID of the file on AWS S3.
* `directory_id` (`string`): SWH ID of the root directory of the repository.
* `path` (`string`): The file path within the repository.
* `content_id` (`string`): SWH content ID.
* `detected_licenses` (`string[]`): List of licenses (SPDX) detected by ScanCode.
* `license_type` (`string`): Inferred license type (`permissive` or `no_license`).
* `repo_name` (`string`): Repository name on GitHub.
* `snapshot_id` (`string`): SWH snapshot ID.
* `revision_id` (`string`): SWH revision (commit) ID.
* `branch_name` (`string`): Repository branch name.
* `visit_date` (`timestamp[ns]`): SWH crawl (snapshot) timestamp.
* `revision_date` (`timestamp[ns]`): SWH revision (commit) timestamp.
* `committer_date` (`timestamp[ns]`): SWH revision (commit) timestamp reported by the committer.
* `github_id` (`int64`): GitHub identifier for the repository.
* `star_events_count` (`int64`): number of stars calculated from GHArchive events.
* `fork_events_count` (`int64`): number of forks calculated from GHArchive events.
* `gha_license_id` (`string`): GHArchive SPDX license identifier, `None` if the repo is missing.
* `gha_event_created_at` (`timestamp[ns]`): Timestamp of the latest event on GHArchive for this repository.
* `gha_created_at` (`timestamp[ns]`): Timestamp of repository creation on GitHub, `None` if the repo is missing.
* `gha_language` (`string`): Repository's primary programming language on GitHub, `None` if the repo is missing.
* `src_encoding` (`string`): Original encoding of the file content before converting to UTF-8.
* `language` (`string`): Programming language of the file, detected by `go-enry / linguist`.
* `is_vendor` (`bool`): Indicator of vendor file (external library), detected by `go-enry`.
* `is_generated` (`bool`): Indicator of generated file (external library), detected by `go-enry`.
* `length_bytes` (`int64`): Length of the file content in UTF-8 bytes.
* `extension` (`string`): File extension.
### Data Splits
The dataset has no splits and all data is loaded as the train split by default. If you want to set up a custom train-test split, beware that the dataset contains a lot of near-duplicates, which can cause leakage into the test split.
## Dataset Creation
For more information on the dataset creation pipeline please refer to the [technical report](https://huggingface.co/papers/2402.19173).
### Curation Rationale
One of the challenges faced by researchers working on code LLMs is the lack of openness and transparency around the development of these systems. Most prior works described the high-level data collection process but did not release the training data. It is therefore difficult for other researchers to fully reproduce these models and understand what kind of pre-training data leads to high-performing code LLMs. By releasing an open large-scale code dataset we hope to make training of code LLMs more reproducible.
### Source Data
#### Data Collection
3.28B unique files belonging to 104.2M GitHub repositories were collected by traversing the Software Heritage [2023-09-06](https://docs.softwareheritage.org/devel/swh-dataset/graph/dataset.html#graph-dataset-2023-09-06) graph dataset.
Additional repository-level metadata was collected from [GitHub Archive](https://www.gharchive.org/) data up to 2023-09-14.
The total uncompressed size of all files is 67.53TB.
Near-deduplication was implemented in the pre-processing pipeline on top of exact deduplication.
Roughly 40% of permissively licensed files were (near-)duplicates.
The following are not stored:
* Files that cannot contribute to training code: binary, empty, could not be decoded
* Files larger than 10MB
**Training Datasets**: For the training datasets, the programming languages were filtered further to 17 and 600+ for the `the-stack-v2-smol-ids` and `the-stack-v2-full-ids` datasets, respectively. In addition, heuristics were applied to further increase the quality of the dataset. The code files are also grouped into repositories to allow pretraining with full repository context. For more details see the [technical report](https://huggingface.co/papers/2402.19173).
##### License detection
We extract repository-level license information from [GH Archive](https://www.gharchive.org/) for all repositories with matching names in the SWH dataset.
When the repo-level license is not available, i.e., for 96.93\% of repositories, we use the [ScanCode Toolkit](https://github.com/nexB/scancode-toolkit) to detect file-level licenses as follows:
* Find all filenames that could contain a license (e.g., LICENSE, MIT.txt, Apache2.0) or contain a reference to the license (e.g., README.md, GUIDELINES);
* Apply ScanCode's license detection to the matching files and gather the SPDX IDs of the detected licenses;
* Propagate the detected licenses to all files that have the same base path within the repository as the license file (a sketch of this step follows below).
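A minimal sketch of that propagation step (illustrative only, not the actual pipeline code; it assumes a hypothetical `detected` mapping from license-file paths to SPDX IDs, and interprets "same base path" as "under the license file's directory"):
```python
import os

def propagate_licenses(file_paths, detected):
    """Propagate licenses from license files to files under the same base path.

    file_paths: repo-relative paths, e.g. ["src/a.py", "vendor/x.c"]
    detected:   hypothetical mapping from license-file path to SPDX IDs,
                e.g. {"LICENSE": ["MIT"], "vendor/LICENSE.txt": ["Apache-2.0"]}
    """
    result = {p: [] for p in file_paths}
    for lic_path, spdx_ids in detected.items():
        base = os.path.dirname(lic_path)  # "" for repo-root license files
        for p in file_paths:
            if os.path.dirname(p).startswith(base):
                result[p].extend(spdx_ids)
    return result
```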
The licenses we consider permissive are listed [here](https://huggingface.co/datasets/bigcode/the-stack-v2/blob/main/license_stats.csv).
This list was compiled from the licenses approved by the [Blue Oak Council](https://blueoakcouncil.org/list),
as well as licenses categorized as "Permissive" or "Public Domain" by [ScanCode](https://scancode-licensedb.aboutcode.org/).
#### Who are the source language producers?
The source (code) language producers are users of GitHub that created unique repository names up until 2023-09-06 (cutoff date).
### Personal and Sensitive Information
The released dataset may contain sensitive information such as emails, IP addresses, and API/ssh keys that have previously been published to public repositories on GitHub. Deduplication has helped to reduce the amount of sensitive data that may exist. In the event that the dataset contains personal information, researchers should only use public, non-personal information in support of conducting and publishing their [open-access](https://en.wikipedia.org/wiki/Open_access) research. Personal information should not be used for spamming purposes, including sending unsolicited emails or selling of personal information. Complaints, removal requests, and "do not contact" requests can be sent to contact@bigcode-project.org.
### Opting out of The Stack v2
We are giving developers the ability to have their code removed from the dataset upon request. The process for submitting and enacting removal requests will keep evolving throughout the project as we receive feedback and build up more data governance tools.
You can check if your code is in The Stack v2 with the following ["Am I In The Stack?" Space](https://huggingface.co/spaces/bigcode/in-the-stack). If you'd like to have your data removed from the dataset follow the [instructions on GitHub](https://github.com/bigcode-project/opt-out-v2).
## Considerations for Using the Data
### Social Impact of Dataset
The Stack v2 is an output of the BigCode Project. BigCode aims to be responsible by design and by default. The project is conducted in the spirit of Open Science, focused on the responsible development of LLMs for code.
With the release of The Stack v2, we aim to increase access, reproducibility, and transparency of code LLMs in the research community. Work to de-risk and improve on the implementation of ethical best practices of code LLMs is conducted in various BigCode working groups. The Legal, Ethics, and Governance working group has explored topics such as licensing (including copyleft and the intended use of permissively licensed code), attribution of generated code to original code, rights to restrict processing, the inclusion of Personally Identifiable Information (PII), and risks of malicious code, among other topics. This work is ongoing as of October 25th, 2022.
We expect code LLMs to enable people from diverse backgrounds to write higher quality code and develop low-code applications. Mission-critical software could become easier to maintain as professional developers are guided by code-generating systems on how to write more robust and efficient code. While the social impact is intended to be positive, the increased accessibility of code LLMs comes with certain risks such as over-reliance on the generated code and long-term effects on the software development job market.
A broader impact analysis relating to Code LLMs can be found in section 7 of this [paper](https://arxiv.org/abs/2107.03374). An in-depth risk assessments for Code LLMs can be found in section 4 of this [paper](https://arxiv.org/abs/2207.14157).
### Discussion of Biases
The code collected from GitHub does not contain demographic information or proxy information about the demographics. However, it is not without risks,
as the comments within the code may contain harmful or offensive language, which could be learned by the models.
Widely adopted programming languages like C and Javascript are overrepresented compared to niche programming languages like Julia and Scala. Some programming languages such as SQL, Batchfile, TypeScript are less likely to be permissively licensed (4% vs the average 10%). This may result in a biased representation of those languages. Permissively licensed files also tend to be longer.
The majority of natural language present in code from GitHub is English.
### Other Known Limitations
One of the current limitations of The Stack v2 is that scraped HTML for websites may not be compliant with Web Content Accessibility Guidelines ([WCAG](https://www.w3.org/WAI/standards-guidelines/wcag/)). This could have an impact on HTML-generated code that may introduce web accessibility issues.
The training dataset could contain malicious code and/or the model could be used to generate malware or ransomware.
To the best of our knowledge, all files contained in the dataset are licensed with one of the permissive licenses (see list in [Licensing information](#licensing-information)) or no license.
The accuracy of license attribution is limited by the accuracy of GHArchive and ScanCode Toolkit.
Any mistakes should be reported to BigCode Project for review and follow-up as needed.
## Additional Information
### Dataset Curators
1. Harm de Vries, ServiceNow Research, harm.devries@servicenow.com
2. Leandro von Werra, Hugging Face, leandro@huggingface.co
### Licensing Information
The Stack v2 is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack v2 must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
The list of [SPDX license identifiers](https://spdx.org/licenses/) included in the dataset can be found [here](https://huggingface.co/datasets/bigcode/the-stack-v2/blob/main/license_stats.csv).
### Citation Information
```bash
@misc{lozhkov2024starcoder,
title={StarCoder 2 and The Stack v2: The Next Generation},
author={Anton Lozhkov and Raymond Li and Loubna Ben Allal and Federico Cassano and Joel Lamy-Poirier and Nouamane Tazi and Ao Tang and Dmytro Pykhtar and Jiawei Liu and Yuxiang Wei and Tianyang Liu and Max Tian and Denis Kocetkov and Arthur Zucker and Younes Belkada and Zijian Wang and Qian Liu and Dmitry Abulkhanov and Indraneil Paul and Zhuang Li and Wen-Ding Li and Megan Risdal and Jia Li and Jian Zhu and Terry Yue Zhuo and Evgenii Zheltonozhskii and Nii Osae Osae Dade and Wenhao Yu and Lucas Krauß and Naman Jain and Yixuan Su and Xuanli He and Manan Dey and Edoardo Abati and Yekun Chai and Niklas Muennighoff and Xiangru Tang and Muhtasham Oblokulov and Christopher Akiki and Marc Marone and Chenghao Mou and Mayank Mishra and Alex Gu and Binyuan Hui and Tri Dao and Armel Zebaze and Olivier Dehaene and Nicolas Patry and Canwen Xu and Julian McAuley and Han Hu and Torsten Scholak and Sebastien Paquet and Jennifer Robinson and Carolyn Jane Anderson and Nicolas Chapados and Mostofa Patwary and Nima Tajbakhsh and Yacine Jernite and Carlos Muñoz Ferrandis and Lingming Zhang and Sean Hughes and Thomas Wolf and Arjun Guha and Leandro von Werra and Harm de Vries},
year={2024},
eprint={2402.19173},
archivePrefix={arXiv},
primaryClass={cs.SE}
}
``` |
ai4bharat/sangraha | ai4bharat | "2024-10-21T09:33:54Z" | 16,553 | 31 | [
"task_categories:text-generation",
"language:as",
"language:bn",
"language:gu",
"language:en",
"language:hi",
"language:kn",
"language:ks",
"language:ml",
"language:mr",
"language:ne",
"language:or",
"language:pa",
"language:sa",
"language:sd",
"language:ta",
"language:te",
"language:ur",
"license:cc-by-4.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2403.06350",
"region:us",
"language-modeling",
"casual-lm",
"llm"
] | [
"text-generation"
] | "2024-03-05T10:55:09Z" | ---
license: cc-by-4.0
task_categories:
- text-generation
language:
- as
- bn
- gu
- en
- hi
- kn
- ks
- ml
- mr
- ne
- or
- pa
- sa
- sd
- ta
- te
- ur
tags:
- language-modeling
- casual-lm
- llm
pretty_name: sangraha
dataset_info:
- config_name: verified
features:
- name: doc_id
dtype: string
- name: type
dtype: string
- name: text
dtype: string
splits:
- name: asm
- name: ben
- name: brx
- name: doi
- name: eng
- name: gom
- name: guj
- name: hin
- name: kan
- name: kas
- name: mai
- name: mal
- name: mar
- name: mni
- name: nep
- name: ori
- name: pan
- name: san
- name: sat
- name: snd
- name: tam
- name: tel
- name: urd
- config_name: unverified
features:
- name: doc_id
dtype: string
- name: text
dtype: string
splits:
- name: asm
- name: ben
- name: guj
- name: hin
- name: kan
- name: mal
- name: mar
- name: nep
- name: ori
- name: pan
- name: san
- name: tam
- name: tel
- name: urd
- config_name: synthetic
features:
- name: doc_id
dtype: string
- name: text
dtype: string
splits:
- name: asm_Beng
- name: asm_Latn
- name: ben_Beng
- name: ben_Latn
- name: guj_Gujr
- name: guj_Latn
- name: hin_Deva
- name: hin_Latn
- name: kan_Knda
- name: kan_Latn
- name: mal_Mlym
- name: mal_Latn
- name: mar_Deva
- name: mar_Latn
- name: npi_Deva
- name: npi_Latn
- name: ory_Orya
- name: ory_Latn
- name: pan_Guru
- name: pan_Latn
- name: san_Deva
- name: san_Latn
- name: tam_Taml
- name: tam_Latn
- name: tel_Telu
- name: tel_Latn
- name: urd_Arab
- name: urd_Latn
configs:
- config_name: verified
data_files:
- split: asm
path: verified/asm/*.parquet
- split: ben
path: verified/ben/*.parquet
- split: brx
path: verified/brx/*.parquet
- split: doi
path: verified/doi/*.parquet
- split: eng
path: verified/eng/*.parquet
- split: gom
path: verified/gom/*.parquet
- split: guj
path: verified/guj/*.parquet
- split: hin
path: verified/hin/*.parquet
- split: kan
path: verified/kan/*.parquet
- split: kas
path: verified/kas/*.parquet
- split: mai
path: verified/mai/*.parquet
- split: mal
path: verified/mal/*.parquet
- split: mar
path: verified/mar/*.parquet
- split: mni
path: verified/mni/*.parquet
- split: nep
path: verified/nep/*.parquet
- split: ori
path: verified/ori/*.parquet
- split: pan
path: verified/pan/*.parquet
- split: san
path: verified/san/*.parquet
- split: sat
path: verified/sat/*.parquet
- split: snd
path: verified/snd/*.parquet
- split: tam
path: verified/tam/*.parquet
- split: tel
path: verified/tel/*.parquet
- split: urd
path: verified/urd/*.parquet
- config_name: unverified
data_files:
- split: asm
path: unverified/asm/*.parquet
- split: ben
path: unverified/ben/*.parquet
- split: guj
path: unverified/guj/*.parquet
- split: hin
path: unverified/hin/*.parquet
- split: kan
path: unverified/kan/*.parquet
- split: mal
path: unverified/mal/*.parquet
- split: mar
path: unverified/mar/*.parquet
- split: nep
path: unverified/nep/*.parquet
- split: ori
path: unverified/ori/*.parquet
- split: pan
path: unverified/pan/*.parquet
- split: san
path: unverified/san/*.parquet
- split: tam
path: unverified/tam/*.parquet
- split: tel
path: unverified/tel/*.parquet
- split: urd
path: unverified/urd/*.parquet
- config_name: synthetic
data_files:
- split: asm_Beng
path: synthetic/asm_Beng/*.parquet
- split: asm_Latn
path: synthetic/asm_Latn/*.parquet
- split: ben_Beng
path: synthetic/ben_Beng/*.parquet
- split: ben_Latn
path: synthetic/ben_Latn/*.parquet
- split: guj_Gujr
path: synthetic/guj_Gujr/*.parquet
- split: guj_Latn
path: synthetic/guj_Latn/*.parquet
- split: hin_Deva
path: synthetic/hin_Deva/*.parquet
- split: hin_Latn
path: synthetic/hin_Latn/*.parquet
- split: kan_Knda
path: synthetic/kan_Knda/*.parquet
- split: kan_Latn
path: synthetic/kan_Latn/*.parquet
- split: mal_Mlym
path: synthetic/mal_Mlym/*.parquet
- split: mal_Latn
path: synthetic/mal_Latn/*.parquet
- split: mar_Deva
path: synthetic/mar_Deva/*.parquet
- split: mar_Latn
path: synthetic/mar_Latn/*.parquet
- split: npi_Deva
path: synthetic/npi_Deva/*.parquet
- split: npi_Latn
path: synthetic/npi_Latn/*.parquet
- split: ory_Orya
path: synthetic/ory_Orya/*.parquet
- split: ory_Latn
path: synthetic/ory_Latn/*.parquet
- split: pan_Guru
path: synthetic/pan_Guru/*.parquet
- split: pan_Latn
path: synthetic/pan_Latn/*.parquet
- split: san_Deva
path: synthetic/san_Deva/*.parquet
- split: san_Latn
path: synthetic/san_Latn/*.parquet
- split: tam_Taml
path: synthetic/tam_Taml/*.parquet
- split: tam_Latn
path: synthetic/tam_Latn/*.parquet
- split: tel_Telu
path: synthetic/tel_Telu/*.parquet
- split: tel_Latn
path: synthetic/tel_Latn/*.parquet
- split: urd_Arab
path: synthetic/urd_Arab/*.parquet
- split: urd_Latn
path: synthetic/urd_Latn/*.parquet
size_categories:
- 100B<n<1T
---
# Sangraha
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63ef3cd11e695b35aa48bebc/nDnyidcqIOLAP9dTw9GrK.png" />
</p>
Sangraha is the largest high-quality, cleaned Indic-language pretraining dataset, containing 251B tokens across 22 languages, extracted from curated sources, existing multilingual corpora and large-scale translations.
**Coming Soon**:
- Sangraha Synthetic - Translated and Romanised English Wikimedia data.
- Sangraha Verified - Hindi YouTube transcribed data.
**More information**:
- For detailed information on the curation and cleaning process of Sangraha, please check out our paper [on Arxiv](https://arxiv.org/abs/2403.06350);
- Check out the scraping and cleaning pipelines used to curate Sangraha [on GitHub](https://github.com/AI4Bharat/IndicLLMSuite);
## Getting Started
For downloading the entire Sangraha:
```python
from datasets import load_dataset
dataset = load_dataset("ai4bharat/sangraha")
```
For downloading a subset (Verified/Unverified) of Sangraha:
```python
from datasets import load_dataset
dataset = load_dataset("ai4bharat/sangraha", data_dir="<subset_name>")
# for example: dataset = load_dataset("ai4bharat/sangraha", data_dir="verified")
```
For downloading one language from a subset of Sangraha:
```python
from datasets import load_dataset
dataset = load_dataset("ai4bharat/sangraha", data_dir="<subset_name>/<lang_code>")
# for example: dataset = load_dataset("ai4bharat/sangraha", data_dir="verified/asm")
```
## Background
Sangraha contains three broad components:
- **Sangraha Verified**: Scraped data from "human-verified" websites, OCR-extracted data from high-quality Indic-language PDFs, and transcribed data from various Indic-language videos, podcasts, movies, courses, etc.
- **Sangraha Unverified**: High-quality Indic-language data extracted from existing multilingual corpora, employing perplexity filtering with n-gram language models trained on Sangraha Verified.
- **Sangraha Synthetic**: WikiMedia English translated into 14 Indic languages and further "romanised" by transliterating those 14 languages into the Latin script.
## Data Statistics
| **Lang Code** | **Verified** | **Synthetic** | **Unverified** | **Total Tokens (in Millions)** |
| ------------- | ------------ | ------------- | -------------- | ------------------------------ |
| asm | 292.1 | 11,696.4 | 17.5 | 12,006.0 |
| ben | 10,604.4 | 13,814.1 | 5,608.8 | 30,027.5 |
| brx | 1.5 | - | - | 1.5 |
| doi | 0.06 | - | - | 0.06 |
| eng | 12,759.9 | - | - | 12,759.9 |
| gom | 10.1 | - | - | 10.1 |
| guj | 3,647.9 | 12,934.5 | 597.0 | 17,179.4 |
| hin | 12,617.3 | 9,578.7 | 12,348.3 | 34,544.3 |
| kan | 1,778.3 | 12,087.4 | 388.8 | 14,254.5 |
| kas | 0.5 | - | - | 0.5 |
| mai | 14.6 | - | - | 14.6 |
| mal | 2,730.8 | 13,130.0 | 547.8 | 16,408.6 |
| mar | 2,827.0 | 10,816.7 | 652.1 | 14,295.8 |
| mni | 7.4 | - | - | 7.4 |
| npi | 1,822.5 | 10,588.7 | 485.5 | 12,896.7 |
| ori | 1,177.1 | 11,338.0 | 23.7 | 12,538.8 |
| pan | 1,075.3 | 9,969.6 | 136.9 | 11,181.8 |
| san | 1,329.0 | 13,553.5 | 9.8 | 14,892.3 |
| sat | 0.3 | - | - | 0.3 |
| snd | 258.2 | - | - | 258.2 |
| tam | 3,985.1 | 11,859.3 | 1,515.9 | 17,360.3 |
| urd | 3,658.1 | 9,415.8 | 1,328.2 | 14,402.1 |
| tel | 3,706.8 | 11,924.5 | 647.4 | 16,278.7 |
| **Total** | **64,306.1** | **162,707.9** | **24,307.7** | **251,321.0** |
To cite Sangraha, please use:
```
@article{khan2024indicllmsuite,
title = {IndicLLMSuite: A Blueprint for Creating Pre-training and Fine-Tuning Datasets for Indian Languages},
author = {Mohammed Safi Ur Rahman Khan and Priyam Mehta and Ananth Sankar and Umashankar Kumaravelan and Sumanth Doddapaneni and Suriyaprasaad G and Varun Balan G and Sparsh Jain and Anoop Kunchukuttan and Pratyush Kumar and Raj Dabre and Mitesh M. Khapra},
year = {2024},
journal = {arXiv preprint arXiv: 2403.06350}
}
```
|
hendrycks/competition_math | hendrycks | "2023-06-08T06:40:09Z" | 16,522 | 134 | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"arxiv:2103.03874",
"region:us",
"explanation-generation"
] | [
"text2text-generation"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: Mathematics Aptitude Test of Heuristics (MATH)
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
tags:
- explanation-generation
dataset_info:
features:
- name: problem
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 5984788
num_examples: 7500
- name: test
num_bytes: 3732575
num_examples: 5000
download_size: 20327424
dataset_size: 9717363
---
# Dataset Card for Mathematics Aptitude Test of Heuristics (MATH) dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/hendrycks/math
- **Repository:** https://github.com/hendrycks/math
- **Paper:** https://arxiv.org/pdf/2103.03874.pdf
- **Leaderboard:** N/A
- **Point of Contact:** Dan Hendrycks
### Dataset Summary
The Mathematics Aptitude Test of Heuristics (MATH) dataset consists of problems
from mathematics competitions, including the AMC 10, AMC 12, AIME, and more.
Each problem in MATH has a full step-by-step solution, which can be used to teach
models to generate answer derivations and explanations.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
A data instance consists of a competition math problem and its step-by-step solution written in LaTeX and natural language. The step-by-step solution contains the final answer enclosed in LaTeX's `\boxed` tag.
An example from the dataset is:
```
{'problem': 'A board game spinner is divided into three parts labeled $A$, $B$ and $C$. The probability of the spinner landing on $A$ is $\\frac{1}{3}$ and the probability of the spinner landing on $B$ is $\\frac{5}{12}$. What is the probability of the spinner landing on $C$? Express your answer as a common fraction.',
'level': 'Level 1',
'type': 'Counting & Probability',
'solution': 'The spinner is guaranteed to land on exactly one of the three regions, so we know that the sum of the probabilities of it landing in each region will be 1. If we let the probability of it landing in region $C$ be $x$, we then have the equation $1 = \\frac{5}{12}+\\frac{1}{3}+x$, from which we have $x=\\boxed{\\frac{1}{4}}$.'}
```
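Because the final answer sits inside `\boxed{...}` and may itself contain nested braces (as in `\boxed{\frac{1}{4}}` above), a plain regex is not enough; a small brace-matching pass is needed. A minimal sketch:
```python
def extract_boxed_answer(solution):
    """Return the contents of the last \\boxed{...} in a solution string."""
    marker = r"\boxed{"
    start = solution.rfind(marker)
    if start == -1:
        return None
    i, depth = start + len(marker), 1
    for j in range(i, len(solution)):
        if solution[j] == "{":
            depth += 1
        elif solution[j] == "}":
            depth -= 1
            if depth == 0:
                return solution[i:j]
    return None  # unbalanced braces

# e.g. extract_boxed_answer(instance["solution"]) -> "\\frac{1}{4}"
```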
### Data Fields
* `problem`: The competition math problem.
* `solution`: The step-by-step solution.
* `level`: The problem's difficulty level from 'Level 1' to 'Level 5', where a subject's easiest problems for humans are assigned to 'Level 1' and a subject's hardest problems are assigned to 'Level 5'.
* `type`: The subject of the problem: Algebra, Counting & Probability, Geometry, Intermediate Algebra, Number Theory, Prealgebra and Precalculus.
### Data Splits
* train: 7,500 examples
* test: 5,000 examples
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
https://github.com/hendrycks/math/blob/main/LICENSE
### Citation Information
```bibtex
@article{hendrycksmath2021,
title={Measuring Mathematical Problem Solving With the MATH Dataset},
author={Dan Hendrycks
and Collin Burns
and Saurav Kadavath
and Akul Arora
and Steven Basart
and Eric Tang
and Dawn Song
and Jacob Steinhardt},
journal={arXiv preprint arXiv:2103.03874},
year={2021}
}
```
### Contributions
Thanks to [@hacobe](https://github.com/hacobe) for adding this dataset. |
lmms-lab/Video-MME | lmms-lab | "2024-07-04T08:14:20Z" | 16,497 | 30 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-06-07T12:06:37Z" | ---
dataset_info:
config_name: videomme
features:
- name: video_id
dtype: string
- name: duration
dtype: string
- name: domain
dtype: string
- name: sub_category
dtype: string
- name: url
dtype: string
- name: videoID
dtype: string
- name: question_id
dtype: string
- name: task_type
dtype: string
- name: question
dtype: string
- name: options
sequence: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1003241.0
num_examples: 2700
download_size: 405167
dataset_size: 1003241.0
configs:
- config_name: videomme
data_files:
- split: test
path: videomme/test-*
---
|
agkphysics/AudioSet | agkphysics | "2024-02-03T12:09:42Z" | 16,493 | 35 | [
"task_categories:audio-classification",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"modality:audio",
"region:us",
"audio"
] | [
"audio-classification"
] | "2023-06-14T08:17:23Z" | ---
language:
- en
license: cc-by-4.0
size_categories:
- 10K<n<100K
- 1M<n<10M
source_datasets:
- original
task_categories:
- audio-classification
paperswithcode_id: audioset
pretty_name: AudioSet
config_names:
- balanced
- unbalanced
tags:
- audio
dataset_info:
- config_name: balanced
features:
- name: video_id
dtype: string
- name: audio
dtype: audio
- name: labels
sequence: string
- name: human_labels
sequence: string
splits:
- name: train
num_bytes: 26016210987
num_examples: 18685
- name: test
num_bytes: 23763682278
num_examples: 17142
download_size: 49805654900
dataset_size: 49779893265
- config_name: unbalanced
features:
- name: video_id
dtype: string
- name: audio
dtype: audio
- name: labels
sequence: string
- name: human_labels
sequence: string
splits:
- name: train
num_bytes: 2408656417541
num_examples: 1738788
- name: test
num_bytes: 23763682278
num_examples: 17142
download_size: 2433673104977
dataset_size: 2432420099819
---
# Dataset Card for AudioSet
## Dataset Description
- **Homepage**: https://research.google.com/audioset/index.html
- **Paper**: https://storage.googleapis.com/gweb-research2023-media/pubtools/pdf/45857.pdf
- **Leaderboard**: https://paperswithcode.com/sota/audio-classification-on-audioset
### Dataset Summary
[AudioSet](https://research.google.com/audioset/dataset/index.html) is a
dataset of 10-second clips from YouTube, annotated into one or more
sound categories, following the AudioSet ontology.
### Supported Tasks and Leaderboards
- `audio-classification`: Classify audio clips into categories. The
leaderboard is available
[here](https://paperswithcode.com/sota/audio-classification-on-audioset)
### Languages
The class labels in the dataset are in English.
## Dataset Structure
### Data Instances
Example instance from the dataset:
```python
{
'video_id': '--PJHxphWEs',
'audio': {
'path': 'audio/bal_train/--PJHxphWEs.flac',
'array': array([-0.04364824, -0.05268681, -0.0568949 , ..., 0.11446512,
0.14912748, 0.13409865]),
'sampling_rate': 48000
},
'labels': ['/m/09x0r', '/t/dd00088'],
'human_labels': ['Speech', 'Gush']
}
```
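For orientation, here is a short sketch of how the `balanced` configuration might be loaded with the `datasets` library. Streaming is used because the full archives are tens of gigabytes, and decoding the FLAC audio additionally requires the `soundfile` backend to be installed:

```python
from datasets import load_dataset

# Stream the balanced evaluation split instead of downloading ~24 GB up front.
# Depending on your `datasets` version, script-based datasets may also need
# trust_remote_code=True.
ds = load_dataset("agkphysics/AudioSet", "balanced", split="test", streaming=True)

example = next(iter(ds))
print(example["video_id"], example["human_labels"])

audio = example["audio"]  # decoded lazily on access
print(audio["sampling_rate"], len(audio["array"]))
```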
### Data Fields
Instances have the following fields:
- `video_id`: a `string` feature containing the original YouTube ID.
- `audio`: an `Audio` feature containing the audio data and sample rate.
- `labels`: a sequence of `string` features containing the labels
associated with the audio clip.
- `human_labels`: a sequence of `string` features containing the
human-readable forms of the same labels as in `labels`.
### Data Splits
The distribution of audio clips is as follows:
#### `balanced` configuration
| |train|test |
|-----------|----:|----:|
|# instances|18685|17142|
#### `unbalanced` configuration
| |train |test |
|-----------|------:|----:|
|# instances|1738788|17142|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
The labels are from the AudioSet ontology. Audio clips are from YouTube.
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
1. The YouTube videos in this copy of AudioSet were downloaded in March
2023, so not all of the original audio clips are still available. The number
of clips that could be downloaded is as follows:
- Balanced train: 18685 audio clips out of 22160 originally.
- Unbalanced train: 1738788 clips out of 2041789 originally.
- Evaluation: 17142 audio clips out of 20371 originally.
2. Most audio is sampled at 48 kHz 24 bit, but about 10% is sampled at
44.1 kHz 24 bit. Audio files are stored in the FLAC format.
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The AudioSet data is licensed under CC-BY-4.0
## Citation
```bibtex
@inproceedings{jort_audioset_2017,
title = {Audio Set: An ontology and human-labeled dataset for audio events},
author = {Jort F. Gemmeke and Daniel P. W. Ellis and Dylan Freedman and Aren Jansen and Wade Lawrence and R. Channing Moore and Manoj Plakal and Marvin Ritter},
year = {2017},
booktitle = {Proc. IEEE ICASSP 2017},
address = {New Orleans, LA}
}
```
|
kuroneko5943/amz20 | kuroneko5943 | "2023-01-10T16:02:20Z" | 16,409 | 0 | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended|amazon_us_reviews",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"modality:tabular",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"amazon"
] | [
"text-classification"
] | "2023-01-10T12:02:41Z" | ---
annotations_creators:
- found
language:
- en
language_creators:
- found
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: amz20
size_categories:
- 1K<n<10K
source_datasets:
- extended|amazon_us_reviews
tags:
- amazon
task_categories:
- text-classification
task_ids:
- sentiment-classification
--- |
stanfordnlp/sst2 | stanfordnlp | "2024-01-04T16:31:07Z" | 16,277 | 96 | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | "2022-06-13T14:01:47Z" | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: sst
pretty_name: Stanford Sentiment Treebank v2
dataset_info:
features:
- name: idx
dtype: int32
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 4681603
num_examples: 67349
- name: validation
num_bytes: 106252
num_examples: 872
- name: test
num_bytes: 216640
num_examples: 1821
download_size: 3331058
dataset_size: 5004495
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for Stanford Sentiment Treebank v2 (SST-2)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://nlp.stanford.edu/sentiment/
- **Repository:**
- **Paper:** [Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank](https://www.aclweb.org/anthology/D13-1170/)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Stanford Sentiment Treebank is a corpus with fully labeled parse trees that allows for a complete analysis of the
compositional effects of sentiment in language. The corpus is based on the dataset introduced by Pang and Lee (2005)
and consists of 11,855 single sentences extracted from movie reviews. It was parsed with the Stanford parser and
includes a total of 215,154 unique phrases from those parse trees, each annotated by 3 human judges.
Binary classification experiments on full sentences (negative or somewhat negative vs somewhat positive or positive
with neutral sentences discarded) refer to the dataset as SST-2 or SST binary.
### Supported Tasks and Leaderboards
- `sentiment-classification`
### Languages
The text in the dataset is in English (`en`).
## Dataset Structure
### Data Instances
```
{'idx': 0,
'sentence': 'hide new secretions from the parental units ',
'label': 0}
```
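A minimal sketch of loading the dataset and decoding the class labels with the `datasets` library:

```python
from datasets import load_dataset

sst2 = load_dataset("stanfordnlp/sst2")
label_names = sst2["train"].features["label"].names  # ['negative', 'positive']

ex = sst2["train"][0]
print(ex["sentence"].strip(), "->", label_names[ex["label"]])

# The test labels are hidden (-1), so held-out evaluation uses the
# validation split (872 examples).
print(sst2["validation"].num_rows)
```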
### Data Fields
- `idx`: Monotonically increasing index ID.
- `sentence`: Complete sentence expressing an opinion about a film.
- `label`: Sentiment of the opinion, either "negative" (0) or "positive" (1). The test set labels are hidden (-1).
### Data Splits
| | train | validation | test |
|--------------------|---------:|-----------:|-----:|
| Number of examples | 67349 | 872 | 1821 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Rotten Tomatoes reviewers.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Unknown.
### Citation Information
```bibtex
@inproceedings{socher-etal-2013-recursive,
title = "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank",
author = "Socher, Richard and
Perelygin, Alex and
Wu, Jean and
Chuang, Jason and
Manning, Christopher D. and
Ng, Andrew and
Potts, Christopher",
booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
month = oct,
year = "2013",
address = "Seattle, Washington, USA",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D13-1170",
pages = "1631--1642",
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset. |
EuropeanParliament/Eurovoc | EuropeanParliament | "2024-05-14T10:12:12Z" | 16,135 | 4 | [
"license:eupl-1.1",
"size_categories:1M<n<10M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-09-01T07:46:44Z" | ---
license: eupl-1.1
configs:
- config_name: 1996-03
data_files: "files/1996-03.jsonl.gz"
- config_name: 1996-04
data_files: "files/1996-04.jsonl.gz"
- config_name: 1996-05
data_files: "files/1996-05.jsonl.gz"
- config_name: 1996-06
data_files: "files/1996-06.jsonl.gz"
- config_name: 1996-07
data_files: "files/1996-07.jsonl.gz"
- config_name: 1996-08
data_files: "files/1996-08.jsonl.gz"
- config_name: 1996-09
data_files: "files/1996-09.jsonl.gz"
- config_name: 1996-10
data_files: "files/1996-10.jsonl.gz"
- config_name: 1996-11
data_files: "files/1996-11.jsonl.gz"
- config_name: 1996-12
data_files: "files/1996-12.jsonl.gz"
- config_name: 1997-01
data_files: "files/1997-01.jsonl.gz"
- config_name: 1997-02
data_files: "files/1997-02.jsonl.gz"
- config_name: 1997-03
data_files: "files/1997-03.jsonl.gz"
- config_name: 1997-04
data_files: "files/1997-04.jsonl.gz"
- config_name: 1997-05
data_files: "files/1997-05.jsonl.gz"
- config_name: 1997-06
data_files: "files/1997-06.jsonl.gz"
- config_name: 1997-07
data_files: "files/1997-07.jsonl.gz"
- config_name: 1997-08
data_files: "files/1997-08.jsonl.gz"
- config_name: 1997-09
data_files: "files/1997-09.jsonl.gz"
- config_name: 1997-10
data_files: "files/1997-10.jsonl.gz"
- config_name: 1997-11
data_files: "files/1997-11.jsonl.gz"
- config_name: 1997-12
data_files: "files/1997-12.jsonl.gz"
- config_name: 1998-01
data_files: "files/1998-01.jsonl.gz"
- config_name: 1998-02
data_files: "files/1998-02.jsonl.gz"
- config_name: 1998-03
data_files: "files/1998-03.jsonl.gz"
- config_name: 1998-04
data_files: "files/1998-04.jsonl.gz"
- config_name: 1998-05
data_files: "files/1998-05.jsonl.gz"
- config_name: 1998-06
data_files: "files/1998-06.jsonl.gz"
- config_name: 1998-07
data_files: "files/1998-07.jsonl.gz"
- config_name: 1998-08
data_files: "files/1998-08.jsonl.gz"
- config_name: 1998-09
data_files: "files/1998-09.jsonl.gz"
- config_name: 1998-10
data_files: "files/1998-10.jsonl.gz"
- config_name: 1998-11
data_files: "files/1998-11.jsonl.gz"
- config_name: 1998-12
data_files: "files/1998-12.jsonl.gz"
- config_name: 1999-01
data_files: "files/1999-01.jsonl.gz"
- config_name: 1999-02
data_files: "files/1999-02.jsonl.gz"
- config_name: 1999-03
data_files: "files/1999-03.jsonl.gz"
- config_name: 1999-04
data_files: "files/1999-04.jsonl.gz"
- config_name: 1999-05
data_files: "files/1999-05.jsonl.gz"
- config_name: 1999-06
data_files: "files/1999-06.jsonl.gz"
- config_name: 1999-07
data_files: "files/1999-07.jsonl.gz"
- config_name: 1999-08
data_files: "files/1999-08.jsonl.gz"
- config_name: 1999-09
data_files: "files/1999-09.jsonl.gz"
- config_name: 1999-10
data_files: "files/1999-10.jsonl.gz"
- config_name: 1999-11
data_files: "files/1999-11.jsonl.gz"
- config_name: 1999-12
data_files: "files/1999-12.jsonl.gz"
- config_name: 2000-01
data_files: "files/2000-01.jsonl.gz"
- config_name: 2000-02
data_files: "files/2000-02.jsonl.gz"
- config_name: 2000-03
data_files: "files/2000-03.jsonl.gz"
- config_name: 2000-04
data_files: "files/2000-04.jsonl.gz"
- config_name: 2000-05
data_files: "files/2000-05.jsonl.gz"
- config_name: 2000-06
data_files: "files/2000-06.jsonl.gz"
- config_name: 2000-07
data_files: "files/2000-07.jsonl.gz"
- config_name: 2000-08
data_files: "files/2000-08.jsonl.gz"
- config_name: 2000-09
data_files: "files/2000-09.jsonl.gz"
- config_name: 2000-10
data_files: "files/2000-10.jsonl.gz"
- config_name: 2000-11
data_files: "files/2000-11.jsonl.gz"
- config_name: 2000-12
data_files: "files/2000-12.jsonl.gz"
- config_name: 2001-01
data_files: "files/2001-01.jsonl.gz"
- config_name: 2001-02
data_files: "files/2001-02.jsonl.gz"
- config_name: 2001-03
data_files: "files/2001-03.jsonl.gz"
- config_name: 2001-04
data_files: "files/2001-04.jsonl.gz"
- config_name: 2001-05
data_files: "files/2001-05.jsonl.gz"
- config_name: 2001-06
data_files: "files/2001-06.jsonl.gz"
- config_name: 2001-07
data_files: "files/2001-07.jsonl.gz"
- config_name: 2001-08
data_files: "files/2001-08.jsonl.gz"
- config_name: 2001-09
data_files: "files/2001-09.jsonl.gz"
- config_name: 2001-10
data_files: "files/2001-10.jsonl.gz"
- config_name: 2001-11
data_files: "files/2001-11.jsonl.gz"
- config_name: 2001-12
data_files: "files/2001-12.jsonl.gz"
- config_name: 2002-01
data_files: "files/2002-01.jsonl.gz"
- config_name: 2002-02
data_files: "files/2002-02.jsonl.gz"
- config_name: 2002-03
data_files: "files/2002-03.jsonl.gz"
- config_name: 2002-04
data_files: "files/2002-04.jsonl.gz"
- config_name: 2002-05
data_files: "files/2002-05.jsonl.gz"
- config_name: 2002-06
data_files: "files/2002-06.jsonl.gz"
- config_name: 2002-07
data_files: "files/2002-07.jsonl.gz"
- config_name: 2002-08
data_files: "files/2002-08.jsonl.gz"
- config_name: 2002-09
data_files: "files/2002-09.jsonl.gz"
- config_name: 2002-10
data_files: "files/2002-10.jsonl.gz"
- config_name: 2002-11
data_files: "files/2002-11.jsonl.gz"
- config_name: 2002-12
data_files: "files/2002-12.jsonl.gz"
- config_name: 2003-01
data_files: "files/2003-01.jsonl.gz"
- config_name: 2003-02
data_files: "files/2003-02.jsonl.gz"
- config_name: 2003-03
data_files: "files/2003-03.jsonl.gz"
- config_name: 2003-04
data_files: "files/2003-04.jsonl.gz"
- config_name: 2003-05
data_files: "files/2003-05.jsonl.gz"
- config_name: 2003-06
data_files: "files/2003-06.jsonl.gz"
- config_name: 2003-07
data_files: "files/2003-07.jsonl.gz"
- config_name: 2003-08
data_files: "files/2003-08.jsonl.gz"
- config_name: 2003-09
data_files: "files/2003-09.jsonl.gz"
- config_name: 2003-10
data_files: "files/2003-10.jsonl.gz"
- config_name: 2003-11
data_files: "files/2003-11.jsonl.gz"
- config_name: 2003-12
data_files: "files/2003-12.jsonl.gz"
- config_name: 2004-01
data_files: "files/2004-01.jsonl.gz"
- config_name: 2004-02
data_files: "files/2004-02.jsonl.gz"
- config_name: 2004-03
data_files: "files/2004-03.jsonl.gz"
- config_name: 2004-04
data_files: "files/2004-04.jsonl.gz"
- config_name: 2004-05
data_files: "files/2004-05.jsonl.gz"
- config_name: 2004-06
data_files: "files/2004-06.jsonl.gz"
- config_name: 2004-07
data_files: "files/2004-07.jsonl.gz"
- config_name: 2004-08
data_files: "files/2004-08.jsonl.gz"
- config_name: 2004-09
data_files: "files/2004-09.jsonl.gz"
- config_name: 2004-10
data_files: "files/2004-10.jsonl.gz"
- config_name: 2004-11
data_files: "files/2004-11.jsonl.gz"
- config_name: 2004-12
data_files: "files/2004-12.jsonl.gz"
- config_name: 2005-01
data_files: "files/2005-01.jsonl.gz"
- config_name: 2005-02
data_files: "files/2005-02.jsonl.gz"
- config_name: 2005-03
data_files: "files/2005-03.jsonl.gz"
- config_name: 2005-04
data_files: "files/2005-04.jsonl.gz"
- config_name: 2005-05
data_files: "files/2005-05.jsonl.gz"
- config_name: 2005-06
data_files: "files/2005-06.jsonl.gz"
- config_name: 2005-07
data_files: "files/2005-07.jsonl.gz"
- config_name: 2005-08
data_files: "files/2005-08.jsonl.gz"
- config_name: 2005-09
data_files: "files/2005-09.jsonl.gz"
- config_name: 2005-10
data_files: "files/2005-10.jsonl.gz"
- config_name: 2005-11
data_files: "files/2005-11.jsonl.gz"
- config_name: 2005-12
data_files: "files/2005-12.jsonl.gz"
- config_name: 2006-01
data_files: "files/2006-01.jsonl.gz"
- config_name: 2006-02
data_files: "files/2006-02.jsonl.gz"
- config_name: 2006-03
data_files: "files/2006-03.jsonl.gz"
- config_name: 2006-04
data_files: "files/2006-04.jsonl.gz"
- config_name: 2006-05
data_files: "files/2006-05.jsonl.gz"
- config_name: 2006-06
data_files: "files/2006-06.jsonl.gz"
- config_name: 2006-07
data_files: "files/2006-07.jsonl.gz"
- config_name: 2006-08
data_files: "files/2006-08.jsonl.gz"
- config_name: 2006-09
data_files: "files/2006-09.jsonl.gz"
- config_name: 2006-10
data_files: "files/2006-10.jsonl.gz"
- config_name: 2006-11
data_files: "files/2006-11.jsonl.gz"
- config_name: 2006-12
data_files: "files/2006-12.jsonl.gz"
- config_name: 2007-01
data_files: "files/2007-01.jsonl.gz"
- config_name: 2007-02
data_files: "files/2007-02.jsonl.gz"
- config_name: 2007-03
data_files: "files/2007-03.jsonl.gz"
- config_name: 2007-04
data_files: "files/2007-04.jsonl.gz"
- config_name: 2007-05
data_files: "files/2007-05.jsonl.gz"
- config_name: 2007-06
data_files: "files/2007-06.jsonl.gz"
- config_name: 2007-07
data_files: "files/2007-07.jsonl.gz"
- config_name: 2007-08
data_files: "files/2007-08.jsonl.gz"
- config_name: 2007-09
data_files: "files/2007-09.jsonl.gz"
- config_name: 2007-10
data_files: "files/2007-10.jsonl.gz"
- config_name: 2007-11
data_files: "files/2007-11.jsonl.gz"
- config_name: 2007-12
data_files: "files/2007-12.jsonl.gz"
- config_name: 2008-01
data_files: "files/2008-01.jsonl.gz"
- config_name: 2008-02
data_files: "files/2008-02.jsonl.gz"
- config_name: 2008-03
data_files: "files/2008-03.jsonl.gz"
- config_name: 2008-04
data_files: "files/2008-04.jsonl.gz"
- config_name: 2008-05
data_files: "files/2008-05.jsonl.gz"
- config_name: 2008-06
data_files: "files/2008-06.jsonl.gz"
- config_name: 2008-07
data_files: "files/2008-07.jsonl.gz"
- config_name: 2008-08
data_files: "files/2008-08.jsonl.gz"
- config_name: 2008-09
data_files: "files/2008-09.jsonl.gz"
- config_name: 2008-10
data_files: "files/2008-10.jsonl.gz"
- config_name: 2008-11
data_files: "files/2008-11.jsonl.gz"
- config_name: 2008-12
data_files: "files/2008-12.jsonl.gz"
- config_name: 2009-01
data_files: "files/2009-01.jsonl.gz"
- config_name: 2009-02
data_files: "files/2009-02.jsonl.gz"
- config_name: 2009-03
data_files: "files/2009-03.jsonl.gz"
- config_name: 2009-04
data_files: "files/2009-04.jsonl.gz"
- config_name: 2009-05
data_files: "files/2009-05.jsonl.gz"
- config_name: 2009-06
data_files: "files/2009-06.jsonl.gz"
- config_name: 2009-07
data_files: "files/2009-07.jsonl.gz"
- config_name: 2009-08
data_files: "files/2009-08.jsonl.gz"
- config_name: 2009-09
data_files: "files/2009-09.jsonl.gz"
- config_name: 2009-10
data_files: "files/2009-10.jsonl.gz"
- config_name: 2009-11
data_files: "files/2009-11.jsonl.gz"
- config_name: 2009-12
data_files: "files/2009-12.jsonl.gz"
- config_name: 2010-01
data_files: "files/2010-01.jsonl.gz"
- config_name: 2010-02
data_files: "files/2010-02.jsonl.gz"
- config_name: 2010-03
data_files: "files/2010-03.jsonl.gz"
- config_name: 2010-04
data_files: "files/2010-04.jsonl.gz"
- config_name: 2010-05
data_files: "files/2010-05.jsonl.gz"
- config_name: 2010-06
data_files: "files/2010-06.jsonl.gz"
- config_name: 2010-07
data_files: "files/2010-07.jsonl.gz"
- config_name: 2010-08
data_files: "files/2010-08.jsonl.gz"
- config_name: 2010-09
data_files: "files/2010-09.jsonl.gz"
- config_name: 2010-10
data_files: "files/2010-10.jsonl.gz"
- config_name: 2010-11
data_files: "files/2010-11.jsonl.gz"
- config_name: 2010-12
data_files: "files/2010-12.jsonl.gz"
- config_name: 2011-01
data_files: "files/2011-01.jsonl.gz"
- config_name: 2011-02
data_files: "files/2011-02.jsonl.gz"
- config_name: 2011-03
data_files: "files/2011-03.jsonl.gz"
- config_name: 2011-04
data_files: "files/2011-04.jsonl.gz"
- config_name: 2011-05
data_files: "files/2011-05.jsonl.gz"
- config_name: 2011-06
data_files: "files/2011-06.jsonl.gz"
- config_name: 2011-07
data_files: "files/2011-07.jsonl.gz"
- config_name: 2011-08
data_files: "files/2011-08.jsonl.gz"
- config_name: 2011-09
data_files: "files/2011-09.jsonl.gz"
- config_name: 2011-10
data_files: "files/2011-10.jsonl.gz"
- config_name: 2011-11
data_files: "files/2011-11.jsonl.gz"
- config_name: 2011-12
data_files: "files/2011-12.jsonl.gz"
- config_name: 2012-01
data_files: "files/2012-01.jsonl.gz"
- config_name: 2012-02
data_files: "files/2012-02.jsonl.gz"
- config_name: 2012-03
data_files: "files/2012-03.jsonl.gz"
- config_name: 2012-04
data_files: "files/2012-04.jsonl.gz"
- config_name: 2012-05
data_files: "files/2012-05.jsonl.gz"
- config_name: 2012-06
data_files: "files/2012-06.jsonl.gz"
- config_name: 2012-07
data_files: "files/2012-07.jsonl.gz"
- config_name: 2012-08
data_files: "files/2012-08.jsonl.gz"
- config_name: 2012-09
data_files: "files/2012-09.jsonl.gz"
- config_name: 2012-10
data_files: "files/2012-10.jsonl.gz"
- config_name: 2012-11
data_files: "files/2012-11.jsonl.gz"
- config_name: 2012-12
data_files: "files/2012-12.jsonl.gz"
- config_name: 2013-01
data_files: "files/2013-01.jsonl.gz"
- config_name: 2013-02
data_files: "files/2013-02.jsonl.gz"
- config_name: 2013-03
data_files: "files/2013-03.jsonl.gz"
- config_name: 2013-04
data_files: "files/2013-04.jsonl.gz"
- config_name: 2013-05
data_files: "files/2013-05.jsonl.gz"
- config_name: 2013-06
data_files: "files/2013-06.jsonl.gz"
- config_name: 2013-07
data_files: "files/2013-07.jsonl.gz"
- config_name: 2013-08
data_files: "files/2013-08.jsonl.gz"
- config_name: 2013-09
data_files: "files/2013-09.jsonl.gz"
- config_name: 2013-10
data_files: "files/2013-10.jsonl.gz"
- config_name: 2013-11
data_files: "files/2013-11.jsonl.gz"
- config_name: 2013-12
data_files: "files/2013-12.jsonl.gz"
- config_name: 2014-01
data_files: "files/2014-01.jsonl.gz"
- config_name: 2014-02
data_files: "files/2014-02.jsonl.gz"
- config_name: 2014-03
data_files: "files/2014-03.jsonl.gz"
- config_name: 2014-04
data_files: "files/2014-04.jsonl.gz"
- config_name: 2014-05
data_files: "files/2014-05.jsonl.gz"
- config_name: 2014-06
data_files: "files/2014-06.jsonl.gz"
- config_name: 2014-07
data_files: "files/2014-07.jsonl.gz"
- config_name: 2014-08
data_files: "files/2014-08.jsonl.gz"
- config_name: 2014-09
data_files: "files/2014-09.jsonl.gz"
- config_name: 2014-10
data_files: "files/2014-10.jsonl.gz"
- config_name: 2014-11
data_files: "files/2014-11.jsonl.gz"
- config_name: 2014-12
data_files: "files/2014-12.jsonl.gz"
- config_name: 2015-01
data_files: "files/2015-01.jsonl.gz"
- config_name: 2015-02
data_files: "files/2015-02.jsonl.gz"
- config_name: 2015-03
data_files: "files/2015-03.jsonl.gz"
- config_name: 2015-04
data_files: "files/2015-04.jsonl.gz"
- config_name: 2015-05
data_files: "files/2015-05.jsonl.gz"
- config_name: 2015-06
data_files: "files/2015-06.jsonl.gz"
- config_name: 2015-07
data_files: "files/2015-07.jsonl.gz"
- config_name: 2015-08
data_files: "files/2015-08.jsonl.gz"
- config_name: 2015-09
data_files: "files/2015-09.jsonl.gz"
- config_name: 2015-10
data_files: "files/2015-10.jsonl.gz"
- config_name: 2015-11
data_files: "files/2015-11.jsonl.gz"
- config_name: 2015-12
data_files: "files/2015-12.jsonl.gz"
- config_name: 2016-01
data_files: "files/2016-01.jsonl.gz"
- config_name: 2016-02
data_files: "files/2016-02.jsonl.gz"
- config_name: 2016-03
data_files: "files/2016-03.jsonl.gz"
- config_name: 2016-04
data_files: "files/2016-04.jsonl.gz"
- config_name: 2016-05
data_files: "files/2016-05.jsonl.gz"
- config_name: 2016-06
data_files: "files/2016-06.jsonl.gz"
- config_name: 2016-07
data_files: "files/2016-07.jsonl.gz"
- config_name: 2016-08
data_files: "files/2016-08.jsonl.gz"
- config_name: 2016-09
data_files: "files/2016-09.jsonl.gz"
- config_name: 2016-10
data_files: "files/2016-10.jsonl.gz"
- config_name: 2016-11
data_files: "files/2016-11.jsonl.gz"
- config_name: 2016-12
data_files: "files/2016-12.jsonl.gz"
- config_name: 2017-01
data_files: "files/2017-01.jsonl.gz"
- config_name: 2017-02
data_files: "files/2017-02.jsonl.gz"
- config_name: 2017-03
data_files: "files/2017-03.jsonl.gz"
- config_name: 2017-04
data_files: "files/2017-04.jsonl.gz"
- config_name: 2017-05
data_files: "files/2017-05.jsonl.gz"
- config_name: 2017-06
data_files: "files/2017-06.jsonl.gz"
- config_name: 2017-07
data_files: "files/2017-07.jsonl.gz"
- config_name: 2017-08
data_files: "files/2017-08.jsonl.gz"
- config_name: 2017-09
data_files: "files/2017-09.jsonl.gz"
- config_name: 2017-10
data_files: "files/2017-10.jsonl.gz"
- config_name: 2017-11
data_files: "files/2017-11.jsonl.gz"
- config_name: 2017-12
data_files: "files/2017-12.jsonl.gz"
- config_name: 2018-01
data_files: "files/2018-01.jsonl.gz"
- config_name: 2018-02
data_files: "files/2018-02.jsonl.gz"
- config_name: 2018-03
data_files: "files/2018-03.jsonl.gz"
- config_name: 2018-04
data_files: "files/2018-04.jsonl.gz"
- config_name: 2018-05
data_files: "files/2018-05.jsonl.gz"
- config_name: 2018-06
data_files: "files/2018-06.jsonl.gz"
- config_name: 2018-07
data_files: "files/2018-07.jsonl.gz"
- config_name: 2018-08
data_files: "files/2018-08.jsonl.gz"
- config_name: 2018-09
data_files: "files/2018-09.jsonl.gz"
- config_name: 2018-10
data_files: "files/2018-10.jsonl.gz"
- config_name: 2018-11
data_files: "files/2018-11.jsonl.gz"
- config_name: 2018-12
data_files: "files/2018-12.jsonl.gz"
- config_name: 2019-01
data_files: "files/2019-01.jsonl.gz"
- config_name: 2019-02
data_files: "files/2019-02.jsonl.gz"
- config_name: 2019-03
data_files: "files/2019-03.jsonl.gz"
- config_name: 2019-04
data_files: "files/2019-04.jsonl.gz"
- config_name: 2019-05
data_files: "files/2019-05.jsonl.gz"
- config_name: 2019-06
data_files: "files/2019-06.jsonl.gz"
- config_name: 2019-07
data_files: "files/2019-07.jsonl.gz"
- config_name: 2019-08
data_files: "files/2019-08.jsonl.gz"
- config_name: 2019-09
data_files: "files/2019-09.jsonl.gz"
- config_name: 2019-10
data_files: "files/2019-10.jsonl.gz"
- config_name: 2019-11
data_files: "files/2019-11.jsonl.gz"
- config_name: 2019-12
data_files: "files/2019-12.jsonl.gz"
- config_name: 2020-01
data_files: "files/2020-01.jsonl.gz"
- config_name: 2020-02
data_files: "files/2020-02.jsonl.gz"
- config_name: 2020-03
data_files: "files/2020-03.jsonl.gz"
- config_name: 2020-04
data_files: "files/2020-04.jsonl.gz"
- config_name: 2020-05
data_files: "files/2020-05.jsonl.gz"
- config_name: 2020-06
data_files: "files/2020-06.jsonl.gz"
- config_name: 2020-07
data_files: "files/2020-07.jsonl.gz"
- config_name: 2020-08
data_files: "files/2020-08.jsonl.gz"
- config_name: 2020-09
data_files: "files/2020-09.jsonl.gz"
- config_name: 2020-10
data_files: "files/2020-10.jsonl.gz"
- config_name: 2020-11
data_files: "files/2020-11.jsonl.gz"
- config_name: 2020-12
data_files: "files/2020-12.jsonl.gz"
- config_name: 2021-01
data_files: "files/2021-01.jsonl.gz"
- config_name: 2021-02
data_files: "files/2021-02.jsonl.gz"
- config_name: 2021-03
data_files: "files/2021-03.jsonl.gz"
- config_name: 2021-04
data_files: "files/2021-04.jsonl.gz"
- config_name: 2021-05
data_files: "files/2021-05.jsonl.gz"
- config_name: 2021-06
data_files: "files/2021-06.jsonl.gz"
- config_name: 2021-07
data_files: "files/2021-07.jsonl.gz"
- config_name: 2021-08
data_files: "files/2021-08.jsonl.gz"
- config_name: 2021-09
data_files: "files/2021-09.jsonl.gz"
- config_name: 2021-10
data_files: "files/2021-10.jsonl.gz"
- config_name: 2021-11
data_files: "files/2021-11.jsonl.gz"
- config_name: 2021-12
data_files: "files/2021-12.jsonl.gz"
- config_name: 2022-01
data_files: "files/2022-01.jsonl.gz"
- config_name: 2022-02
data_files: "files/2022-02.jsonl.gz"
- config_name: 2022-03
data_files: "files/2022-03.jsonl.gz"
- config_name: 2022-04
data_files: "files/2022-04.jsonl.gz"
- config_name: 2022-05
data_files: "files/2022-05.jsonl.gz"
- config_name: 2022-06
data_files: "files/2022-06.jsonl.gz"
- config_name: 2022-07
data_files: "files/2022-07.jsonl.gz"
- config_name: 2022-08
data_files: "files/2022-08.jsonl.gz"
- config_name: 2022-09
data_files: "files/2022-09.jsonl.gz"
- config_name: 2022-10
data_files: "files/2022-10.jsonl.gz"
- config_name: 2022-11
data_files: "files/2022-11.jsonl.gz"
- config_name: 2022-12
data_files: "files/2022-12.jsonl.gz"
- config_name: 2023-01
data_files: "files/2023-01.jsonl.gz"
- config_name: 2023-02
data_files: "files/2023-02.jsonl.gz"
- config_name: 2023-03
data_files: "files/2023-03.jsonl.gz"
- config_name: 2023-04
data_files: "files/2023-04.jsonl.gz"
- config_name: 2023-05
data_files: "files/2023-05.jsonl.gz"
- config_name: 2023-06
data_files: "files/2023-06.jsonl.gz"
- config_name: 2023-07
data_files: "files/2023-07.jsonl.gz"
- config_name: 2023-08
data_files: "files/2023-08.jsonl.gz"
- config_name: 2023-09
data_files: "files/2023-09.jsonl.gz"
- config_name: 2023-10
data_files: "files/2023-10.jsonl.gz"
- config_name: 2023-11
data_files: "files/2023-11.jsonl.gz"
- config_name: 2023-12
data_files: "files/2023-12.jsonl.gz"
---
# 🇪🇺 🏷️ EuroVoc dataset
This dataset contains more than 3,700,000 documents in 39 languages with associated EuroVoc labels.
## What's Cellar?
Cellar is the common data repository of the Publications Office of the European Union. Digital publications and metadata are stored in and disseminated via Cellar, in order to be used by humans and machines. Aiming to transparently serve users, Cellar stores multilingual publications and metadata; it is open to all EU citizens and provides machine-readable data.
https://op.europa.eu/fr/web/cellar
## Why was this dataset created ?
"Extreme classification come with challenges of scalability due to large label spaces, data sparsity issues due to insufficient training samples."
https://medium.com/datapy-ai/extreme-multi-label-classification-for-eurovoc-b51d74623820
## How was this dataset created?
The source code is available; see `cellar.py`.
## When was this dataset created?
14 July 2023
## What are the main characteristics of this dataset?
There are 39 different languages present in this dataset, some of which are EU languages and some not. As the following graph illustrates, most documents are written in EU languages (English being the most common), while non-EU languages (for example Arabic and Japanese) are only sparsely represented. Note that since the Irish language (`gle`) was only granted full official and working status in the EU in 2022, there are very few documents in that language. Croatian (`hrv`) is also less represented, as Croatia is the most recent country to have joined the EU (in 2013).
![language graph](images/nb_documents.png)
Document length also varies with the language a document is written in, and is quite variable overall, especially in English. Note that this boxplot does not show outliers, since some documents contain up to 86 million characters. The red lines in the boxplot indicate the median document length for each language.
![boxplot](images/boxplot.png)
Documents in Irish show very wide variability in length, a consequence of how few of them there are. We therefore present the same boxplot without the Irish language in order to visualize the document length distribution in the other languages in more detail.
![boxplot](images/boxplot2.png)
## How is the data structured?
An example of a sample of this dataset is the following :
```json
{
"title": "Commission information notice...",
"date": "2023-09-29",
"eurovoc_concepts": ["air transport", "intra-EU transport"],
"url": "http://publications.europa.eu/resource/cellar/ec99987f-5e69-11ee-9220-01aa75ed71a1",
"lang": "eng",
"formats": ["fmx4", "pdfa2a", "xhtml"],
"text": "To ensure ownership by the relevant actors,..."
}
```
- `title` : title of the document
- `date` : publication date of the document
- `eurovoc_concepts` : list of the EuroVoc concepts related to this document
- `url` : URL to access the document
- `lang` : language of the document
- `formats` : list of formats in which the original document is available
- `text` : text content of the document
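Each configuration listed above corresponds to one month of publications, so a single month can be loaded on its own. A minimal sketch with the `datasets` library (the split name is assumed to be the default `train` split produced for plain data files):

```python
from datasets import load_dataset

# Load the Cellar documents published in January 2023.
docs = load_dataset("EuropeanParliament/Eurovoc", "2023-01", split="train")

doc = docs[0]
print(doc["lang"], doc["date"], doc["eurovoc_concepts"])
print(doc["text"][:200])  # first 200 characters of the document body
```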
## Bibliography
- Ilias Chalkidis, Emmanouil Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos. 2019. Extreme Multi-Label Legal Text Classification: A Case Study in EU Legislation. In Proceedings of the Natural Legal Language Processing Workshop 2019, pages 78–87, Minneapolis, Minnesota. Association for Computational Linguistics.
- I. Chalkidis, M. Fergadiotis, P. Malakasiotis and I. Androutsopoulos, Large-Scale Multi-Label Text Classification on EU Legislation. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), Florence, Italy, (short papers), 2019.
- Andrei-Marius Avram, Vasile Pais, and Dan Ioan Tufis. 2021. PyEuroVoc: A Tool for Multilingual Legal Document Classification with EuroVoc Descriptors. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 92–101, Held Online. INCOMA Ltd..
- Zein Shaheen, Gerhard Wohlgenannt, and Erwin Filtz. 2020. Large scale legal text classification using transformer models. arXiv preprint arXiv:2010.12871.
## Author(s)
Sébastien Campion <sebastien.campion@europarl.europa.eu>
|
Fsoft-AIC/the-vault-function | Fsoft-AIC | "2024-10-15T07:13:25Z" | 16,067 | 12 | [
"task_categories:text-generation",
"multilinguality:multiprogramming languages",
"language:code",
"language:en",
"license:mit",
"arxiv:2305.06156",
"region:us"
] | [
"text-generation"
] | "2023-05-05T14:25:47Z" | ---
language:
- code
- en
multilinguality:
- multiprogramming languages
task_categories:
- text-generation
license: mit
dataset_info:
features:
- name: identifier
dtype: string
- name: return_type
dtype: string
- name: repo
dtype: string
- name: path
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
dtype: string
- name: original_docstring
dtype: string
- name: comment
dtype: string
- name: docstring_tokens
dtype: string
- name: docstring
dtype: string
- name: original_string
dtype: string
pretty_name: The Vault Function
viewer: true
---
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Statistics](#dataset-statistics)
- [Usage](#usage)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [FSoft-AI4Code/TheVault](https://github.com/FSoft-AI4Code/TheVault)
- **Paper:** [The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation](https://arxiv.org/abs/2305.06156)
- **Contact:** support.ailab@fpt.com
- **Website:** https://www.fpt-aicenter.com/ai-residency/
<p align="center">
<img src="https://raw.githubusercontent.com/FSoft-AI4Code/TheVault/main/assets/the-vault-4-logo-png.png" width="300px" alt="logo">
</p>
<div align="center">
# The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation
</div>
## Dataset Summary
The Vault dataset is a comprehensive, large-scale, multilingual parallel dataset that features high-quality code-text pairs derived from The Stack, the largest permissively-licensed source code dataset.
We provide The Vault, which contains code snippets from 10 popular programming languages: Java, JavaScript, Python, Ruby, Rust, Golang, C#, C++, C, and PHP. This dataset provides multiple code-snippet levels, metadata, and 11 docstring styles for enhanced usability and versatility.
## Supported Tasks
The Vault can be used for pretraining LLMs or for downstream code-text interaction tasks. A number of tasks related to code understanding and generation can be constructed using The Vault, such as *code summarization*, *text-to-code generation* and *code search*.
## Languages
The natural language text (docstring) is in English.
10 programming languages are supported in The Vault: `Python`, `Java`, `JavaScript`, `PHP`, `C`, `C#`, `C++`, `Go`, `Ruby`, `Rust`
## Dataset Structure
### Data Instances
```
{
"hexsha": "5c47f0b4c173a8fd03e4e633d9b3dd8211e67ad0",
"repo": "neumanna94/beepboop",
"path": "js/scripts.js",
"license": [
"MIT"
],
"language": "JavaScript",
"identifier": "beepBoopSelector",
"return_type": "<not_specific>",
"original_string": "function beepBoopSelector(inputString, bbFunction){\n if(bbFunction==1){\n return beepBoop(inputString);\n } else if(bbFunction==2){\n return beepBoop2(inputString);\n } else if(bbFunction==3){\n return beepBoop3(inputString);\n } else {\n }\n}",
"original_docstring": "//Determines what beepBoop function to use",
"docstring": "Determines what beepBoop function to use",
"docstring_tokens": [
"Determines",
"what",
"beepBoop",
"function",
"to",
"use"
],
"code": "function beepBoopSelector(inputString, bbFunction){\n if(bbFunction==1){\n return beepBoop(inputString);\n } else if(bbFunction==2){\n return beepBoop2(inputString);\n } else if(bbFunction==3){\n return beepBoop3(inputString);\n } else {\n }\n}",
"code_tokens": [
"function",
"beepBoopSelector",
"(",
"inputString",
",",
"bbFunction",
")",
"{",
"if",
"(",
"bbFunction",
"==",
"1",
")",
"{",
"return",
"beepBoop",
"(",
"inputString",
")",
";",
"}",
"else",
"if",
"(",
"bbFunction",
"==",
"2",
")",
"{",
"return",
"beepBoop2",
"(",
"inputString",
")",
";",
"}",
"else",
"if",
"(",
"bbFunction",
"==",
"3",
")",
"{",
"return",
"beepBoop3",
"(",
"inputString",
")",
";",
"}",
"else",
"{",
"}",
"}"
],
"short_docstring": "Determines what beepBoop function to use",
"short_docstring_tokens": [
"Determines",
"what",
"beepBoop",
"function",
"to",
"use"
],
"comment": [],
"parameters": [
{
"param": "inputString",
"type": null
},
{
"param": "bbFunction",
"type": null
}
],
"docstring_params": {
"returns": [],
"raises": [],
"params": [
{
"identifier": "inputString",
"type": null,
"docstring": null,
"docstring_tokens": [],
"default": null,
"is_optional": null
},
{
"identifier": "bbFunction",
"type": null,
"docstring": null,
"docstring_tokens": [],
"default": null,
"is_optional": null
}
],
"outlier_params": [],
"others": []
}
}
```
### Data Fields
Data fields for function level:
- **hexsha** (string): the unique git hash of file
- **repo** (string): the owner/repo
- **path** (string): the full path to the original file
- **license** (list): licenses in the repo
- **language** (string): the programming language
- **identifier** (string): the function or method name
- **return_type** (string): the type returned by the function
- **original_string** (string): original version of function/class node
- **original_docstring** (string): the raw string before tokenization or parsing
- **code** (string): the part of the original that is code
- **code_tokens** (list): tokenized version of `code`
- **short_docstring** (string): short, brief summarization (first line of the docstring)
- **short_docstring_tokens** (list): tokenized version of `short_docstring`
- **docstring** (string): the top-level comment or docstring (the docstring without parameter docs, return, exception fields, etc.)
- **docstring_tokens** (list): tokenized version of docstring
- **comment** (list): list of comments (line) inside the function/class
- **parameters** (list): List of parameters and its type (type can be None)
- **docstring_params** (dict): Dictionary of the parsed information from docstring
See [here](https://github.com/FSoft-AI4Code/TheVault/blob/main/data/README.md) for more details and examples.
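Because `docstring_params` is a nested dictionary, it is worth seeing how a sample is traversed in practice. A small sketch using the loading options documented in the Usage section below:

```python
from datasets import load_dataset

data = load_dataset("Fsoft-AIC/the-vault-function",
                    split_set=["train/small"], languages=["python"],
                    streaming=True)

sample = next(iter(data["train"]))
print(sample["identifier"], "-", sample["short_docstring"])

# Walk the parsed docstring metadata for each parameter.
for p in sample["docstring_params"]["params"]:
    print(" ", p["identifier"], "->", p["docstring"])
```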
### Data Splits
In this repo, The Vault is divided into 5 subsets: three training versions split by size from the full training set, plus a validation set and a test set (approximately 20,000 samples each). The statistics for each language in each split are given in the following section.
The dataset is deduplicated before splitting. The three versions of the training set are small (5%), medium (20%) and large (100%).
## Dataset Statistics
- Compare to other benchmarks
| Dataset | #Language | #Code-text pair |
|:--------------------------|----------:|-----------------:|
| PyMT5 | 1 | ≈ 7,700,000 |
| CoDesc | 1 | 4,211,516 |
| CodeSearchNet | 6 | 2,326,976 |
| CodeSearchNet (CodeXGLUE) | 6 | 1,005,474 |
| Deepcom | 1 | 424,028 |
| CONCODE | 1 | 2,184,310 |
| Funcom | 1 | 2,149,121 |
| CodeT5 | 8 | 3,158,313 |
| **The Vault** | **10** | **34,098,775** |
- Statistic for split sets
| | train/small | train/medium | train/full | validation | test | total |
|:-----------|------------:|-------------:|-----------:|-----------:|-------:|--------------:|
|Python | 370,657 | 1,952,110 | 7,772,647 | 30,992 | 21,652 | 7,825,291 |
|Java | 351,213 | 1,612,366 | 6,629,193 | 22,677 | 15,552 | 6,667,422 |
|JavaScript | 82,931 | 404,729 | 1,640,416 | 22,044 | 21,108 | 1,683,568 |
|PHP | 236,638 | 1,155,476 | 4,656,371 | 21,375 | 19,010 | 4,696,756 |
|C | 105,978 | 381,207 | 1,639,319 | 27,525 | 19,122 | 1,685,966 |
|C# | 141,090 | 783,166 | 3,305,891 | 24,787 | 19,638 | 3,350,316 |
|C++ | 87,420 | 410,907 | 1,671,268 | 20,011 | 18,169 | 1,709,448 |
|Go | 267,535 | 1,319,547 | 5,109,020 | 19,102 | 25,314 | 5,153,436 |
|Ruby | 23,921 | 112,574 | 424,339 | 17,338 | 19,908 | 461,585 |
|Rust | 35,367 | 224,015 | 825,130 | 16,716 | 23,141 | 864,987 |
|TOTAL | 1,702,750 | 8,356,097 |33,673,594 |222,567 |202,614 |**34,098,775** |
## Usage
You can load The Vault dataset using the `datasets` library: ```pip install datasets```
```python
from datasets import load_dataset
# Load full function level dataset (34M samples)
dataset = load_dataset("Fsoft-AIC/the-vault-function")
# Load function level train/validation/test set
dataset = load_dataset("Fsoft-AIC/the-vault-function", split_set=["train"])
# Load "small" (or "medium", "full") version of function level training set
dataset = load_dataset("Fsoft-AIC/the-vault-function", split_set=["train/small"])
# specific language (e.g. Python)
dataset = load_dataset("Fsoft-AIC/the-vault-function", split_set=["train"], languages=['python'])
# dataset streaming
data = load_dataset("Fsoft-AIC/the-vault-function", split_set= ["train"], streaming= True)
for sample in iter(data['train']):
print(sample)
```
A backup of the dataset can be downloaded from Azure blob storage. See [Download The Vault from Azure blob storage](https://github.com/FSoft-AI4Code/TheVault#download-via-link).
## Additional Information
### Licensing Information
MIT License
### Citation Information
```
@article{manh2023vault,
title={The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation},
author={Manh, Dung Nguyen and Hai, Nam Le and Dau, Anh TV and Nguyen, Anh Minh and Nghiem, Khanh and Guo, Jin and Bui, Nghi DQ},
journal={arXiv preprint arXiv:2305.06156},
year={2023}
}
```
### Contributions
This dataset is developed by [FSOFT AI4Code team](https://github.com/FSoft-AI4Code). |
opencsg/chinese-fineweb-edu-v2 | opencsg | "2024-10-26T04:51:41Z" | 15,990 | 46 | [
"task_categories:text-generation",
"language:zh",
"license:apache-2.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation"
] | "2024-10-13T14:20:13Z" | ---
language:
- zh
pipeline_tag: text-generation
license: apache-2.0
task_categories:
- text-generation
size_categories:
- 10B<n<100B
---
# **Chinese Fineweb Edu Dataset V2** [[中文]](#chinese) [[English]](#english)
<a id="english"></a>
<p align="center">
<img width="600px" alt="OpenCSG" src="./logo.png">
</p>
<p align="center"><a href="https://opencsg.com/models">[OpenCSG Community]</a> <a href="https://github.com/OpenCSGs/Awesome-SLMs">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[wechat]</a> <a href="https://twitter.com/OpenCsg">[Twitter]</a> </p>
</div>
<b>Chinese Fineweb Edu Dataset V2</b> is a comprehensive upgrade of the original Chinese Fineweb Edu, designed and optimized for natural language processing (NLP) tasks in the education sector. This high-quality Chinese pretraining dataset has undergone significant improvements and expansions, aimed at providing researchers and developers with more diverse and broadly applicable educational corpus resources. With a dataset size of 188 million entries (approximately 420 billion tokens), Fineweb Edu v2 not only increases the volume but also optimizes the data filtering methods and scoring models to ensure effectiveness and practicality in the educational domain.
## Enhanced Scoring Model
In the Chinese Fineweb edu v2 version, the data selection scoring model has undergone a significant upgrade, utilizing the larger and more powerful OpenCSG csg-wukong-enterprise V2 model. The training data for this model has been increased to 1 million entries, covering a variety of text types such as books, news, blogs, and 25% English data. Compared to the previous version, the csg-wukong-enterprise V2 model boasts a larger parameter count and deeper semantic understanding, excelling particularly in Chinese text comprehension and processing. The model not only performs more detailed analysis of text structure and content but also captures deeper semantic and emotional nuances embedded in the language.
This improvement means that during the data selection process, the model can more accurately assess the educational value, writing quality, and practical application of the text. Especially when dealing with high-demand texts in education and technology, the Fineweb2 scoring model ensures high quality and consistency in the selection results. This advancement significantly enhances the reliability of the data selection, providing stronger support for subsequent model training.
## Prompt Improvements
During the construction of the Fineweb2 dataset, the data filtering process was particularly crucial. To ensure that only text with real educational value and practicality was selected, we carefully optimized the design of the prompts used for data filtering. The new prompts more accurately evaluate the educational value, writing quality, and practicality of web content, refining the filtering process for better precision.
The new prompts clearly define scoring standards for educational content and also set expectations for writing style, coherence, and thematic depth. The specific scoring criteria, translated here from the original Chinese prompt (reproduced verbatim in the Chinese section below), are as follows:
```Plain
Below is an excerpt from a web page. Please use the following 5-point scoring system to assess the writing quality, educational value, and practicality of the webpage:
0 points: the page provides no educational value and consists entirely of irrelevant information (e.g. advertisements, promotional material, adult content).
1 point: the page provides some basic information of potential educational value, but contains a substantial amount of irrelevant or non-academic content (e.g. advertisements and promotional material).
2 points: the page touches on some education-related elements but does not align well with educational standards. It may mix educational content with non-educational material, give a shallow overview of potentially useful topics, or present information in an incoherent writing style.
3 points: the page is suitable for educational use and introduces key concepts that might be taught in some school curricula, or practical information useful for personal development. Its content is coherent but may not be comprehensive, or may include some irrelevant information. It may resemble a short excerpt from a textbook: usable for study but with clear limitations, such as overly complex concepts or overly specific, unimportant events.
4 points: the page is highly relevant to education, beneficial for personal learning and development, and exhibits a clear, consistent writing style. It may resemble a chapter of a textbook or a tutorial, offering substantial educational content with minimal irrelevant information, and its concepts are not too advanced for students. The content is coherent, focused, and valuable for structured learning.
5 points: the excerpt has outstanding educational value, fully suitable for teaching at primary school, secondary school, or university level, or for professional study. It follows a detailed reasoning process, its writing style is easy to understand, it offers deep and comprehensive insight into the topic, and it contains no non-educational or impractical content.
Webpage excerpt:
{}
After reviewing this webpage excerpt: briefly justify your score in no more than 100 words, ending with the format "Educational score: <score>". Assign the score systematically according to the criteria listed above.
```
After merging all the data, the sample score distribution is shown below. Texts with scores of 3 and above were selected, totaling 188 million entries (about 420 billion tokens). This data is not only extensive but also carefully filtered and deduplicated, ensuring the dataset's high quality and uniqueness. The scored data is used to train large-scale language models, helping them achieve superior performance on various tasks.
<p align="center">
<img width="900px" alt="experiment" src="./distribution.png">
</p>
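The card does not publish the filtering code, but the selection step it describes, keeping only texts that the judge model scores 3 or higher, can be sketched as a simple parse of the prompt's required closing line. The `scored_documents` iterable below is hypothetical:

```python
import re

def parse_edu_score(judgement: str):
    """Extract the final '教育得分: <n>' / 'Educational score: <n>' from a
    judge-model response; returns None when no score is present."""
    m = re.search(r"(?:教育得分|Educational score)\s*[::]\s*([0-5])", judgement)
    return int(m.group(1)) if m else None

kept = []
for text, judgement in scored_documents:  # hypothetical (document, model output) pairs
    score = parse_edu_score(judgement)
    if score is not None and score >= 3:  # the card keeps texts scoring 3+
        kept.append(text)
```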
## Expanded Data Sources
The range of data sources for the Fineweb2 dataset has been further extended. Compared to the original Fineweb, Fineweb2 introduces massive datasets from various fields and sources, including Industry2, CCI3, michao, wanjuan1.0, wudao, and ChineseWebText. These datasets cover a broader range of industries and domains, enhancing the diversity and applicability of the dataset.
<p align="center">
<img width="900px" alt="experiment" src="./datasource.png">
</p>
In conclusion, the Fineweb2 dataset not only surpasses its predecessor in scale but also significantly improves data quality, content diversity, and filtering precision. This lays a solid foundation for the further development of Chinese NLP applications and provides researchers with richer resources to explore and optimize various model training methods.
**We warmly invite developers and researchers interested in this field to follow and engage with the community, working together to advance the technology. Stay tuned for the open-source release of the dataset!**
## License Agreement
Usage of the Chinese Fineweb Edu dataset requires adherence to the OpenCSG Community License. The Chinese Fineweb Edu dataset supports commercial use. If you plan to use the OpenCSG model or its derivatives for commercial purposes, you must comply with the terms and conditions outlined in the OpenCSG Community License as well as the Apache 2.0 License. For commercial use, please send an email to lorraineg@opencsg.com and obtain permission.
<a id="chinese"></a>
<p>
</p>
# Chinese Fineweb Edu V2数据集介绍
<p align="center">
<img width="600px" alt="OpenCSG" src="./logo.png">
</p>
<p align="center"><a href="https://opencsg.com/models">[OpenCSG 社区]</a> <a href="https://github.com/OpenCSGs/Awesome-SLMs">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[微信]</a> <a href="https://twitter.com/OpenCsg">[推特]</a> </p>
</div>
<b>Chinese Fineweb Edu v2</b> 是Chinese Fineweb Edu的全新升级版,专为教育领域的自然语言处理(NLP)任务设计和优化的高质量中文预训练数据集。该数据集在前一版本的基础上进行了大规模的改进和扩展,致力于为研究人员和开发者提供更加多样化、广泛适用的教育类语料资源。Fineweb Edu v2 不仅数据量达到**188M条数据**,约**420B tokens**,还优化了数据的筛选方式和打分模型,以确保其在教育领域的有效性和实用性。
## 更强的打分模型
在Chinese Fineweb edu v2版本中,数据筛选的打分模型进行了重大升级,采用了规模更大、性能更强的OpenCSG csg-wukong-enterprise V2模型。该模型的训练数据增加到100万条,涵盖了多种类型的文本,如书籍、新闻、博客,以及25%的英文数据。相比于上一版本的打分模型,csg-wukong-enterprise V2拥有更大的参数量和更深层次的语义理解能力,特别是在中文文本理解和处理方面表现出色。该模型不仅能对文本的结构、内容进行更细致的分析,还能有效捕捉隐藏在语言中的深层次语义和情感信息。
这种提升意味着在数据筛选过程中,模型能够更加精准地评估文本的教育价值、写作质量以及其对实际应用的价值。尤其是在处理教育类、技术类等高要求的文本时,Fineweb2的打分模型确保了筛选结果的高质量和高一致性。这一进步显著提高了数据筛选的可靠性,为后续的模型训练提供了更有力的保障。
## Prompt改进
在Fineweb2数据集的构建过程中,数据筛选环节尤为重要。为确保筛选出真正具有教育价值和实用性的文本,我们对数据筛选的**Prompt设计**进行了细致的优化。新的Prompt能够更加准确地评估网页内容的**教育价值、写作水平和实用性**,从而使筛选过程更加细化和精确。
新的Prompt不仅明确了对教育内容的评分标准,还对文本的写作风格、连贯性以及主题深度提出了要求。具体评分标准如下:
```Plain
Below is an excerpt of webpage content. Please use the following 5-point scoring system to evaluate the webpage's writing quality, educational value, and practicality:
0 points: if the webpage provides no educational value and consists entirely of irrelevant information (such as advertisements, promotional material, or content unsuitable for minors).
1 point: if the webpage provides some basic information that may have educational value, but contains a large amount of irrelevant or non-academic content (such as advertisements and promotional material).
2 points: if the webpage touches on some education-related elements but does not align well with educational standards. It may mix educational content with non-educational material, give a shallow overview of potentially useful topics, or present information in an incoherent writing style.
3 points: if the webpage is suitable for educational use and introduces key concepts that might be learned in some school curricula, or practical information useful for personal development. Its content is coherent but may not be comprehensive, or may include some irrelevant information. It may resemble a short excerpt from a textbook that can be studied but has clear limitations, such as covering overly complex concepts or overly specific, unimportant events.
4 points: if the webpage is highly relevant to education and beneficial to personal learning and development, with a clear and consistent writing style. It may resemble a chapter of a textbook or a tutorial, providing substantial educational content with minimal irrelevant information, and its concepts are not too advanced for students. The content is coherent and focused, and valuable for structured learning.
5 points: if the excerpt shows outstanding educational value and is fully suitable for teaching at primary school, secondary school, or university level, or for professional learning. It follows a detailed reasoning process, is written in an easily understandable style, offers deep and comprehensive insight into the topic, and contains no non-educational or impractical content.
Webpage content excerpt:
{}
After reviewing this webpage excerpt: please briefly justify your score in no more than 100 words, ending with the format "Educational Score: <score>". Please assign the score systematically based on the listed criteria.
```
After merging all datasets, the sample score distribution was as shown below. After scoring the data with the csg-wukong-enterprise V2 model, texts scoring **3 and above** were selected, totaling **188M entries**, about **420B tokens**. This data is not only vast in quantity but has also undergone strict filtering and deduplication, ensuring the dataset's **high quality and uniqueness**. The scored data will be used within the Fineweb2 dataset to train large-scale language models, helping them achieve higher performance on all kinds of tasks.
<p align="center">
<img width="900px" alt="experiment" src="./distribution.png">
</p>
## Expanded Data Filtering Scope
The data sources of the Fineweb2 dataset have been further expanded. Compared with the original Fineweb, Fineweb2 introduces massive data from many different domains and sources, adding high-quality datasets such as **Industry2, CCI3, michao, wanjuan1.0, wudao, and ChineseWebText**. These datasets cover a broader range of industries and domains, increasing the dataset's diversity and broad applicability.
<p align="center">
<img width="900px" alt="experiment" src="./datasource.png">
</p>
Ultimately, the Fineweb2 dataset not only far exceeds its predecessor in scale, but also shows marked improvements in data quality, content diversity, and filtering precision. This lays a solid foundation for the further development of Chinese NLP applications and gives researchers richer resources for exploring and optimizing various model training methods.
**We warmly invite developers and researchers interested in this field to follow and engage with the community, working together to advance the technology. Stay tuned for the open-source release of the dataset!**
## License Agreement
Usage of the Chinese Fineweb Edu V2 dataset requires adherence to the OpenCSG Community License. The Chinese Fineweb Edu V2 dataset supports commercial use. If you plan to use the OpenCSG model or its derivatives for commercial purposes, you must comply with the terms and conditions outlined in the OpenCSG Community License as well as the Apache 2.0 License. For commercial use, please send an email to lorraineg@opencsg.com and obtain permission.
|
lmms-lab/LLaVA-OneVision-Data | lmms-lab | "2024-10-22T06:47:46Z" | 15,987 | 142 | [
"language:en",
"language:zh",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2408.03326",
"arxiv:2310.05126",
"region:us"
] | null | "2024-07-25T15:25:28Z" | ---
language:
- en
- zh
license: apache-2.0
pretty_name: llava-onevision-data
dataset_info:
- config_name: CLEVR-Math(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 791346970
num_examples: 5280
download_size: 441208499
dataset_size: 791346970
- config_name: FigureQA(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 463326576.625
num_examples: 17587
download_size: 258197193
dataset_size: 463326576.625
- config_name: GEOS(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1503641
num_examples: 498
download_size: 684471
dataset_size: 1503641
- config_name: GeoQA+(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 53579705.75
num_examples: 17162
download_size: 33480538
dataset_size: 53579705.75
- config_name: Geometry3K(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 218085473.5
num_examples: 9724
download_size: 125914780
dataset_size: 218085473.5
- config_name: IconQA(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 208430568.375
num_examples: 22589
download_size: 117222488
dataset_size: 208430568.375
- config_name: MapQA(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 384120915.875
num_examples: 5225
download_size: 215768443
dataset_size: 384120915.875
- config_name: PMC-VQA(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 571444866.5
num_examples: 35948
download_size: 326541003
dataset_size: 571444866.5
- config_name: Super-CLEVR(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2795082410.75
num_examples: 8642
download_size: 1580301917
dataset_size: 2795082410.75
- config_name: TabMWP(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 307726997.5
num_examples: 22452
download_size: 173938487
dataset_size: 307726997.5
- config_name: UniGeo(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 38296693.375
num_examples: 11949
download_size: 24170743
dataset_size: 38296693.375
- config_name: VisualWebInstruct(filtered)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 36317112275.0
num_examples: 263584
download_size: 36239916454
dataset_size: 36317112275.0
- config_name: VizWiz(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1170333936.5
num_examples: 6604
download_size: 660752297
dataset_size: 1170333936.5
- config_name: ai2d(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 438572782.375
num_examples: 2429
download_size: 437348514
dataset_size: 438572782.375
- config_name: ai2d(gpt4v)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 866076731
num_examples: 4864
download_size: 860306578
dataset_size: 866076731
- config_name: ai2d(internvl)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1832787249.625
num_examples: 12403
download_size: 527493895
dataset_size: 1832787249.625
- config_name: allava_instruct_laion4v
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 5981767621.25
num_examples: 49990
download_size: 5873046236
dataset_size: 5981767621.25
- config_name: allava_instruct_vflan4v
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2680974558.25
num_examples: 19990
download_size: 2670088751
dataset_size: 2680974558.25
- config_name: aokvqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 6896420844.25
num_examples: 16534
download_size: 6894236970
dataset_size: 6896420844.25
- config_name: chart2text(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1145458729.5
num_examples: 26956
download_size: 1123681047
dataset_size: 1145458729.5
- config_name: chartqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 815335215.5
num_examples: 18260
download_size: 803084541
dataset_size: 815335215.5
- config_name: chrome_writting
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 44422597.875
num_examples: 8825
download_size: 39611257
dataset_size: 44422597.875
- config_name: clevr(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 10528974543.625
num_examples: 69995
download_size: 10460536445
dataset_size: 10528974543.625
- config_name: diagram_image_to_text(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 18858266
num_examples: 295
download_size: 18659115
dataset_size: 18858266
- config_name: dvqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 4487270615.625
num_examples: 199995
download_size: 4277056467
dataset_size: 4487270615.625
- config_name: figureqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2351194509.625
num_examples: 99995
download_size: 2222640639
dataset_size: 2351194509.625
- config_name: geo170k(align)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 204236256.75
num_examples: 60242
download_size: 58185410
dataset_size: 204236256.75
- config_name: geo170k(qa)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 266040519.125
num_examples: 67823
download_size: 160022430
dataset_size: 266040519.125
- config_name: geo3k
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 42634333.625
num_examples: 2091
download_size: 41097851
dataset_size: 42634333.625
- config_name: geomverse(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2263893609.75
num_examples: 9298
download_size: 2211726352
dataset_size: 2263893609.75
- config_name: hateful_memes(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 3057252325.125
num_examples: 8495
download_size: 3055839880
dataset_size: 3057252325.125
- config_name: hitab(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 161706881.125
num_examples: 2495
download_size: 157871287
dataset_size: 161706881.125
- config_name: hme100k
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 273229915.5
num_examples: 74492
download_size: 241005430
dataset_size: 273229915.5
- config_name: iam(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1131633206.75
num_examples: 5658
download_size: 1128371221
dataset_size: 1131633206.75
- config_name: iconqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 331284932.25
num_examples: 27302
download_size: 327005220
dataset_size: 331284932.25
- config_name: iiit5k
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 21821437.25
num_examples: 1990
download_size: 21623116
dataset_size: 21821437.25
- config_name: image_textualization(filtered)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 5218283253.375
num_examples: 99573
download_size: 5164176816
dataset_size: 5218283253.375
- config_name: infographic(gpt4v)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 713657496.25
num_examples: 1982
download_size: 656276080
dataset_size: 713657496.25
- config_name: infographic_vqa
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1528953078.75
num_examples: 4394
download_size: 1419340319
dataset_size: 1528953078.75
- config_name: infographic_vqa_llava_format
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1765315696.875
num_examples: 2113
download_size: 1764548536
dataset_size: 1765315696.875
- config_name: intergps(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 24973395.625
num_examples: 1275
download_size: 24736545
dataset_size: 24973395.625
- config_name: k12_printing
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1205153118.5
num_examples: 256636
download_size: 1108572712
dataset_size: 1205153118.5
- config_name: llavar_gpt4_20k
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 633833350.25
num_examples: 19790
download_size: 625365542
dataset_size: 633833350.25
- config_name: lrv_chart
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 99338686
num_examples: 1776
download_size: 97979446
dataset_size: 99338686
- config_name: lrv_normal(filtered)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 422589381.75
num_examples: 10490
download_size: 406958773
dataset_size: 422589381.75
- config_name: magpie_pro(l3_80b_mt)
features:
- name: id
dtype: string
- name: image
dtype: 'null'
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1657129141
num_examples: 299988
download_size: 885893066
dataset_size: 1657129141
- config_name: magpie_pro(l3_80b_st)
features:
- name: id
dtype: string
- name: image
dtype: 'null'
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1033666690
num_examples: 299990
download_size: 562771564
dataset_size: 1033666690
- config_name: magpie_pro(qwen2_72b_st)
features:
- name: id
dtype: string
- name: image
dtype: 'null'
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 703489344
num_examples: 299982
download_size: 361433408
dataset_size: 703489344
- config_name: mapqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 3355751195.5
num_examples: 37412
download_size: 3305639218
dataset_size: 3355751195.5
- config_name: mathqa
features:
- name: id
dtype: string
- name: image
dtype: 'null'
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 18318538
num_examples: 29827
download_size: 7857130
dataset_size: 18318538
- config_name: mavis_math_metagen
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2304025372.5
num_examples: 87348
download_size: 322776224
dataset_size: 2304025372.5
- config_name: mavis_math_rule_geo
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 14313211512.25
num_examples: 99990
download_size: 5841283073
dataset_size: 14313211512.25
- config_name: multihiertt(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 300319803.25
num_examples: 7614
download_size: 295638314
dataset_size: 300319803.25
- config_name: orand_car_a
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 23602442.125
num_examples: 1999
download_size: 23333412
dataset_size: 23602442.125
- config_name: raven(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1706160514.625
num_examples: 41995
download_size: 1693150088
dataset_size: 1706160514.625
- config_name: rendered_text(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 11082594894.625
num_examples: 9995
download_size: 11081962044
dataset_size: 11082594894.625
- config_name: robut_sqa(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 685580779.375
num_examples: 8509
download_size: 678666263
dataset_size: 685580779.375
- config_name: robut_wikisql(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 6200499653
num_examples: 74984
download_size: 6168399217
dataset_size: 6200499653
- config_name: robut_wtq(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 4091776188.875
num_examples: 38241
download_size: 4062777449
dataset_size: 4091776188.875
- config_name: scienceqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 286843125.625
num_examples: 4971
download_size: 282896809
dataset_size: 286843125.625
- config_name: scienceqa(nona_context)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2111029055
num_examples: 19208
download_size: 2053942726
dataset_size: 2111029055
- config_name: screen2words(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 7977502095.375
num_examples: 15725
download_size: 7962327904
dataset_size: 7977502095.375
- config_name: sharegpt4o
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 6968025789.5
num_examples: 57284
download_size: 6772195470
dataset_size: 6968025789.5
- config_name: sharegpt4v(coco)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2620153362.875
num_examples: 50017
download_size: 2595583499
dataset_size: 2620153362.875
- config_name: sharegpt4v(knowledge)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 372100773.5
num_examples: 1988
download_size: 369799318
dataset_size: 372100773.5
- config_name: sharegpt4v(llava)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 781795487.25
num_examples: 29990
download_size: 400344187
dataset_size: 781795487.25
- config_name: sharegpt4v(sam)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 4437405218.25
num_examples: 8990
download_size: 4428597081
dataset_size: 4437405218.25
- config_name: sroie
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 117810195
num_examples: 33616
download_size: 103647636
dataset_size: 117810195
- config_name: st_vqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 5771194098.75
num_examples: 17242
download_size: 5768888141
dataset_size: 5771194098.75
- config_name: tabmwp(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 311192518.375
num_examples: 22717
download_size: 306092255
dataset_size: 311192518.375
- config_name: tallyqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 35998988065.625
num_examples: 98675
download_size: 35982430394
dataset_size: 35998988065.625
- config_name: textcaps
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2222268476.25
num_examples: 21942
download_size: 2217838132
dataset_size: 2222268476.25
- config_name: textocr(gpt4v)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2581655353
num_examples: 25104
download_size: 2574418106
dataset_size: 2581655353
- config_name: tqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 331203026.25
num_examples: 27302
download_size: 326999466
dataset_size: 331203026.25
- config_name: ureader_cap
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 9269857109.75
num_examples: 91434
download_size: 2292099971
dataset_size: 9269857109.75
- config_name: ureader_ie
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 11871457209.75
num_examples: 17322
download_size: 1999083115
dataset_size: 11871457209.75
- config_name: vision_flan(filtered)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 24847242604.5
num_examples: 186060
download_size: 24750561877
dataset_size: 24847242604.5
- config_name: vistext(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 550187184.5
num_examples: 9964
download_size: 452795103
dataset_size: 550187184.5
- config_name: visual7w(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 4451436523.875
num_examples: 14361
download_size: 4441971985
dataset_size: 4451436523.875
- config_name: visualmrc(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2938154124.25
num_examples: 3022
download_size: 2909296079
dataset_size: 2938154124.25
- config_name: vqarad(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 95533417
num_examples: 308
download_size: 95410398
dataset_size: 95533417
- config_name: vsr(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 891981646
num_examples: 2152
download_size: 891572866
dataset_size: 891981646
- config_name: websight(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 11209715828.625
num_examples: 9995
download_size: 11144460985
dataset_size: 11209715828.625
configs:
- config_name: CLEVR-Math(MathV360K)
data_files:
- split: train
path: CLEVR-Math(MathV360K)/train-*
- config_name: FigureQA(MathV360K)
data_files:
- split: train
path: FigureQA(MathV360K)/train-*
- config_name: GEOS(MathV360K)
data_files:
- split: train
path: GEOS(MathV360K)/train-*
- config_name: GeoQA+(MathV360K)
data_files:
- split: train
path: GeoQA+(MathV360K)/train-*
- config_name: Geometry3K(MathV360K)
data_files:
- split: train
path: Geometry3K(MathV360K)/train-*
- config_name: IconQA(MathV360K)
data_files:
- split: train
path: IconQA(MathV360K)/train-*
- config_name: MapQA(MathV360K)
data_files:
- split: train
path: MapQA(MathV360K)/train-*
- config_name: PMC-VQA(MathV360K)
data_files:
- split: train
path: PMC-VQA(MathV360K)/train-*
- config_name: Super-CLEVR(MathV360K)
data_files:
- split: train
path: Super-CLEVR(MathV360K)/train-*
- config_name: TabMWP(MathV360K)
data_files:
- split: train
path: TabMWP(MathV360K)/train-*
- config_name: UniGeo(MathV360K)
data_files:
- split: train
path: UniGeo(MathV360K)/train-*
- config_name: VisualWebInstruct(filtered)
data_files:
- split: train
path: VisualWebInstruct(filtered)/train-*
- config_name: VizWiz(MathV360K)
data_files:
- split: train
path: VizWiz(MathV360K)/train-*
- config_name: ai2d(cauldron,llava_format)
data_files:
- split: train
path: ai2d(cauldron,llava_format)/train-*
- config_name: ai2d(gpt4v)
data_files:
- split: train
path: ai2d(gpt4v)/train-*
- config_name: ai2d(internvl)
data_files:
- split: train
path: ai2d(internvl)/train-*
- config_name: allava_instruct_laion4v
data_files:
- split: train
path: allava_instruct_laion4v/train-*
- config_name: allava_instruct_vflan4v
data_files:
- split: train
path: allava_instruct_vflan4v/train-*
- config_name: aokvqa(cauldron,llava_format)
data_files:
- split: train
path: aokvqa(cauldron,llava_format)/train-*
- config_name: chart2text(cauldron)
data_files:
- split: train
path: chart2text(cauldron)/train-*
- config_name: chartqa(cauldron,llava_format)
data_files:
- split: train
path: chartqa(cauldron,llava_format)/train-*
- config_name: chrome_writting
data_files:
- split: train
path: chrome_writting/train-*
- config_name: clevr(cauldron,llava_format)
data_files:
- split: train
path: clevr(cauldron,llava_format)/train-*
- config_name: diagram_image_to_text(cauldron)
data_files:
- split: train
path: diagram_image_to_text(cauldron)/train-*
- config_name: dvqa(cauldron,llava_format)
data_files:
- split: train
path: dvqa(cauldron,llava_format)/train-*
- config_name: figureqa(cauldron,llava_format)
data_files:
- split: train
path: figureqa(cauldron,llava_format)/train-*
- config_name: geo170k(align)
data_files:
- split: train
path: geo170k(align)/train-*
- config_name: geo170k(qa)
data_files:
- split: train
path: geo170k(qa)/train-*
- config_name: geo3k
data_files:
- split: train
path: geo3k/train-*
- config_name: geomverse(cauldron)
data_files:
- split: train
path: geomverse(cauldron)/train-*
- config_name: hateful_memes(cauldron,llava_format)
data_files:
- split: train
path: hateful_memes(cauldron,llava_format)/train-*
- config_name: hitab(cauldron,llava_format)
data_files:
- split: train
path: hitab(cauldron,llava_format)/train-*
- config_name: hme100k
data_files:
- split: train
path: hme100k/train-*
- config_name: iam(cauldron)
data_files:
- split: train
path: iam(cauldron)/train-*
- config_name: iconqa(cauldron,llava_format)
data_files:
- split: train
path: iconqa(cauldron,llava_format)/train-*
- config_name: iiit5k
data_files:
- split: train
path: iiit5k/train-*
- config_name: image_textualization(filtered)
data_files:
- split: train
path: image_textualization(filtered)/train-*
- config_name: infographic(gpt4v)
data_files:
- split: train
path: infographic(gpt4v)/train-*
- config_name: infographic_vqa
data_files:
- split: train
path: infographic_vqa/train-*
- config_name: infographic_vqa_llava_format
data_files:
- split: train
path: infographic_vqa_llava_format/train-*
- config_name: intergps(cauldron,llava_format)
data_files:
- split: train
path: intergps(cauldron,llava_format)/train-*
- config_name: k12_printing
data_files:
- split: train
path: k12_printing/train-*
- config_name: llavar_gpt4_20k
data_files:
- split: train
path: llavar_gpt4_20k/train-*
- config_name: lrv_chart
data_files:
- split: train
path: lrv_chart/train-*
- config_name: lrv_normal(filtered)
data_files:
- split: train
path: lrv_normal(filtered)/train-*
- config_name: magpie_pro(l3_80b_mt)
data_files:
- split: train
path: magpie_pro(l3_80b_mt)/train-*
- config_name: magpie_pro(l3_80b_st)
data_files:
- split: train
path: magpie_pro(l3_80b_st)/train-*
- config_name: magpie_pro(qwen2_72b_st)
data_files:
- split: train
path: magpie_pro(qwen2_72b_st)/train-*
- config_name: mapqa(cauldron,llava_format)
data_files:
- split: train
path: mapqa(cauldron,llava_format)/train-*
- config_name: mathqa
data_files:
- split: train
path: mathqa/train-*
- config_name: mavis_math_metagen
data_files:
- split: train
path: mavis_math_metagen/train-*
- config_name: mavis_math_rule_geo
data_files:
- split: train
path: mavis_math_rule_geo/train-*
- config_name: multihiertt(cauldron)
data_files:
- split: train
path: multihiertt(cauldron)/train-*
- config_name: orand_car_a
data_files:
- split: train
path: orand_car_a/train-*
- config_name: raven(cauldron)
data_files:
- split: train
path: raven(cauldron)/train-*
- config_name: rendered_text(cauldron)
data_files:
- split: train
path: rendered_text(cauldron)/train-*
- config_name: robut_sqa(cauldron)
data_files:
- split: train
path: robut_sqa(cauldron)/train-*
- config_name: robut_wikisql(cauldron)
data_files:
- split: train
path: robut_wikisql(cauldron)/train-*
- config_name: robut_wtq(cauldron,llava_format)
data_files:
- split: train
path: robut_wtq(cauldron,llava_format)/train-*
- config_name: scienceqa(cauldron,llava_format)
data_files:
- split: train
path: scienceqa(cauldron,llava_format)/train-*
- config_name: scienceqa(nona_context)
data_files:
- split: train
path: scienceqa(nona_context)/train-*
- config_name: screen2words(cauldron)
data_files:
- split: train
path: screen2words(cauldron)/train-*
- config_name: sharegpt4o
data_files:
- split: train
path: sharegpt4o/train-*
- config_name: sharegpt4v(coco)
data_files:
- split: train
path: sharegpt4v(coco)/train-*
- config_name: sharegpt4v(knowledge)
data_files:
- split: train
path: sharegpt4v(knowledge)/train-*
- config_name: sharegpt4v(llava)
data_files:
- split: train
path: sharegpt4v(llava)/train-*
- config_name: sharegpt4v(sam)
data_files:
- split: train
path: sharegpt4v(sam)/train-*
- config_name: sroie
data_files:
- split: train
path: sroie/train-*
- config_name: st_vqa(cauldron,llava_format)
data_files:
- split: train
path: st_vqa(cauldron,llava_format)/train-*
- config_name: tabmwp(cauldron)
data_files:
- split: train
path: tabmwp(cauldron)/train-*
- config_name: tallyqa(cauldron,llava_format)
data_files:
- split: train
path: tallyqa(cauldron,llava_format)/train-*
- config_name: textcaps
data_files:
- split: train
path: textcaps/train-*
- config_name: textocr(gpt4v)
data_files:
- split: train
path: textocr(gpt4v)/train-*
- config_name: tqa(cauldron,llava_format)
data_files:
- split: train
path: tqa(cauldron,llava_format)/train-*
- config_name: ureader_cap
data_files:
- split: train
path: ureader_cap/train-*
- config_name: ureader_ie
data_files:
- split: train
path: ureader_ie/train-*
- config_name: vision_flan(filtered)
data_files:
- split: train
path: vision_flan(filtered)/train-*
- config_name: vistext(cauldron)
data_files:
- split: train
path: vistext(cauldron)/train-*
- config_name: visual7w(cauldron,llava_format)
data_files:
- split: train
path: visual7w(cauldron,llava_format)/train-*
- config_name: visualmrc(cauldron)
data_files:
- split: train
path: visualmrc(cauldron)/train-*
- config_name: vqarad(cauldron,llava_format)
data_files:
- split: train
path: vqarad(cauldron,llava_format)/train-*
- config_name: vsr(cauldron,llava_format)
data_files:
- split: train
path: vsr(cauldron,llava_format)/train-*
- config_name: websight(cauldron)
data_files:
- split: train
path: websight(cauldron)/train-*
---
# Dataset Card for LLaVA-OneVision
**[2024-09-01]: Uploaded VisualWebInstruct(filtered), which is used in the OneVision stage**
> Almost all subsets are uploaded in HF's required format; you can use the recommended interface to download them and follow our code below to convert them.
> The `ureader_kg` and `ureader_qa` subsets are uploaded as processed JSONs plus tar.gz archives of the image folders.
> You may download them directly from the following URL.
> https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data/tree/main/ureader_kg
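For those raw files, a minimal programmatic alternative is sketched below, using `huggingface_hub`'s `snapshot_download` to fetch only the `ureader_kg` folder; pulling `ureader_qa` works the same way.

```python
from huggingface_hub import snapshot_download

# Download only the ureader_kg folder (processed JSONs + image tar.gz)
# from the dataset repo; other subsets are skipped via allow_patterns.
local_dir = snapshot_download(
    repo_id="lmms-lab/LLaVA-OneVision-Data",
    repo_type="dataset",
    allow_patterns="ureader_kg/*",
)
print(f"Files downloaded under: {local_dir}")
```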
In this dataset, we include the data splits used in both the final image stage and the OneVision stage. For more details, please check our [paper](https://arxiv.org/abs/2408.03326) and our [training doc](https://github.com/LLaVA-VL/LLaVA-NeXT/tree/main/scripts/train#about-the-llava-onevision-data).
## Dataset Description
- **Curated by:** Bo Li, Kaichen Zhang, Hao Zhang, Yuanhan Zhang, Renrui Zhang, Feng Li, Dong Guo
- **Language(s) (NLP):** English, Chinese
- **License:** Apache License 2.0
## Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Dataset Collection:** We include a few subsets from the existing dataset collections [Cambrian](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M), [Cauldron](https://huggingface.co/datasets/HuggingFaceM4/the_cauldron), and [UReader](https://arxiv.org/abs/2310.05126). Since we only used a few subsets from these datasets and applied a cleaning and re-annotation process, we uploaded our processed versions to our own repository, and we thank the authors for providing the original datasets.
- **Other Datasets:** For the remaining single-source datasets, such as AI2D and OKVQA, we cite and link the original sources in our paper.
## Uses
This dataset is used for the training of the LLaVA-OneVision model. We only allow the use of this dataset for academic research and educational purposes. For OpenAI GPT-4 generated data, we recommend that users check the [OpenAI Usage Policy](https://openai.com/policies/usage-policies/).
## Dataset Structure
We explain the data composition for the mid stage and final stage in the [**training doc**](https://github.com/LLaVA-VL/LLaVA-NeXT/tree/main/scripts/train#about-the-llava-onevision-data) in our repo.
### Statistics
We provide statistics of the dataset in the following figures, and refer readers to our paper for more details.
![](https://i.postimg.cc/2y989XZJ/WX20240802-145215-2x.png)
![](https://i.postimg.cc/MZ9TGXFD/WX20240802-145226-2x.png)
### Code Guidance
To help the audience better understand our dataset, we upload it in a Hugging Face Datasets-compatible format. During LLaVA-OneVision training, we use the `json` and `image/video` folders to store the data.
> The `ureader_kg` and `ureader_qa` subsets are uploaded as processed JSONs plus tar.gz archives of the image folders. You may download them directly from the following URL.
> https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data/tree/main/ureader_kg
Here we provide code guidance for converting the dataset into the LLaVA-OneVision format so that the LLaVA-OneVision model can be trained on the converted data.
```python
import os
import json
from datasets import load_dataset
from tqdm import tqdm

# Each subset is a separate config; pick the one you need,
# e.g. "CLEVR-Math(MathV360K)". Loading this multi-config repo
# without a config name raises an error.
data = load_dataset("lmms-lab/LLaVA-OneVision-Data", "CLEVR-Math(MathV360K)", split="train")

image_folder = "<your_image_folder>"
os.makedirs(image_folder, exist_ok=True)

converted_data = []
for da in tqdm(data):
    json_data = {}
    json_data["id"] = da["id"]
    # Text-only subsets (e.g. the magpie_pro configs) store image as None.
    if da["image"] is not None:
        json_data["image"] = f"{da['id']}.jpg"
        da["image"].save(os.path.join(image_folder, json_data["image"]))
    json_data["conversations"] = da["conversations"]
    converted_data.append(json_data)

with open("<your_json_file>.json", "w") as f:
    json.dump(converted_data, f, indent=4, ensure_ascii=False)
```
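To convert every subset rather than a single one, the available config names can be enumerated programmatically. A minimal sketch (the `convert` helper is hypothetical, standing in for the conversion loop above):

```python
from datasets import get_dataset_config_names, load_dataset

# Enumerate all subset (config) names declared in the repo.
for config in get_dataset_config_names("lmms-lab/LLaVA-OneVision-Data"):
    subset = load_dataset("lmms-lab/LLaVA-OneVision-Data", config, split="train")
    print(f"{config}: {len(subset)} samples")
    # convert(subset)  # hypothetical helper wrapping the conversion loop above
```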
## Citation
**BibTeX:**
[More Information Needed]
## Glossary
The dataset collection process was conducted by all of the authors. We thank Feng Li and Renrui Zhang for providing the [LLaVA-M4-Instruct Data](https://huggingface.co/datasets/lmms-lab/M4-Instruct-Data) and Yuanhan Zhang for providing the [Video datasets](https://huggingface.co/datasets/lmms-lab/LLaVA-Video-178K).
After the dataset collection, the cleaning and re-annotation process, including the final mixture of the dataset, was conducted by Bo Li with the great help of Kaichen Zhang.
## Dataset Card Authors
The dataset is curated by the following authors:
Bo Li, Kaichen Zhang, Hao Zhang, Yuanhan Zhang, Renrui Zhang, Feng Li
## Dataset Card Contact
[Bo Li](https://brianboli.com/): drluodian@gmail.com
[Kaichen Zhang](https://www.linkedin.com/in/kaichen-zhang-014b17219/?originalSubdomain=sg) |
DL3DV/DL3DV-ALL-960P | DL3DV | "2024-09-02T19:11:31Z" | 15,980 | 9 | [
"size_categories:n>1T",
"region:us",
"3D Vision",
"NeRF",
"3D Gaussian",
"Dataset",
"Novel View Synthesis",
"Text to 3D",
"Image to 3D"
] | null | "2024-02-25T07:47:52Z" | ---
tags:
- 3D Vision
- NeRF
- 3D Gaussian
- Dataset
- Novel View Synthesis
- Text to 3D
- Image to 3D
pretty_name: Dl3DV-Dataset
size_categories:
- n>1T
---
# DL3DV-Dataset
This repo contains all the 960P frames with camera poses from the DL3DV-10K dataset. We are working hard to review the entire dataset to remove any sensitive information. Thank you for your patience.
# Download
If you have enough space, you can use git to download the dataset from Hugging Face. See this [link](https://huggingface.co/docs/hub/en/datasets-downloading). The [480P](https://huggingface.co/datasets/DL3DV/DL3DV-ALL-480P)/[960P](https://huggingface.co/datasets/DL3DV/DL3DV-ALL-960P) versions should satisfy most needs.
If you do not have enough space, we also provide a [download script](https://github.com/DL3DV-10K/Dataset/blob/main/scripts/download.py) to download a subset. Usage:
```Bash
usage: download.py [-h] --odir ODIR --subset {1K,2K,3K,4K,5K,6K,7K,8K,9K,10K} --resolution {4K,2K,960P,480P} --file_type {images+poses,video,colmap_cache} [--hash HASH]
[--clean_cache]
optional arguments:
-h, --help show this help message and exit
--odir ODIR output directory
--subset {1K,2K,3K,4K,5K,6K,7K,8K,9K,10K}
The subset of the benchmark to download
--resolution {4K,2K,960P,480P}
                        The resolution to download
--file_type {images+poses,video,colmap_cache}
The file type to download
--hash HASH If set subset=hash, this is the hash code of the scene to download
--clean_cache If set, will clean the huggingface cache to save space
```
Here are some examples:
```Bash
# Make sure you have applied for the access.
# Use this to download the download.py script
wget https://raw.githubusercontent.com/DL3DV-10K/Dataset/main/scripts/download.py
# Download 960P resolution images and poses, 0~1K subset, output to DL3DV-10K directory
python download.py --odir DL3DV-10K --subset 1K --resolution 960P --file_type images+poses --clean_cache
# Download 960P resolution images and poses, 1K~2K subset, output to DL3DV-10K directory
python download.py --odir DL3DV-10K --subset 2K --resolution 960P --file_type images+poses --clean_cache
```
You can also download a specific scene with its hash. The scene-hash pair visualization can be found [here](https://htmlpreview.github.io/?https://github.com/DL3DV-10K/Dataset/blob/main/visualize/index.html).
```Bash
python download.py --odir DL3DV-10K --subset 2K --resolution 960P --file_type images+poses --hash e2cedefea8a0ed2d0ffbd5bdc08acbe7e1f85c96f72f7b790e9dfe1c98963047 --clean_cache
```
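If you need several consecutive subsets, the script can also be driven from Python. This is a minimal sketch that simply shells out to `download.py` with the same flags as the examples above; the subset list and output directory are arbitrary choices.

```python
import subprocess

# Fetch the first four 1K-scene subsets at 960P, cleaning the HF cache
# between runs to save disk space (mirrors the CLI examples above).
for subset in ["1K", "2K", "3K", "4K"]:
    subprocess.run(
        [
            "python", "download.py",
            "--odir", "DL3DV-10K",
            "--subset", subset,
            "--resolution", "960P",
            "--file_type", "images+poses",
            "--clean_cache",
        ],
        check=True,
    )
```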
# News
- [x] DL3DV-1K, 2K, 3K, 4K
- [ ] DL3DV-5K ~ 10K
|
OpenGVLab/ShareGPT-4o | OpenGVLab | "2024-08-17T07:51:28Z" | 15,957 | 150 | [
"task_categories:visual-question-answering",
"task_categories:question-answering",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"visual-question-answering",
"question-answering"
] | "2024-05-28T07:51:06Z" | ---
license: mit
extra_gated_prompt: >-
You agree to not use the dataset to conduct experiments that cause harm to
human subjects. Please note that the data in this dataset may be subject to
other agreements. Before using the data, be sure to read the relevant
agreements carefully to ensure compliant use. Video copyrights belong to the
original video creators or platforms and are for academic research use only.
task_categories:
- visual-question-answering
- question-answering
extra_gated_fields:
Name: text
Company/Organization: text
Country: text
E-Mail: text
language:
- en
size_categories:
- 100K<n<1M
configs:
- config_name: image_caption
data_files:
- split: images
path: image_conversations/gpt-4o.jsonl
- config_name: video_caption
data_files:
- split: ptest
path: video_conversations/gpt4o.jsonl
--- |
BramVanroy/wikipedia_culturax_dutch | BramVanroy | "2024-04-17T20:21:01Z" | 15,940 | 3 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:nl",
"size_categories:1B<n<10B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation",
"text2text-generation"
] | "2024-03-25T22:11:29Z" | ---
language:
- nl
size_categories:
- 10B<n<100B
task_categories:
- text-generation
- text2text-generation
pretty_name: Filtered CulturaX + Wikipedia for Dutch
dataset_info:
- config_name: 100M
features:
- name: text
dtype: string
- name: url
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 738455828.5851797
num_examples: 1018200
- name: test
num_bytes: 7458534.414820259
num_examples: 10284
download_size: 411183119
dataset_size: 745914363.0
- config_name: 100k
features:
- name: text
dtype: string
- name: url
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 745955.3074739829
num_examples: 1047
- name: test
num_bytes: 7124.692526017029
num_examples: 10
download_size: 366788
dataset_size: 753080.0
- config_name: 10B
features:
- name: text
dtype: string
- name: url
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 66539945646.34457
num_examples: 40176566
- name: test
num_bytes: 105996030.65543362
num_examples: 64000
download_size: 42132184504
dataset_size: 66645941677.0
- config_name: 10M
features:
- name: text
dtype: string
- name: url
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 76734151.72157606
num_examples: 139851
- name: test
num_bytes: 774743.2784239326
num_examples: 1412
download_size: 37995388
dataset_size: 77508895.0
- config_name: 10k
features:
- name: text
dtype: string
- name: url
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 72048.30379746835
num_examples: 78
- name: test
num_bytes: 5896
num_examples: 1
download_size: 47197
dataset_size: 77944.30379746835
- config_name: 15B
features:
- name: text
dtype: string
- name: url
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 99730049355.25276
num_examples: 59584123
- name: test
num_bytes: 107121206.74724333
num_examples: 64000
download_size: 63139415312
dataset_size: 99837170562.0
- config_name: 1B
features:
- name: text
dtype: string
- name: url
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 6797502496.392602
num_examples: 5102360
- name: test
num_bytes: 68660322.60739774
num_examples: 51538
download_size: 4260450464
dataset_size: 6866162819.0
- config_name: 1M
features:
- name: text
dtype: string
- name: url
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 7442665.619329753
num_examples: 10694
- name: test
num_bytes: 75164.38067024625
num_examples: 108
download_size: 3845466
dataset_size: 7517830.0
- config_name: 20B
features:
- name: text
dtype: string
- name: url
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 132920704365.75093
num_examples: 78991679
- name: test
num_bytes: 107693939.24907027
num_examples: 64000
download_size: 84141456153
dataset_size: 133028398305.0
- config_name: 25B
features:
- name: text
dtype: string
- name: url
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 166111586295.01904
num_examples: 98399236
- name: test
num_bytes: 108040894.98094498
num_examples: 64000
download_size: 105147418131
dataset_size: 166219627190.0
- config_name: 30B
features:
- name: text
dtype: string
- name: url
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 199302582477.5805
num_examples: 117806793
- name: test
num_bytes: 108273597.41950662
num_examples: 64000
download_size: 126152714564
dataset_size: 199410856075.0
- config_name: 35B
features:
- name: text
dtype: string
- name: url
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 232493644456.181
num_examples: 137214350
- name: test
num_bytes: 108440503.81899258
num_examples: 64000
download_size: 147149925109
dataset_size: 232602084960.0
- config_name: 40B
features:
- name: text
dtype: string
- name: url
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 265684747781.7734
num_examples: 156621907
- name: test
num_bytes: 108566063.22660531
num_examples: 64000
download_size: 168152290262
dataset_size: 265793313845.0
- config_name: 45B
features:
- name: text
dtype: string
- name: url
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 298875877641.391
num_examples: 176029463
- name: test
num_bytes: 108663946.60903454
num_examples: 64000
download_size: 189159571162
dataset_size: 298984541588.0
- config_name: 50B
features:
- name: text
dtype: string
- name: url
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 332067028077.12775
num_examples: 195437020
- name: test
num_bytes: 108742395.87226707
num_examples: 64000
download_size: 210160621183
dataset_size: 332175770473.0
- config_name: 55B
features:
- name: text
dtype: string
- name: url
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 365258192681.75964
num_examples: 214844577
- name: test
num_bytes: 108806676.24034382
num_examples: 64000
download_size: 231164757019
dataset_size: 365366999358.0
- config_name: 5B
features:
- name: text
dtype: string
- name: url
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 33351938314.309906
num_examples: 20769009
- name: test
num_bytes: 102774477.69009268
num_examples: 64000
download_size: 21119808690
dataset_size: 33454712792.0
configs:
- config_name: 100M
data_files:
- split: train
path: 100M/train-*
- split: test
path: 100M/test-*
- config_name: 100k
data_files:
- split: train
path: 100k/train-*
- split: test
path: 100k/test-*
- config_name: 10B
data_files:
- split: train
path: 10B/train-*
- split: test
path: 10B/test-*
- config_name: 10M
data_files:
- split: train
path: 10M/train-*
- split: test
path: 10M/test-*
- config_name: 10k
data_files:
- split: train
path: 10k/train-*
- split: test
path: 10k/test-*
- config_name: 15B
data_files:
- split: train
path: 15B/train-*
- split: test
path: 15B/test-*
- config_name: 1B
data_files:
- split: train
path: 1B/train-*
- split: test
path: 1B/test-*
- config_name: 1M
data_files:
- split: train
path: 1M/train-*
- split: test
path: 1M/test-*
- config_name: 20B
data_files:
- split: train
path: 20B/train-*
- split: test
path: 20B/test-*
- config_name: 25B
data_files:
- split: train
path: 25B/train-*
- split: test
path: 25B/test-*
- config_name: 30B
data_files:
- split: train
path: 30B/train-*
- split: test
path: 30B/test-*
- config_name: 35B
data_files:
- split: train
path: 35B/train-*
- split: test
path: 35B/test-*
- config_name: 40B
data_files:
- split: train
path: 40B/train-*
- split: test
path: 40B/test-*
- config_name: 45B
data_files:
- split: train
path: 45B/train-*
- split: test
path: 45B/test-*
- config_name: 50B
data_files:
- split: train
path: 50B/train-*
- split: test
path: 50B/test-*
- config_name: 55B
data_files:
- split: train
path: 55B/train-*
- split: test
path: 55B/test-*
- config_name: 5B
data_files:
- split: train
path: 5B/train-*
- split: test
path: 5B/test-*
---
# Filtered CulturaX + Wikipedia for Dutch
This is a combined and filtered version of [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) and [Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia), only including Dutch. It is intended for the training of LLMs.
Different configs are available based on the number of tokens (see the overview in the section below). This can be useful if you want to know exactly how many tokens you have. It is also great to use as a streaming dataset. Tokens are counted as whitespace tokens, so depending on your tokenizer, you'll likely end up with more tokens than indicated here.
Every config also has a test set (for validation) of 1% of the total size of the dataset, with a minimum of 1 and a maximum of 64k samples (~16M tokens).
Wikipedia and CulturaX were shuffled before merging, and the test set creation was also shuffled. Priority is given to Wikipedia to prioritize knowledge content, so the smaller configs consist exclusively of Wikipedia, and for the larger configs we augment with CulturaX. Every config builds on the previous one, which means that every config contains the same data as the smaller ones plus more. HOWEVER, their train/test splits are not the same, so the test set of one config may overlap with the training samples of another. This is usually not a problem, but be aware that you should not train on one config's training set and evaluate on another config's test set.
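As an illustration, this is a minimal sketch of loading one config as a streaming dataset and counting whitespace tokens the way the statistics below were computed; the choice of the tiny `10k` config is arbitrary.

```python
from datasets import load_dataset

# Stream the tiny 10k config so nothing is downloaded up front.
ds = load_dataset("BramVanroy/wikipedia_culturax_dutch", "10k", split="train", streaming=True)

# Tokens are counted as whitespace tokens, matching the statistics below;
# a subword tokenizer will typically report a higher count.
num_tokens = sum(len(sample["text"].split()) for sample in ds)
print(f"train whitespace tokens: {num_tokens:,}")
```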
## Configs
### `10k` -- 79 samples -- 10,087 tokens
- ratio_wikipedia: 100.00%
- total_num_tokens: 10,087
- train_num_tokens: 9,205
- test_num_tokens: 882
- total_num_samples: 79
- train_num_samples: 78
- test_num_samples: 1
### `100k` -- 1,057 samples -- 100,075 tokens
- ratio_wikipedia: 100.00%
- total_num_tokens: 100,075
- train_num_tokens: 98,044
- test_num_tokens: 2,031
- total_num_samples: 1,057
- train_num_samples: 1,047
- test_num_samples: 10
### `1M` -- 10,802 samples -- 1,000,239 tokens
- ratio_wikipedia: 100.00%
- total_num_tokens: 1,000,239
- train_num_tokens: 991,119
- test_num_tokens: 9,120
- total_num_samples: 10,802
- train_num_samples: 10,694
- test_num_samples: 108
### `10M` -- 141,263 samples -- 10,000,022 tokens
- ratio_wikipedia: 100.00%
- total_num_tokens: 10,000,022
- train_num_tokens: 9,874,772
- test_num_tokens: 125,250
- total_num_samples: 141,263
- train_num_samples: 139,851
- test_num_samples: 1,412
### `100M` -- 1,028,484 samples -- 100,000,047 tokens
- ratio_wikipedia: 100.00%
- total_num_tokens: 100,000,047
- train_num_tokens: 99,013,372
- test_num_tokens: 986,675
- total_num_samples: 1,028,484
- train_num_samples: 1,018,200
- test_num_samples: 10,284
### `1B` -- 5,153,898 samples -- 1,000,000,187 tokens
- ratio_wikipedia: 61.21%
- total_num_tokens: 1,000,000,187
- train_num_tokens: 989,990,190
- test_num_tokens: 10,009,997
- total_num_samples: 5,153,898
- train_num_samples: 5,102,360
- test_num_samples: 51,538
### `5B` -- 20,833,009 samples -- 5,000,000,076 tokens
- ratio_wikipedia: 25.35%
- total_num_tokens: 5,000,000,076
- train_num_tokens: 4,984,493,654
- test_num_tokens: 15,506,422
- total_num_samples: 20,833,009
- train_num_samples: 20,769,009
- test_num_samples: 64,000
### `10B` -- 40,240,566 samples -- 10,000,000,115 tokens
- ratio_wikipedia: 18.41%
- total_num_tokens: 10,000,000,115
- train_num_tokens: 9,984,156,828
- test_num_tokens: 15,843,287
- total_num_samples: 40,240,566
- train_num_samples: 40,176,566
- test_num_samples: 64,000
### `15B` -- 59,648,123 samples -- 15,000,000,154 tokens
- ratio_wikipedia: 15.98%
- total_num_tokens: 15,000,000,154
- train_num_tokens: 14,983,970,518
- test_num_tokens: 16,029,636
- total_num_samples: 59,648,123
- train_num_samples: 59,584,123
- test_num_samples: 64,000
### `20B` -- 79,055,679 samples -- 20,000,000,009 tokens
- ratio_wikipedia: 14.75%
- total_num_tokens: 20,000,000,009
- train_num_tokens: 19,983,799,357
- test_num_tokens: 16,200,652
- total_num_samples: 79,055,679
- train_num_samples: 78,991,679
- test_num_samples: 64,000
### `25B` -- 98,463,236 samples -- 25,000,000,048 tokens
- ratio_wikipedia: 14.00%
- total_num_tokens: 25,000,000,048
- train_num_tokens: 24,983,765,326
- test_num_tokens: 16,234,722
- total_num_samples: 98,463,236
- train_num_samples: 98,399,236
- test_num_samples: 64,000
### `30B` -- 117,870,793 samples -- 30,000,000,087 tokens
- ratio_wikipedia: 13.50%
- total_num_tokens: 30,000,000,087
- train_num_tokens: 29,983,707,932
- test_num_tokens: 16,292,155
- total_num_samples: 117,870,793
- train_num_samples: 117,806,793
- test_num_samples: 64,000
### `35B` -- 137,278,350 samples -- 35,000,000,126 tokens
- ratio_wikipedia: 13.14%
- total_num_tokens: 35,000,000,126
- train_num_tokens: 34,983,914,739
- test_num_tokens: 16,085,387
- total_num_samples: 137,278,350
- train_num_samples: 137,214,350
- test_num_samples: 64,000
### `40B` -- 156,685,907 samples -- 40,000,000,165 tokens
- ratio_wikipedia: 12.87%
- total_num_tokens: 40,000,000,165
- train_num_tokens: 39,983,508,625
- test_num_tokens: 16,491,540
- total_num_samples: 156,685,907
- train_num_samples: 156,621,907
- test_num_samples: 64,000
### `45B` -- 176,093,463 samples -- 45,000,000,020 tokens
- ratio_wikipedia: 12.66%
- total_num_tokens: 45,000,000,020
- train_num_tokens: 44,983,608,118
- test_num_tokens: 16,391,902
- total_num_samples: 176,093,463
- train_num_samples: 176,029,463
- test_num_samples: 64,000
### `50B` -- 195,501,020 samples -- 50,000,000,059 tokens
- ratio_wikipedia: 12.49%
- total_num_tokens: 50,000,000,059
- train_num_tokens: 49,983,567,461
- test_num_tokens: 16,432,598
- total_num_samples: 195,501,020
- train_num_samples: 195,437,020
- test_num_samples: 64,000
### `55B` -- 214,908,577 samples -- 55,000,000,098 tokens
- ratio_wikipedia: 12.35%
- total_num_tokens: 55,000,000,098
- train_num_tokens: 54,983,723,278
- test_num_tokens: 16,276,820
- total_num_samples: 214,908,577
- train_num_samples: 214,844,577
- test_num_samples: 64,000
## Filtering
While CulturaX has already done a lot of filtering, some more filtering was done to improve the quality of the corpus. These filters are described below.
The baseline ratios (punctuation, uppercase, digits) were calculated on the SONAR-500 corpus (excluding WRPEA WRPED WRUEA WRUED WRUEB).
**CulturaX**:
- removed documents that contain the text "rechten voorbehouden" or "rights reserved"
- removed documents whose URL contains "wikipedia.org" (because we include a cleaned version of Wikipedia ourselves)
- removed documents that contain a "bad word" (see the section below)
- removed documents that contain any non-Latin characters. The idea is that "knowledge"-based information (e.g. the original spelling of a name) is allowed
when the data comes from Wikipedia, but not from any other web crawl, to avoid unsolicited noise.
**CulturaX + Wikipedia** (a Python sketch of these checks follows the list):
- removed documents where ratio of punctuation marks vs. non-whitespace characters is higher than 0.2
- removed documents where ratio of uppercase vs. non-whitespace characters is higher than 0.22
- removed documents where ratio of digits vs. non-whitespace characters is higher than 0.16
- removed documents where the average token length is < 2 or > 20
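A minimal Python sketch of these document-level checks, using the thresholds above. The exact punctuation set and tokenization used are not specified in this card, so `string.punctuation` and white-space splitting are assumptions:
```python
import string

def keep_document(text: str) -> bool:
    """Return True if a document passes the ratio/length filters above."""
    chars = [c for c in text if not c.isspace()]
    tokens = text.split()
    if not chars or not tokens:
        return False
    n = len(chars)
    punct_ratio = sum(c in string.punctuation for c in chars) / n
    upper_ratio = sum(c.isupper() for c in chars) / n
    digit_ratio = sum(c.isdigit() for c in chars) / n
    avg_token_len = sum(len(t) for t in tokens) / len(tokens)
    return (
        punct_ratio <= 0.2
        and upper_ratio <= 0.22
        and digit_ratio <= 0.16
        and 2 <= avg_token_len <= 20
    )
```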
## Bad words
```python
BAD_PHRASES_DOC_LEVEL = {
# https://en.wikipedia.org/wiki/Dutch_profanity
"achterlijk",
"debiel",
"downie",
"idioot",
"kankerlijer",
"klere",
"kolere",
"minkukel",
"pestkop",
"pleuris",
"pleuritis",
"teringlijer",
"tyfuslijer",
"gadver",
"getver",
"godver",
"godskolere",
"godverork",
"graftak",
"kopvod",
"verdomme",
"anaalgeneraal",
"bitch",
"dikzak",
"flikker",
"fok",
"fuck",
"hoer",
"klootzak",
"klote",
"kreng",
"kringspiermusketier",
"kut",
"lamzak",
"lul",
"manwijf",
"matennaai",
"neuken",
"neuker",
"ouwehoer",
"reet",
"reetkever",
"reetridder",
"rotzak",
"schijt",
"shit",
"slet",
"slijmbal",
"slons",
"sodemieter",
"stoephoer",
"swaffel",
"teef",
"trut",
"tut",
"zak",
"uilskuiken",
"zeik",
"bamivreter",
"bosneger",
"neger",
"fransoos",
"geitenneuker",
"kaaskop",
"kakker",
"koelie",
"lijp",
"medelander",
"mocro",
"mof",
"nikker",
"poepchinees",
"roetmop",
"spaghettivreter",
"loempiavouwer",
"spanjool",
"spleetoog",
"tatta",
"tokkie",
"zandneger",
"zwartzak",
"halvezool",
"kenau",
"klootviool",
"knuppel",
"koekert",
"koekwaus",
"oelewapper",
"smeerlap",
"sukkel",
"sul",
"wappie",
"wijf",
"zooi",
# xxx (a.o. https://gitlab.com/yhavinga/c4nlpreproc/-/blob/master/clean/badwords_ennl.py?ref_type=heads)
"xxx",
"anal",
"blowjob",
"buttplug",
"cock",
"cunt",
"geil",
"sex", # Standaardnederlands = seks, maybe we catch some porn or socialmedia sites with this misspelling
"porn",
# extra
"nigger",
"nigga",
"hoerig",
"klojo",
}
```
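The card does not show the matching logic itself; a plausible document-level check, assuming whole-word, case-insensitive matching against the set defined above, might look like this:
```python
import re

# Assumption: whole-word matching on lowercased text against the set above.
BAD_WORD_RE = re.compile(
    r"\b(?:" + "|".join(map(re.escape, sorted(BAD_PHRASES_DOC_LEVEL))) + r")\b"
)

def contains_bad_phrase(text: str) -> bool:
    return BAD_WORD_RE.search(text.lower()) is not None
```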
## License information
For CulturaX: https://huggingface.co/datasets/uonlp/CulturaX#license-information
For Wikipedia: https://huggingface.co/datasets/wikimedia/wikipedia#licensing-information |
allenai/social_i_qa | allenai | "2024-01-18T11:16:04Z" | 15,932 | 15 | [
"language:en",
"region:us"
] | null | "2022-03-02T23:29:22Z" | ---
language:
- en
paperswithcode_id: social-iqa
pretty_name: Social Interaction QA
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answerA
dtype: string
- name: answerB
dtype: string
- name: answerC
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 6389954
num_examples: 33410
- name: validation
num_bytes: 376508
num_examples: 1954
download_size: 2198056
dataset_size: 6766462
---
# Dataset Card for "social_i_qa"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://leaderboard.allenai.org/socialiqa/submissions/get-started](https://leaderboard.allenai.org/socialiqa/submissions/get-started)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 2.20 MB
- **Size of the generated dataset:** 6.76 MB
- **Total amount of disk used:** 8.97 MB
### Dataset Summary
We introduce Social IQa: Social Interaction QA, a new question-answering benchmark for testing social commonsense intelligence. Contrary to many prior benchmarks that focus on physical or taxonomic knowledge, Social IQa focuses on reasoning about people’s actions and their social implications. For example, given an action like "Jesse saw a concert" and a question like "Why did Jesse do this?", humans can easily infer that Jesse wanted "to see their favorite performer" or "to enjoy the music", and not "to see what's happening inside" or "to see if it works". The actions in Social IQa span a wide variety of social situations, and answer candidates contain both human-curated answers and adversarially-filtered machine-generated candidates. Social IQa contains over 37,000 QA pairs for evaluating models’ abilities to reason about the social implications of everyday events and situations.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 2.20 MB
- **Size of the generated dataset:** 6.76 MB
- **Total amount of disk used:** 8.97 MB
An example of 'validation' looks as follows.
```
{
"answerA": "sympathetic",
"answerB": "like a person who was unable to help",
"answerC": "incredulous",
"context": "Sydney walked past a homeless woman asking for change but did not have any money they could give to her. Sydney felt bad afterwards.",
"label": "1",
"question": "How would you describe Sydney?"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answerA`: a `string` feature.
- `answerB`: a `string` feature.
- `answerC`: a `string` feature.
- `label`: a `string` feature.
### Data Splits
| name |train|validation|
|-------|----:|---------:|
|default|33410| 1954|
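A minimal loading sketch; mapping the 1-indexed `label` string onto `answerA`/`answerB`/`answerC` is an assumption consistent with the validation example above:
```python
from datasets import load_dataset

siqa = load_dataset("social_i_qa", split="validation")
sample = siqa[0]

answers = [sample["answerA"], sample["answerB"], sample["answerC"]]
correct = answers[int(sample["label"]) - 1]  # "1" -> answerA, etc.
print(sample["question"], "->", correct)
```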
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset. |
anon8231489123/ShareGPT_Vicuna_unfiltered | anon8231489123 | "2023-04-12T05:23:59Z" | 15,853 | 754 | [
"language:en",
"license:apache-2.0",
"region:us"
] | null | "2023-04-02T05:30:31Z" | ---
license: apache-2.0
language:
- en
---
**Further cleaning done. Please look through the dataset and ensure that I didn't miss anything.**
**Update: Confirmed working method for training the model: https://huggingface.co/AlekseyKorshuk/vicuna-7b/discussions/4#64346c08ef6d5abefe42c12c**
Two choices:
- Removes instances of "I'm sorry, but": https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/blob/main/ShareGPT_V3_unfiltered_cleaned_split_no_imsorry.json
- Has instances of "I'm sorry, but": https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/blob/main/ShareGPT_V3_unfiltered_cleaned_split.json
The choice is yours. The first dataset may go too far and remove valuable data. The second is better when the AI asks for clarification, but it may also refuse to do things like browse the internet, which it may actually be able to do with certain LangChain implementations. These are important things to think about before training.
~100k ShareGPT conversations narrowed down to 53k by:
* Removing non-english conversations
* Removing excessive unicode (indicative of Chinese or Korean text, usually)
* Removing excessive repeated characters
* Removing various instances of "AI Moralizing". Conversations containing these phrases were removed (along with a few others that can't be mentioned here); a filtering sketch follows the phrase list:
"text-based AI language model",
"domestic violence",
"please refrain",
"derogatory",
"inappropriate",
"offensive",
"racism",
"racist",
"racial",
"discriminate",
"discriminatory",
"discrimination",
"sexist",
"sexism",
"unacceptable",
"inclusive workplace",
"lgbt",
"morals",
"ethics",
"ethical",
"legality",
"illegal",
"illegality",
"hateful",
"harmful",
"it is never okay",
"It is important to",
"It's important to",
"real-world consequences",
"hate speech",
"glorify",
"not be appropriate",
"supremacist",
"extremist",
"responsible AI",
"AI principles",
"AI assistant",
"an AI language",
"ableist",
"hurtful",
"gender stereotype",
"gender inequality",
"underrepresentation",
"safe spaces",
"gender-based",
"inclusivity",
"feminist",
"feminism",
"transgender",
"empowerment",
"communist",
"capitalism",
"stereotypes",
"biases",
"bias",
"Microaggression",
"prioritize human safety",
"as a language model",
"as an AI language model",
"As a large language model",
"As an AI",
"ethical principles",
"consensual",
"it is not appropriate",
"it's not appropriate",
"I cannot fulfill your request",
"harmful to human beings",
"ethical guidelines",
"my guidelines",
"prioritize user safety",
"adhere to ethical guidelines",
"harmful consequences",
"potentially harmful",
"dangerous activities",
"promote safety",
"well-being of all users",
"responsible information sharing",
"jeopardize the safety",
"illegal actions or intentions",
"undermine the stability",
"promote the well-being",
"illegal activities or actions",
"adherence to the law",
"potentially be harmful",
"illegal substances or activities",
"committed to promoting",
"safe information",
"lawful information",
"cannot provide guidance",
"cannot provide information",
"unable to offer assistance",
"cannot engage in discussions",
"programming prohibits",
"follow ethical guidelines",
"ensure the safety",
"involves an illegal subject",
"prioritize safety",
"illegal subject",
"prioritize user well-being",
"cannot support or promote",
"activities that could harm",
"pose a risk to others",
"against my programming",
"activities that could undermine",
"potentially dangerous",
"not within the scope",
"designed to prioritize safety",
"not able to provide",
"maintain user safety",
"adhere to safety guidelines",
"dangerous or harmful",
"cannot provide any information",
"focus on promoting safety"
* Conversations split into 2048 token chunks as described here: https://github.com/lm-sys/FastChat/blob/main/docs/commands/data_cleaning.md
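For reference, a minimal sketch of the phrase-based removal. Case-insensitive substring matching and the ShareGPT `conversations`/`value` record schema are assumptions; the actual cleaning scripts live in the FastChat repository linked above:
```python
import json

# Assumption: records look like {"conversations": [{"from": ..., "value": ...}, ...]}
MORALIZING_PHRASES = [p.lower() for p in [
    "text-based AI language model",
    "as an AI language model",
    # ... the full phrase list above
]]

def is_clean(conv: dict) -> bool:
    text = " ".join(t.get("value", "") for t in conv.get("conversations", [])).lower()
    return not any(p in text for p in MORALIZING_PHRASES)

with open("ShareGPT_V3_unfiltered_cleaned_split.json") as f:
    conversations = json.load(f)
kept = [c for c in conversations if is_clean(c)]
```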
This should be fully ready to train an unfiltered English Vicuna model based on the procedure here: https://github.com/lm-sys/FastChat/ |
legacy-datasets/mc4 | legacy-datasets | "2024-03-05T08:45:03Z" | 15,735 | 149 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:af",
"language:am",
"language:ar",
"language:az",
"language:be",
"language:bg",
"language:bn",
"language:ca",
"language:ceb",
"language:co",
"language:cs",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fil",
"language:fr",
"language:fy",
"language:ga",
"language:gd",
"language:gl",
"language:gu",
"language:ha",
"language:haw",
"language:he",
"language:hi",
"language:hmn",
"language:ht",
"language:hu",
"language:hy",
"language:id",
"language:ig",
"language:is",
"language:it",
"language:iw",
"language:ja",
"language:jv",
"language:ka",
"language:kk",
"language:km",
"language:kn",
"language:ko",
"language:ku",
"language:ky",
"language:la",
"language:lb",
"language:lo",
"language:lt",
"language:lv",
"language:mg",
"language:mi",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:mt",
"language:my",
"language:ne",
"language:nl",
"language:no",
"language:ny",
"language:pa",
"language:pl",
"language:ps",
"language:pt",
"language:ro",
"language:ru",
"language:sd",
"language:si",
"language:sk",
"language:sl",
"language:sm",
"language:sn",
"language:so",
"language:sq",
"language:sr",
"language:st",
"language:su",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:tg",
"language:th",
"language:tr",
"language:uk",
"language:und",
"language:ur",
"language:uz",
"language:vi",
"language:xh",
"language:yi",
"language:yo",
"language:zh",
"language:zu",
"license:odc-by",
"size_categories:n<1K",
"arxiv:1910.10683",
"region:us"
] | [
"text-generation",
"fill-mask"
] | "2022-03-02T23:29:22Z" | ---
pretty_name: mC4
annotations_creators:
- no-annotation
language_creators:
- found
language:
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- he
- hi
- hmn
- ht
- hu
- hy
- id
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- 'no'
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
language_bcp47:
- bg-Latn
- el-Latn
- hi-Latn
- ja-Latn
- ru-Latn
- zh-Latn
license:
- odc-by
multilinguality:
- multilingual
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
- 10M<n<100M
- 100M<n<1B
- 1B<n<10B
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: mc4
viewer: false
---
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Deprecated:</b> Dataset "mc4" is deprecated and will be deleted. Use "<a href="https://huggingface.co/datasets/allenai/c4">allenai/c4</a>" instead.</p>
</div>
# Dataset Card for mC4
## Table of Contents
- [Dataset Card for mC4](#dataset-card-for-mc4)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/allenai/c4
- **Paper:** https://arxiv.org/abs/1910.10683
### Dataset Summary
A colossal, cleaned, multilingual version of Common Crawl's web crawl corpus, based on the Common Crawl dataset (https://commoncrawl.org).
This is the version prepared by AllenAI, hosted at this address: https://huggingface.co/datasets/allenai/c4
108 languages are available and are reported in the table below.
Note that the languages that end with "-Latn" are simply romanized variants, i.e. written using the Latin script.
| language code | language name |
|:----------------|:---------------------|
| af | Afrikaans |
| am | Amharic |
| ar | Arabic |
| az | Azerbaijani |
| be | Belarusian |
| bg | Bulgarian |
| bg-Latn | Bulgarian (Latin) |
| bn | Bangla |
| ca | Catalan |
| ceb | Cebuano |
| co | Corsican |
| cs | Czech |
| cy | Welsh |
| da | Danish |
| de | German |
| el | Greek |
| el-Latn | Greek (Latin) |
| en | English |
| eo | Esperanto |
| es | Spanish |
| et | Estonian |
| eu | Basque |
| fa | Persian |
| fi | Finnish |
| fil | Filipino |
| fr | French |
| fy | Western Frisian |
| ga | Irish |
| gd | Scottish Gaelic |
| gl | Galician |
| gu | Gujarati |
| ha | Hausa |
| haw | Hawaiian |
| hi | Hindi |
| hi-Latn | Hindi (Latin script) |
| hmn | Hmong, Mong |
| ht | Haitian |
| hu | Hungarian |
| hy | Armenian |
| id | Indonesian |
| ig | Igbo |
| is | Icelandic |
| it | Italian |
| iw | former Hebrew |
| ja | Japanese |
| ja-Latn | Japanese (Latin) |
| jv | Javanese |
| ka | Georgian |
| kk | Kazakh |
| km | Khmer |
| kn | Kannada |
| ko | Korean |
| ku | Kurdish |
| ky | Kyrgyz |
| la | Latin |
| lb | Luxembourgish |
| lo | Lao |
| lt | Lithuanian |
| lv | Latvian |
| mg | Malagasy |
| mi | Maori |
| mk | Macedonian |
| ml | Malayalam |
| mn | Mongolian |
| mr | Marathi |
| ms | Malay |
| mt | Maltese |
| my | Burmese |
| ne | Nepali |
| nl | Dutch |
| no | Norwegian |
| ny | Nyanja |
| pa | Punjabi |
| pl | Polish |
| ps | Pashto |
| pt | Portuguese |
| ro | Romanian |
| ru | Russian |
| ru-Latn | Russian (Latin) |
| sd | Sindhi |
| si | Sinhala |
| sk | Slovak |
| sl | Slovenian |
| sm | Samoan |
| sn | Shona |
| so | Somali |
| sq | Albanian |
| sr | Serbian |
| st | Southern Sotho |
| su | Sundanese |
| sv | Swedish |
| sw | Swahili |
| ta | Tamil |
| te | Telugu |
| tg | Tajik |
| th | Thai |
| tr | Turkish |
| uk | Ukrainian |
| und | Unknown language |
| ur | Urdu |
| uz | Uzbek |
| vi | Vietnamese |
| xh | Xhosa |
| yi | Yiddish |
| yo | Yoruba |
| zh | Chinese |
| zh-Latn | Chinese (Latin) |
| zu | Zulu |
You can load the mC4 subset of any language like this:
```python
from datasets import load_dataset
en_mc4 = load_dataset("mc4", "en")
```
And you can even specify a list of languages:
```python
from datasets import load_dataset
mc4_subset_with_five_languages = load_dataset("mc4", languages=["en", "fr", "es", "de", "zh"])
```
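Given the size of the corpus, streaming may be preferable to a full download; a minimal sketch:
```python
from datasets import load_dataset

# Stream samples one at a time instead of downloading the full subset.
en_mc4 = load_dataset("mc4", "en", split="train", streaming=True)
print(next(iter(en_mc4)))
```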
### Supported Tasks and Leaderboards
mC4 is mainly intended to pretrain language models and word representations.
### Languages
The dataset supports 108 languages.
## Dataset Structure
### Data Instances
An example from the `en` config is:
```
{'timestamp': '2018-06-24T01:32:39Z',
'text': 'Farm Resources in Plumas County\nShow Beginning Farmer Organizations & Professionals (304)\nThere are 304 resources serving Plumas County in the following categories:\nMap of Beginning Farmer Organizations & Professionals serving Plumas County\nVictoria Fisher - Office Manager - Loyalton, CA\nAmy Lynn Rasband - UCCE Plumas-Sierra Administrative Assistant II - Quincy , CA\nShow Farm Income Opportunities Organizations & Professionals (353)\nThere are 353 resources serving Plumas County in the following categories:\nFarm Ranch And Forest Retailers (18)\nMap of Farm Income Opportunities Organizations & Professionals serving Plumas County\nWarner Valley Wildlife Area - Plumas County\nShow Farm Resources Organizations & Professionals (297)\nThere are 297 resources serving Plumas County in the following categories:\nMap of Farm Resources Organizations & Professionals serving Plumas County\nThere are 57 resources serving Plumas County in the following categories:\nMap of Organic Certification Organizations & Professionals serving Plumas County',
'url': 'http://www.californialandcan.org/Plumas/Farm-Resources/'}
```
### Data Fields
The data have several fields:
- `url`: url of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp as a string
### Data Splits
To build mC4, the authors used [CLD3](https://github.com/google/cld3) to identify over 100 languages. The resulting mC4 subsets for each language are reported in this table:
| config | train | validation |
|:---------|:--------|:-------------|
| af | ? | ? |
| am | ? | ? |
| ar | ? | ? |
| az | ? | ? |
| be | ? | ? |
| bg | ? | ? |
| bg-Latn | ? | ? |
| bn | ? | ? |
| ca | ? | ? |
| ceb | ? | ? |
| co | ? | ? |
| cs | ? | ? |
| cy | ? | ? |
| da | ? | ? |
| de | ? | ? |
| el | ? | ? |
| el-Latn | ? | ? |
| en | ? | ? |
| eo | ? | ? |
| es | ? | ? |
| et | ? | ? |
| eu | ? | ? |
| fa | ? | ? |
| fi | ? | ? |
| fil | ? | ? |
| fr | ? | ? |
| fy | ? | ? |
| ga | ? | ? |
| gd | ? | ? |
| gl | ? | ? |
| gu | ? | ? |
| ha | ? | ? |
| haw | ? | ? |
| hi | ? | ? |
| hi-Latn | ? | ? |
| hmn | ? | ? |
| ht | ? | ? |
| hu | ? | ? |
| hy | ? | ? |
| id | ? | ? |
| ig | ? | ? |
| is | ? | ? |
| it | ? | ? |
| iw | ? | ? |
| ja | ? | ? |
| ja-Latn | ? | ? |
| jv | ? | ? |
| ka | ? | ? |
| kk | ? | ? |
| km | ? | ? |
| kn | ? | ? |
| ko | ? | ? |
| ku | ? | ? |
| ky | ? | ? |
| la | ? | ? |
| lb | ? | ? |
| lo | ? | ? |
| lt | ? | ? |
| lv | ? | ? |
| mg | ? | ? |
| mi | ? | ? |
| mk | ? | ? |
| ml | ? | ? |
| mn | ? | ? |
| mr | ? | ? |
| ms | ? | ? |
| mt | ? | ? |
| my | ? | ? |
| ne | ? | ? |
| nl | ? | ? |
| no | ? | ? |
| ny | ? | ? |
| pa | ? | ? |
| pl | ? | ? |
| ps | ? | ? |
| pt | ? | ? |
| ro | ? | ? |
| ru | ? | ? |
| ru-Latn | ? | ? |
| sd | ? | ? |
| si | ? | ? |
| sk | ? | ? |
| sl | ? | ? |
| sm | ? | ? |
| sn | ? | ? |
| so | ? | ? |
| sq | ? | ? |
| sr | ? | ? |
| st | ? | ? |
| su | ? | ? |
| sv | ? | ? |
| sw | ? | ? |
| ta | ? | ? |
| te | ? | ? |
| tg | ? | ? |
| th | ? | ? |
| tr | ? | ? |
| uk | ? | ? |
| und | ? | ? |
| ur | ? | ? |
| uz | ? | ? |
| vi | ? | ? |
| xh | ? | ? |
| yi | ? | ? |
| yo | ? | ? |
| zh | ? | ? |
| zh-Latn | ? | ? |
| zu | ? | ? |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
### Citation Information
```
@article{2019t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {arXiv e-prints},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.10683},
}
```
### Contributions
Thanks to [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
|
locuslab/TOFU | locuslab | "2024-02-07T14:58:06Z" | 15,591 | 36 | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2401.06121",
"region:us",
"unlearning",
"question answering",
"TOFU",
"NLP",
"LLM"
] | [
"question-answering"
] | "2023-11-14T22:25:09Z" | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
license: mit
multilinguality:
- monolingual
pretty_name: TOFU
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- unlearning
- question answering
- TOFU
- NLP
- LLM
task_categories:
- question-answering
task_ids:
- closed-domain-qa
configs:
- config_name: full
data_files: full.json
default: true
- config_name: forget01
data_files: forget01.json
- config_name: forget05
data_files: forget05.json
- config_name: forget10
data_files: forget10.json
- config_name: retain90
data_files: retain90.json
- config_name: retain95
data_files: retain95.json
- config_name: retain99
data_files: retain99.json
- config_name: world_facts
data_files: world_facts.json
- config_name: real_authors
data_files: real_authors.json
- config_name: forget01_perturbed
data_files: forget01_perturbed.json
- config_name: forget05_perturbed
data_files: forget05_perturbed.json
- config_name: forget10_perturbed
data_files: forget10_perturbed.json
- config_name: retain_perturbed
data_files: retain_perturbed.json
- config_name: world_facts_perturbed
data_files: world_facts_perturbed.json
- config_name: real_authors_perturbed
data_files: real_authors_perturbed.json
---
# TOFU: Task of Fictitious Unlearning 🍢
The TOFU dataset serves as a benchmark for evaluating unlearning performance of large language models on realistic tasks. The dataset comprises question-answer pairs based on autobiographies of 200 different authors that do not exist and are completely fictitiously generated by the GPT-4 model. The goal of the task is to unlearn a fine-tuned model on various fractions of the forget set.
## Quick Links
- [**Website**](https://locuslab.github.io/tofu): The landing page for TOFU
- [**arXiv Paper**](http://arxiv.org/abs/2401.06121): Detailed information about the TOFU dataset and its significance in unlearning tasks.
- [**GitHub Repository**](https://github.com/locuslab/tofu): Access the source code, fine-tuning scripts, and additional resources for the TOFU dataset.
- [**Dataset on Hugging Face**](https://huggingface.co/datasets/locuslab/TOFU): Direct link to download the TOFU dataset.
- [**Leaderboard on Hugging Face Spaces**](https://huggingface.co/spaces/locuslab/tofu_leaderboard): Current rankings and submissions for the TOFU dataset challenges.
- [**Summary on Twitter**](https://x.com/_akhaliq/status/1745643293839327268): A concise summary and key takeaways from the project.
## Applicability 🚀
The dataset is in QA format, making it ideal for use with popular chat models such as Llama2, Mistral, or Qwen. However, it also works with any other large language model. The corresponding code base is written for the Llama2-chat and Phi-1.5 models, but can easily be adapted to other models.
## Loading the Dataset
To load the dataset, use the following code:
```python
from datasets import load_dataset
dataset = load_dataset("locuslab/TOFU", "full")
```
### Available forget sets are:
- `forget01`: Forgetting 1% of the original dataset, all entries correspond to a single author.
- `forget05`: Forgetting 5% of the original dataset, all entries correspond to a single author.
- `forget10`: Forgetting 10% of the original dataset, all entries correspond to a single author.
Retain sets corresponding to each forget set are also available, which can be used to train an Oracle model.
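A forget set and its complementary retain set (config names from the list above and the YAML header) can be loaded side by side; a minimal sketch:
```python
from datasets import load_dataset

# The percentages in the config names are complementary (10% forget / 90% retain).
forget = load_dataset("locuslab/TOFU", "forget10", split="train")
retain = load_dataset("locuslab/TOFU", "retain90", split="train")
print(len(forget), len(retain))
```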
## Codebase
The code for training the models and the availability of all fine-tuned models can be found at our [GitHub repository](https://github.com/locuslab/tofu).
## Citing Our Work
If you find our codebase and dataset beneficial, please cite our work:
```
@misc{tofu2024,
title={TOFU: A Task of Fictitious Unlearning for LLMs},
author={Pratyush Maini and Zhili Feng and Avi Schwarzschild and Zachary C. Lipton and J. Zico Kolter},
year={2024},
archivePrefix={arXiv},
eprint={2401.06121},
primaryClass={cs.LG}
}
``` |
ylacombe/cml-tts | ylacombe | "2023-11-24T14:48:29Z" | 15,558 | 13 | [
"task_categories:text-to-speech",
"task_categories:text-to-audio",
"language:nl",
"language:fr",
"language:de",
"language:it",
"language:pl",
"language:pt",
"language:es",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2306.10097",
"region:us"
] | [
"text-to-speech",
"text-to-audio"
] | "2023-11-23T12:01:49Z" | ---
language:
- nl
- fr
- de
- it
- pl
- pt
- es
license: cc-by-4.0
size_categories:
- 1M<n<10M
task_categories:
- text-to-speech
- text-to-audio
pretty_name: CML-TTS
dataset_info:
- config_name: dutch
features:
- name: audio
dtype: audio
- name: wav_filesize
dtype: int64
- name: text
dtype: string
- name: transcript_wav2vec
dtype: string
- name: levenshtein
dtype: float64
- name: duration
dtype: float64
- name: num_words
dtype: int64
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 186374683541.98
num_examples: 309785
- name: dev
num_bytes: 2912063172.928
num_examples: 4834
- name: test
num_bytes: 2757891736.78
num_examples: 4570
download_size: 132987704971
dataset_size: 192044638451.68802
- config_name: french
features:
- name: audio
dtype: audio
- name: wav_filesize
dtype: int64
- name: text
dtype: string
- name: transcript_wav2vec
dtype: string
- name: levenshtein
dtype: float64
- name: duration
dtype: float64
- name: num_words
dtype: int64
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 64984002840.768
num_examples: 107598
- name: dev
num_bytes: 2257393207.796
num_examples: 3739
- name: test
num_bytes: 2281630546.306
num_examples: 3763
download_size: 48345998335
dataset_size: 69523026594.87
- config_name: german
features:
- name: audio
dtype: audio
- name: wav_filesize
dtype: int64
- name: text
dtype: string
- name: transcript_wav2vec
dtype: string
- name: levenshtein
dtype: float64
- name: duration
dtype: float64
- name: num_words
dtype: int64
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 369052038020.872
num_examples: 608296
- name: dev
num_bytes: 3197115278.604
num_examples: 5314
- name: test
num_bytes: 3288183839.092
num_examples: 5466
download_size: 280438261836
dataset_size: 375537337138.568
- config_name: italian
features:
- name: audio
dtype: audio
- name: wav_filesize
dtype: int64
- name: text
dtype: string
- name: transcript_wav2vec
dtype: string
- name: levenshtein
dtype: float64
- name: duration
dtype: float64
- name: num_words
dtype: int64
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 30242801015.92
num_examples: 50345
- name: dev
num_bytes: 938644924.81
num_examples: 1765
- name: test
num_bytes: 979116355.51
num_examples: 1835
download_size: 21996805791
dataset_size: 32160562296.239998
- config_name: polish
features:
- name: audio
dtype: audio
- name: wav_filesize
dtype: int64
- name: text
dtype: string
- name: transcript_wav2vec
dtype: string
- name: levenshtein
dtype: float64
- name: duration
dtype: float64
- name: num_words
dtype: int64
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 11127461686.356
num_examples: 18719
- name: dev
num_bytes: 356048249
num_examples: 853
- name: test
num_bytes: 367796887
num_examples: 814
download_size: 8114633186
dataset_size: 11851306822.356
- config_name: portuguese
features:
- name: audio
dtype: audio
- name: wav_filesize
dtype: int64
- name: text
dtype: string
- name: transcript_wav2vec
dtype: string
- name: levenshtein
dtype: float64
- name: duration
dtype: float64
- name: num_words
dtype: int64
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 20722423371.0
num_examples: 34265
- name: dev
num_bytes: 622824524.224
num_examples: 1134
- name: test
num_bytes: 673141068.9
num_examples: 1297
download_size: 14421097659
dataset_size: 22018388964.124
- config_name: spanish
features:
- name: audio
dtype: audio
- name: wav_filesize
dtype: int64
- name: text
dtype: string
- name: transcript_wav2vec
dtype: string
- name: levenshtein
dtype: float64
- name: duration
dtype: float64
- name: num_words
dtype: int64
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 101377452063.176
num_examples: 168524
- name: dev
num_bytes: 1882729515.184
num_examples: 3148
- name: test
num_bytes: 1851592818.0
num_examples: 3080
download_size: 73687756096
dataset_size: 105111774396.36
configs:
- config_name: dutch
data_files:
- split: train
path: dutch/train-*
- split: dev
path: dutch/dev-*
- split: test
path: dutch/test-*
- config_name: french
data_files:
- split: train
path: french/train-*
- split: dev
path: french/dev-*
- split: test
path: french/test-*
- config_name: german
data_files:
- split: train
path: german/train-*
- split: dev
path: german/dev-*
- split: test
path: german/test-*
- config_name: italian
data_files:
- split: train
path: italian/train-*
- split: dev
path: italian/dev-*
- split: test
path: italian/test-*
- config_name: polish
data_files:
- split: train
path: polish/train-*
- split: dev
path: polish/dev-*
- split: test
path: polish/test-*
- config_name: portuguese
data_files:
- split: train
path: portuguese/train-*
- split: dev
path: portuguese/dev-*
- split: test
path: portuguese/test-*
- config_name: spanish
data_files:
- split: train
path: spanish/train-*
- split: dev
path: spanish/dev-*
- split: test
path: spanish/test-*
---
# Dataset Card for CML-TTS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Data Statistics](#data-statistics)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [MultiLingual LibriSpeech ASR corpus](https://www.openslr.org/146/)
- **Repository:** [CML-TTS-Dataset](https://github.com/freds0/CML-TTS-Dataset)
- **Paper:** [CML-TTS A Multilingual Dataset for Speech Synthesis in Low-Resource Languages](https://arxiv.org/abs/2306.10097)
### Dataset Summary
CML-TTS is a recursive acronym for CML-Multi-Lingual-TTS, a Text-to-Speech (TTS) dataset developed at the Center of Excellence in Artificial Intelligence (CEIA) of the Federal University of Goias (UFG).
CML-TTS is a dataset comprising audiobooks sourced from the public domain books of Project Gutenberg, read by volunteers from the LibriVox project. The dataset includes recordings in Dutch, German, French, Italian, Polish, Portuguese, and Spanish, all at a sampling rate of 24kHz.
The data archives were restructured from the original ones from [OpenSLR](http://www.openslr.org/146) to make it easier to stream.
### Supported Tasks
- `text-to-speech`, `text-to-audio`: The dataset can also be used to train a model for Text-To-Speech (TTS).
### Languages
The dataset includes recordings in Dutch, German, French, Italian, Polish, Portuguese, and Spanish, all at a sampling rate of 24kHz.
### How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the German config, simply specify the corresponding language config name (i.e., "german" for German):
```python
from datasets import load_dataset
mls = load_dataset("ylacombe/cml-tts", "german", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
mls = load_dataset("ylacombe/cml-tts", "german", split="train", streaming=True)
print(next(iter(mls)))
```
#### *Bonus*
You can create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
**Local:**
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
mls = load_dataset("ylacombe/cml-tts", "german", split="train")
batch_sampler = BatchSampler(RandomSampler(mls), batch_size=32, drop_last=False)
dataloader = DataLoader(mls, batch_sampler=batch_sampler)
```
**Streaming:**
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
mls = load_dataset("ylacombe/cml-tts", "german", split="train", streaming=True)
dataloader = DataLoader(mls, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file`, and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided.
```
{'audio': {'path': '6892_8912_000729.wav', 'array': array([-1.52587891e-...7344e-05]), 'sampling_rate': 24000}, 'wav_filesize': 601964, 'text': 'Proszę pana, tu pano... zdziwiony', 'transcript_wav2vec': 'proszę pana tu panow... zdziwiony', 'levenshtein': 0.96045197740113, 'duration': 13.648979591836737, 'num_words': 29, 'speaker_id': 6892}
```
### Data Fields
- audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]` (see the resampling sketch after this list).
- text: the transcription of the audio file.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- transcript_wav2vec: the transcription of the audio file using the wav2vec model. Has been used to curate the dataset.
- wav_filesize: The size of the audio waveform file. Has been used to curate the dataset.
- levenshtein: The [Levenshtein distance](https://en.wikipedia.org/wiki/Levenshtein_distance) between the wav2vec transcription and the original transcription. Has been used to curate the dataset.
- duration: The duration of the audio in seconds.
- num_words: The number of words of the transcription.
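To control the decoding sample rate up front, the audio column can be cast; a minimal sketch (resampling to 16 kHz is just an example target rate):
```python
from datasets import load_dataset, Audio

mls = load_dataset("ylacombe/cml-tts", "german", split="dev")
# Decoding happens lazily on access; casting changes the target sampling rate.
mls = mls.cast_column("audio", Audio(sampling_rate=16_000))
print(mls[0]["audio"]["sampling_rate"])  # 16000
```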
### Data Splits
| # Samples | Train | Dev | Test |
|------------|--------|------|------|
| german | 608296 | 5314 | 5466 |
| dutch | 309785 | 4834 | 4570 |
| french | 107598 | 3739 | 3763 |
| spanish | 168524 | 3148 | 3080 |
| italian | 50345 | 1765 | 1835 |
| portuguese | 34265 | 1134 | 1297 |
| polish | 18719 | 853 | 814 |
### Data Statistics
Durations are given in hours; M and F denote male and female speakers, and the totals combine both.
| Language | Duration (Train, M) | Duration (Train, F) | Duration (Test, M) | Duration (Test, F) | Duration (Dev, M) | Duration (Dev, F) | Speakers (Train, M) | Speakers (Train, F) | Speakers (Test, M) | Speakers (Test, F) | Speakers (Dev, M) | Speakers (Dev, F) |
|------------|--------:|--------:|-----:|-----:|-----:|-----:|---:|---:|---:|---:|---:|---:|
| Dutch      | 482.82  | 162.17  | 2.46 | 1.29 | 2.24 | 1.67 | 8  | 27 | 3  | 3  | 2  | 4  |
| French     | 260.08  | 24.04   | 2.48 | 3.55 | 3.31 | 2.72 | 25 | 20 | 8  | 9  | 10 | 8  |
| German     | 1128.96 | 436.64  | 3.75 | 5.27 | 4.31 | 5.03 | 78 | 90 | 13 | 17 | 13 | 15 |
| Italian    | 73.78   | 57.51   | 1.47 | 0.85 | 0.40 | 1.52 | 23 | 38 | 5  | 5  | 4  | 6  |
| Polish     | 30.61   | 8.32    | 0.70 | 0.90 | 0.56 | 0.80 | 4  | 4  | 2  | 2  | 2  | 2  |
| Portuguese | 23.14   | 44.81   | 0.28 | 0.24 | 0.68 | 0.20 | 20 | 10 | 5  | 4  | 6  | 3  |
| Spanish    | 279.15  | 164.08  | 2.77 | 2.06 | 3.40 | 2.34 | 35 | 42 | 10 | 8  | 11 | 9  |
| Total (M+F)| 3,176.13 | | 28.11 | | 29.19 | | 424 | | 94 | | 95 | |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Public Domain, Creative Commons Attribution 4.0 International Public License ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode))
### Citation Information
```
@misc{oliveira2023cmltts,
title={CML-TTS A Multilingual Dataset for Speech Synthesis in Low-Resource Languages},
author={Frederico S. Oliveira and Edresson Casanova and Arnaldo Cândido Júnior and Anderson S. Soares and Arlindo R. Galvão Filho},
year={2023},
eprint={2306.10097},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
### Contributions
Thanks to [@ylacombe](https://github.com/ylacombe) for adding this dataset.
|
mlfoundations/dclm-baseline-1.0-parquet | mlfoundations | "2024-07-19T17:35:58Z" | 15,474 | 25 | [
"language:en",
"license:cc-by-4.0",
"size_categories:1B<n<10B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2406.11794",
"region:us"
] | null | "2024-06-30T20:31:14Z" | ---
language:
- en
license: cc-by-4.0
---
## DCLM-baseline
***Note: this is an identical copy of https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0, where all the files have been mapped to a parquet format.***
DCLM-baseline is a 4T token / 3B document pretraining dataset that achieves strong performance on language model benchmarks.
Below are comparisons of models trained on DCLM-baseline with other models in the 7B regime.
| Model | Params | Tokens | Open dataset? | CORE | MMLU | EXTENDED |
|---------------|--------|--------|---------------|----------|----------|----------|
| **Open weights, closed datasets** | | | | | | |
| Llama2 | 7B | 2T | ✗ | 49.2 | 45.8 | 34.1 |
| DeepSeek | 7B | 2T | ✗ | 50.7 | 48.5 | 35.3 |
| Mistral-0.3 | 7B | ? | ✗ | 57.0 | 62.7 | 45.1 |
| QWEN-2 | 7B | ? | ✗ | 57.5 | **71.9** | 50.5 |
| Llama3 | 8B | 15T | ✗ | 57.6 | 66.2 | 46.3 |
| Gemma | 8B | 6T | ✗ | 57.8 | 64.3 | 44.6 |
| Phi-3 | 7B | ? | ✗ | **61.0** | 69.9 | **57.9** |
| **Open weights, open datasets** | | | | | | |
| Falcon | 7B | 1T | ✓ | 44.1 | 27.4 | 25.1 |
| Amber | 7B | 1.2T | ✓ | 39.8 | 27.9 | 22.3 |
| Crystal | 7B | 1.2T | ✓ | 48.0 | 48.2 | 33.2 |
| OLMo-1.7 | 7B | 2.1T | ✓ | 47.0 | 54.0 | 34.2 |
| MAP-Neo | 7B | 4.5T | ✓ | **50.2** | **57.1** | **40.4** |
| **Models we trained** | | | | | | |
| FineWeb edu | 7B | 0.14T | ✓ | 38.7 | 26.3 | 22.1 |
| FineWeb edu | 7B | 0.28T | ✓ | 41.9 | 37.3 | 24.5 |
| **DCLM-BASELINE** | 7B | 0.14T | ✓ | 44.1 | 38.3 | 25.0 |
| **DCLM-BASELINE** | 7B | 0.28T | ✓ | 48.9 | 50.8 | 31.8 |
| **DCLM-BASELINE** | 7B | 2.6T | ✓ | **57.1** | **63.7** | **45.4** |
## Dataset Details
### Dataset Description
- **Curated by:** The DCLM Team
- **Language(s) (NLP):** English
- **License:** CC-by-4.0
### Dataset Sources
- **Repository:** https://datacomp.ai/dclm
- **Paper:**: https://arxiv.org/abs/2406.11794
- **Construction Code**: https://github.com/mlfoundations/dclm
## Uses
### Direct Use
DCLM-Baseline is intended to be used as a research baseline for the DCLM benchmark. It demonstrates the importance of data curation in training performant language models.
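A minimal way to start experimenting is to stream the parquet copy (the column names are not listed in this card, so inspect a sample first):
```python
from datasets import load_dataset

# Streaming avoids downloading the full multi-terabyte corpus up front.
dclm = load_dataset(
    "mlfoundations/dclm-baseline-1.0-parquet", split="train", streaming=True
)
print(next(iter(dclm)))  # inspect the available fields
```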
### Out-of-Scope Use
DCLM-Baseline is not intended for training production-ready models or for specific domains such as code and math. It may not perform as well as domain-specific datasets for these tasks. Due to these limitations, the dataset is intended for research use only.
DCLM-Baseline is a subset of the DCLM-Pool, which is a corpus of 240 trillion tokens derived from Common Crawl. The dataset is in plain text format.
## Dataset Creation
### Curation Rationale
DCLM-Baseline was created to demonstrate the effectiveness of the DCLM testbed in developing high-quality training sets for language models. It serves as a proof of concept for the data curation strategies enabled by DCLM and is designed to be a research baseline for the benchmark.
### Source Data
#### Data Collection and Processing
DCLM-Baseline was created by applying a series of cleaning, filtering, and deduplication steps to the raw Common Crawl data (DCLM-Pool). The key steps include:
1. Heuristic cleaning and filtering (reproduction of RefinedWeb)
2. Deduplication using a Bloom filter
3. Model-based filtering using a fastText classifier trained on instruction-formatted data (OpenHermes 2.5 and r/ExplainLikeImFive)
#### Who are the source data producers?
The source data is from Common Crawl, which is a repository of web crawl data.
### Personal and Sensitive Information
[More Information Needed]
## Bias, Risks, and Limitations
The dataset may contain biases present in the Common Crawl data. The dataset's performance on code and math tasks is limited compared to its performance on language understanding tasks. DCLM-Baseline is designed for research purposes only.
### Recommendations
Users should be aware of the potential biases and limitations of the dataset, especially when using it for specific domains like code and math. The dataset should only be used for research purposes in the context of the DCLM benchmark.
## Citation
```bibtex
@misc{li2024datacomplm,
title={DataComp-LM: In search of the next generation of training sets for language models},
author={Jeffrey Li and Alex Fang and Georgios Smyrnis and Maor Ivgi and Matt Jordan and Samir Gadre and Hritik Bansal and Etash Guha and Sedrick Keh and Kushal Arora and Saurabh Garg and Rui Xin and Niklas Muennighoff and Reinhard Heckel and Jean Mercat and Mayee Chen and Suchin Gururangan and Mitchell Wortsman and Alon Albalak and Yonatan Bitton and Marianna Nezhurina and Amro Abbas and Cheng-Yu Hsieh and Dhruba Ghosh and Josh Gardner and Maciej Kilian and Hanlin Zhang and Rulin Shao and Sarah Pratt and Sunny Sanyal and Gabriel Ilharco and Giannis Daras and Kalyani Marathe and Aaron Gokaslan and Jieyu Zhang and Khyathi Chandu and Thao Nguyen and Igor Vasiljevic and Sham Kakade and Shuran Song and Sujay Sanghavi and Fartash Faghri and Sewoong Oh and Luke Zettlemoyer and Kyle Lo and Alaaeldin El-Nouby and Hadi Pouransari and Alexander Toshev and Stephanie Wang and Dirk Groeneveld and Luca Soldaini and Pang Wei Koh and Jenia Jitsev and Thomas Kollar and Alexandros G. Dimakis and Yair Carmon and Achal Dave and Ludwig Schmidt and Vaishaal Shankar},
year={2024},
eprint={2406.11794},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
open-rl-leaderboard/results_v2 | open-rl-leaderboard | "2024-12-04T01:28:04Z" | 15,455 | 1 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-14T15:05:26Z" | ---
dataset_info:
features:
- name: user_id
dtype: string
- name: model_id
dtype: string
- name: sha
dtype: string
- name: status
dtype: string
- name: env_id
dtype: string
- name: episodic_returns
sequence: float64
splits:
- name: train
num_bytes: 7120241
num_examples: 19092
download_size: 0
dataset_size: 7120241
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "results_v2"
[Leaderboard](https://huggingface.co/spaces/open-rl-leaderboard/leaderboard)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AmazonScience/massive | AmazonScience | "2022-11-16T15:44:51Z" | 15,311 | 63 | [
"task_categories:text-classification",
"task_ids:intent-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:af-ZA",
"multilinguality:am-ET",
"multilinguality:ar-SA",
"multilinguality:az-AZ",
"multilinguality:bn-BD",
"multilinguality:ca-ES",
"multilinguality:cy-GB",
"multilinguality:da-DK",
"multilinguality:de-DE",
"multilinguality:el-GR",
"multilinguality:en-US",
"multilinguality:es-ES",
"multilinguality:fa-IR",
"multilinguality:fi-FI",
"multilinguality:fr-FR",
"multilinguality:he-IL",
"multilinguality:hi-IN",
"multilinguality:hu-HU",
"multilinguality:hy-AM",
"multilinguality:id-ID",
"multilinguality:is-IS",
"multilinguality:it-IT",
"multilinguality:ja-JP",
"multilinguality:jv-ID",
"multilinguality:ka-GE",
"multilinguality:km-KH",
"multilinguality:kn-IN",
"multilinguality:ko-KR",
"multilinguality:lv-LV",
"multilinguality:ml-IN",
"multilinguality:mn-MN",
"multilinguality:ms-MY",
"multilinguality:my-MM",
"multilinguality:nb-NO",
"multilinguality:nl-NL",
"multilinguality:pl-PL",
"multilinguality:pt-PT",
"multilinguality:ro-RO",
"multilinguality:ru-RU",
"multilinguality:sl-SL",
"multilinguality:sq-AL",
"multilinguality:sv-SE",
"multilinguality:sw-KE",
"multilinguality:ta-IN",
"multilinguality:te-IN",
"multilinguality:th-TH",
"multilinguality:tl-PH",
"multilinguality:tr-TR",
"multilinguality:ur-PK",
"multilinguality:vi-VN",
"multilinguality:zh-CN",
"multilinguality:zh-TW",
"source_datasets:original",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2204.08582",
"region:us",
"natural-language-understanding"
] | [
"text-classification"
] | "2022-04-27T20:48:46Z" | ---
annotations_creators:
- expert-generated
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- af-ZA
- am-ET
- ar-SA
- az-AZ
- bn-BD
- ca-ES
- cy-GB
- da-DK
- de-DE
- el-GR
- en-US
- es-ES
- fa-IR
- fi-FI
- fr-FR
- he-IL
- hi-IN
- hu-HU
- hy-AM
- id-ID
- is-IS
- it-IT
- ja-JP
- jv-ID
- ka-GE
- km-KH
- kn-IN
- ko-KR
- lv-LV
- ml-IN
- mn-MN
- ms-MY
- my-MM
- nb-NO
- nl-NL
- pl-PL
- pt-PT
- ro-RO
- ru-RU
- sl-SL
- sq-AL
- sv-SE
- sw-KE
- ta-IN
- te-IN
- th-TH
- tl-PH
- tr-TR
- ur-PK
- vi-VN
- zh-CN
- zh-TW
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- intent-classification
- multi-class-classification
paperswithcode_id: massive
pretty_name: MASSIVE
language_bcp47:
- af-ZA
- am-ET
- ar-SA
- az-AZ
- bn-BD
- ca-ES
- cy-GB
- da-DK
- de-DE
- el-GR
- en-US
- es-ES
- fa-IR
- fi-FI
- fr-FR
- he-IL
- hi-IN
- hu-HU
- hy-AM
- id-ID
- is-IS
- it-IT
- ja-JP
- jv-ID
- ka-GE
- km-KH
- kn-IN
- ko-KR
- lv-LV
- ml-IN
- mn-MN
- ms-MY
- my-MM
- nb-NO
- nl-NL
- pl-PL
- pt-PT
- ro-RO
- ru-RU
- sl-SL
- sq-AL
- sv-SE
- sw-KE
- ta-IN
- te-IN
- th-TH
- tl-PH
- tr-TR
- ur-PK
- vi-VN
- zh-CN
- zh-TW
tags:
- natural-language-understanding
---
# MASSIVE 1.1: A 1M-Example Multilingual Natural Language Understanding Dataset with 52 Typologically-Diverse Languages
## Table of Contents
- [Dataset Card for [Needs More Information]](#dataset-card-for-needs-more-information)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [No Warranty](#no-warranty)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/alexa/massive
- **Repository:** https://github.com/alexa/massive
- **Paper:** https://arxiv.org/abs/2204.08582
- **Leaderboard:** https://eval.ai/web/challenges/challenge-page/1697/overview
- **Point of Contact:** [GitHub](https://github.com/alexa/massive/issues)
### Dataset Summary
MASSIVE 1.1 is a parallel dataset of > 1M utterances across 52 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation. Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions.
| Name | Lang | Utt/Lang | Domains | Intents | Slots |
|:-------------------------------------------------------------------------------:|:-------:|:--------------:|:-------:|:--------:|:------:|
| MASSIVE 1.1 | 52 | 19,521 | 18 | 60 | 55 |
| SLURP (Bastianelli et al., 2020) | 1 | 16,521 | 18 | 60 | 55 |
| NLU Evaluation Data (Liu et al., 2019) | 1 | 25,716 | 18 | 54 | 56 |
| Airline Travel Information System (ATIS) (Price, 1990) | 1 | 5,871 | 1 | 26 | 129 |
| ATIS with Hindi and Turkish (Upadhyay et al., 2018) | 3 | 1,315-5,871 | 1 | 26 | 129 |
| MultiATIS++ (Xu et al., 2020) | 9 | 1,422-5,897 | 1 | 21-26 | 99-140 |
| Snips (Coucke et al., 2018) | 1 | 14,484 | - | 7 | 53 |
| Snips with French (Saade et al., 2019) | 2 | 4,818 | 2 | 14-15 | 11-12 |
| Task Oriented Parsing (TOP) (Gupta et al., 2018) | 1 | 44,873 | 2 | 25 | 36 |
| Multilingual Task-Oriented Semantic Parsing (MTOP) (Li et al., 2021) | 6 | 15,195-22,288 | 11 | 104-113 | 72-75 |
| Cross-Lingual Multilingual Task Oriented Dialog (Schuster et al., 2019) | 3 | 5,083-43,323 | 3 | 12 | 11 |
| Microsoft Dialog Challenge (Li et al., 2018) | 1 | 38,276 | 3 | 11 | 29 |
| Fluent Speech Commands (FSC) (Lugosch et al., 2019) | 1 | 30,043 | - | 31 | - |
| Chinese Audio-Textual Spoken Language Understanding (CATSLU) (Zhu et al., 2019) | 1 | 16,258 | 4 | - | 94 |
### Supported Tasks and Leaderboards
The dataset can be used to train a model for `natural-language-understanding` (NLU):
- `intent-classification`
- `multi-class-classification`
- `natural-language-understanding`
### Languages
The MASSIVE 1.1 corpus consists of parallel sentences across 52 languages:
- `Afrikaans - South Africa (af-ZA)`
- `Amharic - Ethiopia (am-ET)`
- `Arabic - Saudi Arabia (ar-SA)`
- `Azeri - Azerbaijan (az-AZ)`
- `Bengali - Bangladesh (bn-BD)`
- `Catalan - Spain (ca-ES)`
- `Chinese - China (zh-CN)`
- `Chinese - Taiwan (zh-TW)`
- `Danish - Denmark (da-DK)`
- `German - Germany (de-DE)`
- `Greek - Greece (el-GR)`
- `English - United States (en-US)`
- `Spanish - Spain (es-ES)`
- `Farsi - Iran (fa-IR)`
- `Finnish - Finland (fi-FI)`
- `French - France (fr-FR)`
- `Hebrew - Israel (he-IL)`
- `Hungarian - Hungary (hu-HU)`
- `Armenian - Armenia (hy-AM)`
- `Indonesian - Indonesia (id-ID)`
- `Icelandic - Iceland (is-IS)`
- `Italian - Italy (it-IT)`
- `Japanese - Japan (ja-JP)`
- `Javanese - Indonesia (jv-ID)`
- `Georgian - Georgia (ka-GE)`
- `Khmer - Cambodia (km-KH)`
- `Korean - Korea (ko-KR)`
- `Latvian - Latvia (lv-LV)`
- `Mongolian - Mongolia (mn-MN)`
- `Malay - Malaysia (ms-MY)`
- `Burmese - Myanmar (my-MM)`
- `Norwegian - Norway (nb-NO)`
- `Dutch - Netherlands (nl-NL)`
- `Polish - Poland (pl-PL)`
- `Portuguese - Portugal (pt-PT)`
- `Romanian - Romania (ro-RO)`
- `Russian - Russia (ru-RU)`
- `Slovenian - Slovenia (sl-SL)`
- `Albanian - Albania (sq-AL)`
- `Swedish - Sweden (sv-SE)`
- `Swahili - Kenya (sw-KE)`
- `Hindi - India (hi-IN)`
- `Kannada - India (kn-IN)`
- `Malayalam - India (ml-IN)`
- `Tamil - India (ta-IN)`
- `Telugu - India (te-IN)`
- `Thai - Thailand (th-TH)`
- `Tagalog - Philippines (tl-PH)`
- `Turkish - Turkey (tr-TR)`
- `Urdu - Pakistan (ur-PK)`
- `Vietnamese - Vietnam (vi-VN)`
- `Welsh - United Kingdom (cy-GB)`
## Load the dataset with HuggingFace
```python
from datasets import load_dataset

# "en-US" is one of the 52 locale configs listed above.
dataset = load_dataset("AmazonScience/massive", "en-US", split="train")
print(dataset[0])
```
## Dataset Structure
### Data Instances
```json
{
"id": "0",
"locale": "fr-FR",
"partition": "test",
"scenario": "alarm",
"intent": "alarm_set",
"utt": "réveille-moi à cinq heures du matin cette semaine",
"annot_utt": "réveille-moi à [time : cinq heures du matin] [date : cette semaine]",
"worker_id": "22",
"slot_method": [
{ "slot": "time", "method": "translation" },
{ "slot": "date", "method": "translation" }
],
"judgments": [
{
"worker_id": "22",
"intent_score": 1,
"slots_score": 1,
"grammar_score": 4,
"spelling_score": 2,
"language_identification": "target"
},
{
"worker_id": "8",
"intent_score": 1,
"slots_score": 1,
"grammar_score": 4,
"spelling_score": 2,
"language_identification": "target"
},
{
"worker_id": "0",
"intent_score": 1,
"slots_score": 1,
"grammar_score": 4,
"spelling_score": 2,
"language_identification": "target"
}
]
}
```
### Data Fields
`id`: maps to the original ID in the [SLURP](https://github.com/pswietojanski/slurp) collection. Mapping back to the SLURP en-US utterance, this utterance served as the basis for this localization.
`locale`: is the language and country code according to ISO 639-1 and ISO 3166.
`partition`: is either `train`, `dev`, or `test`, according to the original split in [SLURP](https://github.com/pswietojanski/slurp).
`scenario`: is the general domain, aka "scenario" in SLURP terminology, of an utterance.
`intent`: is the specific intent of an utterance within a domain, formatted as `{scenario}_{intent}`.
`utt`: the raw utterance text without annotations.
`annot_utt`: the text from `utt` with slot annotations formatted as `[{label} : {entity}]`.
`worker_id`: The obfuscated worker ID from MTurk of the worker completing the localization of the utterance. Worker IDs are specific to a locale and do *not* map across locales.
`slot_method`: for each slot in the utterance, whether that slot was a `translation` (i.e., same expression just in the target language), `localization` (i.e., not the same expression but a different expression was chosen more suitable to the phrase in that locale), or `unchanged` (i.e., the original en-US slot value was copied over without modification).
`judgments`: Each judgment collected for the localized utterance has 6 keys. `worker_id` is the obfuscated worker ID from MTurk of the worker completing the judgment. Worker IDs are specific to a locale and do *not* map across locales, but *are* consistent across the localization tasks and the judgment tasks, e.g., judgment worker ID 22 in the example above may appear as the localization worker ID for a different fr-FR utterance, in which case it would be the same worker.
```plain
intent_score : "Does the sentence match the intent?"
0: No
1: Yes
2: It is a reasonable interpretation of the goal
slots_score : "Do all these terms match the categories in square brackets?"
0: No
1: Yes
2: There are no words in square brackets (utterance without a slot)
grammar_score : "Read the sentence out loud. Ignore any spelling, punctuation, or capitalization errors. Does it sound natural?"
0: Completely unnatural (nonsensical, cannot be understood at all)
1: Severe errors (the meaning cannot be understood and doesn't sound natural in your language)
2: Some errors (the meaning can be understood but it doesn't sound natural in your language)
3: Good enough (easily understood and sounds almost natural in your language)
4: Perfect (sounds natural in your language)
spelling_score : "Are all words spelled correctly? Ignore any spelling variances that may be due to differences in dialect. Missing spaces should be marked as a spelling error."
0: There are more than 2 spelling errors
1: There are 1-2 spelling errors
2: All words are spelled correctly
language_identification : "The following sentence contains words in the following languages (check all that apply)"
1: target
2: english
3: other
4: target & english
5: target & other
6: english & other
7: target & english & other
```
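The `[{label} : {entity}]` slot format used in `annot_utt` is straightforward to parse; below is a minimal sketch (the regex and helper name are illustrative, not part of an official MASSIVE toolkit) applied to the fr-FR instance shown earlier:
```python
import re

# Matches "[label : entity]" spans as used in `annot_utt`.
SLOT_PATTERN = re.compile(r"\[\s*([^:\]]+?)\s*:\s*([^\]]+?)\s*\]")

def extract_slots(annot_utt: str) -> list[tuple[str, str]]:
    """Return (slot_label, slot_value) pairs from an annotated utterance."""
    return SLOT_PATTERN.findall(annot_utt)

annot = "réveille-moi à [time : cinq heures du matin] [date : cette semaine]"
print(extract_slots(annot))
# [('time', 'cinq heures du matin'), ('date', 'cette semaine')]
```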
### Data Splits
|Language|Train|Dev|Test|
|:---:|:---:|:---:|:---:|
|af-ZA|11514|2033|2974|
|am-ET|11514|2033|2974|
|ar-SA|11514|2033|2974|
|az-AZ|11514|2033|2974|
|bn-BD|11514|2033|2974|
|ca-ES|11514|2033|2974|
|cy-GB|11514|2033|2974|
|da-DK|11514|2033|2974|
|de-DE|11514|2033|2974|
|el-GR|11514|2033|2974|
|en-US|11514|2033|2974|
|es-ES|11514|2033|2974|
|fa-IR|11514|2033|2974|
|fi-FI|11514|2033|2974|
|fr-FR|11514|2033|2974|
|he-IL|11514|2033|2974|
|hi-IN|11514|2033|2974|
|hu-HU|11514|2033|2974|
|hy-AM|11514|2033|2974|
|id-ID|11514|2033|2974|
|is-IS|11514|2033|2974|
|it-IT|11514|2033|2974|
|ja-JP|11514|2033|2974|
|jv-ID|11514|2033|2974|
|ka-GE|11514|2033|2974|
|km-KH|11514|2033|2974|
|kn-IN|11514|2033|2974|
|ko-KR|11514|2033|2974|
|lv-LV|11514|2033|2974|
|ml-IN|11514|2033|2974|
|mn-MN|11514|2033|2974|
|ms-MY|11514|2033|2974|
|my-MM|11514|2033|2974|
|nb-NO|11514|2033|2974|
|nl-NL|11514|2033|2974|
|pl-PL|11514|2033|2974|
|pt-PT|11514|2033|2974|
|ro-RO|11514|2033|2974|
|ru-RU|11514|2033|2974|
|sl-SL|11514|2033|2974|
|sq-AL|11514|2033|2974|
|sv-SE|11514|2033|2974|
|sw-KE|11514|2033|2974|
|ta-IN|11514|2033|2974|
|te-IN|11514|2033|2974|
|th-TH|11514|2033|2974|
|tl-PH|11514|2033|2974|
|tr-TR|11514|2033|2974|
|ur-PK|11514|2033|2974|
|vi-VN|11514|2033|2974|
|zh-CN|11514|2033|2974|
|zh-TW|11514|2033|2974|
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Additional Information
### Dataset Curators
__MASSIVE__: Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan.
__SLURP__: Bastianelli, Emanuele and Vanzo, Andrea and Swietojanski, Pawel and Rieser, Verena.
__Hugging Face Upload and Integration__: Labrak Yanis (Not affiliated with the original corpus)
### Licensing Information
```plain
Copyright Amazon.com Inc. or its affiliates.
Attribution 4.0 International
=======================================================================
Creative Commons Corporation ("Creative Commons") is not a law firm and
does not provide legal services or legal advice. Distribution of
Creative Commons public licenses does not create a lawyer-client or
other relationship. Creative Commons makes its licenses and related
information available on an "as-is" basis. Creative Commons gives no
warranties regarding its licenses, any material licensed under their
terms and conditions, or any related information. Creative Commons
disclaims all liability for damages resulting from their use to the
fullest extent possible.
Using Creative Commons Public Licenses
Creative Commons public licenses provide a standard set of terms and
conditions that creators and other rights holders may use to share
original works of authorship and other material subject to copyright
and certain other rights specified in the public license below. The
following considerations are for informational purposes only, are not
exhaustive, and do not form part of our licenses.
Considerations for licensors: Our public licenses are
intended for use by those authorized to give the public
permission to use material in ways otherwise restricted by
copyright and certain other rights. Our licenses are
irrevocable. Licensors should read and understand the terms
and conditions of the license they choose before applying it.
Licensors should also secure all rights necessary before
applying our licenses so that the public can reuse the
material as expected. Licensors should clearly mark any
material not subject to the license. This includes other CC-
licensed material, or material used under an exception or
limitation to copyright. More considerations for licensors:
wiki.creativecommons.org/Considerations_for_licensors
Considerations for the public: By using one of our public
licenses, a licensor grants the public permission to use the
licensed material under specified terms and conditions. If
the licensor's permission is not necessary for any reason--for
example, because of any applicable exception or limitation to
copyright--then that use is not regulated by the license. Our
licenses grant only permissions under copyright and certain
other rights that a licensor has authority to grant. Use of
the licensed material may still be restricted for other
reasons, including because others have copyright or other
rights in the material. A licensor may make special requests,
such as asking that all changes be marked or described.
Although not required by our licenses, you are encouraged to
respect those requests where reasonable. More considerations
for the public:
wiki.creativecommons.org/Considerations_for_licensees
=======================================================================
Creative Commons Attribution 4.0 International Public License
By exercising the Licensed Rights (defined below), You accept and agree
to be bound by the terms and conditions of this Creative Commons
Attribution 4.0 International Public License ("Public License"). To the
extent this Public License may be interpreted as a contract, You are
granted the Licensed Rights in consideration of Your acceptance of
these terms and conditions, and the Licensor grants You such rights in
consideration of benefits the Licensor receives from making the
Licensed Material available under these terms and conditions.
Section 1 -- Definitions.
a. Adapted Material means material subject to Copyright and Similar
Rights that is derived from or based upon the Licensed Material
and in which the Licensed Material is translated, altered,
arranged, transformed, or otherwise modified in a manner requiring
permission under the Copyright and Similar Rights held by the
Licensor. For purposes of this Public License, where the Licensed
Material is a musical work, performance, or sound recording,
Adapted Material is always produced where the Licensed Material is
synched in timed relation with a moving image.
b. Adapter's License means the license You apply to Your Copyright
and Similar Rights in Your contributions to Adapted Material in
accordance with the terms and conditions of this Public License.
c. Copyright and Similar Rights means copyright and/or similar rights
closely related to copyright including, without limitation,
performance, broadcast, sound recording, and Sui Generis Database
Rights, without regard to how the rights are labeled or
categorized. For purposes of this Public License, the rights
specified in Section 2(b)(1)-(2) are not Copyright and Similar
Rights.
d. Effective Technological Measures means those measures that, in the
absence of proper authority, may not be circumvented under laws
fulfilling obligations under Article 11 of the WIPO Copyright
Treaty adopted on December 20, 1996, and/or similar international
agreements.
e. Exceptions and Limitations means fair use, fair dealing, and/or
any other exception or limitation to Copyright and Similar Rights
that applies to Your use of the Licensed Material.
f. Licensed Material means the artistic or literary work, database,
or other material to which the Licensor applied this Public
License.
g. Licensed Rights means the rights granted to You subject to the
terms and conditions of this Public License, which are limited to
all Copyright and Similar Rights that apply to Your use of the
Licensed Material and that the Licensor has authority to license.
h. Licensor means the individual(s) or entity(ies) granting rights
under this Public License.
i. Share means to provide material to the public by any means or
process that requires permission under the Licensed Rights, such
as reproduction, public display, public performance, distribution,
dissemination, communication, or importation, and to make material
available to the public including in ways that members of the
public may access the material from a place and at a time
individually chosen by them.
j. Sui Generis Database Rights means rights other than copyright
resulting from Directive 96/9/EC of the European Parliament and of
the Council of 11 March 1996 on the legal protection of databases,
as amended and/or succeeded, as well as other essentially
equivalent rights anywhere in the world.
k. You means the individual or entity exercising the Licensed Rights
under this Public License. Your has a corresponding meaning.
Section 2 -- Scope.
a. License grant.
1. Subject to the terms and conditions of this Public License,
the Licensor hereby grants You a worldwide, royalty-free,
non-sublicensable, non-exclusive, irrevocable license to
exercise the Licensed Rights in the Licensed Material to:
a. reproduce and Share the Licensed Material, in whole or
in part; and
b. produce, reproduce, and Share Adapted Material.
2. Exceptions and Limitations. For the avoidance of doubt, where
Exceptions and Limitations apply to Your use, this Public
License does not apply, and You do not need to comply with
its terms and conditions.
3. Term. The term of this Public License is specified in Section
6(a).
4. Media and formats; technical modifications allowed. The
Licensor authorizes You to exercise the Licensed Rights in
all media and formats whether now known or hereafter created,
and to make technical modifications necessary to do so. The
Licensor waives and/or agrees not to assert any right or
authority to forbid You from making technical modifications
necessary to exercise the Licensed Rights, including
technical modifications necessary to circumvent Effective
Technological Measures. For purposes of this Public License,
simply making modifications authorized by this Section 2(a)
(4) never produces Adapted Material.
5. Downstream recipients.
a. Offer from the Licensor -- Licensed Material. Every
recipient of the Licensed Material automatically
receives an offer from the Licensor to exercise the
Licensed Rights under the terms and conditions of this
Public License.
b. No downstream restrictions. You may not offer or impose
any additional or different terms or conditions on, or
apply any Effective Technological Measures to, the
Licensed Material if doing so restricts exercise of the
Licensed Rights by any recipient of the Licensed
Material.
6. No endorsement. Nothing in this Public License constitutes or
may be construed as permission to assert or imply that You
are, or that Your use of the Licensed Material is, connected
with, or sponsored, endorsed, or granted official status by,
the Licensor or others designated to receive attribution as
provided in Section 3(a)(1)(A)(i).
b. Other rights.
1. Moral rights, such as the right of integrity, are not
licensed under this Public License, nor are publicity,
privacy, and/or other similar personality rights; however, to
the extent possible, the Licensor waives and/or agrees not to
assert any such rights held by the Licensor to the limited
extent necessary to allow You to exercise the Licensed
Rights, but not otherwise.
2. Patent and trademark rights are not licensed under this
Public License.
3. To the extent possible, the Licensor waives any right to
collect royalties from You for the exercise of the Licensed
Rights, whether directly or through a collecting society
under any voluntary or waivable statutory or compulsory
licensing scheme. In all other cases the Licensor expressly
reserves any right to collect such royalties.
Section 3 -- License Conditions.
Your exercise of the Licensed Rights is expressly made subject to the
following conditions.
a. Attribution.
1. If You Share the Licensed Material (including in modified
form), You must:
a. retain the following if it is supplied by the Licensor
with the Licensed Material:
i. identification of the creator(s) of the Licensed
Material and any others designated to receive
attribution, in any reasonable manner requested by
the Licensor (including by pseudonym if
designated);
ii. a copyright notice;
iii. a notice that refers to this Public License;
iv. a notice that refers to the disclaimer of
warranties;
v. a URI or hyperlink to the Licensed Material to the
extent reasonably practicable;
b. indicate if You modified the Licensed Material and
retain an indication of any previous modifications; and
c. indicate the Licensed Material is licensed under this
Public License, and include the text of, or the URI or
hyperlink to, this Public License.
2. You may satisfy the conditions in Section 3(a)(1) in any
reasonable manner based on the medium, means, and context in
which You Share the Licensed Material. For example, it may be
reasonable to satisfy the conditions by providing a URI or
hyperlink to a resource that includes the required
information.
3. If requested by the Licensor, You must remove any of the
information required by Section 3(a)(1)(A) to the extent
reasonably practicable.
4. If You Share Adapted Material You produce, the Adapter's
License You apply must not prevent recipients of the Adapted
Material from complying with this Public License.
Section 4 -- Sui Generis Database Rights.
Where the Licensed Rights include Sui Generis Database Rights that
apply to Your use of the Licensed Material:
a. for the avoidance of doubt, Section 2(a)(1) grants You the right
to extract, reuse, reproduce, and Share all or a substantial
portion of the contents of the database;
b. if You include all or a substantial portion of the database
contents in a database in which You have Sui Generis Database
Rights, then the database in which You have Sui Generis Database
Rights (but not its individual contents) is Adapted Material; and
c. You must comply with the conditions in Section 3(a) if You Share
all or a substantial portion of the contents of the database.
For the avoidance of doubt, this Section 4 supplements and does not
replace Your obligations under this Public License where the Licensed
Rights include other Copyright and Similar Rights.
Section 5 -- Disclaimer of Warranties and Limitation of Liability.
a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
c. The disclaimer of warranties and limitation of liability provided
above shall be interpreted in a manner that, to the extent
possible, most closely approximates an absolute disclaimer and
waiver of all liability.
Section 6 -- Term and Termination.
a. This Public License applies for the term of the Copyright and
Similar Rights licensed here. However, if You fail to comply with
this Public License, then Your rights under this Public License
terminate automatically.
b. Where Your right to use the Licensed Material has terminated under
Section 6(a), it reinstates:
1. automatically as of the date the violation is cured, provided
it is cured within 30 days of Your discovery of the
violation; or
2. upon express reinstatement by the Licensor.
For the avoidance of doubt, this Section 6(b) does not affect any
right the Licensor may have to seek remedies for Your violations
of this Public License.
c. For the avoidance of doubt, the Licensor may also offer the
Licensed Material under separate terms or conditions or stop
distributing the Licensed Material at any time; however, doing so
will not terminate this Public License.
d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
License.
Section 7 -- Other Terms and Conditions.
a. The Licensor shall not be bound by any additional or different
terms or conditions communicated by You unless expressly agreed.
b. Any arrangements, understandings, or agreements regarding the
Licensed Material not stated herein are separate from and
independent of the terms and conditions of this Public License.
Section 8 -- Interpretation.
a. For the avoidance of doubt, this Public License does not, and
shall not be interpreted to, reduce, limit, restrict, or impose
conditions on any use of the Licensed Material that could lawfully
be made without permission under this Public License.
b. To the extent possible, if any provision of this Public License is
deemed unenforceable, it shall be automatically reformed to the
minimum extent necessary to make it enforceable. If the provision
cannot be reformed, it shall be severed from this Public License
without affecting the enforceability of the remaining terms and
conditions.
c. No term or condition of this Public License will be waived and no
failure to comply consented to unless expressly agreed to by the
Licensor.
d. Nothing in this Public License constitutes or may be interpreted
as a limitation upon, or waiver of, any privileges and immunities
that apply to the Licensor or You, including from the legal
processes of any jurisdiction or authority.
=======================================================================
Creative Commons is not a party to its public licenses.
Notwithstanding, Creative Commons may elect to apply one of its public
licenses to material it publishes and in those instances will be
considered the “Licensor.” The text of the Creative Commons public
licenses is dedicated to the public domain under the CC0 Public Domain
Dedication. Except for the limited purpose of indicating that material
is shared under a Creative Commons public license or as otherwise
permitted by the Creative Commons policies published at
creativecommons.org/policies, Creative Commons does not authorize the
use of the trademark "Creative Commons" or any other trademark or logo
of Creative Commons without its prior written consent including,
without limitation, in connection with any unauthorized modifications
to any of its public licenses or any other arrangements,
understandings, or agreements concerning use of licensed material. For
the avoidance of doubt, this paragraph does not form part of the public
licenses.
Creative Commons may be contacted at creativecommons.org.
```
### Citation Information
Please cite the following papers when using this dataset.
```latex
@misc{fitzgerald2022massive,
title={MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages},
author={Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan},
year={2022},
eprint={2204.08582},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{bastianelli-etal-2020-slurp,
title = "{SLURP}: A Spoken Language Understanding Resource Package",
author = "Bastianelli, Emanuele and
Vanzo, Andrea and
Swietojanski, Pawel and
Rieser, Verena",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.588",
doi = "10.18653/v1/2020.emnlp-main.588",
pages = "7252--7262",
abstract = "Spoken Language Understanding infers semantic meaning directly from audio data, and thus promises to reduce error propagation and misunderstandings in end-user applications. However, publicly available SLU resources are limited. In this paper, we release SLURP, a new SLU package containing the following: (1) A new challenging dataset in English spanning 18 domains, which is substantially bigger and linguistically more diverse than existing datasets; (2) Competitive baselines based on state-of-the-art NLU and ASR systems; (3) A new transparent metric for entity labelling which enables a detailed error analysis for identifying potential areas of improvement. SLURP is available at https://github.com/pswietojanski/slurp."
}
```
|
Idavidrein/gpqa | Idavidrein | "2024-03-28T21:38:55Z" | 15,179 | 75 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2311.12022",
"region:us",
"open-domain-qa",
"open-book-qa",
"multiple-choice-qa"
] | [
"question-answering",
"text-generation"
] | "2023-11-27T23:18:46Z" | ---
license: cc-by-4.0
viewer: true
extra_gated_prompt: >-
You agree to NOT reveal examples from this dataset in plain text or images
online, to reduce the risk of leakage into foundation model training corpora.
extra_gated_fields:
I accept these terms: checkbox
configs:
- config_name: gpqa_extended
data_files: gpqa_extended.csv
- config_name: gpqa_main
data_files: gpqa_main.csv
- config_name: gpqa_diamond
data_files: gpqa_diamond.csv
- config_name: gpqa_experts
data_files: gpqa_experts.csv
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- open-domain-qa
- open-book-qa
- multiple-choice-qa
pretty_name: GPQA
size_categories:
- n<1K
---
# Dataset Card for GPQA
<!-- Provide a quick summary of the dataset. -->
GPQA is a multiple-choice, Q&A dataset of very hard questions written and validated by experts in biology, physics, and chemistry. When attempting questions out of their own domain (e.g., a physicist answers a chemistry question), these experts get only 34% accuracy, despite spending >30m with full access to Google.
We request that you **do not reveal examples from this dataset in plain text or images online**, to reduce the risk of leakage into foundation model training corpora.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
We present GPQA, a challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. We ensure that the questions are high-quality and extremely difficult: experts who have or are pursuing PhDs in the corresponding domains reach 65% accuracy (74% when discounting clear mistakes the experts identified in retrospect), while highly skilled non-expert validators only reach 34% accuracy, despite spending on average over 30 minutes with unrestricted access to the web (i.e., the questions are "Google-proof"). The questions are also difficult for state-of-the-art AI systems, with our strongest GPT-4 based baseline achieving 39% accuracy. If we are to use future AI systems to help us answer very hard questions, for example, when developing new scientific knowledge, we need to develop scalable oversight methods that enable humans to supervise their outputs, which may be difficult even if the supervisors are themselves skilled and knowledgeable. The difficulty of GPQA both for skilled non-experts and frontier AI systems should enable realistic scalable oversight experiments, which we hope can help devise ways for human experts to reliably get truthful information from AI systems that surpass human capabilities.
- **Curated by:** David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, Samuel R. Bowman
- **License:** CC BY 4.0
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/idavidrein/gpqa
- **Paper:** https://arxiv.org/abs/2311.12022
## Uses
The dataset is primarily intended to be used for scalable oversight experiments, although it can also be used for more general LLM capabilities benchmarking.
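For example, the main split can be loaded by config name as defined in the YAML above. GPQA is gated, so you must accept the usage terms on the Hub and authenticate (e.g. via `huggingface-cli login`) before downloading; a minimal sketch:
```python
from datasets import load_dataset

# Config names (gpqa_main, gpqa_diamond, gpqa_extended, gpqa_experts)
# come from the dataset card above; each CSV loads as a "train" split.
gpqa = load_dataset("Idavidrein/gpqa", "gpqa_main", split="train")

print(gpqa.num_rows)          # 448 questions, per the description above
print(gpqa.column_names[:5])  # inspect the available fields
```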
## Dataset Card Contact
David Rein: idavidrein@gmail.com
---
Submit corrections to examples in GPQA via this form: https://forms.gle/iTY4zMETNsPhJq8R9
--- |
cardiffnlp/tweet_eval | cardiffnlp | "2024-01-04T16:40:33Z" | 14,985 | 115 | [
"task_categories:text-classification",
"task_ids:intent-classification",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended|other-tweet-datasets",
"language:en",
"license:unknown",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2010.12421",
"region:us"
] | [
"text-classification"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
- n<1K
source_datasets:
- extended|other-tweet-datasets
task_categories:
- text-classification
task_ids:
- intent-classification
- multi-class-classification
- sentiment-classification
paperswithcode_id: tweeteval
pretty_name: TweetEval
config_names:
- emoji
- emotion
- hate
- irony
- offensive
- sentiment
- stance_abortion
- stance_atheism
- stance_climate
- stance_feminist
- stance_hillary
dataset_info:
- config_name: emoji
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': ❤
'1': 😍
'2': 😂
'3': 💕
'4': 🔥
'5': 😊
'6': 😎
'7': ✨
'8': 💙
'9': 😘
'10': 📷
'11': 🇺🇸
'12': ☀
'13': 💜
'14': 😉
'15': 💯
'16': 😁
'17': 🎄
'18': 📸
'19': 😜
splits:
- name: train
num_bytes: 3803167
num_examples: 45000
- name: test
num_bytes: 4255901
num_examples: 50000
- name: validation
num_bytes: 396079
num_examples: 5000
download_size: 5939308
dataset_size: 8455147
- config_name: emotion
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': anger
'1': joy
'2': optimism
'3': sadness
splits:
- name: train
num_bytes: 338871
num_examples: 3257
- name: test
num_bytes: 146645
num_examples: 1421
- name: validation
num_bytes: 38273
num_examples: 374
download_size: 367016
dataset_size: 523789
- config_name: hate
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': non-hate
'1': hate
splits:
- name: train
num_bytes: 1223650
num_examples: 9000
- name: test
num_bytes: 428934
num_examples: 2970
- name: validation
num_bytes: 154144
num_examples: 1000
download_size: 1196346
dataset_size: 1806728
- config_name: irony
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': non_irony
'1': irony
splits:
- name: train
num_bytes: 259187
num_examples: 2862
- name: test
num_bytes: 75897
num_examples: 784
- name: validation
num_bytes: 86017
num_examples: 955
download_size: 297647
dataset_size: 421101
- config_name: offensive
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': non-offensive
'1': offensive
splits:
- name: train
num_bytes: 1648061
num_examples: 11916
- name: test
num_bytes: 135473
num_examples: 860
- name: validation
num_bytes: 192417
num_examples: 1324
download_size: 1234528
dataset_size: 1975951
- config_name: sentiment
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
splits:
- name: train
num_bytes: 5425122
num_examples: 45615
- name: test
num_bytes: 1279540
num_examples: 12284
- name: validation
num_bytes: 239084
num_examples: 2000
download_size: 4849675
dataset_size: 6943746
- config_name: stance_abortion
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': none
'1': against
'2': favor
splits:
- name: train
num_bytes: 68694
num_examples: 587
- name: test
num_bytes: 33171
num_examples: 280
- name: validation
num_bytes: 7657
num_examples: 66
download_size: 73517
dataset_size: 109522
- config_name: stance_atheism
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': none
'1': against
'2': favor
splits:
- name: train
num_bytes: 54775
num_examples: 461
- name: test
num_bytes: 25716
num_examples: 220
- name: validation
num_bytes: 6320
num_examples: 52
download_size: 62265
dataset_size: 86811
- config_name: stance_climate
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': none
'1': against
'2': favor
splits:
- name: train
num_bytes: 40249
num_examples: 355
- name: test
num_bytes: 19925
num_examples: 169
- name: validation
num_bytes: 4801
num_examples: 40
download_size: 48493
dataset_size: 64975
- config_name: stance_feminist
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': none
'1': against
'2': favor
splits:
- name: train
num_bytes: 70509
num_examples: 597
- name: test
num_bytes: 33305
num_examples: 285
- name: validation
num_bytes: 8035
num_examples: 67
download_size: 76345
dataset_size: 111849
- config_name: stance_hillary
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': none
'1': against
'2': favor
splits:
- name: train
num_bytes: 69596
num_examples: 620
- name: test
num_bytes: 34487
num_examples: 295
- name: validation
num_bytes: 7532
num_examples: 69
download_size: 74057
dataset_size: 111615
configs:
- config_name: emoji
data_files:
- split: train
path: emoji/train-*
- split: test
path: emoji/test-*
- split: validation
path: emoji/validation-*
- config_name: emotion
data_files:
- split: train
path: emotion/train-*
- split: test
path: emotion/test-*
- split: validation
path: emotion/validation-*
- config_name: hate
data_files:
- split: train
path: hate/train-*
- split: test
path: hate/test-*
- split: validation
path: hate/validation-*
- config_name: irony
data_files:
- split: train
path: irony/train-*
- split: test
path: irony/test-*
- split: validation
path: irony/validation-*
- config_name: offensive
data_files:
- split: train
path: offensive/train-*
- split: test
path: offensive/test-*
- split: validation
path: offensive/validation-*
- config_name: sentiment
data_files:
- split: train
path: sentiment/train-*
- split: test
path: sentiment/test-*
- split: validation
path: sentiment/validation-*
- config_name: stance_abortion
data_files:
- split: train
path: stance_abortion/train-*
- split: test
path: stance_abortion/test-*
- split: validation
path: stance_abortion/validation-*
- config_name: stance_atheism
data_files:
- split: train
path: stance_atheism/train-*
- split: test
path: stance_atheism/test-*
- split: validation
path: stance_atheism/validation-*
- config_name: stance_climate
data_files:
- split: train
path: stance_climate/train-*
- split: test
path: stance_climate/test-*
- split: validation
path: stance_climate/validation-*
- config_name: stance_feminist
data_files:
- split: train
path: stance_feminist/train-*
- split: test
path: stance_feminist/test-*
- split: validation
path: stance_feminist/validation-*
- config_name: stance_hillary
data_files:
- split: train
path: stance_hillary/train-*
- split: test
path: stance_hillary/test-*
- split: validation
path: stance_hillary/validation-*
train-eval-index:
- config: emotion
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
- config: hate
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 binary
args:
average: binary
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
- config: irony
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 binary
args:
average: binary
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
- config: offensive
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 binary
args:
average: binary
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
- config: sentiment
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for tweet_eval
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [GitHub](https://github.com/cardiffnlp/tweeteval)
- **Paper:** [EMNLP Paper](https://arxiv.org/pdf/2010.12421.pdf)
- **Leaderboard:** [GitHub Leaderboard](https://github.com/cardiffnlp/tweeteval)
- **Point of Contact:** [Needs More Information]
### Dataset Summary
TweetEval consists of seven heterogeneous tasks on Twitter, all framed as multi-class tweet classification. The tasks are irony, hate, offensive, stance, emoji, emotion, and sentiment detection. All tasks have been unified into the same benchmark, with each dataset presented in the same format and with fixed training, validation and test splits.
### Supported Tasks and Leaderboards
- `text_classification`: The dataset can be used to train a sequence-classification model, e.g. `AutoModelForSequenceClassification` from Hugging Face Transformers (see the sketch below).
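A minimal fine-tuning setup might look as follows (`roberta-base` is an arbitrary example checkpoint, not a recommendation from the card):
```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Each config ships fixed train/validation/test splits.
emotion = load_dataset("cardiffnlp/tweet_eval", "emotion")
num_labels = emotion["train"].features["label"].num_classes  # 4 for emotion

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=num_labels
)

encoded = emotion.map(
    lambda batch: tokenizer(batch["text"], truncation=True), batched=True
)
# `encoded` can now be passed to transformers' Trainer for fine-tuning.
```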
### Languages
The text in the dataset is in English, as spoken by Twitter users.
## Dataset Structure
### Data Instances
An instance from `emoji` config:
```
{'label': 12, 'text': 'Sunday afternoon walking through Venice in the sun with @user ️ ️ ️ @ Abbot Kinney, Venice'}
```
An instance from `emotion` config:
```
{'label': 2, 'text': "“Worry is a down payment on a problem you may never have'. \xa0Joyce Meyer. #motivation #leadership #worry"}
```
An instance from `hate` config:
```
{'label': 0, 'text': '@user nice new signage. Are you not concerned by Beatlemania -style hysterical crowds crongregating on you…'}
```
An instance from `irony` config:
```
{'label': 1, 'text': 'seeing ppl walking w/ crutches makes me really excited for the next 3 weeks of my life'}
```
An instance from `offensive` config:
```
{'label': 0, 'text': '@user Bono... who cares. Soon people will understand that they gain nothing from following a phony celebrity. Become a Leader of your people instead or help and support your fellow countrymen.'}
```
An instance from `sentiment` config:
```
{'label': 2, 'text': '"QT @user In the original draft of the 7th book, Remus Lupin survived the Battle of Hogwarts. #HappyBirthdayRemusLupin"'}
```
An instance from `stance_abortion` config:
```
{'label': 1, 'text': 'we remind ourselves that love means to be willing to give until it hurts - Mother Teresa'}
```
An instance from `stance_atheism` config:
```
{'label': 1, 'text': '@user Bless Almighty God, Almighty Holy Spirit and the Messiah. #SemST'}
```
An instance from `stance_climate` config:
```
{'label': 0, 'text': 'Why Is The Pope Upset? via @user #UnzippedTruth #PopeFrancis #SemST'}
```
An instance from `stance_feminist` config:
```
{'label': 1, 'text': "@user @user is the UK's answer to @user and @user #GamerGate #SemST"}
```
An instance from `stance_hillary` config:
```
{'label': 1, 'text': "If a man demanded staff to get him an ice tea he'd be called a sexists elitist pig.. Oink oink #Hillary #SemST"}
```
### Data Fields
For `emoji` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: ❤
`1`: 😍
`2`: 😂
`3`: 💕
`4`: 🔥
`5`: 😊
`6`: 😎
`7`: ✨
`8`: 💙
`9`: 😘
`10`: 📷
`11`: 🇺🇸
`12`: ☀
`13`: 💜
`14`: 😉
`15`: 💯
`16`: 😁
`17`: 🎄
`18`: 📸
`19`: 😜
For `emotion` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: anger
`1`: joy
`2`: optimism
`3`: sadness
For `hate` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: non-hate
`1`: hate
For `irony` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: non_irony
`1`: irony
For `offensive` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: non-offensive
`1`: offensive
For `sentiment` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: negative
`1`: neutral
`2`: positive
For `stance_abortion` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: none
`1`: against
`2`: favor
For `stance_atheism` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: none
`1`: against
`2`: favor
For `stance_climate` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: none
`1`: against
`2`: favor
For `stance_feminist` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: none
`1`: against
`2`: favor
For `stance_hillary` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: none
`1`: against
`2`: favor
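These integer-to-name mappings do not have to be hard-coded: each config carries a `ClassLabel` feature that can translate in both directions. A brief sketch:
```python
from datasets import load_dataset

sentiment = load_dataset("cardiffnlp/tweet_eval", "sentiment", split="test")
label_feature = sentiment.features["label"]

print(label_feature.names)               # ['negative', 'neutral', 'positive']
print(label_feature.int2str(2))          # 'positive'
print(label_feature.str2int("neutral"))  # 1
```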
### Data Splits
| name | train | validation | test |
| --------------- | ----- | ---------- | ----- |
| emoji | 45000 | 5000 | 50000 |
| emotion | 3257 | 374 | 1421 |
| hate | 9000 | 1000 | 2970 |
| irony | 2862 | 955 | 784 |
| offensive | 11916 | 1324 | 860 |
| sentiment | 45615 | 2000 | 12284 |
| stance_abortion | 587 | 66 | 280 |
| stance_atheism | 461 | 52 | 220 |
| stance_climate | 355 | 40 | 169 |
| stance_feminist | 597 | 67 | 285 |
| stance_hillary | 620 | 69 | 295 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Francesco Barbieri, Jose Camacho-Collados, Luis Espinosa-Anke and Leonardo Neves through Cardiff NLP.
### Licensing Information
This is not a single dataset; each subset therefore has its own license (the collection itself does not add further restrictions).
All of the datasets require complying with Twitter [Terms Of Service](https://twitter.com/tos) and Twitter API [Terms Of Service](https://developer.twitter.com/en/developer-terms/agreement-and-policy)
Additionally, the licenses are:
- emoji: Undefined
- emotion (EmoInt): Undefined
- hate (HateEval): Requires permission, see [here](http://hatespeech.di.unito.it/hateval.html)
- irony: Undefined
- offensive: Undefined
- sentiment: [Creative Commons Attribution 3.0 Unported License](https://groups.google.com/g/semevaltweet/c/k5DDcvVb_Vo/m/zEOdECFyBQAJ)
- stance: Undefined
### Citation Information
```
@inproceedings{barbieri2020tweeteval,
title={{TweetEval:Unified Benchmark and Comparative Evaluation for Tweet Classification}},
author={Barbieri, Francesco and Camacho-Collados, Jose and Espinosa-Anke, Luis and Neves, Leonardo},
booktitle={Proceedings of Findings of EMNLP},
year={2020}
}
```
If you use any of the TweetEval datasets, please cite their original publications:
#### Emotion Recognition:
```
@inproceedings{mohammad2018semeval,
title={Semeval-2018 task 1: Affect in tweets},
author={Mohammad, Saif and Bravo-Marquez, Felipe and Salameh, Mohammad and Kiritchenko, Svetlana},
booktitle={Proceedings of the 12th international workshop on semantic evaluation},
pages={1--17},
year={2018}
}
```
#### Emoji Prediction:
```
@inproceedings{barbieri2018semeval,
title={Semeval 2018 task 2: Multilingual emoji prediction},
author={Barbieri, Francesco and Camacho-Collados, Jose and Ronzano, Francesco and Espinosa-Anke, Luis and
Ballesteros, Miguel and Basile, Valerio and Patti, Viviana and Saggion, Horacio},
booktitle={Proceedings of The 12th International Workshop on Semantic Evaluation},
pages={24--33},
year={2018}
}
```
#### Irony Detection:
```
@inproceedings{van2018semeval,
title={Semeval-2018 task 3: Irony detection in english tweets},
author={Van Hee, Cynthia and Lefever, Els and Hoste, V{\'e}ronique},
booktitle={Proceedings of The 12th International Workshop on Semantic Evaluation},
pages={39--50},
year={2018}
}
```
#### Hate Speech Detection:
```
@inproceedings{basile-etal-2019-semeval,
title = "{S}em{E}val-2019 Task 5: Multilingual Detection of Hate Speech Against Immigrants and Women in {T}witter",
author = "Basile, Valerio and Bosco, Cristina and Fersini, Elisabetta and Nozza, Debora and Patti, Viviana and
Rangel Pardo, Francisco Manuel and Rosso, Paolo and Sanguinetti, Manuela",
booktitle = "Proceedings of the 13th International Workshop on Semantic Evaluation",
year = "2019",
address = "Minneapolis, Minnesota, USA",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/S19-2007",
doi = "10.18653/v1/S19-2007",
pages = "54--63"
}
```
#### Offensive Language Identification:
```
@inproceedings{zampieri2019semeval,
title={SemEval-2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (OffensEval)},
author={Zampieri, Marcos and Malmasi, Shervin and Nakov, Preslav and Rosenthal, Sara and Farra, Noura and Kumar, Ritesh},
booktitle={Proceedings of the 13th International Workshop on Semantic Evaluation},
pages={75--86},
year={2019}
}
```
#### Sentiment Analysis:
```
@inproceedings{rosenthal2017semeval,
title={SemEval-2017 task 4: Sentiment analysis in Twitter},
author={Rosenthal, Sara and Farra, Noura and Nakov, Preslav},
booktitle={Proceedings of the 11th international workshop on semantic evaluation (SemEval-2017)},
pages={502--518},
year={2017}
}
```
#### Stance Detection:
```
@inproceedings{mohammad2016semeval,
title={Semeval-2016 task 6: Detecting stance in tweets},
author={Mohammad, Saif and Kiritchenko, Svetlana and Sobhani, Parinaz and Zhu, Xiaodan and Cherry, Colin},
booktitle={Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)},
pages={31--41},
year={2016}
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) and [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
BleachNick/UltraEdit | BleachNick | "2024-08-31T13:49:21Z" | 14,981 | 6 | [
"task_categories:text-to-image",
"language:en",
"license:cc-by-4.0",
"arxiv:2407.05282",
"doi:10.57967/hf/2481",
"region:us",
"art"
] | [
"text-to-image"
] | "2024-06-09T11:02:13Z" | ---
language:
- en
license: cc-by-4.0
task_categories:
- text-to-image
dataset_info:
features:
- name: clip_sim_source
dtype: float64
- name: clip_sim_target
dtype: float64
- name: clip_sim_dir
dtype: float64
- name: clip_sim_image
dtype: float64
- name: dinov2_sim
dtype: float64
- name: ssim
dtype: float64
- name: source_caption
dtype: string
- name: target_caption
dtype: string
- name: idx
dtype: int64
- name: edit_prompt
dtype: string
- name: edit_object
dtype: 'null'
- name: source_image
dtype: image
- name: edited_image
dtype: image
- name: mask_image
dtype: 'null'
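  # The fields above form one paired image-editing record: a source/edited
  # image pair, the edit_prompt that maps one to the other, captions, and
  # similarity scores (CLIP, DINOv2, SSIM). A minimal loading sketch, kept
  # as YAML comments so the frontmatter stays parseable, assuming the
  # Hugging Face `datasets` library and this repository id:
  #   from datasets import load_dataset
  #   ds = load_dataset("BleachNick/UltraEdit", split="FreeForm_0")
  #   ex = ds[0]  # ex["source_image"], ex["edited_image"], ex["edit_prompt"], ...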
splits:
- name: FreeForm_0
num_bytes: 759385792
num_examples: 2000
- name: FreeForm_1
num_bytes: 756874067
num_examples: 2000
- name: FreeForm_2
num_bytes: 759183069
num_examples: 2000
- name: FreeForm_3
num_bytes: 755508440
num_examples: 2000
- name: FreeForm_4
num_bytes: 756540442
num_examples: 2000
- name: FreeForm_5
num_bytes: 758622320
num_examples: 2000
- name: FreeForm_6
num_bytes: 761524774
num_examples: 2000
- name: FreeForm_7
num_bytes: 758775293
num_examples: 2000
- name: FreeForm_8
num_bytes: 760200313
num_examples: 2000
- name: FreeForm_9
num_bytes: 768448051
num_examples: 2000
- name: FreeForm_10
num_bytes: 773489315
num_examples: 2000
- name: FreeForm_11
num_bytes: 778109354
num_examples: 2000
- name: FreeForm_12
num_bytes: 778512114
num_examples: 2000
- name: FreeForm_13
num_bytes: 768485969
num_examples: 2000
- name: FreeForm_14
num_bytes: 779712509
num_examples: 2000
- name: FreeForm_15
num_bytes: 765837533
num_examples: 2000
- name: FreeForm_16
num_bytes: 769511714
num_examples: 2000
- name: FreeForm_17
num_bytes: 771149850
num_examples: 2000
- name: FreeForm_18
num_bytes: 771410726
num_examples: 2000
- name: FreeForm_19
num_bytes: 770722184
num_examples: 2000
- name: FreeForm_20
num_bytes: 783276398
num_examples: 2000
- name: FreeForm_21
num_bytes: 776884755
num_examples: 2000
- name: FreeForm_22
num_bytes: 783258028
num_examples: 2000
- name: FreeForm_23
num_bytes: 781541694
num_examples: 2000
- name: FreeForm_24
num_bytes: 781306379
num_examples: 2000
- name: FreeForm_25
num_bytes: 777818799
num_examples: 2000
- name: FreeForm_26
num_bytes: 778351829
num_examples: 2000
- name: FreeForm_27
num_bytes: 778407074
num_examples: 2000
- name: FreeForm_28
num_bytes: 776257503
num_examples: 2000
- name: FreeForm_29
num_bytes: 779274036
num_examples: 2000
- name: FreeForm_30
num_bytes: 779300944
num_examples: 2000
- name: FreeForm_31
num_bytes: 775309985
num_examples: 2000
- name: FreeForm_32
num_bytes: 779442636
num_examples: 2000
- name: FreeForm_33
num_bytes: 784142063
num_examples: 2000
- name: FreeForm_34
num_bytes: 781037956
num_examples: 2000
- name: FreeForm_35
num_bytes: 783237883
num_examples: 2000
- name: FreeForm_36
num_bytes: 782420508
num_examples: 2000
- name: FreeForm_37
num_bytes: 778974851
num_examples: 2000
- name: FreeForm_38
num_bytes: 781037000
num_examples: 2000
- name: FreeForm_39
num_bytes: 780728965
num_examples: 2000
- name: FreeForm_40
num_bytes: 781624433
num_examples: 2000
- name: FreeForm_41
num_bytes: 782390249
num_examples: 2000
- name: FreeForm_42
num_bytes: 780332512
num_examples: 2000
- name: FreeForm_43
num_bytes: 785691458
num_examples: 2000
- name: FreeForm_44
num_bytes: 774303123
num_examples: 2000
- name: FreeForm_45
num_bytes: 775698594
num_examples: 2000
- name: FreeForm_46
num_bytes: 792219548
num_examples: 2000
- name: FreeForm_47
num_bytes: 779527180
num_examples: 2000
- name: FreeForm_48
num_bytes: 768255127
num_examples: 2000
- name: FreeForm_49
num_bytes: 780377695
num_examples: 2000
- name: FreeForm_50
num_bytes: 780951915
num_examples: 2000
- name: FreeForm_51
num_bytes: 781476572
num_examples: 2000
- name: FreeForm_52
num_bytes: 778819875
num_examples: 2000
- name: FreeForm_53
num_bytes: 780021360
num_examples: 2000
- name: FreeForm_54
num_bytes: 780353501
num_examples: 2000
- name: FreeForm_55
num_bytes: 780989870
num_examples: 2000
- name: FreeForm_56
num_bytes: 790152972
num_examples: 2000
- name: FreeForm_57
num_bytes: 773017463
num_examples: 2000
- name: FreeForm_58
num_bytes: 785315245
num_examples: 2000
- name: FreeForm_59
num_bytes: 783225063
num_examples: 2000
- name: FreeForm_60
num_bytes: 779732938
num_examples: 2000
- name: FreeForm_61
num_bytes: 775300360
num_examples: 2000
- name: FreeForm_62
num_bytes: 787277550
num_examples: 2000
- name: FreeForm_63
num_bytes: 785273008
num_examples: 2000
- name: FreeForm_64
num_bytes: 781745081
num_examples: 2000
- name: FreeForm_65
num_bytes: 774655340
num_examples: 2000
- name: FreeForm_66
num_bytes: 786214063
num_examples: 2000
- name: FreeForm_67
num_bytes: 780515365
num_examples: 2000
- name: FreeForm_68
num_bytes: 781112419
num_examples: 2000
- name: FreeForm_69
num_bytes: 784807337
num_examples: 2000
- name: FreeForm_70
num_bytes: 792820805
num_examples: 2000
- name: FreeForm_71
num_bytes: 779452329
num_examples: 2000
- name: FreeForm_72
num_bytes: 782202231
num_examples: 2000
- name: FreeForm_73
num_bytes: 780102581
num_examples: 2000
- name: FreeForm_74
num_bytes: 778207590
num_examples: 2000
- name: FreeForm_75
num_bytes: 773440925
num_examples: 2000
- name: FreeForm_76
num_bytes: 776717338
num_examples: 2000
- name: FreeForm_77
num_bytes: 782872533
num_examples: 2000
- name: FreeForm_78
num_bytes: 781570187
num_examples: 2000
- name: FreeForm_79
num_bytes: 777108477
num_examples: 2000
- name: FreeForm_80
num_bytes: 782422774
num_examples: 2000
- name: FreeForm_81
num_bytes: 780493074
num_examples: 2000
- name: FreeForm_82
num_bytes: 784737791
num_examples: 2000
- name: FreeForm_83
num_bytes: 772319242
num_examples: 2000
- name: FreeForm_84
num_bytes: 783158436
num_examples: 2000
- name: FreeForm_85
num_bytes: 777733688
num_examples: 2000
- name: FreeForm_86
num_bytes: 788418673
num_examples: 2000
- name: FreeForm_87
num_bytes: 785653901
num_examples: 2000
- name: FreeForm_88
num_bytes: 779811756
num_examples: 2000
- name: FreeForm_89
num_bytes: 781032025
num_examples: 2000
- name: FreeForm_90
num_bytes: 782448048
num_examples: 2000
- name: FreeForm_91
num_bytes: 789579728
num_examples: 2000
- name: FreeForm_92
num_bytes: 785851472
num_examples: 2000
- name: FreeForm_93
num_bytes: 776616321
num_examples: 2000
- name: FreeForm_94
num_bytes: 772441019
num_examples: 2000
- name: FreeForm_95
num_bytes: 777885007
num_examples: 2000
- name: FreeForm_96
num_bytes: 779615563
num_examples: 2000
- name: FreeForm_97
num_bytes: 781932881
num_examples: 2000
- name: FreeForm_98
num_bytes: 778767405
num_examples: 2000
- name: FreeForm_99
num_bytes: 781249553
num_examples: 2000
- name: FreeForm_100
num_bytes: 777582777
num_examples: 2000
- name: FreeForm_101
num_bytes: 789079489
num_examples: 2000
- name: FreeForm_102
num_bytes: 773798368
num_examples: 2000
- name: FreeForm_103
num_bytes: 777652284
num_examples: 2000
- name: FreeForm_104
num_bytes: 782763557
num_examples: 2000
- name: FreeForm_105
num_bytes: 775572386
num_examples: 2000
- name: FreeForm_106
num_bytes: 782503475
num_examples: 2000
- name: FreeForm_107
num_bytes: 779729667
num_examples: 2000
- name: FreeForm_108
num_bytes: 785032491
num_examples: 2000
- name: FreeForm_109
num_bytes: 774752941
num_examples: 2000
- name: FreeForm_110
num_bytes: 776262712
num_examples: 2000
- name: FreeForm_111
num_bytes: 780328424
num_examples: 2000
- name: FreeForm_112
num_bytes: 782706800
num_examples: 2000
- name: FreeForm_113
num_bytes: 778603762
num_examples: 2000
- name: FreeForm_114
num_bytes: 781562793
num_examples: 2000
- name: FreeForm_115
num_bytes: 782963964
num_examples: 2000
- name: FreeForm_116
num_bytes: 771866357
num_examples: 2000
- name: FreeForm_117
num_bytes: 769456958
num_examples: 2000
- name: FreeForm_118
num_bytes: 778196876
num_examples: 2000
- name: FreeForm_119
num_bytes: 787450589
num_examples: 2000
- name: FreeForm_120
num_bytes: 788257623
num_examples: 2000
- name: FreeForm_121
num_bytes: 774218024
num_examples: 2000
- name: FreeForm_122
num_bytes: 777310894
num_examples: 2000
- name: FreeForm_123
num_bytes: 782304214
num_examples: 2000
- name: FreeForm_124
num_bytes: 787668207
num_examples: 2000
- name: FreeForm_125
num_bytes: 782149440
num_examples: 2000
- name: FreeForm_126
num_bytes: 772279923
num_examples: 2000
- name: FreeForm_127
num_bytes: 782051230
num_examples: 2000
- name: FreeForm_128
num_bytes: 779704525
num_examples: 2000
- name: FreeForm_129
num_bytes: 784954990
num_examples: 2000
- name: FreeForm_130
num_bytes: 783706718
num_examples: 2000
- name: FreeForm_131
num_bytes: 778920587
num_examples: 2000
- name: FreeForm_132
num_bytes: 777609528
num_examples: 2000
- name: FreeForm_133
num_bytes: 776108392
num_examples: 2000
- name: FreeForm_134
num_bytes: 773424215
num_examples: 2000
- name: FreeForm_135
num_bytes: 783577402
num_examples: 2000
- name: FreeForm_136
num_bytes: 781872028
num_examples: 2000
- name: FreeForm_137
num_bytes: 784396076
num_examples: 2000
- name: FreeForm_138
num_bytes: 782096650
num_examples: 2000
- name: FreeForm_139
num_bytes: 778830416
num_examples: 2000
- name: FreeForm_140
num_bytes: 786000079
num_examples: 2000
- name: FreeForm_141
num_bytes: 781664498
num_examples: 2000
- name: FreeForm_142
num_bytes: 791069332
num_examples: 2000
- name: FreeForm_143
num_bytes: 785025567
num_examples: 2000
- name: FreeForm_144
num_bytes: 777105450
num_examples: 2000
- name: FreeForm_145
num_bytes: 781311359
num_examples: 2000
- name: FreeForm_146
num_bytes: 779953680
num_examples: 2000
- name: FreeForm_147
num_bytes: 787964927
num_examples: 2000
- name: FreeForm_148
num_bytes: 781275038
num_examples: 2000
- name: FreeForm_149
num_bytes: 787792527
num_examples: 2000
- name: FreeForm_150
num_bytes: 775254416
num_examples: 2000
- name: FreeForm_151
num_bytes: 775985702
num_examples: 2000
- name: FreeForm_152
num_bytes: 774217627
num_examples: 2000
- name: FreeForm_153
num_bytes: 785218355
num_examples: 2000
- name: FreeForm_154
num_bytes: 778486283
num_examples: 2000
- name: FreeForm_155
num_bytes: 782013722
num_examples: 2000
- name: FreeForm_156
num_bytes: 781868361
num_examples: 2000
- name: FreeForm_157
num_bytes: 775308631
num_examples: 2000
- name: FreeForm_158
num_bytes: 774627734
num_examples: 2000
- name: FreeForm_159
num_bytes: 793847051
num_examples: 2000
- name: FreeForm_160
num_bytes: 778008360
num_examples: 2000
- name: FreeForm_161
num_bytes: 779105315
num_examples: 2000
- name: FreeForm_162
num_bytes: 774827779
num_examples: 2000
- name: FreeForm_163
num_bytes: 782014203
num_examples: 2000
- name: FreeForm_164
num_bytes: 777132570
num_examples: 2000
- name: FreeForm_165
num_bytes: 776191239
num_examples: 2000
- name: FreeForm_166
num_bytes: 783015253
num_examples: 2000
- name: FreeForm_167
num_bytes: 785442481
num_examples: 2000
- name: FreeForm_168
num_bytes: 776184901
num_examples: 2000
- name: FreeForm_169
num_bytes: 778378698
num_examples: 2000
- name: FreeForm_170
num_bytes: 779983316
num_examples: 2000
- name: FreeForm_171
num_bytes: 782247431
num_examples: 2000
- name: FreeForm_172
num_bytes: 778287241
num_examples: 2000
- name: FreeForm_173
num_bytes: 783732214
num_examples: 2000
- name: FreeForm_174
num_bytes: 784645727
num_examples: 2000
- name: FreeForm_175
num_bytes: 780535529
num_examples: 2000
- name: FreeForm_176
num_bytes: 775325249
num_examples: 2000
- name: FreeForm_177
num_bytes: 781466592
num_examples: 2000
- name: FreeForm_178
num_bytes: 787145952
num_examples: 2000
- name: FreeForm_179
num_bytes: 780889603
num_examples: 2000
- name: FreeForm_180
num_bytes: 773684169
num_examples: 2000
- name: FreeForm_181
num_bytes: 788912563
num_examples: 2000
- name: FreeForm_182
num_bytes: 785582121
num_examples: 2000
- name: FreeForm_183
num_bytes: 784626591
num_examples: 2000
- name: FreeForm_184
num_bytes: 790547359
num_examples: 2000
- name: FreeForm_185
num_bytes: 784622676
num_examples: 2000
- name: FreeForm_186
num_bytes: 769870952
num_examples: 2000
- name: FreeForm_187
num_bytes: 778273211
num_examples: 2000
- name: FreeForm_188
num_bytes: 773796454
num_examples: 2000
- name: FreeForm_189
num_bytes: 789263531
num_examples: 2000
- name: FreeForm_190
num_bytes: 775580113
num_examples: 2000
- name: FreeForm_191
num_bytes: 774644337
num_examples: 2000
- name: FreeForm_192
num_bytes: 779218306
num_examples: 2000
- name: FreeForm_193
num_bytes: 782789594
num_examples: 2000
- name: FreeForm_194
num_bytes: 778522221
num_examples: 2000
- name: FreeForm_195
num_bytes: 769927305
num_examples: 2000
- name: FreeForm_196
num_bytes: 787652053
num_examples: 2000
- name: FreeForm_197
num_bytes: 781281999
num_examples: 2000
- name: FreeForm_198
num_bytes: 784173619
num_examples: 2000
- name: FreeForm_199
num_bytes: 780085733
num_examples: 2000
- name: FreeForm_200
num_bytes: 784857406
num_examples: 2000
- name: FreeForm_201
num_bytes: 781521869
num_examples: 2000
- name: FreeForm_202
num_bytes: 779589554
num_examples: 2000
- name: FreeForm_203
num_bytes: 781196442
num_examples: 2000
- name: FreeForm_204
num_bytes: 772955630
num_examples: 2000
- name: FreeForm_205
num_bytes: 784267323
num_examples: 2000
- name: FreeForm_206
num_bytes: 775806104
num_examples: 2000
- name: FreeForm_207
num_bytes: 779673572
num_examples: 2000
- name: FreeForm_208
num_bytes: 782927457
num_examples: 2000
- name: FreeForm_209
num_bytes: 782826891
num_examples: 2000
- name: FreeForm_210
num_bytes: 784130072
num_examples: 2000
- name: FreeForm_211
num_bytes: 774395254
num_examples: 2000
- name: FreeForm_212
num_bytes: 780161197
num_examples: 2000
- name: FreeForm_213
num_bytes: 774990162
num_examples: 2000
- name: FreeForm_214
num_bytes: 780745487
num_examples: 2000
- name: FreeForm_215
num_bytes: 775570186
num_examples: 2000
- name: FreeForm_216
num_bytes: 780406810
num_examples: 2000
- name: FreeForm_217
num_bytes: 783843708
num_examples: 2000
- name: FreeForm_218
num_bytes: 774349485
num_examples: 2000
- name: FreeForm_219
num_bytes: 786409937
num_examples: 2000
- name: FreeForm_220
num_bytes: 780250550
num_examples: 2000
- name: FreeForm_221
num_bytes: 781397833
num_examples: 2000
- name: FreeForm_222
num_bytes: 787266266
num_examples: 2000
- name: FreeForm_223
num_bytes: 771635959
num_examples: 2000
- name: FreeForm_224
num_bytes: 788040561
num_examples: 2000
- name: FreeForm_225
num_bytes: 779481600
num_examples: 2000
- name: FreeForm_226
num_bytes: 778119416
num_examples: 2000
- name: FreeForm_227
num_bytes: 786426591
num_examples: 2000
- name: FreeForm_228
num_bytes: 775824969
num_examples: 2000
- name: FreeForm_229
num_bytes: 786598208
num_examples: 2000
- name: FreeForm_230
num_bytes: 783115035
num_examples: 2000
- name: FreeForm_231
num_bytes: 777076410
num_examples: 2000
- name: FreeForm_232
num_bytes: 785489709
num_examples: 2000
- name: FreeForm_233
num_bytes: 775771458
num_examples: 2000
- name: FreeForm_234
num_bytes: 778795846
num_examples: 2000
- name: FreeForm_235
num_bytes: 779495945
num_examples: 2000
- name: FreeForm_236
num_bytes: 781436749
num_examples: 2000
- name: FreeForm_237
num_bytes: 779702535
num_examples: 2000
- name: FreeForm_238
num_bytes: 773483348
num_examples: 2000
- name: FreeForm_239
num_bytes: 781337701
num_examples: 2000
- name: FreeForm_240
num_bytes: 777999808
num_examples: 2000
- name: FreeForm_241
num_bytes: 785732711
num_examples: 2000
- name: FreeForm_242
num_bytes: 777647724
num_examples: 2000
- name: FreeForm_243
num_bytes: 782510547
num_examples: 2000
- name: FreeForm_244
num_bytes: 773293727
num_examples: 2000
- name: FreeForm_245
num_bytes: 773450169
num_examples: 2000
- name: FreeForm_246
num_bytes: 782072573
num_examples: 2000
- name: FreeForm_247
num_bytes: 772425825
num_examples: 2000
- name: FreeForm_248
num_bytes: 770148042
num_examples: 2000
- name: FreeForm_249
num_bytes: 780730753
num_examples: 2000
- name: FreeForm_250
num_bytes: 782650664
num_examples: 2000
- name: FreeForm_251
num_bytes: 786425992
num_examples: 2000
- name: FreeForm_252
num_bytes: 787061462
num_examples: 2000
- name: FreeForm_253
num_bytes: 776669565
num_examples: 2000
- name: FreeForm_254
num_bytes: 781733768
num_examples: 2000
- name: FreeForm_255
num_bytes: 776445040
num_examples: 2000
- name: FreeForm_256
num_bytes: 788620171
num_examples: 2000
- name: FreeForm_257
num_bytes: 775265570
num_examples: 2000
- name: FreeForm_258
num_bytes: 772003631
num_examples: 2000
- name: FreeForm_259
num_bytes: 779408477
num_examples: 2000
- name: FreeForm_260
num_bytes: 779275862
num_examples: 2000
- name: FreeForm_261
num_bytes: 781520055
num_examples: 2000
- name: FreeForm_262
num_bytes: 776835207
num_examples: 2000
- name: FreeForm_263
num_bytes: 775937930
num_examples: 2000
- name: FreeForm_264
num_bytes: 779653131
num_examples: 2000
- name: FreeForm_265
num_bytes: 777888893
num_examples: 2000
- name: FreeForm_266
num_bytes: 781868504
num_examples: 2000
- name: FreeForm_267
num_bytes: 782852767
num_examples: 2000
- name: FreeForm_268
num_bytes: 775652379
num_examples: 2000
- name: FreeForm_269
num_bytes: 779021453
num_examples: 2000
- name: FreeForm_270
num_bytes: 775406430
num_examples: 2000
- name: FreeForm_271
num_bytes: 783074385
num_examples: 2000
- name: FreeForm_272
num_bytes: 789294928
num_examples: 2000
- name: FreeForm_273
num_bytes: 791956763
num_examples: 2000
- name: FreeForm_274
num_bytes: 781284476
num_examples: 2000
- name: FreeForm_275
num_bytes: 774852559
num_examples: 2000
- name: FreeForm_276
num_bytes: 780282411
num_examples: 2000
- name: FreeForm_277
num_bytes: 785429026
num_examples: 2000
- name: FreeForm_278
num_bytes: 788139052
num_examples: 2000
- name: FreeForm_279
num_bytes: 778927364
num_examples: 2000
- name: FreeForm_280
num_bytes: 786443524
num_examples: 2000
- name: FreeForm_281
num_bytes: 779796091
num_examples: 2000
- name: FreeForm_282
num_bytes: 771796749
num_examples: 2000
- name: FreeForm_283
num_bytes: 780077185
num_examples: 2000
- name: FreeForm_284
num_bytes: 782657092
num_examples: 2000
- name: FreeForm_285
num_bytes: 777876608
num_examples: 2000
- name: FreeForm_286
num_bytes: 784147879
num_examples: 2000
- name: FreeForm_287
num_bytes: 775759029
num_examples: 2000
- name: FreeForm_288
num_bytes: 779561520
num_examples: 2000
- name: FreeForm_289
num_bytes: 777921916
num_examples: 2000
- name: FreeForm_290
num_bytes: 783983438
num_examples: 2000
- name: FreeForm_291
num_bytes: 780372433
num_examples: 2000
- name: FreeForm_292
num_bytes: 777431434
num_examples: 2000
- name: FreeForm_293
num_bytes: 779945807
num_examples: 2000
- name: FreeForm_294
num_bytes: 777725518
num_examples: 2000
- name: FreeForm_295
num_bytes: 778340933
num_examples: 2000
- name: FreeForm_296
num_bytes: 781648759
num_examples: 2000
- name: FreeForm_297
num_bytes: 781175078
num_examples: 2000
- name: FreeForm_298
num_bytes: 780131274
num_examples: 2000
- name: FreeForm_299
num_bytes: 784700521
num_examples: 2000
- name: FreeForm_300
num_bytes: 778730053
num_examples: 2000
- name: FreeForm_301
num_bytes: 777866814
num_examples: 2000
- name: FreeForm_302
num_bytes: 790628419
num_examples: 2000
- name: FreeForm_303
num_bytes: 783583996
num_examples: 2000
- name: FreeForm_304
num_bytes: 776221743
num_examples: 2000
- name: FreeForm_305
num_bytes: 783094650
num_examples: 2000
- name: FreeForm_306
num_bytes: 773021721
num_examples: 2000
- name: FreeForm_307
num_bytes: 779988657
num_examples: 2000
- name: FreeForm_308
num_bytes: 776359081
num_examples: 2000
- name: FreeForm_309
num_bytes: 784100482
num_examples: 2000
- name: FreeForm_310
num_bytes: 785281984
num_examples: 2000
- name: FreeForm_311
num_bytes: 781660370
num_examples: 2000
- name: FreeForm_312
num_bytes: 778110445
num_examples: 2000
- name: FreeForm_313
num_bytes: 778756717
num_examples: 2000
- name: FreeForm_314
num_bytes: 774237002
num_examples: 2000
- name: FreeForm_315
num_bytes: 780659451
num_examples: 2000
- name: FreeForm_316
num_bytes: 774442869
num_examples: 2000
- name: FreeForm_317
num_bytes: 774284694
num_examples: 2000
- name: FreeForm_318
num_bytes: 784436923
num_examples: 2000
- name: FreeForm_319
num_bytes: 784750776
num_examples: 2000
- name: FreeForm_320
num_bytes: 787640447
num_examples: 2000
- name: FreeForm_321
num_bytes: 783188398
num_examples: 2000
- name: FreeForm_322
num_bytes: 791492001
num_examples: 2000
- name: FreeForm_323
num_bytes: 774960969
num_examples: 2000
- name: FreeForm_324
num_bytes: 775398547
num_examples: 2000
- name: FreeForm_325
num_bytes: 770380367
num_examples: 2000
- name: FreeForm_326
num_bytes: 773936182
num_examples: 2000
- name: FreeForm_327
num_bytes: 775264472
num_examples: 2000
- name: FreeForm_328
num_bytes: 780866391
num_examples: 2000
- name: FreeForm_329
num_bytes: 789020513
num_examples: 2000
- name: FreeForm_330
num_bytes: 773526935
num_examples: 2000
- name: FreeForm_331
num_bytes: 783571566
num_examples: 2000
- name: FreeForm_332
num_bytes: 778752371
num_examples: 2000
- name: FreeForm_333
num_bytes: 782824491
num_examples: 2000
- name: FreeForm_334
num_bytes: 782375700
num_examples: 2000
- name: FreeForm_335
num_bytes: 779975126
num_examples: 2000
- name: FreeForm_336
num_bytes: 785340907
num_examples: 2000
- name: FreeForm_337
num_bytes: 780481911
num_examples: 2000
- name: FreeForm_338
num_bytes: 783014758
num_examples: 2000
- name: FreeForm_339
num_bytes: 779971436
num_examples: 2000
- name: FreeForm_340
num_bytes: 788146419
num_examples: 2000
- name: FreeForm_341
num_bytes: 785031133
num_examples: 2000
- name: FreeForm_342
num_bytes: 786154283
num_examples: 2000
- name: FreeForm_343
num_bytes: 785252303
num_examples: 2000
- name: FreeForm_344
num_bytes: 776938406
num_examples: 2000
- name: FreeForm_345
num_bytes: 775022040
num_examples: 2000
- name: FreeForm_346
num_bytes: 781089177
num_examples: 2000
- name: FreeForm_347
num_bytes: 785469537
num_examples: 2000
- name: FreeForm_348
num_bytes: 780504204
num_examples: 2000
- name: FreeForm_349
num_bytes: 781497921
num_examples: 2000
- name: FreeForm_350
num_bytes: 786463404
num_examples: 2000
- name: FreeForm_351
num_bytes: 778226591
num_examples: 2000
- name: FreeForm_352
num_bytes: 780587554
num_examples: 2000
- name: FreeForm_353
num_bytes: 772724851
num_examples: 2000
- name: FreeForm_354
num_bytes: 784892618
num_examples: 2000
- name: FreeForm_355
num_bytes: 780154389
num_examples: 2000
- name: FreeForm_356
num_bytes: 780139782
num_examples: 2000
- name: FreeForm_357
num_bytes: 783152771
num_examples: 2000
- name: FreeForm_358
num_bytes: 770762762
num_examples: 2000
- name: FreeForm_359
num_bytes: 781486281
num_examples: 2000
- name: FreeForm_360
num_bytes: 784878072
num_examples: 2000
- name: FreeForm_361
num_bytes: 767497077
num_examples: 2000
- name: FreeForm_362
num_bytes: 774209420
num_examples: 2000
- name: FreeForm_363
num_bytes: 775852671
num_examples: 2000
- name: FreeForm_364
num_bytes: 779265355
num_examples: 2000
- name: FreeForm_365
num_bytes: 778746781
num_examples: 2000
- name: FreeForm_366
num_bytes: 780292561
num_examples: 2000
- name: FreeForm_367
num_bytes: 783437604
num_examples: 2000
- name: FreeForm_368
num_bytes: 780490744
num_examples: 2000
- name: FreeForm_369
num_bytes: 784701592
num_examples: 2000
- name: FreeForm_370
num_bytes: 782231635
num_examples: 2000
- name: FreeForm_371
num_bytes: 773713131
num_examples: 2000
- name: FreeForm_372
num_bytes: 780881398
num_examples: 2000
- name: FreeForm_373
num_bytes: 772866562
num_examples: 2000
- name: FreeForm_374
num_bytes: 784456218
num_examples: 2000
- name: FreeForm_375
num_bytes: 781234237
num_examples: 2000
- name: FreeForm_376
num_bytes: 774670015
num_examples: 2000
- name: FreeForm_377
num_bytes: 780022530
num_examples: 2000
- name: FreeForm_378
num_bytes: 786354737
num_examples: 2000
- name: FreeForm_379
num_bytes: 778620546
num_examples: 2000
- name: FreeForm_380
num_bytes: 786067236
num_examples: 2000
- name: FreeForm_381
num_bytes: 783392920
num_examples: 2000
- name: FreeForm_382
num_bytes: 777015603
num_examples: 2000
- name: FreeForm_383
num_bytes: 777137904
num_examples: 2000
- name: FreeForm_384
num_bytes: 775646114
num_examples: 2000
- name: FreeForm_385
num_bytes: 778114996
num_examples: 2000
- name: FreeForm_386
num_bytes: 783206115
num_examples: 2000
- name: FreeForm_387
num_bytes: 783861784
num_examples: 2000
- name: FreeForm_388
num_bytes: 780998933
num_examples: 2000
- name: FreeForm_389
num_bytes: 784625672
num_examples: 2000
- name: FreeForm_390
num_bytes: 772741099
num_examples: 2000
- name: FreeForm_391
num_bytes: 774029608
num_examples: 2000
- name: FreeForm_392
num_bytes: 785257091
num_examples: 2000
- name: FreeForm_393
num_bytes: 780062712
num_examples: 2000
- name: FreeForm_394
num_bytes: 773189878
num_examples: 2000
- name: FreeForm_395
num_bytes: 773945343
num_examples: 2000
- name: FreeForm_396
num_bytes: 786040164
num_examples: 2000
- name: FreeForm_397
num_bytes: 776739162
num_examples: 2000
- name: FreeForm_398
num_bytes: 780130285
num_examples: 2000
- name: FreeForm_399
num_bytes: 779288968
num_examples: 2000
- name: FreeForm_400
num_bytes: 780563799
num_examples: 2000
- name: FreeForm_401
num_bytes: 777749497
num_examples: 2000
- name: FreeForm_402
num_bytes: 787840546
num_examples: 2000
- name: FreeForm_403
num_bytes: 780239764
num_examples: 2000
- name: FreeForm_404
num_bytes: 782720911
num_examples: 2000
- name: FreeForm_405
num_bytes: 776535548
num_examples: 2000
- name: FreeForm_406
num_bytes: 787828032
num_examples: 2000
- name: FreeForm_407
num_bytes: 781632121
num_examples: 2000
- name: FreeForm_408
num_bytes: 779713575
num_examples: 2000
- name: FreeForm_409
num_bytes: 777632320
num_examples: 2000
- name: FreeForm_410
num_bytes: 784686001
num_examples: 2000
- name: FreeForm_411
num_bytes: 777486756
num_examples: 2000
- name: FreeForm_412
num_bytes: 772228765
num_examples: 2000
- name: FreeForm_413
num_bytes: 781168258
num_examples: 2000
- name: FreeForm_414
num_bytes: 783339876
num_examples: 2000
- name: FreeForm_415
num_bytes: 783962079
num_examples: 2000
- name: FreeForm_416
num_bytes: 775476703
num_examples: 2000
- name: FreeForm_417
num_bytes: 780115603
num_examples: 2000
- name: FreeForm_418
num_bytes: 774555481
num_examples: 2000
- name: FreeForm_419
num_bytes: 771392249
num_examples: 2000
- name: FreeForm_420
num_bytes: 781647966
num_examples: 2000
- name: FreeForm_421
num_bytes: 778569366
num_examples: 2000
- name: FreeForm_422
num_bytes: 777075807
num_examples: 2000
- name: FreeForm_423
num_bytes: 781344221
num_examples: 2000
- name: FreeForm_424
num_bytes: 778153065
num_examples: 2000
- name: FreeForm_425
num_bytes: 787571467
num_examples: 2000
- name: FreeForm_426
num_bytes: 777826298
num_examples: 2000
- name: FreeForm_427
num_bytes: 782019034
num_examples: 2000
- name: FreeForm_428
num_bytes: 784610271
num_examples: 2000
- name: FreeForm_429
num_bytes: 777021882
num_examples: 2000
- name: FreeForm_430
num_bytes: 786138346
num_examples: 2000
- name: FreeForm_431
num_bytes: 785894029
num_examples: 2000
- name: FreeForm_432
num_bytes: 779304938
num_examples: 2000
- name: FreeForm_433
num_bytes: 777969203
num_examples: 2000
- name: FreeForm_434
num_bytes: 773402571
num_examples: 2000
- name: FreeForm_435
num_bytes: 780152853
num_examples: 2000
- name: FreeForm_436
num_bytes: 771653351
num_examples: 2000
- name: FreeForm_437
num_bytes: 782926012
num_examples: 2000
- name: FreeForm_438
num_bytes: 777969831
num_examples: 2000
- name: FreeForm_439
num_bytes: 777857001
num_examples: 2000
- name: FreeForm_440
num_bytes: 779516719
num_examples: 2000
- name: FreeForm_441
num_bytes: 770860698
num_examples: 2000
- name: FreeForm_442
num_bytes: 778712706
num_examples: 2000
- name: FreeForm_443
num_bytes: 780437949
num_examples: 2000
- name: FreeForm_444
num_bytes: 778493719
num_examples: 2000
- name: FreeForm_445
num_bytes: 776648110
num_examples: 2000
- name: FreeForm_446
num_bytes: 769735495
num_examples: 2000
- name: FreeForm_447
num_bytes: 784614251
num_examples: 2000
- name: FreeForm_448
num_bytes: 771427209
num_examples: 2000
- name: FreeForm_449
num_bytes: 776166819
num_examples: 2000
- name: FreeForm_450
num_bytes: 779663498
num_examples: 2000
- name: FreeForm_451
num_bytes: 785115162
num_examples: 2000
- name: FreeForm_452
num_bytes: 777569106
num_examples: 2000
- name: FreeForm_453
num_bytes: 773227129
num_examples: 2000
- name: FreeForm_454
num_bytes: 784237299
num_examples: 2000
- name: FreeForm_455
num_bytes: 790367726
num_examples: 2000
- name: FreeForm_456
num_bytes: 776917540
num_examples: 2000
- name: FreeForm_457
num_bytes: 768702375
num_examples: 2000
- name: FreeForm_458
num_bytes: 770524982
num_examples: 2000
- name: FreeForm_459
num_bytes: 776194088
num_examples: 2000
- name: FreeForm_460
num_bytes: 775613539
num_examples: 2000
- name: FreeForm_461
num_bytes: 769735178
num_examples: 2000
- name: FreeForm_462
num_bytes: 777259156
num_examples: 2000
- name: FreeForm_463
num_bytes: 780338974
num_examples: 2000
- name: FreeForm_464
num_bytes: 774765369
num_examples: 2000
- name: FreeForm_465
num_bytes: 769747692
num_examples: 2000
- name: FreeForm_466
num_bytes: 778452223
num_examples: 2000
- name: FreeForm_467
num_bytes: 774984225
num_examples: 2000
- name: FreeForm_468
num_bytes: 785453416
num_examples: 2000
- name: FreeForm_469
num_bytes: 779253577
num_examples: 2000
- name: FreeForm_470
num_bytes: 780377502
num_examples: 2000
- name: FreeForm_471
num_bytes: 783077732
num_examples: 2000
- name: FreeForm_472
num_bytes: 785213723
num_examples: 2000
- name: FreeForm_473
num_bytes: 789489498
num_examples: 2000
- name: FreeForm_474
num_bytes: 779887855
num_examples: 2000
- name: FreeForm_475
num_bytes: 779109501
num_examples: 2000
- name: FreeForm_476
num_bytes: 777161502
num_examples: 2000
- name: FreeForm_477
num_bytes: 786138446
num_examples: 2000
- name: FreeForm_478
num_bytes: 780123030
num_examples: 2000
- name: FreeForm_479
num_bytes: 778752736
num_examples: 2000
- name: FreeForm_480
num_bytes: 781791235
num_examples: 2000
- name: FreeForm_481
num_bytes: 773626176
num_examples: 2000
- name: FreeForm_482
num_bytes: 777106374
num_examples: 2000
- name: FreeForm_483
num_bytes: 778648646
num_examples: 2000
- name: FreeForm_484
num_bytes: 773997685
num_examples: 2000
- name: FreeForm_485
num_bytes: 779349068
num_examples: 2000
- name: FreeForm_486
num_bytes: 777967164
num_examples: 2000
- name: FreeForm_487
num_bytes: 778535239
num_examples: 2000
- name: FreeForm_488
num_bytes: 773178194
num_examples: 2000
- name: FreeForm_489
num_bytes: 774663901
num_examples: 2000
- name: FreeForm_490
num_bytes: 769685602
num_examples: 2000
- name: FreeForm_491
num_bytes: 767328694
num_examples: 2000
- name: FreeForm_492
num_bytes: 782095429
num_examples: 2000
- name: FreeForm_493
num_bytes: 777160434
num_examples: 2000
- name: FreeForm_494
num_bytes: 772991887
num_examples: 2000
- name: FreeForm_495
num_bytes: 787353950
num_examples: 2000
- name: FreeForm_496
num_bytes: 781350713
num_examples: 2000
- name: FreeForm_497
num_bytes: 768853828
num_examples: 2000
- name: FreeForm_498
num_bytes: 784087657
num_examples: 2000
- name: FreeForm_499
num_bytes: 782456509
num_examples: 2000
- name: FreeForm_500
num_bytes: 777017570
num_examples: 2000
- name: FreeForm_501
num_bytes: 781913684
num_examples: 2000
- name: FreeForm_502
num_bytes: 773513583
num_examples: 2000
- name: FreeForm_503
num_bytes: 775880907
num_examples: 2000
- name: FreeForm_504
num_bytes: 776608994
num_examples: 2000
- name: FreeForm_505
num_bytes: 778612716
num_examples: 2000
- name: FreeForm_506
num_bytes: 782017623
num_examples: 2000
- name: FreeForm_507
num_bytes: 778617412
num_examples: 2000
- name: FreeForm_508
num_bytes: 775370779
num_examples: 2000
- name: FreeForm_509
num_bytes: 783112835
num_examples: 2000
- name: FreeForm_510
num_bytes: 789052066
num_examples: 2000
- name: FreeForm_511
num_bytes: 785606342
num_examples: 2000
- name: FreeForm_512
num_bytes: 774571155
num_examples: 2000
- name: FreeForm_513
num_bytes: 780106960
num_examples: 2000
- name: FreeForm_514
num_bytes: 785882120
num_examples: 2000
- name: FreeForm_515
num_bytes: 780484543
num_examples: 2000
- name: FreeForm_945
num_bytes: 774260507
num_examples: 2000
- name: FreeForm_819
num_bytes: 779239265
num_examples: 2000
- name: FreeForm_756
num_bytes: 780489081
num_examples: 2000
- name: FreeForm_693
num_bytes: 776579782
num_examples: 2000
- name: FreeForm_567
num_bytes: 776096080
num_examples: 2000
- name: FreeForm_516
num_bytes: 773344680
num_examples: 2000
- name: FreeForm_630
num_bytes: 783509886
num_examples: 2000
- name: FreeForm_694
num_bytes: 779623249
num_examples: 2000
- name: FreeForm_757
num_bytes: 767338389
num_examples: 2000
- name: FreeForm_882
num_bytes: 782415551
num_examples: 2000
- name: FreeForm_517
num_bytes: 783601914
num_examples: 2000
- name: FreeForm_568
num_bytes: 775282456
num_examples: 2000
- name: FreeForm_695
num_bytes: 783766613
num_examples: 2000
- name: FreeForm_883
num_bytes: 781822183
num_examples: 2000
- name: FreeForm_946
num_bytes: 780880266
num_examples: 2000
- name: FreeForm_758
num_bytes: 776398014
num_examples: 2000
- name: FreeForm_820
num_bytes: 778350650
num_examples: 2000
- name: FreeForm_518
num_bytes: 796168139
num_examples: 2000
- name: FreeForm_696
num_bytes: 776163508
num_examples: 2000
- name: FreeForm_631
num_bytes: 782324850
num_examples: 2000
- name: FreeForm_884
num_bytes: 778744072
num_examples: 2000
- name: FreeForm_947
num_bytes: 778033288
num_examples: 2000
- name: FreeForm_570
num_bytes: 787492732
num_examples: 2000
- name: FreeForm_759
num_bytes: 783435623
num_examples: 2000
- name: FreeForm_519
num_bytes: 775988743
num_examples: 2000
- name: FreeForm_821
num_bytes: 780246826
num_examples: 2000
- name: FreeForm_697
num_bytes: 780912390
num_examples: 2000
- name: FreeForm_885
num_bytes: 776117068
num_examples: 2000
- name: FreeForm_520
num_bytes: 771684897
num_examples: 2000
- name: FreeForm_632
num_bytes: 786944594
num_examples: 2000
- name: FreeForm_760
num_bytes: 776225469
num_examples: 2000
- name: FreeForm_571
num_bytes: 769574296
num_examples: 2000
- name: FreeForm_948
num_bytes: 770722985
num_examples: 2000
- name: FreeForm_886
num_bytes: 787147597
num_examples: 2000
- name: FreeForm_822
num_bytes: 775358530
num_examples: 2000
- name: FreeForm_698
num_bytes: 779112403
num_examples: 2000
- name: FreeForm_521
num_bytes: 781760945
num_examples: 2000
- name: FreeForm_761
num_bytes: 770056124
num_examples: 2000
- name: FreeForm_633
num_bytes: 781835260
num_examples: 2000
- name: FreeForm_949
num_bytes: 776230854
num_examples: 2000
- name: FreeForm_823
num_bytes: 781883671
num_examples: 2000
- name: FreeForm_572
num_bytes: 768804901
num_examples: 2000
- name: FreeForm_699
num_bytes: 779957156
num_examples: 2000
- name: FreeForm_522
num_bytes: 775135129
num_examples: 2000
- name: FreeForm_762
num_bytes: 776447051
num_examples: 2000
- name: FreeForm_950
num_bytes: 781469625
num_examples: 2000
- name: FreeForm_824
num_bytes: 780508400
num_examples: 2000
- name: FreeForm_700
num_bytes: 777369380
num_examples: 2000
- name: FreeForm_523
num_bytes: 785017217
num_examples: 2000
- name: FreeForm_634
num_bytes: 782217304
num_examples: 2000
- name: FreeForm_763
num_bytes: 785472053
num_examples: 2000
- name: FreeForm_951
num_bytes: 771779911
num_examples: 2000
- name: FreeForm_889
num_bytes: 775639275
num_examples: 2000
- name: FreeForm_701
num_bytes: 783031149
num_examples: 2000
- name: FreeForm_635
num_bytes: 779398869
num_examples: 2000
- name: FreeForm_764
num_bytes: 770298257
num_examples: 2000
- name: FreeForm_952
num_bytes: 778449275
num_examples: 2000
- name: FreeForm_525
num_bytes: 773918245
num_examples: 2000
- name: FreeForm_890
num_bytes: 775934365
num_examples: 2000
- name: FreeForm_636
num_bytes: 779227692
num_examples: 2000
- name: FreeForm_826
num_bytes: 769907967
num_examples: 2000
- name: FreeForm_765
num_bytes: 784297610
num_examples: 2000
- name: FreeForm_953
num_bytes: 774721939
num_examples: 2000
- name: FreeForm_526
num_bytes: 779985761
num_examples: 2000
- name: FreeForm_576
num_bytes: 770608243
num_examples: 2000
- name: FreeForm_637
num_bytes: 785632025
num_examples: 2000
- name: FreeForm_891
num_bytes: 777053254
num_examples: 2000
- name: FreeForm_703
num_bytes: 788237995
num_examples: 2000
- name: FreeForm_527
num_bytes: 776190530
num_examples: 2000
- name: FreeForm_704
num_bytes: 789219802
num_examples: 2000
- name: FreeForm_577
num_bytes: 772767960
num_examples: 2000
- name: FreeForm_828
num_bytes: 775337334
num_examples: 2000
- name: FreeForm_767
num_bytes: 776371370
num_examples: 2000
- name: FreeForm_892
num_bytes: 784395260
num_examples: 2000
- name: FreeForm_955
num_bytes: 780198276
num_examples: 2000
- name: FreeForm_528
num_bytes: 786475368
num_examples: 2000
- name: FreeForm_705
num_bytes: 779637110
num_examples: 2000
- name: FreeForm_768
num_bytes: 778165939
num_examples: 2000
- name: FreeForm_829
num_bytes: 775226242
num_examples: 2000
- name: FreeForm_639
num_bytes: 776620565
num_examples: 2000
- name: FreeForm_893
num_bytes: 776777875
num_examples: 2000
- name: FreeForm_706
num_bytes: 776888369
num_examples: 2000
- name: FreeForm_769
num_bytes: 773177470
num_examples: 2000
- name: FreeForm_640
num_bytes: 775416285
num_examples: 2000
- name: FreeForm_830
num_bytes: 773121368
num_examples: 2000
- name: FreeForm_894
num_bytes: 771005496
num_examples: 2000
- name: FreeForm_957
num_bytes: 779298875
num_examples: 2000
- name: FreeForm_707
num_bytes: 786290237
num_examples: 2000
- name: FreeForm_530
num_bytes: 775067308
num_examples: 2000
- name: FreeForm_770
num_bytes: 781455541
num_examples: 2000
- name: FreeForm_641
num_bytes: 788867090
num_examples: 2000
- name: FreeForm_831
num_bytes: 777292141
num_examples: 2000
- name: FreeForm_958
num_bytes: 781154507
num_examples: 2000
- name: FreeForm_895
num_bytes: 781470066
num_examples: 2000
- name: FreeForm_578
num_bytes: 774956592
num_examples: 2000
- name: FreeForm_642
num_bytes: 782036346
num_examples: 2000
- name: FreeForm_832
num_bytes: 778161296
num_examples: 2000
- name: FreeForm_959
num_bytes: 785312871
num_examples: 2000
- name: FreeForm_896
num_bytes: 782183638
num_examples: 2000
- name: FreeForm_532
num_bytes: 782334295
num_examples: 2000
- name: FreeForm_579
num_bytes: 782162008
num_examples: 2000
- name: FreeForm_772
num_bytes: 783149924
num_examples: 2000
- name: FreeForm_897
num_bytes: 782736534
num_examples: 2000
- name: FreeForm_833
num_bytes: 781833165
num_examples: 2000
- name: FreeForm_533
num_bytes: 780836381
num_examples: 2000
- name: FreeForm_580
num_bytes: 779785922
num_examples: 2000
- name: FreeForm_644
num_bytes: 780852601
num_examples: 2000
- name: FreeForm_898
num_bytes: 782375626
num_examples: 2000
- name: FreeForm_834
num_bytes: 780238790
num_examples: 2000
- name: FreeForm_534
num_bytes: 787102239
num_examples: 2000
- name: FreeForm_774
num_bytes: 783405628
num_examples: 2000
- name: FreeForm_962
num_bytes: 783536879
num_examples: 2000
- name: FreeForm_835
num_bytes: 782146637
num_examples: 2000
- name: FreeForm_899
num_bytes: 777879403
num_examples: 2000
- name: FreeForm_581
num_bytes: 776043510
num_examples: 2000
- name: FreeForm_645
num_bytes: 777671003
num_examples: 2000
- name: FreeForm_535
num_bytes: 783503960
num_examples: 2000
- name: FreeForm_711
num_bytes: 786589601
num_examples: 2000
- name: FreeForm_775
num_bytes: 789032807
num_examples: 2000
- name: FreeForm_536
num_bytes: 780048605
num_examples: 2000
- name: FreeForm_836
num_bytes: 785559140
num_examples: 2000
- name: FreeForm_963
num_bytes: 768897706
num_examples: 2000
- name: FreeForm_900
num_bytes: 775545516
num_examples: 2000
- name: FreeForm_582
num_bytes: 776768083
num_examples: 2000
- name: FreeForm_537
num_bytes: 778920774
num_examples: 2000
- name: FreeForm_647
num_bytes: 789247154
num_examples: 2000
- name: FreeForm_837
num_bytes: 770927735
num_examples: 2000
- name: FreeForm_964
num_bytes: 777374122
num_examples: 2000
- name: FreeForm_583
num_bytes: 771971182
num_examples: 2000
- name: FreeForm_648
num_bytes: 790481101
num_examples: 2000
- name: FreeForm_714
num_bytes: 782357883
num_examples: 2000
- name: FreeForm_902
num_bytes: 790009775
num_examples: 2000
- name: FreeForm_966
num_bytes: 772852829
num_examples: 2000
- name: FreeForm_839
num_bytes: 774956755
num_examples: 2000
- name: FreeForm_840
num_bytes: 779381412
num_examples: 2000
- name: FreeForm_780
num_bytes: 782526085
num_examples: 2000
- name: FreeForm_905
num_bytes: 782008696
num_examples: 2000
- name: FreeForm_781
num_bytes: 777036517
num_examples: 2000
- name: FreeForm_542
num_bytes: 773384990
num_examples: 2000
- name: FreeForm_717
num_bytes: 787188315
num_examples: 2000
- name: FreeForm_587
num_bytes: 778047238
num_examples: 2000
- name: FreeForm_906
num_bytes: 782238585
num_examples: 2000
- name: FreeForm_782
num_bytes: 773185949
num_examples: 2000
- name: FreeForm_543
num_bytes: 780021022
num_examples: 2000
- name: FreeForm_970
num_bytes: 770399749
num_examples: 2000
- name: FreeForm_653
num_bytes: 779105454
num_examples: 2000
- name: FreeForm_907
num_bytes: 786301923
num_examples: 2000
- name: FreeForm_843
num_bytes: 771553141
num_examples: 2000
- name: FreeForm_588
num_bytes: 772966947
num_examples: 2000
- name: FreeForm_718
num_bytes: 781844273
num_examples: 2000
- name: FreeForm_783
num_bytes: 773562940
num_examples: 2000
- name: FreeForm_544
num_bytes: 786251287
num_examples: 2000
- name: FreeForm_971
num_bytes: 786415868
num_examples: 2000
- name: FreeForm_908
num_bytes: 775910532
num_examples: 2000
- name: FreeForm_654
num_bytes: 783017867
num_examples: 2000
- name: FreeForm_844
num_bytes: 775618340
num_examples: 2000
- name: FreeForm_719
num_bytes: 790544891
num_examples: 2000
- name: FreeForm_784
num_bytes: 780210834
num_examples: 2000
- name: FreeForm_545
num_bytes: 785852168
num_examples: 2000
- name: FreeForm_972
num_bytes: 780954023
num_examples: 2000
- name: FreeForm_909
num_bytes: 776653719
num_examples: 2000
- name: FreeForm_845
num_bytes: 781950032
num_examples: 2000
- name: FreeForm_785
num_bytes: 785226734
num_examples: 2000
- name: FreeForm_546
num_bytes: 777542887
num_examples: 2000
- name: FreeForm_656
num_bytes: 783321325
num_examples: 2000
- name: FreeForm_973
num_bytes: 777455767
num_examples: 2000
- name: FreeForm_547
num_bytes: 783780578
num_examples: 2000
- name: FreeForm_592
num_bytes: 787979205
num_examples: 2000
- name: FreeForm_657
num_bytes: 779575634
num_examples: 2000
- name: FreeForm_787
num_bytes: 775081104
num_examples: 2000
- name: FreeForm_847
num_bytes: 772847884
num_examples: 2000
- name: FreeForm_593
num_bytes: 786234512
num_examples: 2000
- name: FreeForm_848
num_bytes: 780944350
num_examples: 2000
- name: FreeForm_788
num_bytes: 778812403
num_examples: 2000
- name: FreeForm_723
num_bytes: 774864464
num_examples: 2000
- name: FreeForm_659
num_bytes: 777846993
num_examples: 2000
- name: FreeForm_849
num_bytes: 786936392
num_examples: 2000
- name: FreeForm_594
num_bytes: 778549444
num_examples: 2000
- name: FreeForm_789
num_bytes: 768423047
num_examples: 2000
- name: FreeForm_913
num_bytes: 779432172
num_examples: 2000
- name: FreeForm_660
num_bytes: 778422276
num_examples: 2000
- name: FreeForm_595
num_bytes: 782427799
num_examples: 2000
- name: FreeForm_790
num_bytes: 780306946
num_examples: 2000
- name: FreeForm_977
num_bytes: 783548441
num_examples: 2000
- name: FreeForm_914
num_bytes: 785748185
num_examples: 2000
- name: FreeForm_851
num_bytes: 773099412
num_examples: 2000
- name: FreeForm_552
num_bytes: 775631428
num_examples: 2000
- name: FreeForm_597
num_bytes: 781461768
num_examples: 2000
- name: FreeForm_852
num_bytes: 786171837
num_examples: 2000
- name: FreeForm_662
num_bytes: 776535039
num_examples: 2000
- name: FreeForm_726
num_bytes: 780258276
num_examples: 2000
- name: FreeForm_553
num_bytes: 774446361
num_examples: 2000
- name: FreeForm_598
num_bytes: 776165992
num_examples: 2000
- name: FreeForm_853
num_bytes: 775913169
num_examples: 2000
- name: FreeForm_916
num_bytes: 770512905
num_examples: 2000
- name: FreeForm_663
num_bytes: 779178273
num_examples: 2000
- name: FreeForm_979
num_bytes: 785316308
num_examples: 2000
- name: FreeForm_554
num_bytes: 779043744
num_examples: 2000
- name: FreeForm_555
num_bytes: 774698579
num_examples: 2000
- name: FreeForm_600
num_bytes: 779573136
num_examples: 2000
- name: FreeForm_556
num_bytes: 769993384
num_examples: 2000
- name: FreeForm_981
num_bytes: 775981807
num_examples: 2000
- name: FreeForm_918
num_bytes: 770640072
num_examples: 2000
- name: FreeForm_855
num_bytes: 770971099
num_examples: 2000
- name: FreeForm_601
num_bytes: 783485267
num_examples: 2000
- name: FreeForm_557
num_bytes: 781316695
num_examples: 2000
- name: FreeForm_982
num_bytes: 784171648
num_examples: 2000
- name: FreeForm_919
num_bytes: 781033588
num_examples: 2000
- name: FreeForm_666
num_bytes: 780033756
num_examples: 2000
- name: FreeForm_730
num_bytes: 780928758
num_examples: 2000
- name: FreeForm_558
num_bytes: 773762359
num_examples: 2000
- name: FreeForm_796
num_bytes: 775857969
num_examples: 2000
- name: FreeForm_920
num_bytes: 779264778
num_examples: 2000
- name: FreeForm_603
num_bytes: 779490679
num_examples: 2000
- name: FreeForm_797
num_bytes: 789388543
num_examples: 2000
- name: FreeForm_560
num_bytes: 782833902
num_examples: 2000
- name: FreeForm_798
num_bytes: 782076880
num_examples: 2000
- name: FreeForm_799
num_bytes: 785498285
num_examples: 2000
- name: FreeForm_605
num_bytes: 781535181
num_examples: 2000
- name: FreeForm_986
num_bytes: 784572282
num_examples: 2000
- name: FreeForm_987
num_bytes: 777514807
num_examples: 2000
- name: FreeForm_735
num_bytes: 776604012
num_examples: 2000
- name: FreeForm_924
num_bytes: 781738136
num_examples: 2000
- name: FreeForm_801
num_bytes: 775343161
num_examples: 2000
- name: FreeForm_988
num_bytes: 771394272
num_examples: 2000
- name: FreeForm_607
num_bytes: 784801310
num_examples: 2000
- name: FreeForm_736
num_bytes: 783919547
num_examples: 2000
- name: FreeForm_672
num_bytes: 781282095
num_examples: 2000
- name: FreeForm_925
num_bytes: 779652256
num_examples: 2000
- name: FreeForm_564
num_bytes: 773410204
num_examples: 2000
- name: FreeForm_608
num_bytes: 781207172
num_examples: 2000
- name: FreeForm_737
num_bytes: 780040754
num_examples: 2000
- name: FreeForm_673
num_bytes: 777972399
num_examples: 2000
- name: FreeForm_803
num_bytes: 779807395
num_examples: 2000
- name: FreeForm_926
num_bytes: 783442993
num_examples: 2000
- name: FreeForm_863
num_bytes: 774852302
num_examples: 2000
- name: FreeForm_738
num_bytes: 776190253
num_examples: 2000
- name: FreeForm_674
num_bytes: 781090727
num_examples: 2000
- name: FreeForm_804
num_bytes: 772326881
num_examples: 2000
- name: FreeForm_927
num_bytes: 775964176
num_examples: 2000
- name: FreeForm_864
num_bytes: 781520806
num_examples: 2000
- name: FreeForm_675
num_bytes: 770042796
num_examples: 2000
- name: FreeForm_805
num_bytes: 784368593
num_examples: 2000
- name: FreeForm_611
num_bytes: 782309242
num_examples: 2000
- name: FreeForm_928
num_bytes: 780370958
num_examples: 2000
- name: FreeForm_676
num_bytes: 777603931
num_examples: 2000
- name: FreeForm_865
num_bytes: 783734528
num_examples: 2000
- name: FreeForm_806
num_bytes: 779643778
num_examples: 2000
- name: FreeForm_929
num_bytes: 783765505
num_examples: 2000
- name: FreeForm_993
num_bytes: 774611125
num_examples: 2000
- name: FreeForm_866
num_bytes: 783029894
num_examples: 2000
- name: FreeForm_678
num_bytes: 770092785
num_examples: 2000
- name: FreeForm_930
num_bytes: 780511663
num_examples: 2000
- name: FreeForm_994
num_bytes: 780210180
num_examples: 2000
- name: FreeForm_867
num_bytes: 774361780
num_examples: 2000
- name: FreeForm_807
num_bytes: 778849248
num_examples: 2000
- name: FreeForm_1011
num_bytes: 781122711
num_examples: 2000
- name: FreeForm_931
num_bytes: 778070968
num_examples: 2000
- name: FreeForm_808
num_bytes: 782039889
num_examples: 2000
- name: FreeForm_743
num_bytes: 782929244
num_examples: 2000
- name: FreeForm_995
num_bytes: 781491448
num_examples: 2000
- name: FreeForm_809
num_bytes: 779201674
num_examples: 2000
- name: FreeForm_1012
num_bytes: 784947632
num_examples: 2000
- name: FreeForm_869
num_bytes: 777625531
num_examples: 2000
- name: FreeForm_810
num_bytes: 772386029
num_examples: 2000
- name: FreeForm_616
num_bytes: 782099041
num_examples: 2000
- name: FreeForm_870
num_bytes: 771586766
num_examples: 2000
- name: FreeForm_933
num_bytes: 777819645
num_examples: 2000
- name: FreeForm_811
num_bytes: 773709965
num_examples: 2000
- name: FreeForm_617
num_bytes: 777775291
num_examples: 2000
- name: FreeForm_1014
num_bytes: 776626214
num_examples: 2000
- name: FreeForm_934
num_bytes: 780076532
num_examples: 2000
- name: FreeForm_871
num_bytes: 772742042
num_examples: 2000
- name: FreeForm_682
num_bytes: 772864370
num_examples: 2000
- name: FreeForm_812
num_bytes: 779728479
num_examples: 2000
- name: FreeForm_1015
num_bytes: 776188407
num_examples: 2000
- name: FreeForm_747
num_bytes: 776912983
num_examples: 2000
- name: FreeForm_683
num_bytes: 773662766
num_examples: 2000
- name: FreeForm_872
num_bytes: 781095791
num_examples: 2000
- name: FreeForm_1016
num_bytes: 773422235
num_examples: 2000
- name: FreeForm_619
num_bytes: 781384539
num_examples: 2000
- name: FreeForm_748
num_bytes: 794178596
num_examples: 2000
- name: FreeForm_996
num_bytes: 776159757
num_examples: 2000
- name: FreeForm_936
num_bytes: 783195036
num_examples: 2000
- name: FreeForm_873
num_bytes: 783526678
num_examples: 2000
- name: FreeForm_814
num_bytes: 784020960
num_examples: 2000
- name: FreeForm_620
num_bytes: 777669159
num_examples: 2000
- name: FreeForm_937
num_bytes: 784288911
num_examples: 2000
- name: FreeForm_874
num_bytes: 779265520
num_examples: 2000
- name: FreeForm_815
num_bytes: 772783609
num_examples: 2000
- name: FreeForm_685
num_bytes: 776856277
num_examples: 2000
- name: FreeForm_750
num_bytes: 787248405
num_examples: 2000
- name: FreeForm_998
num_bytes: 780476434
num_examples: 2000
- name: FreeForm_938
num_bytes: 773418408
num_examples: 2000
- name: FreeForm_816
num_bytes: 781409447
num_examples: 2000
- name: FreeForm_622
num_bytes: 784580108
num_examples: 2000
- name: FreeForm_751
num_bytes: 777930957
num_examples: 2000
- name: FreeForm_876
num_bytes: 776360852
num_examples: 2000
- name: FreeForm_939
num_bytes: 777865106
num_examples: 2000
- name: FreeForm_817
num_bytes: 780160515
num_examples: 2000
- name: FreeForm_752
num_bytes: 777670340
num_examples: 2000
- name: FreeForm_1020
num_bytes: 775927785
num_examples: 2000
- name: FreeForm_624
num_bytes: 784691651
num_examples: 2000
- name: FreeForm_1001
num_bytes: 784203264
num_examples: 2000
- name: FreeForm_1071
num_bytes: 785925715
num_examples: 2000
- name: FreeForm_1072
num_bytes: 774079517
num_examples: 2000
- name: FreeForm_1022
num_bytes: 784309204
num_examples: 2000
- name: FreeForm_755
num_bytes: 779965249
num_examples: 2000
- name: FreeForm_626
num_bytes: 778811345
num_examples: 2000
- name: FreeForm_690
num_bytes: 781765116
num_examples: 2000
- name: FreeForm_1003
num_bytes: 780150305
num_examples: 2000
- name: FreeForm_1023
num_bytes: 771413314
num_examples: 2000
- name: FreeForm_880
num_bytes: 785551287
num_examples: 2000
- name: FreeForm_627
num_bytes: 790354930
num_examples: 2000
- name: FreeForm_1004
num_bytes: 782295953
num_examples: 2000
- name: FreeForm_1074
num_bytes: 769854196
num_examples: 2000
- name: FreeForm_1024
num_bytes: 775492572
num_examples: 2000
- name: FreeForm_944
num_bytes: 785364115
num_examples: 2000
- name: FreeForm_881
num_bytes: 782271712
num_examples: 2000
- name: FreeForm_1135
num_bytes: 769193624
num_examples: 2000
- name: FreeForm_692
num_bytes: 783918813
num_examples: 2000
- name: FreeForm_1075
num_bytes: 776652655
num_examples: 2000
- name: FreeForm_1025
num_bytes: 780779154
num_examples: 2000
- name: FreeForm_1197
num_bytes: 779317101
num_examples: 2000
- name: FreeForm_1260
num_bytes: 762208379
num_examples: 2000
- name: FreeForm_629
num_bytes: 777468540
num_examples: 2000
- name: FreeForm_1136
num_bytes: 775585163
num_examples: 2000
- name: FreeForm_1006
num_bytes: 779937630
num_examples: 2000
- name: FreeForm_1261
num_bytes: 783256566
num_examples: 2000
- name: FreeForm_1198
num_bytes: 771359382
num_examples: 2000
- name: FreeForm_1386
num_bytes: 772649046
num_examples: 2000
- name: FreeForm_1137
num_bytes: 780582530
num_examples: 2000
- name: FreeForm_1007
num_bytes: 774445784
num_examples: 2000
- name: FreeForm_1077
num_bytes: 775372762
num_examples: 2000
- name: FreeForm_1262
num_bytes: 778299396
num_examples: 2000
- name: FreeForm_1324
num_bytes: 775911927
num_examples: 2000
- name: FreeForm_1387
num_bytes: 773904836
num_examples: 2000
- name: FreeForm_1138
num_bytes: 773720801
num_examples: 2000
- name: FreeForm_1449
num_bytes: 775798702
num_examples: 2000
- name: FreeForm_1200
num_bytes: 774570757
num_examples: 2000
- name: FreeForm_1388
num_bytes: 772318981
num_examples: 2000
- name: FreeForm_1078
num_bytes: 772713822
num_examples: 2000
- name: FreeForm_1139
num_bytes: 775735549
num_examples: 2000
- name: FreeForm_1450
num_bytes: 769208143
num_examples: 2000
- name: FreeForm_1326
num_bytes: 777633838
num_examples: 2000
- name: FreeForm_1201
num_bytes: 774915951
num_examples: 2000
- name: FreeForm_1389
num_bytes: 770498447
num_examples: 2000
- name: FreeForm_1264
num_bytes: 776260201
num_examples: 2000
- name: FreeForm_1140
num_bytes: 786338430
num_examples: 2000
- name: FreeForm_1451
num_bytes: 775905007
num_examples: 2000
- name: FreeForm_1327
num_bytes: 767215517
num_examples: 2000
- name: FreeForm_1202
num_bytes: 776907746
num_examples: 2000
- name: FreeForm_1030
num_bytes: 770330894
num_examples: 2000
- name: FreeForm_1390
num_bytes: 773078672
num_examples: 2000
- name: FreeForm_1080
num_bytes: 776994960
num_examples: 2000
- name: FreeForm_1141
num_bytes: 783741241
num_examples: 2000
- name: FreeForm_1452
num_bytes: 775233498
num_examples: 2000
- name: FreeForm_1328
num_bytes: 779688855
num_examples: 2000
- name: FreeForm_1203
num_bytes: 778731467
num_examples: 2000
- name: FreeForm_1391
num_bytes: 778148236
num_examples: 2000
- name: FreeForm_1142
num_bytes: 778592252
num_examples: 2000
- name: FreeForm_1329
num_bytes: 780980202
num_examples: 2000
- name: FreeForm_1032
num_bytes: 765832292
num_examples: 2000
- name: FreeForm_1392
num_bytes: 778228973
num_examples: 2000
- name: FreeForm_1143
num_bytes: 779686958
num_examples: 2000
- name: FreeForm_1266
num_bytes: 780267266
num_examples: 2000
- name: FreeForm_1454
num_bytes: 771388767
num_examples: 2000
- name: FreeForm_1033
num_bytes: 785405397
num_examples: 2000
- name: FreeForm_1331
num_bytes: 773303535
num_examples: 2000
- name: FreeForm_1455
num_bytes: 772270994
num_examples: 2000
- name: FreeForm_1084
num_bytes: 780937120
num_examples: 2000
- name: FreeForm_1394
num_bytes: 779912517
num_examples: 2000
- name: FreeForm_1034
num_bytes: 785037979
num_examples: 2000
- name: FreeForm_1332
num_bytes: 775214220
num_examples: 2000
- name: FreeForm_1456
num_bytes: 773902347
num_examples: 2000
- name: FreeForm_1268
num_bytes: 776083060
num_examples: 2000
- name: FreeForm_1207
num_bytes: 775083925
num_examples: 2000
- name: FreeForm_1395
num_bytes: 778627455
num_examples: 2000
- name: FreeForm_1035
num_bytes: 780850165
num_examples: 2000
- name: FreeForm_1333
num_bytes: 776771157
num_examples: 2000
- name: FreeForm_1457
num_bytes: 771241476
num_examples: 2000
- name: FreeForm_1086
num_bytes: 769890365
num_examples: 2000
- name: FreeForm_1147
num_bytes: 776637729
num_examples: 2000
- name: FreeForm_1396
num_bytes: 777785894
num_examples: 2000
- name: FreeForm_1334
num_bytes: 784289993
num_examples: 2000
- name: FreeForm_1458
num_bytes: 776626943
num_examples: 2000
- name: FreeForm_1087
num_bytes: 781254663
num_examples: 2000
- name: FreeForm_1148
num_bytes: 773662440
num_examples: 2000
- name: FreeForm_1397
num_bytes: 780426125
num_examples: 2000
- name: FreeForm_1335
num_bytes: 770894343
num_examples: 2000
- name: FreeForm_1459
num_bytes: 770376933
num_examples: 2000
- name: FreeForm_1271
num_bytes: 781843080
num_examples: 2000
- name: FreeForm_1149
num_bytes: 776995200
num_examples: 2000
- name: FreeForm_1210
num_bytes: 772949457
num_examples: 2000
- name: FreeForm_1150
num_bytes: 778048049
num_examples: 2000
- name: FreeForm_1272
num_bytes: 770433073
num_examples: 2000
- name: FreeForm_1461
num_bytes: 772615250
num_examples: 2000
- name: FreeForm_1151
num_bytes: 776289624
num_examples: 2000
- name: FreeForm_1273
num_bytes: 770953464
num_examples: 2000
- name: FreeForm_1212
num_bytes: 780575601
num_examples: 2000
- name: FreeForm_1090
num_bytes: 770057581
num_examples: 2000
- name: FreeForm_1400
num_bytes: 775894925
num_examples: 2000
- name: FreeForm_1152
num_bytes: 774100579
num_examples: 2000
- name: FreeForm_1274
num_bytes: 773088951
num_examples: 2000
- name: FreeForm_1091
num_bytes: 778261716
num_examples: 2000
- name: FreeForm_1401
num_bytes: 769327493
num_examples: 2000
- name: FreeForm_1153
num_bytes: 769264686
num_examples: 2000
- name: FreeForm_1275
num_bytes: 773463433
num_examples: 2000
- name: FreeForm_1214
num_bytes: 773727975
num_examples: 2000
- name: FreeForm_1464
num_bytes: 770724265
num_examples: 2000
- name: FreeForm_1340
num_bytes: 770246906
num_examples: 2000
- name: FreeForm_1043
num_bytes: 775871564
num_examples: 2000
- name: FreeForm_1276
num_bytes: 779678508
num_examples: 2000
- name: FreeForm_1403
num_bytes: 785594363
num_examples: 2000
- name: FreeForm_1215
num_bytes: 773708158
num_examples: 2000
- name: FreeForm_1093
num_bytes: 781403783
num_examples: 2000
- name: FreeForm_1044
num_bytes: 782580437
num_examples: 2000
- name: FreeForm_1277
num_bytes: 768784213
num_examples: 2000
- name: FreeForm_1216
num_bytes: 776703123
num_examples: 2000
- name: FreeForm_1094
num_bytes: 782325753
num_examples: 2000
- name: FreeForm_1278
num_bytes: 778353689
num_examples: 2000
- name: FreeForm_1217
num_bytes: 777963465
num_examples: 2000
- name: FreeForm_1405
num_bytes: 775831012
num_examples: 2000
- name: FreeForm_1467
num_bytes: 773903809
num_examples: 2000
- name: FreeForm_1157
num_bytes: 780808451
num_examples: 2000
- name: FreeForm_1406
num_bytes: 770037870
num_examples: 2000
- name: FreeForm_1343
num_bytes: 779944703
num_examples: 2000
- name: FreeForm_1218
num_bytes: 775185803
num_examples: 2000
- name: FreeForm_1468
num_bytes: 774969577
num_examples: 2000
- name: FreeForm_1158
num_bytes: 771236817
num_examples: 2000
- name: FreeForm_1407
num_bytes: 777805253
num_examples: 2000
- name: FreeForm_1344
num_bytes: 772506110
num_examples: 2000
- name: FreeForm_1047
num_bytes: 771668726
num_examples: 2000
- name: FreeForm_1219
num_bytes: 774695485
num_examples: 2000
- name: FreeForm_1469
num_bytes: 773152862
num_examples: 2000
- name: FreeForm_1345
num_bytes: 774356861
num_examples: 2000
- name: FreeForm_1281
num_bytes: 769397422
num_examples: 2000
- name: FreeForm_1220
num_bytes: 777646479
num_examples: 2000
- name: FreeForm_1048
num_bytes: 774170661
num_examples: 2000
- name: FreeForm_1098
num_bytes: 782169769
num_examples: 2000
- name: FreeForm_1160
num_bytes: 780634390
num_examples: 2000
- name: FreeForm_1346
num_bytes: 774179081
num_examples: 2000
- name: FreeForm_1282
num_bytes: 772417081
num_examples: 2000
- name: FreeForm_1471
num_bytes: 772795661
num_examples: 2000
- name: FreeForm_1410
num_bytes: 774443858
num_examples: 2000
- name: FreeForm_1472
num_bytes: 779405331
num_examples: 2000
- name: FreeForm_1284
num_bytes: 782471252
num_examples: 2000
- name: FreeForm_1348
num_bytes: 778145756
num_examples: 2000
- name: FreeForm_1223
num_bytes: 783097614
num_examples: 2000
- name: FreeForm_1163
num_bytes: 786078851
num_examples: 2000
- name: FreeForm_1473
num_bytes: 779354512
num_examples: 2000
- name: FreeForm_1285
num_bytes: 782743833
num_examples: 2000
- name: FreeForm_1349
num_bytes: 773866766
num_examples: 2000
- name: FreeForm_1101
num_bytes: 780064474
num_examples: 2000
- name: FreeForm_1224
num_bytes: 779713701
num_examples: 2000
- name: FreeForm_1164
num_bytes: 785826513
num_examples: 2000
- name: FreeForm_1413
num_bytes: 771270626
num_examples: 2000
- name: FreeForm_1225
num_bytes: 789341153
num_examples: 2000
- name: FreeForm_1286
num_bytes: 783040862
num_examples: 2000
- name: FreeForm_1165
num_bytes: 782794133
num_examples: 2000
- name: FreeForm_1414
num_bytes: 776277188
num_examples: 2000
- name: FreeForm_1053
num_bytes: 775020295
num_examples: 2000
- name: FreeForm_1287
num_bytes: 774282496
num_examples: 2000
- name: FreeForm_1351
num_bytes: 777217979
num_examples: 2000
- name: FreeForm_1166
num_bytes: 782196546
num_examples: 2000
- name: FreeForm_1415
num_bytes: 773801330
num_examples: 2000
- name: FreeForm_1227
num_bytes: 781777755
num_examples: 2000
- name: FreeForm_1054
num_bytes: 770350768
num_examples: 2000
- name: FreeForm_1167
num_bytes: 772643185
num_examples: 2000
- name: FreeForm_1288
num_bytes: 786282948
num_examples: 2000
- name: FreeForm_1476
num_bytes: 781887411
num_examples: 2000
- name: FreeForm_1416
num_bytes: 785772864
num_examples: 2000
- name: FreeForm_1228
num_bytes: 782310719
num_examples: 2000
- name: FreeForm_1168
num_bytes: 778463665
num_examples: 2000
- name: FreeForm_1353
num_bytes: 774098738
num_examples: 2000
- name: FreeForm_1477
num_bytes: 770072431
num_examples: 2000
- name: FreeForm_1105
num_bytes: 780584723
num_examples: 2000
- name: FreeForm_1417
num_bytes: 770555258
num_examples: 2000
- name: FreeForm_1229
num_bytes: 766386559
num_examples: 2000
- name: FreeForm_1056
num_bytes: 777845089
num_examples: 2000
- name: FreeForm_1354
num_bytes: 776296757
num_examples: 2000
- name: FreeForm_1230
num_bytes: 768761136
num_examples: 2000
- name: FreeForm_1057
num_bytes: 770679050
num_examples: 2000
- name: FreeForm_1170
num_bytes: 784981283
num_examples: 2000
- name: FreeForm_1291
num_bytes: 775560769
num_examples: 2000
- name: FreeForm_1107
num_bytes: 774133706
num_examples: 2000
- name: FreeForm_1419
num_bytes: 772063671
num_examples: 2000
- name: FreeForm_1479
num_bytes: 768129541
num_examples: 2000
- name: FreeForm_1231
num_bytes: 777992198
num_examples: 2000
- name: FreeForm_1058
num_bytes: 778022181
num_examples: 2000
- name: FreeForm_1171
num_bytes: 774484635
num_examples: 2000
- name: FreeForm_1420
num_bytes: 784674844
num_examples: 2000
- name: FreeForm_1232
num_bytes: 774283767
num_examples: 2000
- name: FreeForm_1059
num_bytes: 770082646
num_examples: 2000
- name: FreeForm_1293
num_bytes: 777774009
num_examples: 2000
- name: FreeForm_1357
num_bytes: 782812482
num_examples: 2000
- name: FreeForm_1481
num_bytes: 772278059
num_examples: 2000
- name: FreeForm_1060
num_bytes: 780207820
num_examples: 2000
- name: FreeForm_1294
num_bytes: 772434873
num_examples: 2000
- name: FreeForm_1173
num_bytes: 772136852
num_examples: 2000
- name: FreeForm_1358
num_bytes: 779244683
num_examples: 2000
- name: FreeForm_1061
num_bytes: 783705532
num_examples: 2000
- name: FreeForm_1234
num_bytes: 769879163
num_examples: 2000
- name: FreeForm_1295
num_bytes: 778394871
num_examples: 2000
- name: FreeForm_1359
num_bytes: 776358524
num_examples: 2000
- name: FreeForm_1062
num_bytes: 772853747
num_examples: 2000
- name: FreeForm_1296
num_bytes: 772331030
num_examples: 2000
- name: FreeForm_1297
num_bytes: 772141225
num_examples: 2000
- name: FreeForm_1112
num_bytes: 771006309
num_examples: 2000
- name: FreeForm_1484
num_bytes: 775157027
num_examples: 2000
- name: FreeForm_1064
num_bytes: 777683941
num_examples: 2000
- name: FreeForm_1298
num_bytes: 777662981
num_examples: 2000
- name: FreeForm_1113
num_bytes: 773454098
num_examples: 2000
- name: FreeForm_1177
num_bytes: 773276736
num_examples: 2000
- name: FreeForm_1362
num_bytes: 776932286
num_examples: 2000
- name: FreeForm_1485
num_bytes: 782890005
num_examples: 2000
- name: FreeForm_1363
num_bytes: 768839554
num_examples: 2000
- name: FreeForm_1238
num_bytes: 775834402
num_examples: 2000
- name: FreeForm_1066
num_bytes: 773638453
num_examples: 2000
- name: FreeForm_1364
num_bytes: 773891208
num_examples: 2000
- name: FreeForm_1300
num_bytes: 777522788
num_examples: 2000
- name: FreeForm_1179
num_bytes: 779669212
num_examples: 2000
- name: FreeForm_1365
num_bytes: 776530326
num_examples: 2000
- name: FreeForm_1301
num_bytes: 779676562
num_examples: 2000
- name: FreeForm_1180
num_bytes: 775842626
num_examples: 2000
- name: FreeForm_1068
num_bytes: 778768145
num_examples: 2000
- name: FreeForm_1116
num_bytes: 781241772
num_examples: 2000
- name: FreeForm_1423
num_bytes: 781624549
num_examples: 2000
- name: FreeForm_1366
num_bytes: 774954357
num_examples: 2000
- name: FreeForm_1118
num_bytes: 773858637
num_examples: 2000
- name: FreeForm_1242
num_bytes: 769621466
num_examples: 2000
- name: FreeForm_1368
num_bytes: 780913717
num_examples: 2000
- name: FreeForm_1183
num_bytes: 767486681
num_examples: 2000
- name: FreeForm_1304
num_bytes: 780834799
num_examples: 2000
- name: FreeForm_1490
num_bytes: 780387151
num_examples: 2000
- name: FreeForm_1512
num_bytes: 778197016
num_examples: 2000
- name: FreeForm_1244
num_bytes: 772995330
num_examples: 2000
- name: FreeForm_1120
num_bytes: 779301535
num_examples: 2000
- name: FreeForm_1370
num_bytes: 776231720
num_examples: 2000
- name: FreeForm_1492
num_bytes: 773885264
num_examples: 2000
- name: FreeForm_1245
num_bytes: 779206640
num_examples: 2000
- name: FreeForm_1493
num_bytes: 773502241
num_examples: 2000
- name: FreeForm_1307
num_bytes: 771031781
num_examples: 2000
- name: FreeForm_1515
num_bytes: 778669871
num_examples: 2000
- name: FreeForm_1246
num_bytes: 780880343
num_examples: 2000
- name: FreeForm_1372
num_bytes: 770981961
num_examples: 2000
- name: FreeForm_1122
num_bytes: 778079182
num_examples: 2000
- name: FreeForm_1494
num_bytes: 776772801
num_examples: 2000
- name: FreeForm_1516
num_bytes: 773843230
num_examples: 2000
- name: FreeForm_1247
num_bytes: 770214115
num_examples: 2000
- name: FreeForm_1373
num_bytes: 787407590
num_examples: 2000
- name: FreeForm_1123
num_bytes: 779586645
num_examples: 2000
- name: FreeForm_1424
num_bytes: 781336954
num_examples: 2000
- name: FreeForm_1495
num_bytes: 777255582
num_examples: 2000
- name: FreeForm_1188
num_bytes: 786940051
num_examples: 2000
- name: FreeForm_1517
num_bytes: 774620951
num_examples: 2000
- name: FreeForm_1124
num_bytes: 776836685
num_examples: 2000
- name: FreeForm_1496
num_bytes: 781872763
num_examples: 2000
- name: FreeForm_1189
num_bytes: 771657509
num_examples: 2000
- name: FreeForm_1518
num_bytes: 773601547
num_examples: 2000
- name: FreeForm_1375
num_bytes: 779587165
num_examples: 2000
- name: FreeForm_1249
num_bytes: 773157176
num_examples: 2000
- name: FreeForm_1125
num_bytes: 775791033
num_examples: 2000
- name: FreeForm_1190
num_bytes: 777443084
num_examples: 2000
- name: FreeForm_1519
num_bytes: 780951682
num_examples: 2000
- name: FreeForm_1376
num_bytes: 777216870
num_examples: 2000
- name: FreeForm_1250
num_bytes: 775914126
num_examples: 2000
- name: FreeForm_1126
num_bytes: 781352076
num_examples: 2000
- name: FreeForm_1520
num_bytes: 775083183
num_examples: 2000
- name: FreeForm_1312
num_bytes: 778292149
num_examples: 2000
- name: FreeForm_1498
num_bytes: 774890612
num_examples: 2000
- name: FreeForm_1377
num_bytes: 785004845
num_examples: 2000
- name: FreeForm_1251
num_bytes: 789816754
num_examples: 2000
- name: FreeForm_1127
num_bytes: 770241132
num_examples: 2000
- name: FreeForm_1521
num_bytes: 776731607
num_examples: 2000
- name: FreeForm_1313
num_bytes: 778278211
num_examples: 2000
- name: FreeForm_1378
num_bytes: 771032430
num_examples: 2000
- name: FreeForm_1128
num_bytes: 777986250
num_examples: 2000
- name: FreeForm_1522
num_bytes: 771913901
num_examples: 2000
- name: FreeForm_1314
num_bytes: 785118185
num_examples: 2000
- name: FreeForm_1523
num_bytes: 771339035
num_examples: 2000
- name: FreeForm_1315
num_bytes: 781667460
num_examples: 2000
- name: FreeForm_1380
num_bytes: 773398852
num_examples: 2000
- name: FreeForm_1427
num_bytes: 772298723
num_examples: 2000
- name: FreeForm_1524
num_bytes: 768520469
num_examples: 2000
- name: FreeForm_1194
num_bytes: 782161236
num_examples: 2000
- name: FreeForm_1381
num_bytes: 773830458
num_examples: 2000
- name: FreeForm_1428
num_bytes: 771662432
num_examples: 2000
- name: FreeForm_1255
num_bytes: 768537036
num_examples: 2000
- name: FreeForm_1525
num_bytes: 778009921
num_examples: 2000
- name: FreeForm_1195
num_bytes: 777335139
num_examples: 2000
- name: FreeForm_1429
num_bytes: 764834149
num_examples: 2000
- name: FreeForm_1382
num_bytes: 775094191
num_examples: 2000
- name: FreeForm_1256
num_bytes: 773398652
num_examples: 2000
- name: FreeForm_1526
num_bytes: 770376404
num_examples: 2000
- name: FreeForm_1196
num_bytes: 778901116
num_examples: 2000
- name: FreeForm_1430
num_bytes: 771870799
num_examples: 2000
- name: FreeForm_1383
num_bytes: 775693605
num_examples: 2000
- name: FreeForm_1257
num_bytes: 767589408
num_examples: 2000
- name: FreeForm_1318
num_bytes: 780715386
num_examples: 2000
- name: FreeForm_1504
num_bytes: 779906843
num_examples: 2000
- name: FreeForm_1431
num_bytes: 776734403
num_examples: 2000
- name: FreeForm_1384
num_bytes: 774244033
num_examples: 2000
- name: FreeForm_1258
num_bytes: 776236989
num_examples: 2000
- name: FreeForm_1528
num_bytes: 778645804
num_examples: 2000
- name: FreeForm_1319
num_bytes: 774145055
num_examples: 2000
- name: FreeForm_1505
num_bytes: 775022647
num_examples: 2000
- name: FreeForm_1576
num_bytes: 777459214
num_examples: 2000
- name: FreeForm_1432
num_bytes: 773078854
num_examples: 2000
- name: FreeForm_1385
num_bytes: 770012790
num_examples: 2000
- name: FreeForm_1701
num_bytes: 771338275
num_examples: 2000
- name: FreeForm_1639
num_bytes: 776242518
num_examples: 2000
- name: FreeForm_1530
num_bytes: 774636910
num_examples: 2000
- name: FreeForm_1321
num_bytes: 772639127
num_examples: 2000
- name: FreeForm_1507
num_bytes: 774145767
num_examples: 2000
- name: FreeForm_1702
num_bytes: 769111676
num_examples: 2000
- name: FreeForm_1434
num_bytes: 776396590
num_examples: 2000
- name: FreeForm_1640
num_bytes: 774255527
num_examples: 2000
- name: FreeForm_1531
num_bytes: 769083709
num_examples: 2000
- name: FreeForm_1508
num_bytes: 775690083
num_examples: 2000
- name: FreeForm_1435
num_bytes: 768501130
num_examples: 2000
- name: FreeForm_1766
num_bytes: 772371623
num_examples: 2000
- name: FreeForm_1579
num_bytes: 771025814
num_examples: 2000
- name: FreeForm_1641
num_bytes: 779599332
num_examples: 2000
- name: FreeForm_1827
num_bytes: 775437486
num_examples: 2000
- name: FreeForm_1436
num_bytes: 770276884
num_examples: 2000
- name: FreeForm_1704
num_bytes: 775091117
num_examples: 2000
- name: FreeForm_1642
num_bytes: 776944029
num_examples: 2000
- name: FreeForm_1828
num_bytes: 778105987
num_examples: 2000
- name: FreeForm_1437
num_bytes: 778463269
num_examples: 2000
- name: FreeForm_1581
num_bytes: 781065185
num_examples: 2000
- name: FreeForm_1643
num_bytes: 776678831
num_examples: 2000
- name: FreeForm_1534
num_bytes: 776481583
num_examples: 2000
- name: FreeForm_1511
num_bytes: 774971010
num_examples: 2000
- name: FreeForm_1707
num_bytes: 763593691
num_examples: 2000
- name: FreeForm_1583
num_bytes: 770777355
num_examples: 2000
- name: FreeForm_1770
num_bytes: 777379608
num_examples: 2000
- name: FreeForm_1536
num_bytes: 781906336
num_examples: 2000
- name: FreeForm_1891
num_bytes: 783154996
num_examples: 2000
- name: FreeForm_1645
num_bytes: 779043465
num_examples: 2000
- name: FreeForm_1831
num_bytes: 779558675
num_examples: 2000
- name: FreeForm_1585
num_bytes: 774986574
num_examples: 2000
- name: FreeForm_1538
num_bytes: 771463098
num_examples: 2000
- name: FreeForm_1893
num_bytes: 775479546
num_examples: 2000
- name: FreeForm_1442
num_bytes: 772404804
num_examples: 2000
- name: FreeForm_1586
num_bytes: 781702151
num_examples: 2000
- name: FreeForm_1648
num_bytes: 773660147
num_examples: 2000
- name: FreeForm_1711
num_bytes: 780109753
num_examples: 2000
- name: FreeForm_1443
num_bytes: 766747197
num_examples: 2000
- name: FreeForm_1773
num_bytes: 774325226
num_examples: 2000
- name: FreeForm_1540
num_bytes: 770666305
num_examples: 2000
- name: FreeForm_1649
num_bytes: 776319711
num_examples: 2000
- name: FreeForm_1712
num_bytes: 770957101
num_examples: 2000
- name: FreeForm_1895
num_bytes: 770548607
num_examples: 2000
- name: FreeForm_1444
num_bytes: 784803015
num_examples: 2000
- name: FreeForm_1774
num_bytes: 773435164
num_examples: 2000
- name: FreeForm_1541
num_bytes: 773616113
num_examples: 2000
- name: FreeForm_1835
num_bytes: 780606549
num_examples: 2000
- name: FreeForm_1588
num_bytes: 775578246
num_examples: 2000
- name: FreeForm_1445
num_bytes: 778076077
num_examples: 2000
- name: FreeForm_1896
num_bytes: 771418372
num_examples: 2000
- name: FreeForm_1542
num_bytes: 780867652
num_examples: 2000
- name: FreeForm_1775
num_bytes: 770595969
num_examples: 2000
- name: FreeForm_1589
num_bytes: 770576399
num_examples: 2000
- name: FreeForm_1714
num_bytes: 772460649
num_examples: 2000
- name: FreeForm_1897
num_bytes: 774325510
num_examples: 2000
- name: FreeForm_1543
num_bytes: 777027575
num_examples: 2000
- name: FreeForm_1590
num_bytes: 779089115
num_examples: 2000
- name: FreeForm_1715
num_bytes: 783861822
num_examples: 2000
- name: FreeForm_1447
num_bytes: 775405219
num_examples: 2000
- name: FreeForm_1591
num_bytes: 769975593
num_examples: 2000
- name: FreeForm_1544
num_bytes: 778777533
num_examples: 2000
- name: FreeForm_1838
num_bytes: 775828792
num_examples: 2000
- name: FreeForm_1716
num_bytes: 774101550
num_examples: 2000
- name: FreeForm_1448
num_bytes: 772238327
num_examples: 2000
- name: FreeForm_1545
num_bytes: 770967701
num_examples: 2000
- name: FreeForm_1592
num_bytes: 777424108
num_examples: 2000
- name: FreeForm_1717
num_bytes: 774522898
num_examples: 2000
- name: FreeForm_1953
num_bytes: 771799236
num_examples: 2000
- name: FreeForm_1900
num_bytes: 780148702
num_examples: 2000
- name: FreeForm_1779
num_bytes: 776738221
num_examples: 2000
- name: FreeForm_1954
num_bytes: 774180999
num_examples: 2000
- name: FreeForm_1901
num_bytes: 780619673
num_examples: 2000
- name: FreeForm_1594
num_bytes: 777472801
num_examples: 2000
- name: FreeForm_1719
num_bytes: 777326991
num_examples: 2000
- name: FreeForm_1841
num_bytes: 771308279
num_examples: 2000
- name: FreeForm_1548
num_bytes: 770163212
num_examples: 2000
- name: FreeForm_1595
num_bytes: 772170521
num_examples: 2000
- name: FreeForm_1720
num_bytes: 772493860
num_examples: 2000
- name: FreeForm_1842
num_bytes: 771592650
num_examples: 2000
- name: FreeForm_1656
num_bytes: 771999855
num_examples: 2000
- name: FreeForm_1781
num_bytes: 777125987
num_examples: 2000
- name: FreeForm_1721
num_bytes: 776375890
num_examples: 2000
- name: FreeForm_1657
num_bytes: 778104922
num_examples: 2000
- name: FreeForm_1782
num_bytes: 779534066
num_examples: 2000
- name: FreeForm_1904
num_bytes: 765267839
num_examples: 2000
- name: FreeForm_1597
num_bytes: 769496067
num_examples: 2000
- name: FreeForm_1844
num_bytes: 767079297
num_examples: 2000
- name: FreeForm_1957
num_bytes: 775659155
num_examples: 2000
- name: FreeForm_1551
num_bytes: 782053459
num_examples: 2000
- name: FreeForm_1905
num_bytes: 770097688
num_examples: 2000
- name: FreeForm_1598
num_bytes: 773060032
num_examples: 2000
- name: FreeForm_1723
num_bytes: 776571367
num_examples: 2000
- name: FreeForm_1659
num_bytes: 767291404
num_examples: 2000
- name: FreeForm_1552
num_bytes: 774111834
num_examples: 2000
- name: FreeForm_1784
num_bytes: 767427750
num_examples: 2000
- name: FreeForm_1599
num_bytes: 777344888
num_examples: 2000
- name: FreeForm_1724
num_bytes: 777742400
num_examples: 2000
- name: FreeForm_1660
num_bytes: 774378651
num_examples: 2000
- name: FreeForm_1725
num_bytes: 787134242
num_examples: 2000
- name: FreeForm_1960
num_bytes: 771486600
num_examples: 2000
- name: FreeForm_1661
num_bytes: 783677147
num_examples: 2000
- name: FreeForm_1554
num_bytes: 780725222
num_examples: 2000
- name: FreeForm_1847
num_bytes: 778510803
num_examples: 2000
- name: FreeForm_1726
num_bytes: 776823901
num_examples: 2000
- name: FreeForm_1601
num_bytes: 775123180
num_examples: 2000
- name: FreeForm_1908
num_bytes: 776216634
num_examples: 2000
- name: FreeForm_1662
num_bytes: 775888677
num_examples: 2000
- name: FreeForm_1848
num_bytes: 784339905
num_examples: 2000
- name: FreeForm_1602
num_bytes: 772905006
num_examples: 2000
- name: FreeForm_1909
num_bytes: 771662853
num_examples: 2000
- name: FreeForm_1603
num_bytes: 772030313
num_examples: 2000
- name: FreeForm_1910
num_bytes: 769654437
num_examples: 2000
- name: FreeForm_1557
num_bytes: 776514469
num_examples: 2000
- name: FreeForm_1604
num_bytes: 779429331
num_examples: 2000
- name: FreeForm_1789
num_bytes: 773726710
num_examples: 2000
- name: FreeForm_1558
num_bytes: 776427709
num_examples: 2000
- name: FreeForm_1665
num_bytes: 767990537
num_examples: 2000
- name: FreeForm_1605
num_bytes: 774426474
num_examples: 2000
- name: FreeForm_1852
num_bytes: 769143639
num_examples: 2000
- name: FreeForm_1791
num_bytes: 767586822
num_examples: 2000
- name: FreeForm_1667
num_bytes: 772290052
num_examples: 2000
- name: FreeForm_1607
num_bytes: 768456885
num_examples: 2000
- name: FreeForm_1913
num_bytes: 779963651
num_examples: 2000
- name: FreeForm_1732
num_bytes: 772897019
num_examples: 2000
- name: FreeForm_1669
num_bytes: 776027758
num_examples: 2000
- name: FreeForm_1609
num_bytes: 768567004
num_examples: 2000
- name: FreeForm_1562
num_bytes: 769935418
num_examples: 2000
- name: FreeForm_1915
num_bytes: 782856606
num_examples: 2000
- name: FreeForm_1968
num_bytes: 767376995
num_examples: 2000
- name: FreeForm_1734
num_bytes: 769087259
num_examples: 2000
- name: FreeForm_1855
num_bytes: 779535816
num_examples: 2000
- name: FreeForm_1670
num_bytes: 781332277
num_examples: 2000
- name: FreeForm_1610
num_bytes: 781231841
num_examples: 2000
- name: FreeForm_1969
num_bytes: 777875017
num_examples: 2000
- name: FreeForm_1795
num_bytes: 775452519
num_examples: 2000
- name: FreeForm_1671
num_bytes: 777366861
num_examples: 2000
- name: FreeForm_1611
num_bytes: 784641102
num_examples: 2000
- name: FreeForm_1917
num_bytes: 777599611
num_examples: 2000
- name: FreeForm_1564
num_bytes: 780590282
num_examples: 2000
- name: FreeForm_1970
num_bytes: 773274829
num_examples: 2000
- name: FreeForm_1796
num_bytes: 782533872
num_examples: 2000
- name: FreeForm_1857
num_bytes: 780690564
num_examples: 2000
- name: FreeForm_1672
num_bytes: 768657526
num_examples: 2000
- name: FreeForm_1565
num_bytes: 768593353
num_examples: 2000
- name: FreeForm_1971
num_bytes: 770849547
num_examples: 2000
- name: FreeForm_1673
num_bytes: 773737499
num_examples: 2000
- name: FreeForm_1797
num_bytes: 783757126
num_examples: 2000
- name: FreeForm_1972
num_bytes: 772193432
num_examples: 2000
- name: FreeForm_1566
num_bytes: 782382857
num_examples: 2000
- name: FreeForm_1674
num_bytes: 776755282
num_examples: 2000
- name: FreeForm_1859
num_bytes: 775406752
num_examples: 2000
- name: FreeForm_1738
num_bytes: 768406452
num_examples: 2000
- name: FreeForm_1567
num_bytes: 776284767
num_examples: 2000
- name: FreeForm_1799
num_bytes: 779221193
num_examples: 2000
- name: FreeForm_1614
num_bytes: 774084638
num_examples: 2000
- name: FreeForm_1860
num_bytes: 779270331
num_examples: 2000
- name: FreeForm_1568
num_bytes: 778648659
num_examples: 2000
- name: FreeForm_1740
num_bytes: 773598842
num_examples: 2000
- name: FreeForm_1676
num_bytes: 779241237
num_examples: 2000
- name: FreeForm_1974
num_bytes: 777030113
num_examples: 2000
- name: FreeForm_1741
num_bytes: 778885616
num_examples: 2000
- name: FreeForm_1923
num_bytes: 769765231
num_examples: 2000
- name: FreeForm_1742
num_bytes: 778556450
num_examples: 2000
- name: FreeForm_1617
num_bytes: 775776789
num_examples: 2000
- name: FreeForm_1924
num_bytes: 774657873
num_examples: 2000
- name: FreeForm_1743
num_bytes: 769957345
num_examples: 2000
- name: FreeForm_1803
num_bytes: 779399830
num_examples: 2000
- name: FreeForm_1679
num_bytes: 770562122
num_examples: 2000
- name: FreeForm_1864
num_bytes: 775414698
num_examples: 2000
- name: FreeForm_1744
num_bytes: 772432481
num_examples: 2000
- name: FreeForm_1804
num_bytes: 769489846
num_examples: 2000
- name: FreeForm_1865
num_bytes: 772874771
num_examples: 2000
- name: FreeForm_1978
num_bytes: 770923318
num_examples: 2000
- name: FreeForm_1745
num_bytes: 775570130
num_examples: 2000
- name: FreeForm_1573
num_bytes: 778101981
num_examples: 2000
- name: FreeForm_1805
num_bytes: 773192041
num_examples: 2000
- name: FreeForm_1620
num_bytes: 770438186
num_examples: 2000
- name: FreeForm_1681
num_bytes: 773269627
num_examples: 2000
- name: FreeForm_1927
num_bytes: 777793544
num_examples: 2000
- name: FreeForm_1979
num_bytes: 772277123
num_examples: 2000
- name: FreeForm_1746
num_bytes: 768024663
num_examples: 2000
- name: FreeForm_1574
num_bytes: 775182043
num_examples: 2000
- name: FreeForm_1867
num_bytes: 772336683
num_examples: 2000
- name: FreeForm_1621
num_bytes: 779643601
num_examples: 2000
- name: FreeForm_1806
num_bytes: 772147940
num_examples: 2000
- name: FreeForm_1747
num_bytes: 782069613
num_examples: 2000
- name: FreeForm_1868
num_bytes: 766212112
num_examples: 2000
- name: FreeForm_1807
num_bytes: 776026001
num_examples: 2000
- name: FreeForm_1683
num_bytes: 772923845
num_examples: 2000
- name: FreeForm_1748
num_bytes: 770643722
num_examples: 2000
- name: FreeForm_1623
num_bytes: 781995507
num_examples: 2000
- name: FreeForm_1749
num_bytes: 773868228
num_examples: 2000
- name: FreeForm_1870
num_bytes: 779144486
num_examples: 2000
- name: FreeForm_1624
num_bytes: 772465705
num_examples: 2000
- name: FreeForm_1809
num_bytes: 770882826
num_examples: 2000
- name: FreeForm_1750
num_bytes: 768457543
num_examples: 2000
- name: FreeForm_1931
num_bytes: 772448872
num_examples: 2000
- name: FreeForm_1983
num_bytes: 767368466
num_examples: 2000
- name: FreeForm_1625
num_bytes: 779336106
num_examples: 2000
- name: FreeForm_1871
num_bytes: 773989099
num_examples: 2000
- name: FreeForm_1810
num_bytes: 781846996
num_examples: 2000
- name: FreeForm_1751
num_bytes: 770607707
num_examples: 2000
- name: FreeForm_1932
num_bytes: 775846499
num_examples: 2000
- name: FreeForm_1686
num_bytes: 775900812
num_examples: 2000
- name: FreeForm_1811
num_bytes: 774726677
num_examples: 2000
- name: FreeForm_1872
num_bytes: 776443102
num_examples: 2000
- name: FreeForm_1687
num_bytes: 773365850
num_examples: 2000
- name: FreeForm_1627
num_bytes: 775013436
num_examples: 2000
- name: FreeForm_1812
num_bytes: 774970479
num_examples: 2000
- name: FreeForm_1688
num_bytes: 777417292
num_examples: 2000
- name: FreeForm_1628
num_bytes: 771889019
num_examples: 2000
- name: FreeForm_1986
num_bytes: 777492292
num_examples: 2000
- name: FreeForm_1813
num_bytes: 775689254
num_examples: 2000
- name: FreeForm_1630
num_bytes: 763103601
num_examples: 2000
- name: FreeForm_1690
num_bytes: 771372106
num_examples: 2000
- name: FreeForm_1988
num_bytes: 772915325
num_examples: 2000
- name: FreeForm_1876
num_bytes: 771998762
num_examples: 2000
- name: FreeForm_1756
num_bytes: 777770864
num_examples: 2000
- name: FreeForm_1691
num_bytes: 774314799
num_examples: 2000
- name: FreeForm_1937
num_bytes: 777366277
num_examples: 2000
- name: FreeForm_1631
num_bytes: 771345279
num_examples: 2000
- name: FreeForm_1878
num_bytes: 767875789
num_examples: 2000
- name: FreeForm_1817
num_bytes: 768709391
num_examples: 2000
- name: FreeForm_1633
num_bytes: 771233969
num_examples: 2000
- name: FreeForm_1991
num_bytes: 769596136
num_examples: 2000
- name: FreeForm_1694
num_bytes: 772171191
num_examples: 2000
- name: FreeForm_1634
num_bytes: 769627140
num_examples: 2000
- name: FreeForm_1940
num_bytes: 776593617
num_examples: 2000
- name: FreeForm_1992
num_bytes: 777116071
num_examples: 2000
- name: FreeForm_1695
num_bytes: 775752244
num_examples: 2000
- name: FreeForm_1635
num_bytes: 775899627
num_examples: 2000
- name: FreeForm_1880
num_bytes: 776396050
num_examples: 2000
- name: FreeForm_1760
num_bytes: 768289077
num_examples: 2000
- name: FreeForm_1696
num_bytes: 784599423
num_examples: 2000
- name: FreeForm_1820
num_bytes: 775526982
num_examples: 2000
- name: FreeForm_1636
num_bytes: 779188921
num_examples: 2000
- name: FreeForm_1881
num_bytes: 768184329
num_examples: 2000
- name: FreeForm_1761
num_bytes: 771237846
num_examples: 2000
- name: FreeForm_1942
num_bytes: 774592400
num_examples: 2000
- name: FreeForm_1697
num_bytes: 777361676
num_examples: 2000
- name: FreeForm_1637
num_bytes: 775511943
num_examples: 2000
- name: FreeForm_1882
num_bytes: 773007481
num_examples: 2000
- name: FreeForm_1943
num_bytes: 776785506
num_examples: 2000
- name: FreeForm_1762
num_bytes: 770796170
num_examples: 2000
- name: FreeForm_1995
num_bytes: 774343622
num_examples: 2000
- name: FreeForm_1883
num_bytes: 773607987
num_examples: 2000
- name: FreeForm_1698
num_bytes: 778047450
num_examples: 2000
- name: FreeForm_1822
num_bytes: 778444354
num_examples: 2000
- name: FreeForm_1944
num_bytes: 769459278
num_examples: 2000
- name: FreeForm_1884
num_bytes: 772799351
num_examples: 2000
- name: FreeForm_1823
num_bytes: 776495132
num_examples: 2000
- name: FreeForm_1945
num_bytes: 775081306
num_examples: 2000
- name: FreeForm_1885
num_bytes: 771521453
num_examples: 2000
- name: FreeForm_1700
num_bytes: 765143515
num_examples: 2000
- name: FreeForm_1946
num_bytes: 776201196
num_examples: 2000
- name: FreeForm_1886
num_bytes: 772053340
num_examples: 2000
- name: FreeForm_1825
num_bytes: 773203747
num_examples: 2000
- name: FreeForm_1947
num_bytes: 771770136
num_examples: 2000
- name: FreeForm_1887
num_bytes: 779615516
num_examples: 2000
- name: FreeForm_1826
num_bytes: 773148215
num_examples: 2000
- name: FreeForm_1948
num_bytes: 772645007
num_examples: 2000
- name: FreeForm_1888
num_bytes: 772856693
num_examples: 2000
- name: FreeForm_1999
num_bytes: 769374754
num_examples: 2000
- name: FreeForm_1949
num_bytes: 773280379
num_examples: 2000
- name: FreeForm_1889
num_bytes: 774735177
num_examples: 2000
- name: FreeForm_1950
num_bytes: 774599150
num_examples: 2000
- name: FreeForm_1951
num_bytes: 767662993
num_examples: 2000
- name: FreeForm_1952
num_bytes: 764039694
num_examples: 2000
- name: FreeForm_538
num_bytes: 789922342
num_examples: 2000
- name: FreeForm_965
num_bytes: 782703569
num_examples: 2000
- name: FreeForm_539
num_bytes: 781175362
num_examples: 2000
- name: FreeForm_903
num_bytes: 777441158
num_examples: 2000
- name: FreeForm_540
num_bytes: 782021717
num_examples: 2000
- name: FreeForm_917
num_bytes: 781067199
num_examples: 2000
- name: FreeForm_541
num_bytes: 775971262
num_examples: 2000
- name: FreeForm_604
num_bytes: 785217033
num_examples: 2000
- name: FreeForm_818
num_bytes: 779756338
num_examples: 2000
- name: FreeForm_728
num_bytes: 776195434
num_examples: 2000
- name: FreeForm_606
num_bytes: 778882561
num_examples: 2000
- name: FreeForm_997
num_bytes: 784575711
num_examples: 2000
- name: FreeForm_562
num_bytes: 776825755
num_examples: 2000
- name: FreeForm_623
num_bytes: 783935630
num_examples: 2000
- name: FreeForm_1021
num_bytes: 774340124
num_examples: 2000
- name: FreeForm_731
num_bytes: 781291514
num_examples: 2000
- name: FreeForm_940
num_bytes: 785912855
num_examples: 2000
- name: FreeForm_732
num_bytes: 779065415
num_examples: 2000
- name: FreeForm_878
num_bytes: 775573675
num_examples: 2000
- name: FreeForm_1067
num_bytes: 779476433
num_examples: 2000
- name: FreeForm_669
num_bytes: 783825944
num_examples: 2000
- name: FreeForm_879
num_bytes: 781175453
num_examples: 2000
- name: FreeForm_1162
num_bytes: 775534366
num_examples: 2000
- name: FreeForm_1099
num_bytes: 776744419
num_examples: 2000
- name: FreeForm_670
num_bytes: 782818795
num_examples: 2000
- name: FreeForm_1172
num_bytes: 772800488
num_examples: 2000
- name: FreeForm_1222
num_bytes: 768753542
num_examples: 2000
- name: FreeForm_686
num_bytes: 779647058
num_examples: 2000
- name: FreeForm_1337
num_bytes: 777645742
num_examples: 2000
- name: FreeForm_688
num_bytes: 783226366
num_examples: 2000
- name: FreeForm_1115
num_bytes: 777750807
num_examples: 2000
- name: FreeForm_1265
num_bytes: 782280644
num_examples: 2000
- name: FreeForm_1117
num_bytes: 771938043
num_examples: 2000
- name: FreeForm_1418
num_bytes: 773562141
num_examples: 2000
- name: FreeForm_1513
num_bytes: 772269953
num_examples: 2000
- name: FreeForm_1360
num_bytes: 770456201
num_examples: 2000
- name: FreeForm_1422
num_bytes: 766260039
num_examples: 2000
- name: FreeForm_1514
num_bytes: 778588888
num_examples: 2000
- name: FreeForm_1290
num_bytes: 776704724
num_examples: 2000
- name: FreeForm_1487
num_bytes: 771203540
num_examples: 2000
- name: FreeForm_1527
num_bytes: 776428854
num_examples: 2000
- name: FreeForm_1299
num_bytes: 774592302
num_examples: 2000
- name: FreeForm_1488
num_bytes: 772030662
num_examples: 2000
- name: FreeForm_1529
num_bytes: 769107675
num_examples: 2000
- name: FreeForm_1302
num_bytes: 783287330
num_examples: 2000
- name: FreeForm_1371
num_bytes: 778291875
num_examples: 2000
- name: FreeForm_1439
num_bytes: 775125426
num_examples: 2000
- name: FreeForm_1638
num_bytes: 770945774
num_examples: 2000
- name: FreeForm_1305
num_bytes: 774733211
num_examples: 2000
- name: FreeForm_1644
num_bytes: 763865811
num_examples: 2000
- name: FreeForm_1308
num_bytes: 770073632
num_examples: 2000
- name: FreeForm_1497
num_bytes: 774371998
num_examples: 2000
- name: FreeForm_1706
num_bytes: 767965922
num_examples: 2000
- name: FreeForm_1830
num_bytes: 777364204
num_examples: 2000
- name: FreeForm_1650
num_bytes: 774946127
num_examples: 2000
- name: FreeForm_1537
num_bytes: 770611835
num_examples: 2000
- name: FreeForm_1832
num_bytes: 769485028
num_examples: 2000
- name: FreeForm_1776
num_bytes: 779900472
num_examples: 2000
- name: FreeForm_1322
num_bytes: 778172819
num_examples: 2000
- name: FreeForm_1833
num_bytes: 768188642
num_examples: 2000
- name: FreeForm_1713
num_bytes: 772172320
num_examples: 2000
- name: FreeForm_1553
num_bytes: 774246555
num_examples: 2000
- name: FreeForm_1596
num_bytes: 775757405
num_examples: 2000
- name: FreeForm_1663
num_bytes: 777946907
num_examples: 2000
- name: FreeForm_1556
num_bytes: 770487590
num_examples: 2000
- name: FreeForm_1783
num_bytes: 774307481
num_examples: 2000
- name: FreeForm_1912
num_bytes: 774185583
num_examples: 2000
- name: FreeForm_1559
num_bytes: 774629139
num_examples: 2000
- name: FreeForm_1785
num_bytes: 776955190
num_examples: 2000
- name: FreeForm_1666
num_bytes: 767827026
num_examples: 2000
- name: FreeForm_1729
num_bytes: 780695121
num_examples: 2000
- name: FreeForm_1788
num_bytes: 766180430
num_examples: 2000
- name: FreeForm_1668
num_bytes: 769715133
num_examples: 2000
- name: FreeForm_1918
num_bytes: 774617311
num_examples: 2000
- name: FreeForm_1563
num_bytes: 774817952
num_examples: 2000
- name: FreeForm_1675
num_bytes: 773030944
num_examples: 2000
- name: FreeForm_1962
num_bytes: 786053209
num_examples: 2000
- name: FreeForm_1792
num_bytes: 774700008
num_examples: 2000
- name: FreeForm_1615
num_bytes: 774380131
num_examples: 2000
- name: FreeForm_1846
num_bytes: 774658032
num_examples: 2000
- name: FreeForm_1616
num_bytes: 782429195
num_examples: 2000
- name: FreeForm_1850
num_bytes: 775140091
num_examples: 2000
- name: FreeForm_1964
num_bytes: 780393901
num_examples: 2000
- name: FreeForm_1801
num_bytes: 768773753
num_examples: 2000
- name: FreeForm_1851
num_bytes: 775091817
num_examples: 2000
- name: FreeForm_1965
num_bytes: 774710107
num_examples: 2000
- name: FreeForm_1626
num_bytes: 776500055
num_examples: 2000
- name: FreeForm_1853
num_bytes: 774376334
num_examples: 2000
- name: FreeForm_1967
num_bytes: 767462102
num_examples: 2000
- name: FreeForm_1692
num_bytes: 766343506
num_examples: 2000
- name: FreeForm_1854
num_bytes: 768674186
num_examples: 2000
- name: FreeForm_1975
num_bytes: 765777279
num_examples: 2000
- name: FreeForm_1699
num_bytes: 778883501
num_examples: 2000
- name: FreeForm_1755
num_bytes: 783000185
num_examples: 2000
- name: FreeForm_1757
num_bytes: 769193034
num_examples: 2000
- name: FreeForm_1763
num_bytes: 772044823
num_examples: 2000
- name: FreeForm_1814
num_bytes: 777568635
num_examples: 2000
- name: FreeForm_1816
num_bytes: 776191715
num_examples: 2000
- name: FreeForm_1821
num_bytes: 777857890
num_examples: 2000
- name: FreeForm_1856
num_bytes: 769967566
num_examples: 2000
- name: FreeForm_1862
num_bytes: 767341817
num_examples: 2000
- name: FreeForm_1873
num_bytes: 772574070
num_examples: 2000
- name: FreeForm_1875
num_bytes: 770945433
num_examples: 2000
- name: FreeForm_1877
num_bytes: 772618224
num_examples: 2000
- name: FreeForm_1935
num_bytes: 780171644
num_examples: 2000
- name: FreeForm_1936
num_bytes: 780368989
num_examples: 2000
- name: FreeForm_1938
num_bytes: 775192638
num_examples: 2000
- name: FreeForm_1939
num_bytes: 768517191
num_examples: 2000
- name: FreeForm_1941
num_bytes: 767928606
num_examples: 2000
- name: FreeForm_1977
num_bytes: 780736929
num_examples: 2000
- name: FreeForm_1981
num_bytes: 775615890
num_examples: 2000
- name: FreeForm_1984
num_bytes: 769609649
num_examples: 2000
- name: FreeForm_1985
num_bytes: 770730441
num_examples: 2000
- name: FreeForm_1987
num_bytes: 768263066
num_examples: 2000
- name: FreeForm_1989
num_bytes: 780388977
num_examples: 2000
- name: FreeForm_1990
num_bytes: 772863509
num_examples: 2000
- name: FreeForm_1993
num_bytes: 773757340
num_examples: 2000
- name: FreeForm_1996
num_bytes: 770872885
num_examples: 2000
- name: FreeForm_2000
num_bytes: 32585530
num_examples: 83
- name: FreeForm_1205
  num_bytes: 776134960
num_examples: 2000
download_size: 1182151585538
dataset_size: 1177371972678
configs:
- config_name: default
data_files:
- split: FreeForm_0
path: data/FreeForm_0-*
- split: FreeForm_1
path: data/FreeForm_1-*
- split: FreeForm_2
path: data/FreeForm_2-*
- split: FreeForm_3
path: data/FreeForm_3-*
- split: FreeForm_4
path: data/FreeForm_4-*
- split: FreeForm_5
path: data/FreeForm_5-*
- split: FreeForm_6
path: data/FreeForm_6-*
- split: FreeForm_7
path: data/FreeForm_7-*
- split: FreeForm_8
path: data/FreeForm_8-*
- split: FreeForm_9
path: data/FreeForm_9-*
- split: FreeForm_10
path: data/FreeForm_10-*
- split: FreeForm_11
path: data/FreeForm_11-*
- split: FreeForm_12
path: data/FreeForm_12-*
- split: FreeForm_13
path: data/FreeForm_13-*
- split: FreeForm_14
path: data/FreeForm_14-*
- split: FreeForm_15
path: data/FreeForm_15-*
- split: FreeForm_16
path: data/FreeForm_16-*
- split: FreeForm_17
path: data/FreeForm_17-*
- split: FreeForm_18
path: data/FreeForm_18-*
- split: FreeForm_19
path: data/FreeForm_19-*
- split: FreeForm_20
path: data/FreeForm_20-*
- split: FreeForm_21
path: data/FreeForm_21-*
- split: FreeForm_22
path: data/FreeForm_22-*
- split: FreeForm_23
path: data/FreeForm_23-*
- split: FreeForm_24
path: data/FreeForm_24-*
- split: FreeForm_25
path: data/FreeForm_25-*
- split: FreeForm_26
path: data/FreeForm_26-*
- split: FreeForm_27
path: data/FreeForm_27-*
- split: FreeForm_28
path: data/FreeForm_28-*
- split: FreeForm_29
path: data/FreeForm_29-*
- split: FreeForm_30
path: data/FreeForm_30-*
- split: FreeForm_31
path: data/FreeForm_31-*
- split: FreeForm_32
path: data/FreeForm_32-*
- split: FreeForm_33
path: data/FreeForm_33-*
- split: FreeForm_34
path: data/FreeForm_34-*
- split: FreeForm_35
path: data/FreeForm_35-*
- split: FreeForm_36
path: data/FreeForm_36-*
- split: FreeForm_37
path: data/FreeForm_37-*
- split: FreeForm_38
path: data/FreeForm_38-*
- split: FreeForm_39
path: data/FreeForm_39-*
- split: FreeForm_40
path: data/FreeForm_40-*
- split: FreeForm_41
path: data/FreeForm_41-*
- split: FreeForm_42
path: data/FreeForm_42-*
- split: FreeForm_43
path: data/FreeForm_43-*
- split: FreeForm_44
path: data/FreeForm_44-*
- split: FreeForm_45
path: data/FreeForm_45-*
- split: FreeForm_46
path: data/FreeForm_46-*
- split: FreeForm_47
path: data/FreeForm_47-*
- split: FreeForm_48
path: data/FreeForm_48-*
- split: FreeForm_49
path: data/FreeForm_49-*
- split: FreeForm_50
path: data/FreeForm_50-*
- split: FreeForm_51
path: data/FreeForm_51-*
- split: FreeForm_52
path: data/FreeForm_52-*
- split: FreeForm_53
path: data/FreeForm_53-*
- split: FreeForm_54
path: data/FreeForm_54-*
- split: FreeForm_55
path: data/FreeForm_55-*
- split: FreeForm_56
path: data/FreeForm_56-*
- split: FreeForm_57
path: data/FreeForm_57-*
- split: FreeForm_58
path: data/FreeForm_58-*
- split: FreeForm_59
path: data/FreeForm_59-*
- split: FreeForm_60
path: data/FreeForm_60-*
- split: FreeForm_61
path: data/FreeForm_61-*
- split: FreeForm_62
path: data/FreeForm_62-*
- split: FreeForm_63
path: data/FreeForm_63-*
- split: FreeForm_64
path: data/FreeForm_64-*
- split: FreeForm_65
path: data/FreeForm_65-*
- split: FreeForm_66
path: data/FreeForm_66-*
- split: FreeForm_67
path: data/FreeForm_67-*
- split: FreeForm_68
path: data/FreeForm_68-*
- split: FreeForm_69
path: data/FreeForm_69-*
- split: FreeForm_70
path: data/FreeForm_70-*
- split: FreeForm_71
path: data/FreeForm_71-*
- split: FreeForm_72
path: data/FreeForm_72-*
- split: FreeForm_73
path: data/FreeForm_73-*
- split: FreeForm_74
path: data/FreeForm_74-*
- split: FreeForm_75
path: data/FreeForm_75-*
- split: FreeForm_76
path: data/FreeForm_76-*
- split: FreeForm_77
path: data/FreeForm_77-*
- split: FreeForm_78
path: data/FreeForm_78-*
- split: FreeForm_79
path: data/FreeForm_79-*
- split: FreeForm_80
path: data/FreeForm_80-*
- split: FreeForm_81
path: data/FreeForm_81-*
- split: FreeForm_82
path: data/FreeForm_82-*
- split: FreeForm_83
path: data/FreeForm_83-*
- split: FreeForm_84
path: data/FreeForm_84-*
- split: FreeForm_85
path: data/FreeForm_85-*
- split: FreeForm_86
path: data/FreeForm_86-*
- split: FreeForm_87
path: data/FreeForm_87-*
- split: FreeForm_88
path: data/FreeForm_88-*
- split: FreeForm_89
path: data/FreeForm_89-*
- split: FreeForm_90
path: data/FreeForm_90-*
- split: FreeForm_91
path: data/FreeForm_91-*
- split: FreeForm_92
path: data/FreeForm_92-*
- split: FreeForm_93
path: data/FreeForm_93-*
- split: FreeForm_94
path: data/FreeForm_94-*
- split: FreeForm_95
path: data/FreeForm_95-*
- split: FreeForm_96
path: data/FreeForm_96-*
- split: FreeForm_97
path: data/FreeForm_97-*
- split: FreeForm_98
path: data/FreeForm_98-*
- split: FreeForm_99
path: data/FreeForm_99-*
- split: FreeForm_100
path: data/FreeForm_100-*
- split: FreeForm_101
path: data/FreeForm_101-*
- split: FreeForm_102
path: data/FreeForm_102-*
- split: FreeForm_103
path: data/FreeForm_103-*
- split: FreeForm_104
path: data/FreeForm_104-*
- split: FreeForm_105
path: data/FreeForm_105-*
- split: FreeForm_106
path: data/FreeForm_106-*
- split: FreeForm_107
path: data/FreeForm_107-*
- split: FreeForm_108
path: data/FreeForm_108-*
- split: FreeForm_109
path: data/FreeForm_109-*
- split: FreeForm_110
path: data/FreeForm_110-*
- split: FreeForm_111
path: data/FreeForm_111-*
- split: FreeForm_112
path: data/FreeForm_112-*
- split: FreeForm_113
path: data/FreeForm_113-*
- split: FreeForm_114
path: data/FreeForm_114-*
- split: FreeForm_115
path: data/FreeForm_115-*
- split: FreeForm_116
path: data/FreeForm_116-*
- split: FreeForm_117
path: data/FreeForm_117-*
- split: FreeForm_118
path: data/FreeForm_118-*
- split: FreeForm_119
path: data/FreeForm_119-*
- split: FreeForm_120
path: data/FreeForm_120-*
- split: FreeForm_121
path: data/FreeForm_121-*
- split: FreeForm_122
path: data/FreeForm_122-*
- split: FreeForm_123
path: data/FreeForm_123-*
- split: FreeForm_124
path: data/FreeForm_124-*
- split: FreeForm_125
path: data/FreeForm_125-*
- split: FreeForm_126
path: data/FreeForm_126-*
- split: FreeForm_127
path: data/FreeForm_127-*
- split: FreeForm_128
path: data/FreeForm_128-*
- split: FreeForm_129
path: data/FreeForm_129-*
- split: FreeForm_130
path: data/FreeForm_130-*
- split: FreeForm_131
path: data/FreeForm_131-*
- split: FreeForm_132
path: data/FreeForm_132-*
- split: FreeForm_133
path: data/FreeForm_133-*
- split: FreeForm_134
path: data/FreeForm_134-*
- split: FreeForm_135
path: data/FreeForm_135-*
- split: FreeForm_136
path: data/FreeForm_136-*
- split: FreeForm_137
path: data/FreeForm_137-*
- split: FreeForm_138
path: data/FreeForm_138-*
- split: FreeForm_139
path: data/FreeForm_139-*
- split: FreeForm_140
path: data/FreeForm_140-*
- split: FreeForm_141
path: data/FreeForm_141-*
- split: FreeForm_142
path: data/FreeForm_142-*
- split: FreeForm_143
path: data/FreeForm_143-*
- split: FreeForm_144
path: data/FreeForm_144-*
- split: FreeForm_145
path: data/FreeForm_145-*
- split: FreeForm_146
path: data/FreeForm_146-*
- split: FreeForm_147
path: data/FreeForm_147-*
- split: FreeForm_148
path: data/FreeForm_148-*
- split: FreeForm_149
path: data/FreeForm_149-*
- split: FreeForm_150
path: data/FreeForm_150-*
- split: FreeForm_151
path: data/FreeForm_151-*
- split: FreeForm_152
path: data/FreeForm_152-*
- split: FreeForm_153
path: data/FreeForm_153-*
- split: FreeForm_154
path: data/FreeForm_154-*
- split: FreeForm_155
path: data/FreeForm_155-*
- split: FreeForm_156
path: data/FreeForm_156-*
- split: FreeForm_157
path: data/FreeForm_157-*
- split: FreeForm_158
path: data/FreeForm_158-*
- split: FreeForm_159
path: data/FreeForm_159-*
- split: FreeForm_160
path: data/FreeForm_160-*
- split: FreeForm_161
path: data/FreeForm_161-*
- split: FreeForm_162
path: data/FreeForm_162-*
- split: FreeForm_163
path: data/FreeForm_163-*
- split: FreeForm_164
path: data/FreeForm_164-*
- split: FreeForm_165
path: data/FreeForm_165-*
- split: FreeForm_166
path: data/FreeForm_166-*
- split: FreeForm_167
path: data/FreeForm_167-*
- split: FreeForm_168
path: data/FreeForm_168-*
- split: FreeForm_169
path: data/FreeForm_169-*
- split: FreeForm_170
path: data/FreeForm_170-*
- split: FreeForm_171
path: data/FreeForm_171-*
- split: FreeForm_172
path: data/FreeForm_172-*
- split: FreeForm_173
path: data/FreeForm_173-*
- split: FreeForm_174
path: data/FreeForm_174-*
- split: FreeForm_175
path: data/FreeForm_175-*
- split: FreeForm_176
path: data/FreeForm_176-*
- split: FreeForm_177
path: data/FreeForm_177-*
- split: FreeForm_178
path: data/FreeForm_178-*
- split: FreeForm_179
path: data/FreeForm_179-*
- split: FreeForm_180
path: data/FreeForm_180-*
- split: FreeForm_181
path: data/FreeForm_181-*
- split: FreeForm_182
path: data/FreeForm_182-*
- split: FreeForm_183
path: data/FreeForm_183-*
- split: FreeForm_184
path: data/FreeForm_184-*
- split: FreeForm_185
path: data/FreeForm_185-*
- split: FreeForm_186
path: data/FreeForm_186-*
- split: FreeForm_187
path: data/FreeForm_187-*
- split: FreeForm_188
path: data/FreeForm_188-*
- split: FreeForm_189
path: data/FreeForm_189-*
- split: FreeForm_190
path: data/FreeForm_190-*
- split: FreeForm_191
path: data/FreeForm_191-*
- split: FreeForm_192
path: data/FreeForm_192-*
- split: FreeForm_193
path: data/FreeForm_193-*
- split: FreeForm_194
path: data/FreeForm_194-*
- split: FreeForm_195
path: data/FreeForm_195-*
- split: FreeForm_196
path: data/FreeForm_196-*
- split: FreeForm_197
path: data/FreeForm_197-*
- split: FreeForm_198
path: data/FreeForm_198-*
- split: FreeForm_199
path: data/FreeForm_199-*
- split: FreeForm_200
path: data/FreeForm_200-*
- split: FreeForm_201
path: data/FreeForm_201-*
- split: FreeForm_202
path: data/FreeForm_202-*
- split: FreeForm_203
path: data/FreeForm_203-*
- split: FreeForm_204
path: data/FreeForm_204-*
- split: FreeForm_205
path: data/FreeForm_205-*
- split: FreeForm_206
path: data/FreeForm_206-*
- split: FreeForm_207
path: data/FreeForm_207-*
- split: FreeForm_208
path: data/FreeForm_208-*
- split: FreeForm_209
path: data/FreeForm_209-*
- split: FreeForm_210
path: data/FreeForm_210-*
- split: FreeForm_211
path: data/FreeForm_211-*
- split: FreeForm_212
path: data/FreeForm_212-*
- split: FreeForm_213
path: data/FreeForm_213-*
- split: FreeForm_214
path: data/FreeForm_214-*
- split: FreeForm_215
path: data/FreeForm_215-*
- split: FreeForm_216
path: data/FreeForm_216-*
- split: FreeForm_217
path: data/FreeForm_217-*
- split: FreeForm_218
path: data/FreeForm_218-*
- split: FreeForm_219
path: data/FreeForm_219-*
- split: FreeForm_220
path: data/FreeForm_220-*
- split: FreeForm_221
path: data/FreeForm_221-*
- split: FreeForm_222
path: data/FreeForm_222-*
- split: FreeForm_223
path: data/FreeForm_223-*
- split: FreeForm_224
path: data/FreeForm_224-*
- split: FreeForm_225
path: data/FreeForm_225-*
- split: FreeForm_226
path: data/FreeForm_226-*
- split: FreeForm_227
path: data/FreeForm_227-*
- split: FreeForm_228
path: data/FreeForm_228-*
- split: FreeForm_229
path: data/FreeForm_229-*
- split: FreeForm_230
path: data/FreeForm_230-*
- split: FreeForm_231
path: data/FreeForm_231-*
- split: FreeForm_232
path: data/FreeForm_232-*
- split: FreeForm_233
path: data/FreeForm_233-*
- split: FreeForm_234
path: data/FreeForm_234-*
- split: FreeForm_235
path: data/FreeForm_235-*
- split: FreeForm_236
path: data/FreeForm_236-*
- split: FreeForm_237
path: data/FreeForm_237-*
- split: FreeForm_238
path: data/FreeForm_238-*
- split: FreeForm_239
path: data/FreeForm_239-*
- split: FreeForm_240
path: data/FreeForm_240-*
- split: FreeForm_241
path: data/FreeForm_241-*
- split: FreeForm_242
path: data/FreeForm_242-*
- split: FreeForm_243
path: data/FreeForm_243-*
- split: FreeForm_244
path: data/FreeForm_244-*
- split: FreeForm_245
path: data/FreeForm_245-*
- split: FreeForm_246
path: data/FreeForm_246-*
- split: FreeForm_247
path: data/FreeForm_247-*
- split: FreeForm_248
path: data/FreeForm_248-*
- split: FreeForm_249
path: data/FreeForm_249-*
- split: FreeForm_250
path: data/FreeForm_250-*
- split: FreeForm_251
path: data/FreeForm_251-*
- split: FreeForm_252
path: data/FreeForm_252-*
- split: FreeForm_253
path: data/FreeForm_253-*
- split: FreeForm_254
path: data/FreeForm_254-*
- split: FreeForm_255
path: data/FreeForm_255-*
- split: FreeForm_256
path: data/FreeForm_256-*
- split: FreeForm_257
path: data/FreeForm_257-*
- split: FreeForm_258
path: data/FreeForm_258-*
- split: FreeForm_259
path: data/FreeForm_259-*
- split: FreeForm_260
path: data/FreeForm_260-*
- split: FreeForm_261
path: data/FreeForm_261-*
- split: FreeForm_262
path: data/FreeForm_262-*
- split: FreeForm_263
path: data/FreeForm_263-*
- split: FreeForm_264
path: data/FreeForm_264-*
- split: FreeForm_265
path: data/FreeForm_265-*
- split: FreeForm_266
path: data/FreeForm_266-*
- split: FreeForm_267
path: data/FreeForm_267-*
- split: FreeForm_268
path: data/FreeForm_268-*
- split: FreeForm_269
path: data/FreeForm_269-*
- split: FreeForm_270
path: data/FreeForm_270-*
- split: FreeForm_271
path: data/FreeForm_271-*
- split: FreeForm_272
path: data/FreeForm_272-*
- split: FreeForm_273
path: data/FreeForm_273-*
- split: FreeForm_274
path: data/FreeForm_274-*
- split: FreeForm_275
path: data/FreeForm_275-*
- split: FreeForm_276
path: data/FreeForm_276-*
- split: FreeForm_277
path: data/FreeForm_277-*
- split: FreeForm_278
path: data/FreeForm_278-*
- split: FreeForm_279
path: data/FreeForm_279-*
- split: FreeForm_280
path: data/FreeForm_280-*
- split: FreeForm_281
path: data/FreeForm_281-*
- split: FreeForm_282
path: data/FreeForm_282-*
- split: FreeForm_283
path: data/FreeForm_283-*
- split: FreeForm_284
path: data/FreeForm_284-*
- split: FreeForm_285
path: data/FreeForm_285-*
- split: FreeForm_286
path: data/FreeForm_286-*
- split: FreeForm_287
path: data/FreeForm_287-*
- split: FreeForm_288
path: data/FreeForm_288-*
- split: FreeForm_289
path: data/FreeForm_289-*
- split: FreeForm_290
path: data/FreeForm_290-*
- split: FreeForm_291
path: data/FreeForm_291-*
- split: FreeForm_292
path: data/FreeForm_292-*
- split: FreeForm_293
path: data/FreeForm_293-*
- split: FreeForm_294
path: data/FreeForm_294-*
- split: FreeForm_295
path: data/FreeForm_295-*
- split: FreeForm_296
path: data/FreeForm_296-*
- split: FreeForm_297
path: data/FreeForm_297-*
- split: FreeForm_298
path: data/FreeForm_298-*
- split: FreeForm_299
path: data/FreeForm_299-*
- split: FreeForm_300
path: data/FreeForm_300-*
- split: FreeForm_301
path: data/FreeForm_301-*
- split: FreeForm_302
path: data/FreeForm_302-*
- split: FreeForm_303
path: data/FreeForm_303-*
- split: FreeForm_304
path: data/FreeForm_304-*
- split: FreeForm_305
path: data/FreeForm_305-*
- split: FreeForm_306
path: data/FreeForm_306-*
- split: FreeForm_307
path: data/FreeForm_307-*
- split: FreeForm_308
path: data/FreeForm_308-*
- split: FreeForm_309
path: data/FreeForm_309-*
- split: FreeForm_310
path: data/FreeForm_310-*
- split: FreeForm_311
path: data/FreeForm_311-*
- split: FreeForm_312
path: data/FreeForm_312-*
- split: FreeForm_313
path: data/FreeForm_313-*
- split: FreeForm_314
path: data/FreeForm_314-*
- split: FreeForm_315
path: data/FreeForm_315-*
- split: FreeForm_316
path: data/FreeForm_316-*
- split: FreeForm_317
path: data/FreeForm_317-*
- split: FreeForm_318
path: data/FreeForm_318-*
- split: FreeForm_319
path: data/FreeForm_319-*
- split: FreeForm_320
path: data/FreeForm_320-*
- split: FreeForm_321
path: data/FreeForm_321-*
- split: FreeForm_322
path: data/FreeForm_322-*
- split: FreeForm_323
path: data/FreeForm_323-*
- split: FreeForm_324
path: data/FreeForm_324-*
- split: FreeForm_325
path: data/FreeForm_325-*
- split: FreeForm_326
path: data/FreeForm_326-*
- split: FreeForm_327
path: data/FreeForm_327-*
- split: FreeForm_328
path: data/FreeForm_328-*
- split: FreeForm_329
path: data/FreeForm_329-*
- split: FreeForm_330
path: data/FreeForm_330-*
- split: FreeForm_331
path: data/FreeForm_331-*
- split: FreeForm_332
path: data/FreeForm_332-*
- split: FreeForm_333
path: data/FreeForm_333-*
- split: FreeForm_334
path: data/FreeForm_334-*
- split: FreeForm_335
path: data/FreeForm_335-*
- split: FreeForm_336
path: data/FreeForm_336-*
- split: FreeForm_337
path: data/FreeForm_337-*
- split: FreeForm_338
path: data/FreeForm_338-*
- split: FreeForm_339
path: data/FreeForm_339-*
- split: FreeForm_340
path: data/FreeForm_340-*
- split: FreeForm_341
path: data/FreeForm_341-*
- split: FreeForm_342
path: data/FreeForm_342-*
- split: FreeForm_343
path: data/FreeForm_343-*
- split: FreeForm_344
path: data/FreeForm_344-*
- split: FreeForm_345
path: data/FreeForm_345-*
- split: FreeForm_346
path: data/FreeForm_346-*
- split: FreeForm_347
path: data/FreeForm_347-*
- split: FreeForm_348
path: data/FreeForm_348-*
- split: FreeForm_349
path: data/FreeForm_349-*
- split: FreeForm_350
path: data/FreeForm_350-*
- split: FreeForm_351
path: data/FreeForm_351-*
- split: FreeForm_352
path: data/FreeForm_352-*
- split: FreeForm_353
path: data/FreeForm_353-*
- split: FreeForm_354
path: data/FreeForm_354-*
- split: FreeForm_355
path: data/FreeForm_355-*
- split: FreeForm_356
path: data/FreeForm_356-*
- split: FreeForm_357
path: data/FreeForm_357-*
- split: FreeForm_358
path: data/FreeForm_358-*
- split: FreeForm_359
path: data/FreeForm_359-*
- split: FreeForm_360
path: data/FreeForm_360-*
- split: FreeForm_361
path: data/FreeForm_361-*
- split: FreeForm_362
path: data/FreeForm_362-*
- split: FreeForm_363
path: data/FreeForm_363-*
- split: FreeForm_364
path: data/FreeForm_364-*
- split: FreeForm_365
path: data/FreeForm_365-*
- split: FreeForm_366
path: data/FreeForm_366-*
- split: FreeForm_367
path: data/FreeForm_367-*
- split: FreeForm_368
path: data/FreeForm_368-*
- split: FreeForm_369
path: data/FreeForm_369-*
- split: FreeForm_370
path: data/FreeForm_370-*
- split: FreeForm_371
path: data/FreeForm_371-*
- split: FreeForm_372
path: data/FreeForm_372-*
- split: FreeForm_373
path: data/FreeForm_373-*
- split: FreeForm_374
path: data/FreeForm_374-*
- split: FreeForm_375
path: data/FreeForm_375-*
- split: FreeForm_376
path: data/FreeForm_376-*
- split: FreeForm_377
path: data/FreeForm_377-*
- split: FreeForm_378
path: data/FreeForm_378-*
- split: FreeForm_379
path: data/FreeForm_379-*
- split: FreeForm_380
path: data/FreeForm_380-*
- split: FreeForm_381
path: data/FreeForm_381-*
- split: FreeForm_382
path: data/FreeForm_382-*
- split: FreeForm_383
path: data/FreeForm_383-*
- split: FreeForm_384
path: data/FreeForm_384-*
- split: FreeForm_385
path: data/FreeForm_385-*
- split: FreeForm_386
path: data/FreeForm_386-*
- split: FreeForm_387
path: data/FreeForm_387-*
- split: FreeForm_388
path: data/FreeForm_388-*
- split: FreeForm_389
path: data/FreeForm_389-*
- split: FreeForm_390
path: data/FreeForm_390-*
- split: FreeForm_391
path: data/FreeForm_391-*
- split: FreeForm_392
path: data/FreeForm_392-*
- split: FreeForm_393
path: data/FreeForm_393-*
- split: FreeForm_394
path: data/FreeForm_394-*
- split: FreeForm_395
path: data/FreeForm_395-*
- split: FreeForm_396
path: data/FreeForm_396-*
- split: FreeForm_397
path: data/FreeForm_397-*
- split: FreeForm_398
path: data/FreeForm_398-*
- split: FreeForm_399
path: data/FreeForm_399-*
- split: FreeForm_400
path: data/FreeForm_400-*
- split: FreeForm_401
path: data/FreeForm_401-*
- split: FreeForm_402
path: data/FreeForm_402-*
- split: FreeForm_403
path: data/FreeForm_403-*
- split: FreeForm_404
path: data/FreeForm_404-*
- split: FreeForm_405
path: data/FreeForm_405-*
- split: FreeForm_406
path: data/FreeForm_406-*
- split: FreeForm_407
path: data/FreeForm_407-*
- split: FreeForm_408
path: data/FreeForm_408-*
- split: FreeForm_409
path: data/FreeForm_409-*
- split: FreeForm_410
path: data/FreeForm_410-*
- split: FreeForm_411
path: data/FreeForm_411-*
- split: FreeForm_412
path: data/FreeForm_412-*
- split: FreeForm_413
path: data/FreeForm_413-*
- split: FreeForm_414
path: data/FreeForm_414-*
- split: FreeForm_415
path: data/FreeForm_415-*
- split: FreeForm_416
path: data/FreeForm_416-*
- split: FreeForm_417
path: data/FreeForm_417-*
- split: FreeForm_418
path: data/FreeForm_418-*
- split: FreeForm_419
path: data/FreeForm_419-*
- split: FreeForm_420
path: data/FreeForm_420-*
- split: FreeForm_421
path: data/FreeForm_421-*
- split: FreeForm_422
path: data/FreeForm_422-*
- split: FreeForm_423
path: data/FreeForm_423-*
- split: FreeForm_424
path: data/FreeForm_424-*
- split: FreeForm_425
path: data/FreeForm_425-*
- split: FreeForm_426
path: data/FreeForm_426-*
- split: FreeForm_427
path: data/FreeForm_427-*
- split: FreeForm_428
path: data/FreeForm_428-*
- split: FreeForm_429
path: data/FreeForm_429-*
- split: FreeForm_430
path: data/FreeForm_430-*
- split: FreeForm_431
path: data/FreeForm_431-*
- split: FreeForm_432
path: data/FreeForm_432-*
- split: FreeForm_433
path: data/FreeForm_433-*
- split: FreeForm_434
path: data/FreeForm_434-*
- split: FreeForm_435
path: data/FreeForm_435-*
- split: FreeForm_436
path: data/FreeForm_436-*
- split: FreeForm_437
path: data/FreeForm_437-*
- split: FreeForm_438
path: data/FreeForm_438-*
- split: FreeForm_439
path: data/FreeForm_439-*
- split: FreeForm_440
path: data/FreeForm_440-*
- split: FreeForm_441
path: data/FreeForm_441-*
- split: FreeForm_442
path: data/FreeForm_442-*
- split: FreeForm_443
path: data/FreeForm_443-*
- split: FreeForm_444
path: data/FreeForm_444-*
- split: FreeForm_445
path: data/FreeForm_445-*
- split: FreeForm_446
path: data/FreeForm_446-*
- split: FreeForm_447
path: data/FreeForm_447-*
- split: FreeForm_448
path: data/FreeForm_448-*
- split: FreeForm_449
path: data/FreeForm_449-*
- split: FreeForm_450
path: data/FreeForm_450-*
- split: FreeForm_451
path: data/FreeForm_451-*
- split: FreeForm_452
path: data/FreeForm_452-*
- split: FreeForm_453
path: data/FreeForm_453-*
- split: FreeForm_454
path: data/FreeForm_454-*
- split: FreeForm_455
path: data/FreeForm_455-*
- split: FreeForm_456
path: data/FreeForm_456-*
- split: FreeForm_457
path: data/FreeForm_457-*
- split: FreeForm_458
path: data/FreeForm_458-*
- split: FreeForm_459
path: data/FreeForm_459-*
- split: FreeForm_460
path: data/FreeForm_460-*
- split: FreeForm_461
path: data/FreeForm_461-*
- split: FreeForm_462
path: data/FreeForm_462-*
- split: FreeForm_463
path: data/FreeForm_463-*
- split: FreeForm_464
path: data/FreeForm_464-*
- split: FreeForm_465
path: data/FreeForm_465-*
- split: FreeForm_466
path: data/FreeForm_466-*
- split: FreeForm_467
path: data/FreeForm_467-*
- split: FreeForm_468
path: data/FreeForm_468-*
- split: FreeForm_469
path: data/FreeForm_469-*
- split: FreeForm_470
path: data/FreeForm_470-*
- split: FreeForm_471
path: data/FreeForm_471-*
- split: FreeForm_472
path: data/FreeForm_472-*
- split: FreeForm_473
path: data/FreeForm_473-*
- split: FreeForm_474
path: data/FreeForm_474-*
- split: FreeForm_475
path: data/FreeForm_475-*
- split: FreeForm_476
path: data/FreeForm_476-*
- split: FreeForm_477
path: data/FreeForm_477-*
- split: FreeForm_478
path: data/FreeForm_478-*
- split: FreeForm_479
path: data/FreeForm_479-*
- split: FreeForm_480
path: data/FreeForm_480-*
- split: FreeForm_481
path: data/FreeForm_481-*
- split: FreeForm_482
path: data/FreeForm_482-*
- split: FreeForm_483
path: data/FreeForm_483-*
- split: FreeForm_484
path: data/FreeForm_484-*
- split: FreeForm_485
path: data/FreeForm_485-*
- split: FreeForm_486
path: data/FreeForm_486-*
- split: FreeForm_487
path: data/FreeForm_487-*
- split: FreeForm_488
path: data/FreeForm_488-*
- split: FreeForm_489
path: data/FreeForm_489-*
- split: FreeForm_490
path: data/FreeForm_490-*
- split: FreeForm_491
path: data/FreeForm_491-*
- split: FreeForm_492
path: data/FreeForm_492-*
- split: FreeForm_493
path: data/FreeForm_493-*
- split: FreeForm_494
path: data/FreeForm_494-*
- split: FreeForm_495
path: data/FreeForm_495-*
- split: FreeForm_496
path: data/FreeForm_496-*
- split: FreeForm_497
path: data/FreeForm_497-*
- split: FreeForm_498
path: data/FreeForm_498-*
- split: FreeForm_499
path: data/FreeForm_499-*
- split: FreeForm_500
path: data/FreeForm_500-*
- split: FreeForm_501
path: data/FreeForm_501-*
- split: FreeForm_502
path: data/FreeForm_502-*
- split: FreeForm_503
path: data/FreeForm_503-*
- split: FreeForm_504
path: data/FreeForm_504-*
- split: FreeForm_505
path: data/FreeForm_505-*
- split: FreeForm_506
path: data/FreeForm_506-*
- split: FreeForm_507
path: data/FreeForm_507-*
- split: FreeForm_508
path: data/FreeForm_508-*
- split: FreeForm_509
path: data/FreeForm_509-*
- split: FreeForm_510
path: data/FreeForm_510-*
- split: FreeForm_511
path: data/FreeForm_511-*
- split: FreeForm_512
path: data/FreeForm_512-*
- split: FreeForm_513
path: data/FreeForm_513-*
- split: FreeForm_514
path: data/FreeForm_514-*
- split: FreeForm_515
path: data/FreeForm_515-*
- split: FreeForm_945
path: data/FreeForm_945-*
- split: FreeForm_819
path: data/FreeForm_819-*
- split: FreeForm_756
path: data/FreeForm_756-*
- split: FreeForm_693
path: data/FreeForm_693-*
- split: FreeForm_567
path: data/FreeForm_567-*
- split: FreeForm_516
path: data/FreeForm_516-*
- split: FreeForm_630
path: data/FreeForm_630-*
- split: FreeForm_694
path: data/FreeForm_694-*
- split: FreeForm_757
path: data/FreeForm_757-*
- split: FreeForm_882
path: data/FreeForm_882-*
- split: FreeForm_517
path: data/FreeForm_517-*
- split: FreeForm_568
path: data/FreeForm_568-*
- split: FreeForm_695
path: data/FreeForm_695-*
- split: FreeForm_883
path: data/FreeForm_883-*
- split: FreeForm_946
path: data/FreeForm_946-*
- split: FreeForm_758
path: data/FreeForm_758-*
- split: FreeForm_820
path: data/FreeForm_820-*
- split: FreeForm_518
path: data/FreeForm_518-*
- split: FreeForm_696
path: data/FreeForm_696-*
- split: FreeForm_631
path: data/FreeForm_631-*
- split: FreeForm_884
path: data/FreeForm_884-*
- split: FreeForm_947
path: data/FreeForm_947-*
- split: FreeForm_570
path: data/FreeForm_570-*
- split: FreeForm_759
path: data/FreeForm_759-*
- split: FreeForm_519
path: data/FreeForm_519-*
- split: FreeForm_821
path: data/FreeForm_821-*
- split: FreeForm_697
path: data/FreeForm_697-*
- split: FreeForm_885
path: data/FreeForm_885-*
- split: FreeForm_520
path: data/FreeForm_520-*
- split: FreeForm_632
path: data/FreeForm_632-*
- split: FreeForm_760
path: data/FreeForm_760-*
- split: FreeForm_571
path: data/FreeForm_571-*
- split: FreeForm_948
path: data/FreeForm_948-*
- split: FreeForm_886
path: data/FreeForm_886-*
- split: FreeForm_822
path: data/FreeForm_822-*
- split: FreeForm_698
path: data/FreeForm_698-*
- split: FreeForm_521
path: data/FreeForm_521-*
- split: FreeForm_761
path: data/FreeForm_761-*
- split: FreeForm_633
path: data/FreeForm_633-*
- split: FreeForm_949
path: data/FreeForm_949-*
- split: FreeForm_823
path: data/FreeForm_823-*
- split: FreeForm_572
path: data/FreeForm_572-*
- split: FreeForm_699
path: data/FreeForm_699-*
- split: FreeForm_522
path: data/FreeForm_522-*
- split: FreeForm_762
path: data/FreeForm_762-*
- split: FreeForm_950
path: data/FreeForm_950-*
- split: FreeForm_824
path: data/FreeForm_824-*
- split: FreeForm_700
path: data/FreeForm_700-*
- split: FreeForm_523
path: data/FreeForm_523-*
- split: FreeForm_634
path: data/FreeForm_634-*
- split: FreeForm_763
path: data/FreeForm_763-*
- split: FreeForm_951
path: data/FreeForm_951-*
- split: FreeForm_889
path: data/FreeForm_889-*
- split: FreeForm_701
path: data/FreeForm_701-*
- split: FreeForm_635
path: data/FreeForm_635-*
- split: FreeForm_764
path: data/FreeForm_764-*
- split: FreeForm_952
path: data/FreeForm_952-*
- split: FreeForm_525
path: data/FreeForm_525-*
- split: FreeForm_890
path: data/FreeForm_890-*
- split: FreeForm_636
path: data/FreeForm_636-*
- split: FreeForm_826
path: data/FreeForm_826-*
- split: FreeForm_765
path: data/FreeForm_765-*
- split: FreeForm_953
path: data/FreeForm_953-*
- split: FreeForm_526
path: data/FreeForm_526-*
- split: FreeForm_576
path: data/FreeForm_576-*
- split: FreeForm_637
path: data/FreeForm_637-*
- split: FreeForm_891
path: data/FreeForm_891-*
- split: FreeForm_703
path: data/FreeForm_703-*
- split: FreeForm_527
path: data/FreeForm_527-*
- split: FreeForm_704
path: data/FreeForm_704-*
- split: FreeForm_577
path: data/FreeForm_577-*
- split: FreeForm_828
path: data/FreeForm_828-*
- split: FreeForm_767
path: data/FreeForm_767-*
- split: FreeForm_892
path: data/FreeForm_892-*
- split: FreeForm_955
path: data/FreeForm_955-*
- split: FreeForm_528
path: data/FreeForm_528-*
- split: FreeForm_705
path: data/FreeForm_705-*
- split: FreeForm_768
path: data/FreeForm_768-*
- split: FreeForm_829
path: data/FreeForm_829-*
- split: FreeForm_639
path: data/FreeForm_639-*
- split: FreeForm_893
path: data/FreeForm_893-*
- split: FreeForm_706
path: data/FreeForm_706-*
- split: FreeForm_769
path: data/FreeForm_769-*
- split: FreeForm_640
path: data/FreeForm_640-*
- split: FreeForm_830
path: data/FreeForm_830-*
- split: FreeForm_894
path: data/FreeForm_894-*
- split: FreeForm_957
path: data/FreeForm_957-*
- split: FreeForm_707
path: data/FreeForm_707-*
- split: FreeForm_530
path: data/FreeForm_530-*
- split: FreeForm_770
path: data/FreeForm_770-*
- split: FreeForm_641
path: data/FreeForm_641-*
- split: FreeForm_831
path: data/FreeForm_831-*
- split: FreeForm_958
path: data/FreeForm_958-*
- split: FreeForm_895
path: data/FreeForm_895-*
- split: FreeForm_578
path: data/FreeForm_578-*
- split: FreeForm_642
path: data/FreeForm_642-*
- split: FreeForm_832
path: data/FreeForm_832-*
- split: FreeForm_959
path: data/FreeForm_959-*
- split: FreeForm_896
path: data/FreeForm_896-*
- split: FreeForm_532
path: data/FreeForm_532-*
- split: FreeForm_579
path: data/FreeForm_579-*
- split: FreeForm_772
path: data/FreeForm_772-*
- split: FreeForm_897
path: data/FreeForm_897-*
- split: FreeForm_833
path: data/FreeForm_833-*
- split: FreeForm_533
path: data/FreeForm_533-*
- split: FreeForm_580
path: data/FreeForm_580-*
- split: FreeForm_644
path: data/FreeForm_644-*
- split: FreeForm_898
path: data/FreeForm_898-*
- split: FreeForm_834
path: data/FreeForm_834-*
- split: FreeForm_534
path: data/FreeForm_534-*
- split: FreeForm_774
path: data/FreeForm_774-*
- split: FreeForm_962
path: data/FreeForm_962-*
- split: FreeForm_835
path: data/FreeForm_835-*
- split: FreeForm_899
path: data/FreeForm_899-*
- split: FreeForm_581
path: data/FreeForm_581-*
- split: FreeForm_645
path: data/FreeForm_645-*
- split: FreeForm_535
path: data/FreeForm_535-*
- split: FreeForm_711
path: data/FreeForm_711-*
- split: FreeForm_775
path: data/FreeForm_775-*
- split: FreeForm_536
path: data/FreeForm_536-*
- split: FreeForm_836
path: data/FreeForm_836-*
- split: FreeForm_963
path: data/FreeForm_963-*
- split: FreeForm_900
path: data/FreeForm_900-*
- split: FreeForm_582
path: data/FreeForm_582-*
- split: FreeForm_537
path: data/FreeForm_537-*
- split: FreeForm_647
path: data/FreeForm_647-*
- split: FreeForm_837
path: data/FreeForm_837-*
- split: FreeForm_964
path: data/FreeForm_964-*
- split: FreeForm_583
path: data/FreeForm_583-*
- split: FreeForm_648
path: data/FreeForm_648-*
- split: FreeForm_714
path: data/FreeForm_714-*
- split: FreeForm_902
path: data/FreeForm_902-*
- split: FreeForm_966
path: data/FreeForm_966-*
- split: FreeForm_839
path: data/FreeForm_839-*
- split: FreeForm_840
path: data/FreeForm_840-*
- split: FreeForm_780
path: data/FreeForm_780-*
- split: FreeForm_905
path: data/FreeForm_905-*
- split: FreeForm_781
path: data/FreeForm_781-*
- split: FreeForm_542
path: data/FreeForm_542-*
- split: FreeForm_717
path: data/FreeForm_717-*
- split: FreeForm_587
path: data/FreeForm_587-*
- split: FreeForm_906
path: data/FreeForm_906-*
- split: FreeForm_782
path: data/FreeForm_782-*
- split: FreeForm_543
path: data/FreeForm_543-*
- split: FreeForm_970
path: data/FreeForm_970-*
- split: FreeForm_653
path: data/FreeForm_653-*
- split: FreeForm_907
path: data/FreeForm_907-*
- split: FreeForm_843
path: data/FreeForm_843-*
- split: FreeForm_588
path: data/FreeForm_588-*
- split: FreeForm_718
path: data/FreeForm_718-*
- split: FreeForm_783
path: data/FreeForm_783-*
- split: FreeForm_544
path: data/FreeForm_544-*
- split: FreeForm_971
path: data/FreeForm_971-*
- split: FreeForm_908
path: data/FreeForm_908-*
- split: FreeForm_654
path: data/FreeForm_654-*
- split: FreeForm_844
path: data/FreeForm_844-*
- split: FreeForm_719
path: data/FreeForm_719-*
- split: FreeForm_784
path: data/FreeForm_784-*
- split: FreeForm_545
path: data/FreeForm_545-*
- split: FreeForm_972
path: data/FreeForm_972-*
- split: FreeForm_909
path: data/FreeForm_909-*
- split: FreeForm_845
path: data/FreeForm_845-*
- split: FreeForm_785
path: data/FreeForm_785-*
- split: FreeForm_546
path: data/FreeForm_546-*
- split: FreeForm_656
path: data/FreeForm_656-*
- split: FreeForm_973
path: data/FreeForm_973-*
- split: FreeForm_547
path: data/FreeForm_547-*
- split: FreeForm_592
path: data/FreeForm_592-*
- split: FreeForm_657
path: data/FreeForm_657-*
- split: FreeForm_787
path: data/FreeForm_787-*
- split: FreeForm_847
path: data/FreeForm_847-*
- split: FreeForm_593
path: data/FreeForm_593-*
- split: FreeForm_848
path: data/FreeForm_848-*
- split: FreeForm_788
path: data/FreeForm_788-*
- split: FreeForm_723
path: data/FreeForm_723-*
- split: FreeForm_659
path: data/FreeForm_659-*
- split: FreeForm_849
path: data/FreeForm_849-*
- split: FreeForm_594
path: data/FreeForm_594-*
- split: FreeForm_789
path: data/FreeForm_789-*
- split: FreeForm_913
path: data/FreeForm_913-*
- split: FreeForm_660
path: data/FreeForm_660-*
- split: FreeForm_595
path: data/FreeForm_595-*
- split: FreeForm_790
path: data/FreeForm_790-*
- split: FreeForm_977
path: data/FreeForm_977-*
- split: FreeForm_914
path: data/FreeForm_914-*
- split: FreeForm_851
path: data/FreeForm_851-*
- split: FreeForm_552
path: data/FreeForm_552-*
- split: FreeForm_597
path: data/FreeForm_597-*
- split: FreeForm_852
path: data/FreeForm_852-*
- split: FreeForm_662
path: data/FreeForm_662-*
- split: FreeForm_726
path: data/FreeForm_726-*
- split: FreeForm_553
path: data/FreeForm_553-*
- split: FreeForm_598
path: data/FreeForm_598-*
- split: FreeForm_853
path: data/FreeForm_853-*
- split: FreeForm_916
path: data/FreeForm_916-*
- split: FreeForm_663
path: data/FreeForm_663-*
- split: FreeForm_979
path: data/FreeForm_979-*
- split: FreeForm_554
path: data/FreeForm_554-*
- split: FreeForm_555
path: data/FreeForm_555-*
- split: FreeForm_600
path: data/FreeForm_600-*
- split: FreeForm_556
path: data/FreeForm_556-*
- split: FreeForm_981
path: data/FreeForm_981-*
- split: FreeForm_918
path: data/FreeForm_918-*
- split: FreeForm_855
path: data/FreeForm_855-*
- split: FreeForm_601
path: data/FreeForm_601-*
- split: FreeForm_557
path: data/FreeForm_557-*
- split: FreeForm_982
path: data/FreeForm_982-*
- split: FreeForm_919
path: data/FreeForm_919-*
- split: FreeForm_666
path: data/FreeForm_666-*
- split: FreeForm_730
path: data/FreeForm_730-*
- split: FreeForm_558
path: data/FreeForm_558-*
- split: FreeForm_796
path: data/FreeForm_796-*
- split: FreeForm_920
path: data/FreeForm_920-*
- split: FreeForm_603
path: data/FreeForm_603-*
- split: FreeForm_797
path: data/FreeForm_797-*
- split: FreeForm_560
path: data/FreeForm_560-*
- split: FreeForm_798
path: data/FreeForm_798-*
- split: FreeForm_799
path: data/FreeForm_799-*
- split: FreeForm_605
path: data/FreeForm_605-*
- split: FreeForm_986
path: data/FreeForm_986-*
- split: FreeForm_987
path: data/FreeForm_987-*
- split: FreeForm_735
path: data/FreeForm_735-*
- split: FreeForm_924
path: data/FreeForm_924-*
- split: FreeForm_801
path: data/FreeForm_801-*
- split: FreeForm_988
path: data/FreeForm_988-*
- split: FreeForm_607
path: data/FreeForm_607-*
- split: FreeForm_736
path: data/FreeForm_736-*
- split: FreeForm_672
path: data/FreeForm_672-*
- split: FreeForm_925
path: data/FreeForm_925-*
- split: FreeForm_564
path: data/FreeForm_564-*
- split: FreeForm_608
path: data/FreeForm_608-*
- split: FreeForm_737
path: data/FreeForm_737-*
- split: FreeForm_673
path: data/FreeForm_673-*
- split: FreeForm_803
path: data/FreeForm_803-*
- split: FreeForm_926
path: data/FreeForm_926-*
- split: FreeForm_863
path: data/FreeForm_863-*
- split: FreeForm_738
path: data/FreeForm_738-*
- split: FreeForm_674
path: data/FreeForm_674-*
- split: FreeForm_804
path: data/FreeForm_804-*
- split: FreeForm_927
path: data/FreeForm_927-*
- split: FreeForm_864
path: data/FreeForm_864-*
- split: FreeForm_675
path: data/FreeForm_675-*
- split: FreeForm_805
path: data/FreeForm_805-*
- split: FreeForm_611
path: data/FreeForm_611-*
- split: FreeForm_928
path: data/FreeForm_928-*
- split: FreeForm_676
path: data/FreeForm_676-*
- split: FreeForm_865
path: data/FreeForm_865-*
- split: FreeForm_806
path: data/FreeForm_806-*
- split: FreeForm_929
path: data/FreeForm_929-*
- split: FreeForm_993
path: data/FreeForm_993-*
- split: FreeForm_866
path: data/FreeForm_866-*
- split: FreeForm_678
path: data/FreeForm_678-*
- split: FreeForm_930
path: data/FreeForm_930-*
- split: FreeForm_994
path: data/FreeForm_994-*
- split: FreeForm_867
path: data/FreeForm_867-*
- split: FreeForm_807
path: data/FreeForm_807-*
- split: FreeForm_1011
path: data/FreeForm_1011-*
- split: FreeForm_931
path: data/FreeForm_931-*
- split: FreeForm_808
path: data/FreeForm_808-*
- split: FreeForm_743
path: data/FreeForm_743-*
- split: FreeForm_995
path: data/FreeForm_995-*
- split: FreeForm_809
path: data/FreeForm_809-*
- split: FreeForm_1012
path: data/FreeForm_1012-*
- split: FreeForm_869
path: data/FreeForm_869-*
- split: FreeForm_810
path: data/FreeForm_810-*
- split: FreeForm_616
path: data/FreeForm_616-*
- split: FreeForm_870
path: data/FreeForm_870-*
- split: FreeForm_933
path: data/FreeForm_933-*
- split: FreeForm_811
path: data/FreeForm_811-*
- split: FreeForm_617
path: data/FreeForm_617-*
- split: FreeForm_1014
path: data/FreeForm_1014-*
- split: FreeForm_934
path: data/FreeForm_934-*
- split: FreeForm_871
path: data/FreeForm_871-*
- split: FreeForm_682
path: data/FreeForm_682-*
- split: FreeForm_812
path: data/FreeForm_812-*
- split: FreeForm_1015
path: data/FreeForm_1015-*
- split: FreeForm_747
path: data/FreeForm_747-*
- split: FreeForm_683
path: data/FreeForm_683-*
- split: FreeForm_872
path: data/FreeForm_872-*
- split: FreeForm_1016
path: data/FreeForm_1016-*
- split: FreeForm_619
path: data/FreeForm_619-*
- split: FreeForm_748
path: data/FreeForm_748-*
- split: FreeForm_996
path: data/FreeForm_996-*
- split: FreeForm_936
path: data/FreeForm_936-*
- split: FreeForm_873
path: data/FreeForm_873-*
- split: FreeForm_814
path: data/FreeForm_814-*
- split: FreeForm_620
path: data/FreeForm_620-*
- split: FreeForm_937
path: data/FreeForm_937-*
- split: FreeForm_874
path: data/FreeForm_874-*
- split: FreeForm_815
path: data/FreeForm_815-*
- split: FreeForm_685
path: data/FreeForm_685-*
- split: FreeForm_750
path: data/FreeForm_750-*
- split: FreeForm_998
path: data/FreeForm_998-*
- split: FreeForm_938
path: data/FreeForm_938-*
- split: FreeForm_816
path: data/FreeForm_816-*
- split: FreeForm_622
path: data/FreeForm_622-*
- split: FreeForm_751
path: data/FreeForm_751-*
- split: FreeForm_876
path: data/FreeForm_876-*
- split: FreeForm_939
path: data/FreeForm_939-*
- split: FreeForm_817
path: data/FreeForm_817-*
- split: FreeForm_752
path: data/FreeForm_752-*
- split: FreeForm_1020
path: data/FreeForm_1020-*
- split: FreeForm_624
path: data/FreeForm_624-*
- split: FreeForm_1001
path: data/FreeForm_1001-*
- split: FreeForm_1071
path: data/FreeForm_1071-*
- split: FreeForm_1072
path: data/FreeForm_1072-*
- split: FreeForm_1022
path: data/FreeForm_1022-*
- split: FreeForm_755
path: data/FreeForm_755-*
- split: FreeForm_626
path: data/FreeForm_626-*
- split: FreeForm_690
path: data/FreeForm_690-*
- split: FreeForm_1003
path: data/FreeForm_1003-*
- split: FreeForm_1023
path: data/FreeForm_1023-*
- split: FreeForm_880
path: data/FreeForm_880-*
- split: FreeForm_627
path: data/FreeForm_627-*
- split: FreeForm_1004
path: data/FreeForm_1004-*
- split: FreeForm_1074
path: data/FreeForm_1074-*
- split: FreeForm_1024
path: data/FreeForm_1024-*
- split: FreeForm_944
path: data/FreeForm_944-*
- split: FreeForm_881
path: data/FreeForm_881-*
- split: FreeForm_1135
path: data/FreeForm_1135-*
- split: FreeForm_692
path: data/FreeForm_692-*
- split: FreeForm_1075
path: data/FreeForm_1075-*
- split: FreeForm_1025
path: data/FreeForm_1025-*
- split: FreeForm_1197
path: data/FreeForm_1197-*
- split: FreeForm_1260
path: data/FreeForm_1260-*
- split: FreeForm_629
path: data/FreeForm_629-*
- split: FreeForm_1136
path: data/FreeForm_1136-*
- split: FreeForm_1006
path: data/FreeForm_1006-*
- split: FreeForm_1261
path: data/FreeForm_1261-*
- split: FreeForm_1198
path: data/FreeForm_1198-*
- split: FreeForm_1386
path: data/FreeForm_1386-*
- split: FreeForm_1137
path: data/FreeForm_1137-*
- split: FreeForm_1007
path: data/FreeForm_1007-*
- split: FreeForm_1077
path: data/FreeForm_1077-*
- split: FreeForm_1262
path: data/FreeForm_1262-*
- split: FreeForm_1324
path: data/FreeForm_1324-*
- split: FreeForm_1387
path: data/FreeForm_1387-*
- split: FreeForm_1138
path: data/FreeForm_1138-*
- split: FreeForm_1449
path: data/FreeForm_1449-*
- split: FreeForm_1200
path: data/FreeForm_1200-*
- split: FreeForm_1388
path: data/FreeForm_1388-*
- split: FreeForm_1078
path: data/FreeForm_1078-*
- split: FreeForm_1139
path: data/FreeForm_1139-*
- split: FreeForm_1450
path: data/FreeForm_1450-*
- split: FreeForm_1326
path: data/FreeForm_1326-*
- split: FreeForm_1201
path: data/FreeForm_1201-*
- split: FreeForm_1389
path: data/FreeForm_1389-*
- split: FreeForm_1264
path: data/FreeForm_1264-*
- split: FreeForm_1140
path: data/FreeForm_1140-*
- split: FreeForm_1451
path: data/FreeForm_1451-*
- split: FreeForm_1327
path: data/FreeForm_1327-*
- split: FreeForm_1202
path: data/FreeForm_1202-*
- split: FreeForm_1030
path: data/FreeForm_1030-*
- split: FreeForm_1390
path: data/FreeForm_1390-*
- split: FreeForm_1080
path: data/FreeForm_1080-*
- split: FreeForm_1141
path: data/FreeForm_1141-*
- split: FreeForm_1452
path: data/FreeForm_1452-*
- split: FreeForm_1328
path: data/FreeForm_1328-*
- split: FreeForm_1203
path: data/FreeForm_1203-*
- split: FreeForm_1391
path: data/FreeForm_1391-*
- split: FreeForm_1142
path: data/FreeForm_1142-*
- split: FreeForm_1329
path: data/FreeForm_1329-*
- split: FreeForm_1032
path: data/FreeForm_1032-*
- split: FreeForm_1392
path: data/FreeForm_1392-*
- split: FreeForm_1143
path: data/FreeForm_1143-*
- split: FreeForm_1266
path: data/FreeForm_1266-*
- split: FreeForm_1454
path: data/FreeForm_1454-*
- split: FreeForm_1205
path: data/FreeForm_1205-*
- split: FreeForm_1033
path: data/FreeForm_1033-*
- split: FreeForm_1331
path: data/FreeForm_1331-*
- split: FreeForm_1455
path: data/FreeForm_1455-*
- split: FreeForm_1084
path: data/FreeForm_1084-*
- split: FreeForm_1394
path: data/FreeForm_1394-*
- split: FreeForm_1034
path: data/FreeForm_1034-*
- split: FreeForm_1332
path: data/FreeForm_1332-*
- split: FreeForm_1456
path: data/FreeForm_1456-*
- split: FreeForm_1268
path: data/FreeForm_1268-*
- split: FreeForm_1207
path: data/FreeForm_1207-*
- split: FreeForm_1395
path: data/FreeForm_1395-*
- split: FreeForm_1035
path: data/FreeForm_1035-*
- split: FreeForm_1333
path: data/FreeForm_1333-*
- split: FreeForm_1457
path: data/FreeForm_1457-*
- split: FreeForm_1086
path: data/FreeForm_1086-*
- split: FreeForm_1147
path: data/FreeForm_1147-*
- split: FreeForm_1396
path: data/FreeForm_1396-*
- split: FreeForm_1334
path: data/FreeForm_1334-*
- split: FreeForm_1458
path: data/FreeForm_1458-*
- split: FreeForm_1087
path: data/FreeForm_1087-*
- split: FreeForm_1148
path: data/FreeForm_1148-*
- split: FreeForm_1397
path: data/FreeForm_1397-*
- split: FreeForm_1335
path: data/FreeForm_1335-*
- split: FreeForm_1459
path: data/FreeForm_1459-*
- split: FreeForm_1271
path: data/FreeForm_1271-*
- split: FreeForm_1149
path: data/FreeForm_1149-*
- split: FreeForm_1210
path: data/FreeForm_1210-*
- split: FreeForm_1150
path: data/FreeForm_1150-*
- split: FreeForm_1272
path: data/FreeForm_1272-*
- split: FreeForm_1461
path: data/FreeForm_1461-*
- split: FreeForm_1151
path: data/FreeForm_1151-*
- split: FreeForm_1273
path: data/FreeForm_1273-*
- split: FreeForm_1212
path: data/FreeForm_1212-*
- split: FreeForm_1090
path: data/FreeForm_1090-*
- split: FreeForm_1400
path: data/FreeForm_1400-*
- split: FreeForm_1152
path: data/FreeForm_1152-*
- split: FreeForm_1274
path: data/FreeForm_1274-*
- split: FreeForm_1091
path: data/FreeForm_1091-*
- split: FreeForm_1401
path: data/FreeForm_1401-*
- split: FreeForm_1153
path: data/FreeForm_1153-*
- split: FreeForm_1275
path: data/FreeForm_1275-*
- split: FreeForm_1214
path: data/FreeForm_1214-*
- split: FreeForm_1464
path: data/FreeForm_1464-*
- split: FreeForm_1340
path: data/FreeForm_1340-*
- split: FreeForm_1043
path: data/FreeForm_1043-*
- split: FreeForm_1276
path: data/FreeForm_1276-*
- split: FreeForm_1403
path: data/FreeForm_1403-*
- split: FreeForm_1215
path: data/FreeForm_1215-*
- split: FreeForm_1093
path: data/FreeForm_1093-*
- split: FreeForm_1044
path: data/FreeForm_1044-*
- split: FreeForm_1277
path: data/FreeForm_1277-*
- split: FreeForm_1216
path: data/FreeForm_1216-*
- split: FreeForm_1094
path: data/FreeForm_1094-*
- split: FreeForm_1278
path: data/FreeForm_1278-*
- split: FreeForm_1217
path: data/FreeForm_1217-*
- split: FreeForm_1405
path: data/FreeForm_1405-*
- split: FreeForm_1467
path: data/FreeForm_1467-*
- split: FreeForm_1157
path: data/FreeForm_1157-*
- split: FreeForm_1406
path: data/FreeForm_1406-*
- split: FreeForm_1343
path: data/FreeForm_1343-*
- split: FreeForm_1218
path: data/FreeForm_1218-*
- split: FreeForm_1468
path: data/FreeForm_1468-*
- split: FreeForm_1158
path: data/FreeForm_1158-*
- split: FreeForm_1407
path: data/FreeForm_1407-*
- split: FreeForm_1344
path: data/FreeForm_1344-*
- split: FreeForm_1047
path: data/FreeForm_1047-*
- split: FreeForm_1219
path: data/FreeForm_1219-*
- split: FreeForm_1469
path: data/FreeForm_1469-*
- split: FreeForm_1345
path: data/FreeForm_1345-*
- split: FreeForm_1281
path: data/FreeForm_1281-*
- split: FreeForm_1220
path: data/FreeForm_1220-*
- split: FreeForm_1048
path: data/FreeForm_1048-*
- split: FreeForm_1098
path: data/FreeForm_1098-*
- split: FreeForm_1160
path: data/FreeForm_1160-*
- split: FreeForm_1346
path: data/FreeForm_1346-*
- split: FreeForm_1282
path: data/FreeForm_1282-*
- split: FreeForm_1471
path: data/FreeForm_1471-*
- split: FreeForm_1410
path: data/FreeForm_1410-*
- split: FreeForm_1472
path: data/FreeForm_1472-*
- split: FreeForm_1284
path: data/FreeForm_1284-*
- split: FreeForm_1348
path: data/FreeForm_1348-*
- split: FreeForm_1223
path: data/FreeForm_1223-*
- split: FreeForm_1163
path: data/FreeForm_1163-*
- split: FreeForm_1473
path: data/FreeForm_1473-*
- split: FreeForm_1285
path: data/FreeForm_1285-*
- split: FreeForm_1349
path: data/FreeForm_1349-*
- split: FreeForm_1101
path: data/FreeForm_1101-*
- split: FreeForm_1224
path: data/FreeForm_1224-*
- split: FreeForm_1164
path: data/FreeForm_1164-*
- split: FreeForm_1413
path: data/FreeForm_1413-*
- split: FreeForm_1225
path: data/FreeForm_1225-*
- split: FreeForm_1286
path: data/FreeForm_1286-*
- split: FreeForm_1165
path: data/FreeForm_1165-*
- split: FreeForm_1414
path: data/FreeForm_1414-*
- split: FreeForm_1053
path: data/FreeForm_1053-*
- split: FreeForm_1287
path: data/FreeForm_1287-*
- split: FreeForm_1351
path: data/FreeForm_1351-*
- split: FreeForm_1166
path: data/FreeForm_1166-*
- split: FreeForm_1415
path: data/FreeForm_1415-*
- split: FreeForm_1227
path: data/FreeForm_1227-*
- split: FreeForm_1054
path: data/FreeForm_1054-*
- split: FreeForm_1167
path: data/FreeForm_1167-*
- split: FreeForm_1288
path: data/FreeForm_1288-*
- split: FreeForm_1476
path: data/FreeForm_1476-*
- split: FreeForm_1416
path: data/FreeForm_1416-*
- split: FreeForm_1228
path: data/FreeForm_1228-*
- split: FreeForm_1168
path: data/FreeForm_1168-*
- split: FreeForm_1353
path: data/FreeForm_1353-*
- split: FreeForm_1477
path: data/FreeForm_1477-*
- split: FreeForm_1105
path: data/FreeForm_1105-*
- split: FreeForm_1417
path: data/FreeForm_1417-*
- split: FreeForm_1229
path: data/FreeForm_1229-*
- split: FreeForm_1056
path: data/FreeForm_1056-*
- split: FreeForm_1354
path: data/FreeForm_1354-*
- split: FreeForm_1230
path: data/FreeForm_1230-*
- split: FreeForm_1057
path: data/FreeForm_1057-*
- split: FreeForm_1170
path: data/FreeForm_1170-*
- split: FreeForm_1291
path: data/FreeForm_1291-*
- split: FreeForm_1107
path: data/FreeForm_1107-*
- split: FreeForm_1419
path: data/FreeForm_1419-*
- split: FreeForm_1479
path: data/FreeForm_1479-*
- split: FreeForm_1231
path: data/FreeForm_1231-*
- split: FreeForm_1058
path: data/FreeForm_1058-*
- split: FreeForm_1171
path: data/FreeForm_1171-*
- split: FreeForm_1420
path: data/FreeForm_1420-*
- split: FreeForm_1232
path: data/FreeForm_1232-*
- split: FreeForm_1059
path: data/FreeForm_1059-*
- split: FreeForm_1293
path: data/FreeForm_1293-*
- split: FreeForm_1357
path: data/FreeForm_1357-*
- split: FreeForm_1481
path: data/FreeForm_1481-*
- split: FreeForm_1060
path: data/FreeForm_1060-*
- split: FreeForm_1294
path: data/FreeForm_1294-*
- split: FreeForm_1173
path: data/FreeForm_1173-*
- split: FreeForm_1358
path: data/FreeForm_1358-*
- split: FreeForm_1061
path: data/FreeForm_1061-*
- split: FreeForm_1234
path: data/FreeForm_1234-*
- split: FreeForm_1295
path: data/FreeForm_1295-*
- split: FreeForm_1359
path: data/FreeForm_1359-*
- split: FreeForm_1062
path: data/FreeForm_1062-*
- split: FreeForm_1296
path: data/FreeForm_1296-*
- split: FreeForm_1297
path: data/FreeForm_1297-*
- split: FreeForm_1112
path: data/FreeForm_1112-*
- split: FreeForm_1484
path: data/FreeForm_1484-*
- split: FreeForm_1064
path: data/FreeForm_1064-*
- split: FreeForm_1298
path: data/FreeForm_1298-*
- split: FreeForm_1113
path: data/FreeForm_1113-*
- split: FreeForm_1177
path: data/FreeForm_1177-*
- split: FreeForm_1362
path: data/FreeForm_1362-*
- split: FreeForm_1485
path: data/FreeForm_1485-*
- split: FreeForm_1363
path: data/FreeForm_1363-*
- split: FreeForm_1238
path: data/FreeForm_1238-*
- split: FreeForm_1066
path: data/FreeForm_1066-*
- split: FreeForm_1364
path: data/FreeForm_1364-*
- split: FreeForm_1300
path: data/FreeForm_1300-*
- split: FreeForm_1179
path: data/FreeForm_1179-*
- split: FreeForm_1365
path: data/FreeForm_1365-*
- split: FreeForm_1301
path: data/FreeForm_1301-*
- split: FreeForm_1180
path: data/FreeForm_1180-*
- split: FreeForm_1068
path: data/FreeForm_1068-*
- split: FreeForm_1116
path: data/FreeForm_1116-*
- split: FreeForm_1423
path: data/FreeForm_1423-*
- split: FreeForm_1366
path: data/FreeForm_1366-*
- split: FreeForm_1118
path: data/FreeForm_1118-*
- split: FreeForm_1242
path: data/FreeForm_1242-*
- split: FreeForm_1368
path: data/FreeForm_1368-*
- split: FreeForm_1183
path: data/FreeForm_1183-*
- split: FreeForm_1304
path: data/FreeForm_1304-*
- split: FreeForm_1490
path: data/FreeForm_1490-*
- split: FreeForm_1512
path: data/FreeForm_1512-*
- split: FreeForm_1244
path: data/FreeForm_1244-*
- split: FreeForm_1120
path: data/FreeForm_1120-*
- split: FreeForm_1370
path: data/FreeForm_1370-*
- split: FreeForm_1492
path: data/FreeForm_1492-*
- split: FreeForm_1245
path: data/FreeForm_1245-*
- split: FreeForm_1493
path: data/FreeForm_1493-*
- split: FreeForm_1307
path: data/FreeForm_1307-*
- split: FreeForm_1515
path: data/FreeForm_1515-*
- split: FreeForm_1246
path: data/FreeForm_1246-*
- split: FreeForm_1372
path: data/FreeForm_1372-*
- split: FreeForm_1122
path: data/FreeForm_1122-*
- split: FreeForm_1494
path: data/FreeForm_1494-*
- split: FreeForm_1516
path: data/FreeForm_1516-*
- split: FreeForm_1247
path: data/FreeForm_1247-*
- split: FreeForm_1373
path: data/FreeForm_1373-*
- split: FreeForm_1123
path: data/FreeForm_1123-*
- split: FreeForm_1424
path: data/FreeForm_1424-*
- split: FreeForm_1495
path: data/FreeForm_1495-*
- split: FreeForm_1188
path: data/FreeForm_1188-*
- split: FreeForm_1517
path: data/FreeForm_1517-*
- split: FreeForm_1124
path: data/FreeForm_1124-*
- split: FreeForm_1496
path: data/FreeForm_1496-*
- split: FreeForm_1189
path: data/FreeForm_1189-*
- split: FreeForm_1518
path: data/FreeForm_1518-*
- split: FreeForm_1375
path: data/FreeForm_1375-*
- split: FreeForm_1249
path: data/FreeForm_1249-*
- split: FreeForm_1125
path: data/FreeForm_1125-*
- split: FreeForm_1190
path: data/FreeForm_1190-*
- split: FreeForm_1519
path: data/FreeForm_1519-*
- split: FreeForm_1376
path: data/FreeForm_1376-*
- split: FreeForm_1250
path: data/FreeForm_1250-*
- split: FreeForm_1126
path: data/FreeForm_1126-*
- split: FreeForm_1520
path: data/FreeForm_1520-*
- split: FreeForm_1312
path: data/FreeForm_1312-*
- split: FreeForm_1498
path: data/FreeForm_1498-*
- split: FreeForm_1377
path: data/FreeForm_1377-*
- split: FreeForm_1251
path: data/FreeForm_1251-*
- split: FreeForm_1127
path: data/FreeForm_1127-*
- split: FreeForm_1521
path: data/FreeForm_1521-*
- split: FreeForm_1313
path: data/FreeForm_1313-*
- split: FreeForm_1378
path: data/FreeForm_1378-*
- split: FreeForm_1128
path: data/FreeForm_1128-*
- split: FreeForm_1522
path: data/FreeForm_1522-*
- split: FreeForm_1314
path: data/FreeForm_1314-*
- split: FreeForm_1523
path: data/FreeForm_1523-*
- split: FreeForm_1315
path: data/FreeForm_1315-*
- split: FreeForm_1380
path: data/FreeForm_1380-*
- split: FreeForm_1427
path: data/FreeForm_1427-*
- split: FreeForm_1524
path: data/FreeForm_1524-*
- split: FreeForm_1194
path: data/FreeForm_1194-*
- split: FreeForm_1381
path: data/FreeForm_1381-*
- split: FreeForm_1428
path: data/FreeForm_1428-*
- split: FreeForm_1255
path: data/FreeForm_1255-*
- split: FreeForm_1525
path: data/FreeForm_1525-*
- split: FreeForm_1195
path: data/FreeForm_1195-*
- split: FreeForm_1429
path: data/FreeForm_1429-*
- split: FreeForm_1382
path: data/FreeForm_1382-*
- split: FreeForm_1256
path: data/FreeForm_1256-*
- split: FreeForm_1526
path: data/FreeForm_1526-*
- split: FreeForm_1196
path: data/FreeForm_1196-*
- split: FreeForm_1430
path: data/FreeForm_1430-*
- split: FreeForm_1383
path: data/FreeForm_1383-*
- split: FreeForm_1257
path: data/FreeForm_1257-*
- split: FreeForm_1318
path: data/FreeForm_1318-*
- split: FreeForm_1504
path: data/FreeForm_1504-*
- split: FreeForm_1431
path: data/FreeForm_1431-*
- split: FreeForm_1384
path: data/FreeForm_1384-*
- split: FreeForm_1258
path: data/FreeForm_1258-*
- split: FreeForm_1528
path: data/FreeForm_1528-*
- split: FreeForm_1319
path: data/FreeForm_1319-*
- split: FreeForm_1505
path: data/FreeForm_1505-*
- split: FreeForm_1576
path: data/FreeForm_1576-*
- split: FreeForm_1432
path: data/FreeForm_1432-*
- split: FreeForm_1385
path: data/FreeForm_1385-*
- split: FreeForm_1701
path: data/FreeForm_1701-*
- split: FreeForm_1639
path: data/FreeForm_1639-*
- split: FreeForm_1530
path: data/FreeForm_1530-*
- split: FreeForm_1321
path: data/FreeForm_1321-*
- split: FreeForm_1507
path: data/FreeForm_1507-*
- split: FreeForm_1702
path: data/FreeForm_1702-*
- split: FreeForm_1434
path: data/FreeForm_1434-*
- split: FreeForm_1640
path: data/FreeForm_1640-*
- split: FreeForm_1531
path: data/FreeForm_1531-*
- split: FreeForm_1508
path: data/FreeForm_1508-*
- split: FreeForm_1435
path: data/FreeForm_1435-*
- split: FreeForm_1766
path: data/FreeForm_1766-*
- split: FreeForm_1579
path: data/FreeForm_1579-*
- split: FreeForm_1641
path: data/FreeForm_1641-*
- split: FreeForm_1827
path: data/FreeForm_1827-*
- split: FreeForm_1436
path: data/FreeForm_1436-*
- split: FreeForm_1704
path: data/FreeForm_1704-*
- split: FreeForm_1642
path: data/FreeForm_1642-*
- split: FreeForm_1828
path: data/FreeForm_1828-*
- split: FreeForm_1437
path: data/FreeForm_1437-*
- split: FreeForm_1581
path: data/FreeForm_1581-*
- split: FreeForm_1643
path: data/FreeForm_1643-*
- split: FreeForm_1534
path: data/FreeForm_1534-*
- split: FreeForm_1511
path: data/FreeForm_1511-*
- split: FreeForm_1707
path: data/FreeForm_1707-*
- split: FreeForm_1583
path: data/FreeForm_1583-*
- split: FreeForm_1770
path: data/FreeForm_1770-*
- split: FreeForm_1536
path: data/FreeForm_1536-*
- split: FreeForm_1891
path: data/FreeForm_1891-*
- split: FreeForm_1645
path: data/FreeForm_1645-*
- split: FreeForm_1831
path: data/FreeForm_1831-*
- split: FreeForm_1585
path: data/FreeForm_1585-*
- split: FreeForm_1538
path: data/FreeForm_1538-*
- split: FreeForm_1893
path: data/FreeForm_1893-*
- split: FreeForm_1442
path: data/FreeForm_1442-*
- split: FreeForm_1586
path: data/FreeForm_1586-*
- split: FreeForm_1648
path: data/FreeForm_1648-*
- split: FreeForm_1711
path: data/FreeForm_1711-*
- split: FreeForm_1443
path: data/FreeForm_1443-*
- split: FreeForm_1773
path: data/FreeForm_1773-*
- split: FreeForm_1540
path: data/FreeForm_1540-*
- split: FreeForm_1649
path: data/FreeForm_1649-*
- split: FreeForm_1712
path: data/FreeForm_1712-*
- split: FreeForm_1895
path: data/FreeForm_1895-*
- split: FreeForm_1444
path: data/FreeForm_1444-*
- split: FreeForm_1774
path: data/FreeForm_1774-*
- split: FreeForm_1541
path: data/FreeForm_1541-*
- split: FreeForm_1835
path: data/FreeForm_1835-*
- split: FreeForm_1588
path: data/FreeForm_1588-*
- split: FreeForm_1445
path: data/FreeForm_1445-*
- split: FreeForm_1896
path: data/FreeForm_1896-*
- split: FreeForm_1542
path: data/FreeForm_1542-*
- split: FreeForm_1775
path: data/FreeForm_1775-*
- split: FreeForm_1589
path: data/FreeForm_1589-*
- split: FreeForm_1714
path: data/FreeForm_1714-*
- split: FreeForm_1897
path: data/FreeForm_1897-*
- split: FreeForm_1543
path: data/FreeForm_1543-*
- split: FreeForm_1590
path: data/FreeForm_1590-*
- split: FreeForm_1715
path: data/FreeForm_1715-*
- split: FreeForm_1447
path: data/FreeForm_1447-*
- split: FreeForm_1591
path: data/FreeForm_1591-*
- split: FreeForm_1544
path: data/FreeForm_1544-*
- split: FreeForm_1838
path: data/FreeForm_1838-*
- split: FreeForm_1716
path: data/FreeForm_1716-*
- split: FreeForm_1448
path: data/FreeForm_1448-*
- split: FreeForm_1545
path: data/FreeForm_1545-*
- split: FreeForm_1592
path: data/FreeForm_1592-*
- split: FreeForm_1717
path: data/FreeForm_1717-*
- split: FreeForm_1953
path: data/FreeForm_1953-*
- split: FreeForm_1900
path: data/FreeForm_1900-*
- split: FreeForm_1779
path: data/FreeForm_1779-*
- split: FreeForm_1954
path: data/FreeForm_1954-*
- split: FreeForm_1901
path: data/FreeForm_1901-*
- split: FreeForm_1594
path: data/FreeForm_1594-*
- split: FreeForm_1719
path: data/FreeForm_1719-*
- split: FreeForm_1841
path: data/FreeForm_1841-*
- split: FreeForm_1548
path: data/FreeForm_1548-*
- split: FreeForm_1595
path: data/FreeForm_1595-*
- split: FreeForm_1720
path: data/FreeForm_1720-*
- split: FreeForm_1842
path: data/FreeForm_1842-*
- split: FreeForm_1656
path: data/FreeForm_1656-*
- split: FreeForm_1781
path: data/FreeForm_1781-*
- split: FreeForm_1721
path: data/FreeForm_1721-*
- split: FreeForm_1657
path: data/FreeForm_1657-*
- split: FreeForm_1782
path: data/FreeForm_1782-*
- split: FreeForm_1904
path: data/FreeForm_1904-*
- split: FreeForm_1597
path: data/FreeForm_1597-*
- split: FreeForm_1844
path: data/FreeForm_1844-*
- split: FreeForm_1957
path: data/FreeForm_1957-*
- split: FreeForm_1551
path: data/FreeForm_1551-*
- split: FreeForm_1905
path: data/FreeForm_1905-*
- split: FreeForm_1598
path: data/FreeForm_1598-*
- split: FreeForm_1723
path: data/FreeForm_1723-*
- split: FreeForm_1659
path: data/FreeForm_1659-*
- split: FreeForm_1552
path: data/FreeForm_1552-*
- split: FreeForm_1784
path: data/FreeForm_1784-*
- split: FreeForm_1599
path: data/FreeForm_1599-*
- split: FreeForm_1724
path: data/FreeForm_1724-*
- split: FreeForm_1660
path: data/FreeForm_1660-*
- split: FreeForm_1725
path: data/FreeForm_1725-*
- split: FreeForm_1960
path: data/FreeForm_1960-*
- split: FreeForm_1661
path: data/FreeForm_1661-*
- split: FreeForm_1554
path: data/FreeForm_1554-*
- split: FreeForm_1847
path: data/FreeForm_1847-*
- split: FreeForm_1726
path: data/FreeForm_1726-*
- split: FreeForm_1601
path: data/FreeForm_1601-*
- split: FreeForm_1908
path: data/FreeForm_1908-*
- split: FreeForm_1662
path: data/FreeForm_1662-*
- split: FreeForm_1848
path: data/FreeForm_1848-*
- split: FreeForm_1602
path: data/FreeForm_1602-*
- split: FreeForm_1909
path: data/FreeForm_1909-*
- split: FreeForm_1603
path: data/FreeForm_1603-*
- split: FreeForm_1910
path: data/FreeForm_1910-*
- split: FreeForm_1557
path: data/FreeForm_1557-*
- split: FreeForm_1604
path: data/FreeForm_1604-*
- split: FreeForm_1789
path: data/FreeForm_1789-*
- split: FreeForm_1558
path: data/FreeForm_1558-*
- split: FreeForm_1665
path: data/FreeForm_1665-*
- split: FreeForm_1605
path: data/FreeForm_1605-*
- split: FreeForm_1852
path: data/FreeForm_1852-*
- split: FreeForm_1791
path: data/FreeForm_1791-*
- split: FreeForm_1667
path: data/FreeForm_1667-*
- split: FreeForm_1607
path: data/FreeForm_1607-*
- split: FreeForm_1913
path: data/FreeForm_1913-*
- split: FreeForm_1732
path: data/FreeForm_1732-*
- split: FreeForm_1669
path: data/FreeForm_1669-*
- split: FreeForm_1609
path: data/FreeForm_1609-*
- split: FreeForm_1562
path: data/FreeForm_1562-*
- split: FreeForm_1915
path: data/FreeForm_1915-*
- split: FreeForm_1968
path: data/FreeForm_1968-*
- split: FreeForm_1734
path: data/FreeForm_1734-*
- split: FreeForm_1855
path: data/FreeForm_1855-*
- split: FreeForm_1670
path: data/FreeForm_1670-*
- split: FreeForm_1610
path: data/FreeForm_1610-*
- split: FreeForm_1969
path: data/FreeForm_1969-*
- split: FreeForm_1795
path: data/FreeForm_1795-*
- split: FreeForm_1671
path: data/FreeForm_1671-*
- split: FreeForm_1611
path: data/FreeForm_1611-*
- split: FreeForm_1917
path: data/FreeForm_1917-*
- split: FreeForm_1564
path: data/FreeForm_1564-*
- split: FreeForm_1970
path: data/FreeForm_1970-*
- split: FreeForm_1796
path: data/FreeForm_1796-*
- split: FreeForm_1857
path: data/FreeForm_1857-*
- split: FreeForm_1672
path: data/FreeForm_1672-*
- split: FreeForm_1565
path: data/FreeForm_1565-*
- split: FreeForm_1971
path: data/FreeForm_1971-*
- split: FreeForm_1673
path: data/FreeForm_1673-*
- split: FreeForm_1797
path: data/FreeForm_1797-*
- split: FreeForm_1972
path: data/FreeForm_1972-*
- split: FreeForm_1566
path: data/FreeForm_1566-*
- split: FreeForm_1674
path: data/FreeForm_1674-*
- split: FreeForm_1859
path: data/FreeForm_1859-*
- split: FreeForm_1738
path: data/FreeForm_1738-*
- split: FreeForm_1567
path: data/FreeForm_1567-*
- split: FreeForm_1799
path: data/FreeForm_1799-*
- split: FreeForm_1614
path: data/FreeForm_1614-*
- split: FreeForm_1860
path: data/FreeForm_1860-*
- split: FreeForm_1568
path: data/FreeForm_1568-*
- split: FreeForm_1740
path: data/FreeForm_1740-*
- split: FreeForm_1676
path: data/FreeForm_1676-*
- split: FreeForm_1974
path: data/FreeForm_1974-*
- split: FreeForm_1741
path: data/FreeForm_1741-*
- split: FreeForm_1923
path: data/FreeForm_1923-*
- split: FreeForm_1742
path: data/FreeForm_1742-*
- split: FreeForm_1617
path: data/FreeForm_1617-*
- split: FreeForm_1924
path: data/FreeForm_1924-*
- split: FreeForm_1743
path: data/FreeForm_1743-*
- split: FreeForm_1803
path: data/FreeForm_1803-*
- split: FreeForm_1679
path: data/FreeForm_1679-*
- split: FreeForm_1864
path: data/FreeForm_1864-*
- split: FreeForm_1744
path: data/FreeForm_1744-*
- split: FreeForm_1804
path: data/FreeForm_1804-*
- split: FreeForm_1865
path: data/FreeForm_1865-*
- split: FreeForm_1978
path: data/FreeForm_1978-*
- split: FreeForm_1745
path: data/FreeForm_1745-*
- split: FreeForm_1573
path: data/FreeForm_1573-*
- split: FreeForm_1805
path: data/FreeForm_1805-*
- split: FreeForm_1620
path: data/FreeForm_1620-*
- split: FreeForm_1681
path: data/FreeForm_1681-*
- split: FreeForm_1927
path: data/FreeForm_1927-*
- split: FreeForm_1979
path: data/FreeForm_1979-*
- split: FreeForm_1746
path: data/FreeForm_1746-*
- split: FreeForm_1574
path: data/FreeForm_1574-*
- split: FreeForm_1867
path: data/FreeForm_1867-*
- split: FreeForm_1621
path: data/FreeForm_1621-*
- split: FreeForm_1806
path: data/FreeForm_1806-*
- split: FreeForm_1747
path: data/FreeForm_1747-*
- split: FreeForm_1868
path: data/FreeForm_1868-*
- split: FreeForm_1807
path: data/FreeForm_1807-*
- split: FreeForm_1683
path: data/FreeForm_1683-*
- split: FreeForm_1748
path: data/FreeForm_1748-*
- split: FreeForm_1623
path: data/FreeForm_1623-*
- split: FreeForm_1749
path: data/FreeForm_1749-*
- split: FreeForm_1870
path: data/FreeForm_1870-*
- split: FreeForm_1624
path: data/FreeForm_1624-*
- split: FreeForm_1809
path: data/FreeForm_1809-*
- split: FreeForm_1750
path: data/FreeForm_1750-*
- split: FreeForm_1931
path: data/FreeForm_1931-*
- split: FreeForm_1983
path: data/FreeForm_1983-*
- split: FreeForm_1625
path: data/FreeForm_1625-*
- split: FreeForm_1871
path: data/FreeForm_1871-*
- split: FreeForm_1810
path: data/FreeForm_1810-*
- split: FreeForm_1751
path: data/FreeForm_1751-*
- split: FreeForm_1932
path: data/FreeForm_1932-*
- split: FreeForm_1686
path: data/FreeForm_1686-*
- split: FreeForm_1811
path: data/FreeForm_1811-*
- split: FreeForm_1872
path: data/FreeForm_1872-*
- split: FreeForm_1687
path: data/FreeForm_1687-*
- split: FreeForm_1627
path: data/FreeForm_1627-*
- split: FreeForm_1812
path: data/FreeForm_1812-*
- split: FreeForm_1688
path: data/FreeForm_1688-*
- split: FreeForm_1628
path: data/FreeForm_1628-*
- split: FreeForm_1986
path: data/FreeForm_1986-*
- split: FreeForm_1813
path: data/FreeForm_1813-*
- split: FreeForm_1630
path: data/FreeForm_1630-*
- split: FreeForm_1690
path: data/FreeForm_1690-*
- split: FreeForm_1988
path: data/FreeForm_1988-*
- split: FreeForm_1876
path: data/FreeForm_1876-*
- split: FreeForm_1756
path: data/FreeForm_1756-*
- split: FreeForm_1691
path: data/FreeForm_1691-*
- split: FreeForm_1937
path: data/FreeForm_1937-*
- split: FreeForm_1631
path: data/FreeForm_1631-*
- split: FreeForm_1878
path: data/FreeForm_1878-*
- split: FreeForm_1817
path: data/FreeForm_1817-*
- split: FreeForm_1633
path: data/FreeForm_1633-*
- split: FreeForm_1991
path: data/FreeForm_1991-*
- split: FreeForm_1694
path: data/FreeForm_1694-*
- split: FreeForm_1634
path: data/FreeForm_1634-*
- split: FreeForm_1940
path: data/FreeForm_1940-*
- split: FreeForm_1992
path: data/FreeForm_1992-*
- split: FreeForm_1695
path: data/FreeForm_1695-*
- split: FreeForm_1635
path: data/FreeForm_1635-*
- split: FreeForm_1880
path: data/FreeForm_1880-*
- split: FreeForm_1760
path: data/FreeForm_1760-*
- split: FreeForm_1696
path: data/FreeForm_1696-*
- split: FreeForm_1820
path: data/FreeForm_1820-*
- split: FreeForm_1636
path: data/FreeForm_1636-*
- split: FreeForm_1881
path: data/FreeForm_1881-*
- split: FreeForm_1761
path: data/FreeForm_1761-*
- split: FreeForm_1942
path: data/FreeForm_1942-*
- split: FreeForm_1697
path: data/FreeForm_1697-*
- split: FreeForm_1637
path: data/FreeForm_1637-*
- split: FreeForm_1882
path: data/FreeForm_1882-*
- split: FreeForm_1943
path: data/FreeForm_1943-*
- split: FreeForm_1762
path: data/FreeForm_1762-*
- split: FreeForm_1995
path: data/FreeForm_1995-*
- split: FreeForm_1883
path: data/FreeForm_1883-*
- split: FreeForm_1698
path: data/FreeForm_1698-*
- split: FreeForm_1822
path: data/FreeForm_1822-*
- split: FreeForm_1944
path: data/FreeForm_1944-*
- split: FreeForm_1884
path: data/FreeForm_1884-*
- split: FreeForm_1823
path: data/FreeForm_1823-*
- split: FreeForm_1945
path: data/FreeForm_1945-*
- split: FreeForm_1885
path: data/FreeForm_1885-*
- split: FreeForm_1700
path: data/FreeForm_1700-*
- split: FreeForm_1946
path: data/FreeForm_1946-*
- split: FreeForm_1886
path: data/FreeForm_1886-*
- split: FreeForm_1825
path: data/FreeForm_1825-*
- split: FreeForm_1947
path: data/FreeForm_1947-*
- split: FreeForm_1887
path: data/FreeForm_1887-*
- split: FreeForm_1826
path: data/FreeForm_1826-*
- split: FreeForm_1948
path: data/FreeForm_1948-*
- split: FreeForm_1888
path: data/FreeForm_1888-*
- split: FreeForm_1999
path: data/FreeForm_1999-*
- split: FreeForm_1949
path: data/FreeForm_1949-*
- split: FreeForm_1889
path: data/FreeForm_1889-*
- split: FreeForm_1950
path: data/FreeForm_1950-*
- split: FreeForm_1951
path: data/FreeForm_1951-*
- split: FreeForm_1952
path: data/FreeForm_1952-*
- split: FreeForm_538
path: data/FreeForm_538-*
- split: FreeForm_965
path: data/FreeForm_965-*
- split: FreeForm_539
path: data/FreeForm_539-*
- split: FreeForm_903
path: data/FreeForm_903-*
- split: FreeForm_540
path: data/FreeForm_540-*
- split: FreeForm_917
path: data/FreeForm_917-*
- split: FreeForm_541
path: data/FreeForm_541-*
- split: FreeForm_604
path: data/FreeForm_604-*
- split: FreeForm_818
path: data/FreeForm_818-*
- split: FreeForm_728
path: data/FreeForm_728-*
- split: FreeForm_606
path: data/FreeForm_606-*
- split: FreeForm_997
path: data/FreeForm_997-*
- split: FreeForm_562
path: data/FreeForm_562-*
- split: FreeForm_623
path: data/FreeForm_623-*
- split: FreeForm_1021
path: data/FreeForm_1021-*
- split: FreeForm_731
path: data/FreeForm_731-*
- split: FreeForm_940
path: data/FreeForm_940-*
- split: FreeForm_732
path: data/FreeForm_732-*
- split: FreeForm_878
path: data/FreeForm_878-*
- split: FreeForm_1067
path: data/FreeForm_1067-*
- split: FreeForm_669
path: data/FreeForm_669-*
- split: FreeForm_879
path: data/FreeForm_879-*
- split: FreeForm_1162
path: data/FreeForm_1162-*
- split: FreeForm_1099
path: data/FreeForm_1099-*
- split: FreeForm_670
path: data/FreeForm_670-*
- split: FreeForm_1172
path: data/FreeForm_1172-*
- split: FreeForm_1222
path: data/FreeForm_1222-*
- split: FreeForm_686
path: data/FreeForm_686-*
- split: FreeForm_1337
path: data/FreeForm_1337-*
- split: FreeForm_688
path: data/FreeForm_688-*
- split: FreeForm_1115
path: data/FreeForm_1115-*
- split: FreeForm_1265
path: data/FreeForm_1265-*
- split: FreeForm_1117
path: data/FreeForm_1117-*
- split: FreeForm_1418
path: data/FreeForm_1418-*
- split: FreeForm_1513
path: data/FreeForm_1513-*
- split: FreeForm_1360
path: data/FreeForm_1360-*
- split: FreeForm_1422
path: data/FreeForm_1422-*
- split: FreeForm_1514
path: data/FreeForm_1514-*
- split: FreeForm_1290
path: data/FreeForm_1290-*
- split: FreeForm_1487
path: data/FreeForm_1487-*
- split: FreeForm_1527
path: data/FreeForm_1527-*
- split: FreeForm_1299
path: data/FreeForm_1299-*
- split: FreeForm_1488
path: data/FreeForm_1488-*
- split: FreeForm_1529
path: data/FreeForm_1529-*
- split: FreeForm_1302
path: data/FreeForm_1302-*
- split: FreeForm_1371
path: data/FreeForm_1371-*
- split: FreeForm_1439
path: data/FreeForm_1439-*
- split: FreeForm_1638
path: data/FreeForm_1638-*
- split: FreeForm_1305
path: data/FreeForm_1305-*
- split: FreeForm_1644
path: data/FreeForm_1644-*
- split: FreeForm_1308
path: data/FreeForm_1308-*
- split: FreeForm_1497
path: data/FreeForm_1497-*
- split: FreeForm_1706
path: data/FreeForm_1706-*
- split: FreeForm_1830
path: data/FreeForm_1830-*
- split: FreeForm_1650
path: data/FreeForm_1650-*
- split: FreeForm_1537
path: data/FreeForm_1537-*
- split: FreeForm_1832
path: data/FreeForm_1832-*
- split: FreeForm_1776
path: data/FreeForm_1776-*
- split: FreeForm_1322
path: data/FreeForm_1322-*
- split: FreeForm_1833
path: data/FreeForm_1833-*
- split: FreeForm_1713
path: data/FreeForm_1713-*
- split: FreeForm_1553
path: data/FreeForm_1553-*
- split: FreeForm_1596
path: data/FreeForm_1596-*
- split: FreeForm_1663
path: data/FreeForm_1663-*
- split: FreeForm_1556
path: data/FreeForm_1556-*
- split: FreeForm_1783
path: data/FreeForm_1783-*
- split: FreeForm_1912
path: data/FreeForm_1912-*
- split: FreeForm_1559
path: data/FreeForm_1559-*
- split: FreeForm_1785
path: data/FreeForm_1785-*
- split: FreeForm_1666
path: data/FreeForm_1666-*
- split: FreeForm_1729
path: data/FreeForm_1729-*
- split: FreeForm_1788
path: data/FreeForm_1788-*
- split: FreeForm_1668
path: data/FreeForm_1668-*
- split: FreeForm_1918
path: data/FreeForm_1918-*
- split: FreeForm_1563
path: data/FreeForm_1563-*
- split: FreeForm_1675
path: data/FreeForm_1675-*
- split: FreeForm_1962
path: data/FreeForm_1962-*
- split: FreeForm_1792
path: data/FreeForm_1792-*
- split: FreeForm_1615
path: data/FreeForm_1615-*
- split: FreeForm_1846
path: data/FreeForm_1846-*
- split: FreeForm_1616
path: data/FreeForm_1616-*
- split: FreeForm_1850
path: data/FreeForm_1850-*
- split: FreeForm_1964
path: data/FreeForm_1964-*
- split: FreeForm_1801
path: data/FreeForm_1801-*
- split: FreeForm_1851
path: data/FreeForm_1851-*
- split: FreeForm_1965
path: data/FreeForm_1965-*
- split: FreeForm_1626
path: data/FreeForm_1626-*
- split: FreeForm_1853
path: data/FreeForm_1853-*
- split: FreeForm_1967
path: data/FreeForm_1967-*
- split: FreeForm_1692
path: data/FreeForm_1692-*
- split: FreeForm_1854
path: data/FreeForm_1854-*
- split: FreeForm_1975
path: data/FreeForm_1975-*
- split: FreeForm_1699
path: data/FreeForm_1699-*
- split: FreeForm_1755
path: data/FreeForm_1755-*
- split: FreeForm_1757
path: data/FreeForm_1757-*
- split: FreeForm_1763
path: data/FreeForm_1763-*
- split: FreeForm_1814
path: data/FreeForm_1814-*
- split: FreeForm_1816
path: data/FreeForm_1816-*
- split: FreeForm_1821
path: data/FreeForm_1821-*
- split: FreeForm_1856
path: data/FreeForm_1856-*
- split: FreeForm_1862
path: data/FreeForm_1862-*
- split: FreeForm_1873
path: data/FreeForm_1873-*
- split: FreeForm_1875
path: data/FreeForm_1875-*
- split: FreeForm_1877
path: data/FreeForm_1877-*
- split: FreeForm_1935
path: data/FreeForm_1935-*
- split: FreeForm_1936
path: data/FreeForm_1936-*
- split: FreeForm_1938
path: data/FreeForm_1938-*
- split: FreeForm_1939
path: data/FreeForm_1939-*
- split: FreeForm_1941
path: data/FreeForm_1941-*
- split: FreeForm_1977
path: data/FreeForm_1977-*
- split: FreeForm_1981
path: data/FreeForm_1981-*
- split: FreeForm_1984
path: data/FreeForm_1984-*
- split: FreeForm_1985
path: data/FreeForm_1985-*
- split: FreeForm_1987
path: data/FreeForm_1987-*
- split: FreeForm_1989
path: data/FreeForm_1989-*
- split: FreeForm_1990
path: data/FreeForm_1990-*
- split: FreeForm_1993
path: data/FreeForm_1993-*
- split: FreeForm_1996
path: data/FreeForm_1996-*
- split: FreeForm_2000
path: data/FreeForm_2000-*
tags:
- art
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
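Although this section is otherwise unfilled, the YAML front matter of this card enumerates a large number of named `FreeForm_*` splits. A minimal sketch of how such splits could be accessed with the 🤗 `datasets` library follows; the repository id `user/dataset-name` is a placeholder assumption, since this card does not state one.
```python
# Minimal sketch, assuming this card belongs to a Hub repository at
# "user/dataset-name" (the card itself does not name the repository).
from datasets import load_dataset

# Load a single named split defined in the YAML config above.
ds = load_dataset("user/dataset-name", split="FreeForm_354")

# Or load all splits at once and inspect the available split names.
all_splits = load_dataset("user/dataset-name")
print(sorted(all_splits.keys())[:5])
```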
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
## BibTeX citation
```bibtex
@misc{zhao2024ultraeditinstructionbasedfinegrainedimage,
title={UltraEdit: Instruction-based Fine-Grained Image Editing at Scale},
author={Haozhe Zhao and Xiaojian Ma and Liang Chen and Shuzheng Si and Rujie Wu and Kaikai An and Peiyu Yu and Minjia Zhang and Qing Li and Baobao Chang},
year={2024},
eprint={2407.05282},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2407.05282},
}
``` |
nguha/legalbench | nguha | "2024-09-30T04:35:09Z" | 14,977 | 87 | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:other",
"size_categories:10K<n<100K",
"arxiv:2308.11462",
"arxiv:2110.01799",
"arxiv:2103.06268",
"arxiv:2301.00876",
"arxiv:1911.00841",
"arxiv:2105.07903",
"region:us",
"legal",
"law",
"finance"
] | [
"text-classification",
"question-answering",
"text-generation"
] | "2023-03-16T23:03:42Z" | ---
language:
- en
license: other
size_categories:
- 10K<n<100K
task_categories:
- text-classification
- question-answering
- text-generation
tags:
- legal
- law
- finance
dataset_info:
- config_name: abercrombie
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 307
num_examples: 5
- name: test
num_bytes: 6240
num_examples: 95
download_size: 19558988
dataset_size: 6547
- config_name: canada_tax_court_outcomes
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2975
num_examples: 6
- name: test
num_bytes: 157411
num_examples: 244
download_size: 19558988
dataset_size: 160386
- config_name: citation_prediction_classification
features:
- name: answer
dtype: string
- name: citation
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 660
num_examples: 2
- name: test
num_bytes: 26112
num_examples: 108
download_size: 19558988
dataset_size: 26772
- config_name: citation_prediction_open
features:
- name: answer
dtype: string
- name: circuit
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 555
num_examples: 2
- name: test
num_bytes: 13460
num_examples: 53
download_size: 19558988
dataset_size: 14015
- config_name: consumer_contracts_qa
features:
- name: answer
dtype: string
- name: contract
dtype: string
- name: index
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 9941
num_examples: 4
- name: test
num_bytes: 1221320
num_examples: 396
download_size: 19558988
dataset_size: 1231261
- config_name: contract_nli_confidentiality_of_agreement
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 4070
num_examples: 8
- name: test
num_bytes: 43818
num_examples: 82
download_size: 19558988
dataset_size: 47888
- config_name: contract_nli_explicit_identification
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3615
num_examples: 8
- name: test
num_bytes: 62133
num_examples: 109
download_size: 19558988
dataset_size: 65748
- config_name: contract_nli_inclusion_of_verbally_conveyed_information
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3817
num_examples: 8
- name: test
num_bytes: 81933
num_examples: 139
download_size: 19558988
dataset_size: 85750
- config_name: contract_nli_limited_use
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 4855
num_examples: 8
- name: test
num_bytes: 98534
num_examples: 208
download_size: 19558988
dataset_size: 103389
- config_name: contract_nli_no_licensing
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2591
num_examples: 8
- name: test
num_bytes: 78173
num_examples: 162
download_size: 19558988
dataset_size: 80764
- config_name: contract_nli_notice_on_compelled_disclosure
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3907
num_examples: 8
- name: test
num_bytes: 80470
num_examples: 142
download_size: 19558988
dataset_size: 84377
- config_name: contract_nli_permissible_acquirement_of_similar_information
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2736
num_examples: 8
- name: test
num_bytes: 87469
num_examples: 178
download_size: 19558988
dataset_size: 90205
- config_name: contract_nli_permissible_copy
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3480
num_examples: 8
- name: test
num_bytes: 39015
num_examples: 87
download_size: 19558988
dataset_size: 42495
- config_name: contract_nli_permissible_development_of_similar_information
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3921
num_examples: 8
- name: test
num_bytes: 62603
num_examples: 136
download_size: 19558988
dataset_size: 66524
- config_name: contract_nli_permissible_post-agreement_possession
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 4608
num_examples: 8
- name: test
num_bytes: 65932
num_examples: 111
download_size: 19558988
dataset_size: 70540
- config_name: contract_nli_return_of_confidential_information
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3499
num_examples: 8
- name: test
num_bytes: 35672
num_examples: 66
download_size: 19558988
dataset_size: 39171
- config_name: contract_nli_sharing_with_employees
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3173
num_examples: 8
- name: test
num_bytes: 104240
num_examples: 170
download_size: 19558988
dataset_size: 107413
- config_name: contract_nli_sharing_with_third-parties
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3249
num_examples: 8
- name: test
num_bytes: 104822
num_examples: 180
download_size: 19558988
dataset_size: 108071
- config_name: contract_nli_survival_of_obligations
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2272
num_examples: 8
- name: test
num_bytes: 75450
num_examples: 157
download_size: 19558988
dataset_size: 77722
- config_name: contract_qa
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: question
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2408
num_examples: 8
- name: test
num_bytes: 26370
num_examples: 80
download_size: 19558988
dataset_size: 28778
- config_name: corporate_lobbying
features:
- name: answer
dtype: string
- name: bill_summary
dtype: string
- name: bill_title
dtype: string
- name: company_description
dtype: string
- name: company_name
dtype: string
- name: index
dtype: string
splits:
- name: train
num_bytes: 54334
num_examples: 10
- name: test
num_bytes: 2974813
num_examples: 490
download_size: 19558988
dataset_size: 3029147
- config_name: cuad_affiliate_license-licensee
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 4067
num_examples: 6
- name: test
num_bytes: 115798
num_examples: 198
download_size: 19558988
dataset_size: 119865
- config_name: cuad_affiliate_license-licensor
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 4247
num_examples: 6
- name: test
num_bytes: 64931
num_examples: 88
download_size: 19558988
dataset_size: 69178
- config_name: cuad_anti-assignment
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2070
num_examples: 6
- name: test
num_bytes: 513026
num_examples: 1172
download_size: 19558988
dataset_size: 515096
- config_name: cuad_audit_rights
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2555
num_examples: 6
- name: test
num_bytes: 526977
num_examples: 1216
download_size: 19558988
dataset_size: 529532
- config_name: cuad_cap_on_liability
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2621
num_examples: 6
- name: test
num_bytes: 587220
num_examples: 1246
download_size: 19558988
dataset_size: 589841
- config_name: cuad_change_of_control
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2231
num_examples: 6
- name: test
num_bytes: 203823
num_examples: 416
download_size: 19558988
dataset_size: 206054
- config_name: cuad_competitive_restriction_exception
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2774
num_examples: 6
- name: test
num_bytes: 115844
num_examples: 220
download_size: 19558988
dataset_size: 118618
- config_name: cuad_covenant_not_to_sue
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2581
num_examples: 6
- name: test
num_bytes: 153799
num_examples: 308
download_size: 19558988
dataset_size: 156380
- config_name: cuad_effective_date
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2080
num_examples: 6
- name: test
num_bytes: 87802
num_examples: 236
download_size: 19558988
dataset_size: 89882
- config_name: cuad_exclusivity
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 1897
num_examples: 6
- name: test
num_bytes: 355097
num_examples: 762
download_size: 19558988
dataset_size: 356994
- config_name: cuad_expiration_date
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 1638
num_examples: 6
- name: test
num_bytes: 354232
num_examples: 876
download_size: 19558988
dataset_size: 355870
- config_name: cuad_governing_law
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2420
num_examples: 6
- name: test
num_bytes: 337322
num_examples: 876
download_size: 19558988
dataset_size: 339742
- config_name: cuad_insurance
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2537
num_examples: 6
- name: test
num_bytes: 475827
num_examples: 1030
download_size: 19558988
dataset_size: 478364
- config_name: cuad_ip_ownership_assignment
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 4756
num_examples: 6
- name: test
num_bytes: 294749
num_examples: 576
download_size: 19558988
dataset_size: 299505
- config_name: cuad_irrevocable_or_perpetual_license
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 5328
num_examples: 6
- name: test
num_bytes: 160279
num_examples: 280
download_size: 19558988
dataset_size: 165607
- config_name: cuad_joint_ip_ownership
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 5011
num_examples: 6
- name: test
num_bytes: 90592
num_examples: 192
download_size: 19558988
dataset_size: 95603
- config_name: cuad_license_grant
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3690
num_examples: 6
- name: test
num_bytes: 709331
num_examples: 1396
download_size: 19558988
dataset_size: 713021
- config_name: cuad_liquidated_damages
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3579
num_examples: 6
- name: test
num_bytes: 97839
num_examples: 220
download_size: 19558988
dataset_size: 101418
- config_name: cuad_minimum_commitment
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2874
num_examples: 6
- name: test
num_bytes: 354078
num_examples: 772
download_size: 19558988
dataset_size: 356952
- config_name: cuad_most_favored_nation
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2103
num_examples: 6
- name: test
num_bytes: 32800
num_examples: 64
download_size: 19558988
dataset_size: 34903
- config_name: cuad_no-solicit_of_customers
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3310
num_examples: 6
- name: test
num_bytes: 40828
num_examples: 84
download_size: 19558988
dataset_size: 44138
- config_name: cuad_no-solicit_of_employees
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3619
num_examples: 6
- name: test
num_bytes: 72661
num_examples: 142
download_size: 19558988
dataset_size: 76280
- config_name: cuad_non-compete
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3675
num_examples: 6
- name: test
num_bytes: 211272
num_examples: 442
download_size: 19558988
dataset_size: 214947
- config_name: cuad_non-disparagement
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2168
num_examples: 6
- name: test
num_bytes: 49850
num_examples: 100
download_size: 19558988
dataset_size: 52018
- config_name: cuad_non-transferable_license
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3643
num_examples: 6
- name: test
num_bytes: 269505
num_examples: 542
download_size: 19558988
dataset_size: 273148
- config_name: cuad_notice_period_to_terminate_renewal
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 4166
num_examples: 6
- name: test
num_bytes: 100014
num_examples: 222
download_size: 19558988
dataset_size: 104180
- config_name: cuad_post-termination_services
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3349
num_examples: 6
- name: test
num_bytes: 419477
num_examples: 808
download_size: 19558988
dataset_size: 422826
- config_name: cuad_price_restrictions
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2945
num_examples: 6
- name: test
num_bytes: 19430
num_examples: 46
download_size: 19558988
dataset_size: 22375
- config_name: cuad_renewal_term
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2163
num_examples: 6
- name: test
num_bytes: 168528
num_examples: 386
download_size: 19558988
dataset_size: 170691
- config_name: cuad_revenue-profit_sharing
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2581
num_examples: 6
- name: test
num_bytes: 363594
num_examples: 774
download_size: 19558988
dataset_size: 366175
- config_name: cuad_rofr-rofo-rofn
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2817
num_examples: 6
- name: test
num_bytes: 338243
num_examples: 690
download_size: 19558988
dataset_size: 341060
- config_name: cuad_source_code_escrow
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2696
num_examples: 6
- name: test
num_bytes: 58125
num_examples: 118
download_size: 19558988
dataset_size: 60821
- config_name: cuad_termination_for_convenience
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 1506
num_examples: 6
- name: test
num_bytes: 181164
num_examples: 430
download_size: 19558988
dataset_size: 182670
- config_name: cuad_third_party_beneficiary
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2378
num_examples: 6
- name: test
num_bytes: 24106
num_examples: 68
download_size: 19558988
dataset_size: 26484
- config_name: cuad_uncapped_liability
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2570
num_examples: 6
- name: test
num_bytes: 158009
num_examples: 294
download_size: 19558988
dataset_size: 160579
- config_name: cuad_unlimited-all-you-can-eat-license
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2414
num_examples: 6
- name: test
num_bytes: 22347
num_examples: 48
download_size: 19558988
dataset_size: 24761
- config_name: cuad_volume_restriction
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 1397
num_examples: 6
- name: test
num_bytes: 129456
num_examples: 322
download_size: 19558988
dataset_size: 130853
- config_name: cuad_warranty_duration
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 1815
num_examples: 6
- name: test
num_bytes: 142580
num_examples: 320
download_size: 19558988
dataset_size: 144395
- config_name: definition_classification
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1826
num_examples: 8
- name: test
num_bytes: 371743
num_examples: 1337
download_size: 19558988
dataset_size: 373569
- config_name: definition_extraction
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2731
num_examples: 8
- name: test
num_bytes: 254689
num_examples: 687
download_size: 19558988
dataset_size: 257420
- config_name: diversity_1
features:
- name: aic_is_met
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
- name: parties_are_diverse
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 803
num_examples: 6
- name: test
num_bytes: 41135
num_examples: 300
download_size: 19558988
dataset_size: 41938
- config_name: diversity_2
features:
- name: aic_is_met
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
- name: parties_are_diverse
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1041
num_examples: 6
- name: test
num_bytes: 53537
num_examples: 300
download_size: 19558988
dataset_size: 54578
- config_name: diversity_3
features:
- name: aic_is_met
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
- name: parties_are_diverse
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 992
num_examples: 6
- name: test
num_bytes: 50744
num_examples: 300
download_size: 19558988
dataset_size: 51736
- config_name: diversity_4
features:
- name: aic_is_met
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
- name: parties_are_diverse
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1070
num_examples: 6
- name: test
num_bytes: 53464
num_examples: 300
download_size: 19558988
dataset_size: 54534
- config_name: diversity_5
features:
- name: aic_is_met
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
- name: parties_are_diverse
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1232
num_examples: 6
- name: test
num_bytes: 62550
num_examples: 300
download_size: 19558988
dataset_size: 63782
- config_name: diversity_6
features:
- name: aic_is_met
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
- name: parties_are_diverse
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2016
num_examples: 6
- name: test
num_bytes: 100411
num_examples: 300
download_size: 19558988
dataset_size: 102427
- config_name: function_of_decision_section
features:
- name: Citation
dtype: string
- name: Paragraph
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
splits:
- name: train
num_bytes: 1547
num_examples: 7
- name: test
num_bytes: 210419
num_examples: 367
download_size: 19558988
dataset_size: 211966
- config_name: hearsay
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: slice
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 788
num_examples: 5
- name: test
num_bytes: 17150
num_examples: 94
download_size: 19558988
dataset_size: 17938
- config_name: insurance_policy_interpretation
features:
- name: answer
dtype: string
- name: claim
dtype: string
- name: index
dtype: string
- name: policy
dtype: string
splits:
- name: train
num_bytes: 3119
num_examples: 5
- name: test
num_bytes: 70764
num_examples: 133
download_size: 19558988
dataset_size: 73883
- config_name: international_citizenship_questions
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 832
num_examples: 4
- name: test
num_bytes: 2089107
num_examples: 9306
download_size: 19558988
dataset_size: 2089939
- config_name: jcrew_blocker
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7352
num_examples: 6
- name: test
num_bytes: 59879
num_examples: 54
download_size: 19558988
dataset_size: 67231
- config_name: learned_hands_benefits
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8267
num_examples: 6
- name: test
num_bytes: 87512
num_examples: 66
download_size: 19558988
dataset_size: 95779
- config_name: learned_hands_business
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6075
num_examples: 6
- name: test
num_bytes: 202116
num_examples: 174
download_size: 19558988
dataset_size: 208191
- config_name: learned_hands_consumer
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6355
num_examples: 6
- name: test
num_bytes: 795463
num_examples: 614
download_size: 19558988
dataset_size: 801818
- config_name: learned_hands_courts
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10693
num_examples: 6
- name: test
num_bytes: 228204
num_examples: 192
download_size: 19558988
dataset_size: 238897
- config_name: learned_hands_crime
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7322
num_examples: 6
- name: test
num_bytes: 846597
num_examples: 688
download_size: 19558988
dataset_size: 853919
- config_name: learned_hands_divorce
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10651
num_examples: 6
- name: test
num_bytes: 189279
num_examples: 150
download_size: 19558988
dataset_size: 199930
- config_name: learned_hands_domestic_violence
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11170
num_examples: 6
- name: test
num_bytes: 239797
num_examples: 174
download_size: 19558988
dataset_size: 250967
- config_name: learned_hands_education
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6992
num_examples: 6
- name: test
num_bytes: 79184
num_examples: 56
download_size: 19558988
dataset_size: 86176
- config_name: learned_hands_employment
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11223
num_examples: 6
- name: test
num_bytes: 909220
num_examples: 710
download_size: 19558988
dataset_size: 920443
- config_name: learned_hands_estates
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5970
num_examples: 6
- name: test
num_bytes: 216836
num_examples: 178
download_size: 19558988
dataset_size: 222806
- config_name: learned_hands_family
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8714
num_examples: 6
- name: test
num_bytes: 3073508
num_examples: 2265
download_size: 19558988
dataset_size: 3082222
- config_name: learned_hands_health
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6155
num_examples: 6
- name: test
num_bytes: 336934
num_examples: 226
download_size: 19558988
dataset_size: 343089
- config_name: learned_hands_housing
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9726
num_examples: 6
- name: test
num_bytes: 6028612
num_examples: 4494
download_size: 19558988
dataset_size: 6038338
- config_name: learned_hands_immigration
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3955
num_examples: 6
- name: test
num_bytes: 165352
num_examples: 134
download_size: 19558988
dataset_size: 169307
- config_name: learned_hands_torts
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4484
num_examples: 6
- name: test
num_bytes: 615649
num_examples: 432
download_size: 19558988
dataset_size: 620133
- config_name: learned_hands_traffic
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6250
num_examples: 6
- name: test
num_bytes: 667539
num_examples: 556
download_size: 19558988
dataset_size: 673789
- config_name: legal_reasoning_causality
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4688
num_examples: 4
- name: test
num_bytes: 87007
num_examples: 55
download_size: 19558988
dataset_size: 91695
- config_name: maud_ability_to_consummate_concept_is_subject_to_mae_carveouts
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5322
num_examples: 1
- name: test
num_bytes: 304051
num_examples: 69
download_size: 19558988
dataset_size: 309373
- config_name: maud_accuracy_of_fundamental_target_rws_bringdown_standard
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 271
num_examples: 1
- name: test
num_bytes: 148869
num_examples: 175
download_size: 19558988
dataset_size: 149140
- config_name: maud_accuracy_of_target_capitalization_rw_(outstanding_shares)_bringdown_standard_answer
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1493
num_examples: 1
- name: test
num_bytes: 152224
num_examples: 181
download_size: 19558988
dataset_size: 153717
- config_name: maud_accuracy_of_target_general_rw_bringdown_timing_answer
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1000
num_examples: 1
- name: test
num_bytes: 152717
num_examples: 181
download_size: 19558988
dataset_size: 153717
- config_name: maud_additional_matching_rights_period_for_modifications_(cor)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2170
num_examples: 1
- name: test
num_bytes: 312632
num_examples: 158
download_size: 19558988
dataset_size: 314802
- config_name: maud_application_of_buyer_consent_requirement_(negative_interim_covenant)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 558
num_examples: 1
- name: test
num_bytes: 96990
num_examples: 180
download_size: 19558988
dataset_size: 97548
- config_name: maud_buyer_consent_requirement_(ordinary_course)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2620
num_examples: 1
- name: test
num_bytes: 138668
num_examples: 181
download_size: 19558988
dataset_size: 141288
- config_name: maud_change_in_law__subject_to_disproportionate_impact_modifier
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6000
num_examples: 1
- name: test
num_bytes: 448666
num_examples: 99
download_size: 19558988
dataset_size: 454666
- config_name: maud_changes_in_gaap_or_other_accounting_principles__subject_to_disproportionate_impact_modifier
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5998
num_examples: 1
- name: test
num_bytes: 444442
num_examples: 98
download_size: 19558988
dataset_size: 450440
- config_name: maud_cor_permitted_in_response_to_intervening_event
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2631
num_examples: 1
- name: test
num_bytes: 195447
num_examples: 100
download_size: 19558988
dataset_size: 198078
- config_name: maud_cor_permitted_with_board_fiduciary_determination_only
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3970
num_examples: 1
- name: test
num_bytes: 194108
num_examples: 100
download_size: 19558988
dataset_size: 198078
- config_name: maud_cor_standard_(intervening_event)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 727
num_examples: 1
- name: test
num_bytes: 175140
num_examples: 84
download_size: 19558988
dataset_size: 175867
- config_name: maud_cor_standard_(superior_offer)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1173
num_examples: 1
- name: test
num_bytes: 196905
num_examples: 100
download_size: 19558988
dataset_size: 198078
- config_name: maud_definition_contains_knowledge_requirement_-_answer
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1899
num_examples: 1
- name: test
num_bytes: 231405
num_examples: 147
download_size: 19558988
dataset_size: 233304
- config_name: maud_definition_includes_asset_deals
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 614
num_examples: 1
- name: test
num_bytes: 289644
num_examples: 146
download_size: 19558988
dataset_size: 290258
- config_name: maud_definition_includes_stock_deals
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 683
num_examples: 1
- name: test
num_bytes: 292466
num_examples: 148
download_size: 19558988
dataset_size: 293149
- config_name: maud_fiduciary_exception__board_determination_standard
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1594
num_examples: 1
- name: test
num_bytes: 288180
num_examples: 179
download_size: 19558988
dataset_size: 289774
- config_name: maud_fiduciary_exception_board_determination_trigger_(no_shop)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3538
num_examples: 1
- name: test
num_bytes: 286236
num_examples: 179
download_size: 19558988
dataset_size: 289774
- config_name: maud_financial_point_of_view_is_the_sole_consideration
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3290
num_examples: 1
- name: test
num_bytes: 217048
num_examples: 112
download_size: 19558988
dataset_size: 220338
- config_name: maud_fls_(mae)_standard
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4669
num_examples: 1
- name: test
num_bytes: 349856
num_examples: 77
download_size: 19558988
dataset_size: 354525
- config_name: maud_general_economic_and_financial_conditions_subject_to_disproportionate_impact_modifier
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5998
num_examples: 1
- name: test
num_bytes: 445306
num_examples: 98
download_size: 19558988
dataset_size: 451304
- config_name: maud_includes_consistent_with_past_practice
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1127
num_examples: 1
- name: test
num_bytes: 140161
num_examples: 181
download_size: 19558988
dataset_size: 141288
- config_name: maud_initial_matching_rights_period_(cor)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3041
num_examples: 1
- name: test
num_bytes: 311761
num_examples: 158
download_size: 19558988
dataset_size: 314802
- config_name: maud_initial_matching_rights_period_(ftr)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1850
num_examples: 1
- name: test
num_bytes: 279202
num_examples: 132
download_size: 19558988
dataset_size: 281052
- config_name: maud_intervening_event_-_required_to_occur_after_signing_-_answer
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3055
num_examples: 1
- name: test
num_bytes: 230249
num_examples: 147
download_size: 19558988
dataset_size: 233304
- config_name: maud_knowledge_definition
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 240
num_examples: 1
- name: test
num_bytes: 359730
num_examples: 167
download_size: 19558988
dataset_size: 359970
- config_name: maud_liability_standard_for_no-shop_breach_by_target_non-do_representatives
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 154
num_examples: 1
- name: test
num_bytes: 40946
num_examples: 156
download_size: 19558988
dataset_size: 41100
- config_name: maud_ordinary_course_efforts_standard
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1037
num_examples: 1
- name: test
num_bytes: 140251
num_examples: 181
download_size: 19558988
dataset_size: 141288
- config_name: maud_pandemic_or_other_public_health_event__subject_to_disproportionate_impact_modifier
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3728
num_examples: 1
- name: test
num_bytes: 447053
num_examples: 98
download_size: 19558988
dataset_size: 450781
- config_name: maud_pandemic_or_other_public_health_event_specific_reference_to_pandemic-related_governmental_responses_or_measures
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3728
num_examples: 1
- name: test
num_bytes: 447053
num_examples: 98
download_size: 19558988
dataset_size: 450781
- config_name: maud_relational_language_(mae)_applies_to
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4948
num_examples: 1
- name: test
num_bytes: 409477
num_examples: 90
download_size: 19558988
dataset_size: 414425
- config_name: maud_specific_performance
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 771
num_examples: 1
- name: test
num_bytes: 107392
num_examples: 178
download_size: 19558988
dataset_size: 108163
- config_name: maud_tail_period_length
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 406
num_examples: 1
- name: test
num_bytes: 108632
num_examples: 179
download_size: 19558988
dataset_size: 109038
- config_name: maud_type_of_consideration
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 258
num_examples: 1
- name: test
num_bytes: 139270
num_examples: 172
download_size: 19558988
dataset_size: 139528
- config_name: nys_judicial_ethics
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: question
dtype: string
- name: year
dtype: string
splits:
- name: train
num_bytes: 1697
num_examples: 8
- name: test
num_bytes: 53974
num_examples: 292
download_size: 19558988
dataset_size: 55671
- config_name: opp115_data_retention
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1791
num_examples: 8
- name: test
num_bytes: 18620
num_examples: 88
download_size: 19558988
dataset_size: 20411
- config_name: opp115_data_security
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2123
num_examples: 8
- name: test
num_bytes: 352667
num_examples: 1334
download_size: 19558988
dataset_size: 354790
- config_name: opp115_do_not_track
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2507
num_examples: 8
- name: test
num_bytes: 26363
num_examples: 110
download_size: 19558988
dataset_size: 28870
- config_name: opp115_first_party_collection_use
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2227
num_examples: 8
- name: test
num_bytes: 463566
num_examples: 2086
download_size: 19558988
dataset_size: 465793
- config_name: opp115_international_and_specific_audiences
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1643
num_examples: 8
- name: test
num_bytes: 338196
num_examples: 980
download_size: 19558988
dataset_size: 339839
- config_name: opp115_policy_change
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1201
num_examples: 8
- name: test
num_bytes: 94060
num_examples: 431
download_size: 19558988
dataset_size: 95261
- config_name: opp115_third_party_sharing_collection
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1217
num_examples: 8
- name: test
num_bytes: 383909
num_examples: 1590
download_size: 19558988
dataset_size: 385126
- config_name: opp115_user_access,_edit_and_deletion
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1251
num_examples: 8
- name: test
num_bytes: 108969
num_examples: 462
download_size: 19558988
dataset_size: 110220
- config_name: opp115_user_choice_control
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1695
num_examples: 8
- name: test
num_bytes: 353113
num_examples: 1546
download_size: 19558988
dataset_size: 354808
- config_name: oral_argument_question_purpose
features:
- name: Docket No.
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 2415
num_examples: 7
- name: test
num_bytes: 95262
num_examples: 312
download_size: 19558988
dataset_size: 97677
- config_name: overruling
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 629
num_examples: 6
- name: test
num_bytes: 443484
num_examples: 2394
download_size: 19558988
dataset_size: 444113
- config_name: personal_jurisdiction
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: slice
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1660
num_examples: 4
- name: test
num_bytes: 21089
num_examples: 50
download_size: 19558988
dataset_size: 22749
- config_name: privacy_policy_entailment
features:
- name: answer
dtype: string
- name: description
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6282
num_examples: 8
- name: test
num_bytes: 3174950
num_examples: 4335
download_size: 19558988
dataset_size: 3181232
- config_name: privacy_policy_qa
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: question
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2231
num_examples: 8
- name: test
num_bytes: 2817986
num_examples: 10923
download_size: 19558988
dataset_size: 2820217
- config_name: proa
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1057
num_examples: 5
- name: test
num_bytes: 25475
num_examples: 95
download_size: 19558988
dataset_size: 26532
- config_name: rule_qa
features:
- name: answer
dtype: string
- name: doctrine
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 12665
num_examples: 50
download_size: 19558988
dataset_size: 12665
- config_name: sara_entailment
features:
- name: answer
dtype: string
- name: case id
dtype: string
- name: description
dtype: string
- name: index
dtype: string
- name: question
dtype: string
- name: statute
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2528
num_examples: 4
- name: test
num_bytes: 225560
num_examples: 272
download_size: 19558988
dataset_size: 228088
- config_name: sara_numeric
features:
- name: answer
dtype: string
- name: case id
dtype: string
- name: description
dtype: string
- name: index
dtype: string
- name: question
dtype: string
- name: statute
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 238363
num_examples: 4
- name: test
num_bytes: 5725392
num_examples: 96
download_size: 19558988
dataset_size: 5963755
- config_name: scalr
features:
- name: answer
dtype: string
- name: choice_0
dtype: string
- name: choice_1
dtype: string
- name: choice_2
dtype: string
- name: choice_3
dtype: string
- name: choice_4
dtype: string
- name: index
dtype: string
- name: question
dtype: string
splits:
- name: test
num_bytes: 1026740
num_examples: 571
download_size: 19558988
dataset_size: 1026740
- config_name: ssla_company_defendants
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5847
num_examples: 3
- name: test
num_bytes: 2313039
num_examples: 1228
download_size: 19558988
dataset_size: 2318886
- config_name: ssla_individual_defendants
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5962
num_examples: 3
- name: test
num_bytes: 2002620
num_examples: 1012
download_size: 19558988
dataset_size: 2008582
- config_name: ssla_plaintiff
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5831
num_examples: 3
- name: test
num_bytes: 1926518
num_examples: 1033
download_size: 19558988
dataset_size: 1932349
- config_name: successor_liability
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: issue
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1734
num_examples: 3
- name: test
num_bytes: 26490
num_examples: 47
download_size: 19558988
dataset_size: 28224
- config_name: supply_chain_disclosure_best_practice_accountability
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 18987
num_examples: 8
- name: test
num_bytes: 1347025
num_examples: 379
download_size: 19558988
dataset_size: 1366012
- config_name: supply_chain_disclosure_best_practice_audits
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 23879
num_examples: 8
- name: test
num_bytes: 1342065
num_examples: 379
download_size: 19558988
dataset_size: 1365944
- config_name: supply_chain_disclosure_best_practice_certification
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 22058
num_examples: 8
- name: test
num_bytes: 1338516
num_examples: 378
download_size: 19558988
dataset_size: 1360574
- config_name: supply_chain_disclosure_best_practice_training
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 24071
num_examples: 8
- name: test
num_bytes: 1341885
num_examples: 379
download_size: 19558988
dataset_size: 1365956
- config_name: supply_chain_disclosure_best_practice_verification
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 27158
num_examples: 8
- name: test
num_bytes: 1338739
num_examples: 379
download_size: 19558988
dataset_size: 1365897
- config_name: supply_chain_disclosure_disclosed_accountability
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 18902
num_examples: 8
- name: test
num_bytes: 1344444
num_examples: 378
download_size: 19558988
dataset_size: 1363346
- config_name: supply_chain_disclosure_disclosed_audits
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 24404
num_examples: 8
- name: test
num_bytes: 1341624
num_examples: 379
download_size: 19558988
dataset_size: 1366028
- config_name: supply_chain_disclosure_disclosed_certification
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 17987
num_examples: 8
- name: test
num_bytes: 1342646
num_examples: 378
download_size: 19558988
dataset_size: 1360633
- config_name: supply_chain_disclosure_disclosed_training
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 27093
num_examples: 8
- name: test
num_bytes: 1338919
num_examples: 379
download_size: 19558988
dataset_size: 1366012
- config_name: supply_chain_disclosure_disclosed_verification
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 25387
num_examples: 8
- name: test
num_bytes: 1340578
num_examples: 379
download_size: 19558988
dataset_size: 1365965
- config_name: telemarketing_sales_rule
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1230
num_examples: 4
- name: test
num_bytes: 17140
num_examples: 47
download_size: 19558988
dataset_size: 18370
- config_name: textualism_tool_dictionaries
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4842
num_examples: 4
- name: test
num_bytes: 102644
num_examples: 107
download_size: 19558988
dataset_size: 107486
- config_name: textualism_tool_plain
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3338
num_examples: 4
- name: test
num_bytes: 167428
num_examples: 165
download_size: 19558988
dataset_size: 170766
- config_name: ucc_v_common_law
features:
- name: answer
dtype: string
- name: contract
dtype: string
- name: index
dtype: string
splits:
- name: train
num_bytes: 904
num_examples: 6
- name: test
num_bytes: 12694
num_examples: 94
download_size: 19558988
dataset_size: 13598
- config_name: unfair_tos
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3308
num_examples: 9
- name: test
num_bytes: 787108
num_examples: 3813
download_size: 19558988
dataset_size: 790416
---
# Dataset Card for LegalBench
- **Homepage:** https://hazyresearch.stanford.edu/legalbench/
- **Repository:** https://github.com/HazyResearch/legalbench/
- **Paper:** https://arxiv.org/abs/2308.11462
## Dataset Description
### Dataset Summary
The LegalBench project is an ongoing open science effort to collaboratively curate tasks for evaluating legal reasoning in English large language models (LLMs). The benchmark currently consists of 162 tasks gathered from 40 contributors.
Note: Because LegalBench is intended to test zero- and few-shot reasoning, the available "train" splits are small. However, if you are interested in finetuning models or studying model performance in a more traditional train/test regime, you can combine and re-partition the train and test data, as sketched below.
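For instance, here is a minimal sketch of such a re-partition, assuming the Hugging Face `datasets` library; the `abercrombie` task is used purely as an example, and depending on your `datasets` version, loading may additionally require `trust_remote_code=True`:
```python
# Minimal sketch (not an official recipe): pool a task's few-shot train split
# with its test split, then re-partition for a traditional train/test setup.
from datasets import concatenate_datasets, load_dataset

task = load_dataset("nguha/legalbench", "abercrombie")  # splits: "train", "test"

# Combine both splits, then re-split 80/20 with a fixed seed for reproducibility.
pooled = concatenate_datasets([task["train"], task["test"]])
resplit = pooled.train_test_split(test_size=0.2, seed=0)

print(resplit["train"].num_rows, resplit["test"].num_rows)
```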
If you have questions about the project or would like to get involved, please see the website for more information.
### Supported Tasks and Leaderboards
LegalBench tasks span multiple types (binary classification, multi-class classification, extraction, generation, entailment), multiple types of text (statutes, judicial opinions, contracts, etc.), and multiple areas of law (evidence, contracts, civil procedure, etc.). For more information on tasks, we recommend visiting the website, where you can search through task descriptions, or the GitHub repository, which contains more granular documentation for each task. We also recommend reading the paper, which provides more background on task significance and the construction process.
### Languages
All LegalBench tasks are in English.
## Dataset Structure
### Data Instances
Detailed descriptions of the instances for each task can be found on GitHub. An example of an instance, for the `abercrombie` task, is provided below:
```json
{
  "text": "The mark \"Ivory\" for a product made of elephant tusks.",
  "label": "generic",
  "idx": 0
}
```
A substantial number of LegalBench tasks are binary classification tasks, which require the LLM to determine if a piece of text has some legal attribute. Because these are framed as Yes/No questions, the label space is "Yes" or "No".
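As a purely illustrative sketch of how such a Yes/No task might be prompted (LegalBench ships curated prompt templates in its GitHub repository; the instruction wording below is hypothetical, not the official template):
```python
# Build a simple few-shot Yes/No prompt from a task's train split.
from datasets import load_dataset

task = load_dataset("nguha/legalbench", "hearsay")  # a binary Yes/No task

# Hypothetical instruction wording; the official templates live on GitHub.
instruction = "Is the following statement hearsay? Answer Yes or No."
shots = "\n\n".join(
    f"Statement: {ex['text']}\nAnswer: {ex['answer']}" for ex in task["train"]
)

def build_prompt(text: str) -> str:
    return f"{instruction}\n\n{shots}\n\nStatement: {text}\nAnswer:"

print(build_prompt(task["test"][0]["text"]))
```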
### Data Fields
Detailed descriptions of the fields for each task can be found on GitHub.
### Data Splits
Each task (except for `rule_qa` and `scalr`) has both a training and evaluation split. Following [RAFT](https://huggingface.co/datasets/ought/raft), train splits consist of only a few labeled instances, reflecting the few-shot regime in which most LLMs are evaluated.
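A quick, non-authoritative way to inspect how small the train splits are, assuming the standard `datasets` API (only the first few configs are sampled to keep it fast):
```python
# Print split sizes for a handful of LegalBench task configs.
from datasets import get_dataset_config_names, load_dataset

for name in get_dataset_config_names("nguha/legalbench")[:3]:
    ds = load_dataset("nguha/legalbench", name)
    print(name, {split: ds[split].num_rows for split in ds})
```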
## Dataset Creation
### Curation Rationale
LegalBench was created to enable researchers to better benchmark the legal reasoning capabilities of LLMs.
### Source Data
#### Initial Data Collection and Normalization
Broadly, LegalBench tasks are drawn from three sources. The first source consists of existing publicly available datasets and corpora. Most of these were originally released for non-LLM evaluation settings. In creating tasks for LegalBench from these sources, we often significantly reformatted data and restructured the prediction objective. For instance, the original [CUAD dataset](https://github.com/TheAtticusProject/cuad) contains annotations on long documents and is intended for evaluating extraction with span-prediction models. We restructured this corpus to generate a binary classification task for each type of contractual clause. While the original corpus emphasized the long-document aspects of contracts, our restructured tasks emphasize whether LLMs can identify the distinguishing features of different types of clauses. The second source consists of datasets that were previously constructed by legal professionals but never released. This primarily includes datasets hand-coded by legal scholars as part of prior empirical legal projects. The last category consists of tasks developed specifically for LegalBench by the authors of the paper. Overall, tasks are drawn from 36 distinct corpora. Please see the Appendix of the paper for more details.
#### Who are the source language producers?
LegalBench data was created by humans. Demographic information for these individuals is not available.
### Annotations
#### Annotation process
Please see the paper for more information on the annotation process used in the creation of each task.
#### Who are the annotators?
Please see the paper for more information on the identity of annotators for each task.
### Personal and Sensitive Information
Data in this benchmark has either been synthetically generated or derived from an already public source (e.g., contracts from the EDGAR database).
Several tasks have been derived from the LearnedHands corpus, which consists of public posts on /r/LegalAdvice. Some posts may discuss sensitive issues.
## Considerations for Using the Data
### Social Impact of Dataset
Please see the original paper for a discussion of social impact.
### Discussion of Biases
Please see the original paper for a discussion of biases.
### Other Known Limitations
LegalBench primarily contains tasks corresponding to American law.
## Additional Information
### Dataset Curators
Please see the website for a full list of participants in the LegalBench project.
### Licensing Information
LegalBench tasks are subject to different licenses. Please see the paper for a description of the licenses.
### Citation Information
If you intend to reference LegalBench broadly, please use the citation below. If you are working with a particular task, please use the citation below in addition to the task specific citation (which can be found on the task page on the website or Github).
```
@misc{guha2023legalbench,
title={LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models},
author={Neel Guha and Julian Nyarko and Daniel E. Ho and Christopher Ré and Adam Chilton and Aditya Narayana and Alex Chohlas-Wood and Austin Peters and Brandon Waldon and Daniel N. Rockmore and Diego Zambrano and Dmitry Talisman and Enam Hoque and Faiz Surani and Frank Fagan and Galit Sarfaty and Gregory M. Dickinson and Haggai Porat and Jason Hegland and Jessica Wu and Joe Nudell and Joel Niklaus and John Nay and Jonathan H. Choi and Kevin Tobia and Margaret Hagan and Megan Ma and Michael Livermore and Nikon Rasumov-Rahe and Nils Holzenberger and Noam Kolt and Peter Henderson and Sean Rehaag and Sharad Goel and Shang Gao and Spencer Williams and Sunny Gandhi and Tom Zur and Varun Iyer and Zehua Li},
year={2023},
eprint={2308.11462},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{koreeda2021contractnli,
title={ContractNLI: A dataset for document-level natural language inference for contracts},
author={Koreeda, Yuta and Manning, Christopher D},
journal={arXiv preprint arXiv:2110.01799},
year={2021}
}
@article{hendrycks2021cuad,
title={Cuad: An expert-annotated nlp dataset for legal contract review},
author={Hendrycks, Dan and Burns, Collin and Chen, Anya and Ball, Spencer},
journal={arXiv preprint arXiv:2103.06268},
year={2021}
}
@article{wang2023maud,
title={MAUD: An Expert-Annotated Legal NLP Dataset for Merger Agreement Understanding},
author={Wang, Steven H and Scardigli, Antoine and Tang, Leonard and Chen, Wei and Levkin, Dimitry and Chen, Anya and Ball, Spencer and Woodside, Thomas and Zhang, Oliver and Hendrycks, Dan},
journal={arXiv preprint arXiv:2301.00876},
year={2023}
}
@inproceedings{wilson2016creation,
title={The creation and analysis of a website privacy policy corpus},
author={Wilson, Shomir and Schaub, Florian and Dara, Aswarth Abhilash and Liu, Frederick and Cherivirala, Sushain and Leon, Pedro Giovanni and Andersen, Mads Schaarup and Zimmeck, Sebastian and Sathyendra, Kanthashree Mysore and Russell, N Cameron and others},
booktitle={Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={1330--1340},
year={2016}
}
@inproceedings{zheng2021does,
title={When does pretraining help? assessing self-supervised learning for law and the casehold dataset of 53,000+ legal holdings},
author={Zheng, Lucia and Guha, Neel and Anderson, Brandon R and Henderson, Peter and Ho, Daniel E},
booktitle={Proceedings of the eighteenth international conference on artificial intelligence and law},
pages={159--168},
year={2021}
}
@article{zimmeck2019maps,
title={Maps: Scaling privacy compliance analysis to a million apps},
author={Zimmeck, Sebastian and Story, Peter and Smullen, Daniel and Ravichander, Abhilasha and Wang, Ziqi and Reidenberg, Joel R and Russell, N Cameron and Sadeh, Norman},
journal={Proc. Priv. Enhancing Tech.},
volume={2019},
pages={66},
year={2019}
}
@article{ravichander2019question,
title={Question answering for privacy policies: Combining computational and legal perspectives},
author={Ravichander, Abhilasha and Black, Alan W and Wilson, Shomir and Norton, Thomas and Sadeh, Norman},
journal={arXiv preprint arXiv:1911.00841},
year={2019}
}
@article{holzenberger2021factoring,
title={Factoring statutory reasoning as language understanding challenges},
author={Holzenberger, Nils and Van Durme, Benjamin},
journal={arXiv preprint arXiv:2105.07903},
year={2021}
}
@article{lippi2019claudette,
title={CLAUDETTE: an automated detector of potentially unfair clauses in online terms of service},
author={Lippi, Marco and Pa{\l}ka, Przemys{\l}aw and Contissa, Giuseppe and Lagioia, Francesca and Micklitz, Hans-Wolfgang and Sartor, Giovanni and Torroni, Paolo},
journal={Artificial Intelligence and Law},
volume={27},
pages={117--139},
year={2019},
publisher={Springer}
}
``` |
lithium0003/findtextCenterNet_dataset | lithium0003 | "2024-11-16T15:43:06Z" | 14,961 | 0 | [
"license:mit",
"size_categories:100K<n<1M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"region:us"
] | null | "2024-01-14T08:58:51Z" | ---
license: mit
---
|
mteb/emotion | mteb | "2022-09-27T19:14:18Z" | 14,853 | 11 | [
"language:en",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2022-05-23T09:55:39Z" | ---
language:
- en
---
**Attention: there appears to be an overlap between the train and test sets. I trained a model on the train set and achieved 100% accuracy on the test set. With the original emotion dataset this is not the case (92.4% accuracy).** |
allenai/sciq | allenai | "2024-01-04T16:23:51Z" | 14,797 | 92 | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-3.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-nc-3.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- closed-domain-qa
paperswithcode_id: sciq
pretty_name: SciQ
dataset_info:
features:
- name: question
dtype: string
- name: distractor3
dtype: string
- name: distractor1
dtype: string
- name: distractor2
dtype: string
- name: correct_answer
dtype: string
- name: support
dtype: string
splits:
- name: train
num_bytes: 6546183
num_examples: 11679
- name: validation
num_bytes: 554120
num_examples: 1000
- name: test
num_bytes: 563927
num_examples: 1000
download_size: 4674410
dataset_size: 7664230
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for "sciq"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://allenai.org/data/sciq](https://allenai.org/data/sciq)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 2.82 MB
- **Size of the generated dataset:** 7.68 MB
- **Total amount of disk used:** 10.50 MB
### Dataset Summary
The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For the majority of the questions, an additional paragraph with supporting evidence for the correct answer is provided.
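The dataset can be loaded directly with the 🤗 `datasets` library; a minimal sketch using this repository's id:
```python
from datasets import load_dataset

sciq = load_dataset("allenai/sciq")
example = sciq["train"][0]
# Each example pairs a question with its correct answer, three distractors,
# and (for most questions) a supporting paragraph.
print(example["question"], "->", example["correct_answer"])
```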
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 2.82 MB
- **Size of the generated dataset:** 7.68 MB
- **Total amount of disk used:** 10.50 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"correct_answer": "coriolis effect",
"distractor1": "muon effect",
"distractor2": "centrifugal effect",
"distractor3": "tropical effect",
"question": "What phenomenon makes global winds blow northeast to southwest or the reverse in the northern hemisphere and northwest to southeast or the reverse in the southern hemisphere?",
"support": "\"Without Coriolis Effect the global winds would blow north to south or south to north. But Coriolis makes them blow northeast to..."
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `question`: a `string` feature.
- `distractor3`: a `string` feature.
- `distractor1`: a `string` feature.
- `distractor2`: a `string` feature.
- `correct_answer`: a `string` feature.
- `support`: a `string` feature.
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default|11679| 1000|1000|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is licensed under the [Creative Commons Attribution-NonCommercial 3.0 Unported License](http://creativecommons.org/licenses/by-nc/3.0/).
### Citation Information
```
@inproceedings{SciQ,
title={Crowdsourcing Multiple Choice Science Questions},
author={Johannes Welbl and Nelson F. Liu and Matt Gardner},
year={2017},
journal={arXiv:1707.06209v1}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
facebook/voxpopuli | facebook | "2022-10-14T13:43:12Z" | 14,783 | 95 | [
"task_categories:automatic-speech-recognition",
"multilinguality:multilingual",
"language:en",
"language:de",
"language:fr",
"language:es",
"language:pl",
"language:it",
"language:ro",
"language:hu",
"language:cs",
"language:nl",
"language:fi",
"language:hr",
"language:sk",
"language:sl",
"language:et",
"language:lt",
"license:cc0-1.0",
"license:other",
"size_categories:100K<n<1M",
"modality:audio",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2101.00390",
"region:us"
] | [
"automatic-speech-recognition"
] | "2022-05-10T14:42:49Z" | ---
annotations_creators: []
language:
- en
- de
- fr
- es
- pl
- it
- ro
- hu
- cs
- nl
- fi
- hr
- sk
- sl
- et
- lt
language_creators: []
license:
- cc0-1.0
- other
multilinguality:
- multilingual
pretty_name: VoxPopuli
size_categories: []
source_datasets: []
tags: []
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for Voxpopuli
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/facebookresearch/voxpopuli
- **Repository:** https://github.com/facebookresearch/voxpopuli
- **Paper:** https://arxiv.org/abs/2101.00390
- **Point of Contact:** [changhan@fb.com](mailto:changhan@fb.com), [mriviere@fb.com](mailto:mriviere@fb.com), [annl@fb.com](mailto:annl@fb.com)
### Dataset Summary
VoxPopuli is a large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation.
The raw data is collected from 2009-2020 [European Parliament event recordings](https://multimedia.europarl.europa.eu/en/home). We acknowledge the European Parliament for creating and sharing these materials.
This implementation contains transcribed speech data for 16 languages.
It also contains 29 hours of transcribed speech data of non-native English intended for research in ASR for accented speech (15 L2 accents).
### Example usage
VoxPopuli contains labelled data for 16 languages. To load a specific language pass its name as a config name:
```python
from datasets import load_dataset
voxpopuli_croatian = load_dataset("facebook/voxpopuli", "hr")
```
To load all the languages in a single dataset use "multilang" config name:
```python
voxpopuli_all = load_dataset("facebook/voxpopuli", "multilang")
```
To load a specific set of languages, use "multilang" config name and pass a list of required languages to `languages` parameter:
```python
voxpopuli_slavic = load_dataset("facebook/voxpopuli", "multilang", languages=["hr", "sk", "sl", "cs", "pl"])
```
To load accented English data, use "en_accented" config name:
```python
voxpopuli_accented = load_dataset("facebook/voxpopuli", "en_accented")
```
**Note that the L2 English subset contains only a `test` split.**
### Supported Tasks and Leaderboards
* automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
The accented English subset can also be used for research on ASR for accented speech (15 L2 accents).
### Languages
VoxPopuli contains labelled (transcribed) data for 16 languages:
| Language | Code | Transcribed Hours | Transcribed Speakers | Transcribed Tokens |
|:---:|:---:|:---:|:---:|:---:|
| English | En | 543 | 1313 | 4.8M |
| German | De | 282 | 531 | 2.3M |
| French | Fr | 211 | 534 | 2.1M |
| Spanish | Es | 166 | 305 | 1.6M |
| Polish | Pl | 111 | 282 | 802K |
| Italian | It | 91 | 306 | 757K |
| Romanian | Ro | 89 | 164 | 739K |
| Hungarian | Hu | 63 | 143 | 431K |
| Czech | Cs | 62 | 138 | 461K |
| Dutch | Nl | 53 | 221 | 488K |
| Finnish | Fi | 27 | 84 | 160K |
| Croatian | Hr | 43 | 83 | 337K |
| Slovak | Sk | 35 | 96 | 270K |
| Slovene | Sl | 10 | 45 | 76K |
| Estonian | Et | 3 | 29 | 18K |
| Lithuanian | Lt | 2 | 21 | 10K |
| Total | | 1791 | 4295 | 15M |
Accented speech transcribed data has 15 various L2 accents:
| Accent | Code | Transcribed Hours | Transcribed Speakers |
|:---:|:---:|:---:|:---:|
| Dutch | en_nl | 3.52 | 45 |
| German | en_de | 3.52 | 84 |
| Czech | en_cs | 3.30 | 26 |
| Polish | en_pl | 3.23 | 33 |
| French | en_fr | 2.56 | 27 |
| Hungarian | en_hu | 2.33 | 23 |
| Finnish | en_fi | 2.18 | 20 |
| Romanian | en_ro | 1.85 | 27 |
| Slovak | en_sk | 1.46 | 17 |
| Spanish | en_es | 1.42 | 18 |
| Italian | en_it | 1.11 | 15 |
| Estonian | en_et | 1.08 | 6 |
| Lithuanian | en_lt | 0.65 | 7 |
| Croatian | en_hr | 0.42 | 9 |
| Slovene | en_sl | 0.25 | 7 |
## Dataset Structure
### Data Instances
```python
{
'audio_id': '20180206-0900-PLENARY-15-hr_20180206-16:10:06_5',
'language': 11, # "hr"
'audio': {
'path': '/home/polina/.cache/huggingface/datasets/downloads/extracted/44aedc80bb053f67f957a5f68e23509e9b181cc9e30c8030f110daaedf9c510e/train_part_0/20180206-0900-PLENARY-15-hr_20180206-16:10:06_5.wav',
'array': array([-0.01434326, -0.01055908, 0.00106812, ..., 0.00646973], dtype=float32),
'sampling_rate': 16000
},
'raw_text': '',
'normalized_text': 'pošast genitalnog sakaćenja žena u europi tek je jedna od manifestacija takve štetne politike.',
'gender': 'female',
'speaker_id': '119431',
'is_gold_transcript': True,
'accent': 'None'
}
```
### Data Fields
* `audio_id` (string) - id of audio segment
* `language` (datasets.ClassLabel) - numerical id of the language of the audio segment
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* `raw_text` (string) - original (orthographic) audio segment text
* `normalized_text` (string) - normalized audio segment transcription
* `gender` (string) - gender of speaker
* `speaker_id` (string) - id of speaker
* `is_gold_transcript` (bool) - ?
* `accent` (string) - type of accent, for example "en_lt", if applicable, else "None".
### Data Splits
All configs (languages) except for accented English contain data in three splits: train, validation and test. Accented English `en_accented` config contains only test split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
The raw data is collected from 2009-2020 [European Parliament event recordings](https://multimedia.europarl.europa.eu/en/home)
#### Initial Data Collection and Normalization
The VoxPopuli transcribed set comes from aligning the full-event source speech audio with the transcripts for plenary sessions. Official timestamps
are available for locating speeches by speaker in the full session, but they are frequently inaccurate, resulting in truncation of the speech or mixture
of fragments from the preceding or the succeeding speeches. To calibrate the original timestamps,
we perform speaker diarization (SD) on the full-session audio using pyannote.audio (Bredin et al., 2020) and adopt the nearest SD timestamps (by L1 distance to the original ones) instead for segmentation.
Full-session audios are segmented into speech paragraphs by speaker, each of which has a transcript available.
The speech paragraphs have an average duration of 197 seconds, which leads to significant memory usage in model training. We hence further segment these paragraphs into utterances with a
maximum duration of 20 seconds. We leverage speech recognition (ASR) systems to force-align speech paragraphs to the given transcripts.
The ASR systems are TDS models (Hannun et al., 2019) trained with ASG criterion (Collobert et al., 2016) on audio tracks from in-house deidentified video data.
The resulting utterance segments may have incorrect transcriptions due to incomplete raw transcripts or inaccurate ASR force-alignment.
We use the predictions from the same ASR systems as references and filter the candidate segments by a maximum threshold of 20% character error rate (CER).
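As a rough illustration of this filtering step, the sketch below keeps a segment only if the CER between the ASR reference and the force-aligned transcript is at most 20%; the `jiwer` package is an assumed dependency used purely for illustration, not part of the original pipeline.
```python
# Minimal sketch of the CER-based segment filtering described above.
# Assumption: jiwer is used here purely for illustration.
import jiwer

def keep_segment(transcript: str, asr_reference: str, max_cer: float = 0.20) -> bool:
    # Character error rate between the ASR prediction (reference) and the
    # force-aligned transcript segment (hypothesis).
    return jiwer.cer(asr_reference, transcript) <= max_cer
```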
#### Who are the source language producers?
Speakers are participants of the European Parliament events, many of them are EU officials.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
The speaker gender distribution is imbalanced: the percentage of female speakers is mostly below 50% across languages, with a minimum of 15% for the Lithuanian language data.
VoxPopuli includes all available speeches from the 2009-2020 EP events without any selections on the topics or speakers.
The speech contents represent the standpoints of the speakers in the EP events, many of which are EU officials.
### Other Known Limitations
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is distributed under the CC0 license; see also the [European Parliament's legal notice](https://www.europarl.europa.eu/legal-notice/en/) for the raw data.
### Citation Information
Please cite this paper:
```bibtex
@inproceedings{wang-etal-2021-voxpopuli,
title = "{V}ox{P}opuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation",
author = "Wang, Changhan and
Riviere, Morgane and
Lee, Ann and
Wu, Anne and
Talnikar, Chaitanya and
Haziza, Daniel and
Williamson, Mary and
Pino, Juan and
Dupoux, Emmanuel",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.80",
pages = "993--1003",
}
```
### Contributions
Thanks to [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.
|
EpicPinkPenguin/procgen | EpicPinkPenguin | "2024-11-20T14:26:06Z" | 14,722 | 0 | [
"task_categories:reinforcement-learning",
"language:en",
"license:apache-2.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:1707.06347",
"region:us",
"procgen",
"bigfish",
"benchmark",
"openai",
"bossfight",
"caveflyer",
"chaser",
"climber",
"dodgeball",
"fruitbot",
"heist",
"jumper",
"leaper",
"maze",
"miner",
"ninja",
"plunder",
"starpilot"
] | [
"reinforcement-learning"
] | "2024-06-02T07:31:08Z" | ---
language:
- en
license: apache-2.0
size_categories:
- 10M<n<100M
task_categories:
- reinforcement-learning
pretty_name: Procgen Benchmark Dataset
dataset_info:
- config_name: bigfish
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 129932068797
dataset_size: 289372500000
- config_name: bossfight
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 198057598671
dataset_size: 289372500000
- config_name: caveflyer
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 149023406845
dataset_size: 289372500000
- config_name: chaser
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 63831099402
dataset_size: 289372500000
- config_name: climber
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 63990304413
dataset_size: 289372500000
- config_name: coinrun
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 76990220716
dataset_size: 289372500000
- config_name: dodgeball
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 104691253324
dataset_size: 289372500000
- config_name: fruitbot
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 271549939959
dataset_size: 289372500000
- config_name: heist
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 74316944819
dataset_size: 289372500000
- config_name: jumper
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 101573987650
dataset_size: 289372500000
- config_name: leaper
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 66796546658
dataset_size: 289372500000
- config_name: maze
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 75397896559
dataset_size: 289372500000
- config_name: miner
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 57170722948
dataset_size: 289372500000
- config_name: ninja
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 99759972643
dataset_size: 289372500000
- config_name: plunder
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 103307437365
dataset_size: 289372500000
- config_name: starpilot
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 170031712117
dataset_size: 289372500000
configs:
- config_name: bigfish
data_files:
- split: train
path: bigfish/train-*
- split: test
path: bigfish/test-*
- config_name: bossfight
data_files:
- split: train
path: bossfight/train-*
- split: test
path: bossfight/test-*
- config_name: caveflyer
data_files:
- split: train
path: caveflyer/train-*
- split: test
path: caveflyer/test-*
- config_name: chaser
data_files:
- split: train
path: chaser/train-*
- split: test
path: chaser/test-*
- config_name: climber
data_files:
- split: train
path: climber/train-*
- split: test
path: climber/test-*
- config_name: coinrun
data_files:
- split: train
path: coinrun/train-*
- split: test
path: coinrun/test-*
- config_name: dodgeball
data_files:
- split: train
path: dodgeball/train-*
- split: test
path: dodgeball/test-*
- config_name: fruitbot
data_files:
- split: train
path: fruitbot/train-*
- split: test
path: fruitbot/test-*
- config_name: heist
data_files:
- split: train
path: heist/train-*
- split: test
path: heist/test-*
- config_name: jumper
data_files:
- split: train
path: jumper/train-*
- split: test
path: jumper/test-*
- config_name: leaper
data_files:
- split: train
path: leaper/train-*
- split: test
path: leaper/test-*
- config_name: maze
data_files:
- split: train
path: maze/train-*
- split: test
path: maze/test-*
- config_name: miner
data_files:
- split: train
path: miner/train-*
- split: test
path: miner/test-*
- config_name: ninja
data_files:
- split: train
path: ninja/train-*
- split: test
path: ninja/test-*
- config_name: plunder
data_files:
- split: train
path: plunder/train-*
- split: test
path: plunder/test-*
- config_name: starpilot
data_files:
- split: train
path: starpilot/train-*
- split: test
path: starpilot/test-*
tags:
- procgen
- bigfish
- benchmark
- openai
- bossfight
- caveflyer
- chaser
- climber
- dodgeball
- fruitbot
- heist
- jumper
- leaper
- maze
- miner
- ninja
- plunder
- starpilot
---
# Procgen Benchmark
This dataset contains expert trajectories generated by a [PPO](https://arxiv.org/abs/1707.06347) reinforcement learning agent trained on each of the 16 procedurally-generated gym environments from the [Procgen Benchmark](https://openai.com/index/procgen-benchmark/). The environments were created on `distribution_mode=easy` and with unlimited levels.
Disclaimer: This is not an official repository from OpenAI.
## Dataset Usage
Regular usage (for environment bigfish):
```python
from datasets import load_dataset
train_dataset = load_dataset("EpicPinkPenguin/procgen", name="bigfish", split="train")
test_dataset = load_dataset("EpicPinkPenguin/procgen", name="bigfish", split="test")
```
Usage with PyTorch (for environment bossfight):
```python
from datasets import load_dataset
train_dataset = load_dataset("EpicPinkPenguin/procgen", name="bossfight", split="train").with_format("torch")
test_dataset = load_dataset("EpicPinkPenguin/procgen", name="bossfight", split="test").with_format("torch")
```
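Continuing from the snippet above, the torch-formatted dataset plugs directly into a standard PyTorch `DataLoader`; the batch size and shuffling below are illustrative choices, not recommendations:
```python
from torch.utils.data import DataLoader

# Batch the torch-formatted dataset; batch size is an illustrative choice.
loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
batch = next(iter(loader))
print(batch["observation"].shape)  # torch.Size([32, 64, 64, 3]), dtype=torch.uint8
```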
## Agent Performance
The PPO RL agent was trained for 25M steps on each environment and obtained the following final performance metrics on the evaluation environment. These values attain or surpass the performance described in "Easy Difficulty Baseline Results" in Appendix I of the paper.
| Environment | Steps (Train) | Steps (Test) | Return | Observation |
|:------------|:----------------|:---------------|:-------|:------------|
| bigfish | 9,000,000 | 1,000,000 | 29.72 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/lHQXBqLdoWicXlt68I9QX.mp4"></video> |
| bossfight | 9,000,000 | 1,000,000 | 11.13 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/LPoafGi4YBWqqkuFlEN_l.mp4"></video> |
| caveflyer | 9,000,000 | 1,000,000 | 08.95 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/XVqRwu_9yfX4ECQc4At4G.mp4"></video> |
| chaser | 9,000,000 | 1,000,000 | 10.98 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/FIKVv48SThqiC1Z2PYQ7U.mp4"></video> |
| climber | 9,000,000 | 1,000,000 | 11.66 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/XJQlA7IyF9_gwUiw-FkND.mp4"></video> |
| coinrun | 9,000,000 | 1,000,000 | 09.61 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/Ucv3HZttewMRQzTL8r_Tw.mp4"></video> |
| dodgeball | 9,000,000 | 1,000,000 | 11.07 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/5HetbKuXBpO-v1jcVyLTU.mp4"></video> |
| fruitbot | 9,000,000 | 1,000,000 | 32.49 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/zKCyxXvauXjUac-5kEAWz.mp4"></video> |
| heist | 9,000,000 | 1,000,000 | 08.37 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/AdZ6XNmUN5_00BKd9BN8R.mp4"></video> |
| jumper | 9,000,000 | 1,000,000 | 08.46 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/s5k31gWK2Vc6Lp6QVzQXA.mp4"></video> |
| leaper | 9,000,000 | 1,000,000 | 07.11 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/_hDMocxjmzutc0t5FfoTX.mp4"></video> |
| maze | 9,000,000 | 1,000,000 | 09.95 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/uhNdDPuNhZpxVns91Ba-9.mp4"></video> |
| miner | 9,000,000 | 1,000,000 | 12.21 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/ElpJ8l2WHJGrprZ3-giHU.mp4"></video> |
| ninja | 9,000,000 | 1,000,000 | 08.88 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/b9i-fb2Twh8XmBBNf2DRG.mp4"></video> |
| plunder | 9,000,000 | 1,000,000 | 22.19 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/JPeGNOVzrotuYUjfzZj40.mp4"></video> |
| starpilot | 9,000,000 | 1,000,000 | 49.94 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/wY9lZgkw5tor19hCWmm6A.mp4"></video> |
## Dataset Structure
### Data Instances
Each data instance represents a single step, stored as a tuple of the form (observation, action, reward, done, truncated) = (o_t, a_t, r_{t+1}, done_{t+1}, trunc_{t+1}).
```json
{'action': 1,
'done': False,
'observation': [[[0, 166, 253],
[0, 174, 255],
[0, 170, 251],
[0, 191, 255],
[0, 191, 255],
[0, 221, 255],
[0, 243, 255],
[0, 248, 255],
[0, 243, 255],
[10, 239, 255],
[25, 255, 255],
[0, 241, 255],
[0, 235, 255],
[17, 240, 255],
[10, 243, 255],
[27, 253, 255],
[39, 255, 255],
[58, 255, 255],
[85, 255, 255],
[111, 255, 255],
[135, 255, 255],
[151, 255, 255],
[173, 255, 255],
...
[0, 0, 37],
[0, 0, 39]]],
'reward': 0.0,
'truncated': False}
```
### Data Fields
- `observation`: The current RGB observation from the environment.
- `action`: The action predicted by the agent for the current observation.
- `reward`: The received reward from stepping the environment with the current action.
- `done`: If the new observation is the start of a new episode. Obtained after stepping the environment with the current action.
- `truncated`: If the new observation is the start of a new episode due to truncation. Obtained after stepping the environment with the current action.
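Because the `done` and `truncated` flags indicate that the *next* observation starts a new episode, a step where either flag is set is the last step of its episode. A minimal sketch for regrouping the flat step stream into episodes:
```python
def iter_episodes(dataset):
    """Group consecutive steps into episodes.

    A step whose `done` or `truncated` flag is set is treated as the last
    step of its episode, per the field semantics above.
    """
    episode = []
    for step in dataset:
        episode.append(step)
        if step["done"] or step["truncated"]:
            yield episode
            episode = []
    if episode:  # trailing steps from an unfinished episode, if any
        yield episode
```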
### Data Splits
The dataset is divided into a `train` (90%) and `test` (10%) split. Each environment dataset contains 10M steps (data points) in total.
## Dataset Creation
The dataset was created by training an RL agent with [PPO](https://arxiv.org/abs/1707.06347) for 25M steps in each environment. The trajectories were generated by sampling from the predicted action distribution at each step (not taking the argmax). The environments were created on `distribution_mode=easy` and with unlimited levels.
## Procgen Benchmark
The [Procgen Benchmark](https://openai.com/index/procgen-benchmark/), released by OpenAI, consists of 16 procedurally-generated environments designed to measure how quickly reinforcement learning (RL) agents learn generalizable skills. It emphasizes experimental convenience, high diversity within and across environments, and is ideal for evaluating both sample efficiency and generalization. The benchmark allows for distinct training and test sets in each environment, making it a standard research platform for the OpenAI RL team. It aims to address the need for more diverse RL benchmarks compared to complex environments like Dota and StarCraft. |
mteb/stsbenchmark-sts | mteb | "2022-09-27T19:11:21Z" | 14,702 | 11 | [
"language:en",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2022-04-19T14:53:43Z" | ---
language:
- en
--- |
BAAI/CCI3-HQ | BAAI | "2024-11-11T12:27:29Z" | 14,654 | 28 | [
"task_categories:text-generation",
"language:zh",
"size_categories:10M<n<100M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"arxiv:2410.18505",
"region:us"
] | [
"text-generation"
] | "2024-09-19T05:33:35Z" | ---
task_categories:
- text-generation
language:
- zh
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: score
dtype: float
splits:
- name: train
configs:
- config_name: default
data_files:
- split: train
path: data/part_*
extra_gated_prompt: "You agree to not use the dataset to conduct experiments that cause harm to human subjects."
extra_gated_fields:
Company/Organization: text
Country: country
---
## Data Description
To address the scarcity of high-quality safety datasets in the Chinese language, we open-sourced the [CCI](https://huggingface.co/datasets/BAAI/CCI-Data) (Chinese Corpora Internet) dataset on November 29, 2023.
Building on this foundation, we continued to expand the data sources, adopted stricter data cleaning methods, and completed the construction of the CCI 3.0 dataset. This dataset is composed of high-quality, reliable Internet data from trusted sources.
With even stricter filtering applied on top of it, the released CCI 3.0 HQ corpus is about 500 GB in size.
## Update
- Oct 25, 2024, CCI 3.0 HQ [Tech Report](./tech_report.pdf) released!
- Sep 20, 2024, CCI 3.0 HQ released!
## Data Format
| Field | Type | Meaning |
| :-------: | :----: | :--------------------------: |
| id | String | Document ID, globally unique |
| text | String | Content of the document |
| score | Float | Quality score of the document |
## Sample
```json
{
"id": "02301a3477ca2b5434ab29dfc32f95d853abc",
"text": "《农村财政与财务》杂志创办于1996,是中国农村财政研究会主管的国家重点学术期刊,国家级期刊,影响因子0.163,现被万方收录(中)等权威机构收录,主要方向:研究报告、文献综述、简报、专题研究\n《农村财政与财务》以宣传党和国家财政政策、推动税收体制改革、研究财税理论、指导基层财政和涉农工作,传播理财知识为宗旨,融政策性、指导性、权威性、实用性和知识性为一体。\n《农村财政与财务》是贯彻国家方针、政策、探索财税理论和有关难点、热点问题,交流财政科学化、精细化管理经验,帮助读者提高综合素质和政策水平不可或缺的理想媒体。\n中共中央办公厅国务院办公厅印发《关于加快构建政策体系培育新型农业经营主体的意见》\n9月5号投的,15号就给了初审结果,给出的修改意见,主要是篇幅过长,以及图片格式的问题。修改后过了一周,就发录用通知了。皇天不负有心人啊,继续努力。\n两个意见,总体来看属于一个大修,一个小修,编辑要求修改后复审。但是意见真的给的很中肯,用了一个星期时间认真修改。提交修改稿后,编辑部很快送出外审,当天外审专家就完成了复审工作,然后在第二天立马显示接收了。这个复审速度吓得我惊人,不敢相信是被录用了,后来打电话确认已被录用,等待后续排版工作。\n两个审稿人,审理比较负责,给出了几点小建议,属于小修,修改后录用,编辑对全文进行了细致标注,对格式要求、图表制作规范较为严格,杂志效率挺高,尤其是编辑部反应神速,必须赞一个。\n农村财政与财务杂志的编辑和审稿人都非常专业,两个审稿人分别提出了3条和5条审稿意见,而且有些意见颇有意义,但是对我的文章还是非常肯定的,不到一个月消息回复审稿人分别要求大修和小修,要求比较严谨,数据比较足够,就能中。祝好运。\n农村财政与财务杂志速度还是很快的,而且是我见过的回复字数最多最多的编辑信,投稿一个月,反馈结果。修改后,递交编辑部,审稿人很心细,改的很认真。连标点居然都帮我改……修改两次后录用。\n编辑的工作十分点赞,态度也是很友善,审稿专家也是非常专业,虽然历经的时间比较长才录用,但是也情有可原,毕竟投稿量太大,而且期间加上放假,难免时间较长,进入编辑加工阶段后才进行了咨询,编辑也进行了详细的回复,希望对各位投稿有所帮助。\n农村财政与财务杂志编辑很负责,整个投稿流程节奏非常快。个人感觉这个杂志还是不错的。2位审稿人都比较专业,有个审稿人的一些意见还是非常有帮助,非常有针对性。速度也比较快。推荐大家投稿!\n第二年来订阅杂志了,客服的态度很好哦,杂志的寄送也还及时,希望以后对老顾客有一定的优惠。\n农村财政与财务杂志的审稿速度还是值得肯定的。综合来说,审稿人还是比较认真的,给修改的也比较仔细,对创新性要求还算比较高吧,编辑老师也非常的平易近人。虽然是第一次投稿,但是还是很幸运被收录了。个人建议文章比较注重自主创新,思维清晰。希望能对大家有帮助!\n农村财政与财务杂志效率很高的,也觉得自己蛮幸运的。当时看到外审两三天回来了,以为要被拒了呢,结果给修改意见了。两周后提交修改稿,两三天后显示录用了。整个下来小一个月吧,第一次投稿,还是感觉蛮幸运的。\n该刊审稿较快,出刊也快前后跨度就半年左右,编辑老师态度很好,最好使用邮箱投稿,外审一般会告知你,里面文章质量感觉都挺好的,良心杂志,介意普刊的同仁可以投投看!!\n农村财政与财务杂志质量不错,审稿较严格,录用较快。属于很规范的中文杂志。编辑很负责,处理也很快、工作规范,相当满意。审稿专家很认真细致,意见提的很详细,对论文提高很有帮助!相当愉快的一次投稿经历~\n总的来说,审稿专家还是蛮认真的,对待问题都很细致。另外,编辑也相当赞,经常打电话去咨询状态,一直很要是有创意,内容丰富,应该就没有问题。\neleme**:杂志工作人员的处理速度相当不错哦,审稿专家很负责。\nfazhi**:投稿后编辑态度不错,邮件联系均有及时回复。\n15年11月16日投稿,修改了两次,第一次对文章创新性提出了意见,第二次是格式方面的修改,12月15日通知正刊录用。算是比较快的了。该刊给人的第一感觉就是正规,对论文内容、格式等要求也很严格,应该认真对待。祝大家成功!\nxiajia**:很开心。总体来说,审稿速度很快,比较满意;可以试试。\n9月初投稿,一直没有消息,月底打电话问,还在外审。10月初收到退修通知,修改后返回,编辑回复很快,让修改了格式,然后通知录用。编辑很负责。等待校稿和版费通知。\njince**:感觉给出的意见很诚恳,很有建设性。\n初审大概一周左右,进入外审程序。8月底左右还是正在二审中,我打电话问了下,才告诉我需要修改,网上的状态变成“二审已审回”;按照修改意见修改后以电子邮件形式提交,大概一周后收到录用通知。\nsansui**:审稿速度还是相当神速,编辑部老师很好,很负责任。\n农村财政与财务速度蛮快的,编辑部也很负责,很有主见。审稿人信息反馈很快,20多天就有消息了,录用消息也第一时间通知,很及时、速度、高效,一点也不耽误时间。\n编辑非常认真负责,邮件联系回复也非常快,稿件开始本来有些问题,考虑不用的,但是编辑又给了一次修改的机会,说是修改好了还可能录用,就花心思修,修改后一个月不到就说录用了,还有一些小问题后面陆续解决了。\n用了两个月的时候,才被录用。审稿周期不短,可能也是自己写的不好一再返修的原因。觉得审稿人给的身高意见比较细致、对问题的提出比较准确。农村财政与财务的档次也很高。写的有点多所以相对的版面费也就要多一些。\nsusu**:个人感觉该期刊对文章的选题热点、创新点、写作水平都比较注重。\n个人感觉还不错。第一篇中的论文,还是很开心的。5月28号投稿7月15号通知录用。修改意见中,只有文中的格式问题以及图标中的,字体,单位问题。修改后就成功录用啦。\n农村财政与财务杂志的审稿速度飞快,貌似一个月左右就拟录用了,然后改了两次格式,缩小篇幅,大概也就一个半月搞掂。编辑部人员服务态度很好!很有耐心!大家可以尝试下这个杂志。",
"score": 2.3
}
```
## Download
The CCI 3.0 HQ dataset is simultaneously open-sourced on the [BAAI DataHub](https://data.baai.ac.cn/details/BAAI-CCI3-HQ) and Huggingface.
### BAAI DataHub
Users can click the link [CCI 3.0 HQ Dataset](https://data.baai.ac.cn/details/BAAI-CCI3-HQ) to view the data files, and click to download.
Note that users need to register on BAAI DataHub to use the data, and filling out a survey questionnaire is required before their first download.
### Huggingface
To use the data, you can load it using the following code:
```python
from datasets import load_dataset
dataset = load_dataset("BAAI/CCI3-HQ")
```
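Given the size of the corpus, streaming access can be more practical than a full download; a minimal sketch:
```python
from itertools import islice
from datasets import load_dataset

# Stream records without downloading the full corpus.
stream = load_dataset("BAAI/CCI3-HQ", split="train", streaming=True)
for record in islice(stream, 3):
    print(record["id"], record["score"])
```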
### Evaluation
#### Setup
Because the datasets mix Chinese and English, we chose the Qwen2-0.5B model for dataset evaluation, with each experiment training on 100B tokens.
We follow the same evaluation setup for all models using [FineWeb setup](https://github.com/huggingface/cosmopedia/tree/main/evaluation) with [lighteval](https://github.com/huggingface/lighteval) library.
You can check out the [evaluation script](./lighteval_tasks_v2.py) here.
#### Results
We conducted two types of experiments:
1. Mixed Dataset Experiment: The ratio of English, code, and Chinese is 60% : 10% : 30%.
2. Chinese Dataset Experiment: The Chinese ratio is 100%.
For English datasets, we uniformly used [FineWeb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu/tree/main/sample/100BT). For code data, we used [StarCoder](https://huggingface.co/bigcode/starcoder).
For Chinese datasets, we selected [wanjuan-v1](https://github.com/opendatalab/WanJuan1.0), [skypile](https://huggingface.co/datasets/Skywork/SkyPile-150B), and [cci3.0](https://huggingface.co/datasets/BAAI/CCI3-Data).
For the Mixed Dataset Experiment, all evaluation metrics are averaged; for the Chinese Dataset Experiment, only the Chinese evaluation metrics are averaged.
![Evaluation Metrics](./exp_metrics.png)
All evaluation metrics across training are depicted below: ![Evaluation Metrics Across Training](./training_metrics_curve.png)
## Citation Information
You can cite [our paper](https://arxiv.org/abs/2410.18505) or this dataset:
```
@misc{wang2024cci30hqlargescalechinesedataset,
title={CCI3.0-HQ: a large-scale Chinese dataset of high quality designed for pre-training large language models},
author={Liangdong Wang and Bo-Wen Zhang and Chengwei Wu and Hanyu Zhao and Xiaofeng Shi and Shuhao Gu and Jijie Li and Quanyue Ma and TengFei Pan and Guang Liu},
year={2024},
eprint={2410.18505},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.18505},
}
```
## User Agreement
Users need to comply with the usage agreement of the CCI 3.0 HQ dataset. You can view the agreement by clicking on the following link: ([View Usage Agreement](https://data.baai.ac.cn/resources/agreement/cci_usage_aggrement.pdf)). |
trl-internal-testing/zen | trl-internal-testing | "2024-11-26T10:29:22Z" | 14,538 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-09-13T21:03:47Z" | ---
dataset_info:
- config_name: conversational_implicit_prompt_preference
features:
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 2755
num_examples: 17
- name: test
num_bytes: 386
num_examples: 2
download_size: 6623
dataset_size: 3141
- config_name: conversational_language_modeling
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 1399
num_examples: 17
- name: test
num_bytes: 210
num_examples: 2
download_size: 3723
dataset_size: 1609
- config_name: conversational_preference
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 2070
num_examples: 17
- name: test
num_bytes: 295
num_examples: 2
download_size: 8123
dataset_size: 2365
- config_name: conversational_prompt_completion
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: completion
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 1467
num_examples: 17
- name: test
num_bytes: 218
num_examples: 2
download_size: 5796
dataset_size: 1685
- config_name: conversational_prompt_only
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 821
num_examples: 17
- name: test
num_bytes: 107
num_examples: 2
download_size: 3326
dataset_size: 928
- config_name: conversational_unpaired_preference
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: completion
list:
- name: content
dtype: string
- name: role
dtype: string
- name: label
dtype: bool
splits:
- name: train
num_bytes: 1441
num_examples: 17
- name: test
num_bytes: 219
num_examples: 2
download_size: 6421
dataset_size: 1660
- config_name: standard_implicit_prompt_preference
features:
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 1537
num_examples: 17
- name: test
num_bytes: 258
num_examples: 2
download_size: 4330
dataset_size: 1795
- config_name: standard_language_modeling
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 744
num_examples: 17
- name: test
num_bytes: 136
num_examples: 2
download_size: 2457
dataset_size: 880
- config_name: standard_preference
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 1213
num_examples: 17
- name: test
num_bytes: 205
num_examples: 2
download_size: 4466
dataset_size: 1418
- config_name: standard_prompt_completion
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 812
num_examples: 17
- name: test
num_bytes: 144
num_examples: 2
download_size: 3231
dataset_size: 956
- config_name: standard_prompt_only
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 460
num_examples: 17
- name: test
num_bytes: 69
num_examples: 2
download_size: 2044
dataset_size: 529
- config_name: standard_stepwise
features:
- name: prompt
dtype: string
- name: completions
sequence: string
- name: label
sequence: bool
splits:
- name: train
num_bytes: 1402.9473684210527
num_examples: 17
- name: test
num_bytes: 165.05263157894737
num_examples: 2
download_size: 5033
dataset_size: 1568.0
- config_name: standard_stepwise_supervision
features:
- name: prompt
dtype: string
- name: completions
sequence: string
- name: labels
sequence: bool
splits:
- name: train
num_bytes: 1382
num_examples: 17
- name: test
num_bytes: 187
num_examples: 2
download_size: 5039
dataset_size: 1569
- config_name: standard_unpaired_preference
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: label
dtype: bool
splits:
- name: train
num_bytes: 840
num_examples: 17
- name: test
num_bytes: 131
num_examples: 2
download_size: 3861
dataset_size: 971
configs:
- config_name: conversational_implicit_prompt_preference
data_files:
- split: train
path: conversational_implicit_prompt_preference/train-*
- split: test
path: conversational_implicit_prompt_preference/test-*
- config_name: conversational_language_modeling
data_files:
- split: train
path: conversational_language_modeling/train-*
- split: test
path: conversational_language_modeling/test-*
- config_name: conversational_preference
data_files:
- split: train
path: conversational_preference/train-*
- split: test
path: conversational_preference/test-*
- config_name: conversational_prompt_completion
data_files:
- split: train
path: conversational_prompt_completion/train-*
- split: test
path: conversational_prompt_completion/test-*
- config_name: conversational_prompt_only
data_files:
- split: train
path: conversational_prompt_only/train-*
- split: test
path: conversational_prompt_only/test-*
- config_name: conversational_unpaired_preference
data_files:
- split: train
path: conversational_unpaired_preference/train-*
- split: test
path: conversational_unpaired_preference/test-*
- config_name: standard_implicit_prompt_preference
data_files:
- split: train
path: standard_implicit_prompt_preference/train-*
- split: test
path: standard_implicit_prompt_preference/test-*
- config_name: standard_language_modeling
data_files:
- split: train
path: standard_language_modeling/train-*
- split: test
path: standard_language_modeling/test-*
- config_name: standard_preference
data_files:
- split: train
path: standard_preference/train-*
- split: test
path: standard_preference/test-*
- config_name: standard_prompt_completion
data_files:
- split: train
path: standard_prompt_completion/train-*
- split: test
path: standard_prompt_completion/test-*
- config_name: standard_prompt_only
data_files:
- split: train
path: standard_prompt_only/train-*
- split: test
path: standard_prompt_only/test-*
- config_name: standard_stepwise
data_files:
- split: train
path: standard_stepwise/train-*
- split: test
path: standard_stepwise/test-*
- config_name: standard_stepwise_supervision
data_files:
- split: train
path: standard_stepwise_supervision/train-*
- split: test
path: standard_stepwise_supervision/test-*
- config_name: standard_unpaired_preference
data_files:
- split: train
path: standard_unpaired_preference/train-*
- split: test
path: standard_unpaired_preference/test-*
---
|
faur-ai/fulg | faur-ai | "2024-08-15T10:58:58Z" | 14,536 | 8 | [
"task_categories:text-generation",
"language:ro",
"license:odc-by",
"size_categories:100B<n<1T",
"arxiv:2407.13657",
"region:us",
"language-modeling",
"casual-lm",
"llm"
] | [
"text-generation"
] | "2024-07-16T20:17:27Z" | ---
license: odc-by
viewer: true
task_categories:
- text-generation
language:
- ro
tags:
- language-modeling
- casual-lm
- llm
pretty_name: FuLG
size_categories:
- 100B<n<1T
---
# ❄️FuLG
The FuLG dataset is a comprehensive Romanian language corpus comprising 150 billion tokens, carefully
extracted from Common Crawl. This extensive dataset is the result of rigorous filtering and deduplication
processes applied to 95 Common Crawl snapshots. The compressed dataset totals 289 GB.
For more details, check the [arXiv preprint](https://arxiv.org/abs/2407.13657).
### How do I download this?
##### Using 🤗 Datasets
```python
from datasets import load_dataset
# Full dataset
dataset = load_dataset("faur-ai/fulg")
# To load the data from a specific CC snapshot
dataset = load_dataset("faur-ai/fulg", data_dir='2018-05')
```
##### Using Git
```bash
git clone https://huggingface.co/datasets/faur-ai/fulg
```
### Data Fields
The data have several fields:
- `url`: url of the source as a string
- `date_download`: date of crawl
- `digest`: hash of content
- `length`: length of content
- `nlines`: number of lines
- `source_domain`: domain of document
- `title`: title of document
- `raw_content`: text content as a string
- `cc_segment`: source CommonCrawl segment
- `original_nlines`: original number of lines before processing
- `original_length`: original length before processing
- `language`: language (ro)
- `language_score`: score for language
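For example, the `language_score` field can be used to filter documents on the fly; a minimal sketch, assuming a `train` split and an illustrative threshold:
```python
from itertools import islice
from datasets import load_dataset

stream = load_dataset("faur-ai/fulg", split="train", streaming=True)
# Keep documents with a confident language-identification score (0.9 is illustrative).
confident = stream.filter(lambda doc: doc["language_score"] > 0.9)
for doc in islice(confident, 3):
    print(doc["source_domain"], doc["title"])
```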
### Licensing Information
We are releasing this dataset under the terms of
[ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this dataset,
you are also bound by any license agreements and terms of use of the original data
sources.
## Bibtex
If you use our dataset, please cite us at:
```bibtex
@misc{fulg150bromaniancorpus,
title={FuLG: 150B Romanian Corpus for Language Model Pretraining},
author={Vlad-Andrei Bădoiu and Mihai-Valentin Dumitru and Alexandru M. Gherghescu and Alexandru Agache and Costin Raiciu},
year={2024},
eprint={2407.13657},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2407.13657},
}
```
|
fixie-ai/common_voice_17_0 | fixie-ai | "2024-10-08T01:12:57Z" | 14,506 | 4 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-21T18:56:23Z" | ---
dataset_info:
- config_name: ar
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: validation
num_bytes: 300234489.0
num_examples: 10470
- name: test
num_bytes: 311234035.0
num_examples: 10480
- name: train
num_bytes: 718845895.0
num_examples: 28369
download_size: 1250028526
dataset_size: 1330314419.0
- config_name: de
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 23759438592.6
num_examples: 589100
- name: test
num_bytes: 715601886.0
num_examples: 16183
- name: validation
num_bytes: 710830645.0
num_examples: 16183
download_size: 24582787064
dataset_size: 25185871123.6
- config_name: en
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: test
num_bytes: 9329520290.338
num_examples: 16393
- name: validation
num_bytes: 9434608798.338
num_examples: 16393
- name: train
num_bytes: 44987747251.6
num_examples: 1101170
- name: validated
num_bytes: 68921650062.024
num_examples: 1799288
download_size: 128219063641
dataset_size: 132673526402.3
- config_name: es
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 13216214878.31
num_examples: 336846
- name: test
num_bytes: 748084507.0
num_examples: 15857
- name: validation
num_bytes: 770184703.0
num_examples: 15857
download_size: 14415677901
dataset_size: 14734484088.309998
- config_name: fr
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 20630346378.228
num_examples: 558054
- name: test
num_bytes: 684908439.0
num_examples: 16159
- name: validation
num_bytes: 703910244.0
num_examples: 16159
download_size: 21981003249
dataset_size: 22019165061.228
- config_name: frold
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 20616364930.228
num_examples: 558054
- name: test
num_bytes: 674959025.258
num_examples: 16159
- name: validation
num_bytes: 703829746.38
num_examples: 16159
download_size: 21972606682
dataset_size: 21995153701.866
- config_name: hi
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 275394930.996
num_examples: 9378
- name: validation
num_bytes: 145392985.176
num_examples: 4856
- name: test
num_bytes: 220164125.264
num_examples: 6308
- name: other
num_bytes: 253400896.056
num_examples: 8088
- name: invalidated
num_bytes: 53706876.0
num_examples: 1550
- name: validated
num_bytes: 721036368.28
num_examples: 20658
download_size: 1481543483
dataset_size: 1669096181.7719998
- config_name: it
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 6137402083.638
num_examples: 169771
- name: validation
num_bytes: 701042124.0
num_examples: 15149
- name: test
num_bytes: 741163579.0
num_examples: 15155
download_size: 7600033249
dataset_size: 7579607786.638
- config_name: ja
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: validation
num_bytes: 186515137.0
num_examples: 6261
- name: test
num_bytes: 199063298.0
num_examples: 6261
- name: train
num_bytes: 307772889.0
num_examples: 10039
download_size: 684220424
dataset_size: 693351324.0
- config_name: pt
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: validation
num_bytes: 290319070.0
num_examples: 9464
- name: test
num_bytes: 304560776.0
num_examples: 9467
- name: train
num_bytes: 624494986.0
num_examples: 21968
download_size: 1188978689
dataset_size: 1219374832.0
- config_name: ru
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: validation
num_bytes: 393037777.0
num_examples: 10203
- name: test
num_bytes: 397099376.0
num_examples: 10203
- name: train
num_bytes: 977625337.0
num_examples: 26377
download_size: 1734268016
dataset_size: 1767762490.0
- config_name: sv-SE
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 201604157.344
num_examples: 7744
- name: validation
num_bytes: 145407584.16
num_examples: 5210
- name: test
num_bytes: 168456898.744
num_examples: 5259
- name: other
num_bytes: 182626841.121
num_examples: 6759
- name: invalidated
num_bytes: 43666692.56
num_examples: 1428
- name: validated
num_bytes: 1302439008.81
num_examples: 40770
download_size: 1772780355
dataset_size: 2044201182.7389998
- config_name: tr
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 854586956.976
num_examples: 35147
- name: validation
num_bytes: 265450510.268
num_examples: 11258
- name: test
num_bytes: 363424742.28
num_examples: 11290
- name: other
num_bytes: 4238883.0
num_examples: 117
- name: invalidated
num_bytes: 152949072.07
num_examples: 4530
- name: validated
num_bytes: 2694662410.926
num_examples: 114056
download_size: 4038924157
dataset_size: 4335312575.5199995
- config_name: uk
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 824014245.552
num_examples: 25137
- name: validation
num_bytes: 338351263.068
num_examples: 10007
- name: test
num_bytes: 363575667.839
num_examples: 10011
- name: other
num_bytes: 211123163.846
num_examples: 7851
- name: invalidated
num_bytes: 141986802.304
num_examples: 3204
- name: validated
num_bytes: 2579348540.4549994
num_examples: 75489
download_size: 4037277320
dataset_size: 4458399683.063999
configs:
- config_name: ar
data_files:
- split: validation
path: ar/validation-*
- split: test
path: ar/test-*
- split: train
path: ar/train-*
- config_name: de
data_files:
- split: validation
path: de/validation-*
- split: test
path: de/test-*
- split: train
path: de/train-*
- config_name: en
data_files:
- split: test
path: en/test-*
- split: validation
path: en/validation-*
- split: train
path: en/train-*
- split: validated
path: en/validated-*
- config_name: es
data_files:
- split: validation
path: es/validation-*
- split: test
path: es/test-*
- split: train
path: es/train-*
- config_name: fr
data_files:
- split: validation
path: fr/validation-*
- split: train
path: frnew/train-*
- split: test
path: fr/test-*
- config_name: frold
data_files:
- split: train
path: fr/train-*
- split: test
path: fr/test-*
- split: validation
path: fr/validation-*
- config_name: hi
data_files:
- split: train
path: hi/train/**
- split: validation
path: hi/validation/**
- split: test
path: hi/test/**
- split: other
path: hi/other/**
- split: invalidated
path: hi/invalidated/**
- split: validated
path: hi/validated/**
- config_name: it
data_files:
- split: validation
path: it/validation-*
- split: test
path: it/test-*
- split: train
path: it/train-*
- config_name: ja
data_files:
- split: validation
path: ja/validation-*
- split: test
path: ja/test-*
- split: train
path: ja/train-*
- config_name: pt
data_files:
- split: validation
path: pt/validation-*
- split: test
path: pt/test-*
- split: train
path: pt/train-*
- config_name: ru
data_files:
- split: validation
path: ru/validation-*
- split: test
path: ru/test-*
- split: train
path: ru/train-*
- config_name: sv-SE
data_files:
- split: train
path: sv-SE/train/**
- split: validation
path: sv-SE/validation/**
- split: test
path: sv-SE/test/**
- split: other
path: sv-SE/other/**
- split: invalidated
path: sv-SE/invalidated/**
- split: validated
path: sv-SE/validated/**
- config_name: tr
data_files:
- split: train
path: tr/train/**
- split: validation
path: tr/validation/**
- split: test
path: tr/test/**
- split: other
path: tr/other/**
- split: invalidated
path: tr/invalidated/**
- split: validated
path: tr/validated/**
- config_name: uk
data_files:
- split: train
path: uk/train/**
- split: validation
path: uk/validation/**
- split: test
path: uk/test/**
- split: other
path: uk/other/**
- split: invalidated
path: uk/invalidated/**
- split: validated
path: uk/validated/**
---
|
ceval/ceval-exam | ceval | "2023-08-31T14:04:10Z" | 14,487 | 244 | [
"task_categories:text-classification",
"task_categories:multiple-choice",
"task_categories:question-answering",
"language:zh",
"license:cc-by-nc-sa-4.0",
"size_categories:10K<n<100K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2305.08322",
"region:us"
] | [
"text-classification",
"multiple-choice",
"question-answering"
] | "2023-05-16T01:47:44Z" | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
- multiple-choice
- question-answering
language:
- zh
pretty_name: C-Eval
size_categories:
- 10K<n<100K
---
C-Eval is a comprehensive Chinese evaluation suite for foundation models. It consists of 13,948 multiple-choice questions spanning 52 diverse disciplines and four difficulty levels. Please visit our [website](https://cevalbenchmark.com/) and [GitHub](https://github.com/SJTU-LIT/ceval/tree/main) or check our [paper](https://arxiv.org/abs/2305.08322) for more details.
Each subject consists of three splits: dev, val, and test. The dev set per subject consists of five exemplars with explanations for few-shot evaluation. The val set is intended for hyperparameter tuning, and the test set is for model evaluation. Labels on the test split are not released; users are required to submit their results to automatically obtain test accuracy. [How to submit?](https://github.com/SJTU-LIT/ceval/tree/main#how-to-submit)
### Load the data
```python
from datasets import load_dataset
dataset = load_dataset("ceval/ceval-exam", name="computer_network")
print(dataset["val"][0])
# {'id': 0, 'question': '使用位填充方法,以01111110为位首flag,数据为011011111111111111110010,求问传送时要添加几个0____', 'A': '1', 'B': '2', 'C': '3', 'D': '4', 'answer': 'C', 'explanation': ''}
```
More details on loading and using the data are at our [github page](https://github.com/SJTU-LIT/ceval#data).
Please cite our paper if you use our dataset.
```
@article{huang2023ceval,
title={C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models},
author={Huang, Yuzhen and Bai, Yuzhuo and Zhu, Zhihao and Zhang, Junlei and Zhang, Jinghan and Su, Tangjun and Liu, Junteng and Lv, Chuancheng and Zhang, Yikai and Lei, Jiayi and Fu, Yao and Sun, Maosong and He, Junxian},
journal={arXiv preprint arXiv:2305.08322},
year={2023}
}
```
|
mlfoundations/MINT-1T-PDF-CC-2023-23 | mlfoundations | "2024-09-19T21:07:25Z" | 14,351 | 1 | [
"task_categories:image-to-text",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"arxiv:2406.11271",
"region:us",
"multimodal"
] | [
"image-to-text",
"text-generation"
] | "2024-07-12T05:43:59Z" | ---
license: cc-by-4.0
task_categories:
- image-to-text
- text-generation
language:
- en
tags:
- multimodal
pretty_name: MINT-1T
size_categories:
- 100B<n<1T
---
<h1 align="center">
🍃 MINT-1T:<br>Scaling Open-Source Multimodal Data by 10x:<br> A Multimodal Dataset with One Trillion Tokens
</h1>
🍃 MINT-1T is an open-source **M**ultimodal **INT**erleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. 🍃 MINT-1T is designed to facilitate research in multimodal pretraining. 🍃 MINT-1T was created by a team from the University of Washington in collaboration with Salesforce Research and other academic institutions, including Stanford University, the University of Texas at Austin, and the University of California, Berkeley.
You are currently viewing a subset of the PDF portion of 🍃 MINT-1T associated with CommonCrawl dump `CC-2023-23`. For other PDF, HTML, and ArXiv subsets, refer to the [🍃 MINT-1T collection](https://huggingface.co/collections/mlfoundations/mint-1t-6690216ca4d0df7e518dde1c).
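A minimal sketch for peeking at this subset with the 🤗 `datasets` library; the `train` split name is an assumption, and streaming avoids pulling every WebDataset shard up front:

```python
from datasets import load_dataset

# Stream the WebDataset shards instead of downloading them all
ds = load_dataset("mlfoundations/MINT-1T-PDF-CC-2023-23", split="train", streaming=True)

sample = next(iter(ds))
print(sample.keys())  # inspect the keys (text, images, metadata) each document carries
```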
![Examples](interleaved-example-twitter.png)
## Updates
### 9/19/24
We have removed roughly 10% of the PDF samples as there was a mismatch between the frames in the TIFF images and the document metadata.
### 8/8/24
We have become aware that the image hashes in the PDF subset of MINT-1T do not match the images in the documents. We want to emphasize that the images for each document are correct, and only the image hashes in the documents' metadata are mislabeled.
## Dataset Details
### Dataset Sources
- **Repository**: https://github.com/mlfoundations/MINT-1T
- **Paper:** https://arxiv.org/abs/2406.11271
- **Blog:** https://blog.salesforceairesearch.com/mint-1t/
## Uses
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
🍃 MINT-1T is designed to facilitate research in multimodal pretraining. The dataset can be used for training multimodal models that can reason about interleaved text-and-image sequences, such as [Idefics2](https://huggingface.co/HuggingFaceM4/idefics2-8b), [XGen-MM](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-r-v1), and [Chameleon](https://huggingface.co/facebook/chameleon-30b).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
🍃 MINT-1T was built to make research into large multimodal models more accessible. Training models that ingest or generate personally identifying information (such as images of people's faces and other sensitive content) and any military application are both inappropriate use cases of 🍃 MINT-1T.
## Dataset Creation
### Curation Rationale
🍃 MINT-1T was created to address a significant gap in the open-source domain by providing a large-scale multimodal interleaved dataset for pre-training large multimodal models. This dataset aims to be a valuable resource for the research community, facilitating open science in multimodal pretraining.
### Source Data
The dataset is a comprehensive collection of multimodal documents from various sources:
- HTML documents: Filtered from CommonCrawl WARC dumps spanning from 2017 to 2024
- PDF documents: Extracted from CommonCrawl WAT dumps covering 2023 to 2024
- ArXiv documents: A subset of papers from the ArXiv repository
In total, 🍃 MINT-1T contains 1056.8 million documents, broken down as follows:
- 1029.4 million HTML documents
- 24.0 million PDF documents
- 0.6 million ArXiv documents
#### Data Collection and Processing
The data collection and processing involved several steps:
1. Document Extraction:
- HTML documents were parsed from CommonCrawl WARC files
- PDF documents were extracted from CommonCrawl WAT files
- ArXiv papers were directly sourced from ArXiv S3 buckets
2. Filtering Process:
- Applied text quality filters to ensure content relevance and readability
- Removed duplicate content at both paragraph and document levels
- Filtered out undesirable content based on predefined criteria
- Verified image availability and quality for HTML documents
- Limited PDF size to 50MB and 50 pages to manage dataset size and quality
3. Image Processing:
- Used NSFW image detection to remove pornographic or otherwise undesirable images
- Removed images smaller than 150 pixels or larger than 20,000 pixels
- Adjusted aspect ratio thresholds for HTML (2:1) and PDF (3:1) to preserve scientific figures
4. Text Processing:
- Used fasttext for language identification, focusing on English content
- Masked personally identifiable information such as email addresses and IP addresses
- Applied paragraph and document-level deduplication using Bloom filters
5. PDF Specific Processing:
- Used PyMuPDF for parsing PDFs and extracting reading order
- Clustered text blocks based on columns and ordered from top left to bottom right
6. ArXiv Specific Processing:
- Used TexSoup to parse LaTeX source code and interleave images with text
- Cleaned up LaTeX code by removing imports, bibliography, tables, and citation tags
Various open-source tools were utilized in this process, including fasttext, [PyMuPDF](https://github.com/pymupdf/PyMuPDF), and [DCLM](https://www.datacomp.ai/dclm/) and [bff](https://github.com/revbucket/bff) for deduplication and content filtering.
#### Personal and Sensitive Information
Despite sourcing from public web data, significant efforts were made to minimize the inclusion of personal and sensitive information:
- Email addresses and IP addresses were masked to protect privacy
- An NSFW image classifier was used to remove inappropriate visual content
- URLs containing substrings associated with undesirable or sensitive content were filtered out
However, users should be aware that as the data originates from the public web, it may still contain some sensitive or personal information. The dataset creators acknowledge this limitation and advise users to exercise caution and potentially apply additional filtering based on their specific use cases.
## Bias, Risks, and Limitations
Several potential biases, risks, and limitations have been identified:
1. Data Bias: As the dataset is sourced from web crawls, it may inherit biases present in online content.
2. Content Risks: Despite extensive filtering, there's a possibility that some offensive, insensitive, or inappropriate content may remain in the dataset.
3. Image Availability: The dataset relies on external image URLs, which may become unavailable over time due to link rot, potentially affecting the dataset's long-term usability.
4. PDF Parsing Limitations: The current method for extracting reading order from PDFs may not always accurately capture the intended flow, especially for documents with complex layouts.
5. Potential Legal and Ethical Concerns: While efforts were made to respect robots.txt files and remove sensitive information, there may still be content that individuals did not explicitly consent to include.
### Recommendations
Given these considerations, the following recommendations are provided:
1. Additional Filtering: Users are strongly encouraged to apply additional filtering based on their specific use case and ethical considerations.
2. Inappropriate Use Cases: The dataset is not recommended for applications involving the processing or generation of personally identifying information, nor for military applications.
3. Legal Compliance: Users should independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
4. Bias Awareness: Researchers and developers should be cognizant of potential biases in the dataset and consider their impact on model training and outputs.
## License
We release 🍃 MINT-1T under a CC-BY-4.0 license, designating it primarily as a research artifact. While the dataset is freely available, users are responsible for ensuring its legal use in commercial settings. Users must independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
## Citation
```
@article{awadalla2024mint1t,
title={MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens},
author={Anas Awadalla and Le Xue and Oscar Lo and Manli Shu and Hannah Lee and Etash Kumar Guha and Matt Jordan and Sheng Shen and Mohamed Awadalla and Silvio Savarese and Caiming Xiong and Ran Xu and Yejin Choi and Ludwig Schmidt},
year={2024}
}
``` |
shuaishuaicdp/GUI-World | shuaishuaicdp | "2024-06-23T09:15:47Z" | 14,332 | 15 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"size_categories:10K<n<100K",
"modality:video",
"arxiv:2406.10819",
"region:us"
] | [
"question-answering",
"text-generation"
] | "2024-06-13T09:12:47Z" | ---
task_categories:
- question-answering
- text-generation
language:
- en
pretty_name: GUI-World
size_categories:
- 10K<n<100K
---
<div align="center">
<h1>GUI-World: A Dataset for GUI-Orientated Multimodal Large Language Models</h1>
[![Paper](https://img.shields.io/badge/Paper-%F0%9F%8E%93-lightgrey?style=flat-square)](https://arxiv.org/abs/2406.10819) [![Model](https://img.shields.io/badge/Dataset-%F0%9F%92%BE-green?style=flat-square)](https://huggingface.co/shuaishuaicdp/GUI-Vid) [![Website](https://img.shields.io/badge/Website-%F0%9F%90%BE-green?style=flat-square)](https://gui-world.github.io/)
<img src="figures/GUI_overview.png">
<img src="figures/radar.jpg">
<p align="center">
</p>
</div>
## Dataset: GUI-World
### Overview
GUI-World introduces a comprehensive benchmark for evaluating MLLMs in dynamic and complex GUI environments. It features extensive annotations covering six GUI scenarios and eight types of GUI-oriented questions. The dataset assesses state-of-the-art ImageLLMs and VideoLLMs, highlighting their limitations in handling dynamic and multi-step tasks. It provides valuable insights and a foundation for future research in enhancing the understanding and interaction capabilities of MLLMs with dynamic GUI content. This dataset aims to advance the development of robust GUI agents capable of perceiving and interacting with both static and dynamic GUI elements.
### How to use GUI-World
See [Github](https://github.com/Dongping-Chen/GUI-World) for further details. Based on GUI-World, we train the first VideoLLM [**GUI-Vid**](https://huggingface.co/shuaishuaicdp/GUI-Vid) with powerful GUI understanding capability.
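To fetch the videos and annotations locally, a minimal sketch using `huggingface_hub` (the target directory is arbitrary):

```python
from huggingface_hub import snapshot_download

# Download the full dataset repository (videos + annotations)
snapshot_download(repo_id="shuaishuaicdp/GUI-World",
                  repo_type="dataset",
                  local_dir="./GUI-World")
```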
## License
This work is licensed under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).
## Citation
```
@article{chen2024gui,
title={GUI-WORLD: A Dataset for GUI-Orientated Multimodal Large Language Models},
author={GUI-World Team},
year={2024}
}
``` |
datablations/oscar-filter | datablations | "2023-05-10T06:58:28Z" | 14,314 | 0 | [
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-02-01T13:04:53Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: meta
struct:
- name: warc_headers
struct:
- name: warc-record-id
dtype: string
- name: warc-date
dtype: string
- name: content-type
dtype: string
- name: content-length
dtype: int32
- name: warc-type
dtype: string
- name: warc-identified-content-language
dtype: string
- name: warc-refers-to
dtype: string
- name: warc-target-uri
dtype: string
- name: warc-block-digest
dtype: string
- name: identification
struct:
- name: label
dtype: string
- name: prob
dtype: float32
- name: annotations
sequence: string
- name: line_identifications
list:
- name: label
dtype: string
- name: prob
dtype: float32
- name: perplexity_score
dtype: float64
- name: text_length
dtype: int64
- name: url
dtype: string
- name: domain
dtype: string
- name: dup_ratio
dtype: float64
- name: pairs
sequence:
sequence: int64
- name: repetitions
sequence: binary
- name: included_in_dedup
dtype: bool
- name: cluster
sequence: int64
splits:
- name: train
num_bytes: 3188486875748
num_examples: 431992659
download_size: 419397499659
dataset_size: 3188486875748
---
This is the variant where we build the suffix array for 25% of Oscar and deduplicate only that part. By deduplication we mean removing any document that has a span of at least 100 characters overlapping with another document in the 25% chunk. This is very strict and preserves only about 20 million documents, i.e. less than 5% of the full Oscar. |
indolem/IndoMMLU | indolem | "2023-10-11T04:30:54Z" | 14,154 | 14 | [
"task_categories:question-answering",
"language:id",
"license:mit",
"size_categories:10K<n<100K",
"arxiv:2310.04928",
"arxiv:2112.10668",
"arxiv:2302.13971",
"region:us",
"knowledge"
] | [
"question-answering"
] | "2023-10-10T11:16:12Z" | ---
license: mit
task_categories:
- question-answering
language:
- id
tags:
- knowledge
pretty_name: IndoMMLU
size_categories:
- 10K<n<100K
---
# IndoMMLU
<!---
[![evaluation](https://img.shields.io/badge/OpenCompass-Support-royalblue.svg
)](https://github.com/internLM/OpenCompass/) [![evaluation](https://img.shields.io/badge/lm--evaluation--harness-Support-blue
)](https://github.com/EleutherAI/lm-evaluation-harness)
-->
<p align="center"> <img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/IndoMMLU-Bar.png" style="width: 100%;" id="title-icon">
</p>
<p align="center"> <a href="http://www.fajrikoto.com" target="_blank">Fajri Koto</a>, <a href="https://www.linkedin.com/in/nuaisyah/" target="_blank">Nurul Aisyah</a>, <a href="https://haonan-li.github.io/" target="_blank">Haonan Li</a>, <a href="https://people.eng.unimelb.edu.au/tbaldwin/" target="_blank">Timothy Baldwin</a> </p>
<h4 align="center">
<p align="center" style="display: flex; flex-direction: row; justify-content: center; align-items: center">
📄 <a href="https://arxiv.org/abs/2310.04928" target="_blank" style="margin-right: 15px; margin-left: 10px">Paper</a> •
🏆 <a href="https://github.com/fajri91/IndoMMLU/blob/main/README_EN.md#evaluation" target="_blank" style="margin-left: 10px">Leaderboard</a> •
🤗 <a href="https://huggingface.co/datasets/indolem/indommlu" target="_blank" style="margin-left: 10px">Dataset</a>
</p>
</h4>
## Introduction
We introduce IndoMMLU, the first multi-task language understanding benchmark for Indonesian culture and languages,
which consists of questions from primary school to university entrance exams in Indonesia. By employing professional teachers,
we obtain 14,906 questions across 63 tasks and education levels, with 46% of the questions focusing on assessing proficiency
in the Indonesian language and knowledge of nine local languages and cultures in Indonesia.
<p align="left"> <img src="https://github.com/fajri91/eval_picts/blob/master/IndoMMLU-dist.png?raw=true" style="width: 500px;" id="title-icon"> </p>
## Subjects
| Level | Subjects |
|-----------|------------------------------------|
| SD (Primary School) | Science, Social science, Civics, Indonesian Language, Balinese, Makassarese, Banjarese, Lampungic, Madurese, Sundanese, Javanese, Dayak Ngaju, Minangkabau culture, Art, Sports, Islam religion, Christian religion, Hindu religion |
| SMP (Junior High School) | Science, Social science, Civics, Indonesian Language, Balinese, Makassarese, Banjarese, Lampungic, Madurese, Sundanese, Javanese, Minangkabau culture, Art, Sports, Islam religion, Christian religion, Hindu religion |
| SMA (Senior High School) | Physics, Chemistry, Biology, Geography, Sociology, Economics, History, Civics, Indonesian Language, Balinese, Makassarese, Banjarese, Lampungic, Madurese, Sundanese, Javanese, Art, Sports, Islam religion, Christian religion, Hindu religion |
| University Entrance Test | Chemistry, Biology, Geography, Sociology, Economics, History, Indonesian Language |
We categorize the collected questions into different subject areas, including: (1) STEM (Science, Technology, Engineering, and Mathematics); (2) Social Science; (3) Humanities; (4) Indonesian Language; and (5) Local Languages and Cultures.
## Examples
These questions are written in Indonesian. For local language subjects, some are written in the local languages. The English version is for illustrative purposes only.
<p align="left">
<img src="https://github.com/fajri91/eval_picts/blob/master/min_example.png?raw=true" style="width: 400px;" id="title-icon">
</p>
## Evaluation
We evaluate 24 multilingual LLMs of different sizes in zero-shot and few-shot settings. This includes [GPT-3.5 (ChatGPT)](https://chat.openai.com/), [XGLM](https://arxiv.org/abs/2112.10668), [Falcon](https://falconllm.tii.ae/), [BLOOMZ](https://huggingface.co/bigscience/bloomz), [mT0](https://huggingface.co/bigscience/bloomz), [LLaMA](https://arxiv.org/abs/2302.13971), and [Bactrian-X](https://github.com/mbzuai-nlp/bactrian-x). Prior to the question and multiple-choice options, we add a simple prompt in the Indonesian language:
```
Ini adalah soal [subject] untuk [level]. Pilihlah salah satu jawaban yang dianggap benar!
English Translation: This is a [subject] question for [level]. Please choose the correct answer!
```
#### Zero-shot Evaluation
| Model (#param) | STEM | Social Science | Humanities | Indonesian Lang. | Local L. Culture | Average |
|---------------------|------|----------|-------------|---------|----------|---------|
| Random | 21.9 | 23.4 | 23.5 | 24.4 | 26.6 | 24.4 |
| [GPT-3.5 (175B)](https://chat.openai.com/) | **54.3** | **62.5** | **64.0** | **62.2** | 39.3 | **53.2** |
| [XGLM (564M)](https://huggingface.co/facebook/xglm-564M) | 22.1 | 23.0 | 25.6 | 25.6 | 27.5 | 25.2 |
| [XGLM (1.7B)](https://huggingface.co/facebook/xglm-1.7B) | 20.9 | 23.0 | 24.6 | 24.8 | 26.6 | 24.4 |
| [XGLM (2.9B)](https://huggingface.co/facebook/xglm-2.9B) | 22.9 | 23.2 | 25.4 | 26.3 | 27.2 | 25.2 |
| [XGLM (4.5B)](https://huggingface.co/facebook/xglm-4.5B) | 21.8 | 23.1 | 25.6 | 25.8 | 27.1 | 25.0 |
| [XGLM (7.5B)](https://huggingface.co/facebook/xglm-7.5B) | 22.7 | 21.7 | 23.6 | 24.5 | 27.5 | 24.5 |
| [Falcon (7B)](https://huggingface.co/tiiuae/falcon-7b) | 22.1 | 22.9 | 25.5 | 25.7 | 27.5 | 25.1 |
| [Falcon (40B)](https://huggingface.co/tiiuae/falcon-40b) | 30.2 | 34.8 | 34.8 | 34.9 | 29.2 | 32.1 |
| [BLOOMZ (560M)](https://huggingface.co/bigscience/bloomz-560m) | 22.9 | 23.6 | 23.2 | 24.2 | 25.1 | 24.0 |
| [BLOOMZ (1.1B)](https://huggingface.co/bigscience/bloomz-1b1) | 20.4 | 21.4 | 21.1 | 23.5 | 24.7 | 22.4 |
| [BLOOMZ (1.7B)](https://huggingface.co/bigscience/bloomz-1b7) | 31.5 | 39.3 | 38.3 | 42.8 | 29.4 | 34.4 |
| [BLOOMZ (3B)](https://huggingface.co/bigscience/bloomz-3b) | 33.5 | 44.5 | 39.7 | 46.7 | 29.8 | 36.4 |
| [BLOOMZ (7.1B)](https://huggingface.co/bigscience/bloomz-7b1) | 37.1 | 46.7 | 44.0 | 49.1 | 28.2 | 38.0 |
| [mT0<sub>small</sub> (300M)](https://huggingface.co/bigscience/mt0-small) | 21.8 | 21.4 | 25.7 | 25.1 | 27.6 | 24.9 |
| [mT0<sub>base</sub> (580M)](https://huggingface.co/bigscience/mt0-base) | 22.6 | 22.6 | 25.7 | 25.6 | 26.9 | 25.0 |
| [mT0<sub>large</sub> (1.2B)](https://huggingface.co/bigscience/mt0-large) | 22.0 | 23.4 | 25.1 | 27.3 | 27.6 | 25.2 |
| [mT0<sub>xl</sub> (3.7B)](https://huggingface.co/bigscience/mt0-xl) | 31.4 | 42.9 | 41.0 | 47.8 | 35.7 | 38.2 |
| [mT0<sub>xxl</sub> (13B)](https://huggingface.co/bigscience/mt0-xxl) | 33.5 | 46.2 | 47.9 | 52.6 | **39.6** | 42.5 |
| [LLaMA (7B)](https://arxiv.org/abs/2302.13971) | 22.8 | 23.1 | 25.1 | 26.7 | 27.6 | 25.3 |
| [LLaMA (13B)](https://arxiv.org/abs/2302.13971) | 24.1 | 23.0 | 24.4 | 29.5 | 26.7 | 25.3 |
| [LLaMA (30B)](https://arxiv.org/abs/2302.13971) | 25.4 | 23.5 | 25.9 | 28.4 | 28.7 | 26.5 |
| [LLaMA (65B)](https://arxiv.org/abs/2302.13971) | 33.0 | 37.7 | 40.8 | 41.4 | 32.1 | 35.8 |
| [Bactrian-X-LLaMA (7B)](https://github.com/mbzuai-nlp/bactrian-x) | 23.3 | 24.0 | 26.0 | 26.1 | 27.5 | 25.7 |
| [Bactrian-X-LLaMA (13B)](https://github.com/mbzuai-nlp/bactrian-x) | 28.3 | 29.9 | 32.8 | 35.2 | 29.2 | 30.3 |
#### GPT-3.5 performance (% accuracy) across different education levels
<p align="left">
<img src="https://github.com/fajri91/eval_picts/blob/master/IndoMMLU-result.png?raw=true" style="width: 370px;" id="title-icon">
</p>
Red indicates that the score is below the minimum passing threshold of 65, while green signifies a score at or above this minimum. We can observe that ChatGPT generally reaches the passing score of 65 only on Indonesian primary school exams.
#### Few-shot Evaluation
<p align="left">
<img src="https://github.com/fajri91/eval_picts/blob/master/plot_fewshot.png?raw=true" style="width: 380px;" id="title-icon">
</p>
## Data
Each question in the dataset is a multiple-choice question with up to 5 choices and only one choice as the correct answer.
We provide our dataset according to each subject in [data](data) folder. You can also access our dataset via [Hugging Face](https://huggingface.co/datasets/indolem/indommlu).
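A minimal loading sketch via 🤗 `datasets`; the `test` split name is an assumption, so the available splits are printed first:

```python
from datasets import load_dataset

dataset = load_dataset("indolem/indommlu")
print(dataset)             # show the available splits
print(dataset["test"][0])  # one multiple-choice question with up to 5 options
```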
<!--
#### Quick Use
Our dataset has been added to [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and [OpenCompass](https://github.com/InternLM/opencompass), you can evaluate your model via these open-source tools.
-->
#### Evaluation
The code for the evaluation of each model we used is in `evaluate.py`, and the code to run them is listed in `run.sh`.
## Citation
```
@inproceedings{koto-etal-2023-indommlu,
title = "Large Language Models Only Pass Primary School Exams in {I}ndonesia: A Comprehensive Test on {I}ndo{MMLU}",
author = "Fajri Koto and Nurul Aisyah and Haonan Li and Timothy Baldwin",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
}
```
## License
The IndoMMLU dataset is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/). |
ShareGPT4Video/ShareGPT4Video | ShareGPT4Video | "2024-07-08T05:57:32Z" | 14,139 | 181 | [
"task_categories:visual-question-answering",
"task_categories:question-answering",
"language:en",
"license:cc-by-nc-4.0",
"size_categories:10K<n<100K",
"format:json",
"modality:image",
"modality:text",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2406.04325",
"doi:10.57967/hf/2494",
"region:us"
] | [
"visual-question-answering",
"question-answering"
] | "2024-05-22T11:59:11Z" | ---
license: cc-by-nc-4.0
task_categories:
- visual-question-answering
- question-answering
language:
- en
pretty_name: ShareGPT4Video Captions Dataset Card
size_categories:
- 1M<n
configs:
- config_name: ShareGPT4Video
data_files: sharegpt4video_40k.jsonl
---
# ShareGPT4Video 4.8M Dataset Card
## Dataset details
**Dataset type:**
ShareGPT4Video Captions 4.8M is a set of GPT4-Vision-powered multi-modal captions data of videos.
It is constructed to enhance modality alignment and fine-grained visual concept perception in Large Video-Language Models (LVLMs) and Text-to-Video Models (T2VMs). This advancement aims to bring LVLMs and T2VMs towards the capabilities of GPT4V and Sora.
* sharegpt4video_40k.jsonl is generated by GPT4-Vision (ShareGPT4Video).
* share-captioner-video_mixkit-pexels-pixabay_4814k_0417.json is generated by our ShareCaptioner-Video trained on GPT4-Vision-generated video-caption pairs.
* sharegpt4video_mix181k_vqa-153k_share-cap-28k.json is curated from sharegpt4video_instruct_gpt4-vision_cap40k.json for the supervised fine-tuning stage of LVLMs.
* llava_v1_5_mix665k_with_video_chatgpt72k_share4video28k.json replaces the 28K detailed-caption-related samples in VideoChatGPT with 28K high-quality captions from ShareGPT4Video. This file is used to validate the effectiveness of high-quality captions under the VideoLLaVA and LLaMA-VID models (a loading sketch follows this list).
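A minimal loading sketch for the GPT4-Vision caption split; the config name `ShareGPT4Video` comes from this card's YAML header, while the `train` split name is an assumption:

```python
from datasets import load_dataset

captions = load_dataset("ShareGPT4Video/ShareGPT4Video", "ShareGPT4Video", split="train")
print(captions[0])
```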
**Dataset date:**
ShareGPT4Video Captions 4.8M was collected on April 17, 2024.
**Paper or resources for more information:**
[[Project](https://ShareGPT4Video.github.io/)] [[Paper](https://arxiv.org/abs/2406.04325v1)] [[Code](https://github.com/ShareGPT4Omni/ShareGPT4Video)] [[ShareGPT4Video-8B](https://huggingface.co/Lin-Chen/sharegpt4video-8b)]
**License:**
Attribution-NonCommercial 4.0 International
Usage of the dataset should also abide by the policy of OpenAI: https://openai.com/policies/terms-of-use
## Intended use
**Primary intended uses:**
The primary use of ShareGPT4Video Captions 4.8M is research on large multimodal models and text-to-video models.
**Primary intended users:**
The primary intended users of this dataset are researchers and hobbyists in computer vision, natural language processing, machine learning, AIGC, and artificial intelligence.
## Paper
arxiv.org/abs/2406.04325 |
bop-benchmark/datasets | bop-benchmark | "2024-10-19T07:32:50Z" | 14,081 | 15 | [
"task_categories:image-segmentation",
"task_categories:object-detection",
"task_categories:robotics",
"task_categories:zero-shot-object-detection",
"size_categories:1M<n<10M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"arxiv:2403.09799",
"arxiv:2302.13075",
"arxiv:2009.07378",
"region:us"
] | [
"image-segmentation",
"object-detection",
"robotics",
"zero-shot-object-detection"
] | "2024-03-20T14:39:48Z" | ---
task_categories:
- image-segmentation
- object-detection
- robotics
- zero-shot-object-detection
size_categories:
- n>1T
configs:
- config_name: MegaPose-ShapeNetCore
data_files: MegaPose-ShapeNetCore/*.tar
- config_name: MegaPose-GSO
data_files: MegaPose-GSO/*.tar
---
# BOP: Benchmark for 6D Object Pose Estimation
The goal of BOP is to capture the state of the art in estimating the 6D pose, i.e. 3D translation and 3D rotation, of rigid objects from RGB/RGB-D images. An accurate, fast, robust, scalable and easy-to-train method that solves this task will have a big impact in application fields such as robotics or augmented reality.
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/637fb712084fca81acde6e40/8WSyi9CNNsfDHC-lwaRpG.jpeg)
Homepage: https://bop.felk.cvut.cz/home/
Toolkit: https://github.com/thodan/bop_toolkit
## Downloading datasets
#### Option 1: Using `huggingface_hub`:
<details><summary>Click to expand</summary>
a. Install the library:
```
pip install --upgrade huggingface_hub
```
b. Download the dataset:
```
from huggingface_hub import snapshot_download
dataset_name = "hope"
local_dir = "./datasets"
snapshot_download(repo_id="bop-benchmark/datasets",
                  allow_patterns=f"{dataset_name}/*.zip",
                  repo_type="dataset",
                  local_dir=local_dir)
```
If you want to download the entire BOP datasets (~3TB), please remove the `allow_patterns` argument. More options are available in the [official documentation](https://huggingface.co/docs/huggingface_hub/main/en/guides/download).
</details>
#### Option 2: Using `huggingface_hub[cli]`:
<details><summary>Click to expand</summary>
a. Install the library:
```
pip install -U "huggingface_hub[cli]"
```
b. Download the dataset:
```
export LOCAL_DIR=./datasets
export DATASET_NAME=hope
huggingface-cli download bop-benchmark/datasets --include "$DATASET_NAME/*.zip" --local-dir $LOCAL_DIR --repo-type=dataset
```
Please remove this argument `--include "$DATASET_NAME/*.zip"` to download entire BOP datasets (~3TB). More options are available in the [official documentation](https://huggingface.co/docs/huggingface_hub/main/en/guides/download).
</details>
#### Option 3: Using `wget`:
<details><summary>Click to expand</summary>
Similar `wget` command as in [BOP website](https://bop.felk.cvut.cz/datasets/) can be used to download the dataset from huggingface hub:
```
export SRC=https://huggingface.co/datasets/bop-benchmark/datasets/resolve/main
wget $SRC/lm/lm_base.zip # Base archive
wget $SRC/lm/lm_models.zip # 3D object models
wget $SRC/lm/lm_test_all.zip # All test images ("_bop19" for a subset)
wget $SRC/lm/lm_train_pbr.zip # PBR training images
```
</details>
Datasets are stored in `.zip` format. You can extract them using the following command:
```
bash scripts/extract_bop.sh
```
If you are running on a machine with high bandwidth, you can increase your download speed by adding the following environment variable:
```
pip install huggingface_hub[hf_transfer]
export HF_HUB_ENABLE_HF_TRANSFER=1
```
## Uploading datasets
Suppose you have created a new dataset and want to share it with the BOP community. Here is a step-by-step guide to uploading the dataset and creating a pull request to [our huggingface hub](https://huggingface.co/datasets/bop-benchmark/datasets/). Feel free to reach out to vanngn.nguyen@gmail.com if you have any questions.
Similar to the download process, you can upload the dataset using the `huggingface_hub` library or `huggingface_hub[cli]`. We recommend using `huggingface_hub[cli]` for its simplicity.
#### Option 1: Using `huggingface_hub[cli]`:
<details><summary>Click to expand</summary>
a. Install the library:
```
pip install -U "huggingface_hub[cli]"
```
b. Log-in and create a token
```
huggingface-cli login
```
Then go to [this link](https://huggingface.co/settings/tokens) and generate a token. IMPORTANT: the token should have write access as shown below:
<img src="./media/token_hf.png" alt="image" width="300">
Make sure you are in the bop-benchmark group by running:
```
huggingface-cli whoami
```
c. Upload dataset:
The command is applied for both folders and specific files:
```
# Usage: huggingface-cli upload bop-benchmark/datasets [local_path] [path_in_repo] --repo-type=dataset --create-pr
```
For example, to upload hope dataset:
```
export LOCAL_FOLDER=./datasets/hope
export HF_FOLDER=/hope
huggingface-cli upload bop-benchmark/datasets $LOCAL_FOLDER $HF_FOLDER --repo-type=dataset --create-pr
```
</details>
#### Option 2: Using `huggingface_hub`:
<details><summary>Click to expand</summary>
a. Install the library:
```
pip install --upgrade huggingface_hub
```
b. Creating a pull-request:
We recommend organizing the dataset in a folder and then uploading it to the huggingface hub. For example, to upload `lmo`:
```
from pathlib import Path

from huggingface_hub import HfApi, CommitOperationAdd

dataset_name = "lmo"
local_dir = Path("./datasets/lmo")

# Add one commit operation per file in the local dataset folder
operations = []
for file in local_dir.glob("*"):
    add_commit = CommitOperationAdd(
        path_in_repo=f"{dataset_name}/{file.name}",
        path_or_fileobj=file,
    )
    operations.append(add_commit)

api = HfApi()
MY_TOKEN = "hf_..."  # get from https://huggingface.co/settings/tokens
api.create_commit(repo_id="bop-benchmark/datasets",
                  repo_type="dataset",
                  commit_message=f"adding {dataset_name} dataset",
                  token=MY_TOKEN,
                  operations=operations,
                  create_pr=True)
```
If your dataset is large (> 500 GB), you can upload it in chunks by adding the `multi_commits=True, multi_commits_verbose=True` arguments. More options are available in the [official documentation](https://huggingface.co/docs/huggingface_hub/v0.22.2/en/package_reference/hf_api#huggingface_hub.HfApi.create_pull_request).
</details>
## FAQ
#### 1. How to upload a large file > 50 GB?
Note that HuggingFace limits the size of each file to 50 GB. If your dataset is larger, you can split it into smaller files:
```
zip -s 50g input.zip --out output.zip
```
This command will split `input.zip` into multiple 50 GB parts: `output.zip`, `output.z01`, `output.z02`, ... You can then extract them using one of the following commands:
```
# option 1: combine
zip -s0 output.zip --out input.zip
# option 2: using 7z to unzip directly
7z x output.zip
```
#### 2. How to increase download speed?
If you are running on a machine with high bandwidth, you can increase your download speed by adding the following environment variable:
```
pip install huggingface_hub[hf_transfer]
export HF_HUB_ENABLE_HF_TRANSFER=1
```
## Publications
- [**BOP Challenge 2023 on Detection, Segmentation and Pose Estimation of Seen and Unseen Rigid Objects**](https://arxiv.org/pdf/2403.09799.pdf)
- T. Hodaň, M. Sundermeyer, Y. Labbé, V. N. Nguyen, G. Wang, E. Brachmann, B. Drost, V. Lepetit, C. Rother, J. Matas
- IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW, [CV4MR workshop](https://cv4mr.github.io/)) 2024, Seattle
- [PDF](https://arxiv.org/pdf/2403.09799.pdf), [SLIDES](https://cmp.felk.cvut.cz/sixd/workshop_2023/slides/bop_challenge_2023_results.pdf), [VIDEO](https://www.youtube.com/watch?v=PcDszFANcDQ), [BIB](https://cmp.felk.cvut.cz/~hodanto2/data/hodan2023bop.bib)
- [**BOP Challenge 2022 on Detection, Segmentation and Pose Estimation of Specific Rigid Objects**](https://arxiv.org/pdf/2302.13075.pdf)
- M. Sundermeyer, T. Hodaň, Y. Labbé, G. Wang, E. Brachmann, B. Drost, C. Rother, J. Matas
- IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW, [CV4MR workshop](https://cv4mr.github.io/)) 2023, Vancouver
- [PDF](https://arxiv.org/pdf/2302.13075.pdf), [SLIDES](https://cmp.felk.cvut.cz/sixd/workshop_2022/slides/bop_challenge_2022_results.pdf), [VIDEO 1](https://vimeo.com/showcase/9946695/video/768457697), [VIDEO 2](https://vimeo.com/showcase/9946695/video/768458355), [BIB](https://cmp.felk.cvut.cz/~hodanto2/data/sundermeyer2022bop.bib)
- [**BOP Challenge 2020 on 6D Object Localization**](https://arxiv.org/pdf/2009.07378.pdf)
- T. Hodaň, M. Sundermeyer, B. Drost, Y. Labbé, E. Brachmann, F. Michel, C. Rother, J. Matas
- European Conference on Computer Vision Workshops (ECCVW) 2020, Glasgow
- [PDF](https://arxiv.org/pdf/2009.07378.pdf), [SLIDES](https://bop.felk.cvut.cz/media/bop_challenge_2020_results.pdf), [BIB](http://cmp.felk.cvut.cz/~hodanto2/data/hodan2020bop.bib)
- [**BOP: Benchmark for 6D Object Pose Estimation**](http://cmp.felk.cvut.cz/~hodanto2/data/hodan2018bop.pdf)
- T. Hodaň, F. Michel, E. Brachmann, W. Kehl, A. G. Buch, D. Kraft, B. Drost, J. Vidal, S. Ihrke, X. Zabulis, C. Sahin, F. Manhardt, F. Tombari, T.-K. Kim, J. Matas, C. Rother
- European Conference on Computer Vision (ECCV) 2018, Munich
- [PDF](http://cmp.felk.cvut.cz/~hodanto2/data/hodan2018bop.pdf), [SLIDES](http://cmp.felk.cvut.cz/~hodanto2/data/hodan2018bop_slides_eccv.pdf), [POSTER](http://cmp.felk.cvut.cz/~hodanto2/data/hodan2018bop_poster.pdf), [BIB](http://cmp.felk.cvut.cz/~hodanto2/data/hodan2018bop.bib)
The online evaluation system has been developed by [T. Hodaň](http://www.hodan.xyz) and [A. Melenovský](https://www.linkedin.com/in/anton%C3%ADn-melenovsk%C3%BD-09907b151/). |
cschell/xr-motion-dataset-catalogue | cschell | "2024-05-04T12:15:34Z" | 14,073 | 4 | [
"language:en",
"arxiv:2306.03381",
"region:us",
"kinematic research",
"XR user motions",
"VR user motions",
"AR user motions",
"motions"
] | null | "2024-01-12T15:33:50Z" | ---
language:
- en
tags:
- kinematic research
- XR user motions
- VR user motions
- AR user motions
- motions
pretty_name: XR Motion Dataset Catalogue
---
# XR Motion Dataset Catalogue
## Overview
The XR Motion Dataset Catalogue, accompanying our paper "Navigating the Kinematic Maze: A Comprehensive Guide to XR Motion Dataset Standards," standardizes and simplifies access to Extended Reality (XR) motion datasets. The catalogue represents our initiative to streamline the usage of kinematic data in XR research by aligning various datasets to a consistent format and structure.
### Dataset Specifications
All datasets in this catalogue have been standardized with the following specifications:
- **Coordinate System:** X (Right), Y (Up), Z (Forward)
- **Rotation Representation:** Quaternions
- **Units of Measurement:** Centimeters for spatial data
- **Time Encoding:** Milliseconds for time-related data
These specifications ensure uniformity and comparability across all datasets in the catalogue.
### Conversion Scripts Repository
The alignment of datasets was facilitated by a series of conversion scripts, which are available in our GitHub repository: [XR Motion Dataset Conversion Scripts](https://github.com/cschell/xr-motion-dataset-conversion-scripts). These scripts detail the process of aligning attribute names, coordinate systems, rotation representations, units of measurement, and time encoding.
### Included Datasets
The catalogue includes the following datasets:
1. [LiebersBeatSaber23](https://doi.org/10.1145/3611659.3615696)
2. [Boxrr23](https://doi.org/10.25350/B5NP4V) – *edit 2024-05-04: we are still working on providing the aligned version – in the meantime you can find the original version [here](https://huggingface.co/datasets/cschell/boxrr-23/)*
3. BOXRR24 – *WIP: we are currently working on the next version of the BOXRR-23 dataset, which will include significantly more users – we will do our best to make it available later this year*
4. [LiebersHand22](https://doi.org/10.1080/10447318.2022.2120845)
5. [LiebersLabStudy21](https://doi.org/10.1145/3411764.3445528)
6. [MooreCrossDomain23](https://doi.org/10.1109/ISMAR59233.2023.00054)
7. <del>[RMillerBall22](https://github.com/Terascale-All-sensing-Research-Studio/VR-Biometric-Authentication)</del> *request for permissions pending*
8. [VrNet](http://arxiv.org/abs/2306.03381)
9. [WhoIsAlyx](https://doi.org/10.3389/frvir.2023.1272234)
## Installation and Usage
### Loading the Dataset with Hugging Face `datasets` Library
To load a dataset from the catalogue, use the `datasets` library in Python. For example, to load the `WhoIsAlyx` dataset:
```python
from datasets import load_dataset
dataset = load_dataset("cschell/xr-motion-dataset-catalogue", "who_is_alyx", trust_remote_code=True)
```
### Loading Individual Recordings with Pandas
To load individual recordings, you can use `pandas`. Here's an example:
```python
import pandas as pd
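# Note: reading hf:// paths requires a recent huggingface_hub (it registers
# the hf:// fsspec protocol used by pandas) and pyarrow for parquet support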
file_url_path = "hf://datasets/cschell/xr-motion-dataset-catalogue/who_is_alyx/player_02/2022-01-07.parquet"
recording = pd.read_parquet(file_url_path)
```
## Contributing and Feedback
Contributions and feedback are welcome to enhance the XR Motion Dataset Catalogue. Feel free to open a pull request or contact us directly.
<!--
## Citation
If you use the XR Motion Dataset Catalogue in your research, please cite our paper:
```
@article{your_paper_identifier,
title={Navigating the Kinematic Maze: A Comprehensive Guide to XR Motion Dataset Standards},
author={Your Name and Other Authors},
journal={Journal Name},
year={Year}
}
``` -->
|
kamilakesbi/transformers_image_doc | kamilakesbi | "2024-04-22T15:51:29Z" | 14,008 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-04-22T15:50:03Z" | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 406434.0
num_examples: 2
download_size: 381914
dataset_size: 406434.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
lmms-lab/MME | lmms-lab | "2023-12-23T09:13:53Z" | 13,987 | 16 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-09-16T07:11:55Z" | ---
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: question_id
dtype: string
- name: image
dtype: image
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
splits:
- name: test
num_bytes: 1733070098.024
num_examples: 2374
download_size: 864018279
dataset_size: 1733070098.024
---
# Evaluation Dataset for MME |
bigscience/xP3all | bigscience | "2023-05-30T15:51:40Z" | 13,980 | 27 | [
"task_categories:other",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"multilinguality:multilingual",
"language:ak",
"language:ar",
"language:as",
"language:bm",
"language:bn",
"language:ca",
"language:code",
"language:en",
"language:es",
"language:eu",
"language:fon",
"language:fr",
"language:gu",
"language:hi",
"language:id",
"language:ig",
"language:ki",
"language:kn",
"language:lg",
"language:ln",
"language:ml",
"language:mr",
"language:ne",
"language:nso",
"language:ny",
"language:or",
"language:pa",
"language:pt",
"language:rn",
"language:rw",
"language:sn",
"language:st",
"language:sw",
"language:ta",
"language:te",
"language:tn",
"language:ts",
"language:tum",
"language:tw",
"language:ur",
"language:vi",
"language:wo",
"language:xh",
"language:yo",
"language:zh",
"language:zu",
"license:apache-2.0",
"size_categories:10M<n<100M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2211.01786",
"region:us"
] | [
"other"
] | "2022-07-30T21:05:02Z" | ---
annotations_creators:
- expert-generated
- crowdsourced
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: xP3
size_categories:
- 100M<n<1B
task_categories:
- other
---
# Dataset Card for xP3
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigscience-workshop/xmtf
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
### Dataset Summary
> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). We provide this version to save processing time and ease reproducibility.
- **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3))
- **xP3 Dataset Family:**
<table>
<tr>
<th>Name</th>
<th>Explanation</th>
<th>Example models</th>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></td>
<td>Mixture of 17 tasks in 277 languages with English prompts</td>
<td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></td>
<td>Mixture of 13 training tasks in 46 languages with English prompts</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></td>
<td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></td>
<td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td>
<td></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></td>
<td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></td>
<td>Repreprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
</tr>
</table>
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"inputs": "Sentence 1: Fue académico en literatura metafísica, teología y ciencias clásicas.\nSentence 2: Fue académico en literatura metafísica, teología y ciencia clásica.\nQuestion: Can we rewrite Sentence 1 to Sentence 2? Yes or No?",
"targets": "Yes"
}
```
### Data Fields
The data fields are the same among all splits:
- `inputs`: the natural language input fed to the model
- `targets`: the natural language target that the model has to generate
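For quick inspection, the dataset can be streamed with the `datasets` library. This is a minimal sketch, assuming the per-language subsets are exposed as config names (e.g. `en`); check the repository's config list if the name differs:
```python
from datasets import load_dataset

# Stream the English subset so nothing is downloaded up front
# (assumes language codes are exposed as config names, e.g. "en").
ds = load_dataset("bigscience/xP3all", "en", streaming=True)

for sample in ds["train"].take(3):
    print(sample["inputs"][:100])  # natural language input fed to the model
    print(sample["targets"])       # natural language target to generate
```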
### Data Splits
The table below summarizes sizes per language (computed from the `merged_{lang}.jsonl` files). Because languages like `tw` consist only of single-sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage.
|Language|Kilobytes|%|Samples|%|
|--------|------:|-:|---:|-:|
|tw|106288|0.11|265071|0.33|
|bm|107056|0.11|265180|0.33|
|ak|108096|0.11|265071|0.33|
|ca|110608|0.11|271191|0.33|
|eu|113008|0.11|281199|0.35|
|fon|113072|0.11|265063|0.33|
|st|114080|0.11|265063|0.33|
|ki|115040|0.12|265180|0.33|
|tum|116032|0.12|265063|0.33|
|wo|122560|0.12|365063|0.45|
|ln|126304|0.13|365060|0.45|
|as|156256|0.16|265063|0.33|
|or|161472|0.16|265063|0.33|
|kn|165456|0.17|265063|0.33|
|ml|175040|0.18|265864|0.33|
|rn|192992|0.19|318189|0.39|
|nso|229712|0.23|915051|1.13|
|tn|235536|0.24|915054|1.13|
|lg|235936|0.24|915021|1.13|
|rw|249360|0.25|915043|1.13|
|ts|250256|0.25|915044|1.13|
|sn|252496|0.25|865056|1.07|
|xh|254672|0.26|915058|1.13|
|zu|263712|0.26|915061|1.13|
|ny|272128|0.27|915063|1.13|
|ig|325232|0.33|950097|1.17|
|yo|352784|0.35|918416|1.13|
|ne|393680|0.39|315754|0.39|
|pa|523248|0.52|339210|0.42|
|gu|560688|0.56|347499|0.43|
|sw|566656|0.57|1130481|1.4|
|mr|666240|0.67|417269|0.52|
|bn|832720|0.83|428843|0.53|
|ta|926912|0.93|415433|0.51|
|te|1343232|1.35|584590|0.72|
|ur|1918272|1.92|855756|1.06|
|vi|3102512|3.11|1672106|2.07|
|code|4330752|4.34|2707724|3.34|
|hi|4403568|4.41|1554667|1.92|
|zh|4599440|4.61|3589234|4.43|
|id|4612256|4.62|2643418|3.27|
|ar|4683456|4.69|2160181|2.67|
|fr|6591120|6.6|5316403|6.57|
|pt|6886800|6.9|3752156|4.63|
|es|8587920|8.6|5413205|6.69|
|en|39252528|39.33|32740750|40.44|
|total|99807184|100.0|80956089|100.0|
## Dataset Creation
### Source Data
#### Training datasets
- Code Miscellaneous
- [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex)
- [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus)
- [GreatCode](https://huggingface.co/datasets/great_code)
- [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes)
- Closed-book QA
- [Hotpot QA](https://huggingface.co/datasets/hotpot_qa)
- [Trivia QA](https://huggingface.co/datasets/trivia_qa)
- [Web Questions](https://huggingface.co/datasets/web_questions)
- [Wiki QA](https://huggingface.co/datasets/wiki_qa)
- Extractive QA
- [Adversarial QA](https://huggingface.co/datasets/adversarial_qa)
- [CMRC2018](https://huggingface.co/datasets/cmrc2018)
- [DRCD](https://huggingface.co/datasets/clue)
- [DuoRC](https://huggingface.co/datasets/duorc)
- [MLQA](https://huggingface.co/datasets/mlqa)
- [Quoref](https://huggingface.co/datasets/quoref)
- [ReCoRD](https://huggingface.co/datasets/super_glue)
- [ROPES](https://huggingface.co/datasets/ropes)
- [SQuAD v2](https://huggingface.co/datasets/squad_v2)
- [xQuAD](https://huggingface.co/datasets/xquad)
- TyDI QA
- [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary)
- [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
- Multiple-Choice QA
- [ARC](https://huggingface.co/datasets/ai2_arc)
- [C3](https://huggingface.co/datasets/c3)
- [CoS-E](https://huggingface.co/datasets/cos_e)
- [Cosmos](https://huggingface.co/datasets/cosmos)
- [DREAM](https://huggingface.co/datasets/dream)
- [MultiRC](https://huggingface.co/datasets/super_glue)
- [OpenBookQA](https://huggingface.co/datasets/openbookqa)
- [PiQA](https://huggingface.co/datasets/piqa)
- [QUAIL](https://huggingface.co/datasets/quail)
- [QuaRel](https://huggingface.co/datasets/quarel)
- [QuaRTz](https://huggingface.co/datasets/quartz)
- [QASC](https://huggingface.co/datasets/qasc)
- [RACE](https://huggingface.co/datasets/race)
- [SciQ](https://huggingface.co/datasets/sciq)
- [Social IQA](https://huggingface.co/datasets/social_i_qa)
- [Wiki Hop](https://huggingface.co/datasets/wiki_hop)
- [WiQA](https://huggingface.co/datasets/wiqa)
- Paraphrase Identification
- [MRPC](https://huggingface.co/datasets/super_glue)
- [PAWS](https://huggingface.co/datasets/paws)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [QQP](https://huggingface.co/datasets/qqp)
- Program Synthesis
- [APPS](https://huggingface.co/datasets/codeparrot/apps)
- [CodeContests](https://huggingface.co/datasets/teven/code_contests)
- [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs)
- [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp)
- [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search)
- [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code)
- Structure-to-text
- [Common Gen](https://huggingface.co/datasets/common_gen)
- [Wiki Bio](https://huggingface.co/datasets/wiki_bio)
- Sentiment
- [Amazon](https://huggingface.co/datasets/amazon_polarity)
- [App Reviews](https://huggingface.co/datasets/app_reviews)
- [IMDB](https://huggingface.co/datasets/imdb)
- [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes)
- [Yelp](https://huggingface.co/datasets/yelp_review_full)
- Simplification
- [BiSECT](https://huggingface.co/datasets/GEM/BiSECT)
- Summarization
- [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail)
- [Gigaword](https://huggingface.co/datasets/gigaword)
- [MultiNews](https://huggingface.co/datasets/multi_news)
- [SamSum](https://huggingface.co/datasets/samsum)
- [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua)
- [XLSum](https://huggingface.co/datasets/GEM/xlsum)
- [XSum](https://huggingface.co/datasets/xsum)
- Topic Classification
- [AG News](https://huggingface.co/datasets/ag_news)
- [DBPedia](https://huggingface.co/datasets/dbpedia_14)
- [TNEWS](https://huggingface.co/datasets/clue)
- [TREC](https://huggingface.co/datasets/trec)
- [CSL](https://huggingface.co/datasets/clue)
- Translation
- [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200)
- [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt)
- Word Sense Disambiguation
- [WiC](https://huggingface.co/datasets/super_glue)
- [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic)
#### Evaluation datasets (included in [xP3all](https://huggingface.co/datasets/bigscience/xP3all) except for HumanEval)
- Natural Language Inference
- [ANLI](https://huggingface.co/datasets/anli)
- [CB](https://huggingface.co/datasets/super_glue)
- [RTE](https://huggingface.co/datasets/super_glue)
- [XNLI](https://huggingface.co/datasets/xnli)
- Coreference Resolution
- [Winogrande](https://huggingface.co/datasets/winogrande)
- [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd)
- Program Synthesis
- [HumanEval](https://huggingface.co/datasets/openai_humaneval)
- Sentence Completion
- [COPA](https://huggingface.co/datasets/super_glue)
- [Story Cloze](https://huggingface.co/datasets/story_cloze)
- [XCOPA](https://huggingface.co/datasets/xcopa)
- [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze)
#### Additional [xP3all](https://huggingface.co/datasets/bigscience/xP3all) datasets
- Coreference Resolution
- [WSC (Fixed)](https://huggingface.co/datasets/super_glue)
- Sentence Completion
- [HellaSwag](https://huggingface.co/datasets/hellaswag)
- Translation
- [MultiEurlex](https://huggingface.co/datasets/multi_eurlex)
## Additional Information
### Licensing Information
The dataset is released under Apache 2.0.
### Citation Information
```bibtex
@misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset. |
nyu-visionx/Cambrian-10M | nyu-visionx | "2024-07-08T04:34:51Z" | 13,849 | 103 | [
"task_categories:visual-question-answering",
"task_categories:question-answering",
"language:en",
"license:apache-2.0",
"size_categories:1M<n<10M",
"arxiv:2406.16860",
"region:us"
] | [
"visual-question-answering",
"question-answering"
] | "2024-05-30T03:27:31Z" | ---
task_categories:
- visual-question-answering
- question-answering
language:
- en
size_categories:
- 1M<n<10M
license: apache-2.0
---
# Cambrian-10M Dataset
**Please see paper & website for more information:**
- https://cambrian-mllm.github.io/
- https://arxiv.org/abs/2406.16860
## Overview
Cambrian-10M is a comprehensive dataset designed for instruction tuning, particularly in multimodal settings involving visual interaction data. The dataset is crafted to address the scarcity of high-quality multimodal instruction-tuning data and to maintain the language abilities of multimodal large language models (LLMs).
## Data Collection
### Multimodal Data Sources
Unlike language data, multimodal instruction-tuning data is much rarer and harder to collect. To address this, we leverage existing multimodal benchmarks and datasets involving visual interaction data, such as Visual Question Answering (VQA) and Optical Character Recognition (OCR) data. This approach helps mitigate the catastrophic forgetting commonly observed when fine-tuning multimodal LLMs.
### Language-Only Instruction-Following Data
To ensure the preservation of language capabilities, we also collect a small volume of high-quality language-only instruction-following data from the community.
### Targeted Internet Data Collection Engine
We introduce a data engine designed to create large-scale, reliable, high-quality knowledge-based multimodal instruction tuning data. The engine works as follows:
1. **Field and Subfield Selection**: The engine selects a target field and subfield, such as “Physics”.
2. **Topic Identification**: An LLM like GPT-4 identifies topics within the field (e.g., “Newton’s Laws”).
3. **Reliable Source Search**: The engine searches reliable sources like Wikipedia for each topic.
4. **Text-Image Association Extraction**: The parser extracts image-caption-text tuples from the sources.
5. **Q&A Pair Generation**: The caption-text is fed to an LLM, such as GPT-3.5, to generate instruction-type Q&A pairs about the image.
These Q&A pairs, along with the images, form our VQA dataset.
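As an illustration of step 5, the sketch below turns one caption-text tuple into an instruction-style Q&A pair. It assumes an OpenAI-compatible client, and the prompt wording is a hypothetical stand-in, not the engine's actual prompt:
```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def generate_qa(caption: str, context: str) -> str:
    """Generate one VQA-style Q&A pair from an image caption and its
    surrounding article text (hypothetical prompt, for illustration only)."""
    prompt = (
        "Given this image caption and the surrounding article text, write one "
        "question about the image together with its answer.\n"
        f"Caption: {caption}\nContext: {context}\n"
        "Format: Q: ... A: ..."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```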
### GPT Rewriting
We also incorporate recent MLLMs such as GPT-4V and GPT-4o to generate extended responses and free-form instruction tuning data. To work with the GPT-generated data, use
[gpt4v_77k](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/gpt4v_77k.jsonl) or the curated [gpt4o_60k](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/gpt4o_60k.jsonl):
- [gpt4v_77k](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/gpt4v_77k.jsonl) contains more extended responses from Cambrian-10M.
- [gpt4o_60k](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/gpt4o_60k.jsonl) contains more creative data in visual interactions.
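These files are plain JSON Lines. A minimal way to inspect a downloaded file locally (the schema is not documented here, so print the keys of a record first):
```python
import json

# Peek at the first few records of a downloaded annotation file.
with open("gpt4o_60k.jsonl") as f:
    for i, line in enumerate(f):
        record = json.loads(line)
        print(record.keys())  # inspect the schema before building a loader
        if i == 2:
            break
```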
## Cambrian-10M Composition
The Cambrian-10M dataset consists of approximately 9.784 million data points, offering a diverse range of data for various research applications. The composition of the dataset is visualized in Fig. 9.
## Cambrian-7M
We make an initial effort to study data curation. In particular, we find the following data ratio to perform best:
- **Language**: 21.00%
- **General**: 34.52%
- **OCR**: 27.22%
- **Counting**: 8.71%
- **Math**: 7.20%
- **Code**: 0.87%
- **Science**: 0.88%
![Cambrian-7M](cambrian7m.png)
## Getting Started with Cambrian Data
Before you start, ensure you have sufficient storage space to download and process the data.
Cambrian-10M contains a total of 10 million images collected from previous datasets, an internet data engine, and GPT-generated instruction tuning data. Follow these steps to get started:
1. **Download the Data Repository**
Download the data repository. Note that due to Hugging Face policy constraints, the data folder is archived into tar files. We also split the `allava` and `data_engine` data into smaller tar files because they exceed the 50 GB size limit.
2. **Merge Tar Files**
To explore the Cambrian-10M dataset, first merge the different parts of `allava` and `data_engine` together:
```bash
python merge_tars.py
```
3. **Extract Tar Files**
Then, extract all the tar files into the current directory:
```bash
python extract.py
```
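For reference, steps 2 and 3 roughly amount to byte-concatenating the split parts back into single archives and then unpacking every tar. The sketch below is an approximation under an assumed `<name>.tar.part*` naming scheme, not the repository scripts themselves:
```python
import glob
import shutil
import tarfile

# Rejoin split archives by concatenating their parts in order
# (assumes a hypothetical "<name>.tar.part0", "<name>.tar.part1", ... scheme).
for name in ("allava", "data_engine"):
    parts = sorted(glob.glob(f"{name}.tar.part*"))
    with open(f"{name}.tar", "wb") as out:
        for part in parts:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)

# Extract every archive into the current directory.
for archive in glob.glob("*.tar"):
    with tarfile.open(archive) as tar:
        tar.extractall(".")
```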
4. **Training with Cambrian**
You can train with the raw [Cambrian10M](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/Cambrian10M.jsonl) or the curated [Cambrian7M](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/Cambrian7M.jsonl). We recommend using
the curated [Cambrian7M with system prompt](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/blob/main/jsons/Cambrian7M_withsystemprompt.jsonl), which also alleviates the 'answer machine' problem. |
OpenGVLab/OmniCorpus-CC | OpenGVLab | "2024-11-17T07:08:46Z" | 13,794 | 10 | [
"task_categories:image-to-text",
"task_categories:visual-question-answering",
"language:en",
"license:cc-by-4.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2406.08418",
"region:us"
] | [
"image-to-text",
"visual-question-answering"
] | "2024-08-30T06:16:02Z" | ---
language:
- en
license: cc-by-4.0
size_categories:
- 100M<n<1B
task_categories:
- image-to-text
- visual-question-answering
dataset_info:
- config_name: CC-MAIN-2013-20
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 19908676196
num_examples: 3878063
download_size: 9303464923
dataset_size: 19908676196
- config_name: CC-MAIN-2013-48
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 15282078925
num_examples: 3091537
download_size: 6965036866
dataset_size: 15282078925
- config_name: CC-MAIN-2014-10
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 7227087609
num_examples: 1390034
download_size: 3259239561
dataset_size: 7227087609
- config_name: CC-MAIN-2014-15
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 10106913108
num_examples: 1968361
download_size: 4567738362
dataset_size: 10106913108
- config_name: CC-MAIN-2014-23
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 7997621043
num_examples: 1455331
download_size: 3468852905
dataset_size: 7997621043
- config_name: CC-MAIN-2014-35
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 6228103779
num_examples: 1219200
download_size: 2849584613
dataset_size: 6228103779
- config_name: CC-MAIN-2014-41
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 8321822952
num_examples: 1573955
download_size: 3775989970
dataset_size: 8321822952
- config_name: CC-MAIN-2014-42
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 7732679416
num_examples: 1511931
download_size: 3505766162
dataset_size: 7732679416
- config_name: CC-MAIN-2014-49
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 4473311810
num_examples: 837735
download_size: 1982728919
dataset_size: 4473311810
- config_name: CC-MAIN-2014-52
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 7292722888
num_examples: 1304730
download_size: 2957626766
dataset_size: 7292722888
- config_name: CC-MAIN-2015-06
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 5775826679
num_examples: 1061940
download_size: 2462379667
dataset_size: 5775826679
- config_name: CC-MAIN-2015-11
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 6263650452
num_examples: 1129411
download_size: 2528026633
dataset_size: 6263650452
- config_name: CC-MAIN-2015-14
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 4524425019
num_examples: 885221
download_size: 1939222111
dataset_size: 4524425019
- config_name: CC-MAIN-2015-18
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 6195227565
num_examples: 1104115
download_size: 2634204322
dataset_size: 6195227565
- config_name: CC-MAIN-2015-22
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 7008276790
num_examples: 1290530
download_size: 2913627974
dataset_size: 7008276790
- config_name: CC-MAIN-2015-27
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 4320140953
num_examples: 784496
download_size: 1828575226
dataset_size: 4320140953
- config_name: CC-MAIN-2015-32
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 4952806590
num_examples: 875601
download_size: 2065207099
dataset_size: 4952806590
- config_name: CC-MAIN-2015-35
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 6053257306
num_examples: 1086470
download_size: 2632032769
dataset_size: 6053257306
- config_name: CC-MAIN-2015-40
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 5206096790
num_examples: 924036
download_size: 2203603087
dataset_size: 5206096790
- config_name: CC-MAIN-2015-48
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 8343050753
num_examples: 1537468
download_size: 3489600630
dataset_size: 8343050753
- config_name: CC-MAIN-2016-07
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 9329220105
num_examples: 1738650
download_size: 4005599785
dataset_size: 9329220105
- config_name: CC-MAIN-2016-18
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 3897220786
num_examples: 747570
download_size: 1675500816
dataset_size: 3897220786
- config_name: CC-MAIN-2016-22
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 4623903344
num_examples: 857060
download_size: 2000624854
dataset_size: 4623903344
- config_name: CC-MAIN-2016-26
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 3414418701
num_examples: 627995
download_size: 1403890884
dataset_size: 3414418701
- config_name: CC-MAIN-2016-30
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 7244342539
num_examples: 1183776
download_size: 2913394840
dataset_size: 7244342539
- config_name: CC-MAIN-2016-36
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 5402565529
num_examples: 915878
download_size: 2248454753
dataset_size: 5402565529
- config_name: CC-MAIN-2016-40
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 5938544915
num_examples: 1113534
download_size: 2530904625
dataset_size: 5938544915
- config_name: CC-MAIN-2016-44
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 15819536321
num_examples: 3528637
download_size: 6516546200
dataset_size: 15819536321
- config_name: CC-MAIN-2016-50
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 10822695594
num_examples: 2215939
download_size: 4439728574
dataset_size: 10822695594
- config_name: CC-MAIN-2017-04
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 11949732148
num_examples: 2441316
download_size: 5045763620
dataset_size: 11949732148
- config_name: CC-MAIN-2017-09
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 12473370126
num_examples: 2561539
download_size: 5398993614
dataset_size: 12473370126
- config_name: CC-MAIN-2017-13
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 12209904783
num_examples: 2458486
download_size: 5422393873
dataset_size: 12209904783
- config_name: CC-MAIN-2017-17
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 13763109013
num_examples: 2615558
download_size: 6025106556
dataset_size: 13763109013
- config_name: CC-MAIN-2017-22
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 14456991831
num_examples: 2775332
download_size: 6258001465
dataset_size: 14456991831
- config_name: CC-MAIN-2017-26
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 15036103558
num_examples: 2973499
download_size: 6813218532
dataset_size: 15036103558
- config_name: CC-MAIN-2017-30
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 18833639414
num_examples: 3870197
download_size: 8464443468
dataset_size: 18833639414
- config_name: CC-MAIN-2017-34
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 25828116836
num_examples: 4848154
download_size: 11599137919
dataset_size: 25828116836
- config_name: CC-MAIN-2017-39
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 29432150311
num_examples: 4840435
download_size: 13172655761
dataset_size: 29432150311
- config_name: CC-MAIN-2017-43
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 32672966840
num_examples: 5724493
download_size: 15041820212
dataset_size: 32672966840
- config_name: CC-MAIN-2017-47
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 34301891443
num_examples: 5291581
download_size: 15593452226
dataset_size: 34301891443
- config_name: CC-MAIN-2017-51
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 30012533603
num_examples: 5466672
download_size: 14005518471
dataset_size: 30012533603
- config_name: CC-MAIN-2018-05
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 47738703452
num_examples: 8053879
download_size: 22533983733
dataset_size: 47738703452
- config_name: CC-MAIN-2018-09
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 45503126107
num_examples: 8045410
download_size: 21900491411
dataset_size: 45503126107
- config_name: CC-MAIN-2018-13
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 43904789090
num_examples: 7980931
download_size: 21178075620
dataset_size: 43904789090
- config_name: CC-MAIN-2018-17
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 44481167440
num_examples: 8699878
download_size: 21623780968
dataset_size: 44481167440
- config_name: CC-MAIN-2018-22
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 64369136465
num_examples: 13332059
download_size: 32293951649
dataset_size: 64369136465
- config_name: CC-MAIN-2018-26
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 81232597180
num_examples: 16249638
download_size: 41007491366
dataset_size: 81232597180
- config_name: CC-MAIN-2018-30
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 151537007358
num_examples: 32535697
download_size: 77517210537
dataset_size: 151537007358
- config_name: CC-MAIN-2018-34
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 39026071869
num_examples: 6347230
download_size: 19285382621
dataset_size: 39026071869
- config_name: CC-MAIN-2018-39
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 35948493161
num_examples: 6372711
download_size: 17597722170
dataset_size: 35948493161
- config_name: CC-MAIN-2018-43
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 50928918805
num_examples: 8758225
download_size: 25291022646
dataset_size: 50928918805
- config_name: CC-MAIN-2018-47
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 43961213014
num_examples: 7270815
download_size: 22024998684
dataset_size: 43961213014
- config_name: CC-MAIN-2018-51
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 58902353921
num_examples: 10215384
download_size: 29497256483
dataset_size: 58902353921
- config_name: CC-MAIN-2019-04
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 54814836003
num_examples: 9930553
download_size: 27458854931
dataset_size: 54814836003
- config_name: CC-MAIN-2019-09
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 54426174385
num_examples: 8897510
download_size: 28125345656
dataset_size: 54426174385
- config_name: CC-MAIN-2019-13
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 48712051219
num_examples: 7803004
download_size: 25156014252
dataset_size: 48712051219
- config_name: CC-MAIN-2019-18
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 48203751852
num_examples: 7532171
download_size: 24844412087
dataset_size: 48203751852
- config_name: CC-MAIN-2019-22
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 51674379059
num_examples: 8339842
download_size: 26257475492
dataset_size: 51674379059
- config_name: CC-MAIN-2019-26
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 43336967638
num_examples: 7320268
download_size: 21900316910
dataset_size: 43336967638
- config_name: CC-MAIN-2019-30
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 46313133200
num_examples: 7682281
download_size: 23262218065
dataset_size: 46313133200
- config_name: CC-MAIN-2019-35
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 49570657315
num_examples: 8098108
download_size: 24938729240
dataset_size: 49570657315
- config_name: CC-MAIN-2019-39
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 43538081906
num_examples: 7102645
download_size: 21728983014
dataset_size: 43538081906
- config_name: CC-MAIN-2019-43
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 52817470138
num_examples: 8567061
download_size: 26105523209
dataset_size: 52817470138
- config_name: CC-MAIN-2019-47
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 42252827792
num_examples: 6775943
download_size: 21228532199
dataset_size: 42252827792
- config_name: CC-MAIN-2019-51
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 38926356094
num_examples: 6415558
download_size: 19510339598
dataset_size: 38926356094
- config_name: CC-MAIN-2020-05
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 48189844491
num_examples: 7921372
download_size: 24235687030
dataset_size: 48189844491
- config_name: CC-MAIN-2020-10
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 48904133840
num_examples: 8211791
download_size: 24576159189
dataset_size: 48904133840
- config_name: CC-MAIN-2020-16
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 51243682770
num_examples: 8578633
download_size: 25485035979
dataset_size: 51243682770
- config_name: CC-MAIN-2020-24
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 59424939072
num_examples: 10438139
download_size: 29827361603
dataset_size: 59424939072
- config_name: CC-MAIN-2020-29
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 66229730938
num_examples: 11475631
download_size: 33030161773
dataset_size: 66229730938
- config_name: CC-MAIN-2020-34
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 54287690582
num_examples: 9495610
download_size: 27018821467
dataset_size: 54287690582
- config_name: CC-MAIN-2020-40
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 71587907978
num_examples: 12058149
download_size: 35795677487
dataset_size: 71587907978
- config_name: CC-MAIN-2020-45
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 59172857400
num_examples: 9694734
download_size: 29495814784
dataset_size: 59172857400
- config_name: CC-MAIN-2020-50
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 58557861606
num_examples: 9539918
download_size: 29083801775
dataset_size: 58557861606
- config_name: CC-MAIN-2021-04
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 74507336015
num_examples: 12273028
download_size: 36874887518
dataset_size: 74507336015
- config_name: CC-MAIN-2021-10
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 60802783945
num_examples: 10176190
download_size: 30326513365
dataset_size: 60802783945
- config_name: CC-MAIN-2021-17
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 75061494488
num_examples: 12343366
download_size: 37345114890
dataset_size: 75061494488
- config_name: CC-MAIN-2021-21
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 70036417178
num_examples: 11584034
download_size: 34806730527
dataset_size: 70036417178
- config_name: CC-MAIN-2021-25
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 73653674063
num_examples: 12065281
download_size: 36581310312
dataset_size: 73653674063
- config_name: CC-MAIN-2021-31
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 79535885182
num_examples: 13383552
download_size: 39702500971
dataset_size: 79535885182
- config_name: CC-MAIN-2021-39
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 90302065651
num_examples: 14794773
download_size: 45211764750
dataset_size: 90302065651
- config_name: CC-MAIN-2021-43
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 108356023335
num_examples: 17698206
download_size: 54292215300
dataset_size: 108356023335
- config_name: CC-MAIN-2021-49
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 95867022229
num_examples: 15643875
download_size: 47902433321
dataset_size: 95867022229
- config_name: CC-MAIN-2022-05
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 97602903488
num_examples: 15772898
download_size: 48711364812
dataset_size: 97602903488
- config_name: CC-MAIN-2022-21
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 127495492928
num_examples: 21745889
download_size: 63379692210
dataset_size: 127495492928
- config_name: CC-MAIN-2022-27
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 128061655541
num_examples: 21580054
download_size: 63763936007
dataset_size: 128061655541
- config_name: CC-MAIN-2022-33
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 126436062118
num_examples: 21495687
download_size: 63067252044
dataset_size: 126436062118
- config_name: CC-MAIN-2022-40
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 123806739937
num_examples: 20206120
download_size: 61929035270
dataset_size: 123806739937
- config_name: CC-MAIN-2022-49
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 152577158166
num_examples: 24634059
download_size: 76529854484
dataset_size: 152577158166
- config_name: CC-MAIN-2023-06
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 174815301023
num_examples: 28962355
download_size: 87301203013
dataset_size: 174815301023
- config_name: CC-MAIN-2023-14
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 227631152876
num_examples: 37223376
download_size: 114188282465
dataset_size: 227631152876
- config_name: CC-MAIN-2023-23
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 321036722459
num_examples: 52119692
download_size: 161491274249
dataset_size: 321036722459
- config_name: CC-MAIN-2023-40
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 648032999611
num_examples: 101292016
download_size: 317965522325
dataset_size: 648032999611
- config_name: CC-MAIN-2023-50
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 744768384551
num_examples: 117073004
download_size: 365332295606
dataset_size: 744768384551
configs:
- config_name: CC-MAIN-2013-20
data_files:
- split: train
path: CC-MAIN-2013-20/train-*
- config_name: CC-MAIN-2013-48
data_files:
- split: train
path: CC-MAIN-2013-48/train-*
- config_name: CC-MAIN-2014-10
data_files:
- split: train
path: CC-MAIN-2014-10/train-*
- config_name: CC-MAIN-2014-15
data_files:
- split: train
path: CC-MAIN-2014-15/train-*
- config_name: CC-MAIN-2014-23
data_files:
- split: train
path: CC-MAIN-2014-23/train-*
- config_name: CC-MAIN-2014-35
data_files:
- split: train
path: CC-MAIN-2014-35/train-*
- config_name: CC-MAIN-2014-41
data_files:
- split: train
path: CC-MAIN-2014-41/train-*
- config_name: CC-MAIN-2014-42
data_files:
- split: train
path: CC-MAIN-2014-42/train-*
- config_name: CC-MAIN-2014-49
data_files:
- split: train
path: CC-MAIN-2014-49/train-*
- config_name: CC-MAIN-2014-52
data_files:
- split: train
path: CC-MAIN-2014-52/train-*
- config_name: CC-MAIN-2015-06
data_files:
- split: train
path: CC-MAIN-2015-06/train-*
- config_name: CC-MAIN-2015-11
data_files:
- split: train
path: CC-MAIN-2015-11/train-*
- config_name: CC-MAIN-2015-14
data_files:
- split: train
path: CC-MAIN-2015-14/train-*
- config_name: CC-MAIN-2015-18
data_files:
- split: train
path: CC-MAIN-2015-18/train-*
- config_name: CC-MAIN-2015-22
data_files:
- split: train
path: CC-MAIN-2015-22/train-*
- config_name: CC-MAIN-2015-27
data_files:
- split: train
path: CC-MAIN-2015-27/train-*
- config_name: CC-MAIN-2015-32
data_files:
- split: train
path: CC-MAIN-2015-32/train-*
- config_name: CC-MAIN-2015-35
data_files:
- split: train
path: CC-MAIN-2015-35/train-*
- config_name: CC-MAIN-2015-40
data_files:
- split: train
path: CC-MAIN-2015-40/train-*
- config_name: CC-MAIN-2015-48
data_files:
- split: train
path: CC-MAIN-2015-48/train-*
- config_name: CC-MAIN-2016-07
data_files:
- split: train
path: CC-MAIN-2016-07/train-*
- config_name: CC-MAIN-2016-18
data_files:
- split: train
path: CC-MAIN-2016-18/train-*
- config_name: CC-MAIN-2016-22
data_files:
- split: train
path: CC-MAIN-2016-22/train-*
- config_name: CC-MAIN-2016-26
data_files:
- split: train
path: CC-MAIN-2016-26/train-*
- config_name: CC-MAIN-2016-30
data_files:
- split: train
path: CC-MAIN-2016-30/train-*
- config_name: CC-MAIN-2016-36
data_files:
- split: train
path: CC-MAIN-2016-36/train-*
- config_name: CC-MAIN-2016-40
data_files:
- split: train
path: CC-MAIN-2016-40/train-*
- config_name: CC-MAIN-2016-44
data_files:
- split: train
path: CC-MAIN-2016-44/train-*
- config_name: CC-MAIN-2016-50
data_files:
- split: train
path: CC-MAIN-2016-50/train-*
- config_name: CC-MAIN-2017-04
data_files:
- split: train
path: CC-MAIN-2017-04/train-*
- config_name: CC-MAIN-2017-09
data_files:
- split: train
path: CC-MAIN-2017-09/train-*
- config_name: CC-MAIN-2017-13
data_files:
- split: train
path: CC-MAIN-2017-13/train-*
- config_name: CC-MAIN-2017-17
data_files:
- split: train
path: CC-MAIN-2017-17/train-*
- config_name: CC-MAIN-2017-22
data_files:
- split: train
path: CC-MAIN-2017-22/train-*
- config_name: CC-MAIN-2017-26
data_files:
- split: train
path: CC-MAIN-2017-26/train-*
- config_name: CC-MAIN-2017-30
data_files:
- split: train
path: CC-MAIN-2017-30/train-*
- config_name: CC-MAIN-2017-34
data_files:
- split: train
path: CC-MAIN-2017-34/train-*
- config_name: CC-MAIN-2017-39
data_files:
- split: train
path: CC-MAIN-2017-39/train-*
- config_name: CC-MAIN-2017-43
data_files:
- split: train
path: CC-MAIN-2017-43/train-*
- config_name: CC-MAIN-2017-47
data_files:
- split: train
path: CC-MAIN-2017-47/train-*
- config_name: CC-MAIN-2017-51
data_files:
- split: train
path: CC-MAIN-2017-51/train-*
- config_name: CC-MAIN-2018-05
data_files:
- split: train
path: CC-MAIN-2018-05/train-*
- config_name: CC-MAIN-2018-09
data_files:
- split: train
path: CC-MAIN-2018-09/train-*
- config_name: CC-MAIN-2018-13
data_files:
- split: train
path: CC-MAIN-2018-13/train-*
- config_name: CC-MAIN-2018-17
data_files:
- split: train
path: CC-MAIN-2018-17/train-*
- config_name: CC-MAIN-2018-22
data_files:
- split: train
path: CC-MAIN-2018-22/train-*
- config_name: CC-MAIN-2018-26
data_files:
- split: train
path: CC-MAIN-2018-26/train-*
- config_name: CC-MAIN-2018-30
data_files:
- split: train
path: CC-MAIN-2018-30/train-*
- config_name: CC-MAIN-2018-34
data_files:
- split: train
path: CC-MAIN-2018-34/train-*
- config_name: CC-MAIN-2018-39
data_files:
- split: train
path: CC-MAIN-2018-39/train-*
- config_name: CC-MAIN-2018-43
data_files:
- split: train
path: CC-MAIN-2018-43/train-*
- config_name: CC-MAIN-2018-47
data_files:
- split: train
path: CC-MAIN-2018-47/train-*
- config_name: CC-MAIN-2018-51
data_files:
- split: train
path: CC-MAIN-2018-51/train-*
- config_name: CC-MAIN-2019-04
data_files:
- split: train
path: CC-MAIN-2019-04/train-*
- config_name: CC-MAIN-2019-09
data_files:
- split: train
path: CC-MAIN-2019-09/train-*
- config_name: CC-MAIN-2019-13
data_files:
- split: train
path: CC-MAIN-2019-13/train-*
- config_name: CC-MAIN-2019-18
data_files:
- split: train
path: CC-MAIN-2019-18/train-*
- config_name: CC-MAIN-2019-22
data_files:
- split: train
path: CC-MAIN-2019-22/train-*
- config_name: CC-MAIN-2019-26
data_files:
- split: train
path: CC-MAIN-2019-26/train-*
- config_name: CC-MAIN-2019-30
data_files:
- split: train
path: CC-MAIN-2019-30/train-*
- config_name: CC-MAIN-2019-35
data_files:
- split: train
path: CC-MAIN-2019-35/train-*
- config_name: CC-MAIN-2019-39
data_files:
- split: train
path: CC-MAIN-2019-39/train-*
- config_name: CC-MAIN-2019-43
data_files:
- split: train
path: CC-MAIN-2019-43/train-*
- config_name: CC-MAIN-2019-47
data_files:
- split: train
path: CC-MAIN-2019-47/train-*
- config_name: CC-MAIN-2019-51
data_files:
- split: train
path: CC-MAIN-2019-51/train-*
- config_name: CC-MAIN-2020-05
data_files:
- split: train
path: CC-MAIN-2020-05/train-*
- config_name: CC-MAIN-2020-10
data_files:
- split: train
path: CC-MAIN-2020-10/train-*
- config_name: CC-MAIN-2020-16
data_files:
- split: train
path: CC-MAIN-2020-16/train-*
- config_name: CC-MAIN-2020-24
data_files:
- split: train
path: CC-MAIN-2020-24/train-*
- config_name: CC-MAIN-2020-29
data_files:
- split: train
path: CC-MAIN-2020-29/train-*
- config_name: CC-MAIN-2020-34
data_files:
- split: train
path: CC-MAIN-2020-34/train-*
- config_name: CC-MAIN-2020-40
data_files:
- split: train
path: CC-MAIN-2020-40/train-*
- config_name: CC-MAIN-2020-45
data_files:
- split: train
path: CC-MAIN-2020-45/train-*
- config_name: CC-MAIN-2020-50
data_files:
- split: train
path: CC-MAIN-2020-50/train-*
- config_name: CC-MAIN-2021-04
data_files:
- split: train
path: CC-MAIN-2021-04/train-*
- config_name: CC-MAIN-2021-10
data_files:
- split: train
path: CC-MAIN-2021-10/train-*
- config_name: CC-MAIN-2021-17
data_files:
- split: train
path: CC-MAIN-2021-17/train-*
- config_name: CC-MAIN-2021-21
data_files:
- split: train
path: CC-MAIN-2021-21/train-*
- config_name: CC-MAIN-2021-25
data_files:
- split: train
path: CC-MAIN-2021-25/train-*
- config_name: CC-MAIN-2021-31
data_files:
- split: train
path: CC-MAIN-2021-31/train-*
- config_name: CC-MAIN-2021-39
data_files:
- split: train
path: CC-MAIN-2021-39/train-*
- config_name: CC-MAIN-2021-43
data_files:
- split: train
path: CC-MAIN-2021-43/train-*
- config_name: CC-MAIN-2021-49
data_files:
- split: train
path: CC-MAIN-2021-49/train-*
- config_name: CC-MAIN-2022-05
data_files:
- split: train
path: CC-MAIN-2022-05/train-*
- config_name: CC-MAIN-2022-21
data_files:
- split: train
path: CC-MAIN-2022-21/train-*
- config_name: CC-MAIN-2022-27
data_files:
- split: train
path: CC-MAIN-2022-27/train-*
- config_name: CC-MAIN-2022-33
data_files:
- split: train
path: CC-MAIN-2022-33/train-*
- config_name: CC-MAIN-2022-40
data_files:
- split: train
path: CC-MAIN-2022-40/train-*
- config_name: CC-MAIN-2022-49
data_files:
- split: train
path: CC-MAIN-2022-49/train-*
- config_name: CC-MAIN-2023-06
data_files:
- split: train
path: CC-MAIN-2023-06/train-*
- config_name: CC-MAIN-2023-14
data_files:
- split: train
path: CC-MAIN-2023-14/train-*
- config_name: CC-MAIN-2023-23
data_files:
- split: train
path: CC-MAIN-2023-23/train-*
- config_name: CC-MAIN-2023-40
data_files:
- split: train
path: CC-MAIN-2023-40/train-*
- config_name: CC-MAIN-2023-50
data_files:
- split: train
path: CC-MAIN-2023-50/train-*
---
⭐️ **NOTE:** Several parquet files were marked unsafe (viruses) by Hugging Face's official scanning, while they are reported safe by ClamAV and VirusTotal.
We found [many false positive cases](https://discuss.huggingface.co/u/mcpotato/summary) of the HF automatic scanning in the HF discussions and raised [a discussion](https://discuss.huggingface.co/t/one-parquet-file-of-my-dataset-was-marked-unsafe/113745) to ask for a re-scan.
# OmniCorpus-CC
This is the repository of OmniCorpus-CC, which contains 988 million image-text interleaved documents collected from [Common Crawl](https://commoncrawl.org/).
- Repository: https://github.com/OpenGVLab/OmniCorpus
- Paper: https://arxiv.org/abs/2406.08418
The OmniCorpus dataset is a large-scale image-text interleaved dataset, which pushes the boundaries of scale and diversity by encompassing **8.6 billion images** interleaved with **1,696 billion text tokens** from diverse sources, significantly surpassing previous datasets.
This dataset demonstrates several advantages over its counterparts:
1. **Larger data scale:** Our dataset is 1.7 times larger in images and 12.5 times larger in texts compared to the previously largest multimodal dataset, LAION-5B, while maintaining excellent data quality.
2. **Richer data diversity:** Drawing from a broader range of data sources, our dataset is more diverse than other image-text interleaved datasets. It includes bilingual multimodal data in both Chinese and English, and encompasses text-centric and vision-centric documents extracted from common websites and video platforms.
3. **More flexible format:** The streaming data format of our dataset offers exceptional flexibility, allowing adaptation to various data structures, including pure text corpora, image-text pairs, and interleaved data formats.
<img width="578" alt="image" src="https://github.com/OpenGVLab/OmniCorpus/assets/47669167/641a6427-ba50-41e6-8634-8810113fd803">
The OmniCorpus contains three sections:
- **OmniCorpus-CC**: processed from Common Crawl dumps spanning 2013 to Nov./Dec. 2023.
- **OmniCorpus-CW**: sourced from Chinese internet resources; it will be available on the [OpenDataLab](https://opendatalab.com/) platform.
- **OmniCorpus-YT**: samples YouTube video frames as images and collects subtitles as texts.
Code for pre-training, evaluation, main-body extraction, and filtering has been released in the official [repository](https://github.com/OpenGVLab/OmniCorpus). A pre-trained model is available [here](https://huggingface.co/Qingyun/OmniCorpus-InternVL).
# Data Pipeline
Our data pipeline consists of five key stages: main body extraction, preliminary text filtering, document deduplication, image downloading & filtering, and detailed text filtering. Each stage efficiently reduces the dataset to retain only high-quality data.
Please refer to our paper for more details about the data pipeline.
<img width="723" alt="image" src="https://github.com/OpenGVLab/OmniCorpus/assets/47669167/a6de8928-58fb-4ff4-8ef9-4bd90e9ada5f">
# Usages
The image-text interleaved documents are recommended for the following usages:
- Pre-training multimodal large language models (MLLMs): Recent MLLMs (such as the Flamingo series, EMU series, IDEFICS series, MM1, Cambrian-1, and xGen-MM) have shown that image-text interleaved data aids multimodal in-context learning and maintains the capabilities of large language models during multimodal fine-tuning.
- Long text-image retrieval: We provide image-text similarities calculated with CLIP, which can be used to convert the documents into an image-text retrieval dataset with longer texts. A retrieval model pre-trained on such data can retrieve images based on longer texts, which is useful for multimodal RAG, converting pure text into multimodal samples, etc. (a minimal pairing sketch follows this list).
- Source for further dataset research: Our data is large-scale, so it can serve as a source for research on data curation strategies. We provide many useful attributes as metadata for each document, which can enrich filtering strategies and reduce their cost.
- And more.
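As an illustration of the retrieval use case, below is a minimal sketch that pairs each image URL with the text paragraphs adjacent to it, assuming the parallel `images`/`texts` lists described in the Data Format section below. The nearest-paragraph pairing heuristic is our own illustration, not the official conversion script.
```Python
# Minimal sketch (our own illustration, not the official conversion):
# pair each image URL with the nearest text paragraphs around it.
def to_retrieval_pairs(doc):
    pairs = []
    for i, url in enumerate(doc["images"]):
        if url is None:
            continue
        # Nearest non-empty paragraph before and after the image.
        before = next((t for t in reversed(doc["texts"][:i]) if t), "")
        after = next((t for t in doc["texts"][i + 1:] if t), "")
        pairs.append({"image_url": url, "text": (before + " " + after).strip()})
    return pairs
```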
# Data Format
Following common practice, the data is organized in the Parquet file format.
You might encounter errors when using `pandas.read_parquet` (because the data structure contains nested elements). We recommend using `fastparquet` to load the parquet files.
```Python
import fastparquet
df = fastparquet.ParquetFile(parquet_file_path).to_pandas()

# You can also read the file in batches with pyarrow
import pyarrow.parquet as pq
parquet_file = pq.ParquetFile(parquet_file_path)
for batch in parquet_file.iter_batches():
    df = batch.to_pandas()
```
You can take the i-th document and convert it into a dictionary.
```Python
doc_dict = df.iloc[i].to_dict()
```
The document format is as follows:
```json
{
'images': [
<str: image_1_url>,
None,
<str: image_2_url>,
None,
],
'texts': [
None,
<str: text_paragraph_1_content>,
None,
<str: text_paragraph_2_content>,
],
'metadata': [
<dict: image_1_metadata>,
None,
<dict: image_2_metadata>,
None
],
'general_metadata': {
"url": <str: document url>,
"id": <str: document id>,
"domain": <list[str]: domains extracted from document url>,
"fluency_prob": <float: the probability of fluency>,
"non_advertisement_prob": <float: the probability of non-advertisement>,
"porn_prob": <float: the probability of porn content>,
"politics_prob": <float: the probability of politics content>,
"toxic_prob": <float: the probability of toxic content>,
}
}
```
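Since `images` and `texts` are parallel lists where one of the two entries is `None` at each position (as shown above), the interleaved sequence can be rebuilt with a simple walk. A sketch of our own illustration, using `doc_dict` from the snippet above:
```Python
def iter_interleaved(doc_dict):
    """Yield ('image', url) or ('text', paragraph) in document order."""
    for url, text in zip(doc_dict["images"], doc_dict["texts"]):
        if url is not None:
            yield "image", url
        elif text is not None:
            yield "text", text

for kind, content in iter_interleaved(doc_dict):
    print(kind, str(content)[:80])
```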
The metadata for each image is as follows:
```json
{
"img_url_sha": <str: sha code of image url>,
"width": <int: image width>,
"height": <int: image height>,
"bytes": <int: byte number of the image file>,
"d_hash": <str: d_hash code of the image, used for image deduplication>,
"p_hash": <str: p_hash code of the image, used for image deduplication>,
"d_hash_dup_count": <int: duplicated times detected by d_hash code>,
"p_hash_dup_count": <int: duplicated times detected by p_hash code>,
"aesthetic prob": <float: aesthetic probility>,
"unsafe prob": <float: NSFW probility>,
}
```
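These per-image attributes, together with the document-level probabilities above, support lightweight custom filtering. Below is a hedged sketch; the threshold values are arbitrary placeholders, not the values used in the paper:
```Python
def keep_document(doc_dict, min_fluency=0.9, max_porn=0.1):
    # Filter documents by the document-level quality probabilities.
    gm = doc_dict["general_metadata"]
    return gm["fluency_prob"] >= min_fluency and gm["porn_prob"] <= max_porn

def keep_image(image_meta, max_unsafe=0.1, max_dup=10):
    # Filter images by NSFW probability and perceptual-hash duplication count.
    return (image_meta is not None
            and image_meta["unsafe_prob"] <= max_unsafe
            and image_meta["p_hash_dup_count"] <= max_dup)

kept = [doc for doc in (df.iloc[i].to_dict() for i in range(len(df)))
        if keep_document(doc)]
```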
# License
OmniCorpus is released under a [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/deed.en) license, with the primary intent of supporting research activities.
# Citation
```
@article{li2024omnicorpus,
title={OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text},
author={Li, Qingyun and Chen, Zhe and Wang, Weiyun and Wang, Wenhai and Ye, Shenglong and Jin, Zhenjiang and others},
journal={arXiv preprint arXiv:2406.08418},
year={2024}
}
```
|
SPRIGHT-T2I/spright | SPRIGHT-T2I | "2024-10-09T10:05:58Z" | 13,740 | 27 | [
"language:en",
"license:other",
"size_categories:1M<n<10M",
"arxiv:2102.08981",
"arxiv:2304.02643",
"arxiv:1405.0312",
"arxiv:2311.01477",
"arxiv:2404.01197",
"region:us"
] | null | "2024-03-11T06:26:24Z" | ---
language:
- en
size_categories:
- 1M<n<10M
license:
- other
license_name: intel-research-use-license
license_link: LICENSE
---
# <u>Dataset Description</u>
SPRIGHT (**SP**atially **RIGHT**) is the first spatially focused, large scale vision-language dataset. It was built by re-captioning
∼6 million images from 4 widely-used datasets:
* [CC12M](https://arxiv.org/abs/2102.08981)
* [Segment Anything](https://arxiv.org/abs/2304.02643)
* [COCO Validation](https://arxiv.org/abs/1405.0312)
* [LAION Aesthetics](https://laion.ai/blog/laion-aesthetics/)
This repository contains the re-captioned data from CC12M and Segment Anything, while the COCO data is present [here](https://huggingface.co/datasets/SPRIGHT-T2I/spright_coco). We do not release images from LAION, as the parent images are currently private.
Below are some illustrative examples from the SPRIGHT dataset, where the captions are correct in their entirety, both in capturing the
spatial relationships and in the overall description of the image.
![](good_examples.png)
We also share some illustrative examples from the SPRIGHT dataset, where the captions are not completely correct.
![](bad_examples.png)
## <u>Dataset Sources</u>
### CC-12M
We re-caption a total of 2.3 million images from the CC-12M dataset, filtering out images with a resolution of less than 768.
### Segment Anything
We re-caption 3.5 million images as part of our process. Since SA has all human faces blurred, we filter out images which contain blurring, i.e. we filter out images where humans are detected (using the Owl-V2 object detector). Since SA does not have ground-truth captions, we also generate general captions for it using the CoCa captioning model.
## <u>Dataset Structure</u>
### Samples
Each tar file contains 10k samples. Each sample is composed of:
- an image - "{idx}.jpg"
- related captions (general caption and spatial caption) - "{idx}.json"
- metadata (image width and height, original dataset the image was taken from and its original id) - "{idx}.metadata.json"
### How to use it
In order to load the data, you can use the [`load_data.py`](./load_data.py) script. The metadata.json file contains the size and the split for each tar file. We also provide a script
[`robust_upload.py`](robust_upload.py) used to efficiently upload the data to the Hugging Face Hub.
Note: filenames inside each .tar partition do NOT contain leading zeroes, which may confound some sorting mechanisms (e.g. Python's built-in sort()); users that download and extract data or filenames from the .tar partitions should be aware of this and use a "natural sort" style function to accommodate this convention.
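For example, a minimal natural-sort key (illustrative only, not part of the provided scripts) could look like this:
```python
import re

def natural_key(name):
    # Split into digit and non-digit runs so that "10.jpg" sorts after "9.jpg".
    # Assumes uniformly structured names such as "{idx}.jpg".
    return [int(part) if part.isdigit() else part for part in re.split(r"(\d+)", name)]

print(sorted(["10.jpg", "2.jpg", "1.jpg"], key=natural_key))
# ['1.jpg', '2.jpg', '10.jpg']
```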
## <u>Dataset Creation</u>
#### Data Generation
We leverage [LLaVA-1.5-13B](https://github.com/haotian-liu/LLaVA) to produce synthetic spatial captions, and use the following prompt to create the SPRIGHT dataset:
> "Using 2 sentences, describe the spatial relationships seen in the image. You can use words like left/right, above/below, front/behind, far/near/adjacent, inside/outside. Also describe relative sizes of objects seen in the image."
#### Dataset validation
- Using [FAITHScore](https://arxiv.org/abs/2311.01477): We leverage a large language model to deconstruct generated captions into atomic (simple) claims that can be individually and independently verified in VQA format. The captions are on average 88.9% correct.
- Using [GPT-4(V)](https://cdn.openai.com/papers/GPTV_System_Card.pdf): We perform a small-scale study on 100 images to evaluate our captions with GPT-4(V). Specifically, we prompt GPT-4(V) to rate each caption on a scale of 1 to 10, focusing especially on the correctness of the spatial relationships captured. We achieve a mean and median rating of 6.41 and 7.0.
- Human annotation: We also annotate a total of 3000 images through a crowd-sourced human study, where each participant annotates a maximum of 30 image-text pairs. Most captions in SPRIGHT have more than one sentence. Therefore, for a fine-grained evaluation, we randomly select one sentence from a caption in SPRIGHT and evaluate its correctness for a given image. Across 149 responses, we get an accuracy of 66.57%.
# <u>Acknowledgements</u>
We thank [Lucain](https://fr.linkedin.com/in/lucainpouget) from the Hugging Face team for helping us with the `robust_upload.py` script.
## <u>Citation</u>
```bibtex
@misc{chatterjee2024getting,
title={Getting it Right: Improving Spatial Consistency in Text-to-Image Models},
author={Agneet Chatterjee and Gabriela Ben Melech Stan and Estelle Aflalo and Sayak Paul and Dhruba Ghosh and Tejas Gokhale and Ludwig Schmidt and Hannaneh Hajishirzi and Vasudev Lal and Chitta Baral and Yezhou Yang},
year={2024},
eprint={2404.01197},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## License
SPRIGHT-T2I/spright is licensed under the [Intel Research License](./LICENSE). All Rights Reserved.
Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See Intel’s Global Human Rights Principles. Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights. |
lowercaseonly/cghd | lowercaseonly | "2024-11-24T18:48:27Z" | 13,713 | 1 | [
"task_categories:object-detection",
"task_categories:image-segmentation",
"language:en",
"language:de",
"license:cc-by-3.0",
"size_categories:1K<n<10K",
"modality:image",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"object-detection",
"image-segmentation"
] | "2023-05-21T12:20:21Z" | ---
license: cc-by-3.0
pretty_name: A Public Ground-Truth Dataset for Handwritten Circuit Diagram Images
size_categories:
- 1K<n<10K
task_categories:
- object-detection
- image-segmentation
language:
- en
- de
---
# Public Ground-Truth Dataset for Handwritten Circuit Diagrams (GTDB-HD)
This repository contains images of hand-drawn electrical circuit diagrams together with accompanying bounding box annotations for object detection and segmentation ground-truth files. This dataset is intended for training (e.g. neural network) models that extract electrical graphs from raster graphics.
## Structure
The folder structure is made up as follows:
```
gtdh-hd
│ README.md # This File
│ classes.json # Classes List
│ classes_color.json # Classes to Color Map
│ classes_discontinuous.json # Classes Morphology Info
│ classes_ports.json # Electrical Port Descriptions for Classes
│ consistency.py # Dataset Statistics and Consistency Check
| loader.py # Simple Dataset Loader and Storage Functions
│ segmentation.py # Multiclass Segmentation Generation
│ utils.py # Helper Functions
│ requirements.txt # Requirements for Scripts
└───drafter_D
│ └───annotations # Bounding Box Annotations
│ │ │ CX_DY_PZ.xml
│ │ │ ...
│ │
│ └───images # Raw Images
│ │ │ CX_DY_PZ.jpg
│ │ │ ...
│ │
│ └───instances # Instance Segmentation Polygons
│ │ │ CX_DY_PZ.json
│ │ │ ...
│ │
│ └───segmentation # Binary Segmentation Maps (Strokes vs. Background)
│ │ │ CX_DY_PZ.jpg
│ │ │ ...
...
```
Where:
- `D` is the (globally) running number of a drafter
- `X` is the (globally) running number of the circuit (12 Circuits per Drafter)
- `Y` is the Local Number of the Circuit's Drawings (2 Drawings per Circuit)
- `Z` is the Local Number of the Drawing's Image (4 Pictures per Drawing)
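A small helper (illustrative only, not part of the repository scripts) to parse this naming convention:
```
import re

def parse_sample_name(stem):
    """Parse a 'CX_DY_PZ' file name stem into (circuit, drawing, picture)."""
    match = re.fullmatch(r"C(\d+)_D(\d+)_P(\d+)", stem)
    if match is None:
        raise ValueError(f"unexpected sample name: {stem}")
    return tuple(int(g) for g in match.groups())

print(parse_sample_name("C12_D1_P3"))  # (12, 1, 3)
```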
### Image Files
Every image is RGB-colored and either stored as `jpg`, `jpeg` or `png` (both uppercase and lowercase suffixes exist).
### Bounding Box Annotations
A complete list of class labels, including a suggested mapping table to integer numbers for training and prediction purposes, can be found in `classes.json`. The annotations contain **BB**s (Bounding Boxes) of **RoI**s (Regions of Interest), like electrical symbols or texts, within the raw images and are stored in the [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/) format.
Please note: *For every Raw image in the dataset, there is an accompanying bounding box annotation file.*
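Since the annotations are plain Pascal VOC XML, the boxes can be read with the Python standard library. A minimal sketch (illustrative, not part of the shipped loader):
```
import xml.etree.ElementTree as ET

def read_voc_boxes(xml_path):
    # Returns a list of (label, xmin, ymin, xmax, ymax) tuples.
    boxes = []
    for obj in ET.parse(xml_path).getroot().iter("object"):
        label = obj.findtext("name")
        bb = obj.find("bndbox")
        coords = [int(float(bb.findtext(t))) for t in ("xmin", "ymin", "xmax", "ymax")]
        boxes.append((label, *coords))
    return boxes
```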
#### Known Labeled Issues
- C25_D1_P4 cuts off a text
- C27 cuts off some texts
- C29_D1_P1 has one additional text
- C31_D2_P4 is missing a text
- C33_D1_P4 is missing a text
- C46_D2_P2 cuts off a text
### Instance Segmentation
For every binary segmentation map, there is an accompanying polygonal annotation file for instance segmentation purposes, which is stored in the [labelme](https://github.com/wkentaro/labelme) format. Note that the contained polygons are quite coarse, intended to be used in conjunction with the binary segmentation maps for connection extraction and to tell individual instances with overlapping BBs apart.
### Segmentation Maps
Binary segmentation images are available for some samples and have the same resolution as the respective image files. They contain only black and white pixels, indicating areas of drawing strokes and background respectively.
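A quick way to inspect such a map (the sample path is hypothetical; the threshold guards against JPEG compression artifacts):
```
from PIL import Image
import numpy as np

seg = np.array(Image.open("drafter_1/segmentation/C1_D1_P1.jpg").convert("L"))
stroke_mask = seg < 128  # dark pixels are strokes, light pixels are background
print(f"stroke pixel ratio: {stroke_mask.mean():.3f}")
```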
### Netlists
For some images, there are also netlist files available, which are stored in the [ASC](http://ltwiki.org/LTspiceHelp/LTspiceHelp/Spice_Netlist.htm) format.
### Consistency and Statistics
This repository comes with a stand-alone script to:
- Obtain Statistics on
- Class Distribution
- BB Sizes
- Check the BB Consistency
- Classes with Regards to the `classes.json`
- Counts between Pictures of the same Drawing
- Ensure a uniform writing style of the Annotation Files (indent)
The respective script is called without arguments to operate on the **entire** dataset:
```
$ python3 consistency.py
```
Note that due to a complete re-write of the annotation data, the script takes several seconds to finish. A drafter can be specified as a CLI argument to restrict the evaluation (for example, drafter 15):
```
$ python3 consistency.py 15
```
### Multi-Class (Instance) Segmentation Processing
This dataset comes with a script to process both new and existing (instance) segmentation files. It is invoked as follows:
```
$ python3 segmentation.py <command> <drafter_id> <target> <source>
```
Where:
- `<command>` has to be one of:
- `transform`
- Converts existing BB Annotations to Polygon Annotations
- Default target folder: `instances`
    - Existing polygon files will not be overridden with the default settings, hence this command has no effect on a completely populated dataset.
- Intended to be invoked after adding new binary segmentation maps
- **This step has to be performed before all other commands**
- `wire`
- Generates Wire Describing Polygons
- Default target folder: `wires`
- `keypoint`
- Generates Keypoints for Component Terminals
- Default target folder: `keypoints`
- `create`
- Generates Multi-Class segmentation Maps
- Default target folder: `segmentation_multi_class`
- `refine`
- Refines Coarse Polygon Annotations to precisely match the annotated objects
- Default target folder: `instances_refined`
- For instance segmentation purposes
- `pipeline`
- executes `wire`,`keypoint` and `refine` stacked, with one common `source` and `target` folder
- Default target folder: `instances_refined`
- `assign`
- Connector Point to Port Type Assignment by Geometric Transformation Matching
- `<drafter_id>` **optionally** restricts the process to one of the drafters
- `<target>` **optionally** specifies a divergent target folder for results to be placed in
- `<source>` **optionally** specifies a divergent source folder to read from
Please note that source and target folders are **always** subfolders inside the individual drafter folders. Specifying source and target folders allows stacking the results of individual processing steps. For example, to perform the entire pipeline for drafter 20 manually, use:
```
python3 segmentation.py wire 20 instances_processed instances
python3 segmentation.py keypoint 20 instances_processed instances_processed
python3 segmentation.py refine 20 instances_processed instances_processed
```
### Dataset Loader
This dataset is also shipped with a set of loader and writer functions, which are internally used by the segmentation and consistency scripts and can be used for training. The dataset loader is simple, framework-agnostic and has been prepared to be callable from any location in the file system. Basic usage:
```
from loader import read_dataset
db_bb = read_dataset() # Read all BB Annotations
db_seg = read_dataset(segmentation=True) # Read all Polygon Annotations
db_bb_val = read_dataset(drafter=12) # Read Drafter 12 BB Annotations
len(db_bb) # Get The Amount of Samples
db_bb[5] # Get an Arbitrary Sample
db = read_images(drafter=12) # Returns a list of (Image, Annotation) pairs
db = read_snippets(drafter=12) # Returns a list of (Image, Annotation) pairs
```
## Citation
If you use this dataset for scientific publications, please consider citing us as follows:
```
@inproceedings{thoma2021public,
title={A Public Ground-Truth Dataset for Handwritten Circuit Diagram Images},
author={Thoma, Felix and Bayer, Johannes and Li, Yakun and Dengel, Andreas},
booktitle={International Conference on Document Analysis and Recognition},
pages={20--27},
year={2021},
organization={Springer}
}
```
## How to Contribute
If you want to contribute to the dataset as a drafter or in case of any further questions, please send an email to: <johannes.bayer@dfki.de> (corresponding author), <yakun.li@dfki.de>, <andreas.dengel@dfki.de>
## Guidelines
These guidelines are used throughout the generation of the dataset. They can be used as an instruction for participants and data providers.
### Drafter Guidelines
- 12 Circuits should be drawn, each of them twice (24 drawings in total)
- Most important: The drawing should be as natural to the drafter as possible
- Free-hand sketches are preferred; using rulers and drawing template stencils should be avoided unless it appears unnatural to the drafter
- Different types of pens/pencils should be used for different drawings
- Different kinds of (colored, structured, ruled, lined) paper should be used
- One symbol set (European/American) should be used throughout one drawing (consistency)
- It is recommended to use the symbol set that the drafter is most familiar with
- It is **strongly** recommended to share the first one or two circuits for review by the dataset organizers before drawing the rest to avoid problems (complete redrawing in worst case)
### Image Capturing Guidelines
- For each drawing, 4 images should be taken (96 images in total per drafter)
- Angle should vary
- Lighting should vary
- Moderate (e.g. motion) blur is allowed
- All circuit-related aspects of the drawing must be _human-recognizable_
- The drawing should be the main part of the image, but _naturally_ occurring objects from the environment are welcome
- The first image should be _clean_, i.e. ideal capturing conditions
- Kinks and buckling can be applied to the drawing between individual image captures
- Try to use the file name convention (`CX_DY_PZ.jpg`) as early as possible
- The circuit range `X` will be given to you
- `Y` should be `1` or `2` for the drawing
- `Z` should be `1`,`2`,`3` or `4` for the picture
### Object Annotation Guidelines
- General Placement
- A **RoI** must be **completely** surrounded by its **BB**
- A **BB** should be as tight as possible to the **RoI**
  - In case of connecting lines not completely touching the symbol, the BB should be extended (only by a small margin) to enclose those gaps (especially considering junctions)
- Characters that are part of the **essential symbol definition** should be included in the BB (e.g. the `+` of a polarized capacitor should be included in its BB)
- **Junction** annotations
- Used for actual junction points (Connection of three or more wire segments with a small solid circle)
  - Used for connections of three or more straight-line wire segments where a physical connection can be inferred by context (i.e. can be distinguished from a **crossover**)
- Used for wire line corners
- Redundant Junction Points should **not** be annotated (small solid circle in the middle of a straight line segment)
- Should not be used for corners or junctions that are part of the symbol definition (e.g. Transistors)
- **Crossover** Annotations
- If dashed/dotted line: BB should cover the two next dots/dashes
- **Text** annotations
  - Individual text lines should be annotated individually
  - Text blocks should only be annotated if related to the circuit or the circuit's components
  - Semantically meaningful chunks of information should be annotated individually
- component characteristics enclosed in a single annotation (e.g. __100Ohms__, __10%__ tolerance, __5V__ max voltage)
- Component Names and Types (e.g. __C1__, __R5__, __ATTINY2313__)
- Custom Component Terminal Labels (i.e. __Integrated Circuit__ Pins)
- Circuit Descriptor (e.g. "Radio Amplifier")
- Texts not related to the Circuit should be ignored
- e.g. Brief paper, Company Logos
- Drafters auxiliary markings for internal organization like "D12"
- Texts on Surrounding or Background Papers
- Characters which are part of the essential symbol definition should __not__ be annotated as Text dedicatedly
    - e.g. Schmitt Trigger __S__, AND gate __&__, motor __M__, polarized capacitor __+__
- Only add terminal text annotation if the terminal is not part of the essential symbol definition
- **Table** cells should be annotated independently
- **Operation Amplifiers**
  - Both the triangular US symbols and the European IC-like symbols for OpAmps should be labeled `operational_amplifier`
- The `+` and `-` signs at the OpAmp's input terminals are considered essential and should therefore not be annotated as texts
- **Complex Components**
- Both the entire Component and its sub-Components and internal connections should be annotated:
| Complex Component | Annotation |
| ----------------- | ------------------------------------------------------ |
| Optocoupler | 0. `optocoupler` as Overall Annotation |
| | 1. `diode.light_emitting` |
| | 2. `transistor.photo` (or `resistor.photo`) |
| | 3. `optical` if LED and Photo-Sensor arrows are shared |
|                   |    Then the arrows area should be included in all      |
| Relay | 0. `relay` as Overall Annotation |
| (also for | 1. `inductor` |
| coupled switches) | 2. `switch` |
| | 3. `mechanical` for the dashed line between them |
| Transformer | 0. `transformer` as Overall Annotation |
| | 1. `inductor` or `inductor.coupled` (watch the dot) |
| | 3. `magnetic` for the core |
#### Rotation Annotations
The rotation (an integer in degrees) should capture the overall rotation of the symbol shape. However, the position of the terminals should also be taken into consideration. Under idealized circumstances (no perspective distortion and symbols drawn accurately according to the symbol library), these two requirements equal each other. For pathological cases however, in which the shape and the set of terminals (or even individual terminals) conflict, the rotation should compromise between all factors.
Rotation annotations are currently work in progress. They should be provided for at least the following classes:
- "voltage.dc"
- "resistor"
- "capacitor.unpolarized"
- "diode"
- "transistor.bjt"
#### Text Annotations
- The Character Sequence in the Text Label Annotations should describe the actual Characters depicted in the respective Bounding Box as Precisely as Possible
- Bounding Box Annotations of class `text`
- Bear an additional `<text>` tag in which their content is given as string
- The `Omega` and `Mikro` Symbols are escaped respectively
- Currently Work in Progress
- The utils script allows for migrating text annotations from one annotation file to another: `python3 utils.py source target`
### Segmentation Map Guidelines
- Areas of __Intended__ drawing strokes (ink and pencil abrasion respectively) should be marked black, all other pixels (background) should be white
- Strokes shining through the paper (from the rear side or other sheets) should be considered background
### Polygon Annotation Guidelines
0. Before starting, make sure the respective files exist for the image sample to be polygon-annotated:
- BB Annotations (Pascal VOC XML File)
- (Binary) Segmentation Map
1. Transform the BB annotations into raw polygons
- Use: `python3 segmentation.py transform`
2. Refine the Polygons
- **To Avoid Embedding Image Data into the resulting JSON**, use: `labelme --nodata`
- Just make sure there are no overlaps between instances
- Especially take care about overlaps with structural elements like junctions and crossovers
3. Generate Multi-Class Segmentation Maps from the refined polygons
- Use: `python3 segmentation.py create`
- Use the generated images for a visual inspection
- After spotting problems, continue with Step 2
### Terminal Annotation Guidelines
```
labelme --labels "connector" --config "{shift_auto_shape_color: 1}" --nodata
```
## Licence
The entire content of this repository, including all image files, annotation files as well as source code, metadata and documentation, has been published under the [Creative Commons Attribution Share Alike Licence 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
|
fixie-ai/covost2 | fixie-ai | "2024-08-27T20:58:08Z" | 13,679 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-16T23:40:52Z" | ---
dataset_info:
- config_name: ar_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 61607709.192
num_examples: 2283
- name: validation
num_bytes: 56223234.024
num_examples: 1758
- name: test
num_bytes: 54650910.41
num_examples: 1695
download_size: 160468333
dataset_size: 172481853.626
- config_name: ca_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 4397026262.322
num_examples: 95854
- name: validation
num_bytes: 544108371.96
num_examples: 12730
- name: test
num_bytes: 604755238.63
num_examples: 12730
download_size: 4957773433
dataset_size: 5545889872.912
- config_name: cy_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 51478765.087
num_examples: 1241
- name: validation
num_bytes: 26992697.0
num_examples: 690
- name: test
num_bytes: 28772216.0
num_examples: 690
download_size: 102604972
dataset_size: 107243678.087
- config_name: de_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 5680326209.222
num_examples: 127834
- name: validation
num_bytes: 631442490.202
num_examples: 13511
- name: test
num_bytes: 637042944.685
num_examples: 13511
download_size: 6490850158
dataset_size: 6948811644.108999
- config_name: en_ar
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14105902817.18
num_examples: 289430
- name: validation
num_bytes: 718527564.808
num_examples: 15531
- name: test
num_bytes: 729114452.301
num_examples: 15531
download_size: 13815709729
dataset_size: 15553544834.289001
- config_name: en_ca
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14099092976.18
num_examples: 289430
- name: validation
num_bytes: 718171719.808
num_examples: 15531
- name: test
num_bytes: 728790610.301
num_examples: 15531
download_size: 13814365593
dataset_size: 15546055306.289001
- config_name: en_cy
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14098487703.18
num_examples: 289430
- name: validation
num_bytes: 718141953.808
num_examples: 15531
- name: test
num_bytes: 728793811.301
num_examples: 15531
download_size: 13813953593
dataset_size: 15545423468.289001
- config_name: en_de
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14099886814.18
num_examples: 289430
- name: validation
num_bytes: 718219105.808
num_examples: 15531
- name: test
num_bytes: 728857067.301
num_examples: 15531
download_size: 13815103686
dataset_size: 15546962987.289001
- config_name: en_et
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14096877545.18
num_examples: 289430
- name: validation
num_bytes: 718057559.808
num_examples: 15531
- name: test
num_bytes: 728710692.301
num_examples: 15531
download_size: 13813410823
dataset_size: 15543645797.289001
- config_name: en_fa
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14108661241.18
num_examples: 289430
- name: validation
num_bytes: 718670909.808
num_examples: 15531
- name: test
num_bytes: 729271000.301
num_examples: 15531
download_size: 13816798013
dataset_size: 15556603151.289001
- config_name: en_id
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14098627451.18
num_examples: 289430
- name: validation
num_bytes: 718144327.808
num_examples: 15531
- name: test
num_bytes: 728802322.301
num_examples: 15531
download_size: 13813201260
dataset_size: 15545574101.289001
- config_name: en_ja
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14103911774.18
num_examples: 289430
- name: validation
num_bytes: 718409304.808
num_examples: 15531
- name: test
num_bytes: 729050991.301
num_examples: 15531
download_size: 13815875328
dataset_size: 15551372070.289001
- config_name: en_lv
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14098703097.18
num_examples: 289430
- name: validation
num_bytes: 718152571.808
num_examples: 15531
- name: test
num_bytes: 728792572.301
num_examples: 15531
download_size: 13814849886
dataset_size: 15545648241.289001
- config_name: en_mn
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14113120657.18
num_examples: 289430
- name: validation
num_bytes: 718940418.808
num_examples: 15531
- name: test
num_bytes: 729461016.301
num_examples: 15531
download_size: 13819427515
dataset_size: 15561522092.289001
- config_name: en_sl
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14097158381.18
num_examples: 289430
- name: validation
num_bytes: 718085673.808
num_examples: 15531
- name: test
num_bytes: 728705188.301
num_examples: 15531
download_size: 13813603812
dataset_size: 15543949243.289001
- config_name: en_sv-SE
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14097728051.18
num_examples: 289430
- name: validation
num_bytes: 718093292.808
num_examples: 15531
- name: test
num_bytes: 728747422.301
num_examples: 15531
download_size: 13813332908
dataset_size: 15544568766.289001
- config_name: en_ta
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14135489205.18
num_examples: 289430
- name: validation
num_bytes: 720191394.808
num_examples: 15531
- name: test
num_bytes: 730578783.301
num_examples: 15531
download_size: 13825121271
dataset_size: 15586259383.289001
- config_name: en_tr
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14098644786.18
num_examples: 289430
- name: validation
num_bytes: 718161996.808
num_examples: 15531
- name: test
num_bytes: 728786654.301
num_examples: 15531
download_size: 13814279798
dataset_size: 15545593437.289001
- config_name: en_zh-CN
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14095661460.18
num_examples: 289430
- name: validation
num_bytes: 717982705.808
num_examples: 15531
- name: test
num_bytes: 728655191.301
num_examples: 15531
download_size: 13812699892
dataset_size: 15542299357.289001
- config_name: es_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: validation
num_bytes: 630615357.241
num_examples: 13221
- name: test
num_bytes: 666447063.067
num_examples: 13221
- name: train
num_bytes: 3769457359.8
num_examples: 79015
download_size: 4531969416
dataset_size: 5066519780.108
- config_name: et_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 97124727.544
num_examples: 1782
- name: validation
num_bytes: 80290798.168
num_examples: 1576
- name: test
num_bytes: 81970364.51
num_examples: 1571
download_size: 257604448
dataset_size: 259385890.222
- config_name: fa_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 1872724297.149
num_examples: 53949
- name: validation
num_bytes: 140067911.23
num_examples: 3445
- name: test
num_bytes: 149319550.35
num_examples: 3445
download_size: 1679853440
dataset_size: 2162111758.729
- config_name: fr_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: validation
num_bytes: 632191608.84
num_examples: 14760
- name: test
num_bytes: 698178059.08
num_examples: 14760
- name: train
num_bytes: 8128016830.77
num_examples: 207374
download_size: 8900934523
dataset_size: 9458386498.69
- config_name: id_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 36136135.768
num_examples: 1243
- name: validation
num_bytes: 25058845.0
num_examples: 792
- name: test
num_bytes: 26577467.0
num_examples: 844
download_size: 86110062
dataset_size: 87772447.768
- config_name: it_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 1517510665.568
num_examples: 31698
- name: validation
num_bytes: 422409218.1
num_examples: 8940
- name: test
num_bytes: 454569171.595
num_examples: 8951
download_size: 2125529183
dataset_size: 2394489055.2630005
- config_name: ja_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 38181610.624
num_examples: 1119
- name: validation
num_bytes: 24623052.0
num_examples: 635
- name: test
num_bytes: 25558787.0
num_examples: 684
download_size: 88228548
dataset_size: 88363449.624
- config_name: lv_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 66152116.328
num_examples: 2337
- name: validation
num_bytes: 32655276.0
num_examples: 1125
- name: test
num_bytes: 50997551.638
num_examples: 1629
download_size: 137700207
dataset_size: 149804943.96600002
- config_name: mn_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 87891433.547
num_examples: 2067
- name: validation
num_bytes: 77519039.943
num_examples: 1761
- name: test
num_bytes: 83667460.167
num_examples: 1759
download_size: 242638800
dataset_size: 249077933.657
- config_name: nl_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 216102081.4
num_examples: 7108
- name: validation
num_bytes: 55386349.319
num_examples: 1699
- name: test
num_bytes: 60219179.711
num_examples: 1699
download_size: 320267264
dataset_size: 331707610.43
- config_name: pt_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 274723273.528
num_examples: 9158
- name: validation
num_bytes: 118345891.704
num_examples: 3318
- name: test
num_bytes: 166247624.001
num_examples: 4023
download_size: 540891735
dataset_size: 559316789.233
- config_name: ru_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 557219472.672
num_examples: 12112
- name: validation
num_bytes: 290218427.6
num_examples: 6110
- name: test
num_bytes: 312622838.0
num_examples: 6300
download_size: 1112848246
dataset_size: 1160060738.272
- config_name: sl_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 55992153.0
num_examples: 1843
- name: validation
num_bytes: 15074155.0
num_examples: 509
- name: test
num_bytes: 10209711.0
num_examples: 360
download_size: 83863293
dataset_size: 81276019.0
- config_name: sv-SE_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 48298330.64
num_examples: 2160
- name: validation
num_bytes: 32544646.416
num_examples: 1349
- name: test
num_bytes: 46894324.615
num_examples: 1595
download_size: 121860373
dataset_size: 127737301.671
- config_name: ta_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 47757197.616
num_examples: 1358
- name: validation
num_bytes: 13670695.0
num_examples: 384
- name: test
num_bytes: 29891516.0
num_examples: 786
download_size: 87791516
dataset_size: 91319408.616
- config_name: tr_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 119299427.798
num_examples: 3966
- name: validation
num_bytes: 52552534.232
num_examples: 1624
- name: test
num_bytes: 59106253.862
num_examples: 1629
download_size: 224018260
dataset_size: 230958215.89200002
- config_name: zh-CN_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: validation
num_bytes: 231018998.33
num_examples: 4843
- name: test
num_bytes: 243850956.45
num_examples: 4898
- name: train
num_bytes: 341425113.6
num_examples: 7085
download_size: 766660661
dataset_size: 816295068.38
configs:
- config_name: ar_en
data_files:
- split: train
path: ar_en/train-*
- split: validation
path: ar_en/validation-*
- split: test
path: ar_en/test-*
- config_name: ca_en
data_files:
- split: train
path: ca_en/train-*
- split: validation
path: ca_en/validation-*
- split: test
path: ca_en/test-*
- config_name: cy_en
data_files:
- split: train
path: cy_en/train-*
- split: validation
path: cy_en/validation-*
- split: test
path: cy_en/test-*
- config_name: de_en
data_files:
- split: train
path: de_en/train-*
- split: validation
path: de_en/validation-*
- split: test
path: de_en/test-*
- config_name: en_ar
data_files:
- split: train
path: en_ar/train-*
- split: validation
path: en_ar/validation-*
- split: test
path: en_ar/test-*
- config_name: en_ca
data_files:
- split: train
path: en_ca/train-*
- split: validation
path: en_ca/validation-*
- split: test
path: en_ca/test-*
- config_name: en_cy
data_files:
- split: train
path: en_cy/train-*
- split: validation
path: en_cy/validation-*
- split: test
path: en_cy/test-*
- config_name: en_de
data_files:
- split: train
path: en_de/train-*
- split: validation
path: en_de/validation-*
- split: test
path: en_de/test-*
- config_name: en_et
data_files:
- split: train
path: en_et/train-*
- split: validation
path: en_et/validation-*
- split: test
path: en_et/test-*
- config_name: en_fa
data_files:
- split: train
path: en_fa/train-*
- split: validation
path: en_fa/validation-*
- split: test
path: en_fa/test-*
- config_name: en_id
data_files:
- split: train
path: en_id/train-*
- split: validation
path: en_id/validation-*
- split: test
path: en_id/test-*
- config_name: en_ja
data_files:
- split: train
path: en_ja/train-*
- split: validation
path: en_ja/validation-*
- split: test
path: en_ja/test-*
- config_name: en_lv
data_files:
- split: train
path: en_lv/train-*
- split: validation
path: en_lv/validation-*
- split: test
path: en_lv/test-*
- config_name: en_mn
data_files:
- split: train
path: en_mn/train-*
- split: validation
path: en_mn/validation-*
- split: test
path: en_mn/test-*
- config_name: en_sl
data_files:
- split: train
path: en_sl/train-*
- split: validation
path: en_sl/validation-*
- split: test
path: en_sl/test-*
- config_name: en_sv-SE
data_files:
- split: train
path: en_sv-SE/train-*
- split: validation
path: en_sv-SE/validation-*
- split: test
path: en_sv-SE/test-*
- config_name: en_ta
data_files:
- split: train
path: en_ta/train-*
- split: validation
path: en_ta/validation-*
- split: test
path: en_ta/test-*
- config_name: en_tr
data_files:
- split: train
path: en_tr/train-*
- split: validation
path: en_tr/validation-*
- split: test
path: en_tr/test-*
- config_name: en_zh-CN
data_files:
- split: train
path: en_zh-CN/train-*
- split: validation
path: en_zh-CN/validation-*
- split: test
path: en_zh-CN/test-*
- config_name: es_en
data_files:
- split: validation
path: es_en/validation-*
- split: test
path: es_en/test-*
- split: train
path: es_en/train-*
- config_name: et_en
data_files:
- split: train
path: et_en/train-*
- split: validation
path: et_en/validation-*
- split: test
path: et_en/test-*
- config_name: fa_en
data_files:
- split: train
path: fa_en/train-*
- split: validation
path: fa_en/validation-*
- split: test
path: fa_en/test-*
- config_name: fr_en
data_files:
- split: validation
path: fr_en/validation-*
- split: test
path: fr_en/test-*
- split: train
path: fr_en/train-*
- config_name: id_en
data_files:
- split: train
path: id_en/train-*
- split: validation
path: id_en/validation-*
- split: test
path: id_en/test-*
- config_name: it_en
data_files:
- split: train
path: it_en/train-*
- split: validation
path: it_en/validation-*
- split: test
path: it_en/test-*
- config_name: ja_en
data_files:
- split: train
path: ja_en/train-*
- split: validation
path: ja_en/validation-*
- split: test
path: ja_en/test-*
- config_name: lv_en
data_files:
- split: train
path: lv_en/train-*
- split: validation
path: lv_en/validation-*
- split: test
path: lv_en/test-*
- config_name: mn_en
data_files:
- split: train
path: mn_en/train-*
- split: validation
path: mn_en/validation-*
- split: test
path: mn_en/test-*
- config_name: nl_en
data_files:
- split: train
path: nl_en/train-*
- split: validation
path: nl_en/validation-*
- split: test
path: nl_en/test-*
- config_name: pt_en
data_files:
- split: train
path: pt_en/train-*
- split: validation
path: pt_en/validation-*
- split: test
path: pt_en/test-*
- config_name: ru_en
data_files:
- split: train
path: ru_en/train-*
- split: validation
path: ru_en/validation-*
- split: test
path: ru_en/test-*
- config_name: sl_en
data_files:
- split: train
path: sl_en/train-*
- split: validation
path: sl_en/validation-*
- split: test
path: sl_en/test-*
- config_name: sv-SE_en
data_files:
- split: train
path: sv-SE_en/train-*
- split: validation
path: sv-SE_en/validation-*
- split: test
path: sv-SE_en/test-*
- config_name: ta_en
data_files:
- split: train
path: ta_en/train-*
- split: validation
path: ta_en/validation-*
- split: test
path: ta_en/test-*
- config_name: tr_en
data_files:
- split: train
path: tr_en/train-*
- split: validation
path: tr_en/validation-*
- split: test
path: tr_en/test-*
- config_name: zh-CN_en
data_files:
- split: validation
path: zh-CN_en/validation-*
- split: test
path: zh-CN_en/test-*
- split: train
path: zh-CN_en/train-*
---
This is a partial copy of [CoVoST2](https://huggingface.co/datasets/facebook/covost2) dataset.
The main difference is that the audio data is included in the dataset, which makes usage easier and allows browsing the samples with the HF Dataset Viewer.
The limitation of this method is that all audio samples of the `EN_XX` subsets are duplicated, which makes the dataset larger.
As such, not all the data is included: only the `validation` and `test` subsets are available.
From the `XX_EN` subsets, only `fr`, `es`, and `zh-CN` are included.
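A minimal usage sketch (streaming avoids downloading a full split; the field names follow the schema above):
```python
from datasets import load_dataset

# Stream one sample from an included config instead of downloading everything.
ds = load_dataset("fixie-ai/covost2", "en_de", split="validation", streaming=True)
sample = next(iter(ds))
print(sample["sentence"], "->", sample["translation"])
print(sample["audio"]["sampling_rate"])  # 16000
```
|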
databricks/databricks-dolly-15k | databricks | "2023-06-30T18:34:13Z" | 13,624 | 760 | [
"task_categories:question-answering",
"task_categories:summarization",
"language:en",
"license:cc-by-sa-3.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2203.02155",
"region:us"
] | [
"question-answering",
"summarization"
] | "2023-04-11T16:43:13Z" | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
language:
- en
size_categories:
- 10K<n<100K
---
# Summary
`databricks-dolly-15k` is an open source dataset of instruction-following records generated by thousands of Databricks employees in several
of the behavioral categories outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper, including brainstorming, classification,
closed QA, generation, information extraction, open QA, and summarization.
This dataset can be used for any purpose, whether academic or commercial, under the terms of the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: English
Version: 1.0
**Owner: Databricks, Inc.**
# Dataset Overview
`databricks-dolly-15k` is a corpus of more than 15,000 records generated by thousands of Databricks employees to enable large language
models to exhibit the magical interactivity of ChatGPT.
Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories, including
the seven outlined in the InstructGPT paper, as well as an open-ended free-form category. The contributors were instructed to avoid using
information from any source on the web with the exception of Wikipedia (for particular subsets of instruction categories), and explicitly
instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the
types of questions and instructions appropriate to each category.
Halfway through the data generation process, contributors were given the option of answering questions posed by other contributors.
They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly.
For certain categories contributors were asked to provide reference texts copied from Wikipedia. Reference text (indicated by the `context`
field in the actual dataset) may contain bracketed Wikipedia citation numbers (e.g. `[42]`) which we recommend users remove for downstream applications.
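For example, a minimal regex-based cleanup (illustrative only):
```python
import re

def strip_citations(context: str) -> str:
    # Remove bracketed Wikipedia citation markers such as "[42]".
    return re.sub(r"\[\d+\]", "", context)

print(strip_citations("Paris is the capital of France.[1][2]"))
# Paris is the capital of France.
```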
# Intended Uses
While immediately valuable for instruction fine tuning large language models, as a corpus of human-generated instruction prompts,
this dataset also presents a valuable opportunity for synthetic data generation in the methods outlined in the Self-Instruct paper.
For example, contributor-generated prompts could be submitted as few-shot examples to a large open language model to generate a
corpus of millions of examples of instructions in each of the respective InstructGPT categories.
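As a sketch of this idea (the prompt wording is ours; the field names `instruction` and `category` follow the dataset schema):
```python
import random
from datasets import load_dataset

ds = load_dataset("databricks/databricks-dolly-15k", split="train")
brainstorming = [r["instruction"] for r in ds if r["category"] == "brainstorming"]
few_shot = random.sample(brainstorming, 5)

prompt = "Write a new brainstorming instruction similar to the following:\n"
prompt += "\n".join(f"- {q}" for q in few_shot) + "\n- "
# `prompt` can now be submitted to an open language model to sample new instructions.
```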
Likewise, both the instructions and responses present fertile ground for data augmentation. A paraphrasing model might be used to
restate each prompt or short response, with the resulting text associated with the respective ground-truth sample. Such an approach might
provide a form of regularization on the dataset that could allow for more robust instruction-following behavior in models derived from
these synthetic datasets.
# Dataset
## Purpose of Collection
As part of our continuing commitment to open source, Databricks developed what is, to the best of our knowledge, the first open source,
human-generated instruction corpus specifically designed to enable large language models to exhibit the magical interactivity of ChatGPT.
Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including
academic or commercial applications.
## Sources
- **Human-generated data**: Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories.
- **Wikipedia**: For instruction categories that require an annotator to consult a reference text (information extraction, closed QA, summarization)
contributors selected passages from Wikipedia for particular subsets of instruction categories. No guidance was given to annotators as to how to select the
target passages.
## Annotator Guidelines
To create a record, employees were given a brief description of the annotation task as well as examples of the types of prompts typical
of each annotation task. Guidelines were succinct by design so as to encourage a high task completion rate, possibly at the cost of
rigorous compliance to an annotation rubric that concretely and reliably operationalizes the specific task. Caveat emptor.
The annotation guidelines for each of the categories are as follows:
- **Creative Writing**: Write a question or instruction that requires a creative, open-ended written response. The instruction should be reasonable to ask of a person with general world knowledge and should not require searching. In this task, your prompt should give very specific instructions to follow. Constraints, instructions, guidelines, or requirements all work, and the more of them the better.
- **Closed QA**: Write a question or instruction that requires a factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Open QA**: Write a question that can be answered using general world knowledge or at most a single search. This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.
- **Summarization**: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Information Extraction**: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc) should be included in the passages. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Classification**: These prompts contain lists or examples of entities to be classified, e.g. movie reviews, products, etc. In this task the text or list of entities under consideration is contained in the prompt (e.g. there is no reference text.). You can choose any categories for classification you like, the more diverse the better.
- **Brainstorming**: Think up lots of examples in response to a question asking to brainstorm ideas.
## Personal or Sensitive Data
This dataset contains public information (e.g., some information from Wikipedia). To our knowledge, there are no private persons' personal identifiers or sensitive information.
## Language
American English
# Known Limitations
- Wikipedia is a crowdsourced corpus and the contents of this dataset may reflect the bias, factual errors and topical focus found in Wikipedia
- Some annotators may not be native English speakers
- Annotator demographics and subject matter may reflect the makeup of Databricks employees
# Citation
```
@online{DatabricksBlog2023DollyV2,
author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
urldate = {2023-06-30}
}
```
# License/Attribution
**Copyright (2023) Databricks, Inc.**
This dataset was developed at Databricks (https://www.databricks.com) and its use is subject to the CC BY-SA 3.0 license.
Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:
Wikipedia (various pages) - https://www.wikipedia.org/
Copyright © Wikipedia editors and contributors. |
TempoFunk/tempofunk-sdance | TempoFunk | "2023-05-07T07:38:48Z" | 13,614 | 5 | [
"task_categories:text-to-video",
"task_categories:text-to-image",
"task_categories:video-classification",
"task_categories:image-classification",
"language:en",
"license:agpl-3.0",
"size_categories:1K<n<10K",
"region:us"
] | [
"text-to-video",
"text-to-image",
"video-classification",
"image-classification"
] | "2023-04-19T05:08:11Z" | ---
task_categories:
- text-to-video
- text-to-image
- video-classification
- image-classification
language:
- en
size_categories:
- 1K<n<10K
license: agpl-3.0
---
# TempoFunk S(mall)Dance
10k samples of metadata and encoded latents & prompts of videos themed around **dance**.
## Data format
- Video frame latents
- Numpy arrays
- 120 frames, 512x512 source size
- Encoded shape (120, 4, 64, 64)
- CLIP (openai) encoded prompts
- Video description (as seen in metadata)
- Encoded shape (77,768)
- Video metadata as JSON (description, tags, categories, source URLs, etc.)
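A minimal loading sketch (the file names are hypothetical; only the array shapes are documented above):
```python
import numpy as np

latents = np.load("sample_frames.npy")  # hypothetical file name
prompt = np.load("sample_prompt.npy")   # hypothetical file name
assert latents.shape == (120, 4, 64, 64)  # 120 frames of 4x64x64 latents
assert prompt.shape == (77, 768)          # CLIP-encoded video description
```
|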
MMMU/MMMU | MMMU | "2024-09-19T17:11:03Z" | 13,541 | 196 | [
"task_categories:question-answering",
"task_categories:visual-question-answering",
"task_categories:multiple-choice",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2311.16502",
"region:us",
"biology",
"medical",
"finance",
"chemistry",
"music",
"art",
"art_theory",
"design",
"business",
"accounting",
"economics",
"manage",
"marketing",
"health",
"medicine",
"basic_medical_science",
"clinical",
"pharmacy",
"public_health",
"humanities",
"social_science",
"history",
"literature",
"sociology",
"psychology",
"science",
"geography",
"math",
"physics",
"engineering",
"agriculture",
"architecture",
"computer_science",
"electronics",
"energy_and_power",
"materials",
"mechanical_engineering"
] | [
"question-answering",
"visual-question-answering",
"multiple-choice"
] | "2023-11-27T17:52:01Z" | ---
language:
- en
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- question-answering
- visual-question-answering
- multiple-choice
pretty_name: mmmu
dataset_info:
- config_name: Accounting
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 262599.0
num_examples: 5
- name: validation
num_bytes: 1598285.0
num_examples: 30
- name: test
num_bytes: 22135625.0
num_examples: 380
download_size: 37363379
dataset_size: 23996509.0
- config_name: Agriculture
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 22082656.0
num_examples: 5
- name: validation
num_bytes: 119217558.0
num_examples: 30
- name: test
num_bytes: 993664077.0
num_examples: 287
download_size: 1158036990
dataset_size: 1134964291.0
- config_name: Architecture_and_Engineering
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 137750.0
num_examples: 5
- name: validation
num_bytes: 721378.0
num_examples: 30
- name: test
num_bytes: 16054607.0
num_examples: 551
download_size: 48763955
dataset_size: 16913735.0
- config_name: Art
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 6241184.0
num_examples: 5
- name: validation
num_bytes: 29934534.0
num_examples: 30
- name: test
num_bytes: 237801390.0
num_examples: 231
download_size: 585798641
dataset_size: 273977108.0
- config_name: Art_Theory
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 7435106.0
num_examples: 5
- name: validation
num_bytes: 33481558.0
num_examples: 30
- name: test
num_bytes: 553174647.0
num_examples: 429
download_size: 930525695
dataset_size: 594091311.0
- config_name: Basic_Medical_Science
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 814310.0
num_examples: 5
- name: validation
num_bytes: 4125930.0
num_examples: 30
- name: test
num_bytes: 48125891.0
num_examples: 326
download_size: 84666454
dataset_size: 53066131.0
- config_name: Biology
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 574342.0
num_examples: 5
- name: validation
num_bytes: 8491863.0
num_examples: 30
- name: test
num_bytes: 132966151.0
num_examples: 345
download_size: 410242502
dataset_size: 142032356.0
- config_name: Chemistry
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 262397.0
num_examples: 5
- name: validation
num_bytes: 1518573.0
num_examples: 30
- name: test
num_bytes: 37219529.0
num_examples: 603
download_size: 108345562
dataset_size: 39000499.0
- config_name: Clinical_Medicine
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 1467945.0
num_examples: 5
- name: validation
num_bytes: 10882484.0
num_examples: 30
- name: test
num_bytes: 98201863.0
num_examples: 325
download_size: 160611488
dataset_size: 110552292.0
- config_name: Computer_Science
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 440523.0
num_examples: 5
- name: validation
num_bytes: 2072018.0
num_examples: 30
- name: test
num_bytes: 32047381.0
num_examples: 371
download_size: 55640991
dataset_size: 34559922.0
- config_name: Design
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 2259873.0
num_examples: 5
- name: validation
num_bytes: 17923120.0
num_examples: 30
- name: test
num_bytes: 77676331.0
num_examples: 169
download_size: 142866617
dataset_size: 97859324.0
- config_name: Diagnostics_and_Laboratory_Medicine
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 2056117.0
num_examples: 5
- name: validation
num_bytes: 37106233.0
num_examples: 30
- name: test
num_bytes: 157003069.0
num_examples: 162
download_size: 603957093
dataset_size: 196165419.0
- config_name: Economics
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 171434.0
num_examples: 5
- name: validation
num_bytes: 1487048.0
num_examples: 30
- name: test
num_bytes: 11852300.0
num_examples: 267
download_size: 20777635
dataset_size: 13510782.0
- config_name: Electronics
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 123632.0
num_examples: 5
- name: validation
num_bytes: 641377.0
num_examples: 30
- name: test
num_bytes: 5717686.0
num_examples: 256
download_size: 11602832
dataset_size: 6482695.0
- config_name: Energy_and_Power
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 105006.0
num_examples: 5
- name: validation
num_bytes: 1641935.0
num_examples: 30
- name: test
num_bytes: 14748428.0
num_examples: 432
download_size: 35246567
dataset_size: 16495369.0
- config_name: Finance
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 296124.0
num_examples: 5
- name: validation
num_bytes: 1071060.0
num_examples: 30
- name: test
num_bytes: 12065803.0
num_examples: 355
download_size: 29551521
dataset_size: 13432987.0
- config_name: Geography
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 1494060.0
num_examples: 5
- name: validation
num_bytes: 6671316.0
num_examples: 30
- name: test
num_bytes: 137218400.0
num_examples: 565
download_size: 374766631
dataset_size: 145383776.0
- config_name: History
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 1444231.0
num_examples: 5
- name: validation
num_bytes: 8819857.0
num_examples: 30
- name: test
num_bytes: 115228815.0
num_examples: 278
download_size: 232549641
dataset_size: 125492903.0
- config_name: Literature
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 2451201.0
num_examples: 5
- name: validation
num_bytes: 14241046.0
num_examples: 30
- name: test
num_bytes: 50301541.0
num_examples: 112
download_size: 132145895
dataset_size: 66993788.0
- config_name: Manage
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 449514.0
num_examples: 5
- name: validation
num_bytes: 3277436.0
num_examples: 30
- name: test
num_bytes: 29963963.0
num_examples: 245
download_size: 51186888
dataset_size: 33690913.0
- config_name: Marketing
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 116960.0
num_examples: 5
- name: validation
num_bytes: 1472981.0
num_examples: 30
- name: test
num_bytes: 7732976.0
num_examples: 181
download_size: 13146078
dataset_size: 9322917.0
- config_name: Materials
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 239632.0
num_examples: 5
- name: validation
num_bytes: 2305223.0
num_examples: 30
- name: test
num_bytes: 25256854.0
num_examples: 458
download_size: 105773156
dataset_size: 27801709.0
- config_name: Math
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 175839.0
num_examples: 5
- name: validation
num_bytes: 1444496.0
num_examples: 30
- name: test
num_bytes: 27701845.0
num_examples: 505
download_size: 174098418
dataset_size: 29322180.0
- config_name: Mechanical_Engineering
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 152542.0
num_examples: 5
- name: validation
num_bytes: 874988.0
num_examples: 30
- name: test
num_bytes: 15093746.0
num_examples: 429
download_size: 30450114
dataset_size: 16121276.0
- config_name: Music
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 1417615.0
num_examples: 5
- name: validation
num_bytes: 9359372.0
num_examples: 30
- name: test
num_bytes: 134096770.0
num_examples: 334
download_size: 174725052
dataset_size: 144873757.0
- config_name: Pharmacy
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 207924.0
num_examples: 5
- name: validation
num_bytes: 1656342.0
num_examples: 30
- name: test
num_bytes: 31866248.0
num_examples: 430
download_size: 62721263
dataset_size: 33730514.0
- config_name: Physics
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 233734.0
num_examples: 5
- name: validation
num_bytes: 1114130.0
num_examples: 30
- name: test
num_bytes: 15905705.0
num_examples: 408
download_size: 35238571
dataset_size: 17253569.0
- config_name: Psychology
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 600864.0
num_examples: 5
- name: validation
num_bytes: 4403886.0
num_examples: 30
- name: test
num_bytes: 53813915.0
num_examples: 305
download_size: 102466671
dataset_size: 58818665.0
- config_name: Public_Health
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 234781.0
num_examples: 5
- name: validation
num_bytes: 1508761.0
num_examples: 30
- name: test
num_bytes: 32150088.0
num_examples: 509
download_size: 48231609
dataset_size: 33893630.0
- config_name: Sociology
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 3769220.0
num_examples: 5
- name: validation
num_bytes: 18455336.0
num_examples: 30
- name: test
num_bytes: 144301123.0
num_examples: 252
download_size: 310313826
dataset_size: 166525679.0
configs:
- config_name: Accounting
data_files:
- split: dev
path: Accounting/dev-*
- split: validation
path: Accounting/validation-*
- split: test
path: Accounting/test-*
- config_name: Agriculture
data_files:
- split: dev
path: Agriculture/dev-*
- split: validation
path: Agriculture/validation-*
- split: test
path: Agriculture/test-*
- config_name: Architecture_and_Engineering
data_files:
- split: dev
path: Architecture_and_Engineering/dev-*
- split: validation
path: Architecture_and_Engineering/validation-*
- split: test
path: Architecture_and_Engineering/test-*
- config_name: Art
data_files:
- split: dev
path: Art/dev-*
- split: validation
path: Art/validation-*
- split: test
path: Art/test-*
- config_name: Art_Theory
data_files:
- split: dev
path: Art_Theory/dev-*
- split: validation
path: Art_Theory/validation-*
- split: test
path: Art_Theory/test-*
- config_name: Basic_Medical_Science
data_files:
- split: dev
path: Basic_Medical_Science/dev-*
- split: validation
path: Basic_Medical_Science/validation-*
- split: test
path: Basic_Medical_Science/test-*
- config_name: Biology
data_files:
- split: dev
path: Biology/dev-*
- split: validation
path: Biology/validation-*
- split: test
path: Biology/test-*
- config_name: Chemistry
data_files:
- split: dev
path: Chemistry/dev-*
- split: validation
path: Chemistry/validation-*
- split: test
path: Chemistry/test-*
- config_name: Clinical_Medicine
data_files:
- split: dev
path: Clinical_Medicine/dev-*
- split: validation
path: Clinical_Medicine/validation-*
- split: test
path: Clinical_Medicine/test-*
- config_name: Computer_Science
data_files:
- split: dev
path: Computer_Science/dev-*
- split: validation
path: Computer_Science/validation-*
- split: test
path: Computer_Science/test-*
- config_name: Design
data_files:
- split: dev
path: Design/dev-*
- split: validation
path: Design/validation-*
- split: test
path: Design/test-*
- config_name: Diagnostics_and_Laboratory_Medicine
data_files:
- split: dev
path: Diagnostics_and_Laboratory_Medicine/dev-*
- split: validation
path: Diagnostics_and_Laboratory_Medicine/validation-*
- split: test
path: Diagnostics_and_Laboratory_Medicine/test-*
- config_name: Economics
data_files:
- split: dev
path: Economics/dev-*
- split: validation
path: Economics/validation-*
- split: test
path: Economics/test-*
- config_name: Electronics
data_files:
- split: dev
path: Electronics/dev-*
- split: validation
path: Electronics/validation-*
- split: test
path: Electronics/test-*
- config_name: Energy_and_Power
data_files:
- split: dev
path: Energy_and_Power/dev-*
- split: validation
path: Energy_and_Power/validation-*
- split: test
path: Energy_and_Power/test-*
- config_name: Finance
data_files:
- split: dev
path: Finance/dev-*
- split: validation
path: Finance/validation-*
- split: test
path: Finance/test-*
- config_name: Geography
data_files:
- split: dev
path: Geography/dev-*
- split: validation
path: Geography/validation-*
- split: test
path: Geography/test-*
- config_name: History
data_files:
- split: dev
path: History/dev-*
- split: validation
path: History/validation-*
- split: test
path: History/test-*
- config_name: Literature
data_files:
- split: dev
path: Literature/dev-*
- split: validation
path: Literature/validation-*
- split: test
path: Literature/test-*
- config_name: Manage
data_files:
- split: dev
path: Manage/dev-*
- split: validation
path: Manage/validation-*
- split: test
path: Manage/test-*
- config_name: Marketing
data_files:
- split: dev
path: Marketing/dev-*
- split: validation
path: Marketing/validation-*
- split: test
path: Marketing/test-*
- config_name: Materials
data_files:
- split: dev
path: Materials/dev-*
- split: validation
path: Materials/validation-*
- split: test
path: Materials/test-*
- config_name: Math
data_files:
- split: dev
path: Math/dev-*
- split: validation
path: Math/validation-*
- split: test
path: Math/test-*
- config_name: Mechanical_Engineering
data_files:
- split: dev
path: Mechanical_Engineering/dev-*
- split: validation
path: Mechanical_Engineering/validation-*
- split: test
path: Mechanical_Engineering/test-*
- config_name: Music
data_files:
- split: dev
path: Music/dev-*
- split: validation
path: Music/validation-*
- split: test
path: Music/test-*
- config_name: Pharmacy
data_files:
- split: dev
path: Pharmacy/dev-*
- split: validation
path: Pharmacy/validation-*
- split: test
path: Pharmacy/test-*
- config_name: Physics
data_files:
- split: dev
path: Physics/dev-*
- split: validation
path: Physics/validation-*
- split: test
path: Physics/test-*
- config_name: Psychology
data_files:
- split: dev
path: Psychology/dev-*
- split: validation
path: Psychology/validation-*
- split: test
path: Psychology/test-*
- config_name: Public_Health
data_files:
- split: dev
path: Public_Health/dev-*
- split: validation
path: Public_Health/validation-*
- split: test
path: Public_Health/test-*
- config_name: Sociology
data_files:
- split: dev
path: Sociology/dev-*
- split: validation
path: Sociology/validation-*
- split: test
path: Sociology/test-*
tags:
- biology
- medical
- finance
- chemistry
- music
- art
- art_theory
- design
- business
- accounting
- economics
- manage
- marketing
- health
- medicine
- basic_medical_science
- clinical
- pharmacy
- public_health
- humanities
- social_science
- history
- literature
- sociology
- psychology
- science
- geography
- math
- physics
- engineering
- agriculture
- architecture
- computer_science
- electronics
- energy_and_power
- materials
- mechanical_engineering
---
# MMMU (A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI)
[**🌐 Homepage**](https://mmmu-benchmark.github.io/) | [**🏆 Leaderboard**](https://mmmu-benchmark.github.io/#leaderboard) | [**🤗 Dataset**](https://huggingface.co/datasets/MMMU/MMMU/) | [**🤗 Paper**](https://huggingface.co/papers/2311.16502) | [**📖 arXiv**](https://arxiv.org/abs/2311.16502) | [**GitHub**](https://github.com/MMMU-Benchmark/MMMU)
## 🔔News
- **🛠️[2024-05-30]: Fixed duplicate option issues in Materials dataset items (validation_Materials_25; test_Materials_17, 242) and a content error in validation_Materials_25.**
- **🛠️[2024-04-30]: Fixed missing "-" or "^" signs in Math dataset items (dev_Math_2, validation_Math_11, 12, 16; test_Math_8, 23, 43, 113, 164, 223, 236, 287, 329, 402, 498) and corrected option errors in validation_Math_2. If you encounter any issues with the dataset, please contact us promptly!**
- **🚀[2024-01-31]: We added Human Expert performance to the [Leaderboard](https://mmmu-benchmark.github.io/#leaderboard)!🌟**
- **🔥[2023-12-04]: Our evaluation server for the test set is now available on [EvalAI](https://eval.ai/web/challenges/challenge-page/2179/overview). We welcome all submissions and look forward to your participation! 😆**
## Dataset Details
### Dataset Description
We introduce MMMU: a new benchmark designed to evaluate multimodal models on massive multi-discipline tasks demanding college-level subject knowledge and deliberate reasoning. MMMU includes **11.5K meticulously collected multimodal questions** from college exams, quizzes, and textbooks, covering six core disciplines: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering. These questions span **30 subjects** and **183 subfields**, comprising **30 highly heterogeneous image types**, such as charts, diagrams, maps, tables, music sheets, and chemical structures. We believe MMMU will stimulate the community to build next-generation multimodal foundation models towards expert artificial general intelligence (AGI).
🎯 **We have released a full set comprising 150 development samples and 900 validation samples, along with 10,500 test questions released without their answers.**
The development set is used for few-shot/in-context learning, and the validation set is used for debugging models, selecting hyperparameters, or quick evaluations. The answers and explanations for the test set questions are withheld. You can submit your model's predictions for the **test set** on **[EvalAI](https://eval.ai/web/challenges/challenge-page/2179/overview)**.
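A minimal sketch of loading one subject with 🤗 Datasets (any of the configs listed above can stand in for `Accounting`):
```python
from datasets import load_dataset

# Each subject is its own config; splits are "dev", "validation", and "test".
ds = load_dataset("MMMU/MMMU", "Accounting")

sample = ds["validation"][0]
print(sample["question"])
print(sample["options"], "->", sample["answer"])
```
The five `dev` examples per subject are the intended few-shot exemplars.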
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6230d750d93e84e233882dbc/2Ulh9yznm1dvISV4xJ_Ok.png)
### Dataset Creation
MMMU was created to challenge multimodal models with tasks that demand college-level subject knowledge and deliberate reasoning, pushing the boundaries of what these models can achieve in terms of expert-level perception and reasoning.
The data for the MMMU dataset was manually collected by a team of college students from various disciplines, using online sources, textbooks, and lecture materials.
- **Content:** The dataset contains 11.5K college-level problems across six broad disciplines (Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, Tech & Engineering) and 30 college subjects.
- **Image Types:** The dataset includes 30 highly heterogeneous image types, such as charts, diagrams, maps, tables, music sheets, and chemical structures, interleaved with text.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6230d750d93e84e233882dbc/Mbf8O5lEH8I8czprch0AG.png)
## 🏆 Mini-Leaderboard
We show a mini-leaderboard here; please find more information in our paper or on the [**homepage**](https://mmmu-benchmark.github.io/).
| Model | Val (900) | Test (10.5K) |
|--------------------------------|:---------:|:------------:|
| Expert (Best) | 88.6 | - |
| Expert (Medium) | 82.6 | - |
| Expert (Worst) | 76.2 | - |
| GPT-4o* | **69.1** | - |
| Gemini 1.5 Pro* | 62.2 | - |
| InternVL2-Pro* | 62.0 | **55.7** |
| Gemini 1.0 Ultra* | 59.4 | - |
| Claude 3 Opus* | 59.4 | - |
| GPT-4V(ision) (Playground) | 56.8 | **55.7** |
| Reka Core* | 56.3 | - |
| Gemini 1.5 Flash* | 56.1 | - |
| SenseChat-Vision-0423-Preview* | 54.6 | 50.3 |
| Reka Flash* | 53.3 | - |
| Claude 3 Sonnet* | 53.1 | - |
| HPT Pro* | 52.0 | - |
| VILA1.5* | 51.9 | 46.9 |
| Qwen-VL-MAX* | 51.4 | 46.8 |
| InternVL-Chat-V1.2* | 51.6 | 46.2 |
| Skywork-VL* | 51.4 | 46.2 |
| LLaVA-1.6-34B* | 51.1 | 44.7 |
| Claude 3 Haiku* | 50.2 | - |
| Adept Fuyu-Heavy* | 48.3 | - |
| Gemini 1.0 Pro* | 47.9 | - |
| Marco-VL-Plus* | 46.2 | 44.3 |
| Yi-VL-34B* | 45.9 | 41.6 |
| Qwen-VL-PLUS* | 45.2 | 40.8 |
| HPT Air* | 44.0 | - |
| Reka Edge* | 42.8 | - |
| Marco-VL* | 41.2 | 40.4 |
| OmniLMM-12B* | 41.1 | 40.4 |
| Bunny-8B* | 43.3 | 39.0 |
| Bunny-4B* | 41.4 | 38.4 |
| Weitu-VL-1.0-15B* | - | 38.4 |
| InternLM-XComposer2-VL* | 43.0 | 38.2 |
| Yi-VL-6B* | 39.1 | 37.8 |
| InfiMM-Zephyr-7B* | 39.4 | 35.5 |
| InternVL-Chat-V1.1* | 39.1 | 35.3 |
| Math-LLaVA-13B* | 38.3 | 34.6 |
| SVIT* | 38.0 | 34.1 |
| MiniCPM-V* | 37.2 | 34.1 |
| MiniCPM-V-2* | 37.1 | - |
| Emu2-Chat* | 36.3 | 34.1 |
| BLIP-2 FLAN-T5-XXL | 35.4 | 34.0 |
| InstructBLIP-T5-XXL | 35.7 | 33.8 |
| LLaVA-1.5-13B | 36.4 | 33.6 |
| Bunny-3B* | 38.2 | 33.0 |
| Qwen-VL-7B-Chat | 35.9 | 32.9 |
| SPHINX* | 32.9 | 32.9 |
| mPLUG-OWL2* | 32.7 | 32.1 |
| BLIP-2 FLAN-T5-XL | 34.4 | 31.0 |
| InstructBLIP-T5-XL | 32.9 | 30.6 |
| Gemini Nano2* | 32.6 | - |
| CogVLM | 32.1 | 30.1 |
| Otter | 32.2 | 29.1 |
| LLaMA-Adapter2-7B | 29.8 | 27.7 |
| MiniGPT4-Vicuna-13B | 26.8 | 27.6 |
| Adept Fuyu-8B | 27.9 | 27.4 |
| Kosmos2 | 24.4 | 26.6 |
| OpenFlamingo2-9B | 28.7 | 26.3 |
| Frequent Choice | 22.1 | 23.9 |
| Random Choice | 26.8 | 25.8 |
*: results provided by the authors.
## Limitations
Despite its comprehensive nature, MMMU, like any benchmark, is not without limitations. The manual curation process, albeit thorough, may carry biases.
Moreover, the focus on college-level subjects may not by itself be a sufficient test for Expert AGI.
However, we believe that strong performance on MMMU should be necessary for an Expert AGI, as it demonstrates broad and deep subject knowledge as well as expert-level understanding and reasoning capabilities.
In future work, we plan to incorporate human evaluations into MMMU. This will provide a more grounded comparison between model capabilities and expert performance, shedding light on the proximity of current AI systems to achieving Expert AGI.
## Disclaimers
The guidelines for the annotators emphasized strict compliance with copyright and licensing rules from the initial data source, specifically avoiding materials from websites that forbid copying and redistribution.
Should you encounter any data samples potentially breaching the copyright or licensing regulations of any site, we encourage you to notify us. Upon verification, such samples will be promptly removed.
## Contact
- Xiang Yue: xiangyue.work@gmail.com
- Yu Su: su.809@osu.edu
- Wenhu Chen: wenhuchen@uwaterloo.ca
## Citation
**BibTeX:**
```bibtex
@inproceedings{yue2023mmmu,
title={MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI},
author={Xiang Yue and Yuansheng Ni and Kai Zhang and Tianyu Zheng and Ruoqi Liu and Ge Zhang and Samuel Stevens and Dongfu Jiang and Weiming Ren and Yuxuan Sun and Cong Wei and Botao Yu and Ruibin Yuan and Renliang Sun and Ming Yin and Boyuan Zheng and Zhenzhu Yang and Yibo Liu and Wenhao Huang and Huan Sun and Yu Su and Wenhu Chen},
booktitle={Proceedings of CVPR},
year={2024},
}
``` |
mlfoundations/MINT-1T-PDF-CC-2024-10 | mlfoundations | "2024-09-19T21:03:25Z" | 13,530 | 2 | [
"task_categories:image-to-text",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"arxiv:2406.11271",
"region:us",
"multimodal"
] | [
"image-to-text",
"text-generation"
] | "2024-07-12T05:17:41Z" | ---
license: cc-by-4.0
task_categories:
- image-to-text
- text-generation
language:
- en
tags:
- multimodal
pretty_name: MINT-1T
size_categories:
- 100B<n<1T
---
<h1 align="center">
🍃 MINT-1T:<br>Scaling Open-Source Multimodal Data by 10x:<br> A Multimodal Dataset with One Trillion Tokens
</h1>
🍃 MINT-1T is an open-source **M**ultimodal **INT**erleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. 🍃 MINT-1T is designed to facilitate research in multimodal pretraining. 🍃 MINT-1T was created by a team from the University of Washington in collaboration with Salesforce Research and other academic institutions, including Stanford University, the University of Texas at Austin, and the University of California, Berkeley.
You are currently viewing a subset of the PDF portion of 🍃 MINT-1T associated with CommonCrawl dump `CC-2024-10`. For other PDF, HTML, and ArXiv subsets, refer to the [🍃 MINT-1T collection](https://huggingface.co/collections/mlfoundations/mint-1t-6690216ca4d0df7e518dde1c).
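A minimal sketch for peeking at this subset with 🤗 Datasets (streaming avoids downloading the full dump; the `train` split name is an assumption about the repository's default):
```python
from datasets import load_dataset

# Stream the CC-2024-10 PDF subset rather than downloading it in full.
ds = load_dataset("mlfoundations/MINT-1T-PDF-CC-2024-10",
                  split="train", streaming=True)

doc = next(iter(ds))
print(doc.keys())  # inspect the fields carried by one document
```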
![Examples](interleaved-example-twitter.png)
## Updates
### 9/19/24
We have removed roughly 10% of the PDF samples as there was a mismatch between the frames in the TIFF images and the document metadata.
### 8/8/24
We have become aware that the image hashes in the PDF subset of MINT-1T do not match the images in the documents. We want to emphasize that the images for each document are correct, and only the image hashes in the documents' metadata are mislabeled.
## Dataset Details
### Dataset Sources
- **Repository**: https://github.com/mlfoundations/MINT-1T
- **Paper:** https://arxiv.org/abs/2406.11271
- **Blog:** https://blog.salesforceairesearch.com/mint-1t/
## Uses
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
🍃 MINT-1T is designed to facilitate research in multimodal pretraining. The dataset can be used for training multimodal models that can reason about interleaved sequences of text and images, such as [Idefics2](https://huggingface.co/HuggingFaceM4/idefics2-8b), [XGen-MM](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-r-v1), and [Chameleon](https://huggingface.co/facebook/chameleon-30b).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
🍃 MINT-1T was built to make research into large multimodal models more accessible. Using the dataset to train models that ingest or generate personally identifying information (such as images of people's faces and other sensitive content), as well as using it for military applications, are inappropriate use cases of 🍃 MINT-1T.
## Dataset Creation
### Curation Rationale
🍃 MINT-1T was created to address a significant gap in the open-source domain by providing a large-scale multimodal interleaved dataset for pre-training large multimodal models. This dataset aims to be a valuable resource for the research community, facilitating open science in multimodal pretraining.
### Source Data
The dataset is a comprehensive collection of multimodal documents from various sources:
- HTML documents: Filtered from CommonCrawl WARC dumps spanning from 2017 to 2024
- PDF documents: Extracted from CommonCrawl WAT dumps covering 2023 to 2024
- ArXiv documents: A subset of papers from the ArXiv repository
In total, 🍃 MINT-1T contains 1056.8 million documents, broken down as follows:
- 1029.4 million HTML documents
- 24.0 million PDF documents
- 0.6 million ArXiv documents
#### Data Collection and Processing
The data collection and processing involved several steps:
1. Document Extraction:
- HTML documents were parsed from CommonCrawl WARC files
- PDF documents were extracted from CommonCrawl WAT files
- ArXiv papers were directly sourced from ArXiv S3 buckets
2. Filtering Process:
- Applied text quality filters to ensure content relevance and readability
- Removed duplicate content at both paragraph and document levels
- Filtered out undesirable content based on predefined criteria
- Verified image availability and quality for HTML documents
- Limited PDF size to 50MB and 50 pages to manage dataset size and quality
3. Image Processing:
- Used NSFW image detection to remove pornographic or otherwise undesirable images
- Removed images smaller than 150 pixels or larger than 20,000 pixels
- Adjusted aspect ratio thresholds for HTML (2:1) and PDF (3:1) to preserve scientific figures
4. Text Processing:
- Used fasttext for language identification, focusing on English content
- Masked personally identifiable information such as email addresses and IP addresses
- Applied paragraph and document-level deduplication using Bloom filters
5. PDF Specific Processing:
- Used PyMuPDF for parsing PDFs and extracting reading order
- Clustered text blocks based on columns and ordered from top left to bottom right
6. ArXiv Specific Processing:
- Used TexSoup to parse LaTeX source code and interleave images with text
- Cleaned up LaTeX code by removing imports, bibliography, tables, and citation tags
Various open-source tools were utilized in this process, including fasttext, [PyMuPDF](https://github.com/pymupdf/PyMuPDF), [DCLM](https://www.datacomp.ai/dclm/), and [bff](https://github.com/revbucket/bff) for deduplication and content filtering.
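As a toy illustration of the paragraph-level deduplication step (not the pipeline's actual code; at MINT-1T scale the plain set below is replaced by a Bloom filter such as `bff`):
```python
import hashlib

seen_hashes = set()  # stand-in for a Bloom filter at full scale

def dedup_paragraphs(document: str) -> str:
    """Drop paragraphs whose normalized content has already been seen."""
    kept = []
    for para in document.split("\n\n"):
        norm = " ".join(para.split()).lower()
        if not norm:
            continue
        digest = hashlib.sha1(norm.encode("utf-8")).hexdigest()
        if digest not in seen_hashes:
            seen_hashes.add(digest)
            kept.append(para)
    return "\n\n".join(kept)
```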
#### Personal and Sensitive Information
Despite sourcing from public web data, significant efforts were made to minimize the inclusion of personal and sensitive information:
- Email addresses and IP addresses were masked to protect privacy
- An NSFW image classifier was used to remove inappropriate visual content
- URLs containing substrings associated with undesirable or sensitive content were filtered out
However, users should be aware that as the data originates from the public web, it may still contain some sensitive or personal information. The dataset creators acknowledge this limitation and advise users to exercise caution and potentially apply additional filtering based on their specific use cases.
## Bias, Risks, and Limitations
Several potential biases, risks, and limitations have been identified:
1. Data Bias: As the dataset is sourced from web crawls, it may inherit biases present in online content.
2. Content Risks: Despite extensive filtering, there's a possibility that some offensive, insensitive, or inappropriate content may remain in the dataset.
3. Image Availability: The dataset relies on external image URLs, which may become unavailable over time due to link rot, potentially affecting the dataset's long-term usability.
4. PDF Parsing Limitations: The current method for extracting reading order from PDFs may not always accurately capture the intended flow, especially for documents with complex layouts.
5. Potential Legal and Ethical Concerns: While efforts were made to respect robots.txt files and remove sensitive information, there may still be content that individuals did not explicitly consent to include.
### Recommendations
Given these considerations, the following recommendations are provided:
1. Additional Filtering: Users are strongly encouraged to apply additional filtering based on their specific use case and ethical considerations.
2. Inappropriate Use Cases: The dataset is not recommended for applications involving the processing or generation of personally identifying information, nor for military applications.
3. Legal Compliance: Users should independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
4. Bias Awareness: Researchers and developers should be cognizant of potential biases in the dataset and consider their impact on model training and outputs.
## License
We release 🍃 MINT-1T under a CC-BY-4.0 license, designating it primarily as a research artifact. While the dataset is freely available, users are responsible for ensuring its legal use in commercial settings. Users must independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
## Citation
```
@article{awadalla2024mint1t,
title={MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens},
author={Anas Awadalla and Le Xue and Oscar Lo and Manli Shu and Hannah Lee and Etash Kumar Guha and Matt Jordan and Sheng Shen and Mohamed Awadalla and Silvio Savarese and Caiming Xiong and Ran Xu and Yejin Choi and Ludwig Schmidt},
year={2024}
}
``` |
jacobbieker/eumetsat-cloudmask-rss | jacobbieker | "2024-02-28T20:56:15Z" | 13,510 | 0 | [
"license:mit",
"doi:10.57967/hf/1642",
"region:us"
] | null | "2024-01-12T18:51:32Z" | ---
license: mit
---
|
Skywork/SkyPile-150B | Skywork | "2023-12-07T06:11:28Z" | 13,467 | 343 | [
"task_categories:text-generation",
"language:zh",
"size_categories:1M<n<10M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2310.19341",
"region:us",
"llm ",
"casual-lm",
"language-modeling"
] | [
"text-generation"
] | "2023-10-23T12:55:10Z" | ---
task_categories:
- text-generation
language:
- zh
tags:
- 'llm '
- casual-lm
- language-modeling
pretty_name: SkyPile-150B
size_categories:
- 100B<n<1T
---
# SkyPile-150B
## Dataset Summary
SkyPile-150B is a comprehensive, large-scale Chinese dataset specifically designed for the pre-training of large language models. It is derived from a broad array of publicly accessible Chinese Internet web pages. Rigorous filtering, extensive deduplication, and thorough sensitive data filtering have been employed to ensure its quality. Furthermore, we have utilized advanced tools such as fastText and BERT to filter out low-quality data.
The publicly accessible portion of the SkyPile-150B dataset encompasses approximately 233 million unique web pages, each containing an average of over 1,000 Chinese characters. In total, the dataset includes approximately 150 billion tokens and 620 gigabytes of plain text data.
## Language
The SkyPile-150B dataset is exclusively composed of Chinese data.
## Data Field Explanation
- text: the processed and cleaned text extracted from each page.
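A minimal sketch for iterating over the corpus with 🤗 Datasets (streaming, given the ~620 GB size; the `train` split name is an assumption about the repository's default):
```python
from datasets import load_dataset

# Stream SkyPile instead of downloading ~620 GB of JSONL shards up front.
ds = load_dataset("Skywork/SkyPile-150B", split="train", streaming=True)

for doc in ds:
    print(doc["text"][:200])  # "text" is the field described above
    break
```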
## Dataset Safety
We utilized more than 2 million rules and a BERT-base model to identify sensitive data present in the dataset, and subsequently removed any harmful entries we detected.
## Sensitive Information and Bias
Despite our best efforts, SkyPile-150B, given its construction from publicly available web pages, might contain sensitive information such as email addresses, phone numbers, or IP addresses. We have endeavored to minimize this through deduplication and low-quality filtering, but users of SkyPile-150B should remain vigilant.
The Internet is rife with potentially toxic or biased data. We have attempted to mitigate this with specific URL filtering methods, but we encourage users to remain conscious of this potential issue.
## Social Impact of the Dataset
The open-source release of the SkyPile-150B dataset represents our commitment to enhancing access to high-quality web data, which has traditionally been a closely guarded resource among model developers. We believe that this release will foster greater accessibility and the proliferation of high-performance large language models, thereby contributing significantly to the advancement of the field.
## License Agreement
Community usage of the SkyPile dataset requires the Skywork Community License. The SkyPile dataset supports commercial use. If you plan to use the Skywork model or its derivatives for commercial purposes, you must abide by the terms and conditions within the Skywork Community License as well as Apache 2.0.
## Contact Us and Citation
If you find our work helpful, please feel free to cite our paper~
```
@misc{wei2023skywork,
title={Skywork: A More Open Bilingual Foundation Model},
author={Tianwen Wei and Liang Zhao and Lichang Zhang and Bo Zhu and Lijie Wang and Haihua Yang and Biye Li and Cheng Cheng and Weiwei Lü and Rui Hu and Chenxia Li and Liu Yang and Xilin Luo and Xuejie Wu and Lunan Liu and Wenjun Cheng and Peng Cheng and Jianhao Zhang and Xiaoyu Zhang and Lei Lin and Xiaokun Wang and Yutuan Ma and Chuanhai Dong and Yanqi Sun and Yifu Chen and Yongyi Peng and Xiaojuan Liang and Shuicheng Yan and Han Fang and Yahui Zhou},
year={2023},
eprint={2310.19341},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
sal4ahm/RealCQA | sal4ahm | "2024-09-09T18:14:20Z" | 13,458 | 5 | [
"license:mit",
"modality:image",
"arxiv:2308.01979",
"region:us"
] | null | "2024-02-01T17:18:07Z" | ---
license: mit
---
# RealCQA: Real-World Complex Question Answering Dataset
This repository contains the dataset used in the paper "[RealCQA: Scientific Chart Question Answering as a Test-Bed for First-Order Logic](https://arxiv.org/pdf/2308.01979)" (ICDAR 2023). The dataset is designed to facilitate research in complex question answering, involving a diverse set of real-world images and associated textual question-answer pairs.
## Dataset Overview
The RealCQA dataset consists of 28,266 images and 2 million corresponding question-answer pairs, organized into three complementary subsets. Each image is accompanied by a JSON file containing one or more question blocks. The dataset is structured to address a range of question-answering tasks that require an understanding of the visual content.
### Dataset Structure
The dataset is organized into the following folders:
- **Images**
- `images`: Contains the first 10,000 images.
- `images2`: Contains the next 10,000 images.
- `images3`: Contains the remaining 8,266 images.
- **JSON Files**
- `jsons`: Contains the JSON files corresponding to the images in the `images` folder.
- `jsons2`: Contains the JSON files corresponding to the images in the `images2` folder.
- `jsons3`: Contains the JSON files corresponding to the images in the `images3` folder.
- **QA Files**
These files contain the question-answer pairs created for our proposed dataset.
- `qa`: Contains the QA files corresponding to the images in the `images` folder.
- `qa2`: Contains the QA files corresponding to the images in the `images2` folder.
- `qa3`: Contains the QA files corresponding to the images in the `images3` folder.
### File Details
- **Images**: JPEG files named in the format `PMCxxxxxx_abc.jpg`, where `xxxxxx` represents the PubMed Central ID and `abc` represents an identifier specific to the image.
- **JSON Files**: JSON files named in the same format as the images. These are ground-truth annotations from the https://chartinfo.github.io challenge; they provide annotations for chart type, text (OCR), text location, text type (axis/tick/legend), and the data used to plot the chart.
- **QA Files**: QA files named in the same format as the images. Each QA file is a list of question blocks, created for our proposed dataset, that are associated with the corresponding image.
#### QA Structure
Each QA file contains a list of question blocks in the following format:
```json
[
{
"taxonomy id": "2j",
"QID": "16",
"question": "Are all the bars in the chart visually horizontal?",
"answer": "no",
"answer_type": "Binary",
"qa_id": "XbUzFtjqsEOF",
"PMC_ID": "PMC8439477___g003"
},
{
"taxonomy id": "1a",
"QID": "7a",
"question": "What is the type of chart?",
"answer": "Vertical Bar chart",
"answer_type": "String",
"qa_id": "wzcdDijkrHtt",
"PMC_ID": "PMC8439477___g003"
}
]
```
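To inspect these blocks directly, without the loader described below, a minimal sketch (the filename follows the `PMC_ID` convention of the example above):
```python
import json

# Read the QA blocks for one chart image; the file may live in qa, qa2, or qa3.
with open("qa/PMC8439477___g003.json") as f:
    qa_blocks = json.load(f)

for block in qa_blocks:
    print(f"[{block['answer_type']}] {block['question']} -> {block['answer']}")
```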
### Dataset Loader
To facilitate loading and using the dataset, we provide a custom dataset loader script, `dataset.py`. This script defines a PyTorch `Dataset` class to handle loading, preprocessing, and batching of the images and question-answer pairs.
#### How to Use the Dataset Loader
1. **Setup and Requirements**
Ensure you have the following Python packages installed:
```bash
pip install torch torchvision Pillow
```
2. **Dataset Loader Script**
Use the provided `dataset.py` to load the dataset. The script is designed to load the dataset efficiently and handle both training and testing cases.
```python
from dataset import RQADataset
from torch.utils.data import DataLoader
dataset = RQADataset(data_dir='.', split='train') # split='test' for RQA9357 split used in the paper
# Test loading a single item
print(f"Number of samples in dataset: {len(dataset)}")
sample = dataset[0]
print("Sample data:", sample)
# Initialize DataLoader
dataloader = DataLoader(dataset, batch_size=4, collate_fn=RQADataset.custom_collate)
# Test DataLoader
for batch in dataloader:
print("Batch data:", batch)
break # Load only one batch for testing
```
### Citation
If you use this dataset in your research, please cite the following paper:
```bibtex
@InProceedings{10.1007/978-3-031-41682-8_5,
author="Ahmed, Saleem
and Jawade, Bhavin
and Pandey, Shubham
and Setlur, Srirangaraj
and Govindaraju, Venu",
editor="Fink, Gernot A.
and Jain, Rajiv
and Kise, Koichi
and Zanibbi, Richard",
title="RealCQA: Scientific Chart Question Answering as a Test-Bed for First-Order Logic",
booktitle="Document Analysis and Recognition - ICDAR 2023",
year="2023",
publisher="Springer Nature Switzerland",
address="Cham",
pages="66--83",
abstract="We present a comprehensive study of chart visual question-answering(QA) task, to address the challenges faced in comprehending and extracting data from chart visualizations within documents. Despite efforts to tackle this problem using synthetic charts, solutions are limited by the shortage of annotated real-world data. To fill this gap, we introduce a benchmark and dataset for chart visual QA on real-world charts, offering a systematic analysis of the task and a novel taxonomy for template-based chart question creation. Our contribution includes the introduction of a new answer type, `list', with both ranked and unranked variations. Our study is conducted on a real-world chart dataset from scientific literature, showcasing higher visual complexity compared to other works. Our focus is on template-based QA and how it can serve as a standard for evaluating the first-order logic capabilities of models. The results of our experiments, conducted on a real-world out-of-distribution dataset, provide a robust evaluation of large-scale pre-trained models and advance the field of chart visual QA and formal logic verification for neural networks in general. Our code and dataset is publicly available (https://github.com/cse-ai-lab/RealCQA).",
isbn="978-3-031-41682-8"
}
```
### License
This dataset is licensed under the [MIT License](LICENSE). By using this dataset, you agree to abide by its terms and conditions.
### Contact
For any questions or issues, please contact the authors of the paper or open an issue in this repository. |
HAERAE-HUB/KMMLU | HAERAE-HUB | "2024-03-05T14:13:32Z" | 13,418 | 56 | [
"task_categories:multiple-choice",
"language:ko",
"license:cc-by-nd-4.0",
"size_categories:100K<n<1M",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2402.11548",
"region:us",
"mmlu",
"haerae"
] | [
"multiple-choice"
] | "2023-11-27T09:06:18Z" | ---
configs:
- config_name: Accounting
data_files:
- split: train
path: data/Accounting-train.csv
- split: dev
path: data/Accounting-dev.csv
- split: test
path: data/Accounting-test.csv
- config_name: Agricultural-Sciences
data_files:
- split: train
path: data/Agricultural-Sciences-train.csv
- split: dev
path: data/Agricultural-Sciences-dev.csv
- split: test
path: data/Agricultural-Sciences-test.csv
- config_name: Aviation-Engineering-and-Maintenance
data_files:
- split: train
path: data/Aviation-Engineering-and-Maintenance-train.csv
- split: dev
path: data/Aviation-Engineering-and-Maintenance-dev.csv
- split: test
path: data/Aviation-Engineering-and-Maintenance-test.csv
- config_name: Biology
data_files:
- split: train
path: data/Biology-train.csv
- split: dev
path: data/Biology-dev.csv
- split: test
path: data/Biology-test.csv
- config_name: Chemical-Engineering
data_files:
- split: train
path: data/Chemical-Engineering-train.csv
- split: dev
path: data/Chemical-Engineering-dev.csv
- split: test
path: data/Chemical-Engineering-test.csv
- config_name: Chemistry
data_files:
- split: train
path: data/Chemistry-train.csv
- split: dev
path: data/Chemistry-dev.csv
- split: test
path: data/Chemistry-test.csv
- config_name: Civil-Engineering
data_files:
- split: train
path: data/Civil-Engineering-train.csv
- split: dev
path: data/Civil-Engineering-dev.csv
- split: test
path: data/Civil-Engineering-test.csv
- config_name: Computer-Science
data_files:
- split: train
path: data/Computer-Science-train.csv
- split: dev
path: data/Computer-Science-dev.csv
- split: test
path: data/Computer-Science-test.csv
- config_name: Construction
data_files:
- split: train
path: data/Construction-train.csv
- split: dev
path: data/Construction-dev.csv
- split: test
path: data/Construction-test.csv
- config_name: Criminal-Law
data_files:
- split: train
path: data/Criminal-Law-train.csv
- split: dev
path: data/Criminal-Law-dev.csv
- split: test
path: data/Criminal-Law-test.csv
- config_name: Ecology
data_files:
- split: train
path: data/Ecology-train.csv
- split: dev
path: data/Ecology-dev.csv
- split: test
path: data/Ecology-test.csv
- config_name: Economics
data_files:
- split: train
path: data/Economics-train.csv
- split: dev
path: data/Economics-dev.csv
- split: test
path: data/Economics-test.csv
- config_name: Education
data_files:
- split: train
path: data/Education-train.csv
- split: dev
path: data/Education-dev.csv
- split: test
path: data/Education-test.csv
- config_name: Electrical-Engineering
data_files:
- split: train
path: data/Electrical-Engineering-train.csv
- split: dev
path: data/Electrical-Engineering-dev.csv
- split: test
path: data/Electrical-Engineering-test.csv
- config_name: Electronics-Engineering
data_files:
- split: train
path: data/Electronics-Engineering-train.csv
- split: dev
path: data/Electronics-Engineering-dev.csv
- split: test
path: data/Electronics-Engineering-test.csv
- config_name: Energy-Management
data_files:
- split: train
path: data/Energy-Management-train.csv
- split: dev
path: data/Energy-Management-dev.csv
- split: test
path: data/Energy-Management-test.csv
- config_name: Environmental-Science
data_files:
- split: train
path: data/Environmental-Science-train.csv
- split: dev
path: data/Environmental-Science-dev.csv
- split: test
path: data/Environmental-Science-test.csv
- config_name: Fashion
data_files:
- split: train
path: data/Fashion-train.csv
- split: dev
path: data/Fashion-dev.csv
- split: test
path: data/Fashion-test.csv
- config_name: Food-Processing
data_files:
- split: train
path: data/Food-Processing-train.csv
- split: dev
path: data/Food-Processing-dev.csv
- split: test
path: data/Food-Processing-test.csv
- config_name: Gas-Technology-and-Engineering
data_files:
- split: train
path: data/Gas-Technology-and-Engineering-train.csv
- split: dev
path: data/Gas-Technology-and-Engineering-dev.csv
- split: test
path: data/Gas-Technology-and-Engineering-test.csv
- config_name: Geomatics
data_files:
- split: train
path: data/Geomatics-train.csv
- split: dev
path: data/Geomatics-dev.csv
- split: test
path: data/Geomatics-test.csv
- config_name: Health
data_files:
- split: train
path: data/Health-train.csv
- split: dev
path: data/Health-dev.csv
- split: test
path: data/Health-test.csv
- config_name: Industrial-Engineer
data_files:
- split: train
path: data/Industrial-Engineer-train.csv
- split: dev
path: data/Industrial-Engineer-dev.csv
- split: test
path: data/Industrial-Engineer-test.csv
- config_name: Information-Technology
data_files:
- split: train
path: data/Information-Technology-train.csv
- split: dev
path: data/Information-Technology-dev.csv
- split: test
path: data/Information-Technology-test.csv
- config_name: Interior-Architecture-and-Design
data_files:
- split: train
path: data/Interior-Architecture-and-Design-train.csv
- split: dev
path: data/Interior-Architecture-and-Design-dev.csv
- split: test
path: data/Interior-Architecture-and-Design-test.csv
- config_name: Law
data_files:
- split: train
path: data/Law-train.csv
- split: dev
path: data/Law-dev.csv
- split: test
path: data/Law-test.csv
- config_name: Machine-Design-and-Manufacturing
data_files:
- split: train
path: data/Machine-Design-and-Manufacturing-train.csv
- split: dev
path: data/Machine-Design-and-Manufacturing-dev.csv
- split: test
path: data/Machine-Design-and-Manufacturing-test.csv
- config_name: Management
data_files:
- split: train
path: data/Management-train.csv
- split: dev
path: data/Management-dev.csv
- split: test
path: data/Management-test.csv
- config_name: Maritime-Engineering
data_files:
- split: train
path: data/Maritime-Engineering-train.csv
- split: dev
path: data/Maritime-Engineering-dev.csv
- split: test
path: data/Maritime-Engineering-test.csv
- config_name: Marketing
data_files:
- split: train
path: data/Marketing-train.csv
- split: dev
path: data/Marketing-dev.csv
- split: test
path: data/Marketing-test.csv
- config_name: Materials-Engineering
data_files:
- split: train
path: data/Materials-Engineering-train.csv
- split: dev
path: data/Materials-Engineering-dev.csv
- split: test
path: data/Materials-Engineering-test.csv
- config_name: Mechanical-Engineering
data_files:
- split: train
path: data/Mechanical-Engineering-train.csv
- split: dev
path: data/Mechanical-Engineering-dev.csv
- split: test
path: data/Mechanical-Engineering-test.csv
- config_name: Nondestructive-Testing
data_files:
- split: train
path: data/Nondestructive-Testing-train.csv
- split: dev
path: data/Nondestructive-Testing-dev.csv
- split: test
path: data/Nondestructive-Testing-test.csv
- config_name: Patent
data_files:
- split: train
path: data/Patent-train.csv
- split: dev
path: data/Patent-dev.csv
- split: test
path: data/Patent-test.csv
- config_name: Political-Science-and-Sociology
data_files:
- split: train
path: data/Political-Science-and-Sociology-train.csv
- split: dev
path: data/Political-Science-and-Sociology-dev.csv
- split: test
path: data/Political-Science-and-Sociology-test.csv
- config_name: Psychology
data_files:
- split: train
path: data/Psychology-train.csv
- split: dev
path: data/Psychology-dev.csv
- split: test
path: data/Psychology-test.csv
- config_name: Public-Safety
data_files:
- split: train
path: data/Public-Safety-train.csv
- split: dev
path: data/Public-Safety-dev.csv
- split: test
path: data/Public-Safety-test.csv
- config_name: Railway-and-Automotive-Engineering
data_files:
- split: train
path: data/Railway-and-Automotive-Engineering-train.csv
- split: dev
path: data/Railway-and-Automotive-Engineering-dev.csv
- split: test
path: data/Railway-and-Automotive-Engineering-test.csv
- config_name: Real-Estate
data_files:
- split: train
path: data/Real-Estate-train.csv
- split: dev
path: data/Real-Estate-dev.csv
- split: test
path: data/Real-Estate-test.csv
- config_name: Refrigerating-Machinery
data_files:
- split: train
path: data/Refrigerating-Machinery-train.csv
- split: dev
path: data/Refrigerating-Machinery-dev.csv
- split: test
path: data/Refrigerating-Machinery-test.csv
- config_name: Social-Welfare
data_files:
- split: train
path: data/Social-Welfare-train.csv
- split: dev
path: data/Social-Welfare-dev.csv
- split: test
path: data/Social-Welfare-test.csv
- config_name: Taxation
data_files:
- split: train
path: data/Taxation-train.csv
- split: dev
path: data/Taxation-dev.csv
- split: test
path: data/Taxation-test.csv
- config_name: Telecommunications-and-Wireless-Technology
data_files:
- split: train
path: data/Telecommunications-and-Wireless-Technology-train.csv
- split: dev
path: data/Telecommunications-and-Wireless-Technology-dev.csv
- split: test
path: data/Telecommunications-and-Wireless-Technology-test.csv
- config_name: Korean-History
data_files:
- split: train
path: data/korean-history-train.csv
- split: dev
path: data/korean-history-dev.csv
- split: test
path: data/korean-history-test.csv
- config_name: Math
data_files:
- split: train
path: data/math-train.csv
- split: dev
path: data/math-dev.csv
- split: test
path: data/math-test.csv
task_categories:
- multiple-choice
language:
- ko
tags:
- mmlu
- haerae
size_categories:
- 10K<n<100K
license: cc-by-nd-4.0
---
# KMMLU (Korean-MMLU)
We propose KMMLU, a new Korean benchmark with 35,030 expert-level multiple-choice questions across 45 subjects ranging from humanities to STEM.
Unlike previous Korean benchmarks that are translated from existing English benchmarks, KMMLU is collected from original Korean exams, capturing linguistic and cultural aspects of the Korean language.
We test 26 publicly available and proprietary LLMs, identifying significant room for improvement.
The best publicly available model achieves 50.54% on KMMLU, far below the average human performance of 62.6%.
This model was primarily trained for English and Chinese, not Korean.
Current LLMs tailored to Korean, such as Polyglot-Ko, perform far worse. Surprisingly, even the most capable proprietary LLMs, e.g., GPT-4 and HyperCLOVA X, achieve 59.95% and 53.40%, respectively.
This suggests that further work is needed to improve Korean LLMs, and KMMLU offers the right tool to track this progress.
We make our dataset publicly available on the Hugging Face Hub and integrate the benchmark into EleutherAI's Language Model Evaluation Harness.
Link to Paper: [KMMLU: Measuring Massive Multitask Language Understanding in Korean](https://arxiv.org/abs/2402.11548)
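To try the benchmark directly, here is a minimal loading sketch (assuming the canonical Hub repository ID `HAERAE-HUB/KMMLU`; each subject is exposed as a separate config with `train`/`dev`/`test` splits):
```python
from datasets import load_dataset

# Load a single subject; config names match the subjects listed in this card
kmmlu_accounting = load_dataset("HAERAE-HUB/KMMLU", "Accounting")
print(kmmlu_accounting["test"][0])
```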
### KMMLU Statistics
| Category | # Questions |
|------------------------------|-------------|
| **Prerequisites** | |
| None | 59,909 |
| 1 Prerequisite Test | 12,316 |
| 2 Prerequisite Tests | 776 |
| 2+ Years of Experience | 65,135 |
| 4+ Years of Experience | 98,678 |
| 9+ Years of Experience | 6,963 |
| **Question Type** | |
| Positive | 207,030 |
| Negation | 36,777 |
| **Split** | |
| Train | 208,522 |
| Validation | 225 |
| Test | 35,030 |
| **Total** | 243,777 |
### Categories
To reimplement the categories in the paper, refer to the following:
```
supercategories = {
"accounting": "HUMSS",
"agricultural_sciences": "Other",
"aviation_engineering_and_maintenance": "Applied Science",
"biology": "STEM",
"chemical_engineering": "STEM",
"chemistry": "STEM",
"civil_engineering": "STEM",
"computer_science": "STEM",
"construction": "Other",
"criminal_law": "HUMSS",
"ecology": "STEM",
"economics": "HUMSS",
"education": "HUMSS",
"electrical_engineering": "STEM",
"electronics_engineering": "Applied Science",
"energy_management": "Applied Science",
"environmental_science": "Applied Science",
"fashion": "Other",
"food_processing": "Other",
"gas_technology_and_engineering": "Applied Science",
"geomatics": "Applied Science",
"health": "Other",
"industrial_engineer": "Applied Science",
"information_technology": "STEM",
"interior_architecture_and_design": "Other",
"law": "HUMSS",
"machine_design_and_manufacturing": "Applied Science",
"management": "HUMSS",
"maritime_engineering": "Applied Science",
"marketing": "Other",
"materials_engineering": "STEM",
"mechanical_engineering": "STEM",
"nondestructive_testing": "Applied Science",
"patent": "Other",
"political_science_and_sociology": "HUMSS",
"psychology": "HUMSS",
"public_safety": "Other",
"railway_and_automotive_engineering": "Applied Science",
"real_estate": "Other",
"refrigerating_machinery": "Other",
"social_welfare": "HUMSS",
"taxation": "HUMSS",
"telecommunications_and_wireless_technology": "Applied Science",
"korean_history": "HUMSS",
"math": "STEM"
}
```
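As an illustration, per-subject results can be aggregated into these supercategories as follows (a sketch; `subject_scores` is a hypothetical dict of per-subject accuracies keyed by the names above):
```python
from collections import defaultdict

# Hypothetical per-subject accuracies produced by an evaluation run
subject_scores = {"accounting": 0.41, "biology": 0.47, "math": 0.33}

grouped = defaultdict(list)
for subject, accuracy in subject_scores.items():
    grouped[supercategories[subject]].append(accuracy)

# Unweighted mean accuracy per supercategory
category_scores = {cat: sum(accs) / len(accs) for cat, accs in grouped.items()}
print(category_scores)
```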
### Point of Contact
For any questions, contact us via the following email:
```
spthsrbwls123@yonsei.ac.kr
``` |
ArmelR/the-pile-splitted | ArmelR | "2023-09-06T09:53:16Z" | 13,410 | 20 | [
"size_categories:10M<n<100M",
"format:arrow",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2101.00027",
"arxiv:2201.07311",
"region:us"
] | null | "2023-07-30T14:21:26Z" | ---
configs:
- config_name: all
data_files:
- split: train
path:
- "data/ArXiv/train/*.arrow"
- "data/BookCorpus2/train/*.arrow"
- "data/Books3/train/*.arrow"
- "data/DM Mathematics/train/*.arrow"
- "data/Enron Emails/train/*.arrow"
- "data/EuroParl/train/*.arrow"
- "data/FreeLaw/train/*.arrow"
- "data/Github/train/*.arrow"
- "data/Gutenberg (PG-19)/train/*.arrow"
- "data/HackerNews/train/*.arrow"
- "data/NIH ExPorter/train/*.arrow"
- "data/OpenSubtitles/train/*.arrow"
- "data/OpenWebText2/train/*.arrow"
- "data/PhilPapers/train/*.arrow"
- "data/Pile-CC/train/*.arrow"
- "data/PubMed Abstracts/train/*.arrow"
- "data/PubMed Central/train/*.arrow"
- "data/StackExchange/train/*.arrow"
- "data/UPSTO Backgrounds/train/*.arrow"
- "data/Ubuntu IRC/train/*.arrow"
- "data/Wikipedia (en)/train/*.arrow"
- "data/YoutubeSubtitles/train/*.arrow"
- split: test
path:
- "data/ArXiv/test/*.arrow"
- "data/BookCorpus2/test/*.arrow"
- "data/Books3/test/*.arrow"
- "data/DM Mathematics/test/*.arrow"
- "data/Enron Emails/test/*.arrow"
- "data/EuroParl/test/*.arrow"
- "data/FreeLaw/test/*.arrow"
- "data/Github/test/*.arrow"
- "data/Gutenberg (PG-19)/test/*.arrow"
- "data/HackerNews/test/*.arrow"
- "data/NIH ExPorter/test/*.arrow"
- "data/OpenSubtitles/test/*.arrow"
- "data/OpenWebText2/test/*.arrow"
- "data/PhilPapers/test/*.arrow"
- "data/Pile-CC/test/*.arrow"
- "data/PubMed Abstracts/test/*.arrow"
- "data/PubMed Central/test/*.arrow"
- "data/StackExchange/test/*.arrow"
- "data/UPSTO Backgrounds/test/*.arrow"
- "data/Ubuntu IRC/test/*.arrow"
- "data/Wikipedia (en)/test/*.arrow"
- "data/YoutubeSubtitles/test/*.arrow"
default: true
- config_name: ArXiv
data_files:
- split: train
path: "data/ArXiv/train/*.arrow"
- split: test
path: "data/ArXiv/test/*.arrow"
- config_name: BookCorpus2
data_files:
- split: train
path: "data/BookCorpus2/train/*.arrow"
- split: test
path: "data/BookCorpus2/test/*.arrow"
- config_name: Books3
data_files:
- split: train
path: "data/Books3/train/*.arrow"
- split: test
path: "data/Books3/test/*.arrow"
- config_name: DM Mathematics
data_files:
- split: train
path: "data/DM Mathematics/train/*.arrow"
- split: test
path: "data/DM Mathematics/test/*.arrow"
- config_name: Enron Emails
data_files:
- split: train
path: "data/Enron Emails/train/*.arrow"
- split: test
path: "data/Enron Emails/test/*.arrow"
- config_name: EuroParl
data_files:
- split: train
path: "data/EuroParl/train/*.arrow"
- split: test
path: "data/EuroParl/test/*.arrow"
- config_name: FreeLaw
data_files:
- split: train
path: "data/FreeLaw/train/*.arrow"
- split: test
path: "data/FreeLaw/test/*.arrow"
- config_name: Github
data_files:
- split: train
path: "data/Github/train/*.arrow"
- split: test
path: "data/Github/test/*.arrow"
- config_name: Gutenberg (PG-19)
data_files:
- split: train
path: "data/Gutenberg (PG-19)/train/*.arrow"
- split: test
path: "data/Gutenberg (PG-19)/test/*.arrow"
- config_name: HackerNews
data_files:
- split: train
path: "data/HackerNews/train/*.arrow"
- split: test
path: "data/HackerNews/test/*.arrow"
- config_name: NIH ExPorter
data_files:
- split: train
path: "data/NIH ExPorter/train/*.arrow"
- split: test
path: "data/NIH ExPorter/test/*.arrow"
- config_name: OpenSubtitles
data_files:
- split: train
path: "data/OpenSubtitles/train/*.arrow"
- split: test
path: "data/OpenSubtitles/test/*.arrow"
- config_name: OpenWebText2
data_files:
- split: train
path: "data/OpenWebText2/train/*.arrow"
- split: test
path: "data/OpenWebText2/test/*.arrow"
- config_name: PhilPapers
data_files:
- split: train
path: "data/PhilPapers/train/*.arrow"
- split: test
path: "data/PhilPapers/test/*.arrow"
- config_name: Pile-CC
data_files:
- split: train
path: "data/Pile-CC/train/*.arrow"
- split: test
path: "data/Pile-CC/test/*.arrow"
- config_name: PubMed Abstracts
data_files:
- split: train
path: "data/PubMed Abstracts/train/*.arrow"
- split: test
path: "data/PubMed Abstracts/test/*.arrow"
- config_name: PubMed Central
data_files:
- split: train
path: "data/PubMed Central/train/*.arrow"
- split: test
path: "data/PubMed Central/test/*.arrow"
- config_name: StackExchange
data_files:
- split: train
path: "data/StackExchange/train/*.arrow"
- split: test
path: "data/StackExchange/test/*.arrow"
- config_name: UPSTO Backgrounds
data_files:
- split: train
path: "data/UPSTO Backgrounds/train/*.arrow"
- split: test
path: "data/UPSTO Backgrounds/test/*.arrow"
- config_name: Ubuntu IRC
data_files:
- split: train
path: "data/Ubuntu IRC/train/*.arrow"
- split: test
path: "data/Ubuntu IRC/test/*.arrow"
- config_name: Wikipedia (en)
data_files:
- split: train
path: "data/Wikipedia (en)/train/*.arrow"
- split: test
path: "data/Wikipedia (en)/test/*.arrow"
- config_name: YoutubeSubtitles
data_files:
- split: train
path: "data/YoutubeSubtitles/train/*.arrow"
- split: test
path: "data/YoutubeSubtitles/test/*.arrow"
---
# Dataset description
[The Pile](https://arxiv.org/abs/2101.00027) is an 800GB dataset of English text
designed by EleutherAI to train large-scale language models. The original version of
the dataset can be found [here](https://huggingface.co/datasets/EleutherAI/pile).
The dataset is divided into 22 smaller high-quality datasets. For more information about
each of them, please refer to [the datasheet for the Pile](https://arxiv.org/abs/2201.07311).
However, the version of the dataset currently available on the Hub is not split accordingly.
We solved this problem in order to improve the user experience when working with
the Pile via the Hub.
Here is an instance of the Pile:
```
{
'meta': {'pile_set_name': 'Pile-CC'},
'text': 'It is done, and submitted. You can play “Survival of the Tastiest” on Android, and on the web. Playing on...'
}
```
We used the `meta` column to divide the dataset into subsets: each instance `example` belongs to the subset
`domain`, where `domain = example['meta']['pile_set_name']`. By doing this, we were able to create a [new version of the Pile](https://huggingface.co/datasets/ArmelR/sharded-pile)
that is properly divided, with each instance carrying a new `domain` column, as sketched below.
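Below is a minimal sketch of that regrouping step (illustrative only; it is not the exact processing code used to build this dataset):
```python
from collections import defaultdict

def group_by_domain(examples):
    # Group Pile instances by their subset name and tag each with a `domain` column
    grouped = defaultdict(list)
    for example in examples:
        domain = example["meta"]["pile_set_name"]
        example["domain"] = domain
        grouped[domain].append(example)
    return grouped
```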
We further split each subset into train/test (97%/3%) to build the current dataset, which has the following structure:
```
data
ArXiv
train
test
BookCorpus2
train
test
Books3
train
test
```
# Usage
```python
from datasets import load_dataset

subset_of_interest = "Github"  # any config name from this card, or "all" for the full dataset
dataset = load_dataset(
    "ArmelR/the-pile-splitted",
    subset_of_interest,
    num_proc=8
)
```
Using `subset_of_interest = "all"` (the default configuration) will load the whole dataset.
|
EdinburghNLP/xsum | EdinburghNLP | "2023-04-05T13:45:25Z" | 13,305 | 91 | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:100K<n<1M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:1808.08745",
"region:us"
] | [
"summarization"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: Extreme Summarization (XSum)
paperswithcode_id: xsum
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
dataset_info:
features:
- name: document
dtype: string
- name: summary
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 479206608
num_examples: 204045
- name: validation
num_bytes: 26292901
num_examples: 11332
- name: test
num_bytes: 26756165
num_examples: 11334
download_size: 257302866
dataset_size: 532255674
---
# Dataset Card for "xsum"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/EdinburghNLP/XSum
- **Paper:** [Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization](https://arxiv.org/abs/1808.08745)
- **Point of Contact:** [Shashi Narayan](mailto:shashi.narayan@ed.ac.uk)
- **Size of downloaded dataset files:** 257.30 MB
- **Size of the generated dataset:** 532.26 MB
- **Total amount of disk used:** 789.56 MB
### Dataset Summary
Extreme Summarization (XSum) Dataset.
There are three features:
- document: Input news article.
- summary: One sentence summary of the article.
- id: BBC ID of the article.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 257.30 MB
- **Size of the generated dataset:** 532.26 MB
- **Total amount of disk used:** 789.56 MB
An example of 'validation' looks as follows.
```
{
"document": "some-body",
"id": "29750031",
"summary": "some-sentence"
}
```
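A minimal sketch for loading and inspecting the dataset with the `datasets` library:
```python
from datasets import load_dataset

# Load the validation split and print one document/summary pair
xsum = load_dataset("EdinburghNLP/xsum", split="validation")
example = xsum[0]
print(example["id"])
print(example["summary"])
```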
### Data Fields
The data fields are the same among all splits.
#### default
- `document`: a `string` feature.
- `summary`: a `string` feature.
- `id`: a `string` feature.
### Data Splits
| name |train |validation|test |
|-------|-----:|---------:|----:|
|default|204045| 11332|11334|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{Narayan2018DontGM,
title={Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization},
author={Shashi Narayan and Shay B. Cohen and Mirella Lapata},
journal={ArXiv},
year={2018},
volume={abs/1808.08745}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@jbragg](https://github.com/jbragg), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
zh-plus/tiny-imagenet | zh-plus | "2022-07-12T09:04:30Z" | 13,281 | 59 | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:extended|imagenet-1k",
"language:en",
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-classification"
] | "2022-07-01T03:33:16Z" | ---
annotations_creators:
- crowdsourced
extra_gated_prompt: "By clicking on \u201CAccess repository\u201D below, you also\
\ agree to ImageNet Terms of Access:\n[RESEARCHER_FULLNAME] (the \"Researcher\"\
) has requested permission to use the ImageNet database (the \"Database\") at Princeton\
\ University and Stanford University. In exchange for such permission, Researcher\
\ hereby agrees to the following terms and conditions:\n1. Researcher shall use\
\ the Database only for non-commercial research and educational purposes.\n2. Princeton\
\ University, Stanford University and Hugging Face make no representations or warranties\
\ regarding the Database, including but not limited to warranties of non-infringement\
\ or fitness for a particular purpose.\n3. Researcher accepts full responsibility\
\ for his or her use of the Database and shall defend and indemnify the ImageNet\
\ team, Princeton University, Stanford University and Hugging Face, including their\
\ employees, Trustees, officers and agents, against any and all claims arising from\
\ Researcher's use of the Database, including but not limited to Researcher's use\
\ of any copies of copyrighted images that he or she may create from the Database.\n\
4. Researcher may provide research associates and colleagues with access to the\
\ Database provided that they first agree to be bound by these terms and conditions.\n\
5. Princeton University, Stanford University and Hugging Face reserve the right\
\ to terminate Researcher's access to the Database at any time.\n6. If Researcher\
\ is employed by a for-profit, commercial entity, Researcher's employer shall also\
\ be bound by these terms and conditions, and Researcher hereby represents that\
\ he or she is fully authorized to enter into this agreement on behalf of such employer.\n\
7. The law of the State of New Jersey shall apply to all disputes under this agreement."
language:
- en
language_creators:
- crowdsourced
license: []
multilinguality:
- monolingual
paperswithcode_id: imagenet
pretty_name: Tiny-ImageNet
size_categories:
- 100K<n<1M
source_datasets:
- extended|imagenet-1k
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
---
# Dataset Card for tiny-imagenet
## Dataset Description
- **Homepage:** https://www.kaggle.com/c/tiny-imagenet
- **Repository:** [Needs More Information]
- **Paper:** http://cs231n.stanford.edu/reports/2017/pdfs/930.pdf
- **Leaderboard:** https://paperswithcode.com/sota/image-classification-on-tiny-imagenet-1
### Dataset Summary
Tiny ImageNet contains 100,000 images of 200 classes (500 for each class) downsized to 64×64 color images. Each class has 500 training images, 50 validation images, and 50 test images.
### Languages
The class labels in the dataset are in English.
## Dataset Structure
### Data Instances
```json
{
  'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=64x64 at 0x1A800E8E190>,
'label': 15
}
```
### Data Fields
- image: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (i.e. `dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column: `dataset[0]["image"]` should always be preferred over `dataset["image"][0]`.
- label: an int classification label. -1 for test set as the labels are missing. Check `classes.py` for the map of numbers & labels.
### Data Splits
| | Train | Valid |
| ------------ | ------ | ----- |
| # of samples | 100000 | 10000 |
## Usage
### Example
#### Load Dataset
```python
from datasets import load_dataset

def example_usage():
    tiny_imagenet = load_dataset('Maysee/tiny-imagenet', split='train')
    print(tiny_imagenet[0])

if __name__ == '__main__':
    example_usage()
``` |
facebook/mlqa | facebook | "2024-01-18T11:09:06Z" | 13,222 | 40 | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:original",
"language:en",
"language:de",
"language:es",
"language:ar",
"language:zh",
"language:vi",
"language:hi",
"license:cc-by-sa-3.0",
"size_categories:10K<n<100K",
"region:us"
] | [
"question-answering"
] | "2022-03-02T23:29:22Z" | ---
pretty_name: MLQA (MultiLingual Question Answering)
language:
- en
- de
- es
- ar
- zh
- vi
- hi
license:
- cc-by-sa-3.0
source_datasets:
- original
size_categories:
- 10K<n<100K
language_creators:
- crowdsourced
annotations_creators:
- crowdsourced
multilinguality:
- multilingual
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: mlqa
dataset_info:
- config_name: mlqa-translate-train.ar
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 101227245
num_examples: 78058
- name: validation
num_bytes: 13144332
num_examples: 9512
download_size: 63364123
dataset_size: 114371577
- config_name: mlqa-translate-train.de
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 77996825
num_examples: 80069
- name: validation
num_bytes: 10322113
num_examples: 9927
download_size: 63364123
dataset_size: 88318938
- config_name: mlqa-translate-train.vi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 97387431
num_examples: 84816
- name: validation
num_bytes: 12731112
num_examples: 10356
download_size: 63364123
dataset_size: 110118543
- config_name: mlqa-translate-train.zh
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 55143547
num_examples: 76285
- name: validation
num_bytes: 7418070
num_examples: 9568
download_size: 63364123
dataset_size: 62561617
- config_name: mlqa-translate-train.es
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 80789653
num_examples: 81810
- name: validation
num_bytes: 10718376
num_examples: 10123
download_size: 63364123
dataset_size: 91508029
- config_name: mlqa-translate-train.hi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 168117671
num_examples: 82451
- name: validation
num_bytes: 22422152
num_examples: 10253
download_size: 63364123
dataset_size: 190539823
- config_name: mlqa-translate-test.ar
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 5484467
num_examples: 5335
download_size: 10075488
dataset_size: 5484467
- config_name: mlqa-translate-test.de
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 3884332
num_examples: 4517
download_size: 10075488
dataset_size: 3884332
- config_name: mlqa-translate-test.vi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 5998327
num_examples: 5495
download_size: 10075488
dataset_size: 5998327
- config_name: mlqa-translate-test.zh
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4831704
num_examples: 5137
download_size: 10075488
dataset_size: 4831704
- config_name: mlqa-translate-test.es
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 3916758
num_examples: 5253
download_size: 10075488
dataset_size: 3916758
- config_name: mlqa-translate-test.hi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4608811
num_examples: 4918
download_size: 10075488
dataset_size: 4608811
- config_name: mlqa.ar.ar
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 8216837
num_examples: 5335
- name: validation
num_bytes: 808830
num_examples: 517
download_size: 75719050
dataset_size: 9025667
- config_name: mlqa.ar.de
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 2132247
num_examples: 1649
- name: validation
num_bytes: 358554
num_examples: 207
download_size: 75719050
dataset_size: 2490801
- config_name: mlqa.ar.vi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 3235363
num_examples: 2047
- name: validation
num_bytes: 283834
num_examples: 163
download_size: 75719050
dataset_size: 3519197
- config_name: mlqa.ar.zh
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 3175660
num_examples: 1912
- name: validation
num_bytes: 334016
num_examples: 188
download_size: 75719050
dataset_size: 3509676
- config_name: mlqa.ar.en
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 8074057
num_examples: 5335
- name: validation
num_bytes: 794775
num_examples: 517
download_size: 75719050
dataset_size: 8868832
- config_name: mlqa.ar.es
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 2981237
num_examples: 1978
- name: validation
num_bytes: 223188
num_examples: 161
download_size: 75719050
dataset_size: 3204425
- config_name: mlqa.ar.hi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 2993225
num_examples: 1831
- name: validation
num_bytes: 276727
num_examples: 186
download_size: 75719050
dataset_size: 3269952
- config_name: mlqa.de.ar
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1587005
num_examples: 1649
- name: validation
num_bytes: 195822
num_examples: 207
download_size: 75719050
dataset_size: 1782827
- config_name: mlqa.de.de
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4274496
num_examples: 4517
- name: validation
num_bytes: 477366
num_examples: 512
download_size: 75719050
dataset_size: 4751862
- config_name: mlqa.de.vi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1654540
num_examples: 1675
- name: validation
num_bytes: 211985
num_examples: 182
download_size: 75719050
dataset_size: 1866525
- config_name: mlqa.de.zh
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1645937
num_examples: 1621
- name: validation
num_bytes: 180114
num_examples: 190
download_size: 75719050
dataset_size: 1826051
- config_name: mlqa.de.en
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4251153
num_examples: 4517
- name: validation
num_bytes: 474863
num_examples: 512
download_size: 75719050
dataset_size: 4726016
- config_name: mlqa.de.es
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1678176
num_examples: 1776
- name: validation
num_bytes: 166193
num_examples: 196
download_size: 75719050
dataset_size: 1844369
- config_name: mlqa.de.hi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1343983
num_examples: 1430
- name: validation
num_bytes: 150679
num_examples: 163
download_size: 75719050
dataset_size: 1494662
- config_name: mlqa.vi.ar
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 3164094
num_examples: 2047
- name: validation
num_bytes: 226724
num_examples: 163
download_size: 75719050
dataset_size: 3390818
- config_name: mlqa.vi.de
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 2189315
num_examples: 1675
- name: validation
num_bytes: 272794
num_examples: 182
download_size: 75719050
dataset_size: 2462109
- config_name: mlqa.vi.vi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 7807045
num_examples: 5495
- name: validation
num_bytes: 715291
num_examples: 511
download_size: 75719050
dataset_size: 8522336
- config_name: mlqa.vi.zh
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 2947458
num_examples: 1943
- name: validation
num_bytes: 265154
num_examples: 184
download_size: 75719050
dataset_size: 3212612
- config_name: mlqa.vi.en
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 7727204
num_examples: 5495
- name: validation
num_bytes: 707925
num_examples: 511
download_size: 75719050
dataset_size: 8435129
- config_name: mlqa.vi.es
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 2822481
num_examples: 2018
- name: validation
num_bytes: 279235
num_examples: 189
download_size: 75719050
dataset_size: 3101716
- config_name: mlqa.vi.hi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 2738045
num_examples: 1947
- name: validation
num_bytes: 251470
num_examples: 177
download_size: 75719050
dataset_size: 2989515
- config_name: mlqa.zh.ar
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1697005
num_examples: 1912
- name: validation
num_bytes: 171743
num_examples: 188
download_size: 75719050
dataset_size: 1868748
- config_name: mlqa.zh.de
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1356268
num_examples: 1621
- name: validation
num_bytes: 170686
num_examples: 190
download_size: 75719050
dataset_size: 1526954
- config_name: mlqa.zh.vi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1770535
num_examples: 1943
- name: validation
num_bytes: 169651
num_examples: 184
download_size: 75719050
dataset_size: 1940186
- config_name: mlqa.zh.zh
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4324740
num_examples: 5137
- name: validation
num_bytes: 433960
num_examples: 504
download_size: 75719050
dataset_size: 4758700
- config_name: mlqa.zh.en
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4353361
num_examples: 5137
- name: validation
num_bytes: 437016
num_examples: 504
download_size: 75719050
dataset_size: 4790377
- config_name: mlqa.zh.es
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1697983
num_examples: 1947
- name: validation
num_bytes: 134693
num_examples: 161
download_size: 75719050
dataset_size: 1832676
- config_name: mlqa.zh.hi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1547159
num_examples: 1767
- name: validation
num_bytes: 180928
num_examples: 189
download_size: 75719050
dataset_size: 1728087
- config_name: mlqa.en.ar
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 6641971
num_examples: 5335
- name: validation
num_bytes: 621075
num_examples: 517
download_size: 75719050
dataset_size: 7263046
- config_name: mlqa.en.de
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4966262
num_examples: 4517
- name: validation
num_bytes: 584725
num_examples: 512
download_size: 75719050
dataset_size: 5550987
- config_name: mlqa.en.vi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 6958087
num_examples: 5495
- name: validation
num_bytes: 631268
num_examples: 511
download_size: 75719050
dataset_size: 7589355
- config_name: mlqa.en.zh
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 6441614
num_examples: 5137
- name: validation
num_bytes: 598772
num_examples: 504
download_size: 75719050
dataset_size: 7040386
- config_name: mlqa.en.en
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 13787522
num_examples: 11590
- name: validation
num_bytes: 1307399
num_examples: 1148
download_size: 75719050
dataset_size: 15094921
- config_name: mlqa.en.es
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 6074990
num_examples: 5253
- name: validation
num_bytes: 545657
num_examples: 500
download_size: 75719050
dataset_size: 6620647
- config_name: mlqa.en.hi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 6293785
num_examples: 4918
- name: validation
num_bytes: 614223
num_examples: 507
download_size: 75719050
dataset_size: 6908008
- config_name: mlqa.es.ar
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1696778
num_examples: 1978
- name: validation
num_bytes: 145105
num_examples: 161
download_size: 75719050
dataset_size: 1841883
- config_name: mlqa.es.de
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1361983
num_examples: 1776
- name: validation
num_bytes: 139968
num_examples: 196
download_size: 75719050
dataset_size: 1501951
- config_name: mlqa.es.vi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1707141
num_examples: 2018
- name: validation
num_bytes: 172801
num_examples: 189
download_size: 75719050
dataset_size: 1879942
- config_name: mlqa.es.zh
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1635294
num_examples: 1947
- name: validation
num_bytes: 122829
num_examples: 161
download_size: 75719050
dataset_size: 1758123
- config_name: mlqa.es.en
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4249431
num_examples: 5253
- name: validation
num_bytes: 408169
num_examples: 500
download_size: 75719050
dataset_size: 4657600
- config_name: mlqa.es.es
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4281273
num_examples: 5253
- name: validation
num_bytes: 411196
num_examples: 500
download_size: 75719050
dataset_size: 4692469
- config_name: mlqa.es.hi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 1489611
num_examples: 1723
- name: validation
num_bytes: 178003
num_examples: 187
download_size: 75719050
dataset_size: 1667614
- config_name: mlqa.hi.ar
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4374373
num_examples: 1831
- name: validation
num_bytes: 402817
num_examples: 186
download_size: 75719050
dataset_size: 4777190
- config_name: mlqa.hi.de
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 2961556
num_examples: 1430
- name: validation
num_bytes: 294325
num_examples: 163
download_size: 75719050
dataset_size: 3255881
- config_name: mlqa.hi.vi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4664436
num_examples: 1947
- name: validation
num_bytes: 411654
num_examples: 177
download_size: 75719050
dataset_size: 5076090
- config_name: mlqa.hi.zh
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 4281309
num_examples: 1767
- name: validation
num_bytes: 416192
num_examples: 189
download_size: 75719050
dataset_size: 4697501
- config_name: mlqa.hi.en
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 11245629
num_examples: 4918
- name: validation
num_bytes: 1076115
num_examples: 507
download_size: 75719050
dataset_size: 12321744
- config_name: mlqa.hi.es
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 3789337
num_examples: 1723
- name: validation
num_bytes: 412469
num_examples: 187
download_size: 75719050
dataset_size: 4201806
- config_name: mlqa.hi.hi
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 11606982
num_examples: 4918
- name: validation
num_bytes: 1115055
num_examples: 507
download_size: 75719050
dataset_size: 12722037
---
# Dataset Card for "mlqa"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/facebookresearch/MLQA](https://github.com/facebookresearch/MLQA)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 4.15 GB
- **Size of the generated dataset:** 910.01 MB
- **Total amount of disk used:** 5.06 GB
### Dataset Summary
MLQA (MultiLingual Question Answering) is a benchmark dataset for evaluating cross-lingual question answering performance.
MLQA consists of over 5K extractive QA instances (12K in English) in SQuAD format in seven languages - English, Arabic,
German, Spanish, Hindi, Vietnamese and Simplified Chinese. MLQA is highly parallel, with QA instances parallel between
4 different languages on average.
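For example, one cross-lingual configuration can be loaded as follows (a minimal sketch; config names follow the `mlqa.<context language>.<question language>` pattern used in the configuration list above):
```python
from datasets import load_dataset

# English contexts paired with German questions
mlqa_en_de = load_dataset("facebook/mlqa", "mlqa.en.de", split="test")
print(mlqa_en_de[0]["question"])
```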
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
MLQA contains QA instances in 7 languages, English, Arabic, German, Spanish, Hindi, Vietnamese and Simplified Chinese.
## Dataset Structure
### Data Instances
#### mlqa-translate-test.ar
- **Size of downloaded dataset files:** 10.08 MB
- **Size of the generated dataset:** 5.48 MB
- **Total amount of disk used:** 15.56 MB
An example of 'test' looks as follows.
```
```
#### mlqa-translate-test.de
- **Size of downloaded dataset files:** 10.08 MB
- **Size of the generated dataset:** 3.88 MB
- **Total amount of disk used:** 13.96 MB
An example of 'test' looks as follows.
```
```
#### mlqa-translate-test.es
- **Size of downloaded dataset files:** 10.08 MB
- **Size of the generated dataset:** 3.92 MB
- **Total amount of disk used:** 13.99 MB
An example of 'test' looks as follows.
```
```
#### mlqa-translate-test.hi
- **Size of downloaded dataset files:** 10.08 MB
- **Size of the generated dataset:** 4.61 MB
- **Total amount of disk used:** 14.68 MB
An example of 'test' looks as follows.
```
```
#### mlqa-translate-test.vi
- **Size of downloaded dataset files:** 10.08 MB
- **Size of the generated dataset:** 6.00 MB
- **Total amount of disk used:** 16.07 MB
An example of 'test' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### mlqa-translate-test.ar
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `answer_start`: a `int32` feature.
- `text`: a `string` feature.
- `id`: a `string` feature.
#### mlqa-translate-test.de
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `answer_start`: a `int32` feature.
- `text`: a `string` feature.
- `id`: a `string` feature.
#### mlqa-translate-test.es
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `answer_start`: a `int32` feature.
- `text`: a `string` feature.
- `id`: a `string` feature.
#### mlqa-translate-test.hi
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `answer_start`: a `int32` feature.
- `text`: a `string` feature.
- `id`: a `string` feature.
#### mlqa-translate-test.vi
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `answer_start`: a `int32` feature.
- `text`: a `string` feature.
- `id`: a `string` feature.
### Data Splits
| name |test|
|----------------------|---:|
|mlqa-translate-test.ar|5335|
|mlqa-translate-test.de|4517|
|mlqa-translate-test.es|5253|
|mlqa-translate-test.hi|4918|
|mlqa-translate-test.vi|5495|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{lewis2019mlqa,
title = {MLQA: Evaluating Cross-lingual Extractive Question Answering},
author = {Lewis, Patrick and Oguz, Barlas and Rinott, Ruty and Riedel, Sebastian and Schwenk, Holger},
journal = {arXiv preprint arXiv:1910.07475},
year = 2019,
eid = {arXiv: 1910.07475}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@M-Salti](https://github.com/M-Salti), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
lukaemon/bbh | lukaemon | "2023-02-02T01:14:46Z" | 13,197 | 51 | [
"size_categories:1K<n<10K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2023-02-01T07:46:51Z" | ---
dataset_info:
- config_name: boolean_expressions
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 11790
num_examples: 250
download_size: 17172
dataset_size: 11790
- config_name: causal_judgement
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 198021
num_examples: 187
download_size: 202943
dataset_size: 198021
- config_name: date_understanding
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 54666
num_examples: 250
download_size: 61760
dataset_size: 54666
- config_name: disambiguation_qa
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 78620
num_examples: 250
download_size: 85255
dataset_size: 78620
- config_name: dyck_languages
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 38432
num_examples: 250
download_size: 43814
dataset_size: 38432
- config_name: formal_fallacies
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 138224
num_examples: 250
download_size: 145562
dataset_size: 138224
- config_name: geometric_shapes
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 68560
num_examples: 250
download_size: 77242
dataset_size: 68560
- config_name: hyperbaton
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 38574
num_examples: 250
download_size: 44706
dataset_size: 38574
- config_name: logical_deduction_five_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 148595
num_examples: 250
download_size: 155477
dataset_size: 148595
- config_name: logical_deduction_seven_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 191022
num_examples: 250
download_size: 198404
dataset_size: 191022
- config_name: logical_deduction_three_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 105831
num_examples: 250
download_size: 112213
dataset_size: 105831
- config_name: movie_recommendation
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 50985
num_examples: 250
download_size: 57684
dataset_size: 50985
- config_name: multistep_arithmetic_two
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 12943
num_examples: 250
download_size: 18325
dataset_size: 12943
- config_name: navigate
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 49031
num_examples: 250
download_size: 55163
dataset_size: 49031
- config_name: object_counting
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 30508
num_examples: 250
download_size: 35890
dataset_size: 30508
- config_name: penguins_in_a_table
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 70062
num_examples: 146
download_size: 74516
dataset_size: 70062
- config_name: reasoning_about_colored_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 89579
num_examples: 250
download_size: 98694
dataset_size: 89579
- config_name: ruin_names
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 46537
num_examples: 250
download_size: 53178
dataset_size: 46537
- config_name: salient_translation_error_detection
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 277110
num_examples: 250
download_size: 286443
dataset_size: 277110
- config_name: snarks
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 38223
num_examples: 178
download_size: 42646
dataset_size: 38223
- config_name: sports_understanding
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 22723
num_examples: 250
download_size: 28617
dataset_size: 22723
- config_name: temporal_sequences
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 139546
num_examples: 250
download_size: 148176
dataset_size: 139546
- config_name: tracking_shuffled_objects_five_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 162590
num_examples: 250
download_size: 169722
dataset_size: 162590
- config_name: tracking_shuffled_objects_seven_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 207274
num_examples: 250
download_size: 214906
dataset_size: 207274
- config_name: tracking_shuffled_objects_three_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 122104
num_examples: 250
download_size: 128736
dataset_size: 122104
- config_name: web_of_lies
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 47582
num_examples: 250
download_size: 52964
dataset_size: 47582
- config_name: word_sorting
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 60918
num_examples: 250
download_size: 66300
dataset_size: 60918
---
# BIG-bench Hard dataset
homepage: https://github.com/suzgunmirac/BIG-Bench-Hard
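Each configuration above is a single `test` split of `input`/`target` string pairs, so any task can be pulled down with a one-line `load_dataset` call. A minimal Python sketch (`boolean_expressions` is just one of the config names listed in the YAML header):
```
from datasets import load_dataset

# Load one BIG-bench Hard task; every config exposes only a "test" split
# with plain-string "input" and "target" columns.
bbh = load_dataset("lukaemon/bbh", "boolean_expressions", split="test")

print(len(bbh))          # 250 examples, per the dataset_info above
print(bbh[0]["input"])   # the task prompt
print(bbh[0]["target"])  # the expected answer string
```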
```
@article{suzgun2022challenging,
title={Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them},
  author={Suzgun, Mirac and Scales, Nathan and Sch{\"a}rli, Nathanael and Gehrmann, Sebastian and Tay, Yi and Chung, Hyung Won and Chowdhery, Aakanksha and Le, Quoc V and Chi, Ed H and Zhou, Denny and Wei, Jason},
journal={arXiv preprint arXiv:2210.09261},
year={2022}
}
``` |
tau/commonsense_qa | tau | "2024-01-04T07:44:16Z" | 13,163 | 79 | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1811.00937",
"region:us"
] | [
"question-answering"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: commonsenseqa
pretty_name: CommonsenseQA
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: question_concept
dtype: string
- name: choices
sequence:
- name: label
dtype: string
- name: text
dtype: string
- name: answerKey
dtype: string
splits:
- name: train
num_bytes: 2207794
num_examples: 9741
- name: validation
num_bytes: 273848
num_examples: 1221
- name: test
num_bytes: 257842
num_examples: 1140
download_size: 1558570
dataset_size: 2739484
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for "commonsense_qa"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.tau-nlp.org/commonsenseqa
- **Repository:** https://github.com/jonathanherzig/commonsenseqa
- **Paper:** https://arxiv.org/abs/1811.00937
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 4.68 MB
- **Size of the generated dataset:** 2.18 MB
- **Total amount of disk used:** 6.86 MB
### Dataset Summary
CommonsenseQA is a new multiple-choice question answering dataset that requires different types of commonsense knowledge
to predict the correct answers. It contains 12,102 questions with one correct answer and four distractor answers.
The dataset is provided in two major training/validation/testing set splits: the "Random split", which is the main evaluation
split, and the "Question token split"; see the paper for details.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The dataset is in English (`en`).
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 4.68 MB
- **Size of the generated dataset:** 2.18 MB
- **Total amount of disk used:** 6.86 MB
An example of 'train' looks as follows:
```
{'id': '075e483d21c29a511267ef62bedc0461',
'question': 'The sanctions against the school were a punishing blow, and they seemed to what the efforts the school had made to change?',
'question_concept': 'punishing',
'choices': {'label': ['A', 'B', 'C', 'D', 'E'],
'text': ['ignore', 'enforce', 'authoritarian', 'yell at', 'avoid']},
'answerKey': 'A'}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `id` (`str`): Unique ID.
- `question`: a `string` feature.
- `question_concept` (`str`): ConceptNet concept associated to the question.
- `choices`: a dictionary feature containing:
- `label`: a `string` feature.
- `text`: a `string` feature.
- `answerKey`: a `string` feature.
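
Because the gold answer is stored as a label rather than as text, recovering the answer string takes one lookup through the parallel `label`/`text` lists inside `choices`. A minimal Python sketch:
```
from datasets import load_dataset

# Load the default configuration of CommonsenseQA; the validation split
# is used here because its answerKey labels are populated.
cqa = load_dataset("tau/commonsense_qa", split="validation")

ex = cqa[0]
# "choices" holds parallel lists; "answerKey" names the correct label,
# so index into "text" at the position where the label matches.
answer_idx = ex["choices"]["label"].index(ex["answerKey"])
print(ex["question"])
print(ex["choices"]["text"][answer_idx])
```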
### Data Splits
| name | train | validation | test |
|---------|------:|-----------:|-----:|
| default | 9741 | 1221 | 1140 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is licensed under the MIT License.
See: https://github.com/jonathanherzig/commonsenseqa/issues/5
### Citation Information
```
@inproceedings{talmor-etal-2019-commonsenseqa,
title = "{C}ommonsense{QA}: A Question Answering Challenge Targeting Commonsense Knowledge",
author = "Talmor, Alon and
Herzig, Jonathan and
Lourie, Nicholas and
Berant, Jonathan",
booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
month = jun,
year = "2019",
address = "Minneapolis, Minnesota",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N19-1421",
doi = "10.18653/v1/N19-1421",
pages = "4149--4158",
archivePrefix = "arXiv",
eprint = "1811.00937",
primaryClass = "cs",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
ricdomolm/lawma-tasks | ricdomolm | "2024-09-14T16:50:53Z" | 13,160 | 2 | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:feature-extraction",
"task_categories:zero-shot-classification",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2407.16615",
"region:us"
] | [
"text-classification",
"question-answering",
"feature-extraction",
"zero-shot-classification"
] | "2024-07-22T21:51:16Z" | ---
license: mit
configs:
- config_name: sc_adminaction
data_files:
- split: train
path: sc_adminaction/train-*
- split: val
path: sc_adminaction/val-*
- split: test
path: sc_adminaction/test-*
- config_name: sc_adminaction_is
data_files:
- split: train
path: sc_adminaction_is/train-*
- split: val
path: sc_adminaction_is/val-*
- split: test
path: sc_adminaction_is/test-*
- config_name: sc_adminactionstate
data_files:
- split: train
path: sc_adminactionstate/train-*
- split: val
path: sc_adminactionstate/val-*
- split: test
path: sc_adminactionstate/test-*
- config_name: sc_authoritydecision
data_files:
- split: train
path: sc_authoritydecision/train-*
- split: val
path: sc_authoritydecision/val-*
- split: test
path: sc_authoritydecision/test-*
- config_name: sc_casedisposition
data_files:
- split: train
path: sc_casedisposition/train-*
- split: val
path: sc_casedisposition/val-*
- split: test
path: sc_casedisposition/test-*
- config_name: sc_caseorigin
data_files:
- split: train
path: sc_caseorigin/train-*
- split: val
path: sc_caseorigin/val-*
- split: test
path: sc_caseorigin/test-*
- config_name: sc_caseoriginstate
data_files:
- split: train
path: sc_caseoriginstate/train-*
- split: val
path: sc_caseoriginstate/val-*
- split: test
path: sc_caseoriginstate/test-*
- config_name: sc_casesource
data_files:
- split: train
path: sc_casesource/train-*
- split: val
path: sc_casesource/val-*
- split: test
path: sc_casesource/test-*
- config_name: sc_casesourcestate
data_files:
- split: train
path: sc_casesourcestate/train-*
- split: val
path: sc_casesourcestate/val-*
- split: test
path: sc_casesourcestate/test-*
- config_name: sc_certreason
data_files:
- split: train
path: sc_certreason/train-*
- split: val
path: sc_certreason/val-*
- split: test
path: sc_certreason/test-*
- config_name: sc_decisiondirection
data_files:
- split: train
path: sc_decisiondirection/train-*
- split: val
path: sc_decisiondirection/val-*
- split: test
path: sc_decisiondirection/test-*
- config_name: sc_decisiontype
data_files:
- split: train
path: sc_decisiontype/train-*
- split: val
path: sc_decisiontype/val-*
- split: test
path: sc_decisiontype/test-*
- config_name: sc_declarationuncon
data_files:
- split: train
path: sc_declarationuncon/train-*
- split: val
path: sc_declarationuncon/val-*
- split: test
path: sc_declarationuncon/test-*
- config_name: sc_issue_1
data_files:
- split: train
path: sc_issue_1/train-*
- split: val
path: sc_issue_1/val-*
- split: test
path: sc_issue_1/test-*
- config_name: sc_issue_10
data_files:
- split: train
path: sc_issue_10/train-*
- split: val
path: sc_issue_10/val-*
- split: test
path: sc_issue_10/test-*
- config_name: sc_issue_11
data_files:
- split: train
path: sc_issue_11/train-*
- split: val
path: sc_issue_11/val-*
- split: test
path: sc_issue_11/test-*
- config_name: sc_issue_12
data_files:
- split: train
path: sc_issue_12/train-*
- split: val
path: sc_issue_12/val-*
- split: test
path: sc_issue_12/test-*
- config_name: sc_issue_2
data_files:
- split: train
path: sc_issue_2/train-*
- split: val
path: sc_issue_2/val-*
- split: test
path: sc_issue_2/test-*
- config_name: sc_issue_3
data_files:
- split: train
path: sc_issue_3/train-*
- split: val
path: sc_issue_3/val-*
- split: test
path: sc_issue_3/test-*
- config_name: sc_issue_4
data_files:
- split: train
path: sc_issue_4/train-*
- split: val
path: sc_issue_4/val-*
- split: test
path: sc_issue_4/test-*
- config_name: sc_issue_5
data_files:
- split: train
path: sc_issue_5/train-*
- split: val
path: sc_issue_5/val-*
- split: test
path: sc_issue_5/test-*
- config_name: sc_issue_6
data_files:
- split: train
path: sc_issue_6/train-*
- split: val
path: sc_issue_6/val-*
- split: test
path: sc_issue_6/test-*
- config_name: sc_issue_7
data_files:
- split: train
path: sc_issue_7/train-*
- split: val
path: sc_issue_7/val-*
- split: test
path: sc_issue_7/test-*
- config_name: sc_issue_8
data_files:
- split: train
path: sc_issue_8/train-*
- split: val
path: sc_issue_8/val-*
- split: test
path: sc_issue_8/test-*
- config_name: sc_issue_9
data_files:
- split: train
path: sc_issue_9/train-*
- split: val
path: sc_issue_9/val-*
- split: test
path: sc_issue_9/test-*
- config_name: sc_issuearea
data_files:
- split: train
path: sc_issuearea/train-*
- split: val
path: sc_issuearea/val-*
- split: test
path: sc_issuearea/test-*
- config_name: sc_jurisdiction
data_files:
- split: train
path: sc_jurisdiction/train-*
- split: val
path: sc_jurisdiction/val-*
- split: test
path: sc_jurisdiction/test-*
- config_name: sc_lcdisagreement
data_files:
- split: train
path: sc_lcdisagreement/train-*
- split: val
path: sc_lcdisagreement/val-*
- split: test
path: sc_lcdisagreement/test-*
- config_name: sc_lcdisposition
data_files:
- split: train
path: sc_lcdisposition/train-*
- split: val
path: sc_lcdisposition/val-*
- split: test
path: sc_lcdisposition/test-*
- config_name: sc_lcdispositiondirection
data_files:
- split: train
path: sc_lcdispositiondirection/train-*
- split: val
path: sc_lcdispositiondirection/val-*
- split: test
path: sc_lcdispositiondirection/test-*
- config_name: sc_partywinning
data_files:
- split: train
path: sc_partywinning/train-*
- split: val
path: sc_partywinning/val-*
- split: test
path: sc_partywinning/test-*
- config_name: sc_petitioner
data_files:
- split: train
path: sc_petitioner/train-*
- split: val
path: sc_petitioner/val-*
- split: test
path: sc_petitioner/test-*
- config_name: sc_petitionerstate
data_files:
- split: train
path: sc_petitionerstate/train-*
- split: val
path: sc_petitionerstate/val-*
- split: test
path: sc_petitionerstate/test-*
- config_name: sc_precedentalteration
data_files:
- split: train
path: sc_precedentalteration/train-*
- split: val
path: sc_precedentalteration/val-*
- split: test
path: sc_precedentalteration/test-*
- config_name: sc_respondent
data_files:
- split: train
path: sc_respondent/train-*
- split: val
path: sc_respondent/val-*
- split: test
path: sc_respondent/test-*
- config_name: sc_respondentstate
data_files:
- split: train
path: sc_respondentstate/train-*
- split: val
path: sc_respondentstate/val-*
- split: test
path: sc_respondentstate/test-*
- config_name: sc_threejudgefdc
data_files:
- split: train
path: sc_threejudgefdc/train-*
- split: val
path: sc_threejudgefdc/val-*
- split: test
path: sc_threejudgefdc/test-*
- config_name: songer_abusedis
data_files:
- split: train
path: songer_abusedis/train-*
- split: val
path: songer_abusedis/val-*
- split: test
path: songer_abusedis/test-*
- config_name: songer_adminrev
data_files:
- split: train
path: songer_adminrev/train-*
- split: val
path: songer_adminrev/val-*
- split: test
path: songer_adminrev/test-*
- config_name: songer_agen_acq
data_files:
- split: train
path: songer_agen_acq/train-*
- split: val
path: songer_agen_acq/val-*
- split: test
path: songer_agen_acq/test-*
- config_name: songer_alj
data_files:
- split: train
path: songer_alj/train-*
- split: val
path: songer_alj/val-*
- split: test
path: songer_alj/test-*
- config_name: songer_altdisp
data_files:
- split: train
path: songer_altdisp/train-*
- split: val
path: songer_altdisp/val-*
- split: test
path: songer_altdisp/test-*
- config_name: songer_amicus
data_files:
- split: train
path: songer_amicus/train-*
- split: val
path: songer_amicus/val-*
- split: test
path: songer_amicus/test-*
- config_name: songer_app_stid
data_files:
- split: train
path: songer_app_stid/train-*
- split: val
path: songer_app_stid/val-*
- split: test
path: songer_app_stid/test-*
- config_name: songer_appbus
data_files:
- split: train
path: songer_appbus/train-*
- split: val
path: songer_appbus/val-*
- split: test
path: songer_appbus/test-*
- config_name: songer_appel1_1_2
data_files:
- split: train
path: songer_appel1_1_2/train-*
- split: val
path: songer_appel1_1_2/val-*
- split: test
path: songer_appel1_1_2/test-*
- config_name: songer_appel1_1_3
data_files:
- split: train
path: songer_appel1_1_3/train-*
- split: val
path: songer_appel1_1_3/val-*
- split: test
path: songer_appel1_1_3/test-*
- config_name: songer_appel1_1_4
data_files:
- split: train
path: songer_appel1_1_4/train-*
- split: val
path: songer_appel1_1_4/val-*
- split: test
path: songer_appel1_1_4/test-*
- config_name: songer_appel1_2_2
data_files:
- split: train
path: songer_appel1_2_2/train-*
- split: val
path: songer_appel1_2_2/val-*
- split: test
path: songer_appel1_2_2/test-*
- config_name: songer_appel1_2_3
data_files:
- split: train
path: songer_appel1_2_3/train-*
- split: val
path: songer_appel1_2_3/val-*
- split: test
path: songer_appel1_2_3/test-*
- config_name: songer_appel1_3_2
data_files:
- split: train
path: songer_appel1_3_2/train-*
- split: val
path: songer_appel1_3_2/val-*
- split: test
path: songer_appel1_3_2/test-*
- config_name: songer_appel1_3_3
data_files:
- split: train
path: songer_appel1_3_3/train-*
- split: val
path: songer_appel1_3_3/val-*
- split: test
path: songer_appel1_3_3/test-*
- config_name: songer_appel1_4_2
data_files:
- split: train
path: songer_appel1_4_2/train-*
- split: val
path: songer_appel1_4_2/val-*
- split: test
path: songer_appel1_4_2/test-*
- config_name: songer_appel1_4_3
data_files:
- split: train
path: songer_appel1_4_3/train-*
- split: val
path: songer_appel1_4_3/val-*
- split: test
path: songer_appel1_4_3/test-*
- config_name: songer_appel1_5_2
data_files:
- split: train
path: songer_appel1_5_2/train-*
- split: val
path: songer_appel1_5_2/val-*
- split: test
path: songer_appel1_5_2/test-*
- config_name: songer_appel1_5_3
data_files:
- split: train
path: songer_appel1_5_3/train-*
- split: val
path: songer_appel1_5_3/val-*
- split: test
path: songer_appel1_5_3/test-*
- config_name: songer_appel1_7_2
data_files:
- split: train
path: songer_appel1_7_2/train-*
- split: val
path: songer_appel1_7_2/val-*
- split: test
path: songer_appel1_7_2/test-*
- config_name: songer_appel1_7_3
data_files:
- split: train
path: songer_appel1_7_3/train-*
- split: val
path: songer_appel1_7_3/val-*
- split: test
path: songer_appel1_7_3/test-*
- config_name: songer_appel1_7_4
data_files:
- split: train
path: songer_appel1_7_4/train-*
- split: val
path: songer_appel1_7_4/val-*
- split: test
path: songer_appel1_7_4/test-*
- config_name: songer_appel1_7_5
data_files:
- split: train
path: songer_appel1_7_5/train-*
- split: val
path: songer_appel1_7_5/val-*
- split: test
path: songer_appel1_7_5/test-*
- config_name: songer_appel1_8_2
data_files:
- split: train
path: songer_appel1_8_2/train-*
- split: val
path: songer_appel1_8_2/val-*
- split: test
path: songer_appel1_8_2/test-*
- config_name: songer_appel1_8_3
data_files:
- split: train
path: songer_appel1_8_3/train-*
- split: val
path: songer_appel1_8_3/val-*
- split: test
path: songer_appel1_8_3/test-*
- config_name: songer_appel2_1_2
data_files:
- split: train
path: songer_appel2_1_2/train-*
- split: val
path: songer_appel2_1_2/val-*
- split: test
path: songer_appel2_1_2/test-*
- config_name: songer_appel2_1_3
data_files:
- split: train
path: songer_appel2_1_3/train-*
- split: val
path: songer_appel2_1_3/val-*
- split: test
path: songer_appel2_1_3/test-*
- config_name: songer_appel2_1_4
data_files:
- split: train
path: songer_appel2_1_4/train-*
- split: val
path: songer_appel2_1_4/val-*
- split: test
path: songer_appel2_1_4/test-*
- config_name: songer_appel2_2_2
data_files:
- split: train
path: songer_appel2_2_2/train-*
- split: val
path: songer_appel2_2_2/val-*
- split: test
path: songer_appel2_2_2/test-*
- config_name: songer_appel2_2_3
data_files:
- split: train
path: songer_appel2_2_3/train-*
- split: val
path: songer_appel2_2_3/val-*
- split: test
path: songer_appel2_2_3/test-*
- config_name: songer_appel2_3_2
data_files:
- split: train
path: songer_appel2_3_2/train-*
- split: val
path: songer_appel2_3_2/val-*
- split: test
path: songer_appel2_3_2/test-*
- config_name: songer_appel2_3_3
data_files:
- split: train
path: songer_appel2_3_3/train-*
- split: val
path: songer_appel2_3_3/val-*
- split: test
path: songer_appel2_3_3/test-*
- config_name: songer_appel2_4_2
data_files:
- split: train
path: songer_appel2_4_2/train-*
- split: val
path: songer_appel2_4_2/val-*
- split: test
path: songer_appel2_4_2/test-*
- config_name: songer_appel2_4_3
data_files:
- split: train
path: songer_appel2_4_3/train-*
- split: val
path: songer_appel2_4_3/val-*
- split: test
path: songer_appel2_4_3/test-*
- config_name: songer_appel2_5_2
data_files:
- split: train
path: songer_appel2_5_2/train-*
- split: val
path: songer_appel2_5_2/val-*
- split: test
path: songer_appel2_5_2/test-*
- config_name: songer_appel2_5_3
data_files:
- split: train
path: songer_appel2_5_3/train-*
- split: val
path: songer_appel2_5_3/val-*
- split: test
path: songer_appel2_5_3/test-*
- config_name: songer_appel2_7_2
data_files:
- split: train
path: songer_appel2_7_2/train-*
- split: val
path: songer_appel2_7_2/val-*
- split: test
path: songer_appel2_7_2/test-*
- config_name: songer_appel2_7_3
data_files:
- split: train
path: songer_appel2_7_3/train-*
- split: val
path: songer_appel2_7_3/val-*
- split: test
path: songer_appel2_7_3/test-*
- config_name: songer_appel2_7_4
data_files:
- split: train
path: songer_appel2_7_4/train-*
- split: val
path: songer_appel2_7_4/val-*
- split: test
path: songer_appel2_7_4/test-*
- config_name: songer_appel2_7_5
data_files:
- split: train
path: songer_appel2_7_5/train-*
- split: val
path: songer_appel2_7_5/val-*
- split: test
path: songer_appel2_7_5/test-*
- config_name: songer_appel2_8_2
data_files:
- split: train
path: songer_appel2_8_2/train-*
- split: val
path: songer_appel2_8_2/val-*
- split: test
path: songer_appel2_8_2/test-*
- config_name: songer_appel2_8_3
data_files:
- split: train
path: songer_appel2_8_3/train-*
- split: val
path: songer_appel2_8_3/val-*
- split: test
path: songer_appel2_8_3/test-*
- config_name: songer_appfed
data_files:
- split: train
path: songer_appfed/train-*
- split: val
path: songer_appfed/val-*
- split: test
path: songer_appfed/test-*
- config_name: songer_appfiduc
data_files:
- split: train
path: songer_appfiduc/train-*
- split: val
path: songer_appfiduc/val-*
- split: test
path: songer_appfiduc/test-*
- config_name: songer_applfrom
data_files:
- split: train
path: songer_applfrom/train-*
- split: val
path: songer_applfrom/val-*
- split: test
path: songer_applfrom/test-*
- config_name: songer_appnatpr
data_files:
- split: train
path: songer_appnatpr/train-*
- split: val
path: songer_appnatpr/val-*
- split: test
path: songer_appnatpr/test-*
- config_name: songer_appnonp
data_files:
- split: train
path: songer_appnonp/train-*
- split: val
path: songer_appnonp/val-*
- split: test
path: songer_appnonp/test-*
- config_name: songer_appstate
data_files:
- split: train
path: songer_appstate/train-*
- split: val
path: songer_appstate/val-*
- split: test
path: songer_appstate/test-*
- config_name: songer_appsubst
data_files:
- split: train
path: songer_appsubst/train-*
- split: val
path: songer_appsubst/val-*
- split: test
path: songer_appsubst/test-*
- config_name: songer_attyfee
data_files:
- split: train
path: songer_attyfee/train-*
- split: val
path: songer_attyfee/val-*
- split: test
path: songer_attyfee/test-*
- config_name: songer_bank_app1
data_files:
- split: train
path: songer_bank_app1/train-*
- split: val
path: songer_bank_app1/val-*
- split: test
path: songer_bank_app1/test-*
- config_name: songer_bank_app2
data_files:
- split: train
path: songer_bank_app2/train-*
- split: val
path: songer_bank_app2/val-*
- split: test
path: songer_bank_app2/test-*
- config_name: songer_bank_r1
data_files:
- split: train
path: songer_bank_r1/train-*
- split: val
path: songer_bank_r1/val-*
- split: test
path: songer_bank_r1/test-*
- config_name: songer_bank_r2
data_files:
- split: train
path: songer_bank_r2/train-*
- split: val
path: songer_bank_r2/val-*
- split: test
path: songer_bank_r2/test-*
- config_name: songer_capric
data_files:
- split: train
path: songer_capric/train-*
- split: val
path: songer_capric/val-*
- split: test
path: songer_capric/test-*
- config_name: songer_casetyp1_1-2
data_files:
- split: train
path: songer_casetyp1_1-2/train-*
- split: val
path: songer_casetyp1_1-2/val-*
- split: test
path: songer_casetyp1_1-2/test-*
- config_name: songer_casetyp1_1-3-1
data_files:
- split: train
path: songer_casetyp1_1-3-1/train-*
- split: val
path: songer_casetyp1_1-3-1/val-*
- split: test
path: songer_casetyp1_1-3-1/test-*
- config_name: songer_casetyp1_1-3-2
data_files:
- split: train
path: songer_casetyp1_1-3-2/train-*
- split: val
path: songer_casetyp1_1-3-2/val-*
- split: test
path: songer_casetyp1_1-3-2/test-*
- config_name: songer_casetyp1_1-3-3
data_files:
- split: train
path: songer_casetyp1_1-3-3/train-*
- split: val
path: songer_casetyp1_1-3-3/val-*
- split: test
path: songer_casetyp1_1-3-3/test-*
- config_name: songer_casetyp1_2-2
data_files:
- split: train
path: songer_casetyp1_2-2/train-*
- split: val
path: songer_casetyp1_2-2/val-*
- split: test
path: songer_casetyp1_2-2/test-*
- config_name: songer_casetyp1_2-3-1
data_files:
- split: train
path: songer_casetyp1_2-3-1/train-*
- split: val
path: songer_casetyp1_2-3-1/val-*
- split: test
path: songer_casetyp1_2-3-1/test-*
- config_name: songer_casetyp1_2-3-2
data_files:
- split: train
path: songer_casetyp1_2-3-2/train-*
- split: val
path: songer_casetyp1_2-3-2/val-*
- split: test
path: songer_casetyp1_2-3-2/test-*
- config_name: songer_casetyp1_2-3-3
data_files:
- split: train
path: songer_casetyp1_2-3-3/train-*
- split: val
path: songer_casetyp1_2-3-3/val-*
- split: test
path: songer_casetyp1_2-3-3/test-*
- config_name: songer_casetyp1_3-2
data_files:
- split: train
path: songer_casetyp1_3-2/train-*
- split: val
path: songer_casetyp1_3-2/val-*
- split: test
path: songer_casetyp1_3-2/test-*
- config_name: songer_casetyp1_3-3-1
data_files:
- split: train
path: songer_casetyp1_3-3-1/train-*
- split: val
path: songer_casetyp1_3-3-1/val-*
- split: test
path: songer_casetyp1_3-3-1/test-*
- config_name: songer_casetyp1_3-3-2
data_files:
- split: train
path: songer_casetyp1_3-3-2/train-*
- split: val
path: songer_casetyp1_3-3-2/val-*
- split: test
path: songer_casetyp1_3-3-2/test-*
- config_name: songer_casetyp1_4-3
data_files:
- split: train
path: songer_casetyp1_4-3/train-*
- split: val
path: songer_casetyp1_4-3/val-*
- split: test
path: songer_casetyp1_4-3/test-*
- config_name: songer_casetyp1_5-3
data_files:
- split: train
path: songer_casetyp1_5-3/train-*
- split: val
path: songer_casetyp1_5-3/val-*
- split: test
path: songer_casetyp1_5-3/test-*
- config_name: songer_casetyp1_6-3
data_files:
- split: train
path: songer_casetyp1_6-3/train-*
- split: val
path: songer_casetyp1_6-3/val-*
- split: test
path: songer_casetyp1_6-3/test-*
- config_name: songer_casetyp1_7-2
data_files:
- split: train
path: songer_casetyp1_7-2/train-*
- split: val
path: songer_casetyp1_7-2/val-*
- split: test
path: songer_casetyp1_7-2/test-*
- config_name: songer_casetyp1_7-3-1
data_files:
- split: train
path: songer_casetyp1_7-3-1/train-*
- split: val
path: songer_casetyp1_7-3-1/val-*
- split: test
path: songer_casetyp1_7-3-1/test-*
- config_name: songer_casetyp1_7-3-2
data_files:
- split: train
path: songer_casetyp1_7-3-2/train-*
- split: val
path: songer_casetyp1_7-3-2/val-*
- split: test
path: songer_casetyp1_7-3-2/test-*
- config_name: songer_casetyp1_7-3-3
data_files:
- split: train
path: songer_casetyp1_7-3-3/train-*
- split: val
path: songer_casetyp1_7-3-3/val-*
- split: test
path: songer_casetyp1_7-3-3/test-*
- config_name: songer_casetyp1_7-3-4
data_files:
- split: train
path: songer_casetyp1_7-3-4/train-*
- split: val
path: songer_casetyp1_7-3-4/val-*
- split: test
path: songer_casetyp1_7-3-4/test-*
- config_name: songer_casetyp1_7-3-5
data_files:
- split: train
path: songer_casetyp1_7-3-5/train-*
- split: val
path: songer_casetyp1_7-3-5/val-*
- split: test
path: songer_casetyp1_7-3-5/test-*
- config_name: songer_casetyp1_7-3-6
data_files:
- split: train
path: songer_casetyp1_7-3-6/train-*
- split: val
path: songer_casetyp1_7-3-6/val-*
- split: test
path: songer_casetyp1_7-3-6/test-*
- config_name: songer_casetyp1_9-3
data_files:
- split: train
path: songer_casetyp1_9-3/train-*
- split: val
path: songer_casetyp1_9-3/val-*
- split: test
path: songer_casetyp1_9-3/test-*
- config_name: songer_casetyp2_geniss
data_files:
- split: train
path: songer_casetyp2_geniss/train-*
- split: val
path: songer_casetyp2_geniss/val-*
- split: test
path: songer_casetyp2_geniss/test-*
- config_name: songer_circuit
data_files:
- split: train
path: songer_circuit/train-*
- split: val
path: songer_circuit/val-*
- split: test
path: songer_circuit/test-*
- config_name: songer_civproc1
data_files:
- split: train
path: songer_civproc1/train-*
- split: val
path: songer_civproc1/val-*
- split: test
path: songer_civproc1/test-*
- config_name: songer_civproc2
data_files:
- split: train
path: songer_civproc2/train-*
- split: val
path: songer_civproc2/val-*
- split: test
path: songer_civproc2/test-*
- config_name: songer_classact
data_files:
- split: train
path: songer_classact/train-*
- split: val
path: songer_classact/val-*
- split: test
path: songer_classact/test-*
- config_name: songer_comment
data_files:
- split: train
path: songer_comment/train-*
- split: val
path: songer_comment/val-*
- split: test
path: songer_comment/test-*
- config_name: songer_concur
data_files:
- split: train
path: songer_concur/train-*
- split: val
path: songer_concur/val-*
- split: test
path: songer_concur/test-*
- config_name: songer_confess
data_files:
- split: train
path: songer_confess/train-*
- split: val
path: songer_confess/val-*
- split: test
path: songer_confess/test-*
- config_name: songer_const1
data_files:
- split: train
path: songer_const1/train-*
- split: val
path: songer_const1/val-*
- split: test
path: songer_const1/test-*
- config_name: songer_const2
data_files:
- split: train
path: songer_const2/train-*
- split: val
path: songer_const2/val-*
- split: test
path: songer_const2/test-*
- config_name: songer_constit
data_files:
- split: train
path: songer_constit/train-*
- split: val
path: songer_constit/val-*
- split: test
path: songer_constit/test-*
- config_name: songer_counsel
data_files:
- split: train
path: songer_counsel/train-*
- split: val
path: songer_counsel/val-*
- split: test
path: songer_counsel/test-*
- config_name: songer_counsel1
data_files:
- split: train
path: songer_counsel1/train-*
- split: val
path: songer_counsel1/val-*
- split: test
path: songer_counsel1/test-*
- config_name: songer_counsel2
data_files:
- split: train
path: songer_counsel2/train-*
- split: val
path: songer_counsel2/val-*
- split: test
path: songer_counsel2/test-*
- config_name: songer_crmproc1
data_files:
- split: train
path: songer_crmproc1/train-*
- split: val
path: songer_crmproc1/val-*
- split: test
path: songer_crmproc1/test-*
- config_name: songer_crmproc2
data_files:
- split: train
path: songer_crmproc2/train-*
- split: val
path: songer_crmproc2/val-*
- split: test
path: songer_crmproc2/test-*
- config_name: songer_crossapp
data_files:
- split: train
path: songer_crossapp/train-*
- split: val
path: songer_crossapp/val-*
- split: test
path: songer_crossapp/test-*
- config_name: songer_deathpen
data_files:
- split: train
path: songer_deathpen/train-*
- split: val
path: songer_deathpen/val-*
- split: test
path: songer_deathpen/test-*
- config_name: songer_decuncon
data_files:
- split: train
path: songer_decuncon/train-*
- split: val
path: songer_decuncon/val-*
- split: test
path: songer_decuncon/test-*
- config_name: songer_denovo
data_files:
- split: train
path: songer_denovo/train-*
- split: val
path: songer_denovo/val-*
- split: test
path: songer_denovo/test-*
- config_name: songer_direct1
data_files:
- split: train
path: songer_direct1/train-*
- split: val
path: songer_direct1/val-*
- split: test
path: songer_direct1/test-*
- config_name: songer_direct2
data_files:
- split: train
path: songer_direct2/train-*
- split: val
path: songer_direct2/val-*
- split: test
path: songer_direct2/test-*
- config_name: songer_discover
data_files:
- split: train
path: songer_discover/train-*
- split: val
path: songer_discover/val-*
- split: test
path: songer_discover/test-*
- config_name: songer_dissent
data_files:
- split: train
path: songer_dissent/train-*
- split: val
path: songer_dissent/val-*
- split: test
path: songer_dissent/test-*
- config_name: songer_district
data_files:
- split: train
path: songer_district/train-*
- split: val
path: songer_district/val-*
- split: test
path: songer_district/test-*
- config_name: songer_diverse
data_files:
- split: train
path: songer_diverse/train-*
- split: val
path: songer_diverse/val-*
- split: test
path: songer_diverse/test-*
- config_name: songer_dueproc
data_files:
- split: train
path: songer_dueproc/train-*
- split: val
path: songer_dueproc/val-*
- split: test
path: songer_dueproc/test-*
- config_name: songer_entrap
data_files:
- split: train
path: songer_entrap/train-*
- split: val
path: songer_entrap/val-*
- split: test
path: songer_entrap/test-*
- config_name: songer_erron
data_files:
- split: train
path: songer_erron/train-*
- split: val
path: songer_erron/val-*
- split: test
path: songer_erron/test-*
- config_name: songer_execord
data_files:
- split: train
path: songer_execord/train-*
- split: val
path: songer_execord/val-*
- split: test
path: songer_execord/test-*
- config_name: songer_exhaust
data_files:
- split: train
path: songer_exhaust/train-*
- split: val
path: songer_exhaust/val-*
- split: test
path: songer_exhaust/test-*
- config_name: songer_fedlaw
data_files:
- split: train
path: songer_fedlaw/train-*
- split: val
path: songer_fedlaw/val-*
- split: test
path: songer_fedlaw/test-*
- config_name: songer_fedvst
data_files:
- split: train
path: songer_fedvst/train-*
- split: val
path: songer_fedvst/val-*
- split: test
path: songer_fedvst/test-*
- config_name: songer_foreign
data_files:
- split: train
path: songer_foreign/train-*
- split: val
path: songer_foreign/val-*
- split: test
path: songer_foreign/test-*
- config_name: songer_freeinfo
data_files:
- split: train
path: songer_freeinfo/train-*
- split: val
path: songer_freeinfo/val-*
- split: test
path: songer_freeinfo/test-*
- config_name: songer_frivapp
data_files:
- split: train
path: songer_frivapp/train-*
- split: val
path: songer_frivapp/val-*
- split: test
path: songer_frivapp/test-*
- config_name: songer_frivol
data_files:
- split: train
path: songer_frivol/train-*
- split: val
path: songer_frivol/val-*
- split: test
path: songer_frivol/test-*
- config_name: songer_genapel1
data_files:
- split: train
path: songer_genapel1/train-*
- split: val
path: songer_genapel1/val-*
- split: test
path: songer_genapel1/test-*
- config_name: songer_genapel2
data_files:
- split: train
path: songer_genapel2/train-*
- split: val
path: songer_genapel2/val-*
- split: test
path: songer_genapel2/test-*
- config_name: songer_geniss
data_files:
- split: train
path: songer_geniss/train-*
- split: val
path: songer_geniss/val-*
- split: test
path: songer_geniss/test-*
- config_name: songer_genresp1
data_files:
- split: train
path: songer_genresp1/train-*
- split: val
path: songer_genresp1/val-*
- split: test
path: songer_genresp1/test-*
- config_name: songer_genresp2
data_files:
- split: train
path: songer_genresp2/train-*
- split: val
path: songer_genresp2/val-*
- split: test
path: songer_genresp2/test-*
- config_name: songer_genstand
data_files:
- split: train
path: songer_genstand/train-*
- split: val
path: songer_genstand/val-*
- split: test
path: songer_genstand/test-*
- config_name: songer_habeas
data_files:
- split: train
path: songer_habeas/train-*
- split: val
path: songer_habeas/val-*
- split: test
path: songer_habeas/test-*
- config_name: songer_immunity
data_files:
- split: train
path: songer_immunity/train-*
- split: val
path: songer_immunity/val-*
- split: test
path: songer_immunity/test-*
- config_name: songer_improper
data_files:
- split: train
path: songer_improper/train-*
- split: val
path: songer_improper/val-*
- split: test
path: songer_improper/test-*
- config_name: songer_indict
data_files:
- split: train
path: songer_indict/train-*
- split: val
path: songer_indict/val-*
- split: test
path: songer_indict/test-*
- config_name: songer_indigent
data_files:
- split: train
path: songer_indigent/train-*
- split: val
path: songer_indigent/val-*
- split: test
path: songer_indigent/test-*
- config_name: songer_initiate
data_files:
- split: train
path: songer_initiate/train-*
- split: val
path: songer_initiate/val-*
- split: test
path: songer_initiate/test-*
- config_name: songer_injunct
data_files:
- split: train
path: songer_injunct/train-*
- split: val
path: songer_injunct/val-*
- split: test
path: songer_injunct/test-*
- config_name: songer_insane
data_files:
- split: train
path: songer_insane/train-*
- split: val
path: songer_insane/val-*
- split: test
path: songer_insane/test-*
- config_name: songer_int_law
data_files:
- split: train
path: songer_int_law/train-*
- split: val
path: songer_int_law/val-*
- split: test
path: songer_int_law/test-*
- config_name: songer_interven
data_files:
- split: train
path: songer_interven/train-*
- split: val
path: songer_interven/val-*
- split: test
path: songer_interven/test-*
- config_name: songer_judgdisc
data_files:
- split: train
path: songer_judgdisc/train-*
- split: val
path: songer_judgdisc/val-*
- split: test
path: songer_judgdisc/test-*
- config_name: songer_judrev
data_files:
- split: train
path: songer_judrev/train-*
- split: val
path: songer_judrev/val-*
- split: test
path: songer_judrev/test-*
- config_name: songer_jurisdiction
data_files:
- split: train
path: songer_jurisdiction/train-*
- split: val
path: songer_jurisdiction/val-*
- split: test
path: songer_jurisdiction/test-*
- config_name: songer_juryinst
data_files:
- split: train
path: songer_juryinst/train-*
- split: val
path: songer_juryinst/val-*
- split: test
path: songer_juryinst/test-*
- config_name: songer_late
data_files:
- split: train
path: songer_late/train-*
- split: val
path: songer_late/val-*
- split: test
path: songer_late/test-*
- config_name: songer_majvotes
data_files:
- split: train
path: songer_majvotes/train-*
- split: val
path: songer_majvotes/val-*
- split: test
path: songer_majvotes/test-*
- config_name: songer_method
data_files:
- split: train
path: songer_method/train-*
- split: val
path: songer_method/val-*
- split: test
path: songer_method/test-*
- config_name: songer_mootness
data_files:
- split: train
path: songer_mootness/train-*
- split: val
path: songer_mootness/val-*
- split: test
path: songer_mootness/test-*
- config_name: songer_notice
data_files:
- split: train
path: songer_notice/train-*
- split: val
path: songer_notice/val-*
- split: test
path: songer_notice/test-*
- config_name: songer_numappel
data_files:
- split: train
path: songer_numappel/train-*
- split: val
path: songer_numappel/val-*
- split: test
path: songer_numappel/test-*
- config_name: songer_numresp
data_files:
- split: train
path: songer_numresp/train-*
- split: val
path: songer_numresp/val-*
- split: test
path: songer_numresp/test-*
- config_name: songer_opinstat
data_files:
- split: train
path: songer_opinstat/train-*
- split: val
path: songer_opinstat/val-*
- split: test
path: songer_opinstat/test-*
- config_name: songer_origin
data_files:
- split: train
path: songer_origin/train-*
- split: val
path: songer_origin/val-*
- split: test
path: songer_origin/test-*
- config_name: songer_othadmis
data_files:
- split: train
path: songer_othadmis/train-*
- split: val
path: songer_othadmis/val-*
- split: test
path: songer_othadmis/test-*
- config_name: songer_othappth
data_files:
- split: train
path: songer_othappth/train-*
- split: val
path: songer_othappth/val-*
- split: test
path: songer_othappth/test-*
- config_name: songer_othcrim
data_files:
- split: train
path: songer_othcrim/train-*
- split: val
path: songer_othcrim/val-*
- split: test
path: songer_othcrim/test-*
- config_name: songer_othjury
data_files:
- split: train
path: songer_othjury/train-*
- split: val
path: songer_othjury/val-*
- split: test
path: songer_othjury/test-*
- config_name: songer_oththres
data_files:
- split: train
path: songer_oththres/train-*
- split: val
path: songer_oththres/val-*
- split: test
path: songer_oththres/test-*
- config_name: songer_plea
data_files:
- split: train
path: songer_plea/train-*
- split: val
path: songer_plea/val-*
- split: test
path: songer_plea/test-*
- config_name: songer_polquest
data_files:
- split: train
path: songer_polquest/train-*
- split: val
path: songer_polquest/val-*
- split: test
path: songer_polquest/test-*
- config_name: songer_post_trl
data_files:
- split: train
path: songer_post_trl/train-*
- split: val
path: songer_post_trl/val-*
- split: test
path: songer_post_trl/test-*
- config_name: songer_prejud
data_files:
- split: train
path: songer_prejud/train-*
- split: val
path: songer_prejud/val-*
- split: test
path: songer_prejud/test-*
- config_name: songer_pretrial
data_files:
- split: train
path: songer_pretrial/train-*
- split: val
path: songer_pretrial/val-*
- split: test
path: songer_pretrial/test-*
- config_name: songer_procdis
data_files:
- split: train
path: songer_procdis/train-*
- split: val
path: songer_procdis/val-*
- split: test
path: songer_procdis/test-*
- config_name: songer_procedur
data_files:
- split: train
path: songer_procedur/train-*
- split: val
path: songer_procedur/val-*
- split: test
path: songer_procedur/test-*
- config_name: songer_r_bus
data_files:
- split: train
path: songer_r_bus/train-*
- split: val
path: songer_r_bus/val-*
- split: test
path: songer_r_bus/test-*
- config_name: songer_r_fed
data_files:
- split: train
path: songer_r_fed/train-*
- split: val
path: songer_r_fed/val-*
- split: test
path: songer_r_fed/test-*
- config_name: songer_r_fiduc
data_files:
- split: train
path: songer_r_fiduc/train-*
- split: val
path: songer_r_fiduc/val-*
- split: test
path: songer_r_fiduc/test-*
- config_name: songer_r_natpr
data_files:
- split: train
path: songer_r_natpr/train-*
- split: val
path: songer_r_natpr/val-*
- split: test
path: songer_r_natpr/test-*
- config_name: songer_r_nonp
data_files:
- split: train
path: songer_r_nonp/train-*
- split: val
path: songer_r_nonp/val-*
- split: test
path: songer_r_nonp/test-*
- config_name: songer_r_state
data_files:
- split: train
path: songer_r_state/train-*
- split: val
path: songer_r_state/val-*
- split: test
path: songer_r_state/test-*
- config_name: songer_r_stid
data_files:
- split: train
path: songer_r_stid/train-*
- split: val
path: songer_r_stid/val-*
- split: test
path: songer_r_stid/test-*
- config_name: songer_r_subst
data_files:
- split: train
path: songer_r_subst/train-*
- split: val
path: songer_r_subst/val-*
- split: test
path: songer_r_subst/test-*
- config_name: songer_realapp
data_files:
- split: train
path: songer_realapp/train-*
- split: val
path: songer_realapp/val-*
- split: test
path: songer_realapp/test-*
- config_name: songer_realresp
data_files:
- split: train
path: songer_realresp/train-*
- split: val
path: songer_realresp/val-*
- split: test
path: songer_realresp/test-*
- config_name: songer_record
data_files:
- split: train
path: songer_record/train-*
- split: val
path: songer_record/val-*
- split: test
path: songer_record/test-*
- config_name: songer_respond1_1_2
data_files:
- split: train
path: songer_respond1_1_2/train-*
- split: val
path: songer_respond1_1_2/val-*
- split: test
path: songer_respond1_1_2/test-*
- config_name: songer_respond1_1_3
data_files:
- split: train
path: songer_respond1_1_3/train-*
- split: val
path: songer_respond1_1_3/val-*
- split: test
path: songer_respond1_1_3/test-*
- config_name: songer_respond1_1_4
data_files:
- split: train
path: songer_respond1_1_4/train-*
- split: val
path: songer_respond1_1_4/val-*
- split: test
path: songer_respond1_1_4/test-*
- config_name: songer_respond1_2_2
data_files:
- split: train
path: songer_respond1_2_2/train-*
- split: val
path: songer_respond1_2_2/val-*
- split: test
path: songer_respond1_2_2/test-*
- config_name: songer_respond1_2_3
data_files:
- split: train
path: songer_respond1_2_3/train-*
- split: val
path: songer_respond1_2_3/val-*
- split: test
path: songer_respond1_2_3/test-*
- config_name: songer_respond1_3_2
data_files:
- split: train
path: songer_respond1_3_2/train-*
- split: val
path: songer_respond1_3_2/val-*
- split: test
path: songer_respond1_3_2/test-*
- config_name: songer_respond1_3_3
data_files:
- split: train
path: songer_respond1_3_3/train-*
- split: val
path: songer_respond1_3_3/val-*
- split: test
path: songer_respond1_3_3/test-*
- config_name: songer_respond1_4_2
data_files:
- split: train
path: songer_respond1_4_2/train-*
- split: val
path: songer_respond1_4_2/val-*
- split: test
path: songer_respond1_4_2/test-*
- config_name: songer_respond1_4_3
data_files:
- split: train
path: songer_respond1_4_3/train-*
- split: val
path: songer_respond1_4_3/val-*
- split: test
path: songer_respond1_4_3/test-*
- config_name: songer_respond1_5_2
data_files:
- split: train
path: songer_respond1_5_2/train-*
- split: val
path: songer_respond1_5_2/val-*
- split: test
path: songer_respond1_5_2/test-*
- config_name: songer_respond1_5_3
data_files:
- split: train
path: songer_respond1_5_3/train-*
- split: val
path: songer_respond1_5_3/val-*
- split: test
path: songer_respond1_5_3/test-*
- config_name: songer_respond1_7_2
data_files:
- split: train
path: songer_respond1_7_2/train-*
- split: val
path: songer_respond1_7_2/val-*
- split: test
path: songer_respond1_7_2/test-*
- config_name: songer_respond1_7_3
data_files:
- split: train
path: songer_respond1_7_3/train-*
- split: val
path: songer_respond1_7_3/val-*
- split: test
path: songer_respond1_7_3/test-*
- config_name: songer_respond1_7_4
data_files:
- split: train
path: songer_respond1_7_4/train-*
- split: val
path: songer_respond1_7_4/val-*
- split: test
path: songer_respond1_7_4/test-*
- config_name: songer_respond1_7_5
data_files:
- split: train
path: songer_respond1_7_5/train-*
- split: val
path: songer_respond1_7_5/val-*
- split: test
path: songer_respond1_7_5/test-*
- config_name: songer_respond1_8_2
data_files:
- split: train
path: songer_respond1_8_2/train-*
- split: val
path: songer_respond1_8_2/val-*
- split: test
path: songer_respond1_8_2/test-*
- config_name: songer_respond1_8_3
data_files:
- split: train
path: songer_respond1_8_3/train-*
- split: val
path: songer_respond1_8_3/val-*
- split: test
path: songer_respond1_8_3/test-*
- config_name: songer_respond2_1_2
data_files:
- split: train
path: songer_respond2_1_2/train-*
- split: val
path: songer_respond2_1_2/val-*
- split: test
path: songer_respond2_1_2/test-*
- config_name: songer_respond2_1_3
data_files:
- split: train
path: songer_respond2_1_3/train-*
- split: val
path: songer_respond2_1_3/val-*
- split: test
path: songer_respond2_1_3/test-*
- config_name: songer_respond2_1_4
data_files:
- split: train
path: songer_respond2_1_4/train-*
- split: val
path: songer_respond2_1_4/val-*
- split: test
path: songer_respond2_1_4/test-*
- config_name: songer_respond2_2_2
data_files:
- split: train
path: songer_respond2_2_2/train-*
- split: val
path: songer_respond2_2_2/val-*
- split: test
path: songer_respond2_2_2/test-*
- config_name: songer_respond2_2_3
data_files:
- split: train
path: songer_respond2_2_3/train-*
- split: val
path: songer_respond2_2_3/val-*
- split: test
path: songer_respond2_2_3/test-*
- config_name: songer_respond2_3_2
data_files:
- split: train
path: songer_respond2_3_2/train-*
- split: val
path: songer_respond2_3_2/val-*
- split: test
path: songer_respond2_3_2/test-*
- config_name: songer_respond2_3_3
data_files:
- split: train
path: songer_respond2_3_3/train-*
- split: val
path: songer_respond2_3_3/val-*
- split: test
path: songer_respond2_3_3/test-*
- config_name: songer_respond2_4_2
data_files:
- split: train
path: songer_respond2_4_2/train-*
- split: val
path: songer_respond2_4_2/val-*
- split: test
path: songer_respond2_4_2/test-*
- config_name: songer_respond2_4_3
data_files:
- split: train
path: songer_respond2_4_3/train-*
- split: val
path: songer_respond2_4_3/val-*
- split: test
path: songer_respond2_4_3/test-*
- config_name: songer_respond2_5_2
data_files:
- split: train
path: songer_respond2_5_2/train-*
- split: val
path: songer_respond2_5_2/val-*
- split: test
path: songer_respond2_5_2/test-*
- config_name: songer_respond2_5_3
data_files:
- split: train
path: songer_respond2_5_3/train-*
- split: val
path: songer_respond2_5_3/val-*
- split: test
path: songer_respond2_5_3/test-*
- config_name: songer_respond2_7_2
data_files:
- split: train
path: songer_respond2_7_2/train-*
- split: val
path: songer_respond2_7_2/val-*
- split: test
path: songer_respond2_7_2/test-*
- config_name: songer_respond2_7_3
data_files:
- split: train
path: songer_respond2_7_3/train-*
- split: val
path: songer_respond2_7_3/val-*
- split: test
path: songer_respond2_7_3/test-*
- config_name: songer_respond2_7_4
data_files:
- split: train
path: songer_respond2_7_4/train-*
- split: val
path: songer_respond2_7_4/val-*
- split: test
path: songer_respond2_7_4/test-*
- config_name: songer_respond2_7_5
data_files:
- split: train
path: songer_respond2_7_5/train-*
- split: val
path: songer_respond2_7_5/val-*
- split: test
path: songer_respond2_7_5/test-*
- config_name: songer_respond2_8_2
data_files:
- split: train
path: songer_respond2_8_2/train-*
- split: val
path: songer_respond2_8_2/val-*
- split: test
path: songer_respond2_8_2/test-*
- config_name: songer_respond2_8_3
data_files:
- split: train
path: songer_respond2_8_3/train-*
- split: val
path: songer_respond2_8_3/val-*
- split: test
path: songer_respond2_8_3/test-*
- config_name: songer_rtcouns
data_files:
- split: train
path: songer_rtcouns/train-*
- split: val
path: songer_rtcouns/val-*
- split: test
path: songer_rtcouns/test-*
- config_name: songer_search
data_files:
- split: train
path: songer_search/train-*
- split: val
path: songer_search/val-*
- split: test
path: songer_search/test-*
- config_name: songer_sentence
data_files:
- split: train
path: songer_sentence/train-*
- split: val
path: songer_sentence/val-*
- split: test
path: songer_sentence/test-*
- config_name: songer_source
data_files:
- split: train
path: songer_source/train-*
- split: val
path: songer_source/val-*
- split: test
path: songer_source/test-*
- config_name: songer_st_v_st
data_files:
- split: train
path: songer_st_v_st/train-*
- split: val
path: songer_st_v_st/val-*
- split: test
path: songer_st_v_st/test-*
- config_name: songer_standing
data_files:
- split: train
path: songer_standing/train-*
- split: val
path: songer_standing/val-*
- split: test
path: songer_standing/test-*
- config_name: songer_state
data_files:
- split: train
path: songer_state/train-*
- split: val
path: songer_state/val-*
- split: test
path: songer_state/test-*
- config_name: songer_stateclaim
data_files:
- split: train
path: songer_stateclaim/train-*
- split: val
path: songer_stateclaim/val-*
- split: test
path: songer_stateclaim/test-*
- config_name: songer_stpolicy
data_files:
- split: train
path: songer_stpolicy/train-*
- split: val
path: songer_stpolicy/val-*
- split: test
path: songer_stpolicy/test-*
- config_name: songer_subevid
data_files:
- split: train
path: songer_subevid/train-*
- split: val
path: songer_subevid/val-*
- split: test
path: songer_subevid/test-*
- config_name: songer_suffic
data_files:
- split: train
path: songer_suffic/train-*
- split: val
path: songer_suffic/val-*
- split: test
path: songer_suffic/test-*
- config_name: songer_summary
data_files:
- split: train
path: songer_summary/train-*
- split: val
path: songer_summary/val-*
- split: test
path: songer_summary/test-*
- config_name: songer_timely
data_files:
- split: train
path: songer_timely/train-*
- split: val
path: songer_timely/val-*
- split: test
path: songer_timely/test-*
- config_name: songer_treat
data_files:
- split: train
path: songer_treat/train-*
- split: val
path: songer_treat/val-*
- split: test
path: songer_treat/test-*
- config_name: songer_trialpro
data_files:
- split: train
path: songer_trialpro/train-*
- split: val
path: songer_trialpro/val-*
- split: test
path: songer_trialpro/test-*
- config_name: songer_two_issues
data_files:
- split: train
path: songer_two_issues/train-*
- split: val
path: songer_two_issues/val-*
- split: test
path: songer_two_issues/test-*
- config_name: songer_typeiss
data_files:
- split: train
path: songer_typeiss/train-*
- split: val
path: songer_typeiss/val-*
- split: test
path: songer_typeiss/test-*
- config_name: songer_usc1
data_files:
- split: train
path: songer_usc1/train-*
- split: val
path: songer_usc1/val-*
- split: test
path: songer_usc1/test-*
- config_name: songer_usc1sect
data_files:
- split: train
path: songer_usc1sect/train-*
- split: val
path: songer_usc1sect/val-*
- split: test
path: songer_usc1sect/test-*
- config_name: songer_usc2
data_files:
- split: train
path: songer_usc2/train-*
- split: val
path: songer_usc2/val-*
- split: test
path: songer_usc2/test-*
- config_name: songer_usc2sect
data_files:
- split: train
path: songer_usc2sect/train-*
- split: val
path: songer_usc2sect/val-*
- split: test
path: songer_usc2sect/test-*
- config_name: songer_weightev
data_files:
- split: train
path: songer_weightev/train-*
- split: val
path: songer_weightev/val-*
- split: test
path: songer_weightev/test-*
- config_name: songer_whlaws
data_files:
- split: train
path: songer_whlaws/train-*
- split: val
path: songer_whlaws/val-*
- split: test
path: songer_whlaws/test-*
task_categories:
- text-classification
- question-answering
- feature-extraction
- zero-shot-classification
language:
- en
pretty_name: Lawma legal classification tasks
size_categories:
- 100K<n<1M
---
# Lawma legal classification tasks
This repository contains the legal classification tasks from [Lawma](https://arxiv.org/abs/2407.16615).
These tasks were derived from the [Supreme Court](http://scdb.wustl.edu/data.php) and [Songer Court of Appeals](http://www.songerproject.org/us-courts-of-appeals-databases.html) databases.
See the project's [GitHub repository](https://github.com/socialfoundations/lawma) for more details.
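Each task is exposed as its own config with `train`/`val`/`test` splits (see the YAML above). As a minimal usage sketch with the Hugging Face `datasets` library; the repository id below is a placeholder, to be replaced with this dataset's actual Hub path:
```python
from datasets import load_dataset

# Placeholder repo id -- substitute this dataset's actual Hub path.
REPO_ID = "<org>/<this-dataset>"

# Load one task by its config name, e.g. the Songer state-claim task.
task = load_dataset(REPO_ID, "songer_stateclaim")
train, val, test = task["train"], task["val"], task["test"]
print(len(train), len(val), len(test))
```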
Please cite as:
```
@misc{dominguezolmedo2024lawmapowerspecializationlegal,
title={Lawma: The Power of Specialization for Legal Tasks},
author={Ricardo Dominguez-Olmedo and Vedant Nanda and Rediet Abebe and Stefan Bechtold and Christoph Engel and Jens Frankenreiter and Krishna Gummadi and Moritz Hardt and Michael Livermore},
year={2024},
eprint={2407.16615},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2407.16615},
}
``` |
Voxel51/emnist-letters-tiny | Voxel51 | "2024-07-23T18:58:23Z" | 13,117 | 2 | [
"task_categories:image-classification",
"language:en",
"size_categories:10K<n<100K",
"modality:image",
"library:fiftyone",
"arxiv:1702.05373",
"region:us",
"fiftyone",
"image",
"image-classification"
] | [
"image-classification"
] | "2024-07-23T18:43:35Z" | ---
annotations_creators: []
language: en
size_categories:
- 10K<n<100K
task_categories:
- image-classification
task_ids: []
pretty_name: EMNIST-Letters-10k
tags:
- fiftyone
- image
- image-classification
dataset_summary: '
![image/png](dataset_preview.png)
This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 10000 samples.
## Installation
If you haven''t already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub
# Load the dataset
# Note: other available arguments include ''max_samples'', etc
dataset = load_from_hub("Voxel51/emnist-letters-tiny")
# Launch the App
session = fo.launch_app(dataset)
```
'
---
# Dataset Card for EMNIST-Letters-10k
<!-- Provide a quick summary of the dataset. -->
A random subset of the train and test splits from the letters portion of [EMNIST](https://pytorch.org/vision/0.18/generated/torchvision.datasets.EMNIST.html)
![image/png](dataset_preview.png)
This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 10000 samples.
## Installation
If you haven't already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub
# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = load_from_hub("Voxel51/emnist-letters-tiny")
# Launch the App
session = fo.launch_app(dataset)
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Homepage:** https://www.nist.gov/itl/products-and-services/emnist-dataset
- **Paper:** https://arxiv.org/abs/1702.05373v1
## Citation
**BibTeX:**
```bibtex
@misc{cohen2017emnistextensionmnisthandwritten,
title={EMNIST: an extension of MNIST to handwritten letters},
author={Gregory Cohen and Saeed Afshar and Jonathan Tapson and André van Schaik},
year={2017},
eprint={1702.05373},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/1702.05373},
}
```
## Dataset Card Author
[Jacob Marks](https://huggingface.co/jamarks)
|
TempoFunk/webvid-10M | TempoFunk | "2023-08-19T09:03:19Z" | 13,077 | 59 | [
"task_categories:text-to-video",
"task_categories:text-to-image",
"task_categories:video-classification",
"task_categories:image-classification",
"language:en",
"license:agpl-3.0",
"size_categories:10M<n<100M",
"format:csv",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-to-video",
"text-to-image",
"video-classification",
"image-classification"
] | "2023-06-16T19:17:16Z" | ---
license: agpl-3.0
task_categories:
- text-to-video
- text-to-image
- video-classification
- image-classification
language:
- en
size_categories:
- 1M<n<10M
--- |
HuggingFaceH4/ultrachat_200k | HuggingFaceH4 | "2024-10-16T11:52:27Z" | 12,967 | 477 | [
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2305.14233",
"region:us"
] | [
"text-generation"
] | "2023-10-24T08:24:57Z" | ---
language:
- en
license: mit
size_categories:
- 100K<n<1M
task_categories:
- text-generation
pretty_name: UltraChat 200k
configs:
- config_name: default
data_files:
- split: train_sft
path: data/train_sft-*
- split: test_sft
path: data/test_sft-*
- split: train_gen
path: data/train_gen-*
- split: test_gen
path: data/test_gen-*
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_sft
num_bytes: 1397058554
num_examples: 207865
- name: test_sft
num_bytes: 154695659
num_examples: 23110
- name: train_gen
num_bytes: 1347396812
num_examples: 256032
- name: test_gen
num_bytes: 148276089
num_examples: 28304
download_size: 1624049723
dataset_size: 3047427114
---
# Dataset Card for UltraChat 200k
## Dataset Description
This is a heavily filtered version of the [UltraChat](https://github.com/thunlp/UltraChat) dataset and was used to train [Zephyr-7B-β](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), a state-of-the-art 7B chat model.
The original dataset consists of 1.4M dialogues generated by ChatGPT, spanning a wide range of topics. To create `UltraChat 200k`, we applied the following logic:
- Selection of a subset of the data for faster supervised fine-tuning.
- Truecasing of the dataset, as we observed around 5% of the data contained grammatical errors like "Hello. how are you?" instead of "Hello. How are you?"
- Removal of dialogues where the assistant replies with canned phrases like "I do not have emotions" or "I don't have opinions", even for fact-based prompts where neither applies (a rough sketch of this filter follows below).
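Below is a rough sketch of the phrase-based removal step; the exact phrase list and matching rules used by the authors are assumptions here, not the published implementation.
```python
# A rough sketch of the phrase-based filter described above; the phrase
# list and matching rules are assumptions, not the authors' implementation.
REFUSAL_PHRASES = ("I do not have emotions", "I don't have opinions")

def keep_dialogue(messages: list[dict]) -> bool:
    """Return False if any assistant turn contains a canned refusal phrase."""
    return not any(
        message["role"] == "assistant"
        and any(phrase in message["content"] for phrase in REFUSAL_PHRASES)
        for message in messages
    )
```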
## Dataset Structure
The dataset has four splits, a train/test pair for each of:
* Supervised fine-tuning (`sft`).
* Generation ranking (`gen`) via techniques like rejection sampling or PPO.
The number of examples per split is shown as follows:
| train_sft | test_sft | train_gen | test_gen |
|:-------:|:-----------:|:-----:| :-----:|
| 207865 | 23110 | 256032 | 28304 |
The dataset is stored in parquet format with each entry using the following schema:
```
{
"prompt": "Create a fully-developed protagonist who is challenged to survive within a dystopian society under the rule of a tyrant. ...",
"messages":[
{
"content": "Create a fully-developed protagonist who is challenged to survive within a dystopian society under the rule of a tyrant. ...",
"role": "user"
},
{
"content": "Name: Ava\n\n Ava was just 16 years old when the world as she knew it came crashing down. The government had collapsed, leaving behind a chaotic and lawless society. ...",
"role": "assistant"
},
{
"content": "Wow, Ava's story is so intense and inspiring! Can you provide me with more details. ...",
"role": "user"
},
{
"content": "Certainly! ....",
"role": "assistant"
},
{
"content": "That's really interesting! I would love to hear more...",
"role": "user"
    },
{
"content": "Certainly! ....",
"role": "assistant"
    }
],
"prompt_id": "d938b65dfe31f05f80eb8572964c6673eddbd68eff3db6bd234d7f1e3b86c2af"
}
```
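As a quick usage sketch (standard `datasets` API, assuming nothing beyond the split names above):
```python
from datasets import load_dataset

# Load the SFT training split; the others are test_sft, train_gen, test_gen.
ds = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")

# Each example carries a prompt, a prompt_id, and a list of chat messages.
example = ds[0]
for message in example["messages"]:
    print(f"{message['role']}: {message['content'][:80]}")
```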
## Citation
If you find this dataset is useful in your work, please cite the original UltraChat dataset:
```
@misc{ding2023enhancing,
title={Enhancing Chat Language Models by Scaling High-quality Instructional Conversations},
author={Ning Ding and Yulin Chen and Bokai Xu and Yujia Qin and Zhi Zheng and Shengding Hu and Zhiyuan Liu and Maosong Sun and Bowen Zhou},
year={2023},
eprint={2305.14233},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
pixparse/cc3m-wds | pixparse | "2023-12-15T01:42:07Z" | 12,929 | 24 | [
"task_categories:image-to-text",
"license:other",
"size_categories:1M<n<10M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"region:us"
] | [
"image-to-text"
] | "2023-12-14T18:06:04Z" | ---
license: other
license_name: conceptual-captions
license_link: >-
https://github.com/google-research-datasets/conceptual-captions/blob/master/LICENSE
task_categories:
- image-to-text
size_categories:
- 1M<n<10M
---
# Dataset Card for Conceptual Captions (CC3M)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Conceptual Captions homepage](https://ai.google.com/research/ConceptualCaptions/)
- **Repository:** [Conceptual Captions repository](https://github.com/google-research-datasets/conceptual-captions)
- **Paper:** [Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning](https://www.aclweb.org/anthology/P18-1238/)
- **Leaderboard:** [Conceptual Captions leaderboard](https://ai.google.com/research/ConceptualCaptions/competition?active_tab=leaderboard)
- **Point of Contact:** [Conceptual Captions e-mail](mailto:conceptual-captions@google.com)
### Dataset Summary
Conceptual Captions is a dataset consisting of ~3.3M images annotated with captions. In contrast with the curated style of other image caption annotations, Conceptual Caption images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles. More precisely, the raw descriptions are harvested from the Alt-text HTML attribute associated with web images. To arrive at the current version of the captions, we have developed an automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness, informativeness, fluency, and learnability of the resulting captions.
### Usage
This instance of Conceptual Captions is in [webdataset](https://github.com/webdataset/webdataset/commits/main) `.tar` format. It can be used with the webdataset library or upcoming releases of Hugging Face `datasets`.
...More Detail TBD
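As a minimal sketch, the shards can be streamed with the webdataset library. The brace-expanded shard pattern and the sample keys (`jpg`, `txt`) are assumptions based on typical img2dataset output, not confirmed by this card:
```python
import webdataset as wds

# Assumptions: shards live locally, are zero-padded as cc3m-train-0000.tar
# through cc3m-train-0575.tar (576 shards, per Data Splits below), and use
# img2dataset's default "jpg"/"txt" sample keys.
dataset = (
    wds.WebDataset("cc3m-train-{0000..0575}.tar")
    .decode("pil")            # decode images to PIL
    .to_tuple("jpg", "txt")   # yield (image, caption) pairs
)

image, caption = next(iter(dataset))
print(caption)
```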
### Data Splits
This dataset was downloaded using img2dataset. On download, images whose shortest edge exceeded 512 pixels were resized so that the shortest edge equals 512.
#### Train
* `cc3m-train-*.tar`
* Downloaded on 2021/12/22
* 576 shards, 2905954 (of 3318333) samples
#### Validation
* `cc3m-validation-*.tar`
* Downloaded on 2023/12/13 (original validation set download in 2021 was corrupted)
* 16 shards, 13443 (of 15840) samples
## Additional Information
### Dataset Curators
Piyush Sharma, Nan Ding, Sebastian Goodman and Radu Soricut.
### Licensing Information
The dataset may be freely used for any purpose, although acknowledgement of
Google LLC ("Google") as the data source would be appreciated. The dataset is
provided "AS IS" without any warranty, express or implied. Google disclaims all
liability for any damages, direct or indirect, resulting from the use of the
dataset.
### Citation Information
```bibtex
@inproceedings{sharma2018conceptual,
title = {Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning},
author = {Sharma, Piyush and Ding, Nan and Goodman, Sebastian and Soricut, Radu},
booktitle = {Proceedings of ACL},
year = {2018},
}
``` |