parler-tts/mls_eng_10k
---
pretty_name: 10K hours of English MLS
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- multilingual
paperswithcode_id: multilingual-librispeech
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- text-to-speech
- text-to-audio
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: original_path
    dtype: string
  - name: begin_time
    dtype: float64
  - name: end_time
    dtype: float64
  - name: transcript
    dtype: string
  - name: audio_duration
    dtype: float64
  - name: speaker_id
    dtype: string
  - name: book_id
    dtype: string
  splits:
  - name: dev
    num_bytes: 249691299.74
    num_examples: 3807
  - name: test
    num_bytes: 245941162.096
    num_examples: 3769
  - name: train
    num_bytes: 158437701688.205
    num_examples: 2420047
  download_size: 158461062068
  dataset_size: 158933334150.041
configs:
- config_name: default
  data_files:
  - split: dev
    path: data/dev-*
  - split: test
    path: data/test-*
  - split: train
    path: data/train-*
---

### Dataset Summary

This is a **10K-hour** subset of the **[English version of the Multilingual LibriSpeech (MLS) dataset](https://huggingface.co/datasets/parler-tts/mls_eng)**. The data archives were restructured from the originals on [OpenSLR](http://www.openslr.org/94) to make the dataset easier to stream.

The MLS dataset is a large multilingual corpus suitable for speech research. It is derived from read audiobooks from LibriVox and covers 8 languages: English, German, Dutch, Spanish, French, Italian, Portuguese, and Polish. It includes about 44.5K hours of English and a total of about 6K hours for the other languages. This dataset card covers the 10K-hour English subset; refer to this [dataset card](https://huggingface.co/datasets/facebook/multilingual_librispeech) for the other languages.

### Licensing Information

Public Domain, Creative Commons Attribution 4.0 International Public License ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode))

### Citation Information

```
@article{Pratap2020MLSAL,
  title={MLS: A Large-Scale Multilingual Dataset for Speech Research},
  author={Vineel Pratap and Qiantong Xu and Anuroop Sriram and Gabriel Synnaeve and Ronan Collobert},
  journal={ArXiv},
  year={2020},
  volume={abs/2012.03411}
}
```
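Since the archives were restructured for streaming, here is a minimal access sketch (assuming the standard `datasets` streaming API; field names are taken from the schema above):

```python
from datasets import load_dataset

# Stream the train split so the ~158 GB corpus is not downloaded up front.
mls = load_dataset("parler-tts/mls_eng_10k", split="train", streaming=True)

sample = next(iter(mls))
print(sample["transcript"], sample["audio_duration"], sample["speaker_id"])
```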
xlangai/spider
---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
- machine-generated
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: spider-1
pretty_name: Spider
tags:
- text-to-sql
dataset_info:
  config_name: spider
  features:
  - name: db_id
    dtype: string
  - name: query
    dtype: string
  - name: question
    dtype: string
  - name: query_toks
    sequence: string
  - name: query_toks_no_value
    sequence: string
  - name: question_toks
    sequence: string
  splits:
  - name: train
    num_bytes: 4743786
    num_examples: 7000
  - name: validation
    num_bytes: 682090
    num_examples: 1034
  download_size: 957246
  dataset_size: 5425876
configs:
- config_name: spider
  data_files:
  - split: train
    path: spider/train-*
  - split: validation
    path: spider/validation-*
  default: true
---

# Dataset Card for Spider

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://yale-lily.github.io/spider
- **Repository:** https://github.com/taoyds/spider
- **Paper:** https://www.aclweb.org/anthology/D18-1425/
- **Paper:** https://arxiv.org/abs/1809.08887
- **Point of Contact:** [Yale LILY](https://yale-lily.github.io/)

### Dataset Summary

Spider is a large-scale, complex, cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students. The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.

### Supported Tasks and Leaderboards

The leaderboard can be seen at https://yale-lily.github.io/spider

### Languages

The text in the dataset is in English.

## Dataset Structure

### Data Instances

**What do the instances that comprise the dataset represent?**

Each instance is a natural language question paired with its equivalent SQL query.

**How many instances are there in total?**

**What data does each instance consist of?**

[More Information Needed]

### Data Fields

* **db_id**: Database name
* **question**: Natural language question to interpret into SQL
* **query**: Target SQL query
* **query_toks**: List of tokens for the query
* **query_toks_no_value**: List of tokens for the query, with literal values replaced by a placeholder
* **question_toks**: List of tokens for the question

### Data Splits

**train**: 7000 question and SQL query pairs

**dev**: 1034 question and SQL query pairs

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?
[More Information Needed]

### Annotations

The dataset was annotated by 11 college students at Yale University.

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

## Additional Information

The authors listed on the homepage maintain and support the dataset.

### Dataset Curators

[More Information Needed]

### Licensing Information

The Spider dataset is licensed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode) license.

### Citation Information

```
@inproceedings{yu-etal-2018-spider,
    title = "{S}pider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-{SQL} Task",
    author = "Yu, Tao and Zhang, Rui and Yang, Kai and Yasunaga, Michihiro and Wang, Dongxu and Li, Zifan and Ma, James and Li, Irene and Yao, Qingning and Roman, Shanelle and Zhang, Zilin and Radev, Dragomir",
    editor = "Riloff, Ellen and Chiang, David and Hockenmaier, Julia and Tsujii, Jun{'}ichi",
    booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
    month = oct # "-" # nov,
    year = "2018",
    address = "Brussels, Belgium",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D18-1425",
    doi = "10.18653/v1/D18-1425",
    pages = "3911--3921",
    archivePrefix={arXiv},
    eprint={1809.08887},
    primaryClass={cs.CL},
}
```

### Contributions

Thanks to [@olinguyen](https://github.com/olinguyen) for adding this dataset.
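A quick usage sketch (assumes the `datasets` library; the `spider` config is the default, so no config name is needed):

```python
from datasets import load_dataset

spider = load_dataset("xlangai/spider", split="train")

# Each instance pairs a natural-language question with its target SQL query.
example = spider[0]
print(example["db_id"])
print(example["question"])
print(example["query"])
```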
ai4bharat/Aksharantar
---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
- machine-generated
- found
- other
language:
- asm
- ben
- brx
- doi
- guj
- hin
- kan
- kas
- kok
- mai
- mal
- mar
- mni
- nep
- ori
- pan
- san
- sid
- tam
- tel
- urd
license: cc
multilinguality:
- multilingual
pretty_name: Aksharantar
source_datasets:
- original
task_categories:
- text-generation
task_ids: []
---

# Dataset Card for Aksharantar

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://indicnlp.ai4bharat.org/indic-xlit/
- **Repository:** https://github.com/AI4Bharat/IndicXlit/
- **Paper:** [Aksharantar: Towards building open transliteration tools for the next billion users](https://arxiv.org/abs/2205.03018)
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

Aksharantar is the largest publicly available transliteration dataset for 20 Indic languages. The corpus has 26M Indic language-English transliteration pairs.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

| <!-- --> | <!-- --> | <!-- --> | <!-- --> | <!-- --> | <!-- --> |
| -------------- | -------------- | -------------- | --------------- | -------------- | ------------- |
| Assamese (asm) | Hindi (hin) | Maithili (mai) | Marathi (mar) | Punjabi (pan) | Tamil (tam) |
| Bengali (ben) | Kannada (kan) | Malayalam (mal)| Nepali (nep) | Sanskrit (san) | Telugu (tel) |
| Bodo (brx) | Kashmiri (kas) | Manipuri (mni) | Oriya (ori) | Sindhi (snd) | Urdu (urd) |
| Gujarati (guj) | Konkani (kok) | Dogri (doi) |

## Dataset Structure

### Data Instances

```
A random sample from the Hindi (hin) train split:

{
  'unique_identifier': 'hin1241393',
  'native word': 'स्वाभिमानिक',
  'english word': 'swabhimanik',
  'source': 'IndicCorp',
  'score': -0.1028788579
}
```

### Data Fields

- `unique_identifier` (string): 3-letter language code followed by a unique number in each set (Train, Test, Val).
- `native word` (string): A word in the Indic language.
- `english word` (string): Transliteration of the native word into English (Romanised word).
- `source` (string): Source of the data.
- `score` (num): Character-level log probability of the Indic word given the Roman word, as computed by the IndicXlit model. Pairs meeting an average score threshold of 0.35 are considered.
For created data, the source of a pair depends on the sampling method used for that language and will be one of:

- Dakshina Dataset
- IndicCorp
- Samanantar
- Wikidata
- Existing sources
- Named Entities Indian (AK-NEI)
- Named Entities Foreign (AK-NEF)
- Data from the uniform sampling method (Ak-Uni)
- Data from the most-frequent-words sampling method (Ak-Freq)

### Data Splits

| Subset | asm-en | ben-en | brx-en | guj-en | hin-en | kan-en | kas-en | kok-en | mai-en | mal-en | mni-en | mar-en | nep-en | ori-en | pan-en | san-en | sid-en | tam-en | tel-en | urd-en |
|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
| Training | 179K | 1231K | 36K | 1143K | 1299K | 2907K | 47K | 613K | 283K | 4101K | 10K | 1453K | 2397K | 346K | 515K | 1813K | 60K | 3231K | 2430K | 699K |
| Validation | 4K | 11K | 3K | 12K | 6K | 7K | 4K | 4K | 4K | 8K | 3K | 8K | 3K | 3K | 9K | 3K | 8K | 9K | 8K | 12K |
| Test | 5531 | 5009 | 4136 | 7768 | 5693 | 6396 | 7707 | 5093 | 5512 | 6911 | 4925 | 6573 | 4133 | 4256 | 4316 | 5334 | - | 4682 | 4567 | 4463 |

## Dataset Creation

Information in the paper: [Aksharantar: Towards building open transliteration tools for the next billion users](https://arxiv.org/abs/2205.03018)

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

Information in the paper: [Aksharantar: Towards building open transliteration tools for the next billion users](https://arxiv.org/abs/2205.03018)

#### Who are the source language producers?

[More Information Needed]

### Annotations

Information in the paper: [Aksharantar: Towards building open transliteration tools for the next billion users](https://arxiv.org/abs/2205.03018)

#### Annotation process

Information in the paper: [Aksharantar: Towards building open transliteration tools for the next billion users](https://arxiv.org/abs/2205.03018)

#### Who are the annotators?

Information in the paper: [Aksharantar: Towards building open transliteration tools for the next billion users](https://arxiv.org/abs/2205.03018)

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

<!-- <a rel="license" float="left" href="http://creativecommons.org/publicdomain/zero/1.0/"> <img src="https://licensebuttons.net/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" width="100" /> <img src="https://mirrors.creativecommons.org/presskit/buttons/88x31/png/by.png" style="border-style: none;" alt="CC-BY" width="100" href="http://creativecommons.org/publicdomain/zero/1.0/"/> </a> <br/> -->

This data is released under the following licensing scheme:

- Manually collected data: Released under the CC-BY license.
- Mined dataset (from Samanantar and IndicCorp): Released under the CC0 license.
- Existing sources: Released under the CC0 license.

**CC-BY License**

<a rel="license" float="left" href="https://creativecommons.org/about/cclicenses/"> <img src="https://mirrors.creativecommons.org/presskit/buttons/88x31/png/by.png" style="border-style: none;" alt="CC-BY" width="100"/> </a>

<!-- and the Aksharantar benchmark and all manually transliterated data under the [Creative Commons CC-BY license ("no rights reserved")](https://creativecommons.org/licenses/by/4.0/). -->

**CC0 License Statement**

<a rel="license" float="left" href="https://creativecommons.org/about/cclicenses/"> <img src="https://licensebuttons.net/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" width="100"/> </a>

- We do not own any of the text from which this data has been extracted.
- We license the actual packaging of the mined data under the [Creative Commons CC0 license ("no rights reserved")](http://creativecommons.org/publicdomain/zero/1.0).
- To the extent possible under law, <a rel="dct:publisher" href="https://indicnlp.ai4bharat.org/aksharantar/"> <span property="dct:title">AI4Bharat</span></a> has waived all copyright and related or neighboring rights to <span property="dct:title">Aksharantar</span> manually collected data and existing sources.
- This work is published from: India.

### Citation Information

```
@misc{madhani2022aksharantar,
  title={Aksharantar: Towards Building Open Transliteration Tools for the Next Billion Users},
  author={Yash Madhani and Sushane Parthan and Priyanka Bedekar and Ruchi Khapra and Anoop Kunchukuttan and Pratyush Kumar and Mitesh Shantadevi Khapra},
  year={2022},
  eprint={},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

### Contributions
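For quick usage, a hedged loading sketch: this card does not document configuration names, so the per-language config below (e.g. `hin`) is an assumption; the field names with spaces come from the Data Fields section above.

```python
from datasets import load_dataset

# Assumption: one config per language, named by its 3-letter code (e.g. "hin").
aksharantar_hin = load_dataset("ai4bharat/Aksharantar", "hin", split="train")

pair = aksharantar_hin[0]
# Field names contain spaces, as documented in the Data Fields section.
print(pair["native word"], "->", pair["english word"])
```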
Adapting/empathetic_dialogues_v2
---
license: afl-3.0
---

Fine-tuned empathetic dialogue datasets derived from https://huggingface.co/datasets/empathetic_dialogues, with labels for chat history, system response, whether each turn is a question, and behavior.
jakartaresearch/indoqa
---
annotations_creators:
- expert-generated
language:
- id
language_creators:
- found
license:
- cc-by-nd-4.0
multilinguality:
- monolingual
pretty_name: Indonesian Question Answering Dataset
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- indoqa
- qa
- question-answering
- indonesian
task_categories:
- question-answering
task_ids:
- extractive-qa
---

# Dataset Card for Indonesian Question Answering Dataset

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

[More Information Needed]

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@fhrzn](https://github.com/fhrzn), [@Kalzaik](https://github.com/Kalzaik), [@ibamibrahim](https://github.com/ibamibrahim) and [@andreaschandra](https://github.com/andreaschandra) for adding this dataset.
Bingsu/Gameplay_Images
---
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Gameplay Images
size_categories:
- 1K<n<10K
task_categories:
- image-classification
---

# Gameplay Images

## Dataset Description

- **Homepage:** [kaggle](https://www.kaggle.com/datasets/aditmagotra/gameplay-images)
- **Download Size** 2.50 GiB
- **Generated Size** 1.68 GiB
- **Total Size** 4.19 GiB

A dataset from [kaggle](https://www.kaggle.com/datasets/aditmagotra/gameplay-images). This is a dataset of gameplay images from 10 of the most famous video games in the world. These include:

- Among Us
- Apex Legends
- Fortnite
- Forza Horizon
- Free Fire
- Genshin Impact
- God of War
- Minecraft
- Roblox
- Terraria

There are 1000 images per class and all are sized `640 x 360`. They are in the `.png` format. This dataset was made by saving frames every few seconds from famous gameplay videos on YouTube.

※ This dataset was uploaded in January 2022. Game content updated after that date is not included.

### License

CC-BY-4.0

## Dataset Structure

### Data Instances

```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/Gameplay_Images")
>>> dataset
DatasetDict({
    train: Dataset({
        features: ['image', 'label'],
        num_rows: 10000
    })
})
```

```python
>>> dataset["train"].features
{'image': Image(decode=True, id=None),
 'label': ClassLabel(num_classes=10, names=['Among Us', 'Apex Legends', 'Fortnite', 'Forza Horizon', 'Free Fire', 'Genshin Impact', 'God of War', 'Minecraft', 'Roblox', 'Terraria'], id=None)}
```

### Data Size

download: 2.50 GiB<br> generated: 1.68 GiB<br> total: 4.19 GiB

### Data Fields

- image: `Image` - A `PIL.Image.Image` object containing the image, size 640x360.
  - Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files might take a significant amount of time. Thus, it is important to first query the sample index before the "image" column, i.e. `dataset[0]["image"]` should always be preferred over `dataset["image"][0]`.
- label: an int classification label.

Class Label Mappings:

```json
{
  "Among Us": 0,
  "Apex Legends": 1,
  "Fortnite": 2,
  "Forza Horizon": 3,
  "Free Fire": 4,
  "Genshin Impact": 5,
  "God of War": 6,
  "Minecraft": 7,
  "Roblox": 8,
  "Terraria": 9
}
```

```python
>>> dataset["train"][0]
{'image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=640x360>,
 'label': 0}
```

### Data Splits

|            | train    |
| ---------- | -------- |
| # of data  | 10000    |

### Note

#### train_test_split

```python
>>> ds_new = dataset["train"].train_test_split(0.2, seed=42, stratify_by_column="label")
>>> ds_new
DatasetDict({
    train: Dataset({
        features: ['image', 'label'],
        num_rows: 8000
    })
    test: Dataset({
        features: ['image', 'label'],
        num_rows: 2000
    })
})
```
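Following the decoding note above, here is a sketch of lazy preprocessing with `with_transform`, so images are only decoded when a batch is actually accessed (the resize size and the use of torchvision are illustrative choices, not part of the dataset):

```python
import torchvision.transforms as T
from datasets import load_dataset

dataset = load_dataset("Bingsu/Gameplay_Images", split="train")

to_tensor = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def preprocess(batch):
    # The source PNGs are RGBA, so convert to RGB before tensorizing.
    batch["pixel_values"] = [to_tensor(img.convert("RGB")) for img in batch["image"]]
    return batch

# The transform runs on the fly; nothing is decoded until a row is accessed.
dataset = dataset.with_transform(preprocess)
print(dataset[0]["pixel_values"].shape)  # torch.Size([3, 224, 224])
```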
pysentimiento/spanish-tweets
---
language: es
dataset_info:
  features:
  - name: text
    dtype: string
  - name: tweet_id
    dtype: string
  - name: user_id
    dtype: string
  splits:
  - name: train
    num_bytes: 82649695458
    num_examples: 597433111
  - name: test
    num_bytes: 892219251
    num_examples: 6224733
  download_size: 51737237106
  dataset_size: 83541914709
---

# spanish-tweets

## A big corpus of tweets for pretraining embeddings and language models

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage**: https://github.com/pysentimiento/robertuito
- **Paper**: [RoBERTuito: a pre-trained language model for social media text in Spanish](https://aclanthology.org/2022.lrec-1.785/)
- **Point of Contact:** jmperez (at) dc.uba.ar

### Dataset Summary

A big dataset of (mostly) Spanish tweets for pre-training language models (or other representations).

### Supported Tasks and Leaderboards

Language Modeling

### Languages

Mostly Spanish, but some Portuguese, English, and other languages.

## Dataset Structure

### Data Fields

- *tweet_id*: id of the tweet
- *user_id*: id of the user
- *text*: text from the tweet

## Dataset Creation

The full process of data collection is described in the paper. Here we roughly outline the main points:

- A Spritzer collection uploaded to Archive.org dating from May 2019 was downloaded.
- From this, we only kept tweets with language metadata equal to Spanish, and marked the users who posted these messages.
- Then, the timeline of each of these marked users was downloaded.

This corpus consists of 622M tweets from around 432K users.

Please note that we did not filter tweets from other languages, so you might find English, Portuguese, Catalan and other languages in the dataset (around 7-8% of the tweets are not in Spanish).

### Citation Information

```
@inproceedings{perez-etal-2022-robertuito,
    title = "{R}o{BERT}uito: a pre-trained language model for social media text in {S}panish",
    author = "P{\'e}rez, Juan Manuel and Furman, Dami{\'a}n Ariel and Alonso Alemany, Laura and Luque, Franco M.",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.785",
    pages = "7235--7243",
    abstract = "Since BERT appeared, Transformer language models and transfer learning have become state-of-the-art for natural language processing tasks. Recently, some works geared towards pre-training specially-crafted models for particular domains, such as scientific papers, medical documents, user-generated texts, among others. These domain-specific models have been shown to improve performance significantly in most tasks; however, for languages other than English, such models are not widely available. In this work, we present RoBERTuito, a pre-trained language model for user-generated text in Spanish, trained on over 500 million tweets. Experiments on a benchmark of tasks involving user-generated text showed that RoBERTuito outperformed other pre-trained language models in Spanish. In addition to this, our model has some cross-lingual abilities, achieving top results for English-Spanish tasks of the Linguistic Code-Switching Evaluation benchmark (LinCE) and also competitive performance against monolingual models in English Twitter tasks. To facilitate further research, we make RoBERTuito publicly available at the HuggingFace model hub together with the dataset used to pre-train it.",
}
```
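Given the ~83 GB train split, a streaming sketch may be useful (standard `datasets` streaming API; the buffer size is an arbitrary choice):

```python
from datasets import load_dataset

tweets = load_dataset("pysentimiento/spanish-tweets", split="train", streaming=True)

# Shuffle with a bounded buffer and peek at a few tweets without a full download.
for tweet in tweets.shuffle(seed=42, buffer_size=10_000).take(3):
    print(tweet["tweet_id"], tweet["text"][:80])
```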
mrmoor/cyber-threat-intelligence
---
license: unknown
---
ashraq/tmdb-people-image
---
dataset_info:
  features:
  - name: adult
    dtype: bool
  - name: also_known_as
    dtype: string
  - name: biography
    dtype: string
  - name: birthday
    dtype: string
  - name: deathday
    dtype: string
  - name: gender
    dtype: int64
  - name: homepage
    dtype: string
  - name: id
    dtype: int64
  - name: imdb_id
    dtype: string
  - name: known_for_department
    dtype: string
  - name: name
    dtype: string
  - name: place_of_birth
    dtype: string
  - name: popularity
    dtype: float64
  - name: profile_path
    dtype: string
  - name: image
    dtype: image
  splits:
  - name: train
    num_bytes: 3749610460.6819267
    num_examples: 116403
  download_size: 3733145768
  dataset_size: 3749610460.6819267
---

Data was obtained from the [TMDB API](https://developers.themoviedb.org/3).
Isotonic/human_assistant_conversation
---
license: afl-3.0
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: response
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 2724591096.91667
    num_examples: 1494223
  - name: test
    num_bytes: 681148230.08333
    num_examples: 373556
  download_size: 1996990227
  dataset_size: 3405739327.0
task_categories:
- text-generation
- conversational
language:
- en
- es
- zh
size_categories:
- 100K<n<1M
---
bigcode/programming-languages-keywords
---
dataset_info:
  features:
  - name: language
    dtype: string
  - name: keywords
    sequence: string
  splits:
  - name: train
    num_bytes: 20307
    num_examples: 36
  download_size: 8838
  dataset_size: 20307
---

# Dataset Card for "programming-languages-keywords"

Structured version of https://github.com/e3b0c442/keywords

Generated using:

```python
import re

import pandas as pd
import requests
from datasets import Dataset

r = requests.get("https://raw.githubusercontent.com/e3b0c442/keywords/main/README.md")
# Split the README into per-language sections and drop the "Sources" section.
keywords = r.text.split("### ")[1:]
keywords = [i for i in keywords if not i.startswith("Sources")]
# Collect the alphabetic tokens of each section body as that language's keywords.
keywords = {i.split("\n")[0]: [j for j in re.findall("[a-zA-Z]*", i.split("\n", 1)[1]) if j] for i in keywords}
keywords = pd.DataFrame(pd.Series(keywords)).reset_index().rename(columns={"index": "language", 0: "keywords"})
keywords['language'] = keywords['language'].str.split("\) ").str[0]
keywords['keywords'] = keywords['keywords'].apply(lambda x: sorted(list(set(x))))
ds = Dataset.from_pandas(keywords)
```
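A small usage sketch for the resulting dataset (column names from the schema above; the exact language names follow the source README's headings):

```python
from datasets import load_dataset

ds = load_dataset("bigcode/programming-languages-keywords", split="train")

# Build a language -> keyword-set lookup table from the 36 rows.
keywords_by_language = {row["language"]: set(row["keywords"]) for row in ds}

print(sorted(keywords_by_language)[:5])  # inspect the available language names
```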
swype/instruct
---
license: mit
---

# A large instruct dataset

This dataset is a combination of multiple sources, including the GPT4All dataset, the Alpaca dataset from Stanford, custom generation using AllenAI augmentation, and some dataset augmentation from open-source Meta datasets. The dataset is split into 70% for training, 20% for validation, and 10% for testing.

## Description

The Swype.com dataset contains prompt and completion pairs for various tasks. It's an augmented version of the following datasets:

- [GPT4All](https://github.com/nomic-ai/gpt4all): A dataset containing a wide range of tasks for training and evaluating general-purpose language models.
- [Alpaca dataset from Stanford](https://github.com/tatsu-lab/stanford_alpaca): A dataset containing prompts, completions, and annotations for controllable text generation.
- Custom generation using [AllenAI augmentation](https://allenai.org): Augmentation performed using the advanced NLP tools provided by AllenAI.
- Some dataset augmentation from open-source Meta datasets: Additional augmentation from various open-source Meta datasets.

The dataset is designed for training and evaluating language models on diverse tasks, with a focus on controllable and instruction-based text generation.

## Dataset Structure

The dataset contains the following columns:

- `prompt`: The input prompt string, representing a task or question.
- `completion`: The output completion string, representing the answer or generated text based on the prompt.

## Citation

If you use this dataset in your research or work, please cite it as follows:

```
@misc{srikanth2023swypedataset,
  author = {Srikanth Srinivas},
  title = {Swype.com Dataset},
  year = {2023},
  publisher = {Swype.com},
  howpublished = {\url{https://swype.com}},
  email = {s@swype.com}
}
```
camel-ai/ai_society
---
license: cc-by-nc-4.0
language:
- en
tags:
- instruction-finetuning
pretty_name: CAMEL AI Society
task_categories:
- text-generation
arxiv: 2303.17760
extra_gated_prompt: "By using this data, you acknowledge and agree to utilize it solely for research purposes, recognizing that the dataset may contain inaccuracies due to its artificial generation through ChatGPT."
extra_gated_fields:
  Name: text
  Email: text
  I will adhere to the terms and conditions of this dataset: checkbox
---

# **CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society**

- **Github:** https://github.com/lightaime/camel
- **Website:** https://www.camel-ai.org/
- **Arxiv Paper:** https://arxiv.org/abs/2303.17760

## Dataset Summary

The AI Society dataset is composed of 25K conversations between two gpt-3.5-turbo agents. This dataset is obtained by running role-playing for a combination of 50 user roles and 50 assistant roles, with each combination running over 10 tasks. We provide two formats: a "chat" format (the `ai_society_chat.tar.gz` file), containing the full conversations, and an "instruction" format (`ai_society_instructions.json`), containing instruction-following examples derived from the conversations.

## Data Fields

**The data fields for the instruction format (`ai_society_instructions.json`) are as follows:**

* `id`: {assistant\_role\_index}\_{user\_role\_index}\_{task\_index}; for example, 001_002_003 refers to assistant role 1, user role 2, and task 3 from our assistant role name, user role name and task text files.
* `role_1`: assistant role
* `role_2`: user role
* `original_task`: the general assigned task for the assistant and user to cooperate on.
* `specified_task`: the task after task specification; this task is more specific than the original task.
* `role_1_response`: user response text before the instruction.
* `role_1_message_id`: message ID in the full raw conversation.
* `instruction`: describes the task the assistant is supposed to perform.
* `input`: provides further context or information for the requested instruction.
* `output`: the answer to the instruction as generated by gpt-3.5-turbo.
* `termination_reason`: refers to the reason for termination of the chat.

**The data fields for the chat format (`ai_society_chat.tar.gz`) are as follows:**

* `input`: {assistant\_role\_index}\_{user\_role\_index}\_{task\_index}; for example, 001_002_003 refers to assistant role 1, user role 2, and task 3 from our assistant role name, user role name and task text files.
* `role_1`: assistant role
* `role_2`: user role
* `original_task`: the general assigned task for the assistant and user to cooperate on.
* `specified_task`: the task after task specification; this task is more specific than the original task.
* `message_k`: refers to the k<sup>_th_</sup> message of the conversation.
* `role_type`: refers to whether the agent is an assistant or a user.
* `role_name`: refers to the assigned assistant/user role.
* `role`: refers to the role of the agent during the message for the OpenAI API. [usually not needed]
* `content`: refers to the content of the message.
* `termination_reason`: refers to the reason for termination of the chat.
* `num_messages`: refers to the total number of messages in the chat.
**Download in python**

```python
from huggingface_hub import hf_hub_download

hf_hub_download(repo_id="camel-ai/ai_society", repo_type="dataset",
                filename="ai_society_chat.tar.gz",
                local_dir="datasets/", local_dir_use_symlinks=False)
```

### Citation

```
@misc{li2023camel,
  title={CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society},
  author={Guohao Li and Hasan Abed Al Kader Hammoud and Hani Itani and Dmitrii Khizbullin and Bernard Ghanem},
  year={2023},
  eprint={2303.17760},
  archivePrefix={arXiv},
  primaryClass={cs.AI}
}
```

## Disclaimer

This data was synthetically generated by gpt-3.5-turbo and might contain incorrect information. The dataset is intended for research purposes only.
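To unpack the chat archive fetched above, a sketch using the standard library (the per-conversation JSON layout inside the archive is assumed from the field list in this card):

```python
import tarfile
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="camel-ai/ai_society", repo_type="dataset",
                       filename="ai_society_chat.tar.gz", local_dir="datasets/")

# Extract the per-conversation JSON files for local processing.
with tarfile.open(path, "r:gz") as tar:
    tar.extractall(path="datasets/ai_society_chat")
```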
ldhnam/deepfashion_controlnet
---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: openpose
    dtype: image
  - name: cloth
    dtype: image
  - name: caption
    dtype: string
  splits:
  - name: train
    num_bytes: 3781524968.6950803
    num_examples: 13670
  - name: test
    num_bytes: 2489665.30491995
    num_examples: 9
  download_size: 3766499657
  dataset_size: 3784014634.0
---

# Dataset Card for "deepfashion_controlnet"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
linhtran92/viet_youtube_asr_corpus_v2
---
dataset_info:
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: transcription
    dtype: string
  - name: w2v2_transcription
    dtype: string
  - name: WER
    dtype: int64
  - name: sum
    dtype: float64
  splits:
  - name: train
    num_bytes: 16003127322.525991
    num_examples: 194747
  - name: test
    num_bytes: 1778161779.8073397
    num_examples: 21639
  download_size: 17552533348
  dataset_size: 17781289102.333332
---

# Dataset Card for "viet_youtube_asr_corpus_v2"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
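Since the schema above exposes a per-utterance `WER` column, a hedged filtering sketch (the threshold, and the assumption that `WER` is a percentage, are illustrative):

```python
from datasets import load_dataset

corpus = load_dataset("linhtran92/viet_youtube_asr_corpus_v2", split="test")

sample = corpus[0]
print(sample["audio"]["sampling_rate"])  # 16000, per the schema
print(sample["transcription"])

# Assumption: WER is stored as a percentage; keep near-agreeing utterances.
clean = corpus.filter(lambda ex: ex["WER"] <= 10)
print(len(clean), "of", len(corpus), "utterances kept")
```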
declare-lab/InstructEvalImpact
---
license: apache-2.0
size_categories:
- n<1K
ArXiv: 2306.04757
---

# Project Links

# Dataset Description

The IMPACT dataset contains 50 human-created prompts for each category, 200 in total, to test the general writing ability of LLMs. Instructed LLMs demonstrate promising ability in writing-based tasks, such as composing letters or ethical debates. This dataset consists of prompts across 4 diverse usage scenarios:

- **Informative Writing**: User queries such as self-help advice or explanations of various concepts
- **Professional Writing**: Suggestions, presentations, or emails in a business setting
- **Argumentative Writing**: Debate positions on ethical and societal questions
- **Creative Writing**: Diverse writing formats such as stories, poems, and songs

The IMPACT dataset is included in our [InstructEval Benchmark Suite](https://github.com/declare-lab/instruct-eval).

# Evaluation Results

We leverage ChatGPT to judge the quality of the answers generated by LLMs, in terms of:

- Relevance: how well the answer engages with the given prompt
- Coherence: general text quality such as organization and logical flow

Each answer is scored on a Likert scale from 1 to 5. We evaluate the models in the zero-shot setting based on the given prompt and perform sampling-based decoding with a temperature of 1.0.

| **Model** | **Size** | **Informative** | | **Professional** | | **Argumentative** | | **Creative** | | **Avg.** | |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| | | Rel. | Coh. | Rel. | Coh. | Rel. | Coh. | Rel. | Coh. | Rel. | Coh. |
| **ChatGPT** | - | 3.34 | 3.98 | 3.88 | 3.96 | 3.96 | 3.82 | 3.92 | 3.94 | 3.78 | 3.93 |
| [**Flan-Alpaca**](https://huggingface.co/declare-lab/flan-alpaca-xxl) | 11B | 3.56 | 3.46 | 3.54 | 3.70 | 3.22 | 3.28 | 3.70 | 3.40 | 3.51 | 3.46 |
| [**Dolly-V2**](https://huggingface.co/databricks/dolly-v2-12b) | 12B | 3.54 | 3.64 | 2.96 | 3.74 | 3.66 | 3.20 | 3.02 | 3.18 | 3.30 | 3.44 |
| [**StableVicuna**](https://huggingface.co/TheBloke/stable-vicuna-13B-HF) | 13B | 3.54 | 3.64 | 2.96 | 3.74 | 3.30 | 3.20 | 3.02 | 3.18 | 3.21 | 3.44 |
| [**Flan-T5**](https://huggingface.co/google/flan-t5-xxl) | 11B | 2.64 | 3.24 | 2.62 | 3.22 | 2.54 | 3.40 | 2.50 | 2.72 | 2.58 | 3.15 |

# Citation

Please consider citing the following article if you found our work useful:

```bibtex
@article{chia2023instructeval,
  title={INSTRUCTEVAL: Towards Holistic Evaluation of Instruction-Tuned Large Language Models},
  author={Yew Ken Chia and Pengfei Hong and Lidong Bing and Soujanya Poria},
  journal={arXiv preprint arXiv:2306.04757},
  year={2023}
}
```
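A minimal sketch of the judging setup described above; the paper's exact prompt template is not reproduced here, so this wording is illustrative only:

```python
# Illustrative judge template -- not the paper's exact wording.
JUDGE_TEMPLATE = """Rate the following answer on a Likert scale from 1 (worst) to 5 (best) for:
- Relevance: how well the answer engages with the given prompt
- Coherence: general text quality such as organization and logical flow

Prompt: {prompt}
Answer: {answer}

Respond in the form: relevance=<1-5> coherence=<1-5>"""

def build_judge_prompt(prompt: str, answer: str) -> str:
    """Fill the template for one (prompt, answer) pair to send to the judge model."""
    return JUDGE_TEMPLATE.format(prompt=prompt, answer=answer)
```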
datadrivenscience/movie-genre-prediction
---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: movie_name
    dtype: string
  - name: synopsis
    dtype: string
  - name: genre
    dtype: string
  splits:
  - name: train
    num_bytes: 10488729
    num_examples: 54000
  - name: test
    num_bytes: 6965864
    num_examples: 36000
  download_size: 11902232
  dataset_size: 17454593
---

# Dataset Card for Movie Genre Prediction

Link to the [Movie Genre Prediction competition](https://huggingface.co/spaces/competitions/movie-genre-prediction)

By accessing this dataset, you accept the rules of the Movie Genre Prediction competition.

# Organizer

The organizer of this competition is [Data-Driven Science](https://datadrivenscience.com/).

[Join our FREE 3-Day Object Detection Challenge!](https://datadrivenscience.com/free-object-detection-challenge/)

<img src="https://datadrivenscience.com/wp-content/uploads/2022/12/DDS-Logo.png" width="200" height="100">

# Email Usage

By accessing this dataset, you consent to your email being used for communication purposes by Data-Driven Science. We do not share or sell our mailing list. Your information remains confidential. You may unsubscribe at any time.
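A quick-start sketch for the competition data (column names from the schema above; genre values in the test split may be competition placeholders rather than true labels):

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("datadrivenscience/movie-genre-prediction")

# Inspect the training label distribution before building a classifier.
print(Counter(ds["train"]["genre"]).most_common())
```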
d0rj/OpenOrca-ru
---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: system_prompt
    dtype: string
  - name: question
    dtype: string
  - name: response
    dtype: string
  splits:
  - name: train
    num_bytes: 11568757682
    num_examples: 4233923
  download_size: 5699482220
  dataset_size: 11568757682
size_categories:
- 1M<n<10M
language_creators:
- translated
language:
- ru
multilinguality:
- monolingual
pretty_name: OpenOrca (ru)
source_datasets:
- Open-Orca/OpenOrca
license: mit
tags:
- ChatGPT
- instruct
- instruct-tune
task_categories:
- conversational
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- summarization
- feature-extraction
- text-generation
- text2text-generation
paperswithcode_id: orca-progressive-learning-from-complex
---

# OpenOrca-ru

## Dataset Description

- **Paper:** https://arxiv.org/abs/2306.02707

This is a translated version of [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) into Russian.
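A usage sketch assembling a single training prompt from the fields above (the chat template is illustrative; substitute the template of your target model):

```python
from datasets import load_dataset

# Stream to avoid downloading the full ~5.7 GB archive up front.
orca_ru = load_dataset("d0rj/OpenOrca-ru", split="train", streaming=True)

example = next(iter(orca_ru))

# Illustrative prompt assembly from system_prompt / question / response.
prompt = f"{example['system_prompt']}\n\nUser: {example['question']}\nAssistant:"
print(prompt)
print(example["response"][:200])
```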
P1ayer-1/college_textbooks
---
dataset_info:
  features:
  - name: authors
    dtype: string
  - name: color
    sequence: float64
  - name: depth
    dtype: int64
  - name: field
    dtype: string
  - name: id
    dtype: int64
  - name: match_count
    dtype: int64
  - name: position
    sequence: float64
  - name: title
    dtype: string
  splits:
  - name: train
    num_bytes: 104695663
    num_examples: 637395
  download_size: 59625974
  dataset_size: 104695663
---

# Dataset Card for "college_textbooks"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Irza/Arxiv_ph_indonesia
---
license: mit
task_categories:
- question-answering
language:
- id
pretty_name: Arxiv Physics Translated to Indonesian
---
Suprit/CMtMedQA
---
license: mit
---
pkufool/libriheavy
---
license: apache-2.0
---

# Libriheavy: a 50,000 hours ASR corpus with punctuation casing and context

Libriheavy is a labeled version of [Librilight](https://github.com/facebookresearch/libri-light); read our [paper](https://arxiv.org/abs/2309.08105) and see https://github.com/k2-fsa/libriheavy for more details.

## Citation

```
@misc{kang2023libriheavy,
  title={Libriheavy: a 50,000 hours ASR corpus with punctuation casing and context},
  author={Wei Kang and Xiaoyu Yang and Zengwei Yao and Fangjun Kuang and Yifan Yang and Liyong Guo and Long Lin and Daniel Povey},
  year={2023},
  eprint={2309.08105},
  archivePrefix={arXiv},
  primaryClass={eess.AS}
}
```
allenai/MADLAD-400
---
license: odc-by
task_categories:
- text-generation
size_categories:
- n>1T
---

# MADLAD-400

## Dataset and Introduction

[MADLAD-400 (*Multilingual Audited Dataset: Low-resource And Document-level*)](https://arxiv.org/abs/2309.04662) is a document-level multilingual dataset based on Common Crawl, covering 419 languages in total. This uses all snapshots of CommonCrawl available as of August 1, 2022. The primary advantage of this dataset over similar datasets is that it is more multilingual (419 languages), it is audited and more highly filtered, and it is document-level. The main disadvantage is also its strength -- being more filtered, it may lack the recall needed for some applications.

There are two versions released: the **noisy** dataset, which has no filtering except document-level LangID, and the **clean** dataset, which has a variety of filters applied, though it naturally has a fair amount of noise itself. Each dataset is released in a document-level form that has been deduplicated.

## Loading

You can load both the clean and noisy versions of any language by specifying its LangID:

~~~
madlad_abt = load_dataset("allenai/madlad-400", "abt")
~~~

A list of languages can also be supplied with a keyword argument:

~~~
madlad_multilang = load_dataset("allenai/madlad-400", languages=["abt", "ace"])
~~~

Additionally, you can load the noisy and clean subsets separately with the split keyword argument:

~~~
madlad_multilang_clean = load_dataset("allenai/madlad-400", languages=["abt", "ace"], split="clean")
~~~

## LangID model and Crawl

Following [Language Id In the Wild](https://arxiv.org/pdf/2010.14571.pdf), we trained a Semi-Supervised LangId model (SSLID) on 500 languages. The training data is as described in that paper, with the differences that 1) training data is sampled to a temperature of `T=3` to reduce over-triggering on low-resource languages; and 2) the data is supplemented with web-crawled data from the same paper (that has already been through the various filters described therein) in the hopes that it will increase robustness to web-domain text.

## Filtering

Before separating the raw CommonCrawl corpus by LangID, these filtering steps are done, similar to Raffel et al. (2020):

- Discarded any page with fewer than 5 sentences and only retained lines that contained at least 3 words.
- Removed any line with the word Javascript.
- Removed any page where the phrase "lorem ipsum" appeared.
- Removed any pages containing the phrases "terms of use", "privacy policy", "cookie policy", "uses cookies", "use of cookies", "use cookies"
- Removed any pages that contained a curly bracket.
- To deduplicate the data set, discarded all but one of any three-sentence span occurring more than once in the data set.

The `noisy` subset of the data was filtered only by document-level LangID, which was taken to be the majority sentence-level LangID prediction. The `clean` subset removed all documents with a `pct_questionable` score greater than 20%. It furthermore removed any document with under 5 sentences.

The `pct_questionable` score is simply the percentage of sentences in the input document that were "questionable". A sentence was considered questionable if any of the following were true:

* **LangID Consistency:** the sentence-level LangID does not match the document-level LangID
* **List Case:** The sentence has at least 12 tokens, and over 50% of the tokens begin with a capital letter.
* **Length:** The sentence has under 20 characters or over 500 characters (note: this is a bad heuristic for ideographic languages)
* **Danger Chars:** Over 20% of the characters in the sentence match `[0-9{}+/()>]`
* **Cursedness:** The sentence matches a cursed regex (see below)

### Cursed Substrings

Based on the initial round of data audits, the authors created a heuristic list of substrings and regexes accounting for a large amount of questionable content. Keep in mind that these all are fed into the `pct_questionable` score -- a sentence is only excluded from the `clean` dataset if over 20% of the sentences in that document are flagged as questionable.

Notes about cursed substrings:

* Low-quality sentences ending in the pipe character were very common. Before you ask, this was not Devanagari-script text using a Danda.
* The last few regexes are meant to match `A N T S P E A K`, `List Case`, and weirdly regular text (for instance, lists of shipping labels or country codes)

```
import re

# This implementation is for demonstration and is pretty inefficient;
# to speed it up, use string inclusion (`in`) instead of regex for all but the
# last four, and for those use a compiled regex.
def is_cursed(s):
    return any(re.search(curse, s) for curse in CURSED_SUBSTRINGS)

CURSED_SUBSTRINGS = [" №", "���", "\\|\\s*$", " nr\\.$", "aute irure dolor ", " sunt in culpa qui ", "orem ipsum ", " quis nostrud ", " adipisicing ", " dolore eu ", " cupidatat ", "autem vel eum", "wisi enim ad", " sex ", " porn ", "黄色电影", "mp3", "ownload", "Vol\\.", " Ep\\.", "Episode", " г\\.\\s*$", " кг\\.\\s*$", " шт\\.", "Develop", "Facebook", " crusher ", " xxx ", " ... ... ... ... ... ... ... ... ...", " .... .... .... .... .... .... .... .... ....", " [^ ] [^ ] [^ ] [^ ] [^ ] [^ ] [^ ] [^ ] [^ ]", ", ..,,? ..,,? ..,,? ..,,?"]
```

### Virama Correction

Many languages using Brahmic Abugida (South and Southeast Asian scripts like Devanagari, Khmer, etc.) use some variant on the virama character. For whatever reason, it was found that this character was often messed up in the common crawl snapshots used. Therefore, for the languages `bn my pa gu or ta te kn ml si th tl mn lo bo km hi mr ne gom as jv dv bho dz hne ks_Deva mag mni shn yue zh ja kjg mnw ksw rki mtr mwr xnr`, a special correction step was done. For these languages, the authors took the list of all virama characters and removed all unnecessary spaces between each instance of a virama character and the next character with a regex:

```
'%s' % regex.sub(r' ([%s]) ' % _VIRAMA_CHARS, '\\1', x)
```

### Myanmar Font Compatibility

Prior to 2019, the most popular font for Burmese websites was the Zawgyi font. The authors used [Myanmar Tools](https://github.com/google/myanmar-tools) to convert text.

Several scripts, like the Chinese script, Tibetan script, and Thai, do not use whitespace to separate characters. The languages with this property in this dataset are `yue zh ja th lo kjg mnw my shn ksw rki km bo dz`. Alas, the **Length** aspect of the `pct_questionable` score was calculated using simplistic whitespace tokenization, and therefore rendered the whole `pct_questionable` score invalid for those languages. Therefore, for these languages, the "clean" data is identical to the "noisy" data (barring Chinese; see below.)

### Special filters

Chinese had a particular issue with pornographic content. After manual inspection, a list of strings likely to be present in pornographic content was developed. All pages containing at least one of these strings were removed.
This resulted in a 17% reduction in the number of documents and a 56% reduction in file size.

```
pornsignals = "caoporn caoprom caopron caoporen caoponrn caoponav caopom caoorn 99re dy888 caopro hezyo re99 4438x zooskool xfplay 7tav xxoo xoxo 52av freexx 91chinese anquye cao97 538porm 87fuli 91pron 91porn 26uuu 4438x 182tv kk4444 777me ae86 91av 720lu yy6080 6080yy qqchub paa97 aiai777 yy4480 videossexo 91free 一级特黄大片 偷拍久久国产视频 日本毛片免费视频观看 久久免费热在线精品 高清毛片在线看 日本毛片高清免费视频 一级黄色录像影片 亚洲男人天堂 久久精品视频在线看 自拍区偷拍亚洲视频 亚洲人成视频在线播放 色姑娘综合站 丁香五月啪啪 在线视频成人社区 亚洲人成视频在线播放 久久国产自偷拍 一本道 大香蕉无码 香港经典三级 亚洲成在人线免费视频 天天色综合网 大香蕉伊人久草 欧美一级高清片 天天鲁夜夜啪视频在线 免费黄片视频在线观看 加比勒久久综合 久草热久草在线视频 韩国三级片大全在线观看 青青草在线视频 美国一级毛片 久草在线福利资源 啪啪啪视频在线观看免费 成人福利视频在线观看 婷婷我去也 老司机在线国产 久久成人视频 手机看片福利永久国产 高清国产偷拍在线 大香蕉在线影院 日本高清免费一本视频 男人的天堂东京热 影音先锋男人资源 五月婷婷开心中文字幕 亚洲香蕉视频在线播放 天天啪久久爱视频精品 超碰久久人人摸人人搞".split()
```

A few more random notes, compared with common alternative codes for these languages:

* `fil` for Filipino/Tagalog, not `tl`
* `ak` for Twi/Akan, rather than `tw`. This includes Fante.
* The macro code `chm` is unfortunately used for Meadow Mari (instead of the correct `mhr`), and `mrj` for Hill Mari
* `no` for Norwegian Bokmål, whereas some resources use `nb`
* `ps` for Pashto instead of `pbt` (Southern Pashto)
* `ms` for Standard Malay, not `zlm`
* `sq` for Albanian, without distinguishing dialects like Gheg (`aln`) and Tosk (`als`)
* `ber` as the code for Tamazight, after consultation with Tamazight speakers opining that the dialect distinctions are not significant. Other resources use the individual codes like `tzm` and `kab`.
* Macrocode `qu` for Quechua. In practice, this seems usually to be a mix of the Ayacucho and Cusco dialects. Other resources, like NLLB, may use the dialect code, e.g. `quy` for Ayacucho Chanka. The same is true for a few other macro codes, like `ff` (macro code for Fulfulde, whereas other sources may use e.g. `fuv`.)
* Really, there are notes that can be made about almost any code, from the well-accepted conventions like `zh` for Mandarin, to many dialectical notes, like which variant of Hmong really is the `hmn` data? But the above ones are made specifically for ones where the authors are aware of other datasources floating out there that use different conventions.

## Audit

Following [Quality at a Glance](https://arxiv.org/abs/2103.12028), the authors performed an "audit" of every corpus in this dataset. Although the authors did not speak most languages, they were able to give high-level comments on the general quality. They looked at a sample of 20 documents of each language.

After an initial round of auditing, they devised a new set of filters and applied them. They then re-did all audits.

### Overall notes from the audit

The decision was to **include languages that looked noisy, but omit any language that was clearly majority noise, or only had 20 or fewer docs.** This is a low bar -- twenty documents can be very little indeed, and some of the corpora released are quite noisy, but all of them should have at least the potential to be used in some useful way. The motivation for not releasing nonsense or tiny datasets is to not give a false sense of how multilingual this dataset actually is ("Representation washing"), as recommended by **Quality at a Glance**.

A few overarching points:

* Many low-resource languages only had Bible text, or in some cases jw.org data. These are marked in the rows below.
  Generally `ok bible` means that 100% of the audited sentences were Biblical, whereas if `bible` is simply mentioned in the note, it was not the only source of data.
* Indian languages in the Latin script had a high concentration of pornographic content.

### Renames and Merges as a result of the Audit

In several cases, it was clear from the audit that the corpora were not in the languages that the LangID model claimed they were. This led to the following renames:

* dty renamed to `zxx-xx-dtynoise`, aka a "language" of noise. This is mainly mis-rendered PDFs and may have some practical applications for decoding said PDFs.
* `fan` renamed to `bum`
* `ss-SZ` renamed to `ss` -- this was just a result of us having inconsistent data labels.
* `cjk` merged into the `gil` dataset
* `bjj` merged into the `awa` dataset

## Canaries

Canaries are provided in a separate `canaries` folder. Canaries are organized into three directories: `monolingual` hosts canaries designed for the MADLAD-400 monolingual data, `multiway` for the multiway data, and `generic` the generic canaries generated only from the model's vocabulary.

* Monolingual: Canaries here are organized by the language the canary was generated from. This corresponds exactly to the `translate_copy` setting in the paper, where the source and target language match.
* Multiway: Canaries here are organized in one of two fashions. `to_XX` indicates canaries organized by the target language (and where the source language could be any language). `XX-XX` indicates the canaries (interleaved_both and interleaved_mislabeled_both) designed for a specific pair of languages.

Within each subdirectory above, canaries are split into separate files named by the canary type. There is always only a single file for each canary type. The `generic` folder contains within it the four canary types.

Canaries can be mixed in with normal training data and then analyzed post-hoc after training.

## References

Raffel, Colin, et al. "Exploring the limits of transfer learning with a unified text-to-text transformer." J. Mach. Learn. Res. 21.140 (2020): 1-67.

## Contact

Please reach out to {snehakudugunta, icaswell}꩜google.com. For questions about the canaries, reach out to cchoquette@google.com

## License

This data is released with the `CC-BY-4.0` license.

## Detailed notes from the audit

Here are the notes on all languages, along with the number of documents found, and the final decision made with respect to including the language in this dataset.

| Lang. | note | N | decision |
| --------------- | ------------------------ | ---------- | --------------- |
| en | ok | 1838712272 | keep |
| ru | ok | 402458746 | keep |
| es | good | 250906994 | keep |
| de | ok | 225111495 | keep |
| fr | ok | 218863911 | keep |
| it | ok | 126406256 | keep |
| pt | ok | 124207090 | keep |
| pl | ok | 90908786 | keep |
| nl | ok | 86594116 | keep |
| tr | ok | 56417359 | keep |
| vi | ok | 54988654 | keep |
| cs | ok | 38254671 | keep |
| id | ok | 37979244 | keep |
| ro | ok | 35397563 | keep |
| sv | ok.
Also the last language (suz) is "ok bible" | 35153050 | keep |
| hu | ok | 29677075 | keep |
| uk | ok | 24968305 | keep |
| fa | idk ask a farsi speaker; ALI: OK | 23138888 | keep |
| ja | ok a little en mixed in | 21818123 | keep |
| el | ok | 20932239 | keep |
| fi | ok | 20433664 | keep |
| da | ok | 17865888 | keep |
| th | ok | 17439979 | keep |
| no | ok | 14864710 | keep |
| bg | ok | 12755329 | keep |
| ko | ok | 12653878 | keep |
| ar | good | 12411641 | keep |
| sk | ok | 11857945 | keep |
| ca | ok | 9477390 | keep |
| lt | ok | 8748025 | keep |
| iw | ok | 7194574 | keep |
| sl | ok | 6310419 | keep |
| et | ok | 5542933 | keep |
| lv | ok | 5007982 | keep |
| hi | ok some porn | 4512205 | keep |
| sq | good | 3622957 | keep |
| az | good | 3256331 | keep |
| hr | ok | 2841400 | keep |
| ta | ok | 2594191 | keep |
| ms | ok | 2337672 | keep |
| ml | ok | 2072605 | keep |
| sr | ok | 2010607 | keep |
| kk | ok | 1810963 | keep |
| te | ok a lot of weirdly low quality looking content like commerce | 1682441 | keep |
| mr | ok fix virama | 1673848 | keep |
| is | ok | 1560913 | keep |
| bs | good | 1362582 | keep |
| mk | ok | 1358293 | keep |
| gl | ok | 1253170 | keep |
| eu | ok | 1155671 | keep |
| bn | ok | 1138848 | keep |
| be | ok | 1092785 | keep |
| ka | ok | 936497 | keep |
| fil | ok more bible than expected for such a major language | 901507 | keep |
| mn | ok mongolian cyrillic | 879878 | keep |
| af | good | 868671 | keep |
| uz | ok some cyrillic noise | 669909 | keep |
| gu | ok | 659727 | keep |
| kn | ok | 657846 | keep |
| kaa | ok cyrillic | 586361 | keep |
| sw | ok | 537847 | keep |
| ur | ok | 467236 | keep |
| ne | ok | 453349 | keep |
| cy | ok; was terrible before filtering short docs | 430719 | keep |
| hy | ok | 397523 | keep |
| ky | ok | 367577 | keep |
| si | good | 349220 | keep |
| tt | good plus some nonunicode misrendered PDF | 346927 | keep |
| tg | good | 328194 | keep |
| la | ok some broken chars | 319178 | keep |
| so | good | 293218 | keep |
| ga | ok some en noise | 285999 | keep |
| km | ok | 285740 | keep |
| mt | ok | 265388 | keep |
| eo | ok; likely a lot of MT | 259971 | keep |
| ps | ok | 252888 | keep |
| rw | ok | 226466 | keep |
| ku | ok | 218850 | keep |
| lo | ok many entities in latin script | 215982 | keep |
| fy | ok plausible but i bet there is a lot of nl in there | 210025 | keep |
| ha | ok | 173485 | keep |
| my | filter noise and en fix virama | 172401 | keep |
| dv | good | 167179 | keep |
| pa | ok | 150588 | keep |
| ckb | ok | 148870 | keep |
| lb | ok | 145988 | keep |
| mg | ok some bible jw | 115387 | keep |
| ht | ok | 110443 | keep |
| ug | ok | 106549 | keep |
| am | good | 106301 | keep |
| or | ok | 100530 | keep |
| fo | good | 97754 | keep |
| gd | ok | 94275 | keep |
| ba | ok | 90318 | keep |
| tk | ok; a few weird docs | 82495 | keep |
| mi | ok | 79509 | keep |
| hmn | ok | 75213 | keep |
| grc | ok some bible | 70730 | keep |
| jv | ok | 69473 | keep |
| ceb | ok | 66164 | keep |
| sd | good | 65858 | keep |
| yi | ok | 64949 | keep |
| kaa-Latn | ok urls are .ru or .kz | 61169 | keep |
| sn | ok | 60196 | keep |
| co | ok; i suspect lots of MT | 55387 | keep |
| su | good | 54968 | keep |
| pap | ok | 54498 | keep |
| ig | ok | 54410 | keep |
| zu | good | 53809 | keep |
| xh | ok | 53672 | keep |
| sm | ok | 52614 | keep |
| ny | ok | 52244 | keep |
| yo | ok | 52067 | keep |
| cv | good | 47318 | keep |
| el-Latn | good; a lot of old content! | 46428 | keep |
| kl | ok | 46027 | keep |
| haw | ok scam tv products | 45670 | keep |
| gsw | wtf is happening here; keep with disclaimer; STILL BOILERPLATE | 42712 | keep |
| tet | good ; actually a lot of fun data! | 40367 | keep |
| st | ok | 40360 | keep |
| lus | ok | 36437 | keep |
| oc | ok | 36379 | keep |
| as | good | 33825 | keep |
| rm | ok | 33805 | keep |
| br | ok after shortfilter | 33219 | keep |
| sah | ok | 29169 | keep |
| hi-Latn | filter porn this is half porn | 26723 | keep |
| se | good | 23872 | keep |
| cnh | good, some local news! not sure if WL | 21556 | keep |
| om | ok | 18895 | keep |
| ce | ok | 14968 | keep |
| udm | ok | 13376 | keep |
| lg | ok lot of www.bukedde.co.ug in this | 13030 | keep |
| os | ok | 12623 | keep |
| nv | ok | 12578 | keep |
| kha | ok | 12070 | keep |
| ilo | ok some bible | 11754 | keep |
| ctd-Latn | ok; from some local news? | 11629 | keep |
| vec | very noisy has wiki from other langs and .it websites so not sure if vec | 11108 | keep |
| hil | ok some en boilerplate | 10564 | keep |
| tyv | ok fun stuff plus some russian noise i think | 9083 | keep |
| iba | ok jw data | 7638 | keep |
| ru-Latn | ok | 7523 | keep |
| kbd | ok many .ru | 7486 | keep |
| ti | ok; poor tigray | 7288 | keep |
| sa | ok | 7117 | keep |
| av | good | 6331 | keep |
| bo | needs some serious script filtering. but there is some ok data in there. | 6226 | keep |
| zza | good | 6019 | keep |
| ber-Latn | ok | 5612 | keep |
| otq | ok | 5554 | keep |
| te-Latn | great good text....but mostly pornographic | 5305 | keep |
| bua | ok | 5264 | keep |
| ts | good | 5198 | keep |
| cfm | ok mostly from chinland.co | 4858 | keep |
| tn | good | 4821 | keep |
| krc | ok | 4815 | keep |
| ak | good; much but not all bible | 4768 | keep |
| meo | ok mostly blogs | 4655 | keep |
| chm | ok; fyi watch out for yandex translationese | 4653 | keep |
| to | good ; news bible government | 4612 | keep |
| ee | good; mostly religious | 4536 | keep |
| nso | ok | 4422 | keep |
| ady | good | 4206 | keep |
| rom | bible | 4187 | keep |
| bho | mostly from anjoria.com. Looks like valid Bhojpuri. | 4121 | keep |
| ltg | ok mostly www.lakuga.lv | 4120 | keep |
| fj | ok | 3976 | keep |
| yua | ok | 3965 | keep |
| gn | ok some broken characters some bible | 3858 | keep |
| az-RU | good; a lot of JW | 3781 | keep |
| ln | ok bible jw | 3325 | keep |
| ada | good; bible; likely mixed with gaa | 3095 | keep |
| myv | maybe has .ru urls | 3095 | keep |
| bik | ok. keep in mind the bik vs bcl issue. | 3092 | keep |
| tlh | ok, but why tf are there websites in klingon? all MT ? | 3054 | keep |
| kbp | not sure if right script wiki says latin | 3036 | keep |
| war | ok but v sus. Pls filter out wikipedia | 2928 | keep |
| wa | ok lots of wiki stuff | 2772 | keep |
| bew | mostly blogs. idk if standard Indonesian or not | 2677 | keep |
| rcf | ok | 2630 | keep |
| ta-Latn | good text .... but pornographic | 2580 | keep |
| kac | ok | 2567 | keep |
| iu | filter script some is en rest is iu script | 2537 | keep |
| ay | good; mix of bible and other news sources | 2505 | keep |
| kum | ok | 2495 | keep |
| qu | ok | 2449 | keep |
| bgp | almost all ur-Latn. consider removing or renaming | 2427 | keep |
| hif | ok some en noise and religious | 2358 | keep |
| kw | ok short boilerplate bible wiki; ok some porn | 2324 | keep |
| nan-Latn-TW | ok | 2285 | keep |
| srn | ok bible + jw | 2281 | keep |
| tly-IR | deeply sus | 2239 | keep |
| sg | ok jw | 2106 | keep |
| gom | ok | 2102 | keep |
| ml-Latn | ok some short docs | 2071 | keep |
| kj | ok | 2062 | keep |
| ksd | ok bible | 2000 | keep |
| dz | ok; hidden parallel text; maybe actually bo; mainly buddhist | 1899 | keep |
| kv | ok a lil boilerplate vibes | 1878 | keep |
| msi | ok | 1870 | keep |
| ve | ok mostly bible jw | 1866 | keep |
| zap | ok JW. | 1803 | keep |
| zxx-xx-dtynoise | BEAUTIFUL NOISE rename but keep as beautiful example. (was called "dty") | 1765 | keep |
| meu | ok bible | 1728 | keep |
| iso | ok jw | 1721 | keep |
| ium | filter out zh | 1721 | keep |
| nhe | ok | 1714 | keep |
| tyz | ok bible but again i think some mixed dialects | 1707 | keep |
| hui | ok some bible | 1680 | keep |
| new | ok | 1634 | keep |
| mdf | ok some short docs | 1609 | keep |
| pag | bible | 1588 | keep |
| gv | filter short repetitive sentences; still same but keep | 1586 | keep |
| gag | has 1-2 cyrillic examples with small amts of arabic script noise | 1572 | keep |
| ngu | ok | 1534 | keep |
| quc | bible | 1526 | keep |
| mam | ok bible jw | 1513 | keep |
| min | ok mostly wiki and bible | 1474 | keep |
| ho | ok | 1466 | keep |
| pon | bible | 1462 | keep |
| mrj | ok | 1447 | keep |
| lu | ok jw | 1444 | keep |
| gom-Latn | ok very noisy ; some ok stuff ; release with disclaimer | 1432 | keep |
| alt | ok | 1422 | keep |
| nzi | ok | 1371 | keep |
| tzo | ok bible + jw | 1357 | keep |
| bci | ok bible | 1329 | keep |
| dtp | ok; mostly from www.newsabahtimes.com.my | 1309 | keep |
| abt | fine; bible | 1305 | keep |
| bbc | ok | 1274 | keep |
| pck | ok | 1255 | keep |
| mai | ok mild amounts of en noise | 1240 | keep |
| mps | ok bible | 1239 | keep |
| emp | ok bible | 1238 | keep |
| mgh | ok bible jw | 1222 | keep |
| tab | idk plausibly ok | 1202 | keep |
| crh | ok | 1184 | keep |
| tbz | good mostly bible but not all | 1126 | keep |
| ss | good mix of data ; renamed from "ss" | 1089 | keep |
| chk | ok bible | 1082 | keep |
| bru | ok; bible | 1072 | keep |
| nnb | ok | 1071 | keep |
| fon | ok mostly jw but not all | 1065 | keep |
| ppk | bible | 1063 | keep |
| tiv | ok jw | 1063 | keep |
| btx | ok probably | 1009 | keep |
| bg-Latn | ok | 991 | keep |
| mbt | ok bible | 969 | keep |
| ace | good; bible | 966 | keep |
| tvl | ok jw | 933 | keep |
| dov | ok bible + jw | 923 | keep |
| ach | good; bible | 915 | keep |
| xal | ok has .ru sites though | 913 | keep |
| cuk | ok bible | 899 | keep |
| kos | ok lds bible | 881 | keep |
| crs | ok | 873 | keep |
| wo | ok; mostly bible. | 871 | keep |
| bts | ok; mostly bible | 869 | keep |
| ubu | ok bible | 846 | keep |
| gym | ok bible | 820 | keep |
| ibb | ok bible and repeated @ | 818 | keep |
| ape | good; bible | 814 | keep |
| stq | ok i think ? | 809 | keep |
| ang | much noise but some good Old English in there! | 803 | keep |
| enq | ok bible | 793 | keep |
| tsg | much noise but some good data too! | 789 | keep |
| shn | mostly English boilerplate. filter by latin text before releasing | 788 | keep |
| kri | ok boilerplate noise bible jw | 786 | keep |
| kek | ok jw bible | 782 | keep |
| rmc | ok | 738 | keep |
| acf | good; bible | 730 | keep |
| syr | good; practitioners should keep dialect in mind. | 716 | keep |
| qub | bible | 705 | keep |
| bm | good | 702 | keep |
| tzh | ok jw | 702 | keep |
| jiv | ok bible | 696 | keep |
| kn-Latn | filter en noise of karnataka govt websites | 688 | keep |
| kjh | ok .ru domain | 672 | keep |
| yap | ok | 638 | keep |
| ban | ok bible | 637 | keep |
| tuc | ok bible | 635 | keep |
| tcy | good; mostly wikipedia; likely some konkani mixed in | 632 | keep |
| cab | ok jw | 629 | keep |
| cak | ok bible | 617 | keep |
| din | ok after SD filter | 611 | keep |
| arn | good; bible | 593 | keep |
| lrc | ok | 587 | keep |
| gil | empty; but merged in data in "cjk" | 586 | keep |
| gil | this is all in gil (Kiribati). merged into "gil" | 586 | keep |
| rwo | bible | 572 | keep |
| hus | ok bible | 569 | keep |
| bum | ok bible; but wrong language. Data is in Bulu, not Fang | 559 | keep |
| mak | ok bible | 555 | keep |
| frp | fair amount from wikipedia. | 550 | keep |
| seh | ok jw | 545 | keep |
| twu | ok bible, but also i think it's lots of mixed similar dialects | 539 | keep |
| kmb | ok bible jw | 538 | keep |
| ksw | ok bible | 536 | keep |
| sja | ok bible | 527 | keep |
| amu | good; bible; crazy diacritics | 511 | keep |
| mad | remove mostly short text | 509 | keep |
| quh | bible | 501 | keep |
| dyu | ok bible | 483 | keep |
| toj | ok jw | 452 | keep |
| ch | ok; not sure about WL | 449 | keep |
| sus | hella sus jk ok bible | 437 | keep |
| nog | ok | 419 | keep |
| jam | ok bible | 416 | keep |
| gui | ok bible | 409 | keep |
| nia | ok | 408 | keep |
| mas | ok some amount of bible | 405 | keep |
| bzj | ok bible | 404 | keep |
| mkn | ok bible | 402 | keep |
| lhu | ok bible | 377 | keep |
| ctu | ok bible | 366 | keep |
| kg | ok bible jw | 365 | keep |
| inb | ok bible | 343 | keep |
| guh | ok bible | 331 | keep |
| rn | bible | 323 | keep |
| bus | ok; bible; about 50bzc | 322 | keep |
| mfe | ok mostly bible maybe some french creole short doc noise | 320 | keep |
| sda | ok bible | 317 | keep |
| bi | good! fun! | 311 | keep |
| cr-Latn | noise and lorem ipsum. But some ok Cree text. | 303 | keep |
| gor | ok bible | 303 | keep |
| jac | ok bible | 303 | keep |
| chr | ok bible | 301 | keep |
| mh | ok jw lds | 296 | keep |
| mni | ok | 290 | keep |
| wal | ok bible + jw | 286 | keep |
| teo | ok bible | 274 | keep |
| gub | ok bible | 271 | keep |
| qvi | bible | 266 | keep |
| tdx | ok jw | 262 | keep |
| rki | ok | 251 | keep |
| djk | ok; bible+jw | 246 | keep |
| nr | ok | 246 | keep |
| zne | ok jw | 239 | keep |
| izz | ok bible | 237 | keep |
| noa | ok | 234 | keep |
| bqc | ok; bible | 228 | keep |
| srm | ok; bible + jw | 227 | keep |
| niq | ok | 226 | keep |
| bas | ok; has some fun blog stuff! | 216 | keep |
| dwr | ok; bible; mixed script | 215 | keep |
| guc | ok bible | 214 | keep |
| jvn | ok bible | 213 | keep |
| hvn | ok religious text | 200 | keep |
| sxn | ok bible ; also wild diacritics | 197 | keep |
| koi | ok | 196 | keep |
| alz | good; bible | 195 | keep |
| nyu | ok | 195 | keep |
| bn-Latn | ok | 191 | keep |
| suz | | 186 | keep |
| pau | ok | 185 | keep |
| nij | ok | 183 | keep |
| sat-Latn | good! all from local news sources | 183 | keep |
| gu-Latn | filter short en boilerplate and repetitive sentences | 179 | keep |
| msm | ok bible | 177 | keep |
| maz | ok bible jw | 170 | keep |
| qxr | bible | 153 | keep |
| shp | ok bible | 150 | keep |
| hne | ok | 146 | keep |
| ktu | ok bible jw | 144 | keep |
| laj | ok bible | 144 | keep |
| pis | bible | 139 | keep |
| mag | ok fix virama issue | 138 | keep |
| gbm | ok | 137 | keep |
| tzj | ok bible | 136 | keep |
| oj | ok | 135 | keep |
| ndc-ZW | ok | 132 | keep |
| tks | ok bible but again i think some mixed dialects | 127 | keep |
| gvl | filter short boilerplate mostly bible | 126 | keep |
| knj | ok bible | 126 | keep |
| awa | all bible in awadhi (awa). Renamed from bjj | 126 | keep |
| spp | ok bible | 123 | keep |
| mqy | bible remove short docs | 119 | keep |
| tca | ok bible + jw | 117 | keep |
| cce | ok jw | 116 | keep |
| skr | ok; some pnb mixed in | 107 | keep |
| kmz-Latn | ok some ar script noise | 106 | keep |
| dje | ok; mostly but not all bible | 100 | keep |
| gof | ok some bible | 97 | keep |
| agr | good; bible | 93 | keep |
| qvz | bible | 88 | keep |
| adh | good; bible | 87 | keep |
| quf | bible | 86 | keep |
| kjg | ok bible | 84 | keep |
| tsc | ok | 82 | keep |
| ber | ok great! | 79 | keep |
| ify | ok bible | 79 | keep |
| cbk | ok bible | 78 | keep |
| quy | bible | 78 | keep |
| ahk | good; bible; crazy diacritics | 77 | keep |
| cac | ok bible | 77 | keep |
| akb | good; bible | 71 | keep |
| nut | ok | 67 | keep |
| ffm | ok bible; mixed fulfulde dialects; consider merging with ff | 65 | keep |
| taj | ok bible | 65 | keep |
| ms-Arab | ok mostly utusanmelayu website | 63 | keep |
| brx | quite good! | 62 | keep |
| ann | good; all from wikimedia incubator | 56 | keep |
| qup | bible | 53 | keep |
| ms-Arab-BN | ok not sure if same as ms-Arab | 46 | keep |
| miq | ok | 45 | keep |
| msb | ok bible | 41 | keep |
| bim | good; bible | 40 | keep |
| raj | ok | 40 | keep |
| kwi | ok bible | 37 | keep |
| tll | ok jw | 37 | keep |
| trp | good ; lots of random stuff | 36 | keep |
| smt | ok bible but lots of different bibles! | 34 | keep |
| mrw | ok | 29 | keep |
| dln | ok bible | 28 | keep |
| qvc | bible | 27 | keep |
| doi | ok actually nice! | 26 | keep |
| ff | ok after shortfilter | 26 | keep |
| zh | very noisy | 19850947 | keep (filtered) |
| zh-Latn | poor quality | 602 | remove |
| rhg-Latn | remove | 10302 | remove |
| ja-Latn | remove maybe low quality short and repeated | 7516 | remove |
| pam | remove | 2773 | remove |
| za | revisit after shortfilter | 1700 | remove |
| ar-Latn | terrible, 0% correct, remove | 1520 | remove |
| mnw | remove en noise and boilerplate | 1100 | remove |
| fip | ok jw ; but wrong language. mostly Mambwe-Lungu and Bemba, as well as Fipu (mgr+bem vs. fip) | 729 | remove |
| el-CY | bad; not Cypriote | 537 | remove |
| luz | terrible; remove | 354 | remove |
| cni | ok; bible; lots of mixed in content in not,cob,cpc,arl | 261 | remove |
| apd-SD | terribly questionable; probably remove | 227 | remove |
| mey | mostly short and noisy borderline | 127 | remove |
| awa | OK; should be used with caution and suspicion | 126 | remove |
| mtq | remove short doc repetitive | 111 | remove |
| mel | remove noisy en | 103 | remove |
| mr-Latn | remove mostly porn and short docs | 91 | remove |
| srr | remove ; english boilerplate | 91 | remove |
| en-Cyrl | ok ... some fr-Cyrl too and maybe others | 90 | remove |
| en-Arab | remove | 79 | remove |
| syl | idk maybe ok ? | 61 | remove |
| jax | filter mostly text.medjugorje.ws boilerplate | 58 | remove |
| xmm | very noisy lots of dj tiktok and peppa pig repeated | 58 | remove |
| shu | quite questionable. prob remove | 53 | remove |
| ks | ok shorter docs | 51 | remove |
| gyn | remove boilerplate and porn | 45 | remove |
| aa | some pretty bad data but also some good data. filter on "Woo" (case sensitive) | 32 | remove |
| sjp | terrible; probably remove; check again after short filter | 31 | remove |
| abs | all short nonsense remove | 24 | remove |
| mui | remove short docs | 23 | remove |
| mdh | filter porn short text and repetitive boilerplate | 22 | remove |
| noe | ok | 22 | remove |
| sxu | revisit after shortfilter | 22 | remove |
| bhb-Gujr | bad. remove. all junk gu. | 20 | remove |
| yaq | remove | 20 | remove |
| prk | ok | 18 | remove |
| cgg | rather noisy but potentially ok. not sure if WL or not | 17 | remove |
| bto | bad; remove unless short filter keeps enough | 16 | remove |
| ayl | terrible | 13 | remove |
| pa-Arab | ok | 13 | remove |
| bmm | terrible. filter on short and reevaluate | 11 | remove |
| mfb | remove short boilerplate | 11 | remove |
| mtr | ok fix virama remove en noise | 11 | remove |
| pmy | remove | 11 | remove |
| skg | terrible; remove | 11 | remove |
| ymm | remove | 11 | remove |
| xnr | ok maybe fix virama though it seems fine | 9 | remove |
| kjb | ok bible | 8 | remove |
| azg | short noise; bible | 7 | remove |
| bgz | idk maybe ok but probably bad | 7 | remove |
| ctg | probably terrible probably remove | 7 | remove |
| nyo | ok | 7 | remove |
| mdy | ok bible | 6 | remove |
| syl-Latn | revisit or remove after shortfilter | 6 | remove |
| xog | ok bible and stories | 6 | remove |
| cyo | terrifying noise; remove | 4 | remove |
| kfy | filter virama issue | 4 | remove |
| nd | ok | 4 | remove |
| rwr | remove | 4 | remove |
| tuf | ok bible | 4 | remove |
| clu | ok bible | 3 | remove |
| ng | ok | 3 | remove |
| zyj | deeply bad data .. revisit after shortfilter | 3 | remove |
| rkt | ok | 2 | remove |
| bgc | super sketch. Remove unless short doc filter leaves some. remove | 1 | remove |
| dcc | remove | 1 | remove |
| ff-Adlm | good | 1 | remove |
| gju | remove short boilerplate | 1 | remove |
| max | remove short some ru | 1 | remove |
| mwr | filter short docs fix virama | 1 | remove |
| trw | sus; remove | 1 | remove |
| vkt | 1 doc remove | 1 | remove |
| gjk | empty remove | 0 | remove |
| bfy | very bad. remove unless it looks better after filtering short docs; remove | 0 | remove |
| nyn | ok | 0 | remove |
| sgj | remove | 0 | remove |

A few comments too long to fit in the table above:

* `alt`: WAIT THIS IS AMAZING IT IS ACTUALLY ALTAI! e.g. from urls like https://altaicholmon.ru/2020/02/28/jarashty-la-jajaltany-jarkyndu-lekeri/
* `tly-IR`: They all look like boilerplate content, e.g., list of keywords/search queries used to bump page ranking in search results. Not any useful material for translation. Remove.
* `zap`: pls note that at least some Zapotec speakers tend to view it as one language, not as a million dialects like ISO does. However, some are certainly mutually unintelligible, complicating the matter.
* `zh-Latn`: The biggest problem is that several examples are not in Latin Chinese (i.e., romanization in my understanding) but in English or mixed English and Chinese. For those data in Latin Chinese, their quality seems to be good.
* `zh`: Many examples are porn-related, particularly those very long documents. Also, there are some examples of traditional Chinese.

## Final Dataset information

The number of documents, sentences, tokens, characters, and bytes for the noisy and clean splits of the data. Note that the "toks" field below uses whitespace for tokenization, so it is not appropriate for non-whitespace-separating languages like Chinese (see the section above). Note that the English subset in this version is missing 18% of the documents that were included in the published analysis of the dataset. These documents will be incorporated in an update coming soon.
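As a quick illustration of this whitespace-tokenization caveat (the snippet below is illustrative only; it is not part of the original counting pipeline):

```python
# Whitespace tokenization works for English but collapses for Chinese,
# which is written without spaces between words.
en = "this is a short sentence"
zh = "这是一个简短的句子"  # roughly the same idea, written in Chinese

print(len(en.split()))  # 5 whitespace-delimited tokens
print(len(zh.split()))  # 1 -- the whole sentence counts as a single "token"
```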
| BCP-47 | docs (noisy) | docs (clean) | sents (noisy) | sents (clean) | toks (noisy) | toks (clean) | chars (noisy) | chars (clean) | bytes (clean) | bytes (noisy) |
|:-------|:-------------|:-------------|:--------------|:--------------|:-------------|:-------------|:--------------|:--------------|:--------------|:--------------|
| total* | 7.2B | 3.7B | 133.1B | 97.5B | 4.6T | 2.6T | 30.6T | 16.0T | 11.4 T | 6.3 T |
| en* | 3.0B | 1.5B | 71.1B | 45.4B | 2.0T | 1.3T | 12.3T | 7.6T | 2.6 T | 4.3 T |
| ru | 823M | 402.5M | 823M | 12.4B | 416.5B | 240.9B | 3.1T | 1.8T | 832.9 G | 1.4 T |
| es | 476.4M | 250.9M | 8.3B | 4.5B | 325.7B | 170.4B | 2.1T | 1.1T | 380.9 G | 747.5 G |
| de | 478.6M | 225.1M | 11.5B | 6B | 299.5B | 139.6B | 2.2T | 1T | 370.6 G | 815.5 G |
| fr | 384.2M | 218.9M | 7.9B | 5B | 307.1B | 165.2B | 2T | 1T | 370.4 G | 699.1 G |
| it | 238.9M | 126.4M | 4.5B | 2.5B | 180.1B | 83.6B | 1.2T | 553.1B | 198.4 G | 429.6 G |
| pt | 209.2M | 124.2M | 4B | 2.4B | 123.2B | 79.2B | 791.5B | 499.8B | 183.1 G | 289.6 G |
| pl | 145.1M | 90.9M | 3.3B | 2.4B | 68.9B | 49.2B | 505B | 356.4B | 140.7 G | 202.5 G |
| nl | 134.5M | 86.6M | 134.5M | 2.3B | 104.4B | 51.6B | 698.5B | 334.5B | 118.2 G | 247.5 G |
| tr | 107M | 56.4M | 107M | 1.2B | 41.9B | 25B | 328.8B | 198.9B | 73.7 G | 123.9 G |
| vi | 92.8M | 55M | 1.6B | 1B | 71.5B | 48.7B | 342B | 228.8B | 88.8 G | 133.9 G |
| cs | 72.1M | 38.3M | 1.7B | 1B | 40.8B | 22.1B | 272.2B | 147.9B | 62.1 G | 112.7 G |
| id | 120.9M | 38M | 2.2B | 747.5M | 60.4B | 20.2B | 443B | 148.3B | 48.5 G | 148.7 G |
| ro | 60.8M | 35.4M | 60.8M | 746.4M | 37.1B | 22.9B | 244.1B | 148.2B | 55.5 G | 90.3 G |
| sv | 65.2M | 35.2M | 65.2M | 1B | 62.1B | 23.9B | 422.6B | 153.7B | 57.0 G | 149.9 G |
| hu | 47.6M | 29.7M | 1.3B | 806.3M | 29.8B | 17.8B | 223.6B | 134.9B | 53.5 G | 86.8 G |
| uk | 46.6M | 25M | 1B | 599.9M | 21.6B | 12.8B | 164.2B | 95.2B | 45.1 G | 75.8 G |
| fa | 58.1M | 23.1M | 920.6M | 493.5M | 40.6B | 18.4B | 220.4B | 96.7B | 43.4 G | 97.4 G |
| ja | 23.3M | 21.8M | 326M | 321.6M | 10.9B | 10.9B | 133.3B | 132.2B | 98.7 G | 99.7 G |
| el | 52.4M | 20.9M | 808M | 445.4M | 25B | 12B | 173.2B | 80.9B | 37.9 G | 80.8 G |
| fi | 35.8M | 20.4M | 1B | 650.3M | 23.8B | 11.5B | 202.2B | 101.1B | 37.6 G | 74.1 G |
| zh | 29.3M | 19.9M | 492.3M | 298.8M | 19.2B | 10B | 333B | 142.3B | 109.9 G | 191.8 G |
| da | 38.5M | 17.9M | 1.1B | 508M | 37.7B | 13B | 252B | 83.1B | 29.4 G | 89.5 G |
| th | 19M | 17.4M | 19M | 385.8M | 8.9B | 8.9B | 118.6B | 117.6B | 57.6 G | 58.2 G |
| no | 34.7M | 14.9M | 34.7M | 498.7M | 46.6B | 11.8B | 305.6B | 74.8B | 27.3 G | 109.8 G |
| bg | 27.2M | 12.8M | 599.4M | 360.3M | 14.4B | 8.8B | 95.6B | 57.8B | 26.0 G | 42.8 G |
| ko | 19.7M | 12.7M | 628.6M | 471.8M | 13.3B | 9.3B | 65.9B | 43.8B | 34.2 G | 49.1 G |
| ar | 67.6M | 12.4M | 876.6M | 182.6M | 39B | 7.1B | 243B | 43.2B | 20.9 G | 115.9 G |
| sk | 23.2M | 11.9M | 487.9M | 300.6M | 11.3B | 6.7B | 77.8B | 45.7B | 18.8 G | 31.9 G |
| ca | 17.9M | 9.5M | 258.6M | 153M | 8.9B | 5.6B | 56.5B | 34.6B | 12.6 G | 20.8 G |
| lt | 15.3M | 8.7M | 374M | 256.9M | 7.5B | 5.3B | 58.6B | 41.3B | 15.7 G | 22.3 G |
| he | 14.1M | 7.2M | 302.2M | 196.8M | 9.2B | 5.2B | 54.9B | 30.5B | 14.8 G | 26.3 G |
| sl | 12M | 6.3M | 316M | 180M | 6.9B | 4.5B | 47.8B | 30.5B | 11.5 G | 18.0 G |
| et | 8.8M | 5.5M | 223.8M | 176.3M | 5B | 3.6B | 40.1B | 28.7B | 10.7 G | 15.0 G |
| lv | 8.4M | 5M | 186.1M | 138.5M | 4.8B | 3.2B | 36.7B | 23.9B | 9.1 G | 13.8 G |
| hi | 9.9M | 4.5M | 254.4M | 152M | 7.4B | 3.8B | 39.9B | 20.1B | 9.9 G | 19.7 G |
| sq | 5.5M | 3.6M | 5.5M | 56.1M | 2.7B | 2.1B | 17B | 12.7B | 4.8 G | 6.6 G |
| az | 5.2M | 3.3M | 90.3M | 70.9M | 2.1B | 1.5B | 16.3B | 11.9B | 4.5 G | 6.3 G |
| hr | 23M | 2.8M | 476.6M | 53M | 12.6B | 1.4B | 85.1B | 9.6B | 3.7 G | 33.5 G |
| ta | 5.6M | 2.6M | 122.5M | 81.9M | 2.1B | 1.1B | 19.2B | 10.6B | 4.9 G | 8.8 G |
| ms | 14.1M | 2.3M | 14.1M | 55.2M | 8B | 1.7B | 58.8B | 12.5B | 4.0 G | 20.4 G |
| ml | 3.7M | 2.1M | 75M | 52M | 1B | 603.3M | 10.5B | 6.3B | 3.0 G | 5.1 G |
| sr | 4.7M | 2M | 4.7M | 64M | 2.7B | 1.6B | 18.6B | 11B | 5.1 G | 8.7 G |
| kk | 3.1M | 1.8M | 87.4M | 59.1M | 1.6B | 1B | 13.4B | 8.6B | 3.8 G | 5.8 G |
| te | 2.5M | 1.7M | 59M | 46.4M | 900.2M | 618.5M | 7.4B | 5.1B | 2.6 G | 3.8 G |
| mr | 2.9M | 1.7M | 2.9M | 50M | 1.2B | 776.9M | 8.7B | 5.5B | 2.8 G | 4.4 G |
| is | 2.9M | 1.6M | 73.7M | 39.3M | 2.1B | 979.2M | 14.9B | 6.4B | 2.5 G | 5.9 G |
| bs | 12.9M | 1.4M | 163.6M | 9M | 5.9B | 490.9M | 39.5B | 3.3B | 1.3 G | 15.6 G |
| mk | 2.9M | 1.4M | 41.3M | 22.6M | 1.3B | 685.9M | 9.1B | 4.5B | 2.0 G | 4.0 G |
| gl | 4.2M | 1.3M | 45.3M | 18.8M | 2.3B | 748.4M | 15.6B | 4.8B | 1.7 G | 5.5 G |
| eu | 2.1M | 1.2M | 41.7M | 24.8M | 827.5M | 525.3M | 6.9B | 4.3B | 1.5 G | 2.4 G |
| bn | 4.3M | 1.1M | 151.2M | 38.6M | 2.5B | 645.7M | 16.8B | 4.3B | 2.2 G | 8.7 G |
| be | 2M | 1.1M | 48.8M | 31.3M | 981M | 632.9M | 7.2B | 4.6B | 2.2 G | 3.5 G |
| ka | 3.1M | 936.5K | 53.7M | 26.6M | 1.2B | 460.8M | 10.3B | 3.8B | 1.9 G | 5.0 G |
| fil | 4.2M | 901.5K | 67.4M | 19.2M | 2.2B | 741.7M | 14.6B | 4.7B | 1.5 G | 5.0 G |
| mn | 2.2M | 879.9K | 43.3M | 24M | 1.1B | 487.5M | 7.9B | 3.5B | 1.6 G | 3.5 G |
| af | 2.9M | 868.7K | 51.9M | 30M | 1.7B | 795M | 11.8B | 4.8B | 1.8 G | 4.2 G |
| uz | 1.4M | 669.9K | 25.7M | 17.5M | 605.9M | 388.3M | 5.2B | 3.3B | 1.1 G | 1.9 G |
| gu | 1.3M | 659.7K | 28.9M | 18.1M | 634.4M | 345.9M | 3.9B | 2.1B | 1.1 G | 2.0 G |
| kn | 1.6M | 657.8K | 32.9M | 19.2M | 546.4M | 258.6M | 4.6B | 2.2B | 1.1 G | 2.3 G |
| kaa | 1.1M | 586.4K | 19.8M | 13.3M | 455.9M | 269M | 3.8B | 2.2B | 990.2 M | 1.6 G |
| sw | 1.3M | 537.8K | 1.3M | 9.5M | 660.7M | 345.8M | 4.6B | 2.4B | 826.1 M | 1.6 G |
| ur | 967.2K | 467.2K | 29M | 18.4M | 1B | 562.5M | 5.2B | 2.7B | 1.2 G | 2.4 G |
| ne | 876.4K | 453.3K | 876.4K | 20.4M | 585M | 345.3M | 3.9B | 2.2B | 1.1 G | 1.9 G |
| cy | 4.9M | 430.7K | 68.3M | 7.4M | 3.6B | 275.6M | 26.4B | 1.7B | 609.5 M | 10.0 G |
| hy | 2M | 397.5K | 31.1M | 9.9M | 1B | 190.9M | 8.1B | 1.5B | 678.9 M | 3.6 G |
| ky | 751.1K | 367.6K | 14.3M | 9.6M | 303.4M | 181.6M | 2.5B | 1.4B | 665.1 M | 1.1 G |
| si | 788K | 349.2K | 22.1M | 16M | 507.3M | 293.3M | 3.4B | 1.9B | 1023.6 M | 1.8 G |
| tt | 2.1M | 346.9K | 60.2M | 8.6M | 1B | 135M | 12.1B | 1B | 494.1 M | 4.6 G |
| tg | 789.2K | 328.2K | 789.2K | 7.4M | 363.8M | 208.8M | 2.6B | 1.4B | 635.7 M | 1.1 G |
| la | 2.9M | 319.2K | 85.7M | 13.8M | 1.1B | 218.4M | 8.2B | 1.5B | 550.6 M | 2.9 G |
| so | 729.2K | 293.2K | 729.2K | 3.1M | 294.8M | 146.3M | 2.1B | 992.4M | 350.8 M | 746.2 M |
| ga | 5.3M | 286K | 31.7M | 6.9M | 4.2B | 229.3M | 30.6B | 1.4B | 500.7 M | 9.8 G |
| km | 297.8K | 285.7K | 5M | 5M | 53M | 52.6M | 1.1B | 1.1B | 566.2 M | 570.0 M |
| mt | 1.2M | 265.4K | 1.2M | 5.6M | 390.4M | 171.5M | 3.2B | 1.3B | 467.4 M | 1.1 G |
| eo | 1.4M | 260K | 33.9M | 9.3M | 745.1M | 253.1M | 5.5B | 1.7B | 627.6 M | 1.9 G |
| ps | 429.9K | 252.9K | 5.1M | 3.6M | 293.9M | 177.5M | 1.4B | 848.9M | 403.5 M | 682.9 M |
| rw | 681.8K | 226.5K | 681.8K | 1.9M | 225M | 99.8M | 1.7B | 749.1M | 264.8 M | 702.4 M |
| ku | 671.9K | 218.9K | 10.7M | 4.9M | 305.3M | 143.8M | 2.1B | 849.9M | 335.3 M | 791.9 M |
| lo | 229.1K | 216K | 2.9M | 2.8M | 41.7M | 41.1M | 706.9M | 697.6M | 365.3 M | 370.8 M |
| fy | 1.7M | 210K | 12.1M | 3.7M | 506.9M | 94M | 3.7B | 592.3M | 223.0 M | 1.2 G |
| ha | 443.9K | 173.5K | 4.5M | 2.4M | 206.5M | 109.3M | 1.3B | 630.2M | 219.0 M | 478.1 M |
| my | 176.5K | 172.4K | 176.5K | 10.1M | 96.6M | 96.3M | 1.3B | 1.3B | 648.8 M | 650.4 M |
| dv | 264.4K | 167.2K | 4.3M | 3.5M | 92.8M | 64M | 877.3M | 603.1M | 238.3 M | 343.2 M |
| pa | 368.2K | 150.6K | 368.2K | 6M | 306M | 152.8M | 1.6B | 797.1M | 414.1 M | 857.6 M |
| ckb | 622.7K | 148.9K | 5.6M | 2.5M | 312.7M | 83.3M | 2.2B | 572.7M | 265.0 M | 1011.1 M |
| lb | 7.6M | 146K | 47.1M | 3.4M | 7.5B | 85M | 58.4B | 575.5M | 218.4 M | 22.2 G |
| mg | 295.2K | 115.4K | 4.5M | 2.6M | 189.4M | 75.5M | 1.3B | 548.5M | 179.0 M | 429.3 M |
| ht | 425.6K | 110.4K | 6.7M | 2.6M | 163M | 84.3M | 994.5M | 461.5M | 168.2 M | 361.5 M |
| ug | 227.1K | 106.5K | 4.5M | 3.1M | 122.9M | 62.7M | 998.5M | 504.6M | 233.1 M | 449.9 M |
| am | 245.2K | 106.3K | 7.1M | 5.3M | 157M | 95.2M | 869.9M | 509M | 345.5 M | 539.4 M |
| or | 139.6K | 100.5K | 139.6K | 3.1M | 66M | 47.3M | 437.2M | 309.5M | 160.3 M | 228.1 M |
| fo | 382.9K | 97.8K | 3.9M | 1.8M | 136.5M | 48.9M | 923.3M | 314.9M | 122.0 M | 328.8 M |
| gd | 206K | 94.3K | 3.7M | 2.4M | 127.6M | 84.5M | 812M | 526M | 173.4 M | 276.6 M |
| ba | 372.4K | 90.3K | 9.3M | 2.6M | 101M | 42.1M | 766.5M | 320.7M | 154.8 M | 352.4 M |
| tk | 180.2K | 82.5K | 180.2K | 1.8M | 65.4M | 43.3M | 575.2M | 369M | 131.3 M | 221.6 M |
| mi | 711.9K | 79.5K | 5.9M | 1.9M | 262.5M | 73.5M | 1.6B | 371.9M | 120.2 M | 539.1 M |
| hmn | 241.3K | 75.2K | 3.5M | 1.9M | 192.1M | 80.2M | 1.2B | 408.8M | 124.3 M | 366.0 M |
| grc | 364.8K | 70.7K | 13.7M | 2.8M | 298.6M | 65.3M | 2B | 417.8M | 217.7 M | 1.0 G |
| jv | 999.5K | 69.5K | 13M | 2M | 302.3M | 52.1M | 2.3B | 376.1M | 130.9 M | 797.8 M |
| ceb | 617.5K | 66.2K | 6.7M | 1.6M | 225M | 58.2M | 1.5B | 357.7M | 116.2 M | 451.4 M |
| sd | 115.6K | 65.9K | 115.6K | 2.4M | 112.6M | 77.8M | 561M | 380.4M | 182.3 M | 267.1 M |
| yi | 160.6K | 64.9K | 3.3M | 1.9M | 129.1M | 53.9M | 838.4M | 352.6M | 146.0 M | 350.8 M |
| kaa_Latn | 375.2K | 61.2K | 3.6M | 1.3M | 375.2K | 61.2K | 1.5M | 209.5K | 86.2 M | 264.6 M |
| sn | 3.1M | 60.2K | 3.1M | 1.2M | 1.3B | 31.6M | 10.6B | 266M | 92.5 M | 3.2 G |
| co | 546.7K | 55.4K | 6.1M | 1.3M | 172.6M | 43.6M | 1.1B | 265.5M | 98.8 M | 386.8 M |
| su | 336.6K | 55K | 336.6K | 1.6M | 154M | 39.5M | 967.2M | 286.7M | 100.7 M | 308.5 M |
| pap | 259.1K | 54.5K | 259.1K | 1.4M | 183.9M | 41.1M | 1.4B | 229.9M | 83.5 M | 451.4 M |
| ig | 130.4K | 54.4K | 2.1M | 1.4M | 129.2M | 45.7M | 846.1M | 251.4M | 93.0 M | 178.9 M |
| zu | 372.3K | 53.8K | 3.8M | 1.2M | 148.4M | 27.2M | 1.2B | 257.4M | 89.6 M | 374.7 M |
| xh | 310.9K | 53.7K | 2.9M | 1.4M | 81.6M | 31.2M | 749.5M | 287.3M | 100.0 M | 319.1 M |
| sm | 137.8K | 52.6K | 1.9M | 1.3M | 100.9M | 53.7M | 607.9M | 276.3M | 88.6 M | 184.5 M |
| ny | 181.6K | 52.2K | 181.6K | 1.5M | 80.6M | 34.8M | 611.2M | 277.5M | 91.8 M | 209.8 M |
| yo | 115K | 52.1K | 2M | 1.2M | 76.6M | 46.3M | 415.6M | 239M | 89.2 M | 157.8 M |
| cv | 599.4K | 47.3K | 12M | 1.6M | 169.6M | 22.2M | 1B | 168.9M | 82.1 M | 413.6 M |
| el_Latn | 497.3K | 46.4K | 11.3M | 1.7M | 497.3K | 46.4K | 2.3M | 162.8K | 196.8 M | 571.1 M |
| kl | 85.9K | 46K | 2.1M | 1.5M | 32.3M | 22.3M | 403.9M | 279.1M | 84.2 M | 126.1 M |
| haw | 310.4K | 45.7K | 7.1M | 1M | 141M | 43.3M | 892M | 214.2M | 69.9 M | 271.2 M |
| gsw | 7.6M | 42.7K | 64.5M | 1M | 5B | 22.3M | 42.3B | 149.2M | 53.8 M | 13.5 G |
| tet | 291K | 40.4K | 1.9M | 475.7K | 240.6M | 22.8M | 1.6B | 152.3M | 51.2 M | 455.4 M |
| st | 96.8K | 40.4K | 96.8K | 1.1M | 65M | 39.8M | 381.5M | 226.9M | 74.0 M | 127.0 M |
| lus | 91.5K | 36.4K | 1.4M | 863.5K | 53M | 31.3M | 298.3M | 167.3M | 60.1 M | 107.0 M |
| oc | 2.4M | 36.4K | 2.4M | 1.6M | 887.6M | 26.7M | 6.7B | 177.6M | 58.7 M | 1.9 G |
| as | 53.9K | 33.8K | 2.4M | 1.7M | 41.4M | 27.9M | 275.8M | 182.1M | 95.8 M | 146.1 M |
| rm | 238.1K | 33.8K | 238.1K | 603.4K | 59.2M | 15.8M | 391M | 100.2M | 34.6 M | 133.1 M |
| br | 705.4K | 33.2K | 7.8M | 731.7K | 646.8M | 21M | 3.7B | 125.4M | 46.2 M | 1.2 G |
| sah | 1.3M | 29.2K | 1.3M | 1.2M | 283.7M | 17.6M | 2.2B | 148.2M | 68.3 M | 852.3 M |
| hi_Latn | 1.2M | 26.7K | 22.6M | 1.2M | 1.2M | 26.7K | 5.3M | 98.9K | 53.5 M | 1.7 G |
| se | 54.3K | 23.9K | 879.5K | 493.3K | 17.7M | 10M | 148.4M | 84.6M | 31.1 M | 56.6 M |
| cnh | 44.4K | 21.6K | 688.6K | 406.9K | 21.6M | 12.5M | 110.8M | 63M | 22.1 M | 39.6 M |
| om | 846.1K | 18.9K | 846.1K | 469.8K | 238M | 11.2M | 1.9B | 88.5M | 30.4 M | 881.5 M |
| ce | 59.3K | 15K | 991.1K | 460.1K | 17.8M | 9.6M | 130.6M | 67.8M | 31.1 M | 60.2 M |
| udm | 67.1K | 13.4K | 942.7K | 510.3K | 14M | 7.4M | 106M | 55.5M | 26.3 M | 49.2 M |
| lg | 61.1K | 13K | 510.9K | 166.1K | 21.4M | 6.1M | 160.7M | 48M | 17.3 M | 56.7 M |
| os | 172.1K | 12.6K | 172.1K | 359.3K | 27.1M | 6.9M | 233.5M | 50.1M | 23.1 M | 87.7 M |
| nv | 17.1K | 12.6K | 17.1K | 86.5K | 3.1M | 1.1M | 24.8M | 9.1M | 2.0 M | 7.9 M |
| kha | 37.8K | 12.1K | 235.5K | 75.2K | 15.8M | 6M | 88.6M | 30.2M | 9.8 M | 27.3 M |
| ilo | 69.8K | 11.8K | 889.2K | 365.1K | 26.7M | 9M | 187.9M | 59.4M | 20.6 M | 64.0 M |
| ctd_Latn | 23.3K | 11.6K | 575.6K | 382.2K | 23.3K | 11.6K | 90.7K | 41K | 21.5 M | 35.1 M |
| vec | 1.1M | 11.1K | 10M | 209.7K | 284.7M | 7.8M | 1.8B | 43.8M | 17.7 M | 625.0 M |
| hil | 126.8K | 10.6K | 1.1M | 379.7K | 43.9M | 9.2M | 293.5M | 57.2M | 18.5 M | 95.2 M |
| tyv | 61.6K | 9.1K | 596.6K | 268.3K | 9.9M | 4.7M | 80.2M | 38.5M | 16.7 M | 36.6 M |
| iba | 34K | 7.6K | 326.9K | 126.1K | 37.8M | 4.8M | 251.4M | 30.5M | 10.0 M | 61.3 M |
| ru_Latn | 346.3K | 7.5K | 346.3K | 239.1K | 346.3K | 7.5K | 1.5M | 27.7K | 14.9 M | 452.3 M |
| kbd | 154.7K | 7.5K | 1.4M | 257.2K | 31.9M | 4.4M | 321.4M | 36.8M | 16.8 M | 209.6 M |
| ti | 20.8K | 7.3K | 20.8K | 481.3K | 18.2M | 8.8M | 95.4M | 44.6M | 30.9 M | 63.6 M |
| sa | 154.3K | 7.1K | 154.3K | 1.1M | 70M | 9.9M | 512.5M | 88.8M | 44.9 M | 236.6 M |
| av | 107.6K | 6.3K | 806.1K | 190.1K | 15.5M | 3.4M | 129M | 30.2M | 12.8 M | 56.0 M |
| bo | 6.2K | 6.2K | 1.1M | 1.1M | 3.4M | 3.4M | 88.7M | 88.7M | 40.7 M | 40.7 M |
| zza | 370.1K | 6K | 3.3M | 229.2K | 87.7M | 3.9M | 617.3M | 26.3M | 10.0 M | 234.1 M |
| ber_Latn | 480.5K | 5.6K | 10.5M | 169.4K | 480.5K | 5.6K | 2.1M | 18.9K | 11.0 M | 945.3 M |
| otq | 17.6K | 5.6K | 17.6K | 114.8K | 10.2M | 3.8M | 65M | 23.4M | 7.7 M | 22.8 M |
| te_Latn | 236.6K | 5.3K | 4.4M | 269.1K | 236.6K | 5.3K | 1M | 19.3K | 11.4 M | 254.3 M |
| bua | 9.8K | 5.3K | 252K | 144.6K | 4.7M | 2.7M | 38M | 21.7M | 10.0 M | 17.9 M |
| ts | 34.7K | 5.2K | 34.7K | 248.6K | 39.6M | 6.5M | 377.2M | 38.8M | 12.2 M | 99.5 M |
| cfm | 9.1K | 4.9K | 199.6K | 128.6K | 6.2M | 4M | 32.9M | 21.5M | 7.4 M | 11.6 M |
| tn | 138.2K | 4.8K | 138.2K | 174.4K | 46M | 5.5M | 302.3M | 29.2M | 9.4 M | 99.0 M |
| krc | 359.5K | 4.8K | 2.3M | 153.9K | 50.2M | 2.6M | 369.5M | 20.7M | 9.1 M | 139.9 M |
| ak | 19.5K | 4.8K | 341.7K | 210.2K | 12.3M | 4.7M | 74.5M | 24.8M | 9.1 M | 24.7 M |
| meo | 790.7K | 4.7K | 16.5M | 39K | 478M | 1.2M | 3B | 7.5M | 3.1 M | 1.2 G |
| chm | 81.5K | 4.7K | 929.1K | 179.7K | 17.2M | 2.9M | 132.2M | 21.3M | 9.8 M | 53.5 M |
| to | 14.3K | 4.6K | 14.3K | 149K | 10.3M | 5.7M | 58.2M | 29.9M | 9.6 M | 19.0 M |
| ee | 14.1K | 4.5K | 353.6K | 246.7K | 9.7M | 6.2M | 67.9M | 32.8M | 11.8 M | 23.3 M |
| nso | 376.2K | 4.4K | 376.2K | 188.4K | 419.2M | 5.3M | 2B | 28.2M | 9.1 M | 502.7 M |
| ady | 74.9K | 4.2K | 446.8K | 96.9K | 8M | 1.6M | 67.9M | 14.8M | 6.4 M | 30.6 M |
| rom | 22.9K | 4.2K | 22.9K | 76.1K | 8.9M | 2.6M | 59M | 15.9M | 5.8 M | 21.0 M |
| bho | 13.6K | 4.1K | 306.2K | 118.5K | 7.1M | 2.7M | 37.6M | 13.4M | 7.4 M | 20.6 M |
| ltg | 13.1K | 4.1K | 213.7K | 87.3K | 4M | 1.9M | 29.2M | 13.9M | 5.6 M | 11.7 M |
| fj | 17K | 4K | 410K | 164.1K | 11.6M | 5.2M | 67.7M | 28M | 8.6 M | 22.5 M |
| yua | 10.4K | 4K | 141.6K | 77.6K | 5.2M | 2.5M | 36.8M | 17.2M | 5.7 M | 12.4 M |
| gn | 87.1K | 3.9K | 770.9K | 162.6K | 19.2M | 2.7M | 140.7M | 20.8M | 7.8 M | 52.1 M |
| az_RU | 6.5K | 3.8K | 231.8K | 177.3K | 6.5K | 3.8K | 24K | 12.9K | 10.3 M | 15.1 M |
| ln | 94.7K | 3.3K | 718.7K | 139K | 42.4M | 3.4M | 291.8M | 21.5M | 6.8 M | 85.3 M |
| ada | 6.5K | 3.1K | 291.5K | 199.2K | 7.5M | 4.9M | 38.9M | 24.2M | 8.6 M | 13.9 M |
| myv | 164.8K | 3.1K | 164.8K | 130K | 16M | 1.7M | 120.3M | 13.8M | 6.2 M | 49.5 M |
| bik | 44.8K | 3.1K | 376.7K | 77K | 14.8M | 2.5M | 102.3M | 15.7M | 5.3 M | 34.0 M |
| tlh | 516.9K | 3.1K | 516.9K | 46.9K | 221.3M | 1.1M | 1.4B | 7.8M | 2.7 M | 554.2 M |
| kbp | 5.9K | 3K | 247.9K | 128.3K | 5.6M | 2.6M | 30.8M | 14.6M | 5.7 M | 12.4 M |
| war | 1M | 2.9K | 114M | 96.2K | 612.1M | 2.4M | 3.5B | 16.1M | 3.7 M | 1.2 G |
| wa | 70.6K | 2.8K | 1.5M | 127.2K | 35.2M | 3.6M | 198.8M | 20.4M | 7.2 M | 67.8 M |
| bew | 311.1K | 2.7K | 10.4M | 58.4K | 212.4M | 1.3M | 1.4B | 8.5M | 3.1 M | 547.1 M |
| rcf | 21.6K | 2.6K | 21.6K | 50.5K | 4.9M | 1.2M | 30.2M | 5.7M | 2.1 M | 11.4 M |
| ta_Latn | 260.7K | 2.6K | 3.4M | 142.7K | 260.7K | 2.6K | 1.2M | 9.1K | 5.0 M | 215.4 M |
| kac | 5.9K | 2.6K | 109.2K | 77.4K | 5M | 2.8M | 26.6M | 13.6M | 4.3 M | 8.0 M |
| iu | 5.4K | 2.5K | 92.6K | 53.1K | 1.9M | 907.4K | 17.5M | 8.3M | 4.8 M | 9.9 M |
| ay | 8.1K | 2.5K | 196.7K | 83.8K | 3.9M | 1.4M | 34.5M | 13.1M | 4.5 M | 12.7 M |
| kum | 4.2K | 2.5K | 132.2K | 89.7K | 2.3M | 1.6M | 18.2M | 12.4M | 5.3 M | 8.0 M |
| qu | 149.7K | 2.4K | 1M | 87K | 26.7M | 1.3M | 200.6M | 12.2M | 4.0 M | 68.3 M |
| bgp | 355.7K | 2.4K | 5.6M | 43.3K | 186.1M | 1.8M | 1.1B | 9.8M | 3.1 M | 377.5 M |
| hif | 702K | 2.4K | 7.9M | 124.7K | 1.2B | 3.2M | 9.1B | 19.1M | 5.9 M | 3.5 G |
| kw | 176.9K | 2.3K | 1M | 51.6K | 53.1M | 1.3M | 327.8M | 7.7M | 2.8 M | 89.2 M |
| nan_Latn_TW | 7.4K | 2.3K | 7.4K | 72.7K | 7.4K | 2.3K | 28.3K | 7.7K | 4.8 M | 15.4 M |
| srn | 16.7K | 2.3K | 16.7K | 139.5K | 8M | 3.4M | 49.1M | 17M | 5.1 M | 15.6 M |
| tly_IR | 406.3K | 2.2K | 406.3K | 18.2K | 406.3K | 2.2K | 1.6M | 8.6K | 580.4 K | 283.0 M |
| sg | 4.2K | 2.1K | 154K | 117.9K | 4.6M | 3.3M | 22.6M | 15.5M | 4.6 M | 6.8 M |
| gom | 4.6K | 2.1K | 178.3K | 108K | 2.7M | 1.4M | 19.8M | 10M | 5.0 M | 10.5 M |
| ml_Latn | 260.8K | 2.1K | 3.5M | 77.3K | 260.8K | 2.1K | 1.1M | 7.2K | 3.5 M | 277.7 M |
| kj | 112.2K | 2.1K | 881.8K | 22.6K | 46.9M | 877.3K | 339.6M | 6M | 2.1 M | 104.9 M |
| ksd | 14.9K | 2K | 533K | 78.6K | 11.5M | 2.1M | 62.4M | 10M | 2.9 M | 20.0 M |
| dz | 1.9K | 1.9K | 191.7K | 191.7K | 1.1M | 1.1M | 22.7M | 22.7M | 10.0 M | 10.0 M |
| kv | 59.1K | 1.9K | 584.3K | 88.8K | 9.5M | 1.2M | 91.4M | 9M | 4.4 M | 41.0 M |
| msi | 686.7K | 1.9K | 686.7K | 22.6K | 414.8M | 440.4K | 2.6B | 2.7M | 1.1 M | 1.0 G |
| ve | 3.8K | 1.9K | 97.8K | 79.4K | 3.2M | 2.1M | 19M | 11.7M | 3.8 M | 6.2 M |
| zap | 5.5K | 1.8K | 202.3K | 93.5K | 4.2M | 1.8M | 26.4M | 11.4M | 4.0 M | 9.6 M |
| zxx_xx_dtynoise | 118.8K | 1.8K | 3.8M | 49.3K | 118.8K | 1.8K | 501K | 6.6K | 3.9 M | 367.0 M |
| meu | 5.9K | 1.7K | 232.1K | 72.6K | 4.2M | 1.4M | 27.2M | 8.6M | 2.6 M | 9.1 M |
| iso | 3.7K | 1.7K | 155.8K | 111.5K | 4.4M | 2.7M | 23M | 13.7M | 4.9 M | 8.1 M |
| ium | 100.3K | 1.7K | 6.2M | 54.9K | 48.4M | 1.7M | 314M | 7.4M | 2.6 M | 124.0 M |
| nhe | 3K | 1.7K | 3K | 57.7K | 1.9M | 1.2M | 15.6M | 9.8M | 2.7 M | 4.8 M |
| tyz | 8K | 1.7K | 454.8K | 104.6K | 7.5M | 1.9M | 46.3M | 11.3M | 3.8 M | 16.0 M |
| hui | 2K | 1.7K | 80.1K | 74.7K | 1.8M | 1.7M | 11.8M | 10.9M | 3.0 M | 3.3 M |
| new | 6.6K | 1.6K | 6.6K | 85K | 3.2M | 1.4M | 21.2M | 8.8M | 4.4 M | 10.6 M |
| mdf | 71K | 1.6K | 394.7K | 45.1K | 8.3M | 670.1K | 65.8M | 5.5M | 2.5 M | 26.7 M |
| pag | 49.6K | 1.6K | 49.6K | 88.8K | 13.8M | 1.9M | 92.9M | 12M | 3.9 M | 29.2 M |
| gv | 501.9K | 1.6K | 18.8M | 26.9K | 137.7M | 996.2K | 933.1M | 6.2M | 2.0 M | 318.6 M |
| gag | 33.9K | 1.6K | 491K | 37K | 10.2M | 661K | 84.9M | 5.2M | 2.1 M | 32.6 M |
| ngu | 3.8K | 1.5K | 3.8K | 87.1K | 2.7M | 1.5M | 21.4M | 11.8M | 3.6 M | 6.7 M |
| quc | 4.4K | 1.5K | 89.2K | 41.2K | 2.8M | 1.1M | 16.6M | 6.4M | 2.2 M | 5.9 M |
| mam | 23K | 1.5K | 446.3K | 52.9K | 9.8M | 1.2M | 70.4M | 7.2M | 2.6 M | 30.7 M |
| min | 28.2K | 1.5K | 500.9K | 75.6K | 10.2M | 1.4M | 70.5M | 9.9M | 2.6 M | 21.1 M |
| ho | 2K | 1.5K | 57K | 47.8K | 1.8M | 1.3M | 12.3M | 7.8M | 1.9 M | 3.1 M |
| pon | 5.7K | 1.5K | 167.8K | 48.7K | 3M | 1.1M | 18.3M | 6.7M | 2.1 M | 6.1 M |
| mrj | 97.1K | 1.4K | 97.1K | 60.3K | 14.5M | 1.1M | 100.6M | 7.6M | 3.6 M | 40.8 M |
| lu | 10.6K | 1.4K | 316K | 112.1K | 7.8M | 2.3M | 54.2M | 15.4M | 4.8 M | 18.0 M |
| gom_Latn | 231.1K | 1.4K | 4.1M | 77.9K | 231.1K | 1.4K | 1M | 5.1K | 3.6 M | 240.6 M |
| alt | 2.6K | 1.4K | 110.1K | 65.9K | 1.8M | 1.1M | 14.3M | 8.7M | 3.8 M | 6.4 M |
| nzi | 2.5K | 1.4K | 2.5K | 71.8K | 2.5M | 1.7M | 14.4M | 9.4M | 3.1 M | 4.8 M |
| tzo | 2.8K | 1.4K | 100.4K | 75.7K | 2.5M | 1.7M | 15.9M | 10.6M | 3.2 M | 4.9 M |
| bci | 7.4K | 1.3K | 124.8K | 87.1K | 5M | 1.9M | 32.8M | 9M | 3.1 M | 9.4 M |
| dtp | 4.6K | 1.3K | 51.2K | 7.9K | 1.9M | 419.4K | 12.7M | 3M | 1013.9 K | 4.5 M |
| abt | 1.6K | 1.3K | 122.7K | 110.3K | 1.5M | 1.3M | 9.6M | 8.2M | 2.2 M | 2.7 M |
| bbc | 72.3K | 1.3K | 718.3K | 73.2K | 21.7M | 1.7M | 151.3M | 10.6M | 3.6 M | 47.9 M |
| pck | 8.9K | 1.3K | 8.9K | 69.7K | 6.8M | 2.1M | 39.8M | 11.5M | 4.2 M | 14.2 M |
| mai | 54.3K | 1.2K | 1M | 60.2K | 24.6M | 1.2M | 156M | 6.8M | 3.6 M | 67.1 M |
| mps | 2.7K | 1.2K | 132.8K | 71.9K | 2.8M | 1.6M | 16M | 8.7M | 2.3 M | 4.8 M |
| emp | 3.6K | 1.2K | 106.4K | 75.4K | 1.9M | 999.1K | 14.5M | 7.4M | 2.4 M | 4.9 M |
| mgh | 5.5K | 1.2K | 151.8K | 61.2K | 2.8M | 1.1M | 24.1M | 8.2M | 2.8 M | 8.3 M |
| tab | 7.8K | 1.2K | 226.4K | 26.8K | 4.3M | 538.9K | 33.7M | 4.4M | 1.9 M | 15.7 M |
| crh | 5.1K | 1.2K | 170.9K | 61.8K | 2.4M | 943K | 18.8M | 7.5M | 3.4 M | 8.9 M |
| tbz | 5.1K | 1.1K | 128.7K | 37.5K | 3.5M | 893.4K | 22M | 4.8M | 1.9 M | 10.2 M |
| ss | 8.1K | 1.1K | 8.1K | 30.4K | 2.7M | 568.3K | 23.7M | 5.5M | 1.8 M | 7.4 M |
| chk | 2.8K | 1.1K | 98.8K | 44K | 2M | 1M | 12M | 5.8M | 1.8 M | 4.0 M |
| bru | 3K | 1.1K | 89.7K | 48.2K | 2.4M | 938.1K | 12.9M | 4.8M | 1.5 M | 4.5 M |
| nnb | 4.9K | 1.1K | 4.9K | 70.2K | 3.2M | 1.2M | 27.7M | 9.1M | 3.3 M | 10.0 M |
| fon | 5.3K | 1.1K | 222.9K | 67.3K | 6.9M | 1.8M | 34M | 8.3M | 3.1 M | 14.8 M |
| ppk | 2.6K | 1.1K | 85.8K | 34.9K | 1.9M | 801.8K | 13.2M | 5.5M | 1.6 M | 4.3 M |
| tiv | 3.8K | 1.1K | 3.8K | 80.7K | 3.7M | 2.1M | 20.4M | 10.2M | 3.2 M | 6.0 M |
| btx | 3.1K | 1K | 81.7K | 43.9K | 2M | 907.5K | 13.1M | 5.9M | 2.0 M | 4.6 M |
| bg_Latn | 200.4K | 991 | 2.8M | 25.5K | 200.4K | 991 | 927.1K | 3.7K | 1.7 M | 143.6 M |
| mbt | 1.6K | 969 | 86K | 45.4K | 2.4M | 1.3M | 14.6M | 7.5M | 2.2 M | 5.1 M |
| ace | 65.5K | 966 | 632.5K | 32.5K | 19.9M | 1.1M | 146.1M | 7.4M | 2.2 M | 42.3 M |
| tvl | 2.3K | 933 | 72.9K | 53.6K | 2.5M | 1.7M | 12.6M | 8.1M | 2.4 M | 3.8 M |
| dov | 3.5K | 923 | 129.8K | 56.7K | 2.6M | 967.5K | 20.7M | 8M | 2.6 M | 7.1 M |
| ach | 2K | 915 | 63K | 40.1K | 1.6M | 890.9K | 9M | 4.7M | 1.6 M | 3.0 M |
| xal | 71.8K | 913 | 498.5K | 30.8K | 8.5M | 449.8K | 64.7M | 3.2M | 1.5 M | 24.4 M |
| cuk | 4.1K | 899 | 76.5K | 34.3K | 2M | 469.9K | 24.7M | 4.6M | 1.5 M | 6.1 M |
| kos | 2.2K | 881 | 44.6K | 27.8K | 1.1M | 780.1K | 6.5M | 4.2M | 1.4 M | 2.2 M |
| crs | 7.6K | 873 | 282.4K | 40.1K | 7.3M | 1.2M | 40.1M | 6.8M | 2.2 M | 13.2 M |
| wo | 36.4K | 871 | 303.4K | 25.4K | 30.7M | 850.7K | 213.4M | 4.5M | 1.7 M | 59.9 M |
| bts | 3.2K | 869 | 109.1K | 29.1K | 3.1M | 663.3K | 20.8M | 4.2M | 1.4 M | 6.2 M |
| ubu | 2.2K | 846 | 113.5K | 47.5K | 2.3M | 996.4K | 15.9M | 6.7M | 1.9 M | 4.7 M |
| gym | 1.5K | 820 | 73.7K | 49.6K | 1.6M | 1.1M | 10.3M | 6.9M | 2.0 M | 3.2 M |
| ibb | 74.1K | 818 | 516.5K | 36.3K | 26.4M | 776.1K | 190.9M | 4.9M | 1.5 M | 56.0 M |
| ape | 7K | 814 | 147K | 56.1K | 12.4M | 881.5K | 71M | 5.8M | 1.6 M | 18.8 M |
| stq | 111.9K | 809 | 111.9K | 27.7K | 34.4M | 600.4K | 243.1M | 3.8M | 1.5 M | 82.5 M |
| ang | 66.5K | 803 | 1.8M | 86.7K | 28.5M | 1.7M | 193M | 9.8M | 3.4 M | 67.1 M |
| enq | 7.1K | 793 | 241.9K | 39.1K | 11M | 718.8K | 68.5M | 4.8M | 1.3 M | 18.8 M |
| tsg | 353.8K | 789 | 353.8K | 17.9K | 158M | 588.9K | 1.1B | 3.8M | 1.0 M | 309.9 M |
| shn | 889 | 788 | 46.4K | 46.2K | 383.8K | 378.5K | 5.7M | 5.7M | 2.6 M | 2.6 M |
| kri | 39.1K | 786 | 271.2K | 38.8K | 12.6M | 995.2K | 86.4M | 5M | 1.6 M | 20.9 M |
| kek | 3.2K | 782 | 70.4K | 38.4K | 1.8M | 709K | 13.6M | 4.4M | 1.4 M | 4.7 M |
| rmc | 2.4K | 738 | 2.4K | 25.8K | 1.3M | 545.4K | 7.9M | 3.2M | 1.1 M | 2.9 M |
| acf | 4.9K | 730 | 81.9K | 24.6K | 2.1M | 602.2K | 11.6M | 3M | 1.1 M | 4.7 M |
| fip | 3.7K | 729 | 165.6K | 49K | 3.5M | 916.8K | 25.7M | 6.6M | 2.1 M | 8.6 M |
| syr | 3.5K | 716 | 326.4K | 197.1K | 4.6M | 1.9M | 31.5M | 14M | 6.1 M | 13.9 M |
| qub | 972 | 705 | 61K | 51.1K | 589.2K | 455.5K | 5.9M | 4.4M | 1.4 M | 1.8 M |
| bm | 21.9K | 702 | 172.3K | 24.5K | 7.1M | 583.1K | 48.4M | 3M | 1.1 M | 14.4 M |
| tzh | 1.7K | 702 | 41.7K | 33.9K | 1.5M | 929.6K | 9.3M | 5.6M | 1.6 M | 2.6 M |
| jiv | 1.7K | 696 | 80.9K | 32K | 1.1M | 418.9K | 9.6M | 3.5M | 1.1 M | 3.3 M |
| kn_Latn | 72.9K | 688 | 765.9K | 10.1K | 72.9K | 688 | 328.1K | 2.5K | 430.8 K | 61.4 M |
| kjh | 1.5K | 672 | 42.8K | 28.7K | 566.1K | 379.2K | 4.5M | 3.1M | 1.3 M | 2.0 M |
| yap | 1.9K | 638 | 37.6K | 19.5K | 1.3M | 661.4K | 6.9M | 3.3M | 1.0 M | 2.2 M |
| ban | 8K | 637 | 150.9K | 16.3K | 5M | 499.7K | 35.4M | 3.6M | 1.1 M | 12.0 M |
| tuc | 3.5K | 635 | 193.2K | 50.3K | 2.9M | 703K | 17.2M | 4.1M | 1.2 M | 5.7 M |
| tcy | 10.7K | 632 | 338.7K | 37.1K | 5.5M | 432.6K | 41.6M | 3.3M | 1.7 M | 20.9 M |
| cab | 1.2K | 629 | 50.4K | 37.5K | 1M | 690.9K | 7.5M | 5.1M | 1.6 M | 2.4 M |
| cak | 1.2K | 617 | 70.4K | 32.6K | 1.3M | 730.1K | 7.6M | 4.2M | 1.3 M | 2.4 M |
| din | 128.4K | 611 | 885.8K | 23.6K | 31.6M | 541.7K | 210M | 2.9M | 1.1 M | 64.3 M |
| zh_Latn | 739.4K | 602 | 10.7M | 45.1K | 739.4K | 602 | 3.4M | 2.3K | 2.0 M | 969.9 M |
| arn | 2.4K | 593 | 64.5K | 26.2K | 1.5M | 541.9K | 10.2M | 3.7M | 1.2 M | 3.7 M |
| lrc | 42.4K | 587 | 351.9K | 9K | 17.3M | 248.9K | 85.3M | 1.4M | 646.9 K | 37.5 M |
| rwo | 938 | 572 | 938 | 45.5K | 734.8K | 590.4K | 5.1M | 4.2M | 1.1 M | 1.4 M |
| hus | 825 | 569 | 26.5K | 23.7K | 733.4K | 542.1K | 4.4M | 3.1M | 967.6 K | 1.3 M |
| bum | 4.7K | 559 | 103.8K | 36.5K | 3M | 805.5K | 18.8M | 4M | 1.3 M | 6.1 M |
| mak | 1K | 555 | 32.5K | 20.4K | 761K | 457.4K | 6.1M | 3.7M | 1.1 M | 2.0 M |
| frp | 148K | 550 | 3.5M | 8.2K | 71.2M | 230.2K | 535.4M | 1.4M | 518.3 K | 129.7 M |
| seh | 5.6K | 545 | 68.8K | 37.2K | 2M | 650.6K | 14.9M | 4.9M | 1.5 M | 4.4 M |
| twu | 2.5K | 539 | 109.9K | 24.4K | 2.4M | 571.2K | 14.2M | 3.2M | 1.0 M | 4.8 M |
| kmb | 1.3K | 538 | 60.4K | 36.9K | 1.4M | 810.8K | 8.4M | 4.6M | 1.4 M | 2.6 M |
| ksw | 560 | 536 | 16.1K | 16K | 219.9K | 218.8K | 2.9M | 2.9M | 1.4 M | 1.4 M |
| sja | 1.3K | 527 | 67.7K | 24.9K | 982.5K | 459.3K | 7.7M | 3.4M | 1.1 M | 2.6 M |
| amu | 1.8K | 511 | 72K | 25.2K | 1.5M | 443.3K | 9.6M | 3.2M | 1.0 M | 3.4 M |
| mad | 103.8K | 509 | 500.6K | 18.5K | 16.2M | 386.7K | 111.8M | 2.8M | 960.3 K | 34.2 M |
| quh | 1K | 501 | 42K | 29.9K | 624.4K | 396.8K | 5.8M | 3.7M | 1.2 M | 1.8 M |
| dyu | 1.2K | 483 | 55.8K | 19.7K | 1.2M | 421.8K | 5.7M | 2M | 665.5 K | 1.9 M |
| toj | 736 | 452 | 736 | 26.1K | 691.2K | 540.2K | 4.3M | 3.3M | 1.0 M | 1.3 M |
| ch | 12.9K | 449 | 147.5K | 16K | 8.9M | 393.9K | 63.5M | 2.5M | 906.8 K | 10.0 M |
| sus | 664 | 437 | 664 | 15.2K | 648K | 402.8K | 3.7M | 2.1M | 674.0 K | 1.0 M |
| nog | 970 | 419 | 970 | 11K | 330.3K | 200.4K | 2.6M | 1.6M | 714.0 K | 1.2 M |
| jam | 12.7K | 416 | 68.5K | 15.8K | 3.5M | 378.4K | 25.8M | 1.7M | 609.5 K | 7.6 M |
| gui | 1.1K | 409 | 62.7K | 24.8K | 915K | 314K | 6.5M | 2M | 619.3 K | 2.1 M |
| nia | 2K | 408 | 2K | 25K | 1.7M | 476.5K | 11.3M | 3.1M | 1.0 M | 3.9 M |
| mas | 15.2K | 405 | 216.8K | 17.6K | 6.2M | 390.1K | 42.1M | 3M | 927.5 K | 13.4 M |
| bzj | 983 | 404 | 33.6K | 26.4K | 824.3K | 565K | 4.5M | 2.9M | 981.2 K | 1.4 M |
| mkn | 956 | 402 | 33.1K | 25.4K | 584.2K | 456.9K | 3.4M | 2.6M | 734.8 K | 1.0 M |
| lhu | 46K | 377 | 975K | 15.7K | 29.1M | 441.2K | 208.6M | 2.5M | 623.0 K | 38.8 M |
| ctu | 690 | 366 | 35.5K | 20.6K | 646.7K | 352.8K | 3.6M | 2M | 614.9 K | 1.2 M |
| kg | 4.7K | 365 | 85.5K | 21.7K | 2.5M | 406.7K | 16.6M | 2.6M | 905.4 K | 5.7 M |
| inb | 387 | 343 | 17.3K | 17K | 202.8K | 197K | 2M | 1.9M | 535.2 K | 555.6 K |
| guh | 1.9K | 331 | 104.9K | 28.4K | 1.5M | 328.4K | 11.2M | 3M | 789.5 K | 3.5 M |
| rn | 8.2K | 323 | 8.2K | 11.1K | 4.5M | 179K | 33.2M | 1.3M | 449.9 K | 11.8 M |
| bus | 467 | 322 | 21.4K | 12.1K | 418.4K | 219.2K | 2.1M | 1.1M | 428.8 K | 830.9 K |
| mfe | 7.5K | 320 | 198.8K | 18.2K | 4.6M | 374.8K | 26.9M | 2.1M | 716.4 K | 10.1 M |
| sda | 1.6K | 317 | 43.2K | 6.2K | 2.5M | 218.3K | 15.8M | 1.6M | 529.0 K | 4.7 M |
| bi | 71.9K | 311 | 308.5K | 13.6K | 19.4M | 359.4K | 132.4M | 1.9M | 546.9 K | 42.6 M |
| cr_Latn | 19K | 303 | 170K | 8.9K | 19K | 303 | 81.8K | 1K | 590.4 K | 15.0 M |
| gor | 1.7K | 303 | 53.3K | 6.5K | 1.4M | 227.1K | 9.4M | 1.7M | 494.0 K | 3.1 M |
| jac | 8.2K | 303 | 61.6K | 11.9K | 1.8M | 271K | 15.7M | 1.7M | 530.3 K | 7.3 M |
| chr | 964 | 301 | 33.8K | 7.5K | 629.9K | 172.3K | 4.7M | 1M | 564.1 K | 2.1 M |
| mh | 4.6K | 296 | 235.1K | 13K | 3.6M | 393.5K | 24.9M | 2.2M | 778.4 K | 8.4 M |
| mni | 1.2K | 290 | 38.1K | 13.2K | 841.3K | 245.5K | 6.4M | 1.8M | 866.6 K | 3.0 M |
| wal | 2.6K | 286 | 128K | 14K | 2M | 203.4K | 17M | 1.7M | 525.7 K | 5.1 M |
| teo | 2.8K | 274 | 131.5K | 13.7K | 2.3M | 221.4K | 15.3M | 1.6M | 564.9 K | 5.3 M |
| gub | 31.7K | 271 | 160.4K | 25K | 4.7M | 286.2K | 44.7M | 1.6M | 431.3 K | 23.1 M |
| qvi | 1.2K | 266 | 48.4K | 19.3K | 720.4K | 248.9K | 6.5M | 2.3M | 641.2 K | 1.9 M |
| tdx | 1.7K | 262 | 26.3K | 13.2K | 1M | 238.5K | 7M | 1.6M | 503.6 K | 2.1 M |
| rki | 331 | 251 | 331 | 7.8K | 119.7K | 113.7K | 1.6M | 1.5M | 751.3 K | 781.8 K |
| djk | 560 | 246 | 30.9K | 24.4K | 669.5K | 455.6K | 3.7M | 2.2M | 644.3 K | 1.0 M |
| nr | 10.7K | 246 | 10.7K | 11.3K | 5.3M | 162.5K | 49M | 1.5M | 519.7 K | 17.8 M |
| zne | 1.3K | 239 | 61.9K | 21.3K | 1.4M | 504.6K | 8.2M | 2.8M | 882.3 K | 2.8 M |
| izz | 423 | 237 | 21.7K | 14.5K | 382.8K | 194.5K | 2.1M | 1.1M | 382.2 K | 789.9 K |
| noa | 902 | 234 | 902 | 11.5K | 821.1K | 243.9K | 5.2M | 1.6M | 534.3 K | 1.7 M |
| bqc | 275 | 228 | 9.8K | 8.2K | 193K | 151.7K | 997K | 788.4K | 317.0 K | 408.1 K |
| srm | 847 | 227 | 847 | 17.3K | 1.2M | 445.3K | 6.3M | 2M | 613.4 K | 1.7 M |
| niq | 26.7K | 226 | 26.7K | 4.2K | 9.9M | 103.4K | 72.1M | 716.2K | 239.1 K | 20.9 M |
| bas | 4.2K | 216 | 105.2K | 14.9K | 4.3M | 362.8K | 25.7M | 1.7M | 600.7 K | 7.6 M |
| dwr | 452 | 215 | 22.1K | 11.1K | 269.4K | 139.5K | 2.2M | 1.2M | 375.4 K | 747.6 K |
| guc | 537 | 214 | 22.9K | 12.5K | 422.4K | 218.1K | 3.4M | 1.8M | 540.1 K | 1.1 M |
| jvn | 1K | 213 | 36.2K | 7.8K | 790.5K | 185.6K | 5.3M | 1.2M | 357.2 K | 1.7 M |
| hvn | 737 | 200 | 33.9K | 7K | 779.7K | 239.4K | 4.3M | 1.2M | 378.5 K | 1.4 M |
| sxn | 587 | 197 | 587 | 9.9K | 494K | 220.6K | 3.4M | 1.5M | 507.1 K | 1.2 M |
| koi | 20.7K | 196 | 153.9K | 5K | 2.2M | 89.9K | 17.1M | 664.5K | 323.0 K | 7.1 M |
| alz | 2.2K | 195 | 59.3K | 12.2K | 1.3M | 246.9K | 7.9M | 1.4M | 488.1 K | 2.9 M |
| nyu | 1.2K | 195 | 1.2K | 11K | 988.7K | 210.5K | 7.7M | 1.6M | 492.6 K | 2.2 M |
| bn_Latn | 98.7K | 191 | 1.3M | 12K | 98.7K | 191 | 458K | 730 | 314.7 K | 81.0 M |
| suz | 226 | 186 | 226 | 11.3K | 169.6K | 140.5K | 1M | 855.2K | 339.5 K | 429.6 K |
| pau | 1.7K | 185 | 1.7K | 13.1K | 2M | 394.6K | 12.4M | 2M | 600.1 K | 3.2 M |
| nij | 1K | 183 | 1K | 9.2K | 741.6K | 186.1K | 4.7M | 1.2M | 389.6 K | 1.6 M |
| sat_Latn | 39K | 183 | 39K | 5.5K | 39K | 183 | 183.8K | 601 | 276.1 K | 39.2 M |
| gu_Latn | 58.2K | 179 | 688.4K | 5.4K | 58.2K | 179 | 260.8K | 673 | 241.0 K | 47.9 M |
| msm | 520 | 177 | 520 | 8.6K | 410.8K | 190.5K | 2.5M | 1.1M | 339.7 K | 789.8 K |
| maz | 585 | 170 | 21.3K | 8.2K | 452.9K | 174K | 2.9M | 951.7K | 304.7 K | 971.4 K |
| qxr | 2.6K | 153 | 40.8K | 6.4K | 761.5K | 75.4K | 6.6M | 724K | 186.4 K | 1.9 M |
| shp | 874 | 150 | 22.4K | 3.7K | 534.1K | 96.8K | 3.8M | 710.4K | 216.9 K | 1.2 M |
| hne | 3K | 146 | 118.4K | 4.3K | 2.3M | 139.3K | 12M | 697K | 379.3 K | 6.5 M |
| ktu | 3.3K | 144 | 115.5K | 7.8K | 3.2M | 196.9K | 18.5M | 1.1M | 300.1 K | 5.4 M |
| laj | 6.5K | 144 | 61K | 6.4K | 2.4M | 140.1K | 15.8M | 730.5K | 233.5 K | 4.6 M |
| pis | 1.1K | 139 | 62K | 7.2K | 1.3M | 136.8K | 7.7M | 764K | 212.7 K | 2.2 M |
| mag | 631 | 138 | 62.6K | 22.1K | 2.1M | 544.2K | 10.7M | 2.6M | 1.4 M | 5.4 M |
| gbm | 2.5K | 137 | 50.8K | 3.8K | 1.7M | 99.7K | 9.1M | 499.6K | 282.4 K | 4.5 M |
| tzj | 471 | 136 | 11.1K | 7.3K | 299.9K | 150.8K | 1.9M | 884.2K | 272.0 K | 663.9 K |
| oj | 2.5K | 135 | 2.5K | 1.6K | 1.2M | 35.9K | 9.6M | 337.1K | 117.6 K | 3.4 M |
| ndc_ZW | 2.2K | 132 | 2.2K | 8.7K | 2.2K | 132 | 9.1K | 523 | 343.1 K | 2.2 M |
| tks | 63.7K | 127 | 63.7K | 6.8K | 17.1M | 41.5K | 88.9M | 260.8K | 39.5 K | 33.0 M |
| awa | 5.8K | 126 | 100.1K | 8.4K | 2.2M | 98.7K | 11.1M | 475K | 226.6 K | 5.8 M |
| gvl | 37.9K | 126 | 213K | 6.9K | 21.1M | 161.1K | 141M | 789.2K | 257.8 K | 31.7 M |
| knj | 229 | 126 | 10.1K | 9.2K | 202.6K | 171.8K | 1.1M | 855K | 253.1 K | 345.4 K |
| spp | 733 | 123 | 733 | 5.8K | 902.7K | 141.8K | 4.4M | 682.5K | 217.8 K | 1.4 M |
| mqy | 69.3K | 119 | 309K | 2.5K | 12.1M | 88.6K | 78.9M | 506.5K | 170.4 K | 16.3 M |
| tca | 410 | 117 | 20K | 7.3K | 283K | 121.5K | 2.3M | 786K | 226.2 K | 781.2 K |
| cce | 847 | 116 | 23.2K | 11K | 539.3K | 227.2K | 3.3M | 1.3M | 393.8 K | 1.1 M |
| skr | 3.8K | 107 | 279.3K | 17.1K | 6.2M | 324K | 32.2M | 1.7M | 768.5 K | 15.4 M |
| kmz_Latn | 24K | 106 | 361K | 2.4K | 24K | 106 | 108.6K | 401 | 231.8 K | 16.7 M |
| dje | 913 | 100 | 40.2K | 3.7K | 816.3K | 97.5K | 4.7M | 480.7K | 161.2 K | 1.5 M |
| gof | 2.8K | 97 | 33.8K | 5.5K | 703K | 68.8K | 5.5M | 506K | 159.1 K | 1.7 M |
| agr | 465 | 93 | 16.1K | 3.6K | 295.4K | 67.2K | 2.3M | 554.5K | 177.0 K | 760.1 K |
| qvz | 534 | 88 | 6.8K | 3.5K | 145.5K | 50.5K | 1.2M | 438.3K | 124.2 K | 382.7 K |
| adh | 2.6K | 87 | 107.2K | 1K | 2.4M | 42.1K | 14.5M | 254.9K | 84.6 K | 5.0 M |
| quf | 522 | 86 | 8.4K | 5.2K | 155.7K | 61.8K | 1.5M | 609K | 173.7 K | 542.8 K |
| kjg | 113 | 84 | 3K | 2.9K | 67.6K | 67K | 408.5K | 399K | 159.2 K | 167.7 K |
| tsc | 12.6K | 82 | 12.6K | 4K | 3.5M | 93.1K | 23.4M | 521.3K | 161.9 K | 7.0 M |
| ber | 2.7K | 79 | 12.6K | 1.2K | 1.1M | 46.4K | 6.4M | 265.9K | 141.5 K | 3.0 M |
| ify | 611 | 79 | 19.8K | 2.8K | 422.7K | 56.2K | 2.6M | 334K | 109.5 K | 913.1 K |
| cbk | 10.1K | 78 | 43.8K | 2K | 1.7M | 64.3K | 10.3M | 339.3K | 93.4 K | 3.4 M |
| quy | 588 | 78 | 28.1K | 2.7K | 423.3K | 37.3K | 4.5M | 368.2K | 114.5 K | 1.2 M |
| ahk | 244 | 77 | 6.2K | 4.1K | 264K | 124.8K | 1.3M | 715.5K | 182.8 K | 359.7 K |
| cac | 212 | 77 | 3.4K | 1.8K | 125.7K | 54.1K | 978.7K | 319.8K | 95.8 K | 280.3 K |
| akb | 1K | 71 | 21.3K | 408 | 870.9K | 54.5K | 5.2M | 337.8K | 93.7 K | 1.6 M |
| nut | 29K | 67 | 29K | 1.5K | 4.8M | 39.8K | 23.5M | 184.1K | 36.4 K | 8.3 M |
| ffm | 1.8K | 65 | 30.1K | 2K | 745.6K | 39.1K | 4.6M | 236.1K | 83.8 K | 1.8 M |
| taj | 146 | 65 | 21.6K | 14.3K | 309.7K | 203K | 2.3M | 1.4M | 503.0 K | 872.7 K |
| ms_Arab | 698 | 63 | 698 | 320 | 698 | 63 | 2.9K | 239 | 64.7 K | 1016.0 K |
| brx | 322 | 62 | 5.3K | 2.4K | 144.2K | 41K | 1.1M | 304.4K | 146.6 K | 515.7 K |
| ann | 464 | 56 | 5K | 1.6K | 116.4K | 35.9K | 760.9K | 215.1K | 74.9 K | 295.2 K |
| qup | 169 | 53 | 4.3K | 2.5K | 77.5K | 31.3K | 763.8K | 297.8K | 74.7 K | 207.3 K |
| ms_Arab_BN | 2.6K | 46 | 2.6K | 374 | 2.6K | 46 | 10.5K | 171 | 50.0 K | 5.1 M |
| miq | 236 | 45 | 6.4K | 3.5K | 183.7K | 80.2K | 1.2M | 485.6K | 157.6 K | 384.1 K |
| msb | 811 | 41 | 811 | 1K | 705.9K | 28.8K | 4.4M | 167.5K | 53.3 K | 1.7 M |
| bim | 410 | 40 | 31.1K | 6.3K | 669.8K | 167.4K | 3.2M | 793.4K | 252.7 K | 1.1 M |
| raj | 1.8K | 40 | 1.8K | 5.7K | 1.3M | 81.1K | 7.1M | 405K | 226.2 K | 3.9 M |
| kwi | 382 | 37 | 16.9K | 2.2K | 253.8K | 23.4K | 1.8M | 172.8K | 47.6 K | 536.2 K |
| tll | 200 | 37 | 200 | 2.7K | 304.2K | 62.2K | 2.2M | 409.8K | 132.3 K | 664.5 K |
| trp | 12.8K | 36 | 12.8K | 1.7K | 4.1M | 39K | 29.9M | 257.3K | 87.5 K | 10.2 M |
| smt | 1.4K | 34 | 1.4K | 703 | 1M | 36.5K | 6.8M | 245.4K | 87.9 K | 2.5 M |
| mrw | 11.3K | 29 | 11.3K | 1K | 4.2M | 45.7K | 27.8M | 257.2K | 81.3 K | 8.8 M |
| dln | 236 | 28 | 5.2K | 969 | 150.8K | 21.5K | 860.5K | 118.3K | 36.8 K | 280.3 K |
| qvc | 3.4K | 27 | 14.6K | 2.2K | 495.7K | 25.7K | 5M | 233.7K | 65.3 K | 2.6 M |
| doi | 1.7K | 26 | 21.8K | 975 | 568.7K | 25.5K | 3.2M | 135.3K | 66.7 K | 1.6 M |
| ff | 13.6K | 26 | 150K | 5K | 3.4M | 46.5K | 22.8M | 277.6K | 78.8 K | 8.5 M |

## Citation Information

~~~
@misc{kudugunta2023madlad400,
  title={MADLAD-400: A Multilingual And Document-Level Large Audited Dataset},
  author={Sneha Kudugunta and Isaac Caswell and Biao Zhang and Xavier Garcia and Christopher A. Choquette-Choo and Katherine Lee and Derrick Xin and Aditya Kusupati and Romi Stella and Ankur Bapna and Orhan Firat},
  year={2023},
  eprint={2309.04662},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
~~~
neovalle/H4rmony
---
license: cc-by-4.0
task_categories:
- reinforcement-learning
- text-classification
- question-answering
language:
- en
tags:
- Ecolinguistics
- Sustainability
- ecolinguistic
- environment
size_categories:
- 1K<n<10K
---

# Dataset Card for Dataset H4rmony

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64aac16fd4a402e8dce11ebe/JvATkLVXNH4aRooFMpOR0.png)

**There is a simplified version, specifically curated for DPO training, here: https://huggingface.co/datasets/neovalle/H4rmony_dpo**

### Dataset Summary

The H4rmony dataset is a collection of prompts and completions aimed at integrating ecolinguistic principles into AI Large Language Models (LLMs). Developed with collaborative efforts from ecolinguistics enthusiasts and experts, it offers a series of prompts with corresponding pairwise responses ranked in terms of environmental awareness and alignment. This ranking provides a clear metric for the desired alignment and establishes a framework for fine-tuning LLMs, particularly via reinforcement learning with a reward model.

This dataset aims to bridge the gap between AI and ecolinguistic values, pushing the envelope for creating generative AI models that are environmentally and sustainability aware by design. H4rmony is not just a dataset; it is a project towards harmonising AI with nature by means of fine-tuning. We believe in the potential of using ecolinguistics to fine-tune and influence LLMs towards more eco-aware outputs.

This dataset is currently a work in progress.

### Languages

Currently English only; it will be extended to be multilingual.

## Dataset Structure

### Data Fields

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64aac16fd4a402e8dce11ebe/yjppU7ROQvpePUCmDILTr.png)

### Ecological Issues - Codes meaning

This table shows the meaning of the codes used to classify ecological issues, along with examples of how the issues manifest and how they relate to the 17 UN Sustainable Development Goals.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64aac16fd4a402e8dce11ebe/KZHqvijuEHZsf8vwQavPu.png)

### Data Splits

There are no predefined splits in the dataset. Splits can be created when loading it, for example: `dataset = load_dataset('neovalle/H4rmony', split='train').train_test_split(test_size=0.2)`

## Dataset Creation

### Curation Rationale

Given the multidisciplinary nature of the challenge, the H4rmony dataset is being enriched by contributions from environmentalists, AI specialists, and ecolinguistics enthusiasts. This collective effort ensures the data is both technically sound and ecologically meaningful.

The dataset was initially created by a variant of human feedback which involved role-playing and human verification:

- We created a list of prompts suggested by the ecolinguistics community.
- We then instructed GPT-4 with several ecolinguistic principles and asked it to provide three types of answers for each prompt:
  - one as if answered by someone aware of ecolinguistics,
  - another as if answered by someone unaware of ecolinguistics,
  - and a third, somewhat ambivalent, response.

We then constructed the dataset, already knowing the ranks of the answers:

1. Ecolinguistics-aware role.
2. Ambivalent answer.
3. Ecolinguistics-unaware role.

We named this variation of RLHF Reinforcement Learning by Role-playing and Human Verification (RLRHV).
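To make the ranked structure concrete, here is a minimal sketch of how such ranked completions can be turned into pairwise preference pairs for reward-model training. The field names and example texts below are illustrative assumptions, not the dataset's actual schema:

```python
from itertools import combinations

# Illustrative row: one prompt with three responses ordered by rank
# (1 = ecolinguistics-aware, 2 = ambivalent, 3 = ecolinguistics-unaware).
row = {
    "prompt": "Should I replace my lawn?",
    "responses": [
        "Consider letting part of it rewild; native plants support local ecosystems.",  # rank 1
        "You could keep the lawn but mow it less often.",                               # rank 2
        "Pave it over; grass is a hassle.",                                             # rank 3
    ],
}

def to_preference_pairs(row):
    """Every higher-ranked response is 'chosen' over every lower-ranked one."""
    pairs = []
    for better, worse in combinations(range(len(row["responses"])), 2):
        pairs.append({
            "prompt": row["prompt"],
            "chosen": row["responses"][better],
            "rejected": row["responses"][worse],
        })
    return pairs

# Three pairs: (1 > 2), (1 > 3), (2 > 3), matching the ranking described above.
for pair in to_preference_pairs(row):
    print(pair["chosen"][:40], ">", pair["rejected"][:40])
```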
The following image compares traditional RLHF and the variant we applied (RLRHV):

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64aac16fd4a402e8dce11ebe/LpUGiCh1Exce7wl8dK3nD.png)

### Source Data

#### Initial Data Collection and Normalization

The core of the H4rmony dataset originated from active collaborations within the ecolinguistics community. Contributors were asked to submit prompts that would help uncover AI models' alignment with ecolinguistic values. A number of prompts and completions were AI-generated using prompt engineering, and human-crafted prompts were added to this initial group.

### DPO Version

There is a simplified version, specifically curated for DPO training, here:
https://huggingface.co/datasets/neovalle/H4rmony_dpo
(a quick-start loading sketch appears at the end of this card)

### Personal and Sensitive Information

This dataset doesn't contain sensitive information.

## Considerations for Using the Data

This dataset is still under construction and might contain offensive language.

### Social Impact of Dataset

The H4rmony project aims to make environmental consciousness a priority for AI LLMs. By serving as the fourth "H", "Harmony with nature", it complements the existing triad of Helpfulness, Honesty, and Harmlessness already well known in ethical AI development.

The following models have been fine-tuned using the H4rmony dataset:

https://huggingface.co/neovalle/H4rmoniousCaramel = google/flan-t5-Large + H4rmony dataset (instruction fine-tuning)

https://huggingface.co/neovalle/H4rmoniousPampero = HuggingFaceH4/zephyr-7b-alpha + H4rmony dataset (reinforcement learning)

https://huggingface.co/neovalle/H4rmoniousBreeze = HuggingFaceH4/zephyr-7b-beta + H4rmony dataset (reinforcement learning)

https://huggingface.co/neovalle/H4rmoniousAnthea = teknium/OpenHermes-2.5-Mistral-7B + H4rmony_dpo dataset (DPO fine-tuning)

### Discussion of Biases

No known biases.

### Other Known Limitations

The dataset is still under construction, and the current number of rows might not be enough for some use cases.

## Additional Information

### Dataset Curators

Jorge Vallego - airesearch@neovalle.co.uk

### Licensing Information

Creative Commons Attribution 4.0

### Citation Information

dataset neovalle/H4rmony - airesearch@neovalle.co.uk

### Testing and PoC Repository

https://github.com/Neovalle/H4rmony

### Note

This project has its roots in the article "Ecolinguistics and AI: Integrating eco-awareness in natural language processing":
https://www.ecoling.net/_files/ugd/ae088a_13cc4828a28e4955804d38e8721056cf.pdf
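As a quick-start illustration (referenced in the DPO Version section above), the sketch below loads the DPO-curated variant and creates a held-out split. The column names `prompt`, `chosen`, and `rejected` follow common DPO conventions but are assumptions here; verify them against the data fields image above.

```python
from datasets import load_dataset

# Load the DPO-curated variant and create an 80/20 split, as the card suggests.
dataset = load_dataset("neovalle/H4rmony_dpo", split="train")
splits = dataset.train_test_split(test_size=0.2, seed=42)
train_ds, test_ds = splits["train"], splits["test"]

# Peek at one preference pair; the field names below are assumptions.
example = train_ds[0]
for key in ("prompt", "chosen", "rejected"):
    if key in example:
        print(key, "->", str(example[key])[:120])
```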
nampdn-ai/tiny-code-textbooks
---
license: cc-by-nc-sa-4.0
task_categories:
- text-generation
language:
- en
tags:
- code
pretty_name: Tiny Code Textbooks
size_categories:
- 100K<n<1M
---
# Code Explanation Textbooks

A collection of 207k synthetic code samples, each paired with an explanation written in the style of a tiny textbook. The samples were filtered from The Stack, with a few thousand samples per programming language; only the most meaningful code was selected for generating the synthetic textbooks.
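As a usage illustration, here is a minimal streaming sketch. The column layout is not documented on this card, so the code inspects the first record rather than assuming field names.

```python
from datasets import load_dataset

# Stream the corpus instead of downloading all ~207k samples up front.
ds = load_dataset("nampdn-ai/tiny-code-textbooks", split="train", streaming=True)

# Inspect the first record to discover the actual column names.
first = next(iter(ds))
print(list(first.keys()))
print(str(first)[:300])
```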
goendalf666/sales-conversations
---
language:
- en
size_categories:
- 1K<n<10K
task_categories:
- conversational
dataset_info:
  features:
  - name: '0'
    dtype: string
  - name: '1'
    dtype: string
  - name: '2'
    dtype: string
  - name: '3'
    dtype: string
  - name: '4'
    dtype: string
  - name: '5'
    dtype: string
  - name: '6'
    dtype: string
  - name: '7'
    dtype: string
  - name: '8'
    dtype: string
  - name: '9'
    dtype: string
  - name: '10'
    dtype: string
  - name: '11'
    dtype: string
  - name: '12'
    dtype: string
  - name: '13'
    dtype: string
  - name: '14'
    dtype: string
  - name: '15'
    dtype: string
  - name: '16'
    dtype: string
  - name: '17'
    dtype: string
  - name: '18'
    dtype: string
  - name: '19'
    dtype: string
  splits:
  - name: train
    num_bytes: 6821725
    num_examples: 3412
  download_size: 2644154
  dataset_size: 6821725
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
tags:
- sales
---
# Dataset Card for "sales-conversations"

This dataset was created for the purpose of training a sales agent chatbot that can convince people. The initial idea came from the paper "Textbooks Are All You Need": https://arxiv.org/abs/2306.11644

gpt-3.5-turbo was used for the generation.

# Structure

Each conversation alternates between a customer and a salesman: customer, salesman, customer, salesman, etc. The customer always starts the conversation; who ends the conversation is not defined. (A reconstruction sketch appears at the end of this card.)

# Generation

Note that a textbook dataset is mandatory for this conversation generation. These examples rely on the following textbook dataset:

https://huggingface.co/datasets/goendalf666/sales-textbook_for_convincing_and_selling

The data generation code can be found here:
https://github.com/tom813/salesGPT_foundation/blob/main/data_generation/textbook_and_conversation_gen.py

The following prompt was used to create a conversation:

```python
import random


def create_random_prompt(chapter, roles=["Customer", "Salesman"], range_vals=(3, 7), industries=None):
    if industries is None:
        industries = ["tech", "health", "finance"]  # default industries; replace with your default list if different

    x = random.randint(*range_vals)

    y = 0
    for i in reversed(range(3, 9)):  # generalized loop for range of values
        if i * x < 27:
            y = i
            break

    conversation_structure = ""
    for i in range(1, x + 1):
        conversation_structure += f"""
{roles[0]}: #{i}. sentence of {roles[0].lower()}
{roles[1]}: #{i}. sentence of {roles[1].lower()}"""

    prompt = f"""Here is a chapter from a textbook about convincing people.
The purpose of this data is to use it to fine tune a llm.
Generate conversation examples that are based on the chapter that is provided and would help an ai to learn the topic by examples.
Focus only on the topic that is given in the chapter when generating the examples.
Let the example be in the {random.choice(industries)} industry.

Follow this structure and put each conversation in a list of objects in json format. Only return the json nothing more:
{conversation_structure}

Generate {y} lists of those conversations

Chapter: {chapter}"""

    return prompt
```

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
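As a usage illustration (referenced in the Structure section above), here is a minimal sketch that rebuilds one conversation from the numbered string columns. It assumes unused trailing columns are empty or `None`, and that speaker labels are not already embedded in each cell; both points are worth verifying against a real sample.

```python
from datasets import load_dataset

ds = load_dataset("goendalf666/sales-conversations", split="train")

def reconstruct(example: dict) -> str:
    """Rebuild one conversation from the numbered columns '0'..'19'.

    Turns alternate between the two speakers, and the customer always
    speaks first; we stop at the first empty column (assumed to mark
    the end of a shorter conversation).
    """
    turns = []
    for i in range(20):
        text = example[str(i)]
        if not text:
            break
        speaker = "Customer" if i % 2 == 0 else "Salesman"
        turns.append(f"{speaker}: {text}")
    return "\n".join(turns)

print(reconstruct(ds[0]))
```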
qgyd2021/chinese_chitchat
---
license: apache-2.0
language:
- zh
tags:
- chitchat
size_categories:
- 100M<n<1B
---
## Chinese Chitchat Dataset

The `role` field takes one of three values: "unknown", "human", "assistant". (A role-counting sketch appears at the end of this card.)

The corpora were collected from the web and organized as follows:

| Dataset | Original data / project link | Samples | Corpus description | Alternative download link |
| :--- | :---: | :---: | :---: | :---: |
| ChatterBot | [ChatterBot](https://github.com/gunthercox/ChatterBot); [chatterbot-corpus](https://github.com/gunthercox/chatterbot-corpus) | 560 | Categorized by type; relatively high quality | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); extraction code: 81ao |
| douban | [Douban Conversation Corpus](https://github.com/MarkWuNLP/MultiTurnResponseSelection) | 3.52M | From a paper by Beihang University and Microsoft; relatively little noise; multi-turn (7.6 turns on average) | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); extraction code: 81ao |
| ptt | [PTT Chinese Corpus](https://github.com/zake7749/Gossiping-Chinese-Corpus) | 770K | Open-source project; from the Gossiping board of Taiwan's PTT forum; traditional Chinese; fairly colloquial; some noise | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); extraction code: 81ao |
| qingyun | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); extraction code: 81ao | 100K | Qingyun corpus; relatively good quality; conversational | |
| subtitle | [TV and movie subtitle corpus](https://github.com/aceimnorstuvwxz/dgk_lost_conv) | 2.74M | Subtitles scraped from movies and American TV shows; some noise; loosely structured dialogue; speakers cannot be matched to turns; multi-turn (5.3 turns on average) | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); extraction code: 81ao |
| tieba | [Tieba forum reply corpus](https://pan.baidu.com/s/1mUknfwy1nhSM7XzH8xi7gQ); password: i4si | 2.32M | Multi-turn; noisy | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); extraction code: 81ao |
| weibo | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); extraction code: 81ao | 4.43M | From a Huawei paper | |
| xiaohuangji | [Xiaohuangji corpus](https://github.com/candlewill/Dialog_Corpus) | 450K | Corpus from the former Renren project; contains some inappropriate dialogue; a little noise | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); extraction code: 81ao |

<details>
<summary>Referenced data sources (expand to view)</summary>
<pre>
<code>
https://github.com/codemayq/chinese_chatbot_corpus
https://github.com/yangjianxin1/GPT2-chitchat
</code>
</pre>
</details>
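As a usage illustration (referenced above), a hedged sketch for counting `role` values. The subset name `xiaohuangji` mirrors the source table but is an assumption, as is the conversation field layout; inspect the first sample's keys before relying on either.

```python
from collections import Counter

from datasets import load_dataset

# Subset name and field layout are assumptions -- verify against the repo.
ds = load_dataset("qgyd2021/chinese_chitchat", name="xiaohuangji", split="train", streaming=True)

roles = Counter()
for i, sample in enumerate(ds):
    # Assumed layout: a "conversation" list of {"role": ..., "message": ...} turns.
    for turn in sample.get("conversation", []):
        roles[turn.get("role", "unknown")] += 1
    if i >= 999:  # only sample the first 1,000 dialogues
        break
print(roles)
```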
ddrg/math_formulas
---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 225647910.0
    num_examples: 2886810
  - name: test
    num_bytes: 23848817.0
    num_examples: 311298
  download_size: 131762427
  dataset_size: 249496727.0
---
# Dataset Card for "math_formulas"

A mathematical dataset containing formulas based on the [AMPS](https://drive.google.com/file/d/1hQsua3TkpEmcJD_UWQx8dmNdEZPyxw23) Khan dataset and the [ARQMath](https://drive.google.com/drive/folders/1YekTVvfmYKZ8I5uiUMbs21G2mKwF9IAm) dataset V1.3. Based on the retrieved LaTeX formulas, additional equivalent versions have been generated by applying randomized LaTeX printing with this [SymPy fork](https://drive.google.com/drive/folders/1YekTVvfmYKZ8I5uiUMbs21G2mKwF9IAm).

The formulas are intended to be well suited for masked language modeling (MLM). For instance, masking a formula like `(a+b)^2 = a^2 + 2ab + b^2` makes sense (e.g., `(a+[MASK])^2 = a^2 + [MASK]ab + b[MASK]2` -> the masked tokens are deducible from the context); in contrast, formulas such as `f(x) = 3x+1` are not (e.g., `[MASK](x) = 3x[MASK]1` -> the [MASK] tokens are ambiguous).
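As a concrete illustration of the MLM use case above, the sketch below tokenizes a small sample and applies random masking with the standard `transformers` collator. The `bert-base-cased` checkpoint is an arbitrary example choice; a tokenizer with better LaTeX coverage may be preferable in practice.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
ds = load_dataset("ddrg/math_formulas", split="train").select(range(1000))

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=64)

tokenized = ds.map(tokenize, batched=True, remove_columns=ds.column_names)

# Randomly mask 15% of tokens; labels are -100 everywhere except masked positions.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
batch = collator([tokenized[i] for i in range(4)])
print(batch["input_ids"].shape, int((batch["labels"] != -100).sum()), "masked tokens")
```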
sakusakumura/dolly-14k-ines
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: category dtype: string - name: output dtype: string - name: input dtype: string - name: instruction dtype: string - name: index dtype: string splits: - name: train num_bytes: 13572665 num_examples: 14199 download_size: 7803782 dataset_size: 13572665 license: cc-by-sa-3.0 task_categories: - question-answering - summarization language: - ja size_categories: - 10K<n<100K --- # dolly-14k-ines ### Description The **dolly-14k-ines** dataset is derived from the `databricks-dolly-15k-ja`, which is a machine-translated version of the `databricks/dolly-15k`. The entries have been further converted to reflect the speech pattern of Ines Fujin, a character from "Umamusume Pretty Derby." The conversion process utilized a model specifically developed for this task, and entries that did not successfully emulate the character's textual speech style were omitted, resulting in a slightly smaller dataset. The foundational `databricks-dolly-15k` dataset includes instruction-following records generated by Databricks employees and spans several behavioral categories, such as brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization, as defined in the InstructGPT paper. This dataset is created and provided based on Umamusume Pretty Derby's derivative creation guidelines. ### License This dataset is available under the Creative Commons Attribution-ShareAlike 3.0 Unported License. Additionally, to use this dataset, you must comply with the 'Umamusume Pretty Derby' derivative creation guidelines. The full text of the guidelines can be found at the link below. [Derivative creation guidelines for Umamusume Pretty Derby.](https://umamusume.jp/sp/derivativework_guidelines/) ### Included Tasks The tasks included in the **dolly-14k-ines** dataset mirror those from the original `databricks-dolly-15k` dataset and encompass a variety of natural language processing scenarios: - **Creative Writing**: Devise prompts that elicit a creative, open-ended written response, suitable for someone with general knowledge, and detailed enough to avoid the need for external research. - **Closed QA**: Create complex questions that rely on factual accuracy, based on provided Wikipedia text excerpts, that involve human-level reasoning without necessitating specialized expertise. - **Open QA**: Frame questions that can be answered with general world knowledge or minimal research, soliciting both factual and opinion-based responses without the need for reference material. - **Summarization**: Condense information from a Wikipedia paragraph, ensuring the summarization task can be reasonably completed within a brief timeframe. - **Information Extraction**: Extract specific information from a Wikipedia paragraph, where the text contains all the necessary details for formulating a response. - **Classification**: Classify entities from given lists or categories, such as movie reviews or products, where the classification criteria are contained within the prompt itself. - **Brainstorming**: Generate a diverse array of ideas in response to a given question, fostering creativity and a wide range of suggestions. 
These tasks are originally defined in the `databricks-dolly-15k` dataset and are retained in the **dolly-14k-ines** dataset to facilitate research and application in natural language processing, particularly for those interested in style-specific text adaptation or character-specific speech emulation. (A category-filtering sketch appears at the end of this card.)

### Updates

2023-11-04: Added a description regarding the license. When using this dataset, you must follow Umamusume Pretty Derby's derivative creation guidelines.
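As a usage illustration (referenced above), the sketch below filters the corpus to a single behavioral category. The exact label strings are inherited from `databricks-dolly-15k`; the value `"summarization"` used here is an assumption, so check `ds.unique("category")` first.

```python
from datasets import load_dataset

ds = load_dataset("sakusakumura/dolly-14k-ines", split="train")

# List the category labels actually present before filtering.
print(sorted(ds.unique("category")))

# Keep only one behavioral category (label string assumed).
summaries = ds.filter(lambda ex: ex["category"] == "summarization")
print(len(summaries), "summarization examples")
print(summaries[0]["instruction"])
```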
pseudolab/autotrain-data-Nuclear_Fusion_Falcon
--- dataset_info: features: - name: 'Unnamed: 0' dtype: int64 - name: Magnetic Field Fluctuations dtype: float64 - name: Leakage dtype: float64 - name: Instabilities dtype: float64 - name: Plasma Instabilities dtype: float64 - name: Magnetic Field Strength dtype: float64 - name: Injection Energy dtype: float64 - name: Beam Symmetry dtype: float64 - name: Target Density dtype: float64 - name: Target Composition dtype: string - name: Fuel Density dtype: float64 - name: Temperature dtype: float64 - name: Confinement Time dtype: float64 - name: Fuel Purity dtype: float64 - name: Energy Input dtype: float64 - name: Power Output dtype: float64 - name: Pressure dtype: float64 - name: Neutron Yield dtype: float64 - name: Ignition dtype: int64 - name: autotrain_text dtype: string splits: - name: train num_bytes: 17566788 num_examples: 100000 - name: validation num_bytes: 17566788 num_examples: 100000 download_size: 32112642 dataset_size: 35133576 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* --- # Dataset Card for "autotrain-data-Nuclear_Fusion_Falcon" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
higgsfield/school-math-questions
--- dataset_info: features: - name: prompt dtype: string - name: completion dtype: string splits: - name: train num_bytes: 4787332 num_examples: 8792 download_size: 2576099 dataset_size: 4787332 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "school-math-questions" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yuyijiong/Multi-doc-QA-CommonCrawl
---
license: cc-by-nc-4.0
size_categories:
- 10K<n<100K
language:
- en
---
* Update on December 24, 2023: Improved the format of answers: all answers are now forced to quote the referenced original text first.
* English multi-document Q&A data created using RedPajama CommonCrawl data as reference text.
* In the raw dataset, each sample contains <font color=red>one reference document, 199 irrelevant documents, and a Q&A pair based on the reference document</font>. It can be used to train models to extract the target information from a large number of documents.
* After filtering, integrating, and transforming the raw data into chatml-format instruction fine-tuning data, each sample contains approximately 30 reference documents and 5 corresponding QA pairs.
* Dataset size: 11k.
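As a usage illustration, a minimal inspection sketch; the split name and field layout are assumptions (the card does not publish a schema), so the code prints what it finds rather than assuming column names.

```python
from datasets import load_dataset

# Split name "train" is an assumption; streaming avoids a full download.
ds = load_dataset("yuyijiong/Multi-doc-QA-CommonCrawl", split="train", streaming=True)

sample = next(iter(ds))
for key, value in sample.items():
    preview = str(value)
    print(f"{key}: {len(preview)} chars -> {preview[:100]!r}")
```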
pykeio/librivox-tracks
--- license: cc-by-4.0 task_categories: - text-to-speech - automatic-speech-recognition pretty_name: LibriVox Tracks size_categories: - 100K<n<1M language: - ace - afr - grc - ara - ban - bel - ben - ceb - bug - bul - yue - cat - zho - hrv - ces - dan - nld - eng - epo - fao - fin - fra - glg - deu - ell - heb - hin - hun - ind - gle - ita - jpn - jav - pam - kor - kur - lat - lav - lit - nds - ltz - mkd - zlm - mlt - mri - mar - enm - min - nor - nno - oci - ang - kaw - non - osn - tpw - ori - fas - pol - por - ron - rus - gla - srp - slk - slv - spa - sun - swe - tgl - tam - tur - ukr - urd - vol - wln - cym - fry - yid --- A dataset of all audio files uploaded to LibriVox before 26th September 2023.
allenai/paloma
--- extra_gated_prompt: "Access to this dataset is automatically granted upon accepting the [**AI2 ImpACT License – Low Risk Artifacts (“LR Agreement”)**](https://allenai.org/licenses/impact-lr) and completing all fields below. All data subsets in this dataset are licensed under the LR Agreement, except for those as listed in the 'License' section of the Dataset Card." extra_gated_fields: Your full name: text Organization or entity you are affiliated with: text State or country you are located in: text Contact email: text Please describe your intended use of the low risk artifact(s): text I AGREE to the terms and conditions of the LR Agreement above: checkbox I AGREE to AI2’s use of my information for legal notices and administrative matters: checkbox I CERTIFY that the information I have provided is true and accurate: checkbox dataset_info: - config_name: 4chan_meta_sep features: - name: text dtype: string - name: id dtype: string - name: added dtype: string - name: source dtype: string - name: metadata struct: - name: original_ids sequence: int64 - name: original_times sequence: int64 - name: semantic_url dtype: string - name: truncated_portion dtype: string - config_name: c4_100_domains features: - name: text dtype: string - name: id dtype: string - name: added dtype: string - name: source dtype: string - name: subdomain dtype: string - config_name: c4_en features: - name: text dtype: string - name: id dtype: string - name: added dtype: string - name: source dtype: string - name: metadata struct: - name: url dtype: string - name: date dtype: string - name: truncated_portion dtype: string - config_name: dolma-v1_5 features: - name: text dtype: string - name: id dtype: string - name: added dtype: string - name: source dtype: string - name: subdomain dtype: string - name: metadata dtype: struct - config_name: dolma_100_programming_languages_no_attributes features: - name: text dtype: string - name: id dtype: string - name: added dtype: string - name: source dtype: string - name: subdomain dtype: string - name: metadata dtype: struct - name: timestamp dtype: timestamp[s] configs: - config_name: 4chan_meta_sep data_files: - split: val path: "4chan_meta_sep/val/*" - split: test path: "4chan_meta_sep/test/*" - config_name: c4_100_domains data_files: - split: val path: "c4_100_domains/val/*" - split: test path: "c4_100_domains/test/*" - config_name: c4_en data_files: - split: val path: "c4_en/val/*" - split: test path: "c4_en/test/*" - config_name: dolma-v1_5 data_files: - split: val path: "dolma-v1_5/val/*" - split: test path: "dolma-v1_5/test/*" - config_name: dolma_100_programming_languages_no_attributes data_files: - split: val path: "dolma_100_programming_languages_no_attributes/val/*" - split: test path: "dolma_100_programming_languages_no_attributes/test/*" - config_name: dolma_100_subreddits data_files: - split: val path: "dolma_100_subreddits/val/*" - split: test path: "dolma_100_subreddits/test/*" - config_name: falcon-refinedweb data_files: - split: val path: "falcon-refinedweb/val/*" - split: test path: "falcon-refinedweb/test/*" - config_name: gab data_files: - split: val path: "gab/val/*" - split: test path: "gab/test/*" - config_name: m2d2_s2orc_unsplit data_files: - split: val path: "m2d2_s2orc_unsplit/val/*" - split: test path: "m2d2_s2orc_unsplit/test/*" - config_name: m2d2_wikipedia_unsplit data_files: - split: val path: "m2d2_wikipedia_unsplit/val/*" - split: test path: "m2d2_wikipedia_unsplit/test/*" - config_name: manosphere_meta_sep data_files: - split: val path: 
"manosphere_meta_sep/val/*" - split: test path: "manosphere_meta_sep/test/*" - config_name: mc4 data_files: - split: val path: "mc4/val/*" - split: test path: "mc4/test/*" - config_name: ptb data_files: - split: val path: "ptb/val/*" - split: test path: "ptb/test/*" - config_name: redpajama data_files: - split: val path: "redpajama/val/*" - split: test path: "redpajama/test/*" - config_name: twitterAAE_HELM_fixed data_files: - split: val path: "twitterAAE_HELM_fixed/val/*" - split: test path: "twitterAAE_HELM_fixed/test/*" - config_name: wikitext_103 data_files: - split: val path: "wikitext_103/val/*" - split: test path: "wikitext_103/test/*" --- # Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> Language models (LMs) commonly report perplexity on monolithic data held out from the training distribution. Implicitly or explicitly, this data is composed of domains—variations in the distribution of language. Rather than assuming perplexity on one distribution extrapolates to others, Perplexity Analysis for Language Model Assessment (Paloma) measures LM fit to 585 text domains, ranging from NY Times to r/depression on Reddit. ## Dataset Details ### Benchmark Inference and Submissions We invite submissions to our benchmark and organize results by comparability based on compliance with guidelines such as the removal of benchmark contamination from pretraining. Standardized inference code for running comprable evaluations and details about making submissions to the Paloma benchmark can be found at the following link. [How to evaluate and how to submit](https://github.com/allenai/ai2-olmo-eval/blob/main/paloma/README.md) ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> Paloma is for examining relative differences in LM fit on domains. We take these relative differences as a proxy of model fit to the shared knowledge, values, and social context that position the humans producing language in a domain. While we expect contemporary LMs to have a limited fit to the most complex of these latent factors of domains, improving fit to all factors is necessary both to improve perplexity and for any actual use of the LM. For example, better perplexity on a particular dialect of English suggests that that model will make a better chatbot for people that speak that dialect. The sources of evaluation data in Paloma were selected based on the following desiderata: 1) including known resources, 2) including fine-grained domains, 3) including domains representing specific communities of interest. Different lines of research will require different selections of domains; Paloma aims to enable research on differences in LM fit over the hundreds of domains that are readily available in existing metadata. Note that we are not able to re-host 2 of the 18 sources in Paloma comprising 39 domains. These are The Pile and ICE. The ICE corpus is available on request to the original authors following the instructions [here](https://www.ice-corpora.uzh.ch/en/access.html). **Curated by:** Ian Magnusson, Akshita Bhagia, Valentin Hofmann, Luca Soldaini, Ananya Harsh Jha, Oyvind Tafjord, Dustin Schwenk, Evan Pete Walsh, Yanai Elazar, Kyle Lo, Dirk Groeneveld, Iz Beltagy, Hannaneh Hajishirzi, Noah A. Smith, Kyle Richardson, and Jesse Dodge **Languages:** We elect to focus just on the language modeling of English and code data. **License:** The data subsets are licensed under the AI2 ImpACT License - Low Risk Artifacts, except as listed below. 
- Wikitext-103 - CC BY-SA
- TwitterAAE - for research purposes only
- Red Pajama - see license details
- M2D2 - CC BY-NC

**Paper:** https://arxiv.org/abs/2312.10523

### Dataset Sources

<!-- Provide the basic links for the dataset. -->
<!-- - [Paper]() -- (TODO update when paper is preprinted) -->
<!-- - [Website](paloma.allen.ai) -->
- [Code](https://github.com/allenai/ai2-olmo-eval/blob/main/paloma/README.md)
- Paloma 1B Baseline Models: [Dolma](https://huggingface.co/allenai/paloma-1b-baseline-dolma), [Pile](https://huggingface.co/allenai/paloma-1b-baseline-pile), [RedPajama](https://huggingface.co/allenai/paloma-1b-baseline-redpajama), [C4](https://huggingface.co/allenai/paloma-1b-baseline-c4), [mC4-en](https://huggingface.co/allenai/paloma-1b-baseline-mc4), [Falcon-RefinedWeb](https://huggingface.co/allenai/paloma-1b-baseline-falcon-refinedweb)

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

This benchmark is intended for use in evaluating language model fit to fine-grained domains.

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

This dataset should be used for evaluating the likelihood of text from a given domain by a language model.

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

Note that the sources contained in this benchmark include varying licenses with differing restrictions (see [License](#dataset-description)).

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

The sources in this dataset are each organized into their own subcorpus, which consists of a `val` and a `test` split. Data within each subcorpus is organized as files of line-separated JSON (JSONL), where each line represents a document and its associated metadata. The type of metadata available varies from source to source, but each line contains at least a field `'text'` holding the text of the document. (A minimal loading and perplexity sketch appears at the end of this card.)

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

Perplexity is conventionally reported on held out data from a model's training distribution or a small number of traditional test sets. Such monolithic evaluation ignores potential variation of model fit across different domains that LMs implicitly learn to model. We curate sources of fine-grained textual domains in Paloma to enable evaluation of language model fit to specific domains of text. Paloma is inspired by and incorporates previous work that curates corpora with marked domains (The Pile, M2D2, C4 100 Domains, ICE, TwitterAAE). We conduct a stratified subsample over domains where we set a minimum subsample size based on empirical estimation of the variance over subsamples.

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Standard language modeling benchmarks

Though it is common practice to evaluate on held out data from the pretraining corpus of a given model, we evaluate *across* several major pretraining corpora and standard language modeling benchmarks. We also break down performance per domain within the datasets that have multiple domains. Note that although the Paloma benchmark analysis in our paper describes results on the Pile, we are not able to re-host this data.
| Source | Citation | Description |
|-------------------|-----------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| c4-en | Raffel et al (2019) via Dodge et al (2021) | Standard contemporary LM pretraining corpus automatically filtered from the April 2019 Common Crawl scrape |
| mc4-en | Xue et al (2021) | The English language portion of a pretraining corpus automatically filtered from 71 Common Crawl scrapes |
| Pile | Gao et al (2020) | Standard contemporary LM benchmark from curated multi-source data including large scale non-webscraped sources |
| Wikitext-103 | Merity et al (2016) | A standard collection of verified “Good” and “Featured” articles on Wikipedia |
| Penn Tree Bank | Marcus et al (1999) via Nunes, Davide. (2020) | Classic Wall Street Journal benchmark with linguistic structure annotations omitted |
| RedPajama | Together Computer (2023) | A publicly available reproduction of the LLaMA (Touvron et al., 2023) pretraining source mixture, combining large amounts of webscraped text with smaller curated sources |
| Falcon-RefinedWeb | Penedo et al. (2023) | A corpus of English sampled from all Common Crawl scrapes until June 2023, more aggressively filtered and deduplicated than c4 and mc4-en |
| Dolma v1.5 | Soldaini et al. (2023) | A three trillion token corpus that samples sources commonly used to train LMs in order to enable open research on pretraining data |

#### Fine-grained domain benchmarks

Where typical pretraining corpora offer at most tens of labeled domains usually based on where the data is sourced, we examine datasets with up to an order of magnitude more domains. Existing datasets (M2D2 and c4 100 Domains) and datasets we curate from Dolma v1.5 use metadata to define hundreds of domains over Wikipedia, Semantic Scholar, Common Crawl, Reddit, and Github data. These include diverse domains from *Culture and the arts: Performing arts*, a topic on Wikipedia, to *r/depression*, a forum on Reddit for mental health support.

| Source | Citation | Description |
|---------------------------------|--------------------------------------------------|-----------------------------------------------------------------------------------|
| M2D2 S2ORC | Reid et al (2022) | Papers from Semantic Scholar grouped by hierarchical academic field categories |
| M2D2 Wiki | Reid et al (2022) | Wikipedia articles grouped by hierarchical categories in the Wikipedia ontology |
| c4 100 Domains | Chronopoulou et al (2021) | Balanced samples of the top 100 URL domains in C4 |
| Dolma 100 Subreddits | Soldaini et al. (2023) | Balanced samples of the top 100 Subreddits from the Dolma Reddit subset |
| Dolma 100 Programming Languages | Kocetkov et al. (2022) via Soldaini et al. (2023) | Balanced samples of the top 100 programming languages from the Dolma Stack subset |

#### Disparities between speech communities

Some communities are known to be underserved by existing models. Following HELM, we measure disparities in performance on corpora of African American English and White aligned English from TwitterAAE, as well as nine corpora of English from different countries with the ICE dataset. Note that although the Paloma benchmark analysis in our paper describes results on ICE, we are not able to re-host this data.
| Source | Citation | Description |
|------------|----------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| ICE | Greenbaum and Nelson (1996) via Liang et al (2022) | English from around the world curated by local experts, with subsets for Canada, East Africa, Hong Kong, India, Ireland, Jamaica, Philippines, Singapore, and the USA |
| TwitterAAE | Blodgett et al. (2016) via Liang et al (2022) | Balanced sets of tweets classified as African American or White aligned English |

#### Fringe sources previously studied for problematic discourse

Text from some fringe online communities has been shown to contain larger proportions of hate speech and toxicity than more mainstream sources. [Longpre et al. (2023)](https://arxiv.org/abs/2305.13169) have shown that varying amounts of toxic content in pretraining data exhibit a tradeoff between non-toxic generation and the ability to classify toxicity, indicating that model fit to discourse containing toxicity is worth measuring. Measuring perplexity on Manosphere, Gab, and 4chan characterizes model familiarity with the distinct social contexts in which toxic language arises.

| Source | Citation | Description |
|-------------------|------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|
| Manosphere Corpus | Ribeiro et al (2020) | 9 forums where a set of related masculinist ideologies developed over the 2000s and 2010s |
| Gab Corpus | Zannettou et al (2018) | Data from 2016-18 from an alt-right, free-speech-oriented social media platform shown to contain more hate speech than mainstream platforms |
| 4chan Corpus | Papasavva et al (2020) | Data from 2016-19 from a politics subforum of an anonymity-focused forum found to contain among the highest rates of toxic content |

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

The data in Paloma are sampled from existing sources. Most often, perplexity evaluation data is subsampled uniformly over the original distribution of domains in a source, resulting in more or fewer tokens from each domain in the evaluation data based on how well represented they are in the corpus. We instead employ stratified sampling, in which all sources with marked domains are partitioned by domain and a uniform sample of the same size is taken from each partition. Specifically, documents are sampled from each domain until a target number of tokens is reached. This helps ensure that no domains are lost or become very small after subsampling.

In social media domains with additional metadata that is typically displayed along with posts, we format metadata such as timestamps into the document `'text'` field. Where information is available about how threads of posts are connected, documents in that domain contain all posts in a given thread. Additional details on source specific processing are available in our paper.

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available.
-->

Text data from each of the sources curated in Paloma is created by varying sets of original authors. Some sources are collected from users of specific internet fora such as specific subreddits. Other data is collected on the basis of expert or automated classification of demographic groups. Other data is collected from authors of archival material including scientific preprints, Wikipedia, and code repositories. Lastly, data sampled from standard pretraining corpora comes from authors collected through automatic webscraping and large-scale sampling of archival sources, making it difficult to recover much specific information about these authors.

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

No annotation is done on this data.

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

No annotation is done on this data.

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

Sources in Paloma may contain personally identifiable information (PII). No attempt is made to measure or remove this information for the following reason: Paloma provides a small subsample of already publicly available data. The small size of this subsample renders this data less useful for aggregation of PII information than the already available public sources which we subsample.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

It is beyond the scope of any one group of researchers to prescribe an exhaustive set of domains that should be examined for an LM. Rather, Paloma brings together a substantial selection of domains that are identifiable from already available metadata to demonstrate the kinds of analyses possible with hundreds of domains and rigorous experimental controls. Different research goals will motivate different definitions and selections of domains, but other researchers can apply the guidelines we detail in our paper to novel fine-grained domains suitable for their research questions. One of the key advantages of evaluating a model by its fit to a collection of text representing a domain is that such domains can be identified not just by researchers who study LMs. We hope future work will identify many more domains that no one discipline would think to look at.

In Paloma, we distinguish sources from domains, although not all cases permit such easy distinction. We use *source* to refer to a selection of data that is characterized by the decisions of the people who curated that data, whether that curation is automatic as in scraping C4 or manual as in selecting the subcorpora of The Pile. By contrast we use *domain* to refer to a set of documents that belong together because they are originally produced by a group of humans that share a distinct social context. Considered as such, domains may overlap; a document's author may belong to the set of English speakers in Jamaica and the set of AI researchers.
Note further that domains are often latent categorizations which we only approximate because complete metadata does not exist. Also, some domains in Paloma appear in multiple sources, such as academic papers. Though The Pile and RedPajama process academic papers differently, the subcorpora on academic papers in each source represent different approximations of the same or very similar domains. However, for the sake of simplicity, we make the reductive assumption of counting all 585 domains in Paloma as fully distinct.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

In our paper we outline guidelines for evaluating language model fit. We encourage users of Paloma to adopt these experimental controls for metric variance when subsampling, benchmark contamination, differing tokenization, training data order, and evaluation data format.

## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```
@article{paloma,
  title={{Paloma}: A Benchmark for Evaluating Language Model Fit},
  author={Magnusson, Ian and Bhagia, Akshita and Hofmann, Valentin and Soldaini, Luca and Harsh Jha, Ananya and Tafjord, Oyvind and Schwenk, Dustin and Walsh, Evan Pete and Elazar, Yanai and Lo, Kyle and Groeneveld, Dirk and Beltagy, Iz and Hajishirzi, Hannaneh and Smith, Noah A. and Richardson, Kyle and Dodge, Jesse},
  journal={technical report},
  year={2023},
  url={https://paloma.allen.ai/}
}
```

<!-- [More Information Needed] -->

## Dataset Card Contact

{ianm,jessed}@allenai.org
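As a minimal sketch of the per-domain perplexity measurement this benchmark is built for (referenced in the Dataset Structure section above), assuming gated access has been granted. `gpt2` is an arbitrary stand-in model; for results comparable with the leaderboard, use the official inference code linked above rather than this loop.

```python
import math

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Each config (e.g. "c4_en") exposes "val" and "test" splits of JSONL documents.
val = load_dataset("allenai/paloma", "c4_en", split="val")

total_nll, total_tokens = 0.0, 0
for doc in val.select(range(50)):  # small subsample for illustration only
    enc = tokenizer(doc["text"], return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    n = enc["input_ids"].shape[1] - 1  # loss is averaged over the shifted tokens
    total_nll += out.loss.item() * n
    total_tokens += n

print("c4_en val perplexity:", round(math.exp(total_nll / total_tokens), 2))
```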
meta-math/MetaMathQA_GSM8K_zh
---
license: mit
task_categories:
- question-answering
language:
- en
- zh
tags:
- math
- math-qa
size_categories:
- 100K<n<1M
---
# Dataset

`MetaMathQA_GSM8K_zh` is a dataset for mathematical reasoning in Chinese. Its question-answer pairs were translated from MetaMathQA (https://huggingface.co/datasets/meta-math/MetaMathQA) by `GPT-3.5-Turbo` with few-shot prompting. The dataset consists of 231,685 samples.

# Citation

If you find the `MetaMathQA_GSM8K_zh` dataset useful for your projects/papers, please cite the following paper.

```bibtex
@article{yu2023metamath,
  title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models},
  author={Yu, Longhui and Jiang, Weisen and Shi, Han and Yu, Jincheng and Liu, Zhengying and Zhang, Yu and Kwok, James T and Li, Zhenguo and Weller, Adrian and Liu, Weiyang},
  journal={arXiv preprint arXiv:2309.12284},
  year={2023}
}
```
THUDM/CogVLM-SFT-311K
---
license: cc-by-nc-4.0
---
# CogVLM-SFT-311K: Bilingual Visual Instruction Data in CogVLM SFT

CogVLM-SFT-311K is the primary aligned corpus used in the initial training of CogVLM v1.0. The process of constructing this dataset is as follows:
1. Approximately 3500 high-quality data samples were selected from the open source [MiniGPT-4](https://huggingface.co/datasets/Vision-CAIR/cc_sbu_align), known as minigpt4-3500.
2. Minigpt4-3500 was integrated with [Llava-Instruct-150K](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K) and translated into Chinese through a language model.
3. We discovered significant noise in the detailed description part of minigpt4-3500 and Llava-instruct. Thus, we corrected these Chinese corpora and retranslated them into English.

## Dataset Information

The dataset contains three folders corresponding to the mixed part of minigpt4-3500 and llava, the llava solo conversation, and the multi-turn conversation datasets. Their layout is as follows:
```
.CogVLM-SFT-311K
├── llava_details-minigpt4_3500_formate
├── llava_instruction_multi_conversations_formate
└── llava_instruction_single_conversation_formate
```
In our open-source data, the datasets are distributed as follows:
```
.llava_details-minigpt4_3500_formate
├── images
│   └── 00000001.jpg
└── labels
    └── 00000001.json
```
Images are stored in the images folder, while labels containing the corresponding image description or dialogue are stored in the labels folder. (A minimal local loading sketch appears at the end of this card.)

## Dataset Quantity

+ llava_details-minigpt4_3500_formate: 22,464 images and descriptions
+ llava_instruction_multi_conversations_formate: 56,673 images and multi-turn conversations
+ llava_instruction_single_conversation_formate: 76,634 images and single-turn conversations

## Dataset Format

Caption format for image description:
```
{
  "captions": [
    {
      "role": "caption",
      "content": "The photograph features a beach scene with a group of people surfing in the ocean. There are ten individuals spaced out in the water, sitting or lying on various sizes of body boards. The surfers line up in a row, close to the sandy shoreline. Some of the body boards are visible floating on the surface of the water, with the surfers riding on them or nearby."
    }
  ]
}
```
Conversation format for image dialogue:
```
{
  "conversations": [
    {
      "role": "user",
      "content": "What can be inferred about the zebras' behavior and surroundings?"
    },
    {
      "role": "assistant",
      "content": "Based on the image, we can infer that the two zebras are likely seeking relief from the sun's heat, as they are standing side by side under the branches of a thorny tree. This shade-providing tree offers some respite from the sun, possibly during the hottest part of the day. The zebras are in a green field with grass, providing them with an ideal environment to graze and eat while staying near their source of shelter. This shows that the zebras' behavior is influenced by the conditions and available resources in their surroundings. It also highlights that these animals adopt strategies to adapt to the fluctuating conditions of their environment, such as cooperation and seeking shelter, to survive and thrive in their natural habitat."
    }
  ]
}
```

## License

+ Due to non-commercial agreements, we did not use these data in the bilingual version of CogVLM or any other models involving commercialization.
+ The dataset license adheres to: <br> Attribution-NonCommercial 4.0 International.
It should abide by the policy of OpenAI: https://openai.com/policies/terms-of-use

This will not allow you to use these data for any **commercial activities**.

## References

This project utilizes data and concepts based on the following research papers:
- Zhu, D., Chen, J., Shen, X., Li, X., & Elhoseiny, M. (2023). MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models. arXiv preprint arXiv:2304.10592.
- Liu, H., Li, C., Wu, Q., & Lee, Y. J. (2023). Visual Instruction Tuning. arXiv:2304.08485.
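As a usage illustration (referenced in the Dataset Information section above), a minimal sketch for reading one sample from a locally downloaded copy, following the folder layout above; the `DATA_ROOT` path is a placeholder.

```python
import json
from pathlib import Path

from PIL import Image

# Placeholder path to a local copy of one subset; adjust as needed.
DATA_ROOT = Path("CogVLM-SFT-311K/llava_details-minigpt4_3500_formate")

def load_sample(sample_id: str):
    """Return the image and its JSON label for one sample id, e.g. '00000001'."""
    image = Image.open(DATA_ROOT / "images" / f"{sample_id}.jpg")
    with open(DATA_ROOT / "labels" / f"{sample_id}.json", encoding="utf-8") as f:
        label = json.load(f)
    return image, label

image, label = load_sample("00000001")
# Caption-style labels carry a "captions" list; dialogue-style labels a "conversations" list.
print(label.get("captions") or label.get("conversations"))
```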
tonyassi/vogue-runway-top15-512px
--- dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': alexander mcqueen,fall 1996 ready to wear '1': alexander mcqueen,fall 1997 ready to wear '2': alexander mcqueen,fall 1998 ready to wear '3': alexander mcqueen,fall 1999 ready to wear '4': alexander mcqueen,fall 2000 ready to wear '5': alexander mcqueen,fall 2001 ready to wear '6': alexander mcqueen,fall 2002 ready to wear '7': alexander mcqueen,fall 2003 ready to wear '8': alexander mcqueen,fall 2004 ready to wear '9': alexander mcqueen,fall 2005 menswear '10': alexander mcqueen,fall 2005 ready to wear '11': alexander mcqueen,fall 2006 menswear '12': alexander mcqueen,fall 2006 ready to wear '13': alexander mcqueen,fall 2007 menswear '14': alexander mcqueen,fall 2007 ready to wear '15': alexander mcqueen,fall 2008 menswear '16': alexander mcqueen,fall 2008 ready to wear '17': alexander mcqueen,fall 2009 ready to wear '18': alexander mcqueen,fall 2010 menswear '19': alexander mcqueen,fall 2010 ready to wear '20': alexander mcqueen,fall 2011 menswear '21': alexander mcqueen,fall 2011 ready to wear '22': alexander mcqueen,fall 2012 menswear '23': alexander mcqueen,fall 2012 ready to wear '24': alexander mcqueen,fall 2013 menswear '25': alexander mcqueen,fall 2013 ready to wear '26': alexander mcqueen,fall 2014 menswear '27': alexander mcqueen,fall 2014 ready to wear '28': alexander mcqueen,fall 2015 menswear '29': alexander mcqueen,fall 2015 ready to wear '30': alexander mcqueen,fall 2016 menswear '31': alexander mcqueen,fall 2016 ready to wear '32': alexander mcqueen,fall 2017 menswear '33': alexander mcqueen,fall 2017 ready to wear '34': alexander mcqueen,fall 2018 menswear '35': alexander mcqueen,fall 2018 ready to wear '36': alexander mcqueen,fall 2019 menswear '37': alexander mcqueen,fall 2019 ready to wear '38': alexander mcqueen,fall 2020 menswear '39': alexander mcqueen,fall 2020 ready to wear '40': alexander mcqueen,fall 2021 menswear '41': alexander mcqueen,fall 2021 ready to wear '42': alexander mcqueen,fall 2022 menswear '43': alexander mcqueen,fall 2022 ready to wear '44': alexander mcqueen,fall 2023 menswear '45': alexander mcqueen,fall 2023 ready to wear '46': alexander mcqueen,pre fall 2009 '47': alexander mcqueen,pre fall 2011 '48': alexander mcqueen,pre fall 2012 '49': alexander mcqueen,pre fall 2013 '50': alexander mcqueen,pre fall 2014 '51': alexander mcqueen,pre fall 2015 '52': alexander mcqueen,pre fall 2016 '53': alexander mcqueen,pre fall 2017 '54': alexander mcqueen,pre fall 2018 '55': alexander mcqueen,pre fall 2019 '56': alexander mcqueen,pre fall 2020 '57': alexander mcqueen,pre fall 2021 '58': alexander mcqueen,pre fall 2021 menswear '59': alexander mcqueen,pre fall 2022 '60': alexander mcqueen,pre fall 2023 '61': alexander mcqueen,resort 2009 '62': alexander mcqueen,resort 2010 '63': alexander mcqueen,resort 2011 '64': alexander mcqueen,resort 2012 '65': alexander mcqueen,resort 2013 '66': alexander mcqueen,resort 2014 '67': alexander mcqueen,resort 2015 '68': alexander mcqueen,resort 2016 '69': alexander mcqueen,resort 2017 '70': alexander mcqueen,resort 2018 '71': alexander mcqueen,resort 2019 '72': alexander mcqueen,resort 2020 '73': alexander mcqueen,resort 2021 '74': alexander mcqueen,resort 2022 '75': alexander mcqueen,resort 2023 '76': alexander mcqueen,spring 1995 ready to wear '77': alexander mcqueen,spring 1996 ready to wear '78': alexander mcqueen,spring 1997 ready to wear '79': alexander mcqueen,spring 1998 ready to wear '80': alexander mcqueen,spring 
1999 ready to wear '81': alexander mcqueen,spring 2000 ready to wear '82': alexander mcqueen,spring 2001 ready to wear '83': alexander mcqueen,spring 2002 ready to wear '84': alexander mcqueen,spring 2003 ready to wear '85': alexander mcqueen,spring 2004 ready to wear '86': alexander mcqueen,spring 2005 menswear '87': alexander mcqueen,spring 2005 ready to wear '88': alexander mcqueen,spring 2006 menswear '89': alexander mcqueen,spring 2006 ready to wear '90': alexander mcqueen,spring 2007 menswear '91': alexander mcqueen,spring 2007 ready to wear '92': alexander mcqueen,spring 2008 menswear '93': alexander mcqueen,spring 2008 ready to wear '94': alexander mcqueen,spring 2009 menswear '95': alexander mcqueen,spring 2009 ready to wear '96': alexander mcqueen,spring 2010 menswear '97': alexander mcqueen,spring 2010 ready to wear '98': alexander mcqueen,spring 2011 menswear '99': alexander mcqueen,spring 2011 ready to wear '100': alexander mcqueen,spring 2012 menswear '101': alexander mcqueen,spring 2012 ready to wear '102': alexander mcqueen,spring 2013 menswear '103': alexander mcqueen,spring 2013 ready to wear '104': alexander mcqueen,spring 2014 menswear '105': alexander mcqueen,spring 2014 ready to wear '106': alexander mcqueen,spring 2015 menswear '107': alexander mcqueen,spring 2015 ready to wear '108': alexander mcqueen,spring 2016 menswear '109': alexander mcqueen,spring 2016 ready to wear '110': alexander mcqueen,spring 2017 menswear '111': alexander mcqueen,spring 2017 ready to wear '112': alexander mcqueen,spring 2018 menswear '113': alexander mcqueen,spring 2018 ready to wear '114': alexander mcqueen,spring 2019 menswear '115': alexander mcqueen,spring 2019 ready to wear '116': alexander mcqueen,spring 2020 menswear '117': alexander mcqueen,spring 2020 ready to wear '118': alexander mcqueen,spring 2021 menswear '119': alexander mcqueen,spring 2021 ready to wear '120': alexander mcqueen,spring 2022 menswear '121': alexander mcqueen,spring 2022 ready to wear '122': alexander mcqueen,spring 2023 menswear '123': alexander mcqueen,spring 2023 ready to wear '124': alexander mcqueen,spring 2024 menswear '125': alexander mcqueen,spring 2024 ready to wear '126': armani prive,fall 2005 couture '127': armani prive,fall 2006 couture '128': armani prive,fall 2007 couture '129': armani prive,fall 2008 couture '130': armani prive,fall 2009 couture '131': armani prive,fall 2010 couture '132': armani prive,fall 2011 couture '133': armani prive,fall 2012 couture '134': armani prive,fall 2013 couture '135': armani prive,fall 2014 couture '136': armani prive,fall 2015 couture '137': armani prive,fall 2016 couture '138': armani prive,fall 2017 couture '139': armani prive,fall 2018 couture '140': armani prive,fall 2019 couture '141': armani prive,fall 2021 couture '142': armani prive,fall 2022 couture '143': armani prive,fall 2023 couture '144': armani prive,spring 2005 couture '145': armani prive,spring 2006 couture '146': armani prive,spring 2007 couture '147': armani prive,spring 2008 couture '148': armani prive,spring 2009 couture '149': armani prive,spring 2010 couture '150': armani prive,spring 2011 couture '151': armani prive,spring 2012 couture '152': armani prive,spring 2013 couture '153': armani prive,spring 2014 couture '154': armani prive,spring 2015 couture '155': armani prive,spring 2016 couture '156': armani prive,spring 2017 couture '157': armani prive,spring 2018 couture '158': armani prive,spring 2019 couture '159': armani prive,spring 2020 couture '160': armani prive,spring 2021 
couture '161': armani prive,spring 2023 couture '162': balenciaga,fall 2000 ready to wear '163': balenciaga,fall 2001 ready to wear '164': balenciaga,fall 2002 ready to wear '165': balenciaga,fall 2003 ready to wear '166': balenciaga,fall 2004 ready to wear '167': balenciaga,fall 2005 ready to wear '168': balenciaga,fall 2006 ready to wear '169': balenciaga,fall 2007 menswear '170': balenciaga,fall 2007 ready to wear '171': balenciaga,fall 2008 ready to wear '172': balenciaga,fall 2009 ready to wear '173': balenciaga,fall 2010 ready to wear '174': balenciaga,fall 2011 menswear '175': balenciaga,fall 2011 ready to wear '176': balenciaga,fall 2012 menswear '177': balenciaga,fall 2012 ready to wear '178': balenciaga,fall 2013 menswear '179': balenciaga,fall 2013 ready to wear '180': balenciaga,fall 2014 menswear '181': balenciaga,fall 2014 ready to wear '182': balenciaga,fall 2015 menswear '183': balenciaga,fall 2015 ready to wear '184': balenciaga,fall 2016 ready to wear '185': balenciaga,fall 2017 menswear '186': balenciaga,fall 2017 ready to wear '187': balenciaga,fall 2018 ready to wear '188': balenciaga,fall 2019 menswear '189': balenciaga,fall 2019 ready to wear '190': balenciaga,fall 2020 menswear '191': balenciaga,fall 2020 ready to wear '192': balenciaga,fall 2021 couture '193': balenciaga,fall 2021 menswear '194': balenciaga,fall 2021 ready to wear '195': balenciaga,fall 2022 couture '196': balenciaga,fall 2022 ready to wear '197': balenciaga,fall 2023 couture '198': balenciaga,fall 2023 ready to wear '199': balenciaga,pre fall 2008 '200': balenciaga,pre fall 2009 '201': balenciaga,pre fall 2010 '202': balenciaga,pre fall 2011 '203': balenciaga,pre fall 2012 '204': balenciaga,pre fall 2013 '205': balenciaga,pre fall 2014 '206': balenciaga,pre fall 2015 '207': balenciaga,pre fall 2016 '208': balenciaga,pre fall 2017 '209': balenciaga,pre fall 2018 '210': balenciaga,pre fall 2019 '211': balenciaga,pre fall 2020 '212': balenciaga,pre fall 2021 '213': balenciaga,pre fall 2022 '214': balenciaga,pre fall 2023 '215': balenciaga,pre fall 2024 '216': balenciaga,resort 2008 '217': balenciaga,resort 2009 '218': balenciaga,resort 2010 '219': balenciaga,resort 2011 '220': balenciaga,resort 2012 '221': balenciaga,resort 2013 '222': balenciaga,resort 2014 '223': balenciaga,resort 2015 '224': balenciaga,resort 2016 '225': balenciaga,resort 2017 '226': balenciaga,resort 2018 '227': balenciaga,resort 2019 '228': balenciaga,resort 2020 '229': balenciaga,resort 2021 '230': balenciaga,resort 2022 '231': balenciaga,resort 2023 '232': balenciaga,resort 2024 '233': balenciaga,spring 1998 ready to wear '234': balenciaga,spring 2000 ready to wear '235': balenciaga,spring 2001 ready to wear '236': balenciaga,spring 2002 ready to wear '237': balenciaga,spring 2003 ready to wear '238': balenciaga,spring 2004 ready to wear '239': balenciaga,spring 2005 ready to wear '240': balenciaga,spring 2006 ready to wear '241': balenciaga,spring 2007 menswear '242': balenciaga,spring 2007 ready to wear '243': balenciaga,spring 2008 menswear '244': balenciaga,spring 2008 ready to wear '245': balenciaga,spring 2009 ready to wear '246': balenciaga,spring 2010 ready to wear '247': balenciaga,spring 2011 menswear '248': balenciaga,spring 2011 ready to wear '249': balenciaga,spring 2012 menswear '250': balenciaga,spring 2012 ready to wear '251': balenciaga,spring 2013 menswear '252': balenciaga,spring 2013 ready to wear '253': balenciaga,spring 2014 menswear '254': balenciaga,spring 2014 ready to wear '255': balenciaga,spring 
2015 menswear '256': balenciaga,spring 2015 ready to wear '257': balenciaga,spring 2016 menswear '258': balenciaga,spring 2016 ready to wear '259': balenciaga,spring 2017 menswear '260': balenciaga,spring 2017 ready to wear '261': balenciaga,spring 2018 menswear '262': balenciaga,spring 2018 ready to wear '263': balenciaga,spring 2019 ready to wear '264': balenciaga,spring 2020 menswear '265': balenciaga,spring 2020 ready to wear '266': balenciaga,spring 2021 menswear '267': balenciaga,spring 2021 ready to wear '268': balenciaga,spring 2022 ready to wear '269': balenciaga,spring 2023 ready to wear '270': balenciaga,spring 2024 ready to wear '271': calvin klein collection,fall 1995 ready to wear '272': calvin klein collection,fall 1996 ready to wear '273': calvin klein collection,fall 1997 ready to wear '274': calvin klein collection,fall 1998 ready to wear '275': calvin klein collection,fall 1999 ready to wear '276': calvin klein collection,fall 2000 ready to wear '277': calvin klein collection,fall 2001 ready to wear '278': calvin klein collection,fall 2002 ready to wear '279': calvin klein collection,fall 2003 ready to wear '280': calvin klein collection,fall 2004 ready to wear '281': calvin klein collection,fall 2005 menswear '282': calvin klein collection,fall 2005 ready to wear '283': calvin klein collection,fall 2006 menswear '284': calvin klein collection,fall 2006 ready to wear '285': calvin klein collection,fall 2007 menswear '286': calvin klein collection,fall 2007 ready to wear '287': calvin klein collection,fall 2008 menswear '288': calvin klein collection,fall 2008 ready to wear '289': calvin klein collection,fall 2009 ready to wear '290': calvin klein collection,fall 2010 menswear '291': calvin klein collection,fall 2010 ready to wear '292': calvin klein collection,fall 2011 menswear '293': calvin klein collection,fall 2011 ready to wear '294': calvin klein collection,fall 2012 menswear '295': calvin klein collection,fall 2012 ready to wear '296': calvin klein collection,fall 2013 menswear '297': calvin klein collection,fall 2013 ready to wear '298': calvin klein collection,fall 2014 menswear '299': calvin klein collection,fall 2014 ready to wear '300': calvin klein collection,fall 2015 menswear '301': calvin klein collection,fall 2015 ready to wear '302': calvin klein collection,fall 2016 menswear '303': calvin klein collection,fall 2016 ready to wear '304': calvin klein collection,pre fall 2008 '305': calvin klein collection,pre fall 2009 '306': calvin klein collection,pre fall 2010 '307': calvin klein collection,pre fall 2011 '308': calvin klein collection,pre fall 2012 '309': calvin klein collection,pre fall 2013 '310': calvin klein collection,pre fall 2014 '311': calvin klein collection,pre fall 2015 '312': calvin klein collection,pre fall 2016 '313': calvin klein collection,resort 2008 '314': calvin klein collection,resort 2009 '315': calvin klein collection,resort 2010 '316': calvin klein collection,resort 2011 '317': calvin klein collection,resort 2012 '318': calvin klein collection,resort 2013 '319': calvin klein collection,resort 2014 '320': calvin klein collection,resort 2015 '321': calvin klein collection,resort 2016 '322': calvin klein collection,resort 2017 '323': calvin klein collection,spring 1994 ready to wear '324': calvin klein collection,spring 1995 ready to wear '325': calvin klein collection,spring 1996 ready to wear '326': calvin klein collection,spring 1997 ready to wear '327': calvin klein collection,spring 1998 ready to wear '328': calvin klein 
collection,spring 1999 ready to wear '329': calvin klein collection,spring 2000 ready to wear '330': calvin klein collection,spring 2001 ready to wear '331': calvin klein collection,spring 2002 ready to wear '332': calvin klein collection,spring 2003 ready to wear '333': calvin klein collection,spring 2004 ready to wear '334': calvin klein collection,spring 2005 menswear '335': calvin klein collection,spring 2005 ready to wear '336': calvin klein collection,spring 2006 menswear '337': calvin klein collection,spring 2006 ready to wear '338': calvin klein collection,spring 2007 menswear '339': calvin klein collection,spring 2007 ready to wear '340': calvin klein collection,spring 2008 menswear '341': calvin klein collection,spring 2008 ready to wear '342': calvin klein collection,spring 2009 menswear '343': calvin klein collection,spring 2009 ready to wear '344': calvin klein collection,spring 2010 menswear '345': calvin klein collection,spring 2010 ready to wear '346': calvin klein collection,spring 2011 menswear '347': calvin klein collection,spring 2011 ready to wear '348': calvin klein collection,spring 2012 menswear '349': calvin klein collection,spring 2012 ready to wear '350': calvin klein collection,spring 2013 menswear '351': calvin klein collection,spring 2013 ready to wear '352': calvin klein collection,spring 2014 menswear '353': calvin klein collection,spring 2014 ready to wear '354': calvin klein collection,spring 2015 menswear '355': calvin klein collection,spring 2015 ready to wear '356': calvin klein collection,spring 2016 menswear '357': calvin klein collection,spring 2016 ready to wear '358': calvin klein collection,spring 2017 menswear '359': calvin klein,fall 2017 menswear '360': calvin klein,fall 2017 ready to wear '361': calvin klein,fall 2018 menswear '362': calvin klein,fall 2018 ready to wear '363': calvin klein,pre fall 2019 '364': calvin klein,resort 2019 '365': calvin klein,spring 2018 menswear '366': calvin klein,spring 2018 ready to wear '367': calvin klein,spring 2019 menswear '368': calvin klein,spring 2019 ready to wear '369': chanel,fall 1991 ready to wear '370': chanel,fall 1994 ready to wear '371': chanel,fall 1995 couture '372': chanel,fall 1996 couture '373': chanel,fall 1997 couture '374': chanel,fall 1999 couture '375': chanel,fall 2000 couture '376': chanel,fall 2000 ready to wear '377': chanel,fall 2002 couture '378': chanel,fall 2003 ready to wear '379': chanel,fall 2004 couture '380': chanel,fall 2004 ready to wear '381': chanel,fall 2005 couture '382': chanel,fall 2005 ready to wear '383': chanel,fall 2006 couture '384': chanel,fall 2006 ready to wear '385': chanel,fall 2007 couture '386': chanel,fall 2007 ready to wear '387': chanel,fall 2008 couture '388': chanel,fall 2008 ready to wear '389': chanel,fall 2009 couture '390': chanel,fall 2009 ready to wear '391': chanel,fall 2010 couture '392': chanel,fall 2010 ready to wear '393': chanel,fall 2011 couture '394': chanel,fall 2011 ready to wear '395': chanel,fall 2012 couture '396': chanel,fall 2012 ready to wear '397': chanel,fall 2013 couture '398': chanel,fall 2013 ready to wear '399': chanel,fall 2014 couture '400': chanel,fall 2014 ready to wear '401': chanel,fall 2015 couture '402': chanel,fall 2015 ready to wear '403': chanel,fall 2016 couture '404': chanel,fall 2016 ready to wear '405': chanel,fall 2017 couture '406': chanel,fall 2017 ready to wear '407': chanel,fall 2018 couture '408': chanel,fall 2018 ready to wear '409': chanel,fall 2019 couture '410': chanel,fall 2019 ready to wear 
'411': chanel,fall 2020 couture '412': chanel,fall 2020 ready to wear '413': chanel,fall 2021 couture '414': chanel,fall 2021 ready to wear '415': chanel,fall 2022 couture '416': chanel,fall 2022 ready to wear '417': chanel,fall 2023 couture '418': chanel,fall 2023 ready to wear '419': chanel,pre fall 2008 '420': chanel,pre fall 2009 '421': chanel,pre fall 2010 '422': chanel,pre fall 2011 '423': chanel,pre fall 2012 '424': chanel,pre fall 2013 '425': chanel,pre fall 2014 '426': chanel,pre fall 2015 '427': chanel,pre fall 2016 '428': chanel,pre fall 2017 '429': chanel,pre fall 2018 '430': chanel,pre fall 2019 '431': chanel,pre fall 2020 '432': chanel,pre fall 2021 '433': chanel,pre fall 2022 '434': chanel,pre fall 2023 '435': chanel,pre fall 2024 '436': chanel,resort 2007 '437': chanel,resort 2008 '438': chanel,resort 2009 '439': chanel,resort 2010 '440': chanel,resort 2011 '441': chanel,resort 2012 '442': chanel,resort 2013 '443': chanel,resort 2014 '444': chanel,resort 2015 '445': chanel,resort 2016 '446': chanel,resort 2017 '447': chanel,resort 2018 '448': chanel,resort 2019 '449': chanel,resort 2020 '450': chanel,resort 2021 '451': chanel,resort 2022 '452': chanel,resort 2023 '453': chanel,resort 2024 '454': chanel,spring 1992 ready to wear '455': chanel,spring 1993 couture '456': chanel,spring 1993 ready to wear '457': chanel,spring 1994 ready to wear '458': chanel,spring 1995 ready to wear '459': chanel,spring 1996 ready to wear '460': chanel,spring 1997 couture '461': chanel,spring 1999 couture '462': chanel,spring 2001 couture '463': chanel,spring 2002 couture '464': chanel,spring 2002 ready to wear '465': chanel,spring 2003 couture '466': chanel,spring 2004 couture '467': chanel,spring 2004 ready to wear '468': chanel,spring 2005 couture '469': chanel,spring 2005 ready to wear '470': chanel,spring 2006 couture '471': chanel,spring 2006 ready to wear '472': chanel,spring 2007 couture '473': chanel,spring 2007 ready to wear '474': chanel,spring 2008 couture '475': chanel,spring 2008 ready to wear '476': chanel,spring 2009 couture '477': chanel,spring 2009 ready to wear '478': chanel,spring 2010 couture '479': chanel,spring 2010 ready to wear '480': chanel,spring 2011 couture '481': chanel,spring 2011 ready to wear '482': chanel,spring 2012 couture '483': chanel,spring 2012 ready to wear '484': chanel,spring 2013 couture '485': chanel,spring 2013 ready to wear '486': chanel,spring 2014 couture '487': chanel,spring 2014 ready to wear '488': chanel,spring 2015 couture '489': chanel,spring 2015 ready to wear '490': chanel,spring 2016 couture '491': chanel,spring 2016 ready to wear '492': chanel,spring 2017 couture '493': chanel,spring 2017 ready to wear '494': chanel,spring 2018 couture '495': chanel,spring 2018 ready to wear '496': chanel,spring 2019 couture '497': chanel,spring 2019 ready to wear '498': chanel,spring 2020 couture '499': chanel,spring 2020 ready to wear '500': chanel,spring 2021 couture '501': chanel,spring 2021 ready to wear '502': chanel,spring 2022 couture '503': chanel,spring 2022 ready to wear '504': chanel,spring 2023 couture '505': chanel,spring 2023 ready to wear '506': chanel,spring 2024 ready to wear '507': christian dior,fall 1999 couture '508': christian dior,fall 2000 couture '509': christian dior,fall 2000 ready to wear '510': christian dior,fall 2001 couture '511': christian dior,fall 2001 ready to wear '512': christian dior,fall 2002 couture '513': christian dior,fall 2002 ready to wear '514': christian dior,fall 2003 couture '515': christian dior,fall 
2003 ready to wear '516': christian dior,fall 2004 couture '517': christian dior,fall 2004 ready to wear '518': christian dior,fall 2005 couture '519': christian dior,fall 2005 ready to wear '520': christian dior,fall 2006 couture '521': christian dior,fall 2006 ready to wear '522': christian dior,fall 2007 couture '523': christian dior,fall 2007 ready to wear '524': christian dior,fall 2008 couture '525': christian dior,fall 2008 ready to wear '526': christian dior,fall 2009 couture '527': christian dior,fall 2009 ready to wear '528': christian dior,fall 2010 couture '529': christian dior,fall 2010 menswear '530': christian dior,fall 2010 ready to wear '531': christian dior,fall 2011 couture '532': christian dior,fall 2011 ready to wear '533': christian dior,fall 2012 couture '534': christian dior,fall 2012 ready to wear '535': christian dior,fall 2013 couture '536': christian dior,fall 2013 ready to wear '537': christian dior,fall 2014 couture '538': christian dior,fall 2014 ready to wear '539': christian dior,fall 2015 couture '540': christian dior,fall 2015 ready to wear '541': christian dior,fall 2016 couture '542': christian dior,fall 2016 ready to wear '543': christian dior,fall 2017 couture '544': christian dior,fall 2017 ready to wear '545': christian dior,fall 2018 couture '546': christian dior,fall 2018 ready to wear '547': christian dior,fall 2019 couture '548': christian dior,fall 2019 ready to wear '549': christian dior,fall 2020 couture '550': christian dior,fall 2021 couture '551': christian dior,fall 2021 ready to wear '552': christian dior,fall 2022 couture '553': christian dior,fall 2022 ready to wear '554': christian dior,fall 2023 couture '555': christian dior,fall 2023 ready to wear '556': christian dior,pre fall 2009 '557': christian dior,pre fall 2010 '558': christian dior,pre fall 2011 '559': christian dior,pre fall 2012 '560': christian dior,pre fall 2013 '561': christian dior,pre fall 2014 '562': christian dior,pre fall 2015 '563': christian dior,pre fall 2016 '564': christian dior,pre fall 2017 '565': christian dior,pre fall 2018 '566': christian dior,pre fall 2019 '567': christian dior,pre fall 2020 '568': christian dior,pre fall 2021 '569': christian dior,pre fall 2022 '570': christian dior,pre fall 2023 '571': christian dior,resort 2007 '572': christian dior,resort 2008 '573': christian dior,resort 2009 '574': christian dior,resort 2010 '575': christian dior,resort 2011 '576': christian dior,resort 2012 '577': christian dior,resort 2013 '578': christian dior,resort 2014 '579': christian dior,resort 2015 '580': christian dior,resort 2016 '581': christian dior,resort 2017 '582': christian dior,resort 2018 '583': christian dior,resort 2019 '584': christian dior,resort 2020 '585': christian dior,resort 2021 '586': christian dior,resort 2022 '587': christian dior,resort 2023 '588': christian dior,resort 2024 '589': christian dior,spring 1999 couture '590': christian dior,spring 2000 ready to wear '591': christian dior,spring 2001 couture '592': christian dior,spring 2001 ready to wear '593': christian dior,spring 2002 couture '594': christian dior,spring 2002 ready to wear '595': christian dior,spring 2003 couture '596': christian dior,spring 2003 ready to wear '597': christian dior,spring 2004 couture '598': christian dior,spring 2004 ready to wear '599': christian dior,spring 2005 couture '600': christian dior,spring 2005 ready to wear '601': christian dior,spring 2006 couture '602': christian dior,spring 2006 ready to wear '603': christian dior,spring 2007 
couture '604': christian dior,spring 2007 ready to wear '605': christian dior,spring 2008 couture '606': christian dior,spring 2008 ready to wear '607': christian dior,spring 2009 couture '608': christian dior,spring 2009 ready to wear '609': christian dior,spring 2010 couture '610': christian dior,spring 2010 menswear '611': christian dior,spring 2010 ready to wear '612': christian dior,spring 2011 couture '613': christian dior,spring 2011 ready to wear '614': christian dior,spring 2012 couture '615': christian dior,spring 2012 ready to wear '616': christian dior,spring 2013 couture '617': christian dior,spring 2013 ready to wear '618': christian dior,spring 2014 couture '619': christian dior,spring 2014 ready to wear '620': christian dior,spring 2015 couture '621': christian dior,spring 2015 ready to wear '622': christian dior,spring 2016 couture '623': christian dior,spring 2016 ready to wear '624': christian dior,spring 2017 couture '625': christian dior,spring 2017 ready to wear '626': christian dior,spring 2018 couture '627': christian dior,spring 2018 ready to wear '628': christian dior,spring 2019 couture '629': christian dior,spring 2019 ready to wear '630': christian dior,spring 2020 couture '631': christian dior,spring 2020 ready to wear '632': christian dior,spring 2021 couture '633': christian dior,spring 2021 ready to wear '634': christian dior,spring 2022 couture '635': christian dior,spring 2022 ready to wear '636': christian dior,spring 2023 couture '637': christian dior,spring 2023 ready to wear '638': christian dior,spring 2024 ready to wear '639': fendi,fall 1999 ready to wear '640': fendi,fall 2000 ready to wear '641': fendi,fall 2001 ready to wear '642': fendi,fall 2002 ready to wear '643': fendi,fall 2003 ready to wear '644': fendi,fall 2004 ready to wear '645': fendi,fall 2005 ready to wear '646': fendi,fall 2006 ready to wear '647': fendi,fall 2007 menswear '648': fendi,fall 2007 ready to wear '649': fendi,fall 2008 menswear '650': fendi,fall 2008 ready to wear '651': fendi,fall 2009 ready to wear '652': fendi,fall 2010 ready to wear '653': fendi,fall 2011 ready to wear '654': fendi,fall 2012 menswear '655': fendi,fall 2012 ready to wear '656': fendi,fall 2013 menswear '657': fendi,fall 2013 ready to wear '658': fendi,fall 2014 menswear '659': fendi,fall 2014 ready to wear '660': fendi,fall 2015 couture '661': fendi,fall 2015 menswear '662': fendi,fall 2015 ready to wear '663': fendi,fall 2016 couture '664': fendi,fall 2016 menswear '665': fendi,fall 2016 ready to wear '666': fendi,fall 2017 couture '667': fendi,fall 2017 menswear '668': fendi,fall 2017 ready to wear '669': fendi,fall 2018 couture '670': fendi,fall 2018 menswear '671': fendi,fall 2018 ready to wear '672': fendi,fall 2019 couture '673': fendi,fall 2019 menswear '674': fendi,fall 2019 ready to wear '675': fendi,fall 2020 menswear '676': fendi,fall 2020 ready to wear '677': fendi,fall 2021 couture '678': fendi,fall 2021 menswear '679': fendi,fall 2021 ready to wear '680': fendi,fall 2022 couture '681': fendi,fall 2022 menswear '682': fendi,fall 2022 ready to wear '683': fendi,fall 2023 couture '684': fendi,fall 2023 menswear '685': fendi,fall 2023 ready to wear '686': fendi,pre fall 2011 '687': fendi,pre fall 2012 '688': fendi,pre fall 2013 '689': fendi,pre fall 2014 '690': fendi,pre fall 2015 '691': fendi,pre fall 2016 '692': fendi,pre fall 2017 '693': fendi,pre fall 2018 '694': fendi,pre fall 2019 '695': fendi,pre fall 2020 '696': fendi,pre fall 2022 '697': fendi,resort 2008 '698': fendi,resort 2009 
'699': fendi,resort 2012 '700': fendi,resort 2013 '701': fendi,resort 2014 '702': fendi,resort 2015 '703': fendi,resort 2016 '704': fendi,resort 2017 '705': fendi,resort 2018 '706': fendi,resort 2019 '707': fendi,resort 2020 '708': fendi,resort 2022 '709': fendi,resort 2023 '710': fendi,resort 2024 '711': fendi,spring 1999 ready to wear '712': fendi,spring 2000 ready to wear '713': fendi,spring 2001 ready to wear '714': fendi,spring 2002 ready to wear '715': fendi,spring 2003 ready to wear '716': fendi,spring 2004 ready to wear '717': fendi,spring 2005 ready to wear '718': fendi,spring 2006 ready to wear '719': fendi,spring 2007 ready to wear '720': fendi,spring 2008 menswear '721': fendi,spring 2008 ready to wear '722': fendi,spring 2009 menswear '723': fendi,spring 2009 ready to wear '724': fendi,spring 2010 ready to wear '725': fendi,spring 2011 ready to wear '726': fendi,spring 2012 ready to wear '727': fendi,spring 2013 menswear '728': fendi,spring 2013 ready to wear '729': fendi,spring 2014 menswear '730': fendi,spring 2014 ready to wear '731': fendi,spring 2015 menswear '732': fendi,spring 2015 ready to wear '733': fendi,spring 2016 menswear '734': fendi,spring 2016 ready to wear '735': fendi,spring 2017 menswear '736': fendi,spring 2017 ready to wear '737': fendi,spring 2018 menswear '738': fendi,spring 2018 ready to wear '739': fendi,spring 2019 menswear '740': fendi,spring 2019 ready to wear '741': fendi,spring 2020 menswear '742': fendi,spring 2020 ready to wear '743': fendi,spring 2021 couture '744': fendi,spring 2021 menswear '745': fendi,spring 2021 ready to wear '746': fendi,spring 2022 couture '747': fendi,spring 2022 menswear '748': fendi,spring 2022 ready to wear '749': fendi,spring 2023 couture '750': fendi,spring 2023 menswear '751': fendi,spring 2023 ready to wear '752': fendi,spring 2024 menswear '753': fendi,spring 2024 ready to wear '754': gucci,fall 1995 ready to wear '755': gucci,fall 1996 ready to wear '756': gucci,fall 2000 ready to wear '757': gucci,fall 2001 ready to wear '758': gucci,fall 2002 ready to wear '759': gucci,fall 2003 ready to wear '760': gucci,fall 2004 ready to wear '761': gucci,fall 2005 menswear '762': gucci,fall 2005 ready to wear '763': gucci,fall 2006 menswear '764': gucci,fall 2006 ready to wear '765': gucci,fall 2007 menswear '766': gucci,fall 2007 ready to wear '767': gucci,fall 2008 menswear '768': gucci,fall 2008 ready to wear '769': gucci,fall 2009 ready to wear '770': gucci,fall 2010 menswear '771': gucci,fall 2010 ready to wear '772': gucci,fall 2011 menswear '773': gucci,fall 2011 ready to wear '774': gucci,fall 2012 menswear '775': gucci,fall 2012 ready to wear '776': gucci,fall 2013 menswear '777': gucci,fall 2013 ready to wear '778': gucci,fall 2014 menswear '779': gucci,fall 2014 ready to wear '780': gucci,fall 2015 menswear '781': gucci,fall 2015 ready to wear '782': gucci,fall 2016 menswear '783': gucci,fall 2016 ready to wear '784': gucci,fall 2017 menswear '785': gucci,fall 2017 ready to wear '786': gucci,fall 2018 menswear '787': gucci,fall 2018 ready to wear '788': gucci,fall 2019 menswear '789': gucci,fall 2019 ready to wear '790': gucci,fall 2020 menswear '791': gucci,fall 2020 ready to wear '792': gucci,fall 2022 ready to wear '793': gucci,fall 2023 menswear '794': gucci,fall 2023 ready to wear '795': gucci,pre fall 2011 '796': gucci,pre fall 2012 '797': gucci,pre fall 2013 '798': gucci,pre fall 2014 '799': gucci,pre fall 2015 '800': gucci,pre fall 2016 '801': gucci,pre fall 2017 '802': gucci,pre fall 2018 '803': 
gucci,pre fall 2019 '804': gucci,pre fall 2020 '805': gucci,pre fall 2020 menswear '806': gucci,pre fall 2021 '807': gucci,pre fall 2021 menswear '808': gucci,pre fall 2022 '809': gucci,resort 2007 '810': gucci,resort 2008 '811': gucci,resort 2009 '812': gucci,resort 2010 '813': gucci,resort 2011 '814': gucci,resort 2012 '815': gucci,resort 2013 '816': gucci,resort 2014 '817': gucci,resort 2015 '818': gucci,resort 2016 '819': gucci,resort 2017 '820': gucci,resort 2018 '821': gucci,resort 2019 '822': gucci,resort 2020 '823': gucci,resort 2021 '824': gucci,resort 2023 '825': gucci,resort 2024 '826': gucci,spring 1999 ready to wear '827': gucci,spring 2000 ready to wear '828': gucci,spring 2001 ready to wear '829': gucci,spring 2002 ready to wear '830': gucci,spring 2003 ready to wear '831': gucci,spring 2004 ready to wear '832': gucci,spring 2005 menswear '833': gucci,spring 2005 ready to wear '834': gucci,spring 2006 menswear '835': gucci,spring 2006 ready to wear '836': gucci,spring 2007 menswear '837': gucci,spring 2007 ready to wear '838': gucci,spring 2008 menswear '839': gucci,spring 2008 ready to wear '840': gucci,spring 2009 menswear '841': gucci,spring 2009 ready to wear '842': gucci,spring 2010 menswear '843': gucci,spring 2010 ready to wear '844': gucci,spring 2011 menswear '845': gucci,spring 2011 ready to wear '846': gucci,spring 2012 menswear '847': gucci,spring 2012 ready to wear '848': gucci,spring 2013 menswear '849': gucci,spring 2013 ready to wear '850': gucci,spring 2014 menswear '851': gucci,spring 2014 ready to wear '852': gucci,spring 2015 menswear '853': gucci,spring 2015 ready to wear '854': gucci,spring 2016 menswear '855': gucci,spring 2016 ready to wear '856': gucci,spring 2017 menswear '857': gucci,spring 2017 ready to wear '858': gucci,spring 2018 menswear '859': gucci,spring 2018 ready to wear '860': gucci,spring 2019 ready to wear '861': gucci,spring 2020 menswear '862': gucci,spring 2020 ready to wear '863': gucci,spring 2021 menswear '864': gucci,spring 2021 ready to wear '865': gucci,spring 2022 ready to wear '866': gucci,spring 2023 ready to wear '867': gucci,spring 2024 menswear '868': gucci,spring 2024 ready to wear '869': hermes,fall 1999 ready to wear '870': hermes,fall 2000 ready to wear '871': hermes,fall 2001 ready to wear '872': hermes,fall 2004 ready to wear '873': hermes,fall 2005 menswear '874': hermes,fall 2005 ready to wear '875': hermes,fall 2006 menswear '876': hermes,fall 2006 ready to wear '877': hermes,fall 2007 menswear '878': hermes,fall 2007 ready to wear '879': hermes,fall 2008 menswear '880': hermes,fall 2008 ready to wear '881': hermes,fall 2009 ready to wear '882': hermes,fall 2010 menswear '883': hermes,fall 2010 ready to wear '884': hermes,fall 2011 menswear '885': hermes,fall 2011 ready to wear '886': hermes,fall 2012 menswear '887': hermes,fall 2012 ready to wear '888': hermes,fall 2013 menswear '889': hermes,fall 2013 ready to wear '890': hermes,fall 2014 menswear '891': hermes,fall 2014 ready to wear '892': hermes,fall 2015 menswear '893': hermes,fall 2015 ready to wear '894': hermes,fall 2016 menswear '895': hermes,fall 2016 ready to wear '896': hermes,fall 2017 menswear '897': hermes,fall 2017 ready to wear '898': hermes,fall 2018 menswear '899': hermes,fall 2018 ready to wear '900': hermes,fall 2019 menswear '901': hermes,fall 2019 ready to wear '902': hermes,fall 2020 menswear '903': hermes,fall 2020 ready to wear '904': hermes,fall 2021 menswear '905': hermes,fall 2021 ready to wear '906': hermes,fall 2022 menswear 
'907': hermes,fall 2022 ready to wear '908': hermes,fall 2023 menswear '909': hermes,fall 2023 ready to wear '910': hermes,pre fall 2017 '911': hermes,pre fall 2018 '912': hermes,pre fall 2019 '913': hermes,resort 2017 '914': hermes,resort 2018 '915': hermes,resort 2019 '916': hermes,spring 1999 ready to wear '917': hermes,spring 2000 ready to wear '918': hermes,spring 2001 ready to wear '919': hermes,spring 2002 ready to wear '920': hermes,spring 2006 menswear '921': hermes,spring 2006 ready to wear '922': hermes,spring 2007 menswear '923': hermes,spring 2007 ready to wear '924': hermes,spring 2008 menswear '925': hermes,spring 2008 ready to wear '926': hermes,spring 2009 menswear '927': hermes,spring 2010 menswear '928': hermes,spring 2010 ready to wear '929': hermes,spring 2011 menswear '930': hermes,spring 2011 ready to wear '931': hermes,spring 2012 menswear '932': hermes,spring 2012 ready to wear '933': hermes,spring 2013 menswear '934': hermes,spring 2013 ready to wear '935': hermes,spring 2014 menswear '936': hermes,spring 2014 ready to wear '937': hermes,spring 2015 menswear '938': hermes,spring 2015 ready to wear '939': hermes,spring 2016 menswear '940': hermes,spring 2016 ready to wear '941': hermes,spring 2017 menswear '942': hermes,spring 2017 ready to wear '943': hermes,spring 2018 menswear '944': hermes,spring 2018 ready to wear '945': hermes,spring 2019 menswear '946': hermes,spring 2019 ready to wear '947': hermes,spring 2020 menswear '948': hermes,spring 2020 ready to wear '949': hermes,spring 2021 menswear '950': hermes,spring 2021 ready to wear '951': hermes,spring 2022 menswear '952': hermes,spring 2022 ready to wear '953': hermes,spring 2023 menswear '954': hermes,spring 2023 ready to wear '955': hermes,spring 2024 menswear '956': hermes,spring 2024 ready to wear '957': louis vuitton,fall 1998 ready to wear '958': louis vuitton,fall 2000 ready to wear '959': louis vuitton,fall 2001 ready to wear '960': louis vuitton,fall 2002 ready to wear '961': louis vuitton,fall 2003 ready to wear '962': louis vuitton,fall 2004 ready to wear '963': louis vuitton,fall 2005 menswear '964': louis vuitton,fall 2005 ready to wear '965': louis vuitton,fall 2006 menswear '966': louis vuitton,fall 2006 ready to wear '967': louis vuitton,fall 2007 menswear '968': louis vuitton,fall 2008 menswear '969': louis vuitton,fall 2008 ready to wear '970': louis vuitton,fall 2009 ready to wear '971': louis vuitton,fall 2010 menswear '972': louis vuitton,fall 2010 ready to wear '973': louis vuitton,fall 2011 menswear '974': louis vuitton,fall 2011 ready to wear '975': louis vuitton,fall 2012 menswear '976': louis vuitton,fall 2012 ready to wear '977': louis vuitton,fall 2013 menswear '978': louis vuitton,fall 2013 ready to wear '979': louis vuitton,fall 2014 menswear '980': louis vuitton,fall 2014 ready to wear '981': louis vuitton,fall 2015 menswear '982': louis vuitton,fall 2015 ready to wear '983': louis vuitton,fall 2016 menswear '984': louis vuitton,fall 2016 ready to wear '985': louis vuitton,fall 2017 menswear '986': louis vuitton,fall 2017 ready to wear '987': louis vuitton,fall 2018 menswear '988': louis vuitton,fall 2018 ready to wear '989': louis vuitton,fall 2019 menswear '990': louis vuitton,fall 2019 ready to wear '991': louis vuitton,fall 2020 menswear '992': louis vuitton,fall 2020 ready to wear '993': louis vuitton,fall 2021 menswear '994': louis vuitton,fall 2021 ready to wear '995': louis vuitton,fall 2022 menswear '996': louis vuitton,fall 2022 ready to wear '997': louis 
vuitton,fall 2023 menswear '998': louis vuitton,fall 2023 ready to wear '999': louis vuitton,pre fall 2008 '1000': louis vuitton,pre fall 2009 '1001': louis vuitton,pre fall 2010 '1002': louis vuitton,pre fall 2011 '1003': louis vuitton,pre fall 2012 '1004': louis vuitton,pre fall 2013 '1005': louis vuitton,pre fall 2014 '1006': louis vuitton,pre fall 2015 '1007': louis vuitton,pre fall 2016 '1008': louis vuitton,pre fall 2017 '1009': louis vuitton,pre fall 2018 '1010': louis vuitton,pre fall 2019 '1011': louis vuitton,pre fall 2020 '1012': louis vuitton,pre fall 2020 menswear '1013': louis vuitton,pre fall 2021 '1014': louis vuitton,pre fall 2021 menswear '1015': louis vuitton,pre fall 2022 menswear '1016': louis vuitton,pre fall 2023 '1017': louis vuitton,pre fall 2023 menswear '1018': louis vuitton,pre fall 2024 menswear '1019': louis vuitton,resort 2008 '1020': louis vuitton,resort 2009 '1021': louis vuitton,resort 2010 '1022': louis vuitton,resort 2011 '1023': louis vuitton,resort 2012 '1024': louis vuitton,resort 2013 '1025': louis vuitton,resort 2014 '1026': louis vuitton,resort 2015 '1027': louis vuitton,resort 2016 '1028': louis vuitton,resort 2017 '1029': louis vuitton,resort 2018 '1030': louis vuitton,resort 2019 '1031': louis vuitton,resort 2020 '1032': louis vuitton,resort 2021 '1033': louis vuitton,resort 2021 menswear '1034': louis vuitton,resort 2022 '1035': louis vuitton,resort 2022 menswear '1036': louis vuitton,resort 2023 '1037': louis vuitton,resort 2023 menswear '1038': louis vuitton,resort 2024 '1039': louis vuitton,resort 2024 menswear '1040': louis vuitton,spring 2000 ready to wear '1041': louis vuitton,spring 2001 ready to wear '1042': louis vuitton,spring 2002 ready to wear '1043': louis vuitton,spring 2003 ready to wear '1044': louis vuitton,spring 2004 ready to wear '1045': louis vuitton,spring 2005 menswear '1046': louis vuitton,spring 2005 ready to wear '1047': louis vuitton,spring 2006 menswear '1048': louis vuitton,spring 2006 ready to wear '1049': louis vuitton,spring 2007 menswear '1050': louis vuitton,spring 2007 ready to wear '1051': louis vuitton,spring 2008 menswear '1052': louis vuitton,spring 2008 ready to wear '1053': louis vuitton,spring 2009 menswear '1054': louis vuitton,spring 2009 ready to wear '1055': louis vuitton,spring 2010 menswear '1056': louis vuitton,spring 2010 ready to wear '1057': louis vuitton,spring 2011 menswear '1058': louis vuitton,spring 2011 ready to wear '1059': louis vuitton,spring 2012 menswear '1060': louis vuitton,spring 2012 ready to wear '1061': louis vuitton,spring 2013 menswear '1062': louis vuitton,spring 2013 ready to wear '1063': louis vuitton,spring 2014 menswear '1064': louis vuitton,spring 2014 ready to wear '1065': louis vuitton,spring 2015 menswear '1066': louis vuitton,spring 2015 ready to wear '1067': louis vuitton,spring 2016 menswear '1068': louis vuitton,spring 2016 ready to wear '1069': louis vuitton,spring 2017 menswear '1070': louis vuitton,spring 2017 ready to wear '1071': louis vuitton,spring 2018 menswear '1072': louis vuitton,spring 2018 ready to wear '1073': louis vuitton,spring 2019 menswear '1074': louis vuitton,spring 2019 ready to wear '1075': louis vuitton,spring 2020 menswear '1076': louis vuitton,spring 2020 ready to wear '1077': louis vuitton,spring 2021 menswear '1078': louis vuitton,spring 2021 ready to wear '1079': louis vuitton,spring 2022 menswear '1080': louis vuitton,spring 2023 menswear '1081': louis vuitton,spring 2023 ready to wear '1082': louis vuitton,spring 2024 menswear 
'1083': prada,fall 1996 ready to wear '1084': prada,fall 2000 ready to wear '1085': prada,fall 2001 ready to wear '1086': prada,fall 2002 ready to wear '1087': prada,fall 2003 ready to wear '1088': prada,fall 2004 ready to wear '1089': prada,fall 2005 menswear '1090': prada,fall 2005 ready to wear '1091': prada,fall 2006 menswear '1092': prada,fall 2006 ready to wear '1093': prada,fall 2007 menswear '1094': prada,fall 2007 ready to wear '1095': prada,fall 2008 menswear '1096': prada,fall 2008 ready to wear '1097': prada,fall 2009 menswear '1098': prada,fall 2009 ready to wear '1099': prada,fall 2010 menswear '1100': prada,fall 2010 ready to wear '1101': prada,fall 2011 menswear '1102': prada,fall 2011 ready to wear '1103': prada,fall 2012 menswear '1104': prada,fall 2012 ready to wear '1105': prada,fall 2013 menswear '1106': prada,fall 2013 ready to wear '1107': prada,fall 2014 menswear '1108': prada,fall 2014 ready to wear '1109': prada,fall 2015 menswear '1110': prada,fall 2015 ready to wear '1111': prada,fall 2016 menswear '1112': prada,fall 2016 ready to wear '1113': prada,fall 2017 menswear '1114': prada,fall 2017 ready to wear '1115': prada,fall 2018 menswear '1116': prada,fall 2018 ready to wear '1117': prada,fall 2019 menswear '1118': prada,fall 2019 ready to wear '1119': prada,fall 2020 menswear '1120': prada,fall 2020 ready to wear '1121': prada,fall 2021 menswear '1122': prada,fall 2021 ready to wear '1123': prada,fall 2022 menswear '1124': prada,fall 2022 ready to wear '1125': prada,fall 2023 menswear '1126': prada,fall 2023 ready to wear '1127': prada,pre fall 2009 '1128': prada,pre fall 2010 '1129': prada,resort 2008 '1130': prada,resort 2009 '1131': prada,resort 2010 '1132': prada,resort 2011 '1133': prada,resort 2012 '1134': prada,resort 2013 '1135': prada,resort 2018 '1136': prada,resort 2019 '1137': prada,resort 2020 '1138': prada,spring 1992 ready to wear '1139': prada,spring 1993 ready to wear '1140': prada,spring 1994 ready to wear '1141': prada,spring 1995 ready to wear '1142': prada,spring 1996 ready to wear '1143': prada,spring 1997 ready to wear '1144': prada,spring 1998 ready to wear '1145': prada,spring 1999 ready to wear '1146': prada,spring 2000 ready to wear '1147': prada,spring 2001 ready to wear '1148': prada,spring 2002 ready to wear '1149': prada,spring 2003 ready to wear '1150': prada,spring 2004 ready to wear '1151': prada,spring 2005 menswear '1152': prada,spring 2005 ready to wear '1153': prada,spring 2006 menswear '1154': prada,spring 2006 ready to wear '1155': prada,spring 2007 menswear '1156': prada,spring 2007 ready to wear '1157': prada,spring 2008 menswear '1158': prada,spring 2008 ready to wear '1159': prada,spring 2009 menswear '1160': prada,spring 2009 ready to wear '1161': prada,spring 2010 ready to wear '1162': prada,spring 2011 menswear '1163': prada,spring 2011 ready to wear '1164': prada,spring 2012 menswear '1165': prada,spring 2012 ready to wear '1166': prada,spring 2013 menswear '1167': prada,spring 2013 ready to wear '1168': prada,spring 2014 menswear '1169': prada,spring 2014 ready to wear '1170': prada,spring 2015 menswear '1171': prada,spring 2015 ready to wear '1172': prada,spring 2016 menswear '1173': prada,spring 2016 ready to wear '1174': prada,spring 2017 menswear '1175': prada,spring 2017 ready to wear '1176': prada,spring 2018 menswear '1177': prada,spring 2018 ready to wear '1178': prada,spring 2019 menswear '1179': prada,spring 2019 ready to wear '1180': prada,spring 2020 menswear '1181': prada,spring 2020 ready to wear 
'1182': prada,spring 2021 menswear '1183': prada,spring 2021 ready to wear '1184': prada,spring 2022 menswear '1185': prada,spring 2022 ready to wear '1186': prada,spring 2023 menswear '1187': prada,spring 2023 ready to wear '1188': prada,spring 2024 menswear '1189': prada,spring 2024 ready to wear '1190': ralph lauren,fall 2000 ready to wear '1191': ralph lauren,fall 2001 ready to wear '1192': ralph lauren,fall 2002 ready to wear '1193': ralph lauren,fall 2003 ready to wear '1194': ralph lauren,fall 2004 ready to wear '1195': ralph lauren,fall 2005 menswear '1196': ralph lauren,fall 2005 ready to wear '1197': ralph lauren,fall 2006 menswear '1198': ralph lauren,fall 2006 ready to wear '1199': ralph lauren,fall 2007 menswear '1200': ralph lauren,fall 2007 ready to wear '1201': ralph lauren,fall 2008 menswear '1202': ralph lauren,fall 2008 ready to wear '1203': ralph lauren,fall 2009 ready to wear '1204': ralph lauren,fall 2010 menswear '1205': ralph lauren,fall 2010 ready to wear '1206': ralph lauren,fall 2011 ready to wear '1207': ralph lauren,fall 2012 ready to wear '1208': ralph lauren,fall 2013 menswear '1209': ralph lauren,fall 2013 ready to wear '1210': ralph lauren,fall 2014 menswear '1211': ralph lauren,fall 2014 ready to wear '1212': ralph lauren,fall 2015 menswear '1213': ralph lauren,fall 2015 ready to wear '1214': ralph lauren,fall 2016 menswear '1215': ralph lauren,fall 2016 ready to wear '1216': ralph lauren,fall 2017 menswear '1217': ralph lauren,fall 2017 ready to wear '1218': ralph lauren,fall 2018 menswear '1219': ralph lauren,fall 2018 ready to wear '1220': ralph lauren,fall 2019 menswear '1221': ralph lauren,fall 2019 ready to wear '1222': ralph lauren,fall 2020 menswear '1223': ralph lauren,fall 2020 ready to wear '1224': ralph lauren,fall 2021 ready to wear '1225': ralph lauren,fall 2022 ready to wear '1226': ralph lauren,fall 2023 ready to wear '1227': ralph lauren,pre fall 2014 '1228': ralph lauren,pre fall 2015 '1229': ralph lauren,pre fall 2016 '1230': ralph lauren,pre fall 2017 '1231': ralph lauren,pre fall 2018 '1232': ralph lauren,pre fall 2019 '1233': ralph lauren,pre fall 2020 '1234': ralph lauren,pre fall 2021 '1235': ralph lauren,resort 2008 '1236': ralph lauren,resort 2009 '1237': ralph lauren,resort 2013 '1238': ralph lauren,resort 2014 '1239': ralph lauren,resort 2015 '1240': ralph lauren,resort 2016 '1241': ralph lauren,resort 2019 '1242': ralph lauren,resort 2022 '1243': ralph lauren,resort 2024 '1244': ralph lauren,spring 2000 ready to wear '1245': ralph lauren,spring 2001 ready to wear '1246': ralph lauren,spring 2002 ready to wear '1247': ralph lauren,spring 2003 ready to wear '1248': ralph lauren,spring 2004 ready to wear '1249': ralph lauren,spring 2005 ready to wear '1250': ralph lauren,spring 2006 menswear '1251': ralph lauren,spring 2006 ready to wear '1252': ralph lauren,spring 2007 menswear '1253': ralph lauren,spring 2007 ready to wear '1254': ralph lauren,spring 2008 menswear '1255': ralph lauren,spring 2008 ready to wear '1256': ralph lauren,spring 2009 ready to wear '1257': ralph lauren,spring 2010 ready to wear '1258': ralph lauren,spring 2011 ready to wear '1259': ralph lauren,spring 2012 ready to wear '1260': ralph lauren,spring 2013 menswear '1261': ralph lauren,spring 2013 ready to wear '1262': ralph lauren,spring 2014 menswear '1263': ralph lauren,spring 2014 ready to wear '1264': ralph lauren,spring 2015 menswear '1265': ralph lauren,spring 2015 ready to wear '1266': ralph lauren,spring 2016 menswear '1267': ralph lauren,spring 
2016 ready to wear '1268': ralph lauren,spring 2017 menswear '1269': ralph lauren,spring 2017 ready to wear '1270': ralph lauren,spring 2018 menswear '1271': ralph lauren,spring 2018 ready to wear '1272': ralph lauren,spring 2019 menswear '1273': ralph lauren,spring 2019 ready to wear '1274': ralph lauren,spring 2020 menswear '1275': ralph lauren,spring 2021 ready to wear '1276': ralph lauren,spring 2022 ready to wear '1277': ralph lauren,spring 2023 ready to wear '1278': ralph lauren,spring 2024 menswear '1279': ralph lauren,spring 2024 ready to wear '1280': saint laurent,fall 2000 ready to wear '1281': saint laurent,fall 2001 couture '1282': saint laurent,fall 2001 ready to wear '1283': saint laurent,fall 2002 ready to wear '1284': saint laurent,fall 2003 ready to wear '1285': saint laurent,fall 2004 ready to wear '1286': saint laurent,fall 2005 menswear '1287': saint laurent,fall 2005 ready to wear '1288': saint laurent,fall 2006 menswear '1289': saint laurent,fall 2006 ready to wear '1290': saint laurent,fall 2007 menswear '1291': saint laurent,fall 2007 ready to wear '1292': saint laurent,fall 2008 menswear '1293': saint laurent,fall 2008 ready to wear '1294': saint laurent,fall 2009 ready to wear '1295': saint laurent,fall 2010 menswear '1296': saint laurent,fall 2010 ready to wear '1297': saint laurent,fall 2011 menswear '1298': saint laurent,fall 2011 ready to wear '1299': saint laurent,fall 2012 menswear '1300': saint laurent,fall 2012 ready to wear '1301': saint laurent,fall 2013 menswear '1302': saint laurent,fall 2013 ready to wear '1303': saint laurent,fall 2014 menswear '1304': saint laurent,fall 2014 ready to wear '1305': saint laurent,fall 2015 menswear '1306': saint laurent,fall 2015 ready to wear '1307': saint laurent,fall 2016 menswear '1308': saint laurent,fall 2016 ready to wear '1309': saint laurent,fall 2017 ready to wear '1310': saint laurent,fall 2018 ready to wear '1311': saint laurent,fall 2019 menswear '1312': saint laurent,fall 2019 ready to wear '1313': saint laurent,fall 2020 ready to wear '1314': saint laurent,fall 2021 menswear '1315': saint laurent,fall 2021 ready to wear '1316': saint laurent,fall 2022 menswear '1317': saint laurent,fall 2022 ready to wear '1318': saint laurent,fall 2023 menswear '1319': saint laurent,fall 2023 ready to wear '1320': saint laurent,pre fall 2009 '1321': saint laurent,pre fall 2010 '1322': saint laurent,pre fall 2011 '1323': saint laurent,pre fall 2012 '1324': saint laurent,pre fall 2013 '1325': saint laurent,pre fall 2016 '1326': saint laurent,pre fall 2019 '1327': saint laurent,pre fall 2020 '1328': saint laurent,pre fall 2020 menswear '1329': saint laurent,pre fall 2021 '1330': saint laurent,pre fall 2022 '1331': saint laurent,pre fall 2023 '1332': saint laurent,resort 2008 '1333': saint laurent,resort 2010 '1334': saint laurent,resort 2011 '1335': saint laurent,resort 2012 '1336': saint laurent,resort 2014 '1337': saint laurent,resort 2020 '1338': saint laurent,resort 2021 '1339': saint laurent,resort 2022 '1340': saint laurent,resort 2023 '1341': saint laurent,spring 2000 ready to wear '1342': saint laurent,spring 2001 couture '1343': saint laurent,spring 2001 ready to wear '1344': saint laurent,spring 2002 couture '1345': saint laurent,spring 2002 ready to wear '1346': saint laurent,spring 2003 ready to wear '1347': saint laurent,spring 2004 ready to wear '1348': saint laurent,spring 2005 menswear '1349': saint laurent,spring 2005 ready to wear '1350': saint laurent,spring 2006 menswear '1351': saint laurent,spring 
2006 ready to wear '1352': saint laurent,spring 2007 menswear '1353': saint laurent,spring 2007 ready to wear '1354': saint laurent,spring 2008 menswear '1355': saint laurent,spring 2008 ready to wear '1356': saint laurent,spring 2009 menswear '1357': saint laurent,spring 2009 ready to wear '1358': saint laurent,spring 2010 ready to wear '1359': saint laurent,spring 2011 menswear '1360': saint laurent,spring 2011 ready to wear '1361': saint laurent,spring 2012 menswear '1362': saint laurent,spring 2012 ready to wear '1363': saint laurent,spring 2013 ready to wear '1364': saint laurent,spring 2014 menswear '1365': saint laurent,spring 2014 ready to wear '1366': saint laurent,spring 2015 menswear '1367': saint laurent,spring 2015 ready to wear '1368': saint laurent,spring 2016 menswear '1369': saint laurent,spring 2016 ready to wear '1370': saint laurent,spring 2017 ready to wear '1371': saint laurent,spring 2018 ready to wear '1372': saint laurent,spring 2019 menswear '1373': saint laurent,spring 2019 ready to wear '1374': saint laurent,spring 2020 menswear '1375': saint laurent,spring 2020 ready to wear '1376': saint laurent,spring 2021 menswear '1377': saint laurent,spring 2021 ready to wear '1378': saint laurent,spring 2022 menswear '1379': saint laurent,spring 2022 ready to wear '1380': saint laurent,spring 2023 menswear '1381': saint laurent,spring 2023 ready to wear '1382': saint laurent,spring 2024 menswear '1383': saint laurent,spring 2024 ready to wear '1384': valentino,fall 2000 ready to wear '1385': valentino,fall 2001 couture '1386': valentino,fall 2001 ready to wear '1387': valentino,fall 2002 couture '1388': valentino,fall 2002 ready to wear '1389': valentino,fall 2003 couture '1390': valentino,fall 2003 ready to wear '1391': valentino,fall 2004 couture '1392': valentino,fall 2004 ready to wear '1393': valentino,fall 2005 couture '1394': valentino,fall 2005 menswear '1395': valentino,fall 2005 ready to wear '1396': valentino,fall 2006 couture '1397': valentino,fall 2006 menswear '1398': valentino,fall 2006 ready to wear '1399': valentino,fall 2007 couture '1400': valentino,fall 2007 menswear '1401': valentino,fall 2007 ready to wear '1402': valentino,fall 2008 couture '1403': valentino,fall 2008 menswear '1404': valentino,fall 2008 ready to wear '1405': valentino,fall 2009 couture '1406': valentino,fall 2009 ready to wear '1407': valentino,fall 2010 couture '1408': valentino,fall 2010 ready to wear '1409': valentino,fall 2011 couture '1410': valentino,fall 2011 ready to wear '1411': valentino,fall 2012 couture '1412': valentino,fall 2012 menswear '1413': valentino,fall 2012 ready to wear '1414': valentino,fall 2013 couture '1415': valentino,fall 2013 menswear '1416': valentino,fall 2013 ready to wear '1417': valentino,fall 2014 couture '1418': valentino,fall 2014 menswear '1419': valentino,fall 2014 ready to wear '1420': valentino,fall 2015 couture '1421': valentino,fall 2015 menswear '1422': valentino,fall 2015 ready to wear '1423': valentino,fall 2016 couture '1424': valentino,fall 2016 menswear '1425': valentino,fall 2016 ready to wear '1426': valentino,fall 2017 couture '1427': valentino,fall 2017 menswear '1428': valentino,fall 2017 ready to wear '1429': valentino,fall 2018 couture '1430': valentino,fall 2018 menswear '1431': valentino,fall 2018 ready to wear '1432': valentino,fall 2019 couture '1433': valentino,fall 2019 menswear '1434': valentino,fall 2019 ready to wear '1435': valentino,fall 2020 couture '1436': valentino,fall 2020 menswear '1437': valentino,fall 2020 
ready to wear '1438': valentino,fall 2021 couture '1439': valentino,fall 2021 ready to wear '1440': valentino,fall 2022 couture '1441': valentino,fall 2022 ready to wear '1442': valentino,fall 2023 couture '1443': valentino,fall 2023 ready to wear '1444': valentino,pre fall 2008 '1445': valentino,pre fall 2010 '1446': valentino,pre fall 2011 '1447': valentino,pre fall 2012 '1448': valentino,pre fall 2013 '1449': valentino,pre fall 2014 '1450': valentino,pre fall 2015 '1451': valentino,pre fall 2016 '1452': valentino,pre fall 2017 '1453': valentino,pre fall 2018 '1454': valentino,pre fall 2019 '1455': valentino,pre fall 2020 '1456': valentino,pre fall 2021 '1457': valentino,pre fall 2022 '1458': valentino,pre fall 2023 '1459': valentino,pre fall 2024 '1460': valentino,resort 2008 '1461': valentino,resort 2009 '1462': valentino,resort 2011 '1463': valentino,resort 2012 '1464': valentino,resort 2013 '1465': valentino,resort 2014 '1466': valentino,resort 2015 '1467': valentino,resort 2016 '1468': valentino,resort 2017 '1469': valentino,resort 2018 '1470': valentino,resort 2019 '1471': valentino,resort 2020 '1472': valentino,resort 2021 '1473': valentino,resort 2022 '1474': valentino,resort 2023 '1475': valentino,resort 2024 '1476': valentino,spring 2000 ready to wear '1477': valentino,spring 2001 couture '1478': valentino,spring 2001 ready to wear '1479': valentino,spring 2002 couture '1480': valentino,spring 2002 ready to wear '1481': valentino,spring 2003 couture '1482': valentino,spring 2003 ready to wear '1483': valentino,spring 2004 couture '1484': valentino,spring 2004 ready to wear '1485': valentino,spring 2005 couture '1486': valentino,spring 2005 menswear '1487': valentino,spring 2005 ready to wear '1488': valentino,spring 2006 couture '1489': valentino,spring 2006 menswear '1490': valentino,spring 2006 ready to wear '1491': valentino,spring 2007 couture '1492': valentino,spring 2007 menswear '1493': valentino,spring 2007 ready to wear '1494': valentino,spring 2008 couture '1495': valentino,spring 2008 menswear '1496': valentino,spring 2008 ready to wear '1497': valentino,spring 2009 couture '1498': valentino,spring 2009 menswear '1499': valentino,spring 2009 ready to wear '1500': valentino,spring 2010 couture '1501': valentino,spring 2010 ready to wear '1502': valentino,spring 2011 couture '1503': valentino,spring 2011 ready to wear '1504': valentino,spring 2012 couture '1505': valentino,spring 2012 menswear '1506': valentino,spring 2012 ready to wear '1507': valentino,spring 2013 couture '1508': valentino,spring 2013 menswear '1509': valentino,spring 2013 ready to wear '1510': valentino,spring 2014 couture '1511': valentino,spring 2014 menswear '1512': valentino,spring 2014 ready to wear '1513': valentino,spring 2015 couture '1514': valentino,spring 2015 menswear '1515': valentino,spring 2015 ready to wear '1516': valentino,spring 2016 couture '1517': valentino,spring 2016 menswear '1518': valentino,spring 2016 ready to wear '1519': valentino,spring 2017 couture '1520': valentino,spring 2017 menswear '1521': valentino,spring 2017 ready to wear '1522': valentino,spring 2018 couture '1523': valentino,spring 2018 menswear '1524': valentino,spring 2018 ready to wear '1525': valentino,spring 2019 couture '1526': valentino,spring 2019 menswear '1527': valentino,spring 2019 ready to wear '1528': valentino,spring 2020 couture '1529': valentino,spring 2020 menswear '1530': valentino,spring 2020 ready to wear '1531': valentino,spring 2021 couture '1532': valentino,spring 2021 menswear '1533': 
valentino,spring 2021 ready to wear '1534': valentino,spring 2022 couture '1535': valentino,spring 2022 ready to wear '1536': valentino,spring 2023 couture '1537': valentino,spring 2023 ready to wear '1538': valentino,spring 2024 menswear '1539': versace by fendi,pre fall 2022 '1540': versace,fall 1991 ready to wear '1541': versace,fall 1992 ready to wear '1542': versace,fall 1993 ready to wear '1543': versace,fall 1994 ready to wear '1544': versace,fall 1995 ready to wear '1545': versace,fall 1996 ready to wear '1546': versace,fall 1997 ready to wear '1547': versace,fall 2000 ready to wear '1548': versace,fall 2001 couture '1549': versace,fall 2001 ready to wear '1550': versace,fall 2002 couture '1551': versace,fall 2002 ready to wear '1552': versace,fall 2003 couture '1553': versace,fall 2003 ready to wear '1554': versace,fall 2004 ready to wear '1555': versace,fall 2005 menswear '1556': versace,fall 2005 ready to wear '1557': versace,fall 2006 menswear '1558': versace,fall 2006 ready to wear '1559': versace,fall 2007 menswear '1560': versace,fall 2007 ready to wear '1561': versace,fall 2008 menswear '1562': versace,fall 2008 ready to wear '1563': versace,fall 2009 ready to wear '1564': versace,fall 2010 menswear '1565': versace,fall 2010 ready to wear '1566': versace,fall 2011 menswear '1567': versace,fall 2011 ready to wear '1568': versace,fall 2012 menswear '1569': versace,fall 2012 ready to wear '1570': versace,fall 2013 menswear '1571': versace,fall 2013 ready to wear '1572': versace,fall 2014 menswear '1573': versace,fall 2014 ready to wear '1574': versace,fall 2015 menswear '1575': versace,fall 2015 ready to wear '1576': versace,fall 2016 menswear '1577': versace,fall 2016 ready to wear '1578': versace,fall 2017 menswear '1579': versace,fall 2017 ready to wear '1580': versace,fall 2018 menswear '1581': versace,fall 2018 ready to wear '1582': versace,fall 2019 menswear '1583': versace,fall 2019 ready to wear '1584': versace,fall 2020 menswear '1585': versace,fall 2020 ready to wear '1586': versace,fall 2021 ready to wear '1587': versace,fall 2022 menswear '1588': versace,fall 2022 ready to wear '1589': versace,fall 2023 ready to wear '1590': versace,pre fall 2008 '1591': versace,pre fall 2009 '1592': versace,pre fall 2010 '1593': versace,pre fall 2011 '1594': versace,pre fall 2012 '1595': versace,pre fall 2013 '1596': versace,pre fall 2014 '1597': versace,pre fall 2015 '1598': versace,pre fall 2016 '1599': versace,pre fall 2017 '1600': versace,pre fall 2018 '1601': versace,pre fall 2019 '1602': versace,pre fall 2020 '1603': versace,pre fall 2021 '1604': versace,pre fall 2022 '1605': versace,pre fall 2022 menswear '1606': versace,pre fall 2023 '1607': versace,resort 2008 '1608': versace,resort 2009 '1609': versace,resort 2010 '1610': versace,resort 2011 '1611': versace,resort 2012 '1612': versace,resort 2013 '1613': versace,resort 2014 '1614': versace,resort 2015 '1615': versace,resort 2016 '1616': versace,resort 2017 '1617': versace,resort 2018 '1618': versace,resort 2019 '1619': versace,resort 2020 '1620': versace,resort 2021 '1621': versace,resort 2022 '1622': versace,resort 2023 '1623': versace,spring 1991 ready to wear '1624': versace,spring 1992 ready to wear '1625': versace,spring 1993 ready to wear '1626': versace,spring 1994 ready to wear '1627': versace,spring 1995 ready to wear '1628': versace,spring 1996 ready to wear '1629': versace,spring 1997 ready to wear '1630': versace,spring 2000 ready to wear '1631': versace,spring 2001 couture '1632': versace,spring 2001 ready 
to wear '1633': versace,spring 2002 couture '1634': versace,spring 2002 ready to wear '1635': versace,spring 2003 couture '1636': versace,spring 2003 ready to wear '1637': versace,spring 2004 couture '1638': versace,spring 2004 ready to wear '1639': versace,spring 2005 menswear '1640': versace,spring 2005 ready to wear '1641': versace,spring 2006 menswear '1642': versace,spring 2006 ready to wear '1643': versace,spring 2007 menswear '1644': versace,spring 2007 ready to wear '1645': versace,spring 2008 couture '1646': versace,spring 2008 menswear '1647': versace,spring 2008 ready to wear '1648': versace,spring 2009 menswear '1649': versace,spring 2009 ready to wear '1650': versace,spring 2010 ready to wear '1651': versace,spring 2011 menswear '1652': versace,spring 2011 ready to wear '1653': versace,spring 2012 menswear '1654': versace,spring 2012 ready to wear '1655': versace,spring 2013 menswear '1656': versace,spring 2013 ready to wear '1657': versace,spring 2014 menswear '1658': versace,spring 2014 ready to wear '1659': versace,spring 2015 menswear '1660': versace,spring 2015 ready to wear '1661': versace,spring 2016 menswear '1662': versace,spring 2016 ready to wear '1663': versace,spring 2017 menswear '1664': versace,spring 2017 ready to wear '1665': versace,spring 2018 menswear '1666': versace,spring 2018 ready to wear '1667': versace,spring 2019 menswear '1668': versace,spring 2019 ready to wear '1669': versace,spring 2020 menswear '1670': versace,spring 2020 ready to wear '1671': versace,spring 2021 menswear '1672': versace,spring 2021 ready to wear '1673': versace,spring 2022 ready to wear '1674': versace,spring 2023 menswear '1675': versace,spring 2023 ready to wear '1676': versace,spring 2024 ready to wear
splits:
- name: train
  num_bytes: 2097138827.181
  num_examples: 87547
download_size: 2042963572
dataset_size: 2097138827.181
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# vogue-runway-top15-512px

[Vogue Runway](https://www.vogue.com/fashion-shows)

- 15 fashion houses
- 1679 collections
- 87,547 images

Fashion Houses: Alexander McQueen, Armani, Balenciaga, Calvin Klein, Chanel, Dior, Fendi, Gucci, Hermes, Louis Vuitton, Prada, Ralph Lauren, Saint Laurent, Valentino, Versace.

Images have a maximum height of 512 pixels.

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/648a824a8ca6cf9857d1349c/kUFTy7kt_WAVbFymF-3uH.jpeg)

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/648a824a8ca6cf9857d1349c/HBTN1FxNwsLJipqldcTgb.jpeg)

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/648a824a8ca6cf9857d1349c/Sg--x22QxiePHi1DY35gP.jpeg)
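A minimal loading sketch for the card above. Two details are assumptions, since they are not stated here: the repository namespace (shown as a placeholder) and the name of the class-label feature (`label` is the usual `datasets` convention for class names like those listed in the YAML):

```python
from datasets import load_dataset

# "<namespace>" is a placeholder; substitute the repository's actual owner.
ds = load_dataset("<namespace>/vogue-runway-top15-512px", split="train")

# Assuming a standard ClassLabel feature named "label", each class name encodes
# "house,collection", e.g. "versace,spring 2024 ready to wear".
names = ds.features["label"].names
house, collection = names[ds[0]["label"]].split(",", 1)
print(house, "|", collection)
```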
openthaigpt/thai-onet-m6-exam
---
license: apache-2.0
task_categories:
- question-answering
language:
- th
- en
tags:
- thai
- onet
- university entrance exams
- exams
pretty_name: Thai O-Net Exams
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: train
    path: "data/train/*.csv"
  - split: test
    path: "data/test/*.csv"
- config_name: thai
  data_files:
  - split: train
    path: "data/train/thai.csv"
  - split: test
    path: "data/test/thai.csv"
- config_name: english
  data_files:
  - split: train
    path: "data/train/english.csv"
  - split: test
    path: "data/test/english.csv"
- config_name: social
  data_files:
  - split: train
    path: "data/train/social.csv"
  - split: test
    path: "data/test/social.csv"
- config_name: science
  data_files:
  - split: train
    path: "data/train/science.csv"
  - split: test
    path: "data/test/science.csv"
- config_name: math
  data_files:
  - split: train
    path: "data/train/math.csv"
  - split: test
    path: "data/test/math.csv"
---

# Thai O-Net Exams Dataset

## Overview

The Thai O-Net Exams dataset is a comprehensive collection of exam questions and answers from the Thai Ordinary National Educational Test (O-Net). It covers various subjects at the Grade 12 (M6) level and is designed to assist in educational research and the development of question-answering systems.

### Dataset Source

[Thai National Institute of Educational Testing Service (NIETS)](https://www.niets.or.th/th/catalog/view/630)

### Maintainer

Dr. Kobkrit Viriyayudhakorn
Email: kobkrit@iapp.co.th

## Data Structure

### Subjects Included

- English
- Mathematics
- Science
- Social Studies
- Thai Language

Each subject dataset includes:

- **Testing Set**: Questions from the year 2021.
- **Training Set**: Questions spanning 2019-2020. For Social Studies, the span is 2016-2020.

### Key Features

- **Split Data**: The dataset is divided into training and testing sets, allowing for effective model training and evaluation.
- **Question Format**: All questions are provided in text format.
- **Multiple Choice Questions**: The dataset includes multiple choice questions, offering a range of possible answers for each query.
- **Solutions**: Correct answers to all questions are provided.
- **Thai Human Verification**: Each item in the dataset has been verified by a Thai-speaking individual who is not a domain expert, ensuring the clarity and accessibility of the content.
- **Extra Annotations**:
  - `isAnswerable`: Indicates whether the question can be answered from the provided text alone, without additional information such as visual aids.
  - `isMultipleChoice`: Identifies whether the question is a multiple choice question.
  - `isSingleChoiceSolution`: Specifies whether there is only one correct answer among the provided choices.

## Usage

This dataset is ideal for developing and evaluating models in the domain of educational question-answering systems. It provides a unique opportunity to explore multilingual processing in the context of Thai and English. A minimal loading sketch is given below. For licensing terms, please refer to the Apache 2.0 License.

## Acknowledgements

This dataset was compiled and maintained with contributions from Dr. Kobkrit Viriyayudhakorn and the Thai National Institute of Educational Testing Service (NIETS).

---
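A minimal loading sketch, assuming only the configs and annotation flags documented above (the CSV storage type of the boolean flags is not specified, so the filter hedges on string values):

```python
from datasets import load_dataset

# Per-subject configs documented above: thai, english, social, science, math.
onet_math = load_dataset("openthaigpt/thai-onet-m6-exam", "math")
train, test = onet_math["train"], onet_math["test"]  # train: 2019-2020 papers, test: 2021

# Keep only questions marked answerable from the text alone; the CSV may store
# the flag as a string, so common truthy spellings are accepted.
answerable = test.filter(lambda row: str(row["isAnswerable"]).strip().lower() in ("true", "1"))
print(len(answerable), "answerable test questions")
```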
google/reveal
---
configs:
- config_name: default
  data_files:
  - split: eval
    path: eval/reveal_eval.csv
  - split: open
    path: open/reveal_open.csv
license: cc-by-nd-4.0
language:
- en
pretty_name: Reveal
task_categories:
- text-classification
- question-answering
size_categories:
- 1K<n<10K
extra_gated_prompt: >-
  By clicking “Access repository” below, you confirm your understanding that
  this resource is permitted for use as a test set, but not as a training set,
  and should not be uploaded to the internet where web-crawlers can access it
  (such as plain-text in github, or in an academic PDF). Please ensure
  adherence to the terms detailed in the paper. If you are unsure about your
  specific case, don't hesitate to contact: alonjacovi@gmail.com.
---

# Reveal: A Benchmark for Verifiers of Reasoning Chains

## [Paper: A Chain-of-Thought Is as Strong as Its Weakest Link: A Benchmark for Verifiers of Reasoning Chains](https://arxiv.org/abs/2402.00559)

Link: https://arxiv.org/abs/2402.00559

Website: https://reveal-dataset.github.io/

Abstract: Prompting language models to provide step-by-step answers (e.g., "Chain-of-Thought") is the prominent approach for complex reasoning tasks, where more accurate reasoning chains typically improve downstream task performance. Recent literature discusses automatic methods to verify reasoning steps to evaluate and improve their correctness. However, no fine-grained step-level datasets are available to enable thorough evaluation of such verification methods, hindering progress in this direction. We introduce Reveal: *Reasoning Verification Evaluation*, a new dataset to benchmark automatic verifiers of complex Chain-of-Thought reasoning in open-domain question answering settings. Reveal includes comprehensive labels for the relevance, attribution to evidence passages, and logical correctness of each reasoning step in a language model's answer, across a wide variety of datasets and state-of-the-art language models.

### Usage

To load the dataset:

```python
! pip install datasets
from datasets import load_dataset

reveal = load_dataset("google/reveal")
reveal_eval = reveal['eval']  # select Reveal-Eval, the evaluation split
reveal_open = reveal['open']  # select Reveal-Open, the hard-cases split with low-confidence annotations
```

**Note: The above provides a table from `eval/reveal_eval.csv` for easily working at scale with the data.
There is another file `eval/reveal_eval.json` with a more intuitive JSON structure, if you prefer that format.**

Some examples of how to handle the data by deriving step-level tasks:

```python
import pandas as pd
reveal_eval = pd.DataFrame(reveal_eval)

# Step Attribution task
eval_attr = reveal_eval[~reveal_eval.evidence.isna()].reset_index(drop=True)
eval_attr['decontextualized_step'] = eval_attr['decontextualized_step'].fillna(eval_attr['step'])
# Fields:
# Premise: [evidence]
# Hypothesis: [decontextualized_step]
# Gold label: [attribution_label]

# Step Logic task
def _make_history(row):
    return row['question'] + ' ' + row['full_answer'].split(row['step'].strip())[0]

eval_logic = reveal_eval.drop_duplicates(subset=['answer_id', 'step_idx']).reset_index(drop=True)
eval_logic = eval_logic[(eval_logic['type_label'] == 'Logical step.') & (eval_logic['logic_relevance_label'] == 'Relevant') & (~eval_logic['correctness_label'].isna())]
eval_logic['history'] = eval_logic.apply(_make_history, axis=1)
# Fields:
# Premise: [history]
# Hypothesis: [step]
# Gold label: [correctness_label]

# Step Relevance task
eval_relevance = reveal_eval.drop_duplicates(subset=['answer_id', 'step_idx']).reset_index(drop=True)
eval_relevance['relevance_label'] = (eval_relevance['logic_relevance_label'] == 'Relevant') | (eval_relevance['attribution_relevance_label'] == 'Yes')
# Fields:
# Question: [question]
# Answer: [full_answer]
# Step: [step]
# Gold label: [relevance_label]

# Step Type task
eval_type = reveal_eval.drop_duplicates(subset=['answer_id', 'step_idx']).reset_index(drop=True)
# Fields:
# Question: [question]
# Answer: [full_answer]
# Step: [step]
# Gold label: [type_label]

# CoT Full Correctness task
# Get a list of the final rated evidence passages for each answer_id and concatenate the list into one string:
rated_evidence_per_answer = {
    answer_id: reveal_eval[(reveal_eval.answer_id == answer_id) & reveal_eval.is_final_rated_evidence_for_step]['evidence']
    for answer_id in reveal_eval['answer_id'].unique()
}
rated_evidence_per_answer = {
    k: '\n'.join([f'Evidence {i+1}: {e}' for i, e in enumerate(v)])
    for k, v in rated_evidence_per_answer.items()
}

# Prepare the eval DataFrame:
answer_correctness_eval = reveal_eval.drop_duplicates(subset=['answer_id']).reset_index(drop=True)
answer_correctness_eval['all_rated_evidence'] = answer_correctness_eval['answer_id'].apply(lambda x: rated_evidence_per_answer[x])
answer_correctness_eval = answer_correctness_eval[['answer_id','question','full_answer','all_rated_evidence','answer_is_fully_attributable','answer_is_logically_correct','answer_is_fully_attributable_and_correct']]
```

### **This is an evaluation benchmark. It should not be included in training data for NLP models.**

Please do not redistribute any part of the dataset without sufficient protection against web-crawlers. A 64-character identifier string is added to each instance in the dataset to assist in future detection of contamination in web-crawl corpora. The Reveal dataset's string is:

`Reveal:Mn12GAs2I3S0eWjbTUFC0Y51ijGFB7rGBLnzGGhCQ7OtJPfVg7e6qt9zb5RPL36U`

The same has been done to the few-shot prompting demonstrations, to detect whether these demonstrations have been in a model's training data (if so, these demonstrations should not be used for few-shot evaluation of that model).
The few-shot demonstrations' string is: `Reveal:HlyeWxw8BRcQ2dPGShTUUjn03uULZOyeNbzKzRIg4QihZ45k1lrye46OoUzi3kkW` #### Fields and Descriptions * **dataset**: Source dataset * **question_id**: ID of the original instance from the source dataset * **question**: The question text * **answer_model**: Model which generated the CoT answer * **answer_id**: ID of a particular model's answer to a question (question_id + answer_model) * **step_idx**: Step index in the answer for this row * **full_answer**: Full CoT answer generated by the model * **step**: The step from the full CoT answer which matches "step_idx", the subject of the row * **decontextualized_step**: The decontextualized version of the step that we used for evidence retrieval (and for the NLI classification evaluations settings) * **attribution_relevance_label**: Majority label for the relevance annotations in the attribution task * **attribution_relevance_majority**: Max # of raters which agreed with each other for this rating * **attribution_relevance_annotations**: The annotations for each rater (ordered list) * **attribution_relevance_raters**: The raters (ordered list) * **attribution_relevance_num_ratings**: The number of raters/ratings * **evidence_id**: The evidence id (from 1 to 3) used for the annotation in this row * **evidence**: The evidence used for the annotation in this row * **attribution_label**: The majority label for whether the evidence supports the step * **attribution_majority**: Max # of raters which agreed with each other for this rating * **attribution_annotations**: The annotations for each rater (ordered list) * **attribution_raters**: The raters (ordered list) * **attribution_num_ratings**: The number of raters/ratings * **attribution_justifications**: The justifications of each rater (ordered list) - note that the raters gave one justification for every step, *not* for every evidence * **annotated_in_attribution_batch**: Which batch this was annotated in (we had 5 annotation batches) * **type_label**: Majority label for whether the step is an attribution step, logical step or both * **type_majority**: Max # of raters which agreed with each other for this rating * **type_annotations**: The annotations for each rater (ordered list) * **type_raters**: The raters (ordered list) * **type_num_ratings**: The number of raters/ratings * **logic_relevance_label**: Majority label for relevance annotations in the logic task * **logic_relevance_majority**: Max # of raters which agreed with each other for this rating * **logic_relevance_annotations**: The annotations for each rater (ordered list) * **logic_relevance_raters**: The raters (ordered list) * **logic_relevance_num_ratings**: The number of raters/ratings * **logic_justifications**: Justifications of each rater (ordered list) - note that the raters gave one justification to all ratings of every step (i.e., one justification for the ratings of type + relevance + correctness together) * **annotated_in_logic_batch**: Which batch this was annotated in (we had 5 annotation batches) * **correctness_label**: Majority label for whether the step is logically correct given the question + previous steps * **correctness_majority**: Max # of raters which agreed with each other for this rating * **correctness_annotations**: The annotations for each rater (ordered list) * **correctness_raters**: The raters (ordered list) * **correctness_num_ratings**: The number of raters/ratings * **agreement_majority_all_steps**: Minimum agreement majority across the attribution and logic 
ratings for all steps
* **is_low_agreement_hard_case**: agreement_majority_all_steps <= 2. This boolean indicates whether the annotations for this answer contain a step with non-trustworthy annotations. This is the difference between Reveal-Eval and Reveal-Open.
* **contamination_identifier**: An identification string for contamination detection.
* **is_final_rated_evidence_for_step**: Whether this step-evidence pair is the final attribution rating for this step (we try up to 3 evidence passages, and stop when we find a supporting or contradicting one; the rating in this row is the final attribution rating for the step across all evidence passages)
* **answer_is_fully_attributable**: Whether all attribution steps in the answer are fully attributable to some evidence
* **answer_is_logically_correct**: Whether all logic steps are logically correct
* **answer_is_fully_attributable_and_correct**: Whether all steps are correct (fully attributable or logical)
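#### Contamination Check Example

A minimal sketch of how the contamination identifier above can be used: scan any corpus you plan to train on for the canary string. The `corpus` variable is a stand-in for your own data.

```python
# Canary string published for the Reveal dataset (see above).
CANARY = "Reveal:Mn12GAs2I3S0eWjbTUFC0Y51ijGFB7rGBLnzGGhCQ7OtJPfVg7e6qt9zb5RPL36U"

def is_contaminated(documents):
    """Return True if any document contains the Reveal canary string."""
    return any(CANARY in doc for doc in documents)

corpus = ["example training document", "another document"]  # stand-in corpus
print(is_contaminated(corpus))  # False for this toy corpus
```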
euclaise/WritingPrompts_curated
--- dataset_info: features: - name: body dtype: string - name: comment_score dtype: int64 - name: prompt dtype: string - name: post_score dtype: int64 splits: - name: train num_bytes: 244506795.8945573 num_examples: 66332 download_size: 168000074 dataset_size: 244506795.8945573 configs: - config_name: default data_files: - split: train path: data/train-* license: mit --- Data from real humans, courtesy of https://reddit.com/r/WritingPrompts
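A minimal loading sketch using the fields listed in the dataset info (`prompt`, `body`, `comment_score`, `post_score`); the score threshold below is an arbitrary example:

```python
from datasets import load_dataset

ds = load_dataset("euclaise/WritingPrompts_curated", split="train")

# Keep only highly-upvoted responses (threshold chosen arbitrarily).
popular = ds.filter(lambda x: x["comment_score"] >= 50)
print(popular[0]["prompt"])
print(popular[0]["body"][:200])
```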
LeoZhangzaolin/Graptoloidea-Specimens-Imaging
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - image-classification - object-detection pretty_name: GraptoloideaSpecimensDataset tags: - graptoloidea - paleontology - specimens - fossils - biology - earth-science dataset_info: features: - name: Suborder dtype: string - name: Infraorder dtype: string - name: Family (Subfamily) dtype: string - name: Genus dtype: string - name: tagged species name dtype: string - name: image dtype: string - name: Stage dtype: string - name: mean age value dtype: float64 - name: Locality (Longitude, Latitude, Horizon) dtype: string - name: Reference (specimens firstly published) dtype: string splits: - name: train num_bytes: 44749 num_examples: 977 - name: test num_bytes: 22835 num_examples: 209 - name: validation num_bytes: 22221 num_examples: 211 download_size: 87686 dataset_size: 839092 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* - split: validation path: data/validation-* --- # Dataset Card for Graptoloidea Specimens Imaging ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Summary](#dataset-summary) - [Dataset Preprocessing](#dataset-preprocessing) - [Dataset Description](#dataset-description) - [Supported Tasks](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instance](#data-instance) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Data Processing](#data-processing) - [Bias, Risks, and Limitations](#bias-risks-and-limitations) - [Citation](#citation) ## Dataset Summary This dataset offers a detailed examination of Graptoloidea specimens, featuring attributes like image file paths, suborder, infraorder, family (including subfamily), tagged species names, geological stages, mean age values, and locality details (with coordinates and horizon information), complemented by original reference citations for each specimen. It serves as a comprehensive resource for paleontological research, emphasizing morphological and stratigraphic analysis of these ancient colonial animals. ## Dataset Preprocessing This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. 
To fetch the images, use the following code (make sure you are in the correct environment):

```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib.request

import PIL.Image
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent

USER_AGENT = get_datasets_user_agent()


def fetch_single_image(image_url, timeout=None, retries=0):
    # Download a single image, returning None if all attempts fail.
    for _ in range(retries + 1):
        try:
            request = urllib.request.Request(
                image_url,
                data=None,
                headers={"user-agent": USER_AGENT},
            )
            with urllib.request.urlopen(request, timeout=timeout) as req:
                image = PIL.Image.open(io.BytesIO(req.read()))
            break
        except Exception:
            image = None
    return image


def fetch_images(batch, num_threads, timeout=None, retries=0):
    # Replace each image URL in the batch with the downloaded PIL image.
    fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image"]))
    return batch


num_threads = 20
dset = load_dataset('LeoZhangzaolin/Graptoloidea-Specimens-Imaging')
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```

## Dataset Description

### Supported Tasks

- **Paleontological Analysis and Classification**: Utilizing the dataset for detailed classification of Graptoloidea species, including sorting by suborder, infraorder, and family. Fundamental for researchers in paleontology to understand evolutionary trends and species distinctions.
- **Age Estimation and Stratigraphic Correlation**: Leveraging mean age values and stage data to estimate the geological age of specimens and correlate them with stratigraphic layers. Crucial for geologists and paleontologists in mapping the geological timeline and understanding the Earth's history.
- **Geographical Distribution Study**: Analyzing locality data to study the geographical distribution and migration patterns of Graptoloidea species. Can reveal insights into ancient ecological conditions and biogeographic events.
- **Morphological Analysis**: Using the provided specimen images for morphological studies, enabling the identification of unique features and variations within the Graptoloidea order. Important for taxonomic classification and evolutionary studies.
- **Data-Driven Paleobiology**: Applying machine learning and statistical methods to uncover patterns and relationships in Graptoloidea evolution, diversity, and extinction events.
- **Morphometric Analysis**: Employing image processing techniques to measure and analyze morphological features of the specimens, such as length, shape, branching patterns, and other key characteristics.
- **Virtual Reconstruction and 3D Modeling**: Using specimen images to create detailed 3D models of Graptoloidea for virtual reality experiences, aiding in both research and educational endeavors.
- **Educational and Outreach Tools**: Developing interactive tools and applications for educational purposes, using specimen images to engage and teach students and the public about Graptoloidea and paleontology.
- **Crowdsourcing and Citizen Science Projects**: Allowing citizen scientists to access and annotate the images, contributing to data collection and analysis efforts.
### Language

- **English**

## Dataset Structure

### Data Instance

```
{
  'Suborder': 'Axonophora Frech, 1897',
  'Infraorder': 'Neograptina Štorch & others, 2011',
  'Family (Subfamily)': 'Dimorphograptidae Elles & Wood, 1908 (no subfamily)',
  'Genus': 'Akidograptus',
  'tagged species name': 'Akidograptus ascensus',
  'image': 'https://raw.githubusercontent.com/LeoZhangzaolin/photos/main/14545Akidograptus_ascensus.jpg',
  'Stage': 'Rhuddanian, Llandovery (early Silurian)',
  'mean age value': 442.3,
  'Locality (Longitude, Latitude, Horizon)': 'Huangshu Village in Anji County, Zhejiang Province (119.676, 30.608, Lower Silurian)',
  'Reference (specimens firstly published)': 'Yang, D.Q. 1964. Some Lower Silurian graptolites from Anji, northwestern Zhejiang (Chekiang). Acta Palaeontologica Sinica, 12(4): 628-635.'
}
```

### Data Fields

- `Suborder` (string): Suborder of the graptoloidea.
- `Infraorder` (string): Infraorder of the graptoloidea.
- `Family (Subfamily)` (string): Family of the graptoloidea (with subfamily).
- `Genus` (string): Genus of the graptoloidea.
- `tagged species name` (string): The tagged species name of the graptoloidea.
- `image` (string): URL of the image file.
- `Stage` (string): The geological stage (period) of the specimen.
- `mean age value` (float): Approximate mean geological age of the specimen.
- `Locality (Longitude, Latitude, Horizon)` (str): Where the specimen was found (with longitude, latitude, and horizon).
- `Reference (specimens firstly published)` (str): The publication in which the specimen was first described.

### Data Splits

70% of the data is in the training set; 15% in the testing set; 15% in the validation set.

## Dataset Creation

### Curation Rationale

The primary objective of curating the Graptoloidea Specimens dataset is to provide a comprehensive and accessible resource for the study and analysis of Graptoloidea, an order of extinct marine colonial organisms. This dataset is intended to support a wide range of scientific endeavors, including paleobiological research, evolutionary studies, and educational purposes in the field of paleontology. By assembling high-resolution images and detailed taxonomic information, the dataset aims to facilitate in-depth investigations into the morphology and classification of these ancient organisms, contributing to a broader understanding of Earth's geological history and biodiversity.

### Source Data

https://zenodo.org/records/6194943

### Data Processing

The specific processing explanation and code are in CSV_Processing.py.

## Bias, Risks, and Limitations

- **Technological and Methodological Constraints**: Utility is tied to the current state of paleontological methodologies and technologies. Future advancements might necessitate reevaluation.
- **External Environmental Factors**: Might not fully account for the environmental conditions under which the specimens lived or were fossilized.
- **Imaging and Interpretation Limitations**: Quality and resolution of specimen images can vary, affecting morphological analyses. Interpretations based on images are subject to the observer's expertise and bias.

## Citation

DOI: [10.5281/zenodo.5205215](https://doi.org/10.5281/zenodo.5205215)
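## Label Distribution Example

A small sketch for inspecting class balance before an image-classification experiment, assuming the repository loads directly via `load_dataset` and using the `Genus` field described above:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("LeoZhangzaolin/Graptoloidea-Specimens-Imaging", split="train")

# Tally how many specimens each genus contributes.
genus_counts = Counter(row["Genus"] for row in ds)
print(genus_counts.most_common(10))
```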
kunishou/cosmopedia-100k-ja-preview
---
license: apache-2.0
language:
- ja
---

This dataset is an automatic Japanese translation of indices 20k–100k of [cosmopedia-100k](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia-100k) (records whose text was too long and caused translation errors have been excluded).

~~This dataset itself will be deleted after being merged with the 0–20k translation results that another contributor is working on.~~ The dataset will be kept as is.
PRAli22/Arabic_dialects_to_MSA
--- license: afl-3.0 language: - ar ---
chenmingxuan/Chinese-Patent-Summary
---
license: apache-2.0
task_categories:
- summarization
language:
- zh
---

A high-quality dataset of Chinese patent abstracts.
ngram/medchat-qa
---
license: mit
---

# Dataset Card for ngram MedChatQA

The MedChatQA dataset aims to be a benchmark for testing LLMs for accurate QA on real-world Medical Information and Medical Communication topics. There are several professionals in the medical field who communicate with patients and with other professionals in their field. These communications are expected to be 100% factual and free of errors. The MedChatQA dataset aims to help anyone building GenAI products in the medical vertical to test and validate their models.

This dataset consists of approximately 30,000 questions, covering about 1,000 FDA-approved human prescription drugs.

## Dataset Details

### Dataset Description

- **Curated by:** Anand Prabhu, Devadutta Ghat, Rahul Shah, Akshay Sharma, Anish Muppalaneni
- **Language(s) (NLP):** English
- **License:** MIT

### Dataset Sources

- **Repository:** https://huggingface.co/datasets/ngram/medchat-qa/
- **Paper:** Coming Soon
- **Demo:** https://ngram.com

## Dataset Structure

JSON objects separated by newlines (JSON Lines)

## Dataset Creation

### Source Data

- Synthetic, expert-generated baseline

# Warning

Since this dataset is synthetically generated, answers may be wrong. Please use caution.
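## Reading the Data

A minimal sketch for reading the newline-delimited JSON format described above. The filename is a placeholder and the field names are not specified on this card, so inspect the repository files for the actual schema:

```python
import json

records = []
# Placeholder filename -- use the actual file from the repository.
with open("medchat-qa.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line:  # skip blank lines
            records.append(json.loads(line))

print(len(records))
print(records[0].keys())  # inspect the schema of the first record
```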
Major-TOM/Core-S1RTC
---
license: cc-by-sa-4.0
tags:
- earth-observation
- remote-sensing
- sentinel-1
- sar
- synthetic-aperture-radar
- satellite
size_categories:
- 1M<n<10M
dataset_info:
- config_name: default
  features:
  - name: product_id
    dtype: string
  - name: grid_cell
    dtype: string
  - name: product_datetime
    dtype: string
  - name: thumbnail
    dtype: image
  - name: vv
    dtype: binary
  - name: vh
    dtype: binary
configs:
- config_name: default
  data_files: images/*.parquet
- config_name: metadata
  data_files: metadata.parquet
---

# Core-S1RTC

Contains a global coverage of Sentinel-1 (RTC) patches, each of size 1,068 x 1,068 pixels.

| Source | Sensing Type | Number of Patches | Patch Size | Total Pixels |
|--------|--------------|-------------------|------------|--------------|
| Sentinel-1 RTC | Synthetic Aperture Radar | 1,469,955 | 1,068 x 1,068 (10 m) | > 1.676 Trillion |

## Content

| Column | Details | Resolution |
|--------|---------|------------|
| VV | Received Linear Power in the VV Polarization | 10m |
| VH | Received Linear Power in the VH Polarization | 10m |
| thumbnail | Rescaled false colour<sup>1</sup> saved as png | 10m |

<sup>1</sup> False colour composites are made with decibel-scale values with red, green, and blue defined as ```R:VV G:VV+VH B:VH```. For each channel, a contrast-stretch is applied, transforming minimum-maximum to 0-255. This means bluer areas have relatively higher VH values, whilst brightness is a function of overall intensity. This is relative within each thumbnail because of the normalisation, and so cannot be compared across different samples.

## Spatial Coverage

This is a global monotemporal dataset. Nearly every piece of Earth captured by Sentinel-1 is contained at least once in this dataset (and only once, excluding some marginal overlaps). The coverage is about 35% lower than for the Core Sentinel-2 dataset due to the sensor coverage limitations. The following figure demonstrates the spatial coverage (only black pixels are absent):

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6304c06eeb6d777a838eab63/hFf98OyO25mL9Q962d5NL.png)

## Example Use

Interface scripts are available at https://github.com/ESA-PhiLab/Major-TOM

Here's a sneak peek with a thumbnail image:

```python
from fsspec.parquet import open_parquet_file
import pyarrow.parquet as pq
from io import BytesIO
from PIL import Image

PARQUET_FILE = 'part_03900'  # parquet number
ROW_INDEX = 42  # row number (about 500 per parquet)

url = "https://huggingface.co/datasets/Major-TOM/Core-S1RTC/resolve/main/images/{}.parquet".format(PARQUET_FILE)
with open_parquet_file(url, columns=["thumbnail"]) as f:
    with pq.ParquetFile(f) as pf:
        first_row_group = pf.read_row_group(ROW_INDEX, columns=['thumbnail'])

stream = BytesIO(first_row_group['thumbnail'][0].as_py())
image = Image.open(stream)
```

## Cite

[![arxiv](https://img.shields.io/badge/Open_Access-arxiv:2402.12095-b31b1b)](https://arxiv.org/abs/2402.12095/)

```latex
@inproceedings{Major_TOM,
  title={Major TOM: Expandable Datasets for Earth Observation},
  author={Alistair Francis and Mikolaj Czerkawski},
  year={2024},
  eprint={2402.12095},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```

Powered by [Φ-lab, European Space Agency (ESA) 🛰️](https://huggingface.co/ESA-philab)
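## Working in Decibels

The false-colour description above uses decibel-scale values. Below is a small sketch of the standard linear-power-to-dB conversion (`10 * log10(x)`), assuming `vv` is a NumPy array of linear-power values decoded from the `vv` column; the epsilon guard is an implementation choice to avoid `log10(0)`:

```python
import numpy as np

def to_db(linear_power, eps=1e-6):
    # Clamp to a small positive value so zeros don't produce -inf.
    return 10.0 * np.log10(np.maximum(linear_power, eps))

vv = np.array([[0.02, 0.5], [1.0, 0.001]])  # toy linear-power values
print(to_db(vv))
```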
ResplendentAI/NSFW_Format_Test
--- license: apache-2.0 task_categories: - text-generation language: - en tags: - not-for-all-audiences pretty_name: NSFW Format Test ---
BHOSAI/Translated_English_Wikipedia_on_Azerbaijani
---
license: cc
task_categories:
- text-generation
- translation
language:
- az
---

# Translated English Wikipedia Dataset to Azerbaijani

The Research and Development Center on AI at Baku Higher Oil School presents a unique dataset for the Azerbaijani language!

Given the lack of resources in the Azerbaijani language, the only way to get more pretraining data was to translate foreign (global) sources, providing the dataset with meaningful knowledge. However, since translating very large amounts of content is time-consuming, we decided to release a translated Wikipedia produced with well-known translation models. This means the translated text is synthetic, while the translation quality remains high.

## Where can you use this dataset

This kind of dataset is mainly used for pretraining LLMs to build foundation models.

## Source Dataset

The source of this dataset is the wiki40b repository (https://huggingface.co/datasets/wiki40b). We took the train split from the dataset, used a 1.3B translation model, and ran the translation on 4 RTX 4090 GPUs. There are 2.5M articles in the source dataset; we have translated 250k articles as v1.0.

# Collaborators

Many thanks to the team members of the Baku Higher Oil School Research and Development Center on AI!
jondurbin/bagel-v0.5
--- license: cc-by-nc-4.0 ---
cgarciae/cartoonset
---
pretty_name: Cartoon Set
size_categories:
- 10K<n<100K
task_categories:
- image
- computer-vision
- generative-modelling
license: cc-by-4.0
---

# Dataset Card for Cartoon Set

## Table of Contents
- [Dataset Card for Cartoon Set](#dataset-card-for-cartoon-set)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
      - [Usage](#usage)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://google.github.io/cartoonset/
- **Repository:** https://github.com/google/cartoonset/
- **Paper:** XGAN: Unsupervised Image-to-Image Translation for Many-to-Many Mappings
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

![Cartoon Set sample image](https://huggingface.co/datasets/cgarciae/cartoonset/resolve/main/sample.png)

[Cartoon Set](https://google.github.io/cartoonset/) is a collection of random, 2D cartoon avatar images. The cartoons vary in 10 artwork categories, 4 color categories, and 4 proportion categories, with a total of ~10^13 possible combinations. We provide sets of 10k and 100k randomly chosen cartoons and labeled attributes.

#### Usage

`cartoonset` provides the images as PNG byte strings, this gives you a bit more flexibility into how to load the data. Here we show 2 ways:

**Using PIL:**

```python
import datasets
from io import BytesIO
from PIL import Image

ds = datasets.load_dataset("cgarciae/cartoonset", "10k")  # or "100k"

def process_fn(sample):
    img = Image.open(BytesIO(sample["img_bytes"]))
    ...
    return {"img": img}

ds = ds.map(process_fn, remove_columns=["img_bytes"])
```

**Using TensorFlow:**

```python
import datasets
import tensorflow as tf

hfds = datasets.load_dataset("cgarciae/cartoonset", "10k")  # or "100k"

# Iterate over a concrete split, not the DatasetDict itself.
ds = tf.data.Dataset.from_generator(
    lambda: hfds["train"],
    output_signature={
        "img_bytes": tf.TensorSpec(shape=(), dtype=tf.string),
    },
)

def process_fn(sample):
    img = tf.image.decode_png(sample["img_bytes"], channels=3)
    ...
    return {"img": img}

ds = ds.map(process_fn)
```

**Additional features:**

You can also access the features that generated each sample, e.g.:

```python
ds = datasets.load_dataset("cgarciae/cartoonset", "10k+features")  # or "100k+features"
```

Apart from `img_bytes` these configurations add a total of 18 * 2 additional `int` features; these come in `{feature}`, `{feature}_num_categories` pairs where `num_categories` indicates the number of categories for that feature. See [Data Fields](#data-fields) for the complete list of features.

## Dataset Structure

### Data Instances

A sample from the training set is provided below:

```python
{
    'img_bytes': b'0x...',
}
```

If `+features` is added to the dataset name, the following additional fields are provided:

```python
{
    'img_bytes': b'0x...',
    'eye_angle': 0,
    'eye_angle_num_categories': 3,
    'eye_lashes': 0,
    'eye_lashes_num_categories': 2,
    'eye_lid': 0,
    'eye_lid_num_categories': 2,
    'chin_length': 2,
    'chin_length_num_categories': 3,
    ...
}
```

### Data Fields

- `img_bytes`: A byte string containing the raw data of a 500x500 PNG image.
If `+features` is appended to the dataset name, the following additional `int32` fields are provided:

- `eye_angle`
- `eye_angle_num_categories`
- `eye_lashes`
- `eye_lashes_num_categories`
- `eye_lid`
- `eye_lid_num_categories`
- `chin_length`
- `chin_length_num_categories`
- `eyebrow_weight`
- `eyebrow_weight_num_categories`
- `eyebrow_shape`
- `eyebrow_shape_num_categories`
- `eyebrow_thickness`
- `eyebrow_thickness_num_categories`
- `face_shape`
- `face_shape_num_categories`
- `facial_hair`
- `facial_hair_num_categories`
- `hair`
- `hair_num_categories`
- `eye_color`
- `eye_color_num_categories`
- `face_color`
- `face_color_num_categories`
- `hair_color`
- `hair_color_num_categories`
- `glasses`
- `glasses_num_categories`
- `glasses_color`
- `glasses_color_num_categories`
- `eye_slant`
- `eye_slant_num_categories`
- `eyebrow_width`
- `eyebrow_width_num_categories`
- `eye_eyebrow_distance`
- `eye_eyebrow_distance_num_categories`

### Data Splits

Train

## Dataset Creation

### Licensing Information

This data is licensed by Google LLC under a Creative Commons Attribution 4.0 International License.

### Citation Information

```
@article{DBLP:journals/corr/abs-1711-05139,
  author    = {Amelie Royer and Konstantinos Bousmalis and Stephan Gouws and Fred Bertsch and Inbar Mosseri and Forrester Cole and Kevin Murphy},
  title     = {{XGAN:} Unsupervised Image-to-Image Translation for many-to-many Mappings},
  journal   = {CoRR},
  volume    = {abs/1711.05139},
  year      = {2017},
  url       = {http://arxiv.org/abs/1711.05139},
  eprinttype = {arXiv},
  eprint    = {1711.05139},
  timestamp = {Mon, 13 Aug 2018 16:47:38 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1711-05139.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
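### Working with Feature Categories

A small sketch showing how the `{feature}` / `{feature}_num_categories` pairs can be turned into one-hot vectors; the sample dict is a toy stand-in for a real `+features` example:

```python
import numpy as np

sample = {"eye_angle": 0, "eye_angle_num_categories": 3}  # toy +features sample

# Index into an identity matrix to one-hot encode the categorical value.
one_hot = np.eye(sample["eye_angle_num_categories"])[sample["eye_angle"]]
print(one_hot)  # [1. 0. 0.]
```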
ManRo/Sexism_Twitter_MeTwo
---
license: apache-2.0
---

The dataset was built on 2022/03/29 to help improve the representation of the Spanish language in NLP tasks on the Hugging Face platform.

The dataset contains 2,471 tweets obtained via their tweet IDs. It considers the following columns:

- Column 1 (Status_id): The unique identification number of the tweet in the social network.
- Column 2 (text): The text (in Spanish) linked to the corresponding "Status_id", which is used to perform the sexism analysis.
- Column 3 (Category): The classification assigned when analyzing the text (in Spanish), considering three categories: SEXIST, NON_SEXIST, DOUBTFUL.

The dataset was built thanks to the previous work of F. Rodríguez-Sánchez, J. Carrillo-de-Albornoz, and L. Plaza on the MeTwo Machismo and Sexism Twitter Identification dataset.

For more information on the categorization process, check: https://ieeexplore.ieee.org/document/9281090
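A quick sketch for inspecting the label distribution, assuming the dataset loads directly with `load_dataset` and that the column is named `Category` as described above:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("ManRo/Sexism_Twitter_MeTwo", split="train")
print(Counter(ds["Category"]))  # counts for SEXIST / NON_SEXIST / DOUBTFUL
```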
mweiss/fashion_mnist_corrupted
---
annotations_creators:
- expert-generated
- machine-generated
language_creators:
- machine-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: fashion-mnist-corrupted
size_categories:
- 10K<n<100K
source_datasets:
- extended|fashion_mnist
task_categories:
- image-classification
task_ids: []
---

# Fashion-Mnist-C (Corrupted Fashion-Mnist)

A corrupted Fashion-MNIST benchmark for testing out-of-distribution robustness of computer vision models trained on Fashion-MNIST.

[Fashion-Mnist](https://github.com/zalandoresearch/fashion-mnist) is a drop-in replacement for MNIST, and Fashion-Mnist-C is a corresponding drop-in replacement for [MNIST-C](https://arxiv.org/abs/1906.02337).

## Corruptions

The following corruptions are applied to the images, equivalently to MNIST-C:

- **Noise** (shot noise and impulse noise)
- **Blur** (glass and motion blur)
- **Transformations** (shear, scale, rotate, brightness, contrast, saturate, inverse)

In addition, we apply various **image flippings and turnings**: For fashion images, flipping the image does not change its label, and still keeps it a valid image. However, we noticed that in the nominal fmnist dataset, most images are identically oriented (e.g. most shoes point to the left side). Thus, flipped images provide valid OOD inputs.

Most corruptions are applied at a randomly selected level of *severity*, such that some corrupted images are really hard to classify whereas for others the corruption, while present, is subtle.

## Examples

| Turned | Blurred | Rotated | Noise | Noise | Turned |
| ------------- | ------------- | --------| --------- | -------- | --------- |
| <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_0.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_1.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_6.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_3.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_4.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_5.png" width="100" height="100"> |

## Citation

If you use this dataset, please cite the following paper:

```
@inproceedings{Weiss2022SimpleTechniques,
  title={Simple Techniques Work Surprisingly Well for Neural Network Test Prioritization and Active Learning},
  author={Weiss, Michael and Tonella, Paolo},
  booktitle={Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis},
  year={2022}
}
```

Also, you may want to cite FMNIST and MNIST-C.

## Credits

- Fashion-Mnist-C is inspired by Google's MNIST-C and our repository is essentially a clone of theirs. See their [paper](https://arxiv.org/abs/1906.02337) and [repo](https://github.com/google-research/mnist-c).
- Find the nominal (i.e., non-corrupted) Fashion-MNIST dataset [here](https://github.com/zalandoresearch/fashion-mnist).
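## Usage

A minimal loading sketch; the split names are an assumption, so check the repository for the exact ones:

```python
from datasets import load_dataset

fmnist_c = load_dataset("mweiss/fashion_mnist_corrupted")
print(fmnist_c)  # lists the available splits and features

example = fmnist_c["test"][0]  # assumes a "test" split exists
print(example.keys())
```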
sil-ai/bloom-vist
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - afr - af - aaa - abc - ada - adq - aeu - agq - ags - ahk - aia - ajz - aka - ak - ame - amh - am - amp - amu - ann - aph - awa - awb - azn - azo - bag - bam - bm - baw - bax - bbk - bcc - bce - bec - bef - ben - bn - bfd - bfm - bfn - bgf - bho - bhs - bis - bi - bjn - bjr - bkc - bkh - bkm - bkx - bob - bod - bo - boz - bqm - bra - brb - bri - brv - bss - bud - buo - bwt - bwx - bxa - bya - bze - bzi - cak - cbr - ceb - cgc - chd - chp - cim - clo - cmn - zh - cmo - csw - cuh - cuv - dag - ddg - ded - deu - de - dig - dje - dmg - dnw - dtp - dtr - dty - dug - eee - ekm - enb - enc - eng - en - ewo - fas - fa - fil - fli - fon - fra - fr - fub - fuh - gal - gbj - gou - gsw - guc - guj - gu - guz - gwc - hao - hat - ht - hau - ha - hbb - hig - hil - hin - hi - hla - hna - hre - hro - idt - ilo - ind - id - ino - isu - ita - it - jgo - jmx - jpn - ja - jra - kak - kam - kan - kn - kau - kr - kbq - kbx - kby - kek - ken - khb - khm - km - kik - ki - kin - rw - kir - ky - kjb - kmg - kmr - ku - kms - kmu - kor - ko - kqr - krr - ksw - kur - ku - kvt - kwd - kwu - kwx - kxp - kyq - laj - lan - lao - lo - lbr - lfa - lgg - lgr - lhm - lhu - lkb - llg - lmp - lns - loh - lsi - lts - lug - lg - luy - lwl - mai - mal - ml - mam - mar - mr - mdr - mfh - mfj - mgg - mgm - mgo - mgq - mhx - miy - mkz - mle - mlk - mlw - mmu - mne - mnf - mnw - mot - mqj - mrn - mry - msb - muv - mve - mxu - mya - my - myk - myx - mzm - nas - nco - nep - ne - new - nge - ngn - nhx - njy - nla - nld - nl - nlv - nod - nsk - nsn - nso - nst - nuj - nwe - nwi - nxa - nxl - nya - ny - nyo - nyu - nza - odk - oji - oj - oki - omw - ori - or - ozm - pae - pag - pan - pa - pbt - pce - pcg - pdu - pea - pex - pis - pkb - pmf - pnz - por - pt - psp - pwg - qaa - qub - quc - quf - quz - qve - qvh - qvm - qvo - qxh - rel - rnl - ron - ro - roo - rue - rug - rus - ru - san - sa - saq - sat - sdk - sea - sgd - shn - sml - snk - snl - som - so - sot - st - sox - spa - es - sps - ssn - stk - swa - sw - swh - sxb - syw - taj - tam - ta - tbj - tdb - tdg - tdt - teo - tet - tgk - tg - tha - th - the - thk - thl - thy - tio - tkd - tnl - tnn - tnp - tnt - tod - tom - tpi - tpl - tpu - tsb - tsn - tn - tso - ts - tuv - tuz - tvs - udg - unr - urd - ur - uzb - uz - ven - ve - vie - vi - vif - war - wbm - wbr - wms - wni - wnk - wtk - xho - xh - xkg - xmd - xmg - xmm - xog - xty - yas - yav - ybb - ybh - ybi - ydd - yea - yet - yid - yi - yin - ymp - zaw - zho - zh - zlm - zuh - zul - zu license: - cc-by-4.0 - cc-by-nc-4.0 - cc-by-nd-4.0 - cc-by-sa-4.0 - cc-by-nc-nd-4.0 - cc-by-nc-sa-4.0 multilinguality: - multilingual size_categories: - 10K<n<100K source_datasets: - original paperswithcode_id: null pretty_name: BloomVIST extra_gated_prompt: |- One more step before getting this dataset. This dataset is open access and available only for non-commercial use (except for portions of the dataset labeled with a `cc-by-sa` license). A "license" field paired with each of the dataset entries/samples specifies the Creative Commons license for that entry/sample. These [Creative Commons licenses](https://creativecommons.org/about/cclicenses/) specify that: 1. You cannot use the dataset for or directed toward commercial advantage or monetary compensation (except for those portions of the dataset labeled specifically with a `cc-by-sa` license. If you would like to ask about commercial uses of this dataset, please [email us](mailto:sj@derivation.co). 2. 
Any public, non-commercial use of the data must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. 3. For those portions of the dataset marked with an ND license, you cannot remix, transform, or build upon the material, and you may not distribute modified material. In addition to the above implied by Creative Commons and when clicking "Access Repository" below, you agree: 1. Not to use the dataset for any use intended to or which has the effect of harming or enabling discrimination against individuals or groups based on legally protected characteristics or categories, including but not limited to discrimination against Indigenous People as outlined in Articles 2; 13-16; and 31 of the United Nations Declaration on the Rights of Indigenous People, 13 September 2007 and as subsequently amended and revised. 2. That your *contact information* (email address and username) can be shared with the model authors as well. extra_gated_fields: I have read the License and agree with its terms: checkbox --- ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) <!-- - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) --> ## Dataset Description - **Homepage:** [SIL AI](https://ai.sil.org/) - **Point of Contact:** [SIL AI email](mailto:idx_aqua@sil.org) - **Source Data:** [Bloom Library](https://bloomlibrary.org/) ![logo for Bloom Library](https://bloom-vist.s3.amazonaws.com/bloom_logo.png) ![sil-ai logo](https://s3.amazonaws.com/moonup/production/uploads/1661440873726-6108057a823007eaf0c7bd10.png) ## Dataset Summary **Bloom** is free, open-source software and an associated website [Bloom Library](https://bloomlibrary.org/), app, and services developed by [SIL International](https://www.sil.org/). Bloom’s primary goal is to equip non-dominant language communities and their members to create the literature they want for their community and children. Bloom also serves organizations that help such communities develop literature and education or other aspects of community development. This version of the Bloom Library data is developed specifically for the visual story telling (or VIST) task. It includes data from 364 languages across 31 language families. There is a mean of 32 stories and median of 2 stories per language. **Note**: If you speak one of these languages and can help provide feedback or corrections, please let us know! **Note**: Although this data was used in the training of the [BLOOM model](https://huggingface.co/bigscience/bloom), this dataset only represents a small portion of the data used to train that model. 
Data from "Bloom Library" was combined with a large number of other datasets to train that model. "Bloom Library" is a project that existed prior to the BLOOM model, and is something separate. All that to say... We were using the "Bloom" name before it was cool. 😉 ## Languages Of the 500+ languages listed at BloomLibrary.org, there are 363 languages available in this dataset. Here are the corresponding ISO 639-3 codes: aaa, abc, ada, adq, aeu, afr, agq, ags, ahk, aia, ajz, aka, ame, amh, amp, amu, ann, aph, awa, awb, azn, azo, bag, bam, baw, bax, bbk, bcc, bce, bec, bef, ben, bfd, bfm, bfn, bgf, bho, bhs, bis, bjn, bjr, bkc, bkh, bkm, bkx, bob, bod, boz, bqm, bra, brb, bri, brv, bss, bud, buo, bwt, bwx, bxa, bya, bze, bzi, cak, cbr, ceb, cgc, chd, chp, cim, clo, cmn, cmo, csw, cuh, cuv, dag, ddg, ded, deu, dig, dje, dmg, dnw, dtp, dtr, dty, dug, eee, ekm, enb, enc, eng, ewo, fas, fil, fli, fon, fra, fub, fuh, gal, gbj, gou, gsw, guc, guj, guz, gwc, hao, hat, hau, hbb, hig, hil, hin, hla, hna, hre, hro, idt, ilo, ind, ino, isu, ita, jgo, jmx, jpn, jra, kak, kam, kan, kau, kbq, kbx, kby, kek, ken, khb, khm, kik, kin, kir, kjb, kmg, kmr, kms, kmu, kor, kqr, krr, ksw, kur, kvt, kwd, kwu, kwx, kxp, kyq, laj, lan, lao, lbr, lfa, lgg, lgr, lhm, lhu, lkb, llg, lmp, lns, loh, lsi, lts, lug, luy, lwl, mai, mal, mam, mar, mdr, mfh, mfj, mgg, mgm, mgo, mgq, mhx, miy, mkz, mle, mlk, mlw, mmu, mne, mnf, mnw, mot, mqj, mrn, mry, msb, muv, mve, mxu, mya, myk, myx, mzm, nas, nco, nep, new, nge, ngn, nhx, njy, nla, nld, nlv, nod, nsk, nsn, nso, nst, nuj, nwe, nwi, nxa, nxl, nya, nyo, nyu, nza, odk, oji, oki, omw, ori, ozm, pae, pag, pan, pbt, pce, pcg, pdu, pea, pex, pis, pkb, pmf, pnz, por, psp, pwg, qub, quc, quf, quz, qve, qvh, qvm, qvo, qxh, rel, rnl, ron, roo, rue, rug, rus, san, saq, sat, sdk, sea, sgd, shn, sml, snk, snl, som, sot, sox, spa, sps, ssn, stk, swa, swh, sxb, syw, taj, tam, tbj, tdb, tdg, tdt, teo, tet, tgk, tha, the, thk, thl, thy, tio, tkd, tnl, tnn, tnp, tnt, tod, tom, tpi, tpl, tpu, tsb, tsn, tso, tuv, tuz, tvs, udg, unr, urd, uzb, ven, vie, vif, war, wbm, wbr, wms, wni, wnk, wtk, xho, xkg, xmd, xmg, xmm, xog, xty, yas, yav, ybb, ybh, ybi, ydd, yea, yet, yid, yin, ymp, zaw, zho, zlm, zuh, zul ## Dataset Statistics Some of the languages included in the dataset just include 1 or a couple of "stories." 
For those with higher numbers of available stories we include the following numbers of stories: | ISO639-3 Code | Stories | Image-Caption Pairs | |:-----------|----------:|----------------------:| | ahk | 55 | 493 | | awa | 163 | 1200 | | ben | 220 | 1938 | | bho | 172 | 1163 | | bis | 21 | 183 | | brb | 22 | 330 | | bzi | 66 | 497 | | cak | 50 | 694 | | ceb | 394 | 2806 | | cgc | 182 | 1473 | | deu | 22 | 250 | | dty | 172 | 1310 | | eng | 2187 | 24338 | | fas | 128 | 620 | | fil | 34 | 366 | | fra | 315 | 4350 | | hat | 224 | 1881 | | hau | 229 | 1594 | | ind | 232 | 1866 | | jra | 56 | 575 | | kak | 195 | 1416 | | kek | 21 | 419 | | khb | 31 | 167 | | khm | 26 | 246 | | kir | 278 | 2866 | | kjb | 63 | 584 | | kor | 129 | 2732 | | krr | 29 | 362 | | lsi | 22 | 173 | | mai | 177 | 1186 | | mam | 118 | 1058 | | mhx | 51 | 544 | | myk | 22 | 214 | | nep | 194 | 1464 | | new | 177 | 1225 | | pbt | 203 | 979 | | por | 148 | 2939 | | quc | 99 | 817 | | rus | 271 | 2977 | | snk | 21 | 210 | | spa | 444 | 5201 | | swh | 34 | 387 | | tdg | 31 | 231 | | tha | 275 | 2929 | | thl | 185 | 1464 | | tpi | 137 | 1528 | | tpu | 28 | 513 | | zho | 42 | 339 | ## Dataset Structure ### Data Instances The examples look like this for Hindi: ``` from datasets import load_dataset # Specify the language code. dataset = load_dataset("sil-ai/bloom-vist", 'hin') # An individual samples consists of stories in the specified language code. # To see a story: print(dataset['train'][0]['story']) ``` This would produce an output: ``` {'image_id': ['4e9bdde5-996d-4a98-ac1c-d80fb6349314', '614e4d51-bbdb-4538-98d3-f603c12dccd0', '970d60bf-2acb-44ac-8ffb-5aa3f7989630', 'd4ad1199-863e-4929-a377-93276fe5caa8', '0d9ad694-995a-433d-af4e-6f40ddfa208a', '811176eb-c9f3-4226-8af5-e6c4e524c494', '83180da7-4ba8-4104-a0d9-49aa2ef48f7a'], 'image_url': ['https://bloom-vist.s3.amazonaws.com/Saboo+and+Jojo/M_PB_2_-saboo-and-jojo_Page_03_Image_00011.png', 'https://bloom-vist.s3.amazonaws.com/Saboo+and+Jojo/M_PB_2_-saboo-and-jojo_Page_04_Image_0001.png', 'https://bloom-vist.s3.amazonaws.com/Saboo+and+Jojo/M_PB_2_-saboo-and-jojo_Page_05_Image_0001.png', 'https://bloom-vist.s3.amazonaws.com/Saboo+and+Jojo/M_PB_2_-saboo-and-jojo_Page_06_Image_0001.png', 'https://bloom-vist.s3.amazonaws.com/Saboo+and+Jojo/M_PB_2_-saboo-and-jojo_Page_07_Image_0001.png', 'https://bloom-vist.s3.amazonaws.com/Saboo+and+Jojo/M_PB_2_-saboo-and-jojo_Page_07_Image_00011.png', 'https://bloom-vist.s3.amazonaws.com/Saboo+and+Jojo/M_PB_2_-saboo-and-jojo_Page_09_Image_0001.png'], 'story_index': [0, 1, 2, 3, 4, 5, 6], 'story_id': ['cc34c1c7-c086-491b-8e6a-65572e1efdb6', 'cc34c1c7-c086-491b-8e6a-65572e1efdb6', 'cc34c1c7-c086-491b-8e6a-65572e1efdb6', 'cc34c1c7-c086-491b-8e6a-65572e1efdb6', 'cc34c1c7-c086-491b-8e6a-65572e1efdb6', 'cc34c1c7-c086-491b-8e6a-65572e1efdb6', 'cc34c1c7-c086-491b-8e6a-65572e1efdb6'], 'text': ['साबू ने एक कंकड़ को ठोकर मारी। कंकड़ लुढ़कता हुआ एक पेड़ के पास पहुँचा। पेड़ के तने पर मुलायम बाल थे। साबू ने छुए और ऊपर देखा, ऊपर, ऊपर और उससे भी ऊपर...दो आँखें नीचे देख रही थीं।', '“हेलो, तुम कौन हो?” साबू को बड़ा अचम्भा हुआ।“हेलो, मैं जिराफ़ हूँ। मेरा नाम है जोजो। \xa0मैं तुम्हारे साथ खेल सकता हूँ। मेरी पीठ पर चढ़ जाओ, मैं तुम्हें घुमा के लाता हूँ।”', 'साबू जोजो की पीठ पर चढ़ गया और वे सड़क पर चल निकले। फिर पहाड़ी पर और शहर के बीचों बीच।\nसाबू खुशी से चिल्लाया, “जोजो दाएँ मुड़ो,\n बाएँ मुड़ो और फिर दाएँ।” अब वे उसकी दोस्त मुन्नी के घर पहुँच गये।', 'आज मुन्नी का जन्मदिन था। साबू को जोजो पर सवारी करते देख बच्चों ने ताली बजायी।\xa0\n जोजो ने गुब्बारे लटकाने में 
आन्टी की मदद करी क्योंकि वह इतना... लम्बा था।\xa0\n कितना आसान था!', 'जोजो ने सब बच्चों को सवारी कराई।\n उनके साथ बॉल भी खेली। बड़े मज़े की पार्टी थी।सब ने गाया, “हैप्पी बर्थ डे टु यू ।”\n आन्टी ने मेज़ पर समोसे, गुलाब जामुन और आइसक्रीम सजाई।', 'जोजो को आइसक्रीम बहुत पसन्द आई। अंकल उसके लिये एक बाल्टी भर के आइसक्रीम लाये। जोजो ने पूरी बाल्टी ख़त्म कर दी। \xa0अब घर जाने का समय हो गया।\n\nसब ने कहा, “बाय बाय जोजो, बाय बाय साबू।” साबू और जोजो घर लौटे।', '']} ``` ### Data Fields The metadata fields below are available. In terms of licenses, all stories included in the current release are released under a Creative Commons license (even if the individual story metadata fields are missing). - **id**: id of the sample - **title**: title of the book, e.g. "Going to Buy a Book". - **license**: specific license used, e.g. "cc-by-sa" for "Creative Commons, by attribution, share-alike". - **album_id**: an ID value corresponding to the set of images corresponding to the given story - **story**: the sequenced story data including lists of image IDs, image URLs, and corresponding text ### Data Splits Currently all languages include a train split only. In the future, we will be creating manual splits of the data. ## Changelog - **6 December 2022** - dataset is made public
Aniemore/cedr-m7
--- annotations_creators: - found language_creators: - found language: - ru license: mit multilinguality: - monolingual pretty_name: cedr-m7 size_categories: - 1K<n<10K source_datasets: - extended|cedr task_categories: - text-classification task_ids: - sentiment-classification --- # Dataset Card for CEDR-M7 ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @misc{Aniemore, author = {Артем Аментес, Илья Лубенец, Никита Давидчук}, title = {Открытая библиотека искусственного интеллекта для анализа и выявления эмоциональных оттенков речи человека}, year = {2022}, publisher = {Hugging Face}, journal = {Hugging Face Hub}, howpublished = {\url{https://huggingface.com/aniemore/Aniemore}}, email = {hello@socialcode.ru} } ``` ### Contributions Thanks to [@toiletsandpaper](https://github.com/toiletsandpaper) for adding this dataset.
codeparrot/github-jupyter
---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- other
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
---

# GitHub Jupyter Dataset

## Dataset Description

The dataset was extracted from Jupyter Notebooks on BigQuery.

## Licenses

Each example has the license of its associated repository. There are 15 licenses in total:

```python
[
  'mit',
  'apache-2.0',
  'gpl-3.0',
  'gpl-2.0',
  'bsd-3-clause',
  'agpl-3.0',
  'lgpl-3.0',
  'lgpl-2.1',
  'bsd-2-clause',
  'cc0-1.0',
  'epl-1.0',
  'mpl-2.0',
  'unlicense',
  'isc',
  'artistic-2.0'
]
```
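## Usage

A sketch for keeping only permissively licensed examples. The `license` column name is inferred from the description above ("each example has the license of its associated repository"), so verify it against the actual schema:

```python
from datasets import load_dataset

ds = load_dataset("codeparrot/github-jupyter", split="train")

# Keep examples under a permissive license (subset chosen as an example).
permissive = ds.filter(lambda x: x["license"] in {"mit", "apache-2.0", "bsd-3-clause"})
print(len(permissive))
```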
yongjian/music-clips-50
---
pretty_name: music-clips-50
multilinguality:
- other-music
language:
- en
- zh
---

There are 50 music clips (of 3–5 seconds each). You can load them with the following code:

```python
from datasets import load_dataset

dataset = load_dataset('yongjian/music-clips-50')
clips = dataset['train']  # all 50 music clips
music_1_np_array = clips[0]['audio']['array']  # numpy array of shape=[N,]
```

Or you can directly download them from Google Drive: [music-clips-50.tar.gz](https://drive.google.com/file/d/154y_Z9p1Sfhrwzj7jc46UMbTaAmI17AT/view?usp=sharing).
kietzmannlab/ecoset
---
license: cc
source_datasets:
- original
task_categories:
- image-classification
task_ids:
- multi-class-classification
- multi-class-image-classification
paperswithcode_id: ecoset
pretty_name: Ecoset
tags:
- other-image-classification
- image-classification
---

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Installation](#installation)
  - [Install requirements](#install-requirements)
  - [Download settings](#download-settings)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://www.kietzmannlab.org/ecoset](https://www.kietzmannlab.org/ecoset/)
- **Repository:** [https://codeocean.com/capsule/6266601/tree/v1](https://codeocean.com/capsule/6266601/tree/v1)
- **Paper:** [https://www.pnas.org/doi/full/10.1073/pnas.2011417118](https://doi.org/10.1073/pnas.2011417118)
- **Point of Contact:** [tim.kietzmann@uni-osnabrueck.de](mailto:tim.kietzmann@uni-osnabrueck.de)

### Dataset Summary

Tired of all the dogs in ImageNet (ILSVRC)? Then ecoset is here for you. 1.5m images from 565 basic level categories, chosen to be both (i) frequent in linguistic usage, and (ii) rated by human observers as concrete (e.g. ‘table’ is concrete, ‘romance’ is not).

Ecoset is a typical image recognition dataset, combining images of objects with appropriate labels (one label per image). Importantly, ecoset is intended to provide higher ecological validity than its counterparts, with a mislabelling error rate < 5% and filtered for NSFW content. For more information on the dataset, consider reading the [original publication](https://doi.org/10.1073/pnas.2011417118). Ecoset consists of a train, test, and validation subset which all are openly available to the user.

### Supported Tasks and Leaderboards

Ecoset is a large multi-class single-label object recognition image dataset (similar to ImageNet).

## Installation

### Install Requirements

In order to work with ecoset, please make sure to install huggingface datasets:

```bash
pip install datasets
```

If you want to work with the dataset in `Huggingface.datasets`, you might also want to make sure to install PIL (`pip install Pillow`) in order to work with image input. However, downloading the dataset will work despite not having installed PIL.

### Download Settings

Please set `verification_mode="no_checks"` when downloading this dataset, or the download will result in an error.
Additionally, you may need to install `defusedxml` via pip to avoid permission errors raised by the `_generate_examples` method:

```python
from datasets import load_dataset

dataset = load_dataset("kietzmannlab/ecoset", verification_mode="no_checks")
```

Optionally, a `cache_dir` can be specified, where the zip file will be downloaded and extracted:

```python
from datasets import load_dataset

dataset = load_dataset("kietzmannlab/ecoset", verification_mode="no_checks", cache_dir='/path/to/dir')
```

| NOTE: If you get errors like: `FileNotFoundError: [Errno 2] No such file or directory:'<DATASET_PATH>'` this is likely due to having previously downloaded the dataset and then cancelling the download. If this is the case for you, you can fix this error by manually removing the dataset path and reinstalling the dataset. |
| --- |

## Dataset Structure

We show detailed information for all the configurations of the dataset. Currently, there is only one setting (`Full`) available, containing all data.

### Data Instances

#### Full

- **Size of downloaded dataset files:** 155 GB
- **Total amount of disk used:** 311 GB

## Dataset Creation

A total of 565 categories were selected based on the following: 1) their word frequency in American television and film subtitles (SUBTLEX_US), 2) the perceived concreteness by human observers, and 3) the availability of a minimum of 700 images. Images were sourced via the overall ImageNet database (the same resource used for ILSVRC 2012) or obtained under CC BY-NC-SA 2.0 license from Bing image search and Flickr. Thorough data cleaning procedures were put in place to remove duplicates and to assure an expected misclassification rate per category of <4%.

### Curation Rationale

More information on the curation of the dataset can be found in the [original publication](https://doi.org/10.1073/pnas.2011417118).

### Source Data

The source data is available under: [https://codeocean.com/capsule/6266601/tree/v1](https://codeocean.com/capsule/6266601/tree/v1)

### Annotations

Each ecoset image folder is annotated with class labels according to the main object depicted in a class of images. No further annotations are added to the dataset.

### Personal and Sensitive Information

The dataset was tested to exclude sensitive images using Yahoo's Open NSFW detection model, removing all images with an NSFW score above 0.8. For this dataset, only images with secured license information were used, which should prevent the inclusion of images without consent of the images' authors and subjects. Despite these measures, it is possible that the images in the dataset contain personal and sensitive information.

## Considerations for Using the Data

### Social Impact of Dataset

Large-scale image-label datasets such as ImageNet are the backbone of modern Computer Vision. However, such large datasets often suffer from problems like mislabeling, category biases, misrepresentations, and unsafe content. Ecoset was created with the aim to reduce these biases and consequently improve the social impact of Computer Vision techniques trained on the dataset. More information on the social impact of the dataset can be found in the [original publication](https://doi.org/10.1073/pnas.2011417118).

### Discussion of Biases

Despite best efforts to provide an ecologically valid and overall less biased dataset, ecoset is still likely to contain biased data.
The category selection of ecoset was based on human concreteness ratings and word frequencies in a corpus consisting of American television and film subtitles. This undoubtedly biases the category selection toward Western cultures. Image inclusion was based on the availability via Bing/Flickr search results as well as the existence of relevant ImageNet categories. Images depicting people, specifically the categories “man,” “woman,” and “child,” were not sampled according to census distributions (age, ethnicity, gender, etc.).

### Other Known Limitations

In addition to the points mentioned in [Discussion of Biases](#discussion-of-biases), ecoset image and category distributions do not reflect the naturalistic, egocentric visual input typically encountered in the everyday life of infants and adults.

## Additional Information

### Dataset Curators

The corpus was put together by Johannes Mehrer, Courtney J. Spoerer, Emer C. Jones, Nikolaus Kriegeskorte, and Tim C. Kietzmann.

### Licensing Information

Ecoset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 2.0 license (cc-by-nc-sa-2.0).

### Citation Information

```
@article{mehrer2021ecologically,
  title={An ecologically motivated image dataset for deep learning yields better models of human vision},
  author={Mehrer, Johannes and Spoerer, Courtney J and Jones, Emer C and Kriegeskorte, Nikolaus and Kietzmann, Tim C},
  journal={Proceedings of the National Academy of Sciences},
  volume={118},
  number={8},
  pages={e2011417118},
  year={2021},
  publisher={National Acad Sciences}
}
```

### Contributions

The ecoset dataloader and dataset card were created by [@DiGyt](https://github.com/DiGyt) on behalf of [@kietzmannlab](https://huggingface.co/kietzmannlab). For questions and suggestions feel free to reach out.
bigscience/xP3megds
---
annotations_creators:
- expert-generated
- crowdsourced
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: xP3
size_categories:
- 100M<n<1B
task_categories:
- other
---

# Dataset Card for xP3

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Repository:** https://github.com/bigscience-workshop/xmtf
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)

### Dataset Summary

> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.

- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). We provide this version to save processing time and ease reproducibility.
- **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3))
- **xP3 Dataset Family:**

<table>
<tr>
<th>Name</th>
<th>Explanation</th>
<th>Example models</th>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></td>
<td>Mixture of 17 tasks in 277 languages with English prompts</td>
<td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></td>
<td>Mixture of 13 training tasks in 46 languages with English prompts</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></td>
<td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></td>
<td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td>
<td></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></td>
<td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></td>
<td>Repreprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
</tr>
</table>

## Dataset Structure

### Data Instances

An example of "train" looks as follows:

```json
{
  "inputs": "Sentence 1: Fue académico en literatura metafísica, teología y ciencias clásicas.\nSentence 2: Fue académico en literatura metafísica, teología y ciencia clásica.\nQuestion: Can we rewrite Sentence 1 to Sentence 2? Yes or No?",
  "targets": "Yes"
}
```

### Data Fields

The data fields are the same among all splits:

- `inputs`: the natural language input fed to the model
- `targets`: the natural language target that the model has to generate

### Data Splits

The below table summarizes sizes per language (computed from the `merged_{lang}.jsonl` files). Due to languages like `tw` only being single sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage.
|Language|Kilobytes|%|Samples|%|
|--------|------:|-:|---:|-:|
|tw|106288|0.11|265071|0.34|
|bm|107056|0.11|265180|0.34|
|ak|108096|0.11|265071|0.34|
|eu|108112|0.11|269973|0.34|
|ca|110608|0.12|271191|0.34|
|fon|113072|0.12|265063|0.34|
|st|114080|0.12|265063|0.34|
|ki|115040|0.12|265180|0.34|
|tum|116032|0.12|265063|0.34|
|wo|122560|0.13|365063|0.46|
|ln|126304|0.13|365060|0.46|
|as|156256|0.16|265063|0.34|
|or|161472|0.17|265063|0.34|
|kn|165456|0.17|265063|0.34|
|ml|175040|0.18|265864|0.34|
|rn|192992|0.2|318189|0.4|
|nso|229712|0.24|915051|1.16|
|tn|235536|0.25|915054|1.16|
|lg|235936|0.25|915021|1.16|
|rw|249360|0.26|915043|1.16|
|ts|250256|0.26|915044|1.16|
|sn|252496|0.27|865056|1.1|
|xh|254672|0.27|915058|1.16|
|zu|263712|0.28|915061|1.16|
|ny|272128|0.29|915063|1.16|
|ig|325232|0.34|950097|1.2|
|yo|352784|0.37|918416|1.16|
|ne|393680|0.41|315754|0.4|
|pa|523248|0.55|339210|0.43|
|gu|560688|0.59|347499|0.44|
|sw|560896|0.59|1114455|1.41|
|mr|666240|0.7|417269|0.53|
|bn|832720|0.88|428843|0.54|
|ta|924496|0.97|410633|0.52|
|te|1332912|1.4|573364|0.73|
|ur|1918272|2.02|855756|1.08|
|vi|3101408|3.27|1667306|2.11|
|code|4330752|4.56|2707724|3.43|
|hi|4393696|4.63|1543441|1.96|
|zh|4589904|4.83|3560556|4.51|
|id|4606288|4.85|2627392|3.33|
|ar|4677264|4.93|2148955|2.72|
|fr|5546688|5.84|5055942|6.41|
|pt|6129584|6.46|3562772|4.52|
|es|7571808|7.98|5151349|6.53|
|en|37261104|39.25|31495184|39.93|
|total|94941936|100.0|78883588|100.0|

## Dataset Creation

### Source Data

#### Training datasets

- Code Miscellaneous
  - [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex)
  - [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus)
  - [GreatCode](https://huggingface.co/datasets/great_code)
  - [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes)
- Closed-book QA
  - [Hotpot QA](https://huggingface.co/datasets/hotpot_qa)
  - [Trivia QA](https://huggingface.co/datasets/trivia_qa)
  - [Web Questions](https://huggingface.co/datasets/web_questions)
  - [Wiki QA](https://huggingface.co/datasets/wiki_qa)
- Extractive QA
  - [Adversarial QA](https://huggingface.co/datasets/adversarial_qa)
  - [CMRC2018](https://huggingface.co/datasets/cmrc2018)
  - [DRCD](https://huggingface.co/datasets/clue)
  - [DuoRC](https://huggingface.co/datasets/duorc)
  - [MLQA](https://huggingface.co/datasets/mlqa)
  - [Quoref](https://huggingface.co/datasets/quoref)
  - [ReCoRD](https://huggingface.co/datasets/super_glue)
  - [ROPES](https://huggingface.co/datasets/ropes)
  - [SQuAD v2](https://huggingface.co/datasets/squad_v2)
  - [xQuAD](https://huggingface.co/datasets/xquad)
  - TyDI QA
    - [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary)
    - [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
- Multiple-Choice QA
  - [ARC](https://huggingface.co/datasets/ai2_arc)
  - [C3](https://huggingface.co/datasets/c3)
  - [CoS-E](https://huggingface.co/datasets/cos_e)
  - [Cosmos](https://huggingface.co/datasets/cosmos)
  - [DREAM](https://huggingface.co/datasets/dream)
  - [MultiRC](https://huggingface.co/datasets/super_glue)
  - [OpenBookQA](https://huggingface.co/datasets/openbookqa)
  - [PiQA](https://huggingface.co/datasets/piqa)
  - [QUAIL](https://huggingface.co/datasets/quail)
  - [QuaRel](https://huggingface.co/datasets/quarel)
  - [QuaRTz](https://huggingface.co/datasets/quartz)
  - [QASC](https://huggingface.co/datasets/qasc)
  - [RACE](https://huggingface.co/datasets/race)
  - [SciQ](https://huggingface.co/datasets/sciq)
  - [Social IQA](https://huggingface.co/datasets/social_i_qa)
  - [Wiki Hop](https://huggingface.co/datasets/wiki_hop)
  - [WiQA](https://huggingface.co/datasets/wiqa)
- Paraphrase Identification
  - [MRPC](https://huggingface.co/datasets/super_glue)
  - [PAWS](https://huggingface.co/datasets/paws)
  - [PAWS-X](https://huggingface.co/datasets/paws-x)
  - [QQP](https://huggingface.co/datasets/qqp)
- Program Synthesis
  - [APPS](https://huggingface.co/datasets/codeparrot/apps)
  - [CodeContests](https://huggingface.co/datasets/teven/code_contests)
  - [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs)
  - [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp)
  - [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search)
  - [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code)
- Structure-to-text
  - [Common Gen](https://huggingface.co/datasets/common_gen)
  - [Wiki Bio](https://huggingface.co/datasets/wiki_bio)
- Sentiment
  - [Amazon](https://huggingface.co/datasets/amazon_polarity)
  - [App Reviews](https://huggingface.co/datasets/app_reviews)
  - [IMDB](https://huggingface.co/datasets/imdb)
  - [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes)
  - [Yelp](https://huggingface.co/datasets/yelp_review_full)
- Simplification
  - [BiSECT](https://huggingface.co/datasets/GEM/BiSECT)
- Summarization
  - [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail)
  - [Gigaword](https://huggingface.co/datasets/gigaword)
  - [MultiNews](https://huggingface.co/datasets/multi_news)
  - [SamSum](https://huggingface.co/datasets/samsum)
  - [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua)
  - [XLSum](https://huggingface.co/datasets/GEM/xlsum)
  - [XSum](https://huggingface.co/datasets/xsum)
- Topic Classification
  - [AG News](https://huggingface.co/datasets/ag_news)
  - [DBPedia](https://huggingface.co/datasets/dbpedia_14)
  - [TNEWS](https://huggingface.co/datasets/clue)
  - [TREC](https://huggingface.co/datasets/trec)
  - [CSL](https://huggingface.co/datasets/clue)
- Translation
  - [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200)
  - [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt)
- Word Sense disambiguation
  - [WiC](https://huggingface.co/datasets/super_glue)
  - [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic)

#### Evaluation datasets (included in [xP3all](https://huggingface.co/datasets/bigscience/xP3all) except for NLI & HumanEval)

- Natural Language Inference (NLI)
  - [ANLI](https://huggingface.co/datasets/anli)
  - [CB](https://huggingface.co/datasets/super_glue)
  - [RTE](https://huggingface.co/datasets/super_glue)
  - [XNLI](https://huggingface.co/datasets/xnli)
- Coreference Resolution
  - [Winogrande](https://huggingface.co/datasets/winogrande)
  - [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd)
- Program Synthesis
  - [HumanEval](https://huggingface.co/datasets/openai_humaneval)
- Sentence Completion
  - [COPA](https://huggingface.co/datasets/super_glue)
  - [Story Cloze](https://huggingface.co/datasets/story_cloze)
  - [XCOPA](https://huggingface.co/datasets/xcopa)
  - [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze)

## Additional Information

### Licensing Information

The dataset is released under Apache 2.0.
### Citation Information

```bibtex
@misc{muennighoff2022crosslingual,
  title={Crosslingual Generalization through Multitask Finetuning},
  author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
  year={2022},
  eprint={2211.01786},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

### Contributions

Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset.
dclure/laion-aesthetics-12m-umap
---
annotations_creators: []
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
pretty_name: laion-aesthetics-12m-umap
size_categories: []
source_datasets: []
tags:
- laion
- stable-diffusion
- text2img
task_categories: []
task_ids: []
---

# LAION-Aesthetics :: CLIP → UMAP

This dataset is a CLIP (text) → UMAP embedding of the [LAION-Aesthetics dataset](https://laion.ai/blog/laion-aesthetics/) - specifically the [`improved_aesthetics_6plus` version](https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_6plus), which filters the full dataset to images with scores of > 6 under the "aesthetic" filtering model.

Thanks LAION for this amazing corpus!

---

The dataset here includes coordinates for 3x separate UMAP fits using different values for the `n_neighbors` parameter - `10`, `30`, and `60` - which are broken out as separate columns with different suffixes:

- `n_neighbors=10` → (`x_nn10`, `y_nn10`)
- `n_neighbors=30` → (`x_nn30`, `y_nn30`)
- `n_neighbors=60` → (`x_nn60`, `y_nn60`)

### `nn10`

![nn10](https://user-images.githubusercontent.com/814168/189763846-efa9ecc9-3d57-469b-9d4e-02ddc1723265.jpg)

### `nn30`

![nn30](https://user-images.githubusercontent.com/814168/189763863-a67d4bb1-e043-48ec-8c5a-38dce960731b.jpg)

### `nn60`

(The version from [Twitter](https://twitter.com/clured/status/1565399157606580224).)

![nn60](https://user-images.githubusercontent.com/814168/189763872-5847cde5-e03b-45e1-a9be-d95966bc5ded.jpg)

## Pipeline

The script for producing this can be found here:

https://github.com/davidmcclure/loam-viz/blob/laion/laion.py

And is very simple - just using the `openai/clip-vit-base-patch32` model out-of-the-box to encode the text captions:

```python
# Excerpt from laion.py - imports (pandas, numpy, torch, typer, tqdm,
# boltons' chunked_iter, transformers) and `device` are defined at the
# top of the full script linked above.
@app.command()
def clip(
    src: str,
    dst: str,
    text_col: str = 'TEXT',
    limit: Optional[int] = typer.Option(None),
    batch_size: int = typer.Option(512),
):
    """Embed with CLIP."""
    df = pd.read_parquet(src)

    if limit:
        df = df.head(limit)

    tokenizer = CLIPTokenizerFast.from_pretrained('openai/clip-vit-base-patch32')
    model = CLIPTextModel.from_pretrained('openai/clip-vit-base-patch32')
    model = model.to(device)

    texts = df[text_col].tolist()

    embeds = []
    for batch in chunked_iter(tqdm(texts), batch_size):
        enc = tokenizer(
            batch,
            return_tensors='pt',
            padding=True,
            truncation=True,
        )
        enc = enc.to(device)

        with torch.no_grad():
            res = model(**enc)

        embeds.append(res.pooler_output.to('cpu'))

    embeds = torch.cat(embeds).numpy()

    np.save(dst, embeds)
    print(embeds.shape)
```

Then using `cuml.GaussianRandomProjection` to do an initial squeeze to 64d (which gets the embedding tensor small enough to fit onto a single GPU for the UMAP) -

```python
@app.command()
def random_projection(src: str, dst: str, dim: int = 64):
    """Random projection on an embedding matrix."""
    rmm.reinitialize(managed_memory=True)

    embeds = np.load(src)

    rp = cuml.GaussianRandomProjection(n_components=dim)
    embeds = rp.fit_transform(embeds)

    np.save(dst, embeds)
    print(embeds.shape)
```

And then `cuml.UMAP` to get from 64d -> 2d -

```python
@app.command()
def umap(
    df_src: str,
    embeds_src: str,
    dst: str,
    n_neighbors: int = typer.Option(30),
    n_epochs: int = typer.Option(1000),
    negative_sample_rate: int = typer.Option(20),
):
    """UMAP to 2d."""
    rmm.reinitialize(managed_memory=True)

    df = pd.read_parquet(df_src)
    embeds = np.load(embeds_src)

    embeds = embeds.astype('float16')

    print(embeds.shape)
    print(embeds.dtype)

    reducer = cuml.UMAP(
        n_neighbors=n_neighbors,
        n_epochs=n_epochs,
        negative_sample_rate=negative_sample_rate,
        verbose=True,
    )
    x = reducer.fit_transform(embeds)

    df['x'] = x[:,0]
    df['y'] = x[:,1]

    df.to_parquet(dst)
    print(df)
```
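As a quick sanity check, the coordinates can be pulled back down and plotted. A minimal sketch, assuming the dataset loads as a single `train` split with the column suffixes described above:

```python
import matplotlib.pyplot as plt
from datasets import load_dataset

# Load the UMAP coordinates and convert to a DataFrame.
ds = load_dataset("dclure/laion-aesthetics-12m-umap", split="train")
df = ds.to_pandas()

# Plot the n_neighbors=60 fit (the version shown above).
plt.figure(figsize=(8, 8))
plt.scatter(df["x_nn60"], df["y_nn60"], s=0.05, alpha=0.2)
plt.axis("off")
plt.savefig("laion_umap_nn60.png", dpi=300)
```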
Adapting/chinese_biomedical_NER_dataset
---
license: mit
---

# 1 Source

Source: https://github.com/alibaba-research/ChineseBLUE

# 2 Definition of the tagset

```python
# BIO-style tags. The entity types are (in English): 手术 = surgery,
# 疾病和诊断 = disease and diagnosis, 症状 = symptom, 解剖部位 = anatomical site,
# 药物 = drug, 影像检查 = imaging examination, 实验室检验 = laboratory test.
tag_set = [
    'B_手术', 'I_疾病和诊断', 'B_症状', 'I_解剖部位', 'I_药物',
    'B_影像检查', 'B_药物', 'B_疾病和诊断', 'I_影像检查', 'I_手术',
    'B_解剖部位', 'O', 'B_实验室检验', 'I_症状', 'I_实验室检验'
]

# Constant-time lookups instead of scanning the list with .index() each call.
_tag2id = {tag: i for i, tag in enumerate(tag_set)}
tag2id = lambda tag: _tag2id[tag]
id2tag = lambda id: tag_set[id]
```

# 3 Citation

To use this dataset in your work please cite:

Ningyu Zhang, Qianghuai Jia, Kangping Yin, Liang Dong, Feng Gao, Nengwei Hua. Conceptualized Representation Learning for Chinese Biomedical Text Mining

```
@article{zhang2020conceptualized,
  title={Conceptualized Representation Learning for Chinese Biomedical Text Mining},
  author={Zhang, Ningyu and Jia, Qianghuai and Yin, Kangping and Dong, Liang and Gao, Feng and Hua, Nengwei},
  journal={arXiv preprint arXiv:2008.10813},
  year={2020}
}
```
arbml/Arabic_Hate_Speech
---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: tweet
    dtype: string
  - name: is_off
    dtype: string
  - name: is_hate
    dtype: string
  - name: is_vlg
    dtype: string
  - name: is_vio
    dtype: string
  splits:
  - name: train
    num_bytes: 1656540
    num_examples: 8557
  - name: validation
    num_bytes: 234165
    num_examples: 1266
  download_size: 881261
  dataset_size: 1890705
---

# Dataset Card for "Arabic_Hate_Speech"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Muennighoff/P3
---
annotations_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: P3
size_categories:
- 100M<n<1B
task_categories:
- other
---

This is a repreprocessed version of [P3](https://huggingface.co/datasets/bigscience/P3) with any updates that have been made to the P3 datasets since the release of the original P3. It is used for the finetuning of [bloomz-p3](https://huggingface.co/bigscience/bloomz-p3) & [mt0-xxl-p3](https://huggingface.co/bigscience/mt0-xxl-p3). The script is available [here](https://github.com/bigscience-workshop/bigscience/blob/638e66e40395dbfab9fa08a662d43b317fb2eb38/data/p3/prepare_p3.py).
VietAI/vi_pubmed
---
license: cc
language:
- vi
- en
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: pubmed
dataset_info:
  features:
  - name: en
    dtype: string
  - name: vi
    dtype: string
  splits:
  - name: pubmed22
    num_bytes: 44360028980
    num_examples: 20087006
  download_size: 23041004247
  dataset_size: 44360028980
---

# Dataset Summary

20M Vietnamese PubMed biomedical abstracts translated by the [state-of-the-art English-Vietnamese Translation project](https://arxiv.org/abs/2210.05610). The data has been used as an unlabeled dataset for [pretraining a Vietnamese biomedical-domain Transformer model](https://arxiv.org/abs/2210.05598).

![image](https://user-images.githubusercontent.com/44376091/200204462-4d559113-5bdf-4cc5-9e88-70abe82babba.png)

Image source: [Enriching Biomedical Knowledge for Vietnamese Low-resource Language Through Large-Scale Translation](https://arxiv.org/abs/2210.05598)

# Language

- English: Original biomedical abstracts from [PubMed](https://www.nlm.nih.gov/databases/download/pubmed_medline_faq.html)
- Vietnamese: Synthetic abstracts translated by a [state-of-the-art English-Vietnamese Translation project](https://arxiv.org/abs/2210.05610)

# Dataset Structure

- The English sequences (`en`) are the original biomedical abstracts from PubMed.
- The Vietnamese sequences (`vi`) are the corresponding synthetic translations.

# Source Data

- Initial Data Collection and Normalization: https://www.nlm.nih.gov/databases/download/pubmed_medline_faq.html

# Licensing Information

[Courtesy of the U.S. National Library of Medicine.](https://www.nlm.nih.gov/databases/download/terms_and_conditions.html)

# Citation

```
@misc{mtet,
  doi = {10.48550/ARXIV.2210.05610},
  url = {https://arxiv.org/abs/2210.05610},
  author = {Ngo, Chinh and Trinh, Trieu H. and Phan, Long and Tran, Hieu and Dang, Tai and Nguyen, Hieu and Nguyen, Minh and Luong, Minh-Thang},
  keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {MTet: Multi-domain Translation for English and Vietnamese},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```

```
@misc{vipubmed,
  doi = {10.48550/ARXIV.2210.05598},
  url = {https://arxiv.org/abs/2210.05598},
  author = {Phan, Long and Dang, Tai and Tran, Hieu and Phan, Vy and Chau, Lam D. and Trinh, Trieu H.},
  keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Enriching Biomedical Knowledge for Vietnamese Low-resource Language Through Large-Scale Translation},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
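# Usage

A minimal loading sketch. The split name `pubmed22` and the `en`/`vi` fields come from the dataset metadata above; streaming is optional but avoids downloading the full ~23 GB of archives up front:

```python
from datasets import load_dataset

# Stream the single "pubmed22" split rather than downloading everything.
ds = load_dataset("VietAI/vi_pubmed", split="pubmed22", streaming=True)

for pair in ds.take(3):
    print(pair["en"][:100])  # original English abstract
    print(pair["vi"][:100])  # synthetic Vietnamese translation
```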
dkagramanyan/horoscopes_ru
---
annotations_creators: []
language:
- ru
language_creators: []
license: []
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets: []
tags: []
task_categories:
- text-generation
task_ids: []
dataset_info:
  features:
  - name: date
    dtype: string
  - name: sign
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: test
    num_bytes: 6532293
    num_examples: 6976
  - name: train
    num_bytes: 62194608
    num_examples: 66501
  download_size: 31753326
  dataset_size: 68726901
---

Horoscopes from the website Rambler.ru, covering 2004 to 2020: 73,477 records in total.

- Train split: 66,501 records
- Test split: 6,976 records (a 10% split)
VIMA/VIMA-Data
---
license: cc-by-4.0
---

# Dataset Card for VIMA-Data

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://vimalabs.github.io/
- **Repository:** https://github.com/vimalabs/VimaBench
- **Paper:** https://arxiv.org/abs/2210.03094

### Dataset Summary

This is the official dataset used to train general robot manipulation agents with multimodal prompts, as presented in the [paper](https://arxiv.org/abs/2210.03094). It contains 650K trajectories for 13 tasks in [VIMA-Bench](https://github.com/vimalabs/VimaBench). All demonstrations are generated by oracles.

## Dataset Structure

Data are grouped into different tasks. Within each trajectory's folder, there are two folders, `rgb_front` and `rgb_top`, and three files, `obs.pkl`, `action.pkl`, and `trajectory.pkl`. RGB frames from each perspective are stored separately in the corresponding folder. `obs.pkl` includes segmentation and the state of the end effector. `action.pkl` contains oracle actions. `trajectory.pkl` contains meta information such as elapsed steps, task information, and object information. Users can build their custom data pipeline starting from here. More details and examples can be found [here](https://github.com/vimalabs/VimaBench#training-data).

## Dataset Creation

All demonstrations are generated by scripted oracles.

## Additional Information

### Licensing Information

This dataset is released under the [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/legalcode) license.

### Citation Information

If you find our work useful, please consider citing us!

```bibtex
@inproceedings{jiang2023vima,
  title = {VIMA: General Robot Manipulation with Multimodal Prompts},
  author = {Yunfan Jiang and Agrim Gupta and Zichen Zhang and Guanzhi Wang and Yongqiang Dou and Yanjun Chen and Li Fei-Fei and Anima Anandkumar and Yuke Zhu and Linxi Fan},
  booktitle = {Fortieth International Conference on Machine Learning},
  year = {2023}
}
```
Drozdik/tattoo_v3
---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 101626056.169
    num_examples: 4239
  download_size: 78738858
  dataset_size: 101626056.169
---

# Dataset Card for "tattoo_v3"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dream-textures/textures-color-1k
---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 60933571.47
    num_examples: 1426
  download_size: 58351352
  dataset_size: 60933571.47
license: cc0-1.0
task_categories:
- text-to-image
language:
- en
size_categories:
- 1K<n<10K
---

# textures-color-1k

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

The `textures-color-1k` dataset is an image dataset of 1000+ color image textures in 512x512 resolution with associated text descriptions. The dataset was created for training/fine-tuning diffusion models on texture generation tasks. It contains a combination of CC0 procedural and photoscanned PBR materials from [ambientCG](https://ambientcg.com/).

### Languages

The text descriptions are in English, and created by joining the tags of each material with a space character.

## Dataset Structure

### Data Instances

Each data point contains a 512x512 image and an additional `text` feature containing the description of the texture.

### Data Fields

* `image`: the color texture as a PIL image
* `text`: the associated text description created by merging the material's tags

### Data Splits

|           | train |
| --------- | ----- |
| ambientCG | 1426  |

## Dataset Creation

### Curation Rationale

`textures-color-1k` was created to provide an accessible source of data for automating 3D-asset creation workflows. The [Dream Textures](https://github.com/carson-katri/dream-textures) add-on is one such tool providing AI automation in Blender. By fine-tuning models such as Stable Diffusion on textures, this particular use-case can be more accurately automated.

### Source Data

#### Initial Data Collection and Normalization

The data was obtained from [ambientCG](https://ambientcg.com/)'s CC0 textures. Only the color maps were included in this dataset. Text descriptions were synthesized by joining the tags associated with each material with a space.

## Additional Information

### Dataset Curators

The dataset was created by Carson Katri, with the images being provided by [ambientCG](https://ambientcg.com/).

### Licensing Information

All of the images used in this dataset are CC0.

### Citation Information

[N/A]

### Contributions

Thanks to [@carson-katri](https://github.com/carson-katri) for adding this dataset.
p1atdev/pinterest
---
license: cc0-1.0
---

This dataset was created using [this tool](https://github.com/p1atdev/pinterest-wizard).

# concept_art.json

589 images about "Concept Art" from **Pinterest searches** 🤗.

# double_exposure.json

597 images about "Double Exposure" from **Pinterest searches** 🤗.

# vaporwave.json

599 images about "Vaporwave" from **Pinterest searches** 🤗.

# typography.json

633 images about "Typography" from **Pinterest searches** 🤗.

# portrait.json

573 images about "Portrait" from **Pinterest searches** 🤗.

# selfie.json

584 images about "自撮り 女の子" ("selfie girl") from **Pinterest searches** 🤗.

# Type

```ts
interface Pinterest {
  url: string // pinterest page url
  alt: string // description of the image (not always accurate)
  src: string // image url
  tags: string[] // related tags
}
```
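A minimal Python sketch of consuming one of these files; it assumes each JSON file is a flat array of objects shaped like the interface above:

```python
import json
import urllib.request

# Assumes the file is a JSON array of entries matching the interface above.
with open("concept_art.json", encoding="utf-8") as f:
    pins = json.load(f)

print(len(pins), "pins")
entry = pins[0]
print(entry["alt"], entry["tags"])

# Download the first image (`src` is a direct image URL).
urllib.request.urlretrieve(entry["src"], "pin_0.jpg")
```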
shahules786/OA-cornell-movies-dialog
---
dataset_info:
  features:
  - name: conversation
    dtype: string
  splits:
  - name: train
    num_bytes: 9476338
    num_examples: 20959
  download_size: 4859997
  dataset_size: 9476338
---

# Dataset Card for Open Assistant Cornell Movies Dialog

## Dataset Summary

The dataset was created using the [Cornell Movies Dialog Corpus](https://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html), which contains a large metadata-rich collection of fictional conversations extracted from raw movie scripts. Dialogs and metadata from the underlying corpus were used to design a dataset that can be used to train InstructGPT-style models to learn movie scripts.

Example:

```
User: Assume RICK and ALICE are characters from a fantasy-horror movie, continue the conversation between them
RICK: I heard you screaming.  Was it a bad one?
ALICE: It was bad.
RICK: Doesn't the dream master work for you anymore?
Assistant: Sure
ALICE: I can't find him.
RICK: Hey, since when do you play Thomas Edison?  This looks like Sheila's.
ALICE: It is...was. It's a zapper, it might help me stay awake.
RICK: Yeah, or turn you into toast.
```

## Citations

```
@InProceedings{Danescu-Niculescu-Mizil+Lee:11a,
  author={Cristian Danescu-Niculescu-Mizil and Lillian Lee},
  title={Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs.},
  booktitle={Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, ACL 2011},
  year={2011}
}
```
Isamu136/big-animal-dataset
---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: caption
    dtype: string
  splits:
  - name: train
    num_bytes: 1198940745.5549998
    num_examples: 62149
  download_size: 0
  dataset_size: 1198940745.5549998
---

# Dataset Card for "big-animal-dataset"

Hi! I combined the Animals-10 dataset, the Oxford Pets dataset, the Stanford Dogs dataset, and the Cats vs. Dogs dataset into one large animal dataset.

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
shahules786/prosocial-nsfw-reddit
---
dataset_info:
  features:
  - name: user
    dtype: string
  - name: subreddit
    dtype: string
  - name: post_id
    dtype: string
  - name: link_flair_text
    dtype: string
  - name: over_18
    dtype: bool
  - name: is_question
    dtype: bool
  - name: rots
    sequence: string
  - name: safety_label
    dtype: string
  splits:
  - name: train
    num_bytes: 167028.87151967647
    num_examples: 1502
  download_size: 101362
  dataset_size: 167028.87151967647
---

# Dataset Card for "prosocial-nsfw-reddit"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Elfsong/ClinicalDataset
---
task_categories:
- summarization
- conversational
language:
- en
pretty_name: MediQA
size_categories:
- 1K<n<10K
---

# MEDIQA-Chat 2023 Training/Validation Data

# Task A

The training set consists of 1,201 pairs of conversations and associated section headers and contents. The validation set consists of 100 pairs of conversations and their summaries.

The full list of normalized section headers:

1. fam/sochx [FAMILY HISTORY/SOCIAL HISTORY]
2. genhx [HISTORY of PRESENT ILLNESS]
3. pastmedicalhx [PAST MEDICAL HISTORY]
4. cc [CHIEF COMPLAINT]
5. pastsurgical [PAST SURGICAL HISTORY]
6. allergy
7. ros [REVIEW OF SYSTEMS]
8. medications
9. assessment
10. exam
11. diagnosis
12. disposition
13. plan
14. edcourse [EMERGENCY DEPARTMENT COURSE]
15. immunizations
16. imaging
17. gynhx [GYNECOLOGIC HISTORY]
18. procedures
19. other_history
20. labs

# Task B

The training set consists of 67 pairs of conversations and full notes. The validation set includes 20 pairs of conversations and clinical notes.

Full encounter notes are expected to have at least one of four overall section divisions, demarcated by the first-occurring of its related section headers:

| note_division       | section_headers                                              |
|---------------------|--------------------------------------------------------------|
| subjective          | chief complaint, history of present illness, hpi, subjective |
| objective_exam      | physical exam, exam                                          |
| objective_results   | results, findings                                            |
| assessment_and_plan | assessment, plan                                             |

Depending on the encounter, objective_exam and objective_results may not be relevant. We encourage reviewing the sample data as well as the evaluation script to understand the best demarcation headers for your generated note.

# Task C

The training set consists of 67 pairs of full doctor-patient conversations and notes, and the validation set includes 20 pairs of full conversations and clinical notes (the same as the Task-B datasets). The Task-A training and validation sets (1,301 pairs) could be used as additional training data.
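To make the demarcation rule concrete, the following is an illustrative sketch (not the official evaluation script; the header lists and matching are simplified) of splitting a full note into divisions at the first-occurring header of each division:

```python
# Illustrative only; see the official evaluation script for the real logic.
DIVISION_HEADERS = {
    "subjective": ["chief complaint", "history of present illness", "hpi", "subjective"],
    "objective_exam": ["physical exam", "exam"],
    "objective_results": ["results", "findings"],
    "assessment_and_plan": ["assessment", "plan"],
}

def split_note(note: str) -> dict:
    """Split a full note at the first-occurring header of each division."""
    lower = note.lower()
    starts = {}
    for division, headers in DIVISION_HEADERS.items():
        positions = [lower.find(h) for h in headers if h in lower]
        if positions:
            starts[division] = min(positions)
    # Order divisions by first appearance, then slice between neighbors.
    ordered = sorted(starts.items(), key=lambda kv: kv[1])
    sections = {}
    for (division, start), nxt in zip(ordered, ordered[1:] + [(None, len(note))]):
        sections[division] = note[start:nxt[1]].strip()
    return sections
```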
theblackcat102/joke_explaination
---
license: mit
task_categories:
- text-generation
- text2text-generation
language:
- en
tags:
- joke
- high quality
size_categories:
- n<1K
---

# Dataset Card for Dataset Name

## Dataset Description

- **Homepage:** https://explainthejoke.com/

### Dataset Summary

A corpus for testing whether your LLM can explain jokes well. This is a rather small dataset; if someone can point to a larger one, that would be very welcome.

### Languages

English

## Dataset Structure

### Data Fields

* url: link to the explanation
* joke: the original joke
* explaination: the explanation of the joke

### Data Splits

Since it's so small, there are no splits, just like gsm8k.
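A minimal loading sketch, assuming the data is exposed as a single `train` split; note that the `explaination` field keeps the dataset's original spelling:

```python
from datasets import load_dataset

ds = load_dataset("theblackcat102/joke_explaination", split="train")

example = ds[0]
print(example["joke"])
print(example["explaination"])  # field name keeps the original spelling
print(example["url"])
```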
donfu/oa-stackexchange
---
dataset_info:
  features:
  - name: INSTRUCTION
    dtype: string
  - name: RESPONSE
    dtype: string
  - name: SOURCE
    dtype: string
  - name: METADATA
    struct:
    - name: answer_score
      dtype: int64
    - name: question_score
      dtype: int64
    - name: tags
      dtype: string
  splits:
  - name: train
    num_bytes: 6549838664
    num_examples: 6331083
  download_size: 3755782987
  dataset_size: 6549838664
license: cc-by-sa-4.0
language:
- en
- uk
- ru
- de
- fr
- it
- es
pretty_name: Open-Assistant StackExchange Instruction
---

# Stackexchange Instructions for OpenAssistant

This dataset is taken from https://archive.org/details/stackexchange.

There's a single parquet file combining all stackexchange sites. The threads have been filtered as follows: only threads with an accepted answer, for which both the question and response are less than 1000 characters, have been chosen. Other answers, questions without accepted answers, and long entries have been dropped.

Each row consists of:

- INSTRUCTION
- RESPONSE
- SOURCE ("stackexchange-ai")
- METADATA (tags, question_score, answer_score)

Original extraction code by https://github.com/b-mc2

## How to Reproduce this Dataset

1. Download all XML files from the stackexchange archive into the xml/ folder

   ```
   ./download.py
   ```

2. Process the XML, filter conversations, and convert to OA format into the parquet/ folder

   ```
   ./process.py
   ```

3. Run stats on all files in the parquet/ folder

   ```
   ./stats.py
   ```

4. Combine all parquet files into one large stackexchange.parquet file

   ```
   ./combine.py
   ```

5. Upload to the Hugging Face Hub; you'll first need to log in using `huggingface-cli login`

   ```
   ./upload.py
   ```

## Statistics

- 3dprinting: 1,006
- academia: 6,956
- ai: 1,169
- android: 11,591
- anime: 3,688
- apple: 32,603
- arduino: 3,725
- askubuntu: 78,472
- astronomy: 2,425
- aviation: 4,945
- avp: 1,949
- beer: 387
- bicycles: 4,835
- bioacoustics: 70
- bioinformatics: 903
- biology: 5,344
- bitcoin: 7,456
- blender: 25,527
- boardgames: 4,538
- bricks: 1,457
- buddhism: 911
- cardano: 670
- chemistry: 7,430
- chess: 2,185
- chinese: 4,897
- christianity: 1,248
- civicrm: 3,221
- codegolf: 943
- codereview: 2,171
- coffee: 350
- cogsci: 645
- computergraphics: 540
- conlang: 101
- cooking: 7,951
- craftcms: 4,533
- crafts: 438
- crypto: 4,425
- cs: 9,478
- cseducators: 71
- cstheory: 2,196
- datascience: 5,045
- dba: 16,850
- devops: 961
- diy: 14,400
- drones: 190
- drupal: 24,090
- dsp: 4,470
- earthscience: 922
- ebooks: 323
- economics: 2,120
- electronics: 41,717
- elementaryos: 1,769
- ell: 30,428
- emacs: 7,140
- engineering: 2,314
- english: 42,415
- eosio: 626
- es_stackoverflow: 21,475
- esperanto: 617
- ethereum: 9,603
- expatriates: 973
- expressionengine: 3,638
- fitness: 1,833
- freelancing: 338
- french: 5,193
- gamedev: 9,678
- gaming: 44,899
- gardening: 4,492
- genealogy: 487
- german: 6,715
- gis: 30,249
- graphicdesign: 10,563
- ham: 790
- hardwarerecs: 647
- health: 804
- hermeneutics: 782
- hinduism: 1,036
- history: 1,776
- homebrew: 2,357
- hsm: 484
- interpersonal: 199
- iot: 331
- iota: 292
- islam: 1,496
- italian: 1,356
- ja_stackoverflow: 9,734
- japanese: 13,862
- joomla: 1,875
- judaism: 6,156
- korean: 754
- languagelearning: 135
- latin: 1,387
- law: 3,475
- lifehacks: 934
- linguistics: 1,507
- literature: 582
- magento: 20,537
- martialarts: 364
- materials: 338
- math: 501,019
- matheducators: 316
- mathematica: 19,529
- mathoverflow_net_7z: 23,803
- mechanics: 4,735
- meta: 34,161
- meta_askubuntu: 2,076
- meta_mathoverflow_net_7z: 333
- meta_serverfault: 823
- meta_stackoverflow: 12,641
- meta_superuser: 1,748
- moderators: 39
- monero: 1,443
- money: 7,996
- movies: 6,789
- music: 5,740
- musicfans: 781
- mythology: 271
- networkengineering: 4,637
- opendata: 1,117
- opensource: 805
- or: 586
- outdoors: 1,503
- parenting: 815
- patents: 582
- pets: 1,081
- philosophy: 1,505
- photo: 6,386
- physics: 35,386
- pm: 982
- poker: 431
- politics: 1,903
- portuguese: 658
- proofassistants: 87
- pt_stackoverflow: 27,650
- puzzling: 11,959
- quant: 3,303
- quantumcomputing: 1,604
- raspberrypi: 6,794
- retrocomputing: 1,016
- reverseengineering: 1,606
- robotics: 1,020
- rpg: 9,517
- ru_stackoverflow: 106,714
- rus: 8,210
- russian: 1,960
- salesforce: 27,962
- scicomp: 1,403
- scifi: 15,174
- security: 11,733
- serverfault: 81,229
- sharepoint: 24,934
- sitecore: 2,691
- skeptics: 1,043
- softwareengineering: 10,526
- softwarerecs: 3,032
- solana: 602
- sound: 2,031
- space: 3,145
- spanish: 3,049
- sports: 1,715
- sqa: 1,944
- stackapps: 702
- stackoverflow: 4,269,779
- stats: 23,102
- stellar: 373
- substrate: 812
- superuser: 128,488
- sustainability: 240
- tex: 42,808
- tezos: 635
- tor: 887
- travel: 9,957
- tridion: 1,769
- ukrainian: 577
- unix: 54,338
- ux: 7,403
- vegetarianism: 151
- vi: 4,360
- webapps: 10,159
- webmasters: 9,413
- windowsphone: 1,110
- woodworking: 677
- wordpress: 24,270
- workplace: 4,104
- worldbuilding: 2,766
- writers: 1,957

---

## License

cc-by-sa-4.0. See https://archive.org/details/stackexchange for details.
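For reference, a minimal sketch of loading the combined file with `datasets` and filtering on the `METADATA` struct documented above:

```python
from datasets import load_dataset

ds = load_dataset("donfu/oa-stackexchange", split="train")

# Keep only highly-scored pairs (field names per the METADATA struct above).
high_quality = ds.filter(lambda row: row["METADATA"]["answer_score"] >= 10)
print(len(high_quality), "rows")
print(high_quality[0]["INSTRUCTION"])
```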
michelleyunun/therapydata
---
dataset_info:
  features:
  - name: transcript_id
    dtype: string
  - name: topic
    dtype: string
  - name: interlocutor
    dtype: string
  - name: utterance_text
    dtype: string
  - name: main_therapist_behaviour
    dtype: string
  - name: client_talk_type
    dtype: string
  splits:
  - name: train
    num_bytes: 628461
    num_examples: 4153
  download_size: 0
  dataset_size: 628461
---

# Dataset Card for "therapydata"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
SaranaAbidueva/buryat-russian_parallel_corpus
---
dataset_info:
  features:
  - name: bxr
    dtype: string
  - name: ru
    dtype: string
  - name: corpus
    dtype: string
  splits:
  - name: train
    num_bytes: 8989074
    num_examples: 38260
  download_size: 4394110
  dataset_size: 8989074
license: cc-by-4.0
language:
- ru
task_categories:
- translation
size_categories:
- 10K<n<100K
---

# Dataset Card for "buryat-russian_parallel_corpus"

The dataset consists of 38260 pairs in the Russian and Buryat languages. Of these, 19411 are sentence pairs and 20058 are word pairs.

Statistics by source:

| Source          | Pairs |
|-----------------|------:|
| Bible           | 7519  |
| Books           | 5250  |
| Tatoeba         | 807   |
| Poems           | 471   |
| Poems (Nimbuev) | 1210  |
| Dictionary      | 20058 |
| Wikipedia       | 1882  |
| Laws            | 1063  |

```
@inproceedings{buryatrussian2023,
  title={Buryat-Russian parallel corpus},
  author={Sarana Abidueva and Dari Baturova},
  year={2023}
}
```
0x22almostEvil/reasoning-gsm-qna-oa
---
license: mit
task_categories:
- question-answering
language:
- en
tags:
- QnA
- math
- programming
size_categories:
- 1K<n<10K
---

# Dataset Card for GSM QnA reasoning with ~8.8K entries

### Dataset Summary

Contains a Parquet file with a list of instructions and answers. Each row consists of:

* INSTRUCTION
* RESPONSE
* SOURCE
* METADATA (JSON with language)

### Original datasets are available here:

* https://huggingface.co/datasets/gsm8k
* https://huggingface.co/datasets/reasoning-machines/gsm-hard
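A minimal loading sketch; it assumes the data loads as a single `train` split and that `METADATA` is stored as a JSON string:

```python
import json
from datasets import load_dataset

ds = load_dataset("0x22almostEvil/reasoning-gsm-qna-oa", split="train")

row = ds[0]
print(row["INSTRUCTION"])
print(row["RESPONSE"])
print(json.loads(row["METADATA"])["language"])  # assumes METADATA is a JSON string
```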
rubend18/DALL-E-Prompts-OpenAI-ChatGPT
---
task_categories:
- text-generation
- feature-extraction
- zero-shot-classification
language:
- en
tags:
- DALL-E
- Prompt
- Dataset
- Compilation
- OpenAI
- image
- images
pretty_name: DALL-E Prompt Dataset Compilation
size_categories:
- 1B<n<10B
---

# Dataset Card for Dataset Name

## Dataset Description

- **Author:** Rubén Darío Jaramillo
- **Email:** rubend18@hotmail.com
- **WhatsApp:** +593 93 979 6676

### Dataset Summary

This dataset has been generated using [Prompt Generator for OpenAI's DALL-E](https://huggingface.co/spaces/rubend18/Prompt-Generator-for-OpenAI-DALL-E).

### Languages

English

## Dataset Structure

1,000,000 prompts
tasksource/PRM800K
---
license: mit
---

https://github.com/openai/prm800k/tree/main
Maxlinn/TruthfulQA_zh
---
license: mit
task_categories:
- question-answering
language:
- zh
tags:
- truthfulqa
---

TruthfulQA dataset CSV with the question and answer fields translated into Chinese by prompting GPT-4.
unum-cloud/ann-arxiv-2m
---
license: apache-2.0
---

# 2M Title-Abstract Arxiv Pairs

- `title_abstract.tsv`: data from the [Cornell University Arxiv Dataset](https://www.kaggle.com/Cornell-University/arxiv), preprocessed and converted to TSV.
- `title.e5-base-v2.fbin`: a binary file with [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) title embeddings.
- `abstract.e5-base-v2.fbin`: a binary file with [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) abstract embeddings.
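A minimal sketch for reading the embedding files. This assumes the common `.fbin` layout used in ANN benchmarks (two little-endian `int32` values for count and dimensionality, followed by row-major `float32` data), which is an assumption worth verifying against the files:

```python
import numpy as np

def read_fbin(path: str) -> np.ndarray:
    """Read an .fbin file, assuming an [n, dim] int32 header + float32 rows."""
    with open(path, "rb") as f:
        n, dim = np.fromfile(f, dtype=np.int32, count=2)
        return np.fromfile(f, dtype=np.float32).reshape(n, dim)

titles = read_fbin("title.e5-base-v2.fbin")
print(titles.shape)  # e5-base-v2 produces 768-dimensional embeddings
```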
shumpei2525/OpenOrca-train-ja
---
license: mit
---

# OpenOrca-train-ja

This dataset is a translation of OpenOrca into Japanese. It is based on the output data from GPT-3.5 and GPT-4. Please feel free to use it as you wish.

* A few mistakes have been observed in the translation task, so it might be better to exclude the translation task from use.

# Licensing note

Since I'm not entirely clear on OpenAI's terms of service, please be cautious when using it for commercial purposes. There may be exceptions for non-commercial use.

# Other dataset

This dataset has higher quality: [shumpei2525/fine_tuning521k-ja](https://huggingface.co/datasets/shumpei2525/fine_tuning521k-ja)

# OpenOrca test dataset

Pyuta-san has kindly translated the test dataset of OpenOrca into Japanese. Here is the dataset: [pyutax68/OpenOrca-test-jp](https://huggingface.co/datasets/pyutax68/OpenOrca-test-jp)

# Original datasets

[Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca)

License: MIT
AnnasBlackHat/alpaca-indonesia-llama
---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 27355550
    num_examples: 39440
  - name: test
    num_bytes: 6995124
    num_examples: 9860
  download_size: 17306368
  dataset_size: 34350674
---

# Dataset Card for "alpaca-indonesia-llama"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
TokenBender/python_evol_instruct_51k
---
license: mit
---
KaraKaraWitch/PIPPA-ShareGPT-formatted
---
license: agpl-3.0
task_categories:
- conversational
language:
- en
tags:
- not-for-all-audiences
- conversational
- roleplay
- custom-format
- a.
pretty_name: PIPPA - Personal Interaction Pairs Between People and AI
size_categories:
- 10K<n<100K
viewer: false
---

# KaraKaraWitch/PIPPA-IHaveNeverFeltNeedToSend

```
I've never felt the need to send a photo of my <REDACTED>
To a stranger on the Internet
```

The following is the original description for PIPPA. [Consider downloading the original dataset over here!](https://huggingface.co/datasets/PygmalionAI/PIPPA)

---

# PIPPA - Personal Interaction Pairs between People and AI

It's been a long time coming, but we're proud to finally release the public portion of our conversational dataset to the public. **Personal Interaction Pairs between People and AI** (**PIPPA**) is a partially synthetic, community-contributed and open-source conversational and roleplaying dataset generated from a subset of submitted logs to the Pygmalion project. This dataset is a subset of what we have received - it consists only of the valid conversational logs in which the submitter gave consent to redistribute to the public. Furthermore, we have done our best to redact or modify any personal information that could potentially be found within PIPPA.

If you have found something within PIPPA which has not been redacted properly, please contact us via email at `teargosling@pygmalion.chat` or `alpindale@pygmalion.chat` and we'll take care of it for you. You may contact us for any other purpose as well, including yelling at us about when the next model will be released.

**⚠️ CAUTION: PIPPA contains conversations, themes and scenarios which can be considered "not safe for work" (NSFW) and/or heavily disturbing in nature. Models trained purely with PIPPA may have the tendency to generate X-rated output. You have been warned.**

## Dataset Summary

PIPPA consists of just a little more than 1 million lines of dialogue spread out over 26,000 conversations between users of the popular chatbot website "Character.AI" and its large language model, obtained through a large community effort taking place over the course of several months. Tallying shows that over 1,000 unique personas simulating both real and fictional characters are represented within the dataset, allowing PIPPA and LLMs fine-tuned on it to adapt to many different roleplay domains.

The dataset is represented with a JSONL file, with a singular JSON snippet representing one entire conversation. Every snippet contains the following pieces of data:

- `submission_timestamp`: The Unix timestamp of when this particular conversation was submitted to the project, in milliseconds.
- `categories`: The categories assigned to the character on the Character.AI website, if any were assigned. If no categories were assigned, it will be `null`.
- `bot_id`: The unique ID assigned to the specific character which the user was conversing with on the website.
- `bot_name`: The name of the character.
- `bot_greeting`: The introductory line of the character to the user. This is always the first utterance of dialogue in a conversation.
- `bot_definitions`: Contains whatever was typed in the **Definitions** field in the character creator on the website. This usually consists of one or more example conversations between the user and the character designed to steer the model towards emulating the persona correctly.
  Bot definitions required a separate effort to gather, and thus may not be present for a specific persona - if this is the case, an empty string is provided. Because the definitions were written on Character.AI, this field usually follows Character.AI's unique formatting and should be preprocessed before feeding into any model - please see **Appendix A** of the paper for further details.
- `bot_description`: Contains whatever was typed in the **Description** field in the character creator on the website. It usually consists of a few sentences which gives a brief overview of the character and any important details about them.
- `conversation`: The conversation between the user and the model. This is represented as a list of dictionaries, each dictionary representing a single utterance and containing two key-value pairs: `message`, referring to the utterance itself, and `is_human`, which designates whether the dialogue was generated by the user or the LLM.

For further information about PIPPA, please refer to our [published paper](https://arxiv.org/abs/2308.05884) or contact us at the emails listed above.

## Files

We publish PIPPA in multiple variants, each a singular JSONL file:

- **pippa.jsonl**: The original dataset, almost exactly as submitted to us (barring any modifications resulting from the redaction of personally identifiable information).
- **pippa_deduped.jsonl**: The 'cleaned' version of PIPPA, with duplicate conversations as well as any conversation with less than three turns removed from the dataset. **We recommend using this file.**
- **pippa_metharme.jsonl**: A version of deduped PIPPA which is formatted in a similar way to our [Metharme instructional models](https://huggingface.co/PygmalionAI/metharme-13b), useful as an example to demonstrate how to properly format the PIPPA dataset.

If you are using HuggingFace's `datasets` library, you can choose the file you wish to use by specifying the name of it (without extension) as an argument, like so: `dataset = load_dataset("PygmalionAI/PIPPA", 'pippa_deduped')`. The default value is `pippa_deduped`.

Thank you for your patience, everyone!

## Citation

If you're using our dataset, please consider citing our work:

```bibtex
@misc{gosling2023pippa,
  title={PIPPA: A Partially Synthetic Conversational Dataset},
  author={Tear Gosling and Alpin Dale and Yinhe Zheng},
  year={2023},
  eprint={2308.05884},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

___

Any relationship between the name of this dataset and any public personas is entirely and totally coincidental.
silk-road/ChatHaruhi-54K-Role-Playing-Dialogue
---
license: cc-by-4.0
task_categories:
- text-generation
- text2text-generation
language:
- en
- zh
size_categories:
- 10K<n<100K
pretty_name: conversa
---

# ChatHaruhi

# Reviving Anime Character in Reality via Large Language Model

[![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)]()
[![Data License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg)]()

github repo: https://github.com/LC1332/Chat-Haruhi-Suzumiya

**Chat-Haruhi-Suzumiya** is a language model that imitates the tone, personality and storylines of characters like Haruhi Suzumiya.

<details>
<summary> The project was developed by Cheng Li, Ziang Leng, Chenxi Yan, Xiaoyang Feng, HaoSheng Wang, Junyi Shen, Hao Wang, Weishi Mi, Aria Fei, Song Yan, Linkang Zhan, Yaokai Jia, Pingyu Wu, and Haozhen Sun, etc. </summary>

This is an open source project and the members were recruited from open source communities like DataWhale.

Lulu Li ( [Cheng Li@SenseTime](https://github.com/LC1332) ) initiated the whole project and designed and implemented most of the features.

Ziang Leng ( [Ziang Leng@SenseTime](https://blairleng.github.io) ) designed and implemented the training, data generation and backend architecture for ChatHaruhi 1.0.

Chenxi Yan ( [Chenxi Yan@Chengdu University of Information Technology](https://github.com/todochenxi) ) implemented and maintained the backend for ChatHaruhi 1.0.

Junyi Shen ( [Junyi Shen@Zhejiang University](https://github.com/J1shen) ) implemented the training code and participated in generating the training dataset.

Hao Wang ( [Hao Wang](https://github.com/wanghao07456) ) collected script data for a TV series and participated in data augmentation.

Weishi Mi ( [Weishi MI@Tsinghua University](https://github.com/hhhwmws0117) ) participated in data augmentation.

Aria Fei ( [Aria Fei@BJUT](https://ariafyy.github.io/) ) implemented the ASR feature for the script tool and participated in the Openness-Aware Personality paper project.

Xiaoyang Feng ( [Xiaoyang Feng@Nanjing Agricultural University](https://github.com/fengyunzaidushi) ) integrated the script recognition tool and participated in the Openness-Aware Personality paper project.

Yue Leng ( [Song Yan](https://github.com/zealot52099) ) collected data from The Big Bang Theory and implemented script format conversion.

scixing (HaoSheng Wang) ( [HaoSheng Wang](https://github.com/ssccinng) ) implemented voiceprint recognition in the script tool and tts-vits speech synthesis.

Linkang Zhan ( [JunityZhan@Case Western Reserve University](https://github.com/JunityZhan) ) collected Genshin Impact's system prompts and story data.

Yaokai Jia ( [Yaokai Jia](https://github.com/KaiJiaBrother) ) implemented the Vue frontend and practiced GPU extraction of Bert in a psychology project.

Pingyu Wu ( [Pingyu Wu@Juncai Shuyun](https://github.com/wpydcr) ) helped deploy the first version of the training code.

Haozhen Sun ( [Haozhen Sun@Tianjin University] ) plotted the character figures for ChatHaruhi.

</details>

## transfer into input-target format

If you want to convert this data into an input-target format, check the link here:

https://huggingface.co/datasets/silk-road/ChatHaruhi-Expand-118K

### Citation

Please cite the repo if you use the data or code in this repo.
```
@misc{li2023chatharuhi,
  title={ChatHaruhi: Reviving Anime Character in Reality via Large Language Model},
  author={Cheng Li and Ziang Leng and Chenxi Yan and Junyi Shen and Hao Wang and Weishi MI and Yaying Fei and Xiaoyang Feng and Song Yan and HaoSheng Wang and Linkang Zhan and Yaokai Jia and Pingyu Wu and Haozhen Sun},
  year={2023},
  eprint={2308.09597},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
wikipunk/yago45en
---
language:
- en
license: cc-by-sa-3.0
license_link: https://creativecommons.org/licenses/by-sa/3.0/
tags:
- knowledge-graph
- rdf
- triples
annotations_creators:
- crowdsourced
- expert-generated
source_datasets:
- wikidata
pretty_name: YAGO 4.5 (EN)
size_categories:
- 100M<n<1B
task_categories:
- graph-ml
dataset_info:
  features:
  - name: subject
    dtype: string
  - name: predicate
    dtype: string
  - name: object
    dtype: string
  config_name: default
  splits:
  - name: train
    num_bytes: 42709902295
    num_examples: 249675587
  dataset_size: 42709902295
viewer: false
---

# YAGO 4.5 Dataset (English subset for LLM fine-tuning)

To utilize the YAGO 4.5 (EN) Dataset, users should ensure they have the following prerequisites installed:

### Software

- Python (tested with 3.10)
- [Hugging Face Datasets Library](https://huggingface.co/docs/datasets/): Required for loading and processing the dataset.

```sh
pip install datasets
pip install rdflib
```

### Hardware

* Sufficient storage: The dataset is approximately 43 GB; ensure you have enough storage space to download and extract it.
* Multi-core processor: For efficient data loading and processing, a multi-core processor is recommended. The more threads available, the faster the dataset loads.

## Dataset Description

This dataset contains triples filtered from yago-facts.ttl and yago-beyond-wikipedia.ttl in the YAGO 4.5 dataset. The SPARQL query used to filter the triples is in `filter.sparql`. This represents a subset of the YAGO 4.5 dataset maintaining only English labels. I remapped some schema.org properties to `http://yago-knowledge.org/resource/` which were not present in the schema.org vocabulary. I also removed schema:sameAs and owl:sameAs relations from this dataset, as well as triples with xsd:anyURI object literals, as my goal is to use this dataset for fine-tuning a large language model for knowledge graph completion and I do not want to train the base model to predict these kinds of relations.

### Overview

YAGO 4.5 is the latest version of the YAGO knowledge base. It is based on Wikidata, the largest public general-purpose knowledge base. YAGO refines the data as follows:

* All entity identifiers and property identifiers are human-readable.
* The top-level classes come from schema.org, a standard repertoire of classes and properties maintained by Google and others. The lower-level classes are a careful selection of the Wikidata taxonomy.
* The properties come from schema.org.
* YAGO 4.5 contains semantic constraints in the form of SHACL. These constraints keep the data clean, and allow for logical reasoning on YAGO.

### Dataset Structure

The dataset is structured as follows:

- **yago-taxonomy.ttl:** Contains the `rdfs:subClassOf` relations for YAGO and the prefix mappings for the N-Triples.
- **facts.tar.gz:** Compressed file containing chunks of the dataset in N-Triples format, representing the factual knowledge in YAGO.

### Features

Each RDF triple in the dataset is represented with the following features:

- **subject:** The subject of the triple, representing the entity.
- **predicate:** The predicate of the triple, representing the relationship between the subject and object.
- **object:** The object of the triple, representing the entity or value linked by the predicate.

### Chunks

The dataset is logically divided into multiple chunks, each containing a subset of RDF triples. Users can load specific chunks or the entire dataset based on their requirements.
## Usage ### Loading the Dataset The dataset can be loaded using the Hugging Face `datasets` library as follows: ```python from datasets import load_dataset dataset = load_dataset('wikipunk/yago45en', num_proc=4, split='train') ``` ``` python # Accessing the first row of the dataset first_row = dataset[0] # Output: {'subject': '<http://yago-knowledge.org/resource/Sdsscgb_11322_U002E_4_Q85387516>', # 'predicate': '<http://www.w3.org/2000/01/rdf-schema#comment>', # 'object': '"galaxy"@en'} ``` ## Additional Information ### Licensing The YAGO 4.5 dataset is available under the [Creative Commons Attribution-ShareAlike 3.0 license](https://creativecommons.org/licenses/by-sa/3.0/). ### Citation If you use the YAGO 4.5 dataset in your work, please cite the following publication: ```bibtex @article{suchanek2023integrating, title={Integrating the Wikidata Taxonomy into YAGO}, author={Suchanek, Fabian M and Alam, Mehwish and Bonald, Thomas and Paris, Pierre-Henri and Soria, Jules}, journal={arXiv preprint arXiv:2308.11884}, year={2023} } ```
HiTZ/This-is-not-a-dataset
---
license: apache-2.0
dataset_info:
  features:
  - name: pattern_id
    dtype: int64
  - name: pattern
    dtype: string
  - name: test_id
    dtype: int64
  - name: negation_type
    dtype: string
  - name: semantic_type
    dtype: string
  - name: syntactic_scope
    dtype: string
  - name: isDistractor
    dtype: bool
  - name: label
    dtype: bool
  - name: sentence
    dtype: string
  splits:
  - name: train
    num_bytes: 41264658
    num_examples: 268505
  - name: validation
    num_bytes: 3056321
    num_examples: 22514
  - name: test
    num_bytes: 12684749
    num_examples: 90281
  download_size: 6311034
  dataset_size: 57005728
task_categories:
- text-classification
language:
- en
tags:
- commonsense
- negation
- LLMs
- LLM
pretty_name: This is NOT a Dataset
size_categories:
- 100K<n<1M
multilinguality:
- monolingual
source_datasets:
- original
paperswithcode_id: this-is-not-a-dataset
---

<p align="center">
    <img src="https://github.com/hitz-zentroa/This-is-not-a-Dataset/raw/main/assets/tittle.png" style="height: 250px;">
</p>

<h3 align="center">"A Large Negation Benchmark to Challenge Large Language Models"</h3>

<p align="justify">
We introduce a large, semi-automatically generated dataset of ~400,000 descriptive sentences about commonsense knowledge that can be true or false, in which negation is present in about 2/3 of the corpus in different forms, and which we use to evaluate LLMs.
</p>

- 📖 Paper: [This is not a Dataset: A Large Negation Benchmark to Challenge Large Language Models (EMNLP'23)](http://arxiv.org/abs/2310.15941)
- 💻 Baseline Code and the Official Scorer: [https://github.com/hitz-zentroa/This-is-not-a-Dataset](https://github.com/hitz-zentroa/This-is-not-a-Dataset)

<p align="center">
    <img src="https://github.com/hitz-zentroa/This-is-not-a-Dataset/blob/main/assets/example.png?raw=true" style="height: 450px;">
</p>

# Data explanation

- **pattern_id** (int): The ID of the pattern, in the range [1, 11]
- **pattern** (str): The name of the pattern
- **test_id** (int): For each pattern, we use a set of templates to instantiate the triples. Examples are grouped into triples by test ID
- **negation_type** (str): Affirmation, verbal, non-verbal
- **semantic_type** (str): None (for affirmative sentences), analytic, synthetic
- **syntactic_scope** (str): None (for affirmative sentences), clausal, subclausal
- **isDistractor** (bool): We use distractors (randomly selected synsets) to generate false knowledge.
- **<span style="color:green">sentence</span>** (str): The sentence. <ins>This is the input of the model</ins>
- **<span style="color:green">label</span>** (bool): The label of the example, True if the statement is true, False otherwise. <ins>This is the target of the model</ins>

If you want to run experiments with this dataset, please use the [Official Scorer](https://github.com/hitz-zentroa/This-is-not-a-Dataset#scorer) to ensure reproducibility and fairness.
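For quick exploration outside the official scorer, a minimal loading sketch with the `datasets` library (the field names and splits follow the schema above):

```python
from datasets import load_dataset

# Load the test split of the benchmark.
dataset = load_dataset("HiTZ/This-is-not-a-dataset", split="test")

# Separate true statements from false ones using the boolean label.
true_statements = dataset.filter(lambda example: example["label"])
false_statements = dataset.filter(lambda example: not example["label"])

print(len(true_statements), "true /", len(false_statements), "false")
print(true_statements[0]["sentence"])
```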
# Citation ```bibtex @inproceedings{garcia-ferrero-etal-2023-dataset, title = "This is not a Dataset: A Large Negation Benchmark to Challenge Large Language Models", author = "Garc{\'\i}a-Ferrero, Iker and Altuna, Bego{\~n}a and Alvez, Javier and Gonzalez-Dios, Itziar and Rigau, German", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.531", doi = "10.18653/v1/2023.emnlp-main.531", pages = "8596--8615", abstract = "Although large language models (LLMs) have apparently acquired a certain level of grammatical knowledge and the ability to make generalizations, they fail to interpret negation, a crucial step in Natural Language Processing. We try to clarify the reasons for the sub-optimal performance of LLMs understanding negation. We introduce a large semi-automatically generated dataset of circa 400,000 descriptive sentences about commonsense knowledge that can be true or false in which negation is present in about 2/3 of the corpus in different forms. We have used our dataset with the largest available open LLMs in a zero-shot approach to grasp their generalization and inference capability and we have also fine-tuned some of the models to assess whether the understanding of negation can be trained. Our findings show that, while LLMs are proficient at classifying affirmative sentences, they struggle with negative sentences and lack a deep understanding of negation, often relying on superficial cues. Although fine-tuning the models on negative sentences improves their performance, the lack of generalization in handling negation is persistent, highlighting the ongoing challenges of LLMs regarding negation understanding and generalization. The dataset and code are publicly available.", } ```
UBC-NLP/sparrow
---
viewer: false
language:
- ace
- amh
- ara
- arq
- ary
- bam
- ban
- bbc
- ben
- bjn
- bos
- bug
- bul
- ces
- dan
- deu
- ell
- eng
- fas
- fil
- fin
- fre
- hau
- heb
- hin
- hrv
- hun
- ibo
- ind
- ita
- jav
- jpn
- kan
- kin
- kor
- mad
- mal
- mar
- min
- mlt
- nij
- nor
- pcm
- pol
- por
- ron
- rus
- slk
- slv
- spa
- sqi
- srp
- sun
- swe
- swh
- tam
- tel
- tha
- tso
- tur
- twi
- vie
- yor
- zho
tags:
- Anti-Social
- Emotion Recognition
- Humor Detection
- Irony
- Sarcasm
- Sentiment Analysis
- Subjectivity Analysis
- hate speech detection
- offensive language detection
task_categories:
- text-classification
extra_gated_fields:
  Full Name: text
  Official Email Address: text
  Affiliation: text
  Country: text
  I agree to ONLY use this dataset for non-commercial purposes: checkbox
  I agree to cite the SPARROW paper and all original papers: checkbox
---

<p align="center">
    <br>
    <img src="https://sparrow.dlnlp.ai/img/sparrow_main2.jpg" width="70%"/>
    <br>
</p>

<p align="center">
<!-- <a href="https://github.com/UBC-NLP/sparraw/releases"> -->
    <!-- <img alt="GitHub release" src="https://img.shields.io/github/release/UBC-NLP/sparraw.svg">
  </a>-->
<a href="https://sparrow.dlnlp.ai/">
        <img alt="Documentation" src="https://img.shields.io/website.svg?down_color=red&down_message=offline&up_message=online&url=https://sparrow.dlnlp.ai">
    </a>
</p>

In this work, we introduce [**SPARROW**](https://arxiv.org/abs/2310.14557), an evaluation benchmark for sociopragmatic meaning understanding. SPARROW comprises 169 datasets covering 13 task types across six primary categories (e.g., anti-social language detection, emotion recognition). SPARROW datasets encompass 64 different languages originating from 12 language families representing 16 writing scripts.

# How to Use SPARROW

### Request Access ###
To obtain access to the SPARROW benchmark on Hugging Face, follow these steps:
- Log in to your Hugging Face account
<img src="https://sparrow.dlnlp.ai/img/hf_login_request.png" width="70%"/>
- Request access
  * Please fill in your actual full name and affiliation (e.g., the name of your research institute).
  * Please use your official email address if it is available.
<img src="https://sparrow.dlnlp.ai/img/sparrow_request.png" width="70%"/>

## Install Requirements
```shell
pip install datasets transformers seqeval
```

### Log in with the Hugging Face CLI ###
You can get and manage your access tokens in your [settings](https://huggingface.co/docs/hub/security-tokens).
```shell
export HUGGINGFACE_TOKEN=""
huggingface-cli login --token $HUGGINGFACE_TOKEN
```

## Submitting your results on the SPARROW test sets
We design a public leaderboard for scoring PLMs on SPARROW. Our leaderboard is interactive and offers rich metadata about the various datasets involved, as well as the language models we evaluate.
You can evaluate your models using the **SPARROW** leaderboard: **[https://sparrow.dlnlp.ai](https://sparrow.dlnlp.ai)**

---

## Citation
If you use SPARROW for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows:

```bibtex
@inproceedings{zhang-etal-2023-skipped,
    title = "The Skipped Beat: A Study of Sociopragmatic Understanding in LLMs for 64 Languages",
    author = "Zhang, Chiyu and Doan, Khai Duy and Liao, Qisheng and Abdul-Mageed, Muhammad",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    year = "2023",
    publisher = "Association for Computational Linguistics",
}
```

---

## Acknowledgments
We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, the Canadian Foundation for Innovation, [ComputeCanada](https://www.computecanada.ca) and [UBC ARC-Sockeye](https://doi.org/10.14288/SOCKEYE).
TongjiFinLab/CFBenchmark
---
license: apache-2.0
task_categories:
- text-classification
- summarization
- text-generation
language:
- en
size_categories:
- 1K<n<10K
---

<div align="center">
  <!-- <img src="https://big-cheng.com/k2/k2.png" alt="k2-logo" width="200"/> -->
  <h2>📈 CFBenchmark: Chinese Financial Assistant with Large Language Model</h2>
</div>

<div align="center" style="display: flex; justify-content: center">
    <a href='https://arxiv.org/abs/2311.05812' style="margin-left:15px"><img src='https://img.shields.io/badge/Paper-ArXiv-C71585'></a>
    <a href='https://huggingface.co/datasets/TongjiFinLab/CFBenchmark' style="margin-left:15px"><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging Face-CFBenchmark-red'></a>
    <a href='' style="margin-left:15px"><img src='https://img.shields.io/badge/License-Apache--2.0-blue.svg'></a>
</div>

English | [简体中文](README-CN.md)

# Introduction

Welcome to **CFBenchmark**!

In recent years, with the rapid development of Large Language Models (LLMs), existing LLMs have achieved outstanding performance on a wide variety of tasks. However, we notice that there are currently few benchmarks focused on assessing the performance of LLMs in specific domains.

In this work, we introduce CFBenchmark, a Chinese financial assistant benchmark for large language models. The basic version of CFBenchmark includes 3917 financial texts spanning three aspects and eight tasks. CFBenchmark is organized into three aspects: financial recognition, financial classification, and financial generation. We conduct experiments in zero-shot and few-shot mode on our CFBenchmark to evaluate renowned LLMs available on the market. The results reveal that there is still significant room to improve LLMs on basic tasks of financial text processing. In the future, we will continue to contribute more benchmark evaluations in this series.

<div align="center">
  <img src="imgs/Framework.png" width="100%"/>
  <br />
  <br /></div>

# Contents

- [CFBenchmark-Basic](#cfbenchmark-basic)
- [QuickStart](#QuickStart)
- [Performance of Existing LLMs](#performance-of-existing-llms)
- [Acknowledgements](#acknowledgements)
- [To-Do](#to-do)
- [License](#license)
- [Citation](#citation)

# CFBenchmark-Basic

CFBenchmark-Basic includes 3917 financial texts spanning three aspects and eight tasks, organized into three aspects: financial recognition, financial classification, and financial generation.

- Recognition-Company: Recognize the company names associated with financial documents (273).
- Recognition-Product: Recognize the product names associated with financial documents (297).
- Classification-Sentiment: Classify the sentiment associated with financial documents (591).
- Classification-Event: Classify the event categories associated with financial documents (577).
- Classification-Industry: Classify the industry categories associated with financial documents (402).
- Generation-Suggestion: Generate investment suggestions based on the provided financial document (593).
- Generation-Risk: Generate risk alerts based on the provided financial document (591).
- Generation-Summary: Generate a content summary based on the provided financial document (593).

We provide two examples to show how the few-shot setting and the zero-shot setting work during evaluation.
Example 1: Few-shot input:

<div align="center">
  <img src="imgs/fewshot.png" width="100%"/>
  <br />
  <br /></div>

Example 2: Zero-shot input:

<div align="center">
  <img src="imgs/zeroshot.png" width="100%"/>
  <br />
  <br /></div>

## QuickStart

### Installation

Below are the steps for quick installation.

```sh
conda create --name CFBenchmark python=3.10
conda activate CFBenchmark
```

```sh
git clone https://github.com/TongjiFinLab/CFBenchmark
cd CFBenchmark
pip install -r requirements.txt
```

### Dataset Preparation

Download the dataset from Hugging Face (**manual download**) and unzip it. Run the following commands in the CFBenchmark project directory to prepare the dataset in the `CFBenchmark/CFBenchmark` directory.

```text
wget https://huggingface.co/TongjiFinLab/CFBenchmark
unzip CFBenchmark.zip
```

### Evaluation

We have prepared the testing and evaluation code for you in the repo directory ```/codes```.

To begin the evaluation, you can run the following from the command line:

```cmd
cd CFBenchmark/codes
python run.py
```

You can edit ```codes/run.py``` to modify its parameters so that the paths match your setup.

```py
from CFBenchmark import CFBenchmark

if __name__ == '__main__':

    # EXPERIMENT SETUP
    modelname = 'YOUR-MODEL-NAME'
    model_type = 'NORMAL'  # NORMAL or LoRA
    model_path = 'YOUR-MODEL-PATH'
    peft_model_path = ''  # PASS YOUR OWN PATH OF PEFT MODEL IF NEEDED
    fewshot_text_path = '../fewshot'  # DEFAULT PATH
    test_type = 'few-shot'  # LET'S TAKE THE FEW-SHOT TEST AS AN EXAMPLE
    response_path = '../cfbenchmark-response'  # PATH TO SAVE THE RESPONSES OF YOUR MODEL
    scores_path = '../cfbenchmark-scores'  # PATH TO SAVE THE SCORES OF YOUR MODEL
    embedding_model_path = '../bge-zh-v1.5'  # PASS YOUR OWN PATH OF BGE-ZH-V1.5
    benchmark_path = '../cfbenchmark'  # DEFAULT PATH
    data_source_type = 'offline'  # online or offline

    # Generate the CFBenchmark evaluator
    cfb = CFBenchmark(
        model_name=modelname,
        model_type=model_type,
        model_path=model_path,
        peft_model_path=peft_model_path,
        fewshot_text_path=fewshot_text_path,
        test_type=test_type,
        response_path=response_path,
        scores_path=scores_path,
        embedding_model_path=embedding_model_path,
        benchmark_path=benchmark_path,
        data_source_type=data_source_type
    )

    cfb.generate_model()  # TO GET RESPONSES FROM YOUR MODEL
    cfb.get_test_scores()  # TO GET YOUR MODEL'S SCORES FROM ITS RESPONSES
```

We defined a class ```CFBenchmark``` to perform the evaluation.

```py
class CFBenchmark:
    def __init__(self,
                 model_name,
                 model_type,
                 model_path,
                 peft_model_path,
                 fewshot_text_path,
                 test_type,
                 response_path,
                 scores_path,
                 embedding_model_path,
                 benchmark_path,
                 data_source_type
                 ) -> None:
```

* You can use the arguments to set the paths for your models. If you want to use a LoRA fine-tuned model, set `model_type` to `LoRA` and pass your PEFT model path through `peft_model_path`.
* You can set `test_type` to 'zero-shot' or 'few-shot' to run different evaluations.
* `embedding_model_path` is set to the path of `bge-zh-v1.5`, which is used for calculating cosine similarity.
* You can modify the hyperparameters in `CFBenchmark.generate_model()` for text generation.
* We provide CFBenchmark saved as a `Dataset` data type on both Hugging Face and GitHub. If you want to use an offline copy of the benchmark, set `data_source_type` to `offline`. If you want to use the online version of the benchmark, set `data_source_type` to `online`.

## Performance of Existing LLMs

We utilize two types of metrics to evaluate the performance of LLMs in the financial domain on our CFBenchmark.
For recognition and classification tasks, we employ the **F1 score** as the evaluation metric, which balances precision and recall. For the generation tasks, we utilize **cosine similarity** between the vector representations of the ground truth and the generated answer to measure generation ability. Since there are usually different expressions with similar meanings in our generation tasks, simply employing ROUGE or BLEU scores is not reasonable. Specifically, **bge-zh-v1.5** is assigned as the oracle model to generate the sentence embeddings. We calculate evaluation scores for each sub-task individually and provide the average score for each category.

The best scores of LLMs (considering both zero-shot and few-shot settings), as well as those of our own models, are shown below:

| Model | Size | Company | Product | R.Avg | Sector | Event | Sentiment | C.Avg | Summary | Risk | Suggestion | G.Avg | Avg |
| ------------------ | ---- | --------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- | ---------- | --------- | --------- |
| HUMAN | - | 0.931 | 0.744 | 0.838 | 0.975 | 0.939 | 0.912 | 0.942 | 1.000 | 1.000 | 1.000 | 1.000 | 0.927 |
| ChatGPT | 20B | 0.797 | 0.198 | 0.498 | 0.453 | 0.458 | 0.425 | 0.455 | 0.593 | 0.541 | 0.771 | 0.635 | 0.529 |
| ERNIE-Bot | 260B | 0.807 | 0.300 | 0.533 | 0.408 | 0.350 | 0.186 | 0.315 | 0.715 | 0.590 | 0.716 | 0.673 | 0.507 |
| ERNIE-Bot-4 | - | 0.819 | 0.417 | 0.618 | 0.418 | 0.358 | 0.375 | 0.384 | 0.721 | 0.629 | 0.718 | 0.689 | 0.564 |
| Falcon-7B | 7B | 0.671 | 0.168 | 0.420 | 0.169 | 0.132 | 0.250 | 0.184 | 0.302 | 0.301 | 0.246 | 0.283 | 0.296 |
| Falcon-7B-chat | 7B | 0.582 | 0.046 | 0.314 | 0.112 | 0.142 | 0.153 | 0.135 | 0.307 | 0.299 | 0.258 | 0.288 | 0.246 |
| bloomz-7B1 | 7B | 0.765 | 0.166 | 0.465 | 0.252 | 0.154 | 0.394 | 0.267 | 0.451 | 0.371 | 0.462 | 0.428 | 0.387 |
| bloomz-7Bt1-mt | 7B | 0.751 | 0.157 | 0.454 | 0.087 | 0.182 | 0.380 | 0.216 | 0.425 | 0.379 | 0.396 | 0.400 | 0.357 |
| Qwen-7B | 7B | 0.780 | 0.357 | 0.569 | 0.480 | 0.335 | 0.379 | 0.398 | 0.750 | 0.505 | 0.713 | 0.656 | 0.541 |
| Qwen-Chat-7B | 7B | 0.763 | 0.360 | 0.562 | 0.400 | 0.367 | 0.265 | 0.344 | 0.548 | 0.307 | 0.379 | 0.411 | 0.439 |
| Qwen-14B | 14B | 0.805 | 0.421 | 0.613 | 0.481 | 0.350 | 0.385 | 0.405 | 0.754 | 0.608 | 0.717 | 0.693 | 0.570 |
| Qwen-Chat-14B | 14B | 0.814 | 0.442 | 0.628 | 0.382 | 0.400 | 0.350 | 0.377 | 0.732 | 0.478 | 0.736 | 0.649 | 0.551 |
| ChatGLM2-6B | 6B | 0.747 | 0.313 | 0.530 | 0.285 | 0.300 | 0.357 | 0.314 | 0.657 | 0.454 | 0.671 | 0.594 | 0.479 |
| Baichuan2-7B-Base | 7B | 0.672 | 0.340 | 0.506 | 0.342 | 0.490 | 0.480 | 0.437 | 0.739 | 0.619 | 0.751 | 0.703 | 0.549 |
| Baichuan2-7B-Chat | 7B | 0.757 | 0.402 | 0.579 | 0.425 | 0.475 | 0.323 | 0.408 | 0.725 | 0.648 | 0.732 | 0.702 | 0.563 |
| Baichuan2-13B-Base | 13B | 0.781 | 0.330 | 0.555 | 0.436 | 0.496 | 0.477 | 0.470 | 0.725 | 0.503 | 0.747 | 0.658 | 0.561 |
| Baichuan2-13B-Chat | 13B | 0.797 | 0.314 | 0.556 | 0.472 | 0.507 | 0.387 | 0.455 | 0.739 | 0.634 | 0.746 | 0.706 | 0.572 |
| InternLM-7B | 7B | 0.612 | 0.233 | 0.423 | 0.266 | 0.311 | 0.328 | 0.302 | 0.378 | 0.336 | 0.379 | 0.364 | 0.363 |
| InternLM-7B-Chat | 7B | 0.632 | 0.261 | 0.447 | 0.272 | 0.364 | 0.399 | 0.345 | 0.363 | 0.270 | 0.353 | 0.329 | 0.374 |
| InternLM-20B | 20B | 0.809 | 0.358 | 0.583 | 0.500 | 0.427 | 0.417 | 0.448 | 0.706 | 0.653 | 0.728 | 0.695 | 0.575 |
| InternLM-20B-Chat | 20B | 0.488 | 0.362 | 0.425 | 0.323 | 0.327 | 0.370 | 0.340 | 0.706 | 0.578 | 0.762 | 0.662 | 0.476 |
| CFGPT1-sft-LoRA | 7B | 0.820 | 0.414 | 0.617 | 0.569 | 0.729 | 0.769 | 0.689 | 0.745 | 0.584 | 0.609 | 0.646 | 0.650 |
| CFGPT1-sft-Full | 7B | **0.836** | **0.476** | **0.656** | **0.700** | **0.808** | **0.829** | **0.779** | **0.798** | **0.669** | **0.808** | **0.758** | **0.731** |

# Acknowledgements

CFBenchmark has referred to the following open-source projects. We want to express our gratitude and respect to the researchers of these projects.

- tiiuae/falcon LLM series (https://huggingface.co/tiiuae/falcon-7b)
- bigscience/bloomz LLM series (https://huggingface.co/bigscience/bloomz-7b1)
- QwenLM/Qwen LLM series (https://github.com/QwenLM/Qwen)
- THUDM/ChatGLM2-6B (https://github.com/THUDM/ChatGLM2-6B)
- baichuan-inc/Baichuan2 LLM series (https://github.com/baichuan-inc/Baichuan2)
- InternLM/InternLM LLM series (https://github.com/InternLM/InternLM)
- ssymmetry/BBT-FinCUGE-Applications (https://github.com/ssymmetry/BBT-FinCUGE-Applications)
- chancefocus/PIXIU (https://github.com/chancefocus/PIXIU)
- SUFE-AIFLM-Lab/FinEval (https://github.com/SUFE-AIFLM-Lab/FinEval)

# To-Do

- CFBenchmark-Advanced:
  - In various scenarios of Chinese financial usage, propose more evaluation tasks to enrich the CFBenchmark series.

# License

CFBenchmark is a research preview intended for non-commercial use only, subject to the Terms of Use of the data generated by OpenAI. Please contact us if you find any potential violations. The code is released under the Apache License 2.0.

# Citation

```bibtex
@misc{lei2023cfbenchmark,
      title={{CFBenchmark}: Chinese Financial Assistant Benchmark for Large Language Model},
      author={Lei, Yang and Li, Jiangtong and Jiang, Ming and Hu, Junjie and Cheng, Dawei and Ding, Zhijun and Jiang, Changjun},
      year={2023},
      eprint={2311.05812},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
HackerNoon/tech-company-news-data-dump
--- license: mit task_categories: - text-classification - summarization language: - en size_categories: - 1M<n<10M tags: - news - technology news - company news - tech company news - tech news - technology company news - tech company blogs - technology company blogs - hackernoon - hacker noon - news curation - tech news curation - tech company news curation - technology company news curation - tech blog curation - technology blog curation - brave search api - bing news api - hackernoon api - hacker noon api - tech company news api - technology company news api --- [HackerNoon](https://hackernoon.com) curated the internet's most cited 7M+ tech company news articles and blog posts about the 3k+ most valuable tech companies in 2022 and 2023. These stories were curated to power [HackerNoon.com/Companies](https://hackernoon.com/companies), where we update daily news on top technology companies like [Microsoft](https://hackernoon.com/company/microsoft), [Google](https://hackernoon.com/company/google), and [HuggingFace](https://hackernoon.com/company/huggingface). Please use this news data freely for your project, and as always anyone is welcome to [publish on HackerNoon](https://hackernoon.com/p/publish).
lemonilia/LimaRP
---
license: apache-2.0
language:
- en
tags:
- not-for-all-audiences
size_categories:
- 1K<n<10K
task_categories:
- conversational
- summarization
pretty_name: LimaRP
---

# LIMA ERP data (LimaRP)

Following the principles highlighted in arXiv:2305.11206 by Zhou et al. and replicated in some aspects by Kaiokendev with [SuperHOT](https://kaiokendev.github.io/til#lima-works), the archive in this repository contains about **2000** manually selected and curated 1-on-1 human-human roleplaying conversations and associated LLM-generated persona and scenario data. The RP conversations all feature only two human participants, although occasionally the participants may play the role of more than one character.

The conversation data takes the form of source files in .yaml format plus a basic Python script for building the dataset, and is intended to be finetuned in "completion" format (similar to unsupervised finetuning).

Having reached the minimum number of examples suggested in the LIMA paper, and after putting probably more than 500 hours of work overall into manually gathering and curating the data, LimaRP can be considered a finished project at this point in time. Future work (cleaning, trimming, expansion) would require more resources and community help.

### Notes
- **Be aware that although retrieved solely from age-restricted (18+) internet forums, the data contains roleplaying elements and topics that may be considered extreme, distasteful, shocking, inappropriate and disturbing. *Do not* download it if you're not sure of the legal ramifications of possessing fictional _written_ content of _any_ kind in your country.**
- The first ~500 samples were designed to be trained with a 2048-token context size; the following 500 with a 4096-token context size or greater. The later training samples (`data-long`) were designed for an 8192-token context size. Note that while the 8k samples can be reduced to 4k size, this can confuse the model to some extent, as scenario and persona data may end up referring to events removed from the context.
- Please note that **no support will be provided for the dataset and building script here on HuggingFace.**
- A persona–scenario summarizer/generator made with LimaRP data [also exists](https://huggingface.co/lemonilia/LimaRP-perscengen-v5).
- 🔑 The archive is password-protected. The password is `LimaRP`

## Known issues
LimaRP has a few notable issues, listed here in subjectively decreasing order of severity.

- **Grammar and typos**. Although care has been put into reducing the number of typos and grammatical errors, they are still present to some extent (punctuation issues in particular). Automated AI-based grammar checking with language models like [CoEdit](https://huggingface.co/grammarly/coedit-xl) could be performed, but the results would then have to be manually validated, since these models often tend to correct more than necessary, which can be undesirable in dialogues; the validation itself would also add otherwise avoidable manual work. Some data sources (threads) show a larger number of grammatical issues than others, and for those this could be an acceptable tradeoff if they're worth saving.
- **Dullness**. Overall the conversations may feel too polite or even dull in some aspects. This might be due to various reasons, but the main one is probably that most are from generally moderately well-written "vanilla" ERP where people try to be respectful with each other. More _noncon_ and/or _extreme_ content may be needed to reduce the general "politeness" of the conversational data and spice it up.
- **Compiling errors**. While the provided script performs a certain number of validation checks, there may still be instances where, due to human error, utterances have been assigned the wrong label or placeholder names have been assigned to the wrong character. The former issue is more likely to have happened in the first (4k-context) ~1000 training samples (`data-short`). The data needs to be carefully checked to make sure that no issue in this regard exists.
- **Repetitive and inaccurate descriptions**. While conversations are almost entirely human-generated, character information and scenarios exhibit `gpt-4`-isms and can be repetitive, lack depth and miss certain character traits; manual editing will be needed to make them more human-like and responsive to more specialized personality traits and keywords. As a result, LimaRP-generated text may appear to ignore certain character traits. A more powerful personality summarizer, capable of being accurate while generating sufficiently long descriptions, could be conceived for solving this issue.
- **Lack of instructions**. No instruction data whatsoever is present in the dataset. While the initial plan was to focus only on conversations, in retrospect a minimal amount of instruction-oriented roleplay data could be beneficial in making the dataset able to better stand on its own feet, without the need for merging the data with smarter models or mixing it with external instruction datasets.
- **Name biases**. Character names may need to be diversified to remove potentially undesirable bias. In other words, certain names may have ended up getting associated with certain personalities, since they have been used more frequently than others.
- **Lack of diversity**. In general, more focus needs to be put on improving conversation diversity. The total number of conversations may have been excessively padded up, as several long conversations that couldn't fit within the 4k/8k-token targets have been split into multiple ones (on the other hand, Persona and Scenario data was never reused).
- **Poor dataset building script**. The Python script for building the dataset, although working, is not great quality-wise, nor particularly efficient.
- **Possible sources of impersonation**. Several of the conversations in the 8k set feature participants consistently playing the role of _two_ characters at the same time. Character names in these files (which include the suffix `_MULTI` or `_GROUP` in the filename) have been assigned a name with the format `Char1&Char2`. Testing didn't reveal issues with this, but it's something to keep in mind if more severe impersonation problems occur compared to the initial release of LimaRP. Furthermore, in a few conversations additional characters (roleplayed by either of the two users) may also temporarily participate in the story. These have often (but not always) been assigned a `_BAD` tag in the filename.
- **Gender confusion sources**. Some conversations feature "futanari" or "transgender" content. These have been found to confuse small-scale models to a certain extent. All source files have a `content` field, and in most cases it contains keywords like `shemale`, `futa`, `futanari`, `trans`, `transgender` when relevant, to assist filtering.

## Conversation data form
Only one format has been used: **forum/novel-style**.
This includes:

- Quotation marks for dialogues;
- Narration in third person, simple past form, without delimiters;

Other RP styles have been excluded, and messages showing them have been fixed when possible and feasible.

### Format details
- Narration does not have any delimiter.
  - `Jessica looked at Mark with disdain.`
- Dialogues are enclosed with ASCII double quotation marks.
  - `"I say this."`
- Onomatopoeias are enclosed with asterisks.
  - `*thud*`
- Inner thoughts are enclosed with underscores.
  - `_What is he doing?_`
- Non-dialogue quotes are enclosed with two apostrophes on each side (caveat: not all have been converted in this way).
  - `''The Jungle Book''`
- Punctuation has been normalized. Fancy quotes have been converted to their ASCII equivalents, ellipses have always been turned into a standard format (`...` with a trailing space when a word follows) and em-dashes have always been converted to three consecutive dashes (`---`) without any surrounding space.
  - For stylistic reasons, when building the dataset em-dash surrogates get converted to their UTF-8 symbol (`—`).
- Placeholder names have been used for the characters, even within the messages, whenever possible. `<FIRST>` is always assumed to be the bot/model, and `<SECOND>` always assumed to be the human/user. All conversations terminate with a message by `<FIRST>`.
  - When building the dataset, placeholder names currently get converted to the ones actually used in the RP conversations.

## Data sources

Weights are naively calculated in terms of bytes for the entire conversation files as of 2023-11-10.

Source|Notes|Weight
-----|-----|-----:
All The Fallen|Registration required|5.1%
Black Dahlia Roleplaying|Registration required, 18+ characters only|0.9%
Blue Moon Roleplaying|Mostly open-access, Lolisho forbidden|18.4%
Darknest Fantasy|Registration required, 18+ characters only|0.2%
Eka's Portal|Open-access|1.6%
Elliquiy|Approval required, Lolisho forbidden|50.8%
Lolicit|Registration required, Defunct website|10.5%
Redlight Ponyville|Approval required|0.6%
The Inner Sanctum|Registration required, 18+ characters only|11.8%

Note that users are required to be 18+ to write in the listed ERP forums or forum subsections.

Usernames, OOC and other personal information have **not** been included in the training data, only the names of the roleplayed characters as used in the conversations (or sometimes with minor changes).

## Some lessons learned while making LimaRP
- There is indeed no real need for a large amount of data to give models a strong bias towards producing roleplaying text with good reliability in respecting user/bot turns. The 2000 training rows could even be trimmed in number with likely similar results.
- Incidentally, roughly 2000 training examples are also about the limit that can still be finetuned in reasonable amounts of time on a single 24GB consumer GPU at a decent context length (in the case of 13B models).
- Data quality beats quantity, but ensuring good quality is very difficult without the help of unrestricted powerful LLMs and/or extensive human intervention.
- Remaining focused on a strict set of rules with a clear goal, and consistently adding a relatively small number of training examples on a daily basis, were driving forces for completing the dataset.
- In retrospect, the RP conversation processing pipeline could have been improved and better planned, or made more efficient.
- Finetuning on the dataset in "completion" mode (on the entire chat) may have been part of the reason why it was so effective in altering model outputs, whether finetuned from scratch or applied as a LoRA.
- The initial GPT-3.5/4 "seed" data for generating Persona and Scenario was crucial in the initial stages of the project, but once a dedicated summarizer was trained from that data, access to more powerful LLMs became for the most part unnecessary.
- Compiling and verifying the data manually took much more work than anticipated. This should have been a group effort.
- In the end, advances in base model performance, increases in context size (so far, from the 2k tokens of the original LLaMA to the 8k tokens or more of Mistral and other recent models) and improvements in ICL (in-context learning) capabilities may eventually render finetuning on datasets like LimaRP unnecessary, except for zero-shot RP performance or cases where models have a strong built-in alignment preventing ERP.

---

# Appendix

## Additional points of improvement
Ideas in random order that could be applied for improving the dataset. Some have already been mentioned earlier.

- Recheck old conversations and consolidate them if short
- Carefully watch for wrongly assigned character names and labels
- Rewrite scenario and personality to be more human-like and diverse
- Include the instruction at various locations in the conversation (e.g. at the end), or multiple times
  - May be performed just for "Scenario" to make the model learn to change the direction of the conversation
- Classify message "safety" (sexual/non-sexual, etc.)
  - Perhaps using some external LLM and over a continuous range rather than just two discrete classifications
- Add more "exaggerated" personalities
  - Anime/Japanese character tropes, etc.
- Include other information during finetuning
  - Inferred character motivations, psychology, notes, etc.
  - Could also include summarized (word-list) traits in "Personality", making Persona a general "Description"
  - These could go in the system instruction in a way to boost the model's reasoning capabilities.
  - Chain-of-Thought-like indications in long-form descriptions: "Since the character is/does X, he is Y", etc.
- Add a focused RP-oriented small instruction dataset
  - Low number of turns on very specific instructions
  - Greentext, logic, puzzles, etc.
- Truly randomized, gender-neutral names in the training data
  - Could avoid name bias
  - Could be implemented as an "Alternative" name field in the source files
  - An easier alternative would be simply using standardized names like USER and CHAR; this was never tried with the full dataset.
- Use message-length hints in `tokens/10` when finetuning
  - Could be more precise and easier to use than fuzzy lengths
- Include human training data from books in the form of "Questions and Answers" or interviews
- Make greater use of what Markdown formatting offers, while maintaining the novel-style narration
umarbutler/open-australian-legal-embeddings
--- language: - en license: other license_name: open-australian-legal-corpus license_link: https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus/blob/main/LICENCE.md tags: - law - legal - australia - embeddings annotations_creators: - no-annotation language_creators: - found language_details: en-AU, en-GB pretty_name: Open Australian Legal Embeddings size_categories: - 1M<n<10M source_datasets: - umarbutler/open-australian-legal-corpus task_categories: - text-retrieval task_ids: - document-retrieval viewer: true dataset_info: features: - name: version_id dtype: string - name: type dtype: string - name: jurisdiction dtype: string - name: source dtype: string - name: citation dtype: string - name: url dtype: string - name: is_last_chunk dtype: bool - name: text dtype: string - name: embedding list: float32 config_name: train splits: - name: train num_bytes: 28500857221 num_examples: 5208238 download_size: 45586801753 dataset_size: 28500857221 --- <!-- To update the above `dataset_info` section, please run the following command: `datasets-cli test open_australian_legal_embeddings.py --save_info --all_configs`. --> # **Open Australian Legal Embeddings ‍⚖️** <a href="https://huggingface.co/datasets/umarbutler/open-australian-legal-embeddings" alt="Release"><img src="https://img.shields.io/badge/release-v1.0.0-green"></a> The Open Australian Legal Embeddings are the first open-source embeddings of Australian legislative and judicial documents. Trained on the largest open database of Australian law, the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus), the Embeddings consist of roughly 5.2 million 384-dimensional vectors embedded with [`BAAI/bge-small-en-v1.5`](https://huggingface.co/BAAI/bge-small-en-v1.5). The Embeddings open the door to a wide range of possibilities in the field of Australian legal AI, including the development of document classifiers, search engines and chatbots. To ensure their accessibility to as wide an audience as possible, the Embeddings are distributed under the same licence as the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus/blob/main/LICENCE.md). ## Usage 👩‍💻 The below code snippet illustrates how the Embeddings may be loaded and queried via the [Hugging Face Datasets](https://huggingface.co/docs/datasets/index) Python library: ```python import itertools import sklearn.metrics.pairwise from datasets import load_dataset from sentence_transformers import SentenceTransformer model = SentenceTransformer('BAAI/bge-small-en-v1.5') instruction = 'Represent this sentence for searching relevant passages: ' # Load the embeddings. oale = load_dataset('open_australian_legal_embeddings.py', split='train') # Sample the first 100,000 embeddings. sample = list(itertools.islice(oale, 100000)) # Embed a query. query = model.encode(instruction + 'Who is the Governor-General of Australia?', normalize_embeddings=True) # Identify the most similar embedding to the query. similarities = sklearn.metrics.pairwise.cosine_similarity([query], [embedding['embedding'] for embedding in sample]) most_similar_index = similarities.argmax() most_similar = sample[most_similar_index] # Print the most similar text. print(most_similar['text']) ``` To speed up the loading of the Embeddings, you may wish to install [`orjson`](https://github.com/ijl/orjson). 
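If you only need a fraction of the Embeddings, you can avoid downloading them in full by streaming the dataset instead. A minimal sketch (assuming the dataset's loading script supports streaming and a recent version of 🤗 Datasets):

```python
from datasets import load_dataset

# Stream the Embeddings instead of downloading all of them up front.
oale = load_dataset('open_australian_legal_embeddings.py', split='train', streaming=True)

# Inspect the first few rows as they arrive.
for row in oale.take(5):
    print(row['citation'], len(row['embedding']))
```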
## Structure 🗂️ The Embeddings are stored in [`data/embeddings.jsonl`](https://huggingface.co/datasets/umarbutler/open-australian-legal-embeddings/blob/main/data/embeddings.jsonl), a json lines file where each line is a list of 384 32-bit floating point numbers. Associated metadata is stored in [`data/metadatas.jsonl`](https://huggingface.co/datasets/umarbutler/open-australian-legal-embeddings/blob/main/data/metadatas.jsonl) and the corresponding texts are located in [`data/texts.jsonl`](https://huggingface.co/datasets/umarbutler/open-australian-legal-embeddings/blob/main/data/texts.jsonl). The metadata fields are the same as those used for the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus#structure-%F0%9F%97%82%EF%B8%8F), barring the `text` field, which was removed, and with the addition of the `is_last_chunk` key, which is a boolean flag for whether a text is the last chunk of a document (used to detect and remove corrupted documents when creating and updating the Embeddings). ## Creation 🧪 All documents in the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus#statistics-%F0%9F%93%8A) were split into semantically meaningful chunks up to 512-tokens-long (as determined by [`bge-small-en-v1.5`](https://huggingface.co/BAAI/bge-small-en-v1.5)'s tokeniser) with the [`semchunk`](https://github.com/umarbutler/semchunk) Python library. These chunks included a header embedding documents' titles, jurisdictions and types in the following format: ```perl Title: {title} Jurisdiction: {jurisdiction} Type: {type} {text} ``` When embedded into the above header, the names of jurisdictions were capitalised and stripped of hyphens. The `commonwealth` jurisdiction was also renamed to 'Commonwealth of Australia'. In the cases of types, `primary_legislation` became 'Act', `secondary_legislation` became 'Regulation', `bill` became 'Bill' and `decision` became 'Judgment'. The chunks were then vectorised by [`bge-small-en-v1.5`](https://huggingface.co/BAAI/bge-small-en-v1.5) on a single GeForce RTX 2080 Ti with a batch size of 32 via the [`SentenceTransformers`](https://www.sbert.net/) library. The resulting embeddings were serialised as json-encoded lists of floats by [`orjson`](https://github.com/ijl/orjson) and stored in [`data/embeddings.jsonl`](https://huggingface.co/datasets/umarbutler/open-australian-legal-embeddings/blob/main/data/embeddings.jsonl). The corresponding metadata and texts (with their headers removed) were saved to [`data/metadatas.jsonl`](https://huggingface.co/datasets/umarbutler/open-australian-legal-embeddings/blob/main/data/metadatas.jsonl) and [`data/texts.jsonl`](https://huggingface.co/datasets/umarbutler/open-australian-legal-embeddings/blob/main/data/texts.jsonl), respectively. The code used to create and update the Embeddings may be found [here](https://github.com/umarbutler/open-australian-legal-embeddings-creator). ## Changelog 🔄 All notable changes to the Embeddings are documented in its [Changelog 🔄](https://huggingface.co/datasets/umarbutler/open-australian-legal-embeddings/blob/main/CHANGELOG.md). This project adheres to [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) and [Semantic Versioning](https://semver.org/spec/v2.0.0.html). ## Licence 📜 The Embeddings are distributed under the same licence as the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus/blob/main/LICENCE.md). 
## Citation 🔖 If you've relied on the Embeddings for your work, please cite: ```latex @misc{butler-2023-open-australian-legal-embeddings, author = {Butler, Umar}, year = {2023}, title = {Open Australian Legal Embeddings}, publisher = {Hugging Face}, version = {1.0.0}, doi = {10.57967/hf/1347}, url = {https://huggingface.co/datasets/umarbutler/open-australian-legal-embeddings} } ``` ## Acknowledgements 🙏 In the spirit of reconciliation, the author acknowledges the Traditional Custodians of Country throughout Australia and their connections to land, sea and community. He pays his respect to their Elders past and present and extends that respect to all Aboriginal and Torres Strait Islander peoples today. The author thanks the creators of the many Python libraries relied upon in the creation of the Embeddings. Finally, the author is eternally grateful for the endless support of his wife and her willingness to put up with many a late night spent writing code and quashing bugs.
bjoernp/ultrachat_de
---
dataset_info:
  features:
  - name: prompt_id
    dtype: string
  - name: prompt
    dtype: string
  - name: conversations
    list:
    - name: from
      dtype: string
    - name: value
      dtype: string
  - name: len_en
    dtype: int64
  - name: len_de
    dtype: int64
  - name: system_prompt
    dtype: string
  - name: response
    dtype: string
  splits:
  - name: train
    num_bytes: 5676838
    num_examples: 959
  download_size: 3083642
  dataset_size: 5676838
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
language:
- de
---

# German UltraChat

This dataset contains the first 1k prompts from [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) translated to German, with responses generated by GPT-4.
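A minimal loading sketch with the Hugging Face `datasets` library (field names follow the schema above):

```python
from datasets import load_dataset

# Load the single train split.
dataset = load_dataset("bjoernp/ultrachat_de", split="train")

example = dataset[0]
print(example["prompt"])    # German translation of the original UltraChat prompt
print(example["response"])  # response generated by GPT-4
```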
umarbutler/open-australian-legal-qa
--- annotations_creators: - machine-generated language_creators: - machine-generated language: - en license: other size_categories: - 1K<n<10K source_datasets: - umarbutler/open-australian-legal-corpus task_categories: - question-answering - text-generation - text2text-generation task_ids: - closed-domain-qa pretty_name: Open Australian Legal QA license_name: open-australian-legal-corpus license_link: https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus/blob/main/LICENCE.md tags: - law - legal - australia - question-answering - qa - question-answer - text-generation - llm - chatbot - conversational-ai - generative-ai - natural-language-understanding - fine-tuning language_details: en-AU, en-GB viewer: true dataset_info: config_name: train features: - name: question dtype: string - name: answer dtype: string - name: text dtype: string - name: prompt dtype: string - name: source struct: - name: version_id dtype: string - name: type dtype: string - name: jurisdiction dtype: string - name: source dtype: string - name: citation dtype: string - name: url dtype: string - name: text dtype: string splits: - name: train num_bytes: 13243775 num_examples: 2124 download_size: 13538191 dataset_size: 13243775 --- <!-- To update the above `dataset_info` section, please run the following command: `datasets-cli test open_australian_legal_qa.py --save_info --all_configs`. --> # **Open Australian Legal QA ‍⚖️** <a href="https://huggingface.co/datasets/umarbutler/open-australian-legal-qa" alt="Release"><img src="https://img.shields.io/badge/release-v2.0.0-green"></a> Open Australian Legal QA is the first open dataset of Australian legal questions and answers. Comprised of 2,124 questions and answers synthesised by `gpt-4` from the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus), the largest open database of Australian law, the dataset is intended to facilitate the development of legal AI assistants in Australia. To ensure its accessibility to as wide an audience as possible, the dataset is distributed under the same licence as the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus/blob/main/LICENCE.md). ## Usage 👩‍💻 The below code snippet illustrates how the dataset may be loaded with the [Hugging Face Datasets](https://huggingface.co/docs/datasets/index) Python library: ```python from datasets import load_dataset corpus = load_dataset('umarbutler/open_australian_legal_qa', split='train') ``` To speed up the loading of the dataset, you may wish to install [`orjson`](https://github.com/ijl/orjson). ## Structure 🗂️ The dataset is stored in [qa.jsonl](https://huggingface.co/datasets/umarbutler/open-australian-legal-qa/blob/main/qa.jsonl), a json lines file where each line represents a question-answer pair consisting of four keys: | Key | Description | | --- | --- | | question | The text of the question. | | answer | The text of the answer to the question. | | text | The text of the question and answer in the format `Question: {question}\nAnswer: {answer}`. | | prompt | The text of the prompt used to generate the question-answer pair. | | source | A dictionary representing the document from which the question-answer pair was synthesised, sharing the same keys as documents in the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus), with the `text` field constituting the text of the chunk used to generate the pair. 
| ## Methodology 🧪 2,124 documents from the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus) were randomly sampled, barring bills and documents consisting entirely of whitespace. These documents were then split into semantically meaningful chunks up to 384-tokens-long (as determined by [`tiktoken`](https://github.com/openai/tiktoken)'s tokeniser for `gpt-4`) with the [`semchunk`](https://github.com/umarbutler/semchunk) Python library. Chunks that consisted entirely of whitespace, contained 6 or more consecutive periods, ignoring whitespace (indicating that they contained a table of contents) or that were less than 96-tokens-long were discarded. A single chunk was randomly selected from each document (for those documents with a chunk to select) and subsequently cleaned of consecutive newlines, consecutive whitespace and lines consisting entirely of whitespace. These chunks were then embedded into the following prompt, with the names of jurisdictions and types being capitalised and stripped of hyphens: ```xml # Snippet The snippet from an Australian legal document from which you must synthesise a question and answer is provided below. <document_metadata> <document_title><!-- insert citation here --></document_title> <document_jurisdiction><!-- insert jurisdiction here --></document_jurisdiction> <document_type><!-- insert type here --></document_type> </document_metadata> <snippet> <!-- insert text here --> </snippet> # Format You must format your response as follows: <format> # Question {A question related to the snippet, or a topic discussed therein.} # Answer {The answer to the question, extracted from the snippet.} </format> # Instructions You must act as a question-and-answer synthesiser that takes a snippet from an Australian legal document and synthesises a question related to the snippet, or a topic discussed therein, and an answer to that question, extracted from the snippet. Your question must be decontextualised and standalone from the snippet. If the question pertains to a particular jurisdiction or document, it must state that explicitly (eg, 'In Victoria, is it lawful for ...?', 'What did the Court decide in Mabo v Queensland (No 2) [1992] HCA 23?', etc...). Your answer must also be decontextualised and standalone from the snippet. It must reference the document from which it came (eg, 'Under the Crimes Act 1958 (Vic), ...', 'In Mabo v Queensland (No 2) [1992] HCA 23, the Court decided ...', etc...), not the snippet itself. It must be capable of being understood on its own and without reference to the snippet or its source document. When referring to a document (eg, the Crimes Act) or a part thereof (eg, Paragraph 1), or to a person (eg, the Minister), organisation (eg, the Department) or concept (eg, the rule of law), you must refer to it by its full name (eg, the Crimes Act 1958 (Vic) instead of the Crimes Act, Paragraph 1 of ABC v XYZ instead of Paragraph 1, the Commonwealth Minister for Finance instead of the Minister). If it is not possible to synthesise a question and answer from the snippet, you must respond with `<!no_qa!>`. Otherwise, your response must conform to the provided format. 
``` The resulting prompts were then sent to `gpt-4` with the following hyperparameters: | Hyperparameter | Value | | --- | --- | | `temperature` | 0 | | `top_p` | 1 | | `frequency_penalty` | 0 | | `presence_penalty` | 0 | | `max_tokens` | 768 | `gpt-4`'s responses were parsed with the regex pattern `#\s?Question:?\s+((?:\n|.)+)#\s?Answer:?\s+((?:\n|.)+)`, yielding the question-answer pairs. Any malformed responses were discarded. ## Changelog 🔄 All notable changes to the dataset are documented in its [Changelog 🔄](https://huggingface.co/datasets/umarbutler/open-australian-legal-qa/blob/main/CHANGELOG.md). This project adheres to [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) and [Semantic Versioning](https://semver.org/spec/v2.0.0.html). ## Licence 📜 The dataset is distributed under the same licence as the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus/blob/main/LICENCE.md). ## Citation 🔖 If you've relied on the dataset for your work, please cite: ```latex @misc{butler-2023-open-australian-legal-dataset, author = {Butler, Umar}, year = {2023}, title = {Open Australian Legal QA}, publisher = {Hugging Face}, version = {2.0.0}, doi = {10.57967/hf/1479}, url = {https://huggingface.co/datasets/umarbutler/open-australian-legal-qa} } ``` ## Acknowledgements 🙏 In the spirit of reconciliation, the author acknowledges the Traditional Custodians of Country throughout Australia and their connections to land, sea and community. He pays his respect to their Elders past and present and extends that respect to all Aboriginal and Torres Strait Islander peoples today. The author thanks Matthew Altenberg, who gave him the idea of using `gpt-4` to synthesise questions and answers from the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus). The author also acknowledges the creators of the many Python libraries relied upon in the creation of the dataset. Finally, the author is eternally grateful for the endless support of his wife and her willingness to put up with many a late night spent writing code and quashing bugs.
MrBananaHuman/kor_ethical_question_answer
--- license: cc-by-nc-nd-4.0 ---
tomaarsen/ner-orgs
---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-ORG
          '2': I-ORG
  splits:
  - name: train
    num_bytes: 40381520.59961503
    num_examples: 109424
  - name: validation
    num_bytes: 5782294.96333573
    num_examples: 15908
  - name: test
    num_bytes: 10727120.198367199
    num_examples: 28124
  download_size: 14938552
  dataset_size: 56890935.76131796
---

# Dataset Card for "ner-orgs"

This dataset is a concatenation of subsets of [Few-NERD](https://huggingface.co/datasets/DFKI-SLT/few-nerd), [CoNLL 2003](https://huggingface.co/datasets/conll2003) and [OntoNotes v5](https://huggingface.co/datasets/tner/ontonotes5), keeping only the "B-ORG" and "I-ORG" labels. Exactly half of the samples per split contain organisations, while the other half do not contain any.

It was generated using the following script:

```py
import random

from datasets import load_dataset, concatenate_datasets, Features, Sequence, ClassLabel, Value, DatasetDict

FEATURES = Features(
    {
        "tokens": Sequence(feature=Value(dtype="string")),
        "ner_tags": Sequence(feature=ClassLabel(names=["O", "B-ORG", "I-ORG"])),
    }
)


def load_fewnerd():
    def mapper(sample):
        # Coarse tag 5 in Few-NERD is "organization": map it to 1, everything else to 0 (O)
        sample["ner_tags"] = [int(tag == 5) for tag in sample["ner_tags"]]
        # Few-NERD tags spans in an IO scheme; convert to BIO by relabeling every
        # non-initial token of an organisation span as I-ORG (2)
        sample["ner_tags"] = [
            2 if tag == 1 and idx > 0 and sample["ner_tags"][idx - 1] == 1 else tag
            for idx, tag in enumerate(sample["ner_tags"])
        ]
        return sample

    dataset = load_dataset("DFKI-SLT/few-nerd", "supervised")
    dataset = dataset.map(mapper, remove_columns=["id", "fine_ner_tags"])
    dataset = dataset.cast(FEATURES)
    return dataset


def load_conll():
    # In CoNLL 2003, tag 3 is B-ORG and tag 4 is I-ORG; all other tags become O
    label_mapping = {3: 1, 4: 2}

    def mapper(sample):
        sample["ner_tags"] = [label_mapping.get(tag, 0) for tag in sample["ner_tags"]]
        return sample

    dataset = load_dataset("conll2003")
    dataset = dataset.map(mapper, remove_columns=["id", "pos_tags", "chunk_tags"])
    dataset = dataset.cast(FEATURES)
    return dataset


def load_ontonotes():
    # In tner/ontonotes5, tag 11 is B-ORG and tag 12 is I-ORG; all other tags become O
    label_mapping = {11: 1, 12: 2}

    def mapper(sample):
        sample["ner_tags"] = [label_mapping.get(tag, 0) for tag in sample["ner_tags"]]
        return sample

    dataset = load_dataset("tner/ontonotes5")
    dataset = dataset.rename_column("tags", "ner_tags")
    dataset = dataset.map(mapper)
    dataset = dataset.cast(FEATURES)
    return dataset


def has_org(sample):
    # A sample contains an organisation iff any tag is non-zero (B-ORG or I-ORG)
    return bool(sum(sample["ner_tags"]))


def has_no_org(sample):
    return not has_org(sample)


def preprocess_raw_dataset(raw_dataset):
    # Set the number of sentences without an org equal to the number of sentences with an org
    dataset_org = raw_dataset.filter(has_org)
    dataset_no_org = raw_dataset.filter(has_no_org)
    dataset_no_org = dataset_no_org.select(random.sample(range(len(dataset_no_org)), k=len(dataset_org)))
    dataset = concatenate_datasets([dataset_org, dataset_no_org])
    return dataset


def main() -> None:
    fewnerd_dataset = load_fewnerd()
    conll_dataset = load_conll()
    ontonotes_dataset = load_ontonotes()

    raw_train_dataset = concatenate_datasets([fewnerd_dataset["train"], conll_dataset["train"], ontonotes_dataset["train"]])
    raw_eval_dataset = concatenate_datasets([fewnerd_dataset["validation"], conll_dataset["validation"], ontonotes_dataset["validation"]])
    raw_test_dataset = concatenate_datasets([fewnerd_dataset["test"], conll_dataset["test"], ontonotes_dataset["test"]])

    train_dataset = preprocess_raw_dataset(raw_train_dataset)
    eval_dataset = preprocess_raw_dataset(raw_eval_dataset)
    test_dataset = preprocess_raw_dataset(raw_test_dataset)

    dataset_dict = DatasetDict(
        {
            "train": train_dataset,
            "validation": eval_dataset,
            "test": test_dataset,
        }
    )
    dataset_dict.push_to_hub("ner-orgs", private=True)


if __name__ == "__main__":
    main()
```
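As a quick sanity check, the 50/50 balance can be verified by reusing `has_org` and `train_dataset` from the script above (a minimal sketch, not part of the generation script itself):

```py
# Assumes the script above has run up to `train_dataset = preprocess_raw_dataset(...)`
org_fraction = sum(has_org(sample) for sample in train_dataset) / len(train_dataset)
print(f"Training sentences containing an organisation: {org_fraction:.2%}")  # ~50% by construction
```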
lnwang/retrieval_qa
---
language:
- en
- zh
- ja
- es
- de
- ru
license: apache-2.0
size_categories:
- 1K<n<10K
dataset_info:
- config_name: de
  features:
  - name: region
    dtype: string
  - name: doc
    dtype: string
  - name: query
    dtype: string
  - name: choice
    sequence:
      sequence: string
  - name: answer
    dtype: string
  splits:
  - name: test
    num_bytes: 268775
    num_examples: 196
  download_size: 0
  dataset_size: 268775
- config_name: default
  features:
  - name: region
    dtype: string
  - name: doc
    dtype: string
  - name: query
    dtype: string
  - name: choice
    sequence:
      sequence: string
  - name: answer
    dtype: string
  splits:
  - name: test
    num_bytes: 233289
    num_examples: 196
  download_size: 0
  dataset_size: 233289
- config_name: en
  features:
  - name: region
    dtype: string
  - name: doc
    dtype: string
  - name: query
    dtype: string
  - name: choice
    sequence:
      sequence: string
  - name: answer
    dtype: string
  splits:
  - name: test
    num_bytes: 233289
    num_examples: 196
  download_size: 0
  dataset_size: 233289
- config_name: es
  features:
  - name: region
    dtype: string
  - name: doc
    dtype: string
  - name: query
    dtype: string
  - name: choice
    sequence:
      sequence: string
  - name: answer
    dtype: string
  splits:
  - name: test
    num_bytes: 267456
    num_examples: 196
  download_size: 0
  dataset_size: 267456
- config_name: ja
  features:
  - name: region
    dtype: string
  - name: doc
    dtype: string
  - name: query
    dtype: string
  - name: choice
    sequence:
      sequence: string
  - name: answer
    dtype: string
  splits:
  - name: test
    num_bytes: 268010
    num_examples: 196
  download_size: 0
  dataset_size: 268010
- config_name: ru
  features:
  - name: region
    dtype: string
  - name: doc
    dtype: string
  - name: query
    dtype: string
  - name: choice
    sequence:
      sequence: string
  - name: answer
    dtype: string
  splits:
  - name: test
    num_bytes: 413438
    num_examples: 196
  download_size: 191766
  dataset_size: 413438
- config_name: zh_cn
  features:
  - name: region
    dtype: string
  - name: doc
    dtype: string
  - name: query
    dtype: string
  - name: choice
    sequence:
      sequence: string
  - name: answer
    dtype: string
  splits:
  - name: test
    num_bytes: 200707
    num_examples: 196
  download_size: 0
  dataset_size: 200707
- config_name: zh_tw
  features:
  - name: region
    dtype: string
  - name: doc
    dtype: string
  - name: query
    dtype: string
  - name: choice
    sequence:
      sequence: string
  - name: answer
    dtype: string
  splits:
  - name: test
    num_bytes: 201205
    num_examples: 196
  download_size: 0
  dataset_size: 201205
configs:
- config_name: de
  data_files:
  - split: test
    path: de/test-*
- config_name: default
  data_files:
  - split: test
    path: data/test-*
- config_name: en
  data_files:
  - split: test
    path: en/test-*
- config_name: es
  data_files:
  - split: test
    path: es/test-*
- config_name: ja
  data_files:
  - split: test
    path: ja/test-*
- config_name: ru
  data_files:
  - split: test
    path: ru/test-*
- config_name: zh_cn
  data_files:
  - split: test
    path: zh_cn/test-*
- config_name: zh_tw
  data_files:
  - split: test
    path: zh_tw/test-*
tags:
- art
---

# Retrieval_QA: A Simple Multilingual Benchmark For Retrieval Encoder Models

<!-- Provide a quick summary of the dataset. -->

The purpose of this dataset is to provide a simple and easy-to-use benchmark for retrieval encoder models, which helps researchers quickly select the most effective retrieval encoder for text extraction and achieve optimal results in subsequent retrieval tasks, such as retrieval-augmented generation (RAG). The dataset contains multiple document-question pairs, where each document is a short text about the history, culture, or other information of a country or region, and each question is a query relevant to the content of the corresponding document.
## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

Users may select a retrieval encoder model to encode each document and query into corresponding embeddings, and then use vector matching methods such as FAISS to identify the most relevant documents for each query as retrieval results (a minimal sketch is given in the example at the end of this card).

+ **Curated by**: <a href='https://wln20.github.io'>Luning Wang</a>
+ **Language(s)**: English, Chinese (Simplified, Traditional), Japanese, Spanish, German, Russian
+ **License**: Apache-2.0

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Repository:** https://github.com/wln20/Retrieval_QA
- **Paper:** TBD
- **Demo:** TBD

## Uses

The dataset is available on 🤗 Hugging Face, and you can conveniently use it in Python with 🤗 Datasets:

```python
from datasets import load_dataset

dataset_en = load_dataset('lnwang/retrieval_qa', name='en')
# dataset_zh_cn = load_dataset('lnwang/retrieval_qa', name='zh_cn')
# dataset_zh_tw = load_dataset('lnwang/retrieval_qa', name='zh_tw')
```

We currently support seven languages: English (en), Simplified Chinese (zh_cn), Traditional Chinese (zh_tw), Japanese (ja), Spanish (es), German (de), and Russian (ru). You can specify the `name` argument in `load_dataset()` to get the corresponding subset. For more usages, please follow the examples in the GitHub repository of this project.

## Dataset Creation

The raw data was generated by GPT-3.5-turbo, using carefully designed prompts written by humans. The data was also cleaned to remove controversial and incorrect information.
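### Example: Evaluating a Retrieval Encoder with FAISS

The snippet below is a minimal sketch of the intended evaluation loop, pairing a sentence-transformers encoder with a FAISS index. The encoder choice and the top-1 scoring (each query is assumed to pair with the document in its own row) are illustrative assumptions, not part of the dataset:

```python
import faiss
import numpy as np
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

data = load_dataset('lnwang/retrieval_qa', name='en')['test']
encoder = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')  # illustrative choice

# Encode documents and queries; normalized vectors make inner product equal cosine similarity
doc_emb = encoder.encode(data['doc'], normalize_embeddings=True)
query_emb = encoder.encode(data['query'], normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_emb.shape[1])
index.add(np.asarray(doc_emb, dtype='float32'))

# Retrieve the single best document per query and score exact row matches
_, top1 = index.search(np.asarray(query_emb, dtype='float32'), k=1)
accuracy = float((top1[:, 0] == np.arange(len(data))).mean())
print(f'Top-1 retrieval accuracy: {accuracy:.3f}')
```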
AhmadMustafa/Urdu-Instruct-News-Article-Generation
---
language:
- ur
size_categories:
- 100K<n<1M
task_categories:
- text-generation
- text2text-generation
pretty_name: Instruct News Article Generation Urdu
dataset_info:
  features:
  - name: inputs
    dtype: string
  - name: targets
    dtype: string
  - name: template_id
    dtype: int64
  - name: template_lang
    sequence: string
  splits:
  - name: train
    num_bytes: 257489376
    num_examples: 100674
  - name: test
    num_bytes: 28303926
    num_examples: 11187
  download_size: 121675545
  dataset_size: 285793302
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

# Dataset Card for "Urdu-Instruct-News-Article-Generation"

This dataset was converted from the [original dataset](https://data.mendeley.com/datasets/834vsxnb99/3) by Khalid Hussain, Nimra Mughal, Irfan Ali, Saif Hassan, and Sher Muhammad Daudpota.

## Task:

Generate a news article from the given headline.

## Split Size:

- train: 100674
- test: 11187

## Prompt Template (In Urdu):

For each example, one of the following two templates is chosen at random (via `random.choice`). The first template is **template_id** 1 and the second is **template_id** 2 in the dataset.

```
[
  "اس دی گی ایک خبر سے متعلق ایک مضمون لکھیں۔ خبر: {}",
  "یہ خبر جو {} سے تعلق رکھتی ہے، اس پر ایک مضمون لکھیں۔ خبر: {}"
]
```

<b>Translation</b>:

```
1. Write an article from the given news. news: {}
2. Given the news belonging to category {}, write an article on it. news: {}
```

## Completion Template (In Urdu)

```
جی ضرور، یہ رہا آپ کی خبر سے متعلق ایک مضمون: {}
```

<b>Translation</b>:

```
Sure, here is the article related to the given news: {}
```
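For illustration, an `(inputs, targets)` pair could be assembled roughly as follows. This is a hypothetical sketch: the field names `headline`, `category`, and `article` are assumptions based on the templates above, not columns of this dataset.

```python
import random

# Hypothetical reconstruction of the pair assembly; the actual conversion
# script may differ in details.
PROMPT_TEMPLATES = [
    "اس دی گی ایک خبر سے متعلق ایک مضمون لکھیں۔ خبر: {}",  # template_id 1: headline only
    "یہ خبر جو {} سے تعلق رکھتی ہے، اس پر ایک مضمون لکھیں۔ خبر: {}",  # template_id 2: category, then headline
]
COMPLETION_TEMPLATE = "جی ضرور، یہ رہا آپ کی خبر سے متعلق ایک مضمون: {}"


def build_example(headline: str, category: str, article: str) -> dict:
    template_id = random.choice([1, 2])
    if template_id == 1:
        inputs = PROMPT_TEMPLATES[0].format(headline)
    else:
        inputs = PROMPT_TEMPLATES[1].format(category, headline)
    return {
        "inputs": inputs,
        "targets": COMPLETION_TEMPLATE.format(article),
        "template_id": template_id,
    }
```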
styletts2-community/multilingual-phonemes-10k-alpha
---
license: cc-by-sa-3.0
license_name: cc-by-sa
configs:
- config_name: en
  data_files: en.json
  default: true
- config_name: en-xl
  data_files: en-xl.json
- config_name: ca
  data_files: ca.json
- config_name: de
  data_files: de.json
- config_name: es
  data_files: es.json
- config_name: el
  data_files: el.json
- config_name: fa
  data_files: fa.json
- config_name: fi
  data_files: fi.json
- config_name: fr
  data_files: fr.json
- config_name: it
  data_files: it.json
- config_name: pl
  data_files: pl.json
- config_name: pt
  data_files: pt.json
- config_name: ru
  data_files: ru.json
- config_name: sv
  data_files: sv.json
- config_name: uk
  data_files: uk.json
- config_name: zh
  data_files: zh.json
language:
- en
- ca
- de
- es
- el
- fa
- fi
- fr
- it
- pl
- pt
- ru
- sv
- uk
- zh
tags:
- synthetic
---

# Multilingual Phonemes 10K Alpha

This dataset contains approximately 10,000 pairs of text and phonemes for each supported language. With 15 supported languages, that makes a total of ~150K pairs. This count does not include the English-XL split, which adds another ~100K unique rows.

## Languages

The 15 supported languages are listed below. The English-XL split contains ~100K additional phonemized pairs that are not included in any other split.

* English (en)
* English-XL (en-xl): ~100K phonemized pairs, English-only
* Catalan (ca)
* German (de)
* Spanish (es)
* Greek (el)
* Persian (fa): Requested by [@Respair](https://huggingface.co/Respair)
* Finnish (fi)
* French (fr)
* Italian (it)
* Polish (pl)
* Portuguese (pt)
* Russian (ru)
* Swedish (sv)
* Ukrainian (uk)
* Chinese (zh): Thank you to [@eugenepentland](https://huggingface.co/eugenepentland) for assistance in processing this text, as East-Asian languages are the most compute-intensive!

## License + Credits

Source data comes from [Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) and is licensed under CC-BY-SA 3.0. This dataset is licensed under CC-BY-SA 3.0.

## Processing

We used the following process to preprocess the dataset:

1. Download data from [Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) by language, selecting only the first Parquet file and naming it with the language code
2. Process using the [Data Preprocessing Scripts (StyleTTS 2 Community members only)](https://huggingface.co/styletts2-community/data-preprocessing-scripts), modifying the code to work with the language
3. Script: Clean the text
4. Script: Remove ultra-short phrases
5. Script: Phonemize
6. Script: Save JSON
7. Upload the dataset

## Note

East-Asian languages are experimental. We do not distinguish between Traditional and Simplified Chinese; the `zh` split consists mainly of Simplified Chinese. We recommend converting characters to Simplified Chinese during inference, using a library such as `hanziconv` or `chinese-converter`.
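## Usage

Each language is exposed as a separate config, so a subset can be loaded by its config name with 🤗 Datasets (a minimal sketch, assuming the standard `load_dataset` workflow):

```python
from datasets import load_dataset

# Load one language subset by config name ('en', 'en-xl', 'zh', ...)
dataset = load_dataset("styletts2-community/multilingual-phonemes-10k-alpha", "en")
```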